Counting core partitions and numerical semigroups using polytopes
Nam, Hayan
UC Irvine Electronic Theses and Dissertations (2019)
A partition is an $a$-core partition if none of its hook lengths are divisible by $a$. It is well known that the number of $a$-core partitions is infinite and the number of simultaneous $(a, b)$-core partitions is a generalized Catalan number if $a$ and $b$ are relatively prime. In the first half of the dissertation, we give an expression for the number of simultaneous $(a_1,a_2,\dots, a_k)$-core partitions that is equal to the number of integer points in a polytope.
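The hook-length definition lends itself to a direct sanity check of the generalized Catalan count $\frac{1}{a+b}\binom{a+b}{a}$ for coprime $a,b$ (Anderson's theorem). Below is a brute-force Python sketch (our own illustration, not the polytope method of the dissertation), using the Olsson-Stanton bound that an $(a,b)$-core has at most $(a^2-1)(b^2-1)/24$ boxes:

```python
from math import comb

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    if max_part is None:
        max_part = n
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def hook_lengths(lam):
    """Hook lengths of all cells of the Young diagram of lam (0-indexed)."""
    conj = [sum(1 for part in lam if part > j) for j in range(lam[0])] if lam else []
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def is_core(lam, *mods):
    """lam is a simultaneous a-core for every modulus a in mods."""
    return all(h % a != 0 for h in hook_lengths(lam) for a in mods)

def count_ab_cores(a, b):
    """Brute-force count of simultaneous (a,b)-cores, a and b coprime."""
    # an (a,b)-core has at most (a^2-1)(b^2-1)/24 boxes (Olsson-Stanton)
    bound = (a * a - 1) * (b * b - 1) // 24
    return sum(1 for n in range(bound + 1)
                 for lam in partitions(n) if is_core(lam, a, b))

# Anderson: the count is the generalized Catalan number C(a+b, a)/(a+b)
assert count_ab_cores(2, 3) == comb(5, 2) // 5   # 2
assert count_ab_cores(3, 4) == comb(7, 3) // 7   # 5
assert count_ab_cores(3, 5) == comb(8, 3) // 8   # 7
```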
In the second half, we discuss objects closely related to core partitions, called numerical semigroups, which are additive monoids that have finite complements in the set of non-negative integers. For a numerical semigroup $S$, the genus of $S$ is the number of elements in $\NN \setminus S$ and the multiplicity is the smallest nonzero element of $S$. In 2008, Bras-Amor\'os conjectured that the number of numerical semigroups with genus $g$ is increasing as $g$ increases. Later, Kaplan posed a conjecture that implies Bras-Amor\'os's conjecture. In this dissertation, we prove Kaplan's conjecture when the multiplicity is 4 or 6 by counting the number of integer points in a polytope. Moreover, we find a formula for the number of numerical semigroups with multiplicity 4 and genus $g$.
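For small genus, the counts underlying Bras-Amor\'os's conjecture can be reproduced by brute force over gap sets, using the standard fact that the Frobenius number of a genus-$g$ numerical semigroup is at most $2g-1$. A Python sketch (illustrative only, unrelated to the polytope techniques of the dissertation):

```python
from itertools import combinations

def is_gap_set(gaps):
    """Check that N \\ gaps is closed under addition (so it is a semigroup)."""
    gapset = set(gaps)
    F = max(gaps)
    small = [n for n in range(1, F) if n not in gapset]   # positive elements < F
    return all(a + b not in gapset for a in small for b in small if a + b <= F)

def count_by_genus(g):
    """Number of numerical semigroups of genus g (brute force)."""
    if g == 0:
        return 1
    # every gap of a genus-g semigroup lies in {1, ..., 2g-1}
    return sum(1 for gaps in combinations(range(1, 2 * g), g) if is_gap_set(gaps))

# matches the known sequence n_g = 1, 1, 2, 4, 7, 12, 23, 39, ...
assert [count_by_genus(g) for g in range(8)] == [1, 1, 2, 4, 7, 12, 23, 39]
```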
Problems relating to arcs in projective space and subrings in Z^n
Isham, Kelly
An important problem in mathematics is understanding the behavior of functions. If we can show that complicated functions can be simplified, computation becomes more feasible. In this thesis, we describe two major projects: counting arcs in projective space and counting subrings in $\Z^n$. In the case of arcs in $\mb{P}^{k-1}(\Fq)$, we seek to understand whether the counting function can be expressed by a simple formula like a polynomial. In the case of subrings of $\Z^n$, we are interested in the asymptotic growth of the function that counts subrings of index at most $X$.
An $n$-arc in $(k-1)$-dimensional projective space is a set of $n$ points so that no $k$ lie on a hyperplane. Let $C_{n,k}(q)$ be the number of $n$-arcs in $\mb{P}^{k-1}(\Fq)$. We discuss new results for $n$-arcs when $k = 3$ and 4. Building off work of Kaplan, Kimport, Lawrence, Peilen, and Weinreich, we show that $C_{10,3}(q)$ is not quasipolynomial in $q$. For almost all $n$, no formulas for $C_{n,k}(q)$ were known when $k \ge 4$. We introduce a new algorithm to count $n$-arcs in $\mb{P}^3(\Fq)$ in terms of a small number of special combinatorial objects and use it to compute $C_{n,4}(q)$ for $n \le 7$. Finally, we discuss generalizations of this algorithm to higher-dimensional projective space.
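For tiny parameters these arc counts can be checked by naive enumeration (this is our own illustration, not the algorithm of the thesis). For $k=3$ an $n$-arc is $n$ points in the projective plane with no three collinear; since $PGL_3(\mathbb{F}_q)$ acts simply transitively on projective frames, the number of unordered $4$-arcs is $|PGL_3(\mathbb{F}_q)|/4!$, which the sketch below confirms for $q=2,3$:

```python
from itertools import combinations, product

def proj_points(q):
    """Normalized representatives of P^2(F_q), q prime (first nonzero coord = 1)."""
    pts = []
    for v in product(range(q), repeat=3):
        if v == (0, 0, 0):
            continue
        first = next(x for x in v if x != 0)
        inv = pow(first, -1, q)            # modular inverse: q must be prime here
        if tuple((x * inv) % q for x in v) == v:
            pts.append(v)
    return pts

def collinear(p, r, s, q):
    """Three projective points are collinear iff det of their coordinates is 0."""
    det = (p[0] * (r[1] * s[2] - r[2] * s[1])
         - p[1] * (r[0] * s[2] - r[2] * s[0])
         + p[2] * (r[0] * s[1] - r[1] * s[0]))
    return det % q == 0

def count_arcs(n, q):
    """Number of unordered n-arcs in P^2(F_q), by exhaustive search."""
    pts = proj_points(q)
    return sum(1 for S in combinations(pts, n)
               if all(not collinear(a, b, c, q) for a, b, c in combinations(S, 3)))

pgl3 = lambda q: q**3 * (q**3 - 1) * (q**2 - 1)   # |PGL_3(F_q)|
assert count_arcs(4, 2) == pgl3(2) // 24          # the 7 hyperovals of the Fano plane
assert count_arcs(4, 3) == pgl3(3) // 24
```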
Let $f_n(p^e)$ count the number of subrings of index $p^e$ in $\Z^n$. We study lower bounds for $f_n(p^e)$ using techniques from combinatorics and arithmetic geometry. Using these lower bounds, we give the best known result about the asymptotic growth of subrings in $\Z^n$ when $n \ge 8$ by studying the analytic properties of the subring zeta function for $\Z^n$. These results immediately give new information about the asymptotic growth of orders in a fixed degree $n$ number field. Finally, we consider the behavior of $f_n(p^e)$ as a function of $p$. We give evidence that this function may always be polynomial in $p$.
Numerical semigroups, polyhedra, and posets I: the group cone
Kaplan, Nathan;
O'Neill, Christopher
Combinatorial Theory, Volume 1 (2021)
Several recent papers have explored families of rational polyhedra whose integer points are in bijection with certain families of numerical semigroups. One such family, first introduced by Kunz, has integer points in bijection with numerical semigroups of fixed multiplicity, and another, introduced by Hellus and Waldi, has integer points corresponding to oversemigroups of numerical semigroups with two generators. In this paper, we provide a combinatorial framework from which to study both families of polyhedra. We introduce a new family of polyhedra called group cones, each constructed from some finite abelian group, from which both of the aforementioned families of polyhedra are directly determined but that are more natural to study from a standpoint of polyhedral geometry. We prove that the faces of group cones are naturally indexed by a family of finite posets, and illustrate how this combinatorial data relates to semigroups living in the corresponding faces of the other two families of polyhedra.
Keywords: Polyhedron, numerical semigroup.
Mathematics Subject Classifications: 52B05, 20M14
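To make the Kunz polytope concrete: a numerical semigroup $S$ of multiplicity $m$ is encoded by its Kunz coordinates $x_i = (a_i - i)/m$, where $a_i$ is the smallest element of $S$ congruent to $i$ modulo $m$, and Kunz's inequalities cut out exactly the integer points arising this way. A small Python sketch (the inequalities are the classical ones from Kunz's work, not the group-cone construction of this article):

```python
def semigroup_elements(gens, limit):
    """Elements of the numerical semigroup <gens> up to limit."""
    elems = {0}
    changed = True
    while changed:
        changed = False
        for g in gens:
            for e in list(elems):
                if e + g <= limit and e + g not in elems:
                    elems.add(e + g)
                    changed = True
    return elems

def kunz_coordinates(gens):
    """Kunz coordinates (x_1, ..., x_{m-1}) of the semigroup <gens>."""
    m = min(gens)                          # the multiplicity
    S = semigroup_elements(gens, m * (max(gens) + m))   # safe search bound
    apery = [min(s for s in S if s % m == i) for i in range(m)]
    return [(apery[i] - i) // m for i in range(1, m)]

def in_kunz_polyhedron(x):
    """Kunz inequalities: x_i + x_j >= x_{i+j}, with a +1 slack when i+j wraps mod m."""
    m = len(x) + 1
    if any(v < 1 for v in x):              # multiplicity exactly m forces x_i >= 1
        return False
    coords = [None] + list(x)              # 1-based indexing
    for i in range(1, m):
        for j in range(i, m):
            if i + j < m and coords[i] + coords[j] < coords[i + j]:
                return False
            if i + j > m and coords[i] + coords[j] + 1 < coords[(i + j) % m]:
                return False
    return True

x = kunz_coordinates([4, 7, 9])            # the numerical semigroup <4, 7, 9>
assert x == [2, 3, 1] and in_kunz_polyhedron(x)
```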
Average Cyclicity for Elliptic Curves in Torsion Families
Fredericks, Luke Robert
Let $E/\QQ$ be an elliptic curve; for all but finitely many primes $p$, reduction modulo $p$ yields an elliptic curve over the finite field $\mathbb{F}_p$, and it is natural to ask about the properties of these reductions for varying primes. The purpose of this dissertation is to study one such question, namely, how frequently the reductions result in an elliptic curve with cyclic group structure. To be precise, we let $\pi_E^{cyc}(x)$ denote the number of primes $p$ less than $x$ for which the reduction of $E$ modulo $p$ is cyclic. The asymptotic behavior of this function has been established by Serre conditionally on the Generalized Riemann Hypothesis. Furthermore, Banks and Shparlinski showed that this asymptotic holds unconditionally on average over the family of elliptic curves given by short Weierstrass equations with coefficients taken in a `box.' Inspired by the work of Battista, Bayless, Ivanov and James on the Lang-Trotter conjecture, we study the average asymptotic behavior of the functions $\pi_E^{cyc}$ where the average is taken over certain thin families of elliptic curves: elliptic curves with a rational point of order $m$ defined over $\mathbb{Q}$. The results we obtain are again in agreement with the conditional asymptotic. We also extend the study of cyclicity from elliptic curves defined over the rational numbers to elliptic curves defined over a quadratic extension of $\QQ$ and obtain partial results in that case. As a key tool, we prove an analogue of a result of Vl\u{a}du\c{t} that estimates the number of elliptic curves over a finite field which have some specified torsion and which have group structure that is as cyclic as possible.
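Cyclicity of a reduction is easy to experiment with numerically: list the points of $E(\mathbb{F}_p)$, compute the order of each, and test whether some point generates. The following naive Python sketch (small primes only; none of the analytic machinery of the dissertation) uses that $E(\mathbb{F}_p)$ is cyclic iff it contains a point of order $\#E(\mathbb{F}_p)$; the two test curves are our own illustrative choices:

```python
def ec_points(a, b, p):
    """Points of y^2 = x^3 + ax + b over F_p (p an odd prime); None = infinity."""
    sq = {}
    for y in range(p):
        sq.setdefault(y * y % p, []).append(y)
    pts = [None]
    for x in range(p):
        for y in sq.get((x**3 + a * x + b) % p, []):
            pts.append((x, y))
    return pts

def ec_add(P, Q, a, p):
    """Chord-tangent group law."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def order(P, a, p):
    """Order of P: smallest k with kP = infinity."""
    k, Q = 1, P
    while Q is not None:
        Q, k = ec_add(Q, P, a, p), k + 1
    return k

def is_cyclic(a, b, p):
    pts = ec_points(a, b, p)
    return any(order(P, a, p) == len(pts) for P in pts)

# y^2 = x^3 + x + 1 over F_5 has 9 points and cannot contain full 3-torsion
# (3 does not divide 5 - 1), so the group is Z/9: cyclic
assert is_cyclic(1, 1, 5)
# y^2 = x^3 - x over F_7 contains all three 2-torsion points: not cyclic
assert not is_cyclic(-1, 0, 7)
```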
COUPLINGS, COMPONENT COUNTING PROCESSES, AND PROBABILISTIC NUMBER THEORY
Squillace, Joseph Paul
Our results are concerned with couplings, component counts of combinatorial objects, and
probabilistic number theory. In the theory of couplings, we are concerned with the general
problem of proving the existence of joint distributions p(i,j) of two discrete random variables M and
N subject to infinitely many constraints of the form p(M=i, N=j)=0. The constraints placed on the joint distributions will require, for many
elements j in the range of N, p(M=i, N=j)=0 for infinitely many values of i in the
range of M, where the corresponding values of i depend on j. To prove the existence of
such joint distributions, we apply a theorem proved by Volker Strassen on the existence of joint
distributions with prespecified marginal distributions. In the case in which N is uniformly
distributed in a combinatorial structure with C_i components of size i, we seek to measure
the amount of dependence in the process (C_i )_{i\le n} by coupling N with a variable M such that
M has Z_i components of size i and the Z_i's are independent, with
sum_{i\le n}(C_i - Z_i)^+ <2.
In the combinatorial example of noncrossing partitions, we provide
two derivations of the probability distribution of the component counts of a uniformly distributed noncrossing partition. Upon applying a bijection between the set of noncrossing partitions
and Dyck paths consisting of up-steps and down-steps, our results specify the joint and marginal distributions of the block counts of the number of consecutive up-steps in a uniformly random chosen Dyck path.
In number theory, we give an analogue of the Erd\H{o}s-Kac
Theorem by providing a family of integer-valued random variables on
{1,\ldots,n} whose number of distinct prime factors
is roughly \log\log n+X\cdot\sqrt{\log\log n} for large values
of n, where X is a standard normal variable. Our final result
involves couplings of a Zeta-distributed variable. Given s>1 and
n\in\mathbb{N}, consider a Zeta(s)-distributed integer-valued
random variable with prime factorization Z(s)=\prod_{p}p^{\alpha_{p}(s)}
and the truncation Z_{n}(s)\coloneqq\prod_{p\le n}p^{\alpha_{p}(s)}.
The prime exponents \alpha_{p}(s) are independent with
\alpha_{p}(s)\sim\text{Geometric}(1/p^s),
and we also consider a random variable M(n)=\prod_{p\le n}p^{Z_{p}},
where the Z_{p}'s are independent with Z_{p}\sim\text{Geometric}(1/p).
We apply the concept of pivot mass and a theorem proved by Strassen
in order to prove the existence of couplings of a Zeta-distributed random variable Z(s) and M(n)
in which we can make probabilistic divisibility statements of the form "Z_{n}(s) divides M(n)P(n)"
for some random prime P(n)\le n. In particular, we will
prove that for each n\in\mathbb{N} and an integer k\ge 4, there exists an \varepsilon(k)>0
such that when s\in\left(1,1+\varepsilon(k)\right) we can
couple Z(s) and M\left(n\right) such that if Z(s)\le k,
then Z_{n}(s) always divides M(n)P(n)
for some random prime P(n)\le n.
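The Erd\H{o}s-Kac normalization mentioned above can be seen (slowly) emerging in exact data: sieve the number of distinct prime factors up to N and compare the mean with \log\log N. A small Python sketch (our own illustration; constant-order corrections such as the Meissel-Mertens constant are still clearly visible at this scale):

```python
from math import log

def omega_sieve(N):
    """omega[n] = number of distinct prime factors of n, for 0 <= n <= N."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:                  # no prime below p divides p: p is prime
            for m in range(p, N + 1, p):
                omega[m] += 1
    return omega

N = 100_000
om = omega_sieve(N)
mean = sum(om[1:]) / N
llN = log(log(N))                          # about 2.44 here
# Erdos-Kac: (omega(n) - loglog n)/sqrt(loglog n) is asymptotically standard
# normal; already the mean of omega is loglog N + O(1), the O(1) being
# roughly the Meissel-Mertens constant ~0.26
assert abs(mean - llN) < 0.5
```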
\begin{document}
\title[Verlinde-type formulas for rational surfaces] {Verlinde-type formulas for rational surfaces} \author{Lothar G\"ottsche} \address{International Centre for Theoretical Physics, Strada Costiera 11, 34014 Trieste, Italy} \email{[email protected]} \subjclass[2000]{Primary 14D21}
\begin{abstract} For a projective algebraic surface $X$, with an ample line bundle $\omega$, let $M_\omega^X(c_1,d)$ be the moduli space of rank $2$, $\omega$-semistable torsion free sheaves $E$ with $c_1(E)=c_1$ and $4c_2(E)-c_1^2=d$. For line bundles $L$ on $X$, let $\mu(L)$ be the corresponding determinant line bundles on $M_H^X(c_1,d)$. The $K$-theoretic Donaldson invariants are the holomorphic Euler characteristics $\chi(M_\omega^X(c_1,d),\mu(L))$. In this paper we develop an algorithm which in principle determines all their generating functions for the projective plane, its blowup in finitely many points, and also for ${\mathbb P}^1\times{\mathbb P}^1$. Among others, we apply this algorithm to compute the generating functions of the $\chi(M_H^{{\mathbb P}^2}(0,d),\mu (nH))$ and $\chi(M_H^{{\mathbb P}^2}(H,d),\mu (nH))$ for $n\le 11$, for $H$ the hyperplane class on ${\mathbb P}^2$. We give some conjectures about the general structure of these generating functions and interpret them in terms of Le Potier's strange duality conjecture. \end{abstract}
\maketitle \tableofcontents
\section{Introduction} In this paper let $(X,\omega)$ be a pair of a rational surface $X$ and an ample line bundle $\omega$.
We consider the moduli spaces $M^X_\omega(c_1,d)$ of $\omega$-semistable torsion-free coherent sheaves of rank $2$ on $X$ with Chern classes $c_1\in H^2(X,{\mathbb Z})$ and $c_2$ such that $d=4c_2-c_1^2$. Associated to a line bundle $L$ on $X$ there is a determinant bundle $\mu(L)\in\operatorname{Pic}(M^X_\omega(c_1,d))$. If $L$ is ample, then $\mu(L)$ is nef and big on $M^X_\omega(c_1,d)$, and a suitable power induces the map from $M^X_\omega(c_1,d)$ to the corresponding Uhlenbeck compactification.
If one considers instead of a rational surface $X$ a curve $C$, the spaces of sections of the corresponding determinant bundles are the spaces of conformal blocks, and their dimensions are given by the celebrated Verlinde formula. In \cite{Zag} many reformulations of this formula are given. In particular \cite[Thm.~1(vi)]{Zag} expresses the generating function on a fixed curve as a rational function. In this paper we study the generating functions of the holomorphic Euler characteristics $\chi(M^X_\omega(c_1,d),\mu(L))$, and show that they are given as rational functions.
Let $$\chi_{c_1}^{X,\omega}(L):=\sum_{d>0}\chi(M^X_\omega(c_1,d),\mu(L))\Lambda^d.$$ (In case $c_1=0$ the coefficient of $\Lambda^4$ is slightly different; furthermore, in case $\omega$ lies on a wall (see below), instead of $\chi(M^X_\omega(c_1,d),\mu(L))$ we here use the average over the chambers adjacent to $\omega$.) We can view the spaces of sections $H^0(M^X_\omega(c_1,d),\mu(L))$ as analogues of the spaces of conformal blocks. In most of the cases we consider (see \propref{highvan} below), the higher cohomology groups of the determinant bundle $\mu(L)$ vanish. Thus our formulas for the $\chi_{c_1}^{X,\omega}(L)$ are analogues of the Verlinde formula for rational surfaces.
\begin{Notation} For two Laurent series $P(\Lambda)=\sum_{n} a_n\Lambda^n,Q(\Lambda)=\sum_{n} b_n\Lambda^n\in {\mathbb Q}[\Lambda^{-1}][[\Lambda]]$ we write $P(\Lambda)\equiv Q(\Lambda)$ if there is an $n_0\in {\mathbb Z}$ with $a_n=b_n$ for all $n\ge n_0$. \end{Notation}
\begin{Theorem} \label{rationalal} Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ or a blowup of ${\mathbb P}^2$ in $n$ points.
Let $c_1\in H^2(X,{\mathbb Z})$, $L\in \operatorname{Pic}(X)$. There is a polynomial $P^{X}_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Q}[\Lambda^{\pm 4}]$ and $l^{X}_{c_1,L}\in {\mathbb Z}_{\ge 0}$, such that $$\chi_{c_1}^{X,\omega}(L)\equiv\frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{l^X_{c_1,L}}}.$$
Here $\omega$ is an ample line bundle on $X$ with $\langle\omega,K_X\rangle<0$. In case $X$ is the blowup of ${\mathbb P}^2$ in $n$ points we assume furthermore that $\omega=H-a_1E_1-\ldots-a_nE_n$, with $|a_i|<\frac{1}{\sqrt{n}}$ for all $i$. Note that $P^{X}_{c_1,L}(\Lambda)$, $l^{X}_{c_1,L}$ are independent of $\omega$ (subject to the conditions above).
In particular for any other ample line bundle $\omega'$ on $X$ satisfying the conditions for $\omega$ above, we have $\chi_{c_1}^{X,\omega'}(L)-\chi_{c_1}^{X,\omega}(L)\in {\mathbb Z}[\Lambda]$.
\end{Theorem}
We will see that there is an algorithm for determining the generating functions $\chi_{c_1}^{X,\omega}(L)$ of \thmref{rationalal}. Let now $H$ be the hyperplane bundle on ${\mathbb P}^2$. We apply the algorithm above to determine the generating functions of the $\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))$ and the $\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))$ for $n\le 11$. These were determined before (and strange duality proven) for $c_1=0$ and $n=1,2$ in \cite{Abe}, and for all $c_1$ for $n=1,2,3$ in \cite{GY}. We get the following result. Put {\small \begin{align*}&p_1(t)=p_2(t)=1,\ p_3(t)=1+t^2,p_4(t)=1+6t^2+t^3,
p_5(t)=1+21t^2+20t^3+21t^4+t^6, \\ &p_6(t)=1+56t^2+147t^3+378t^4+266t^5+148t^6+27t^7+t^8,\\ &p_7(t)=1+126t^2+690t^3+3435t^4+7182t^5+9900t^6 +7182t^7+3435t^8+690t^9+126t^{10}+t^{12},\\ &p_8(t)=1+252t^2+2475t^3+21165t^4+91608t^5+261768t^6+462384t^7+ 549120t^8+417065t^9\\ &\ +210333t^{10}+66168t^{11} +13222t^{12}+1515t^{13}+75t^{14}+t^{15},\\ &p_9(t)=1 + 462t^2 + 7392t^3 + 100359t^4 + 764484t^5+ 3918420t^6 + 13349556t^7 + 31750136t^8 \\ &\ + 52917800t^9 + 62818236t^{10} + 52917800t^{11}+ 31750136t^{12} + 13349556t^{13} + 3918420t^{14}\\ &\ + 764484t^{15}+ 100359t^{16}+7392t^{17}+ 462t^{18}+t^{20},\\ &p_{10}(t)=1+ 792t^2 + 19305t^3 + 393018t^4 + 4788696t^5 + 39997980t^6 + 231274614t^7 + 961535355t^8 \\ &\ + 2922381518t^9 + 6600312300t^{10} + 11171504661t^{11} + 14267039676t^{12} + 13775826120t^{13} \\ &\ + 10059442536t^{14} + 5532629189t^{15} +2277448635t^{16}+693594726t^{17} + 154033780t^{18} +24383106t^{19}\\ &\ + 2669778t^{20}+192588t^{21} + 8196t^{22}+ 165t^{23}+t^{24}.\\ &p_{11}(t)= 1 + 1287t^2 + 45474t^3 + 1328901t^4 + 24287340t^5 + 309119723t^6
+ 2795330694t^7 \\
&\ + 18571137585t^8 + 92530378876t^9 + 351841388847t^{10} + 1033686093846t^{11}+ 2369046974245t^{12}\\ &\ + 4264149851544t^{13} + 6056384937603t^{14} + 6805690336900t^{15}+ 6056384937603t^{16}+ 4264149851544t^{17} \\ &\ + 2369046974245t^{18} + 1033686093846t^{19} + 351841388847t^{20} + 92530378876t^{21} + 18571137585t^{22} \\ &\ + 2795330694t^{23}+ 309119723t^{24}+ 24287340t^{25}+ 1328901t^{26}+ 45474t^{27}+ 1287t^{28}+t^{30}.
\end{align*}}
For $1\le n\le 11$ we put $P_n(\Lambda):=p_n(\Lambda^4)$, $Q_n(\Lambda)=\Lambda^{n^2-1}P_n(\frac{1}{\Lambda})$.
It is easy to see that for $n$ odd, $P_n$ is symmetric, i.e. $P_n(\Lambda)=Q_n(\Lambda)$.
\begin{Theorem} \label{mainp2} For $1\le n\le 11$ we have \begin{enumerate} \item $$1+\binom{n+2}{2}\Lambda^4+\sum_{d>4}\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))\Lambda^d=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$ \item if $n$ is even, then $$\sum_{d>0}\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))\Lambda^d=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$ \end{enumerate} \end{Theorem}
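Expanding the rational functions of \thmref{mainp2} back into a power series recovers the individual Euler characteristics. The following Python sketch (our own illustration; the series is indexed by powers of $\Lambda^4$, so coefficient $d$ below is the coefficient of $\Lambda^{4d}$) performs the expansion via $1/(1-t)^k=\sum_d \binom{d+k-1}{k-1}t^d$:

```python
from math import comb

def series(numer, k, N):
    """Coefficients up to t^N of numer(t)/(1-t)^k, numer as ascending coeff list."""
    inv = [comb(d + k - 1, k - 1) for d in range(N + 1)]   # expansion of 1/(1-t)^k
    return [sum(numer[j] * inv[d - j] for j in range(min(d, len(numer) - 1) + 1))
            for d in range(N + 1)]

# n = 1: p_1 = 1 and chi = binom(3,2) = 3, so the coefficients are binom(d+2, 2)
assert series([1], 3, 4) == [1, 3, 6, 10, 15]
# n = 3: p_3 = 1 + t^2 over (1-t)^10
assert series([1, 0, 1], 10, 3) == [1, 10, 56, 230]
```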
We see that for $n\le 11$, the generating functions $\chi^{{\mathbb P}^2,H}_H(nH)$, $\chi^{{\mathbb P}^2,H}_0(nH)$ have a number of interesting features, which we conjecture to hold for all $n>0$.
\begin{Conjecture}\label{p2con} For all $n>0$ there are polynomials $p_n(t)\in {\mathbb Z}[t]$ such that the following holds. We put $P_n(\Lambda)=p_n(\Lambda^4)$, $Q_n(\Lambda)=\Lambda^{n^2-1}P_n(\frac{1}{\Lambda})$. \begin{enumerate} \item $$1+\binom{n+2}{2}\Lambda^4+\sum_{d>4}\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))\Lambda^d=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$ \item If $n$ is odd, then $P_n(\Lambda)=Q_n(\Lambda)$, if $n$ is even then $$\sum_{d>0}\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))\Lambda^d=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$ \item $p_n(1)=2^{\binom{n-1}{2}}$. \item For $i$ odd and $i\le n-3$ we have $$\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-(n^2-1)x/2}P_n(e^x)]=\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-(n^2-1)x/2}Q_n(e^x)]=0.$$
\item The degree of $p_{n}(t)$ is the largest integer strictly smaller than $n^2/4$.
\end{enumerate} \end{Conjecture}
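For the polynomials listed above (transcribed here for $3\le n\le 7$), parts (2)-(5) of \conref{p2con} can be verified mechanically. A Python sketch, where part (4) becomes the exact integer identity $\sum_j a_j(8j-(n^2-1))^i=0$ for odd $i\le n-3$ after substituting $P_n(e^x)=\sum_j a_j e^{4jx}$ (checking $P_n$ suffices, since the condition for $Q_n$ follows by $x\mapsto -x$):

```python
from math import comb

# coefficient lists of p_n(t), ascending powers of t, transcribed from the paper
p = {
    3: [1, 0, 1],
    4: [1, 0, 6, 1],
    5: [1, 0, 21, 20, 21, 0, 1],
    6: [1, 0, 56, 147, 378, 266, 148, 27, 1],
    7: [1, 0, 126, 690, 3435, 7182, 9900, 7182, 3435, 690, 126, 0, 1],
}

for n, coeffs in p.items():
    # (3): p_n(1) = 2^{binom(n-1, 2)}
    assert sum(coeffs) == 2 ** comb(n - 1, 2)
    # (5): deg p_n is the largest integer strictly below n^2/4
    assert len(coeffs) - 1 == (n * n - 1) // 4
    # (2): for n odd, P_n = Q_n, i.e. p_n is palindromic
    if n % 2 == 1:
        assert coeffs == coeffs[::-1]
    # (4): sum_j a_j (8j - (n^2-1))^i = 0 for odd i <= n-3
    for i in range(1, n - 2, 2):
        assert sum(a * (8 * j - (n * n - 1)) ** i
                   for j, a in enumerate(coeffs)) == 0
```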
On ${\mathbb P}^1\times {\mathbb P}^1$ we get the following results. Let $F$ and $G$ be the classes of the fibres of the projections to the two factors. Let {\small \begin{align*} q^0_1&:=1,\ q^0_{2}:=1+t^2,\ q^0_3:=1+10t^2+4t^3+t^4,\ q^0_4:=1+46t^2 + 104t^3 + 210t^4 + 104t^5 + 46t^6 + t^8,\\ q^0_5&:=1 + 146t^2 + 940t^3 + 5107t^4 + 12372t^5 + 19284t^6+ 16280t^7 + 8547t^8 + 2452t^9 + 386t^{10} + 20t^{11} + t^{12},\\ q^0_6&:=1 + 371t^2 + 5152t^3 + 58556t^4 + 361376t^5 + 1469392t^6 + 3859616t^7 + 6878976t^8 + 8287552t^9 \\&+ 6878976t^{10} + 3859616t^{11} + 1469392t^{12} + 361376t^{13} + 58556t^{14} + 5152t^{15} + 371t^{16} + t^{18},\\ q^0_7&=1+812t^2+ 20840t^3 + 431370t^4+5335368t^5+ 44794932t^6+ 259164216t^7+ 1070840447t^8\\ &+ 3214402272t^9+ 7125238944t^{10}+ 11769293328t^{11} + 14581659884t^{12} + 13577211024t^{13}\\ &+9496341984t^{14} + 4966846032t^{15} + 1928398719t^{16} + 548923040t^{17}+ 112654644t^{18}+ 16232904t^{19}\\ &+ 1584906t^{20}+ 97448t^{21}+ 3564t^{22} + 56t^{23}+t^{24},\\ q^{F+G}_{2}&=t^{\frac{1}{2}}(1+t),\ q^{F+G}_{4}=t^{\frac{1}{2}}(1 + 10t + 84t^2 + 161t^3 + 161t^4+ 84t^5+ 10t^6 + t^7),\\ q^{F+G}_{6}&=t^{\frac{1}{2}}(1+ 35t+ 1296t^2+ 18670t^3 + 154966t^4 + 770266t^5 + 2504382t^6+ 5405972t^7 + 7921628t^8 \\&+ 7921628t^{9} + 5405972t^{10}+ 2504382t^{11} + 770266t^{12}+ 154966t^{13} + 18670t^{14} + 1296t^{15}+ 35t^{16} + t^{17}),\\ q^F_{2}&:=2t,\ q^F_4:=3t + 43t^2 + 105t^3 + 210t^4 + 105t^5 + 43t^6 + 3t^7,\\ q^F_{6}&:=4t + 274t^2 + 5520t^3 + 57022t^4 + 366052t^5 + 1460922t^6 + 3873184t^7 + 6855798t^8 + 8316880t^9 \\&+ 6855798t^{10}+ 3873184t^{11} + 1460922t^{12} + 366052t^{13} + 57022t^{14} + 5520t^{15} + 274t^{16} + 4t^{17}.\\ \end{align*}} For $n$ odd put $q^{F+G}_n(t)=t^{n^2/2}q^0_n(t^{-1})$.
Then we get \begin{Theorem}\label{P11gen} \begin{enumerate} \item $\displaystyle{\sum_{d>4} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(0,d),\mu(nF+nG))\Lambda^d=\frac{q^0_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}-1-(n^2+2n+1)\Lambda^4}$ for $1\le n\le 7$. \item $\displaystyle{\sum_{d>0} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(F+G,d),\mu(nF+nG))\Lambda^d=\frac{q^{F+G}_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}-\Lambda^2}$ for $1\le n\le 7$. \item $\displaystyle{\sum_{d>0} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(F,d),\mu(nF+nG))\Lambda^d=\frac{q^F_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}}$ for $n=2,4,6$. \end{enumerate} \end{Theorem} \begin{Remark} \begin{enumerate} \item For $n$ even and $c_1=0$, $F$, $F+G$ we have $q^{c_1}_n(t)=t^{n^2/2}q^{c_1}_n(t^{-1})$. \item For all $1\le n\le 7$ we have $q^0_n(1)=q^{F+G}_n(1)=2^{(n-1)^2}$, and if $n$ is even also $q^F_n(1)=2^{(n-1)^2}$. \item For all $1\le n \le 7$ and all $i$ odd with $i\le n-2$ we have $\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-n^2x/4}q^0_{n}(e^x)]=\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-n^2x/4}q^{F+G}_{n}(e^x)]=0$. \end{enumerate} \end{Remark}
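The analogous mechanical checks go through for the ${\mathbb P}^1\times{\mathbb P}^1$ polynomials. A Python sketch verifying the value at $1$ and the symmetry of the Remark for the small cases transcribed here:

```python
# coefficient lists (ascending powers of t) transcribed from the display above
q0 = {1: [1], 2: [1, 0, 1], 3: [1, 0, 10, 4, 1],
      4: [1, 0, 46, 104, 210, 104, 46, 0, 1]}
qF = {2: [0, 2], 4: [0, 3, 43, 105, 210, 105, 43, 3]}

for n, c in q0.items():
    assert sum(c) == 2 ** ((n - 1) ** 2)      # q^0_n(1) = 2^{(n-1)^2}
for n, c in qF.items():
    assert sum(c) == 2 ** ((n - 1) ** 2)      # n even: q^F_n(1) = 2^{(n-1)^2}
for n, c in list(q0.items()) + list(qF.items()):
    if n % 2 == 0:                            # q_n(t) = t^{n^2/2} q_n(1/t)
        padded = c + [0] * (n * n // 2 + 1 - len(c))
        assert padded == padded[::-1]
```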
The results on ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ as well as the computations for other rational surfaces lead to a general conjecture. For a line bundle $L$ on a rational surface $X$ we denote by $\chi(L)=L(L-K_X)/2+1$ the holomorphic Euler characteristic and by
$g(L)=L(L+K_X)/2+1$ the genus of a smooth curve in the linear system $|L|$.
\begin{Conjecture}\label{ratconj} Let $X$ be a rational surface and let $\omega$ be ample on $X$ with $\langle\omega, K_X\rangle<0$. Let $L$ be a sufficiently ample line bundle on $X$. Then we have the following. \begin{enumerate} \item There is a polynomial $P^X_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Z}_{\ge 0}[\Lambda^{\pm 4}]$, such that $$\sum_{d\ge 0} \chi(M^{X}_\omega(c_1,d),\mu(L))\Lambda^d\equiv \frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}.$$ \item We have $P^X_{c_1,L}(1)=2^{g(L)}$. \item We have the ``duality''
$$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{L+K_X-c_1,L}(\frac{1}{\Lambda}).$$ \item If $i$ is odd, and $L$ is sufficiently ample with respect to $i$, then $$\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-\frac{1}{2}(L^2+8-K_X^2)x}P^X_{c_1,L}(e^x)\big]=0.$$ In the case of $({\mathbb P}^2,dH)$ and $({\mathbb P}^1\times {\mathbb P}^1,dF+dG)$ sufficiently ample with respect to $i$ means that $L+K_X$ is $i$-very ample.
\end{enumerate} \end{Conjecture}
\begin{Remark} \label{unique} The polynomial $P^X_{c_1,L}(\Lambda)$ is not well defined. We can write $P^X_{c_1,L}(\Lambda)=\Lambda^{-c_1^2}p^X_{c_1,L}(\Lambda^4)$, and the polynomial $p^X_{c_1,L}(t)$ is well defined only up to adding a Laurent polynomial in $t$ divisible by $(1-t)^{\chi(L)}$. On the other hand, if $L$ is sufficiently ample with respect to $c_1,X$, we conjecture that we can choose $p^X_{c_1,L}(t)$ with $\deg(p^X_{c_1,L}(t))<\chi(L)$ (i.e. the difference in degree of the highest order and lowest order term in $p^X_{c_1,L}(t)$ is smaller than $\chi(L)$). Assuming this, $p^X_{c_1,L}(t)$ and thus $P^X_{c_1,L}(\Lambda)$ are uniquely determined. \end{Remark}
\begin{Remark}\label{chipol} Part (1) of \conref{ratconj} requires a condition of sufficient ampleness (see \thmref{rpoly}). On the other hand it appears that a modified version of the conjecture holds in larger generality, i.e. $\chi^{X,\omega}_{c_1}(L)\equiv \frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}$ with $P^X_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Q}[\Lambda^{\pm 4}]$, and \begin{enumerate} \item $P^X_{c_1,L}(1)=2^{g(L)}$, \item
$\chi^{X,\omega}_{c_1}(L)\equiv (-1)^{\chi(L)}\Lambda^{-(L-K_X)^2+4} \cdot \chi^{X,\omega}_{L+K_X-c_1}(L)|_{\Lambda=\frac{1}{\Lambda}}.$ \end{enumerate} \end{Remark}
\begin{comment} Let $X$ be a rational surface. The above results all have the following shape: for given $c_1,L$, and general ample $\omega$ on $X$, they give a formula for the generating function $\chi^{X,\omega}_{c_1}(L)$. It would be very desirable to understand the dependence of $\chi^{X,\omega}_{c_1}(L)$ on the line bundle $L$. In \cite[Thm.~1.2]{GY} a partial result of this kind is obtained for ${\mathbb P}^1\times{\mathbb P}^1$ and ${\mathbb P}^2$. Here we give a generalization to blowups of ${\mathbb P}^2$.
\begin{Theorem}\label{ruledblow} Let $X$ be the blowup of ${\mathbb P}^2$ in $1+n+m$ general points, with exceptional divisors $E,E_1,\ldots,E_n,D_1,\ldots,D_m$. We write $D:=D_1+\ldots+D_m$. Let $\omega=...$. Fix $r_1,r_2,s_1,s_2$ with $r_1+r_2\le n$ and $s_1+s_2\le m$ Let $$J_{r_1,r_2}^{s_1,s_2}:=E_1+\ldots+E_{r_1}+2(E_{r_1+1}+\ldots+E_{r_1+r_2})+D_{1}+\ldots +D_{s_1}+2(D_{s_1+1}+\ldots D_{s_1+s_2}).$$ Then we have the following. \begin{enumerate} \item If $s_1>0$, then $\chi^{X,\omega}_{D}(dH-dE-J_{r_1,r_2}^{s_1,s_2})=\chi^{X,\omega}_{H-E+D}(dH-dE-J_{r_1,r_2}^{s_1,s_2})=0$, otherwise \begin{align*} \chi^{X,\omega}_{D}(dH-dE-J_{r_1,r_2}^{s_1,s_2})\equiv \chi^{X,\omega}_{H-E+D}(dH-dE-J_{r_1,r_2}^{s_1,s_2})\equiv \frac{(-1)^{s_2}\Lambda^m}{(1-\Lambda^4)^{d+1-r_1-2r_2-2s_2}}\\ \end{align*} \item \begin{align*} $\chi^{X,\omega}_{D}(nH-(n-1) E-J_{r_1,r_2}^{s_1,s_2})\equiv 0$ if $s_1$ is odd, and $\chi^{X,\omega}_{H-E+D}(nH-(n-1) E-J_{r_1,r_2}^{s_1,s_2})\equiv 0$ \chi^{X,\omega}_{D}(nH-(n-1) E-J_{r_1,r_2}^{s_1,s_2})\equiv \\ \chi^{X,\omega}_{H-E+D}(nH-(n-1) E-J_{r_1,r_2}^{s_1,s_2})\equiv \end{align*} \item \begin{align*} \chi^{X,\omega}_{D}(nH-(n-2)E-J_{r_1,r_2}^{s_1,s_2})\equiv\\ \chi^{X,\omega}_{H-E+D}(nH-(n-2)E-J_{r_1,r_2}^{s_1,s_2})\equiv \end{align*} \end{enumerate} \end{Theorem} \end{comment}
\begin{comment} \begin{Remark}\label{chipol} Part (1) of \conref{ratconj} requires some condition of sufficient ampleness, which sometimes can be stronger than just very ampleness. In the case of ${\mathbb P}^2$ it just means very ample. On the other hand by \cite[Thm.~2.3]{dAH} $dH-E_1-\ldots -E_r$ will be very ample for $d=5$ and $r\le 15$, but in \ref{???} the coefficients of $P^X_{c_1,L}$ are all positive only for $r\le 12$. On the other hand it appears that (1), (2), (3) hold in much larger generality if we allow $P^X_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Q}[\Lambda^4]$, i.e. we do not require the positivity and the integrality of the coefficients.
\end{Remark} \end{comment} {\bf Approach.} This paper is built on \cite{GY}, and both papers are built on \cite{GNY}. In \cite{GNY} the wallcrossing terms for the $K$-theoretic Donaldson invariants are determined in terms of modular forms, based on the solution of the Nekrasov conjecture for the $K$-theoretic partition function (see \cite{Nek}, \cite{NO}, \cite{NY1},\cite{NY2},\cite{NY3}), and both \cite{GY} and this paper sum up the wallcrossing terms to get closed formulas for the generating functions. The main new inputs are the systematic use of the generating functions of the ``$K$-theoretic Donaldson invariants with point class'' $\chi^{X,\omega}_{c_1}(L,P^r)$, and the blowup formulas. We introduce in an ad hoc way $\chi^{X,\omega}_{c_1}(L,P^r):=\frac{1}{\Lambda^r}\chi^{\widehat X,\omega}_{c_1+E}(L-E)$, where $\widehat X$ is the blowup of $X$ in $r$ general points and $E$ is the sum of the exceptional divisors (but note that these are invariants on $X$, depending on an ample class $\omega$ on $X$). These invariants satisfy a wallcrossing formula which is very similar to that of the standard $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L)$. We prove blowup formulas that compute all the generating functions of $K$-theoretic Donaldson invariants on any blowup of $X$ in terms of the $\chi^{X,\omega}_{c_1}(L,P^r)$. On the other hand we also prove blowdown formulas, which compute all the generating functions of the $K$-theoretic Donaldson invariants with point class $\chi^{X,\omega}_{c_1}(M,P^r)$ in terms of a {\it very small part} of those on the blowup $\widehat X$. Then, generalizing the methods of \cite{GY}, we compute this small part in the case $\widehat X$ is the blowup of ${\mathbb P}^2$ in a point. Thus, using the blowdown formulas, we determine the generating functions of the $K$-theoretic Donaldson invariants with point class of ${\mathbb P}^2$, and thus, by using the blowup formula again, of all blowups of ${\mathbb P}^2$.
Finally, as the blowup of ${\mathbb P}^1\times {\mathbb P}^1$ in a point is equal to the blowup of ${\mathbb P}^2$ in two points, we apply the blowdown formulas again to determine generating functions for ${\mathbb P}^1\times{\mathbb P}^1$. These methods give an algorithm, which in principle computes all the generating functions mentioned above. The algorithm proves the rationality of the generating functions, and is carried out for many $X$ and $L$ to obtain the explicit generating functions $\chi^{X,\omega}_{c_1}(L)$.
\section{Background material}\label{sec:background}
In this whole paper $X$ will be a simply connected nonsingular projective rational surface over ${\mathbb C}$. Usually $X$ will be ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many points.
We fix some notation that we will use throughout the paper.
\begin{Notation} \label{R} \begin{enumerate} \item For classes $\alpha,\beta\in H^2(X,{\mathbb Q})$, we denote by $\langle\alpha,\beta\rangle$ their intersection product. For $\beta\in H^2(X)$ we also write $\beta^2$ instead of $\langle\beta,\beta\rangle$. \item For a line bundle $L$ on $X$ we denote its first Chern class by the same letter. \item If $\widehat X$ is the blowup of $X$ in a point or in a finite set of points, and $L\in \operatorname{Pic}(X)$, we denote its pullback to $\widehat X$ by the same letter. The same holds for classes $\alpha\in H^2(X,{\mathbb R})$. \item We denote $
{\mathcal R}:={\mathbb Q}[[q^2\Lambda^2,q^4]]$.
\item Let ${\mathbb Q}[t_1,\ldots,t_k]_n$ be the set of polynomials in $t_1,\ldots,t_k$ of degree $n$ and ${\mathbb Q}[t_1,\ldots,t_k]_{\le n}$ the set of polynomials in $t_1,\ldots,t_k$
of degree at most $n$.
\item Let $\omega$ be an ample divisor on $X$. For $r\ge 0$, $c_1\in \operatorname{Pic}(X)$, $c_2\in H^4(X,{\mathbb Z})$ let $M_\omega^X(r,c_1,c_2)$ be the moduli space of $\omega$-semistable rank $r$ sheaves $E$ on $X$ with $c_1(E)=c_1$, $c_2(E)=c_2$. Let $M^X_\omega(r, c_1,c_2)_s$ be the open subset of stable sheaves.
We will write $M^X_\omega(c_1,d)$ with $d:=4c_2-c_1^2$ instead of $M^X_\omega(2,c_1,c_2)$.
\end{enumerate}
\end{Notation}
\subsection{Determinant line bundles}\label{sec:detbun} We briefly review the determinant line bundles on the moduli space \cite{DN},\cite{LP1}, \cite{LP2}, for more details we refer to \cite[Chap.~8]{HL}. We mostly follow \cite[Sec.~1.1,1.2]{GNY}.
For a Noetherian scheme $Y$ we denote by $K(Y)$ and $K^0(Y)$ the Grothendieck groups of coherent sheaves and locally free sheaves on $Y$ respectively.
If $Y$ is nonsingular and quasiprojective, then $K(Y)=K^0(Y)$.
If we want to distinguish a sheaf ${\mathcal F}$ and its class in $K(Y)$, we denote the latter by $[{\mathcal F}]$. The product $[{\mathcal F}].[{\mathcal G}]:=\sum_i (-1)^i[\underline{Tor}_i({\mathcal F},{\mathcal G})]$ makes $K^0(Y)$ into a commutative ring and $K(Y)$ into a $K^0(Y)$-module. For a proper morphism $f\colon Y_1\to Y_2$ we have the pushforward homomorphism \(f_!\colon K(Y_1)\to K(Y_2); [{\mathcal F}] \mapsto\sum_i (-1)^i [R^if_*{\mathcal F}].\)
For any morphism $f\colon Y_1\to Y_2$ we have the pullback homomorphism \( f^*\colon K^0(Y_2)\to K^0(Y_1) \) given by \( [{\mathcal F}] \mapsto[f^*{\mathcal F}] \) for a locally free sheaf ${\mathcal F}$ on $Y_2$. Let ${\mathcal E}$ be a flat family of coherent sheaves of class $c$ on $X$ parametrized by a scheme $S$, then ${\mathcal E}\in K^0(X\times S)$. Let $p:X\times S\to S$, $q:X\times S\to X$ be the projections. Define $\lambda_{\mathcal E}:K(X)\to \operatorname{Pic}(S)$ as the composition of the following homomorphisms: \begin{equation}\label{dlb} \xymatrix@C=0.3cm{
K(X)=K^0(X) \ar[rr]^{~~q^{*}} && K^0(X\times S) \ar[rr]^{.[{\mathcal E}]} && K^0(X\times S) \ar[rr]^{~~~p_{!}} && K^0(S)\ar[rr]^{det^{-1}} && \operatorname{Pic}(S),}\end{equation}
By \cite[Prop.~2.1.10]{HL} we have $p_{!}([{\mathcal F}])\in K^0(S)$ for any $S$-flat family ${\mathcal F}$.
We have the following facts. \begin{enumerate} \item $\lambda_{\mathcal E}$ is a homomorphism, i.e. $\lambda_{\mathcal E}(v_1+v_2)=\lambda_{\mathcal E}(v_1)\otimes \lambda_{{\mathcal E}}(v_2)$. \item If $\mu\in \operatorname{Pic}(S)$ is a line bundle, then $\lambda_{{\mathcal E}\otimes p^*\mu}(v)= \lambda_{{\mathcal E}}(v)\otimes \mu^{\chi(c\otimes v)}$. \item $\lambda_{\mathcal E}$ is compatible with base change: if $\phi:S'\to S$ is a morphism, then $\lambda_{\phi^*{\mathcal E}}(v)=\phi^*\lambda_{{\mathcal E}}(v)$. \end{enumerate}
Define $K_c:=c^\perp=\big\{v\in K(X)\bigm| \chi(v\otimes c)=0\big\}$
and $K_{c,\omega}:=c^\perp\cap\{1,h,h^2\}^{\perp\perp}$, where $h=[{\mathcal O}_\omega]$. Then there are well-defined morphisms $\lambda\colon K_c\to \operatorname{Pic}(M_\omega^X(c)^s)$ and $\lambda\colon K_{c,\omega}\to \operatorname{Pic}(M_\omega^X(c))$ satisfying the following properties:
\begin{enumerate}
\item The morphisms $\lambda$ are compatible with the inclusions $K_{c,\omega}\subset K_c$ and $\operatorname{Pic}(M_\omega^X(c))\subset \operatorname{Pic}(M_\omega^X(c)^s)$.
\item If ${\mathcal E}$ is a flat family of semistable sheaves on $X$ of class $c$ parametrized by $S$, then we have $\phi_{{\mathcal E}}^*(\lambda(v))=\lambda_{{\mathcal E}}(v)$ for all $v\in K_{c,\omega}$ with $\phi_{\mathcal E}:S\rightarrow M^X_\omega(c)$ the classifying morphism.
\item If ${\mathcal E}$ is a flat family of stable sheaves, the statement of (2) holds with $K_{c,\omega}$, $M^X_\omega(c)$ replaced by $K_{c}$, $M^X_\omega(c)^s$.
\end{enumerate}
Since $X$ is a simply connected surface, both the moduli space $M^X_\omega(c)$ and the determinant line bundle $\lambda(c^*)$ only depend on the images of $c$ and $c^*$ in $K(X)_{num}.$ Here $K(X)_{num}$ is the Grothendieck group modulo numerical equivalence. We say that $u,v\in K(X)$ are numerically equivalent if $u-v$ is in the radical of the quadratic form $(u,v)\mapsto \chi(X,u\otimes v)\equiv \chi(u\otimes v)$.
We call $H$ {\it general} with respect to $c$ if all strictly semistable sheaves in $M_H^X(c)$ remain strictly semistable with respect to all ample divisors on $X$ in a neighbourhood of $H$.
Often $\lambda\colon K_{c,\omega}\to \operatorname{Pic}(M_\omega^X(c))$ can be extended. For instance let $c=(2,c_1,c_2)$; then $\lambda(v(L))$ is well-defined over $M^X_\omega(c)$ if $\<L,\xi\rangle=0$ for every class $\xi$ of type $(c_1,d)$ (see \secref{walls}) with $\langle\omega,\xi\rangle=0$. This can be seen from the construction of $\lambda(v(L))$ (e.g.\ see the proof of Theorem 8.1.5 in \cite{HL}).
\subsection{Walls}\label{walls} Denote by ${\mathcal C}\subset H^2(X,{\mathbb R})$ the ample cone of $X$. Then ${\mathcal C}$ has a chamber structure:
For a class $\xi\in H^2(X,{\mathbb Z})\setminus \{0\}$ let $W^\xi:=\big\{ x\in {\mathcal C}\bigm| \langle x,\xi\rangle=0\big\}$. Assume $W^\xi\ne \emptyset$. Let $c_1\in \operatorname{Pic}(X)$ and let $d\in {\mathbb Z}$ be congruent to $-c_1^2$ modulo 4. Then we call $\xi$ a {\it class of type} $(c_1,d)$ and call $W^\xi$ a {\it wall of type} $(c_1,d)$ if the following conditions hold: \begin{enumerate} \item $\xi+c_1$ is divisible by $2$ in $H^2(X,{\mathbb Z})$, \item $d+\xi^2\ge 0$. \end{enumerate} We call $\xi$ a {\it class of type} $(c_1)$ if $\xi+c_1$ is divisible by $2$ in $H^2(X,{\mathbb Z})$. We say that $\omega\in {\mathcal C}$ lies on the wall $W^\xi$ if $\omega\in W^\xi$. The {\it chambers of type} $(c_1,d)$ are the connected components of the complement of the walls of type $(c_1,d)$ in ${\mathcal C}$. Then $M_\omega^X(c_1,d)$ depends only on the chamber of type $(c_1,d)$ containing $\omega$. Let $c\in K(X)$ be the class of ${\mathcal F}\in M_\omega^X(c_1,d)$. It is easy to see that $\omega$ is general with respect to $c$ if and only if $\omega$ does not lie on a wall of type $(c_1,d)$.
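The finiteness built into this definition is easy to make concrete in coordinates. As an illustration only (not part of the text), the following Python sketch enumerates the classes of type $(c_1,d)$ for $X={\mathbb P}^1\times{\mathbb P}^1$, writing $\xi=aF+bG$ in the basis of fibre classes with $F^2=G^2=0$, $\langle F,G\rangle=1$; here $W^\xi\ne\emptyset$ amounts to $\xi^2<0$, and the sample values $c_1=F+G$, $d=6$ (so that $d\equiv -c_1^2 \bmod 4$) are arbitrary.

```python
def classes_of_type(c1, d, bound=50):
    """All xi = a*F + b*G of type (c1, d) on P^1 x P^1 whose wall W^xi is
    nonempty; c1 is given as a pair (c1_a, c1_b) in the basis F, G."""
    c1_a, c1_b = c1
    result = []
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            xi_sq = 2 * a * b                      # xi^2 = 2ab since F^2 = G^2 = 0, <F,G> = 1
            if ((a + c1_a) % 2 == 0 and (b + c1_b) % 2 == 0   # xi + c1 divisible by 2
                    and xi_sq < 0                             # W^xi meets the ample cone
                    and d + xi_sq >= 0):                      # d + xi^2 >= 0
                result.append((a, b))
    return result

# c1 = F + G, d = 6: six classes, i.e. three walls {xi, -xi}
print(sorted(classes_of_type((1, 1), 6)))
# [(-3, 1), (-1, 1), (-1, 3), (1, -3), (1, -1), (3, -1)]
```

The finiteness is visible in the constraints: $\xi^2<0$ and $d+\xi^2\ge 0$ box $ab$ into finitely many values.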
\subsection{$K$-theoretic Donaldson invariants}\label{backDong} We write $M^X_\omega(c_1,d)$ for $M^X_\omega(2,c_1,c_2)$ with $d=4c_2-c_1^2$. Let $c$ be the class of a rank $2$ coherent sheaf with Chern classes $c_1,c_2$, and let $L$ be a line bundle on $X$ such that $\<L,c_1\rangle$ is even. Then we put \begin{equation}\label{eq:uL} v(L):=(1-L^{-1})+\langle\frac{L}{2},L+K_X+c_1\rangle[{\mathcal O}_x]\in K_c.\end{equation} Note that $v(L)$ is independent of $c_2$. Assume that $\omega$ is general with respect to $(2,c_1,c_2)$. Then we denote $\mu(L):= \lambda(v(L))\in \operatorname{Pic}(M^X_\omega(c_1,d))$. The {\it $K$-theoretic Donaldson invariant\/} of $X$ with respect to $L,c_1,d,\omega$ is $\chi(M^X_\omega(c_1,d),\mathcal O(\mu(L)))$.
We recall the following blowup relation for the $K$-theoretic Donaldson invariants from \cite[Sec.1.4]{GNY}. Let $(X,\omega)$ be a polarized rational surface. Let $\widehat X$ be the blowup of $X$ in a point and $E$ the exceptional divisor. In the following we always denote a class in $H^*(X,{\mathbb Z})$ and its pullback by the same letter. Let $Q$ be an open subset of a suitable quot-scheme such that $M^X_\omega(c_1,d)=Q/GL(N)$. Assume that $Q$ is smooth \textup(e.g.\ $\langle -K_X,\omega\rangle>0$\textup). We choose $\epsilon>0$ sufficiently small so that $\omega-\epsilon E$ is ample on $\widehat X$ and there is no class $\xi$ of type $(c_1,d)$ or of type $(c_1+E,d+1)$ on $\widehat X$ with $\langle\xi, \omega\rangle<0<\langle\xi, (\omega-\epsilon E)\rangle.$ In case $c_1=0$ assume $d>4$.
\begin{Lemma} \label{blowsimple} We have \begin{align*} \chi({M}^{\widehat X}_{\omega-\epsilon E}(c_1,d),\mu(L))& =\chi(M^X_\omega(c_1,d),\mu(L)), \\
\chi({M}^{\widehat X}_{\omega-\epsilon E}(c_1+E,d+1),\mu(L))& =\chi(M^X_\omega(c_1,d),\mu(L)) \end{align*} for any line bundle $L$ on $X$ such that $\<L, c_1\rangle$ is even and $\<L,\xi\rangle=0$ for $\xi$ any class of type $(c_1,d)$ on $\widehat X$ with $\langle\omega,\xi\rangle=0$. \end{Lemma}
Following \cite{GY}, we introduce the generating function of the $K$-theoretic Donaldson invariants. \begin{Definition} \label{KdonGen} Let $c_1\in H^2(X,{\mathbb Z})$. Let $\omega$ be ample on $X$ not on a wall of type $(c_1)$. \begin{enumerate} \item If $c_1\not \in 2 H^2(X,{\mathbb Z})$, let \begin{equation}\label{eq:Kdon} \begin{split} \chi_{c_1}^{X,\omega}(L)&:=\sum_{d>0} \chi(M^X_\omega(c_1,d),\mathcal O(\mu(L)))\Lambda^d. \end{split} \end{equation} \item In case $c_1=0$ let $\widehat X$ be the blowup of $X$ in a point. Let $E$ be the exceptional divisor. Let $\epsilon>0$ be sufficiently small so that there is no class $\xi$ of type $(E,d+1)$ on $\widehat X$ with $\langle\xi, \omega\rangle <0 <\langle\xi, (\omega-\epsilon E)\rangle$. We put \begin{equation}\label{eq:Kdon0} \begin{split} \chi_{0}^{X,\omega}(L)&:=\sum_{d>4} \chi(M^X_\omega(0,d),\mathcal O(\mu(L)))\Lambda^d+\Big(\chi(M^{\widehat X}_{\omega-\epsilon E}(E,5) ,\mu(L))+LK_X-\frac{K_X^2+L^2}{2}-1\Big)\Lambda^4. \end{split} \end{equation}
\end{enumerate} \end{Definition}
\begin{Remark}\label{rem:canonical}(\cite[Rem.~1.9]{GNY}) If $H$ is a general polarization, then $\mu(2K_X)$ is a line bundle on $M^X_H(c)$ which coincides with the dualizing sheaf on the locus of stable sheaves $M_H^X(c)^s$. If $\dim (M_H^X(c) \setminus M_H^X(c)^s) \leq \dim M_H^X(c)-2$, then $\omega_{M_H^X(c)}=\mu(2K_X)$. \end{Remark}
Under rather general assumptions the higher cohomology of $\mu(L)$ vanishes. The following follows from \cite[Prop.2.9]{GY} and its proof, which is based on \cite[Sec.1.4]{GNY}.
\begin{Proposition} \label{highvan} Fix $c_1,d$. Let $\omega$ be an ample line bundle on $X$ which is general with respect to $c_1,d$, and satisfies $\langle-K_X,\omega\rangle>0$. Let $L$ be a nef line bundle on $X$ such that $L-2K_X$ is ample. If $c_1$ is not divisible by $2$ in $H^2(X,{\mathbb Z})$ or $d>8$, we have $H^i(M_\omega^X(c_1,d),\mu(L))=0$ for all $i>0$, in particular $$\dim H^0(M_\omega^X(c_1,d),\mu(L))=\chi(M_\omega^X(c_1,d),\mu(L)).$$ \end{Proposition}
\section{Strange duality}\label{strd} \subsection{Review of strange duality} We briefly review the strange duality conjecture for surfaces from \cite{LPst}. The strange duality conjecture was formulated for $X$ a smooth curve in the 1990s (see \cite{Bea} and \cite{Donagi}) and was proved in this case around 2007 (see \cite{Bel1}, \cite{MO1}). For $X$ a surface, there is a formulation for some special cases due to Le Potier (see \cite{LPst} or \cite{Da2}). Let $c,c^*\in K(X)_{num}$ with $c\in K_{c^*}$. Let $H$ be an ample line bundle on $X$ which is both $c$-general and $c^*$-general.
Write ${\mathcal D}_{c,c^*}:=\lambda(c^*)\in \operatorname{Pic}(M^X_H(c))$, ${\mathcal D}_{c^*,c}:=\lambda(c)\in \operatorname{Pic}(M^X_H(c^*))$. Assume that all $H$-semistable sheaves ${\mathcal F}$ on $X$ of class $c$ and all $H$-semistable sheaves ${\mathcal G}$ on $X$ of class $c^*$ satisfy \begin{enumerate} \item $\underline{Tor}_i({\mathcal F},{\mathcal G})=0$ for all $i\ge 1$, \item $H^2(X,{\mathcal F}\otimes {\mathcal G})=0$. \end{enumerate} Both conditions are automatically satisfied if $c$ is not of dimension $0$ and $c^*$ is of dimension $1$ (see \cite[p.9]{LPst}).
Put ${\mathcal D}:={\mathcal D}_{c,c^*}\boxtimes {\mathcal D}_{c^*,c}\in \operatorname{Pic}(M^X_H(c)\times M^X_H(c^*))$. In \cite[Prop.~9]{LPst} a canonical section $\sigma_{c,c^*}$ of ${\mathcal D}$ is constructed, whose zero set is supported on
$$\mathscr{D}:=\big\{([{\mathcal F}],[{\mathcal G}])\in M^X_H(c)\times M^X_H(c^*)\bigm| H^0(X,{\mathcal F}\otimes {\mathcal G})\ne 0\big\}.$$ The element $\sigma_{c,c^*}$ of $H^0(M^X_H(c),{\mathcal D}_{c,c^*})\otimes H^0(M^X_H(c^*),{\mathcal D}_{c^*,c})$, gives a linear map \begin{equation} \label{SDmap} SD_{c,c^*}:H^0(M^X_H(c),{\mathcal D}_{c,c^*})^\vee \to H^0(M^X_H(c^*),{\mathcal D}_{c^*,c}), \end{equation}
called the {\it strange duality map}. Le Potier's strange duality conjecture is then the following. \begin{Conjecture}\label{sdcon} Under the above assumptions $SD_{c,c^*}$ is an isomorphism. \end{Conjecture}
It seems natural to believe that, under more general assumptions than those of \conref{sdcon}, we have the {\it numerical version of strange duality} $\chi(M^X_H(c),{\mathcal D}_{c,c^*}) = \chi(M^X_H(c^*),{\mathcal D}_{c^*,c})$.
\subsection{Interpretation of the main results and conjectures in view of strange duality} In this subsection let
$c=(2,c_1,c_2)$ and $c^*=(0,L,\chi=\langle\frac{L}2,c_1\rangle)$, so that ${\mathcal D}_{c,c^*}=\mu(L)$. The moduli space $M^X_H(c^*)$ is a moduli space of pure dimension $1$ sheaves.
It has a natural projection $\pi:=\pi^{L,c_1}:M^X_H(c^*)\to |L|$, whose fibre over a smooth curve $C$ in $|L|$ is the Jacobian of line bundles of degree $\langle \frac{L}{2},c_1+K_X+L\rangle$ on $C$.
In particular $c^*$ is independent of $c_2$.
In case $c_1=0$ the fibre of $\pi^{L,0}$ over the class of a nonsingular curve $C$ is the Jacobian $J_{g(C)-1}(C)$ of degree $g(C)-1=\frac{1}{2}\deg(K_C)$ line bundles on $C$. In this case we denote $\Theta:=\lambda([{\mathcal O}_X])\in \operatorname{Pic}(M^X_H(c^*))$. The divisor of its restriction to a fibre $J_{g(C)-1}(C)$ is the classical theta divisor, whose support is the set of line bundles of degree $g(C)-1$ on $C$ with a section.
Now let $c_1$ again be arbitrary, and let ${\mathcal O}_X(c_1)$ be the line bundle with first Chern class $c_1$; we denote $\Theta_{2,c_1}:=\lambda([{\mathcal O}_X\oplus {\mathcal O}_X(c_1)])\in\operatorname{Pic}(M^X_H(c^*))$. We also denote $\eta:=\lambda([{\mathcal O}_{x}])\in \operatorname{Pic}(M^X_H(c^*))$ for $x$ a general point of $X$. It is standard that
$\eta=\pi^*({\mathcal O}_{|L|}(1))$, with ${\mathcal O}_{|L|}(1)$ the hyperplane bundle on $|L|$. Thus we see that ${\mathcal D}_{c^*,c}=\lambda(c)=\Theta_{2,c_1}\otimes \pi^*({\mathcal O}_{|L|}(c_2))$; in particular in case $c_1=0$ we have ${\mathcal D}_{c^*,c}=\lambda(c)=\Theta^{\otimes 2}\otimes \pi^*({\mathcal O}_{|L|}(c_2))$.
We use Le Potier's strange duality conjecture and the results and conjectures from the introduction to make conjectures about the pushforwards
$\pi^{L,c_1}_*(\Theta_{2,c_1})$, $\pi^{L,c_1}_!(\Theta_{2,c_1})$. For a Laurent polynomial $f(t):=\sum_{n} a_n t^n\in {\mathbb Z}[t^{-1},t]$ we put $f({\mathcal O}_{|L|}(-1)):=\bigoplus_n {\mathcal O}_{|L|}(-n)^{\oplus a_n}.$
\begin{Conjecture}\label{splitconj} \begin{enumerate} \item If $L$ is sufficiently ample on $X$ then, with $p^{X,c_1}_L$ defined as in \remref{unique}, we have
$\pi^{L,c_1}_{*}(\lambda(\Theta_{2,c_1}))=p^{X,c_1}_L({\mathcal O}_{|L|}(-1))$ and $R^i\pi^{L,c_1}_*(\lambda(\Theta_{2,c_1}))=0$ for $i>0$. In particular $\pi^{L,c_1}_{*}(\lambda(\Theta_{2,c_1}))$ splits as a direct sum of line bundles on
$|L|$. (Note that this implies that $p^{X,c_1}_L$ is a polynomial with nonnegative coefficients, as conjectured in \conref{ratconj}(1).)
\item In particular in the case $X={\mathbb P}^2$, and $d>0$, we get, with the polynomials $p_d(t)$ from \conref{p2con}, that
$$\pi^{dH,0}_{*}(\lambda(\Theta^2))=p_d({\mathcal O}_{|dH|}(-1)), \ \pi^{2dH,H}_{*}(\lambda(\Theta_{2,H}))=p_{2d}({\mathcal O}_{|2dH|}(1))\otimes{\mathcal O}_{|2dH|}(-d^2).$$ \item Under more general assumptions on the line bundle $L$ on $X$, we expect that there is a choice of $p^{X,c_1}_L$ with $P^{X,c_1}_L(\Lambda)=\Lambda^{-c_1^2}p^{X,c_1}_L(\Lambda^4)$, such that
$\pi^{L,c_1}_{!}(\lambda(\Theta_{2,c_1}))=p^{X,c_1}_L({\mathcal O}_{|L|}(-1))$. \end{enumerate} \end{Conjecture} \begin{Remark} \begin{enumerate} \item Assuming part (2) of \conref{splitconj}, \thmref{mainp2} determines $\pi^{dH,0}_{*}(\lambda(\Theta^2))$, $\pi^{dH,H}_{*}(\lambda(\Theta_{2,H}))$ as direct sums of line bundles for $d\le 11$. \item For $X={\mathbb P}^1\times{\mathbb P}^1$, assuming part (1) of \conref{splitconj}, \thmref{P11gen} gives, with the notation from there, for $d\le 7$ that \begin{align*}
\pi^{d(F+G),0}_{*}(\lambda(\Theta^2))&=q^0_d({\mathcal O}_{|d(F+G)|}(-1)),\\
\pi^{d(F+G),F}_{*}(\lambda(\Theta_{2,F}))&=q^F_d({\mathcal O}_{|d(F+G)|}(-1)),\\
\pi^{d(F+G),F+G}_{*}(\lambda(\Theta_{2,F+G}))&=(t^{1/2}q^{F+G}_d(t))|_{t=({\mathcal O}_{|d(F+G)|}(-1))}. \end{align*} \item In \cite{GY} some further generating functions for the $K$-theoretic Donaldson invariants of $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widetilde {\mathbb P}^2$ are computed. From the results there we expect \begin{align*}
\pi^{nF+2G,0}_*(\Theta^{2})&=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}_{ev},\\
\pi^{nF+2G,F}_*(\Theta_{2,F})&=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}_{odd}, \end{align*} where $(\bullet)_{ev}$ and $(\bullet)_{odd}$ denote the parts consisting only of even powers ${\mathcal O}(-2d)$ and odd powers ${\mathcal O}(-2d-1)$, respectively. In particular this would give $$\pi^{nF+2G,0}_*(\Theta^{2})\oplus
\pi^{nF+2G,F}_*(\Theta_{2,F})=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}.$$ \end{enumerate} \end{Remark}
\begin{Remark} We briefly motivate the above conjectures. Assuming strange duality \conref{sdcon}, we have, using also the projection formula, \begin{align*}
H^0(M_\omega^X(c_1,d),\mu(L))^\vee&=H^0(M^X_\omega(c^*), \lambda(c))=H^0(|L|, \pi^{L,c_1}_*(\lambda(c)))\\
&=H^0(|L|, \pi^{L,c_1}_*(\lambda(\Theta_{2,c_1}))\otimes {\mathcal O}_{|L|}(c_2)), \end{align*} and similarly, assuming the numerical version of strange duality above,
$$\chi(M_\omega^X(c_1,d),\mu(L))=\chi(M^X_\omega(c^*),\lambda(c))=\chi\big(|L|,\pi^{L,c_1}_!(\lambda(c))\big)=\chi\big(|L|,\pi^{L,c_1}_!(\lambda(\Theta_{2,c_1}))\otimes {\mathcal O}_{|L|}(c_2)\big).$$
Assume that $H^i(X,L)=0$ for $i>0$, so that $\dim(|L|)=\chi(L)-1$.
Then for $0\le l\le \dim(|L|)$ we have
$$\sum_{n\ge 0} \chi(|L|,{\mathcal O}_{|L|}(-l+n))t^n=\frac{t^l}{(1-t)^{\chi(L)}}.$$ Thus, assuming the numerical part of the strange duality conjecture and part (3) of \conref{splitconj}, we would get \begin{align*}
\chi^{X,\omega}_{c_1}(L)&\equiv\Lambda^{-c_1^2} \sum_{n\ge 0}\chi\big(|L|,\pi^{L,c_1}_!(\Theta_{2,c_1})\otimes {\mathcal O}_{|L|}(n)\big)\Lambda^{4n}\\
&\equiv\Lambda^{-c_1^2} \sum_{n\ge 0}\chi\big(|L|, p^{X,c_1}_L({\mathcal O}_{|L|}(-1))\otimes {\mathcal O}_{|L|}(n)\big)\Lambda^{4n}\\ &=\Lambda^{-c_1^2}\frac{p^{X,c_1}_L(\Lambda^4)}{(1-\Lambda^4)^{\chi(L)}}=\frac{P^{X,c_1}_L(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}. \end{align*} Assuming the strange duality conjecture and part (1) of \conref{splitconj}, we would get the same statement with the left hand side replaced by $\sum_{n\ge 0} \dim H^0(M^X_H(2,c_1,n), \mu(L))\, \Lambda^{4n-c_1^2}$. In other words \conref{splitconj} explains the generating functions of \thmref{mainp2}, \thmref{P11gen} and \conref{ratconj}(1). \end{Remark}
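The Euler characteristic identity $\sum_{n\ge 0}\chi(|L|,{\mathcal O}_{|L|}(-l+n))t^n=t^l/(1-t)^{\chi(L)}$ used in the remark above is just the expansion $\chi({\mathbb P}^N,{\mathcal O}(m))=\binom{m+N}{N}$ for $m\ge -N$, with $N=\chi(L)-1$. As an illustration only, it can be checked in Python by expanding $t^l/(1-t)^{N+1}$ directly (the sample values $N=4$, $l=2$ are arbitrary):

```python
from math import comb

def lhs_coeffs(N, l, terms):
    # coefficients of sum_{n>=0} chi(P^N, O(n-l)) t^n, using
    # chi(P^N, O(m)) = binom(m+N, N) for m >= -N (comb gives 0 for 0 <= m+N < N)
    return [comb(n - l + N, N) for n in range(terms)]

def rhs_coeffs(N, l, terms):
    # coefficients of t^l / (1-t)^(N+1): start from the geometric series
    # and multiply by 1/(1-t) a further N times via running partial sums
    series = [1] * terms                  # 1/(1-t)
    for _ in range(N):
        acc = 0
        for i in range(terms):
            acc += series[i]
            series[i] = acc               # one more factor of 1/(1-t)
    return [series[n - l] if n >= l else 0 for n in range(terms)]

assert lhs_coeffs(4, 2, 12) == rhs_coeffs(4, 2, 12)   # chi(L) = N + 1 = 5
```

The terms with $0\le n<l$ vanish on both sides, since then $-N\le n-l<0$ and all cohomology of ${\mathcal O}_{{\mathbb P}^N}(n-l)$ vanishes.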
\begin{Remark}
Assuming \conref{splitconj} and the strange duality conjecture, we see that $\mathop{{\rm rk}}(\pi_!(\Theta_{2,c_1}))=p^{X,c_1}_L(1)$. As mentioned above, the fibre of $\pi^{L,c_1}:M^X_H(c^*)\to |L|$ over the point corresponding to a smooth curve $C$ in $|L|$ is the Jacobian $J_d(C)$ of line bundles of degree $d=\langle \frac{L}{2},c_1+K_X+L\rangle$ on $C$, and the restriction of $\Theta_{2,c_1}$ to it is a polarisation of type $(2,\ldots,2)$. Thus by the Riemann-Roch theorem we have $\chi(J_d(C),\Theta_{2,c_1}|_{J_d(C)})=2^{g(C)}$. Thus \conref{splitconj} implies that $\pi_!(\Theta_{2,c_1})$ has rank $2^{g(C)}$; therefore, assuming the strange duality conjecture, we get $p^{X,c_1}_L(1)=2^{g(C)}$, as predicted in \conref{ratconj} and seen e.g.\ in \thmref{mainp2} and \thmref{P11gen} for many $L$ in the case of ${\mathbb P}^2$ and ${\mathbb P}^1\times{\mathbb P}^1$. \end{Remark}
\begin{Remark} Let $L$ again be sufficiently ample on $X$. Assuming the strange duality conjecture \conref{sdcon} and part (1) of \conref{splitconj} we get that part (3) of \conref{ratconj} gives the conjectural duality
$$\pi^{L,c_1}_*(\Theta_{2,c_1})=(\pi^{L,L+K_X-c_1}_*(\Theta_{2,L+K_X-c_1}))^\vee\otimes {\mathcal O}_{|L|}(-\<L,L+K_X\rangle/2-\<c_1,c_1-K_X\rangle/2+\<L,c_1\rangle/2-2).$$ In particular in case $c_1=0$,
$$\pi^{L,0}_*(\Theta^{\otimes 2})=(\pi^{L,L+K_X}_*(\Theta_{2,L+K_X}))^\vee\otimes {\mathcal O}_{|L|}(-\<L,L+K_X\rangle/2-2).$$ In the case of $X={\mathbb P}^2$ we should have for $d>0$ that \begin{align*}
\pi^{2dH,0}_*(\Theta^{\otimes 2})&=(\pi^{2dH,H}_*(\Theta_{2,H}))^\vee\otimes {\mathcal O}_{|2dH|}(-d^2),\\
\pi^{(2d+1)H,0}_*(\Theta^{\otimes 2})&=(\pi^{(2d+1)H,0}_*(\Theta^{\otimes 2}))^\vee\otimes {\mathcal O}_{|(2d+1)H|}(-d(d+1)). \end{align*} Similarly we conjecture for $X={\mathbb P}^1\times{\mathbb P}^1$ e.g. that for $d>0$
$$\pi^{2d(F+G),0}_*(\Theta^{\otimes 2})=(\pi^{2d(F+G),0}_*(\Theta^{\otimes 2}))^\vee\otimes {\mathcal O}_{|2d(F+G)|}(-2d^2).$$ \end{Remark}
\begin{comment} \begin{Conjecture} Let $X$ be a projective surface with $-K_X$ ample and let $L,\omega$ be very ample line bundles on $X$. Fix $c_1\in H^2(X,{\mathbb Z})$. \begin{enumerate} \item $\pi^{L,c_1}_*(\Theta^{2}_{c_1})$ is a direct sum of line bundles on ${\mathbb P}^N$, more precisely $\pi^{L,c_1}_*(\Theta^{2}_{c_1})=P^X_{c_1,L}({\mathcal O}_{{\mathbb P}^N}(-1))$. \item Write $N:=L(L-K_X)/2$, and $n:=2-\frac{1}{2}(\<c_1,c_1-K_X\rangle+\<L,L+c_1-K_X\rangle)\in{\mathbb Z}$. $$ \pi^{L,c_1}_*(\Theta^{2}_{c_1})=\pi^{L,L+K_X-c_1}_*(\Theta^2_{L+K_X-c_1})^\vee\otimes {\mathcal O}_{{\mathbb P}^N}(n)$$ \item $c_1(\pi^{L,c_1}_*(\Theta^{2}_{c_1}))$. \end{enumerate} \end{Conjecture} \end{comment}
\section{Wallcrossing formula} \subsection{Theta functions and modular forms}\label{thetamod} We start by reviewing results and notations from \cite{GNY}, \cite[Sec.~3.1]{GY}.
For $\tau\in {\mathcal H}=\big\{\tau\in {\mathbb C}\bigm| \Im(\tau)>0\big\}$ put $q=e^{\pi i\tau/4}$ and for $h\in {\mathbb C}$ put $y=e^{h/2}$. Note that the notation is not standard. Recall the $4$ Jacobi theta functions: \begin{equation} \begin{split}\label{theta} \theta_1(h)&:=\sum_{n\in {\mathbb Z}} i^{2n-1} q^{(2n+1)^2} y^{2n+1}=-iq(y-y^{-1})\prod_{n>0}(1-q^{8n})(1-q^{8n}y^2)(1-q^{8n}y^{-2}),\\ \theta_2(h):&=\sum_{n\in {\mathbb Z}} q^{(2n+1)^2} y^{2n+1}=-q(y+y^{-1})\prod_{n>0}(1-q^{8n})(1+q^{8n}y^2)(1+q^{8n}y^{-2}),\\ \theta_3(h)&:=\sum_{n\in {\mathbb Z}} q^{(2n)^2} y^{2n},\qquad \theta_4(h):=\sum_{n\in {\mathbb Z}} i^{2n}q^{(2n)^2} y^{2n}. \end{split} \end{equation}
We usually do not write the argument $\tau$. The conventions are essentially the same as in \cite{WW} and in \cite{Ak}, where the $\theta_i$ for $i\le 3$ are denoted $\vartheta_i$ and $\theta_4$ is denoted $\vartheta_0$. Denote \begin{equation}\label{thetatilde} \begin{split}\theta_i&:=\theta_i(0), \quad \widetilde\theta_i(h):=\frac{\theta_i(h)}{\theta_i}, \quad i=2,3,4;\qquad \widetilde\theta_1(h):=\frac{\theta_1(h)}{\theta_4},\\ u&:=-\frac{\theta_2^2}{\theta_3^2}-\frac{\theta_3^2}{\theta_2^2}=-\frac{1}{4}q^{-2} - 5q^2 + \frac{31}{2}q^6 - 54q^{10}+O(q^{14}), \end{split} \end{equation}
and two Jacobi functions, i.e.\ Jacobi forms of weight $0$ and index $0$,
$\Lambda:=\frac{\theta_1(h)}{\theta_4(h)}$, $M:=2\frac{\widetilde \theta_2(h)\widetilde \theta_3(h)}{\widetilde \theta_4(h)^2}$, which satisfy the relation
\begin{equation}\label{MuL}
M=2\sqrt{1+u\Lambda^2+\Lambda^4},\end{equation}
and the formulas
\begin{equation}\label{dLdh}\frac{\partial\Lambda}{\partial h}=\frac{\theta_2\theta_3}{4i}M, \quad
h=\frac{2i}{\theta_2\theta_3}\int_{0}^\Lambda\frac{dx}{\sqrt{1+ux^2+x^4}}.
\end{equation}
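The relation \eqref{MuL} between $M$, $u$ and $\Lambda$ can be checked numerically from the truncated series \eqref{theta}. The following Python sketch is only a sanity check of the conventions, not part of the text; the test point $\tau=1.3i$, $h=0.4+0.1i$ is arbitrary (any point with $\Im(\tau)>0$ works).

```python
import cmath

def jtheta(j, h, tau, N=40):
    # truncated theta series in the conventions q = exp(pi*i*tau/4), y = exp(h/2)
    q = cmath.exp(cmath.pi * 1j * tau / 4)
    y = cmath.exp(h / 2)
    if j == 1:
        return sum(1j**(2*n - 1) * q**((2*n + 1)**2) * y**(2*n + 1)
                   for n in range(-N, N + 1))
    if j == 2:
        return sum(q**((2*n + 1)**2) * y**(2*n + 1) for n in range(-N, N + 1))
    if j == 3:
        return sum(q**(4*n*n) * y**(2*n) for n in range(-N, N + 1))
    return sum((-1)**n * q**(4*n*n) * y**(2*n) for n in range(-N, N + 1))

tau, h = 1.3j, 0.4 + 0.1j                      # arbitrary point with Im(tau) > 0
t2, t3, t4 = (jtheta(j, 0, tau) for j in (2, 3, 4))
u = -t2**2 / t3**2 - t3**2 / t2**2
Lam = jtheta(1, h, tau) / jtheta(4, h, tau)    # the Jacobi function Lambda
M = 2 * (jtheta(2, h, tau) / t2) * (jtheta(3, h, tau) / t3) / (jtheta(4, h, tau) / t4)**2

assert abs(t3**4 - t2**4 - t4**4) < 1e-9       # Jacobi's identity, as a warm-up
assert abs(M**2 - 4 * (1 + u * Lam**2 + Lam**4)) < 1e-9
```

Checking $M^2=4(1+u\Lambda^2+\Lambda^4)$ rather than \eqref{MuL} itself avoids choosing a branch of the square root.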
In \cite[Sec.~3.1]{GY} it is shown that $h\in iq^{-1}\Lambda{\mathcal R}$, where ${\mathcal R}:={\mathbb Q}[[q^{-2}\Lambda^2,q^4]]$.
A function $F(\tau,h)$ can via formula \eqref{dLdh} also be viewed as a function
of $\tau$ and $\Lambda$.
In this case, viewing $\tau$ and $\Lambda$ as the independent variables we define
$F' :=\frac{4}{\pi i} \frac{\partial F}{\partial \tau}=q\frac{\partial F}{\partial q},
\quad F^*:=\Lambda\frac{\partial F}{\partial \Lambda},$ and get \begin{equation} \label{hstar} h^*=\frac{4i\Lambda}{\theta_2\theta_3 M}, \quad u'=\frac{2\theta_4^8}{\theta_2^2\theta_3^2}. \end{equation}
For future use we record the following standard formulas for the behaviour of the theta functions under translation. \begin{align}\label{T4trans} \theta_4(h+2\pi i)&=\theta_4(h),\quad \theta_4(h+ 2\pi i \tau)=-q^{-4}y^{-2}\theta_4(h),\quad\theta_4(h+ \pi i \tau)= i q^{-1}y^{-1}\theta_1(h)\\ \label{T1trans} \theta_1(h+2\pi i)&=-\theta_1(h),\quad \theta_1(h+ 2\pi i\tau)=-q^{-4}y^{-2}\theta_1(h),\quad \theta_1(h+ \pi i\tau)= i q^{-1}y^{-1}\theta_4(h),\\ \label{T2trans}\theta_2(h+\pi i \tau)&=q^{-1}y^{-1}\theta_3(h),\quad \theta_3(h+\pi i \tau)=q^{-1}y^{-1}\theta_2(h).\end{align} (see e.g. \cite[Table VIII, p.~202]{Ak}).
\begin{Lemma}\label{thetaadd} Let $a$, $b\in {\mathbb Z}$. Then \begin{enumerate} \item $\theta_4(h)=(-1)^bq^{4b^2}y^{2b}\theta_4(h+2\pi i b\tau)$, $\quad \theta_4(h+2\pi i a)=\theta_4(h)$, \item $\theta_1(h)=(-1)^bq^{4b^2}y^{2b}\theta_1(h+2\pi i b\tau)$, $\quad \theta_1(h+2\pi i a)=(-1)^a\theta_1(h)$, \item $\theta_4(h)=e^{\pi i (b-\frac{1}{2})}q^{(2b+1)^2}y^{2b+1}\theta_1(h+2\pi i (b+\frac{1}{2})\tau)$, \item $\theta_1(h)=e^{\pi i (b-\frac{1}{2})}q^{(2b+1)^2}y^{2b+1}\theta_4(h+2\pi i (b+\frac{1}{2})\tau)$. \end{enumerate} \end{Lemma} \begin{proof} All these formulas follow by straightforward induction from \eqref{T4trans} and \eqref{T1trans}. As an illustration we check (1) and (3). The formula $\theta_4(h+ 2\pi i \tau)=-q^{-4}y^{-2}\theta_4(h)$ gives by induction \begin{align*}\theta_4(h+2\pi i b\tau)&=-q^{-4}e^{-(h+2\pi i(b-1)\tau)}\theta_4(h+2\pi i (b-1)\tau)\\ &= -q^{-8b+4}y^{-2}(-1)^{-(b-1)}q^{-4(b-1)^2}y^{-(2b-2)}\theta_4(h)=(-1)^{-b}q^{-(2b)^2}y^{-2b}\theta_4(h),\end{align*} and (1) follows. Similarly \begin{align*}\theta_4(h+2\pi i (b+1/2)\tau)&=iq^{-1}e^{-h/2-\pi i b\tau}\theta_1(h+2\pi ib\tau)=iq^{-4b-1}y^{-1}(-1)^{-b}q^{-(2b)^2}y^{-2b}\theta_1(h)\\ &=e^{-\pi i (b-\frac{1}{2})}q^{-(2b+1)^2}y^{-(2b+1)}\theta_1(h),\end{align*} and (3) follows. \end{proof}
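The translation formulas \eqref{T4trans} and \eqref{T1trans} can likewise be checked numerically from the series \eqref{theta}. The following Python sketch is only an independent sanity check of the conventions; the test point is again arbitrary.

```python
import cmath

tau = 1.2j                          # arbitrary, Im(tau) > 0
h = 0.3 + 0.25j
q = cmath.exp(cmath.pi * 1j * tau / 4)
y = cmath.exp(h / 2)

def th1(z, N=40):
    # theta_1(z) = sum_n i^{2n-1} q^{(2n+1)^2} e^{(2n+1)z/2}
    w = cmath.exp(z / 2)
    return sum(1j**(2*n - 1) * q**((2*n + 1)**2) * w**(2*n + 1) for n in range(-N, N + 1))

def th4(z, N=40):
    # theta_4(z) = sum_n (-1)^n q^{(2n)^2} e^{2nz/2}
    w = cmath.exp(z / 2)
    return sum((-1)**n * q**(4*n*n) * w**(2*n) for n in range(-N, N + 1))

# theta_4(h + pi*i*tau) = i q^{-1} y^{-1} theta_1(h)
assert abs(th4(h + cmath.pi * 1j * tau) - 1j * th1(h) / (q * y)) < 1e-8
# theta_4(h + 2*pi*i*tau) = -q^{-4} y^{-2} theta_4(h)
assert abs(th4(h + 2 * cmath.pi * 1j * tau) + th4(h) / (q**4 * y**2)) < 1e-8
# theta_1(h + 2*pi*i) = -theta_1(h)
assert abs(th1(h + 2 * cmath.pi * 1j) + th1(h)) < 1e-8
```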
\subsection{Wallcrossing formula}\label{wallcro} Now we review the wallcrossing formula from \cite{GNY}, \cite{GY}, and generalize it slightly. Let $\sigma(X)$ be the signature of $X$.
\begin{Definition}\label{wallcrossterm} Let $r\ge 0$, let $\xi\in H^2(X,{\mathbb Z})$ with $\xi^2< 0$. Let $L$ be a line bundle on $X$. We put $$\Delta_\xi^X(L,P^r):=2 i^{\langle\xi, K_X\rangle} \Lambda^2 q^{-\xi^2} y^{\langle\xi,(L-K_X)\rangle}\widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*M^r,$$ and put $\Delta_\xi^X(L):=\Delta_\xi^X(L,P^0)$.
By the results of the previous section it can be developed as a power series $$\Delta_\xi^X(L,P^r)=\sum_{d\ge 0} f_d(\tau)\Lambda^d\in {\mathbb C}((q))[[\Lambda]],$$ whose coefficients $f_d(\tau)$ are Laurent series in $q$. If $\langle\xi,L\rangle\equiv r \mod 2$, the {\it wallcrossing term} is defined as $$\delta_{\xi}^X(L,P^r):=\sum_{d\ge 0} \delta_{\xi,d}^X(L,P^r)\Lambda^d\in {\mathbb Q}[[\Lambda]], $$ with $$\delta_{\xi,d}^X(L,P^r)=\mathop{\text{\rm Coeff}}_{q^0}[f_d(\tau)].$$ Again we write $\delta_{\xi,d}^X(L):=\delta_{\xi,d}^X(L,P^0)$ and $\delta_{\xi}^X(L):=\delta_{\xi}^X(L,P^0)$.
The wallcrossing terms $\delta_{\xi}^X(L)=\delta_{\xi}^X(L,P^0)$ were already introduced in \cite{GNY} and used in \cite{GY}. As we will recall in a moment, they compute the change of the $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L)$ when $\omega$ crosses a wall. Later we will introduce $K$-theoretic Donaldson invariants with point class $\chi^{X,\omega}_{c_1}(L,P^r)$, whose wallcrossing is computed by $\delta_{\xi}^X(L,P^r)$. Intuitively we want to think of $r$ as the power of a $K$-theoretic point class $\mathcal P$. \end{Definition}
\begin{Remark}\label{delb} \begin{enumerate} \item $\delta_{\xi,d}^X(L,P^r)=0$ unless $d\equiv -\xi^2\mod 4$. \item In the definition of $\delta_\xi^X(L,P^r)$ we can replace $\Delta_{\xi}^X(L,P^r)$ by \begin{equation} \begin{split}\label{Delbar} &\overline \Delta_{\xi}^X(L,P^r):= \frac{1}{2}(\Delta_{\xi}^X(L,P^r)-\Delta_{-\xi}^X(L,P^r))\\ &\ =M^{r} i^{\langle\xi, K_X\rangle} \Lambda^2 q^{-\xi^2} \big(y^{\langle\xi,(L-K_X)\rangle}-(-1)^{\xi^2}y^{-\langle\xi,(L-K_X)\rangle}\big)\widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*. \end{split} \end{equation} \end{enumerate} \end{Remark} \begin{proof} (1) As $h\in {\mathbb C}[[q^{-1}\Lambda,q^4]]$, we also have $h^*, y,\widetilde \theta_4(h),M\in {\mathbb C}[[q^{-1}\Lambda,q^4]]$. Finally $u,u'\in q^{-2}{\mathbb Q}[[q^4]]$. It follows that $\Delta_{\xi}^X(L,P^r)\in q^{-\xi^2}{\mathbb C}[[q^{-1}\Lambda,q^4]]$. Writing $\Delta_{\xi}^X(L,P^r)=\sum_{d} f_{d,r}(\tau)\Lambda^d$, we see that $\mathop{\text{\rm Coeff}}_{q^0}[f_{d,r}(\tau)]=0$ unless $d\equiv -\xi^2\mod 4$. (2) Note that $\widetilde \theta_4(h)$ is even in $\Lambda$ and $h^*$ is odd in $\Lambda$, thus $\overline \Delta_{\xi}^X(L,P^r)= \sum_{d\equiv -\xi^2 (2) } f_{d,r}(\tau)\Lambda^d$, and the claim follows by (1). \end{proof}
The main result of \cite{GNY} is the following (see also \cite{GY}).
\begin{Theorem}\label{wallcr} Let $H_1$, $H_2$ be ample divisors on $X$, assume that $\<H_1,K_X\rangle<0$, $\<H_2,K_X\rangle<0$, and that $H_1$, $H_2$ do not lie on a wall of type $(c_1,d)$. Then \begin{align*} \chi(M^X_{H_1}(c_1,d),\mu(L))-\chi(M^X_{H_2}(c_1,d),\mu(L))&=\sum_{\xi}\delta^X_{\xi,d}(L), \end{align*} where $\xi$ runs through all classes of type $(c_1,d)$ with $\langle\xi, H_1\rangle>0 >\langle\xi, H_2\rangle$. \end{Theorem} Note that the condition $\<H_1,K_X\rangle<0$, $\<H_2,K_X\rangle<0$ implies that all the classes of type $(c_1,d)$ with $\langle\xi, H_1\rangle>0 >\langle\xi ,H_2\rangle$ are good in the sense of \cite{GNY}, so the wallcrossing formula there applies. Let $c_1\in H^2(X,{\mathbb Z})$. Let $H_1,H_2$ be ample on $X$ and assume they do not lie on a wall of type $(c_1)$. Then it follows that $$\chi^{X,H_1}_{c_1}(L)-\chi^{X,H_2}_{c_1}(L)=\sum_{\xi}\delta^X_\xi(L),$$ where $\xi$ runs through all classes in $c_1+2H^2(X,{\mathbb Z})$ with $\langle\xi ,H_1\rangle >0>\langle\xi, H_2\rangle$.
\subsection{Polynomiality and vanishing of the wallcrossing} By definition the wallcrossing terms $\delta_\xi^X(L,P^r)$ are power series in $\Lambda$. We now show that they are always polynomials, modifying the proof of \cite[Thm.~3.19]{GY}. We have seen above that $h\in iq^{-1}\Lambda{\mathbb Q}[[q^{-2}\Lambda^2,q^4]]$, and thus $y=e^{h/2}\in {\mathbb Q}[[iq^{-1}\Lambda,q^4]]$. \begin{Lemma} \label{qpow} (\cite[Lem.~3.18]{GY}) \begin{enumerate} \item $\sinh(h/2)=\frac{1}{2}(y-y^{-1})\in iq^{-1}\Lambda{\mathcal R}$, $\frac{1}{\sinh(h/2)}\in iq\Lambda^{-1}{\mathcal R}$. \item For all integers $n$ we have \begin{align*}
\sinh((2n+1)h/2)&\in i{\mathbb Q}[q^{-1}\Lambda]_{|2n+1|}{\mathcal R},\quad
\cosh(nh)\in {\mathbb Q}[q^{-2}\Lambda^2]_{|n|} {\mathcal R},\\
\sinh(nh)h^*&\in {\mathbb Q}[q^{-2}\Lambda^2]_{|n|}{\mathcal R},
\quad \cosh((2n+1)h/2)h^*\in i {\mathbb Q}[q^{-1}\Lambda]_{|2n+1|} {\mathcal R}. \end{align*} \item $\widetilde \theta_4(h)\in {\mathcal R} $, with $\widetilde \theta_4(h)=1+q^2\Lambda^2+O(q^4)$. \end{enumerate} \end{Lemma}
\begin{Lemma}\label{vanwall} Let $r\in {\mathbb Z}_{\ge 0}$, let $\xi\in H^2(X,{\mathbb Z})$ with $\xi^2<0$, and let $L$ be a line bundle on $X$ with $\langle\xi,L\rangle\equiv r\mod 2$. \begin{enumerate} \item
$\delta_{\xi,d}^X(L,P^r)=0$ unless $-\xi^2\le d\le \xi^2+2|\langle\xi,L-K_X\rangle|+2r+4$. In particular $\delta_{\xi}^X(L,P^r)\in {\mathbb Q}[\Lambda]$.
\item $\delta_\xi^X(L,P^r)=0$ unless $-\xi^2\le |\langle\xi,L-K_X\rangle|+r+2$. (Recall that by definition $\xi^2<0$). \end{enumerate} \end{Lemma}
\begin{proof} Assume first that $r=2l$ is even. Let $N:=\langle\xi,L-K_X\rangle.$ Then it is shown in the proof of \cite[Thm.~3.19]{GY} that
$\overline \Delta_\xi^X(L)\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+2}{\mathcal R}.$ On the other hand we note that $M^2=4(1+u\Lambda^2+\Lambda^4)\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathcal R}.$ Putting this together we get
$$\Delta_\xi^X(L,P^{r})\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$$
Now assume that $r=2l+1$ is odd. If $N$ is even, then by the condition that $\<L,\xi\rangle$ is odd, we get $(-1)^{\xi^2}=-(-1)^{N}$, and therefore $$\overline \Delta_\xi^X(L,P^r)=2q^{-\xi^2}M^ri^{\langle\xi,K_X\rangle}\Lambda^2\cosh(Nh/2)h^* \widetilde \theta_4^{(L-K_X)^2}\theta_4^{\sigma(X)} u'.$$
By \eqref{hstar} we get $h^*M=\frac{4i\Lambda}{\theta_2\theta_3}\in i\Lambda q^{-1}{\mathbb Q}[q^4]$. Thus by Lemma \ref{qpow} we get $\cosh(Nh/2)h^*M\in i{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+1}{\mathcal R}.$
Using also that $\langle\xi, K_X\rangle\equiv \xi^2\equiv 1\mod 2$, that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathcal R}$ and $\Lambda^2u'\in q^{-2}\Lambda^2{\mathcal R}$, we get again $\overline \Delta_\xi^X(L,P^r)\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$ Finally, if $N$ is odd, a similar argument shows that $$\overline \Delta_\xi^X(L,P^r)=2q^{-\xi^2}M^ri^{\langle\xi,K_X\rangle}\Lambda^2\sinh(Nh/2)h^*
\widetilde \theta_4^{(L-K_X)^2}\theta_4^{\sigma(X)} u'\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$$
Therefore we have in all cases that
$\delta_{\xi,d}^X(L,P^r)= 0$ unless $-\xi^2-\min(d,2|N|+2r+4-d)\le 0$, i.e. unless
$-\xi^2\le d\le \xi^2+2|N|+2r+4$. In particular $\delta_{\xi}^X(L,P^r)=0$ unless
$-\xi^2\le\xi^2+2|N|+2r+4$, i.e. unless $-\xi^2\le |N|+r+2$.
\end{proof}
\begin{Remark}\label{c1d} We note that this implies that for $\xi$ a class of type $(c_1)$, we have $\delta_{\xi,d}^X(L)=0$ for all $L$ unless $\xi$ is a class of type $(c_1,d)$. \end{Remark}
\section{Indefinite theta functions, vanishing, invariants with point class}
We want to study the $K$-theoretic Donaldson invariants for polarizations on the boundary of the ample cone. Let $F\in H^2(X,{\mathbb Z})$ be the class of an effective divisor with $F^2=0$ such that $F$ is nef, i.e.\ $\<F,C\rangle\ge 0$ for every effective curve $C$ in $X$. Then $F$ is a limit of ample classes. Let $c_1\in H^2(X,{\mathbb Z})$ be such that $\<c_1,F\rangle$ is odd. Fix $d\in {\mathbb Z}$ with $d\equiv -c_1^2 \mod 4$. Let $\omega$ be ample on $X$. Then for $n>0$ sufficiently large $nF+ \omega$ is ample on $X$ and there is no class $\xi$ of type $(c_1,d)$ with $\langle\xi, (nF+ \omega)\rangle>0> \langle\xi, F\rangle$. Let $L\in \operatorname{Pic}(X)$ and $r\in{\mathbb Z}_{\ge 0}$ with $\<c_1,L\rangle$ even. Thus we define for $n$ sufficiently large \begin{align*}M_F^X(c_1,d)&:=M_{nF+\omega}^X(c_1,d), \\ \chi(M_F^X(c_1,d),\mu(L))&:=\chi(M_{nF+\omega}^X(c_1,d),\mu(L)),\\ \chi^{X,F}_{c_1}(L)&:=\sum_{d\ge 0} \chi(M_F^X(c_1,d),\mu(L))\Lambda^d.
\end{align*}
We use the following standard fact. \begin{Remark}\label{vanbound} Let $X$ be a simply connected algebraic surface, and let $\pi:X\to {\mathbb P}^1$ be a morphism whose general fibre is isomorphic to ${\mathbb P}^1$. Let $F\in H^2(X,{\mathbb Z})$ be the class of a fibre. Then $F$ is nef. Assume that $\<c_1,F\rangle$ is odd. Then $M_F^X(c_1,d)=\emptyset$ for all $d$. Thus $\chi(M_F^X(c_1,d),\mu(L))=0$ for all $d\ge 0$. Thus if $\omega$ is ample on $X$ and does not lie on a wall of type $(c_1)$, then $$\chi^{X,\omega}_{c_1}(L)=\sum_{\langle\omega,\xi\rangle>0>\langle\xi,F\rangle} \delta_{\xi}^X(L),$$ where the sum is over all classes $\xi$ of type $(c_1)$ with $\langle\omega,\xi\rangle>0>\langle\xi,F\rangle$. \end{Remark}
\subsection{Theta functions for indefinite lattices}
We briefly review a few facts about theta functions for indefinite lattices of type $(r-1,1)$ introduced in \cite{GZ}. More can be found in \cite{GZ}, \cite{GY}. For us a {\it lattice} is a free ${\mathbb Z}$-module $\Gamma$ together with a quadratic form $Q:\Gamma\to \frac{1}{2}{\mathbb Z}$, such that the associated bilinear form $x\cdot y:=Q(x+y)-Q(x)-Q(y)$ is nondegenerate and ${\mathbb Z}$-valued. We denote the extension of the quadratic and bilinear form to $\Gamma_{\mathbb R}:=\Gamma\otimes_{{\mathbb Z}} {\mathbb R}$ and $\Gamma_{\mathbb C}:=\Gamma\otimes_{{\mathbb Z}} {\mathbb C}$ by the same letters.
We will consider the case that $\Gamma$ is $H^2(X,{\mathbb Z})$ for a rational surface $X$ with the {\it negative} of the intersection form. Thus for $\alpha,\beta\in H^2(X,{\mathbb Z})$ we have $Q(\alpha)=-\frac{\alpha^2}{2}$, $\alpha\cdot \beta=-\langle\alpha,\beta\rangle$. Now let $\Gamma$ be a lattice of rank $r$. Denote by $M_\Gamma$ the set of meromorphic maps $f:\Gamma_{\mathbb C}\times {\mathcal H}\to {\mathbb C}$. For $A=\left(\begin{matrix} a&b\\c&d\end{matrix}\right)\in Sl(2,{\mathbb Z})$, we define a map
$|_kA:M_\Gamma\to M_\Gamma$ by
$$f|_{k}A(x,\tau):=(c\tau+d)^{-k}\exp\left(-2\pi i\frac{cQ(x)}{c\tau+d}\right) f\left(\frac{x}{c\tau+d},\frac{a\tau+b}{c\tau+d}\right).$$
Then $|_kA$ defines an action of $Sl(2,{\mathbb Z})$ on $M_\Gamma$.
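That $|_k$ is indeed an action can be confirmed by a direct numerical experiment. The following sketch is an illustration only (the rank-one lattice $Q(x)=x^2/2$, the test function $f$, and the sample point are arbitrary choices, not taken from the text); it checks $(f|_kA)|_kB=f|_k(AB)$ for the generators $S$, $T$.

```python
import cmath

def Q(x):
    # toy rank-one lattice: Q(x) = x^2 / 2 (arbitrary choice for the check)
    return x * x / 2

def slash(f, k, A):
    """Return f|_k A for A = (a, b, c, d) in SL(2, Z)."""
    a, b, c, d = A
    def g(x, tau):
        j = c * tau + d
        return j**(-k) * cmath.exp(-2j * cmath.pi * c * Q(x) / j) \
               * f(x / j, (a * tau + b) / j)
    return g

def matmul(A, B):
    a, b, c, d = A
    ap, bp, cp, dp = B
    return (a*ap + b*cp, a*bp + b*dp, c*ap + d*cp, c*bp + d*dp)

# arbitrary smooth test function on Gamma_C x H
f = lambda x, tau: cmath.exp(2j * cmath.pi * tau) + x**3

S = (0, -1, 1, 0)
T = (1, 1, 0, 1)
x0, tau0 = 0.3 + 0.2j, 0.1 + 1.4j

# (f|S)|T == f|(ST) and (f|T)|S == f|(TS) at the sample point
lhs = slash(slash(f, 1, S), 1, T)(x0, tau0)
rhs = slash(f, 1, matmul(S, T))(x0, tau0)
assert abs(lhs - rhs) < 1e-10
assert abs(slash(slash(f, 1, T), 1, S)(x0, tau0)
           - slash(f, 1, matmul(T, S))(x0, tau0)) < 1e-10
```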
We denote
\begin{align*}S_\Gamma&:=\big\{ f\in \Gamma\bigm| f \hbox{ primitive}, Q(f)=0,\ f\cdot h <0\big\},\
C_\Gamma:=\big\{ m\in \Gamma_{\mathbb R}\bigm| Q(m)<0, \ m\cdot h<0\big\}. \end{align*} For $f\in S_\Gamma$ put
$D(f):=\big\{(\tau,x)\in {\mathcal H}\times \Gamma_{\mathbb C}\bigm| 0< \Im(f\cdot x)<\Im(\tau)/2\big\},$ and for $h\in C_\Gamma$ put $D(h)={\mathcal H}\times \Gamma_{\mathbb C}$. For $t\in {\mathbb R}$ denote $$\mu(t):=\begin{cases} 1& t\ge 0, \\0 & t<0.\end{cases}$$ Let $c,b\in \Gamma$. Let $f,g\in S_\Gamma\cup C_\Gamma$. Then for $(\tau,x)\in D(f)\cap D(g)$ define $$\Theta^{f,g}_{\Gamma,c,b}(\tau,x):=\sum_{\xi\in \Gamma+c/2}(\mu(\xi\cdot f)-\mu(\xi\cdot g)) e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi\cdot(x+b/2)}.$$ Let $T:=\left(\begin{matrix} 1 & 1\\0&1\end{matrix}\right)$, $S:=\left(\begin{matrix} 0 & -1\\1&0\end{matrix}\right)\in Sl(2,{\mathbb Z})$.
\begin{Theorem}\label{thetaprop} \begin{enumerate} \item For $f,g\in S_\Gamma$ the function $\Theta^{f,g}_{X,c,b}(\tau,x)$ has a meromorphic continuation to ${\mathcal H}\times \Gamma_{\mathbb C}$. \item For
$|\Im(f\cdot x)/\Im(\tau)|<1/2$ and $|\Im(g\cdot x)/\Im(\tau)|<1/2$ it has a Fourier development \begin{align*} &\Theta_{X,c,b}^{f,g}(x,\tau)=\frac{1}{1-e^{2\pi i f\cdot (x+b/2)}} \sum_{\substack{\xi\cdot f=0\\ f\cdot g\le\xi\cdot g<0}}e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi \cdot (x+b/2)}\\ & -\frac{1}{1-e^{2\pi i g\cdot (x+b/2)}}\sum_{\substack{\xi\cdot g=0\\ f\cdot g \le \xi \cdot f<0 }}e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi \cdot(x+b/2)}+ \sum_{\xi f>0>\xi g} e^{2\pi i \tau Q(\xi)}\big(e^{2\pi i \xi \cdot(x+b/2)}- e^{-2\pi i \xi \cdot(x+b/2)}\big), \end{align*} where the sums are always over $\xi\in \Gamma+c/2$.
\item \begin{equation*} \label{thetajacobi} \begin{split}
(\Theta_{X,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1S&=(-1)^{-b\cdot c/2} \Theta_{X,b,c}^{f,g}\theta_{3}^{\sigma(\Gamma)},\\
(\Theta_{X,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T&=(-1)^{3Q(c)/2-c w/2} \Theta_{X,c,b-c+w}^{f,g}\theta_{4}^{\sigma(\Gamma)},\\
(\Theta_{X,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T^2&=(-1)^{-Q(c)} \Theta_{X,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)},\\
(\Theta_{X,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T^{-1}S&= (-1)^{-Q(c)/2-c\cdot b/2}\Theta_{X,w-c+b,c}^{f,g}\theta_{2}^{\sigma(\Gamma)}, \end{split}\end{equation*} where $w$ is a characteristic element of $\Gamma$. \end{enumerate} \end{Theorem}
\begin{Remark} For $f,\ g,\ h\in C_\Gamma\cup S_\Gamma$ we have the cocycle condition $\Theta^{f,g}_{\Gamma,c,b}(\tau,x)+\Theta^{g,h}_{\Gamma,c,b}(\tau,x)=\Theta^{f,h}_{\Gamma,c,b}(\tau,x)$, which holds wherever all three terms are defined. \end{Remark}
In the following let $X$ be a rational algebraic surface. We can express the difference of the $K$-theoretic Donaldson invariants for two different polarizations in terms of these indefinite theta functions. Here we take $\Gamma$ to be $H^2(X,{\mathbb Z})$ with the {\it negative} of the intersection form, and we choose $K_X$ as the characteristic element in \thmref{thetaprop}(3).
\begin{Definition} Let $F,G\in S_\Gamma\cup C_\Gamma$, let $c_1\in H^2(X,{\mathbb Z})$. We put \begin{align*} \Psi^{F,G}_{X,c_1}(L;\Lambda,\tau)&:= \Theta^{F,G}_{X,c_1,K_X}\Big(\frac{(L-K_X)h}{2\pi i},\tau\Big) \Lambda^2 \widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*.
\end{align*}
\end{Definition}
\begin{Lemma}\label{thetawall} Let $H_1,H_2$ be ample on $X$ with $\<H_1,K_X\rangle<0$ and $\<H_2,K_X\rangle<0$, and assume that they do not lie on a wall of type $(c_1)$. Then \begin{enumerate} \item $$\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)M^r=\sum_\xi \overline \Delta^X_\xi(L,P^r),$$ where $\xi$ runs through all classes on $X$ of type $(c_1)$ with $\<H_2,\xi\rangle >0> \<H_1, \xi \rangle$. \item $\chi^{X,H_2}_{c_1}(L)-\chi^{X,H_1}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)\big]. $ \end{enumerate} \end{Lemma} \begin{proof} (2) is proven in \cite[Cor.~4.6]{GY}, where the assumption is made that $-K_X$ is ample; but the proof only uses $\<H_1,K_X\rangle<0$ and $\<H_2,K_X\rangle<0$, because this condition is sufficient for \thmref{wallcr}. The argument of \cite[Cor.~4.6]{GY} actually shows (1) in case $r=0$, but as $\overline \Delta^X_\xi(L,P^r)=\overline \Delta^X_\xi(L,P^0)M^r$, the case of general $r$ follows immediately.
\end{proof}
Following \cite{GY} we use \lemref{thetawall} to extend the generating function $\chi^{X,\omega}_{c_1}(L)$ to $\omega\in S_X\cup C_X$.
\begin{Definition}\label{chiext} Let $\eta$ be ample on $X$ with $\langle\eta, K_X\rangle<0$, and not on a wall of type $(c_1)$. Let $\omega\in S_X\cup C_X$. We put $$\chi^{X,\omega}_{c_1}(L):=\chi^{X,\eta}_{c_1}(L)+\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{\omega,\eta}_{X,c_1}(L;\Lambda,\tau)\big].$$ By the cocycle condition the definition of $\chi^{X,\omega}_{c_1}(L)$ is independent of the choice of $\eta$. Furthermore, by \lemref{thetawall} this coincides with the previous definition in case $\omega$ is also ample, $\langle\omega,K_X\rangle<0$, and $\omega$ does not lie on a wall of type $(c_1)$. However, if $\langle\omega,K_X\rangle\ge 0$, it may well happen that the coefficient of $\Lambda^d$ of $\chi^{X,\omega}_{c_1}(L)$ is different from $\chi(M^X_\omega(c_1,d),\mu(L))$. \end{Definition}
\begin{Remark}\label{difftheta} Now let $H_1,H_2\in S_X\cup C_X$. By the cocycle condition, we have \begin{align*} \chi^{X,H_2}_{c_1}(L)-\chi^{X,H_1}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)\big]. \end{align*} \end{Remark}
\begin{Proposition} \label{blowgen} Let $X$ be a rational surface. Let $\omega\in C_X\cup S_X$. Let $c_1\in H^2(X,{\mathbb Z})$. Let $\widehat X$ be the blowup of $X$ in a general point, and $E$ the exceptional divisor. Let $L\in \operatorname{Pic}(X)$ with $\<L, c_1\rangle$ even. Then \begin{enumerate} \item $\chi^{\widehat X,\omega}_{c_1}(L)=\chi^{X,\omega}_{c_1}(L)$, \item $\chi^{\widehat X,\omega}_{c_1+E}(L)=\Lambda \chi^{X,\omega}_{c_1}(L)$. \end{enumerate}
\end{Proposition} \begin{proof}This is \cite[Prop.~4.9]{GY}, where the additional assumption is made that $-K_{\widehat X}$ is ample. The proof works without this assumption with very minor modifications. In the original proof the result is first proven for an $H_0\in C_X$ which does not lie on any wall of type $(c_1)$. We now have to assume in addition that $\<H_0,K_X\rangle<0$. The rest of the proof is unchanged.
\end{proof}
In \cite[Thm.~4.21]{GY} it is shown that if $X$ is a rational surface with $-K_X$ ample, then $\chi^{X,F}_{c_1}(L)=\chi^{X,G}_{c_1}(L)$ for all $F,G\in S_X$. A modification of this proof shows the following. \begin{Proposition}\label{basic} Let $X$ be ${\mathbb P}^1\times{\mathbb P}^1$ or a blowup of ${\mathbb P}^2$ in finitely many points. Let $L\in \operatorname{Pic}(X)$, let $c_1\in H^2(X,{\mathbb Z})$ with $\<c_1,L\rangle$ even. Let $F,G\in S_X$. Assume that for all $W\in K_X+2H^2(X,{\mathbb Z})$ with $\<F,W\rangle\le 0\le \<G,W\rangle$, we have $W^2<K_X^2$. Then $\chi^{X,F}_{c_1}(L)=\chi^{X,G}_{c_1}(L)$. \end{Proposition} \begin{proof} We know that $\chi^{X,F}_{c_1}(L)-\chi^{X,G}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{F,G}_{X,c_1}(L,\Lambda,\tau)\big]$, and in the proof of \cite[Thm.~4.21]{GY} it is shown that $$\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{F,G}_{X,c_1}(L,\Lambda,\tau)\big]=-\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)\big] -\frac{1}{4}i^{c_1^2+3}\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,i\Lambda,S\tau)\big].$$ Therefore it is enough to show that $\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)\big]=0$. Furthermore, in the proof of \cite[Thm.~4.21]{GY} we have seen that the three functions $\widetilde u:=-\frac{\theta_3^4+\theta_4^4}{\theta_3^2\theta_4^2}$, $$ \widetilde h=-\frac{2}{\theta_{4}\theta_{3}}\sum_{\substack{n\ge 0\\ n\ge k\ge0}} \binom{-\frac{1}{2}}{n}\binom{n}{k}\frac{\widetilde u^k\Lambda^{4n-2k+1}}{4n-2k+1},\quad \widetilde G(\Lambda,\tau)= \frac{(-1)^{\<c_1,K_X\rangle/2-\sigma(X)/4}\Lambda^3}{\theta_3^3\theta_4^3(1+\widetilde{u}\Lambda^2+\Lambda^4)} \widetilde\theta_{2}(\widetilde h)^{(L-K_X)^2} $$ are regular at $q=0$, and furthermore that we can write $$\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)=\Theta_{X,K_X,c_1}^{F,G}\left(\frac{(L-K_X)\widetilde h}{2\pi i }, \tau\right)\theta_{2}^{K_X^2} \widetilde G(\Lambda,\tau).$$ (Note that $\sigma(X)+8=K_X^2$, to compare with the formulas in the proof of \cite[Thm.~4.19]{GY}.) As $\theta_{2}^{K_X^2}$ starts with $q^{K_X^2}$, specializing the formula of \thmref{thetaprop}(2) to the case $c=K_X$, $b=c_1$, $F=f$, $G=g$, we see that all the summands in $\Theta_{X,K_X,c_1}^{F,G}\left(\frac{(L-K_X)\widetilde h}{2\pi i }, \tau\right)$ are of the form $q^{-W^2}J_W(\Lambda,\tau)$, where $J_W(\Lambda,\tau)$ is regular at $q=0$ and
$W\in K_X+2H^2(X,{\mathbb Z})$ with $\<F,W\rangle\le 0\le \<G,W\rangle$. The claim follows. \end{proof}
\begin{Corollary}\label{strucdiff} Let $X={\mathbb P}^1\times{\mathbb P}^1$, or let $X$ be the blowup of ${\mathbb P}^2$ in finitely many general points $p_1,\ldots,p_n$ with exceptional divisors $E_1,\ldots,E_n$. In case $X={\mathbb P}^1\times{\mathbb P}^1$ let $F$ be the class of a fibre of the projection to one of the two factors; otherwise let $F=H-E_i$ for some $i\in \{1,\ldots,n\}$. Let $c_1\in H^2(X,{\mathbb Z})$ and let $L$ be a line bundle on $X$ with $\<L,c_1\rangle$ even. Then \begin{enumerate} \item $\chi^{X,F}_{c_1}(L)=0.$
\item Thus for all $\omega\in S_X\cup C_X$ we have $$\chi^{X,\omega}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)\big].$$ \end{enumerate} \end{Corollary}
\begin{proof} (1) Let $\widehat X$ be the blowup of $X$ in a general point with exceptional divisor $E$. Then $\widehat X$ is the blowup of ${\mathbb P}^2$ in $n+1$ general points (with $n=1$ in case $X={\mathbb P}^1\times{\mathbb P}^1$). We denote by $E_1,\ldots,E_{n+1}$ the exceptional divisors; then we can assume that $F=H-E_1$. We put $G=H-E_{n+1}$. If $\<c_1, H\rangle$ is even, we put $\widehat c_1=c_1+E_{n+1}$, and if $\<c_1, H\rangle$ is odd, we put $\widehat c_1=c_1$. Thus $\langle\widehat c_1, G\rangle$ is odd and therefore by \remref{vanbound} we get $\chi^{\widehat X, G}_{\widehat c_1}(L)=0$. By \propref{blowgen} we have $\chi^{X,F}_{c_1}(L)=\chi^{\widehat X,F}_{\widehat c_1}(L)$ or $\chi^{X,F}_{c_1}(L)=\frac{1}{\Lambda}\chi^{\widehat X,F}_{\widehat c_1}(L)$. Therefore it is enough to show that $\chi^{\widehat X,F}_{\widehat c_1}(L)=\chi^{\widehat X,G}_{\widehat c_1}(L)$. So by \propref{basic} we need to show that for all $W\in K_{\widehat X}+2H^2(\widehat X,{\mathbb Z})$ with $\<F,W\rangle\le 0\le \<G,W\rangle$, we have $W^2<K_{\widehat X}^2$. Let $W=kH+a_1E_1+\ldots +a_{n+1}E_{n+1}\in K_{\widehat X}+2H^2(\widehat X,{\mathbb Z})$ with $\<F,W\rangle\le 0\le \<G,W\rangle$. Then $k,a_1,\ldots,a_{n+1}$ are odd integers, the condition
$\<F,W\rangle\le 0$ gives that $a_1\le -k$, and the condition $\<G,W\rangle\ge 0$ gives that $a_{n+1}\ge -k$. So either $k>0$ and $|a_1|\ge |k|$, or $k<0$ and $|a_{n+1}|\ge |k|$. As all the $a_i$ are odd, this gives $$W^2=k^2-a_1^2-\ldots-a_{n+1}^2\le -n<8-n=K_{\widehat X}^2.$$ \end{proof}
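As a sanity check (an illustration, independent of the proof), the final inequality can be confirmed by brute force for small $n$: the sketch below enumerates odd tuples $(k,a_1,\ldots,a_{n+1})$ in a box, imposes $\<F,W\rangle\le 0\le \<G,W\rangle$ computed from the standard intersection form $\<H,H\rangle=1$, $\<E_i,E_j\rangle=-\delta_{ij}$, and verifies $W^2\le -n$.

```python
from itertools import product

def check_bound(n, bound=7):
    """For the blowup of P^2 in n+1 points, F = H - E_1, G = H - E_{n+1}:
    every W = kH + sum a_i E_i with odd coefficients and <F,W> <= 0 <= <G,W>
    satisfies W^2 <= -n, hence W^2 < 8 - n = K^2."""
    odds = [v for v in range(-bound, bound + 1) if v % 2 != 0]
    for k, *a in product(odds, repeat=n + 2):
        FW = k + a[0]        # <H - E_1, W>
        GW = k + a[-1]       # <H - E_{n+1}, W>
        if FW <= 0 <= GW:
            W2 = k * k - sum(x * x for x in a)
            assert W2 <= -n, (k, a, W2)
    return True

assert check_bound(1) and check_bound(2)
```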
\subsection{Invariants with point class} We can now define $K$-theoretic Donaldson invariants with powers of the point class. \begin{Corollary} Let $X$ be the blowup of ${\mathbb P}^2$ in general points $p_1,\ldots,p_r$, with exceptional divisors $E_1,\ldots, E_r$. Let $\overline X$ be the blowup of ${\mathbb P}^2$ in general points $q_1,\ldots,q_r$, with exceptional divisors $\overline E_1,\ldots, \overline E_r$. For a class $M=dH+a_1E_1+\ldots+a_rE_r\in H^2(X,{\mathbb R})$ let $\overline M:=dH+a_1\overline E_1+\ldots+a_r\overline E_r\in H^2(\overline X,{\mathbb R})$. Then for all $L\in \operatorname{Pic}(X)$, $c_1\in H^2(X,{\mathbb Z})$ with $\<L,c_1\rangle$ even, and $\omega\in C_X\cup S_X$, we have $\chi^{X,\omega}_{c_1}(L)=\chi^{\overline X,\overline \omega}_{\overline c_1}(\overline L)$. \end{Corollary} \begin{proof} Let $F=H-E_1\in S_X$; then $\overline F=H-\overline E_1\in S_{\overline X}$, and thus $\chi^{X,F}_{c_1}(L)=0=\chi^{\overline X,\overline F}_{\overline c_1}(\overline L)$.
The map sending $E_i$ to $\overline E_i$ for all $i$ is an isomorphism of lattices, thus $\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)=\Psi^{\overline \omega,\overline F}_{\overline X,\overline c_1}(\overline L;\Lambda,\tau)$. Thus we get by \corref{strucdiff} that $$\chi^{X,\omega}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)\big]=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\overline \omega,\overline F}_{\overline X,\overline c_1}(\overline L;\Lambda,\tau)\big] =\chi^{\overline X,\overline \omega}_{\overline c_1}(\overline L).$$ \end{proof}
\begin{Definition} Let $X$ be ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points. Let $\omega\in S_X\cup C_X$, $c_1\in H^2(X,{\mathbb Z})$, $L\in \operatorname{Pic}(X)$. Let $X_r$ be the blowup of $X$ in $r$ general points, with exceptional divisors $E_1,\ldots,E_r$. Write $E:=E_1+\ldots+E_r$. We put $$\chi^{X,\omega}_{c_1}(L,P^r):=\Lambda^{-r}\chi^{X_r,\omega}_{c_1+E}(L-E), \quad \chi^{X,\omega}_{c_1,d}(L,P^r):=\mathop{\text{\rm Coeff}}_{\Lambda^d}\big[\chi^{X,\omega}_{c_1}(L,P^r)\big].$$ We call the $\chi^{X,\omega}_{c_1,d}(L,P^r),\ \chi^{X,\omega}_{c_1}(L,P^r)$ the $K$-theoretic Donaldson invariants with point class. More generally, if $F(\Lambda,P)=\sum_{i,j} a_{i,j} \Lambda^iP^j\in {\mathbb Q}[\Lambda,P]$ is a polynomial, we put $$\chi^{X,\omega}_{c_1}(L,F(\Lambda,P)):=\sum_{i,j}a_{i,j} \Lambda^i \chi^{X,\omega}_{c_1}(L,P^j).$$ \end{Definition}
\begin{Remark} There should be a $K$-theory class ${\mathcal P}$ on $M^X_\omega(c_1,d)$, such that $\chi^{X,\omega}_{c_1,d}(L,P^r)=\chi(M^X_\omega(c_1,d),\mu(L)\otimes {\mathcal P}^r)$. By the definition $\chi^{X,\omega}_{c_1}(L,P)=\Lambda^{-1}\chi^{\widehat X,\omega}_{c_1+E}(L-E)$, the sheaf ${\mathcal P}$ would encode local information at the blown-up point. We could view $\mathcal P$ as a $K$-theoretic analogue of the point class in Donaldson theory. This is our motivation for the name of $\chi^{X,\omega}_{c_1}(L,P^r)$. For the moment we do not attempt to give a definition of this class ${\mathcal P}$. There are already speculations about possible definitions of $K$-theoretic Donaldson invariants with powers of the point class in \cite[Sect.~1.3]{GNY}, and the introduction of $\chi^{X,\omega}_{c_1}(L,P^r)$ is motivated by that, but for the moment we do not try to make a connection to the approach in \cite{GNY}. \end{Remark}
\section{Blowup polynomials, blowup formulas and blowdown formulas} In \cite[Section 4.6]{GY} the blowup polynomials $R_n({\mathfrak \lambda},x)$, $S_n({\mathfrak \lambda},x)$ are introduced. They play a central role in our approach. In this section we will first show that they express all the $K$-theoretic Donaldson invariants of the blowup $\widehat X$ of a surface $X$ in terms of the $K$-theoretic Donaldson invariants of $X$. On the other hand we will use them to show that a small part of the $K$-theoretic Donaldson invariants of the blowup $\widehat X$ determine {\it all} the $K$-theoretic Donaldson invariants of $X$ (and thus by the above all the $K$-theoretic Donaldson invariants of any blowup of $X$, including $\widehat X$). Finally (as already in \cite{GY} in some cases), in the next section, we will use the blowup polynomials to construct recursion relations for many $K$-theoretic Donaldson invariants of rational ruled surfaces, enough to apply the above-mentioned results and determine all $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and thus of any blowup of ${\mathbb P}^2$.
\subsection{Blowup polynomials and blowup formulas}
\begin{Definition}\label{blowpol} Define for all $n\in {\mathbb Z}$ rational functions $R_n$, $S_n\in {\mathbb Q}({\mathfrak \lambda},x)$ by $R_0=R_1=1,$ $S_1={\mathfrak \lambda},\ S_2={\mathfrak \lambda} x,$ the recursion relations \begin{align} \label{recur} R_{n+1}&=\frac{R_{n}^2-{\mathfrak \lambda}^2 S_n^2}{R_{n-1}},\quad n\ge 1;\qquad S_{n+1}=\frac{S_{n}^2-{\mathfrak \lambda}^2 R_{n}^2}{S_{n-1}},\quad n\ge 2. \end{align} and $R_{-n}=R_{n}$, $S_{-n}=-S_{n}$. We will prove later that the $R_n$, $S_n$ are indeed polynomials in ${\mathfrak \lambda},x$. \end{Definition} The definition gives \begin{align*} R_1&=1,\ R_2=(1-{\mathfrak \lambda}^4), R_3=-{\mathfrak \lambda}^4 x^2 + (1-{\mathfrak \lambda}^4)^2,\ R_4=-{\mathfrak \lambda}^4x^4+(1-{\mathfrak \lambda}^4)^4,\\ R_5&= -{\mathfrak \lambda}^4x^2\big(x^4 +(2-{\mathfrak \lambda}^4)(1-{\mathfrak \lambda}^4)^2x^2 +3(1-{\mathfrak \lambda}^4)^3\big)+(1-{\mathfrak \lambda}^4)^6,\\ S_1&={\mathfrak \lambda},\ S_2={\mathfrak \lambda} x,\ S_3={\mathfrak \lambda}(x^2-(1-{\mathfrak \lambda}^4)^2),\ S_4={\mathfrak \lambda} x\big((1-{\mathfrak \lambda}^8)x^2-2(1-{\mathfrak \lambda}^4)^3\big). \end{align*}
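Since the recursion \eqref{recur} involves only exact polynomial division, the first few $R_n$, $S_n$ are easy to generate by computer algebra. The following sketch is an illustration only (it uses sympy, with plain symbols standing in for ${\mathfrak \lambda}$, $x$); it reproduces the closed forms listed above.

```python
import sympy as sp

la, x = sp.symbols('la x')  # stand-ins for lambda and x

def blowup_polys(N):
    """R_n, S_n for 0 <= n <= N via the recursion relations of the definition."""
    R = {0: sp.Integer(1), 1: sp.Integer(1)}
    S = {1: la, 2: la * x}
    for n in range(1, N):
        # cancel performs the (exact) polynomial division
        R[n + 1] = sp.cancel((R[n]**2 - la**2 * S[n]**2) / R[n - 1])
        if n >= 2:
            S[n + 1] = sp.cancel((S[n]**2 - la**2 * R[n]**2) / S[n - 1])
    return R, S

R, S = blowup_polys(5)
# compare with the displayed closed forms
assert sp.expand(R[3] - (-la**4*x**2 + (1 - la**4)**2)) == 0
assert sp.expand(R[4] - (-la**4*x**4 + (1 - la**4)**4)) == 0
assert sp.expand(S[3] - la*(x**2 - (1 - la**4)**2)) == 0
assert sp.expand(S[4] - la*x*((1 - la**8)*x**2 - 2*(1 - la**4)**3)) == 0
```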
\begin{Proposition}\label{rnpropo}(\cite[Prop.~4.7]{GY}) \begin{equation}\label{ThRn} \frac{\widetilde \theta_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}=R_n(\Lambda,M),\qquad \frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(h)^{n^2}}=S_n(\Lambda,M). \end{equation} \end{Proposition}
In the following \propref{blowpsi} and \corref{blowp} let $X={\mathbb P}^1\times{\mathbb P}^1$, or let $X$ be the blowup of ${\mathbb P}^2$ in finitely many general points $p_1,\ldots,p_n$ with exceptional divisors $E_1,\ldots,E_n$. In case $X={\mathbb P}^1\times{\mathbb P}^1$ let $F$ be the class of a fibre of the projection to one of the two factors; otherwise let $F=H-E_i$ for some $i\in \{1,\ldots,n\}$.
\begin{Proposition}\label{blowpsi} Let $c_1\in H^2(X,{\mathbb Z})$ and let $L$ be a line bundle on $X$ with $\<c_1, L\rangle$ even. Let $\widehat X$ be the blowup of $X$ in a point, and let $E$ be the exceptional divisor. Let $\omega\in C_X\cup S_X$. Then \begin{enumerate} \item $\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)R_{n}(\Lambda, M)\big]$. \item $\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{n}(\Lambda, M)\big]$. \end{enumerate} \end{Proposition}
\begin{proof}\label{highblow} In \cite[Prop.~4.34]{GY} it is proven for $X={\mathbb P}^1\times {\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in at most 7 points, and any $F,\omega\in C_X\cup S_X$, that
\begin{align*}\Psi^{\omega,F}_{\widehat X,c_1}(L-(n-1)E;\Lambda,\tau)&=\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)R_{n}(\Lambda, M),\\ \Psi^{\omega,F}_{\widehat X,c_1+E}(L-(n-1)E;\Lambda,\tau)&=\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{n}(\Lambda, M). \end{align*} The proof works without modification also for $X$ the blowup of ${\mathbb P}^2$ in finitely many points. The result follows by \corref{strucdiff}.\end{proof}
We now see that the wall-crossing for the $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L,P^r)$ with point class is given by the wall-crossing terms $\delta^{X}_{\xi}(L,P^r)$.
\begin{Corollary} \label{blowp} \begin{enumerate} \item Let $r\ge 0$ and let $L$ be a line bundle on $X$ with $\<L,c_1\rangle\equiv r\mod 2$. Then $$\chi^{X,\omega}_{c_1}(L,P^r)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)M^r\big].$$ \item Let $H_1,H_2\in C_X$, not on a wall of type $(c_1)$. Then \begin{align*} \chi^{X,H_2}_{c_1}(L,P^r)-\chi^{X,H_1}_{c_1}(L,P^r)&=\sum_{\xi} \delta^X_{\xi}(L,P^r),\\ \chi^{X,H_2}_{c_1,d}(L,P^r)-\chi^{X,H_1}_{c_1,d}(L,P^r)&=\sum_{\xi} \delta^X_{\xi,d}(L,P^r), \end{align*} where in the first (resp.~second) sum $\xi$ runs through all classes of type $(c_1)$ (resp.~$(c_1,d)$) with $\<H_1,\xi\rangle<0<\<H_2,\xi\rangle$.
\end{enumerate} \end{Corollary} \begin{proof} (1) Let $\widetilde X$ be the blowup of $X$ in $r$ general points and let $\overline E=\overline E_1+\ldots+\overline E_r$ be the sum of the exceptional divisors. Then by definition and iteration of \propref{blowpsi} we have $$\chi^{X,\omega}_{c_1}(L,P^r)=\frac{1}{\Lambda^r}\chi^{\widetilde X,\omega}_{c_1+\overline E}(L-\overline E)=\frac{1}{\Lambda^r} \mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{2}(\Lambda, M)^r\big].$$ The claim follows because $S_{2}(\Lambda, M)=\Lambda M$. (2) By definition $\mathop{\text{\rm Coeff}}_{q^0}\big[\overline \Delta^X_{\xi}(L,P^r)\big]=\delta^X_{\xi}(L,P^r),$ therefore by \lemref{thetawall}, and using also \remref{c1d}, (2) follows from (1).
\end{proof} With this we get a general blowup formula. \begin{Theorem}\label{nblow} Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points. Let $c_1\in H^2(X,{\mathbb Z})$, let $L$ be a line bundle on $X$, and let $r\in {\mathbb Z}_{\ge 0}$ with $\<c_1, L\rangle\equiv r\mod 2$. Let $\omega\in C_X\cup S_X$. Let $\widehat X$ be the blowup of $X$ in a general point with exceptional divisor $E$. Then \begin{enumerate} \item $\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r)=\chi^{X,\omega}_{c_1}(L,P^r \cdot R_n(\Lambda,P))$, \item $\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E,P^r)=\chi^{X,\omega}_{c_1}(L,P^r \cdot S_n(\Lambda,P))$. \end{enumerate} \end{Theorem} \begin{proof} If $X={\mathbb P}^2$, then we apply \propref{blowgen} to reduce to the case that $X$ is the blowup of ${\mathbb P}^2$ in a point. Thus, by \corref{strucdiff} and the definition of $\chi^{X,G}_{c_1}(L,P^s)$, we can assume that there is a $G\in S_X$ with $\chi^{X,G}_{c_1}(L,P^s)=0$ for all $s\ge 0$.
(1) Let $\widetilde X$ be the blowup of $X$ in $r$ general points, with exceptional divisors $F_1,\ldots,F_r$ and put $F:=F_1+\ldots+F_r$, and let $\overline X$ the blowup of $\widetilde X$ in a point with exceptional divisor $E$. Then by definition $$\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r)=\chi^{\overline X,\omega}_{c_1+F}(L-F-(n-1)E).$$ We get by \corref{strucdiff} that $$\chi^{\overline X,\omega}_{c_1+F}(L-F-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,G}_{\widetilde X,c_1+F}(L-F,\Lambda,\tau)R_{n}(\Lambda, M)\big].$$ On the other hand, by \corref{blowp} we get $$\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,G}_{\widetilde X,c_1+F}(L-F,\Lambda,\tau)R_{n}(\Lambda, M)\big]=\chi^{\widetilde X,\omega}_{c_1+F}(L-F,R_{n}(\Lambda, P))=\chi^{X,\omega}_{c_1}(L,P^r\cdot R_n(\Lambda,P)).$$ The proof of (2) is similar. \end{proof}
\subsection{Further properties of the blowup polynomials} \begin{Proposition}\label{blowpolprop} \begin{enumerate} \item For all $n\in {\mathbb Z}$, we have $R_n
\in {\mathbb Z}[{\mathfrak \lambda}^4,x^2]$, $S_{2n+1}\in {\mathfrak \lambda}{\mathbb Z}[{\mathfrak \lambda}^4,x^2]$, $S_{2n}\in {\mathfrak \lambda} x{\mathbb Z}[{\mathfrak \lambda}^4,x^2]$.
\item $R_n(\lambda,-x)=R_n(\lambda,x)$ and $S_n(\lambda,-x)=(-1)^{n-1} S_n(\lambda,x)$.
\item The $R_n$, $S_n$ satisfy the symmetries \begin{align*} R_{2n}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)&=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n)^2}}R_{2n}({\mathfrak \lambda},x),\quad S_{2n}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)=\frac{(-1)^{n-1}}{{\mathfrak \lambda}^{(2n)^2}}S_{2n}({\mathfrak \lambda},x),\\ R_{2n+1}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)&=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} S_{2n+1}({\mathfrak \lambda},x),\quad S_{2n+1}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} R_{2n+1}({\mathfrak \lambda},x). \end{align*}
\item For all $n\in {\mathbb Z}$, we have the relations \begin{equation}\label{r2nRn} \begin{split} R_{2n}&=R_n^4-S_n^4, \quad S_{2n}=\frac{1}{{\mathfrak \lambda}}R_nS_n(S_{n+1}R_{n-1}-R_{n+1}S_{n-1}). \end{split} \end{equation} \end{enumerate} \end{Proposition} \begin{proof} We write $$\widetilde R_n(h):=\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}=R_n(\Lambda,M),\quad \widetilde S_n(h):= \frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(h)^{n^2}}=S_n(\Lambda,M),$$ where we have used \eqref{ThRn}.
It is easy to see that $\Lambda$ and $M$ are algebraically independent, i.e.\ there exists no polynomial $f\in {\mathbb Q}[\lambda,x]\setminus \{0\}$ such that $f(\Lambda,M)=0$ as a function on ${\mathcal H}\times {\mathbb C}$. For this, note that by $M^2=4(1+u\Lambda^2+\Lambda^4)$, the algebraic independence of $\Lambda$ and $M$ is equivalent to that of $\Lambda$ and $u$. But this is clear, because as Laurent series in $q,y$, $u$ is a Laurent series in $q$ starting with $-\frac{1}{4}q^{-2}$, and $\Lambda$ depends on $y$ in a nontrivial way.
As $M$ and $\Lambda$ are algebraically independent, $R_n$ and $S_n$ are the unique rational functions satisfying \eqref{ThRn}.
Now we will show (4). For any $k\in {\mathbb Z}$ we also have \begin{equation}\label{RSkn} \begin{split} \widetilde R_{kn}(h)&= \frac{{\widetilde \theta}_4(knh)}{{\widetilde \theta}_4(h)^{k^2 n^2}}=\frac{{\widetilde \theta}_4(knh)}{{\widetilde \theta}_4(nh)^{k^2}}\Big(\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}\Big)^{k^2}= \widetilde R_k(nh) \widetilde R_n(h)^{k^2},\\ \widetilde S_{kn}(h)&=\frac{{\widetilde \theta}_1(knh)}{{\widetilde \theta}_4(h)^{k^2n^2}}=\frac{{\widetilde \theta}_1(knh)}{{\widetilde \theta}_4(nh)^{k^2}}\Big(\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}\Big)^{k^2}=\widetilde S_k(nh)\widetilde R_n(h)^{k^2}. \end{split} \end{equation} Thus, using $\widetilde R_2(h)=1-\Lambda^4$, we find in particular $$\widetilde R_{2n}(h)=\widetilde R_2(nh) \widetilde R_n(h)^{4}= \left(1-\left(\frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(nh)}\right)^4\right)\widetilde R_n(h)^4= \widetilde R_n(h)^4-\widetilde S_n(h)^4;$$ i.e., using the algebraic independence of $\Lambda$ and $M$, $R_{2n}=R_n^4- S_n^4$. In the same way we have
$$\widetilde S_{2n}(h)=\widetilde S_2(nh)\widetilde R_n(h)^{4}=\Lambda(nh)M(nh)\widetilde R_n(h)^{4}.$$
By definition $\Lambda(nh)=\frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(nh)}=\frac{\widetilde S_n(h)}{\widetilde R_n(h)}$. Now take the difference of the two formulas (see \cite[\S 2.1 Ex.~3]{WW}) $$\theta_1(y\pm z)\theta_4(y\mp z)\theta_2\theta_3=\theta_1(y)\theta_4(y)\theta_2(z)\theta_3(z)\pm \theta_2(y)\theta_3(y)\theta_1(z)\theta_4(z)$$ with $y=nh$, $z=h$ to get $$\theta_1((n+1)h)\theta_4((n-1)h)\theta_2\theta_3-\theta_1((n-1)h)\theta_4((n+1)h)\theta_2\theta_3=2\theta_2(nh)\theta_3(nh)\theta_1(h)\theta_4(h).$$ This gives \begin{align*} M(nh)&=2\frac{\theta_2(nh)\theta_3(nh)}{\theta_2\theta_3\theta_4(nh)^2}= \frac{\theta_1((n+1)h)\theta_4((n-1)h)-\theta_1((n-1)h)\theta_4((n+1)h)}{\theta_1(h)\theta_4(h) \theta_4(nh)^2}\\&=\frac{1}{\Lambda}\frac{\widetilde S_{n+1}(h)\widetilde R_{n-1}(h)-\widetilde S_{n-1}(h)\widetilde R_{n+1}(h)}{\widetilde R_n(h)^2}. \end{align*} Thus $S_{2n}=\frac{1}{\lambda}S_nR_n(S_{n+1}R_{n-1}-S_{n-1}R_{n+1})$. This shows (4).
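The Whittaker--Watson theta identity quoted above can be tested numerically, for instance with mpmath's \texttt{jtheta} (which follows the classical conventions). This is an illustration only; the nome and the sample points below are arbitrary choices.

```python
from mpmath import mp, jtheta, mpf, mpc, fabs

mp.dps = 30                                       # working precision (decimal digits)
q = mpf('0.05')                                   # nome, |q| < 1 (arbitrary)
y, z = mpc('0.31', '0.12'), mpc('0.17', '-0.08')  # arbitrary sample points

def th(n, u):
    """Jacobi theta_n(u) at nome q, classical conventions."""
    return jtheta(n, u, q)

# theta_1(y+z) theta_4(y-z) theta_2 theta_3
lhs = th(1, y + z) * th(4, y - z) * th(2, 0) * th(3, 0)
# theta_1(y) theta_4(y) theta_2(z) theta_3(z) + theta_2(y) theta_3(y) theta_1(z) theta_4(z)
rhs = (th(1, y) * th(4, y) * th(2, z) * th(3, z)
       + th(2, y) * th(3, y) * th(1, z) * th(4, z))
assert fabs(lhs - rhs) < mpf('1e-20')
```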
(1) Next we will show that $R_n\in {\mathbb Z}[\lambda^4,x]$ and $S_n \in \lambda{\mathbb Z}[\lambda^4,x]$ for all $n\in {\mathbb Z}$. By symmetry it is enough to show this if $n$ is a nonnegative integer. We know that this is true for $0\le n\le 4$. Now assume that $m\ge 2$, and that we know the statement for all $0\le n\le 2m$. Therefore $R_{m+1}\in {\mathbb Z}[\lambda^4,x], S_{m+1}\in \lambda{\mathbb Z}[\lambda^4,x]$, and the formulas \eqref{r2nRn} give that $R_{2m+2}\in {\mathbb Z}[\lambda^4,x]$, $S_{2m+2}\in \lambda{\mathbb Z}[\lambda^4,x]$. The relations \eqref{recur} say that \begin{align*} R_{2m+2}R_{2m}&=R_{2m+1}^2-{\mathfrak \lambda}^2S_{2m+1}^2,\quad S_{2m+2}S_{2m}=S_{2m+1}^2-{\mathfrak \lambda}^2 R_{2m+1}^2, \end{align*} and thus \begin{equation} \label{r2n1} \begin{split} (1-{\mathfrak \lambda}^4)R_{2m+1}^2&=R_{2m+2}R_{2m}+{\mathfrak \lambda}^2 S_{2m+2}S_{2m},\\ (1-{\mathfrak \lambda}^4)S_{2m+1}^2&=S_{2m+2}S_{2m}+{\mathfrak \lambda}^2R_{2m+2}R_{2m}. \end{split} \end{equation} Thus we get $(1-\lambda^4)R_{2m+1}^2\in {\mathbb Z}[\lambda^4,x]$ and $(1-\lambda^4)S_{2m+1}^2\in \lambda^2{\mathbb Z}[\lambda^4,x]$. Therefore, as $1-{\mathfrak \lambda}^4$ is squarefree in ${\mathbb Q}[\lambda,x]$, we also have $R_{2m+1}^2\in {\mathbb Z}[\lambda^4,x]$ and $S_{2m+1}^2\in \lambda^2{\mathbb Z}[\lambda^4,x]$. As we already know that $R_{2m+1},\ S_{2m+1}\in {\mathbb Q}(\lambda,x)$, this gives $R_{2m+1}\in{\mathbb Z}[\lambda^4,x]$ and $S_{2m+1}\in \lambda{\mathbb Z}[\lambda^4,x]$. So $R_{2m+1}$, $R_{2m+2}\in {\mathbb Z}[\lambda^4,x]$ and $S_{2m+1}$, $S_{2m+2}\in \lambda{\mathbb Z}[\lambda^4,x]$. Thus by induction on $m$, we get $R_n\in {\mathbb Z}[\lambda^4,x]$, $S_n\in \lambda{\mathbb Z}[\lambda^4,x]$.
(2) For $n=0,1,2$ we see immediately that the $R_n$ are even in $x$ and that the $S_n$ have parity $(-1)^{n-1}$ in $x$. On the other hand the recursion formulas \eqref{recur} show that $R_{n+1}$ has the same parity in $x$ as $R_{n-1}$, and $S_{n+1}$ the same parity as $S_{n-1}$. This also shows that $R_n\in {\mathbb Z}[\lambda^4,x^2]$, $S_{2n}\in \lambda x{\mathbb Z}[\lambda^4,x^2]$, $S_{2n+1}\in \lambda {\mathbb Z}[\lambda^4,x^2]$.
(3) The formulas \eqref{T4trans},\eqref{T1trans},\eqref{T2trans} imply \begin{equation}\label{Lambdatau} \Lambda(h+\pi i \tau)=\frac{\theta_4(h)}{\theta_1(h)}=\frac{1}{\Lambda},\quad M(h+\pi i \tau)=-2\frac{{\widetilde \theta}_2(h){\widetilde \theta}_3(h)}{{\widetilde \theta}_1(h)^2}=-\frac{M}{\Lambda^2}. \end{equation} Part (1) of \lemref{thetaadd} shows $$ \theta_4(2nh+2\pi i n\tau)=(-1)^n q^{-4n^2}(y^{2n})^{-2n}\theta_4(2nh). $$ Thus, using \eqref{T4trans} again, we get $$ \widetilde R_{2n}(h+\pi i \tau)=(-1)^n\frac{{\widetilde \theta}_4(2nh)}{{\widetilde \theta}_1(h)^{4n^2}}=(-1)^n \frac{\widetilde R_{2n}(h)}{\Lambda^{(2n)^2}}. $$ As $\Lambda^4$ and $M$ are algebraically independent, \eqref{Lambdatau} and the last formula imply that \begin{equation}\label{R2ntau}R_{2n}\Big(\frac{1}{\lambda},\frac{x}{\lambda^2}\Big)=R_{2n}\Big(\frac{1}{\lambda},-\frac{x}{\lambda^2}\Big)=(-1)^n\frac{1}{{\mathfrak \lambda}^{(2n)^2}}R_{2n}({\mathfrak \lambda},x).\end{equation} The same argument using part (2) of \lemref{thetaadd} and \eqref{T1trans} shows $ \widetilde S_{2n}(h+\pi i \tau)=(-1)^n \frac{\widetilde S_{2n}(h)}{\Lambda^{(2n)^2}}, $
and thus \begin{equation}\label{S2ntau}S_{2n}\Big(\frac{1}{\lambda},\frac{x}{\lambda^2}\Big)=-S_{2n}\Big(\frac{1}{\lambda},-\frac{x}{\lambda^2}\Big)=(-1)^{n-1}\frac{1}{{\mathfrak \lambda}^{(2n)^2}}S_{2n}({\mathfrak \lambda},x).\end{equation} Similarly using parts (3) and (4) of \lemref{thetaadd} we get by the same arguments \begin{align*} \widetilde R_{2n+1}(h+\pi i \tau)=\frac{{\widetilde \theta}_4((2n+1)h+\pi i (2n+1)\tau)}{{\widetilde \theta}_4(h+\pi i \tau)^{(2n+1)^2}}= (-1)^n \frac{{\widetilde \theta}_1((2n+1)h)}{{\widetilde \theta}_1(h)^{(2n+1)^2}}=(-1)^n \frac{ \widetilde S_{2n+1}}{\Lambda^{(2n+1)^2}}, \end{align*} and thus $$R_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\big)=R_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},-\frac{x}{{\mathfrak \lambda}^2}\big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} S_{2n+1}.$$ The same argument shows $S_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\big)=S_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},-\frac{x}{{\mathfrak \lambda}^2}\big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} R_{2n+1}.$ \end{proof}
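The identities of \propref{blowpolprop} can likewise be double-checked by computer algebra for small $n$. The sketch below is an illustration only (sympy, with plain symbols standing in for ${\mathfrak \lambda}$, $x$); it regenerates $R_n$, $S_n$ from the recursion of \eqref{recur} and verifies parts (3) and (4) directly.

```python
import sympy as sp

la, x = sp.symbols('la x')  # stand-ins for lambda and x

# regenerate R_n, S_n from the recursion; S_0 = 0 since S_{-n} = -S_n
R = {0: sp.Integer(1), 1: sp.Integer(1)}
S = {0: sp.Integer(0), 1: la, 2: la * x}
for n in range(1, 6):
    R[n + 1] = sp.cancel((R[n]**2 - la**2 * S[n]**2) / R[n - 1])
    if n >= 2:
        S[n + 1] = sp.cancel((S[n]**2 - la**2 * R[n]**2) / S[n - 1])

for n in (1, 2, 3):
    # part (4): R_{2n} = R_n^4 - S_n^4 and
    # S_{2n} = (1/la) R_n S_n (S_{n+1} R_{n-1} - R_{n+1} S_{n-1})
    assert sp.expand(R[2*n] - (R[n]**4 - S[n]**4)) == 0
    rhs = sp.cancel(R[n] * S[n] * (S[n+1]*R[n-1] - R[n+1]*S[n-1]) / la)
    assert sp.expand(S[2*n] - rhs) == 0

for n in (1, 2):
    # part (3), even index: R_{2n}(1/la, x/la^2) = (-1)^n la^{-4n^2} R_{2n}(la, x),
    # and the analogous sign (-1)^{n-1} for S_{2n}
    subR = R[2*n].subs({la: 1/la, x: x/la**2}, simultaneous=True)
    assert sp.cancel(subR - (-1)**n * R[2*n] / la**(4*n**2)) == 0
    subS = S[2*n].subs({la: 1/la, x: x/la**2}, simultaneous=True)
    assert sp.cancel(subS - (-1)**(n - 1) * S[2*n] / la**(4*n**2)) == 0
```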
\begin{comment} We also introduce a normalized version of the blowup polynomials.
\begin{Definition} For all $n$ we define $$r_n(\lambda,x):=\frac{R_n(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{[n^2/4]}},\quad s_n(\lambda,x):=\frac{S_n(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{[n^2/4]}},$$ where $[z]$ denotes the integer part of $z$.
It is easy to see that the $r_n$, $s_n$ satisfy $r_0=r_1=1$, $ s_1=\lambda,\ s_2=\lambda x,$ and the recursion relations \begin{align}\label{recurr} r_{n+1}&=\frac{r_{n}^2-\lambda^2 s_n^2}{(1-\lambda^4)^{\epsilon} r_{n-1}},\
n\ge 1,\quad s_{n+1}=\frac{s_{n}^2-\lambda^2r_n^2}{(1-\lambda^4)^{\epsilon} s_{n-1}},\ n\ge 2, \end{align} where ${\epsilon}=0$ if $n$ is even and ${\epsilon}=1$ if $n$ is odd. We get \begin{equation*} \begin{split} r_2&=1,\ r_3=-\lambda^4 x^2+1,\ r_4=-\lambda^4 x^4 + 1,\ r_5=-\lambda^4x^6 + (\lambda^8 + 2\lambda^4)x^4 - 3\lambda^4x^2 + 1,\\ r_6&=-(\lambda^{12} + \lambda^8 + \lambda^4)x^8 + (4\lambda^8 + 4\lambda^4)x^6 - 6\lambda^4x^4 + 1,\\ s_3&=\lambda(x^2-1),\ s_4=\lambda x \big((\lambda^4 + 1)x^2 - 2\big),\ s_5=\lambda\big(-\lambda^8x^6 + (2\lambda^4 + 1)x^4 - 3x^2 + 1\big),\\ s_6&=\lambda x\big(-\lambda^8 x^8 + (\lambda^8 + 4\lambda^4 + 1)x^4 - (4\lambda^4 + 4)x^2 + 3\big). \end{split} \end{equation*} \end{Definition}
\begin{Proposition}\label{rnprop} \begin{enumerate}
\item For all $n\in {\mathbb Z}$,
we have $r_n\in {\mathbb Z}[\lambda^4,x^2]$, $s_{2n} \in \lambda x{\mathbb Z}[\lambda^4,x^2]$, $s_{2n+1} \in \lambda {\mathbb Z}[\lambda^4,x^2]$, \item
$r_{2n}(\lambda,x)= R_n(\lambda x,(1+\lambda^4)x^2-2), \quad s_{2n}(\lambda,x)=S_n(\lambda x,(1+\lambda^4)x^2-2) ,$
\item The $r_n$, $s_n$ satisfy the symmetries \begin{align*} r_{2n}\big(\frac{1}{\lambda},x\lambda^2\big)&=r_{2n}(\lambda,x), \quad s_{2n}\big(\frac{1}{\lambda},x\lambda^2\big)=s_{2n}(\lambda,x),\\ (-1)^n \lambda r_{2n+1}\big(\frac{1}{\lambda},x\lambda^2\big)&=s_{2n+1}(\lambda,x), \quad (-1)^n \lambda s_{2n+1}\big(\frac{1}{\lambda},x\lambda^2\big)=r_{2n+1}(\lambda,x). \end{align*}
\end{enumerate} \end{Proposition} \begin{proof}
(2) Using $R_2=1-\lambda^4$, we get by the definition of $\widetilde R_{2n}$ that \begin{align*} \frac{\widetilde R_{2n}}{(1-\Lambda^4)^{n^2}}&= \frac{\widetilde R_{2n}}{\widetilde R_2^{n^2}}=\frac{{\widetilde \theta}_4(2nh)}{{\widetilde \theta}_4(h)^{4n^2}} \cdot \frac{{\widetilde \theta}_4(h)^{4n^2}}{{\widetilde \theta}_4(2h)^{n^2}}=\frac{{\widetilde \theta}_4(2nh)} {{\widetilde \theta}_4(2h)^{n^2}}=R_n(\Lambda(2h),M(2h)). \end{align*} By definition we have \begin{align*} \Lambda(2h)&=\frac{\theta_1(2h)}{\theta_4(2h)}= \frac{\theta_1(2h)}{\theta_1(h){\widetilde \theta}_4(h)^3}\cdot\frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_4(2h)}\cdot \frac{\theta_1(h)}{\theta_4(h)}=\frac{\widetilde S_2}{\widetilde R_2}=\frac{\Lambda M}{1-\Lambda^4},\\ M(2h)&=\frac{\theta_1(4h)}{\theta_1(2h){\widetilde \theta}_4(2h)^3}= \frac{{\widetilde \theta}_1(4h)}{{\widetilde \theta}_4(h)^{16}}\cdot \frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_1(2h)}\cdot \Big(\frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_4(2h)}\Big)^3=\frac{\widetilde S_4}{\widetilde S_2\widetilde R_2^3}=\frac{1+\Lambda^4}{(1-\Lambda^4)^2}M^2-2. \end{align*} Thus we get \begin{align*}\frac{R_{2n}(\lambda,x)}{(1-\lambda^4)^{n^2}}&=R_n\Big(\frac{\lambda x}{(1-\lambda^4)},\frac{\lambda^4+1}{(1-\lambda^4)^2}x^2-2\Big),\\ r_{2n}(\lambda,x)&=\frac{R_{2n}(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{n^2}}=R_n(\lambda x,(\lambda^4+1)x^2-2). 
\end{align*} Similarly we have \begin{align*} \frac{\widetilde S_{2n}}{(1-\Lambda^4)^{n^2}}&= \frac{\widetilde S_{2n}}{\widetilde R_2^{n^2}}=\frac{{\widetilde \theta}_1(2nh)}{{\widetilde \theta}_4(h)^{4n^2}} \cdot \frac{{\widetilde \theta}_4(h)^{4n^2}}{{\widetilde \theta}_4(2h)^{n^2}} =\frac{{\widetilde \theta}_1(2nh)} {{\widetilde \theta}_4(2h)^{n^2}}=S_n(\Lambda(2h),M(2h)). \end{align*} In the same way as for $r_{2n}$ this gives \begin{align*} \frac{S_{2n}(\lambda,x)}{(1-\lambda^4)^{n^2}}&=S_n\Big(\frac{\lambda x}{(1-\lambda^4)},\frac{\lambda^4+1}{(1-\lambda^4)^2}x^2-2\Big),\quad s_{2n}(\lambda,x)=S_n(\lambda x,(\lambda^4+1)x^2-2). \end{align*}
(1) Next we will show that $r_n,s_n\in {\mathbb Z}[\lambda,x]$, for all $n\in {\mathbb Z}$. Then (1) follows by the definition of $r_n$, $s_n$. By symmetry it is enough to show this if $n$ is a positive integer. We know that this is true for $n\le 4$. Now assume that $m\ge 2$, and that we know the statement for all $n\le 2m$. As $R_{m+1},S_{m+1}\in {\mathbb Z}[\lambda,x]$, part (2) gives that $r_{2m+2},s_{2m+2}\in {\mathbb Z}[\lambda,x]$. The relations $$(1-\lambda^4)r_{2m+2}r_{2m}=r_{2m+1}^2-\lambda^2s_{2m+1}^2, \ (1-\lambda^4)s_{2m+2}s_{2m}=s_{2m+1}^2-\lambda^2 r_{2m+1}^2,$$ give \begin{align} \label{r2n1}r_{2m+1}^2&=r_{2m+2}r_{2m}+\lambda^2 s_{2m+2}s_{2m},\quad s_{2m+1}^2=s_{2m+2}s_{2m}+\lambda^2 r_{2m+2}r_{2m}. \end{align} Thus we get $r_{2m+1}^2\in {\mathbb Z}[\lambda,x]$, $s_{2m+1}^2\in{\mathbb Z}[\lambda,x]$. As we already know that $r_{2m+1},\ s_{2m+1}\in {\mathbb Q}(\lambda,x)$, it follows that they are in ${\mathbb Z}[\lambda,x]$.
(3) By part (3) of \propref{blowpolprop} we get $$r_{2n}\Big(\frac{1}{\lambda},x\lambda^2\Big)=\frac{(-\lambda^4)^{n^2}}{(1-\lambda^4)^{n^2}}R_{2n}\Big(\frac{1}{\lambda},x(1-\lambda^4)/\lambda^2\Big) =\frac{1}{(1-\lambda^4)^{n^2}}R_{2n}(\lambda,x(1-\lambda^4)) =r_{2n}(\lambda,x).$$
Similarly we get from part (3) of \propref{blowpolprop} \begin{align*}
s_{2n}\Big(\frac{1}{\lambda},x\lambda^2\Big)&=s_{2n}(\lambda,x),\\ (-1)^n \lambda r_{2n+1}\Big(\frac{1}{\lambda},x\lambda^2\Big)&= s_{2n+1}(\lambda,x),\ (-1)^n \lambda s_{2n+1}\Big(\frac{1}{\lambda},x\lambda^2\Big)=r_{2n+1}(\lambda, x). \end{align*} \end{proof}
\end{comment} \subsection{Blowdown formulas} Let $\widehat X$ be the blowup of a rational surface $X$ in a point. As mentioned at the beginning of this section, the blowup polynomials determine a blowup formula which computes the $K$-theoretic Donaldson invariants of $\widehat X$ in terms of those of $X$. We will also need a blowdown formula, which determines all the $K$-theoretic Donaldson invariants of $X$ in terms of a small part of those of $\widehat X$. In order to prove the blowdown formula, we will need that, for $n,m$ relatively prime integers, the polynomials $R_n$, $R_{m}$ and $S_n$, $S_{m}$ are, as polynomials in $x$, relatively prime in a suitable sense.
\begin{Proposition}\label{blowdownpol} Let $n,m\in {\mathbb Z}$ be relatively prime. \begin{enumerate} \item There exists a minimal integer $M^0_{n,m}\in {\mathbb Z}_{\ge 0}$ and unique polynomials $h^0_{n,m},\ l^0_{n,m}\in {\mathbb Q}[\lambda^4,x^2]$, such that $(1-\lambda^4)^{M^0_{n,m}}=h^0_{n,m} R_n+l^0_{n,m} R_m$.
\item
There exist a minimal integer $M^1_{n,m}\in {\mathbb Z}_{\ge 0}$ and unique polynomials $h^1_{n,m},\ l^1_{n,m}\in {\mathbb Q}[\lambda^4,x]$, such that $\lambda(1-\lambda^4)^{M^1_{n,m}}=h^1_{n,m} S_n+l^1_{n,m} S_m$. \end{enumerate} \end{Proposition}
\begin{proof}
For all $l\in {\mathbb Z}$ we write $$\overline S_{2l}:=x\frac{S_{2l}}{\lambda}=\frac{S_2S_{2l}}{S_1^2}, \quad \overline S_{2l+1}:=\frac{S_{2l+1}}{\lambda}=\frac{S_{2l+1}}{S_1}\in {\mathbb Z}[\lambda^4,x^2].$$ Let $I_{n,m}=\<R_n,R_{m}\rangle\subset {\mathbb Z}[\lambda^4,x^2]$ be the ideal generated by $R_n, R_{m}\in {\mathbb Z}[\lambda^4,x^2]$, and let $J_{n,m}=\langle\overline S_n,\overline S_{m}\rangle\subset {\mathbb Z}[\lambda^4,x^2]$ be the ideal generated by $\overline S_n,\overline S_{m}\in {\mathbb Z}[\lambda^4,x^2]$. Then the Proposition follows immediately from the following.
\begin{Claim}[1] There are $M^0_{n,m} , M^1_{n,m}\in {\mathbb Z}_{\ge 0}$ with $(1-\lambda^4)^{M^0_{n,m}}\in I_{n,m}$ and $(1-\lambda^4)^{M^1_{n,m}}\in J_{n,m}$. \end{Claim}
Let \begin{align*}
V_{n,m}&:=\big\{(\alpha^4,\beta^2)\in {\mathbb C}^2\bigm| R_{n}(\alpha,\beta)=R_{m}(\alpha,\beta)=0 \big\}, \\
W_{n,m}&:=\big\{(\alpha^4,\beta^2)\in {\mathbb C}^2\bigm| \overline S_{n}(\alpha,\beta)=\overline S_{m}(\alpha,\beta)=0 \big\}. \end{align*} Then by the Nullstellensatz, Claim (1) follows immediately from the following.
\begin{Claim}[2]
$V_{n,m}, W_{n,m}\subset \{(1,0)\}$.
\end{Claim}
\noindent{\it Proof of Claim (2):} The idea of the proof is as follows: For each $(\alpha,\beta)\in {\mathbb C}^2$ with $(\alpha^4,\beta^2)\in {\mathbb C}^2\setminus \{(1,0)\}$ we want to show that \begin{enumerate} \item $R_n(\alpha,\beta)$ or $R_{m}(\alpha,\beta)$ is nonzero, \item $\overline S_n(\alpha,\beta)$ or $\overline S_{m}(\alpha,\beta)$ is nonzero. \end{enumerate} Recall that we have $\widetilde R_n=R_n(\Lambda,M)$, and we put $\widehat S_n:=\overline S_n(\Lambda,M)$, so that $\widehat S_{2n}=\frac{M \widetilde S_{2n}}{\Lambda}$ and $\widehat S_{2n+1}=\frac{\widetilde S_{2n+1}}{\Lambda}$. We denote \begin{align*}
\Lambda|_S(h,\tau)&:=\Lambda(\frac{h}{\tau},-\frac{1}{\tau}), \quad M|_S(h,\tau):=M(\frac{h}{\tau},-\frac{1}{\tau}),\\
\widetilde R_m|_S(h,\tau)&:=\widetilde R_m(\frac{h}{\tau},-\frac{1}{\tau}), \quad \widehat S_m|_S(h,\tau):=\widehat S_m(\frac{h}{\tau},-\frac{1}{\tau}) \end{align*} the application of the operator $S:(h,\tau)\mapsto (\frac{h}{\tau},-\frac{1}{\tau})$ to the Jacobi functions $\Lambda$, $M$, $\widetilde R_m$, $\widehat S_m$. Obviously we have
$$R_m(\Lambda|_S,M|_S)= \widetilde R_m|_S, \quad \overline S_m(\Lambda|_S,M|_S)= \widehat S_m|_S.$$ We denote by $Z(f)\subset {\mathbb C}$ the zero set of a meromorphic function $f:{\mathbb C}\to{\mathbb C}$.
Therefore Claim (2) will follow once we prove the following facts: \begin{enumerate} \item Every $(\alpha, \beta)\in {\mathbb C}^2\setminus \{(1,0)\}$ can be written as
$(\Lambda(h,\tau)^4,M(h,\tau)^2)$ for some $h\in {\mathbb C}$, $\tau\in {\mathcal H}\cup\{\infty\}$, or as $(\Lambda^4|_S(h,\infty),M^2|_S(h,\infty))$ for some $h\in {\mathbb C}$.
Here by $\Lambda(h,\infty)$, $M(h,\infty)$, $\Lambda|_S(h,\infty)$, $M|_S(h,\infty)$ we mean the coefficient of $q^0$ of the $q$-development of
$\Lambda$, $M$, $\Lambda|_S$, $M|_S$ (asserting also that these developments are power series in $q$).
\item For all $\tau\in {\mathcal H}\cup \{\infty\}$ we have \begin{align*}
Z(\widetilde R_n(\bullet,\tau))\cap Z(\widetilde R_{m}(\bullet,\tau))&:= \big\{h\in {\mathbb C} \bigm|\widetilde R_n(h,\tau)=\widetilde R_{m}(h,\tau)=0\big\}=\emptyset,\\
Z(\widetilde R_n|_S(\bullet,\infty))\cap Z(\widetilde R_{m}|_S(\bullet,\infty))&:= \big\{h\in {\mathbb C} \bigm|\widetilde R_n|_S(h,\infty)=\widetilde R_{m}|_S(h,\infty)=0\big\}=\emptyset. \end{align*} \item For all $\tau\in {\mathcal H}\cup \{\infty\}$ we have
\begin{align*}Z(\widehat S_n(\bullet,\tau))\cap Z(\widehat S_{m}(\bullet,\tau))&:= \big\{h\in {\mathbb C} \bigm|\widehat S_n(h,\tau)=\widehat S_{m}(h,\tau)=0\big\}=\emptyset,\\
Z(\widehat S_n|_S(\bullet,\infty))\cap Z(\widehat S_{m}|_S(\bullet,\infty))&:= \big\{h\in {\mathbb C} \bigm|\widehat S_n|_S(h,\infty)=\widehat S_{m}|_S(h,\infty)=0\big\}=\emptyset. \end{align*} \end{enumerate} (1) For any fixed $\tau\in {\mathcal H}$ the range of the elliptic function $\Lambda=\Lambda(\bullet,\tau)$ is ${\mathbb C}\cup \{\infty\}$. The function $u$ is a Hauptmodul for $\Gamma^0(4)$, which takes the values $-2,2,\infty$ at the cusps $0,2,\infty$ respectively. Therefore the range of $u$ as a function on ${\mathcal H}$ is ${\mathbb C}\setminus \{-2,2\}$. By the equation
$M^2=4(1+u\Lambda^2+\Lambda^4)$, we therefore get that the range of $(\Lambda^4, M^2)$ on ${\mathbb C}\times {\mathcal H}$ contains
the set
$$I_1:={\mathbb C}^2\setminus \{(c^2,4(1+c)^2)\ |\ c\in {\mathbb C}\}.$$
Now we look at $\tau=\infty$, i.e. $q=0$.
By the $q$-developments \eqref{theta}, we see that
\begin{equation}
\label{thetaq0}
\begin{split}
{\widetilde \theta}_1(h,\tau)&=O(q),\quad
{\widetilde \theta}_4(h,\tau)=1+O(q),
\quad {\widetilde \theta}_3(h,\tau)=1+O(q),\\
{\widetilde \theta}_2(h,\tau)&=\cosh(h/2)+O(q),\quad \frac{{\widetilde \theta}_1(h,\tau)}{{\widetilde \theta}_2(h,\tau)}=-i\tanh(h/2)+O(q).
\end{split}
\end{equation}
Therefore we get from the definitions
$$\Lambda(h,\tau)=O(q), \quad M(h,\tau)=2\frac{{\widetilde \theta}_2(h,\tau){\widetilde \theta}_3(h,\tau)}{{\widetilde \theta}_4(h,\tau)^2}=2\cosh(h/2)+O(q).$$
As $\cosh:{\mathbb C}\to {\mathbb C}$ is surjective, we see that the range of
$(\Lambda^4, M^2)$ on $ {\mathbb C}\times \{\infty\} $ is $I_2:=\{0\}\times {\mathbb C}$.
From the definitions and \eqref{thetaq0} we obtain
\begin{align*}
\Lambda^4|_S(h,\tau)&=\frac{{\widetilde \theta}_1(h,\tau)^4}{{\widetilde \theta}_2(h,\tau)^4}=\tanh(h/2)^4+O(q),\\
M^2|_S(h,\tau)&=4\frac{{\widetilde \theta}_3(h,\tau)^2{\widetilde \theta}_4(h,\tau)^2}{{\widetilde \theta}_2(h,\tau)^4}=\frac{4}{\cosh(h/2)^4}+O(q)=
4(1-\tanh(h/2)^2)^2+O(q).\end{align*}
It is an easy exercise that the range of $\tanh:{\mathbb C}\to {\mathbb C}$ is ${\mathbb C}\setminus\{\pm 1\}$. Thus the range of $( \Lambda^4|_S,M^2|_S)$ on ${\mathbb C}\times \{\infty\}$ is
$$I_3=\big \{(c^2,4(1-c)^2)\bigm| c\in {\mathbb C}\setminus \{1\}\big\}.$$ Indeed, a point $(c^2,4(1+c)^2)$ in the complement of $I_1$ lies in $I_3$ (replace $c$ by $-c$) unless $c=-1$, in which case it is the point $(1,0)$. Thus $I_1\cup I_2\cup I_3={\mathbb C}^2\setminus \{(1,0)\}$, and (1) follows.
(2) First let $\tau\in {\mathcal H}$. It is standard that $\theta_1(h)$ and $\theta_4(h)$ are holomorphic in $h$ on ${\mathbb C}$ and $Z(\theta_1(h))=2\pi i ({\mathbb Z}+{\mathbb Z}\tau)$, $Z(\theta_4(h))=2\pi i ({\mathbb Z}+({\mathbb Z}+\frac{1}{2})\tau)$. Thus by $\widetilde R_n=\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}$, we see that \begin{align*}
Z(\widetilde R_n(\bullet,\tau))&=\Big\{2\pi i\big(\frac{a}{n}+\frac{b}{2n}\tau\big)\bigm| a,b\in {\mathbb Z},\ b \hbox{ odd}, (a,b)\not \equiv (0,0) \hbox{ mod }n \Big\}. \end{align*} Assume that $2\pi i \big(\frac{a}{n}+\frac{b}{2n}\tau \big)=2\pi i \big(\frac{a'}{m}+\frac{b'}{2m}\tau \big)\in Z(\widetilde R_n(\bullet,\tau))\cap Z(\widetilde R_{m}(\bullet, \tau))$. As $n$ and $m$ are relatively prime, we see that there are $a'',b''\in {\mathbb Z}$ such that $$\frac{b}{2n}=\frac{b'}{2m}=\frac{b''}{2}, \quad \frac{a}{n}=\frac{a'}{m}=a''.$$ Thus $a$ and $b$ are both divisible by $n$, and thus $2\pi i \big(\frac{a}{n}+\frac{b}{2n}\tau\big)\not\in Z(\widetilde R_n(\bullet,\tau))$, a contradiction.
Now let $\tau=\infty$. Then $\widetilde R_n(h,\infty)=\frac{{\widetilde \theta}_4(nh,\infty)}{{\widetilde \theta}_4(h,\infty)^{n^2}}=1$. Thus $Z(\widetilde R_n(\bullet,\infty))=\emptyset$.
Finally we consider $\widetilde R_n|_S(h,\infty)$. We have \begin{align*}
\widetilde R_n|_S(h,\infty)&=\frac{{\widetilde \theta}_2(nh,\infty)}{{\widetilde \theta}_{2}(h,\infty)^{n^2}}= \frac{\cosh(n h/2)}{\cosh( h/2)^{n^2}}. \end{align*} This gives \begin{align*}
Z(\widetilde R_n|_S(\bullet,\infty))&=\Big\{\pi i \frac{b}{n}\Bigm| b\in {\mathbb Z} \hbox{ odd, } n\nmid b\Big\}, \end{align*}
and again it is clear that $Z(\widetilde R_n|_S(\bullet,\infty))\cap Z(\widetilde R_m|_S(\bullet,\infty))=\emptyset$.
(3) We note that \begin{align*} \widehat S_{2l+1}&=\frac{\widetilde S_{2l+1}}{\widetilde S_1}=\frac{\theta_{1}((2l+1)h)}{\theta_1(h){\widetilde \theta}_4(h)^{4l^2+4l}},\\ \widehat S_{2l}&=\frac{\widetilde S_2\widetilde S_{2l}}{\widetilde S_1^2}=\frac{\theta_{1}(2lh)\theta_1(2h)}{\theta_1(h)^2{\widetilde \theta}_4(h)^{4l^2+2}}. \end{align*} Let $\tau\in {\mathcal H}$, then this gives \begin{align*}
Z(\widehat S_{2l+1}(\bullet, \tau))&=\big\{2\pi i (\frac{a}{2l+1}+\frac{b}{2l+1}\tau)\bigm| a,b\in {\mathbb Z}, \ (a,b)\not \equiv (0,0)\mod 2l+1\big\},\\
Z(\widehat S_{2l}(\bullet, \tau))&=\big\{2\pi i (\frac{a}{2l}+\frac{b}{2l}\tau)\bigm| a,b\in {\mathbb Z}, \ (a,b)\not \equiv (0,0)\mod 2l\big\}. \end{align*} Thus we see immediately that $Z(\widehat S_{n}(\bullet, \tau))\cap Z(\widehat S_{m}(\bullet, \tau))=\emptyset,$ if $n$ and $m$ are relatively prime.
Now let $\tau=\infty$. Then $$\widehat S_{2l+1}(h, \infty)=\frac{\sinh((2l+1)h/2)}{\sinh(h/2)},\quad \widehat S_{2l}(h ,\infty)=\frac{\sinh(lh)\sinh(h)}{\sinh(h/2)^2},$$
So it is easy to see that $Z(\widehat S_{n}(\bullet,\infty))\cap Z(\widehat S_{m}(\bullet, \infty))=\emptyset,$ if $n$ and $m$ are relatively prime. Finally $$\widehat S_{2l+1}|_S(h,\tau)=\frac{\theta_1((2l+1)h)}{\theta_1(h){\widetilde \theta}_2(h)^{4l^2+4l}},\quad
\widehat S_{2l}|_S(h,\tau)=\frac{\theta_1(2lh) \theta_1(2h)}{\theta_1(h)^2{\widetilde \theta}_2(h)^{4l^2+2}}.$$
Thus we get $$\widehat S_{2l+1}|_S(h,\infty)=\frac{\sinh((2l+1)h/2)}{\sinh(h/2)\cosh(h/2)^{4l^2+4l}},\
\widehat S_{2l}|_S(h,\infty)=\frac{\sinh(lh)\sinh(h)}{\sinh(h/2)^2\cosh(h/2)^{4l^2+2}},$$
and again it is evident that for $n$ and $m$ relatively prime
$Z(\widehat S_{n}|_S(\bullet,\infty))\cap Z(\widehat S_{m}|_S(\bullet, \infty))=\emptyset.$ \end{proof}
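\begin{Remark} As a simple illustration of \propref{blowdownpol}(1): since $R_1=1$, for $(n,m)=(1,m)$ one can take $M^0_{1,m}=0$ and $(h^0_{1,m},l^0_{1,m})=(1,0)$. For $(n,m)=(2,3)$, granting the explicit formulas $R_2=1-\lambda^4$ and $R_3=(1-\lambda^4)^2-\lambda^4x^2$, we have $(1-\lambda^4)^{1}=1\cdot R_2+0\cdot R_3$, while $M^0_{2,3}=0$ is impossible, as $R_2$ and $R_3$ both vanish at $(\lambda^4,x^2)=(1,0)$; hence $M^0_{2,3}=1$. \end{Remark}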
\begin{Corollary}\label{blowdownmn} Let $n,m\in {\mathbb Z}$ be relatively prime. Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points. Let $c_1\in H^2(X,{\mathbb Z})$, let $L$ be a line bundle on $X$, and let $r\in {\mathbb Z}_{\ge 0}$ with $\<c_1,L\rangle\equiv r\mod 2$. Let $\widehat X$ be the blowup of $X$ in a point, and let $\omega\in S_X$. Using the notations of \propref{blowdownpol}, we have \begin{align*} \chi^{X,\omega}_{c_1}(L,P^r)&=\frac{1}{(1-\Lambda^4)^{M^0_{n,m}}}\big(\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r\cdot h^0_{n,m}(\Lambda,P))\\ &\qquad+ \chi^{\widehat X,\omega}_{c_1}(L-(m-1)E,P^r\cdot l^0_{n,m}(\Lambda,P))\big)\\ &=\frac{1}{\Lambda(1-\Lambda^4)^{M^1_{n,m}}}\big(\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E,P^r\cdot h^1_{n,m}(\Lambda,P))\\ &\qquad+ \chi^{\widehat X,\omega}_{c_1+E}(L-(m-1)E,P^r\cdot l^1_{n,m}(\Lambda,P))\big). \end{align*} \end{Corollary} \begin{proof} We prove the first equality. By \thmref{nblow} we have \begin{align*} \big(&\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r\cdot h^0_{n,m}(\Lambda,P))+ \chi^{\widehat X,\omega}_{c_1}(L-(m-1)E,P^r\cdot l^0_{n,m}(\Lambda,P))\big)\\& = \chi^{X,\omega}_{c_1}\big(L,P^r\cdot \big(R_n(\Lambda,P)h^0_{n,m}(\Lambda,P)+R_{m}(\Lambda,P)l^0_{n,m}(\Lambda,P)\big)\big)= (1-\Lambda^4)^{M^0_{n,m}} \chi^{X,\omega}_{c_1}(L,P^r), \end{align*} where in the last step we use \propref{blowdownpol}. The proof of the second equality is similar. \end{proof}
\section{Recursion formulas for rational ruled surfaces}
\subsection{The limit of the invariant at the boundary point} For $X={\mathbb P}^1\times {\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$ the blowup of ${\mathbb P}^2$ in a point, we denote the line bundles on $X$ in a uniform way. \begin{Notation} Let $X={\mathbb P}^1\times {\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$. In the case $X={\mathbb P}^1\times {\mathbb P}^1$ we denote by $F$ the class of the fibre of the projection to the first factor, and by $G$ the class of the fibre of the projection to the second factor. In the case $X=\widehat {\mathbb P}^2$, let $H$ be the pullback of the hyperplane class on ${\mathbb P}^2$ and $E$ the class of the exceptional divisor. Then $F:=H-E$ is the fibre of the ruling of $X$. We put $G:=\frac{1}{2}(H+E)$. Note that $G$ is not an integral cohomology class. In fact, while $H^2({\mathbb P}^1\times{\mathbb P}^1,{\mathbb Z})={\mathbb Z} F\oplus {\mathbb Z} G$, we have
$$H^2(\widehat {\mathbb P}^2,{\mathbb Z})={\mathbb Z} H\oplus {\mathbb Z} E=\big\{aF+bG\bigm| a\in {\mathbb Z},b\in 2{\mathbb Z} \hbox{ or } a\in {\mathbb Z}+\frac{1}{2}, b\in 2{\mathbb Z}+1\big\}.$$ On the other hand we note that both on $X={\mathbb P}^1\times{\mathbb P}^1$ and $\widehat {\mathbb P}^2$ we have $F^2=G^2=0$, $\<F,G\rangle=1$, and $-K_X=2F+2G$. \end{Notation}
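For instance, inverting the relations $F=H-E$ and $G=\frac{1}{2}(H+E)$ on $\widehat {\mathbb P}^2$ gives $$H=\tfrac{1}{2}F+G,\qquad E=-\tfrac{1}{2}F+G,$$ so $H$ and $E$ correspond to $(a,b)=(\tfrac{1}{2},1)$ and $(a,b)=(-\tfrac{1}{2},1)$ respectively, both of the second kind $a\in {\mathbb Z}+\frac{1}{2}$, $b\in 2{\mathbb Z}+1$ in the description above.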
We want to define and study the limit of the $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L,P^r)$ as the ample class $\omega$ tends to $F$. For $c_1=F$ or $c_1=0$ this will be different from our previous definition of $\chi^{X,F}_{c_1}(L,P^r)$.
\begin{Definition} Let $r\in {\mathbb Z}_{\ge 0}$, let $L\in \operatorname{Pic}(X)$ with $\<c_1,L\rangle+r$ even. Fix $d\in {\mathbb Z}$ with $d\equiv -c_1^2 \mod 4$. For $n_{d,r}>0$ sufficiently large, $n_{d,r}F+ G$ is ample on $X$, and
there is no wall $\xi$ of type $(c_1,d)$ with $\langle\xi ,(n_{d,r}F+ G)\rangle>0> \langle\xi, F\rangle$. For all $\omega\in S_X\cup C_X$, we define $\chi^{X,\omega}_{c_1,d}(L,P^r):=\mathop{\text{\rm Coeff}}_{\Lambda^d}\big[\chi^{X,\omega}_{c_1}(L,P^r)\big],$ and put \begin{align*} \chi^{X,F_+}_{c_1,d}(L,P^r)&:=\chi^{X,n_{d,r}F+ G}_{c_1,d}(L,P^r),\quad \chi^{X,F_+}_{c_1}(L,P^r):=\sum_{d\ge 0} \chi^{X,F_+}_{c_1,d}(L,P^r)\Lambda^d. \end{align*}
\end{Definition}
Now we give a formula for $\chi^{X,F_+}_{0}(nF+mG,P^r)$ and $\chi^{X,F_+}_{F}(nF+mG,P^r)$. The result and the proof are similar to \cite[Prop.~5.3]{GY}.
The rest of this section will be mostly devoted to giving an explicit evaluation of this formula for $m\le 2$.
\begin{Proposition}\label{Fplus} Let $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$. \begin{enumerate} \item Let $nF+mG$ be a line bundle on $X$ with $m$ even. Then $$\chi^{X,F_+}_{F}(nF+mG,P^r)=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2\sinh((m/2+1)h)}\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$
\item Let $nF+mG$ be a line bundle on $X$ (note that we might have $n\in \frac{1}{2}{\mathbb Z}$). Then $$\chi^{X,F_+}_{0}(nF+mG,P^r)=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth((m/2+1)h)\right)\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$ \end{enumerate}
\end{Proposition}
\begin{proof} We denote $\Gamma_X=H^2(X,{\mathbb Z})$ with inner product the negative of the intersection form. Let $c_1=0$ or $c_1=F$, fix $d$, and let $s\in {\mathbb Z}_{\ge 0}$ be sufficiently large so that there is no class $\xi$ of type $(c_1,d)$ with $\langle\xi, F\rangle<0<\langle\xi, (G+sF)\rangle$. Write $L:=nF+mG$. By \corref{blowp} we get \begin{align*} &\chi^{X,sF+G}_{F,d}(L,P^r)=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Psi^{G+sF,F}_{X,F}(L;\Lambda,\tau)M^r\right]\\ \quad&=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Theta^{G+sF,F}_{\Gamma_X,F,K_X}(\frac{1}{2\pi i}(L-K_X)h,\tau)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\ \quad&= \mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\langle\frac{F}{2},(L-K_X)\>h}}{1-e^{-\<F,(L-K_X)\>h}}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]+\sum_{\<F,\xi\rangle<0<\langle(G+sF),\xi\rangle}\delta^X_\xi(L,P^r). \end{align*} Here the second sum is over the classes $\xi$ of type $(F,d)$. By our assumption on $s$ the second sum is empty, so we get \begin{align*} \chi^{X,F_+}_F(L,P^r)&=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\langle\frac{F}{2},(L-K_X)\>h}}{1-e^{-\<F,(L-K_X)\>h}}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\&= \mathop{\text{\rm Coeff}}_{q^0}\left[\frac{\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{2\sinh(\langle\frac{F}{2},(L-K_X)\>h)}M^r\right].\end{align*} In the case $c_1=0$ the argument is very similar. 
By definition and \thmref{vanbound} we have \begin{align*} &\chi^{X,sF+G}_{0,d}(L,P^r)=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Psi^{sF+G,F}_{X,0}(L;\Lambda,\tau)M^r\right]\\ \quad&=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Theta^{sF+G,F}_{\Gamma_X,0,K_X}(\frac{1}{2\pi i}(L-K_X)h,\tau)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\ \quad&= -\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<F,L-K_X\>h}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{1-e^{-\<F,(L-K_X)\>h}}M^r\right]+\sum_{\<F,\xi\rangle<0<\langle(G+sF),\xi\rangle}\delta^X_\xi(L,P^r). \end{align*} The second sum is again over the walls of type $(0,d)$, and thus it is $0$. Thus we get \begin{align*} \chi^{X,F_+}_{0}(L,P^r)&=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<F,L-K_X\>h}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{1-e^{-\<F,(L-K_X)\>h}}M^r\right]\\ &= -\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth(\<F,(L-K_X)/2\>h)-1\right)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]. \end{align*} Note that by \remref{delb}, we get $$\mathop{\text{\rm Coeff}}_{q^0}\big[\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\big]=0,$$ so the summand $-1$ in $\coth(\<F,(L-K_X)/2\>h)-1$ does not contribute, and the formula of (2) follows. \end{proof}
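The rewriting of the boundary terms in the proof above rests on the two elementary identities (applied with $a=\<F,(L-K_X)/2\>$) $$\frac{e^{-ah}}{1-e^{-2ah}}=\frac{1}{2\sinh(ah)},\qquad \frac{e^{-2ah}}{1-e^{-2ah}}=\frac{1}{2}\big(\coth(ah)-1\big),$$ both obtained by multiplying numerator and denominator by $e^{ah}$.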
\begin{Remark}\label{Gplus} In the case of ${\mathbb P}^1\times{\mathbb P}^1$, we can in the same way define $\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1,d}(L,P^r):=\chi^{{\mathbb P}^1\times{\mathbb P}^1,G+n_dF}_{c_1,d}(L,P^r)$ for $n_d$ sufficiently large with respect to $d$, and $$\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1}(nF+mG,P^r):=\sum_{d\ge 0} \chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1,d}(nF+mG,P^r)\Lambda^d.$$ Then we see immediately that $\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{F}(nF+mG,P^r)=0$, and we get by symmetry from \propref{Fplus} that $$\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{0}(nF+mG,P^r)=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth((n/2+1)h)\right)\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$ \end{Remark}
\subsection{Recursion formulas from theta constant identities}
We now use the blowup polynomials to show recursion formulas in $n$ and $r$ for the $K$-theoretic Donaldson invariants $\chi^{X,F_+}_{0}(nF+mG,P^r)$, $\chi^{X,F_+}_{F}(nF+mG,P^r)$ for $0\le m\le 2$. We use the fact that the $\widetilde S_n$ vanish at the division points $h\in \frac{2\pi i}{n} {\mathbb Z}$, together with other vanishing results proven in \cite{GY}. We consider expressions relating the left hand sides of the formulas of \propref{Fplus} for $\chi^{X,F_+}_{0}(nF+mG,P^r)$, $\chi^{X,F_+}_{F}(nF+mG,P^r)$ for successive values of $n$. We will show that these are almost holomorphic in $q$, i.e.\ that they have only finitely many monomials $\Lambda^d q^s$ with nonzero coefficients and $s\le 0$. This will then give recursion formulas for $\chi^{X,F_+}_{0}(nF+mG,P^r)$ and $\chi^{X,F_+}_{F}(nF+mG,P^r)$.
We will frequently use the following \begin{Notation} \begin{enumerate} \item For a power series $f=\sum_{n\ge 0} f_n(y)q^n \in {\mathbb C}[y^{\pm 1}][[q]]$, and a polynomial $g\in {\mathbb C}[y^{\pm 1}]$ we say that $g$ {\it divides} $f$, if $g$ divides $f_n$ for all $n$. \item For a Laurent series $h=\sum_{n} a_nq^n\in {\mathbb C}((q))$ the {\it principal part} is ${\mathcal P}[h]:=\sum_{n\le 0} a_nq^n$. Note that this contains the coefficient of $q^0$. This is because we think of $q$ as $e^{\pi i \tau/4}$, with $\tau$ in ${\mathcal H}$, and then $\frac{dq}{q}=\frac{\pi i}{4} d\tau$. For a series $h=\sum_{n\ge 0} h_n(q)\Lambda^n\in {\mathbb C}((q))[[\Lambda]]$, the {\it principal part} is ${\mathcal P}[h]:=\sum_{n\ge 0} {\mathcal P}[h_n]\Lambda^n\in {\mathbb C}((q))[[\Lambda]]$. We recall the previous notation $\mathop{\text{\rm Coeff}}_{q^0}[h]:=\sum_{n\ge 0} \mathop{\text{\rm Coeff}}_{q^0}[h_n]\Lambda^n$. \item We write ${\mathbb Q}[[y^{\pm 2} q^4,q^4]]^\times$ for the set of power series in $y^{\pm 2} q^4,q^4$ whose constant part is $1$. \end{enumerate} \end{Notation}
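To fix these conventions with a small example: for $h=q^{-2}\Lambda^2+2+q^2\Lambda^2+q^4\Lambda^4$ we have $${\mathcal P}\big[q^{-2}\Lambda^2+2+q^2\Lambda^2+q^4\Lambda^4\big]=q^{-2}\Lambda^2+2,$$ since the coefficient of $q^0$ is kept, while all monomials with a positive power of $q$ are discarded.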
\begin{Remark} By \eqref{thetatilde},\eqref{MuL} we have
${\mathcal P}[M^2]=4-q^{-2}\Lambda^2+4\Lambda^4$, and thus obviously \begin{align*}{\mathcal P}[M^2-(1-\Lambda^4)^2]&= 3 -q^{-2}\Lambda^2+6\Lambda^4 -\Lambda^8,\\ {\mathcal P}[M^2(1+\Lambda^4)-2(1-\Lambda^4)^2]&=2-q^{-2}\Lambda^2+12\Lambda^4-q^{-2}\Lambda^6+2\Lambda^8. \end{align*} \end{Remark}
\begin{Lemma}\label{theth1} For all $r\in {\mathbb Z}_{>0}$ we have
\begin{align*} \tag{1} &g^r_1:={\mathcal P}\Big[\frac{1}{2\sinh(h)} M^{2r}u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r},\\ \tag{2}&g^r_2:={\mathcal P}\Big[-\frac{1}{2}\coth(h)M^{2r}u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+1},\\
\tag{3}&{\mathcal P}\Big[\frac{1}{2\sinh(3h/2)}M\big({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\big)u'h^*\Lambda^2\Big]=\Lambda^4,\\ \tag{4}& g^r_3:={\mathcal P}\Big[\frac{1}{2\sinh(3h/2)}M^{2r-1}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r},\\ \tag{5}&g^r_4:={\mathcal P}\Big[-\frac{1}{2}\coth(3h/2)M^{2r-2}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)) u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+1},\\
\tag{6}&g^r_5:={\mathcal P}\Big[-\frac{1}{2}\tanh(h)M^{2r-2}\big({\widetilde \theta}_4(h)^8(M^2-(1-\Lambda^4)^2)-1\big) u'h^*\Lambda^2\Big] \in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+2}. \end{align*} \end{Lemma} \begin{pf} (1) We know ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$, ${\widetilde \theta}_1(h)\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$, and from the product formula of \eqref{theta} we see that even ${\widetilde \theta}_1(2h)\in iq(y^2-y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$. By \defref{blowpol}, \propref{rnpropo} we have $\frac{{\widetilde \theta}_1(2h)}{{\widetilde \theta}_4(h)^4}=\Lambda M$, thus we get that $\Lambda^{2} M^{2}\in q^{2}(y^2-y^{-2})^{2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]].$ As $u'\in q^{-2}{\mathbb Q}[[q^4]]$, we get that $$f(y,q):=\sum_{n\ge 0} f_n(y)q^{4n}:=\frac{1}{\sinh(h)}\Lambda^{2} M^{2}u'\in (y^2-y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]].$$ Thus $f_n(y)$ is a Laurent polynomial in $y^2$ of degree at most $n+1$, and we see from the definitions that it is antisymmetric under $y\to y^{-1}$. Therefore $f_n(y)$ can be written as a linear combination of $\sinh(lh)$ for $l=1,\ldots, n+1$. Thus we get by \lemref{qpow} that $f_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$, and thus the principal part of $q^{4n} f_n(y)h^*$ vanishes unless $4n\le 2n+2$, i.e. $n\le 1$. Therefore the principal part of $f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2,\Lambda^2q^2$, and $q^4$ and thus (as the power of $q$ must be nonpositive) a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$, and we see that its degree is at most $1$.
By \eqref{MuL}, we have that $M^2=4+4u\Lambda^2+4\Lambda^4$. Using that $u\in q^{-2}{\mathbb Q}[[q^4]]$ we get that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$. Therefore by the above $$M^{2r-2}f_n(y)q^{4n}h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$ The same argument as above shows that the principal part of $M^{2r-2}f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $r$.
(2) In (1) we have seen that $$M^{2r-2}f_n(y)q^{4n}h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$ We have $\coth(h)\Lambda^{2} M^{2r}u'h^*=\cosh(h)M^{2r-2}f(y,q)h^*$, and by \lemref{qpow} we have that $$\cosh(h)M^{2r-2}f_n(y)q^{4n}h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r+1}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$ The same argument as in (1) shows that the principal part of $\cosh(h)M^{2r-2}f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $r+1$.
(3) By \cite{GY}, Prop.~5.10(5) and its proof, we have that ${\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\in {\mathbb Q}[y^{\pm 1}][[q]]$ is divisible by $y^3-y^{-3}$. Thus also $M({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1)\in {\mathbb Q}[y^{\pm 1}][[q]]$ is divisible by $y^3-y^{-3}$. We note that $\Lambda\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, thus $1-\Lambda^4\in {\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$. We already know ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, $M\in (y+y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$. Thus $M({\widetilde \theta}_4^3(1-\Lambda^4)-1)\in (y^3-y^{-3}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$. Therefore, writing $$f:=\sum_{n\ge 0} f_n(y) q^{4n}:=\frac{1}{2\sinh(3h/2)}M({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1),$$ $f_n(y)$ is a Laurent polynomial in $y^2$ of degree at most $n$, and we see from the definitions that it is antisymmetric under $y\to y^{-1}$. Thus by \lemref{qpow} we get $f_n(y)h^*\in{\mathbb Q}[q^{-2}\Lambda^2]_{\le n}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. Therefore $fh^*\in {\mathbb Q}[[q^2\Lambda^2, q^4]]$ and $f h^*u'\Lambda^2\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le 1}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. Computation of the first few coefficients gives ${\mathcal P}[f h^*u'\Lambda^2]=\Lambda^4$.
(4) As ${\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\in {\mathbb Q}[y^{\pm 1}][[q]]$ is divisible by $y^3-y^{-3}$, the same is true for $\Lambda M^2({\widetilde \theta}_4^3(1-\Lambda^4)-1)$. On the other hand $\Lambda(M^2-(1-\Lambda^4)^2)\in i{\mathbb Q}[y^{\pm 1}][[q]]$, and by $\Lambda(M^2-(1-\Lambda^4)^2)=\widetilde S_3=\frac{{\widetilde \theta}_1(3h)}{{\widetilde \theta}_4(h)^9}$ (see \defref{blowpol}, \propref{rnpropo}), we see that $\Lambda(M^2-(1-\Lambda^4)^2)$ vanishes for $3h\in 2\pi i {\mathbb Z}$, i.e. when $y^3=y^{-3}$. Thus it is also divisible by $y^3-y^{-3}$. Therefore also $$\Lambda (M^2{\widetilde \theta}_4(h)^3(1-\Lambda^4)-(1-\Lambda^4)^2)=\Lambda M^2({\widetilde \theta}_4^3(1-\Lambda^4)-1)+\Lambda(M^2-(1-\Lambda^4)^2)\in i{\mathbb Q}[y^{\pm 1}][[q]]$$ is divisible by $y^3-y^{-3}$. We note that $(1-\Lambda^4)=\widetilde R_2=\frac{{\widetilde \theta}_4(2h)}{{\widetilde \theta}_4(h)^4}\in {\mathbb Q}[y^{\pm1}][[q]]$ does not vanish at any $h$ with $3h\in 2\pi i {\mathbb Z}$. It follows that also the power series $\Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))$ is divisible by $y^3-y^{-3}$. Finally we note that $\Lambda\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, thus $1-\Lambda^4\in {\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$.
Thus $$\Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))\in iq(y-y^{-1}){\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm2}q^4,q^4]],$$ and therefore, as it is divisible by $y^3-y^{-3}$, we can write $\frac{1}{\sinh(3h/2)}M \Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))=q \sum_{n\ge 0} \overline f_n(y)q^{4n}$, where $\overline f_n(y)$ is an odd Laurent polynomial in $y$ of degree $2n+1$, symmetric under $y\to y^{-1}$. Thus by \lemref{qpow} we get $\overline f_n(y)h^*\in \Lambda{\mathbb Q}[q^{-2}\Lambda^2]_{\le n}{\mathbb Q}[[q^2\Lambda^2, q^4]]$, and thus $\overline f_n(y)h^*u'\Lambda\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+1}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. It follows as before that the principal part of $\frac{1}{2\sinh(3h/2)}M (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'h^*\Lambda^2$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $1$. Using the fact that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$ in the same way as in the proof of (1) we see that
the principal part of $\frac{1}{2\sinh(3h/2)} M^{2r-1}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'\Lambda^2h^*$ is a polynomial of degree at most $r$ in $q^{-2}\Lambda^2$ and $\Lambda^4$.
(5) We see that the left hand side of (5) is obtained from the left hand side of (4) by multiplying by $\cosh(3h/2)/M$.
As by the above $M\in (y+y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$, we see that
$$2\cosh(3h/2)/M=(y^3+y^{-3})/M\in (y^2-1+y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]] \subset
{\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]],$$
where the inclusion on the right follows again by \lemref{qpow}. Therefore (5) follows from (4).
(6) We note that by \defref{blowpol} and \propref{rnpropo} we have
$$
s_1:=(1+\Lambda^4)M^2-2(1-\Lambda^4)^2=\frac{S_4(\Lambda,M)}{S_2(\Lambda,M)R_2(\Lambda,M)}=\frac{{\widetilde \theta}_1(4h)}{{\widetilde \theta}_1(2h) {\widetilde \theta}_4(2h){\widetilde \theta}_4(h)^8}.$$ Again $s_1$ is in ${\mathbb Q}[y^{\pm 1}][[q]]$. As ${\widetilde \theta}_4(h)$ has no zeros on $2\pi i {\mathbb R}$ and ${\widetilde \theta}_1(h)$ vanishes precisely for $h\in 2\pi i {\mathbb Z}$, we find that $s_1$ vanishes if $y^4=y^{-4}$, but not if $y^2=y^{-2}$. Thus the coefficient of every power of $q$ of $s_1$ is divisible by $y^2+y^{-2}$.
In \cite{GY} Proposition 5.10(6) and its proof it is shown that $$s_2:={\widetilde \theta}_4(h)^8(1-\Lambda^4)^3-(1+\Lambda^4)\in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$ Thus also $$M^2{\widetilde \theta}_4(h)^8(1-\Lambda^4)^3-2(1-\Lambda^4)^2=M^2s_2+s_1\in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$ As $\widetilde R_2=(1-\Lambda^4)\in {\mathbb Q}[y^{\pm 1}][[q]]^\times$ does not vanish for $h\in i{\mathbb R}$, we get that $s_3:=M^2{\widetilde \theta}_4(h)^8(1-\Lambda^4)-2\in(y^2+y^{-2}) {\mathbb Q}[y^{\pm 1}][[q]].$ Therefore also $$\frac{1}{2}(s_3+{\widetilde \theta}_4(h)^8s_1)=M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1 \in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$ On the other hand we know $M^2\in (y+y^{-1})^2 {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$ and $(1-\Lambda^4)^2\in {\mathbb Q}[y^{\pm 2}]_{\le 2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$. Thus $$l:=\tanh(h)\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)\in {\mathbb Q}[y^{\pm 2}]_{\le 2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]].$$ Thus we can write $l=\sum_{n\ge 0} l_n(y)q^{4n}$ where $l_n(y)$ is a Laurent polynomial in $y^2$ of degree $n+2$, symmetric under $y\to y^{-1}$. Thus by \lemref{qpow} we get $ l_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+2}{\mathbb Q}[[q^2\Lambda^2, q^4]]$, and thus $l_n(y)h^*u'\Lambda^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+3}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. It follows as before that the principal part of $\tanh(h)\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)h^*u'\Lambda^2$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $3$. Using again the fact that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$, we see that
the principal part of $\tanh(h)M^{2r-2}\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)h^*u'\Lambda^2$ is a polynomial of degree at most $r+2$ in $q^{-2}\Lambda^2$ and $\Lambda^4$.
\end{pf}
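In step (5) the elementary Laurent-polynomial factorization $y^3+y^{-3}=(y+y^{-1})(y^2-1+y^{-2})$ was used. As a quick standalone sanity check (a Python sketch, independent of the paper's PARI code), one can multiply Laurent polynomials encoded as exponent-coefficient dictionaries:

```python
from collections import defaultdict

def mul(p, q):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return {e: c for e, c in r.items() if c != 0}

# (y + y^-1) * (y^2 - 1 + y^-2) == y^3 + y^-3
assert mul({1: 1, -1: 1}, {2: 1, 0: -1, -2: 1}) == {3: 1, -3: 1}
```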
\begin{Remark} The principal parts above can easily be computed from the lower order terms of the power series, using the formulas given in \secref{thetamod}. We see for instance: \begin{align*} g_1^1&=q^{-2}\Lambda^2-4\Lambda^4,\quad g_2^1=-q^{-2}\Lambda^2 +\Big(\frac{1}{2}q^{-4} - 8\Big) \Lambda^4-q^{-2}\Lambda^6-\Lambda^8,\\
g_3^1&=q^{-2}\Lambda^2 -5\Lambda^4, \quad g_3^2=4q^{-2}\Lambda^2 - (q^{-4} + 20)\Lambda^4 + 9q^{-2}\Lambda^6 -23\Lambda^8,\\ g_4^1&=-\frac{1}{2}q^{-2}\Lambda^2 + (\frac{1}{2}q^{-4}- 11 )\Lambda^4-\frac{1}{2}\Lambda^8,\\
g_5^1&=(\frac{1}{2}q^{-4} - 12)\Lambda^4 + 2q^{-2}\Lambda^6 + 4\Lambda^8 -\frac{1}{2}q^{-2} \Lambda^{10} +5\Lambda^{12}. \end{align*} \end{Remark}
We apply \lemref{theth1} to compute the limit of the $K$-theoretic Donaldson invariants at $F$.
\begin{Proposition}\label{p11r} For $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$, and all $n\in {\mathbb Z}$, we have \begin{enumerate} \item $\displaystyle{ 1+\chi^{X,F_+}_{F}(nF)=\frac{1}{(1-\Lambda^4)^{n+1}}}.$ \item For all $r>0$ there is a polynomial $h^0_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$ with $\chi^{X,F_+}_{F}(nF,P^{2r})=h^0_r(n,\Lambda^4).$ \item $\displaystyle{1+(2n+5)\Lambda^4+\chi^{X,F_+}_{0}(nF)=\frac{1}{(1-\Lambda^4)^{n+1}}}$. \item For all $r>0$ there is a polynomial $\overline h^0_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}$ with $\chi^{X,F_+}_{0}(nF,P^{2r})=\overline h^0_r(n,\Lambda^4).$ \end{enumerate} \end{Proposition} \begin{proof} (1) and (3) are proven in \cite[Prop.~5.14]{GY}.
(2) Let $r>0$. By \propref{Fplus} we have \begin{align*} \chi^{X,F_+}_{F}(nF,P^{2r})&=\mathop{\text{\rm Coeff}}_{q^0}\Big[\frac{1}{2\sinh(h)}\Lambda^2{\widetilde \theta}_4(h)^{4n+8}u'h^*M^{2r}\Big]=\mathop{\text{\rm Coeff}}_{q^0}\Big[g_{1}^r {\widetilde \theta}_4(h)^{4n+8}\Big], \end{align*} where the last step uses \lemref{theth1}(1) and the fact that ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$, and thus ${\widetilde \theta}_4(h)^{4n+8}\in {\mathbb Q}[[nq^2\Lambda^2,nq^4,q^2\Lambda^2,q^4]]^\times$. As $g_{1}^r(q^{-2}\Lambda^2,\Lambda^4)$ is a polynomial of degree at most $r$, we see that $\mathop{\text{\rm Coeff}}_{q^0}\big[g_{1}^r {\widetilde \theta}_4(h)^{4n+8}\big]$ is a polynomial of degree at most $r$ in $\Lambda^4$, $n\Lambda^4$.
(4) Let $r>0$. By \propref{Fplus} and \lemref{theth1}(2) we have \begin{align*} \chi^{X,F_+}_{0}(nF,P^{2r})&=\mathop{\text{\rm Coeff}}_{q^0}\Big[-\frac{1}{2}\coth(h)\Lambda^2{\widetilde \theta}_4(h)^{4n+8}u'h^*M^{2r}\Big]=\mathop{\text{\rm Coeff}}_{q^0}\Big[g_{2}^r{\widetilde \theta}_4(h)^{4n+8}\Big]. \end{align*} As $g_{2}^r$ is a polynomial of degree at most $r+1$, we see as in (2) that $\mathop{\text{\rm Coeff}}_{q^0}\big[g_{2}^r{\widetilde \theta}_4(h)^{4n+8}\big]$ is a polynomial of degree at most $r+1$ in $\Lambda^4$, $n\Lambda^4$. \end{proof}
\begin{Remark} We list the first few polynomials $h^0_r$, $\overline h^0_r$. \begin{align*} &h^0_1=(4n + 4)\Lambda^4, \quad h^0_2=(16n + 16)\Lambda^4 - (8n^2 + 6n -3)\Lambda^8, \\ &h^0_3=(64n + 64)\Lambda^4 + (-64n^2 + 24n + 100)\Lambda^8 + (\hbox{$\frac{32}{3}$}n^3 - 8n^2 - \hbox{$\frac{68}{3}$}n)\Lambda^{12},\\
&\overline h^0_1=-(4n + 16)\Lambda^4 + (4n^2 + 15n + 13)\Lambda^8,\\ &\overline h^0_2=-(16n + 64)\Lambda^4 + (24n^2 + 78n + 18)\Lambda^8 - (\hbox{$\frac{16}{3}$}n^3 + 20n^2 + \hbox{$\frac{50}{3}$}n -2)\Lambda^{12}.
\end{align*} \end{Remark}
\begin{Proposition}\label{p11GM} For $X={\mathbb P}^1\times{\mathbb P}^1$ and $n\in {\mathbb Z}$, and for $X=\widehat {\mathbb P}^2$ and $n\in {\mathbb Z}+\frac{1}{2}$, and all $r\in {\mathbb Z}_{\ge 0}$ we have the following. \begin{enumerate} \item $\displaystyle{\chi^{X,F_+}_{F}(nF+G,P^{2r+1})=\frac{1}{(1-\Lambda^4)^{2n+1-2r}}+h^1_r(n,\Lambda^4)},$ where $h^1_r(n,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$. \item $\displaystyle{ \chi^{X,F_+}_{0}(nF+G,P^{2r})=\frac{1}{(1-\Lambda^4)^{2n+2-2r}}+\overline h^1_r(n,\Lambda^4)},$ where $\overline h^1_r(n,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}$. \end{enumerate} \end{Proposition} \begin{pf} (1) First we deal with the case $r=0$. We do this by ascending and descending induction on $n$. Let $n=-1$. By \corref{blowp} we know that $\chi^{X,G}_{F}(-F+G,P)=0$ and $$\chi^{X,F_+}_{F}(-F+G,P)=\sum_{\xi}\delta_\xi^X(-F+G,P),$$
where $\xi$ runs through all classes of type $(F)$ with $G\xi<0<F\xi$, i.e. through all $\xi=2mG-(2n-1)F$ with $n,m\in {\mathbb Z}_{>0}$. By \lemref{vanwall} we have $\delta_{2mG-(2n-1)F}^X(-F+G,P)=0$ unless
$|6n-3-2m|+3\ge 8nm-4m$, and we check easily that this can only happen for $n=m=1$. Then computing with the lowest order terms of the formula of \defref{wallcrossterm} gives $\delta_{2G-F}^X(-F+G,P)=-\Lambda^4$. Thus $\chi^{X,F_+}_{F}(-F+G,P)=-\Lambda^4=(1-\Lambda^4)-1$. This shows the case $n=-1$.
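The claim that the inequality $8nm-4m\le|6n-3-2m|+3$ forces $n=m=1$ can be double-checked by brute force; the following standalone Python sketch (illustrative only) searches a window that is large enough, since the left side grows like $8nm$ while the right side is at most linear in $n$ and $m$:

```python
# All positive integer solutions of 8nm - 4m <= |6n - 3 - 2m| + 3.
# Outside this window the left side (~8nm) dominates the right side
# (at most 6n + 2m + 6), so no solutions are missed.
solutions = [(n, m)
             for n in range(1, 60)
             for m in range(1, 60)
             if 8*n*m - 4*m <= abs(6*n - 3 - 2*m) + 3]
assert solutions == [(1, 1)]
```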
Now let $n\in \frac{1}{2}{\mathbb Z}$ be general. Then we have by \propref{Fplus} that \begin{align*}(1-\Lambda^4)\chi^{X,F_+}_{F}&((n+1/2)F+G,P)-\chi^{X,F_+}_{F}(nF+G,P)\\ &=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2\sinh(3h/2)}{\widetilde \theta}_4(h)^{6(n+2)}\big({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\big)Mu'h^*\Lambda^2\right] \\&=\mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6(n+2)}\Lambda^4\big]= \Lambda^4,\end{align*} where in the last line we have used \lemref{theth1}(3) and the fact that ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$.
Thus $$(1-\Lambda^4)\big(1+\chi^{X,F_+}_{F}((n+1/2)F+G,P)\big)-(1+\chi^{X,F_+}_{F}(nF+G,P))=(1-\Lambda^4)-1+\Lambda^4=0$$
and using the result for $n=-1$, $r=0$, the result for $r=0$ follows by ascending and descending induction over $n\in \frac{1}{2}{\mathbb Z}$.
Let $r>0$, $n\in \frac{1}{2}{\mathbb Z}$. By \propref{Fplus} we have \begin{align*}\chi^{X,F_+}_{F}\big(nF+G,&P^{2r+1}\big)-(1-\Lambda^4) \chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)\\ &=\mathop{\text{\rm Coeff}}_{q^0}\Big[\frac{1}{2\sinh(3h/2)}{\widetilde \theta}_4(h)^{6n+9}M^{2r-1}\big(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)\big)u'h^*\Lambda^2\Big] \\&= \mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6n+9}g^r_3\big],\end{align*} where the last line is by \lemref{theth1}(4). As ${\widetilde \theta}_4(h)^{6n+9}\in {\mathbb Q}[[nq^2\Lambda^2,nq^4,q^2\Lambda^2,q^4]]^\times$, and $g_3^r$ is a polynomial in $q^{-2}\Lambda^2$, $\Lambda^4$ of degree at most $r$, we find, as in the proof of \propref{p11r}, that $$h'_r(n,\Lambda^4):=\chi^{X,F_+}_{F}(nF+G,P^{2r+1})-(1-\Lambda^4) \chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}.$$ Assume now by induction on $r$ that $$\chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)=\frac{1}{(1-\Lambda^4)^{2n-2r+2}}+h^1_{r-1}\big(n-1/2,\Lambda^4\big)$$
with $h^1_{r-1}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r-1}.$ Then \begin{align*}\chi^{X,F_+}_{F}\big(nF+G,P^{2r+1}\big)-\frac{1}{(1-\Lambda^4)^{2n-2r+1}}&=(1-\Lambda^4) h^1_{r-1}(n-1/2,\Lambda^4)+h'_r(n,\Lambda^4). \end{align*} Thus we put $$h^1_r(n,\Lambda^4):=(1-\Lambda^4) h^1_{r-1}(n-1/2,\Lambda^4)+h'_r(n,\Lambda^4).$$ As $h'_r(n,\Lambda^4)$ has degree at most $r$ in $\Lambda^4$, $n\Lambda^4$, the claim follows.
(2) The case $r=0$ is proven in \cite[Prop.~5.16]{GY}, with $\overline h^1_0(n,\Lambda^4)=-1-(3n+7)\Lambda^4.$ For $r>0$ we prove the result by induction. Let $r>0$, then
we have by \propref{Fplus} and \lemref{theth1}(5) \begin{align*}\chi^{X,F_+}_{0}\big(nF+G,P^{2r}\big)&-(1-\Lambda^4)\chi^{X,F_+}_{0}\big((n-1/2)F+G,P^{2r-2}\big)\\&=\mathop{\text{\rm Coeff}}_{q^0}\Big[-\frac{1}{2}\coth(3h/2){\widetilde \theta}_4(h)^{6n+9}M^{2r-2}\big(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)\big)u'h^*\Lambda^2\Big] \\&= \mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6n+9}g^{r}_4\big]=:l'_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}.\end{align*} Assume now that $$\chi^{X,F_+}_{0}\big((n-1/2)F+G,P^{2r-2}\big)=\frac{1}{(1-\Lambda^4)^{2n-2r+3}}+\overline h^1_{r-1}(n-1/2,\Lambda^4),$$ with $\overline h^1_{r-1}(n-1/2,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$. Then \begin{align*}\chi^{X,F_+}_{0}(nF+G,P^{2r})-\frac{1}{(1-\Lambda^4)^{2n-2r+2}} &=(1-\Lambda^4) \overline h^1_{r-1}(n-1/2,\Lambda^4)+l'_r(n,\Lambda^4). \end{align*} The result follows by induction on $r$. \end{pf}
\begin{Remark} We list the $h^1_r(n,\Lambda^4),\ \overline h^1_r(n,\Lambda^4)$ for small values of $r$, \begin{align*} &h^1_0=-1,\quad h^1_1=-1+(6n + 5)\Lambda^4, \quad h^1_2=-1+(30n + 19)\Lambda^4 - (18n^2 + 15n - 2)\Lambda^8,\\ & h^1_3=-1+(126n + 69)\Lambda^4+ (-162n^2 + 9n + 114)\Lambda^8 + (36n^3 - 43n - 7)\Lambda^{12},\\
&\overline h^1_0=-1-(3n + 7)\Lambda^4, \quad \overline h^1_1=-1-(6n + 20)\Lambda^4 + (9n^2 + \hbox{$\frac{69}{2}$}n + 32)\Lambda^8,\\& \overline h^1_2=-1-(18n +78)\Lambda^4 + (54n^2 + 189n + 120)\Lambda^8 - (18n^3 + 81n^2 + 109n +40)\Lambda^{12}. \end{align*} \end{Remark}
\begin{Proposition}\label{p112GM} Let $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$. \begin{enumerate} \item
For all $n\in {\mathbb Z}$ $$\chi^{X,F_+}_{F}(nF+2G)= \frac{1}{2}\frac{(1+\Lambda^4)^n-(1-\Lambda^4)^n}{(1-\Lambda^4)^{3n+3}}.$$ \item For all $n\in {\mathbb Z}$ and all $r>0$ we have $$\chi^{X,F_+}_{F}(nF+2G,P^{2r})=\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}-h^2_r(n,\Lambda^4),$$ where $h^2_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$. \item $$\chi^{X,F_+}_{0}(nF+2G)= \frac{1}{2}\frac{(1+\Lambda^4)^n+(1-\Lambda^4)^n}{(1-\Lambda^4)^{3n+3}}-1-(4n+9)\Lambda^4.$$ \item For all $n\in {\mathbb Z}$ and all $r>0$ we have $$\chi^{X,F_+}_{0}(nF+2G,P^{2r})=\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}-\overline h^2_r(n,\Lambda^4),$$ where $\overline h^2_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$. \end{enumerate} \end{Proposition} \begin{pf} (1) and (3) were proven in \cite[Prop.~5.17]{GY}. (2) We will first show by induction on $r$ that \begin{equation}\label{p12req} -\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\tanh(h){\widetilde \theta}_4(h)^{8(n+2)}u'h^*\Lambda^2M^{2r}\big]=2^r\frac{(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+s'_r(n,\Lambda^4) \end{equation} for some polynomial $s'_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$. For $r=0$ this is shown in the proof of \cite[Prop.~5.17]{GY} with $s_0'=-1-(4n+9)\Lambda^4.$
Fix $r>0$, assume that \eqref{p12req} holds for $r-1$ and for all $n\in {\mathbb Z}$. By \lemref{theth1}(6) we have \begin{align*} -\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\tanh(h)&\big(M^{2r}{\widetilde \theta}_4(h)^{8(n+2)}-(1-\Lambda^4)^2M^{2r-2}{\widetilde \theta}_4(h)^{8(n+2)}-{\widetilde \theta}_4(h)^{8(n+1)}M^{2r-2}\big)u'h^*\Lambda^2\big]\\ &=\mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{8(n+1)}g_5^{r}\big]=:s''_r(n,\Lambda^4).\end{align*} Again, as ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$, and $g_5^{r}$ has degree at most $r+2$ in $q^{-2}\Lambda^2,\Lambda^4$, we see that $s''_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+2}$. Thus we get by induction on $r$ \begin{align*} -\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[&\tanh(h)M^{2r}{\widetilde \theta}_4(h)^{8(n+2)}u'h^*\Lambda^2\big]= \frac{2^{r-1}(1+\Lambda^4)^{n-r+1}}{(1-\Lambda^4)^{3n+3-2r}}+(1-\Lambda^4)^2s'_{r-1}(n,\Lambda^4) \\&+ \frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+2-2r}}+s'_{r-1}(n-1,\Lambda^4)+s''_r(n,\Lambda^4)=\frac{2^{r}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+s'_{r}(n,\Lambda^4)\end{align*} with $$s'_{r}(n,\Lambda^4)=(1-\Lambda^4)^2s'_{r-1}(n,\Lambda^4)+s'_{r-1}(n-1,\Lambda^4)+s''_r(n,\Lambda^4).$$ As $s'_{r-1}\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r}$, $s''_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+2},$ we get
$s'_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
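The last equality of the induction step rests on the rational-function identity $$\frac{2^{r-1}(1+x)^{n-r+1}}{(1-x)^{3n+3-2r}}+\frac{2^{r-1}(1+x)^{n-r}}{(1-x)^{3n+2-2r}}=\frac{2^{r}(1+x)^{n-r}}{(1-x)^{3n+3-2r}}$$ with $x=\Lambda^4$, i.e. on $(1+x)+(1-x)=2$. An exact spot-check with rational arithmetic (a standalone sketch, not part of the paper's computations):

```python
from fractions import Fraction

def lhs(n, r, x):
    return (Fraction(2)**(r-1) * (1+x)**(n-r+1) / (1-x)**(3*n+3-2*r)
            + Fraction(2)**(r-1) * (1+x)**(n-r) / (1-x)**(3*n+2-2*r))

def rhs(n, r, x):
    return Fraction(2)**r * (1+x)**(n-r) / (1-x)**(3*n+3-2*r)

x = Fraction(1, 7)          # stand-in value for Lambda^4
for n in range(1, 6):
    for r in range(1, n+1):
        assert lhs(n, r, x) == rhs(n, r, x)
```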
Now we show (2): We note that $\frac{1}{2\sinh(2h)}=\frac{1}{4}\big(\coth(h)-\tanh(h)\big)$. Therefore we get by \propref{Fplus} \begin{align*} \chi^{X,F_+}_{F}(nF+2G,P^{2r})&=\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(\coth(h)-\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\ =&-\frac{1}{2}\chi^{X,F_+}_0((2n+2) F,P^{2r})+\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\frac{1}{2}\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\ =&-\frac{1}{2}\overline h^0_r(2n+2,\Lambda^4)+\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+\frac{1}{2}s_r'(n,\Lambda^4). \end{align*} Here in the last line we have used \propref{p11r} and \eqref{p12req}. The claim follows with $h^2_r(n,\Lambda^4)=\frac{1}{2}\big(s_r'(n,\Lambda^4)-\overline h^0_r(2n+2, \Lambda^4)\big)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
Finally we show (4): $-\frac{1}{2}\coth(2h)=\frac{1}{4}\big(-\coth(h)-\tanh(h)\big)$ and
\propref{Fplus} give \begin{align*} \chi^{X,F_+}_{0}(nF+2G,P^{2r})&=\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\coth(h)-\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\ =&\frac{1}{2}\chi^{X,F_+}_0((2n+2)F,P^{2r})+\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\frac{1}{2}\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\ =&\frac{1}{2}\overline h^0_r(2n+2,\Lambda^4)+\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+\frac{1}{2}s_r'(n,\Lambda^4). \end{align*} The claim follows with $\overline h^2_r=\frac{1}{2}\big(s_r'(n,\Lambda^4)+\overline h^0_r(2n+2, \Lambda^4)\big)$. \end{pf}
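The two hyperbolic identities used in this proof, $\frac{1}{2\sinh(2h)}=\frac{1}{4}\big(\coth(h)-\tanh(h)\big)$ and $-\frac{1}{2}\coth(2h)=\frac{1}{4}\big(-\coth(h)-\tanh(h)\big)$, can be verified numerically (a standalone sketch with sample real values of $h$):

```python
import math

def coth(t):
    return math.cosh(t) / math.sinh(t)

for h in (0.3, 1.1, 2.7):
    # 1/(2 sinh 2h) = (coth h - tanh h)/4
    assert abs(1/(2*math.sinh(2*h)) - (coth(h) - math.tanh(h))/4) < 1e-12
    # -coth(2h)/2 = (-coth h - tanh h)/4
    assert abs(-coth(2*h)/2 - (-coth(h) - math.tanh(h))/4) < 1e-12
```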
\begin{Remark} Again we can readily compute the first few of the $h^2_r$, $\overline h^2_r$. \begin{align*} &h^2_1=-1,\quad h^2_2=-2 + (8n + 6)\Lambda^4,\quad h^2_3=-4 + (48n + 24)\Lambda^4 - (32n^2 + 28n)\Lambda^8,\\ &h^2_4=-8 + (224n + 72)\Lambda^4 + (-320n^2 - 40n + 128)\Lambda^8 + (\hbox{$\frac{256}{3}$}n^3 + 32n^2 - \hbox{$\frac{184}{3}$}n - 16)\Lambda^{12},\\ &\overline h^2_1= -1 -(8n + 24)\Lambda^4 + (16n^2 + 62n + 59)\Lambda^8,\\ &\overline h^2_2=-2 -(24n +90)\Lambda^4 + (96n^2 + 348n + 270)\Lambda^8 -(\hbox{$\frac{128}{3}$}n^3 + 208n^2 + \hbox{$\frac{964}{3}$}n +154)\Lambda^{12}. \end{align*} It appears that one has $h^2_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r-1}$ and $\overline h^2_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$. \end{Remark}
\begin{Corollary}\label{P2P12GH} Let $X=\widehat {\mathbb P}^2$ or $X={\mathbb P}^1\times {\mathbb P}^1$, let $\omega\in H^2(X,{\mathbb R})$ be a class with $\langle\omega^2\rangle>0$. \begin{enumerate} \item For $n\in {\mathbb Z}_{\ge 0}$ we have \begin{align*} \chi_{0}^{X,\omega}(nF)&\equiv \chi_{F}^{X,\omega}(nF)\equiv \frac{1}{(1-\Lambda^4)^{n+1}}, \quad \chi_{0}^{X,\omega}(nF,P^{2r})\equiv \chi_{F}^{X,\omega}(nF,P^{2r})\equiv 0 \hbox{ for }r>0. \end{align*} \item For $n\in {\mathbb Z}_{\ge 0}$ if $X={\mathbb P}^1\times {\mathbb P}^1$ and $n\in {\mathbb Z}_{\ge 0}+\frac{1}{2}$ if $X=\widehat {\mathbb P}^2$, we have \begin{align*} \chi_{0}^{X,\omega}(nF+G,P^{2r})&\equiv \frac{1}{(1-\Lambda^4)^{2n+2-2r}},\quad \chi_{F}^{X,\omega}(nF+G,P^{2r+1})\equiv \frac{1}{(1-\Lambda^4)^{2n+1-2r}}. \end{align*} \item For $n\in {\mathbb Z}_{\ge 0}$ we have \begin{align*} \chi_{0}^{X,\omega}(nF+2G)&\equiv \frac{(1+\Lambda^4)^{n}+(1-\Lambda^4)^n}{2(1-\Lambda^4)^{3n+3}},\quad \chi_{F}^{X,\omega}(nF+2G)\equiv \frac{(1+\Lambda^4)^{n}-(1-\Lambda^4)^n}{2(1-\Lambda^4)^{3n+3}}\\ \chi_{0}^{X,\omega}(nF+2G,P^{2r})&\equiv \chi_{F}^{X,\omega}(nF+2G,P^{2r}) \equiv \frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}\hbox{ for }1\le r \le n. \end{align*} \end{enumerate} \end{Corollary}
\begin{proof} Write $\omega=uF+vG$, with $u,v\in {\mathbb R}_{>0}$, write $w=v/u$. By \propref{p11r}, \propref{p11GM}, \propref{p112GM} and \lemref{vanwall} it is sufficient to prove that for $L=nF+mG$, with $0\le m\le 2$, under the assumptions of the Corollary, there are only finitely many classes $\xi$ of type $(0)$ or type $(F)$ with $\langle\omega,\xi\rangle\ge 0>\langle F,\xi\rangle$, such that $\delta_\xi^X(nF+mG,P^s)\ne 0$, i.e. such that
$-\xi^2\le |\langle\xi,(L-K_X)\rangle| +s+2$. These walls are of the form $\xi=aF-bG$ with $a\in {\mathbb Z}_{>0}$ and $b\in 2{\mathbb Z}_{>0}$, and $aw\ge b$, and the condition becomes \begin{equation} \label{abwall}
2ab\le |b(n+2)-a(m+2)| +s+2. \end{equation}
Let $\xi=aF-bG$ be such a wall with $\delta_\xi^X(nF+mG,P^s)\ne 0$. If $a(m+2)\le b(n+2)$, then \eqref{abwall} becomes $$2ab\le b(n+2)-a(m+2) +s+2 \le b(n+2)+s,$$ so that $(2a-n-2)b\le s$, and hence $2a-n-2\le \frac{s}{b}\le \frac{s}{2}$. Thus $a$ is bounded, and by the condition $b\le aw$ also $b$ is bounded, so there are only finitely many possibilities for $a,b$.
Now assume $a(m+2)\ge b(n+2)$. Then as $b\ge 2$, \eqref{abwall} gives $$4a\le 2ab\le a(m+2)-2(n+2)+s+2,$$ i.e. $(2-m)a\le -2n+s-2$. If $m=0,1$, then $a\le \frac{-2n+s-2}{2-m}$, thus $a$ is bounded, and by $a(m+2)\ge b(n+2)$ also $b$ is bounded. If $m=2$, the inequality becomes $2n\le s-2$, so if $2n\ge s$ there are no walls with $\delta_\xi^X(nF+mG,P^s)\ne 0$. Thus the claim follows. \end{proof}
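The case analysis above can be checked by enumerating walls directly. The sketch below (with sample values $n=3$, $s=4$, $w=2$; an illustration, not part of the proof) lists all pairs $(a,b)$ with $a>0$, $b>0$ even, $b\le wa$ satisfying \eqref{abwall}, and confirms that each falls under one of the two bounds derived in the proof:

```python
def walls(n, m, s, w=2, amax=200):
    """All (a, b) with a > 0, b > 0 even, b <= w*a satisfying
    2ab <= |b(n+2) - a(m+2)| + s + 2, searched up to a <= amax."""
    return [(a, b)
            for a in range(1, amax + 1)
            for b in range(2, w * a + 1, 2)
            if 2*a*b <= abs(b*(n+2) - a*(m+2)) + s + 2]

n, s = 3, 4
for m in (0, 1, 2):
    for a, b in walls(n, m, s):
        if a*(m+2) <= b*(n+2):           # first case of the proof
            assert a <= (n+2)/2 + s/4
        else:                            # second case of the proof
            assert m < 2 and a <= (s - 2*n - 2)/(2 - m)
```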
\begin{Remark}\label{nonwalls} As we will use this later in \secref{CompP2}, we explicitly state the bounds obtained in the above proof in the case of $X=\widehat {\mathbb P}^2$, $\omega=H$ (i.e. $w=2$ in the notation above). Fix $n\in {\mathbb Z}_{\ge 0}$, $s\in {\mathbb Z}_{\ge 0}$. Let $\xi=aF-bG$ be a class of type $(0)$ or $(F)$ with $\langle\xi, \omega\rangle\ge 0>\langle\xi,F\rangle$. \begin{enumerate} \item If $\delta_\xi^X(nF-nE,P^s)=\delta_\xi^{X}(nF,P^s)\ne 0$, then \begin{enumerate} \item either $2a\le (n+2) b$ and $0<a\le\frac{n+2}{2}+\frac{s}{4}$ and $0<b\le 2a$, \item or $0<(n+2)b\le 2a$ and $0<a\le \frac{s}{2}-n-1$. \end{enumerate} \item If $\delta_\xi^X(nF-(n-1)E,P^s)=\delta_\xi^{X}((n-1/2)F+G,P^s)\ne 0$, then \begin{enumerate} \item either $3a\le (n+\frac{3}{2}) b$ and $0<a\le \frac{n+3/2}{2}+\frac{s}{4}$ and $0<b\le 2a$, \item or $0<(n+3/2)b\le 3a$ and $0<a\le s-2n-2$. \end{enumerate} \end{enumerate} \end{Remark}
\begin{Remark} Note that the results of \corref{P2P12GH} are compatible with \conref{ratconj}. This is particularly remarkable for part (3) of \corref{P2P12GH}, which can only be proven for $r\le n$, while its correctness for $r>n$ would contradict \conref{ratconj}.
The fact that the formulas hold without restriction for $\chi^{X,F_+}_{0}(nF+2G,P^{2r})$, $\chi^{X,F_+}_{F}(nF+2G,P^{2r})$ is not in contradiction to \conref{ratconj}, because it is only claimed for $\chi^{Y,\omega}_{c_1}(L,P^r)$ with $\omega$ an ample class on $Y$. \end{Remark}
\section{Computation of the invariants of the plane}
We now want to use the results obtained so far to give an algorithm to compute the generating functions $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ of the $K$-theoretic Donaldson invariants of the projective plane. We use this algorithm to prove that these generating functions are always rational functions of a very special kind. Then we will use this algorithm to explicitly compute $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ for not too large values of $n$ and $r$. First we explicitly carry out the algorithm by hand when $r=0$ and $n\le 5$ in an elementary but tedious computation.
Finally we implemented the algorithm as a PARI program, which in principle can prove a formula for $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ for any $n$ and $r$. The computations have been carried out for $r=0$ and $n\le 11$, and for $r\le 16$ and $n\le 8$.
\subsection{The strategy}\label{strategy}
\corref{blowdownmn} says in particular the following. \begin{Remark}\label{npol} \begin{enumerate} \item For all $n\in {\mathbb Z}_{>0}$ there exist unique polynomials $f_{n},g_n\in {\mathbb Q}[x,\lambda^4]$ and an integer $N_n$, such that $f_n S_n+g_n S_{n+1}=\lambda(1-\lambda^4)^{N_n}$. \item For all $n\in {\mathbb Z}_{>0}$ there exist unique polynomials $ h_{n}, l_n\in {\mathbb Q}[x^2,\lambda^4]$ and an integer $M_n$, such that
$h_n R_n+ l_n R_{n+1}=(1-\lambda^4)^{M_n}$. \end{enumerate} \end{Remark}
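In practice, cofactors as in \remref{npol} can be found with the extended Euclidean algorithm in ${\mathbb Q}(\lambda)[x]$ applied to $S_n,S_{n+1}$ (respectively $R_n,R_{n+1}$). Since the blowup polynomials themselves are not reproduced in this section, the following standalone Python sketch illustrates the mechanism on a toy pair of coprime polynomials over ${\mathbb Q}$:

```python
from fractions import Fraction

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def padd(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else Fraction(0)) +
                 (q[i] if i < len(q) else Fraction(0)) for i in range(n)])

def pmul(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def pdivmod(p, q):
    p = p[:]
    quo = [Fraction(0)] * max(len(p) - len(q) + 1, 0)
    while len(p) >= len(q):
        c = p[-1] / q[-1]
        d = len(p) - len(q)
        quo[d] = c
        for i, b in enumerate(q):
            p[i + d] -= c * b
        trim(p)
    return trim(quo), p

def ext_gcd(p, q):
    """Extended Euclid in Q[x]: returns (g, f, h), g monic, f*p + h*q = g.
    Polynomials are coefficient lists [c0, c1, ...] of Fractions."""
    r0, r1 = p[:], q[:]
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        quo, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        s0, s1 = s1, padd(s0, pmul([-c for c in quo], s1))
        t0, t1 = t1, padd(t0, pmul([-c for c in quo], t1))
    inv = 1 / r0[-1]
    return ([c * inv for c in r0], [c * inv for c in s0], [c * inv for c in t0])

# Toy stand-ins for S_k, S_{k+1}: x^2 - 1 and x^2 - 4 are coprime.
p = [Fraction(-1), Fraction(0), Fraction(1)]
q = [Fraction(-4), Fraction(0), Fraction(1)]
g, f, h = ext_gcd(p, q)
assert g == [Fraction(1)]
assert padd(pmul(f, p), pmul(h, q)) == [Fraction(1)]
```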
Using these polynomials, we can determine the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ in terms of those of $\widehat {\mathbb P}^2$. \begin{Corollary}\label{blowdownform} For all $n,k\in {\mathbb Z}$, $r\in {\mathbb Z}_{\ge 0}$ we have \begin{align*} \tag{H}\chi^{{\mathbb P}^2,H}_H(nH,P^r)&=\frac{1}{\Lambda(1-\Lambda^4)^{N_k}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot f_k(P,\Lambda)\Big)\\&\qquad\qquad + \chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot g_k(P,\Lambda)\Big)\Big),\\ \tag{0}\chi^{{\mathbb P}^2,H}_0(nH,P^r)&=\frac{1}{(1-\Lambda^4)^{M_k}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot h_k(P,\Lambda)\Big)\\&\qquad\qquad + \chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot l_k(P,\Lambda)\Big)\Big). \end{align*} \end{Corollary} \begin{proof} Note that $(n-k)G+\frac{n+k}{2}F=nH-kE$. Therefore we get by \thmref{nblow} that \begin{align*} \chi^{\widehat {\mathbb P}^2,H}_{F}&\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot f_k(P,\Lambda)\Big) + \chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot g_k(P,\Lambda)\Big)\\ &=\chi^{{\mathbb P}^2,H}_H\big(nH,P^r \cdot\big(f_k(P,\Lambda) S_{k}(P,\Lambda)+g_k(P,\Lambda) S_{k+1}(P,\Lambda)\big)\big), \end{align*} and the result follows by $f_k(P,\Lambda) S_{k}(P,\Lambda)+g_k(P,\Lambda) S_{k+1}(P,\Lambda)=\Lambda(1-\Lambda^4)^{N_k}$. In the same way \begin{align*}\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n+1-k)G&+\frac{n+k-1}{2}F,P^r\cdot h_k(P,\Lambda)\Big) + \chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot l_k(P,\Lambda)\Big)\\ &=\chi^{{\mathbb P}^2,H}_0\big(nH,P^r \cdot\big(h_k(P,\Lambda) R_{k}(P,\Lambda)+l_k(P,\Lambda) R_{k+1}(P,\Lambda)\big)\big), \end{align*} and $h_k(P,\Lambda) R_{k}(P,\Lambda)+l_k(P,\Lambda) R_{k+1}(P,\Lambda)=(1-\Lambda^4)^{M_k}$. \end{proof}
By \corref{P2P12GH} we can use this in two different ways to compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$. \begin{enumerate} \item We apply parts (1) and (2) of \corref{P2P12GH} to compute the $\chi_0^{\widehat {\mathbb P}^2,H}(nF,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(nF,P^s)$, $\chi_0^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, and then apply \corref{blowdownform} with $k=n$. Parts (1) and (2) of \corref{P2P12GH} apply for all values of $n$ and $s$, so this method can always be used. We will apply this in \secref{P2rat} to prove the rationality of the generating functions of the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and of blowups of ${\mathbb P}^2$, and then in \secref{CompP2} to compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ using a PARI program. \item We apply parts (2) and (3) of \corref{P2P12GH} to compute the $\chi_0^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_0^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, and then apply \corref{blowdownform} with $k=n-1$. This requires less computation than the first approach. However, as part (3) of \corref{P2P12GH} holds for $\chi_0^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$ only when $s\le 2n-2$, this method only allows us to compute $\chi_0^{{\mathbb P}^2,H}(nH,P^r)$ when $$r+\max\big(\deg_x(h_{n-1}(x,\lambda)),\deg_x(l_{n-1}(x,\lambda))\big)\le 2n-2, $$ and in the same way it only allows us to compute $\chi_H^{{\mathbb P}^2,H}(nH,P^r)$ when $$r+\max\big(\deg_x(f_{n-1}(x,\lambda)),\deg_x(g_{n-1}(x,\lambda))\big)\le 2n-2.$$
As the degree of $S_n$ and $R_n$ in $x$ grows faster than $2n$, this requires that $n$ and $r$ are both relatively small. We will use this to compute $\chi_0^{{\mathbb P}^2,H}(nH)$, $\chi_H^{{\mathbb P}^2,H}(nH)$ by hand for $n=4,5$. \end{enumerate}
\subsection{Rationality of the generating function}\label{P2rat} We now use the above algorithm to prove a structural result about the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and the blowups of ${\mathbb P}^2$. \begin{Theorem}\label{P2rat1} \begin{enumerate} \item For all $n\in {\mathbb Z}$, $r\in {\mathbb Z}_{\ge 0}$ with $n+r$ even, there exists an integer $d^1_{n,r}$ and a polynomial $p^1_{n,r}\in {\mathbb Q}[\Lambda^4]$, such that $$\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)\equiv \frac{p^1_{n,r}}{\Lambda (1-\Lambda^4)^{d^1_{n,r}}}.$$ Furthermore we can choose $p^1_{n,0}\in {\mathbb Z}[\Lambda^4]$. \item For all $n\in {\mathbb Z}$, $r\in 2{\mathbb Z}_{\ge 0}$ there exists an integer $d^0_{n,r}$ and a polynomial $p^0_{n,r}\in {\mathbb Q}[\Lambda^4]$, such that $$\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)\equiv \frac{p^0_{n,r}}{(1-\Lambda^4)^{d^0_{n,r}}}.$$ Furthermore we can choose $p^0_{n,0}\in {\mathbb Z}[\Lambda^4]$. \end{enumerate} \end{Theorem}
\begin{proof} By \corref{P2P12GH} there exist for all $L=nF$, $L=(n-1/2)F+G$ with $n\in {\mathbb Z}$ and all $r\in {\mathbb Z}_{\ge 0}$ integers $e^F_{n,r}$, $e^0_{n,r}$ and polynomials $q^F_{n,r}, q^0_{n,r}\in {\mathbb Q}[\Lambda^4]$, so that $$\chi^{\widehat {\mathbb P}^2,H}_F(L,P^r)=\frac{q^F_{n,r}}{(1-\Lambda^4)^{e^F_{n,r}}}, \quad \chi^{\widehat {\mathbb P}^2,H}_0(L,P^r)=\frac{q^0_{n,r}}{(1-\Lambda^4)^{e^0_{n,r}}}.$$ Thus part (H) of \corref{blowdownform} (with $k=n$) gives $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)= \frac{p^1_{n,r}}{\Lambda (1-\Lambda^4)^{d^1_{n,r}}},$ for suitable $d^1_{n,r}\in {\mathbb Z}_{\ge 0}$, $p^1_{n,r}\in {\mathbb Q}[\Lambda^4]$, and similarly part (0) of \corref{blowdownform} (with $k=n$) gives $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)= \frac{p^0_{n,r}}{ (1-\Lambda^4)^{d^0_{n,r}}},$ for suitable $d^0_{n,r}\in {\mathbb Z}_{\ge 0}$, $p^0_{n,r}\in {\mathbb Q}[\Lambda^4]$. Finally we want to see that $p^1_{n,0}\in {\mathbb Z}[\Lambda^4]$, and that we can choose $p^0_{n,0}$ so that it is in ${\mathbb Z}[\Lambda^4]$. By definition we have $$\chi^{{\mathbb P}^2,H}_{H}(nH)=\sum_{k>0} \chi(M^{X}_{H}(H,4k-1),\mu(nH))\Lambda^{4k-1}\in \Lambda^3{\mathbb Z}[[\Lambda^4]].$$ Writing $p^1_{n,0}=\sum_{k\ge 0} a_k \Lambda^{4k}$ we see from the formula $\chi^{{\mathbb P}^2,H}_{H}(nH)=\frac{p^1_{n,0}}{\Lambda (1-\Lambda^4)^{d^1_{n,0}}}$ that $a_0=0$, and inductively that $$a_k=\chi(M^{X}_{H}(H,4k-1),\mu(nH))-\sum_{i=1}^{k}a_{k-i} \binom{d^{1}_{n,0}+i-1}{i}\in {\mathbb Z}.$$ For $k$ large enough we have that the coefficient of $\Lambda^{4k}$ of $\chi^{{\mathbb P}^2,H}_{0}(nH)$ is $\chi(M^{X}_{H}(0,4k),\mu(nH))$. Thus, adding a polynomial $h\in {\mathbb Q}[\Lambda^4]$ to $p^0_{n,0}$, we can assume that $\frac{p^0_{n,0}}{ (1-\Lambda^4)^{d^0_{n,0}}}\in {\mathbb Z}[[\Lambda^4]]$. One concludes in the same way as for $p^1_{n,0}$.
\end{proof} Indeed, a more careful argument shows that we can choose $p^0_{n,r}, \ p^1_{n,r}\in {\mathbb Z}[\Lambda^4]$ for all $r$.
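The inductive integrality argument above is just the inversion of multiplication by $(1-\Lambda^4)^{d}$ on power series. The following sketch (a hypothetical helper, not part of the paper's PARI code) implements the recursion for the $a_k$ and recovers the numerator of a rational function from its series coefficients:

```python
from math import comb

def numerator_coeffs(series, d, terms):
    """Given the first coefficients chi_k of a power series f(t) with
    f(t) = p(t)/(1-t)^d, recover the coefficients a_k of p(t) via
    a_k = chi_k - sum_{i=1}^{k} a_{k-i} * C(d+i-1, i)."""
    a = []
    for k in range(terms):
        a.append(series[k] - sum(a[k - i] * comb(d + i - 1, i)
                                 for i in range(1, k + 1)))
    return a

# toy example: p(t) = 3t + 2t^2, d = 4
p, d = [0, 3, 2], 4
series = [sum(p[i] * comb(d + k - i - 1, k - i)
              for i in range(min(k + 1, len(p)))) for k in range(8)]
assert numerator_coeffs(series, d, 8) == [0, 3, 2, 0, 0, 0, 0, 0]
```

In particular, if all $\chi_k$ are integers, each $a_k$ is an integer, which is exactly the argument used for $p^1_{n,0}$.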
We now use this result and the blowup formulas to describe the generating functions of the $K$-theoretic Donaldson invariants of blowups of ${\mathbb P}^2$ in finitely many points in an open subset of the ample cone as rational functions.
\begin{Lemma}\label{blowindep}
Let $X$ be the blowup of ${\mathbb P}^2$ in finitely many points, $p_1,\ldots,p_n$, and denote by $E_1,\ldots,E_n$ the exceptional divisors. Fix $c_1\in H^2(X,{\mathbb Z})$, and an $r\ge 0$. Let $L$ be a line bundle on $X$. Let $\omega=H-\alpha_1 E_1-\ldots -\alpha_n E_n$ with $|\alpha_i|<\frac{1}{\sqrt{n}}$, for all $i$, and $\langle\omega, K_X\rangle<0$. Then $\chi^{X,\omega}_{c_1}(L,P^r)\equiv\chi^{X,H}_{c_1}(L,P^r)$. \end{Lemma} \begin{proof}
We put $\epsilon:=\max(|\alpha_i|)_{i=1}^n$, and $\delta:=\frac{1}{n}-\epsilon^2>0$. Let $L=dH-m_1E_1-\ldots-m_nE_n$, with $d,m_1,\ldots,m_n\in {\mathbb Z}$, and let $r\ge 0$.
We want to show that there are only finitely many classes $\xi$ of type $(c_1)$ on $X$ with $\<H,\xi\rangle\ge 0 \ge \langle\omega,\xi\rangle$ and $\delta^X_\xi(L,P^r)\ne 0$. As by \lemref{vanwall} each $\delta^X_\xi(L,P^r)$ is a polynomial in $\Lambda$, this gives $\chi^{X,\omega}_{c_1}(L,P^r)\equiv \chi^{X,H}_{c_1}(L,P^r)$.
We write $\xi=aH-b_1E_1-\ldots -b_n E_n$, and $b:=|b_1|+\ldots+|b_n|$; then we get $a\ge 0$ and $$0\ge \langle\omega,\xi\rangle=a-\alpha_1b_1-\ldots-\alpha_nb_n\ge a-b\epsilon,$$
i.e.\ $a\le b\epsilon$. Assume $\delta_\xi^X(L,P^r)\ne 0$; then by \lemref{vanwall} $-\xi^2\le |\langle\xi, (L-K_X)\rangle|+r+2$. We have \begin{equation}\label{ineq}-\xi^2=b_1^2+\ldots+b_n^2-a^2\ge\frac{b^2}{n}-\epsilon^2b^2=\delta b^2, \end{equation} where we have used the easy inequality $b_1^2+\ldots+b_n^2\ge \frac{b^2}{n}$ and our definition $\frac{1}{n}-\epsilon^2=\delta>0$.
On the other hand, putting $m:=\max|m_i+1|_{i=1}^n$ we get \begin{align*}
|\langle\xi, (L-K_X)\rangle|+r+2&=|a(d+3)-(m_1+1) b_1-\ldots-(m_n+1)b_n|+r+2\\&\le a|d+3|+|m_1+1| |b_1|+\ldots+|m_n+1| |b_n|+r+2\\&\le
\epsilon b|d+3|+mb+r+2= (m+|d+3|\epsilon)b+r+2. \end{align*} Putting this together with \eqref{ineq}, and using $\epsilon\le 1$, we get
\begin{equation}
\label{blowbound}
\delta (|b_1|+\ldots+|b_n|)\le \max|m_i+1|_{i=1}^n +|d+3|+\frac{r+2}{|b_1|+\ldots+|b_n|}.
\end{equation}
Thus $b=|b_1|+\ldots+|b_n|$ is bounded, hence $a\le b\epsilon$ is bounded, and therefore there are only finitely many choices for $\xi$. \end{proof}
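Concretely, \eqref{blowbound} gives the explicit bound $b\le\big(M+\sqrt{M^2+4\delta(r+2)}\big)/(2\delta)$ with $M=\max_i|m_i+1|+|d+3|$, after multiplying through by $b$ and solving the quadratic inequality. A small sketch (a hypothetical helper, purely for illustration) computing this bound:

```python
import math

def b_bound(n, alphas, d, ms, r):
    """Upper bound for b = |b_1|+...+|b_n| implied by the quadratic
    inequality delta*b^2 - M*b - (r+2) <= 0, where
    M = max_i |m_i+1| + |d+3| (illustrative helper, not from the paper)."""
    eps = max(abs(a) for a in alphas)
    delta = 1.0 / n - eps**2
    assert delta > 0, "need |alpha_i| < 1/sqrt(n)"
    M = max(abs(m + 1) for m in ms) + abs(d + 3)
    return (M + math.sqrt(M * M + 4 * delta * (r + 2))) / (2 * delta)
```

For example, for $n=2$, $\alpha_i=0.1$, $L=4H-2E_1-2E_2$ and $r=0$ this gives $b\lesssim 20.7$, so only finitely many classes $\xi$ need to be checked.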
The following theorem contains \thmref{rationalal} as a special case. \begin{Theorem}\label{blowrat}
Let $X$ be the blowup of ${\mathbb P}^2$ in finitely many points. With the assumptions and notations of \lemref{blowindep}, there exist an integer $d^{c_1}_{L,r}\in{\mathbb Z}_{\ge 0}$ and a polynomial $p^{c_1}_{L,r}\in {\mathbb Q}[\Lambda^{\pm 4}]$, such that $$\chi^{X,\omega}_{c_1}(L,P^r)\equiv\frac{p^{c_1}_{L,r}}{\Lambda^{c_1^2}(1-\Lambda^4)^{d^{c_1}_{L,r}}}.$$ \end{Theorem} \begin{proof} We write $c_1=kH+l_1E_1+\ldots +l_nE_n$. By renumbering the $E_i$ we can assume that $l_i$ is odd for $1\le i\le s$ and $l_i$ is even for $s+1\le i\le n$. Write $L=dH-m_1E_1-\ldots -m_nE_n$, with $d,m_1,\ldots,m_n\in {\mathbb Z}$.
By \lemref{blowindep}, it is enough to show the claim for $\omega=H$. By repeatedly applying \thmref{nblow}, we get $$\chi^{X,H}_{c_1}(L,P^r)=\chi^{{\mathbb P}^2,H}_{kH}\Big(dH,P^r\cdot \Big(\prod_{i=1}^s S_{m_i+1}(P,\Lambda)\Big)\cdot \Big(\prod_{i=s+1}^n R_{m_i+1}(P,\Lambda)\Big)\Big).$$ Put $\kappa=0$ if $k$ is even, and $\kappa=1$ if $k$ is odd. We know that $\chi^{{\mathbb P}^2,H}_{kH}(dH,P^r)$ depends only on $\kappa$, and by \thmref{P2rat1} we have $$\chi^{{\mathbb P}^2,H}_{kH}(dH,P^r)=\frac{p^{\kappa}_{d,r}}{\Lambda^\kappa (1-\Lambda^4)^{d^\kappa_{d,r}}}.$$
We know that $R_n(P,\Lambda)\in {\mathbb Z}[P,\Lambda^4]$, $S_n(P,\Lambda)\in \Lambda{\mathbb Z}[P,\Lambda^4]$. Therefore we can write $\chi^{X,H}_{c_1}(L,P^r)=\frac{ p}{\Lambda^{\kappa-s}(1-\Lambda^4)^N}$ for a suitable polynomial $p\in {\mathbb Q}[\Lambda^{\pm 4}]$ and a nonnegative integer $N$. Note that $c_1^2=k^2-l_1^2-\ldots -l_n^2\equiv \kappa-s\mod 4$. Let $w:=\frac{1}{4}(c_1^2-(\kappa-s))$. Then $$\chi^{X,H}_{c_1}(L,P^r)=\frac{ p}{\Lambda^{\kappa-s}(1-\Lambda^4)^N}=\frac{\Lambda^{4w}p}{\Lambda^{c_1^2}(1-\Lambda^4)^N},$$ and the claim follows. \end{proof}
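The integrality of $w=\frac{1}{4}(c_1^2-(\kappa-s))$ used in the proof rests on the congruence $k^2-l_1^2-\ldots-l_n^2\equiv\kappa-s\mod 4$. A quick exhaustive sketch over a small range (two exceptional classes, for brevity; the general case is identical):

```python
from itertools import product

def kappa(k):
    """kappa = 0 for k even, 1 for k odd, as above."""
    return k % 2

# c_1 = k*H + l_1*E_1 + l_2*E_2 on a two-point blowup;
# s = number of odd l_i; check that c_1^2 - (kappa - s) is divisible by 4
for k, l1, l2 in product(range(-4, 5), repeat=3):
    s = sum(1 for l in (l1, l2) if l % 2)
    c1sq = k * k - l1 * l1 - l2 * l2
    assert (c1sq - (kappa(k) - s)) % 4 == 0
```

This is of course just the fact that an integer square is congruent mod $4$ to the parity of its root.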
\subsection{Explicit computations for small $n$}\label{explicitp2}
We compute $\chi^{{\mathbb P}^2,H}_{0}(nH)$, $\chi^{{\mathbb P}^2,H}_{H}(nH)$ for small values of $n$, using the blowup formulas and the strategy outlined in \secref{strategy}. In the next subsection we do the same computations for larger $n$ using a computer program written in PARI. These invariants have been computed before (see \cite{Abe},\cite{GY}) for $1\le n \le 3$.
\begin{Proposition} \begin{enumerate} \item $\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(4H)=\frac{\Lambda^3+6\Lambda^7+\Lambda^{15}}{(1-\Lambda^4)^{15}}.}$ \item $\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{1+6\Lambda^8+\Lambda^{12}}{(1-\Lambda^4)^{15}}-1-51/2\Lambda^4.}$ \end{enumerate} \end{Proposition}
\begin{proof} (1) We have $$S_3(x,\lambda)=\lambda \left(x^2-(1-\lambda^4)^2\right), \quad S_4=\lambda x\big((1-\lambda^8)x^2-2(1-\lambda^4)^3\big). $$ Using division with remainder as polynomials in $x$, we write $\lambda(1-\lambda^4)^6$ as a linear combination of $S_3(x,\lambda)$ and $S_4(x,\lambda)$: $$\lambda(1-\lambda^4)^6=\left((1-\lambda^8)x^2-(1-\lambda^4)^4\right)S_3(x,\lambda)-x S_4(x,\lambda).$$ Thus we get by \corref{blowdownform} that \begin{equation}\label{4HB} \begin{split} \chi^{{\mathbb P}^2,H}_{H}(4H)&=\frac{1}{\Lambda(1-\Lambda^4)^6}\big((1-\Lambda^8)\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E,P^2)-(1-\Lambda^4)^4\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E)\\&\qquad\qquad- \chi^{\widehat {\mathbb P}^2,H}_{F}(4H-3E,P^1)\big). \end{split} \end{equation} By \propref{p11GM} we have $$\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-3E,P^1)=\frac{1}{(1-\Lambda^4)^8}-1,$$ and $\xi=H-3E$ is the only class of type $(F)$ on $\widehat {\mathbb P}^2$ with $\langle\xi, H\rangle\ge0 >\langle\xi, F\rangle$ with $\delta_\xi^{\widehat{\mathbb P}^2}((4H-3E),P^1)\ne 0$. In fact $\delta_{H-3E}^{\widehat{\mathbb P}^2}((4H-3E),P^1)=\Lambda^8.$ Thus $$\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-3E,P^1)=\frac{1}{(1-\Lambda^4)^8}-1+\Lambda^8.$$ By \propref{p112GM} we have that $$\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E)=\frac{3\Lambda^4+\Lambda^{12}}{(1-\Lambda^4)^{12}},\quad \chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1.$$ Furthermore there is no class of type $(F)$ on $\widehat {\mathbb P}^2$ with $\langle\xi, H\rangle \ge 0 >\langle\xi, F\rangle$ with $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E)\ne 0$ or $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E,P^2)\ne 0$.
Thus $\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E)=\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E)$ and $\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E,P^2)=\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E,P^2)$.
Putting these values into \eqref{4HB} yields $\chi^{{\mathbb P}^2,H}_{H}(4H)=\frac{\Lambda^3+6\Lambda^7+\Lambda^{15}}{(1-\Lambda^4)^{15}}$.
(2) For $R_3(x,\lambda)=-\lambda^4x^2+(1-\lambda^4)^2$, $R_4=-\lambda^4 x^4+(1-\lambda^4)^4,$ we get $$(1-\lambda^4)^5=\left(\lambda^4x^2+(1-\lambda^4)^2\right)R_3(x,\lambda)-\lambda^4R_4(x,\lambda).$$ Thus \corref{blowdownform} gives \begin{equation}\label{40B} \chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{1}{(1-\Lambda^4)^5}\left(\Lambda^4\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E,P^2)+(1-\Lambda^4)^2\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E) -\Lambda^4\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-3E)\right). \end{equation} By \propref{p11GM} we have $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-3E)=\frac{1}{(1-\Lambda^4)^9}-1-35/2\Lambda^4$. Furthermore there are no classes $\xi$ of type $(0)$ with $\langle\xi, H\rangle >0 >\langle\xi ,F\rangle$ with $\delta_\xi^{\widehat{\mathbb P}^2}((4H-3E))\ne 0$, and the only classes of type $(0)$ with $\langle\xi, H\rangle =0 >\langle\xi, F\rangle$ are $-2E$ and $-4E$ with $$\frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-2E}(4H-3E)=-2\Lambda^4 + 291\Lambda^8 - 3531\Lambda^{12} + 16215/2\Lambda^{16}, \quad \frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-4E}(4H-3E)=7\Lambda^{16} - 51/2\Lambda^{20},$$ giving $$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-3E)=\frac{1}{(1-\Lambda^4)^9}-1-39/2\Lambda^4 + 291\Lambda^8 - 3531\Lambda^{12} + 16229/2\Lambda^{16}- 51/2\Lambda^{20}.$$ By \propref{p112GM} we have $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-2E)=\frac{1+3\Lambda^8}{(1-\Lambda^4)^{12}}-1-21\Lambda^4$.
Furthermore the only class $\xi$ of type $(0)$ with $\langle\xi, H\rangle \ge 0 >\langle\xi, F\rangle$ with $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E)\ne 0$ is $-2E$ with $\frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-2E}(4H-2E)=-3/2\Lambda^4 + 108\Lambda^8 - 1225/2\Lambda^{12}$, giving $$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E)=\frac{1+3\Lambda^8}{(1-\Lambda^4)^{12}}-1-45/2\Lambda^4 + 108\Lambda^8 - 1225/2\Lambda^{12}.$$ By \propref{p112GM} we have $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1-48\Lambda^4+389\Lambda^8$, and the classes $\xi$ of type $(0)$ with $\langle\xi, H\rangle \ge 0 >\langle\xi, F\rangle$ with $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E,P^2)\ne 0$ are $-2E$ and $-4E$ with $\frac{1}{2}\delta^{\widehat{\mathbb P}^2}_{-2E}(4H-2E,P^2)=-6\Lambda^4 + 508\Lambda^8 - 4614\Lambda^{12} + 8600\Lambda^{16} $ and $\frac{1}{2}\delta^{\widehat{\mathbb P}^2}_{-4E}(4H-2E,P^2)=1/2\Lambda^{16}$, giving $$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1-54\Lambda^4+ 897\Lambda^8 - 4614\Lambda^{12} + 17201/2\Lambda^{16} .$$ Putting this into \eqref{40B} gives $\chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{ 1 + 6\Lambda^8 + \Lambda^{12}}{(1-\Lambda^4)^{15}}-1-51/2\Lambda^4$. \end{proof}
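Division-with-remainder identities of this kind are easy to mistype, so it is worth verifying them symbolically. The following sketch (our own check, written with sympy, $\lambda$ denoted \texttt{lam}, and with $R_4=-\lambda^4x^4+(1-\lambda^4)^4$ of degree $4$ in $x$) confirms both combinations used in the proof:

```python
import sympy as sp

x, l = sp.symbols('x lam')

# blowup polynomials used above
S3 = l*(x**2 - (1 - l**4)**2)
S4 = l*x*((1 - l**8)*x**2 - 2*(1 - l**4)**3)
R3 = -l**4*x**2 + (1 - l**4)**2
R4 = -l**4*x**4 + (1 - l**4)**4   # note: degree 4 in x

# combination used in part (1)
lhs1 = l*(1 - l**4)**6
rhs1 = ((1 - l**8)*x**2 - (1 - l**4)**4)*S3 - x*S4
assert sp.expand(lhs1 - rhs1) == 0

# combination used in part (2)
lhs2 = (1 - l**4)**5
rhs2 = (l**4*x**2 + (1 - l**4)**2)*R3 - l**4*R4
assert sp.expand(lhs2 - rhs2) == 0
```

Both differences expand to zero identically in $x$ and $\lambda$.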
\begin{Proposition} $\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(5H)=\frac{1+21\Lambda^8+20\Lambda^{12}+21\Lambda^{16}+\Lambda^{24}}{(1-\Lambda^4)^{21}}-1-33\Lambda^4.}$ \end{Proposition} \begin{proof} We use \propref{p112GM} to compute $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-3E,P^r)$ for $r=0,2,4$, and \propref{p11GM} to compute $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-4E,P^r)$ for $r=0,2$. The only classes of type $(0)$ with $\<H ,\xi\rangle \ge 0 >\<F,\xi\rangle$ and $\delta^{\widehat {\mathbb P}^2}_\xi(5H-3E,P^r)\ne 0$ for $r=0,2,4$ or $\delta^{\widehat {\mathbb P}^2}_\xi(5H-4E,P^r)\ne 0$ for $r=0,2$ are $-2E$ and $-4E$. Adding their wallcrossing terms to the $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-3E,P^r)$, $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-4E,P^r)$ we get \begin{align*}\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E)&=\frac{1+6\Lambda^8+\Lambda^{16}}{(1-\Lambda^4)^{15}} -1 - 27\Lambda^4 + 366\Lambda^8 - 6066\Lambda^{12} + 18917\Lambda^{16} - 33\Lambda^{20},\\ \chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^2)&=\frac{(1+\Lambda^4)^3}{(1-\Lambda^4)^{13}}-1 - 64\Lambda^4 + 2163\Lambda^8 - 32806\Lambda^{12} + 172163\Lambda^{16}\\&\qquad - 242616\Lambda^{20} + 1007\Lambda^{24},\\ \chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^4)&=\frac{2(1+\Lambda^4)^2}{(1-\Lambda^4)^{11}} -2 - 218\Lambda^4 + 10110\Lambda^8 - 170462\Lambda^{12} + 1121538\Lambda^{16} \\&\qquad- 2798450\Lambda^{20} + 2249462\Lambda^{24} - 18786\Lambda^{28},\\ \chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E)&=\frac{1}{(1-\Lambda^4)^{11}} -1 - 23\Lambda^4 + 786\Lambda^8 - 20234\Lambda^{12} + 124671\Lambda^{16} - 201885\Lambda^{20}\\&\qquad + 18372\Lambda^{24} - 21840\Lambda^{28},\\ \chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E,P^2)&=\frac{1}{(1-\Lambda^4)^9} -1 - 57\Lambda^4 + 3691\Lambda^8 - 95035\Lambda^{12} + 741175\Lambda^{16} - 2043587\Lambda^{20} \\&\qquad+ 1906119\Lambda^{24} - 414993\Lambda^{28} + 295880\Lambda^{32}.
\end{align*} We compute $$R_5(x,\lambda)=-\lambda^4x^6+\lambda^4(1-\lambda^4)^2(2+\lambda^4)x^4-3\lambda^4(1-\lambda^4)^4x^2+(1-\lambda^4)^6.$$ Using again division with remainder, we get \begin{align*} (1-\lambda^4)^{11}&=\big((\lambda^4+3\lambda^8)x^4+(\lambda^4-8\lambda^8+10\lambda^{12}-3\lambda^{20})x^2 +(1+4\lambda^8-\lambda^{12})(1-\lambda^4)^4\big)R_4(x,\lambda)\\ &\qquad -\big((\lambda^4+3\lambda^8)x^2+ \lambda^4(3+\lambda^4)(1-\lambda^4)^2\big)R_5(x,\lambda). \end{align*} Thus again we get
$(1-\Lambda^4)^{11}\chi^{{\mathbb P}^2,H}_{0}(5H)$ as the result of replacing $\lambda$ by $\Lambda$ and $x^rR_4(x,\lambda)$ by $\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^r)$, $x^rR_5(x,\lambda)$
by $\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E,P^r)$. This gives after some computation that $\chi^{{\mathbb P}^2,H}_{0}(5H)=\frac{1 + 21\Lambda^8 + 20\Lambda^{12} + 21\Lambda^{16} + \Lambda^{24}}{(1-\Lambda^4)^{21}}-1-33\Lambda^4.$ \end{proof}
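The cofactors in this combination are involved enough that a symbolic check is again worthwhile; the following sketch (our own verification, with $\lambda$ denoted \texttt{lam}) confirms it, noting in particular the factor $\lambda^4$ in the constant term of the second cofactor:

```python
import sympy as sp

x, l = sp.symbols('x lam')

R4 = -l**4*x**4 + (1 - l**4)**4
R5 = (-l**4*x**6 + l**4*(1 - l**4)**2*(2 + l**4)*x**4
      - 3*l**4*(1 - l**4)**4*x**2 + (1 - l**4)**6)

A = ((l**4 + 3*l**8)*x**4 + (l**4 - 8*l**8 + 10*l**12 - 3*l**20)*x**2
     + (1 + 4*l**8 - l**12)*(1 - l**4)**4)
# note the lambda^4 factor in the constant term
B = (l**4 + 3*l**8)*x**2 + l**4*(3 + l**4)*(1 - l**4)**2

assert sp.expand((1 - l**4)**11 - (A*R4 - B*R5)) == 0
```

The difference expands to zero identically, so the combination indeed produces $(1-\lambda^4)^{11}$.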
\subsection{Computer computations for larger $n$} \label{CompP2}
We outline the computations of the PARI program to compute the $\chi^{{\mathbb P}^2,H}_0(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_H(nH,P^r)$.
We have carried out these computations for $r=0$ and $n\le 11$, and $r\le 16$ and $n\le 8$.
To have effective bounds on the number of terms needed to compute we use the following remark. \begin{Remark} \label{qlevel} For all $l\in \frac{1}{2}{\mathbb Z}$ we have $$ \frac{1}{\sinh(lh)},\ \coth(lh)\in q\Lambda^{-1}{\mathbb C}[[q^{-1}\Lambda,q^4]],\quad h,\ \exp(lh),\ {\widetilde \theta}_4(h),\ \Lambda^2u',\ h^*,\ M\in {\mathbb C}[[q^{-1}\Lambda,q^4]].$$ Therefore we have the following. \begin{enumerate} \item By \propref{Fplus}, for $X={\mathbb P}^1\times {\mathbb P}^1$ or $\widehat {\mathbb P}^2$, to compute $\chi^{X,F_+}_F(nF+mG,P^r)$ modulo $\Lambda^{k+1}$, it is enough to evaluate the formulas of \propref{Fplus} modulo $q^{k+1}$ and modulo $\Lambda^{k+1}$. \item By \defref{wallcrossterm} for any rational surface $X$ and any class $\xi\in H^2(X,{\mathbb Z})$ with $\xi^2<0$ and any line bundle $L\in \operatorname{Pic}(X)$, to compute $\delta^X_{\xi}(L,P^r)$ modulo $\Lambda^{k+1}$ it is enough to evaluate the formulas of \defref{wallcrossterm} modulo $q^{k+1}$ and modulo $\Lambda^{k+1}$. \end{enumerate} \end{Remark}
{\bf Step 1.} As mentioned above we will use \corref{blowdownform} with $k=n$. The polynomials $f_n,g_n$, $h_n,l_n$, and the integers $N_n$, $M_n$ of \corref{blowdownform} are computed by the program as follows. Apply the Euclidean algorithm in ${\mathbb Q}(\lambda)[x^2]$ (i.e.\ repeated division with remainder) to $S_n$, $S_{n+1}$, to find $\overline f_n$, $\overline g_n\in {\mathbb Q}(\lambda)[x]$ with $\overline f_n S_n+\overline g_n S_{n+1}=1$. Choose the minimal $N_n\in {\mathbb Z}_{\ge 0}$, so that $$f_n:=\lambda(1-\lambda^4)^{N_n} \overline f_n, \ g_n:=\lambda(1-\lambda^4)^{N_n} \overline g_n\in {\mathbb Q}[x,\lambda^4].$$ These exist by \propref{blowdownpol}. Similarly $h_n$, $l_n$, $M_n$ are computed as follows. Apply the Euclidean algorithm in ${\mathbb Q}(\lambda)[x^2]$ to $R_n$, $R_{n+1}$, to find $\overline h_n$, $\overline l_n\in {\mathbb Q}(\lambda)[x]$ with $\overline h_nR_n+\overline l_nR_{n+1}=1$, and then again multiply with the minimal power $(1-\lambda^4)^{M_n}$ to obtain $$h_n:=(1-\lambda^4)^{M_n} \overline h_n,\ l_n:=(1-\lambda^4)^{M_n} \overline l_n\in {\mathbb Q}[x^2,\lambda^4].$$
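Step 1 is ordinary extended Euclid over the field ${\mathbb Q}(\lambda)$. The following sketch (our own helper functions, not the actual PARI code) carries it out for $R_3$, $R_4$ and checks the B\'ezout identity before denominators are cleared:

```python
import sympy as sp

x, l = sp.symbols('x lam')

def polydiv(f, g):
    """Division with remainder in Q(lam)[x]; coefficients may be
    rational functions of lam."""
    q, r = sp.Integer(0), sp.expand(f)
    dg, cg = sp.degree(g, x), sp.LC(g, x)
    while r != 0 and sp.degree(r, x) >= dg:
        t = sp.cancel(sp.LC(r, x) / cg) * x**(sp.degree(r, x) - dg)
        q = q + t
        r = sp.cancel(r - t*g)
    return q, r

def bezout(f, g):
    """Return (h, k) with h*f + k*g = 1 for coprime f, g in Q(lam)[x]."""
    r0, r1 = f, g
    h0, h1, k0, k1 = sp.Integer(1), sp.Integer(0), sp.Integer(0), sp.Integer(1)
    while r1 != 0:
        q, r = polydiv(r0, r1)
        r0, r1 = r1, r
        h0, h1 = h1, sp.cancel(h0 - q*h1)
        k0, k1 = k1, sp.cancel(k0 - q*k1)
    # r0 is a nonzero element of Q(lam); normalize the gcd to 1
    return sp.cancel(h0/r0), sp.cancel(k0/r0)

R3 = -l**4*x**2 + (1 - l**4)**2
R4 = -l**4*x**4 + (1 - l**4)**4
h, k = bezout(R3, R4)
assert sp.simplify(h*R3 + k*R4 - 1) == 0
```

Multiplying $h$, $k$ by the minimal power of $(1-\lambda^4)$ that clears all denominators then yields $h_3$, $l_3$ and $M_3$ as in the text.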
{\bf Step 2.} Use \propref{p11r} to compute $\chi^{X,F_+}_F(nF,P^{2s})$ for $2s\le \deg_x(g_n)+r$ and $\chi^{X,F_+}_0(nF,P^{2s})$ for $2s\le\deg_x(l_n)+r$. For $s=0$, the formula is explicitly given in \propref{p11r}. For $s>0$ we know by \propref{p11r} that $\chi^{X,F_+}_F(nF,P^{2s})$ is a polynomial in $\Lambda^4$ of degree at most $s$ and $\chi^{X,F_+}_0(nF,P^{2s})$ a polynomial of degree at most $s+1$ in $\Lambda^4$.
So, using \remref{qlevel}, the computation is done by evaluating the formula of \propref{Fplus} as a power series in $\Lambda,q$ modulo $\Lambda^{4s+1}$ and $q^{4s+1}$ or $\Lambda^{4s+5}$, $q^{4s+5}$ respectively. As all the power series in the formula are completely explicit, this is a straightforward evaluation.
In the same way we use \propref{p11GM} to compute
$\chi^{X,F_+}_F(G+(n-\frac{1}{2})F,P^{2s+1})$ for $2s+1\le \deg_x(f_n)+r$ and $\chi^{X,F_+}_0(G+(n-\frac{1}{2})F,P^{2s})$ for $2s\le\deg_x(h_n)+r$.
By \propref{p11GM}
$$\chi^{X,F_+}_F(G+(n-\frac{1}{2})F,P^{2s+1})-\frac{1}{(1-\Lambda^4)^{2n-2s}}, \quad \chi^{X,F_+}_0(G+(n-\frac{1}{2})F,P^{2s})-\frac{1}{(1-\Lambda^4)^{2n+1-2s}}$$ are both polynomials of degree at most $s+1$ in $\Lambda^4$, so, using also \remref{qlevel}, again they are computed by evaluating the formula of \propref{Fplus} as a power series in $\Lambda,q$ modulo $\Lambda^{4s+5}$ and $q^{4s+5}$. Again this is a straightforward evaluation.
{\bf Step 3.} By the proof of \corref{P2P12GH} there are finitely many classes $\xi=aF-bG$ of type $(0)$ or $(F)$ on $\widehat {\mathbb P}^2$ with $\langle\xi ,H\rangle \ge 0>\langle\xi ,F\rangle$ and $\delta_\xi^{\widehat {\mathbb P}^2}(nF,P^s)\ne 0$ or $\delta_\xi^{\widehat {\mathbb P}^2}(G+(n-\frac{1}{2})F,P^s)\ne 0$. In \remref{nonwalls} effective bounds for $a$ and $b$ are given in terms of $n$ and $s$, which leave only finitely many possibilities. For all $\xi=aF-bG$ such that $(a,b)$ satisfies these bounds, it is first checked whether the criterion
$-\xi^2\le |\langle\xi ,L-K_{\widehat {\mathbb P}^2}\rangle|+s+2$ for the non-vanishing of $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ for $L=nF$ or $L=(n-1/2)F+G$ is satisfied. If yes, $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ is computed by evaluating the formula of \defref{wallcrossterm}. By \lemref{vanwall} we have that $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ is a polynomial in $\Lambda$ of degree at most
$$a(\xi,L,X,s):=\xi^2+2|\langle\xi,L-K_{\widehat {\mathbb P}^2}\rangle|+2s+4,$$
so to determine $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ we only need to compute it modulo $\Lambda^{a(\xi,L,X,s)+1}$ and thus by \remref{qlevel} we only need to evaluate the formula of \defref{wallcrossterm} modulo $\Lambda^{a(\xi,L,X,s)+1}$ and $q^{a(\xi,L,X,s)+1}$, so this is again a straightforward evaluation.
Then for $c_1=0,F$ and $L=nF$, $(n-1/2)F+G$, we compute $$\chi^{\widehat {\mathbb P}^2, H}_{c_1}(L,P^s):=\chi^{\widehat {\mathbb P}^2, F_+}_{c_1}(L,P^s)+\frac{1}{2}\sum_{\langle\xi, H\rangle=0>\langle\xi,F\rangle}\delta^{\widehat {\mathbb P}^2}_\xi(L,P^s)+ \sum_{\langle\xi, H\rangle>0>\langle\xi,F\rangle} \delta^{\widehat {\mathbb P}^2}_\xi(L,P^s),$$ where the sums are over all $\xi$ of type $(c_1)$ with $\delta^{\widehat {\mathbb P}^2}_\xi(L,P^s)\ne 0$.
{\bf Step 4.} Finally apply \corref{blowdownform} to compute
\begin{align*} \chi^{{\mathbb P}^2,H}_H(nH,P^r)&=\frac{1}{\Lambda(1-\Lambda^4)^{N_n}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{F}\big(G+(n-1/2)F,P^r\cdot f_n(P,\Lambda)\big) + \chi^{\widehat {\mathbb P}^2,H}_{F}\big(nF,P^r\cdot g_n(P,\Lambda)\big)\Big),\\ \chi^{{\mathbb P}^2,H}_0(nH,P^r)&=\frac{1}{(1-\Lambda^4)^{M_n}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{0}\big(G+(n-1/2)F,P^r\cdot h_n(P,\Lambda)\big) + \chi^{\widehat {\mathbb P}^2,H}_{0}\big(nF,P^r\cdot l_n(P,\Lambda)\big)\Big). \end{align*} At this point all the terms on the right hand side have already been computed.
We have carried out this computation for the following cases. \begin{enumerate} \item For $\chi^{{\mathbb P}^2,H}_H(nH,P^r)$ with $n\equiv r\mod 2$, in the cases $r\le 1$, $n\le 10$ and $r\le 16$, $n\le 8$. \item For $\chi^{{\mathbb P}^2,H}_0(nH,P^r)$ with $r$ even, in the cases $r=0$, $n\le 11$ and $r\le 15$, $n\le 8$. \end{enumerate} For the case $r=0$ we obtain, with the notations of the introduction: \begin{Proposition}\label{propp2} With the notations of \thmref{mainp2} we have for $1\le n\le 11$ \begin{enumerate} \item $\displaystyle{\chi^{{\mathbb P}^2,H}_0(nH)=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}-1-\frac{1}{2}(n^2+6n+11)\Lambda^4}$, \item If $n$ is even, then $\displaystyle{\chi^{{\mathbb P}^2,H}_H(nH)=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}}$. \end{enumerate} \end{Proposition}
\thmref{mainp2} now follows directly from \propref{propp2} and \propref{highvan}.
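As a consistency check, the coefficient $\frac{1}{2}(n^2+6n+11)$ of the correction term agrees with the explicit computations of the previous subsection:

```python
from fractions import Fraction

def corr(n):
    """Coefficient of Lambda^4 in the correction term
    -1 - (n^2 + 6n + 11)/2 * Lambda^4 of the Proposition above."""
    return Fraction(n * n + 6 * n + 11, 2)

assert corr(4) == Fraction(51, 2)   # matches the computation for 4H
assert corr(5) == Fraction(33)      # matches the computation for 5H
```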
We list also the results for $ \chi^{{\mathbb P}^2,H}_{H}(nH,P^1)$. We put {\small \begin{align*} &q_1=2-t,\ q_3=2,\ q_5=2 + 20t + 20t^2 + 20t^3 + 2t^4,\\ &q_7=2 + 80t + 770t^2 + 3080t^3 + 7580t^4 + 9744t^5 + 7580t^6 + 3080t^7 + 770t^8 + 80t^9 + 2t^{10},\\ &q_9=2 + 207t + 6192t^2 + 85887t^3 + 701568t^4 + 3707406t^5+ 13050156t^6 + 31611681t^7 \\& + 53322786t^8 + 63463686t^9 + 53322786t^{10} + 31611681t^{11} + 13050156t^{12} + 3707406t^{13}\\ &+ 701568t^{14}+ 85887t^{15}+ 6192t^{16} + 207t^{17}+2t^{18} \end{align*}}
\begin{Proposition} For odd $n$ with $1\le n\le 9$ we have $\displaystyle{ \chi^{{\mathbb P}^2,H}_{H}(nH,P^1)=\frac{\Lambda^{3}q_{n}(\Lambda^4)}{(1-\Lambda^4)^{\binom{n+2}{2}-1}}}.$ \end{Proposition}
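Consistent with the symmetry in \thmref{rpoly}(2) below, the listed $q_n$ for $3\le n\le 9$ are palindromic (while $q_1=2-t$ is only so up to a Laurent polynomial); a quick data check:

```python
# coefficient lists of q_5, q_7, q_9 in t = Lambda^4, copied from above
q5 = [2, 20, 20, 20, 2]
q7 = [2, 80, 770, 3080, 7580, 9744, 7580, 3080, 770, 80, 2]
q9 = [2, 207, 6192, 85887, 701568, 3707406, 13050156, 31611681,
      53322786, 63463686, 53322786, 31611681, 13050156, 3707406,
      701568, 85887, 6192, 207, 2]

for q in (q5, q7, q9):
    assert q == q[::-1]   # palindromic numerator
```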
We list also in the form of tables part of the results obtained for $\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$, with $c_1=0, H$ for $r>0$. Here to simplify the expressions we only write down the results up to adding a Laurent polynomial in $\Lambda$. We define polynomials $p_{d,r}$ by the following tables.
$$\begin{tabular}{r|c|c|c|c|c|c|c}
&d=3&5&7\\
\hline
r=1&$2t$& $2t+20t^2+20t^3+20t^4+2t^5$&$\genfrac{}{}{0pt}{}{2t + 80t^2 + 770t^3 + 3080t^4 + 7580t^5 + 9744t^6}{ + 7580t^7+ 3080t^8 + 770t^9 + 80t^{10}+ 2t^{11}}$\\
\hline
2&$1+t$&$1+6t + 25t^2 +25t^3+6t^4 +t^5$&$\genfrac{}{}{0pt}{}{1 + 15t + 239t^2 + 1549t^3 + 5274t^4+ 9306t^5 + 9306t^6 }{+ 5274t^7+ 1549t^8+ 239t^9 + 15t^{10} + t^{11}}$\\ \hline
3&$1+t$&$8t + 24t^2 + 24t^3 + 8t^4$&$ \genfrac{}{}{0pt}{}{8t + 219t^2 + 1485t^3 + 5159t^4 + 9513t^5 + 9513t^6}{ + 5159t^7 + 1485t^8 + 219t^9 + 8t^{10}}$\\
\hline
4&2&$2+14t+32t^{2}+14t^3+2t^4$&$\genfrac{}{}{0pt}{}{2+ 44t + 546t^2 + 2936t^3 + 7676t^4 + 10360t^5 + 7676t^6 }{+ 2936t^7+ 546t^8 + 44t^9 + 2t^{10}}$\\ \hline
5&2&$1+ 16t + 30t^2 +16t^3+t^4$&$ \genfrac{}{}{0pt}{}{32t+510t^2 + 2820t^3 + 7682t^4 + 10680t^5 }{+ 7682t^6 + 2820t^7+ 510t^8 + 32t^9}$\\ \hline
6&$t^{-1}+1$&$5+27t+27t^2+5t^3$&$\genfrac{}{}{0pt}{}{5 + 120t + 1209t^2 + 5075t^3 + 9975t^4 + 9975t^5 + 5075t^6 }{+ 1209t^7 + 120t^8 + 5t^9}$\\
\hline
7&$3-t$&$4 + 28t + 28t^2 + 4t^3$&$\genfrac{}{}{0pt}{}{1 + 99t + 1134t^2 + 4954t^3 + 10196t^4+ 10196t^5 }{ + 4954t^6 + 1134t^7+ 99t^8 + t^9}$\\
\hline
8&&$14 + 36t + 14t^2$&$\genfrac{}{}{0pt}{}{14 + 318t + 2508t^2 + 7874t^3 + 11340t^4 + 7874t^5}{+ 2508t^6 + 318t^7 + 14t^8}$\\
\hline
9&& $12 + 40t + 12t^2$&$\genfrac{}{}{0pt}{}{6 + 276t + 2376t^2 + 7884t^3 + 11684t^4 }{+ 7884t^5 + 2376t^6 + 276t^7 + 6t^8}$\\
\hline 10&&$t^{-1} + 31 + 31t + t^2$&$ \genfrac{}{}{0pt}{}{42 + 810t + 4742t^2 + 10790t^3 + 10790t^4 + 4742t^5}{ + 810t^6 + 42t^7}$\\
\hline
11&&$32+32t$&$\genfrac{}{}{0pt}{}{25 + 719t + 4605t^2 + 11035t^3 + 11035t^4}{ + 4605t^5 + 719t^6 + 25t^7}$\\
\hline
12&&$6t^{-1}+52+6t$&$ \genfrac{}{}{0pt}{}{132 + 1920t + 8028t^2 + 12608t^3 + 8028t^4 + 1920t^5}{ + 132t^6}$\\
\hline
13&&$-t^{-2} +8t^{-1}+50+8t-t^2$&$\genfrac{}{}{0pt}{}{90 + 1756t + 8038t^2 + 13000t^3 + 8038t^4}{ + 1756t^5 + 90t^6}$\\
\hline
14&&$22t^{-1}+57-21t+7t^2-t^3$&$ \genfrac{}{}{0pt}{}{t^{-1} + 407 + 4149t + 11827t^2 + 11827t^3 + 4149t^4}{ + 407t^5 + t^6}$\\ \hline
15&&$-4t^{-2} +36t^{-1} + 36 - 4t$&$\genfrac{}{}{0pt}{}{300 + 3964t + 12120t^2 + 12120t^3 + 3964t^4}{ + 300t^5}$\\ \end{tabular}$$
$$\begin{tabular}{l | c | c | c | c }
&d=2&4&6\\ \hline r=2&$1$&$1+3t+4t^2$&$1+10t+89t^2+272t^3+371t^4+210t^5+67t^6+4t^7$\\ 4&$t^{-1}$&$2+5t+t^2$ &$2+27t + 168t^2 + 370t^3+ 318t^4 + 123t^5 + 16t^6$\\ 6&&5+3t &$5+ 66t+ 287t^2 + 404t^3+ 219t^4 + 42t^5 + t^6$\\ 8&&$t^{-1}+7$&$14+149t + 408t^2 + 350t^3 + 98t^4 + 5t^5$\\ 10&&$4t^{-1}+5-t$&$42 + 288t + 468t^2+ 208t^3 + 18t^4$\\ 12&&$9t^{-1}-1$&$t^{-1} + 116 + 462t + 388t^2 + 57t^3$\\ 14&&&$8t^{-1}+280+568t+168t^2$ \end{tabular}$$
\begin{Theorem}\label{rpoly} With the polynomials $p_{d,r}$ given above, we have \begin{enumerate} \item If $r$ is even, then $\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(dH,P^r)\equiv \frac{p_{d,r}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$ \item If $d$ and $r$ are both odd, then $\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)\equiv \frac{ \Lambda^{-1} p_{d,r}(\Lambda^{4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}= \frac{ \Lambda^{d^2-2r} p_{d,r}(\Lambda^{-4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$ \item If $d$ and $r$ are both even, then $\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)\equiv \frac{ \Lambda^{d^2-2r-1} p_{d,r}(\Lambda^{-4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$ \end{enumerate} \end{Theorem}
\subsection{Invariants of blowups of the plane}
We want to apply the above results to compute $K$-theoretic Donaldson invariants of blowups of ${\mathbb P}^2$ in a finite number of points.
\begin{Remark} Let $X_r$ be the blowup of ${\mathbb P}^2$ in $r$ general points and let $E:=E_1+\ldots+ E_r$ be the sum of the exceptional divisors. By definition we have for $c_1=0,H$ that $\chi^{X_r,H}_{c_1+E}(nH-E)=\Lambda^r\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$. By \lemref{blowindep} we have therefore $\chi^{X_r,\omega}_{c_1+E}(nH-E)\equiv\Lambda^r\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$ for all classes $\omega=H-\sum_{i=1}^r a_i E_i$ on $X_r$ with $\langle\omega, K_{X_r}\rangle<0$ and $0\le a_i< \frac{1}{\sqrt r}$ for all $i$. Therefore the formulas of \thmref{rpoly} also give the $\chi^{X_r,\omega}_{c_1+E}(nH-E)$.
\end{Remark}
By \thmref{nblow} and \lemref{blowindep}, from the $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ we can readily compute the generating functions of $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L)$ for any blowup $X$ of ${\mathbb P}^2$ in finitely many points, for any $c_1,L\in \operatorname{Pic}(X)$, and for any $\omega$ close to $H$, up to addition of a Laurent polynomial. In particular we can readily apply this computation to the tables of the $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ of \thmref{rpoly} above. We will only write down the result in one simple case. We take $X_s$ the blowup of ${\mathbb P}^2$ in $s$ points, let again $E=\sum_{i=1}^s E_i$ be the sum of the exceptional divisors, let $L:=dH-2E$, and consider the cases $c_1=0$, $c_1=E$, $c_1=H$ and $c_1=K_{X_s}$.
We define polynomials $q_{d,s}$ by the following table,
$$\begin{tabular}{l | c | c|c |c }
&d=3&4&5&6\\
\hline
s=1&1&$1+3t^2$&$1+15t^2+10t^3+6t^4$&$1+46t^2+104t^3+210t^4 + 105t^5 + 43t^6 + 3t^7$\\
s=2&&$1+t^2$&$1+10t^2+4t^3+t^4$&$1+37t^2+ 70t^3 + 105t^4 + 34t^5 + 9t^6$\\
s=3&&$1$&$1+6t^2+t^3$&$1+ 29t^2 + 44t^3 + 45t^4 + 8t^5 + t^6$\\
s=4&&&$1+3t^2$&$1 + 22t^2 + 25t^3 + 15t^4 + t^5$\\
s=5&&&$1+t^2$&$1+ 16t^2 + 12t^3 + 3t^4$\\
s=6&&&$1$&$1+11t^2+4t^3$\\
s=7&&&&$1+7t^2$\\
\end{tabular}$$ and polynomials $r_{d,s}$ by the following table.
$$\begin{tabular}{l | c | c|c |c }
&d=4&6\\
\hline
s=1&$1+3t$&$1+24t+105t^2+161t^3+168t^4+43t^5+10t^6$\\
s=2&$1+t$&$1+21t+71t^2+90t^3+63t^4+9t^5+t^6$\\
s=3&$1$&$1+ 18t+ 45t^2+ 45t^3 + 18t^4+ t^5$\\
s=4&&$1+15t+26t^2+19t^3+3t^4$\\
s=5&&$1+12t +13t^2 +6t^3$\\
s=6&&$1+9t+5t^2+t^3$\\
s=7&&$1+6t+t^2$\\
s=8&&$1+3t$\\
\end{tabular}$$
\begin{Proposition} Let $X_s$ be the blowup of ${\mathbb P}^2$ in $s$ general points with exceptional divisors $E_1,\ldots,E_s$, and write $E=\sum_{i=1}^sE_i$. With the $q_{d,s}$ and $r_{d,s}$ given by the above tables we get \begin{align*} \chi^{X_s,H}_{0}(dH-2E)&\equiv \frac{q_{d,s}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}\\ \chi^{X_s,H}_{K_{X_s}}(dH-2E)&\equiv \frac{\Lambda^{d^2-1-3s}q_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}, \quad d \hbox{ even}\\ \chi^{X_s,H}_{H}(dH-2E)&\equiv \frac{\Lambda^{3}r_{d,s}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}, \quad d \hbox{ even}\\ \chi^{X_s,H}_{E}(dH-2E)&\equiv \begin{cases} \frac{\Lambda^{d^2-1-3s} q_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}& d \hbox{ odd}\\ \frac{\Lambda^{d^2-4-3s} r_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}& d \hbox{ even} \end{cases} \end{align*} The same formulas also apply with $\chi^{X_s,H}_{c_1}(dH-2E)$ replaced by $\chi^{X_s,\omega}_{c_1}(dH-2E)$, with $\omega=H-a_1E_1-\ldots-a_sE_s$ and $0\le a_i<\frac{1}{\sqrt{s}}$ for all $i$. \end{Proposition} \begin{proof} Recall that $R_3=-\lambda^4 x^2 + (1-\lambda^4)^2$, $S_3=\lambda(x^2-(1-\lambda^4)^2)$. Noting that $K_{X_s}\equiv H+E\mod 2H^2(X_s,{\mathbb Z})$,
we get by
\thmref{nblow} that \begin{align*} \chi^{X_s,H}_0(dH-2E)&=\chi^{{\mathbb P}^2,H}_{0}\big(dH,\big(-\Lambda^4 P^2 + (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_H(dH-2E)&=\chi^{{\mathbb P}^2,H}_{H}\big(dH,\big(-\Lambda^4 P^2 + (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_E(dH-2E)&=\Lambda^s\chi^{{\mathbb P}^2,H}_{0}\big(dH,\big(P^2 - (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_{K_{X_s}}(dH-2E)&=\Lambda^s\chi^{{\mathbb P}^2,H}_{H}\big(dH,\big(P^2 - (1-\Lambda^4)^2\big)^s\big). \end{align*} Now we just put the values of the tables of \thmref{rpoly} into these formulas. \end{proof}
\subsection{Symmetries from Cremona transforms}
\begin{Remark} Conjecture \ref{ratconj} often predicts a symmetry for the polynomials $P^X_{c_1,L}(\Lambda)$. Assume Conjecture \ref{ratconj}. Then we have the following. \begin{enumerate} \item If $c_1\equiv L+K_X-c_1\mod 2H^2(X,{\mathbb Z})$, then $P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{c_1,L}(\frac{1}{\Lambda})$. \item More generally let $X$ be the blowup of ${\mathbb P}^2$ in $n$ points, with exceptional divisors $E_1,\ldots E_n$, and let $L=dH-a_1E_1-\ldots -a_nE_n$. If
$\sigma$ is a permutation of $\{1,\ldots, n\}$, we write $\sigma(L):=dH-a_{\sigma(1)}E_1-\ldots -a_{\sigma(n)}E_n$.
Then $\chi^{X,H}_{c_1}(L)=\chi^{X,H}_{\sigma(c_1)}(\sigma(L))$.
\end{enumerate}
Thus, if there is a $\sigma$ with $L=\sigma(L)$ and $\sigma(c_1)\equiv L+K_X-c_1\mod 2H^2(X,{\mathbb Z})$, then
$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{c_1,L}(\frac{1}{\Lambda})$.
\end{Remark} Other symmetries come from the Cremona transform of the plane, which we briefly review. Let $p_1,p_2,p_3$ be three general points in ${\mathbb P}^2$. For $k=1,2,3$ let $L_k$ be the line through $p_i,p_j$, where $\{i,j,k\}=\{1,2,3\}$. Let $X$ be the blowup of ${\mathbb P}^2$ in $p_1,p_2,p_3$, with exceptional divisors $E_1,E_2,E_3$, and let $\overline E_1,\overline E_2,\overline E_3$ be the strict transforms of the lines $L_1,L_2,L_3$. The $\overline E_i$ are disjoint $(-1)$-curves, which can be blown down to obtain another projective plane $\overline {\mathbb P}^2$. Let $H$ (resp. $\overline H$) be the pullback of the hyperplane class from ${\mathbb P}^2$ (resp. $\overline {\mathbb P}^2$) to $X$. Then $H^2(X,{\mathbb Z})$ has two different bases $H,E_1,E_2,E_3$ and $\overline H,\overline E_1,\overline E_2,\overline E_3$, which are related by the formula $$dH-a_1E_1-a_2E_2-a_3E_3=(2d-a_1-a_2-a_3)\overline H-(d-a_2-a_3)\overline E_1-(d-a_1-a_3)\overline E_2-(d-a_1-a_2)\overline E_3.$$ Note that this description is symmetric under exchanging the role of $H,E_1,E_2,E_3$ and $\overline H, \overline E_1,\overline E_2,\overline E_3$. Let $c_1\in H^2(X,{\mathbb Z})$. If $\<c_1,K_X\rangle$ is even, then it is easy to see that $\overline c_1\equiv c_1\mod 2H^2(X,{\mathbb Z})$, but if $\<c_1,K_X\rangle$ is odd, then $\overline c_1\equiv K_X-c_1\mod 2H^2(X,{\mathbb Z})$.
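In coordinates, the base change above is the map $(d,a_1,a_2,a_3)\mapsto(2d-a_1-a_2-a_3,\,d-a_2-a_3,\,d-a_1-a_3,\,d-a_1-a_2)$. A small sketch (hypothetical helper functions) checking that it is an involution, preserves self-intersection, and fixes classes with $d=a_1+a_2+a_3$:

```python
def cremona(cls):
    """Cremona transform in the coordinates (d, a1, a2, a3)
    with respect to the basis H, E1, E2, E3."""
    d, a1, a2, a3 = cls
    return (2*d - a1 - a2 - a3, d - a2 - a3, d - a1 - a3, d - a1 - a2)

def selfint(cls):
    d, a1, a2, a3 = cls
    return d*d - a1*a1 - a2*a2 - a3*a3

for c in [(1, 0, 0, 0), (5, 2, 2, 1), (7, 3, 2, 2)]:
    assert cremona(cremona(c)) == c            # involution
    assert selfint(cremona(c)) == selfint(c)   # preserves the intersection form

assert cremona((5, 2, 2, 1)) == (5, 2, 2, 1)   # d = a1+a2+a3 gives a fixed class
```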
For a class $L=dH-a_1E_1-a_2E_2-a_3E_3\in H^2(X,{\mathbb Z})$ we denote $\overline L=d\overline H-a_1\overline E_1-a_2\overline E_2-a_3\overline E_3.$ Then it is clear from the definition that $\chi^{X,H}_{c_1}(L)=\chi^{X,\overline H}_{\overline{c}_1}(\overline L)$, and by \lemref{blowindep} we get $\chi^{X,\overline H}_{\overline{c}_1}(\overline L)\equiv \chi^{X,H}_{\overline c_1}(\overline L).$ If $\sigma$ is a permutation of $\{1,2,3\}$ and we denote $\sigma(L):=dH-a_{\sigma(1)}E_1-a_{\sigma(2)}E_2-a_{\sigma(3)}E_3$, and if $\sigma(L)=L$, then it is clear that $\chi^{X,H}_{c_1}(L)=\chi^{X,H}_{\sigma(c_1)}(L)$.
Now assume $d=a_1+a_2+a_3$. Then $\overline L=L$, so that
$\chi^{X,H}_{c_1}(L)\equiv\chi^{X,H}_{{\overline c_1}}(L).$
Assume now $\<c_1,K_X\rangle$ is odd. Assuming also \conref{ratconj}, the polynomials $P^X_{c_1,L} \in \Lambda^{-c_1^2}{\mathbb Z}[\Lambda^4]$ mentioned there satisfy \begin{enumerate} \item $P^X_{c_1,L}(\Lambda)=P^X_{K_X-c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{L+K_X-c_1,L}(\frac{1}{\Lambda})=\Lambda^{L^2+8-K_X^2} P^X_{L-c_1,L}(\frac{1}{\Lambda}).$ \item Thus if there is a permutation $\sigma$ of $\{1,2,3\}$ with $\sigma(L)=L$ and $c_1\equiv L-\sigma(c_1)\mod 2 H^2(X,{\mathbb Z})$, or with $\sigma(L)=L$ and $c_1\equiv L+K_X-\sigma(c_1)\mod 2 H^2(X,{\mathbb Z})$, then we have the symmetries $$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{c_1,L}(\frac{1}{\Lambda})=P^X_{K_X-c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{K_X-c_1,L}(\frac{1}{\Lambda}).$$ \end{enumerate}
We check these predictions in a number of cases.
Let
$L_5=5H-2E_1-2E_2-E_3$, $L_6=6H-2E_1-2E_2-2E_3$, $L_7=7H-3E_1-2E_2-2E_3$.
We find \begin{align*}
&\chi^{X,H}_{K_X+E_2}(L_5)\equiv\chi^{X,H}_{E_2}(L_5)\equiv\frac{5\Lambda^5+6\Lambda^9+5\Lambda^{13}}{(1-\Lambda^4)^{14}},\\ &\chi^{X,H}_{H}(L_6)\equiv\chi^{X,H}_{E_1+E_2+E_3}(L_6)\equiv\frac{\Lambda^3+18\Lambda^7+45\Lambda^{11}+45\Lambda^{15}+18\Lambda^{19}+\Lambda^{23}}{(1-\Lambda^4)^{19}},\\ &\chi^{X,H}_{K_X-E_3}(L_6)\equiv\chi^{X,H}_{E_3}(L_6)\equiv\frac{8\Lambda^5+26\Lambda^9+60\Lambda^{13}+26\Lambda^{17}+8\Lambda^{21}}{(1-\Lambda^4)^{19}},\\ &\chi^{X,H}_{K_X-E_3}(L_7)\equiv\chi^{X,H}_{E_3}(L_7)\equiv\frac{11\Lambda^5+61\Lambda^9+265\Lambda^{13}+350\Lambda^{17}+265\Lambda^{21}+61\Lambda^{25}+11\Lambda^{29}}{(1-\Lambda^4)^{24}}. \end{align*}
\section{The invariants of ${\mathbb P}^1\times{\mathbb P}^1$} In this section we will use the results of the previous section to compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^1\times {\mathbb P}^1$.
\subsection{A structural result} First we will show, analogously to \thmref{blowrat}, that all the generating functions $\chi^{{\mathbb P}^1\times{\mathbb P}^1,\omega}_{c_1}(L,P^r)$ are rational functions.
\begin{Lemma} Let $c_1\in H^2({\mathbb P}^1\times{\mathbb P}^1,{\mathbb Z})$. Let $L$ be a line bundle on ${\mathbb P}^1\times {\mathbb P}^1$ with $\<L,c_1\rangle+r$ even. Let $\omega$ be an ample class on ${\mathbb P}^1\times{\mathbb P}^1$. Then $\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(L,P^r)\equiv \chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(L,P^{r})$. \end{Lemma} \begin{proof} We write $L=nF+mG$. By exchanging the roles of $F$ and $G$ if necessary we can write $\omega=G+\alpha F$, with $1\le \alpha$. We have to show that there are only finitely many classes $\xi$ of type $(c_1)$ with $\langle\xi,(F+G)\rangle\le 0\le \langle\xi,\omega\rangle$ and $\delta^X_\xi(L,P^r)\ne 0$. Such a class is of the form $\xi=aG-bF$, with $a,b\in {\mathbb Z}_{>0}$ satisfying
$$a\le b,\quad \alpha a\ge b, \quad 2ab\le |a(n+2)-b(m+2)|+r+2.$$
This gives $$2ab\le a|n+2|+b|m+2|+r+2\le b|n+m+4|+r+2.$$ Thus we get $a\le \frac{|n+m+4|+r+2}{2}$. Therefore $a$ is bounded, and by $\alpha a\ge b$ also $b$ is bounded. Therefore there are only finitely many possible classes $\xi$. \end{proof}
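The finiteness argument can be made explicit by brute-force enumeration. The following Python sketch (illustrative only; the sample values of $n,m,r,\alpha$ in the usage below are ours) lists all pairs $(a,b)$ satisfying the three inequalities of the proof, using the bound on $a$ derived above.

```python
# Enumerate the finitely many candidate classes xi = aG - bF with
# a <= b, alpha*a >= b, and 2ab <= |a(n+2) - b(m+2)| + r + 2.
def candidate_walls(n, m, r, alpha):
    a_max = (abs(n + m + 4) + r + 2) // 2 + 1  # bound on a from the proof
    walls = []
    for a in range(1, a_max + 1):
        for b in range(a, int(alpha * a) + 1):  # a <= b <= alpha * a
            if 2 * a * b <= abs(a * (n + 2) - b * (m + 2)) + r + 2:
                walls.append((a, b))
    return walls
```

Enlarging the search window beyond `a_max` produces no further solutions, in accordance with the bound; e.g. for $n=3$, $m=2$, $r=0$, $\alpha=2$ only finitely many pairs survive.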
We use the fact that the blowup $\widetilde {\mathbb P}^2$ of ${\mathbb P}^2$ in two different points is also the blowup of ${\mathbb P}^1\times {\mathbb P}^1$ in a point. We can identify the classes as follows. Let $H$ be the hyperplane class on ${\mathbb P}^2$ and let $E_1$, $E_2$ be the exceptional divisors of the double blowup of ${\mathbb P}^2$. Let $F$, $G$ be the fibres of the two different projections of ${\mathbb P}^1\times {\mathbb P}^1$ to its factors, and let $E$ be the exceptional divisor of the blowup of ${\mathbb P}^1\times {\mathbb P}^1$. Then on $\widetilde {\mathbb P}^2$ we have the identifications \begin{align*} F&=H-E_1, \quad G=H-E_2, \quad E=H-E_1-E_2,\\ H&=F+G-E, \quad E_1=G-E, \quad E_2=F-E. \end{align*}
\begin{Theorem}\label{P11rat} Let $c_1\in \{0,F,G,F+G\}$. Let $L$ be a line bundle on ${\mathbb P}^1\times {\mathbb P}^1$ with $\<L,c_1\rangle$ even. Let $r\in {\mathbb Z}_{\ge 0}$ with $\<L,c_1\rangle+r$ even. There exists a polynomial $p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}(t)$ and an integer $N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}$, such that for all ample classes $\omega$ on ${\mathbb P}^1\times {\mathbb P}^1$, we have $$\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(L,P^r)\equiv \frac{p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}(\Lambda^4)}{\Lambda^{c_1^2}(1-\Lambda^4)^{N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}}}.$$ \end{Theorem} \begin{proof}
Note that on $\widetilde {\mathbb P}^2$ we have $F+G=2H-E_1-E_2$. We write $L=nF+mG$, with $n,m\in {\mathbb Z}$. Then on $\widetilde {\mathbb P}^2$ we have $L=(n+m)H-nE_1-mE_2$. By \thmref{nblow} we have therefore \begin{align*} \chi^{\widetilde {\mathbb P}^2,H}_{0}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{0}\big((n+m)H, P^r \cdot R_{n+1}(P,\Lambda)R_{m+1}(P,\Lambda)\big),\\ \chi^{\widetilde {\mathbb P}^2,H}_{F}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{H}\big((n+m)H, P^r \cdot S_{n+1}(P,\Lambda)R_{m+1}(P,\Lambda)\big),\\ \chi^{\widetilde {\mathbb P}^2,H}_{G}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{H}\big((n+m)H, P^r \cdot R_{n+1}(P,\Lambda)S_{m+1}(P,\Lambda)\big),\\ \chi^{\widetilde {\mathbb P}^2,H}_{F+G}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{0}\big((n+m)H, P^r \cdot S_{n+1}(P,\Lambda)S_{m+1}(P,\Lambda)\big). \end{align*} As $R_{n}(P,\Lambda)\in {\mathbb Z}[P^2,\Lambda^4]$, $S_{n}(P,\Lambda)\in \Lambda{\mathbb Z}[P,\Lambda^4]$, we see by \thmref{P2rat1} that for $c_1=0,F,G,F+G$ we can write $$ \chi^{\widetilde {\mathbb P}^2,H}_{c_1}(nF+mG,P^r)\equiv \frac{p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}(\Lambda^4)}{\Lambda^{c_1^2}(1-\Lambda^4)^{N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}}},$$ with $p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}\in {\mathbb Q}[t]$ and $N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}\in{\mathbb Z}_{\ge 0}.$ As on $\widetilde {\mathbb P}^2$ we have $F+G=2H-E_1-E_2$, we get by \thmref{blowrat} that again for $c_1=0,F,G,F+G$ we have $\chi^{\widetilde {\mathbb P}^2,F+G}_{c_1}(nF+mG,P^r)\equiv\chi^{\widetilde {\mathbb P}^2,H}_{c_1}(nF+mG,P^r)$. Finally by the blowdown formula \thmref{nblow} we have $\chi^{\widetilde {\mathbb P}^2,F+G}_{c_1}(nF+mG,P^r)=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(nF+mG,P^r).$ \end{proof}
\subsection{Computations for $L=d(F+G)$} We will compute $\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for $d\le 7$ and $c_1=0$, $F$, $G$, $F+G$. Obviously by symmetry $\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{G}(d(F+G))=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G))$, and furthermore we have just seen that $\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(d(F+G))\equiv\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for any ample class $\omega$ on ${\mathbb P}^1\times{\mathbb P}^1$. We use a different strategy than in the proof of \thmref{P11rat}, which is computationally more tractable, and allows us to compute $\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for $d\le 7$, using only the $\chi^{{\mathbb P}^2,H}_H(nH,P^r)$, for $1\le n\le 8$, $0\le r\le 16$ already computed.
{\bf Step 1.} By $F=H-E_1$, $G=H-E_2$, $E=H-E_1-E_2$, and thus $d(F+G)-dE=dH$, and \thmref{nblow} we have \begin{align*} \chi^{\widetilde {\mathbb P}^2,H}_E(d(F+G)-dE,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1-E_2}(dH,P^r)=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,H}_E(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1-E_2}((d+1)H-E_1-E_2,P^{r})\\&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+2}),\\ \chi^{\widetilde {\mathbb P}^2,H}_F(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H-E_1}(dH,P^r)=\Lambda\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,H}_F(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1}((d+1)H-E_1-E_2,P^{r})\\&=\Lambda(1-\Lambda^4)\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+1}),\\ \chi^{\widetilde {\mathbb P}^2,H}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,H}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H}((d+1)H-E_1-E_2,P^{r})\\&=(1-\Lambda^4)^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r}), \end{align*} where we have used that $S_1(P,\Lambda)=\Lambda$, $S_2(P,\Lambda)=P\Lambda$ and $R_2(P,\Lambda)=(1-\Lambda^4)$.
The $\chi^{{\mathbb P}^2,H}_{H}(nH,P^{s})$ have been computed for $n\le 8$ and $s\le 16$. In the tables above they are only listed for $n\le 7$, and only up to adding a Laurent polynomial in $\Lambda$, so as to give them a particularly simple form, but they have been computed precisely. Thus, in the range $d\le 7$ and $r\le 14$, all the invariants on the left-hand side of the formulas of Step 1 have been computed.
{\bf Step 2.} For $d\le 7$ we compute \begin{align*} \tag{1}\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,P^r)&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,P^r)&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+2})\\ &\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r),\\ \tag{2}\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,P^r)&=\Lambda\chi^{{\mathbb P}^2,H}_{H}(dH,P^{r})+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,P^r)&=\Lambda(1-\Lambda^4)\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+1})\\ &\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r),\\ \tag{3}\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H}(dH,P^{r})+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=(1-\Lambda^4)^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r})\\ &\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r).\\ \end{align*} Here the sums are over all classes $\xi\in H^2(\widetilde {\mathbb P}^2,{\mathbb Z})$ with $\<H,\xi\rangle\le 0\le\langle(2H-E_1-E_2),\xi\rangle$ (but at least one of the inequalities is strict) and $\delta_\xi^{\widetilde {\mathbb P}^2}\ne 0$, and the summand is $\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$, if both inequalities are strict, and $\frac{1}{2}\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\frac{1}{2}\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$ if one of them is an equality. (Note that we can exclude the $\xi$ with $\langle\xi,H\rangle=0=\langle\xi,2H-E_1-E_2\rangle$, because with $\xi$ also $-\xi$ will fulfil this property and $\delta_\xi^{\widetilde {\mathbb P}^2}(L,P^r)=-\delta_{-\xi}^{\widetilde {\mathbb P}^2}(L,P^r)$.)
In (1) these are classes of type $(E=H-E_1-E_2)$, in (2) of type $(F=H-E_1)$ and in (3) of type $(F+G-E=H)$. By \lemref{blowindep} there are finitely many such classes. In fact in the notations of the proof of \lemref{blowindep} we have
$$n=2, \ \epsilon=\frac{1}{2}, \ \delta=\frac{1}{4}, |m_1+1|=|m_2+1|=1.$$
Thus if $\xi=aH-b_1E_1-b_2E_2$ is such a class, then we get by \eqref{blowbound} that
$|b_1|+|b_2|\le 4(|d+3| +r+4)$ and $0<a\le \frac{1}{2}(|b_1|+|b_2|)$. For all $\xi$ satisfying these bounds, we first check whether the criterion of \lemref{vanwall}(2) for the non-vanishing of the wallcrossing terms $\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$ is fulfilled; if so, we compute the wallcrossing term. We again use that by \lemref{vanwall}
$\delta_{\xi,d}^X(L,P^r)=0$ unless $d\le a_{\xi,L,X}:=\xi^2+2|\langle\xi,L-K_X\rangle|+2r+4$. Thus, also using \remref{qlevel}, it is enough to evaluate the formula of \defref{wallcrossterm} modulo $q^{a_{\xi,L,X}}$ and $\Lambda^{a_{\xi,L,X}}$. This is again a finite evaluation.
{\bf Step 3.} By \thmref{nblow} we have \begin{align*} \chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,P^r)&=\chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G),S_{d+1}(P,\Lambda) P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G),S_{d}(P,\Lambda) P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G),R_{d+1}(P,\Lambda)P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G),R_{d}(P,\Lambda)P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G),S_{d+1}(P,\Lambda)P^r),\\ \chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G),S_{d}(P,\Lambda)P^r). \end{align*} By \remref{npol} there exist polynomials $f_d\in {\mathbb Q}[x,\Lambda^4]$, $g_d\in {\mathbb Q}[x,\Lambda^4]$ with $f_d S_d(x,\lambda)+g_d S_{d+1}(x,\lambda)=\lambda(1-\lambda^4)^{N_d}$, and $h_d\in {\mathbb Q}[x,\Lambda^4]$, $l_d\in {\mathbb Q}[x,\Lambda^4]$ with $h_d R_d(x,\lambda)+l_d R_{d+1}(x,\lambda)=(1-\lambda^4)^{M_d}$. For $d\le 7$ we see that $f_d$, $h_d$ are polynomials in $x$ of degree at most $14$, and $g_d$, $l_d$ are polynomials in $x$ of degree at most $11$.
Thus we get \begin{align*} \chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G))&=\frac{1}{\Lambda(1-\Lambda^4)^{N_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,f_d(P,\Lambda))\\ &\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,g_d(P,\Lambda))\Big),\\ \chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G))&=\frac{1}{(1-\Lambda^4)^{M_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,h_d(P,\Lambda))\\ &\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,l_d(P,\Lambda))\Big),\\ \chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G))&=\frac{1}{\Lambda(1-\Lambda^4)^{N_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,f_d(P,\Lambda))\\ &\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,g_d(P,\Lambda))\Big). \end{align*}
All these computations are carried out with a Pari program. Finally we arrive at the following result.
\begin{Theorem} With the notation of \thmref{P11gen}. \begin{enumerate} \item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{0}(dF+dG)=\frac{p_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}-1-(d^2+4d+5)\Lambda^4}$ for $1\le d\le 7$. \item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{F+G}(dF+dG)=\frac{q_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}-\Lambda^2}$ for $1\le d\le 7$. \item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{F}(dF+dG)=\frac{r_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}}$ for $d=2,4,6$. \end{enumerate} \end{Theorem} \thmref{P11gen} follows from this and \propref{highvan}.
\end{document}
In-vitro dual inhibition of protein glycation and oxidation by some Arabian plants
Maqsood A. Siddiqui1,3,
Saima Rasheed2,
Quaiser Saquib1,3,
Abdulaziz A. Al-Khedhairy1,
Mansour S. Al-Said5,
Javed Musarrat4 &
Muhammad Iqbal Choudhary2
Diabetes mellitus is a metabolic disorder of epidemic proportion, projected to become a major cause of morbidity and mortality worldwide in the future. Despite extensive research in understanding this disease at the molecular level, and the discovery of new drugs, diabetes and its complications remain largely untreated. Many of the late diabetic complications are associated with the glycation of proteins in the body. Natural flora has long been a rich source of therapeutic agents, especially against diabetes. The present study deals with the anti-glycation properties of some medicinally important plants of the Arabian region.
Twenty-six medicinal plants, commonly found in different regions of the Arabian Peninsula, were evaluated for their protein anti-glycation activity by using the BSA-MG glycation assay in-vitro. The extracts were incubated with BSA and MG at 37 °C for 9 days; each sample was then examined for the presence of fluorescence (λex 330 nm, and λem 420 nm), which represents the extent of protein glycation. Antioxidant activity was evaluated by using 2,2-diphenyl-1-picrylhydrazyl (DPPH), iron chelation, and superoxide radical scavenging assays.
The data revealed that out of the 26 medicinal plants, five plants, viz. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica, were active against in-vitro protein glycation, with IC50 values between 0.408 and 1.690 mg/mL. Among the active plants, Glycyrrhiza glabra L. was found to be the most potent (IC50 = 0.408 ± 0.027 mg/mL), followed by Rosa indica (IC50 = 0.596 ± 0.0179 mg/mL), and Sida cordifolia L. (IC50 = 0.63 ± 0.009 mg/mL). The antioxidant potential of these plant extracts was also determined by using DPPH (2,2-diphenyl-1-picrylhydrazyl), iron chelation, and superoxide anion radical scavenging assays. Among the five plants, Sida cordifolia exhibited a potent antioxidant activity in both the DPPH and superoxide anion radical scavenging assays (IC50 = 0.005 ± 0.0004, and 0.078 ± 0.002 mg/mL, respectively), followed by Rosa indica (IC50 = 0.023 ± 0.0005 and 0.141 ± 0.003 mg/mL, respectively).
Protein glycation in hyperglycemic conditions involves oxidative changes. Therefore, dual inhibition of protein glycation and oxidation is a desirable property in any test substance investigated for therapeutic purposes.
Diabetes mellitus (DM) is an impending public health challenge of the present century [1]. It affects over 387 million people globally, and this number is projected to increase to 592 million by 2035. DM is currently the fourth leading cause of mortality in the world. It has also emerged as a major socioeconomic burden for developing countries [2]. In the last three decades, extensive research has been conducted on glycation and anti-glycation processes in diabetes, based on the fact that the hyperglycemic condition, or excess glucose in blood, leads to the binding of free sugars with bio-molecules [3–5]. Glycation is a spontaneous, non-enzymatic reaction between biomolecules (proteins, lipids, and DNA) and reducing sugars (such as glucose, fructose, and ribose), resulting in the formation of advanced glycation end-products (AGEs) [6–8]. The accelerated process of protein glycation has been identified as a marker, as well as a core reason, for the onset of many diabetic complications, affecting the eyes, blood vessels, kidneys, skin, etc. [9, 10]. Oxidative reactions are known to be involved in the protein glycation cascade. Most importantly, AGEs, via their receptors (RAGEs), inactivate enzymes and promote the formation of reactive oxygen species (ROS). It is suggested that the generation of oxygen free radicals by glycation of biomolecules is one of the major biochemical pathways of oxidative tissue damage in diabetes. The search for agents with dual inhibitory effects, i.e. antioxidant and anti-glycation, is therefore a valid approach towards the treatment of complications resulting from the non-enzymatic glycation reaction [11, 12]. Although extensive research has been conducted on various classes of glycation inhibitors, none has reached clinical use. Therefore, there is an urgent need to identify agents which inhibit or reverse the complex reactions of protein glycation and oxidation.
Evidence of the anti-diabetic properties of medicinal plants has been continuously reported. During the last two decades, we have been focusing on bioactive natural products. This has led to the identification of several classes of safe and effective lead molecules [12–16]. Information on the dual inhibition pattern (anti-glycation and antioxidant activity) of traditional Arabian medicinal plants is scarce. So far, no large-scale systematic study of the anti-glycation activity of medicinal herbs has been conducted. Therefore, the present study was designed to identify new and effective inhibitors of protein glycation during hyperglycemia from medicinal plants of the Arabian region. We evaluated 26 medicinal plants, commonly found in different regions of the Arabian Peninsula. These plants are used in herbal medicines for the treatment of different diseases, including diabetes.
Bovine serum albumin (BSA) and ethanol were purchased from Merck Marker Pvt. Ltd. (Germany); methylglyoxal (MG) (40 % aqueous solution), 2,2-diphenyl-1-picrylhydrazyl (DPPH), iron chloride, ferrozine, β-nicotinamide adenine dinucleotide (NADH), nitro blue tetrazolium (NBT), phenazine methosulphate (PMS), quercetin (purity: ≥95.0 %), gallic acid (purity: ≥98.0 %), and rutin (purity: ≥90 %) were from Sigma Aldrich (Japan). Sodium azide (NaN3), disodium hydrogen phosphate (Na2HPO4), and sodium dihydrogen phosphate (NaH2PO4) were obtained from Scharlau Chemie, S. A. (Spain), while dimethyl sulphoxide (DMSO) was acquired from Fischer Scientific (UK).
All plant samples were collected from different regions of the Arabian Peninsula. Different parts of these plants (such as leaves, flowers, stems, or roots; Table 1) were separately processed for the preparation of crude extracts. Plants were identified by a taxonomist at the Department of Botany, University of Karachi, Karachi, Pakistan (herbarium voucher numbers are mentioned in the supporting information). The samples were air-dried, protected from sunlight, and powdered. The powdered samples were then stored at room temperature.
Preparation of the crude extracts of medicinally important plants
Crude extracts were prepared by extracting different powdered parts of the plants (1 kg) in 3 L of distilled methanol. In brief, the extracts were obtained by triple soaking in methanol for 3 days (at room temperature), and the solvent was evaporated under reduced pressure. The crude extracts were then freeze-dried, solubilized in DMSO, and used for the in-vitro experiments.
In-vitro anti-glycation assay
The reaction was performed in triplicate, in such a way that in the 200 μL solution, the final concentration of BSA was 10 mg/mL, of methylglyoxal was 14 mM, and of the test extracts (dissolved in DMSO; final DMSO concentration 10 %) was 2 mg/mL. Solutions of methylglyoxal and BSA were prepared in phosphate buffer (0.1 M, pH 7.4, containing 3 mM sodium azide as an antimicrobial agent). Briefly, the 200 μL reaction mixture comprised BSA (50 μL), methylglyoxal (50 μL), test extract (20 μL), and phosphate buffer (80 μL), while in the negative control wells, 20 μL of DMSO (final concentration 10 %) was added instead of test extract. It was then incubated at 37 °C for 9 days (under sterile conditions). After incubation, each sample was examined for the development of fluorescence (λex 330 nm and λem 420 nm), against blank, on a microtitre plate reader (SpectraMax M5, Molecular Devices, CA, USA) [17]. Rutin was used as a positive control. The percent inhibition of each extract was calculated by using the following formula:
$$ \%\ \mathrm{Inhibition} = \left(1 - \mathrm{Fluorescence}\ \mathrm{of}\ \mathrm{test}\ \mathrm{sample}/\ \mathrm{Fluorescence}\ \mathrm{of}\ \mathrm{the}\ \mathrm{control}\right) \times 100 $$
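For readers implementing the assay readout, the formula above translates directly into code (a sketch; the function name and variables are ours, not from the plate-reader software):

```python
def percent_inhibition(fluor_sample, fluor_control):
    """Percent inhibition of AGE-specific fluorescence (excitation 330 nm,
    emission 420 nm) of a test well relative to the uninhibited BSA-MG control."""
    return (1.0 - fluor_sample / fluor_control) * 100.0
```

For example, a test well reading half the fluorescence of the control corresponds to 50 % inhibition.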
In-vitro antioxidant activities
DPPH Free radical scavenging assay
A solution of DPPH (0.3 mM) was prepared in ethanol, while different concentrations of the test extracts were prepared in DMSO. In each well of a 96-well plate, 5 μL of the test extract and 95 μL of DPPH solution were added, and the pre-read (absorbance) was recorded at 515 nm. The reaction was then incubated for 30 min at 37 °C. The plate was shaken for 1 min for thorough mixing, and the change in absorbance was recorded at 515 nm on a microplate reader (SpectraMax M5, Molecular Devices, CA, USA) [18]. Gallic acid was used as a positive control. The percentage of DPPH radical scavenging was calculated by using the following formula:
$$ \%\ \mathrm{RSA} = 100 - \left(\Delta \mathrm{A}_{\mathrm{sample}}/\Delta \mathrm{A}_{\mathrm{control}}\right) \times 100 $$
(Where RSA = radical scavenging activity and ΔA = change in absorbance)
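Equivalently, in code (a sketch with our own naming; ΔA is the change in absorbance at 515 nm over the 30 min incubation):

```python
def percent_rsa(delta_abs_sample, delta_abs_control):
    """Percent DPPH radical-scavenging activity (RSA), from the changes in
    absorbance at 515 nm of the sample well and of the DPPH-only control."""
    return 100.0 - (delta_abs_sample / delta_abs_control) * 100.0
```

A sample whose absorbance change is half that of the control thus has 50 % RSA.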
Iron chelation assay
The Fe2+-chelating ability was determined according to the method of Koncic et al., with slight modifications [19]. In this assay, the concentration of Fe2+ ions was measured through the formation of the ferrous ion–ferrozine complex. Plant extract, dissolved in DMSO (2 mg/mL, 5 μL), was mixed with 0.3 mM FeCl2 (35 μL) and 0.5 mM ferrozine (60 μL). Ferrozine reacts with divalent iron, resulting in the formation of a stable violet-colored complex (soluble in water). The mixture was shaken and left at room temperature for 10 min. The change in the absorbance of the resulting mixture was measured at 562 nm by using a SpectraMax M5 (Molecular Devices, CA, USA). Disodium EDTA was used as a reference compound.
Superoxide anion radical scavenging assay
The reaction mixture contained 10 μL of crude plant extract (2 mg/mL; dissolved in DMSO), 90 μL of phosphate buffer (0.1 M; pH 7.4), 40 μL of (0.2 mM) β-nicotinamide adenine dinucleotide (NADH), and 40 μL of (0.081 mM) nitro blue tetrazolium (NBT). The reaction was initiated by the addition of 20 μL of (0.008 mM) phenazine methosulphate (PMS). The solutions of NADH, NBT, and PMS were prepared in phosphate buffer (0.1 M; pH 7.4) [20]. The formation of superoxide was monitored by measuring the absorbance of the blue formazan dye after 5 min at 560 nm by using a microtitre plate reader (SpectraMax M5, Molecular Devices, CA, USA). Quercetin was used as a positive control.
The results were analyzed using SoftMax Pro software (Molecular Devices, CA, USA) and expressed as the mean ± S.E.M. of three experiments. IC50 values were calculated with the EZ-Fit enzyme kinetics program (Perrella Scientific Inc., Amherst, USA). GraphPad Prism 5 was used for plotting dose-dependent curves of the active plant extracts and for other graphs.
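The IC50 values in this study were computed with the EZ-Fit program. As an illustration of the underlying idea only, a crude IC50 can be estimated by log-linear interpolation between the two test concentrations that bracket 50 % inhibition; the dose-response values below are invented for illustration and are not data from this study:

```python
import math

def ic50_interpolate(concs, inhibitions):
    """Crude IC50 estimate: log-linear interpolation between the two
    tested concentrations that bracket 50 % inhibition.
    concs must be in ascending order and matched with inhibitions."""
    points = list(zip(concs, inhibitions))
    for (c1, y1), (c2, y2) in zip(points, points[1:]):
        if y1 <= 50.0 <= y2:
            frac = (50.0 - y1) / (y2 - y1)
            log_ic50 = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_ic50
    raise ValueError("50 % inhibition is not bracketed by the tested doses")

# Invented dose-response data (concentration in mg/mL vs % inhibition)
concs = [0.0625, 0.125, 0.25, 0.5, 1.0, 2.0]
inhib = [5.0, 12.0, 28.0, 45.0, 72.0, 88.0]
print(round(ic50_interpolate(concs, inhib), 3))
```

A full dose-response program such as EZ-Fit or GraphPad Prism instead fits a sigmoidal (four-parameter logistic) curve to all points, which is more robust than two-point interpolation.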
Results revealed that out of 26 medicinal plants, five (i.e. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica) were able to inhibit in-vitro protein glycation, with IC50 values between 0.408 and 1.690 mg/mL, while the remaining plant extracts were found to be inactive, showing less than 50 % inhibition at a 2 mg/mL concentration (Table 1). Among the five active plants, Glycyrrhiza glabra L. was found to be the most potent (IC50 = 0.408 ± 0.027 mg/mL; Fig. 1), followed by Rosa indica (IC50 = 0.596 ± 0.0179 mg/mL; Fig. 2) and Sida cordifolia L. (IC50 = 0.63 ± 0.009 mg/mL; Fig. 3). Extracts of Plumbago zeylanica and Tribulus terrestris showed a weak anti-glycation potential (IC50 = 1.300 ± 0.033 and 1.690 ± 0.020 mg/mL, respectively) as compared to the other active plants in this study (Table 1; Figs. 4 and 5).
In-vitro anti-glycation activity of methanolic extract of Glycyrrhiza glabra
In-vitro anti-glycation activity of methanolic extract of Rosa indica
In-vitro anti-glycation activity of methanolic extract of Sida cordifolia
Table 1 Anti-glycation activity of extracts of medicinally important plants of Arabian origin
In-vitro anti-glycation activity of methanolic extract of Plumbago zeylanica
In-vitro anti-glycation activity of methanolic extract of Tribulus terrestris
The plants found to be active against protein glycation in-vitro (i.e. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica) were evaluated for DPPH radical scavenging activity. All five plants were active (Table 2; Fig. 6). Gallic acid was used as the positive control in this assay. Results revealed that Sida cordifolia L. showed a potent antioxidant activity compared with the remaining four active plants.
In-vitro antioxidant (DPPH) activity of medicinally important plants of Arabian origin which were found to be active against in-vitro anti-glycation assay
Table 2 In-vitro antioxidant activity of medicinal plants of Arabian origin (which were found to be active in BSA-MG glycation assay)
In the next stage, the five active plants (i.e. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica) were evaluated for iron-chelating ability. Table 2 shows that none of them was able to chelate iron at a 2 mg/mL concentration. In the final step of this study, the selected plants were evaluated for superoxide anion radical scavenging activity. Results showed that, except for G. glabra, all other plants scavenged superoxide anion radicals effectively (Table 2; Figs. 7, 8, 9 and 10).
In-vitro superoxide anion radical scavenging activity of methanolic extract of Sida cordifolia
In-vitro superoxide anion radical scavenging activity of methanolic extract of Plumbago zeylanica
In-vitro superoxide anion radical scavenging activity of methanolic extract of Tribulus terrestris
In-vitro superoxide anion radical scavenging activity of methanolic extract of Rosa indica
The present study was carried out to investigate medicinal plants of Arabian origin as anti-glycation agents. We systematically evaluated 26 medicinal plants for their anti-glycation potential (Table 1). Results revealed that out of 26 medicinal plants, five (i.e. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica) were active against in-vitro protein glycation. Among the five active plants, Glycyrrhiza glabra L. (voucher number: 37999) was found to be the most potent. G. glabra belongs to the Fabaceae/Leguminosae family. It is famous for its underground stems, which are widely used to flavor confectionery [21]. This plant is also known for diverse biological activities, such as anti-inflammatory, anti-microbial, and hepatoprotective properties, and it is used as a folk remedy for sore throats, mouth ulcers, stomach ulcers, inflammatory stomach conditions, and indigestion [22–26]. G. glabra has also been reported to have hypoglycemic activity in rats [27]. G. glabra-based herbal formulations are known to exhibit anti-AGE activities, and glycyrrhizic acid, a pure compound from the roots of this plant, showed anti-glycation potential in rats on a high-fat diet [14, 28, 29]. Major constituents of G. glabra include flavonoids, isoflavonoids, saponins, and triterpenes [30]. To our knowledge, there is no report in the literature describing the in-vitro anti-glycation activity of root extracts of G. glabra. In our study, the methanolic extract of G. glabra showed 63.42 % inhibition (IC50 = 0.408 ± 0.027 mg/mL) in the BSA-MG glycation assay (Table 1; Fig. 1). In view of these results, G. glabra may serve as a therapeutic agent to reduce AGE formation in diabetes.
Rosa indica L. (voucher number: 91863) is an ornamental plant known for its perfuming effect. It possesses pharmacological properties such as antioxidant, anti-fungal, anti-bacterial, and urease inhibitory activities [31–33]. Manikandan et al. reported the synthesis of silver nanoparticles using an ethanolic extract of the petals of Rosa indica, and its in-vitro antibacterial, anticancer, and anti-inflammatory activities. Different parts of R. indica (e.g. petals and buds) are used to treat runny nose, blocked bronchial tubes, asthma, and chest problems [34]. Bioactive compounds isolated from Rosa indica include flavonoids, alkaloids, phenols, saponins, and steroids [35]. There is no report describing the anti-glycation activity of R. indica. In our in-vitro assay, R. indica showed a good anti-glycation potential with 78.56 % inhibition (IC50 = 0.596 ± 0.0179 mg/mL) (Table 1; Fig. 2).
The third most potent plant, Sida cordifolia L. (voucher number: 12135), belongs to the Malvaceae family. Roots of S. cordifolia are used for coryza, pain, cardiac diseases, and nervous disorders, and for their anti-inflammatory, analgesic, hypoglycemic, antimicrobial, anti-hypercholesterolemic, and antioxidant activities [36–40]. In addition, the extract of S. cordifolia has shown anti-aging properties [41]. Major phytoconstituents of Sida cordifolia include alkaloids, flavonoids, steroids, phytoecdysteroids, and fatty acids [42]. To the best of our knowledge, this is the first report describing the anti-glycation potential of the methanolic extract of the seeds of S. cordifolia. In the present study, S. cordifolia showed 81.98 % inhibition of BSA-MG glycation with an IC50 of 0.63 ± 0.009 mg/mL (Table 1; Fig. 3).
Crude extracts of Plumbago zeylanica L. (voucher number: 24177) and Tribulus terrestris L. (voucher number: 53177) showed a weak anti-glycation potential (IC50 = 1.300 ± 0.033 and 1.690 ± 0.020 mg/mL, respectively) when compared with the other active plants of this study (Table 1; Figs. 4 and 5).
The plant P. zeylanica shows many biological properties, such as anti-inflammatory, hypolipidemic, wound-healing, antidiabetic, memory-inducing, blood-coagulation, anti-malarial, anti-fertility, anti-microbial, anticancer, antiviral, antioxidant, and larvicidal activities. Phytochemical investigation showed that these biological activities are due to the presence of compounds such as elliptinone, zeylanone, sitosterol, and plumbagin [43]. Tribulus terrestris L. is known for several pharmacological properties and for its use in folk medicine for the treatment of impotence, edema, rheumatism, kidney stones, and hypertension. T. terrestris contains phenols, saponins, alkaloids, and sterols as active constituents [44].
Oxidative reactions are known to be involved in the protein glycation cascade. Most importantly, AGEs, via their receptors (RAGEs), inactivate enzymes and promote the formation of reactive oxygen species. Dual antioxidant and anti-glycation activity is therefore a valid approach for the treatment of complications resulting from hyperglycemia [11, 12]. The antioxidant activities of plants rely mainly on two mechanisms: scavenging the free radicals produced in the body, or chelating transition metals [45]. Keeping this in view, all active plants (i.e. Sida cordifolia, Plumbago zeylanica, Tribulus terrestris, Glycyrrhiza glabra, and Rosa indica) were evaluated for their DPPH and superoxide anion radical scavenging and iron-chelating activities.
DPPH (2,2-diphenyl-1-picrylhydrazyl) is a stable free radical in which electronic delocalization results in a deep violet coloration. Certain plant extracts are able to donate hydrogen atoms and convert the DPPH radical into its reduced, stable form, causing the violet color to fade to pale yellow [45]. The in-vitro DPPH radical scavenging assay was performed with gallic acid as a positive control. Results revealed that all five plants were active (Table 2; Fig. 6).
The glycation reaction and AGEs are known to produce reactive oxygen intermediates (mainly superoxide anion and hydrogen peroxide) both in-vitro and in-vivo. In the in-vivo system, once generated, H2O2 can quickly enter the cell, while other activated oxygen species cannot. Within the cell, H2O2 can react with iron or copper in the Fenton reaction, leading to the formation of hydroxyl radicals. These hydroxyl radicals are contributing factors in diabetes-related oxidative stress [46]. Therefore, metal chelators (e.g. Fe-chelators) can effectively serve as AGE inhibitors. Interestingly, when we evaluated the five active plants (i.e. S. cordifolia, P. zeylanica, T. terrestris, G. glabra, and R. indica) for iron-chelating ability, all were found to be inactive, showing that they are unable to chelate iron at a 2 mg/mL concentration.
The superoxide radical anion is formed by the one-electron reduction of molecular oxygen by the membrane-bound enzyme nicotinamide adenine dinucleotide phosphate (NADPH) oxidase. Ortwerth et al., who monitored superoxide anion generation in glycation reactions via the reduction of ferricytochrome C, reported that Amadori products (formed from the reaction of lysine with small sugars) can generate superoxide anion even in the absence of metals [47]. Scavenging of superoxide anion was therefore identified as a useful approach to inhibit glycation-mediated complications. In the final step of this study, we subjected all five plants to the superoxide anion radical scavenging assay. Results showed that, except for G. glabra, the four remaining plants scavenged superoxide anion radicals effectively (Table 2; Figs. 7, 8, 9 and 10).
In conclusion, the crude methanolic extracts of Rosa indica L. and Sida cordifolia L. exhibited a potent anti-glycation activity in the in-vitro BSA-MG glycation model. On the basis of these findings, one possible mechanism of their reported antidiabetic activities is the inhibition of glycation, together with their antioxidant properties. The anti-glycation activity of these medicinal plants is in good agreement with their use in antidiabetic herbal medicines. These plants therefore need to be investigated further, phytochemically as well as pharmacologically, to identify the active constituents and to establish their therapeutic potential against glycation-induced pathologies in diabetes.
AGEs, advanced glycation end products; BSA, bovine serum albumin; DM, diabetes mellitus; DNA, deoxyribonucleic acid; DPPH, 1,1-diphenyl-2-picrylhydrazyl; em, emission; ex, excitation; H2O2, hydrogen peroxide; IC50, half maximal inhibitory concentration; MG, methylglyoxal; RAGEs, receptor for advanced glycation end products; ROS, reactive oxygen species
Zimmet P. The burden of type 2 diabetes: Are we doing enough? Diabetes Metab. 2003;29:6S9–18.
International Diabetes Federation (IDF). IDF Diabetes Atlas. 6th ed. Brussels: IDF; 2013.
Finlayson C, Zimmerman D. Hyperglycemia not due to diabetes mellitus. Clin Pediatr Emerg Med. 2009;10:252–55.
Lorenzo C, Williams K, Hunt KJ, Haffner SM. The national cholesterol education program-Adult treatment panel III, international diabetes federation, and world health organization definitions of the metabolic syndrome as predictors of incident cardiovascular disease and diabetes. Diabetes Care. 2007;30:8–13.
Rondeau P, Bourdon E. The glycation of albumin: Structural and functional impacts. Biochimie. 2011;93:645–58.
Cho SJ, Roman G, Yeboah F, Konishi Y. The road to advanced glycation end products: A mechanistic perspective. Curr Med Chem. 2007;14:1653–71.
Li W, Zheng H, Bukuru J, De Kimpe N. Natural medicines used in the traditional Chinese medical system for therapy of diabetes mellitus. J Ethnopharmacol. 2004;92:1–21.
Ulrich P, Cerami A. Protein glycation, diabetes, and aging. Recent Prog Horm Res. 2001;56:1–21.
Ahmed N. Advanced glycation end products- Role in pathology of diabetic complications. Diabetes Res Clin Pract. 2005;67:3–21.
Reddy VP, Beyaz A. Inhibitors of the Maillard reaction and AGE breakers as therapeutics for multiple diseases. Drug Discov Today. 2006;11:646–54.
Maritim AC, Sanders RA, Watkins JB. Diabetes, oxidative stress, and antioxidants: A review. J Biochem Mol Toxicol. 2003;17:24–38.
Yamaguchi F, Ariga T, Yoshimura Y, Nakazawa H. Antioxidative and anti-glycation activity of garcinol from Garcinia indica fruit rind. J Agric Food Chem. 2000;48:180–85.
Ayatollahi SAM, Kobarfard F, Asgarpanah J, Choudhary MI. Anti-glycation activity of Otostegia persica (Burm.) Boiss. Afr J Biotechnol. 2010;9:3645–48.
Cheng HS, Kong JM, Ng AX, Chan WK, Ton SH, Kadir KA. Novel inhibitory effects of Glycyrrhizic Acid on the accumulation of advanced glycation end product and its receptor expression. Nat Prod Bioprospect. 2014;4:325–33.
Choudhary MI, Adhikari A, Rasheed S, Marasini BP, Hussain N, Kaleem WA, Atta-ur-Rahman. Cyclopeptide alkaloids of Ziziphus oxyphylla Edgw. as novel inhibitors of α-glucosidase enzyme and protein glycation. Phytochem Lett. 2011;4:404–06.
Peng X, Ma J, Chen F, Wang M. Naturally occurring inhibitors against the formation of advanced glycation end-products. Food Funct. 2011;2:289–301.
Zeb A, Malik I, Rasheed S, Choudhary MI, Basha FZ. Metronidazole esters: A new class of anti-glycation agents. Med Chem. 2012;8:846–52.
Orhan I, Kartal M, Naz Q, Ejaz A, Yilmaz G, Kan Y, Konuklugil B, Sener B, Choudhary MI. Antioxidant and anticholinesterase evaluation of selected Turkish Salvia species. Food Chem. 2007;103:1247–54.
Koncic MZ, Barbaric M, Perkovic I, Zorc B. Antiradical, chelating and antioxidant activities of hydroxamic acids and hydroxyureas. Molecules. 2011;16:6232–42.
Hazra B, Biswas S, Mandal N. Antioxidant and free radical scavenging activity of Spondias pinnata. BMC Complement Altern Med. 2008;8:63–73.
Vispute S, Khopade S. Glycyrrhiza Glabra Linn. "Klitaka": A review. Int J Pharm Biol Sci. 2011;2:42–5.
Dhingra D, Parle M, Kulkarni SK. Memory enhancing activity of Glycyrrhiza glabra in mice. J Ethnopharmacol. 2004;91:361–65.
Gupta VK, Fatima A, Faridi U, Negi AS, Shanker K, Kumar JK, Rahuja N, Luqman S, Sisodia BS, Saikia D, Darokar MP, Khanuja SPS. Antimicrobial potential of Glycyrrhiza glabra roots. J Ethnopharmacol. 2008;116:377–80.
RenJie L. Optimization of extraction process of Glycyrrhiza glabra polysaccharides by response surface methodology. Carbohydr Polym. 2008;74:858–61.
Sedighinia F, Afshar AS, Soleimanpour S, Zarif R, Asili J, Ghazvini K. Antibacterial activity of Glycyrrhiza glabra against oral pathogens: An in vitro study. Avicenna J Phytomed. 2012;2:118–24.
Wittschier N, Faller G, Hensel A. Aqueous extracts and polysaccharides from Liquorice roots (Glycyrrhiza glabra L.) inhibit adhesion of Helicobacter pylori to human gastric mucosa. J Ethnopharmacol. 2009;125:218–23.
Sitohy MZ, Massry RA, Saadany SS, Labib SM. Metabolic effects of licorice roots (Glycyrrhiza glabra) on lipid distribution pattern, liver and renal functions of albino rats. Nahrung. 1991;35:799–806.
Deetae P, Parichanon P, Trakunleewatthana P, Chanseetis C, Lertsiri S. Antioxidant and anti-glycation properties of Thai herbal teas in comparison with conventional teas. Food Chem. 2012;133:953–59.
Lo HY, Hsiang CY, Li TC, Li CC, Huang HC, Ho TY, Chen JC. A novel glycated hemoglobin A1c-lowering traditional Chinese medicinal formula, identified by translational medicine study. Plos One. 2014;9:e104650.
Parvaiz M, Hussain K, Khalid S, Hussnain N, Iram N, Hussain Z, Ali MA. A review: Medicinal importance of Glycyrrhiza glabra L. (Fabaceae family). Global J Pharmacol. 2014;8:8–13.
Bai S, Bharti P, Seasotiya L, Malik A, Dalal S. In vitro screening and evaluation of some Indian medicinal plants for their potential to inhibit Jack bean and bacterial ureases causing urinary infections. Pharm Biol. 2014;53:326–33.
Khan JA, Tewari S. A study on antibacterial properties of Rosa indica against various pathogens. Electron J Environ Agric Food Chem. 2011;10:2838–46.
Saeed R, Hameed-Ur-Rehman, Ali S, Ullah H, Ullah M, Rohullah, Hassan S, Farhan, Ahmed S, Akhwan S. Phytochemical analysis and anti-microbial activities of Rosa indica collected from Kohat Pakistan. Am J Phytomed Clin Ther. 2014;2:1370–77.
Manikandan R, Manikandan B, Raman T, Arunagirinathan K, Prabhu NM, Jothi BM, Perumal M, Palanisamy S, Munusamy A. Biosynthesis of silver nanoparticles using ethanolic petals extract of Rosa indica and characterization of its antibacterial, anticancer and anti-inflammatory activities. Spectrochim Acta A Mol Biomol Spectrosc. 2015;138:120–29.
Sahoo AM, Chakraborti CK, Nayak S, Kayal S. Correlation between phytochemical screening and in vitro antibacterial activity study of Rosa indica Linn. leaves. Int J Res Ayurveda Pharm. 2011;2:1595–97.
Derbre S, Morel S, Richomme P, Toure AK. Anti-glycation agent comprising a Garcinia kola extract or fraction. 2014.US Patent 20140142171 A1.
Franzotti EM, Santos CV, Rodrigues HM, Mourao RH, Andrade MR, Antoniolli AR. Anti-inflammatory, analgesic activity and acute toxicity of Sida cordifolia L. (Malva-branca). J Ethnopharmacol. 2000;72:273–77.
Kanth VR, Diwan PV. Analgesic, antiinflammatory and hypoglycaemic activities of Sida cordifolia. Phytother Res. 1999;3:75–7.
Mahesh B, Satish S. Antimicrobial activity of some important medicinal plant against plant and human pathogens. World J Agri Sci. 2008;4:839–43.
Momin MA, Bellah SF, Rahman SM, Rahman AA, Murshid GM, Emran TB. Phytopharmacological evaluation of ethanol extract of Sida cordifolia L. roots. Asian Pac J Trop Biomed. 2014;4:18–24.
Dhalwal K, Deshpande YS, Purohit AP, Kadam SS. Evaluation of the Antioxidant Activity of Sida cordifolia. Pharm Biol. 2005;43:754–61.
Galal A, Raman V, Khan IA. Sida cordifolia, a traditional herb in modern perspective-A review. Curr Tradit Med. 2015;1:5–17.
Jain P, Sharma HP, Basri F, Baraik B, Kumari S, Pathak C. Pharmacological Profiles of Ethno-Medicinal Plant: Plumbago zeylanica L. - A Review. Int J Pharm Sci Rev Res. 2014;24:157–63.
Hammoda HM, Ghazy NM, Harraz FM, Radwan MM, ElSohly MA, Abdallah II. Chemical constituents from Tribulus terrestris and screening of their antioxidant activity. Phytochemistry. 2013;92:153–59.
Kazeem MI, Ashafa AOT. In-vitro antioxidant and antidiabetic potentials of Dianthus basuticus Burtt Davy whole plant extract. J Herb Med. 2015;5:158–64.
Khechai F, Ollivier V, Bridey F, Amar M, Hakim J, Prost D. Effect of advanced glycation end product–modified albumin on tissue factor expression by monocytes. Arterioscler Thromb Vasc Biol. 1997;17:2885–90.
Ortwerth BJ, James H, Simpson G, Linetsky M. The generation of superoxide anions in glycation reactions with sugars, osones, and 3-deoxyosones. Biochem Biophys Res Commun. 1998;245:161–5.
This project was funded by the National Plan for Science, Technology and Innovation (MAARIFAH), King Abdulaziz City for Science and Technology (KACST), Kingdom of Saudi Arabia (Award Number 12-MED2491-02).
The datasets supporting this manuscript are included within the Materials and Methods section of the article and in the additional supporting information, and are available in the appropriate sections of this article.
MAS (Maqsood A. Siddiqui) designed the research proposal, and selected and arranged the plants for the study. SR (Saima Rasheed) was responsible for the in-vitro bioassays and interpretation of the bioassay results. QS (Quaiser Saquib) contributed the ethnobotanic data. AAA (Abdulaziz A. Al-Khedhairy) was involved in the proposal writing and finalization of manuscript. MSA (Mansour S. Al-Said) worked on research methodology and discussion on the results. JM (Javed Musarrat) worked on research methodology and antioxidant results. MIC (M. Iqbal Choudhary) has made substantial contribution for the interpretation of results and finalization of manuscript. All authors have read and approved the final manuscript.
As corresponding author, I give my permission for the material in this manuscript to appear in the print and online versions.
This information is not relevant
Department of Zoology, College of Science, King Saud University, P. O. Box. 2455, Riyadh, 11451, Saudi Arabia
Maqsood A. Siddiqui, Quaiser Saquib & Abdulaziz A. Al-Khedhairy
H.E.J. Research Institute of Chemistry, International Center for Chemical and Biological Sciences, University of Karachi, Karachi, 75270, Pakistan
Saima Rasheed & Muhammad Iqbal Choudhary
A. R. Al-Jeraisy Chair for DNA Research, Zoology Department, College of Science, King Saud University, P. O. Box. 2455, Riyadh, 11451, Saudi Arabia
Maqsood A. Siddiqui & Quaiser Saquib
Department of Agriculture Microbiology, Faculty of Agriculture Sciences, AMU, Aligarh, 202002, India
Javed Musarrat
Departments of Pharmacognosy, College of Pharmacy, King Saud University, P. O. Box. 2455, Riyadh, 11451, Saudi Arabia
Mansour S. Al-Said
Correspondence to Muhammad Iqbal Choudhary.
Siddiqui, M.A., Rasheed, S., Saquib, Q. et al. In-Vitro dual inhibition of protein glycation, and oxidation by some Arabian plants. BMC Complement Altern Med 16, 276 (2016). https://doi.org/10.1186/s12906-016-1225-7
Arabian medicinal plants
Advanced glycation end products (AGEs)
Glycyrrhiza glabra L.
Rosa indica L.
Sida cordifolia L. | CommonCrawl |
Genomic reconstruction of the SARS-CoV-2 epidemic in England
Harald S. Vöhringer1, Theo Sanderson2,3, Matthew Sinnott2, Nicola De Maio1, Thuy Nguyen2, Richard Goater2, Frank Schwach2,4, Ian Harrison4, Joel Hellewell5, Cristina V. Ariani2, Sonia Gonçalves2, David K. Jackson2, Ian Johnston2, Alexander W. Jung1, Callum Saint2, John Sillitoe2, Maria Suciu2, Nick Goldman1, Jasmina Panovska-Griffiths6, The Wellcome Sanger Institute COVID-19 Surveillance Team, The COVID-19 Genomics UK (COG-UK) Consortium*, Ewan Birney1, Erik Volz7, Sebastian Funk5, Dominic Kwiatkowski2, Meera Chand4,8, Inigo Martincorena2, Jeffrey C. Barrett2 & Moritz Gerstung1,9
Evolutionary genetics
The evolution of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus leads to new variants that warrant timely epidemiological characterization. Here we use the dense genomic surveillance data generated by the COVID-19 Genomics UK Consortium to reconstruct the dynamics of 71 different lineages in each of 315 English local authorities between September 2020 and June 2021. This analysis reveals a series of subepidemics that peaked in early autumn 2020, followed by a jump in transmissibility of the B.1.1.7/Alpha lineage. The Alpha variant grew when other lineages declined during the second national lockdown and regionally tiered restrictions between November and December 2020. A third more stringent national lockdown suppressed the Alpha variant and eliminated nearly all other lineages in early 2021. Yet a series of variants (most of which contained the spike E484K mutation) defied these trends and persisted at moderately increasing proportions. However, by accounting for sustained introductions, we found that the transmissibility of these variants is unlikely to have exceeded the transmissibility of the Alpha variant. Finally, B.1.617.2/Delta was repeatedly introduced in England and grew rapidly in early summer 2021, constituting approximately 98% of sampled SARS-CoV-2 genomes on 26 June 2021.
The SARS-CoV-2 virus accumulates approximately 24 point mutations per year, or 0.3 mutations per viral generation1,2,3. Most of these mutations appear to be evolutionarily neutral but, as the SARS-CoV-2 epidemic spread around the world during spring 2020, it became apparent that the virus is continuing to adapt to its human host. An initial sign was the emergence and global spread of the spike protein variant D614G in the second quarter of 2020. Epidemiological analyses estimated that this mutation, which defines the B.1 lineage, confers a 20% transmissibility advantage over the original A lineage that was isolated in Wuhan, China4.
A broad range of lineages have been defined since that can be used to track SARS-CoV-2 transmission across the globe5,6. For example, B.1.177/EU-1 emerged in Spain in early summer 2020 and spread across Europe through travel7. Subsequently, four variants of concern (VOCs) have been identified by the WHO and other public health authorities: the B.1.351/Beta lineage was discovered in South Africa8, where it spread rapidly in late 2020. The B.1.1.7/Alpha lineage was first observed in Kent in September 2020 (ref. 9) from where it swept through the United Kingdom and large parts of the world due to a 50–60% increase10,11,12,13 in transmissibility. P.1/Gamma originated in Brazil14,15 and has spread throughout South America. Most recently, B.1.617.2/Delta was associated with a large surge of coronavirus disease 2019 (COVID-19) in India in April 2021 and subsequently around the world.
Epidemiology of SARS-CoV-2 in England
In the United Kingdom, by late June 2021 the COVID-19 Genomics UK Consortium (COG-UK) had sequenced close to 600,000 viral samples. These data have enabled a detailed reconstruction of the dynamics of the first wave of the epidemic in the United Kingdom between February and August 2020 (ref. 16). Here we leverage a subset of those data—genomic surveillance data generated at the Wellcome Sanger Institute—to characterize the growth rates and geographical spread of different SARS-CoV-2 lineages and reconstruct how newly emerging variants changed the course of the epidemic.
Our data cover England between 1 September 2020 and 26 June 2021, encompassing three epidemic waves and two national lockdowns (Fig. 1a). In this time period, we sequenced 281,178 viral genomes, corresponding to an average of 7.2% (281,178/3,894,234) of all of the positive tests from PCR testing for the wider population, ranging from 5% in winter 2020 to 38% in early summer 2021, and filtered to remove cases that were associated with international travel (Methods and Extended Data Fig. 1a, b). Overall, a total of 328 SARS-CoV-2 lineages were identified using the PANGO lineage definition5. As some of these lineages were only rarely and intermittently detected, we collapsed these on the basis of the underlying phylogenetic tree into a set of 71 lineages for modelling (Fig. 1b–d and Supplementary Tables 1 and 2).
Fig. 1: SARS-CoV-2 surveillance sequencing in England between September 2020 and June 2021.
a, Positive Pillar 2 SARS-CoV-2 tests in England. b, The relative frequency of 328 different PANGO lineages, representing approximately 7.2% of the tests shown in a. c, Positive tests (row 1) and the frequency of 4 major lineages (rows 2–5) across 315 English lower tier local authorities. d, The absolute frequency of sequenced genomes mapped to 71 PANGO lineages. The blue areas in the pie charts are proportional to the fraction of LTLAs in which a given lineage was observed.
These data reveal a diversity of lineages in the fall of 2020 followed by sweeps of the Alpha and Delta variants (Fig. 1b and Supplementary Tables 2 and 3). Figure 1c shows the geographical distribution of cases and of different lineages, studied at the level of 315 English lower tier local authorities (LTLAs), administrative regions with approximately 100,000–200,000 inhabitants.
Modelling the dynamics of SARS-CoV-2
We developed a Bayesian statistical model that tracks the fraction of genomes from different lineages in each LTLA in each week and fits the daily total number of positive Pillar 2 tests (Methods and Extended Data Fig. 2). The multivariate logistic regression model is conceptually similar to previous approaches in its estimation of relative growth rates10,11. It accounts for differences in the epidemiological dynamics between LTLAs, and enables the introduction of new lineages (Fig. 2a–c). Despite the sampling noise in a given week, the fitted proportions recapitulate the observed proportions of genomes as revealed by 35 example LTLAs covering the geography of England (Fig. 2b, c and Supplementary Notes 1 and 2). The quality of fit is confirmed by different probabilistic model selection criteria (Extended Data Fig. 3) and also evident at the aggregated regional level (Extended Data Fig. 4).
Fig. 2: Spatiotemporal model of 71 SARS-CoV-2 lineages in 315 English LTLAs between September 2020 and June 2021.
a, The average growth rates for 71 lineages. Data are median ± 95% CI. b, Lineage-specific relative frequency for 35 selected LTLAs, arranged by longitude and latitude to geographically cover England. c, Fitted lineage-specific relative frequency for the same LTLAs as in b. d, Fitted lineage-specific incidence for the same LTLAs as in b.
Although the relative growth rate of each lineage is modelled as identical across LTLAs, the local viral proportions change dynamically due to the timing and rate of introduction of different lineages. The model also calculates total and lineage-specific local incidences and time-dependent growth rates and approximate reproduction numbers Rt by negative binomial spline fitting of the number of daily positive PCR tests (Methods, Fig. 2d and Extended Data Fig. 2c). Together, this enables a quantitative reconstruction of different periods of the epidemic, which we will discuss in chronological order.
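The core of such a multivariate logistic model can be sketched as follows. This is a simplified, deterministic version (no Bayesian inference, no LTLA structure, and the intercepts and growth rates are hypothetical): each lineage l has an intercept a_l, reflecting the timing and size of its introduction, and a relative growth rate b_l, and the expected lineage proportions at time t are the softmax of a_l + b_l·t:

```python
import math

def lineage_proportions(a, b, t):
    """Expected lineage proportions at time t (weeks) under per-lineage
    logistic growth: p_l(t) is the softmax of a_l + b_l * t.
    a: intercepts (introduction/offset), b: relative growth rates."""
    logits = [a_l + b_l * t for a_l, b_l in zip(a, b)]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical three-lineage example: a lineage introduced later
# (low intercept) but with a higher relative growth rate sweeps to dominance.
a = [0.0, -1.0, -4.0]
b = [0.0, 0.1, 0.5]
for week in (0, 8, 16):
    print(week, [round(p, 2) for p in lineage_proportions(a, b, week)])
```

Because proportions are relative, only differences between growth rates are identifiable, which is why the model reports growth rates of each lineage relative to a reference.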
Multiple subepidemics in autumn 2020
Autumn 2020 was characterized by a surge of cases—concentrated in the north of England—that peaked in November, triggering a second national lockdown (Fig. 1a, c). This second wave initially featured B.1 and B.1.1 sublineages, which were slightly more prevalent in the south and north of England, respectively (Fig. 2b, c). Yet, the proportion of B.1.177 and its geographically diverse sublineages steadily increased across LTLAs from around 25% at the beginning of September to 65% at the end of October. This corresponds to a growth rate of between 8% (growth per 5.1 d; 95% confidence interval (CI) = 7–9%) and 12% (95% CI = 11–13%) greater than that of B.1 or B.1.1. The trend of B.1.177 expansion relative to B.1 persisted throughout January (Extended Data Fig. 5a) and involved a number of monophyletic sublineages that arose in the UK, and similar patterns were observed in Denmark17 (Extended Data Fig. 5b). Such behaviour cannot easily be explained by international travel, which was the major factor in the initial spread of B.1. throughout Europe in summer 2020 (ref. 7). However, the underlying biological mechanism is unclear as the characteristic A222V spike variant is not believed to confer a growth advantage7.
The spread of Alpha during restrictions
The subsequent third wave from December 2020 to February 2021 was almost exclusively driven by Alpha/B.1.1.7, as described previously10,11,18. The rapid sweep of Alpha was due to an estimated transmissibility advantage of 1.52 compared with B.1.1 (growth per 5.1 d; 95% CI = 1.50–1.55; Fig. 2a), assuming an unchanged generation interval distribution19. The growth advantage is thought to stem, at least in part, from spike mutations that facilitate ACE2 receptor binding (N501Y)20,21 and furin cleavage (P681H)22. Alpha grew during a period of restrictions, which proved to be insufficient to contain its spread (Fig. 3a).
Fig. 3: Growth of B.1.1.7/Alpha and other lineages in relation to lockdown restrictions between November 2020 and March 2021.
a, Maps and dates of national and regional restrictions in England. Second national lockdown: closed hospitality businesses; contacts ≤ 2, outdoors only; open schools; reasonable excuse needed for leaving home45. Tier 1: private indoor gatherings of ≤6 persons. Tier 2: as tier 1 plus restricted hospitality services; gatherings of ≤6 in public outdoor places. Tier 3: as tier 2 plus most hospitality businesses closed. Tier 4: as tier 3 but single outdoor contact. Third national lockdown: closed schools with the exception of children of key workers. b, Local lineage-specific Rt values for Alpha and the average Rt value (growth per 5.1 d) of all of the other lineages in the same periods. c, Rt values from n = 315 LTLAs shown in b. The box centre horizontal line indicates the median, box limits show the quartiles, the whiskers extend to 1.5× the interquartile range. d, Total and lineage-specific incidence (top) and Rt values (bottom) for six selected LTLAs during the period of restrictions. e, Crude lineage-specific fold changes (odds ratios) in Alpha and other lineages across the second (orange) and third national lockdown (red).
The second national lockdown from 5 November to 1 December 2020 successfully reduced the total number of cases, but this masked a lineage-specific increase (Rt > 1; defined as growth per 5.1 d) in Alpha and a simultaneous decrease in other hitherto dominant lineages (Rt < 1) in 78% (246/315) of LTLAs23 (Fig. 3b, c). This pattern of Alpha-specific growth during lockdown is supported by a model-agnostic analysis of raw case numbers and proportions of Alpha genomes (Fig. 3e).
Three levels of regionally tiered restrictions were introduced in December 2020 (ref. 24) (Fig. 3a). The areas under different tiers of restrictions visibly and quantitatively coincide with the resulting local Rt values, with greater Rt values in areas with lower restrictions (Fig. 3a–c). The reopening caused a surge of cases across all tiers with Rt > 1, which is also evident in selected time series (Fig. 3d). As Alpha cases surged, more areas were placed under tier 3 restrictions, and stricter tier 4 restrictions were introduced. Nevertheless, Alpha continued to grow (Rt > 1) in most areas, presumably driven by increased social interaction over Christmas (Fig. 3c).
After the peak of 72,088 daily cases on 29 December 2020 (Fig. 1a), a third national lockdown was announced on 4 January 2021 (Fig. 3a). The lockdown and increasing immunity derived from infection and increasing vaccination25 led to a sustained contraction of the epidemic to approximately 5,500 daily cases by 8 March, when restrictions began to be lifted by reopening schools (further steps of easing occurred on 12 April and 17 May). In contrast to the second national lockdown, 93% (296/315) of LTLAs exhibited a contraction in both Alpha and other lineages (Fig. 3e).
Elimination of lineages in early 2021
The lineage-specific rates of decline during the third national lockdown and throughout March 2021 resulted in large differences in lineage-specific incidence. Cases of Alpha contracted nationally from a peak of around 50,000 daily new cases to approximately 2,750 on 1 April 2021 (Fig. 4a). At the same time, B.1.177—the most prevalent lineage in November 2020—fell to less than an estimated 10 cases per day. Moreover, the incidence of most other lineages present in autumn 2020 was well below 1 after April 2021, implying that the majority of them have been eliminated. The number of observed distinct PANGO lineages declined from a peak of 137 to only 22 in the first week of April 2021 (Fig. 4b). Although this may be attributed in part to how PANGO lineages were defined, we note that the period of contraction did not replenish the genetic diversity lost due to the selective sweep by Alpha (Extended Data Fig. 6).
Fig. 4: Elimination of SARS-CoV-2 lineages during spring 2021.
a, Modelled lineage-specific incidence in England. The colours resemble major lineages as indicated and shades thereof indicate the respective sublineages. b, The observed number of PANGO lineages per week.
Refractory variants with E484K mutations
Parallel to the elimination of many formerly dominant SARS-CoV-2 lineages, a number of new variants were imported or emerged (Fig. 4a). These include the VOCs B.1.351/Beta and P.1/Gamma, which carry the spike variant N501Y that is also found in B.1.1.7/Alpha and a similar pair of mutations (K417N/T and E484K) that were each shown to reduce the binding affinity of antibodies from vaccine-derived or convalescent sera20,26,27,28,29. The ability to escape from previous immunity is consistent with the epidemiology of Beta in South Africa8 and especially the surge of Gamma in Manaus15. The variants B.1.525/Eta, B.1.526/Iota, B.1.1.318 and P.2/Zeta also harbour E484K spike mutations as per their lineage definition, and sublineages of Alpha and A.23.1 that acquired E484K were found in England (Fig. 5a, b).
Fig. 5: Dynamics of E484K variants and Delta between January and June 2021.
a, The observed relative frequency of other lineages (light grey), Alpha/B.1.1.7 (dark grey), E484K variants (orange) and Delta/B.1.617.2 (brown). b, The observed and modelled relative frequency of variants in England. c, The total and relative lineage-specific incidence in four selected LTLAs. For b and c, the shaded areas indicate the 95% CIs. d, Estimated UK clade numbers (numbers in square brackets represent minimum and maximum numbers) and sizes. e, Crude growth rates (odds ratios) of Delta and Alpha between April and June 2021, as in Fig. 3e. f, Lineage-specific Rt values of n = 315 LTLAs in the same period, defined as in Fig. 3c. g, Changes in the average transmissibility across 315 LTLAs during the study period.
The proportion of these E484K-containing variants was consistently 0.3–0.4% from January to early April 2021. A transient rise, especially of the Beta and Gamma variants, was observed in May 2021 (Fig. 5a, b). Yet, the dynamics were largely stochastic and characterized by a series of individual and localized outbreaks, possibly curtailed by local surge testing efforts against Beta and Gamma variants (Fig. 5c). Consistent with the transient nature of these outbreaks, the estimated growth rates of these variants were typically lower than Alpha (Fig. 2a).
Sustained imports from international travel were a critical driving mechanism behind the observed number of non-Alpha cases. A phylogeographical analysis establishing the most parsimonious sets of monophyletic and exclusively domestic clades, which can be interpreted as individual introductions, confirmed that A.23.1 with E484K (1 clade) probably has a domestic origin as no genomes of the same clade were observed internationally (Methods, Fig. 5d and Extended Data Fig. 7). The estimated number of introductions was lowest for B.1.1.318 (3 introductions, range = 1–6), and highest for Beta (49 introductions, range = 45–58) and Eta (30 introductions, range = 18–34). Although our data exclude genomes sampled directly from travellers, these repeated introductions show that the true rate of transmission is lower than the observed increase in the number of surveillance genomes.
The rise of Delta from April to June 2021
The B.1.617.1/Kappa and B.1.617.2/Delta lineages, which were first detected in India in 2020, first appeared in English surveillance samples in March 2021. In contrast to other VOCs, Delta/Kappa do not contain N501Y or E484K mutations, but their L452R mutation may reduce antibody recognition27 and P681R enhances furin cleavage30, similar to the P681H mutation of Alpha. The frequency of Delta, which harbours further spike mutations of unknown function, increased rapidly and reached levels of 98% (12,474/12,689) on 26 June 2021 (Fig. 5a, b). Although initially constrained to a small number of large local clusters, such as in Bolton, in May 2021 (Fig. 5c), Delta was detected in all LTLAs by 26 June 2021 (Fig. 1c). The sweep of Delta occurred at a rate of around 59% (growth per 5.1 d, CI = 53–66) higher than Alpha with minor regional variation (Fig. 2a, Extended Data Fig. 4e and Supplementary Table 4).
The rapid rise of Delta contrasts with Kappa, which grew more slowly despite being introduced at a similar time and into a similar demographic background (Figs. 2a and 5b). This is also evident in the phylogeographical analysis (based on data as of 1 May 2021). The 224 genomes of Delta derive from larger clades (23 introductions, range = 6–40; around 10 genomes for every introduction) compared with the 80 genomes of Kappa (17 introductions, range = 15–31; around 3–4 genomes per introduction) and also other variants (Fig. 5d and Extended Data Fig. 8). The AY.1 lineage, derived from Delta and containing an additional K417N mutation, appeared only transiently (Fig. 5b).
The sustained domestic growth of Delta and its international spread31 relative to the Alpha lineage are the first evidence of a biological growth advantage. The causes appear to be a combination of increased transmissibility and immune evasion. Evidence for higher transmissibility includes the fast growth in younger unvaccinated age groups, reports of elevated secondary attack rates32 and a higher viral load33. Furthermore, vaccine efficacy against infection by Delta is diminished, depending on the type of vaccine34,35, and reinfection is more frequent36, both supported by experimental research demonstrating the reduced antibody neutralization of Delta by vaccine-derived and convalescent sera37,38.
The higher growth rate of Delta—combined with gradual reopening and proceeding vaccination—repeated the dichotomous pattern of lineage-specific decline and growth, although now with declining Alpha (Rt < 1) and growing Delta (Rt > 1; Fig. 5e, f). Overall, we estimate that the spread of more transmissible variants between August 2020 and early summer 2021 increased the average growth rate of circulating SARS-CoV-2 in England by a factor of 2.39 (95% CI = 2.25–2.42; Fig. 5g). Thus, previously effective interventions may prove to be insufficient to contain newly emerging and more transmissible variants.
Our dense genomic surveillance analysis identified lineages that consistently grew faster than others in each local authority and, therefore, at the same time, under the same restrictions and in a comparable population. This pinpointed a series of variants with elevated transmissibility, in broad agreement with other reports10,11,13,15,31. However, a number of limitations exist. The growth rates of rare new variants are stochastic due to introductions and superspreading. Local outbreaks of the Beta and Gamma variants triggered asymptomatic surge testing, which may have reduced their spread. Furthermore, transmission depends both on the viral variant and the immunity of the host population, which changed from less than 20% to over 90% in the study period39. This will influence the growth rates of variants with immune evasion capabilities over time. The effect of immunity is currently not modelled, but may become more important in the future as SARS-CoV-2 becomes endemic. Further limitations are discussed in the Limitations section of the Methods.
The third and fourth waves in England were each caused by more transmissible variants, which outgrew restrictions that were sufficient to suppress previous variants. During the second national lockdown, Alpha grew despite falling numbers for other lineages and, similarly, Delta took hold in April and May when cases of Alpha were declining. The fact that such growth was initially masked by the falling cases of dominant lineages highlights the need for dense genomic surveillance and rapid analysis to devise optimal and timely control strategies. Such surveillance should ideally be global as, even though Delta was associated with a large wave of cases in India, its transmissibility remained unclear at the time due to a lack of systematic genomic surveillance data.
The 2.4-fold increase in growth rate during the study period as a result of new variants is also likely to have consequences for the future course of the pandemic. If this increase in growth rate were explained solely by higher transmissibility, it would raise the basic reproduction number R0 from a value of around 2.5–3 in spring 2020 (ref. 40) to the range of 6–7 for Delta. This is likely to spur new waves of the epidemic in countries that have to date been able to control the epidemic despite low vaccination rates, and it may exacerbate the situation elsewhere. Although the exact herd-immunity threshold depends on contact patterns and the distribution of immunity across age groups41,42, it is worth considering that Delta may increase the threshold to values around 0.85. Given current estimates of vaccine efficacy34,35,43, this would require nearly 100% vaccination coverage. Even though more than 90% of adults had antibodies against SARS-CoV-2 (ref. 39) and close to 70% had received two doses of vaccination, England saw rising Delta variant cases in the first weeks of July 2021. It can therefore be expected that other countries with high vaccination coverage are also likely to experience rising cases when restrictions are lifted.
SARS-CoV-2 is likely to continue its evolutionary adaptation process to humans44. To date, variants with considerably higher transmissibility have had strongest positive selection, and swept through England during the 10 months of this investigation. However, the possibility that an increasingly immune population may now select for variants with better immune escape highlights the need for continued systematic and, ideally, global genomic surveillance.
Pillar 2 SARS-CoV-2 testing data
Publicly available daily SARS-CoV-2 test result data from testing for the wider population outside the National Health Service (Pillar 2 newCasesBySpecimenDate) were downloaded from https://coronavirus.data.gov.uk/ spanning the date range from 1 September 2020 to 30 June 2021 for 315 English LTLAs (downloaded on 20 July 2021). These data are mostly positive PCR tests, with about 4% of results from lateral flow tests without PCR confirmation. In this dataset, the City of London is merged with Hackney, and the Isles of Scilly are merged with Cornwall due to their small number of inhabitants, thereby reducing the number of English LTLAs from 317 to 315. Population data for each LTLA were downloaded from the Office for National Statistics (ONS; https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationestimates/datasets/populationestimatesforukenglandandwalesscotlandandnorthernireland).
SARS-CoV-2 surveillance sequencing
In total, 281,178 tests (September 2020 to June 2021) were collected as part of random surveillance of positive tests of residents of England from four Pillar 2 Lighthouse laboratories. The samples were collected between 1 September 2020 and 26 June 2021. A random selection of samples was taken, after excluding those that were known to be taken during quarantine of recent travellers, and samples from targeted and local surge testing efforts. The available metadata made this selection imperfect, but these samples should be an approximately random selection of infections in England during this time period, and the large sample size makes our subsequent inferences robust.
We amplified RNA extracts from these tests with Ct < 30 using the ARTIC amplicon protocol (https://www.protocols.io/workspaces/coguk/publications). We sequenced 384-sample pools on Illumina NovaSeq, and produced consensus fasta sequences according to the ARTIC nextflow processing pipeline (https://github.com/connor-lab/ncov2019-artic-nf). Lineage assignments were made using Pangolin5, according to the latest lineage definitions at the time, except for B.1.617, which we reanalysed after the designation of sublineages B.1.617.1, B.1.617.2 and B.1.617.3. Lineage prevalence was computed from 281,178 genome sequences. The genomes were mapped to the same 315 English LTLAs as for the testing data described above. Mapping was performed from outer postcodes to LTLA, which can introduce some misassignment to neighbouring LTLAs. Furthermore, lineages in each LTLA were aggregated to counts per week for a total of 43 weeks, defined beginning on Sunday and ending on Saturday.
Finally, the complete set of 328 SARS-CoV-2 PANGO lineages was collapsed into l = 71 lineages using the underlying phylogenetic tree, such that each resulting lineage constituted at least 100 genomes, unless the lineage has been designated a VOC, variant under investigation (VUI) or variant in monitoring by Public Health England32.
Spatiotemporal genomic surveillance model
A hierarchical Bayesian model was used to fit local incidence data in a given day in each local authority and jointly estimate the relative historical prevalence and transmission parameters. In the following, t denotes time and is measured in days. We use the convention that bold lowercase symbols, such as b, indicate vectors.
Suppose that \({{\bf{x}}}^{{\prime} }(t)=({\bf{b}}+{r}_{0}(t))\cdot {\bf{x}}(t)\) describes the ordinary differential equation (ODE) for the viral dynamics for a set of l different lineages. Here r0(t) is a scalar time-dependent logarithmic growth rate that is thought to reflect lineage-independent transmission determinants, which changes over time in response to behaviour, non-pharmaceutical interventions (NPIs) and immunity. This reflects a scenario in which the lineages differ only in terms of the intensity of transmission, but not the intergeneration time distribution. The ODE is solved by \({\bf{x}}(t)={{\rm{e}}}^{{\bf{c}}+{\bf{b}}t+{\int }_{{t}_{0}}^{t}{r}_{0}(t){\rm{d}}t}={{\rm{e}}}^{{\bf{c}}+{\bf{b}}t}\nu (t)\). The term ν(t) contributes the same factor to each lineage and therefore drops from the relative proportions of lineages \({\bf{p}}(t)=\frac{{\bf{x}}(t)}{\sum {\bf{x}}(t)}\propto {{\rm{e}}}^{{\bf{c}}+{\bf{b}}t}\).
In the given model, the lineage prevalence p(t) follows a multinomial logistic-linear trajectory. Moreover, the total incidence factorizes into \({\boldsymbol{\mu }}(t)=\nu (t)\sum {{\rm{e}}}^{{\bf{c}}+{\bf{b}}t}\), which provides a basis to separately estimate the total incidence µ(t) from Pillar 2 test data and lineage-specific prevalence p(t) from genomic surveillance data (which are taken from a varying proportion of positive tests). By using the equations above, one can subsequently calculate lineage-specific estimates by multiplying µ(t) with the respective genomic proportions p(t).
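The factorization can be illustrated numerically: the lineage proportions are a softmax of c + bt, independent of ν(t), and lineage-specific incidence is the total incidence times those proportions. A minimal sketch with made-up parameters:

```python
import numpy as np

# Sketch of the factorization x(t) = exp(c + b t) * nu(t):
# proportions p(t) are a softmax of c + b t and do not depend on nu(t).
c = np.array([0.0, -2.0, -4.0])   # hypothetical per-lineage offsets
b = np.array([0.0, 0.05, 0.12])   # hypothetical growth advantages per day

def proportions(t):
    logits = c + b * t
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

mu_total = 1000.0                      # total incidence from the Pillar 2 fit
lineage_incidence = mu_total * proportions(30.0)
```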
In the following text, we describe a flexible semi-parametric model of the incidence. Let µ(t) be the expected daily number of positive Pillar 2 tests and s the population size in each of 315 LTLAs. Denote \({\boldsymbol{\lambda }}(t)=\,\log \,{\boldsymbol{\mu }}(t)-\,\log (s)\) the logarithmic daily incidence per capita at time t in each of the 315 LTLAs.
Suppose f(t) is the daily number of new infections caused by the number of people infected at time t. As new cases are noticed and tested only after a delay u with distribution g, the observed number of cases f *(t) will be given by the convolution
$${f}^{\ast }(t)={\int }_{0}^{{\rm{\infty }}}g(u)f(t-u){\rm{d}}u=(g\ast f)(t).$$
The time from infection to test is given by the incubation time plus the largely unknown distribution of the time from symptoms to test, which, in England, was required to take place within 5 d of symptom onset. To account for these factors, the log-normal incubation time distribution from ref. 46 is scaled by the equivalent of changing the mean by 2 d. The convolution shifts cases approximately 6 d into the future and also spreads them out according to the width of g (Extended Data Fig. 2a).
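A discrete sketch of this convolution, with an illustrative log-normal delay kernel standing in for the infection-to-test distribution (the parameters are round numbers, not the fitted values from ref. 46):

```python
import numpy as np

# Discrete sketch of f*(t) = (g * f)(t): infections shifted and smeared
# by an infection-to-test delay distribution g.
t = np.arange(60)
f = np.exp(0.05 * t)                   # toy exponentially growing infections

# Illustrative log-normal-shaped delay kernel with mode 6 d, on days 0..20
delays = np.arange(21)
g = np.exp(-(np.log(np.maximum(delays, 0.5)) - np.log(6.0))**2 / (2 * 0.4**2))
g /= g.sum()                           # normalize to a probability mass function

f_star = np.convolve(f, g)[:len(t)]    # observed (tested) cases
peak_shift = np.argmax(g)              # mode of the delay, 6 d here
```

During exponential growth the delayed curve sits below the infection curve at any fixed time, since observed cases reflect infections from roughly a week earlier.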
To parametrize the short- and longer-term changes of the logarithmic incidence \({\boldsymbol{\lambda }}(t)\), we use a combination of h weekly and k − h monthly cubic basis splines \({\bf{f}}(t)=({f}_{1}(t),\ldots ,{f}_{k}(t)).\) The knots of the h weekly splines uniformly tile the observation period except for the last 6 weeks.
Each spline basis function is convolved with the time to test distribution g, \({{\bf{f}}}^{\ast }(t)=({f}_{1}^{\ast }(t),\ldots ,{f}_{k}^{\ast }(t))\) as outlined above and used to fit the logarithmic incidence. The derivatives of the original basis f′(t) are used to calculate the underlying growth rates and Rt values, as shown further below. The convolved spline basis f*(t) is used to fit the per capita incidence in each LTLA as (Extended Data Fig. 2b):
$${\boldsymbol{\lambda }}(t)={\bf{B}}\times {{\bf{f}}}^{\ast }(t).$$
This implies that fitting the incidence function for each of the m local authorities is achieved by a suitable choice of coefficients \({\bf{B}}\in {{\mathbb{R}}}^{m\times k}\), that is one coefficient for each spline function for each of the LTLAs. The parameters B have a univariate normal prior distribution each, which reads for LTLA i and spline j:
$${{\bf{B}}}_{i,j}\sim N(0,{\sigma }_{j}).$$
The s.d. of the prior regularizes the amplitude of the splines and is chosen as \({\sigma }_{j}=0.2\) for weekly splines and \({\sigma }_{j}=1\) for monthly splines. This choice was found to reduce the overall variance resulting from the high number of weekly splines, meant to capture rapid changes in growth rates, but which can lead to instabilities particularly at the end of the time series, when not all effects of changes in growth rates are observed yet. The less regularized monthly splines reflect trends on the scale of several weeks and are therefore subject to less noise.
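A regularized cubic B-spline basis of this kind can be sketched with scipy; the 280-day window, weekly knot spacing and sampled coefficients below are illustrative only, not the paper's exact knot tiling:

```python
import numpy as np
from scipy.interpolate import BSpline

# Sketch: cubic B-spline basis over a 280-day window with weekly knots,
# with a normal prior s.d. regularizing each coefficient.
days = np.arange(280.0)
k = 3                                       # cubic
inner = np.arange(0.0, 281.0, 7.0)          # weekly knots (illustrative)
knots = np.r_[[inner[0]] * k, inner, [inner[-1]] * k]  # clamped ends
n_basis = len(knots) - k - 1

basis = np.column_stack([
    BSpline(knots, np.eye(n_basis)[j], k)(days) for j in range(n_basis)
])

sigma = 0.2                                 # prior s.d. for weekly splines
rng = np.random.default_rng(1)
B = rng.normal(0.0, sigma, n_basis)         # one LTLA's coefficients (sampled)
log_incidence = basis @ B                   # lambda(t) = B x f(t)
```

The clamped basis forms a partition of unity over the observation window, so the prior s.d. directly controls the amplitude of fluctuations in the fitted log-incidence.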
Finally, we introduce a term accounting for periodic differences in weekly testing patterns (typically around 30% fewer specimens are taken on weekends; Fig. 1a):
$$\mathop{{\boldsymbol{\mu }}}\limits^{ \sim }={\boldsymbol{\mu }}(t)\cdot \delta (t),$$
where the scalar \(\delta (t)=\delta (t-i\times 7)\,{\rm{\forall }}i\in {\mathbb{N}}\) and prior distribution \(\delta (t)\sim {\rm{L}}{\rm{o}}{\rm{g}}{\rm{N}}{\rm{o}}{\rm{r}}{\rm{m}}{\rm{a}}{\rm{l}}(0,1)\) for \(t=1,\ldots ,6\) and \(\delta (0)=1\).
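A sketch of the periodic term, with hypothetical weekday multipliers (in the model, δ(t) is instead learned under a log-normal prior):

```python
import numpy as np

# Illustrative weekday multipliers delta(t) = delta(t mod 7);
# weekend days (~30% fewer specimens) get values below 1.
delta = np.array([1.0, 1.05, 1.05, 1.05, 1.05, 0.7, 0.7])  # hypothetical
t = np.arange(28)
mu = 500.0 * np.exp(0.01 * t)          # smooth expected incidence
mu_tilde = mu * delta[t % 7]           # observed-scale expectation
```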
The total incidence was fitted to the observed number of positive daily tests X by a negative binomial with a dispersion \(\omega =10\). The overdispersion buffers against non-Poissonian uncorrelated fluctuations in the number of daily tests.
$${\bf{X}}(t)\sim {\rm{N}}{\rm{B}}(\mathop{{\boldsymbol{\mu }}}\limits^{ \sim }(t),\omega ).$$
The equation above assumes that all elements of X(t) are independent, conditional on \(\tilde{{\boldsymbol{\mu }}}(t)\).
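The (mean, dispersion) parameterization maps to scipy's (n, p) negative binomial as n = ω and p = ω/(ω + µ), which reproduces the variance µ + µ²/ω; a small sketch (the actual model is fitted in numpyro, not scipy):

```python
from scipy import stats

# Mean/dispersion -> scipy (n, p) for the negative binomial:
# X ~ NB(mu, omega) with mean mu and Var = mu + mu^2 / omega.
def nb_mean_dispersion(mu, omega=10.0):
    n = omega
    p = omega / (omega + mu)
    return stats.nbinom(n, p)

d = nb_mean_dispersion(200.0)
mean, var = d.mean(), d.var()
```

With ω = 10 and µ = 200, the variance is 200 + 200²/10 = 4,200, far above the Poisson value of 200, buffering against uncorrelated fluctuations in daily test counts.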
Growth rates and Rt values
A convenient consequence of the spline basis of \(\log ({\boldsymbol{\mu }})={\boldsymbol{\lambda }}\), is that the delay-adjusted daily logarithmic growth rate r(t) = λ′(t) of the local epidemic simplifies to:
$${\bf{r}}(t)={\bf{B}}\times {{\bf{f}}}^{{\prime} }(t),$$
where \({{\bf{f}}}_{j}^{{\prime} }(t)\) represents the first derivative of the jth cubic spline basis function.
To express the daily growth rate as an approximate reproductive number Rt, one needs to consider the distribution of the intergeneration time, which is assumed to be gamma distributed with mean 6.3 d (α = 2.29, β = 0.36)46. The Rt value can be expressed as a Laplace transform of the intergeneration time distribution47. Effectively, this shortens the relative time period because the exponential dynamics put disproportionally more weight on stochastically early transmissions over late ones. For reasons of simplicity and being mindful also of the uncertainties of the intergeneration time distribution, we approximate Rt values by multiplying the logarithmic growth rates with a value of \({\bar{\tau }}_{{\rm{e}}}\) = 5.1 d, which was found to be a reasonable approximation to the convolution required to calculate Rt values (denoted here by the lower case symbol \({\boldsymbol{\rho }}(t)\) in line with our convention for vector-variate symbols and to avoid confusion with the epidemiological growth rate rt),
$$\log ({\boldsymbol{\rho }}(t))\approx \frac{{\rm{d}}\,\log ({\boldsymbol{\mu }}(t))}{{\rm{d}}t}{\bar{\tau }}_{{\rm{e}}}={\bf{r}}(t){\bar{\tau }}_{{\rm{e}}}$$
Thus, the overall growth rate scaled to an effective intergeneration time of 5.1 d can be readily derived from the derivatives of the spline basis and the corresponding coefficients. The values derived from this approach are in very close agreement with those of the method of ref. 48, but shifted according to the typical delay from infection to test (Extended Data Fig. 2b).
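The approximation can be compared against the exact Laplace-transform result for a gamma-distributed intergeneration time, R = (1 + r/β)^α, using the stated parameters; a small sketch:

```python
import math

# Exact Rt for a gamma intergeneration time (alpha, beta) versus the
# linear approximation exp(r * tau_e) with tau_e = 5.1 d.
alpha, beta, tau_e = 2.29, 0.36, 5.1

def rt_exact(r):
    # Lotka-Euler / Laplace-transform result for a gamma interval
    return (1.0 + r / beta) ** alpha

def rt_approx(r):
    return math.exp(r * tau_e)

for r in (-0.05, 0.0, 0.05, 0.10):
    print(r, rt_exact(r), rt_approx(r))
```

Both expressions agree at r = 0 and stay close for the moderate daily growth rates seen in practice, which is what makes the simpler exp(r·τ̄e) form a workable approximation.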
Genomic prevalence
The dynamics of the relative frequency P(t) of each lineage were modelled using a logistic-linear model in each LTLA, as described above. The logistic prevalence of each lineage in each LTLA is defined as \({\bf{L}}(t)={\rm{l}}{\rm{o}}{\rm{g}}{\rm{i}}{\rm{t}}({\bf{P}}(t))\). This is modelled using the piecewise linear expression
$${\bf{L}}(t)={\bf{C}}+{\bf{b}}\cdot {{\bf{t}}}_{+},$$
where b may be interpreted as a lineage-specific growth advantage and C as an offset term of dimension (LTLA × lineages). Time \({{\bf{t}}}_{+}\) is measured since introduction t0 and is defined as
$${{\bf{t}}}_{+}=t-{{\bf{t}}}_{0}\,{\rm{i}}{\rm{f}}\,t > {{\bf{t}}}_{0}\,{\rm{e}}{\rm{l}}{\rm{s}}{\rm{e}}-{\rm{\infty }}$$
and accounts for the fact that lineages can be entirely absent prior to a stochastically distributed time period preceding their first observation. This is because, in the absence of such a term, the absence of a lineage prior to the point of observation can only be explained by a higher growth rate compared with the preceding lineages, which may not necessarily be the case. As the exact time of introduction is generally unknown, a stochastic two-week period of \({{\bf{t}}}_{0}\sim {\rm{Unif}}(-14,0)+{{\bf{t}}}_{0}^{{\rm{obs}}}\) prior to the first observation \({{\bf{t}}}_{0}^{{\rm{obs}}}\) was chosen.
As the inverse logit transformation projects onto the l − 1 dimensional simplex \({S}_{l-1}\) and therefore loses one degree of freedom, B.1.177 was set as a baseline with
$${{\bf{L}}}_{\cdot ,0}(t)=0.$$
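A numerical sketch of the inverse-logit construction, with lineage 0 as the zero baseline and t+ set to −∞ before introduction (all parameter values are made up):

```python
import numpy as np

# Softmax with lineage 0 fixed as baseline (L_0 = 0), and t_+ = -inf
# before a lineage's introduction so its share is exactly zero.
C = np.array([0.0, -6.0, -9.0])        # offsets (baseline fixed at 0)
b = np.array([0.0, 0.08, 0.15])        # growth advantages per day
t0 = np.array([0.0, 0.0, 60.0])        # introduction times (hypothetical);
                                       # the baseline's t0 must stay finite
                                       # because 0 * (-inf) is undefined

def prevalence(t):
    t_plus = np.where(t >= t0, t - t0, -np.inf)
    L = C + b * t_plus                 # -inf before introduction
    e = np.exp(L - L.max())            # stable softmax; exp(-inf) = 0
    return e / e.sum()
```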
The offset parameters C are modelled across LTLAs as independently distributed multivariate normal random variables with a lineage-specific mean c and covariance \(\Sigma =10\cdot {I}_{l-1}\), where \({I}_{l-1}\) denotes an \((l-1)\times (l-1)\) identity matrix. The lineage-specific growth rates b and average offsets c are modelled using IID normal prior distributions
$${\bf{b}}\sim N(0,0.2)$$
$${\bf{c}}\sim N(-10,5)$$
The time-dependent relative prevalence P(t) of SARS-CoV-2 lineages was fitted to the number of weekly genomes Y(t) in each LTLA by a Dirichlet-multinomial distribution with expectation \({\mathbb{E}}[{\bf{Y}}(t)]\approx {\bf{P}}(t)\cdot {\bf{G}}(t)\) where G(t) is the total number of genomes sequenced from each LTLA in each week. For LTLA i, this is defined as:
$${{\bf{Y}}}_{i,\cdot }(t)\sim {\rm{D}}{\rm{i}}{\rm{r}}{\rm{M}}{\rm{u}}{\rm{l}}{\rm{t}}({\alpha }_{0}+{{\boldsymbol{\alpha }}}_{1}{{\bf{P}}}_{i,\cdot }(t),{{\bf{G}}}_{i}(t)).$$
The scalar parameter \({{\boldsymbol{\alpha }}}_{0}=0.01\) can be interpreted as a weak prior with expectation 1/n, making the model less sensitive to the introduction of single new lineages, which can otherwise exert a very strong effect. Furthermore, the array \({{\boldsymbol{\alpha }}}_{1}=\frac{{\rm{cases}}}{2}\) increases the variance to account for the fact that, especially at high sequencing coverage (genomes ≈ cases), cases and therefore genomes are likely to be correlated and overdispersed as they may derive from a single transmission event. Other choices such as \({{\boldsymbol{\alpha }}}_{1}=1,000\), which make the model converge to a standard multinomial, leave the conclusions qualitatively unchanged. This model aspect is illustrated in Extended Data Fig. 2c.
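Sampling from this Dirichlet-multinomial can be sketched by compounding a Dirichlet draw with a multinomial (numpy sketch with illustrative proportions and counts, using α0 = 0.01 and α1 = cases/2 as in the text):

```python
import numpy as np

# Dirichlet-multinomial: genomes Y given proportions P and total G,
# with concentration alpha0 + alpha1 * P controlling overdispersion.
rng = np.random.default_rng(42)
P = np.array([0.7, 0.2, 0.1])          # modelled lineage proportions
G = 100                                # genomes sequenced that week
cases = 400
alpha = 0.01 + (cases / 2) * P

def sample_dirmult(alpha, G, rng):
    q = rng.dirichlet(alpha)           # latent, overdispersed proportions
    return rng.multinomial(G, q)

Y = sample_dirmult(alpha, G, rng)
```

Smaller concentrations (fewer cases) inflate the variance of the latent Dirichlet draw, capturing the correlation among genomes that derive from the same transmission events.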
Lineage-specific incidence and growth rates
From the two definitions above it follows that the lineage-specific incidence is given by multiplying the total incidence in each LTLA µ(t) with the corresponding lineage frequency estimate P(t) for lineage j at each time point:
\({{\bf{M}}}_{\cdot ,j}(t)={\boldsymbol{\mu }}(t)\cdot {{\bf{P}}}_{\cdot ,j}(t)\) for \(j=0,\ldots ,l-1\)
Further corresponding lineage-specific Rt values R(t) in each LTLA can be calculated from the lineage-agnostic average Rt value ρ(t) and the lineage proportions P(t) as
$$\log {\bf{R}}(t)=\,\log \,{\boldsymbol{\rho }}(t)+{\bar{\tau }}_{{\rm{e}}}({\bf{b}}-{\bf{P}}(t)\times {\bf{b}})$$
By adding the log-transformed growth rate fold changes b and subtracting the average log-transformed growth rate change \({\bf{P}}(t)\times {\bf{b}}\), it follows that \({{\bf{R}}}_{i,\cdot }(t)={{\bf{R}}}_{i,0}(t){{\rm{e}}}^{{\bar{\tau }}_{{\rm{e}}}{\bf{b}}}\), where \({{\bf{R}}}_{i,0}(t)\) is the Rt value of the reference lineage j = 0 (for which \({{\bf{b}}}_{0}=0\)) in LTLA i. Hence, all other lineage-specific Rt values are proportional to this baseline at any given point in time with factor \({{\rm{e}}}^{{\bar{\tau }}_{{\rm{e}}}{\bf{b}}}\).
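The stated proportionality can be verified numerically (the values for b, P and ρ below are illustrative):

```python
import numpy as np

# Check that lineage-specific Rt values are a constant multiple
# exp(tau_e * b_j) of the baseline lineage's Rt.
tau_e = 5.1
b = np.array([0.0, 0.05, 0.11])        # growth advantages (baseline first)
P = np.array([0.5, 0.3, 0.2])          # current lineage proportions
rho = 1.1                              # lineage-agnostic average Rt

log_R = np.log(rho) + tau_e * (b - P @ b)
R = np.exp(log_R)
ratios = R / R[0]                      # should equal exp(tau_e * b)
```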
The model was implemented in numpyro49,50 and fitted using stochastic variational inference51. Guide functions were multivariate normal distributions for each row (corresponding to an LTLA) of B, C to preserve the correlations across lineages and time as well as for (b, c) to also model correlations between growth rates and typical introduction.
Phylogeographic analyses
To infer VOC introduction events into the UK and corresponding clade sizes, we investigated VOC genome sequences from GISAID (https://www.gisaid.org/) available from any country. We downloaded multiple sequence alignments of genome sequences with the release dates 17 April 2021 (for the analysis of the lineages A.23.1, B.1.1.318, B.1.351 and B.1.525) and 5 May 2021 (for the analysis of the B.1.617 sublineages). We next extracted a subalignment from each lineage (according to the 1 April 2021 version of PANGOlin for the 17 April 2021 alignment and the 23 April 2021 version of PANGOlin for the 5 May 2021 alignment) and, for each subalignment, we inferred a phylogeny through maximum likelihood using FastTree2 (v.2.1.11)52 with the default options and the GTR substitution model53.
On each VOC/VUI phylogeny, we inferred the minimum and maximum number of introductions of the considered SARS-CoV-2 lineage into the UK compatible with a parsimonious migration history of the ancestors of the considered samples; we also measured clade sizes for one specific example parsimonious migration history. We counted only introduction events into the UK that resulted in at least one descendant from the set of UK samples that we considered in this work for our hierarchical Bayesian model; similarly, we measured clade sizes by the number of UK samples considered here included in such clades. Multiple occurrences of identical sequences were counted as separate cases, as this helped us to identify rapid SARS-CoV-2 spread.
When using parsimony, we considered only migration histories along a phylogenetic tree that are parsimonious in terms of the number of migration events from and to the UK (in practice, we collapse all of the non-UK locations into a single one). Furthermore, as SARS-CoV-2 phylogenies present substantial numbers of polytomies, that is, phylogenetic nodes where the tree topology cannot be reconstructed due to a lack of mutation events on certain branches, we developed a tailored dynamic programming approach to efficiently integrate over all possible splits of polytomies and over all possible parsimonious migration histories. The idea of this method is somewhat similar to typical Bayesian phylogeographic inference54 in that it enables us to at least in part integrate over phylogenetic uncertainty and uncertainty in migration history; however, it also represents a very simplified version of these analyses, more so than ref. 16, as it considers most of the phylogenetic tree as fixed, ignores sampling times and uses parsimony instead of a likelihood-based approach. Parsimony is expected to represent a good approximation in the context of SARS-CoV-2, due to the shortness (both in time and substitutions) of the phylogenetic branches considered55,56. The main advantage of our approach is that, owing to the dynamic programming implementation, it is more computationally efficient than Bayesian alternatives, as the most computationally demanding step is the inference of the maximum likelihood phylogenetic tree. This enables us to infer plausible ranges for numbers of introduction events for large datasets and to quickly update our analyses as new sequences become available. The other advantage of this approach is that it enables us to easily customize the analysis and to focus on inferred UK introductions that result in at least one UK surveillance sample, while still making use of non-surveillance UK samples to inform the inferred phylogenetic tree and migration history. 
Note that possible biases due to uneven sequencing rates across the world55 apply to our approach as well as other popular phylogeographic methods. Our approach works by traversing the maximum likelihood tree starting from the terminal nodes and ending at the root (postorder traversal). Here, we define a 'UK clade' as a maximal subtree of the total phylogeny for which all terminal nodes are from the UK, all internal nodes are inferred to be from the UK and at least one terminal node is a UK surveillance sample; the size of a UK clade is defined as the number of UK surveillance samples in it. At each node, using values already calculated for all child nodes (possibly more than two children in the case of a multifurcation), we calculate the following quantities: (1) the maximum and minimum number of possible descendant UK clades of the current node, over the space of possible parsimonious migration histories, and conditional on the current node being UK or non-UK; (2) the number of migration events compatible with a parsimonious migration history in the subtree below the current node, conditional on the current node being UK or non-UK; (3) the size so far of the UK clade the current node is part of, conditional on it being UK; and (4) a sample of UK clade sizes for the subtree below the node. To calculate these quantities, for each internal node, and conditional on each possible node state (UK or non-UK), we consider the possible scenarios of having 0 or 1 migration events between the internal node and its child nodes (migration histories with more than 1 migration event between the node and its children are never parsimonious in our analysis and can be ignored).
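The per-node, per-state bookkeeping described above can be illustrated with a minimal two-state Sankoff-style parsimony sketch. This is not the authors' implementation (it omits the integration over polytomy splits, the min/max introduction counts and the clade-size sampling); the example tree and tip labels are hypothetical.

```python
# Minimal sketch of two-state parsimony by dynamic programming over a fixed
# tree: for each node, conditional on its state (UK / non-UK), compute the
# minimum number of migration events in the subtree below it.
INF = float('inf')
OTHER = {'UK': 'nonUK', 'nonUK': 'UK'}

def sankoff_min_migrations(tree, tip_state):
    """tree: dict mapping node -> list of children ([] for tips).
    tip_state: dict mapping tip -> 'UK' or 'nonUK'.
    Returns (root, cost) where cost[node][state] is the minimum number of
    migration events below `node` given that `node` is in `state`."""
    cost = {}

    def post(node):  # postorder traversal, as in the text
        children = tree[node]
        if not children:
            # a tip is consistent only with its observed state
            cost[node] = {s: 0 if s == tip_state[node] else INF
                          for s in ('UK', 'nonUK')}
            return
        for c in children:
            post(c)
        # each child either keeps the parent's state (no event) or differs
        # from it (exactly one migration event on that branch)
        cost[node] = {s: sum(min(cost[c][s], 1 + cost[c][OTHER[s]])
                             for c in children)
                      for s in ('UK', 'nonUK')}

    root = next(n for n in tree if all(n not in ch for ch in tree.values()))
    post(root)
    return root, cost

# Hypothetical tree: three UK tips (two forming one clade) and one non-UK tip.
tree = {'root': ['uk1', 'mid', 'x1'], 'mid': ['uk2', 'uk3'],
        'uk1': [], 'uk2': [], 'uk3': [], 'x1': []}
tips = {'uk1': 'UK', 'uk2': 'UK', 'uk3': 'UK', 'x1': 'nonUK'}
root, cost = sankoff_min_migrations(tree, tips)
print(min(cost[root].values()))  # 1: a single migration event suffices
```

Multifurcations fit naturally into this scheme because the per-state cost is a sum over an arbitrary number of children, which is what makes the dynamic program over polytomy splits tractable.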
To confirm the results of our analyses based on parsimony, we also used the new Bayesian phylogenetic approach Thorney BEAST16 (https://beast.community/thorney_beast) for VOCs for which it was computationally feasible, that is, excluding B.1.351. For each VOC, we used in Thorney BEAST the same topology inferred with FastTree2 as for our parsimony analysis; we also used treetime57 v.0.8.2 to estimate a timed tree and branch divergences for use in Thorney BEAST. We used a two-state (UK and non-UK) migration model54 to infer introductions into the UK but again counted, from the posterior sample trees, only UK clades with at least one UK surveillance sample. We used a Skygrid58 tree coalescent prior with six time intervals. The comparison of parsimony and Bayesian estimates is shown in Extended Data Fig. 8d.
ONS infection survey analysis
Data from the cross-sectional infection survey were downloaded from https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveypilot/30april2021.
Comparison of ONS incidence estimates with hospitalization, case and death rates was conducted by estimating infection trajectories separately from observed cases, hospitalizations and deaths59,60, convolving them with estimated PCR detection curves61, and dividing the resulting PCR prevalence estimates by the estimated prevalence from the ONS Community Infection Survey at the midpoints of the two-week intervals over which prevalence was reported in the survey.
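The comparison step above amounts to a discrete convolution followed by a ratio. The sketch below is illustrative only: the infection trajectory and PCR detection curve are made-up placeholders, not the estimates from refs. 59–61.

```python
import numpy as np

def pcr_prevalence(daily_infections, p_detect):
    """Expected number of PCR-positive people per day, obtained by
    convolving daily new infections with the probability of still testing
    PCR-positive d days after infection."""
    return np.convolve(daily_infections, p_detect)[:len(daily_infections)]

# Toy inputs: constant incidence and a hypothetical 3-day detection curve.
infections = np.full(14, 100.0)
p_detect = np.array([0.0, 0.9, 0.7, 0.4])
model_prev = pcr_prevalence(infections, p_detect)
# At steady state, prevalence = 100 * (0.9 + 0.7 + 0.4) = 200

# Dividing by (hypothetical) survey prevalence at interval midpoints gives
# the calibration ratio used to compare the two sources.
survey_prev = np.full(14, 190.0)
ratio = model_prev / survey_prev
```

In the actual analysis, the infection trajectories are estimated separately from cases, hospitalizations and deaths, and the ratio is evaluated only at the midpoints of the two-week ONS reporting intervals.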
Maps were plotted using LTLA shapefiles (https://geoportal.statistics.gov.uk/datasets/69dc11c7386943b4ad8893c45648b1e1), sourced from the ONS, which is licensed under the Open Government Licence v.3.0.
A main limitation of the analysis is that the transmission model is deterministic, whereas the spread of variants is a stochastic process. Although the logistic growth assumption is a consistent estimator of the average transmission dynamics, individual outbreaks may deviate from these averages and therefore produce unreliable estimates.
Stochastic growth effects are accounted for only in terms of (uncorrelated) overdispersion and the offset at the time of the introduction. For these reasons, the estimated growth rates may not accurately reflect the viral transmissibility, especially at a low prevalence. It is therefore important to assess whether consistent growth patterns in multiple independent areas are observed. We note that the posterior distribution of the growth rates of rare variants tends to be biased to the baseline due to the centred prior.
In its current form, the model accounts for only a single introduction event per LTLA. Although this problem is in part alleviated by the high spatial resolution, which spreads introductions across 315 LTLAs, it is important to investigate whether sustained introductions inflate the observed growth rates, as in the case of the Delta variant or other VOCs and VUIs. This can be achieved by a more detailed phylogeographic assessment and through the assessment of monophyletic sublineages.
Furthermore, there is no explicit transmission modelled from one LTLA to another. As each introduction is therefore modelled separately, this makes the model conservative in ascertaining elevated transmission as single observed cases across different LTLAs can be explained by their introduction.
The inferred growth rates also cannot identify a particular mechanism of altered transmission. Biological mechanisms include a higher viral load, longer infectivity or greater susceptibility. Lineages could potentially differ by their intergeneration time, which would lead to nonlinear scaling. Here we did not find convincing evidence in incidence data for such effects, in contrast to previous reports23. However, contact-tracing data indicate that the intergeneration time may be shortening for more transmissible lineages such as Delta33,62. Cases of the Beta and Gamma VOCs may have been more intensely contact traced and triggered asymptomatic surge testing in some postcode areas. This may have reduced the observed growth rates relative to other lineages.
Lineages, such as Beta, Gamma or Delta also differ in their ability to evade previous immunity. As immunity changes over time, this might lead to a differential growth advantage over time. It is therefore advisable to assess whether a growth advantage is constant over periods in which immunity changes considerably.
A further limitation underlies the nature of lineage definition and assignment. The PANGO lineage definition5 assigns lineages to geographical clusters, which have by definition expanded, and this can induce a certain survivor bias, often followed by winner's curse. Another issue results from the fact that very recent variants may not be classified as a lineage despite having grown, which can inflate the growth rate of ancestral lineages over sublineages.
As the total incidence is modelled on the basis of the total number of positive PCR tests, it may be influenced by testing capacity; the total number of tests approximately tripled between September 2020 and March 2021. This can potentially lead to a time trend in recorded cases and therefore baseline Rt values if access to testing changed, for example, by too few tests being available during periods of high incidence, or by changes to eligibility allowing people with fewer symptoms to test intermittently. Generally, the observed incidence was in good agreement with representative cross-sectional estimates from the ONS63,64, except for a period of peak incidence from late December 2020 to January 2021 (Extended Data Fig. 1d). Values after 8 March 2021 need to be interpreted with caution as Pillar 2 PCR testing was supplemented by lateral flow devices, which increased the number of daily tests to more than 1.5 million. Positive cases were usually confirmed by PCR and counted only once.
The modelled curves are smoothed over intervals of approximately 7 d using cubic splines, creating the possibility that later time points influence the period of investigation and cause a certain waviness of the Rt value pattern. An alternative parameterization using piecewise linear basis functions per week (that is, constant Rt values per week) leaves the overall conclusions and extracted parameters broadly unchanged.
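The alternative weekly parameterization mentioned above can be sketched as a simple design matrix. This is a hypothetical illustration, not the genomicsurveillance code: one indicator column per 7-day block yields a log Rt that is constant within each week, so later time points cannot leak into earlier weeks.

```python
import numpy as np

def weekly_design(n_days):
    """Piecewise-constant basis: one indicator column per 7-day block."""
    n_weeks = -(-n_days // 7)  # ceiling division
    X = np.zeros((n_days, n_weeks))
    X[np.arange(n_days), np.arange(n_days) // 7] = 1.0
    return X

X = weekly_design(21)
beta = np.array([0.10, -0.20, 0.05])  # hypothetical weekly log-Rt values
log_rt = X @ beta                      # step function; no smoothing across weeks
```

A cubic-spline parameterization has the same linear-model structure, but its basis columns overlap smoothly across neighbouring days, which is precisely what allows later observations to influence earlier fitted values and produces the waviness noted above.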
This study was performed as part of surveillance for COVID-19 under the auspices of Section 251 of the National Health Service Act 2006. It therefore did not require individual patient consent or ethical approval. The COG-UK study protocol was approved by the Public Health England Research Ethics Governance Group.
Further information on research design is available in the Nature Research Reporting Summary linked to this paper.
PCR test data are publicly available online (https://coronavirus.data.gov.uk/). A filtered, privacy-conserving version of the lineage–LTLA–week dataset is publicly available online (https://covid19.sanger.ac.uk/downloads) and enables reproduction of most of our results, although a small number of cells have been suppressed to avoid disclosure. Full SARS-CoV-2 genome data and geolocations can be obtained under controlled access from https://www.cogconsortium.uk/data/. Application for full data access requires a description of the planned analysis and can be initiated at [email protected]. The data and a version of the analysis with fewer lineages can be interactively explored at https://covid19.sanger.ac.uk. Source data are provided with this paper.
The genomic surveillance model is implemented in Python and available at GitHub (https://github.com/gerstung-lab/genomicsurveillance) and as a PyPI package (genomicsurveillance). Specific code for the analyses of this study can be found as individual Google colab notebooks in the same repository. These were run using Python v.3.7.1 (packages: matplotlib (v.3.4.1), numpy (v.1.20.2), pandas (v.1.2.3), scikit-learn (v.0.19.1), scipy (v.1.6.2), seaborn (v.0.11.1), jax (v.0.2.8), genomicsurveillance (v.0.4.0), numpyro (v.0.4.0)). The phylogeographic analyses were performed using Thorney BEAST (v.0.1.1) and https://github.com/NicolaDM/phylogeographySARS-CoV-2. Code for the ONS infection survey analysis is available at GitHub (https://github.com/jhellewell14/ons_severity_estimates).
Rambaut, A. Phylogenetic Analysis of nCoV-2019 Genomes (Virological, 2020); https://virological.org/t/phylodynamic-analysis-176-genomes-6-mar-2020/356
Nextstrain Team. Genomic Epidemiology of Novel Coronavirus—Global Subsampling (Nextstrain, 2020); https://nextstrain.org/ncov/global?l=clock
Hadfield, J. et al. Nextstrain: real-time tracking of pathogen evolution. Bioinformatics 34, 4121–4123 (2018).
Volz, E. et al. Evaluating the effects of SARS-CoV-2 spike mutation D614G on transmissibility and pathogenicity. Cell 184, 64–75 (2021).
Rambaut, A. et al. A dynamic nomenclature proposal for SARS-CoV-2 lineages to assist genomic epidemiology. Nat. Microbiol. 5, 1403–1407 (2020).
O'Toole, Á. et al. Global Report Investigating Novel Coronavirus Haplotypes https://cov-lineages.org/global_report.html (2021).
Hodcroft, E. B. et al. Spread of a SARS-CoV-2 variant through Europe in the summer of 2020. Nature 595, 707–712 (2021).
Tegally, H. et al. Sixteen novel lineages of SARS-CoV-2 in South Africa. Nat. Med. 27, 440–446 (2021).
Rambaut, A. et al. Preliminary Genomic Characterisation of an Emergent SARS-CoV-2 Lineage in the UK Defined by a Novel Set of Spike Mutations (Virological, 2020); https://virological.org/t/preliminary-genomic-characterisation-of-an-emergent-sars-cov-2-lineage-in-the-uk-defined-by-a-novel-set-of-spike-mutations/563
Volz, E. et al. Assessing transmissibility of SARS-CoV-2 lineage B.1.1.7 in England. Nature 593, 266–269 (2021).
Davies, N. G. et al. Estimated transmissibility and impact of SARS-CoV-2 lineage B.1.1.7 in England. Science 372, eabg3055 (2021).
O'Toole, Á. et al. Tracking the International Spread of SARS-CoV-2 Lineages B.1.1.7 and B.1.351/501Y-V2 (Virological, 2021); https://virological.org/t/tracking-the-international-spread-of-sars-cov-2-lineages-b-1-1-7-and-b-1-351-501y-v2/592
Washington, N. L. et al. Emergence and rapid transmission of SARS-CoV-2 B.1.1.7 in the United States. Cell 184, 2587–2594 (2021).
Faria, N. R. et al. Genomic Characterisation of an Emergent SARS-CoV-2 Lineage in Manaus: Preliminary Findings (Virological, 2021); https://www.icpcovid.com/sites/default/files/2021-01/Ep%20102-1%20Genomic%20characterisation%20of%20an%20emergent%20SARS-CoV-2%20lineage%20in%20Manaus%20Genomic%20Epidemiology%20-%20Virological.pdf
Faria, N. R. et al. Genomics and epidemiology of the P.1 SARS-CoV-2 lineage in Manaus, Brazil. Science 372, 815–821 (2021).
du Plessis, L. et al. Establishment and lineage dynamics of the SARS-CoV-2 epidemic in the UK. Science 371, 708–712 (2021).
Danish Covid-19 Genome Consortium. Genomic Overview of SARS-CoV-2 in Denmark (2021); https://www.covid19genomics.dk/statistics
Kraemer, M. U. G. et al. Spatiotemporal invasion dynamics of SARS-CoV-2 lineage B.1.1.7 emergence. Science 373, 889–895 (2021).
Park, S. W. et al. Roles of generation-interval distributions in shaping relative epidemic strength, speed, and control of new SARS-CoV-2 variants. Preprint at medRxiv https://doi.org/10.1101/2021.05.03.21256545 (2021).
Starr, T. N. et al. Deep mutational scanning of SARS-CoV-2 receptor binding domain reveals constraints on folding and ACE2 binding. Cell 182, 1295–1310 (2020).
Zahradník, J. et al. SARS-CoV-2 variant prediction and antiviral drug design are enabled by RBD in vitro evolution. Nat. Microbiol. 6, 1188–1198 (2021).
Brown, J. C. et al. Increased transmission of SARS-CoV-2 lineage B.1.1.7 (VOC 202012/01) is not accounted for by a replicative advantage in primary airway cells or antibody escape. Preprint at bioRxiv https://doi.org/10.1101/2021.02.24.432576 (2021).
Vöhringer, H. et al. Lineage-specific Growth of SARS-CoV-2 B.1.1.7 During the English National Lockdown (Virological, 2020); https://virological.org/t/lineage-specific-growth-of-sars-cov-2-b-1-1-7-during-the-english-national-lockdown/575/2
The Health Protection (Coronavirus, Restrictions) (All Tiers) (England) Regulations 2020. Wikipedia https://en.wikipedia.org/w/index.php?title=The_Health_Protection_(Coronavirus,_Restrictions)_(All_Tiers)_(England)_Regulations_2020&oldid=1014831173 (2021).
Steel, K. & Davies, B. Coronavirus (COVID-19) Infection Survey, Antibody and Vaccination Data for the UK (ONS, 2021); https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/articles/coronaviruscovid19infectionsurveyantibodydatafortheuk/28april2021
Greaney, A. J. et al. Comprehensive mapping of mutations in the SARS-CoV-2 receptor-binding domain that affect recognition by polyclonal human plasma antibodies. Cell Host Microbe 29, 463–476 (2021).
Greaney, A. J. et al. Complete mapping of mutations to the SARS-CoV-2 spike receptor-binding domain that escape antibody recognition. Cell Host Microbe 29, 44–57 (2021).
Zhou, D. et al. Evidence of escape of SARS-CoV-2 variant B.1.351 from natural and vaccine-induced sera. Cell 184, 2348–2361 (2021).
Planas, D. et al. Sensitivity of infectious SARS-CoV-2 B.1.1.7 and B.1.351 variants to neutralizing antibodies. Nat. Med. 27, 917–924 (2021).
Peacock, T. P. et al. The SARS-CoV-2 variants associated with infections in India, B.1.617, show enhanced spike cleavage by furin. Preprint at bioRxiv https://doi.org/10.1101/2021.05.28.446163 (2021).
Campbell, F. et al. Increased transmissibility and global spread of SARS-CoV-2 variants of concern as at June 2021. Euro Surveill. 26, 2100509 (2021).
Investigation of Novel SARS-CoV-2 Variants of Concern Technical briefing 10 (Public Health England, 2021); https://www.gov.uk/government/publications/investigation-of-novel-sars-cov-2-variant-variant-of-concern-20201201
Li, B. et al. Viral infection and transmission in a large well-traced outbreak caused by the Delta SARS-CoV-2 variant. Preprint at medRxiv https://doi.org/10.1101/2021.07.07.21260122 (2021).
Nasreen, S. et al. Effectiveness of COVID-19 vaccines against variants of concern in Ontario, Canada. Preprint at medRxiv https://doi.org/10.1101/2021.06.28.21259420 (2021).
Lopez Bernal, J. et al. Effectiveness of Covid-19 vaccines against the B.1.617.2 (Delta) variant. N. Engl. J. Med. 385, 585–594 (2021).
Investigation of Novel SARS-CoV-2 Variants of Concern Technical briefing 19 (Public Health England, 2021); https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1005517/Technical_Briefing_19.pdf
Ferreira, I. et al. SARS-CoV-2 B.1.617 emergence and sensitivity to vaccine-elicited antibodies. Preprint at bioRxiv https://doi.org/10.1101/2021.05.08.443253 (2021).
Wall, E. C. et al. Neutralising antibody activity against SARS-CoV-2 VOCs B.1.617.2 and B.1.351 by BNT162b2 vaccination. Lancet 397, 2331–2333 (2021).
Steel, K. & Haughton, P. Coronavirus (COVID-19) Infection Survey, Antibody and Vaccination Data, UK (ONS, 2021); https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveyantibodyandvaccinationdatafortheuk/21july2021
Anderson, R. et al. Reproduction Number (R) and Growth Rate (r) of the COVID-19 Epidemic in the UK: Methods of Estimation, Data Sources, Causes of Heterogeneity, and use as a Guide in Policy Formulation (The Royal Society, 2020).
Britton, T., Ball, F. & Trapman, P. A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science 369, 846–849 (2020).
Funk, S. et al. Combining serological and contact data to derive target immunity levels for achieving and maintaining measles elimination. BMC Med. 17, 180 (2019).
Hodgson, D., Flasche, S., Jit, M., Kucharski, A. J. & CMMID COVID-19 Working Group. The potential for vaccination-induced herd immunity against the SARS-CoV-2 B.1.1.7 variant. Euro Surveill. 26, 2100428 (2021).
van Dorp, L., Houldcroft, C. J., Richard, D. & Balloux, F. COVID-19, the first pandemic in the post-genomic era. Curr. Opin. Virol. 50, 40–48 (2021).
The Health Protection (Coronavirus, Restrictions) (England) (No. 4) Regulations 2020. Wikipedia https://en.wikipedia.org/w/index.php?title=The_Health_Protection_(Coronavirus,_Restrictions)_(England)_(No._4)_Regulations_2020&oldid=1014701607 (2021).
Bi, Q. et al. Epidemiology and transmission of COVID-19 in 391 cases and 1286 of their close contacts in Shenzhen, China: a retrospective cohort study. Lancet Infect. Dis. 20, 911–919 (2020).
Wallinga, J. & Lipsitch, M. How generation intervals shape the relationship between growth rates and reproductive numbers. Proc. Biol. Sci. 274, 599–604 (2007).
Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178, 1505–1512 (2013).
Bingham, E. et al. Pyro: deep universal probabilistic programming. Preprint at http://arxiv.org/abs/1810.09538 (2018).
Phan, D., Pradhan, N. & Jankowiak, M. Composable effects for flexible and accelerated probabilistic programming in NumPyro. Preprint at http://arxiv.org/abs/1912.11554 (2019).
Hoffman, M. D., Blei, D. M., Wang, C. & Paisley, J. Stochastic variational inference. J. Mach. Learn. Res. 14, 1303–1347 (2013).
Price, M. N., Dehal, P. S. & Arkin, A. P. FastTree 2—approximately maximum-likelihood trees for large alignments. PLoS ONE 5, e9490 (2010).
Tavaré, S. Some probabilistic and statistical problems in the analysis of DNA sequences. Lect. Math. Life Sci. 17, 57–86 (1986).
Lemey, P., Rambaut, A., Drummond, A. J. & Suchard, M. A. Bayesian phylogeography finds its roots. PLoS Comput. Biol. 5, e1000520 (2009).
De Maio, N., Wu, C.-H., O'Reilly, K. M. & Wilson, D. New routes to phylogeography: a bayesian structured coalescent approximation. PLoS Genet. 11, e1005421 (2015).
Turakhia, Y. et al. Ultrafast Sample placement on Existing tRees (UShER) enables real-time phylogenetics for the SARS-CoV-2 pandemic. Nat. Genet. 53, 809–816 (2021).
Sagulenko, P., Puller, V. & Neher, R. A. TreeTime: maximum-likelihood phylodynamic analysis. Virus Evol. 4, vex042 (2018).
Gill, M. S. et al. Improving Bayesian population dynamics inference: a coalescent-based model for multiple loci. Mol. Biol. Evol. 30, 713–724 (2013).
Sherratt, K. et al. Exploring surveillance data biases when estimating the reproduction number: with insights into subpopulation transmission of COVID-19 in England. Philos. Trans. R. Soc. B 376, https://doi.org/10.1098/RSTB.2020.0283 (2021).
Abbott, S. et al. Estimating the time-varying reproduction number of SARS-CoV-2 using national and subnational case counts. Wellcome Open Res. 5, 112 (2020).
Hellewell, J. et al. Estimating the effectiveness of routine asymptomatic PCR testing at different frequencies for the detection of SARS-CoV-2 infections. BMC Med. 19, 106 (2021).
Hart, W. S. et al. Inference of SARS-CoV-2 generation times using UK household data. Preprint at medRxiv https://doi.org/10.1101/2021.05.27.21257936 (2021).
Pouwels, K. B. et al. Community prevalence of SARS-CoV-2 in England from April to November, 2020: results from the ONS Coronavirus Infection Survey. Lancet Publ. Health 6, e30–e38 (2021).
Donnarumma, K. S. Coronavirus (COVID-19) Infection Survey, UK (ONS, 2021); https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/coronaviruscovid19infectionsurveypilot/23april2021
We thank E. Allara (Cambridge) and G. Whitton (Sanger) for providing outer postcodes to LTLA mappings; R. Beale for comments and J. McCrone for setting up Thorney Beast analysis; all of the contributors who submitted genome sequences to GISAID (acknowledgement tables for individual sequences are provided at GitHub; https://github.com/NicolaDM/phylogeographySARS-CoV-2); and our colleagues at EMBL-EBI, the Wellcome Sanger Institute and COG-UK for discussions and comments on this manuscript. COG-UK is supported by funding from the Medical Research Council (MRC), part of UK Research & Innovation (UKRI), the National Institute of Health Research (NIHR) and Genome Research Limited, operating as the Wellcome Sanger Institute. Additional sequence generation was funded by the Department of Health and Social Care. H.S.V., J.P.G. and M.G. are supported by a grant from the Department of Health and Social Care. A.W.J., E.B. and M.G. are beneficiaries from grant NNF17OC0027594 from the Novo Nordisk Foundation. E.V. is supported by Wellcome Trust grant 220885/Z/20/Z. T.S. is supported by grant 210918/Z/18/Z, and J.H. and S.F. by grant 210758/Z/18/Z from the Wellcome Trust. H.S.V., N.D.M., A.W.J., N.G., E.B. and M.G. are supported by EMBL.
Open access funding provided by Deutsches Krebsforschungszentrum (DKFZ).
European Molecular Biology Laboratory, European Bioinformatics Institute EMBL-EBI, Hinxton, UK
Harald S. Vöhringer, Nicola De Maio, Alexander W. Jung, Nick Goldman, Ewan Birney & Moritz Gerstung
Wellcome Sanger Institute, Hinxton, UK
Theo Sanderson, Matthew Sinnott, Thuy Nguyen, Richard Goater, Frank Schwach, Cristina V. Ariani, Sonia Gonçalves, David K. Jackson, Ian Johnston, Callum Saint, John Sillitoe, Maria Suciu, Irina Abnizova, Louise Aigrain, Alex Alderton, Mozam Ali, Laura Allen, Roberto Amato, Ralph Anderson, Cristina Ariani, Siobhan Austin-Guest, Sendu Bala, Jeffrey Barrett, Andrew Bassett, Kristina Battleday, James Beal, Mathew Beale, Charlotte Beaver, Sam Bellany, Tristram Bellerby, Katie Bellis, Duncan Berger, Matt Berriman, Emma Betteridge, Paul Bevan, Simon Binley, Jason Bishop, Kirsty Blackburn, James Bonfield, Nick Boughton, Sam Bowker, Timothy Brendler-Spaeth, Iraad Bronner, Tanya Brooklyn, Sarah Kay Buddenborg, Robert Bush, Catarina Caetano, Alex Cagan, Nicola Carter, Joanna Cartwright, Tiago Carvalho Monteiro, Liz Chapman, Tracey-Jane Chillingworth, Peter Clapham, Richard Clark, Adrian Clarke, Catriona Clarke, Daryl Cole, Elizabeth Cook, Maria Coppola, Linda Cornell, Clare Cornwell, Craig Corton, Abby Crackett, Alison Cranage, Harriet Craven, Sarah Craw, Mark Crawford, Tim Cutts, Monika Dabrowska, Matt Davies, Robert Davies, Joseph Dawson, Callum Day, Aiden Densem, Thomas Dibling, Cat Dockree, David Dodd, Sunil Dogga, Matthew Dorman, Gordon Dougan, Martin Dougherty, Alexander Dove, Lucy Drummond, Eleanor Drury, Monika Dudek, Jillian Durham, Laura Durrant, Elizabeth Easthope, Sabine Eckert, Pete Ellis, Ben Farr, Michael Fenton, Marcella Ferrero, Neil Flack, Howerd Fordham, Grace Forsythe, Luke Foulser, Matt Francis, Audrey Fraser, Adam Freeman, Anastasia Galvin, Maria Garcia-Casado, Alex Gedny, Sophia Girgis, James Glover, Sonia Goncalves, Scott Goodwin, Oliver Gould, Marina Gourtovaia, Andy Gray, Emma Gray, Coline Griffiths, Yong Gu, Florence Guerin, Will Hamilton, Hannah Hanks, Ewan Harrison, Alexandria Harrott, Edward Harry, Julia Harvison, Paul Heath, Anastasia Hernandez-Koutoucheva, Rhiannon Hobbs, Dave Holland, Sarah Holmes, Gary Hornett, Nicholas Hough, Liz Huckle, 
Lena Hughes-Hallet, Adam Hunter, Stephen Inglis, Sameena Iqbal, Adam Jackson, David Jackson, Keith James, Dorota Jamrozy, Carlos Jimenez Verdejo, Matthew Jones, Kalyan Kallepally, Leanne Kane, Keely Kay, Sally Kay, Jon Keatley, Alan Keith, Alison King, Lucy Kitchin, Matt Kleanthous, Martina Klimekova, Petra Korlevic, Ksenia Krasheninnkova, Greg Lane, Cordelia Langford, Adam Laverack, Katharine Law, Mara Lawniczak, Stefanie Lensing, Steven Leonard, Laura Letchford, Kevin Lewis, Amanah Lewis-Wade, Jennifer Liddle, Quan Lin, Sarah Lindsay, Sally Linsdell, Rich Livett, Stephanie Lo, Rhona Long, Jamie Lovell, Jon Lovell, Catherine Ludden, James Mack, Mark Maddison, Aleksei Makunin, Irfan Mamun, Jenny Mansfield, Neil Marriott, Matt Martin, Matthew Mayho, Shane McCarthy, Jo McClintock, Samantha McGuigan, Sandra McHugh, Liz McMinn, Carl Meadows, Emily Mobley, Robin Moll, Maria Morra, Leanne Morrow, Kathryn Murie, Sian Nash, Claire Nathwani, Plamena Naydenova, Alexandra Neaverson, Rachel Nelson, Ed Nerou, Jon Nicholson, Tabea Nimz, Guillaume G. 
Noell, Sarah O'Meara, Valeriu Ohan, Karen Oliver, Charles Olney, Doug Ormond, Agnes Oszlanczi, Steve Palmer, Yoke Fei Pang, Barbora Pardubska, Naomi Park, Aaron Parmar, Gaurang Patel, Minal Patel, Maggie Payne, Arabella Petersen, Deborah Plowman, Tom Preston, Liam Prestwood, Christoph Puethe, Michael Quail, Diana Rajan, Shavanthi Rajatileka, Richard Rance, Suzannah Rawlings, Nicholas Redshaw, Joe Reynolds, Mark Reynolds, Simon Rice, Matt Richardson, Connor Roberts, Katrina Robinson, Melanie Robinson, David Robinson, Hazel Rogers, Eduardo Martin Rojo, Daljit Roopra, Mark Rose, Luke Rudd, Nicholas Salmon, David Saul, Frank Schwach, Carol Scott, Phil Seekings, Lesley Shirley, Alison Simms, Matthew Sinnott, Shanthi Sivadasan, Bart Siwek, Dale Sizer, Kenneth Skeldon, Jason Skelton, Joanna Slater-Tunstill, Lisa Sloper, Nathalie Smerdon, Chris Smith, Christen Smith, James Smith, Katie Smith, Michelle Smith, Sean Smith, Tina Smith, Leighton Sneade, Carmen Diaz Soria, Emily Souster, Andrew Sparkes, Michael Spencer-Chapman, Janet Squares, Robert Stanley, Claire Steed, Tim Stickland, Ian Still, Michael R. Stratton, Michelle Strickland, Allen Swann, Agnieszka Swiatkowska, Neil Sycamore, Emma Swift, Edward Symons, Suzanne Szluha, Emma Taluy, Nunu Tao, Katy Taylor, Sam Taylor, Stacey Thompson, Mark Thompson, Mark Thomson, Nicholas Thomson, Scott Thurston, Gerry Tonkin-Hill, Dee Toombs, Benjamin Topping, Jaime Tovar-Corona, Daniel Ungureanu, James Uphill, Jana Urbanova, Philip Jansen Van Vuuren, Valerie Vancollie, Paul Voak, Danielle Walker, Matthew Walker, Matt Waller, Gary Ward, Charlie Weatherhogg, Niki Webb, Danni Weldon, Alan Wells, Eloise Wells, Luke Westwood, Theo Whipp, Thomas Whiteley, Georgia Whitton, Andrew Whitwham, Sara Widaa, Mia Williams, Mark Wilson, Sean Wright, Ewan M. Harrison, Sónia Gonçalves, David M. Aanensen, Dinesh Aggarwal, Anthony P. Underwood, Cordelia F. Langford, Dominic Kwiatkowski, Andrew R. Bassett, Michael H. 
Spencer Chapman, Theo Sanderson, Naomi R. Park, Robert M. Davies, Katherine L. Bellis, Matthew J. Dorman, Robin J. Moll, Jaime M. Tovar-Corona, Khalil Abudahab, Ben E. W. Taylor, Jon-Paul Keatley, Iraad F. Bronner, Ben W. Farr, Stefanie V. Lensing, Shane A. McCarthy, Michael A. Quail, Nicholas M. Redshaw, Scott A. J. Thurston, Jennifier Liddle, Dominic Kwiatkowski, Inigo Martincorena & Jeffrey C. Barrett
The Francis Crick Institute, London, UK
Theo Sanderson
Public Health England, London, UK
Frank Schwach, Ian Harrison, Catherine Ludden, Dinesh Aggarwal, Husam Osman, Meera Chand, Sharon J. Peacock, Peter Muir, Esther Robinson, Barry B. Vipond, Andrew Bosworth, Stephanie Hutchings, Hannah M. Pymont, Richard Myers, Alicia Thornton, Katerina Galai, Shazaad S. Y. Ahmad, Nicholas W. Machin, Richard Hopes, Chloe Bishop, Vicki Chalker, Elias Allara, Clare Pearson, David Bibby, Gavin Dabrera, Nicholas Ellaby, Eileen Gallagher, Jonathan Hubb, Angie Lackenby, David Lee, Nikos Manesis, Tamyo Mbisa, Steven Platt, Katherine A. Twohig & Meera Chand
London School of Hygiene & Tropical Medicine, London, UK
Joel Hellewell & Sebastian Funk
The Big Data Institute, Nuffield Department of Medicine, University of Oxford, Oxford, UK
Jasmina Panovska-Griffiths, Tanya Golubchik, David Bonsall, Katrina Lythgoe, Helen Fryer, Christophe Fraser & Laura Thomson
MRC Centre for Global Infectious Disease Analysis, Jameel Institute for Disease and Emergency Analytics, Imperial College London, London, UK
Adam A. Witney, Erik M. Volz, Rob Johnson, Olivia Boyd, Lily Geidelberg, Manon Ragonnet-Cronin & Erik Volz
Guy's and St Thomas' NHS Foundation Trust, London, UK
Perminder Smith, Adela Alcolea, Natasha Ohemeng-Kumi, John Ramble, Jasveen Sehmi & Meera Chand
Division for AI in Oncology, German Cancer Research Centre DKFZ, Heidelberg, Germany
Moritz Gerstung
Department of Medicine, University of Cambridge, Cambridge, UK
Sharon Peacock, Ramin Sadri, Catarina Sousa, Catherine Ludden, M. Estee Torok, Dinesh Aggarwal, Alessandro M. Carabelli, Sharon J. Peacock, Beth Blane, MacGregor Cox, Patrick Maxwell, Ken Smith, Ellena Brooks, Carol M. Churcher, Mireille Fragakis, Katerina Galai, Andrew Jermy, Sarah Judges, Georgina M. McManus, Kim S. Smith, Elaine Westwick & Theresa Feltwell
Centre for Enzyme Innovation, University of Portsmouth, Portsmouth, UK
Samuel C. Robson, Angela H. Beckett & Ekaterina Shelest
School of Pharmacy & Biomedical Sciences, University of Portsmouth, Portsmouth, UK
Samuel C. Robson, Katie F. Loveson, Kate F. Cook, Christopher Fearn & Salman Goudarzi
Cardiff University, Cardiff, UK
Thomas R. Connor, Anna Price, Andrew Mack, Arthur Morriss, Thomas Whalley, Joel Southgate, Martyn Guest, Christine Kitchen, Ian Merrick, Robert Munn, Owen Jones, Catherine Bresner, William Fuller, Angela Marchbank & Trudy Workman
Public Health Wales, Cardiff, UK
Thomas R. Connor, Sally Corden, Catherine Moore, Joanne Watkins, Matthew Bull, Catryn Williams, Nicole Pacchiarini, Simon Cottrell, Sara Rey, David Heyburn, Chris Williams, Noel Craine, Malorie Perry, Alexander Adams, Tara Annett, Hibo Asad, Alec Birchley, Jason Coombes, Johnathan M. Evans, Laia Fina, Bree Gatica-Wilcox, Lauren Gilbert, Lee Graham, Jessica Hey, Ember Hilvers, Sophie Jones, Hannah Jones, Sara Kumziene-Summerhayes, Caoimhe McKerr, Jessica Powell, Georgia Pugh, Sarah Taylor, Joel Southgate, Alisha Davies, Elen De Lacy, Fatima Downing, Sue Edwards, Laura Gifford, Mari Morgan & Amy Gaskin
Institute of Microbiology and Infection, University of Birmingham, Birmingham, UK
Nicholas J. Loman, Sam Nicholls, Claire McMurray, Alan McNally, Joshua Quick, Radoslaw Poplawski, Joanne Stockton & Will Rowe
King's College London, London, UK
Rocio T. Martinez Nunez & Themoula Charalampous
University of Edinburgh, Edinburgh, UK
Andrew Rambaut, Kate E. Templeton, Áine O'Toole, Thomas Williams, Rachel Colquhoun, Verity Hill, Ben Jackson, J. T. McCrone, Nathan Medd, Emily Scher, Carlos E. Balcazar, Michael D. Gallagher, Daniel Maloney, Thomas D. Stanton & Kathleen A. Williamson
Centre for Clinical Infection and Diagnostics Research, Department of Infectious Diseases, Guy's and St Thomas' NHS Foundation Trust, London, UK
Luke B. Snell, Gaia Nebbia, Jonathan Edgeworth, Rahul Batra, Bindi Patel, Themoula Charalampous & Amita Patel
University College London Hospitals NHS Foundation Trust, London, UK
Eleni Nastouli, Catherine Houlihan, Matthew Byott, Sunando Roy, Judith Heaney, Judith Breuer, Moira J. Spyer, Rachel J. Williams, Charlotte A. Williams, John A. Hartley, Leah Ensell, Helen L. Lowe, Laurentiu Maftei, Matteo Mondani, Nadua Bayzid & Marius Cotic
Advanced Pathogen Diagnostics Unit, University College London Hospital, London, UK
Eleni Nastouli, Dan Frampton, Matthew Byott, Judith Heaney & Moira J. Spyer
Great Ormond Street Institute of Child Health, University College London, London, UK
Eleni Nastouli & Moira J. Spyer
Department of Infectious Diseases and Microbiology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
M. Estee Torok & William L. Hamilton
Division of Virology, Department of Pathology, University of Cambridge, Cambridge, UK
Ian G. Goodfellow, Yasmin Chaudhry, Rhys Izuagbe, Aminu S. Jahun, Iliana Georgana, Myra Hosmillo & Malte L. Pinckert
University Hospital Southampton NHS Foundation Trust, Southampton, UK
Jacqui A. Prieto, Kordo Saeed, Stephen Aplin, Matthew Harvey, Thea Sass, Helen Umpleby, Helen Wheeler, Emanuela Pelosi, Siona Silviera, Eleri Wilson-Davies, Adhyana I. K. Mahanama, Buddhini Samaraweera & Sarah Jeremiah
School of Health Sciences, University of Southampton, Southampton, UK
Jacqui A. Prieto
School of Medicine, University of Southampton, Southampton, UK
Kordo Saeed
Division of Infection and Immunity, University College London, London, UK
Catherine Houlihan & Dan Frampton
University of Brighton, Brighton, UK
Giselda Bucca, Colin P. Smith & Andrew R. Hesketh
Institute for Infection and Immunity, St George's University of London, London, UK
Cassie F. Pope, Kenneth G. Laing & Irene M. Monahan
Infection Care Group, St George's University Hospitals NHS Foundation Trust, London, UK
Cassie F. Pope
MRC–University of Glasgow Centre for Virus Research, Glasgow, UK
Emma C. Thomson, James G. Shepherd, David L. Robertson, William T. Harvey, Joseph Hughes, Richard J. Orton, Ana da Silva Filipe, Igor Starinskij, Sreenu Vattipally, Afrida Mukaddas, Derek W. Wright, Alice Broos, Daniel Mair, Jenna Nichols, Kyriaki Nomikou, Lily Tong & Ioulia Tsatsani
University of Cambridge, Cambridge, UK
Ewan M. Harrison, Ravi K. Gupta, Michael H. Spencer Chapman, Chris Ruis, Sophia T. Girgis, Katherine L. Bellis, Leanne M. Kermack, Claire Cormie, Joana Dias, Sally Forrest, Ellen E. Higginson, Mailis Maes, Jamie Young, Elias Allara, Clare Pearson & Shane A. McCarthy
Queen's University Belfast, Belfast, UK
Fiona Rogan, Derek J. Fairley, David A. Simpson, Marc Fuchs, Julia Miskelly, Stephen Bridgett, Timofey Skvortsov, Declan T. Bradley, Zoltan Molnar, Chris Baxter, Sílvia F. Carvalho, Deborah Lavin, Arun Mariappan, Clara Radulescu, Aditi Singh & Miao Tang
Blackpool Teaching Hospitals NHS Foundation Trust, Blackpool, UK
Shaun M. Beckwith, Abigail Murray & Dawn Singleton
Hull University Teaching Hospitals NHS Trust, Hull, UK
Kirstine Eastick, Patrick J. Lillie, Phillipa J. Burns, Kavitha Gajee, Katie Kitchman & William Everson
University Hospitals Dorset NHS Foundation Trust, Poole, UK
Liz A. Sheridan, Dorian Crudgington & Ben Macklin
University Hospitals Sussex NHS Foundation Trust, Worthing, UK
Paul Randell, Mohammed O. Hassan-Ibrahim, Michelle J. Erkiert, Nicola J. Chaloner, Benjamin J. Cogger, Lisa J. Easton, Hannah Huckson, Jonathan Lewis, Sarah Lowdon, Cassandra S. Malone, Florence Munemo, Manasa Mutingwende, Roberto Nicodemi, Olga Podplomyk & Thomas Somassa
University of Exeter, Exeter, UK
Leigh M. Jackson, Ben Temperton, Stephen L. Michell, Aaron R. Jeffries, Christopher R. Jones, Bridget A. Knight, Robin Manley, Michelle L. Michelsen, Christine M. Sambles, David J. Studholme & Joanna Warwick-Dugdale
Belfast Health & Social Care Trust, Belfast, UK
Derek J. Fairley, James P. McKenna & Tanya Curran
Deep Seq, School of Life Sciences, Queens Medical Centre, University of Nottingham, Nottingham, UK
Matthew W. Loose, Matthew Carlile, Nadine Holmes, Christopher Moore, Victoria Wright, Johnny Debebe & Fei Sang
East Kent Hospitals University NHS Foundation Trust, Canterbury, UK
Samuel Moses & Hannah Lowe
University of Kent, Canterbury, UK
Samuel Moses
Hub for Biotechnology in the Built Environment, Northumbria University, Newcastle upon Tyne, UK
Darren L. Smith, Matthew Bashton & Gregory R. Young
Northumbria University, Newcastle upon Tyne, UK
Darren L. Smith, Matthew Bashton, Gregory R. Young, Clare M. McCann, Andrew Nelson, Matthew R. Crown, John H. Henderson, Amy Hollis, William Stanley & Wen C. Yew
NU-OMICS, Northumbria University, Newcastle upon Tyne, UK
Darren L. Smith, Clare M. McCann & Andrew Nelson
Centre for Genomic Pathogen Surveillance, University of Oxford, Oxford, UK
David M. Aanensen, Anthony P. Underwood, Khalil Abudahab, Mirko Menegazzo, Ben E. W. Taylor & Corin A. Yeats
Public Health England, Cambridge, Cambridge, UK
Martin D. Curran & Surendra Parmar
University of Sheffield, Sheffield, UK
Matthew D. Parker, Thushan I. de Silva, Dennis Wang, Mehmet Yavus, Samantha E. Hansford, Benjamin H. Foulkes, Marta Gallis, Hailey R. Hornsby, Stavroula F. Louka, Manoj Pohare, Paige Wolverson, Peijun Zhang, Timothy M. Freeman, Sharon N. Hsu, Benjamin B. Lindsey, Nikki Smith, Cariad Evans, Kate Johnson, David G. Partridge, Mohammad Raza, Alexander J. Keeley, Adrienn Angyal, Luke R. Green & Max Whiteley
Portsmouth Hospitals University NHS Trust, Portsmouth, UK
Sharon Glaysher, Scott Elliott, Kelly Bicknell, Robert Impey & Sarah Wyllie
NHS Lothian, Edinburgh, UK
Kate E. Templeton, Rebecca Dewar, Martin P. McHugh, Elizabeth Wastnedge, Seb Cotton & Abbie Gallagher
NHS Greater Glasgow and Clyde, Glasgow, UK
Rory N. Gunson, Rachel Blacow & Jon Perkins
Quadram Institute Bioscience, Norwich, UK
Justin O'Grady, Andrew J. Page, Gemma L. Kay, Nabil-Fareed Alikhan, Alexander J. Trotter, Leonardo de Oliveira Martins, Alison E. Mather, Lizzie Meadows, Alp Aydin, David J. Baker, Ebenezer Foster-Nyarko, Sophie J. Prosolek, Steven Rudder & Thanh Le-Viet
University of East Anglia, Norwich, UK
Justin O'Grady & Rose K. Davidson
University of Oxford, Oxford, UK
Dominic Kwiatkowski
Hampshire Hospitals NHS Foundation Trust, Basingstoke, UK
Nicholas Cortes, Nathan Moore, Claire Thomas, Stephen P. Kidd, Beatrice Bertolusso, Jessica Lynch, Gabrielle Vernet & Rebecca Williams
Royal Free London NHS Foundation Trust, London, UK
Tabitha W. Mahungu, Tanzina Haque, Jennifer Hart, Dianne Irish-Tavares & Eric Witele
South Tees Hospitals NHS Foundation Trust, Middlesbrough, UK
Steven Liggett, Craig Mower, Louisa K. Watson, Paul Baker, Stephen Bonner, Sarah Essex & Leanne J. Murray
School of Biological Sciences, University of Portsmouth, Portsmouth, UK
Angela H. Beckett, Yann Bourgeois & Ekaterina Shelest
Public Health Scotland, Edinburgh, UK
Matthew T. G. Holden, Sharif Shaaban, Stefan Rooke & Gonzalo Yebra
Health Services Laboratories, London, UK
Lisa J. Levett, Judith Heaney, Paul R. Grant, Stuart Kirk, Wendy Chatterton & Monika Pusok
Heartlands Hospital, Birmingham, UK
Husam Osman, Esther Robinson, Li Xu-McCrae & Andrew Bosworth
University of Liverpool, Liverpool, UK
Alistair C. Darby, Steve Paterson, Miren Iturriza-Gomara, Anita O. Lucaci, Sam T. Haldenby, Kathryn A. Jackson, Lance Turtle, Richard Eccles, Matthew Gemmell, Richard Gregory, Margaret Hughes, Charlotte Nelson, Lucille Rainbow, Edith E. Vamos, Hermione J. Webster, Mark Whitehead & Claudia Wierzbicki
Department of Zoology, University of Oxford, Oxford, UK
Oliver G. Pybus, Louis du Plessis, Moritz U. G. Kraemer, Jayna Raghwani, Alex E. Zarebski, Stephen W. Attwood, Marina Escalera Zamudio, Sarah Francois, Bernardo Gutierrez & Tetyana I. Vasylyeva
MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
Daniela de Angelis, Chris J. Illingworth, Chris Jackson & David Pascall
Microbiology Department, Buckinghamshire Healthcare NHS Trust, Aylesbury, UK
Nick Wong, David T. Pritchard, Debbie Binns & Victoria James
The Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
Yusri Taha, Jennifer Collins, Gary Eltringham, Shirelle Burton-Fanning, Brendan A. I. Payne & Sheila Waugh
University of St Andrews, St Andrews, UK
Martin P. McHugh
Imperial College Healthcare NHS Trust, London, UK
Siddharth Mookerjee, Alison Cox, Pinglawathee Madona, Marcus Pond, Paul A. Randell, Alison H. Holmes, James R. Price & Frances Bolt
NIHR Health Protection Research Unit in HCAI and AMR, Imperial College London, London, UK
Siddharth Mookerjee, Alison H. Holmes, James R. Price & Frances Bolt
Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
Ben Warne
Department of Microbiology, South West London Pathology, London, UK
Joshua F. Taylor & Ngee Keong Tan
Betsi Cadwaladr University Health Board, Wrexham, UK
Helen Adams
Institute of Biodiversity, Animal Health & Comparative Medicine, Glasgow, UK
William T. Harvey & Katherine L. Smollett
Norfolk County Council, Norfolk, UK
Lewis G. Spurgin & Louise Smith
Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
Judith Breuer, Julianne R. Brown, Kathryn A. Harris, Nathaniel Storey, Laura Atkinson, Jack C. D. Lee & Divya Shah
Wellcome Centre for Human Genetics, Nuffield Department of Medicine, University of Oxford, Oxford, UK
John A. Todd, David Buck, Angie Green, George MacIntyre-Cockett & Amy Trebes
Cardiff and Vale University Health Board, Cardiff, UK
Michaela John, Sian Morgan, Safiah Afifi, Robert Beer, Joshua Maksimovic, Kathryn McCluggage Masters & Karla Spellman
Turnkey Laboratory, University of Birmingham, Birmingham, UK
Alan McNally, Fiona Ashford, Angus Best, Liam Crawford, Nicola Cumley, Megan Mayhew, Oliver Megram, Jeremy Mirza, Emma Moles-Garcia & Benita Percival
Norfolk and Norwich University Hospitals NHS Foundation Trust, Norwich, UK
Samir Dervisevic, Rachael Stanley & Lindsay Coupland
Royal Brompton and Harefield Hospitals, London, UK
Newara A. Ramadan
The Queen Elizabeth Hospital King's Lynn NHS Foundation Trust, King's Lynn, UK
Christopher Jeanes
Whittington Health NHS Trust, London, UK
Jana Catalan & Neil Jones
Barking, Havering and Redbridge University Hospitals NHS Trust, London, UK
Amy Ash & Cherian Koshy
Bournemouth University, Bournemouth, UK
Magdalena Barrow, Sarah L. Buchan & Anna Mantzouratou
Clinical Microbiology Department, Queens Medical Centre, Nottingham University Hospitals NHS Trust, Nottingham, UK
Gemma Clark, Louise Berry, Tim Boswell, Vicki M. Fleming, Hannah C. Howson-Wells, Amelia Joseph, Manjinder Khakh & Michelle M. Lister
Clinical Microbiology, University Hospitals of Leicester NHS Trust, Leicester, UK
Christopher W. Holmes, Paul W. Bird, Karlie Fallon, Thomas Helmer, Claire L. McMurray, Mina Odedra, Jessica Shaw, Julian W. Tang & Nicholas J. Willford
County Durham and Darlington NHS Foundation Trust, Durham, UK
Sharon Campbell, Victoria Blakey, Veena Raviprakash, Nicola Sheriff & Lesley-Anne Williams
Department of Microbiology, Kettering General Hospital, Kettering, UK
Thomas Davis, Sahar Eldirdiri & Anita Kenyon
Barts Health NHS Trust, London, UK
Kathryn A. Harris
North West London Pathology, London, UK
Alison Cox, Pinglawathee Madona, Marcus Pond & Paul A. Randell
Maidstone and Tunbridge Wells NHS Trust, Maidstone, UK
Karen T. Withell & Graciela Sluga
Microbiology, Royal Oldham Hospital, Oldham, UK
Cheryl Williams
North Cumbria Integrated Care NHS Foundation Trust, Carlisle, UK
Clive Graham, Edward Barton, Debra Padgett & Garren Scott
North Tees and Hartlepool NHS Foundation Trust, Stockton on Tees, UK
Rebecca Denton-Smith, Emma Swindells, Robyn Turnbull & Jane Greenaway
Path Links, Northern Lincolnshire and Goole NHS Foundation Trust, Grimsby, UK
Tim J. Sloan, Phillip Clarke, Nichola Duckworth & Sarah Walsh
Queen Elizabeth Hospital, Birmingham, UK
Anna Casey & Liz Ratcliffe
Royal Devon and Exeter NHS Foundation Trust, Exeter, UK
Christopher R. Jones, Bridget A. Knight, Jennifer Poyner, Cressida Auckland & Helen Morcrette
Virology, School of Life Sciences, Queens Medical Centre, University of Nottingham, Nottingham, UK
Patrick C. McClure, Jonathan Ball, Timothy Byaruhanga, Joseph G. Chappell, Jayasree Dey, Jack D. Hill & Emily J. Park
Swansea University, Swansea, UK
Ronan A. Lyons
Viapath, Guy's and St Thomas' NHS Foundation Trust, and King's College Hospital NHS Foundation Trust, London, UK
Perminder Smith, Adela Alcolea, Natasha Ohemeng-Kumi, John Ramble & Jasveen Sehmi
Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
Mehmet Yavus, Cariad Evans, Kate Johnson, David G. Partridge & Mohammad Raza
Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK
Nicola Reynolds
West of Scotland Specialist Virology Centre, NHS Greater Glasgow and Clyde, Glasgow, UK
Lynne Ferguson, Emily J. Goldstein, Alasdair Maclean & Rachael Tomb
Public Health Agency, Belfast, UK
Tim Wyatt & Declan T. Bradley
Northumbria Healthcare NHS Foundation Trust, Newcastle upon Tyne, UK
Giles Idle & Kevin Cole
University of Southampton, Southampton, UK
Matilde Mori
East Suffolk and North Essex NHS Foundation Trust, Colchester, UK
Luke Bedford
East Sussex Healthcare NHS Trust, St Leonards-on-Sea, UK
James S. Cargill & Warwick Hughes
Gateshead Health NHS Foundation Trust, Gateshead, UK
Jonathan Moore & Susanne Stonehouse
Isle of Wight NHS Trust, Newport, UK
Anibolina Castigador & Emily Macnaughton
King's College Hospital NHS Foundation Trust, London, UK
Kate El Bouzidi, Temi Lampejo & Malur Sudhanva
Liverpool Clinical Laboratories, Liverpool, UK
Cassie Breen
Manchester University NHS Foundation Trust, Manchester, UK
Shazaad S. Y. Ahmad, Ryan P. George & Nicholas W. Machin
North Middlesex University Hospital NHS Trust, London, UK
Aidan Cross & Mariyam Mirfenderesky
Southwest Pathology Services, Taunton, UK
Andrew I. Lawton
The Royal Marsden NHS Foundation Trust, London, UK
Andrea N. Gomes, Maimuna Kimuli & Darren R. Murray
The Royal Wolverhampton NHS Trust, Wolverhampton, UK
Paula Ashfield & Donald Dobie
University of Birmingham, Birmingham, UK
Andrew Beggs & Alex Richter
Watford General Hospital, Watford, UK
Arezou Fanaie, Rachel A. Hilson & Geraldine Yaze
Guy's and St Thomas' Biomedical Research Centre, London, UK
Flavia Flaviani
Newcastle University, Newcastle, UK
Sarah O'Brien, Steven Rushton & Roy Sanderson
Harald S. Vöhringer
Matthew Sinnott
Nicola De Maio
Richard Goater
Frank Schwach
Ian Harrison
Joel Hellewell
Cristina V. Ariani
Sonia Gonçalves
David K. Jackson
Ian Johnston
Alexander W. Jung
Callum Saint
John Sillitoe
Maria Suciu
Nick Goldman
Jasmina Panovska-Griffiths
Ewan Birney
Erik Volz
Sebastian Funk
Meera Chand
Inigo Martincorena
Jeffrey C. Barrett
The Wellcome Sanger Institute COVID-19 Surveillance Team
Irina Abnizova
, Louise Aigrain
, Alex Alderton
, Mozam Ali
, Laura Allen
, Roberto Amato
, Ralph Anderson
, Cristina Ariani
, Siobhan Austin-Guest
, Sendu Bala
, Jeffrey Barrett
, Andrew Bassett
, Kristina Battleday
, James Beal
, Mathew Beale
, Charlotte Beaver
, Sam Bellany
, Tristram Bellerby
, Katie Bellis
, Duncan Berger
, Matt Berriman
, Emma Betteridge
, Paul Bevan
, Simon Binley
, Jason Bishop
, Kirsty Blackburn
, James Bonfield
, Nick Boughton
, Sam Bowker
, Timothy Brendler-Spaeth
, Iraad Bronner
, Tanya Brooklyn
, Sarah Kay Buddenborg
, Robert Bush
, Catarina Caetano
, Alex Cagan
, Nicola Carter
, Joanna Cartwright
, Tiago Carvalho Monteiro
, Liz Chapman
, Tracey-Jane Chillingworth
, Peter Clapham
, Richard Clark
, Adrian Clarke
, Catriona Clarke
, Daryl Cole
, Elizabeth Cook
, Maria Coppola
, Linda Cornell
, Clare Cornwell
, Craig Corton
, Abby Crackett
, Alison Cranage
, Harriet Craven
, Sarah Craw
, Mark Crawford
, Tim Cutts
, Monika Dabrowska
, Matt Davies
, Robert Davies
, Joseph Dawson
, Callum Day
, Aiden Densem
, Thomas Dibling
, Cat Dockree
, David Dodd
, Sunil Dogga
, Matthew Dorman
, Gordon Dougan
, Martin Dougherty
, Alexander Dove
, Lucy Drummond
, Eleanor Drury
, Monika Dudek
, Jillian Durham
, Laura Durrant
, Elizabeth Easthope
, Sabine Eckert
, Pete Ellis
, Ben Farr
, Michael Fenton
, Marcella Ferrero
, Neil Flack
, Howerd Fordham
, Grace Forsythe
, Luke Foulser
, Matt Francis
, Audrey Fraser
, Adam Freeman
, Anastasia Galvin
, Maria Garcia-Casado
, Alex Gedny
, Sophia Girgis
, James Glover
, Sonia Gonçalves
, Scott Goodwin
, Oliver Gould
, Marina Gourtovaia
, Andy Gray
, Emma Gray
, Coline Griffiths
, Yong Gu
, Florence Guerin
, Will Hamilton
, Hannah Hanks
, Ewan Harrison
, Alexandria Harrott
, Edward Harry
, Julia Harvison
, Paul Heath
, Anastasia Hernandez-Koutoucheva
, Rhiannon Hobbs
, Dave Holland
, Sarah Holmes
, Gary Hornett
, Nicholas Hough
, Liz Huckle
, Lena Hughes-Hallet
, Adam Hunter
, Stephen Inglis
, Sameena Iqbal
, Adam Jackson
, David Jackson
, Keith James
, Dorota Jamrozy
, Carlos Jimenez Verdejo
, Ian Johnston
, Matthew Jones
, Kalyan Kallepally
, Leanne Kane
, Keely Kay
, Sally Kay
, Jon Keatley
, Alan Keith
, Alison King
, Lucy Kitchin
, Matt Kleanthous
, Martina Klimekova
, Petra Korlevic
, Ksenia Krasheninnkova
, Dominic Kwiatkowski
, Greg Lane
, Cordelia Langford
, Adam Laverack
, Katharine Law
, Mara Lawniczak
, Stefanie Lensing
, Steven Leonard
, Laura Letchford
, Kevin Lewis
, Amanah Lewis-Wade
, Jennifer Liddle
, Quan Lin
, Sarah Lindsay
, Sally Linsdell
, Rich Livett
, Stephanie Lo
, Rhona Long
, Jamie Lovell
, Jon Lovell
, Catherine Ludden
, James Mack
, Mark Maddison
, Aleksei Makunin
, Irfan Mamun
, Jenny Mansfield
, Neil Marriott
, Matt Martin
, Inigo Martincorena
, Matthew Mayho
, Shane McCarthy
, Jo McClintock
, Samantha McGuigan
, Sandra McHugh
, Liz McMinn
, Carl Meadows
, Emily Mobley
, Robin Moll
, Maria Morra
, Leanne Morrow
, Kathryn Murie
, Sian Nash
, Claire Nathwani
, Plamena Naydenova
, Alexandra Neaverson
, Rachel Nelson
, Ed Nerou
, Jon Nicholson
, Tabea Nimz
, Guillaume G. Noell
, Sarah O'Meara
, Valeriu Ohan
, Karen Oliver
, Charles Olney
, Doug Ormond
, Agnes Oszlanczi
, Steve Palmer
, Yoke Fei Pang
, Barbora Pardubska
, Naomi Park
, Aaron Parmar
, Gaurang Patel
, Minal Patel
, Maggie Payne
, Sharon Peacock
, Arabella Petersen
, Deborah Plowman
, Tom Preston
, Liam Prestwood
, Christoph Puethe
, Michael Quail
, Diana Rajan
, Shavanthi Rajatileka
, Richard Rance
, Suzannah Rawlings
, Nicholas Redshaw
, Joe Reynolds
, Mark Reynolds
, Simon Rice
, Matt Richardson
, Connor Roberts
, Katrina Robinson
, Melanie Robinson
, David Robinson
, Hazel Rogers
, Eduardo Martin Rojo
, Daljit Roopra
, Mark Rose
, Luke Rudd
, Ramin Sadri
, Nicholas Salmon
, David Saul
, Frank Schwach
, Carol Scott
, Phil Seekings
, Lesley Shirley
, John Sillitoe
, Alison Simms
, Matthew Sinnott
, Shanthi Sivadasan
, Bart Siwek
, Dale Sizer
, Kenneth Skeldon
, Jason Skelton
, Joanna Slater-Tunstill
, Lisa Sloper
, Nathalie Smerdon
, Chris Smith
, Christen Smith
, James Smith
, Katie Smith
, Michelle Smith
, Sean Smith
, Tina Smith
, Leighton Sneade
, Carmen Diaz Soria
, Catarina Sousa
, Emily Souster
, Andrew Sparkes
, Michael Spencer-Chapman
, Janet Squares
, Robert Stanley
, Claire Steed
, Tim Stickland
, Ian Still
, Michael R. Stratton
, Michelle Strickland
, Allen Swann
, Agnieszka Swiatkowska
, Neil Sycamore
, Emma Swift
, Edward Symons
, Suzanne Szluha
, Emma Taluy
, Nunu Tao
, Katy Taylor
, Sam Taylor
, Stacey Thompson
, Mark Thompson
, Mark Thomson
, Nicholas Thomson
, Scott Thurston
, Gerry Tonkin-Hill
, Dee Toombs
, Benjamin Topping
, Jaime Tovar-Corona
, Daniel Ungureanu
, James Uphill
, Jana Urbanova
, Philip Jansen Van Vuuren
, Valerie Vancollie
, Paul Voak
, Danielle Walker
, Matthew Walker
, Matt Waller
, Gary Ward
, Charlie Weatherhogg
, Niki Webb
, Danni Weldon
, Alan Wells
, Eloise Wells
, Luke Westwood
, Theo Whipp
, Thomas Whiteley
, Georgia Whitton
, Andrew Whitwham
, Sara Widaa
, Mia Williams
, Mark Wilson
& Sean Wright
The COVID-19 Genomics UK (COG-UK) Consortium*
Funding acquisition, leadership and supervision, metadata curation, project administration, samples and logistics, sequencing and analysis, software and analysis tools, and visualization
Samuel C. Robson
Funding acquisition, leadership and supervision, metadata curation, project administration, samples and logistics, sequencing and analysis, and software and analysis tools
Thomas R. Connor
& Nicholas J. Loman
Leadership and supervision, metadata curation, project administration, samples and logistics, sequencing and analysis, software and analysis tools, and visualization
Tanya Golubchik
Funding acquisition, leadership and supervision, metadata curation, samples and logistics, sequencing and analysis, and visualization
Rocio T. Martinez Nunez
Funding acquisition, leadership and supervision, project administration, samples and logistics, sequencing and analysis, and software and analysis tools
David Bonsall
Funding acquisition, leadership and supervision, project administration, sequencing and analysis, software and analysis tools, and visualization
Andrew Rambaut
Funding acquisition, metadata curation, project administration, samples and logistics, sequencing and analysis, and software and analysis tools
Luke B. Snell
Leadership and supervision, metadata curation, project administration, samples and logistics, software and analysis tools, and visualization
Rich Livett
Funding acquisition, leadership and supervision, metadata curation, project administration, and samples and logistics
Catherine Ludden
Funding acquisition, leadership and supervision, metadata curation, samples and logistics, and sequencing and analysis
Sally Corden
& Eleni Nastouli
Funding acquisition, leadership and supervision, metadata curation, sequencing and analysis, and software and analysis tools
Gaia Nebbia
Funding acquisition, leadership and supervision, project administration, samples and logistics, and sequencing and analysis
Leadership and supervision, metadata curation, project administration, samples and logistics, and sequencing and analysis
Katrina Lythgoe
, M. Estee Torok
& Ian G. Goodfellow
Leadership and supervision, metadata curation, project administration, samples and logistics, and visualization
& Kordo Saeed
Leadership and supervision, metadata curation, project administration, sequencing and analysis, and software and analysis tools
Leadership and supervision, metadata curation, samples and logistics, sequencing and analysis, and visualization
Catherine Houlihan
Leadership and supervision, metadata curation, sequencing and analysis, software and analysis tools, and visualization
Dan Frampton
Metadata curation, project administration, samples and logistics, sequencing and analysis, and software and analysis tools
William L. Hamilton
& Adam A. Witney
Funding acquisition, samples and logistics, sequencing and analysis, and visualization
Giselda Bucca
Funding acquisition, leadership and supervision, metadata curation, and project administration
Funding acquisition, leadership and supervision, metadata curation, and samples and logistics
Catherine Moore
Funding acquisition, leadership and supervision, metadata curation, and sequencing and analysis
Emma C. Thomson
Funding acquisition, leadership and supervision, project administration, and samples and logistics
Ewan M. Harrison
Funding acquisition, leadership and supervision, sequencing and analysis, and visualization
Colin P. Smith
Leadership and supervision, metadata curation, project administration, and sequencing and analysis
Fiona Rogan
Leadership and supervision, metadata curation, project administration, and samples and logistics
Shaun M. Beckwith
, Abigail Murray
, Dawn Singleton
, Kirstine Eastick
, Liz A. Sheridan
, Paul Randell
, Leigh M. Jackson
, Cristina V. Ariani
& Sónia Gonçalves
Leadership and supervision, metadata curation, samples and logistics, and sequencing and analysis
Derek J. Fairley
, Matthew W. Loose
& Joanne Watkins
Leadership and supervision, metadata curation, samples and logistics, and visualization
Leadership and supervision, metadata curation, sequencing and analysis, and software and analysis tools
Sam Nicholls
, Matthew Bull
& Roberto Amato
Leadership and supervision, project administration, samples and logistics, and sequencing and analysis
Darren L. Smith
Leadership and supervision, sequencing and analysis, software and analysis tools, and visualization
David M. Aanensen
& Jeffrey C. Barrett
Metadata curation, project administration, samples and logistics, and sequencing and analysis
Dinesh Aggarwal
, James G. Shepherd
, Martin D. Curran
& Surendra Parmar
Metadata curation, project administration, sequencing and analysis, and software and analysis tools
Matthew D. Parker
Metadata curation, samples and logistics, sequencing and analysis, and software and analysis tools
Catryn Williams
Metadata curation, samples and logistics, sequencing and analysis, and visualization
Sharon Glaysher
Metadata curation, sequencing and analysis, software and analysis tools, and visualization
Anthony P. Underwood
, Matthew Bashton
, Nicole Pacchiarini
, Katie F. Loveson
& Matthew Byott
Project administration, sequencing and analysis, software and analysis tools, and visualization
Alessandro M. Carabelli
Funding acquisition, leadership and supervision, and metadata curation
Kate E. Templeton
Funding acquisition, leadership and supervision, and project administration
Thushan I. de Silva
, Dennis Wang
, Cordelia F. Langford
& John Sillitoe
Funding acquisition, leadership and supervision, and samples and logistics
Rory N. Gunson
Funding acquisition, leadership and supervision, and sequencing and analysis
Simon Cottrell
, Justin O'Grady
& Dominic Kwiatkowski
Leadership and supervision, metadata curation, and project administration
Patrick J. Lillie
Leadership and supervision, metadata curation, and samples and logistics
Nicholas Cortes
, Nathan Moore
, Claire Thomas
, Phillipa J. Burns
, Tabitha W. Mahungu
& Steven Liggett
Leadership and supervision, metadata curation, and sequencing and analysis
Angela H. Beckett
& Matthew T. G. Holden
Leadership and supervision, project administration, and samples and logistics
Lisa J. Levett
, Husam Osman
& Mohammed O. Hassan-Ibrahim
Leadership and supervision, project administration, and sequencing and analysis
David A. Simpson
Leadership and supervision, samples and logistics, and sequencing and analysis
, Ravi K. Gupta
, Alistair C. Darby
& Steve Paterson
Leadership and supervision, sequencing and analysis, and software and analysis tools
Oliver G. Pybus
, Erik M. Volz
, Daniela de Angelis
, David L. Robertson
, Andrew J. Page
& Inigo Martincorena
Leadership and supervision, sequencing and analysis, and visualization
Louise Aigrain
& Andrew R. Bassett
Metadata curation, project administration, and samples and logistics
Nick Wong
, Yusri Taha
, Michelle J. Erkiert
& Michael H. Spencer Chapman
Metadata curation, project administration, and sequencing and analysis
Rebecca Dewar
& Martin P. McHugh
Metadata curation, project administration, and software and analysis tools
Siddharth Mookerjee
Metadata curation, project administration, and visualization
Stephen Aplin
, Matthew Harvey
, Thea Sass
, Helen Umpleby
& Helen Wheeler
Metadata curation, samples and logistics, and sequencing and analysis
James P. McKenna
, Ben Warne
, Joshua F. Taylor
, Yasmin Chaudhry
, Rhys Izuagbe
, Aminu S. Jahun
, Gregory R. Young
, Claire McMurray
, Clare M. McCann
, Andrew Nelson
& Scott Elliott
Metadata curation, samples and logistics, and visualization
Hannah Lowe
Metadata curation, sequencing and analysis, and software and analysis tools
Anna Price
, Matthew R. Crown
, Sara Rey
, Sunando Roy
& Ben Temperton
Metadata curation, sequencing and analysis, and visualization
Sharif Shaaban
& Andrew R. Hesketh
Project administration, samples and logistics, and sequencing and analysis
Kenneth G. Laing
, Irene M. Monahan
& Judith Heaney
Project administration, samples and logistics, and visualization
Emanuela Pelosi
, Siona Silviera
& Eleri Wilson-Davies
Samples and logistics, software and analysis tools, and visualization
Helen Fryer
Sequencing and analysis, software and analysis tools, and visualization
, Louis du Plessis
, Rob Johnson
, William T. Harvey
, Joseph Hughes
, Richard J. Orton
, Lewis G. Spurgin
, Yann Bourgeois
, Chris Ruis
, Áine O'Toole
& Theo Sanderson
Funding acquisition, and leadership and supervision
, Jonathan Edgeworth
, Judith Breuer
, Stephen L. Michell
& John A. Todd
Funding acquisition and project administration
Michaela John
& David Buck
Leadership and supervision, and metadata curation
Kavitha Gajee
& Gemma L. Kay
Leadership and supervision, and project administration
Sharon J. Peacock
& David Heyburn
Leadership and supervision, and samples and logistics
Katie Kitchman
, Alan McNally
, David T. Pritchard
, Samir Dervisevic
, Peter Muir
, Esther Robinson
, Barry B. Vipond
, Newara A. Ramadan
, Christopher Jeanes
, Jana Catalan
& Neil Jones
Leadership and supervision, and sequencing and analysis
Ana da Silva Filipe
, Chris Williams
, Marc Fuchs
, Julia Miskelly
, Aaron R. Jeffries
& Naomi R. Park
Metadata curation, and samples and logistics
Amy Ash
, Cherian Koshy
, Magdalena Barrow
, Sarah L. Buchan
, Anna Mantzouratou
, Gemma Clark
, Christopher W. Holmes
, Sharon Campbell
, Thomas Davis
, Ngee Keong Tan
, Julianne R. Brown
, Kathryn A. Harris
, Stephen P. Kidd
, Paul R. Grant
, Li Xu-McCrae
, Alison Cox
, Pinglawathee Madona
, Marcus Pond
, Paul A. Randell
, Karen T. Withell
, Cheryl Williams
, Clive Graham
, Rebecca Denton-Smith
, Emma Swindells
, Robyn Turnbull
, Tim J. Sloan
, Andrew Bosworth
, Stephanie Hutchings
, Hannah M. Pymont
, Anna Casey
, Liz Ratcliffe
, Christopher R. Jones
, Bridget A. Knight
, Tanzina Haque
, Jennifer Hart
, Dianne Irish-Tavares
, Eric Witele
, Craig Mower
, Louisa K. Watson
, Jennifer Collins
, Gary Eltringham
, Dorian Crudgington
, Ben Macklin
, Miren Iturriza-Gomara
, Anita O. Lucaci
& Patrick C. McClure
Metadata curation, and sequencing and analysis
Matthew Carlile
, Nadine Holmes
, Christopher Moore
, Nathaniel Storey
, Stefan Rooke
, Gonzalo Yebra
, Noel Craine
, Malorie Perry
, Nabil-Fareed Alikhan
, Stephen Bridgett
, Kate F. Cook
, Christopher Fearn
, Salman Goudarzi
, Ronan A. Lyons
, Thomas Williams
, Sam T. Haldenby
& Steven Leonard
Metadata curation, and software and analysis tools
Robert M. Davies
Project administration, and samples and logistics
Rahul Batra
, Beth Blane
, Moira J. Spyer
, Perminder Smith
, Mehmet Yavus
, Rachel J. Williams
, Adhyana I. K. Mahanama
, Buddhini Samaraweera
, Sophia T. Girgis
, Samantha E. Hansford
, Angie Green
, Katherine L. Bellis
, Matthew J. Dorman
& Shavanthi Rajatileka
Project administration, and sequencing and analysis
Joshua Quick
Project administration, and software and analysis tools
Radoslaw Poplawski
Samples and logistics, and sequencing and analysis
, Andrew Mack
, Arthur Morriss
, Thomas Whalley
, Bindi Patel
, Iliana Georgana
, Myra Hosmillo
, Malte L. Pinckert
, Joanne Stockton
, John H. Henderson
, Amy Hollis
, William Stanley
, Wen C. Yew
, Richard Myers
, Alicia Thornton
, Alexander Adams
, Tara Annett
, Hibo Asad
, Alec Birchley
, Jason Coombes
, Johnathan M. Evans
, Laia Fina
, Bree Gatica-Wilcox
, Lauren Gilbert
, Lee Graham
, Jessica Hey
, Ember Hilvers
, Sophie Jones
, Hannah Jones
, Sara Kumziene-Summerhayes
, Caoimhe McKerr
, Jessica Powell
, Georgia Pugh
, Sarah Taylor
, Alexander J. Trotter
, Charlotte A. Williams
, Leanne M. Kermack
, Benjamin H. Foulkes
, Marta Gallis
, Hailey R. Hornsby
, Stavroula F. Louka
, Manoj Pohare
, Paige Wolverson
, Peijun Zhang
, George MacIntyre-Cockett
, Amy Trebes
, Robin J. Moll
, Lynne Ferguson
, Emily J. Goldstein
, Alasdair Maclean
& Rachael Tomb
Samples and logistics, and software and analysis tools
Igor Starinskij
Sequencing and analysis, and software and analysis tools
Laura Thomson
, Joel Southgate
, Moritz U. G. Kraemer
, Jayna Raghwani
, Alex E. Zarebski
, Olivia Boyd
, Lily Geidelberg
, Chris J. Illingworth
, Chris Jackson
, David Pascall
, Sreenu Vattipally
, Timothy M. Freeman
, Sharon N. Hsu
, Benjamin B. Lindsey
& Jaime M. Tovar-Corona
Sequencing and analysis, and visualization
MacGregor Cox
Software and analysis tools, and visualization
Khalil Abudahab
, Mirko Menegazzo
, Ben E. W. Taylor
, Corin A. Yeats
, Afrida Mukaddas
, Derek W. Wright
, Leonardo de Oliveira Martins
, Rachel Colquhoun
, Verity Hill
, Ben Jackson
, J. T. McCrone
, Nathan Medd
, Emily Scher
& Jon-Paul Keatley
Leadership and supervision
Tanya Curran
, Sian Morgan
, Patrick Maxwell
, Ken Smith
, Sahar Eldirdiri
, Anita Kenyon
, Alison H. Holmes
, James R. Price
, Tim Wyatt
, Alison E. Mather
, Timofey Skvortsov
& John A. Hartley
Metadata curation
Martyn Guest
, Christine Kitchen
, Ian Merrick
, Robert Munn
, Beatrice Bertolusso
, Jessica Lynch
, Gabrielle Vernet
, Stuart Kirk
, Elizabeth Wastnedge
, Rachael Stanley
, Giles Idle
, Declan T. Bradley
, Jennifer Poyner
& Matilde Mori
Owen Jones
, Victoria Wright
, Ellena Brooks
, Carol M. Churcher
, Mireille Fragakis
, Katerina Galai
, Andrew Jermy
, Sarah Judges
, Georgina M. McManus
, Kim S. Smith
, Elaine Westwick
, Stephen W. Attwood
, Frances Bolt
, Alisha Davies
, Elen De Lacy
, Fatima Downing
, Sue Edwards
, Lizzie Meadows
, Sarah Jeremiah
, Nikki Smith
& Luke Foulser
Samples and logistics
Themoula Charalampous
, Amita Patel
, Louise Berry
, Tim Boswell
, Vicki M. Fleming
, Hannah C. Howson-Wells
, Amelia Joseph
, Manjinder Khakh
, Michelle M. Lister
, Paul W. Bird
, Karlie Fallon
, Thomas Helmer
, Claire L. McMurray
, Mina Odedra
, Jessica Shaw
, Julian W. Tang
, Nicholas J. Willford
, Victoria Blakey
, Veena Raviprakash
, Nicola Sheriff
, Lesley-Anne Williams
, Theresa Feltwell
, Luke Bedford
, James S. Cargill
, Warwick Hughes
, Jonathan Moore
, Susanne Stonehouse
, Laura Atkinson
, Jack C. D. Lee
, Divya Shah
, Adela Alcolea
, Natasha Ohemeng-Kumi
, John Ramble
, Jasveen Sehmi
, Rebecca Williams
, Wendy Chatterton
, Monika Pusok
, William Everson
, Anibolina Castigador
, Emily Macnaughton
, Kate El Bouzidi
, Temi Lampejo
, Malur Sudhanva
, Cassie Breen
, Graciela Sluga
, Shazaad S. Y. Ahmad
, Ryan P. George
, Nicholas W. Machin
, Debbie Binns
, Victoria James
, Rachel Blacow
, Lindsay Coupland
, Louise Smith
, Edward Barton
, Debra Padgett
, Garren Scott
, Aidan Cross
, Mariyam Mirfenderesky
, Jane Greenaway
, Kevin Cole
, Phillip Clarke
, Nichola Duckworth
, Sarah Walsh
, Kelly Bicknell
, Robert Impey
, Sarah Wyllie
, Richard Hopes
, Chloe Bishop
, Vicki Chalker
, Ian Harrison
, Laura Gifford
, Zoltan Molnar
, Cressida Auckland
, Cariad Evans
, Kate Johnson
, David G. Partridge
, Mohammad Raza
, Paul Baker
, Stephen Bonner
, Sarah Essex
, Leanne J. Murray
, Andrew I. Lawton
, Shirelle Burton-Fanning
, Brendan A. I. Payne
, Sheila Waugh
, Andrea N. Gomes
, Maimuna Kimuli
, Darren R. Murray
, Paula Ashfield
, Donald Dobie
, Fiona Ashford
, Angus Best
, Liam Crawford
, Nicola Cumley
, Megan Mayhew
, Oliver Megram
, Jeremy Mirza
, Emma Moles-Garcia
, Benita Percival
, Leah Ensell
, Helen L. Lowe
, Laurentiu Maftei
, Matteo Mondani
, Nicola J. Chaloner
, Benjamin J. Cogger
, Lisa J. Easton
, Hannah Huckson
, Jonathan Lewis
, Sarah Lowdon
, Cassandra S. Malone
, Florence Munemo
, Manasa Mutingwende
, Roberto Nicodemi
, Olga Podplomyk
, Thomas Somassa
, Andrew Beggs
, Alex Richter
, Claire Cormie
, Joana Dias
, Sally Forrest
, Ellen E. Higginson
, Mailis Maes
, Jamie Young
, Rose K. Davidson
, Kathryn A. Jackson
, Lance Turtle
, Alexander J. Keeley
, Jonathan Ball
, Timothy Byaruhanga
, Joseph G. Chappell
, Jayasree Dey
, Jack D. Hill
, Emily J. Park
, Arezou Fanaie
, Rachel A. Hilson
, Geraldine Yaze
& Stephanie Lo
Sequencing and analysis
Safiah Afifi
, Robert Beer
, Joshua Maksimovic
, Kathryn McCluggage Masters
, Karla Spellman
, Catherine Bresner
, William Fuller
, Angela Marchbank
, Trudy Workman
, Ekaterina Shelest
, Johnny Debebe
, Fei Sang
, Marina Escalera Zamudio
, Sarah Francois
, Bernardo Gutierrez
, Tetyana I. Vasylyeva
, Flavia Flaviani
, Manon Ragonnet-Cronin
, Katherine L. Smollett
, Alice Broos
, Daniel Mair
, Jenna Nichols
, Kyriaki Nomikou
, Lily Tong
, Ioulia Tsatsani
, Sarah O'Brien
, Steven Rushton
, Roy Sanderson
, Jon Perkins
, Seb Cotton
, Abbie Gallagher
, Elias Allara
, Clare Pearson
, David Bibby
, Gavin Dabrera
, Nicholas Ellaby
, Eileen Gallagher
, Jonathan Hubb
, Angie Lackenby
, David Lee
, Nikos Manesis
, Tamyo Mbisa
, Steven Platt
, Katherine A. Twohig
, Mari Morgan
, Alp Aydin
, David J. Baker
, Ebenezer Foster-Nyarko
, Sophie J. Prosolek
, Steven Rudder
, Chris Baxter
, Sílvia F. Carvalho
, Deborah Lavin
, Arun Mariappan
, Clara Radulescu
, Aditi Singh
, Miao Tang
, Helen Morcrette
, Nadua Bayzid
, Marius Cotic
, Carlos E. Balcazar
, Michael D. Gallagher
, Daniel Maloney
, Thomas D. Stanton
, Kathleen A. Williamson
, Robin Manley
, Michelle L. Michelsen
, Christine M. Sambles
, David J. Studholme
, Joanna Warwick-Dugdale
, Richard Eccles
, Matthew Gemmell
, Richard Gregory
, Margaret Hughes
, Charlotte Nelson
, Lucille Rainbow
, Edith E. Vamos
, Hermione J. Webster
, Mark Whitehead
, Claudia Wierzbicki
, Adrienn Angyal
, Luke R. Green
, Max Whiteley
, Iraad F. Bronner
, Ben W. Farr
, Stefanie V. Lensing
, Shane A. McCarthy
, Michael A. Quail
, Nicholas M. Redshaw
& Scott A. J. Thurston
Software and analysis tools
Will Rowe
, Amy Gaskin
, Thanh Le-Viet
, Jennifier Liddle
& Andrew Whitwham
H.S.V. and M.G. developed the analysis code, which H.S.V. implemented with input from A.W.J.; H.S.V. created most of the figures. M.S. analysed, annotated and aggregated viral genome data. N.D.M. conducted phylogeographic analyses supervised by N.G.; T.S., R.G., M.S. and H.S.V. developed the interactive spatiotemporal viewer. T.N., F.S., I.H., R.A., C.A., S.G., D.J., I.J., C.S., J.S., T.S. and M.S. analysed genomic surveillance data under the supervision of D.K., M.C., I.M. and J.C.B.; J.H. and S.F. analysed ONS data and helped with epidemiological modelling and data interpretation. E.V. analysed growth rates and helped with data interpretation. E.B. and J.P.G. supervised H.S.V. and helped with data interpretation. J.C.B. and M.G. supervised the analysis with advice from I.M.; M.G., H.S.V., M.S., N.D.M., T.S., I.M. and J.C.B. wrote the manuscript with input from all of the co-authors.
Correspondence to Jeffrey C. Barrett or Moritz Gerstung.
E.B. is a paid consultant of Oxford Nanopore.
Peer review information Nature thanks Tulio De Oliveira, Philippe Lemey and Matthew Scotch for their contribution to the peer review of this work. Peer reviewer reports are available.
Extended Data Fig. 1 SARS-CoV-2 surveillance sequencing in England between September 2020 and June 2021.
a. Local monthly coverage across 315 LTLAs. b. Weekly coverage of genomic surveillance sequencing. c. Hospitalization, case and infection fatality rates relative to ONS prevalence. Dots denote mean estimates and error bars 95% CIs.
Extended Data Fig. 2 Genomic surveillance model of total incidence and lineage-specific frequencies.
a. Cubic basis splines (top row) are convolved with the infection-to-test distribution (rows 2 and 3) and used to fit the log incidence in a LTLA and its corresponding derivatives (growth rates; bottom row). b. Example incidence (top row), logarithmic incidence with individual convolved basis functions (dashed lines, row 2), growth rate with individual spline basis derivatives (dashed lines, row 3) and resulting (case) reproduction numbers (growth rate per 5.1 d) from our approach (GenomicSurveillance) and estimates by EpiEstim (ref. 48), shifted by 10 d to approximate a case reproduction number. c. The relative frequencies of 62 different lineages are modelled using piecewise multinomial logistic regression. The linear logits are modelled to jump stochastically within 21 d prior to first observation to account for the effects of new introductions. Shown are the logits of 5 selected lineages in two different LTLAs.
Extended Data Fig. 3 Genomic surveillance model selection.
a. Model loss in terms of the ELBO objective function and the model hyperparameters α0 and α1 (see Methods). b. Model deviance (calculated as −2 × log pointwise predictive density) with respect to the model hyperparameters α0 and α1 (see Methods). c. Mean squared error (MSE) of modelled weekly proportions of highly prevalent lineages with respect to the model parameters α0 and α1 (see Methods). d. Same as in c, but for lineages exhibiting low frequencies (VOCs).
Extended Data Fig. 4 Spatiotemporal model of 71 SARS-CoV-2 lineages in 315 English LTLAs between September 2020 and June 2021.
a. Regional lineage-specific relative frequency of lineages contributing more than 50 genomes during the time period shown. Dots denote observed data, lines the fits aggregated to each region. b. Same as a, but on a log scale. c. Same data as in a, shown as stacked bar charts. Colours resemble major lineages as indicated and shadings thereof indicate sublineages. d. Same fits as in a, shown as stacked segments. e. Average growth rates for 71 SARS-CoV-2 lineages estimated in different regions in England. Dots denote median estimates and error bars 95% CIs.
Extended Data Fig. 5 Relative growth of B.1.177.
a. Lineage-specific relative frequency data in England, excluding B.1.1.7 and other VOCs/VUIs (Category Other includes: A, A.18, A.20, A.23, A.25, A.27, A.28, B, B.29, B.40, None). Colours resemble major lineages as indicated and shadings thereof indicate sublineages. b. Lineage-specific relative frequency data in Denmark, excluding B.1.1.7 and other VOCs/VUIs. Colours resemble major lineages as indicated and shadings thereof indicate sublineages.
Extended Data Fig. 6 Genomic diversity of the SARS-CoV-2 epidemic.
Shown is the entropy (blue), total number of observed Pango lineages (grey, divided by 4), as well as the proportion of B.1.1.7 (orange, right axis). The sweep of B.1.1.7 causes an intermittent decline of genomic diversity as measured by the entropy.
Extended Data Fig. 7 Global phylogenetic trees of selected VOCs/VUIs.
English surveillance and other (targeted and quarantine) samples are highlighted in orange and red, respectively.
Extended Data Fig. 8 Global phylogenetic trees of B.1.617 sublineages.
a, b and c. English surveillance and other (targeted and quarantine) samples are highlighted in orange and red, respectively. The trees of B.1.617.1 and B.1.617.2 are rooted. d. Number of UK introductions inferred by parsimony (minimum and maximum numbers) and by Thorney BEAST (95% posterior CI) for each VOC.
Supplementary Notes 1–3.
Peer Review File
Supplementary Tables
Supplementary Tables 1–4.
Source Data Fig. 1
Vöhringer, H.S., Sanderson, T., Sinnott, M. et al. Genomic reconstruction of the SARS-CoV-2 epidemic in England. Nature 600, 506–511 (2021). https://doi.org/10.1038/s41586-021-04069-y
Issue Date: 16 December 2021
\begin{document}
\titlerunning{Nonlinear stochastic evolution equations of second order with damping} \title{Nonlinear stochastic evolution equations of\newline second order with damping}
\author{ Etienne Emmrich \and David \v{S}i\v{s}ka \thanks{This work has been partially supported by the Collaborative Research Center 910, which is funded by the German Science Foundation.} } \authorrunning{E.~Emmrich, D.~\v{S}i\v{s}ka} \institute{Etienne Emmrich \at Technische Universit{\"a}t Berlin, Institut f\"ur Mathematik \\ Stra{\ss}e des 17.\ Juni 136, 10623 Berlin, Germany \\ \email{[email protected]} \and David \v{S}i\v{s}ka \at University of Edinburgh, School of Mathematics, James Clerk Maxwell Building,\\ King's Buildings, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom \\ \email{[email protected]}}
\date{28th September 2016}
\maketitle
\begin{abstract} Convergence of a full discretization of a second order stochastic evolution equation with nonlinear damping is shown and thus existence of a solution is established. The discretization scheme combines an implicit time stepping scheme with an internal approximation. Uniqueness is proved as well. \end{abstract}
\keywords{Stochastic evolution equation of second order, Monotone operator, Full discretization, Convergence, Existence, Uniqueness} \subclass{60H15, 47J35, 60H35, 65M12}
\section{Introduction} In this article, a second order evolution equation with additive and multiplicative ``noise'' is considered. Such equations were first studied by Pardoux~\cite{pardoux:thesis}. The corresponding initial value problem may be written as \begin{equation} \label{eq:0a} \ddot{u} + A\dot{u} + Bu = f + C(u,\dot{u}) \dot{W} \text{ in } (0,T), \ \dot{u}(0) = v_0, \ u(0) = u_0, \end{equation} where $\dot{W}$ is the ``noise'' and $T>0$ is given. A variety of phenomena in physical sciences and engineering can be modelled using equations of the form~\eqref{eq:0a}. If $K$ is the integral operator with $(Kw)(t) := \int_0^t w(s)ds$ for some function $w$ then the above problem is (with $\dot{u} = v$) formally equivalent to \begin{equation} \label{eq:0b} \dot{v} + Av + B\left(u_0 + Kv\right) = f + C\left(u_0 + Kv,v\right)\dot{W} \text{ in } (0,T), \ v(0) = v_0 . \end{equation}
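Written out, the equivalence is simply the integration of $\dot{u} = v$: since $u(0) = u_0$, we have
\begin{equation*}
u(t) = u_0 + \int_0^t v(s) ds = u_0 + (Kv)(t) ,
\end{equation*}
so that replacing $u$ by $u_0 + Kv$ in~\eqref{eq:0a} yields~\eqref{eq:0b}.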
To give a more precise meaning to the above problem, let $(H, (\cdot,\cdot), |\cdot|)$ be a real Hilbert space identified with its dual $H^*$ and let $(V_A,\|\cdot\|_{V_A})$ and $(V_B,\|\cdot\|_{V_B})$ be real, reflexive, separable Banach spaces that are densely and continuously embedded in $H$. The main result will require, in addition, that $V_A$ is densely and continuously embedded in $V_B$ and so \begin{equation*} V_A \hookrightarrow V_B \hookrightarrow H = H^* \hookrightarrow V_B^* \hookrightarrow V_A^* \end{equation*} with $\hookrightarrow$ denoting dense and continuous embeddings. We will use $\langle \cdot, \cdot \rangle$ to denote the duality pairing between elements of some Banach space and its dual. Moreover, let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in [0,T]},\mathbb{P})$ be a stochastic basis and let $W = (W(t))_{t\in [0,T]}$ be an infinite dimensional Wiener process adapted to the filtration $(\mathcal{F}_t)_{t\in [0,T]}$ and such that for any $t, h\geq0$ the increment $W(t+h)-W(t)$ is independent of $\mathcal{F}_t$.
The exact assumptions will be stated in Section~\ref{sec:2}. For now it suffices to say that $B:V_B\times \Omega \to V_B^*$ is a linear, bounded, symmetric and strongly positive operator. The operator $A:V_A\times \Omega \to V_A^*$ and, for $j\in \mathbb{N}$, the operators $C_j: V_B \times V_A \times \Omega \to H$ are nonlinear, jointly satisfying appropriate coercivity and monotonicity-like conditions. Furthermore, we assume that $A$ is hemicontinuous and satisfies a growth condition. We write $C = (C_j)_{j \in \mathbb{N}}$ and assume that $C$ maps $V_B \times V_A \times \Omega$ into $l^2(H)$. We consider the stochastic evolution equation \begin{equation} \begin{split} \label{eq:1} & v(t) + \int_0^t \big[Av(s) + B\big(u_0 + (Kv)(s) \big)\big] ds\\ & = v_0 + \int_0^t f(s) ds + \int_0^t C\big( u_0 + (Kv)(s) , v(s)\big) dW(s) \end{split} \end{equation} for $t \in [0,T]$, where $u_0$ and $v_0$ are given $\mathcal{F}_0$-measurable random variables that are $V_B$ and $H$-valued, respectively. The $V_A^*$-valued process $f$ is adapted to $(\mathcal{F}_t)_{t\geq 0}$ and the stochastic integral is the It\^o integral with \begin{equation*}
\int_0^t C(u(s), v(s)) dW(s) = \sum_{j=1}^{\infty} \int_0^t C_j (u(s),v(s)) dW_j(s). \end{equation*}
Stochastic partial differential equations of second order in time are an active area of research. Broadly speaking, difficulties arise from nonlinear operators, lack of damping, multiplicative noise and noise terms that are not continuous martingales as well as from regularity issues inherent to second order evolution equations. Nonlinear operators are a particular issue if they are nonlinear in the ``highest order'' term rather than a nonlinear perturbation of a linear principal part. We briefly point the reader to various papers exploring some of the above issues.
Peszat and Zabczyk~\cite{peszat:zabczyk:nonlinear:stochastic:wave} give necessary and sufficient conditions for the existence of solutions to a stochastic wave equation without damping, linear in the highest order term with nonlinear zero order term and nonlinear multiplicative noise. Marinelli and Quer-Sardanyons~\cite{marinelli:existence} prove existence of solutions for a class of semilinear stochastic wave equations driven by an additive noise term given by a possibly discontinuous square integrable martingale. Kim~\cite{kim:on:the:stochastic} proved existence and uniqueness of a solution to a semilinear stochastic wave equation with damping and additive noise. Carmona and Nualart~\cite{carmona:nualart:random} investigate the smoothness properties of the solutions of one-dimensional wave equations with nonlinear random forcing. Further work has been done regarding the smoothness of solutions, we refer the reader to Millet and Morien~\cite{millet:morien:on:a:stochastic:wave} as well as Millet and Sanz-Sol{\'e}~\cite{millet:sanz-sole:a:stochastic:wave} and the references therein.
In the deterministic case, second order evolution equations similar to (\ref{eq:0a}) have been investigated in the seminal paper of Lions and Strauss~\cite{Lions-Strauss}. This has been extended to the stochastic case by Pardoux~\cite{pardoux:thesis}. Indeed, Pardoux~\cite{pardoux:thesis} has shown existence of solutions via a Galerkin approximation and uniqueness to~\eqref{eq:1} under the assumption that the operators are deterministic and Lipschitz continuous on bounded subsets but allowing time-dependent operators. Finally, we note that Pardoux~\cite{pardoux:thesis} also covers the case of first-order-in-time stochastic evolution equations. For first-order-in-time stochastic evolution equations, we also refer the reader to Krylov and Rozovskii~\cite{krylov:rozovskii:stochastic}.
Our aim is twofold: We wish to prove convergence of a fully discrete approximation of (\ref{eq:1}) including a time discretization. As far as the authors are aware, this paper is the first to prove convergence of a full discretization of stochastic evolution equations of second order with a damping that has nonlinear principal part and a rather general multiplicative noise. Moreover, we wish to extend Pardoux's result to random operators removing the Lipschitz-type condition. See Example~\ref{example:no_lip_on_bdd_subsets} for a situation where the assumption of Lipschitz continuity on bounded subsets does not hold but the assumptions of this paper are satisfied. We show existence of solutions to~\eqref{eq:1} by proving appropriate convergence of solutions to a full discretization. Unfortunately, the randomness of the operators finally requires the assumption that $V_A$ is continuously embedded in $V_B$ (see also Remark~\ref{rem:VA}), which is not the case with Pardoux~\cite{pardoux:thesis}. The reason is the use of the standard It\^o formula for the square of the norm, see, e.g., Krylov and Rozovski{\u\i}~\cite{krylov:rozovskii:stochastic}, Gy\"ongy and Krylov~\cite{gyongy:krylov:ito:formula} or Pr\'ev\^ot and R\"ockner~\cite{prevot:rockner:concise}. It is left for future work whether the It\^{o} formula can be adapted to the general case where neither is $V_A$ embedded into $V_B$ nor is $V_B$ embedded into $V_A$. This is a rather delicate problem already for the integration by parts in the deterministic case (see again Lions and Strauss~\cite{Lions-Strauss} as well as Emmrich and Thalhammer \cite{emmrich:doubly}). Finally, we will show that two solutions are indistinguishable.
Let us now describe the full discretization. A Galerkin scheme $(V_m)_{m\in \mathbb{N}}$ for $V_A$ will provide the internal approximation. For the temporal discretization, we choose an explicit scheme for approximating the stochastic integral but otherwise we use an implicit scheme. Finally, we have to truncate the infinite dimensional noise term.
Fix $m, r, N \in \mathbb{N}$. Let $\tau := T/N$. For $n=0,1,\ldots, N$, let $t_n := n\tau$. Define $C^r := (C^r_j)_{j\in \mathbb{N}}$ with $C^r_j := C_j$ for $j=1,\ldots,r$, $C^r_j = 0$ for $j>r$ and let \begin{equation*} \Delta W^n := \left\{ \begin{array}{ll} W(t_n) - W(t_{n-1}) &\,\, \text{for } \,\,n=2,\ldots,N,\\ 0 & \,\, \text{for } \,\, n = 1. \end{array} \right. \end{equation*} For $g\in l^2(H)$, we define $g W(t) := \sum_{k\in \mathbb{N}} g_k W_k(t)$. Clearly, $\tau$, $t_n$ and $\Delta W^n$ all depend on $N$. This dependence will always be omitted in our notation. The reason for taking $\Delta W^1 = 0$ will become clear during the proof of the a priori estimate for the discrete problem. It allows one to assume that $v_0$ is an $H$-valued $\mathcal{F}_0$-measurable random variable (rather than a $V_A$-valued one). This is consistent with the case of deterministic second-order-in-time evolution equations, see Lions and Strauss~\cite{Lions-Strauss}, and the stochastic second-order-in-time evolution equations, see Pardoux~\cite{pardoux:thesis}.
We now define $(u^n)_{n=0}^N$ and $(v^n)_{n=0}^N$ which will be approximations of $u$ and $v$, respectively, such that $u(t_n) \approx u^n$ and $v(t_n) \approx v^n$. Assume that the $\mathcal{F}_0$-measurable random variables $u^0$ and $v^0$ take values in $V_m$ and are some given approximations of the initial values $u_0$ and $v_0$, respectively. Let $(f^n)_{n=1}^N$ be an approximation of $f$ with $f^n$ being an $\mathcal{F}_{t_n}$-measurable $V_A^*$-valued random variable for $n=1,\ldots,N$.
Now we can fully discretize~\eqref{eq:1}. We do this by approximating the integrands in~\eqref{eq:1} by piecewise constant processes on the time grid $(t_n)_{n=0}^N$. Effectively, the value on the right-hand side of each interval is taken when approximating the non-stochastic integrals and the value on the left-hand side of each interval is taken when approximating the It\^o stochastic integral. We define $(v^n)_{n=1}^N $ with $v^n$ being $V_m$-valued for $n=1,\ldots,N$ as the solution of \begin{equation} \begin{split} \label{eq:2b} & (v^n,\varphi) + \tau\sum_{k=1}^n\bigg\langle A v^k + B\bigg(u^0 + \tau \sum_{j=1}^k v^j\bigg),\varphi \bigg\rangle \\ & = (v^0, \varphi) + \tau \sum_{k=1}^n \langle f^k, \varphi\rangle + \sum_{k=1}^n \bigg( C^r\bigg(u^0 + \tau \sum_{j=1}^{k-1} v^j,v^{k-1}\bigg) \Delta W^k,\varphi \bigg) \end{split} \end{equation} for all $\varphi \in V_m$ and $n= 1, \ldots , N$. We can immediately see that~\eqref{eq:2b} corresponds to \begin{equation} \label{eq:2bb} \begin{split} & \bigg(\frac{v^n - v^{n-1}}{\tau}, \varphi\bigg) + \bigg\langle A v^n + B\bigg(u^0 + \tau \sum_{k=1}^n v^k\bigg),\varphi \bigg\rangle\\ & = \langle f^n, \varphi \rangle + \bigg(C^r\bigg(u^0 + \tau \sum_{k=1}^{n-1} v^k,v^{n-1}\bigg)\frac{\Delta W^n}{\tau}, \varphi\bigg) \end{split} \end{equation} for all $\varphi \in V_m$ and for $n=1,\ldots,N$. This is exactly the numerical scheme one could obtain directly from~\eqref{eq:0b}. In the case $C=0$ (i.e., the non-stochastic case) this would be an implicit Euler scheme in the ``velocity'', with the integral operator replaced by a simple quadrature. 
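Indeed, subtracting~\eqref{eq:2b} with $n$ replaced by $n-1$ from~\eqref{eq:2b} itself, all summands up to index $n-1$ cancel, leaving
\begin{equation*}
\begin{split}
& (v^n - v^{n-1},\varphi) + \tau \bigg\langle A v^n + B\bigg(u^0 + \tau \sum_{k=1}^{n} v^k\bigg),\varphi \bigg\rangle \\
& = \tau \langle f^n, \varphi\rangle + \bigg( C^r\bigg(u^0 + \tau \sum_{k=1}^{n-1} v^k, v^{n-1}\bigg) \Delta W^n, \varphi\bigg) ,
\end{split}
\end{equation*}
and dividing by $\tau$ gives~\eqref{eq:2bb}; for $n = 1$, the two formulations agree directly since empty sums vanish.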
With $u^n := u^0 + \tau\sum_{k=1}^n v^k$, we further see that~\eqref{eq:2b} is also equivalent to \begin{equation*} \begin{split} &\bigg(\frac{u^n - 2u^{n-1} + u^{n-2}}{\tau^2},\varphi\bigg) + \bigg\langle A\bigg(\frac{u^n-u^{n-1}}{\tau}\bigg) + B u^n,\varphi \bigg\rangle\\ & = \langle f^n, \varphi \rangle + \bigg(C^r\bigg(u^{n-1},\frac{u^{n-1}-u^{n-2}}{\tau}\bigg)\frac{\Delta W^n}{\tau}, \varphi \bigg) \end{split} \end{equation*} for all $\varphi \in V_m$ and for $n=1,\ldots,N$, where $u^0$ and $u^{-1}:= u^0 - \tau v^0$ are given. One could obtain this scheme directly from~\eqref{eq:0a}.
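As a purely illustrative sketch of the time stepping (not part of the analysis), consider the deterministic scalar toy case $H = V_A = V_B = \mathbb{R}$, $Av = av$, $Bu = bu$ with $a, b \geq 0$ and $C = 0$; the function and variable names below are hypothetical. Each step of the scheme then reduces to solving a scalar linear equation for $v^n$:

```python
import math

def solve_second_order(a, b, u0, v0, T, N, f=lambda t: 0.0):
    """Scheme (2bb) for the scalar toy problem u'' + a u' + b u = f(t),
    u(0) = u0, u'(0) = v0, with no noise (C = 0).

    Implicit Euler in the velocity v = u', with u^n = u^{n-1} + tau * v^n.
    Returns the grid values (u^n)_{n=0..N} and (v^n)_{n=0..N}."""
    tau = T / N
    u, v = [u0], [v0]
    for n in range(1, N + 1):
        # (v^n - v^{n-1})/tau + a v^n + b (u^{n-1} + tau v^n) = f(t_n),
        # solved exactly for v^n since everything is linear and scalar.
        rhs = v[-1] / tau + f(n * tau) - b * u[-1]
        vn = rhs / (1.0 / tau + a + b * tau)
        v.append(vn)
        u.append(u[-1] + tau * vn)
    return u, v

# Undamped oscillator u'' + u = 0 with exact solution u(t) = cos(t):
# halving the step size roughly halves the error at T = 1, consistent
# with first-order convergence of the implicit Euler discretization.
u100, _ = solve_second_order(0.0, 1.0, 1.0, 0.0, 1.0, 100)
u200, _ = solve_second_order(0.0, 1.0, 1.0, 0.0, 1.0, 200)
err100 = abs(u100[-1] - math.cos(1.0))
err200 = abs(u200[-1] - math.cos(1.0))
```

In this toy case the implicit step is solved in closed form; in the Galerkin setting of~\eqref{eq:2b}, each step instead requires solving a nonlinear equation in the finite-dimensional space $V_m$.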
Numerical schemes for deterministic evolution equations of the above type have been investigated mostly for the particular case that $V_A = V_B$. Emmrich and Thalhammer~\cite{emmrich:thalhammer:convergence} have proved weak convergence of time discretizations under the assumption that $V_A$ is continuously embedded in $V_B$. In Emmrich and Thalhammer \cite{emmrich:doubly}, weak convergence of fully discrete approximations is proved in the case when strongly continuous perturbations are added to the nonlinear principal part $A$ and the linear principal part $B$ even if $V_A$ is not embedded in $V_B$. This also generalizes the existence result of Lions and Strauss~\cite{Lions-Strauss}. The convergence results have subsequently been extended in Emmrich and \v{S}i\v{s}ka~\cite{emmrich:siska:full}. The situation for linear principal part $A$ but nonlinear, non-monotone $B$ requires a different analysis and is studied in Emmrich and \v{S}i\v{s}ka~\cite{emmrich:evolution}.
Numerical solutions of second-order-in-time stochastic partial differential equations have also been studied but for semilinear problems. Kov\'acs, Saedpanah and Larsson~\cite{kovacs:saedpanah:larsson:finite} considered a finite element approximation of the linear stochastic wave equation with additive noise using semigroup theory. Hausenblas~\cite{hausenblas:weak} demonstrated weak convergence (weak in the probabilistic sense) of numerical approximations to semilinear stochastic wave equations with additive noise. De Naurois, Jentzen and Welti proved weak convergence rates for spatial spectral approximations for an equation with multiplicative noise~\cite{deNaurois:jentzen:welti}. For results on full discretization, see also Anton, Cohen, Larsson and Wang~\cite{anton:cohen:larsson:wang}. Semigroup theory is also used by Tessitore and Zabczyk~\cite{tessitore:zabczyk:wong} to prove weak convergence of the laws for Wong--Zakai approximations to semilinear strongly damped evolution equations of second order with multiplicative noise acting on the zero-order-in-time term. Error estimates and estimates of the rate of convergence can be found, e.g., in Walsh~\cite{walsh:on:numerical} and Quer-Sardanyons and Sanz-Sol\'e~\cite{quer-sardanyons:sanz-sole:space} for particular examples governed by a linear principal part.
This paper is organized as follows. Section~\ref{sec:2} contains all the assumptions and the statement of the main results of the paper. In Section~\ref{sec:fulldisc}, we study the full discretization, prove that the fully discrete problem has a unique solution and establish a priori estimates. We use the a priori estimates and compactness arguments in Section~\ref{sec:weaklimits} to obtain a stochastic process that is the weak limit of piecewise-constant-in-time prolongations of the solutions to the discrete problem. In Section~\ref{sec:identlims}, it is shown that the weak limits satisfy the stochastic evolution equation. This finally proves convergence as well as existence of a solution. Uniqueness is then proved in Section~\ref{sec:uniq}.
\section{Statement of assumptions and results} \label{sec:2} In this section, we state the precise assumptions on the operators, we define what is meant by a solution to~\eqref{eq:1} and we give the statement of the main result of this paper. Let us start with explaining the notation.
Throughout this paper, let $c > 0$ denote a generic constant that is independent of the discretization parameters. We set $\sum_{j=1}^0 z_j = 0$ for arbitrary $z_j$. Recall that $T>0$ is given and that $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in [0,T]},\mathbb{P})$ is a stochastic basis. By this, we mean that the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ is complete, $(\mathcal{F}_t)_{t\in [0,T]}$ is a filtration such that any set of probability zero that is in $\mathcal{F}$ also belongs to $\mathcal{F}_0$ and such that $\mathcal{F}_s = \bigcap_{t>s} \mathcal{F}_t$ for all $s\in [0,T)$. Moreover, $W = (W(t))_{t\in [0,T]}$ is an infinite dimensional Wiener process adapted to $(\mathcal{F}_t)_{t\in [0,T]}$ and such that for any $t , h\geq0$ the increment $W(t+h)-W(t)$ is independent of $\mathcal{F}_t$.
For a Banach space $(X,\|\cdot\|_X)$, we denote its dual by $(X^*,\|\cdot\|_{X^*})$ and we use $\langle g, w\rangle$ to denote the duality pairing between $g\in X^*$ and $w\in X$. We will use the symbol $\rightharpoonup$ to denote weak convergence. Let $p\in[2,\infty)$ be given and let $q = \frac{p}{p-1}$ be the conjugate exponent of $p$. For a separable and reflexive Banach space $X$, we denote by $L^p(\Omega; X)$ and $L^p((0,T)\times \Omega; X)$ the standard Bochner--Lebesgue spaces (with respect to $\mathcal{F}$) and refer to Diestel and Uhl~\cite{diestel:vector} for more details. In particular, we recall that the concepts of strong measurability, weak measurability and measurability coincide since $X$ is separable (see also Amann and Escher~\cite{amann:escher}). The norms are given by \begin{equation*}
\|w\|_{L^p(\Omega;X)} := \left(\mathbb{E}\|w\|_X^p \right)^{1/p} \text{ and }
\|w\|_{L^p((0,T)\times \Omega;X)} := \left(\mathbb{E}\int_0^T \|w(t)\|_X^p dt \right)^{1/p} . \end{equation*} The duals of $L^p(\Omega; X)$ and $L^p((0,T)\times \Omega; X)$ are identified with $L^q(\Omega; X^*)$ and $L^q((0,T)\times \Omega; X^*)$, respectively. Let $\mathcal{L}^p(X)$ be the linear subspace of $L^p((0,T)\times \Omega; X)$ consisting of equivalence classes of $X$-valued stochastic processes that are measurable with respect to the progressive $\sigma$-algebra. Note that $\mathcal{L}^p(X)$ is closed.
We say that an operator $D:X\times \Omega \to X^*$ is weakly measurable with respect to some $\sigma$-algebra $\mathcal{G} \subseteq \mathcal{F}$ if the real-valued random variable $\langle Dw, z \rangle$ is $\mathcal{G}$-measurable for any $w$ and $z$ in $X$, i.e., $Dw:\Omega \to X^*$ is weakly* $\mathcal{G}$-measurable for all $w \in X$.
Recall that $(H, (\cdot,\cdot), |\cdot|)$ is a real, separable Hilbert space, identified with its dual. By $h \in l^2(H)$, we mean that $h=(h_j)_{j\in \mathbb{N}}$ with $h_j \in H$ for $j\in \mathbb{N}$ and
$\sum_{j\in \mathbb{N}} |h_j|^2 < \infty$. We define the inner product in $l^2(H)$ by $(g,h)_{l^2(H)} := \sum_{j\in\mathbb{N}} (g_j,h_j)$, where $g , h \in l^2(H)$. This induces a norm on $l^2(H)$ by $|h|_{l^2(H)} = (h,h)_{l^2(H)}^{1/2}$. Further recall that $(V_A,\|\cdot\|_{V_A})$ and $(V_B,\|\cdot\|_{V_B})$ are real, reflexive and separable Banach spaces that are densely and continuously embedded in $H$ and that the main result will require, in addition, that $V_A$ is densely and continuously embedded in $V_B$ and so \begin{equation}\label{embedding} V_A \hookrightarrow V_B \hookrightarrow H = H^* \hookrightarrow V_B^* \hookrightarrow V_A^* \end{equation} with $\hookrightarrow$ denoting dense and continuous embeddings. Our notation does not distinguish whether the duality pairing $\langle \cdot, \cdot \rangle$ is the duality pairing between $V_A$ and $V_A^*$ or $V_B$ and $V_B^*$ since in situations when both would be well defined they coincide due to~\eqref{embedding}.
Finally, we need a Galerkin scheme for $V_A$ which we denote by $(V_m)_{m\in \mathbb{N}}$. That is, we assume that for all $m\in \mathbb{N}$ we have $V_m \subseteq V_{m+1} \subset V_A$ and that $\bigcup_{m\in \mathbb{N}} V_m$ is dense in $V_A$. We assume further, without loss of generality, that the dimension of $V_m$ is $m$.
\begin{assumptionB} Let $B:V_B\times \Omega \to V_B^*$ be weakly $\mathcal{F}_0$-measurable. Assume moreover that $B$ is, almost surely, {\em linear}, {\em symmetric} and let there be $\mu_B > 0$ and $c_B > 0$ such that, almost surely, \begin{equation*}
\langle Bw ,w \rangle \geq \mu_B\|w\|^2_{V_B} \text{ and } \|Bw\|_{V_B^*} \leq c_B\|w\|_{V_B} \quad \forall w\in V_B. \end{equation*} This means that $B$ is, almost surely, {\em strongly positive} and {\em bounded}. \end{assumptionB}
Note that with this assumption we can define, for $\mathbb{P}$-almost all $\omega \in \Omega$, an inner product on $V_B$ by $(w,z)_B := \langle Bw,z \rangle$ for any $w,z \in V_B$. We will denote the norm associated with the inner product by $|\cdot|_B := (\cdot, \cdot)_B^{1/2}$. This norm is equivalent to $\|\cdot\|_{V_B}$.
\begin{assumptionAC} The operators $A:V_A\times\Omega \to V_A^*$ and $C:V_B\times V_A\times \Omega \to l^2(H)$ are weakly $\mathcal{F}_0$-measurable. Moreover, we assume that $A$ is, almost surely, {\em hemicontinuous}, i.e., there is $\Omega_0 \in \mathcal{F}_0$ with $\mathbb{P}(\Omega_0) = 0$ such that for every $\omega \in \Omega\setminus \Omega_0$ the function $\epsilon\mapsto \langle A(w+\epsilon z,\omega), v \rangle : [0,1] \to \mathbb{R}$ is continuous for any $v,w,z \in V_A$.
There is $c_A >0 $ such that, almost surely, the {\em growth condition} \begin{equation*}
\|Aw\|_{V_A^*} \leq c_A(1+\|w\|_{V_A})^{p-1} \quad \forall w \in V_A \end{equation*} is satisfied.
There are $\mu_A > 0$, $\lambda_A \geq 0$, $\lambda_B \geq 0$ and $\kappa\geq 0$ such that, almost surely, the operators $A$ and $C$ satisfy the {\em monotonicity-like} condition \begin{equation} \label{eq:orig_monotonicity}
\langle Aw - Az, w-z \rangle + \lambda_A |w-z|^2 \geq \frac{1}{2}|C(u,w) - C(v,z)|_{l^2(H)}^2 - \lambda_B|u-v|_B^2 \end{equation} for any $w,z \in V_A$ and $u,v \in V_B$ and the {\em coercivity-like} condition \begin{equation} \label{eq:orig_coercivity}
\langle Aw,w \rangle + \lambda_A|w|^2\geq \mu_A \|w\|_{V_A}^p + \frac{1}{2}|C(u,w)|_{l^2(H)}^2 - \lambda_B|u|_B^2 - \kappa \end{equation} for any $w \in V_A$ and $u \in V_B$. \end{assumptionAC}
The almost sure hemicontinuity of $A:V_A \times \Omega \to V_A^*$ together with the almost sure monotonicity of $A + \lambda_A I:V_A \times \Omega \to V_A^*$ (see~\eqref{eq:orig_monotonicity}) imply that $A$ is in fact, almost surely, demicontinuous (see also Krylov and Rozovskii~\cite{krylov:rozovskii:stochastic}).
The growth condition and coercivity from Assumption~{\em AC} imply that for any $u\in V_B$ and $w\in V_A$, \begin{equation} \label{eq:Cbdd}
|C(u,w)|_{l^2(H)}^2 \leq c(1+|u|_B^2 + |w|^2 + \|w\|_{V_A}^p). \end{equation} The monotonicity-like condition implies that $C$ is Lipschitz continuous in its first argument uniformly with respect to its second argument. Indeed for all $w\in V_A$ and all $u,v \in V_B$ we get \begin{equation*}
|C(u,w) - C(v,w)|_{l^2(H)} \leq \sqrt{2\lambda_B}|u-v|_B. \end{equation*} If the coercivity and monotonicity-like conditions are satisfied then we obtain with $\lambda := 2\max(\lambda_A, \lambda_B, \kappa)$ \begin{equation} \label{eq:mod_monotonicity}
2\langle Aw - Az, w-z \rangle + \lambda|w-z|^2 + \lambda|u-v|_B^2 \geq |C(u,w) - C(v,z)|_{l^2(H)}^2 \end{equation} and \begin{equation} \label{eq:mod_coercivity}
2\langle Aw,w \rangle + \lambda(|w|^2 + |u|_B^2 + 1) \geq 2\mu_A \|w\|_{V_A}^p + |C(u,w)|_{l^2(H)}^2. \end{equation}
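The Lipschitz continuity of $C$ in its first argument claimed above follows by taking $w=z$ in \eqref{eq:orig_monotonicity}; we record the one-line derivation:

```latex
% With w = z, the left-hand side of \eqref{eq:orig_monotonicity} vanishes:
\begin{equation*}
0 = \langle Aw - Aw, w - w \rangle + \lambda_A |w - w|^2
  \geq \tfrac{1}{2}\,|C(u,w) - C(v,w)|_{l^2(H)}^2 - \lambda_B |u-v|_B^2 ,
\end{equation*}
% which rearranges to the asserted Lipschitz estimate
\begin{equation*}
|C(u,w) - C(v,w)|_{l^2(H)} \leq \sqrt{2\lambda_B}\,|u-v|_B .
\end{equation*}
```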
In many applications, the operators $A$ and $C$ arise separately from various modelling considerations. In such a situation, it may be useful to know under which assumptions on $A$ and $C$, stated independently, (\ref{eq:orig_monotonicity}) and (\ref{eq:orig_coercivity}) hold. To that end, assume that there are $\mu_A > 0$ and $\lambda_1, \lambda_2 \ge 0$ such that, almost surely, for all $w,z \in V_A$ \begin{equation} \label{eq:mon_and_coer_of_A_only}
\langle Aw - Az , w -z \rangle + \lambda_1 |w-z|^2 \geq 0 \,\,\textrm{ and }\,\, \langle Aw, w \rangle + \lambda_2 |w|^2 \geq \mu_A \|w\|_{V_A}^p. \end{equation} Assume further that there are $\lambda_3, \lambda_4 \geq 0$ such that, almost surely, for all $u,v \in V_B$ and $w,z \in V_A$ \begin{equation*}
|C(u,w) - C(v,z)|_{l^2(H)}^2 \leq \lambda_3|u-v|_B^2 + \lambda_4|w-z|^2. \end{equation*}
Taking $v=z=0$ in the last estimate, the triangle and Young inequalities yield, with $\kappa := |C(0,0)|_{l^2(H)}^2$, \begin{equation*}
|C(u,w)|_{l^2(H)}^2 \leq 2\big( \lambda_3 |u|_B^2 + \lambda_4 |w|^2 + \kappa \big). \end{equation*} Then \eqref{eq:orig_monotonicity} and \eqref{eq:orig_coercivity} follow with a suitable choice of the constants.
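To make the suitable choice of constants explicit, one admissible (not sharp) option is recorded below:

```latex
% Monotonicity: combine the monotonicity of A + lambda_1 I with the bound
%   1/2 |C(u,w)-C(v,z)|^2 <= (lambda_3/2)|u-v|_B^2 + (lambda_4/2)|w-z|^2,
% so \eqref{eq:orig_monotonicity} holds with
\begin{equation*}
\lambda_A := \lambda_1 + \tfrac{\lambda_4}{2}, \qquad
\lambda_B := \tfrac{\lambda_3}{2} .
\end{equation*}
% Coercivity: combine the coercivity of A with
%   1/2 |C(u,w)|^2 <= lambda_3 |u|_B^2 + lambda_4 |w|^2 + kappa,
% so \eqref{eq:orig_coercivity} holds with
\begin{equation*}
\lambda_A := \lambda_2 + \lambda_4, \qquad \lambda_B := \lambda_3, \qquad
\kappa := |C(0,0)|_{l^2(H)}^2 .
\end{equation*}
% Both conditions hold simultaneously with
% lambda_A = max(lambda_1 + lambda_4/2, lambda_2 + lambda_4) and lambda_B = lambda_3.
```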
Examples of operators satisfying the above assumptions and the corresponding stochastic partial differential equations can be found in Pardoux~\cite[Part III, Ch. 3]{pardoux:thesis}. Let us present an example where the Lipschitz continuity on bounded sets required by Pardoux is not satisfied but the assumptions of this paper hold.
\begin{example} \label{example:no_lip_on_bdd_subsets} We consider a bounded domain $\mathcal{D}$ in $\mathbb{R}^d$ with smooth boundary and take $V_A=V_B=H^1_0(\mathcal{D})$, the standard Sobolev space, and $H=L^2(\mathcal{D})$. Following Emmrich~\cite{emmrich:time}, we consider $\rho:\mathbb{R}^d\to\mathbb{R}^d$ given by \begin{equation*} \rho(z) = \left\{ \begin{array}{ll}
0 & \text{if} \quad |z| = 0,\\
|z|^{-1/2}z & \text{if} \quad |z| \in (0,1),\\ z & \textrm{otherwise}. \end{array} \right. \end{equation*} It is then easy to check that $A:V_A\to V_A^*$ given by \begin{equation*} \langle A v, w \rangle = \int_\mathcal{D} \rho(\nabla v)\cdot \nabla w \, dx \end{equation*} satisfies the hemicontinuity and growth condition of Assumption AC as well as the monotonicity and coercivity condition~\eqref{eq:mon_and_coer_of_A_only}. Moreover it is possible to show that this operator $A$ does not satisfy the assumption of Lipschitz continuity on bounded subsets of Pardoux~\cite{pardoux:thesis}. \end{example}
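As a quick plausibility check (not part of the proof), the pointwise monotonicity of $\rho$ and the blow-up of the difference quotient near the origin, which is what destroys Lipschitz continuity on bounded sets, can be sampled numerically; the script below is an illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(z):
    """The truncated-power vector field rho from the example."""
    r = np.linalg.norm(z)
    if r == 0.0:
        return np.zeros_like(z)
    if r < 1.0:
        return z / np.sqrt(r)   # |rho(z)| = |z|^{1/2} for 0 < |z| < 1
    return z

# Monotonicity: (rho(a) - rho(b)) . (a - b) >= 0 on random samples
# (rho is the gradient of a convex radial potential).
for _ in range(1000):
    a = rng.normal(size=3) * rng.uniform(0, 3)
    b = rng.normal(size=3) * rng.uniform(0, 3)
    assert np.dot(rho(a) - rho(b), a - b) >= -1e-9

# Failure of Lipschitz continuity near 0: |rho(z)|/|z| = |z|^{-1/2} blows up.
ratios = [np.linalg.norm(rho(np.array([eps, 0.0, 0.0]))) / eps
          for eps in (1e-2, 1e-4, 1e-6)]
assert ratios[0] < ratios[1] < ratios[2]
```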
We say that $\tilde{z}$ is a modification of $z\in \mathcal{L}^\gamma(X)$ ($\gamma \in [1,\infty)$) if $z(t,\omega) = \tilde{z}(t,\omega)$ for $(dt \times d\mathbb{P})$-almost all $(t,\omega)$. If $X\hookrightarrow H$ then we say that $\tilde{z}$ is an $H$-valued continuous modification of $z\in \mathcal{L}^\gamma(X)$ if $t \mapsto \tilde{z}(t,\omega) : [0,T] \to H$ is continuous for almost all $\omega \in \Omega$ and $\tilde{z}$ is a modification of $z$.
We will use the following notation for stochastic integrals: Given $x\in \mathcal{L}^2(H)$ and $y\in \mathcal{L}^2(l^2(H))$, we write \[ \int_0^t (x(s), y(s)dW(s)) := \sum_{j\in\mathbb{N}}\int_0^t (x(s), y_j(s))dW_j(s). \]
\begin{definition}[Solution] \label{def:soln} Let $u_0 \in L^2(\Omega; V_B)$ and $v_0 \in L^2(\Omega; H)$ be $\mathcal{F}_0$-measurable and let $f\in \mathcal{L}^q\left({V_A}^*\right)$. Let there be $v\in \mathcal{L}^p(V_A)$ such that $u_0 + Kv \in \mathcal{L}^2(V_B)$ and moreover let there be an $H$-valued continuous modification $\tilde{v}$ of $v$. Then $v$ is said to be a {\em solution} to~\eqref{eq:1} if $\mathbb{P}$-almost surely, for all $t\in [0,T]$ and for all $z \in V_A$ \begin{equation*} \begin{split} & ( \tilde{v}(t), z ) + \int_0^t \left\langle Av(s) + B\left(u_0 + (Kv)(s)\right),z\right\rangle ds \\ & = (v_0,z) + \int_0^t \langle f(s),z\rangle ds + \int_0^t \big(z, C(u_0 + (Kv)(s),v(s)) dW(s) \big). \end{split} \end{equation*} \end{definition}
We will typically not distinguish between $\tilde{v}$ and $v$, denoting both by $v$, to simplify notation. The following result on the uniqueness of solutions to~\eqref{eq:1} will be proved in Section~\ref{sec:uniq}.
\begin{theorem}[Uniqueness of solution] \label{lemma:uniq} Let Assumptions~{\em AC} and {\em B} as well as (\ref{embedding}) hold. Let $v_1$ and $v_2$ be two solutions to~\eqref{eq:1} in the sense of Definition~\ref{def:soln}. Then \begin{equation*}
\mathbb{P}\left(\max_{t \in [0,T]}|v_1(t) - v_2(t)| = 0\right) = 1, \end{equation*} i.e., $v_1$ and $v_2$ are indistinguishable. Moreover, if we let \begin{equation*} u_1 = u_0 + Kv_1 \quad \textrm{ and } \quad u_2 = u_0 +Kv_2 \end{equation*} then \begin{equation*}
\mathbb{P}\left(\max_{t \in [0,T]}\|u_1(t) - u_2(t)\|_{V_B} = 0\right) = 1, \end{equation*} i.e., $u_1$ and $u_2$ are also indistinguishable. \end{theorem}
Consider a sequence $(m_\ell,r_\ell,N_\ell)_{\ell \in \mathbb{N}}$ such that $m_\ell\to \infty$, $r_\ell \to \infty$ and $N_\ell \to \infty$ as $\ell \to \infty$ and let $\tau_\ell = T/N_\ell$. Let $(u^0_\ell)_{\ell \in \mathbb{N}}$ be a sequence of $\mathcal{F}_0$-measurable random variables with values in $V_{m_\ell}$ such that $u^0_\ell \in L^2(\Omega;V_B)$ and $u^0_\ell \to u_0$ in $L^2(\Omega;V_B)$ as $\ell \to \infty$. Moreover, let $(v^0_\ell)_{\ell \in \mathbb{N}}$ be a sequence of $\mathcal{F}_0$-measurable random variables with values in $V_{m_\ell}$ such that $v^0_\ell \in L^2(\Omega;H)$ and $v^0_\ell \to v_0$ in $L^2(\Omega;H)$ as $\ell \to \infty$. For $f\in \mathcal{L}^q\left({V_A}^*\right)$, we use the approximation \begin{equation}\label{approx-fff} f^n := \frac{1}{\tau_\ell} \int_{t_{n-1}}^{t_n} f(t)\,dt,\quad n = 1, \ldots, N_\ell\,, \end{equation} where we recall that $t_n = n\tau_\ell$ for $n=0,\ldots,N_\ell$. Note that for readability we drop the dependence of $t_n$ and $f^n$ on $N_\ell$.
For each $(m_\ell, r_\ell, N_\ell)$, we take $(f^n)_{n=1}^{N_\ell}$ and the solution to the scheme~\eqref{eq:2b} and use this to define stochastic processes $f_\ell$, $v_\ell$ and $u_\ell$, which will be approximations of $f$, $v$ and $u$, as follows: for $n=1,\ldots,N_\ell$, let \begin{equation} \label{eq:ext1} f_\ell(t) := f^n, \, v_\ell(t) := v^n, \, u_\ell(t) := u^n \, \textrm { if } \, t\in (t_{n-1}, t_n]. \end{equation} We may set $f_\ell(0) = f^1$, $v_\ell(0) = v^1$, $u_\ell(0) = u^1$. Note that $u^n$ and $v^n$ indeed depend on $m_\ell$ and $N_\ell$.
Note that even though $v^n$ and $u^n$ are $\mathcal{F}_{t_n}$-measurable for each $n=0,1,\ldots, N_\ell$, the processes $v_\ell$ and $u_\ell$ are not $(\mathcal{F}_t)_{t\in [0,T]}$-adapted. Thus we will not be able to directly use compactness-based arguments to get weak limits that are adapted. To overcome this, we will also use the following approximations: for $n=2,\ldots,N_\ell$, let \begin{equation} \label{eq:ext2} v_\ell^-(t) := v^{n-1}, \, u_\ell^-(t) := u^{n-1}\, \textrm { if } \, t\in [t_{n-1}, t_n) \end{equation} and let $v_\ell^{-}(t) = 0$ and $u_\ell^{-}(t) = u^0$ if $t\in [0,\tau_\ell)$. We may set $v_\ell^-(T) = v^{N_\ell}$, $u_\ell^-(T) = u^{N_\ell}$.
We note that $v_\ell(t_n) = v_\ell^-(t_n) = v^n$ and $u_\ell(t_n) = u_\ell^-(t_n) = u^n$ for $n=1,\ldots,N_\ell$. If $v^n$ and $u^n$ are $\mathcal{F}_{t_n}$-measurable for each $n=0,1,\ldots, N_\ell$ then the processes $v_\ell^-$ and $u_\ell^-$ are $(\mathcal{F}_t)_{t\in [0,T]}$-adapted. For $v_\ell^-$ (and $u_\ell^-$) we will then be able to obtain weak limits that are themselves adapted processes. Later, we will show that the weak limits of $v_\ell^-$ and $v_\ell$ as well as of $u_\ell^-$ and $u_\ell$ coincide.
We now rewrite~\eqref{eq:2b} in an integral form. To that end, define $\theta_\ell^+(0) :=0$ and $\theta_\ell^+(t) := t_n$ if $t\in (t_{n-1},t_n]$ and $n=1,\ldots,N_\ell$. Then saying $(v^n)_{n=1}^N$ satisfies~\eqref{eq:2b} with $m=m_\ell$ and $\tau = \tau_\ell$ is equivalent to \begin{equation} \label{eq:scheme_continuous_formulation} \begin{split} & (v_\ell(t),\varphi) + \bigg\langle\int_0^{\theta_\ell^+(t)} (Av_\ell(s)+Bu_\ell(s)-f_\ell(s))ds,\varphi \bigg\rangle\\ & = (v^0_\ell,\varphi) + \bigg(\int_{\tau_\ell}^{\theta_\ell^+(t)} C^{r_\ell}(u_\ell^-(s),v_\ell^-(s))dW(s),\varphi\bigg) \end{split} \end{equation} for all $\varphi \in V_{m_\ell}$ and for all $t\in (0,T]$.
The following theorem is the main result of the paper. Recall that $\lambda$ arises from Assumption AC as $\lambda = 2\max(\lambda_A, \lambda_B, \kappa)$. \begin{theorem}[Existence and convergence] \label{thm:main} Let Assumptions~{\em AC} and {\em B} as well as (\ref{embedding}) hold. Let $u_0 \in L^2(\Omega; V_B)$ and $v_0 \in L^2(\Omega; H)$ be $\mathcal{F}_0$-measurable and let $f\in \mathcal{L}^q\left({V_A}^*\right)$. Then the stochastic evolution equation~\eqref{eq:1} possesses a solution $v\in \mathcal{L}^p(V_A)$ according to Definition~\ref{def:soln} with $u=u_0+Kv \in \mathcal{L}^2(V_B)$.
Furthermore, consider $(m_\ell,r_\ell,N_\ell)_{\ell \in \mathbb{N}}$ with $m_\ell\to \infty$, $r_\ell \to \infty$ and $N_\ell \to \infty$ as $\ell \to \infty$ such that $\sup_{\ell\in\mathbb{N}} \lambda \tau_\ell < 1$. Let $(u^0_\ell)_{\ell \in \mathbb{N}} \subset L^2(\Omega;V_B)$, $(v^0_\ell)_{\ell \in \mathbb{N}} \subset L^2(\Omega;H)$ be sequences of $\mathcal{F}_0$-measurable random variables with values in $V_{m_\ell}$ such that $u^0_\ell \to u_0$ in $L^2(\Omega;V_B)$ and $v^0_\ell \to v_0$ in $L^2(\Omega;H)$ as $\ell \to \infty$. Let $(f_\ell)_{\ell\in \mathbb{N}}$ be given by (\ref{approx-fff}) and (\ref{eq:ext1}). The numerical scheme \eqref{eq:scheme_continuous_formulation} then admits a unique solution with \begin{equation*} \begin{split} & u_\ell \rightharpoonup u \text{ in } L^2((0,T)\times\Omega;V_B) \text{ and } v_\ell \rightharpoonup v \text{ in } L^p((0,T)\times \Omega;V_A)\\ & u_\ell(T) \to u(T) \text{ in } L^2(\Omega;V_B) \text{ and } v_\ell(T) \to v(T) \text{ in } L^2(\Omega;H) \text{ as } \ell \to \infty . \end{split} \end{equation*} \end{theorem}
The proof can be briefly summarized as follows: We first show that the fully discretized problem has a unique solution (Theorem~\ref{thm:existence_disc}). Then we obtain a priori estimates for the fully discrete problem (Theorem~\ref{thm:disc-apriori}), which allow us to extract weakly convergent subsequences by compactness arguments (Lemma~\ref{lemma:weaklimits}). The remaining step is then to identify the weak limits of the nonlinear terms. Convergence of the full sequence of approximations (and not just of a subsequence) follows from the uniqueness result.
\begin{remark} \label{rem:VA} Our results require the assumption that $V_A \hookrightarrow V_B$. The need for this assumption arises from the use of the standard It\^o formula for the square of the norm, which also provides existence of a continuous modification. However, if $A$, $B$ and $C$ are deterministic then Pardoux~\cite[Part III, Chapter 2, Theorem 3.1]{pardoux:thesis} proves the energy equality~\eqref{eq:7} and sufficient regularity without the need to assume $V_A \hookrightarrow V_B$. It remains open whether this approach can be extended to the situation of random and time-dependent operators. \end{remark}
\section{Full discretization: existence, uniqueness and a priori estimates} \label{sec:fulldisc} In this section, we show that the full discretization~\eqref{eq:2b} has a unique solution, adapted to the filtration given, and prove an a priori estimate. The a priori estimate is essential for the proof of the main result of the paper as this allows us to use compactness arguments to extract weakly convergent subsequences from the sequence of approximate solutions.
Existence of solutions to the discrete problem will be proved by applying the following lemma. \begin{lemma} \label{lemma:consequenceofbrower}
Let $\vec{h}:\mathbb{R}^m \to \mathbb{R}^m$ be continuous. If there is $R>0$ such that $\vec{h}(\vec{v})\cdot \vec{v} \geq 0$ whenever $\|\vec{v}\|_{\mathbb{R}^m} = R$ then there exists $\vec{\bar{v}}$ satisfying $\|\vec{\bar{v}}\|_{\mathbb{R}^m} \leq R$ and $\vec{h}(\vec{\bar{v}}) = \vec{0}$. \end{lemma} \begin{proof} The lemma is proved by contradiction from Brouwer's fixed point theorem (see, e.g.,~\cite[Ch. 3, Lemma 2.1]{ggz}). \end{proof} To obtain the appropriate measurability of the solution to the discrete problem we need the following lemma, which is a modification of Gy\"ongy~\cite[Lemma 3.8]{gyongy:on:stochastic:III}. \begin{lemma} \label{lemma:measurability} Let $(S,\Sigma)$ be a measurable space. Let $\vec{f}:S\times \mathbb{R}^m \to \mathbb{R}^m$ be a function that is $\Sigma$-measurable in its first argument for every $\vec{x}\in \mathbb{R}^m$, that is continuous in its second argument for every $\alpha \in S$ and moreover such that for every $\alpha \in S$ the equation $\vec{f}(\alpha, \vec{x}) = \vec{0}$ has a unique solution $\vec{x}=\vec{g}(\alpha)$. Then $\vec{g}:S \to \mathbb{R}^m$ is $\Sigma$-measurable. \end{lemma} \begin{proof} Let $F$ be a closed set in $\mathbb{R}^m$. Then \begin{equation*}
\vec{g}^{-1}(F) := \{\alpha \in S : \vec{g}(\alpha) \in F\} = \left\{\alpha \in S : \min_{\vec{x}\in F}\|\vec{f}(\alpha, \vec{x})\|_{\mathbb{R}^m} = 0 \right\}, \end{equation*} since $F$ is closed. But since $\vec{f} = \vec{f}(\alpha,\vec{x})$ is continuous in the second argument for every $\alpha \in S$ and $\Sigma$-measurable in the first argument for every $\vec{x} \in \mathbb{R}^m$, we see that $\vec{g}^{-1}(F) \in \Sigma$. \end{proof}
Let $W^r := (W^r_j)_{j\in \mathbb{N}}$ and $\Delta W^{r,n}:= (\Delta W^{r,n}_j)_{j\in \mathbb{N}}$ with \[ W^r_j := \left\{ \begin{array}{lll}
W_j & \text{for} & j=1,\ldots,r\,, \\
0 & \text{for} & j > r \end{array} \right. \quad \text{and}\quad \Delta W^{r,n}_j := \left\{ \begin{array}{lll}
\Delta W^n_j & \text{for} & j=1,\ldots,r\,, \\
0 & \text{for} & j > r. \end{array} \right. \]
We are now ready to prove existence of solutions to the full discretization. \begin{theorem}[Existence and uniqueness for full discretization] \label{thm:existence_disc} Let $m,N,r\in \mathbb{N}$ be fixed and let Assumptions~{\em AC} and {\em B} hold. Moreover, let $\lambda\tau \leq 1$. Then, given $V_m$-valued and $\mathcal{F}_{0}$-measurable random variables $u^0,v^0$ and right-hand side $f\in \mathcal{L}^q(V_A^*)$, the fully discrete problem~\eqref{eq:2b} has a unique solution $(v^n)_{n=1}^N$ in the sense that if $(v_1^n)_{n=1}^N$ and $(v_2^n)_{n=1}^N$ both satisfy~\eqref{eq:2b} then \[
\mathbb{P}\left(\max_{n=1,\ldots,N} |v_1^n - v_2^n| = 0\right) = 1. \] Furthermore, for all $n=1,\dots,N$, the $V_m$-valued random variables $v^n$ are $\mathcal{F}_{t_n}$-measurable. \end{theorem}
\begin{proof} We prove existence and uniqueness step by step. Assume that the $V_m$-valued random variables $v^0, v^1, \ldots , v^{n-1}$ already satisfy~\eqref{eq:2b} (for all superscripts up to $n-1$). Moreover, assume that $v^k$ is $\mathcal{F}_{t_k}$-measurable for $k=1,\dots,n-1$. We will show that there is a $V_m$-valued and $\mathcal{F}_{t_n}$-measurable $v^n$ satisfying~\eqref{eq:2b}.
First recall that $u^k = u^0 + \tau\sum_{j=1}^k v^j$. So $(u^k)_{k=0}^{n-1}$ is also known. Recall that we are assuming that the dimension of $V_m$ is $m$. Let $(\varphi_i)_{i=1}^m$ be a basis for $V_m$. Then there is a one-to-one correspondence between any $w \in V_m$ and $\vec{w} = (w_1, \ldots, w_m)^T \in \mathbb{R}^m$ given by
$w = \sum_{i=1}^m w_i \varphi_i$. We use this to define a norm on $\mathbb{R}^m$ by $\|\vec{w}\|_{\mathbb{R}^m} := \|w\|_{V_A}$.
Let $\Omega' \in \mathcal{F}_0$ be such that $\mathbb{P}(\Omega') = 1$ and such that, for all $\omega \in \Omega'$, $t\mapsto \langle A(w+t z,\omega), v \rangle$ is continuous for any $v,w,z\in V_A$, the joint monotonicity-like condition and the coercivity condition on $A$ and $C$ are satisfied and $B$ is linear, symmetric and strongly positive. This is possible due to Assumptions AC and B. For $\omega \in \Omega'$ and $v\in V_m$ with corresponding $\vec{v} = (v_1,\ldots,v_m)^T \in \mathbb{R}^m$, define $\vec{h}: \Omega' \times \mathbb{R}^m \to \mathbb{R}^m$, component-wise, for $l=1,\ldots,m$, as \begin{equation*} \begin{split} \vec{h}(\omega,\vec{v})_l :={} & \frac{1}{\tau}(v - v^{n-1}(\omega),\varphi_l) + \langle A(v,\omega), \varphi_l\rangle + \langle B(u^{n-1}(\omega) + \tau v, \omega),\varphi_l \rangle \\ & - \langle f^n(\omega), \varphi_l\rangle - \left( C(u^{n-1}(\omega),v^{n-1}(\omega),\omega) \frac{\Delta W^{r,n}(\omega)}{\tau},\varphi_l\right). \end{split} \end{equation*} The first step in showing that~\eqref{eq:2b} has a solution is to show that for each $\omega \in \Omega'$ there is some $\vec{v}$ such that $\vec{h}(\omega,\vec{v}) = \vec{0}$. To that end, we would like to apply Lemma~\ref{lemma:consequenceofbrower}. We see that \begin{equation*} \begin{split} \vec{h}(\omega,\vec{v})\cdot \vec{v} ={} & \frac{1}{\tau}(v - v^{n-1}(\omega),v) + \langle A(v,\omega), v\rangle + \langle B(u^{n-1}(\omega) + \tau v,\omega),v \rangle \\ & - \langle f^n(\omega), v\rangle - \left( C(u^{n-1}(\omega),v^{n-1}(\omega),\omega) \frac{\Delta W^{r,n}(\omega)}{\tau}, v\right). \end{split} \end{equation*}
Now we wish to find a large $R(\omega) > 0$, depending also on $m$, such that if $\|v\|_{V_A} = R(\omega)$ then $\vec{h}(\omega,\vec{v})\cdot \vec{v} \geq 0$. Note that since $V_A\hookrightarrow H$, we get \begin{equation*}
(v-v^{n-1}(\omega),v) \geq |v|^2 - c|v^{n-1}(\omega)|\|v\|_{V_A}. \end{equation*} The coercivity in Assumption~{\em AC} together with Assumption~{\em B} imply \begin{equation*} \begin{split}
\vec{h}(\omega,\vec{v}) & \cdot \vec{v} \geq{} \frac{1}{\tau}(|v|^2 - c|v^{n-1}(\omega)|\|v\|_{V_A}) + \mu_A\|v\|_{V_A}^p
+ \frac{1}{2}|C(0,v,\omega)|_{l^2(H)}^2 \\
& - \lambda_A|v|^2
- \kappa - \|B(u^{n-1}(\omega),\omega)\|_{V_B^*}\|v\|_{V_B} + \tau \langle B(v,\omega), v\rangle \\
& - \|f^n(\omega)\|_{V_A^*}\|v\|_{V_A}
- |C(u^{n-1}(\omega),v^{n-1}(\omega),\omega)|_{l^2(H)}
|v|\bigg|\frac{\Delta W^{r,n}(\omega)}{\tau}\bigg|. \end{split} \end{equation*} Note that $V_m$ is finite dimensional and so there is $c_m > 0$ such that
$\|\varphi\|_{V_B} \leq c_m \|\varphi\|_{V_A}$ for all $\varphi \in V_m$. Thus, noting also that $2\lambda_A\tau \le \lambda \tau \le 1$, we find that \begin{equation*} \begin{split}
\vec{h}(\omega,\vec{v}) & \cdot \vec{v} \geq{} \|v\|_{V_A}\bigg( \mu_A\|v\|_{V_A}^{p-1}
- c|v^{n-1}(\omega)| - c_m \|B(u^{n-1}(\omega),\omega)\|_{V_B^*} \\
& - \|f^n(\omega)\|_{V_A^*} - c |C(u^{n-1}(\omega),v^{n-1}(\omega),\omega)|_{l^2(H)}\bigg|\frac{\Delta W^{r,n}(\omega)}{\tau}\bigg| \bigg) - \kappa. \end{split} \end{equation*} Now choose $R(\omega)$ large such that $R(\omega) \geq \kappa$ and also \begin{equation*} \begin{split}
& \mu_A R(\omega)^{p-1} - c|v^{n-1}(\omega)| - c_m \|B(u^{n-1}(\omega),\omega)\|_{V_B^*}
- \|f^n(\omega)\|_{V_A^*} \\
& - c |C(u^{n-1}(\omega),v^{n-1}(\omega),\omega)|_{l^2(H)}
\bigg|\frac{\Delta W^{r,n}(\omega)}{\tau}\bigg| \geq 1. \end{split} \end{equation*}
Then, if $\|v\|_{V_A} = R(\omega)$, we have $\vec{h}(\omega,\vec{v})\cdot \vec{v} \geq 0$.
Note that $\omega \in \Omega'$ and on this set we have linearity and boundedness of $B$ and demicontinuity of $A$ (this follows from the monotonicity-like assumption on $A$ and the hemicontinuity assumption on $A$). Thus the function $\vec{h}(\omega,\cdot)$ is continuous and Lemma~\ref{lemma:consequenceofbrower} guarantees existence of $\vec{v}$ such that $\vec{h}(\omega,\vec{v})=\vec{0}$.
Next we show that the zero of $\vec{h}(\omega,\cdot)$ is unique. Assume that there are two distinct $\vec{v}_1$ and $\vec{v}_2$ such that $\vec{h}(\omega,\vec{v}_1) = \vec{0}$ and $\vec{h}(\omega,\vec{v}_2) = \vec{0}$. Then \begin{align*}
0 = & \tau \left( \vec{h}(\omega, \vec{v}_1) - \vec{h}(\omega, \vec{v}_2) ,
\vec{v}_1 - \vec{v}_2 \right) =
|v_1 - v_2|^2 \\ & + \tau \langle A(v_1,\omega) - A(v_2,\omega), v_1-v_2\rangle + \tau^2\langle B(v_1,\omega) - B(v_2,\omega) , v_1-v_2 \rangle . \end{align*}
We recall that (\ref{eq:orig_monotonicity}) implies the monotonicity of $A + \lambda_A I$ and that $B$ is strongly positive. This yields \begin{align*}
0 \ge |v_1 - v_2|^2 - \lambda_A \tau |v_1 - v_2|^2 + \mu_B \tau^2 \|v_1 - v_2\|_{V_B}^2 , \end{align*} which shows that $v_1$ and $v_2$ cannot be distinct since $\lambda_A \tau \le 1/2$. Hence the zero to $\vec{h}(\omega,\cdot)$ is unique. Let $v^n(\omega) := v$ for $\omega \in \Omega'$ and $v^n(\omega) = 0$ for $\omega \in \Omega\setminus \Omega'$. By Lemma~\ref{lemma:measurability}, we see that $v^n$ is $\mathcal{F}_{t_n}$-measurable. \end{proof}
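The zero-finding step in the proof can be illustrated in the simplest possible setting: a toy sketch with $m=1$ and hypothetical monotone choices $A(v)=v^3$ (so $p=4$) and $B(u)=u$ (linear, symmetric, strongly positive), with fixed numbers standing in for $f^n$ and the noise increment. Since the scalar residual is continuous and strictly increasing, bisection locates its unique zero:

```python
# One implicit time step of the scheme in a scalar toy model (m = 1).
# All operator choices and data below are hypothetical illustrations.
tau, v_prev, u_prev, f, dW = 0.1, 0.5, 1.0, 0.2, 0.05

def h(v):
    # Discrete residual whose zero defines v^n, cf. the function h in the proof:
    # (v - v^{n-1})/tau + A(v) + B(u^{n-1} + tau v) - f^n - C dW/tau with C = 1.
    return (v - v_prev) / tau + v**3 + (u_prev + tau * v) - f - dW / tau

# h is continuous and strictly increasing, so bisection finds its unique zero.
lo, hi = -100.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if h(mid) > 0:
        hi = mid
    else:
        lo = mid
v_new = 0.5 * (lo + hi)
u_new = u_prev + tau * v_new     # the update u^n = u^{n-1} + tau v^n
assert abs(h(v_new)) < 1e-8
```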
Now we need to obtain the a priori estimate.
\begin{theorem}[Discrete a~priori estimates] \label{thm:disc-apriori} Let $m,N,r\in \mathbb{N}$ be fixed and let Assumptions~{\em AC} and~{\em B} hold. Moreover, for $f\in \mathcal{L}^q(V_A^*)$ let $(f^n)_{n=1}^N$ be given by (\ref{approx-fff}) and let $u^0$ and $v^0$ be $V_m$-valued and $\mathcal{F}_0$-measurable and such that $u^0 \in L^2(\Omega; V_B)$ and $v^0 \in L^2(\Omega; H)$. Then for all $n = 1,\ldots, N$ \begin{equation} \label{eq:apriori1} \begin{split}
& \mathbb{E}\bigg[|v^n|^2 + |u^n|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2\bigg]\\
& \leq \mathbb{E}\bigg[|v^0|^2 + |u^0|_B^2
+ 2\tau \sum_{j=1}^n \langle f^j - Av^j,v^j\rangle + \tau \sum_{j=1}^n|C^r(u^j,v^j)|_{l^2(H)}^2\bigg]. \end{split} \end{equation} Moreover, if $\lambda \tau < 1$ then \begin{equation} \label{eq:apriori2} \begin{split}
& \mathbb{E}\bigg[|v^n|^2 + |u^n|_B^2 + \mu_A \tau \sum_{j=1}^n\|v^j\|_{V_A}^p + \sum_{j=1}^n |u^j - u^{j-1}|_B^2\bigg]\\
& \leq c e^{\lambda T(1-\lambda \tau)^{-1}} \left( \mathbb{E}\bigg[|v^0|^2 + |u^0|_B^2\bigg] + \|f\|_{L^q((0,T)\times \Omega;V_A^*)}^q + T \right). \end{split} \end{equation} \end{theorem} \begin{proof} By taking $\varphi = v^j$ in \eqref{eq:2bb} and using the relation \begin{equation*}
(a-b , a) = \frac{1}{2}(|a|^2 - |b|^2 + |a-b|^2) , \end{equation*} we get, for $j=1,\ldots,N$, \begin{equation} \label{eq:apriori_est_proof_1} \begin{split}
& \frac{1}{2\tau}\big(|v^j|^2 -|v^{j-1}|^2 +|v^j-v^{j-1}|^2\big) + \langle Av^j + Bu^j, v^j\rangle \\ & = \langle f^j, v^j \rangle + \bigg( C (u^{j-1}, v^{j-1})\frac{\Delta W^{r,j}}{\tau}, v^j \bigg). \end{split} \end{equation} We note that $\langle Bu^j, v^j \rangle = (u^j,v^j)_B$ and so \begin{equation*} 2\tau \sum_{j=1}^n (u^j,v^j)_B =
2 \sum_{j=1}^n (u^j,u^j - u^{j-1})_B
= |u^n|_B^2 - |u^0|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2. \end{equation*} Thus, after multiplying by $2\tau$ and summing up from $j=1$ to $n$ in~\eqref{eq:apriori_est_proof_1}, we find \begin{equation} \label{eq:apriori_est_proof_2} \begin{split} &
|v^n|^2 + \sum_{j=1}^n|v^j - v^{j-1}|^2 + |u^n|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2 + 2\tau \sum_{j=1}^n \langle Av^j, v^j \rangle \\
& = |v^0|^2 + |u^0|_B^2 + 2\tau \sum_{j=1}^n \langle f^j ,v^j \rangle + 2 \sum_{j=1}^n ( C (u^{j-1}, v^{j-1})\Delta W^{r,j}, v^j ). \end{split} \end{equation} Using Cauchy--Schwarz's and Young's inequalities, we obtain that \begin{equation*} \begin{split} & ( C (u^{j-1}, v^{j-1}) \Delta W^{r,j}, v^j )\\ & = ( C (u^{j-1}, v^{j-1}) \Delta W^{r,j}, v^{j-1} ) + ( C (u^{j-1}, v^{j-1}) \Delta W^{r,j}, v^j - v^{j-1} ) \\
& \leq ( C (u^{j-1}, v^{j-1}) \Delta W^{r,j} , v^{j-1} ) + \frac{1}{2}|C (u^{j-1}, v^{j-1}) \Delta W^{r,j}|^2 + \frac{1}{2}|v^j - v^{j-1}|^2. \end{split} \end{equation*} By the assumption on $(\mathcal{F}_t)$ and $W$, $\Delta W^{r,j}$ is independent of $\mathcal{F}_{t_{j-1}}$ and hence \begin{equation*} \mathbb{E} ( C(u^{j-1}, v^{j-1}) \Delta W^{r,j} , v^{j-1} ) = 0. \end{equation*} Furthermore, a straightforward calculation shows that \begin{equation*}
\mathbb{E} |C (u^{j-1}, v^{j-1}) \Delta W^{r,j}|^2 = \left\{ \begin{array}{ll} 0 & \text{ if } j = 1,\\
\tau \mathbb{E} |C^r (u^{j-1}, v^{j-1})|^2_{l^2(H)} & \,\,\textrm{if} \,\, j = 2,\ldots, N. \end{array} \right. \end{equation*} Using this and taking expectation in~\eqref{eq:apriori_est_proof_2} leads to \begin{equation*} \begin{split}
& \mathbb{E}\bigg[|v^n|^2 + |u^n|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2\bigg]\\
& \leq \mathbb{E}\bigg[|v^0|^2 + |u^0|_B^2
+ 2\tau \sum_{j=1}^n \langle f^j - Av^j,v^j\rangle + \tau \sum_{j=2}^n|C^r(u^{j-1},v^{j-1})|_{l^2(H)}^2\bigg]. \end{split} \end{equation*} At this point, we only have to observe that \begin{equation*}
\sum_{j=2}^n|C^r(u^{j-1},v^{j-1})|_{l^2(H)}^2 \leq \sum_{j=1}^n|C^r(u^j,v^j)|_{l^2(H)}^2 \end{equation*} to obtain the first claim of the theorem.
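The summation identity for the $B$-inner product used above, $2\sum_{j=1}^n (u^j,u^j-u^{j-1})_B = |u^n|_B^2 - |u^0|_B^2 + \sum_{j=1}^n |u^j-u^{j-1}|_B^2$, can be checked numerically with a random symmetric positive definite matrix standing in for $B$; the following is an illustrative sketch, not part of the argument:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 10

# Random symmetric positive definite matrix playing the role of (w, z)_B.
M = rng.normal(size=(d, d))
B = M @ M.T + d * np.eye(d)
ip = lambda w, z: w @ B @ z

u = rng.normal(size=(n + 1, d))  # stand-ins for u^0, ..., u^n

lhs = 2 * sum(ip(u[j], u[j] - u[j - 1]) for j in range(1, n + 1))
rhs = (ip(u[n], u[n]) - ip(u[0], u[0])
       + sum(ip(u[j] - u[j - 1], u[j] - u[j - 1]) for j in range(1, n + 1)))
assert abs(lhs - rhs) < 1e-8   # telescoping identity 2(a, a-b) = |a|^2 - |b|^2 + |a-b|^2
```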
Now we apply the coercivity condition in Assumption~{\em AC} and~\eqref{eq:mod_coercivity} to get, for any $j=1,\ldots, N$, \begin{equation*}
-2\langle Av^j, v^j \rangle \leq - 2 \mu_A\|v^j\|_{V_A}^p - |C(u^j, v^j)|_{l^2(H)}^2 + \lambda |v^j|^2 + \lambda|u^j|_B^2 + \lambda. \end{equation*} Thus, again with Young's inequality, we find \begin{equation*} \begin{split}
& \mathbb{E} \bigg[|v^n|^2 + |u^n|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2 + \mu_A \tau \sum_{j=1}^n \|v^j\|_{V_A}^p \bigg] \\
&\leq \mathbb{E}\bigg[|v^0|^2 + |u^0|_B^2
+ c\tau \sum_{j=1}^n \|f^j\|_{V_A^*}^q + \lambda \tau\sum_{j=1}^n(1+|v^j|^2 + |u^j|_B^2)\bigg]. \end{split} \end{equation*} Then, since $\lambda \tau < 1$, \begin{equation*} \begin{split}
& \mathbb{E} \bigg[|v^n|^2 + |u^n|_B^2 + \sum_{j=1}^n |u^j - u^{j-1}|_B^2 + \mu_A \tau \sum_{j=1}^n \|v^j\|_{V_A}^p \bigg] \\
& \leq \frac{1}{1-\lambda \tau}\mathbb{E}\bigg[|v^0|^2 + |u^0|_B^2
+ c\tau \sum_{j=1}^n \|f^j\|_{V_A^*}^q + \lambda \tau\sum_{j=1}^{n-1}(|v^j|^2 + |u^j|_B^2) + \lambda T \bigg]. \end{split} \end{equation*} Since $f\in \mathcal{L}^q(V_A^*)$, we have \begin{equation*}
\mathbb{E}\bigg[ \tau \sum_{j=1}^N\|f^j\|_{V_A^*}^q\bigg] \leq \mathbb{E} \int_0^T \|f(t)\|_{V_A^*}^q dt = \|f\|_{L^q((0,T)\times \Omega; V_A^*)}^q. \end{equation*} Finally, we can apply a discrete Gronwall lemma to obtain the second claim of the theorem and thus conclude the proof. \end{proof}
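The discrete Gronwall lemma invoked at the end states, in the form needed here, that $a_n \le \alpha + \beta \sum_{j=0}^{n-1} a_j$ for all $n$ implies $a_n \le \alpha(1+\beta)^n \le \alpha e^{\beta n}$. A numerical sanity check on the extremal (equality) sequence, with hypothetical constants:

```python
import math

# Hypothetical constants; a_n is built to satisfy the Gronwall hypothesis
# a_n <= alpha + beta * sum_{j<n} a_j with equality (the extremal case),
# which gives a_n = alpha * (1 + beta)^n exactly.
alpha, beta, N = 2.0, 0.3, 20

a = []
for n in range(N):
    a.append(alpha + beta * sum(a))

# The discrete Gronwall bound: a_n <= alpha (1+beta)^n <= alpha e^{beta n}.
for n, an in enumerate(a):
    assert an <= alpha * (1.0 + beta) ** n + 1e-9
    assert an <= alpha * math.exp(beta * n) + 1e-9
```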
\section{Weak limits from compactness} \label{sec:weaklimits} In this section, we consider a sequence of approximate problems (\ref{eq:scheme_continuous_formulation}) and use compactness arguments and the a priori estimate of Theorem~\ref{thm:disc-apriori} to show that weak limits of the piecewise-constant-in-time prolongations of the fully discrete approximate solutions exist and that they satisfy an equation closely resembling~\eqref{eq:1}.
Recall that we have constructed $v_\ell^-, v_\ell$ and $u_\ell^-, u_\ell$ in~\eqref{eq:ext1} and~\eqref{eq:ext2} by interpolating the solution of the fully discrete problem~\eqref{eq:2b}. The following corollary is a direct consequence of the a priori estimates of Theorem~\ref{thm:disc-apriori}. \begin{corollary} \label{corollary:to_apriori_est} Let the assumptions of Theorem~\ref{thm:main} be fulfilled. Then \begin{equation} \label{eq:corollary_to_apriori_est_1} \begin{split}
& \sup_{t\in [0,T]}\mathbb{E}|v_\ell^-(t)|^2 \leq c, \,\, \sup_{t\in [0,T]}\mathbb{E}|u_\ell^-(t)|_B^2 \leq c\,\,\textrm{ and }\,\, \mathbb{E}\int_0^T \|v_\ell^-(t)\|_{V_A}^p dt \leq c,\\
& \sup_{t\in [0,T]}\mathbb{E}|v_\ell(t)|^2 \leq c, \,\, \sup_{t\in [0,T]}\mathbb{E}|u_\ell(t)|_B^2 \leq c\,\,\textrm{ and }\,\, \mathbb{E}\int_0^T \|v_\ell(t)\|_{V_A}^p dt \leq c. \end{split} \end{equation} Furthermore, \begin{equation} \label{eq:corollary_to_apriori_est_2} \begin{split}
& \mathbb{E}\int_0^T \|Av_\ell^-(t)\|_{V_A^*}^q \, dt \leq c,\quad \mathbb{E}\int_0^T \|Av_\ell(t)\|_{V_A^*}^q \, dt \leq c,\\
&\mathbb{E}\int_0^T \|Bu_\ell(t)\|_{V_B^*}^2 \, dt \leq c,\\
& \mathbb{E}\int_0^T |C(u_\ell^-(t),v_\ell^-(t))|_{l^2(H)}^2\, dt \leq c, \quad \mathbb{E}\int_0^T |C(u_\ell(t),v_\ell(t))|_{l^2(H)}^2\, dt \leq c. \\ \end{split} \end{equation*} Finally, \begin{equation} \label{eq:corollary_to_apriori_est_3}
\mathbb{E}\int_0^T |u_\ell(t) - u_\ell^-(t)|_B^2 \,dt \leq c\tau_\ell . \end{equation} \end{corollary}
\begin{proof} In view of the assumptions, the right-hand side of (\ref{eq:apriori2}) is uniformly bounded with respect to $\ell$. This immediately implies~\eqref{eq:corollary_to_apriori_est_1}. The assumptions on the growth of $A$ and $B$ together with \eqref{eq:Cbdd} and the first part of the corollary imply~\eqref{eq:corollary_to_apriori_est_2}. Finally, \eqref{eq:corollary_to_apriori_est_3} is a consequence of~\eqref{eq:apriori2} and the observation that \begin{equation*}
\mathbb{E}\int_0^T |u_\ell(t) - u_\ell^-(t)|_B^2\,dt = \tau_\ell \mathbb{E}\sum_{k=1}^{N_\ell} |u^k - u^{k-1}|_B^2. \end{equation*} \end{proof}
We will need the following lemma to match the limits of the approximations $v_\ell$ of $v$ with their ``delayed'' and progressively measurable counterparts $v_\ell^-$, see also Gy\"{o}ngy and Millet~\cite{gyongy:millet:on:discretization}.
\begin{lemma} \label{lemma:indent_with_steklov} Let $X$ be a separable and reflexive Banach space and let $\bar{p} \in (1,\infty)$. Consider $\left( (x^n_\ell)_{n=0}^{N_\ell} \right)_{\ell\in \mathbb{N}}$ with $x^n_\ell \in L^{\bar{p}}(\Omega;X)$ for all $n = 0, 1, \ldots , N_\ell$ and $\ell \in \mathbb{N}$. Consider the piecewise-constant-in-time processes $x_\ell$ and $x_\ell^-$ with $x_\ell(t_n) = x_\ell^-(t_n) = x^n_\ell$ and \begin{equation*} x_\ell(t) = x^n_\ell \,\,\textrm { if }\,\, t\in (t_{n-1},t_n) \,\, \textrm{ and } \,\, x_\ell^-(t) = x^{n-1}_\ell \,\,\textrm { if }\,\, t\in (t_{n-1},t_n) \end{equation*} for $n=1,\ldots,N_\ell$, $\ell\in \mathbb{N}$. Assume that $(x_\ell)_{\ell\in\mathbb{N}}$ and $(x_\ell^-)_{\ell\in\mathbb{N}}$ are bounded in $L^{\bar{p}}((0,T)\times\Omega; X)$. Then there is a subsequence denoted by $\ell'$ and $x , x^- \in L^{\bar{p}}((0,T)\times\Omega; X)$ such that $x_{\ell'} \rightharpoonup {x}$ and $x_{\ell'}^- \rightharpoonup x^-$ in $L^{\bar{p}}((0,T)\times\Omega; X)$ as $\ell' \to \infty$ with $x = x^-$. \end{lemma}
\begin{proof} The existence of a subsequence and of $x, x^- \in L^{\bar{p}}((0,T)\times\Omega; X)$ such that $x_{\ell'} \rightharpoonup {x}$ and $x_{\ell'}^- \rightharpoonup x^-$ in $L^{\bar{p}}((0,T)\times\Omega; X)$ as $\ell' \to \infty$ follows from standard compactness arguments since $L^{\bar{p}}((0,T)\times\Omega; X)$ is reflexive. It remains to show that $x = x^-$.
To that end, we will employ the averaging operator $S_\ell:L^{\bar{q}}((0,T)\times \Omega;X^*) \to L^{\bar{q}}((0,T)\times \Omega;X^*)$ ($1/\bar{p} + 1/\bar{q} = 1$) defined by \begin{equation*} (S_\ell y)(t) := \left\{ \begin{array}{ll} \displaystyle \frac{1}{\tau_\ell}\int_{\theta_\ell^+(t)}^{\theta_\ell^+(t+\tau_\ell)} y(s)ds & \ \textrm{ if } \ t \in [0,T-\tau_\ell] , \\ 0 & \ \textrm { otherwise. } \end{array} \right. \end{equation*} Standard arguments show that $S_\ell y \to y$ in $L^{\bar{q}}((0,T)\times \Omega;X^*)$ as $\ell \to \infty$ for every $y \in L^{\bar{q}}((0,T)\times \Omega;X^*)$.
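Let us briefly sketch this standard argument. By Jensen's inequality, the operators $S_\ell$ are uniformly bounded,
\[
\|S_\ell y\|_{L^{\bar{q}}((0,T)\times \Omega;X^*)} \leq c\, \|y\|_{L^{\bar{q}}((0,T)\times \Omega;X^*)}
\]
with $c$ independent of $\ell$. For $y$ with paths continuous in time (a dense subset of $L^{\bar{q}}((0,T)\times \Omega;X^*)$), $(S_\ell y)(t)$ is an average of $y$ over an interval of length $\tau_\ell$ at distance at most $2\tau_\ell$ from $t$, so $S_\ell y \to y$ pointwise and, by dominated convergence, in $L^{\bar{q}}((0,T)\times \Omega;X^*)$. The uniform bound then extends the convergence to all $y$ by a density argument.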
Let $y\in L^{\bar{q}}((0,T)\times \Omega;X^*)$. A short calculation then reveals that \begin{equation} \label{eq:calc_with_S} \int_0^T \langle(S_\ell y)(t), x_\ell(t)\rangle dt = \int_{\tau_\ell}^T \langle y(t),x_\ell^-(t)\rangle dt \end{equation} and hence \begin{equation*} \begin{split} & \mathbb{E}\int_0^T \langle y(t), x(t) - x^-(t)\rangle \, dt = \mathbb{E}\int_0^T \langle y(t), x(t) - x_{\ell'}^-(t)\rangle \,dt \\ & + \mathbb{E}\int_0^T \langle y(t),x_{\ell'}^-(t) - x_{\ell'}(t)\rangle\, dt
+ \mathbb{E}\int_0^T \langle y(t),x_{\ell'}(t) - x^-(t)\rangle\,dt. \end{split} \end{equation*} The first and last integral on the right-hand side converge to $0$ as $\ell'\to \infty$. We observe that due to~\eqref{eq:calc_with_S} \begin{equation*} \begin{split} \mathbb{E}\int_0^T \langle y(t),x_{\ell'}^-(t) - x_{\ell'}(t)\rangle\,dt ={}& \mathbb{E}\int_0^{\tau_\ell} \langle y(t), x_{\ell'}^-(t)\rangle\, dt \\ & + \mathbb{E} \int_0^T \langle(S_{\ell'}y)(t) - y(t),x_{\ell'}(t)\rangle\, dt. \end{split} \end{equation*} The first integral on the right-hand side converges to $0$ since $\tau_\ell \to 0$ and since $(x_{\ell'}^-)_{\ell\in\mathbb{N}}$ is bounded in $L^{\bar{p}}((0,T)\times \Omega;X)$. The second integral on the right-hand side converges to $0$ since $S_{\ell'} y \to y$ in $L^{\bar{q}}((0,T)\times \Omega;X^*)$ as $\ell' \to \infty$ and since $(x_{\ell'})_{\ell\in\mathbb{N}}$ is bounded in $L^{\bar{p}}((0,T)\times \Omega;X)$. This finally shows that $x = x^-$ in $L^{\bar{p}}((0,T)\times \Omega;X)$. \end{proof}
\begin{lemma} \label{lemma:weaklimits} Let the assumptions of Theorem~\ref{thm:main} be fulfilled. Then there is a subsequence denoted by $\ell'$ such that: \begin{enumerate}[(i)] \item There is $v\in \mathcal{L}^p(V_A)$ such that $v_{\ell'}^- \rightharpoonup v$ and $v_{\ell'} \rightharpoonup v$ in $L^p((0,T)\times \Omega; V_A)$. There is $\xi \in L^2(\Omega;H)$ such that $v_{\ell'}^-(T) = v_{\ell'}(T) \rightharpoonup \xi$ in $L^2(\Omega;H)$ as $\ell' \to \infty$. \item There is $u\in \mathcal{L}^2(V_B)$ such that $u_{\ell'}^- \rightharpoonup u$ and $u_{\ell'} \rightharpoonup u$ in $L^2((0,T)\times \Omega;V_B)$ as $\ell'\to \infty$. Furthermore, $u - u_0 = Kv$ in $\mathcal{L}^p(V_A)$ and
the paths of $u-u_0$ are absolutely continuous. Finally, $u_{\ell'}^-(T) = u_{\ell'}(T)\rightharpoonup u(T)$ in $L^2(\Omega; V_B)$ and $u(0) = u_0$. \item There is $a \in \mathcal{L}^q(V_A^*)$ such that $Av_{\ell'} \rightharpoonup a$ in $L^q((0,T)\times \Omega; V_A^*)$. There is $\bar{c} \in \mathcal{L}^2(l^2(H))$ such that $C^{r_{\ell'}}(u_{\ell'}^-,v_{\ell'}^-)$, $C(u_{\ell'},v_{\ell'})$ and $C^{r_{\ell'}}(u_{\ell'},v_{\ell'})$ all converge weakly to $\bar{c}$ in $L^2((0,T)\times \Omega; l^2(H))$ as $\ell' \to \infty$. \end{enumerate} \end{lemma} \begin{proof} We begin by observing that $L^p((0,T)\times\Omega;V_A)$, $\mathcal{L}^p(V_A)$ and $L^2(\Omega;H)$ are reflexive. Then, due to Corollary~\ref{corollary:to_apriori_est} and due to e.g. Br\'ezis~\cite[Theorem 3.18]{brezis:functional}, there are $v \in L^p((0,T)\times \Omega;V_A)$, ${v}^- \in \mathcal{L}^p(V_A)$ and $\xi \in L^2(\Omega; H)$ and a subsequence denoted by $\ell'$ such that $v_{\ell'}^- \rightharpoonup v^-$ and $v_{\ell'} \rightharpoonup {v}$ in $L^p((0,T)\times \Omega; V_A)$ as well as $v_{\ell'}(T) \rightharpoonup \xi$ in $L^2(\Omega; H)$ as $\ell'\to\infty$. To complete the proof of the first statement, we simply need to apply Lemma~\ref{lemma:indent_with_steklov} to see that $v=v^-$.
Using the same argument as in the first part of the proof, we obtain $u_{\ell'}^-\rightharpoonup u$ and $u_{\ell'}\rightharpoonup u$ in $L^2((0,T)\times \Omega;V_B)$ with $u\in \mathcal{L}^2(V_B)$ as well as $u_{\ell'}(T) \rightharpoonup \eta$ with $\eta \in L^2(\Omega; V_B)$ as $\ell'\to\infty$. Moreover, \eqref{eq:corollary_to_apriori_est_3} implies that \begin{equation*}
\|u_\ell - u_\ell^-\|_{L^2((0,T)\times \Omega; V_B)} \to 0 \text{ as } \ell \to \infty , \end{equation*} which also shows that the weak limits of $u_\ell$ and $u_\ell^-$ coincide.
Now we would like to show that $u-u_0 = Kv$. A straightforward calculation shows that \begin{equation*} u_{\ell} - u^0_{\ell} = Kv_{\ell} + e_{\ell} , \text{ where } e_\ell(t):= \int_t^{\theta_\ell^+(t)}v_\ell(s)ds. \end{equation*} Another straightforward calculation also shows that $Kv_{\ell'} \rightharpoonup Kv$ in $L^p((0,T)\times \Omega;V_A)$ since $v_{\ell'} \rightharpoonup v$ in $L^p((0,T)\times \Omega;V_A)$ as $\ell'\to\infty$. Due to Theorem~\ref{thm:disc-apriori}, we have \begin{equation*} \begin{split}
\|e_\ell\|_{{L}^p((0,T)\times \Omega ; V_A)}^p
& = \mathbb{E}\int_0^T \bigg\|\int_t^{\theta_\ell^+(t)}v_\ell(s)ds \bigg\|_{V_A}^p dt\\
& = \mathbb{E}\sum_{j=1}^{N_\ell} \int_{t_{j-1}}^{t_j} (t_j - t)^p \|v^{j}\|_{V_A}^p dt\\
& \le \tau_\ell^p \mathbb{E} \tau_\ell\sum_{j=1}^{N_\ell}\|v^{j}\|_{V_A}^p \leq c\tau_\ell^p \to 0 \text{ as } \ell \to \infty. \end{split} \end{equation*} It follows that \begin{equation*} u_{\ell'} - u^0_{\ell'} = Kv_{\ell'} + e_{\ell'}\rightharpoonup Kv \end{equation*} in $L^p((0,T)\times \Omega;V_A)$ as $\ell'\to\infty$, which shows that $u-u_0 = Kv$ in view of $u_{\ell'} \rightharpoonup u$ in $L^2((0,T)\times \Omega;V_B)$ as $\ell'\to\infty$ and $u^0_\ell \to u_0$ in $L^2(\Omega;V_B)$ as $\ell\to\infty$.
Hence almost all paths of $u-u_0$ are absolutely continuous as functions mapping $[0,T]$ into $V_A$. Moreover, $u(0) = u_0$ since $(Kv)(0) = 0$.
To complete the proof of the second statement of the lemma, we have to show that $\eta = u(T)$. Again, a straightforward calculation shows that $(Kv_{\ell'})(T) \rightharpoonup (Kv)(T)$ in $L^p(\Omega;V_A)$ as $\ell'\to\infty$ since for all $g \in L^q(\Omega;V_A^*)$ \begin{equation*} \mathbb{E} \,\langle g, (Kv_{\ell'})(T) - (Kv)(T)\rangle = \mathbb{E} \int_0^T \langle g , v_{\ell'}(t) - v(t) \rangle dt \end{equation*} and since $v_{\ell'} \rightharpoonup v$ in $L^p((0,T)\times \Omega;V_A)$ as $\ell'\to\infty$. Therefore, we find that $\eta - u_0 = (Kv)(T) = u(T) - u_0$.
The second part of Corollary~\ref{corollary:to_apriori_est} (see (\ref{eq:corollary_to_apriori_est_2})) implies (iii) with the same arguments as before. In particular, the weak limits of $Av_{\ell'}^-$ and of $C^{r_{\ell'}}(u_{\ell'}^-, v_{\ell'}^-)$ are progressively measurable and thus $a\in \mathcal{L}^q(V_A^*)$ as well as $\bar{c}\in \mathcal{L}^2(l^2(H))$. Indeed,~\eqref{eq:corollary_to_apriori_est_2} implies that \[
\sum_{j=r_{\ell'}}^\infty \mathbb{E}\int_0^T |C_j(u_{\ell'}, v_{\ell'})|^2 dt \to 0 \] as $\ell'\to \infty$. This in turn implies that \[
\|C^{r_{\ell'}}(u_{\ell'}, v_{\ell'})-C(u_{\ell'}, v_{\ell'})\|_{L^2((0,T)\times \Omega;l^2(H))} \to 0. \] Using this observation allows us to show that the weak limits of $C^{r_{\ell'}}(u_{\ell'}, v_{\ell'})$ and $C(u_{\ell'}, v_{\ell'})$ coincide in $L^2((0,T)\times \Omega;l^2(H))$. Moreover, due to Lemma~\ref{lemma:indent_with_steklov}, the weak limits of $C^{r_{\ell'}}(u_{\ell'}, v_{\ell'})$ and $C^{r_{\ell'}}(u_{\ell'}^-, v_{\ell'}^-)$ also coincide. \end{proof}
At this point, we are ready to take the limit in~\eqref{eq:scheme_continuous_formulation} along $\ell'\to\infty$.
\begin{lemma} \label{lemma:eqns} Let the assumptions of Theorem~\ref{thm:main} be fulfilled. Then for $(dt\times d\mathbb{P})$-almost all $(t,\omega) \in (0,T)\times \Omega$ \begin{equation} \label{eq:5a}
v(t) + \int_0^t a(s)ds + \int_0^t Bu(s) ds = v_0 + \int_0^t f(s)ds + \int_0^t \bar{c}(s)dW(s) \text{ in } V_A^* , \end{equation} and there is an $H$-valued continuous modification of $v$ (which we denote by $v$ again) such that for all $t\in [0,T]$ \begin{equation} \label{eq:7} \begin{split}
|v(t)|^2 + |u(t)|_B^2 ={} & |v_0|^2 + |u_0|_B^2 + \int_0^t \big[ 2\langle f(s)-a(s), v(s)\rangle + |\bar{c}(s)|_{l^2(H)}^2\big]ds\\ & + 2\int_0^t (v(s),\bar{c}(s)dW(s)). \end{split} \end{equation} Finally, $\xi = v(T)$ and thus $v_{\ell'}(T) \rightharpoonup v(T)$ in $L^2(\Omega; H)$ as $\ell'\to\infty$. \end{lemma}
\begin{proof} In what follows, we only write $\ell$ instead of $\ell'$. Let us fix $m \leq m_\ell$ and take $\varphi = \psi(t)\bar{\varphi}$ in \eqref{eq:scheme_continuous_formulation} with $\bar{\varphi} \in V_m$ and $\psi \in L^p((0,T)\times \Omega; \mathbb{R})$. Integrating from $0$ to $T$ and taking the expectation then leads to \begin{equation*} \begin{split} & \mathbb{E}\int_0^T \bigg[(v_\ell(t),\varphi(t)) + \bigg\langle\int_0^{\theta_\ell^+(t)} (Av_\ell(s)+Bu_\ell(s))ds,\varphi(t) \bigg\rangle \bigg]dt \\ & = \mathbb{E}\int_0^T \bigg[ (v^0_\ell,\varphi(t)) + \bigg\langle\int_0^{\theta_\ell^+(t)}f_\ell(s)ds,\varphi(t) \bigg\rangle \\ & \quad + \bigg(\int_{\tau_\ell}^{\theta_\ell^+(t)} C^{r_\ell}(u_\ell^-(s),v_\ell^-(s))dW(s),\varphi(t)\bigg)\bigg]dt. \end{split} \end{equation*} We subsequently see that \begin{equation} \label{eq:6} \begin{split} & \mathbb{E}\int_0^T \bigg[(v_\ell(t),\varphi(t)) + \langle (KAv_\ell)(t),\varphi(t)\rangle + \langle (K Bu_\ell)(t),\varphi(t)\rangle \bigg] dt \\ & = \mathbb{E}\int_0^T\bigg[ (v^0_\ell,\varphi(t)) + \langle (K f_\ell)(t),\varphi(t)\rangle \\ & \quad + \bigg(\int_0^t C^{r_\ell}(u_\ell^-(s),v_\ell^-(s))dW(s),\varphi(t)\bigg)\bigg]dt + R_\ell^1 + R_\ell^2 + R_\ell^3, \end{split} \end{equation} where \begin{align*} R_\ell^1 &:= \mathbb{E}\int_0^T \bigg\langle\int_t^{\theta_\ell^+(t)} (f_\ell(s) - Av_\ell(s) - Bu_\ell(s))ds,\varphi(t) \bigg\rangle dt, \\ R_\ell^2 &:= \mathbb{E}\int_0^T \bigg(\int_0^{\tau_\ell} C(u_\ell^-(s),v_\ell^-(s))dW^{r_\ell}(s),\varphi(t)\bigg)dt, \\ R_\ell^3 &:= \mathbb{E} \int_0^T \bigg(\int_t^{\theta_\ell^+(t)} C(u_\ell^-(s),v_\ell^-(s))dW^{r_\ell}(s),\varphi(t)\bigg)dt. \end{align*} We will now show that $R_\ell^1, \, R_\ell^2, \, R_\ell^3 \to 0$ as $\ell \to \infty$.
Because of \begin{align*} R_\ell^1 &= \mathbb{E} \sum_{j=1}^{N_\ell} \int_{t_{j-1}}^{t_j} \bigg\langle\int_t^{t_j} (f^j - Av^j - Bu^j)ds,\varphi(t) \bigg\rangle dt \\ &= \mathbb{E} \int_0^T (\theta_\ell^+(t) - t)\left\langle f_\ell(t) - Av_\ell(t) - Bu_\ell(t),\varphi(t) \right\rangle dt , \end{align*} we obtain, using H\"older's inequality and Corollary~\ref{corollary:to_apriori_est}, \begin{align*}
|R_\ell^1| \le{} &
\tau_\ell \mathbb{E}\int_0^T \left|\left\langle f_\ell(t) - Av_\ell(t) - Bu_\ell(t),\varphi(t)\right\rangle \right| dt\\
\leq{} & \tau_\ell \big(
\big(\|f_\ell\|_{L^q((0,T)\times \Omega; V_A^*)}
+ \|Av_\ell\|_{L^q((0,T)\times \Omega; V_A^*)}\big)\|\varphi\|_{L^p((0,T)\times \Omega; V_A)}\\
& + \|Bu_\ell\|_{L^2((0,T)\times \Omega; V_B^*)} \|\varphi\|_{L^2((0,T)\times \Omega; V_B)} \big) \to 0 \end{align*} as $\ell\to\infty$. Using H\"older's inequality and It\^o's isometry (see, e.g., Pr\'ev\^ot and R\"ockner~\cite[Section 2.3]{prevot:rockner:concise}), we find with $u_\ell^-(t) = u^0_\ell$ and $v_\ell^-(t) = 0$ if $t\in [0,\tau_\ell)$ that \begin{align*}
|R_\ell^2| & \leq \mathbb{E}\int_0^T\bigg|\int_0^{\tau_\ell} C(u_\ell^-(s),v_\ell^-(s))dW^{r_\ell}(s)\bigg||\varphi(t)|dt\\ & \leq
\left(\mathbb{E} \int_0^T \left| \int_0^{\tau_\ell} C(u^0_\ell,0)dW^{r_\ell}(s) \right|^2dt\right)^{1/2} \|\varphi\|_{L^2((0,T)\times \Omega; H)}\\
& = \left(\mathbb{E} \int_0^T \int_0^{\tau_\ell} |C(u^0_\ell,0)|_{l^2(H)}^2 ds dt\right)^{1/2} \|\varphi\|_{L^2((0,T)\times \Omega; H)}\\ & = ( \tau_\ell T )^{1/2}
\left(\mathbb{E} |C(u^0_\ell,0)|_{l^2(H)}^2 \right)^{1/2} \|\varphi\|_{L^2((0,T)\times \Omega; H)}
\to 0 \end{align*} as $\ell\to\infty$. Similarly, using also Corollary~\ref{corollary:to_apriori_est}, we see that \begin{align*}
|R_\ell^3| & \le \left( \mathbb{E} \int_0^T \bigg|\int_t^{\theta_\ell^+(t)} C(u_\ell^-(s),v_\ell^-(s))dW^{r_\ell}(s)
\bigg|^2 dt \right)^{1/2}
\|\varphi\|_{L^2((0,T)\times \Omega; H)} \\
& = \left( \mathbb{E} \int_0^T\int_t^{\theta_\ell^+(t)} \bigg|
C(u_\ell^-(s),v_\ell^-(s))\bigg|_{l^2(H)}^2 dsdt \right)^{1/2}
\|\varphi\|_{L^2((0,T)\times \Omega; H)} \\
& = \left( \mathbb{E} \int_0^T (\theta_\ell^+(t) - t ) \bigg|
C(u_\ell^-(t),v_\ell^-(t))\bigg|_{l^2(H)}^2 dt \right)^{1/2}
\|\varphi\|_{L^2((0,T)\times \Omega; H)} \\
& \le \tau_\ell^{1/2} \left( \mathbb{E} \int_0^T \bigg|
C(u_\ell^-(t),v_\ell^-(t))\bigg|_{l^2(H)}^2 dt \right)^{1/2}
\|\varphi\|_{L^2((0,T)\times \Omega; H)} \to 0 \end{align*} as $\ell\to\infty$.
We would now like to let $\ell\to \infty$ in~\eqref{eq:6}. A simple calculation shows that $KAv_\ell \rightharpoonup Ka$ in $L^q((0,T)\times \Omega ; V_A^*)$ as $\ell \to \infty$ since $Av_\ell \rightharpoonup a$ in $L^q((0,T)\times \Omega ; V_A^*)$ as $\ell \to \infty$. Analogously, we observe that $KBu_\ell \rightharpoonup KBu$ in $L^2((0,T)\times \Omega ; V_B^*)$ as $\ell \to \infty$ since $u_\ell \rightharpoonup u$ in $L^2((0,T)\times \Omega ; V_B)$ and thus $Bu_\ell \rightharpoonup Bu$ in $L^2((0,T)\times \Omega ; V_B^*)$ as $\ell \to \infty$ (note that $B$ is linear and bounded and thus weakly-weakly continuous).
The stochastic integral is a bounded linear operator mapping $\mathcal{L}^2(l^2(H))$ into $\mathcal{L}^2(H)$. Indeed, by It\^o's isometry (see again Pr\'ev\^ot and R\"ockner~\cite[Section~2.3]{prevot:rockner:concise}), we have for any $g\in \mathcal{L}^2(l^2(H))$ \begin{equation*} \begin{split}
\bigg\|\int_0^\cdot g(s)dW(s)\bigg\|_{L^2((0,T)\times \Omega;H)}^2
& = \mathbb{E} \int_0^T \int_0^t |g(s)|_{l^2(H)}^2\, dsdt\\
& \leq T\|g\|_{L^2((0,T)\times \Omega;l^2(H))}^2. \end{split} \end{equation*} Hence the stochastic integral maps weakly convergent sequences in $\mathcal{L}^2(l^2(H))$ into weakly convergent sequences in $\mathcal{L}^2(H)$. With Lemma~\ref{lemma:weaklimits}, we thus obtain \begin{equation*} \mathbb{E}\int_0^T \!\! \bigg(\int_0^t C^{r_\ell}(u_{\ell}^-(s),v_{\ell}^-(s))dW(s),\varphi(t)\bigg)dt \to \mathbb{E}\int_0^T \!\!\bigg(\int_0^t \bar{c}(s)dW(s),\varphi(t)\bigg)dt \end{equation*} as $\ell \to \infty$.
So, taking the limit in~\eqref{eq:6} as $\ell\to \infty$ and using also $v_\ell \rightharpoonup v$ in $L^2((0,T)\times \Omega;H)$, $v^0_\ell \to v_0$ in $L^2(\Omega;H)$ and $f_\ell \to f$ in $L^q((0,T)\times\Omega;V_A^*)$ as $\ell\to\infty$ (the latter can be shown by standard arguments), we arrive at \begin{equation*} \begin{split} & \mathbb{E} \int_0^T \bigg[ (v(t), \varphi(t)) + \left\langle \int_0^t a(s)ds, \varphi(t)\right\rangle + \left\langle \int_0^t Bu(s)ds, \varphi(t)\right\rangle \bigg] dt\\ & = \mathbb{E} \int_0^T\bigg[ (v_0, \varphi(t)) + \left\langle \int_0^t f(s)ds, \varphi(t)\right\rangle + \left(\int_0^t \bar{c}(s)dW(s),\varphi(t)\right)\bigg]dt , \end{split} \end{equation*} which holds for all $\varphi = \psi \bar{\varphi}$ with $\psi \in L^p((0,T)\times \Omega; \mathbb{R})$ and $\bar{\varphi} \in V_m$. As $(V_m)_{m\in \mathbb{N}}$ is a Galerkin scheme for $V_A$, the above equation indeed holds for $\varphi = \psi \bar{\varphi}$ with any $\bar{\varphi} \in V_A \hookrightarrow V_B$. This proves \eqref{eq:5a}.
Now we need to use $V_A\hookrightarrow V_B$. With this assumption, we can apply the It\^o formula for the square of the norm (see, e.g., Krylov and Rozovskii~\cite[Theorem~3.1 and Section~2]{krylov:rozovskii:stochastic} or Pr\'ev\^ot and R\"ockner~\cite[Theorem~4.2.5]{prevot:rockner:concise}). Thus we conclude that $v$ has an $H$-valued continuous modification (which we label $v$ again) such that (\ref{eq:5a}) holds for all $t\in [0,T]$ and \begin{equation*} \begin{split}
|v(t)|^2 - |v_0|^2 ={}& \int_0^t \big[ 2\langle f(s)-a(s)-Bu(s), v(s)\rangle + |\bar{c}(s)|_{l^2(H)}^2\big]ds\\ & + 2\int_0^t (v(s),\bar{c}(s)dW(s)) . \end{split} \end{equation*} With \begin{align*} \int_0^t \langle Bu(s) & , v(s)\rangle ds = \int_0^t \langle B (u_0 + (Kv)(s)), v(s)\rangle ds \\ &= \langle Bu_0 , (Kv)(t) \rangle + \int_0^t \int_0^s \langle Bv(\sigma) , v(s) \rangle d\sigma ds \\ &= \langle Bu_0 , (Kv)(t) \rangle + \int_0^t \int_\sigma^t \langle Bv(\sigma) , v(s) \rangle ds d\sigma \\ &= \langle Bu_0 , (Kv)(t) \rangle + \langle B(Kv)(t) , (Kv)(t) \rangle - \int_0^t \langle B v(\sigma) , (Kv)(\sigma) \rangle d\sigma \\ & = \langle B (u(t) + u_0) , (u(t)-u_0) \rangle - \int_0^t \langle B u(s) , v(s) \rangle ds \end{align*} and thus \begin{equation}\label{eq:Buv}
2\int_0^t \langle Bu(s), v(s)\rangle ds = |u(t)|_B^2 - |u_0|_B^2 , \end{equation} we arrive at \eqref{eq:7}.
Recall that $\xi$ is the weak limit of $v_{\ell}(T)$ in $L^2(\Omega;H)$. Using a similar limiting argument as above, we obtain that \begin{equation*} \xi + \int_0^T a(s)\, ds + \int_0^T Bu(s)\, ds = v_0 + \int_0^T f(s)\, ds + \int_0^T \bar{c}(s)\,dW(s) \end{equation*} with the equality holding almost surely in $H$. This, together with the knowledge that $v$ has an $H$-valued continuous modification and with~\eqref{eq:5a}, implies that $\xi=v(T)$. \end{proof}
\section{Identifying the limits in the nonlinear terms. Proof of convergence and existence} \label{sec:identlims}
In this section, we continue the considerations of the previous section and we will use a variant of a well-known monotonicity argument to identify $a$ with $Av$ and $\bar{c}$ with $C(u,v)$. This will conclude the proof of the main theorem of the paper. We will need the following observation. \begin{lemma} \label{lemma:int_by_pts_cals} Let $a$ and $b$ be real-valued integrable functions such that for all $t \in [0,T]$ \begin{equation} \label{eq:int_by_pts_cal1} a(t) \leq a(0) + \int_0^t b(s)ds. \end{equation} Then for all $\kappa \ge 0$ and for all $t \in [0,T]$ \begin{equation} \label{eq:int_by_pts_cal2} e^{-\kappa t} a(t) + \kappa \int_0^t e^{-\kappa s} a(s) ds \leq a(0) + \int_0^t e^{-\kappa s} b(s) ds. \end{equation} Moreover, if equality holds in~\eqref{eq:int_by_pts_cal1} then equality also holds in~\eqref{eq:int_by_pts_cal2}. \end{lemma} \begin{proof} Using the assumption and integrating by parts, we find \[ \begin{split} & e^{-\kappa t}a(t) + \int_0^t \kappa e^{-\kappa s} a(s) ds \leq e^{-\kappa t} a(0) + e^{-\kappa t} \int_0^t b(s) ds \\ & + \int_0^t \kappa e^{-\kappa s} \bigg[a(0) + \int_0^s b(\sigma)d\sigma \bigg] ds
= a(0) + \int_0^t e^{-\kappa s} b(s) ds. \end{split} \] This proves the assertion. \end{proof}
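The integration by parts in the proof above amounts to the two elementary identities
\[
e^{-\kappa t} a(0) + \int_0^t \kappa e^{-\kappa s} a(0)\, ds = a(0)
\]
and, integrating by parts with $\frac{d}{ds}\big({-e^{-\kappa s}}\big) = \kappa e^{-\kappa s}$,
\[
e^{-\kappa t}\int_0^t b(s)\, ds + \int_0^t \kappa e^{-\kappa s} \int_0^s b(\sigma)\, d\sigma\, ds = \int_0^t e^{-\kappa s} b(s)\, ds ,
\]
which together give the asserted equality.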
\begin{proof}[Proof of Theorem~\ref{thm:main}] Let \begin{equation*} \varphi_\ell(t) := \left\{ \begin{array}{lcl} \displaystyle
\mathbb{E} ( |v_\ell(t)|^2 + |u_\ell(t)|_B^2 ) & \textrm{ if } & t \in (0,T], \\[1ex] \displaystyle
\mathbb{E} ( |v^0_\ell|^2 + |u^0_\ell|_B^2 ) & \textrm{ if } & t = 0. \end{array} \right. \end{equation*} Then from Theorem~\ref{thm:disc-apriori}, in particular~\eqref{eq:apriori1}, we find for all $t\in [0,T]$ \begin{equation*}
\varphi_\ell(t) \! \leq \varphi_\ell(0) + \mathbb{E}\int_0^t \!\!\big[ 2\langle f_\ell(s) - Av_\ell(s), v_\ell(s) \rangle + |C^{r_\ell}(u_\ell(s),v_\ell(s))|_{l^2(H)}^2 \big]ds + R_\ell(t), \end{equation*} where \begin{equation*}
R_\ell(t) := \mathbb{E} \int_t^{\theta_\ell^+(t)} \big[ 2\langle f_\ell(s) - Av_\ell(s), v_\ell(s) \rangle + |C^{r_\ell}(u_\ell(s),v_\ell(s))|_{l^2(H)}^2 \big]ds. \end{equation*} Note that $R_\ell(0) = R_\ell(T) = 0$. From Lemma~\ref{lemma:int_by_pts_cals}, we see that \begin{equation} \label{eq:int_by_parts_in_disc_eq} \begin{split} & e^{-\lambda T} \varphi_\ell(T) \leq \varphi_\ell(0) - \lambda \int_0^T e^{-\lambda s} \varphi_\ell(s)ds \\
& + \mathbb{E}\int_0^T \!\!\!\! e^{-\lambda s}\big[ 2\langle f_\ell(s) - Av_\ell(s), v_\ell(s) \rangle + |C^{r_\ell}(u_\ell(s),v_\ell(s))|_{l^2(H)}^2 \big]ds + \bar{R}_\ell, \end{split} \end{equation}
where $\bar{R}_\ell := \lambda \int_0^T e^{-\lambda s} |R_\ell(s)| ds$. We will show that $\bar{R}_\ell \to 0$ as $\ell \to \infty$. Indeed, \begin{equation*} \begin{split} & \bar{R}_\ell
\leq \lambda \mathbb{E} \int_0^T \int_t^{\theta_\ell^+(t)} \big| 2\langle f_\ell(s) - Av_\ell(s), v_\ell(s) \rangle + |C(u_\ell(s),v_\ell(s))|_{l^2(H)}^2 \big| ds dt\\
& \leq c \tau_\ell \mathbb{E} \int_0^T \big[ 2\left(\|f_\ell(t)\|_{V_A^*} + \|Av_\ell(t)\|_{V_A^*}\right)\|v_\ell(t)\|_{V_A} + |C(u_\ell(t), v_\ell(t))|_{l^2(H)}^2 \big] dt\\ & \leq c \tau_\ell, \end{split} \end{equation*} since the integrand is piecewise constant in time and since we can apply Young's inequality and Corollary~\ref{corollary:to_apriori_est}.
Now we are ready to apply the monotonicity-like assumption~\eqref{eq:mod_monotonicity}. Let $w \in \mathcal{L}^p(V_A)$ and let $z \in \mathcal{L}^2(V_B)$. We see that \begin{equation*} \begin{split} & \mathbb{E}\int_0^T e^{-\lambda s} \langle Av_\ell(s), v_\ell(s) \rangle ds = \mathbb{E} \int_0^T e^{-\lambda s} \langle Av_\ell(s) - Aw(s), v_\ell(s) - w(s) \rangle ds\\ & \, + \mathbb{E}\int_0^T e^{-\lambda s} [ \langle Aw(s), v_\ell(s) - w(s)\rangle + \langle Av_\ell(s),w(s)\rangle ] ds\\
& \geq \frac{1}{2}\mathbb{E} \int_0^T \!\! e^{-\lambda s} \big[|C(u_\ell(s),v_\ell(s)) - C(z(s),w(s))|_{l^2(H)}^2 \\
& \, - \lambda|v_\ell(s)-w(s)|^2 - \lambda |u_\ell(s) - z(s)|_B^2\big]ds\\ & \, + \mathbb{E}\int_0^T e^{-\lambda s} [ \langle Aw(s), v_\ell(s) - w(s)\rangle + \langle Av_\ell(s),w(s)\rangle ] ds. \end{split} \end{equation*} Then from~\eqref{eq:int_by_parts_in_disc_eq}, we can deduce that \begin{equation} \label{eq:mono_trick_0} \begin{split}
& e^{-\lambda T} \mathbb{E}\big(|v_\ell(T)|^2 + |u_\ell(T)|_B^2\big)\\
& \leq \mathbb{E}\big(|v^0_\ell|^2 + |u^0_\ell|_B^2\big) - \lambda \int_0^T e^{-\lambda s} \mathbb{E}\big(|v_\ell(s)|^2 + |u_\ell(s)|_B^2\big)ds \\
& \, + \mathbb{E}\int_0^T \!\!\! e^{-\lambda s}\big[ 2\langle f_\ell(s) - Av_\ell(s), v_\ell(s) \rangle + |C(u_\ell(s),v_\ell(s))|_{l^2(H)}^2 \big]ds + \bar{R}_\ell \\
& \leq \mathbb{E}\big(|v^0_\ell|^2 + |u^0_\ell|_B^2\big) + 2\mathbb{E}\int_0^T e^{-\lambda s}\langle f_\ell(s),v_\ell(s) \rangle ds \\ & \, + \mathbb{E}\int_0^T e^{-\lambda s}\big[ 2 \big(C(u_\ell(s),v_\ell(s)), C(z(s),w(s)) \big)_{l^2(H)} \\
& \, - |C(z(s),w(s))|_{l^2(H)}^2
- 2\lambda (v_\ell(s), w(s)) + \lambda|w(s)|^2 \\ & \, - 2\lambda (u_\ell(s),z(s))_B
+ \lambda|z(s)|_B^2 \big]ds \\ & \, - \mathbb{E}\int_0^T 2e^{-\lambda s} \big[ \langle Aw(s), v_\ell(s) - w(s)\rangle + \langle Av_\ell(s),w(s)\rangle \big] ds + \bar{R}_\ell. \end{split} \end{equation} We can now take the limit inferior along the subsequence $\ell'$. Due to Lemma~\ref{lemma:weaklimits} and due to the weak sequential lower-semicontinuity of the norm, we see that \begin{equation} \label{eq:mono_trick_1} \begin{split}
& e^{-\lambda T} \mathbb{E}\big(|v(T)|^2 + |u(T)|_B^2\big) \leq \liminf_{\ell' \to \infty} e^{-\lambda T}
\mathbb{E}\big(|v_{\ell'}(T)|^2 + |u_{\ell'}(T)|_B^2\big) \\
& \leq \mathbb{E}\big(|v_0|^2 + |u_0|_B^2\big) + 2\mathbb{E}\int_0^T \!\!e^{-\lambda s} \langle f(s),v(s) \rangle ds \\
& \quad + \mathbb{E}\int_0^T e^{-\lambda s}\big[ 2 \big(\bar{c}(s), C(z(s),w(s)) \big)_{l^2(H)} - |C(z(s),w(s))|_{l^2(H)}^2 \\
& \quad - 2\lambda (v(s), w(s)) + \lambda|w(s)|^2 - 2\lambda (u(s),z(s))_B + \lambda|z(s)|_B^2 \big]ds \\ & \quad - \mathbb{E}\int_0^T 2e^{-\lambda s} \big[ \langle Aw(s), v(s) - w(s)\rangle + \langle a(s),w(s)\rangle \big] ds. \end{split} \end{equation} We now need the limit equation obtained in Lemma~\ref{lemma:eqns} to proceed. Taking expectation in~\eqref{eq:7} and using Lemma~\ref{lemma:int_by_pts_cals}, we get \begin{equation} \label{eq:mono_trick_2} \begin{split}
e^{-\lambda T}\mathbb{E} \big(|v(T)|^2 + & |u(T)|_B^2 \big) = \mathbb{E}\big(|v_0|^2 + |u_0|_B^2\big)\\
& - \lambda \mathbb{E} \int_0^T e^{-\lambda s} \big[|v(s)|^2 + |u(s)|_B^2\big] ds\\ & + \mathbb{E}\int_0^T e^{-\lambda s} \big[ 2\langle f(s) - a(s),v(s)\rangle +
|\bar{c}(s)|_{l^2(H)}^2\big] ds. \end{split} \end{equation} Subtracting~\eqref{eq:mono_trick_2} from~\eqref{eq:mono_trick_1} leads to \begin{equation} \label{eq:mono_trick_3} \begin{split} 0 \leq{} & \liminf_{\ell' \to \infty} e^{-\lambda T}
\mathbb{E}\big(|v_{\ell'}(T)|^2 + |u_{\ell'}(T)|_B^2\big)
- e^{-\lambda T} \mathbb{E}\big(|v(T)|^2 + |u(T)|_B^2\big) \\
\leq{} & \, \mathbb{E} \int_0^T e^{-\lambda s} \big[ -|\bar{c}(s) - C(z(s),w(s))|_{l^2(H)}^2 \\
& + \lambda |v(s)-w(s)|^2 + \lambda |u(s)-z(s)|_B^2 + 2\langle a(s), v(s) - w(s)\rangle \big] ds\\ &- 2\mathbb{E}\int_0^T e^{-\lambda s} \langle Aw(s), v(s)-w(s)\rangle ds . \end{split} \end{equation} This implies \begin{equation} \label{eq:mono_trick_4} \begin{split} & 2\mathbb{E}\int_0^T e^{-\lambda s} \langle Aw(s), v(s)-w(s)\rangle ds \\
& \le \mathbb{E} \int_0^T e^{-\lambda s} \big[ -|\bar{c}(s) - C(z(s),w(s))|_{l^2(H)}^2 \\
&\quad + \lambda |v(s)-w(s)|^2 + \lambda |u(s)-z(s)|_B^2 + 2\langle a(s), v(s) - w(s)\rangle \big] ds\\ & \leq \mathbb{E} \int_0^T e^{-\lambda s} \big[
\lambda |v(s)-w(s)|^2 + \lambda |u(s)-z(s)|_B^2 + 2\langle a(s), v(s) - w(s)\rangle \big] ds . \end{split} \end{equation} Now we are ready to identify the limits. First we take $w = v$ and $z = u$. The first inequality in~\eqref{eq:mono_trick_4} leads to \begin{equation*}
0 \leq -\mathbb{E} \int_0^T e^{-\lambda s} |\bar{c}(s) - C(u(s),v(s))|_{l^2(H)}^2 ds , \end{equation*} which can only be true if $\bar{c}=C(u,v)$. Next we take an arbitrary $\bar{w} \in \mathcal{L}^p(V_A)$, set $\bar{z} = u_0 + K\bar{w}$, and let $\epsilon \in (0,1)$. Then with $w = v - \epsilon \bar{w}$ and $z = u - \epsilon \bar{z}$, the second inequality in~\eqref{eq:mono_trick_4} leads to \begin{equation*} \begin{split} &2\mathbb{E}\int_0^T e^{-\lambda s} \langle A(v(s)-\epsilon \bar{w}(s)), \epsilon \bar{w}(s)\rangle ds \\
& \leq \mathbb{E} \int_0^T e^{-\lambda s} \big[ \lambda \epsilon^2 (|\bar{w}(s)|^2 + |\bar{z}(s)|_B^2) + 2\langle a(s), \epsilon \bar{w}(s)\rangle \big] ds.\\ \end{split} \end{equation*} We divide by $\epsilon > 0$. Due to the hemicontinuity and growth assumptions on $A$ and since $\epsilon < 1$, we can apply Lebesgue's theorem on dominated convergence and let $\epsilon \to 0$. Hence, we arrive at \begin{equation*} \mathbb{E}\int_0^T e^{-\lambda s} \langle Av(s), \bar{w}(s)\rangle ds \leq \mathbb{E} \int_0^T e^{-\lambda s} \langle a(s), \bar{w}(s)\rangle ds , \end{equation*} which can only hold true for all $\bar{w} \in \mathcal{L}^p(V_A)$ if $a=Av$. Finally, we note that the uniqueness of the solution to equation~\eqref{eq:1} implies that the whole sequence converges to the limit and not only the subsequence.
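The use of dominated convergence above can be justified as follows; for this sketch we write the growth condition on $A$ in the generic form $\|Aw\|_{V_A^*} \leq c\,(1 + \|w\|_{V_A}^{p-1})$ (the precise form of the assumption may differ). For $\epsilon \in (0,1)$,
\[
|\langle A(v(s)-\epsilon \bar{w}(s)), \bar{w}(s)\rangle|
\leq \|A(v(s)-\epsilon \bar{w}(s))\|_{V_A^*}\, \|\bar{w}(s)\|_{V_A}
\leq c\,\big(1 + \|v(s)\|_{V_A}^{p-1} + \|\bar{w}(s)\|_{V_A}^{p-1}\big)\|\bar{w}(s)\|_{V_A} ,
\]
which is an $\epsilon$-independent dominating function, integrable over $(0,T)\times\Omega$ by H\"older's inequality with exponents $p$ and $q$. Hemicontinuity provides the pointwise convergence $\langle A(v(s)-\epsilon\bar{w}(s)), \bar{w}(s)\rangle \to \langle Av(s), \bar{w}(s)\rangle$ as $\epsilon \to 0$.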
We will now show that $v_\ell(T) \to v(T)$ in $L^2(\Omega;H)$ and $u_\ell(T) \to u(T)$ in $L^2(\Omega;V_B)$ as $\ell\to\infty$. We first take the limit superior in~\eqref{eq:mono_trick_0} with $w=v$ and $z=u$ to obtain \begin{equation*} \begin{split}
\limsup_{\ell\to \infty} e^{-\lambda T} & \mathbb{E} \big( |v_\ell(T)|^2 + |u_\ell(T)|_B^2 \big) \\
& \leq \mathbb{E}\big(|v_0|^2 + |u_0|_B^2\big) + \mathbb{E}\int_0^T \!\!e^{-\lambda s} \Big[ 2\langle f(s)-a(s),v(s) \rangle \\
& \quad + 2 \big(\bar{c}(s), C(u(s),v(s)) \big)_{l^2(H)} - |C(u(s),v(s))|_{l^2(H)}^2 \\
& \quad - 2\lambda (v(s), v(s)) + \lambda|v(s)|^2 - 2\lambda (u(s),u(s))_B + \lambda|u(s)|_B^2 \Big]\,ds. \\ \end{split} \end{equation*} Since $a=Av$ and $\bar{c}=C(u,v)$ and due to~\eqref{eq:mono_trick_2}, we get \begin{equation} \label{eq:endptconv} \begin{split}
\limsup_{\ell\to \infty} e^{-\lambda T} & \mathbb{E} \big( |v_\ell(T)|^2 + |u_\ell(T)|_B^2 \big)\\
& \leq \mathbb{E}\big(|v_0|^2 + |u_0|_B^2\big) + \mathbb{E}\int_0^T \!\! e^{-\lambda s} \Big[2\langle f(s)-Av(s),v(s) \rangle \\
& \quad +|C(u(s),v(s))|_{l^2(H)}^2
- \lambda|v(s)|^2 - \lambda|u(s)|_B^2 \Big]\, ds \\
& = e^{-\lambda T}\mathbb{E} \big( |v(T)|^2 + |u(T)|_B^2 \big). \end{split} \end{equation} Finally, due to weak sequential lower-semicontinuity of the norm and with~\eqref{eq:endptconv}, we see that \[ \begin{split}
e^{-\lambda T}\mathbb{E} \big( |v(T)|^2 + |u(T)|_B^2 \big)
& \leq \liminf_{\ell\to \infty} e^{-\lambda T} \mathbb{E} \big( |v_\ell(T)|^2 + |u_\ell(T)|_B^2 \big) \\
& \leq \limsup_{\ell\to \infty} e^{-\lambda T} \mathbb{E} \big( |v_\ell(T)|^2 + |u_\ell(T)|_B^2 \big)\\
& \leq e^{-\lambda T}\mathbb{E} \big( |v(T)|^2 + |u(T)|_B^2 \big). \end{split} \]
Hence $\mathbb{E} \big( |v_\ell(T)|^2 + |u_\ell(T)|_B^2 \big) \to \mathbb{E}\big(|v(T)|^2 + |u(T)|_B^2\big)$ as $\ell \to \infty$. The space $L^2(\Omega; H\times V_B)$ with the natural inner product is a Hilbert space, since $V_B$, under the conditions imposed on $B$, is itself a Hilbert space. In a Hilbert space, weak convergence together with convergence of the norms implies strong convergence, so this norm convergence combined with the weak convergences $v_\ell(T) \rightharpoonup v(T)$ in $L^2(\Omega;H)$ and $u_\ell(T) \rightharpoonup u(T)$ in $L^2(\Omega;V_B)$ completes the proof. \end{proof}
\begin{remark} \label{remark:stronger_monotonicity} It is possible to show that if $A$ and $C$ jointly satisfy some appropriate stronger monotonicity assumption then $v_\ell \to v$ in $L^p((0,T)\times \Omega;V_A)$ as $\ell \to \infty$. For example, if there is $\mu > 0$ such that, almost surely, for any $w,z \in V_A$ and $u,v \in V_B$ \begin{equation} \label{eq:stronger_monotonicity} \begin{split}
&\langle Aw - Az, w-z \rangle + \lambda_A |w-z|^2 \\
& \geq \mu\|w-z\|_{V_A}^p + \frac{1}{2}|C(u,w) - C(v,z)|_{l^2(H)}^2 - \lambda_B|u-v|_B^2 \end{split} \end{equation} then $v_\ell \to v$ in $L^p((0,T)\times \Omega;V_A)$ as $\ell \to \infty$. \end{remark}
Indeed, with~\eqref{eq:stronger_monotonicity}, we obtain, instead of~\eqref{eq:mono_trick_0}, the following (we have taken $w=v$ and $z=u$): \begin{equation*} \begin{split}
& \mu \mathbb{E}\int_0^T e^{-\lambda s} \|v_\ell(s) - v(s)\|_{V_A}^p ds
+ e^{-\lambda T} \mathbb{E}\big(|v_\ell(T)|^2 + |u_\ell(T)|_B^2\big)\\
& \leq \mathbb{E}\big(|v^0_\ell|^2 + |u^0_\ell|_B^2\big) + 2\mathbb{E}\int_0^T e^{-\lambda s} \langle f_\ell(s),v_\ell(s) \rangle ds \\ & \, + \mathbb{E}\int_0^T e^{-\lambda s}\big[ 2 \big(C^{r_\ell}(u_\ell(s),v_\ell(s)), C^{r_\ell}(u(s),v(s)) \big)_{l^2(H)}\\
& \, - |C^{r_\ell}(u(s),v(s))|_{l^2(H)}^2
- 2\lambda (v_\ell(s), v(s)) + \lambda|v(s)|^2 - 2\lambda (u_\ell(s),u(s))_B\\
& \, + \lambda|u(s)|_B^2 \big]ds - \mathbb{E}\int_0^T 2e^{-\lambda s} \big[ \langle Av(s), v_\ell(s) - v(s)\rangle + \langle Av_\ell(s),v(s)\rangle \big] ds + \bar{R}_\ell. \end{split} \end{equation*} Taking the limit as $\ell \to \infty$ and using Lemma~\ref{lemma:weaklimits} together with the fact, established earlier, that $a=Av$ and $\bar{c}=C(u,v)$, we obtain \begin{equation*} \begin{split}
& \mu \lim_{\ell \to \infty} \mathbb{E}\int_0^T e^{-\lambda s} \|v_\ell(s) - v(s)\|_{V_A}^p ds + e^{-\lambda T} \mathbb{E}\big(|v(T)|^2 + |u(T)|_B^2\big)\\
& \leq \mathbb{E}\big(|v_0|^2 + |u_0|_B^2\big) -\lambda \mathbb{E}\int_0^T e^{-\lambda s} \big[|v(s)|^2 + |u(s)|_B^2\big] ds\\
& \quad + \mathbb{E}\int_0^T e^{-\lambda s}\big[ 2\langle f(s) - Av(s),v(s) \rangle + |C(u(s),v(s))|_{l^2(H)}^2\big] ds. \end{split} \end{equation*} If we subtract~\eqref{eq:mono_trick_2} then we obtain \begin{equation*}
\mu \lim_{\ell \to \infty} \mathbb{E}\int_0^T e^{-\lambda s} \|v_\ell(s) - v(s)\|_{V_A}^p ds \leq 0. \end{equation*} From this, we conclude that $v_\ell \to v$ in $L^p((0,T)\times \Omega;V_A)$ and thus also $u_\ell \to u$ in $L^2((0,T)\times \Omega;V_B)$ as $\ell \to \infty$.
\section{Proof of uniqueness} \label{sec:uniq} In this short section, we will prove that the solution to~\eqref{eq:1} is unique in the sense specified in Theorem~\ref{lemma:uniq}. \begin{proof}[Proof of Theorem~\ref{lemma:uniq}] Let $v := v_1 - v_2$ and $u := u_1 - u_2$. Then $\mathbb{P}$-almost everywhere and for all $t\in [0,T]$ \begin{equation*} \begin{split} v(t) ={}& - \int_0^t \big[Av_1(s) - Av_2(s) + Bu(s) \big] ds \\ & + \int_0^t \left[C(u_1(s),v_1(s)) - C(u_2(s),v_2(s))\right] dW(s) \end{split} \end{equation*} holds in $V_A^*$. With the assumption $V_A \hookrightarrow V_B$, we may apply It\^o's formula for the square of the norm (see, e.g., Pr\'ev\^ot and R\"ockner~\cite[Theorem 4.2.5]{prevot:rockner:concise}) and obtain \begin{equation*} \begin{split}
|v(t)|^2 ={} & -2\int_0^t \langle Av_1(s) - Av_2(s)+Bu(s),v(s) \rangle ds \\ & + 2\int_0^t \big(v(s), [C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))] dW(s) \big)\\
& + \int_0^t |C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))|_{l^2(H)}^2 ds. \end{split} \end{equation*} Since $u(0) = 0$, we obtain with (\ref{eq:Buv}) \begin{equation*} \begin{split}
|v(t)|^2 + |u(t)|_B^2 ={}& -2\int_0^t \langle Av_1(s) - Av_2(s),v(s) \rangle ds \\ & + 2\int_0^t \big(v(s), [C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))] dW(s) \big) \\
& + \int_0^t |C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))|_{l^2(H)}^2 ds. \end{split} \end{equation*} Now we apply It\^o's formula for real-valued processes (similar to Lemma~\ref{lemma:int_by_pts_cals}) to obtain \begin{equation*} \begin{split}
e^{-\lambda t}\big(|v(t)|^2 + & |u(t)|_B^2\big) = - \lambda \int_0^t e^{-\lambda s} \big(|v(s)|^2 + |u(s)|_B^2 \big)ds\\ & - 2\int_0^t e^{-\lambda s} \langle Av_1(s) - Av_2(s),v(s) \rangle ds \\
& + \int_0^t e^{-\lambda s} |C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))|_{l^2(H)}^2 ds + m(t), \end{split} \end{equation*} where \begin{equation*} m(t) = 2\int_0^t e^{-\lambda s} \big(v(s), [C(u_1(s),v_1(s)) - C(u_2(s), v_2(s))] dW(s)\big). \end{equation*} This together with (\ref{eq:mod_monotonicity}) yields \begin{equation*}
0 \leq e^{-\lambda t}\big(|v(t)|^2 + |u(t)|_B^2\big) \leq m(t). \end{equation*}
Hence the process $m(t)$ is non-negative for all $t\in [0,T]$. We can also see that it is a continuous local martingale starting from $0$; being non-negative, it is a supermartingale with constant zero expectation. Thus, almost surely, $m(t) = 0$ for all $t\in [0,T]$. But this in turn means that, almost surely, $|v_1(t) - v_2(t)|^2 = |v(t)|^2 = 0$ as well as $|u_1(t) - u_2(t)|_B^2 = |u(t)|_B^2 = 0$ for all $t\in [0,T]$. Thus solutions to~\eqref{eq:1} must be indistinguishable. \end{proof}
\end{document} | arXiv |
Small icosihemidodecahedron
In geometry, the small icosihemidodecahedron (or small icosahemidodecahedron) is a uniform star polyhedron, indexed as U49. It has 26 faces (20 triangles and 6 decagons), 60 edges, and 30 vertices.[1] Its vertex figure alternates two regular triangles and decagons as a crossed quadrilateral. It is a hemipolyhedron with its six decagonal faces passing through the model center.
• Type: Uniform star polyhedron
• Elements: F = 26, E = 60, V = 30 (χ = −4)
• Faces by sides: 20{3} + 6{10}
• Coxeter diagram: (double covering)
• Wythoff symbol: 3/2 3 | 5 (double covering)
• Symmetry group: Ih, [5,3], *532
• Index references: U49, C63, W89
• Dual polyhedron: Small icosihemidodecacron
• Vertex figure: 3.10.3/2.10
• Bowers acronym: Seihid
It is given a Wythoff symbol, 3⁄2 3 | 5, but that construction represents a double covering of this model.
Related polyhedra
It shares its edge arrangement with the icosidodecahedron (its convex hull, having the triangular faces in common), and with the small dodecahemidodecahedron (having the decagonal faces in common).
Icosidodecahedron
Small icosihemidodecahedron
Small dodecahemidodecahedron
See also
• Pentakis icosidodecahedron
• List of uniform polyhedra
References
1. Maeder, Roman. "49: small icosihemidodecahedron". MathConsult.
External links
• Weisstein, Eric W. "Small icosihemidodecahedron". MathWorld.
• Uniform polyhedra and duals
| Wikipedia |
nLab
The nLab is a wiki for research-level notes, expositions and collaborative work, including original research, in mathematics, physics, and philosophy, with a focus on methods from type theory, category theory, and homotopy theory. The nLab espouses the "n-point of view"[1] (a deliberate pun on Wikipedia's "neutral point of view") that type theory, homotopy theory, category theory, and higher category theory provide a useful unifying viewpoint for mathematics, physics and philosophy. The n in n-point of view could refer to either n-categories as found in higher category theory, n-groupoids as found in both homotopy theory and higher category theory, or n-types as found in homotopy type theory.
Overview
The nLab was originally conceived to provide a repository for ideas (and even new research) generated in the comments on posts at the n-Category Café, a group blog run (at the time) by John C. Baez, David Corfield and Urs Schreiber. Eventually the nLab developed into an independent project, which has since grown to include whole research projects and encyclopedic material.[2]
Associated to the nLab is the nForum, an online discussion forum for announcement and discussion of nLab edits (the analog of Wikipedia's "talk" pages) as well as for general discussion of the topics covered in the nLab. The preferred way of contacting the nLab steering committee is to post on the nForum.[3] An experimental sub-project of the nLab is the Publications of the nLab, intended as a journal for refereed research articles that are published online and cross-hyperlinked with the main wiki.
The nLab was set up on November 28, 2008 by Urs Schreiber using the Instiki software provided and maintained by Jacques Distler. Since May 2015 it runs on a server at Carnegie Mellon University that is funded in the context of Steve Awodey's Homotopy Type Theory MURI grant.[4] The system administrator is Richard Williamson. The domain ncatlab.org is owned by Urs Schreiber.
The nLab is listed on MathOverflow as a standard online mathematics reference to check before asking questions.[5] Many questions and answers link to the nLab for background material.[6] It is one of two wikis mentioned by the mathematical physicist John C. Baez in his review of math blogs for the American Mathematical Society.[7]
There is an informal steering committee, which "doesn't run the nLab",[8] but exists in order to resolve issues that would cause the whole project to run into trouble.
See also
• MathOverflow
References
1. nPOV in nLab
2. Urs Schreiber, What is... the nLab?
3. Steering committee in nLab meta
4. Awodey, Steve (29 April 2014). "HoTT awarded a MURI". Homotopy Type Theory. Retrieved 8 August 2020.
5. MathOverflow, 1.0 'How to ask' page. Archived on 2013-06-04.
6. MathOverflow, Results for a search for 'nlab'. As of 2018-12-11 there are over 800 results.
7. John C. Baez, "Math Blogs", Notices of the American Mathematical Society, March 2010
8. Steering committee in nLab meta
External links
• nLab
• nForum
• Publications of the nLab
| Wikipedia |
Multi-dimensional geospatial data mining in a distributed environment using MapReduce
Mazin Alkathiri ORCID: orcid.org/0000-0003-1195-55501,
Abdul Jhummarwala2 &
M. B. Potdar2
Journal of Big Data volume 6, Article number: 82 (2019)
Data mining and machine learning techniques for processing raster data typically consider a single spectral band of data at a time, and the individual results are combined to obtain the final output. The essence of the related multi-spectral information is lost when the bands are considered independently. The proposed platform is based on the Apache Hadoop ecosystem and supports analysis of large amounts of multispectral raster data using MapReduce. A novel technique of transforming the spectral space to the geometrical space is also proposed. The technique allows multiple bands to be considered coherently. The results of clustering 10^6 pixels of multiband imagery have been tested against widely used GIS software, and other machine learning methods are planned to be incorporated in the platform. The platform is scalable to support tens of spectral bands. The results from our platform were found to be better, and are also available faster due to the application of distributed processing.
Satellites orbiting the Earth with their remote sensing capabilities capture information about the geography of the Earth in the form of remotely sensed images. These images are representations of the Earth's surface as seen from space and contain intensity values for physical quantities such as the solar radiance reflected from the ground, emitted infrared radiation or backscattered radar intensity [14]. This information is captured by multiple sensors on board the satellites, which capture radiation at various wavelengths, and is provided in the form of multispectral raster data. The use of multiple sensors for the same geographic area captures various types of information, including thermal imaging (infrared), visible radiation (blue, green and red), etc., and is stored as individual bands [2]. Multi-spectral and multi-dimensional data is usually available in the form of multi-band georeferenced tagged image file format (GeoTIFF) files, an extension of the TIFF format. A Landsat 7 image comes in the form of a GeoTIFF file consisting of 8 spectral bands, and each spectral band stores a different wavelength scattered or emitted from the Earth's surface. The earlier GeoTIFF standard was limited to 4 GB of raster data; it has been superseded by the current Big GeoTIFF standard, which allows storage of image files larger than 4 GB in the TIFF container [17]. This was required due to the increasing spatial resolution and the number of concurrent bands that need to be stored for a geographic area. Giga-pixel resolution (10^9 pixels) images are also widely available from domains such as bio-technology and forensics and are likewise stored in the Big GeoTIFF format. Organizing and managing this kind of data is in itself a huge task, and processing it requires designing parallel and distributed systems that allow for faster processing of terabytes of data and provide the results in a limited amount of time.
A raster image represents geographic objects in a two-dimensional scene as a two-dimensional array of individual picture elements, called pixels, arranged in columns and rows [45]. Each pixel individually represents information about an area on the Earth's surface. The area is described by an intensity value and a location address in the two-dimensional image: the intensity value is the measured reflectance, while the location is a (longitude, latitude) pair for a geo-referenced image [43]. A single pixel in a multiband image has several values, one for each sensor that captured information for that geographic location. When used in conjunction for geospatial analysis, all of these bands provide a more accurate representation of the phenomena on the Earth's surface. There are several techniques for storing and organizing the multiband data (pixels) of an image in binary files, such as band sequential (BSQ), band interleaved by pixel (BIP), and band interleaved by line (BIL). The BIL format stores the data of the first pixel from all the different bands in the first row, the data of the second pixel from all the different bands in the second row, and so on [20]. One example of such a format is the data from the French SPOT satellite (Satellite Pour l'Observation de la Terre, which translates to Satellite for Observation of the Earth) [64]. This study uses a custom input format, similar to BIL, to overcome some of the difficulties faced when processing such binary data formats in a MapReduce environment.
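The three interleaving schemes can be viewed as simple axis permutations of a band/row/column array. The following NumPy sketch (a toy 2×2 scene with 3 bands, not actual SPOT data) illustrates how BSQ, BIL and BIP lay out the same pixels:

```python
import numpy as np

# BSQ: all pixels of band 1, then band 2, ... -> shape (bands, rows, cols)
bsq = np.arange(12).reshape(3, 2, 2)  # 3 bands, 2x2 scene

# BIL: each image row stored once per band -> shape (rows, bands, cols)
bil = bsq.transpose(1, 0, 2)

# BIP: all band values of pixel 1, then pixel 2, ... -> shape (rows, cols, bands)
bip = bsq.transpose(1, 2, 0)

# In BIP the band values of a single pixel are contiguous
print(bip[0, 0])  # → [0 4 8], the top-left pixel across all three bands
```

The conversion is metadata-only in NumPy (a view, no copy), which is why interleaved layouts can be re-read cheaply when preparing input splits.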
There is a separate section ("Geometrical space to spectral space (preparation phase)") discussing the details and the data format required as input to the developed mining framework.
The paper has the following structure. A review of the advancements in Big Geospatial data mining has been presented in "Related works" section. The novel approach of converting multispectral data to geometrical space is discussed and developed in "Proposed methodology" section. "Result and discussion" section provides an analysis summary of the obtained results. Finally, "Conclusion and future work" section concludes the paper and provides directions for further research.
Due to the requirements of various applications related to planning and decision making, the Landsat 7 program was launched in 1999 [29, 36]. These applications include land-use change analysis, environment conservation and impact assessment, wildlife habitat mapping, disaster management, urban sprawl analysis, agriculture and horticulture, natural resource management and monitoring, etc. The Landsat 7 program served to build a complete temporal archive of cloud-free images of the Earth and is still active after the launch of its successor, Landsat 8, in 2013 [15]. The Earth's surface as depicted in true colour on widely used web mapping services like Google Maps/Earth, Bing Maps, Yahoo Maps, etc., is based on colour-enhanced Landsat 7 satellite imagery. In addition to satellite imagery, geospatial data is also being acquired by aircraft, unmanned aerial vehicles (drones) and ground-based operations such as land surveys.
Big-geospatial data
Collectively, the geospatial data available from these sources has grown into petabytes and increases by terabytes every day [24]. The growth in data sources and acquisition has been exponential compared to the development of processing systems that can process the data in real time. A large amount of processed geospatial data is available for virtual globe applications from NASA World Wind, temporal datasets archived at Google Earth Engine, etc. Besides these, online crowd-sourcing efforts such as OpenStreetMap [5, 30] and Wikimapia have also assimilated terabytes of geospatial data. The data available from these efforts may have been derived from satellite imagery but is only applicable to a few applications, such as routing and navigation. USGS [47], an organization established for the development of public maps and geo-sciences expertise, has started providing access to applications and data related to disaster management during earthquakes, landslides, volcanoes, etc., but offers limited support for processing related to other planning and decision-making applications. Private organizations such as Earth Observation System (EOS) have started providing automated on-the-fly earth observation imagery processing and analysis. Their products include real-time processing with classic GIS algorithms [21] on several of the open data sets available from earth observation satellites.
The amount of geospatial data available is not just an increase in size; the availability of higher resolution has also increased the complexity of processing it and led to the geospatial "Big Data" phenomenon. According to Bhosale and Gadekar [8], the term 'Big Data' describes innovative techniques and technologies to capture, store, distribute, manage and analyze petabytes or larger-sized datasets with high velocity and different structures. It refers to data sets, or combinations of data sets, whose size (volume), complexity (variability), and rate of growth (velocity) make them difficult to capture, manage, process or analyze with conventional technologies and tools. It has been stated for geospatial data in [48] that "the size of spatial big data tends to exceed the capacity of commonly used spatial computing systems owing to their volume, variety and velocity", which places the amount of spatial data available today, together with the complexity of the operations to be performed on it, squarely within the boundaries of the big data problem. The authors in [44] have confirmed that spatial data are large in quantity and complex in structures and relationships. The study in [49] draws our attention to spatial interaction, spatial structure and spatial processes, in addition to the spatial location which forms the basis of any spatial processing system.
The richness of information contained in raster data is only limited by the number of captured bands and its resolution. To derive the full benefits by processing such data it has become of utmost importance to overhaul existing multi-dimensional approaches and consider the geospatial characteristic of the data. This will not only ease and simplify the way geospatial data is processed and analyzed but will also allow to further exploit the available richness of data. The variety in attributes that can be gathered from multiple spectrums for a geographic feature must be studied, visualized, interpreted and mined so as to extract qualitative, meaningful, useful information and new relationships. The results can provide insights into accurate geographic phenomena which is not available from analysis of individual bands. With the accumulation of large amount of data comes the difficult challenge of processing it and derive meaningful information which can be used for planning and decision making. The main aim of this work is to discover hidden knowledge from big geo-spatial data by considering multiple dimensions collectively for a geographical area rather than processing the bands individually. The novel approach of converting from spatial space to geometrical space preserves the essential multispectral characteristics of the data. The work addresses the shortcomings of existing approaches while processing big geospatial data and new distributed techniques required for processing both raster and vector data have been presented. In the present work, k-means clustering has been described in detail. The developed techniques can be adapted to several of the spatial data mining tasks including spatial prediction; spatial association rule mining; and temporal geo-visualization.
For processing raster data, image processing techniques are well developed and available in open-source packages such as OpenCV and Scilab, as well as in closed-source packages and libraries. These are, however, limited in scale: processing giga-pixel images such as large multiband GeoTIFF files requires tens of hours if not days. This inhibits the discovery of important knowledge, the real-time provision of which may be highly useful in applications such as disaster relief. Knowledge Discovery in Databases (KDD) is defined as the process of discovering useful knowledge from a collection of data; it is closely related to data mining and is important for spatial data as well [35, 45]. The data mining process has been depicted in Fig. 1. It includes data preparation and selection, data cleansing, incorporating prior knowledge of data sets and interpreting accurate solutions from the observed results [63]. The data mining life cycle (DMLC) proceeds from understanding the inputs or requirements, through formation of the system, to the final stage of deployment. Each of the depicted phases may be repeated in case the requirements change. Geospatial data mining is an extension of the classical data mining approach with the addition of a geospatial component, which requires the application of complex image processing and spatial data processing techniques.
Data mining life cycle
Big-geospatial data processing
The classical data mining approach is no longer fit for processing of Big Data and has been modified and adapted by many frameworks which been developed to utilize the computing and storage available from distributed computing devices [37]. Big geo-spatial data adds another level of complexity to this Big Data ecosystem which now also requires considering the spatial and geographical location. This has furthered the complexity big data challenge [66]. A framework such as GS-Hadoop [31] can process millions of geospatial vector features in a matter of minutes but is limited to only processing vector data. De Smith et al. [19] have addressed the full spectrum of spatial analysis and associated modeling techniques that were available at the time with widely used geographic information systems (GIS) and associated softwares. The existing geospatial data processing systems are overwhelmed with the amount of data available and the complex operations required to be performed demands urgent development of tools capable of managing and analyzing such Big geospatial data [32]. Bradley et al. [12] aim to reduce the size of the data to be processed by identifying regions of the data that can be compressed, regions that must be kept, and regions that can be discarded. The transmission of high resolution raster images over low-bandwidth connections requires a great amount of time. This problem can be mitigated to a little extent by transmitting a series of low resolution approximations which converge to the final image [52]. Low bandwidth connections are no longer of concern due to the development of faster networks and internet bandwidth available to gigabit speeds for organizations [33, 54].
The above mentioned studies address a few of the shortcomings of traditional geoprocessing while some others [25, 68] extend the geoprocessing functionality to work upon parallel and distributed processing systems. Beside the complex processing of geospatial data, a considerable amount of work has been done on use of multidimensional data structures in information processing systems (IPS), the applications of which have been in fields of business analysis, astronomy, geomatics, bioinformatics, etc. [9]. The term "Multidimensional" essentially describes the way in which numerical information can be categorized and viewed [16]. It has been already established that large geospatial databases consist of multidimensional information. Several multidimensional models have also been proposed for establishing multidimensional databases (MDB) and on-line analytical processing (OLAP) [55]. It has also been stated that the traditional database systems are inappropriate for storage and analysis of multidimensional data since these systems are optimized for online transactional processing (OLTP) in which an enormous number of concurrent transactions containing normally few records are involved. Multidimensional data cannot be stored in OLTP databases. Geodatabases store multidimensional geospatial data with associated vector attributes and features for the raster data [6].
Distributed processing of geospatial data
The distributed data processing framework MapReduce was first introduced by Google and later it was incorporated into Hadoop as its strong capability [46, 60]. Apache Hadoop is an open-source software for reliable, scalable, distributed computing on commodity hardware [27, 56]. Hadoop is one of the most widely used distributed processing frameworks developed to address the challenges of big data. The framework is extensible and can be adapted to support big geospatial data. The main concept of the framework is segregated in two parts, viz., the Hadoop Distributed File System (HDFS) for storing data and the MapReduce programming model to process the data which is usually stored on HDFS.
The framework has subsequently been developed by the Apache Software Foundation. Apache defines Hadoop as a software library framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models [26]. It is important to highlight that Hadoop initially only supported MapReduce-type applications but has later been extended to support other programming paradigms [28]. Hadoop is capable of storing and managing large volumes of data efficiently, using a large number of commodity machines as nodes forming a Hadoop cluster. The same cluster is used for processing the data stored locally on the nodes to reduce network communication. Hadoop can be used for many applications involving large volumes of geospatial and spatio-temporal data analysis, biomedical imagery analysis, and the simulation of various computationally intensive physical, chemical and biological processes [2, 13, 18, 38].
The MapReduce model of programming has become one of the best ways to process big data, which is inherently stored by Hadoop on its own distributed file system (HDFS) [10, 11]. The MapReduce model manages the whole process, from receiving the data and processing it to aggregating the results into a single output. It takes care of distributing the data and managing the distributed resources throughout the whole process. Applications which work with big data benefit hugely from HDFS, as it provides high-throughput access and streaming capabilities for large amounts of data. It has been developed to be fault-tolerant, can run on cheap commercial off-the-shelf (COTS) hardware and supports streaming data. The main MapReduce phases, the Map and the Reduce phases, are depicted in Fig. 2.
Schematic diagram for processing big data using Map and Reduce
The key-value approach used by the Map phase groups input values with their associated keys. The keys, along with their sets of values, are sent to the Reduce phase, where the required functions are applied on those groups of values to produce the needed output. There are other phases in MapReduce, such as the Shuffle, Sorting, Partitioner and Combine phases. The Combiner collects different (key, value) pairs, groups similar keys and sends them to the node required by the reducer. Keys and values are sorted during the Sorting phase. An appropriate partitioning logic can also be supplied to ease the transfer of data between the nodes.
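The key-value flow described above can be emulated in a few lines of plain Python. This is an illustrative sketch of the Map, Shuffle and Reduce phases only; the per-cluster mean aggregation is a hypothetical example, not the paper's implementation, and a real Hadoop job would express the same functions through the MapReduce API:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, value) pairs: here, (cluster_id, pixel_value)
    for cluster_id, value in records:
        yield cluster_id, value

def shuffle(pairs):
    # Group values by key, as the framework does between Map and Reduce
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Apply the aggregate function (mean, here) to each key's values
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

records = [(0, 2.0), (1, 4.0), (0, 4.0), (1, 6.0)]
result = reduce_phase(shuffle(map_phase(records)))
print(result)  # → {0: 3.0, 1: 5.0}
```

A Combiner would simply be a reducer run locally on each mapper's output (emitting partial sums and counts instead of means) to cut down the data shuffled across the network.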
To identify spatial patterns, the most well-known statistical techniques are based on the concept of intra- and inter-cluster variances (like the k-means algorithm or the Empirical Orthogonal Function) [7]. Various classification and clustering algorithms are supported by Mahout, a data mining platform built on Hadoop, including k-means, canopy, fuzzy k-means, naive Bayes, etc. [26]. It should also be noted that these algorithms can be easily used with any framework based on Hadoop, such as Apache Spark.
The k-means algorithm groups a set of data into K sub-groups, i.e., K clusters, where the data can be in N dimensions, and within each cluster the sum of squares is minimized. Zhang et al. [67] improve the selection of the initial focal point and the determination of the K value through simulation experiments, while [1] propose a new cost function and distance measure based on the co-occurrence of values that works well for data with mixed numeric and categorical features. Sarode and Mishra [50] have noted that "It is not practical to require that the solution has minimal sum of squares against all partitions", except if the size of the data and the number of dimensions are very small and the number of clusters K is two.
Eldawy and Mokbel [23] developed SpatialHadoop, a comprehensive extension to Hadoop with support for geo-spatial vector data. It adds spatial constructs and awareness of spatial data inside the Hadoop code base. SpatialHadoop is composed of four main layers: language, storage, MapReduce and operations. The language layer provides Pigeon, a high-level SQL-like language. The storage layer employs a two-level index structure of global and local indexing. The MapReduce layer introduces two components, SpatialFileSplitter and SpatialRecordReader. Finally, the operations layer implements some basic spatial operations like range query, k-nearest neighbours (kNN), spatial join, etc. SpatialHadoop is meant for spatial data but only supports single-dimension vector data, and it does not provide any of the data mining (classification and clustering) techniques listed above which may be required for processing satellite imagery [3].
Mennis and Guo [39] described the urgent need for effective and efficient methods to extract unknown and unexpected information from datasets of unprecedentedly large size, with millions of observations, high dimensionality of hundreds of variables, heterogeneous data sources and other complex attributes. Yao [65] stressed the development of spatial data infrastructure and of efficient and effective spatio-temporal data mining methods. CLARANS [42] is based on randomized search and builds on PAM and CLARA, which are used for cluster analysis. Vatsavai et al. [58] studied the IO and CPU requirements of spatial data mining algorithms for analyzing big spatial data and presented applications in bio-mass monitoring, complex object recognition, climate change studies, social media mining and mobility. STING [61] uses a hierarchical statistical information grid based approach for spatial data mining, and STING+ [62] extends the approach by suspending the effects of updates in the hierarchy until their cumulative effect triggers propagation to multiple layers of the hierarchy. Bédard et al. [4] highlighted the requirement of efficient spatial data mining methods to cope with the huge size of spatial data, which is increasing rapidly in spatial data warehouses. To analyse the collected data at multiple resolutions, it is necessary to develop techniques whose performance is independent of the size of the spatial data. A clustering model represented using choropleth maps to identify spatial relationships in the clustering obtained by spatial data mining has been developed using ArcView (a desktop GIS), highlighting the importance of correct visualization of geospatial data [41]. The PixelMap [34] technique combines kernel-density-based clustering with visualization for displaying large amounts of spatial data. Visualization of big spatial datasets at various levels is an important requirement.
The output from the proposed mining techniques can be scaled at various levels and passed to Desktop GIS for visualization.
Proposed methodology
In this paper, the development and implementation of a distributed framework for mining multiband raster geospatial data has been described. The framework has been evaluated using the k-means clustering function, which has been updated to support our multi-dimensional data format in the MapReduce environment. The proposed framework also supports multiple distance functions, such as the Euclidean distance and the Manhattan distance, and it is simple to extend it to support other distance measures, such as the Mahalanobis distance, depending on the number of dimensions involved [51]. K-means clustering is a method commonly used to automatically partition a dataset into k groups. It proceeds by selecting k initial cluster centers and then iteratively refining them as follows [59]:
Each instance di is assigned to its closest cluster center.
Each cluster center cj is updated to be the mean of its constituent instances.
The algorithm converges when there is no further change in assignment of instances to clusters.
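The iterative refinement described above can be sketched in a few lines of pure Python. This is an illustrative, single-machine version (function names and the fixed seed are our own, not from the paper); the distributed MapReduce variant is described in the following sections. Both distance functions supported by the framework are shown:

```python
import random

def euclidean(p, q):
    # Squared Euclidean distance suffices for nearest-centroid comparison.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def kmeans(points, k, dist=euclidean, max_iter=100, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its members, until no
    assignment changes (convergence)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # k initial cluster centers
    assignment = None
    for _ in range(max_iter):
        new_assignment = [min(range(k), key=lambda j: dist(p, centroids[j]))
                          for p in points]
        if new_assignment == assignment:       # no instance changed cluster
            break
        assignment = new_assignment
        for j in range(k):
            members = [p for p, a in zip(points, assignment) if a == j]
            if members:                        # keep old centroid if empty
                dim = len(members[0])
                centroids[j] = tuple(sum(m[d] for m in members) / len(members)
                                     for d in range(dim))
    return centroids, assignment
```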
In the present work, multispectral (multi-dimensional) geospatial data derived from Landsat 8 have been used. To derive the experimental results, four to six spectral bands have been taken from several satellite images. The data are transformed for use with the developed mining platform. In the first stage, the pixel values of the four different bands are converted into the indexed spatial representation; this is discussed further in the "Geometrical space to spectral space (preparation phase)" section of the paper. In the second stage, k-means clustering is applied in the MapReduce distributed mode, and finally the data are returned to their initial form for visualization. Several existing implementations focus on increasing the efficiency of k-means, supervised or unsupervised, and several others support multi-dimensional k-means clustering, but none of them directly consider the information available in multispectral format. The present work can easily be extended to any other classification or clustering technique and to any number of bands with simple modifications to support the proposed index file format. It forms one of the first distributed implementations for mining multispectral data (supporting multiple dimensions) collectively, and the techniques described are candidates for inclusion in the Mahout machine learning framework or for raster processing support in other distributed geospatial processing systems. The performance of the mining platform has also been found to be satisfactory with respect to the amount of resources allocated, as highlighted in the succeeding sections.
Geometrical space to spectral space (preparation phase)
SpatialHadoop supports working with the geometrical locations of different features and implements spatial operations according to the shape type of the geospatial data, which may be a point, line, or polygon (rectangle). It does not support raster data. This study deals with the special case of multispectral raster data available from Landsat 8 imagery, in which every pixel can have 11 different values from the different bands; a subset of these bands is used. All of these values are considered as the positional values of that pixel in different dimensions. In this way, all the different values of a single pixel are used to form a multi-dimensional spatial shape, e.g., a polygon in a multi-dimensional space. The data in the geospatial mining process can then be used to perform the desired spatial operations. Table 1 describes the sample dataset.
Table 1 Subset of the bands selected from the Landsat 8 image for experimentations
Dataset description: predominantly limestone mining area
No. of bands: 6 (subset from Landsat 8 image has been taken)
Area: Aravalli Fort Hills (North of Gujarat, India)
Geographic Location:
Lat, Lon: 24° 00′ North, 72° 54′ East
Landsat 8 (path): (148,149); Landsat 8 (row): (43,44)
For the implementation of distributed k-means supporting multi-dimensional data, four bands from a raster image have been considered initially. The polygon thus formed with four points (one in each dimension) can also be reduced to a two-dimensional rectangle. Spatial operations can then be applied straightforwardly to this form of data. The following formula represents the pixel values for four bands, which are converted to a polygon and represented as a rectangle in two dimensions; this is referred to as indexed pixel data, and the process is depicted in Fig. 3.
$$\left( {{\text{X}}_{1} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{2} } \right) \to {\text{Polygon}} \;\left( {{\text{X}}_{1} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{1} ,{\text{X}}_{2} ,{\text{Y}}_{2} ,{\text{X}}_{1} ,{\text{Y}}_{2} ,{\text{X}}_{1} ,{\text{Y}}_{1} } \right)$$
$$\left( {45,46,47,48} \right) \to {\text{Polygon}}\;\left( {45,46,47,46,47,48,45,48,45,46} \right)$$
Phase-1 indexing
Figure 4 represents the total workflow, which is divided into three main stages. In the first stage, the data is transformed into a 4-dimensional dataset in which each pixel essentially becomes a rectangle owing to the four values obtained from the bands of the image. The four values for each pixel from every band are put into a resultant file, which contains a matrix resembling the image's pixels. The process can quickly iterate over thousands of rows and columns, and in the resultant file each row contains the pixel values for the same geographic location from the four different bands. These four values are represented as shown in Fig. 4a, together with an index value unique to every pixel. The process is extensible to support N bands.
The image after the preparation (a, c) and after being clustered (b, d)
The proposed mining model demonstrates the application of k-means clustering to multi-dimensional images in a MapReduce environment. Figure 5 represents the clustering and editing functions available for the input format. This work can further be extended to support other geospatial operations available with SpatialHadoop, or can alternatively be integrated with Mahout. The MapReduce implementation of the workflow and the support for data stored on the Hadoop Distributed File System (HDFS) open opportunities mainly for supporting the big data ecosystem.
Phase-2 K-means clustering
Indexing all the pixels and assigning the geographic location
Map Phase
The proposed k-means clustering model goes through the two main phases of the MapReduce programming model, the Map and the Reduce phases [69]. In the Map phase, the k-means model takes two files as input: the dataset file and the initial centroid file. It calculates the distance from each point in the input data to each of the initial centroids, so that the nearest centroid to each point in the whole dataset is obtained. The Map phase then sends to the Reducer, for each point, the point's values together with its nearest centroid; the output from the Mapper to the Reducer task thus includes the nearest centroid as the key together with the point values.
Multi-spectral k-means clustering depicting transformation of values from Map to Reduce phases
Input: A list of < key 1, value 1 > pairs and k global centroids, where value 1 is the content of one line of the input file, containing the multiband values of a pixel and its location.
Output: A list of < key 2, value 2 > pairs, where key 2 is the index of the cluster and value 2 comprises the point values and the location belonging to that cluster (key 2).
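The Map step can be sketched as a single function (an illustrative pure-Python stand-in for the MapReduce Mapper; the function name and centroid layout are assumptions). It parses one indexed-pixel line, finds the nearest centroid, and emits the cluster index as the key:

```python
def kmeans_map(line, centroids):
    """Map step: parse one indexed-pixel line ("v1,...,vN,row,col"),
    find the nearest centroid by squared Euclidean distance, and emit
    (cluster_index, (band_values, pixel_location))."""
    fields = [int(v) for v in line.strip().split(",")]
    values, location = fields[:-2], tuple(fields[-2:])
    dists = [sum((a - b) ** 2 for a, b in zip(values, c)) for c in centroids]
    nearest = dists.index(min(dists))
    return nearest, (values, location)
```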
Reduce phase
The Reduce phase receives each key with its attached group of values, i.e., all the points from the input data for which the corresponding key is the nearest centroid. The main job of the Reducer is to calculate, for each group of points, the new optimal centroid: the mean, which minimizes the average distance to all the elements of that group. The Reducer produces as final output the new optimal centroids, which, along with the input data file, are passed through the Map and Reduce phases in the next iteration. The process is repeated until all the centroids converge.
Input: A list of < key 3, value 3 > pairs, where key 3 is the index of the cluster and value 3 is the list of point values belonging to that cluster.
Output: There are two cases, depending on whether the running iteration is the last one:
For any iteration other than the last, the output is the newly calculated set of centroids.
For the last iteration, a list of < key 4, value 4 > pairs is added to the output, where key 4 is the index of the cluster and value 4 is the position of the pixel in the image, as shown in Fig. 4c.
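The Reduce step mirrors the Map step above. A minimal pure-Python sketch (name and grouping representation are ours): it averages the member points of one cluster to produce that cluster's new centroid for the next iteration:

```python
def kmeans_reduce(cluster_index, grouped_values):
    """Reduce step: average the member points of one cluster to produce
    the new centroid for the next iteration. grouped_values is the list
    of (band_values, pixel_location) pairs emitted by the Map step."""
    points = [v for v, _loc in grouped_values]
    n, dim = len(points), len(points[0])
    centroid = tuple(sum(p[d] for p in points) / n for d in range(dim))
    return cluster_index, centroid
```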
The final output of the model consists of two main files. The first file is the set of final centroids (Fig. 4c). The second file contains the coordinates of each point in the input dataset along with the number of the cluster to which that point belongs, as shown in Fig. 4d. Using this last output file from the Reduce phase, the clustered image can be reconstructed for visualization purposes, which is also done using MapReduce. Figure 6 represents the final clustering output from the MapReduce model, and Fig. 7 describes the phases for processing the image.
8k image file after being clustered and plotted (cluster visualization from ENVI)
Phase-3 filtering and topology study
Spectral space to geometrical space
The input multiband raster image is converted from geometrical space to the spectral space for processing purposes. Values from all the available bands from the image are considered. Those values represent different values of one pixel in an image and the same has been explained earlier. The location of that pixel is also added as the index to that same line where it has the pixel's values similar to BIL format. Due to the present work, it has become possible to study and analyze multiband raster images without the need to process different bands individually and infer the phenomena for a particular area. After processing the image it is required to convert the obtained output from ASCII to image from the geometrical space to the spectral space, to be able to visualize the image or perform further processing and annotations that might be required after the mining process. The next part of the paper discusses several image processing functions that have been developed to work with multiband raster data on Hadoop.
Image filtering and after processing phase
Mode filtering
When the mode filtering [57] model is applied to the raster image, the most common value inside the window around each pixel is assigned to the central pixel. Programs have been developed to apply the mode filtering on a distributed platform; the window size needed by the algorithm can be specified at runtime. Mode filtering smoothens the edges of the polygons and at the same time reduces noise. Figure 8 shows the working of the filter with a window size of 5 × 5 pixels: the filter finds the most common value inside the window and assigns it to the window's central cell. The most common value in the window, i.e., 8, will replace all the 9 values in the 5th column.
Mode filtering process
To execute the mode filtering, the following four arguments are required:
<The input file>: the clustered image
<File size>: the dimensions of the image
<The window size>
<The output file name>
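The per-pixel operation behind the tool can be sketched as follows (an illustrative single-machine version with assumed names; the paper's implementation distributes this work over MapReduce). Each pixel is replaced by the most common value in the window around it, clipped at the image borders:

```python
from collections import Counter

def mode_filter(image, window=5):
    """Replace each pixel with the most common value in the
    (window x window) neighbourhood around it, clipped at the borders."""
    rows, cols, r = len(image), len(image[0]), window // 2
    out = [row[:] for row in image]
    for i in range(rows):
        for j in range(cols):
            neighbourhood = [image[y][x]
                             for y in range(max(0, i - r), min(rows, i + r + 1))
                             for x in range(max(0, j - r), min(cols, j + r + 1))]
            out[i][j] = Counter(neighbourhood).most_common(1)[0][0]
    return out
```

Applied iteratively, as the text notes, this progressively smooths cluster boundaries while removing isolated noisy pixels.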
In Fig. 9, a considerable difference can be noticed after applying the mode filter with different window sizes. The mode filtering can also be applied iteratively. An example image of size 8000 × 8000 pixels is depicted in Fig. 9a, and Fig. 9b, c, d shows the results of mode filtering with windows of size 5 × 5, 9 × 9 and 11 × 11 pixels, respectively, on the same input image. One may select a window size appropriate for the data, to remove the desired amount of noise from the input image.
Image after applying mode filtering with different window sizes
Boundaries highlighting enhancement
An application for vectorization has been built to derive the boundaries of all the identified clusters, so the output can be used with desktop GIS software for further analysis. Options to filter a certain cluster or a group of clusters according to requirements are available and can be specified as parameters when executing the vectorization tool. Two examples are presented in Figs. 10 and 11: Fig. 10 shows the polygons of Cluster No. 7 after filtering the results of the image of size 8000 × 8000 pixels, and Fig. 11 shows the polygons of Cluster No. 3 filtered from the image of size 4000 × 4000 pixels. In both figures, the left side contains the whole image and the right side contains a small part of the image (shown zoomed in). The visualization is accomplished using QGIS.
Highlighting the boundaries of Cluster No. 7 in 8k image
Highlighting the boundaries of Cluster No. 3–4k image
An application for further editing has also been developed, including clipping any part of an image, splitting an image (horizontally or vertically), and joining adjoining images. Figure 12 shows two examples of clipping different parts of an image. The clipped part on the left is 1000 × 1000 pixels, and the corresponding small part in the top left corner of the complete image is highlighted with red boundaries. On the far right, another polygon with blue boundaries is clipped; the size of this clipped raster is 2000 × 1000 pixels.
Clipping two parts of different sizes from an image
An editing tool for splitting images into multiple parts is also integrated with the data mining framework. This tool has two modes: vertical splitting and horizontal splitting. After selecting the appropriate splitting method, the column(s) and/or row(s) for the split can be specified. Alternatively, the number of partitions or splits can be specified, and the application will automatically calculate the relevant rows and columns to be passed as arguments. Examples for both cases are presented. Figure 13 shows vertical splitting in three windows: the center window (highlighted in blue) shows the full image, whereas the right and left windows show the new images created after splitting the original image. Figure 14 depicts horizontal splitting, in which the left image represents the original image and the horizontally split parts are shown on the right.
Vertical splitting of the image in the middle
Horizontal-split of the image in the left
Join is the editing tool made for joining any two images; it also has different modes: right, left, up, and down. Two input images and the joining mode are passed as arguments, and the second image is attached to the first accordingly. This can be done before or after the mining phase, according to need. It is thus possible to process several images together after obtaining them from different sources, converting them to the proposed format, and finally joining them in a distributed environment. Figure 15 shows an example of an image that has been joined with itself using the duplicate copy → MapReduce join process.
Joining two copies of the same image
Cleaning the image by removing small polygons (or sub-clusters) is known as salt-and-pepper filtering [53]. This technique has been modified to work in a distributed environment and has been integrated with the mining platform. To clean the clustered image(s), a limit on the minimum polygon size is specified, and all sub-clusters below this threshold are removed. Such small sub-clusters can be safely ignored when performing spatial operations or calculating statistics for large geographic areas, and a clean output is obtained.
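The cleaning step amounts to connected-component filtering on the cluster label grid. A single-machine sketch (assumed names; 4-neighbour connectivity and the background label are our choices, not stated in the paper) that flood-fills each component and relabels the ones below the size threshold:

```python
from collections import deque

def remove_small_regions(labels, min_size, background=0):
    """Flood-fill connected components of equal cluster labels
    (4-neighbour) and relabel components smaller than min_size
    as `background`."""
    rows, cols = len(labels), len(labels[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [row[:] for row in labels]
    for i in range(rows):
        for j in range(cols):
            if seen[i][j]:
                continue
            # BFS over one connected component of identical labels.
            value, comp, queue = labels[i][j], [], deque([(i, j)])
            seen[i][j] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and not seen[ny][nx] and labels[ny][nx] == value:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if len(comp) < min_size:
                for y, x in comp:
                    out[y][x] = background
    return out
```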
In Fig. 16, an image of 8000 × 8000 pixels is shown after application of the techniques discussed above. Clustering has been applied, and the noise in the resultant image has been cleaned with a mode filter with a window of size 11 × 11 pixels. All small sub-polygons of size 100 × 100 pixels or less have been removed. The resultant output is still left with several small polygons that may not be needed for further study. An appropriate threshold can also be decided automatically, depending on parameters specified by the user: a percentage of sub-clusters can be removed, keeping only the polygons that cover the maximum amount of geographical area.
After removing shapes of size 100 × 100 pixels and less
Figure 17 represents results from the same image (of size 8000 × 8000 pixels) after application of the techniques discussed above. As before, mode filtering with a window size of 11 × 11 pixels is performed after clustering. The application then automatically cleans the small sub-clusters, keeping those that cover more than 90% of the geographic area of each cluster. The clustered data is filtered by automatically calculating the minimum polygon size, which may differ per cluster. E.g., in Cluster No. 1 (blue), the threshold size below which polygons are removed is 1000 pixels, whereas in Cluster No. 2 (green) the minimum polygon size is found to be 1694; the polygons representing more than 90% of the area in each cluster are not discarded.
Removing the smallest 10% of shapes from each cluster
Figure 18 represents output from the same input image with the same processing. For this sample, the biggest 10 sub-polygons from each cluster are kept, as these are most likely to represent the area of interest for further study. In each cluster, these top ten polygons represent a different percentage of the cluster's area. E.g., in Cluster No. 4 (purple) the top ten polygons represent 48% of the total cluster area, while in Cluster No. 10 (blue) they represent 77%.
The biggest 10 shapes from each cluster
Studying the polygons in the clusters
The binary topological relation [22] between two objects A and B is based on the intersections of the three parts of each object (the interior, boundary, and exterior) using the nine-intersection matrix, which is shown below:
$$\Gamma_{9} \left( {{\text{A}},{\text{B}} } \right) = \left( {\begin{array}{*{20}ll} {A^{o} \cap B^{o} } & {A^{o} \cap \partial B} & {A^{o} \cap B^{ - } } \\ {\partial A \cap B^{o} } & {\partial A \cap \partial B} & {\partial A \cap B^{ - } } \\ {A^{ - } \cap B^{o} } & {A^{ - } \cap \partial B} & {A^{ - } \cap B^{ - } } \\ \end{array} } \right) .$$
where \(A\) and \(B\) are the objects, \(A^{o}\) and \(B^{o}\) their interiors, \(\partial A\) and \(\partial B\) their boundaries, and \(A^{-}\) and \(B^{-}\) their exteriors.
In Fig. 19, two polygons taken from the 4k dataset after clustering are shown. The first polygon is highlighted in yellow and the other is gray. The topological relationship established by the application identifies that polygon number 114 is inside polygon number 739 without touching its boundary. The binary topological relationship between these two objects is the following:
$${\text{Case}}\;1{:} \,\,\Gamma_{9} \left( {114,739} \right) = \left( {\begin{array}{*{20}ll} {1 0 0} \\ {1 0 0} \\ {1 1 1} \\ \end{array} } \right).$$
Topological relationship for Case 1
In Fig. 20, the polygon numbered 245 on the left side of the diagram, belonging to Cluster No. 2, is outside polygon number 1277, which belongs to Cluster No. 4 and is shown on the right side of the diagram. These polygons touch each other's boundaries, and the binary topological relationship between the two objects is the following:
$${\text{Case}}\;2{:} \,\,\Gamma_{9} \left( {245 ,1277 } \right) = \left( {\begin{array}{*{20}ll} {0 0 1} \\ {0 1 1} \\ {1 1 1} \\ \end{array} } \right).$$
In Fig. 21, two polygons have been selected and highlighted in yellow. Polygon No. 245 is outside Polygon No. 1270 and they do not touch each other's boundaries. The binary topological relationship between these two disjoint objects is the following:
$${\text{Case}}\;3{:} \,\,\Gamma_{9} \left( {245 ,1270 } \right) = \left( {\begin{array}{*{20}ll} {0 0 1} \\ {0 0 1} \\ {1 1 1} \\ \end{array} } \right).$$
An exhaustive list of relations between the largest polygons can be iteratively computed for topological study of the area. This list can then be further used to perform statistical studies and application of other machine learning approaches to derive interactions between different environmental factors and conditions.
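For axis-aligned rectangles (the reduced form used by the framework), the three cases above can be decided with simple interval comparisons. The sketch below is illustrative (function name is ours) and covers only the three relations appearing in the text: A strictly inside B, A and B disjoint, and A meeting B along their boundaries; the remaining DE-9IM relations are not handled:

```python
def nine_intersection(a, b):
    """Sketch of the 9-intersection matrix Gamma9(A, B) for two
    axis-aligned rectangles (x1, y1, x2, y2). Rows/columns follow the
    order interior, boundary, exterior, as in the matrix in the text."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    if ax1 > bx1 and ay1 > by1 and ax2 < bx2 and ay2 < by2:
        # A inside B, boundaries not touching (Case 1 in the text).
        return [[1, 0, 0], [1, 0, 0], [1, 1, 1]]
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        # Disjoint: only the exteriors intersect the other object's parts.
        return [[0, 0, 1], [0, 0, 1], [1, 1, 1]]
    if ax2 == bx1 or bx2 == ax1 or ay2 == by1 or by2 == ay1:
        # Meet: boundaries touch while the interiors stay apart (Case 2).
        return [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
    raise NotImplementedError("overlap and touching-containment not sketched")
```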
Result and discussion
Technical specification of the Hadoop cluster used for the experiments
The Hadoop cluster was set up on an HP ProLiant DL580 G7 server with 4 × Intel® Xeon® CPU E7-4870 @ 2.40 GHz, totalling 80 cores. The server is equipped with 512 GB of RAM. HPE 3PAR StoreServ served as the storage backend with 2 × 8 Gbps connectivity. A Hadoop (v2.6.0) cluster was configured on the server with 50 virtual machines. The storage capacity of the cluster is 2.7 TB, as represented in Fig. 22. The cluster consisted of one master machine (Name Node) and 50 data machines (Data Nodes). The Name Node is configured with 4 virtual processors and 12 GB RAM, whereas the Data Nodes were heterogeneously configured as follows:
1 data node (on Name Node) with 4 virtual processor and 12 GB RAM
38 data nodes with 1 virtual processor and 8 GB RAM
The storage capacity of the Hadoop cluster
Data set preparation
The data used to test the functionality of the mining framework consists of two multispectral images of different size for the same location as described in "Geometrical space to spectral space (preparation phase)" section.
The pre-preparation phase: It consists of converting the multispectral dataset into a two-dimensional image by indexing all the pixels obtained from the different spectral bands using each pixel's location. This consolidation of the multiple values of a single pixel, obtained from multiple bands, into a single file also makes the data easier to process irrespective of the number of bands in the image. As an example, the values of the first pixel taken from 6 different files (in the case of a 6-band image) are gathered into a single line along with the numbers of the row and column, which together form the index for that pixel. The process is repeated for all the remaining pixels in the image. Everything is performed using the MapReduce paradigm to utilize the potential of the Hadoop cluster.
New format: From the pre-preparation phase, the example image with 6 bands is converted into a two-dimensional image stored in an ASCII file, which looks like the following:
#Hadoop fs -cat bigdata-256bs/part-r-00000 | head -n 10
8650,8075,7650,13779,11029,8582,1,10001
10275,10484,11249,17210,16363,13831,1,1003
The ASCII file is comma-separated: in each line, the first six values represent the values of a pixel gathered from the different bands, whereas the last two values represent the index of that pixel (its geographic location). With this format it becomes easier to process the data for each pixel collectively using MapReduce.
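Serializing and parsing this format is a simple round trip. Two small helpers (names ours, for illustration) make the convention explicit and generalize to any number of bands:

```python
def to_indexed_line(band_values, row, col):
    """Serialize one pixel: N band values followed by its (row, col)
    index, comma-separated, matching the ASCII format shown above."""
    return ",".join(str(v) for v in list(band_values) + [row, col])

def from_indexed_line(line):
    """Split a line back into (band_values, (row, col))."""
    fields = [int(v) for v in line.strip().split(",")]
    return fields[:-2], (fields[-2], fields[-1])
```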
Benchmarking the Hadoop cluster (I/O)
As the storage backend consists of a single storage area network (HPE 3PAR StoreServ) with multiple disks, it is important to test and benchmark the throughput of HDFS. The benchmarking has been performed following several suggestions by Mukherjee et al. [40]: it is appropriate to use a distributed file system such as HDFS on top of a shared-disk infrastructure such as a storage area network, and the replication factor can be reduced to 1, as additional redundancy and fault tolerance are not required on such an enterprise storage system. Table 2 shows various statistics from the DFSIO benchmark, which ships with Hadoop.
Table 2 Benchmarking the cluster
Running K-means clustering
In the sections above, several functions provided by the mining framework have been tested with many multiband images. The current section discusses in detail the proposed MapReduce extensions to the k-means algorithm for working with multiband data in a geometric space. The results of clustering two multiband images of the same geographical location but different resolutions using our approach to k-means are described in detail. The proposed approach does not just perform clustering; with support for the various image processing techniques, it allows describing the geographical features in the image, and in particular the topological relationships between the different objects that exist in the image. Those functions have already been described earlier.
The multiband images are uploaded to HDFS, and an indexed file is generated using the technique discussed in the "Data set preparation" section using MapReduce. A parameter file containing the list of initial centroids is also provided for the uploaded image. In the subsequent testing, as the two images cover the same geographic location at different resolutions, the same set of initial centroids is used for clustering both. This also helps to compare the outputs from both images and to assess the quality of our clustering model, as further demonstrated in Table 7.
To test the performance of clustering, each image was clustered several times with different block sizes. The number of blocks an image is divided into depends on the size of the image and the block size specified in the cluster configuration. A small block size leads to a large number of blocks even for a small image, whereas a large block size is configured when it is desired that the image not be split into many blocks. A small block size is preferable for small images, so that several blocks are distributed across the cluster and its storage and computing capacity can be used; a large block size reduces network communication, since many small blocks need not be transferred across the cluster of 50 nodes. The clustering approach has been tested twice: once on a full Hadoop cluster of 50 nodes and once with just two nodes.
The performance of the approach for clustering the multispectral raster images in the Hadoop framework (a distributed environment), together with the related image processing techniques, is presented in Tables 3, 4, 5 and 6. Each table shows the elapsed time of the k-means clustering process and the average times of the map, shuffle, merge, and reduce phases, in addition to the total mapping time, with respect to the block size.
Table 3 Clustering (1.9 GB) raster image with a 50 node Hadoop cluster
Table 4 Clustering (1.9 GB) raster image with a 2 node Hadoop cluster
Table 5 Clustering (17.79 GB) raster image using a 50 nodes Hadoop-cluster
Table 6 Clustering 17.79 GB raster image using a 2 node Hadoop cluster
K-means using 50 node Hadoop cluster for Image No. 1
Table 3 shows several statistics for clustering Image No. 1, of 8000 × 6000 pixels, using the full Hadoop cluster of 50 nodes. The minimum elapsed time was recorded when the image was stored with a block size of 32 MB, followed by 64 MB and so on up to the maximum block size of 256 MB. The same is illustrated in Fig. 23: the average mapping time, average shuffling time, and total mapping time all increase with the block size, whereas the average merge and reduce times were not affected by the change in block size.
Clustering 1.9 GB raster image with a 50 nodes Hadoop cluster
Figure 23 clearly shows the increase in execution time with block size. From these statistics we can infer that increasing the block size reduces the number of blocks created from the image, so that only a few Hadoop nodes (processing machines) compute over the data. Several Data Nodes are not utilized at all, because no data is local to them, and this leads to an increase in the execution time.
K-means using 2 node Hadoop cluster for Image No. 1
The Hadoop cluster was then resized to keep only two Data Nodes. Table 4 presents the average map, shuffle, merge, and reduce times, in addition to the total mapping time, with respect to the block size. The execution time was least for blocks of size 64 MB, followed by 96, 128 and 256 MB, respectively. As Fig. 24 shows, the execution time increases with the block size.
Clustering 1.9 GB raster image with a 2 node Hadoop cluster
From the statistics of executing the k-means clustering model on the 50-node Hadoop cluster, available in Table 3, the fastest execution time is 4.1 min with a block size of 32 MB. From the statistics in Table 4, it can be seen that the execution time increased about threefold due to the limited number of nodes (two), with a minimum execution time of 11.5 min at a block size of 64 MB. Figure 25 illustrates the elapsed time for both cluster sizes with varying block size.
Comparing the elapsed time for running k-means with 1.9 GB raster image in 2 and 50 nodes Hadoop cluster
Comparing results of K-means between 2 nodes and 50 nodes Hadoop cluster for Image No. 1
Figure 26 shows a large increase in the time needed for shuffling the blocks as the block size grows. The shuffle time remains roughly the same up to a 64 MB block size for both the 2-node and 50-node Hadoop clusters. For the 50-node cluster, the shuffle time varies from 1 min to 6.1 min for 64 MB to 256 MB block sizes, respectively; for the 2-node cluster, the average shuffle time goes up to 13.5 min at 256 MB.
Comparing the average shuffle time for running k-means with 1.9 GB raster image in 2 and 50 nodes Hadoop cluster
As mentioned before, the processing time increases with bigger blocks of data, since bigger blocks mean fewer blocks overall. E.g., an image of 1 GB with a block size of 256 MB results in only 4 blocks, which means only four mappers can ever run to process the data from this image. For a large cluster, this leaves resources unutilized and increases the execution time.
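The relationship between file size, block size, and achievable map-task parallelism is simple arithmetic; a one-line helper (illustrative, name ours) makes it explicit:

```python
import math

def hdfs_blocks(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks a file occupies, which bounds the number
    of map tasks that can process the file in parallel."""
    return math.ceil(file_size_bytes / block_size_bytes)
```

For the 1 GB example above with 256 MB blocks, `hdfs_blocks(1 << 30, 256 << 20)` gives 4, so at most four mappers run regardless of how many Data Nodes are available.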
K-means using 50 nodes Hadoop cluster for Image No. 2
The second image (Image No. 2) covers the same geographical location at a different resolution. It is a 6-band raster image, and each band has 24,000 rows and 18,000 columns. In uncompressed form this image requires 4.83 GB of storage with a pixel depth of 2 bytes (24,000 rows × 18,000 columns × 6 bands × 2 bytes of data). After running the indexing method on this binary file, a plain-text ASCII representation (an indexed image) of size 17.76 GB is obtained. For further processing, the same initial set of centroids is provided as for the previous image, considering the same geographical location and ensuring that the k-means clustering technique receives the same input. The values presented in Table 5 and represented in Fig. 27 have been averaged over four runs of the clustering technique on the full cluster of 50 nodes. For each run, the block size is changed and the file is updated to follow the new block size. As the current image is much larger than Image No. 1, the block sizes have also been increased accordingly, to 128 MB, 256 MB, 512 MB, and 1024 MB.
Clustering 17.79 GB raster image with a 50 nodes Hadoop-cluster
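The storage figure quoted above follows from a direct calculation; a small helper (illustrative, name ours) makes the arithmetic explicit, using binary gigabytes (GiB), which is what the 4.83 GB figure corresponds to:

```python
def raster_size_gib(rows, cols, bands, bytes_per_pixel):
    """Uncompressed storage for a multiband raster, in GiB
    (rows x cols x bands x bytes per pixel / 2**30)."""
    return rows * cols * bands * bytes_per_pixel / 2 ** 30
```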
From the values in Table 6, it can be observed that the most appropriate block size for Image No. 2 is 512 MB, which requires the minimum execution time. The values have been averaged over four consecutive runs to minimize the error margin. For very large block sizes on this roughly 18 GB file, as represented in Fig. 28, the shuffle time increases sharply, as this phase requires data to be transferred from one node to another and is heavily dependent on the network.
Clustering a 17.76 GB raster image using a 2-node Hadoop cluster
From the results represented in Fig. 29, the advantage of utilizing the Hadoop distributed environment for processing large raster images is evident. It can be seen that the execution time decreases considerably when the same image (No. 2) is processed on a full cluster of 50 nodes rather than on a two-node cluster. In Fig. 29, the red line represents the time needed to cluster the image using a Hadoop cluster of two machines, whereas the blue line represents the time needed to do the same processing using 50 nodes. Even for larger block sizes, the results remain consistent, as is evident.
Comparing the elapsed time for running k-means over (17.76 GB) raster image in (1 and 50) nodes Hadoop cluster
Figure 29 also shows that, when running the algorithm using two nodes, a block size of 512 MB provided the best performance for this dataset, while a block size of 256 MB is preferable when processing the same dataset using 50 machines. A quick comparison of clustering Image No. 2 (size: 18 GB) and Image No. 1 (size: 1.9 GB) shows that block sizes of 32 MB and 64 MB provided the minimum execution time for the 50-node cluster and the 2-node cluster respectively. It is concluded that the block size is an important factor to be considered when preparing the dataset, along with other factors such as the number of nodes (machines) in the Hadoop cluster, when deploying a geospatial Big Data processing framework.
The 50-node Hadoop cluster is configured with about 360 GB of memory. From Fig. 30, it is visible that a large amount of memory is being used with 128 MB data blocks, and this can be attributed to the fact that all of the data nodes in the cluster are contributing. The use of memory decreases with an increase in the block size because several of the data nodes remain idle and do not contribute, as no blocks are available locally to them for processing. Similar results have been obtained with Image No. 1 with block sizes ranging from 16 to 256 MB and have been excluded for brevity.
Resources utilized in the case of a 1-node cluster
Evaluating the results of clustering using the proposed technique with widely used geospatial image analysis software
Table 7 represents the percentage of the number of pixels in each cluster for the two different images. As the images are for the same geographic area but with different resolutions, the results of clustering with the proposed technique and with ENVI, a widely used image processing application, are nearly identical. The percentage difference for both images has been calculated and verified over multiple runs as described in the sections above. The percentage difference is found to be negligible except in the case of Cluster No. 4 and Cluster No. 10, which validates the use of the proposed technique for multi-dimensional data.
Table 7 Comparing the number of pixels for the important clusters
Conclusion and future work
The main objective of the present work is to utilize the advances in distributed processing, specifically the MapReduce programming paradigm, to facilitate big geospatial data processing and mining. The multispectral essence of the raster images has been preserved by converting the multi-spectral space to a multi-dimensional geometric space. Existing data mining and machine learning techniques are limited in scale, and even those which are available for use in a distributed environment for processing raster data at scale only support processing of a single spectral band at a time. With the development of this work, it has now become easier to port existing image processing techniques for distributed processing of giga-pixel imagery and to apply custom logic in the development of applications.
A big geospatial data mining platform based on the Apache Hadoop distributed processing environment has been developed in this work. The developed mining platform comes with a group of editing, filtering and other image processing techniques to help better extract the geographical features out of the image data. The processing (mining) capabilities and other image processing facilities of the framework have been tested using several images of unconventional size on the scale of giga-pixels, and the tests show the advantages of using the developed framework on a distributed environment as compared to a desktop GIS application using a single machine. In the distributed environment, clustering was performed on sample images over a 50-node (machine) cluster and over a 2-node cluster. ENVI, a desktop GIS application, was also used on the same image at a reduced resolution to analyze the results of the proposed clustering technique. There is a gain by a factor of about 2 to 2.5 in processing time depending on the block size employed. The comparison of clustering results shows a maximum deviation of 2.7 percent, which is negligible. The analysis of the times of the various sub-processes clearly shows the advantages of this processing in the MapReduce programming model. The classified image data can be used further for characterizing the geo-spatial objects in an area both spatially and temporally, for scene modelling and also for modelling geo-spatial processes. The results of clustering have been tested by going from spectral to geometrical space, and similarly other methods can also be adapted to support processing of multiband data coherently.
The results derived also highlight the importance of requirement of compute, memory, storage and network infrastructure for processing such large datasets. Appropriate data storage mechanisms are also required for fast access to large amounts of data as in a distributed environment the data is distributed across a number of nodes. The nodes where the data blocks are stored contribute to the overall performance of the system. This has also been evaluated by using different block size when storing the data.
The proposed work can be extended to support other spatial mining techniques. The distributed processing techniques developed for clustering can be extended to support other types of processing. Workflows are an important component of any geospatial data mining system, and the current system can be extended to support workflow-type applications. The current system does not have a visualization interface; one can be provided that will allow users to view big geospatial data at various levels of abstraction. A workflow pipeline which inserts the data into a database for an application server such as GeoServer is highly desirable and is currently being worked out.
The data set used in this paper is derived from Landsat 8 multispectral images for the location of Aravalli Fort Hills, an area North of Gujarat state, India with Geographic Location (around): Lat: 24° 00′ North, and Lon: 72° 54′ East). The dataset has been processed and provided by BISAG, Gandhinagar, Gujarat, India.
GIS:
geographic information system
GeoTIFF:
georeferenced tagged image file format
GB:
gigabyte
BSQ:
band sequential
BIP:
band interleaved by pixel
BIL:
band interleaved by line
SPOT:
Satellite Pour l'Observation de la Terre
GS-Hadoop:
Geo Spatial Hadoop
IPS:
information processing systems
MDB:
multidimensional databases
OLAP:
on-line analytical processing
OLTP:
online transactional processing
KDD:
knowledge discovery in databases
DMLC:
HDFS:
Hadoop Distributed File System
COTS:
commercial off-the-shelf
KNN:
K-Nearest Neighbours
Ahmad A, Dey L. A k-mean clustering algorithm for mixed numeric and categorical data. Data Knowl Eng. 2007;63:503–27.
Alarabi L, Mokbel MF, Musleh M. St-hadoop: a mapreduce framework for spatio-temporal data. GeoInformatica. 2018;22:785–813.
Alkathiri M, Jhummarwala A, Potdar M. Geo-spatial big data mining techniques. Int J Comput Appl. 2016;135:28–36.
Bédard Y, Merrett T, Han J. Fundamentals of spatial data warehousing for geographic knowledge discovery. Geogr Data Min Knowl Discov. 2001;2:53–73.
Bennett J. OpenStreetMap. Birmingham: Packt Publishing Ltd.; 2010.
Bereta K, Koubarakis M. Ontop of geospatial databases. In: International semantic web conference. Cham: Springer; 2016. p. 37–52.
Bernard E, Naveau P, Vrac M, Mestre O. Clustering of maxima: spatial dependencies among heavy rainfall in France. J Clim. 2013;26:7929–37.
Bhosale HS, Gadekar DP. A review paper on big data and hadoop. Int J Sci Res Publ. 2014;4:1.
Borodin A, Mirvoda S, Porshnev S. Analysis of multidimensional data with high dimensionality: data access problems and possible solutions. In: ITM web of conferences. Les Ulis: EDP Sciences; 2016. p. 01005.
Borthakur D. The hadoop distributed file system: architecture and design. Hadoop Proj Website. 2007;11:21.
Borthakur D. HDFS architecture guide. Hadoop apache project. 2008. http://hadoopapache.org/common/docs/current/hdfsdesign.pdf. p. 39.
Bradley PS, Fayyad UM, Reina C. Scaling clustering algorithms to large databases. KDD. 1998;98:9–15.
Calimeri F, Caracciolo M, Marzullo A, Stamile C. BioHIPI: biomedical hadoop image processing interface. In: International workshop on machine learning, optimization, and big data. Cham: Springer; 2017. p. 540–8.
Campbell JB, Wynne RH. Introduction to remote sensing. New York: Guilford Press; 2011.
Council NR. Landsat and beyond: sustaining and enhancing the nation's land imaging program. Washington, D.C: National Academies Press; 2013.
Coveney M. Corporate performance management (CPM). 2003. http://www.businessforum.com/Comshare04B.html.
Crema S, Cavalli M. SedInConnect: a stand-alone, free and open source tool for the assessment of sediment connectivity. Comput Geosci. 2018;111:39–45.
Dagade V, Lagali M, Avadhani S, Kalekar P. Big data weather analytics using hadoop. Int J Emerg Technol Comput Sci Electron (IJETCSE). 2015;14:0976–1353.
de Smith MJ, Goodchild MF, Longley P. Geospatial analysis: a comprehensive guide to principles, techniques and software tools. Leicester: Troubador Publishing Ltd; 2009.
Ding Q, Khan M, Roy A, Perrizo W. The P-tree algebra. In: Proceedings of the 2002 ACM symposium on applied computing. ACM; 2002. p. 426–31.
EARTH_OBSERVATION_SYSTEM. 2019. EOS processing—classic gis algorithms. https://eos.com/eos-processing/.
Egenhofer MJ, Herring J. Categorizing binary topological relations between regions, lines, and points in geographic databases. Technical report, Department of Surveying Engineering, University of Maine, Orono, ME; 1990.
Eldawy A, Mokbel MF. Spatialhadoop: a mapreduce framework for spatial data. In: 2015 IEEE 31st international conference on data engineering (ICDE). New York: IEEE; 2015. p. 1352–63.
Eldawy A, Niu L, Haynes D, Su Z. Large scale analytics of vector + raster big spatial data. In: Proceedings of the 25th ACM SIGSPATIAL international conference on advances in geographic information systems. New York: ACM; 2017. p. 62.
Evans MR, Oliver D, Yang K, Zhou X, Ali RY, Shekhar S. Enabling spatial big data via CyberGIS: challenges and opportunities. CyberGIS for geospatial discovery and innovation. Dordrecht: Springer; 2019.
Foundation AS. Apache hadoop. The Apache Software Foundation. 2018. https://hadoop.apache.org/.
Ghazi MR, Gangodkar D. Hadoop, MapReduce and HDFS: a developers perspective. Procedia Comput Sci. 2015;48:45–50.
Gopalani S, Arora R. Comparing apache spark and map reduce with performance analysis using k-means. Int J Comput Appl. 2015;113:8–11.
Goward SN, Masek JG, Williams DL, Irons JR, Thompson R. The Landsat 7 mission: terrestrial research and applications for the 21st century. Remote Sens Environ. 2001;78:3–12.
Haklay M, Weber P. Openstreetmap: user-generated street maps. IEEE Pervas Comput. 2008;7:12–8.
Jhummarwala A, Mazin A, Potdar M. Geospatial hadoop (GS-Hadoop) an efficient mapreduce based engine for distributed processing of shapefiles. In: Proceedings of the 2nd international conference on advances in computing, communication, & automation, Bareilly, India; 2016. p. 1–7.
Jo J, Lee K-W. High-performance geospatial big data processing system based on MapReduce. ISPRS Int J Geo-Inf. 2018;7:399.
Johnson L. Fiber optics: mature and growing fast. Tech Dir. 2016;76:22.
Keim DA, Panse C, Sips M, North SC. Pixel based visual data mining of geo-spatial data. Comput Gr. 2004;28:327–44.
Koperski K. A progressive refinement approach to spatial data mining. Canada: Simon Fraser University; 1999.
Lauer DT, Morain SA, Salomonson VV. The Landsat program: its origins, evolution, and impacts. Photogramm Eng Remote Sens. 1997;63:831–8.
Lausch A, Schmidt A, Tischendorf L. Data mining and linked open data—new perspectives for data analysis in environmental research. Ecol Model. 2015;295:5–17.
Lenka RK, Barik RK, Gupta N, Ali SM, Rath A, Dubey H. Comparative analysis of SpatialHadoop and GeoSpark for geospatial big data analytics. In: 2016 2nd international conference on contemporary computing and informatics (IC3I). New York: IEEE; 2016. p. 484–8.
Mennis J, Guo D. Spatial data mining and geographic knowledge discovery—an introduction. Comput Environ Urban Syst. 2009;33:403–8.
Mukherjee A, Datta J, Jorapur R, Singhvi R, Haloi S, Akram W. Shared disk big data analytics with apache hadoop. In: 2012 19th international conference on high performance computing. New York: IEEE; 2012. p. 1–6.
Murray AT, Shyy T-K. Integrating attribute and space characteristics in choropleth display and spatial data mining. Int J Geogr Inf Sci. 2000;14:649–67.
Ng RT, Han J. CLARANS: a method for clustering objects for spatial data mining. IEEE Trans Knowl Data Eng. 2002;14:1003–16.
Nijmeijer R, de Haas A, Dost R, Budde P. ILWIS 3.0 academic: user's guide; 2001.
Ooi B, Sacks-Davis R, Han J. Indexing in spatial databases. Unpublished/Technical Papers; 1993.
Parker JR. Extracting vectors from raster images. Comput Gr. 1988;12:75–9.
Peralta D, del Río S, Ramírez-Gallego S, Triguero I, Benitez JM, Herrera F. Evolutionary feature selection for big data classification: a mapreduce approach. Math Probl Eng. 2015. https://doi.org/10.1155/2015/246139.
Rabbitt MC. The United States geological survey, 1879–1989, US Government Printing Office. 1989.
Samson G, Lu J, Xu Q. Large spatial datasets: present challenges, future opportunities. In: Proceedings of the international conference on change, innovation, informatics and disruptive technology ICCIIDT'16, London-UK, October 11, 12, 2016; 2016. p. 204–17.
Samson GL, Lu J, Wang L, Wilson D. An approach for mining complex spatial dataset. In: Proceedings of the international conference on information and knowledge engineering (IKE), 2013. The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp). p. 1.
Sarode AJ, Mishra A. Audit and analysis of impostors: an experimental approach to detect fake profile in online social network. In: Proceedings of the sixth international conference on computer and communication technology 2015. New York: ACM; 2015. p. 1–8.
Shahid R, Bertazzon S, Knudtson ML, Ghali WA. Comparison of distance measures in spatial analytical modeling for health service planning. BMC Health Serv Res. 2009;9:200.
Sloan KR, Tanimoto SL. Progressive refinement of raster images. IEEE Trans Comput. 1979;28:871–4.
Szeliski R. Computer vision: algorithms and applications. New York: Springer Science & Business Media; 2010.
Talbot D, Warner E, Anderson C, Hessekiel K, Jones D. A Massachusetts municipal light plant seizes internet access business opportunities; 2015.
Trujillo J, Palomar M. An object oriented approach to multidimensional database conceptual modeling (OOMD). In: Proceedings of the 1st ACM international workshop on data warehousing and OLAP. New York: ACM; 1998. p. 16–21.
Uzunkaya C, Ensari T, Kavurucu Y. Hadoop ecosystem and its analysis on tweets. Procedia-Soc Behav Sci. 2015;195:1890–7.
van de Weijer J, van den Boomgaard R. Local mode filtering. Intelligent Sensory Information Systems, Faculty of Science, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, The Netherlands.
Vatsavai RR, Ganguly A, Chandola V, Stefanidis A, Klasky S, Shekhar S. Spatiotemporal data mining in the era of big spatial data: algorithms and applications. In: Proceedings of the 1st ACM SIGSPATIAL international workshop on analytics for big geospatial data; 2012. p. 1–10.
Wagstaff K, Cardie C, Rogers S, Schrödl S. Constrained k-means clustering with background knowledge. In: ICML; 2001. p. 577–84.
Wang C-S, Lin S-L, Chang JY. MapReduce-based frequent pattern mining framework with multiple item support. In: Asian conference on intelligent information and database systems. New York: Springer; 2017. p. 65–74.
Wang W, Yang J, Muntz R. STING: a statistical information grid approach to spatial data mining. In: VLDB; 1997. p. 186–195.
Wang W, Yang J, Muntz R. STING+: an approach to active spatial data mining. In: Proceedings 15th international conference on data engineering (Cat. No. 99CB36337). New York: IEEE; 1999. p. 116–125.
Witten IH, Frank E, Hall MA, Pal CJ. Data mining: practical machine learning tools and techniques. Burlington: Morgan Kaufmann; 2016.
Xiaoke Z, Chao M, Haifeng H, Fangfang L. Radiometric correction based on multi-temporal spot satellite images. In: 2009 international conference on wireless communications & signal processing. New York: IEEE; 2009. p. 1–6.
Yao X. Research issues in spatio-temporal data mining. In: Workshop on geospatial visualization and knowledge discovery, University Consortium for Geographic Information Science, Virginia; 2003. p. 1–6.
Yoon I, Yi S, Oh C, Jung H, Yi Y. Distributed video decoding on hadoop. IEICE Trans Inf Syst. 2018;101:2933–41.
Zhang Z, Zhang J, Xue H. Improved K-means clustering algorithm. In: 2008 congress on image and signal processing. New York: IEEE; 2008. p. 169–72.
Zhao T, Zhang C, Anselin L, Li W, Chen K. A parallel approach for improving Geo-SPARQL query performance. Int J Digit Earth. 2015;8:383–402.
Zhao W, Ma H, He Q. Parallel k-means clustering based on mapreduce. In: IEEE international conference on cloud computing. New York: Springer; 2009. p. 674–9.
We are grateful to Shri T. P. Singh, Director, BISAG for his keen interest in and support to this work. We are also grateful to the Apache Software Foundation and the Open Source community for making a plethora of software open source, without the availability of which the current development would not have been possible.
Administrative Sciences College Hadhramout, University of Science and Technology, Hadhramout, Yemen
Mazin Alkathiri
Bhaskaracharya Institute for Space Applications and Geo-Informatics, Gandhinagar, 382007, India
Abdul Jhummarwala
& M. B. Potdar
MA developed the proposed mining platform and the required programs and test cases, performed the analysis and interpretation of data, and drafted the manuscript. AJ served as advisor in study conception and for critical revision. MBP also served as advisor and critically reviewed the complete developments. All authors read and approved the final manuscript.
Correspondence to Mazin Alkathiri.
Alkathiri, M., Jhummarwala, A. & Potdar, M.B. Multi-dimensional geospatial data mining in a distributed environment using MapReduce. J Big Data 6, 82 (2019) doi:10.1186/s40537-019-0245-9
Multiband raster processing
Multi-dimensional data processing
Geospatial processing
Spectral to geometrical space | CommonCrawl |
\begin{definition}[Definition:Component of Graph]
Let $G$ be a graph.
Let $H$ be a subgraph of $G$ such that:
:$H$ is connected
:$H$ is not contained in any connected subgraph of $G$ which has more vertices or edges than $H$ has.
Then $H$ is a '''component''' of $G$.
\end{definition} | ProofWiki |
Let $a$ and $b$ be the two real values of $x$ for which\[\sqrt[3]{x} + \sqrt[3]{20 - x} = 2\]The smaller of the two values can be expressed as $p - \sqrt{q}$, where $p$ and $q$ are integers. Compute $p + q$.
Let $a=\sqrt[3]{x}, b = \sqrt[3]{20-x}$. Then $a+b = 2$ and $a^3 + b^3 = 20$. Factoring,\[a^3 + b^3 = (a+b)((a+b)^2-3ab) = 2(4-3ab)= 8-6ab=20 \Longrightarrow ab = -2\]
Solving $a+b=2, ab=-2$ gives us the quadratic $a^2 - 2a - 2 = 0$. The quadratic formula yields $a = \frac{2 - \sqrt{12}}{2} = 1 - \sqrt{3}$, and $x = a^3 = (1-\sqrt{3})^3 = 1 - 3\sqrt{3} + 9 - 3\sqrt{3} = 10 - \sqrt{108}$. Therefore, $p+q=\boxed{118}$. | Math Dataset |
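A quick numerical sanity check (not part of the original solution) confirms both roots and the answer:

```python
# Numerical check of the two roots of cbrt(x) + cbrt(20 - x) = 2.
def cbrt(t):
    # Real cube root, valid for negative arguments as well.
    return t ** (1 / 3) if t >= 0 else -((-t) ** (1 / 3))

x_small = 10 - 108 ** 0.5   # p - sqrt(q) with p = 10, q = 108
x_large = 10 + 108 ** 0.5   # the other root, by the symmetry x -> 20 - x

for x in (x_small, x_large):
    assert abs(cbrt(x) + cbrt(20 - x) - 2) < 1e-9

print(10 + 108)  # p + q -> 118
```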
\begin{document}
\title[Finite basis problems for plactic-like monoids]{Finite basis problems for stalactic, taiga, sylvester and Baxter monoids} \thanks{}
\author[B. B. Han]{Bin Bin Han } \author[W. T. Zhang]{Wen Ting Zhang$^\star$}\thanks{$^\star$Corresponding author}
\address{School of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu 730000, PR China; Key Laboratory of Applied Mathematics and Complex Systems, Lanzhou, Gansu 730000, PR China} \email{[email protected]}
\subjclass[2010]{05E99, 20M05}
\keywords{stalactic monoid; taiga monoid; sylvester monoid; Baxter monoid; finite basis problem; identity}
\thanks{This research was partially supported by the National Natural Science Foundation of China (nos.~11771191, 11371177) and the Natural Science Foundation of Gansu Province (no. 20JR5RA275).}
\begin{abstract} Stalactic, taiga, sylvester and Baxter monoids arise from the combinatorics of tableaux by identifying words over a fixed ordered alphabet whenever they produce the same tableau via some insertion algorithm. In this paper, three sufficient conditions under which semigroups are finitely based are given. By applying these sufficient conditions, it is shown that all stalactic and taiga monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities, that all sylvester monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities and that all Baxter monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities. \end{abstract}
\maketitle
\section{Introduction} Knuth introduced the tableaux algebra \cite{K71} in the 1970s and this algebra was later studied in detail by Lascoux and Sch\"{u}tzenberger under the name plactic monoid \cite{LS81}. The plactic monoid arises from the combinatorics of tableaux by identifying words over a fixed ordered alphabet whenever they produce the same tableau via Schensted's insertion algorithm \cite{Sch61}. Plactic-like monoids which, like the plactic monoid, arise from the combinatorics of tableaux include the hypoplactic monoid \cite{KT97,Nov00}, the stalactic monoid \cite{HNT07,Pri13}, the taiga monoid \cite{Pri13}, the sylvester monoid \cite{HNT05} and the Baxter monoid \cite{Gir12}. These monoids have attracted much attention due to their interesting connections with combinatorics \cite{Lot02} and applications in symmetric functions \cite{Mac08}, representation theory \cite{Ful97}, Kostka-Foulkes polynomials \cite{LS78,LS81}, Schubert polynomials \cite{LS85, LS90}, and musical theory \cite{Jed11}.
Each of these plactic-like monoids can be obtained by factoring the free monoid $\mathcal{A}^*$ over the infinite ordered alphabet $\mathcal{A} = \{1 < 2 < 3 < \cdots\}$ by a congruence that can be defined by a so-called insertion algorithm that computes a combinatorial object from a word. For example, for the stalactic monoid, the corresponding combinatorial objects are stalactic tableaux. We introduce the definitions of the combinatorial objects and insertion algorithms used to construct the stalactic, taiga, sylvester and Baxter monoids.
A \textit{stalactic tableau} is a finite array of symbols of $\mathcal{A}$ in which columns are top-aligned, and two symbols appear in the same column if and only if they are equal. The associated insertion algorithm is as follows:
\begin{algo}\label{algo:stal}\cite[\S~3.7]{HNT07} Input: A stalactic tableau $T$ and a symbol $a \in \mathcal{A}$. If $a$ does not appear in $T$, add $a$ to the left of the top row of $T$; if $a$ does appear in $T$, add $a$ to the bottom of the column in which $a$ appears. Output the new tableau. \end{algo}
Let $w_1, \cdots, w_k\in \mathcal{A}$ and $w=w_1\cdots w_k \in \mathcal{A}^*$. Then the combinatorial object ${\rm P}_{\operatorname{\mathsf{stal}}_{\infty}}(w)$ of $w$ is obtained as follows: reading $w$ from right-to-left, one starts with an empty tableau and inserts each symbol in $w$ into a stalactic tableau according to Algorithm \ref{algo:stal}. For example, ${\rm P}_{\operatorname{\mathsf{stal}}_{\infty}}(3613151265)$ is given as follows: \begin{equation*} \begin{tikzpicture} \matrix [nodes=draw,column sep=1mm] { \node {3}; &[-1mm] \node{1}; &[-1mm] \node{2};&[-1mm] \node{6}; &[-1mm] \node{5}; \\ \node {3}; & \node{1}; & & \node{6}; & \node{5};\\
& \node{1}; & & &\\ }; \end{tikzpicture} \end{equation*} Notice that the order in which the symbols appear along the first row in ${\rm P}_{\operatorname{\mathsf{stal}}_{\infty}}(w)$ is the same as the order of the rightmost instances of the symbols that appear in $w$.
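As an illustrative sketch (ours, not part of the paper), Algorithm \ref{algo:stal} together with the right-to-left reading can be implemented as follows, returning the top-row symbols of ${\rm P}_{\operatorname{\mathsf{stal}}_{\infty}}(w)$ with their column lengths (single-character symbols are assumed for simplicity):

```python
def stalactic_tableau(word):
    """Compute P_stal(word): read right-to-left; a new symbol starts a
    column at the left end, a repeated symbol extends its column."""
    columns = []                       # [symbol, multiplicity], left to right
    for a in reversed(word):
        for col in columns:
            if col[0] == a:            # symbol already has a column
                col[1] += 1
                break
        else:                          # new symbol: new leftmost column
            columns.insert(0, [a, 1])
    return [(s, m) for s, m in columns]

print(stalactic_tableau("3613151265"))
# -> [('3', 2), ('1', 3), ('2', 1), ('6', 2), ('5', 2)]
```

The output matches the tableau above: the top row reads $3\,1\,2\,6\,5$ and each column length is the multiplicity of its symbol.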
A \textit{binary search tree with multiplicities} is a labelled binary search tree in which each label appears at most once, where the label of each node is greater than the label of every node in its left subtree, and less than the label of every node in its right subtree, and where a non-negative integer called the \textit{multiplicity} is assigned to each node label. The associated insertion algorithm is as follows: \begin{algo}\label{algo:taig}\cite[ Algorithm 3]{Pri13} Input: A binary search tree with multiplicities $T$ and a symbol $a \in \mathcal{A}$. If $T$ is empty, create a node, label it by $a$, and assign it multiplicity $1$. If $T$ is non-empty, examine the label $x$ of the root node: if $a < x$, recursively insert $a$ into the left subtree of the root node; if $a > x$, recursively insert $a$ into the right subtree of the root node; and if $a = x$, increment by $1$ the multiplicity of the node label $x$. \end{algo}
Let $w_1, \cdots, w_k\in \mathcal{A}$ and $w=w_1\cdots w_k \in \mathcal{A}^*$. Then the combinatorial object ${\rm P}_{\operatorname{\mathsf{taig}}_{\infty}}(w)$ of $w$ is obtained as follows: reading $w$ from right-to-left, one starts with an empty tree and inserts each symbol in $w$ into a binary search tree with multiplicities according to Algorithm \ref{algo:taig}. For example, ${\rm P}_{\operatorname{\mathsf{taig}}_{\infty}}(3613151265)$ is given as follows: \begin{equation*} \begin{tikzpicture} [grow'=down,line width = 0pt, every node/.style={draw,circle,inner sep=1pt}, level distance=.5cm, level 1/.style={sibling distance=10mm}, level 2/.style={sibling distance=10mm}]
\node (root) {$5^2$}
child {node {$6^2$}}
child {node {$2^1$}
child{node{$3^2$}}
child {node {$1^3$}}
}; \end{tikzpicture} \end{equation*}
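Algorithm \ref{algo:taig}, applied right-to-left over the word, can be sketched in Python as follows (illustrative code, not from the paper; single-character symbols assumed):

```python
class Node:
    def __init__(self, label):
        self.label, self.mult = label, 1
        self.left = self.right = None

def taiga_insert(root, a):
    # Algorithm 2: a BST insert that increments a multiplicity on equality.
    if root is None:
        return Node(a)
    if a < root.label:
        root.left = taiga_insert(root.left, a)
    elif a > root.label:
        root.right = taiga_insert(root.right, a)
    else:
        root.mult += 1
    return root

def P_taig(word):
    root = None
    for a in reversed(word):           # symbols are read right-to-left
        root = taiga_insert(root, a)
    return root

def shape(t):
    # Render as (label^mult, left subtree, right subtree); None for empty.
    return None if t is None else (
        t.label + "^" + str(t.mult), shape(t.left), shape(t.right))

print(shape(P_taig("3613151265")))
# -> ('5^2', ('2^1', ('1^3', None, None), ('3^2', None, None)), ('6^2', None, None))
```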
A \textit{right strict binary search tree} is a labelled rooted binary tree where the label of each node is greater than or equal to the label of every node in its left subtree, and strictly less than every node in its right subtree. The associated insertion algorithm is as follows: \begin{algo}\label{algo:sylv}\cite[\S~3.3]{HNT05} Input: A right strict binary search tree $T$ and a symbol $a\in \mathcal{A}$. If $T$ is empty, create a node and label it $a$. If $T$ is non-empty, examine the label $x$ of the root node: if $a > x$, recursively insert $a$ into the right subtree of the root node; otherwise recursively insert $a$ into the left subtree of the root node. Output the resulting tree. \end{algo}
Let $w_1, \cdots, w_k\in \mathcal{A}$ and $w=w_1\cdots w_k \in \mathcal{A}^*$. Then the combinatorial object ${\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(w)$ of $w$ is obtained as follows: reading $w$ from right-to-left, one starts with an empty tree and inserts each symbol in $w$ into a right strict binary search tree according to Algorithm \ref{algo:sylv}. For example, ${\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(3613151265)$ is given as follows: \begin{equation*} \begin{tikzpicture} [grow'=down,line width = 0pt, every node/.style={draw,circle,inner sep=2pt}, level distance=.5cm, level 1/.style={sibling distance=20mm}, level 2/.style={sibling distance=10mm}, level 3/.style={sibling distance=10mm}, level 4/.style={sibling distance=10mm}]
\node (root) {5}
child {node {6}
child[missing]
child {node {6}}
}
child {node {2}
child {node {5}
child[missing]
child {node {3}
child[missing]
child {node {3}}
}
}
child {node {1}
child[missing]
child {node{1}
child[missing]
child {node {1}}
}
}
}; \end{tikzpicture} \end{equation*}
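Algorithm \ref{algo:sylv} with the right-to-left reading admits a short sketch (ours, not from the paper; single-character symbols assumed):

```python
class Node:
    def __init__(self, label):
        self.label = label
        self.left = self.right = None

def sylv_insert(root, a):
    # Algorithm 3: strictly larger symbols go right; ties descend left.
    if root is None:
        return Node(a)
    if a > root.label:
        root.right = sylv_insert(root.right, a)
    else:
        root.left = sylv_insert(root.left, a)
    return root

def P_sylv(word):
    root = None
    for a in reversed(word):           # right-to-left reading
        root = sylv_insert(root, a)
    return root

def shape(t):
    # Render as (label, left subtree, right subtree); None for empty.
    return None if t is None else (t.label, shape(t.left), shape(t.right))

print(shape(P_sylv("3613151265")))
# -> ('5', ('2', ('1', ('1', ('1', None, None), None), None),
#           ('5', ('3', ('3', None, None), None), None)),
#      ('6', ('6', None, None), None))
```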
A \textit{left strict binary search tree} is a labelled rooted binary tree where the label of each node is strictly greater than the label of every node in its left subtree, and less than or equal to every node in its right subtree. The associated insertion algorithm is as follows: \begin{algo}\label{algo:sylv-2} Input: A left strict binary search tree $T$ and a symbol $a \in \mathcal{A}$. If $T$ is empty, create a node and label it $a$. If $T$ is non-empty, examine the label $x$ of the root node: if $a < x$, recursively insert $a$ into the left subtree of the root node; otherwise recursively insert $a$ into the right subtree of the root node. Output the resulting tree. \end{algo}
Let $w_1, \cdots, w_k\in \mathcal{A}$ and $w=w_1\cdots w_k \in \mathcal{A}^*$. Then the combinatorial object ${\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(w)$ of $w$ is obtained as follows: reading $w$ from left-to-right, one starts with an empty tree and inserts each symbol in $w$ into a left strict binary search tree according to Algorithm \ref{algo:sylv-2}. For example, ${\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(3613151265)$ is given as follows: \begin{equation*} \begin{tikzpicture} [grow'=down,line width = 0pt, every node/.style={draw,circle,inner sep=2pt}, level distance=.5cm, level 1/.style={sibling distance=20mm}, level 2/.style={sibling distance=10mm}, level 3/.style={sibling distance=10mm}, level 4/.style={sibling distance=10mm}]
\node (root) {3}
child {node {6}
child{node{6}}
child {node {3}
child {node {5}
child {node {5}}
child [missing]
}
child [missing]
}
}
child {node {1}
child {node {1}
child {node {1}
child {node {2}}
child[missing]
}
child[missing]
}
child[missing]
}; \end{tikzpicture} \end{equation*}
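Dually, Algorithm \ref{algo:sylv-2} with the left-to-right reading can be sketched as follows (illustrative code, not from the paper; single-character symbols assumed):

```python
class Node:
    def __init__(self, label):
        self.label = label
        self.left = self.right = None

def sylv_sharp_insert(root, a):
    # Algorithm 4: strictly smaller symbols go left; ties descend right.
    if root is None:
        return Node(a)
    if a < root.label:
        root.left = sylv_sharp_insert(root.left, a)
    else:
        root.right = sylv_sharp_insert(root.right, a)
    return root

def P_sylv_sharp(word):
    root = None
    for a in word:                     # left-to-right reading
        root = sylv_sharp_insert(root, a)
    return root

def shape(t):
    # Render as (label, left subtree, right subtree); None for empty.
    return None if t is None else (t.label, shape(t.left), shape(t.right))

print(shape(P_sylv_sharp("3613151265")))
# -> ('3', ('1', None, ('1', None, ('1', None, ('2', None, None)))),
#      ('6', ('3', None, ('5', None, ('5', None, None))), ('6', None, None)))
```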
Let $w_1, \cdots, w_k\in \mathcal{A}$ and $w=w_1\cdots w_k \in \mathcal{A}^*$. Then the combinatorial object ${\rm P}_{\operatorname{\mathsf{baxt}}_{\infty}}(w)$ of $w$ is obtained by the Algorithms \ref{algo:sylv} and \ref{algo:sylv-2}, that is, ${\rm P}_{{\operatorname{\mathsf{baxt}}}_{\infty}}(w)= ({\rm P}_{{\operatorname{\mathsf{sylv}}_{\infty}^{\sharp}}}(w), {\rm P}_{{\operatorname{\mathsf{sylv}}}_{\infty}}(w))$.
For each $\operatorname{\mathsf{M}} \in \{\operatorname{\mathsf{stal}},\operatorname{\mathsf{taig}}, \operatorname{\mathsf{sylv}},\operatorname{\mathsf{sylv}}^{\sharp}, \operatorname{\mathsf{baxt}}\}$, define the relation $\equiv_{\operatorname{\mathsf{M}}_{\infty}}$ by \[ u \equiv_{\operatorname{\mathsf{M}}_{\infty}} v \Longleftrightarrow {\rm P}_{\operatorname{\mathsf{M}}_{\infty}}(u) = {\rm P}_{\operatorname{\mathsf{M}}_{\infty}}(v) \] for any $u,v \in \mathcal{A}^*$. In each case, the relation $\equiv_{\operatorname{\mathsf{M}}_{\infty}}$ is a congruence on $\mathcal{A}^*$. The stalactic monoid $\operatorname{\mathsf{stal}}_{\infty}$ [resp. taiga monoid $\operatorname{\mathsf{taig}}_{\infty}$, sylvester monoid $\operatorname{\mathsf{sylv}}_{\infty}$, $\sharp$-sylvester monoid $\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}$, Baxter monoid $\operatorname{\mathsf{baxt}}_{\infty}$] is the factor monoid $ \mathcal{A}^*/_{\equiv_{\operatorname{\mathsf{M}}_{\infty}}}$. The rank-$n$ analogue $\operatorname{\mathsf{stal}}_{n}$ [resp. $\operatorname{\mathsf{taig}}_{n}$, $\operatorname{\mathsf{sylv}}_{n}$, $\operatorname{\mathsf{sylv}}^{\sharp}_{n}$, $\operatorname{\mathsf{baxt}}_{n}$] is the factor monoid $ \mathcal{A}^*_n/_{\equiv_{\operatorname{\mathsf{M}}_{\infty}}}$, where the relation $\equiv_{\operatorname{\mathsf{M}}_{\infty}}$ is naturally restricted to $\mathcal{A}^*_n\times\mathcal{A}^*_n$ and $\mathcal{A}_n = \{1 < 2 < \cdots < n\}$ is the set of the first $n$ natural numbers viewed as a finite ordered alphabet.
It follows from the definition of $\equiv_{\operatorname{\mathsf{M}}_{\infty}}$ for any $\operatorname{\mathsf{M}} \in \{\operatorname{\mathsf{stal}},\operatorname{\mathsf{taig}}, \operatorname{\mathsf{sylv}},\operatorname{\mathsf{sylv}}^{\sharp}, \operatorname{\mathsf{baxt}}\}$ that each element $[u]_{\equiv_{\operatorname{\mathsf{M}}_{\infty}}}$ of the factor monoid $\operatorname{\mathsf{M}}_{\infty}$ can be identified with the combinatorial object ${\rm P}_{\operatorname{\mathsf{M}}_{\infty}}(u)$. In each case, $\operatorname{\mathsf{M}}_1$ is a free monogenic monoid $\langle a \rangle=\{1, a, a^2, a^3,\ldots\}$ and thus commutative. Note that \[ \operatorname{\mathsf{M}}_1 \subset \operatorname{\mathsf{M}}_2 \subset \cdots \subset \operatorname{\mathsf{M}}_i \subset\operatorname{\mathsf{M}}_{i+1}\subset \cdots \subset \operatorname{\mathsf{M}}_{\infty}. \]
The \textit{evaluation} of a word $u\in \mathcal{A}^*$, denoted by $\operatorname{\mathsf{ev}}(u)$, is the infinite tuple of non-negative integers, indexed by $\mathcal{A}$, whose $a$-th element, denoted by $|u|_a$, is the number of times the symbol $a$
appears in $u$; thus this tuple describes the number of each symbol in $\mathcal{A}$ that appears in $u$. It is immediate from the definition of the monoids above that if $u \equiv_{\operatorname{\mathsf{M}}_\infty} v$, then $\operatorname{\mathsf{ev}}(u) = \operatorname{\mathsf{ev}}(v)$, and hence it makes sense to define the evaluation of an element $p$ of one of these monoids to be the evaluation of any word representing it. We write $\operatorname{\mathsf{ev}}(u) \leqslant \operatorname{\mathsf{ev}}(v)$ [resp. $\operatorname{\mathsf{ev}}(u) < \operatorname{\mathsf{ev}}(v)$] if $|u|_a\leqslant |v|_a$ [resp. $|u|_a < |v|_a$] for each $a \in \mathcal{A}$.
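As an illustration only (not part of the formal development), the evaluation and the partial order on evaluations can be sketched in a few lines of Python, modelling words as tuples over an integer alphabet:

```python
from collections import Counter

def ev(word):
    """Evaluation of a word: the map a -> |u|_a of symbol multiplicities."""
    return Counter(word)

def ev_leq(u, v):
    """ev(u) <= ev(v): |u|_a <= |v|_a for every symbol a."""
    cu, cv = ev(u), ev(v)
    return all(cu[a] <= cv[a] for a in cu)

# Any two words identified by one of the monoids above have equal
# evaluations, so ev descends to elements of the factor monoids.
u, v = (1, 2, 1, 3), (2, 1, 1, 3)
print(ev(u) == ev(v))        # True: both have evaluation 1 -> 2, 2 -> 1, 3 -> 1
print(ev_leq(u, u + (4,)))   # True: appending a symbol only grows ev
```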
A \textit{basis} for an algebra $A$ is a set of identities satisfied by $A$ from which all identities of $A$ can be derived. An algebra $A$ is said to be \textit{finitely based} if it has some finite basis; otherwise, it is said to be \textit{non-finitely based}. The finite basis problem, that is, the problem of classifying algebras according to the finite basis property, is one of the most prominent research problems in universal algebra.
Since the first example of a {non-finitely based} finite semigroup was discovered by Perkins~\cite{Perkins69} in the 1960s, the finite basis problem for semigroups has attracted much attention. There now exist several powerful methods to attack the finite basis problem for finite semigroups (see Volkov~\cite{Vol01} for details).
In contrast with the finite case, the finite basis problem for infinite semigroups is less explored. On the one hand, infinite semigroups usually arise in mathematics as transformation semigroups of an infinite set, semigroups of relations on an infinite domain, or matrix semigroups over an infinite ring, and all these semigroups are too big to satisfy any non-trivial identity. On the other hand, when an infinite semigroup does satisfy non-trivial identities, deciding whether it has a finite basis remains difficult: many of the methods designed for finite semigroups do not apply, so that fresh techniques are required.
Since the plactic monoid of infinite rank satisfies no non-trivial identity \cite[Proposition 3.1]{CKK17}, its set of non-trivial identities is empty, and so it is vacuously finitely based. The plactic monoid of rank $2$ satisfies exactly the same identities as the bicyclic monoid \cite[Remark 4.6]{JK19}, or equivalently \cite[Theorem 4.1]{DJK18} the monoid of all $2\times 2$ upper triangular tropical matrices. Thus the plactic monoid of rank $2$ is non-finitely based by the result of Chen et al. \cite[Corollary 5.6]{CHLS16}. The plactic monoid of rank $3$ satisfies exactly the same identities as the monoid of all $3\times 3$ upper triangular tropical matrices \cite[Corollary 4.5]{JK19}. Thus the plactic monoid of rank $3$ is non-finitely based by the result of Han et al. \cite{HZL}. The finite basis problems for the plactic monoids of rank greater than or equal to $4$ are still open. Cain et al. proved that all hypoplactic monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities \cite{C}. For each $\operatorname{\mathsf{M}} \in \{\operatorname{\mathsf{stal}},\operatorname{\mathsf{taig}}, \operatorname{\mathsf{sylv}},\operatorname{\mathsf{sylv}}^{\sharp}, \operatorname{\mathsf{baxt}}\}$, $\operatorname{\mathsf{M}}_1$ is a free monogenic monoid and commutative, and so $\operatorname{\mathsf{M}}_1$ is finitely based by \cite[Theorem~9]{Perkins69}. However, the finite basis problems for $\operatorname{\mathsf{M}}_n$ with $2\leq n \leq \infty$ are still open.
In this paper, we investigate the finite basis problems for all stalactic, taiga, sylvester and Baxter monoids of rank greater than or equal to $2$. It is shown that all stalactic and taiga monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities, that all sylvester monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities and that all Baxter monoids of rank greater than or equal to $2$ are finitely based and satisfy the same identities.
This paper is organized as follows. Notation and background information are given in Section \ref{sec: prelim}. In Section \ref{sec:3sc}, we establish three sufficient conditions under which a semigroup is finitely based. By applying these sufficient conditions, we solve the finite basis problems for all stalactic, taiga, sylvester and Baxter monoids of rank greater than or equal to $2$ in Section \ref{sec:app}.
\section{Preliminaries} \label{sec: prelim} Most of the notation and background material of this article are given in this section. Refer to the monograph of Burris and Sankappanavar~\cite{BS81} for more information.
Let~$\mathcal{X}$ be a countably infinite alphabet. Elements of $\mathcal{X}$ are called \textit{letters} and elements of the free monoid $\mathcal{X}^*$ are called \textit{words}. Let $\mathbf{w} \in \mathcal{X}^*, x, y, x_1, x_2, \dots , x_m \in \mathcal{X}$. Then \begin{itemize}
\item the \textit{content} of $\mathbf{w}$, denoted by $\operatorname{\mathsf{con}}(\mathbf{w})$, is the set of letters occurring in $\mathbf{w}$;
\item $\operatorname{\mathsf{occ}}(x, \mathbf{w})$ is the number of occurrences of the letter $x$ in $\mathbf{w}$;
\item $\overleftarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{w})$ [resp. $\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{w})$] is the number of occurrences of $x$ before the first [resp. after the last] occurrence of $y$ in $\mathbf{w}$;
\item $\mathbf{w}$ is said to be \textit{simple} if $\operatorname{\mathsf{occ}}(x, \mathbf{w}) = 1$ for any $x\in \operatorname{\mathsf{con}}(\mathbf{w})$;
\item the \textit{initial part} [resp. \textit{final part}] of $\mathbf{w}$, denoted by $\operatorname{\mathsf{ip}}(\mathbf{w})$ [resp. $\operatorname{\mathsf{fp}}(\mathbf{w})$], is the simple word obtained from $\mathbf{w}$ by retaining the first [resp. last] occurrence of each letter;
\item $\operatorname{\mathsf{mix}}(\mathbf{w})$ is the word obtained from $\mathbf{w}$ by retaining the first and the last occurrences of each letter;
\item $\mathbf{w}[x_1, x_2, \dots , x_m]$ denotes the word obtained from $\mathbf{w}$ by retaining only the occurrences of the letters $x_1, x_2, \dots , x_m$. \end{itemize}
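For concreteness, the statistics just defined admit a direct transcription into Python; the sketch below is purely illustrative, models words as strings, and assumes (as the definitions implicitly do) that the letter $y$ actually occurs in $\mathbf{w}$ where required:

```python
def con(w):
    """Content: the set of letters occurring in w."""
    return set(w)

def occ(x, w):
    """Number of occurrences of the letter x in w."""
    return w.count(x)

def ip(w):
    """Initial part: keep only the first occurrence of each letter."""
    seen, out = set(), []
    for c in w:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return ''.join(out)

def fp(w):
    """Final part: keep only the last occurrence of each letter."""
    return ip(w[::-1])[::-1]

def mix(w):
    """Keep the first and the last occurrence of each letter."""
    first = {c: w.index(c) for c in set(w)}
    last = {c: len(w) - 1 - w[::-1].index(c) for c in set(w)}
    keep = sorted(set(first.values()) | set(last.values()))
    return ''.join(w[i] for i in keep)

def l_occ(x, y, w):
    """Occurrences of x before the first occurrence of y (y must occur)."""
    return w[:w.index(y)].count(x)

def r_occ(x, y, w):
    """Occurrences of x after the last occurrence of y (y must occur)."""
    return w[len(w) - w[::-1].index(y):].count(x)

def restrict(w, letters):
    """w[x1, ..., xm]: keep only the occurrences of the given letters."""
    return ''.join(c for c in w if c in letters)

w = 'xxyx'
print(ip(w), fp(w), mix(w))                     # xy yx xyx
print(l_occ('x', 'y', w), r_occ('x', 'y', w))   # 2 1
print(restrict('xysxty', 'xy'))                 # xyxy
```

A word is simple exactly when `occ(x, w) == 1` for every `x` in `con(w)`, i.e. when `w == ip(w)`.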
A \textit{semigroup identity} is a formal expression $\mathbf{u} \approx \mathbf{v}$ where $\mathbf{u}, \mathbf{v}$ are words over the alphabet $\mathcal{X}$. An identity $\mathbf{u}\approx \mathbf{v}$ is said to be \textit{non-trivial} if $\mathbf{u}\neq \mathbf{v}$ and \textit{trivial} otherwise. A semigroup $S$ \textit{satisfies} an identity $\mathbf{u}\approx \mathbf{v}$ if the equality $\varphi(\mathbf{u}) = \varphi(\mathbf{v})$ holds in $S$ for every possible substitution $\varphi : \mathcal{X}\rightarrow S$. Denote by $\operatorname{\mathsf{id}}(S)$ the set of all non-trivial identities satisfied by $S$.
Clearly any monoid that satisfies an identity $\mathbf{s} \approx \mathbf{t}$ also satisfies the identity $\mathbf{s}[x_1, x_2, \dots , x_n] \approx \mathbf{t}[x_1, x_2, \dots , x_n]$ for any $x_1, x_2, \dots , x_n \in\mathcal{X}$, since assigning the unit element to a letter $x$ in an identity is effectively the same as removing all occurrences of $x$.
An \textit{identity system} $\Sigma$ is a collection of non-trivial identities. An identity $\mathbf{u}\approx \mathbf{v}$ is said to be \textit{derived} from $\Sigma$ or is a \textit{consequence} of $\Sigma$ if there is a sequence of words \[ \mathbf{u} = \mathbf{u}_1, \mathbf{u}_2,\ldots , \mathbf{u}_{n-1}, \mathbf{u}_n = \mathbf{v} \] over the alphabet $\mathcal{X}$ such that for every $i = 1, 2, \dots , n-1$, we have $\mathbf{u}_i = \mathbf{a}_i\varphi_i(\mathbf{p}_i)\mathbf{b}_i$ and $\mathbf{u}_{i+1} = \mathbf{a}_i\varphi_i(\mathbf{q}_i)\mathbf{b}_i$ for some words $\mathbf{a}_i, \mathbf{b}_i \in \mathcal{X}^*$, some endomorphism $\varphi_i:\mathcal{X}^+ \rightarrow \mathcal{X}^+$ and some identity $\mathbf{p}_i\approx \mathbf{q}_i \in \Sigma$.
Given an identity system $\Sigma$, we denote by $\operatorname{\mathsf{id}}(\Sigma)$ the set of all consequences of $\Sigma$. An \textit{identity basis} for a semigroup $S$ is any set $\Sigma \subseteq \operatorname{\mathsf{id}}(S)$ such that $\operatorname{\mathsf{id}}(\Sigma) = \operatorname{\mathsf{id}}(S)$, that is, every identity satisfied by $S$ can be derived from $\Sigma$. A semigroup $S$ is called \textit{finitely based} if it possesses a finite identity basis, that is, all identities satisfied by $S$ can be derived from a finite subset of $\operatorname{\mathsf{id}}(S)$; otherwise $S$ is called \textit{non-finitely based}. Two semigroups $S_1$ and $S_2$ are called \textit{equationally equivalent} if $\operatorname{\mathsf{id}}(S_1) = \operatorname{\mathsf{id}}(S_2)$.
For any semigroup $S$, let $S^1$ be the monoid obtained from $S$ by adjoining a unit element. Denote by $L_2$, $R_2$, $M$ the left-zero semigroup of order $2$, the right-zero semigroup of order $2$ and the free monogenic monoid, whose presentations are given as follows: \begin{align*}
L_2&=\langle a,b~|~a^2=ab=a, b^2=ba=b\rangle,\\
R_2&=\langle a,b~|~a^2=ba=a, b^2=ab=b \rangle,\\ M&=\langle a \rangle=\{1, a, a^2, a^3, \ldots \}. \end{align*}
The following results are well-known.
\begin{lemma}\label{L21} Let $\mathbf{u} \approx \mathbf{v}$ be any non-trivial identity. Then \begin{enumerate}[\rm(i)]
\item $L^1_2$ satisfies $\mathbf{u} \approx \mathbf{v}$ if and only if $\operatorname{\mathsf{ip}}(\mathbf{u}) = \operatorname{\mathsf{ip}}(\mathbf{v})$;
\item $R^1_2$ satisfies $\mathbf{u} \approx \mathbf{v}$ if and only if $\operatorname{\mathsf{fp}}(\mathbf{u}) = \operatorname{\mathsf{fp}}(\mathbf{v})$;
\item $M$ satisfies $\mathbf{u} \approx \mathbf{v}$ if and only if $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$. \end{enumerate} \end{lemma}
\section{Three sufficient conditions for a semigroup to be finitely based}\label{sec:3sc}
In this section, we give three sufficient conditions under which a semigroup is finitely based. The next three theorems are the main results in this section.
\begin{theorem}\label{thm:sc} Suppose that a semigroup $S$ satisfies the identity $xyx \approx yx^2$ and for any identity $\mathbf{u}\approx \mathbf{v}$ satisfied by $S$, \begin{enumerate}[\rm(i)] \item $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$; \item $\operatorname{\mathsf{fp}}(\mathbf{u})=\operatorname{\mathsf{fp}}(\mathbf{v})$. \end{enumerate} Then the identity $xyx \approx yx^2$ is an identity basis for $S$, and so $S$ is finitely based. \end{theorem}
\begin{proof} It suffices to show that any identity $\mathbf{u} \approx \mathbf{v}$ satisfied by $S$ can be derived from $xyx \approx yx^2$. For any $x\in \operatorname{\mathsf{con}}(\mathbf{u})$, if $\operatorname{\mathsf{occ}}(x, \mathbf{u})\geq 2$, then the identity $xyx\approx yx^2$ can be used to gather any non-last $x$ in $\mathbf{u}$ with the last $x$ in $\mathbf{u}$; that is, $\mathbf{u}$ can be written in the form \begin{align*}\label{cf} \mathbf{u}=x_1^{e_1}x_2^{e_2}\cdots x_m^{e_m} \end{align*} where $\operatorname{\mathsf{fp}}(\mathbf{u})=x_1x_2\cdots x_m$. By the same argument, $\mathbf{v}$ can be written in the form \[ \mathbf{v}=y_1^{f_1}y_2^{f_2}\cdots y_n^{f_n} \] where $\operatorname{\mathsf{fp}}(\mathbf{v})=y_1y_2\cdots y_n$. It follows from (ii) that $m=n$ and $x_i=y_i$ for $i=1,2,\dots,m$, and from (i) that $e_i=f_i$ for $i=1,2,\dots,m$. Therefore $\mathbf{u}=\mathbf{v}$. Consequently, every identity satisfied by $S$ is a consequence of the identity $xyx \approx yx^2$, and so the identity $xyx \approx yx^2$ is an identity basis for $S$. \end{proof}
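The rewriting step in the proof, using $xyx \approx yx^2$ to gather every letter at its last occurrence, can be sketched as a function computing the normal form $x_1^{e_1}x_2^{e_2}\cdots x_m^{e_m}$; this is an illustrative aid, not the formal derivation:

```python
from collections import Counter

def normal_form(w):
    """Gather all occurrences of each letter at its last occurrence:
    returns x1^e1 ... xm^em where fp(w) = x1 x2 ... xm."""
    counts = Counter(w)
    seen, last_order = set(), []
    for c in reversed(w):       # letters in order of last occurrence,
        if c not in seen:       # read from the right
            seen.add(c)
            last_order.append(c)
    last_order.reverse()        # now equals fp(w)
    return ''.join(c * counts[c] for c in last_order)

print(normal_form('xyx'))    # 'yxx' -- both sides of xyx ≈ yx^2 agree
print(normal_form('yxx'))    # 'yxx'
print(normal_form('xyzxy'))  # 'zxxyy'
```

Two words share this normal form exactly when they have the same final part and the same letter counts, which is the pair of invariants used in conditions (i) and (ii).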
\begin{theorem}\label{thm:sc1} Suppose that a semigroup $S$ satisfies the identity $xysxty \approx yxsxty$ and for any identity $\mathbf{u}\approx \mathbf{v}$ satisfied by $S$, \begin{enumerate}[\rm(i)] \item $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$; \item $\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u})=\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v})$ for any $x,y \in \mathcal{X}$; \item $\operatorname{\mathsf{fp}}(\mathbf{u})=\operatorname{\mathsf{fp}}(\mathbf{v})$. \end{enumerate} Then the identity $xysxty \approx yxsxty$ is an identity basis for $S$, and so $S$ is finitely based. \end{theorem}
\begin{proof} It suffices to show that any identity $\mathbf{u} \approx \mathbf{v}$ satisfied by $S$ can be derived from $xysxty \approx yxsxty$. Since $S$ satisfies the identity $xysxty \approx yxsxty$, it follows that $\mathbf{u}$ can be written in the form \[ \mathbf{u}_1x_1^{e_1}\mathbf{u}_2x_2^{e_2}\cdots\mathbf{u}_mx_m^{e_m} \] where $\operatorname{\mathsf{fp}}(\mathbf{u})=x_1\cdots x_m$, $e_i \geq 1$ and $\operatorname{\mathsf{con}}(\mathbf{u}_i) \subseteq \{x_i, \ldots, x_m\}$ for $1\leq i \leq m$. For each $1\leq i\leq m$, the letters in $\mathbf{u}_i$ are not the last occurrences and so can be moved in any manner by using the identity $xysxty \approx yxsxty$. In particular, any occurrence of $x_i$ in $\mathbf{u}_i$ can be moved to the right and combined with $x_i^{e_i}$ that immediately follows $\mathbf{u}_i$. Therefore, we may further assume that $\mathbf{u}_i=x_{i+1}^{g_{i+1}}\cdots x_m^{g_m}$ with $g_{i+1}, \ldots, g_m \geq 0$ for $1\leq i\leq m-1$ and $\mathbf{u}_m=\emptyset$. By the same argument, $\mathbf{v}$ can be written in the form \[ \mathbf{v}=\mathbf{v}_1y_1^{f_1}\mathbf{v}_2y_2^{f_2}\cdots\mathbf{v}_ny_n^{f_n} \] where $\operatorname{\mathsf{fp}}(\mathbf{v})=y_1y_2\cdots y_n$, $f_i \geq 1$ for $1\leq i \leq n$, $\mathbf{v}_i=y_{i+1}^{h_{i+1}}\cdots y_n^{h_n}$ with $h_{i+1}, \ldots, h_n \geq 0$ for $1\leq i\leq n-1$ and $\mathbf{v}_n=\emptyset$. It follows from (iii) that $m=n$ and $x_i=y_i$ for $1\leq i \leq m$. It follows from (i) that $e_1=f_1$. For $1< i \leq m$, since $x_i\not\in \operatorname{\mathsf{con}}(\mathbf{u}_i\mathbf{v}_i)$, it follows that $e_i=\overrightarrow{\operatorname{\mathsf{occ}}}_{x_{i-1}}(x_i,\mathbf{u})$ and $f_i=\overrightarrow{\operatorname{\mathsf{occ}}}_{x_{i-1}}(x_i,\mathbf{v})$. Hence by (ii) $e_i=f_i$ for $1< i \leq m$. Therefore $e_i=f_i$ for $1 \leq i \leq m$.
Clearly, $\mathbf{u}_m=\mathbf{v}_m=\emptyset$. To show that $\mathbf{u}_i=\mathbf{v}_i$ for $i=1, 2,\dots, m-1$, it suffices to show that $\operatorname{\mathsf{occ}}(x_j,\mathbf{u}_i)=\operatorname{\mathsf{occ}}(x_j,\mathbf{v}_i)$ for $i+1\leq j\leq m$. Since \begin{align*} \operatorname{\mathsf{occ}}(x_j,\mathbf{u}_1)&=\operatorname{\mathsf{occ}}(x_j, \mathbf{u})-\overrightarrow{\operatorname{\mathsf{occ}}}_{x_1}(x_j, \mathbf{u})\\ &=\operatorname{\mathsf{occ}}(x_j, \mathbf{v})-\overrightarrow{\operatorname{\mathsf{occ}}}_{x_1}(x_j, \mathbf{v})\\ &=\operatorname{\mathsf{occ}}(x_j,\mathbf{v}_1) \end{align*} by (i) and (ii), it follows that $\mathbf{u}_1=\mathbf{v}_1$. Since \begin{align*} \operatorname{\mathsf{occ}}(x_j,\mathbf{u}_i)&=\overrightarrow{\operatorname{\mathsf{occ}}}_{x_{i-1}}(x_j, \mathbf{u})-\overrightarrow{\operatorname{\mathsf{occ}}}_{x_i}(x_j, \mathbf{u})\\ &=\overrightarrow{\operatorname{\mathsf{occ}}}_{x_{i-1}}(x_j, \mathbf{v})-\overrightarrow{\operatorname{\mathsf{occ}}}_{x_i}(x_j, \mathbf{v})\\ &=\operatorname{\mathsf{occ}}(x_j,\mathbf{v}_i) \end{align*} by (ii), it follows that $\mathbf{u}_i=\mathbf{v}_i$ for $1<i\leq m-1$. Therefore $\mathbf{u}=\mathbf{v}$. Consequently, every identity satisfied by $S$ is a consequence of the identity $xysxty \approx yxsxty$, and so the identity $xysxty \approx yxsxty$ is an identity basis for $S$. \end{proof}
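As a sanity check on the theorem, the three invariants used in the proof (letter counts, the statistics $\overrightarrow{\operatorname{\mathsf{occ}}}$, and the final part) can be computed for both sides of $xysxty \approx yxsxty$ and seen to coincide, as they must for any identity derivable from it. A brute-force sketch, for illustration only:

```python
def occ(x, w):
    return w.count(x)

def r_occ(x, y, w):
    """Occurrences of x after the last occurrence of y in w."""
    return w[len(w) - w[::-1].index(y):].count(x)

def fp(w):
    """Final part: keep only the last occurrence of each letter."""
    seen, out = set(), []
    for c in reversed(w):
        if c not in seen:
            seen.add(c)
            out.append(c)
    return ''.join(reversed(out))

u, v = 'xysxty', 'yxsxty'
letters = set(u)
same = (all(occ(z, u) == occ(z, v) for z in letters)
        and all(r_occ(z, y, u) == r_occ(z, y, v)
                for z in letters for y in letters if z != y)
        and fp(u) == fp(v))
print(same)  # True: the three invariants cannot separate the two sides
```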
\begin{theorem}\label{thm:sc2} Suppose that a semigroup $S$ satisfies the identities \begin{subequations}\label{id:baxt} \begin{gather} ysxt\,xy\,hxky \approx ysxt\,yx\,hxky, \label{id:a}\\ xsyt\,xy\,hxky \approx xsyt\,yx\,hxky,\label{id:b} \end{gather} \end{subequations} and for any identity $\mathbf{u}\approx \mathbf{v}$ satisfied by $S$, \begin{enumerate}[\rm(i)] \item $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$; \item $\overleftarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u})=\overleftarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v}), \overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u})=\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v})$ for any $x,y \in \mathcal{X}$; \item $\operatorname{\mathsf{ip}}(\mathbf{u})=\operatorname{\mathsf{ip}}(\mathbf{v}), \operatorname{\mathsf{fp}}(\mathbf{u})=\operatorname{\mathsf{fp}}(\mathbf{v})$. \end{enumerate} Then the identities \eqref{id:baxt} constitute an identity basis for $S$, and so $S$ is finitely based. \end{theorem} \begin{proof} It suffices to show that any identity $\mathbf{u} \approx \mathbf{v}$ satisfied by $S$ can be derived from \eqref{id:baxt}. Suppose that $\operatorname{\mathsf{con}}(\mathbf{u})=\{x_1,x_2,\dots,x_r\}$ and $\operatorname{\mathsf{mix}}(\mathbf{u})=a_1 a_2 \cdots a_{m+1}$ with $a_i\in \operatorname{\mathsf{con}}(\mathbf{u})$ for $i=1,2,\ldots,m+1$. Clearly, $\mathbf{u}$ can be written in the form \[ \mathbf{u} = a_1\mathbf{u}_1a_2\mathbf{u}_2\cdots a_m\mathbf{u}_ma_{m+1} \] where $\mathbf{u}_1,\dots, \mathbf{u}_m \in \mathcal{X}^*$. Since each occurrence of letters in $\mathbf{u}_i$ is neither its first occurrence nor its last occurrence in $\mathbf{u}$, the letters in $\mathbf{u}_i$ can be permuted within $\mathbf{u}_i$ by the identities \eqref{id:baxt} in any manner.
Therefore, we may assume that $\mathbf{u}_i = x^{i_{j_1}}_1 x^{i_{j_2}}_2\cdots x^{i_{j_r}}_r$ with some non-negative integers $i_{j_1} , i_{j_2} ,\dots, i_{j_r}$. Suppose that $\operatorname{\mathsf{con}}(\mathbf{v})=\{y_1,y_2,\dots,y_s\}$ and $\operatorname{\mathsf{mix}}(\mathbf{v})=b_1 b_2 \cdots b_{n+1}$ with $b_i\in \operatorname{\mathsf{con}}(\mathbf{v})$ for $i=1,2,\ldots,n+1$. By the same argument, $\mathbf{v}$ can be written in the form \[ \mathbf{v} = b_1\mathbf{v}_1b_2\mathbf{v}_2\cdots b_n\mathbf{v}_nb_{n+1} \] where $\mathbf{v}_i = y^{i_{k_1}}_1 y^{i_{k_2}}_2\cdots y^{i_{k_s}}_s$ with some non-negative integers $i_{k_1} , i_{k_2} ,\dots, i_{k_s}$. In the following, we will show that if $\mathbf{u} \approx \mathbf{v}$ satisfies the conditions (i)--(iii), then $\mathbf{u}=\mathbf{v}$.
It follows from (i) that, for each letter $x$, either $\operatorname{\mathsf{occ}}(x, \mathbf{u})= \operatorname{\mathsf{occ}}(x, \mathbf{v})=1$ or $\operatorname{\mathsf{occ}}(x, \mathbf{u})=\operatorname{\mathsf{occ}}(x, \mathbf{v})\geq 2$, so that $|\operatorname{\mathsf{mix}}(\mathbf{u})|=|\operatorname{\mathsf{mix}}(\mathbf{v})|$. Hence $m=n$.
Next we show that $\operatorname{\mathsf{mix}}(\mathbf{u}) = \operatorname{\mathsf{mix}}(\mathbf{v})$. Clearly, $a_1 = b_1$ by (iii). Proceeding by induction, suppose that $a_1 \cdots a_{k-1} = b_1 \cdots b_{k-1}$. Assume that $a_k = x$ and $b_k = y$. If both $a_k$ and $b_k$ are the first occurrences of $x$ in $\mathbf{u}$ and $y$ in $\mathbf{v}$ respectively, then it follows from $\operatorname{\mathsf{ip}}(\mathbf{u})=\operatorname{\mathsf{ip}}(\mathbf{v})$ that $a_k = b_k$; if both $a_k$ and $b_k$ are the last occurrences of $x$ in $\mathbf{u}$ and $y$ in $\mathbf{v}$ respectively, then it follows from $\operatorname{\mathsf{fp}}(\mathbf{u})=\operatorname{\mathsf{fp}}(\mathbf{v})$ that $a_k = b_k$. Otherwise, by symmetry, we may assume that $a_k$ is the first occurrence of $x$ in $\mathbf{u}$ and $b_k$ is the last occurrence of $y$ in $\mathbf{v}$. Then by the above arguments the result $a_k = b_k$ still holds when either $\operatorname{\mathsf{occ}}(x,\mathbf{u})=1$ or $\operatorname{\mathsf{occ}}(y,\mathbf{v})=1$. Therefore, we may assume that $\operatorname{\mathsf{occ}}(x,\mathbf{u}), \operatorname{\mathsf{occ}}(y,\mathbf{v})\geq 2$.
Suppose that $a_k \neq b_k$. Then $x \not\in \operatorname{\mathsf{con}}(a_1\mathbf{u}_1\cdots a_{k-1}\mathbf{u}_{k-1})$ since $a_k$ is the first occurrence of $x$ in $\mathbf{u}$, and $y \not\in \operatorname{\mathsf{con}}(\mathbf{v}_kb_{k+1} \cdots \mathbf{v}_nb_{n+1})$ since $b_k$ is the last occurrence of $y$ in $\mathbf{v}$. Hence it follows from $\operatorname{\mathsf{occ}}(y,\mathbf{v})\geq 2$ that $\operatorname{\mathsf{occ}}(y, b_1\cdots b_{k-1})=1$, so that $\operatorname{\mathsf{occ}}(y, a_1\cdots a_{k-1})=\operatorname{\mathsf{occ}}(y, a_{k+1}\cdots a_{n+1})=1$. Clearly, $x \not\in\operatorname{\mathsf{con}}(a_1\cdots a_{k-1})=\operatorname{\mathsf{con}}(b_1\cdots b_{k-1})$. Hence $\operatorname{\mathsf{occ}}(y, \mathbf{v})=\overleftarrow{\operatorname{\mathsf{occ}}}_x(y,\mathbf{v})$. Now by (i) and (ii), \[ \operatorname{\mathsf{occ}}(y, \mathbf{u}) = \operatorname{\mathsf{occ}}(y, \mathbf{v})=\overleftarrow{\operatorname{\mathsf{occ}}}_x(y,\mathbf{v}) = \overleftarrow{\operatorname{\mathsf{occ}}}_x(y,\mathbf{u}). \] But $\operatorname{\mathsf{occ}}(y, \mathbf{u})= \overleftarrow{\operatorname{\mathsf{occ}}}_x(y,\mathbf{u})$ is impossible since $\operatorname{\mathsf{occ}}(y, a_{k+1}\cdots a_{n+1})=1$, a contradiction. Hence $a_k = b_k$. Therefore $a_i = b_i$ for all $i = 1,\dots,n+1$ by induction, and so $\operatorname{\mathsf{mix}}(\mathbf{u}) = \operatorname{\mathsf{mix}}(\mathbf{v})$.
Finally, we show that $\mathbf{u}_k =\mathbf{v}_k$ for each $k=1, \ldots, n$. By the forms of $\mathbf{u}$ and $\mathbf{v}$, it suffices to show that $\operatorname{\mathsf{occ}}(z, \mathbf{u}_k) = \operatorname{\mathsf{occ}}(z, \mathbf{v}_k)$ for any $z \in \operatorname{\mathsf{con}}(\mathbf{u}_k\mathbf{v}_k)$ and $k = 1, 2,\ldots, n$. Let $\operatorname{\mathsf{occ}}(z, \mathbf{u}_k) = s$ and $\operatorname{\mathsf{occ}}(z, \mathbf{v}_k) = t$. There are two cases.
{\bf Case~1.} $a_k = a_{k+1} = x$. Then $a_k$ and $a_{k+1}$ are the first and the last occurrences of $x$ in both $\mathbf{u}$ and $\mathbf{v}$. If $z = x$, then by (i), \[ 2 + s = \operatorname{\mathsf{occ}}(x, \mathbf{u}) = \operatorname{\mathsf{occ}}(x, \mathbf{v})=2+ t. \] Hence $s = t$. If $z \neq x$, then by (i), \[ \overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{u}) + s + \overrightarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{u}) = \operatorname{\mathsf{occ}}(z, \mathbf{u}) = \operatorname{\mathsf{occ}}(z, \mathbf{v}) =\overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{v}) + t + \overrightarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{v}). \] Thus $s = t$ follows from (ii).
{\bf Case~2.} $a_k = x \neq y = a_{k+1}$. By symmetry, there are three subcases.
{\bf 2.1.} $a_k$ and $a_{k+1}$ are the first occurrences of $x$ and $y$ respectively in both $\mathbf{u}$ and $\mathbf{v}$. Clearly, $z \neq y$. If $z = x$, then by (ii), \[ 1 + s = \overleftarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u}) =\overleftarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v})=1+ t. \] Hence $s = t$. If $z \neq x$, then by (ii), \[ \overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{u}) + s = \overleftarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{u}) = \overleftarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{v}) = \overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{v}) + t. \] Thus $s = t$ follows from (ii).
{\bf 2.2.} $a_k$ is the first occurrence of $x$ and $a_{k+1}$ is the last occurrence of $y$ in both $\mathbf{u}$ and $\mathbf{v}$. If $z = x$, then by (i), \[ 1 + s + \overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u}) = \operatorname{\mathsf{occ}}(x, \mathbf{u}) = \operatorname{\mathsf{occ}}(x, \mathbf{v})=1+ t + \overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v}). \] Thus $s = t$ follows from (ii). If $z = y$, then by (i), \[ \overleftarrow{\operatorname{\mathsf{occ}}}_x(y, \mathbf{u}) + s +1= \operatorname{\mathsf{occ}}(y, \mathbf{u}) = \operatorname{\mathsf{occ}}(y, \mathbf{v}) = \overleftarrow{\operatorname{\mathsf{occ}}}_x(y, \mathbf{v}) + t + 1. \] Thus $s = t$ follows from (ii). If $z \neq x, y$, then by (i), \[ \overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{u}) + s + \overrightarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{u}) = \operatorname{\mathsf{occ}}(z, \mathbf{u}) = \operatorname{\mathsf{occ}}(z, \mathbf{v}) = \overleftarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{v}) + t + \overrightarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{v}). \] Thus $s = t$ follows from (ii).
{\bf 2.3.} $a_k$ is the last occurrence of $x$ and $a_{k+1}$ is the first occurrence of $y$ in both $\mathbf{u}$ and $\mathbf{v}$. Clearly, $z \neq x, y$. Then by (i), \[ \overrightarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{u}) + \overleftarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{u}) - s = \operatorname{\mathsf{occ}}(z, \mathbf{u}) = \operatorname{\mathsf{occ}}(z, \mathbf{v}) = \overrightarrow{\operatorname{\mathsf{occ}}}_x(z, \mathbf{v}) + \overleftarrow{\operatorname{\mathsf{occ}}}_y(z, \mathbf{v}) - t. \] Thus $s = t$ follows from (ii).
Hence $\mathbf{u}_k = \mathbf{v}_k$ for $k = 1, 2,\ldots,n$. Therefore $\mathbf{u} = \mathbf{v}$. Consequently, every identity satisfied by $S$ is a consequence of the identities in \eqref{id:baxt}, and so the identities \eqref{id:baxt} constitute an identity basis for $S$. \end{proof}
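As before, the five invariants of the theorem can be checked by brute force to coincide on both sides of the identities \eqref{id:baxt}, as the theorem requires of any identity it derives. An illustrative sketch only:

```python
def occ(x, w):
    return w.count(x)

def ip(w):
    """Initial part: keep only the first occurrence of each letter."""
    seen, out = set(), []
    for c in w:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return ''.join(out)

def fp(w):
    """Final part: keep only the last occurrence of each letter."""
    return ip(w[::-1])[::-1]

def l_occ(x, y, w):
    """Occurrences of x before the first occurrence of y."""
    return w[:w.index(y)].count(x)

def r_occ(x, y, w):
    """Occurrences of x after the last occurrence of y."""
    return w[len(w) - w[::-1].index(y):].count(x)

def invariants_agree(u, v):
    """Conditions (i)-(iii): counts, left/right occ statistics, ip and fp."""
    ls = set(u) | set(v)
    return (all(occ(z, u) == occ(z, v) for z in ls)
            and all(l_occ(z, y, u) == l_occ(z, y, v)
                    and r_occ(z, y, u) == r_occ(z, y, v)
                    for z in ls for y in ls if z != y)
            and ip(u) == ip(v) and fp(u) == fp(v))

# The two defining identities of the theorem:
print(invariants_agree('ysxtxyhxky', 'ysxtyxhxky'))  # True
print(invariants_agree('xsytxyhxky', 'xsytyxhxky'))  # True
print(invariants_agree('xy', 'yx'))                  # False
```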
\section{Finite basis problems for stalactic, taiga, sylvester and Baxter monoids}\label{sec:app}
In this section, by applying the sufficient conditions given in Section \ref{sec:3sc}, we solve the finite basis problems for all stalactic, taiga, sylvester and Baxter monoids of rank greater than or equal to $2$.
\subsection{Finite basis problems for stalactic and taiga monoids}
\begin{lemma}\cite[Propositions 15 and 16]{CM16}\label{lem::stal id} Both the stalactic monoid $\operatorname{\mathsf{stal}}_{\infty}$ and the taiga monoid $\operatorname{\mathsf{taig}}_{\infty}$ satisfy the identity $xyx \approx yx^2$. \end{lemma}
\begin{theorem}
The identity $xyx\approx yx^2$ is a finite identity basis for the monoids $\operatorname{\mathsf{stal}}_n$ and $\operatorname{\mathsf{taig}}_n$ whenever $2\leq n\leq \infty$. Therefore all stalactic and taiga monoids of rank greater than or equal to $2$ are equationally equivalent. \end{theorem}
\begin{proof} Clearly, we only need to show that each of the monoids $\operatorname{\mathsf{stal}}_n$ and $\operatorname{\mathsf{taig}}_n$ for any $2\leq n\leq \infty$ can be defined by the identity $xyx\approx yx^2$. First we show that each of the monoids $\operatorname{\mathsf{taig}}_n$ for any $2\leq n \leq \infty$ can be defined by the identity $xyx \approx yx^2$. Note that \[ \operatorname{\mathsf{taig}}_1 \subset \operatorname{\mathsf{taig}}_2 \subset \cdots \subset \operatorname{\mathsf{taig}}_n \subset \cdots \subset \operatorname{\mathsf{taig}}_{\infty}. \] By Theorem \ref{thm:sc}, it suffices to show that $\operatorname{\mathsf{taig}}_{\infty}$ satisfies the identity $xyx\approx yx^2$ and $\operatorname{\mathsf{taig}}_2$ satisfies the conditions (i) and (ii) in Theorem \ref{thm:sc}. Clearly, $\operatorname{\mathsf{taig}}_{\infty}$ satisfies the identity $xyx\approx yx^2$ by Lemma~\ref{lem::stal id}.
Let $\mathbf{u}\approx \mathbf{v}$ be any identity satisfied by the monoid $\operatorname{\mathsf{taig}}_2$. Since $\operatorname{\mathsf{taig}}_1$ is a free monogenic monoid, it follows from Lemma~\ref{L21} (iii) that $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$, and so the condition (i) holds in $\operatorname{\mathsf{taig}}_2$. Suppose that $\operatorname{\mathsf{fp}}(\mathbf{u})\neq \operatorname{\mathsf{fp}}(\mathbf{v})$. Then there exist some $x,y$ such that $\operatorname{\mathsf{taig}}_2$ satisfies $\mathbf{a} yx^s=\mathbf{u}[x,y]\approx \mathbf{v}[x,y]=\mathbf{b} xy^t$ for some $s,t \geq 1$ and $\mathbf{a}, \mathbf{b} \in \{x,y\}^*$. Let $\varphi$ be a substitution such that $x\mapsto 2, y\mapsto 1$. Then $\varphi(\mathbf{u}[x,y])$ ends with $2$ and $\varphi(\mathbf{v}[x,y])$ ends with $1$. Since the rightmost symbol in a word $w$ determines the root node of $\mathrm{P}_{\operatorname{\mathsf{taig}}_2}(w)$, it follows that $\varphi(\mathbf{u}[x,y]) \ne \varphi(\mathbf{v}[x,y])$. This implies that $\operatorname{\mathsf{taig}}_2$ does not satisfy $\mathbf{u}[x,y]\approx \mathbf{v}[x,y]$, a contradiction. Hence $\operatorname{\mathsf{fp}}(\mathbf{u})= \operatorname{\mathsf{fp}}(\mathbf{v})$, and so the condition (ii) holds.
Next we show that each of the monoids $\operatorname{\mathsf{stal}}_n$ for any $2\leq n\leq \infty$ can be defined by the identity $xyx \approx yx^2$. Note that \[ \operatorname{\mathsf{stal}}_1 \subset \operatorname{\mathsf{stal}}_2 \subset \cdots \subset \operatorname{\mathsf{stal}}_n \subset \cdots \subset \operatorname{\mathsf{stal}}_{\infty}. \] By Theorem \ref{thm:sc}, it suffices to show that $\operatorname{\mathsf{stal}}_{\infty}$ satisfies the identity $xyx\approx yx^2$ and $\operatorname{\mathsf{stal}}_2$ satisfies the conditions (i) and (ii) in Theorem \ref{thm:sc}. Clearly, $\operatorname{\mathsf{stal}}_{\infty}$ satisfies the identity $xyx\approx yx^2$ by Lemma~\ref{lem::stal id}. It is routine to show that $\operatorname{\mathsf{stal}}_2$ is isomorphic to $\operatorname{\mathsf{taig}}_2$. Hence $\operatorname{\mathsf{stal}}_2$ satisfies the conditions (i) and (ii) in Theorem \ref{thm:sc} by the above arguments.
Consequently, each of the monoids $\operatorname{\mathsf{stal}}_n, \operatorname{\mathsf{taig}}_n$ for any $2\leq n\leq \infty$ can be defined by the identity $xyx \approx yx^2$, and so all of them are equationally equivalent. \end{proof}
\subsection{Finite basis problem for sylvester monoid}
\begin{lemma}\label{lem:property of sylv'id} Any identity $\mathbf{u}\approx \mathbf{v}$ satisfied by the monoid $\operatorname{\mathsf{sylv}}_2$ satisfies the conditions (i)--(iii) in Theorem \ref{thm:sc1}. \end{lemma}
\begin{proof} Let $\mathbf{u}\approx \mathbf{v}$ be any identity satisfied by $\operatorname{\mathsf{sylv}}_2$. Since $\operatorname{\mathsf{sylv}}_1$ is a free monogenic monoid, it follows from Lemma~\ref{L21} (iii) that $\operatorname{\mathsf{occ}}(x,\mathbf{u})=\operatorname{\mathsf{occ}}(x,\mathbf{v})$ for any $x \in \mathcal{X}$, and so the condition (i) holds in $\operatorname{\mathsf{sylv}}_2$. Suppose that $\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u})\neq \overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v})$ for some $x,y \in \mathcal{X}$. Then $\operatorname{\mathsf{sylv}}_2$ satisfies $\mathbf{a} yx^s=\mathbf{u}[x,y]\approx \mathbf{v}[x,y]=\mathbf{b} yx^t$ for some $s, t \geq 1$, $s\ne t$ and $\mathbf{a}, \mathbf{b} \in \{x,y\}^*$. Without loss of generality, we may assume that $s < t$. Let $\phi$ be a substitution such that $x\mapsto 2, y\mapsto 1$. Using the Algorithm \ref{algo:sylv}, one sees that \begin{align*} \mathrm{P}_{\operatorname{\mathsf{sylv}}_2}(\phi(\mathbf{u}[x,y]))=\parbox[c]{2.5cm} {\begin{tikzpicture} [line width = 0pt, empty/.style = {circle, draw, inner sep=2pt}] \node [empty,label = right:$1$-th] (A) at (2,2) {2}; \node [empty, label = right:$2$-th] (B) at (1.6,1.2) {2}; \node [empty, label = right:$s$-th] (C) at (1.2,.4) {2}; \node [empty, label = right:$(s+1)$-th] (D) at (.8,-.4) {1}; \node [rectangle,draw, minimum size = 0.4cm] (E) at (.4,-1.2) {}; \node [rectangle,draw, minimum size = 0.4cm] (F) at (1.2,-1.2) {}; \draw (A) -- (B); \draw[dashed] (B) -- (C); \draw (C) -- (D); \draw (D) -- (E); \draw (D) -- (F); \end{tikzpicture}} \text{and}\quad \mathrm{P}_{\operatorname{\mathsf{sylv}}_2}(\phi(\mathbf{v}[x,y]))=\parbox[c]{3cm} {\begin{tikzpicture} [line width = 0pt, empty/.style = {circle, draw, inner sep=2pt}] \node [empty,label = right:$1$-th] (A) at (2,2) {2}; \node [empty, label = right:$2$-th] (B) at (1.6,1.2) {2}; \node [empty, label = right:$s$-th] (C) at (1.2,.4) {2}; \node [empty, label = 
right:$t$-th] (D) at (.8,-.4) {2}; \node [empty, label = right:$(t+1)$-th] (E) at (.4,-1.2) {1}; \node [rectangle,draw, minimum size = 0.4cm] (F) at (0,-2) {}; \node [rectangle,draw, minimum size = 0.4cm] (G) at (.8,-2) {}; \draw (A) -- (B); \draw[dashed] (B) -- (C); \draw[dashed] (C) -- (D); \draw (D) -- (E); \draw (E) -- (F); \draw (E) -- (G); \end{tikzpicture}} \end{align*} Then $\phi(\mathbf{u}[x,y]) \ne \phi(\mathbf{v}[x,y])$. This implies that $\operatorname{\mathsf{sylv}}_2$ does not satisfy $\mathbf{u}[x,y]\approx \mathbf{v}[x,y]$, a contradiction. Hence $\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{u})=\overrightarrow{\operatorname{\mathsf{occ}}}_y(x, \mathbf{v})$ for any $x,y \in \mathcal{X}$, and so the condition (ii) holds.
Suppose that $\operatorname{\mathsf{fp}}(\mathbf{u})\neq \operatorname{\mathsf{fp}}(\mathbf{v})$. Then there exist letters $x,y$ such that $\operatorname{\mathsf{sylv}}_2$ satisfies $\mathbf{a} yx^s=\mathbf{u}[x,y]\approx \mathbf{v}[x,y]=\mathbf{b} xy^t$ for some $s, t\geq 1$ and $\mathbf{a}, \mathbf{b} \in \{x,y\}^*$. Let $\varphi$ be a substitution such that $x\mapsto 2, y\mapsto 1$. Then $\varphi(\mathbf{u}[x,y])$ ends with $2$ and $\varphi(\mathbf{v}[x,y])$ ends with $1$. Since the rightmost symbol in a word $w$ determines the root node of $\mathrm{P}_{\operatorname{\mathsf{sylv}}_2}(w)$, it follows that $\varphi(\mathbf{u}[x,y]) \ne \varphi(\mathbf{v}[x,y])$. This implies that $\operatorname{\mathsf{sylv}}_2$ does not satisfy $\mathbf{u}[x,y]\approx \mathbf{v}[x,y]$, a contradiction. Hence $\operatorname{\mathsf{fp}}(\mathbf{u})= \operatorname{\mathsf{fp}}(\mathbf{v})$, and so the condition (iii) holds. \end{proof}
\begin{lemma}\label{lem:pr=qr} Let $p, q, r \in \operatorname{\mathsf{sylv}}_{\infty}$ such that $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q)\leqslant \operatorname{\mathsf{ev}}(r)$. Then $pr = qr$. \end{lemma}
\begin{proof} In \cite[Lemma 19]{CM16}, it is shown that if $p, q, r \in \operatorname{\mathsf{sylv}}_{\infty}$ are such that $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q)= \operatorname{\mathsf{ev}}(r)$, then $pr = qr$. In fact, by the proof of \cite[Lemma 19]{CM16}, it is easy to see that the result still holds when $\operatorname{\mathsf{ev}}(p) =\operatorname{\mathsf{ev}}(q)<\operatorname{\mathsf{ev}}(r)$. This is because every symbol $d$ from $p$ or $q$ is inserted into a particular previously empty subtree of ${\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(r)$, depending only on the value of the symbol $d$ (and not on its position in $p$ or $q$), and unequal symbols are inserted into different subtrees. Since $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q)$, the same number of symbols equal to $d$ are inserted for each value $d$. Hence if $p, q, r \in \operatorname{\mathsf{sylv}}_{\infty}$ are such that $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q)\leqslant \operatorname{\mathsf{ev}}(r)$, then $pr = qr$ still holds. \end{proof}
\begin{theorem}\label{cor:sylv's id} The sylvester monoid $\operatorname{\mathsf{sylv}}_{\infty}$ satisfies the identity $xysxty \approx yxsxty$. \end{theorem}
\begin{proof} Let $\varphi :\mathcal{X}\rightarrow \operatorname{\mathsf{sylv}}_{\infty}$ be any substitution. Then it is obvious that $\operatorname{\mathsf{ev}}(\varphi(xy))=\operatorname{\mathsf{ev}}(\varphi(yx))\leqslant \operatorname{\mathsf{ev}}(\varphi(sxty))$. Hence it follows from Lemma \ref{lem:pr=qr} that $\varphi(xysxty)=\varphi(yxsxty)$. Therefore the sylvester monoid $\operatorname{\mathsf{sylv}}_{\infty}$ satisfies the identity $xysxty \approx yxsxty$. \end{proof}
\begin{theorem} The identity $xysxty \approx yxsxty$ is a finite identity basis for the monoids $\operatorname{\mathsf{sylv}}_n$ whenever $2\leq n\leq \infty$. Therefore all sylvester monoids of rank greater than or equal to $2$ are equationally equivalent. \end{theorem}
\begin{proof} Clearly, we only need to show that each of the monoids $\operatorname{\mathsf{sylv}}_n$ for any $2\leq n\leq \infty$ can be defined by the identity $xysxty \approx yxsxty$. Note that \[ \operatorname{\mathsf{sylv}}_1 \subset \operatorname{\mathsf{sylv}}_2 \subset \cdots \subset \operatorname{\mathsf{sylv}}_n \subset \cdots \subset \operatorname{\mathsf{sylv}}_{\infty}. \] By Theorem \ref{thm:sc1}, it suffices to show that $\operatorname{\mathsf{sylv}}_{\infty}$ satisfies the identity $xysxty \approx yxsxty$ and $\operatorname{\mathsf{sylv}}_2$ satisfies the conditions (i)--(iii) in Theorem \ref{thm:sc1}. Therefore, the result follows directly from Theorem~\ref{cor:sylv's id} and Lemma \ref{lem:property of sylv'id}. \end{proof}
Symmetrically, we have \begin{theorem} The identity $ytxsyx \approx ytxsxy$ is a finite identity basis for the monoids $\operatorname{\mathsf{sylv}}_n^\sharp$ whenever $2\leq n\leq \infty$. Therefore all $\sharp$-sylvester monoids of rank greater than or equal to $2$ are equationally equivalent. \end{theorem}
\subsection{Finite basis problem for Baxter monoid}
\begin{lemma}\label{lem:spr=sqr} Let $p, q, r, s \in \operatorname{\mathsf{baxt}}_{\infty}$ such that $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q) \leqslant \operatorname{\mathsf{ev}}(r), \operatorname{\mathsf{ev}}(s)$. Then $spr = sqr$. \end{lemma}
\begin{proof} Since $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q) \leqslant \operatorname{\mathsf{ev}}(r)$, it follows from Lemma \ref{lem:pr=qr} that ${\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(pr)={\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(qr)$. Thus ${\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(spr)={\rm P}_{\operatorname{\mathsf{sylv}}_{\infty}}(sqr)$. Since $\operatorname{\mathsf{ev}}(p) = \operatorname{\mathsf{ev}}(q) \leqslant \operatorname{\mathsf{ev}}(s)$, it follows from the dual of Lemma \ref{lem:pr=qr} that ${\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(sp)={\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(sq)$. Thus ${\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(spr)={\rm P}_{\operatorname{\mathsf{sylv}}^{\sharp}_{\infty}}(sqr)$. Therefore ${\rm P}_{\operatorname{\mathsf{baxt}}_{\infty}}(spr)={\rm P}_{\operatorname{\mathsf{baxt}}_{\infty}}(sqr)$, so that $spr = sqr$. \end{proof}
\begin{theorem}\label{cor:baxt's id} The Baxter monoid $\operatorname{\mathsf{baxt}}_{\infty}$ satisfies the identities \eqref{id:baxt}. \end{theorem}
\begin{proof} Let $\varphi :\mathcal{X}\rightarrow \operatorname{\mathsf{baxt}}_{\infty}$ be any substitution. It is obvious that $\operatorname{\mathsf{ev}}(\varphi(xy))=\operatorname{\mathsf{ev}}(\varphi(yx))\leqslant \operatorname{\mathsf{ev}}(\varphi(ysxt)), \operatorname{\mathsf{ev}}(\varphi(hxky))$. By Lemma \ref{lem:spr=sqr}, we have $\varphi(ysxtxyhxky) = \varphi(ysxtyxhxky)$. Therefore the Baxter monoid $\operatorname{\mathsf{baxt}}_{\infty}$ satisfies the identity \eqref{id:a}. A similar argument can show that the Baxter monoid $\operatorname{\mathsf{baxt}}_{\infty}$ satisfies the identity \eqref{id:b}. \end{proof}
\begin{theorem}\label{thm:baxt'id} The identities \eqref{id:baxt} constitute a finite identity basis for the monoids $\operatorname{\mathsf{baxt}}_n$ whenever $2\leq n\leq \infty$. Therefore all Baxter monoids of rank greater than or equal to $2$ are equationally equivalent. \end{theorem} \begin{proof} Clearly, we only need to show that each of the monoids $\operatorname{\mathsf{baxt}}_n$ for any $2\leq n\leq \infty$ can be defined by the identities \eqref{id:baxt}. Note that \[ \operatorname{\mathsf{baxt}}_1 \subset \operatorname{\mathsf{baxt}}_2 \subset \cdots \subset \operatorname{\mathsf{baxt}}_n \subset \cdots \subset \operatorname{\mathsf{baxt}}_{\infty}. \] By Theorem \ref{thm:sc2}, it suffices to show that $\operatorname{\mathsf{baxt}}_{\infty}$ satisfies the identities \eqref{id:baxt} and $\operatorname{\mathsf{baxt}}_2$ satisfies the conditions (i)--(iii) in Theorem \ref{thm:sc2}.
Clearly, the Baxter monoid $\operatorname{\mathsf{baxt}}_{\infty}$ satisfies the identities \eqref{id:baxt} by Theorem \ref{cor:baxt's id}. Since both $\operatorname{\mathsf{sylv}}_2$ and $\operatorname{\mathsf{sylv}}_2^{\sharp}$ are homomorphic images of $\operatorname{\mathsf{baxt}}_2$ by the definition of Baxter monoid, it follows from Lemma \ref{lem:property of sylv'id} and its dual that $\operatorname{\mathsf{baxt}}_2$ satisfies the conditions (i)--(iii) in Theorem \ref{thm:sc2}. Consequently, each of monoids $\operatorname{\mathsf{baxt}}_n$ for any $2\leq n\leq \infty$ can be defined by the identities \eqref{id:baxt}, and so all of them are equationally equivalent. \end{proof}
\end{document}
Rotational Motions Questions
1) A $12\, {\rm cm}$ diameter, $2\,{\rm kg}$ uniform circular disk, which is initially at rest, experiences the net torque shown in the figure below. What is the disk's angular velocity at $t=12\, \mathrm{s}$? The disk rotates about an axis perpendicular to the plane of the disk and through its center. Note: $I_{disk}=\frac{1}{2}MR^2$
\[\tau =I\alpha =I\frac{d\omega }{dt}\ \Longrightarrow \frac{d\omega }{dt}=\frac{\tau }{I}\]
\[\omega =\frac{1}{I}\underbrace{\int^t_0{\tau \left(t\right)dt}}_{area}\]
Instead of evaluating the above integral, compute the net (signed) area under the $\tau$-$t$ diagram:
\[\omega =\frac{1}{I}\left(area\right)=\frac{1}{0.5\times 2\times {\left(0.06\right)}^2}\left(\frac{1}{2}\left(0.2\right)\left(8\right)-\frac{1}{2}\left(0.1\right)\left(4\right)\right)\approx 170\ \frac{\mathrm{rad}}{\mathrm{s}}\]
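As a quick numerical sanity check (not part of the original solution), the arithmetic can be reproduced in Python. The triangular areas $\frac{1}{2}(0.2)(8)$ and $\frac{1}{2}(0.1)(4)$ are the positive and negative portions of the torque-time graph as described in the worked solution:

```python
# Disk: I = (1/2) M R^2 with M = 2 kg, R = 0.06 m
I = 0.5 * 2 * 0.06**2

# Net signed area under the torque-time graph (triangle values from the figure)
area = 0.5 * 0.2 * 8 - 0.5 * 0.1 * 4

omega = area / I  # final angular velocity in rad/s, ~167 ≈ 170
```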
2) A uniform rectangular beam of length $L=5\, \mathrm{m}$ and mass $M=40\, \mathrm{kg}$ is supported but not attached to the two posts, which are a distance $D=3\, \mathrm{m}$ apart. A child of mass $w=20\ \mathrm{kg}$ starts walking along the beam.
(a) Assuming infinitely rigid posts, how close can the child get to the right end of the beam without it falling over? [Hint: The upward force exerted by the left on the beam cannot be negative --this is the limiting condition on how far the child can be to the right. Set up the static equilibrium condition with the pivot about the left end of the beam.]
(b) Suppose the left end of the beam is attached to the left post although it can still freely pivot about it. The right post is not attached as in part a). If the left post is infinitely rigid while the Young's modulus for the right post which is of length $s=10\, {\rm m}$ is $Y={10}^{10}\, \mathrm{N/}{\mathrm{m}}^{\mathrm{2}}$ and its cross sectional area is $A=\frac{1}{2}\,{\mathrm{m}}^{\mathrm{2}}$, how much does its length differ between the situation when the child is not present and when the child is at the rightmost edge of the beam?
(a) Take the pivot about left end. Identify all of the forces that act on the beam. Since we want the beam to be in static equilibrium after the child reaches to the desired point so apply this condition to the beam:
\[\Sigma F_y=0\Rightarrow \ Mg+wg=F_1+F_2\]
Where $F_1$ and $F_2$ are the normal forces exerted on the beam by the right and left posts, respectively. These forces produce the following torque balance about the left end:
\[\Sigma \tau =0 \Rightarrow F_1D=Mg\,\frac{L}{2}+wg\left(L-x\right)\]
where $x$ is the child's distance from the right end of the beam.
Since $F_2\ge 0$, we must have $F_1\le \left(M+w\right)g$, i.e. $F_1D\le \left(M+w\right)gD$, so
\[Mg\,\frac{L}{2}+wg\left(L-x\right)\le \left(M+w\right)gD\Rightarrow \ -\frac{M}{w}\left(D-\frac{L}{2}\right)+\left(L-D\right)\le x\ \]
\[\Longrightarrow x\ge 1\ \mathrm{m}\]
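A short numerical check (not part of the original solution) confirms both the limiting distance and that the left-post force vanishes exactly there:

```python
g = 9.8
M, w = 40.0, 20.0   # beam and child masses (kg)
L, D = 5.0, 3.0     # beam length and post separation (m)

# Minimum distance x from the right end, from the derived inequality
x_min = -(M / w) * (D - L / 2) + (L - D)

# At x = x_min the left-post force F2 should be exactly zero
F1 = (M * g * L / 2 + w * g * (L - x_min)) / D
F2 = (M + w) * g - F1
```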
(b) Since now the beam is attached to the left post, the force on the left end of the beam can point down. Hence, the child can easily be all the way to the right.
$F_1D\cong Mg\frac{L}{2}+wgL\Rightarrow \ F_1\approx Mg\ \left(\frac{1}{2}\frac{L}{D}\right)+wg\frac{L}{D}$
Let $\mathrm{\Delta }S$ be the compression of the right post. Hence $\mathrm{\Delta }S_{child\ on}\approx \left(\frac{F_1}{A}\right)\frac{S}{Y}$.
With the child off, $F^{'}_1\approx Mg\ \left(\frac{L}{2D}\right)$ and $\mathrm{\Delta }S_{child\ off}\approx \left(\frac{F^{'}_1}{A}\right)\frac{S}{Y}$
\[\therefore \mathrm{\Delta }S_{child\ on}-\mathrm{\Delta }S_{child\ off}\approx \frac{\left(F_1-F^{'}_1\right)}{A}\frac{S}{Y}=\left(\frac{wg\frac{L}{D}}{A}\right)\frac{S}{Y}=\left(\frac{327}{\frac{1}{2}}\right)\frac{10}{{10}^{10}}\approx 7\times {10}^{-7}\mathrm{m}\]
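The compression difference can be verified numerically (a check added here, not part of the original solution):

```python
g = 9.8
w = 20.0              # child's mass (kg)
L, D = 5.0, 3.0       # beam length and post separation (m)
S, Y, A = 10.0, 1e10, 0.5  # post length (m), Young's modulus (N/m^2), area (m^2)

dF = w * g * L / D    # extra load on the right post with the child at the edge
dS = (dF / A) * (S / Y)  # change in compression, ~6.5e-7 m ≈ 7e-7 m
```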
3) A grindstone spinning at the rate of $8.3\ \mathrm{rev/s}$ has what approximate angular speed?
The angular speed of a rotational motion with period $T$ and frequency $f$ is defined by
\[\omega =\frac{2\pi }{T}=2\pi f\]
Where $T$ is in seconds and $f$ is in hertz, so $\omega $ has units of $\mathrm{rad/s}$. We need only convert $\mathrm{rev/s}$ to $\mathrm{rad/s}$:
\[1\ rev=2\pi \ \mathrm{rad}\]
\[\omega =8.3\frac{\mathrm{rev}}{\mathrm{s}}\left(2\pi \frac{\mathrm{rad}}{\mathrm{1\ rev}}\right)=52\frac{\mathrm{rad}}{\mathrm{s}}\]
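The unit conversion is a one-liner to verify (a check added here, not part of the original solution):

```python
import math

f = 8.3                   # spin rate in rev/s
omega = f * 2 * math.pi   # angular speed in rad/s, ~52.2
```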
4) The figure below represents a $2.0\, {\rm m}$ long bar pinned at point A. Compute the net torque on the bar with respect to point A due to the forces $F_1=50\, \mathrm{N}$, applied $1.0\ \mathrm{m}$ from point A, and $F_2=100\,\mathrm{N}$, as shown in the figure below.
The torque about an axis or point is defined as the cross product of moment arm distance vector $\vec{r}$ and force $\vec{F}$ which produce rotation.
\[\vec{\tau }=\vec{r}\times \vec{F}\]
Note: the moment arm is the minimum distance between the pivot point and the line of action (the line along which the forces act)
First find the magnitude of the torques due to the forces $F_1$ and $F_2$ about the pivot point $A$.
\[\left|{\vec \tau }_1 \right|=r_1F_1\,{\sin \theta \ }=\left(\frac{L}{2}\right)F_1\,{\sin 90{}^\circ \ }=50\ \mathrm{N\cdot m}\]
Where $\theta $ is the angle between the lever arm and force vector $\vec F$.
\[\left|{\vec \tau }_2\right|=r_2F_2\,{\sin \theta \ }=LF_2\,{\sin 60{}^\circ \ }=2\times 100\times \frac{\sqrt{3}}{2}=100\sqrt{3}\ \mathrm{N\cdot m}\]
The torque is a vector quantity, so use the right-hand rule to find its direction. Curl the fingers of your right hand from the direction of $\vec r$ into the direction of $\vec F$; your thumb then points in the direction of the torque $\vec \tau$ (equivalently, the sense in which the fingers curl shows whether the torque is clockwise or counterclockwise). Let us take the positive direction to be out of the page (counterclockwise) and vice versa. So
\[{\vec \tau }_1=-50\ \mathrm{N\cdot m} \quad \text{and} \quad {\vec \tau }_2=+100\sqrt{3}\ \mathrm{N\cdot m}\]
The total torque about the pivot point $A$ is the sum of these:
\[\mathrm{\Sigma }{\vec \tau }_A=+100\sqrt{3}-50\approx +123.2\ \mathrm{N\cdot m}\]
Thus the total torque is counterclockwise.
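The two torque contributions and their sum can be checked numerically (an added verification, not part of the original solution):

```python
import math

L = 2.0  # bar length (m)
tau1 = -(L / 2) * 50 * math.sin(math.radians(90))  # clockwise, hence negative
tau2 = +L * 100 * math.sin(math.radians(60))       # counterclockwise, positive
net = tau1 + tau2                                  # ~ +123.2 N·m
```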
5) A physical pendulum has the shape of a disk of radius $r$ and mass $m$. The pendulum swings about an axis perpendicular to the plane of the disk and at a distance $L$ from the center of the disk.
(a) What is the moment of inertia of the disk about the axis where the pendulum swings?
(b) What is the oscillation frequency of this pendulum?
(c) For what value of $L$ is this frequency at a maximum?
($I_{disk}$ about its center of mass is $\frac{1}{2}mr^2$. Assume that the oscillation amplitude is small; the small-angle approximations are ${\sin \theta \ }\approx \theta $ and ${\cos \theta \ }\approx 1-\frac{{\theta }^2}{2}$.)
(a) Note: parallel axis theorem
The moment of inertia is minimal when the rotation axis passes through the center of mass (CM) and increases as the rotation axis moves farther from the CM; i.e., $I=I_{cm}+mh^2$, where $h$ is the distance from the CM to the parallel axis of rotation.
So, $I=\frac{1}{2}mr^2+mL^2$
(b) The radial component of $mg$ is balanced by the reaction force at the pivot (so there is no radial acceleration). The tangential component provides the restoring torque on the pendulum.
\[\vec{\tau }=I\alpha =I\frac{d^2\theta }{dt^2}\ \Rightarrow \ -mgL\,{\sin \theta \ }=I\frac{d^2\theta }{dt^2}\]
\[\therefore \frac{d^2\theta }{dt^2}+\frac{mgL}{I}{\sin \theta \ }=0\]
\[\mathrm{small\ angle\ Approx.}\ \ :\ \frac{d^2\theta }{dt^2}+\frac{mgL}{I}\theta \approx 0\]
Note: In above the minus sign is there because the torque opposes the angular displacement from equilibrium.
By identifying $\frac{mgL}{I}$ with the squared angular frequency ${\omega }^2$, the above equation has the form of an SHM equation; therefore
\[{\omega }^2=\frac{mgL}{I}=\frac{mgL}{mL^2+\frac{1}{2}mr^2} \quad , \quad f=\frac{\omega }{2\pi }\]
(c) To find the extremum of a function, take its derivative. If $\omega $ is extremal, then so is ${\omega }^2$, so we may examine $d{\omega }^2/dL$ instead of $d\omega /dL$:
\[\frac{d{\omega }^2}{dL}=\frac{g\left(\frac{r^2}{2}-L^2\right)}{{\left(L^2+\frac{r^2}{2}\right)}^2}=0\ \Longrightarrow \ \ L=\frac{r}{\sqrt{2}}\]
This is less than $r$!
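As a numerical cross-check (not part of the original solution), one can scan the frequency over pivot distances and confirm the maximum lies at $L=r/\sqrt{2}$. The mass and radius below are arbitrary sample values; the location of the maximum scales with $r$ only:

```python
import math

m, r, g = 1.0, 0.1, 9.8  # sample values (assumed for illustration)

def omega_sq(L):
    """Squared angular frequency of the physical pendulum pivoted at distance L."""
    return m * g * L / (0.5 * m * r**2 + m * L**2)

# Brute-force scan for the maximizing pivot distance
Ls = [i * 1e-5 for i in range(1, 50001)]  # L in (0, 0.5]
L_star = max(Ls, key=omega_sq)            # should be close to r / sqrt(2)
```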
6) Two astronauts, each having a mass $M$, are connected by a rope of length $d$ having negligible mass. They are isolated in space, orbiting their common center of mass at speed $v$. Treating the astronauts as particles, calculate
(a) The angular momentum of the system
(b) The rotational energy of the system
By pulling on the rope, one of astronauts shortens the distance between them to $d/2$.
(c) What is the new angular momentum of the system?
(d) What are the astronauts' new speeds?
(e) What is the new rotational energy of the system?
(f) How much work does the astronaut do in shortening the rope?
(a) Recall that the angular momentum of a rotational object is given by $\vec{L}=\vec{r}\times \vec{p}$. So the magnitude of the angular momentum about CM for this case is
\[\left|\vec{L}\right|=Mv\left(\frac{d}{2}\right)+Mv\left(\frac{d}{2}\right)=dMv\]
By the right-hand rule, its direction is out of the page.
(b) The rotational energy of a rotating system is defined as $K_r=\frac{1}{2}I{\omega }^2$, where $\omega $ is the angular velocity ($\vec{v}=\vec{\omega }\times \vec{r}$) and $I=\Sigma m_ir_i^2$ is the moment of inertia of the system.
In this problem, $\vec \omega \bot \vec r\ \Rightarrow \omega =\frac{v}{r}$
\[K_r=\frac{1}{2}\left(Mr^2\right){\left(\frac{v}{r}\right)}^2=\frac{1}{2}Mv^2\]
\[K_{r,tot}=2K_r=2\left(\frac{1}{2}Mv^2\right)=Mv^2\]
(c) Because there is no external torque acting on the system (in space effects due to friction and air resistance are negligible), so the total angular momentum is constant i.e.
\[\vec {\tau }_{net\ ext}=\frac{d{\vec{L}}_{sys}}{dt}=0\Rightarrow \ L_i=L_f=dMv\]
(d) $L_f=dMv=2\left(Mv_f\frac{d}{4}\right)\Rightarrow v_f=2v$
(e) $K^{'}_{r,tot}=2\cdot \frac{1}{2}\left(I_f{\omega }^2_f\right)=M{\left(\frac{d}{4}\right)}^2{\left(\frac{v_f}{\frac{d}{4}}\right)}^2=Mv^2_f=M{\left(2v\right)}^2=4Mv^2$
\[\Rightarrow \ K^{'}_{r,tot}=4K_{r,tot}\]
(f) Use the work - energy theorem $\Delta K=W$
\[K_f-K_i=W\Rightarrow 4Mv^2-Mv^2=W\]
\[\therefore W=3Mv^2\]
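The chain of results can be verified with sample numbers (an added check, not part of the original solution; the ratios $v_f=2v$ and $W=3Mv^2$ hold for any $M$, $d$, $v$):

```python
M, d, v = 75.0, 10.0, 2.0  # sample mass (kg), rope length (m), speed (m/s)

L_i = d * M * v                # initial angular momentum (conserved)
v_f = L_i / (2 * M * (d / 4))  # new speed after shortening the rope to d/2
K_i = M * v**2                 # initial rotational energy
K_f = M * v_f**2               # final rotational energy
W = K_f - K_i                  # work done by the astronaut
```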
7) The upper end of the string wrapped around the cylinder in the figure below is held by a hand that is accelerated upward so that the center of mass of the cylinder does not move as the cylinder spins up. Find:
(a) The tension in the string
(b) The angular acceleration of the cylinder
(c) The acceleration of the hand.
(a) Draw a free body diagram as below and apply Newton's 2${}^{nd}$ law to it:
\[\Sigma \vec F_y=ma_{lin}=0\]
\[T-mg=0\Rightarrow T=mg\]
Where $a_{lin}$ is the acceleration of the center of mass of the cylinder, which the problem states does not move.
(b) The free body diagram shows that the tension of the string causes the rotation of the cylinder ($mg$ acts at the center of mass, so it does not provide a torque about the center of mass) so Newton's 2${}^{nd}$ law for rotating objects states:
\[\Sigma \tau =I\alpha \Rightarrow \ TR=I\alpha \Rightarrow \alpha =\frac{TR}{I}=\frac{TR}{\frac{1}{2}mR^2}=\frac{2T}{mR}\]
(c) The angular acceleration is related to the tangential acceleration of the cylinder via
\[a=\alpha R\Rightarrow a=\frac{2T}{m}\]
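Combining the three results gives $a=2T/m=2g$: the hand must accelerate upward at twice gravitational acceleration, independent of the cylinder's mass and radius. A numerical check (added here, not part of the original solution; the mass and radius are assumed sample values):

```python
g = 9.8
m, R = 2.0, 0.1  # sample mass (kg) and radius (m), not given in the problem

T = m * g                 # (a) string tension
alpha = 2 * T / (m * R)   # (b) angular acceleration of the cylinder
a_hand = alpha * R        # (c) acceleration of the hand, = 2g
```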
8) A force $\vec{F}=3\hat{i}-2\hat{j}\ \mathrm{(N)}$ acts at a location $\vec{r}=1\ \hat{i}+2\hat{j}\ \mathrm{(m)}$ on an object. What is the torque that this force applies about an axis through the origin perpendicular to the $xy$ plane?
By definition, the torque acting on the object is $\vec{\tau }=\vec{r}\times \vec{F}$. There are two ways to compute it: the determinant method and the direct product method.
Determinant method:
\[\vec{\tau }=\vec{r}\times \vec{F}=\left| \begin{array}{ccc}
\hat{i} & \hat{j} & \hat{k} \\
1 & 2 & 0 \\
3 & -2 & 0 \end{array}
\right|=\hat{i}\left(2\left(0\right)-\left(0\right)\left(-2\right)\right)-\hat{j}\left(1\left(0\right)-\left(0\right)\left(3\right)\right)+\hat{k}\left(\left(1\right)\left(-2\right)-\left(2\right)\left(3\right)\right)=-8\hat{k}\ \mathrm{(N.m)}\]
Direct product:
\begin{align*}
\vec{\tau }=\vec{r}\times \vec{F} &=\left(1\ \hat{i}+2\hat{j}\right)\times \left(3\hat{i}-2\hat{j}\right)\\
&=\left(1\right)\left(3\right)\left(\underbrace{\hat{i}\times \hat{i}}_{0}\right)+\left(1\right)\left(-2\right)\left(\underbrace{\hat{i}\times \hat{j}}_{\hat{k}}\right)+\left(2\right)\left(3\right)\left(\underbrace{\hat{j}\times \hat{i}}_{-\hat{k}}\right)+\left(2\right)\left(-2\right)\left(\underbrace{\hat{j}\times \hat{j}}_{0}\right)\\
&= -2\hat{k}+6\left(-\hat{k}\right)=-8\hat{k}\ {\rm N\cdot m}
\end{align*}
Note: an easy way to remember the cross product of unit vectors are shown in the figure below.
By going around this figure in the clockwise direction, take the positive cross product and vice versa. That is
\[\hat{i}\times \hat{j}=\hat{k}\ \ and\ \ \hat{j}\times \hat{i}=-\hat{k}\]
\[\hat{k}\times \hat{i}=\hat{j}\ \ \ and\ \ \hat{i}\times \hat{k}=-\hat{j}\]
And so on. The cross product of any unit vector with itself is always zero:
\[\hat{i}\times \hat{i}=\hat{j}\times \hat{j}=\hat{k}\times \hat{k}=0\]
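Both methods reduce to the component formula for the cross product, which is easy to verify numerically (an added check, not part of the original solution):

```python
r = (1.0, 2.0, 0.0)   # position vector (m)
F = (3.0, -2.0, 0.0)  # force vector (N)

# Component formula for the cross product r x F
tau = (r[1] * F[2] - r[2] * F[1],
       r[2] * F[0] - r[0] * F[2],
       r[0] * F[1] - r[1] * F[0])
# tau = (0, 0, -8): torque of magnitude 8 N·m into the page
```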
9) A man of mass $M=75\,{\rm kg}$ lowers himself down from the top of a building by using a rope wound on a drum (a hollow cylinder of radius $r=0.50\,{\rm m}$ and mass $2M=150\, {\rm kg}$), as shown in the picture. The man and the drum start at rest.
(a) Find the angular acceleration of the drum.
(b) What is the velocity of the man when he has dropped $\mathrm{20\ m}$?
(a) Use Newton's 2${}^{nd}$ law for linear and rotational motion as follows
\[\mathrm{\Sigma }\vec F_y=Ma\Rightarrow T\hat{j}+Mg\left(-\hat{j}\right)=Ma\left(-\hat{j}\right)\Rightarrow T=M(g-a)\]
Where $a$ is the linear acceleration of the man. Recall that in the rotational motion linear acceleration (if the string around the pulley does not slip) is related to the angular acceleration by $a=\alpha r$.
The tension of the rope acts a torque on the cylinder about the horizontal axis passing through the central axis of the cylinder. So $\tau =I\alpha $, where $I$ is the moment of inertia and $\alpha $ is the angular acceleration. By definition of the torque $\vec{\tau }=\vec{r}\times \vec{F}$ we obtain
\[\tau =I\alpha \to rT=I\alpha \Rightarrow \ \alpha =\frac{rT}{I_{cylinder}}\]
Now substitute the tension on the rope into the above relation:
\[\alpha =\frac{rM\left(g-a\right)}{I_{cyli}}=\frac{rM\left(g-\alpha r\right)}{I_{cyli}}\to \alpha \left(I_{cyli}+Mr^2\right)=rMg\]
\[\Rightarrow \alpha =\frac{Mgr}{I_{cyli}+Mr^2}=\frac{Mgr}{\left(2M\right)r^2+Mr^2}=\frac{g}{3r}=6.5\ {\mathrm{s}}^{\mathrm{-}\mathrm{2}}\]
Where we have used $I_{cylinder}=mr^2$ for a hollow cylinder about its central axis (here with $m=2M$).
(b) Use the following kinematic equation to find his speed.
\[v^2-v^2_0=2a\mathrm{\Delta }y\]
Since the man starts at rest, $v_0=0$. If we choose the starting point as the origin, the man ends up at $\mathrm{\Delta }y=-20\ \mathrm{m}$ below it. Therefore,
\[v=\sqrt{2\underbrace{a}_{\mathrm{is\ negative}}\mathrm{\Delta }y}=\sqrt{2\left(-\alpha r\right)\mathrm{\Delta }y}=\sqrt{2(-0.5\times 6.5)(-20)}=11.4\ \frac{\mathrm{m}}{\mathrm{s}}\]
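Both parts can be verified numerically (an added check, not part of the original solution):

```python
import math

g, r = 9.8, 0.5
alpha = g / (3 * r)           # (a) angular acceleration of the drum, ~6.5 rad/s^2
a = alpha * r                 # linear acceleration of the man (= g/3)
v = math.sqrt(2 * a * 20.0)   # (b) speed after dropping 20 m, ~11.4 m/s
```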
10) A hoop is released from rest at the top of a plane inclined at $20{}^\circ $ above horizontal. How long does it take the hoop to roll $12.0\ \mathrm{m}$ down the plane? ($I_{hoop}=mr^2$)
In this problem we have a rotational motion. So use the conservation of mechanical energy between initial and final points to find the velocity of the center of mass of the hoop at the bottom of the incline plane.
\[E_{top}=E_{bottom}\]
\[mgh=\underbrace{\frac{1}{2}mv^2_c}_{\substack{\text{translational}\\ \text{kinetic energy}}}+\underbrace{\frac{1}{2}I{\omega }^2}_{\substack{\text{rotational}\\ \text{kinetic energy}}}\]
We know that in the rotational motions, the angular and tangential velocities are related together via $v=r\omega $.
By substituting the given values, we obtain
\[mgh=\frac{1}{2}mv^2_c+\frac{1}{2}\left(mr^2\right){\left(\frac{v_c}{r}\right)}^2\Rightarrow v_c=\sqrt{gh}=\sqrt{g(L\,{\sin 20{}^\circ \ })}\]
From the geometry, we have substituted $L\,{\sin 20{}^\circ \ }$ for height of the incline plane $h$.
Now using the kinematic equation $v^2-v^2_0=2ax$, find the acceleration of the CM and then use the equation $y=\frac{1}{2}at^2+v_0t$ to determine the required time.
\[v^2-\underbrace{v^2_0}_{0}=2a\underbrace{x}_{L}\Rightarrow \ a=\frac{g\,{\sin 20{}^\circ \ }}{2}\]
If we suppose the axis parallel to the incline plane as the $y$ axis then $y=L$
\[y=\frac{1}{2}at^2+\underbrace{v_0}_{0}t\to L=\frac{1}{2}at^2\]
\begin{align*}
\Rightarrow t & =\sqrt{\frac{2L}{a}}=\sqrt{\frac{2L}{\frac{g}{2}\,{\sin 20{}^\circ \ }}}=\sqrt{\frac{4L}{g\,{\sin 20{}^\circ \ }}}\\
& =\sqrt{\frac{4\left(12\right)}{9.8\,{\sin 20{}^\circ \ }}}=3.78\ \mathrm{s}
\end{align*}
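The final arithmetic can be verified numerically (an added check, not part of the original solution):

```python
import math

g, L = 9.8, 12.0
theta = math.radians(20)

a = 0.5 * g * math.sin(theta)  # acceleration of the hoop's center of mass
t = math.sqrt(2 * L / a)       # time to roll 12 m from rest, ~3.78 s
```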
11) A ball begins rolling up a hill as shown, with no slipping or loss of mechanical energy. Its initial velocity in the horizontal flat region is $30\ \mathrm{m/s}$. The moment of inertia of a solid sphere with mass $M$ and radius $R$ is given by $I=\frac{2}{5}MR^2$.
(a) Calculate its velocity $v_f$ just as it rolls horizontally off the edge of the cliff, which is $20\ \mathrm{m}$ above the initial position.
(b) Calculate the time $t$ that it takes to hit the ground after falling off the cliff.
(c) Calculate the distance from the base of the cliff to the point where the ball hits the ground.
(d) Calculate the (center of mass) velocity that the ball has when it hits the ground.
(a) By using the conservation of mechanical energy, we have
\[E_i=E_f\Rightarrow \frac{1}{2}Mv^2_0+\frac{1}{2}I{\omega }^2_0=\frac{1}{2}Mv^2_f+\frac{1}{2}I{\omega }^2_f+Mgh\]
The required condition for rotation without slipping is $v=r\omega $, where $v$ is the tangential velocity of the ball and $r$ is its radius. So
\[\frac{1}{2}Mv^2_0+\frac{1}{2}\left(\frac{2}{5}Mr^2\right){\left(\frac{v_0}{r}\right)}^2=\frac{1}{2}Mv^2_f+\frac{1}{2}\left(\frac{2}{5}Mr^2\right){\left(\frac{v_f}{r}\right)}^2+Mgh\]
\[\frac{7}{10}Mv^2_0=\frac{7}{10}Mv^2_f+Mgh\Rightarrow v_f=\sqrt{v^2_0-\frac{10}{7}gh}\]
\[\Rightarrow \ v_f=\sqrt{{30}^2-\frac{10}{7}\left(9.8\times 20\right)}=25\frac{\mathrm{m}}{\mathrm{s}}\]
(b) After the ball leaves the cliff, it moves in projectile motion (with $\alpha =0{}^\circ $). In the free fall stage, use the following kinematic equation
\[y=-\frac{1}{2}gt^2+\underbrace{v_0\,{\sin \alpha\ }}_{v_{0y}}t+y_0\]
Here $v_0$ denotes the initial velocity of the projectile stage, which equals the speed $v_f$ at the cliff edge. If the top of the cliff is chosen as the origin, the landing coordinate is $y=-20\ \mathrm{m}$.
\[-20=-\frac{1}{2}\left(9.8\right)t^2+0t+0\Rightarrow t=2.02\ \mathrm{s}\]
(c) In the projectile motion, for finding the travelled horizontal distance, we must use the following kinematic equation
\[x=\underbrace{v_0\,{\cos \alpha \ }}_{v_{0x}}t\Rightarrow x=24.9\times {\cos 0{}^\circ \ }\times 2.02=50\ \mathrm{m}\]
(d) In the projectile motion, the projectile has no horizontal acceleration so the only acceleration is the downward free fall acceleration $-g$. Therefore, at any moment first find the components of the velocity as follows
\[v_x=v_{0x}=v_0\,{\cos \alpha \ }=25\ \mathrm{m/s}\]
\[v_y=\underbrace{v_{0y}}_{v_0\,{\sin \alpha \ }}-gt=-9.8\times 2.02=-20\ \mathrm{m/s\ }\]
Now use the below relation to determine the magnitude of the velocity
\[v=\sqrt{v^2_x+v^2_y}=\sqrt{{25}^2+{\left(-20\right)}^2}=32\ \mathrm{m/s}\]
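All four parts can be reproduced numerically (an added check, not part of the original solution; the exact impact speed is $\approx 31.8\ \mathrm{m/s}$, which the worked solution rounds to 32):

```python
import math

g, v0, h = 9.8, 30.0, 20.0

vf = math.sqrt(v0**2 - (10 / 7) * g * h)  # (a) speed at the cliff edge, ~24.9 m/s
t = math.sqrt(2 * h / g)                  # (b) fall time, ~2.02 s
x = vf * t                                # (c) horizontal distance, ~50.3 m
v = math.hypot(vf, g * t)                 # (d) impact speed, ~31.8 m/s
```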
12) A block of mass $m$ is being whirled around on a frictionless surface (as shown), in a circular path of radius $r$.
(a) Its initial velocity is $v$. If it is pulled in so that the new radius is $r/2$, what is its new velocity?
(b) The initial tension in the string is $T$. What is the final tension after the block is pulled into the new radius of $r/2$?
(a) No external torque acts about the vertical axis through the hole, since the surface is frictionless. So the total angular momentum of the system is conserved. By definition, the angular momentum is $\vec{L}=\vec{r}\times \vec{p}$. Therefore, we have
\[L_i=L_f\Rightarrow rmv=\left(\frac{r}{2}\right)mv^{'}\Rightarrow v^{'}=2v\]
(b) The tension in the string provides the radial acceleration of the block. So the initial tension is
\[T=\frac{mv^2}{r}\]
After the string reaches the new radius $r/2$, the tension in it is
\[T^{'}=\frac{m{\left(v^{'}\right)}^2}{r^{'}}=\frac{m{\left(2v\right)}^2}{\frac{r}{2}}=8\frac{mv^2}{r}\]
\[\Rightarrow T^{'}=8T\]
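The factor of 8 can be confirmed with sample numbers (an added check, not part of the original solution; the ratios hold for any $m$, $r$, $v$):

```python
m, r, v = 1.0, 2.0, 3.0  # sample mass (kg), radius (m), speed (m/s)

v_new = (r * m * v) / (m * (r / 2))  # angular momentum conservation: v' = 2v
T = m * v**2 / r                     # initial tension
T_new = m * v_new**2 / (r / 2)       # final tension, = 8T
```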
13) A solid disk has a radius of $R=0.1\ \mathrm{m}$ and mass of $M=2.0\ \mathrm{kg}$. It begins rolling up a slope without slipping. The slope is $\theta =20{}^\circ $ above the horizontal, and the disk has an initial speed of $v_0=3.0\ \mathrm{m/s}$. How long will it take for the disk to come to a stop?
First, draw a sketch and show the forces that act on the disk. Because the disk rolls without slipping, the friction force (denoted $f_k$ below) is static friction. Take the direction of motion of the disk as the positive direction and apply Newton's second law in component form for the $x$ axis:
\[\Sigma F_x=ma_{cm}\Rightarrow \ -f_k-mg\,{\sin \theta \ }=ma_{cm}\]
Now apply Newton's second law for the rotational motion about a horizontal axis through the center of mass and perpendicular to ${\vec{v}}_{cm}$.
\[\Sigma {\tau }_i=I_{cm}\alpha \]
\[f_kR=I_{cm}\alpha \]
The other forces ($\vec N$ and $m\vec g$) exert no torque about this axis since their lines of action pass through the center of mass (the axis of rotation).
Relate $\alpha $ and $a_{cm}$ using the nonslip condition $a_{cm}=\alpha R$. Combining these equations gives
\[\left\{ \begin{array}{rcl}
f_k & = &-mg\,{\sin \theta \ }-ma_{cm} \\
f_kR & = & I_{cm}\alpha \\
\alpha & = & \frac{a_{cm}}{R} \end{array}
\right.\]
\[\Rightarrow \ -\left(mg\,{\sin \theta \ }+ma_{cm}\right)R=I_{cm}\left(\frac{a_{cm}}{R}\right)\]
Solving for $a_{cm}$, we obtain
\[\Rightarrow \ a_{cm}=-\frac{mg{\sin \theta \ }}{\frac{I_{cm}}{R^2}+m}\]
With $a_{cm}$ given by the above equation, use the following kinematic equation to find the time required for the disk to come to a stop ($v=0$).
\[v=v_0+at\Rightarrow 0=v_0+a_{cm}t\Rightarrow t=-\frac{v_0}{a_{cm}}\]
\[\Rightarrow t=\frac{\left(\frac{I_{cm}}{R^2}+m\right)v_0}{mg{\sin \theta \ }}\]
The moment of inertia of a disk is $I_{cm}=\frac{1}{2}mR^2$; substituting this into the above expression, we get
\[t=\frac{3v_0}{2\left(g{\sin \theta \ }\right)}=\frac{3\times 3}{2\times 9.8\times {\sin 20{}^\circ \ }}=1.34\ \mathrm{s}\]
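The result can be sanity-checked numerically; the helper below is a sketch of ours (the parameter $c$ in $I_{cm}=cmR^2$ generalizes the disk's $c=1/2$, and the equations above give $|a_{cm}|=g\sin\theta/(1+c)$):

```python
import math

def stop_time(v0, theta_deg, c=0.5, g=9.8):
    # For a body with I_cm = c*m*R^2 rolling without slipping up a slope,
    # |a_cm| = g*sin(theta)/(1 + c), hence t = v0*(1 + c)/(g*sin(theta)).
    return v0 * (1.0 + c) / (g * math.sin(math.radians(theta_deg)))

t = stop_time(v0=3.0, theta_deg=20.0)  # solid disk: c = 1/2
print(round(t, 2))
```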
14) Consider the masses configured on a frictionless inclined plane as shown in the figure. The pulley is frictionless, has a moment of inertia $I$ and a radius $R$. Determine the correct expression for the acceleration of $m_1$ in terms of the other quantities. Take the positive direction for the acceleration of $m_1$ to be down.
Apply Newton's second law to each of the masses and to the pulley in the figure. Since the pulley has mass (a nonzero moment of inertia), the tension in the string is different on each side of the pulley.
\[\Sigma \vec F_1=m_1a\to m_1g-T_1=m_1a\]
\[\Sigma \vec F_2=m_2a\to T_2-m_2g\,{\sin \theta \ }=m_2a\]
Now apply Newton's second law for rotational motion of the pulley as follows
\[\Sigma {\tau }_{ext}=I\alpha \to \ R\left(T_1-T_2\right)=I\left(\frac{a}{R}\right)\]
where the angular acceleration $\alpha $ is related to the linear (center of mass) acceleration by $\alpha =a/R$. Substituting the tensions $T_1$ and $T_2$ into the above equation, we obtain
\[R\left(\left(m_1g-m_1a\right)-\left(m_2a+m_2g{\sin \theta \ }\right)\right)=I\left(\frac{a}{R}\right)\]
\[g\left(m_1-m_2{\sin \theta \ }\right)-a\left(m_1+m_2\right)=\frac{Ia}{R^2}\]
\[a=g\frac{m_1-m_2{\sin \theta \ }}{m_1+m_2+\frac{I}{R^2}}\]
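A quick numerical check of this formula (the sample masses and angle below are arbitrary choices of ours):

```python
import math

def accel_m1(m1, m2, theta_deg, I, R, g=9.8):
    # Direct transcription of the result above; positive means m1 accelerates down.
    return g * (m1 - m2 * math.sin(math.radians(theta_deg))) / (m1 + m2 + I / R ** 2)

# Sanity check: a massless pulley (I = 0) reduces to the familiar two-block result,
# and a heavier pulley always slows the system down.
print(accel_m1(m1=2.0, m2=1.0, theta_deg=30.0, I=0.0, R=0.1))
```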
15) A uniform disk of mass $M=10\ \mathrm{kg}$ and radius $R=0.500\ \mathrm{m}$ is rolling without slipping on a horizontal surface. The rolling disk has a linear (or translational) velocity of $v_{cm}=5\ \mathrm{m/s}$ to the right, as shown in the figure. The moment of inertia of the disk rotating about its center of mass is $I=\frac{1}{2}MR^2$.
(a) What is the angular velocity of the rolling disk about its center of mass?
(b) What is the kinetic energy of the rolling disk?
(c) If this disk rolled up a hill, what is the maximum height (above its starting point) that it would reach?
(d) If instead of a rolling disk, you were given a square sliding mass ($m=10\ \mathrm{kg}$) with the same initial linear velocity $v_{cm}=5\ \mathrm{m/s}$, what maximum height could the sliding mass reach? (assuming all surfaces were frictionless)
(a) The angular velocity $\omega $ of a rolling object is related to its tangential velocity by $v=R\omega $. Thus
\[v_{cm}=R\omega \to \omega =\frac{v_{cm}}{R}=\frac{5}{0.5}=10\ \mathrm{rad/s}\]
(b) A rolling object has two kinetic energy contributions, translational and rotational. Therefore,
\[K_{tot}=K_{tran}+K_{rot}=\frac{1}{2}Mv^2_{cm}+\frac{1}{2}I_{cm}{\omega }^2=\frac{1}{2}Mv^2_{cm}+\frac{1}{2}\left(\frac{1}{2}MR^2\right){\omega }^2\]
\[\Rightarrow K_{tot}=\frac{1}{2}\left(10\right){\left(5\right)}^2+\frac{1}{4}\left(10\right){\left(0.5\right)}^2{\left(10\right)}^2=188\ \mathrm{J}\]
(c) Use the conservation of the mechanical energy as $E_i=E_f$.
\[U_i+K_i=U_f+K_f\to \ Mgh_i+K_{i,tot}=MgH+K_{f,tot}\]
Let the lowest point of the hill be the reference level for the gravitational potential energy, so $U_i=0$. At the final point the rolling object comes to a stop, so $K_{f,tot}=0$. Therefore
\[0+188=\left(10\right)\left(9.8\right)H+0\to H=\frac{188}{98}=1.92\mathrm{\ m}\]
(d) Use again the conservation of mechanical energy, but in this case, there is no rotational kinetic energy so
\[E_i=E_f\to mgh_i+\frac{1}{2}mv^2_{cm}=mgH+K_f\]
\[\Rightarrow 0+\frac{1}{2}mv^2_{cm}=mgH+0\Rightarrow H=\frac{v^2_{cm}}{2g}=\frac{5^2}{2\left(9.8\right)}=1.28\ \mathrm{m}\]
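All four parts can be verified with a few lines (our own sketch, using the problem's numbers):

```python
def rolling_disk(M, R, v, g=9.8):
    # omega = v/R; K = (1/2)Mv^2 + (1/2)(1/2 M R^2)omega^2 = (3/4)Mv^2;
    # setting K = MgH gives the maximum heights for rolling vs. pure sliding.
    omega = v / R
    K = 0.75 * M * v ** 2
    H_roll = K / (M * g)
    H_slide = v ** 2 / (2 * g)
    return omega, K, H_roll, H_slide

omega, K, H_roll, H_slide = rolling_disk(M=10.0, R=0.5, v=5.0)
print(omega, K, round(H_roll, 2), round(H_slide, 2))
```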
16) A uniform density metal plate in the shape of an equilateral triangle with each side of length $L=2.00\ \mathrm{m}$ is pivoted about an axis perpendicular to the plate and located at its center. Two small rocket engines are attached to the tips of the triangle that act to create two forces on the triangle, $F_1=100\ \mathrm{N}$ and $F_2=100\ \mathrm{N}$, as shown in the figure.
(a) Calculate the torque about the pivot axis from the force $F_1$.
(b) Calculate the torque about the pivot axis from force $F_2$.
(c) Calculate the total (or net) torque about the pivot axis.
(d) You now do an experiment and find that, starting from rest, the triangle rotates about its center through one complete revolution in $4.00$ seconds. Using this information, calculate the moment of inertia about its center.
(a) From the geometry first find the moment arm $r$, which is the distance from the pivot point to the point where the force $F$ acts.
\[{\cos 30{}^\circ \ }=\frac{\frac{L}{2}}{r}\Rightarrow r=\frac{L}{2{\cos 30{}^\circ \ }}\]
Now use the definition of torque, $\vec{\tau }=\vec{r}\times \vec{F}$, whose magnitude is $rF\,\sin\theta $; here $\theta $ is the angle between the radial direction and the direction of the force.
\[\left|{\vec{\tau }}_1\right|=rF_1\,{\sin \theta \ }=\left(\frac{L}{2{\cos 30{}^\circ \ }}\right)\left(\underbrace{F_1{\cos 30{}^\circ \ }}_{F_{\bot }}{\sin 90{}^\circ \ }+\underbrace{F_1{\sin 30{}^\circ \ }}_{F_{\parallel }}{\sin 0{}^\circ \ }\right)=\left(\frac{2}{2{\cos 30{}^\circ \ }}\right)\left(100\times {\cos 30{}^\circ \ }\times 1\right)=100\ \mathrm{N.m}\]
To find the direction, point the fingers of your right hand along the moment arm and curl them toward the applied force; the thumb then points in the direction of the torque (right-hand rule). In this case the torque points into the plane, i.e., it is clockwise.
(b) Similarly to part (a),
\[\left|{\vec \tau }_2 \right|=r_2F_2{\sin \theta \ }=\left(\frac{L}{2{\cos 30{}^\circ \ }}\right)\left(\underbrace{F_2{\sin (30+60){}^\circ \ }}_{F_{\bot }}\right){\sin 90{}^\circ \ }=\frac{100}{\frac{\sqrt{3}}{2}}=\frac{200}{\sqrt{3}}\mathrm{N.m}\]
Using the right-hand rule, its direction is out of the plane, i.e., counterclockwise. Let us assign a minus sign to clockwise torques and a plus sign to counterclockwise ones, that is
\[{\vec{\tau }}_1=-100\ \mathrm{N.m\ \ and\ \ } \vec{\tau }_2=+\frac{200}{\sqrt 3}\ \mathrm{N.m}\]
(c) Therefore, the total torque due to the forces $F_1$ and $F_2$ is
\[{\vec{\tau }}_{tot}={\vec{\tau }}_1+{\vec{\tau }}_2=-100+\frac{200}{\sqrt{3}}\ \cong +15\ \mathrm{N.m\ counter\ clockwise}\]
(d) The total torque acting on a body equals the product of its moment of inertia and its angular acceleration. This is Newton's second law for rotation.
\[{\tau }_{net}=I\alpha \]
where $\alpha $ is related to the angular coordinate by $\theta ={\theta }_0+{\omega }_0t+\frac{1}{2}\alpha t^2$, which is analogous to the kinematic equations for straight-line motion with constant linear acceleration.
The problem states that the body rotates through one complete revolution, so $\theta =2\pi \ \mathrm{rad}$. Now find the angular acceleration of the triangle:
\[\theta ={\theta }_0+\frac{1}{2}\alpha t^2+{\omega }_0t\to 2\pi =0+\frac{1}{2}\alpha {\left(4\right)}^2+0\left(4\right)\Rightarrow \ \alpha =\frac{\pi }{4}\mathrm{\ rad/}{\mathrm{s}}^{\mathrm{2}}\]
where we have assumed that the initial angular coordinate and angular velocity are zero.
By substituting $\alpha $ into the equation for ${\tau }_{net}$, we get
\[{\tau }_{net}=I\alpha \Rightarrow I_{cm}=\frac{15}{\frac{\pi }{4}}=\frac{60}{\pi }\ \mathrm{kg.}{\mathrm{m}}^{\mathrm{2}}\]
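A numerical check (our own): keeping full precision in the net torque gives $I\approx 19.7\ \mathrm{kg.m^2}$, slightly different from the $60/\pi \approx 19.1$ obtained above from the rounded value ${\tau }_{net}=15\ \mathrm{N.m}$:

```python
import math

L, F1, F2 = 2.0, 100.0, 100.0
r = L / (2 * math.cos(math.radians(30)))     # pivot-to-vertex distance

tau1 = -r * F1 * math.cos(math.radians(30))  # clockwise, hence negative
tau2 = +r * F2                               # F2 perpendicular to r: counterclockwise
tau_net = tau1 + tau2

alpha = 2 * (2 * math.pi) / 4.0 ** 2         # from 2*pi = (1/2)*alpha*t^2 with t = 4 s
I = tau_net / alpha
print(round(tau_net, 2), round(I, 2))
```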
17) A cylindrical log of mass $M$ rolls without slipping along the ground. Its center moves along at speed $V$. What is the kinetic energy of the log in terms of $M$ and $V$?
In rotational motion, the kinetic energy has two contributions: one due to the translational motion of the center of mass and the other due to the pure rotation about it. Thus
\[K=\underbrace{\frac{1}{2}mV^2}_{trans}+\underbrace{\frac{1}{2}I{\omega }^2}_{rotation}\]
where $I$ is the moment of inertia of the object.
In rolling without slipping, the point of the wheel in contact with the ground is momentarily at rest, and the wheel instantaneously rotates about an axis through the contact point. This motion is characterized by the relation $v=r\omega $, where $r$ is the distance from the rotation axis to the point considered. For a wheel or cylinder, the geometric center coincides with the center of mass, so the nonslip condition reads $v_{CM}=R\omega$, where $R$ is the radius. Therefore,
\[K=\frac{1}{2}mV^2_{CM}+\frac{1}{2}I{\omega }^2=\frac{1}{2}mV^2_{CM}+\frac{1}{2}\left(\frac{1}{2}mR^2\right){\left(\frac{V_{CM}}{R}\right)}^2=\frac{3}{4}mV^2_{CM}\]
Above, we used the fact that the moment of inertia of a cylinder about its central axis is $I_{cyl}=\frac{1}{2}mR^2$.
MOST USEFUL FORMULAS IN ROTATIONAL MOTION:
Relationship between linear and angular speeds:
\[v=r\omega\]
Angular speed $\omega$ must be measured in radians per second ($\mathrm {rad/s}$).
Tangential acceleration of a point on a rotating object:
\[a_{tan}=\frac {dv}{dt}=r\frac {d\omega}{dt}=r\alpha\]
Definition of moment of inertia:
\[I=m_1r_1^2+m_2r_2^2+\dotsc=\Sigma m_ir_i^2\]
Kinetic energy of a rotating body:
\[K=\frac 1 2 I\omega^2\]
Parallel-Axis theorem:
\[I_P=I_{cm}+Md^2\]
where $d$ is the distance between the center of mass and the axis of rotation.
Definition of torque:
\[\vec \tau=\vec r \times \vec F\]
Rotational analog of Newton's second law:
\[\Sigma \tau=I\alpha\]
Angular momentum of a particle with respect to a point:
\[\vec L=\vec r\times \vec p=\vec r\times m\vec v\]
Angular momentum of a rotating body around an axis:
\[\vec L=I\vec \omega \]
Planted clique
In computational complexity theory, a planted clique or hidden clique in an undirected graph is a clique formed from another graph by selecting a subset of vertices and adding edges between each pair of vertices in the subset. The planted clique problem is the algorithmic problem of distinguishing random graphs from graphs that have a planted clique. This is a variation of the clique problem; it may be solved in quasi-polynomial time but is conjectured not to be solvable in polynomial time for intermediate values of the clique size. The conjecture that no polynomial time solution exists is called the planted clique conjecture; it has been used as a computational hardness assumption.
Definition
A clique in a graph is a subset of vertices, all of which are adjacent to each other. A planted clique is a clique created from another graph by adding edges between all pairs of a selected subset of vertices.
The planted clique problem can be formalized as a decision problem over a random distribution on graphs, parameterized by two numbers, n (the number of vertices), and k (the size of the clique). These parameters may be used to generate a graph, by the following random process:[1]
1. Create an Erdős–Rényi random graph on n vertices by choosing independently for each pair of vertices whether to include an edge connecting that pair, with probability 1/2 for each pair.
2. Decide whether or not to add a clique to the graph, with probability 1/2; if not, return the graph formed in step 1.
3. Choose randomly a subset of k of the n vertices and add an edge (if one is not already present) between each pair of the selected vertices.
The problem is then to determine algorithmically whether one of the graphs resulting from this process contains a clique of at least k vertices.
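The three-step process above can be sketched directly (a minimal illustration; the adjacency-set representation and function name are our own, not from the cited sources):

```python
import random
from itertools import combinations

def planted_clique_instance(n, k, seed=None):
    # Return (adjacency sets, planted clique or None) following the process:
    # step 1: G(n, 1/2); step 2: coin flip; step 3: plant a k-clique.
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):   # step 1: each edge with prob 1/2
        if rng.random() < 0.5:
            adj[u].add(v); adj[v].add(u)
    clique = None
    if rng.random() < 0.5:                   # step 2: decide whether to plant
        clique = rng.sample(range(n), k)     # step 3: connect a random k-subset
        for u, v in combinations(clique, 2):
            adj[u].add(v); adj[v].add(u)
    return adj, clique
```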
Upper and lower bounds
There exists a function $f(n)\sim 2\log _{2}n$ such that asymptotically almost surely, the size of the largest clique in an n-vertex random graph is either $f(n)$ or $f(n)+1$,[2] and there exists some constant $c$ such that the expected number of cliques of size $\geq f(n)-c$ tends to infinity. Consequently, one should expect that planting a clique of size $\sim 2\log _{2}n$ cannot be detected with high probability.
By the central limit theorem, the vertex degrees of the random graph are distributed approximately normally, with mean ${\frac {n}{2}}$ and standard deviation ${\frac {\sqrt {n}}{2}}$. Consequently, when $k$ is on the order of ${\sqrt {n}}$, the planted clique creates a detectable change in the shape of the degree distribution: a plot of the vertex degrees would look like a deformed bell curve. Therefore, the most interesting range of values for the parameter k is between these two values,[1]
$2\log _{2}n\ll k\ll {\sqrt {n}}.$
Algorithms
Large cliques
For sufficiently large values of the parameter k, the planted clique problem can be solved (with high probability) in polynomial time.[1]
Kučera (1995) observes that, when $k=\Omega ({\sqrt {n\log n}})$ then almost surely all vertices of the planted clique have higher degree than all vertices outside the clique, making the clique very easy to find. He describes a modification to the random process for generating planted clique instances, that makes the vertex degrees more uniform even for large values of k, but shows that despite this modification the planted clique may still be found quickly.[3]
Alon, Krivelevich & Sudakov (1998) prove for $k>10{\sqrt {n}}$ a planted clique can be found with high probability by the following method:
1. Compute the eigenvector of the adjacency matrix corresponding to its second highest eigenvalue.
2. Select the k vertices whose coordinates in this eigenvector have the largest absolute values.
3. Return the set of vertices that are adjacent to at least 3/4 of the selected vertices.
They show how to modify this technique so that it continues to work whenever k is at least proportional to some multiple of the square root of the number of vertices.[4] Large planted cliques can also be found using semidefinite programming.[5] A combinatorial technique based on randomly sampling vertices can achieve the same bound on k and runs in linear time.[6]
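The three numbered steps above can be sketched with a dense eigendecomposition (an illustrative toy version of ours, not the authors' implementation; it ignores the constant-factor conditions on k):

```python
import numpy as np

def spectral_planted_clique(A, k):
    # Step 1: eigenvector for the second-largest eigenvalue of the adjacency
    # matrix A.  Step 2: the k vertices with largest |coordinates| form a seed.
    # Step 3: return the vertices adjacent to at least 3/4 of the seed.
    w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
    v2 = V[:, -2]                            # second-largest eigenvalue's vector
    seed = np.argsort(-np.abs(v2))[:k]
    counts = A[:, seed].sum(axis=1)
    return set(np.flatnonzero(counts >= 0.75 * k))
```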
Quasipolynomial time
It is also possible to solve the planted clique problem, regardless of the choice of k, in quasi-polynomial time.[7] Because the largest clique in a random graph typically has size near 2 log2 n,[8] a planted clique of size k (if it exists) can be found with high probability by the following method:
1. Loop through all sets S of $\min(k,3\log _{2}n)$ vertices.
2. For each choice of S, test whether S is a clique. If it is, and $|S|=k$, return S. Otherwise, find the set T of vertices that are adjacent to all vertices in S. If $|T|\geq k$, return T.
The running time of this algorithm is quasipolynomial, because there are quasipolynomially many choices of S to loop over. This method is guaranteed to try a set S that is a subset of the planted clique; with high probability, the set T will consist only of other members of the planted clique.
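A direct brute-force transcription of this method (illustrative only; it enumerates all subsets of size s, so it is feasible only for tiny graphs):

```python
import math
from itertools import combinations

def find_planted_clique(adj, n, k):
    # Try every vertex set S of size s = min(k, ceil(3*log2(n))).  If S is a
    # clique: return it when s == k; otherwise return the common neighborhood
    # T of S whenever |T| >= k.
    s = min(k, math.ceil(3 * math.log2(n)))
    for S in combinations(range(n), s):
        if all(v in adj[u] for u, v in combinations(S, 2)):
            if s == k:
                return set(S)
            T = set.intersection(*(adj[u] for u in S))
            if len(T) >= k:
                return T
    return None
```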
As a hardness assumption
The planted clique conjecture is the conjecture that there is no polynomial time algorithm that takes as input graphs produced by the planted clique process and distinguishes the ones with planted cliques from the ones that don't have planted cliques with probability significantly better than random chance.[9]
Hazan & Krauthgamer (2011) used the assumption that finding planted cliques is hard as a computational hardness assumption to prove that, if so, it is also hard to approximate the best Nash equilibrium in a two-player game.[7] The planted clique conjecture has also been used as a hardness assumption to show the difficulty of property testing k-independence of random distributions,[10] finding clusters in social networks,[11] and machine learning.[12]
References
1. Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge University Press, pp. 362–363, ISBN 9780521424264.
2. Bollobas, B.; Erdös, P. (November 1976). "Cliques in random graphs". Mathematical Proceedings of the Cambridge Philosophical Society. 80 (3): 419–427. doi:10.1017/S0305004100053056. ISSN 1469-8064. S2CID 16619643.
3. Kučera, Luděk (1995), "Expected complexity of graph partitioning problems", Discrete Applied Mathematics, 57 (2–3): 193–212, doi:10.1016/0166-218X(94)00103-K, hdl:11858/00-001M-0000-0014-B73F-2, MR 1327775.
4. Alon, Noga; Krivelevich, Michael; Sudakov, Benny (1998), "Finding a large hidden clique in a random graph", Random Structures & Algorithms, 13 (3–4): 457–466, CiteSeerX 10.1.1.24.6419, doi:10.1002/(SICI)1098-2418(199810/12)13:3/4<457::AID-RSA14>3.3.CO;2-K, MR 1662795
5. Feige, U.; Krauthgamer, R. (2000), "Finding and certifying a large hidden clique in a semirandom graph", Random Structures and Algorithms, 16 (2): 195–208, doi:10.1002/(SICI)1098-2418(200003)16:2<195::AID-RSA5>3.0.CO;2-A.
6. Dekel, Yael; Gurel-Gurevich, Ori; Peres, Yuval (2014), "Finding hidden cliques in linear time with high probability", Combinatorics, Probability and Computing, 23 (1): 29–49, arXiv:1010.2997, doi:10.1017/S096354831300045X, MR 3197965, S2CID 14356678.
7. Hazan, Elad; Krauthgamer, Robert (2011), "How hard is it to approximate the best Nash equilibrium?", SIAM Journal on Computing, 40 (1): 79–91, CiteSeerX 10.1.1.511.4422, doi:10.1137/090766991, MR 2765712.
8. Grimmett, G. R.; McDiarmid, C. J. H. (1975), "On colouring random graphs", Mathematical Proceedings of the Cambridge Philosophical Society, 77 (2): 313–324, Bibcode:1975MPCPS..77..313G, doi:10.1017/S0305004100051124, MR 0369129, S2CID 3421302.
9. Braverman, Mark; Ko, Young Kun; Rubinstein, Aviad; Weinstein, Omri (2015), ETH hardness for densest-k-subgraph with perfect completeness, arXiv:1504.08352, Bibcode:2015arXiv150408352B.
10. Alon, Noga; Andoni, Alexandr; Kaufman, Tali; Matulef, Kevin; Rubinfeld, Ronitt; Xie, Ning (2007), "Testing k-wise and almost k-wise independence", STOC'07—Proceedings of the 39th Annual ACM Symposium on Theory of Computing, New York: ACM, pp. 496–505, doi:10.1145/1250790.1250863, ISBN 9781595936318, MR 2402475, S2CID 5050980.
11. Balcan, Maria-Florina; Borgs, Christian; Braverman, Mark; Chayes, Jennifer; Teng, Shang-Hua (2013), "Finding Endogenously Formed Communities", Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '13), SIAM, pp. 767–783, ISBN 978-1-611972-51-1.
12. Berthet, Quentin; Rigollet, Philippe (2013), "Complexity theoretic lower bounds for sparse principal component detection", Conference on Learning Theory, Journal of Machine Learning Research, 30: 1046–1066.
\begin{document}
\begin{abstract} We study convergence of 3D lattice sums via expanding spheres. It is well-known that, in contrast to summation via expanding cubes, the expanding spheres method may lead to formally divergent series (this will be so e.g. for the classical NaCl-Madelung constant). In the present paper we prove that these series remain convergent in Cesaro sense. For the case of second order Cesaro summation, we present an elementary proof of convergence and the proof for first order Cesaro summation is more involved and is based on the Riemann localization for multi-dimensional Fourier series. \end{abstract} \subjclass[2010]{11L03, 42B08, 35B10, 35R11} \keywords{lattice sums, Madelung constants, Cesaro summation, Fourier series, Riemann localization} \thanks{The first author has been partially supported by the LMS URB grant 1920-04. The second author is partially supported by the EPSRC grant EP/P024920/1} \maketitle \tableofcontents \section{Introduction}\label{s0} Lattice sums of the form
\begin{equation}\label{0.lattice} \sum_{(n,k,m)\in\mathbb Z^3}\frac{e^{i(nx_1+kx_2+mx_3)}}{(a^2+n^2+k^2+m^2)^s} \end{equation}
and their various extensions naturally appear in many branches of modern analysis, including analytic number theory (e.g. for studying the number of lattice points in spheres or balls), analysis of PDEs (e.g. for constructing Green functions for various differential operators in periodic domains, finding best constants in interpolation inequalities, etc.), harmonic analysis as well as in applications, e.g. for computing the electrostatic potential of a single ion in a crystal (the so-called Madelung constants), see \cite{Flap,MFS,BDZ,Bor13,mar,mar2000,Ram21,ZI} and references therein. For instance, the classical Madelung constant for the NaCl crystal is given by
\begin{equation}\label{0.M} M=\sideset{}{'}\sum_{(i,j,k)\in \mathbb Z^3}\frac{(-1)^{i+j+k}}{(i^2+j^2+k^2)^{1/2}}, \end{equation}
where the index ${'}$ means that the sum does not contain the term which corresponds to $(i,j,k)=0$. \par The common feature of series \eqref{0.lattice} and \eqref{0.M} is that the decay rate of the terms is not strong enough to provide absolute convergence, so they are often only conditionally convergent and their convergence/divergence strongly depends on the method of summation. The typical methods of summation are summation by expanding cubes/rectangles or summation by expanding spheres, see sections \S\ref{s1} and \S\ref{s2} for definitions and \cite{Bor13} for more details. For instance, when summation by expanding spheres is used, the formula for the Madelung constant has an especially elegant form
\begin{equation}\label{2.Ms} M=\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{\sqrt{n}}, \end{equation}
where $r_3(n)$ is the number of integer points on the sphere of radius $\sqrt{n}$. Exactly this formula is commonly used in the physical literature although it has been known for more than 70 years that the series \eqref{2.Ms} is {\it divergent}, see \cite{Emer}. Thus, one should either switch from expanding spheres to expanding cubes/rectangles when summing \eqref{0.M} (as suggested e.g. in \cite{Bor13}, where such a convergence problem does not appear) or use more advanced methods for summing \eqref{2.Ms}, for instance Abel or Cesaro summation. Surprisingly, the possibility of justifying \eqref{2.Ms} in such a way has not been properly studied (although there are detailed results concerning Cesaro summation for other methods, e.g. the so-called summation by diamonds, see \cite{Bor13}), and the main aim of the present notes is to cover this gap. \par Namely, we will study the following generalized Madelung constants:
\begin{equation}\label{2.Mg} M_{a,s}=\sideset{}{'}\sum_{(i,j,k)\in \mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}=\sum_{n=1}^\infty(-1)^n\frac{r_3(n)}{(a^2+n)^s}, \end{equation}
where $a\in\R$ and $s>0$ and the sum in the RHS is understood in the sense of Cesaro (Cesaro-Riesz) summation of order $\kappa$, see Definition \ref{Def2.Cesaro} below. Our presentation of the main result consists of two parts. \par First, we present a very elementary proof of convergence for second order Cesaro summation which is based only on counting the number of lattice points in spherical layers by volume comparison arguments. This gives the following result \begin{theorem}\label{Th0.c2} Let $a\in\R$ and $s>0$. Then
\begin{equation} M_{a,s}=\lim_{N\to\infty}\sum_{n=1}^N(-1)^n\(1-\frac nN\)^2 \frac{r_3(n)}{(a^2+n)^s}. \end{equation}
In particular, the limit in the RHS exists. \end{theorem} Second, we establish the convergence for the first order Cesaro summation. \begin{theorem}\label{Th0.c1} Let $a\in\R$ and $s>0$. Then
\begin{equation} M_{a,s}=\lim_{N\to\infty}\sum_{n=1}^N(-1)^n\(1-\frac nN\) \frac{r_3(n)}{(a^2+n)^s}. \end{equation}
In particular, the limit in the RHS exists. \end{theorem} In contrast to Theorem \ref{Th0.c2}, the proof of this result is more involved and is based on an interesting connection between the convergence of lattice sums and Riemann localization for multiple Fourier series, see section \S\ref{s22} for more details. Note that Theorem \ref{Th0.c2} is a formal corollary of Theorem \ref{Th0.c1}, but we prefer to keep both of them not only since the proof of Theorem \ref{Th0.c2} is essentially simple, but also since it possesses extensions to other methods of summation, see the discussion in section \S\ref{s3}. Also note that the above convergence results have mainly theoretical interest since much more effective formulas for Madelung constants are available for practical computations, see \cite{Bor13} and references therein. \par The paper is organized as follows. Some preliminary results concerning lattice sums and summation by rectangles are collected in \S\ref{s1}. The proofs of Theorems \ref{Th0.c2} and \ref{Th0.c1} are given in sections \S\ref{s21} and \S\ref{s22} respectively. Some discussion around the obtained results, their possible generalizations and numerical simulations are presented in section \S\ref{s3}.
\section{Preliminaries}\label{s1} In this section, we recall standard results about lattice sums and prepare some technical tools which will be used in the sequel. We start with the simple lemma which is however crucial for what follows. \begin{lemma}\label{Lem1.block} Let the function $f:\R^3\to\R$ be 3 times continuously differentiable in a cube $Q_{I,J,K}:=[I,I+1]\times[J,J+1]\times[K,K+1]$. Then
\begin{multline}\label{1.E} \min_{x\in Q_{2I,2J,2K}}\{-\partial_{x_1}\partial_{x_2}\partial_{x_3}f(x)\}\le\\\le E_{I,J,K}(f):=\sum_{i=2I}^{2I+1}\sum_{j=2J}^{2J+1} \sum_{k=2K}^{2K+1}(-1)^{i+j+k}f(i,j,k)\le\\\le \max_{x\in Q_{2I,2J,2K}}\{-\partial_{x_1}\partial_{x_2}\partial_{x_3}f(x)\}. \end{multline}
\end{lemma} \begin{proof} Indeed, it is not difficult to check using the Newton-Leibnitz formula that $$ E_{I,J,K}(f)=-\int_0^1\int_0^1\int_0^1 \partial_{x_1}\partial_{x_2}\partial_{x_3}f(2I+s_1,2J+s_2,2K+s_3)\,ds_1\,ds_2\,ds_3 $$ and this formula gives the desired result. \end{proof} \noindent A typical example of the function $f$ is the following one
\begin{equation}\label{1.pol}
f_{a,s}(x)=(a^2+|x|^2)^s,\ \ |x|^2=x_1^2+x_2^2+x_3^2. \end{equation}
In this case, $$
\partial_{x_1}\partial_{x_2}\partial_{x_3}f= 8s(s-1)(s-2)x_1x_2x_3(a^2+|x|^2)^{s-3} $$ and, therefore,
\begin{equation}\label{1.bet}
|E_{I,J,K}(f)|\le C(a^2+I^2+J^2+K^2)^{s-\frac32}. \end{equation}
One more important property of the function \eqref{1.pol} is that the term $E_{I,J,K}$ is sign-definite in the octant $I,J,K\ge0$.
\par
At the next step, we state a straightforward extension of the integral comparison principle to the case of multi-dimensional series. We recall that, in one dimensional case, for a positive monotone decreasing function $f:[A,B]\to\R$, $A,B\in\mathbb Z$, $B>A$, we have
$$
f(B)+\int_A^{B}f(x)\,dx\le \sum_{n=A}^Bf(n)\le f(A)+\int_{A}^{B}f(x)\,dx $$ which, in turn, is an immediate corollary of the estimate $$ f(n+1)\le\int_n^{n+1}f(x)\,dx\le f(n). $$ \begin{lemma}\label{Lem1.int} Let the continuous function $f:\R^3\setminus\{0\}\to\R_+$ be such that
\begin{equation}\label{1.good} C_2\max_{x\in Q_{i,j,k}} f(x)\le \min_{x\in Q_{i,j,k}}f(x)\le C_1\max_{x\in Q_{i,j,k}} f(x), \end{equation}
$(i,j,k)\in\mathbb Z^3$ and the constants $C_1$ and $C_2$ are positive and are independent of $Q_{i,j,k}\not\owns 0$. Let also $\Omega\subset\R^3$ be a domain which does not contain $0$ and
\begin{equation}\label{1.lat} \Omega_{lat}:=\{(i,j,k)\in\mathbb Z^3:\,\exists Q_{I,J,K}\subset\Omega,\ \ (i,j,k)\in Q_{I,J,K},\ 0\notin Q_{I,J,K}\}. \end{equation}
Then,
\begin{equation}\label{1.comp} \sum_{(i,j,k)\in\Omega_{lat}}f(i,j,k)\le C\int_\Omega f(x)\,dx, \end{equation}
where the constant $C$ is independent of $\Omega$ and $f$. If assumption \eqref{1.good} is satisfied for all $(I,J,K)$, the conditions $0\notin\Omega$ and $0\notin Q_{I,J,K}$ can be removed. \begin{comment} Then, for every $N,M,K\in\mathbb N$, we have
\begin{multline} \sum_{(i,j,k)\in\Pi_{N,M,K}' }f(i,j,k)\le \int_{x\in\Pi_{M,N,K}'}f(x)\,dx+f(1,1,1)+\\+\int_{(x_1,x_2)\in\Pi_{N,M}'}f(x_1,x_2,0)\,dx_1\,dx_2+ \int_{(x_2,x_3)\in\Pi_{M,K}'}f(0,x_2,x_3)\,dx_2\,dx_3+\\+ \int_{(x_1,x_3)\in\Pi_{M,K}'}f(x_1,0,x_3)\,dx_1\,dx_3 +f(1,1,0)+f(0,1,1)+f(1,0,1)+\\+f(1,0,0)+f(0,0,1)+f(0,1,0)+\\+\int_1^Nf(x_1,0,0)\,dx_1+ \int_1^Mf(0,x_2,0)\,dx_2+\int_1^Kf(0,0,x_3)\,dx_3 \end{multline}
\end{comment} \end{lemma} \begin{proof} Indeed, assumption \eqref{1.good} guarantees that
\begin{equation}\label{1.mult} C_2\int_{Q_{I,J,K}}f(x)\,dx\le f(i,j,k)\le C_1\int_{Q_{I,J,K}}f(x)\,dx \end{equation}
for all $Q_{I,J,K}$ which do not contain zero and all $(i,j,k)\in Q_{I,J,K}\cap\mathbb Z^3$. Since any point $(i,j,k)\in\mathbb Z^3$ can belong to no more than $8$ different cubes $Q_{I,J,K}$, \eqref{1.mult} implies \eqref{1.comp} (with the constant $C=8C_1$) and finishes the proof of the lemma. \end{proof} We will mainly use this lemma for the functions $f_{a,s}(x)$ defined by \eqref{1.pol}. It is not difficult to see that these functions satisfy assumption \eqref{1.good}. For instance, this follows from the obvious estimate $$
|\nabla f_{a,s}(x)|\le \frac{C_{s}}{\sqrt{a^2+|x|^2}}f_{a,s}(x) $$ and the mean value theorem. Moreover, if $a\ne0$, condition \eqref{1.good} holds for $Q_{i,j,k}\owns 0$ as well. As a corollary, we get the following estimate for summation "by spheres":
\begin{multline}\label{1.as} \sideset{}{'}\sum_{(i,j,k)\in B_n\cap\mathbb Z^3} f_{a,s}(i,j,k)\le C_s\int_{x\in B_n\setminus B_1}(a^2+|x|^2)^{s}\,dx\le\\\le 4\pi C_s\int_1^{\sqrt n} R^2(a^2+R^2)^s\,dR\le 4\pi C_s\int_1^{\sqrt n} R(a^2+R^2)^{s+1/2}\,dR=\\=\frac {4\pi C_s}{2s+3}\((a^2+n)^{s+3/2}-(a^2+1)^{s+3/2}\), \end{multline}
where $B_n:=\{x\in\R^3\,:\,|x|^2\le n\}$ and $\sum'$ means that $(i,j,k)=0$ is ex\-clu\-ded. Of course, in the case $s=-\frac32$, the RHS of \eqref{1.as} reads as $2\pi C_s\ln\frac{a^2+n}{a^2+1}$. In particular, if $s>\frac32$, applying \eqref{1.as} with $s$ replaced by $-s$ and passing to the limit $n\to\infty$, we see that
\begin{equation}\label{1.simple} \sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}\frac1{(a^2+i^2+j^2+k^2)^s}= \sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}f_{a,-s}(i,j,k)\le \frac {C_s}{(a^2+1)^{s-\frac32}}. \end{equation}
Thus, the series in the LHS is absolutely convergent if $s>\frac32$ and its sum tends to zero as $a\to\infty$. It is also well-known that condition $s>\frac32$ is sharp and the series is
divergent if $s\le \frac32$. \par We also mention that Lemmas \ref{Lem1.block} and \ref{Lem1.int} are stated for the 3-dimensional case just for simplicity. Obviously, their analogues hold for any dimension. We will use this observation later. \par We now turn to the alternating version of the lattice sum \eqref{1.simple}
\begin{equation}\label{1.main} M_{a,s}:=\sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s} \end{equation}
which is the main object of study in these notes. We recall that, due to \eqref{1.simple}, this series is absolutely convergent for $s>\frac32$, so the sum is independent of the method of summation. In contrast to this, in the case $0<s\le\frac32$, the convergence is not absolute and depends strongly on the method of summation, see \cite{Bor13} and references therein for more details. Note also that $M_{a,s}$ is analytic in $s$ and, similarly to the classical Riemann zeta function, can be extended to a holomorphic function on $\mathbb C$ with a pole at $s=0$, but this is beyond the scope of our paper, see e.g. \cite{Bor13} for more details. Thus, we are assuming from now on that $0<s\le\frac32$. We start with the most studied case of summation by expanding rectangles/parallelograms. \begin{definition} Let $\Pi_{I,J,K}:=[-I,I]\times[-J,J]\times[-K,K]$, $I,J,K\in\mathbb N$, and $$ S_{\Pi_{I,J,K}}(a,s):=\sideset{}{'}\sum_{(i,j,k)\in\Pi_{I,J,K}\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}. $$ We say that \eqref{1.main} is summable by expanding rectangles if the following triple limit exists and is finite $$ M_{a,s}=\lim_{(I,J,K)\to\infty} S_{\Pi_{I,J,K}}(a,s). $$ \end{definition} To study the sum \eqref{1.main}, we combine the terms belonging to cubes $Q_{2i,2j,2k}$ and introduce the partial sums
\begin{equation} E_{\Pi_{I,J,K}}(a,s):=\sideset{}{'}\sum_{(2i,2j,2k)\in\Pi_{I,J,K}\cap2\mathbb Z^3}E_{i,j,k}(a,s), \end{equation}
where $E_{i,j,k}(a,s):=E_{i,j,k}(f_{a,-s})$ is defined in \eqref{1.E}. \begin{theorem} Let $0<s\le \frac32$. Then,
\begin{equation}\label{1.equiv}
\bigg|S_{\Pi_{I,J,K}}(a,s)-E_{\Pi_{I,J,K}}(a,s)\bigg|\le \frac {C_s}{\(a^2+\min\{I^2,J^2,K^2\}\)^{s}}, \end{equation} where the constant $C_s$ is independent of $a$ and $I,J,K$. \end{theorem} \begin{proof} First, according to Lemma \ref{Lem1.block} and estimate \eqref{1.simple}, we see that
\begin{equation}\label{1.e-conv}
|E_{\Pi_{I,J,K}}(a,s)|\le \frac{C_s}{(a^2+1)^s} \end{equation}
uniformly with respect to $(I,J,K)$. \par The difference between $S_{\Pi_{I,J,K}}$ and $E_{\Pi_{I,J,K}}$ consists of the alternating sum of $f_{a,-s}(i,j,k)$ where $(i,j,k)$ belong to the boundary of $\Pi_{I,J,K}$. Let us write an explicit formula for the case when all $I,J,K$ are even (other cases are considered analogously):
\begin{multline}\label{1.huge} S_{\Pi_{2I,2J,2K}}(a,s)-E_{\Pi_{2I,2J,2K}}(a,s)=\!\!\!\sideset{}{'}\sum_{\substack{-2J\le j\le2J\\-2K\le k\le2K}}(-1)^{j+k}f_{a,-s}(2I,j,k)+\\+\sideset{}{'}\sum_{\substack{-2I\le i\le2I\\-2K\le k\le2K}}(-1)^{i+k}f_{a,-s}(i,2J,k)+\sideset{}{'}\sum_{\substack{-2I\le i\le2I\\-2J\le j\le2J}}(-1)^{i+j}f_{a,-s}(i,j,2K)-\\- \sideset{}{'}\sum_{-2I\le i\le2I}(-1)^{i}f_{a,-s}(i,2J,2K)-\sideset{}{'}\sum_{-2J\le j\le2J}(-1)^{j}f_{a,-s}(2I,j,2K)-\\- \sideset{}{'}\sum_{-2K\le k\le2K}(-1)^{k}f_{a,-s}(2I,2J,k)+f_{a,-s}(2I,2J,2K). \end{multline}
In the RHS of this formula we see analogues of the lattice sum \eqref{1.main} in dimension one or two and, thus, it allows us to reduce the dimension. Indeed, assume that the analogues of estimate \eqref{1.equiv} are already established in one and two dimensions. Then, using the lower dimensional analogue of \eqref{1.e-conv} together with the fact that $$ f_{a,-s}(2I,j,k)=f_{\sqrt{a^2+4I^2},-s}(j,k), $$ where we have the 2D analogue of the function $f_{a,-s}$ in the RHS, we arrive at
\begin{multline}\label{1.huge1}
\bigg|S_{\Pi_{2I,2J,2K}}(a,s)-E_{\Pi_{2I,2J,2K}}(a,s)\bigg|\le\\\le \frac{C_s}{(a^2+4I^2+1)^s}+\frac{C_s}{(a^2+\min\{J^2,K^2\})^s}+ \frac{C_s}{(a^2+4J^2+1)^s}+\\+\frac{C_s}{(a^2+\min\{I^2,K^2\})^s}+ \frac{C_s}{(a^2+4K^2+1)^s}+\frac{C_s}{(a^2+\min\{I^2,J^2\})^s}+\\+ \frac{C_s}{(a^2+4I^2+4K^2)^s}+\frac{C_s}{(a^2+4I^2+4J^2)^s}+ \frac{C_s}{(a^2+4J^2+4K^2)^s}+\\+ \frac{C_s}{(a^2+4I^2+4J^2+4K^2)^s}\le \frac{C_s'}{(a^2+\min\{I^2,J^2,K^2\})^s}. \end{multline}
Since in the 1D case the desired estimate is obvious, we complete the proof of the theorem by induction. \end{proof} \begin{corollary}\label{Cor1.main} Let $s>0$. Then the series \eqref{1.main} is convergent by expanding rectangles and
\begin{equation}\label{1.rep} M_{a,s}=\sideset{}{'}\sum_{(i,j,k)\in\mathbb Z^3}E_{i,j,k}(a,s). \end{equation}
In particular, the series in the RHS of \eqref{1.rep} is absolutely convergent, so the method of summation for it is not important. \end{corollary} Indeed, this fact is an immediate corollary of estimates \eqref{1.equiv}, \eqref{1.bet} and \eqref{1.simple}.
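As an illustrative numerical check (not part of the original argument), the partial sums $S_{\Pi_{N,N,N}}(a,s)$ over expanding cubes can be computed directly; for $a=0$ and $s=\frac12$ they should approach the Madelung constant $M_{0,1/2}=-1.74756...$ discussed in Section \ref{s3}.

```python
# Illustrative numerical check: partial sums of the alternating lattice sum
# over expanding cubes [-N, N]^3, with the origin excluded.
def cube_partial_sum(N, a=0.0, s=0.5):
    """Sum of (-1)^(i+j+k) / (a^2 + i^2 + j^2 + k^2)^s over the cube [-N, N]^3."""
    total = 0.0
    for i in range(-N, N + 1):
        for j in range(-N, N + 1):
            for k in range(-N, N + 1):
                if i == j == k == 0:
                    continue
                sign = -1.0 if (i + j + k) % 2 else 1.0
                total += sign / (a * a + i * i + j * j + k * k) ** s
    return total

if __name__ == "__main__":
    for N in (2, 4, 8):
        print(N, cube_partial_sum(N))
```

For $s>\frac32$ the same routine also illustrates the absolute convergence guaranteed by \eqref{1.simple}.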
\section{Summation by expanding spheres}\label{s2} We now turn to summation by expanding spheres. In other words, we want to write the formula \eqref{1.main} in the form
\begin{equation}\label{2.sphere} M_{a,s}=\lim_{N\to\infty}\sideset{}{'}\sum_{i^2+j^2+k^2\le N}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}. \end{equation}
Moreover, since $(i+j+k)^2=i^2+j^2+k^2+2(ij+jk+ik)$, we have $(-1)^{i+j+k}=(-1)^{i^2+j^2+k^2}$, so formula \eqref{2.sphere} can be rewritten in the following elegant form
\begin{equation}\label{2.sp} M_{a,s}=\sum_{n=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s}, \end{equation}
where $r_3(n)$ is the number of integer points on a sphere of radius $\sqrt{n}$ centered at zero, see e.g. \cite{Ram21} and references therein for more details about this function. However, the convergence of the series \eqref{2.sp} is more delicate. In particular, it is well-known that this series is divergent for $s\le\frac12$, see \cite{Emer,Bor13}. For the convenience of the reader, we give the proof of this fact below.
\begin{lemma}\label{Lem2.div} Let $c>0$ be small enough. Then, there are infinitely many values of $n\in\mathbb N$ such that
\begin{equation}\label{2.bad} r_3(n)\ge c\sqrt{n} \end{equation}
and, in particular, the series \eqref{2.sp} is divergent for all $s\le\frac12$. \end{lemma} \begin{proof} Indeed, by comparison of volumes, we see that the number $M_N$ of integer points in a spherical layer $N\le i^2+j^2+k^2\le 2N$ can be estimated from below as $$ M_N=\sum_{n=N}^{2N}r_3(n)\ge \frac43\pi\((\sqrt{2N}-\sqrt3)^{3}-(\sqrt{N}+\sqrt3)^{3}\)\ge cN^{3/2} $$ for sufficiently small $c>0$. Thus, for every sufficiently big $N\in\mathbb N$, there exists $n\in[N,2N]$ such that $r_3(n)\ge c\sqrt{n}$ and estimate \eqref{2.bad} is verified. The divergence of \eqref{2.sp} for $s\le \frac12$ is an immediate corollary of this estimate since the $n$th term $(-1)^n\frac{r_3(n)}{(a^2+n)^s}$ does not tend to zero under this condition and the lemma is proved. \end{proof} \begin{remark} The condition that $c>0$ is small can be removed using more sophisticated methods. Moreover, it is known that the inequality $$ r_3(n)\ge c\sqrt{n}\ln\ln n $$ holds for infinitely many values of $n\in\mathbb N$ (for properly chosen $c>0$). On the other hand, for every $\varepsilon>0$, there exists $C_\varepsilon>0$ such that $$ r_3(n)\le C_\varepsilon n^{\frac12+\varepsilon}, $$
see \cite{Ram21} and references therein. Thus, we cannot establish divergence of \eqref{2.sp} via the $n$th term test if $s>\frac12$. Since this series is alternating, one may expect convergence for $s>\frac12$. However, the behavior of $r_3(n)$ as $n\to\infty$ is very irregular and, to the best of our knowledge, this convergence is still an open problem for $\frac12<s\le\frac{25}{34}$, see \cite{Bor13} for the convergence in the case $s>\frac{25}{34}$ and related results. \end{remark} Thus, one should use weaker concepts of convergence in order to justify equality \eqref{2.sp}. The main aim of these notes is to establish the convergence in the sense of Cesaro. \begin{definition}\label{Def2.Cesaro} Let $\kappa>0$. We say the series \eqref{2.sp} is $\kappa$-Cesaro (Cesaro-Riesz) summable if the sequence $$ C^\kappa_N(a,s):=\sum_{n=1}^N\(1-\frac nN\)^\kappa(-1)^n\frac{r_3(n)}{(a^2+n)^s} $$ is convergent. Then we write $$ (C,\kappa)-\sum_{N=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s}:=\lim_{N\to\infty}C_N^\kappa(a,s). $$ Obviously, $\kappa=0$ corresponds to the usual summation and if a series is $\kappa$-Cesaro summable, then it is also $\kappa_1$-Cesaro summable for any $\kappa_1>\kappa$, see e.g.~\cite{Ha}. \end{definition}
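The function $r_3(n)$ entering \eqref{2.sp} is easy to examine numerically; the brute-force sketch below (illustrative only, not part of the argument) reproduces its first values, e.g. $r_3(1)=6$, $r_3(2)=12$, $r_3(3)=8$ and $r_3(7)=0$, which already shows the irregular behaviour of $r_3(n)$ mentioned above.

```python
# Illustrative brute-force computation of r_3(n), the number of integer
# points (i, j, k) on the sphere i^2 + j^2 + k^2 = n.
def r3(n):
    m = int(n ** 0.5) + 1
    count = 0
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            for k in range(-m, m + 1):
                if i * i + j * j + k * k == n:
                    count += 1
    return count

if __name__ == "__main__":
    print([r3(n) for n in range(1, 9)])
```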
\subsection{Second order Cesaro summation}\label{s21} The aim of this subsection is to present a very elementary proof of the fact that the series \eqref{2.sp} is second order Cesaro summable. Namely, the following theorem holds. \begin{theorem}\label{Th2.2c} Let $s>0$. Then the series \eqref{2.sp} is second order Cesaro summable and
\begin{equation}\label{2.2good} M_{a,s}=(C,2)-\sum_{N=1}^\infty (-1)^n\frac{r_3(n)}{(a^2+n)^s}, \end{equation}
where $M_{a,s}$ is the same as in \eqref{1.main} and \eqref{1.rep}. \end{theorem} \begin{proof}
For every $N\in\mathbb N$, let us introduce the sets
\begin{equation*} D_N:=\bigcup\limits_{\substack{(I,J,K)\in2\mathbb Z^3\\Q_{I,J,K}\subset B_N}}Q_{I,J,K},\ \ \ D_N':=B_N\setminus D_N \end{equation*}
and split the sum $C^2_N(a,s)$ as follows
\begin{multline} C^2_N(a,s)=\sideset{}{'}\sum_{(i,j,k)\in B_N\cap\mathbb Z^3}\(1-\frac{i^2+j^2+k^2}{N}\)^2\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}=\\=\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\(1-\frac{i^2+j^2+k^2}{N}\)^2\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}+\\+ \!\!\!\sideset{}{'}\sum_{(i,j,k)\in D_N'\cap\mathbb Z^3}\!\!\!\(1\!-\!\frac{i^2+j^2+k^2}{N}\)^2\!\!\!\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}\!:= A_N(a,s)+R_N(a,s). \end{multline}
Let us start with estimating the sum $R_N(a,s)$. To this end we use the elementary fact that $$ \sqrt{N}-\sqrt3\le \sqrt{i^2+j^2+k^2}\le \sqrt{N} $$ for all $(i,j,k)\in D_N'$ ($\sqrt{3}$ is the length of the diagonal of the cube $Q_{I,J,K}$). Therefore,
\begin{equation}\label{2.R}
|R_N(a,s)|\le \(1-\frac{(\sqrt{N}-\sqrt3)^2}{N}\)^2\frac{\#\(D'_N\cap\mathbb Z^3\)}{\(a^2+(\sqrt {N}-\sqrt3)^2\)^s}. \end{equation}
Using again the fact that all integer points of $D'_N$ belong to the spherical layer
$\sqrt{N}-\sqrt{3}\le |x|\le\sqrt{N}$ together with the volume comparison arguments, we conclude
that $$ \#\(D'_N\cap\mathbb Z^3\)\le \frac43\pi\((\sqrt{N}+\sqrt3)^3-(\sqrt{N}-\sqrt3)^3\)\le c_0 N $$ for some positive $c_0$. Therefore,
\begin{equation}
|R_N(a,s)|\le \frac{C}{N}\frac{c_0N}{\(a^2+(\sqrt {N}-\sqrt3)^2\)^s}=\frac{C}{\(a^2+(\sqrt {N}-\sqrt3)^2\)^s}\to0 \end{equation}
as $N\to\infty$. Thus, the term $R_N$ is not essential and we only need to estimate the sum $A_N$. To this end, we rewrite it as follows
\begin{multline}\label{2.huge2} A_N(a,s)=\(1+\frac{a^2}N\)^2\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^s}-\\-\frac2N\(1+\frac {a^2}N\)\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^{s-1}}+\\+ \frac1{N^2}\sideset{}{'}\sum_{(i,j,k)\in D_N\cap\mathbb Z^3}\frac{(-1)^{i+j+k}}{(a^2+i^2+j^2+k^2)^{s-2}}=\\= \(1+\frac{a^2}N\)^2\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s)-\\-\frac2N\(1+\frac {a^2}N\)\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s-1)+\\+ \frac1{N^2}\sideset{}{'}\sum_{(i,j,k)\in \frac12D_{N}\cap\mathbb Z^3}E_{i,j,k}(a,s-2). \end{multline}
From Corollary \ref{Cor1.main}, we know that the first sum in the RHS of \eqref{2.huge2} converges to $M_{a,s}$ as $N\to\infty$. Using estimates \eqref{1.bet} and \eqref{1.as}, we also conclude that
\begin{equation}
\bigg|\sideset{}{'}\sum_{(i,j,k)\in \frac12D_N\cap\mathbb Z^3}E_{i,j,k}(a,s-1)\bigg|\le CN^{1-s} \end{equation}
and
\begin{equation}
\bigg|\sideset{}{'}\sum_{(i,j,k)\in \frac12D_N\cap\mathbb Z^3}E_{i,j,k}(a,s-2)\bigg|\le CN^{2-s}. \end{equation}
Thus, the two other terms in the RHS of \eqref{2.huge2} tend to zero as $N\to\infty$ and the theorem is proved. \end{proof}
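As an illustrative numerical check of Theorem \ref{Th2.2c} (again, not part of the proof), the second order Cesaro means $C^2_N(a,s)$ can be evaluated by brute force; for $a=0$ and $s=\frac12$ the values should stabilise near the Madelung constant $-1.74756...$.

```python
# Illustrative check: second order Cesaro means of the "summation by spheres"
# series, C^2_N(a, s) = sum_{n<=N} (1 - n/N)^2 (-1)^n r_3(n) / (a^2 + n)^s.
def cesaro2(N, a=0.0, s=0.5):
    # Tally r_3(n) for all n <= N in one pass over the lattice points.
    r3 = [0] * (N + 1)
    m = int(N ** 0.5) + 1
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            for k in range(-m, m + 1):
                n = i * i + j * j + k * k
                if 0 < n <= N:
                    r3[n] += 1
    return sum((1 - n / N) ** 2 * (-1) ** n * r3[n] / (a * a + n) ** s
               for n in range(1, N + 1))

if __name__ == "__main__":
    for N in (500, 1000, 2000):
        print(N, cesaro2(N))
```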
\subsection{First order Cesaro summation}\label{s22} We may try to treat this case analogously to the proof of Theorem \ref{Th2.2c}. However, in this case, we will have the multiplier $(1-\frac{(\sqrt{N}-\sqrt3)^2}{N})$ without the extra square and this leads to the extra technical assumption $s>\frac12$. In particular, this method does not allow us to establish the convergence for the case of classical NaCl-Madelung constant ($a=0$, $s=\frac12$). In this subsection, we present an alternative method based on the Riemann localization principle for multiple Fourier series which allows us to remove the technical condition $s>\frac12$. The key idea of our method is to introduce the function
\begin{equation}\label{2.F} M_{a,s}(x):=\sideset{}{'}\sum_{(n,k,l)\in\mathbb Z^3}\frac{e^{i(nx_1+kx_2+lx_3)}}{(a^2+n^2+k^2+l^2)^s}. \end{equation}
The series is clearly convergent, say, in $\mathcal D'(\mathbb T^3)$ and defines (up to a constant) a fundamental solution for the fractional Laplacian $(a^2-\Delta)^s$ on a torus $\mathbb T^3$ defined on functions with zero mean. Then, at least formally, $$ M_{a,s}=M_{a,s}(\pi,\pi,\pi) $$ and justification of this is related to the convergence problem for multi-dimensional Fourier series. \par Let $G_{a,s}(x)$ be the fundamental solution for $(a^2-\Delta)^s$ in the whole space $\R^3$, i.e. $$
G_{a,s}(x)=-\frac{1}{2^{\frac12+s}\pi^{\frac32}\Gamma(s)}\frac1{|x|^{3-2s}}\Psi(a|x|),\ \ \Psi(z):=z^{\frac32-s}K_{\frac32-s}(z), $$ where $K_\nu(z)$ is a modified Bessel function of the second kind and $\Gamma(s)$ is the Euler gamma function, see e.g. \cite{SL,Watson}. In particular, passing to the limit $a\to0$ and using that $\Psi(0)=2^{\frac12-s}\Gamma(\frac32-s)$, we get the fundamental solution for the case $a=0$: $$
G_{0,s}(x)=-\frac{\Gamma(\frac32-s)}{2^{2s}\pi^{\frac32}\Gamma(s)}\,\frac1{|x|^{3-2s}}. $$
Then, as is well known, the periodization of this function gives the fundamental solution on the torus:
\begin{equation}\label{2.Poisson} M_{a,s}(x)=C_0+\frac1{(2\pi)^3}\sum_{(n,k,l)\in\mathbb Z^3}G_{a,s}\(x-2\pi(n,k,l)\), \end{equation}
where the constant $C_0$ is chosen in such a way that $M_{a,s}(x)$ has a zero mean on the torus, see \cite{Flap,Trans} and references therein. Recall that, for $a>0$, the function $G_{a,s}(x)$ decays exponentially as $|x|\to\infty$, so the convergence of \eqref{2.Poisson} is immediate (and identity \eqref{2.Poisson} is nothing more than the Poisson Summation Formula applied to \eqref{2.F}). However, when $a=0$, the convergence of \eqref{2.Poisson} is more delicate since $G_{0,s}(x)\sim |x|^{2s-3}$ and the decay rate is not strong enough to get the absolute convergence. Thus, some regularization should be done and the method of summation also becomes important, see \cite{Bor13,CR,mar2000} and references therein. Recall also that we need to consider the case $s\le\frac12$ only (since for $s>\frac12$, we have convergence of the first order Cesaro sums by elementary methods).
\begin{lemma}\label{Lem2.Green} Let $0<s<1$. Then
\begin{multline}\label{2.Poisson0} M_{0,s}(x)=C_0'+\frac1{(2\pi)^3} G_{0,s}(x)+\\+\frac1{(2\pi)^3}\sideset{}{'}\sum_{(n,k,l)\in\mathbb Z^3}\bigg(G_{0,s}\(x-2\pi(n,k,l)\)-G_{0,s}(2\pi(n,k,l))\bigg), \end{multline}
where the convergence is understood in the sense of convergence by expanding rectangles and $C'_0$ is chosen in such a way that the mean value of the expression in the RHS is zero. \end{lemma} \begin{proof}[Sketch of the proof] Although this result seems well-known, we sketch below the proof of convergence of the RHS (the equality with the LHS can be established after that in a standard way, e.g. passing to the limit $a\to0$ in \eqref{2.Poisson}). \par To estimate the terms in the RHS, we use the following version of a mean value theorem for second differences:
\begin{multline} f(p+x)+f(p-x)-2f(p)=[f(p+x)-f(p)]-[f(p)-f(p-x)]\\=x\int_0^1(f'(p+\kappa x)-f'(p-\kappa x))\,d\kappa=\\= 2x^2\int_0^1\int_0^1\kappa f''(p+\kappa(2\kappa_1-1)x)\,d\kappa_1\,d\kappa. \end{multline}
Applying this formula to the function $G_{0,s}(x)$, we get
\begin{multline*}
\bigg|\sum_{\varepsilon_i=\pm1,\, i=1,2,3 }\bigg(G_{0,s}(2\pi n+\varepsilon_1x_1,2\pi k+\varepsilon_2 x_2,2\pi l+\varepsilon_3x_3)-G_{0,s}(2\pi(n,k,l))\bigg)\bigg|\\\le C\sum_{i=1}^3\|\partial^2_{x_i}G_{0,s}\|_{C(2\pi(n,k,l)+\mathbb T^3)}\le \frac{C_1}{(n^2+k^2+l^2)^{\frac32-(s-1)}}. \end{multline*}
Thus, we see that, if we combine together in the RHS of \eqref{2.Poisson0} the terms corresponding to 8 nodes $(\pm n,\pm k,\pm l)$ (for every fixed $(n,k,l)$), the obtained series will become absolutely convergent (here we use the assumption $s<1$). \par It remains to note that the parallelepipeds $\Pi_{N,M,K}$ enjoy the property: $(n,m,k)\in\Pi_{N,M,K}$ implies that all 8 points $(\pm n,\pm m,\pm k)\in \Pi_{N,M,K}$. This implies the convergence by expanding rectangles and finishes the proof of the lemma. \end{proof}
\begin{corollary}\label{Cor2.Grsm} Let either $0<s<\frac32$ and $a>0$, or $a=0$ and $0<s<1$. Then, the function $M_{a,s}(x)$ is $C^\infty(\mathbb T^3\setminus\{0\})$ and $M_{a,s}(x)\sim \frac C{|x|^{3-2s}}$ near zero. In particular, $M_{a,s}\in L^{1+\varepsilon}(\mathbb T^3)$ for some positive $\varepsilon=\varepsilon(s)$. \end{corollary}
\begin{proof} Indeed, the infinite differentiability follows from \eqref{2.Poisson} and \eqref{2.Poisson0} since differentiation of $G_{a,s}(x)$ in $x$ can only improve the rate of convergence. In addition, $M_{a,s}(x)-\frac1{(2\pi)^3}G_{a,s}(x)$ is smooth on the whole $\mathbb T^3$, so $M_{a,s}$ belongs to the same Lebesgue space $L^p$ as the function $|x|^{2s-3}$. \end{proof} \begin{remark} The technical assumption $s<1$ can be removed using the fact that $(a^2-\Delta)^{s_1}(a^2-\Delta)^{s_2}=(a^2-\Delta)^{s_1+s_2}$ and, therefore, by the elementary properties of convolutions, $$ G_{a,s_1+s_2}=G_{a,s_1}*G_{a,s_2}. $$ Note that the result of Corollary \ref{Cor2.Grsm} can be obtained in a straightforward way using the standard PDEs technique, but we prefer to use the explicit formulas \eqref{2.Poisson} and \eqref{2.Poisson0} which look a bit more transparent. In addition, using the Poisson Summation Formula in a more sophisticated way (e.g. in the spirit of \cite{mar}, see also references therein), we can obtain much better (exponentially convergent) series for $M_{0,s}(x)$. \end{remark} We are now ready to state and prove the main result of this section. \begin{theorem}\label{Th2.1c} Let $s>0$. Then
\begin{equation}\label{2.1cesaro} M_{a,s}=M_{a,s}(\pi,\pi,\pi)=\lim_{N\to\infty}\sum_{n=1}^N\(1-\frac nN\)\frac{(-1)^nr_3(n)}{(a^2+n)^s} \end{equation}
and, therefore, \eqref{2.sphere} is first order Cesaro summable by expanding spheres. \end{theorem} \begin{proof} As already mentioned above, it is sufficient to consider the case $0<s<1$ only. We also recall that \eqref{2.F} is nothing more than the formal Fourier expansion of the function $M_{a,s}(x)$; therefore, to verify the second equality in \eqref{2.1cesaro}, we need to check the convergence of the Fourier expansion of $M_{a,s}(x)$ at $x=(\pi,\pi,\pi)$ under first order Cesaro summation by expanding spheres. To do this, we use the analogue of the Riemann localization property for multi-dimensional Fourier series. Namely, as proved in \cite{stein}, this localization holds for first order Cesaro summation by expanding spheres in the class of functions $f$ such that $$
\int_{\mathbb T^3}|f(x)|\ln_+|f(x)|\,dx<\infty $$ (this is exactly the critical case $\kappa=\frac{d-1}2=1$ for $d=3$). Thus, since this condition is satisfied for $M_{a,s}(x)$ due to Corollary \ref{Cor2.Grsm}, the Fourier series for $M_{a,s}(x)$ and $M_{a,s}(x)-\frac1{(2\pi)^3}G_{a,s}(x)$ are convergent or divergent simultaneously. Since the second function is $C^\infty$ on the whole torus, we have the desired convergence, see also \cite{MFS} and references therein. Thus, the second equality in \eqref{2.1cesaro} is established. To verify the first equality, it is enough to mention that the series is second order Cesaro summable to $M_{a,s}$ due to Theorem \ref{Th2.2c}. This finishes the proof of the theorem. \end{proof}
\section{Concluding remarks}\label{s3} Note that formally Theorem \ref{Th2.1c} covers Theorem \ref{Th2.2c}. Nevertheless, we would like to present both methods. The one given in subsection \ref{s21} is not only very elementary and transparent, but also can be easily extended to summation by general expanding domains $N\Omega$ where $\Omega$ is a sufficiently regular bounded domain in $\R^3$ containing zero. Also the rate of convergence of second Cesaro sums can be easily controlled. Some numeric simulations for the case of NaCl-Madelung constant ($a=0$, $s=\frac12$) are presented in the figure below
\begin{figure}
\caption{A figure plotting $N$th partial sums of \eqref{2.2good} with $a=0$ and $s=\frac12$ up to $N=5000$.}
\label{fig:coffee}
\end{figure}
\noindent and we clearly see the convergence to the Madelung constant $$ M_{0,1/2}=-1.74756... $$ The second method (used in the proof of Theorem \ref{Th2.1c}) is more delicate and strongly based on the Riemann localization for multiple Fourier series and classical results of \cite{stein}. This method is more restricted to expanding spheres and the rate of convergence is not clear. Some numeric simulation for the NaCl-Madelung constant is presented in the figure below
\begin{figure}
\caption{A figure plotting $N$th partial sums of \eqref{2.1cesaro} with $a=0$ and $s=\frac12$ up to $N=5000$.}
\label{fig:coffee1}
\end{figure}
\noindent and we see that the rate of convergence is essentially worse than for the case of second order Cesaro summation. As an advantage of this method, we mention the ability to extend it to a more general class of exponential sums of the form \eqref{2.F}. \par Both methods are easily extendable to other dimensions $d\ne3$. Indeed, it is not difficult to see that the elementary method works for Cesaro summation of order $\kappa\ge d-2$ and the second one requires the weaker assumption $\kappa\ge\frac{d-1}2$. Using the fact that the function $M_{a,s}(x)$ is more regular (belongs to some Sobolev space $W^{\varepsilon,p}(\mathbb T^3)$), together with the fact that Riemann localization holds for slightly subcritical values of $\kappa$ if this extra regularity is known (see e.g. \cite{MFS}), one can prove convergence for some $\kappa=\kappa(s)<\frac{d-1}2$ although the sharp values for $\kappa(s)$ seem to be unknown.
\end{document}
Cumulative Return Definition
Reviewed by James Chen
What Is Cumulative Return?
A cumulative return on an investment is the aggregate amount that the investment has gained or lost over time, independent of the period of time involved. Presented as a percentage, the cumulative return is the raw mathematical return of the following calculation:
Cumulative Return = (Current Price of Security − Original Price of Security) / Original Price of Security
How Cumulative Return Works
The cumulative return of a stock that does not have a dividend is easily calculated by figuring out the amount of profit or loss over the original price. For example, investing $10,000 in Johnson & Johnson (JNJ) for a 10-year period ending on Dec. 31, 2018, results in $48,922. With no dividends reinvested, this is a total cumulative return of 697.99% or an average of 10.94%; it also includes two stock splits. The value of dividends received during that time period also adds another $13,611 in profit above the original investment.
Calculating the cumulative return for a stock that reinvests dividends is much more difficult. In the Johnson & Johnson example above, reinvesting the dividends nets a total value of $75,626. Cumulative return could be misleading in this scenario because the reinvested aggregate amount is more than in the previous example, where the total of the principal of $48,922 and uninvested dividends of $13,611 is $62,533.
Reinvesting the dividends increases the investor's cost basis and reduces cumulative return. For the reinvested example, the stockholder's cumulative return is 656.26% or an average of 10.64%. When compared, the reinvested amount has a lower cumulative return but really yields more total dollar amount for the investor, with an additional $13,093.
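As a minimal sketch of the raw calculation (with invented prices, not the Johnson & Johnson figures above):

```python
def cumulative_return(current_price, original_price):
    """Cumulative return as a fraction: (current - original) / original."""
    return (current_price - original_price) / original_price

# A security bought at $100 and now trading at $150 has a 50% cumulative return.
print(f"{cumulative_return(150, 100):.2%}")
```

Multiplying by 100 (here via the `%` format) gives the percentage form used throughout the article.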
Expressed as a percentage, cumulative return is the total change in the price of an investment over a set time period—an aggregate return, not an annualized one.
Reinvesting the dividends or capital gains of an investment impacts its cumulative return.
Cumulative return figures for mutual funds typically omit the effect of annual expense ratios and other fees on the fund's performance.
Cumulative Return and Mutual Funds
A common way to present the "effect" of a mutual fund's performance over time is to show the cumulative return with a visual such as a mountain graph. Investors should check to confirm whether interest and/or dividends are included in the cumulative return (the marketing materials or info accompanying an illustration should say); such payouts may be assumed to be reinvested or simply counted as raw dollars when calculating the cumulative return.
One notable difference between mutual funds and stocks is that mutual funds sometimes distribute capital gains to the fund holders. This distribution usually comes at the end of a calendar year and consists of the profits the portfolio managers made when closing out holdings. Mutual fund owners have the option to reinvest those capital gains, which can make calculating the cumulative return even more difficult.
Cumulative Return Versus Compound Return
Along with the cumulative return, a mutual fund or other investment usually indicates its compound return. Unlike the cumulative return, the compound return figure is annualized.
While cumulative returns may sound more impressive than the usually smaller annualized rate of return, they typically omit the effect of the annual expenses on the returns an investor will truly receive. Annual expenses an investor may expect include fund total expense ratios, interest rates on loans and management fees. When worked out on a cumulative basis, these fees can substantially eat into cumulative return numbers.
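The relationship between the two figures can be sketched with a small helper (the function and the numbers are illustrative, not from the article): an annualized compound rate is the geometric average implied by the cumulative return.

```python
def annualized_return(cumulative, years):
    """Compound annual rate implied by an aggregate cumulative return (both as fractions)."""
    return (1 + cumulative) ** (1 / years) - 1

# A 100% cumulative return over 10 years compounds to roughly 7.2% per year,
# not 10% per year, which is why cumulative figures can sound more impressive.
print(f"{annualized_return(1.00, 10):.2%}")
```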
Methodology article
Dynamic meta-analysis: a method of using global evidence for local decision making
Gorm E. Shackelford ORCID: orcid.org/0000-0003-0949-09341,2,
Philip A. Martin1,2,
Amelia S. C. Hood1,
Alec P. Christie1,
Elena Kulinskaya3 &
William J. Sutherland1,2
BMC Biology volume 19, Article number: 33 (2021)
Meta-analysis is often used to make generalisations across all available evidence at the global scale. But how can these global generalisations be used for evidence-based decision making at the local scale, if the global evidence is not perceived to be relevant to local decisions? We show how an interactive method of meta-analysis—dynamic meta-analysis—can be used to assess the local relevance of global evidence.
We developed Metadataset (www.metadataset.com) as a proof-of-concept for dynamic meta-analysis. Using Metadataset, we show how evidence can be filtered and weighted, and results can be recalculated, using dynamic methods of subgroup analysis, meta-regression, and recalibration. With an example from agroecology, we show how dynamic meta-analysis could lead to different conclusions for different subsets of the global evidence. Dynamic meta-analysis could also lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by systematic reviewers—decisions about which studies to include (e.g. critical appraisal) and how to handle missing or poorly reported data (e.g. sensitivity analysis).
In this study, we show how dynamic meta-analysis can meet an important challenge in evidence-based decision making—the challenge of using global evidence for local decisions. We suggest that dynamic meta-analysis can be used for subject-wide evidence synthesis in several scientific disciplines, including agroecology and conservation biology. Future studies should develop standardised classification systems for the metadata that are used to filter and weight the evidence. Future studies should also develop standardised software packages, so that researchers can efficiently publish dynamic versions of their meta-analyses and keep them up-to-date as living systematic reviews. Metadataset is a proof-of-concept for this type of software, and it is open source. Future studies should improve the user experience, scale the software architecture, agree on standards for data and metadata storage and processing, and develop protocols for responsible evidence use.
Meta-analysis is often used to make generalisations about interventions, such as agricultural practices or medical treatments [1]. It can be difficult to make generalisations if interventions have different effects in different contexts. For example, a meta-analysis of conservation agriculture found beneficial effects in hotter, drier climates, but not in colder, wetter climates [2]. Therefore, it can be difficult to use meta-analysis to make decisions about interventions in a specific context, unless the results are known to be generalizable to that specific context.
What is needed is a method of meta-analysis that enables decision makers to answer the question, "How effective is this intervention in my specific context?" [3,4,5].
Subgroup analysis and meta-regression [6] are standard methods of meta-analysis that can be used to answer this question, but the researchers who produce a meta-analysis may not answer the specific question that the decision makers want answered. In the above example of conservation agriculture [2], the researchers used meta-regression to ask, "How effective is conservation agriculture in different climates?" But the decision makers may want to ask, "How effective is conservation agriculture in my climate or in my country?" Researchers may not provide an answer to this question, not only because they do not know which variables will define the context for different decision makers, but also because they do not have the time and space to analyse and publish the results for all combinations and permutations of context-defining variables. Instead, researchers may only publish an answer to a more generic question.
The lack of context-specific evidence is a problem in evidence-based decision making [7,8,9]. One solution to this problem is to commission new research and/or new reviews that exactly match the local context (e.g. "co-production" of knowledge), but that takes time and money and may be impractical or impossible for many decisions. Another solution is to assess the relevance of existing research that does not exactly match the local context (e.g. "co-assessment" of knowledge [10]). Relevance includes both "applicability" and "transferability" [3]. Transferability is the extent to which an intervention would have the same effect in a different context (e.g. conservation agriculture might have a different effect in a different climate). Applicability is the extent to which an intervention would be feasible in a different context (e.g. conservation agriculture might not be feasible in an area without access to herbicides or seed drills). We use these terms as defined above (in the sense of [3]), but we note that applicability, transferability, external validity, and generalisability are sometimes used interchangeably and are sometimes used in somewhat different senses [4, 11]. Here, we focus on transferability, but we also discuss applicability.
It has been suggested that "research cannot provide an exact match to every practitioner's circumstances, or perhaps any practitioner's circumstances because environments are dynamic and often changing, whereas completed research is static" [5]. A partial solution to this problem could be to make research more dynamic, by enabling decision makers to interact with it. For example, decision makers could filter a database of research publications, to find studies that are more relevant to their circumstances, or they could weight these studies by relevance to their circumstances. Several methods of interactive evidence synthesis have already been developed. For example, interactive evidence maps enable users to filter research publications by country (e.g. [12]). Decision-support systems enable users to weight evidence by value to stakeholders (e.g. [13]). However, as far as we are aware, there are no tools that enable users to both filter and weight the studies in a meta-analysis, and thereby to answer the question, "How effective is this intervention in my specific context?" Therefore, we developed a tool for this purpose, and here we show how this tool could be used to assess the local relevance of a global meta-analysis in agroecology.
This tool is an example of a method that we refer to as dynamic meta-analysis. This term has been used in different disciplines and in different senses (cf. [14,15,16]), and sometimes in the sense of a living systematic review that can be dynamically updated by researchers [17, 18], instead of a meta-analysis that can be dynamically filtered and weighted by users. However, as far as we are aware, dynamic meta-analysis has not been defined as a method, and we define it here.
Dynamic meta-analysis
As we define it here, dynamic meta-analysis is a method of interactively filtering and weighting the data in a meta-analysis. The diagnostic feature of a dynamic meta-analysis is that it takes place in a dynamic environment (e.g. a web application), not a static environment (e.g. a print publication), and this enables users to interact with it. Dynamic meta-analysis includes subgroup analysis and/or meta-regression [6]. These are standard methods in meta-analysis, and they are used to calculate the results for a subset of the data, either by analysing only that subset (subgroup analysis) or else by analysing all of the data but calculating different results for different subsets, while accounting for the effects of other variables (meta-regression). The variables that define these subsets can include country, climate type, soil type, study design, or any other metadata that can be used to define relevance. In a dynamic meta-analysis, users filter the data to define a subset that is relevant to them, and then the results for that subset are calculated, using subgroup analysis and/or meta-regression.
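The filtering-and-recalculating step can be sketched in a few lines of code. The sketch below uses hypothetical data points and a simple fixed-effect, inverse-variance pooling of log response ratios; the Shiny app itself fits a mixed-effects model with the metafor package (see Methods), so this is an illustration of the idea, not the implementation.

```python
import math

# Hypothetical data points: each carries a log response ratio (yi), its
# sampling variance (vi), and the metadata used for filtering. These numbers
# are illustrative, not values from the cover-crop dataset.
data = [
    {"yi": 0.10,  "vi": 0.02, "country": "USA",   "cover_crop": "Brassica"},
    {"yi": -0.15, "vi": 0.03, "country": "USA",   "cover_crop": "Brassica"},
    {"yi": 0.05,  "vi": 0.01, "country": "Italy", "cover_crop": "Legume"},
]

def subgroup_estimate(data, **filters):
    """Keep only the data points that match the selected filters, then pool
    the effect sizes with inverse-variance weights (fixed-effect model)."""
    subset = [d for d in data if all(d[k] == v for k, v in filters.items())]
    weights = [1.0 / d["vi"] for d in subset]
    pooled = sum(w * d["yi"] for w, d in zip(weights, subset)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# A user-selected subset, as if the user had selected two filters:
pooled, se = subgroup_estimate(data, country="USA", cover_crop="Brassica")
```

In a dynamic environment, the filters are chosen by the user at run time, and the pooled estimate and its standard error are recalculated for each new subset.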
Dynamic meta-analysis also includes recalibration [19], which is a method of weighting studies based on their relevance. With recalibration, users can consider a wider range of evidence—not only the data that is completely relevant, but also the data that is partially relevant. Recalibration may be the only option, if no evidence exists that is completely relevant.
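Recalibration can be sketched as follows. This is only one plausible weighting scheme (multiplying each study's inverse-variance weight by a user-assigned relevance weight); the method described in [19] and implemented on Metadataset may differ in its details, and the numbers are illustrative.

```python
# Hypothetical studies: log response ratio (yi), variance (vi), and a
# user-assigned relevance weight in [0, 1], where 1 = fully relevant.
studies = [
    {"yi": -0.20, "vi": 0.04, "relevance": 1.0},
    {"yi": 0.10,  "vi": 0.04, "relevance": 0.5},  # partially relevant, down-weighted
]

def recalibrated_estimate(studies):
    """Pool effect sizes with weights = relevance / variance, so that
    partially relevant studies contribute, but with less influence."""
    weights = [s["relevance"] / s["vi"] for s in studies]
    return sum(w * s["yi"] for w, s in zip(weights, studies)) / sum(weights)

estimate = recalibrated_estimate(studies)
```

With a relevance weight of 0, a study is effectively excluded; with a weight of 1, it contributes as it would in a standard meta-analysis.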
Dynamic meta-analysis also includes elements of critical appraisal (i.e. deciding which studies should be included in the meta-analysis, based on study quality) and sensitivity analysis (i.e. permuting the assumptions of a meta-analysis, to test the robustness of the results). Critical appraisal and sensitivity analysis are typically performed by systematic reviewers (e.g. see the Collaboration for Environmental Evidence (CEE) [20] for standard methods), but dynamic meta-analysis enables decision makers to participate in both critical appraisal and sensitivity analysis.
For example, decision makers may want to understand the implications of including or excluding a controversial study, or of including studies that are relevant to their local context but of lower quality, if higher-quality studies are not available [21]. If decision makers are looking for conservation studies on a specific biome or taxon, for instance, higher-quality studies may not be available [7]. In some forms of evidence synthesis, lower-quality studies are excluded from the evidence base before they can be considered by decision makers (e.g. best evidence synthesis [22]), but in a dynamic meta-analysis, these studies can be included in the evidence base and tagged with metadata, so that decision makers can consider them for themselves.
It may also be important to include all studies, regardless of study quality, if study quality is related to study results. For example, in a review of forest conservation strategies, lower-quality studies were more likely to report negative results [23]. By comparing the results of different analyses that are based on different studies or different assumptions (e.g. different methods for handling missing data), users can test the sensitivity of the results to these different assumptions (sensitivity analysis).
Metadataset: a website for dynamic meta-analysis
We developed Metadataset [24] as a proof-of-concept for dynamic meta-analysis. Metadataset is a website that provides two methods of interactive evidence synthesis: (1) browsing publications by intervention, outcome, or country (using interactive evidence maps) (Fig. 1) and (2) filtering and weighting the evidence in a dynamic meta-analysis (Fig. 2). Additional file 1 is a video that shows how Metadataset can be used.
A screenshot from Metadataset that shows an interactive evidence map
A screenshot from Metadataset that shows a dynamic meta-analysis
Additional file 1. Metadataset video.mp4.
At present, Metadataset has evidence on two subject areas: (1) agriculture, which includes data from a meta-analysis of cover crops in Mediterranean climates [25] and a systematic map of cassava farming practices that is a work in progress [26], and (2) invasive species, which includes a systematic review of management practices for invasive plants that is also a work in progress [27]. However, we plan to expand Metadataset to other subject areas, and we welcome collaborations. Here we focus on cover crops in Mediterranean climates as an example of dynamic meta-analysis.
Cover crops are often grown over the winter, as an alternative to bare soil or fallow, and cash crops are grown over the following summer. Shackelford et al. [25] analysed the effects of cover crops on ten outcomes (e.g. cash crop yield and soil organic matter) and recorded the metadata that we use here for subgroup analysis and meta-regression (e.g. country, cover crop type, fertiliser usage, and tillage). Shackelford et al. [25] presented some subgroup analyses (e.g. legumes vs non-legumes as cover crop types), but noted the problem of not being able to report all combinations of subgroups that might be of interest to a reader (e.g. legumes in California, without synthetic fertiliser). We entered their data into Metadataset, to show how dynamic meta-analysis is a solution to this problem.
We imagined a scenario in which a hypothetical user searches for evidence on cover crops that are brassicas (e.g. mustard or rapeseed) on irrigated farms in California. Brassicas do not fertilise the soil as legumes do (by fixing nitrogen), and their effects on soil fertility (including the allelochemicals they release, which suppress the growth of other plants) could reduce the yields of the cash crops that are grown over the following summer, even if they successfully suppress weeds over the winter. Thus, there is a reason to believe that the evidence on cover crops in general may not be transferable to specific cover crops, such as brassicas or legumes, which have different effects on the soil [25]. We show how this hypothetical user could filter and weight the evidence on Metadataset.
Additional file 1 is a video that shows these results on the Metadataset website. Additional file 2 is R code that reproduces these results, using the data from Additional file 3. On the Metadataset website, the evidence on cover crops [28] includes 57 publications from 5 countries: France (2 publications), Greece (2), Italy (24), Spain (9), and the USA (20). Browsing the data by outcome, a user finds the hierarchical classification of outcomes. She clicks "filter by intervention" for one outcome ("10.10.10. Crop yield") and she sees that there are 316 data points for this outcome. She clicks an intervention ("Rotating cash/food crops with cover crops"), and the Shiny app opens.
To see the results for all 316 data points in the Shiny app, she deselects the option for "Exclude rows with exceptionally high variance (outliers)" and then she clicks "Start your analysis" to start a dynamic meta-analysis for her selected intervention and outcome (Step 1 in Table 1). Based on all 316 data points from 38 publications, cover crops do not have significant effects on cash crop yields (response ratio = 1; P = 0.9788; cash crop yields are 0% different with cover crops than they are without cover crops, with a 95% confidence interval from 4% lower to 4% higher).
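The percentages reported here are back-transformations of the log response ratio (see Methods). A minimal sketch of this back-transformation, with hypothetical confidence limits:

```python
import math

def percent_change(log_rr):
    """Convert a log response ratio into a percentage difference between
    the treatment (with cover crops) and the control (without)."""
    return (math.exp(log_rr) - 1.0) * 100.0

# A log response ratio of 0 corresponds to a response ratio of 1 (no difference):
no_difference = percent_change(0.0)

# Hypothetical confidence limits of 0.96 and 1.04 on the response-ratio scale
# correspond to "4% lower" and "4% higher":
lower = percent_change(math.log(0.96))
upper = percent_change(math.log(1.04))
```
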
Table 1 An example of the steps in a dynamic meta-analysis
However, these are the generic results for all of the global evidence. To find results that are transferable to her specific context, she filters the evidence (Step 2 in Table 1). She selects "United States of America" from the filter for "Country", "Brassica" from the filter for "Cover crop type", and "Yes" from the filter for "Irrigated cash crop". She then clicks "Update your analysis" to see the subgroup analysis for these filters (Fig. 2). Based on 14 data points from 2 publications (the only publications in which the cover crops were brassicas, grown in the USA, followed by irrigated cash crops), cash crop yields are lower after cover crops, but not significantly lower (13% lower, with a 95% confidence interval from 30% lower to 9% higher; P = 0.2381).
She clicks "Meta-regression" to see if the results from this subgroup analysis are relatively similar to the results from the meta-regression (Step 3 in Table 1). In the meta-regression, cash crop yields are significantly lower after cover crops (9% lower, with a 95% confidence interval from 12% lower to 5% lower; P < 0.0001). This is not surprising, since meta-regression is potentially more powerful statistically than subgroup analysis (it uses all of the data, and it potentially produces better estimates of variance). However, she sees a warning that one of her selected filters ("Irrigated cash crop") did not have a significant effect on this outcome (i.e. this moderator was not included in the "best" meta-regression model, with the lowest AICc). She deselects this filter and clicks "Update your analysis". There are now 30 data points from 3 publications in the subgroup analysis, and yields are now significantly lower (P = 0.0436). So far, it seems that the global evidence is not transferable to her local conditions (neutral effects vs negative effects on cash crop yields). However, she has found some evidence that seems transferable, and she has recalculated the results for this evidence, using subgroup analysis and meta-regression.
She clicks the tab for "Study summaries and weights" to see the paragraphs that summarise each of these three studies (Fig. 3). She sees one study on maize, one on tomatoes, and one on beans. Tomatoes are less relevant to her interests (she is mostly interested in grains or pulses as cash crops), so she sets a relevance weight of 0.5 for the study on tomatoes. She then returns to the tab for "Dynamic meta-analysis" and clicks "Update your analysis" to see the effects of this recalibration (Step 4 in Table 1). The results are still negative, but slightly more significant (P = 0.0224).
A screenshot from Metadataset that shows a method of recalibration in a dynamic meta-analysis. Users can adjust the weight of a study, based on its relevance to their context
She then considers the sensitivity of these results by permuting the settings. For example, there are several options for handling missing data, and these can be selected, deselected, and/or adjusted for sensitivity analysis (Step 5 in Table 1). When she deselects the option to "approximate the variance of the log response ratio" (below the filters), the result is still significantly negative. When she permutes several other options (e.g. the sliders for assumed P values), this result seems to be robust (all of the results are significantly negative).
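One of these options for handling missing data, imputing missing variances with the mean of the reported variances (see Methods), can be sketched as follows (the numbers are illustrative):

```python
# Hypothetical variances for the log response ratio; None marks studies that
# did not report enough information to calculate a variance.
variances = [0.02, None, 0.04, None]

def impute_with_mean(values):
    """Replace missing variances with the mean of the reported variances."""
    reported = [v for v in values if v is not None]
    mean = sum(reported) / len(reported)
    return [mean if v is None else v for v in values]

imputed = impute_with_mean(variances)  # approximately [0.02, 0.03, 0.04, 0.03]
```

Selecting or deselecting an option like this changes which data points enter the analysis and with what weights, which is why permuting these options is a form of sensitivity analysis.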
She reaches the conclusion that cover crops could have negative effects on cash crop yields in her local conditions (brassicas as cover crops on irrigated fields in California, and preferably with grains or pulses as cash crops). She would have reached a very different conclusion using the global evidence (cover crops have neutral effects on cash crop yields). However, she found only three relevant studies, and there is some uncertainty in these results. It has been suggested that uncertainty could be incorporated into decision analysis [29], and she could use the results of her dynamic meta-analysis—the mean effect size and its confidence interval—as inputs for decision analysis. However, we will leave this hypothetical user here, having shown some of the key features of dynamic meta-analysis on Metadataset.
Dynamic meta-analysis provides a partial solution to an important problem in evidence-based decision making—lack of access to relevant evidence [7,8,9]—not only by helping users to find locally relevant evidence in a global evidence base, but also by helping them to use this evidence to reach locally relevant conclusions. We showed how the Metadataset website can be used for dynamic meta-analysis, as a proof-of-concept for software that could be used in other disciplines. For example, we showed how a hypothetical user could reach a different conclusion when using the global evidence (cover crops have no effect on cash crop yields) instead of the locally relevant evidence (brassicas have negative effects on cash crop yields in California). As a next step, this evidence could be used as an input into decision analysis [13], but that is beyond the scope of our work here. Here we discuss some strengths and weaknesses of dynamic meta-analysis, and we suggest that this method could be scaled up and used for subject-wide evidence synthesis in several scientific disciplines.
Metadataset compared to other tools
Researchers in psychology have suggested "community augmented meta-analysis" (CAMA), in which open-access databases of effect sizes could be updated and reused by researchers for future meta-analyses [30]. MetaLab [31] is an implementation of CAMA that includes data from several meta-analyses in psychology [18]. It enables researchers to test the effects of covariates on the mean effect size (using meta-regression), but it does not provide options for subgroup analysis or recalibration, which Metadataset does. MetaLab and other interactive databases of effect sizes could presumably be modified to provide these options. However, it would perhaps be better to have one large database for each subject area, with interoperable data and metadata, rather than many small databases.
An older, offline tool that seems to be more similar to Metadataset in both function and intention is the Transparent Interactive Decision Interrogator (TIDI) in medicine [32]. TIDI provides options for subgroup analysis and study exclusion, but not recalibration. A newer, online tool is IU-MA [33], which provides "interactive up-to-date meta-analysis" of two datasets in medicine [16]. Becker et al. [16] also refer to dynamic meta-analyses, but they do not provide a definition of the term, and although their IU-MAs provide options for subgroup analysis, they do not provide options for recalibration.
All of these tools are clearly useful, and there are clearly many similarities between them, but there are also many differences. One important difference is that none of these tools, with the exception of Metadataset, provides options for recalibration (i.e. weighting individual studies based on their relevance) or for analysing the data at different levels of resolution (i.e. lumping or splitting interventions and outcomes before starting a dynamic meta-analysis). We see recalibration as a key feature for dynamic meta-analysis. We also see this lumping or splitting of evidence (which we will refer to as the dynamic scoping of a meta-analysis) as a key feature. As well as assessing the transferability of evidence using dynamic meta-analysis, we suggest that users should be able to assess the applicability of evidence by dynamically scoping the meta-analysis (which is also a process of filtering the evidence, like subgroup analysis, but it is done before starting the meta-analysis). Dynamic scoping could also provide a partial solution to the "apples and oranges" problem in meta-analysis [34], since users could decide for themselves which "apples" and which "oranges" should be compared (e.g. deciding which interventions and/or outcomes should be analysed together). Therefore, we think that both filtering (subgroup analysis and dynamic scoping) and weighting (recalibration) should be seen as key features of dynamic meta-analysis.
Recalibration has the potential to improve evidence synthesis in subject areas where no evidence is completely relevant to decision makers (and where subgroup analysis would therefore not be useful). This relates to another important difference between these tools, which is that they are solutions to different problems, in different disciplines (agroecology, conservation biology, medicine, and psychology). In some disciplines, the need for recalibration may be less important than we perceive it to be in agroecology and conservation biology, in which there may be no evidence for a specific biome or taxon [7, 9], and in which heterogeneity may be higher than it is in carefully controlled clinical or laboratory sciences. Thus, recalibration and other methods of assessing existing evidence may be especially important in disciplines with sparse evidence (cf. [35]).
Dynamic meta-analysis of data from living systematic reviews
There is an important distinction between a dynamic meta-analysis, as we have defined it here, and a living review. As we see it, the diagnostic feature of a living review is that it is updated as soon as possible after a new study is published, whereas the diagnostic feature of a dynamic meta-analysis is that it is interactive. However, a dynamic meta-analysis could use data from a living review, and thus it could be part of a living review. Metadataset uses data from an online database that can be easily updated, and so it is already possible to use it for living reviews. When new studies are added to the database, they are immediately available for dynamic meta-analysis. A traditional meta-analysis is static and cannot easily be updated without reanalysis and republication. In contrast, a dynamic meta-analysis can be easily updated, and therefore it could be ideal for the meta-analytic component of a living review.
Dynamic meta-analysis for subject-wide evidence synthesis
Metadataset was developed as part of the Conservation Evidence project [36], which provides summaries of scientific studies (including the studies of cover crops [37] that we used as an example of dynamic meta-analysis). By browsing and searching the Conservation Evidence website [38], users may already be able to find summaries of studies that match their local conditions. In this sense, Metadataset does not represent progress beyond the interface that is already available on Conservation Evidence. However, Metadataset goes a step further. It enables users to reach new conclusions based on these studies.
This is only possible because Metadataset provides quantitative evidence (effect sizes) that can be dynamically reanalysed, whereas Conservation Evidence provides qualitative evidence ("effectiveness categories" [36]) that cannot yet be dynamically reanalysed. It is possible that dynamic methods could be developed for Conservation Evidence, perhaps by using expert assessment to assign quantitative scores to each study. However, there are good reasons that Conservation Evidence does not yet use quantitative methods. For example, the populations and outcomes of conservation studies are heterogeneous, and this suggests that meta-analysis might not be an appropriate method of evidence synthesis [7], whereas agricultural studies may be more homogeneous. Nevertheless, in subject areas for which quantitative methods are appropriate, Metadataset represents progress towards the co-assessment of evidence [10], and dynamic meta-analysis complements the qualitative methods that are used by Conservation Evidence.
We suggest that dynamic meta-analysis could be particularly useful in the context of subject-wide evidence synthesis [35, 36], which is a method of evidence synthesis that was developed by the Conservation Evidence project. Whereas a typical systematic review includes studies of only one or a few interventions, a subject-wide evidence synthesis includes studies of all interventions in a subject area (e.g. bird conservation), and thus it benefits from economies of scale [35]. For example, a publication only needs to be read once, and all of the data can be extracted for all interventions, rather than needing to be read once for each review of each intervention.
Subject-wide evidence synthesis is evidence synthesis on the scale that is needed for multi-criteria decision analysis [13], and thus it is particularly relevant to a discussion of evidence-based decision making. Because subject-wide evidence synthesis is global in scale, it raises the question, "How relevant is this global evidence for my local decision?" We suggest that dynamic meta-analysis, or some similar method of assessing the local relevance of global evidence, could be especially useful for subject-wide evidence synthesis. On Metadataset, our work on invasive plant management [27] is an example of subject-wide evidence synthesis in conservation biology, and it will soon be possible to assess the transferability of this evidence using dynamic meta-analysis. It will also be possible to browse this evidence by intervention and outcome, and thus to consider its applicability to a specific decision (using dynamic meta-analysis only for those interventions and outcomes that are considered to be applicable).
Protocols for evidence use
Dynamic meta-analysis could lead to a rebalancing of power and responsibility in evidence synthesis, since evidence users would be able to make decisions that are typically made by researchers (Table 2). Protocols for evidence synthesis by researchers are well developed (e.g. [20]), but protocols for evidence use by decision makers may need to be developed. Researchers who reanalyse existing datasets already need to take extra steps to avoid conflicts of interest and other perverse incentives [39]. However, these steps may become even more important as data is reanalysed not by researchers but by policy makers or other evidence users, especially if they have political agendas or other conflicts of interest that might bias their conclusions.
Table 2 Some comparisons between static and dynamic meta-analysis. In dynamic meta-analysis, many decisions are made by users, not researchers. However, these decisions are informed by researchers, who provide the metadata on which the decisions are based. In a static meta-analysis, most decisions are made by researchers. However, these decisions are often informed by users, who are often consulted when the protocol for a meta-analysis is being developed. Thus, both researchers and users can be involved in both static and dynamic meta-analysis, but only in dynamic meta-analysis can users interact with the methods and results
For example, if a user does multiple analyses, selecting and deselecting different filters, then it will be difficult to interpret the statistical significance of their results, because of the multiple hypothesis tests that this involves (the problem of "data dredging") [40]. Furthermore, if a user does multiple analyses, and selects only one of these analyses as the basis for their decision (perhaps because it supports their political agenda), then it will be difficult to defend the credibility of their conclusions (the problem of "cherry picking").
Protocols for evidence use could require dynamic meta-analyses to be predefined (e.g. predefining the filters that would be selected), and users could be restricted to a limited number of analyses. However, our objective here is only to show how dynamic meta-analysis could be used, as a proof-of-concept, and not how it should be used. Protocols for evidence use would need to be developed together with stakeholders, and it is also possible that different protocols could be developed for different purposes (e.g. data exploration vs decision making). Developing these protocols is beyond the scope of our work here, as is developing standardised classification systems for metadata (see below). Even with these protocols, it will undoubtedly be possible to misuse the data in a dynamic meta-analysis. However, the alternatives—not providing tools for dynamic meta-analysis, or not providing protocols for evidence use—could possibly be worse (e.g. if it means that evidence is not used at all, because it is not perceived to be relevant to local decisions) and would seem to be a missed opportunity.
Standardised classification systems for metadata
Dynamic meta-analysis is limited by the quantity and quality of data and metadata that are available for each study. It has often been suggested that standards of data reporting need to be improved (e.g. [41]), but here we suggest that standards of metadata reporting also need to be improved, and standardised systems for classifying metadata need to be developed for use in evidence synthesis. For Metadataset, we developed hierarchical classification systems for interventions and outcomes, and we will refine these systems as we review new studies. Standardised classification systems for other forms of metadata (e.g. terrestrial ecoregions [42]) will either need to be adopted or developed (e.g. as an extension of Ecological Metadata Language [43]). If a unified system could be developed for classifying all of the interventions, outcomes, and other metadata within a discipline, then the evidence from multiple subject-wide evidence syntheses could be integrated into a single discipline-wide database with interoperable data and metadata (cf. [36]). This should not be seen as a precondition for dynamic meta-analysis, but it could be a vision for the future.
The future of dynamic meta-analysis
There are several challenges that will need to be met before dynamic meta-analysis can be scaled up and used more widely. Metadataset is a proof-of-concept for the software that could be used for dynamic meta-analysis, and it is open-source software, but it would need to be further developed before researchers could easily publish dynamic versions of their own meta-analyses, and before these analyses could easily be used by decision makers. However, Metadataset was designed for the possibility of hosting other meta-analyses in other subject areas, and it may be possible for other researchers to use it in the future (indeed, it is already being used for meta-analyses in two different subject areas with two different sets of metadata). We would welcome collaborations with other researchers and software developers to improve this proof-of-concept and/or to develop alternative software packages for dynamic meta-analysis. We foresee two types of challenge in further developing the concept of dynamic meta-analysis: technical challenges and philosophical challenges.
Among the technical challenges, the software for dynamic meta-analysis will need to handle larger datasets and larger numbers of users than our proof-of-concept can handle. This software will also need to be better tested with users (both researchers and decision makers), to improve the user experience. For example, different versions of the software could be developed for different types of user (e.g. researchers with experience of meta-analysis vs decision makers without any experience of data analysis). The software will also need to provide other analytical options. For example, Metadataset calculates the log response ratio, but many researchers may want other measures of effect size (e.g. the standardised mean difference) and other options for data processing (e.g. other methods of imputing missing data).
Among the philosophical challenges, standardised classification systems for metadata will need to be developed, and so will protocols for evidence use (see above). Furthermore, the role of the evidence user will need to be more carefully considered. For example, we cannot easily imagine that farmers or ministers of agriculture would directly interact with a dynamic meta-analysis of cover crops, but we can more easily imagine that government aides or agricultural researchers would do so. Different types of user are likely to have different views of the evidence, and how it should be explored and presented, and this may mean that different approaches to dynamic meta-analysis are needed for different types of user.
Nature is infinitely variable, and in many disciplines, it is simply not possible to make generalisations that are universally applicable and transferable. But neither is it possible to be infinitely patient in waiting for locally relevant evidence to be co-produced for every decision. If decisions need to be made quickly and efficiently, they may need to be based on the co-assessment of existing evidence, rather than the co-production of new evidence [10]. Here we have defined dynamic meta-analysis as a method that can be used for the co-assessment of existing evidence. We have also shown how this method could be used to reach new conclusions from existing evidence, with the example of Metadataset.
The Metadataset website (www.metadataset.com) is built on two separate web frameworks: (1) the Django framework for Python (www.djangoproject.com), and (2) the Shiny framework for R (https://shiny.rstudio.com). Using the Django app, researchers can screen publications for inclusion in evidence maps and can tag these publications with interventions, outcomes, and other metadata. They can then enter the data that will be used for dynamic meta-analysis (e.g. the mean values for treatment groups and control groups, standard deviations, numbers of replicates, and P values), and they can write paragraphs that summarise each study. Users can browse this evidence by intervention, outcome, or country, to find relevant publications and/or datasets. They can then click a link to the Shiny app, to interact with their selected datasets using dynamic meta-analysis. The code is open source (Django app: https://github.com/gormshackelford/metadataset; Shiny app: https://github.com/gormshackelford/metadataset-shiny), and the data are open access (the data can be downloaded in CSV files via the Shiny app). Metadataset was developed as part of Conservation Evidence (www.conservationevidence.com) and BioRISC (the Biosecurity Research Initiative at St Catharine's College, Cambridge; www.biorisc.com).
Methods for dynamic meta-analysis on Metadataset
The Shiny app uses the methods from Shackelford et al. [25] to calculate the mean effect size of an intervention as the log response ratio. The response ratio is the numerical value of an outcome, measured with the intervention, divided by the numerical value of an outcome, measured without the intervention. The natural logarithm of the response ratio (the log response ratio) is typically used for meta-analysis [44]. Using the rma.mv function from the metafor package in R [45], the Shiny app fits a mixed-effects meta-analysis that accounts for non-independence of data points (for example, multiple data points within one study, within one publication) by using random effects (e.g. "random ~ 1 | publication/study" in the rma.mv function in metafor). Users can select, deselect, and/or adjust settings for missing or poorly reported data. For example, there are settings for imputing the variance of studies with missing variances (using the mean variance), approximating the variance of studies with missing variances (based on their P values; see Shackelford et al. [25]), and excluding outliers.
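As an illustrative sketch of these quantities (written in Python for brevity, although Metadataset itself uses R and metafor; the numbers below are hypothetical), the log response ratio and its standard large-sample variance (Hedges et al. [44]) can be computed as:

```python
import math

def log_response_ratio(mean_t, mean_c):
    """L = ln(treatment mean / control mean)."""
    return math.log(mean_t / mean_c)

def lrr_variance(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Large-sample variance of L (Hedges et al. 1999, ref [44])."""
    return sd_t ** 2 / (n_t * mean_t ** 2) + sd_c ** 2 / (n_c * mean_c ** 2)

# Hypothetical numbers: yield of 5.5 with cover crops vs 5.0 without.
L = log_response_ratio(5.5, 5.0)              # ln(1.1), about 0.095
v = lrr_variance(5.5, 1.2, 10, 5.0, 1.0, 10)  # variance of L
```

A positive L means the outcome was higher with the intervention than without it.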
Users can filter the data (e.g. they can select "Brassica" from the filter for "Cover crop type"), and then they can use subgroup analysis and/or meta-regression to recalculate the results. They can view forest plots and funnel plots of their filtered data and read the paragraphs that summarise the studies that are included in their analyses. They can also assign a weight to each study, based on its relevance to their decision-making context. It has been suggested that a ratio of 5:4 (one "deciban") is the smallest difference in the weight of evidence that is perceptible to humans [46]. Therefore, we allow users to assign weights on a scale from 0 to 1, in increments of 0.1, without allowing weights that are overly precise and beyond human perception (e.g. a ratio of 1:0.99).
After selecting filters and doing a subgroup analysis, with or without recalibration, users can also do a meta-regression. The Shiny app fits a model in metafor, as before, but with all of the selected filters and all of their two-way interactions as moderators. For example, if the user selects a filter for "Country" and a filter for "Cover crop type", then we fit a metafor model with "mods = ~ Country + Cover.crop.type + Country:Cover.crop.type". We then use the MuMIn package in R [47] to fit all possible combinations of these moderators (e.g. a model without the two-way interaction term, or a model without any moderators). We then use the "best" model (with the lowest AICc score) to get the model predictions for the filters that the user selected (e.g. the results for brassicas in the USA; please see Additional file 2 for an example). We show these results to the user, together with the results from the subgroup analysis for the same filters. If one or more of the filters were not included in the meta-regression model, then we show a warning.
The Shiny app uses the methods from Shackelford et al. [25] to calculate the log response ratio and its variance. We will not repeat these methods here, since Shackelford et al. [25] is open access. However, we will review the assumptions that underpin these methods, and we will show how these assumptions can be changed, by selecting, deselecting, and/or adjusting the settings in the Shiny app.
Standard methods of calculating the variance of the log response ratio require standard errors and sample sizes, which are often unreported in research publications. It has been suggested that it is better to approximate or impute the missing data in a meta-analysis than it is to exclude the publications with missing data [48]. Sensitivity analysis can be used to test the assumptions that are used for approximation or imputation, as shown by Shackelford et al. [25]. If standard errors (or standard deviations) and sample sizes are unavailable, the Shiny app approximates the variance (v) of the log response ratio (L) using the Z value (often calculated from the P value), by using this equation:
$$ |L| - Z\sqrt{v} = 0 $$
In other words, it uses the equation for the confidence interval, CI = L ± (Z * √v) [44], to set the lower or upper bound of the (1 – P) * 100% confidence interval to zero, and then it calculates v from this equation. In the dynamic meta-analysis of cover crops, this option enabled us to include many additional publications. However, this option can be deselected using the checkbox for "Approximate the variance of the log response ratio using its (assumed) p-value or z-value".
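This approximation can be sketched numerically (illustrative Python, not Metadataset's actual R code): convert the two-sided P value to a Z value with the inverse normal CDF, then solve the equation above for v.

```python
import math
from statistics import NormalDist

def z_from_p(p):
    """Convert a two-sided P value to a Z value."""
    return NormalDist().inv_cdf(1 - p / 2)

def variance_from_z(L, z):
    """Solve |L| - Z * sqrt(v) = 0 for v."""
    return (abs(L) / z) ** 2

z = z_from_p(0.05)            # about 1.96
v = variance_from_z(-0.3, z)  # approximated variance of L
```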
If exact P values are unavailable, because they were reported as "significant" or "non-significant" (e.g. P < 0.05 or P > 0.05), then the Shiny app assumes an exact value of 0.025 for "significant" results and 0.525 for "non-significant" results. However, these default values can be adjusted via sliders in the Shiny app (and these values will not be used at all if the checkbox is deselected). If P values are unavailable as well, variance can be imputed using the mean variance of all other included studies, but this can also be deselected using the checkbox for "Impute the variance for rows without variance (using the mean variance)". If selected, the mean variance is calculated using a linear model with the same random effects as the meta-analysis, using the lme package in R.
The random effects in all models are specified as "random = ~ 1 | publication/study" (i.e. study is nested within publication). The "study" variable is dynamically generated by concatenating "study_ID" (which is statically defined by researchers in the Django app) and any other filter variables (which are dynamically selected by users in the Shiny app). For example, Shackelford et al. [25] defined "studies" as experiments with different species of cash crops and/or cover crops, even if these experiments were reported in the same publication. Thus, the user could select "Cash crop" and "Cover crop" in the "Random effects" section of the Shiny app. The Shiny app would then dynamically generate a new variable by pasting together "study_ID", "Cash.crop" and "Cover.crop" (e.g. "Study ID 1 Maize Rye") and then use this new variable as the "study" in the formula for random effects.
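The concatenation step can be sketched as follows (Python for illustration; in the app this is done in R on the selected data):

```python
def study_variable(row, selected_filters):
    """Concatenate study_ID with the user's selected filter columns."""
    return " ".join(str(row[k]) for k in ["study_ID"] + selected_filters)

row = {"study_ID": "Study ID 1", "Cash.crop": "Maize", "Cover.crop": "Rye"}
study_variable(row, ["Cash.crop", "Cover.crop"])  # "Study ID 1 Maize Rye"
```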
Studies with exceptionally high variance (outliers) can be defined in terms of deviations from the median variance (median absolute deviation or MAD [49]), and there is a slider for this in the Shiny app. Outliers can be excluded from the analysis, and there is a checkbox for this. The default setting is to exclude outliers, but the default threshold for defining outliers is relatively inclusive (10 deviations from the median variance). Excluding outliers can sometimes solve problems with convergence failures in the metafor model, which would otherwise show as error messages, and this relatively inclusive threshold for excluding outliers seems to be a useful default setting.
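The MAD-based rule can be sketched as follows (illustrative Python; the threshold of 10 matches the app's default):

```python
from statistics import median

def is_outlier(v, variances, n_mads=10):
    """Flag a variance more than n_mads median absolute deviations
    above the median variance (default threshold: 10)."""
    med = median(variances)
    mad = median([abs(x - med) for x in variances])
    return v > med + n_mads * mad

variances = [1, 1, 2, 2, 3, 100]
[v for v in variances if is_outlier(v, variances)]  # [100]
```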
We think these default settings represent reasonable assumptions, but these settings can be selected, deselected, and/or adjusted, and sensitivity analysis can be used to test the effects of these assumptions. If users need more control than this, then they can download and analyse the data themselves using R or other software packages.
The standard method in meta-analysis is to weight each study by the inverse of its variance, so that studies with smaller variances have larger weights. To weight each study not only by the inverse of its variance, but also by its relevance (assigned by the user), we specify a weight matrix, W, using this equation:
$$ W={C}^{1/2}{M}^{-1}{C}^{1/2} $$
C is a diagonal matrix of relevance weights (one weight for each study, assigned by the user, with a default weight of 1), and M is the default variance-covariance matrix in metafor (please see Additional file 2 for an example). The default weight matrix in metafor is the inverse of M, and here we multiply it by the square root of C, our relevance matrix, twice (effectively multiplying the inverse of M by C, but maintaining a symmetrical weight matrix). With a relevance weight of 1 for each study (the default setting), this has no effect on the weight matrix, and thus it is also possible for users to fit a model with inverse-variance weights. However, with a relevance weight of less than 1, a study has less effect on the mean effect size. We use this method as an example of recalibration, in the sense of Kneale et al. [19]. Kneale et al. [19] provided an example of weighting studies in a meta-analysis, based on the similarity of these studies to different decision contexts, but they noted their method was provisional. Our method of modifying the weight matrix is also provisional. However, we think it is useful as an example of recalibration. Similar methods for using study-quality weights have been implemented in other meta-analyses, but it has been suggested that these methods also need further research [50].
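For the special case of independent studies, where M is diagonal, W = C^(1/2) M^(-1) C^(1/2) reduces to per-study weights w_i = c_i / v_i. A minimal numerical sketch (Python, illustrative only; the actual implementation passes W to metafor's rma.mv in R):

```python
def relevance_weights(variances, relevance):
    """Diagonal case of W = C^(1/2) M^(-1) C^(1/2): w_i = c_i / v_i.
    A relevance of 1 reproduces plain inverse-variance weights."""
    return [c / v for c, v in zip(relevance, variances)]

def weighted_mean(effects, weights):
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

w_plain = relevance_weights([0.5, 0.25], [1.0, 1.0])  # [2.0, 4.0]
w_down = relevance_weights([0.5, 0.25], [1.0, 0.5])   # [2.0, 2.0]
```

Down-weighting the second study (relevance 0.5) halves its influence on the weighted mean effect size.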
Value judgements
If a dynamic meta-analysis is done at a high level in the hierarchy of outcomes (e.g. soil), then it may include multiple low-level outcomes (e.g. soil organic matter, soil nitrate leaching, and soil water content), and therefore, the user may need to decide whether it is better for the intervention to cause an increase or a decrease in each outcome. Without doing this, the overall effect size will not be meaningful across multiple outcomes. There are settings for this in the Shiny app (on the "Value judgements" tab). For example, the user could decide that an increase in soil organic matter and soil water content, but a decrease in soil nitrate leaching, would be good outcomes in their context. The user would then select "decrease is better" for soil nitrate leaching. The Shiny app would then invert the response ratio for that outcome, so that a positive effect size would represent a good outcome across all outcomes.
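Inverting the response ratio is equivalent to negating its logarithm, so the harmonisation step can be sketched as (illustrative Python; the function name is hypothetical):

```python
def harmonise_direction(L, decrease_is_better):
    """Negate the log response ratio when a decrease is the good outcome,
    so that positive effect sizes mean 'good' across all outcomes."""
    return -L if decrease_is_better else L

# Hypothetical: nitrate leaching fell (L < 0), and less leaching is better.
harmonise_direction(-0.2, decrease_is_better=True)   # 0.2, a good outcome
```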
All data generated or analysed during this study are included in this published article [and its additional files] and available in the Zenodo repository (https://doi.org/10.5281/zenodo.3832211).
Gurevitch J, Koricheva J, Nakagawa S, Stewart G. Meta-analysis and the science of research synthesis. Nature. 2018;555:175–82.
Steward PR, Dougill AJ, Thierfelder C, Pittelkow CM, Stringer LC, Kudzala M, et al. The adaptive capacity of maize-based conservation agriculture systems to climate stress in tropical and subtropical environments: a meta-regression of yields. Agric Ecosyst Environ. 2018;251:194–202.
Wang S, Moss JR, Hiller JE. Applicability and transferability of interventions in evidence-based public health. Health Promot Int. 2005;21:76–83.
Burford B, Lewin S, Welch V, Rehfuess E, Waters E. Assessing the applicability of findings in systematic reviews of complex interventions can enhance the utility of reviews for decision making. J Clin Epidemiol. 2013;66:1251–61.
Avellar SA, Thomas J, Kleinman R, Sama-Miller E, Woodruff SE, Coughlin R, et al. External validity: the next step for systematic reviews? Eval Rev. 2016;41:283–325.
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. Chichester: Wiley; 2009.
Christie AP, Amano T, Martin PA, Petrovan SO, Shackelford GE, Simmons BI, et al. Poor availability of context-specific evidence hampers decision-making in conservation. Biol Conserv. 2020;248:108666.
Innvær S, Vist G, Trommald M, Oxman A. Health policy-makers' perceptions of their use of evidence: a systematic review. J Health Serv Res Policy. 2002;7:239–44.
Cook CN, Possingham HP, Fuller RA. Contribution of systematic reviews to management decisions. Conserv Biol. 2013;27:902–15.
Sutherland WJ, Shackelford G, Rose DC. Collaborating with communities: co-production or co-assessment? Oryx. 2017;51:569–70.
Burchett H, Umoquit M, Dobrow M. How do we know when research from one setting can be useful in another? A review of external validity, applicability and transferability frameworks. J Health Serv Res Policy. 2011;16:238–44.
McKinnon MC, Cheng SH, Garside R, Masuda YJ, Miller DC. Sustainability: map the evidence. Nature News. 2015;528:185.
Shackelford GE, Kelsey R, Sutherland WJ, Kennedy CM, Wood SA, Gennet S, et al. Evidence synthesis as the basis for decision analysis: a method of selecting the best agricultural practices for multiple ecosystem services. Front Sustainable Food Syst. 2019;3:83.
Garamszegi LZ, Nunn CL, McCabe CM. Informatics approaches to develop dynamic meta-analyses. Evol Ecol. 2012;26:1275–6.
Maki A, Cohen MA, Vandenbergh MP. Using meta-analysis in the social sciences to improve environmental policy. In: Leal Filho W, Marans RW, Callewaert J, editors. Handbook of sustainability and social science research. Cham: Springer International Publishing; 2018. p. 27–43. https://doi.org/10.1007/978-3-319-67122-2_2.
Becker AS, Kirchner J, Sartoretti T, Ghafoor S, Woo S, Suh CH, et al. Interactive, up-to-date meta-analysis of MRI in the management of men with suspected prostate cancer. J Digit Imaging. 2020. https://doi.org/10.1007/s10278-019-00312-1.
Elliott JH, Turner T, Clavisi O, Thomas J, Higgins JPT, Mavergames C, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med. 2014;11:e1001603.
Bergmann C, Tsuji S, Piccinini PE, Lewis ML, Braginsky M, Frank MC, et al. Promoting replicability in developmental research through meta-analyses: insights from language acquisition research. Child Dev. 2018;89:1996–2009.
Kneale D, Thomas J, O'Mara-Eves A, Wiggins R. How can additional secondary data analysis of observational data enhance the generalisability of meta-analytic evidence for local public health decision making? Res Synth Methods. 2019;10:44–56.
CEE. Guidelines and Standards for Evidence Synthesis in Environmental Management. Version 5.0. 2018. www.environmentalevidence.org/information-for-authors. Accessed 5 Mar 2019.
McGill E, Egan M, Petticrew M, Mountford L, Milton S, Whitehead M, et al. Trading quality for relevance: non-health decision-makers' use of evidence on the social determinants of health. BMJ Open. 2015;5:e007053.
Slavin RE. Best-evidence synthesis: an alternative to meta-analytic and traditional reviews. Educ Res. 1986;15:5–11.
Burivalova Z, Miteva D, Salafsky N, Butler RA, Wilcove DS. Evidence types and trends in tropical forest conservation literature. Trends Ecol Evol. 2019;34:669–79.
Metadataset. https://www.metadataset.com. Accessed 20 May 2020.
Shackelford GE, Kelsey R, Dicks LV. Effects of cover crops on multiple ecosystem services: ten meta-analyses of data from arable farmland in California and the Mediterranean. Land Use Policy. 2019;88:104204.
Shackelford GE, Haddaway NR, Usieta HO, Pypers P, Petrovan SO, Sutherland WJ. Cassava farming practices and their agricultural and environmental impacts: a systematic map protocol. Environ Evid. 2018;7:30.
Martin PA, Shackelford GE, Bullock JM, Gallardo B, Aldridge DC, Sutherland WJ. Management of UK priority invasive alien plants: a systematic review protocol. Environ Evid. 2020;9:1.
Metadataset: Cover crops. https://www.metadataset.com/subject/cover-crops/. Accessed 20 May 2020.
Gregory R, Failing L, Harstone M, Long G, McDaniels T, Ohlson D. Structured decision making: a practical guide to environmental management choices. Chichester: Wiley; 2012. https://doi.org/10.1002/9781444398557
Tsuji S, Bergmann C, Cristia A. Community-augmented meta-analyses: toward cumulative data assessment. Perspect Psychol Sci. 2014;9:661–5.
MetaLab. http://metalab.stanford.edu. Accessed 20 May 2020.
Bujkiewicz S, Jones HE, Lai MCW, Cooper NJ, Hawkins N, Squires H, et al. Development of a transparent interactive decision interrogator to facilitate the decision-making process in health care. Value Health. 2011;14:768–76.
IU-MA. http://www.iu-ma.org. Accessed 20 May 2020.
Sharpe D. Of apples and oranges, file drawers and garbage: why validity issues in meta-analysis will not go away. Clin Psychol Rev. 1997;17:881–901.
Sutherland WJ, Wordley CFR. A fresh approach to evidence synthesis. Nature. 2018;558:364.
Sutherland W, Taylor N, MacFarlane D, Amano T, Christie A, Dicks L, et al. Building a tool to overcome barriers in research-implementation spaces: the conservation evidence database. Biol Conserv. 2019;238. https://doi.org/10.1016/j.biocon.2019.108199.
Shackelford GE, Kelsey R, Robertson RJ, Williams DR, Dicks LV. Sustainable agriculture in California and Mediterranean climates: evidence for the effects of selected interventions. Cambridge: University of Cambridge; 2017. www.conservationevidence.com
Conservation Evidence. http://www.conservationevidence.com. Accessed 20 May 2020.
Christakis DA, Zimmerman FJ. Rethinking reanalysis. JAMA. 2013;310:2499–500.
Szucs D. A tutorial on hunting statistical significance by chasing N. Front Psychol. 2016;7:1444.
Gurevitch J, Hedges LV. Statistical issues in ecological meta-analyses. Ecology. 1999;80:1142–9.
Olson DM, Dinerstein E, Wikramanayake ED, Burgess ND, Powell GVN, Underwood EC, et al. Terrestrial ecoregions of the world: a new map of life on earth. BioScience. 2001;51:933–8.
Michener WK, Brunt JW, Helly JJ, Kirchner TB, Stafford SG. Nongeospatial metadata for the ecological sciences. Ecol Appl. 1997;7:330–42.
Hedges LV, Gurevitch J, Curtis PS. The meta-analysis of response ratios in experimental ecology. Ecology. 1999;80:1150–6.
Viechtbauer W. Conducting meta-analyses in R with the metafor package. J Stat Softw. 2010;36:1–48.
Good IJ. Weight of evidence: a brief survey. Bayesian Stat. 1985;2:249–70.
Bartoń K. MuMIn: multi-model inference. 2009. https://cran.r-project.org/web/packages/MuMIn/index.html.
Lajeunesse MJ. Recovering missing or partial data from studies: a survey of conversions and imputations for meta-analysis. In: Koricheva J, Gurevitch J, Mengersen K, editors. Handbook of meta-analysis in ecology and evolution. Princeton: Princeton University Press; 2013. p. 195–206.
Leys C, Ley C, Klein O, Bernard P, Licata L. Detecting outliers: do not use standard deviation around the mean, use absolute deviation around the median. J Exp Soc Psychol. 2013;49:764–6.
Stone JC, Glass K, Munn Z, Tugwell P, Doi SAR. Comparison of bias adjustment methods in meta-analysis suggests that quality effects modeling may have less limitations than other approaches. J Clin Epidemiol. 2020;117:36–45.
We thank our funders. AC was supported by the Natural Environment Research Council as part of the Cambridge Earth System Science NERC DTP [NE/L002507/1].
The David and Claudia Harding Foundation, the A. G. Leventis Foundation, and Arcadia
Department of Zoology, University of Cambridge, Cambridge, CB2 3QZ, UK
Gorm E. Shackelford, Philip A. Martin, Amelia S. C. Hood, Alec P. Christie & William J. Sutherland
BioRISC (Biosecurity Research Initiative at St Catharine's), St Catharine's College, Cambridge, CB2 1RL, UK
Gorm E. Shackelford, Philip A. Martin & William J. Sutherland
School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
Elena Kulinskaya
GS wrote the manuscript. All authors revised the manuscript. GS, AC, and EK selected the statistical methods for use on Metadataset. GS wrote the code for Metadataset and created the video tutorial. GS, PM, and AH extracted data from publications and entered data into the Metadataset database. The authors read and approved the final manuscript.
Correspondence to Gorm E. Shackelford.
Examples of methods in R.R.
Data for examples of methods in R.csv.
Shackelford, G.E., Martin, P.A., Hood, A.S.C. et al. Dynamic meta-analysis: a method of using global evidence for local decision making. BMC Biol 19, 33 (2021). https://doi.org/10.1186/s12915-021-00974-w
Received: 27 May 2020
External validity
Generalisability
Subject-wide evidence synthesis
Transferability
Research Synthesis and Meta-research in Biology | CommonCrawl |
Why valuations when defining FOL?
Why does one need valuations in order to define the semantics of first-order logic? Why not just define it for sentences, and also define formula substitution (in the expected way)? That should be enough:
$$M \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M \models \phi[x\mapsto d]$$
instead of the standard valuation-based clause:

$$M,v \models \forall x. \phi \iff \text{for all }d\in \mathrm{dom}(M),\ M, v[x\mapsto d] \models \phi$$
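To make the valuation-based clause concrete, here is a toy sketch (hypothetical names, not from the question) over a finite model, with formulas as nested tuples:

```python
# Formulas as nested tuples: ("le", s, t) and ("forall", x, psi).
DOMAIN = {0, 1, 2, 3}

def holds(phi, v):
    """M, v |= phi, where v maps variable names to domain elements."""
    op = phi[0]
    if op == "le":                      # atomic: s <= t
        _, s, t = phi
        return v[s] <= v[t]
    if op == "forall":                  # M, v |= forall x. psi  <=>
        _, x, psi = phi                 # for all d: M, v[x -> d] |= psi
        return all(holds(psi, {**v, x: d}) for d in DOMAIN)
    raise ValueError(op)

phi = ("forall", "x", ("le", "x", "y"))
holds(phi, {"y": 3})   # True: 3 is the top element of the domain
holds(phi, {"y": 2})   # False
```

The substitution-based clause would instead require a constant symbol for each element of the domain, which is exactly the issue discussed in the answers below.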
lo.logic
Emil Jeřábek
It is perfectly possible to define satisfaction using just sentences as you suggest, and in fact, it used to be the standard approach for quite some time.
The drawback of this method is that it requires mixing semantic objects into syntax: in order to make an inductive definition of satisfaction of sentences in a model $M$, it is not sufficient to define it for sentences of the original language of $M$. You need to first expand the language with individual constants for all elements of the domain of $M$, and then you can define satisfaction for sentences in the expanded language. This is, I believe, the main reason why this approach went into disuse; if you use valuations, you can maintain a clear conceptual distinction between syntactic formulas of the original language and semantic entities that are used to model them.
Emil Jeřábek
I think it depends somewhat on whether the author is approaching things from a proof theory side or a model theory side. In the case of proof theory, the original language is of interest for studying provability of sentences, but in the case of model theory the expanded language is more useful for studying definability. So for example Marker's model theory book defines satisfaction via the extended language, but Enderton's intro logic book uses valuations. – Carl Mummert May 3 '12 at 21:50
The meaning of a closed formula is a truth value $\bot$ or $\top$. The meaning of a formula containing a free variable $x$ ranging over a set $A$ is a function from $A$ to truth values. Functions $A \to \lbrace \bot, \top \rbrace$ form a complete Boolean algebra, so we can interpret first-order logic in it.
Similarly, a closed term $t$ denotes an element of some domain $D$, while a term with a free variable denotes a function $D \to D$ because the element depends on the value of the variable.
It is therefore natural to interpret a formula $\phi(x_1, \ldots, x_n)$ with free variables $x_1, \ldots, x_n$ in the complete Boolean algebra $D^n \to \lbrace \bot, \top \rbrace$ where $D$ is the domain over which the variables range. Whether you phrase the interpretation in this complete Boolean algebra in terms of valuations or otherwise is a technical matter.
Mathematicians seem to be generally confused about free variables. They think they are implicitly universally quantified or some such. The cause of this is a meta-theorem stating that $\phi(x)$ is provable if and only if its universal closure $\forall x . \phi(x)$ is provable. But there is more to formulas than their provability. For example, $\phi(x)$ is not generally equivalent to $\forall x . \phi(x)$, so we certainly cannot pretend that these two formulas are interchangeable.
- formulas with free variables are unavoidable, at least in the usual first-order logic,
- the meaning of a formula with a free variable is a truth function,
- therefore in semantics we are forced to consider complete Boolean algebras $D^n \to \lbrace\bot, \top\rbrace$, which is where valuations come from,
- the universal closure of a formula is not equivalent to the original formula,
- it is a mistake to equate the meaning of a formula with the meaning of its universal closure, just as it is a mistake to equate a function with its codomain.
Andrej Bauer
Cool. Clear and simple answer! I wonder what the logicians have to say about this? – Uday Reddy May 6 '12 at 12:29
I am one of "the logicians", it's written on my certificate of PhD. – Andrej Bauer May 6 '12 at 16:39
Simply because it's more natural to say "$x > 2$ is true when $x$ is $\pi$" (that is, on a valuation which sends $x$ to $\pi$) than "$x > 2$ is true when we substitute $\pi$ (the number itself, not the Greek letter) for $x$". Technically the approaches are equivalent.
Alexey Romanov
I want to strengthen Alexey's answer, and claim that the reason is that the first definition suffers from technical difficulties, and not just that the second (standard) way is more natural.
Alexey's point is that the first approach, i.e.:
$M \models \forall x . \phi \iff$ for all $d \in M$: $M \models \phi[x\mapsto d]$
mixes syntax and semantics.
For example, let's take Alexey's example:
${(0,\infty)} \models x > 2$
Then in order to show that, one of the things we have to show is: $(0,\infty) \models \pi > 2$
The entity $\pi > 2$ is not a formula, unless our language includes the symbol $\pi$, that is interpreted in the model $M$ as the mathematical constant $\pi \approx 3.141\ldots$.
A more extreme case would be to show that $M\models\sqrt[15]{15,000,000} > 2$, and again, the right hand side is a valid formula only if our language contains a binary radical symbol $\sqrt{}$, that is interpreted as the radical, and number constants $15$ and $15,000,000$.
To ram the point home, consider what happens when the model we present has a more complicated structure. For example, instead of taking real numbers, take Dedekind cuts (a particular implementation of the real numbers).
Then the elements of your model are not just "numbers". They are pairs of sets of rational numbers $(A,B)$ that form a Dedekind cut.
Now, look at the object $(\{q \in \mathbb Q \mid q < 0 \vee q^2 < 5\}, \{q \in \mathbb Q \mid 0 \leq q \wedge q^2 > 5\}) > 2$ (which is what we get when we "substitute" the Dedekind cut describing $\sqrt{5}$ into the formula $x > 2$). What is this object? It's not a formula --- it has sets, and pairs, and who knows what in it. It's potentially infinite.
So in order for this approach to work well, you need to extend your notion of "formula" to include such mixed entities of semantic and syntactic objects. Then you need to define operations such as substitutions on them. But now substitutions would no longer be syntactic functions: $[ x \mapsto t]: Terms \to Terms$. They would be operations on very very large collections of these generalised, semantically mixed terms.
It's possible you will be able to overcome these technicalities, but I guess you will have to work very hard.
The standard approach keeps the distinction between syntax and semantics. What we change is the valuation, a semantic entity, and keep formulae syntactic.
Ohad Kammar
The key point to the first approach is that given a model $M$ in a language $L$ you first expand to a language $L(M)$ in which there is a new constant symbol for every element in $M$. Then you can just substitute these constant symbols into formulas in the usual way. There are no actual technical difficulties. – Carl Mummert May 3 '12 at 21:45
Power Series And Taylor Series
What is a power series? A power series is a series of the form $\sum_{n=0}^{\infty} c_n (x-a)^n$. The problem with the approach in that section is that everything came down to needing to be able to relate the function in some way to $\frac{1}{1-x}$. Let $f(x) = \sum_{n=0}^{\infty} c_n (x-a)^n = c_0 + c_1 (x-a) + c_2 (x-a)^2 + \cdots$ (Definition 2). This of course is just a power series shifted over by $c$ units. Properties of power series: the Taylor series of a sum is the sum of the Taylor series of the summands. What is the interval of convergence for this series? Answer: the Maclaurin series for $e^x$ is $1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$, and its interval of convergence is all real numbers. In the Summer of 1994, the author developed computer activities intended to provide an intuitive interpretation of some of the fundamental notions involved in studying infinite series and Taylor polynomials. Note: in Problem 52, there is a mistake in the directions. This Calculus 2 tutorial explains how to find the Taylor series and the Maclaurin series of a function using a simple formula. A Taylor series centered at zero has a special name: there is a special kind of Taylor series called a Maclaurin series. For further details, see the class handout on the inverse. Power series in the past played a minor role in the numerical solutions of ordinary and partial differential equations.
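The Maclaurin series for $e^x$ mentioned above can be checked numerically with partial sums (an illustrative Python sketch, not from the source notes):

```python
import math

def maclaurin_exp(x, n_terms):
    """Partial sum of e^x = sum_{n>=0} x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(n_terms))

maclaurin_exp(1.0, 20)   # close to math.e, since the series converges fast
```

Because the series converges for every real x, the partial sums approach $e^x$ for any fixed x as more terms are added.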
We will derive a power series that will converge to the function. A Taylor series is a special power series that provides an alternative and easy-to-manipulate way of representing well-known functions. This example generalizes as follows. Choose the maximum degree of the Taylor polynomial to use to approximate a function. To find the Maclaurin series, simply set the centre point to zero. Convergence of infinite series in general, and Taylor series in particular: a p-series takes the form $\sum_{n=1}^{\infty} \frac{1}{n^p}$; a common example, when $p = 2$, is $\sum_{n=1}^{\infty} \frac{1}{n^2}$. Remember not to confuse p-series with geometric series. The last section (15.5), on uniform convergence, is optional. This is due to the uniqueness of the Taylor series of a function centered at a point. Let's prove a lemma to deal with that last point. If a function is equal to its Taylor series locally, it is said to be an analytic function, and it has a lot of interesting properties. (ii) Using (i) or otherwise, find the Taylor series expansions of $\frac{1}{(1-x)^2}$ and $\frac{1}{(1-x)^3}$ about $a = 0$, stating carefully any theorems you may use about integrating or differentiating power series within their radius of convergence. Use a known Maclaurin series to obtain the Maclaurin series for the function $f(x) = \cos(\pi x)$.
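The substitution technique asked for above (obtaining the Maclaurin series of cos(πx) from the known cosine series) can be checked numerically; this is an illustrative Python sketch under that assumption:

```python
import math

def maclaurin_cos_pi_x(x, n_terms):
    """cos(pi*x) via substituting u = pi*x into the cosine series:
    sum_{n>=0} (-1)^n u^(2n) / (2n)!."""
    u = math.pi * x
    return sum((-1) ** n * u ** (2 * n) / math.factorial(2 * n)
               for n in range(n_terms))

maclaurin_cos_pi_x(1.0, 15)   # close to cos(pi) = -1
```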
RELATION BETWEEN TAYLOR SERIES AND POWER SERIES A power series = Taylor series of its sum In other words, every time you obtain an identity X∞ n=0 a nx n = (something) then the power series on the left-hand sidemust be the Taylor series of that something on the right-hand side. Series effectively evaluates partial derivatives using D. If a= 0 the series is often called a Maclaurin Math formulas for Taylor and Maclaurin series Author:. These are the most important series of all! (Taylor, Maclaurin, etc, etc. The series P1 n=0 anx n, x 2 R, is called a power series. Recall from Chapter 8 that a power series. As the names suggest, the power series is a special type of series and it is extensively used in Numerical Analysis and related mathematical modelling. So we can write a simple generalised expression for a power series as g of x, equals a, plus bx, plus cx squared, plus dx cubed et cetera. The TaylorAnim command can handle functions that "blow-up" (go to infinity). Substituting the coefficients back into the series yields. 4 Working with Taylor Series Learn with flashcards, games, and more — for free. Taylor and Maclaurin Series Tutorial for Calculus students. Taylor and Maclaurin (Power) Series Calculator. They are distinguished by the name Maclaurin series. Maclaurin Series: If a function f can be differentiated n times, at x=0, then we define. To enhance our students' learning of the infinite series material, a computer laboratory activity devoted to the subject was created. The Taylor series above for arcsin x, arccos x and arctan x correspond to the corresponding principal values of these functions, respectively. Power series are useful because ssss sssss ssss ss sss. This series is referred to as the Taylor series of a function f(x) centered at c. + Formal manipulation of Taylor series and shortcuts to computing Taylor series, including substitution, differentiation, antidifferentiation, and the formation of new series from known series. 
This gives us a simple formulaB for the sum:" B B B â œ " " B # $ This is our first example of a Taylor series —a power series that adds up to a known function. is the Taylor series of f(z) = 1=(1 z) about z= 0. Every Taylor series is a power series in 0 0 0! k k k fx xx k is a power series in x 2 Theorem 9. Use the ratio test to show that the Taylor series centered at 0 for sin(x) converges for all real numbers. But this is good news for combinatorics. Taylor series is a special power series that provides an alternative and easy-to-manipulate way of representing well-known functions. But it converges at both end points and does so, therefore, absolutely. Use the ratio test, unless otherwiseinstructed. f The coefficients of this power series may be expressed with the Bernoulli numbers. In the figure below, we consider the graphs. Correct! This is the correct answer, found by using mathematical operations on the geometric power series. Lesson 23: Power Series Expansions. CHAPTER 38 Power Series In Problems 38. In fact, that's the brute force method of finding a series representation for a function, but there are other ways. Many times a Taylor expansion is used for approximations in solving transcendental equations such as x - ln x = 5 which cannot be solved by currently known algebraic manipulations In some cases in solving differential equations a Taylor series will actually give an exact answer that can't be readily found by any other method. If you want the Maclaurin polynomial, just set the point to `0`. (See the text, p. FUNCTIONS OF A COMPLEX VARIABLE (S1) Lecture 7 Power series expansions ⊲ Taylor series f representable by Taylor series expansion is said to be analytic. What is Power series? A power series is a series of the form. If the series uses the derivatives at zero, the series is also called a Maclaurin series, named after Scottish mathematician Colin Maclaurin (February 1698 – 14 June 1746). 
In this video, Patrick teaches how to Differentiate and Integrate Power Series to Derive New Power Series Expressions. This example generalizes as follows. Taylor and Laurent Series We think in generalities, but we live in details. The resulting series can be used to study the solution to problems for which direct calculation is di cult. Find the Taylor series for e−x2 centered at 0. However, using differentiation and integration we can expand many more functions into power series also. This is a convergent power series, but the same power series does not define an asymptotic series for exp(z). Another immediate and straightforward consequence of Theorem 2. The series P1 n=0 anx n, x 2 R, is called a power series. The binomial function Remark: If m is a positive integer, then the binomial function f m is a polynomial, therefore the Taylor series is the same polynomial, hence the Taylor series has only the first m +1 terms non-zero. + Maclaurin series and the general Taylor series centered at x = a. Power Series Solution of a Differential Equation • Approximation by Taylor Series Power Series Solution of a Differential Equation We conclude this chapter by showing how power series can be used to solve certain types of differential equations. because we take the formula for a Taylor polynomial centered at zero and let it keep on going. We have over 350 practice questions in Calculus for you to master. Learn how these polynomials work. To construct a power series solution around the point x = x o, we procede as follows: (1) Set y(x) = P 1 n=0 a n(x x o) n. The TaylorAnim command can handle functions that "blow-up" (go to infinity). And this is because they are composed of coefficients in front of increasing powers of x. Given just the series, you can quickly evaluate , , , …, and so on. Before, we only considered power series over R but now, we will consider power series over C as well. 
Free power series calculator - Find convergence interval of power series step-by-step. (1) Find the radius of convergence of (a) X1 n=1 5nxn n2 (b) For what values of xdoes X1 n=1 (2x+ 1)n n3 converge? (c) Give an example of a power series which converges for all x2( 1;1] and at no other points. ) Series can also generate some power series that involve fractional and negative powers, not directly covered by the standard Taylor series formula. We now shift from the approach of Cauchy and Goursat to another approach of evaluating complex integrals, that is, evaluating them by residue integration. Consider the one dimensional initial value problem y' = f(x, y), y(x 0) = y 0 where f is a function of two variables x and y and (x 0, y 0) is a known point on the solution curve. It is a series that is used to create an estimate (guess) of what a function looks like. In this video, Patrick teaches how to Differentiate and Integrate Power Series to Derive New Power Series Expressions. The nearer to a the value is, the more quickly the series will converge. We would like to know which x0s we can plug in to get a convergent series. The TaylorAnim command can handle functions that "blow-up" (go to infinity). Several useful Taylor series are more easily derived from the geometric series (11), (19) than from. To enhance our students' learning of the infinite series material, a computer laboratory activity devoted to the subject was created. Taylor's Series method. We begin with the general power series solution method. POWER SERIES 251 For example, sine is an analytic function and its Taylor series around x 0 = 0 is given by sin(x) = X1 n=0 (1)n (2n + 1)! x2n+1: In Figure 7. We begin with the general power series solution method. Math formulas and cheat sheet generator creator for Taylor and Maclaurin Series. The calculator will find the Taylor (or power) series expansion of the given function around the given point, with steps shown. 
Fenton School Of Mathematics University Of New South Wales Kensington, N. We prove it in order to demonstrates the Taylor series proposition above. Intervals of Convergence of Power Series. And this is because they are composed of coefficients in front of increasing powers of x. It explains how to derive power series of composite functions. What is a power series? 6. Practice Problems: Taylor and Maclaurin Series 1. Taylor's Series. Title: Taylor series of hyperbolic functions:. For example, the Taylor Series for ex is given by:. Power Series, Taylor Series In Chapter 14, we evaluated complex integrals directly by using Cauchy's integral formula, which was derived from the famous Cauchy integral theorem. I was just wondering in the lingo of Mathematics, are these two "ideas" the same? I know we have Taylor series, and their specialisation the Maclaurin series, but are power series a more general co. The series converges only for. A summary of Differentiation and Integration of Power Series in 's The Taylor Series. (Any power series whatso-ever. In 1668, the theory of power series began with the publication of the series for ln()1+x by Nicolaus Mercator, who did this by "integrating" 1 1+x (Stillwell 1989, 120). In other words, it's not a hypothesis we have to verify or check for. Leavitt Power series in the past played a minor role in the numerical solutions of ordi-nary and partial differential equations. If a= 0 the series is often called a Maclaurin Math formulas for Taylor and Maclaurin series Author:. THE BINOMIAL SERIES 375 6. If we take x0 = x¡c then the power series around c reduces to the power series around 0. infinite series in Novæ quadraturae arithmeticae in 1650, finding 1 n=1 nn()+1 ∞ ∑ along with proving the divergence of the harmonic series. Since this power must come from the source, the total power must be equal to the power consumed by the circuit resistances. Another immediate and straightforward consequence of Theorem 2. 
The basic idea is to approximate the solution with a power series of the form: (1) X1 m=0 a m(x mx 0) : As an example consider the Taylor. Created by Sal Khan. Find the Taylor series for e−x2 centered at 0. The calculator will find the Taylor (or power) series expansion of the given function around the given point, with steps shown. 10 The Binomial Series 6. Note that since is an even function, all its Taylor polynomials are also even polynomials. Incorrect! The signs are incorrect. It takes the following form: Here's a common example of a p-series, when p = 2: Here are a few other examples of p-series: Remember not to confuse p-series with geometric series. But this is good news for combinatorics. Convergence of In nite Series in General and Taylor Series in Particular E. CHAPTER 38 Power Series In Problems 38. Finding Taylor Polynomials The TI-89 taylor( command can be used to find partial sums. The partial sum is called the nth-order Taylor polynomial for f centered at a. Taylor series is a way to representat a function as a sum of terms calculated based on the function's derivative values at a given point as shown on the image below. TAYLOR AND MACLAURIN™S SERIES 359 6. Here is a set of practice problems to accompany the Taylor Series section of the Series & Sequences chapter of the notes for Paul Dawkins Calculus II course at Lamar University. In general this series will converge only for certain values of x determined by the radius of convergence of the power series (see Note 17). Indeed, the entire power series" B B B â# $ can be thought of as a geometric series with a common ratio of. f The coefficients of this power series may be expressed with the Bernoulli numbers. Power series are useful because ssss sssss ssss ss sss. (See the text, p. More generally, if c 2 R, then the series P1 n=0 an(x¡c)n, x 2 R, is called a power series around c. Note that since is an even function, all its Taylor polynomials are also even polynomials. 
Taylor and Maclaurin Series Tutorial for Calculus students. You can skip questions if you would like and come back to them. (Several of these are listed below. But it converges at both end points and does so, therefore, absolutely. Spring 03 midterm with answers. of better and better approximations to f leading to a power series expansion f(x) = X∞ n=0 f(n)(a) n! (x−a)n which is known as the Taylor series for f. In this section we will discuss a. Created by Sal Khan. The series generated by the sequences (a nzn) as z varies are called the power series generated by (a n). for any x in the series' interval of convergence. If the series uses the derivatives at zero, the series is also called a Maclaurin series, named after Scottish mathematician Colin Maclaurin (February 1698 – 14 June 1746). Taylor series is a special power series that provides an alternative and easy-to-manipulate way of representing well-known functions. Learn exactly what happened in this chapter, scene, or section of Calculus BC: Series and what it means. A power series about x = x 0 (or centered at x = x 0), or just power series, is any series that can be written in the form X1 n=0 a n(x x 0)n; where x 0 and a n are numbers. We can obtain a finite part, the first few terms, of a power series expansion of a function about a point by means of the Mathematica function Series as follows:. Example 5 Find the Maclaurin series for cos(x). Whether the power series converges at x = x0 ± ρ is tricky to determine. Title: Taylor series of hyperbolic functions:. We now shift from the approach of Cauchy and Goursat to another approach of evaluating complex integrals, that is, evaluating them by residue integration. As mentioned earlier, the function 1=(1 z) exists and is in nitely di erentiable everywhere except at z= 1 while the series P 1 n=0 z nonly exists in the unit circle jzj<1. This is the geometric power series. The following proposition is sometimes useful. 
Reviewing Taylor Series In first year calculus, you undoubtedly spent significant time studying Taylor series. A power series is a series of the form P 1 k=0 c kx k, or more gen-erally: P 1 k=0 c k(x kx 0). Finding the series expansion of d u _ " / du dk 'w\. What is the interval of convergence for this series? Answer: The Maclaurin series for ex is 1+x+ x2 2! + x3. As it happens, Every power series is the Taylor series of some $C^{\infty}$ function , but whether you refer to a series as a power series or a Taylor series depends on context. Many functions can be written as a power series. A power series in the variable x and centered at a is the in nite series. A Taylor series is a clever way to approximate any function as a polynomial with an infinite number of terms. If it is true, explain why. For example, consider the Taylor series for exp(z). 4- Represent functions as Taylor series and Maclaurin series. This Demonstration illustrates the interval of convergence for power series. Every Taylor series provides the. Question about sum and diff. which is valid for -10. FUNCTIONS OF A COMPLEX VARIABLE (S1) Lecture 7 Power series expansions ⊲ Taylor series f representable by Taylor series expansion is said to be analytic. Math formulas and cheat sheet generator for power series. Today I'd like to post a short piece of code I made after a review of Taylor series I did. Convergence of Taylor series 3. Finding Taylor Polynomials The TI-89 taylor( command can be used to find partial sums. Taylor series expansion of exponential functions and the combinations of exponential functions and logarithmic functions or trigonometric functions. Whitehead 8. (See the text, p. Let be the radius of convergence, and. What is Power series? A power series is a series of the form. All images are from "Thomas' Calcu-. Taylor and Maclaurin (Power) Series Calculator. 2 (Taylor Series). (c) If P a. ) There is a C1(R) function gwhich has this series as its Taylor series at 0. 
3 Examples We now look how to -nd the Taylor and Maclaurin™s series of some functions. Such series can be described informally as infinite polynomials (i. Taylor series is a special power series that provides an alternative and easy-to-manipulate way of representing well-known functions. Review your understanding of the function approximation series (Taylor, Maclaurin, and Power series) with some challenging problems. A Taylor series method for numerical fluid mechanics J. Convergence of In nite Series in General and Taylor Series in Particular E. In this video, Patrick teaches how to Differentiate and Integrate Power Series to Derive New Power Series Expressions. These are called the Taylor coefficients of f, and the resulting power series is called the Taylor series of the function f. MATRIX AND POWER SERIES METHODS Mathematics 306 All You Ever Wanted to Know About Matrix Algebra and Infinite Series But Were Afraid To Ask By John W. (ii) Using (i) or otherwise nd the Taylor series expansion of 1 (1 x)2 and 1 (1 x)3 about a = 0, stating carefully any theorems you may use about integrating or di er-entiating power series within their radius of convergence. We now come to the important topics of power series and Taylor polynomials and series. Taylor's Theorem Let f be a function with all derivatives in (a-r,a+r). To find the values of x at which a power series converges requires knowing the actual form of the values a m, a m+1, a m+2, , and requires more work than we are able to do here. Prerequisite: Chaps. One important application of power series is to approximate a function using partial sums of its Taylor series. Power Series Solution of a Differential Equation • Approximation by Taylor Series Power Series Solution of a Differential Equation We conclude this chapter by showing how power series can be used to solve certain types of differential equations. In many cases, the third statement below is taken to be the definition of the exponential function. 
If it is false, explain why or give an example that disproves the statement. Taylor series expansions of hyperbolic functions, i. Taylor series expanded about x=0 are often relatively simple. Sketch of Proof Pick f kga fast decreasing sequence of positive real numbers. Limits like are "easy" to compute, since they can be rewritten as follows. We use the power series for the sine function (see sine function#Computation of power series): Dividing both sides by (valid when ), we get: We note that the power series also works at (because ), hence it works globally, and is the power series for the sinc function. problems concerning complex numbers with answers. The basic idea hinges on the geometric series expansion of. Use a known Maclaurin series to obtain the Maclaurin series for the function f(x) = cos(πx). This is a convergent power series, but the same power series does not define an asymptotic series for exp(z). The Taylor Series represents f(x) on (a-r,a+r) if and only if. 1) Lecture 26 Play Video: Taylor and MacLaurin Series (Ex. Spring 03 final with answers. A power series can be integrated term by term along any curve C which lies entirely. I was just wondering in the lingo of Mathematics, are these two "ideas" the same? I know we have Taylor series, and their specialisation the Maclaurin series, but are power series a more general co. Drek intends to pollute into my fifties my irony is sarcastic and to thin more and. Radius of convergence 8. 5- Approximate functions using Taylor polynomials and partial sums of infinite series. The objective of this section is to become fa-miliar with the theory and application of power series and Taylor series. Clearly, since many derivatives are involved, a Taylor series expansion is only possible when the function is so smooth that it can be differentiated again and again. DeTurck Math 104 002 2018A: Series 2/42. In the figure below, we consider the graphs. Math24 Search. 
Taylor and Laurent series Complex sequences and series An infinite sequence of complex numbers, denoted by {zn}, can be considered as a function defined on a set of positive integers into. Recall that if has derivatives of all orders at , then the Taylor series centered at for is On the other hand, suppose we give you a series, that we claim is the Taylor series for a function. Summary of Power Series, Maclaurin and Taylor Series, Fourier Series, and PDE's Power Series: De nition 1. (6) (i) Find the power series expansion of the function 1 1 x about a = 0. 4 Find the Maclaurin™s series for f(x) = ex, -nd its domain. of better and better approximations to f leading to a power series expansion f(x) = X∞ n=0 f(n)(a) n! (x−a)n which is known as the Taylor series for f. COMPLETE SOLUTION SET. First of all, just to review the concepts of Maclaurin and Taylor series, I am giving the definitions below. To di erentiate these two cases, a power series over the reals will be denoted f(x); and over the complex, f(z). Power Series Solution of a Differential Equation • Approximation by Taylor Series Power Series Solution of a Differential Equation We conclude this chapter by showing how power series can be used to solve certain types of differential equations. ] Also find the associated radius of convergence. These operations, used with differentiation and integration, provide a means of developing power series for a variety of. We would like to know which x0s we can plug in to get a convergent series. A summary of Differentiation and Integration of Power Series in 's The Taylor Series. Perfect for acing essays, tests, and quizzes, as well as for writing lesson plans. The proof is very similar to an argument we have seen already. Taylor series and power series Computation of power series. 3 Examples We now look how to -nd the Taylor and Maclaurin™s series of some functions. " This becomes clearer in the expanded […]. 
To determine this, we consider the ratio test for power series:. Math24 Search. As mentioned earlier, the function 1=(1 z) exists and is in nitely di erentiable everywhere except at z= 1 while the series P 1 n=0 z nonly exists in the unit circle jzj<1. f The coefficients of this power series may be expressed with the Bernoulli numbers. Find the Maclaurin series for f(x) = e5x. An important type of series is called the p-series. Taylor Series. Since every power of in the power series for sine is odd, we can see that sine is an odd function. A series of the form This series is useful for computing the value of some general function f(x) for values of x near a. Taylor Series SingleVariable and Multi-Variable • Single variable Taylor series: Let f be an infinitely differentiable function in some open interval around x= a. We say that powers of x are a complete set of functions because any function can be expressed as a linear combination of them. There have been good reasons. The Taylor Series represents f(x) on (a-r,a+r) if and only if. The series you have described is not a geometric series. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. Taylor's Theorem states that if f is represented by a power series centered at c, then the power series has the form 0 n fx = ssss s ss sssssssssssss s ss sssssssssssssss s ss ssssssssssssssss s s If a power series is centered at c = 0, then it is called a sssssssss ssssss. + Formal manipulation of Taylor series and shortcuts to computing Taylor series, including substitution, differentiation, antidifferentiation, and the formation of new series from known series. In this video, Patrick teaches how to Differentiate and Integrate Power Series to Derive New Power Series Expressions. Find the Taylor series for e−x2 centered at 0. The main results of this chapter are that complex power series represent analytic functions, as shown in Sec. 
Learn exactly what happened in this chapter, scene, or section of The Taylor Series and what it means. In other words, the terms in the series will get smaller as n gets bigger; that's an indication that x may be inside the radius of convergence. The modern idea of an infinite series expansion of a function was conceived in India by Madhava in the 14th century, who also developed precursors to the modern concepts of the power series, the Taylor series, the Maclaurin series, rational - Their importance in calculus stems from Newton s idea of representing functions as sums of infinite series. We know that ex = X∞ n=0 xn n!. 1) DEFINITION 1. Lecture 14 : Power Series, Taylor Series Let an 2 Rfor n = 0;1;2;:::. Wolfram|Alpha can compute Taylor, Maclaurin, Laurent, Puiseux and other series expansions. Polynomial Approximations. 1 Approximating Functions with Polynomials 10. Thread Safety The taylor command is thread-safe as of Maple 15. Consider the one dimensional initial value problem y' = f(x, y), y(x 0) = y 0 where f is a function of two variables x and y and (x 0, y 0) is a known point on the solution curve. Operations on power series. A p-series can be either divergent or convergent, depending on its value. Common Maclaurin series 4. It is often difficult to operate with power series. Taylor Series and Asymptotic Expansions The importance of power series as a convenient representation, as an approximation tool, as a tool for solving differential equations and so on, is pretty obvious. Well, power series are important because ANY function can be represented by an infinite sum of powers of the argument. Differentiating and Integrating Power Series. To find the values of x at which a power series converges requires knowing the actual form of the values a m, a m+1, a m+2, , and requires more work than we are able to do here. 1) Lecture 26 Play Video: Taylor and MacLaurin Series (Ex. Drek intends to pollute into my fifties my irony is sarcastic and to thin more and. 
The series converges absolutely for all in some finite open interval and diverges if or. In the previous section we started looking at writing down a power series representation of a function. We use the power series for the sine function (see sine function#Computation of power series): Dividing both sides by (valid when ), we get: We note that the power series also works at (because ), hence it works globally, and is the power series for the sinc function. The Taylor series above for arcsin x, arccos x and arctan x correspond to the corresponding principal values of these functions, respectively. Many functions can be written as a power series. Let be the radius of convergence, and. Operations on power series. (Several of these are listed below. Calculus II, Section11. Some of my graphs for calc 3 (for peopel whose classes are different, it's just calc with more than two variables) get hung up when I try to increase the number of points it graphs so that I get higher detail. Multivariate Taylor Series. This gives us a simple formulaB for the sum:" B B B â œ " " B # $ This is our first example of a Taylor series —a power series that adds up to a known function. 31: Power Series, Taylor Series and Analytic Functions (section 5. questions about Taylor series with answers. The Maclaurin series is a template that allows you to express many other functions as power series. In practice the Taylor series does converge to the function for most functions of interest, so that the Taylor series for a function is an excellent way to work that function. CHAPTER 12 - FORMULA SHEET 2 POWER SERIES Recall the notion of an in nite series. Today I'd like to post a short piece of code I made after a review of Taylor series I did. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution. 
The following two chapters will deal with problem-solving techniques in the context of the material in this chapter. For which values of x do the values of f(x) and the sum of the power series expansion coincide? Taylor Series De nition If f(x) is a function with in nitely many derivatives at a, the Taylor Series of the function f(x) at/about a is the. A Taylor series is a polynomial of infinite degrees that can be used to represent all sorts of functions, particularly functions that aren't polynomials. power series, such as the Taylor series of a basic function. In fact, that's the brute force method of finding a series representation for a function, but there are other ways. Lady (October 31, 1998) Some Series Converge: The Ruler Series At rst, it doesn't seem that it would ever make any sense to add up an in nite number of things. Taylor series is a way to representat a function as a sum of terms calculated based on the function's derivative values at a given point as shown on the image below. We now come to the important topics of power series and Taylor polynomials and series. f The coefficients of this power series may be expressed with the Bernoulli numbers. | CommonCrawl |
\begin{document}
\title{{Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent}\xspace, {$(SGD)^2$}\xspace}
\author{\IEEEauthorblockN{Reyan Ahmed, Felice De Luca, Sabin Devkota, Stephen Kobourov, Mingwei Li} \thanks{An earlier version of this paper appears in GD'20~\cite{ahmed2020gd}; in this extended version we use stochastic gradient descent which allows for multicriteria optimization on larger graphs.} \IEEEauthorblockA{\\Department of Computer Science, University of Arizona, USA}}
\IEEEtitleabstractindextext{ \begin{abstract} Readability criteria, such as distance or neighborhood preservation, are often used to optimize node-link representations of graphs to enable the comprehension of the underlying data.
With few exceptions, graph drawing algorithms typically optimize one such criterion, usually at the expense of others. We propose a layout approach, {Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent}\xspace, {$(SGD)^2$}\xspace, that can handle multiple readability criteria.
\blue{ {$(SGD)^2$}\xspace can optimize any criterion that can be described by a differentiable function. } Our approach is flexible and can be used to optimize several criteria that have already been considered earlier (e.g., obtaining ideal edge lengths, stress, neighborhood preservation) as well as other criteria which have not yet been explicitly optimized in such fashion (e.g., node resolution, angular resolution, aspect ratio).
The approach is scalable and can handle large graphs. A variation of the underlying approach can also be used to optimize many desirable properties in planar graphs, while maintaining planarity.
\blue{ Finally, we provide quantitative and qualitative evidence of the effectiveness of {$(SGD)^2$}\xspace: we analyze the interactions between criteria, measure the quality of layouts generated by {$(SGD)^2$}\xspace as well as its runtime behavior, and analyze the impact of sample sizes. The source code is available on GitHub, and we
also provide an interactive demo for small graphs.
} \end{abstract}
\begin{IEEEkeywords} Graph drawing, gradient descent, quality metrics. \end{IEEEkeywords}}
\maketitle
\IEEEdisplaynontitleabstractindextext
\IEEEpeerreviewmaketitle
\IEEEraisesectionheading{ \section{Introduction}\label{sec:introduction}} Graphs represent relationships between entities and visualization of this information is relevant in many domains.
Several criteria have been proposed to evaluate the readability of graph drawings, including the number of edge crossings, distance preservation, and neighborhood preservation. Such criteria evaluate different aspects of the drawing and different layout algorithms optimize different criteria. It is challenging to optimize multiple readability criteria at once and there are few approaches that can support this. Examples of approaches that can handle a small number of related criteria include the stress majorization framework of Wang et al.~\cite{wang2017revisiting}, which optimizes distance preservation via stress as well as ideal edge length preservation.
The Stress Plus X (SPX) framework of Devkota et al.~\cite{devkota2019stress} can minimize the number of crossings, or maximize the minimum angle of edge crossings.
While these frameworks can handle a limited set of related criteria, it is not clear how to extend them to arbitrary optimization goals.
The reason for this limitation is that these frameworks are dependent on a particular mathematical formulation. For example, the SPX framework
was designed for crossing minimization, which can be easily modified to handle crossing angle maximization (by adding a cosine factor to the optimization function).
This ``trick'' can be applied only to a limited set of criteria, but not to the majority of criteria, which are incompatible with the basic formulation.
\begin{figure}\label{fig:tensorflowjs-ui}
\end{figure}
In this paper, we propose a general approach, {Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent}\xspace, {$(SGD)^2$}\xspace, that can optimize a large set of drawing criteria, provided that the corresponding metrics that evaluate the criteria are differentiable functions. \blue{ If the criterion is not naturally differentiable, we design a differentiable surrogate function to approximate and optimize the original criterion. In {$(SGD)^2$}\xspace, auto-differentiation tools are used for the gradient-based optimization. } To demonstrate the flexibility of the approach, we consider a set of nine criteria: minimizing stress, maximizing node resolution, obtaining ideal edge lengths, maximizing neighborhood preservation, maximizing crossing angle, optimizing total angular resolution, minimizing aspect ratio, optimizing the Gabriel graph property, and minimizing edge crossings. \blue{ We evaluate the effectiveness of our approach quantitatively and qualitatively with evidence drawn from a set of experiments. To illustrate the effectiveness and efficiency of multicriteria optimization, we evaluate the compatibility of every pair of criteria, measure the quality of each criterion, and demonstrate the distinctive looks of graph layouts under different drawing objectives. We also evaluate the runtime performance and the impact of sample sizes used in the optimization, and compare our methods with existing ones. } We implemented our method with PyTorch. The code is available at: \url{https://github.com/tiga1231/graph-drawing/tree/sgd}. For demonstration purposes, we also built an interactive prototype \blue{(that implements full-batch gradient descent on small graphs)} in JavaScript using tensorflow.js and D3.js, which is available on \url{http://hdc.cs.arizona.edu/~mwli/graph-drawing/}. This interactive prototype allows nodes to be moved manually and combinations of criteria can be optimized by selecting a weight for each;
\blue{see Fig.~\ref{fig:tensorflowjs-ui}.}
\begin{comment}
\begin{figure}
\caption{Generating symmetric layout using {$(SGD)^2$}\xspace: (a) input random layout, and (b) after applying {$(SGD)^2$}\xspace}
\label{fig:inputsym_in}
\label{fig:inputsym_out}
\label{fig:symgdqnfirst}
\end{figure} \end{comment}
\section{Related Work} Many criteria associated with the readability of graph drawings have been proposed~\cite{ware2002cognitive}. Most graph layout algorithms are designed to (explicitly or implicitly) optimize a single criterion.
For instance, a classic layout criterion is stress minimization~\cite{kamada_1989}, where stress is defined by $\sum\limits_{i < j}w_{ij} (|X_i-X_j| - d_{ij})^2$. Here, $X$ is an $n\times2$ matrix containing coordinates for the $n$ nodes, $d_{ij}$ is typically the graph-theoretic distance between two nodes $i$ and $j$, and $w_{ij}=d_{ij}^{-\alpha}$ is a normalization factor with $\alpha$ equal to $0$, $1$, or $2$. Thus, reducing the stress in a layout corresponds to computing node positions so that the actual distance between pairs of nodes is proportional to the graph-theoretic distance between them. Optimizing stress can be accomplished by stress minimization, or by stress majorization, which can speed up the computation~\cite{gansner2004graph}. In this paper we only consider drawings in the Euclidean plane; however, stress can also be optimized in other spaces such as the torus~\cite{chen2020doughnets}.
Stress minimization corresponds to optimizing the global structure of the layout, as the stress metric takes into account all pairwise distances in the graph. The tsNET algorithm of Kruiger et al.~\cite{kruiger2017graph} directly optimizes neighborhood preservation, which captures the local structure of a graph, as the neighborhood preservation metric only considers distances between pairs of nodes that are close to each other. Optimizing local or global distance preservation can be seen as special cases of more general dimensionality reduction approaches such as multi-dimensional scaling~\cite{shepard1962analysis,kruskal1964multidimensional}.
Purchase et al.~\cite{Purchase1997} showed that the readability of graphs increases if a layout has fewer edge crossings. The underlying optimization problem is NP-hard and several graph drawing contests have been organized with the objective of minimizing the number of crossings in the graph drawings~\cite{Abrego12,Buchheim13}. Recently several algorithms that directly minimize crossings have been proposed ~\cite{bennett2010,Radermacher18}.
The negative impact on graph readability due to edge crossings can be mitigated if crossing pairs of edges have a large crossings angle~\cite{Argyriou2010,huang2014,huang2013,didimo2014crossangle}. Formally, the crossing angle of a straight-line drawing of a graph is the minimum angle between two crossing edges in the layout, and optimizing this property is also NP-hard.
Recent graph drawing contests have been organized with the objective of maximizing the crossing angle in graph drawings, and this has led to several heuristics for this problem~\cite{Demel2018AGH,Bekos18}.
The algorithms above are very effective at optimizing the specific readability criterion they are designed for, but they cannot be directly used to optimize additional criteria. This is a desirable goal, since optimizing one criterion often leads to poor layouts with respect to one or more other criteria: for example, algorithms that optimize the crossing angle tend to create drawings with high stress and no neighborhood preservation~\cite{devkota2019stress}.
Davidson and Harel~\cite{davidson1996drawing} used simulated annealing to optimize different graph readability criteria (keeping nodes away from other nodes and edges, uniform edge lengths, minimizing edge crossings). \blue{Huang et al.~\cite{huang2013} extended a force-directed algorithm to optimize crossing angle and angular resolution by incorporating two additional angle forces. The authors show that in addition to optimizing crossing angle and angular resolution, the algorithm also improves other desirable properties (average size of crossing angles, standard deviation of crossing angles, standard deviations of angular resolution, etc.). In a force-directed method similar to the algorithm proposed by Huang et al.~\cite{huang2013}, to optimize each criterion one needs to design a new force. The new force can be considered as a gradient update by hand, whereas {$(SGD)^2$}\xspace is a gradient descent based algorithm where the gradients are computed automatically using auto-differentiation tools.} Recently, several approaches have been proposed to simultaneously improve multiple layout criteria. Wang et al.~\cite{wang2017revisiting} propose a revised formulation of stress that can be used to specify ideal edge direction in addition to ideal edge lengths in a graph drawing. Wang et al.~\cite{wang2018structure} extended that stress formulation to produce structure-aware and smooth fish-eye views of graphs.
Devkota et al.~\cite{devkota2019stress} also use a stress-based approach to minimize edge crossings and maximize crossing angles.
Eades et al.~\cite{10.1007/978-3-319-27261-0_41} provided a technique to draw large graphs while optimizing different geometric criteria, including the Gabriel graph property. Although the approaches above are designed to optimize multiple criteria, they cannot be naturally extended to handle other optimization goals.
Constraint-based layout algorithms such as COLA~\cite{ipsepcola_2006, scalable_cola_2009} can be used to enforce separation constraints on pairs of nodes to support properties such as customized node ordering or downward pointing edges. The coordinates of two nodes are related by inequalities of the form $x_i \geq x_j + gap$ for a node pair $(i,j)$. Dwyer et al.~\cite{dwyer2009constrained} use gradient projection to handle these constraints, by moving nodes as little as needed to satisfy the inequalities/equalities after each iteration of the layout method. The gradient projection method has been extended to also handle non-linear constraints~\cite{dwyer2009layout}. These hard constraints are powerful but somewhat restrictive, and are different from \blue{the soft constraints} in our {$(SGD)^2$}\xspace framework.
\begin{comment}
{\reyan{Also there are other readability criteria for which we can apply {$(SGD)^2$}\xspace: crossings, NP, stress, area, aspect ratio,
symmetry,
upwardness drawing, label overlapping removal, \href{https://arxiv.org/pdf/1908.03586.pdf}{edge-length ratio}, \href{https://arxiv.org/pdf/1908.07363.pdf}{node overlap removal}, \href{https://arxiv.org/pdf/1908.06504.pdf}{total angular resolution}, \href{https://link.springer.com/chapter/10.1007/978-3-030-04414-5_20}{crossing-angle maximization}, \href{https://arxiv.org/pdf/1708.09815.pdf}{drawings with few
slopes},
\href{https://pdfs.semanticscholar.org/be7e/4c447ea27e0891397ae36d8957d3cbcea613.pdf}{maximizing the minimum angle between edges leaving a node (angular resolution)},
\href{https://pdfs.semanticscholar.org/be7e/4c447ea27e0891397ae36d8957d3cbcea613.pdf}{maximizing consistent flow direction}, \href{https://arxiv.org/pdf/1908.07792.pdf}{clustering quality}.}}
stress in torus
https://dl.acm.org/doi/pdf/10.1145/3313831.3376180 \end{comment}
\section{The {$(SGD)^2$}\xspace Framework}
The {$(SGD)^2$}\xspace framework is a general optimization approach to generate a layout with any desired set of aesthetic metrics, provided that they can be expressed by a smooth function. The basic principles underlying this framework are simple. The first step is to select a set of layout readability criteria and loss functions that measure each of them. Then we define the function to optimize as a linear combination of the loss functions for each individual criterion. Finally, we iterate the gradient descent steps, from which we obtain a slightly better drawing at each iteration. Fig.~\ref{fig:gdgdframework} depicts the framework of {$(SGD)^2$}\xspace: Given any graph with $n$ nodes and a readability criterion $Q$, we design a loss function $L_{Q}: \mathbb{R}^{n \times 2} \to \mathbb{R}$ that maps the current layout $X \in \mathbb{R}^{n \times 2}$ to a measure $L_{Q}(X)$ with respect to the readability criterion.
Then we combine multiple loss functions from different criteria into a single one by taking a weighted sum, $L(X) = \Sigma_{Q}w_Q L_{Q}(X)$, where a lower value is always desirable. At each iteration, a slightly better layout can be found by taking a small ($\epsilon$) step along the (negative) gradient direction: $X^{(new)} = X - \epsilon \cdot \nabla\; L(X)$. \blue{Algorithm~\ref{alg:gd2} summarises the {$(SGD)^2$}\xspace optimization procedure.}
\begin{figure}
\caption{The {$(SGD)^2$}\xspace framework: Given a graph and a set of criteria (with weights), formulate an objective function based on the selected set of criteria and weights. Then compute the quality (value) of the objective function of the current layout of the graph. Next, generate the gradient (analytically or automatically). Using the gradient information, update the coordinates of the layout. Finally, update the objective function based on the layout via regular or stochastic gradient descent. This process is repeated for a fixed number of iterations.}
\label{fig:gdgdframework}
\end{figure}
\begin{algorithm}[t] \DontPrintSemicolon \caption{\blue{The {$(SGD)^2$}\xspace Algorithm}\label{alg:gd2}} \KwInput{
\\
$G = (V,E)$ \tcp{graph}
$C = \{\dots c \dots\}$ \tcp{criteria}
$S: c \mapsto s_c$ \tcp{sample sizes for each $c$}
$L_c: \mathbb{R}^{s_c \times 2} \to \mathbb{R}_+$ \tcp{loss functions}
$maxiter \in \mathbb{Z}_+$ \tcp{number of iterations}
$W: c \mapsto w_c$, where $w_c: [1, maxiter] \to \mathbb{R_+}$ \tcp{weight schedules}
$\eta: [1, maxiter] \to \mathbb{R_+}$ \tcp{learning rate}
$q$ \tcp{criterion for safe update}
$Q_{q}$ \tcp{quality measure of $q$} } \KwOutput{
$X$ \tcp{layout that optimizes multiple criteria} } \Fn{Layout($G; C, S, W, maxiter, \eta$)}{
$X \leftarrow$ InitializeLayout($G$)\;
\If{`crossings' $\in C$}{
$cd \leftarrow$ InitializeCrossingDetector()\;
}
\For {$t = 1, \dots, maxiter$}{
$l \leftarrow 0$\;
\For {$c \in C$ s.t. $w_c(t) > 0$}{
$sample \leftarrow$ Sample($c, s_c$)\;
\If{c == `crossings'}{
UpdateCrossingDetector($cd, sample$)\;
$l_{c} \leftarrow L_{c}(sample; G, cd)$\;
}
\Else{
$l_{c} \leftarrow L_{c}(sample; G)$\;
}
$l \leftarrow l + w_c(t) \cdot l_c$\;
}
\If{`Safe update' is enabled}{
$X_{prev} \leftarrow X$\;
$X_{new} \leftarrow X - \eta(t) \cdot \nabla_{X} l$\;
$X \leftarrow$ SafeUpdate($X_{prev}, X_{new}; G, Q_q$) \tcp{Alg. \ref{alg:safe_update_2}}
}
\Else{
$X \leftarrow X - \eta(t) \cdot \nabla_{X} l$\;
}
}
Return X\; } \end{algorithm}
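The main loop of Algorithm~\ref{alg:gd2} (omitting the crossing detector and the safe update) can be sketched in PyTorch as follows; the function and argument names are ours, and each criterion is represented as a (weight schedule, sampler, loss) triple rather than the paper's exact API:

```python
import torch

def layout(n_nodes, criteria, max_iter=100, lr=0.01):
    """Minimal sketch of the (SGD)^2 loop (no crossing detector or safe
    update). Each criterion is a (weight_schedule, sample_fn, loss_fn)
    triple; these names are placeholders, not the released code's API."""
    X = torch.rand(n_nodes, 2, requires_grad=True)  # InitializeLayout
    opt = torch.optim.SGD([X], lr=lr)
    for t in range(1, max_iter + 1):
        opt.zero_grad()
        total = 0.0
        for weight_schedule, sample_fn, loss_fn in criteria:
            w = weight_schedule(t)
            if w > 0:
                total = total + w * loss_fn(X, sample_fn())
        if torch.is_tensor(total):   # at least one active criterion
            total.backward()         # gradients via autodiff
            opt.step()               # X <- X - eta(t) * grad
    return X.detach()
```

With a single toy criterion that pulls all nodes toward the origin, the loop drives the layout to the corresponding minimizer.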
\subsection{Gradient Descent Optimization}
There are different kinds of gradient descent algorithms. The standard method considers all nodes, computes the gradient of the objective function, and updates node coordinates based on the gradient. Some objectives require considering all the nodes in every step; for example, the basic stress formulation~\cite{kamada_1989} falls in this category. To compute the gradient, one then has to iterate through all the nodes, which does not scale to very large graphs. Fortunately, most of these objectives can be decomposed into optimization over subsets of nodes. Consider stress minimization again: if we sample a set of node pairs randomly and minimize the stress between the nodes in each pair, the stress of the whole graph is also minimized~\cite{zheng2018graph}. \blue{This approach is known as stochastic gradient descent (SGD) and we use this idea extensively.}
In section~\ref{sect:properties-and-measures}, we specify the objective loss functions and sampling methods we used for each readability criterion we consider.
Not all readability criteria come naturally in the form of differentiable functions, and we cannot compute gradients of, or apply SGD to, non-differentiable functions. In cases where the original objective is continuous but not everywhere differentiable, e.g., a `hinge' function $f(x)=\max(0,x)$, we can compute a subgradient and update the objective based on it. Hence, as long as the function is continuously defined on a connected component in the domain, we can apply the subgradient descent algorithm.
When a function is not defined in a connected domain, we can introduce surrogate loss functions to `connect the pieces'. For example, when optimizing neighborhood preservation we maximize the Jaccard similarity between graph neighbors and nearest neighbors in graph layout. However, Jaccard similarity is only defined between two binary vectors. To solve this problem we extend Jaccard similarity to all real vectors by its Lov\'{a}sz extension~\cite{berman2018lovasz} and apply that to optimize neighborhood preservation.
An essential part of gradient descent based algorithms is to compute the gradient/subgradient of the objective function. In practice, it is not necessary to write down the gradient analytically as it can be computed automatically via (reverse-mode) automatic differentiation~\cite{griewank2008evaluating}. Deep learning packages such as Tensorflow~\cite{abadi2016tensorflow} and PyTorch~\cite{paszke2019pytorch} apply automatic differentiation to compute the gradient of complicated functions.
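As a minimal illustration (the layout and loss here are toy values, not from the paper), reverse-mode automatic differentiation in PyTorch computes the gradient of a scalar loss with respect to the node coordinates without any hand-derived formula:

```python
import torch

# Toy 2D layout of three nodes; requires_grad enables autodiff.
X = torch.tensor([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]], requires_grad=True)

# A toy differentiable loss: squared distance between nodes 0 and 1.
loss = ((X[0] - X[1]) ** 2).sum()

loss.backward()  # reverse-mode autodiff fills X.grad

# Gradient w.r.t. X[0] is 2 * (X[0] - X[1]) = [-2, 0].
print(X.grad[0])
```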
\blue{ Most of the objective functions that we consider here are not convex and do not have unique global minimizers. Therefore, even though SGD is known to converge (to at least a local optimum) in relatively relaxed settings~\cite{gower2019sgd, bassily2018exponential}, few optimization objectives are guaranteed to find the global optimum. In particular, unlike methods such as stress majorization~\cite{gansner2004graph}, most of our optimization objectives are not guaranteed to converge to the global optimum. Meanwhile, most of the objective functions for which SGD works well in practice (e.g., in deep learning) are neither convex nor have unique global minimizers~\cite{li2017visualizing}.
With this in mind, we follow the common practice of applying an annealing process, if necessary, to ensure convergence (to a possibly local minimum). }
When optimizing multiple criteria simultaneously, we combine them via a weighted sum. However, choosing a proper weight for each criterion can be tricky. Consider, for example, maximizing crossing angles and minimizing stress simultaneously with a fixed pair of weights. At the very early stage, the initial drawing may have many crossings and stress minimization often removes most of the early crossings. As a result, maximizing crossing angles in the early stage can be harmful as it moves nodes in directions that contradict those that come from stress minimization. Therefore, a well-tailored \textit{weight scheduling} is needed for a successful outcome. Continuing with the same example, a better outcome can be achieved by first optimizing stress until it converges, and later adding weights for the crossing angle maximization. To explore different ways of scheduling, we provide an interface that allows manual tuning of the weights. \blue{
We consider weight schedules for different criteria sets in Section~\ref{sect:analysis-of-qualities}. }
\subsection{Implementation} We implemented the {$(SGD)^2$}\xspace framework in Python. In particular, we used PyTorch~\cite{paszke2019pytorch} for automatic differentiation, NetworkX~\cite{networkx} for processing graphs, and matplotlib~\cite{matplotlib} for drawing. The code is available at \url{https://github.com/tiga1231/graph-drawing/tree/sgd}. To demonstrate our method on small graphs, we have provided an interactive tool written in JavaScript\footnote{ \url{http://hdc.cs.arizona.edu/~mwli/graph-drawing/}}, where we used the automatic differentiation tools in tensorflow.js~\cite{mlsys2019_154} and the drawing library D3.js~\cite{2011-d3}.
\begin{comment}
{
\begin{figure}
\caption{Layouts computed by {$(SGD)^2$}\xspace: (a) a clustered graph using stress + crossing minimization the number of crossings in this layout is 5 that is the minimum achievable, (b) a clustered graph using edge uniformity minimization, the edge length uniformity in this layout is 0 that is the minimum achievable (c) a dodecahedron graph layout using neighborhood preservation optimization, and (d) a dodecahedron graph drawn using symmetry optimization.}
\label{fig:blocknp}
\label{fig:blocknpue}
\label{fig:gridst}
\label{fig:treest}
\label{fig:rand_samples}
\end{figure}
} \end{comment}
\section{Properties and Measures}\label{sect:properties-and-measures} In this section we specify the aesthetic goals, definitions, quality measures and loss functions for each of the $9$ graph drawing properties we optimized: stress, ideal edge lengths, neighborhood preservation, crossing number, crossing angle, aspect ratio, angular resolution, node resolution and Gabriel graph property. Other standard graph notation is summarized in Table~\ref{table:notations}. In each subsection, we first define our loss function for the entire graph. For small graphs, one can apply (full-batch) gradient descent directly on this loss. To speed up our method for larger graphs, we sample portions of our loss functions at each iteration and apply (mini-batch) stochastic gradient descent on them. \blue{ The definition of a sample can be different for each criterion. For example, for stress minimization we sample pairs of nodes; for ideal edge length, we sample edges. Hence, the sample sizes of different criteria can be set independently. Moreover, when the sample size for a certain criterion exceeds the number of possible samples, our method is automatically equivalent to (full-batch) gradient descent for that criterion. In Section~\ref{sect:analysis-of-sample-size}, we discuss the effect of the sample sizes on the convergence rates. The analysis has helped us set the default values for each readability criterion. In general, for each criterion we sample mini-batches from a pool of all sample points (e.g., all pairs of nodes for stress, all edges for ideal edge length) without replacement, and `refill the pool' when all sample points are drawn. In practice, we shuffle the list of data points, draw mini-batches from the list in consecutive order, and re-shuffle the list once every data point is drawn. } \begin{table}[h] \resizebox{\columnwidth}{!}{
\begin{tabular}{l|l}
\toprule
Notation & Description\\
\midrule
$G$ & Graph\\
$V$ & The set of nodes in $G$, indexed by $i$, $j$ or $k$\\
$E$ & The set of edges in $G$, indexed by a pair of nodes $i,j$ in $V$\\
$n=|V|$ & Number of nodes in $G$\\
$m$ & Sample size for a certain criterion in SGD\\
$|E|$ & Number of edges in $G$\\
$Adj$ and $A_{i,j}$ & Adjacency matrix of $G$ and its $(i,j)$-th entry\\
$d_{ij}$ & Graph-theoretic distance between node $i$ and $j$\\
$X_{n \times 2}$ & 2D-coordinates of nodes in the drawing\\
$||X_i - X_j||$ & The Euclidean distance between nodes $i$ and $j$ \\
$\theta_i$ & $i^{th}$ crossing angle\\
$\varphi_{ijk}$ & Angle between incident edges $(i,j)$ and $(j,k)$\\
\bottomrule
\end{tabular}
\caption{Graph notation used in this paper.}
\label{table:notations} } \end{table}
\subsection{Stress} We minimize stress, $L_{ST}$, to draw a graph that matches the Euclidean distances between pairs of nodes in the drawing to their graph theoretic distances. Following the original definition of stress~\cite{kamada_1989}, we minimize \begin{align}
L_{ST} = \sum\limits_{i<j}\;w_{ij}(||X_i - X_j||_2 - d_{ij})^2 \label{eq:loss-stress} \end{align} where $d_{ij}$ is the graph-theoretic distance between nodes $i$ and $j$, and $X_i$ and $X_j$ are the coordinates of nodes $i$ and $j$ in the layout. The normalization factor $w_{ij}=d_{ij}^{-2}$ balances the influence of short and long distances: the longer the graph-theoretic distance, the more tolerance we give to the discrepancy between the two distances. When comparing two drawings of the same graph with respect to stress, a smaller value (lower bounded by $0$) corresponds to a better drawing. To work with large graphs, we take the mean loss over pairs of nodes, turning it into the expectation of stress \begin{align}
\hat{L}_{ST} = \mathbb{E}_{i\neq j}\; [w_{ij}(||X_i - X_j||_2 - d_{ij})^2] \label{eq:loss-stress-expectation} \end{align} \blue{The quality measure for stress, $Q_{ST}$, is equal to the loss $\hat{L}_{ST}$ over all pairs of nodes.} In each SGD iteration we minimize the loss by sampling a number of node pairs. \blue{ Since the expectation of the gradient of the sample loss equals the gradient of the true loss, we can use the gradient of the sample loss as an estimate of the true gradient and update the drawing through SGD accordingly. In each SGD iteration, we sample $m$ pairs of nodes.
By default, we set $m=32$ based on our experiments with different sample sizes in Section~\ref{sect:analysis-of-sample-size}. Before a round that goes over all pairs of nodes, we shuffle a list of node-pairs and take mini-batches from the shuffled list. This guarantees that we process every pair of nodes exactly once per round. }
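The sampled stress term above can be sketched in PyTorch as follows; the function name and tensor layout are ours, and the graph-theoretic distances are assumed to be precomputed:

```python
import torch

def stress_loss(X, D, pairs):
    """Sampled stress sketch: X is an n x 2 coordinate tensor, D a
    precomputed graph-theoretic distance matrix, and pairs an (m, 2)
    tensor of sampled node-pair indices (names are illustrative)."""
    i, j = pairs[:, 0], pairs[:, 1]
    d = D[i, j]
    w = d ** -2                        # normalization w_ij = d_ij^{-2}
    dist = (X[i] - X[j]).norm(dim=1)   # Euclidean distances in the layout
    return (w * (dist - d) ** 2).mean()
```

A layout that realizes all graph distances exactly, e.g., a path on three nodes drawn on a straight line with unit edge lengths, has zero stress.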
\subsection{Ideal Edge Length} Given a set of ideal edge lengths $\{l_{ij}: (i,j) \in E\}$ we minimize the variance from the ideal lengths:
\begin{align} L_{IL} &= \sum\limits_{(i,j) \in E}\;
(\frac{||X_i - X_j|| - l_{ij}}{l_{ij}})^2 \label{eq:loss-ideal-edge-length} \end{align} For unweighted graphs, by default we use $1$ as the ideal edge length for all edges $(i,j) \in E$.
As with stress minimization, for large graphs we replace the summation by the expectation and estimate it through sampling the edges.
\begin{align}
\hat{L}_{IL} &= \mathbb{E}_{(i,j) \in E}[
(\frac{||X_i - X_j|| - l_{ij}}{l_{ij}})^2
]
\label{eq:loss-ideal-edge-length-expectation}
\end{align} \blue{ The quality measure $Q_{IL} = \hat{L}_{IL}$ is lower bounded by $0$ and a lower score yields a better layout. Similar to the sampling strategy for stress, here we keep a list of all edges in random order, draw mini-batches (by default, of size $m=32$) from it, and re-shuffle the list after all edges are processed once.}
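Analogously to the stress sketch, the sampled ideal edge length loss can be written as below; for simplicity this sketch uses a single scalar ideal length $l$ for all edges (the default for unweighted graphs), and the names are ours:

```python
import torch

def ideal_edge_length_loss(X, edges, l=1.0):
    """Sampled ideal-edge-length loss sketch: relative squared deviation
    of drawn edge lengths from the ideal length l (one scalar here for
    simplicity), averaged over a sampled (m, 2) tensor of edge indices."""
    i, j = edges[:, 0], edges[:, 1]
    dist = (X[i] - X[j]).norm(dim=1)
    return (((dist - l) / l) ** 2).mean()
```

A unit square drawn with all four edges at length $1$ attains the minimum loss of $0$.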
\subsection{Neighborhood Preservation} \label{sec:neighbor} Neighborhood preservation aims to keep adjacent nodes close to each other in the layout.
Similar to Kruiger et al.~\cite{kruiger2017graph}, the idea is to have the $k$ nearest (Euclidean) neighbors (k-NN) of node $i$ in the drawing align with the $k$ nearest nodes (in terms of graph distance from $i$).
Here we choose $k$ to be the degree of node $i$: for nodes of different degrees, we consider a different number of neighbors. A natural quality measure for the alignment is the Jaccard index between the two pieces of information. Let $Q_{NP} = JaccardIndex(K, Adj) = \frac{|\{(i,j): K_{ij}=1 \text{ and } A_{ij}=1\}|}{|\{(i,j): K_{ij}=1 \text{ or } A_{ij}=1\}|}$, where $Adj$ denotes the adjacency matrix and the $i$-th row in $K$ denotes the $k$-nearest neighborhood information of $i$:
$K_{ij} = 1$ if $j$ is one of the k-nearest neighbors of $i$ and $K_{ij}$ = 0 otherwise.
To express the Jaccard index as a differentiable minimization problem, first, we express the neighborhood information in the drawing as a smooth function of node positions $X_i$ and store it in a matrix $\hat{K}$. In $\hat{K}$, a positive entry $\hat{K}_{i,j}$ means node $j$ is one of the k-nearest neighbors of $i$, otherwise the entry is negative. Next, we take a differentiable surrogate function of the Jaccard index, the Lov\'{a}sz hinge loss (LHL) given by Berman et al.~\cite{berman2018lovasz}, to make the Jaccard loss optimizable via gradient descent. We minimize
\begin{align} L_{NP} &= LHL(\hat{K}, Adj)\label{eq:lovasz-hinge} \end{align} where $\hat{K}$ denotes the $k$-nearest neighbor estimation.
For simplicity, let $d_{i,j}=||X_i - X_j||$ denote the Euclidean distance between node $i$ and $j$, then we design $\hat{K}$ as:
\begin{align} \hat{K}_{i,j} &= \left\{\begin{array}{ll}
-(d_{i,j} - \frac{d_{i,\pi_k} + d_{i,\pi_{k+1}}}{2} ) & \text{ if } i \neq j\\
0 & \text{ if } i=j\\ \end{array}\right.\label{eq:neighbor-pred} \end{align} where $\pi_{k}$ denotes the $k^{th}$ nearest neighbor of node $i$. In other words, for every node $i$, we treat the average distance to its $k^{th}$ and $(k+1)^{th}$ nearest neighbors as a threshold, and use it to measure whether node $j$ is in the neighborhood of $i$ or not. Note that $d_{i,j}$, $d_{i,\pi_k}$ and $d_{i,\pi_{k+1}}$ are all smooth functions of node positions in the layout, so $\hat{K}_{i,j}$ is also a smooth function of node positions $X$. Furthermore, $\hat{K}_{i,j}$ is positive if node $j$ is a k-NN of node $i$, otherwise it is negative, as is required by LHL~\cite{berman2018lovasz}.
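The smooth neighborhood matrix $\hat{K}$ can be sketched as below; for simplicity this version uses a single $k$ for all nodes (whereas we set $k$ to the degree of each node) and assumes $k+2 \le n$:

```python
import torch

def khat(X, k):
    """Smooth k-NN indicator sketch: entry (i, j) is positive iff j is
    among the k nearest layout neighbors of i. Uses one k for all nodes
    for simplicity (the paper uses k = deg(i)); assumes k + 2 <= n."""
    d = torch.cdist(X, X)            # pairwise Euclidean distances
    d_sorted, _ = d.sort(dim=1)      # column 0 is the self-distance 0
    # Threshold: average distance to the k-th and (k+1)-th neighbors.
    thresh = (d_sorted[:, k] + d_sorted[:, k + 1]) / 2
    K = thresh.unsqueeze(1) - d      # positive inside the threshold
    K = K - torch.diag(torch.diag(K))  # zero the diagonal (i = j)
    return K
```

For four collinear points and $k=1$, the nearest neighbor of each endpoint falls inside the threshold and all other nodes fall outside, giving positive and negative entries respectively.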
In order to handle large graphs we sample nodes for stochastic gradient descent. However, note that the nearest neighbors $\pi_k$ and $\pi_{k+1}$ in $\hat{K}_{i,j}$ depend on distances from all nodes. To derive a reliable estimation of the Jaccard index, instead of letting $k$ equal the degree of node $i$ in the full graph, we set $k$ equal to the degree of node $i$ in the subgraph that we sample. In other words, in every gradient descent iteration we sample a subgraph from the full graph and compute $LHL$ of the subgraph. In practice, we randomly select a small set of $m$ nodes \blue{(by default, $m=16$)}, along with nodes that are $1$ or $2$ hops away from any of them. We also include a fraction of nodes that are not already in the sample. We extract the subgraph induced by this set of nodes and apply stochastic gradient descent.
\subsection{Crossing Number} Reducing the number of edge crossings is one of the classic optimization goals in graph drawing, known to affect readability~\cite{Purchase1997}. \blue{ Shabbeer et al.~\cite{bennett2010} employed an expectation-maximization-like algorithm to minimize the number of crossings. Since two edges do not cross if and only if there exists a line that separates their extreme points, they trained many support vector machine (SVM) classifiers to separate crossing pairs and used the classifiers as a guide to eliminate crossings. Since one has to train as many SVM classifiers as there are crossings in the graph, and knowledge learned by one SVM does not naturally transfer to another, we found that this approach does not work well on large graphs. With this in mind, we modified our initial approach to that of Tiezzi et al.~\cite{tiezzi2021graph}, which uses Graph Neural Networks to reduce the number of crossings in two steps. First, they train a generic neural network to predict if any two edges cross each other. Since neural networks are differentiable, the well-trained edge crossing predictor from this step will serve as a guide for the gradient descent steps later on.
In the second step they train a Graph Neural Network and use the edge crossing predictor as a guide to improve the layout. Our method uses only the first step above and utilizes a different training strategy. Instead of training the edge crossing predictor using a synthetic dataset before the layout optimization, we train the crossing predictor directly on the current graph layout while simultaneously updating the node coordinates in the same graph, using the crossing predictor as a guide. } \blue{ Formally, let $f_\beta$ denote a neural network with trainable parameters $\beta$ that takes the coordinates of the four nodes of any two edges $X^{(i)} \in \mathbb{R}^{4 \times 2}$ and outputs a scalar from the $(0,1)$ interval. An output close to $0$ means ``no crossing'' and one close to $1$ means ``crossing''. In practice, $f_{\beta}$ is a simple multi-layer perceptron (MLP) with batch normalization~\cite{ioffe2015batch} and LeakyReLU activation. To train a neural crossing detector $f_{\beta}$, we feed different edge pairs $X^{(i)} \in \mathbb{R}^{4 \times 2}$ to approximate the ground truth $t^{(i)} \in \{0, 1\}$ where $0$ means ``no crossing'' and $1$ means ``crossing''. We optimize the parameters $\beta$ to minimize the cross entropy (CE) loss $L_{\beta}$ between the prediction $f_{\beta}(X^{(i)})$ and the ground truth $t^{(i)}$, averaging over a sample of $n$ instances of edge pairs: $$ L_{\beta}(\beta;X^{(1)} \dots X^{(n)}) = \frac{1}{n}\sum\limits_{i=1}^n CE(f_\beta (X^{(i)}), t^{(i)}) $$ where \begin{align} CE(y, t) := - t \cdot log(y) - (1-t) \cdot log(1-y) \label{eq:cross-entropy} \end{align} We use the neural crossing detector $f_\beta$ to construct a differentiable surrogate loss function for crossing minimization. 
Specifically, given a well-trained $f_\beta$, we can reduce the number of crossings in a layout by minimizing the cross entropy between the prediction of edge pairs $f_\beta(X^{(i)})$ and the desired target (i.e., no crossing $t=0$): \begin{align} L_{CR}(X; \beta) = \frac{1}{n}\sum\limits_{i=1}^n CE(f_\beta (X^{(i)}), 0) \end{align} In practice, we minimize $L_{\beta}$ and $L_{CR}$ simultaneously in each {$(SGD)^2$}\xspace iteration. We first improve the neural crossing predictor using a sample of edge pairs from the graph. For simplicity, we describe the training by SGD, although in practice one can utilize any SGD variants (e.g. SGD with momentum, ADAM\cite{kingma2014adam} or RMSProp~\cite{Tieleman2012}) to train the predictor more efficiently. \begin{align} \beta^{(new)} = \beta - \epsilon' \cdot \nabla L_{\beta} \end{align} In the meantime we update the layout in a similar manner: \begin{align} X^{(new)} = X - \epsilon \cdot \nabla L_{CR} \end{align} Although one could improve the neural crossing predictor by multiple steps in every {$(SGD)^2$}\xspace iteration, we found little difference when varying the number of steps. Therefore, we only take one step to improve the neural crossing predictor in every {$(SGD)^2$}\xspace iteration. As with other criteria, we randomly draw mini-batches (by default, of size $m=128$) and iterate through all edge pairs over the course of the SGD iterations. }
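The neural crossing detector and one training step on $L_\beta$ can be sketched as follows; the layer sizes are illustrative, and this sketch uses a plain MLP without the batch normalization described above:

```python
import torch
import torch.nn as nn

def make_crossing_detector():
    """Sketch of f_beta: maps the four endpoints of two segments
    (flattened to 8 coordinates) to a crossing probability. Layer sizes
    are illustrative; batch normalization is omitted for brevity."""
    return nn.Sequential(
        nn.Linear(8, 32), nn.LeakyReLU(),
        nn.Linear(32, 32), nn.LeakyReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )

def detector_step(f_beta, opt, edge_pairs, targets):
    """One step on L_beta: cross entropy between predictions on sampled
    edge pairs (m x 4 x 2) and ground-truth crossing labels (m,)."""
    opt.zero_grad()
    pred = f_beta(edge_pairs.reshape(len(edge_pairs), 8)).squeeze(1)
    loss = nn.functional.binary_cross_entropy(pred, targets)
    loss.backward()
    opt.step()
    return loss.item()
```

Repeating `detector_step` on a fixed batch of labeled edge pairs fits the detector to the ground truth, after which its predictions can guide the layout updates.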
\blue{ When a graph layout does not have many crossings (e.g., a stress-minimized layout of a near-planar graph), this sampling strategy is not efficient. In that case, we use an efficient algorithm (Bentley-Ottmann~\cite{bentley1979algorithms}) to find all crossing edges in the graph and sample a mini-batch of crossings. Since finding all crossings can be slow for large graphs, we only do this once every few iterations and reuse the result across those iterations. Specifically, we draw mini-batches from the pool of all crossings and recompute the crossings only once the pool is drained. To evaluate the quality, we simply count the number of crossings. }
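For illustration, the ground-truth crossing test and the brute-force crossing count used for quality evaluation can be sketched as follows (a minimal Python sketch; the actual system uses Bentley-Ottmann for large graphs, which is not reproduced here):

```python
def cross_sign(a, b, c):
    """Signed area test: > 0 if c lies to the left of the directed line a -> b."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, p3, p4):
    """True if the open segments (p1, p2) and (p3, p4) properly intersect."""
    d1, d2 = cross_sign(p3, p4, p1), cross_sign(p3, p4, p2)
    d3, d4 = cross_sign(p1, p2, p3), cross_sign(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_crossings(X, edges):
    """Brute-force O(|E|^2) crossing count; skips edge pairs sharing an endpoint."""
    count = 0
    for a in range(len(edges)):
        for b in range(a + 1, len(edges)):
            (i, j), (k, l) = edges[a], edges[b]
            if len({i, j, k, l}) == 4 and segments_cross(X[i], X[j], X[k], X[l]):
                count += 1
    return count
```

The same predicate supplies the labels $t^{(i)}$ when training the neural crossing detector.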
\subsection{Crossing Angle Maximization} When edge crossings are unavoidable, the graph drawing can still be easier to read when edges cross at angles close to 90 degrees~\cite{ware2002cognitive}. Heuristics such as those by Demel et al.~\cite{Demel2018AGH} and Bekos et al.~\cite{Bekos18} have been proposed and have been successful in graph drawing challenges~\cite{devanny2017graph}. We use an approach similar to the force-directed algorithm given by Eades et al.~\cite{eades2010force} and minimize the squared cosine of crossing angles: \begin{align} L_{CAM} = \sum_{\substack{\text{all crossed edge pairs }\\(i,j), (k,l) \in E}}
(\frac{\langle X_{i}-X_{j}, X_{k}-X_{l}\rangle}{|X_{i}-X_{j}|\cdot|X_{k}-X_{l}|})^2 \end{align} We evaluate quality by measuring the worst (normalized) absolute discrepancy between each crossing angle $\theta$ and the target crossing angle (i.e. 90 degrees): $
Q_{CAM} = \max_{\theta} |\theta - \frac{\pi}{2}| / \frac{\pi}{2} $. As with crossing numbers, for large graphs we sample a subset \blue{(by default, of size $m=16$)} of edge pairs and consider the crossing angles of those pairs that actually cross. Again, if there are not many crossing pairs, we use an efficient algorithm to find all crossings.
When optimizing the number of crossings and crossing angles simultaneously, we sample from the same pool of crossings formed via the Bentley-Ottmann algorithm.
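The per-pair loss term and the quality measure $Q_{CAM}$ can be sketched as follows (function names are illustrative):

```python
import numpy as np

def crossing_angle_loss(xi, xj, xk, xl):
    """Squared cosine of the angle between edges (i, j) and (k, l)."""
    u = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    v = np.asarray(xk, dtype=float) - np.asarray(xl, dtype=float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(c * c)

def q_cam(crossing_angles):
    """Worst normalized absolute deviation of crossing angles from 90 degrees."""
    return max(abs(t - np.pi / 2) / (np.pi / 2) for t in crossing_angles)
```

The loss vanishes for perpendicular crossings and reaches its maximum of 1 for (near-)parallel crossing edges.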
\subsection{Aspect Ratio}
Good use of drawing area is often measured by the aspect ratio~\cite{duncan1998balanced} of the bounding box of the drawing, with 1:1 as the optimum. \blue{ The idea here is to consider different rotations of the current layout and try to ``squarify'' the corresponding bounding boxes. In practice,
we rely on the singular values of the matrix of node coordinates to approximate the worst aspect ratio. Formally, assume the vertex coordinates are centered with zero mean and let $X$ denote the matrix whose rows are the (centered) vertex coordinates. Since the coordinates are two-dimensional, $X$ has only two (non-zero) singular values, denoted $\sigma_1$ and $\sigma_2$, each measuring the standard deviation of the layout along one of two orthogonal directions. We then approximate the aspect ratio by the quotient of the two singular values of $X$ and encourage this ratio to be close to the target ratio $r=1$ using the cross entropy (CE) in Eq.~\ref{eq:cross-entropy}: $$ L_{AR} = CE(\frac{\sigma_2}{\sigma_1}, r)
$$ Note that although we only consider 1:1 ratios, the cross-entropy formulation lets us target arbitrary ratios. During mini-batch SGD, we simply sample a subset of nodes (by default, of size $m=128$) and use the singular values of the matrix formed by the subset to optimize the aspect ratio. }
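The SVD-based loss can be sketched as follows (a minimal numpy sketch, assuming the full coordinate matrix rather than a mini-batch):

```python
import numpy as np

def aspect_ratio_loss(X, r=1.0, eps=1e-9):
    """L_AR = CE(sigma2/sigma1, r) for the centered coordinate matrix X."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                      # center: zero-mean coordinates
    s = np.linalg.svd(Xc, compute_uv=False)      # singular values, descending
    ratio = np.clip(s[1] / s[0], eps, 1.0 - eps)
    return float(-r * np.log(ratio) - (1.0 - r) * np.log(1.0 - ratio))
```

With $r=1$ the loss reduces to $-\log(\sigma_2/\sigma_1)$, which approaches 0 as the layout becomes square and grows as it becomes elongated.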
Finally, we evaluate the drawing quality by measuring the worst aspect ratio over a finite set of rotations. The quality score ranges from 0 to 1, where 1 is optimal and the minimum ratio over the sampled rotations is the worst case: $ Q_{AR} = \min_{ \theta \in \{
\frac{2\pi k}{N}, \text{ for } k=0, \cdots (N-1)
\} } \frac{\min(w_{\theta}, h_{\theta})}{\max(w_{\theta}, h_{\theta})} $, where $N$ is the number of rotations sampled (e.g., $N=7$), and $w_{\theta}$, $h_{\theta}$ are the width and height of the bounding box when rotating the drawing around its center by an angle $\theta$.
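The rotation-based quality measure $Q_{AR}$ can be sketched as follows (illustrative only; $N$ defaults to 7 as in the text):

```python
import numpy as np

def q_ar(X, n_rotations=7):
    """Q_AR: worst bounding-box aspect ratio over N sampled rotations (1 is best)."""
    Xc = np.asarray(X, dtype=float)
    Xc = Xc - Xc.mean(axis=0)
    worst = 1.0
    for k in range(n_rotations):
        t = 2.0 * np.pi * k / n_rotations
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        Y = Xc @ R.T                              # rotate layout by angle t
        w, h = Y.max(axis=0) - Y.min(axis=0)      # bounding-box width and height
        worst = min(worst, min(w, h) / max(w, h))
    return worst
```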
\subsection{Angular Resolution} Distributing the edges incident to a node evenly makes it easier to perceive the information presented in a node-link diagram~\cite{huang2013}. Angular resolution~\cite{Argyriou2010}, defined as the minimum angle between incident edges, is one way to quantify this goal.
Formally,
$ ANR = \min_{j \in V} \min_{(i,j),(j,k) \in E} \varphi_{ijk} $, where $\varphi_{ijk}$ is the angle formed between edges $(i,j)$ and $(j,k)$. Note that for any given graph, an upper bound on this quantity is $\frac{2\pi}{d_{max}}$, where $d_{max}$ is the maximum degree in the graph. Therefore, in the evaluation we use this upper bound to normalize our quality measure to $[0,1]$, i.e., $ Q_{ANR} = \frac{ANR}{2\pi / d_{max}} $. To achieve better drawing quality via gradient descent, we define the angular energy of an angle $\varphi$ to be $e^{-s \cdot \varphi}$, where $s$ is a constant controlling the sensitivity of the angular energy with respect to the angle (by default $s=1$), and minimize the total angular energy over all incident edges:
\begin{align} L_{ANR} = \sum_{(i,j),(j,k) \in E} e^{-s \cdot \varphi_{ijk}}
\end{align}
When the graph is large, it is expensive to compute the energy of all pairs of incident edges. Therefore, in {$(SGD)^2$}\xspace we randomly sample a minibatch of pairs of incident edges \blue{(by default, of size $m=128$)} and minimize the energy of the sample accordingly.
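The angular energy and the quality measure can be sketched as follows (a minimal numpy sketch over a precomputed list of incident-edge angles; function names are illustrative):

```python
import numpy as np

def angle_at(xj, xi, xk):
    """phi_ijk: the angle at node j between edges (j, i) and (j, k)."""
    u = np.asarray(xi, dtype=float) - np.asarray(xj, dtype=float)
    v = np.asarray(xk, dtype=float) - np.asarray(xj, dtype=float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def angular_energy(angles, s=1.0):
    """L_ANR: total angular energy exp(-s * phi) over incident-edge angles."""
    return float(sum(np.exp(-s * a) for a in angles))

def q_anr(angles, d_max):
    """Q_ANR: minimum angle normalized by its upper bound 2*pi/d_max."""
    return min(angles) / (2.0 * np.pi / d_max)
```

Since $e^{-s\varphi}$ decays with $\varphi$, minimizing the total energy pushes incident edges apart.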
\begin{figure}
\caption{Optimizing Planar Graphs: (a) An initial layout without crossings, (b) A layout after optimizing stress while maintaining planarity.}
\label{fig:maintian_planarity}
\end{figure}
\begin{figure}
\caption{(a) The edge uniformity loss is increasing when we optimize the stress of a nested triangular graph, (b) The loss is decreasing when we update the coordinates carefully.}
\label{fig:maintian_EU_triangular}
\end{figure}
\subsection{Node Resolution} Good node resolution is associated with the ability to distinguish different nodes by preventing nodes from occluding each other. Node resolution is typically defined as the minimum Euclidean distance between two nodes in the drawing~\cite{chrobak1996convex,schulz2011drawing}. However, in order to align with the units in other objectives such as stress, we normalize the minimum Euclidean distance with respect to a reference value. Hence we define the node resolution to be the ratio between the shortest and longest distances between pairs of nodes in the drawing, $
VR = \frac{\min_{i \neq j}||X_i - X_j||}{r \cdot d_{max}}
$, where $d_{max} = \max_{k,l}||X_k - X_l||$. To achieve a certain target resolution $r \in [0,1]$ by minimizing a loss function, we minimize
\begin{align} L_{VR} = \sum_{i,j \in V, i \neq j} \max\left( 0, 1 - \frac{||X_i - X_j||}{r \cdot d_{max}} \right)^2 \end{align}
In practice, we set the target resolution to be $r=\frac{1}{\sqrt{|V|}}$, where $|V|$ is the number of nodes in the graph. In this way, an optimal drawing will distribute nodes uniformly in the drawing area. Each term in the summation vanishes when the distance between two nodes meets the required resolution $r$, otherwise it is greater than zero. In the evaluation, we report, as a quality measure, the ratio between the actual and target resolution and cap its value between $0$ (worst) and $1$ (best).
$
Q_{VR} = \min(1.0, \frac{\min_{i,j} ||X_i - X_j||}{r \cdot d_{max}}) $
For large graphs, we sample a subset of nodes \blue{(by default, of size $m=256$)} and compute the approximate loss of node resolution on the sample.
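A sketch of $L_{VR}$ over all node pairs (illustrative; the actual system samples mini-batches, and here each unordered pair is counted once):

```python
import numpy as np

def node_resolution_loss(X, r=None):
    """L_VR with target resolution r (default 1/sqrt(|V|))."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    r = (1.0 / np.sqrt(n)) if r is None else r
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d_max = dist.max()                            # longest pairwise distance
    iu = np.triu_indices(n, k=1)                  # each pair i < j once
    terms = np.maximum(0.0, 1.0 - dist[iu] / (r * d_max)) ** 2
    return float(terms.sum())
```

Each term vanishes once the pair's distance reaches the required resolution $r \cdot d_{max}$.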
\subsection{Gabriel Graph Property} A graph is a Gabriel graph if it can be drawn in such a way that any disk formed by using an edge in the graph as its diameter contains no other nodes. Not all graphs are Gabriel graphs, but drawing a graph so that as many of these edge-based disks as possible are empty of other nodes has been associated with good readability~\cite{10.1007/978-3-319-27261-0_41}. This property can be enforced by a repulsive force around the midpoints of edges. Formally, we establish a repulsive field with radius $r_{ij}$ equal to half of the edge length around the midpoint $c_{ij}$ of each edge $(i,j) \in E$, and we minimize the total potential energy:
\begin{align}
L_{GA} = \sum_{ \substack{
(i,j) \in E,\\
k \in V \setminus \{i,j\} }}
\max(0, r_{ij} - |X_k - c_{ij}|)^2
\label{eq:gabriel} \end{align} where
$ c_{ij} = \frac{X_i + X_j}{2} $ and $
r_{ij} = \frac{|X_i - X_j|}{2} $. We use the (normalized) minimum distance from nodes to centers to characterize the quality of a drawing with respect to the Gabriel graph property: $
Q_{GA} = \min (1, \min_{(i,j) \in E, k \in V}\frac{|X_k - c_{ij}|}{r_{ij}}) $.
For large graphs, we sample a mini-batch of node-edge pairs \blue{(by default, of size $m=64$)} and compute the approximate loss from the sample.
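The potential energy in Eq.~\ref{eq:gabriel} can be sketched as follows (a minimal Python sketch over all node-edge pairs rather than a mini-batch):

```python
import numpy as np

def gabriel_loss(X, edges):
    """L_GA: penalize nodes falling inside the disk whose diameter is an edge."""
    X = np.asarray(X, dtype=float)
    total = 0.0
    for (i, j) in edges:
        c = (X[i] + X[j]) / 2.0                   # midpoint of edge (i, j)
        r = np.linalg.norm(X[i] - X[j]) / 2.0     # radius: half the edge length
        for k in range(len(X)):
            if k != i and k != j:
                total += max(0.0, r - np.linalg.norm(X[k] - c)) ** 2
    return total
```

Each term is zero whenever node $k$ lies outside the disk of edge $(i,j)$, so a Gabriel drawing has zero loss.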
\begin{comment}
\subsection{\color{red}Symmetry}
Different metrics have been proposed to measure symmetry~\cite{klapaukh2014empirical,klapaukh2018symmetry,brandes2007eigensolver,deluca2017experimental,dunne2015readability,gansner2005graph,koren2002ace,maaten2008visualizing,ortmann2016sparse}. Recently, Meidiana et al.~\cite{meidiana2020quality} proposed an efficient metric to compute symmetry, however, to use this metric, we need to have prior knowledge about the automorphism of the underlying graph. We use a similar metric proposed in~\cite{purchase2002metrics}. We make some modifications to fit the metric in our optimization framework. We consider a bisector of each line connecting a node pair. These bisectors are considered as symmetric axes. For each edge pair $e_1, e_2$ we take the reflection of $e_1$ with respect to a bisector. Then we compute the distance of the end nodes between $e_2$ and the reflection of $e_1$. If the distance is larger than a threshold we clip the distance to that threshold. Then we sum up the distances. We denote the bisector of node $u, v$ by $B(u, v)$, the distance of edge pair $e_1, e_2$ with respect to $B(u, v)$ by $d_{B(u, v)}(e_1, e_2)$, and the threshold by $t$. Then the loss value of symmetry, $$L_{SY} = \sum_{u, v \in V} \sum_{e_!, e_2 \in E} min(d_{B(u, v)}(e_1, e_2), t)$$. For our experiments we keep $t=1$. We provide an example of a symmetric layout computed using our framework in Figure~\ref{fig:optimize_symmetry_torch}.
\begin{figure}
\caption{Optimizing symmetry: (a) A random initial layout of a symmetric graph, (b) A layout after optimizing symmetry.}
\label{fig:optimize_symmetry_torch}
\end{figure} \end{comment}
\begin{algorithm}[t] \DontPrintSemicolon \caption{\blue{Update coordinates without quality decline}}\label{alg:safe_update_2} \Fn{SafeUpdate($X_{prev}, X_{new}; G, Q_q$)}{
$X \leftarrow X_{prev}$\;
$q_0 \leftarrow Q_q(X; G)$\;
\For{each node $u \in V$}{
$X[u] \leftarrow X_{new}[u]$\;
$q_{tmp} \leftarrow Q_q(X)$\;
\If{QualityDeclines($q_0, q_{tmp}$)}{
$X[u] \leftarrow X_{prev}[u]$\;
}
}
return X\; } \end{algorithm}
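Algorithm~\ref{alg:safe_update_2} can be sketched in Python as follows (illustrative; `quality` stands for any quality measure $Q_q$, and we assume smaller values are better unless stated otherwise):

```python
def safe_update(x_prev, x_new, quality, smaller_is_better=True):
    """SafeUpdate: accept each node's proposed move only if the layout quality
    does not decline relative to the previous layout.

    x_prev, x_new: dicts mapping node -> coordinate; quality: layout dict -> float.
    """
    x = dict(x_prev)
    q0 = quality(x)
    for u in x:
        old = x[u]
        x[u] = x_new[u]                  # tentatively move node u
        q = quality(x)
        declined = (q > q0) if smaller_is_better else (q < q0)
        if declined:
            x[u] = old                   # revert the move for this node
    return x
```

As noted in the text, `quality` is re-evaluated after every single-node move, which is the source of the overhead on large graphs.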
\begin{figure*}
\caption{\blue{Compatible pairs}}
\caption{\blue{Better pairs}}
\caption{\blue{Worse pairs}}
\label{fig:three-pairs}
\end{figure*}
\begin{figure*}
\caption{ \blue{ Drawings of optimizing single or pair of criteria on a 6x10 grid with 60 nodes (lower left) or a 5-level balanced binary tree with 63 nodes (upper right). The sample size and weight for each criterion are shown in the diagonal entries of the figure. } }
\label{fig:analysis-pairs-drawings}
\end{figure*}
\subsection{Optimizing Layouts without Quality Decline}
Many optimization criteria tend to produce drawings with fewer crossings, but cannot guarantee a crossing-free drawing. Stress is one such criterion: minimizing it does not guarantee a crossing-free layout even when the graph is planar (or even a tree). One reason for the popularity of stress-based layout methods is that they capture the underlying graph topology well. On the other hand, algorithms that are guaranteed to produce planar drawings (for planar graphs) are well known to dramatically distort the graph topology.
Our crossing minimization is a soft constraint and does not guarantee a planar drawing even when the graph is planar. Hence, we provide an extra feature in our system: if we start with a planar drawing, we can optimize any of the criteria above while guaranteeing that no edge crossings arise at any time. To do this, we add one additional test to every gradient descent step: for each proposed coordinate, we first check whether moving a node to this coordinate would introduce a crossing, and if so, we do not update the coordinate. Starting from an initial planar drawing, this method improves the layout while keeping it planar and preserving the topology; see Fig.~\ref{fig:maintian_planarity}. This technique can be useful in other force-directed algorithms that do not guarantee crossing-free drawings even when the initial layout has no crossings~\cite{fowler2012planar}.
We can generalize this idea to any graph and any criterion. In the scenario above, we kept the number of crossings at zero because we started with a planar layout. If the graph is non-planar, we can update the coordinates in the same way so that the number of crossings in the progressive layouts is non-increasing. Similarly, we can require other criteria, for example the edge uniformity loss, to be non-increasing across the progressive layouts. If we are optimizing another criterion, for example stress, there is no guarantee that edge uniformity will improve; see Fig.~\ref{fig:maintian_EU_triangular}. \blue{The general algorithm for safely updating the layout with respect to any quality measure is described in Algorithm~\ref{alg:safe_update_2}. Note that Algorithm~\ref{alg:safe_update_2} is applied in each SGD iteration of Algorithm~\ref{alg:gd2}. Furthermore, it iterates over all nodes in the graph, and the quality measure is evaluated every time an intermediate layout is generated by a single node update. Hence, this approach does not scale well to large graphs when the quality measure has high computational overhead. }
\section{Experimental Evaluation} In this section, we assess the effectiveness and limitations of our approach. Since multiple criteria are not necessarily compatible with each other during optimization, we first identify all compatible pairs of criteria. After identifying all compatible pairs, we hand-craft weighting schedules to optimize multiple criteria using {$(SGD)^2$}\xspace and compare our layouts from multicriteria optimization with classic drawing algorithms. Finally, we analyze the runtime behavior and the impact of sample size in our approach.
\subsection{Interactions between Criteria}\label{sect:analysis-of-criteria-pairs}
\blue{We test the interactions between every pair of criteria using two regular graphs, a 6x10 grid (60 nodes) and a balanced binary tree with depth 5 (63 nodes). Before dealing with pairs of criteria, we first test every single criterion to establish a lower bound of the quality measure. In this section, we invert some quality measures (e.g., neighborhood preservation, and angular resolution) such that lower values are always better in all quality measures. Then, we optimize every pair of criteria, monitor the quality measures of the pair over the course of training, and compare them with the corresponding lower bound found when optimizing each single criterion.}
\blue{As expected, we observe that all but one criterion, when optimized on their own, improve or maintain high quality during improvement iterations. The exception is crossing angle maximization, with a quality measure that depends on the worst crossing in the graph. The initial random layout usually has many crossings and maximizing crossing angles on its own (e.g., without also minimizing the number of crossings) does not necessarily lead to high-quality results. Later we will see that optimizing other criteria together with crossing angle maximization helps. Further, when weight factors can be adjusted with a schedule, we recommend assigning positive weight to crossing angle maximization only at the later iterations.}
\blue{When optimizing pairs of aesthetic criteria, we see three types of pairs: compatible pairs, better pairs and worse pairs. Fig.~\ref{fig:three-pairs} shows an example for each of the three cases. Most pairs are compatible pairs. For example, stress minimization is compatible with most other drawing aesthetics, as the qualities of both optimization goals can improve over time and they both achieve their lower bounds. Some pairs of criteria even do better together than alone. For example, crossing angle maximization together with stress minimization leads to better results than just crossing angle maximization, confirming the results of Huang et al.~\cite{huang2013}. A few pairs of criteria are not fully compatible, leading to worse joint optimization. For example, when simultaneously optimizing vertex resolution and angular resolution, neither value can reach their corresponding lower bound.}
\begin{figure*}
\caption{ \blue{ Learning curves of optimizing pairs of criteria on a 6x10 grid with 60 nodes (lower left) or a 5-level balanced binary tree with 63 nodes (upper right). Better pairs are highlighted in green; worse pairs are highlighted in red. The sample size and weight for each criterion are shown in the diagonal entries of the figure. In this figure, some quality measures (neighborhood preservation, aspect ratio, angular resolution, vertex resolution and gabriel) are inverted $Q \mapsto 1-Q$, some (stress and ideal edge length) are normalized to the [0,1] interval so that among all criteria smaller values are always better and the worst value is always 1. } }
\label{fig:analysis-pairs-learning-curves}
\end{figure*}
\blue{Out of all 36 pairs, we find 20 compatible pairs, 9 better pairs and 7 worse pairs for the $6 \times 10$ grid; for the binary tree with depth 5, we find 13 compatible pairs, 9 better pairs and 14 worse pairs. Comparing the compatibility between the two graphs, we note that all worse pairs in the grid are also worse pairs in the tree, and most of the better and compatible pairs are shared between the two graphs. See the additional figures (Fig.~\ref{fig:analysis-pairs-drawings} and \ref{fig:analysis-pairs-learning-curves}) for the drawings and quality curves of all criteria pairs and singletons. It is worth noting that the compatibility of criteria also depends on the specific choice of weight factors. For example, a dominating criterion with a large weight can deteriorate the quality of the other. In this analysis, we assign a fixed weight factor (and sample size) to each criterion so that every pair yields a reasonable outcome.}
\begin{figure*}\label{fig:all-drawings-partial}
\end{figure*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.081 & \textbf{0.078} & \textbf{0.013} & \textbf{0.027} & 0.076 & \textbf{0.062} & 0.083 & \textbf{0.022} \\
sfdp & 0.080 & 0.133 & 0.024 & 0.052 & 0.099 & 0.071 & \textbf{0.081} & 0.029 \\
{$(SGD)^2$}\xspace (ST) & \textbf{0.079} & \textbf{0.078} & \textbf{0.013} & \textbf{0.027} & \textbf{0.071} & \textbf{0.062} & 0.082 & \textbf{0.022} \\
{$(SGD)^2$}\xspace (NP) & 0.188 & 0.233 & 0.065 & 0.241 & 0.825 & 0.149 & 0.279 & 0.275 \\
{$(SGD)^2$}\xspace (ST+IL) & 0.107 & 0.100 & 0.033 & 0.054 & 0.090 & 0.100 & 0.119 & 0.043 \\
{$(SGD)^2$}\xspace (ST+NP) & 0.188 & 0.106 & \textbf{0.013} & 0.051 & 0.739 & 0.080 & 0.178 & 0.059 \\
{$(SGD)^2$}\xspace (ST+CR) & 0.190 & \textbf{0.078} & \textbf{0.013} & 0.028 & 0.079 & 0.073 & 0.091 & 0.045 \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.099 & \textbf{0.078} & 0.015 & 0.029 & 0.075 & 0.063 & 0.094 & 0.029 \\
{$(SGD)^2$}\xspace (ST+AR) & 0.080 & 0.081 & 0.055 & \textbf{0.027} & 0.075 & 0.067 & 0.084 & 0.023 \\
{$(SGD)^2$}\xspace (ST+VR) & 0.083 & 0.080 & \textbf{0.013} & 0.032 & 0.073 & 0.063 & 0.088 & 0.023 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.080 & \textbf{0.078} & \textbf{0.013} & \textbf{0.027} & \textbf{0.071} & \textbf{0.062} & 0.083 & \textbf{0.022} \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.089 & 0.098 & 0.020 & 0.036 & 0.080 & 0.084 & 0.096 & 0.027 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.083 & 0.119 & 0.061 & 0.134 & 0.208 & 0.068 & 0.110 & 0.205 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Stress (ST)}} \label{tab:quality-table-stress} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.028 & 0.005 & \textbf{0.002} & 0.099 & 0.484 & 0.026 & 0.299 & 0.280 \\
sfdp & 0.018 & 0.143 & 0.051 & 0.271 & 0.585 & 0.052 & 0.279 & 0.346 \\
{$(SGD)^2$}\xspace (ST) & 0.062 & 0.103 & 0.053 & 0.114 & 0.526 & 0.111 & 0.320 & 0.293 \\
{$(SGD)^2$}\xspace (NP) & 0.679 & 0.867 & 0.351 & 0.884 & 3.348 & 0.649 & 1.083 & 1.312 \\
{$(SGD)^2$}\xspace (ST+IL) & \textbf{0.004} & \textbf{0.003} & \textbf{0.002} & \textbf{0.009} & \textbf{0.462} & \textbf{0.002} & \textbf{0.249} & \textbf{0.245} \\
{$(SGD)^2$}\xspace (ST+NP) & 0.679 & 0.227 & 0.056 & 0.434 & 1.608 & 0.418 & 0.749 & 0.519 \\
{$(SGD)^2$}\xspace (ST+CR) & 0.517 & 0.103 & 0.053 & 0.139 & 0.593 & 0.205 & 0.394 & 0.482 \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.109 & 0.103 & 0.058 & 0.132 & 0.595 & 0.141 & 0.497 & 0.368 \\
{$(SGD)^2$}\xspace (ST+AR) & 0.061 & 0.112 & 0.203 & 0.114 & 0.533 & 0.148 & 0.334 & 0.302 \\
{$(SGD)^2$}\xspace (ST+VR) & 0.083 & 0.257 & 0.067 & 0.159 & 0.569 & 0.155 & 0.399 & 0.309 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.061 & 0.098 & 0.053 & 0.092 & 0.521 & 0.111 & 0.338 & 0.291 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.016 & 0.027 & 0.012 & 0.029 & 0.486 & 0.027 & 0.277 & 0.268 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.082 & 0.178 & 0.116 & 0.242 & 0.702 & 0.147 & 0.388 & 0.549 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Ideal Edge Length (IL)}} \label{tab:quality-table-ideal_edge_length} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.723 & 0.718 & \textbf{0.000} & 0.474 & 0.846 & 0.558 & 0.699 & 0.545 \\
sfdp & 0.571 & 0.592 & 0.063 & 0.533 & 0.750 & 0.651 & 0.584 & 0.653 \\
{$(SGD)^2$}\xspace (ST) & 0.500 & 0.721 & \textbf{0.000} & 0.503 & 0.832 & 0.553 & 0.657 & 0.548 \\
{$(SGD)^2$}\xspace (NP) & \textbf{0.400} & 0.225 & 0.276 & 0.487 & \textbf{0.659} & 0.480 & \textbf{0.428} & 0.516 \\
{$(SGD)^2$}\xspace (ST+IL) & 0.667 & 0.701 & \textbf{0.000} & 0.518 & 0.867 & 0.674 & 0.766 & 0.665 \\
{$(SGD)^2$}\xspace (ST+NP) & \textbf{0.400} & \textbf{0.194} & \textbf{0.000} & 0.467 & 0.664 & \textbf{0.347} & 0.472 & \textbf{0.441} \\
{$(SGD)^2$}\xspace (ST+CR) & 0.481 & 0.714 & \textbf{0.000} & \textbf{0.420} & 0.765 & 0.532 & 0.582 & 0.567 \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.681 & 0.721 & 0.042 & 0.531 & 0.851 & 0.552 & 0.763 & 0.631 \\
{$(SGD)^2$}\xspace (ST+AR) & 0.500 & 0.727 & 0.418 & 0.492 & 0.842 & 0.560 & 0.757 & 0.584 \\
{$(SGD)^2$}\xspace (ST+VR) & 0.750 & 0.708 & \textbf{0.000} & 0.609 & 0.834 & 0.527 & 0.709 & 0.570 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.696 & 0.727 & \textbf{0.000} & 0.566 & 0.831 & 0.555 & 0.702 & 0.547 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.500 & 0.749 & \textbf{0.000} & 0.539 & 0.823 & 0.622 & 0.705 & 0.520 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.636 & \textbf{0.194} & 0.165 & 0.472 & 0.713 & 0.349 & 0.497 & 0.717 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Neighborhood Preservation (NP)}} \label{tab:quality-table-neighborhood_preservation} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 8 & 1 & \textbf{0} & 80 & 274 & 134 & 1896 & 8001 \\
sfdp & 9 & 2 & \textbf{0} & \textbf{72} & 141 & 109 & 1651 & \textbf{5447} \\
{$(SGD)^2$}\xspace (ST) & 10 & \textbf{0} & \textbf{0} & 73 & 283 & 133 & 1749 & 8367 \\
{$(SGD)^2$}\xspace (NP) & 10 & 5 & 109 & 155 & 216 & \textbf{80} & 1251 & 10736 \\
{$(SGD)^2$}\xspace (ST+IL) & 10 & \textbf{0} & \textbf{0} & 73 & 325 & 151 & 3924 & 13094 \\
{$(SGD)^2$}\xspace (ST+NP) & 6 & \textbf{0} & \textbf{0} & 111 & \textbf{131} & 94 & \textbf{265} & 6932 \\
{$(SGD)^2$}\xspace (ST+CR) & \textbf{0} & \textbf{0} & \textbf{0} & 73 & 187 & 133 & 347 & 8827 \\
{$(SGD)^2$}\xspace (ST+CAM) & 7 & \textbf{0} & 9 & 84 & 430 & 158 & 4673 & 12972 \\
{$(SGD)^2$}\xspace (ST+AR) & 10 & \textbf{0} & 2 & \textbf{72} & 303 & 142 & 2836 & 8628 \\
{$(SGD)^2$}\xspace (ST+VR) & 7 & \textbf{0} & \textbf{0} & 79 & 282 & 134 & 2983 & 8874 \\
{$(SGD)^2$}\xspace (ST+GB) & 6 & \textbf{0} & \textbf{0} & 75 & 264 & 134 & 2034 & 8305 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 10 & 24 & \textbf{0} & \textbf{72} & 302 & 171 & 2793 & 8241 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 6 & 2 & 96 & 105 & 173 & 103 & 1333 & 18276 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Crossings (CR)}} \label{tab:quality-table-crossings} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.254 & 0.419 & \textbf{0.000} & 0.786 & 0.956 & 0.927 & 0.970 & 0.999 \\
sfdp & 0.622 & 0.235 & \textbf{0.000} & 0.806 & 0.972 & 0.970 & 0.963 & 1.000 \\
{$(SGD)^2$}\xspace (ST) & 0.601 & \textbf{0.000} & \textbf{0.000} & 0.975 & 0.923 & 0.946 & 0.983 & 0.990 \\
{$(SGD)^2$}\xspace (NP) & 0.969 & 0.808 & 0.945 & 0.999 & 0.913 & 0.939 & 0.988 & 0.992 \\
{$(SGD)^2$}\xspace (ST+IL) & 0.602 & \textbf{0.000} & \textbf{0.000} & 0.575 & 0.970 & 0.837 & 0.989 & 0.998 \\
{$(SGD)^2$}\xspace (ST+NP) & 0.969 & \textbf{0.000} & \textbf{0.000} & 1.000 & \textbf{0.838} & 0.864 & 0.958 & 0.992 \\
{$(SGD)^2$}\xspace (ST+CR) & \textbf{0.000} & \textbf{0.000} & \textbf{0.000} & 0.940 & 0.885 & 0.889 & 0.955 & \textbf{0.984} \\
{$(SGD)^2$}\xspace (ST+CAM) & \textbf{0.000} & \textbf{0.000} & 0.810 & 0.646 & 0.937 & \textbf{0.622} & 0.973 & 0.993 \\
{$(SGD)^2$}\xspace (ST+AR) & 0.605 & \textbf{0.000} & 0.757 & 0.712 & 0.985 & 0.965 & 0.978 & 0.995 \\
{$(SGD)^2$}\xspace (ST+VR) & 0.164 & \textbf{0.000} & \textbf{0.000} & 0.550 & 0.943 & 0.883 & 0.975 & 0.995 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.524 & \textbf{0.000} & \textbf{0.000} & 0.834 & 0.945 & 0.937 & 0.992 & 0.991 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.603 & 0.725 & \textbf{0.000} & \textbf{0.421} & 0.935 & 0.961 & 0.973 & 0.997 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.323 & 0.143 & 0.954 & 0.963 & 0.913 & 0.631 & \textbf{0.926} & 0.991 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Crossing Angle Maximization (CAM)}} \label{tab:quality-table-crossing_angle_maximization} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.062 & 0.145 & 0.483 & 0.010 & 0.143 & 0.314 & 0.065 & \textbf{0.012} \\
sfdp & 0.068 & 0.084 & 0.536 & 0.010 & 0.192 & 0.452 & 0.095 & 0.018 \\
{$(SGD)^2$}\xspace (ST) & 0.049 & 0.150 & 0.470 & \textbf{0.002} & 0.155 & 0.315 & 0.071 & 0.094 \\
{$(SGD)^2$}\xspace (NP) & \textbf{0.047} & 0.124 & 0.481 & 0.176 & 0.162 & \textbf{0.049} & 0.109 & 0.282 \\
{$(SGD)^2$}\xspace (ST+IL) & \textbf{0.048} & 0.169 & 0.507 & 0.018 & 0.160 & 0.470 & 0.157 & 0.045 \\
{$(SGD)^2$}\xspace (ST+NP) & 0.049 & 0.126 & 0.479 & \textbf{0.002} & 0.192 & 0.209 & 0.077 & 0.101 \\
{$(SGD)^2$}\xspace (ST+CR) & 0.134 & 0.149 & 0.473 & 0.028 & 0.136 & 0.368 & 0.030 & 0.079 \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.068 & 0.149 & 0.449 & 0.009 & 0.188 & 0.288 & \textbf{0.028} & 0.141 \\
{$(SGD)^2$}\xspace (ST+AR) & \textbf{0.047} & \textbf{0.017} & \textbf{0.048} & 0.008 & \textbf{0.118} & 0.154 & 0.043 & 0.057 \\
{$(SGD)^2$}\xspace (ST+VR) & 0.068 & 0.135 & 0.466 & 0.009 & 0.152 & 0.300 & 0.044 & 0.091 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.069 & 0.154 & 0.470 & 0.013 & 0.155 & 0.318 & 0.037 & 0.093 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & \textbf{0.048} & 0.197 & 0.508 & 0.045 & 0.178 & 0.269 & 0.058 & 0.084 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.101 & 0.231 & 0.455 & 0.242 & 0.359 & 0.282 & 0.074 & 0.106 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Aspect Ratio (AR)}} \label{tab:quality-table-aspect_ratio} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.753 & 0.749 & 0.525 & 0.999 & 0.998 & 1.000 & 1.000 & \textbf{1.000} \\
sfdp & 0.459 & 0.908 & 0.547 & 0.996 & 0.999 & 0.996 & 1.000 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST) & \textbf{0.401} & 0.750 & 0.524 & 0.989 & \textbf{0.993} & 0.941 & 0.999 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (NP) & 0.928 & 0.976 & 0.994 & 0.996 & 1.000 & 0.984 & 1.000 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+IL) & 0.517 & 0.844 & 0.497 & 0.939 & 0.999 & 0.932 & 0.999 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+NP) & 0.952 & 0.466 & \textbf{0.487} & 1.000 & 0.999 & 0.998 & 0.999 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+CR) & 0.746 & 0.769 & 0.526 & 0.996 & 0.998 & 0.969 & \textbf{0.998} & \textbf{0.999} \\
{$(SGD)^2$}\xspace (ST+CAM) & 1.000 & 0.745 & 0.871 & 0.913 & 1.000 & 0.999 & 1.000 & \textbf{0.999} \\
{$(SGD)^2$}\xspace (ST+AR) & 0.403 & 0.779 & 0.991 & 0.963 & 0.998 & 0.994 & 0.999 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+VR) & 0.760 & 0.979 & 0.540 & 0.936 & 1.000 & 0.914 & 1.000 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+GB) & 0.573 & 0.750 & 0.528 & \textbf{0.868} & 1.000 & 0.903 & 1.000 & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & \textbf{0.401} & \textbf{0.327} & 0.528 & 0.990 & 1.000 & \textbf{0.740} & \textbf{0.999} & \textbf{0.999} \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.677 & 0.418 & 0.974 & 0.995 & 0.999 & 0.823 & \textbf{0.999} & \textbf{1.000} \\ \hline \end{tabular} \caption{\blue{Quality Measures of Angular Resolution (ANR)}} \label{tab:quality-table-angular_resolution} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.637 & 0.735 & 0.362 & 0.765 & 0.948 & 0.780 & 0.870 & \textbf{0.931} \\
sfdp & 0.391 & 0.594 & 0.807 & 0.825 & 0.951 & 0.877 & 0.859 & 0.967 \\
{$(SGD)^2$}\xspace (ST) & 0.269 & 0.727 & 0.368 & 0.663 & 0.991 & 0.798 & 0.869 & 0.976 \\
{$(SGD)^2$}\xspace (NP) & 1.000 & 0.875 & 0.902 & 0.992 & 0.986 & 0.814 & 0.827 & 0.987 \\
{$(SGD)^2$}\xspace (ST+IL) & 0.334 & 0.705 & \textbf{0.354} & 0.788 & 0.993 & 0.973 & 0.994 & 0.993 \\
{$(SGD)^2$}\xspace (ST+NP) & 1.000 & 0.688 & 0.454 & 1.000 & 0.989 & 0.791 & 0.862 & 0.977 \\
{$(SGD)^2$}\xspace (ST+CR) & 0.570 & 0.762 & 0.379 & 0.789 & 0.954 & 0.908 & 0.785 & 0.950 \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.996 & 0.715 & 0.839 & 0.789 & 0.974 & 0.856 & 0.967 & 0.959 \\
{$(SGD)^2$}\xspace (ST+AR) & 0.278 & 0.682 & 0.633 & 0.698 & 0.938 & 0.802 & 0.963 & 0.991 \\
{$(SGD)^2$}\xspace (ST+VR) & \textbf{0.165} & \textbf{0.266} & 0.388 & \textbf{0.407} & \textbf{0.754} & \textbf{0.590} & \textbf{0.617} & 0.945 \\
{$(SGD)^2$}\xspace (ST+GB) & 0.529 & 0.737 & 0.365 & 0.630 & 0.895 & 0.858 & 0.913 & 0.986 \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.332 & 0.959 & 0.392 & 0.873 & 0.961 & 0.959 & 0.931 & 0.984 \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.355 & 0.615 & 0.795 & 0.744 & 0.831 & \textbf{0.590} & 0.751 & 0.993 \\ \hline \end{tabular} \caption{\blue{Quality Measures of Vertex Resolution (VR)}} \label{tab:quality-table-vertex_resolution} \end{table*}
\begin{table*}[thbp]
\begin{tabular}{l|rrrrrrrr} \hline
methods \textbackslash~graphs & dodecahedron & tree-2-6 & grid-12-24 & spx-teaser & 494-bus & grid1 & dwt-307 & dwt-1005 \\ \hline
neato & 0.429 & 0.595 & \textbf{0.000} & 0.933 & \textbf{1.000} & 0.920 & \textbf{1.000} & \textbf{1.000} \\
sfdp & 0.860 & 0.806 & 0.148 & 0.758 & \textbf{1.000} & 0.967 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST) & 0.677 & 0.130 & \textbf{0.000} & 0.795 & \textbf{1.000} & 0.963 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (NP) & 0.951 & 0.954 & 0.973 & 0.920 & \textbf{1.000} & 0.903 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+IL) & 0.679 & \textbf{0.000} & \textbf{0.000} & 0.834 & \textbf{1.000} & 0.860 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+NP) & 0.951 & 0.658 & \textbf{0.000} & \textbf{0.039} & \textbf{1.000} & 0.916 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+CR) & 0.704 & 0.205 & \textbf{0.000} & 0.967 & \textbf{1.000} & 0.988 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+CAM) & 0.695 & 0.199 & 0.799 & 0.836 & \textbf{1.000} & 0.905 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+AR) & 0.678 & 0.396 & 0.887 & 0.903 & \textbf{1.000} & 0.958 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+VR) & 0.437 & 0.957 & \textbf{0.000} & 0.862 & \textbf{1.000} & 0.987 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+GB) & \textbf{0.368} & 0.036 & \textbf{0.000} & 0.664 & \textbf{1.000} & \textbf{0.791} & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (ST+IL+ANR) & 0.681 & 0.745 & \textbf{0.000} & 0.630 & \textbf{1.000} & 0.967 & \textbf{1.000} & \textbf{1.000} \\
{$(SGD)^2$}\xspace (IL+NP+VR) & 0.421 & 0.800 & 0.982 & 0.926 & \textbf{1.000} & 0.972 & \textbf{1.000} & \textbf{1.000} \\ \hline \end{tabular} \caption{\blue{Quality Measures of Gabriel Property (GB)}} \label{tab:quality-table-gabriel} \end{table*}
\subsection{Quality Analysis}\label{sect:analysis-of-qualities}
\blue{We compare layouts obtained with {$(SGD)^2$}\xspace when optimizing different aesthetic goals to layouts obtained by neato~\cite{ellson2001graphviz} and sfdp~\cite{ellson2001graphviz}, which are classic implementations of stress-majorization and scalable force-directed methods.
Fig.~\ref{fig:all-drawings-partial} shows the layouts along with information about each graph. The graphs are chosen to represent a variety of classes such as trees, grids, regular shapes, and to also include real-world examples. In particular, the last four graphs in Fig.~\ref{fig:all-drawings-partial} are from the Sparse Matrix Collection~\cite{davis2011university} and are also used to evaluate stress minimization via SGD in~\cite{zheng2018graph}; see the supplementary materials for more layouts. }
\blue{ Next, we evaluate each layout on 9 readability criteria: stress (\texttt{ST}), vertex resolution (\texttt{VR}), ideal edge lengths (\texttt{IL}), neighborhood preservation (\texttt{NP}), crossings (\texttt{CR}), crossing angle (\texttt{CA}), angular resolution (\texttt{ANR}), aspect ratio (\texttt{AR}), and Gabriel graph property (\texttt{GB}). Our experiment uses 8 graphs and layouts computed by neato, sfdp, and multiple runs of {$(SGD)^2$}\xspace with various combinations of objectives. \blue{ Tables~\ref{tab:quality-table-stress} to~\ref{tab:quality-table-gabriel} summarize the first 4 of the 9 quality measures for the layouts in Fig.~\ref{fig:all-drawings-partial}. Further combinations of criteria used for {$(SGD)^2$}\xspace and the remaining quality measures are included in the supplementary materials. The quality measure for crossings is the actual number of edge crossings in the layout; for all other criteria, we use the formulas defined in Section~\ref{sect:properties-and-measures}. All quality measures are greater than or equal to zero, and lower values are better. In each column, the best score is shown in bold.} When optimizing via multicriteria {$(SGD)^2$}\xspace, we choose compatible pairs, better pairs, or compatible triples among the 9 criteria. When optimizing incompatible pairs or triples, we fix the number of iterations in {$(SGD)^2$}\xspace, select and prioritize one criterion (or compatible pair) in an early stage of the optimization, and postpone the others to a later stage. For example, when simultaneously optimizing ideal edge length (IL), neighborhood preservation (NP), and vertex resolution (VR), we assign zero weight to VR and positive weights to IL and NP in the first half of the iterations. Then we gradually decrease the weights of IL and NP to 0 (by a smooth function that interpolates between the highest and lowest weights) and increase the weight of VR in the second half of the iterations with a similar smooth growth function. 
At each stage, we interpolate the two weight levels of each criterion $w_{start}$ and $w_{stop}$ between the start and stopping iterations $t_{start}$ to $t_{stop}$ by a scaled and translated smooth-step function $g(t)$: $$ g(t) = (w_{stop}-w_{start}) \cdot f(\frac{t-t_{start}}{t_{stop}-t_{start}}) + w_{start} $$ where $f(x) = 3x^2-2x^3$ for $x \in [0,1]$ is typically called the (standard) smooth-step function. }
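The two-stage weight scheduling described above can be sketched in a few lines of Python. This is a minimal illustration of $g(t)$ and the smooth-step $f(x)$; the function and variable names, iteration counts, and weight levels are illustrative choices, not those of the actual implementation:

```python
import numpy as np

def smooth_step(x):
    # standard smooth-step f(x) = 3x^2 - 2x^3, clamped to [0, 1]
    x = np.clip(x, 0.0, 1.0)
    return 3 * x**2 - 2 * x**3

def weight_schedule(t, t_start, t_stop, w_start, w_stop):
    # g(t): interpolate a criterion weight between the levels w_start and w_stop
    return (w_stop - w_start) * smooth_step((t - t_start) / (t_stop - t_start)) + w_start

# example: fade the VR weight in over the second half of 1000 iterations,
# while IL and NP would be faded out over the same interval
w_vr = [weight_schedule(t, 500, 1000, 0.0, 1.0) for t in range(1001)]
```

Because $f$ is clamped, the weight stays at $w_{start}$ before $t_{start}$ and at $w_{stop}$ after $t_{stop}$, and the transition is monotone with zero slope at both ends.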
\blue{
The experimental results confirm that {$(SGD)^2$}\xspace yields better or comparable results for most quality measures and on most graphs.
We do note that some criteria (e.g., CR and GB) are harder to optimize on large real-world graphs; improving the performance on such difficult criteria is a natural direction for future work.}
\begin{figure*}\label{fig:analysis-of-sample-size}
\end{figure*}
\subsection{Analysis of Sample Size}\label{sect:analysis-of-sample-size} \blue{ In this section we analyze the impact of sample size on the convergence rate of {$(SGD)^2$}\xspace.
In deep learning, models trained with different sample sizes can converge to different types of minima; e.g., smaller samples tend to lead to better generalization~\cite{keskar2016large}. In {$(SGD)^2$}\xspace, a smaller sample size usually results in a faster run time \textit{per iteration}, but does not necessarily yield faster convergence in wall-clock time.
As described in Section~\ref{sect:properties-and-measures}, we use different sampling strategies and sample sizes for each readability criterion. Consider, for example, stress minimization, where the optimal sample size depends closely on other factors in the optimization; in other words, there is no ``one size fits all'' sample size. In particular, for any given graph, the optimal sample size depends on the learning rate of the SGD algorithm. Fig.~\ref{fig:analysis-of-sample-size} shows the quality (i.e., stress) of layouts for a binary tree with 9 levels (1023 nodes) as a function of the total run time of the {$(SGD)^2$}\xspace algorithm. Each plot visualizes the convergence of the algorithm under a fixed learning rate with various sample sizes. When the learning rate is small (the two plots on the left of Fig.~\ref{fig:analysis-of-sample-size}), smaller sample sizes (e.g., 4 or 8) converge faster. In contrast, a medium sample size (e.g., 16 or 32) can benefit from a larger learning rate (the two plots on the right of Fig.~\ref{fig:analysis-of-sample-size}) and converge faster than any setting with a smaller learning rate. Moreover, with a large learning rate, training with a smaller sample size becomes less stable due to the high variance of the gradients (see the rightmost plot in Fig.~\ref{fig:analysis-of-sample-size}). We illustrate this observation on the binary tree with 9 levels (1023 nodes) and observed the same behavior on other trees and grids of various sizes. We also observed that this interplay between sample size and learning rate is less pronounced in variants of the SGD algorithm: when replacing SGD with variants (e.g., AdaDelta~\cite{zeiler2012adadelta}, RMSProp~\cite{Tieleman2012}, or ADAM~\cite{kingma2014adam}) that take adaptive step sizes based on the gradients of previous steps, the convergence rate of stress minimization becomes less sensitive to the sample size and learning rate. }
\begin{figure}
\caption{ Runtime of balanced trees (top) and 2D grids (bottom). The plots have log scales on both x and y axes. }
\label{fig:runtime}
\end{figure}
\subsection{Analysis of Run Time} To test the scalability of our method, we measure its runtime on increasingly large graphs. We ran our code on a MacBook Pro with a 2.9 GHz Dual-Core Intel Core i5 CPU and 16GB of memory. We picked two families of graphs, balanced binary trees and 2D grids, and measured the convergence time as the size of the graph grows. For balanced binary trees, we start with a tree with $4$ levels ($15$ nodes) and gradually increase the depth to $12$ levels ($4095$ nodes). For grids, we start with a grid of size $16 \times 2$ ($32$ nodes) and double the number of columns until we have a grid of size $16 \times 256$ ($4096$ nodes). For each criterion, we randomly initialize the node positions from a standard Gaussian, optimize the layout with respect to that criterion alone using SGD, and stop as soon as the layout converges.
We ensure convergence by gradually decreasing the learning rate of SGD: we decrease the learning rate by a factor of $0.7$ every time the loss has not decreased for a certain number of iterations, often referred to as the ``patience.'' In general, we need more patience for larger graphs and for smaller mini-batches to compensate for the larger variance in the loss estimate due to random sampling.
Here, we set the patience to $\max(100, \lfloor |V|/m \rfloor \cdot 300)$ iterations when we optimize a graph with $|V|$ nodes and take $m$ samples in every SGD iteration. \blue{To further improve the robustness of the stopping criterion, we smooth the sample loss by taking an exponential moving average. That is, on the $i^{th}$ iteration, the smoothed loss $L_i$ is defined as \begin{align*} L_i &= \frac{SL_i + s^1 SL_{i-1} + \dots + s^{i-1} SL_1}{1+s+s^2+\dots+ s^{i-1}} = \frac{\Sigma_{k=1}^i s^{i-k} \, SL_k}{\Sigma_{k=1}^i s^{i-k}} \end{align*} where $SL_k$ is the sample loss at iteration $k$ and $s$ is a smoothing factor. We set $s=0.5^{1/100}\approx 0.993$, a rate at which the $100^{th}$ preceding iteration contributes half as much as the latest sample loss. }
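The smoothed loss $L_i$ admits a simple incremental implementation, since both the numerator and the denominator obey a one-step recurrence. A minimal Python sketch (function and variable names are illustrative, not from the actual implementation):

```python
def smoothed_loss_stream(sample_losses, s=0.5 ** (1 / 100)):
    """Normalized exponential moving average of per-iteration sample losses:
    L_i = (sum_{k<=i} s^(i-k) * SL_k) / (sum_{k<=i} s^(i-k)),
    computed incrementally via num_i = s*num_{i-1} + SL_i, den_i = s*den_{i-1} + 1."""
    num = den = 0.0
    out = []
    for sl in sample_losses:
        num = s * num + sl
        den = s * den + 1.0
        out.append(num / den)
    return out
```

The patience-based stopping rule would then track the minimum of this smoothed stream instead of the raw, noisy sample losses.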
Fig.~\ref{fig:runtime} summarizes the runtime analysis for the two families of graphs (trees and grids) for all 9 criteria. Note that we use log-log plots (log scales for both the $x$ and $y$ axes), in which a polynomial runtime of degree $d$ appears as a line of slope $d$. This experimental analysis shows linear or near-linear runtime for the underlying algorithms.
\section{Conclusions, Limitations, Future Work}
We introduced the graph drawing framework, {$(SGD)^2$}\xspace, for multicriteria graph drawing and
showed how this approach can be used to optimize different graph drawing criteria and combinations thereof.
We showed that multiple readability criteria can be optimized jointly via SGD, provided each of them can be expressed as a differentiable function. In cases where some readability criteria are not naturally differentiable (e.g., neighborhood preservation or crossing number), one can find differentiable surrogate functions and optimize the criteria indirectly.
\blue{ We measured the quality of the generated layouts and analyzed the interactions between criteria, the runtime behavior, and the impact of sample sizes, all of which provide evidence of the effectiveness of {$(SGD)^2$}\xspace. }
\blue{\textbf{Support for More Constraint Types:} Although {$(SGD)^2$}\xspace is a flexible framework that can optimize a wide range of criteria, we did not consider the class of constraints where the node coordinates are related by some inequalities (i.e., hard constraints). } Similarly, in the {$(SGD)^2$}\xspace framework we do not naturally support shape-based drawing constraints such as those in~\cite{ipsepcola_2006, scalable_cola_2009,wang2017revisiting}.
\blue{ Incorporating a wider range of constraint types and studying the interactions between them in the multi-objective setting are natural directions for future work. }
\textbf{Better Weight Balancing for Multicriteria Objectives:} The {$(SGD)^2$}\xspace framework is flexible, and natural directions for future work include adding further drawing criteria and better ways to combine them. \blue{ An appropriate balance between the weights of the different criteria becomes crucial as more and more criteria are incorporated into the optimization. Currently, we manually choose appropriate weight schedules for specific combinations of criteria. In the future, we would like to explore ways to automatically design and balance weight schedules in multicriteria graph drawing. }
\blue{\textbf{Applications of Different Techniques and Frameworks:} Besides gradient descent, there are other optimization techniques
that could be deployed to multi-objective problems~\cite{orosz2020robust}. Similarly, while we used Tensorflow.js and PyTorch to implement {$(SGD)^2$}\xspace, there are other frameworks (e.g., pymoo~\cite{blank2020pymoo}) with support for multi-objective optimization. The application of different optimization techniques and frameworks to multicriteria network visualization seems like an interesting direction for future work. }
\blue{ \textbf{Scalability for Larger Graphs:} Currently, not all criteria are fully optimized for speed.
Alternative objective functions, for example tsNET by Kruiger et al.~\cite{kruiger2017graph} for neighborhood preservation, could be considered in the {$(SGD)^2$}\xspace framework as further runtime scalability and quality improvements are needed for graphs with millions of nodes and edges. One possible direction for improving scalability is to employ a multi-level algorithmic framework.}
\section*{Acknowledgments} This work was supported in part by NSF grants CCF-1740858, CCF-1712119, and DMS-1839274.
\begin{IEEEbiography}[{\vspace*{-.5cm}\includegraphics[width=.8in,height=1.0in,clip,keepaspectratio]{./figures/reyan_cropped.png}}]{Reyan Ahmed} is a Ph.D. student at the Department of Computer Science at the University of Arizona. He received B.Sc. and M.Sc. degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology. His research interests include graph algorithms, network visualization, and data science. \end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=.8in,height=1.0in,clip,keepaspectratio]{./figures/felice.jpeg}}]{Felice De Luca} is a postdoctoral researcher at the Department of Computer Science at the University of Arizona. He received an MS degree in 2014 and a PhD in 2018 at the Department of Computer and Automation Engineering at the University of Perugia. His research interests include graph drawing, information visualization, algorithm engineering, and computational geometry. \end{IEEEbiography}
\begin{IEEEbiography}[{\vspace*{-1cm}\includegraphics[width=.8in,height=1.0in,clip,keepaspectratio]{./figures/devkota_cropped.png}}]{Sabin Devkota} is a Ph.D. student at the Department of Computer Science at the University of Arizona. He received a bachelor's in Electronics and Communication Engineering from Tribhuvan University. His research interests include information visualization. \end{IEEEbiography}
\begin{IEEEbiography}[{\vspace*{-.5cm}\includegraphics[width=.8in,height=1.0in,clip,keepaspectratio]{./figures/kobourov.jpg}}]{Stephen Kobourov} is a Professor at the Department of Computer Science at the University of Arizona. He received a BS degree in Mathematics and Computer Science from Dartmouth College and MS and PhD degrees from Johns Hopkins University. His research interests include information visualisation, graph theory, and geometric algorithms. \end{IEEEbiography}
\begin{IEEEbiography}[{\vspace*{-1cm}\includegraphics[width=.8in,height=1.0in,clip,keepaspectratio]{./figures/mingwei_cropped.png}}]{Mingwei Li} is a PhD student in the Department of Computer Science, University of Arizona. He received the BEng degree in electronics engineering from the Hong Kong University of Science and Technology. His research interests include data visualization and machine learning. \end{IEEEbiography}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|l}
\toprule
Notation & Description\\
\midrule
$G$ & Graph\\
$V$ & The set of nodes in $G$, indexed by $i$, $j$ or $k$\\
$E$ & The set of edges in $G$, indexed by a pair of nodes $i,j$ in $V$\\
$n=|V|$ & Number of nodes in $G$\\
$m$ & Sample size for a given criterion in SGD\\
$|E|$ & Number of edges in $G$\\
$Adj$ and $A_{i,j}$ & Adjacency matrix of $G$ and its $(i,j)$-th entry\\
$d_{ij}$ & Graph-theoretic distance between node $i$ and $j$\\
$X_{n \times 2}$ & 2D-coordinates of nodes in the drawing\\
$||X_i - X_j||$ & The Euclidean distance between nodes $i$ and $j$ \\
$\theta_i$ & $i^{th}$ crossing angle\\
$\varphi_{ijk}$ & Angle between incident edges $(i,j)$ and $(j,k)$\\
\bottomrule
\end{tabular}
\caption{Graph notation used in this paper.}
\label{table:notations} } \end{table}
\resizebox{1.1\textwidth}{!}{ \centering
\begin{tabular}{p{2.0cm}|p{4cm}|p{4cm}|p{4cm}}
\toprule Property & Gradient Descent & Subgradient Descent & Stochastic Gradient Descent\\ \midrule Stress
& $\sum\limits_{i<j}\;w_{ij}(|X_i - X_j|_2 - d_{ij})^2$
& $\sum\limits_{i<j}\;w_{ij}(|X_i - X_j|_2 - d_{ij})^2$
& $w_{ij}(|X_i - X_j|_2 - d_{ij})^2$ for a random pair of nodes $i, j \in V$\\ \hline
Ideal \hspace{.8cm}Edge Length
& $\sqrt{ \frac{1}{|E|} \sum\limits_{(i,j) \in E}\;
(\frac{||X_i - X_j|| - l_{ij}}{l_{ij}})^2 }$ (Eq.~\ref{eq:loss-ideal-edge-length})
& $\frac{1}{|E|} \sum\limits_{(i,j) \in E}\;
|\frac{||X_i - X_j|| - l_{ij}}{l_{ij}}|$
& $|\frac{||X_i - X_j|| - l_{ij}}{l_{ij}}|$ for a random edge $(i,j) \in E$\\ \hline
Crossing \hspace{.5cm} Angle & $\sum\limits_i\;\cos^2(\theta_i)$
& $\sum\limits_i\; |\cos(\theta_i)|$
& $|\cos(\theta_i)|$ for a random crossing $i$\\ \hline
Neighborhood Preservation & Lov\'asz \textbf{softmax}~\cite{berman2018lovasz} between
neighborhood prediction (Eq.\ref{eq:neighbor-pred})
and adjacency matrix $Adj$ & Lov\'asz \textbf{hinge}~\cite{berman2018lovasz} between
neighborhood prediction (Eq.\ref{eq:neighbor-pred})
and adjacency matrix $Adj$ & Lov\'asz \textbf{softmax} or \textbf{hinge}~\cite{berman2018lovasz} on a random node.
(i.e. Jaccard loss between a random \textit{row} of K in Eq. \ref{eq:neighbor-pred}
and the corresponding row in the adjacency matrix $Adj$)\\ \hline
Crossing Number & Shabbeer et al.~\cite{bennett2010} & Shabbeer et al.~\cite{bennett2010} & Shabbeer et al.~\cite{bennett2010}\\ \hline
Angular \hspace{.5cm} Resolution & $\sum\limits_{(i,j),(j,k) \in E}\; e^{-\varphi_{ijk}}$ & $\sum\limits_{(i,j),(j,k) \in E}\; e^{-\varphi_{ijk}}$ & \makecell[l]{$e^{-\varphi_{ijk}}$ \\for random $(i,j),(j,k) \in E$} \\ \hline
Vertex\hspace{.8cm} Resolution
& $\sum_{i,j \in V, i \neq j}$ ${ReLU( 1 - \frac{||X_i - X_j||}{d_{max} \cdot r}) ^2}$ (Eq. \ref{eq:loss-vertex-resolution})
& $\sum_{i,j \in V, i \neq j}$ ${ReLU( 1 - \frac{||X_i - X_j||}{d_{max} \cdot r})}$
& $ReLU( 1 - \frac{||X_i - X_j||}{d_{max} \cdot r})$ for random $i,j \in V, i \neq j$ \\ \hline
Gabriel Graph
&$\sum_{\substack{(i,j) \in E, k \in V \setminus \{i,j\}}} $ $ReLU(r_{ij} - |X_k - c_{ij}|) \; ^ 2$ (Eq. \ref{eq:gabriel})
&$\sum_{\substack{(i,j) \in E,k \in V \setminus \{i,j\}}} $ $ReLU(r_{ij} - |X_k - c_{ij}|)$
&$ReLU(r_{ij} - |X_k - c_{ij}|)$ for random $(i,j) \in E$ and $k \in V \setminus \{i,j\}$ \\ \hline
Aspect Ratio & Eq. \ref{eq:aspect-ratio} & Eq. \ref{eq:aspect-ratio} & Eq. \ref{eq:aspect-ratio} \\
\bottomrule\\
\end{tabular} } \caption{Summary of the objective functions via different optimization methods.} \label{table:loss-functions} \end{table}
\end{document}
\begin{document}
\title{Orbital angular momentum interference of trapped matter waves}
\author{Filip Kia\l ka} \affiliation{Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria} \affiliation{Faculty of Physics, University of Duisburg-Essen, Lotharstra\ss e 1, 47048 Duisburg, Germany} \author{Benjamin A. Stickler} \affiliation{Quantum Optics and Laser Science, Imperial College London, SW72AZ London, United Kingdom} \affiliation{Faculty of Physics, University of Duisburg-Essen, Lotharstra\ss e 1, 47048 Duisburg, Germany} \author{Klaus Hornberger} \affiliation{Faculty of Physics, University of Duisburg-Essen, Lotharstra\ss e 1, 47048 Duisburg, Germany}
\date{\today}
\begin{abstract} We introduce a matter wave interference scheme based on the quantization of orbital angular momentum in a ring trap. It operates without beam splitters, is sensitive to geometric phases induced by external gauge fields, and allows measuring interatomic scattering lengths. {We argue that} orbital angular momentum interferometry offers a {versatile} platform for quantum coherent experiments with cold atoms and Bose-Einstein condensates using state-of-the-art technology. \end{abstract}
\maketitle
\textit{Introduction---} Trapped interference experiments~\cite{WuPRL2005, SackettPRA2006, NakagawaPRL2007, karski2009quantum, SchmiedmayerNC2013, SchmiedmayerS2018, neil2019, MullerS2019} are promising platforms for the next generation of force and acceleration sensors. Guiding matter waves enables atom interferometers with long interrogation times, while providing considerable freedom for choosing the geometry~\cite{GarrawayPRL2001, bell2016bose, BeugnonPRA2017, KlitzingN2019}. Toroidal traps are particularly attractive for fundamental quantum experiments~\cite{Stamper-KurnPRL2005, PhillipsPRL2007, CampbellPRL2011, ZurekSR2012, CampbellN2014, CampbellPRX2018} and for precision sensing~\cite{Stamper-KurnPRA2015, TaylorPRL2016, GardinerPRL2018} with ultracold gases or fluids. The ring geometry implies that the orbital angular momentum of the revolving particles is conserved. As argued in the following, its fundamental quantization can be exploited to realize trapped interference schemes requiring no beam splitters.
{We note that} the free quantum dynamics in a ring geometry exhibit {\it quantum revivals}. {An} initially well-localized wave packet quickly disperses {along} the ring on a timescale determined by the orbital angular momentum spread. Only after a much longer quantum revival time, which is independent of the initial state, does the localized wave packet {briefly} reappear due to the quantization of orbital angular momentum \cite{RobinettPR2004}. Similar revival effects are encountered in the orientation of revolving molecules \cite{SeidemanPRL1999,IvanovPRL2004,poulsen2004nonadiabatic}, and they have been proposed for electromagnetic pulse shaping in semiconductors~\cite{BerakdarPRB2006} as well as for macroscopic quantum superposition tests with nanorotors~\cite{HornbergerNJP2018}.
Here, we propose an interference scheme which exploits the brief emergence of a balanced superposition at half the revival time. By imprinting a relative phase { on the superposition,} one can coherently control at which antipode the wave packet reappears after the full revival time. The presence of an additional gauge field induces a rotation of the revival determined by the accumulated geometric phase. In contrast to many existing proposals for interference in ring traps~\cite{Stamper-KurnPRA2015,KlitzingNJP2016,AhufingerNJP2018}, orbital angular momentum interference does not rely on atomic spin states or collective excitations. It is thus applicable to all matter-wave experiments with a toroidal geometry, ranging from electrons in solid state quantum rings~\cite{Fomin2014} to nanoparticles in optomechanical traps~\cite{VamivakasRPP2020}. Here we discuss the special case of optically trapped atomic clouds or Bose-Einstein condensates (BECs), and show that this scheme is sufficiently resilient to {be realizable} with state-of-the-art technology.
\textit{Interference scheme---\label{sec:interference_schemes}} In order to explain the interference scheme we first consider the idealized case of a point particle of mass $m$ confined to a circle of radius $R$. Its Hamiltonian reads $\mathsf{H} = \mathsf{L}_z^2 / 2 m R^2$. Since the eigenvalues of the orbital angular momentum operator $\mathsf{L}_z$ are integer multiples of $\hbar$, with eigenstates $\ket{\ell}$, the time evolution operator $\mathsf{U}_0(t) = \sum_{\ell \in \mathbb{Z}} \exp(-i \hbar t \ell ^2 / 2 m R^2) \ketbra{\ell}{\ell}$ is unity for all even multiples of the revival time \begin{equation} \label{eq:revival-time}
T_\text{rev} = \frac{2 \pi m R^2}{\hbar}. \end{equation}
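To get a feel for the scale of Eq.~\eqref{eq:revival-time}, the following back-of-the-envelope computation evaluates $T_{\rm rev}$ for illustrative parameters; the choice of a $^{87}$Rb atom and a ring radius of $20\,\mu\mathrm{m}$ is our assumption, not a value prescribed by the scheme:

```python
import math

hbar = 1.054571817e-34          # reduced Planck constant, J s
amu = 1.66053907e-27            # atomic mass unit, kg
m = 86.909 * amu                # assumed species: 87Rb
R = 20e-6                       # assumed ring radius: 20 micrometres

# revival time T_rev = 2 pi m R^2 / hbar, Eq. (revival-time)
T_rev = 2 * math.pi * m * R**2 / hbar
print(f"T_rev = {T_rev:.2f} s")
```

For these parameters the revival time comes out on the order of a few seconds, i.e., within the interrogation times of trapped-atom experiments.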
A straightforward calculation shows that the evolution for the revival time performs a $\pi$ rotation, ${\sf U}_0(n T_{\rm rev}) = \exp (i n \pi {\sf L}_z /\hbar)$, with $n \in \mathbb{N}_0$. In a similar fashion, free evolution for $T_{\rm rev}/2$ acts as a beam splitter, preparing a balanced superposition of the initial state and its $\pi$-rotated version~\cite{SorokinEJP2003,IvanovPRL2004}, \begin{equation} \label{eq:beamsplitting}
\mathsf{U}_0 \left(\frac{T_\text{rev}}{2}\right)= \frac{\mathrm{e}^{- i \pi / 4}}{\sqrt{2}} \left(\mathbbold{1} + i \mathrm{e}^{i \pi \mathsf{L}_z / \hbar}\right), \end{equation} where $\mathbbold{1}$ is the unity operator.
An initially tightly confined wave packet thus first disperses on a short timescale determined by its initial angular momentum uncertainty. The state then remains delocalized over the ring for most of the time, showing fractional revivals such as Eq.~\eqref{eq:beamsplitting} at fractions of the revival time. The lifetime of these fractional and full revivals is determined by the initial dispersion time, and is thus typically orders of magnitude shorter than the revival time itself.
The dynamical beam splitting described by (\ref{eq:beamsplitting}) is exploited by the following interference scheme; see Fig.~\ref{fig:schemes}(a): The particle is initially prepared in a well-localized state $\ket{\psi_0}$. After dispersing on a short timescale, the localized state reappears at half of the revival time { in a} balanced superposition $(\ket{\psi_0} + i \ket{\psi_\pi})/\sqrt{2}$, with the $\pi$-rotated initial state $\ket{\psi_\pi} = \exp(i \pi {\sf L}_z/\hbar)\ket{\psi_0}$. { Then} a relative phase $\varphi$ is induced between the two wave packets, for instance gravitationally by tilting the ring, optically via laser illumination, or in the case of an atomic cloud via magnetic control of the scattering length. After imprinting the phase, the state evolves freely for another $T_{\rm rev}/2$, yielding the final state $\ket{\psi_f} = \cos(\varphi/2)\ket{\psi_\pi} + i \sin(\varphi/2)\ket{\psi_0}$. The final position of the particle is thus determined interferometrically.
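The beam-splitting and revival dynamics can be verified numerically by propagating the angular momentum amplitudes of a localized packet with the free phases $\exp(-i\pi \ell^2 t/T_{\rm rev})$. The sketch below (a Gaussian amplitude distribution and the cutoff $|\ell| \le 40$ are illustrative choices) reproduces the balanced superposition of Eq.~\eqref{eq:beamsplitting} at $t = T_{\rm rev}/2$ and the $\pi$-rotated packet at $t = T_{\rm rev}$:

```python
import numpy as np

ells = np.arange(-40, 41)                             # angular momentum quantum numbers
c = np.exp(-ells**2 / (2 * 8.0**2)).astype(complex)   # packet localized at alpha = 0
c /= np.linalg.norm(c)

alpha = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
basis = np.exp(1j * np.outer(ells, alpha))

def density(tau):
    # free phases exp(-i E_l t / hbar) = exp(-i pi l^2 t / T_rev), with tau = t / T_rev
    return np.abs((c * np.exp(-1j * np.pi * ells**2 * tau)) @ basis) ** 2 / (2 * np.pi)

d0 = density(0.0)      # initial packet at alpha = 0
d_half = density(0.5)  # balanced superposition of packets at alpha = 0 and alpha = pi
d_full = density(1.0)  # single packet, rotated by pi
```

For this packet the two antipodal peaks at $t = T_{\rm rev}/2$ each carry half of the initial peak density, and at $t = T_{\rm rev}$ the initial density reappears rotated by $\pi$, as expected from ${\sf U}_0(T_{\rm rev}) = \exp(i\pi{\sf L}_z/\hbar)$.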
\begin{figure}\label{fig:schemes}
\end{figure}
{\it Gauge fields and external potentials---} The interference effect depends sensitively on the interaction with external gauge fields. If the field ${\bf A}({\bf r})$ is minimally coupled to the canonical angular momentum ${\sf L}_z$, the kinetic angular momentum is ${\sf L}_z - \gamma R A(\hat \alpha)$. Here $\gamma$ is the gauge coupling and $A(\alpha) = {\bf A}(R {\bf e}_\rho(\alpha)) \cdot {\bf e}_\alpha (\alpha)$ is the azimuthal component of the gauge field evaluated at the angular position~$\alpha$.
The presence of ${\bf A}({\bf r})$ implies a gauge-invariant flux $\Phi = R \oint d \alpha A(\alpha)$ piercing the ring interferometer and thus modifying the free time evolution of the matter wave. The unitary time evolution operator becomes \begin{equation} \label{eq:U-AB}
\mathsf{U}_\Phi \left(t\right) = \mathsf{V} \exp(i \frac{\gamma \Phi}{\hbar} \frac{t}{T_{\rm rev}} \frac{\mathsf{L}_z}{\hbar}) \mathsf{U}_0 (t) \mathsf{V}^\dagger, \end{equation} where $\mathsf{V} = \exp(- i \gamma \Phi \hat{\alpha} / 2 \pi \hbar + i \gamma R / \hbar \int_0^{\hat{\alpha}} d\alpha' A(\alpha'))$ can always be set to unity by choosing an appropriate gauge (symmetric gauge in the case of a constant field). Thus, a finite flux {induces} a rotation of the revived wave packet by the Aharonov-Bohm-type phase $\gamma \Phi/\hbar$.
For example, if the particles are electrically charged, $\gamma = q$, a magnetic flux $\Phi$ through the ring will shift the energy levels \cite{PetroffPRL2000,SorokinEJP2003}, causing the wave packet to rotate. In a similar fashion, the Aharonov-Casher phase~\cite{CasherPRL1984} can be measured if a magnetic dipole ${\bf m} = m_0 {\bf e}_z$ evolves in the presence of the electrostatic field ${\bf E}(R{\bf e}_\rho) = E_0 {\bf e}_\rho$ produced by a line charge. In this case one has $\gamma {\bf A} = {\bf m} \times {\bf E}/c^2$, implying $\gamma \Phi = 2 \pi R E_0 m_0 / c^2$. Likewise, geometric phases can result for a permanent or induced electric dipole ${\bf p}$ in a magnetostatic field ${\bf B}$, so that $\gamma{\bf A} = \vb{p} \times \vb{B}$ \cite{wei1995}, or for a massive particle in a noninertial frame rotating with angular frequency $\boldsymbol{\omega}$ around the trap center, so that $\gamma {\bf A} = m R^2 {\boldsymbol \omega}$.
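The flux-induced rotation predicted by Eq.~\eqref{eq:U-AB} is easily checked numerically: the phase factor linear in $\mathsf{L}_z$ rotates the revived packet while the free part of the evolution is unity at even multiples of $T_{\rm rev}$. The sketch below uses an illustrative Gaussian packet and flux phase; the direction of the rotation depends on the sign conventions adopted here:

```python
import numpy as np

phi_ab = 0.3                  # Aharonov-Bohm-type phase gamma * Phi / hbar (illustrative)
ells = np.arange(-40, 41)
c = np.exp(-ells**2 / 128.0).astype(complex)   # packet localized at alpha = 0
c /= np.linalg.norm(c)
alpha = np.linspace(-np.pi, np.pi, 4000, endpoint=False)
basis = np.exp(1j * np.outer(ells, alpha))

def density(tau):
    # Eq. (U-AB) with V = 1: flux term exp(i phi_ab * tau * l) times the free phases
    ph = np.exp(1j * phi_ab * ells * tau - 1j * np.pi * ells**2 * tau)
    return np.abs((c * ph) @ basis) ** 2 / (2 * np.pi)

# after two revival times U_0 = 1, and the packet reappears rotated by 2 * phi_ab
peak = alpha[np.argmax(density(2.0))]
```

With these conventions the packet initially centered at $\alpha = 0$ reappears at $\alpha = -2\,\gamma\Phi/\hbar$ after $t = 2T_{\rm rev}$, i.e., the geometric phase is read off directly from the angular position of the revival.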
The presence of a weak external potential $V(\alpha) = V_0 \cos(\alpha - \alpha_0)$, such as that arising from a constant tilt of the ring, leads to phase dispersion. To leading order in $V_0$, the energies are shifted by \begin{equation}
\Delta E_\ell^{\rm (pot)} \approx \frac{m R^2 V_0^2}{4 \hbar^2} \left (\ell^2 - \frac{1}{4} \right )^{-1}. \end{equation} Since this is not proportional to $\ell^2$, a conservative torque affects the shape of the recurring wave packet. This is in contrast to gauge fields, which only shift the position { of the revival}.
{\it Revivals in 3D torus traps---} The evolution of a particle in a real-world (three-dimensional) torus trap differs from the idealized situation described so far. The dynamics transverse to the ring tangent affect the angular dynamics even if the transverse motion remains in its ground state, since { the centrifugal force distorts the level spacing}. Shape imperfections and excitations of the transverse degrees of freedom can further affect the interference. We will show next that the proposed orbital angular momentum interference protocol is { nevertheless} surprisingly robust and remains feasible for realistic trap geometries.
To study { the dynamics in a real-world torus trap}, we expand the full 3D Hamiltonian of a particle in a torus trap and consider leading-order corrections in the transverse size of the wave packet. For this sake, we use { a} Frenet-Serret coordinate system $(s,u,v)$ with arc length $s $ and two transverse coordinates $u,v$. Thus, the position vector is $\vb{r} = \vb{R}(s) + u \vb{n}(s) + v \vb{b}(s)$, where $\vb{R}(s)$ traces the center {line} of the torus trap, while $\vb{n}(s) = {\bf R}''(s)/\kappa$ and $\vb{b}(s) = {\bf R}'(s) \times {\bf n}(s)$ span the transverse plane at each position~\cite{PavloffPRA2001,StringariNJP2006}. Here, $\kappa = | {\bf R}''(s) |$ is the curvature, where prime denotes derivative with respect to $s$.
Since the new coordinate system $(s,u,v)$ is curved, coordinate-space normalization of the wave function includes the root {of the} metric determinant { (Jacobian) $h$}. Expressing the latter as { $h = 1 - \kappa u$} and assuming that the trapping potential is separable in the transverse direction yields the Hamiltonian~\cite{PavloffPRA2001,StringariNJP2006} \begin{align} \label{eq:H}
\mathsf{H}_s = {}& - \frac{\hbar^2}{2 m} \left[\partial_s \frac{\partial_s}{h^2} + \partial_u^2 + \partial_v^2 + \frac{\kappa^2}{4 h^2} + \frac{5 (h')^2}{4 h^4} - \frac{h''}{2 h^3}\right] \nonumber\\
{}& + V_u(u) + V_v(v), \end{align} which acts on the rescaled wave function { $\chi = \sqrt{h} \psi$}.
If the radially confining potential is harmonic with frequency $\omega_\perp$, and centrifugal distortions and small deviations from the ideal circular trap are captured by expanding the Hamiltonian to first order in the small quantities $\kappa \sigma_u$, $\kappa' \sigma_u /\kappa$, and $\kappa'' \sigma_u / \kappa^2$ (with $\sigma_u=\sqrt{\hbar /m\omega_\bot}$ the width of the transverse ground state), the Hamiltonian becomes \begin{align}
\mathsf{H}_s \approx& - \frac{\hbar^2}{2 m} \bigg[\left(1 + 2 \kappa u \right) \left(\partial_s^2 + \frac{\kappa^2}{4}\right) + 2 \kappa' u \left(1 + 3 \kappa u \right) \partial_s \nonumber \\
&{} + \frac{\kappa^{\prime\prime}u}{2} + \partial_u^2 + \partial_v^2 \bigg] + \frac{m \omega_\perp^2}{2}u^2 + V_v(v). \label{eq:H-first} \end{align}
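The prefactors in Eq.~\eqref{eq:H-first} can be traced back to Eq.~\eqref{eq:H} by expanding the metric factors with $h = 1 - \kappa u$, $h' = -\kappa' u$, and $h'' = -\kappa'' u$ (a brief sketch of the bookkeeping):

```latex
\begin{align*}
  \partial_s \frac{\partial_s}{h^2}
    &= \frac{1}{h^2}\,\partial_s^2 + \frac{2 \kappa' u}{h^3}\,\partial_s
     \approx \left(1 + 2 \kappa u\right) \partial_s^2
     + 2 \kappa' u \left(1 + 3 \kappa u\right) \partial_s , \\
  \frac{\kappa^2}{4 h^2}
    &\approx \left(1 + 2 \kappa u\right) \frac{\kappa^2}{4} ,
  \qquad
  -\frac{h''}{2 h^3} \approx \frac{\kappa'' u}{2} ,
\end{align*}
```

while the term $5(h')^2/4h^4 = \mathcal{O}\bigl((\kappa' u)^2\bigr)$ contributes only at second order and is dropped.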
\textit{Centrifugal energy corrections---\label{sec:imperfections}} For an ideal torus, where $\kappa = 1/R$, the stationary Schr\"odinger equation becomes separable. It admits solutions of the form \begin{equation}
\chi_{\ell k n} (s, u ,v) = \frac{1}{\sqrt{2 \pi R}} \mathrm{e}^{i \ell s / R} \xi_{\ell k}(u) \Psi_n(v), \end{equation} with eigenenergies $E_{\ell k n} = \hbar^2 \ell^2/2 m R^2 + E_{\ell k}^{(u)} + E_n^{(v)}$ where $k,n \in \mathbb{N}_0$. Here, $\Psi_n(v)$ are normalized eigenstates of the harmonic motion out of the ring plane, whose eigenenergies $E_n^{(v)}$ are independent of $\ell$ and thus do not affect the revival structure of the matter wave.
The radially confining harmonic potential in the Schr\"{o}dinger equation for $\xi_{\ell k}(u)$ is centrifugally shifted by $u_\ell = \hbar^2\left(\ell^2 - 1/4\right)/m^2 \omega_\perp^2 R^3$, \begin{align} \label{eq:radial-schreq}
\left [ - \frac{\hbar^2}{2 m} \partial_u^2 + \frac{m \omega_\perp^2}{2} \left(u^2 + 2 u u_\ell \right ) \right ]\xi_{\ell k}(u) = E^{(u)}_{\ell k} \xi_{\ell k}(u). \end{align} Thus, the eigenenergies \begin{equation} \label{eq:centrifugal-shift}
E_{\ell k}^{(u)} = \hbar \omega_\perp \left ( k + \frac{1}{2} \right ) - \frac{\hbar^4}{2 m^3 \omega_\perp^2 R^6} \left(\ell^2 - \frac{1}{4}\right)^2, \end{equation} are lowered due to the centrifugal barrier.
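Equation~\eqref{eq:centrifugal-shift} follows from Eq.~\eqref{eq:radial-schreq} by completing the square, which shifts the potential minimum to $u = -u_\ell$ and lowers the energy by a constant offset:

```latex
\begin{equation*}
  u^2 + 2 u u_\ell = \left(u + u_\ell\right)^2 - u_\ell^2
  \quad\Longrightarrow\quad
  E^{(u)}_{\ell k}
  = \hbar \omega_\perp \left(k + \frac{1}{2}\right)
  - \frac{m \omega_\perp^2}{2}\, u_\ell^2 ,
\end{equation*}
```

and inserting $u_\ell = \hbar^2(\ell^2 - 1/4)/m^2 \omega_\perp^2 R^3$ reproduces the quartic centrifugal correction.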
The $\ell$ dependence in the eigenenergies \eqref{eq:centrifugal-shift} can shift and diminish the revival. Specifically, the $\ell^2$ term in Eq.~\eqref{eq:centrifugal-shift} delays the revival without affecting its visibility, while the $\ell^4$ correction decreases the fidelity of the revival and may further modify the revival time. The optimal recurrence time can be determined numerically from this equation.
{\it Shape imperfections ---} In practice, deviations from the perfect circular shape of the torus trap are the most important source of imperfections for optical traps. In particular, residual astigmatism in the focusing optics may introduce a finite ellipticity to the trap, which can be quantified with the help of Eq.~\eqref{eq:H-first}.
We replace the arc length with the eccentric anomaly $\beta \in [- \pi, \pi)$ used for the standard parametrization of the ellipse. Thus, $\partial_s = h_{\varepsilon}^{-1}(\beta) \partial_\beta/R$, where $R$ and $\varepsilon$ are the semimajor axis and the eccentricity, and $h_{\varepsilon}(\beta) = \sqrt{1 - \varepsilon^2 \cos^2 \beta}$ is the Jacobian of the ellipse. To lowest order in $\varepsilon$, the Hamiltonian reads ${\sf H}_\beta = h_{\varepsilon}^{1/2} {\sf H}_s h_{\varepsilon}^{-1/2} \approx {\sf H}_\beta^{(0)} + \varepsilon^2 {\sf H}_\beta^{(\varepsilon)}$, where ${\sf H}_\beta^{(0)}$ describes the motion on the circle and \begin{align} \label{eq:eccentricity}
\mathsf{H}_\beta^{(\varepsilon)} ={}& -\frac{\hbar^2}{4mR^2} \left[1 + \frac{3u}{R} +\left( 1 + \frac{5 u}{R} \right) \cos(2 \beta)\right] \partial_\beta^2 \nonumber\\
&{} +\frac{\hbar^2}{2 m R^2} \left(1 + \frac{5u}{R} + \frac{9 u^2}{R^2} \right) \sin (2 \beta) \partial_\beta \nonumber\\
&{} -\frac{\hbar^2}{16 m R^2} \left[1 + \frac{3 u}{R} - \left(1 + \frac{11u}{R} \right) \cos (2\beta) \right]. \end{align} In first-order perturbation theory, the eccentricity-induced energy shift thus reads \begin{equation} \label{eq:ellipticity-shift}
\Delta E_\ell^{(\varepsilon)} = \frac{\hbar^2 \varepsilon^2}{8 \pi m R^2} \left(1 + \frac{3 u_\ell}{R}\right) \left(\ell^2 - \frac{1}{4}\right). \end{equation} Here, we expressed the position expectation value of the radial state through the centrifugal shift of the harmonic potential \eqref{eq:radial-schreq}, $\langle u \rangle = - u_\ell$. The first-order influence of a finite eccentricity is thus to decrease the revival time, while further diminishing the revival due to the $\ell$ dependence of the radial potential minimum $u_\ell$.
\begin{figure*}
\caption{
Mean-field simulation of the interference scheme shown in Fig.~\ref{fig:schemes}(a) realized with a BEC of $^{39}\mathrm{K}$ in an optical trap.
(a) Snapshots of the time evolution for a noninteracting condensate: initial particle density, dispersion, recurrent superposition at half of the revival time, and final interferometrically controlled revival with $\varphi = \pi/3$.
The external phase factor $\exp(i \varphi\cos^2 \alpha)$ is applied on the left part of the ring at $T_\text{rev}/2$.
The revival time $T_\text{rev} \approx 135.8 \, \text{ms}$ is found by maximizing the overlap between the initial and final states for $\varphi = 0$.
(b) As in (a) but with interatomic interactions characterized by the scattering length of one Bohr radius for a BEC of $N = 2 \times 10^4$ atoms.
As a result of the interactions, the revival time changes to $T_\text{rev} \approx 136.2 \, \text{ms}$.
(c) Interference signal as a function of external phase $\varphi$ in the noninteracting [as in (a), circles] and interacting [as in (b), diamonds] cases, as compared to the ideal situation (dotted line).
The population imbalance is defined as $(N_\text{R} - N_\text{L}) / (N_\text{R} + N_\text{L})$, where $N_\text{R}$, $N_\text{L}$ are the numbers of atoms on the right and left sides of the ring, weighted with $\cos^2\alpha$.
}
\label{fig:movie}
\end{figure*}
\textit{Implementation with BECs ---\label{sec:BEC}} We are now in a position to argue that the orbital angular momentum interference scheme can be realistically carried out with weakly interacting BECs in an optical torus trap. For concreteness, we consider a condensate of $^{39}\mathrm{K}$ in a trap formed by two coaxial Gaussian beams, one repulsive and one attractive, intersected with an attractive light sheet, as in Ref.~\cite{Stamper-KurnPRA2015}. The wavelengths of the red- and blue-detuned laser beams are assumed to be $830$ and $532 \, \text{nm}$, respectively, with powers of $2$ and $2.5 \, \text{mW}$ as well as waists of $13$ and $5.5 \, \mu\text{m}$. The light sheet with the same wavelength as the red-detuned laser has a power of $10 \, \text{mW}$ and waists of $5$ and $200 \, \mu\text{m}$, so that the trap radius is $R \approx 5.9 \, \mu\text{m}$ and the transverse confining frequency is $\omega_\perp \approx 6.4 \: \text{kHz}$. The necessary coherence time of $T_\text{rev} \approx 135 \, \text{ms}$ is experimentally within reach~\cite{MullerS2019}.
Figure \ref{fig:movie} shows the simulated dynamics of the orbital angular momentum interference protocol for (a) a noninteracting and (b) a weakly interacting BEC of $N = 2\times 10^4$ $^{39}$K atoms. We assume in both cases that the Feshbach resonances of $^{39}$K~\cite{TiesingaRMP2010} are used to make the interactions (a) negligibly small or (b) equivalent to a scattering length of one Bohr radius. The tightly confined initial wave packet, loaded from a three-dimensional harmonic trap of frequency $\omega_\perp$, quickly disperses around the torus. It then reappears in a superposition after approximately $65$\,ms. The presence of interactions diminishes the revival signal. However, even at a realistic transverse confinement and interaction strength, the effect is still clearly visible in the population imbalance displayed in panel (c). The latter shows that the interference visibility exhibits almost the ideal dependence on the imprinted phase. The numerical calculations are based on the Trotter-Suzuki expansion~\cite{Bederian2011,CucchiettiCPC2013,CalderaroCPC2015}.
For this setup, the centrifugal energy shift \eqref{eq:centrifugal-shift} amounts to a few percent of the rotational energy for the highest-populated $\ell$ eigenstates ($\ell \simeq 25$). The corresponding correction to the revival time is at the permille level but, given the quick dispersion time, exact timing on the scale of a few microseconds is required to imprint the phase and to observe the revival. In a similar fashion, the corrections to the revival time due to interactions must be accounted for, as has been done numerically in Fig.~\ref{fig:movie}(b).
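The quoted orders of magnitude can be cross-checked with a short script. This is only a rough sketch: the parameter values are taken from the text, while the ideal-ring recurrence scale $2\pi m R^2/\hbar$ (textbook result for a quadratic spectrum $E_\ell \propto \ell^2$) and the interpretation of $\omega_\perp$ as an angular frequency are assumptions.

```python
# Order-of-magnitude check for the 39K torus-trap parameters quoted above.
# Assumptions: ideal-ring recurrence scale 2*pi*m*R^2/hbar, omega_perp in rad/s.
import math

hbar = 1.054571817e-34            # reduced Planck constant, J s
m = 38.9637 * 1.66053907e-27      # mass of 39K, kg
R = 5.9e-6                        # trap radius, m
omega_perp = 6.4e3                # transverse trap frequency, rad/s (assumed)
ell = 25                          # highest-populated angular momentum

# Ideal-ring recurrence timescale, to be compared with the simulated 135.8 ms
T_rev = 2 * math.pi * m * R**2 / hbar

# Centrifugal (l^4) shift relative to the rotational energy:
# [hbar^4 l^4 / (2 m^3 w^2 R^6)] / [hbar^2 l^2 / (2 m R^2)]
ratio = (hbar * ell / (m * omega_perp * R**2))**2

print(f"T_rev ~ {T_rev*1e3:.0f} ms, centrifugal/rotational ~ {ratio:.3f}")
```

The recurrence estimate lands near $134\,\text{ms}$ and the relative centrifugal shift near $3\%$, consistent with the values quoted in the text.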
The relative phase $\varphi$ can be imprinted, e.g., optically, via tilting of the apparatus, or via induced interatomic interactions. For example, if the trap is briefly tilted at $T_{\rm rev}/2$, the gravitational potential yields the phase $\varphi_g \approx 2 m g R t_\text{d} \sin\theta / \hbar$, where $\theta$ is the tilt angle and $t_\text{d}$ is the revival lifetime, given by the dispersion timescale $t_\text{d} \approx 1/\omega_\perp$ of the initial wave packet of width $\sqrt{\hbar / m \omega_\perp}$. For the above example, this requires tilting with a precision of hundreds of microradians.
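The tilt sensitivity can be made concrete with a minimal estimate, assuming $g = 9.81\,\text{m/s}^2$, $t_\text{d} = 1/\omega_\perp$, and the trap parameters quoted above (all illustrative values, not results of the simulation):

```python
# Gravitational phase per unit tilt, phi_g ≈ (2 m g R t_d / hbar) * theta,
# for small theta. Parameters as quoted in the text; g = 9.81 m/s^2 assumed.
import math

hbar = 1.054571817e-34
m = 38.9637 * 1.66053907e-27      # mass of 39K, kg
g = 9.81                          # gravitational acceleration, m/s^2
R = 5.9e-6                        # trap radius, m
omega_perp = 6.4e3                # transverse trap frequency, rad/s (assumed)
t_d = 1 / omega_perp              # revival lifetime ~ dispersion timescale

coeff = 2 * m * g * R * t_d / hbar   # radians of phase per radian of tilt
theta = 3e-4                          # a tilt of a few hundred microradians
print(f"phi_g per rad of tilt: {coeff:.1f} rad")
print(f"phase uncertainty at theta = 0.3 mrad: {coeff*theta*1e3:.1f} mrad")
```

The coefficient comes out at roughly $11$ rad of phase per radian of tilt, so a tilt precision of hundreds of microradians translates into a phase precision at the few-milliradian level.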
Likewise, if the magnetic field on one side of the ring is detuned from the zero crossing of the Feshbach resonance, the matter wave acquires a relative phase $\varphi_a \approx 4 \pi \hbar \, a \, n_{\rm BEC}\, t_{\rm d} / m$, where $a$ is the induced scattering length and $n_{\rm BEC}$ is the particle density in the initial state. With this, one can measure the scattering length with a precision of $\Delta a \approx 0.2 a_0$ (with $a_0$ the Bohr radius), on par with state-of-the-art time-of-flight~\cite{SimoniNJP2007} and spectroscopic~\cite{LisdatPRA2008} measurements for $^{39}\mathrm{K}$.
\textit{Conclusions ---\label{sec:summary_and_outlook}} We introduced orbital angular momentum interference as an attractive platform for trapped matter-wave interferometry in toroidal geometries. Since the proposed scheme relies on the universal property of orbital momentum quantization, realizations with many different systems can be readily envisioned, e.g., single atoms or BECs in optical traps, ions in electric traps, electrons in solid state quantum rings, as well as molecules and nanoparticles in optical or electrical traps. For the case of a BEC in an optical trap, we have shown that the protocol is feasible with present-day technology.
The interference effect is sensitive to the presence of gauge fields. In the presence of a magnetic field flux $\Phi$, for instance, the revival of particles with charge $q$ will be displaced by the angle $q \Phi / \hbar$. Assuming that displacements on the size of the initial wave packet can be angularly resolved, fields below $10^{-7} \, \mathrm{T}$ level can be detected with the setup described above.
{\it Acknowledgments ---} We thank Markus Arndt, Thorsten Schumm, and Philipp Haslinger for helpful discussions. F.K. acknowledges support by the Austrian Science Fund (FWF) Project No. W1210-N25, B.A.S. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 841040.
\newcommand{\noopsort}[1]{} \begin{thebibliography}{43} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2005)\citenamefont {Wang},
\citenamefont {Anderson}, \citenamefont {Bright}, \citenamefont {Cornell},
\citenamefont {Diot}, \citenamefont {Kishimoto}, \citenamefont {Prentiss},
\citenamefont {Saravanan}, \citenamefont {Segal},\ and\ \citenamefont
{Wu}}]{WuPRL2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Ying-Ju}\ \bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {Dana~Z.}\ \bibnamefont
{Anderson}}, \bibinfo {author} {\bibfnamefont {Victor~M.}\ \bibnamefont
{Bright}}, \bibinfo {author} {\bibfnamefont {Eric~A.}\ \bibnamefont
{Cornell}}, \bibinfo {author} {\bibfnamefont {Quentin}\ \bibnamefont {Diot}},
\bibinfo {author} {\bibfnamefont {Tetsuo}\ \bibnamefont {Kishimoto}},
\bibinfo {author} {\bibfnamefont {Mara}\ \bibnamefont {Prentiss}}, \bibinfo
{author} {\bibfnamefont {R.~A.}\ \bibnamefont {Saravanan}}, \bibinfo {author}
{\bibfnamefont {Stephen~R.}\ \bibnamefont {Segal}}, \ and\ \bibinfo {author}
{\bibfnamefont {Saijun}\ \bibnamefont {Wu}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Atom michelson interferometer on a chip using a
{{Bose}}-{{Einstein}} condensate},}\ }\href {\doibase
10.1103/PhysRevLett.94.090405} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages}
{090405} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Garcia}\ \emph {et~al.}(2006)\citenamefont {Garcia},
\citenamefont {Deissler}, \citenamefont {Hughes}, \citenamefont {Reeves},\
and\ \citenamefont {Sackett}}]{SackettPRA2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Garcia}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Deissler}},
\bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Hughes}}, \bibinfo
{author} {\bibfnamefont {J.~M.}\ \bibnamefont {Reeves}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.~A.}\ \bibnamefont {Sackett}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Bose-{{Einstein}}-condensate
interferometer with macroscopic arm separation},}\ }\href {\doibase
10.1103/PhysRevA.74.031601} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {031601(R)}
(\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horikoshi}\ and\ \citenamefont
{Nakagawa}(2007)}]{NakagawaPRL2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Munekazu}\
\bibnamefont {Horikoshi}}\ and\ \bibinfo {author} {\bibfnamefont {Ken'ichi}\
\bibnamefont {Nakagawa}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Suppression of dephasing due to a trapping potential and atom-atom
interactions in a trapped-condensate interferometer},}\ }\href {\doibase
10.1103/PhysRevLett.99.180401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages}
{180401} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Karski}\ \emph {et~al.}(2009)\citenamefont {Karski},
\citenamefont {F{\"o}rster}, \citenamefont {Choi}, \citenamefont {Steffen},
\citenamefont {Alt}, \citenamefont {Meschede},\ and\ \citenamefont
{Widera}}]{karski2009quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Michał}\ \bibnamefont
{Karski}}, \bibinfo {author} {\bibfnamefont {Leonid}\ \bibnamefont
{F{\"o}rster}}, \bibinfo {author} {\bibfnamefont {Jai-Min}\ \bibnamefont
{Choi}}, \bibinfo {author} {\bibfnamefont {Andreas}\ \bibnamefont {Steffen}},
\bibinfo {author} {\bibfnamefont {Wolfgang}\ \bibnamefont {Alt}}, \bibinfo
{author} {\bibfnamefont {Dieter}\ \bibnamefont {Meschede}}, \ and\ \bibinfo
{author} {\bibfnamefont {Artur}\ \bibnamefont {Widera}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {Quantum walk in position space with single
optically trapped atoms},}\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Science}\ }\textbf {\bibinfo {volume} {325}},\ \bibinfo {pages}
{174--177} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Berrada}\ \emph {et~al.}(2013)\citenamefont
{Berrada}, \citenamefont {{\noopsort{frank}}{van Frank}}, \citenamefont
{B{\"u}cker}, \citenamefont {Schumm}, \citenamefont {Schaff},\ and\
\citenamefont {Schmiedmayer}}]{SchmiedmayerNC2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Berrada}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{\noopsort{frank}}{van Frank}}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {B{\"u}cker}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Schumm}}, \bibinfo {author} {\bibfnamefont {J.-F.}\
\bibnamefont {Schaff}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Schmiedmayer}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {Integrated {{Mach}}-{{Zehnder}} interferometer for {Bose-Einstein}
condensates},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Nat. Commun.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {2077}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rauer}\ \emph {et~al.}(2018)\citenamefont {Rauer},
\citenamefont {Erne}, \citenamefont {Schweigler}, \citenamefont {Cataldini},
\citenamefont {Tajik},\ and\ \citenamefont
{Schmiedmayer}}]{SchmiedmayerS2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Bernhard}\
\bibnamefont {Rauer}}, \bibinfo {author} {\bibfnamefont {Sebastian}\
\bibnamefont {Erne}}, \bibinfo {author} {\bibfnamefont {Thomas}\ \bibnamefont
{Schweigler}}, \bibinfo {author} {\bibfnamefont {Federica}\ \bibnamefont
{Cataldini}}, \bibinfo {author} {\bibfnamefont {Mohammadamin}\ \bibnamefont
{Tajik}}, \ and\ \bibinfo {author} {\bibfnamefont {J{\"o}rg}\ \bibnamefont
{Schmiedmayer}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Recurrences in an isolated quantum many-body system},}\ }\href {\doibase
10.1126/science.aan7938} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {360}},\ \bibinfo {pages} {307--310}
(\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Urban}\ \emph {et~al.}(2019)\citenamefont {Urban},
\citenamefont {Glikin}, \citenamefont {Mouradian}, \citenamefont {Krimmel},
\citenamefont {Hemmerling},\ and\ \citenamefont {Haeffner}}]{neil2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Erik}\ \bibnamefont
{Urban}}, \bibinfo {author} {\bibfnamefont {Neil}\ \bibnamefont {Glikin}},
\bibinfo {author} {\bibfnamefont {Sara}\ \bibnamefont {Mouradian}}, \bibinfo
{author} {\bibfnamefont {Kai}\ \bibnamefont {Krimmel}}, \bibinfo {author}
{\bibfnamefont {Boerge}\ \bibnamefont {Hemmerling}}, \ and\ \bibinfo {author}
{\bibfnamefont {Hartmut}\ \bibnamefont {Haeffner}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {Coherent control of the rotational degree of
freedom of a two-ion coulomb crystal},}\ }\href {\doibase
10.1103/PhysRevLett.123.133202} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {123}},\ \bibinfo {pages}
{133202} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Xu}\ \emph {et~al.}(2019)\citenamefont {Xu},
\citenamefont {Jaffe}, \citenamefont {Panda}, \citenamefont {Kristensen},
\citenamefont {Clark},\ and\ \citenamefont {M{\"u}ller}}]{MullerS2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Victoria}\
\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {Matt}\ \bibnamefont
{Jaffe}}, \bibinfo {author} {\bibfnamefont {Cristian~D.}\ \bibnamefont
{Panda}}, \bibinfo {author} {\bibfnamefont {Sofus~L.}\ \bibnamefont
{Kristensen}}, \bibinfo {author} {\bibfnamefont {Logan~W.}\ \bibnamefont
{Clark}}, \ and\ \bibinfo {author} {\bibfnamefont {Holger}\ \bibnamefont
{M{\"u}ller}},\ }\bibfield {title} {{\selectlanguage {en}\enquote {\bibinfo
{title} {Probing gravity by holding atoms for 20 seconds},}\ }}\href
{\doibase 10.1126/science.aay6428} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {366}},\ \bibinfo {pages} {745--749}
(\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zobay}\ and\ \citenamefont
{Garraway}(2001)}]{GarrawayPRL2001}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Zobay}}\ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Garraway}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Two-{{Dimensional Atom Trapping}} in {{Field}}-{{Induced Adiabatic
Potentials}}},}\ }\href {\doibase 10.1103/PhysRevLett.86.1195} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {86}},\ \bibinfo {pages} {1195--1198} (\bibinfo {year}
{2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bell}\ \emph {et~al.}(2016)\citenamefont {Bell},
\citenamefont {Glidden}, \citenamefont {Humbert}, \citenamefont {Bromley},
\citenamefont {Haine}, \citenamefont {Davis}, \citenamefont {Neely},
\citenamefont {Baker},\ and\ \citenamefont
{Rubinsztein-Dunlop}}]{bell2016bose}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Thomas~A}\
\bibnamefont {Bell}}, \bibinfo {author} {\bibfnamefont {Jake~AP}\
\bibnamefont {Glidden}}, \bibinfo {author} {\bibfnamefont {Leif}\
\bibnamefont {Humbert}}, \bibinfo {author} {\bibfnamefont {Michael~WJ}\
\bibnamefont {Bromley}}, \bibinfo {author} {\bibfnamefont {Simon~A}\
\bibnamefont {Haine}}, \bibinfo {author} {\bibfnamefont {Matthew~J}\
\bibnamefont {Davis}}, \bibinfo {author} {\bibfnamefont {Tyler~W}\
\bibnamefont {Neely}}, \bibinfo {author} {\bibfnamefont {Mark~A}\
\bibnamefont {Baker}}, \ and\ \bibinfo {author} {\bibfnamefont {Halina}\
\bibnamefont {Rubinsztein-Dunlop}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {Bose--einstein condensation in large time-averaged optical ring
potentials},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New
J. Phys.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {035003}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ville}\ \emph {et~al.}(2017)\citenamefont {Ville},
\citenamefont {Bienaim{\'e}}, \citenamefont {{Saint-Jalm}}, \citenamefont
{Corman}, \citenamefont {Aidelsburger}, \citenamefont {Chomaz}, \citenamefont
{Kleinlein}, \citenamefont {Perconte}, \citenamefont {Nascimb{\`e}ne},
\citenamefont {Dalibard},\ and\ \citenamefont {Beugnon}}]{BeugnonPRA2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont
{Ville}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Bienaim{\'e}}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Saint-Jalm}}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Corman}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Aidelsburger}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Chomaz}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Kleinlein}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Perconte}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Nascimb{\`e}ne}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Dalibard}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Beugnon}},\ }\bibfield {title} {\enquote {\bibinfo
{title} {Loading and compression of a single two-dimensional {{Bose}} gas in
an optical accordion},}\ }\href {\doibase 10.1103/PhysRevA.95.013632}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {95}},\ \bibinfo {pages} {013632} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pandey}\ \emph {et~al.}(2019)\citenamefont {Pandey},
\citenamefont {Mas}, \citenamefont {Drougakis}, \citenamefont {Thekkeppatt},
\citenamefont {Bolpasi}, \citenamefont {Vasilakis}, \citenamefont {Poulios},\
and\ \citenamefont {{\noopsort{klitzing}}von Klitzing}}]{KlitzingN2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Saurabh}\ \bibnamefont
{Pandey}}, \bibinfo {author} {\bibfnamefont {Hector}\ \bibnamefont {Mas}},
\bibinfo {author} {\bibfnamefont {Giannis}\ \bibnamefont {Drougakis}},
\bibinfo {author} {\bibfnamefont {Premjith}\ \bibnamefont {Thekkeppatt}},
\bibinfo {author} {\bibfnamefont {Vasiliki}\ \bibnamefont {Bolpasi}},
\bibinfo {author} {\bibfnamefont {Georgios}\ \bibnamefont {Vasilakis}},
\bibinfo {author} {\bibfnamefont {Konstantinos}\ \bibnamefont {Poulios}}, \
and\ \bibinfo {author} {\bibfnamefont {Wolf}\ \bibnamefont
{{\noopsort{klitzing}}von Klitzing}},\ }\bibfield {title} {{\selectlanguage
{en}\enquote {\bibinfo {title} {Hypersonic {{Bose}}\textendash{{Einstein}}
condensates in accelerator rings},}\ }}\href {\doibase
10.1038/s41586-019-1273-5} {\bibfield {journal} {\bibinfo {journal}
{Nature}\ }\textbf {\bibinfo {volume} {570}},\ \bibinfo {pages} {205--209}
(\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gupta}\ \emph {et~al.}(2005)\citenamefont {Gupta},
\citenamefont {Murch}, \citenamefont {Moore}, \citenamefont {Purdy},\ and\
\citenamefont {{Stamper-Kurn}}}]{Stamper-KurnPRL2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Gupta}}, \bibinfo {author} {\bibfnamefont {K.~W.}\ \bibnamefont {Murch}},
\bibinfo {author} {\bibfnamefont {K.~L.}\ \bibnamefont {Moore}}, \bibinfo
{author} {\bibfnamefont {T.~P.}\ \bibnamefont {Purdy}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~M.}\ \bibnamefont {{Stamper-Kurn}}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {{Bose-Einstein} condensation in a
circular waveguide},}\ }\href {\doibase 10.1103/PhysRevLett.95.143201}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {95}},\ \bibinfo {pages} {143201} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ryu}\ \emph {et~al.}(2007)\citenamefont {Ryu},
\citenamefont {Andersen}, \citenamefont {Clad{\'e}}, \citenamefont
{Natarajan}, \citenamefont {Helmerson},\ and\ \citenamefont
{Phillips}}]{PhillipsPRL2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ryu}}, \bibinfo {author} {\bibfnamefont {M.~F.}\ \bibnamefont {Andersen}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Clad{\'e}}}, \bibinfo
{author} {\bibfnamefont {Vasant}\ \bibnamefont {Natarajan}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Helmerson}}, \ and\ \bibinfo
{author} {\bibfnamefont {W.~D.}\ \bibnamefont {Phillips}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Observation of {{Persistent Flow}} of a
{{Bose}}-{{Einstein Condensate}} in a {{Toroidal Trap}}},}\ }\href {\doibase
10.1103/PhysRevLett.99.260401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages}
{260401} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ramanathan}\ \emph {et~al.}(2011)\citenamefont
{Ramanathan}, \citenamefont {Wright}, \citenamefont {Muniz}, \citenamefont
{Zelan}, \citenamefont {Hill}, \citenamefont {Lobb}, \citenamefont
{Helmerson}, \citenamefont {Phillips},\ and\ \citenamefont
{Campbell}}]{CampbellPRL2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Ramanathan}}, \bibinfo {author} {\bibfnamefont {K.~C.}\ \bibnamefont
{Wright}}, \bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont {Muniz}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Zelan}}, \bibinfo
{author} {\bibfnamefont {W.~T.}\ \bibnamefont {Hill}}, \bibinfo {author}
{\bibfnamefont {C.~J.}\ \bibnamefont {Lobb}}, \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Helmerson}}, \bibinfo {author}
{\bibfnamefont {W.~D.}\ \bibnamefont {Phillips}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.~K.}\ \bibnamefont {Campbell}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {Superflow in a toroidal {Bose-Einstein}
condensate: An atom circuit with a tunable weak link},}\ }\href {\doibase
10.1103/PhysRevLett.106.130401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages}
{130401} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Das}\ \emph {et~al.}(2012)\citenamefont {Das},
\citenamefont {Sabbatini},\ and\ \citenamefont {Zurek}}]{ZurekSR2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Arnab}\ \bibnamefont
{Das}}, \bibinfo {author} {\bibfnamefont {Jacopo}\ \bibnamefont {Sabbatini}},
\ and\ \bibinfo {author} {\bibfnamefont {Wojciech~H.}\ \bibnamefont
{Zurek}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Winding up
superfluid in a torus via {{Bose}}-{{Einstein}} condensation},}\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep.}\ }\textbf {\bibinfo
{volume} {2}},\ \bibinfo {pages} {352} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Eckel}\ \emph {et~al.}(2014)\citenamefont {Eckel},
\citenamefont {Lee}, \citenamefont {Jendrzejewski}, \citenamefont {Murray},
\citenamefont {Clark}, \citenamefont {Lobb}, \citenamefont {Phillips},
\citenamefont {Edwards},\ and\ \citenamefont {Campbell}}]{CampbellN2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Stephen}\ \bibnamefont
{Eckel}}, \bibinfo {author} {\bibfnamefont {Jeffrey~G.}\ \bibnamefont {Lee}},
\bibinfo {author} {\bibfnamefont {Fred}\ \bibnamefont {Jendrzejewski}},
\bibinfo {author} {\bibfnamefont {Noel}\ \bibnamefont {Murray}}, \bibinfo
{author} {\bibfnamefont {Charles~W.}\ \bibnamefont {Clark}}, \bibinfo
{author} {\bibfnamefont {Christopher~J.}\ \bibnamefont {Lobb}}, \bibinfo
{author} {\bibfnamefont {William~D.}\ \bibnamefont {Phillips}}, \bibinfo
{author} {\bibfnamefont {Mark}\ \bibnamefont {Edwards}}, \ and\ \bibinfo
{author} {\bibfnamefont {Gretchen~K.}\ \bibnamefont {Campbell}},\ }\bibfield
Christiane Tretter
Christiane Tretter (born 28 December 1964)[1] is a German mathematician and mathematical physicist who works as a professor in the Mathematical Institute (MAI) of the University of Bern in Switzerland, and as managing director of the institute.[2] Her research interests include differential operators and spectral theory.
Christiane Tretter
Born: 28 December 1964
Nationality: German
Awards: Richard von Mises Prize
Academic background
Alma mater: University of Regensburg
Thesis: Asymptotische Randbedingungen für Entwicklungssätze bei Randeigenwertproblemen zu $N(y)=\lambda P(y)$ mit $\lambda $-abhängigen Randbedingungen (1992)
Doctoral advisor: Reinhard Mennicken
Academic work
Discipline: Mathematics
Sub-discipline: Mathematical physics
Institutions: University of Bern; University of Bremen; University of Leicester
Main interests: Differential operators; Spectral theory
Education and career
Tretter studied mathematics, with a minor in physics, at the University of Regensburg, earning a diploma in 1989, a Ph.D. in 1992, and a habilitation in 1998.[1] Her doctoral dissertation, Asymptotische Randbedingungen für Entwicklungssätze bei Randeigenwertproblemen zu $N(y)=\lambda P(y)$ mit $\lambda $-abhängigen Randbedingungen, was supervised by Reinhard Mennicken.[3]
She became a lecturer at the University of Leicester in 2000, moved to the University of Bremen as a professor in 2002, and took her present position in Bern in 2006.[1]
Since 2008 she has been editor-in-chief of the journal Integral Equations and Operator Theory.[1]
Books
Tretter is the author of two mathematical monographs, Spectral Theory of Block Operator Matrices and Applications (2008)[4] and On Lambda-Nonlinear-Boundary-Eigenvalue-Problems (1993),[5] and of two textbooks in mathematical analysis.
Recognition
Tretter won the Richard von Mises Prize of the Gesellschaft für Angewandte Mathematik und Mechanik in 1995.[6]
References
1. Curriculum vitae (PDF), August 2018, retrieved 27 February 2020
2. "Prof. Dr. Christiane Tretter, Managing director of the Institute", About us, Mathematical Institute of the University of Bern, 31 May 2018, retrieved 27 February 2020
3. Christiane Tretter at the Mathematics Genealogy Project
4. Reviews of Spectral Theory of Block Operator Matrices and Applications:
• Jeffrey, Alan, zbMATH, Zbl 1173.47003{{citation}}: CS1 maint: untitled periodical (link)
• Langer, Matthias (2010), Mathematical Reviews, MR 2463978{{citation}}: CS1 maint: untitled periodical (link)
5. Reviews of On Lambda-Nonlinear-Boundary-Eigenvalue-Problems:
• Möller, M., zbMATH, Zbl 0785.34054{{citation}}: CS1 maint: untitled periodical (link)
• Eberhard, Walter (1994), Mathematical Reviews, MR 1236842{{citation}}: CS1 maint: untitled periodical (link)
6. Richard von Mises Prize winners, GAMM, retrieved 27 February 2020
External links
• Official website
Dr Ryan Cooke
Royal Society University Research Fellow in the Department of Physics
Member of the Centre for Extragalactic Astronomy
Centre for Extragalactic Astronomy
Big Bang Nucleosynthesis
Fundamental Physics
The First Stars
Cooke, Ryan & Fumagalli, Michele (2018). Measurement of the primordial helium abundance from the intergalactic medium. Nature astronomy 2(12): 957-961.
Cooke, Ryan J., Pettini, Max & Steidel, Charles C. (2018). One Percent Determination of the Primordial Deuterium Abundance. The Astrophysical Journal 855(2): 102.
Hsyu, Tiffany, Cooke, Ryan J., Prochaska, J. Xavier & Bolte, Michael (2018). Searching for the lowest metallicity galaxies in the local universe. The Astrophysical Journal 863(2): 134.
Cooke, Ryan J., Pettini, Max & Steidel, Charles C. (2017). Discovery of the most metal-poor damped Lyman-α system. Monthly Notices of the Royal Astronomical Society 467(1): 802-811.
Hsyu, Tiffany, Cooke, Ryan J., Prochaska, J. Xavier & Bolte, Michael (2017). The Little Cub: Discovery of an Extremely Metal-poor Star-forming Galaxy in the Local Universe. The Astrophysical Journal 845(2): L22.
Cooke, R. & Pettini, M. (2016). The primordial abundance of deuterium: ionization correction. Monthly Notices of the Royal Astronomical Society 455(2): 1512-1521.
Cooke, R.J., Pettini, M., Nollett, K.M. & Jorgenson, R. (2016). The Primordial Deuterium Abundance of the Most Metal-poor Damped Lyman-alpha System. The Astrophysical Journal 830(2): 148.
Cooke, R.J. (2015). Big Bang Nucleosynthesis and the Helium Isotope Ratio. The Astrophysical Journal Letters 812: L12.
Shen, S., Cooke, R.J., Ramirez-Ruiz, E., Madau, P., Mayer, L. & Guedes, J. (2015). The History of R-Process Enrichment in the Milky Way. The Astrophysical Journal 807: 115.
Cooke, R.J., Pettini, M. & Jorgenson, R.A. (2015). The most metal-poor damped Lyα systems: An insight into dwarf galaxies at high redshift. The Astrophysical Journal 800(1): 12.
Cooke, R.J. & Madau, P. (2014). Carbon-enhanced Metal-poor Stars: Relics from the Dark Ages. The Astrophysical Journal 791: 116.
Cooke, R.J., Pettini, M., Jorgenson, R.A., Murphy, M.T. & Steidel, C.C. (2014). Precision Measures of the Primordial Abundance of Deuterium. The Astrophysical Journal 781: 31.
Cooke, R., Pettini, M., Jorgenson, R.A., Murphy, M.T., Rudie, G.C. & Steidel, C.C. (2013). The explosion energy of early stellar populations: the Fe-peak element ratios in low-metallicity damped Ly-alpha systems. Monthly Notices of the Royal Astronomical Society 431: 1625-1637.
Steiner, J.F., Reis, R.C., Fabian, A.C., Remillard, R.A., McClintock, J.E., Gou, L., Cooke, R., Brenneman, L.W. & Sanders, J.S. (2012). A broad iron line in LMC X-1. Monthly Notices of the Royal Astronomical Society 427: 2552-2561.
Cooke, R., Pettini, M. & Murphy, M.T. (2012). A new candidate for probing Population III nucleosynthesis with carbon-enhanced damped Ly-alpha systems. Monthly Notices of the Royal Astronomical Society 425: 347-354.
Pettini, M. & Cooke, R. (2012). A new, precise measurement of the primordial abundance of deuterium. Monthly Notices of the Royal Astronomical Society 425: 2477-2486.
Cooke, R., Pettini, M., Steidel, C.C., Rudie, G.C. & Jorgenson, R.A. (2011). A carbon-enhanced metal-poor damped Ly$\alpha$ system: probing gas from Population III nucleosynthesis?. Monthly Notices of the Royal Astronomical Society 412: 1047-1058.
Cooke, R., Pettini, M., Steidel, C.C., Rudie, G.C. & Nissen, P.E. (2011). The most metal-poor damped Lyα systems: insights into chemical evolution in the very metal-poor regime. Monthly Notices of the Royal Astronomical Society 417(2): 1534-1558.
Cooke, R., Pettini, M., Steidel, C.C., King, L.J., Rudie, G.C. & Rakic, O. (2010). A newly discovered DLA and associated Ly-alpha emission in the spectra of the gravitationally lensed quasar UM673A,B. Monthly Notices of the Royal Astronomical Society 409: 679-693.
Cooke, R. & Lynden-Bell, D. (2010). Does the Universe accelerate equally in all directions?. Monthly Notices of the Royal Astronomical Society 401: 1409-1414.
Cooke, R., Bland-Hawthorn, J., Sharp, R. & Kuncic, Z. (2008). Ionization Cone in the X-Ray Binary LMC X-1. The Astrophysical Journal Letters 687: L29.
Cooke, R., Kuncic, Z., Sharp, R. & Bland-Hawthorn, J. (2007). Spectacular Trailing Streamers near LMC X-1: The First Evidence of a Jet?. The Astrophysical Journal Letters 667: L163-L166.
Updated: 22nd August 2019
Topological group
In mathematics, a topological group is a set that is simultaneously a group and a topological space, with the two structures linked by the requirement that the group operations be continuous; the group structure and the topology are therefore not independent of each other.[1]
Topological groups were studied extensively in the period from 1925 to 1940. Haar and Weil (in 1933 and 1940, respectively) showed that integrals and Fourier series are special cases of constructions available for a very wide class of topological groups.[2]
Topological groups, along with continuous group actions, are used to study continuous symmetries, which have many applications, for example, in physics. In functional analysis, every topological vector space is an additive topological group with the additional property that scalar multiplication is continuous; consequently, many results from the theory of topological groups can be applied to functional analysis.
Formal definition
A topological group, G, is a topological space that is also a group such that the group operation (in this case product):
⋅ : G × G → G, (x, y) ↦ xy
and the inversion map:
−1 : G → G, x ↦ x−1
are continuous.[note 1] Here G × G is viewed as a topological space with the product topology. Such a topology is said to be compatible with the group operations and is called a group topology.
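As an illustrative sketch (not part of the article; the helper names `elem`, `mul`, and `inv` are hypothetical), the circle group of unit complex numbers can be modeled in Python. The final assertions check numerically that nearby inputs yield nearby products and inverses, which is the intuition behind requiring the group operations to be continuous:

```python
import cmath

# The circle group S^1: unit complex numbers e^{i*theta} under multiplication.

def elem(theta):
    """Group element on the unit circle, parametrized by an angle."""
    return cmath.exp(1j * theta)

def mul(x, y):
    """Group operation: complex multiplication."""
    return x * y

def inv(x):
    """Inversion: complex conjugation, since |x| = 1."""
    return x.conjugate()

# Group axioms at sample points (up to floating-point error).
e = elem(0.0)                       # identity element
x, y, z = elem(0.3), elem(1.1), elem(2.5)
assert abs(mul(x, inv(x)) - e) < 1e-12                      # x * x^{-1} = e
assert abs(mul(mul(x, y), z) - mul(x, mul(y, z))) < 1e-12   # associativity

# Continuity heuristic: perturbing an input by eps moves the output by O(eps).
eps = 1e-6
assert abs(mul(elem(0.3 + eps), y) - mul(x, y)) < 10 * eps
assert abs(inv(elem(0.3 + eps)) - inv(x)) < 10 * eps
```

A genuine continuity proof would of course quantify over all neighborhoods, as in the next subsection; the numeric check only illustrates the "small change in, small change out" behavior at sample points.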
Checking continuity
The product map is continuous if and only if for any x, y ∈ G and any neighborhood W of xy in G, there exist neighborhoods U of x and V of y in G such that U ⋅ V ⊆ W, where U ⋅ V := {u ⋅ v : u ∈ U, v ∈ V}. The inversion map is continuous if and only if for any x ∈ G and any neighborhood V of x−1 in G, there exists a neighborhood U of x in G such that U−1 ⊆ V, where U−1 := { u−1 : u ∈ U }.
To show that a topology is compatible with the group operations, it suffices to check that the map
G × G → G, (x, y) ↦ xy−1
is continuous. Explicitly, this means that for any x, y ∈ G and any neighborhood W in G of xy−1, there exist neighborhoods U of x and V of y in G such that U ⋅ (V−1) ⊆ W.
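To make the sufficiency explicit (a standard argument, filled in here rather than taken from the text): if the single map φ(x, y) = xy−1 is continuous, then both inversion and the product are compositions of φ with continuous maps, hence continuous themselves:

```latex
x^{-1} = \varphi(e, x),
\qquad
xy = \varphi\bigl(x,\, \varphi(e, y)\bigr) = x\,\bigl(y^{-1}\bigr)^{-1}.
```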
Additive notation
This definition uses notation for multiplicative groups; the equivalent for additive groups is that the following two operations are continuous:
+ : G × G → G , (x, y) ↦ x + y
− : G → G , x ↦ −x.
Hausdorffness
Although not part of this definition, many authors[3] require that the topology on G be Hausdorff. One reason for this is that any topological group can be canonically associated with a Hausdorff topological group by taking an appropriate canonical quotient; this however, often still requires working with the original non-Hausdorff topological group. Other reasons, and some equivalent conditions, are discussed below.
This article will not assume that topological groups are necessarily Hausdorff.
Category
In the language of category theory, topological groups can be defined concisely as group objects in the category of topological spaces, in the same way that ordinary groups are group objects in the category of sets. Note that the axioms are given in terms of the maps (binary product, unary inverse, and nullary identity), hence are categorical definitions.
Homomorphisms
A homomorphism of topological groups means a continuous group homomorphism G → H. Topological groups, together with their homomorphisms, form a category. A group homomorphism between topological groups is continuous if and only if it is continuous at some point.[4]
An isomorphism of topological groups is a group isomorphism that is also a homeomorphism of the underlying topological spaces. This is stronger than simply requiring a continuous group isomorphism: the inverse must also be continuous. There are examples of topological groups that are isomorphic as ordinary groups but not as topological groups. Indeed, any non-discrete topological group remains a topological group when re-equipped with the discrete topology; the underlying groups are the same, but the two topological groups are not isomorphic, since the identity map is a continuous group isomorphism whose inverse fails to be continuous.
Examples
Every group can be trivially made into a topological group by considering it with the discrete topology; such groups are called discrete groups. In this sense, the theory of topological groups subsumes that of ordinary groups. The indiscrete topology (i.e. the trivial topology) also makes every group into a topological group.
The real numbers, $\mathbb{R}$ with the usual topology form a topological group under addition. Euclidean n-space $\mathbb{R}^n$ is also a topological group under addition, and more generally, every topological vector space forms an (abelian) topological group. Some other examples of abelian topological groups are the circle group $S^1$, or the torus $(S^1)^n$ for any natural number n.
The classical groups are important examples of non-abelian topological groups. For instance, the general linear group GL(n,$\mathbb{R}$) of all invertible n-by-n matrices with real entries can be viewed as a topological group with the topology defined by viewing GL(n,$\mathbb{R}$) as a subspace of Euclidean space $\mathbb{R}^{n\times n}$. Another classical group is the orthogonal group O(n), the group of all linear maps from $\mathbb{R}^n$ to itself that preserve the length of all vectors. The orthogonal group is compact as a topological space. Much of Euclidean geometry can be viewed as studying the structure of the orthogonal group, or the closely related group O(n) ⋉ $\mathbb{R}^n$ of isometries of $\mathbb{R}^n$.
The groups mentioned so far are all Lie groups, meaning that they are smooth manifolds in such a way that the group operations are smooth, not just continuous. Lie groups are the best-understood topological groups; many questions about Lie groups can be converted to purely algebraic questions about Lie algebras and then solved.
An example of a topological group that is not a Lie group is the additive group $\mathbb{Q}$ of rational numbers, with the topology inherited from $\mathbb{R}$. This is a countable space, and it does not have the discrete topology. An important example for number theory is the group $\mathbb{Z}_p$ of p-adic integers, for a prime number p, meaning the inverse limit of the finite groups $\mathbb{Z}/p^n$ as n goes to infinity. The group $\mathbb{Z}_p$ is well behaved in that it is compact (in fact, homeomorphic to the Cantor set), but it differs from (real) Lie groups in that it is totally disconnected. More generally, there is a theory of p-adic Lie groups, including compact groups such as GL(n,$\mathbb{Z}_p$) as well as locally compact groups such as GL(n,$\mathbb{Q}_p$), where $\mathbb{Q}_p$ is the locally compact field of p-adic numbers.
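The inverse-limit description of $\mathbb{Z}_p$ can be made concrete with a small finite-precision sketch (the helper names `padic`, `is_compatible`, and `padd` are illustrative, not a standard library interface): an element is stored as its compatible residues modulo $p, p^2, \dots, p^k$, and the group operation is componentwise addition.

```python
# Finite-precision sketch of Z_p as an inverse limit: an element is stored as
# its compatible residues mod p, p^2, ..., p^k (illustrative, not a library API).

def padic(m, p, k):
    """Residues of the integer m mod p, p^2, ..., p^k."""
    return [m % p**i for i in range(1, k + 1)]

def is_compatible(res, p):
    """Inverse-limit condition: each residue reduces to the previous one."""
    return all(res[i] % p**i == res[i - 1] for i in range(1, len(res)))

def padd(a, b, p):
    """Componentwise addition mod p^i -- the group operation."""
    return [(x + y) % p**(i + 1) for i, (x, y) in enumerate(zip(a, b))]

p, k = 3, 5
minus_one = padic(-1, p, k)        # -1 has residues p^i - 1 at every level
assert minus_one == [2, 8, 26, 80, 242]
assert is_compatible(minus_one, p)
assert padd(minus_one, padic(1, p, k), p) == padic(0, p, k)   # (-1) + 1 = 0
```

Truncating at level $k$ corresponds to working in the finite quotient $\mathbb{Z}/p^k$; the group $\mathbb{Z}_p$ itself is the limit over all $k$.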
The group $\mathbb{Z}_p$ is a profinite group; it is isomorphic to a subgroup of the product $\prod _{n\geq 1}\mathbb {Z} /p^{n}$ in such a way that its topology is induced by the product topology, where the finite groups $\mathbb {Z} /p^{n}$ are given the discrete topology. Another large class of profinite groups important in number theory consists of absolute Galois groups.
Some topological groups can be viewed as infinite dimensional Lie groups; this phrase is best understood informally, to include several different families of examples. For example, a topological vector space, such as a Banach space or Hilbert space, is an abelian topological group under addition. Some other infinite-dimensional groups that have been studied, with varying degrees of success, are loop groups, Kac–Moody groups, diffeomorphism groups, homeomorphism groups, and gauge groups.
In every Banach algebra with multiplicative identity, the set of invertible elements forms a topological group under multiplication. For example, the group of invertible bounded operators on a Hilbert space arises this way.
Properties
Translation invariance
Every topological group's topology is translation invariant, which by definition means that for any $a\in G,$ left or right multiplication by this element yields a homeomorphism $G\to G.$ Consequently, for any $a\in G$ and $S\subseteq G,$ the subset $S$ is open (resp. closed) in $G$ if and only if this is true of its left translation $aS:=\{as:s\in S\}$ and right translation $Sa:=\{sa:s\in S\}.$ If ${\mathcal {N}}$ is a neighborhood basis of the identity element in a topological group $G$ then for all $x\in G,$ $x{\mathcal {N}}:=\{xN:N\in {\mathcal {N}}\}$ is a neighborhood basis of $x$ in $G.$[4] In particular, any group topology on a topological group is completely determined by any neighborhood basis at the identity element. If $S$ is any subset of $G$ and $U$ is an open subset of $G,$ then $SU:=\{su:s\in S,u\in U\}$ is an open subset of $G.$[4]
Symmetric neighborhoods
The inversion operation $g\mapsto g^{-1}$ on a topological group $G$ is a homeomorphism from $G$ to itself.
A subset $S\subseteq G$ is said to be symmetric if $S^{-1}=S,$ where $S^{-1}:=\left\{s^{-1}:s\in S\right\}.$ The closure of every symmetric set in a commutative topological group is symmetric.[4] If S is any subset of a commutative topological group G, then the following sets are also symmetric: S−1 ∩ S, S−1 ∪ S, and S−1 S.[4]
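These closure properties are easy to verify in a small discrete example. The following sketch (with hypothetical helpers `inv` and `is_symmetric`) works in the commutative group $\mathbb{Z}/12$ under addition, where the inverse of $s$ is $-s \bmod 12$:

```python
# Symmetric subsets of the commutative (discrete, hence topological) group
# Z/12 under addition, where the inverse of s is -s mod 12.
n = 12
def inv(S):
    return {(-s) % n for s in S}

def is_symmetric(S):
    return inv(S) == S

S = {1, 2, 5}                       # not symmetric: inv(S) = {11, 10, 7}
assert not is_symmetric(S)

# The constructions from the text are always symmetric:
assert is_symmetric(inv(S) & S)
assert is_symmetric(inv(S) | S)
sumset = {(a + b) % n for a in inv(S) for b in S}    # additive analogue of S^{-1} S
assert is_symmetric(sumset)
```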
For any neighborhood N of the identity element in a commutative topological group G, there exists a symmetric neighborhood M of the identity element such that M−1 M ⊆ N; note that M−1 M is necessarily a symmetric neighborhood of the identity element.[4] Thus every topological group has a neighborhood basis at the identity element consisting of symmetric sets.
If G is a locally compact commutative group, then for any neighborhood N in G of the identity element, there exists a symmetric relatively compact neighborhood M of the identity element such that cl M ⊆ N (where cl M is symmetric as well).[4]
Uniform space
Every topological group can be viewed as a uniform space in two ways; the left uniformity turns all left multiplications into uniformly continuous maps while the right uniformity turns all right multiplications into uniformly continuous maps.[5] If G is not abelian, then these two need not coincide. The uniform structures allow one to talk about notions such as completeness, uniform continuity and uniform convergence on topological groups.
Separation properties
If U is an open subset of a commutative topological group G and U contains a compact set K, then there exists a neighborhood N of the identity element such that KN ⊆ U.[4]
As a uniform space, every commutative topological group is completely regular. Consequently, for a multiplicative topological group G with identity element 1, the following are equivalent:[4]
1. G is a T0-space (Kolmogorov);
2. G is a T2-space (Hausdorff);
3. G is a T3½-space (Tychonoff);
4. { 1 } is closed in G;
5. $\{1\}=\bigcap _{N\in {\mathcal {N}}}N,$ where ${\mathcal {N}}$ is a neighborhood basis of the identity element in G;
6. for any $x\in G$ such that $x\neq 1,$ there exists a neighborhood U in G of the identity element such that $x\not \in U.$
A subgroup of a commutative topological group is discrete if and only if it has an isolated point.[4]
If G is not Hausdorff, then one can obtain a Hausdorff group by passing to the quotient group G/K, where K is the closure of the identity.[6] This is equivalent to taking the Kolmogorov quotient of G.
Metrisability
Let G be a topological group. As with any topological space, we say that G is metrisable if and only if there exists a metric d on G that induces the given topology on $G$. A metric d on G is called
• left-invariant (resp. right-invariant) if and only if $d(ax_{1},ax_{2})=d(x_{1},x_{2})$(resp. $d(x_{1}a,x_{2}a)=d(x_{1},x_{2})$) for all $a,x_{1},x_{2}\in G$ (equivalently, $d$ is left-invariant just in case the map $x\mapsto ax$ is an isometry from $(G,d)$ to itself for each $a\in G$).
• proper if and only if all open balls, $B(r)=\{g\in G\mid d(g,1)<r\}$ for $r>0$, are pre-compact.
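A quick numerical sanity check of translation invariance on the additive group $\mathbb{R}$, contrasting the usual metric with one that fails to be invariant (a sketch; `d` and `d2` are ad hoc names):

```python
import random

# The usual metric on (R, +) is translation-invariant; pulling the metric
# back along x -> x^3 gives a metric on R that is not.
d = lambda x, y: abs(x - y)            # left- (= right-) invariant
d2 = lambda x, y: abs(x**3 - y**3)     # a metric, but not translation-invariant

random.seed(0)
for _ in range(100):
    a, x1, x2 = (random.uniform(-5, 5) for _ in range(3))
    assert abs(d(a + x1, a + x2) - d(x1, x2)) < 1e-9

# Translating by 1 changes d2-distances:
assert abs(d2(1 + 1, 1 + 2) - d2(1, 2)) > 1
```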
The Birkhoff–Kakutani theorem (named after mathematicians Garrett Birkhoff and Shizuo Kakutani) states that the following three conditions on a topological group G are equivalent:[7]
1. G is first countable (equivalently: the identity element 1 is closed in G, and there is a countable basis of neighborhoods for 1 in G).
2. G is metrisable (as a topological space).
3. There is a left-invariant metric on G that induces the given topology on G.
Furthermore, the following are equivalent for any topological group G:
1. G is a second countable locally compact (Hausdorff) space.
2. G is a Polish, locally compact (Hausdorff) space.
3. G is properly metrisable (as a topological space).
4. There is a left-invariant, proper metric on G that induces the given topology on G.
Note: As with the rest of the article, we assume here a Hausdorff topology. The implications 4 $\Rightarrow $ 3 $\Rightarrow $ 2 $\Rightarrow $ 1 hold in any topological space. In particular 3 $\Rightarrow $ 2 holds, since any properly metrisable space is a countable union of compact, metrisable, and thus separable (cf. properties of compact metric spaces) subsets. The non-trivial implication 1 $\Rightarrow $ 4 was first proved by Raimond Struble in 1974.[8] An alternative approach was given by Uffe Haagerup and Agata Przybyszewska in 2006,[9] the idea of which is as follows: One relies on the construction of a left-invariant metric, $d_{0}$, as in the case of first countable spaces. By local compactness, closed balls of sufficiently small radii are compact, and by normalising we can assume this holds for radius 1. Closing the open ball, U, of radius 1 under multiplication yields a clopen subgroup, H, of G, on which the metric $d_{0}$ is proper. Since H is open and G is second countable, the subgroup has at most countably many cosets. One now uses this sequence of cosets and the metric on H to construct a proper metric on G.
Subgroups
Every subgroup of a topological group is itself a topological group when given the subspace topology. Every open subgroup H is also closed in G, since the complement of H is the open set given by the union of open sets gH for g ∈ G \ H. If H is a subgroup of G then the closure of H is also a subgroup. Likewise, if H is a normal subgroup of G, the closure of H is normal in G.
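The argument that an open subgroup is closed can be seen concretely in a finite (hence discrete) example, where every subset is open; a sketch in $G = \mathbb{Z}/12$ with $H = \{0,4,8\}$:

```python
# In G = Z/12 with the discrete topology every subset is open.  The complement
# of the subgroup H = {0, 4, 8} is a union of cosets g + H, which is why an
# open subgroup is automatically closed.
n = 12
H = {0, 4, 8}
cosets = {frozenset((g + h) % n for h in H) for g in range(n) if g not in H}
complement = set().union(*cosets)

assert complement == set(range(n)) - H    # G \ H is a union of (open) cosets
assert len(cosets) == 3                   # namely 1+H, 2+H, and 3+H
```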
Quotients and normal subgroups
If H is a subgroup of G, the set of left cosets G/H with the quotient topology is called a homogeneous space for G. The quotient map $q:G\to G/H$ is always open. For example, for a positive integer n, the sphere $S^n$ is a homogeneous space for the rotation group SO(n+1) in $\mathbb{R}^{n+1}$, with $S^n$ = SO(n+1)/SO(n). A homogeneous space G/H is Hausdorff if and only if H is closed in G.[10] Partly for this reason, it is natural to concentrate on closed subgroups when studying topological groups.
If H is a normal subgroup of G, then the quotient group G/H becomes a topological group when given the quotient topology. It is Hausdorff if and only if H is closed in G. For example, the quotient group $\mathbb {R} /\mathbb {Z} $ is isomorphic to the circle group $S^1$.
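The isomorphism $\mathbb{R}/\mathbb{Z} \cong S^1$ is induced by the map $x \mapsto e^{2\pi i x}$, which the following sketch checks numerically: it is a homomorphism from $(\mathbb{R},+)$ to the unit circle whose kernel is exactly $\mathbb{Z}$.

```python
import cmath

# The homomorphism phi: (R, +) -> S^1, x -> e^{2*pi*i*x}, is surjective with
# kernel Z, inducing the isomorphism R/Z = S^1.
phi = lambda x: cmath.exp(2j * cmath.pi * x)

assert abs(phi(0.3 + 0.4) - phi(0.3) * phi(0.4)) < 1e-12   # homomorphism property
assert abs(phi(5) - 1) < 1e-12                             # integers lie in the kernel
assert abs(abs(phi(0.77)) - 1) < 1e-12                     # image lies on the unit circle
assert abs(phi(0.25 + 3) - phi(0.25)) < 1e-12              # phi factors through R/Z
```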
In any topological group, the identity component (i.e., the connected component containing the identity element) is a closed normal subgroup. If C is the identity component and a is any point of G, then the left coset aC is the component of G containing a. So the collection of all left cosets (or right cosets) of C in G is equal to the collection of all components of G. It follows that the quotient group G/C is totally disconnected.[11]
Closure and compactness
In any commutative topological group, the product (assuming the group is multiplicative) KC of a compact set K and a closed set C is a closed set.[4] Furthermore, for any subsets R and S of G, (cl R)(cl S) ⊆ cl (RS).[4]
If H is a subgroup of a commutative topological group G and if N is a neighborhood in G of the identity element such that H ∩ cl N is closed, then H is closed.[4] Every discrete subgroup of a Hausdorff commutative topological group is closed.[4]
Isomorphism theorems
The isomorphism theorems from ordinary group theory are not always true in the topological setting. This is because a bijective homomorphism need not be an isomorphism of topological groups.
For example, a naive version of the first isomorphism theorem is false for topological groups: if $f:G\to H$ is a morphism of topological groups (that is, a continuous homomorphism), it is not necessarily true that the induced homomorphism ${\tilde {f}}:G/\ker f\to \mathrm {Im} (f)$ is an isomorphism of topological groups; it will be a bijective, continuous homomorphism, but it will not necessarily be a homeomorphism. In other words, it will not necessarily admit an inverse in the category of topological groups.
There is a version of the first isomorphism theorem for topological groups, which may be stated as follows: if $f:G\to H$ is a continuous homomorphism, then the induced homomorphism from G/ker(f) to im(f) is an isomorphism if and only if the map f is open onto its image.[12]
The third isomorphism theorem, however, is true more or less verbatim for topological groups, as one may easily check.
Hilbert's fifth problem
There are several strong results on the relation between topological groups and Lie groups. First, every continuous homomorphism of Lie groups $G\to H$ is smooth. It follows that a topological group has a unique structure of a Lie group if one exists. Also, Cartan's theorem says that every closed subgroup of a Lie group is a Lie subgroup, in particular a smooth submanifold.
Hilbert's fifth problem asked whether a topological group G that is a topological manifold must be a Lie group. In other words, does G have the structure of a smooth manifold, making the group operations smooth? As shown by Andrew Gleason, Deane Montgomery, and Leo Zippin, the answer to this problem is yes.[13] In fact, G has a real analytic structure. Using the smooth structure, one can define the Lie algebra of G, an object of linear algebra that determines a connected group G up to covering spaces. As a result, the solution to Hilbert's fifth problem reduces the classification of topological groups that are topological manifolds to an algebraic problem, albeit a complicated problem in general.
The theorem also has consequences for broader classes of topological groups. First, every compact group (understood to be Hausdorff) is an inverse limit of compact Lie groups. (One important case is an inverse limit of finite groups, called a profinite group. For example, the group $\mathbb{Z}_p$ of p-adic integers and the absolute Galois group of a field are profinite groups.) Furthermore, every connected locally compact group is an inverse limit of connected Lie groups.[14] At the other extreme, a totally disconnected locally compact group always contains a compact open subgroup, which is necessarily a profinite group.[15] (For example, the locally compact group GL(n,$\mathbb{Q}_p$) contains the compact open subgroup GL(n,$\mathbb{Z}_p$), which is the inverse limit of the finite groups GL(n,$\mathbb{Z}/p^r$) as r goes to infinity.)
Representations of compact or locally compact groups
An action of a topological group G on a topological space X is a group action of G on X such that the corresponding function G × X → X is continuous. Likewise, a representation of a topological group G on a real or complex topological vector space V is a continuous action of G on V such that for each g ∈ G, the map v ↦ gv from V to itself is linear.
Group actions and representation theory are particularly well understood for compact groups, generalizing what happens for finite groups. For example, every finite-dimensional (real or complex) representation of a compact group is a direct sum of irreducible representations. An infinite-dimensional unitary representation of a compact group can be decomposed as a Hilbert-space direct sum of irreducible representations, which are all finite-dimensional; this is part of the Peter–Weyl theorem.[16] For example, the theory of Fourier series describes the decomposition of the unitary representation of the circle group $S^1$ on the complex Hilbert space $L^2(S^1)$. The irreducible representations of $S^1$ are all 1-dimensional, of the form $z\mapsto z^n$ for integers n (where $S^1$ is viewed as a subgroup of the multiplicative group $\mathbb{C}^*$). Each of these representations occurs with multiplicity 1 in $L^2(S^1)$.
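The multiplicity statement can be checked numerically for a simple function: approximating the Fourier coefficient $c_n = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)e^{-in\theta}\,d\theta$ by a Riemann sum shows that $f(\theta)=\cos\theta$ involves only the characters $z\mapsto z^{\pm 1}$, each with coefficient $1/2$ (a sketch; `coeff` is an ad hoc helper):

```python
import math
import cmath

# Approximate c_n = (1/2pi) * int_0^{2pi} f(t) e^{-int} dt for a function on
# the circle group S^1 by an M-point Riemann sum.
M = 2048
def coeff(f, n):
    return sum(f(2 * math.pi * k / M) * cmath.exp(-2j * math.pi * n * k / M)
               for k in range(M)) / M

# cos(t) = (z + z^{-1})/2 on S^1: only the characters z -> z and z -> z^{-1} appear.
assert abs(coeff(math.cos, 1) - 0.5) < 1e-9
assert abs(coeff(math.cos, -1) - 0.5) < 1e-9
assert abs(coeff(math.cos, 0)) < 1e-9
assert abs(coeff(math.cos, 2)) < 1e-9
```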
The irreducible representations of all compact connected Lie groups have been classified. In particular, the character of each irreducible representation is given by the Weyl character formula.
More generally, locally compact groups have a rich theory of harmonic analysis, because they admit a natural notion of measure and integral, given by the Haar measure. Every unitary representation of a locally compact group can be described as a direct integral of irreducible unitary representations. (The decomposition is essentially unique if G is of Type I, which includes the most important examples such as abelian groups and semisimple Lie groups.[17]) A basic example is the Fourier transform, which decomposes the action of the additive group $\mathbb{R}$ on the Hilbert space $L^2(\mathbb{R})$ as a direct integral of the irreducible unitary representations of $\mathbb{R}$. The irreducible unitary representations of $\mathbb{R}$ are all 1-dimensional, of the form $x\mapsto e^{2\pi iax}$ for $a\in \mathbb{R}$.
The irreducible unitary representations of a locally compact group may be infinite-dimensional. A major goal of representation theory, related to the Langlands classification of admissible representations, is to find the unitary dual (the space of all irreducible unitary representations) for the semisimple Lie groups. The unitary dual is known in many cases such as SL(2,$\mathbb {R} $), but not all.
For a locally compact abelian group G, every irreducible unitary representation has dimension 1. In this case, the unitary dual ${\hat {G}}$ is a group, in fact another locally compact abelian group. Pontryagin duality states that for a locally compact abelian group G, the dual of ${\hat {G}}$ is the original group G. For example, the dual group of the integers $\mathbb{Z}$ is the circle group $S^1$, while the group $\mathbb{R}$ of real numbers is isomorphic to its own dual.
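A toy case of Pontryagin duality is the finite (hence both compact and discrete) abelian group $\mathbb{Z}/n$: its characters are $\chi_k(m) = e^{2\pi i km/n}$, and under pointwise multiplication they again form a copy of $\mathbb{Z}/n$. A numerical sketch (the helper `chi` is illustrative):

```python
import cmath

# Characters of Z/6: chi_k(m) = e^{2*pi*i*k*m/6}.  Each chi_k is a homomorphism
# Z/6 -> S^1, and multiplying characters pointwise adds the indices mod 6,
# so the dual group is again (isomorphic to) Z/6.
n = 6
def chi(k):
    return lambda m: cmath.exp(2j * cmath.pi * k * m / n)

for k in range(n):
    for a in range(n):
        for b in range(n):
            assert abs(chi(k)((a + b) % n) - chi(k)(a) * chi(k)(b)) < 1e-9

assert all(abs(chi(2)(m) * chi(5)(m) - chi((2 + 5) % n)(m)) < 1e-9 for m in range(n))
```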
Every locally compact group G has a good supply of irreducible unitary representations; for example, enough representations to distinguish the points of G (the Gelfand–Raikov theorem). By contrast, representation theory for topological groups that are not locally compact has so far been developed only in special situations, and it may not be reasonable to expect a general theory. For example, there are many abelian Banach–Lie groups for which every representation on Hilbert space is trivial.[18]
Homotopy theory of topological groups
Topological groups are special among all topological spaces, even in terms of their homotopy type. One basic point is that a topological group G determines a path-connected topological space, the classifying space BG (which classifies principal G-bundles over topological spaces, under mild hypotheses). The group G is isomorphic in the homotopy category to the loop space of BG; that implies various restrictions on the homotopy type of G.[19] Some of these restrictions hold in the broader context of H-spaces.
For example, the fundamental group of a topological group G is abelian. (More generally, the Whitehead product on the homotopy groups of G is zero.) Also, for any field k, the cohomology ring $H^*(G,k)$ has the structure of a Hopf algebra. In view of structure theorems on Hopf algebras by Heinz Hopf and Armand Borel, this puts strong restrictions on the possible cohomology rings of topological groups. In particular, if G is a path-connected topological group whose rational cohomology ring $H^*(G,\mathbb{Q})$ is finite-dimensional in each degree, then this ring must be a free graded-commutative algebra over $\mathbb{Q}$, that is, the tensor product of a polynomial ring on generators of even degree with an exterior algebra on generators of odd degree.[20]
In particular, for a connected Lie group G, the rational cohomology ring of G is an exterior algebra on generators of odd degree. Moreover, a connected Lie group G has a maximal compact subgroup K, which is unique up to conjugation, and the inclusion of K into G is a homotopy equivalence. So describing the homotopy types of Lie groups reduces to the case of compact Lie groups. For example, the maximal compact subgroup of SL(2,$\mathbb {R} $) is the circle group SO(2), and the homogeneous space SL(2,$\mathbb {R} $)/SO(2) can be identified with the hyperbolic plane. Since the hyperbolic plane is contractible, the inclusion of the circle group into SL(2,$\mathbb {R} $) is a homotopy equivalence.
Finally, compact connected Lie groups have been classified by Wilhelm Killing, Élie Cartan, and Hermann Weyl. As a result, there is an essentially complete description of the possible homotopy types of Lie groups. For example, a compact connected Lie group of dimension at most 3 is either a torus, the group SU(2) (diffeomorphic to the 3-sphere $S^3$), or its quotient group SU(2)/{±1} ≅ SO(3) (diffeomorphic to $\mathbb{RP}^3$).
Complete topological group
See also: Complete uniform space
Information about convergence of nets and filters, such as definitions and properties, can be found in the article about filters in topology.
Canonical uniformity on a commutative topological group
Main article: Uniform space
This article will henceforth assume that any topological group that we consider is an additive commutative topological group with identity element $0.$
The diagonal of $X$ is the set
$\Delta _{X}:=\{(x,x):x\in X\}$
and for any $N\subseteq X$ containing $0,$ the canonical entourage or canonical vicinity around $N$ is the set
$\Delta _{X}(N):=\{(x,y)\in X\times X:x-y\in N\}=\bigcup _{y\in X}[(y+N)\times \{y\}]=\Delta _{X}+(N\times \{0\})$
For a topological group $(X,\tau ),$ the canonical uniformity[21] on $X$ is the uniform structure induced by the set of all canonical entourages $\Delta (N)$ as $N$ ranges over all neighborhoods of $0$ in $X.$
That is, it is the upward closure of the following prefilter on $X\times X,$
$\left\{\Delta (N):N{\text{ is a neighborhood of }}0{\text{ in }}X\right\}$
where this prefilter forms what is known as a base of entourages of the canonical uniformity.
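As a concrete instance, take $X = \mathbb{R}$ under addition and the neighborhood $N = (-\varepsilon, \varepsilon)$ of $0$; unwinding the definition gives

```latex
\Delta_{\mathbb{R}}(N)
  = \{(x, y) \in \mathbb{R} \times \mathbb{R} : x - y \in (-\varepsilon, \varepsilon)\}
  = \{(x, y) \in \mathbb{R} \times \mathbb{R} : |x - y| < \varepsilon\},
```

so the entourages $\Delta_{\mathbb{R}}((-\varepsilon,\varepsilon))$ for $\varepsilon > 0$ form a base for the usual metric uniformity on $\mathbb{R}$.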
For a commutative additive group $X,$ a fundamental system of entourages ${\mathcal {B}}$ is called a translation-invariant uniformity if for every $B\in {\mathcal {B}},$ $(x,y)\in B$ if and only if $(x+z,y+z)\in B$ for all $x,y,z\in X.$ A uniformity ${\mathcal {B}}$ is called translation-invariant if it has a base of entourages that is translation-invariant.[22]
• The canonical uniformity on any commutative topological group is translation-invariant.
• The same canonical uniformity would result by using a neighborhood basis of the origin rather than the filter of all neighborhoods of the origin.
• Every entourage $\Delta _{X}(N)$ contains the diagonal $\Delta _{X}:=\Delta _{X}(\{0\})=\{(x,x):x\in X\}$ because $0\in N.$
• If $N$ is symmetric (that is, $-N=N$) then $\Delta _{X}(N)$ is symmetric (meaning that $\Delta _{X}(N)^{\operatorname {op} }=\Delta _{X}(N)$) and $\Delta _{X}(N)\circ \Delta _{X}(N)=\{(x,z):{\text{ there exists }}y\in X{\text{ such that }}x,z\in y+N\}=\bigcup _{y\in X}[(y+N)\times (y+N)]=\Delta _{X}+(N\times N).$
• The topology induced on $X$ by the canonical uniformity is the same as the topology that $X$ started with (that is, it is $\tau $).
Cauchy prefilters and nets
Main articles: Filters in topology and Net (mathematics)
The general theory of uniform spaces has its own definition of a "Cauchy prefilter" and "Cauchy net." For the canonical uniformity on $X,$ these reduce to the definitions described below.
Suppose $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ is a net in $X$ and $y_{\bullet }=\left(y_{j}\right)_{j\in J}$ is a net in $Y.$ Make $I\times J$ into a directed set by declaring $(i,j)\leq \left(i_{2},j_{2}\right)$ if and only if $i\leq i_{2}{\text{ and }}j\leq j_{2}.$ Then[23] $x_{\bullet }\times y_{\bullet }:=\left(x_{i},y_{j}\right)_{(i,j)\in I\times J}$ denotes the product net. If $X=Y$ then the image of this net under the addition map $X\times X\to X$ denotes the sum of these two nets:
$x_{\bullet }+y_{\bullet }:=\left(x_{i}+y_{j}\right)_{(i,j)\in I\times J}$
and similarly their difference is defined to be the image of the product net under the subtraction map:
$x_{\bullet }-y_{\bullet }:=\left(x_{i}-y_{j}\right)_{(i,j)\in I\times J}.$
A net $x_{\bullet }=\left(x_{i}\right)_{i\in I}$ in an additive topological group $X$ is called a Cauchy net if[24]
$\left(x_{i}-x_{j}\right)_{(i,j)\in I\times I}\to 0{\text{ in }}X$
or equivalently, if for every neighborhood $N$ of $0$ in $X,$ there exists some $i_{0}\in I$ such that $x_{i}-x_{j}\in N$ for all indices $i,j\geq i_{0}.$
A Cauchy sequence is a Cauchy net that is a sequence.
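For a concrete instance in the additive group $\mathbb{R}$, the partial sums of $\sum_k 2^{-k}$ form a Cauchy sequence: for the neighborhood $N = (-\varepsilon,\varepsilon)$ of $0$, the tail bound $2^{-(i_0+1)} < \varepsilon$ supplies the required index (a sketch):

```python
# Partial sums x_m = 1/2 + 1/4 + ... + 1/2^(m+1) form a Cauchy sequence in R.
x = [sum(2.0 ** -k for k in range(1, m + 2)) for m in range(60)]

# Cauchy condition for N = (-eps, eps): find i0 with x_i - x_j in N for all i, j >= i0.
eps = 1e-6
i0 = next(m for m in range(60) if 2.0 ** -(m + 1) < eps)   # tail of the series bounds the gaps
assert all(abs(x[i] - x[j]) < eps for i in range(i0, 60) for j in range(i0, 60))
```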
If $B$ is a subset of an additive group $X$ and $N$ is a set containing $0,$ then $B$ is said to be an $N$-small set or small of order $N$ if $B-B\subseteq N.$[25]
A prefilter ${\mathcal {B}}$ on an additive topological group $X$ is called a Cauchy prefilter if it satisfies any of the following equivalent conditions:
1. ${\mathcal {B}}-{\mathcal {B}}\to 0$ in $X,$ where ${\mathcal {B}}-{\mathcal {B}}:=\{B-C:B,C\in {\mathcal {B}}\}$ is a prefilter.
2. $\{B-B:B\in {\mathcal {B}}\}\to 0$ in $X,$ where $\{B-B:B\in {\mathcal {B}}\}$ is a prefilter equivalent to ${\mathcal {B}}-{\mathcal {B}}.$
3. For every neighborhood $N$ of $0$ in $X,$ ${\mathcal {B}}$ contains some $N$-small set (that is, there exists some $B\in {\mathcal {B}}$ such that $B-B\subseteq N$).[25]
and if $X$ is commutative then also:
1. For every neighborhood $N$ of $0$ in $X,$ there exists some $B\in {\mathcal {B}}$ and some $x\in X$ such that $B\subseteq x+N.$[25]
• It suffices to check any of the above condition for any given neighborhood basis of $0$ in $X.$
Suppose ${\mathcal {B}}$ is a prefilter on a commutative topological group $X$ and $x\in X.$ Then ${\mathcal {B}}\to x$ in $X$ if and only if $x\in \operatorname {cl} {\mathcal {B}}$ and ${\mathcal {B}}$ is Cauchy.[23]
Complete commutative topological group
Main article: Complete uniform space
Recall that for any $S\subseteq X,$ a prefilter ${\mathcal {C}}$ on $S$ is necessarily a subset of $\wp (S)$; that is, ${\mathcal {C}}\subseteq \wp (S).$
A subset $S$ of a topological group $X$ is called a complete subset if it satisfies any of the following equivalent conditions:
1. Every Cauchy prefilter ${\mathcal {C}}\subseteq \wp (S)$ on $S$ converges to at least one point of $S.$
• If $X$ is Hausdorff then every prefilter on $S$ will converge to at most one point of $X.$ But if $X$ is not Hausdorff then a prefilter may converge to multiple points in $X.$ The same is true for nets.
2. Every Cauchy net in $S$ converges to at least one point of $S$;
3. Every Cauchy filter ${\mathcal {C}}$ on $S$ converges to at least one point of $S.$
4. $S$ is a complete uniform space (under the point-set topology definition of "complete uniform space") when $S$ is endowed with the uniformity induced on it by the canonical uniformity of $X$;
A subset $S$ is called a sequentially complete subset if every Cauchy sequence in $S$ (or equivalently, every elementary Cauchy filter/prefilter on $S$) converges to at least one point of $S.$
• Importantly, convergence outside of $S$ is allowed: If $X$ is not Hausdorff and if every Cauchy prefilter on $S$ converges to some point of $S,$ then $S$ will be complete even if some or all Cauchy prefilters on $S$ also converge to point(s) in the complement $X\setminus S.$ In short, there is no requirement that these Cauchy prefilters on $S$ converge only to points in $S.$ The same can be said of the convergence of Cauchy nets in $S.$
• As a consequence, if a commutative topological group $X$ is not Hausdorff, then every subset of the closure of $\{0\},$ say $S\subseteq \operatorname {cl} \{0\},$ is complete (since it is clearly compact and every compact set is necessarily complete). So in particular, if $S\neq \varnothing $ (for example, if $S$ is a singleton set such as $S=\{0\}$) then $S$ would be complete even though every Cauchy net in $S$ (and every Cauchy prefilter on $S$) converges to every point in $\operatorname {cl} \{0\}$ (including those points in $\operatorname {cl} \{0\}$ that are not in $S$).
• This example also shows that complete subsets (indeed, even compact subsets) of a non-Hausdorff space may fail to be closed (for example, if $\varnothing \neq S\subseteq \operatorname {cl} \{0\}$ then $S$ is closed if and only if $S=\operatorname {cl} \{0\}$).
A commutative topological group $X$ is called a complete group if any of the following equivalent conditions hold:
1. $X$ is complete as a subset of itself.
2. Every Cauchy net in $X$ converges to at least one point of $X.$
3. There exists a neighborhood of $0$ in $X$ that is also a complete subset of $X.$[25]
• This implies that every locally compact commutative topological group is complete.
4. When endowed with its canonical uniformity, $X$ becomes a complete uniform space.
• In the general theory of uniform spaces, a uniform space is called a complete uniform space if each Cauchy filter in $X$ converges in $(X,\tau )$ to some point of $X.$
A topological group is called sequentially complete if it is a sequentially complete subset of itself.
Neighborhood basis: Suppose $C$ is a completion of a commutative topological group $X$ with $X\subseteq C$ and that ${\mathcal {N}}$ is a neighborhood base of the origin in $X.$ Then the family of sets
$\left\{\operatorname {cl} _{C}N:N\in {\mathcal {N}}\right\}$
is a neighborhood basis at the origin in $C.$[23]
Uniform continuity
Let $X$ and $Y$ be topological groups, $D\subseteq X,$ and $f:D\to Y$ be a map. Then $f:D\to Y$ is uniformly continuous if for every neighborhood $V$ of the origin in $Y,$ there exists a neighborhood $U$ of the origin in $X$ such that for all $x,y\in D,$ if $y-x\in U$ then $f(y)-f(x)\in V.$
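For instance, $f(x) = 2x$ on $(\mathbb{R},+)$ is uniformly continuous: given $V = (-\varepsilon,\varepsilon)$, the single neighborhood $U = (-\varepsilon/2, \varepsilon/2)$ works at every base point. A numerical sketch:

```python
import random

# Uniform continuity of f(x) = 2x on (R, +): for the target neighborhood
# V = (-eps, eps) in Y = R, the choice U = (-eps/2, eps/2) in X = R works
# independently of the base point.
f = lambda x: 2 * x
eps = 0.01           # V = (-eps, eps)
delta = eps / 2      # U = (-delta, delta)

random.seed(1)
for _ in range(1000):
    x = random.uniform(-1e6, 1e6)                  # base point anywhere on the line
    h = 0.99 * random.uniform(-delta, delta)       # y - x in U (strictly inside)
    assert abs(f(x + h) - f(x)) < eps              # f(y) - f(x) in V
```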
Generalizations
Various generalizations of topological groups can be obtained by weakening the continuity conditions:[26]
• A semitopological group is a group G with a topology such that for each c ∈ G the two functions G → G defined by x ↦ xc and x ↦ cx are continuous.
• A quasitopological group is a semitopological group in which the function mapping elements to their inverses is also continuous.
• A paratopological group is a group with a topology such that the group operation is continuous.
See also
• Algebraic group – Algebraic variety with a group structure
• Complete field – Algebraic structure that is complete relative to a metric
• Compact group – Topological group with compact topology
• Complete topological vector space – A TVS where points that get progressively closer to each other will always converge to a point
• Lie group – Group that is also a differentiable manifold with group operations that are smooth
• Locally compact field
• Locally compact group – Topological group G for which the underlying topology is locally compact and Hausdorff, so that the Haar measure can be defined
• Locally compact quantum group – Relatively new C*-algebraic approach toward quantum groups
• Profinite group – Topological group that is isomorphic to the inverse (projective) limit of an inverse system of discrete finite groups
• Ordered topological vector space
• Topological abelian group – Concept in mathematics
• Topological field – Algebraic structure with addition, multiplication, and division
• Topological module
• Topological ring – Ring where ring operations are continuous
• Topological semigroup – Semigroup with continuous operation
• Topological vector space – Vector space with a notion of nearness
Notes
1. i.e. Continuous means that for any open set U ⊆ G, f−1(U) is open in the domain dom f of f.
Citations
1. Pontrjagin 1946, p. 52.
2. Hewitt & Ross 1979, p. 1.
3. Armstrong 1997, p. 73; Bredon 1997, p. 51
4. Narici & Beckenstein 2011, pp. 19–45.
5. Bourbaki 1998, section III.3.
6. Bourbaki 1998, section III.2.7.
7. Montgomery & Zippin 1955, section 1.22.
8. Struble, Raimond A. (1974). "Metrics in locally compact groups". Compositio Mathematica. 28 (3): 217–222.
9. Haagerup, Uffe; Przybyszewska, Agata (2006), Proper metrics on locally compact groups, and proper affine isometric actions on, CiteSeerX 10.1.1.236.827
10. Bourbaki 1998, section III.2.5.
11. Bourbaki 1998, section I.11.5.
12. Bourbaki 1998, section III.2.8.
13. Montgomery & Zippin 1955, section 4.10.
14. Montgomery & Zippin 1955, section 4.6.
15. Bourbaki 1998, section III.4.6.
16. Hewitt & Ross 1970, Theorem 27.40.
17. Mackey 1976, section 2.4.
18. Banaszczyk 1983.
19. Hatcher 2001, Theorem 4.66.
20. Hatcher 2001, Theorem 3C.4.
21. Edwards 1995, p. 61.
22. Schaefer & Wolff 1999, pp. 12–19.
23. Narici & Beckenstein 2011, pp. 47–66.
24. Narici & Beckenstein 2011, p. 48.
25. Narici & Beckenstein 2011, pp. 48–51.
26. Arhangel'skii & Tkachenko 2008, p. 12.
References
• Arhangel'skii, Alexander; Tkachenko, Mikhail (2008). Topological Groups and Related Structures. World Scientific. ISBN 978-90-78677-06-2. MR 2433295.
• Armstrong, Mark A. (1997). Basic Topology (1st ed.). Springer-Verlag. ISBN 0-387-90839-0. MR 0705632.
• Banaszczyk, Wojciech (1983), "On the existence of exotic Banach–Lie groups", Mathematische Annalen, 264 (4): 485–493, doi:10.1007/BF01456956, MR 0716262, S2CID 119755117
• Bourbaki, Nicolas (1998), General Topology. Chapters 1–4, Springer-Verlag, ISBN 3-540-64241-2, MR 1726779
• Folland, Gerald B. (1995), A Course in Abstract Harmonic Analysis, CRC Press, ISBN 0-8493-8490-7
• Bredon, Glen E. (1997). Topology and Geometry. Graduate Texts in Mathematics (1st ed.). Springer-Verlag. ISBN 0-387-97926-3. MR 1700700.
• Hatcher, Allen (2001), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354
• Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
• Hewitt, Edwin; Ross, Kenneth A. (1979), Abstract Harmonic Analysis, vol. 1 (2nd ed.), Springer-Verlag, ISBN 978-0387941905, MR 0551496
• Hewitt, Edwin; Ross, Kenneth A. (1970), Abstract Harmonic Analysis, vol. 2, Springer-Verlag, ISBN 978-0387048321, MR 0262773
• Mackey, George W. (1976), The Theory of Unitary Group Representations, University of Chicago Press, ISBN 0-226-50051-9, MR 0396826
• Montgomery, Deane; Zippin, Leo (1955), Topological Transformation Groups, New York, London: Interscience Publishers, MR 0073104
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Pontrjagin, Leon (1946). Topological Groups. Princeton University Press.
• Pontryagin, Lev S. (1986). Topological Groups. trans. from Russian by Arlen Brown and P.S.V. Naidu (3rd ed.). New York: Gordon and Breach Science Publishers. ISBN 2-88124-133-6. MR 0201557.
• Porteous, Ian R. (1981). Topological Geometry (2nd ed.). Cambridge University Press. ISBN 0-521-23160-4. MR 0606198.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
\begin{definition}[Definition:Congruence Relation]
Let $\struct {S, \circ}$ be an algebraic structure.
Let $\RR$ be an equivalence relation on $S$.
Then $\RR$ is a '''congruence relation for $\circ$''' {{iff}}:
:$\forall x_1, x_2, y_1, y_2 \in S: \paren {x_1 \mathrel \RR x_2} \land \paren {y_1 \mathrel \RR y_2} \implies \paren {x_1 \circ y_1} \mathrel \RR \paren {x_2 \circ y_2}$
\end{definition}
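As a standard illustration (added here; not part of the original entry), congruence modulo $n$ is a congruence relation for addition on the integers, since the defining condition can be checked directly:

```latex
% Illustration: R = congruence modulo n is a congruence relation for + on Z.
% Assume x_1 \equiv x_2 \pmod n and y_1 \equiv y_2 \pmod n.
\begin{align*}
  (x_1 + y_1) - (x_2 + y_2)
    &= (x_1 - x_2) + (y_1 - y_2) \in n \mathbb{Z} + n \mathbb{Z} = n \mathbb{Z} \\
  \therefore \quad x_1 + y_1 &\equiv x_2 + y_2 \pmod n
\end{align*}
```

This is exactly the displayed condition with $S = \mathbb{Z}$, $\circ = +$, and $\RR$ taken to be congruence modulo $n$.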
January 2015, 35(1): 59-72. doi: 10.3934/dcds.2015.35.59
Periodic orbits and invariant cones in three-dimensional piecewise linear systems
Victoriano Carmona 1, , Emilio Freire 1, and Soledad Fernández-García 2,
Escuela Técnica Superior de Ingeniería, Departamento de Matemática Aplicada II, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Sevilla, Spain, Spain
MYCENAE Project-Team, Paris-Rocquencourt Centre, Inria, Domaine de Voluceau BP 105, 78153 Le Chesnay Cedex, France
Received July 2013 Revised May 2014 Published August 2014
We deal with the existence of invariant cones in a family of three-dimensional non-observable piecewise linear systems with two zones of linearity. We find a subfamily of systems with one invariant cone foliated by periodic orbits. After that, we perturb the system by making it observable and non-homogeneous. Then, the periodic orbits that remain after the perturbation are analyzed.
Keywords: Piecewise linear systems, invariant manifolds, invariant cones., periodic orbits, half-Poincaré maps.
Mathematics Subject Classification: Primary: 34C25, 34C45; Secondary: 34C23, 37G1.
Citation: Victoriano Carmona, Emilio Freire, Soledad Fernández-García. Periodic orbits and invariant cones in three-dimensional piecewise linear systems. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 59-72. doi: 10.3934/dcds.2015.35.59
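To make the setting concrete, here is a minimal numerical sketch (added for illustration; the matrices are hypothetical placeholders, not the family studied in the paper) of a continuous three-dimensional piecewise linear system with two zones of linearity separated by the plane $\{x_1 = 0\}$. The two matrices differ only in their first column, which guarantees that the vector field matches continuously on the separating plane.

```python
import numpy as np

# Two zones of linearity separated by the plane {x1 = 0}.  The matrices
# differ only in their first column, so A_plus @ x == A_minus @ x whenever
# x1 = 0, and the piecewise vector field is continuous.  These particular
# matrices are illustrative placeholders, not taken from the paper.
A_minus = np.array([[ 0.0, -1.0,  0.0],
                    [ 1.0,  0.0,  0.0],
                    [ 0.0,  0.0, -1.0]])
A_plus  = np.array([[-0.5, -1.0,  0.0],
                    [ 2.0,  0.0,  0.0],
                    [ 0.0,  0.0, -1.0]])

def vector_field(x):
    """Right-hand side of x' = A(x) x, linear in each half-space."""
    return (A_plus if x[0] >= 0.0 else A_minus) @ x

def rk4_step(x, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = vector_field(x)
    k2 = vector_field(x + 0.5 * h * k1)
    k3 = vector_field(x + 0.5 * h * k2)
    k4 = vector_field(x + h * k3)
    return x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def trajectory(x0, h=1e-3, steps=20000):
    """Integrate the piecewise linear flow and return the sampled orbit."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(rk4_step(xs[-1], h))
    return np.array(xs)
```

In this toy example $x_3' = -x_3$ in both zones, so orbits are attracted to the plane $x_3 = 0$ while the $(x_1, x_2)$ dynamics rotate through both half-spaces; the invariant cones and periodic orbits of the actual family are instead analyzed through half-Poincaré maps on the separating plane, as in the paper.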
S. Barnet and R. G. Cameron, Introduction to Mathematical Control Theory, Oxford University Press, (1985). Google Scholar
T. R. Blows and L. M. Perko, Bifurcation of limit cycles from centers and separatrix cycles of planar analytic systems, SIAM Rev., 36 (1994), 341. doi: 10.1137/1036094. Google Scholar
V. Carmona, S. Fernández-García and E. Freire, Saddle-node bifurcation of invariant cones in 3D piecewise linear systems, Phys. D, 241 (2012), 623. doi: 10.1016/j.physd.2011.11.020. Google Scholar
V. Carmona, E. Freire, E. Ponce and F. Torres, On simplifying and classifying piecewise-linear systems, IEEE Trans. Circuits Systems I Fund. Theory Appl., 49 (2002), 609. doi: 10.1109/TCSI.2002.1001950. Google Scholar
V. Carmona, E. Freire, E. Ponce, J. Ros and F. Torres, Limit cycle bifurcation in 3D continuous piecewise linear systems with two zones. Application to Chua's circuit, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 15 (2005), 3153. doi: 10.1142/S0218127405014027. Google Scholar
V. Carmona, E. Freire, E. Ponce and F. Torres, Bifurcation of invariant cones in piecewise linear homogeneous systems, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 15 (2005), 2469. doi: 10.1142/S0218127405013423. Google Scholar
V. Carmona, E. Freire, E. Ponce and F. Torres, The continuous matching of two stable linear systems can be unstable, Discrete and Contin. Dyn. Syst., 16 (2006), 689. doi: 10.3934/dcds.2006.16.689. Google Scholar
A. Cima, J. Llibre and M. A. Teixeira, Limit cycles of some polynomial differential systems in dimension 2, 3 and 4, via averaging theory, Appl. Anal., 87 (2008), 149. doi: 10.1080/00036810701556136. Google Scholar
Earl A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, (1955). Google Scholar
S. Coombes, R. Thul and K. C. A. Wedgwood, Nonsmooth dynamics in spiking neuron models, Phys. D, 241 (2012), 2042. doi: 10.1016/j.physd.2011.05.012. Google Scholar
Z. Du, Y. Li and W. Zhang, Bifurcation of periodic orbits in a class of planar Filippov systems, Nonlinear Analysis, 69 (2008), 3610. doi: 10.1016/j.na.2007.09.045. Google Scholar
E. Freire, E. Ponce, F. Rodrigo and F. Torres, Bifurcation sets of continuous piecewise linear systems with two zones, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 8 (1998), 2073. doi: 10.1142/S0218127498001728. Google Scholar
E. Freire, E. Ponce and J. Ros, Limit cycle bifurcation from center in symmetric piecewise-linear systems, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 9 (1999), 895. doi: 10.1142/S0218127499000638. Google Scholar
E. Freire, E. Ponce and F. Torres, Hopf-like bifurcations in planar piecewise linear systems, Publ. Mat., 41 (1997), 135. doi: 10.5565/PUBLMAT_41197_08. Google Scholar
C. Kahlert and O. E. Rössler, Analytical properties of Poincaré halfmaps in a class of piecewise-linear dynamical systems, Z. Naturforsch. A, 40 (1985), 1011. Google Scholar
M. Kunze, Non-Smooth Dynamical Systems, Lecture Notes in Mathematics, Springer, (2000). doi: 10.1007/BFb0103843. Google Scholar
T. Küpper, Invariant cones for non-smooth dynamical systems, Math. Comput. Simulation, 79 (2008), 1396. doi: 10.1016/j.matcom.2008.03.010. Google Scholar
T. Küpper, D. Weiss and H. A. Hoshman, Invariant manifolds for nonsmooth systems, Phys. D, 241 (2012), 1895. doi: 10.1016/j.physd.2011.07.012. Google Scholar
J. Llibre and A. E. Teruel, Existence of Poincaré maps in piecewise linear differential systems in $\mathbb R^N$, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 14 (2004), 2843. doi: 10.1142/S0218127404010874. Google Scholar
W. S. Loud, Periodic solutions of a perturbed autonomous system, Ann. of Math., 70 (1959), 490. doi: 10.2307/1970327. Google Scholar
G. M. Maggio, M. di Bernardo and M. P. Kennedy, Nonsmooth bifurcations in a piecewise-linear model of the Colpitts oscillator, IEEE Trans. Circuits Systems I Fund. Theory Appl., 47 (2000), 1160. doi: 10.1109/81.873871. Google Scholar
F. Verhulst, Nonlinear Differential Equations and Dynamical Systems, Springer, (1996). doi: 10.1007/978-3-642-61453-8. Google Scholar
Bayesian topological signal processing
Christopher Oballe 1, , Alan Cherne 2, , Dave Boothe 3, , Scott Kerick 3, , Piotr J. Franaszczuk 3, and Vasileios Maroulas 2,
University of Notre Dame, Department of Aerospace and Mechanical Engineering, Fitzpatrick Hall of Engineering and Cushing Hall, 112 N Notre Dame Ave, Notre Dame, IN 46556
University of Tennessee, Department of Mathematics, 1403 Circle Drive, Knoxville, TN 37996-1320
US Army Research Laboratory, 7101 Mulberry Point Road, Bldg. 459, Aberdeen Proving Ground, MD 21005-5425
Received January 2021 Revised April 2021 Early access July 2021
Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which is used to create topological summaries of data called persistence diagrams. Persistent homology offers a novel method for signal analysis. Herein, we aid interpretation of the sublevel set persistence diagrams of signals by 1) showing the effect of frequency and instantaneous amplitude on the persistence diagrams for a family of deterministic signals, and 2) providing a general equation for the probability density of persistence diagrams of random signals via a pushforward measure. We also provide a topologically-motivated, efficiently computable statistical descriptor analogous to the power spectral density for signals based on a generalized Bayesian framework for persistence diagrams. This Bayesian descriptor is shown to be competitive with power spectral densities and continuous wavelet transforms at distinguishing signals with different dynamics in a classification problem with autoregressive signals.
Keywords: Topological data analysis, Bayesian, signal processing, autoregressive, EEG, machine learning.
Mathematics Subject Classification: Primary: 55N31, 68T07.
Citation: Christopher Oballe, Alan Cherne, Dave Boothe, Scott Kerick, Piotr J. Franaszczuk, Vasileios Maroulas. Bayesian topological signal processing. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021084
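As a concrete companion to the abstract (a minimal sketch added here; it is not the authors' implementation), the zero-dimensional sublevel set persistence diagram of a sampled signal can be computed with a union-find structure and the elder rule: activate samples in increasing order of value, and whenever two components merge, the one with the higher (younger) birth value dies.

```python
def sublevel_persistence(signal):
    """0-dimensional sublevel set persistence of a 1D sampled signal.

    Returns sorted (birth, death) pairs with positive persistence; the
    essential component born at the global minimum is paired with the
    global maximum (one common convention).
    """
    n = len(signal)
    order = sorted(range(n), key=lambda i: signal[i])
    parent = [-1] * n   # -1 marks samples not yet in the sublevel set
    birth = {}          # component representative -> birth value

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for i in order:                  # sweep the filtration value upward
        parent[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1):     # merge with already-active neighbours
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri
                pairs.append((birth[rj], signal[i]))  # elder rule
                parent[rj] = ri
    pairs.append((min(signal), max(signal)))          # essential class
    return sorted(p for p in pairs if p[1] > p[0])
```

For the toy signal [0, 2, 1, 3] the diagram is [(0, 3), (1, 2)]: the global minimum 0 is paired with the maximum 3, and the secondary minimum born at value 1 dies when its component merges at value 2.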
H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta and L. Ziegelmeier, Persistence images: A stable vector representation of persistent homology, The Journal of Machine Learning Research, 18 (2017), 218-252. Google Scholar
M. Bandarabadi, A. Dourado, C. A. Teixeira, T. I. Netoff and K. K. Parhi, Seizure prediction with bipolar spectral power features using adaboost and svm classifiers, Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (2013), 6305–6308. Google Scholar
S. Barbarossa and S. Sardellitti, Topological signal processing over simplicial complexes, IEEE Transactions on Signal Processing, 68 (2020), 2992-3007. doi: 10.1109/TSP.2020.2981920. Google Scholar
R. J. Barry, A. R. Clarke, S. J. Johnstone, C. A. Magee and J. A. Rushby, EEG differences between eyes-closed and eyes-open resting conditions, Clinical Neurophysiology, 118 (2007), 2765-2773. doi: 10.1016/j.clinph.2007.07.028. Google Scholar
J. Berwald and M. Gidea, Critical transitions in a model of a genetic regulatory system, Mathematical Biosciences and Engineering, 11 (2014), 723-740. doi: 10.3934/mbe.2014.11.723. Google Scholar
P. Bromiley, Products and convolutions of gaussian probability density functions, Tina-Vision Memo, 3.4 (2003), 13 pp. Google Scholar
P. Bubenik, Statistical topological data analysis using persistence landscapes, The Journal of Machine Learning Research, 16 (2015), 77-102. Google Scholar
G. Carlsson, Topology and data, Bulletin of the American Mathematical Society, 46 (2009), 255-308. doi: 10.1090/S0273-0979-09-01249-X. Google Scholar
G. Carlsson, A. Zomorodian, A. Collins and L. Guibas, Persistence barcodes for shapes, in Symposium on Geometry Processing, (eds. R. Scopigno and D. Zorin), The Eurographics Association, (2004), 124–135. doi: 10.1145/1057432.1057449. Google Scholar
D. Cohen-Steiner, H. Edelsbrunner and J. Harer, Stability of persistence diagrams, Discrete & Computational Geometry, 37 (2007), 103-120. doi: 10.1007/s00454-006-1276-5. Google Scholar
W. Crawley-Boevey, Decomposition of pointwise finite-dimensional persistence modules, Journal of Algebra and Its Applications, 14 (2015), 1550066. doi: 10.1142/S0219498815500668. Google Scholar
H. Edelsbrunner, D. Letscher and A. Zomorodian, Topological persistence and simplification, Discrete & Computational Geometry, 28 (2002), 511-533. doi: 10.1007/s00454-002-2885-2. Google Scholar
H. Edelsbrunner and J. Harer, Computational Topology, American Mathematical Society, 2010. doi: 10.1090/mbk/069. Google Scholar
B. T. Fasy, F. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, A. Singh and et al., Confidence sets for persistence diagrams, The Annals of Statistics, 42 (2014), 2301-2339. doi: 10.1214/14-AOS1252. Google Scholar
P. J. Franaszczuk and K. J. Blinowska, Linear model of brain electrical activity? EEG as a superposition of damped oscillatory modes, Biological Cybernetics, 53 (1985), 19-25. doi: 10.1007/BF00355687. Google Scholar
P. J. Franaszczuk, G. K. Bergey, P. J. Durka and H. M. Eisenberg, Time-frequency analysis using the matching pursuit algorithm applied to seizures originating from the mesial temporal lobe, Electroencephalography and Clinical Neurophysiology, 106 (1998), 513-521. doi: 10.1016/S0013-4694(98)00024-8. Google Scholar
S. Gholizadeh and W. Zadrozny, A short survey of topological data analysis in time series and systems analysis, (2018). Google Scholar
R. Ghrist, Barcodes: The persistent topology of data, Bull. Amer. Math. Soc. (N.S.), 45 (2008), 61-75. doi: 10.1090/S0273-0979-07-01191-3. Google Scholar
C. Ieracitano, N. Mammone, A. Bramanti, S. Marino, A. Hussain and F. C. Morabito, A time-frequency based machine learning system for brain states classification via eeg signal processing, in International Joint Conference on Neural Networks (IJCNN), (2019), 1–8. doi: 10.1109/IJCNN.2019.8852240. Google Scholar
F. Khasawneh and E. Munch, Exploring Equilibria in Stochastic Delay Differential Equations Using Persistent Homology, 2014. doi: 10.1115/DETC2014-35655. Google Scholar
J. F. C. Kingman, Poisson Processes, Oxford Studies in Probability, 3, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1993. Google Scholar
S. G. Mallat and Zhifeng Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, 41 (1993), 3397-3415. Google Scholar
A. Marchese and V. Maroulas, Signal classification with a point process distance on the space of persistence diagrams, Advances in Data Analysis and Classification, 12 (2018), 657-682. doi: 10.1007/s11634-017-0294-x. Google Scholar
V. Maroulas, J. L. Mike and C. Oballe, Nonparametric estimation of probability density functions of random persistence diagrams, Journal of Machine Learning Research, 20 (2019), 1–49. Available from: http://jmlr.org/papers/v20/18-618.html. Google Scholar
V. Maroulas, F. Nasrin and C. Oballe, A bayesian framework for persistent homology, SIAM Journal on Mathematics of Data Science, 2 (2020), 48-74. doi: 10.1137/19M1268719. Google Scholar
Y. Mileyko, S. Mukherjee and J. Harer, Probability measures on the space of persistence diagrams, Inverse Problems, 27 (2011), 124007. doi: 10.1088/0266-5611/27/12/124007. Google Scholar
A. Monod, S. Kalisnik, J. A. Patino-Galindo and L. Crawford, Tropical sufficient statistics for persistent homology, SIAM Journal on Applied Algebra and Geometry, 3 (2019), 337–371. doi: 10.1137/17M1148037. Google Scholar
F. Nasrin, C. Oballe, D. Boothe and V. Maroulas, Bayesian topological learning for brain state classification, in 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), (2019), 1247–1252. Google Scholar
A. V. Oppenheim, J. R. Buck and R. W. Schafer, Discrete-Time Signal Processing, 2nd edition, Prentice-Hall signal processing, Prentice-Hall, Upper Saddle River, NJ, 1999. Available from: https://cds.cern.ch/record/389969. Google Scholar
J. A. Perea and J. Harer, Sliding windows and persistence: An application of topological methods to signal analysis, Found. Comput. Math., 15 (2015), 799-838. doi: 10.1007/s10208-014-9206-z. Google Scholar
R. Pintelon and J. Schoukens, Time series analysis in the frequency domain, IEEE Transactions on Signal Processing, 47 (1999), 206-210. Google Scholar
M. Robinson, Topological Signal Processing, Springer, 2014. doi: 10.1007/978-3-642-36104-3. Google Scholar
M. D. Sacchi, T. J. Ulrych and C. J. Walker, Interpolation and extrapolation using a high-resolution discrete fourier transform, IEEE Transactions on Signal Processing, 46 (1998), 31-38. doi: 10.1109/78.651165. Google Scholar
N. Sanderson, E. Shugerman, S. Molnar, J. D. Meiss and E. Bradley, Computational topology techniques for characterizing time-series data, in Advances in Intelligent Data Analysis XVI, Springer International Publishing, (2017), 284–296. Google Scholar
K. F. Swaiman, S. Ashwal and M. I. Shevell, Swaiman's Pediatric Neurology, Elsevier, 2018. doi: 10.1016/c2013-1-00079-0. Google Scholar
T. Shiraishi, T. Le, H. Kashima and M. Yamada, Topological bayesian optimization with persistence diagrams, preprint, arXiv: 1902.09722. Google Scholar
B. W. Silverman, Density Estimation for Statistics and Data Analysis, Monographs on Statistics and Applied Probability. Chapman & Hall, London, 1986. Google Scholar
P. Skraba, V. de Silva and M. Vejdemo-Johansson, Topological analysis of recurrent systems, in NIPS 2012, 2012. Google Scholar
Y. Umeda, Time series classification via topological data analysis, Transactions of The Japanese Society for Artificial Intelligence, 32 (2017), 1-12. doi: 10.1527/tjsai.D-G72. Google Scholar
Y. Wang, H. Ombao and M. K. Chung, Topological data analysis of single-trial electroencephalographic signals, Ann. Appl. Stat., 12 (2018), 1506-1534. doi: 10.1214/17-AOAS1119. Google Scholar
Figure 1. Shown above (a) are the sublevel sets $ C_{-0.5} $, $ C_{0} $, $ C_{0.25} $, and $ C_1 $ for a damped cosine $ e^{-2t}\cos(8\pi t) $. (b) shows the persistence diagram of the sublevel set filtration. The points in (b) are colored to match the connected components their birth coordinates correspond to. The transition from $ C_0 $ to $ C_{0.25} $ depicts the Elder rule; notice that in $ C_0 $, there are light blue and purple connected components, which merge together in $ C_{0.25} $. A similar merging happens in the transition from $ C_{0.25} $ to $ C_{0.5} $. Since the purple component has a later birth value, it disappears into the light blue component, which persists until it merges into the green component by the same line of reasoning.
Figure 2. This figure illustrates sources of uncertainty in persistence diagrams. Shown above are signals with additive noise (a) $ \mathcal{N}(0,0.01) $, and (b) $ \mathcal{N}(0,0.1) $ along with their persistence diagrams. The persistence diagram for the true underlying signal is shown in red. Spurious features arise due to noise and additionally, true features also shift around.
Figure 3. Top: We consider three signals. The blue signal (Signal 1) and the red signal (Signal 2) are modeled by $ a_{\beta}(t)\cos(8\pi t) $ where $ a_{\beta}(t) = 5e^{-{\beta}t} $ with $ {\beta} = 1,4 $ in Signals 1 and 2 respectively. The green signal (Signal 3) is then added to each case and the amplitudes are translated to have global minima equal to zero. Bottom: The associated persistence diagrams are plotted using the method described in Section 2.2. We observe that as $ {\beta} $ increases, the high-frequency oscillations are less affected by the low-frequency signal and converge faster towards the uniform shape of the green signal. This leads to a decrease in the variance of the persistence coordinates in the red diagram.
Figure 4. (a) The damped cosine $ e^{-2t}\cos(8\pi t) $ with additive noise $ \mathcal{N}(0,0.01) $ and (b) its persistence diagram. (b) shows an uninformative prior intensity with a single component at $ (1,1) $ with covariance matrix $ 10I $. Using the model from Equation (7) with the prior in (c) and the observed diagram in (b) results in the posterior intensity shown in (d). To account for spurious points, which we suspected to be low persistence in this example, we placed components of $ \lambda_{S} $ at $ (0.5,0.1), (1,0.1),(0.75,0.1) $ and $ (1.75,0.1) $.
Figure 5. This figure demonstrates the effect of greater low frequency power on the persistence diagram of a signal. Figures (a) and (c) show two signals, respectively, which are the result of summing low-frequency and high-frequency oscillators. The power of the low-frequency signal is greater in (a) than in (c). To ensure that persistence diagrams in (b) and (d) lie in $ \mathbb{W} $, the aggregate signals in both (a) and (c) have been translated so that their absolute minima are at zero. Notice in (b) that elements of the persistence diagram show greater spread along the Birth axis than in (d). This results in greater birth variance of the corresponding posterior intensity. Also notice the isolated high-persistence mode in (b), which is not present in (d). These phenomena arise because the low frequency signal scatters the higher frequency peaks along the Amplitude axis.
Figure 6. This plot depicts the relationship between the cardinality of persistence diagrams and the frequency of the dominant oscillation for one second autoregressive signals across various damping factors. For each included frequency and damping factor, we simulated thirty signals (each had a component fixed at zero to give the $ 1/f $ PSD commonly seen in EEG), computed their persistence diagrams, then recorded their average cardinality. We see a strong positive correlation between this average cardinality and the frequency of the dominant oscillation (i.e., PSD Peak Frequency) consistent with the idealized deterministic sinusoid case.
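The idealized deterministic relationship described in the caption is easy to verify directly (a toy check added here, not the paper's AR simulation): for a sampled sinusoid, the cardinality of the sublevel set diagram is governed by the number of local minima, which grows linearly with frequency.

```python
import numpy as np

def count_local_minima(x):
    """Strict interior local minima of a sampled signal; each one births a
    connected component in the sublevel set filtration."""
    return sum(1 for i in range(1, len(x) - 1)
               if x[i] < x[i - 1] and x[i] < x[i + 1])

# One second of cos(2*pi*f*t) has f periods, hence f interior minima.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
counts = {f: count_local_minima(np.cos(2.0 * np.pi * f * t)) for f in (4, 8)}
```

Doubling the frequency doubles the minima count, and hence the number of off-diagonal points in the sublevel set persistence diagram, consistent with the positive correlation in the figure.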
Figure 7. The peak frequency $ f $ for $ \mathcal{A}_{f}^{\beta} $ plotted against the average birth variance for its persistence diagrams. Colors depict the damping factor $ \beta $.
Figure 8. The average (log) power spectral densities along with examples of signals and persistence diagrams from each class for damping factors of top) 4, and bottom) 32
Table 2. Parameter values for the autoregressive model determined by fitting to real EEG. Missing values ("-") indicate that the optimal AR model order did not include a corresponding frequency component.

1-second signals:
          $ f_1 $  $ f_2 $  $ f_3 $  $ f_4 $  $ \beta_1 $  $ \beta_2 $  $ \beta_3 $  $ \beta_4 $
Signal 1  0        5.87     18.59    -        344.80       5.37         16.6         -
Signal 2  0        10.70    -        -        202.78       7.41         -            -

5-second signals:
          $ f_1 $  $ f_2 $  $ f_3 $  $ f_4 $  $ \beta_1 $  $ \beta_2 $  $ \beta_3 $  $ \beta_4 $
Signal 1  0        6.00     14.4     20.85    24.98        10.54        31.64        26.97
Signal 2  0        10.16    23.02    -        17.24        4.06         20.37        -
Table 1. Precisions and recalls for each feature and classifier. Results are reported as mean $ \pm $ standard error across each class
Bayesian PSD CWT
Classifier Precision Recall Precision Recall Precision Recall
LR $ 0.84 \pm 0.06 $ $ 0.85 \pm 0.07 $ $ 0.90 \pm 0.04 $ $ 0.90 \pm 0.04 $ $ 0.91 \pm 0.03 $ $ 0.90 \pm 0.04 $
SVM - Lin. $ 0.92 \pm 0.05 $ $ 0.91 \pm 0.04 $ $ 0.91 \pm 0.03 $ $ 0.90 \pm 0.05 $ $ 0.91 \pm 0.04 $ $ 0.91 \pm 0.03 $
MLP $ 0.89 \pm 0.05 $ $ 0.88 \pm 0.04 $ $ 0.90 \pm 0.02 $ $ 0.89 \pm 0.02 $ $ 0.92 \pm 0.03 $ $ 0.93 \pm 0.02 $
\begin{definition}[Definition:Tetration]
[Figure: $y = \map {\operatorname {tet}_b} x$ versus $x$ for various $b$]
[Figure: $f = \map {\operatorname {tet}_b} x$ in the $x, b$ plane with levels $f = \text{const}$.]
\end{definition}
Proof of the Absence of Long-Range Temporal Orders in Gibbs States
Haruki Watanabe ORCID: orcid.org/0000-0002-8112-021X1,
Masaki Oshikawa2 &
Tohru Koma3
Journal of Statistical Physics (2020)
We address the question whether time translation symmetry can be spontaneously broken in a quantum many-body system. One way of detecting such a symmetry breaking is to examine the time-dependence of a correlation function. If the large-distance behavior of the correlation function exhibits a nontrivial time-dependence in the thermodynamic limit, the system would develop a temporal long-range order, realizing a time crystal. In an earlier publication, we sketched a proof for the absence of such time dependence in the thermal equilibrium described by the Gibbs state (Watanabe and Oshikawa in Phys Rev Lett 114:251603, 2015). Here we present a complete proof and extend the argument to a more general class of stationary states than the Gibbs states.
Time crystals are a newly proposed state of matter that spontaneously breaks the time translation symmetry. The idea of time crystals in the case of the continuous time translation symmetry was first proposed by Wilczek [1], although the validity of the concrete model in this original proposal was soon questioned in Ref. [2]. Then a no-go theorem for a wider but still restricted class of models was presented in Ref. [3]. In a more general setting, the absence of time crystalline orders in the ground state or in the Gibbs state was proven in Ref. [4] without specifying the Hamiltonian but assuming only its locality. These developments triggered further investigation of so-called Floquet time crystals or discrete time crystals in nonequilibrium setting [5,6,7,8,9,10] that break a discrete time translation symmetry into its subgroup. See Refs. [11,12,13] for recent reviews on this topic.
The argument for the no-go theorem at finite temperatures in Ref. [4] was based on the Lieb-Robinson bound [14, 15], which was used to constrain the finite-time behavior of the correlation function. However, in Ref. [4], the Fourier transformation of the correlation function was performed with respect to an infinitely long time, outside the range of validity of the constraint. This issue was recently pointed out by Ref. [13]. In this work, we present a complete version of the proof without such an issue. Furthermore, we examine the conditions on the density operator to which our argument can be straightforwardly extended. Clarifying these subtleties and establishing the limitations on what the Gibbs state and similar types of stationary states can do should in turn accelerate our exploration of new states that exhibit nontrivial temporal orders.
The Theorem and Its Proof
Setup and Statement
Let us consider a static Hamiltonian \(\hat{H}\) defined on a d-dimensional lattice \(\Lambda \) that is a finite subset of \(\mathbb {Z}^d\). We assume that the Hamiltonian \(\hat{H}\) is written as a sum of local bounded Hamiltonians:
$$\begin{aligned} \hat{H}=\sum _{\mathbf {x}\in \Lambda }\hat{h}_{\mathbf {x}}. \end{aligned}$$
More precisely, we assume that the support of the local Hamiltonian \(\hat{h}_{\mathbf {x}}\) is limited to a finite range \(R_h\) from \(\mathbf {x}\in \Lambda \) and that the operator normFootnote 1 of \(\hat{h}_{\mathbf {x}}\) is bounded by a constant \(N_h\). Both \(R_h\) and \(N_h\) are independent of the position \(\mathbf {x}\in \Lambda \) or the system size \(|\Lambda |\). This setting includes a wide variety of quantum spin systems, fermion systems, and "hard-core" boson systems Footnote 2.
Similarly, we consider observables (not necessarily Hermitian) \(\hat{A}\) and \(\hat{B}\) written as a sum of local observables:
$$\begin{aligned} \hat{A}:=\frac{1}{|\Lambda |}\sum _{\mathbf {x}\in \Lambda } \hat{a}_{\mathbf {x}},\quad \hat{B}:=\frac{1}{|\Lambda |}\sum _{\mathbf {x}\in \Lambda } \hat{b}_{\mathbf {x}}. \end{aligned}$$
The support of \(\hat{a}_{\mathbf {x}}\) and \(\hat{b}_{\mathbf {x}}\) (\(\mathbf {x}\in \Lambda \)) are within a finite range \(R_a\), \(R_b\) from \(\mathbf {x}\) and their operator norm is bounded by constants \(N_a\), \(N_b\), respectively. All of these constants are independent of \(\mathbf {x}\) or \(|\Lambda |\).
We introduce the time evolution of operators for \(t\in \mathbb {R}\) by
$$\begin{aligned} \hat{A}(t):=e^{i\hat{H} t}\hat{A}e^{-i\hat{H} t}. \end{aligned}$$
Our interest is in the time-dependence of the correlation function
$$\begin{aligned} \langle \hat{A}(t)\hat{B}\rangle :=\mathrm{Tr}\left( \hat{A}(t)\hat{B}\hat{\rho }\right) . \end{aligned}$$
Here \(\hat{\rho }\) is the Gibbs state
$$\begin{aligned} \hat{\rho }:=\frac{1}{Z}e^{-\beta \hat{H}} \end{aligned}$$
at the inverse temperature \(\beta \) and \(Z:=\mathrm{Tr}\,e^{-\beta \hat{H}}\) is the partition function. Our claim is that \(\langle \hat{A}(t)\hat{B}\rangle \) is independent of t in the thermodynamic limit \(|\Lambda |\rightarrow +\infty \) [4], i.e.,
$$\begin{aligned} \lim _{|\Lambda |\rightarrow \infty }\left| \langle \hat{A}(t)\hat{B}\rangle -\langle \hat{A}\hat{B}\rangle \right| =0. \end{aligned}$$
Proof for \(\beta >0\)
To prove Eq. (6), it is sufficient to treat the special case \(\hat{B}=\hat{A}^\dagger \):
$$\begin{aligned} \lim _{|\Lambda |\rightarrow \infty }\left| \langle \hat{A}(t)\hat{A}^\dagger \rangle -\langle \hat{A}\hat{A}^\dagger \rangle \right| =0. \end{aligned}$$
This is because \(\langle \hat{A}(t)\hat{B}\rangle \) can be rewritten as
$$\begin{aligned} \langle \hat{A}(t)\hat{B}\rangle&= \frac{1}{2}\Big \langle \big (\hat{A}(t)+\hat{B}^\dagger (t)\big )\big (\hat{A}+\hat{B}^\dagger \big )^\dagger \Big \rangle +\frac{i}{2}\Big \langle \big (\hat{A}(t)+i\hat{B}(t)^\dagger \big )\big (\hat{A}+i\hat{B}^\dagger \big )^\dagger \Big \rangle \nonumber \\&\quad -\frac{1+i}{2}\langle \hat{A}(t)\hat{A}^\dagger \rangle -\frac{1+i}{2}\langle \hat{B}^\dagger (t)\hat{B}\rangle . \end{aligned}$$
Once Eq. (7) is established, it applies to all four correlation functions in the right-hand side and we obtain Eq. (6).
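The decomposition above is a polarization-type identity, which can be sanity-checked numerically on random matrices. In the sketch below, the dimension, inverse temperature, and time are arbitrary illustrative choices, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
D, beta, t = 5, 0.8, 0.9  # arbitrary demo values

# random Hermitian Hamiltonian, Gibbs state, and time-evolution operator
H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (H + H.conj().T) / 2
E, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta * E)) @ V.conj().T
rho /= np.trace(rho).real
U = V @ np.diag(np.exp(1j * E * t)) @ V.conj().T  # e^{iHt}

def corr(X, Y):
    """<X(t) Y> = Tr(e^{iHt} X e^{-iHt} Y rho)."""
    return np.trace(U @ X @ U.conj().T @ Y @ rho)

A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
B = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
Ad, Bd = A.conj().T, B.conj().T

lhs = corr(A, B)
rhs = (0.5 * corr(A + Bd, (A + Bd).conj().T)
       + 0.5j * corr(A + 1j * Bd, (A + 1j * Bd).conj().T)
       - 0.5 * (1 + 1j) * corr(A, Ad)
       - 0.5 * (1 + 1j) * corr(Bd, B))
residual = abs(lhs - rhs)  # should vanish up to rounding
```

The residual is at the level of floating-point rounding, confirming that establishing the special case \(\hat{B}=\hat{A}^\dagger\) suffices.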
We denote by \(|\Phi _n\rangle \) the eigenstate of the Hamiltonian \(\hat{H}\) with the eigenvalue \(E_n\) (\(n\in \mathbb {N}\)). Using the complete system and writing
$$\begin{aligned} \rho (E_n):=\frac{1}{Z}e^{-\beta E_n}, \end{aligned}$$
we get
$$\begin{aligned} \langle \hat{A}(t)\hat{A}^\dagger \rangle =\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\,e^{i(E_m-E_n)t}. \end{aligned}$$
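The spectral representation above can be checked against a direct evaluation of \(\mathrm{Tr}\big(e^{i\hat{H}t}\hat{A}e^{-i\hat{H}t}\hat{A}^\dagger \hat{\rho }\big)\) on a small random system; all sizes and parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, beta, t = 6, 0.7, 1.3  # illustrative values, not from the text

H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (H + H.conj().T) / 2
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

E, V = np.linalg.eigh(H)
w = np.exp(-beta * E)
w /= w.sum()  # rho(E_m) = e^{-beta E_m} / Z

# direct evaluation: Tr(e^{iHt} A e^{-iHt} A^dagger rho)
U = V @ np.diag(np.exp(1j * E * t)) @ V.conj().T
rho = V @ np.diag(w) @ V.conj().T
direct = np.trace(U @ A @ U.conj().T @ A.conj().T @ rho)

# double sum over eigenstates, as in the spectral decomposition above
Amn = V.conj().T @ A @ V                           # <Phi_m|A|Phi_n>
phase = np.exp(1j * np.subtract.outer(E, E) * t)   # e^{i(E_m - E_n)t}
spectral = np.sum(np.abs(Amn) ** 2 * w[:, None] * phase)
```

The two evaluations agree to machine precision.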
We split the summation over m and n into four intervals of \(E_n-E_m\):
$$\begin{aligned} {\left\{ \begin{array}{ll} \text {(i):}&{}2\varepsilon \le E_n-E_m\le K,\\ \text {(ii):}&{}-K\le E_n-E_m\le -2\varepsilon ,\\ \text {(iii):}&{}K<|E_n-E_m|,\\ \text {(iv):}&{}|E_n-E_m|<2\varepsilon , \end{array}\right. } \end{aligned}$$
where \(\varepsilon \) is a small positive number and K is a large positive number. Then the time-dependence of \(\langle \hat{A}(t)\hat{A}^\dagger \rangle \) can be bounded as
$$\begin{aligned}&\left| \langle \hat{A}(t)\hat{A}^\dagger \rangle -\langle \hat{A}\hat{A}^\dagger \rangle \right| \nonumber \\&\quad \le {} 2\sum _{m,n\,:\,2\varepsilon \le E_n-E_m\le K}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\nonumber \\&\qquad +2\sum _{m,n\,:\,-K\le E_n-E_m\le -2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\nonumber \\&\qquad +2\sum _{m,n\,:\,K<|E_n-E_m|}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\nonumber \\&\qquad +\sum _{m,n\,:\,|E_n-E_m|<2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\left| e^{i(E_m-E_n)t}-1\right| . \end{aligned}$$
In the following, we derive an upper bound for each term in the right hand side one by one. The first two terms will be bounded using the Lieb-Robinson bound and the monotonically decreasing nature of the Boltzmann factor (9). The third term will be evaluated by making use of the large energy difference. Finally, the last term is trivially small because of the time-dependent factor with small energy difference. Plugging these results [Eqs. (30), (33), (38), and (39) below] into the right-hand side of Eq. (12), we get
$$\begin{aligned} \left| \langle \hat{A}(t)\hat{A}^\dagger \rangle -\langle \hat{A}\hat{A}^\dagger \rangle \right|&\le 2\varepsilon +2\varepsilon +\frac{2C}{K^2}+2N_a^2\varepsilon |t|, \end{aligned}$$
where C is a positive constant independent of the system size. Since we can take \(\varepsilon \) to be small and K to be large by choosing a sufficiently large system size \(|\Lambda |\), we obtain the desired result.
The Range (i): \(2\varepsilon \le E_n-E_m\le K\)
Let us start with the contribution from the range \(2\varepsilon \le E_n-E_m\le K\). To this end, we introduce a cutoff function \(\eta ^{+}\in C_0^\infty (\mathbb {R})\) (i.e., an infinitely differentiable function with compact support) that satisfies the following conditions:Footnote 3
$$\begin{aligned} {\left\{ \begin{array}{ll} \eta ^{+}(\omega )=1 &{} (2\varepsilon \le \omega \le K),\\ \eta ^{+}(\omega )=0 &{} (\omega \le \varepsilon \text { or }K+\varepsilon \le \omega ),\\ 0\le \eta ^{+}(\omega )\le 1&{} (\text {otherwise}). \end{array}\right. } \end{aligned}$$
The Fourier transform of \(\eta ^+(\omega )\) is given by
$$\begin{aligned} \tilde{\eta }^+(t):=\frac{1}{2\pi }\int _{-\infty }^{+\infty }d\omega \, e^{i\omega t}\eta ^+(\omega ), \end{aligned}$$
which decays faster than any power of t. This can be shown by performing an integration by parts repeatedly:
$$\begin{aligned} \tilde{\eta }^+(t)=\frac{1}{2\pi }\left( \frac{i}{t}\right) ^\ell \int _{-\infty }^{+\infty }d\omega \, e^{i\omega t}\frac{\partial ^\ell }{\partial \omega ^\ell }\eta ^+(\omega )\quad \text { for }t\ne 0, \end{aligned}$$
which implies, for any integer \(\ell \in \mathbb {N}\), that
$$\begin{aligned}&|\tilde{\eta }^+(t)|\le \mathcal {C}_\ell |t|^{-\ell }, \end{aligned}$$
$$\begin{aligned}&\mathcal {C}_\ell :=\frac{1}{2\pi }\int _{-\infty }^{+\infty }d\omega \, \left| \frac{\partial ^\ell }{\partial \omega ^\ell }\eta ^+(\omega )\right| . \end{aligned}$$
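As an illustration, a cutoff with the properties (14) can also be built from the standard \(C^\infty \) step \(s(x)=f(x)/\big (f(x)+f(1-x)\big )\) with \(f(x)=e^{-1/x}\) for \(x>0\). This is an alternative to the mollifier construction in Footnote 3; the values of \(\varepsilon \) and \(K\) below are arbitrary demo choices.

```python
import math

def f(x):
    """C-infinity function: e^{-1/x} for x > 0, identically 0 for x <= 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

def smoothstep(x):
    """C-infinity transition: 0 for x <= 0, 1 for x >= 1."""
    return f(x) / (f(x) + f(1.0 - x))

def eta_plus(omega, eps=1.0, K=5.0):  # eps, K are arbitrary demo values
    """Cutoff obeying (14): equals 1 on [2*eps, K], vanishes outside (eps, K+eps)."""
    rise = smoothstep((omega - eps) / eps)      # 0 -> 1 across [eps, 2*eps]
    fall = smoothstep((K + eps - omega) / eps)  # 1 -> 0 across [K, K+eps]
    return rise * fall
```

Since such a cutoff is smooth and compactly supported, its Fourier transform decays faster than any power of \(t\), exactly as derived by the repeated integration by parts above.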
We consider a correlation function
$$\begin{aligned} g(t):=\langle [\hat{A}(t),\hat{A}^\dagger ]\rangle =\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\left( \rho (E_m)-\rho (E_n)\right) e^{i(E_m-E_n)t}. \end{aligned}$$
On one hand, we have
$$\begin{aligned} \int _{-\infty }^{+\infty } dt\, g(t)\tilde{\eta }^+(t)&=\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2 \left( \rho (E_m)-\rho (E_n)\right) \int _{-\infty }^{+\infty } dt\,e^{-i(E_n-E_m)t}\tilde{\eta }^+(t)\nonumber \\&=\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\left( \rho (E_m)-\rho (E_n)\right) \eta ^+(E_n-E_m)\nonumber \\&\ge \sum _{m,n\,:\,2\varepsilon \le E_n-E_m\le K}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\left( \rho (E_m)-\rho (E_n)\right) \nonumber \\&\ge \sum _{m,n\,:\,2\varepsilon \le E_n-E_m\le K}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m) \Delta . \end{aligned}$$
In passing to the third line, we used \(\rho (E_n)< \rho (E_m)\) when \(E_n> E_m\) and the conditions (14) of \(\eta ^+(\omega )\). In the last line, we defined
$$\begin{aligned} \Delta :=1-\max _{m,n\,:\,2\varepsilon \le E_n-E_m \le K}\frac{\rho (E_n)}{\rho (E_m)}. \end{aligned}$$
For the Gibbs state (9), we have
$$\begin{aligned}&\Delta \ge h(\varepsilon )>0\quad \text {for }\varepsilon >0, \end{aligned}$$
$$\begin{aligned}&h(\varepsilon ):=1-e^{-2\beta \varepsilon }. \end{aligned}$$
On the other hand, we can decompose the integral into two parts as
$$\begin{aligned} \int _{-\infty }^{+\infty } dt\, g(t)\tilde{\eta }^+(t) =\int _{|t|\ge T}dt\, g(t)\tilde{\eta }^+(t)+\int _{-T}^{+T}dt\, g(t)\tilde{\eta }^+(t) \end{aligned}$$
where T is a large positive number. For the first integral in the right-hand side, we use the property Eq. (17) of the function \(\tilde{\eta }^+(t)\) as well as the trivial bound \(|g(t)|\le 2N_a^2\).Footnote 4 For a given function \(\tilde{\eta }^+(t)\) with the parameters \(\varepsilon \) and K, we can find a large T such that
$$\begin{aligned} \left| \int _{|t|\ge T}dt\, g(t)\tilde{\eta }^+(t)\right| \le \varepsilon h(\varepsilon ). \end{aligned}$$
For the second integral, we can use the Lieb-Robinson bound [14, 15], from which we have [4]
$$\begin{aligned} \Big \Vert \big [\hat{A}(t),\hat{A}^\dagger \big ]\Big \Vert \le \frac{C_1+C_2|t|^d}{|\Lambda |} \end{aligned}$$
for system-size-independent constants \(C_1\) and \(C_2\). Thus
$$\begin{aligned} \left| \int _{-T}^{+T}dt\, g(t)\tilde{\eta }^+(t)\right|&\le \int _{-T}^{+T}dt\, \big |\langle [\hat{A}(t),\hat{A}^\dagger ]\rangle \big |\left| \tilde{\eta }^+(t)\right| \nonumber \\&\le \int _{-T}^{+T}dt\, \frac{C_1+C_2|t|^d}{|\Lambda |}\frac{K}{2\pi }=K\frac{(d+1)C_1T+C_2T^{d+1}}{\pi (d+1)|\Lambda |}, \end{aligned}$$
where in the second inequality we used
$$\begin{aligned} \left| \tilde{\eta }^+(t)\right| \le \frac{1}{2\pi }\int _{-\infty }^{+\infty }d\omega \, \eta ^+(\omega )\le \frac{1}{2\pi }\int _{\varepsilon }^{K+\varepsilon }d\omega \, 1=\frac{K}{2\pi }. \end{aligned}$$
Therefore, for any given large K and T, there exists a large volume \(|\Lambda |\) such that
$$\begin{aligned} \left| \int _{-T}^{+T}dt\, g(t)\tilde{\eta }^+(t)\right| \le \varepsilon h(\varepsilon ). \end{aligned}$$
Combining Eqs. (25) and (29) with the bound (20), we get
$$\begin{aligned} \sum _{m,n\,:\,2\varepsilon \le E_n-E_m\le K} |\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\le \frac{2\varepsilon h(\varepsilon )}{\Delta }\le 2\varepsilon . \end{aligned}$$
The Range (ii): \(-K\le E_n-E_m\le -2\varepsilon \)
Similarly, to estimate the contribution from the range \(-K\le E_n-E_m\le -2\varepsilon \), we introduce a cutoff function \(\eta ^{-}\in C_0^\infty (\mathbb {R})\) that satisfies the following conditions:
$$\begin{aligned} {\left\{ \begin{array}{ll} \eta ^{-}(\omega )=-1 &{} (-K\le \omega \le -2\varepsilon ),\\ \eta ^{-}(\omega )=0 &{} (\omega \le -K-\varepsilon \text { or }-\varepsilon \le \omega ),\\ -1\le \eta ^{-}(\omega )\le 0&{} (\text {otherwise}). \end{array}\right. } \end{aligned}$$
This time we have
$$\begin{aligned} \int _{-\infty }^{+\infty } dt\, g(t)\tilde{\eta }^-(t)&=\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2 \left( \rho (E_m)-\rho (E_n)\right) \int _{-\infty }^{+\infty } dt\,e^{-i(E_n-E_m)t}\tilde{\eta }^-(t)\nonumber \\&=\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\left( \rho (E_n)-\rho (E_m)\right) (-\eta ^-(E_n-E_m))\nonumber \\&\ge \sum _{m,n\,:\,-K\le E_n-E_m\le -2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\left( \rho (E_n)-\rho (E_m)\right) \nonumber \\&\ge \sum _{m,n\,:\,-K\le E_n-E_m\le -2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_n) \Delta \nonumber \\&\ge \sum _{m,n\,:\,-K\le E_n-E_m\le -2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m) \Delta . \end{aligned}$$
Here, \(\Delta \) is defined in Eq. (21). In the same way as before, we find
$$\begin{aligned} \sum _{m,n\,:\,-K\le E_n-E_m\le -2\varepsilon } |\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\le 2\varepsilon . \end{aligned}$$
The Range (iii): \(K<|E_n-E_m|\)
The third contribution can be easily bounded by using a trick.
$$\begin{aligned} \sum _{m,n\,:\,K<|E_n-E_m|}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)&\le \frac{1}{K^2}\sum _{m,n}(E_n-E_m)^2|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\nonumber \\&= \frac{1}{K^2} \langle [\hat{A},\hat{H}][\hat{H},\hat{A}^\dagger ]\rangle \nonumber \\&\le \frac{1}{K^2} \big \Vert [\hat{A},\hat{H}]\big \Vert ^2. \end{aligned}$$
Thanks to the assumed locality of the Hamiltonian, this operator norm can be bounded as
$$\begin{aligned}&\big \Vert [\hat{A},\hat{H}]\big \Vert =\frac{1}{|\Lambda |}\sum _{\mathbf {x},\mathbf {y}\in \Lambda \,:\,|\mathbf {x}-\mathbf {y}|\le R_h+R_a}\big \Vert [\hat{a}_{\mathbf {x}},\hat{h}_{\mathbf {y}}]\big \Vert \le C, \end{aligned}$$
$$\begin{aligned}&C:= 2N_aN_hv(R_a+R_h), \end{aligned}$$
$$\begin{aligned}&v(R):=\frac{1}{|\Lambda |}\sum _{\mathbf {x},\mathbf {y}\in \Lambda \,:\,|\mathbf {x}-\mathbf {y}|\le R}1. \end{aligned}$$
Note that v(R) does not grow with \(|\Lambda |\). Therefore,
$$\begin{aligned} \sum _{m,n\,:\,K<|E_n-E_m|}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\le \frac{C}{K^2}. \end{aligned}$$
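The key identity behind this Chebyshev-type step — that the gap-weighted double sum equals \(\langle [\hat{A},\hat{H}][\hat{H},\hat{A}^\dagger ]\rangle \) — can be verified numerically on a random system; the sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
D, beta = 7, 0.5  # arbitrary demo values

H = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
H = (H + H.conj().T) / 2
A = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

E, V = np.linalg.eigh(H)
w = np.exp(-beta * E)
w /= w.sum()
rho = V @ np.diag(w) @ V.conj().T

# sum_{m,n} (E_n - E_m)^2 |<Phi_m|A|Phi_n>|^2 rho(E_m)
Amn = V.conj().T @ A @ V
gaps = np.subtract.outer(E, E)  # gaps[m, n] = E_m - E_n
moment_sum = np.sum(gaps ** 2 * np.abs(Amn) ** 2 * w[:, None])

# <[A, H][H, A^dagger]>
comm1 = A @ H - H @ A
comm2 = H @ A.conj().T - A.conj().T @ H
comm_expect = np.trace(comm1 @ comm2 @ rho)
```

Both sides agree to rounding error, so bounding the operator norm of \([\hat{A},\hat{H}]\) via locality controls the whole high-gap contribution.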
The Range (iv): \(|E_n-E_m|<2\varepsilon \)
Finally, using the fact that \(|e^{ix}-1|=2|\sin \frac{x}{2}|\le |x|\) for any real number x, we get
$$\begin{aligned}&\sum _{m,n\,:\,|E_n-E_m|<2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\left| e^{i(E_m-E_n)t}-1\right| \nonumber \\&\quad \le 2\varepsilon |t|\sum _{m,n:\,|E_n-E_m|<2\varepsilon }|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)\nonumber \\&\quad \le 2\varepsilon |t|\sum _{m,n}|\langle \Phi _m|\hat{A}|\Phi _n\rangle |^2\rho (E_m)=2\varepsilon |t| \langle \hat{A}\hat{A}^\dagger \rangle \nonumber \\&\quad \le 2\Vert \hat{A}\Vert ^2\varepsilon |t| \le 2N_a^2\varepsilon |t|. \end{aligned}$$
This completes the verification of Eq. (13) and hence the proof of Eq. (7).
Proof for \(\beta =0\)
Interestingly, the proof in the previous section does not apply to the infinite temperature (\(\beta =0\)) where \(\rho (E_n)\) in Eq. (9) becomes constant:
$$\begin{aligned} \rho (E_n)=\frac{1}{\mathcal {D}}. \end{aligned}$$
Here \(\mathcal {D}\) is the dimension of the entire Hilbert space. In this special case, however, we can directly prove Eq. (6) using the clustering property of the infinite-temperature state.Footnote 5 Thus the "absence of the time crystals" also holds at the infinite temperature, consistently with the intuition that the infinite temperature is the most disordered limit.
At \(\beta =0\), the equal-time correlation function trivially exhibits the locality
$$\begin{aligned} \langle \hat{a}_{\mathbf {x}} \hat{b}_{\mathbf {y}} \rangle = \langle \hat{a}_{\mathbf {x}} \rangle \langle \hat{b}_{\mathbf {y}} \rangle \end{aligned}$$
if the support of \(\hat{a}_{\mathbf {x}}\) and \(\hat{b}_{\mathbf {y}}\) do not overlap. This implies the clustering property and the absence of any spatial long-range order. However, quantum dynamics is nontrivial even at the infinite temperature (see, for example, Ref. [16] and references therein) and the question of the time crystal is not totally trivial.
To prove Eq. (7) in this setting, let us define
$$\begin{aligned} \delta \hat{A}:=\hat{A}-\langle \hat{A}\rangle =\frac{1}{|\Lambda |}\sum _{\mathbf {x}\in \Lambda } (\hat{a}_{\mathbf {x}}-\langle \hat{a}_{\mathbf {x}}\rangle ). \end{aligned}$$
It follows that
$$\begin{aligned} \Big |\langle \delta \hat{A}(t)\,\delta \hat{A}^\dagger \rangle \Big |&=\Big |\sum _{n,m}|\langle \Phi _n|\delta \hat{A}|\Phi _m\rangle |^2 \rho (E_n)e^{i(E_n-E_m)t}\Big |\nonumber \\&\le \sum _{n,m}|\langle \Phi _n|\delta \hat{A}|\Phi _m\rangle |^2\rho (E_n)=\langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle \end{aligned}$$
$$\begin{aligned} \Big |\langle \hat{A}(t)\hat{A}^\dagger \rangle -\langle \hat{A}\,\hat{A}^\dagger \rangle \Big |=\Big |\langle \delta \hat{A}(t)\,\delta \hat{A}^\dagger \rangle -\langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle \Big |\le 2 \langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle . \end{aligned}$$
The locality in Eq. (41) implies that \(\langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle \) is inversely proportional to \(|\Lambda |\). Therefore, we obtain Eq. (7) which gives Eq. (6) as explained in Sec. 2.2.1.
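The \(1/|\Lambda |\) decay of \(\langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle \) at \(\beta =0\) can be made explicit for a toy qubit chain with \(\hat{a}_{\mathbf {x}}=\sigma ^z_{\mathbf {x}}\); for this choice the infinite-temperature variance is exactly \(1/N\). The chain lengths below are arbitrary.

```python
import numpy as np

sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def site_op(op, x, N):
    """Embed a single-site operator op at site x of an N-site chain."""
    out = np.array([[1.0]])
    for y in range(N):
        out = np.kron(out, op if y == x else I2)
    return out

def infinite_T_variance(N):
    """<delta A delta A^dagger> at beta = 0 for A = (1/N) sum_x sigma^z_x."""
    D = 2 ** N
    A = sum(site_op(sz, x, N) for x in range(N)) / N
    mean = np.trace(A) / D                 # <A> in the beta = 0 state
    dA = A - mean * np.eye(D)
    return np.trace(dA @ dA.conj().T).real / D

vals = [infinite_T_variance(N) for N in (1, 2, 3, 4)]  # expect exactly 1/N
```

The cross terms \(\langle \delta \hat{a}_{\mathbf {x}}\,\delta \hat{a}_{\mathbf {y}}^\dagger \rangle \) with \(\mathbf {x}\ne \mathbf {y}\) vanish at infinite temperature, leaving only the \(N\) diagonal terms — the clustering mechanism used in the proof.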
The proof for the Gibbs state at a finite temperature in Sec. 2.2 equally applies to other stationary states (\([\hat{\rho },\hat{H}]=0\)) of a local, static Hamiltonian as long as the density operator \(\hat{\rho }\) satisfies the following conditions.
The weight \(\rho (E_n)\) for each eigenstate \(|\Phi _n\rangle \) should be a strictly decreasing function of \(E_n\), i.e., \(\rho (E_n)<\rho (E_m)\) when \(E_n>E_m\) for any n and m.
There exists a smooth function \(h(\varepsilon )\) of \(\varepsilon \) such that the quantity \(\Delta \) defined in Eq. (21) is bounded below as in Eq. (22). Furthermore, \(h(\varepsilon )\) must be independent of the system size \(|\Lambda |\).
This argument can also be straightforwardly modified when the weight \(\rho (E_n)\) is a strictly increasing function of \(E_n\). Therefore, the same theorem holds for Gibbs state with a "negative temperature" (\(\beta <0\)), which is well-defined for a bounded Hamiltonian we discuss here.
Recently, in the interesting paper [17], Huang showed the inequality
$$\begin{aligned} \left| \langle \hat{A}(t)\hat{B}\rangle -\langle \hat{A}\hat{B}\rangle \right| = O(|\Lambda |^{-1}) \end{aligned}$$
for an arbitrary stationary state \(\hat{\rho }\) assuming a sufficiently fast decay of spatial correlation functions but without assuming the locality of the Hamiltonian. Here we comment that, in order to prove the weaker statement Eq. (6), which already implies the absence of long-range temporal order, it is sufficient to assume the "clustering" property of spatial correlation functions. We say a state exhibits clustering if there exists a system-size independent function f(r) with \(\lim _{r\rightarrow \infty }f(r)=0\) such that correlation functions of any local bounded operator obey
$$\begin{aligned} |\langle \delta \hat{a}_{\mathbf {x}}\,\delta \hat{a}_{\mathbf {y}}^\dagger \rangle |\le f(|\mathbf {x}-\mathbf {y}|)\quad \text {for all}\quad \mathbf {x}, \mathbf {y}\in \Lambda . \end{aligned}$$
As we discuss in Appendix A, Eq. (46) readily gives
$$\begin{aligned} \lim _{|\Lambda |\rightarrow \infty }\langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle =0. \end{aligned}$$
Moreover, Eq. (43) holds as long as \(\rho (E_n)\ge 0\) for all n even when \(\rho (E_n)\) is not constant [17]. Thus, we get Eq. (7) by combining Eqs. (44) and (47).
In conclusion, in order to realize a temporal long-range order in a stationary state, the system has to fulfill both of the following two conditions:
Either the weight \(\rho (E_n)\) breaks some of the above conditions or the Hamiltonian \(\hat{H}\) violates the locality.
The state \(\hat{\rho }\) does not possess the clustering property.
For example, a time-crystalline behavior may be observed in a single eigenstate (except for the ground state) or a micro-canonical ensemble of a local Hamiltonian (e.g. Ref. [18]), and in the ground state of a non-local Hamiltonian (e.g. Ref. [19]).
After we revised our manuscript and updated the version on arXiv, Huang also posted the second version of his paper (Y. Huang, arXiv:1912.01210v2) in which the author made a revision for the main result in the first version [17], and added a new result which is similar to ours in Sec. 3.
The operator norm of an operator \(\hat{O}\) is defined as \(\Vert \hat{O}\Vert :=\sup _{|\psi \rangle \,:\,\Vert |\psi \rangle \Vert >0}\Vert \hat{O}|\psi \rangle \Vert /\Vert |\psi \rangle \Vert \).
For the assumed boundedness, the maximum number of bosons that can occupy a single site must be a finite number independent of the system size.
An example of \(\eta ^{+}(\omega )\) for the range \(\varepsilon \le \omega \le 2\varepsilon \) and \(K\le \omega \le K+\varepsilon \) can be constructed using \(m(x):=\int _{-1}^{x}dy\,e^{-\frac{1}{1-y^2}}\) (\(-1\le x\le +1\)). For example, one can set \(\eta ^{+}(\omega )=m(\frac{2\omega -3\varepsilon }{\varepsilon })/m(+1)\) for \(\varepsilon \le \omega \le 2\varepsilon \).
Here and hereafter, we use the standard properties of the operator norm, such as \({|}{\langle }{\hat{O}}{\rangle }{|}\le {\Vert }{\hat{O}}{\Vert }\), \({\Vert }{\hat{O}}{\Vert }={\Vert }{\hat{O}}^\dagger {\Vert }\), and \({\Vert }{\hat{O}\hat{O}'}{\Vert }\le {\Vert }{\hat{O}}{\Vert }{\Vert }{\hat{O}'}{\Vert }\) for operators \(\hat{O}\) and \(\hat{O}'\).
In the earlier version of the manuscript [20], our proof for \(\beta =0\) was based on the Lieb-Robinson bound. The present simpler proof was informed by Yichen Huang. See also Ref. [17] and Sect. 3.
Wilczek, F.: Quantum time crystals. Phys. Rev. Lett. 109, 160401 (2012)
Bruno, P.: Comment on "quantum time crystals". Phys. Rev. Lett. 110, 118901 (2013)
Bruno, P.: Impossibility of spontaneously rotating time crystals: a no-go theorem. Phys. Rev. Lett. 111, 070402 (2013)
Watanabe, H., Oshikawa, M.: Absence of quantum time crystals. Phys. Rev. Lett. 114, 251603 (2015)
Sacha, K.: Modeling spontaneous breaking of time-translation symmetry. Phys. Rev. A 91, 033617 (2015)
Khemani, V., Lazarides, A., Moessner, R., Sondhi, S.L.: Phase structure of driven quantum systems. Phys. Rev. Lett. 116, 250401 (2016)
Else, D.V., Bauer, B., Nayak, C.: Floquet time crystals. Phys. Rev. Lett. 117, 090402 (2016)
Yao, N.Y., Potter, A.C., Potirniche, I.-D., Vishwanath, A.: Discrete time crystals: rigidity, criticality, and realizations. Phys. Rev. Lett. 118, 030401 (2017)
Choi, S., Choi, J., Landig, R., Kucsko, G., Zhou, Hengyun, Isoya, Junichi, Jelezko, Fedor, Onoda, Shinobu, Sumiya, Hitoshi, Khemani, Vedika, von Keyserlingk, Curt, Yao, Norman Y., Demler, Eugene, Lukin, Mikhail D.: Observation of discrete time-crystalline order in a disordered dipolar many-body system. Nature 543, 221–225 (2017)
Zhang, J., Hess, P.W., Kyprianidis, A., Becker, P., Lee, A., Smith, J., Pagano, G., Potirniche, I.D., Potter, A.C., Vishwanath, A., Yao, N.Y., Monroe, C.: Observation of a discrete time crystal. Nature 543, 217–220 (2017)
Sacha, K., Zakrzewski, J.: Time crystals: a review. Rep. Prog. Phys. 81, 016401 (2017)
Else, D.V., Monroe, C., Nayak, C., Yao, N.Y.: Discrete time crystals. arXiv:1905.13232
Khemani, V., Moessner, R., Sondhi, S.L.: A brief history of time crystals. arXiv:1910.10745
Lieb, E.H., Robinson, D.W.: The finite group velocity of quantum spin systems. Commun. Math. Phys. 28, 251–257 (1972)
Hastings, M.B.: Locality in quantum systems. arXiv:1008.5137
Žnidarič, M.: Spin transport in a one-dimensional anisotropic Heisenberg model. Phys. Rev. Lett. 106, 220601 (2011)
Huang, Y.: Absence of temporal order in states with fast decay of spatial correlations. arXiv:1912.01210v1
Syrwid, A., Zakrzewski, J., Sacha, K.: Time crystal behavior of excited eigenstates. Phys. Rev. Lett. 119, 250602 (2017)
Kozin, V.K., Kyriienko, O.: Quantum time crystals from Hamiltonians with long-range interactions. Phys. Rev. Lett. 123, 210602 (2019)
Watanabe, H., Oshikawa, M., Koma, T.: Absence of temporal order in states with fast decay of spatial correlations. arXiv:1911.12939v1
We thank Hal Tasaki for fruitful discussions. We also thank Shivaji Sondhi for encouraging us to publish the present result, which is based on an earlier unpublished note, and Yichen Huang for informing us of a simplification of the proof for \(\beta =0\). The work of H. W. was supported by JSPS KAKENHI Grant No. JP17K17678 and by JST PRESTO Grant No. JPMJPR18LA. The work of M. O. was supported in part by JSPS KAKENHI Grant No. JP19H01808 and by US National Science Foundation Grant No. NSF PHY-1748958 through Kavli Institute for Theoretical Physics, UC Santa Barbara.
Department of Applied Physics, University of Tokyo, Tokyo, 113-8656, Japan
Haruki Watanabe
Institute for Solid State Physics, University of Tokyo, Kashiwa, 277-8581, Japan
Masaki Oshikawa
Department of Physics, Gakushuin University, Mejiro, Toshima-ku, Tokyo, 171-8588, Japan
Tohru Koma
Correspondence to Haruki Watanabe.
Communicated by Hal Tasaki.
Appendix A: Fluctuation and Clustering
In this appendix, we show that fluctuations of normalized macroscopic observables are negligible in the large volume limit when the state possesses the clustering property. The clustering defined in Eq. (46) means that, for any \(\varepsilon >0\), there exists \(R>0\) (independent of \(|\Lambda |\)) such that
$$\begin{aligned} |\langle \delta \hat{a}_{\mathbf {x}}\,\delta \hat{a}_{\mathbf {y}}^\dagger \rangle |<\frac{1}{2}\varepsilon \quad \text {if}\quad |\mathbf {x}-\mathbf {y}|>R. \end{aligned}$$
$$\begin{aligned} \langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle&\le \frac{1}{|\Lambda |^2}\sum _{\mathbf {x},\mathbf {y}\in \Lambda \,:\,|\mathbf {x}-\mathbf {y}|\le R}| \langle \delta \hat{a}_{\mathbf {x}}\,\delta \hat{a}_{\mathbf {y}}^\dagger \rangle |+\frac{1}{|\Lambda |^2}\sum _{\mathbf {x},\mathbf {y}\in \Lambda \,:\,|\mathbf {x}-\mathbf {y}|> R}| \langle \delta \hat{a}_{\mathbf {x}}\,\delta \hat{a}_{\mathbf {y}}^\dagger \rangle |\nonumber \\&\quad \le \frac{2N_a^2v(R)}{|\Lambda |}+\frac{1}{2}\varepsilon , \end{aligned}$$
where v(R) is defined in Eq. (37). Since \(N_a\) is independent of the system size, we can find \(|\Lambda |\) such that \(\frac{2N_a^2v(R)}{|\Lambda |}<\frac{1}{2}\varepsilon \). Therefore, for a sufficiently large system size, we have
$$\begin{aligned} \langle \delta \hat{A}\,\delta \hat{A}^\dagger \rangle <\varepsilon . \end{aligned}$$
This completes the proof of Eq. (47).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Watanabe, H., Oshikawa, M. & Koma, T. Proof of the Absence of Long-Range Temporal Orders in Gibbs States. J Stat Phys (2020). https://doi.org/10.1007/s10955-019-02471-5
Accepted: 13 December 2019
There's a nice paper by Menon and Elkan about dyadic prediction with latent features. It cares about the right things: ease of implementation, ability to incorporate multiple sources of information, and scalability. The model can be thought of as a mash-up of multinomial logit and matrix factorization.
The core of the model is a predicted probability of the form \[ p (y | r, c) = \sigma \left( \alpha^\top \beta \right), \] where $\sigma$ is the logistic function, $\alpha$ is a vector of $k$ latent factors associated with $r$ and $\beta$ is a vector of $k$ latent factors associated with $c$. $r$ and $c$ are identifiers here, e.g., user ids and movie ids, user ids and advertisement ids, etc.
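To make the prediction rule concrete, here is a minimal sketch; it assumes (consistent with the logistic loss discussed below) that the inner product of the latent factors is passed through a sigmoid, and the factor values are purely illustrative:

```python
import math

def predict_prob(alpha, beta):
    """Dyadic prediction: sigmoid of the inner product of latent factors."""
    score = sum(a * b for a, b in zip(alpha, beta))
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical k = 3 latent factors for identifiers r and c.
alpha = [0.5, -1.0, 2.0]
beta = [1.0, 0.5, 0.25]
p = predict_prob(alpha, beta)  # inner product = 0.5, so p = sigmoid(0.5) ≈ 0.62
```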
One could put the $\alpha$ features in one namespace and the $\beta$ features in another, and then choose --quadratic ab and --loss logistic. Unfortunately this does not do the right thing. First, it creates some extra features (it is an outer product, not an inner product). Second, these extra features have their own independent weights, whereas in the model the weight is the product of the individual weights. A possible solution is to add a --dotproduct option to vowpal which would take two namespaces and emulate the features corresponding to their inner product (in this case the order of the features in the input would matter).
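The distinction is easy to see by enumerating the features each option would generate; note that `--dotproduct` is the hypothetical option proposed above, not an existing vowpal flag:

```python
from itertools import product

alpha_feats = ["a1", "a2", "a3"]  # latent-factor features in namespace a
beta_feats = ["b1", "b2", "b3"]   # latent-factor features in namespace b

# --quadratic ab: the full outer product, k * k crossed features,
# each of which gets its own independent weight.
quadratic = [x + "*" + y for x, y in product(alpha_feats, beta_feats)]

# A --dotproduct ab option would emulate only the k "diagonal" terms,
# i.e., the pairwise products that make up an inner product.
dotproduct = [x + "*" + y for x, y in zip(alpha_feats, beta_feats)]
```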
If you've followed this far, you can probably see how additional features $s (x)$ associated with the dyad can be added in another namespace to augment the model with side information.
Similarly it is easy to incorporate side information associated with each component, which would not be placed inside the alpha and beta namespaces to avoid getting picked up by the --dotproduct (in fact, for typical side information associated with the components, --quadratic on the component side information would be reasonable). Note the authors report better results learning the latent model first, fixing the latent weights, and then learning the weights associated with the side information.
For multi-valued data the authors use multinomial logit but I suspect a scoring filter tree with a logistic base learner could get the job done.
Finally, the authors suggest that regularization is necessary for good results. Possibly I can get away with using the "only do one pass through the training data" style of regularization. | CommonCrawl |
Estimating regional prevalence of chronic hepatitis C with a capture-recapture analysis
Patricia A. M. Kracht ORCID: orcid.org/0000-0002-5794-93861,
Joop E. Arends1,
Andy I. M. Hoepelman1 &
Mirjam E. E. Kretzschmar2,3
The hepatitis C virus (HCV) infection is a candidate disease for micro-elimination. Accurate baseline HCV prevalence estimation is essential to monitor progress to micro-elimination but can be methodologically challenging in low-endemic regions like the Netherlands due to lack of disaggregated data by age or risk-groups on the number of chronic HCV patients (i.e. HCV RNA positive). This study estimates the number of patients that have had a chronic HCV infection (ever-chronic) in the Utrecht region of the Netherlands.
In the Utrecht province in the Netherlands, positive HCV tests from the period 2001–2015 from one diagnostic center and four hospital laboratories were collected. A two-source capture-recapture method was used to analyze the overlap between the two registries (with 92% HCV RNA and 8% HCV immunoblot confirmed infections) to obtain the number of ever-chronic HCV infections in the Utrecht region. The Utrecht region was defined as an area with a 25 km radius from the Utrecht city center. The current viremic HCV prevalence was calculated by taking into account the proportion of cured and deceased HCV patients from a local HCV retrieval (REACH) project.
The estimated number of ever-chronic HCV patients was 1245 (95% CI 1164–1326) and would indicate a prevalence of 0.10 (95% CI 0.09–0.10) in the Utrecht region. This is 30% (95% CI 21–38%) more than the number of known HCV patients in the records. The ever-chronic HCV prevalence was highest in the 1960–1969 age cohort (0.16; 95% CI 0.14–0.18). Since 50% of the HCV patients were cured or deceased in the REACH-project, the number of current viremic HCV patients was estimated at 623 individuals in the Utrecht region (prevalence 0.05%).
The results of this study suggest a low ever-chronic and current HCV prevalence in the Utrecht area in the Netherlands, but other studies need to confirm this.
Estimates based on modelling suggest that 71 million individuals are chronically infected with the hepatitis C virus (HCV) worldwide and consequently are at risk for developing the associated long-term complications such as cirrhosis and hepatocellular carcinoma [1]. Over the past decade, potent direct-acting antivirals (DAAs) have transformed HCV into a candidate infectious disease for global elimination, and to this end the WHO has set out ambitious service coverage targets for eliminating HCV as a public health threat by the year 2030 [2]. The micro-elimination method as a bottom-up approach has been successfully adopted over the recent years [3]. Micro-elimination projects pursue elimination goals with efficient and tailored interventions on a small scale, e.g. in distinct local regions or in specific risk-groups such as persons who inject drugs (PWID) or migrants [4]. Evaluation and monitoring of micro-elimination initiatives requires accurate baseline and follow-up epidemiological data on HCV. Anti-HCV antibodies (anti-HCV) are used to determine seroprevalence of HCV in serosurveys but information on the HCV-RNA positive fraction is also necessary to quantify the viremic population that is still in need of treatment. Obtaining reliable HCV prevalence estimates can be methodologically challenging in low endemic regions like the Netherlands. For instance, serosurveys on HCV often suffer from lack of data disaggregated by age or risk-groups, leading to a high uncertainty in these subgroup estimates [5]. The estimation accuracy of the viremic HCV population can drop even further due to a low or absent HCV RNA positive fraction in (disaggregated) serosurvey data. The Workbook method, which was developed by the WHO for HIV prevalence estimation, aims to overcome this issue and uses risk-group based data to characterize low-level concentrated epidemics such as HCV and HIV [6]. 
Identifying subgroups at risk however can be challenging, and consequently this approach is still affected by limited data among high-risk groups. A capture-recapture technique can be used to make inference about the total population size by studying the overlap between different data sources [7,8,9]. This method has been employed in a wide range of settings including laboratory and patient data registries [10, 11] but has not been employed to estimate HCV prevalence in the Netherlands.
In the Utrecht province in the Netherlands, positive HCV test results (i.e. either an anti-HCV or a HCV-RNA PCR) from the period 2001–2015 were extracted from the Laboratory Information System (LIS) of all four regional hospitals and one diagnostic center for general practitioners (GPs) as part of a regional micro-elimination project (REACH) that aimed to identify and trace untreated HCV patients [3]. The current study aimed to estimate the number of individuals in the Utrecht region that have ever had a chronic HCV infection (i.e. the ever-chronic HCV prevalence), which does not take into account the proportion of cured or deceased patients and may serve as a baseline indicator for future (micro-)elimination efforts in the Netherlands. The ever-chronic HCV prevalence is estimated by studying the overlap in HCV test results between the hospital and GP diagnostic center with a two-source capture-recapture analysis. In addition, we analyzed the contribution of specific age-cohorts and risk-groups to the HCV burden of disease with additional capture-recapture analyses. Finally, the REACH-project generated information about the proportion of cured and deceased HCV patients, which was used to estimate the size of the current viremic HCV population. Based on the clinical experience of infectious diseases specialists at the University Medical Center Utrecht, we hypothesized that the viremic HCV population would be substantially lower than the 0.16% estimation of the most recent benchmark study on HCV prevalence in the Netherlands [6].
REACH-project
The REACH-project aimed to trace previously diagnosed but untreated HCV patients by extracting positive HCV test-results (i.e. either an anti-HCV test or a HCV-RNA PCR test) from the LIS from all four regional hospitals and one diagnostic center for GPs in the Utrecht region in the Netherlands as published before [3]. Positive HCV test results were obtained from the period 2001–2015 from the hospitals' LIS. The GPs' LIS only contained HCV test results from the period 2006–2015 due to a change in the type of LIS that was used in the diagnostic center. The positive HCV diagnostics were linked to clinical records that were subsequently screened by the author (P.K.) to identify those HCV patients who had not yet been cured. Various patient-characteristics were documented, of which data on age, sex and country of birth were relevant to the current study. Untreated HCV patients were invited for re-evaluation at a nearby hepatitis treatment center, which among others included repeated HCV RNA testing. Before contacting patients, the Basic Registry of Persons was consulted based on the individual's citizen service number to obtain up-to-date contact information, which also included the postal code. This study has been approved by the institutional review board of the University Medical Center Utrecht.
Capture-recapture
The capture-recapture method originates from the field of ecology where it was often adopted to estimate an animal population size. First, a portion of a population is captured, marked and released. Later on, another sample is extracted from the population and the number of marked individuals in this second sample is counted. If the population mixes homogeneously and every individual is captured with the same probability, the proportion of marked individuals in the second sample is an estimate for the proportion of marked individuals in the entire population. Estimation of the population size can then be obtained by dividing the number of marked individuals by the fraction of marked individuals in the second sample. This approach is often referred to as the Lincoln-Petersen method, named after those who popularized its use [7, 8]. The capture-recapture method has also been adopted in epidemiology to assess the completeness of disease registries [11], and it can be applied to a variety of data sources including health insurance claims, medical prescriptions, but also laboratory and hospital records as was the case in the current study. There are several assumptions to this method, which include: 1) the population is closed (no birth, death or migration); 2) all individuals have the same chance of 'being caught' in the second sample; 3) tagging of individuals does not affect their catchability and 4) data sources have to be independent of one another.
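The proportion argument above translates into the classic estimator; a minimal sketch with hypothetical sample counts (not the study data):

```python
def lincoln_petersen(n1, n2, m2):
    """Classic Lincoln-Petersen estimate of total population size.

    The marked fraction of the second sample (m2 / n2) is taken as an
    estimate of the marked fraction of the whole population (n1 / N),
    so N is estimated by n1 * n2 / m2.
    """
    return n1 * n2 / m2

# Hypothetical example: 200 individuals marked in the first sample;
# a second sample of 50 contains 25 marked individuals.
N = lincoln_petersen(200, 50, 25)  # -> 400.0
```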
The modified Lincoln-Petersen estimator with Chapman adjustment was used in this study to estimate the prevalence of chronic HCV in Utrecht with a 95% confidence interval (CI) by means of a two-source capture-recapture analysis (Fig. 1). The two data-sources involved aggregated lists of all positive anti-HCV and HCV RNA tests from: 1) the four regional hospitals' LIS and 2) the LIS of the diagnostic center for GPs in the Utrecht province. An estimated/recorded population size ratio with 95% CI was calculated by dividing the estimated number of chronic HCV patients obtained from the capture-recapture analysis by the number of originally recorded chronic HCV patients. Since the HCV test results from the GPs LIS could not be acquired from the years 2001 until 2005, there was no overlap to be studied in this period. To overcome this issue, the population size ratio that was acquired by the capture-recapture analysis for the period 2006–2015 was used to estimate the number of patients that have had a chronic HCV infection (ever-chronic) in the entire study period (2001–2015). The HCV prevalence in Utrecht was calculated by dividing the result of the capture-recapture analysis by the total population size of the Utrecht province on the 1st of January 2016 (Statistics Netherlands - CBS). Finally, the REACH-project generated information about the proportion of cured and deceased HCV patients (=50%) [3]. This proportion was subtracted from the estimated ever-chronic HCV infections to generate an estimation of the current viremic population size.
Lincoln-Petersen with Chapman modification. N = estimated number of cases in the population; n1 = number of cases in first sample; n2 = number of cases 'captured' in second sample; m2 = number of cases that were 'recaptured' in second sample; Var = variance; 95CI = 95% confidence interval [7,8,9]
All cleared or cured acute HCV infections and false positive test results were excluded from the analysis through screening of the patient clinical records during the REACH project. The majority of the identified HCV infections (95%) were HCV RNA confirmed and the remaining 5% only had a positive anti-HCV which can indicate both past as well as ongoing HCV infection. Since the latter was a relatively small proportion and progression from acute to chronic HCV occurs in ~ 75% [12, 13] all patients with a positive anti-HCV test, but for whom the HCV RNA test result was lacking, were assumed to have a chronic HCV infection for the purpose of this study. Since the primary outcome of the capture-recapture analysis was the number of ever-chronic HCV infected in the Utrecht region, we initially ignored whether patients had deceased or had already been cured.
$$ N=\frac{\left({n}_1+1\right)\left({n}_2+1\right)}{{m}_2+1}-1 $$
$$ Var=\frac{\left({n}_1+1\right)\left({n}_2+1\right)\left({n}_1-{m}_2\right)\left({n}_2-{m}_2\right)}{{\left({m}_2+1\right)}^2\left({m}_2+2\right)} $$
$$ 95\%\,CI=N\pm Z\sqrt{Var(N)} $$
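The Chapman-adjusted estimator and its variance translate directly into code; the sketch below uses hypothetical counts rather than the study data:

```python
import math

def chapman_estimate(n1, n2, m2, z=1.96):
    """Lincoln-Petersen estimator with Chapman adjustment,
    with variance and a normal-approximation confidence interval."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
           / ((m2 + 1) ** 2 * (m2 + 2)))
    half = z * math.sqrt(var)
    return n_hat, (n_hat - half, n_hat + half)

# Hypothetical counts: 199 patients in the hospital list (n1), 49 in the
# GP list (n2), of whom 24 appear in both lists (m2).
N, (lo, hi) = chapman_estimate(199, 49, 24)  # N = 399.0
```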
Utrecht region
The Utrecht province is the smallest province of the Netherlands as its borders demarcate approximately 1400 km2, but also one of the most densely populated with approximately 1.274 million inhabitants in 2016. The home residencies of the HCV patients in our study sample were located throughout the Netherlands, but only patients living in the Utrecht region were included. For the purpose of the capture-recapture analysis, the Utrecht region was set as the area within a 25-km radius from the Utrecht city center as this most approximated the actual Utrecht province borders, excluding any large cities from neighboring provinces but still including the most important cities from hospitals participating in REACH (including the most distant city Amersfoort of which the outer border is at a linear distance of 25 km from Utrecht) (Fig. 3b). The 25-km radius area spans a total of 1963 km2. The individuals' (last available) residency location was based on their six digit postal code. Patients for whom no information about their current or last available residency was available were excluded from the capture-recapture analysis. The postal codes were translated into geographic coordinates in order to select all HCV patients living within a 25-km radius from Utrecht.
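The coordinate-based selection step can be sketched with a great-circle (haversine) distance from the city center; the coordinates used below are approximate and purely illustrative:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

UTRECHT_CENTER = (52.09, 5.12)  # approximate city-center coordinates

def in_utrecht_region(lat, lon, radius_km=25.0):
    return haversine_km(UTRECHT_CENTER[0], UTRECHT_CENTER[1], lat, lon) <= radius_km

# Amersfoort (approx. 52.16, 5.39) falls inside the 25-km radius;
# Amsterdam (approx. 52.37, 4.90) falls outside it.
```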
Stratified analysis
The capture-recapture analysis was performed on different HCV patient groups based on the distance of their residency from the Utrecht city center with incremental steps of 10 km (i.e. 10 to a maximum of > 120 km distance from Utrecht). In this manner, the patient subgroups gradually increased in size until the entire study sample was included. For each distance, the estimated/recorded population size ratio with 95% CI was also calculated. In addition, we estimated the number of ever-chronic HCV infected in five specific age cohorts (< 1950; 1950–1959; 1960–1969; 1970–1979; ≥1980) and risk-groups (migrants vs. non-migrants) in the Utrecht region with stratified capture-recapture analyses in these groups. The ever-chronic HCV prevalence in each age cohort in Utrecht was calculated by dividing the result of the stratified capture-recapture analysis by the total population size of each age cohort in the Utrecht province on the 1st of January 2016 (Statistics Netherlands - CBS).
The geographical patient selection was done in R studio version 1.1.463 and the "FSA" and "FSAdata" packages were used for the capture-recapture analysis.
From the generated laboratory list of positive HCV tests in the Utrecht province, 1853 individual HCV patients could be identified and for 1738 of them, their (last) postal codes were known. Basic patient characteristics are described in Table 1. Patients were predominantly male (74%) and born in the Netherlands (55%), while 45% had a migration background, and for 10% the country of birth was unknown. More than half (56%) of the HCV patients were born between 1950 and 1969 (Fig. 2). Those HCV patients without a known residency location were more often female (36% vs 25%, p = 0.013) and more frequently had an unknown country of birth (50% vs 7%, p < 0.001) compared to those with a registered address. There were no significant differences in the decade of birth between those with and without a known residency location.
Table 1 Patient characteristics of HCV patients tested between 2001 and 2015 in Utrecht
Birth decade of HCV patients who were tested in Utrecht between 2001 and 2015
Capture-recapture analysis
The geographical location of home addresses of all 1738 known HCV patients was mapped for all recorded HCV patients (Fig. 3a), as well as for those who lived in the Utrecht region (Fig. 3b) for the period 2001–2015. A total of 1016 (55%) HCV patients lived in the Utrecht region. The capture-recapture analysis over the period 2006–2015 resulted in an estimated-to-recorded population size ratio of 1.3 (95% CI 1.21–1.38) in the Utrecht region. This resulted in 1245 (95% CI 1164–1326) estimated ever-chronic HCV infected patients in the Utrecht region. With 1.274 million inhabitants, these results indicate an ever-chronic HCV prevalence of 0.10% (95% CI 0.09–0.10). Taking into account the proportion of 50% that had already been cured and/or had died in the REACH-project, the number of ever-chronic HCV infections would decrease by 50%, yielding an estimated 623 current chronic HCV infections in the Utrecht region. These results would indicate a current viremic HCV prevalence of 0.05% in the Utrecht region.
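As a quick arithmetic check of the figures above (population size, point estimate, and cure/death proportion all taken from the text):

```python
population = 1_274_000        # Utrecht region inhabitants, approx., 1 Jan 2016
ever_chronic = 1245           # capture-recapture point estimate
cured_or_deceased = 0.50      # proportion observed in the REACH project

prevalence_pct = 100 * ever_chronic / population          # ~0.098%, i.e. ~0.10%
current_viremic = ever_chronic * (1 - cured_or_deceased)  # 622.5, reported as 623
```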
HCV patients who were tested in the Utrecht region between 2001 and 2015. A Last known residency of all HCV patients who were tested in Utrecht between 2001 and 2015; B Selection of HCV patients who were tested in Utrecht between 2001 and 2015 and of whom the last known residency lays within a 25-km radius from the Utrecht city center. The red dots indicate the five nearest cities not pertaining to the Utrecht province, which were therefore not included in the HCV patient selection: Almere, Amsterdam, Ede, Gouda, 's Hertogenbosch. Figure created with R studio version 1.1.463 using the GADMTools package (© 2018 GADM – freely available for academic use)
Results from stratified analysis
The estimated-to-recorded population size ratios for different patient groups that were selected based on a specified distance of their residency from the Utrecht city center (with incremental steps of 10 km) are depicted in Supplementary Figure 1A. The results of the stratified capture-recapture analysis are depicted in Supplementary Figure 1B. The estimated-to-recorded population size ratio gradually increased from 1.1 (95% CI 1.05–1.15) in the 10-km radius area to 1.6 (95% CI 1.47–1.69) for all the recorded patients.
The estimated-to-recorded population size ratio for five different age cohorts varied between 1.1–1.6 with overlapping confidence intervals. The estimated ever-chronic HCV prevalence in each age cohort is depicted in Fig. 4. For the migrant subgroup, the estimated HCV population size in the Utrecht region consisted of 374 (95% CI 313–450) ever-chronic HCV infected, which constitutes a 25% (95% CI 5–51%) increase of the recorded HCV population. In contrast, the estimated HCV population size of the Dutch natives in the Utrecht region amounts to 528 (95% CI 440–640) ever-chronic HCV infected, corresponding to a 31% (95% CI 9–59%) increase compared to the recorded Dutch born HCV population in Utrecht.
The estimated ever-chronic HCV prevalence per age cohort within the Utrecht region. Ratio of the number of estimated HCV population per age cohort in Utrecht versus the total population size of each age cohort in the Utrecht province on the 1st of January 2016 (Statistics Netherlands - CBS)
Population size estimation of the population of chronically HCV infected persons in a low prevalence country, where HCV prevalence is concentrated in risk groups, can be methodologically challenging. This study aimed to estimate the ever-chronic HCV prevalence in the Utrecht region by means of a two-source capture-recapture analysis on two extensive lists of positive HCV tests performed between 2001 and 2015 in one diagnostic center for GPs and four hospital laboratories. The number of estimated ever-chronic HCV infected patients in the Utrecht region was 1245 (95% CI 1164–1326), indicating a local prevalence of 0.10 (95% CI 0.09–0.10).
The most recent benchmark study on HCV prevalence in the Netherlands, adopting the Workbook method, reported a slightly higher nationwide ever-chronic HCV prevalence of 0.16% (low 0.06%, high 0.27%) compared to our study, but the confidence intervals overlap [6]. The input data of the Workbook analysis consisted of a combination of heterogeneous studies of which some but not all reported on the HCV RNA positive fraction and also included data from modeling studies. The most recent national cross-sectional serosurvey reported a HCV seroprevalence of 0.3%, but probably does not reflect the current HCV prevalence as it was conducted in 2007. The results of our capture-recapture analysis are based on extensive recent data on HCV tests and suggest a somewhat lower prevalence than the aforementioned HCV prevalence estimates. Our current viremic HCV prevalence estimation, that takes into account cured and deceased patients, is even lower with 0.05%. These findings are supported by a more recent large birth cohort in the south of the Netherlands that tested 3434 individuals in 2014/2015 and found a 0.20% (95% CI, 0.08–0.42%) HCV seroprevalence, but detected no viremic HCV infections [14].
The current capture-recapture approach might, however, have underestimated the total number of HCV infected individuals, since the undiagnosed HCV patients in the population were not sampled. In contrast, between 0 and 8% of the included HCV infections may actually have been cleared or cured HCV infections since they were only HCV immunoblot confirmed. It is probable that this has led to overestimation of the ever-chronic HCV prevalence. In addition, a closed population was assumed although birth, death and migration did occur. This assumption would probably not have greatly affected the number of ever-chronic HCV infected, which was the main aim of our study. Moreover, the death rate was accounted for by taking the proportion of deceased HCV patients in the REACH-project as a proxy. Also, birth is presumed to have a low contribution to the current chronic infections due to the limited perinatal transmission of HCV (4–10%) in women and because the predominantly male HCV population does not contribute to perinatal transmission [15]. With regard to migration, 7.3% of the recorded HCV patients had been tested in an asylum center, and it is not known what proportion was granted asylum and stayed in the Netherlands and how many moved back to their country of origin. Any bias introduced through this most likely would have led to overestimation of the ever-chronic HCV prevalence in the Utrecht region. The assumption of independence between the two data sources may be debated since HCV patients diagnosed at the GP are generally referred to a nearby hepatitis treatment center in the Netherlands. For this reason, HCV patients might be registered in both lists more often than would be expected with a random selection. In addition, our results demonstrate that the chance to be included in both databases among others depends on the distance of the home address from the Utrecht city center (Supplementary Figure 1A).
A stratified analysis, as performed in the current study, is one of the possible solutions to this issue. When more than two data-sources are available, a log-linear or logit model can be used to model dependence between the different sources by including covariates that may be associated with heterogeneous catchability [16]. Since we only had access to two data sources for the current study, we could not address the question of dependence. Previous studies successfully adopted the capture-recapture method for monitoring and surveillance of infectious diseases. Similarly to our approach, Boender et al. linked the Dutch HIV Monitoring Foundation and the National Registry for Notifiable Diseases to assess annual HIV incidence [17]. In France, a more extensive three source capture-recapture analysis incorporated variables of heterogeneous catchability to estimate the HIV incidence rate in children [18]. A study from Denmark used multiple registries to estimate the national HCV prevalence [11]. In contrast to our analysis, all these studies included national registries, which naturally are preferred to local data sources to prevent the necessity for a (geographical) patient selection such as performed in the current study. Nevertheless, our study demonstrated that a capture-recapture analysis on two data sources can be of additional value to single disease registries for infectious disease prevalence estimation.
To further increase our knowledge on the viremic HCV population and to monitor micro-elimination progress, the optimal solution would be mandatory registration of both acute and chronic HCV infections. In the Netherlands, mandatory notification of HCV infections to Public Health Services for the purpose of contact tracing is currently only required for acute HCV infections and chronic HCV infections of unknown duration [19]. Privacy regulations thus far have precluded further implementation. Either way, future national HCV registries may still benefit from a capture-recapture analysis to assess their completeness.
The results of the capture-recapture analysis suggest that the number of ever-chronic HCV infections in the Utrecht region is lower than previously assumed. Further studies are needed to confirm these results. To monitor the HCV population with ongoing viremia and the progress towards micro-elimination, mandatory notification of all HCV infections is essential. A capture-recapture analysis will remain of additional value to assess completeness of HCV registries in the future.
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
HCV:
Hepatitis C virus
DAAs:
Direct-acting antivirals
PWID:
Persons who inject drugs
anti-HCV:
anti-HCV antibodies
LIS:
Laboratory information system
Hofman R, Nusselder WJ, Veldhuijzen IK, Richardus JH. Mortality due to chronic viral hepatitis B and C infections in the Netherlands. Ned Tijdschr Geneeskd. 2016;160:D511.
World Health Organization. Global health sector strategy on viral hepatitis 2016–2021. 2016.
Kracht PAM, Arends JE, van Erpecum KJ, Thijsen SFT, Vlaminckx BJM, Weersink AJL, et al. REtrieval and cure of chronic hepatitis C (REACH): results of micro-elimination in the Utrecht province. Liver Int. 2019;39(3):455–62.
Lazarus JV, Safreed-Harmon K, Thursz MR, Dillon JF, El-Sayed MH, Elsharkawy AM, et al. The micro-elimination approach to eliminating hepatitis C: strategic and operational considerations. Semin Liver Dis. 2018;38:181–92.
Vriend HJ, Op de Coul EL, van de Laar TJ, Urbanus AT, van der Klis FR, Boot HJ. Hepatitis C virus seroprevalence in The Netherlands. Eur J Pub Health. 2012;22:819–21.
Koopsen J, van Steenbergen JE, Richardus JH, Prins M, Op de Coul EL, Croes EA, et al. Chronic hepatitis B and C infections in the Netherlands: estimated prevalence in risk groups and the general population. Epidemiol Infect. 2019;147:e147.
Lincoln FC. Calculating waterfowl abundance on the basis of banding returns. Washington: The United States Government Publishing Office; 1930.
Ricker WE. Computation and interpretation of biological statistics of fish populations. Ottawa: Fisheries Research Board of Canada; 1975.
Chapman DG. Some properties of the hypergeometric distribution with applications to zoological sample censuses; 1951.
Törner A, Stokkeland K, Svensson Å, Dickman PW, Hultcrantz R, Montgomery S, et al. The underreporting of hepatocellular carcinoma to the cancer register and a log-linear model to estimate a more correct incidence. Hepatology. 2017;65:885–92.
Christensen PB, Hay G, Jepsen P, Omland LH, Just SA, Krarup HB, et al. Hepatitis C prevalence in Denmark -an estimate based on multiple national registers. BMC Infect Dis. 2012;12:178.
Grebely J, Page K, Sacks-Davis R, van der Loeff MS, Rice TM, Bruneau J, et al. The effects of female sex, viral genotype, and IL28B genotype on spontaneous clearance of acute hepatitis C virus infection. Hepatology. 2014;59:109–20.
Wang CC, Krantz E, Klarquist J, Krows M, McBride L, Scott EP, et al. Acute hepatitis C in a contemporary US cohort: modes of acquisition and factors influencing viral clearance. J Infect Dis. 2007;196:1474–82.
Heil J, Hoebe CJPA, Cals JWL, ter Waarbeek HLG, van Loo IHM, Dukers-Muijrers NHTM. Detecting hepatitis B and C by combined public health and primary care birth cohort testing. Ann Fam Med. 2018;16:21–7.
England K, Thorne C, Newell ML. Vertically acquired paediatric coinfection with HIV and hepatitis C virus. Lancet Infect Dis. 2006;6:83–90.
Tilling K, Sterne JAC. Capture-recapture models including covariate effects. Am J Epidemiol. 1999;149:392–400.
Boender TS, Op de Coul E, Arends J, Prins M, van der Valk M, van der Meer JT, et al. Acute hepatitis C infection among adults with HIV in the Netherlands between 2003 and 2016: a capture–recapture analysis for the 2013 to 2016 period. Eurosurveillance. 2020;25:1900450.
Héraud-Bousquet V, Lot F, Esvan M, Cazein F, Laurent C, Warszawski J, et al. A three-source capture-recapture estimate of the number of new HIV diagnoses in children in France from 2003-2006 with multiple imputation of a variable of heterogeneous catchability. BMC Infect Dis. 2012;12:251.
LCI-RIVM. Hepatitis C richtlijn. 2019. https://lci.rivm.nl/richtlijnen/hepatitis-c.
No funding was received.
Department of Infectious Diseases, University Medical Center Utrecht, PO BOX 85500, 3508, Utrecht, GA, The Netherlands
Patricia A. M. Kracht, Joop E. Arends & Andy I. M. Hoepelman
Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
Mirjam E. E. Kretzschmar
Center for Infectious Disease Control, National Institute for Public Health and the Environment, Bilthoven, The Netherlands
Patricia A. M. Kracht
Joop E. Arends
Andy I. M. Hoepelman
Study design: PK and MK; Data acquisition: PK; Data analysis and drafting of the manuscript: PK, JA and MK; Critical revision of the manuscript: AH. All authors read and approved the final version.
Correspondence to Patricia A. M. Kracht.
The REACH-project on which this study was based, has been approved by the institutional review board of the University Medical Center Utrecht. The board waived the need for informed consent. No administrative permissions were required to access the raw data. The collected data was anonymised before its use.
Dr. Arends reports fees paid to the institution from Abbvie, BMS, Gilead, Janssen, MSD and ViiV and research grants from Abbvie and BMS, outside the submitted work. Dr. Hoepelman reports personal fees from Abbvie, BMS, Gilead and Janssen. All other authors declare that they have no competing interests with respect to the presented work.
Additional file 1: Supplementary Figure 1.
Capture-recapture analysis: the estimated/recorded population size ratio and the HCV population size estimation. A: The ratio between the estimated and baseline HCV population size was calculated for each distance, as well as for the 95% confidence intervals. B: A capture-recapture analysis was performed on different HCV patient selections based on their residency distance from the Utrecht city center, in incremental steps of 10 km. HCV population size estimates with 95% confidence intervals are depicted for each distance.
Kracht, P.A.M., Arends, J.E., Hoepelman, A.I.M. et al. Estimating regional prevalence of chronic hepatitis C with a capture-recapture analysis. BMC Infect Dis 21, 640 (2021). https://doi.org/10.1186/s12879-021-06324-z
Received: 07 June 2020
DOI: https://doi.org/10.1186/s12879-021-06324-z
Keywords: REACH, micro-elimination
\begin{document}
\renewcommand{\baselinestretch}{1.0}
\title{Time-Correlation and Decoherence in a Two-Particle Interferometer}
\author{Bert\'{u}lio de Lima Bernardo}
\affiliation{Departamento de F\'{\i}sica, CCEN, Universidade Federal da Para\'{\i}ba, Caixa Postal 5008, 58059-900, Jo\~ao Pessoa, PB, Brazil}
\email{[email protected]}
\begin{abstract}
A two-particle interferometer is theoretically analyzed, to show how
decoherence induced by interactions with the environment affects
time correlations, a process we call time-correlation decoherence.
Specifically, on the basis of simple mathematical analysis we show
how the interaction between a bipartite entangled system and a
photon bath representing the environment can efface the oscillations
in the coincidence-detection rate of the interferometer. We discuss
the dependence of this kind of decoherence on the photon energy
and density. \end{abstract}
\pacs{}
\maketitle
\section {Introduction} \label{sec:1} As famously stated by Feynman \cite{feynman}, Young's double-slit experiment has in it the {\it heart of quantum mechanics} and {\it
contains the only mystery of the theory}. From the quantum-mechanical point of view, this experiment consists of a group of particles, such as electrons, approaching a screen with two slits. After traversing the slits, the particles impinge on a distant detector screen, which registers permanently their positions. If no information is available concerning the passage of the particles through the slits, the particle density on the detection screen displays an interference pattern described by the expression
$\rho(x)=\frac{1}{2}|\psi_{1}(x)+\psi_{2}(x)|^{2}$, where $\psi_{1}(x)$ and $\psi_{2}(x)$ are the partial wave functions associated with the passage through slits 1 and 2, respectively. On the other hand, if the experimental procedure determines the slit traversed by each particle, the interference pattern disappears and the detector exhibits the classical addition of two patterns, one due to the particles that have traversed slit 1 and the other due to those that have traversed slit 2, i.~e.,
$\rho(x)=\frac{1}{2}|\psi_{1}(x)|^2+\frac{1}{2}|\psi_{2}(x)|^{2}$. This experiment leads to the conclusion that quantum interference is incompatible with which-path information.
Let us now consider the behavior of a classical macroscopic object immersed in a large environment of gaseous molecules, light, thermal photons, etc. At any moment a huge number of environmental particles collide with the object, in such a way that they will carry some information about the object, on its position and orientation in space, for instance. In this case, the information is associated with the scattering positions and deflection angles. We see that every object interacts with its environment, as a result of which information about the physical properties of the former is inevitably encoded in the latter.
Interactions between quantum objects and their environments are significantly weaker, because quantum systems are several orders of magnitude smaller than classical ones. Nonetheless, system-environment interactions are ubiquitous in quantum physics and can transfer which-path (or which-state) information to the environment by the aforementioned mechanism. In other words, an interacting environment suppresses interference (wave-like behavior) in atomic systems and consequently bars quantum manifestations at the macroscopic scale. System-environment interactions explain how the classical behavior of the macroscopic world emerges from the quantum properties of its building blocks \cite{schloss, zurek, schloss2}.
A quantum superposition depends on the relative phases between its components. System-environment interactions transfer which-path (or which-state) information to the environment at the expense of the coherence among those relative phases. This inevitable monitoring of the system by the environment therefore amounts to the so-called \emph{decoherence} process. In the two-slit interferometer, one can always regard any mechanism offering information on the particle path as a form of system-environment interaction responsible for a specific kind of decoherence, i.~e., the destruction of the interference pattern on the detection screen \cite{wootters, schloss2}. This remarkably simple, evident form of decoherence is the object of our analysis.
We are particularly interested in a class of interference devices first developed in the 1980s, the two-particle
interferometer \cite{rarity,kwiat,ou}. Certain experiments have shown that when two entangled particles separately go through a single-particle interferometer, such as the Young interferometer, an interference pattern results when the rate of coincident arrival is measured, while no such pattern appears when only one particle is observed \cite{mandel}. Entangled particles, in this context, are particles simultaneously created by the same source in such a way that, due to momentum conservation, one only has to determine the position of one particle to predict the position of the other. This kind of interferometer, as we shall see, is particularly sensitive to entanglement correlation. Nonetheless, the literature contains no detailed description of decoherence in two- and many-particle systems of this kind.
The purpose of this paper is to establish a simple, direct connection between decoherence and two-particle interferometry. To this end, we discuss a \emph{gedanken} experiment originally devised by Horne and Zeilinger \cite{horne}, which is convenient because simple calculations suffice to describe the system after interaction with the bath of monochromatic photons that here represents the environment. As already mentioned, in the two-particle interferometer under study interference is only observed in time-correlation measurements. For this reason, we will refer to the environmental disturbance as \emph{time-correlation decoherence} (TCD), to distinguish it from the well-known spatial decoherence that is commonplace in the single-particle systems.
\section{Two-particle interferometry} \label{sec:2} The \emph{gedanken} experiment, which Gottfried has also explored very well \cite{gott}, analyzes the particles produced by the decay process $A \rightarrow a + b$, each daughter particle going through a double-slit apparatus, as shown in Fig.~1. If $A$ is approximately at rest, momentum conservation forces $a$ and $b$ to travel in approximately opposite directions. Therefore, if $a$ passes through one of the slits on the right, $b$ must pass through the diametrically opposite slit on the left.
\begin{figure}
\caption{(Colored online) Schematic representation of the
\emph{gedanken} experiment discussed in the text. Particle $A$, with
approximately zero momentum, decays into two particles $a$ and
$b$. To conserve momentum, the daughter particles must travel in
approximately diametrically opposite paths. Each particle traverses
a double-slit apparatus before being detected by a screen, at
$y_a$ or $y_{b}$. The lengths $r_{a1}$, $r_{a2}$, $r_{b1}$, and
$r_{b2}$ are the distances from the slits to the detection points.}
\label{F1}
\end{figure}
Let $\ket{R_{1}}$ and $\ket{R_{2}}$ denote the quantum states of particle $a$ corresponding to passage through slits 1 and 2 on the right, and $\ket{L_{1}}$ and $\ket{L_{2}}$ denote the quantum states of particle $b$ corresponding to the passage through slits 1 and 2 on the left, respectively. We can then write the two-particle entangled quantum state in the form \begin{equation} \label{1} \ket{\psi}= \frac{1}{\sqrt{2}}(\ket{R_{1}}\ket{L_{2}} + \ket{R_{2}}\ket{L_{1}}). \end{equation}
On the right-hand side of Eq.~\eqref{1} we recognize a state that is entangled, in the above-defined sense, since $\ket{\psi}$ cannot be factorized into a simple product of $a$ and $b$ states, i.~e., no two states $\ket{R_{i}}$ and $\ket{L_{j}}$ can be found such that $\ket{\psi} = \ket{R_{i}} \ket{L_{j}}$. The state of one particle cannot be specified without reference to the other particle; the two particles are therefore entangled.
The concepts of density matrix and reduced density matrix are of capital importance in decoherence theory. We therefore adopt those concepts from the outset, to familiarize the reader with them. We shall make only simple use of these tools. To compare our formalism with the quantum-state formalism, we recommend Gottfried's analysis of the same \emph{gedanken} experiment \cite{gott}. A clear introduction to the density matrix and reduced density matrix in the context of decoherence can be found in Ref.~[12].
To start, we write the density matrix $\rho = \ket{\psi}\bra{\psi}$ for this system in the following form \begin{equation} \label{2} \rho= \frac{1}{2}\sum_{\substack{ij=1\\i \neq j}}^{2}\ket{R_{i}}\ket{L_{j}}\bra{L_{j}}\bra{R_{i}} + \frac{1}{2}\sum_{\substack{ij=1\\i \neq j}}^{2}\ket{R_{i}}\ket{L_{j}}\bra{L_{i}}\bra{R_{j}}. \end{equation}
To describe the behavior of one of the particles, the reduced density matrix associated with that particle is convenient. If we are interested in particle $a$, for instance, to compute the reduced density matrix $\rho_{a}$ we trace Eq.~\eqref{2} over the states of particle $b$ in the following way: \begin{equation} \label{3} \rho_{a} = \operatorname{Tr_{b}}\ket{\psi}\bra{\psi}
= \frac{1}{2}\sum_{i=1}^{2}\braket{L_{i}|\psi}\braket{\psi|L_{i}}. \end{equation}
Since $\braket{L_{1}|L_{2}}=0$, we have that $\rho_{a}= 1/2\sum_{i=1}^{2}\ket{R_{i}}\bra{R_{i}}$. This density matrix corresponds to a particle density $\rho(y_{a})$ on the detecting screen on the right-hand side of Fig.~1 given by the expression \begin{equation} \label{4}
\rho(y_{a}) \equiv \braket{y_{a}|\rho_{a}|y_{a}}
= \frac{1}{2}|\psi_{a1}(y_{a})|^{2} + \frac{1}{2}|\psi_{a2}(y_{a})|^{2}, \end{equation}
where $\psi_{ai}(y_{a})= \braket{y_{a}|R_{i}}$ ($i=1,2$).
As we can see, the distribution of $a$ particles on the detection screen exhibits no interference pattern. Given the symmetry of the apparatus, we see that \emph{mutatis mutandis} the same result describes the distribution of $b$ particles on the left-hand detection screen. Physically speaking, the absence of interference patterns stems from assuming particle $A$ to be approximately at rest initially. According to the uncertainty principle, we have almost no information on the initial position of $A$. Particle $A$ is therefore equivalent to a large source of daughter particles, and the path of each daughter is undefined relative to the path of the other. Consequently, single-particle interference cannot occur.
Let us now analyze the system as a whole. The probability density of simultaneously detecting particle $a$ at $y_{a}$ and particle $b$ at $y_{b}$ is given by the expression \begin{equation} \label{5}
\rho(y_{a},y_{b}) \equiv \bra{y_{b}}\braket{y_{a}|\rho|y_{a}}\ket{y_{b}}. \end{equation}
Substitution of Eq.~\eqref{2} into Eq.~\eqref{5} yields the result \begin{align} \label{6} \rho(y_{a},y_{b}) &= \frac{1}{2} \sum_{\substack{ij=1\\i \neq j}}^{2}
\braket{y_{a}|R_{i}}\braket{y_{b}|L_{j}}\braket{L_{j}|y_{b}}
\braket{R_{i}|y_{a}}\nonumber\\ &+\frac{1}{2}\sum_{\substack{ij=1\\i \neq j}}^{2}
\braket{y_{a}|R_{i}}\braket{y_{b}|L_{j}}\braket{L_{i}|y_{b}}
\braket{R_{j}|y_{a}}. \end{align}
Let us assume that, after passing through one of the slits, the wavefunctions of the particles are spherical waves, i.~e., given by the expressions \begin{equation} \label{7}
\psi_{aj}(r_{aj})=\braket{r_{aj}|R_{j}}=\frac{e^{ikr_{aj}}}{r_{aj}} \end{equation} and \begin{equation} \label{8}
\psi_{bj}(r_{bj})=\braket{r_{bj}|L_{j}}=\frac{e^{ikr_{bj}}}{r_{bj}}\qquad(j=1,2), \end{equation} where the $r_{(a,b)j}$ denote the distances from the slits to the detection points, and $k$ is the wavenumber.
If we let the distance between the slits and the detection screen be much larger than the separation between the two slits so that we are in the Fraunhofer diffraction limit, we have that $r_{a (1,2)} \approx L \mp \theta y_{a}$ and $r_{b (1,2)} \approx L \mp \theta y_{b}$ \cite{born}, with the coordinates $y$ defined in Fig.~1, and the angle $\theta$ and distance $L$ defined in Fig.~2. Equations~\eqref{7}~and \eqref{8} then yield the approximate equalities \begin{equation} \label{9}
\braket{y_{a}|R_{1,2}} \approx \frac{e^{ik( L \mp \theta y_{a})}}{ L \mp \theta y_{a}} \end{equation} and \begin{equation} \label{10}
\braket{y_{b}|L_{1,2}} \approx \frac{e^{ik( L \mp \theta y_{b})}}{ L \mp \theta y_{b}}. \end{equation}
We now substitute Eqs.~\eqref{9} and~\eqref{10} into Eq.~\eqref{6}. Notice taken that the denominators can all be absorbed into an irrelevant overall factor, for small diffraction angles we can write the following expression for the joint probability of detecting particle $a$ at $y_{a}$ and particle $b$ at $y_{b}$: \begin{equation} \label{11} \rho(y_{a},y_{b}) \doteq \cos^{2}\big(k \theta(y_{a}-y_{b})\big), \end{equation} where the symbol $\doteq$ stands for equality up to a constant factor. This equation shows that the coincident-arrival rate is a periodic function of the relative position $y_{a}-y_{b}$, a functional form characteristic of interference. It is not difficult to identify the source of this curious behavior: as Eq.~\eqref{1} shows, the daughter particles emitted in opposite directions by the decay of particle $A$ can reach the screens in two alternative ways. The interference between these two paths is responsible for the oscillatory coincidence rate.
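To spell out the algebra behind Eq.~\eqref{11}: the joint detection amplitude is the sum of the two path amplitudes, so that, up to the common factor $e^{2ikL}$ and the already-absorbed denominators, \begin{align} \braket{y_{a}|R_{1}}\braket{y_{b}|L_{2}} + \braket{y_{a}|R_{2}}\braket{y_{b}|L_{1}} &\doteq e^{-ik\theta(y_{a}-y_{b})} + e^{ik\theta(y_{a}-y_{b})} \nonumber\\ &= 2\cos\big(k\theta(y_{a}-y_{b})\big), \end{align} whose squared modulus reproduces the oscillatory coincidence rate.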
The contrast between Eqs.~\eqref{4}~and \eqref{11} constitutes a particular instance of a relation between the single- and two-particle interferences first identified by Jaeger \emph{et
al.}~\cite{jaeger} in their analysis of the system depicted in Fig.~1. Ref.~\onlinecite{jaeger} showed that if the initial state is maximally entangled, the coincidence rate is affected by interference, as we have shown, while single-particle detection patterns are not. By contrast, when the particles are created in a separable state, only single-particle interference arises. More specifically, the more entangled the initial state, the stronger the interference in the coincidence detection rate, and the weaker the single-particle interference. This complementarity has been verified in a number of experiments \cite{rarity,
kwiat, ou}.
Although capturing essential features of experimental systems, Fig.~1 is schematic. One of its most important limitations is the absence of interaction between the particles and the environment. In the next section we show, for the first time, how this interaction undercuts the interference responsible for the oscillatory behavior in Eq.~\eqref{11}, i.~e., how the system-environment interaction gives rise to decoherence.
\begin{figure}
\caption{(Colored online) Distance $L$ and the angle $\theta$ associated with
arrangement in Fig.~1.}
\label{F2}
\end{figure}
\section{Time-Correlation Decoherence}
We now turn to the open system, i.~e., to particles that interact with the environment. As we shall see, decoherence will arise, i.~e., time correlations will lose coherence.
First, we place a light source beyond the two slits on the right-hand side of Fig.~1, to play the role of the environment. In this arrangement, if we place a detector behind each slit, a photon that happens to be scattered by particle $a$ near one of the slits will be recorded by the nearest detector. The collision between particle $a$ and the photon represents the system-environment interaction, which, as we shall show, entangles them and causes decoherence. Next, we restrict our analysis to photons with a wavelength so short that diffraction can be ruled out and we can be sure that a photon scattered at slit 1 cannot reach the detector at slit 2 or vice versa. The purpose of the detectors is to detect interactions between particle $a$ and the environment. In no way do they affect decoherence.
In analogy with the discussion in Section~\ref{sec:2}, we describe the system-environment interaction by the expressions \begin{equation} \label{12} \ket{R_{1}}\ket{L_{2}}\ket{\epsilon_{0}} \rightarrow \ket{R_{1}}\ket{L_{2}}\ket{\epsilon_{1}} \end{equation} and \begin{equation} \label{13} \ket{R_{2}}\ket{L_{1}}\ket{\epsilon_{0}} \rightarrow \ket{R_{2}}\ket{L_{1}}\ket{\epsilon_{2}}, \end{equation} where the environmental states $\ket{\epsilon_{0}}$, $\ket{\epsilon_{1}}$, and $\ket{\epsilon_{2}}$ are the initial photonic quantum state, the state into which the arrival of a photon scattered near slit 1 triggers detector 1, and the state into which the arrival of a photon scattered near slit 2 triggers detector 2, respectively. The initial environmental state $\ket{\epsilon_{0}}$ evolves into $\ket{\epsilon_{1}}$ or $\ket{\epsilon_{2}}$, depending on the system state. Equations~\eqref{12}~and \eqref{13} are valid only if particle $a$ scatters a photon right after passing through one of the slits, prior to any other collision. Otherwise, it would be incorrect to write $\ket{R_{1}}$ or $\ket{R_{2}}$ (which, according to Eq.~\eqref{9}, represent spherical waves emerging from slits 1 and 2) on the right-hand sides of expressions~\eqref{12}~or \eqref{13}, respectively. The linearity of the Schr\"{o}dinger equation implies the von Neumann measurement scheme \cite{zurek, schloss} \begin{align} \label{14} \frac{1}{\sqrt{2}}(\ket{R_{1}}\ket{L_{2}} + \ket{R_{2}}\ket{L_{1}})\ket{\epsilon_{0}} &\rightarrow \nonumber\\ \ket{\phi}=\frac{1}{\sqrt{2}}(\ket{R_{1}}\ket{L_{2}}\ket{\epsilon_{1}} &+ \ket{R_{2}}\ket{L_{1}}\ket{\epsilon_{2}}). \end{align}
We see that the system states have become entangled with the environmental states, which encode information on the particle paths. The initial coherence between the system states $\ket{R_{2}}\ket{L_{1}}$ and $\ket{R_{1}}\ket{L_{2}}$ is now shared with the environment, i.~e., is now a property of the system-environment state.
Let us analyze the behavior of this system in more detail. If we determine the reduced density matrix $\rho_{a} = \operatorname{Tr_{bE}} \ket{\phi}\bra{\phi}$ for particle $a$, where $\operatorname{Tr_{bE}}$ stands for the trace over the states of particle $b$ and the environment, and then calculate
$\braket{y_{a}|\rho_{a}|y_{a}}$, we find that the probability density $\rho({y_{a}})$ of finding particle $a$ at position $y_{a}$ on the screen is still given by Eq.~\eqref{4}. Not surprisingly, the entanglement between system and environment has no effect upon the already-incoherent single-particle probability density.
On the other hand, if we calculate the reduced density matrix $\rho_{ab} = \operatorname{Tr_{E}} \ket{\phi}\bra{\phi}$
for the two particles, where $\operatorname{Tr_{E}}$ stands for the trace over the environmental states only, we find the equality \begin{equation} \label{15} \rho_{ab} = \frac{1}{2}\sum_{k=1}^{2} \bra{\epsilon_{k}}O_{ij} + Q_{ij}\ket{\epsilon_{k}}, \end{equation} where \begin{equation} O_{ij}= \sum_{\substack{ij=1,\\i \neq j}}^{2} \ket{R_{i}}\ket{L_{j}}\ket{\epsilon_{i}}\bra{\epsilon_{i}}\bra{L_{j}}\bra{R_{i}}\label{eq:1} \end{equation} and \begin{equation} Q_{ij} = \sum_{\substack{ij=1,\\i \neq j}}^{2} \ket{R_{i}}\ket{L_{j}}\ket{\epsilon_{i}}\bra{\epsilon_{j}}\bra{L_{i}}\bra{R_{j}}.\label{eq:2} \end{equation}
If a given photon is recorded by detector 1, the same photon cannot be recorded by detector 2. Mathematically, this self-evident notion corresponds to the equality $\braket{\epsilon_{i}|\epsilon_{j}}=0$ for $i \neq j$. Equation~\eqref{15} therefore reduces to the equation \begin{equation} \label{16} \rho_{ab}=\frac{1}{2}\sum_{\substack{ij=1,\\i \neq j}}^{2}\ket{R_{i}}\ket{L_{j}}\bra{L_{j}}\bra{R_{i}}, \end{equation} and Eqs.~\eqref{9}~and \eqref{10} yield the following expression for the probability density of simultaneously detecting particle $a$ at $y_{a}$ and particle $b$ at $y_{b}$: \begin{equation} \label{17}
\rho(y_{a},y_{b}) \equiv \bra{y_{a}} \braket{y_{b}|\rho_{ab}|y_{b}}\ket{y_{a}} = \mathrm{const}. \end{equation}
The probability distribution in Eq.~\eqref{17} is position independent. We therefore see that the system-environment interaction has destroyed the coincidence-rate interference expressed by Eq.~\eqref{11}, i.~e., the interference prevalent in the isolated, photon-free system. Since time-correlation interference is lost, we call this phenomenon time-correlation decoherence (TCD).
Interesting issues emerge when we examine the environmental properties. In particular, we are interested in the dependence of TCD upon the photon energy, or equivalently, upon the wavelength of the light. Up to this point we have only considered the small-wavelength limit, i.~e., a wavelength $\lambda$ that is dwarfed by the slit separation $d$. With larger wavelengths, diffraction allows photons scattered near slit 1 (2) to reach detector 2 (1). The light is now unable to resolve the separation between the slits, and the environment therefore cannot encode a significant amount of information on the particle paths.
In order to account for these new possibilities, we now write the state of the system in the form \begin{align} \label{18} \ket{\varphi} &= n \ket{R_{1}}\ket{L_{2}}\ket{\epsilon_{1}} + m \ket{R_{1}}\ket{L_{2}}\ket{\epsilon_{2}} \nonumber \\ &+ n \ket{R_{2}}\ket{L_{1}}\ket{\epsilon_{2}} + m \ket{R_{2}}\ket{L_{1}}\ket{\epsilon_{1}}, \end{align} where $n$ and $m$ are the probability amplitudes for a photon scattered near a given slit to be recorded by the detectors that are closer and farther from that slit, respectively. The right-hand side of Eq.~\eqref{18} remains invariant under the change $1\leftrightarrow2$ because we are working with identical slits and symmetrically positioned detectors.
When we calculate the reduced density matrix,
$\rho_{ab}^{(\varphi)} = \operatorname{Tr_{E}}\ket{\varphi}\bra{\varphi}$, under the condition $\braket{\epsilon_{1}|\epsilon_{2}} = 0$, the following result emerges: \begin{align} \label{19}
\rho_{ab}^{(\varphi)} &= (|n|^2 + |m|^2) \sum_{\substack{ij=1,\\i \neq j}}^{2}\ket{R_{i}}\ket{L_{j}}\bra{L_{j}}\bra{R_{i}} \nonumber \\ &+ (nm^{*} + n^{*}m) \sum_{\substack{ij=1,\\i \neq j}}^{2}\ket{R_{i}}\ket{L_{j}}\bra{L_{i}}\bra{R_{j}}. \end{align}
Equation~\eqref{19} is our central result. To find the probability density of simultaneously detecting particle $a$ at $y_{a}$ and particle $b$ at $y_{b}$ as a function of the amplitudes $n$ and $m$ we only have to compute
$\bra{y_{b}}\braket{y_{a}|\rho_{ab}^{(\varphi)}|y_{a}}\ket{y_{b}}$. The second term on the right-hand side of Eq.~\eqref{19}, which contains the off-diagonal elements of $\rho_{ab}^{(\varphi)}$ on the basis $\{ \ket{R}\ket{L} \}$, is usually referred to as the interference term because it monitors the quantum coherence among the components on the right-hand side of Eq.~\eqref{18} \cite{schloss2}.
In the small-wavelength limit, $n=1/\sqrt{2}$ and $m=0$. In this case, as expected, Eq.~\eqref{19} reduces to Eq.~\eqref{16}. TCD is maximum and the coincidence arrival rate displays no vestige of interference. In the large-wavelength limit, on the other hand, $n = m = 1/2$, since the amplitude of a photon reaching a detector is independent of the slit at which it was scattered. Under these conditions, Eq.~\eqref{19} reduces to Eq.~\eqref{2}, TCD is minimum, and the coincidence arrival rate shows the interference features identified in our discussion of Eq.~\eqref{11}. No information on particle paths is conveyed to the environment.
In the intermediate case, in which the detectors receive only partial which-path information, we have that $n > m \neq 0$. The coincidence-rate interference is weaker than in the large-wavelength limit and the probability distribution combines a term reminiscent of Eq.~\eqref{11} with a constant contribution, analogous to Eq.~\eqref{17}: \begin{align} \label{20} \rho(y_{a},y_{b}) &\equiv \bra{y_{a}}
\braket{y_{b}|\rho_{ab}|y_{b}}\ket{y_{a}} \nonumber\\
&\doteq |n|^{2} + |m|^{2} + (nm^{*} + mn^{*}) \cos\big(2k \theta (y_{a} - y_{b})\big). \end{align}
Another important parameter is the intensity of the light source, i.~e., the photon density in the region beyond the slits on the right-hand side of Fig.~\ref{F1}. So far we have implicitly assumed the intensity to be sufficiently high to ensure scattering with 100\,\% certainty. This constraint relaxed, the properties of the system are described by a mixed density matrix \cite{schloss2} of the form $\rho = w_{1}\ket{\phi}\bra{\phi} + w_{2}\ket{\alpha}\bra{\alpha}$. Here $\ket{\phi}$ is defined as in Eq.~\eqref{14}, $\ket{\alpha} = \ket{\psi}\ket{\epsilon _{0}}$ is a separable (non-entangled) system-environment state associated with the absence of collisions, and $w_{1}$ and $w_{2}=1-w_{1}$ are the classical probabilities of particle $a$ scattering or not scattering a photon after passing through the slits, respectively. We calculate the reduced density matrix $\rho_{ab} = \operatorname{Tr_{E}}(\rho)$ and, from $\rho_{ab}$ together with the wavefunctions in Eqs.~\eqref{9} and~\eqref{10}, we find the following expression for the probability density $\rho(y_{a},y_{b})$ to detect the two particles in coincidence: \begin{align} \label{21} \rho(y_{a},y_{b}) &\equiv \bra{y_{a}}
\braket{y_{b}|\rho_{ab}|y_{b}}\ket{y_{a}} \nonumber\\ &\doteq w_{1} + 2 w_{2} \cos^{2}\big(k \theta(y_{a}-y_{b})\big), \end{align} which, as expected, combines features found in Eqs.~\eqref{11}~and \eqref{17}.
For completeness, we cursorily discuss an alternative arrangement, with an additional light source and two other detectors beyond the slitted screen on the left-hand side of Fig.~1. Qualitatively, the new arrangement is equivalent to the setup we have analyzed. As long as one of the particles or both of them scatter photons after passing through the slits, the environmental state changes as it acquires information on the paths followed by the particles. Clearly, with two collision alternatives, the odds in favor of acquiring which-path information are higher, and interference is weakened. Equations~\eqref{17}, \eqref{18}, and \eqref{19} are still applicable, but given that each particle can now collide with a photon, the collision-probability parameter $w_{1}$ is larger. For example, if the two light sources are identical, $w_{1}$ is approximately twice as large as in the previous case.
\section{Conclusion}
In conclusion, we have quantitatively studied a class of quantum-mechanical decoherence processes, to show how the system-environment interactions suppress coincidence-rate interference in a two-particle interferometer. The environment was modeled by a photon bath. Given the loss in particle time-correlation coherence, we have called this process time-correlation decoherence (TCD). In addition, we have brought to light the decisive importance of the photon energy and density in TCD.
\begin{acknowledgments}
The author gratefully acknowledges Eric J. Heller for invaluable
comments and discussions. This work was supported by
Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel
Superior (CAPES) and by Conselho Nacional de Desenvolvimento
Cient\'{\i}fico e Tecnol\'{o}gico (CNPq). \end{acknowledgments}
\end{document} | arXiv |
# Linear regression and its mathematical foundation
Linear regression is a fundamental concept in machine learning that allows us to model the relationship between a dependent variable and one or more independent variables. It is widely used in many quantitative fields to make predictions and solve problems.
The mathematical foundation of linear regression is based on the concept of least squares. The goal is to find the best-fitting line through the data points, which minimizes the sum of the squared differences between the actual data points and the predicted values. This is achieved by solving the normal equation, which is a system of linear equations derived from the partial derivatives of the sum of squared errors with respect to the coefficients of the linear regression model.
The normal equation for linear regression is given by:
$$\hat{\beta} = (X^T X)^{-1} X^T y$$
where $\hat{\beta}$ is the vector of coefficients, $X$ is the matrix of independent variables, $y$ is the vector of dependent variables, and $T$ denotes the transpose of a matrix.
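For a single feature plus an intercept column, $X^T X$ is a $2 \times 2$ matrix, so the normal equation can be solved in closed form without a linear-algebra library. The following is a minimal pure-Python sketch; the function and variable names are illustrative:

```python
# Least-squares fit via the normal equation, beta = (X^T X)^{-1} X^T y,
# specialized to one feature plus an intercept so X^T X is 2x2.

def normal_equation_fit(x, y):
    n = len(x)
    # X has rows [1, x_i]; assemble the entries of X^T X and X^T y
    sx, sxx = sum(x), sum(v * v for v in x)
    sy, sxy = sum(y), sum(v * w for v, w in zip(x, y))
    det = n * sxx - sx * sx               # determinant of X^T X
    beta0 = (sxx * sy - sx * sxy) / det   # intercept
    beta1 = (n * sxy - sx * sy) / det     # slope
    return beta0, beta1

b0, b1 = normal_equation_fit([1, 2, 3, 4], [2, 3, 4, 5])
print(b0, b1)  # → 1.0 1.0 (these points lie exactly on y = x + 1)
```

The same closed-form inverse is what a linear-algebra routine would compute numerically for larger $X$.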
Consider a simple linear regression problem with one independent variable $x$ and one dependent variable $y$. The goal is to find the best-fitting line through the data points, which minimizes the sum of the squared differences between the actual data points and the predicted values.
To find the best-fitting line, we can use the closed-form least-squares estimates that follow from the normal equation:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^n (x_i - \bar{x})^2}, \qquad \hat{\beta}_0 = \bar{y} - \hat{\beta}_1 \bar{x}$$

where $n$ is the number of data points, and $\bar{x}$ and $\bar{y}$ are the mean values of the independent and dependent variables, respectively.
## Exercise
1. Find the best-fitting line for the following data points:
| $x$ | $y$ |
|-----|-----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
Use the normal equation to find the coefficients of the best-fitting line.
### Solution
The best-fitting line has the equation $y = x + 1$ (the four points lie exactly on this line).
# Gradient descent and its algorithmic implementation
Gradient descent is an optimization algorithm used to find the minimum value of a function iteratively. It is widely used in machine learning to train models by minimizing the cost function.
The algorithm works by updating the coefficients of the model in the opposite direction of the gradient of the cost function. This is done by taking small steps in the direction of the steepest decrease of the function, which is determined by the negative of the gradient.
The algorithm can be implemented as follows:
1. Initialize the coefficients of the model.
2. Compute the gradient of the cost function with respect to the coefficients.
3. Update the coefficients by subtracting the product of the learning rate and the gradient.
4. Repeat steps 2-3 until the change in the coefficients is below a certain threshold, indicating convergence.
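The four steps can be sketched in pure Python for simple linear regression. The learning rate, iteration cap, and tolerance below are illustrative choices, not prescribed values:

```python
# Gradient descent for simple linear regression, following the four steps above.

def gradient_descent_fit(x, y, lr=0.01, max_iters=20000, tol=1e-10):
    b0, b1 = 0.0, 0.0                    # step 1: initialize the coefficients
    n = len(x)
    for _ in range(max_iters):
        # step 2: gradient of the mean squared error w.r.t. b0 and b1
        residuals = [b0 + b1 * xi - yi for xi, yi in zip(x, y)]
        g0 = 2.0 / n * sum(residuals)
        g1 = 2.0 / n * sum(r * xi for r, xi in zip(residuals, x))
        # step 3: move against the gradient
        b0, b1 = b0 - lr * g0, b1 - lr * g1
        # step 4: stop once the update is negligible
        if max(abs(lr * g0), abs(lr * g1)) < tol:
            break
    return b0, b1

b0, b1 = gradient_descent_fit([1, 2, 3, 4], [2, 3, 4, 5])
print(round(b0, 3), round(b1, 3))  # → 1.0 1.0
```

With fewer iterations the coefficients are still drifting toward the same line; the generous iteration cap here is only to make the converged result tight.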
Consider a simple linear regression problem with one independent variable $x$ and one dependent variable $y$. The goal is to find the coefficients of the best-fitting line using gradient descent.
To find the coefficients, we can use the following algorithm:
1. Initialize $\beta_0$ and $\beta_1$ with arbitrary values.
2. Compute the gradient of the cost function with respect to the coefficients.
3. Update the coefficients by subtracting the product of the learning rate and the gradient.
4. Repeat steps 2-3 until the change in the coefficients is below a certain threshold, indicating convergence.
## Exercise
1. Use gradient descent to find the coefficients of the best-fitting line for the following data points:
| $x$ | $y$ |
|-----|-----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
Use a learning rate of 0.01 and a maximum number of iterations of 1000.
### Solution
After convergence, the coefficients approach those of the best-fitting line $y = x + 1$.
# Decision trees: concepts and algorithms
A decision tree is a flowchart-like structure used for classification and regression tasks in machine learning. It works by recursively splitting the data into subsets based on the values of input features, and then making a prediction based on the majority class or average value in each subset.
There are several algorithms for building decision trees, including:
- ID3: Iterative Dichotomiser 3, which chooses splits by information gain.
- C4.5: Quinlan's successor to ID3, which adds support for continuous attributes, missing values, and pruning.
- CART: Classification and Regression Trees, which builds binary trees using the Gini impurity (classification) or variance reduction (regression).
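ID3 chooses the split with the highest information gain, i.e. the largest reduction in the entropy of the labels. A minimal sketch of those two quantities (function names are illustrative):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting the rows on one feature index."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[feature], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

# The XOR-like example dataset used in this section:
rows = [(1, 1), (1, 2), (2, 1), (2, 2)]
labels = [0, 1, 1, 0]
# entropy(labels) is 1.0 bit; the information gain of either single feature
# is 0, which is why a depth-1 split cannot separate these classes.
```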
Consider a dataset with two features, $x_1$ and $x_2$, and a binary target variable $y$. The decision tree algorithm can be used to build a tree that classifies new data points based on the values of $x_1$ and $x_2$.
For example, if the data points are as follows:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
The decision tree algorithm can build a tree that classifies new data points based on the values of $x_1$ and $x_2$.
## Exercise
Instructions:
1. Build a decision tree for the following dataset:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
Use the ID3 algorithm and a maximum depth of 2.
### Solution
The labels follow an XOR-like pattern, so no single split separates the classes; the depth-2 tree must test both features:

```
1. Is $x_1$ = 1?
   - If yes: Is $x_2$ = 1?
     - If yes, then $y$ = 0.
     - If no, then $y$ = 1.
   - If no: Is $x_2$ = 1?
     - If yes, then $y$ = 1.
     - If no, then $y$ = 0.
```
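One way to verify a depth-2 tree for this XOR-like data is to encode it as nested conditionals and check it against the table (a sketch):

```python
def predict(x1, x2):
    """Depth-2 decision tree: XOR-like data requires testing both features."""
    if x1 == 1:
        return 0 if x2 == 1 else 1
    return 1 if x2 == 1 else 0

data = [((1, 1), 0), ((1, 2), 1), ((2, 1), 1), ((2, 2), 0)]
correct = all(predict(x1, x2) == y for (x1, x2), y in data)  # True
```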
# K-nearest neighbors: principles and implementation
K-nearest neighbors (KNN) is a non-parametric classification and regression algorithm. It works by finding the K training data points that are closest to a new data point, and then making a prediction based on the majority class or average value of the K nearest neighbors.
The KNN algorithm can be implemented as follows:
1. Choose the value of K.
2. For each new data point, compute the distance to all training data points.
3. Find the K training data points with the smallest distances.
4. Make a prediction based on the majority class or average value of the K nearest neighbors.
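Steps 1-4 translate directly into a short pure-Python implementation (a sketch; Euclidean distance and a deterministic tie-break toward the smaller label are assumptions):

```python
from collections import Counter
from math import dist  # Euclidean distance, Python 3.8+

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest neighbors.

    train: list of ((feature, ...), label) pairs.
    """
    # steps 2-3: sort training points by distance and keep the k closest
    neighbors = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    # step 4: majority vote; ties go to the smaller label for determinism
    return max(sorted(votes), key=votes.get)

train = [((1, 1), 0), ((1, 2), 1), ((2, 1), 1), ((2, 2), 0)]
pred = knn_predict(train, (1, 1), k=1)  # → 0: the point itself is nearest
```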
Consider a dataset with two features, $x_1$ and $x_2$, and a binary target variable $y$. The KNN algorithm can be used to classify new data points based on the values of $x_1$ and $x_2$.
For example, if the data points are as follows:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
The KNN algorithm can be used to classify new data points based on the values of $x_1$ and $x_2$.
## Exercise
Instructions:
1. Use the KNN algorithm to classify the following data points:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
Use K = 2.
### Solution
Evaluated on the training points themselves (each point is its own nearest neighbor, at distance 0), the KNN algorithm returns the stored labels:
- For (1, 1), the prediction is 0.
- For (1, 2), the prediction is 1.
- For (2, 1), the prediction is 1.
- For (2, 2), the prediction is 0.
Note that under leave-one-out evaluation the two nearest *other* neighbors of every point carry the opposite label, so K = 2 would misclassify all four points; this XOR-like pattern shows why small values of K can fail on such data.
# Optimization techniques in machine learning
Optimization techniques are used in machine learning to find the coefficient values that minimize a model's cost function during training. Some common optimization techniques include:
- Gradient descent: A first-order optimization algorithm used to find the minimum value of a function iteratively.
- Stochastic gradient descent: A variation of gradient descent that updates the coefficients using a random subset of the training data.
- Batch gradient descent: A variation of gradient descent that updates the coefficients using the entire training dataset.
- Momentum: A technique that helps gradient descent converge faster by adding a fraction of the previous update to the current update.
- Adaptive learning rate methods: Methods that adjust the learning rate during training, such as adaptive gradient descent (Adagrad) and RMSprop.
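The momentum update described above can be sketched as follows, reusing the quadratic cost $f(\beta) = (\beta - 3)^2$ (the cost, learning rate, and $\gamma = 0.9$ are illustrative choices, not from the text):

```python
def momentum_descent(grad, beta0, lr=0.01, gamma=0.9, iters=500):
    """Gradient descent with momentum: v accumulates a fraction of past updates."""
    beta, v = beta0, 0.0
    for _ in range(iters):
        v = gamma * v + lr * grad(beta)  # previous update contributes gamma * v
        beta -= v
    return beta

# Minimize f(beta) = (beta - 3)^2, whose gradient is 2 * (beta - 3).
beta = momentum_descent(lambda b: 2 * (b - 3), beta0=0.0)  # beta ≈ 3.0
```

On ill-conditioned costs, the accumulated velocity lets the iterates keep moving along flat directions where plain gradient descent crawls.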
Consider a simple linear regression problem with one independent variable $x$ and one dependent variable $y$. The goal is to find the coefficients of the best-fitting line using optimization techniques.
To find the coefficients, we can use the following algorithm:
1. Initialize $\beta_0$ and $\beta_1$ with arbitrary values.
2. Compute the gradient of the cost function with respect to the coefficients.
3. Update the coefficients by subtracting the product of the learning rate and the gradient.
4. Repeat steps 2-3 until the change in the coefficients is below a certain threshold, indicating convergence.
## Exercise
Instructions:
1. Use optimization techniques to find the coefficients of the best-fitting line for the following data points:
| $x$ | $y$ |
|-----|-----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
Use a learning rate of 0.01 and a maximum number of iterations of 1000.
### Solution
Gradient descent converges toward $\beta_0 = 1$ and $\beta_1 = 1$, so the best-fitting line is $y = x + 1$, which passes exactly through all four data points. (With a learning rate of 0.01, 1000 iterations gets close to this limit; running longer reaches it.)
# Support vector machines: theory and practice
Support vector machines (SVM) are a class of supervised learning models used for classification and regression tasks. The goal of SVM is to find a hyperplane that separates the data points into different classes or predicts the value of a continuous variable.
SVM can be implemented using the following algorithm:
1. Choose a kernel function to transform the input data into a higher-dimensional space.
2. Find the hyperplane that maximizes the margin between the classes or predicts the value of the target variable.
3. Make a prediction based on the class or value of the hyperplane.
Consider a dataset with two features, $x_1$ and $x_2$, and a binary target variable $y$. The SVM algorithm can be used to classify new data points based on the values of $x_1$ and $x_2$.
For example, if the data points are as follows:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
The SVM algorithm can be used to classify new data points based on the values of $x_1$ and $x_2$.
## Exercise
Instructions:
1. Use the SVM algorithm to classify the following data points:
| $x_1$ | $x_2$ | $y$ |
|-------|-------|-----|
| 1 | 1 | 0 |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 2 | 2 | 0 |
Use a linear kernel function and a regularization parameter of 1.
### Solution
This dataset is XOR-like and therefore not linearly separable: any linear decision boundary misclassifies at least one of the four points, so an SVM with a linear kernel cannot reproduce all of the labels (at best 3 of 4 are classified correctly). With a nonlinear kernel, such as a polynomial or RBF kernel, the SVM classifies the points correctly:
- For the first data point, the SVM classifies it as 0.
- For the second data point, the SVM classifies it as 1.
- For the third data point, the SVM classifies it as 1.
- For the fourth data point, the SVM classifies it as 0.
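Linear separability of these labels can be checked by brute force, without an SVM solver: no linear rule $\text{sign}(w_1 x_1 + w_2 x_2 + b)$ labels all four points correctly, while a hand-crafted centered product feature (the kind of transformation a polynomial kernel supplies implicitly) separates them. A sketch (the weight grid and the product feature are illustrative):

```python
data = [((1, 1), 0), ((1, 2), 1), ((2, 1), 1), ((2, 2), 0)]

def linear_acc(w1, w2, b):
    """Accuracy of the rule: predict 1 when w1*x1 + w2*x2 + b > 0, else 0."""
    preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in data]
    return sum(p == y for p, (_, y) in zip(preds, data)) / len(data)

# Search a grid of linear decision rules: none reaches 100% accuracy.
steps = [i / 4 for i in range(-8, 9)]  # weights and bias from -2.0 to 2.0
best = max(linear_acc(w1, w2, b)
           for w1 in steps for w2 in steps for b in steps)

# Centered product feature: its sign alone separates the two classes.
product_ok = all((1 if (x1 - 1.5) * (x2 - 1.5) < 0 else 0) == y
                 for (x1, x2), y in data)
```

The grid search tops out at 3 of 4 points correct, matching the theoretical limit for any linear classifier on XOR-patterned data, while the single product feature classifies everything.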
# Applications of machine learning in calculus: interpolation and approximation
Machine learning can be applied to interpolation and approximation problems in calculus. For example, linear regression can be used to find the best-fitting line for a set of data points, or support vector machines can be used to approximate a continuous function.
In interpolation problems, the goal is to find a function that passes through a given set of data points. Machine learning techniques can be used to find the coefficients of the function that minimize the error between the function and the data points.
Consider a dataset with one independent variable $x$ and one dependent variable $y$. The goal is to find a function that approximates the data points using machine learning techniques.
For example, if the data points are as follows:
| $x$ | $y$ |
|-----|-----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
The linear regression algorithm can be used to find the coefficients of the best-fitting line for the data points.
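For simple linear regression the coefficients can also be computed in closed form via the normal equations, $\beta_1 = \sum(x_i - \bar{x})(y_i - \bar{y}) / \sum(x_i - \bar{x})^2$ and $\beta_0 = \bar{y} - \beta_1 \bar{x}$, which gives an exact check on any iterative method:

```python
xs = [1, 2, 3, 4]
ys = [2, 3, 4, 5]

x_bar = sum(xs) / len(xs)
y_bar = sum(ys) / len(ys)
# Closed-form simple linear regression (normal equations).
b1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
     / sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - b1 * x_bar
# b1 = 1.0, b0 = 1.0: the line y = x + 1 passes through every data point.
```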
## Exercise
Instructions:
1. Use machine learning techniques to find the coefficients of the best-fitting line for the following data points:
| $x$ | $y$ |
|-----|-----|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 4 | 5 |
Use a learning rate of 0.01 and a maximum number of iterations of 1000.
### Solution
Gradient descent converges toward $\beta_0 = 1$ and $\beta_1 = 1$, so the best-fitting line is $y = x + 1$, which passes exactly through all four data points. (With a learning rate of 0.01, 1000 iterations gets close to this limit; running longer reaches it.)
# Machine learning in differential equations
Machine learning can be applied to solve differential equations by learning a function that approximately satisfies the differential equation. For example, a neural network can be trained so that its output approximates the solution of a given differential equation.
In this approach, the neural network is trained to predict the values of a function that satisfies a given differential equation, such as the heat equation or the wave equation. The neural network is trained using a dataset of input-output pairs, where the input is a point in the domain and the output is the corresponding value of the function.
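Whether a candidate function satisfies the heat equation can be checked numerically with finite differences. The sketch below evaluates the residual $\partial u/\partial t - \alpha\, \partial^2 u/\partial x^2$ for the exact separable solution $u(x, t) = \sin(\pi x)\, e^{-\pi^2 \alpha t}$, which satisfies both the equation and the boundary conditions ($\alpha = 0.1$ is an illustrative value); the same residual could be applied to a trained network's predictions:

```python
from math import sin, pi, exp

ALPHA = 0.1  # illustrative thermal diffusivity

def u(x, t):
    """Exact separable heat-equation solution with u(0, t) = u(1, t) = 0."""
    return sin(pi * x) * exp(-pi**2 * ALPHA * t)

def heat_residual(f, x, t, h=1e-3):
    """Finite-difference estimate of u_t - alpha * u_xx at (x, t)."""
    u_t = (f(x, t + h) - f(x, t - h)) / (2 * h)               # central difference
    u_xx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2   # second difference
    return u_t - ALPHA * u_xx

r = heat_residual(u, x=0.3, t=0.5)  # ≈ 0: u satisfies the PDE
```

Physics-informed training uses exactly this idea, penalizing the PDE residual of the network's output at sampled points in the domain.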
Consider the heat equation $\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$ with boundary conditions $u(0, t) = 0$ and $u(1, t) = 0$. The goal is to find the function $u(x, t)$ that satisfies the heat equation using machine learning techniques.
For example, if the dataset contains the following input-output pairs:
| $x$ | $t$ | $u(x, t)$ |
|-----|-----|-----------|
| 0.1 | 0.5 | 0.01 |
| 0.2 | 0.5 | 0.04 |
| 0.3 | 0.5 | 0.09 |
| 0.4 | 0.5 | 0.16 |
The neural network can be trained to predict the values of the function that satisfies the heat equation.
## Exercise
Instructions:
1. Use machine learning techniques to find the function $u(x, t)$ that satisfies the heat equation for the given dataset:
| $x$ | $t$ | $u(x, t)$ |
|-----|-----|-----------|
| 0.1 | 0.5 | 0.01 |
| 0.2 | 0.5 | 0.04 |
| 0.3 | 0.5 | 0.09 |
| 0.4 | 0.5 | 0.16 |
Use a learning rate of 0.01 and a maximum number of iterations of 1000.
### Solution
The neural network learns an approximation to the function $u(x, t)$; its quality can be assessed by substituting the network's predictions back into the heat equation and checking that the residual is small.
# Machine learning in integration and differentiation problems
Machine learning can be applied to integration and differentiation problems by learning the function that approximates the integral or derivative of a given function. For example, support vector machines can be used to approximate the integral or derivative of a function using a dataset of input-output pairs.
In this approach, the support vector machine is trained to predict the values of the integral or derivative of a function, such as the sine function or the exponential function. The support vector machine is trained using a dataset of input-output pairs, where the input is a point in the domain and the output is the corresponding value of the integral or derivative.
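Training pairs like these can themselves be generated numerically; the sketch below accumulates $F(x) = \int_0^x \sin(t)\,dt$ with the trapezoidal rule, which can be checked against the closed form $1 - \cos(x)$ (the function and step size are illustrative):

```python
from math import sin, cos

def integral_samples(f, xs, step=1e-3):
    """Approximate F(x) = integral of f from 0 to x at each x (trapezoidal rule)."""
    samples = []
    for x in xs:
        n = max(1, round(x / step))
        h = x / n
        total = 0.5 * (f(0.0) + f(x)) + sum(f(i * h) for i in range(1, n))
        samples.append((x, h * total))
    return samples

# Input-output pairs for the sine integral; the closed form is 1 - cos(x).
pairs = integral_samples(sin, [0.1, 0.2, 0.3, 0.4])
```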
Consider the sine function $f(x) = \sin(x)$. The goal is to find the function $F(x)$ that approximates the integral of $f(x)$ using machine learning techniques.
For example, if the dataset contains the following input-output pairs (values of the exact integral $F(x) = 1 - \cos(x)$, rounded to three decimals):

| $x$ | $F(x)$ |
|-----|--------|
| 0.1 | 0.005 |
| 0.2 | 0.020 |
| 0.3 | 0.045 |
| 0.4 | 0.079 |
The support vector machine can be trained to predict the values of the function that approximates the integral of the sine function.
## Exercise
Instructions:
1. Use machine learning techniques to find the function $F(x)$ that approximates the integral of $f(x)$ for the given dataset:
| $x$ | $F(x)$ |
|-----|--------|
| 0.1 | 0.005 |
| 0.2 | 0.020 |
| 0.3 | 0.045 |
| 0.4 | 0.079 |
Use a learning rate of 0.01 and a maximum number of iterations of 1000.
### Solution
The support vector machine learns a function $F(x)$ that approximates the integral of the sine function; its predictions can be checked against the closed form $F(x) = 1 - \cos(x)$.
Journal of NeuroEngineering and Rehabilitation
Regenerative peripheral nerve interfaces for real-time, proportional control of a Neuroprosthetic hand
Christopher M. Frost1,
Daniel C. Ursu1,2 (ORCID: orcid.org/0000-0002-6579-103X),
Shane M. Flattery3,
Andrej Nedic1,
Cheryl A. Hassett1,
Jana D. Moon1,
Patrick J. Buchanan1,
R. Brent Gillespie2,
Theodore A. Kung1,
Stephen W. P. Kemp1,4,
Paul S. Cederna1,4 &
Melanie G. Urbanchek1
Journal of NeuroEngineering and Rehabilitation volume 15, Article number: 108 (2018)
Regenerative peripheral nerve interfaces (RPNIs) are biological constructs which amplify neural signals and have shown long-term stability in rat models. Real-time control of a neuroprosthesis in rat models has not yet been demonstrated. The purpose of this study was to: a) design and validate a system for translating electromyography (EMG) signals from an RPNI in a rat model into real-time control of a neuroprosthetic hand, and; b) use the system to demonstrate RPNI proportional neuroprosthesis control.
Animals were randomly assigned to three experimental groups: (1) Control; (2) Denervated, and; (3) RPNI. In the RPNI group, the extensor digitorum longus (EDL) muscle was dissected free, denervated, transferred to the lateral thigh and neurotized with the residual end of the transected common peroneal nerve. Rats received tactile stimuli to the hind-limb via monofilaments, and electrodes were used to record EMG. Signals were filtered, rectified and integrated using a moving sample window. Processed EMG signals (iEMG) from RPNIs were validated against Control and Denervated group outputs.
Voluntary reflexive rat movements produced signaling that activated the prosthesis in both the Control and RPNI groups, but produced no activation in the Denervated group. Signal-to-Noise ratio between hind-limb movement and resting iEMG was 3.55 for Controls and 3.81 for RPNIs. Both Control and RPNI groups exhibited a logarithmic iEMG increase with increased monofilament pressure, allowing graded prosthetic hand speed control (R2 = 0.758 and R2 = 0.802, respectively).
EMG signals were successfully acquired from RPNIs and translated into real-time neuroprosthetic control. Signal contamination from muscles adjacent to the RPNI was minimal. RPNI constructs provided reliable proportional prosthetic hand control.
Approximately 185,000 individuals suffer limb loss annually in the United States [1]. The growing rate of amputees and technological advancements have greatly improved human-neuroprosthetic interfacing [2]. A comprehensive literature review on the needs and priorities of prostheses users performed by Cordella et al. in 2016 revealed that an estimated 75% of upper prosthetic users wore functional prostheses for at least 8 h per day, compared with only 45% of cosmetic prosthesis owners [3]. A functional prosthesis was more likely to be worn the higher the level of amputation, and especially during dynamic activities of daily living, such as work, driving and sports [3]. Importantly, upper arm amputees who tested both conventional (body powered or myoelectric arms) and the DEKA Gen 3 advanced myoelectric prosthesis found conventional prostheses performed faster, and with smoother motions and less movement deviation than the advanced DEKA prosthetic device [4]. This finding is largely attributed to a lack of an intuitive, functional neural interface that can provide high fidelity control signals to actualize the functionality of advanced neuroprosthetic devices.
Advanced anthropomorphic modular prosthetic arm systems have only become commercially available in the last 5 years, in large part due to technology developed with DARPA's funding of the Revolutionizing Prosthetics Program in 2006 [5]. Currently, multi-electrode-based prosthetic devices, such as the DEKA arm (DEKA, Manchester, NH), i-Limb (Touch Bionics, Mansfield, MA), the Johns Hopkins Modular Prosthetic Limb (MPL, Johns Hopkins University Applied Physics Lab, Baltimore, MD), and Ottobock (Otto Bock HealthCare, Duderstadt, Germany), provide increased ranges of motion, dexterity and control options, and are capable of up to five-finger movements and 20 degrees of freedom [6, 7]. However, a limitation in controlling these advanced robotic prostheses is the need for an appropriate neural interface that can extract clear multifunctional signal information at a speed that matches naturalistic human motion [8, 9].
Neural interfaces, i.e. the use of electrodes to record physiological signals for voluntary prosthetic control, come in different forms and all have unique advantages and challenges. All prostheses require either nerve or muscle electrodes as part of the neural interface [6], and consequently, interfacing electrodes vary in size (standard pad to microelectrodes), shape (multipolar cuff, fine wire, sieve), number of electrode sites (bipolar or multi-array), and location (transverse intrafascicular multichannel nerve, longitudinal intrafascicular nerve, epimysial, intramysial and intracortical microelectrode arrays placed in the cortex) [9,10,11,12]. Cuff electrodes circumferentially envelope peripheral nerves and nerve fascicles, and have shown promising results in signal transduction; however, long term signal fidelity may be compromised due to epineurial inflammation and scarring [13,14,15]. Both intrafascicular electrodes and sieve electrodes allow for nerve and signal specificity, but are hampered by long-term signal loss due to biofouling [16,17,18]. Epimysial and intramysial electrodes can be larger in size, are physically more robust, are less compromised by fibrosis, and transduce myoelectric signals with less impedance [19].
The most successful form of neural interfacing to date is Targeted Muscle Reinnervation (TMR) [20]. TMR is an FDA-approved procedure to surgically construct additional EMG control sites using residual nerves [21]. Remaining nerves from the amputated limb are transferred to expendable regions of residual muscle in or near the residual limb; commonly, the ipsilateral pectoral muscle is denervated and used for this purpose. The nerves reinnervate the "target" and produce additional EMG signal sites for prosthetic control. Ideally, TMR is performed during the initial amputation procedure, which has been proven to reduce neuroma formation [22,23,24]. TMR uses external skin surface electrodes to transduce EMG signals, thus avoiding the build-up of connective tissue on electrodes due to a foreign body reaction. Yet a disadvantage of surface EMG electrode systems is their lack of robustness to variance caused by donning, fatigue, perspiration, and other conditions that cause positional and physiological changes in the electrical characteristics of the signal sites [21]. Moreover, the reinnervation of the whole pectoral muscle with up to three nerves, each of which is responsible for specific and distinct functions in the arm, requires the implementation of complex pattern classification and feature extraction algorithms, such that the overlapping neural signals acquired from the EMG electrode array can be decoded and assigned to their intended control targets [25].
Despite the advancements that have benefitted human-prosthetic interfacing, a need remains for a neural interface that can provide real-time, long-term, contamination free, signal fidelity for optimal prosthetic activation and control. In this study, we use the Regenerative Peripheral Nerve Interface (RPNI) as a strategy for neural interfacing. RPNIs are neuromuscular biological interfaces surgically constructed from free muscle grafts (3 × 1 cm.) obtained from expendable skeletal muscle in the residual limb or from a distant site. The residual peripheral nerves are dissected into single nerve fascicles, or groups of fascicles, to create functional units. The muscle grafts are then neurotized by the terminal branches of the residual nerves. Revascularization, regeneration, and eventually reinnervation allows the RPNI to mature in 3 to 4 months [26, 27]. This technique reduces the amount of neural manipulation and risk of iatrogenic nerve damage. Previous studies in our laboratory have shown that RPNIs transduce evoked muscle potentials for up to 18 months, prevent neuroma formation, and amplify motor nerve signaling [28, 29]. Thus, RPNI technology takes advantage of the signal from individual muscles that can be recorded via intramuscular EMG signals generated from the RPNI, obviating the need for signal decoding of multi-nerve motor features via classification algorithms [21].
There have been few investigations into the fine motor control of neuroprosthetic devices using the RPNI technique. As such, the purposes of this study were to: a) build and validate an algorithm for translating EMG signals from RPNIs for real-time control of a myoelectrically actuated neuroprosthetic hand; and b) use this algorithm to demonstrate the ability of RPNIs to provide proportional neuroprosthesis control. It was hypothesized that both Control and RPNI groups would demonstrate reliable and proportional control of the myoelectric hand, while the Denervated group would not activate the neuroprosthesis.
Animal model
All procedures were approved by the University of Michigan, Institutional Animal Care and Use Committee, and were in strict accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals (1996) [30]. Retired F344 male breeder rats (Charles River, Wilmington, MA) weighing 300 to 420 g were anesthetized with weight-based Pentobarbital and administered Buprenorphine-HCl as analgesia.
Regenerative peripheral nerve Interface surgery
The study design consisted of three separate groups, Control (n = 2), Denervated (n = 1), and RPNI (n = 3). In each group, all rats underwent a proximal and distal tenotomy of the extensor digitorum longus (EDL) muscle. In the Control group, no additional interventions were performed. In the Denervated and RPNI groups, the common peroneal nerve was divided and the free EDL muscle graft was transferred to the lateral thigh. In the RPNI group, the proximal end of the divided peroneal nerve was implanted into the EDL skeletal muscle graft to create an RPNI. In the Denervated group, the proximal end of the peroneal nerve was reflected proximally to prevent EDL skeletal muscle graft reinnervation (Fig. 1).
Left: Control group with primary repair of the extensor digitorum longus muscle (EDL) tenotomies without denervation of the muscle. Center: Denervated group with free EDL muscle graft performed to the lateral thigh. Neurotization and reinnervation was not performed, leaving the EDL muscle graft without innervation. Electrode placement was identical to the Control group. Right: Regenerative Peripheral Nerve Interface (RPNI) group with free EDL muscle graft performed to the lateral thigh. Neurotization and reinnervation were implemented using the peroneal nerve. Each rat received bipolar epimysial electrodes (white), whose wires (blue) were tunneled subcutaneously to the upper dorsum. a. bipolar electrode cables. b. tibialis anterior muscle; c. soleus and gastrocnemius muscles; d. distal end of common peroneal nerve; e. EDL muscle; f. proximal common peroneal nerve; g. tibial nerve
Two stainless steel electrodes made of Cooner wire (Cooner Wire Co., Chatsworth, CA) were sutured onto the EDL epimysium, with electrodes separated longitudinally by 1.5 cm. The EDL muscle was then covered by a single-layer of acellular porcine intestinal submucosa scaffold (SIS) (Surgisis, Cook Biotech, West Lafayette, IN). The leading ends and connecting cables of the electrodes were tunneled, coiled, and buried subcutaneously within the dorsum of each rat between the scapulae.
Five months following implantation, the free ends of the implanted electrode cables were exposed through a dorsal incision. EMG signals were then recorded, amplified to 1000x and band-pass filtered (1–500 Hz) on a custom-built analog bipolar instrumentation amplifier. Signal amplitudes were calibrated using a function generator (B&K Precision, Model 4075, B&K Precision Corporation, Yorba Linda, CA) and oscilloscope (Agilent InfiniiVision Model MSO-X 2012-A, Agilent Technologies, Santa Clara, CA). The amplified and filtered signals were acquired at a 3 kHz sampling rate using a data acquisition card (NI BNC 2120, National Instruments, Austin, TX) using LabVIEW software (National Instruments, Austin, TX). During post processing, the signals were digitally rectified and zero-phase low-pass filtered to 50 Hz.
A von Frey monofilament testing protocol was initiated to evoke reflex anterior compartment dorsi-flexion of the hind paw, and activation of the EDL or RPNI muscle [31]. During testing, each rat was placed in a 4 × 5 × 8 in.3 Plexiglas® box with a wire mesh bottom. Monofilament fibers were applied to the left experimental ankle to induce a voluntary muscle reflex leg movement. Monofilament pressure was initiated at 4 g of force, and monofilament fibers of up to 100 g were randomly administered to the ankle. Four cycles lasting five minutes were performed at each monofilament force level. All rats were free to ambulate while connected to the myoelectric prosthesis to correct for the possibility of EMG signaling from other muscles. To avoid habituation, 1–2 min of rest was allowed between each testing cycle. Rats in each group were evaluated for 3 days with 2 days of rest between each evaluation period. The monofilament testing lasted no longer than 2 h per day. Post-evaluation, all rats were sacrificed and their hind limb dissected in order to assess the amount of scar tissue and vascularity in the repaired EDL (Control group) and free grafted muscles in the lateral thigh (RPNI and Denervated groups). Prosthetic activation and hind limb movement were video recorded at 120 frames per second using a high-speed, high-definition camera (GoPro Hero2, San Mateo, CA). Rectified EMG and prosthetic activation were synchronized and recorded using the LabVIEW software (Fig. 2).
EMG signals integrated over 300 msec (iEMG) – based prosthesis activation during one testing session. Plots of filtered EMG tracings (Blue) and periods of prosthesis activation (Green) during 40 s of testing in Control (Top), Denervated (Middle) and RPNI (Bottom) groups. Baseline iEMG is calculated as a running average. An algorithm activates the prosthesis after detecting an iEMG window more than 1 standard deviation above the mean iEMG
A computer algorithm was written using LabVIEW to allow interpretation of the EMG activity and prosthetic control. The rectified and filtered EMG signals were divided into 300 millisecond intervals. Each 300 millisecond interval was then integrated with respect to time and a mean value iEMG was calculated in units of mV × sec. A running threshold was calculated by averaging all previous intervals, giving 50% weight to the immediately prior interval. Activation of the prosthesis occurred when the real-time iEMG was greater than the running threshold by at least one standard deviation (Fig. 3).
Schematic showing acquisition, transduction and analysis of real-time recorded EMG signaling from an RPNI rat. a. Bipolar collection of raw EMG signals. Ground electrode is referenced in ear. b. Raw EMG signals undergo signal processing in the form of filtering and rectification. c. & d. 300 msec consecutive EMG signal acquisition intervals obtained during c. no observed leg motion (baseline signal activity below threshold), and d. Leg motion and subsequent prosthetic hand activation due to signal surpassing threshold of activation. Blue lines: EMG signal; Red lines: iEMG value; Green lines: Activation threshold
Graded control of the prosthesis was achieved by modulation of the output voltage to the "DMC + Hand" (Otto Bock Healthcare, Vienna, Austria) using an Arduino Uno R3 prototyping board (Arduino LLC. Cambridge, MA) equipped with a motor-driving amplifier (SparkFun Electronics, Niwot, CO). Output voltage to the hand was increased with larger iEMG values by calculating the number of standard deviations above the running threshold for each iEMG interval (Eq. 1).
$$ V_{Output}=V_{Max}-\frac{V_{Max}}{1+\left({SD}_{Above\ Threshold}\right)} $$
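As an illustrative sketch only (not the authors' implementation; the parameter names, the 50/50 threshold weighting, the synthetic test signal, and the value of $V_{Max}$ are assumptions based on the description above), the 300 ms integration, running threshold, and Eq. 1 voltage mapping might be expressed as:

```python
FS = 3000                # sampling rate (Hz) used in the recordings
WINDOW = int(0.3 * FS)   # 300 ms integration window
V_MAX = 5.0              # illustrative maximum drive voltage (assumption)

def iemg_windows(emg):
    """Rectify the signal and integrate consecutive 300 ms windows (mV*s)."""
    rect = [abs(v) for v in emg]
    n_win = len(rect) // WINDOW
    return [sum(rect[i * WINDOW:(i + 1) * WINDOW]) / FS for i in range(n_win)]

def controller_outputs(iemg):
    """Threshold each window against a running average (50% weight on the
    immediately prior window) and map supra-threshold activity to an output
    voltage via Eq. 1."""
    outputs = []
    for k in range(1, len(iemg)):
        prev = iemg[:k]
        mean = sum(prev) / k
        threshold = 0.5 * prev[-1] + 0.5 * mean
        sd = (sum((v - mean) ** 2 for v in prev) / k) ** 0.5
        if sd > 0 and iemg[k] > threshold + sd:
            n_sd = (iemg[k] - threshold) / sd
            outputs.append(V_MAX - V_MAX / (1 + n_sd))  # Eq. 1
        else:
            outputs.append(0.0)                          # below threshold
    return outputs

# Synthetic check signal: 10 low-activity 300 ms windows, then one strong burst.
emg = ([0.01] * 900 + [0.012] * 900) * 5 + [1.0] * 900
iemg = iemg_windows(emg)
outputs = controller_outputs(iemg)
```

Because Eq. 1 saturates toward $V_{Max}$ as the number of standard deviations above threshold grows, a strong burst drives the output close to, but never beyond, the maximum voltage.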
Video recordings of each testing period were analyzed to determine interface performance. Sensitivity and specificity were each calculated based on appropriate activation of the prosthesis during hind limb movement and non-activation during periods of rest, respectively. The number of recorded prosthetic movements during each 4-min testing period was compared to the total number of observed leg movements to determine sensitivity (Eq. 2). The number of errant activations of the prosthesis during rest intervals with no hind limb movement was calculated to determine specificity (Eq. 3). Within group Student's T-Test statistical computations were performed using SPSS Statistics 22, (SPSS, IBM Inc., 2013, Armonk, NY). Significance levels were set to α = 0.05.
$$ Sensitivity=\frac{Prosthetic\ activations\ after\ hindlimb\ movement}{Total\ number\ of\ hindlimb\ movements} $$
$$ Specificity=\frac{Total\ rest\ intervals-prosthetic\ activations\ during\ rest\ intervals}{Total\ rest\ intervals} $$
Accuracy of Neuroprosthesis activation
In total, 1040 Control group hind limb movements in 208 min and 876 RPNI group hind limb movements in 172 min were captured. (see Video, Additional file 1: Video S1, which demonstrates prosthesis activation in response to monofilament stimulation on the volar side of the hind paw) Significantly reduced hind paw movements were recorded during 51 min within the Denervated group, likely resulting from the lack of peroneal nerve innervation to the lateral compartment musculature of the lower hind limb (see Video, Additional file 2: Video S2, which demonstrates no prosthesis activation in response to monofilament stimulation on the volar side of the hind paw in a Denervated rat).
Additional file 1: Video S1. Which illustrates prosthetic hand activation using RPNI generated myoelectric signals, in response to monofilament application to the left hind paw of a rat fitted with an RPNI. (MP4 12600 kb)
Additional file 2: Video S2. Which illustrates a denervated rat experimental trial, in which monofilament application fails to produce actuation of the prosthetic hand. (MP4 4092 kb)
The iEMG activation signals were significantly higher in both the Control and RPNI groups when compared to the baseline signals obtained during the between-trial resting periods, indicating that the calculated threshold denoting prosthesis activation (Eq. 1) was successfully defined. The calculated sensitivity (ability to detect prosthetic activation after stimulation) and specificity (ability to prevent unwanted activation during rest) values for prosthesis activation are reported in Table 1. Signal to noise ratio means and standard deviations between iEMG resulting in initial hind limb movement, (i.e. iEMG acquired during the lowest monofilament stimulus resulting in paw retraction, and therefore prosthesis activation) and resting iEMG was 3.55 ± 0.38 and 3.81 ± 0.52 for the Control and RPNI groups, respectively.
Table 1 Summary Data of EMG Translation System
Proportional control of the Neuroprosthesis
Proportional control of a neuroprosthesis requires the ability to distinguish variations in EMG peak recordings from volitional behavior. Using this tenet, EMG amplitude was mapped 1:1 to the speed of prosthetic hand movement [32]. Increasing von Frey monofilament pressure led to an observable increase in rat hind limb movement intensity. Rats in the Control and RPNI groups had a positive logarithmic correlation between von Frey filament forces (intensity of stimulus), EMG amplitude, and therefore the instantaneous voltage used to actuate the prosthetic hand. This positive correlation enabled a pre-programmed, graded control of the prosthetic hand speed (R2 = 0.758, p < 0.05 and R2 = 0.802, p < 0.05, respectively) (Fig. 4).
A semi-logarithmic relationship between monofilament pressure applied and iEMG recorded during four testing blocks. Each block lasted 5 min for each increment of pressure increase in RPNI and Control groups (blue and orange, respectively). Monofilament pressure is graphed logarithmically to linearize each graph. Each represents the mean ± 1 SD for the average of 54 leg movements for control and 51 leg movements for RPNI per increment of pressure. Positive trends in both RPNI and Control groups imply RPNI transduced EMG signals of proportional intensity similar to that of an in situ Control
As expected, no significant correlations were found between resting "baseline" iEMG activity and the monofilament pressure subsequently used for either the Control or RPNI groups (R2 = 0.12 and R2 = 0.19, respectively). This is expected, as changes in "baseline" iEMG activity results from biologic and electronic variation, whereas increased iEMG activity during activation is due to increased muscle activation, contraction, and movement, not random variation (Fig. 5).
Mean ± 1 Standard Deviation of iEMG values obtained during baseline (blue) and activation trials regardless of monofilament pressure (orange) in Control, Denervated and RPNI rat cohorts. iEMG is calculated as the area under the curve measured during consecutive 300 msec intervals of EMG signal acquisition during testing. Activated iEMG is recorded during rat movement while baseline iEMG is obtained during rest. † Denervated group as expected did not show activity during rat movement; therefore, no activated iEMG was calculated. A * indicates significantly higher activation signals, when compared with relative baseline signals within Control and RPNI groups (p < 0.05)
Regenerative peripheral nerve interfaces (RPNI) provide a biologic connection to peripheral nerves to amplify efferent motor action potentials producing high-fidelity motor control signals and favorable signal to noise ratios. In this study, we have demonstrated reliable RPNI signal transduction in real-time EMG signals obtained during voluntary muscle activation. To date, this is the first study to demonstrate both real time and proportional control of a myoelectric prosthesis using an RPNI.
The amplitude based direct control algorithm strategy determined for this study was modelled using simple linear regression. While there are many means of quantifying muscle activity using myoelectric signals [33], integrated EMG was chosen as the proportional input to the controller, as it has been shown to be a reliable quantifier of muscle force [34]. In order to reset the integration to zero, a 300 millisecond acquisition window was employed; the window timespan was chosen to ensure that at maximum opening velocity (300 mm/s), the prosthetic hand does not exceed its opening width (100 mm) [35]. To ensure that the prosthetic gripper's activation does not occur as a result of background noise or previous myoelectric activity, a running threshold was computed using a weighted average of the myoelectric signals recorded during previous acquisition windows. Consequently, the prosthetic hand was actuated only if the integrated EMG signal obtained during the current sampling window was one standard deviation above threshold [25]; during actuation, the gripper's velocity was proportional to the amount of discrete standard deviations that iEMG lay above threshold.
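For illustration, the windowed-integration and running-threshold logic described above can be sketched as follows. This is a simplified teaching sketch, not the study's implementation; the window contents, baseline statistics, and the smoothing factor `alpha` are placeholder values.

```python
# Sketch of an amplitude-based direct control step using integrated EMG (iEMG).
# All constants are illustrative placeholders, not the study's actual parameters.

def iemg(window):
    """Integrate rectified EMG over one acquisition window."""
    return sum(abs(v) for v in window)

def control_step(window, history, alpha=0.9):
    """Return a gripper velocity command for one acquisition window.

    `history` carries a running weighted mean and variance of baseline iEMG.
    """
    x = iemg(window)
    mean, var = history["mean"], history["var"]
    std = var ** 0.5
    if std > 0 and x > mean + std:
        # Actuate only when iEMG exceeds the threshold by at least one
        # standard deviation; velocity scales with the number of whole
        # standard deviations above threshold.
        velocity = int((x - mean) / std)
    else:
        velocity = 0
        # Update the running baseline statistics only during rest, so prior
        # activation bursts do not inflate the threshold.
        history["mean"] = alpha * mean + (1 - alpha) * x
        history["var"] = alpha * var + (1 - alpha) * (x - history["mean"]) ** 2
    return velocity

resting = {"mean": 1.0, "var": 0.04}
v_rest = control_step([0.1] * 10, resting)    # baseline window: no movement
v_active = control_step([0.5] * 10, resting)  # strong burst: positive velocity
```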
An important criterion in patient satisfaction resides in the reaction time of the prosthetic device [24, 36,37,38]. The algorithm in this study integrated EMG signals over 300 millisecond intervals, building an acceptable 300 millisecond delay into the prosthesis activation time. Integrating EMG signals over this period reduced errant prosthetic activation due to random variation in baseline EMG. Future studies utilizing RPNI interfaces with alternative methods for prosthesis activation can reduce this built-in delay. In the present study, prosthetic hand speed was increased with increasing amplitude of iEMG signals. A strong, positive correlation existed between iEMG signal amplitude and the monofilament force applied to the rat's limb (i.e. the experimental stimulus). This is pivotal, as adjusting both speed and directional movement of a prosthetic device restores greater functionality to the amputee.
One of the primary challenges in neural interfaces is long-term durability in performance. The current study shows that RPNIs were safe and effective in the rat hind limb. As expected, post-experimental gross evaluation of the lateral thigh compartment revealed that the free muscle grafts were healthy in RPNI group rats, but were severely atrophic in the Denervated group. Furthermore, the consistently low EMG signals derived from the Denervated group signify that RPNI EMG activity is not affected by motion artifact or crosstalk from neighboring muscles.
The implantation procedures were well tolerated, consistent with previous RPNI implantation surgeries [27,28,29, 39]. Within the lifespan of the rat, we have observed minimal to no signal degradation over at least 7 months post implantation [29]. The electrodes implanted in the RPNI in this study were stainless steel, and interfaced directly with transferred skeletal muscle, thereby avoiding direct contact with the peripheral nerve and corresponding biofouling of the electrode, possible neural inflammation and injury. While muscle tissue tolerates the presence of epi- or intra-muscular electrodes fairly well, electrode materials and designs are currently being investigated to continue to minimize the inflammatory response [40, 41].
One purpose of this study was to demonstrate that signals transduced from a single RPNI are sufficient to control a one degree of freedom (DOF) myoelectric hand. RPNI technology with multiple implanted RPNIs would also be applicable for prostheses capable of many DOFs. As devised, each RPNI is anatomically "hard-wired" from select motor control areas of the brain through peripheral nerves to individual RPNIs. Consequently, multiple RPNI EMG signals, or co-activation, could be decoded using linear regression control or parallel multi-site control as accomplished with signals available in TMR [42]. Control strategies such as amplitude based direct control, as well as sequential and simultaneous pattern recognition have been studied with able bodied and TMR patients [32]. Those who study efficient control may find that providing several strategies may allow a neuroprosthesis user to achieve fine actuation with direct control, and larger movements with simultaneous control [42].
There are inherent limitations to demonstrating the feasibility of myoelectric prosthesis control using a rat model. Limiting RPNI implantation to one RPNI per one rat hind limb allows for only a simple model which limits prosthetic functionality to single axis actuation. RPNIs are currently being implanted in humans, on multiple individual nerve branches, to provide numerous independent DOF. In this manner, each RPNI will contribute to several movements of a prosthesis when the transduced EMG signals are processed using pattern recognition.
Finally, in this study, we evaluated outcomes 135 days after RPNI surgery with electrode implantation. This time-point was selected based on previous studies showing RPNI revascularization, muscle fiber regeneration, and reinnervation occurring at 120 days [29]. Future longitudinal studies of RPNI control of a neuroprosthetic device are currently assessing the lifetime efficacy of RPNI signal transduction.
This study validated an algorithm for translating EMG signals from RPNIs for real-time control of a neuroprosthetic hand. Signal contamination from muscles adjacent to the RPNI was minimal. The EMG signals were successfully acquired from RPNIs and translated into real-time neuroprosthetic control via an algorithm that allowed for concrete demonstration that RPNIs provide reliable proportional control of the neuroprosthesis. RPNI myoelectric hand control was both sensitive and specific.
Ziegler-Graham K, et al. Estimating the prevalence of limb loss in the United States: 2005 to 2050. Arch Phys Med Rehabil. 2008;89(3):422–9.
Cloutier A, Yang J. Design, control, and sensory feedback of externally powered hand prostheses: a literature review. Crit Rev Biomed Eng. 2013;41(2):161–81.
Cordella F, et al. Literature review on needs of upper limb prosthesis users. Front Neurosci. 2016;10:209.
Cowley J, Resnik L, Wilken J, Smurr Walters L, Gates D. Movement quality of conventional prostheses and the DEKA Arm during everyday tasks. Prosthet Orthot Int. 2017;41:33–40.
Miranda RA, et al. DARPA-funded efforts in the development of novel brain-computer interface technologies. J Neurosci Methods. 2015;244:52–67.
Biddiss EA, Chau TT. Upper limb prosthesis use and abandonment: a survey of the last 25 years. Prosthetics Orthot Int. 2007;31(3):236–57.
Ryait HS, Arora AS, Agarwal R. Study of issues in the development of surface EMG controlled human hand. J Mater Sci Mater Med. 2009;20(Suppl 1):S107–14.
Engdahl SM, et al. Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques. J NeuroEng Rehabil. 2015;12:53.
Navarro X, et al. A critical review of interfaces with the peripheral nervous system for the control of neuroprostheses and hybrid bionic systems. J Peripher Nerv Syst. 2005;10(3):229–58.
Gilja V, et al. Clinical translation of a high-performance neural prosthesis. Nat Med. 2015;21(10):1142–5.
Badia J, et al. Spatial and functional selectivity of peripheral nerve signal recording with the transversal Intrafascicular multichannel electrode (TIME). IEEE Trans Neural Syst Rehabil Eng. 2016;24(1):20–7.
Castro F, Negredo P, Avendano C. Fiber composition of the rat sciatic nerve and its modification during regeneration through a sieve electrode. Brain Res. 2008;1190:65–77.
Larsen JO, et al. Degeneration and regeneration in rabbit peripheral nerve with long-term nerve cuff electrode implant: a stereological study of myelinated and unmyelinated axons. Acta Neuropathol. 1998;96(4):365–78.
Thil MA, et al. Time course of tissue remodelling and electrophysiology in the rat sciatic nerve after spiral cuff electrode implantation. J Neuroimmunol. 2007;185(1–2):103–14.
Tan DW, et al. A neural interface provides long-term stable natural touch perception. Sci Transl Med. 2014;6(257):257ra138.
Yoshida K, Stieglitz T, Shaoyu Q. Bioelectric interfaces for the peripheral nervous system. In: Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE; 2014.
Thota AK, et al. A system and method to interface with multiple groups of axons in several fascicles of peripheral nerves. J Neurosci Methods. 2015;244:78–84.
Jia X, et al. Residual motor signal in long-term human severed peripheral nerves and feasibility of neural signal-controlled artificial limb. J Hand Surg. 2007;32(5):657–66.
Hargrove L, et al. The effect of ECG interference on pattern-recognition-based myoelectric control for targeted muscle reinnervated patients. IEEE Trans Biomed Eng. 2009;56(9):2197–201.
Kuiken TA, et al. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA. 2009;301(6):619–28.
Ohnishi K, Weir RF, Kuiken TA. Neural machine interfaces for controlling multifunctional powered upper-limb prostheses. Expert Rev Med Devices. 2007;4(1):43–53.
Zhou P, et al. Decoding a new neural-machine interface for control of artificial limbs. J Neurophysiol. 2007;98:2974–82.
Cheesborough JE, et al. Targeted muscle reinnervation in the initial management of traumatic upper extremity amputation injury. Hand (New York). 2014;9(2):253–7.
Resnik L, Klinger SL, Etter K. The DEKA arm: its features, functionality, and evolution during the veterans affairs study to optimize the DEKA arm. Prosthetics Orthot Int. 2014;38(6):492–504.
Jiang N, Englehart KB, Parker PA. Extracting simultaneous and proportional neural control information for multiple-DOF prostheses from the surface electromyographic signal. IEEE Trans Biomed Eng. 2009;56(4):1070–80.
Kubiak CA, Kemp SWP, Cederna PS. The regenerative peripheral nerve interface for neuroma management. JAMA Surg. 2018;153(7):681–2.
Baldwin J, et al. Abstract 99: Early muscle revascularization and regeneration at the regenerative peripheral nerve interface. Plast Reconstr Surg. 2012;130(1S):73.
Urbanchek MG, et al. Long-term stability of regenerative peripheral nerve interfaces (RPNI). Plast Reconstr Surg. 2011;128(4S):88–9.
Kung TA, et al. Regenerative peripheral nerve interface viability and signal transduction with an implanted electrode. Plast Reconstr Surg. 2014;133(6):1380–94.
Guide for the Care and Use of Laboratory Animals. Nat'l Research Council (US) Committee for the Update of the Guide for the Care and Use of Laboratory Animals. 8th ed. Washington (DC): National Academies Press; 2011.
Nedic A, Moon JD, Kung TA, et al. Von Frey monofilament testing successfully discriminates between sensory function of mixed nerve and sensory nerve regenerative peripheral nerve interfaces. 6th International IEEE/EMBS Conference on Neural Engineering (NER); 2013. p. 255–8.
Wurth SM, Hargrove LJ. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts' law style assessment procedure. J NeuroEng Rehabil. 2014;11(1):1–13.
Geethanjali P. Myoelectric control of prosthetic hands: state-of-the-art review. Med Devices (Auckl). 2016;9:247–55.
Metral S, Cassar G. Relationship between force and integrated EMG activity during voluntary isometric anisotonic contraction. Eur J Appl Physiol Occup Physiol. 1981;46(2):185–98.
Ottobock USA, SensorHand Speed Myoelectric Prosthesis Technical Data. Pamphlet. 2018. https://www.ottobockus.com/prosthetics/upper-limb-prosthetics/solution-overview/myoelectric-devices-speedhands/.
Belter JT, et al. Mechanical design and performance specifications of anthropomorphic prosthetic hands: a review. J Rehabil Res Dev. 2013;50(5):599–618.
Yao J, et al. Sensory cortical re-mapping following upper-limb amputation and subsequent targeted reinnervation: a case report. Neuroimage Clin. 2015;8:329–36.
Alshammary NA, Dalley SA, Goldfarb M. Assessment of a multigrasp myoelectric control approach for use by transhumeral amputees. Conf Proc IEEE Eng Med Biol Soc. 2012;2012:968–71.
Hu Y, et al. Muscle graft volume for regenerative peripheral nerve interfaces as optimized by electrical signal capacity for Neuroprosthetic control. Plast Reconstr Surg. 2015;136(2):443.
Micera S, Navarro X. Bidirectional interfaces with the peripheral nervous system. Int Rev Neurobiol. 2009;86:23–38.
Vasudevan S, Patel K, Welle C. Rodent model for assessing the long term safety and performance of peripheral nerve recording electrodes. J Neural Eng. 2017;14(1):016008.
Smith LH, Kuiken TA, Hargrove LJ. Evaluation of linear regression simultaneous myoelectric control using intramuscular EMG. IEEE Trans Biomed Eng. 2016;63:737–46.
We thank Nicholas B. Langhals, PhD for technical assistance.
This work was sponsored by the Defense Advanced Research Projects Agency (DARPA) MTO through the Space and Naval Warfare Systems Center, Pacific Grant/Contract No. N66001–11-C-4190 and National Institutes of Health, National Institute of General Medical Sciences, T32 GM008616.
The authors declare that the data and supporting materials used in the publication of this manuscript are readily available for the reader upon request.
Christopher M. Frost and Daniel C. Ursu contributed equally to this work.
University of Michigan Department of Surgery, Section of Plastic Surgery, 570 MSRB II Level A, 1150 W. Medical Center Drive, Ann Arbor, MI, 48109-5456, USA
Christopher M. Frost, Daniel C. Ursu, Andrej Nedic, Cheryl A. Hassett, Jana D. Moon, Patrick J. Buchanan, Theodore A. Kung, Stephen W. P. Kemp, Paul S. Cederna & Melanie G. Urbanchek
University of Michigan Department of Mechanical Engineering, Ann Arbor, MI, USA
Daniel C. Ursu & R. Brent Gillespie
Vassar College, Poughkeepsie, NY, USA
Shane M. Flattery
Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
Stephen W. P. Kemp & Paul S. Cederna
CF and DU oversaw and ran the animal experiments, programmed the data acquisition software and analyzed the data. In addition, DU constructed the electrical apparatus used to control the movement of the prosthetic hand along with the control algorithm. SF and AN assisted in running the animal experiments and categorizing the data for analysis. CH, JM, and PB performed the animal surgeries. RBG, TK, and SK along with PC and MU provided valuable mentorship and extensive help with preparing and revising the manuscript. All authors contributed to the manuscript's preparation and revision. All authors read and approved the final manuscript.
Correspondence to Daniel C. Ursu.
All animal care and use procedures were conducted in accordance with the National Research Council's Guide for the Care and Use of Laboratory Animals, (1996) and were approved by the University of Michigan Animal Care and Use Committee, under protocol number PRO00005717.
Frost, C.M., Ursu, D.C., Flattery, S.M. et al. Regenerative peripheral nerve interfaces for real-time, proportional control of a Neuroprosthetic hand. J NeuroEngineering Rehabil 15, 108 (2018). https://doi.org/10.1186/s12984-018-0452-1
Keywords: Peripheral nerve interface, Amputees
# Fundamentals of data science
1.1 What is data science?
Data science is the study of data, its collection, analysis, and interpretation, with the goal of extracting useful information and making informed decisions. It involves a combination of programming, statistical analysis, and domain knowledge to uncover patterns and trends in data.
1.2 The data science process
The data science process typically involves the following steps:
1. Data collection: Gathering relevant data from various sources, such as databases, APIs, or web scraping.
2. Data cleaning: Preprocessing the data to handle missing values, outliers, and inconsistencies.
3. Exploratory data analysis: Analyzing the data to understand its structure, identify patterns, and visualize relationships between variables.
4. Feature engineering: Creating new features or transforming existing ones to improve the performance of machine learning models.
5. Model selection and training: Choosing an appropriate machine learning algorithm and training it on the data.
6. Model evaluation: Assessing the performance of the trained model using appropriate evaluation metrics.
7. Model deployment: Integrating the model into a production environment to make predictions on new data.
For example, let's say we have a dataset of customer transactions for an e-commerce company. We can apply the data science process to analyze this data and gain insights into customer behavior, such as identifying patterns in purchasing habits or predicting customer churn.
## Exercise
Think of a real-world scenario where data science techniques can be applied. Describe the problem and how data science can help solve it.
### Solution
One example could be analyzing social media data to understand customer sentiment towards a brand. By collecting and analyzing data from platforms like Twitter or Facebook, we can gain insights into how customers perceive a brand and identify areas for improvement in their products or services. This can help the company make data-driven decisions to enhance customer satisfaction and loyalty.
# Supervised learning: linear regression and logistic regression
2.1 Linear regression
Linear regression is a statistical model used to predict a continuous output variable based on one or more input variables. It assumes a linear relationship between the input variables and the output variable.
The equation for a simple linear regression model can be written as:
$$y = mx + b$$
where $y$ is the output variable, $x$ is the input variable, $m$ is the slope of the line, and $b$ is the y-intercept.
2.2 Logistic regression
Logistic regression is a binary classification algorithm used to predict the probability of an event occurring. It is commonly used in situations where the output variable is categorical, such as predicting whether an email is spam or not.
The logistic regression model uses the logistic function to model the relationship between the input variables and the probability of the event occurring. The equation for logistic regression can be written as:
$$P(y=1|x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1x)}}$$
where $P(y=1|x)$ is the probability of the event occurring given the input variable $x$, $\beta_0$ is the intercept, and $\beta_1$ is the coefficient for the input variable.
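As a quick numerical illustration of the logistic function (the coefficient values below are chosen arbitrarily, not fitted to any dataset):

```python
import math

def logistic_probability(x, beta0, beta1):
    """P(y = 1 | x) under a simple logistic regression model."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

# With beta0 = -4 and beta1 = 1, the predicted probability crosses 0.5 at x = 4.
p_low = logistic_probability(0, -4, 1)   # well below 0.5
p_mid = logistic_probability(4, -4, 1)   # exactly 0.5
p_high = logistic_probability(8, -4, 1)  # well above 0.5
```

Note that the output is always strictly between 0 and 1, which is what makes it interpretable as a probability.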
For example, let's say we have a dataset of housing prices and we want to predict the price of a house based on its size. We can use linear regression to create a model that predicts the price based on the size of the house.
## Exercise
Consider the following dataset of students' test scores and the corresponding number of hours they studied:
| Hours Studied | Test Score |
|---------------|------------|
| 4 | 75 |
| 6 | 85 |
| 8 | 90 |
| 10 | 95 |
Using linear regression, predict the test score for a student who studied for 7 hours.
### Solution
To predict the test score for a student who studied for 7 hours, we can use the linear regression equation:
$$y = mx + b$$
where $y$ is the test score, $x$ is the number of hours studied, $m$ is the slope, and $b$ is the y-intercept.
From the given dataset, we can calculate the slope and y-intercept as follows:
$$m = \frac{\sum{(x_i - \bar{x})(y_i - \bar{y})}}{\sum{(x_i - \bar{x})^2}}$$
$$b = \bar{y} - m\bar{x}$$
where $\bar{x}$ and $\bar{y}$ are the means of the input and output variables, respectively.
Using these formulas, we can calculate the slope and y-intercept:
With $\bar{x} = (4 + 6 + 8 + 10)/4 = 7$ and $\bar{y} = (75 + 85 + 90 + 95)/4 = 86.25$:

$$m = \frac{(4-7)(75-86.25) + (6-7)(85-86.25) + (8-7)(90-86.25) + (10-7)(95-86.25)}{(4-7)^2 + (6-7)^2 + (8-7)^2 + (10-7)^2} = \frac{33.75 + 1.25 + 3.75 + 26.25}{20} = 3.25$$

$$b = 86.25 - 3.25 \cdot 7 = 63.5$$

Finally, we substitute the values into the linear regression equation to predict the test score for 7 hours of studying:

$$y = 3.25 \cdot 7 + 63.5 = 86.25$$
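The hand calculation above can be checked with a short least-squares routine:

```python
def fit_simple_linear_regression(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared errors."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    m = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    b = y_bar - m * x_bar
    return m, b

hours = [4, 6, 8, 10]
scores = [75, 85, 90, 95]
m, b = fit_simple_linear_regression(hours, scores)  # m = 3.25, b = 63.5
prediction = m * 7 + b  # predicted score for 7 hours of study: 86.25
```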
# Model evaluation and selection
3.1 Evaluation metrics
There are several evaluation metrics that can be used to assess the performance of a machine learning model. Some common evaluation metrics include accuracy, precision, recall, and F1 score.
- Accuracy measures the proportion of correct predictions out of the total number of predictions.
- Precision measures the proportion of true positive predictions out of the total number of positive predictions.
- Recall measures the proportion of true positive predictions out of the total number of actual positive instances.
- F1 score is the harmonic mean of precision and recall, providing a balanced measure of a model's performance.
These evaluation metrics can be calculated using the confusion matrix, which is a table that summarizes the performance of a classification model.
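These definitions translate directly into code. Given the true positive, false positive, true negative, and false negative counts from a confusion matrix (the example counts below are illustrative):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Example counts: 95 true positives, 10 false positives,
# 90 true negatives, 5 false negatives.
acc, prec, rec, f1 = classification_metrics(tp=95, fp=10, tn=90, fn=5)
```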
3.2 Model selection
Model selection involves choosing the best model from a set of candidate models. This can be done using techniques such as cross-validation and grid search.
- Cross-validation is a technique that involves splitting the data into multiple subsets, training the model on some subsets, and evaluating its performance on the remaining subsets. This helps to assess the model's performance on unseen data and avoid overfitting.
- Grid search is a technique that involves systematically searching through a predefined set of hyperparameters to find the combination that results in the best model performance. Hyperparameters are parameters that are not learned from the data, but are set by the user before training the model.
By evaluating the performance of different models using cross-validation and selecting the best hyperparameters using grid search, we can choose the model that is most likely to generalize well to unseen data.
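The data-splitting step behind cross-validation can be sketched in plain Python (no ML library assumed); each sample lands in exactly one validation fold across the k splits:

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, validation_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size

# For 10 samples and k = 3, the fold sizes are 4, 3, and 3.
splits = list(k_fold_splits(10, 3))
```

In practice one would shuffle the indices first; the unshuffled version is kept here so the fold boundaries are easy to see.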
For example, let's say we have a dataset of emails and we want to build a model to classify them as spam or not spam. We can train multiple models, such as logistic regression, decision trees, and random forests, and evaluate their performance using cross-validation and evaluation metrics. Based on the results, we can select the model that has the highest accuracy or F1 score.
## Exercise
Consider the following confusion matrix for a binary classification model:
| | Predicted Negative | Predicted Positive |
|--------------|--------------------|--------------------|
| Actual Negative | 90 | 10 |
| Actual Positive | 5 | 95 |
Calculate the accuracy, precision, recall, and F1 score for this model.
### Solution
To calculate the accuracy, we sum the correct predictions (90 + 95) and divide by the total number of predictions (90 + 10 + 5 + 95):
Accuracy = (90 + 95) / (90 + 10 + 5 + 95) = 0.925
To calculate the precision, we divide the true positive predictions (95) by the sum of true positive and false positive predictions (10 + 95):
Precision = 95 / (10 + 95) = 0.904
To calculate the recall, we divide the true positive predictions (95) by the sum of true positive and false negative predictions (5 + 95):
Recall = 95 / (5 + 95) = 0.95
To calculate the F1 score, we use the formula:
F1 score = 2 * (precision * recall) / (precision + recall)
F1 score = 2 * (0.904 * 0.95) / (0.904 + 0.95) = 0.926
# Unsupervised learning: clustering and dimensionality reduction
4.1 Clustering
Clustering is a technique used to group similar data points together based on their characteristics. The goal of clustering is to identify natural groupings or clusters in the data without any prior knowledge of the classes or labels.
There are various clustering algorithms available, such as k-means clustering, hierarchical clustering, and DBSCAN. These algorithms use different approaches to define the similarity between data points and assign them to clusters.
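As a concrete sketch of one of these algorithms, here is a bare-bones k-means loop (Lloyd's algorithm) on 2-D points; it is a teaching illustration, not an optimized implementation:

```python
def kmeans(points, centroids, iters=10):
    """Alternate between assigning points to centroids and updating centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign each point to its nearest centroid (squared Euclidean distance).
            i = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [tuple(sum(c) / len(pts) for c in zip(*pts)) if pts else ctr
                     for pts, ctr in zip(clusters, centroids)]
    return centroids, clusters

points = [(1, 2), (3, 4), (5, 6), (7, 8)]
centroids, clusters = kmeans(points, [(1, 2), (7, 8)])
# Converges to centroids (2, 3) and (6, 7), splitting the points into two pairs.
```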
4.2 Dimensionality reduction
Dimensionality reduction is a technique used to reduce the number of features or variables in a dataset while preserving the important information. High-dimensional datasets often suffer from the curse of dimensionality, where the presence of many features can lead to increased computational complexity and overfitting.
There are two main types of dimensionality reduction techniques: feature selection and feature extraction. Feature selection involves selecting a subset of the original features based on their relevance to the task at hand. Feature extraction involves transforming the original features into a lower-dimensional space while preserving the important information.
Principal Component Analysis (PCA) is a commonly used technique for dimensionality reduction. It identifies the directions in the data that capture the most variation and projects the data onto these directions.
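For 2-D data, the direction PCA identifies can be worked out directly, since the eigenvalues of a 2x2 covariance matrix have a closed form. The sketch below finds the first principal component this way (population covariance is used for simplicity):

```python
import math

def first_principal_component(points):
    """Unit eigenvector of the largest eigenvalue of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # Covariance matrix entries.
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] via the quadratic formula.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2
    if abs(sxy) < 1e-12:
        # Axis-aligned case: the component is whichever axis has more variance.
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:
        # (lam - syy, sxy) is an eigenvector for eigenvalue lam.
        v = (lam - syy, sxy)
    norm = math.hypot(v[0], v[1])
    return (v[0] / norm, v[1] / norm)

# Points along the line y = x: the first component points along (1, 1)/sqrt(2).
direction = first_principal_component([(0, 0), (1, 1), (2, 2), (3, 3)])
```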
For example, let's say we have a dataset of customer transactions with various attributes such as age, income, and purchase history. We can use clustering to group similar customers together based on their transaction patterns. This can help us identify different customer segments and tailor marketing strategies accordingly.
In another example, let's say we have a dataset with thousands of features. We can use dimensionality reduction techniques to reduce the number of features to a more manageable size. This can help improve the performance of machine learning models and reduce computational complexity.
## Exercise
Consider the following dataset with two features:
| Data Point | Feature 1 | Feature 2 |
|------------|-----------|-----------|
| A | 1 | 2 |
| B | 3 | 4 |
| C | 5 | 6 |
| D | 7 | 8 |
Perform k-means clustering on this dataset with k=2. Assign each data point to the corresponding cluster.
### Solution
To perform k-means clustering, we first randomly initialize two cluster centroids. Let's say we initialize them at (1, 2) and (7, 8). We then assign each data point to the closest centroid based on the Euclidean distance. After the initial assignment, we update the centroids by taking the mean of the data points assigned to each cluster. We repeat this process until convergence.
After the first iteration, the assignments are as follows:
| Data Point | Feature 1 | Feature 2 | Cluster |
|------------|-----------|-----------|---------|
| A | 1 | 2 | 1 |
| B | 3 | 4 | 1 |
| C | 5 | 6 | 2 |
| D | 7 | 8 | 2 |
After the second iteration, the assignments are as follows:
| Data Point | Feature 1 | Feature 2 | Cluster |
|------------|-----------|-----------|---------|
| A | 1 | 2 | 1 |
| B | 3 | 4 | 1 |
| C | 5 | 6 | 2 |
| D | 7 | 8 | 2 |
Since there is no change in the assignments, the algorithm has converged. The final clusters are:
Cluster 1: A, B
Cluster 2: C, D
# Supervised learning: decision trees and random forests
5.1 Decision trees
A decision tree is a flowchart-like structure in which each internal node represents a feature or attribute, each branch represents a decision rule, and each leaf node represents the outcome. Decision trees are easy to understand and interpret, making them a popular choice for both classification and regression tasks.
The decision tree algorithm works by recursively partitioning the data based on the values of the features. At each internal node, a decision rule is applied to determine which branch to follow. The process continues until a leaf node is reached, which represents the predicted outcome.
5.2 Random forests
A random forest is an ensemble learning method that combines multiple decision trees to make predictions. Each tree in the random forest is trained on a random subset of the training data and a random subset of the features. This randomness helps to reduce overfitting and improve the generalization ability of the model.
The random forest algorithm works by aggregating the predictions of the individual trees. For classification tasks, the final prediction is determined by majority voting. For regression tasks, the final prediction is determined by averaging the predictions of the individual trees.
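The aggregation step can be illustrated with plain majority voting and averaging over per-tree predictions (the tree outputs below are hypothetical):

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Classification: majority vote over the class predicted by each tree."""
    return Counter(tree_predictions).most_common(1)[0][0]

def forest_predict_regression(tree_predictions):
    """Regression: average over the per-tree predictions."""
    return sum(tree_predictions) / len(tree_predictions)

label = forest_predict(["spam", "spam", "not spam"])       # majority class: "spam"
value = forest_predict_regression([200.0, 210.0, 190.0])   # mean prediction: 200.0
```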
Random forests are known for their high accuracy and robustness to noise and outliers. They are widely used in various domains, including finance, healthcare, and marketing.
For example, let's say we have a dataset of customer information, including age, income, and purchase history. We want to predict whether a customer will churn or not. We can use a decision tree to learn a model that predicts the churn status based on the customer attributes.
In another example, let's say we have a dataset of housing prices, including features such as location, number of bedrooms, and square footage. We want to predict the selling price of a house. We can use a random forest to learn a model that predicts the selling price based on the house attributes.
## Exercise
Consider the following dataset of weather conditions and corresponding play decisions:
| Outlook | Temperature | Humidity | Windy | Play |
|-----------|-------------|----------|-------|------|
| Sunny | Hot | High | False | No |
| Sunny | Hot | High | True | No |
| Overcast | Hot | High | False | Yes |
| Rainy | Mild | High | False | Yes |
| Rainy | Cool | Normal | False | Yes |
| Rainy | Cool | Normal | True | No |
| Overcast | Cool | Normal | True | Yes |
| Sunny | Mild | High | False | No |
| Sunny | Cool | Normal | False | Yes |
| Rainy | Mild | Normal | False | Yes |
| Sunny | Mild | Normal | True | Yes |
| Overcast | Mild | High | True | Yes |
| Overcast | Hot | Normal | False | Yes |
| Rainy | Mild | High | True | No |
Use the decision tree algorithm to learn a model that predicts the play decision based on the weather conditions.
### Solution
The decision tree algorithm works by recursively partitioning the data based on the values of the features. At each internal node, a decision rule is applied to determine which branch to follow. The process continues until a leaf node is reached, which represents the predicted outcome.
Here is one possible decision tree that predicts the play decision based on the weather conditions:
```
Outlook = Sunny:
Humidity = High:
Play = No
Humidity = Normal:
Play = Yes
Outlook = Overcast:
Play = Yes
Outlook = Rainy:
Windy = False:
Play = Yes
Windy = True:
Play = No
```
This decision tree correctly predicts the play decision for all the examples in the dataset.
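The same tree can be written as a small function and checked against every row of the table (temperature is dropped because the tree never tests it):

```python
def play_decision(outlook, humidity, windy):
    """The decision tree above, expressed as nested conditionals."""
    if outlook == "Sunny":
        return "No" if humidity == "High" else "Yes"
    if outlook == "Overcast":
        return "Yes"
    # Remaining case: outlook == "Rainy"
    return "No" if windy else "Yes"

# (outlook, humidity, windy, play) for each row of the dataset.
dataset = [
    ("Sunny", "High", False, "No"), ("Sunny", "High", True, "No"),
    ("Overcast", "High", False, "Yes"), ("Rainy", "High", False, "Yes"),
    ("Rainy", "Normal", False, "Yes"), ("Rainy", "Normal", True, "No"),
    ("Overcast", "Normal", True, "Yes"), ("Sunny", "High", False, "No"),
    ("Sunny", "Normal", False, "Yes"), ("Rainy", "Normal", False, "Yes"),
    ("Sunny", "Normal", True, "Yes"), ("Overcast", "High", True, "Yes"),
    ("Overcast", "Normal", False, "Yes"), ("Rainy", "High", True, "No"),
]
all_correct = all(play_decision(o, h, w) == play for o, h, w, play in dataset)
```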
# Regression analysis: linear, polynomial, and multiple regression
6.1 Linear regression
Linear regression is a simple and widely used regression technique. It assumes a linear relationship between the dependent variable and the independent variable(s). The goal of linear regression is to find the best-fitting line that minimizes the sum of the squared differences between the observed and predicted values.
The equation for a simple linear regression model is:
$$y = \beta_0 + \beta_1x$$
where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the intercept, and $\beta_1$ is the slope.
6.2 Polynomial regression
Polynomial regression is an extension of linear regression that allows for non-linear relationships between the dependent variable and the independent variable(s). It uses polynomial functions to model the relationship.
The equation for a polynomial regression model is:
$$y = \beta_0 + \beta_1x + \beta_2x^2 + \ldots + \beta_nx^n$$
where $y$ is the dependent variable, $x$ is the independent variable, $\beta_0$ is the intercept, $\beta_1$ to $\beta_n$ are the coefficients, and $n$ is the degree of the polynomial.
Polynomial regression can capture more complex patterns in the data, but it can also be prone to overfitting if the degree of the polynomial is too high.
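A short sketch of polynomial fitting with NumPy's `polyfit`, using synthetic noise-free data from $y = 1 - 2x + 0.5x^2$. A degree-2 fit recovers the true coefficients; on noisy data, a much higher degree would start chasing the noise (overfitting):

```python
import numpy as np

# Noise-free samples from y = 1 - 2x + 0.5 x^2; a degree-2 fit
# should recover the generating coefficients.
x = np.linspace(-3, 3, 20)
y = 1 - 2 * x + 0.5 * x**2

coeffs = np.polyfit(x, y, deg=2)   # highest-degree coefficient first
```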
6.3 Multiple regression
Multiple regression is a regression technique that allows for multiple independent variables. It models the relationship between the dependent variable and two or more independent variables.
The equation for a multiple regression model is:
$$y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \ldots + \beta_nx_n$$
where $y$ is the dependent variable, $x_1$ to $x_n$ are the independent variables, $\beta_0$ is the intercept, and $\beta_1$ to $\beta_n$ are the coefficients.
Multiple regression can account for the influence of multiple factors on the dependent variable, but it assumes that there is a linear relationship between the variables.
For example, let's say we have a dataset of house prices and we want to predict the price of a house based on its size. We can use linear regression to model the relationship between the house size (independent variable) and the house price (dependent variable).
In another example, let's say we have a dataset of student test scores and we want to predict the scores based on the amount of study time and the number of extracurricular activities. We can use multiple regression to model the relationship between the study time and extracurricular activities (independent variables) and the test scores (dependent variable).
## Exercise
Consider the following dataset of car prices and corresponding features:
| Car | Mileage | Age | Price |
|--------|---------|-----|-------|
| Honda | 50000 | 3 | 15000 |
| Toyota | 80000 | 5 | 18000 |
| Ford | 60000 | 4 | 16000 |
| Honda | 30000 | 2 | 20000 |
| Toyota | 40000 | 3 | 17000 |
| Ford | 70000 | 6 | 14000 |
Use multiple regression to learn a model that predicts the car price based on the mileage and age.
### Solution
The multiple regression model assumes a linear relationship between the dependent variable (car price) and the independent variables (mileage and age).
The equation for the multiple regression model is:
$$Price = \beta_0 + \beta_1 \cdot Mileage + \beta_2 \cdot Age$$
To find the best-fitting model, we need to estimate the coefficients $\beta_0$, $\beta_1$, and $\beta_2$.
Using the given dataset, we estimate the coefficients by ordinary least squares, i.e. by minimizing the sum of squared residuals over the six observations. The least-squares estimates are approximately:
$$\beta_0 = 19350$$
$$\beta_1 = 0.0575$$
$$\beta_2 = -1525$$
The fitted multiple regression model for predicting car prices based on mileage and age is:
$$Price = 19350 + 0.0575 \cdot Mileage - 1525 \cdot Age$$
Note that with only six observations and correlated predictors (older cars tend to have higher mileage), individual coefficient estimates are unstable: here the age term absorbs most of the depreciation, leaving a small positive mileage coefficient. The model can be used to predict the price of a car from its mileage and age, but predictions from so little data should be treated with caution.
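The least-squares estimates for this exercise can be computed directly. A sketch using NumPy's `lstsq` on the six observations from the table:

```python
import numpy as np

# Car data from the exercise: mileage, age, price.
mileage = np.array([50000, 80000, 60000, 30000, 40000, 70000], dtype=float)
age = np.array([3, 5, 4, 2, 3, 6], dtype=float)
price = np.array([15000, 18000, 16000, 20000, 17000, 14000], dtype=float)

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones_like(mileage), mileage, age])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
b0, b1, b2 = beta
```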
# Unsupervised learning: association rule mining and anomaly detection
7.1 Association rule mining
Association rule mining is a technique used to discover interesting relationships or patterns in large datasets. It is commonly used in market basket analysis, where the goal is to find associations between items that are frequently purchased together.
The two most common measures used in association rule mining are support and confidence. Support measures how frequently an itemset occurs in the dataset, while confidence measures the strength of the association between two itemsets.
7.2 Anomaly detection
Anomaly detection is a technique used to identify unusual or rare events or patterns in data. It is commonly used in fraud detection, network intrusion detection, and system health monitoring.
There are various methods for anomaly detection, including statistical methods, clustering methods, and machine learning methods. These methods aim to identify data points that deviate significantly from the normal behavior of the dataset.
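One of the simplest statistical methods is z-score thresholding: flag points that lie many standard deviations from the mean. A minimal sketch on hypothetical sensor readings:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    """Flag values whose |z-score| exceeds the threshold.

    Note: with a small sample, a large outlier inflates the standard
    deviation and partly masks itself, so a threshold of 2 is used here
    rather than the more common 3.
    """
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) / sd > threshold]

# Hypothetical sensor readings: mostly around 10, plus one extreme value.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.05, 50.0]
anomalies = zscore_anomalies(readings)
```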
For example, let's say we have a dataset of customer transactions in a supermarket. We can use association rule mining to discover patterns such as "Customers who buy milk and bread are likely to also buy eggs". This information can be used for targeted marketing or product placement strategies.
In another example, let's say we have a dataset of network traffic data. We can use anomaly detection to identify unusual patterns of network traffic that may indicate a network intrusion or a cyber attack.
## Exercise
Consider the following dataset of customer purchases:
| Customer | Item 1 | Item 2 | Item 3 |
|----------|--------|--------|--------|
| 1 | A | B | C |
| 2 | B | C | D |
| 3 | A | B | D |
| 4 | A | C | D |
| 5 | B | C | D |
Use association rule mining to discover interesting patterns in the dataset.
### Solution
To discover interesting patterns in the dataset, we can calculate the support and confidence measures for each itemset.
The support of an itemset is calculated as the number of transactions containing the itemset divided by the total number of transactions.
The confidence of a rule is calculated as the number of transactions containing both the antecedent and consequent of the rule divided by the number of transactions containing the antecedent.
Using the given dataset, we can calculate the support and confidence measures for each itemset and rule:
Support(A) = 3/5 = 0.6

Support(B) = 4/5 = 0.8

Support(C) = 4/5 = 0.8

Support(D) = 4/5 = 0.8

Support(A ∪ B) = 2/5 = 0.4 (transactions 1 and 3), and similarly Support(A ∪ C) = Support(A ∪ D) = 2/5, while Support(B ∪ C) = Support(B ∪ D) = Support(C ∪ D) = 3/5 = 0.6.

Confidence(A -> B) = Support(A ∪ B) / Support(A) = (2/5) / (3/5) ≈ 0.67

Confidence(A -> C) = Support(A ∪ C) / Support(A) = (2/5) / (3/5) ≈ 0.67

Confidence(A -> D) = Support(A ∪ D) / Support(A) = (2/5) / (3/5) ≈ 0.67

Confidence(B -> C) = Support(B ∪ C) / Support(B) = (3/5) / (4/5) = 0.75

Confidence(B -> D) = Support(B ∪ D) / Support(B) = (3/5) / (4/5) = 0.75

Confidence(C -> D) = Support(C ∪ D) / Support(C) = (3/5) / (4/5) = 0.75

Based on these measures, the rules among B, C, and D are the strongest: each of B -> C, B -> D, and C -> D holds in 75% of the transactions containing the antecedent, with support 0.6. Rules with A as the antecedent are weaker (confidence ≈ 0.67). The most interesting pattern in this small dataset is therefore that items B, C, and D tend to be purchased together.
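The support and confidence computations can be automated. A minimal sketch using Python sets for the five transactions:

```python
# Transactions from the exercise, one set of items per customer.
transactions = [
    {"A", "B", "C"},
    {"B", "C", "D"},
    {"A", "B", "D"},
    {"A", "C", "D"},
    {"B", "C", "D"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Support of the combined itemset divided by support of the antecedent."""
    return support(antecedent | consequent) / support(antecedent)
```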
# Model optimization and tuning
Model optimization and tuning is an important step in the machine learning process. It involves fine-tuning the parameters and hyperparameters of a model to improve its performance on a given task.
In this section, we will explore various techniques for model optimization and tuning, including cross-validation, grid search, and regularization.
8.1 Cross-validation
Cross-validation is a technique used to evaluate the performance of a model on an independent dataset. It involves splitting the dataset into multiple subsets, or folds, and training the model on a subset while evaluating it on the remaining subsets. This process is repeated multiple times, with each fold serving as the test set once.
The most common type of cross-validation is k-fold cross-validation, where the dataset is divided into k equal-sized folds. The model is trained and evaluated k times, with each fold serving as the test set once.
8.2 Grid search
Grid search is a technique used to find the optimal combination of hyperparameters for a model. It involves defining a grid of possible values for each hyperparameter and evaluating the model's performance for each combination of values.
Grid search can be computationally expensive, especially for models with a large number of hyperparameters. However, it is a powerful technique for finding the best hyperparameters for a given task.
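A minimal sketch of grid search: enumerate every combination in the grid and keep the one with the best validation score. Here `validation_error` is a made-up stand-in for training and evaluating a real model:

```python
from itertools import product

# Hypothetical validation error for each hyperparameter combination;
# in practice this would train a model and score it on held-out data.
def validation_error(learning_rate, batch_size):
    return (learning_rate - 0.01) ** 2 + (batch_size - 32) ** 2 / 1e4

grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [16, 32, 64],
}

# Try every combination and keep the one with the lowest validation error.
best_params, best_error = None, float("inf")
for lr, bs in product(grid["learning_rate"], grid["batch_size"]):
    err = validation_error(lr, bs)
    if err < best_error:
        best_params, best_error = (lr, bs), err
```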
8.3 Regularization
Regularization is a technique used to prevent overfitting in machine learning models. It involves adding a penalty term to the loss function that discourages the model from fitting the training data too closely.
There are different types of regularization techniques, including L1 regularization (also known as Lasso regularization) and L2 regularization (also known as Ridge regularization). These techniques add a regularization term to the loss function that penalizes large parameter values.
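L2 (ridge) regularization has a closed-form solution obtained by adding $\lambda I$ to the normal equations: $\hat\beta = (X^\top X + \lambda I)^{-1} X^\top y$. A sketch on synthetic data, showing that a positive penalty shrinks the coefficient norm:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge (L2) regression: (X^T X + lam*I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Noise-free data generated from y = 1*x0 + 2*x1; with lam = 0 this is
# plain least squares, and a positive lam shrinks the coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, 2.0])

w_ols = ridge_fit(X, y, lam=0.0)
w_ridge = ridge_fit(X, y, lam=10.0)
```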
For example, let's say we have a dataset of housing prices and we want to build a regression model to predict the prices based on various features. We can use cross-validation to evaluate the performance of different regression models and select the best one.
In another example, let's say we have a dataset of images and we want to build a convolutional neural network (CNN) to classify the images. We can use grid search to find the optimal combination of hyperparameters for the CNN, such as the learning rate, batch size, and number of filters.
## Exercise
Consider the following dataset of student test scores:
| Student | Study Time | Test Score |
|---------|------------|------------|
| 1 | 2 | 80 |
| 2 | 3 | 85 |
| 3 | 4 | 90 |
| 4 | 5 | 95 |
| 5 | 6 | 100 |
Use cross-validation to evaluate the performance of a linear regression model on the dataset.
### Solution
To evaluate the performance of a linear regression model on the dataset, we can use cross-validation.
First, we divide the dataset into k equal-sized folds. Let's use k = 5 for this example.
Fold 1: Train on students 2, 3, 4, 5; Test on student 1
Fold 2: Train on students 1, 3, 4, 5; Test on student 2
Fold 3: Train on students 1, 2, 4, 5; Test on student 3
Fold 4: Train on students 1, 2, 3, 5; Test on student 4
Fold 5: Train on students 1, 2, 3, 4; Test on student 5
Next, we train the linear regression model on each training set and evaluate its performance on the corresponding test set. We repeat this process for each fold.
Finally, we calculate the average performance of the model across all folds to get an overall estimate of its performance.
Using the given dataset, we can perform cross-validation (here k = 5 means one student per fold, i.e. leave-one-out) and compare each fold's prediction with the actual score:

Fold 1: Train on students 2, 3, 4, 5; Test on student 1; Predicted score = 80 (actual 80)

Fold 2: Train on students 1, 3, 4, 5; Test on student 2; Predicted score = 85 (actual 85)

Fold 3: Train on students 1, 2, 4, 5; Test on student 3; Predicted score = 90 (actual 90)

Fold 4: Train on students 1, 2, 3, 5; Test on student 4; Predicted score = 95 (actual 95)

Fold 5: Train on students 1, 2, 3, 4; Test on student 5; Predicted score = 100 (actual 100)

Because the data lie exactly on the line Score = 70 + 5 · StudyTime, every training set recovers that line, and each held-out student is predicted perfectly. The average cross-validation error (e.g. mean squared error) is therefore 0, indicating that a linear model fits this dataset exactly.
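The fold-by-fold procedure can be coded directly. A sketch of leave-one-out cross-validation for the student data, fitting the line with closed-form least squares:

```python
# Leave-one-out cross-validation for the student dataset, fitting a
# straight line with the closed-form least-squares formulas.
study = [2, 3, 4, 5, 6]
score = [80, 85, 90, 95, 100]

def fit_line(xs, ys):
    """Return (intercept, slope) of the least-squares line."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b1 = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / sum(
        (x - xb) ** 2 for x in xs)
    return yb - b1 * xb, b1

errors = []
for i in range(len(study)):
    # Hold out student i, train on the rest, predict the held-out score.
    train_x = study[:i] + study[i + 1:]
    train_y = score[:i] + score[i + 1:]
    b0, b1 = fit_line(train_x, train_y)
    pred = b0 + b1 * study[i]
    errors.append((pred - score[i]) ** 2)

mse = sum(errors) / len(errors)
```

Because the data are perfectly linear, every fold predicts its held-out student exactly and the cross-validation error is zero.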
# Ensemble methods and boosting
Ensemble methods and boosting are powerful techniques used in machine learning to improve the performance of models.
Ensemble methods involve combining multiple models to make predictions. The idea behind ensemble methods is that by combining the predictions of multiple models, the overall prediction will be more accurate and robust.
Boosting is a specific type of ensemble method that involves training models sequentially, where each model focuses on the instances that were misclassified by the previous models. This iterative process helps to improve the overall performance of the ensemble.
9.1 Bagging
Bagging is a popular ensemble method that involves training multiple models on different subsets of the training data. Each model is trained independently and makes its own prediction. The final prediction is then determined by aggregating the predictions of all the models.
One common example of bagging is the random forest algorithm, which combines multiple decision trees to make predictions. Each decision tree is trained on a different subset of the training data, and the final prediction is determined by majority voting.
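A minimal bagging sketch: decision stumps trained on bootstrap resamples of a toy 1-D dataset, combined by majority vote (the data and thresholds here are made up for illustration):

```python
import random
from collections import Counter

random.seed(0)

# Toy 1-D dataset: label is "Yes" exactly when x > 5.
data = [(x, "Yes" if x > 5 else "No") for x in range(11)]

def train_stump(sample):
    """Pick the threshold that best separates the bootstrap sample."""
    best_t, best_acc = 0, -1.0
    for t in range(11):
        acc = sum((x > t) == (label == "Yes") for x, label in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Bagging: train each stump on a bootstrap resample (sampling with
# replacement), then predict by majority vote over all stumps.
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]

def predict(x):
    votes = Counter("Yes" if x > t else "No" for t in stumps)
    return votes.most_common(1)[0][0]
```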
9.2 Boosting
Boosting is another popular ensemble method that involves training models sequentially. Each model is trained to correct the mistakes made by the previous models. The final prediction is then determined by combining the predictions of all the models, with each model's prediction weighted based on its performance.
One common example of boosting is the AdaBoost algorithm, which iteratively trains weak learners and assigns higher weights to the instances that were misclassified by the previous models. The final prediction is determined by combining the predictions of all the weak learners, with each weak learner's prediction weighted based on its performance.
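A compact sketch of the AdaBoost loop with decision stumps on a toy, linearly separable 1-D dataset (the data are made up; the reweighting and alpha formula follow the standard algorithm):

```python
import math

# Toy 1-D data, labels in {-1, +1}; separable at x > 2.5.
xs = [0, 1, 2, 3, 4, 5]
ys = [-1, -1, -1, 1, 1, 1]

def best_stump(weights):
    """Best stump h(x) = +1 if x >= t else -1; returns (t, weighted error)."""
    best = None
    for t in range(7):
        err = sum(w for x, y, w in zip(xs, ys, weights)
                  if (1 if x >= t else -1) != y)
        if best is None or err < best[1]:
            best = (t, err)
    return best

weights = [1 / len(xs)] * len(xs)
ensemble = []  # list of (alpha, threshold)
for _ in range(3):
    t, err = best_stump(weights)
    err = max(err, 1e-12)  # avoid division by zero for a perfect stump
    alpha = 0.5 * math.log((1 - err) / err)
    ensemble.append((alpha, t))
    # Increase weights of misclassified points, decrease the rest, renormalize.
    weights = [w * math.exp(-alpha * y * (1 if x >= t else -1))
               for x, y, w in zip(xs, ys, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]

def predict(x):
    score = sum(alpha * (1 if x >= t else -1) for alpha, t in ensemble)
    return 1 if score >= 0 else -1
```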
9.3 Stacking
Stacking is a more advanced ensemble method that involves training multiple models and combining their predictions using another model, called a meta-model. The idea behind stacking is to leverage the strengths of different models and improve the overall prediction.
In stacking, the base models are trained on the training data and make predictions. The predictions of the base models are then used as input features for the meta-model, which is trained to make the final prediction.
Stacking can be a powerful technique, but it can also be more computationally expensive and require more data compared to other ensemble methods.
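A minimal stacking sketch: two deliberately biased base predictors are combined by a least-squares meta-model, which here recovers the target exactly (all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 2 * x + 3  # target the stacked ensemble should recover

# Two (deliberately weak) base models: one biased low, one biased high.
pred_a = 2 * x          # misses the intercept
pred_b = 2 * x + 6      # overshoots the intercept

# Meta-model: linear least squares on the base models' predictions.
Z = np.column_stack([pred_a, pred_b])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)

stacked = Z @ w  # averaging the two biases cancels them exactly
```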
For example, let's say we want to build a model to predict whether a customer will churn or not. We can use ensemble methods like bagging or boosting to improve the performance of our model.
In bagging, we can train multiple decision trees on different subsets of the training data and combine their predictions. This can help to reduce overfitting and improve the generalization of the model.
In boosting, we can train multiple weak learners sequentially, where each weak learner focuses on the instances that were misclassified by the previous models. This iterative process helps to improve the overall performance of the ensemble and make more accurate predictions.
In stacking, we can train multiple models, such as decision trees, support vector machines, and neural networks, and combine their predictions using another model, such as logistic regression or a neural network. This can help to leverage the strengths of different models and improve the overall prediction.
## Exercise
Consider the following dataset of customer churn:
| Customer | Age | Gender | Income | Churn |
|----------|-----|--------|--------|-------|
| 1 | 25 | Male | 50000 | No |
| 2 | 35 | Female | 60000 | No |
| 3 | 45 | Male | 70000 | Yes |
| 4 | 55 | Female | 80000 | Yes |
| 5 | 65 | Male | 90000 | Yes |
Use bagging to train a random forest model on the dataset and make a prediction for a new customer with the following attributes: age = 30, gender = Male, income = 55000.
### Solution
To train a random forest model using bagging, we can follow these steps:
1. Split the dataset into a training set and a test set.
2. Train multiple decision trees on different subsets of the training set.
3. Combine the predictions of the decision trees to make the final prediction.
Using the given dataset, we can train a random forest model to predict customer churn. Let's assume that we split the dataset into a training set and a test set, with 80% of the data used for training and 20% used for testing.
After training the random forest model, we can make a prediction for a new customer with the following attributes: age = 30, gender = Male, income = 55000.
The random forest model predicts that the new customer will not churn (No) based on the given attributes.
Note: The actual implementation of training a random forest model using bagging may involve additional steps, such as feature selection and hyperparameter tuning.
# Neural networks and deep learning
Neural networks and deep learning are powerful techniques used in machine learning and artificial intelligence. They are inspired by the structure and function of the human brain and have been successful in solving complex problems in various domains.
A neural network is a collection of interconnected nodes, called neurons, that work together to process and analyze data. Each neuron takes input from its connected neurons, performs a computation, and produces an output. The outputs of the neurons are then used as inputs for other neurons, forming a network of interconnected computations.
Deep learning is a subfield of machine learning that focuses on neural networks with multiple layers, called deep neural networks. These networks are capable of learning hierarchical representations of data, where each layer learns to extract increasingly complex features from the input data.
10.1 Feedforward Neural Networks
Feedforward neural networks are the most basic type of neural network. They consist of an input layer, one or more hidden layers, and an output layer. The input layer receives the input data, and the output layer produces the final prediction or classification.
Each neuron in the hidden layers and the output layer performs a computation using a set of weights and biases. These weights and biases are learned during the training process, where the network adjusts them to minimize the difference between the predicted output and the true output.
Feedforward neural networks are often used for tasks such as image classification, speech recognition, and natural language processing.
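The forward pass of a small feedforward network can be written in a few lines. A sketch with one hidden layer and sigmoid activations, using random (untrained) weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, b1, w2, b2):
    """One hidden layer: input -> hidden (sigmoid) -> output (sigmoid)."""
    hidden = sigmoid(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 3))   # 4 inputs -> 3 hidden units
b1 = np.zeros(3)
w2 = rng.normal(size=(3, 1))   # 3 hidden units -> 1 output
b2 = np.zeros(1)

x = rng.normal(size=(5, 4))    # batch of 5 examples
out = forward(x, w1, b1, w2, b2)
```

Training would adjust `w1, b1, w2, b2` by backpropagation; this sketch only shows how an input batch flows through the layers.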
10.2 Convolutional Neural Networks
Convolutional neural networks (CNNs) are a type of neural network that are particularly effective for image and video processing tasks. They are designed to automatically learn and extract features from images, such as edges, textures, and shapes.
CNNs consist of multiple layers, including convolutional layers, pooling layers, and fully connected layers. The convolutional layers apply filters to the input image, which helps to detect and extract features. The pooling layers reduce the spatial dimensions of the feature maps, making the network more efficient. The fully connected layers perform the final classification or prediction.
CNNs have achieved state-of-the-art performance in tasks such as image classification, object detection, and image segmentation.
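The core convolution operation is easy to sketch. The following "valid" 2-D convolution (strictly speaking cross-correlation, as in most deep-learning libraries) applies a vertical-edge-detecting kernel to a tiny synthetic image:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: no padding, stride 1, kernel unflipped."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a sharp vertical edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
response = conv2d(image, kernel)
```

The response is large in magnitude only at the column where the pixel values jump, which is exactly the "edge feature" a convolutional layer learns to detect.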
For example, let's say we want to build a model to classify images of cats and dogs. We can use a feedforward neural network with an input layer, a hidden layer, and an output layer. The input layer receives the pixel values of the image, and the output layer produces the probability of the image being a cat or a dog.
During the training process, the network adjusts the weights and biases of the neurons to minimize the difference between the predicted probabilities and the true labels of the images. This process, known as backpropagation, uses gradient descent to update the weights and biases.
After training, the neural network can make predictions on new images by feeding the pixel values through the network and obtaining the output probabilities. The image is classified as a cat or a dog based on the highest probability.
## Exercise
Consider the following dataset of handwritten digits:
| Image | Label |
|-------|-------|
| Image 1 | 5 |
| Image 2 | 2 |
| Image 3 | 7 |
| Image 4 | 1 |
| Image 5 | 9 |
Use a convolutional neural network (CNN) to classify the images into their respective labels.
### Solution
To use a convolutional neural network (CNN) to classify the images, we can follow these steps:
1. Preprocess the images by resizing them to a consistent size and normalizing the pixel values.
2. Split the dataset into a training set and a test set.
3. Design the architecture of the CNN, including the number and size of the convolutional and pooling layers.
4. Train the CNN on the training set, adjusting the weights and biases using backpropagation and gradient descent.
5. Evaluate the performance of the trained CNN on the test set, measuring metrics such as accuracy and loss.
Using the given dataset, we can train a CNN to classify the handwritten digits. Let's assume that we resize the images to 28x28 pixels and normalize the pixel values between 0 and 1.
After designing and training the CNN, we can evaluate its performance on the test set. Suppose, for illustration, that the CNN correctly classifies 4 out of the 5 images, an accuracy of 80%.
Note: The actual implementation of training a CNN may involve additional steps, such as data augmentation, regularization, and hyperparameter tuning.
# Real-world applications of machine learning
11.1 Healthcare
Machine learning has been instrumental in advancing healthcare by enabling early detection and diagnosis of diseases, improving treatment plans, and predicting patient outcomes. For example, machine learning algorithms have been developed to analyze medical images, such as X-rays and MRI scans, to detect abnormalities and assist radiologists in making accurate diagnoses. Machine learning models have also been used to predict patient readmissions, identify high-risk patients, and personalize treatment plans.
11.2 Finance
Machine learning has transformed the finance industry by improving fraud detection, predicting stock prices, and automating trading strategies. Machine learning algorithms can analyze large volumes of financial data and identify patterns that humans may not be able to detect. This has led to more effective fraud detection systems, more accurate stock price predictions, and automated trading systems that can make split-second decisions based on market conditions.
11.3 Transportation
Machine learning has played a crucial role in the development of self-driving cars and intelligent transportation systems. Machine learning algorithms can analyze real-time data from sensors, cameras, and GPS to make decisions and navigate vehicles safely. These algorithms can detect objects, predict their movements, and make decisions in complex traffic scenarios. Machine learning has also been used to optimize transportation routes, reduce congestion, and improve fuel efficiency.
11.4 Retail
Machine learning has transformed the retail industry by enabling personalized marketing, demand forecasting, and inventory management. Machine learning algorithms can analyze customer data, such as purchase history and browsing behavior, to make personalized product recommendations and target marketing campaigns. Machine learning models can also analyze historical sales data to forecast demand and optimize inventory levels, reducing costs and improving customer satisfaction.
For example, Amazon uses machine learning algorithms to make personalized product recommendations to its customers. By analyzing customer browsing and purchase history, as well as similar customer profiles, Amazon can suggest products that are likely to be of interest to each individual customer. This has significantly improved the customer shopping experience and increased sales for the company.
## Exercise
Think of a real-world application where machine learning can be used to solve a problem or improve a process. Describe the problem or process and explain how machine learning can be applied to address it.
### Solution
One real-world application where machine learning can be used is in the field of agriculture. Farmers face the challenge of predicting crop yields and optimizing the use of resources, such as water and fertilizers. Machine learning algorithms can analyze historical weather data, soil conditions, and crop characteristics to predict crop yields and optimize resource allocation. By using machine learning, farmers can make data-driven decisions and improve the efficiency and productivity of their farms.
\begin{document}
\author{Debsoumya Chakraborti\thanks{Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA15213, USA}, Alan Frieze\thanks{Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA15213, USA, Research supported in part by NSF grant DMS1661063}, Simi Haber\thanks{Department of Mathematics, Bar-Ilan University. This research was partially supported by the BIU Center for Research in Applied Cryptography and Cyber Security in conjunction with the Israel National Directorate in the Prime Minister’s office.}, Mihir Hasabnis\thanks{Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, PA15213, USA}}
\title{Isomorphism for Random $k$-Uniform Hypergraphs } \maketitle
\begin{abstract} We study the isomorphism problem for random hypergraphs. We show that it is solvable in polynomial time for the binomial random $k$-uniform hypergraph $H_{n,p;k}$, for a wide range of $p$. We also show that it is solvable w.h.p. for random $r$-regular, $k$-uniform hypergraphs $H_{n,r;k},r=O(1)$. \end{abstract}
\section{Introduction} In this note we study the isomorphism problem for two models of random $k$-uniform hypergraphs, $k\geq 3$. A hypergraph is $k$-uniform if all of its edges are of size $k$. The graph isomorphism problem for random graphs is well understood and in this note we extend some of the ideas to hypergraphs.
The first paper to study graph isomorphism in this context was that of Babai, Erd\H{o}s and Selkow \cite{BES}. They considered the model $G_{n,p}$ where $p$ is a constant independent of $n$. {\color{black} They showed that w.h.p.\footnote{A sequence of events ${\cal E}_n,n\geq 1$ occurs with high probability (w.h.p.) if $\lim_{n\to\infty}\mathbb{P}({\cal E}_n)=1$.} a {\em canonical labelling} of $G=G_{n,p}$ can be constructed in $O(n^2)$ time.} In a canonical labelling we assign a unique label to each vertex of a graph such that labels are invariant under isomorphism. It follows that two graphs with the same vertex set are isomorphic, if and only if the labels coincide. (This includes the case where one graph has a unique labeling and the other does not. In which case the two graphs are not isomorphic.) The failure probability for their algorithm was bounded by $O(n^{-1/7})$. Karp \cite{K}, Lipton \cite{L} and Babai and Kucera \cite{BK} reduced the failure probability to $O(c^n),c<1$. These papers consider $p$ to be constant and the paper of Czajka and Pandurangan \cite{CP} allows $p=p(n)=o(1)$. We use the following result from \cite{CP}: the notation $A_n\gg B_n$ means that $A_n/B_n\to\infty$ as $n\to \infty$. \begin{theorem}\label{th1} Suppose that $p\gg\frac{\log^4n}{n\log\log n}$ and $p\leq \frac12$. Then {\color{black} there is a polynomial time algorithm that finds a canonical labeling q.s.\footnote{A sequence of events ${\cal E}_n,n\geq 1$ occurs quite surely (q.s.) if $\mathbb{P}({\cal E}_n)=1-O(n^{-K})$ for any positive constant $K$.} for $G_{n,p}$. In fact the running time of the algorithm is $O(n^2p)$ q.s.} \end{theorem} Our first result concerns the random hypergraph $H_{n,p;k}$, the random $k$-uniform hypergraph on vertex set $[n]$ in which each of the possible edges {\color{black} in} $\binom{[n]}{k}$ occurs independently with probability $p$. 
We say that two $k$-uniform hypergraphs $H_1,H_2$ are isomorphic if there is a bijection $f:V(H_1)\to V(H_2)$ such that $\set{x_1,x_2,\ldots,x_k}$ is an edge of $H_1$ if and only if $\set{f(x_1),f(x_2),\ldots,f(x_k)}$ is an edge of $H_2$. \begin{theorem} \label{th2} Suppose that $k\geq 3$ and $p,1-p\gg n^{-(k-2)}\log n$ then {\color{black} there exists an $O(n^{2k})$ time algorithm that finds a canonical labeling for $H_{n,p;k}$ w.h.p.} \end{theorem} Bollob\'as \cite{B} and Kucera \cite{Ku} proved that random regular graphs have canonical labelings w.h.p. We extend the argument of \cite{B} to regular hypergraphs. A hypergraph is regular of degree $r$ if every vertex is in exactly $r$ edges. We denote a random $r$-regular, $k$-uniform hypergraph on vertex set $[n]$ by $H_{n,r;k}$. \begin{theorem} \label{th3} {\color{black} Suppose that $r,k$ are constants. Then there is an $O(n^{8/5})$ time algorithm that finds a canonical labeling for $H_{n,r;k}$ w.h.p.} \end{theorem} \section{Proof of Theorem \ref{th2}} Given $H=H_{n,p;k}$ we let $H_i$ denote the $(k-1)$-uniform hypergraph with vertex set $[n]\setminus \set{i}$ and edges $\set{{\color{black} e\setminus \set{i}:\;i\in e\in E(H)}}$. {\color{black} $H_i$ is known as the {\em link} associated with vertex $i$.} Let ${\cal E}_k$ denote the event $\set{\not\exists i,j:H_i\cong H_j}$. \begin{lemma}\label{lem1} Suppose that $k\geq 3$ and $\om\to\infty$ and $p,1-p\geq \om n^{-(k-2)}\log n$. Then ${\cal E}_k$ occurs q.s. \end{lemma} \begin{proof} \begin{align*} \mathbb{P}(\exists i,j:H_i\cong H_j) &\leq n^4n! (p^2 + (1-p)^2)^{\binom{n-4}{k-1}}\\ &\leq 3n^{9/2}{\bfrac{n}{e}}^n (p^2+(1-p)^2)^{\binom{n-4}{k-1}p}\\ &\leq n^{-\om/k!}. \end{align*} {\bf Explanation:} There are $\binom{n}{2}$ choices for $i,j$. There are at most $n^2$ choices for $y=f(i),x=f^{-1}(j)$ in an isomorphism $f$ between $H_i$ and $H_j$. This accounts for the $n^4$ term. 
There are $(n-3)!<n!$ possible isomorphisms between $H_i-\set{y,j}$ and $H_j-\set{x,i}$. Then for every $(k-1)$-set of vertices $S$ that includes none of $i,j,x,y$, the probability for there to be an edge or non-edge in both $H_i$ and $H_j$ is given by the expression $p^2 + (1-p)^2$.
The above estimation shows that even disregarding edges containing $i,j,x$ or $y$, w.h.p. there are no $i,j$ with $H_i \cong H_j$. \end{proof} Let $\mathcal{G}_k$ be the event that {\color{black} a canonical labeling for $H_{n,p;k}$} can be constructed in $O(n^{2k})$ time. Now assume inductively that \beq{ind}{ \mathbb{P}(H_{n,p;k}\notin \mathcal{G}_k)\leq n^{-\om/(k+1)!}. } The base case, $k=2$, for \eqref{ind} is {\color{black} given by the result of \cite{L}, although in addition \cite{K}, \cite{CP} can be used for constant $p$}. Let ${\cal B}_i$ be the event that $H_i\notin \mathcal{G}_{k-1}$. Then \beq{1}{ \mathbb{P}(H_{n,p;k}\notin \mathcal{G}_k)\leq {\color{black} (1-\mathbb{P}({\cal E}_k))}+\sum_{i=1}^n\mathbb{P}({\cal B}_i). } Indeed, if none of the events in \eqref{1} occur then in time $O(n^2\times n^{2(k-1)})=O(n^{2k})$ we can by induction uniquely label each vertex via the {\color{black} labelled link}. After this we can confirm that ${\cal E}_k$ has occurred. This confirms the claimed time complexity. Given that {\color{black} ${\cal E}_k$ has occurred}, this will determine the only possible isomorphism {\color{black} $\f$ between $H$ and any other $k$-uniform hypergraph $H'$ on vertex set $[n]$. We determine $\f$ by comparing the links of $H,H'$, using induction to see if they are isomorphic. We check whether there is a mapping $\f$ such that the link of $i$ in $H$ is isomorphic to the link of $\f(i)$ in $H'$ for every $i$, and then we check whether or not $\f$ is actually an isomorphism.}
Going back to \eqref{1} we see by induction that \[ \mathbb{P}(H_{n,p;k}\notin \mathcal{G}_k)\leq n^{-\om/k!}+n^2\times (k-1)n^{2k-2}n^{-\om/k!}\leq n^{-\om/(k+1)!}. \] This completes the proof of Theorem \ref{th2}. \section{Proof of Theorem \ref{th3}} We extend the analysis of Bollob\'as \cite{B} to hypergraphs. For a vertex $v$, we let $d_\ell(v)$ denote the number of vertices at hypergraph distance $\ell$ from $v$ in $H=H_{n,r;k}$. We show that if \[ \ell^*=\rdup{\frac{3}{5}\log_{\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigma}n}\text{ where }\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigma=(r-1)(k-1). \] then w.h.p. no two vertices have the same sequence $(d_\ell(v),\ell=1,2,\ldots,\ell^*)$. By doing a breadth first search from each vertex of $H$ we can therefore w.h.p. distinctly label each vertex within $O(n\rho} \def\R{\Rho} \def\s{\sigma} \def\S{\Sigma^{\ell^*})=O(n^{8/5})$ steps.
We use the {\color{black} configuration} model for hypergraphs, which is a simple generalisation of the model in Bollob\'as \cite{B1}. We let {\color{black} $W=[rn]$} where $m=rn/k$ is an integer. Assume that it is partitioned into sets $W_1,W_2,\ldots,W_n$ of size $r$. We define $f:W\to [n]$ by $f(w)=i$ if $w\in W_i$. A configuration $F$ is a partition of $W$ into sets $F_1,F_2,\ldots,F_m$ of size $k$. Given $F$ we obtain the (multi)hypergraph $\g(F)$ where $F_i=\set{w_1,w_2,\ldots,w_k}$ gives rise to the edge $\set{f(w_1),f(w_2),\ldots,f(w_k)}$ for $i=1,2,\ldots,m$. {\color{black} Configurations can contain multiple edges and loops. Nevertheless, it is known that if $\g(F)$ has a hypergraph property w.h.p. then $H_{n,r;k}$ will also have this property w.h.p.,} see for example \cite{CFMR}.
In the following $H=H_{n,r;k}$. For a set $S\subseteq [n]$, we let $e_H(S)$ denote the number of edges of $H$ that are contained in $S$. \LEM{lemiso1}{
Let $\ell_0=\rdup{100\log_{\rho}\log n}$. Then w.h.p., $e_H(S)<\frac{|S|+1}{k-1}$ for all $S\subseteq [n], |S|\leq {\color{black} 10\ell_0}$. } \begin{proof} We have that \begin{align}
&\mathbb{P}\brac{\exists S:|S|\leq 10\ell_0,e_H(S)\geq \frac{|S|+1}{k-1}}\nonumber\\ &\leq \sum_{s=4}^{10\ell_0}\binom{n}{s}\binom{sr}{\frac{s+1}{k-1}} \bfrac{\binom{sr}{k-1}}{\binom{km-10k\ell_0}{k-1}}^\frac{s+1}{k-1}\label{erf1}\\ &\leq \sum_{s=4}^{10\ell_0}\bfrac{ne}{s}^s(er(k-1))^{\frac{s+1}{k-1}}\bfrac{rs}{rn-o(n)}^{s+1}\nonumber\\ &\leq\frac{1}{n^{1-o(1)}}\sum_{s=4}^{10\ell_0}se^s\brac{e(k-1)r}^{\frac{s+1}{k-1}} =o(1).\nonumber \end{align} {\color{black} {\bf Explanation for \eqref{erf1}:} we choose a set $S$ and then a set $X$ of $(s+1)/(k-1)$ members of $W_S=\bigcup_{i\in S}W_i$. We then estimate the probability that each member of $X$ is paired in $F$ with $k-1$ members of $W_S\setminus X$. For each $x\in X$, given some previous choices, there are at most $\binom{sr}{k-1}$ choices contained in $W_S$, out of at least $\binom{km-10k\ell_0}{k-1}$ choices overall.} \end{proof} Let ${\cal E}$ denote the high probability event in Lemma \ref{lemiso1}. We will condition on the occurrence of ${\cal E}$.
Now for $v\in [n]$, let $S_\ell(v)$ denote the set of vertices at distance $\ell$ from $v$ and let $S_{\leq \ell}(v)=\bigcup_{j\leq \ell}S_{j}(v)$. {\color{black} Here the distance between vertices $u,v$ is the minimum length of a path/sequence of edges $e_1,e_2,\ldots,e_k$ such that $u\in e_1,v\in e_k$ and $e_i\cap e_{i+1}\neq \emptyset$ for $1\leq i<k$.} We note that {\color{black} \beq{smallSk}{
|S_\ell(v)|\leq (k-1)r\rho^{\ell-1}\text{ for all }v\in [n],\ell\geq 1. }} Furthermore, Lemma \ref{lemiso1} implies that there exist $b_{r,k}<a_{r,k}<(k-1)r$ such that w.h.p., we have for all $v,w\in [n], 1\leq \ell\leq \ell_0$, \begin{align}
|S_\ell(v)|&\geq a_{r,k}\rho^{\ell-1}.\label{smallSk1}\\
|S_\ell(v)\setminus S_\ell(w)|&\geq b_{r,k}\rho^{\ell-1}.\label{smallSk1a} \end{align} {\color{black}
To see this, observe that $|S_{\ell+1}|=\rho|S_{\ell}|$ unless there are two vertices $x,y\in S_\ell$ and either (i) an edge $e$ of $\g(F)$ such that $e\supseteq \set{x,y}$ or (ii) a vertex $z\in S_{\ell+1}$ and edges $e,f$ of $\g(F)$ such that $e\supseteq \set{x,z}$ and $f\supseteq \set{y,z}$. Lemma \ref{lemiso1} implies that w.h.p. there is at most one such case of (i) or (ii) for $1\leq \ell\leq \ell_0$. Suppose that there are two distinct edges $e_i,i=1,2$ that cause (i) at levels $\ell_1,\ell_2$ and suppose that $\set{x_i,y_i}$ corresponds to $e_i,i=1,2$. Each $x\in S_\ell$ lies in the final edge of an $\ell$-length path $P_x$ from $v$ to $x$. Now ${\cal P}=P_{x_1}, P_{x_2},P_{y_1},P_{y_2}$ spans $\Pi\leq 2(\ell_1+\ell_2)\leq 2\ell_0$ edges and we can choose these paths to not contain $e_1$ or $e_2$. Furthermore, ${\cal P}$ spans at most $1+(k-1)\Pi$ vertices, since adding a new edge to a connected set of vertices adds at most $k-1$ new vertices. If we add $e_1,e_2$ to these $\Pi$ edges then we have at most $1+(k-1)\Pi+2(k-2)$ vertices spanning $\Pi+2$ edges and this contradicts Lemma \ref{lemiso1}. The remaining two cases (2 times (ii) or (i) and (ii)) can be argued similarly. So, typically adding an edge in the construction of $S_{\ell+1}$ adds $k-1$ new vertices. W.h.p., there is at most one exceptional case, and it adds only $k-2$ vertices. This explains \eqref{smallSk1}.
A similar argument yields \eqref{smallSk1a}. Having constructed $S_{\ell}(w)$, we see that typically adding an edge in the construction of $S_{\ell}(v)$ adds $k-1$ new vertices to the union $S_{\ell}(v)\cup S_{\ell_0}(w)$ and that w.h.p. it adds at least $k-2$ vertices. }
Now consider $\ell>\ell_0$. We perform breadth first search from $v$ or from $v,w$, exposing the configuration pairing as we go. Let an edge be {\em dispensable} if it contains two vertices already known to be in $S_{\leq \ell}$. {\color{black} The argument above} implies that w.h.p. there is at most one dispensable edge in $S_{\leq\ell_0}$. \LEM{lemiso2}{ With probability $1-o(n^{-2})$, (i) at most 20 of the first $n^{\frac{2}{5}}$ exposed edges are dispensable and (ii) at most $n^{1/4}$ of the first $n^{\frac{3}{5}}$ exposed edges are dispensable. } \begin{proof} The probability that the $\sigma$th edge is dispensable is at most \\{\color{black} $\frac{r((\sigma-1)(k-1)+1)(k-1)}{rn-k\sigma}$}, independent of the history of the process. {\color{black} (Knowing one vertex of this edge and choosing the rest of it, there are at most $r((\sigma-1)(k-1)+1)(k-1)$ choices out of at least $rn-k\sigma$ that will lead to this edge being dispensable.)} Hence, \multstar{ \mathbb{P}(\exists\; 20\text{ dispensable edges in the first }n^{2/5})\leq \binom{n^{2/5}}{20}\bfrac{{\color{black} rk^2}n^{2/5}}{rn-o(n)}^{20}\\ =o(n^{-2}). } \multstar{ \mathbb{P}(\exists\; n^{1/4}\text{ dispensable edges in the first }n^{3/5})\leq \binom{n^{3/5}}{n^{1/4}}\bfrac{{\color{black} rk^2}n^{3/5}}{rn-o(n)}^{n^{1/4}}\\ =o(n^{-2}). } \end{proof} Now let {\color{black} $\ell_1=\rdup{\log_{r-1}n^{2/5}}$ and $\ell_2=\rdup{\log_{r-1}n^{3/5}}$}. Then, we have that, conditional on ${\cal E}$, with probability $1-o(n^{-2})$, \begin{align*}
&|S_\ell(v)|\geq (a_{r,k}\rho^{\ell_0-1}-40)\rho^{\ell-\ell_0}:\;\ell_0<\ell\leq\ell_1.\\
&|S_\ell(v)|\geq (a_{r,k}\rho^{\ell_1-1}-40\rho^{\ell_1-\ell_0}-2n^{1/4})\rho^{\ell-\ell_1};\; \ell_1<\ell\leq\ell_2.\\
&|S_\ell(w)\setminus S_\ell(v)|\geq (b_{r,k}\rho^{\ell_0-1}-40)\rho^{\ell-\ell_0}:\;\ell_0<\ell\leq\ell_1.\\
&|S_\ell(w)\setminus S_\ell(v)|\geq (b_{r,k}\rho^{\ell_1-1}-40\rho^{\ell_1-\ell_0}-2n^{1/4})\rho^{\ell-\ell_1};\; \ell_1<\ell\leq\ell_2. \end{align*} We deduce from this that if $\ell_3=\rdup{\log_{r-1}n^{4/7}}$ and $\ell=\ell_3+a,a=O(1)$ then with probability $1-o(n^{-2})$, \begin{align*}
|S_\ell(w)|&\geq (a_{r,k}-o(1))\rho^{\ell-1}\approx a_{r,k}\rho^{a-1}n^{4/7}.\\
|S_\ell(w)\setminus S_\ell(v)|&\geq (b_{r,k}-o(1))\rho^{\ell-1}\approx b_{r,k}\rho^{a-1}n^{4/7}. \end{align*}
Suppose now that we consider the execution of breadth first search up until we have exposed {\color{black} $S_{\ell+1}(v)\cup S_{\ell+1}(w)$ and the edges $\wh{E}$ defining this set. We let $U=W\setminus \wh{E}$. We will show that we can find a position in the process so that conditioning up to this point, in order to have $|S_{\ell+1}(v)|=|S_{\ell+1}(w)|$ there will have to be a prescribed, but unlikely, outcome for a large number of edge selections.
Our conditioning includes all the choices of $e\in F$ that are necessary to construct $S_{\ell+1}(v)\cup S_{\ell}(w)$. We refer to a choice of $e$ as an {\em edge-selection}. After an edge-selection $e$, we update $U\gets U\setminus \set{e}$. Consider the edge-selections involving $W_x,x\in S_\ell(w)\setminus S_{\ell+1}(v)$. Now at most $n^{1/4}$ of these edge-selections involve vertices in $S_{\leq \ell+1}(v)\cup S_{\leq \ell}(w)$. Condition on these as well. There must now be $\l=\Theta(n^{4/7})$ further edge-selections containing elements of $W_x,x\in S_\ell(w)\setminus S_{\ell+1}(v)$ and $W_y,y\notin S_{\ell+1}(v)\cup S_\ell(w)$. Let $Z$ denote the vertices in $S_\ell(w)$ involved in these $\l$ edge-selections. Furthermore, to have $|S_{\ell+1}(v)|=|S_{\ell+1}(w)|$ these $\l$ selections must involve exactly $t$ of the sets $W_y,y\notin S_{\ell+1}(v)\cup S_\ell(w)$. Here $t$ is the unique value that will ensure that $|S_{\ell+1}(w)|=|S_{\ell+1}(v)|$. The important point is that $t$ is determined {\em before} the making of these $\l$ edge-selections. Let $R=\bigcup_{y\notin S_{\ell+1}(v)\cup S_\ell(w)}W_y$ at the point immediately prior to the $\l$ edge-selections. Let $S=R\cap \bigcup_{e:e\cap Z\neq \emptyset}e$ and note that $S$ is a random $s$-subset of $R$ for some $s=\Theta(n^{4/7})$.
The following lemma will easily show that w.h.p. $H$ has a canonical labeling defined by the values of $|S_{\ell}(v)|, 1\leq \ell\leq \ell^*,\,v\in [n]$. } \LEM{isomain}{ Let $R=\bigcup_{i=1}^\m R_i$ be a partition of an $r\m$-set $R$ into $\m$ subsets of size $r$. Suppose that $S$ is a random $s$-subset of $R$, where $\m^{5/9}<s<\m^{3/5}$. Let $X_S$ denote the number of sets $R_i$ intersected by $S$. Then \[ \max_j\mathbb{P}(X_S=j)\leq \frac{c_0\m^{1/2}}{s}, \] for some constant $c_0$. } \begin{proof} The probability that $S$ has at least 3 elements in some set $R_i$ is at most \[ \frac{\m\binom{r}{3}\binom{r\m-3}{s-3}}{\binom{r\m}{s}}\leq \frac{s^3}{6\m^2}\leq \frac{\m^{1/2}}{6s}. \] But \[
\mathbb{P}(X_S=j)\leq \mathbb{P}\brac{\max_i|S\cap R_i|\geq 3}+\mathbb{P}\brac{X_S=j\text{ and }\max_i|S\cap R_i|\leq 2}. \] So the lemma will follow if we prove that for every $j$, \beq{lemfollow}{
P_j=\mathbb{P}\brac{X_S=j\text{ and }\max_i|S\cap R_i|\leq 2}\leq \frac{c_1\m^{1/2}}{s}, } for some constant $c_1$.
Clearly, $P_j=0$ if $j<s/2$ and otherwise \beq{defPj}{ P_j=\frac{\binom{\m}{j}\binom{j}{s-j}r^{2j-s}\binom{r}{2}^{s-j}}{\binom{r\m}{s}}. } Now for $s/2\leq j<s$ we have \beq{Pjrat}{ \frac{P_{j+1}}{P_j}=\frac{(\m-j)(s-j)}{(2j+2-s)(2j+1-s)}\frac{2r}{r-1}. } We note that if $s-j\geq \frac{10s^2}{\m}$ then $\frac{P_{j+1}}{P_j}\geq \frac{10r}{3(r-1)}\geq 2$ and so the $j$ maximising $P_j$ is of the form $s-\frac{\alpha s^2}{\m}$ where $\alpha\leq 10$. If we substitute $j=s-\frac{\alpha s^2}{\m}$ into \eqref{Pjrat} then we see that \[
\frac{P_{j+1}}{P_j}\in \frac{2\alpha r}{r-1}\left[1\pm c_2\frac{s}{\m}\right] \] for some absolute constant $c_2>0$.
It follows that if $j_0$ is the index maximising $P_j$ then \[ \card{j_0-\brac{s-\frac{(r-1)s^2}{2r\m}}}\leq 1. \] Furthermore, if $j_1=j_0-\frac{s}{\m^{1/2}}$ then \[
\frac{P_{j+1}}{P_j}\leq 1+c_3\frac{\m^{1/2}}{s}\text{ for }j_1\leq j\leq j_0, \] for some absolute constant $c_3>0$.
This implies that for all $j_1 \le j \le j_0$, \multstar{ P_j\geq P_{j_0}\brac{1+c_3\frac{\m^{1/2}}{s}}^{-(j_0-j_1)}\\= P_{j_0}\exp\set{-(j_0-j_1)\brac{c_3\frac{\m^{1/2}}{s}+O\bfrac{\m}{s^2}}} \geq P_{j_0}e^{-2c_3}. } It follows from this that \[ P_{j_0} \le e^{2c_3} \min_{j \in [j_1,j_0]} P_j \le \frac{e^{2c_3}}{j_0 - j_1} \sum_{j \in [j_1,j_0]} P_j \le \frac{e^{2c_3}\m^{1/2}}{s}. \] \end{proof} We apply Lemma \ref{isomain} with $\m=n-o(n),s=\Theta(n^{4/7})$ to show that \[
\mathbb{P}(|S_\ell(v)|=|S_\ell(w)|,\ell\in [\ell_3,\ell_3+14])\leq \bfrac{c_0n^{1/2}}{n^{4/7}}^{15}=o(n^{-2}). \] This completes the proof of Theorem \ref{th3}.
\end{document}
Abundant number
In number theory, an abundant number or excessive number is a positive integer for which the sum of its proper divisors is greater than the number. The integer 12 is the first abundant number. Its proper divisors are 1, 2, 3, 4 and 6 for a total of 16. The amount by which the sum exceeds the number is the abundance. The number 12 has an abundance of 4, for example.
Definition
A number n for which the sum of divisors σ(n) > 2n, or, equivalently, the sum of proper divisors (or aliquot sum) s(n) > n.
Abundance is the value σ(n) − 2n (or s(n) − n).
Examples
The first 28 abundant numbers are:
12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60, 66, 70, 72, 78, 80, 84, 88, 90, 96, 100, 102, 104, 108, 112, 114, 120, ... (sequence A005101 in the OEIS).
For example, the proper divisors of 24 are 1, 2, 3, 4, 6, 8, and 12, whose sum is 36. Because 36 is greater than 24, the number 24 is abundant. Its abundance is 36 − 24 = 12.
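The definition translates directly into code. The following Python sketch (the function names are ours) computes the aliquot sum s(n) by trial division and tests abundance:

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n (the aliquot sum s(n))."""
    if n <= 1:
        return 0
    total, d = 1, 2          # 1 is a proper divisor of every n > 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting the square-root divisor
                total += n // d
        d += 1
    return total

def is_abundant(n):
    """True when s(n) > n, equivalently sigma(n) > 2n."""
    return aliquot_sum(n) > n

def abundance(n):
    """The amount s(n) - n by which the sum of proper divisors exceeds n."""
    return aliquot_sum(n) - n
```

For example, `aliquot_sum(12)` returns 16 and `abundance(12)` returns 4, matching the opening example, and the 28 numbers listed above are exactly the abundant numbers up to 120.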
Properties
• The smallest odd abundant number is 945.
• The smallest abundant number not divisible by 2 or by 3 is 5391411025 whose distinct prime factors are 5, 7, 11, 13, 17, 19, 23, and 29 (sequence A047802 in the OEIS). An algorithm given by Iannucci in 2005 shows how to find the smallest abundant number not divisible by the first k primes.[1] If $A(k)$ represents the smallest abundant number not divisible by the first k primes then for all $\epsilon >0$ we have
$(1-\epsilon )(k\ln k)^{2-\epsilon }<\ln A(k)<(1+\epsilon )(k\ln k)^{2+\epsilon }$
for sufficiently large k.
• Every multiple of a perfect number (except the perfect number itself) is abundant.[2] For example, every multiple of 6 greater than 6 is abundant because $1+{\tfrac {n}{2}}+{\tfrac {n}{3}}+{\tfrac {n}{6}}=n+1.$
• Every multiple of an abundant number is abundant.[2] For example, every multiple of 20 (including 20 itself) is abundant because ${\tfrac {n}{2}}+{\tfrac {n}{4}}+{\tfrac {n}{5}}+{\tfrac {n}{10}}+{\tfrac {n}{20}}=n+{\tfrac {n}{10}}.$
• Consequently, infinitely many even and odd abundant numbers exist.
• Furthermore, the set of abundant numbers has a non-zero natural density.[3] Marc Deléglise showed in 1998 that the natural density of the set of abundant numbers and perfect numbers is between 0.2474 and 0.2480.[4]
• An abundant number which is not the multiple of an abundant number or perfect number (i.e. all its proper divisors are deficient) is called a primitive abundant number.
• An abundant number whose abundance is greater than that of any lower number is called a highly abundant number, and one whose relative abundance (i.e. s(n)/n) is greater than that of any lower number is called a superabundant number.
• Every integer greater than 20161 can be written as the sum of two abundant numbers.[5]
• An abundant number which is not a semiperfect number is called a weird number.[6] An abundant number with abundance 1 is called a quasiperfect number, although none have yet been found.
• Every abundant number is a multiple of either a perfect number or a primitive abundant number.
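The bound 20161 mentioned above can be verified by brute force. Below is an illustrative sketch (ours, using nothing beyond the definition): a divisor sieve computes aliquot sums up to a limit, and a predicate then tests whether a given integer is a sum of two abundant numbers (repeats allowed, e.g. 24 = 12 + 12).

```python
def sum_of_two_abundant_checker(limit):
    """Return a predicate telling whether an n <= limit is the sum of two
    abundant numbers. Aliquot sums are computed with a divisor sieve."""
    s = [0] * (limit + 1)
    for d in range(1, limit // 2 + 1):
        for m in range(2 * d, limit + 1, d):
            s[m] += d                      # d is a proper divisor of m
    abundant = [n for n in range(1, limit + 1) if s[n] > n]
    ab_set = set(abundant)
    def check(n):
        return any(n - a in ab_set for a in abundant if 2 * a <= n)
    return check
```

With this predicate one can confirm that 20161 itself is not such a sum, while the integers immediately above it all are.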
Related concepts
Numbers whose sum of proper factors equals the number itself (such as 6 and 28) are called perfect numbers, while numbers whose sum of proper factors is less than the number itself are called deficient numbers. The first known classification of numbers as deficient, perfect or abundant was by Nicomachus in his Introductio Arithmetica (circa 100 AD), which described abundant numbers as like deformed animals with too many limbs.
The abundancy index of n is the ratio σ(n)/n.[7] Distinct numbers n1, n2, ... (whether abundant or not) with the same abundancy index are called friendly numbers.
The sequence (ak) of least numbers n such that σ(n) > kn, in which a2 = 12 corresponds to the first abundant number, grows very quickly (sequence A134716 in the OEIS).
The smallest odd integer with abundancy index exceeding 3 is 1018976683725 = 33 × 52 × 72 × 11 × 13 × 17 × 19 × 23 × 29.[8]
If p = (p1, ..., pn) is a list of primes, then p is termed abundant if some integer composed only of primes in p is abundant. A necessary and sufficient condition for this is that the product of pi/(pi − 1) be > 2.[9]
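The stated criterion can be evaluated exactly with rational arithmetic. A small sketch (the function name is ours):

```python
from fractions import Fraction
from math import prod  # Python 3.8+

def primes_admit_abundant(ps):
    """True iff some integer composed only of the primes in ps is abundant,
    using the criterion stated above: prod p/(p-1) over the list exceeds 2."""
    return prod(Fraction(p, p - 1) for p in ps) > 2
```

For instance, the criterion fails for (2) alone (powers of 2 are always deficient) but holds for (3, 5, 7), consistent with 945 = 3³ × 5 × 7 being the smallest odd abundant number.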
References
1. D. Iannucci (2005), "On the smallest abundant number not divisible by the first k primes", Bulletin of the Belgian Mathematical Society, 12 (1): 39–44
2. Tattersall (2005) p.134
3. Hall, Richard R.; Tenenbaum, Gérald (1988). Divisors. Cambridge Tracts in Mathematics. Vol. 90. Cambridge: Cambridge University Press. p. 95. ISBN 978-0-521-34056-4. Zbl 0653.10001.
4. Deléglise, Marc (1998). "Bounds for the density of abundant integers". Experimental Mathematics. 7 (2): 137–143. CiteSeerX 10.1.1.36.8272. doi:10.1080/10586458.1998.10504363. ISSN 1058-6458. MR 1677091. Zbl 0923.11127.
5. Sloane, N. J. A. (ed.). "Sequence A048242 (Numbers that are not the sum of two abundant numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
6. Tattersall (2005) p.144
7. Laatsch, Richard (1986). "Measuring the abundancy of integers". Mathematics Magazine. 59 (2): 84–92. doi:10.2307/2690424. ISSN 0025-570X. JSTOR 2690424. MR 0835144. Zbl 0601.10003.
8. For smallest odd integer k with abundancy index exceeding n, see Sloane, N. J. A. (ed.). "Sequence A119240 (Least odd number k such that sigma(k)/k >= n.)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
9. Friedman, Charles N. (1993). "Sums of divisors and Egyptian fractions". Journal of Number Theory. 44 (3): 328–339. doi:10.1006/jnth.1993.1057. MR 1233293. Zbl 0781.11015.
• Tattersall, James J. (2005). Elementary Number Theory in Nine Chapters (2nd ed.). Cambridge University Press. ISBN 978-0-521-85014-8. Zbl 1071.11002.
External links
• The Prime Glossary: Abundant number
• Weisstein, Eric W. "Abundant Number". MathWorld.
• Abundant number at PlanetMath.
Like logarithmic terms
The logarithmic terms which contain same logarithmic coefficients are called like logarithmic terms.
Two or more logarithmic terms often appear similar when they are compared. This happens when the logarithmic terms contain the same logarithmic coefficient. Due to this shared logarithmic coefficient, such logarithmic terms are called like logarithmic terms.
Examine the following examples to identify the like logarithmic terms.
$(1) \,\,\,$ $6\log_{3}{7}$ and $-8\log_{3}{7}$
Express both terms as products of factors by the factorization (or factorisation) method.
$6 \times \log_{3}{7}$ and $-8 \times \log_{3}{7}$
$6$ and $-8$ are different numbers. $\log_{3}{7}$ is the logarithmic coefficient of $6$ and $-8$ in both terms. Therefore, $6\log_{3}{7}$ and $-8\log_{3}{7}$ are similar in appearance and are known as like logarithmic terms.
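The bookkeeping in this example can be mirrored in code. Below is a small illustrative sketch (the representation is ours, not part of the text): each term $c\log_{b}{x}$ is stored as a pair of its numeric factor $c$ and the key $(b, x)$ of its logarithmic coefficient, and like terms are exactly those sharing a key, so their coefficients add.

```python
from collections import defaultdict

def combine_like_log_terms(terms):
    """terms: list of (coefficient, (base, argument)) pairs, each encoding
    coefficient * log_base(argument). Terms with the same (base, argument)
    key are like logarithmic terms and their coefficients add."""
    combined = defaultdict(int)
    for coeff, key in terms:
        combined[key] += coeff
    return dict(combined)
```

For example 1 above, $6\log_{3}{7} - 8\log_{3}{7}$ combines to the single term $-2\log_{3}{7}$, while an unlike term such as $5\log_{2}{7}$ would be kept separate.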
$(2) \,\,\,$ $d\log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big)\log_{a}{xy}$ and $0.6\log_{f}{x}\log_{a}{xy}$
Once again, factorize (or factorise) all three logarithmic terms to identify the common logarithmic coefficients.
$d \times \log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big) \times \log_{a}{xy}$ and $0.6 \times \log_{f}{x} \times \log_{a}{xy}$
$\log_{a}{xy}$ is a logarithmic coefficient of $d$ in the first term, a logarithmic coefficient of $\dfrac{1}{c}$ in the second term and also a logarithmic coefficient of $0.6\log_{f}{x}$ in the third term.
In this case, the factor $\log_{f}{x}$ is a logarithmic coefficient of $0.6\log_{a}{xy}$, but it does not appear in the remaining two terms. Due to the common involvement of $\log_{a}{xy}$ in all three terms, the three log terms appear similar. Hence, $d\log_{a}{xy}$, $\Big(\dfrac{1}{c}\Big)\log_{a}{xy}$ and $0.6\log_{f}{x}\log_{a}{xy}$ are called like logarithmic terms.
\begin{document}
\title{\large \bf Tightly Localized Stationary Pulses in Multi-Level Atomic System}
\author{Xiong-Jun Liu$^{a}$\footnote{Electronic address: [email protected]}, Xin Liu$^{b}$, Zheng-Xin Liu$^{c}$, L. C. Kwek$^{a,d}$ and C. H. Oh$^{a}$\footnote{Electronic address: [email protected]}} \affiliation{a. Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 \\ b. Department of Physics, Texas A\&M University, College Station, Texas 77843-4242, USA\\ c. Theoretical Physics Division, Nankai Institute of Mathematics,Nankai University, Tianjin 300071, P.R.China\\ d. National Institute of Education, Nanyang Technological University, 1 Nanyang Walk, Singapore 639798}
\begin{abstract} We show that the pulse matching phenomenon can be obtained in the general multi-level system with electromagnetically induced transparency (EIT). For this we find a novel way to create tightly localized stationary pulses by using counter-propagating pump fields. The present process is a spatial compression of excitation, so it allows us to shape and further intensify the localized stationary pulses without using standing waves of pump fields or spatially modulated pump fields.
\pacs{42.50.Gy, 03.67.-a, 42.50.Dv} \end{abstract}
\maketitle
\indent Recently, an important advance in electromagnetically induced transparency (EIT) \cite{EIT,exp1,6,exp2,4,5,wu} is that the experimental realization of the coherent control of stationary pulses was achieved by using standing waves of pump fields in the three-level system \cite{nature,three-level}. The creation of stationary pulses can enhance the nonlinear couplings between photons or collective excitations corresponding to stored photons, both of which are useful for deterministic logic operations. The key point in the creation of stationary pulses is expressed by the pulse matching phenomenon between the forward (FW) and backward (BW) propagating probe fields \cite{nature,three-level,four}. For a three-level system, the technique to generate tightly localized stationary pulses involves the use of standing waves of pump fields with a frequency-comb or spatially modulated pump fields \cite{three-level}. However, such tight localization is created by a filtering process rather than a compression of excitation. Thus the three-level technique cannot be applied directly to applications in quantum nonlinear optics.
On the other hand, coherent manipulation of probe lights has been studied in the four-level double $\Lambda$-type system \cite{exp3,liu} and also in the general multi-level atomic system that interacts with many probe and pump fields \cite{multi-level,level2}. It has also been shown in Ref. \cite{multi-level} that one can convert different probe lights into each other by manipulating the external pump fields based on such a general EIT method, indicating a sort of pulse matching phenomenon between all applied probe fields. This observation also motivates us to seek a new technique for creating localized stationary pulses based on a multi-level atomic system. In this rapid communication, we shall show that tightly localized stationary pulses can be obtained through a spatial compression of excitation in the general multi-level EIT system.
\begin{figure}
\caption{(color online) (a) General $m$-level atomic system coupled to $m-2$ quantized probe and classical pump fields which propagate in $+z$ or $-z$ directions. (b) No. $1$ to No. $m-3$ pump/probe pulses propagate in the $+z$ direction, while No. $m-2$ pump/probe pulse propagates in the $-z$ direction.}
\label{1}
\end{figure}
We consider the quasi-one dimensional system shown in Fig. 1(a) for an ensemble of $m$-level atoms interacting with $m-2$
quantized probe fields which couple the transitions from the ground state $|b\rangle$ to excited state $|e_{\sigma}\rangle$
$(1\leq \sigma\leq m-2)$ with coupling constants $g_{\sigma}$, and $m-2$ classical pump fields which couple the transitions from the state $|c\rangle$ to excited ones $|e_{\sigma}\rangle$ with Rabi-frequencies $\Omega_{\sigma}(z,t)$. All probe and pump fields are co-propagating in the $+z$ or $-z$ direction (Fig. 1(b)), and \begin{eqnarray}\label{eqn:field1} E_{\sigma}(z,t)&=&\sqrt{\frac{\hbar\nu_{\sigma}}{2\epsilon_0V}}\hat{\cal E}_{\sigma}(z,t)e^{i(k_{p{\sigma}}z-\nu_{\sigma}t)},\nonumber\\ \\ \Omega_{\sigma}(z,t)&=&\Omega_{\sigma0}e^{i(k_{c{\sigma}}z-\omega_{\sigma}t)}\nonumber \end{eqnarray} where $\sigma=1,2,...,m-2$, $\hat{\cal E}_{\sigma}$ and $\Omega_{{\sigma}0}$ are slowly-varying amplitudes, and $k_{p{\sigma}}$ and $k_{c{\sigma}}$, respectively the $z$-component wave vectors of the probe and pump fields, can be positive or negative. For $k_{p{\sigma}}>0$ and $k_{c{\sigma}}>0$ ($k_{p{\sigma}}<0$ and $k_{c{\sigma}}<0$), the ${\sigma}$th pair of probe and pump fields propagate in the $+z$ ($-z$) direction. We consider all transitions to be at resonance. Under the rotating-wave approximation, the interaction Hamiltonian can be written as: \begin{eqnarray}\label{eqn:H1} \hat{\mathcal V}&=&-\int\frac{dz}{L}\bigl(\hbar N\sum_{\sigma=1}^{m-2} g_\sigma\widetilde\sigma_{e_{\sigma}b}(z,t)\hat {\cal E}_\sigma(z,t)+\nonumber\\ &&+\hbar N\sum_{\sigma=1}^{m-2}\Omega_{\sigma0}(t) \widetilde\sigma_{e_{\sigma}c}(z,t)+h.c.\bigr), \end{eqnarray} where $N$ is the total atom number, $L$ is the length of the medium in the $z$ direction, and the continuous atomic variables $\widetilde\sigma_{\mu\nu}(z,t) =\frac{1}{N_z}\sum_{z_j\in N_z}{\hat\sigma}_{\mu\nu}^j(t)$ are defined by a collection of $N_z\gg 1$ atoms in a very small length interval $\Delta z$
\cite{6}. $\hat\sigma_{e_{\sigma}b}^j=|e_{\sigma}^j\rangle\langle b^j| \, {\rm e}^{-i(k_{p{\sigma}}z-\omega_{e_{\sigma}b}t)}$ and
$\hat\sigma_{e_{\sigma}c}^j=|e_{\sigma}^j\rangle\langle c^j| \, {\rm e}^{-i(k_{c{\sigma}}z-\omega_{e_{\sigma}c}t)}$ are the slowly-varying parts of the $j$th atomic flip operator. We note that an essential difference between our model and the three-level case is that for the case of multi-frequency optical pulses, here the one- and two-photon detunings can be avoided for all optical transitions, and no standing waves of pump fields or spatially modulated pump fields are used.
The evolution of the slowly-varying amplitudes $\hat{\cal E}_\sigma(z,t)$ can be described by the propagation equations \begin{eqnarray}\label{field equation} \left(\frac{\partial}{\partial t}+\frac{\nu_\sigma}{k_{p\sigma}}\frac{\partial}{\partial z}\right) \hat {\cal E}_\sigma(z,t)= ig_\sigma N\widetilde\sigma_{be_\sigma}(z,t), \end{eqnarray} where we note $\nu_\sigma/k_{p\sigma}=\pm c$ for the $\pm z$ directional propagation field. Under the condition of low excitation, i. e. $\widetilde\sigma_{bb}\approx1$, the atomic evolution governed by the Heisenberg-Langevin equations can be obtained by \begin{eqnarray}\label{eqn:1} \dot{\widetilde\sigma}_{be_\sigma}=-\gamma_{be_\sigma} {\widetilde\sigma}_{be_\sigma} +ig_\sigma\hat{\cal E}_\sigma+i\Omega_{\sigma0}{\widetilde\sigma}_{bc} +F_{be_\sigma}, \end{eqnarray} \begin{eqnarray}\label{eqn:2} \dot{\widetilde\sigma}_{bc}= i\sum_{\sigma=1}^{m-2}\Omega_{\sigma0}{\widetilde\sigma}_{be_\sigma} -i\sum_{\sigma=1}^{m-2}g_\sigma\hat {\cal E}_\sigma{\widetilde\sigma}_{e_\sigma c}+F_{bc}, \end{eqnarray} \begin{eqnarray}\label{eqn:3} \dot{\widetilde\sigma}_{ce_\sigma}=-\gamma_{ce_\sigma} {\widetilde\sigma}_{ce_\sigma} +i\sum_{\sigma=1}^{m-2}g_\sigma\hat{\cal E}_\sigma{\widetilde\sigma}_{cb} +F_{ce_\sigma}, \end{eqnarray} where $\gamma_{\mu\nu}$ are the transversal decay rates that will be assumed $\gamma_{be_\sigma}=\Gamma$ in the following derivation and $F_{\mu\nu}$ are $\delta$-correlated Langevin noise operators. From the Eq. (\ref{eqn:1}) we find in the lowest order: ${\widetilde\sigma}_{be_\sigma}=(ig_\sigma\hat{\cal E}_\sigma+i\Omega_{\sigma0}{\widetilde\sigma}_{bc} +F_{be_\sigma})/\Gamma$. Substitute this result into Eq. 
(\ref{eqn:2}) yields $\dot{\widetilde\sigma}_{bc}= -\Gamma^{-1}\Omega_0^2{\widetilde\sigma}_{bc} -\Gamma^{-1}\sum_{\sigma=1}^{m-2}g_\sigma\Omega_{\sigma0}\hat {\cal E}_\sigma-i\sum_{\sigma=1}^{m-2}g_\sigma\hat {\cal E}_\sigma{\widetilde\sigma}_{e_\sigma c}$, where $\Omega_0=\sqrt{\sum_{\sigma=1}^{m-2}\Omega^2_{\sigma0}}$. The Langevin noise terms are neglected in the present results, since under the adiabatic condition the Langevin noise terms have no effect on the EIT quantum memory technique \cite{6}. For our purpose we shall calculate $\widetilde\sigma_{bc}$ to first order, neglecting the small time derivatives of $\Omega_{\sigma0}$, thus \begin{eqnarray}\label{eqn:coherence4} \widetilde\sigma_{bc}&\approx&-\frac{1}{\Omega_0^2}\sum_{\sigma=1}^{m-2}g_\sigma\Omega_{\sigma0}\hat {\cal E}_\sigma -\frac{1}{\Omega_0^4}\sum_{jk\sigma}g_jg_kg_\sigma\Omega_{\sigma0}\hat{\cal E}_j^{\dag}\hat{\cal E}_k\hat{\cal E}_\sigma\nonumber\\ &&+\frac{\Gamma}{\Omega_0^4}\sum_{\sigma=1}^{m-2}g_\sigma\Omega_{\sigma0}\partial_t\hat {\cal E}_\sigma. \end{eqnarray} The second term on the right-hand side of the above equation represents the nonlinear couplings between the probe pulses.
The dark-state polaritons (DSPs) in the general multi-level EIT system were first obtained in \cite{multi-level}, where single-mode probe pulses were considered. Accordingly, the dark- and bright-state polaritons (BSPs) in the present general multi-level system can be defined by: \begin{eqnarray}\label{eqn:DSP} \hat\Psi(z,t)&=&\cos\theta\prod_{j=1}^{m-3}\cos\phi_j\hat {\cal E}_1\nonumber\\ &&+\cos\theta\sum_{l=2}^{m-2}\sin\phi_{l-1}\prod_{j=l}^{m-3}\cos\phi_j\hat {\cal E}_l\nonumber\\ &&-\sin\theta(t)\, \sqrt{N}\, \widetilde\sigma_{bc}(z,t), \end{eqnarray} \begin{eqnarray}\label{eqn:BSP} \hat\Phi(z,t)&=&\sin\theta\prod_{j=1}^{m-3}\cos\phi_j\hat {\cal E}_1\nonumber\\ &&+\sin\theta\sum_{l=2}^{m-2}\sin\phi_{l-1}\prod_{j=l}^{m-3}\cos\phi_j\hat {\cal E}_l\nonumber\\ &&+\cos\theta(t)\, \sqrt{N}\, \widetilde\sigma_{bc}(z,t), \end{eqnarray} which are superpositions of the atomic coherence and the $m-2$ probe fields. The mixing angles $\theta$ and $\phi_j$ in the new quantum fields are defined through \begin{equation}\label{eqn:10} \tan\theta=\frac{g_1g_2...g_{m-2}\sqrt{N}}{\bigr[\sum_{j=1}^{m-2}\bigr(\Omega_{j0}^2\prod_{l=1,l\neq j}^{m-2}g_l^2\bigr)\bigr]^{1/2}}\nonumber \end{equation} and \begin{equation}\label{eqn:11} \tan\phi_j=\frac{\prod_{l=1}^{j}g_l\Omega_{j+1,0}}{\bigr[\sum_{l=1}^{j}\bigr(\Omega^2_{l0}\prod_{s=1,s\neq l}^{j+1}g^2_s\bigr)\bigr]^{1/2}}.\nonumber \end{equation}
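As a numerical sanity check (not part of the original derivation; all values of $g_\sigma$, $\Omega_{\sigma0}$ and $N$ below are hypothetical), the following sketch computes the mixing angles from Eqs. (\ref{eqn:10})-(\ref{eqn:11}) and verifies that the coefficients of the probe fields and of the spin coherence in $\hat\Psi$ are normalized, i.e., their squares sum to unity:

```python
import math

def mixing_angles(g, om, N):
    """Mixing angles theta and phi_j of Eqs. (10)-(11); g and om hold
    g_sigma and Omega_{sigma 0} for sigma = 1 .. m-2 (0-indexed)."""
    m2 = len(g)
    denom = math.sqrt(sum(om[j] ** 2 * math.prod(g[l] ** 2 for l in range(m2) if l != j)
                          for j in range(m2)))
    theta = math.atan2(math.prod(g) * math.sqrt(N), denom)
    phis = []
    for j in range(1, m2):  # j = 1 .. m-3 in the paper's notation
        num = math.prod(g[:j]) * om[j]
        den = math.sqrt(sum(om[l] ** 2 * math.prod(g[s] ** 2 for s in range(j + 1) if s != l)
                            for l in range(j)))
        phis.append(math.atan2(num, den))
    return theta, phis

def dsp_coefficients(theta, phis):
    """Coefficients of E_1 .. E_{m-2} and of the spin coherence in Psi."""
    n = len(phis)
    coeffs = [math.cos(theta) * math.prod(math.cos(p) for p in phis)]
    for l in range(2, n + 2):  # l = 2 .. m-2
        c = math.cos(theta) * math.sin(phis[l - 2])
        c *= math.prod(math.cos(phis[j]) for j in range(l - 1, n))
        coeffs.append(c)
    coeffs.append(-math.sin(theta))
    return coeffs

g = [1.0e5, 1.3e5, 0.8e5]      # hypothetical couplings (m = 5, three probes)
om = [2.0e6, 1.5e6, 2.5e6]     # hypothetical pump Rabi frequencies
theta, phis = mixing_angles(g, om, N=1e8)
coeffs = dsp_coefficients(theta, phis)
print(sum(c * c for c in coeffs))   # ~1: the polariton is normalized
```

The normalization holds for any choice of the angles, since the coefficients telescope as nested sines and cosines.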
Using the definitions above, one can transform the equations of motion for the probe fields and the atomic variables into the new field variables. With the low-excitation approximation and neglecting the nonlinear effects we find that the DSP field satisfies \begin{widetext} \begin{eqnarray}\label{eqn:DSP1} \bigr(\partial_t +c\cos^2\theta\cos\alpha_{m-2} \partial_z\bigr)\, \hat\Psi &=&-\dot\theta\, \hat\Phi+\sum_{j=1}^{m-2}\dot\phi_j\cos\theta \, \hat s_j-\frac{c}{2}\sin2\theta\cos\alpha_{m-2}\, \partial_z\hat\Phi\nonumber\\ &&+c\cos\theta\sum_{j=1}^{m-2}\prod_{l=j}^{m-3}\cos\phi_l\sin2\phi_{j-1}\bigr(\frac{1}{2c}\frac{\nu_j}{k_{pj}} +\frac{\cos\alpha_{j-1}}{2}\bigr)\partial_z\hat s_j, \end{eqnarray} \end{widetext} where we have defined $$\cos\alpha_\sigma=c\frac{\sum_{j=1}^{\sigma}\frac{k_{pj}}{\nu_j}\Omega_{j0}^2\prod_{l=1,l\neq j}^{\sigma}g_l^2}{\sum_{j=1}^{\sigma}\Omega_{j0}^2\prod_{l=1,l\neq j}^{\sigma}g_l^2}, \ \sigma=1,2,...,m-3$$ and $\hat s_j=\partial_{\phi_j}\hat\Psi/\cos\theta$. It then follows that $\hat s_1=\prod_{j=2}^{m-3}\cos\phi_j(-\sin\phi_1{\cal E}_1+\cos\phi_1{\cal E}_2)$, $\hat s_2=\prod_{j=3}^{m-3}\cos\phi_j(-\sin\phi_2(\cos\phi_1{\cal E}_1+\sin\phi_1{\cal E}_2)+\cos\phi_2{\cal E}_3)$, and generally \begin{eqnarray}\label{eqn:s1} \hat s_k=\prod_{j=k+1}^{m-3}\cos\phi_j\hat{\cal s}_k, \ \ k=1,2,...,m-3, \end{eqnarray} with \begin{eqnarray}\label{eqn:s2} \hat{\cal s}_k&=&\cos\phi_k{\cal E}_{k+1}-\sin\phi_k\sum_{m=2}^k\bigr(\prod_{l=m}^{k-1}\cos\phi_l\bigr)\sin\phi_{m-1}{\cal E}_m\nonumber\\ &-&\sin\phi_k\prod_{l=1}^{k-1}\cos\phi_l{\cal E}_1. 
\end{eqnarray} On the other hand, the equation for the BSP field can be obtained as \begin{widetext} \begin{eqnarray}\label{eqn:BSP1} \Phi=\frac{\Gamma}{\sqrt{N}}\biggl(\sum_{j=1}^{m-2}\bigr(\frac{\Omega_{j0}/\Omega_0}{g_j}\bigr)^2\biggr)^{1/2} \frac{\cos\theta}{\Omega_0}\partial_t(\sin\theta\Psi-\cos\theta\Phi)+\cos\theta\bigr(g_1\Omega_{10}\sin\phi_1\hat s_1-\sum_{l=2}^{m-3}g_l\Omega_{l0}\cos\phi_{l-1}\hat s_{l-1}\bigr). \end{eqnarray} \end{widetext} By comparing Eqs. (\ref{eqn:DSP1}) and (\ref{eqn:BSP1}) with the corresponding DSP and BSP fields in the three-level system, one can see that a key difference is the appearance of $\hat s_j(z,t)$ from the probe fields in our model. The adiabatic condition in the present case can be fulfilled only if $\hat s_j(z,t)=0$ for all $j$. However, the input probe pulses are generally independent of each other, so the fields $\hat s_j$ need not be zero. To study the dynamics of the DSP field, we should therefore first investigate the pulse matching between all the probe fields needed for the adiabatic condition. Bearing these ideas in mind, we next explore the evolution of a set of normal fields by introducing \begin{eqnarray}\label{eqn:probe1} \hat G_{j,j+1}=-\sin\phi_{j,j+1}\hat{\cal E}_j(z,t)+\cos\phi_{j,j+1}\hat{\cal E}_{j+1}(z,t), \end{eqnarray} where $j=1, 2, ..., m-3$ and $\tan\phi_{j,j+1}=g_j\Omega_{j+1,0}/g_{j+1}\Omega_{j0}$. From Eq. 
(\ref{field equation}), together with the results for ${\widetilde\sigma}_{be_\sigma}$ and ${\widetilde\sigma}_{bc}$, one can verify that the field $\hat G_{j,j+1}$ satisfies the equation \begin{widetext} \begin{eqnarray}\label{eqn:field3} (\partial_t-c\cos^2\beta\cos2\phi_{j,j+1}\partial_z)\hat G_{j,j+1}=-\frac{(g_j^2\Omega_{j+1}^2+g_{j+1}^2\Omega_j^2)N}{\Gamma}\frac{\cos^2\beta}{\Omega_0^2}\hat G_{j,j+1}\nonumber\\ -\frac{1}{2}g_jg_{j+1}\sqrt{N}\sin2\beta\partial_t\hat {\cal E}_{j,j+1}+c\cos^2\beta\sin2\phi_{j,j+1}\partial_z\hat {\cal E}_{j,j+1} +F(\hat {\cal E}_\sigma, \sigma\neq j,j+1) \end{eqnarray} \end{widetext} with \begin{eqnarray}\label{eqn:mix3} \tan^2\beta=\frac{N\Omega_j^2\Omega_{j+1}^2}{g_j^2\Omega_{j+1}^2+g_{j+1}^2\Omega_j^2} \frac{(g_j^2-g_{j+1}^2)^2}{\Omega_0^4},\nonumber \end{eqnarray} and $\hat {\cal E}_{j,j+1}=\cos\phi_{j,j+1}\hat{\cal E}_j(z,t)+\sin\phi_{j,j+1}\hat{\cal E}_{j+1}(z,t)$. $F(\hat {\cal E}_\sigma)$ contains no $\hat {\cal E}_j$ or $\hat {\cal E}_{j+1}$. The first term on the right-hand side of Eq. (\ref{eqn:field3}) describes a very strong absorption of $\hat G_{j,j+1}$, which causes this field to decay rapidly, so that the system satisfies the pulse matching condition \cite{match,match2,multi-level}: $\hat {\cal E}_{j+1}\rightarrow\tan\phi_{j,j+1}\hat {\cal E}_j$. For an explicit numerical estimate, we take some typical values \cite{exp1,exp2}: $g_j\approx g_{j+1}\sim10^5\,{\rm s}^{-1}$, $N\approx10^8$, $\Gamma\approx10^8\,{\rm s}^{-1}$, so that the lifetime of the field $\hat G_{j,j+1}(z,t)$ is $\Delta t<10^{-8}\,{\rm s}$, which is much shorter than the storage time \cite{exp2}. Furthermore, by introducing the adiabatic parameter $\tau\equiv(\sum_j(1/g_j)^2)^{1/2}/\sqrt{N}T$, where $T$ is the characteristic time scale, we can evaluate Eq. (\ref{eqn:BSP1}) to lowest order in $\tau$ and obtain $\hat\Phi\approx0, \hat G_{j,j+1}\approx0$. 
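The order-of-magnitude estimate quoted above can be reproduced in a few lines (a rough sketch; the dimensionless factor $\cos^2\beta\,(g_j^2\Omega_{j+1}^2+g_{j+1}^2\Omega_j^2)/\Omega_0^2$ is taken to be of order one, an assumption of this sketch, and the numbers are the typical values cited in the text):

```python
# Order-of-magnitude estimate of the absorption rate of G_{j,j+1} in the
# first term of Eq. (field3), using the typical values quoted in the text.
g = 1e5          # coupling constant, g_j ~ g_{j+1}  [1/s]
N = 1e8          # number of atoms
Gamma = 1e8      # transversal decay rate  [1/s]

decay_rate = g ** 2 * N / Gamma   # leading factor of the absorption term [1/s]
lifetime = 1.0 / decay_rate       # [s]
print(lifetime)                   # ~1e-10 s, well below the quoted 1e-8 s bound
```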
On the other hand, under the condition of pulse matching, one can verify that $\hat s_j(z,t)\propto\hat G_{j,j+1}=0$. Thus equation (\ref{eqn:DSP1}) is reduced to the shape- and state-preserving case \begin{eqnarray}\label{eqn:DSP2} \bigr(\partial_t +c\cos^2\theta\cos\alpha_{m-2} \partial_z\bigr)\, \hat\Psi(z,t)=0. \end{eqnarray}
The formula (\ref{eqn:DSP2}) is the main result of the present work. The group velocity of the DSP field is \begin{eqnarray}\label{eqn:group} V_g=\cos^2\theta\frac{\sum_{j=1}^{m-2}\frac{\nu_j}{k_{pj}}\Omega_{j0}^2\prod_{l=1,l\neq j}^{m-2}g_l^2}{\sum_{j=1}^{m-2}\Omega_{j0}^2\prod_{l=1,l\neq j}^{m-2}g_l^2}. \end{eqnarray} One should bear in mind that the wave vectors $k_{pj}$ can be positive (in the $+z$ direction) or negative (in the $-z$ direction). Thus, by properly adjusting the Rabi frequencies of the external pump fields under the adiabatic condition so that $\cos\alpha_{m-2}=0$, we can obtain a zero group velocity for the DSP field. In particular, in an experiment one may send pump/probe pulses No. $1$ to No. $m-3$ in the $+z$ direction and pump/probe pulse No. $m-2$ in the $-z$ direction (Fig. 1(b)), with $\Omega_{m-2,0}^2=\sum_{j=1}^{m-3}\frac{g_{m-2}^2}{g_j^2}\Omega_{j0}^2$, so that the group velocity $V_g=0$. In this way, we create multi-frequency stationary pulses whose components are \begin{eqnarray}\label{eqn:stationary} \hat{\cal E}_1&=&\cos\theta\prod_{j=1}^{m-3}\cos\phi_j\Psi(z,t),\nonumber\\ \hat{\cal E}_l&=&\cos\theta\sin\phi_{l-1}\prod_{j=l}^{m-3}\cos\phi_j\Psi(z,t),\\ l&=&2,...,m-2.\nonumber \end{eqnarray} These components interfere to create a sharp spatial envelope. It is helpful to compare our results with those obtained for a three-level system \cite{three-level,nature}: i) In the present system, all optical pulses can couple in resonance to the corresponding atomic transitions, thus all the applied probe fields with different frequencies contribute to the generation of stationary pulses. This means the present process is a spatial compression of excitation, which allows us to shape and intensify the localized stationary pulses. Our technique can therefore be expected to further enhance nonlinear couplings and to be applied straightforwardly, e.g., in quantum nonlinear optics \cite{nonlinear}. 
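The zero-velocity condition can be checked numerically. The sketch below (the coupling constants, Rabi frequencies and the value of $\theta$ are hypothetical) evaluates Eq. (\ref{eqn:group}) with the last pump/probe pair counter-propagating and its Rabi frequency chosen according to the matching condition, and confirms that $V_g$ vanishes:

```python
import math

c = 3.0e8  # speed of light [m/s]

def group_velocity(theta, g, om, sign):
    """V_g from Eq. (group); sign[j] = +1/-1 encodes nu_j / k_{pj} = +/- c."""
    m2 = len(g)
    num = sum(sign[j] * c * om[j] ** 2 * math.prod(g[l] ** 2 for l in range(m2) if l != j)
              for j in range(m2))
    den = sum(om[j] ** 2 * math.prod(g[l] ** 2 for l in range(m2) if l != j)
              for j in range(m2))
    return math.cos(theta) ** 2 * num / den

g = [1.0e5, 1.2e5, 0.9e5]       # hypothetical couplings, m - 2 = 3 probes
om = [2.0e6, 3.0e6]             # Omega_{10}, Omega_{20}: the two +z pulses
# last pump/probe pair counter-propagates; Omega^2 set by the matching condition
om.append(math.sqrt(sum((g[-1] / g[j]) ** 2 * om[j] ** 2 for j in range(2))))
sign = [+1, +1, -1]

vg = group_velocity(0.3, g, om, sign)
print(abs(vg))                  # ~0 (up to round-off): the DSP is stationary
```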
Numerical results in Fig. 2 show how tightly localized stationary pulses can be readily obtained when the multi-level system is employed. \begin{figure}
\caption{(color online) Localization of the created stationary pulses for the 5-level case (red solid line), where three input probe fields are used and the parameters are set as $\omega_{e_3e_2}=\omega_{e_2e_1}\approx\nu_2/100$ (a). For comparison, the blue dashed line shows the stationary pulses created in a 3-level system (b) by employing a standing wave of pump fields. The input probe fields have the envelope $\exp(-z^2)$.}
\label{}
\end{figure} In contrast, for a three-level system, a frequency comb is used to create a localized pulse, filtering out the off-resonant input probe pulses and retaining only the resonant one \cite{three-level}. Generally, the total number of probe photons created by a frequency comb in a three-level system is much smaller than in the current model; ii) The present technique can be freely controlled. For example, based on our results, we see that the pulse matching in the present case is between all of the probe pulses with different frequencies, i.e., $\hat {\cal E}_{\sigma}=\prod_{j=l}^{\sigma}\tan\phi_{j,j+1}\hat {\cal E}_l \ (l\geq1, \sigma\leq{m-2})$. Thus, in principle, one can use just one pump field to achieve the stationary pulse by adjusting its intensity to match those of the other pump fields; iii) It requires no standing waves of pump fields or spatially modulated pump fields to create localized stationary pulses. \begin{figure}
\caption{(color online) (a)(b) Schematic of experimental realization of stationary pulses with four-level double-$\Lambda$-type $^{87}$Rb atoms coupled to two single-mode quantized and two classical pump fields that propagate in $+z$ and $-z$ directions, respectively.}
\label{}
\end{figure}
Experimentally, the simplest multi-level system is an ensemble of four-level double-$\Lambda$-type $^{87}$Rb atoms. The schematic diagram for the experimental realization is shown in Fig. 3. All the atoms are first prepared in state $|b\rangle$ ($5^2S_{1/2}$), and only the $\pm z$ directional propagating pump fields ($\Omega_1$ and $\Omega_2$) are applied to couple the transitions from
$|c\rangle$ ($5^2S_{1/2}$) to $|e_1\rangle$ ($5^2P_{1/2} (F=1)$)
and $|e_2\rangle$ ($5^2P_{3/2} (F=1)$), respectively. We then input the probe pulses ($\hat{\cal E}_{1,2}$) and allow the system to reach the adiabatic regime. Finally, by adjusting $\Omega_1$ or $\Omega_2$ so that $g_1\Omega_{20}=g_2\Omega_{10}$, we can create the stationary pulses for the probe fields $\hat {\cal E}_1(z,t)=\cos\theta\cos\phi\hat \Psi, \hat {\cal E}_2(z,t)=\cos\theta\sin\phi\hat \Psi$, where $\hat\Psi$ is determined by Eq. (\ref{eqn:DSP}) with $m=4$. It can be expected that as the level number $m$ increases, more tightly localized stationary pulses can be created. According to the numerical results in Fig. 2, the effect becomes substantial when $m\geq5$.
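For this four-level case, the condition $g_1\Omega_{20}=g_2\Omega_{10}$ can be verified to yield an equal-weight superposition of the two probe components; a minimal sketch (all numbers hypothetical):

```python
import math

def phi_m4(g1, g2, om10, om20):
    """Mixing angle phi for the four-level double-Lambda case (Eq. (11), j = 1)."""
    return math.atan2(g1 * om20, g2 * om10)

g1, g2 = 1.0e5, 2.0e5       # hypothetical coupling constants
om10 = 1.5e6                # Omega_{10}
om20 = g2 * om10 / g1       # matching condition g1*Omega_{20} = g2*Omega_{10}

phi = phi_m4(g1, g2, om10, om20)
print(math.degrees(phi))    # ~45 degrees: equal-weight components, E1 = E2
```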
In conclusion, we have shown that tightly localized stationary pulses can be obtained in a general multi-level EIT system. We have examined the dynamics of DSPs in detail and found that all the applied probe pulses with different frequencies contribute to the stationary pulses. The present process is therefore a spatial compression of excitation, which may further enhance nonlinear optical couplings and will have interesting applications in quantum nonlinear optics \cite{nonlinear}. In particular, our technique may open up a novel route towards the spatial compression of many probe photons with small losses. According to the results in \cite{multi-level}, if the initial input is a non-classical probe pulse, e.g. a quantum superposition of coherent states, we may also generate entangled stationary pulses within our model.
\noindent We thank Dr. Y. Zhao (Heidelberg University) for helpful discussions. This work is supported by NUS academic research Grant No. WBS: R-144-000-172-101, by the US NSF under grant DMR-0547875, and by the NSF of China under grant No. 10275036.
\noindent
\end{document}
The Schrödinger equation
Published on Thursday, 03 February 2022 22:00
Written by Danko Georgiev
The fathers of matrix quantum mechanics believed that the quantum particles are unvisualizable and pop into existence only when measured. Challenging this view, in 1926 Erwin Schrödinger developed his wave equation that describes the quantum particles as packets of quantum probability amplitudes evolving in space and time. Thus, Schrödinger visualized the unvisualizable and lifted the veil that has been obscuring the wonders of the quantum world.
The Schrödinger equation governs the time evolution of closed quantum physical systems \begin{equation}\imath\hbar\frac{\partial}{\partial t}\,|\Psi(x,y,z,t)\rangle=\hat{H}\,|\Psi(x,y,z,t)\rangle\label{eq:1}\end{equation} where $\imath$ is the imaginary unit, $\hbar$ is the reduced Planck constant, $\frac{\partial}{\partial t}$ indicates a partial derivative with respect to time, $\Psi$ is the wave function of the quantum system, $x$, $y$, $z$, $t$ are the three position coordinates and time respectively, and $\hat{H}$ is the Hamiltonian operator corresponding to the total energy of the system. The solution of the Schrödinger equation is the quantum wave function $\Psi$ of the system. At each point $(x,y,z)$ in space at a time $t$, the value of the quantum wave function is a complex number $\Psi(x,y,z,t)$, referred to as a quantum probability amplitude. Since the quantum wave function provides a complete description of the quantum state of a physical system, it follows that the fabric of the quantum world is made of quantum probability amplitudes $\Psi(x,y,z,t)$. The squared modulus $\left|\Psi(x,y,z,t)\right|^{2}$ of each quantum probability amplitude gives a corresponding quantum probability for a physical event to occur at the given point in space and time. The quantum probabilities do not arise due to our ignorance of what the state of the quantum system is, but rather represent inherent propensities of the quantum systems to produce certain outcomes under experimental measurement.
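As a minimal illustration (not from the original article; units with $\hbar=m=1$ and all numbers below are chosen for demonstration), one can check numerically that a free-particle plane wave satisfies Eq. (\ref{eq:1}) with the Hamiltonian $\hat{H}=\hat{p}^{2}/2m$, using finite differences:

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.0
omega = hbar * k ** 2 / (2 * m)   # free-particle dispersion relation

def psi(x, t):
    """Plane-wave solution exp(i(kx - wt)) of the free Schrodinger equation."""
    return cmath.exp(1j * (k * x - omega * t))

# Verify i*hbar*dPsi/dt = -(hbar^2 / 2m) * d^2Psi/dx^2 at one space-time point,
# using second-order central finite differences.
x0, t0, h = 0.7, 0.3, 1e-4
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h ** 2
lhs = 1j * hbar * dpsi_dt
rhs = -(hbar ** 2 / (2 * m)) * d2psi_dx2
print(abs(lhs - rhs))   # ~0: only the finite-difference error remains
```

For this solution the quantum probability density $|\Psi|^{2}$ is constant in space and time, as expected for a particle with perfectly definite momentum.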
Although the Schrödinger equation can be easily written in its general mathematical form, solving it for an arbitrary Hamiltonian $\hat{H}$ is extremely difficult and is usually done only approximately, with finite numerical precision. Understanding the general properties of the Schrödinger equation and its solutions is essential for modern developments in theoretical physics, quantum information theory, quantum chemistry, biology, neuroscience, cognitive science and artificial intelligence.
Availability of direct path in half‐duplex‐based cooperative relay networks
Jeehoon Lee, Minjoong Rim & Kiseon Kim
In this paper, we validate the availability of the direct path in half‐duplex‐based cooperative relay networks from a practical point of view. Cooperative relaying is a low‐complexity technique, which schedules orthogonal transmissions through divided time slots. By doing so, transmission impairments due to multi‐path fading and path loss are mitigated by obtaining a diversity gain. In conventional approaches, most researchers have focused on the role of the relay and assumed that the received signal‐to‐noise ratio in the source‐to‐destination link is doubled when the source transmits the same signal twice during the two transmission phases. However, in practical wireless environments, a wireless channel is not static but varies with time. Thus, even if the source retransmits the same signal during the second transmission phase instead of a relay forwarding it, a (time) diversity gain may be obtained. As a result, the performance of relaying‐aided cooperative communication is not always better than that of the repeated transmission (RT), and the RT scheme may be a better option than a cooperative relaying scheme. To this end, we first show that the RT scheme is comparable to conventional cooperative relaying schemes. We then propose a selection decode‐and‐forward (DF) relaying scheme, which combines the DF relaying and RT schemes. The proposed selection DF relaying scheme has better outage performance than comparable relaying schemes in time‐varying channels. Lastly, all the theoretical results are validated through numerical evaluations and Monte Carlo simulations.
In recent years, cooperation technologies among distributed nodes or users have emerged as new communication paradigms. This trend mainly stems from two developments in the communications field. The first is the advent of ad hoc and sensor networks with many new applications, where a sender requires the assistance of other nodes to forward or relay its information to a desired receiver. The second emerges from the demand for very high data rates with communication reliability. In current fourth‐generation wireless networks, multiple‐input multiple‐output (MIMO) technologies are considered a powerful approach to meet these demands. In practice, most mobile devices cannot easily accommodate multiple antennas on their small form factors, or the propagation environment of a wireless link may not support the requirements for deploying MIMO technologies. To overcome such limitations in achieving MIMO gains, cooperative technologies in a distributed fashion are emerging beyond traditional point‐to‐point communications. Accordingly, cooperative communications with relaying nodes have been highly favored in the literature [1‐3].
In cooperative relay networks, a relay basically employs one of two relaying methods: amplify‐and‐forward (AF) and decode‐and‐forward (DF) relaying. In the former scheme, a relay simply amplifies the received signal from the source and forwards it to the corresponding destination irrespective of the source‐to‐relay (SR) link condition [4]. In the latter scheme, a relay first decodes the received signal and either re‐encodes it for forwarding regardless of the decoding result at the relay, namely the fixed DF relaying scheme [5], or determines whether or not to forward it depending on the channel condition of the SR link, namely the selective DF relaying scheme [6]. These two types of relaying schemes form the foundation of various sophisticated relaying schemes.
In half‐duplex‐based cooperative relay networks, the channel of the source‐to‐destination (SD) link has been modeled as a time‐invariant channel during the two transmission phases [4,7‐11]. Under this assumption, it is well known that cooperative relaying schemes are more beneficial than repeated transmission (RT), which simply transmits the same signal twice through the SD link, in the high‐SNR region due to the diversity gain. However, the coherence time of a wireless channel is determined by the inverse of the maximum Doppler frequency \(f_{D}^{\text {max}} \propto (v/c)f_{c}\), where v indicates the speed of the mobile device relative to the receiver, c is the speed of light, and \(f_{c}\) is the carrier frequency [12]. As the speed of the mobile device or the carrier frequency increases, the coherence time decreases. Furthermore, the maximum Doppler frequency is not zero even with static terminals. Therefore, we should consider the channel of the SD link as a time‐varying channel during the two transmission phases when the coherence time is shorter than the transmission symbol period [13].
In half‐duplex‐based cooperative relay networks, the SD link may be considered as a replacement for forwarding by a relay during the second transmission phase. In the literature, it has been simply assumed that the received SNR in the SD link is doubled by the RT scheme [4,9‐11]. In order to prevent diversity loss caused by error propagation at a relay, the authors of [4] introduced the selection DF (SDF) relaying scheme, in which the relay forwards the received signal only when the transmission in the SR link is not in outage; otherwise, the source retransmits its signal during the second transmission phase while the relay remains idle. In [9], the outage probabilities of various selection relaying protocols, in which the relay chooses DF, AF, or direct transmission to deliver the signal depending on the channel quality of the SR link, were analyzed and compared for a cooperative relay network consisting of three nodes. In [10] and [11], the authors proposed incremental selection AF relaying schemes, in which the source retransmits its signal when the doubled received SNR in the SD link satisfies the target SNR and, otherwise, the relay forwards it during the second transmission phase, under single‐relay and multiple‐relay scenarios, respectively. According to the CSI available at a relay, the authors of [14] analyzed several optimum thresholds for determining whether or not to forward, minimizing the end‐to‐end bit error rate in cooperative digital relaying systems using BPSK modulation.
Under time‐varying channels, the impact of outdated channel estimates caused by feedback delay or scheduling delay on the performance of relay selection schemes was analyzed in the literature [15‐18]. The authors of [15] analyzed the outage probability of opportunistic relay selection (ORS) in a DF relaying scenario and showed that the diversity order is always equal to 1 when the available CSI is outdated. In [16] and [17], the authors analyzed the impact of outdated CSI on the outage probability and average bit error rate of AF partial relay selection and ORS systems. It was shown in [17] that when the outdated link SNR is less correlated with the current SNR, it is better to use long‐term statistics of the channel at the relay, i.e., fixed‐gain amplification. In [18], the effect of outdated channel estimates on the outage and error rate performance of relay selection schemes was studied, and the authors showed that the AF‐based best relay selection scheme is more sensitive to CSI imperfection than the DF‐based best relay selection scheme. However, all these works do not consider the direct path.
In order to achieve higher bandwidth efficiency while guaranteeing the same diversity order as that of conventional cooperative schemes, the authors of [19] proposed a new cooperative communication protocol, where the source determines whether cooperation with one relay is beneficial or not under multi‐node DF cooperative scenarios, i.e., 'when to cooperate?' and 'whom to cooperate with?'. In [20], a novel selection AF relaying protocol, which determines the transmitting node during the second transmission phase between source and relay, was proposed in time‐varying channels.
Motivation and contribution
In the literature, research on selection relaying protocols has not considered the time‐varying property of a wireless channel for the SD link. Furthermore, the direct link has not yet been considered in investigating the impact of the outdated CSI caused by the time‐varying property. In practical wireless environments, the channel quality of the SD link is not always poor, and when the SD link is similar to the SR or relay‐to‐destination (RD) link, the direct path is also a possible candidate for obtaining diversity gain. In this paper, we focus on such points and investigate the availability of the direct path in practical wireless environments. The main contributions of this paper are listed as follows:
Verification of the availability of the direct path: Unlike the conventional approaches described earlier, we validate the availability of the direct path in half‐duplex‐based cooperative relay networks. By analyzing the outage probability of the RT scheme, we show that the time diversity gain can be obtained by the RT scheme even though the channel correlation of the SD link during the two transmission phases is extremely high. In addition, we provide several useful insights to employ the direct path in cooperative relay networks.
Proposal of an SDF relaying scheme: As one approach to utilize the direct path in time‐varying channels, we propose an SDF relaying scheme which combines the DF relaying and RT schemes. Unlike [4], the proposed SDF relaying scheme considers not only the instantaneous channel information of the SR link but also the channel statistics of the RD and SD links for selecting the transmitting node during the second transmission phase. For performance analysis, we derive the exact and asymptotic outage probabilities of the proposed SDF relaying scheme and show that the proposed scheme is especially beneficial when the relay is close to the destination.
Verification of all the analyzed results: Through numerical evaluations and Monte Carlo simulations, we show that the analyzed results are well matched with the simulation results. In addition, we compare the outage probability of the proposed SDF relaying scheme with those of conventional comparable schemes and show that the SDF relaying scheme is superior to its benchmark schemes.
The remainder of this paper is organized as follows. The 'System model' section briefly describes the system model for the considered cooperative relay network. In the 'Outage probability analysis of the RT scheme' section, the exact and approximate outage probabilities of the RT scheme are derived, followed by the comparison of the approximate outage probability of the RT scheme with that of the AF relaying scheme. For performance improvement, we propose a selection DF relaying scheme by combining the DF relaying and RT schemes in the 'Proposal of a selection DF relaying scheme under time‐varying channels' section. All the results are validated through Monte Carlo simulations in the 'Simulations and discussions' section. Finally, the 'Conclusions' section concludes this paper.
System model
We investigate communication scenarios of the AF, DF relaying, and RT schemes for cooperative relay networks with one source, one relay, and one destination under time‐varying channels, as shown in Figure 1. Conventionally, after the source's transmission during the first transmission phase in half‐duplex‐based cooperative relay networks, only the path through the relay has been considered as the option to obtain a diversity gain. However, since the diversity gain can also be obtained through the direct path under time‐varying channels, as will be described later, not only the path through the relay but also the direct path should be simultaneously considered as equal candidates for transmission during the second transmission phase in cooperative relay networks from a practical viewpoint.
Scenarios of conventional cooperative relaying and repeated transmission. For half‐duplex‐based cooperative relay networks with one source, one relay, and one destination under time‐varying channels.
In the following discussions, we employ the propagation model which includes large‐scale path loss, shadow fading, and frequency non‐selective fading [21,22] and is expressed as:
$$\begin{array}{@{}rcl@{}} h_{ij}^{(t)} = \frac{g_{ij}^{(t)}}{\sqrt{d_{ij}^{\eta}}}, \end{array} $$
(1)
where \(h_{\textit {ij}}^{(t)}\), i∈{s,r}, j∈{r,d}, t∈{1,2}, is the channel gain from node i to j during phase t, \(g_{\textit {ij}}^{(t)}\) captures the channel fading characteristics due to the rich scattering environment, which follows a zero‐mean complex Gaussian distribution with variance \(\sigma_{\textit {ij}}^{2}\), \(d_{ij}\) is the distance between node i and j, and η is the path loss exponent. Similarly, the noise term \(n_{\textit {ij}}^{(t)}\) denotes the AWGN with variance \(N_{0}\). We denote the total transmission power during the two transmission phases as \(P_{T}\), which is shared by the source and relay.
During phase 1, the received signal from the source to node j, where j∈{r,d}, is given by:
$$ y_{sj}^{(1)} = \sqrt{\rho_{1} P_{T}} h_{sj}^{(1)} x + n_{sj}^{(1)}, $$
where \(\rho_{1}\) is a positive value satisfying \(0<\rho_{1}<1\) and x is the transmitted signal with \(E[|x|^{2}]=1\). During phase 2, if the source retransmits the same signal x, then the received signal at the destination is given by:
$$ y_{sd}^{(2)} = \sqrt{\rho_{2} P_{T}} h_{sd}^{(2)} x + n_{sd}^{(2)}, $$
where \(\rho_{2}\) is a positive value satisfying \(0<\rho_{2}<1\), and the condition \(\rho_{1}+\rho_{2}=1\) should be satisfied due to the limited power. On the other hand, when the relay forwards the received signal from the source during phase 2, the received signal at the destination is given by:
$$ y_{rd}^{(2)} = \mu_{r} h_{rd}^{(2)} x_{r} + n_{rd}^{(2)}, $$
If the relay is operated in AF mode, then \(x_{r}\) is equal to \(y_{\textit {sr}}^{(1)}\) and \(\mu_{r}\) in (4) is given [4] as follows:
$$\begin{array}{@{}rcl@{}} \mu_{r}^{\text{AF}} = \sqrt{\frac{\rho_{2} P_{T}}{\rho_{1} P_{T} |h_{sr}^{(1)}|^{2}+N_{0}}}. \end{array} $$
The transmit powers at the source and relay are determined according to \(\rho_{1}\) and \(\rho_{2}\). Meanwhile, when the relay is employed in DF mode, \(x_{r}\) is the decoded and re‐encoded signal \(\tilde {x}\) and \(\mu_{r}\) in (4) is given by:
$$\begin{array}{@{}rcl@{}} \mu_{r}^{DF} = \sqrt{\rho_{2} P_{T}}. \end{array} $$
The received signals during the two transmission phases are combined by a maximal ratio combiner (MRC) at the destination. For AF relaying and DF relaying schemes, the instantaneous SNR at the output of the MRC can be respectively written as:
$$\begin{array}{@{}rcl@{}} \gamma_{\text{MRC}}^{\text{AF}} = \gamma_{\text{SD}}^{(1)} + \gamma_{\text{SRD}}^{\text{AF}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \gamma_{\text{MRC}}^{\text{DF}} = \gamma_{\text{SD}}^{(1)} + \gamma_{\text{SRD}}^{\text{DF}}, \end{array} $$
$$ \begin{aligned} &{}\gamma_{\text{SD}}^{(1)} = \rho_{1}\gamma|h_{sd}^{(1)}|^{2}, \quad \gamma_{\text{SRD}}^{\text{AF}} = \frac{\rho_{1}\rho_{2} \gamma^{2} |h_{sr}^{(1)}|^{2}|h_{rd}^{(2)}|^{2}} {\rho_{1}\gamma|h_{sr}^{(1)}|^{2} + \rho_{2}\gamma|h_{rd}^{(2)}|^{2}+1}, \\ &{}\gamma_{\text{SRD}}^{\text{DF}} = \rho_{2}\gamma|h_{rd}^{(2)}|^{2}, \quad \gamma = \frac{P_{T}}{N_{0}}. \end{aligned} $$
When the source transmits the signal x twice during phase 1 and phase 2, the two channel coefficients \(h_{\textit {sd}}^{(1)}\) and \(h_{\textit {sd}}^{(2)}\) may be correlated with each other. Let us denote the correlation coefficient between \(h_{\textit {sd}}^{(1)}\) and \(h_{\textit {sd}}^{(2)}\) as α, which depends on the maximum Doppler frequency, the time interval between phase 1 and phase 2, and so on. Then, the following relation is satisfied [18,23]:
$$\begin{array}{@{}rcl@{}} h_{sd}^{(2)} = \alpha h_{sd}^{(1)} + \sqrt{1-\alpha^{2}} w_{sd}, \end{array} $$
where w sd follows the same distribution as \(h_{\textit {sd}}^{(1)}\). For the maximum Doppler shift \(f_{D}^{\text {max}}\) and the time interval τ between phase 1 and phase 2, the correlation coefficient α is given [24] by:
$$\begin{array}{@{}rcl@{}} \alpha = J_{0} (2 \pi f_{D}^{\text{max}} \tau), \end{array} $$
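The correlation model in (10) is easy to evaluate numerically. The sketch below implements J 0(·) via its power series (adequate for the small arguments that arise here) rather than relying on an external library:

```python
import math

def bessel_j0(x):
    """J0 via its power series sum_k (-1)^k (x^2/4)^k / (k!)^2; accurate for
    the small arguments used here."""
    term, total, k = 1.0, 1.0, 0
    q = -(x * x) / 4.0
    while abs(term) > 1e-15:
        k += 1
        term *= q / (k * k)      # next series term from the previous one
        total += term
    return total

def channel_correlation(f_d_max, tau):
    """Correlation alpha = J0(2*pi*f_D^max*tau) of the SD link between phases."""
    return bessel_j0(2.0 * math.pi * f_d_max * tau)
```

For \(f_{D}^{\text{max}}\) = 5 Hz this gives α ≈ 0.9755, 0.9037, 0.7900, and 0.6425 at τ = 10, 20, 30, and 40 ms, matching the values quoted later in the text.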
where J 0(·) is the zeroth‐order Bessel function of the first kind [25]. In practical wireless systems, the maximum Doppler frequency is not zero even for static terminals, and the time interval cannot be zero. Thus, from (10), the range 0≤α<1 is reasonable in real wireless environments. From (2), (3), and (9), the instantaneous SNR at the output of the MRC for the RT scheme is given by:
$$\begin{array}{@{}rcl@{}} \gamma_{\text{MRC}}^{\text{RT}} &=& \gamma_{\text{SD}}^{(1)} + \gamma_{\text{SD}}^{(2)}\\ &=& \gamma \left\{(\rho_{1}+\rho_{2}\alpha^{2})|h_{sd}^{(1)}|^{2} + \rho_{2}(1-\alpha^{2})|w_{sd}|^{2}\right\}. \end{array} $$
Lastly, we assume that all the nodes utilize one antenna for transmitting and receiving.
Outage probability analysis of the RT scheme
In this section, we validate the availability of the direct path, in half‐duplex‐based cooperative relay networks under time‐varying channels. To do so, we first derive the exact and asymptotic outage probabilities of the RT scheme and show that the diversity gain can be obtained by the RT scheme. Based on the derived results, we provide several useful insights by comparing the asymptotic outage probabilities of the AF relaying and RT schemes.
Derivation of the outage probability of the RT scheme
The outage probability is defined as the probability that the instantaneous capacity of the system is below a predefined value R (b/s/Hz). From the definition, the outage probability of the RT scheme is defined as:
$$\begin{array}{@{}rcl@{}} P_{\text{RT}}^{\text{Out}} (\gamma, R) = \text{Pr}[C_{\text{RT}} < R], \end{array} $$
where C RT is the instantaneous capacity of the RT scheme, which is given by:
$$\begin{array}{@{}rcl@{}} C_{\text{RT}} = \frac{1}{2} \log \left(1 + \gamma_{\text{MRC}}^{\text{RT}} \right) \end{array} $$
From (11), (12), and (13), the outage probability of the RT scheme can be rewritten as:
$$ \begin{aligned} P_{\text{RT}}^{\text{Out}} (\gamma, R) &= \text{Pr} \left[(\rho_{1}+\rho_{2}\alpha^{2})|h_{sd}^{(1)}|^{2}\right. \\[-2pt] &\left.+ \rho_{2}(1-\alpha^{2})|w_{sd}|^{2} < f(\gamma,R)\right], \end{aligned} $$
where the function f(γ,R) is defined as:
$$\begin{array}{@{}rcl@{}} f(\gamma,R) = \frac{2^{2R}-1}{\gamma} \end{array} $$
In (14), \((\rho _{1}+\rho _{2}\alpha ^{2})|h_{\textit {sd}}^{(1)}|^{2}\) and ρ 2(1−α 2)|w sd |2 are exponential random variables with parameters \(d_{\textit {sd}}^{\eta }/\{\sigma _{\textit {sd}}^{2}(\rho _{1}+\rho _{2}\alpha ^{2})\}\) and \(d_{\textit {sd}}^{\eta }/\{\sigma _{\textit {sd}}^{2}\rho _{2}(1-\alpha ^{2})\}\), respectively.
First, if the correlation coefficient α is equal to one, i.e., α=1, then the equality \(h_{\textit {sd}}^{(1)}=h_{\textit {sd}}^{(2)}\) holds, and thus, from (14), the outage probability of the RT scheme is derived as:
$$\begin{array}{@{}rcl@{}} P_{\text{RT}}^{\text{Out}} (\gamma, R)|_{\alpha=1} &=& \text{Pr} \left[(\rho_{1}+\rho_{2})|h_{sd}^{(1)}|^{2} < f(\gamma, R)\right] \\ &=& 1-e^{-d_{sd}^{\eta}f(\gamma, R)/\{\sigma_{sd}^{2}(\rho_{1}+\rho_{2})\}}. \end{array} $$
Second, let us consider the case of α≠1. Denoting \(k=|h_{\textit {sd}}^{(1)}|^{2}\), the outage probability of the RT scheme is derived as:
$$ {\fontsize{7.6pt}{9.6pt}\selectfont{\begin{aligned} {}P_{\text{RT}}^{\text{Out}} (&\gamma, R)|_{\alpha \neq 1} = \text{Pr} \left[ |w_{sd}|^{2} < \frac{f(\gamma, R)-\left(\rho_{1}+\rho_{2}\alpha^{2}\right)k}{\rho_{2}(1-\alpha^{2})} \right] \\[-2pt] &= \frac{d_{sd}^{\eta}}{\sigma_{sd}^{2}} \int\limits_{0}^{\frac{f(\gamma,R)}{\rho_{1}+\rho_{2}\alpha^{2}}} \! \! \left\{\!1 \,-\, e^{\! -d_{sd}^{\eta}\{f(\gamma\!, R)-(\rho_{1}\! +\rho_{2}\alpha^{2})k\}/\{\sigma_{sd}^{2}\rho_{2}\left(\! 1-\alpha^{2}\right)\}} \right\}\!e^{- d_{sd}^{\eta}k/\sigma_{sd}^{2}} dk \\[-2pt] &= \frac{d_{sd}^{\eta}}{\sigma_{sd}^{2} } \underbrace{\int\limits_{0}^{\frac{f(\gamma,R)}{\rho_{1}+\rho_{2}\alpha^{2}}} e^{-d_{sd}^{\eta}k/\sigma_{sd}^{2}} dk}_{P_{A}} \\[-2pt] &-\frac{d_{sd}^{\eta}e^{\! -d_{sd}^{\eta}f(\gamma, R)/\{\sigma_{sd}^{2} \rho_{2}\left(1-\alpha^{2}\right)\}}}{\sigma_{sd}^{2}} \underbrace{\int\limits_{0}^{\frac{f(\gamma,R)}{\rho_{1}+\rho_{2}\alpha^{2}}} \! e^{d_{sd}^{\eta}\left(\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2}\right)k/\{\sigma_{sd}^{2} \rho_{2}\left(1-\!\alpha^{2}\right)\}} dk}_{P_{B}} \end{aligned}}} $$
In (17), P A and P B are respectively derived as:
$$\begin{array}{@{}rcl@{}} P_{A} =\frac{\sigma_{sd}^{2}}{d_{sd}^{\eta}} \left(1 - e^{- d_{sd}^{\eta}f(\gamma,R)/\{\sigma_{sd}^{2} (\rho_{1}+\rho_{2}\alpha^{2})\}}\right) \end{array} $$
$$ {}P_{B}\,=\,\!\left\{\! \begin{array}{ll} f(\gamma,R)/\rho_{1}, &\!\!\!\!\alpha\! =\! 0, \rho_{1}\,=\,\rho_{2} \\ \!\!\frac{\sigma_{sd}^{2} \rho_{2}(1-\alpha^{2})}{d_{sd}^{\eta}(\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2})}\!\! \left(e^{\frac{d_{sd}^{\eta}(\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2})f(\gamma,R)} {\sigma_{sd}^{2} \rho_{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2}) }} \,-\,\!1 \!\right)\!\!, &\!\!\text{otherwise}\\ \end{array} \right. $$
From (17), (18), and (19), we have:
$$ {\fontsize{8.8pt}{9.6pt}\selectfont{ \begin{aligned} {}P_{RT}^{Out} (\!\gamma\!,\! R)|_{\alpha \neq 1} \,=\,\left\{\!\! \begin{array}{ll} 1\! -\! \left(\!1\! +\! \frac{d_{sd}^{\eta}f(\gamma\!,R)}{\sigma_{sd}^{2}\rho_{1}}\right) e^{\!-d_{sd}^{\eta}f(\!\gamma\!,R)/(\sigma_{sd}^{2}\rho_{1})}\!, &\!\!\!\!\!\!\alpha\,=\, 0, \rho_{1}\,=\,\rho_{2} \\ 1\! -\! e^{-d_{sd}^{\eta}f(\gamma,R)/\left\{\sigma_{sd}^{2}(\rho_{1}+\rho_{2}\alpha^{2})\right\}} \\ \!+\frac{\rho_{2}(1-\alpha^{2})}{\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2}} \!\left(\!e^{-d_{sd}^{\eta}f(\!\gamma\!,R)/\left\{\!\sigma_{sd}^{2}\rho_{2}(\!1-\alpha^{2})\!\right\}}\right.\\ \left.\!-e^{-d_{sd}^{\eta}f(\gamma,R)/\{\sigma_{sd}^{2}(\rho_{1}+\rho_{2}\alpha^{2})\}} \!\right)\!, &\!\!\!\!\!\text{otherwise}\\ \end{array} \right. \end{aligned}}} $$
From (16) and (20), we have:
$$ {\fontsize{8.5pt}{9.6pt}\selectfont{\begin{aligned} {}P_{\text{RT}}^{\text{Out}} (\!\gamma\!,\! R) \,=\,\left\{\!\! \begin{array}{ll} 1\,-\,e^{-d_{sd}^{\eta}(2^{2R}-1)/\left\{\gamma\sigma_{sd}^{2}(\rho_{1}+\rho_{2})\right\}}, &\!\!\!\!\alpha\,=\,1 \\ 1 \,-\,\left(\!\!1\! +\! \frac{d_{sd}^{\eta}(2^{2R}-1)}{\gamma\sigma_{sd}^{2}\rho_{1}}\!\!\right) \!e^{-d_{sd}^{\eta}(2^{2R}-1)/(\gamma\sigma_{sd}^{2}\rho_{1})}, &\!\!\!\!\alpha \!\,=\, 0, \!\rho_{1}\,=\,\rho_{2} \\ 1 \,-\, \frac{\rho_{1}+\rho_{2}\alpha^{2}}{\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2}} e^{-d_{sd}^{\eta}(2^{2R}-1)/\left\{\gamma\sigma_{sd}^{2}(\rho_{1}+\rho_{2}\alpha^{2})\right\}} \\ \quad+\frac{\rho_{2}(1-\alpha^{2})}{\rho_{1}-\rho_{2}+2\rho_{2}\alpha^{2}} e^{-d_{sd}^{\eta}(2^{2R}-1)/\left\{\gamma\sigma_{sd}^{2}\rho_{2}(1-\alpha^{2})\right\}}, &\text{otherwise}\\ \end{array} \right. \end{aligned}}} $$
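The three cases of (21) can be implemented directly; the sketch below is a minimal evaluation with illustrative default parameters (d sd = 1, η=3, \(\sigma_{sd}^{2}\)=1). The general branch is the CDF of a sum of two independent exponential random variables, so letting α→0 with ρ 1=ρ 2 recovers the second case, which provides a useful numerical cross‐check.

```python
import math

def outage_rt(gamma, R, alpha, rho1, rho2, d_sd=1.0, eta=3.0, s2_sd=1.0):
    """Exact outage probability of the RT scheme, following the three cases
    of (21); gamma = P_T/N_0 (linear), R in b/s/Hz."""
    lam = d_sd**eta / s2_sd              # rate parameter of |h_sd|^2
    f = (2**(2 * R) - 1) / gamma         # threshold f(gamma, R) from (15)
    if alpha == 1:
        return 1 - math.exp(-lam * f / (rho1 + rho2))
    if alpha == 0 and rho1 == rho2:      # Erlang-type special case
        c = lam * f / rho1
        return 1 - (1 + c) * math.exp(-c)
    a = rho1 + rho2 * alpha**2           # gain of the correlated part
    b = rho2 * (1 - alpha**2)            # gain of the innovation part
    delta = rho1 - rho2 + 2 * rho2 * alpha**2
    return (1 - (a / delta) * math.exp(-lam * f / a)
              + (b / delta) * math.exp(-lam * f / b))
```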
In addition, we derive the approximate outage probability of (21) in the high‐SNR region in order to confirm the diversity order. Let us denote the first, second, and third cases of (21) as \({P_{1}^{O}}\), \({P_{2}^{O}}\), and \({P_{3}^{O}}\), respectively. We employ the following approximation:
$$\begin{array}{@{}rcl@{}} \lim\limits_{c \rightarrow 0} e^{-c} \approx 1-c+\frac{1}{2}c^{2} \end{array} $$
From (22), in the high‐SNR region, the first and second cases of (21) can be respectively approximated as:
$$ {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} {}\lim\limits_{\gamma \rightarrow \infty} {P_{1}^{O}} \approx \frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2}(\rho_{1}+\rho_{2})} -\frac{1}{2} \left(\frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2}(\rho_{1}+\rho_{2})}\right)^{\!\!\text{\fontsize{8}{7}{\selectfont{2}}}} \approx \frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2}(\rho_{1}+\rho_{2})} \end{aligned}}} $$
$$ {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} {}\lim\limits_{\gamma \rightarrow \infty} {P_{2}^{O}} &\!\approx \!1\,-\,\left(\!1\,+\,\frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2} \rho_{1}}\!\right)\! \left\{\!1\! - \!\frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2} \rho_{1}} \,+\, \frac{1}{2}\!\left(\!\frac{d_{sd}^{\eta}f(\gamma,R)}{\sigma_{sd}^{2} \rho_{1}}\!\!\right)^{\!\!\text{\fontsize{8}{7}{\selectfont{2}}}}\! \right\} \\ &\,=\,\frac{1}{2}\left\{\frac{d_{sd}^{2\eta}f^{2}(\gamma,R)}{\sigma_{sd}^{4} {\rho_{1}^{2}}} -\frac{d_{sd}^{3\eta}f^{3}(\gamma,R)}{\sigma_{sd}^{6} {\rho_{1}^{3}}} \right\} \approx \frac{d_{sd}^{2\eta}f^{2}(\gamma,R)}{2\sigma_{sd}^{4} {\rho_{1}^{2}}} \end{aligned}}} $$
Unlike \({P_{1}^{O}}\) and \({P_{2}^{O}}\), the approximation of \({P_{3}^{O}}\) depends on both γ and α. Under the condition \((1-\alpha ^{4})\gamma \gg 1\), \({P_{3}^{O}}\) can be approximated as:
$$ {\fontsize{7.9pt}{9.6pt}\selectfont{\begin{aligned} {}\lim\limits_{(\!1-\alpha^{4})\gamma \rightarrow \infty\!} {P_{3}^{O}} &\!\approx\! 1\! -\! \frac{\rho_{1}\,+\,\rho_{2}\alpha^{2}}{\rho_{1}\,-\,\rho_{2}\,+\,2\rho_{2}\!\alpha\!^{2}}\! \left(\!\!1\! \,-\, \frac{d_{sd}^{\eta}f(\gamma, R)}{\sigma_{sd}^{2} (\rho_{1}\,+\,\rho_{2}\!\alpha^{2}\!)} \,+\, \frac{d_{sd}^{2\eta} f^{2}(\!\gamma\!,\!R)}{2\sigma_{sd}^{4} (\!\rho_{1}\,+\,\!\rho_{2}\alpha^{2}\!)^{2}} \!\right) \\ &\,+\, \frac{\rho_{2}(\!1\,-\,\alpha^{2})}{\rho_{1}\,-\,\rho_{2}\,+\,2\rho_{2}\alpha^{2}} \left(\!\!1\!\! -\! \frac{d_{sd}^{\eta}f(\!\gamma\!,\!R)}{\sigma_{\!sd}^{2} \rho_{2}(\!1\!-\alpha^{2})} + \frac{d_{sd}^{2\eta}f^{2}(\gamma,R)}{2\sigma_{sd}^{4} {\rho_{2}^{2}}(1-\alpha^{2})^{2}}\right) \\ &\!=\frac{d_{sd}^{2\eta}f^{2}(\gamma,R)}{2\sigma_{sd}^{4} \rho_{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})} \end{aligned}}} $$
From (23), (24), and (25), the outage probability in (21) can be approximated as:
$$ {\fontsize{8.7pt}{9.6pt}\selectfont{\begin{aligned} {}\widetilde{P}_{\text{RT}}^{\text{Out}} (\!\gamma\!, R) \!\approx\!\!\left\{\!\! \begin{array}{ll} \frac{d_{sd}^{\eta}}{\sigma_{sd}^{2}(\rho_{1}+\rho_{2})}\left(\frac{2^{2R}-1}{\gamma}\right), &\!\alpha\! = 1, \gamma \gg 1 \\ \frac{d_{sd}^{2\eta}}{2\sigma_{sd}^{4} \rho_{2}(\!1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})} \!\left(\!\frac{2^{2R}-1}{\gamma} \!\right)^{2}, &\!0 \!\le\! \alpha \!\!<\! \!1, (1\,-\,\alpha^{4})\gamma \!\!\gg \!1 \\ \end{array} \right. \end{aligned}}} $$
As shown in (26), the time diversity gain can be obtained by retransmission at the source as long as α is not equal to one. The condition α=0 means that the two channel coefficients \(h_{\textit {sd}}^{(1)}\) and \(h_{\textit {sd}}^{(2)}\) are completely independent of each other, and in this case, a diversity gain is clearly obtained. In addition, for the case of 0<α<1, the diversity gain can still be obtained depending on α and γ. Figure 2 visually describes this phenomenon. In Figure 2, we also plot the outage probability of direct transmission (DT) as a benchmark, which is defined as
$$\begin{array}{@{}rcl@{}} P_{\text{DT}}^{\text{Out}} (\gamma, R) = \text{Pr}[\log(1+\rho_{1}\gamma|h_{sd}^{(1)}|^{2}) < R] \end{array} $$
Figure 2. Outage probability of the RT scheme versus the total transmission power P T . R = 1, d sd = 1 km, η=3, \(\sigma _{\textit {sd}}^{2}=1\), N 0 = ‐80 dBm, ρ 1=ρ 2=0.5.
Even when the channel correlation of the SD link is extremely high, i.e., α=0.99, the diversity gain can be obtained in the high‐SNR region when the source transmits the same signal twice during phase 1 and phase 2. In conclusion, the RT scheme is comparable to cooperative relaying schemes such as AF and DF relaying.
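The diversity orders in (26) can also be verified numerically: per decade of SNR, the approximate outage falls by two decades when 0≤α<1 but only one decade when α=1. A minimal sketch of (26), with illustrative default parameters:

```python
def outage_rt_asym(gamma, R, alpha, rho1, rho2, d_sd=1.0, eta=3.0, s2_sd=1.0):
    """High-SNR approximation (26) of the RT outage probability."""
    lam = d_sd**eta / s2_sd
    f = (2**(2 * R) - 1) / gamma
    if alpha == 1:                                    # diversity order 1
        return lam * f / (rho1 + rho2)
    return lam**2 * f**2 / (                          # diversity order 2
        2 * rho2 * (1 - alpha**2) * (rho1 + rho2 * alpha**2))
```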
Comparison of the outage probabilities of the AF relaying and RT schemes
From [4], it is evident that the AF relaying and selection DF relaying schemes, which are representative relaying schemes in cooperative relay networks, have the same asymptotic outage probability at high SNR region. Thus, to show that the RT scheme is comparable to conventional relaying schemes, we only compare the RT scheme to the AF relaying scheme.
Under the system environments described in this paper, the instantaneous channel capacity of the AF relaying scheme is given by:
$$ {\fontsize{9.1pt}{9.6pt}\selectfont{\begin{aligned} {}I_{AF} = \frac{1}{2} \log \left(1+\rho_{1}\gamma|h_{sd}^{(1)}|^{2} +\frac{\rho_{1}\rho_{2}\gamma^{2}|h_{sr}^{(1)}|^{2}|h_{rd}^{(2)}|^{2}}{\rho_{1}\gamma|h_{sr}^{(1)}|^{2}+\rho_{2}\gamma|h_{rd}^{(2)}|^{2}+1} \right). \end{aligned}}} $$
Through procedures similar to those in [4], the outage probability of the AF relaying scheme in the high‐SNR region can be approximated as:
$$ \begin{aligned} P_{AF}^{\text{Out}} (\gamma,R) &= \Pr \Big[ I_{AF} < R \Big] \\ &\approx \frac{d_{sd}^{\eta}}{2\rho_{1}\sigma_{sd}^{2}} \left(\frac{d_{sr}^{\eta}}{\rho_{1}\sigma_{sr}^{2}} + \frac{d_{rd}^{\eta}}{\rho_{2}\sigma_{rd}^{2}} \right) \left(\frac{2^{2R}-1}{\gamma}\right)^{\!\!\text{\fontsize{9}{7}{\selectfont{2}}}}. \end{aligned} $$
For the case of α≠1, we compare (29) with the second part of (26) to determine which scheme achieves the lower outage probability. Let us first consider the case of α=0, which means that the channel coefficients of the SD link in phase 1 and phase 2 are completely independent of each other. Such a case can occur when the channel of the SD link varies rapidly in time, or when the time interval between phase 1 and phase 2 is sufficiently large and the maximum Doppler frequency is high. From (26) and (29), we obtain the following result:
$$ {}\alpha = 0 \quad \rightarrow \quad \frac{d_{sd}^{\eta}}{\rho_{2}\sigma_{sd}^{2}} \qquad \begin{aligned} \overset{\text{RT}}{\le} \\ \underset{\text{AF}}{>} \end{aligned} \qquad \frac{d_{sr}^{\eta}}{\rho_{1}\sigma_{sr}^{2}} + \frac{d_{rd}^{\eta}}{\rho_{2}\sigma_{rd}^{2}}. $$
The outage probability of the RT scheme is influenced only by the channel quality of the SD link, whereas that of the AF relaying scheme depends on the channel quality of both the SR and RD links. Thus, when either the SR or the RD link has channel quality similar to that of the SD link, the outage performance of the AF relaying scheme cannot be better than that of the RT scheme. For example, let us assume ρ 1=ρ 2 and consider the two cases of \(d_{\textit {sr}}^{\eta }/\sigma _{\textit {sr}}^{2}:d_{\textit {rd}}^{\eta }/\sigma _{\textit {rd}}^{2}:d_{\textit {sd}}^{\eta }/\sigma _{\textit {sd}}^{2} = 1000:1:1\) and \(d_{\textit {sr}}^{\eta }/\sigma _{\textit {sr}}^{2}:d_{\textit {rd}}^{\eta }/\sigma _{\textit {rd}}^{2}:d_{\textit {sd}}^{\eta }/\sigma _{\textit {sd}}^{2} = 1:1000:1\). In these cases, although one of the SR and RD links has sufficiently good channel quality, the outage performance of the AF relaying scheme is still statistically worse than that of the RT scheme, because the weaker hop dominates. On the other hand, if both the SR and RD links have much higher channel quality than the SD link, then the AF relaying scheme can be a better option than the RT scheme. However, the AF relaying scheme inherently requires additional efforts such as relay allocation, synchronization, and relay installation cost. Therefore, given such extra efforts, whether the performance improvement offered by cooperative relaying schemes justifies them is also a crucial issue. We leave this as potential future work.
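The comparison in (30) is easy to evaluate. The sketch below computes the coefficients of \((\frac{2^{2R}-1}{\gamma})^{2}\) in (26) (with α=0) and in (29); here lam_ij denotes \(d_{ij}^{\eta}/\sigma_{ij}^{2}\), and the 1000:1:1 link-quality ratios follow the example above.

```python
def rt_coeff(rho1, rho2, alpha, lam_sd):
    """Coefficient of ((2^{2R}-1)/gamma)^2 in (26), for 0 <= alpha < 1."""
    return lam_sd**2 / (2 * rho2 * (1 - alpha**2) * (rho1 + rho2 * alpha**2))

def af_coeff(rho1, rho2, lam_sd, lam_sr, lam_rd):
    """Coefficient of ((2^{2R}-1)/gamma)^2 in (29)."""
    return lam_sd / (2 * rho1) * (lam_sr / rho1 + lam_rd / rho2)
```

With ρ 1=ρ 2=0.5 and ratios 1000:1:1 (or 1:1000:1), the AF coefficient is 2002 versus 2 for RT, so RT wins statistically; when both relay links are far better than the SD link (e.g., 0.001:0.001:1), AF wins.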
For the case of 0<α<1, from (26) and (29), the scheme having lower outage probability is determined by the following relational expression:
$$ {\fontsize{7.8pt}{9.6pt}\selectfont{ \begin{aligned} {}\alpha \begin{aligned} \overset{\text{RT}}{\le}\\ \underset{\text{AF }}{>} \end{aligned} \sqrt{\frac{\rho_{2}-\rho_{1}}{2\rho_{2}} \,+\, \sqrt{\frac{(\rho_{2}-\rho_{1})^{2}}{4{\rho_{2}^{2}}} + \frac{1}{\rho_{2}} \left(\rho_{1} - \frac{\rho_{1}\sigma_{sr}^{2}\sigma_{rd}^{2} d_{sd}^{\eta}} {\sigma_{sd}^{2}(\rho_{2}\sigma_{rd}^{2}d_{sr}^{\eta}+\rho_{1}\sigma_{sr}^{2} d_{rd}^{\eta})}\right)}}. \end{aligned}}} $$
On the right‐hand side of (31), all the parameters are deterministic. The correlation coefficient α is related to many factors, such as the carrier frequency f c , the maximum Doppler frequency \(f_{D}^{\text {max}}\), the time interval τ between phase 1 and phase 2, and the mobile speed v [26]. The carrier frequency and the time interval are determined by the system operator. The maximum Doppler frequency is defined as the ratio of the mobile speed v to the wavelength λ of the carrier, i.e., \(f_{D}^{\text {max}} = v/\lambda \). The destination can estimate the maximum Doppler frequency and the mobile speed [26,27], and thus the channel correlation α can be calculated by (10). In the current literature, it is well known that the AF relaying scheme is more effective than the non‐cooperative case without the help of a relay. However, from (31), as the correlation coefficient α decreases, the RT scheme can achieve a lower outage probability than the AF relaying scheme. As shown in [4], the selection DF relaying scheme has the same asymptotic outage probability as the AF relaying scheme, so the outage performance of the RT scheme can also be better than that of the selection DF relaying scheme. More generally, various relaying schemes can be inefficient compared to the RT scheme, and thus, whether relaying is actually useful should be carefully evaluated even when relays are available.
As described previously, the correlation coefficient α is related to the maximum Doppler frequency and the time interval τ. Therefore, from (10), (31) can also be rewritten as:
$$\begin{array}{@{}rcl@{}} {}\tau \begin{aligned} \overset{\text{AF}}{<} \\ \underset{\text{RT}\;}{\ge} \end{aligned} \frac{1}{2 \pi f_{D}^{\text{max}}} J_{0}^{-1}\left(\sqrt{\frac{\rho_{2}-\rho_{1}}{2\rho_{2}} \,+\, \sqrt{\frac{(\rho_{2}-\rho_{1})^{2}}{4{\rho_{2}^{2}}} + \frac{\xi}{\rho_{2}}}} \right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \xi = \rho_{1} - \frac{\rho_{1}\sigma_{sr}^{2}\sigma_{rd}^{2} d_{sd}^{\eta}} {\sigma_{sd}^{2}(\rho_{2}\sigma_{rd}^{2}d_{sr}^{\eta}+\rho_{1}\sigma_{sr}^{2} d_{rd}^{\eta})}. \end{array} $$
Unlike the carrier frequency and the mobile speed, the time interval τ can be arbitrarily changed by the source or destination. Thus, even if the channel condition of the SD link is worse than those of the SR and RD links, the outage performance of the RT scheme can be made better than that of the AF relaying scheme by controlling the time interval τ. Figure 3 shows the variation of the channel correlation of the SD link as the time interval between phase 1 and phase 2 increases. For a given maximum Doppler frequency, the correlation coefficient decreases as the time interval increases. For example, in wireless environments used by pedestrians, the maximum Doppler frequency is typically about 5 Hz. With a maximum Doppler frequency of 5 Hz, α is 0.9755, 0.9037, 0.7900, and 0.6425 at time intervals of 10, 20, 30, and 40 ms, respectively. In other words, as the time interval increases, the value of α decreases and the effectiveness of the RT scheme improves. Even with a low maximum Doppler frequency, the RT scheme can still provide the diversity gain and, according to (31), can achieve better outage performance than the AF relaying scheme.
Figure 3. Variation of the correlation coefficient of the SD link versus the time interval between phase 1 and phase 2.
To control the time interval τ in real wireless systems, we give one possible scenario. In an incremental relaying protocol, when the destination selects the source as the transmitting node during the second phase, the destination may inform the source of the specific time interval \(\bar {\tau }\). For example, let us define the minimum and maximum time intervals as τ min and τ max, respectively. If n bits are available to signal the specific time interval \(\bar {\tau }\) at the destination, the time difference between τ min and τ max is divided into 2n slots, and the destination may select the specific time interval \(\bar {\tau }\) among them and inform the source of it. However, increasing the specific time interval \(\bar {\tau }\) may result in network delay, and thus, the optimal time interval should be carefully determined. We leave this problem as future work.
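Choosing such a time interval amounts to inverting (10). Since J 0(·) decreases monotonically between zero and its first null (≈2.4048), the required interval can be found by bisection; the sketch below is illustrative, with J 0 again implemented by its power series.

```python
import math

def bessel_j0(x):
    """J0 power series; adequate for 0 <= x < 2.4048."""
    term, total, k = 1.0, 1.0, 0
    q = -(x * x) / 4.0
    while abs(term) > 1e-15:
        k += 1
        term *= q / (k * k)
        total += term
    return total

def min_interval_for_alpha(alpha_target, f_d_max, iters=60):
    """Smallest tau with J0(2*pi*f_D^max*tau) <= alpha_target, by bisection
    on [0, x1/(2*pi*f_D^max)], where x1 = 2.4048 is the first null of J0."""
    lo, hi = 0.0, 2.4048 / (2 * math.pi * f_d_max)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bessel_j0(2 * math.pi * f_d_max * mid) > alpha_target:
            lo = mid                  # correlation still too high
        else:
            hi = mid
    return hi
```

For \(f_{D}^{\text{max}}\) = 5 Hz and a target of α = 0.9037, this returns τ ≈ 20 ms, consistent with Figure 3.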
Proposal of an on‐off control protocol
As shown in the previous section, both the AF relaying and RT schemes provide the same diversity gain, and thus, we can expect a performance improvement by selecting the better of the two according to the channel conditions. If instantaneous channel information of all the links were available, the outage performance could be greatly improved by adaptively selecting the better scheme. However, in time‐varying channels, obtaining exact channel information is challenging. On the other hand, the channel statistics do not vary in real time, so they can be used to select the better scheme. From this perspective, we propose an on‐off control protocol for the half‐duplex‐based cooperative relay network in Figure 1, which works as follows:
Step 1: During phase 1, the source broadcasts its signal to both the relay and destination.
Step 2: Between phase 1 and phase 2, the source determines the transmitting node between the source and relay according to (31).
Step 3: Before phase 2, the source broadcasts the selection result through a short packet containing one bit of information.
Step 4: If the bit is 1, the relay simply amplifies and forwards the received signal from the source ('On' mode), and the source remains idle. Otherwise, the relay remains idle ('Off' mode), and the source retransmits the same signal to the destination.
When applying this protocol to the cooperative relay network, the asymptotic outage probability of the proposed on‐off control protocol at high SNR region is given by:
$$ {\fontsize{8.4pt}{9.6pt}\selectfont{\begin{aligned} {} \widetilde{P}_{\textmd{On-Off}}^{\textmd{Out}} (\gamma, R) &\approx \frac{d_{sd}^{\eta}}{2\sigma_{sd}^{2}} \left(\frac{2^{2R}-1}{\gamma} \right)^{\!\!\text{\fontsize{8}{7}{\selectfont{2}}}} \\ &\cdot \min \left(\frac{d_{sd}^{\eta}}{\sigma_{sd}^{2}\rho_{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})}, \frac{d_{sr}^{\eta}}{{\rho_{1}^{2}}\sigma_{sr}^{2}} +\frac{d_{rd}^{\eta}}{\rho_{1}\rho_{2}\sigma_{rd}^{2}} \right) \end{aligned}}} $$
where min(·,·) selects the smaller of its two arguments. The proposed on‐off control protocol thus guarantees the minimum of the outage probabilities of the AF relaying and RT schemes.
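A minimal sketch of the coefficient of \((\frac{2^{2R}-1}{\gamma})^{2}\) in (34), with lam_ij denoting \(d_{ij}^{\eta}/\sigma_{ij}^{2}\); by construction, the protocol's coefficient never exceeds either individual coefficient:

```python
def onoff_coeff(rho1, rho2, alpha, lam_sd, lam_sr, lam_rd):
    """Coefficient of ((2^{2R}-1)/gamma)^2 in (34); the min() picks the
    better of the RT and AF asymptotic behaviors."""
    rt = lam_sd / (rho2 * (1 - alpha**2) * (rho1 + rho2 * alpha**2))
    af = lam_sr / rho1**2 + lam_rd / (rho1 * rho2)
    return lam_sd / 2 * min(rt, af)
```

For symmetric links (all lam_ij = 1, ρ 1=ρ 2=0.5), the protocol follows AF at α = 0.9 but switches to RT at α = 0.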
Proposal of a selection DF relaying scheme under time‐varying channels
In the previous section, we showed that the RT scheme is comparable to cooperative relaying schemes in half‐duplex‐based cooperative relay networks. In the literature, researchers have treated the SD link as a time‐invariant channel during the two transmission phases and simply assumed that the received SNR over the SD link is doubled by the RT scheme. Based on this assumption, many cooperative relaying protocols have been investigated and proposed. However, such an assumption may not be practical in real wireless environments, and thus, research on cooperative relaying protocols that account for the time‐varying property of the SD link is required. With this in mind, in this section we propose an SDF relaying scheme that combines DF relaying and the RT scheme under time‐varying channels.
Proposal of a SDF relaying scheme
Under time‐varying channels, even if instantaneous channel information is obtained, it is likely to be outdated. Thus, we assume that only local CSI is available. In the conventional SDF (C‐SDF) relaying scheme [4,9], the relay decides whether or not to forward depending merely on the channel quality of the SR link. This is due to the assumption that the channel of the SD link is fixed during the two transmission phases. However, since the RT scheme is comparable to a cooperative relaying scheme, both the source and relay should be simultaneously considered as candidates for transmission during phase 2 in practical wireless environments. The proposed SDF (P‐SDF) relaying scheme selects the transmitting node for phase 2 between the source and relay depending not only on the instantaneous channel information of the SR link but also on the channel statistics of the SD and RD links.
The protocol of the P‐SDF relaying scheme is described as follows. First, when the instantaneous capacity of the SR link is less than the spectral efficiency R (b/s/Hz), i.e., \(\log (1+\rho _{1}\gamma |h_{\textit {sr}}^{(1)}|^{2}) < R\), the relay selects the source as the transmitting node during phase 2. On the other hand, when the condition \(\log (1+\rho _{1}\gamma |h_{\textit {sr}}^{(1)}|^{2}) \ge R\) is satisfied, the relay compares the two asymptotic outage probabilities of the DF relaying and RT schemes, because instantaneous channel information of the other links is not available under our assumption. To do so, we first derive the asymptotic conditional outage probability of DF relaying with respect to \(|h_{\textit {sr}}^{(1)}|^{2}\). The instantaneous capacity of DF relaying for the case of \(\log (1+\rho _{1}\gamma |h_{\textit {sr}}^{(1)}|^{2}) \ge R\) is given by:
$$ {}C_{\text{DF}} \Big|_{|h_{sr}^{(1)}|^{2} > f(\gamma,R)/\rho_{1}} = \frac{1}{2} \log \left(1+\gamma (\rho_{1}|h_{sd}^{(1)}|^{2}\! +\! \rho_{2}|h_{rd}^{(2)}|^{2}) \right), $$
where f(γ,R) is defined in (15). From (35), its conditional outage probability is expressed as:
$$ \begin{aligned} &P_{\text{DF}}^{\text{Out}}\left(\gamma,R \Big| |h_{sr}^{(1)}|^{2} > f(\gamma,R)/\rho_{1}\right) \\&\qquad= \text{Pr} \left[ \rho_{1}|h_{sd}^{(1)}|^{2} + \rho_{2} |h_{rd}^{(2)}|^{2} < f(\gamma,R)\right]. \end{aligned} $$
In (36), \(\rho _{1}|h_{\textit {sd}}^{(1)}|^{2}\) and \(\rho _{2} |h_{\textit {rd}}^{(2)}|^{2}\) are exponential random variables with parameters \(d_{\textit {sd}}^{\eta }/(\rho _{1}\sigma _{\textit {sd}}^{2})\) and \(d_{\textit {rd}}^{\eta }/(\rho _{2}\sigma _{\textit {rd}}^{2})\), respectively. Thus, through procedures similar to those used for Proposition 1, (36) can be expressed as:
$$ \begin{aligned} &{}P_{\text{DF}}^{\text{Out}}\left(\gamma,R \Big| |h_{sr}^{(1)}|^{2} > f(\gamma,R)/\rho_{1}\right) \\ &{}=\left\{\!\! \begin{array}{ll} 1\! -\!\left(\!1\,+\,\frac{d_{sd}^{\eta}(2^{2R}-1)}{\rho_{1}\gamma\sigma_{sd}^{2}}\right) e^{-d_{sd}^{\eta}(2^{2R}-1)/(\rho_{1}\gamma\sigma_{sd}^{2})}, &\frac{d_{sd}^{\eta}}{\rho_{1}\sigma_{sd}^{2}} = \frac{d_{rd}^{\eta}}{\rho_{2}\sigma_{rd}^{2}} \\ 1 \,+\, \frac{\rho_{2}\sigma_{rd}^{2} d_{sd}^{\eta}}{\rho_{1}\sigma_{sd}^{2} d_{rd}^{\eta}-\rho_{2}\sigma_{rd}^{2} d_{sd}^{\eta}} e^{-d_{rd}^{\eta}(2^{2R}-1) / (\rho_{2}\gamma\sigma_{rd}^{2})}\\ - \frac{\rho_{1}\sigma_{sd}^{2} d_{rd}^{\eta}}{\rho_{1}\sigma_{sd}^{2} d_{rd}^{\eta}-\rho_{2}\sigma_{rd}^{2} d_{sd}^{\eta}} e^{-d_{sd}^{\eta}(2^{2R}-1) / (\rho_{1}\gamma\sigma_{sd}^{2})}, &\frac{d_{sd}^{\eta}}{\rho_{1}\sigma_{sd}^{2}} \neq \frac{d_{rd}^{\eta}}{\rho_{2}\sigma_{rd}^{2}} \\ \end{array} \right. \end{aligned} $$
At high SNR region, from (22), the conditional outage probability of the DF relaying is approximated as:
$$ {}\widetilde{P}_{\text{DF}}^{\text{Out}}\left(\!\gamma\!,R \Big| |h_{sr}^{(1)}|^{2} \!>\! f(\gamma,R)/\rho_{1}\!\right) \,=\, \frac{d_{sd}^{\eta}d_{rd}^{\eta}}{2\rho_{1}\rho_{2}\sigma_{sd}^{2}\sigma_{rd}^{2}} \left(\!\frac{2^{2R}-1}{\gamma}\!\right)^{\!\!\text{\fontsize{9}{7}{\selectfont{2}}}}\!. $$
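The exact conditional outage (37) and its approximation (38) can be cross‐checked numerically. In the sketch below, lam_sd and lam_rd denote \(d^{\eta}/\sigma^{2}\) for the respective links (illustrative values in the test), and the ratio of the two expressions approaches one as γ grows.

```python
import math

def df_cond_outage(gamma, R, rho1, rho2, lam_sd, lam_rd):
    """Exact conditional DF outage (37): CDF of rho1*|h_sd|^2 + rho2*|h_rd|^2
    at f(gamma, R), with exponential rates l1 = lam_sd/rho1, l2 = lam_rd/rho2."""
    f = (2**(2 * R) - 1) / gamma
    l1, l2 = lam_sd / rho1, lam_rd / rho2
    if l1 == l2:                          # equal-rate (Erlang) case
        return 1 - (1 + l1 * f) * math.exp(-l1 * f)
    return (1 + (l1 / (l2 - l1)) * math.exp(-l2 * f)
              - (l2 / (l2 - l1)) * math.exp(-l1 * f))

def df_cond_outage_asym(gamma, R, rho1, rho2, lam_sd, lam_rd):
    """High-SNR approximation (38)."""
    f = (2**(2 * R) - 1) / gamma
    return lam_sd * lam_rd * f**2 / (2 * rho1 * rho2)
```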
By comparing the second part of (26) with (38), we have:
$$\begin{array}{@{}rcl@{}} \frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}} \qquad \begin{aligned} \overset{\text{RT}}{\ge} \\ \underset{\text{DF}}{<} \end{aligned} \qquad \zeta \end{array} $$
$$ \zeta = \frac{\rho_{1} d_{sd}^{\eta}}{\sigma_{sd}^{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})} $$
In summary, the scheme S ∗ with the lower outage probability is determined by:
$$\begin{array}{@{}rcl@{}} S^{*} =\left\{ \begin{array}{ll} \text{DF}, &|h_{sr}^{(1)}|^{2} > f(\gamma,R)/\rho_{1} \;\; \text{and} \;\; \frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}} < \zeta \\ \text{RT}, &\text{otherwise} \\ \end{array} \right. \end{array} $$
When both conditions \(|h_{\textit {sr}}^{(1)}|^{2} > f(\gamma,R)/\rho _{1}\) and \(d_{\textit {rd}}^{\eta }/\sigma _{\textit {rd}}^{2} < \zeta \) are satisfied, the relay forwards the received signal; otherwise, the source retransmits its signal while the relay remains idle.
Outage probability analysis of the P‐SDF relaying scheme
Based on (40), the outage probability of the P‐SDF relaying scheme is defined as:
$$ {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} &{}P_{\text{P-SDF}}^{\text{Out}} (\gamma,R) = \Pr[S^{*}=\textmd{DF}]P_{\text{DF}}^{\text{Out}}(\gamma,R | S^{*}=\text{DF})\\ &\qquad\qquad\qquad\qquad+ \Pr[S^{*} = \text{RT}]P_{\text{RT}}^{\text{Out}}(\gamma,R |S^{*} = \text{RT}) \\ &{}=\left\{ \begin{array}{ll} P_{\text{RT}}^{\text{Out}} (\gamma,R), &\frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}} \ge \zeta \\ \text{Pr}\left[|h_{sr}^{(1)}|^{2} > \frac{f(\gamma,R)}{\rho_{1}}\right] P_{\text{DF}}^{\text{Out}} \left(\gamma,R \; \Big| |h_{sr}^{(1)}|^{2} > \frac{f(\gamma,R)}{\rho_{1}} \right)\\ +\text{Pr}\left[|h_{sr}^{(1)}|^{2} \le \frac{f(\gamma,R)}{\rho_{1}}\right] P_{\text{RT}}^{\text{Out}}(\gamma,R), &\frac{d_{rd}^{\eta}} {\sigma_{rd}^{2}} < \zeta \\ \end{array} \right. \end{aligned}}} $$
In (41), \(P_{\text {RT}}^{\text {Out}}(\gamma,R)\) and \(P_{\text {DF}}^{\text {Out}}(\gamma,R \,|\, |h_{\textit {sr}}^{(1)}|^{2} > f(\gamma,R)/\rho_{1})\) were derived above. In addition, the probability \(\text {Pr}[|h_{\textit {sr}}^{(1)}|^{2} \le f(\gamma,R)/\rho _{1}]\) is given by:
$$\begin{array}{@{}rcl@{}} \text{Pr}[|h_{sr}^{(1)}|^{2} \le f(\gamma,R)/\rho_{1}] = 1-e^{-d_{sr}^{\eta}f(\gamma,R)/(\rho_{1}\sigma_{sr}^{2})} \end{array} $$
By applying (21), (36), and (42) to (41), the exact outage probability of the P‐SDF relaying scheme is obtained. We omit the resulting expression to avoid duplication.
With the relationship in (22), in the high‐SNR region and after some mathematical manipulation, the outage probability of the P‐SDF relaying scheme can be approximated as:
$$ {\fontsize{8.6pt}{9.6pt}\selectfont{\begin{aligned} {} P_{\text{P-SDF}}^{\text{Out}} (\gamma, R)\approx \left\{ \begin{array}{ll} \frac{d_{sd}^{2\eta}}{2\sigma_{sd}^{4} \rho_{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})} \left(\frac{2^{2R}-1}{\gamma} \right)^{\!2}, &\frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}} \ge \zeta \\ \left(\frac{d_{sd}^{\eta}}{2\rho_{1}\rho_{2}\sigma_{sd}^{2}}\right) \left\{ \left(1-\frac{d_{sr}^{\eta}(2^{2R}-1)} {\gamma\rho_{1}\sigma_{sr}^{2}}\right) \left(\frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}}\right)\right.\\ \left. + \left(\frac{d_{sr}^{\eta}d_{sd}^{\eta}(2^{2R}-1)} {\gamma\sigma_{sr}^{2}\sigma_{sd}^{2}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})}\right) \right\} \left(\frac{2^{2R}-1}{\gamma} \right)^{2}, &\frac{d_{rd}^{\eta}}{\sigma_{rd}^{2}} < \zeta \\ \end{array} \right. \end{aligned}}} $$
Here, we highlight an important property of the proposed SDF relaying scheme. Interestingly, the P‐SDF relaying scheme can provide an additional diversity gain in a specific case. As \(d_{\textit {rd}}^{\eta }\) approaches zero, (43) is approximated as:
$$ {\fontsize{8.1pt}{9.6pt}\selectfont{ \begin{aligned} {}\lim\limits_{d_{rd}^{\eta} \rightarrow 0} P_{\text{P-SDF}}^{\text{Out}} (\!\gamma, R) \approx \left(\frac{d_{sr}^{\eta}d_{sd}^{2\eta}}{2\rho_{1}\rho_{2}\sigma_{sr}^{2}\sigma_{sd}^{4}(1-\alpha^{2})(\rho_{1}+\rho_{2}\alpha^{2})}\right) \left(\!\frac{2^{2R}-1}{\gamma} \right)^{\!\!\text{\fontsize{7}{7}{\selectfont{3}}}} \end{aligned}}} $$
Unlike conventional relaying schemes, the P‐SDF relaying scheme provides the diversity order of three when the relay is close to the destination.
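The cubic decay in (44) can be checked directly: increasing γ by a factor of ten reduces the approximate outage by a factor of one thousand. A minimal sketch, with lam_sr and lam_sd as the \(d^{\eta}/\sigma^{2}\) parameters (defaults are illustrative):

```python
def psdf_outage_asym(gamma, R, alpha, rho1, rho2, lam_sr=1.0, lam_sd=1.0):
    """Approximation (44) of the P-SDF outage as d_rd -> 0: the outage decays
    like gamma^{-3}, i.e., diversity order three."""
    f = (2**(2 * R) - 1) / gamma
    coeff = lam_sr * lam_sd**2 / (
        2 * rho1 * rho2 * (1 - alpha**2) * (rho1 + rho2 * alpha**2))
    return coeff * f**3
```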
Simulations and discussions
In this section, we present numerical and Monte Carlo simulation results to validate the analysis above. We first show that the outage probability of the RT scheme is comparable to that of the AF relaying scheme under various simulation scenarios. Then, the outage performance of the P‐SDF relaying scheme is compared with several benchmark schemes, and we show that the P‐SDF relaying scheme is superior to them.
For the simulations, we consider the cooperative relay network of Figure 1, which includes one source, one relay, and one destination under time‐varying channels. We account for the effects of both large‐scale and small‐scale fading on all links, as described in the 'System model' section. Unless otherwise specified, we set R = 1 b/s/Hz, d sd = 1 km, d sr = d rd = 0.8 km, η = 3, \(\sigma _{\textit {ij}}^{2}\) = 1 where i∈{s,r}, j∈{r,d}, P T = 10 ∼ 40 dBm, ρ 1 = ρ 2 = 0.5, and N 0 = ‐80 dBm. Detailed parameter settings are given in the caption of each figure.
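As a minimal illustration of the Monte Carlo methodology used in this section, the sketch below estimates the outage probability of a single Rayleigh-faded link and compares it against the known closed form 1 − exp(−(2^R − 1)/γ̄). The single-link setting and the 20 dB mean SNR are toy assumptions for illustration, not the paper's full relay setup.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.0                          # target rate (b/s/Hz)
snr_bar = 10 ** (20 / 10)        # assumed mean received SNR of 20 dB

# |h|^2 under Rayleigh fading is exponentially distributed with unit mean
g = rng.exponential(1.0, 200_000)

# Outage event: instantaneous capacity falls below the target rate
outage_mc = np.mean(np.log2(1 + snr_bar * g) < R)

# Closed-form outage probability for this single link
outage_exact = 1 - np.exp(-(2 ** R - 1) / snr_bar)
```

With 200,000 samples the empirical estimate agrees with the closed form to within a fraction of a percent, which is the kind of agreement the figures below exhibit.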
Verification of the RT scheme
The outage performance of the RT scheme depends strongly on the channel variation of the SD link. Figure 4 compares the outage probabilities of the RT and AF relaying schemes for various correlation coefficients as a function of the total transmission power P T . The RT scheme, like the AF relaying scheme, provides a diversity order of two even when the channel correlation of the SD link during the two transmission phases is high, i.e., α=0.99. In practical wireless environments, it is therefore reasonable to consider the RT scheme as a viable option for obtaining a diversity gain in cooperative relay networks. As α decreases, the outage performance of the RT scheme improves, and with α=0.5 the two curves almost overlap. In addition, the exact outage probability of the RT scheme derived in (21) matches the Monte Carlo simulation results well.
Outage probability comparison of the AF relaying and RT schemes versus the total transmission power P T . R=1, d sr =d rd = 0.8 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5.
Figures 5 and 6 show the effective range of the RT scheme depending on the path loss. With the distances d rd and d sd fixed, Figure 5 compares the outage probabilities of both schemes as the distance between the source and relay varies. Even when the distance between the source and relay is short, the AF relaying scheme does not outperform the RT scheme when the channel correlation of the SD link is low. This is because the performance of the AF relaying scheme depends on both the SR and RD links. When both distances, d sr and d rd , become shorter, the AF relaying scheme can achieve better outage performance than the RT scheme, as shown in Figure 6. Which scheme achieves the lower outage probability is thus determined by the channel correlation of the SD link and the relay location.
Outage probability comparison of the AF relaying and RT schemes versus d sr . R=1, d rd = 0.8 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, P T = 35 dBm, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5.
Outage probability comparison of the AF relaying and RT schemes versus both d sr and d rd . R=1, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, P T = 35 dBm, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5.
In Figure 7, the ratio of the outage probability of the RT scheme to that of the AF relaying scheme is plotted against the channel correlation of the SD link. A ratio smaller than 1 means that the RT scheme has a lower outage probability than the AF relaying scheme. Here we vary the distance between the source and relay in particular, because the performance of a relaying scheme is strongly related to the SR link condition. In the high range of α, i.e., α = 0.6 to 0.9, the ratios change sharply. This means that the RT scheme remains attractive even when the channel correlation of the SD link is not very low. When α is around 0.5 or 0.6, the outage probability of the RT scheme is very similar to that at α=0.1.
Ratios of the outage probability of the RT scheme to that of the AF relaying scheme versus the correlation coefficient (R=1, d rd = 1 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, P T = 35 dBm, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5).
Verification of the P‐SDF relaying scheme
In this subsection, we validate the outage performance of the P‐SDF relaying scheme proposed in the 'Proposal of a selection DF relaying scheme under time‐varying channels' section. For comparison, we consider the following three benchmark schemes: 1) the RT scheme, described in the 'Motivation and contribution' section; 2) the DF‐AF relaying scheme [9], in which the relay adaptively selects the transmission mode between the AF and DF relay modes depending on the SR link; and 3) the SDR relaying scheme [14], which utilizes an optimal threshold for SNR‐based selective digital relaying. Since the DF‐AF relaying scheme adaptively selects the better of the two relaying modes, we do not consider the individual AF and DF relaying schemes as benchmarks.
Figure 8 compares the outage probability of the P‐SDF relaying scheme with those of the benchmark schemes as a function of the total transmission power P T . Under the scenario described in the caption of Figure 8, the P‐SDF relaying scheme has the best outage performance, whereas all the benchmark schemes perform similarly. Compared to the benchmark schemes, the P‐SDF relaying scheme obtains an SNR gain of 2 dB.
Outage probability comparison of the P‐SDF relaying and benchmark schemes versus the total transmission power (R=1, d sr =d rd = 0.8 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, α = 0.5, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5).
The performance of cooperative relaying schemes depends strongly on the channel quality of the SR link. In Figure 9, we examine the outage probability of all the schemes as the distance between the source and relay varies. Figure 9 shows that the P‐SDF relaying scheme has the best outage performance over the whole range, even though the outage probabilities of the DF‐AF and SDR relaying schemes approach that of the P‐SDF relaying scheme as the distance d sr decreases. This is because DF relaying is the best option when the channel quality of the SR link is sufficiently good. On the other hand, when the channel quality of the SR link is poor, both the DF‐AF and SDR relaying schemes perform worse than the RT scheme.
Outage probability comparison of the P‐SDF relaying and benchmark schemes versus d sr (R=1, d rd = 0.8 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, α = 0.5, P T = 30 dBm, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5).
The performance of the P‐SDF relaying scheme is closely related to the channel quality of the RD link, as explained in the 'Proposal of a selection DF relaying scheme under time‐varying channels' section. Figures 10 and 11 support the analytical results of that section. In Figure 10, as the distance d rd decreases, the outage performance of the P‐SDF relaying scheme improves markedly, whereas those of the DF‐AF and SDR relaying schemes improve only slightly. Figure 11 shows that an additional diversity gain can be obtained by the P‐SDF relaying scheme. When the relay is close to the destination, i.e., d rd = 0.1 km, P‐SDF relaying provides a diversity order of three, whereas the other schemes provide only a diversity order of two. The P‐SDF relaying scheme is thus most effective when the relay is positioned near the destination.
Outage probability comparison of the P‐SDF relaying and benchmark schemes versus d rd (R=1, d sr = 0.8 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, α = 0.5, P T = 30 dBm, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5).
Outage probability comparison of the P‐SDF relaying and benchmark schemes versus P T . R=1, d sr = 0.8 km, d rd = 0.1 km, d sd = 1 km, η=3, \(\sigma _{\textit {ij}}^{2}\)=1 where i∈{s,r},j∈{r,d}, α = 0.5, N 0 = ‐80 dBm, ρ 1=ρ 2=0.5.
In this paper, we investigated the availability of the direct path in half‐duplex‐based cooperative relay networks consisting of one source, one relay, and one destination under time‐varying channels. Unlike conventional approaches, we showed that, from a practical viewpoint, the RT scheme should be modeled under a time‐varying channel. To this end, we analyzed the outage performance of the RT scheme and validated it through Monte Carlo simulations. Based on the analysis, we provided useful insights for employing the RT scheme in cooperative relay networks. In addition, we proposed an SDF relaying scheme, namely the P‐SDF relaying scheme, which adaptively selects the transmitting node for the second transmission phase between the source and the relay. The P‐SDF relaying scheme achieved better outage performance than the conventional relaying schemes and, in particular, is most beneficial when the relay is positioned near the destination. The RT scheme is applicable to all the cooperative relaying scenarios.
RU Nabar, H Bolcskei, FW Kneubuhler, Fading relay channel: performance limits and space‐time signal design. IEEE J. Sel. Areas Commun. 22(6), 1099–1109 (2004).
XF Tao, XD Xu, QM Cui, An overview of cooperative communications. IEEE Commun. Mag. 50(6), 65–71 (2012).
M Xia, S Aissa, Underlay cooperative AF relaying in cellular networks: performance and challenges. IEEE Commun. Mag. 51(12), 170–76 (2013).
JN Laneman, DNC Tse, GW Wornell, Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inf. Theory. 50(12), 3062–3080 (2004).
A Bletsas, H Shin, MZ Win, Cooperative communications with outage‐optimal opportunistic relaying. IEEE Trans. Wireless Commun. 6(9), 3450–3460 (2007).
K Woradit, TQS Quek, W Suwansantisuk, H Wymeersch, L Wuttisittikulkij, MZ Win, Outage behavior of selective relaying schemes. IEEE Trans. Wireless Commun. 8(8), 3890–3895 (2009).
Y Zou, YD Yao, B Zheng, Opportunistic distributed space‐time coding for decode‐and‐forward cooperation system. IEEE Trans. Signal Process. 60(4), 1766–1781 (2012).
T Liu, L Song, Y Li, Q Huo, B Jiao, Performance analysis of hybrid relay selection in cooperative wireless systems. IEEE Trans. Commun. 60(3), 779–788 (2012).
W Su, X Liu, On optimum selection relaying protocols in cooperative wireless networks. IEEE Trans. Commun. 58(1), 52–57 (2010).
QF Zhou, FCM Lau, Two incremental relaying protocols for cooperative networks. IET Commun. 2(10), 1272–1278 (2008).
QF Zhou, FCM Lau, SF Hau, Asymptotic analysis of opportunistic relaying protocols. IEEE Trans. Wireless Commun. 8(8), 3915–3920 (2009).
K Kim, H Park, HM Kwon, Optimum clustered pilot sequence for OFDM systems under rapidly time‐varying channel. IEEE Trans. Commun. 60(5), 1357–1370 (2012).
D Li, S Feng, W Ye, Pilot‐assisted channel estimation method for OFDMA systems over time‐varying channels. IEEE Commun. Lett. 13(11), 826–828 (2009).
FA Onat, A Adinoyi, Y Fan, H Yanikomeroglu, JS Thompson, ID Marsland, Threshold selection for SNR‐based selective digital relaying in cooperative wireless networks. IEEE Trans. Wireless Commun. 7(11), 4226–4237 (2008).
JL Vicario, A Bel, JA Lopez‐Salcedo, G Seco, Opportunistic relay selection with outdated CSI: outage probability and diversity analysis. IEEE Trans. Wireless Commun. 8(6), 2872–2876 (2009).
HA Suraweera, M Soysa, C Tellambura, HK Garg, Performance analysis of partial relay selection with feedback delay. IEEE Signal Process. Lett. 17(6), 531–534 (2010).
M Soysa, HA Suraweera, C Tellambura, HK Garg, Partial and opportunistic relay selection with outdated channel estimates. IEEE Trans. Commun. 60(3), 840–850 (2012).
DS Michalopoulos, HA Suraweera, GK Karagiannidis, R Schober, Amplify‐and‐forward relay selection with outdated channel estimates. IEEE Trans. Commun. 60(5), 1278–1290 (2012).
AS Ibrahim, AK Sadek, W Su, KJR Liu, Cooperative communications with relay‐selection: when to cooperate and whom to cooperate with?. IEEE Trans. Wireless Commun. 7(7), 2814–2827 (2008).
J Lee, M Rim, K Kim, On the outage performance of selection amplify‐and‐forward relaying scheme. IEEE Commun. Lett. 18(3), 423–426 (2014).
Z Mobini, P Sadeghi, M Khabbazian, S Zokaei, Power allocation and group assignment for reducing network coding noise in multi‐unicast wireless systems. IEEE Trans. Veh. Technol. 61(8), 3615–3629 (2012).
Z Ding, KK Leung, DL Goeckel, D Towsley, On the study of network coding with diversity. IEEE Trans. Wireless Commun. 8(3), 1247–1259 (2009).
Y Ma, D Zhang, A Leith, Z Wang, Error performance of transmit beamforming with delayed and limited feedback. IEEE Trans. Wireless Commun. 8(3), 1164–1170 (2009).
M Torabi, JF Frigon, B Sanso, Performance analysis of adaptive modulation in multiuser selection diversity systems with OSTBC over time‐varying channels. IEEE Signal Process. Lett. 19(4), 211–214 (2012).
IS Gradshteyn, IM Ryzhik, Table of Integrals, Series and Products, 7th edition (Academic, 2007).
KE Baddour, NC Beaulieu, Robust doppler spread estimation in nonisotropic fading channels. IEEE Trans. Wireless Commun. 4(6), 2677–2682 (2005).
C Tepedelenlioglu, GB Giannakis, On velocity estimation and correlation properties of narrow‐band mobile communication channels. IEEE Trans. Veh. Technol. 50(4), 1039–1052 (2001).
This research was a part of the project titled 'Domestic products development of marine survey and ocean exploration equipments', funded by the ministry of Oceans and Fisheries, Korea, and by Leading Foreign Research Institute Recruitment Program through NRF, Korea, funded by the Ministry of Science, ICT and Future Planning (MSIP) (2009‐00422).
School of Information and Communications, GIST, 123 Cheomdangwagi‐ro, Buk‐gu, Gwangju, Korea
Jeehoon Lee
& Kiseon Kim
Department of Information and Communication Engineering, Dongguk University‐Seoul, 30, Pildong‐ro 1‐gil, Jung‐gu, Seoul, Korea
Minjoong Rim
Correspondence to Kiseon Kim.
Lee, J., Rim, M. & Kim, K. Availability of direct path in half‐duplex‐based cooperative relay networks. J Wireless Com Network 2015, 135 (2015) doi:10.1186/s13638-015-0359-5
Cooperative relay networks
Time‐varying channels
Repeated transmission
Selective relaying
Entropy + Algebra + Topology = ?
Today I'd like to share a bit of math involving ideas from information theory, algebra, and topology. It's all in a new paper I've recently uploaded to the arXiv, whose abstract you can see on the right. The paper is short — just 11 pages! Even so, I thought it'd be nice to stroll through some of the surrounding mathematics here.
To introduce those ideas, let's start by thinking about the function $d\colon[0,1]\to\mathbb{R}$ defined by $d(x)=-x\log x$ when $x>0$ and $d(x)=0$ when $x=0$. Perhaps after getting out pencil and paper, it's easy to check that this function satisfies an equation that looks a lot like the product rule from Calculus:

$$d(xy)=x\,d(y)+d(x)\,y.$$
Functions that satisfy an equation reminiscent of the "Leibniz rule," like this one, are called derivations, which invokes the familiar idea of a derivative. The nonzero term $-x\log x$ above may also look familiar to some of you. It's an expression that appears in the Shannon entropy of a probability distribution. A probability distribution on a finite set $\{1,\ldots,n\}$ for $n\geq 1$ is a sequence $p=(p_1,\ldots,p_n)$ of nonnegative real numbers satisfying $\sum_{i=1}^np_i=1$, and the Shannon entropy of $p$ is defined to be

$$H(p)=-\sum_{i=1}^n p_i\log p_i=\sum_{i=1}^n d(p_i).$$
Now it turns out that the function $d$ is nonlinear, which means we can't pull it out in front of the summation. In other words, $H(p)\neq d(\sum_ip_i).$ Even so, curiosity might cause us to wonder about settings in which Shannon entropy is itself a derivation. One such setting is described in the paper above, which shows a correspondence between Shannon entropy and derivations of (wait for it...) topological simplices!
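All three facts above, the product-rule identity for $d$, the expression of $H$ as a sum of $d$-values, and the failure of linearity, are easy to check numerically. Here is a small sketch (using the natural logarithm; a different base only rescales $d$):

```python
import math

# d(x) = -x log x for x > 0, extended by d(0) = 0
def d(x):
    return -x * math.log(x) if x > 0 else 0.0

# Product-rule identity: d(xy) = x*d(y) + d(x)*y
x, y = 0.3, 0.6
lhs = d(x * y)
rhs = x * d(y) + d(x) * y

# Shannon entropy of p as a sum of d-values
p = [0.5, 0.25, 0.25]
H = sum(d(pi) for pi in p)       # equals 1.5 * log(2) nats

# d is nonlinear: H(p) != d(sum(p)), since d(1) = 0
d_total = d(sum(p))
```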
But what does that mean? To make sense of it, we need to bring in an algebraic tool that has origins in homotopy theory. That tool is called an operad, and it's something we've previously introduced here on the blog. Roughly speaking, an operad is an abstract way to encode the various "flavors" that algebras come in: associative algebras, commutative algebras, Lie algebras, and so on. Operads have been used extensively in algebraic topology (see this friendly article by Jim Stasheff in the AMS Notices) and even in physics, too. In fact, we saw an example of an operad on PBS Infinite Series way back in the day — the associahedra!
As it turns out, topological simplices are another nice example of an operad, as described in this old blog post and more recently in chapter 12 of Tom Leinster's new book, Entropy and Diversity. Formally, an $(n-1)$-simplex $\Delta^{n-1}$ is the set of all points $(p_1,\ldots,p_n)$ in $\mathbb{R}^n$ such that $0\leq p_i\leq 1$ for each $i$ and $\sum_i p_i=1$. So a point in a simplex is nothing more than a probability distribution! In this way, probabilities and topology go hand-in-hand.
But what does this have to do with entropy? Or algebra? Or derivations, for that matter? I'll explain. But first, let me tell you why I find the confluence of these ideas so intriguing.
A Detour into (Co)homology
In recent years, it's become evident that the intersection of information theory and algebraic topology is fertile ground. Ideas from (co)homological algebra, in particular, have arisen in a few different places. Loosely speaking, homological tools enable the detection of "holes" in a topological space and are thus a helpful way to distinguish one space from another — just count the number of holes in each! Conceptually, a hole is like a string that is closed and through which you can poke your finger. Said differently, a hole is a closed string that is not the boundary of some 2-dimensional region of space. And by the way, boundaries are often indicated with the letter $d,$ meaning that when $R$ is a region its boundary is denoted by $dR$.
If $S$ is a closed string, then its boundary is intuitively just a point. (Imagine starting with the unit interval $[0,1]$ and glueing the endpoints $0$ and $1$ together to form a loop.) This idea can be succinctly written as $dS=0$. If that closed string $S$ is also the boundary of some region so that $S=dR$, then it follows that $dS=d(dR)=0$. This leads to the pithy saying, "the boundary of a boundary is zero," which translates concisely as $d^2=0$. This equation is strikingly fundamental in mathematics.
"If I could only understand the beautiful consequence following from the concise proposition $d^2=0$."
- Henri Cartan
Quanta Magazine recently published a great article explaining these ideas, so I won't go into detail. The main thing to know is that "holes" are more formally called cycles, or better yet, 1-cycles, since the concept can be abstracted to higher dimensions. In any case, the ability to detect boundaries is important. A one-dimensional hole is precisely a 1-cycle that isn't a boundary! What's more, this story about homology has a dual version called cohomology. There, the dual notion of a hole is called a cocycle, and in both cases the totality of all (co)cycles, (co)boundaries (and higher dimensional analogues), and the (co)boundary detector $d$ can be organized into something called a (co)chain complex.
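The slogan $d^2=0$ can be seen very concretely with boundary matrices. As a toy example of ours (not taken from the post), take a filled triangle on vertices {0, 1, 2}, with edge basis e01, e02, e12 and one face f012; composing the two boundary operators gives the zero map:

```python
import numpy as np

# d1 sends each edge to its boundary: e01 -> v1 - v0, e02 -> v2 - v0, e12 -> v2 - v1.
# Rows are vertices (v0, v1, v2); columns are edges (e01, e02, e12).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])

# d2 sends the face f012 to its boundary: e12 - e02 + e01.
d2 = np.array([[ 1],
               [-1],
               [ 1]])

dd = d1 @ d2   # "the boundary of a boundary": the zero matrix
```

The same computation, with bigger matrices, is exactly what homology software does to find cycles that are not boundaries.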
Lingo aside, here's the relevant point: Although I've drawn amoeba-like shapes above, we can also make sense of "holes" and "shapes" and "(co)homology" in a purely algebraic, rather than topological, setting. For example, you can compute the homology of your favorite associative algebra! In this algebraic context, there are some scenarios in which the boundary operator $d$ may also satisfy the Leibniz rule (or some version of it), i.e. the boundary operator $d$ may also be a derivation.
I'm being admittedly vague here, but I hope to pique your curiosity.
After all, what does any of this have to do with entropy?
As I mentioned above, the nexus of information theory and algebraic topology is a tantalizing place. In 2015, Pierre Baudot and Daniel Bennequin published a paper called "The Homological Nature of Entropy," where they introduce tools of "information cohomology" and construct a certain cochain complex for which entropy represents the unique 1-cocycle. Around the same time, Philippe Elbaz-Vincent and Herbert Gangl defined so-called "information functions of degree 1," which are functions that look like entropy, and proved that these functions behave "a lot like certain derivations."
A few years earlier, John Baez, Tobias Fritz, and Tom Leinster gave a category theoretical characterization of entropy in 2011. In preparation for that paper, Baez wrote an informal article on the nLab where he observed that entropy seems to behave like a derivation when viewed from the vantage point of operads. (Verifying this observation and making it precise is the content of my paper.) And — as if that weren't enough! — in 2019 Tom Mainiero explored cohomological ideas in the context of mutual information and entropy in a paper called "Homological Tools for the Quantum Mechanic" and found that entropy appears in the Euler characteristic of a particular cochain complex associated to a quantum state.
Taking inventory of these ideas, one gets the feeling that these results are all consistent with the notion that entropy behaves a little like "$d$ of something" for some suitable (co)boundary-like operator $d$.
The result I've shared on the arXiv is in a similar vein.
Entropy + Algebra + Topology = Derivations
Rather than looking at derivations $d$ on a (co)chain complex associated to a topological space, or derivations of an associative algebra, we instead look at derivations of the operad of topological simplices. Defining that concept is a key part of the paper, so you'll have to read the article to know what it means!
Inspiration for this came from a few observations made by John Baez in this 2011 blog post together with a nice characterization of Shannon entropy given by Dmitry Faddeev in 1956 and an enlightening variation of it given recently by Tom Leinster in chapter 12 of this book. And I first learned about the operad of simplices in this excellent talk by Tom at CIRM in 2017 on "The Categorical Origins of Entropy."
The math that ties all this together is explained in the preprint, which I've titled "Entropy as a Topological Operad Derivation." I hope you'll take a look! Perhaps unsurprisingly, there are a few pictures, too. Here's my favorite one, which you can find on page 9:
And with that, I'll leave you with the punchline of the paper:
Theorem.
Shannon entropy defines a derivation of the operad of topological simplices, and for every derivation of this operad there exists a point at which it is given by a constant multiple of Shannon entropy.
\begin{definition}[Definition:Asymptotically Equal/General Definition/Point]
Let $T = \left({S, \tau}\right)$ be a topological space.
Let $V$ be a normed vector space over $\R$ or $\C$ with norm $\left\Vert{\, \cdot \,}\right\Vert$.
Let $f, g: S \to V$ be mappings.
Let $x_0 \in S$.
Then:
: $f$ is '''asymptotically equal''' to $g$ as $x \to x_0$
{{iff}}:
: $f - g = o \left({g}\right)$ as $x \to x_0$
where $o$ denotes little-O notation.
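For intuition, here is a standard one-variable example, added here as an illustration (it is not part of the source definition):

```latex
% With S = V = \R, f(x) = x + x^2, g(x) = x, and x_0 = 0:
%   f(x) - g(x) = x^2 = o(x) = o(g(x)) as x \to 0,
% so f is asymptotically equal to g as x \to 0. Indeed:
\lim_{x \to 0} \frac{\norm{f(x) - g(x)}}{\norm{g(x)}}
  = \lim_{x \to 0} \frac{x^2}{\size x}
  = 0
```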
Category:Definitions/Asymptotic Notation
\end{definition}
Diffeology
In mathematics, a diffeology on a set generalizes the concept of smooth charts in a differentiable manifold, declaring what the "smooth parametrizations" in the set are.
Not to be confused with Diffiety.
The concept was first introduced by Jean-Marie Souriau in the 1980s under the name Espace différentiel[1][2] and later developed by his students Paul Donato[3] and Patrick Iglesias.[4][5] A related idea was introduced by Kuo-Tsaï Chen (陳國才, Chen Guocai) in the 1970s, using convex sets instead of open sets for the domains of the plots.[6]
Intuitive definition
Recall that a topological manifold is a topological space which is locally homeomorphic to $\mathbb {R} ^{n}$. Differentiable manifolds generalize the notion of smoothness on $\mathbb {R} ^{n}$ in the following sense: a differentiable manifold is a topological manifold with a differentiable atlas, i.e. a collection of maps from open subsets of $\mathbb {R} ^{n}$ to the manifold which are used to "pull back" the differential structure from $\mathbb {R} ^{n}$ to the manifold.
A diffeological space consists of a set together with a collection of maps (called a diffeology) satisfying suitable axioms, which generalise the notion of an atlas on a manifold. In this way, the relationship between smooth manifolds and diffeological spaces is analogous to the relationship between topological manifolds and topological spaces.
More precisely, a smooth manifold can be equivalently defined as a diffeological space which is locally diffeomorphic to $\mathbb {R} ^{n}$. Indeed, every smooth manifold has a natural diffeology, consisting of its maximal atlas (all the smooth maps from open subsets of $\mathbb {R} ^{n}$ to the manifold). This abstract point of view makes no reference to a specific atlas (and therefore to a fixed dimension $n$) nor to the underlying topological space, and is therefore suitable to treat examples of objects more general than manifolds.
Formal definition
A diffeology on a set $X$ consists of a collection of maps, called plots or parametrizations, from open subsets of $\mathbb {R} ^{n}$ ($n\geq 0$) to $X$ such that the following axioms hold:
• Covering axiom: every constant map is a plot.
• Locality axiom: for a given map $f:U\to X$, if every point in $U$ has a neighborhood $V\subset U$ such that $f_{\mid V}$ is a plot, then $f$ itself is a plot.
• Smooth compatibility axiom: if $p$ is a plot, and $f$ is a smooth function from an open subset of some $\mathbb {R} ^{m}$ into the domain of $p$, then the composite $p\circ f$ is a plot.
Note that the domains of different plots can be subsets of $\mathbb {R} ^{n}$ for different values of $n$; in particular, any diffeology contains the elements of its underlying set as the plots with $n=0$. A set together with a diffeology is called a diffeological space.
More abstractly, a diffeological space is a concrete sheaf on the site of open subsets of $\mathbb {R} ^{n}$, for all $n\geq 0$, and open covers.[7]
Morphisms
A map between diffeological spaces is called smooth if and only if its composite with any plot of the first space is a plot of the second space. It is called a diffeomorphism if it is smooth, bijective, and its inverse is also smooth. By construction, given a diffeological space $X$, its plots defined on $U$ are precisely all the smooth maps from $U$ to $X$.
Diffeological spaces form a category where the morphisms are smooth maps. The category of diffeological spaces is closed under many categorical operations: for instance, it is Cartesian closed, complete and cocomplete, and more generally it is a quasitopos.[7]
D-topology
Any diffeological space is automatically a topological space with the so-called D-topology: the final topology such that all plots are continuous (with respect to the euclidean topology on $\mathbb {R} ^{n}$).
In other words, a subset $U\subset X$ is open if and only if $f^{-1}(U)$ is open for any plot $f$ on $X$. Actually, the D-topology is completely determined by smooth curves, i.e. a subset $U\subset X$ is open if and only if $c^{-1}(U)$ is open for any smooth map $c:\mathbb {R} \to X$.[8]
The D-topology is automatically locally path-connected[9] and a differentiable map between diffeological spaces is automatically continuous between their D-topologies.[5]
Additional structures
A Cartan-De Rham calculus can be developed in the framework of diffeologies, as well as a suitable adaptation of the notions of fiber bundles, homotopy, etc.[5] However, there is not a canonical definition of tangent spaces and tangent bundles for diffeological spaces.[10]
Examples
Trivial examples
• Any set can be endowed with the coarse (or trivial, or indiscrete) diffeology, i.e. the largest possible diffeology (any map is a plot). The corresponding D-topology is the trivial topology.
• Any set can be endowed with the discrete (or fine) diffeology, i.e. the smallest possible diffeology (the only plots are the locally constant maps). The corresponding D-topology is the discrete topology.
• Any topological space can be endowed with the continuous diffeology, whose plots are all continuous maps.
Manifolds
• Any differentiable manifold is a diffeological space by considering its maximal atlas (i.e., the plots are all smooth maps from open subsets of $\mathbb {R} ^{n}$ to the manifold); its D-topology recovers the original manifold topology. With this diffeology, a map between two smooth manifolds is smooth if and only if it is differentiable in the diffeological sense. Accordingly, smooth manifolds with smooth maps form a full subcategory of the category of diffeological spaces.
• Similarly, complex manifolds, analytic manifolds, etc. have natural diffeologies consisting of the maps preserving the extra structure.
• This method of modeling diffeological spaces can be extended to local models which are not necessarily the euclidean space $\mathbb {R} ^{n}$. For instance, diffeological spaces include orbifolds, which are modeled on quotient spaces $\mathbb {R} ^{n}/\Gamma $, where $\Gamma $ is a finite linear subgroup,[11] or manifolds with boundary and corners, modeled on orthants, etc.[12]
• Any Banach manifold is a diffeological space.[13]
• Any Fréchet manifold is a diffeological space.[14][15]
Constructions from other diffeological spaces
• If a set $X$ is given two different diffeologies, their intersection is a diffeology on $X$, called the intersection diffeology, which is finer than both starting diffeologies. The D-topology of the intersection diffeology is the intersection of the D-topologies of the initial diffeologies.
• If $Y$ is a subset of the diffeological space $X$, then the subspace diffeology on $Y$ is the diffeology consisting of the plots of $X$ whose images are subsets of $Y$. The D-topology of $Y$ is the subspace topology of the D-topology of $X$.
• If $X$ and $Y$ are diffeological spaces, then the product diffeology on the Cartesian product $X\times Y$ is the diffeology generated by all products of plots of $X$ and of $Y$. The D-topology of $X\times Y$ is the product topology of the D-topologies of $X$ and $Y$.
• If $X$ is a diffeological space and $\sim $ is an equivalence relation on $X$, then the quotient diffeology on the quotient set $X/\sim $ is the diffeology generated by all compositions of plots of $X$ with the projection from $X$ to $X/\sim $. The D-topology on $X/\sim $ is the quotient topology of the D-topology of $X$ (note that this topology may be trivial without the diffeology being trivial).
• The pushforward diffeology of a diffeological space $X$ by a function $f:X\to Y$ is the diffeology on $Y$ generated by the compositions $f\circ p$, for $p$ a plot of $X$. In other words, the pushforward diffeology is the smallest diffeology on $Y$ making $f$ differentiable. The quotient diffeology boils down to the pushforward diffeology by the projection $X\to X/\sim $.
• The pullback diffeology of a diffeological space $Y$ by a function $f:X\to Y$ is the diffeology on $X$ whose plots are maps $p$ such that the composition $f\circ p$ is a plot of $Y$. In other words, the pullback diffeology is the smallest diffeology on $X$ making $f$ differentiable.
• The functional diffeology between two diffeological spaces $X,Y$ is the diffeology on the set ${\mathcal {C}}^{\infty }(X,Y)$ of differentiable maps, whose plots are the maps $\phi :U\to {\mathcal {C}}^{\infty }(X,Y)$ such that $(u,x)\mapsto \phi (u)(x)$ is smooth (with respect to the product diffeology of $U\times X$). When $X$ and $Y$ are manifolds, the D-topology of ${\mathcal {C}}^{\infty }(X,Y)$ is the smallest locally path-connected topology containing the weak topology.[8]
Wire/spaghetti diffeology
The wire diffeology (or spaghetti diffeology) on $\mathbb {R} ^{2}$ is the diffeology whose plots factor locally through $\mathbb {R} $. More precisely, a map $p:U\to \mathbb {R} ^{2}$ is a plot if and only if for every $u\in U$ there is an open neighbourhood $V\subseteq U$ of $u$ such that $p|_{V}=q\circ F$ for two plots $F:V\to \mathbb {R} $ and $q:\mathbb {R} \to \mathbb {R} ^{2}$. This diffeology does not coincide with the standard diffeology on $\mathbb {R} ^{2}$: for instance, the identity $\mathrm {id} :\mathbb {R} ^{2}\to \mathbb {R} ^{2}$ is not a plot in the wire diffeology.[5]
This example can be extended to diffeologies whose plots factor locally through $\mathbb {R} ^{r}$. More generally, one can consider the rank-$r$-restricted diffeology on a smooth manifold $M$: a map $U\to M$ is a plot if and only if the rank of its differential is less than or equal to $r$. For $r=1$ one recovers the wire diffeology.[16]
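The rank condition can be tested numerically. The following sketch is ours (finite differences stand in for the differential; function names are not from the references): the identity on $\mathbb {R} ^{2}$ has Jacobian rank 2 everywhere, so it fails the $r=1$ (wire) condition, while a map factoring through $\mathbb {R} $ has rank at most 1.

```python
import numpy as np

# Numeric illustration of the rank-r-restricted diffeology: estimate the
# Jacobian of p: R^2 -> R^2 by central finite differences and compute its
# numerical rank at sample points.

def jacobian(p, u, h=1e-6):
    """Finite-difference Jacobian of p at the point u."""
    u = np.asarray(u, dtype=float)
    cols = []
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        cols.append((np.asarray(p(u + e)) - np.asarray(p(u - e))) / (2 * h))
    return np.stack(cols, axis=1)

def max_rank(p, samples, tol=1e-4):
    """Largest numerical Jacobian rank of p over the sample points."""
    return max(np.linalg.matrix_rank(jacobian(p, u), tol=tol) for u in samples)

samples = [(0.3, 0.7), (1.0, -2.0), (-0.5, 0.25)]

identity = lambda u: (u[0], u[1])            # rank 2 everywhere
wire = lambda u: (np.sin(u[0]), u[0] ** 2)   # factors through R: rank <= 1

print(max_rank(identity, samples))  # 2 -> not a plot of the wire diffeology
print(max_rank(wire, samples))      # 1 -> compatible with the r = 1 condition
```

A rank bound at finitely many sample points is of course only evidence, not a proof that a map is a plot; the definition quantifies over all points of the domain.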
Other examples
• Quotients give an easy way to construct non-manifold diffeologies. For example, the set of real numbers $\mathbb {R} $ is a smooth manifold. The quotient $\mathbb {R} /(\mathbb {Z} +\alpha \mathbb {Z} )$, for some irrational $\alpha $, called the irrational torus, is a diffeological space diffeomorphic to the quotient of the regular 2-torus $\mathbb {R} ^{2}/\mathbb {Z} ^{2}$ by a line of slope $\alpha $. It has a non-trivial diffeology, but its D-topology is the trivial topology.[17]
• Combining the subspace diffeology and the functional diffeology, one can define diffeologies on the space of sections of a fibre bundle, or the space of bisections of a Lie groupoid, etc.
Subductions and inductions
Analogously to the notions of submersions and immersions between manifolds, there are two special classes of morphisms between diffeological spaces. A subduction is a surjective function $f:X\to Y$ between diffeological spaces such that the diffeology of $Y$ is the pushforward of the diffeology of $X$. Similarly, an induction is an injective function $f:X\to Y$ between diffeological spaces such that the diffeology of $X$ is the pullback of the diffeology of $Y$. Note that subductions and inductions are automatically smooth.
When $X$ and $Y$ are smooth manifolds, a subduction (respectively, induction) between them is precisely a surjective submersion (respectively, injective immersion). Moreover, these notions enjoy properties similar to those of submersions and immersions, such as:
• If a composition $f\circ g$ is a subduction (respectively, an induction), then $f$ is a subduction (respectively, $g$ is an induction).
• An injective subduction (respectively, a surjective induction) is a diffeomorphism.
An embedding is an induction which is also a homeomorphism with its image, with respect to the subset topology induced from the D-topology of the codomain. For diffeologies underlying smooth manifolds, this boils down to the standard notion of embedding.
References
1. Souriau, J. M. (1980), García, P. L.; Pérez-Rendón, A.; Souriau, J. M. (eds.), "Groupes differentiels", Differential Geometrical Methods in Mathematical Physics, Lecture Notes in Mathematics, Berlin, Heidelberg: Springer Berlin Heidelberg, vol. 836, pp. 91–128, doi:10.1007/bfb0089728, ISBN 978-3-540-10275-5, retrieved 2022-01-16
2. Souriau, Jean-Marie (1984), Denardo, G.; Ghirardi, G.; Weber, T. (eds.), "Groupes différentiels et physique mathématique", Group Theoretical Methods in Physics, Lecture Notes in Physics, Berlin/Heidelberg: Springer-Verlag, vol. 201, pp. 511–513, doi:10.1007/bfb0016198, ISBN 978-3-540-13335-3, retrieved 2022-01-16
3. Donato, Paul (1984). Revêtement et groupe fondamental des espaces différentiels homogènes [Coverings and fundamental groups of homogeneous differential spaces] (in French). Marseille: PhD thesis, Université de Provence.
4. Iglesias, Patrick (1985). Fibrés difféologiques et homotopie [Diffeological fiber bundles and homotopy] (PDF) (in French). Marseille: PhD thesis, Université de Provence.
5. Iglesias-Zemmour, Patrick (2013-04-09). Diffeology. Mathematical Surveys and Monographs. Vol. 185. American Mathematical Society. doi:10.1090/surv/185. ISBN 978-0-8218-9131-5.
6. Chen, Kuo-Tsai (1977). "Iterated path integrals". Bulletin of the American Mathematical Society. 83 (5): 831–879. doi:10.1090/S0002-9904-1977-14320-6. ISSN 0002-9904.
7. Baez, John; Hoffnung, Alexander (2011). "Convenient categories of smooth spaces". Transactions of the American Mathematical Society. 363 (11): 5789–5825. doi:10.1090/S0002-9947-2011-05107-X. ISSN 0002-9947.
8. Christensen, John Daniel; Sinnamon, Gordon; Wu, Enxin (2014-10-09). "The D-topology for diffeological spaces". Pacific Journal of Mathematics. 272 (1): 87–110. doi:10.2140/pjm.2014.272.87. ISSN 0030-8730.
9. Laubinger, Martin (2006). "Diffeological spaces". Proyecciones. 25 (2): 151–178. doi:10.4067/S0716-09172006000200003. ISSN 0717-6279.
10. Christensen, Daniel; Wu, Enxin (2016). "Tangent spaces and tangent bundles for diffeological spaces". Cahiers de Topologie et Geométrie Différentielle Catégoriques. 57 (1): 3–50. arXiv:1411.5425.
11. Iglesias-Zemmour, Patrick; Karshon, Yael; Zadka, Moshe (2010). "Orbifolds as diffeologies" (PDF). Transactions of the American Mathematical Society. 362 (6): 2811–2831. doi:10.1090/S0002-9947-10-05006-3. JSTOR 25677806. S2CID 15210173.
12. Gürer, Serap; Iglesias-Zemmour, Patrick (2019). "Differential forms on manifolds with boundary and corners". Indagationes Mathematicae. 30 (5): 920–929. doi:10.1016/j.indag.2019.07.004.
13. Hain, Richard M. (1979). "A characterization of smooth functions defined on a Banach space". Proceedings of the American Mathematical Society. 77 (1): 63–67. doi:10.1090/S0002-9939-1979-0539632-8. ISSN 0002-9939.
14. Losik, Mark (1992). "О многообразиях Фреше как диффеологических пространствах" [Fréchet manifolds as diffeological spaces]. Izv. Vyssh. Uchebn. Zaved. Mat. (in Russian). 5: 36–42 – via All-Russian Mathematical Portal.
15. Losik, Mark (1994). "Categorical differential geometry". Cahiers de Topologie et Géométrie Différentielle Catégoriques. 35 (4): 274–290.
16. Blohmann, Christian (2023-01-06). "Elastic diffeological spaces". arXiv:2301.02583 [math.DG].
17. Donato, Paul; Iglesias, Patrick (1985). "Exemples de groupes difféologiques: flots irrationnels sur le tore" [Examples of diffeological groups: irrational flows on the torus]. C. R. Acad. Sci. Paris Sér. I (in French). 301 (4): 127–130. MR 0799609.
External links
• Patrick Iglesias-Zemmour: Diffeology (book), Mathematical Surveys and Monographs, vol. 185, American Mathematical Society, Providence, RI USA [2013].
• Patrick Iglesias-Zemmour: Diffeology (many documents)
• diffeology.net Global hub on diffeology and related topics
Abstract: While going deeper has been witnessed to improve the performance of convolutional neural networks (CNN), going smaller has received increasing attention recently due to its attractiveness for mobile/embedded applications. It remains an active and important topic how to design a small network while retaining the performance of large and deep CNNs (e.g., Inception Nets, ResNets). Although there are already intensive studies on compressing the size of CNNs, a considerable drop in performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatments of $1\times 1$ convolutions and $k\times k$ convolutions ($k>1$): we only binarize $k\times k$ convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of $1\times 1$ (data projection/transformation) and $k\times k$ convolutions (pattern extraction), we propose a new block structure codenamed the pattern residual block that adds transformed feature maps generated by $1\times 1$ convolutions to the pattern feature maps generated by $k\times k$ convolutions, based on which we design a small network with $\sim 1$ million parameters. Combining this with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks including the recently released Google MobileNets.
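The abstract does not spell out the binarization scheme. As a hedged sketch, a common choice (in the style of XNOR-Net, assumed here rather than taken from the paper) approximates a kernel as $W \approx \alpha\,\mathrm{sign}(W)$ with $\alpha = \mathrm{mean}|W|$, applied only to $k\times k$ kernels while $1\times 1$ kernels stay full-precision. All function names below are ours.

```python
import numpy as np

# Sketch of k x k kernel binarization (our assumption of the scheme: a
# sign/scale approximation; the abstract above does not give the exact
# method). 1 x 1 kernels are left untouched, mirroring the paper's split
# between pattern extraction (k x k) and data projection (1 x 1).

def binarize_kernel(w):
    """Approximate a kernel by alpha * sign(w), with alpha = mean(|w|)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

def compress(kernels):
    """Binarize only k x k (k > 1) kernels; keep 1 x 1 kernels full-precision."""
    return [w if w.shape[-1] == 1 else binarize_kernel(w) for w in kernels]

rng = np.random.default_rng(0)
w3 = rng.normal(size=(3, 3))   # pattern-extracting k x k kernel
w1 = rng.normal(size=(1, 1))   # data-projection 1 x 1 kernel

b3, b1 = compress([w3, w1])
print(np.unique(np.abs(b3)).size)  # 1: all entries are +/- a single scale alpha
```

Each binarized $k\times k$ kernel thus needs only $k^2$ bits plus one floating-point scale, which is the source of the compression the abstract describes.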
Lines of Intersection Between Two Planes
Lines of Intersection Between Planes
Sometimes we want to calculate the line at which two planes intersect each other. We can accomplish this by solving a system of equations to determine where the two planes intersect. Note that this will result in a system with a free parameter, from which we can determine parametric equations for the line.
Let's hypothetically say that we want to find the equation of the line of intersection between the following planes $L_1$ and $L_2$:
\begin{align} L_1: 2x - y - 4z + 2 = 0 \\ L_2: -3x + 2y - z + 2 = 0 \end{align}
We will begin by first setting up a system of linear equations. Note that we have more variables (3) than equations (2), so at least one column will lack a pivot when we convert the augmented matrix of planes $L_1$ and $L_2$ into reduced row echelon form, leaving a free variable. The following matrix represents our two planes: $\begin{bmatrix}2 & -1 & -4 & -2 \\ -3& 2 & -1 & -2 \end{bmatrix}$.
We will thus convert this matrix into reduced row echelon form by Gauss-Jordan elimination:
\begin{align} \frac{1}{2} R_1 \to R_1 \\ \begin{bmatrix} 1 & -\frac{1}{2} & -2 & -1 \\ -3& 2 & -1 & -2 \end{bmatrix} \end{align}
\begin{align} -\frac{1}{3} R_2 \to R_2 \\ \begin{bmatrix} 1 & -\frac{1}{2} & -2 & -1 \\ 1 & -\frac{2}{3} & \frac{1}{3} & \frac{2}{3} \end{bmatrix} \end{align}
\begin{align} R_2 - R_1 \to R_2 \\ \begin{bmatrix} 1 & -\frac{1}{2} & -2 & -1 \\ 0 & -\frac{1}{6} & \frac{7}{3} & \frac{5}{3} \end{bmatrix} \end{align}
\begin{align} -6R_2 \to R_2 \\ \begin{bmatrix} 1 & -\frac{1}{2} & -2 & -1 \\ 0 & 1 & -14 & -10 \end{bmatrix} \end{align}
\begin{align} R_1 + \frac{1}{2} R_2 \to R_1 \\ \begin{bmatrix} 1 & 0 & -9 & -6 \\ 0 & 1 & -14 & -10 \end{bmatrix} \end{align}
We now have the system in reduced row echelon form. We can see that $z$ is a free variable, so let's parameterize it. Let $z = t$ for $(-\infty < t < \infty)$. Therefore, we can express the line of intersection as a set of parametric equations:
\begin{align} \quad x = -6 + 9t \quad , \quad y = -10 + 14t \quad , \quad z = t \quad (-\infty < t < \infty) \end{align} | CommonCrawl |
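The reduction above can be double-checked with a computer algebra system. Here is a short sketch using SymPy (the variable names are ours): it recomputes the reduced row echelon form and verifies that the parametric equations satisfy both plane equations.

```python
import sympy as sp

# Reduce the augmented matrix of the two planes and verify the
# parameterization x = -6 + 9t, y = -10 + 14t, z = t found above.
x, y, z, t = sp.symbols('x y z t')
M = sp.Matrix([[2, -1, -4, -2],
               [-3, 2, -1, -2]])
R, pivots = M.rref()
print(R)  # Matrix([[1, 0, -9, -6], [0, 1, -14, -10]])

line = {x: -6 + 9*t, y: -10 + 14*t, z: t}
for plane in (2*x - y - 4*z + 2, -3*x + 2*y - z + 2):
    assert sp.simplify(plane.subs(line)) == 0
```

Substituting the parametric equations back into both plane equations and getting zero confirms that every point of the line lies on both planes.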
Measurements of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ spectra in proton-proton interactions at 20, 31, 40, 80 and 158 GeV/c with the NA61/SHINE spectrometer at the CERN SPS (1705.02467)
NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, B. Baatar, M. Baszczyk, S. Bhosale, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, H. Cherif, M. Ćirković, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, W. Dominik, P. Dorosz, J. Dumarchez, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, C. Francois, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, F. Guber, A. Haesler, A.E. Hervé, J. Hylen, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, E. Kaptur, M. Kiełbowicz, V.A. Kireyeu, V. Klochkov, N. Knezević, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, V. Kovalenko, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, W. Kucewicz, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, B. Lundberg, V.V. Lyubushkin, B. Łysakowski, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manić, A. Marchionni, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, A. Merzlaya, B. Messerly, Ł. Mik, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, M. Naskręt, V. Ozvenchuk, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, P. Podlaski, B.A. Popov, M. Posiadała, S. Puławski, J. Puzović, R. Rameika, W. Rauch, M. Ravonel, R. Renfordt, E. Richter-Wąs, D. Röhrich, E. Rondio, M. Roth, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Rybicki, A. Sadovsky, K. Schmidt, I. Selyuzhenkov, A. Seryakov, P. Seyboth, M. Słodkowski, A. Snoch, P. Staszel, G. Stefanek, J. Stepaniak, M. Strikhanov, H. Ströbele, T. Šuša, M. Szuba, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, A. Toia, R. Tsenov, L. Turko, R. Ulrich, M. Unger, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, M. Walewski, A. Wickremasinghe, C. Wilkinson, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, E.D. Zimmerman, R. Zwaska
Sept. 27, 2017 nucl-ex
Measurements of inclusive spectra and mean multiplicities of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} = $ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively) were performed at the CERN Super Proton Synchrotron using the large acceptance NA61/SHINE hadron spectrometer. Spectra are presented as a function of rapidity and transverse momentum and are compared to predictions of current models. The measurements serve as the baseline in the NA61/SHINE study of the properties of the onset of deconfinement and the search for the critical point of strongly interacting matter.
Two-particle correlations in azimuthal angle and pseudorapidity in inelastic p+p interactions at the CERN Super Proton Synchrotron (1610.00482)
NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Cirkovic, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Herve, M. Hierholzer, J. Hylen, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, M. Kielbowicz, J. Kisiel, N. Knezevic, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, V. Kovalenko, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. Laszlo, M. Lewicki, B. Lundberg, V.V. Lyubushkin, M. Mackowiak-Pawlowska, B. Maksiak, A.I. Malakhov, D. Manic, A. Marchionni, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, A. Merzlaya, B. Messerly, G.B. Mills, S. Morozov, S. Mrowczynski, Y. Nagai, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, V. Ozvenchuk, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Planeta, B.A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, R. Rameika, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Rohrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Rybicki, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, M. Walewski, A. Wickremasinghe, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli, E.D. Zimmerman, R. Zwaska
Feb. 7, 2017 nucl-ex
Results on two-particle $\Delta\eta\Delta\phi$ correlations in inelastic p+p interactions at 20, 31, 40, 80, and 158~GeV/c are presented. The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. The data show structures which can be attributed mainly to effects of resonance decays, momentum conservation, and quantum statistics. The results are compared with the EPOS and UrQMD models.
Measurements of $\pi^{\pm}$ differential yields from the surface of the T2K replica target for incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS (1603.06774)
NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, M. Ajaz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blümer, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Ćirković, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Hervé, M. Hierholzer, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manic, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, T. Nakadaira, M. Naskręt, M. Nirkko, K. Nishikawa, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, B.A. Popov, M. Posiadała-Zezula, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Wąs, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, K. Yarritu, L. Zambelli, E.D. Zimmerman, M. Friend, V. Galymov, M. Hartz, T. Hiraki, A. Ichikawa, H. Kubo, K. Matsuoka, A. Murakami, T. Nakaya, K. Suzuki, M. Tzanov, M. Yu
Nov. 29, 2016 hep-ex, physics.ins-det
Measurements of particle emission from a replica of the T2K 90 cm-long carbon target were performed in the NA61/SHINE experiment at CERN SPS, using data collected during a high-statistics run in 2009. An efficient use of the long-target measurements for neutrino flux predictions in T2K requires dedicated reconstruction and analysis techniques. Fully-corrected differential yields of $\pi^\pm$-mesons from the surface of the T2K replica target for incoming 31 GeV/c protons are presented. A possible strategy to implement these results into the T2K neutrino beam predictions is discussed and the propagation of the uncertainties of these results to the final neutrino flux is performed.
Multiplicity and transverse momentum fluctuations in inelastic proton-proton interactions at the CERN Super Proton Synchrotron (1510.00163)
NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, M. Cirkovic, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G. A. Feofilov, Z. Fodor, A. Garibov, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A. Herve, M. Hierholzer, S. Igolkin, A. Ivashkin, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. Laszlo, M. Lewicki, V. V. Lyubushkin, M. Mackowiak-Pawlowska, B. Maksiak, A. I. Malakhov, D. Manic, A. Marcinek, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, S. Morozov, S. Mrowczynski, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, A. D. Panagiotou, M. Pavin, O. Petukhov, C. Pistillo, R. Planeta, B. A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Rohrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli
Aug. 30, 2016 hep-ex, nucl-ex
Measurements of multiplicity and transverse momentum fluctuations of charged particles were performed in inelastic p+p interactions at 20, 31, 40, 80 and 158 GeV/c beam momentum. Results for the scaled variance of the multiplicity distribution and for three strongly intensive measures of multiplicity and transverse momentum fluctuations $\Delta[P_{T},N]$, $\Sigma[P_{T},N]$ and $\Phi_{p_T}$ are presented. For the first time the results on fluctuations are fully corrected for experimental biases. The results on multiplicity and transverse momentum fluctuations significantly deviate from expectations for independent particle production. They also depend on the charges of the selected hadrons. The string-resonance Monte Carlo models EPOS and UrQMD do not describe the data. The scaled variance of multiplicity fluctuations is significantly higher in inelastic p+p interactions than in central Pb+Pb collisions measured by NA49 at the same energy per nucleon. This is in qualitative disagreement with the predictions of the Wounded Nucleon Model. Within the statistical framework the enhanced multiplicity fluctuations in inelastic p+p interactions can be interpreted as due to event-by-event fluctuations of the fireball energy and/or volume.
Production of deuterium, tritium, and $^3$He in central Pb+Pb collisions at 20A, 30A, 40A, 80A, and 158A GeV at the CERN SPS (1606.04234)
NA49 Collaboration: T. Anticic, B. Baatar, J. Bartke, H. Beck, L. Betev, H. Białkowska, C. Blume, B. Boimska, J. Book, M. Botje, P. Bunčić, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gaździcki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, V.I. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Maćkowiak-Pawłowska, M. Makariev, A.I. Malakhov, G.L. Melkumov, M. Mitrovski, St. Mrówczyński, G. Pálla, A.D. Panagiotou, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, A. Rustamov, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Siklér, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Ströbele, T. Susa, M. Szuba, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranić, Z. Włodarczyk, A. Wojtaszek-Szwarc
June 15, 2016 nucl-ex
Production of $d$, $t$, and $^3$He nuclei in central Pb+Pb interactions was studied at five collision energies ($\sqrt{s_{NN}}=$ 6.3, 7.6, 8.8, 12.3, and 17.3 GeV) with the NA49 detector at the CERN SPS. Transverse momentum spectra, rapidity distributions, and particle ratios were measured. Yields are compared to predictions of statistical models. Phase-space distributions of light nuclei are discussed and compared to those of protons in the context of a coalescence approach. The coalescence parameters $B_2$ and $B_3$, as well as coalescence radii for $d$ and $^3$He were determined as a function of transverse mass at all energies.
Production of $\Lambda$ hyperons in inelastic p+p interactions at 158 GeV/$c$ (1510.03720)
A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Ćirković, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Hervé, M. Hierholzer, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manić, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, T. Nakadaira, M. Naskręt, M. Nirkko, K. Nishikawa, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, B.A. Popov, M. Posiadała, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Wąs, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, E.D. Zimmerman (The NA61/SHINE Collaboration)
April 25, 2016 hep-ex, nucl-ex
Inclusive production of $\Lambda$-hyperons was measured with the large acceptance NA61/SHINE spectrometer at the CERN SPS in inelastic p+p interactions at a beam momentum of 158 GeV/c. Spectra of transverse momentum and transverse mass as well as distributions of rapidity and $x_F$ are presented. The mean multiplicity was estimated to be $0.120 \pm 0.006\,(stat.) \pm 0.010\,(sys.)$. The results are compared with previous measurements and predictions of the EPOS, UrQMD and FRITIOF models.
Measurements of $\pi^\pm$, $K^\pm$, $K^0_S$, $\Lambda$ and proton production in proton-carbon interactions at 31 GeV/$c$ with the NA61/SHINE spectrometer at the CERN SPS (1510.02703)
N. Abgrall, A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blümer, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, A. Damyanova, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A. Herve, M. Hierholzer, S. Igolkin, A. Ivashkin, D. Joković, S. R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, D. Kiełczewska, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, Z. Majka, B. Maksiak, A.I. Malakhov, A. Marchionni, D. Manić, A. Marcinek, A. D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G. B. Mills, S. Morozov, S. Mrówczyński, S. Murphy, Y. Nagai, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, T. Palczewski, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, J. Pluta, B.A. Popov, M. Posiadała-Zezula, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B. T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, K. Yarritu, L. Zambelli, E. D. Zimmerman
Feb. 24, 2016 hep-ex, nucl-ex
Measurements of hadron production in p+C interactions at 31 GeV/c are performed using the NA61/SHINE spectrometer at the CERN SPS. The analysis is based on the full set of data collected in 2009 using a graphite target with a thickness of 4% of a nuclear interaction length. Inelastic and production cross sections as well as spectra of $\pi^\pm$, $K^\pm$, p, $K^0_S$ and $\Lambda$ are measured with high precision. These measurements are essential for improved calculations of the initial neutrino fluxes in the T2K long-baseline neutrino oscillation experiment in Japan. A comparison of the NA61/SHINE measurements with predictions of several hadroproduction models is presented.
Critical fluctuations of the proton density in A+A collisions at $158A$ GeV (1208.5292)
T. Anticic, B. Baatar, D. Barna, J. Bartke, J. Beck, L. Betev, H. Białkowska, C. Blume, M. Bogusz, B. Boimska, J. Book, M. Botje, P. Bunčić, T. Cetner, P. Christakoglou, P. Chung, O. Chvala, J. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gaździcki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, V. I. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Maćkowiak-Pawłowska, M. Makariev, A. I. Malakhov, M. Mateev, G. L. Melkumov, M. Mitrovski, St. Mrówczyński, G. Pálla, A. D. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, A. Rustamov, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Siklér, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Ströbele, T. Susa, M. Szuba, D. Varga, M. Vassiliou, G. I. Veres, G. Vesztergombi, D. Vranić, Z. Włodarczyk, A. Wojtaszek-Szwarć, N. G. Antoniou, N. Davis, F. K. Diakonos
Nov. 6, 2015 hep-ph, hep-ex, nucl-ex
We look for fluctuations expected for the QCD critical point using an intermittency analysis in the transverse momentum phase space of protons produced around midrapidity in the 12.5% most central C+C, Si+Si and Pb+Pb collisions at the maximum SPS energy of 158$A$ GeV. We find evidence of power-law fluctuations for the Si+Si data. The fitted power-law exponent $\phi_{2} = 0.96^{+0.38}_{-0.25}\text{ (stat.)}$ $\pm 0.16\text{ (syst.)}$ is consistent with the value expected for critical fluctuations. Power-law fluctuations had previously also been observed in low-mass $\pi^+ \pi^-$ pairs in the same Si+Si collisions.
Measurement of event-by-event transverse momentum and multiplicity fluctuations using strongly intensive measures $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ in nucleus-nucleus collisions at the CERN Super Proton Synchrotron (1509.04633)
NA49 Collaboration: T. Anticic, B. Baatar, J. Bartke, H. Beck, L. Betev, H. Bialkowska, C. Blume, B. Boimska, J. Book, M. Botje, P. Buncic, P. Christakoglou, P. Chung, O. Chvala, J. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gazdzicki, K. Grebieszkow, C. Hohne, K. Kadija, A. Karev, V. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Mackowiak-Pawlowska, M. Makariev, A. Malakhov, G. Melkumov, M. Mitrovski, S. Mrowczynski, G. Palla, A. Panagiotou, J. Pluta, D. Prindle, F. Puhlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczynski, A. Rybicki, A. Sandoval, A. Rustamov, N. Schmitz, T. Schuster, P. Seyboth, F. Sikler, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Strobele, T. Susa, M. Szuba, D. Varga, M. Vassiliou, G. Veres, G. Vesztergombi, D. Vranic, Z. Wlodarczyk, A. Wojtaszek-Szwarc
Results from the NA49 experiment at the CERN SPS are presented on event-by-event transverse momentum and multiplicity fluctuations of charged particles, produced at forward rapidities in central Pb+Pb interactions at beam momenta 20$A$, 30$A$, 40$A$, 80$A$, and 158$A$ GeV/c, as well as in systems of different size ($p+p$, C+C, Si+Si, and Pb+Pb) at 158$A$ GeV/c. This publication extends the previous NA49 measurements of the strongly intensive measure $\Phi_{p_T}$ by a study of the recently proposed strongly intensive measures of fluctuations $\Delta[P_T, N]$ and $\Sigma[P_T, N]$. In the explored kinematic region transverse momentum and multiplicity fluctuations show no significant energy dependence in the SPS energy range. However, a remarkable system size dependence is observed for both $\Delta[P_T, N]$ and $\Sigma[P_T, N]$, with the largest values measured in peripheral Pb+Pb interactions. The results are compared with NA61/SHINE measurements in $p+p$ collisions, as well as with predictions of the UrQMD and EPOS models.
Observation of the rare $B^0_s\to\mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data (1411.4413)
The CMS, LHCb Collaborations: V. Khachatryan, A.M. Sirunyan, A. Tumasyan, W. Adam, T. Bergauer, M. Dragicevic, J. Erö, M. Friedl, R. Frühwirth, V.M. Ghete, C. Hartl, N. Hörmann, J. Hrubec, M. Jeitler, W. Kiesenhofer, V. Knünz, M. Krammer, I. Krätschmer, D. Liko, I. Mikulec, D. Rabady, B. Rahbaran, H. Rohringer, R. Schöfbeck, J. Strauss, W. Treberer-Treberspurg, W. Waltenberger, C.-E. Wulz, V. Mossolov, N. Shumeiko, J. Suarez Gonzalez, S. Alderweireldt, S. Bansal, T. Cornelis, E.A. De Wolf, X. Janssen, A. Knutsson, J. Lauwers, S. Luyckx, S. Ochesanu, R. Rougny, M. Van De Klundert, H. Van Haevermaet, P. Van Mechelen, N. Van Remortel, A. Van Spilbeeck, F. Blekman, S. Blyweert, J. D'Hondt, N. Daci, N. Heracleous, J. Keaveney, S. Lowette, M. Maes, A. Olbrechts, Q. Python, D. Strom, S. Tavernier, W. Van Doninck, P. Van Mulders, G.P. Van Onsem, I. Villella, C. Caillol, B. Clerbaux, G. De Lentdecker, D. Dobur, L. Favart, A.P.R. Gay, A. Grebenyuk, A. Léonard, A. Mohammadi, L. Perniè, A. Randle-conde, T. Reis, T. Seva, L. Thomas, C. Vander Velde, P. Vanlaer, J. Wang, F. Zenoni, V. Adler, K. Beernaert, L. Benucci, A. Cimmino, S. Costantini, S. Crucy, S. Dildick, A. Fagot, G. Garcia, J. Mccartin, A.A. Ocampo Rios, D. Ryckbosch, S. Salva Diblen, M. Sigamani, N. Strobbe, F. Thyssen, M. Tytgat, E. Yazgan, N. Zaganidis, S. Basegmez, C. Beluffi, G. Bruno, R. Castello, A. Caudron, L. Ceard, G.G. Da Silveira, C. Delaere, T. du Pree, D. Favart, L. Forthomme, A. Giammanco, J. Hollar, A. Jafari, P. Jez, M. Komm, V. Lemaitre, C. Nuttens, D. Pagano, L. Perrini, A. Pin, K. Piotrzkowski, A. Popov, L. Quertenmont, M. Selvaggi, M. Vidal Marono, J.M. Vizan Garcia, N. Beliy, T. Caebergs, E. Daubie, G.H. Hammad, W.L. Aldá Júnior, G.A. Alves, L. Brito, M. Correa Martins Junior, T. Dos Reis Martins, C. Mora Herrera, M.E. Pol, P. Rebello Teles, W. Carvalho, J. Chinellato, A. Custódio, E.M. Da Costa, D. De Jesus Damiao, C. De Oliveira Martins, S. Fonseca De Souza, H. Malbouisson, D. 
Matos Figueiredo, L. Mundim, H. Nogima, W.L. Prado Da Silva, J. Santaolalla, A. Santoro, A. Sznajder, E.J. Tonelli Manganote, A. Vilela Pereira, C.A. Bernardes, S. Dogra, T.R. Fernandez Perez Tomei, E.M. Gregores, P.G. Mercadante, S.F. Novaes, Sandra S. Padula, A. Aleksandrov, V. Genchev, R. Hadjiiska, P. Iaydjiev, A. Marinov, S. Piperov, M. Rodozov, G. Sultanov, M. Vutova, A. Dimitrov, I. Glushkov, L. Litov, B. Pavlov, P. Petkov, J.G. Bian, G.M. Chen, H.S. Chen, M. Chen, T. Cheng, R. Du, C.H. Jiang, R. Plestina, F. Romeo, J. Tao, Z. Wang, C. Asawatangtrakuldee, Y. Ban, Q. Li, S. Liu, Y. Mao, S.J. Qian, D. Wang, Z. Xu, W. Zou, C. Avila, A. Cabrera, L.F. Chaparro Sierra, C. Florez, J.P. Gomez, B. Gomez Moreno, J.C. Sanabria, N. Godinovic, D. Lelas, D. Polic, I. Puljak, Z. Antunovic, M. Kovac, V. Brigljevic, K. Kadija, J. Luetic, D. Mekterovic, L. Sudic, A. Attikis, G. Mavromanolakis, J. Mousa, C. Nicolaou, F. Ptochos, P.A. Razis, M. Bodlak, M. Finger, M. Finger Jr., Y. Assran, A. Ellithi Kamel, M.A. Mahmoud, A. Radi, M. Kadastik, M. Murumaa, M. Raidal, A. Tiko, P. Eerola, G. Fedi, M. Voutilainen, J. Härkönen, V. Karimäki, R. Kinnunen, M.J. Kortelainen, T. Lampén, K. Lassila-Perini, S. Lehti, T. Lindén, P. Luukka, T. Mäenpää, T. Peltola, E. Tuominen, J. Tuominiemi, E. Tuovinen, L. Wendland, J. Talvitie, T. Tuuva, M. Besancon, F. Couderc, M. Dejardin, D. Denegri, B. Fabbro, J.L. Faure, C. Favaro, F. Ferri, S. Ganjour, A. Givernaud, P. Gras, G. Hamel de Monchenault, P. Jarry, E. Locci, J. Malcles, J. Rander, A. Rosowsky, M. Titov, S. Baffioni, F. Beaudette, P. Busson, C. Charlot, T. Dahms, M. Dalchenko, L. Dobrzynski, N. Filipovic, A. Florent, R. Granier de Cassagnac, L. Mastrolorenzo, P. Miné, C. Mironov, I.N. Naranjo, M. Nguyen, C. Ochando, G. Ortona, P. Paganini, S. Regnard, R. Salerno, J.B. Sauvan, Y. Sirois, C. Veelken, Y. Yilmaz, A. Zabi, J.-L. Agram, J. Andrea, A. Aubin, D. Bloch, J.-M. Brom, E.C. Chabert, C. Collard, E. Conte, J.-C. Fontaine, D. Gelé, U. 
Goerlach, C. Goetzmann, A.-C. Le Bihan, K. Skovpen, P. Van Hove, S. Gadrat, S. Beauceron, N. Beaupere, G. Boudoul, E. Bouvier, S. Brochet, C.A. Carrillo Montoya, J. Chasserat, R. Chierici, D. Contardo, P. Depasse, H. El Mamouni, J. Fan, J. Fay, S. Gascon, M. Gouzevitch, B. Ille, T. Kurca, M. Lethuillier, L. Mirabito, S. Perries, J.D. Ruiz Alvarez, D. Sabes, L. Sgandurra, V. Sordini, M. Vander Donckt, P. Verdier, S. Viret, H. Xiao, Z. Tsamalaidze, C. Autermann, S. Beranek, M. Bontenackels, M. Edelhoff, L. Feld, A. Heister, O. Hindrichs, K. Klein, A. Ostapchuk, F. Raupach, J. Sammet, S. Schael, J.F. Schulte, H. Weber, B. Wittmer, V. Zhukov, M. Ata, M. Brodski, E. Dietz-Laursonn, D. Duchardt, M. Erdmann, R. Fischer, A. Güth, T. Hebbeker, C. Heidemann, K. Hoepfner, D. Klingebiel, S. Knutzen, P. Kreuzer, M. Merschmeyer, A. Meyer, P. Millet, M. Olschewski, K. Padeken, P. Papacz, H. Reithler, S.A. Schmitz, L. Sonnenschein, D. Teyssier, S. Thüer, M. Weber, V. Cherepanov, Y. Erdogan, G. Flügge, H. Geenen, M. Geisler, W. Haj Ahmad, F. Hoehle, B. Kargoll, T. Kress, Y. Kuessel, A. Künsken, J. Lingemann, A. Nowack, I.M. Nugent, O. Pooth, A. Stahl, M. Aldaya Martin, I. Asin, N. Bartosik, J. Behr, U. Behrens, A.J. Bell, A. Bethani, K. Borras, A. Burgmeier, A. Cakir, L. Calligaris, A. Campbell, S. Choudhury, F. Costanza, C. Diez Pardos, G. Dolinska, S. Dooling, T. Dorland, G. Eckerlin, D. Eckstein, T. Eichhorn, G. Flucke, J. Garay Garcia, A. Geiser, P. Gunnellini, J. Hauk, M. Hempel, H. Jung, A. Kalogeropoulos, M. Kasemann, P. Katsas, J. Kieseler, C. Kleinwort, I. Korol, D. Krücker, W. Lange, J. Leonard, K. Lipka, A. Lobanov, W. Lohmann, B. Lutz, R. Mankel, I. Marfin, I.-A. Melzer-Pellmann, A.B. Meyer, G. Mittag, J. Mnich, A. Mussgiller, S. Naumann-Emme, A. Nayak, E. Ntomari, H. Perrey, D. Pitzl, R. Placakyte, A. Raspereza, P.M. Ribeiro Cipriano, B. Roland, E. Ron, M.Ö. Sahin, J. Salfeld-Nebgen, P. Saxena, T. Schoerner-Sadenius, M. Schröder, C. Seitz, S. Spannagel, A.D.R. 
Vargas Trevino, R. Walsh, C. Wissing, V. Blobel, M. Centis Vignali, A.R. Draeger, J. Erfle, E. Garutti, K. Goebel, M. Görner, J. Haller, M. Hoffmann, R.S. Höing, A. Junkes, H. Kirschenmann, R. Klanner, R. Kogler, J. Lange, T. Lapsien, T. Lenz, I. Marchesini, J. Ott, T. Peiffer, A. Perieanu, N. Pietsch, J. Poehlsen, T. Poehlsen, D. Rathjens, C. Sander, H. Schettler, P. Schleper, E. Schlieckau, A. Schmidt, M. Seidel, V. Sola, H. Stadie, G. Steinbrück, D. Troendle, E. Usai, L. Vanelderen, A. Vanhoefer, C. Barth, C. Baus, J. Berger, C. Böser, E. Butz, T. Chwalek, W. De Boer, A. Descroix, A. Dierlamm, M. Feindt, F. Frensch, M. Giffels, A. Gilbert, F. Hartmann, T. Hauth, U. Husemann, I. Katkov, A. Kornmayer, E. Kuznetsova, P. Lobelle Pardo, M.U. Mozer, T. Müller, Th. Müller, A. Nürnberg, G. Quast, K. Rabbertz, S. Röcker, H.J. Simonis, F.M. Stober, R. Ulrich, J. Wagner-Kuhr, S. Wayand, T. Weiler, R. Wolf, G. Anagnostou, G. Daskalakis, T. Geralis, V.A. Giakoumopoulou, A. Kyriakis, D. Loukas, A. Markou, C. Markou, A. Psallidas, I. Topsis-Giotis, A. Agapitos, S. Kesisoglou, A. Panagiotou, N. Saoulidou, E. Stiliaris, X. Aslanoglou, I. Evangelou, G. Flouris, C. Foudas, P. Kokkas, N. Manthos, I. Papadopoulos, E. Paradas, J. Strologas, G. Bencze, C. Hajdu, P. Hidas, D. Horvath, F. Sikler, V. Veszpremi, G. Vesztergombi, A.J. Zsigmond, N. Beni, S. Czellar, J. Karancsi, J. Molnar, J. Palinkas, Z. Szillasi, A. Makovec, P. Raics, Z.L. Trocsanyi, B. Ujvari, N. Sahoo, S.K. Swain, S.B. Beri, V. Bhatnagar, R. Gupta, U.Bhawandeep, A.K. Kalsi, M. Kaur, R. Kumar, M. Mittal, N. Nishu, J.B. Singh, Ashok Kumar, Arun Kumar, S. Ahuja, A. Bhardwaj, B.C. Choudhary, A. Kumar, S. Malhotra, M. Naimuddin, K. Ranjan, V. Sharma, S. Banerjee, S. Bhattacharya, K. Chatterjee, S. Dutta, B. Gomber, Sa. Jain, Sh. Jain, R. Khurana, A. Modak, S. Mukherjee, D. Roy, S. Sarkar, M. Sharan, A. Abdulsalam, D. Dutta, S. Kailas, V. Kumar, A.K. Mohanty, L.M. Pant, P. Shukla, A. Topkar, T. Aziz, S. Banerjee, S. 
Bhowmik, R.M. Chatterjee, R.K. Dewanjee, S. Dugad, S. Ganguly, S. Ghosh, M. Guchait, A. Gurtu, G. Kole, S. Kumar, M. Maity, G. Majumder, K. Mazumdar, G.B. Mohanty, B. Parida, K. Sudhakar, N. Wickramage, H. Bakhshiansohi, H. Behnamian, S.M. Etesami, A. Fahim, R. Goldouzian, M. Khakzad, M. Mohammadi Najafabadi, M. Naseri, S. Paktinat Mehdiabadi, F. Rezaei Hosseinabadi, B. Safarzadeh, M. Zeinali, M. Felcini, M. Grunewald, M. Abbrescia, C. Calabria, S.S. Chhibra, A. Colaleo, D. Creanza, N. De Filippis, M. De Palma, L. Fiore, G. Iaselli, G. Maggi, M. Maggi, S. My, S. Nuzzo, A. Pompili, G. Pugliese, R. Radogna, G. Selvaggi, A. Sharma, L. Silvestris, R. Venditti, P. Verwilligen, G. Abbiendi, A.C. Benvenuti, D. Bonacorsi, S. Braibant-Giacomelli, L. Brigliadori, R. Campanini, P. Capiluppi, A. Castro, F.R. Cavallo, G. Codispoti, M. Cuffiani, G.M. Dallavalle, F. Fabbri, A. Fanfani, D. Fasanella, P. Giacomelli, C. Grandi, L. Guiducci, S. Marcellini, G. Masetti, A. Montanari, F.L. Navarria, A. Perrotta, F. Primavera, A.M. Rossi, T. Rovelli, G.P. Siroli, N. Tosi, R. Travaglini, S. Albergo, G. Cappello, M. Chiorboli, S. Costa, F. Giordano, R. Potenza, A. Tricomi, C. Tuve, G. Barbagli, V. Ciulli, C. Civinini, R. D'Alessandro, E. Focardi, E. Gallo, S. Gonzi, V. Gori, P. Lenzi, M. Meschini, S. Paoletti, G. Sguazzoni, A. Tropiano, L. Benussi, S. Bianco, F. Fabbri, D. Piccolo, R. Ferretti, F. Ferro, M. Lo Vetere, E. Robutti, S. Tosi, M.E. Dinardo, S. Fiorendi, S. Gennai, R. Gerosa, A. Ghezzi, P. Govoni, M.T. Lucchini, S. Malvezzi, R.A. Manzoni, A. Martelli, B. Marzocchi, D. Menasce, L. Moroni, M. Paganoni, D. Pedrini, S. Ragazzi, N. Redaelli, T. Tabarelli de Fatis, S. Buontempo, N. Cavallo, S. Di Guida, F. Fabozzi, A.O.M. Iorio, L. Lista, S. Meola, M. Merola, P. Paolucci, P. Azzi, N. Bacchetta, D. Bisello, A. Branca, R. Carlin, P. Checchia, M. Dall'Osso, T. Dorigo, U. Dosselli, M. Galanti, F. Gasparini, U. Gasparini, P. Giubilato, A. Gozzelino, K. Kanishchev, S. Lacaprara, M. 
Margoni, A.T. Meneguzzo, J. Pazzini, N. Pozzobon, P. Ronchese, F. Simonetto, E. Torassa, M. Tosi, P. Zotto, A. Zucchetta, G. Zumerle, M. Gabusi, S.P. Ratti, V. Re, C. Riccardi, P. Salvini, P. Vitulo, M. Biasini, G.M. Bilei, D. Ciangottini, L. Fanò, P. Lariccia, G. Mantovani, M. Menichelli, A. Saha, A. Santocchia, A. Spiezia, K. Androsov, P. Azzurri, G. Bagliesi, J. Bernardini, T. Boccali, G. Broccolo, R. Castaldi, M.A. Ciocci, R. Dell'Orso, S. Donato, F. Fiori, L. Foà, A. Giassi, M.T. Grippo, F. Ligabue, T. Lomtadze, L. Martini, A. Messineo, C.S. Moon, F. Palla, A. Rizzi, A. Savoy-Navarro, A.T. Serban, P. Spagnolo, P. Squillacioti, R. Tenchini, G. Tonelli, A. Venturi, P.G. Verdini, C. Vernieri, L. Barone, F. Cavallari, G. D'imperio, D. Del Re, M. Diemoz, C. Jorda, E. Longo, F. Margaroli, P. Meridiani, F. Micheli, S. Nourbakhsh, G. Organtini, R. Paramatti, S. Rahatlou, C. Rovelli, F. Santanastasio, L. Soffi, P. Traczyk, N. Amapane, R. Arcidiacono, S. Argiro, M. Arneodo, R. Bellan, C. Biino, N. Cartiglia, S. Casasso, M. Costa, A. Degano, N. Demaria, L. Finco, C. Mariotti, S. Maselli, E. Migliore, V. Monaco, M. Musich, M.M. Obertino, L. Pacher, N. Pastrone, M. Pelliccioni, G.L. Pinna Angioni, A. Potenza, A. Romero, M. Ruspa, R. Sacchi, A. Solano, A. Staiano, U. Tamponi, S. Belforte, V. Candelise, M. Casarsa, F. Cossutti, G. Della Ricca, B. Gobbo, C. La Licata, M. Marone, A. Schizzi, T. Umer, A. Zanetti, S. Chang, A. Kropivnitskaya, S.K. Nam, D.H. Kim, G.N. Kim, M.S. Kim, D.J. Kong, S. Lee, Y.D. Oh, H. Park, A. Sakharov, D.C. Son, T.J. Kim, J.Y. Kim, S. Song, S. Choi, D. Gyun, B. Hong, M. Jo, H. Kim, Y. Kim, B. Lee, K.S. Lee, S.K. Park, Y. Roh, H.D. Yoo, M. Choi, J.H. Kim, I.C. Park, G. Ryu, M.S. Ryu, Y. Choi, Y.K. Choi, J. Goh, D. Kim, E. Kwon, J. Lee, I. Yu, A. Juodagalvis, J.R. Komaragiri, M.A.B. Md Ali, E. Casimiro Linares, H. Castilla-Valdez, E. De La Cruz-Burelo, I. Heredia-de La Cruz, A. Hernandez-Almada, R. Lopez-Fernandez, A. Sanchez-Hernandez, S. 
Carrillo Moreno, F. Vazquez Valencia, I. Pedraza, H.A. Salazar Ibarguen, A. Morelos Pineda, D. Krofcheck, P.H. Butler, S. Reucroft, A. Ahmad, M. Ahmad, Q. Hassan, H.R. Hoorani, W.A. Khan, T. Khurshid, M. Shoaib, H. Bialkowska, M. Bluj, B. Boimska, T. Frueboes, M. Górski, M. Kazana, K. Nawrocki, K. Romanowska-Rybinska, M. Szleper, P. Zalewski, G. Brona, K. Bunkowski, M. Cwiok, W. Dominik, K. Doroba, A. Kalinowski, M. Konecki, J. Krolikowski, M. Misiura, M. Olszewski, W. Wolszczak, P. Bargassa, C. Beirão Da Cruz E Silva, P. Faccioli, P.G. Ferreira Parracho, M. Gallinaro, L. Lloret Iglesias, F. Nguyen, J. Rodrigues Antunes, J. Seixas, J. Varela, P. Vischia, S. Afanasiev, P. Bunin, M. Gavrilenko, I. Golutvin, I. Gorbunov, A. Kamenev, V. Karjavin, V. Konoplyanikov, A. Lanev, A. Malakhov, V. Matveev, P. Moisenz, V. Palichik, V. Perelygin, S. Shmatov, N. Skatchkov, V. Smirnov, A. Zarubin, V. Golovtsov, Y. Ivanov, V. Kim, P. Levchenko, V. Murzin, V. Oreshkin, I. Smirnov, V. Sulimov, L. Uvarov, S. Vavilov, A. Vorobyev, An. Vorobyev, Yu. Andreev, A. Dermenev, S. Gninenko, N. Golubev, M. Kirsanov, N. Krasnikov, A. Pashenkov, D. Tlisov, A. Toropin, V. Epshteyn, V. Gavrilov, N. Lychkovskaya, V. Popov, I. Pozdnyakov, G. Safronov, S. Semenov, A. Spiridonov, V. Stolin, E. Vlasov, A. Zhokin, V. Andreev, M. Azarkin, I. Dremin, M. Kirakosyan, A. Leonidov, G. Mesyats, S.V. Rusakov, A. Vinogradov, A. Belyaev, E. Boos, M. Dubinin, L. Dudko, A. Ershov, A. Gribushin, V. Klyukhin, O. Kodolova, I. Lokhtin, S. Obraztsov, S. Petrushanko, V. Savrin, A. Snigirev, I. Azhgirey, I. Bayshev, S. Bitioukov, V. Kachanov, A. Kalinin, D. Konstantinov, V. Krychkine, V. Petrov, R. Ryutin, A. Sobol, L. Tourtchanovitch, S. Troshin, N. Tyurin, A. Uzunian, A. Volkov, P. Adzic, M. Ekmedzic, J. Milosevic, V. Rekovic, J. Alcaraz Maestre, C. Battilana, E. Calvo, M. Cerrada, M. Chamizo Llatas, N. Colino, B. De La Cruz, A. Delgado Peris, D. Domínguez Vázquez, A. Escalante Del Valle, C. Fernandez Bedoya, J.P. 
Fernández Ramos, J. Flix, M.C. Fouz, P. Garcia-Abia, O. Gonzalez Lopez, S. Goy Lopez, J.M. Hernandez, M.I. Josa, E. Navarro De Martino, A. Pérez-Calero Yzquierdo, J. Puerta Pelayo, A. Quintario Olmeda, I. Redondo, L. Romero, M.S. Soares, C. Albajar, J.F. de Trocóniz, M. Missiroli, D. Moran, H. Brun, J. Cuevas, J. Fernandez Menendez, S. Folgueras, I. Gonzalez Caballero, J.A. Brochero Cifuentes, I.J. Cabrillo, A. Calderon, J. Duarte Campderros, M. Fernandez, G. Gomez, A. Graziano, A. Lopez Virto, J. Marco, R. Marco, C. Martinez Rivero, F. Matorras, F.J. Munoz Sanchez, J. Piedra Gomez, T. Rodrigo, A.Y. Rodríguez-Marrero, A. Ruiz-Jimeno, L. Scodellaro, I. Vila, R. Vilar Cortabitarte, D. Abbaneo, E. Auffray, G. Auzinger, M. Bachtis, P. Baillon, A.H. Ball, D. Barney, A. Benaglia, J. Bendavid, L. Benhabib, J.F. Benitez, C. Bernet, P. Bloch, A. Bocci, A. Bonato, O. Bondu, C. Botta, H. Breuker, T. Camporesi, G. Cerminara, S. Colafranceschi, M. D'Alfonso, D. d'Enterria, A. Dabrowski, A. David, F. De Guio, A. De Roeck, S. De Visscher, E. Di Marco, M. Dobson, M. Dordevic, N. Dupont-Sagorin, A. Elliott-Peisert, G. Franzoni, W. Funk, D. Gigi, K. Gill, D. Giordano, M. Girone, F. Glege, R. Guida, S. Gundacker, M. Guthoff, J. Hammer, M. Hansen, P. Harris, J. Hegeman, V. Innocente, P. Janot, K. Kousouris, K. Krajczar, P. Lecoq, C. Lourenço, N. Magini, L. Malgeri, M. Mannelli, J. Marrouche, L. Masetti, F. Meijers, S. Mersi, E. Meschi, F. Moortgat, S. Morovic, M. Mulders, L. Orsini, L. Pape, E. Perez, L. Perrozzi, A. Petrilli, G. Petrucciani, A. Pfeiffer, M. Pimiä, D. Piparo, M. Plagge, A. Racz, G. Rolandi, M. Rovere, H. Sakulin, C. Schäfer, C. Schwick, A. Sharma, P. Siegrist, P. Silva, M. Simon, P. Sphicas, D. Spiga, J. Steggemann, B. Stieger, M. Stoye, Y. Takahashi, D. Treille, A. Tsirou, G.I. Veres, N. Wardle, H.K. Wöhri, H. Wollny, W.D. Zeuner, W. Bertl, K. Deiters, W. Erdmann, R. Horisberger, Q. Ingram, H.C. Kaestli, D. Kotlinski, D. Renker, T. Rohe, F. Bachmair, L. Bäni, L. 
Bianchini, M.A. Buchmann, B. Casal, N. Chanon, G. Dissertori, M. Dittmar, M. Donegà, M. Dünser, P. Eller, C. Grab, D. Hits, J. Hoss, W. Lustermann, B. Mangano, A.C. Marini, M. Marionneau, P. Martinez Ruiz del Arbol, M. Masciovecchio, D. Meister, N. Mohr, P. Musella, C. Nägeli, F. Nessi-Tedaldi, F. Pandolfi, F. Pauss, M. Peruzzi, M. Quittnat, L. Rebane, M. Rossini, A. Starodumov, M. Takahashi, K. Theofilatos, R. Wallny, H.A. Weber, C. Amsler, M.F. Canelli, V. Chiochia, A. De Cosa, A. Hinzmann, T. Hreus, B. Kilminster, C. Lange, B. Millan Mejias, J. Ngadiuba, D. Pinna, P. Robmann, F.J. Ronga, S. Taroni, M. Verzetti, Y. Yang, M. Cardaci, K.H. Chen, C. Ferro, C.M. Kuo, W. Lin, Y.J. Lu, R. Volpe, S.S. Yu, P. Chang, Y.H. Chang, Y.W. Chang, Y. Chao, K.F. Chen, P.H. Chen, C. Dietz, U. Grundler, W.-S. Hou, K.Y. Kao, Y.F. Liu, R.-S. Lu, D. Majumder, E. Petrakou, Y.M. Tzeng, R. Wilken, B. Asavapibhop, G. Singh, N. Srimanobhas, N. Suwonjandee, A. Adiguzel, M.N. Bakirci, S. Cerci, C. Dozen, I. Dumanoglu, E. Eskut, S. Girgis, G. Gokbulut, E. Gurpinar, I. Hos, E.E. Kangal, A. Kayis Topaksu, G. Onengut, K. Ozdemir, S. Ozturk, A. Polatoz, D. Sunar Cerci, B. Tali, H. Topakli, M. Vergili, I.V. Akin, B. Bilin, S. Bilmis, H. Gamsizkan, B. Isildak, G. Karapinar, K. Ocalan, S. Sekmen, U.E. Surat, M. Yalvac, M. Zeyrek, E.A. Albayrak, E. Gülmez, M. Kaya, O. Kaya, T. Yetkin, K. Cankocak, F.I. Vardarlı, L. Levchuk, P. Sorokin, J.J. Brooke, E. Clement, D. Cussans, H. Flacher, J. Goldstein, M. Grimes, G.P. Heath, H.F. Heath, J. Jacob, L. Kreczko, C. Lucas, Z. Meng, D.M. Newbold, S. Paramesvaran, A. Poll, T. Sakuma, S. Senkin, V.J. Smith, K.W. Bell, A. Belyaev, C. Brew, R.M. Brown, D.J.A. Cockerill, J.A. Coughlan, K. Harder, S. Harper, E. Olaiya, D. Petyt, C.H. Shepherd-Themistocleous, A. Thea, I.R. Tomalin, T. Williams, W.J. Womersley, S.D. Worm, M. Baber, R. Bainbridge, O. Buchmuller, D. Burton, D. Colling, N. Cripps, P. Dauncey, G. Davies, M. Della Negra, P. Dunne, W. Ferguson, J. 
Fulcher, D. Futyan, G. Hall, G. Iles, M. Jarvis, G. Karapostoli, M. Kenzie, R. Lane, R. Lucas, L. Lyons, A.-M. Magnan, S. Malik, B. Mathias, J. Nash, A. Nikitenko, J. Pela, M. Pesaresi, K. Petridis, D.M. Raymond, S. Rogerson, A. Rose, C. Seez, P. Sharp, A. Tapper, M. Vazquez Acosta, T. Virdee, S.C. Zenz, J.E. Cole, P.R. Hobson, A. Khan, P. Kyberd, D. Leggat, D. Leslie, I.D. Reid, P. Symonds, L. Teodorescu, M. Turner, J. Dittmann, K. Hatakeyama, A. Kasmi, H. Liu, T. Scarborough, O. Charaf, S.I. Cooper, C. Henderson, P. Rumerio, A. Avetisyan, T. Bose, C. Fantasia, P. Lawson, C. Richardson, J. Rohlf, J. St. John, L. Sulak, J. Alimena, E. Berry, S. Bhattacharya, G. Christopher, D. Cutts, Z. Demiragli, N. Dhingra, A. Ferapontov, A. Garabedian, U. Heintz, G. Kukartsev, E. Laird, G. Landsberg, M. Luk, M. Narain, M. Segala, T. Sinthuprasith, T. Speer, J. Swanson, R. Breedon, G. Breto, M. Calderon De La Barca Sanchez, S. Chauhan, M. Chertok, J. Conway, R. Conway, P.T. Cox, R. Erbacher, M. Gardner, W. Ko, R. Lander, M. Mulhearn, D. Pellett, J. Pilot, F. Ricci-Tam, S. Shalhout, J. Smith, M. Squires, D. Stolp, M. Tripathi, S. Wilbur, R. Yohay, R. Cousins, P. Everaerts, C. Farrell, J. Hauser, M. Ignatenko, G. Rakness, E. Takasugi, V. Valuev, M. Weber, K. Burt, R. Clare, J. Ellison, J.W. Gary, G. Hanson, J. Heilman, M. Ivova Rikova, P. Jandir, E. Kennedy, F. Lacroix, O.R. Long, A. Luthra, M. Malberti, M. Olmedo Negrete, A. Shrinivas, S. Sumowidagdo, S. Wimpenny, J.G. Branson, G.B. Cerati, S. Cittolin, R.T. D'Agnolo, A. Holzner, R. Kelley, D. Klein, D. Kovalskyi, J. Letts, I. Macneill, D. Olivito, S. Padhi, C. Palmer, M. Pieri, M. Sani, V. Sharma, S. Simon, Y. Tu, A. Vartak, C. Welke, F. Würthwein, A. Yagil, D. Barge, J. Bradmiller-Feld, C. Campagnari, T. Danielson, A. Dishaw, V. Dutta, K. Flowers, M. Franco Sevilla, P. Geffert, C. George, F. Golf, L. Gouskos, J. Incandela, C. Justus, N. Mccoll, J. Richman, D. Stuart, W. To, C. West, J. Yoo, A. Apresyan, A. Bornheim, J. Bunn, Y. 
Chen, J. Duarte, A. Mott, H.B. Newman, C. Pena, M. Pierini, M. Spiropulu, J.R. Vlimant, R. Wilkinson, S. Xie, R.Y. Zhu, V. Azzolini, A. Calamba, B. Carlson, T. Ferguson, Y. Iiyama, M. Paulini, J. Russ, H. Vogel, I. Vorobiev, J.P. Cumalat, W.T. Ford, A. Gaz, M. Krohn, E. Luiggi Lopez, U. Nauenberg, J.G. Smith, K. Stenson, S.R. Wagner, J. Alexander, A. Chatterjee, J. Chaves, J. Chu, S. Dittmer, N. Eggert, N. Mirman, G. Nicolas Kaufman, J.R. Patterson, A. Ryd, E. Salvati, L. Skinnari, W. Sun, W.D. Teo, J. Thom, J. Thompson, J. Tucker, Y. Weng, L. Winstrom, P. Wittich, D. Winn, S. Abdullin, M. Albrow, J. Anderson, G. Apollinari, L.A.T. Bauerdick, A. Beretvas, J. Berryhill, P.C. Bhat, G. Bolla, K. Burkett, J.N. Butler, H.W.K. Cheung, F. Chlebana, S. Cihangir, V.D. Elvira, I. Fisk, J. Freeman, Y. Gao, E. Gottschalk, L. Gray, D. Green, S. Grünendahl, O. Gutsche, J. Hanlon, D. Hare, R.M. Harris, J. Hirschauer, B. Hooberman, S. Jindariani, M. Johnson, U. Joshi, K. Kaadze, B. Klima, B. Kreis, S. Kwan, J. Linacre, D. Lincoln, R. Lipton, T. Liu, J. Lykken, K. Maeshima, J.M. Marraffino, V.I. Martinez Outschoorn, S. Maruyama, D. Mason, P. McBride, P. Merkel, K. Mishra, S. Mrenna, S. Nahn, C. Newman-Holmes, V. O'Dell, O. Prokofyev, E. Sexton-Kennedy, S. Sharma, A. Soha, W.J. Spalding, L. Spiegel, L. Taylor, S. Tkaczyk, N.V. Tran, L. Uplegger, E.W. Vaandering, R. Vidal, A. Whitbeck, J. Whitmore, F. Yang, D. Acosta, P. Avery, P. Bortignon, D. Bourilkov, M. Carver, D. Curry, S. Das, M. De Gruttola, G.P. Di Giovanni, R.D. Field, M. Fisher, I.K. Furic, J. Hugon, J. Konigsberg, A. Korytov, T. Kypreos, J.F. Low, K. Matchev, H. Mei, P. Milenovic, G. Mitselmakher, L. Muniz, A. Rinkevicius, L. Shchutska, M. Snowball, D. Sperka, J. Yelton, M. Zakaria, S. Hewamanage, S. Linn, P. Markowitz, G. Martinez, J.L. Rodriguez, T. Adams, A. Askew, J. Bochenek, B. Diamond, J. Haas, S. Hagopian, V. Hagopian, K.F. Johnson, H. Prosper, V. Veeraraghavan, M. Weinberg, M.M. Baarmand, M. Hohlmann, H. 
Kalakhety, F. Yumiceva, M.R. Adams, L. Apanasevich, D. Berry, R.R. Betts, I. Bucinskaite, R. Cavanaugh, O. Evdokimov, L. Gauthier, C.E. Gerber, D.J. Hofman, P. Kurt, D.H. Moon, C. O'Brien, I.D. Sandoval Gonzalez, C. Silkworth, P. Turner, N. Varelas, B. Bilki, W. Clarida, K. Dilsiz, M. Haytmyradov, J.-P. Merlo, H. Mermerkaya, A. Mestvirishvili, A. Moeller, J. Nachtman, H. Ogul, Y. Onel, F. Ozok, A. Penzo, R. Rahmat, S. Sen, P. Tan, E. Tiras, J. Wetzel, K. Yi, B.A. Barnett, B. Blumenfeld, S. Bolognesi, D. Fehling, A.V. Gritsan, P. Maksimovic, C. Martin, M. Swartz, P. Baringer, A. Bean, G. Benelli, C. Bruner, R.P. Kenny III, M. Malek, M. Murray, D. Noonan, S. Sanders, J. Sekaric, R. Stringer, Q. Wang, J.S. Wood, I. Chakaberia, A. Ivanov, S. Khalil, M. Makouski, Y. Maravin, L.K. Saini, N. Skhirtladze, I. Svintradze, J. Gronberg, D. Lange, F. Rebassoo, D. Wright, A. Baden, A. Belloni, B. Calvert, S.C. Eno, J.A. Gomez, N.J. Hadley, R.G. Kellogg, T. Kolberg, Y. Lu, A.C. Mignerey, K. Pedro, A. Skuja, M.B. Tonjes, S.C. Tonwar, A. Apyan, R. Barbieri, G. Bauer, W. Busza, I.A. Cali, M. Chan, L. Di Matteo, G. Gomez Ceballos, M. Goncharov, D. Gulhan, M. Klute, Y.S. Lai, Y.-J. Lee, A. Levin, P.D. Luckey, T. Ma, C. Paus, D. Ralph, C. Roland, G. Roland, G.S.F. Stephans, K. Sumorok, D. Velicanu, J. Veverka, B. Wyslouch, M. Yang, M. Zanetti, V. Zhukova, B. Dahmes, A. Gude, S.C. Kao, K. Klapoetke, Y. Kubota, J. Mans, N. Pastika, R. Rusack, A. Singovsky, N. Tambe, J. Turkewitz, J.G. Acosta, S. Oliveros, E. Avdeeva, K. Bloom, S. Bose, D.R. Claes, A. Dominguez, R. Gonzalez Suarez, J. Keller, D. Knowlton, I. Kravchenko, J. Lazo-Flores, F. Meier, F. Ratnikov, G.R. Snow, M. Zvada, J. Dolen, A. Godshalk, I. Iashvili, A. Kharchilava, A. Kumar, S. Rappoccio, G. Alverson, E. Barberis, D. Baumgartel, M. Chasco, A. Massironi, D.M. Morse, D. Nash, T. Orimoto, D. Trocino, R.-J. Wang, D. Wood, J. Zhang, K.A. Hahn, A. Kubik, N. Mucia, N. Odell, B. Pollack, A. Pozdnyakov, M. Schmitt, S. Stoynev, K. 
Sung, M. Velasco, S. Won, A. Brinkerhoff, K.M. Chan, A. Drozdetskiy, M. Hildreth, C. Jessop, D.J. Karmgard, N. Kellams, K. Lannon, S. Lynch, N. Marinelli, Y. Musienko, T. Pearson, M. Planer, R. Ruchti, G. Smith, N. Valls, M. Wayne, M. Wolf, A. Woodard, L. Antonelli, J. Brinson, B. Bylsma, L.S. Durkin, S. Flowers, A. Hart, C. Hill, R. Hughes, K. Kotov, T.Y. Ling, W. Luo, D. Puigh, M. Rodenburg, B.L. Winer, H. Wolfe, H.W. Wulsin, O. Driga, P. Elmer, J. Hardenbrook, P. Hebda, A. Hunt, S.A. Koay, P. Lujan, D. Marlow, T. Medvedeva, M. Mooney, J. Olsen, P. Piroué, X. Quan, H. Saka, D. Stickland, C. Tully, J.S. Werner, A. Zuranski, E. Brownson, S. Malik, H. Mendez, J.E. Ramirez Vargas, V.E. Barnes, D. Benedetti, D. Bortoletto, M. De Mattia, L. Gutay, Z. Hu, M.K. Jha, M. Jones, K. Jung, M. Kress, N. Leonardo, D.H. Miller, N. Neumeister, B.C. Radburn-Smith, X. Shi, I. Shipsey, D. Silvers, A. Svyatkovskiy, F. Wang, W. Xie, L. Xu, J. Zablocki, N. Parashar, J. Stupak, A. Adair, B. Akgun, K.M. Ecklund, F.J.M. Geurts, W. Li, B. Michlin, B.P. Padley, R. Redjimi, J. Roberts, J. Zabel, B. Betchart, A. Bodek, R. Covarelli, P. de Barbaro, R. Demina, Y. Eshaq, T. Ferbel, A. Garcia-Bellido, P. Goldenzweig, J. Han, A. Harel, A. Khukhunaishvili, S. Korjenevski, G. Petrillo, D. Vishnevskiy, R. Ciesielski, L. Demortier, K. Goulianos, C. Mesropian, S. Arora, A. Barker, J.P. Chou, C. Contreras-Campana, E. Contreras-Campana, D. Duggan, D. Ferencek, Y. Gershtein, R. Gray, E. Halkiadakis, D. Hidas, S. Kaplan, A. Lath, S. Panwalkar, M. Park, R. Patel, S. Salur, S. Schnetzer, S. Somalwar, R. Stone, S. Thomas, P. Thomassen, M. Walker, K. Rose, S. Spanier, A. York, O. Bouhali, A. Castaneda Hernandez, R. Eusebi, W. Flanagan, J. Gilmore, T. Kamon, V. Khotilovich, V. Krutelyov, R. Montalvo, I. Osipenkov, Y. Pakhotin, A. Perloff, J. Roe, A. Rose, A. Safonov, I. Suarez, A. Tatarinov, K.A. Ulmer, N. Akchurin, C. Cowden, J. Damgov, C. Dragoiu, P.R. Dudero, J. Faulkner, K. Kovitanggoon, S. Kunori, S.W. 
Lee, T. Libeiro, I. Volobouev, E. Appelt, A.G. Delannoy, S. Greene, A. Gurrola, W. Johns, C. Maguire, Y. Mao, A. Melo, M. Sharma, P. Sheldon, B. Snook, S. Tuo, J. Velkovska, M.W. Arenton, S. Boutle, B. Cox, B. Francis, J. Goodell, R. Hirosky, A. Ledovskoy, H. Li, C. Lin, C. Neu, J. Wood, C. Clarke, R. Harr, P.E. Karchin, C. Kottachchi Kankanamge Don, P. Lamichhane, J. Sturdy, D.A. Belknap, D. Carlsmith, M. Cepeda, S. Dasu, L. Dodd, S. Duric, E. Friis, R. Hall-Wilton, M. Herndon, A. Hervé, P. Klabbers, A. Lanaro, C. Lazaridis, A. Levine, R. Loveless, A. Mohapatra, I. Ojalvo, T. Perry, G.A. Pierro, G. Polese, I. Ross, T. Sarangi, A. Savin, W.H. Smith, D. Taylor, C. Vuosalo, N. Woods, I. Bediaga, J.M. De Miranda, F. Ferreira Rodrigues, A. Gomes, A. Massafferri, A.C. dos Reis, A.B. Rodrigues, S. Amato, K. Carvalho Akiba, L. De Paula, O. Francisco, M. Gandelman, A. Hicheur, J.H. Lopes, D. Martins Tostes, I. Nasteva, J.M. Otalora Goicochea, E. Polycarpo, C. Potterat, M.S. Rangel, V. Salustino Guimaraes, B. Souza De Paula, D. Vieira, L. An, Y. Gao, F. Jing, Y. Li, Z. Yang, X. Yuan, Y. Zhang, L. Zhong, L. Beaucourt, M. Chefdeville, D. Decamp, N. Déléage, Ph. Ghez, J.-P. Lees, J.F. Marchand, M.-N. Minard, B. Pietrzyk, W. Qian, S. T'Jampens, V. Tisserand, E. Tournefier, Z. Ajaltouni, M. Baalouch, E. Cogneras, O. Deschamps, I. El Rifai, M. Grabalosa Gándara, P. Henrard, M. Hoballah, R. Lefèvre, J. Maratas, S. Monteil, V. Niess, P. Perret, C. Adrover, S. Akar, E. Aslanides, J. Cogan, W. Kanso, R. Le Gac, O. Leroy, G. Mancinelli, A. Mordà, M. Perrin-Terrin, J. Serrano, A. Tsaregorodtsev, Y. Amhis, S. Barsuk, M. Borsato, O. Kochebina, J. Lefrançois, F. Machefert, A. Martín Sánchez, M. Nicol, P. Robbe, M.-H. Schune, M. Teklishyn, A. Vallier, B. Viaud, G. Wormser, E. Ben-Haim, M. Charles, S. Coquereau, P. David, L. Del Buono, L. Henry, F. Polci, J. Albrecht, T. Brambach, Ch. Cauet, M. Deckenhoff, U. Eitschberger, R. Ekelhof, L. Gavardi, F. Kruse, F. Meier, R. Niet, C.J. 
Parkinson, M. Schlupp, A. Shires, B. Spaan, S. Swientek, J. Wishahi, O. Aquines Gutierrez, J. Blouw, M. Britsch, M. Fontana, D. Popov, M. Schmelling, D. Volyanskyy, M. Zavertyaev, S. Bachmann, A. Bien, A. Comerma-Montells, M. De Cian, F. Dordei, S. Esen, C. Färber, E. Gersabeck, L. Grillo, X. Han, S. Hansmann-Menzemer, A. Jaeger, M. Kolpin, K. Kreplin, G. Krocker, B. Leverington, J. Marks, M. Meissner, M. Neuner, T. Nikodem, P. Seyfert, M. Stahl, S. Stahl, U. Uwer, M. Vesterinen, S. Wandernoth, D. Wiedner, A. Zhelezov, R. McNulty, R. Wallace, W.C. Zhang, A. Palano, A. Carbone, A. Falabella, D. Galli, U. Marconi, N. Moggi, M. Mussini, S. Perazzini, V. Vagnoni, G. Valenti, M. Zangoli, W. Bonivento, S. Cadeddu, A. Cardini, V. Cogoni, A. Contu, A. Lai, B. Liu, G. Manca, R. Oldeman, B. Saitta, C. Vacca, M. Andreotti, W. Baldini, C. Bozzi, R. Calabrese, M. Corvo, M. Fiore, M. Fiorini, E. Luppi, L.L. Pappalardo, I. Shapoval, G. Tellarini, L. Tomassetti, S. Vecchi, L. Anderlini, A. Bizzeti, M. Frosini, G. Graziani, G. Passaleva, M. Veltri, G. Bencivenni, P. Campana, P. De Simone, G. Lanfranchi, M. Palutan, M. Rama, A. Sarti, B. Sciascia, R. Vazquez Gomez, R. Cardinale, F. Fontanelli, S. Gambetta, C. Patrignani, A. Petrolini, A. Pistone, M. Calvi, L. Cassina, C. Gotti, B. Khanji, M. Kucharczyk, C. Matteuzzi, J. Fu, A. Geraci, N. Neri, F. Palombo, S. Amerio, G. Collazuol, S. Gallorini, A. Gianelle, D. Lucchesi, A. Lupato, M. Morandin, M. Rotondo, L. Sestini, G. Simi, R. Stroili, F. Bedeschi, R. Cenci, S. Leo, P. Marino, M.J. Morello, G. Punzi, S. Stracka, J. Walsh, G. Carboni, E. Furfaro, E. Santovetti, A. Satta, A.A. Alves Jr, G. Auriemma, V. Bocci, G. Martellotti, G. Penso, D. Pinci, R. Santacesaria, C. Satriano, A. Sciubba, A. Dziurda, W. Kucewicz, T. Lesiak, B. Rachwal, M. Witek, M. Firlej, T. Fiutowski, M. Idzik, P. Morawski, J. Moron, A. Oblakowska-Mucha, K. Swientek, T. Szumlak, V. Batozskaya, K. Klimaszewski, K. Kurek, M. Szczekowski, A. Ukleja, W. Wislicki, L. 
Cojocariu, L. Giubega, A. Grecu, F. Maciuc, M. Orlandea, B. Popovici, S. Stoica, M. Straticiuc, G. Alkhazov, N. Bondar, A. Dzyuba, O. Maev, N. Sagidova, Y. Shcheglov, A. Vorobyev, S. Belogurov, I. Belyaev, V. Egorychev, D. Golubkov, T. Kvaratskheliya, I.V. Machikhiliyan, I. Polyakov, D. Savrina, A. Semennikov, A. Zhokhov, A. Berezhnoy, M. Korolev, A. Leflat, N. Nikitin, S. Filippov, E. Gushchin, L. Kravchuk, A. Bondar, S. Eidelman, P. Krokovny, V. Kudryavtsev, L. Shekhtman, V. Vorobyev, A. Artamonov, K. Belous, R. Dzhelyadin, Yu. Guz, A. Novoselov, V. Obraztsov, A. Popov, V. Romanovsky, M. Shapkin, O. Stenyakin, O. Yushchenko, A. Badalov, M. Calvo Gomez, L. Garrido, D. Gascon, R. Graciani Diaz, E. Graugés, C. Marin Benito, E. Picatoste Olloqui, V. Rives Molina, H. Ruiz, X. Vilasis-Cardona, B. Adeva, P. Alvarez Cartelle, A. Dosil Suárez, V. Fernandez Albor, A. Gallas Torreira, J. García Pardiñas, J.A. Hernando Morata, M. Plo Casasus, A. Romero Vidal, J.J. Saborido Silva, B. Sanmartin Sedes, C. Santamarina Rios, P. Vazquez Regueiro, C. Vázquez Sierra, M. Vieites Diaz, F. Alessio, F. Archilli, C. Barschel, S. Benson, J. Buytaert, D. Campora Perez, L. Castillo Garcia, M. Cattaneo, Ph. Charpentier, X. Cid Vidal, M. Clemencic, J. Closier, V. Coco, P. Collins, G. Corti, B. Couturier, C. D'Ambrosio, F. Dettori, A. Di Canto, H. Dijkstra, P. Durante, M. Ferro-Luzzi, R. Forty, M. Frank, C. Frei, C. Gaspar, V.V. Gligorov, L.A. Granado Cardoso, T. Gys, C. Haen, J. He, T. Head, E. van Herwijnen, R. Jacobsson, D. Johnson, C. Joram, B. Jost, M. Karacson, T.M. Karbach, D. Lacarrere, B. Langhans, R. Lindner, C. Linn, S. Lohn, A. Mapelli, R. Matev, Z. Mathe, S. Neubert, N. Neufeld, A. Otto, J. Panman, M. Pepe Altarelli, N. Rauschmayr, M. Rihl, S. Roiser, T. Ruf, H. Schindler, B. Schmidt, A. Schopper, R. Schwemmer, S. Sridharan, F. Stagni, V.K. Subbiah, F. Teubert, E. Thomas, D. Tonelli, A. Trisovic, M. Ubeda Garcia, J. Wicht, K. Wyllie, V. Battista, A. Bay, F. Blanc, M. Dorigo, F. 
Dupertuis, C. Fitzpatrick, S. Gianì, G. Haefeli, P. Jaton, C. Khurewathanakul, I. Komarov, V.N. La Thi, N. Lopez-March, R. Märki, M. Martinelli, B. Muster, T. Nakada, A.D. Nguyen, T.D. Nguyen, C. Nguyen-Mau, J. Prisciandaro, A. Puig Navarro, B. Rakotomiaramanana, J. Rouvinet, O. Schneider, F. Soomro, P. Szczypka, M. Tobin, S. Tourneur, M.T. Tran, G. Veneziano, Z. Xu, J. Anderson, R. Bernet, E. Bowen, A. Bursche, N. Chiapolini, M. Chrzaszcz, Ch. Elsasser, E. Graverini, F. Lionetto, P. Lowdon, K. Müller, N. Serra, O. Steinkamp, B. Storaci, U. Straumann, M. Tresch, A. Vollhardt, R. Aaij, S. Ali, M. van Beuzekom, P.N.Y. David, K. De Bruyn, C. Farinelli, V. Heijne, W. Hulsbergen, E. Jans, P. Koppenburg, A. Kozlinskiy, J. van Leerdam, M. Merk, S. Oggero, A. Pellegrino, H. Snoek, J. van Tilburg, P. Tsopelas, N. Tuning, J.A. de Vries, T. Ketel, R.F. Koopman, R.W. Lambert, D. Martinez Santos, G. Raven, M. Schiller, V. Syropoulos, S. Tolk, A. Dovbnya, S. Kandybei, I. Raniuk, O. Okhrimenko, V. Pugatch, S. Bifani, N. Farley, P. Griffith, I.R. Kenyon, C. Lazzeroni, A. Mazurov, J. McCarthy, L. Pescatore, N.K. Watson, M.P. Williams, M. Adinolfi, J. Benton, N.H. Brook, A. Cook, M. Coombes, J. Dalseno, T. Hampson, S.T. Harnew, P. Naik, E. Price, C. Prouve, J.H. Rademacker, S. Richards, D.M. Saunders, N. Skidmore, D. Souza, J.J. Velthuis, D. Voong, W. Barter, M.-O. Bettler, H.V. Cliff, H.-M. Evans, J. Garra Tico, V. Gibson, S. Gregson, S.C. Haines, C.R. Jones, M. Sirendi, J. Smith, D.R. Ward, S.A. Wotton, S. Wright, J.J. Back, T. Blake, D.C. Craik, A.C. Crocombe, D. Dossett, T. Gershon, M. Kreps, C. Langenbruch, T. Latham, D.P. O'Hanlon, T. Pilař, A. Poluektov, M.M. Reid, R. Silva Coutinho, C. Wallace, M. Whitehead, S. Easo, R. Nandakumar, A. Papanestis, S. Ricciardi, F.F. Wilson, L. Carson, P.E.L. Clarke, G.A. Cowan, S. Eisenhardt, D. Ferguson, D. Lambert, H. Luo, A.-B. Morris, F. Muheim, M. Needham, S. Playfer, M. Alexander, J. Beddow, C.-T. Dean, L. Eklund, D. Hynds, S. 
Karodia, I. Longstaff, S. Ogilvy, M. Pappagallo, P. Sail, I. Skillicorn, F.J.P. Soler, P. Spradlin, A. Affolder, T.J.V. Bowcock, H. Brown, G. Casse, S. Donleavy, K. Dreimanis, S. Farry, R. Fay, K. Hennessy, D. Hutchcroft, M. Liles, B. McSkelly, G.D. Patel, J.D. Price, A. Pritchard, K. Rinnert, T. Shears, N.A. Smith, G. Ciezarek, S. Cunliffe, R. Currie, U. Egede, P. Fol, A. Golutvin, S. Hall, M. McCann, P. Owen, M. Patel, K. Petridis, F. Redi, I. Sepp, E. Smith, W. Sutcliffe, D. Websdale, R.B. Appleby, R.J. Barlow, T. Bird, P.M. Bjørnstad, S. Borghi, D. Brett, J. Brodzicka, L. Capriotti, S. Chen, S. De Capua, G. Dujany, M. Gersabeck, J. Harrison, C. Hombach, S. Klaver, G. Lafferty, A. McNab, C. Parkes, A. Pearce, S. Reichert, E. Rodrigues, P. Rodriguez Perez, M. Smith, S.-F. Cheung, D. Derkach, T. Evans, R. Gauld, E. Greening, N. Harnew, D. Hill, P. Hunt, N. Hussain, J. Jalocha, M. John, O. Lupton, S. Malde, E. Smith, S. Stevenson, C. Thomas, S. Topp-Joergensen, N. Torr, G. Wilkinson, I. Counts, P. Ilten, M. Williams, R. Andreassen, A. Davis, W. De Silva, B. Meadows, M.D. Sokoloff, L. Sun, J. Todd, J.E. Andrews, B. Hamilton, A. Jawahery, J. Wimberley, M. Artuso, S. Blusk, A. Borgia, T. Britton, S. Ely, P. Gandini, J. Garofoli, B. Gui, C. Hadjivasiliou, N. Jurik, M. Kelsey, R. Mountain, B.K. Pal, T. Skwarnicki, S. Stone, J. Wang, Z. Xing, L. Zhang, C. Baesso, M. Cruz Torres, C. Göbel, J. Molina Rodriguez, Y. Xie, D.A. Milanes, O. Grünberg, M. Heß, C. Voß, R. Waldi, T. Likhomanenko, A. Malinin, V. Shevchenko, A. Ustyuzhanin, F. Martinez Vidal, A. Oyanguren, P. Ruiz Valls, C. Sanchez Mayordomo, C.J.G. Onderwater, H.W. Wilschut, E. Pesen
Aug. 17, 2015 hep-ph, hep-ex
A joint measurement is presented of the branching fractions of the $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ decays in proton-proton collisions at the LHC by the CMS and LHCb experiments. The data samples were collected in 2011 at a centre-of-mass energy of 7 TeV, and in 2012 at 8 TeV. The combined analysis produces the first observation of the $B^0_s\to\mu^+\mu^-$ decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, evidence for the $B^0\to\mu^+\mu^-$ decay is obtained with a statistical significance of three standard deviations. The branching fraction measurements are statistically compatible with Standard Model (SM) predictions and impose stringent constraints on several theories beyond the SM.
Measurement of negatively charged pion spectra in inelastic p+p interactions at $p_{lab}$ = 20, 31, 40, 80 and 158 GeV/c (1310.2417)
NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, Y. Ali, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G. A. Feofilov, Z. Fodor, A. Fulop, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, M. Hierholzer, R. Idczak, S. Igolkin, A. Ivashkin, D. Joković, K. Kadija, A. Kapoyannis, E. Kaptur, D. Kiełczewska, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, P. Kovesarki, S. Kowalski, A. Krasnoperov, A. Kurepin, D. Larsen, A. László, V. V. Lyubushkin, M. Maćkowiak-Pawłowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Manić, A. Marcinek, V. Marin, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, St. Mrówczyński, S. Murphy, T. Nakadaira, M. Nirkko, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, C. Pistillo, W. Peryt, O. Petukhov, R. Płaneta, J. Pluta, B. A. Popov, M. Posiadała, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savić, K. Schmidt, T. Sekiguchi, P. Seyboth, D. Sgalaberna, M. Shibata, R. Sipos, E. Skrzypczak, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, W. Zipper
We present experimental results on inclusive spectra and mean multiplicities of negatively charged pions produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} = $ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively). The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. Two-dimensional spectra are determined in terms of rapidity and transverse momentum. Their properties such as the width of rapidity distributions and the inverse slope parameter of transverse mass spectra are extracted and their collision energy dependences are presented. The results on inelastic p+p interactions are compared with the corresponding data on central Pb+Pb collisions measured by the NA49 experiment at the CERN SPS. The results presented in this paper are part of the NA61/SHINE ion program devoted to the study of the properties of the onset of deconfinement and search for the critical point of strongly interacting matter. They are required for interpretation of results on nucleus-nucleus and proton-nucleus collisions.
Phase-space dependence of particle-ratio fluctuations in Pb+Pb collisions from 20A to 158A GeV beam energy (1310.3428)
T. Anticic, B. Baatar, D. Barna, J. Bartke, H. Beck, L. Betev, H. Białkowska, C. Blume, B. Boimska, J. Book, M. Botje, P. Bunčić, P. Christakoglou, P. Chung, O. Chvala, J. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gaździcki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, V. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Maćkowiak-Pawłowska, M. Makariev, A. Malakhov, G. Melkumov, M. Mitrovski, S. Mrówczyński, G. Pálla, A. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczyński, A. Rybicki, A. Sandoval, A. Rustamov, N. Schmitz, T. Schuster, P. Seyboth, F. Siklér, E. Skrzypczak, M. Słodkowski, G. Stefanek, R. Stock, H. Ströbele, T. Susa, M. Szuba, D. Varga, M. Vassiliou, G. Veres, G. Vesztergombi, D. Vranić, Z. Włodarczyk, A. Wojtaszek-Szwarc
Oct. 12, 2013 hep-ex, nucl-ex
A novel approach, the identity method, was used for particle identification and the study of fluctuations of particle yield ratios in Pb+Pb collisions at the CERN Super Proton Synchrotron (SPS). This procedure allows the unfolding of the moments of the unknown multiplicity distributions of protons (p), kaons (K), pions ($\pi$) and electrons (e). Using these moments, the excitation function of the fluctuation measure $\nu_{\text{dyn}}[A,B]$ was measured, with A and B denoting different particle types. The obtained energy dependence of $\nu_{\text{dyn}}$ agrees with previously published NA49 results on the related measure $\sigma_{\text{dyn}}$. Moreover, $\nu_{\text{dyn}}$ was found to depend on the phase-space coverage for [K,p] and [K,$\pi$] pairs. This feature most likely explains the reported differences between the measurements of NA49 and those of STAR in central Au+Au collisions.
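The fluctuation measure used above has a simple moment-based definition, $\nu_{\text{dyn}}[A,B] = \langle N_A(N_A-1)\rangle/\langle N_A\rangle^2 + \langle N_B(N_B-1)\rangle/\langle N_B\rangle^2 - 2\langle N_A N_B\rangle/(\langle N_A\rangle\langle N_B\rangle)$, which vanishes for independent Poisson-distributed yields. The following sketch (not code from the NA49/NA61 analyses; the event sample and multiplicities are invented for illustration) shows how it is computed from per-event multiplicities:

```python
import numpy as np

def nu_dyn(na, nb):
    """Fluctuation measure nu_dyn[A,B] from per-event multiplicities
    na, nb of particle types A and B:
      nu_dyn = <N_A(N_A-1)>/<N_A>^2 + <N_B(N_B-1)>/<N_B>^2
               - 2 <N_A N_B> / (<N_A><N_B>)
    It is zero for independent Poisson-distributed yields; negative
    values indicate correlated emission of A and B."""
    na = np.asarray(na, dtype=float)
    nb = np.asarray(nb, dtype=float)
    ma, mb = na.mean(), nb.mean()
    return (np.mean(na * (na - 1)) / ma**2
            + np.mean(nb * (nb - 1)) / mb**2
            - 2.0 * np.mean(na * nb) / (ma * mb))

# Toy example: independent Poisson yields (hypothetical means),
# so nu_dyn is expected to be consistent with zero.
rng = np.random.default_rng(0)
k = rng.poisson(5.0, 200_000)   # e.g. kaon multiplicities per event
p = rng.poisson(8.0, 200_000)   # e.g. proton multiplicities per event
print(nu_dyn(k, p))             # statistically consistent with 0
```

A useful cross-check of the implementation is the exact identity $\nu_{\text{dyn}}[A,A] = -2/\langle N_A\rangle$, which holds event-sample by event-sample.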
Measurements of Production Properties of K0S mesons and Lambda hyperons in Proton-Carbon Interactions at 31 GeV/c (1309.1997)
N. Abgrall, A. Aduszkiewicz, Y. Ali, T. Anticic, N. Antoniou, J. Argyriades, B. Baatar, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, W. Brooks, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, L. Esposito, G. A. Feofilov, Z. Fodor, A. Fulop, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, H. Hakobyan, A. Haesler, T. Hasegawa, M. Hierholzer, R. Idczak, S. Igolkin, Y. Ivanov, A. Ivashkin, D. Jokovic, K. Kadija, A. Kapoyannis, N. Katrynska, E. Kaptur, D. Kielczewska, D. Kikola, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, A. Krasnoperov, S. Kuleshov, A. Kurepin, D. Larsen, A. Laszlo, V. V. Lyubushkin, M. Mackowiak-Pawlowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Maletic, D. Manic, A. Marchionni, A. Marcinek, V. Marin, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, St. Mrówczyński, S. Murphy, T. Nakadaira, M. Nirkko, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, W. Peryt, C. Pistillo, A. Redij, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadała, S. Puławski, J. Puzovic, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savic, K. Schmidt, T. Sekiguchi, P. Seyboth, M. Shibata, R. Sipos, E. Skrzypczak, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, T. Susa, M. Szuba, M. Tada, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek, O. Wyszyński, L. Zambelli, W. Zipper
Sept. 8, 2013 nucl-ex, physics.acc-ph
Spectra of K0S mesons and Lambda hyperons were measured in p+C interactions at 31 GeV/c with the large acceptance NA61/SHINE spectrometer at the CERN SPS. The data were collected with an isotropic graphite target with a thickness of 4% of a nuclear interaction length. Interaction cross sections, charged pion spectra, and charged kaon spectra were previously measured using the same data set. Results on K0S and Lambda production in p+C interactions serve as reference for the understanding of the enhancement of strangeness production in nucleus-nucleus collisions. Moreover, they provide important input for the improvement of neutrino flux predictions for the T2K long baseline neutrino oscillation experiment in Japan. Inclusive production cross sections for K0S and Lambda are presented as a function of laboratory momentum in intervals of the laboratory polar angle covering the range from 0 up to 240 mrad. The results are compared with predictions of several hadron production models. The K0S mean multiplicity in production processes <n_K0S> and the inclusive cross section for K0S production were measured and amount to 0.127 +- 0.005 (stat) +- 0.022 (sys) and 29.0 +- 1.6 (stat) +- 5.0 (sys) mb, respectively.
Inclusive production of protons, anti-protons, neutrons, deuterons and tritons in p+C collisions at 158 GeV/c beam momentum (1207.6520)
The NA49 Collaboration: B. Baatar, G. Barr, J. Bartke, L. Betev, O. Chvala, J. Dolejsi, V. Eckardt, H. G. Fischer, Z. Fodor, A. Karev, V. Kolesnikov, M. Kowalski, M. Makariev, A. Malakhov, M. Mateev, G. Melkumov, A. Rybicki, N. Schmitz, P. Seyboth, R. Stock, G. Tinti, D. Varga, G. Vesztergombi, S. Wenig
March 8, 2013 hep-ex, nucl-ex
The production of protons, anti-protons, neutrons, deuterons and tritons in minimum bias p+C interactions is studied using a sample of 385 734 inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The data cover a phase space area ranging from 0 to 1.9 GeV/c in transverse momentum and in Feynman x from -0.80 to 0.95 for protons, from -0.2 to 0.4 for anti-protons and from 0.2 to 0.95 for neutrons. Existing data in the far backward hemisphere are used to extend the coverage for protons and light nuclear fragments into the region of intranuclear cascading. The use of corresponding data sets obtained in hadron-proton collisions with the same detector allows for the detailed analysis and model-independent separation of the three principal components of hadronization in p+C interactions, namely projectile fragmentation, target fragmentation of participant nucleons and intranuclear cascading.
Pion emission from the T2K replica target: method, results and application (1207.2114)
N. Abgrall, A. Aduszkiewicz, T. Anticic, N. Antoniou, J. Argyriades, B. Baatar, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, W. Brooks, J. Brzychczyk, A. Bubak, S. A. Bunyatov, O. Busygina, P. Christakoglou, P. Chung, T. Czopowicz, N. Davis, S. Debieux, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, L. S. Esposito, G. A. Feofilov, Z. Fodor, A. Ferrero, A. Fulop, M. Gazdzicki, M. Golubeva, B. Grabez, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, H. Hakobyan, T. Hasegawa, R. Idczak, S. Igolkin, Y. Ivanov, A. Ivashkin, K. Kadija, A. Kapoyannis, N. Katrynska, D. Kielczewska, D. Kikola, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, O. Kochebina, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, A. Krasnoperov, S. Kuleshov, A. Kurepin, R. Lacey, D. Larsen, A. Laszlo, V. V. Lyubushkin, M. Mackowiak-Pawlowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Maletic, A. Marchionni, A. Marcinek, I. Maris, V. Marin, K. Marton, T. Matulewicz, V. Matveev, G. L. Melkumov, M. Messina, St. Mrowczynski, S. Murphy, T. Nakadaira, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, W. Peryt, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Rohrich, E. Rondio, B. Rossi, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savic, T. Sekiguchi, P. Seyboth, M. Shibata, R. Sipos, E. Skrzypczak, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, C. Strabel, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Taranenko, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli, W. Zipper (The NA61/SHINE Collaboration), V. Galymov, M. Hartz, A. K. Ichikawa, H. Kubo, A. D. Marino, K. Matsuoka, A. Murakami, T. Nakaya, K. Suzuki, T. Yuan, E. D. Zimmerman
Nov. 28, 2012 hep-ex, nucl-ex
The T2K long-baseline neutrino oscillation experiment in Japan needs precise predictions of the initial neutrino flux. The highest precision can be reached based on detailed measurements of hadron emission from the same target as used by T2K exposed to a proton beam of the same kinetic energy of 30 GeV. The corresponding data were recorded in 2007-2010 by the NA61/SHINE experiment at the CERN SPS using a replica of the T2K graphite target. In this paper details of the experiment, data taking, data analysis method and results from the 2007 pilot run are presented. Furthermore, the application of the NA61/SHINE measurements to the predictions of the T2K initial neutrino flux is described and discussed.
System-size and centrality dependence of charged kaon and pion production in nucleus-nucleus collisions at 40A GeV and 158A GeV beam energy (1207.0348)
NA49 Collaboration: T. Anticic, B. Baatar, D. Barna, J. Bartke, H. Beck, L. Betev, H. Bialkowska, C. Blume, M. Bogusz, B. Boimska, J. Book, M. Botje, P. Buncic, T. Cetner, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, P. Dinkelaker, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gazdzicki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, M. Kliemant, V.I. Kolesnikov, T. Kollegger, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, B. Lungwitz, M. Mackowiak, M. Makariev, A.I. Malakhov, M. Mateev, G.L. Melkumov, M. Mitrovski, St. Mrowczynski, V. Nicolic, G. Palla, A.D. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczynski, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Sikler, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Ströbele, T. Susa, M. Szuba, M. Utvic, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranic, Z. Wlodarczyk, A. Wojtaszek-Szwarc
July 2, 2012 nucl-ex
Measurements of charged pion and kaon production are presented in centrality selected Pb+Pb collisions at 40A GeV and 158A GeV beam energy as well as in semi-central C+C and Si+Si interactions at 40A GeV. Transverse mass spectra, rapidity spectra and total yields are determined as a function of centrality. The system-size and centrality dependence of relative strangeness production in nucleus-nucleus collisions at 40A GeV and 158A GeV beam energy are derived from the data presented here and published data for C+C and Si+Si collisions at 158A GeV beam energy. At both energies a steep increase with centrality is observed for small systems followed by a weak rise or even saturation for higher centralities. This behavior is compared to calculations using transport models (UrQMD and HSD), a percolation model and the core-corona approach.
Measurement of Production Properties of Positively Charged Kaons in Proton-Carbon Interactions at 31 GeV/c (1112.0150)
The NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, T. Anticic, N. Antoniou, J. Argyriades, B. Baatar, A. Blondel, J. Blumer, M. Bogusz, L. Boldizsar, A. Bravar, W. Brooks, J. Brzychczyk, A. Bubak, S. A. Bunyatov, O. Busygina, T. Cetner, K.-U. Choi, P. Christakoglou, P. Chung, T. Czopowicz, N. Davis, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, R. Engel, A. Ereditato, L. S. Esposito, G. A. Feofilov, Z. Fodor, A. Ferrero, A. Fulop, X. Garrido, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, H. Hakobyan, T. Hasegawa, R. Idczak, Y. Ivanov, A. Ivashkin, K. Kadija, A. Kapoyannis, N. Katrynska, D. Kielczewska, D. Kikola, J.-H. Kim, M. Kirejczyk, J. Kisiel, T. Kobayashi, O. Kochebina, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, A. Krasnoperov, S. Kuleshov, A. Kurepin, R. Lacey, J. Lagoda, A. Laszlo, V. V. Lyubushkin, M. Mackowiak-Pawlowska, Z. Majka, A. I. Malakhov, A. Marchionni, A. Marcinek, I. Maris, V. Marin, T. Matulewicz, V. Matveev, G. L. Melkumov, A. Meregaglia, M. Messina, St. Mrowczynski, S. Murphy, T. Nakadaira, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, W. Peryt, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadala, S. Pulawski, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Rohrich, E. Rondio, B. Rossi, M. Roth, A. Rubbia, M. Rybczynski, A. Sadovsky, K. Sakashita, T. Sekiguchi, P. Seyboth, M. Shibata, E. Skrzypczak, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, C. Strabel, H. Strobele, T. Susa, P. Szaflik, M. Szuba, M. Tada, A. Taranenko, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, J.-G. Yi, I.-K. Yoo, L. Zambelli, W. Zipper
Dec. 1, 2011 hep-ex
Spectra of positively charged kaons in p+C interactions at 31 GeV/c were measured with the NA61/SHINE spectrometer at the CERN SPS. The analysis is based on the full set of data collected in 2007 with a graphite target with a thickness of 4% of a nuclear interaction length. Interaction cross sections and charged pion spectra were already measured using the same set of data. These new measurements in combination with the published ones are required to improve predictions of the neutrino flux for the T2K long baseline neutrino oscillation experiment in Japan. In particular, the knowledge of kaon production is crucial for precisely predicting the intrinsic electron neutrino component and the high energy tail of the T2K beam. The results are presented as a function of laboratory momentum in 2 intervals of the laboratory polar angle covering the range from 20 up to 240 mrad. The kaon spectra are compared with predictions of several hadron production models. Using the published pion results and the new kaon data, the $K^+/\pi^+$ ratios are computed.
Measurements of Cross Sections and Charged Pion Spectra in Proton-Carbon Interactions at 31 GeV/c (1102.0983)
N. Abgrall, A. Aduszkiewicz, B. Andrieu, T. Anticic, N. Antoniou, J. Argyriades, A. G. Asryan, B. Baatar, A. Blondel, J. Blumer, M. Bogusz, L. Boldizsar, A. Bravar, W. Brooks, J. Brzychczyk, A. Bubak, S. A. Bunyatov, O. Busygina, T. Cetner, K.-U. Choi, P. Christakoglou, P. Chung, T. Czopowicz, N. Davis, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, R. Engel, A. Ereditato, L. S. Esposito, G. A. Feofilov, Z. Fodor, A. Ferrero, A. Fulop, X. Garrido, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, H. Hakobyan, T. Hasegawa, S. Igolkin, A. S. Ivanov, Y. Ivanov, A. Ivashkin, K. Kadija, A. Kapoyannis, N. Katrynska, D. Kielczewska, D. Kikola, J.-H. Kim, M. Kirejczyk, J. Kisiel, T. Kobayashi, O. Kochebina, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, S. Kuleshov, A. Kurepin, R. Lacey, J. Lagoda, A. Laszlo, V. V. Lyubushkin, M. Mackowiak, Z. Majka, A. I. Malakhov, A. Marchionni, A. Marcinek, I. Maris, V. Marin, T. Matulewicz, V. Matveev, G. L. Melkumov, A. Meregaglia, M. Messina, St. Mrowczynski, S. Murphy, T. Nakadaira, P. A. Naumenko, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, W. Peryt, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadala, S. Pulawski, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Rohrich, E. Rondio, B. Rossi, M. Roth, A. Rubbia, M. Rybczynski, A. Sadovsky, K. Sakashita, T. Sekiguchi, P. Seyboth, M. Shibata, A. N. Sissakian, E. Skrzypczak, M. Slodkowski, A. S. Sorin, P. Staszel, G. Stefanek, J. Stepaniak, C. Strabel, H. Strobele, T. Susa, P. Szaflik, M. Szuba, M. Tada, A. Taranenko, R. Tsenov, R. Ulrich, M. Unger, M. Vassiliou, V. V. Vechernin, G. Vesztergombi, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek, J.-G. Yi, I.-K. Yoo, W. Zipper
Sept. 6, 2011 hep-ex
Interaction cross sections and charged pion spectra in p+C interactions at 31 GeV/c were measured with the large acceptance NA61/SHINE spectrometer at the CERN SPS. These data are required to improve predictions of the neutrino flux for the T2K long baseline neutrino oscillation experiment in Japan. A set of data collected during the first NA61/SHINE run in 2007 with an isotropic graphite target with a thickness of 4% of a nuclear interaction length was used for the analysis. The measured p+C inelastic and production cross sections are 257.2 +- 1.9 +- 8.9 mb and 229.3 +- 1.9 +- 9.0 mb, respectively. Inclusive production cross sections for negatively and positively charged pions are presented as a function of laboratory momentum in 10 intervals of the laboratory polar angle covering the range from 0 up to 420 mrad. The spectra are compared with predictions of several hadron production models.
Energy dependence of kaon-to-proton ratio fluctuations in central Pb+Pb collisions from $\sqrt{s_{NN}}$ = 6.3 to 17.3 GeV (1101.3250)
T. Anticic, B. Baatar, D. Barna, J. Bartke, H. Beck, L. Betev, H. Białkowska, C. Blume, M. Bogusz, B. Boimska, J. Book, M. Botje, P. Bunčić, T. Cetner, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gaździcki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, V.I. Kolesnikov, T. Kollegger, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Mackowiak, M. Makariev, A.I. Malakhov, M. Mateev, G.L. Melkumov, M. Mitrovski, St. Mrówczyński, V. Nicolic, G. Pálla, A.D. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Siklér, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Ströbele, T. Susa, M. Szuba, M. Utvić, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranić, Z. Włodarczyk, A. Wojtaszek-Szwarc
May 30, 2011 nucl-ex
Kaons and protons carry large parts of two conserved quantities, strangeness and baryon number. It is argued that their correlation, and thus also their fluctuations, are sensitive to conditions prevailing at the anticipated parton-hadron phase boundary. Fluctuations of the $(\mathrm{K}^+ + \mathrm{K}^-)/(\mathrm{p}+\bar{\mathrm{p}})$ and $\mathrm{K}^+/\mathrm{p}$ ratios have been measured for the first time by NA49 in central Pb+Pb collisions at 5 SPS energies between $\sqrt{s_{NN}}$= 6.3 GeV and 17.3 GeV. Both ratios exhibit a change of sign in $\sigma_{\mathrm{dyn}}$, a measure of non-statistical fluctuations, around $\sqrt{s_{NN}}$ = 8 GeV. Below this energy, $\sigma_{\mathrm{dyn}}$ is positive, indicating larger fluctuations than in a mixed-event background sample, while for higher energies, $\sigma_{\mathrm{dyn}}$ is negative, indicating correlated emission of kaons and protons. The results are compared to UrQMD calculations, which give a good description at the higher SPS energies but fail to reproduce the transition to positive values.
Proton -- Lambda Correlations in Central Pb+Pb Collisions at sqrt(s_{NN}) = 17.3 GeV (1103.3395)
T. Anticic, B. Baatar, D.Barna, J. Bartke, H. Beck, L. Betev, H. Bialkowska, C. Blume, M. Bogusz, B. Boimska, J. Book, M. Botje, P. Buncic, T. Cetner, P. Christakoglou, P. Chung, O. Chvala, J. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gazdzicki, K. Grebieszkow, C. Hohne, K. Kadija, A. Karev, V. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M.K. Mackowiak, M. Makariev, A. Malakhov, M. Mateev, G. Melkumov, M. Mitrovski, St. Mrowczynski, V. Nicolic, G. Palla, A. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Puhlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczynski, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Sikler, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Strobele, T. Susa, M. Szuba, M. Utvic, D. Varga, M. Vassiliou, G. Veres, G. Vesztergombi, D. Vranic, Z. Wlodarczyk, A. Wojtaszek-Szwarc
March 17, 2011 hep-ph, nucl-ex
The momentum correlation between protons and lambda particles emitted from central Pb+Pb collisions at sqrt(s_{NN}) = 17.3 GeV was studied by the NA49 experiment at the CERN SPS. A clear enhancement is observed for small relative momenta ($q_{inv} < 0.2$ GeV). A value for the effective source size is deduced by fitting a theoretical model, which accounts for the strong interaction between the proton and the lambda in a given pair, to the measured data. Assuming a static Gaussian source distribution, we derive an effective radius parameter of $R_G = 3.02 \pm 0.20\,(\text{stat.})\,^{+0.44}_{-0.16}\,(\text{syst.})$ fm.
Centrality dependence of proton and antiproton spectra in Pb+Pb collisions at 40A GeV and 158A GeV measured at the CERN SPS (1009.1747)
T. Anticic, B. Baatar, D. Barna, J. Bartke, H. Beck, L. Betev, H. Bialkowska, C. Blume, M. Bogusz, B. Boimska, J. Book, M. Botje, P. Buncic, T. Cetner, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, V. Eckardt, Z. Fodor, P. Foka, V. Friese, M. Gazdzicki, K. Grebieszkow, C. Höhne, K. Kadija, A. Karev, V.I. Kolesnikov, M. Kowalski, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, M. Mackowiak, M. Makariev, A.I. Malakhov, M. Mateev, G.L. Melkumov, M. Mitrovski, S. Mrowczynski, V. Nicolic, G. Pálla, A.D. Panagiotou, W. Peryt, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Sikler, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, H. Strobele, T. Susa, M. Szuba, M. Utvic, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranic, Z. Włodarczyk, A. Wojtaszek-Szwarc
The yields of (anti-)protons were measured by the NA49 Collaboration in centrality selected Pb+Pb collisions at 40A GeV and 158A GeV. Particle identification was obtained in the laboratory momentum range from 5 to 63 GeV/c by the measurement of the energy loss dE/dx in the TPC detector gas. The corresponding rapidity coverage extends 1.6 units from mid-rapidity into the forward hemisphere. Transverse mass spectra, the rapidity dependences of the average transverse mass, and rapidity density distributions were studied as a function of collision centrality. The values of the average transverse mass as well as the midrapidity yields of protons when normalized to the number of wounded nucleons show only modest centrality dependences. In contrast, the shape of the rapidity distribution changes significantly with collision centrality, especially at 40A GeV. The experimental results are compared to calculations of the HSD and UrQMD transport models.
Inclusive production of charged kaons in p+p collisions at 158 GeV/c beam momentum and a new evaluation of the energy dependence of kaon production up to collider energies (1004.1889)
The NA49 Collaboration: T. Anticic, B. Baatar, J. Bartke, L. Betev, H. Białkowska, B. Boimska, J. Bracinik, V. Cerny, O. Chvala, J. Dolejsi, V. Eckardt, H. G. Fischer, Z. Fodor, E. Gładysz, K. Kadija, A. Karev, V. Kolesnikov, M. Kowalski, M. Kreps, M. Makariev, A. Malakhov, M. Mateev, G. Melkumov, A. Rybicki, N. Schmitz, P. Seyboth, T. Susa, P. Szymanski, V. Trubnikov, D. Varga, G. Vesztergombi, S. Wenig
April 12, 2010 hep-ex
New data on the production of charged kaons in p+p interactions are presented. The data come from a sample of 4.8 million inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The kaons are identified by energy loss in a large TPC tracking system. Inclusive invariant cross sections are obtained in intervals from 0 to 1.7 GeV/c in transverse momentum and from 0 to 0.5 in Feynman x. Using these data as a reference, a new evaluation of the energy dependence of kaon production, including neutral kaons, is conducted over a range from 3 GeV to p+anti-p collider energies.
Inclusive production of protons, anti-protons and neutrons in p+p collisions at 158 GeV/c beam momentum (0904.2708)
NA49 Collaboration: T. Anticic, B. Baatar, J. Bartke, L. Betev, H. Białkowska, C. Blume, B. Boimska, J. Bracinik, V. Cerny, O. Chvala, J. Dolejsi, V. Eckardt, H. G. Fischer, Z. Fodor, P. Foka, V. Friese, M. Gaździcki, C. Höhne, K. Kadija, A. Karev, V. Kolesnikov, M. Kowalski, M. Kreps, M. Makariev, A. Malakhov, M. Mateev, G. Melkumov, M. Mitrovski, S. Mrówczyński, R. Renfordt, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, P. Seyboth, G. Stefanek, R. Stock, H. Ströbele, T. Susa, P. Szymanski, V. Trubnikov, D. Varga, G. Vesztergombi, D. Vranić, S. Wenig, Z. Włodarczyk, A. Wojtaszek
New data on the production of protons, anti-protons and neutrons in p+p interactions are presented. The data come from a sample of 4.8 million inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The charged baryons are identified by energy loss measurement in a large TPC tracking system. Neutrons are detected in a forward hadronic calorimeter. Inclusive invariant cross sections are obtained in intervals from 0 to 1.9 GeV/c (0 to 1.5 GeV/c) in transverse momentum and from -0.05 to 0.95 (-0.05 to 0.4) in Feynman x for protons (anti-protons), respectively. pT integrated neutron cross sections are given in the interval from 0.1 to 0.9 in Feynman x. The data are compared to a wide sample of existing results in the SPS and ISR energy ranges as well as to proton and neutron measurements from HERA and RHIC.
Energy dependence of phi meson production in central Pb+Pb collisions at sqrt(s_nn) = 6 to 17 GeV (0806.1937)
C. Alt, T. Anticic, B. Baatar, D. Barna, J. Bartke, L. Betev, H. Bialkowska, C. Blume, B. Boimska, M. Botje, J. Bracinik, R. Bramm, P. Buncic, V. Cerny, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, P. Csato, P. Dinkelaker, V. Eckardt, D. Flierl, Z. Fodor, P. Foka, V. Friese, J. Gal, M. Gazdzicki, V. Genchev, G. Georgopoulos, E. Gladysz, K. Grebieszkow, S. Hegyi, C. Hoehne, K. Kadija, A. Karev, D. Kikola, M. Kliemant, S. Kniege, V.I. Kolesnikov, T. Kollegger, E. Kornas, R. Korus, M. Kowalski, I. Kraus, M. Kreps, D. Kresan, A. Laszlo, R. Lacey, M. van Leeuwen, P. Levai, L. Litov, B. Lungwitz, M. Makariev, A.I. Malakhov, M. Mateev, G.L. Melkumov, A. Mischke, M. Mitrovski, J. Molnar, St. Mrowczynski, V. Nicolic, G. Palla, A.D. Panagiotou, D. Panayotov, A. Petridis, W. Peryt, M. Pikna, J. Pluta, D. Prindle, F. Puehlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczynski, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Sikler, B. Sitar, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, C. Strabel, H. Stroebele, T. Susa, I. Szentpetery, J. Sziklai, M. Szuba, P. Szymanski, V. Trubnikov, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranic, A. Wetzler, Z. Wlodarczyk, I.K. Yoo, J. Zimanyi
Oct. 27, 2008 nucl-ex
Phi meson production is studied by the NA49 Collaboration in central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV beam energy. The data are compared with measurements at lower and higher energies and to microscopic and thermal models. The energy dependence of yields and spectral distributions is compatible with the assumption that partonic degrees of freedom set in at low SPS energies.
Event-by-event transverse momentum fluctuations in nuclear collisions at CERN SPS (0707.4608)
K. Grebieszkow, C. Alt, T. Anticic, B. Baatar, D. Barna, J. Bartke, L. Betev, H. Białkowska, C. Blume, B. Boimska, M. Botje, J. Bracinik, R. Bramm, P. Bunčić, V. Cerny, P. Christakoglou, P. Chung, O. Chvala, J.G. Cramer, P. Csató, P. Dinkelaker, V. Eckardt, D. Flierl, Z. Fodor, P. Foka, V. Friese, J. Gál, M. Gaździcki, V. Genchev, G. Georgopoulos, E. Gładysz, S. Hegyi, C. Höhne, K. Kadija, A. Karev, D. Kikola, M. Kliemant, S. Kniege, V.I. Kolesnikov, E. Kornas, R. Korus, M. Kowalski, I. Kraus, M. Kreps, A. Laszlo, R. Lacey, M. van Leeuwen, P. Lévai, L. Litov, B. Lungwitz, M. Makariev, A.I. Malakhov, M. Mateev, G.L. Melkumov, A. Mischke, M. Mitrovski, J. Molnár, St. Mrówczyński, V. Nicolic, G. Pálla, A.D. Panagiotou, D. Panayotov, A. Petridis, W. Peryt, M. Pikna, J. Pluta, D. Prindle, F. Pühlhofer, R. Renfordt, C. Roland, G. Roland, M. Rybczyński, A. Rybicki, A. Sandoval, N. Schmitz, T. Schuster, P. Seyboth, F. Siklér, B. Sitar, E. Skrzypczak, M. Slodkowski, G. Stefanek, R. Stock, C. Strabel, H. Ströbele, T. Susa, I. Szentpétery, J. Sziklai, M. Szuba, P. Szymanski, V. Trubnikov, D. Varga, M. Vassiliou, G.I. Veres, G. Vesztergombi, D. Vranić, A. Wetzler, Z. Włodarczyk, A. Wojtaszek, I.K. Yoo, J. Zimányi
Feb. 28, 2008 nucl-ex
The latest NA49 results on event-by-event transverse momentum fluctuations are presented for central Pb+Pb interactions over the whole SPS energy range (20A - 158A GeV). Two different methods are applied: evaluating the $\Phi_{p_{T}}$ fluctuation measure and studying two-particle transverse momentum correlations. The obtained results are compared to predictions of the UrQMD model. The results on the energy dependence are compared to the NA49 data on the system size dependence. The NA61 (SHINE, NA49-future) strategy of searching for the QCD critical end-point is also discussed.
Householder's method
In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each of these methods is characterized by the number d, which is known as the order of the method. The algorithm is iterative and has a rate of convergence of d + 1.
These methods are named after the American mathematician Alston Scott Householder.
Method
Householder's method is a numerical algorithm for solving the nonlinear equation f(x) = 0. In this case, the function f has to be a function of one real variable. The method consists of a sequence of iterations
$x_{n+1}=x_{n}+d\;{\frac {\left(1/f\right)^{(d-1)}(x_{n})}{\left(1/f\right)^{(d)}(x_{n})}}$
beginning with an initial guess x0.[1]
If f is a d + 1 times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates xn satisfy:
$|x_{n+1}-a|\leq K\cdot {|x_{n}-a|}^{d+1}$, for some $K>0.\!$
This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order d + 1.
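The iteration above can be sketched in a few lines of Python (an illustrative sketch, not part of the original article; restricted to polynomial f for simplicity). It uses the fact that $(1/f)^{(k)}(x_{n}) = k!\,r_{k}$, where $r_{k}$ are the Taylor coefficients of 1/f about $x_{n}$, so the update $x_{n+1} = x_{n} + d\,(1/f)^{(d-1)}/(1/f)^{(d)}$ reduces to $x_{n+1} = x_{n} + r_{d-1}/r_{d}$. The helper names `taylor_coeffs` and `householder_step` are hypothetical, chosen for this sketch.

```python
def taylor_coeffs(poly, x, order):
    """Taylor coefficients [p(x), p'(x)/1!, p''(x)/2!, ...] of the polynomial
    p(t) = sum(poly[i] * t**i) about t = x, via repeated synthetic division."""
    c, out = list(poly), []
    for _ in range(order + 1):
        if not c:
            out.append(0.0)
            continue
        acc = c[-1]
        q = [0.0] * (len(c) - 1)
        for i in range(len(c) - 2, -1, -1):
            q[i] = acc
            acc = acc * x + c[i]
        out.append(acc)   # remainder of division by (t - x) = next Taylor coefficient
        c = q             # quotient carries the remaining coefficients
    return out


def householder_step(poly, x, d):
    """One Householder iteration of order d for a root of the polynomial."""
    a = taylor_coeffs(poly, x, d)
    r = [1.0 / a[0]]      # Taylor coefficients of 1/f about x, by series reciprocal
    for n in range(1, d + 1):
        r.append(-sum(a[k] * r[n - k] for k in range(1, n + 1)) / a[0])
    # x + d * (1/f)^(d-1) / (1/f)^(d)  simplifies to  x + r[d-1] / r[d]
    return x + r[d - 1] / r[d]


# Newton's example from the section below: f(x) = x^3 + 6x^2 + 10x - 1
p = [-1.0, 10.0, 6.0, 1.0]
x = 0.0
for _ in range(3):
    x = householder_step(p, x, 3)   # d = 3: convergence of order 4
```

With d = 1 this reproduces a Newton step and with d = 2 a Halley step, as derived later in the article.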
Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large d. The Ostrowski index expresses the error reduction in the number of function evaluations instead of the iteration count.[2]
• For polynomials, the evaluation of the first d derivatives of f at xn using the Horner method has an effort of d + 1 polynomial evaluations. Since n(d + 1) evaluations over n iterations give an error exponent of (d + 1)^n, the exponent for one function evaluation is ${\sqrt[{d+1}]{d+1}}$, numerically 1.4142, 1.4422, 1.4142, 1.3797 for d = 1, 2, 3, 4, and falling after that. By this criterion, the d = 2 case (Halley's method) is the optimal value of d.
• For general functions the derivative evaluation using the Taylor arithmetic of automatic differentiation requires the equivalent of (d + 1)(d + 2)/2 function evaluations. One function evaluation thus reduces the error by an exponent of ${\sqrt[{\frac {(d+1)(d+2)}{2}}]{d+1}}$, which is ${\sqrt[{3}]{2}}\approx 1.2599$ for Newton's method, ${\sqrt[{6}]{3}}\approx 1.2009$ for Halley's method and falling towards 1 or linear convergence for the higher order methods.
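The two efficiency indices quoted above are easy to check numerically (an illustrative sketch, not part of the original article):

```python
# Error-reduction exponent per function evaluation (efficiency index).
# Polynomials via Horner: d + 1 evaluations per step  ->  (d+1)**(1/(d+1))
poly_index = {d: (d + 1) ** (1.0 / (d + 1)) for d in range(1, 6)}

# General f via Taylor arithmetic: (d+1)(d+2)/2 evaluations per step
general_index = {d: (d + 1) ** (2.0 / ((d + 1) * (d + 2))) for d in range(1, 6)}

best = max(poly_index, key=poly_index.get)   # d = 2, i.e. Halley's method
```

Rounding the values confirms the figures in the text: 1.4142 and 1.4422 for d = 1, 2 in the polynomial case, and 1.2599, 1.2009 for Newton's and Halley's methods in the general case.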
Motivation
First approach
Suppose f is analytic in a neighborhood of a and f(a) = 0. Then f has a Taylor series at a and its constant term is zero. Because this constant term is zero, the function f(x) / (x − a) will have a Taylor series at a and, when f ′ (a) ≠ 0, its constant term will not be zero. Because that constant term is not zero, it follows that the reciprocal (x − a) / f(x) has a Taylor series at a, which we will write as $\sum _{k=0}^{+\infty }{\frac {c_{k}(x-a)^{k}}{k!}}$ and its constant term c0 will not be zero. Using that Taylor series we can write
${\frac {1}{f}}={\frac {c_{0}}{x-a}}+\sum _{k=1}^{+\infty }{\frac {c_{k}(x-a)^{k-1}}{k~(k-1)!}}\,.$
We can compute its d-th derivative:
$\left({\frac {1}{f}}\right)^{(d)}={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}+\sum _{k=d+1}^{+\infty }{\frac {c_{k}(x-a)^{k-d-1}}{k~(k-d-1)!}}$
$={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}\left(1+{\frac {1}{(-1)^{d}d!~c_{0}}}\sum _{k=d+1}^{+\infty }{\frac {c_{k}(x-a)^{k}}{k~(k-d-1)!}}\right)$
$={\frac {(-1)^{d}d!~c_{0}}{(x-a)^{d+1}}}\left(1+{\mathcal {O}}\left((x-a)^{d+1}\right)\right)\,.$
Conveniently, the terms for k = 1, ..., d have vanished. We thus get that the ratio
$d~{\frac {(1/f)^{(d-1)}}{(1/f)^{(d)}}}=d~{\frac {(-1)^{d-1}(d-1)!~c_{0}}{(-1)^{d}d!~c_{0}}}(x-a)\left({\frac {1+{\mathcal {O}}\left((x-a)^{d}\right)}{1+{\mathcal {O}}\left((x-a)^{d+1}\right)}}\right)$
$=-(x-a)\left(1+{\mathcal {O}}\left((x-a)^{d}\right)\right)\,.$
If a is the zero of f that is closest to x then the second factor goes to 1 as d goes to infinity and $x+d~{\frac {(1/f)^{(d-1)}}{(1/f)^{(d)}}}$ goes to a.
Second approach
Suppose x = a is a simple root. Then near x = a, (1/f)(x) is a meromorphic function. Suppose we have the Taylor expansion:
$(1/f)(x)=\sum _{d=0}^{\infty }{\frac {(1/f)^{(d)}(b)}{d!}}(x-b)^{d}$
around a point b that is closer to a than it is to any other zero of f. By König's theorem, we have:
$a-b=\lim _{d\rightarrow \infty }{\frac {\frac {(1/f)^{(d-1)}(b)}{(d-1)!}}{\frac {(1/f)^{(d)}(b)}{d!}}}=\lim _{d\rightarrow \infty }d\,{\frac {(1/f)^{(d-1)}(b)}{(1/f)^{(d)}(b)}}.$
These suggest that Householder's iteration might be a good convergent iteration. The actual proof of the convergence is also based on these ideas.
The methods of lower order
Householder's method of order 1 is just Newton's method, since:
${\begin{array}{rl}x_{n+1}=&x_{n}+1\,{\frac {\left(1/f\right)(x_{n})}{\left(1/f\right)^{(1)}(x_{n})}}\\[.7em]=&x_{n}+{\frac {1}{f(x_{n})}}\cdot \left({\frac {-f'(x_{n})}{f(x_{n})^{2}}}\right)^{-1}\\[.7em]=&x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.\end{array}}$
For Householder's method of order 2 one gets Halley's method, since the identities
$\textstyle (1/f)'(x)=-{\frac {f'(x)}{f(x)^{2}}}\ $
and
$\textstyle \ (1/f)''(x)=-{\frac {f''(x)}{f(x)^{2}}}+2{\frac {f'(x)^{2}}{f(x)^{3}}}$
result in
${\begin{array}{rl}x_{n+1}=&x_{n}+2\,{\frac {\left(1/f\right)'(x_{n})}{\left(1/f\right)''(x_{n})}}\\[1em]=&x_{n}+{\frac {-2f(x_{n})\,f'(x_{n})}{-f(x_{n})f''(x_{n})+2f'(x_{n})^{2}}}\\[1em]=&x_{n}-{\frac {f(x_{n})f'(x_{n})}{f'(x_{n})^{2}-{\tfrac {1}{2}}f(x_{n})f''(x_{n})}}\\[1em]=&x_{n}+h_{n}\;{\frac {1}{1+{\frac {1}{2}}(f''/f')(x_{n})\,h_{n}}}.\end{array}}$
In the last line, $h_{n}=-{\tfrac {f(x_{n})}{f'(x_{n})}}$ is the update of the Newton iteration at the point $x_{n}$. This line was added to demonstrate where the difference to the simple Newton's method lies.
The third order method is obtained from the identity of the third order derivative of 1/f
$\textstyle (1/f)'''(x)=-{\frac {f'''(x)}{f(x)^{2}}}+6{\frac {f'(x)\,f''(x)}{f(x)^{3}}}-6{\frac {f'(x)^{3}}{f(x)^{4}}}$
and has the formula
${\begin{array}{rl}x_{n+1}=&x_{n}+3\,{\frac {\left(1/f\right)''(x_{n})}{\left(1/f\right)'''(x_{n})}}\\[1em]=&x_{n}-{\frac {6f(x_{n})\,f'(x_{n})^{2}-3f(x_{n})^{2}f''(x_{n})}{6f'(x_{n})^{3}-6f(x_{n})f'(x_{n})\,f''(x_{n})+f(x_{n})^{2}\,f'''(x_{n})}}\\[1em]=&x_{n}+h_{n}{\frac {1+{\frac {1}{2}}(f''/f')(x_{n})\,h_{n}}{1+(f''/f')(x_{n})\,h_{n}+{\frac {1}{6}}(f'''/f')(x_{n})\,h_{n}^{2}}}\end{array}}$
and so on.
Example
The first problem solved by Newton with the Newton-Raphson-Simpson method was the polynomial equation $y^{3}-2y-5=0$. He observed that there should be a solution close to 2. Replacing y = x + 2 transforms the equation into
$0=f(x)=-1+10x+6x^{2}+x^{3}$.
The Taylor series of the reciprocal function starts with
${\begin{array}{rl}1/f(x)=&-1-10\,x-106\,x^{2}-1121\,x^{3}-11856\,x^{4}-125392\,x^{5}\\&-1326177\,x^{6}-14025978\,x^{7}-148342234\,x^{8}-1568904385\,x^{9}\\&-16593123232\,x^{10}+O(x^{11})\end{array}}$
The result of applying Householder's methods of various orders at x = 0 is also obtained by dividing neighboring coefficients of the latter power series. For the first orders one gets the following values after just one iteration step. For example, in the case of the 3rd order, $x_{1}=0.0+106/1121=0.09455842997324$.
d x1
1 0.100000000000000000000000000000000
2 0.094339622641509433962264150943396
3 0.094558429973238180196253345227475
4 0.094551282051282051282051282051282
5 0.094551486538216154140615031261962
6 0.094551481438752142436492263099118
7 0.094551481543746895938379484125812
8 0.094551481542336756233561913325371
9 0.094551481542324837086869382419375
10 0.094551481542326678478801765822985
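The table can be reproduced by taking ratios of neighboring coefficients of the series for 1/f given above (an illustrative sketch, not part of the original article; the coefficient list is copied from that series):

```python
# Taylor coefficients c_k of 1/f(x) = sum(c[k] * x**k), from the series above.
c = [-1, -10, -106, -1121, -11856, -125392, -1326177,
     -14025978, -148342234, -1568904385, -16593123232]

# One Householder step of order d, started at x = 0, lands at c[d-1] / c[d].
x1 = {d: c[d - 1] / c[d] for d in range(1, 11)}
```

For instance, x1[3] = 106/1121 = 0.094558429973..., matching the third row of the table.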
As one can see, each order d yields slightly more than d correct decimal places. The first one hundred digits of the correct solution are 0.0945514815423265914823865405793029638573061056282391803041285290453121899834836671462672817771577578.
Let us calculate the $x_{2},x_{3},x_{4}$ values for the lowest orders, starting from the derivatives of f:
$f=-1+10x+6x^{2}+x^{3}$
$f^{\prime }=10+12x+3x^{2}$
$f^{\prime \prime }=12+6x$
$f^{\prime \prime \prime }=6$
and using the following relations:
1st order; $x_{i+1}=x_{i}-f(x_{i})/f^{\prime }(x_{i})$
2nd order; $x_{i+1}=x_{i}-2ff^{\prime }/(2{f^{\prime }}^{2}-ff^{\prime \prime })$
3rd order; $x_{i+1}=x_{i}-(6f{f^{\prime }}^{2}-3f^{2}f^{\prime \prime })/(6{f^{\prime }}^{3}-6ff^{\prime }f^{\prime \prime }+f^{2}f^{\prime \prime \prime })$
x 1st (Newton) 2nd (Halley) 3rd order 4th order
x1 0.100000000000000000000000000000000 0.094339622641509433962264150943395 0.094558429973238180196253345227475 0.09455128205128
x2 0.094568121104185218165627782724844 0.094551481540164214717107966227500 0.094551481542326591482567319958483
x3 0.094551481698199302883823703544266 0.094551481542326591482386540579303 0.094551481542326591482386540579303
x4 0.094551481542326591496064847153714 0.094551481542326591482386540579303 0.094551481542326591482386540579303
x5 0.094551481542326591482386540579303
x6 0.094551481542326591482386540579303
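A short Python sketch (illustrative, not part of the original article) reproduces the columns of this table from the three relations above; plain double precision limits agreement to the first ~16 digits:

```python
# f(x) = x^3 + 6x^2 + 10x - 1 and its derivatives, as given above.
f  = lambda x: -1 + 10*x + 6*x**2 + x**3
f1 = lambda x: 10 + 12*x + 3*x**2
f2 = lambda x: 12 + 6*x
f3 = lambda x: 6.0

def newton(x):   # 1st order
    return x - f(x) / f1(x)

def halley(x):   # 2nd order
    return x - 2*f(x)*f1(x) / (2*f1(x)**2 - f(x)*f2(x))

def order3(x):   # 3rd order
    num = 6*f(x)*f1(x)**2 - 3*f(x)**2*f2(x)
    den = 6*f1(x)**3 - 6*f(x)*f1(x)*f2(x) + f(x)**2*f3(x)
    return x - num / den

# First column of the table: Newton iterates x1..x5 from x0 = 0.
x, newton_iters = 0.0, []
for _ in range(5):
    x = newton(x)
    newton_iters.append(x)
```

The one-step values halley(0.0) and order3(0.0) match the x1 entries of the second and third columns.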
Derivation
An exact derivation of Householder's methods starts from the Padé approximation of order d + 1 of the function, where the approximant with linear numerator is chosen. Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator.
The Padé approximation has the form
$f(x+h)={\frac {a_{0}+h}{b_{0}+b_{1}h+\cdots +b_{d-1}h^{d-1}}}+O(h^{d+1}).$
The rational function has a zero at $h=-a_{0}$.
Just as the Taylor polynomial of degree d has d + 1 coefficients that depend on the function f, the Padé approximation also has d + 1 coefficients dependent on f and its derivatives. More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add to the order of the approximant. Therefore, $b_{d}=0$ has to hold.
One could determine the Padé approximant starting from the Taylor polynomial of f using Euclid's algorithm. However, starting from the Taylor polynomial of 1/f is shorter and leads directly to the given formula. Since
$(1/f)(x+h)=(1/f)(x)+(1/f)'(x)h+\cdots +(1/f)^{(d-1)}(x){\frac {h^{d-1}}{(d-1)!}}+(1/f)^{(d)}(x){\frac {h^{d}}{d!}}+O(h^{d+1})$
has to be equal to the inverse of the desired rational function, we get, after multiplying by $a_{0}+h$ and comparing the coefficients of $h^{d}$, the equation
$0=b_{d}=a_{0}(1/f)^{(d)}(x){\frac {1}{d!}}+(1/f)^{(d-1)}(x){\frac {1}{(d-1)!}}$.
Now, solving the last equation for the zero $h=-a_{0}$ of the numerator results in
${\begin{aligned}h&=-a_{0}={\frac {{\frac {1}{(d-1)!}}(1/f)^{(d-1)}(x)}{{\frac {1}{d!}}(1/f)^{(d)}(x)}}\\&=d\,{\frac {(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)}}\end{aligned}}$.
This implies the iteration formula
$x_{n+1}=x_{n}+d\;{\frac {\left(1/f\right)^{(d-1)}(x_{n})}{\left(1/f\right)^{(d)}(x_{n})}}$.
Relation to Newton's method
Householder's method applied to the real-valued function f(x) is the same as Newton's method applied to the function g(x):
$x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}$
with
$g(x)=\left|(1/f)^{(d-1)}\right|^{-1/d}\,.$
In particular, d = 1 gives Newton's method unmodified and d = 2 gives Halley's method.
References
1. Householder, Alston Scott (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill. p. 169. ISBN 0-07-030465-3.
2. Ostrowski, A. M. (1966). Solution of Equations and Systems of Equations. Pure and Applied Mathematics. Vol. 9 (Second ed.). New York: Academic Press.
External links
• Pascal Sebah and Xavier Gourdon (2001). "Newton's method and high order iteration". Note: Use the PostScript version of this link; the website version is not compiled correctly.
Critical Phenomena in Gravitational Collapse: The Studies So Far
Wang, Anzhong;
Brazilian Journal of Physics , 2001, DOI: 10.1590/S0103-97332001000200009
Abstract: Studies of black hole formation from gravitational collapse have revealed interesting non-linear phenomena at the threshold of black hole formation. In particular, in 1993 Choptuik studied the collapse of a massless scalar field with spherical symmetry and found some behaviour which is quite similar to the critical phenomena well-known in statistical mechanics and quantum field theory. Universality and echoing of the critical solution and power-law scaling of the black hole masses have given rise to the name critical phenomena in gravitational collapse. Choptuik's results were soon confirmed both numerically and semi-analytically, and have been extended to various other matter fields. In this paper, we shall give a brief introduction to this fascinating and relatively new area, and provide an updated publication list. An analytical "toy" model of critical collapse is presented, and some current investigations are given.
Critical collapse in tensor-multi-scalar and non-linear gravity theories: A universal class
Anzhong Wang
Physics , 1999,
Abstract: Critical collapse in tensor-multi-scalar gravity theories is studied, and found that for any given target space all the theories conformally related belong to the same universal class. When only one scalar field is present, the universality is extended to include a class of non-linear gravity theories.
Comment on "Absence of trapped surfaces and singularities in cylindrical collapse"
Physics , 2003, DOI: 10.1103/PhysRevD.72.108501
Abstract: Recently, the gravitational collapse of an infinite cylindrical thin shell of matter in an otherwise empty spacetime with two hypersurface orthogonal Killing vectors was studied by Gonçalves [Phys. Rev. D65, 084045 (2002)]. By using three "alternative" criteria for trapped surfaces, the author claimed to have shown that they can never form either outside or on the shell, regardless of the matter content of the shell, except at asymptotic future null infinity. Following Penrose's original idea, we first define trapped surfaces in cylindrical spacetimes in terms of the expansions of null directions orthogonal to the surfaces, and then show that the first criterion used by Gonçalves is incorrect. We also show that his analysis of non-existence of trapped surfaces in vacuum is incomplete. To confirm our claim, we present an example that is a solution to the vacuum Einstein field equations and satisfies all the regularity conditions imposed by Gonçalves. After extending the solution to the whole spacetime, we show explicitly that trapped surfaces exist in the extended region.
$f(R)$ theory and geometric origin of the dark sector in Horava-Lifshitz gravity
Physics , 2010, DOI: 10.1142/S0217732311034980
Abstract: Inclusion of $f(R)$ term in the action of Horava-Lifshitz quantum gravity with projectability but without detailed balance condition is investigated, where $R$ denotes the 3-spatial dimensional Ricci scalar. Conditions for the spin-0 graviton to be free of ghosts and instability are studied. The requirement that the theory reduce to general relativity in the IR makes the scalar mode unstable in the Minkowski background but stable in the de Sitter. It is remarkable that the dark sector, dark matter and dark energy, of the universe has a naturally geometric origin in such a setup. Bouncing universes can also be constructed. Scalar perturbations in the FRW backgrounds with non-zero curvature are presented.
Orbifold branes in string/M-Theory and their cosmological applications
Abstract: In this brief report, we summarize our recent studies in brane cosmology in both string theory and M-Theory on $S^{1}/Z_{2}$. In such setups, we find that the radion is stable and its mass, with a very conservative estimation, can be of the order of $0.1 \sim 0.01$ GeV. The hierarchy problem can be addressed by combining the large extra dimension, warped factor, and tension coupling mechanisms. Gravity is localized on the visible brane, and the spectrum of the gravitational Kaluza-Klein (KK) modes is discrete and can have a mass gap of TeV. The corrections to the 4D Newtonian potential from the higher order gravitational KK modes are exponentially suppressed. Applying such setups to cosmology, we find that a late transient acceleration of the universe seems to be the generic feature of the theory, due to the interaction between branes and bulk. A bouncing early universe is also rather easily realized.
Vector and tensor perturbations in Horava-Lifshitz cosmology
Abstract: We study cosmological vector and tensor perturbations in Horava-Lifshitz gravity, adopting the most general Sotiriou-Visser-Weinfurtner generalization without the detailed balance but with projectability condition. After deriving the general formulas in a flat FRW background, we find that the vector perturbations are identical to those given in general relativity. This is true also in the non-flat cases. For the tensor perturbations, high order derivatives of the curvatures produce effectively an anisotropic stress, which could have significant effects on the high-frequency modes of gravitational waves, while for the low-frequency modes, the effects are negligible. The power spectrum is scale-invariant in the UV regime, because of the particular dispersion relations. But, due to lower-order corrections, it will eventually reduce to that given in GR in the IR limit. Applying the general formulas to the de Sitter and power-law backgrounds, we calculate the power spectrum and index, using the uniform approximations, and obtain their analytical expressions in both cases.
Thick de Sitter 3-Branes, Dynamic Black Holes and Localization of Gravity
Abstract: The embedding of a thick de Sitter 3-brane into a five-dimensional bulk is studied, assuming a scalar field with potential is present in the bulk. A class of solutions is found in closed form that can represent a thick de Sitter 3-brane interpolating either between two dynamical black holes with a $R \times S_{4}$ topology or between two Rindler-like spacetimes with a $R_{2}\times S_{3}$ topology. The gravitational field is localized in a small region near the center of the 3-brane. The analysis of graviton fluctuations shows that a zero mode exists and separates itself from a set of continuous modes by a mass gap. The existence of such a mass gap is shown to be universal. The scalar perturbations are also studied and shown to be stable.
No-Go Theorem in Spacetimes with Two Commuting Spacelike Killing Vectors
Physics , 2003, DOI: 10.1007/s10714-005-0166-0
Abstract: Four-dimensional Riemannian spacetimes with two commuting spacelike Killing vectors are studied in Einstein's theory of gravity, and it is found that no outer apparent horizons exist, provided that the dominant energy condition holds.
Critical Collapse of Cylindrically Symmetric Scalar Field in Four-Dimensional Einstein's Theory of Gravity
Abstract: Four-dimensional cylindrically symmetric spacetimes with homothetic self-similarity are studied in the context of Einstein's Theory of Gravity, and a class of exact solutions to the Einstein-massless scalar field equations is found. Their local and global properties are investigated, and it is found that they represent gravitational collapse of a massless scalar field. In some cases the collapse forms black holes with cylindrical symmetry, while in the other cases it does not. The linear perturbations of these solutions are also studied and given in closed form. From the spectra of the unstable eigen-modes, it is found that there exists one solution that has precisely one unstable mode, which may represent a critical solution, sitting on a boundary that separates two different basins of attraction in the phase space.
\begin{definition}[Definition:Coreflexive Relation]
Let $\RR \subseteq S \times S$ be a relation in $S$.

$\RR$ is '''coreflexive''' if and only if:
:$\forall x, y \in S: \tuple {x, y} \in \RR \implies x = y$
\end{definition}
Michiel Coignet
Michiel Coignet (also Quignet, Cognet or Connette in Italian) (1549 in Antwerp – 24 December 1623 in Antwerp) was a Flemish polymath who made significant contributions to various disciplines including cosmography, mathematics, navigation and cartography. He also built new and improved scientific instruments and made military engineering designs.
Coignet was a scientist at the court of the governors of the Spanish Netherlands Albert VII, Archduke of Austria and Isabella Clara Eugenia where he held a position similar to that of his compatriot Simon Stevin at the rival court of Maurice, Prince of Orange.[1]
Life
Michiel Coignet’s father Gillis (also known as Egidius) was a goldsmith and maker of astronomical and mathematical instruments in Antwerp and was married to Brigitte Anthonis Hendriks. Michiel’s brother Jacob III became a physician while his brother Gillis I became a painter. Michiel’s father died in 1562-1563. The details on Michiel’s education are scarce.[2] He was admitted to the St Ambrose Guild of School Teachers in 1568. He taught French and mathematics. It is likely that at the time he started working as a teacher he had already studied higher mathematics since the mathematics class he taught was referred to as 'mathematicam' whereas lower mathematics was referred to as 'cijfferen' (calculation).[3]
He married Maria vanden Eynde c. 1570 and the couple would have 10 children. Only their son Antonius was still alive at the time of his death.[4] In 1572-73 Michiel Coignet was appointed by the city as 'wijnroeier' ('wine gauger'). The wijnroeier was a municipal employee tasked with measuring the wine barrels that arrived in the city in order to calculate the taxes due. From the year 1572 also dates Michiel’s first signed instrument, an astrolabium. This is an indication that his mother likely kept her deceased husband’s workshop in operation until her son could become a master of the Guild of Saint Luke. Michiel became a member of the Guild as the son of a member in 1581 and became a full master in 1589 after a period of apprenticeship.[3] He also became a member of the Guild of Meerse, which was the guild of the shopkeepers and wine gaugers.[5]
Michiel Coignet converted to the Protestant faith. After the Fall of Antwerp in 1585, he joined a local schutterij called the 'kolveniersgilde'. Since only Catholics were typically allowed to join the schutterij it is assumed that he had reconverted to Catholicism. His brother Gillis, however, did not and emigrated to Amsterdam where he had a successful career as an artist.[3] In 1585 Coignet stopped teaching except for classes for military officers and sons of prosperous merchants.[6]
Michiel Coignet remained in this position of 'wijnroeier' until he started his service as a mathematician and engineer for the Archdukes in 1596. He would remain in court service until his death in 1623.[3] In 1604 Coignet received a further stipend from the court for his role as of cosmographer.[7] In 1606, he remarried after the death of his first wife in November 1605 and had four children from this second marriage.[8] One of them was the painter Michiel II Coignet (1618-1663).[9]
In the summer of 1623 Coignet asked the Archduchess Isabella for a pension. She granted his request and decided to award him a single lump sum for his services. However, Coignet died before the sum was paid. The Archduchess Isabella wanted to have his works published, but this plan was not realized.[8]
Instrument making
Coignet invented several instruments and corresponded with Galileo Galilei (from 1588), Gerhard Mercator, Godefroy Wendelin, Ludolph van Ceulen and Fabrizio Mordente, whom he met during the latter’s 1584 sojourn in Antwerp. Among other things, Coignet invented and described instruments that had a function similar to that of the proportional compass.[10] During the dispute over the invention of the proportional compass in 1610, Giovanni Camillo Gloriosi attributed the invention to Coignet and not to Galileo, although the instrument is now mainly attributed to Coignet’s friend Mordente.[11] Coignet distributed the computational functions over several bars and described the instrument in several treatises: on the flat ruler (Traité des Sinus, 1610); flat-legged proportional compasses (De regulae pantometae, 1612); and four-point proportional compasses (El uso del compas proportional, 1618).[6]
Navigation
Strongly encouraged by Gillis Hooftman,[12] Coignet published in 1580 a treatise on navigation entitled Nieuwe Onderwijsinghe op de principaelste Puncten der Zeevaert ('New Instructions on the Principal Points of Navigation'). It was published by the Antwerp publisher Hendrik Hendriksen as an appendix to the Dutch-language translation of Pedro de Medina's Arte de Navegar.[13] In the appendix he pointed to the possibility of determining longitude at sea with watches on the ships. He also described some of his newly invented instruments such as the nautical hemisphere.[14] The nautical hemisphere is an instrument with which the longitude problem could, in theory, be solved. In 2008 an example of this instrument, likely made in Coignet’s workshop, surfaced during an exhibition on the history of the Jesuit Seminary of Tournai.[15]
An expanded, French-language version of the 'Nieuwe Onderwijsinghe prepared by Coignet was published in 1581 by Hendrik Hendriksen under the title Instruction nouvelle des poincts plus excellents et nécessaires, touchant l'art de naviguer... nouvellement practiqué et composé en langue thioise, par Michiel Coignet,... Depuis reveu et augmenté par le mesme autheur…[16]
Cartography
Around 1600 Coignet became involved in the publication of atlases. He edited various editions of the world maps of Abraham Ortelius. He added an introduction on projections and 13 maps to some editions of Ortelius' atlas published as Epitome theatri orbis terrarum d'Ortelius (1601). The Latin-language 'Epitome' was quickly translated into English and French. Coignet edited the French version published in Antwerp. One of the new maps was a map with a description of Japan, for which he had obtained the information from Jesuit sources.[17] Coignet also added an introduction to the atlas Speculum Orbis terrarum of Gerard de Jode.[18]
In 1621 Coignet drew a map showing the preferred itinerary for merchants and merchandise traveling from Flanders to Milan (two copies are preserved, one of which is kept in the library of the Katholieke Universiteit Leuven). The map was promoted in May 1621 by the Antwerp newspaper Nieuwe Tijdinghe in an advertisement that referred to the route as the Prince conduitte, since the route supposedly fell under the protection of the Archdukes. The advertisement claimed that the proposed itinerary would reduce travel time by 10 to 12 days and was 'without danger'.[19]
Mathematics
Coignet may have been a pupil of the German mathematician Valentin Mennher, whose books he published in new editions after Mennher's death in 1570. He also edited Willem Raets' Arithmetica in 1580 and included an appendix on wine gauging.[5] As a mathematician Coignet was highly regarded by his contemporaries and was praised by the Flemish mathematician Adriaan van Roomen. He taught the subject, including during his European tour, when he instructed Marin Getaldić and the officers of Archduke Albert. Getaldić was later in Paris, from where he informed Coignet in a letter dated 15 February 1600 about the mathematician François Viète.[20]
Military engineering design
Coignet was involved in various military engineering projects mainly related to fortification and wrote about ballistics in one of his treatises (El uso de las doze diuisiones geometricas, 1618). From 1596 he worked for the Archdukes on the fortification of the forts along the Scheldt river. He took on an advisory role in the Siege of Hulst of 1596 and the Siege of Ostend from 1602 to 1604.[5]
In 1608, together with the municipal surveyor Mattheus van Herle, he designed a fortification in the area of St Michael's Abbey. Around 1614 he made further military maps. During that time he was in charge of inspecting the excavation of the city moats. He discovered that the contractor was making the moats too narrow and was cutting off some of the corners to save time, and he was forced to conduct regular inspections in order to curb these malpractices. During this period he may also have been involved in the repair of the city walls and the design of a new fort on the left bank of the Scheldt river. In 1618 he discussed with Don Iñigo de Borgia, the commander of the Spanish garrison, the construction of two guard posts on the city walls.[21]
References
1. Meskens (2013)
2. Meskens (2013), p. 14
3. Ad Meskens, Een familie herenigd met haar instrument, in: Scientiarum Historia 27 (2001) 1, pp. 73–81 (in Dutch)
4. Meskens (2013), p. 15
5. Meskens (2013), p. 16
6. Ad Meskens, Michiel Coignet's contribution to the development of the sector, in: Annals of Science, 54 (1997), pp. 143–160
7. Meskens (2013), p. 19
8. Meskens (2013), p. 20
9. Meskens (2013), p. 22
10. Gerard L'Estrange Turner, Elizabethan instrument makers: the origins of the London trade in precision instrument making, Oxford University Press, p. 70
11. Michel Coignet at the Museo Galileo
12. "Gillis Hooftman: Businessman and Patron (engl.)". Archived from the original on 4 March 2016. Retrieved 23 September 2015.
13. Meskens (2013), p. 139
14. Meskens (2013), p. 148
15. Ad Meskens, Een zeesfeer van Michiel Coignet?, in: Studium: Tijdschrift voor Wetenschaps- en Universiteits-Geschiedenis, 201, pp. 53–56
16. Meskens (2013), p. 146
17. Donald F. Lach, Asia in the Making of Europe, Volume II: A Century of Wonder. Book 2: The Literary Arts, Volume 2, University of Chicago Press, 15 Jan 2010, p. 357
18. Meskens (2013), pp. 169–171
19. Geoffrey Parker, The Army of Flanders and the Spanish Road, 1567-1659: The Logistics of Spanish Victory and Defeat in the Low Countries' Wars, Cambridge University Press, 14 October 2004
20. Reprinted in his De Numerosa potestatum ad exegesim resolutione
21. Meskens (2013), pp. 197–210
Sources
• Ad Meskens, Practical mathematics in a commercial metropolis: Mathematical life in late 16th century Antwerp, Springer Science & Business Media, 12 Mar 2013
December 2016, 36(12): 7029-7056. doi: 10.3934/dcds.2016106
On hyperbolicity in the renormalization of near-critical area-preserving maps
Hans Koch, Department of Mathematics, The University of Texas at Austin, Austin, TX 78712
Received February 2016 Revised June 2016 Published October 2016
We consider MacKay's renormalization operator for pairs of area-preserving maps, near the fixed point obtained in [1]. Of particular interest is the restriction $\mathfrak{R}_{0}$ of this operator to pairs that commute and have a zero Calabi invariant. We prove that a suitable extension of $\mathfrak{R}_{0}^{3}$ is hyperbolic at the fixed point, with a single expanding direction. The pairs in this direction are presumably commuting, but we currently have no proof for this. Our analysis yields rigorous bounds on various ``universal'' quantities, including the expanding eigenvalue.
Keywords: invariant circle, hyperbolicity, renormalization, area-preserving maps.
Mathematics Subject Classification: Primary: 37E20; Secondary: 37F2.
Citation: Hans Koch. On hyperbolicity in the renormalization of near-critical area-preserving maps. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 7029-7056. doi: 10.3934/dcds.2016106
G. Arioli and H. Koch, The critical renormalization fixed point for commuting pairs of area-preserving maps, Comm. Math. Phys., 295 (2010), 415. doi: 10.1007/s00220-009-0922-1.
R. de la Llave and A. Olvera, The obstruction criterion for non-existence of invariant circles and renormalization, Nonlinearity, 19 (2006), 1907. doi: 10.1088/0951-7715/19/8/008.
J.-P. Eckmann, H. Koch and P. Wittwer, A computer-assisted proof of universality for area-preserving maps, Mem. Amer. Math. Soc., 47 (1984), 1. doi: 10.1090/memo/0289.
C. Falcolini and R. de la Llave, A rigorous partial justification of Greene's criterion, J. Stat. Phys., 67 (1992), 609. doi: 10.1007/BF01049722.
D. Gaidashev, T. Johnson and M. Martens, Rigidity for infinitely renormalizable area-preserving maps, Preprint, (2012). doi: 10.1215/00127094-3165327.
J. M. Greene, A method for determining a stochastic transition, J. Math. Phys., 20 (1979), 1183. doi: 10.1063/1.524170.
E. Hille and R. S. Phillips, Functional Analysis and Semi-groups, AMS Colloquium Publications, 31 (1974).
H. Hofer and E. Zehnder, Symplectic Invariants and Hamiltonian Dynamics, Birkhäuser Verlag, (1994). doi: 10.1007/978-3-0348-8540-9.
H. Koch, A renormalization group fixed point associated with the breakup of golden invariant tori, Discrete Contin. Dynam. Systems, 11 (2004), 881. doi: 10.3934/dcds.2004.11.881.
H. Koch, Existence of Critical Invariant Tori, Erg. Theor. Dyn. Syst., 28 (2008), 1879. doi: 10.1017/S0143385708000199.
R. S. MacKay, Renormalisation in Area Preserving Maps, Thesis, (1982). doi: 10.1142/9789814354462.
R. S. MacKay, Greene's residue criterion, Nonlinearity, 5 (1992), 161. doi: 10.1088/0951-7715/5/1/007.
A. Olvera and C. Simó, An obstruction method for the destruction of invariant curves, Physica D, 26 (1987), 181. doi: 10.1016/0167-2789(87)90222-3.
S. Ostlund, D. Rand, J. Sethna and E. Siggia, Universal transition from quasiperiodicity to chaos in dissipative systems, Phys. Rev. Lett., 49 (1982), 132. doi: 10.1103/PhysRevLett.49.132.
S. J. Shenker and L. P. Kadanoff, Critical behaviour of KAM surfaces. I. Empirical results, J. Stat. Phys., 27 (1982), 631. doi: 10.1007/BF01013439.
A. Stirnemann, Renormalization for Golden Circles, Comm. Math. Phys., 152 (1993), 369. doi: 10.1007/BF02098303.
A. Stirnemann, Towards an existence proof of MacKay's fixed point, Comm. Math. Phys., 188 (1997), 723. doi: 10.1007/s002200050185.
M. Yampolsky, Hyperbolicity of renormalization of critical circle maps, Publ. Math. Inst. Hautes Etudes Sci., 96 (2002), 1. doi: 10.1007/s10240-003-0007-1.
Ada Reference Manual, ISO/IEC 8652:2012(E), available e.g. at http://www.ada-auth.org/arm.html.
The Institute of Electrical and Electronics Engineers, Inc., IEEE Standard for Binary Floating-Point Arithmetic, ANSI/IEEE Std 754-2008.
A free-software compiler for the Ada programming language, which is part of the GNU Compiler Collection, see http://gcc.gnu.org/.
The MPFR library for multiple-precision floating-point computations with correct rounding, see http://www.mpfr.org/.
The computer programs are available at ftp://ftp.ma.utexas.edu/pub/papers/koch/maps-spec/index.html.
\begin{document}
\begin{center} {\LARGE Linearized Boltzmann Collision Operator: II. Polyatomic Molecules Modeled by a Continuous Internal Energy Variable}
{\large Niclas Bernhoff
}
Department of Mathematics, Karlstad University, 65188 Karlstad, Sweden
[email protected] \end{center}
\textbf{Abstract:}{\small \ The linearized collision operator of the Boltzmann equation for a single species can be written as a sum of a positive multiplication operator, the collision frequency, and a compact integral operator. This classical result has more recently been extended to multi-component mixtures and to polyatomic single species with the polyatomicity modeled by a discrete internal energy variable. In this work we prove compactness of the integral operator for polyatomic single species, with the polyatomicity modeled by a continuous internal energy variable and the number of internal degrees of freedom greater than or equal to two. The terms of the integral operator are shown to be, or to be the uniform limit of, Hilbert-Schmidt integral operators. Self-adjointness of the linearized collision operator follows. Coercivity of the collision frequency is shown for hard-sphere-like models and hard-potential-with-cut-off-like models, implying Fredholmness of the linearized collision operator.}
\textbf{Keywords:}{\small \ Boltzmann equation, Polyatomic gases, Linearized collision operator, Hilbert-Schmidt integral operator}
\textbf{MSC Classification:}{\small \ 82C40, 35Q20, 35Q70, 76P05, 47G10}
\section{Introduction\label{S1}}
The Boltzmann equation is a fundamental equation of the kinetic theory of gases. Considering deviations from an equilibrium, or Maxwellian, distribution, a linearized collision operator is obtained. The linearized collision operator can in a natural way be written as a sum of a positive multiplication operator, the collision frequency, and an integral operator $-K$. Compactness properties of the integral operator $K$ (for angular cut-off kernels) are extensively studied for monatomic single species, see e.g. \cite{Gr-63, Dr-75, Cercignani-88, LS-10}. The integral operator can be written as the sum of two compact integral operators: a Hilbert-Schmidt integral operator and an approximately Hilbert-Schmidt integral operator, i.e. an operator that is the uniform limit of Hilbert-Schmidt integral operators (cf. Lemma $\ref{LGD}$ in Section $\ref{PT1}$) \cite{Glassey}, and so compactness of the integral operator $K$ follows. More recently, compactness results were also obtained for monatomic multi-component mixtures \cite{BGPS-13}, see also \cite{Be-21a} for a different approach, and for polyatomic single species, where the polyatomicity is modeled by a discrete internal energy variable \cite{Be-21a}. In this work, we consider polyatomic single species, where the polyatomicity is modeled by a continuous internal energy variable \cite{BDLP-94, BBBD-18, GP-20}. We restrict ourselves to the physical case where the number of internal degrees of freedom is greater than or equal to two. The compactness property in the case where the molecules are restricted to undergo resonant collisions \cite{BRS-22}, i.e. collisions in which the sum of the internal energies is conserved, was recently considered in \cite{BBS-22}.
Motivated by an approach by Kogan in \cite[Sect. 2.8]{Kogan} for the monatomic single species case, a probabilistic formulation of the collision operator is taken as the starting point. With this approach, cf. \cite{Be-21a}, it is shown, based on modified arguments from the monatomic case, that the integral operator $K$ can be written as a sum of a Hilbert-Schmidt integral operator and an operator that is the uniform limit of Hilbert-Schmidt integral operators (and might even be a Hilbert-Schmidt integral operator itself), and so compactness of the integral operator $K$ follows. The operator $K$ is self-adjoint, as is the collision frequency; hence the linearized collision operator, being the sum of two self-adjoint operators of which one is bounded, is also self-adjoint.
For hard-sphere-like models and hard-potential-with-cut-off-like models, bounds on the collision frequency are obtained. The collision frequency is then coercive, and the corresponding multiplication operator becomes a Fredholm operator. Since the set of Fredholm operators is closed under addition of compact operators, the linearized collision operator also becomes a Fredholm operator, by the compactness of the integral operator $K$. For hard-sphere-like models, as well as "super hard" potential-like models, the linearized collision operator satisfies all the properties of the general linear operator in the abstract half-space problem considered in \cite{Be-21}.
The rest of the paper is organized as follows. In Section $\ref{S2}$, the model considered is presented. The probabilistic formulation of the collision operator and its relation to a more classical formulation \cite{BDLP-94, BBBD-18, GP-20} are accounted for in Section $\ref{S2.1}$. Some classical results for the collision operator are reviewed in Section $\ref{S2.2}$, and for the linearized collision operator in Section $\ref{S2.3}$. Section $\ref{S3}$ is devoted to the main results of this paper. A proof of compactness of the integral operator is presented in Section $\ref{PT1}$, while a proof of the bounds on the collision frequency appears in Section $\ref{PT2}$.
\section{Kinetic model\label{S2}}
In this section the model considered is presented. A probabilistic formulation of the collision operator, cf. \cite{Kogan, SNB-85, BPS-90, Be-21a}, is considered, whose relation to a more classical formulation is accounted for. Known properties of the model and corresponding linearized collision operator are also reviewed.
Consider a single species of polyatomic molecules with mass $m$, where the polyatomicity is modeled by an internal energy variable $I\in $ $\mathbb{R} _{+}$. The distribution functions are nonnegative functions of the form $ f=f\left( t,\mathbf{x},\boldsymbol{\xi },I\right) $, with $t\in \mathbb{R} _{+}$, $\mathbf{x}=\left( x,y,z\right) \in \mathbb{R}^{3}$, and $\boldsymbol{ \xi }=\left( \xi _{x},\xi _{y},\xi _{z}\right) \in \mathbb{R}^{3}$.
Moreover, consider the real Hilbert space $\mathfrak{h}=L^{2}\left( d\boldsymbol{\xi }\,dI\right) $, with inner product \begin{equation*} \left( f,g\right) =\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}fg\,d \boldsymbol{\xi }\,dI\text{ for }f,g\in L^{2}\left( d\boldsymbol{\xi }\,dI\right) \text{.} \end{equation*}
The evolution of the distribution functions is (in the absence of external forces) described by the Boltzmann equation \begin{equation} \frac{\partial f}{\partial t}+\left( \boldsymbol{\xi }\cdot \nabla _{\mathbf{ x}}\right) f=Q\left( f,f\right) \text{,} \label{BE1} \end{equation} where the collision operator $Q=Q\left( f,f\right) $ is a quadratic bilinear operator that accounts for the change of velocities and internal energies of particles due to binary collisions (assuming that the gas is rarefied, such that other collisions are negligible).
\subsection{Collision operator\label{S2.1}}
The collision operator in the Boltzmann equation $\left( \ref{BE1}\right) $ for polyatomic molecules can be written in the form \begin{eqnarray*} Q(f,f) &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}W( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \\ &&\times \left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}\right) \,d\boldsymbol{\xi }_{\ast }d \boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{.} \end{eqnarray*} Here and below the abbreviations \begin{equation} f_{\ast }=f\left( t,\mathbf{x},\boldsymbol{\xi }_{\ast },I_{\ast }\right) \text{, }f^{\prime }=f\left( t,\mathbf{x},\boldsymbol{\xi }^{\prime },I^{\prime }\right) \text{, and }f_{\ast }^{\prime }=f\left( t,\mathbf{x}, \boldsymbol{\xi }_{\ast }^{\prime },I_{\ast }^{\prime }\right) \label{a1} \end{equation} are used, and $\delta $, with $\delta \geq 2$, denotes the number of internal degrees of freedom.
The transition probabilities $W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )$ are of the form, cf. \cite{Kogan, SNB-85, BPS-90} for the monatomic case, \begin{eqnarray} &&W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \notag \\ &=&4m\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\sigma ^{\prime }\frac{\left\vert \mathbf{g}^{\prime }\right\vert }{\left\vert \mathbf{g}\right\vert }\delta _{3}\left( \boldsymbol{\xi }+\boldsymbol{\xi } _{\ast }-\boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime }\right) \notag \\ &&\times \delta _{1}\left( \frac{m}{2}\left( \left\vert \boldsymbol{\xi } \right\vert ^{2}+\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}-\left\vert \boldsymbol{\xi }^{\prime }\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }^{\prime }\right\vert ^{2}\right) -\Delta I\right) \notag \\ &=&4m\left( II_{\ast }\right) ^{\delta /2-1}\sigma \frac{\left\vert \mathbf{g }\right\vert }{\left\vert \mathbf{g}^{\prime }\right\vert }\delta _{3}\left( \boldsymbol{\xi }+\boldsymbol{\xi }_{\ast }-\boldsymbol{\xi }^{\prime }- \boldsymbol{\xi }_{\ast }^{\prime }\right) \notag \\ &&\times \delta _{1}\left( \frac{m}{2}\left( \left\vert \boldsymbol{\xi } \right\vert ^{2}+\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}-\left\vert \boldsymbol{\xi }^{\prime }\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }^{\prime }\right\vert ^{2}\right) -\Delta I\right) \text{, } \notag \\ &&\text{where }\sigma ^{\prime }=\sigma \left( \left\vert \mathbf{g}^{\prime }\right\vert ,\left\vert \cos \theta \right\vert ,I^{\prime },I_{\ast }^{\prime },I,I_{\ast }\right) >0\text{ and} \notag \\ &&\sigma =\sigma \left( \left\vert \mathbf{g}\right\vert ,\left\vert \cos \theta \right\vert ,I,I_{\ast },I^{\prime 
},I_{\ast }^{\prime }\right) >0 \text{ \ a.e., with }\cos \theta =\frac{\mathbf{g}\cdot \mathbf{g}^{\prime } }{\left\vert \mathbf{g}\right\vert \left\vert \mathbf{g}^{\prime }\right\vert }\text{,} \notag \\ &&\text{ }\mathbf{g}=\boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\text{, } \mathbf{g}^{\prime }=\boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime }\text{, and }\Delta I=I^{\prime }+I_{\ast }^{\prime }-I-I_{\ast } \text{,} \label{tp} \end{eqnarray} where $\delta _{3}$ and $\delta _{1}$ denote the Dirac's delta function in $ \mathbb{R}^{3}$ and $\mathbb{R}$, respectively; taking the conservation of momentum and total energy into account. Here it is assumed that the scattering cross sections $\sigma $ satisfy the microreversibility condition \begin{eqnarray} &&\left( II_{\ast }\right) ^{\delta /2-1}\left\vert \mathbf{g}\right\vert ^{2}\sigma \left( \left\vert \mathbf{g}\right\vert ,\left\vert \cos \theta \right\vert ,I,I_{\ast },I^{\prime },I_{\ast }^{\prime }\right) \notag \\ &=&\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\left\vert \mathbf{g}^{\prime }\right\vert ^{2}\sigma \left( \left\vert \mathbf{g} ^{\prime }\right\vert ,\left\vert \cos \theta \right\vert ,I^{\prime },I_{\ast }^{\prime },I,I_{\ast }\right) \text{.} \label{mr} \end{eqnarray} Furthermore, to have invariance of change of particles in a collision, it is assumed that the scattering cross sections $\sigma $ satisfy the symmetry relations \begin{eqnarray} \sigma \left( \left\vert \mathbf{g}\right\vert ,\left\vert \cos \theta \right\vert ,I,I_{\ast },I^{\prime },I_{\ast }^{\prime }\right) &=&\sigma \left( \left\vert \mathbf{g}\right\vert ,\left\vert \cos \theta \right\vert ,I,I_{\ast },I_{\ast }^{\prime },I^{\prime }\right) \notag \\ &=&\sigma \left( \left\vert \mathbf{g}\right\vert ,\left\vert \cos \theta \right\vert ,I_{\ast },I,I_{\ast }^{\prime },I^{\prime }\right) \text{.} \label{sr} \end{eqnarray} The invariance under change of particles in a collision, which follows 
by the definition of the transition probability $\left( \ref{tp}\right) $ and the symmetry relations $\left( \ref{sr}\right) $ for the collision frequency, and the microreversibility of the collisions $\left( \ref{mr} \right) $, imply that the transition probabilities $\left( \ref{tp}\right) $ satisfy the relations \begin{eqnarray} W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&W(\boldsymbol{\xi }_{\ast },\boldsymbol{ \xi },I_{\ast },I\left\vert \boldsymbol{\xi }_{\ast }^{\prime },\boldsymbol{ \xi }^{\prime },I_{\ast }^{\prime },I^{\prime }\right. ) \notag \\ W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&W(\boldsymbol{\xi }^{\prime },\boldsymbol{ \xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\left\vert \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\right. ) \notag \\ W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&W(\boldsymbol{\xi },\boldsymbol{\xi } _{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }_{\ast }^{\prime }, \boldsymbol{\xi }^{\prime },I_{\ast }^{\prime },I^{\prime }\right. )\text{.} \label{rel1} \end{eqnarray} Applying known properties of Dirac's delta function, the transition probabilities may be transformed to \begin{align*} & W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. 
) \\ =& \frac{m}{2}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\sigma ^{\prime }\frac{\left\vert \mathbf{g}^{\prime }\right\vert }{ \left\vert \mathbf{g}\right\vert }\delta _{3}\left( \mathbf{G}-\mathbf{G} ^{\prime }\right) \delta _{1}\left( \frac{m}{4}\left( \left\vert \mathbf{g} \right\vert ^{2}-\left\vert \mathbf{g}^{\prime }\right\vert ^{2}\right) -\Delta I\right) \\ =\,& \frac{m}{2}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\sigma ^{\prime }\frac{\left\vert \mathbf{g}^{\prime }\right\vert }{ \left\vert \mathbf{g}\right\vert }\delta _{3}\left( \mathbf{G}-\mathbf{G} ^{\prime }\right) \delta _{1}\left( E-E^{\prime }\right) \\ =& \,\frac{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}{ \left\vert \mathbf{g}\right\vert }\sigma ^{\prime }\delta _{3}\left( \mathbf{ G}-\mathbf{G}^{\prime }\right) \delta _{1}\left( \sqrt{\left\vert \mathbf{g} \right\vert ^{2}-\frac{4}{m}\Delta I}-\left\vert \mathbf{g}^{\prime }\right\vert \right) \mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I} \\ =& \left( II_{\ast }\right) ^{\delta /2-1}\sigma \frac{\left\vert \mathbf{g} \right\vert }{\left\vert \mathbf{g}^{\prime }\right\vert ^{2}}\delta _{3}\left( \mathbf{G}-\mathbf{G}^{\prime }\right) \delta _{1}\left( \sqrt{ \left\vert \mathbf{g}\right\vert ^{2}-\frac{4}{m}\Delta I}-\left\vert \mathbf{g}^{\prime }\right\vert \right) \mathbf{1}_{m\left\vert \mathbf{g} \right\vert ^{2}>4\Delta I}\text{, with} \\ & \mathbf{G}=\frac{\boldsymbol{\xi }+\boldsymbol{\xi }_{\ast }}{2},\text{ } \mathbf{G}^{\prime }=\frac{\boldsymbol{\xi }^{\prime }+\boldsymbol{\xi } _{\ast }^{\prime }}{2}\text{, }E=\frac{m}{4}\left\vert \mathbf{g}\right\vert ^{2}+I+I_{\ast }\text{,}\ E^{\prime }=\frac{m}{4}\left\vert \mathbf{g} ^{\prime }\right\vert ^{2}+I^{\prime }+I_{\ast }^{\prime }\text{,} \end{align*}
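For the reader's convenience, we recall the standard composition rule for Dirac's delta function that is used in the reduction above (it is not spelled out in the computation): for a continuously differentiable function $h$ with a single simple zero at $x_{0}$, \begin{equation*} \delta _{1}\left( h\left( x\right) \right) =\frac{\delta _{1}\left( x-x_{0}\right) }{\left\vert h^{\prime }\left( x_{0}\right) \right\vert }\text{.} \end{equation*} Applied to $h\left( \left\vert \mathbf{g}^{\prime }\right\vert \right) =\dfrac{m}{4}\left( \left\vert \mathbf{g}\right\vert ^{2}-\left\vert \mathbf{g}^{\prime }\right\vert ^{2}\right) -\Delta I$, with the simple zero $\left\vert \mathbf{g}^{\prime }\right\vert _{0}=\sqrt{\left\vert \mathbf{g}\right\vert ^{2}-\dfrac{4}{m}\Delta I}$ for $m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I$ and $\left\vert h^{\prime }\left( \left\vert \mathbf{g}^{\prime }\right\vert _{0}\right) \right\vert =\dfrac{m}{2}\left\vert \mathbf{g}^{\prime }\right\vert _{0}$, it produces the factor $\dfrac{2}{m\left\vert \mathbf{g}^{\prime }\right\vert _{0}}$ and the indicator $\mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}$ appearing in the third equality.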
A series of change of variables: first $\left\{ \boldsymbol{\xi }^{\prime }, \boldsymbol{\xi }_{\ast }^{\prime }\right\} \rightarrow \!\left\{ \mathbf{g} ^{\prime }=\boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime } \text{,}\mathbf{G}^{\prime }=\dfrac{\boldsymbol{\xi }^{\prime }+\boldsymbol{ \xi }_{\ast }^{\prime }}{2}\!\right\} \!$, followed by a change to spherical coordinates $\left\{ \mathbf{g}^{\prime }\right\} \rightarrow \left\{ \left\vert \mathbf{g}^{\prime }\right\vert ,\boldsymbol{\omega \,}=\dfrac{ \mathbf{g}^{\prime }}{\left\vert \mathbf{g}^{\prime }\right\vert }\right\} $ , and then $\left\{ \left\vert \mathbf{g}^{\prime }\right\vert ,I^{\prime },I_{\ast }^{\prime }\right\} \rightarrow \left\{ R=\dfrac{m\left\vert \mathbf{g}^{\prime }\right\vert ^{2}}{4E},r=\dfrac{I^{\prime }}{(1-R)E} ,E^{\prime }=\dfrac{m}{4}\left\vert \mathbf{g}^{\prime }\right\vert ^{2}+I^{\prime }+I_{\ast }^{\prime }\right\} $, observing that \begin{eqnarray} &&d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime }=d\mathbf{g}^{\prime }d\mathbf{G}^{\prime }dI^{\prime }dI_{\ast }^{\prime }=\left\vert \mathbf{g}^{\prime }\right\vert ^{2}d\left\vert \mathbf{g}^{\prime }\right\vert d\boldsymbol{\omega \,}d \mathbf{G}^{\prime }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&\frac{4}{m^{3/2}}E^{5/2}(1-R)R^{1/2}dRd\boldsymbol{\omega \,}d\mathbf{G} ^{\prime }\boldsymbol{\,}dr\boldsymbol{\,}dE^{\prime } \notag \\ &=&\frac{4E^{\delta +1/2}}{m^{3/2}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}\left( r(1-r)\right) ^{\delta /2-1}\left( 1-R\right) ^{\delta -1}R^{1/2}\boldsymbol{\,}dE^{\prime }d\mathbf{G}^{\prime }d \boldsymbol{\omega \,}dr\boldsymbol{\,}dR\text{,} \notag \\ &&\text{ where }I^{\prime }=r\left( 1-R\right) E\text{ and }I_{\ast }^{\prime }=\left( 1-r\right) \left( 1-R\right) E\text{,} \label{df1} \end{eqnarray} results in a more familiar form of the Boltzmann collision operator for polyatomic molecules modeled with 
a continuous internal energy variable \cite {BDLP-94, BBBD-18, GP-20} \begin{eqnarray*} Q(f,f) &=&\int_{\left( \mathbb{R}^{3}\right) ^{2}\times \left( \mathbb{R} _{+}\right) ^{2}\times \lbrack 0,1]^{2}\mathbb{\times S}^{2}}W(\boldsymbol{ \xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi } ^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \\ &&\times \frac{4E^{\delta +1/2}}{m^{3/2}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{ \left( II_{\ast }\right) ^{\delta /2-1}}\right) \\ &&\times \left( r(1-r)\right) ^{\delta /2-1}\left( 1-R\right) ^{\delta -1}R^{1/2}\boldsymbol{\,}dE^{\prime }d\mathbf{G}^{\prime }d\boldsymbol{ \omega \,}dr\boldsymbol{\,}dR\boldsymbol{\,}d\boldsymbol{\xi }_{\ast } \boldsymbol{\,}dI_{\ast } \\ &=&\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}\times \lbrack 0,1]^{2}\mathbb{ \times S}^{2}}B\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}\right) \\ &&\times \left( r(1-r)\right) ^{\delta /2-1}\left( 1-R\right) ^{\delta -1}R^{1/2}\left( II_{\ast }\right) ^{\delta /2-1}d\boldsymbol{\omega \,}dr \boldsymbol{\,}dR\boldsymbol{\,}d\boldsymbol{\xi }_{\ast }\boldsymbol{\,} dI_{\ast }, \end{eqnarray*} with the collision kernel \begin{eqnarray} B &=&\frac{2\sigma \left\vert \mathbf{g}\right\vert E^{\delta +1/2}\mathbf{1} _{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}}{\sqrt{m}\sqrt{ \left\vert \mathbf{g}\right\vert ^{2}-\frac{4}{m}\Delta I}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}=\frac{\sigma \left\vert \mathbf{g }\right\vert E^{2}\mathbf{1}_{E>0}}{\left( 1-R\right) ^{\delta -2}R^{1/2}\left( r(1-r)\right) ^{\delta /2-1}}\text{, where} \notag \\ E &=&\frac{m\left\vert \mathbf{g}\right\vert ^{2}}{4}-\Delta I\text{ and } \Delta 
I=I^{\prime }+I_{\ast }^{\prime }-I-I_{\ast }\text{,} \label{ck1} \end{eqnarray} and \begin{equation*} \left\{ \begin{array}{l} \boldsymbol{\xi }^{\prime }=\dfrac{\boldsymbol{\xi }+\boldsymbol{\xi }_{\ast }}{2}+\dfrac{\sqrt{\left\vert \boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\right\vert ^{2}-\dfrac{4}{m}\Delta I}}{2}\omega =\mathbf{G}+\dfrac{\sqrt{ \left\vert \mathbf{g}\right\vert ^{2}-\dfrac{4}{m}\Delta I}}{2}\omega
\\ \boldsymbol{\xi }_{\ast }^{\prime }=\dfrac{\boldsymbol{\xi }+\boldsymbol{\xi }_{\ast }}{2}-\dfrac{\sqrt{\left\vert \boldsymbol{\xi }-\boldsymbol{\xi } _{\ast }\right\vert ^{2}-\dfrac{4}{m}\Delta I}}{2}\omega =\mathbf{G}-\dfrac{ \sqrt{\left\vert \mathbf{g}\right\vert ^{2}-\dfrac{4}{m}\Delta I}}{2}\omega \end{array} \right. \text{, }\omega \in S^{2}\text{.} \end{equation*}
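The Jacobian used in the second line of $\left( \ref{df1}\right) $ can be verified directly; the following computation is a sketch added for the reader's convenience (it is not part of the original derivation), based on the inverse map $\left\vert \mathbf{g}^{\prime }\right\vert =2\sqrt{RE/m}$, $I^{\prime }=r\left( 1-R\right) E$, $I_{\ast }^{\prime }=\left( 1-r\right) \left( 1-R\right) E$ (with $E=E^{\prime }$, as in $\left( \ref{df1}\right) $):

```latex
% Sketch: verification of the Jacobian factor in (df1).
\begin{align*}
\frac{\partial \left( \left\vert \mathbf{g}^{\prime }\right\vert ,I^{\prime
},I_{\ast }^{\prime }\right) }{\partial \left( R,r,E^{\prime }\right) }
&=\det \begin{pmatrix}
\sqrt{E/(mR)} & 0 & \sqrt{R/(mE)} \\
-rE & (1-R)E & r(1-R) \\
-(1-r)E & -(1-R)E & (1-r)(1-R)
\end{pmatrix}
=\frac{E^{3/2}}{\sqrt{m}}\,\frac{1-R}{\sqrt{R}}\text{,} \\
\left\vert \mathbf{g}^{\prime }\right\vert ^{2}\,d\left\vert \mathbf{g}
^{\prime }\right\vert \,dI^{\prime }\,dI_{\ast }^{\prime }
&=\frac{4RE}{m}\cdot \frac{E^{3/2}}{\sqrt{m}}\,\frac{1-R}{\sqrt{R}}
\,dR\,dr\,dE^{\prime }
=\frac{4}{m^{3/2}}E^{5/2}\left( 1-R\right) R^{1/2}\,dR\,dr\,dE^{\prime }\text{,}
\end{align*}
```

in agreement with the factor appearing in the second line of $\left( \ref{df1}\right) $.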
\begin{remark} \label{RRC}Multiplying the transition probability $\left( \ref{tp}\right) $ by an extra Dirac delta function in $\mathbb{R}$, namely \begin{equation*} \delta _{1}\left( \Delta I\right) \text{,} \end{equation*} yields the case where the molecules are assumed to undergo resonant collisions \cite{BRS-22}. \end{remark}
\subsection{Collision invariants and Maxwellian distributions\label{S2.2}}
The following lemma follows directly from the relations $\left( \ref{rel1}\right) $.
\begin{lemma} \label{L0}The measure \begin{equation*} dA=W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi }\,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dIdI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \end{equation*} is invariant under the interchanges of variables \begin{eqnarray} \left( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\right) &\leftrightarrow &\left( \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right) \text{,} \notag \\ \left( \boldsymbol{\xi },I\right) &\leftrightarrow &\left( \boldsymbol{\xi } _{\ast },I_{\ast }\right) \text{, and} \notag \\ \left( \boldsymbol{\xi }^{\prime },I^{\prime }\right) &\leftrightarrow &\left( \boldsymbol{\xi }_{\ast }^{\prime },I_{\ast }^{\prime }\right) \text{ ,} \label{tr1} \end{eqnarray} respectively. \end{lemma}
The weak form of the collision operator $Q(f,f)$ reads \begin{eqnarray*} \left( Q(f,f),g\right) &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R} _{+}\right) ^{4}}\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{ \left( II_{\ast }\right) ^{\delta /2-1}}\right) g\,dA \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{ f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}} \right) g_{\ast }\,dA \\ &=&-\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}\right) g^{\prime }\,dA \\ &=&-\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}\right) g_{\ast }^{\prime }\,dA, \end{eqnarray*} for $g=g\left( \boldsymbol{\xi },I\right) $, such that the first integral is defined, while the following integrals are obtained by applying Lemma $\ref {L0}$.
We have the following proposition.
\begin{proposition} \label{P1}Let $g=g\left( \boldsymbol{\xi },I\right) $ be such that \begin{equation*} \int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{ f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}} \right) g\,dA \end{equation*} is defined. Then \begin{equation*} \left( Q(f,f),g\right) =\frac{1}{4}\int_{\left( \mathbb{R}^{3}\times \mathbb{ R}_{+}\right) ^{4}}\left( \frac{f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{ff_{\ast }}{ \left( II_{\ast }\right) ^{\delta /2-1}}\right) \left( g+g_{\ast }-g^{\prime }-g_{\ast }^{\prime }\right) \,dA. \end{equation*} \end{proposition}
\begin{definition} A function $g=g\left( \boldsymbol{\xi },I\right) $ is a collision invariant if \begin{equation*} \left( g+g_{\ast }-g^{\prime }-g_{\ast }^{\prime }\right) W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=0\text{ a.e.} \end{equation*} \end{definition}
Then it is clear that $1$, $\xi _{x}$, $\xi _{y}$, $\xi _{z}$, and $m\left\vert \boldsymbol{\xi }\right\vert ^{2}+2I$ are collision invariants, corresponding to conservation of mass, momentum, and total energy, and, in fact, we have the following proposition; cf. Proposition 1 in \cite{BDLP-94}.
\begin{proposition} \label{P2}The vector space of collision invariants is generated by \begin{equation*} \left\{ 1,\xi _{x},\xi _{y},\xi _{z},m\left\vert \boldsymbol{\xi } \right\vert ^{2}+2I\right\} \text{.} \end{equation*} \end{proposition}
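For instance (a routine check, assuming, as in the transition probability $\left( \ref{tp}\right) $, that $W$ contains Dirac deltas enforcing conservation of momentum and of total energy), the invariant $g=m\left\vert \boldsymbol{\xi }\right\vert ^{2}+2I$ satisfies, on the support of $W$,

```latex
% Sketch: on supp W, momentum and total energy are conserved, so
% m(|xi|^2 + |xi_*|^2) = m(|xi'|^2 + |xi_*'|^2) + 2(I' + I_*' - I - I_*).
\begin{equation*}
g+g_{\ast }-g^{\prime }-g_{\ast }^{\prime }
=m\left( \left\vert \boldsymbol{\xi }\right\vert ^{2}+\left\vert \boldsymbol{
\xi }_{\ast }\right\vert ^{2}-\left\vert \boldsymbol{\xi }^{\prime
}\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }^{\prime }\right\vert
^{2}\right) +2\left( I+I_{\ast }-I^{\prime }-I_{\ast }^{\prime }\right) =0
\text{,}
\end{equation*}
```

so that $\left( g+g_{\ast }-g^{\prime }-g_{\ast }^{\prime }\right) W=0$ a.e.; the nontrivial content of Proposition \ref{P2} is that no further linearly independent collision invariants exist.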
Define \begin{equation*} \mathcal{W}\left[ f\right] :=\left( Q(f,f),\log \left( I^{1-\delta /2}f\right) \right) . \end{equation*} It follows by Proposition $\ref{P1}$ that \begin{eqnarray*} &&\mathcal{W}\left[ f\right] \\ &=&-\frac{1}{4}\int\limits_{\left( \mathbb{R}^{3}\times \mathbb{R} _{+}\right) ^{4}}\!\frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}} \left( \frac{\left( II_{\ast }\right) ^{\delta /2-1}f^{\prime }f_{\ast }^{\prime }}{ff_{\ast }\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-1\right) \log \left( \frac{\left( II_{\ast }\right) ^{\delta /2-1}f^{\prime }f_{\ast }^{\prime }}{ff_{\ast }\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}\right) dA\text{.} \end{eqnarray*} Since $\left( x-1\right) \mathrm{log}\left( x\right) \geq 0$ for $x>0$, with equality if and only if $x=1$, \begin{equation*} \mathcal{W}\left[ f\right] \leq 0\text{,} \end{equation*} with equality if and only if \begin{equation} \left( \frac{ff_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}-\frac{ f^{\prime }f_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}\right) W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=0\text{ a.e.,} \label{m1} \end{equation} or, equivalently, if and only if \begin{equation*} Q(f,f)\equiv 0\text{.} \end{equation*}
For any equilibrium, or Maxwellian, distribution $M$, $Q(M,M)\equiv 0$, whence it follows, by relation $\left( \ref{m1}\right) $, that \begin{multline*} \left( \log \frac{M}{I^{\delta /2-1}}+\log \frac{M_{\ast }}{I_{\ast }^{\delta /2-1}}-\log \frac{M^{\prime }}{\left( I^{\prime }\right) ^{\delta /2-1}}-\log \frac{M_{\ast }^{\prime }}{\left( I_{\ast }^{\prime }\right) ^{\delta /2-1}}\right) \\ \times W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=0\text{ a.e.} \end{multline*} Hence, $\log \dfrac{M}{I^{\delta /2-1}}$ is a collision invariant, and the Maxwellian distributions are of the form \begin{equation*} M=\dfrac{nI^{\delta /2-1}m^{3/2}}{\left( 2\pi \right) ^{3/2}T^{\left( \delta +3\right) /2}\Gamma \left( \frac{\delta }{2}\right) }e^{-\left( m\left\vert \boldsymbol{\xi }-\mathbf{u}\right\vert ^{2}+2I\right) /\left( 2T\right) } \text{, } \end{equation*} where $n=\left( M,1\right) $, $\mathbf{u}=\dfrac{1}{n}\left( M,\boldsymbol{\xi }\right) $, and $T=\dfrac{m}{3n}\left( M,\left\vert \boldsymbol{\xi }-\mathbf{u}\right\vert ^{2}\right) $, while $\Gamma =\Gamma (s)$ denotes the Gamma function $\Gamma (s)=\int_{0}^{\infty }x^{s-1}e^{-x}\,dx$.
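As a check of the stated normalization (a sketch, using only the Gaussian integral and the definition of the Gamma function), one indeed recovers $n=\left( M,1\right) $:

```latex
% Sketch: the xi-integral is Gaussian and the I-integral is a Gamma integral.
\begin{equation*}
\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}M\,d\boldsymbol{\xi }\,dI
=\frac{nm^{3/2}}{\left( 2\pi \right) ^{3/2}T^{\left( \delta +3\right)
/2}\Gamma \left( \frac{\delta }{2}\right) }
\underbrace{\int_{\mathbb{R}^{3}}e^{-m\left\vert \boldsymbol{\xi }-\mathbf{u}
\right\vert ^{2}/\left( 2T\right) }\,d\boldsymbol{\xi }}_{=\left( 2\pi
T/m\right) ^{3/2}}\,
\underbrace{\int_{0}^{\infty }I^{\delta /2-1}e^{-I/T}\,dI}_{=T^{\delta
/2}\Gamma \left( \frac{\delta }{2}\right) }=n\text{.}
\end{equation*}
```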
Note that, by equation $\left( \ref{m1}\right) $, any Maxwellian distribution $M$ satisfies the relation \begin{equation} \left( \frac{M^{\prime }M_{\ast }^{\prime }}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}-\frac{MM_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}\right) W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=0\text{ a.e.} \label{M1} \end{equation}
\begin{remark} Introducing the $\mathcal{H}$-functional \begin{equation*} \mathcal{H}\left[ f\right] =\left( f,\log \left( I^{1-\delta /2}f\right) \right) \text{,} \end{equation*} an $\mathcal{H}$-theorem can be obtained, cf. \cite{BDLP-94, BBBD-18, GP-20}. \end{remark}
\subsection{Linearized collision operator\label{S2.3}}
Considering deviations of a Maxwellian distribution $M=\dfrac{I^{\delta /2-1}m^{3/2}}{\left( 2\pi \right) ^{3/2}\Gamma \left( \frac{\delta }{2} \right) }e^{-\frac{m\left\vert \boldsymbol{\xi }\right\vert ^{2}}{2}-I}$ of the form \begin{equation} f=M+M^{1/2}h \label{s1} \end{equation} results, by insertion in the Boltzmann equation $\left( \ref{BE1}\right) $, in the equation \begin{equation} \frac{\partial h}{\partial t}+\left( \boldsymbol{\xi }\cdot \nabla _{\mathbf{ x}}\right) h+\mathcal{L}h=\Gamma \left( h,h\right) \text{,} \label{LBE} \end{equation} where the linearized collision operator $\mathcal{L}$ is given by \ \begin{eqnarray} \mathcal{L}h &=&-\!M^{-1/2}\left( Q(M,M^{1/2}h)+Q(M^{1/2}h,M)\right) \notag \\ &=&\!M^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}} \frac{\left( MM_{\ast }M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}W( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \notag \\ &&\!\times \left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{ h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{ \left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&\upsilon h-K\left( h\right) \text{,} \label{dec2} \end{eqnarray} with \begin{eqnarray} \upsilon &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}\! \frac{M_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}W(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime }, \boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. 
)d \boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\!\text{,} \notag \\ K\left( h\right) &=&M^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R} _{+}\right) ^{3}}\frac{\left( MM_{\ast }M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \notag \\ &&\times \left( \frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}+\frac{ h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}-\frac{h_{\ast }}{M_{\ast }^{1/2}}\right) \,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi } ^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{,} \label{dec1} \end{eqnarray} while the quadratic term $\Gamma $ is given by \begin{equation} \Gamma \left( h,h\right) =M^{-1/2}Q(M^{1/2}h,M^{1/2}h)\text{.} \label{nl1} \end{equation} The following lemma follows immediately by Lemma $\ref{L0}$.
\begin{lemma} \label{L1}The measure \begin{equation*} d\widetilde{A}=\frac{\left( MM_{\ast }M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}dA \end{equation*} is invariant under the interchanges of variables $\left( \ref{tr1}\right) $, respectively. \end{lemma}
The weak form of the linearized collision operator $\mathcal{L}$ reads \begin{eqnarray*} \left( \mathcal{L}h,g\right) &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R} _{+}\right) ^{4}}\left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}- \frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \frac{g}{M^{1/2}}\,d \widetilde{A} \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{ h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \frac{g_{\ast }}{M_{\ast }^{1/2}}\,d \widetilde{A} \\ &=&-\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{ \left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \frac{g^{\prime }}{\left( M^{\prime }\right) ^{1/2}}\,d\widetilde{A} \\ &=&-\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{ \left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \frac{g_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\,d\widetilde{A} \end{eqnarray*} for $g=g\left( \boldsymbol{\xi },I\right) $, such that the first integral is defined, while the following integrals are obtained by applying Lemma $\ref {L1}$. Then we have the following lemma.
\begin{lemma} \label{L2}Let $g=g\left( \boldsymbol{\xi },I\right) $ be such that \begin{equation*} \int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{h}{ M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \frac{g}{M^{1/2}}\,d\widetilde{A} \end{equation*} is defined. Then \begin{eqnarray*} \left( \mathcal{L}h,g\right) &=&\frac{1}{4}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{4}}\left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \\ &&\times \left( \frac{g}{M^{1/2}}+\frac{g_{\ast }}{M_{\ast }^{1/2}}-\frac{ g^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{g_{\ast }^{\prime }}{ \left( M_{\ast }^{\prime }\right) ^{1/2}}\right) d\widetilde{A}. \end{eqnarray*} \end{lemma}
\begin{proposition} \label{Prop1}The linearized collision operator is symmetric and nonnegative, \begin{equation*} \left( \mathcal{L}h,g\right) =\left( h,\mathcal{L}g\right) \text{ and } \left( \mathcal{L}h,h\right) \geq 0\text{,} \end{equation*} and\ the kernel of $\mathcal{L}$, $\ker \mathcal{L}$, is generated by \begin{equation*} \left\{ M^{1/2},\xi _{x}M^{1/2},\xi _{y}M^{1/2},\xi _{z}M^{1/2},\left( m\left\vert \boldsymbol{\xi }\right\vert ^{2}+2I\right) M^{1/2}\right\} \text{.} \end{equation*} \end{proposition}
\begin{proof} By Lemma $\ref{L2}$, it is immediate that $\left( \mathcal{L}h,g\right) =\left( h,\mathcal{L}g\right) $ and $\left( \mathcal{L}h,h\right) \geq 0.$ Furthermore, $h\in \ker \mathcal{L}$ if and only if $\left( \mathcal{L}h,h\right) =0$, which will be fulfilled if and only if \begin{equation*} \left( \frac{h}{M^{1/2}}+\frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=0\text{ a.e.,} \end{equation*} i.e. if and only if $\dfrac{h}{M^{1/2}}$ is a collision invariant. The last part of the proposition now follows by Proposition $\ref{P2}$. \end{proof}
\begin{remark} Note also that the quadratic term is orthogonal to the kernel of $\mathcal{L} $, i.e. $\Gamma \left( h,h\right) \in \left( \ker \mathcal{L}\right) ^{\perp _{\mathcal{\mathfrak{h}}}}$. \end{remark}
\section{Main results\label{S3}}
In this section the main results, concerning compactness properties in Theorem \ref{Thm1} and bounds on collision frequencies in Theorem \ref{Thm2}, are presented. The proofs of the corollaries are essentially the same as the corresponding ones in \cite{Be-21a}, but are included here to keep the paper self-contained.
Assume that for some positive number $\gamma $, such that $0<\gamma <1$, there is a bound \begin{eqnarray} &&\!\!\!\!\!\!\!\!0\leq \sigma \left( \left\vert \mathbf{g}\right\vert ,\cos \theta ,I,I_{\ast },I^{\prime },I_{\ast }^{\prime }\right) \mathbf{1} _{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\leq C\frac{\Psi +\Psi ^{\gamma /2}}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}{E^{\delta -1/2}}\mathbf{1} _{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\text{, } \notag \\ &&\text{ with }E=\frac{m}{4}\left\vert \mathbf{g}\right\vert ^{2}+I+I_{\ast } \text{ and }\Psi =\left\vert \mathbf{g}\right\vert \sqrt{\left\vert \mathbf{g }\right\vert ^{2}-\frac{4}{m}\Delta I}\text{,} \label{est1} \end{eqnarray} on the scattering cross section $\sigma $, or, equivalently, the bound \begin{equation} \!0\leq B\left( \left\vert \mathbf{g}\right\vert ,\cos \theta ,I,I_{\ast },I^{\prime },I_{\ast }^{\prime }\right) \leq CE\left( 1+\frac{1}{\Psi ^{1-\gamma /2}}\right) \mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I} \label{est1a} \end{equation} on the collision kernel $\left( \ref{ck1}\right) $, for some positive constant $C>0$. Then the following result may be obtained.
\begin{theorem} \label{Thm1}Assume that the scattering cross section $\sigma $ satisfies the bound $\left( \ref{est1}\right) $ for some positive number $\gamma $ such that $0<\gamma <1$. Then the operator $K$ given by $\left( \ref{dec1}\right) $ is a self-adjoint compact operator on $L^{2}\left( d\boldsymbol{\xi \,}\mathbf{\,}dI\right) $. \end{theorem}
The proof of Theorem \ref{Thm1} will be addressed in Section $\ref{PT1}$.
\begin{corollary} \label{Cor1}The linearized collision operator $\mathcal{L}$, with scattering cross section satisfying $\left( \ref{est1}\right) $, is a closed, densely defined, self-adjoint operator on $L^{2}\left( d\boldsymbol{\xi \,}\mathbf{\, }dI\right) $. \end{corollary}
\begin{proof} The multiplication operator $\Lambda $, where $\Lambda f=\nu f$, is a closed, densely defined, self-adjoint operator. Hence, by Theorem \ref{Thm1}, $\mathcal{L}=\Lambda -K$ is closed, as the sum of a closed and a bounded operator, densely defined, since the domains of the linear operators $\mathcal{L}$ and $\Lambda $ are equal, $D(\mathcal{L})=D(\Lambda )$, and self-adjoint, since the set of self-adjoint operators is closed under addition of bounded self-adjoint operators, see Theorem 4.3 of Chapter V in \cite{Kato}. \end{proof}
\begin{remark} The collision kernels (cf. Models 1--3 in \cite{GP-20})
1) \begin{equation*} B=bE^{\alpha /2} \end{equation*}
2) \begin{equation*} B=b\left( R^{\alpha /2}\left\vert \mathbf{g}\right\vert ^{\alpha }+\left( 1-R\right) ^{\alpha /2}\left( \frac{I+I_{\ast }}{m}\right) ^{\alpha /2}\right) \leq \frac{2^{\alpha }+1}{m^{\alpha /2}}bE^{\alpha /2} \end{equation*}
3) \begin{eqnarray*} B&=&b\left( R^{\alpha /2}\left\vert \mathbf{g}\right\vert ^{\alpha }+\left( r\left( 1-R\right) \frac{I}{m}\right) ^{\alpha /2}+\left( \left( 1-r\right) \left( 1-R\right) \frac{I_{\ast }}{m}\right) ^{\alpha /2}\right) \\ &\leq& \frac{2^{\alpha }+2}{m^{\alpha /2}}bE^{\alpha /2} \end{eqnarray*} where $\left\{ R=\dfrac{m\left\vert \mathbf{g}^{\prime }\right\vert ^{2}}{4E},r=\dfrac{I^{\prime }}{(1-R)E}\right\} \in \left[ 0,1\right] ^{2}$, satisfy the bound $\left( \ref{est1a}\right) $ for positive numbers $\alpha $ less than or equal to $2$, $0<\alpha \leq 2$, and bounded functions $b=b\left( \cos \theta \right) $. Indeed, $E^{\alpha /2}$ is bounded above by $E$ if $\alpha =2$ or if $E\geq 1$, while for $\alpha \in \left( 0,2\right) $ and $E<1$, $E^{\alpha /2}=\dfrac{E}{E^{1-\alpha /2}}<\dfrac{E}{E^{1-\gamma /2}}\leq \dfrac{E}{\Psi ^{1-\gamma /2}}$ for $\gamma =\dfrac{\alpha }{2}\in \left( 0,1\right) $. \end{remark}
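For Model 2 above, the stated bound can be checked directly (a sketch; recall $E=\dfrac{m}{4}\left\vert \mathbf{g}\right\vert ^{2}+I+I_{\ast }$, so that $\left\vert \mathbf{g}\right\vert ^{2}\leq 4E/m$ and $I+I_{\ast }\leq E$, while $R,1-R\in \left[ 0,1\right] $):

```latex
% Sketch: each term of Model 2 is bounded by a power of E.
\begin{equation*}
R^{\alpha /2}\left\vert \mathbf{g}\right\vert ^{\alpha }\leq \left( \frac{4E
}{m}\right) ^{\alpha /2}=\frac{2^{\alpha }}{m^{\alpha /2}}E^{\alpha /2}
\quad \text{and}\quad
\left( 1-R\right) ^{\alpha /2}\left( \frac{I+I_{\ast }}{m}\right) ^{\alpha
/2}\leq \frac{E^{\alpha /2}}{m^{\alpha /2}}\text{,}
\end{equation*}
```

so that $B\leq \dfrac{2^{\alpha }+1}{m^{\alpha /2}}bE^{\alpha /2}$; Model 3 follows in the same way, with each of its two internal energy terms bounded by $\left( E/m\right) ^{\alpha /2}$, which gives the constant $2^{\alpha }+2$.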
\begin{remark} By slight modifications in the proof of Theorem \ref{Thm1} in Section $\ref {PT1}$\ one may replace the bounds $\left( \ref{est1}\right) $ and $\left( \ref{est1a}\right) $, with \begin{equation*} 0\leq \sigma \mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\leq C\frac{\Psi +\Psi ^{\gamma /2}}{\left\vert \mathbf{g}\right\vert ^{2}} \frac{1+E^{\eta -1}}{E^{\delta -1/2}}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I} \end{equation*} and \begin{equation*} \!0\leq B\leq C\left( E^{\eta }+E\right) \left( 1+\frac{1}{\Psi ^{1-\gamma /2}}\right) \mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I} \text{,} \end{equation*} for any $1<\eta <\dfrac{\delta +1}{2}$, respectively. \end{remark}
Now consider the scattering cross section \begin{equation} \sigma =C\dfrac{\sqrt{\left\vert \mathbf{g}\right\vert ^{2}-\frac{4}{m}\Delta I}}{\left\vert \mathbf{g}\right\vert E^{\delta +\left( \alpha -1\right) /2}}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\text{ if }\left\vert \mathbf{g}\right\vert ^{2}>\frac{4}{m}\Delta I \label{e1} \end{equation} or, equivalently, the collision kernel $\left( \ref{ck1}\right) $ \begin{equation} \!B=CE^{1-\alpha /2}\mathbf{1}_{E>0} \label{e1a} \end{equation} for some positive constant $C>0$ and nonnegative number $\alpha $ less than $2$, $0\leq \alpha <2$; cf. hard sphere models for $\alpha =1$.
In fact, it suffices to assume the bounds, if $\left\vert \mathbf{g}\right\vert ^{2}>\frac{4}{m}\Delta I$, \begin{equation} C_{-}\dfrac{\sqrt{\left\vert \mathbf{g}\right\vert ^{2}-\frac{4}{m}\Delta I}}{\left\vert \mathbf{g}\right\vert E^{\delta +\left( \alpha -1\right) /2}}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\leq \sigma \leq C_{+}\dfrac{\sqrt{\left\vert \mathbf{g}\right\vert ^{2}-\frac{4}{m}\Delta I}}{\left\vert \mathbf{g}\right\vert E^{\delta +\left( \alpha -1\right) /2}}\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1} \label{ie1} \end{equation} on the scattering cross section, or, equivalently, the bounds \begin{equation} \!C_{-}E^{1-\alpha /2}\mathbf{1}_{E>0}\leq B\leq C_{+}E^{1-\alpha /2}\mathbf{1}_{E>0} \label{ie1a} \end{equation} on the collision kernel $\left( \ref{ck1}\right) $, for some positive constants $C_{\pm }>0$ and nonnegative number $\alpha $ less than $2$, $0\leq \alpha <2$; cf. hard potential with cut-off models, with "super hard" potentials for $0\leq \alpha <1$.
\begin{theorem} \label{Thm2} The linearized collision operator $\mathcal{L}$, with scattering cross section $\left( \ref{e1}\right) $ (or $\left( \ref{ie1}\right) $), can be split into a positive multiplication operator $\Lambda $, where $\Lambda f=\nu f$, with $\nu =\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)$, minus a compact operator $K$ on $L^{2}\left( d\boldsymbol{\xi \,}\mathbf{\,}dI\right) $ \begin{equation} \mathcal{L}=\Lambda -K, \label{dec3} \end{equation} such that there exist positive numbers $\nu _{-}$ and $\nu _{+}$, $0<\nu _{-}<\nu _{+}$, such that for any positive number $\varepsilon >0$ \begin{equation} \nu _{-}\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha }\leq \nu (\left\vert \boldsymbol{\xi }\right\vert ,I)\leq \nu _{+}\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha +\varepsilon }\text{ for all }\boldsymbol{\xi }\in \mathbb{R}^{3}\text{.} \label{ine1} \end{equation} \end{theorem}
The decomposition $\left( \ref{dec3}\right) $ follows from the decompositions $\left( \ref{dec2}\right) $ and $\left( \ref{dec1}\right) $ together with Theorem \ref{Thm1}, while the bounds $\left( \ref{ine1}\right) $ on the collision frequency will be proven in Section $\ref{PT2}$.
\begin{corollary} \label{Cor2}The linearized collision operator $\mathcal{L}$, with scattering cross section $\left( \ref{e1}\right) $ (or $\left( \ref{ie1}\right) $), is a Fredholm operator. \end{corollary}
\begin{proof} By Theorem \ref{Thm2} the multiplication operator $\Lambda $ is coercive, and is therefore a Fredholm operator. Furthermore, the set of Fredholm operators is closed under addition of compact operators, see Theorem 5.26 of Chapter IV in \cite{Kato} and its proof, so, by Theorem \ref{Thm2}, $\mathcal{L}$ is a Fredholm operator. \end{proof}
For hard sphere like models and "super hard" potential like models, we obtain the following result.
\begin{corollary} \label{Cor3}For the linearized collision operator $\mathcal{L}$, with scattering cross section $\left( \ref{e1}\right) $ (or $\left( \ref{ie1} \right) $) with $0\leq \alpha \leq 1$, there exists a positive number $ \lambda $, $0<\lambda <1$, such that \begin{equation*} \left( h,\mathcal{L}h\right) \geq \lambda \left( h,\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)h\right) \geq \lambda \nu _{-}\left( h,\left( 1+\left\vert \boldsymbol{\xi }\right\vert \right) h\right) \end{equation*} for all $h\in D\left( \mathcal{L}\right) \cap \mathrm{Im}\mathcal{L}$. \end{corollary}
\begin{proof} Let $h\in D\left( \mathcal{L}\right) \cap \left( \mathrm{ker}\mathcal{L}\right) ^{\perp }=D\left( \mathcal{L}\right) \cap \mathrm{Im}\mathcal{L}$. As a Fredholm operator, $\mathcal{L}$ is closed with a closed range, and as a compact operator, $K$ is bounded, and so there are positive constants $\nu _{0}>0$ and $c_{K}>0$, such that \begin{equation*} (h,\mathcal{L}h)\geq \nu _{0}(h,h)\text{ and }(h,Kh)\leq c_{K}(h,h). \end{equation*} Let $\lambda =\dfrac{\nu _{0}}{\nu _{0}+c_{K}}$. Then the corollary follows, since \begin{eqnarray*} (h,\mathcal{L}h) &=&(1-\lambda )(h,\mathcal{L}h)+\lambda (h,(\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)-K)h) \\ &\geq &(1-\lambda )\nu _{0}(h,h)+\lambda (h,\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)h)-\lambda c_{K}(h,h) \\ &=&(\nu _{0}-\lambda (\nu _{0}+c_{K}))(h,h)+\lambda (h,\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)h) \\ &=&\lambda (h,\nu (\left\vert \boldsymbol{\xi }\right\vert ,I)h)\text{.} \end{eqnarray*} \end{proof}
\begin{remark} For hard sphere like models, as well as "super hard" potential like models, the linearized collision operator satisfies all the properties of the general linear operator in the abstract half-space problem considered in \cite{Be-21}, under the assumption $B=\xi _{x}$ for the linear operator $B$ in \cite{Be-21}, by Proposition \ref{Prop1} and Corollaries \ref{Cor1}-\ref{Cor3}. Indeed, by Proposition \ref{Prop1} and Corollaries \ref{Cor1}-\ref{Cor3}, with $0\leq \alpha \leq 1$, in the expressions $\left( \ref{e1}\right) $ and $\left( \ref{e1a}\right) $, or, the bounds $\left( \ref{ie1}\right) $ and $\left( \ref{ie1a}\right) $, for the scattering cross section and collision kernel, respectively, the linearized collision operator will be a nonnegative self-adjoint Fredholm operator on the real Hilbert space $\mathcal{\mathfrak{h}}=L^{2}\left( d\boldsymbol{\xi \,}\mathbf{\,}dI\right) $, and moreover, there exists a positive number $\mu $, $\mu >0$, such that \begin{equation*} \left( h,\mathcal{L}h\right) \geq \mu \left( h,\left( 1+\left\vert \xi _{x}\right\vert \right) h\right) \text{ for all }h\in D\left( \mathcal{L}\right) \cap \mathrm{Im}\mathcal{L}. \end{equation*} \end{remark}
\section{\label{PT1}Compactness}
This section concerns the proof of Theorem \ref{Thm1}. Note that in the proof the kernels are rewritten in such a way that $\boldsymbol{\xi }_{\ast }$ and $I_{\ast }$ (and not $\boldsymbol{\xi }^{\prime }$ and $I^{\prime }$, or $\boldsymbol{\xi }_{\ast }^{\prime }$ and $I_{\ast }^{\prime }$) always appear as the arguments of the distribution functions. Then there will be essentially two different types of kernels; either $\left( \boldsymbol{\xi }_{\ast },I_{\ast }\right) $ are arguments in the loss term (like $\left( \boldsymbol{\xi },I\right) $) or in the gain term (unlike $\left( \boldsymbol{\xi },I\right) $) of the collision operator. The kernel of the term from the loss part of the collision operator will be shown to be Hilbert-Schmidt in a rather direct way, while the kernels of the terms from the gain part of the collision operator will be shown to be approximately Hilbert-Schmidt, in the sense of Lemma \ref{LGD} below, or, equivalently, uniform limits of Hilbert-Schmidt integral operators. In fact, by arguments similar to those in the proof below, one may show that if the number of internal degrees of freedom $\delta $ is greater than $5/2$, $\delta >5/2$, then, under the assumption $\left( \ref{est1}\right) $, the terms from the gain part are Hilbert-Schmidt integral operators as well.
Denote, for any (nonzero) natural number $N$, \begin{align*} \mathfrak{h}_{N}:=& \left\{ (\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\in \left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}:\left\vert \boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\right\vert \geq \frac{1}{N}\text{; }\left\vert \boldsymbol{\xi }\right\vert \leq N\right\} \text{, and} \\ b^{(N)}=& b^{(N)}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }):=b(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\mathbf{1}_{\mathfrak{h}_{N}}\text{.} \end{align*} Then we have the following lemma, cf. Glassey \cite[Lemma 3.5.1]{Glassey} and Drange \cite{Dr-75}.
\begin{lemma} \label{LGD} Assume that $Tf\left( \boldsymbol{\xi },I\right) =\int_{\mathbb{R }^{3}\times \mathbb{R}_{+}}b(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })f\left( \boldsymbol{\xi }_{\ast },I_{\ast }\right) \,d \boldsymbol{\xi }_{\ast }dI_{\ast }$, with $b(\boldsymbol{\xi },\boldsymbol{ \xi }_{\ast },I,I_{\ast })\geq 0$. Then $T$ is compact on $L^{2}\left( d \boldsymbol{\xi \,}dI\right) $ if
(i) $\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}b(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }\,dI$ is bounded in $\left( \boldsymbol{\xi }_{\ast },I_{\ast }\right) $;
(ii) $b^{(N)}\in L^{2}\left( d\boldsymbol{\xi \,}d\boldsymbol{\xi }_{\ast }dI \boldsymbol{\,}dI_{\ast }\right) $ for any (nonzero) natural number $N$;
(iii) $\sup\limits_{\left( \boldsymbol{\xi },I\right) \in \mathbb{R}^{3}\times \mathbb{R}_{+}}\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}\left( b(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })-b^{(N)}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\right) \,d\boldsymbol{\xi }_{\ast }dI_{\ast }\rightarrow 0$ as $N\rightarrow \infty $. \end{lemma}
Then $T$ is the uniform limit of Hilbert-Schmidt integral operators \cite[Lemma 3.5.1]{Glassey}, and we say that the kernel $b(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })$ is approximately Hilbert-Schmidt, while $T$ is an approximately Hilbert-Schmidt integral operator. Note that, by this definition, a Hilbert-Schmidt integral operator is also an approximately Hilbert-Schmidt integral operator. The reader is referred to \cite[Lemma 3.5.1]{Glassey} for a proof of Lemma \ref{LGD}.
Note that throughout the proof, $C$ will denote a generic positive constant.
\begin{proof} Rewrite expression $\left( \ref{dec1}\right) $ as \begin{eqnarray*} Kh &=&M^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) \\ &&\times \left( \frac{h_{\ast }}{M_{\ast }^{1/2}}-\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}-\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\right) \,d\boldsymbol{\xi }_{\ast }d\boldsymbol{ \xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{,} \end{eqnarray*} with \begin{equation*} w(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )=\frac{\left( MM_{\ast }M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\text{.} \end{equation*} Due to relations $\left( \ref{rel1}\right) $, the relations \begin{eqnarray} w(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&w(\boldsymbol{\xi }_{\ast },\boldsymbol{ \xi },I_{\ast },I\left\vert \boldsymbol{\xi }_{\ast }^{\prime },\boldsymbol{ \xi }^{\prime },I_{\ast }^{\prime },I^{\prime }\right. ) \notag \\ w(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&w(\boldsymbol{\xi }^{\prime },\boldsymbol{ \xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\left\vert \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\right. 
) \notag \\ w(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. ) &=&w(\boldsymbol{\xi },\boldsymbol{\xi } _{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }_{\ast }^{\prime }, \boldsymbol{\xi }^{\prime },I_{\ast }^{\prime },I^{\prime }\right. ) \label{rel2} \end{eqnarray} are satisfied.
By first renaming $\left\{ \boldsymbol{\xi }_{\ast },I_{\ast }\right\} \leftrightarrows \left\{ \boldsymbol{\xi }^{\prime },I^{\prime }\right\} $, then $\left\{ \boldsymbol{\xi }_{\ast },I_{\ast }\right\} \leftrightarrows \left\{ \boldsymbol{\xi }_{\ast }^{\prime },I_{\ast }^{\prime }\right\} $, followed by applying the last relation in $\left( \ref{rel2}\right) $, \begin{eqnarray*} &&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w(\boldsymbol{ \xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi } ^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{ \xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w( \boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. )\,\frac{h_{\ast }^{\prime }}{\left( M_{\ast }^{\prime }\right) ^{1/2}}\,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi } ^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w( \boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast }^{\prime },\boldsymbol{\xi }_{\ast },I_{\ast }^{\prime },I_{\ast }\right. )\,\frac{h_{\ast }}{M_{\ast }^{1/2}}\,d \boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w( \boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. 
)\,\frac{h_{\ast }}{M_{\ast }^{1/2}}\,d \boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{.} \end{eqnarray*} Moreover, by renaming $\left\{ \boldsymbol{\xi }_{\ast },I_{\ast }\right\} \leftrightarrows \left\{ \boldsymbol{\xi }^{\prime },I^{\prime }\right\} $, \begin{eqnarray*} &&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w(\boldsymbol{ \xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi } ^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,\frac{h^{\prime }}{\left( M^{\prime }\right) ^{1/2}}\,d \boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}w( \boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. 
)\,\frac{h_{\ast }}{M_{\ast }^{1/2}}\,d \boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{.} \end{eqnarray*} It follows that \begin{eqnarray} K\left( h\right) &=&\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}k(\boldsymbol{ \xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,h_{\ast }\,d\boldsymbol{\xi } _{\ast }dI_{\ast }\text{, where } \notag \\ k(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) &=&k_{2}( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })-k_{1}(\boldsymbol{ \xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\text{,} \notag \end{eqnarray} with \begin{eqnarray} &&k_{1}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &=&\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{ R}_{+}\right) ^{2}}w(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi }^{\prime }d \boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime }\text{ and } \notag \\ &&k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &=&2\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}w(\boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi } ^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime }\text{.} \label{k1} \end{eqnarray}
Note that \begin{equation*} k(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })=k_{2}(\boldsymbol{ \xi }_{\ast },\boldsymbol{\xi },I_{\ast },I)-k_{1}(\boldsymbol{\xi }_{\ast }, \boldsymbol{\xi },I_{\ast },I)=k(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi } ,I_{\ast },I), \end{equation*} since, by applying the first and the last relation in $\left( \ref{rel2} \right) $, \begin{eqnarray} &&k_{1}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &=&\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{ R}_{+}\right) ^{2}}w(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi },I_{\ast },I\left\vert \boldsymbol{\xi }_{\ast }^{\prime },\boldsymbol{\xi }^{\prime },I_{\ast }^{\prime },I^{\prime }\right. )\,d\boldsymbol{\xi }^{\prime }d \boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{ R}_{+}\right) ^{2}}w(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi },I_{\ast },I\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi }^{\prime }d \boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&k_{1}(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi },I_{\ast },I) \label{sa1} \end{eqnarray} and, by applying the second relation in $\left( \ref{rel2}\right) $ and renaming $\left\{ \boldsymbol{\xi }^{\prime },I^{\prime }\right\} \leftrightarrows \left\{ \boldsymbol{\xi }_{\ast }^{\prime },I_{\ast }^{\prime }\right\} $, \begin{eqnarray} &&k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &=&2\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}w(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi } _{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\left\vert \boldsymbol{\xi }, \boldsymbol{\xi }^{\prime },I,I^{\prime }\right. 
)\,d\boldsymbol{\xi } ^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&2\left( MM_{\ast }\right) ^{-1/2}\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}w(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi } ^{\prime },I_{\ast },I^{\prime }\left\vert \boldsymbol{\xi },\boldsymbol{\xi }_{\ast }^{\prime },I,I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi } ^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&k_{2}(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi },I_{\ast },I)\text{.} \label{sa2} \end{eqnarray}
We now prove compactness separately for the two parts of the collision kernel.
\begin{figure}
\caption{Typical collision of $K_{1}$. Classical representation of an inelastic collision.}
\label{fig1}
\end{figure}
\textbf{I. Compactness of }$K_{1}=\int_{\mathbb{R}^{3}\times \mathbb{R} _{+}}k_{1}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,h_{\ast }\,d\boldsymbol{\xi }_{\ast }dI_{\ast }$.
A change of variables $\left\{ \boldsymbol{\xi }^{\prime },\boldsymbol{\xi } _{\ast }^{\prime }\right\} \rightarrow \left\{ \left\vert \mathbf{g}^{\prime }\right\vert =\left\vert \boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime }\right\vert ,\boldsymbol{\omega }=\dfrac{\mathbf{g}^{\prime }}{ \left\vert \mathbf{g}^{\prime }\right\vert },\mathbf{G}^{\prime }=\dfrac{ \boldsymbol{\xi }^{\prime }+\boldsymbol{\xi }_{\ast }^{\prime }}{2}\right\} $ , cf. Figure $\ref{fig1}$, noting that $\left( \ref{df1}\right) $ and using relation $\left( \ref{M1}\right) $, expression $\left( \ref{k1}\right) $ of $ k_{1}$ may be transformed to \begin{eqnarray*} &&k_{1}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}\frac{\left( M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}W(\boldsymbol{\xi },\boldsymbol{ \xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{ \xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,d \boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }dI^{\prime }dI_{\ast }^{\prime } \\ &=&\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}^{3}\times \mathbb{S}^{2}}\frac{ \left( M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}\left\vert \mathbf{g} ^{\prime }\right\vert ^{2}W(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime },\boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. 
)\, \\ &&d\mathbf{G}^{\prime }d\left\vert \mathbf{g}^{\prime }\right\vert d \boldsymbol{\omega \,}dI^{\prime }dI_{\ast }^{\prime } \\ &=&\left( MM_{\ast }\right) ^{1/4}\left( II_{\ast }\right) ^{\delta /8-1/4}\left\vert \mathbf{g}\right\vert \int_{\mathbb{S}^{2}\times \mathbb{R} _{+}^{2}}\frac{\left( M^{\prime }M_{\ast }^{\prime }\right) ^{1/4}}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /8-1/4}}\sigma \mathbf{1} _{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\,d\boldsymbol{\omega } \,dI^{\prime }dI_{\ast }^{\prime }\text{.} \end{eqnarray*} Since, $E\geq \left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2}$ and $\Psi \leq \left( EE^{\prime }\right) ^{1/2}=E$, it follows, by assumption $\left( \ref{est1}\right) $, that \begin{eqnarray*} &&k_{1}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \\ &\leq &C\left( MM_{\ast }\right) ^{1/2}\frac{\left( II_{\ast }\right) ^{\delta /4-1/2}}{\left\vert \mathbf{g}\right\vert ^{2}} \\ &&\times \left( \int_{\mathbb{S}^{2}\times \mathbb{R}_{+}^{2}}\frac{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}{E^{\delta -1/2}} e^{-I^{\prime }/4}e^{-I_{\ast }^{\prime }/4}\left( \Psi +\Psi ^{\gamma /2}\right) ^{2}\mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\,d\boldsymbol{\omega }\,dI^{\prime }dI_{\ast }^{\prime }\right) ^{2} \\ &\leq &C\frac{\left( MM_{\ast }\right) ^{1/2}}{\left( II_{\ast }\right) ^{\delta /4-1/2}}\frac{\left( II_{\ast }\right) ^{\delta /2-1}}{\left\vert \mathbf{g}\right\vert ^{2}}\left( E+E^{\gamma /2}\right) ^{2}\left( \int_{ \mathbb{S}^{2}}\,d\boldsymbol{\omega }\,\right) ^{2}\left( \int_{0}^{\infty } \frac{e^{-I/4}}{I^{3/4}}\,dI\right) ^{4}\text{.} \end{eqnarray*} Now, noting that \begin{equation*} m\frac{\left\vert \boldsymbol{\xi }\right\vert ^{2}}{2}+m\frac{\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}}{2}+I+I_{\ast }=m\left\vert \mathbf{G}\right\vert ^{2}+m\frac{\left\vert \mathbf{g}\right\vert ^{2}}{4} +I+I_{\ast }=m\left\vert \mathbf{G}\right\vert 
^{2}+E\text{, \ } \end{equation*} the bound \begin{equation} k_{1}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\leq Ce^{-m\left\vert \mathbf{G}\right\vert ^{2}/2-E/2}\frac{\left( II_{\ast }\right) ^{\delta /2-1}}{\left\vert \mathbf{g}\right\vert ^{2}}\left( E+E^{\gamma /2}\right) ^{2} \label{b1} \end{equation} may be obtained. Then, by applying the bound $\left( \ref{b1}\right) $ and first changing variables of integration $\left\{ \boldsymbol{\xi }, \boldsymbol{\xi }_{\ast }\right\} \rightarrow \left\{ \mathbf{g},\mathbf{G} \right\} $, with unitary Jacobian, and then to spherical coordinates, \begin{eqnarray*} &&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}k_{1}^{2}( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })d\boldsymbol{\xi \,}d \boldsymbol{\xi }_{\ast }dI\boldsymbol{\,}dI_{\ast } \\ &\leq &C\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}e^{-m\left\vert \mathbf{G}\right\vert ^{2}/2-E/2}\frac{\left( II_{\ast }\right) ^{\delta /2-1}}{\left\vert \mathbf{g}\right\vert ^{2}}\left( E+E^{\gamma /2}\right) ^{2}d\mathbf{g}\boldsymbol{\,}d\mathbf{G}\boldsymbol{ \,}dI\boldsymbol{\,}dI_{\ast } \\ &=&C\int_{\mathbb{R}_{+}^{3}}e^{-m\left\vert \mathbf{g}\right\vert ^{2}/8}e^{-I/2}e^{-I_{\ast }/2}\left( II_{\ast }\right) ^{\delta /2-1}\left( E+E^{\gamma /2}\right) ^{2}d\left\vert \mathbf{g}\right\vert \boldsymbol{\,} dI\boldsymbol{\,}dI_{\ast } \\ &&\times \int_{0}^{\infty }R^{2}e^{-R^{2}}dR\left( \int_{\mathbb{S}^{2}}\,d \boldsymbol{\omega }\,\right) ^{2} \\ &\leq &C\int_{\mathbb{R}_{+}^{3}}e^{-m\left\vert \mathbf{g}\right\vert ^{2}/8}e^{-I/2}e^{-I_{\ast }/2}\left( II_{\ast }\right) ^{\delta /2-1}\left( 1+E^{2}\right) \,d\left\vert \mathbf{g}\right\vert \boldsymbol{\,}dI \boldsymbol{\,}dI_{\ast } \\ &\leq &C\int_{0}^{\infty }e^{-m\left\vert \mathbf{g}\right\vert ^{2}/8}\left( 1+\frac{m^{2}}{16}\left\vert \mathbf{g}\right\vert ^{4}\right) \,d\left\vert \mathbf{g}\right\vert \boldsymbol{\,}\left( \int_{0}^{\infty }\left( 
1+I\right) ^{2}e^{-I/2}I^{\delta /2-1}\,dI\right) ^{2} \\ &=&C\text{.} \end{eqnarray*} Hence, \begin{equation*} K_{1}=\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}k_{1}(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast })\,h_{\ast }\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \end{equation*} is a Hilbert-Schmidt integral operator and, as such, compact on $L^{2}\left( d \boldsymbol{\xi \,}dI\right) $; see, e.g., Theorem 7.83 in \cite{RenardyRogers}.
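As an illustrative numerical sanity check, not part of the proof, the one-dimensional integrals appearing in the Hilbert-Schmidt bound above can be evaluated with SciPy; the values $\delta =2$ and $m=1$ below are assumed sample parameters, not fixed by the text.

```python
# Sanity check (illustration only): the scalar integrals bounding the
# Hilbert-Schmidt norm of k_1 above are finite. The parameter values
# delta = 2 and m = 1 are assumed samples, not taken from the text.
import math
from scipy.integrate import quad

m, delta = 1.0, 2.0

# int_0^inf e^{-I/4} I^{-3/4} dI = 4^{1/4} Gamma(1/4)
I_int, _ = quad(lambda I: math.exp(-I / 4) * I**(-0.75), 0, math.inf)
I_exact = 4**0.25 * math.gamma(0.25)

# int_0^inf e^{-m g^2/8} (1 + m^2 g^4 / 16) dg  (finite Gaussian moments)
g_int, _ = quad(lambda g: math.exp(-m * g**2 / 8) * (1 + m**2 * g**4 / 16),
                0, math.inf)

# int_0^inf (1 + I)^2 e^{-I/2} I^{delta/2 - 1} dI  (= 26 when delta = 2)
II_int, _ = quad(lambda I: (1 + I)**2 * math.exp(-I / 2) * I**(delta / 2 - 1),
                 0, math.inf)

print(I_int, g_int, II_int)
```

Each integral agrees with its Gamma-function closed form, confirming that the constant $C$ obtained above is indeed finite for this sample choice of parameters.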
\begin{figure}
\caption{Typical collision of $K_{2}$.}
\label{fig2}
\end{figure}
\textbf{II. Compactness of }$K_{2}=\int_{\mathbb{R}^{3}\times \mathbb{R} _{+}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,h_{\ast }\,d\boldsymbol{\xi }_{\ast }dI_{\ast }$.
Denote, cf. Figure $\ref{fig2}$, \begin{eqnarray*} \chi &=&\left( \boldsymbol{\xi }_{\ast }-\boldsymbol{\xi }^{\prime }\right) \cdot \frac{\mathbf{g}}{\left\vert \mathbf{g}\right\vert }\text{, with } \mathbf{g}=\boldsymbol{\xi }-\boldsymbol{\xi }\mathbf{_{\ast }}\text{;} \\ \mathbf{g}^{\prime } &=&\boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime },\text{ }\widetilde{\mathbf{g}}=\boldsymbol{\xi }-\boldsymbol{\xi }^{\prime }\text{, and }\mathbf{g}_{\ast }=\boldsymbol{\xi }_{\ast }- \boldsymbol{\xi }_{\ast }^{\prime }\text{.} \end{eqnarray*}
Note that, see Figure $\ref{fig2}$, \begin{eqnarray*} &&W(\boldsymbol{\xi },\boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. ) \\ &=&4m\left( II^{\prime }\right) ^{\delta /2-1}\widetilde{\sigma }\frac{ \left\vert \widetilde{\mathbf{g}}\right\vert }{\left\vert \mathbf{g}_{\ast }\right\vert }\delta _{1}\left( \frac{m}{2}\left( \left\vert \boldsymbol{\xi }\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}+\left\vert \boldsymbol{\xi }^{\prime }\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }^{\prime }\right\vert ^{2}\right) -\Delta I_{\ast }\right) \\ &&\times \delta _{3}\left( \boldsymbol{\xi }^{\prime }-\boldsymbol{\xi } _{\ast }^{\prime }+\boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\right) \\ &=&4m\left( II^{\prime }\right) ^{\delta /2-1}\widetilde{\sigma }\frac{ \left\vert \widetilde{\mathbf{g}}\right\vert }{\left\vert \mathbf{g}_{\ast }\right\vert }\delta _{1}\left( m\left\vert \mathbf{g}\right\vert \chi -\Delta I_{\ast }\right) \delta _{3}\left( \mathbf{g}^{\prime }+\mathbf{g} \right) \\ &=&4\left( II^{\prime }\right) ^{\delta /2-1}\widetilde{\sigma }\frac{ \left\vert \widetilde{\mathbf{g}}\right\vert }{\left\vert \mathbf{g}_{\ast }\right\vert \left\vert \mathbf{g}\right\vert }\delta _{1}\left( \chi -\frac{ \Delta I_{\ast }}{m\left\vert \mathbf{g}\right\vert }\right) \delta _{3}\left( \mathbf{g}^{\prime }+\mathbf{g}\right) \text{, with} \\ &&\Delta I_{\ast }=I_{\ast }+I_{\ast }^{\prime }-I-I^{\prime }\text{, and } \widetilde{\sigma }=\sigma \left( \left\vert \widetilde{\mathbf{g}} \right\vert ,\frac{\widetilde{\mathbf{g}}\cdot \mathbf{g}_{\ast }}{ \left\vert \widetilde{\mathbf{g}}\right\vert \left\vert \mathbf{g}_{\ast }\right\vert },I,I^{\prime },I_{\ast },I_{\ast }^{\prime }\right) \text{.} \end{eqnarray*}
By a change of variables $\left\{ \boldsymbol{\xi }^{\prime },\boldsymbol{ \xi }_{\ast }^{\prime }\right\} \rightarrow \left\{ \mathbf{g}^{\prime }= \boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }^{\prime },~\mathbf{h}= \boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }\right\} $, where \begin{equation*} d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi }_{\ast }^{\prime }=d\mathbf{g} ^{\prime }d\mathbf{h}=d\mathbf{g}^{\prime }d\chi d\mathbf{w}\text{,\ with } \mathbf{w}=\boldsymbol{\xi }^{\prime }-\boldsymbol{\xi }_{\ast }+\chi \mathbf{n}\text{ and }\mathbf{n}=\frac{\mathbf{g}}{\left\vert \mathbf{g} \right\vert }\text{,} \end{equation*} the expression $\left( \ref{k1}\right) $ of $k_{2}$ may be transformed in the following way \begin{eqnarray*} &&k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \\ &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}}2\frac{ \left( M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}}{\left( II_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}W(\boldsymbol{\xi }, \boldsymbol{\xi }^{\prime },I,I^{\prime }\left\vert \boldsymbol{\xi }_{\ast },\boldsymbol{\xi }_{\ast }^{\prime },I_{\ast },I_{\ast }^{\prime }\right. 
)\,d\mathbf{g}^{\prime }d\mathbf{h\,}dI^{\prime }dI_{\ast }^{\prime } \\ &=&\!\!\left( \frac{I}{I_{\ast }}\right) ^{\delta /4-1/2}\!\!\!\!\int_{\left( \mathbb{R}^{3}\right) ^{\perp _{\mathbf{n} }}\times \mathbb{R}_{+}^{2}}\!\!2\left( M^{\prime }M_{\ast }^{\prime }\right) ^{1/2}\!\left( \frac{I^{\prime }}{I_{\ast }^{\prime }}\right) ^{\delta /4-1/2}\!\!\widetilde{\sigma }\frac{\left\vert \widetilde{\mathbf{g} }\right\vert \mathbf{1}_{\left\vert \widetilde{\mathbf{g}}\right\vert ^{2}>4\Delta I_{\ast }}}{\left\vert \mathbf{g}_{\ast }\right\vert \left\vert \mathbf{g}\right\vert }\,d\mathbf{w}dI^{\prime }dI_{\ast }^{\prime }\text{.} \end{eqnarray*} Here, see Figure $\ref{fig2}$, \begin{equation*} \left\{ \begin{array}{c} \boldsymbol{\xi }^{\prime }=\boldsymbol{\xi }_{\ast }+\mathbf{w}-\chi \mathbf{n} \\ \boldsymbol{\xi }_{\ast }^{\prime }=\boldsymbol{\xi }+\mathbf{w}-\chi \mathbf{n} \end{array} \right. \text{, with }\chi =\frac{\Delta I_{\ast }}{m\left\vert \mathbf{g} \right\vert }\text{,} \end{equation*} implying that \begin{eqnarray*} \frac{\left\vert \boldsymbol{\xi }^{\prime }\right\vert ^{2}}{2}+\frac{ \left\vert \boldsymbol{\xi }_{\ast }^{\prime }\right\vert ^{2}}{2} &=&\left\vert \frac{\boldsymbol{\xi +\xi }_{\ast }}{2}-\chi \mathbf{n}+ \mathbf{w}\right\vert ^{2}+\frac{\left\vert \boldsymbol{\xi }-\boldsymbol{ \xi }_{\ast }\right\vert ^{2}}{4} \\ &=&\left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{ \boldsymbol{n}}}}{2}+\mathbf{w}\right\vert ^{2}+\left( \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\mathbf{n}}}{2}-\chi \right) ^{2}+ \frac{\left\vert \mathbf{g}\right\vert ^{2}}{4} \\ &=&\left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{ \boldsymbol{n}}}}{2}+\mathbf{w}\right\vert ^{2}+\frac{\left( \left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}-\left\vert \boldsymbol{\xi } \right\vert ^{2}+2\chi \left\vert \boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\right\vert \right) ^{2}}{4\left\vert \boldsymbol{\xi 
}-\boldsymbol{\xi } _{\ast }\right\vert ^{2}}+\frac{\left\vert \mathbf{g}\right\vert ^{2}}{4} \text{,\ } \end{eqnarray*} with \begin{eqnarray*} \left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\mathbf{n}} &=&\left( \boldsymbol{\xi +\xi }_{\ast }\right) \cdot \mathbf{n}=\frac{\left\vert \boldsymbol{\xi }\right\vert ^{2}-\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}}{\left\vert \boldsymbol{\xi }-\boldsymbol{\xi }_{\ast }\right\vert }\text{ and} \\ \text{\ }\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{\boldsymbol{n }}} &=&\boldsymbol{\xi +\xi }_{\ast }-\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\mathbf{n}}\mathbf{n}\text{.} \end{eqnarray*} Note that for any number $s$, such that $s\geq -1/2$, \begin{equation} \frac{\left( II_{\ast }\right) ^{\delta /4-1/2}}{\widetilde{E}^{\delta /2+s-1/2}}\leq \frac{I^{\delta /4-1/2-\kappa /2}}{I_{\ast }^{\delta /4+s-\kappa /2}}\text{ for }0\leq \kappa \leq \delta +2s-1\text{,} \label{b5} \end{equation} where $\widetilde{E}=m\left\vert \widetilde{\mathbf{g}}\right\vert ^{2}/4+I+I^{\prime }=m\left\vert \mathbf{g}_{\ast }\right\vert ^{2}/4+I_{\ast }+I_{\ast }^{\prime }$.
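The inequality $\left( \ref{b5}\right) $ uses only that $\widetilde{E}$ dominates both $I$ and $I_{\ast }$ together with the sign conditions on the exponents; a randomized spot-check (illustration only, with the assumed sample values $\delta =2$ and $m=1$) is:

```python
# Illustrative spot-check of the pointwise bound (b5):
#   (I I_*)^{d/4-1/2} / E~^{d/2+s-1/2}  <=  I^{d/4-1/2-k/2} / I_*^{d/4+s-k/2},
# valid whenever E~ >= I, E~ >= I_* and 0 <= kappa <= delta + 2s - 1.
# delta = 2 is an assumed sample value, not fixed by the text.
import random

random.seed(0)
delta = 2.0
ok = True
for _ in range(10_000):
    I = random.uniform(0.01, 10)
    I_star = random.uniform(0.01, 10)
    # E~ dominates both I and I_* by construction, as in the proof
    E = max(I, I_star) + random.uniform(0, 10)
    s = random.uniform(-0.5, delta / 2)
    kappa = random.uniform(0, delta + 2 * s - 1)
    lhs = (I * I_star)**(delta / 4 - 0.5) / E**(delta / 2 + s - 0.5)
    rhs = I**(delta / 4 - 0.5 - kappa / 2) / I_star**(delta / 4 + s - kappa / 2)
    ok = ok and lhs <= rhs * (1 + 1e-9)
print(ok)
```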
By bound $\left( \ref{b5}\right) $ and assumption $\left( \ref{est1}\right) $ , for any numbers $s$ and $\kappa $, such that $\ -1/2\leq s\leq \delta /2$ and $0\leq \kappa \leq \delta +2s-1$, \begin{eqnarray} &&k_{2}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &\leq &\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}\left( \int_{\mathbb{R} _{+}^{2}}\exp \left( -m\frac{\left( \left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}-\left\vert \boldsymbol{\xi }\right\vert ^{2}+2\chi \left\vert \mathbf{g}\right\vert \right) ^{2}}{8\left\vert \mathbf{g} \right\vert ^{2}}-\frac{m}{8}\left\vert \mathbf{g}\right\vert ^{2}\right) \right. \notag \\ &&\times \int_{\left( \mathbb{R}^{3}\right) ^{\perp _{\mathbf{n}}}}\left( 1+ \frac{1}{\widetilde{\Psi }^{1-\gamma /2}}\right) \exp \left( -\frac{m}{2} \left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{ \boldsymbol{n}}}}{2}+\mathbf{w}\right\vert ^{2}\right) d\mathbf{w} \notag \\ &&\left. \,\times \mathbf{\,}e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}\frac{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /4-1/2}}{\widetilde{E}^{\delta /2-s}}dI^{\prime }dI_{\ast }^{\prime }\right) ^{2} \notag \\ &\leq &\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}\left( \int_{\mathbb{R} _{+}^{2}}\exp \left( -\frac{m}{8}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi }\right\vert \cos \varphi +2\chi \right) ^{2}- \frac{m}{8}\left\vert \mathbf{g}\right\vert ^{2}\right) \right. \notag \\ &&\times \left. 
\frac{e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}}{ \left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2-s/2}}dI^{\prime }dI_{\ast }^{\prime }\right) ^{2}\text{, where }\widetilde{\Psi }=\left\vert \widetilde{\mathbf{g}}\right\vert \left\vert \mathbf{g}_{\ast }\right\vert \text{ and }\cos \varphi =\mathbf{n}\cdot \frac{\boldsymbol{\xi }}{ \left\vert \boldsymbol{\xi }\right\vert }\text{,} \label{b2a} \end{eqnarray} since \begin{eqnarray*} &&\int_{\left( \mathbb{R}^{3}\right) ^{\perp _{\mathbf{n}}}}\left( 1+\frac{1 }{\widetilde{\Psi }^{1-\gamma /2}}\right) \exp \left( -\frac{m}{2}\left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{\boldsymbol{n}}} }{2}+\mathbf{w}\right\vert ^{2}\right) d\mathbf{w} \\ &\leq &C\int_{\left( \mathbb{R}^{3}\right) ^{\perp _{\mathbf{n}}}}\left( 1+\left\vert \mathbf{w}\right\vert ^{\gamma -2}\right) \exp \left( -\frac{m}{ 2}\left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{ \boldsymbol{n}}}}{2}+\mathbf{w}\right\vert ^{2}\right) \,d\mathbf{w\,} \\ &\leq &C\left( \int_{\left\vert \mathbf{w}\right\vert \leq 1}1+\left\vert \mathbf{w}\right\vert ^{\gamma -2}d\mathbf{w\,}+2\int_{\left\vert \mathbf{w} \right\vert \geq 1}\exp \left( -\frac{m}{2}\left\vert \frac{\left( \boldsymbol{\xi +\xi }_{\ast }\right) _{\perp _{\boldsymbol{n}}}}{2}+\mathbf{ w}\right\vert ^{2}\right) d\mathbf{w}\right) \\ &\leq &C\left( \int_{\left\vert \mathbf{w}\right\vert \leq 1}1+\left\vert \mathbf{w}\right\vert ^{\gamma -2}\,d\mathbf{w}\,+\int_{\left( \mathbb{R} ^{3}\right) ^{\perp _{\mathbf{n}}}}e^{-\left\vert \widetilde{\mathbf{w}} \right\vert ^{2}}\,d\widetilde{\mathbf{w}}\right) \\ &=&C\left( \int_{0}^{1}r+r^{\gamma -1}\,dr+\int_{0}^{\infty }re^{-r^{2}}\,dr\right) =C\text{.} \end{eqnarray*} For any numbers $s$ and $\kappa $, such that $\ -1/2\leq s\leq \delta /2$ and $0\leq \kappa \leq \delta +2s-1$, by the bound $\left( \ref{b2a}\right) $ on $k_{2}^{2}$ and the Cauchy-Schwarz inequality, \begin{eqnarray} 
&&k_{2}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &\leq &\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}\int_{\mathbb{R}_{+}^{2}} \mathbf{\,}\frac{e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}}{ \left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2-s/2}}dI^{\prime }dI_{\ast }^{\prime } \notag \\ &&\times \int_{\mathbb{R}_{+}^{2}}\exp \left( -\frac{m}{4}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi }\right\vert \cos \varphi +2\chi \right) ^{2}-\frac{m}{4}\left\vert \mathbf{g}\right\vert ^{2}\right) \frac{e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2-s/2}}dI^{\prime }dI_{\ast }^{\prime } \notag \\ &=&\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}\int_{\mathbb{R} _{+}^{2}}\exp \left( -\frac{m}{4}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi }\right\vert \cos \varphi +2\chi \right) ^{2}- \frac{m}{4}\left\vert \mathbf{g}\right\vert ^{2}\right) \notag \\ &&\times \frac{e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2-s/2}}dI^{\prime }dI_{\ast }^{\prime }\text{, with }\cos \varphi =\mathbf{n}\cdot \dfrac{\boldsymbol{ \xi }}{\left\vert \boldsymbol{\xi }\right\vert }, \label{b2} \end{eqnarray} since, \begin{equation} \int_{\mathbb{R}_{+}^{2}}\mathbf{\,}\frac{e^{-\left( I^{\prime }+I_{\ast }^{\prime }\right) /2}}{\left( I^{\prime }I_{\ast }^{\prime }\right) ^{1/2-s/2}}dI^{\prime }dI_{\ast }^{\prime }=\left( \int_{0}^{\infty }\mathbf{ \,}\frac{e^{-I/2}}{I^{1/2-s/2}}dI\right) ^{2}=C\text{,} \label{i1} \end{equation} implying also \begin{equation} k_{2}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\leq \frac{ C}{\left\vert \mathbf{g}\right\vert ^{2}}\frac{I^{\delta /2-1-\kappa }}{ I_{\ast }^{\delta /2+2s-\kappa }}\exp \left( 
-\frac{m}{4}\left\vert \mathbf{g }\right\vert ^{2}\right) \text{.} \label{b2b} \end{equation}
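The constant in $\left( \ref{i1}\right) $ is a Gamma-function value; as an illustration (not part of the proof), one may confirm numerically that $\int_{0}^{\infty }e^{-I/2}I^{(s-1)/2}\,dI=2^{(s+1)/2}\Gamma ((s+1)/2)$ is finite for the values of $s$ used below.

```python
# Illustration: the integral in (i1) is finite and equals a Gamma value,
#   int_0^inf e^{-I/2} I^{(s-1)/2} dI = 2^{(s+1)/2} Gamma((s+1)/2),
# checked here for sample values of s in [-1/2, delta/2].
import math
from scipy.integrate import quad

checks = []
for s in (-0.5, 0.125, 0.75):
    val, _ = quad(lambda I, s=s: math.exp(-I / 2) * I**((s - 1) / 2),
                  0, math.inf)
    closed = 2**((s + 1) / 2) * math.gamma((s + 1) / 2)
    checks.append((s, val, closed))

for s, val, closed in checks:
    print(s, val, closed)
```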
Note that by letting $\kappa =(\delta -1)/2+s$ \begin{equation} \frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}=\left( II_{\ast }\right) ^{-1/2-s}=\left\{ \begin{array}{l} \left( II_{\ast }\right) ^{-5/4}\text{ for }s=3/4 \\ 1\text{ for }s=-1/2 \end{array} \right. \text{,} \label{b3} \end{equation} while by letting $s=1/8$ \begin{equation} \frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}=\frac{ I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+1/4-\kappa }}=\left\{ \begin{array}{l} I_{\ast }^{-5/4}\text{ for }\kappa =\delta /2-1 \\ I^{-5/4}\text{ for }\kappa =\delta /2+1/4 \end{array} \right. \text{.} \label{b4} \end{equation} Moreover, by letting $s=3/4$ \begin{equation} \frac{I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+2s-\kappa }}=\frac{ I^{\delta /2-1-\kappa }}{I_{\ast }^{\delta /2+3/2-\kappa }}=\left\{ \begin{array}{l} I_{\ast }^{-5/2}\text{ for }\kappa =\delta /2-1 \\ I^{-5/2}\text{ for }\kappa =\delta /2+3/2 \end{array} \right. \text{.} \label{b4b} \end{equation}
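The exponent bookkeeping in $\left( \ref{b3}\right) $-$\left( \ref{b4b}\right) $ can also be verified symbolically; the following sketch (illustration only) checks that each stated choice of $s$ and $\kappa $ produces the displayed exponents.

```python
# Symbolic check (illustration) of the exponent identities (b3), (b4), (b4b):
# the quotient is I^{p} / I_*^{q} with p = delta/2 - 1 - kappa and
# q = delta/2 + 2s - kappa; each substitution below must match the text.
import sympy as sp

d, s, k = sp.symbols('delta s kappa')
p = d / 2 - 1 - k        # exponent of I
q = d / 2 + 2 * s - k    # exponent of I_* (in the denominator)

# (b3): kappa = (delta - 1)/2 + s  gives  (I I_*)^{-1/2 - s}
b3_p = sp.simplify(p.subs(k, (d - 1) / 2 + s) - (-sp.Rational(1, 2) - s))
b3_q = sp.simplify(q.subs(k, (d - 1) / 2 + s) - (sp.Rational(1, 2) + s))

# (b4): s = 1/8, kappa = delta/2 - 1  gives  I_*^{-5/4}
b4_p = sp.simplify(p.subs({s: sp.Rational(1, 8), k: d / 2 - 1}))
b4_q = sp.simplify(q.subs({s: sp.Rational(1, 8), k: d / 2 - 1}) - sp.Rational(5, 4))

# (b4b): s = 3/4, kappa = delta/2 - 1  gives  I_*^{-5/2}
b4b_p = sp.simplify(p.subs({s: sp.Rational(3, 4), k: d / 2 - 1}))
b4b_q = sp.simplify(q.subs({s: sp.Rational(3, 4), k: d / 2 - 1}) - sp.Rational(5, 2))

print(b3_p, b3_q, b4_p, b4_q, b4b_p, b4b_q)  # all zero
```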
Hence, by the bound $\left( \ref{b2b}\right) $ on $k_{2}^{2}$, together with expressions $\left( \ref{b3}\right) $ and $\left( \ref{b4}\right) $, \begin{eqnarray} &&k_{2}^{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \notag \\ &\leq &\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\exp \left( -\frac{m}{4 }\left\vert \mathbf{g}\right\vert ^{2}\right) \left( \mathbf{1}_{I\leq 1}+I^{-5/4}\mathbf{1}_{I\geq 1}\right) \left( \mathbf{1}_{I_{\ast }\leq 1}+I_{\ast }^{-5/4}\mathbf{1}_{I_{\ast }\geq 1}\right) \text{.} \label{b3a} \end{eqnarray}
To show that $k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast }) \mathbf{1}_{\mathfrak{h}_{N}}\in L^{2}\left( d\boldsymbol{\xi \,}\,d \boldsymbol{\xi }_{\ast }dI\,dI_{\ast }\right) $ for any (non-zero) natural number $N$, split the integration domain of the integral of $k_{2}^{2}( \boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })$ over $\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}$ into the two domains \begin{equation*} \left\{ \left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}\text{; } \left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi } \right\vert \right\} \text{ and }\left\{ \left( \mathbb{R}^{3}\times \mathbb{ R}_{+}\right) ^{2}\text{; }\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi }\right\vert \right\} . \end{equation*} The integral of $k_{2}^{2}$ over the domain $\left\{ \left( \mathbb{R} ^{3}\times \mathbb{R}_{+}\right) ^{2}\text{; }\left\vert \mathbf{g} \right\vert \geq \left\vert \boldsymbol{\xi }\right\vert \right\} $ is bounded, since, by the bound $\left( \ref{b3a}\right) $, \begin{eqnarray*} &&\int_{0}^{\infty }\int_{0}^{\infty }\int_{\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi }\right\vert }k_{2}^{2}(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }d\boldsymbol{\xi } _{\ast }dI\mathbf{\,}dI_{\ast } \\ &\leq &C\int_{\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{ \xi }\right\vert }\frac{e^{-m\left\vert \mathbf{g}\right\vert ^{2}/4}}{ \left\vert \mathbf{g}\right\vert ^{2}}\mathbf{\,}d\mathbf{g\,}d\boldsymbol{ \xi \,}\left( \int_{0}^{1}dI+\int_{1}^{\infty }I^{-5/4}dI\right) ^{2} \\ &\leq &C\int_{\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{ \xi }\right\vert }\frac{e^{-m\left\vert \mathbf{g}\right\vert ^{2}/4}}{ \left\vert \mathbf{g}\right\vert ^{2}}\mathbf{\,}d\mathbf{g\,}d\boldsymbol{ \xi \,}=C\int_{0}^{\infty }\int_{r}^{\infty }e^{-mR^{2}/4}r^{2}dR\mathbf{\,} dr\boldsymbol{\,} \\ &\leq
&C\int_{0}^{\infty }e^{-mR^{2}/8}dR\int_{0}^{\infty }e^{-mr^{2}/8}r^{2}dr=C\text{.} \end{eqnarray*} Regarding the second domain, consider the truncated domains \begin{equation*} \left\{ \left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{2}\text{; } \left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi } \right\vert \leq N\right\} \end{equation*} for (non-zero) natural numbers $N$. Then, by the bound $\left( \ref{b2} \right) $ on $k_{2}^{2}$, together with expressions $\left( \ref{i1}\right) $, $\left( \ref{b3}\right) $ and $\left( \ref{b4}\right) $, \begin{eqnarray*} \int_{\mathbb{R}_{+}^{2}}\int_{\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi }\right\vert \leq N}k_{2}^{2}(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }d\boldsymbol{\xi } _{\ast }dIdI_{\ast }&\leq& CN^{2}\left( \int_{0}^{1}dI+\int_{1}^{\infty }I^{-5/4}dI\right) ^{2} \\ &=&CN^{2}\text{,} \end{eqnarray*} since \begin{eqnarray*} &&\int_{\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi } \right\vert \leq N}\frac{C}{\left\vert \mathbf{g}\right\vert ^{2}}\exp \left( -\frac{m}{4}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi }\right\vert \cos \varphi +2\chi \right) ^{2}-\frac{m}{4} \left\vert \mathbf{g}\right\vert ^{2}\right) d\boldsymbol{\xi }\mathbf{\,}d \mathbf{g} \\ &=&C\int_{0}^{N}\int_{0}^{r}\int_{0}^{\pi }r^{2}\exp \left( -\frac{m}{4} \left( R+2r\cos \varphi +2\chi _{R}\right) ^{2}-\frac{m}{4}R^{2}\right) \sin \varphi \,d\varphi \mathbf{\,}dR\mathbf{\,}dr \\ &=&C\int_{0}^{N}\int_{0}^{r}\int_{R+2\chi _{R}-2r}^{R+2\chi _{R}+2r}re^{-m\eta ^{2}/4}e^{-mR^{2}/4}d\eta \mathbf{\,}dR\mathbf{\,}dr \\ &\leq &C\int_{0}^{N}r\,dr\int_{0}^{\infty }e^{-mR^{2}/4}dR\int_{-\infty }^{\infty }e^{-m\eta ^{2}/4}d\eta =CN^{2}\text{, with }\chi _{R}=\frac{ \Delta I_{\ast }}{mR}\text{.} \end{eqnarray*}
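The reduction to the iterated integral $\int_{0}^{\infty }\int_{r}^{\infty }e^{-mR^{2}/4}r^{2}\,dR\,dr$ in the estimate over the domain $\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi }\right\vert $ above can be sanity-checked numerically (illustration only, with the assumed value $m=1$):

```python
# Numerical sanity check (illustration only, m = 1 assumed): the iterated
# integral int_0^inf int_r^inf e^{-m R^2/4} r^2 dR dr from the estimate over
# |g| >= |xi| is finite, and dominated by the product of Gaussian integrals
# used to bound it (split e^{-mR^2/4} <= e^{-mR^2/8} e^{-mr^2/8} on R >= r).
import math
from scipy.integrate import dblquad, quad

m = 1.0

# Inner variable R runs over [r, inf); outer variable r over [0, inf).
val, _ = dblquad(lambda R, r: math.exp(-m * R**2 / 4) * r**2,
                 0, math.inf, lambda r: r, lambda r: math.inf)

bound = (quad(lambda R: math.exp(-m * R**2 / 8), 0, math.inf)[0]
         * quad(lambda r: math.exp(-m * r**2 / 8) * r**2, 0, math.inf)[0])

print(val, bound)
```

Exchanging the order of integration gives the closed form $\int_{0}^{\infty }e^{-R^{2}/4}R^{3}/3\,dR=8/3$ for $m=1$, which the computed value matches.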
Furthermore, the integral of $k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })$ with respect to $\left( \boldsymbol{\xi },I\right) $ over $\mathbb{R}^{3}\times \mathbb{R}_{+}$ is bounded in $\left( \boldsymbol{\xi }_{\ast },I_{\ast }\right) $. Indeed, by the bound $\left( \ref{b2b}\right) $ on $k_{2}^{2}$, together with expressions $\left( \ref{b3}\right) $ and $\left( \ref{b4b}\right) $,
\begin{equation*}
k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\leq \frac{C}{\left\vert \mathbf{g}\right\vert }\exp \left( -\frac{m}{8}\left\vert \mathbf{g}\right\vert ^{2}\right) \left( \mathbf{1}_{I_{\ast }\leq 1}+I_{\ast }^{-5/4}\mathbf{1}_{I_{\ast }\geq 1}\right) \text{,}
\end{equation*}
whence the following bound on the integral of $k_{2}$ with respect to $\left( \boldsymbol{\xi }_{\ast },I_{\ast }\right) $ over the domain $\left\{ \mathbb{R}^{3}\times \mathbb{R}_{+}\text{; }\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi }\right\vert \right\} $ can be obtained for $\left\vert \boldsymbol{\xi }\right\vert \neq 0$:
\begin{eqnarray*}
&&\int_{0}^{\infty }\int_{\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi }\right\vert }k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \\
&\leq &\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\int_{\left\vert \mathbf{g}\right\vert \geq \left\vert \boldsymbol{\xi }\right\vert }e^{-m\left\vert \mathbf{g}\right\vert ^{2}/8}d\mathbf{g}\left( \int_{0}^{1}dI_{\ast }+\int_{1}^{\infty }\frac{dI_{\ast }}{I_{\ast }^{5/4}}\right) \\
&\leq &\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\int_{0}^{\infty }e^{-mr^{2}/8}r^{2}dr\int_{\mathbb{S}^{2}}\,d\boldsymbol{\omega }=\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\text{.}
\end{eqnarray*}
Moreover, over the domain $\left\{ \mathbb{R}^{3}\times \mathbb{R}_{+}\text{; }\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi
} \right\vert \right\} $, by the bound $\left( \ref{b2a}\right) $ on $k_{2}^{2} $, and expressions $\left( \ref{i1}\right) $, $\left( \ref{b3}\right) $, $ \left( \ref{b4b}\right) $, \begin{equation*} \int_{0}^{\infty }\int_{\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi }\right\vert }k_{2}(\boldsymbol{\xi },\boldsymbol{\xi } _{\ast },I,I_{\ast })\,d\boldsymbol{\xi }_{\ast }dI_{\ast }\leq \frac{C}{ \left\vert \boldsymbol{\xi }\right\vert }\left( \int_{0}^{1}dI_{\ast }+\int_{1}^{\infty }I_{\ast }^{-5/4}dI_{\ast }\right) =\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\text{,} \end{equation*} since \begin{eqnarray*} &&\int_{\left\vert \mathbf{g}\right\vert \leq \left\vert \boldsymbol{\xi } \right\vert }\frac{C}{\left\vert \mathbf{g}\right\vert }\exp \left( -\frac{m }{8}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi } \right\vert \cos \varphi +2\chi \right) ^{2}-\frac{m}{8}\left\vert \mathbf{g} \right\vert ^{2}\right) \,d\mathbf{g} \\ &=&C\int_{0}^{\left\vert \boldsymbol{\xi }\right\vert }\int_{0}^{\pi }R\exp \left( -\frac{m}{8}\left( R+2\left\vert \boldsymbol{\xi }\right\vert \cos \varphi +2\chi _{R}\right) ^{2}-\frac{m}{8}R^{2}\right) \sin \varphi \,d\varphi \mathbf{\,}dR \\ &=&\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\int_{0}^{\left\vert \boldsymbol{\xi }\right\vert }\int_{R+2\chi _{R}-2\left\vert \boldsymbol{\xi }\right\vert }^{R+2\chi _{R}+2\left\vert \boldsymbol{\xi }\right\vert }Re^{-m\eta ^{2}/8}e^{-mR^{2}/8}d\eta dR \\ &\leq &\frac{C}{\left\vert \boldsymbol{\xi }\right\vert }\int_{0}^{\infty }Re^{-mR^{2}/8}\,dR\int_{-\infty }^{\infty }e^{-m\eta ^{2}/8}d\eta =\frac{C}{ \left\vert \boldsymbol{\xi }\right\vert }\text{, with }\chi _{R}=\frac{ \Delta I_{\ast }}{mR}\text{.} \end{eqnarray*} However, due to the symmetry $k_{2}(\boldsymbol{\xi },\boldsymbol{\xi } _{\ast },I,I_{\ast })=k_{2}(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi } ,I_{\ast },I)$ $\left( \ref{sa2}\right) $, also \begin{equation*} \int_{0}^{\infty 
}\int_{\mathbb{R}^{3}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }\,dI\leq \frac{C}{\left\vert \boldsymbol{\xi }_{\ast }\right\vert }\text{.}
\end{equation*}
Therefore, if $\left\vert \boldsymbol{\xi }_{\ast }\right\vert \geq 1$, then
\begin{equation*}
\int_{0}^{\infty }\int_{\mathbb{R}^{3}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }\,dI\leq \frac{C}{\left\vert \boldsymbol{\xi }_{\ast }\right\vert }\leq C\text{.}
\end{equation*}
Otherwise, if $\left\vert \boldsymbol{\xi }_{\ast }\right\vert \leq 1$, then, by the bound $\left( \ref{b2a}\right) $ on $k_{2}^{2}$, and expressions $\left( \ref{i1}\right) $, $\left( \ref{b3}\right) $, $\left( \ref{b4b}\right) $,
\begin{eqnarray*}
&&\int_{0}^{\infty }\int_{\mathbb{R}^{3}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }\,dI=\int_{0}^{\infty }\int_{\mathbb{R}^{3}}k_{2}(\boldsymbol{\xi }_{\ast },\boldsymbol{\xi },I_{\ast },I)\,d\boldsymbol{\xi }\,dI \\
&\leq &C\left( \int_{0}^{1}dI+\int_{1}^{\infty }I^{-5/4}dI\right) \leq C\text{,}
\end{eqnarray*}
since
\begin{eqnarray*}
&&\int_{\mathbb{R}^{3}}\frac{C}{\left\vert \mathbf{g}\right\vert }\exp \left( -\frac{m}{8}\left( \left\vert \mathbf{g}\right\vert +2\left\vert \boldsymbol{\xi }_{\ast }\right\vert \cos \varphi _{\ast }-2\chi \right) ^{2}-\frac{m}{8}\left\vert \mathbf{g}\right\vert ^{2}\right) \,d\mathbf{g} \\
&=&C\int_{0}^{\infty }\int_{0}^{\pi }R\exp \left( -\frac{m}{8}\left( R+2\left\vert \boldsymbol{\xi }_{\ast }\right\vert \cos \varphi _{\ast }-2\chi _{R}\right) ^{2}-\frac{m}{8}R^{2}\right) \sin \varphi _{\ast }\,d\varphi _{\ast }\,dR \\
&\leq &\frac{C}{\left\vert \boldsymbol{\xi }_{\ast }\right\vert }\int_{0}^{\infty }Re^{-mR^{2}/8}dR\int_{R-2\chi _{R}-2\left\vert \boldsymbol{\xi }_{\ast }\right\vert }^{R-2\chi _{R}+2\left\vert \boldsymbol{\xi }_{\ast }\right\vert }\,dt=C\text{, with }\chi _{R}=\frac{\Delta I_{\ast }}{mR}\text{ and }\\
&&\cos
\varphi _{\ast }=\mathbf{n}\cdot \frac{\boldsymbol{\xi }_{\ast }}{ \left\vert \boldsymbol{\xi }_{\ast }\right\vert }\text{.} \end{eqnarray*}
Furthermore, \begin{eqnarray*} &&\sup_{\left( \boldsymbol{\xi },I\right) \in \mathbb{R}^{3}\times \mathbb{R} _{+}}\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}k_{2}(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast })-k_{2}(\boldsymbol{\xi },\boldsymbol{ \xi }_{\ast },I,I_{\ast })\mathbf{1}_{\mathfrak{h}_{N}}\,d\boldsymbol{\xi } _{\ast }dI_{\ast } \\ &\leq &\sup_{\left( \boldsymbol{\xi },I\right) \in \mathbb{R}^{3}\times \mathbb{R}_{+}}\int_{0}^{\infty }\int_{\left\vert \mathbf{g}\right\vert \leq \frac{1}{N}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d \boldsymbol{\xi }_{\ast }dI_{\ast } \\ &&+\sup_{\left\vert \boldsymbol{\xi }\right\vert \geq N}\int_{0}^{\infty }\int_{\mathbb{R}^{3}}k_{2}(\boldsymbol{\xi },\boldsymbol{\xi }_{\ast },I,I_{\ast })\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \\ &\leq &\int_{\left\vert \mathbf{g}\right\vert \leq \frac{1}{N}}\frac{C}{ \left\vert \mathbf{g}\right\vert }\,d\mathbf{g}\left( \int_{0}^{1}dI_{\ast }+\int_{1}^{\infty }\frac{dI_{\ast }}{I_{\ast }^{5/4}}\right) +\frac{C}{N} \leq C\left( \int_{0}^{\frac{1}{N}}R\,dR+\frac{1}{N}\right) \\ &=&C\left( \frac{1}{N^{2}}+\frac{1}{N}\right) \rightarrow 0\text{ as } N\rightarrow \infty \text{.} \end{eqnarray*}
Hence, by Lemma \ref{LGD}, the operator \begin{equation*} K_{2}=\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}k_{2}(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast })\,h_{\ast }\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \end{equation*} is compact on $L^{2}\left( d\boldsymbol{\xi \,}dI\right) $.
Concluding, the operator $K=K_{2}-K_{1}$ is a compact self-adjoint operator on $L^{2}\left( d\boldsymbol{\xi \,\,}dI\right) $. The self-adjointness is due to the symmetry relations $\left( \ref{sa1}\right) ,\left( \ref{sa2} \right) $, cf. \cite[p.198]{Yoshida-65}. \end{proof}
\section{\label{PT2}Bounds on the collision frequency}
This section concerns the proof of Theorem \ref{Thm2}. Note that throughout the proof, $C$ will denote a generic positive constant.
\begin{proof} Under assumption $\left( \ref{e1}\right) $ the collision frequency $\upsilon $ equals \begin{eqnarray*} \upsilon &=&\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}} \frac{M_{\ast }}{\left( II_{\ast }\right) ^{\delta /2-1}}W(\boldsymbol{\xi }, \boldsymbol{\xi }_{\ast },I,I_{\ast }\left\vert \boldsymbol{\xi }^{\prime }, \boldsymbol{\xi }_{\ast }^{\prime },I^{\prime },I_{\ast }^{\prime }\right. )\,d\boldsymbol{\xi }_{\ast }d\boldsymbol{\xi }^{\prime }d\boldsymbol{\xi } _{\ast }^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&C\int_{\left( \mathbb{R}^{3}\times \mathbb{R}_{+}\right) ^{3}}M_{\ast }\sigma \frac{\left\vert \mathbf{g}\right\vert }{\left\vert \mathbf{g} ^{\prime }\right\vert ^{2}}\delta _{3}\left( \mathbf{G}-\mathbf{G}^{\prime }\right) \widehat{\delta }_{1}\,d\boldsymbol{\xi }_{\ast }d\mathbf{G} ^{\prime }d\mathbf{g}^{\prime }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&C\int_{\mathbb{R}^{3}\times \left( \mathbb{R}_{+}\right) ^{4}}I_{\ast }^{\delta /2-1}e^{-I_{\ast }}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\sigma \left\vert \mathbf{g}\right\vert \widehat{\delta } _{1}\,d\boldsymbol{\xi }_{\ast }d\left\vert \mathbf{g}^{\prime }\right\vert dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\int_{\mathbb{S}^{2}}\,d \boldsymbol{\omega } \\ &=&C\int_{\mathbb{R}^{3}\times \left( \mathbb{R}_{+}\right) ^{4}}e^{-I_{\ast }}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\dfrac{ \left\vert \mathbf{g}^{\prime }\right\vert \left( I_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}{E^{\delta -1/2}}\widehat{\delta } _{1}\,d\boldsymbol{\xi }_{\ast }d\left\vert \mathbf{g}^{\prime }\right\vert dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&C\int_{\mathbb{R}^{3}\times \left( \mathbb{R}_{+}\right) ^{3}}e^{-I_{\ast }}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\dfrac{\sqrt{ E-\left( I^{\prime }+I_{\ast }^{\prime }\right) }}{E^{\delta -1/2}}\left( I_{\ast }I^{\prime }I_{\ast }^{\prime 
}\right) ^{\delta /2-1} \\ &&\times \mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}d \boldsymbol{\xi }_{\ast }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime }\text{,} \end{eqnarray*} with \begin{eqnarray*} \widehat{\delta }_{1} &=&\mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\delta _{1}\left( \sqrt{\left\vert \mathbf{g}\right\vert ^{2}- \frac{4}{m}\Delta I}-\left\vert \mathbf{g}^{\prime }\right\vert \right) \\ &=&\mathbf{1}_{m\left\vert \mathbf{g}\right\vert ^{2}>4\Delta I}\delta _{1}\left( \sqrt{E-\left( I^{\prime }+I_{\ast }^{\prime }\right) } -\left\vert \mathbf{g}^{\prime }\right\vert \right) . \end{eqnarray*} Then \begin{eqnarray*} \upsilon &\geq &C\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}\int_{I^{\prime }+I_{\ast }^{\prime }\leq E/2}e^{-I_{\ast }}e^{-m\left\vert \boldsymbol{\xi } _{\ast }\right\vert ^{2}/2}\dfrac{\sqrt{E-\left( I^{\prime }+I_{\ast }^{\prime }\right) }}{E^{\delta +\left( \alpha -1\right) /2}}\left( I_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}\, \\ &&d\boldsymbol{\xi }_{\ast }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &\geq &C\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}\dfrac{E^{1/2}e^{-m\left \vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}}{E^{\delta +\left( \alpha -1\right) /2}}I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\left( \int_{0}^{E/4}\left( I^{\prime }\right) ^{\delta /2-1}dI^{\prime }\right) ^{2}\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \\ &=&C\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}E^{1-\alpha /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\,d\boldsymbol{\xi }_{\ast }dI_{\ast } \\ &\geq &C\int_{\mathbb{R}^{3}}\left( \left\vert \mathbf{g}\right\vert ^{2}+I\right) ^{1-\alpha /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\,d\boldsymbol{\xi }_{\ast }\int_{0}^{\infty }I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\,dI_{\ast } \\ &\geq &C\int_{\mathbb{R}^{3}}\left( \left\vert \left\vert \boldsymbol{\xi } \right\vert -\left\vert 
\boldsymbol{\xi }_{\ast }\right\vert \right\vert ^{2}+I\right) ^{1-\alpha /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\,d\boldsymbol{\xi }_{\ast }\text{.} \end{eqnarray*}
Consider the two cases $\left\vert \boldsymbol{\xi }\right\vert \geq 1$ and $\left\vert \boldsymbol{\xi }\right\vert \leq 1$ separately. Firstly, if $\left\vert \boldsymbol{\xi }\right\vert \geq 1$, then
\begin{eqnarray*}
\upsilon &\geq &C\int_{\left\vert \boldsymbol{\xi }_{\ast }\right\vert \leq 1/2}\left( \left( \left\vert \boldsymbol{\xi }\right\vert -\left\vert \boldsymbol{\xi }_{\ast }\right\vert \right) ^{2}+I\right) ^{1-\alpha /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\,d\boldsymbol{\xi }_{\ast } \\
&\geq &C\left( \left\vert \boldsymbol{\xi }\right\vert ^{2}+I\right) ^{1-\alpha /2}\int_{\left\vert \boldsymbol{\xi }_{\ast }\right\vert \leq 1/2}e^{-m/8}\,d\boldsymbol{\xi }_{\ast }=C\left( \left\vert \boldsymbol{\xi }\right\vert ^{2}+I\right) ^{1-\alpha /2} \\
&\geq &C\left( \left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha }\geq C\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha }\text{.}
\end{eqnarray*}
Secondly, if $\left\vert \boldsymbol{\xi }\right\vert \leq 1$, then
\begin{eqnarray*}
\upsilon &\geq &C\int_{\left\vert \boldsymbol{\xi }_{\ast }\right\vert \geq 2}\left( \left( \left\vert \boldsymbol{\xi }_{\ast }\right\vert -\left\vert \boldsymbol{\xi }\right\vert \right) ^{2}+I\right) ^{1-\alpha /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\,d\boldsymbol{\xi }_{\ast } \\
&\geq &C\left( 1+I\right) ^{1-\alpha /2}\int_{\left\vert \boldsymbol{\xi }_{\ast }\right\vert \geq 2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\,d\boldsymbol{\xi }_{\ast }=C(1+I)^{1-\alpha /2} \\
&\geq &C\left( 1+\sqrt{I}\right) ^{2-\alpha }\geq C\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha }\text{.}
\end{eqnarray*}
Hence, there is a constant $\upsilon _{-}>0$ such that $\upsilon \geq \upsilon _{-}\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha }$ for all $\left( \boldsymbol{\xi },I\right) \in \mathbb{R}^{3}\times \mathbb{R}_{+}$.
On the other hand, for any positive number $\varepsilon >0$ \begin{eqnarray*} \upsilon &\leq &C\int_{\mathbb{R}^{3}\times \left( \mathbb{R}_{+}\right) ^{3}}e^{-I_{\ast }}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\dfrac{\left( I_{\ast }I^{\prime }I_{\ast }^{\prime }\right) ^{\delta /2-1}}{E^{\delta +\alpha /2-1}}\,d\boldsymbol{\xi }_{\ast }dI_{\ast }dI^{\prime }dI_{\ast }^{\prime } \\ &=&C\int_{\mathbb{R}^{3}}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\int_{0}^{\infty }I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\left( \int_{0}^{1}\dfrac{\left( I^{\prime }\right) ^{\delta /2-1}}{ E^{\delta /2+\alpha /4-1/2}}\,dI^{\prime }\right) ^{2}dI_{\ast }d\boldsymbol{ \xi }_{\ast } \\ &&+2C\int_{\mathbb{R}^{3}}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}\int_{0}^{\infty }I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\int_{1}^{\infty }\dfrac{\left( I_{\ast }^{\prime }\right) ^{\delta /2-1}}{ E^{\left( \delta +\alpha \right) /2+1/4}}dI_{\ast }^{\prime } \\ &&\times \int_{0}^{1}\dfrac{\left( I^{\prime }\right) ^{\delta /2-1}}{ E^{\delta /2-5/4}}\,dI^{\prime }dI_{\ast }d\boldsymbol{\xi }_{\ast } \\ &&+C\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}\!\!\!E^{1+\left( \varepsilon -\alpha \right) /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}I_{\ast }^{\delta /2-1}e^{-I_{\ast }}\left( \int_{1}^{\infty }\dfrac{ \left( I^{\prime }\right) ^{\delta /2-1}}{E^{\delta /2+\varepsilon /4}} dI^{\prime }\right) ^{2}\!d\boldsymbol{\xi }_{\ast }dI_{\ast } \\ &\leq &C\int_{\mathbb{R}^{3}}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}d\boldsymbol{\xi }_{\ast }\int_{0}^{\infty }I_{\ast }^{\delta /2-1}e^{-I_{\ast }}dI_{\ast }\left( \int_{0}^{1}\dfrac{1}{\left( I^{\prime }\right) ^{\alpha /4+1/2}}\,dI^{\prime }\right) ^{2} \\ &&+C\!\int_{\mathbb{R}^{3}}\!e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}d\boldsymbol{\xi }_{\ast }\int_{0}^{\infty }\!\!I_{\ast }^{\delta /2-1}e^{-I_{\ast }}dI_{\ast }\int_{1}^{\infty }\!\!\dfrac{1}{ 
\left( I_{\ast }^{\prime }\right) ^{\alpha /2+5/4}}dI_{\ast }^{\prime }\int_{0}^{1}\!\!\left( I^{\prime }\right) ^{1/4}dI^{\prime } \\ &&+C\int_{\mathbb{R}^{3}\times \mathbb{R}_{+}}\!\!\!E^{1+\left( \varepsilon -\alpha \right) /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}I_{\ast }^{\delta /2-1}e^{-I_{\ast }}d\boldsymbol{\xi }_{\ast }dI_{\ast }\!\left( \int_{1}^{\infty }\dfrac{1}{\left( I_{\ast }^{\prime }\right) ^{1+\varepsilon /4}}dI_{\ast }^{\prime }\right) ^{2} \\ &\leq &C\left( 1+\int_{\mathbb{R}^{3}}\left( 1+m\frac{\left\vert \mathbf{g} \right\vert ^{2}}{4}+I\right) ^{1+\left( \varepsilon -\alpha \right) /2}e^{-m\left\vert \boldsymbol{\xi }_{\ast }\right\vert ^{2}/2}d\boldsymbol{ \xi }_{\ast }\right. \\ &&\times \left. \int_{0}^{\infty }\left( 1+I_{\ast }\right) I_{\ast }^{\delta /2-1}e^{-I_{\ast }}dI_{\ast }\right) \\ &\leq &C\left( 1+\left( 1+\left\vert \boldsymbol{\xi }\right\vert ^{2}+I\right) ^{1+\left( \varepsilon -\alpha \right) /2}\int_{0}^{\infty }\left( 1+r^{2}\right) ^{1+\varepsilon }r^{2}e^{-2r^{2}}dr\right) \\ &\leq &C\left( 1+\left\vert \boldsymbol{\xi }\right\vert +\sqrt{I}\right) ^{2-\alpha +\varepsilon }\text{.} \end{eqnarray*} Hence, there is a positive constant $\upsilon _{+}>0$, such that $\upsilon \leq \upsilon _{+}\!\!\left( 1+\left\vert \boldsymbol{\xi }\right\vert + \sqrt{I}\right) ^{2-\alpha +\varepsilon }$ for all $\boldsymbol{\xi }\in \mathbb{R}^{3}$. \end{proof}
\end{document}
\begin{document}
\title[Characterizations of the Whitney and contact Whitney spheres] {New characterizations of the Whitney spheres and the contact Whitney spheres}
\author{Zejun Hu and Cheng Xing} \address{ School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, People's Republic of China.} \email{[email protected]; [email protected]}
\thanks{2020 {\it Mathematics Subject Classification.} Primary 53C24; Secondary 53C25, 53D12}
\thanks{This project was supported by NSF of China, Grant Number 11771404.}
\date{}
\keywords{Complex space form, Sasakian space form, Whitney sphere, contact Whitney sphere, integral inequality.}
\begin{abstract} In this paper, based on a classical formula of K. Yano, we first establish an optimal integral inequality for compact Lagrangian submanifolds in the complex space forms, which involves the Ricci curvature in the direction $J\vec{H}$ and the norm of the covariant differentiation of the second fundamental form $h$, where $J$ is the almost complex structure and $\vec{H}$ is the mean curvature vector field. Second, and analogously, for compact Legendrian submanifolds in the Sasakian space forms with Sasakian structure $(\varphi,\xi,\eta,g)$, we establish an optimal integral inequality involving the Ricci curvature in the direction $\varphi\vec{H}$ and the norm of the modified covariant differentiation of the second fundamental form. Each integral inequality is optimal in the sense that all submanifolds attaining the equality are completely classified. As direct consequences, we obtain new global characterizations of the Whitney spheres in complex space forms as well as of the contact Whitney spheres in Sasakian space forms. Finally, we show that, just as the Whitney spheres in complex space forms, the contact Whitney spheres in Sasakian space forms are locally conformally flat manifolds with non-constant sectional curvatures. \end{abstract}
\maketitle
\numberwithin{equation}{section}
\section{Introduction}\label{sect:1}
In this paper, we consider compact Lagrangian submanifolds in the $n$-dimensional complex space form $N^n(4c)$ of constant holomorphic sectional curvature $4c$, $c\in\{0,1,-1\}$; and analogously, we also consider compact Legendrian submanifolds in the $(2n+1)$-dimensional Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$ with constant $\varphi$-sectional curvature $\tilde{c}$. As our main achievements, we shall establish an integral inequality for each class of such submanifolds. Then, as direct consequences, we obtain new integral characterizations of the Whitney spheres in the complex space forms and of the contact Whitney spheres in the Sasakian space forms.
Recall that the complex space form $N^n(4c)$ with almost complex structure $J$ and Riemannian metric $g$ is the complex Euclidean space $\mathbb{C}^n$ for $c=0$, the complex projective space $\mathbb{C}P^n(4)$ for $c=1$, and the complex hyperbolic space $\mathbb{C}H^n(-4)$ for $c=-1$. Let $M^n\hookrightarrow N^n(4c)$ be a {\it Lagrangian} immersion of an $n$-dimensional differentiable manifold $M^n$ ($n\ge2$), i.e., $J$ carries each tangent space of $M^n$ into its corresponding normal space. In order to state our first main result, we shall recall the notion of {\it Whitney spheres} in each complex space form.
\vskip3mm\noindent {\it Example 1.1}: {\bf Whitney spheres} in $\mathbb{C}^n$ (cf. \cite{BCM,C1,CU,LV,RU,SS}).
As the most classical notion of {\it Whitney spheres}, these are usually defined as a family of Lagrangian immersions from the unit sphere $\mathbb{S}^n$, centered at the origin $O$ of $\mathbb{R}^{n+1}$, into the complex Euclidean space $\mathbb{C}^n\cong\mathbb{R}^{2n}$, given by $\Psi_{r,B}: \mathbb{S}^n\rightarrow\mathbb{C}^n$ with
\begin{equation}\label{eqn:1.1} \Psi_{r,B}(u_1,\ldots,u_{n+1}) =\tfrac{r}{1+u^2_{n+1}}(u_1,u_1u_{n+1},\ldots,u_n,u_nu_{n+1})+B, \end{equation}
where $r$ is a positive number and $B$ is a vector of $\mathbb{C}^n$. The number $r$ and the vector $B$ are called the radius and the center of the Whitney spheres, respectively. Up to translation and scaling of $\mathbb{C}^n$, all the Whitney spheres are congruent with the standard one corresponding to $r=1$ and $B=O$.
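In particular, the double point of $\Psi_{r,B}$ can be located directly from \eqref{eqn:1.1}: at the poles $(0,\ldots,0,\pm1)$ of $\mathbb{S}^n$ one has $u_1=\cdots=u_n=0$, so that
\begin{equation*}
\Psi_{r,B}(0,\ldots,0,1)=\Psi_{r,B}(0,\ldots,0,-1)=B,
\end{equation*}
i.e., both poles are mapped to the center $B$.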
According to Gromov \cite{G}, the sphere cannot be embedded into $\mathbb{C}^n$ as a Lagrangian submanifold. This fact implies that the Whitney spheres in \eqref{eqn:1.1} have the best possible behavior, because they are embeddings except at the poles of $\mathbb{S}^n$, where they have a double point. Indeed, in a certain sense, the Whitney spheres in $\mathbb{C}^n$ play the role of umbilical hypersurfaces of the Euclidean space $\mathbb{R}^{n+1}$ inside the family of Lagrangian submanifolds and have been characterized in several ways as done for the Euclidean spheres (cf. \cite{RU}).
\vskip3mm\noindent {\it Example 1.2}: {\bf Whitney spheres} in $\mathbb{C}P^n(4)$ (cf. \cite{CMU,CU1,CV,LV}).
In this case, the {\it Whitney spheres} are a one-parameter family of Lagrangian sphere immersions into $\mathbb{C}P^n(4)$, given by $\Psi_\theta:\mathbb{S}^n\rightarrow\mathbb{C}P^n(4)$ for $\theta>0$ with
\begin{equation}\label{eqn:1.2} \Psi_\theta(u_1,\dots,u_{n+1})=\Pi\left(\tfrac{(u_1,\ldots,u_n)}{\cosh \theta+i\sinh\theta u_{n+1}};\tfrac{\sinh\theta\cosh\theta(1+u^2_{n+1}) +iu_{n+1}}{\cosh^2\theta+\sinh^2\theta u^2_{n+1}}\right), \end{equation}
where $\Pi:\mathbb{S}^{2n+1}\rightarrow\mathbb{C}P^n(4)$ is the Hopf projection. We notice that $\Psi_\theta$ are embeddings except at the poles of $\mathbb{S}^n$, where they have a double point, and that $\Psi_0$ is the totally geodesic Lagrangian immersion of $\mathbb{S}^n$ into $\mathbb{C}P^n(4)$.
\vskip3mm\noindent {\it Example 1.3}: {\bf Whitney spheres} in $\mathbb{C}H^n(-4)$ (cf. \cite{CMU,CU1,CV,LV}).
Let $(\cdot,\cdot)$ denote the hermitian form of $\mathbb{C}^{n+1}$, i.e., $(z,w)=\sum\limits_{i=1}^nz_i\bar{w}_i-z_{n+1}\bar{w}_{n+1}$ for $z,w\in\mathbb{C}^{n+1}$, and $\mathbb{H}^{2n+1}_1(-1)=\{z\in\mathbb{C}^{n+1}:\,(z,z)=-1\}$ be the Anti-de Sitter space of constant sectional curvature $-1$.
Then, the {\it Whitney spheres} in $\mathbb{C}H^n(-4)$ are a one-parameter family of Lagrangian sphere immersions into $\mathbb{C}H^n(-4)$, given by $\Phi_\theta:\mathbb{S}^n\rightarrow\mathbb{C}H^n(-4)$ for $\theta>0$ with
\begin{equation}\label{eqn:1.3} \Phi_\theta(u_1,\ldots,u_{n+1})=\Pi\left(\tfrac{(u_1,\ldots,u_n)}{\sinh\theta+i\cosh\theta u_{n+1}}; \tfrac{\sinh\theta\cosh\theta(1+u^2_{n+1})-iu_{n+1}}{\sinh^2\theta+\cosh^2\theta u^2_{n+1}}\right), \end{equation}
where $\Pi:\mathbb{H}^{2n+1}_1(-1)\rightarrow\mathbb{C}H^n(-4)$ is the Hopf projection. We also notice that $\Phi_\theta$ are embeddings except at double points.
\vskip2mm The remarkable properties of the Whitney spheres are summarized as follows:
\begin{theorem}[cf. \cite{BCM,C2,CMU,CU,CU1,CV,LV,RU}]\label{thm:1.1} Let $x:M^n\to N^n(4c)$ be an $n$-dimensional compact Lagrangian submanifold that is neither totally geodesic nor of parallel mean curvature vector field. Then, $x(M^n)$ is the Whitney sphere in $N^n(4c)$ if and only if one of the following pointwise relations holds:
{\rm(1)} The squared mean curvature $|\vec{H}|^2$ and the scalar curvature $R$ of $M^n$ satisfy the relation $|\vec{H}|^2=\tfrac{n+2}{n^2(n-1)}R-\tfrac{n+2}{n}c$;
{\rm(2)} The second fundamental form $h$ and the mean curvature vector field $\vec{H}$ of $M^n$ satisfy $h(X,Y)=\tfrac{n}{n+2}\big[g(X,Y)\vec{H}+g(JX,\vec{H})JY+g(JY,\vec{H})JX\big]$ for $X,Y\in TM^n$;
{\rm(3)} The second fundamental form $h$ and the mean curvature vector field $\vec{H}$ of $M^n$
satisfy $\|\bar\nabla h\|^2=\tfrac{3n^2}{n+2}\|\nabla^\perp\vec{H}\|^2$. Here, $\bar\nabla h$ denotes the covariant differentiation of $h$ with respect to the van der Waerden-Bortolotti connection of $x:M^n\rightarrow N^n(4c)$. \end{theorem}
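We remark that relation (2) directly implies relation (1). Indeed, let $\{e_i\}$ be an orthonormal tangent frame of $M^n$, so that $\{Je_i\}$ is an orthonormal frame of the normal bundle, and put $a_i=g(Je_i,\vec{H})$, with $\sum_{i=1}^na_i^2=|\vec{H}|^2$. Then relation (2) gives
\begin{equation*}
\|h\|^2=\tfrac{n^2}{(n+2)^2}\sum_{i,j=1}^n\big|g(e_i,e_j)\vec{H}+a_iJe_j+a_jJe_i\big|^2
=\tfrac{3n^2}{n+2}|\vec{H}|^2,
\end{equation*}
and substituting this into the Gauss equation $R=n(n-1)c+n^2|\vec{H}|^2-\|h\|^2$ for Lagrangian submanifolds of $N^n(4c)$ yields relation (1).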
Moreover, Castro-Montealegre-Urbano \cite{CMU} and Ros-Urbano \cite{RU} further proved that the Whitney spheres in $N^n(4c)$ can be characterized by several other relations involving global geometric and topological invariants.
\vskip2mm As the first main result of this paper, we obtain an optimal integral inequality involving the Ricci curvature ${\rm Ric}\,(J\vec{H},J\vec{H})$ in the direction $J\vec{H}$ and the norm of the covariant differentiation $\bar{\nabla} h$ of the second fundamental form:
\begin{theorem}\label{thm:1.2} Let $x:M^n\rightarrow N^n(4c)\ (n\ge2)$ be an $n$-dimensional compact Lagrangian submanifold. Then, it holds that
\begin{equation}\label{eqn:1.4} \int_{M^n}{\rm Ric}\,(J\vec{H},J\vec{H})~dV_{M^n}\leq
\tfrac{(n-1)(n+2)}{3n^2}\int_{M^n}\|\bar{\nabla} h\|^2~dV_{M^n}, \end{equation}
where $\|\cdot\|$ and $dV_{M^n}$ denote the tensorial norm and the volume element of $M^n$ with respect to the induced metric, respectively.
Moreover, the equality in \eqref{eqn:1.4} holds if and only if either $x(M^n)$ is of parallel second fundamental form, or it is one of the Whitney spheres in $N^n(4c)$. \end{theorem}
\begin{remark}\label{rm:1.1} The classification of Lagrangian submanifolds with parallel second fundamental form in $N^n(4c)$ has been fulfilled for each $c$, see \cite{DLVW,HY} for details. \end{remark}
From Theorem \ref{thm:1.2}, we get a new and global geometric characterization of the Whitney spheres in $N^n(4c)$:
\begin{corollary}\label{cor:1.1} Let $x:M^n\rightarrow N^n(4c)\ (n\ge2)$ be an $n$-dimensional compact Lagrangian submanifold with non-parallel mean curvature vector field. Then,
\begin{equation}\label{eqn:1.5} \int_{M^n}{\rm Ric}\,(J\vec{H},J\vec{H})~dV_{M^n}=
\tfrac{(n-1)(n+2)}{3n^2}\int_{M^n}\|\bar{\nabla} h\|^2~dV_{M^n} \end{equation}
holds if and only if $x(M^n)$ is a Whitney sphere in $N^n(4c)$. \end{corollary}
\vskip2mm Next, before stating our second main result, we shall first review the standard models of the Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$ with Sasakian structure $(\varphi,\xi,\eta,g)$ possessing constant $\varphi$-sectional curvature $\tilde{c}$; then, for each value of $\tilde{c}$, we introduce the canonical {\it Legendrian} (i.e., {\it the $n$-dimensional $C$-totally real}, or equivalently, {\it integral}) submanifolds: the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$.
\vskip3mm\noindent {\it Example 1.4}: {\bf Contact Whitney spheres} in $\tilde{N}^{2n+1}(-3)=(\mathbb R^{2n+1},\varphi,\xi,\eta,g)$.
Here, for the Cartesian coordinates $(x_1,\ldots,x_n,y_1,\ldots,y_n,z)$ of $\mathbb{R}^{2n+1}$,
\begin{equation*} \left\{ \begin{aligned} &\xi=2\tfrac{\partial}{\partial z}, \ \ \eta=\tfrac12\Big(dz-\sum_{i=1}^ny_idx_i\Big), \ \ g=\eta\otimes\eta+\tfrac14\sum_{i=1}^n(dx_i\otimes dx_i+dy_i\otimes dy_i),\\[-1mm]
&\varphi\Big(\sum_{i=1}^n\big(X_i\tfrac{\partial}{\partial x_i}+Y_i\tfrac{\partial}{\partial y_i}\big)+Z\tfrac{\partial}{\partial z}\Big) =\sum_{i=1}^n\big(Y_i\tfrac{\partial}{\partial x_i}-X_i\tfrac{\partial}{\partial y_i}\big)+\sum_{i=1}^nY_iy_i\tfrac{\partial}{\partial z}, \end{aligned} \right. \end{equation*} define the standard Sasakian structure $(\varphi,\xi,\eta,g)$ on $\mathbb{R}^{2n+1}$.
As introduced by Blair and Carriazo in \cite{BC}, the contact Whitney spheres in $\tilde{N}^{2n+1}(-3)$ are the Legendrian embeddings $\tilde\Psi_{B,a,r}:\mathbb{S}^n\rightarrow\mathbb{R}^{2n+1}$ defined by
\begin{equation}\label{eqn:1.6} \tilde\Psi_{B,a,r}(u_0,u_1,\ldots,u_n)=\tfrac{r}{1+u^2_0}\big(u_0u_1,\ldots,u_0u_n,u_1,\dots, \tfrac{ru_0}{1+u^2_0}+a(1+u^2_0)\big)+B, \end{equation}
where $r$ is a positive number, $a$ is a real constant and $B$ is a vector of $\mathbb{R}^{2n+1}$.
\vskip3mm\noindent {\it Example 1.5}: {\bf Contact Whitney spheres} in $\tilde{N}^{2n+1}(\tilde{c}) =(\mathbb S^{2n+1},\varphi,\xi,\eta,g)$ with $\tilde{c}>-3$.
Note that the unit sphere $\mathbb S^{2n+1}$, as a real hypersurface of the complex Euclidean space $\mathbb C^{n+1}$, has a natural Sasakian structure $(\bar\varphi,\bar\xi,\bar\eta,\bar g)$:
$\bar g$ is the induced metric;
$\bar\xi=JN$, where $J$ is the natural complex structure of $\mathbb C^{n+1}$ and $N$ is the unit normal vector field of the inclusion $\mathbb S^{2n+1}\hookrightarrow\mathbb C^{n+1}$;
$\bar\eta(X)=\bar{g}(X,\bar\xi)$ and $\bar\varphi(X)=JX-\langle JX,N\rangle N$ for any tangent vector field $X$ of $\mathbb S^{2n+1}$, where $\langle\cdot,\cdot\rangle$ denotes the standard Hermitian metric on $\mathbb C^{n+1}$. Then, the standard Sasakian structure $(\varphi,\xi,\eta,g)$ on $\mathbb S^{2n+1}$ is given by applying a $D_a$-homothetic deformation as follows:
\begin{equation*} \eta=a\bar\eta,\ \ \xi=\tfrac1a\bar\xi,\ \ \varphi=\bar\varphi,\ \ g=a\bar g+a(a-1)\bar\eta\otimes\bar\eta, \end{equation*}
where $a$ is a positive real number and $\tilde{c}=\tfrac4a-3$.
Then, as introduced in \cite{HY}, the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$ for $\tilde{c}>-3$ are a family of Legendrian immersions $\tilde\Psi_\theta:\mathbb{S}^n\rightarrow\mathbb{S}^{2n+1}$ for $\theta>0$, which are explicitly given by
\begin{equation}\label{eqn:1.7} \tilde\Psi_{\theta}(u_1,u_2,\ldots,u_{n+1})=\Big(\tfrac{(u_1,\ldots,u_n)}{\cosh\theta+i\sinh \theta u_{n+1}}; \tfrac{\sinh \theta\cosh\theta(1+u^2_{n+1})+iu_{n+1}}{\cosh^2\theta+\sinh^2\theta u^2_{n+1}}\Big). \end{equation}
\vskip3mm\noindent {\it Example 1.6}: {\bf Contact Whitney spheres} in $\tilde{N}^{2n+1}(\tilde{c}) =(\mathbb{B}^n\times\mathbb{R},\varphi,\xi,\eta, g)$ with $\tilde{c}<-3$.
Here, $\mathbb{B}^n=\{(z_1, \ldots, z_n)\in \mathbb{C}^n;\ \|z\|^2=\sum\limits_{i=1}^n|z_i|^2< 1\}$ equipped with the usual complex structure and the canonical Bergman metric
$$
\tilde{g}=4\Big\{\tfrac{1}{1-\|z\|^2}\sum_{i=1}^ndz_id\bar{z}_i
+\tfrac{1}{(1-\|z\|^2)^{2}}\sum_{i,j=1}^nz_i\bar{z}_jdz_jd\bar{z}_i\Big\} $$ is a K\"ahler manifold with constant holomorphic sectional curvature $-1$.
Let $t$ be the coordinate of $\mathbb{R}$ and
$\omega=\frac{2\sqrt{-1}}{1-\|z\|^2}\sum\limits_{j=1}^n(\bar{z}_jdz_j-z_jd\bar{z}_j)$. Then, $\mathbb{B}^n\times\mathbb{R}$ has a Sasakian structure $\{\bar{\varphi}, \bar{\xi},\bar{\eta},\bar{g}\}$ with constant $\bar{\varphi}$-sectional curvature $-4$, defined as follows:
\begin{equation}\nonumber \left\{ \begin{aligned} &\bar{\eta}=\omega+dt, \ \ \bar{\xi}=\tfrac{\partial}{\partial t}, \ \ \bar{g} =\tilde{g}+\bar{\eta}\otimes \bar{\eta},\\
&\bar{\varphi}\,\Big(\sum_{i=1}^{n}(a_i\tfrac{\partial}{\partial z_i} +b_i\tfrac{\partial}{\partial \bar{z}_i})+e\tfrac{\partial}{\partial t}\Big)\\
&=\sqrt{-1}\sum_{i=1}^n(b_i\tfrac{\partial}{\partial z_i}
-a_i\tfrac{\partial}{\partial \bar{z}_i})+\tfrac{2}{1-\|z\|^2}\sum_{i=1}^n (b_i\bar{z}_i+a_iz_i)\tfrac{\partial}{\partial t}. \end{aligned}\right. \end{equation}
Then, $\tilde{N}^{2n+1}(\tilde{c})=(\mathbb{B}^{n}\times\mathbb{R}, \varphi, \xi, \eta, g)$ is given by the $D_a$-homothetic deformation
$$ \eta=a\bar{\eta}, \ \ \xi=\tfrac{1}{a}\bar{\xi}, \ \ g=a\bar{g}+a(a-1)\bar{\eta}\otimes\bar{\eta}, $$ and $\tilde{c}=-\frac{1}{a}-3$, where $a$ is a positive number.
As introduced in \cite{HY}, the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$ for $\tilde{c}<-3$ are a one-parameter family of Legendrian immersions $\tilde\Phi_{\theta}:\mathbb{S}^n\rightarrow\mathbb{B}^n\times\mathbb{R}$ for $\theta>0$, which are explicitly given by
\begin{equation}\label{eqn:1.8} \pi(\tilde\Phi_{\theta}(u_1,u_2,\ldots,u_{n+1}))=\Pi\Big(\tfrac{(u_1,\ldots,u_n)} {\cosh\theta+i\sinh\theta {u}_{n+1}}; \tfrac{\sinh\theta\cosh\theta(1+u^2_{n+1}) -iu_{n+1}}{\cosh^2\theta+\sinh^2\theta u^2_{n+1}}\Big), \end{equation}
where $\pi:\tilde{N}^{2n+1}(\tilde{c})\rightarrow N^n(c)$ with $c=\tilde{c}+3$ is the canonical projection and $\Pi:\mathbb{H}^{2n+1}_1(-1)\rightarrow\mathbb{C}H^n(-4)$ is the Hopf projection.
\vskip2mm According to Proposition 2 of Blair-Carriazo \cite{BC} and Theorem 4.2 of Hu-Yin \cite{HY}, for each contact Whitney sphere $M^n$ in any Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$, the second fundamental form $h$ and the mean curvature vector field $\vec{H}$ satisfy the relation
\begin{equation}\label{eqn:1.9} h(X,Y)=\tfrac{n}{n+2}\big[g(X,Y)\vec{H}+g(\varphi X,\vec{H})\varphi Y+g(\varphi Y,\vec{H})\varphi X\big] \end{equation}
for any tangent vector fields $X,Y\in TM^n$. Without introducing the notion of contact Whitney spheres as canonical examples, Piti\c{s} \cite{P} proved that results for Lagrangian submanifolds of the complex space forms in \cite{CMU,RU} hold analogously for those Legendrian submanifolds of the Sasakian space forms which satisfy \eqref{eqn:1.9}. Moreover, an analogue of the result for Whitney spheres in complex space forms by Li-Vrancken \cite{LV} was established for contact Whitney spheres in Sasakian space forms by Hu-Yin \cite{HY}. It follows that a corresponding version of Theorem \ref{thm:1.1} for contact Whitney spheres in Sasakian space forms is already known.
Next, in a similar spirit, we obtain the second main result of this paper. Actually, we can show that an optimal integral inequality, as in Theorem \ref{thm:1.2}, that involves the Ricci curvature ${\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})$ in the direction $\varphi\vec{H}$ and the norm of the modified covariant differentiation $\bar\nabla^\xi h$ of the second fundamental form also holds for compact Legendrian submanifolds in the Sasakian space forms:
\begin{theorem}\label{thm:1.3} Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$ be an $n$-dimensional compact Legendrian submanifold. Then, it holds that
\begin{equation}\label{eqn:1.10} \int_{M^n}{\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})~dV_{M^n}\leq
\tfrac{(n-1)(n+2)}{3n^2}\int_{M^n}\|\bar\nabla^\xi h\|^2~dV_{M^n}, \end{equation}
where $\bar\nabla^\xi h$ denotes the projection of $\bar\nabla h$ onto $TM^n\oplus\varphi(TM^n)$, $\bar\nabla h$ denotes the covariant differentiation of $h$ with respect to the van der Waerden-Bortolotti connection of $M^n\hookrightarrow\tilde{N}^{2n+1}(\tilde{c})$, and
$\|\cdot\|$ and $dV_{M^n}$ denote the tensorial norm and the volume element of $M^n$ with respect to the induced metric, respectively.
Moreover, the equality in \eqref{eqn:1.10} holds if and only if either $\bar\nabla^\xi h=0$ (i.e. $x(M^n)$ is of $C$-parallel second fundamental form), or $x(M^n)$ is one of the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$. \end{theorem}
\begin{remark}\label{rm:1.2} The classification of Legendrian submanifolds with $C$-parallel second fundamental form in the Sasakian space forms has been completed; for details, see Theorem 4.1 in \cite{HY}. \end{remark}
\vskip1mm From Theorem \ref{thm:1.3}, we get a new and global geometric characterization of the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$:
\begin{corollary}\label{cor:1.2} Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})\ (n\ge2)$ be an $n$-dimensional compact Legendrian submanifold with non-$C$-parallel mean curvature vector field. Then,
\begin{equation}\label{eqn:1.11} \int_{M^n}{\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})~dV_{M^n}=
\tfrac{(n-1)(n+2)}{3n^2}\int_{M^n}\|\bar{\nabla}^\xi h\|^2~dV_{M^n} \end{equation}
holds if and only if $x(M^n)$ is a contact Whitney sphere in $\tilde{N}^{2n+1}(\tilde{c})$. \end{corollary}
\section{Preliminaries}\label{sect:2}
In this section, we first briefly review some of the basic notions about Lagrangian submanifolds in the complex space form $N^n(4c)$ and Legendrian submanifolds in the Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$, respectively. Then, we state a classical formula due to K. Yano that we need in the proof of our theorems.
Let $M^n\hookrightarrow N^n(4c)$ (resp. $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$) be an isometric immersion from an $n$-dimensional Riemannian manifold $M^n$ into the $n$-dimensional complex space form $N^n(4c)$ of constant holomorphic sectional curvature $4c$ (resp. the $(2n+1)$-dimensional Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$ of constant $\varphi$-sectional curvature $\tilde{c}$). For simplicity, we denote by the same notation $g$ the Riemannian metric on $M^n$, $N^n(4c)$ and $\tilde{N}^{2n+1}(\tilde{c})$. Let $\nabla$ (resp. $\bar\nabla$) be the Levi-Civita connection of $M^n$ (resp. $N^n(4c)$ and $\tilde{N}^{2n+1}(\tilde{c})$). Then, for both $M^n\hookrightarrow N^n(4c)$ and $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$, we have the Gauss and Weingarten formulas:
\begin{align}\label{eqn:2.1} \bar\nabla_XY=\nabla_XY+h(X,Y),\ \ \bar\nabla_XV=-A_VX+\nabla_X^\perp V \end{align}
for any tangent vector fields $X,Y\in TM^n$ and normal vector field $V\in T^\perp M^n$. Here, $\nabla^\bot$ denotes the normal connection in the normal bundle $T^\perp M$, $h$ (resp. $A_V$) denotes the second fundamental form (resp. the shape operator with respect to $V$) of $M^n\hookrightarrow N^n(4c)$ (resp. $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$). From \eqref{eqn:2.1}, we have the relation
\begin{equation}\label{eqn:2.2} g(h(X,Y),V)=g(A_VX,Y). \end{equation}
\subsection{Lagrangian submanifolds of the complex space form $N^n(4c)$}\label{sect:2.1}~
The curvature tensor $\bar{R}(X,Y)Z:=\bar{\nabla}_{X}\bar{\nabla}_{Y}Z-\bar{\nabla}_{Y}\bar{\nabla}_{X}Z-\bar{\nabla}_{[X,Y]}Z$ of $N^n(4c)$ has the following expression:
\begin{equation}\label{eqn:2.3} \begin{aligned} \bar{R}(X,Y)Z=c\big[&g(Y,Z)X-g(X,Z)Y\\ &+g(JY,Z)JX-g(JX,Z)JY-2g(JX,Y)JZ\big]. \end{aligned} \end{equation}
Let $M^n\hookrightarrow N^n(4c)$ be a Lagrangian immersion. Then, we have (cf. e.g. \cite{LV})
\begin{align}\label{eqn:2.4} \nabla_X^\perp JY=J\nabla^{\perp}_XY,\quad A_{JX}Y=-Jh(X,Y)=A_{JY}X, \end{align}
and thus $g(h(X,Y),JZ)$ is totally symmetric in $X$, $Y$ and $Z$:
\begin{equation}\label{eqn:2.5} g(h(X,Y),JZ)=g(h(Y,Z),JX)=g(h(Z,X),JY). \end{equation}
We choose a local adapted Lagrangian frame field $\{e_1,\ldots,e_n,e_{1^*},\ldots,e_{n^*}\}$ such that $e_1,\ldots,e_n$ are orthonormal tangent vector fields, and $e_{1^*}=Je_1,\ldots,e_{n^*}=Je_n$ are orthonormal normal vector fields of $M^n\hookrightarrow N^n(4c)$, respectively. In what follows, we shall use the index convention $i^*=n+i,\ \ 1\le i,j,k,\ldots\le n$.
Denote by $\{\omega^1,\ldots,\omega^n\}$ the dual frame of $\{e_1,\ldots,e_n\}$. Let $\omega_i^j$ and $\omega_{i^*}^{j^*}$ denote the connection $1$-forms of $TM^n$ and $T^\perp M^n$, respectively:
$$ \nabla e_i=\sum^n_{j=1}\omega_{i}^je_j, \ \ \nabla^\perp e_{i^*}=\sum_{j=1}^n\omega_{i^*}^{j^*}e_{j^*},\ \ 1\le i\le n, $$
where $\omega_i^j+\omega^i_j=0$ and by \eqref{eqn:2.4} it holds that $\omega_i^j=\omega_{i^*}^{j^*}$.
Put $h^{k^*}_{ij}=g(h(e_i,e_j),Je_k)$. From \eqref{eqn:2.5}, we see that
\begin{equation}\label{eqn:2.6} h^{k^*}_{ij}=h^{j^*}_{ik}=h^{i^*}_{jk},\ \ 1\leq i,j,k\leq n. \end{equation}
Let $R_{ijkl}:=g\big(R(e_i,e_j)e_l,e_k\big)$ and $R_{ijk^*l^*}:=g\big(R^\perp(e_i,e_j)e_{l^*},e_{k^*}\big)$ be the components of the curvature tensors of $\nabla$ and $\nabla^\bot$, respectively. Then the equations of Gauss, Ricci and Codazzi of $M^n\hookrightarrow N^n(4c)$ are given by
\begin{equation}\label{eqn:2.7} R_{ijkl}=c(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})
+\sum_{m=1}^n(h^{m^*}_{ik}h^{m^*}_{jl}-h^{m^*}_{il}h^{m^*}_{jk}), \end{equation}
\begin{equation}\label{eqn:2.8} R_{ijk^*l^*}=c(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}) +\sum_{m=1}^n(h^{m^*}_{ik}h^{m^*}_{jl}-h^{m^*}_{il}h^{m^*}_{jk})=R_{ijkl}, \end{equation}
\begin{gather} h^{l^*}_{ij,k}=h^{l^*}_{ik,j},\label{eqn:2.9} \end{gather}
where $h^{l^*}_{ij,k}$ denotes the components of the covariant differentiation of $h$, namely $\bar{\nabla}h$, defined by
\begin{equation}\label{eqn:2.10} \sum_{l=1}^nh^{l^*}_{ij,k}e_{l^*}:=\nabla^{\perp}_{e_k}\big(h(e_i,e_j)\big) -h(\nabla_{e_k}e_i,e_j)-h(e_i,\nabla_{e_k}e_j). \end{equation}
The mean curvature vector field $\vec{H}$ of $M^n\hookrightarrow N^n(4c)$ is defined by
\begin{equation}\label{eqn:2.11} \vec{H}:=\tfrac1n\sum_{i=1}^nh(e_i,e_i)=:\sum_{j=1}^nH^{j^*}e_{j^*},\ \ H^{j^*}=\tfrac1n\sum_{i=1}^nh^{j^*}_{ii},\ \ 1\le j\le n. \end{equation}
Put $\nabla^\perp_{e_i}\vec{H}=\sum\limits_{j=1}^nH^{j^*}_{,i}e_{j^*}$, $1\le i\le n$. From \eqref{eqn:2.6} and \eqref{eqn:2.9}, we obtain
\begin{equation}\label{eqn:2.12} H^{j^*}_{,i}=H^{i^*}_{,j},\ \ 1\le i,j\le n. \end{equation}
\subsection{Legendrian submanifolds of the Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$}\label{sect:2.2}~
The following facts of this subsection are referred to e.g. \cite{HY}. The curvature tensor of the Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$ is given by
\begin{equation}\label{eqn:2.13} \begin{split} \bar{R}(X,Y)Z=&\tfrac{\tilde{c}+3}{4}[g(Y,Z)X-g(X,Z)Y]+\tfrac{\tilde{c}-1}{4}\big[\eta(X)\eta(Z)Y\\ &-\eta(Y)\eta(Z)X+g(X,Z)\eta(Y)\xi-g(Y,Z)\eta(X)\xi\\ &+g(\varphi Y,Z)\varphi X-g(\varphi X,Z)\varphi Y
+2g(X,\varphi Y)\varphi Z\big]. \end{split} \end{equation}
Moreover, for tangent vector fields $X,Y$ of $\tilde{N}^{2n+1}(\tilde{c})$, the Sasakian structure $(\varphi,\xi,\eta,g)$ of $\tilde{N}^{2n+1}(\tilde{c})$ satisfies:
\begin{equation}\label{eqn:2.14} \left\{ \begin{aligned} &\eta(X)=g(X,\xi),\ \ \varphi\xi=0,\ \ \eta(\varphi X)=0,\\
&\varphi^2X=-X+\eta(X)\xi,\ \ d\eta(X,Y)=g(X,\varphi Y),\\
&g(\varphi X,\varphi Y)=g(X,Y)-\eta(X)\eta(Y),\ \ {\rm rank}\,(\varphi)=2n,\\
&\bar{\nabla}_{X}\xi=-\varphi X,\ \ (\bar{\nabla}_{X}\varphi)Y =g(X,Y)\xi-\eta(Y)X. \end{aligned} \right. \end{equation}
Let $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$ be a Legendrian immersion. Then, we have
\begin{equation}\label{eqn:2.15} A_{\varphi Y}X=-\varphi h(X,Y),\quad \nabla^{\bot}_{X}\varphi Y=\varphi\nabla_XY+g(X,Y)\xi. \end{equation}
In what follows, we adopt the following convention on the ranges of indices:
$$ \alpha^*=\alpha+n;\ \ 1\le i,j,k,l,m\le n; \ \ 1\le \alpha,\beta\le n+1. $$
We choose a local {\it Legendre frame field} $\{e_1,\ldots,e_n,e_{1^*}, \ldots,e_{n^*},e_{2n+1}=\xi\}$ along $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$ such that $\{e_i\}_{i=1}^n$ is an orthonormal frame field of $M^n$, and $\{e_{1^*}=\varphi e_1, \ldots,e_{n^*}=\varphi e_n,e_{2n+1}=\xi\}$ are the orthonormal normal vector fields of $M^n\hookrightarrow\tilde{N}^{2n+1}(\tilde{c})$. Let $\omega_i^j$ and $\omega_{\alpha^*}^{\beta^*}$ denote the connection $1$-forms of $TM^n$ and $T^\perp M^n$, respectively:
$$ \nabla e_i=\sum^n_{j=1}\omega_{i}^je_j, \ \ \nabla^\perp e_{\alpha^*}=\sum_{\beta=1}^{n+1}\omega_{\alpha^*}^{\beta^*}e_{\beta^*},\ \ 1\le i\le n,\ \ 1\le\alpha\le n+1, $$
where $\omega_i^j+\omega_j^i=0$ and $\omega_{\alpha^*}^{\beta^*} +\omega_{\beta^*}^{\alpha^*}=0$. Moreover, by \eqref{eqn:2.15}, we have $\omega_i^j=\omega_{i^*}^{j^*}$ and $\omega_{i^*}^{2n+1}=\omega^i$.
Put $h^{k^*}_{ij}=g(h(e_i,e_j),\varphi e_k)$ and $h^{2n+1}_{ij}=g(h(e_i,e_j),e_{2n+1})$. From \eqref{eqn:2.14} and \eqref{eqn:2.15}, we have
\begin{equation}\label{eqn:2.16} h^{k^*}_{ij}=h^{j^*}_{ik}=h^{i^*}_{jk},\quad h^{2n+1}_{ij}=0,\ \ 1\le i,j,k\le n. \end{equation}
Now, with the same notations as in the preceding subsection, the equations of Gauss, Ricci and Codazzi of $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$ are as follows:
\begin{equation}\label{eqn:2.17} R_{ijkl}=\tfrac{\tilde{c}+3}4(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}) +\sum_{m=1}^n(h_{ik}^{m^*}h_{jl}^{m^*}-h_{il}^{m^*}h_{jk}^{m^*}), \end{equation}
\begin{equation}\label{eqn:2.18} R_{ijk^*l^*}=\tfrac{\tilde{c}-1}4(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk}) +\sum_{m=1}^n(h^{m^*}_{ik}h^{m^*}_{jl}-h^{m^*}_{il}h^{m^*}_{jk}), \ \ R_{ijk^*(2n+1)}=0, \end{equation}
\begin{equation}\label{eqn:2.19} h^{\alpha^*}_{ij,k}=h^{\alpha^*}_{ik,j}, \end{equation}
where as usual $h^{\alpha^*}_{ij,k}$ is defined by
\begin{equation}\label{eqn:2.20} \sum_{\alpha=1}^{n+1}h^{\alpha^*}_{ij,k}e_{\alpha^*}:=\nabla_{e_k}^\perp\big(h(e_i,e_j)\big) -h(\nabla_{e_k}e_i,e_j)-h(e_i,\nabla_{e_k}e_j),\ \ 1\le i,j,k\le n. \end{equation}
Moreover, associated to $\nabla,\,\nabla^\perp$ and $\bar\nabla$, we can naturally define a modified covariant differentiation $\bar\nabla^\xi h$ of the second fundamental form by
\begin{equation}\label{eqn:2.21} (\bar\nabla^{\xi}_Xh)(Y,Z):=\nabla_X^\perp(h(Y,Z))-h(\nabla_XY,Z) -h(Y,\nabla_XZ)-g(h(Y,Z),\varphi X)\xi. \end{equation}
Recall that the second fundamental form $h$ of $M^n\hookrightarrow \tilde{N}^{2n+1}(\tilde{c})$ is said to be $C$-parallel if and only if $\bar\nabla^\xi h=0$ (cf. \cite{HY}). Actually, we have $g((\bar\nabla^{\xi}_Xh)(Y,Z),\xi)=0$ for any $X,Y,Z\in TM^n$. Thus, we can denote
\begin{equation}\label{eqn:2.22} (\bar\nabla^{\xi}_{e_k}h)(e_i,e_j):=\sum_{l=1}^n\bar{h}^{l^*}_{ij,k}e_{l^*},\ \ 1\le i,j,k\le n. \end{equation}
Then, by \eqref{eqn:2.20}, \eqref{eqn:2.21} and the above discussions, we have
\begin{equation}\label{eqn:2.23} h_{ij,k}^{(n+1)^*}=h_{ij}^{k^*},\ \ h^{l^*}_{ij,k}=\bar{h}^{l^*}_{ij,k},\ \ \forall\, i,j,k,l. \end{equation}
From \eqref{eqn:2.16}, the mean curvature vector $\vec{H}$ of $M^n\hookrightarrow\tilde{N}^{2n+1}(\tilde{c})$ becomes:
\begin{equation}\label{eqn:2.24} \vec{H}=\tfrac1n\sum_{i=1}^nh(e_i,e_i)=\sum_{k=1}^nH^{k^*}e_{k^*},\ \ H^{k^*}:=\tfrac1n\sum_{i=1}^nh^{k^*}_{ii},\ \ 1\le k\le n. \end{equation}
Put
$$ \nabla^\perp_{e_i} \vec{H}=\sum_{\alpha=1}^{n+1} H^{\alpha^*}_{,i}e_{\alpha^*},\ \ \bar\nabla^\xi_{e_i}\vec{H}:=\nabla^\perp_{e_i}\vec{H}-g(\vec{H},e_{i^*})\xi =:\sum_{k=1}^n\bar{H}^{k^*}_{,i}e_{k^*},\ \ 1\le i\le n. $$
From \eqref{eqn:2.16}, \eqref{eqn:2.19} and \eqref{eqn:2.23}, we get
\begin{equation}\label{eqn:2.25} H^{j^*}_{,i}=H^{i^*}_{,j},\quad \bar{H}^{j^*}_{,i}=H^{j^*}_{,i},\ \ 1\le i,j\le n. \end{equation}
\subsection{Yano's formula}\label{sect:2.3}~
In order to prove Theorem \ref{thm:1.2} and Theorem \ref{thm:1.3}, we still need the following useful formula due to K. Yano \cite{Y}; a simple proof can also be found in \cite{HMVY}.
\begin{lemma}[cf. Lemma 5.1 of \cite{HMVY}]\label{lem:2.1} Let $(M,g)$ be a Riemannian manifold with Levi-Civita connection $\nabla$. Then, for any tangent vector field $X$ on $M$, it holds that
\begin{equation}\label{eqn:2.26} \begin{aligned}
{\rm div}(\nabla_XX-({\rm div}X)X)={\rm Ric}\,(X,X)+\tfrac{1}{2}\|\mathcal{L}_Xg\|^2
-\|\nabla X\|^2-({\rm div}X)^2, \end{aligned} \end{equation}
where $\mathcal{L}_Xg$ is the Lie derivative of $g$ with respect to $X$ and $\|\cdot\|$ denotes the length with respect to $g$. \end{lemma}
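For the reader's convenience, we outline how formula \eqref{eqn:2.26} can be verified (this is only an indicative sketch; see \cite{HMVY} for a complete proof). In index notation, using the Ricci identity $\nabla_i\nabla_jX^i-\nabla_j\nabla_iX^i={\rm Ric}_{jk}X^k$, one has
\begin{align*}
{\rm div}(\nabla_XX)&=\nabla_i(X^j\nabla_jX^i)=(\nabla_iX^j)(\nabla_jX^i)+X({\rm div}\,X)+{\rm Ric}\,(X,X),\\
{\rm div}\big(({\rm div}\,X)X\big)&=X({\rm div}\,X)+({\rm div}\,X)^2,\\
\tfrac12\|\mathcal{L}_Xg\|^2&=\|\nabla X\|^2+(\nabla_iX^j)(\nabla_jX^i).
\end{align*}
Subtracting the second identity from the first and eliminating the cross term $(\nabla_iX^j)(\nabla_jX^i)$ by means of the third yields \eqref{eqn:2.26}.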
\section{Proof of Theorem \ref{thm:1.2}}\label{sect:3}
First of all, we state the following simple fact without proof.
\begin{lemma}\label{lem:3.1} Let $x:M^n\rightarrow N^n(4c)$ be an $n$-dimensional Lagrangian submanifold with mean curvature vector field $\vec{H}$. Then, it holds that
\begin{equation}\label{eqn:3.1} \begin{aligned}
\|\nabla J\vec{H}\|^2\ge\tfrac1n({\rm div}J\vec{H})^2. \end{aligned} \end{equation}
Moreover, the equality in \eqref{eqn:3.1} holds if and only if $\nabla J\vec{H}=\frac1n({\rm div}\,J\vec{H})\,{\rm id}$, i.e., $J\vec{H}$ is a conformal vector field on $M^n$, or equivalently, $M^n$ is a Lagrangian submanifold with conformal Maslov form. \end{lemma}
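Although a formal proof of Lemma \ref{lem:3.1} is omitted, we remark that \eqref{eqn:3.1} is a pointwise Cauchy-Schwarz inequality: since ${\rm div}\,J\vec{H}={\rm tr}\,(\nabla J\vec{H})=\langle\nabla J\vec{H},{\rm id}\rangle$ and $\|{\rm id}\|^2=n$, one has
\begin{equation*}
({\rm div}\,J\vec{H})^2=\langle\nabla J\vec{H},{\rm id}\rangle^2\le n\,\|\nabla J\vec{H}\|^2,
\end{equation*}
with equality at a point if and only if $\nabla J\vec{H}$ is proportional to the identity there, that is, $\nabla J\vec{H}=\tfrac1n({\rm div}\,J\vec{H})\,{\rm id}$.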
We also need the following result due to Li-Vrancken \cite{LV}:
\begin{lemma}[cf. Lemma 3.2 in \cite{LV}]\label{lem:3.2} Let $x:M^n\rightarrow N^n(4c)$ be an $n$-dimensional Lagrangian submanifold with mean curvature vector field $\vec{H}$. Then, it holds that
\begin{equation}\label{eqn:3.2}
\|\bar\nabla h\|^2\ge\tfrac{3n^2}{n+2}\|\nabla^\perp\vec{H}\|^2. \end{equation}
Moreover, the equality in \eqref{eqn:3.2} holds if and only if
\begin{equation}\label{eqn:3.3} (\bar\nabla_Z h)(X,Y)=\tfrac{n}{n+2}\big[g(Y,Z)\nabla_X^\perp\vec{H}+g(X,Z)\nabla_Y^\perp\vec{H}+g(X,Y)\nabla_Z^\perp\vec{H}\big]. \end{equation}
\end{lemma}
\vskip1mm Now, we are ready to complete the proof of Theorem \ref{thm:1.2}.
\begin{proof}[Proof of Theorem \ref{thm:1.2}] Let $M^n\hookrightarrow N^n(4c)$ be a compact Lagrangian submanifold and $\{e_1,\ldots,e_n,e_{1^*},\ldots,e_{n^*}\}$ be a local adapted Lagrangian frame field along $M^n$. From \eqref{eqn:2.4} and the fact that $\nabla^\perp_{e_i}\vec{H}=\sum\limits_{j=1}^nH^{j^*}_{,i}e_{j^*}$, we have
\begin{equation}\label{eqn:3.4} \begin{aligned}
\|\nabla J\vec{H}\|^2=\|\nabla^{\perp}\vec{H}\|^2=\sum_{i,j=1}^n(H^{j^*}_{,i})^2. \end{aligned} \end{equation}
Then, by \eqref{eqn:2.12}, calculating the squared length of the Lie derivative $\mathcal{L}_{J\vec{H}}g$ of $g$ with respect to $J\vec{H}$, we obtain
\begin{equation}\label{eqn:3.5} \begin{aligned}
\|\mathcal{L}_{J\vec{H}}g\|^2&=\sum_{i,j=1}^n\big[(\mathcal{L}_{J\vec{H}}g)(e_i,e_j)\big]^2
=\sum_{i,j=1}^n\big(H^{j^*}_{,i}+H^{i^*}_{,j}\big)^2=4\|\nabla^{\perp}\vec{H}\|^2. \end{aligned} \end{equation}
Thus, we can apply Lemma \ref{lem:2.1} and \eqref{eqn:3.1} to obtain that
\begin{equation}\label{eqn:3.6} \begin{aligned} {\rm div}(\nabla_{J\vec{H}}J\vec{H}-({\rm div}J\vec{H})J\vec{H})
&={\rm Ric}\,(J\vec{H},J\vec{H})+\|\nabla J\vec{H}\|^2-({\rm div}J\vec{H})^2\\
&\ge{\rm Ric}\,(J\vec{H},J\vec{H})-(n-1)\|\nabla^{\perp}\vec{H}\|^2, \end{aligned} \end{equation}
where the equality in \eqref{eqn:3.6} holds if and only if $\nabla J\vec{H}=\frac1n({\rm div}\,J\vec{H})\,{\rm id}$, or equivalently, $M^n$ is a Lagrangian submanifold with conformal Maslov form.
From \eqref{eqn:3.6}, by further applying Lemma \ref{lem:3.2}, we get
\begin{equation}\label{eqn:3.7} {\rm div}(\nabla_{J\vec{H}}J\vec{H}-({\rm div}J\vec{H})J\vec{H})
\ge{\rm Ric}\,(J\vec{H},J\vec{H})-\tfrac{(n-1)(n+2)}{3n^2}\|\bar\nabla h\|^2, \end{equation}
where the equality holds if and only if both $\nabla J\vec{H}=\frac1n({\rm div}\,J\vec{H})\,{\rm id}$ and \eqref{eqn:3.3} hold.
By the compactness of $M^n$, we can integrate the inequality \eqref{eqn:3.7} over $M^n$. Then, applying the divergence theorem, we obtain the integral inequality \eqref{eqn:1.4}.
It is easily seen that the equality holds in \eqref{eqn:1.4} if and only if the equality in \eqref{eqn:3.2} holds identically. Thus, according to the Main Theorem in \cite{LV}, equality in \eqref{eqn:1.4} holds if and only if either $x(M^n)$ is of parallel second fundamental form, or $x(M^n)$ is one of the Whitney spheres in $N^n(4c)$.
This completes the proof of Theorem \ref{thm:1.2}. \end{proof}
\section{Proof of Theorem \ref{thm:1.3}}\label{sect:4}
Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$ be an $n$-dimensional Legendrian submanifold in the Sasakian space form $\tilde{N}^{2n+1}(\tilde{c})$ with Sasakian structure $(\varphi,\xi,\eta,g)$. First of all, similar to Lemma \ref{lem:3.1}, we have the following simple result.
\begin{lemma}\label{lem:4.1} Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$ be an $n$-dimensional Legendrian submanifold with mean curvature vector field $\vec{H}$. Then, it holds that
\begin{equation}\label{eqn:4.1} \begin{aligned}
\|\nabla(\varphi\vec{H})\|^2\ge\tfrac1n({\rm div}\,\varphi\vec{H})^2. \end{aligned} \end{equation}
Moreover, the equality in \eqref{eqn:4.1} holds if and only if $\nabla(\varphi\vec{H})=\frac1n({\rm div}\,\varphi\vec{H})\,{\rm id}$, i.e., $\varphi\vec{H}$ is a conformal vector field on $M^n$. \end{lemma}
We also need the following result:
\begin{lemma}[cf. Lemma 3.3 in \cite{HY}]\label{lem:4.2} Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$ be an $n$-dimensional Legendrian submanifold with second fundamental form $h$ and mean curvature vector field $\vec{H}$. Then, it holds that
\begin{equation}\label{eqn:4.2}
\|\bar\nabla^{\xi} h\|^2\geq \tfrac{3n^2}{n+2}\|\bar\nabla^{\xi}\vec{H}\|^2, \end{equation}
where, with respect to a local Legendre frame field $\{e_A\}_{A=1}^{2n+1}$,
$$
\|\bar\nabla^{\xi} h \|^2=\sum_{i,j,k,l=1}^n(h^{l^*}_{ij,k})^2,\ \
\|\bar\nabla^{\xi}\vec{H}\|^2=\sum_{i,j=1}^n(H^{j^*}_{,i})^2. $$
Moreover, the equality in \eqref{eqn:4.2} holds if and only if
\begin{equation}\label{eqn:4.3} h^{l^*}_{ij,k}=\tfrac{n}{n+2}\big(H^{l^*}_{,i}\delta_{jk}+H^{l^*}_{,j}\delta_{ik} +H^{l^*}_{,k}\delta_{ij}\big),\ \ 1\le i,j,k,l\le n. \end{equation} \end{lemma}
\vskip2mm Now, we are ready to complete the proof of Theorem \ref{thm:1.3}.
\begin{proof}[Proof of Theorem \ref{thm:1.3}]
Let $x:M^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$ be a compact $n$-dimensional Legendrian submanifold and $\{e_1,\ldots,e_n,e_{1^*},\ldots,e_{n^*},e_{2n+1}=\xi\}$ be a local adapted Legendre frame field along $M^n$. By definition, we have
\begin{equation}\label{eqn:4.4} \begin{aligned}
\|\nabla(\varphi\vec{H})\|^2=\sum_{i,j=1}^n(g(\nabla_{e_i}(\varphi\vec{H}),e_j))^2
=\sum_{i,j=1}^n(\bar{H}^{j^*}_{,i})^2=\|\bar\nabla^\xi\vec{H}\|^2. \end{aligned} \end{equation}
Then, by \eqref{eqn:2.25}, calculating the squared length of the Lie derivative $\mathcal{L}_{\varphi\vec{H}}g$ of $g$ with respect to $\varphi\vec{H}$, we obtain
\begin{equation}\label{eqn:4.5} \begin{aligned}
\|\mathcal{L}_{\varphi\vec{H}}g\|^2 =\sum_{i,j=1}^n\big[(\mathcal{L}_{\varphi\vec{H}}g)(e_i,e_j)\big]^2
=\sum_{i,j=1}^n\big(H^{j^*}_{,i}+H^{i^*}_{,j}\big)^2=4\|\nabla(\varphi\vec{H})\|^2. \end{aligned} \end{equation}
Thus, we can apply Lemma \ref{lem:2.1} and \eqref{eqn:4.1} to obtain that
\begin{equation}\label{eqn:4.6} \begin{aligned} {\rm div}(\nabla_{\varphi\vec{H}}(\varphi\vec{H})-({\rm div}\,\varphi\vec{H})\varphi\vec{H})
&={\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})+\|\nabla (\varphi\vec{H})\|^2-({\rm div}\varphi\vec{H})^2\\
&\ge{\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})-(n-1)\|\nabla(\varphi\vec{H})\|^2, \end{aligned} \end{equation}
where the equality in \eqref{eqn:4.6} holds if and only if $\nabla(\varphi\vec{H})=\frac1n({\rm div}\,\varphi\vec{H})\,{\rm id}$.
From \eqref{eqn:4.6} and that $\|\nabla(\varphi\vec{H})\|^2=\|\bar\nabla^\xi\vec{H}\|^2$, by further applying Lemma \ref{lem:4.2}, we get
\begin{equation}\label{eqn:4.7} {\rm div}(\nabla_{\varphi\vec{H}}(\varphi\vec{H})-({\rm div}\,\varphi\vec{H})\varphi\vec{H})
\ge{\rm Ric}\,(\varphi\vec{H},\varphi\vec{H})-\tfrac{(n-1)(n+2)}{3n^2}\|\bar\nabla^\xi h\|^2, \end{equation}
where the equality holds if and only if both $\nabla(\varphi\vec{H})=\frac1n({\rm div}\,\varphi\vec{H})\,{\rm id}$ and \eqref{eqn:4.3} hold.
By the compactness of $M^n$, we can integrate the inequality \eqref{eqn:4.7} over $M^n$. Then, applying the divergence theorem, we obtain the integral inequality \eqref{eqn:1.10}.
It is easily seen from the above arguments that the equality in \eqref{eqn:1.10} holds if and only if the equality in \eqref{eqn:4.2} holds identically. Thus, according to Theorem 1.1 in \cite{HY}, equality in \eqref{eqn:1.10} holds if and only if either $x(M^n)$ is of $C$-parallel second fundamental form, or $x(M^n)$ is one of the contact Whitney spheres in $\tilde{N}^{2n+1}(\tilde{c})$.
This completes the proof of Theorem \ref{thm:1.3}. \end{proof}
\vskip2mm As final remarks, we mention that all the Whitney spheres in the complex space forms are conformally equivalent to the round sphere (cf. \cite{RU} and \cite{CMU}). Now, for any one of the contact Whitney spheres, $x:\mathbb{S}^n\rightarrow\tilde{N}^{2n+1}(\tilde{c})$, its second fundamental form $h$ has the expression \eqref{eqn:1.9}. Thus, by using the Gauss equation and direct calculations, we can immediately obtain the following
\begin{theorem} The sectional curvatures of the contact Whitney spheres are not constant. Nevertheless, all the contact Whitney spheres in each of the Sasakian space forms are conformally equivalent to the round sphere. \end{theorem}
\end{document}
Hubble parameter data from Cosmic Chronometers. Has time dilation been taken into account?
The expansion history of the Universe can be found by measuring the differential age evolution of cosmic chronometers. This yields a measurement of the Hubble parameter H(z) as a function of redshift.
see: Jimenez & Loeb (2002); Moresco et al. (2018).
The basic idea of the Cosmic Chronometer method is that the Hubble parameter is related to the differential redshift-time relation as $H(z) = \frac{-1}{1+z}\times\frac{dz}{dt}\tag1$
Equation (1) is derived from
$H(z)= \frac{\dot a}{a}$,
$a=\frac{1}{1+z}$ (so that $\frac{da}{dz}=-\frac{1}{(1+z)^2}$), and the chain rule
$\frac{da}{dt}=\frac{da}{dz} \times \frac{dz}{dt}.$
Then the quantities $dz$ and $dt$ need to be measured for a passively evolving system such as a group of stars; $dz$ is found from spectroscopic measurements.
For $dt$, the 'cosmic chronometer' is a direct spectroscopic observable (the 4000 Å break) known to be linearly related to the age of the stellar population,
and a plot of $H(z)$ against $z$ can be obtained like the one above.
"The solid line and the dashed regions...show the fiducial flat LCDM cosmology... $H_0$ = 67.8 km/s/Mpc, $\Omega_m$ = 0.308. For comparison an Einstein-de Sitter model is shown..."
• Moresco et al DOI:10.1088/1475-7516/2016/05/014
The method doesn't need absolute ages of stars, but the difference in ages of the stars.
The question is this:
Since the actual age of a star and the difference in ages of a pair of stars are both subject to time dilation, has this been properly accounted for in the method?
For example, with easy numbers.
If time dilation caused a 10.2 million year old star at a redshift of $z$ to appear to be 5.1 million years old, and a 10 million year old star further away at redshift $z+dz$ to appear to be 5 million years old, the $dt$ would be measured as 0.1 million years, but in reality it should be 0.2 million years.
It doesn't seem to be taken into account, or is it already there in equation (1)?
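For reference, here is a quick numerical check (my own, for a flat LCDM model with the parameter values above) that equation (1) is self-consistent when $t$ is cosmic proper time: build the age-redshift relation, finite-difference it, and compare $-\frac{1}{1+z}\frac{dz}{dt}$ with $H(z)$.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.8 / 978.0  # Hubble constant: 67.8 km/s/Mpc converted to 1/Gyr
Om = 0.308         # matter density parameter

def H(z):
    """Hubble parameter H(z) in 1/Gyr for a flat LCDM model."""
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def age(z):
    """Age of the universe at redshift z, in Gyr."""
    return quad(lambda zp: 1.0 / ((1 + zp) * H(zp)), z, np.inf)[0]

z, eps = 1.0, 1e-4
dz_dt = 2 * eps / (age(z + eps) - age(z - eps))  # negative: z decreases as t increases
H_est = -dz_dt / (1 + z)                         # right-hand side of equation (1)
print(H_est, H(z))                               # the two values should agree
```

Of course, this only tests the internal consistency of (1) within the model; it doesn't by itself settle the time-dilation question above.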
John Hunter
The method, in summary, is to observe two galaxies that are assumed to have formed at the same cosmic epoch but are observed at different redshifts. The difference in their redshift is $\Delta z$, and then, by measuring some spectral feature that depends on age (in their own frame of reference), the difference in age between them, $\Delta t$, is estimated. The idea is to choose a pair of galaxies that formed at a considerably higher redshift than where they are now observed; where "considerably higher" I think means that the time since they formed is a lot longer than the observed age difference between them.
The key thing here is that the calibration of whatever age indicator you are using is done (or calculated) in the rest frame. When we look at a galaxy at a redshift of 1, we see their stellar populations as they were about 7.9 billion years ago. A galaxy at redshift 1.1 is seen as it was 8.3 billion years ago (these numbers based on a standard cosmological model). We therefore expect to see a difference in the age of their populations of about 0.4 billion years, providing they were both formed at the same time at a much higher redshift than they have now. There is no further time dilation to insert into these ages or age differences.
You might be confused by the time dilation that must be taken into account if we were to see something change with time. For example if we could observe a very distant star pulsating and the pulsation period gave the age, then yes, that would need to be corrected for time dilation to estimate the rest frame pulsation period. But that correction is already done when observing spectral features - you just blueshift it back to where it was in the rest frame of the galaxy.
ProfRob
Comment (John Hunter): Thanks for the answer. In your example of a star at redshift 1 whose light left it 7.9 byr ago and a star at redshift 1.1 whose light left it 8.3 byr ago (difference 0.4): the universe is 13.8 byr old, so the first would be 5.9 byr old when the light left it (if it had formed shortly after the Big Bang). During these 5.9 byrs, wouldn't it have appeared to us to have evolved for only about half this time, due to the time dilation at a redshift of about 1, and so appear to be 2.95 byrs old? Similarly for the other (5.5 becoming 2.75), giving a measured age difference to us of 0.2 byrs instead of the true 0.4 byrs?

Reply: @JohnHunter The light left the first star when it was 5.9 Gyr old (in its own frame of reference; that is what this value of time means). Therefore when that light gets to us, it looks like it was emitted by a star that was 5.9 Gyr old, but redshifted by a factor of $1+z$. There isn't any other correction to apply.

Comment (John Hunter): Thanks, will have a think about it...
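As a check of the numbers quoted in the answer, the lookback times can be reproduced with a short script (a sketch, assuming a flat LCDM model with the $H_0$ and $\Omega_m$ values quoted in the question):

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.8 / 978.0  # 67.8 km/s/Mpc expressed in 1/Gyr
Om = 0.308

def lookback(z):
    """Lookback time in Gyr: integral of dz' / ((1+z') H(z')) from 0 to z."""
    integrand = lambda zp: 1.0 / ((1 + zp) * H0 * np.sqrt(Om * (1 + zp)**3 + (1 - Om)))
    return quad(integrand, 0.0, z)[0]

t1, t2 = lookback(1.0), lookback(1.1)
print(round(t1, 1), round(t2, 1), round(t2 - t1, 1))  # about 7.9, 8.3 and 0.4 Gyr
```

These match the 7.9 and 8.3 billion years and the 0.4 billion year age difference quoted above.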
UV resonance Raman spectroscopy is a well-established technique for probing peptide and protein secondary structure. Excitation between 180 and 215 nm, within the $\pi$ to $\pi$* electronic transitions of the peptide backbone, results in the enhancement of amide vibrations. We use UVRR excitation profiles and depolarization ratios to examine the underlying peptide bond electronic transitions. The present consensus is that three electronic transitions (n to $\pi$* and two $\pi$ to $\pi$*) occur in simple amides between 230 and 130 nm. In $\alpha$-helices a weak n to $\pi$* electronic transition occurs at 220 nm, while a higher frequency $\pi$ to $\pi$* transition occurs at 190 nm. This $\pi$ to $\pi$* transition undergoes exciton splitting, giving rise to two dipole-allowed transitions: one perpendicular to the helical axis (190 nm) and the second parallel to the axis (205 nm). The melted state of $\alpha$-helices resembles left-handed poly-proline II (PPII) helices. The PPII helix electronic transitions have been defined as an n to $\pi$* transition at $\sim$220 nm and a $\pi$ to $\pi$* transition at $\sim$200 nm. For $\beta$-sheets, the $\pi$ to $\pi$* transition occurs at $\sim$194 nm for parallel and $\sim$196 nm for anti-parallel sheets. The n to $\pi$* transition occurs at $\sim$217 nm for both.
\begin{document}
\renewcommand{\emph} [1] {{#1}}
\title{On the effect of decoherence on quantum tunnelling. } \author[A.Y. Klimenko]{A.Y. Klimenko \thanks{ email: [email protected]}} \affiliation{The University of Queensland, SoMME, QLD 4072, Australia \\
\small{\it (SN Applied Sciences, 2021, 3:710) }}
\maketitle
\begin{abstract} \noindent\textbf{Keywords:} decoherence, quantum tunnelling, non-equilibrium dynamics.\par \textbf{Abstract} This work proposes a series of quantum experiments that can, at least in principle, allow for examining microscopic mechanisms associated with decoherence. \emph{ These experiments can be interpreted as a quantum-mechanical version of non-equilibrium mixing between two volumes separated by a thin interface. } One of the principal goals of such experiments is in identifying non-equilibrium conditions when time-symmetric laws give way to time-directional, irreversible processes, which are represented by decoherence at the quantum level. The rate of decoherence is suggested to be examined indirectly, with minimal intrusions --- this can be achieved by measuring tunnelling rates that, in turn, are affected by decoherence. Decoherence is understood here as a general process that does not involve any significant exchanges of energy and is governed by a particular class of Kraus operators. The present work analyses different regimes of tunnelling in the presence of decoherence and obtains formulae that link the corresponding rates of tunnelling and decoherence under different conditions. \emph{ It is shown that the effects on tunnelling of intrinsic decoherence and of decoherence due to unitary interactions with the environment are similar but not the same and can be distinguished in experiments. } \end{abstract}
\section{Introduction \label{Sec1}}
The goal of this work is to consider experiments that can, at least in principle, examine time-directional quantum effects in an effectively isolated system. Such experiments need to be conducted somewhere at the notional boundary between the microscopic quantum and macroscopic thermodynamic worlds, that is, we need to deal with quantum systems that can exhibit some degree of thermodynamic behaviour. At the quantum level, this corresponds to persisting decoherence, which is, perhaps, the most fundamental irreversible process that we are aware of --- it takes place at the smallest scales, increases entropy \cite{Abe2020} and, expectedly, induces various macroscopic effects associated with the thermodynamic arrow of time \cite{mixing2020}. A large volume of literature is dedicated to decoherence, which may involve both intrinsic \cite{Zurek2002,QTreview,Beretta2005,Stamp2012} and environmental \cite{Zurek1982,Joos1984,Joos2003,EnvDec2005,CT-G2006,CT-P2006,Stamp2012,Yukalov2011} mechanisms.
The present work examines a problem that, at least conceptually, can become an experiment probing the direction of time. This problem represents a quantum-mechanical version of non-equilibrium mixing between two volumes separated by a thin interface. In this quantum version of the classical problem, particles tunnel through the interface and, at the same time, are subject to the omnipresent influence of quantum decoherence, which, presumably, is the fundamental mechanism enacting non-equilibrium, time-directional effects in the macroscopic world \cite{SciRep2016}.
In quantum experiments, one has to face another fundamental difficulty --- interferences from the environment and measurements. Environmental interferences can overwhelm intrinsic mechanisms of decoherence, while quantum measurements routinely cause decoherences and collapses (which are interpreted here as defined in the Appendix of Ref. \cite{Klimenko2019}) instead of observing these decoherences and collapses without interfering. It appears, however, that, under the conditions examined in this work, decoherence affects the rate of quantum tunnelling and, therefore, can be characterised by the tunnelling rates without measuring decoherence directly. Among many formulations of tunnelling problems \cite{LL3,Tunn2003,Dattagupta2004,Tunn2009,TunnZeno2014,Tunn2017,QTG2018}, we select one that has a transparent and, at the same time, sufficiently general solution. For this formulation involving quantum tunnelling through a high potential barrier under non-equilibrium conditions, we examine mechanisms that may be ultimately responsible for the direction of time. Conducting such experiments is not easy but seems possible even at the current level of technology. Conceptually and technically similar experiments have been performed in the past \cite{ImryYoseph2002,Dattagupta2004,Dec1Exp2008,TunnZeno2014,QTG2018}. These experiments investigated mesoscopic decoherence in the context of the Aharonov-Bohm effect \cite{ImryYoseph2002}, proton tunnelling under thermal bath conditions \cite{Dattagupta2004}, and the effect of invasive frequent measurements on quantum tunnelling \cite{TunnZeno2014} (i.e. the \textit{quantum Zeno effect} \cite{Zeno1977}). Our main interest is in examining decoherence by using tunnelling but, unlike in the previous experiments, under conditions that avoid direct interferences from the environment and measurements, and screen the experiment from a supposed direct influence of the temporal boundary conditions imposed on the universe.
\footnote{The key question investigated in the present work --- whether the intrinsic and environmental mechanisms of decoherence can be distinguished in experiments --- has been repeatedly raised in the past: for example, by G.N. Hatsopoulos and G.P. Beretta in "Where is the entropy challenge?" [AIP Conf. Proc. 1033, 34-54 (2008)].}
This manuscript is organised as follows. Section \ref{Sec2} briefly reviews the interpretation of the arrow of time from a philosophical perspective, pointing to the ideas of Hans Reichenbach as a source of critical thinking about time that is relevant to the present work. Readers who are interested in quantum mechanics and non-equilibrium dynamics more than in philosophical issues can omit this section at first reading. Section \ref{Sec3} introduces the tunnelling problem and, in the context of this problem, discusses the emergence of the arrow of time. Section \ref{Sec4} reviews different time-asymmetric and time-symmetric interpretations of quantum mechanics, in particular the two-state vector formalism \cite{2S-QM1964,2state1998,2S-QM2008}. Section \ref{Sec5} examines tunnelling in the absence of decoherence, while Section \ref{Sec6} investigates the influence of decoherence on the tunnelling rates. Section \ref{Sec7} discusses the conduct of experiments based on the results of this work. Section \ref{Sec8} summarises our conclusions. More extensive derivations of asymptotic tunnelling rates are presented in \ref{SecA}, and a brief consideration of the problem from the perspective of the theory of environmental decoherence \cite{Zurek1982} is given in \ref{SecB}.
\section{Discrimination of the past and the future from a philosophical perspective \label{Sec2}}
It is well known that the most important physical laws --- those of classical, relativistic and quantum mechanics --- are time-symmetric, but our experience of physical reality strongly discriminates between the past and the future. The observed arrow of time is reflected in the second law of thermodynamics, which permits entropy increases but bans reduction of entropy in isolated thermodynamic systems. The Boltzmann time hypothesis, which suggests that the perceived arrow of time and the direction of entropy increase must be the same (i.e. connected at some fundamental level), may be striking at first, but after some thinking over the issue most people tend to arrive at the same conclusion. Since Ludwig Boltzmann \cite{Boltzmann-book,Klimenko2019}, the overall conditions prevailing in the universe (or its observable part) have been thought to be responsible for this temporal asymmetry. In modern physics, the increasing trend for entropy is commonly explained by the asymmetry of the temporal boundary conditions imposed on the universe, i.e. by the low-entropy conditions at the time of the Big Bang \cite{PenroseBook}. This explanation is called the \textit{past hypothesis} by Albert \cite{Albert2000} and in other publications. There are no doubts that the past conditions existing in the universe are very important. The pertinent question, however, is not whether these conditions are important, but whether the direct influence of the initial conditions imposed on the universe is sufficient to explain all time-directional phenomena observed in experiments. A number of publications seem to be content with the sufficiency of the special initial conditions in the early universe to explain all entropy increases in thermodynamically isolated systems, even if it is presumed that all laws of physics are time-symmetric \cite{Albert2000,North2011}.
The alternative view is that the past hypothesis is important but, on its own, is insufficient to fully explain the entropy increases required by the second law. This view can be traced to the \textit{principle of parallelism of entropy increase}, which was introduced by Hans Reichenbach \cite{Reichenbach1971}, and further explained, evaluated and extended by Davies \cite{Davies1977}, Sklar \cite{Sklar1993} and Winsberg \cite{Winsberg2004a,Winsberg2004b}. The Reichenbach principle concurs that initial conditions imposed on the universe can explain many effects associated with entropy increase, and it does not deny that entropy can fluctuate. The initial conditions imposed on the universe may explain why entropy tends to increase more often than to decrease in semi-isolated thermodynamic subsystems (branches) but, assuming that all governing physical laws are time-symmetric, these initial conditions do not explain the persistence and consistency of this increase (this, of course, does not exclude the existence of fluctuations of entropy but indicates that, according to the fluctuation theorem, entropy increases over a given time interval are consistently more likely than entropy decreases \cite{Searles2008}). Consider a system that is isolated from the rest of the universe without reaching internal equilibrium: would such a system demonstrate conventional thermodynamic behaviour, or would its entropy increase terminate under these conditions? Reichenbach \cite{Reichenbach1971} conjectured that such an isolated system would still display conventional thermodynamic properties, and we do not have any experimental evidence to the contrary. The principle of parallelism of entropy increase is useful not only as a thought experiment. When applied at a physical level, Reichenbach's ideas lead us to the existence of a time-priming mechanism that continues to exert its influence even under isolated conditions \cite{KM-Entropy2014,Klimenko2019,mixing2020}.
Implications of the directionality of time in quantum mechanics \cite{Abe2020} and chemical kinetics \cite{Maas2020} are further discussed in the special issue \cite{Entropy2021}.
Huw Price \cite{PriceBook} pointed out that our temporal (antecedent) intuition often results in implicit discrimination of the directions of time in physical theories --- this tends to introduce conceptual biases that may be difficult to identify due to the all-encompassing strength of our intuitive perception of time. These biases conventionally involve assumptions associated with the conceptualisation of antecedent causality, such as imposing initial (and not final) conditions or presuming stochastic independence before (and not after) interactions. These assumptions are very reasonable and supported by our real-world experience, but may form a logical circle: effectively, we often presume antecedent causality in order to explain entropy increase forward in time, which, in turn, is used to explain and justify antecedent causality \cite{mixing2020}. Here, we recognise that the laws of classical and quantum mechanics are time-symmetric, and will endeavour to avoid implicit discrimination of the directions of time \cite{Klimenko2019}. When reading this article, the reader, who is accustomed to thinking in terms of antecedent causality, might feel that something is strange or missing. The concept of time priming used here is aimed at avoiding intuitive assumptions that introduce directionality of time by implying antecedent causality in one form or another --- time priming does not seem relevant whenever antecedent causality is presumed. Since the directions of time are, obviously, not equivalent, there must be a physical mechanism that is responsible for this and, at least in principle, testable in experiments. One such possible mechanism, pointing to interactions of quantum effects and gravity, has been suggested by Penrose \cite{Penrose1996}. Another possibility is that this mechanism is related to the temporal asymmetry of matter and antimatter \cite{K-PhysA,KM-Entropy2014,SciRep2016,Ent2017}.
In the present work, however, we do not presume any specific form of this mechanism and use a special term, the \textit{time primer}, as a placeholder for possible physical explanations. Detecting the time primer in experiments is most likely to be difficult due to the expected smallness of its magnitude.
The Reichenbach parallelism principle is not a trivial statement or tautology: one can imagine a state of affairs in which this principle has only limited validity. A thermodynamic system, placed in isolation and screened by equilibrium states from the initial and final conditions imposed on the universe, might, at least in principle, cease to exhibit thermodynamic, entropy-increasing behaviour even if non-equilibrium conditions are created within a selected time interval. \textit{ Reichenbach's conjecture} tells us that this should not happen: such an isolated system would still tend to increase its entropy similarly to and in parallel with entropy increases of various thermodynamic systems scattered in the rest of the universe. While general implications of the Reichenbach principles are discussed in Ref. \cite{mixing2020}, our broad goal is to consider specific experiments where these principles can be examined directly or indirectly but, desirably, examined in a way that can give some indications of the underlying mechanisms responsible for decoherence and, ideally, for the direction of time. While the experiments suggested in the present work are related to modern quantum mechanics more than to Reichenbach's branch model, one needs to acknowledge that these experiments are following the direction of his thinking.
\section{Quantum mixing in a branch system \label{Sec3}}
This section introduces a detailed description of the problem, which, as noted above, represents a quantum-mechanical version of mixing across an interface that is deemed to be branched and isolated from the rest of the universe.
\subsection{Formulation of the problem}
We consider a number (say, $N_{0}$) of quantum particles placed in a rectangular box AB, which is partitioned into two sections A and B. The quantum levels in the system are sparsely populated so that the rules of the Maxwell-Boltzmann statistics apply. The tunnelling particles are initially located in section A of the box AB, as shown in Figure \ref{fig1}. The value of the potential $V$ in section B is prohibitively high, preventing any significant presence of the particles in this section. Section A also contains a number $N_{1}$ of inert, non-tunnelling particles, and this number is sufficiently large so that the system of particles can be expected to behave thermodynamically ($N_{1}\gg N_{0}$). We expect that all of the particles achieve thermodynamic equilibrium during the passive stages of the experiment (the particles interact with each other, but these interactions are deemed to be weak). The particles in the box are trapped by a potential field and completely isolated from the environment (which is also deemed to be in an equilibrium state) for the duration of the experiment $-t_{b}<t<+t_{b}$. In a simpler experimental setup, the tunnelling particles can be brought into thermodynamic equilibrium with their container box. Using inert particles, however, allows us to control the statistical scale of the experiment.
The equilibrium state is maintained for a long time $\sim t_{b}$ prior to (and after) the active phase of the experiment --- this time is much larger than the characteristic thermalisation time $\tau _{t}$ for this system, $t_{b}\gg \tau _{t}$. Thermalisation implies achieving thermodynamic equilibrium between all particles under consideration, while, in the present context, equilibration implies reaching steady-state concentrations of the tunnelling particles. Depending on conditions, equilibration may or may not require thermalisation. During the passive stages of the experiment, equilibration requires energy exchanges between different modes and, therefore, presumes thermalisation. In the active stage of the experiment, however, the system may reach equilibrium values of $N_{\text{A}}$ and $N_{\text{B}}$ without reaching (or without significantly disturbing) thermodynamic equilibrium. Thermalisation and equilibration are generally not synonymous \cite{Yukalov2011}: thermalisation requires a substantial energy exchange between different modes to reach the equilibrium thermodynamic distributions, while this is not necessarily the case for equilibration.
The active phase of the experiment $-t_{s}\leq t\leq +t_{s}$ is short, $t_{s}\ll t_{b}$. At time $t=-t_{s}$, the particle access to section B of the box is opened by rapid lowering of the potential $V(\mathbf{r},t)$ in this section to the same level as in section A, so that the tunnelling particles can now tunnel through the barrier that separates sections A and B, while the inert particles remain in section A. The rate of change of the potential is fast compared to the characteristic tunnelling time, so that the concentrations of the particles in sections A and B deviate from their equilibrium values. Particles tunnel from A to B and back through a potential barrier separating the sections until the process is terminated at $t=t_{s}$ in a time-symmetric manner by increasing the potential in section B to its original value. The experiment is expected to be followed by a long-lasting equilibrium state, where the particles are again in thermodynamic equilibrium (this implies that they are located in section A). It is also worthwhile to consider the temporal symmetry of the experiment when lowering and raising the potential $V(\mathbf{r},t)$ in a piston-like manner (e.g. as discussed in Refs. \cite{mixing2020,AYK2020}), although the present work is primarily focused on examining the effects of decoherence on tunnelling.
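The time-symmetric driving described above can be sketched numerically. The following Python fragment is a hedged illustration only (the names, the piecewise form of the potential and all parameter values are assumptions made for illustration, not taken from this work): section A is $x<0$, section B is $x>0$, a thin high barrier sits near $x=0$, and access to section B is blocked outside the active phase $-t_{s}\leq t\leq +t_{s}$.

```python
# Hedged numerical sketch (all names and parameter values are assumptions
# made for illustration).  Section A is x < 0, section B is x > 0, with a
# thin high barrier near x = 0.  Section B is blocked (V_high) outside the
# active phase |t| > t_s and opened, time-symmetrically, during it.

t_s = 1.0          # half-duration of the active phase
V_high = 1.0e3     # prohibitively high potential level in blocked section B
V_barrier = 50.0   # thin interface barrier near x = 0
w = 0.1            # barrier half-width

def V(x, t):
    """Potential V(x, t); satisfies V(x, t) = V(x, -t) by construction."""
    v = V_barrier if abs(x) < w else 0.0   # thin interface barrier
    if x > w and abs(t) > t_s:             # section B blocked outside active phase
        v += V_high
    return v

# Check the time symmetry of the driving on a few sample points
symmetric = all(V(x, t) == V(x, -t)
                for x in (-1.0, 0.0, 1.0) for t in (0.0, 0.5, 2.0))
```

The check confirms that the driving itself carries no arrow of time: any temporal asymmetry observed in the response must come from elsewhere.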
\subsection{Expected emergence of temporal asymmetry}
Would the concentration of particles in section B behave in a thermodynamic manner, with time-delayed relaxation towards its equilibrium value as illustrated by curve 2 in Figure \ref{fig1}? The Reichenbach conjecture states that it would: despite being completely isolated from the environment and fully screened by the equilibrium states from the initial and final conditions imposed on the universe, the system is still expected to display time-directional thermodynamic behaviour (this, of course, needs to be confirmed by actual experiments but, for the sake of the argument, we assume at this point that the Reichenbach conjecture is correct). If the laws of the universe are time-symmetric, the initial and final conditions are similar, and interactions of the system and the universe take place only through the strictly time-symmetric disturbance of the potential $V(\mathbf{r},t)=V(\mathbf{r},-t)$, why is the response of the system to time-symmetric inputs evidently time-asymmetric?
\subsection{Environmental interferences}
While we have declared complete isolation of the system, controlling and eliminating environmental interferences for a particle or a system of particles may, in real-world experiments, be quite difficult. According to thinking common among many physicists, any quantum system is always subject to influences of the environment; environmental interactions, no doubt, can cause and do cause decoherences, as indicated in many theories and experiments \cite{Zurek1982,Joos2003,EnvDec2005,CT-G2006,CT-P2006,Stamp2012,Yukalov2011,Nature2020decoherence}. The interferences that involve a measurable, specific influence of the environment on the system (such as the direct effect of cosmic radiation on superconducting qubits measured in Ref. \cite{Nature2020decoherence}) can be evaluated and, at least in principle, protected against in these experiments. Bell entanglement of two elementary particles can be protected from environmental interferences and preserved for a very long time. However, a presumed omnipresent environmental interference that induces decoherence but does not have any specific measurable mechanism or any specific physical particles or surrounding objects causing it (e.g. an interference involving a special quantum field that, like the Higgs field, is present everywhere) is, conceptually, no different from intrinsic decoherence. While the distinction between the intrinsic and environmental mechanisms of decoherence is blurred and depends on exact definitions, the principal difference between various theories of decoherence and thermalisation is in relying or not relying on time-symmetric scientific frameworks (such as unitary evolutions in quantum mechanics). We therefore distinguish intrinsic or effectively intrinsic mechanisms of decoherence from unitary interferences with the environment.
Unitary environmental interferences do not, by themselves, discriminate the directions of time: all theories of environmental decoherence based on time-symmetric physical laws (e.g. unitary evolutions of quantum mechanics) must involve another principal element --- an assumption that violates the equivalence of the directions of time. The effect of the environment on decoherence becomes clear only if we presume antecedent causality (although causality is something that we have vowed to avoid in the present work). Indeed, in the absence of the directionality of time required by antecedent causality, environmental interactions can induce recoherences in the same way as they induce decoherences. The time-directional effect of random interventions produced by the environment is determined by imposing initial (as opposed to final) conditions on the system (individual realisations of a random process with independent increments are not time-directional \cite{mixing2020}). Note that the immediate environment can be in a thermal equilibrium state and experience only time-symmetric fluctuations or, even better, be kept at (nearly) absolute zero temperature (while radiation, which can transmit interactions over long distances, is expected to be decoherence-neutral by itself \cite{Ent2017}). In the causal model, the final state depends on random interventions but the initial state does not, because the initial state is fixed but the final state is not. The interventions are deemed to cause the final state but not the initial state. If we fix the final state instead of fixing the initial state, then the effect of environmental interferences would be recohering. By themselves, the environmental interferences only introduce some effective randomness into the system. Presuming antecedent causality is the central element of the major theories of environmental interference --- it is causality, and not the interference, that breaks the symmetry of the directions of time.
\subsection{Initial and final conditions imposed on the universe}
One, of course, may abandon antecedent causality and, instead, invoke the low-entropy initial conditions imposed on the universe. While these conditions must be very important, the main question concerning our experiment remains: how can these conditions influence the stochastic state of the system after a long period of equilibrium? We may assume that $N_{0}\sim 1$ so that the system of $N_{1}$ inert particles under consideration is small (while $t_{b}$ is extremely long, much, much longer than $t_{s}$) and repeatedly experiences very substantial fluctuations around equilibrium during the passive phases of the experiment. Due to these fluctuations, we may, in principle, select the time moments $t=\pm t_{b}$ so that the final state at $t=+t_{b}$ has lower entropy than the initial state at $t=-t_{b}$ --- does this mean that the arrow of time should be reversed during the active phase of the experiment and the changes in $N_{\text{B}}$ -- the number of particles in section B -- would tend to preempt the changes of the potential $V(\mathbf{r},t)$ rather than to follow them? While the negative answer to this question is expected, the physical mechanism that can allow the temporal boundary conditions imposed on the universe to affect the active phase of our experiment is not obvious. Or, alternatively, should the arrow of time disappear and the directions of time become equivalent under these conditions? According to the Reichenbach conjecture, we tend to believe that the arrow of time should persist.
In the suggested experiment, the system cannot preserve any statistical information about the conditions that preceded the experiment or follow the experiment --- equilibrium states achieve maximal entropy and necessarily destroy all such information. Yet, if the Reichenbach conjecture is correct, there must be a physical mechanism that discriminates the directions of time in lieu of the direct action of the temporal boundary conditions imposed on the universe. This mechanism is called here the \textit{time primer}. Conceptually, the time primer does not replace the global temporal boundary conditions imposed on the universe but reflects the local action of these global conditions.
The time primer may act predominantly on a larger system and propagate to a semi-localised subsystem through time-priming interference; in this case the subsystem can behave as if it were subject to intrinsic time priming without any intrinsic time priming of its own. One may invoke time priming in larger and larger environments, but this interpretation neither gives a complete explanation (now we need to explain time priming in the environment, which may well be in its equilibrium state) nor helps the experiments (instead of confining and measuring the effect of interest, we disperse it over the environment in a way that is likely to become experimentally untraceable). Therefore, environmental interactions should be avoided as much as possible or, at least, they need to be measured and quantified. The boundary conditions imposed on the universe may indeed determine the direction of decoherence in a tiny experiment with quantum mixing, but this influence must have a specific physical mechanism and should be measurable and quantifiable.
\subsection{Conditions of the experiment}
By changing the parameters of the experiment, we can observe different physical conditions. The system of particles may consist of one or more particles, which do not strongly interact among themselves and may or may not interact with a thermodynamic (or statistical microscopic) object (e.g. a system of inert particles), while the particles and the object remain fully isolated from the environment. The characteristic tunnelling time can also be changed by varying the shape of the potential. If the active phase of the experiment is sufficiently short, $t_{s}\ll \tau _{t}$, then no substantial exchange of energy takes place within the system during the active phase. This, however, does not imply that the quantum particles evolve unitarily, since they may still be affected by decoherence. Since the equilibrated system should be, from the quantum perspective, in or close to its maximally mixed state under the specified conditions (or in the effectively maximally mixed state specified by canonical typicality \cite{CT-G2006,CT-P2006}), thermalisation necessarily implies decoherence and, therefore, the characteristic decoherence time cannot be longer than the characteristic thermalisation time. In fact, one may expect the decoherence time $\tau _{d}$ to be significantly shorter than the thermalisation time $\tau _{t}$ (at least for sufficiently large systems), so that the characteristic time $\tau _{d}$ can be shorter than or comparable to $t_{s}$, even if $t_{s}$ is much smaller than $\tau _{t}$. In the case of $\tau _{d}\ll t_{s}$, decoherence must have a strong influence on the experiment.
If, however, the active phase of the experiment is much shorter than the characteristic decoherence time, $t_{s}\ll \tau _{d}$, then decoherence has little effect on the quantum system of particles, which is now expected to evolve unitarily and be governed by the Schr\"{o}dinger equation during the active phase.
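The asymptotic regimes discussed above can be summarised by comparing the active-phase duration $t_{s}$ with the decoherence time $\tau _{d}$ and the thermalisation time $\tau _{t}$ (where $\tau _{d}\leq \tau _{t}$). The following Python helper is purely illustrative: the function name and the factor-of-ten thresholds separating the regimes are arbitrary assumptions, since the text only uses the asymptotic limits $\ll $ and $\gg $.

```python
# Illustrative classifier (hypothetical helper, not part of this work):
# compares the active-phase duration t_s with the decoherence time tau_d
# and the thermalisation time tau_t.  The factor 'margin' separating the
# asymptotic regimes is an arbitrary choice made only for illustration.

def tunnelling_regime(t_s, tau_d, tau_t, margin=10.0):
    assert tau_d <= tau_t, "decoherence cannot be slower than thermalisation"
    if t_s * margin < tau_d:
        return "unitary"        # t_s << tau_d: Schroedinger evolution, no decoherence
    if t_s > margin * tau_d and t_s * margin < tau_t:
        return "decoherent"     # tau_d << t_s << tau_t: decoherence, no thermalisation
    if t_s > margin * tau_t:
        return "thermalising"   # t_s >> tau_t: full thermodynamic relaxation
    return "intermediate"       # no asymptotic limit applies
```

The "decoherent" regime, where decoherence is strong but no significant energy is exchanged, is the one of principal interest in this work.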
\section{Time-directional and time-symmetric interpretations of quantum mechanics\label{Sec4}}
This section discusses time-asymmetric and time-symmetric interpretations of quantum mechanics. The former tends to imply antecedent causality, while the latter can be used to avoid implicit discrimination of the directions of time.
\subsection{Schr\"{o}dinger equation and its solution}
As we have to deal with quantum mixtures, different particles generally do not form coherent superpositions, and quantum wave functions and density matrices are more useful tools than quantum fields under these conditions. Hence, we first focus our attention on the behaviour of a single quantum particle but remember that interactions between particles within the system are conducive to decoherence. According to the conventional interpretation of the problem, the evolution of the wave function is governed by the \textit{Schr\"{o}dinger equation} in the position representation
\begin{equation}
i\hbar \frac{\partial \psi }{\partial t}=\mathbb{H}\psi ,\ \ \ \ \ \mathbb{H}=\frac{1}{2m}\mathbbm{p}^{2}+\mathbb{V}=-\frac{\hbar ^{2}}{2m}\nabla ^{2}+V(\mathbf{r},t)  \label{1Shr}
\end{equation}
with the Dirichlet (zero) boundary conditions
\begin{equation}
\psi =0\ \text{at }\mathbf{r}\in \partial \text{AB},\ \ \ \text{AB}=\text{A}\cup \text{B}  \label{1bc}
\end{equation}
since the potential $V$ is assumed to be very high at and beyond the boundaries. Here, $\psi $ is the wave function, $t$ is time, $\mathbb{H}$ is the Hamiltonian, $\hbar $ is the Planck constant and $m$ is the particle mass. Relations (\ref{1Shr}) and (\ref{1bc}) apply to all wave functions that correspond to different particles, assuming that interactions between particles can be neglected. The sections A and B are separated by a thin high-energy barrier located near $x=0$. The probability of tunnelling is relatively small but essential; tunnelling may or may not be affected by decoherence, as discussed in the rest of this paper.
According to the formulation of the problem presented above, the potential $V$ is assumed to be time-independent, $V(\mathbf{r},t)=V(\mathbf{r})$, within the time interval $-t_{s}<t<+t_{s}$, which is of interest in the present work. Since the Hamiltonian is Hermitian, $\left\langle \phi \middle|\mathbb{H}\psi \right\rangle =\left\langle \mathbb{H}\phi \middle|\psi \right\rangle $, the solution of the problem is based on the Hilbert--Schmidt theorem
\begin{equation}
\psi (t^{\circ },\mathbf{r})=\mathbb{U}(t^{\circ })\psi _{0}=\exp \left( \frac{\mathbb{H}}{i\hbar }t^{\circ }\right) \psi _{0}=\sum_{j}\psi _{j}=\sum_{j}a_{j}\exp \left( -i\omega _{j}t^{\circ }\right) \Psi _{j}(\mathbf{r})  \label{1sol}
\end{equation}
where
\begin{equation}
a_{j}=\frac{\left\langle \Psi _{j}\middle|\psi _{0}\right\rangle }{Q_{j}},\ \ \ \mathbb{H}\Psi _{j}=E_{j}\Psi _{j},\ \ \ \left\langle \Psi _{j}\middle|\Psi _{i}\right\rangle =\delta _{ji}Q_{j},\ \ \ \omega _{j}=\frac{E_{j}}{\hbar }  \label{eig1r}
\end{equation}
$\psi _{0}=\left. \psi \right\vert _{t=t_{0}}$ specifies the initial conditions, $t^{\circ }=t-t_{0}$, and the energy eigenstates $\Psi _{j}(\mathbf{r})$ satisfy the same boundary conditions as $\psi $. The initial (or final) conditions can be set at $t_{0}=-t_{s}$ or at $t_{0}=+t_{s}$. The jumps of the potential at $t=\pm t_{s}$ are assumed to be so rapid that the wave function does not have time to adjust, so that $\left. \psi \right\vert _{t=t_{0}+0}=\left. \psi \right\vert _{t=t_{0}-0}$. The bra/ket product notation $\left\langle \phi \middle|\psi \right\rangle $ implies integration of the product $\phi ^{\ast }\psi $ over the interior of box AB. For the potential $V(\mathbf{r})=V(x)$, which depends only on $x$ but not on $y$ and $z$ (the Cartesian components of the physical coordinate $\mathbf{r}$), the eigenstate variables are separated, $\Psi _{j}=\tilde{\Psi}_{j}(x)\sin (k_{y}y)\sin (k_{z}z)$, so that
\begin{equation}
-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}\tilde{\Psi}_{j}}{\partial x^{2}}+V(x)\tilde{\Psi}_{j}=\tilde{E}_{j}\tilde{\Psi}_{j}\ \ \ \text{and}\ \ \ \tilde{E}_{j}+\hbar ^{2}\frac{k_{y}^{2}+k_{z}^{2}}{2m}=E_{j}  \label{eig1x}
\end{equation}
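The expansion (\ref{1sol})--(\ref{eig1r}) can be cross-checked numerically. The following Python sketch is illustrative only (the grid size, barrier shape and Gaussian initial state are arbitrary assumptions, with $\hbar =m=1$): it discretises the one-dimensional Hamiltonian with Dirichlet boundaries, diagonalises it, and propagates a state initially localised in section A; the unitary evolution conserves the norm while a small probability tunnels through the thin barrier into section B.

```python
import numpy as np

# Minimal numerical sketch (not from this work): solve H Psi_j = E_j Psi_j
# on a grid with Dirichlet boundaries, then propagate
# psi(t) = sum_j a_j exp(-i E_j t / hbar) Psi_j.  Units hbar = m = 1;
# the barrier height and width are illustrative assumptions only.

hbar = m = 1.0
n, L = 400, 20.0
x = np.linspace(0.0, L, n + 2)[1:-1]        # interior points (psi = 0 at walls)
dx = x[1] - x[0]

V = np.where(np.abs(x - L / 2) < 0.25, 50.0, 0.0)  # thin high barrier at x = L/2

# Finite-difference Hamiltonian: -(hbar^2/2m) d^2/dx^2 + V(x)
main = hbar**2 / (m * dx**2) + V
off = -hbar**2 / (2 * m * dx**2) * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, Psi = np.linalg.eigh(H)                  # E_j and orthonormal eigenstates

# Initial state: a Gaussian packet localised in section A (left of the barrier)
psi0 = np.exp(-((x - L / 4) ** 2) / 2.0).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

a = Psi.T @ psi0                            # expansion coefficients a_j
t = 3.0
psi_t = Psi @ (a * np.exp(-1j * E * t / hbar))

norm = np.sum(np.abs(psi_t) ** 2) * dx      # unitary evolution preserves the norm
right = np.sum(np.abs(psi_t[x > L / 2]) ** 2) * dx  # probability found in section B
```

With a high, thin barrier the probability in section B remains small at this time, consistent with the weak-tunnelling regime assumed in the text.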
\subsection{On time-symmetric formulations of quantum mechanics}
The conventional formulation of quantum mechanics implies that the solution $\psi $ of the Schr\"{o}dinger equation (\ref{1Shr}) can have only one temporal boundary condition $\left. \psi \right\vert _{t=t_{0}}=\psi _{0},$ requiring us to set either the initial condition at $t_{0}=-\zeta t_{s}$ or the final condition at $t_{0}=+\zeta t_{s},$ where $\zeta \geq 1$. Despite the unitarity and reversibility of quantum evolutions governed by (\ref{1Shr}), this violates the symmetry of time and forces us to make a time-asymmetric choice between the initial and final conditions. The selection of initial or final conditions is explicitly discriminating in the case of random or diffusional systems \cite{mixing2020}, but one may note that any given solution $\psi =\psi _{S}(t)$ of the Schr\"{o}dinger equation (\ref{1Shr}) allows for two conditions $\left. \psi \right\vert _{t=-\zeta t_{s}}=\psi _{S}(-\zeta t_{s})$ and $\left. \psi \right\vert _{t=+\zeta t_{s}}=\psi _{S}(+\zeta t_{s})$, which correspond to the same $\psi _{S}(t)$. Our temporal, causality-based intuition, however, forces us to select specific types of conditions that are associated with antecedent causality and, quite often, are time-asymmetric. For example, one can choose 1) $\psi =0$ in section B at $t=-t_{s}$ or 2) $\psi =0$ in section B at $t=+t_{s}$ --- these conditions are generally not equivalent and, therefore, choosing between conditions 1 and 2 is time-asymmetric. 
The fundamental dilemma of selecting between initial and final conditions is commonly resolved by invoking antecedent causality and choosing\ initial conditions over final conditions --- this is practically correct but tends to hide the inequivalence of the directions of time, especially when some degree of uncertainty is introduced into the system.
Several interpretations of quantum mechanics permit time-symmetric formulations of temporal boundary conditions \cite{2S-QM1964,TransQM1986,2S-QM2008}. Time reversal is naturally present in relativistic quantum mechanics due to its \textit{Lorentz invariance}. For example, the \textit{Klein--Gordon} (Klein--Gordon--Fock) equation \cite{RQM2012} \begin{equation} \frac{1}{c^{2}}\frac{\partial ^{2}\psi }{\partial t^{2}}-\nabla ^{2}\psi +\frac{m^{2}c^{2}}{\hbar ^{2}}\psi =0 \label{2-K-G} \end{equation} is of the second order in time, is invariant with respect to the reversal of time $t\rightarrow -t$, and therefore necessarily involves at least two waves propagating forward and backward in time. Note that only free spinless particles satisfy equation (\ref{2-K-G}): interactions of the particle spin with electromagnetic fields require a more elaborate treatment --- the \textit{Dirac equation} \cite{Dirac1928} --- which generally is invariant only under \textit{charge-parity-time (CPT)} conjugation and not under mere reversals of time. Interactions of the thermodynamic, time-directional effects with CPT-invariance have been extensively discussed elsewhere \cite{Sakharov1967,Gell-Mann1993,K-PhysA,KM-Entropy2014,SciRep2016,Ent2017} and are not specifically considered here. The Klein--Gordon equation is used here only to illustrate the effects of Lorentz invariance. The non-relativistic limit of the Klein--Gordon equation is obtained by substituting $\psi =e^{-i\omega _{0}t}\varphi $ and $\psi =e^{+i\omega _{0}t}\phi $ to offset the domination of the $mc^{2}$ term by selecting $\omega _{0}=mc^{2}/\hbar $. 
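For clarity, we sketch the intermediate step for the first substitution (the second is treated analogously): inserting $\psi =e^{-i\omega _{0}t}\varphi $ into (\ref{2-K-G}) gives \begin{equation} \frac{1}{c^{2}}\frac{\partial ^{2}\varphi }{\partial t^{2}}-\frac{2i\omega _{0}}{c^{2}}\frac{\partial \varphi }{\partial t}-\frac{\omega _{0}^{2}}{c^{2}}\varphi -\nabla ^{2}\varphi +\frac{m^{2}c^{2}}{\hbar ^{2}}\varphi =0 \end{equation} where the mass terms cancel due to $\omega _{0}^{2}/c^{2}=m^{2}c^{2}/\hbar ^{2}$. Neglecting the remaining relativistic term $\sim c^{-2}\partial ^{2}\varphi /\partial t^{2}$ and multiplying by $-\hbar ^{2}/(2m)$ recovers the free-particle Schr\"{o}dinger equation $i\hbar \partial \varphi /\partial t=-(\hbar ^{2}/2m)\nabla ^{2}\varphi $.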
This yields the two corresponding equations \begin{equation} \text{a) }i\hbar \frac{\partial \varphi }{\partial t}=\mathbb{H}\varphi \ \ \text{and b)\ }i\hbar \frac{\partial \phi }{\partial t}=-\mathbb{H}\phi \label{2rel} \end{equation} where the second equation is the time reversal $t\rightarrow -t$ of the first. Conventional non-relativistic quantum mechanics admits only equation (\ref{2rel}a), while quantum field theory interprets (\ref{2rel}b) as corresponding to antiparticles that nominally move backward in time. The \textit{transactional interpretation} of quantum mechanics \cite{TransQM1986} argues that both of these equations play a role: the first corresponds to waves propagating forward in time, the second corresponds to waves propagating backward in time, and both of these waves are physically significant.
Another interpretation is given by the so-called \textit{two-state vector formalism} \cite{2S-QM1964,2state1998,2S-QM2008}, where each quantum system is characterised by two vectors, which are usually written as bra and ket: $\left\langle \phi \right\vert $ and $\left\vert \varphi \right\rangle .$ These vectors satisfy the Schr\"{o}dinger equation \begin{equation} \text{a) }i\hbar \frac{\partial \left\vert \varphi \right\rangle }{\partial t}=\mathbb{H}\left\vert \varphi \right\rangle \ \ \text{and \ b)\ }i\hbar \frac{\partial \left\langle \phi \right\vert }{\partial t}=-\left\langle \phi \right\vert \mathbb{H} \label{2st} \end{equation} Equation (\ref{2st}b) can be obtained as the Hermitian (conjugate) transpose of (\ref{2st}a), although the Hermitian transpose $\varphi ^{\dagger }$ of the state $\varphi $ is not necessarily the same as $\phi ,$ since, as discussed below, $\left\langle \phi \right\vert $ and $\left\vert \varphi \right\rangle $ are generally constrained by different initial and final conditions: the initial conditions are imposed on $\left\vert \varphi \right\rangle ,$ while $\left\langle \phi \right\vert $ satisfies the final conditions. Equations (\ref{2st}b) and (\ref{2rel}b) may look similar but, in fact, these equations are generally different (unless, as in equation (\ref{1Shr}), the Hamiltonian $\mathbb{H}$ is strictly invariant with respect to the reversal of time), as are the corresponding conceptual interpretations. 
The two-state formalism is conventionally interpreted along time-asymmetric, causal lines: the state of the system $\left\vert \varphi \right\rangle $ is determined by its past, while $\left\langle \phi \right\vert $ specifies how the system will affect measuring devices in the future, reflecting postselection. According to this causal perspective, $\left\vert \varphi \right\rangle $ is a genuine characteristic of the system at a given moment, while $\left\langle \phi \right\vert $ is not but can be treated as such for the sake of convenience \cite{2S-QM2008}. There is also an implied time-symmetric interpretation of the two-state formalism, where both states $\left\langle \phi \right\vert $ and $\left\vert \varphi \right\rangle $ are considered to be intrinsic physical characteristics of the system at a given time moment.
Finally, the two-state vector formalism requires that the \textit{Born rule} for the probability of particle location $P(\text{J},t)$, which is conventionally given by \begin{equation} \left( P(\text{J},t)\right) _{\text{Born}}=\frac{\left\langle \psi \middle|\mathbb{P}_{\text{J}}\middle|\psi \right\rangle }{Q_{1}} \label{Born} \end{equation} should be replaced by the time-symmetric \textit{Aharonov, Bergman and Lebowitz (ABL) rule} \cite{2S-QM1964} \begin{equation} \left( P(\text{J},t)\right) _{\text{ABL}}=\frac{\left\vert \left\langle \phi \middle|\mathbb{P}_{\text{J}}\middle|\varphi \right\rangle \right\vert ^{2}}{Q_{2}} \label{ABL} \end{equation} where $\mathbb{P}_{\text{J}}$, J = A, B, is the projector of the wave function onto section A or section B, and \begin{equation} Q_{1}=\sum_{\text{J}}\left\langle \psi \middle|\mathbb{P}_{\text{J}}\middle|\psi \right\rangle ,\ \ \ Q_{2}=\sum_{\text{J}}\left\vert \left\langle \phi \middle|\mathbb{P}_{\text{J}}\middle|\varphi \right\rangle \right\vert ^{2} \end{equation} The ABL rule is similar to the interpretation of quantum mechanics called \textit{consistent histories} \cite{ConsHist1984,Stanford-consistent-histories}. In this context, we stress that the location operators in (\ref{ABL}) form a consistent set of projectors since $\mathbb{P}_{\text{A}}\mathbb{P}_{\text{B}}=0$. The consistent-histories approach also has time-symmetric and time-asymmetric, causal versions \cite{Gell-Mann1993}.
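The distinction between the two rules can be made concrete with a small numerical sketch (an illustrative toy model, not taken from the text above: a six-dimensional state space split into "sections", with random vectors standing in for $\psi $, $\left\vert \varphi \right\rangle $ and $\left\langle \phi \right\vert $):

```python
import numpy as np

# Toy comparison of the Born rule and the ABL rule for a consistent pair
# of projectors P_A P_B = 0.  "Section A" is the first 3 components,
# "section B" the last 3; all state vectors are illustrative.
rng = np.random.default_rng(1)
P_A = np.diag([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
P_B = np.eye(6) - P_A                        # consistent set: P_A @ P_B = 0

def rand_state():
    v = rng.normal(size=6) + 1j * rng.normal(size=6)
    return v / np.linalg.norm(v)

psi = rand_state()                           # single-state (Born) description
ket = rand_state()                           # forward-evolving state |varphi>
bra = rand_state()                           # backward-evolving state <phi|

born = np.array([np.real(psi.conj() @ P @ psi) for P in (P_A, P_B)])
born /= born.sum()                           # (Born): <psi|P_J|psi> / Q_1

abl = np.array([abs(bra.conj() @ P @ ket) ** 2 for P in (P_A, P_B)])
abl /= abl.sum()                             # (ABL): |<phi|P_J|varphi>|^2 / Q_2
```

Both rules yield normalised, non-negative probabilities over the two sections; they differ in that the ABL probabilities depend on the final (post-selected) state as well as the initial one.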
\subsection{The initial and final conditions}
Leaving aside philosophical aspects of quantum mechanics, we focus on the initial and final conditions. As specified above, the problem shown in Figure \ref{fig1} does not have any explicitly measured initial and final conditions. Such measurements can be performed at $t=-\zeta t_{s}$ and $t=+\zeta t_{s}$ with $\zeta >1$ and $\zeta t_{s}\ll t_{b}$, so that the evolution of the system is not disturbed by these measurements during the active phase of the experiment. The case of a single particle is discussed here for the sake of simplicity. The measurements attempt to detect the presence of particles in section A. If the particle is not detected either at $t=-\zeta t_{s}$ or at $t=+\zeta t_{s}$ then this realisation is discarded (i.e. both pre-selection and post-selection apply). The two-state formalism seems to be the most suitable time-symmetric framework available for this case. According to this formalism, the state $\left\vert \varphi \right\rangle $ is deemed to propagate forward in time and, therefore, is subject to the initial conditions, while $\left\langle \phi \right\vert $ is deemed to propagate backwards in time and, therefore, is subject to the final conditions: \begin{equation} \text{a) }\left\vert \varphi \right\rangle _{t=-\zeta t_{s}}=\varphi _{1}\text{\ \ \ and \ \ b) }\left\langle \phi \right\vert _{t=+\zeta t_{s}}=\phi _{2} \label{2bc} \end{equation} If the initial and final conditions are the same or similar, then so are $\varphi _{1}$ and $\phi _{2}$. In conventional quantum mechanics, we invoke antecedent causality to justify our preference for initial conditions over final conditions and impose only the initial condition (\ref{2bc}a).
While we can set the initial and final conditions at $t=\pm \zeta t_{s}$, the corresponding solutions of equations (\ref{2st}) remain undisturbed until the moments of the potential jumps $t=\pm t_{s}$ are reached. Hence, from the mathematical perspective, we can put $\zeta =1$ in (\ref{2bc}) and set these undisturbed conditions at $t=t_{0}=\pm t_{s}$, so that equations (\ref{2st}) are to be solved only within the time interval $-t_{s}<t<+t_{s}$. Note that, according to the two-state vector formalism, changes in probabilities may precede the relevant changes of the potential. The jumps of the potential $V$ in box B at $t_{0}=\pm t_{s}$ are presumed to be rapid, so that the wave functions do not have time to change substantially and remain practically the same at $t_{0}-0$ and at $t_{0}+0.$ Therefore, we do not need to specify whether the initial and final conditions are applied before or after the jumps of the potential. If the final conditions are not set, the ABL rule (\ref{ABL}) reverts to the Born rule (\ref{Born}).
If the thermalisation time $\tau _{t}$ is smaller than or comparable to $\zeta t_{s}$, then post-selection should have little effect --- the experiment is effectively screened from the final conditions. If the characteristic time associated with decoherence $\tau _{d}$ is smaller than or comparable to $\zeta t_{s}$ (but $\tau _{t}\gg \zeta t_{s}$), then decoherence can affect the active phase of the experiment by screening it from the final conditions (in this context decoherence can be seen as an intermediate projective measurement that remains unknown, i.e. a latent collapse \cite{Klimenko2019}). If the bases of the measurement and of the decoherence are consistent (while accounting for the unitary evolution of the system between the time moments of decoherence and measurement), decoherence should not have any effect on the measurement. In the opposite case, $\tau _{d}\gg t_{s}$, decoherence does not have much influence on the experiment during its active phase $-t_{s}<t<+t_{s}$.
\subsection{Non-intrusive measurements}
If the system under consideration involves a sufficiently large, statistically significant number of particles $N_{0}$, measurements conducted over one or a few particles should have a minimal effect on the system. In more accurate terms, this implies that the projection operator $\mathbb{P}$ associated with such a measurement projects the overall large Hilbert space onto a subspace whose dimension is only slightly smaller than that of the original space (i.e. the complexity of the original state is mostly preserved). Measuring interventions, however, involve decoherences and collapses, and, as demonstrated in experiments \cite{TunnZeno2014}, this can affect the rate of tunnelling. It is preferable to avoid any irreversible measurements, at least during the active phase of the experiment. This goal can be achieved by resorting to generalised or weak measurements \cite{WeakM1991,2S-QM2008,WeakMes2013,Stanford-consistent-histories}, which use an ancilla system.
The ancilla system is created well before $t=-\zeta t_{s}$ with a few quantum particles in a specific coherent state (say, spin down). At some time moment $t_{m}$ during the active phase, $-t_{s}\leq t_{m}\leq +t_{s}$, an interaction window $t_{m}-\Delta t/2\leq t\leq t_{m}+\Delta t/2$ is created, as shown in Figure \ref{fig2}. During this window, unitary interactions are allowed between the ancilla and tunnelling particles in section B. If interactions take place, the state of the ancilla particles changes (there must also be at least some minor change in the state of a particle, say, an alteration of its spin). The state of the ancilla is measured only after $t=+\zeta t_{s}$, and alterations of the original ancilla state are indicative of the presence of tunnelling particles in section B. These types of measurements allow us to detect the presence of tunnelling particles in section B without causing any decoherence or collapses during the active phase of the experiment.
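The logic of such an ancilla-based detection can be illustrated by a deliberately simplified two-qubit sketch (an assumption-laden toy model, not the scheme itself: one ``particle'' qubit with basis states for sections A and B, one ancilla spin, and a controlled-flip interaction):

```python
import numpy as np

# Toy ancilla scheme: the ancilla spin flips only if the particle is in
# section B.  The interaction is unitary, so nothing collapses during
# the active phase; reading the ancilla afterwards flags the particle's
# presence in B.  The amplitudes are illustrative assumptions.
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])      # particle in section A / B
down = np.array([1.0, 0.0])                            # ancilla prepared spin-down

alpha, beta = np.sqrt(0.9), np.sqrt(0.1)               # particle mostly in A
state0 = np.kron(alpha * A + beta * B, down)

X = np.array([[0.0, 1.0], [1.0, 0.0]])                 # spin flip
U = np.kron(np.diag([1.0, 0.0]), np.eye(2)) + np.kron(np.diag([0.0, 1.0]), X)

state1 = U @ state0                                    # unitary interaction window
p_up = np.linalg.norm(state1.reshape(2, 2)[:, 1]) ** 2 # later ancilla readout
```

The probability of finding the ancilla flipped equals the probability of the particle being in section B, while the interaction itself remains unitary during the active phase.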
\section{Tunnelling without decoherence\label{Sec5}}
While many tunnelling problems can be solved analytically \cite{LL3,Tunn2003}, our goal is to obtain sufficiently general but relatively simple and transparent solutions, which are suitable for further analysis involving decoherence. The initial conditions correspond to all particles located in section A, presumably in a maximally mixed state, although in this section we neglect interactions between the particles and focus on the interaction of the relevant pure states with the barrier. During the active stage of the experiment, $-t_{s}<t<+t_{s}$, particles tunnel to section B. In this section, the evolution of quantum particles is examined without the influence of decoherence, so that a coherent wave function remains coherent during the active phase. For a potential barrier specified by the delta function $V(x)=s\delta (x),$ we can easily evaluate the energy eigenfunctions. The probability of tunnelling is presumed to be small, $\sim \hat{s}^{-2}\ll 1$, where $\hat{s}=sm/k_{0}\hbar ^{2}.$ The problem under consideration involves many possible quantum states but can be effectively reduced to a two-state dynamic by introducing the partition states. The resonant, intermediate and non-resonant cases need to be considered separately. The details of the solutions are elaborated in \ref{SecA}.
\subsection{Evolution of the partition states}
As demonstrated in \ref{SecA}(\ref{sec_near_res}), resonant ($\eta \rightarrow 0$), near-resonant ($\left\vert \eta \right\vert \sim 1$) and intermediate ($1\ll \left\vert \eta \right\vert \ll \hat{s}$) modes form pairs --- the ``plus'' mode $\psi _{+}$ and the ``minus'' mode $\psi _{-}$ --- with very close energies and wave numbers. Here, $\eta =2\hat{s}\theta $ and $\theta $ is the phase of the deviation from resonant conditions, i.e. $\theta =0$ corresponds to the exact resonance (see (\ref{A2rm-jAjB}) for details). These modes are energy eigenstates and, according to (\ref{1sol}), evolve as \begin{equation} \psi _{\pm }=e^{-i\omega _{\pm }t^{\circ }}\left\vert \pm \right\rangle ,\ \ \ \ \left\vert \pm \right\rangle =\left\{ \begin{array}{c} A_{\pm }\sin (k_{0}x+...)\text{ \ \ in section A} \\ B_{\pm }\sin (k_{0}x+...)\text{ \ \ in section B} \end{array} \right. \end{equation} The conventional normalisation \begin{equation} \frac{x_{\text{\tiny A}}\left\vert A_{\pm }\right\vert ^{2}+x_{\text{\tiny B}}\left\vert B_{\pm }\right\vert ^{2}}{2}=\left\vert \tilde{A}_{\pm }\right\vert ^{2}+\left\vert \tilde{B}_{\pm }\right\vert ^{2}=1 \end{equation} is conveniently expressed in terms of the volume-adjusted amplitudes $\tilde{A}$ and $\tilde{B},$ which satisfy \begin{equation} \left( \frac{\tilde{B}}{\tilde{A}}\right) _{-}\left( \frac{\tilde{B}}{\tilde{A}}\right) _{+}=-1,\ \ \tilde{A}=A\sqrt{\frac{x_{\text{\tiny A}}}{2}},\ \ \tilde{B}=B\sqrt{\frac{x_{\text{\tiny B}}}{2}} \label{A5rat} \end{equation} according to (\ref{A2rm-rat}).
The stationary orthogonal (unitary) transformation of the basis \begin{eqnarray} \left\vert \text{A}\right\rangle &=&\frac{1}{\sqrt{1+\xi ^{2}}}\left( \ \big|+\big\rangle \ +\ \xi \big|-\big\rangle \right) \label{A5A} \\ \left\vert \text{B}\right\rangle &=&\frac{1}{\sqrt{1+\xi ^{2}}}\left( \xi \big|+\big\rangle \ -\ \big|-\big\rangle \right) \label{A5B} \end{eqnarray} converts the ``plus'' $\left\vert +\right\rangle $ and ``minus'' $\left\vert -\right\rangle $ eigenstates into the states $|$A$\rangle $ and $|$B$\rangle $. Unlike the states $\left\vert +\right\rangle $ and $\left\vert -\right\rangle ,$ the states $|$A$\rangle $ and $|$B$\rangle $ are not energy eigenstates. Here we denote \begin{equation} \xi =\left( \frac{\tilde{B}}{\tilde{A}}\right) _{+}=\sqrt{\frac{x_{\text{\tiny B}}}{x_{\text{\tiny A}}}}\left( \frac{B}{A}\right) _{+}=\sigma \left( \frac{x_{\text{\tiny A}}}{x_{\text{\tiny B}}}\right) ^{1/2}F_{+}\left( \eta ,\frac{x_{\text{\tiny B}}}{x_{\text{\tiny A}}}\right) \label{A5AB} \end{equation} where the function $F_{+}$ is defined by (\ref{A2rm-BA}) and $\sigma =\pm 1$ as specified in (\ref{A2rm-eq2}). The states $|$A$\rangle $ and $|$B$\rangle $ are referred to as the ``partition states'': to the leading order of our analysis, the state $|$A$\rangle $ implies exclusive localisation in section A of the box, while the state $|$B$\rangle $ corresponds to exclusive localisation in section B. Indeed, with the definition of $\xi $ given by (\ref{A5AB}) and the use of equations (\ref{3PSI}), (\ref{A2rm-jAjB})--(\ref{A2rm-rat}), the partition states are approximated by \begin{equation} \left\vert \text{A}\right\rangle \approx \left\{ \begin{array}{cc} \left( \frac{2}{x_{\text{\tiny A}}}\right) ^{1/2}\sin (k_{0}x+...) & \text{in section A} \\ 0 & \text{in section B} \end{array} \right. \end{equation} \begin{equation} \left\vert \text{B}\right\rangle \approx \left\{ \begin{array}{cc} 0 & \text{in section A} \\ \left( \frac{2}{x_{\text{\tiny B}}}\right) ^{1/2}\sin (k_{0}x+...) & \text{in section B} \end{array} \right. \end{equation} since $k_{+}=\Delta k_{+}+k_{0}\approx k_{-}=\Delta k_{-}+k_{0}\approx k_{0}$ at the leading order, and we select $A_{+}>0$ and $A_{-}>0$ to remove the freedom in choosing signs.
It is clear that the normalised amplitudes of the wave functions that correspond to the states $|$A$\rangle $ and $|$B$\rangle $ are given by $\tilde{A}$ and $\tilde{B}$. Since the states $|$A$\rangle $ and $|$B$\rangle $ are not energy eigenstates, their amplitudes $\tilde{A}$ and $\tilde{B}$ change in time as determined by the equation \begin{equation} i\hbar \frac{\partial }{\partial t}\left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] =\mathbb{H}\left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] ,\ \ \mathbb{H}=\mathbb{H}_{0}+\mathbb{H}_{1} \label{U_AB} \end{equation} where the Hamiltonian in the new basis is given by \begin{equation} \mathbb{H}_{0}=\left[ \begin{array}{cc} \frac{E_{+}+E_{-}}{2} & 0 \\ 0 & \frac{E_{+}+E_{-}}{2} \end{array} \right] ,\ \ \mathbb{H}_{1}=\frac{E_{+}-E_{-}}{1+\xi ^{2}}\left[ \begin{array}{cc} \frac{1-\xi ^{2}}{2} & \xi \\ \xi & \frac{\xi ^{2}-1}{2} \end{array} \right] \label{H0H1} \end{equation} Here, $\left\langle +\right\vert \mathbb{H}\left\vert +\right\rangle =E_{+}$, $\left\langle -\right\vert \mathbb{H}\left\vert -\right\rangle =E_{-}$ and $\left\langle +\right\vert \mathbb{H}\left\vert -\right\rangle =\left\langle -\right\vert \mathbb{H}\left\vert +\right\rangle =0$ since the states $\left\vert +\right\rangle $ and $\left\vert -\right\rangle $ are energy eigenstates.
According to (\ref{1sol}), this equation is solved by the following unitary evolution matrix $\mathbb{U}$ \begin{equation} \left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] =\underset{\mathbb{U}}{\underbrace{\frac{\Omega _{0}}{1+\xi ^{2}}\left[ \begin{array}{cc} \Omega +\xi ^{2}/\Omega & -2i\xi \sin \left( \frac{\Delta \omega }{2}t^{\circ }\right) \\ -2i\xi \sin \left( \frac{\Delta \omega }{2}t^{\circ }\right) & \xi ^{2}\Omega +1/\Omega \end{array} \right] }}\left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] _{t=t_{0}} \label{A5U} \end{equation} where we denote \begin{equation} E_{0}=\frac{E_{+}+E_{-}}{2}\approx \frac{k_{0}^{2}\hbar ^{2}}{2m},\ \ \ \Omega _{0}=e^{-\frac{iE_{0}t^{\circ }}{\hbar }},\ \ \ \Omega =e^{-i\frac{\Delta \omega }{2}t^{\circ }},\ \ \ t^{\circ }=t-t_{0}, \label{A5eq1} \end{equation} \begin{equation} \Delta \omega =\omega _{+}-\omega _{-}=\frac{E_{+}-E_{-}}{\hbar }\approx \frac{k_{0}\hbar }{m}\Delta k,\ \ \ \Delta k=\Delta k_{+}-\Delta k_{-}=\frac{1}{2\hat{s}}\frac{D^{1/2}}{x_{\text{\tiny A}}x_{\text{\tiny B}}} \label{A5eq2} \end{equation} and $D$ is specified in (\ref{A2rm-D}).
With the initial conditions \begin{equation} \left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] _{t=t_{0}}=\left[ \begin{array}{c} 1 \\ 0 \end{array} \right] \end{equation} which correspond to particle location in section A at $t=t_{0}$, the amplitudes of the partition states depend on the time $t^{\circ }=t-t_{0}$ and evolve as \begin{equation} \left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] =\frac{\Omega _{0}}{1+\xi ^{2}}\left[ \begin{array}{c} e^{-i\frac{\Delta \omega }{2}t^{\circ }}+\xi ^{2}e^{+i\frac{\Delta \omega }{2}t^{\circ }} \\ -2i\xi \sin \left( \frac{\Delta \omega }{2}t^{\circ }\right) \end{array} \right] \label{ABt} \end{equation} assuming that all particles are initially present only in section A. Note that the evolution preserves the normalisation $|\tilde{A}|^{2}+|\tilde{B}|^{2}=1,$ where the amplitudes $|\tilde{A}|^{2}$ and $|\tilde{B}|^{2}$ are conventionally interpreted as the probabilities of localisation $P($A$)=|\tilde{A}|^{2}$ and $P($B$)=|\tilde{B}|^{2}$ associated with this resonant pair. The extent of tunnelling (i.e. a quantity constraining $|\tilde{B}|$ and $P($B$)$ for any $t^{\circ }$), which is determined by $\varsigma =\left\vert \xi \right\vert /(1+\xi ^{2})$, remains small when $\left\vert \xi \right\vert \ll 1$ or $\left\vert \xi \right\vert \gg 1$.
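The evolution (\ref{ABt}) can be cross-checked numerically. The sketch below (with illustrative values of $\xi $, $E_{0}$, $E_{+}-E_{-}$ and $t^{\circ }$, and $\hbar =1$) builds the two-level Hamiltonian (\ref{H0H1}), exponentiates it via its spectral decomposition, and compares the result with the closed-form amplitudes:

```python
import numpy as np

# Two-level check of (ABt): exponentiate H = H0 + H1 from (H0H1) and
# compare with the closed-form amplitudes.  xi, E0, dE (= E_+ - E_-)
# and t are illustrative values; hbar = 1.
hbar, xi, E0, dE, t = 1.0, 0.7, 5.0, 0.3, 1.3

H = E0 * np.eye(2) + dE / (1 + xi**2) * np.array(
    [[(1 - xi**2) / 2, xi],
     [xi, (xi**2 - 1) / 2]])

w, V = np.linalg.eigh(H)                    # spectral decomposition of H
U = (V * np.exp(-1j * w * t / hbar)) @ V.T  # U = exp(-i H t / hbar)

AB = U @ np.array([1.0, 0.0])               # start with A-tilde = 1, B-tilde = 0

dw = dE / hbar                              # Delta-omega
Om0 = np.exp(-1j * E0 * t / hbar)           # Omega_0
A_t = Om0 * (np.exp(-1j * dw * t / 2) + xi**2 * np.exp(1j * dw * t / 2)) / (1 + xi**2)
B_t = Om0 * (-2j * xi * np.sin(dw * t / 2)) / (1 + xi**2)
```

The numerically exponentiated evolution reproduces the closed-form amplitudes and preserves $|\tilde{A}|^{2}+|\tilde{B}|^{2}=1$.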
\subsection{Tunnelling by resonant and near-resonant modes}
The resonant modes ($\eta \rightarrow 0$) are energy eigenstates that are resonant in all sections of the box, that is, resonant modes are resonant in both sections A and B. The near-resonant modes ($\left\vert \eta \right\vert \sim 1$) are close to the resonant conditions in A and B. For these modes, the characteristic transmission frequency $\hat{\omega}_{\text{r}}$ and the characteristic transmission time $\hat{\tau}_{\text{r}}=1/\hat{\omega}_{\text{r}}$ are evaluated from equations (\ref{A5AB}), (\ref{A5eq1}) and (\ref{A5eq2}): \begin{equation} \xi =\sigma \sqrt{\frac{x_{\text{\tiny A}}}{x_{\text{\tiny B}}}}F_{+}\left( \eta ,\frac{x_{\text{\tiny B}}}{x_{\text{\tiny A}}}\right) ,\ \ \ \Delta \omega =\frac{u_{0}}{2\hat{s}}\frac{D^{1/2}}{x_{\text{\tiny A}}x_{\text{\tiny B}}} \label{taunr} \end{equation} where $F_{+}$ and $D$ depend on $\eta =2\hat{s}\theta ,$ $x_{\text{\tiny A}}$ and $x_{\text{\tiny B}}$ as specified in (\ref{A2rm-dk})--(\ref{A2rm-D}).
For the resonance modes $\eta \rightarrow 0,$ these equations simplify according to (\ref{A4dk}) and (\ref{A4BA}): \begin{equation} \xi =\sigma \sqrt{\frac{x_{\text{\tiny B}}}{x_{\text{\tiny A}}}},\ \ \ \Delta \omega =\hat{\omega}_{\text{r}}=\frac{1}{\hat{\tau}_{\text{r}}}\approx \frac{u_{0}}{x_{0}\hat{s}}=\frac{1}{\tau _{0}\hat{s}} \label{taur} \end{equation} Here, we also introduce the useful parameters \begin{equation} \tau _{0}=\frac{x_{0}}{u_{0}},\ \ u_{0}=\frac{k_{0}\hbar }{m},\ \ x_{0}=\frac{2x_{\text{\tiny A}}x_{\text{\tiny B}}}{x_{\text{\tiny A}}+x_{\text{\tiny B}}} \end{equation} where $\tau _{0}$ is the characteristic flight time defined in terms of the characteristic length of the box section $x_{0}$ and the characteristic velocity $u_{0},$ which can be estimated using thermodynamic quantities: $mu_{0}^{2}=2\tilde{E}\approx k_{B}T$.
If $x_{\text{\tiny B}}=x_{\text{\tiny A}}=x_{0},$ then all modes are resonant, while the ``minus'' mode becomes symmetric and the ``plus'' mode antisymmetric --- this case is referred to as the resonance case (see \ref{SecA}(\ref{sec_res})). In the resonance case, the evolution of the partition states simplifies into \begin{equation} \left[ \begin{array}{c} \tilde{A} \\ \tilde{B} \end{array} \right] =\Omega _{0}\left[ \begin{array}{c} +\cos \left( \frac{\hat{\omega}_{\text{r}}}{2}t^{\circ }\right) \\ -i\sin \left( \frac{\hat{\omega}_{\text{r}}}{2}t^{\circ }\right) \end{array} \right] \end{equation}
\subsection{Tunnelling by intermediate and non-resonant modes}
We now examine the limit $\eta \rightarrow \pm \infty $ and turn to the consideration of the intermediate ($1\ll \left\vert \eta \right\vert \ll \hat{s}$) and non-resonant ($\left\vert \eta \right\vert \sim \hat{s}$) modes, which, as shown in \ref{SecA}(\ref{sec_non_res}), must be either A-resonant or B-resonant, assuming that $\hat{s}\gg 1$. Generally, the links between the plus and minus modes are preserved for the intermediate modes but weaken for the non-resonant modes, which do not necessarily form pairs. \ref{SecA}(\ref{sec_non_res}) indicates that $\left\vert B/A\right\vert \sim 1/\left\vert \eta \right\vert \ll 1$ for A-resonant modes and $\left\vert B/A\right\vert \sim \left\vert \eta \right\vert \gg 1$ for B-resonant modes. Since the initial wave function $\psi _{0}=\left. \psi \right\vert _{t=t_{0}}$ is localised exclusively in section A, the A-resonant modes dominate the expansion in energy eigenstates (\ref{1sol})--(\ref{eig1r}). The components with different values of $k$ and $\omega $ quickly lose phase correlation, and we focus on modes that have close $k$ and $\omega $. If $\eta \rightarrow +\infty ,$ the ``plus'' branch corresponds to A-resonant modes and the ``minus'' branch corresponds to B-resonant modes.
Equations (\ref{A5A}), (\ref{A5B}) and (\ref{ABt}) are still valid for the intermediate modes, but the solution parameters are evaluated differently: \begin{equation} \xi =\sigma \frac{\sqrt{x_{\text{\tiny B}}x_{\text{\tiny A}}}}{x_{\text{\tiny A}}+x_{\text{\tiny B}}}\frac{1}{\eta },\ \ \ \Delta k\approx \frac{\left\vert \eta \right\vert }{\hat{s}}\frac{1}{x_{0}},\ \ \ \Delta \omega =\hat{\omega}_{\text{i}}=\frac{1}{\hat{\tau}_{\text{i}}}\sim \frac{\left\vert \eta \right\vert }{\hat{s}}\frac{u_{0}}{x_{0}} \label{IR-1} \end{equation} from equations (\ref{A5eq1}), (\ref{A5eq2}), (\ref{A2dk_inf}) and (\ref{A2BA_inf}), assuming $\eta =2\hat{s}\theta \rightarrow \pm \infty $. For the non-resonant modes, we can use the same estimates but put $\left\vert \theta \right\vert \approx 1$, $\left\vert \eta \right\vert \sim \hat{s}$ and estimate \begin{equation} \left\vert \xi \right\vert \sim \frac{1}{\hat{s}}\ll 1,\ \ \ \Delta k\sim \frac{1}{x_{0}},\ \ \ \Delta \omega =\hat{\omega}_{\text{n}}=\frac{1}{\hat{\tau}_{\text{n}}}\sim \frac{u_{0}}{x_{0}}=\frac{1}{\tau _{0}} \label{NR-1} \end{equation} The modes away from the resonance conditions are characterised by a relatively small extent of tunnelling determined by $\varsigma \approx \xi $.
The probability of localisation in section B delivered by the intermediate modes evolves periodically
\begin{equation}
P(\text{B},t)=\left\vert \tilde{B}\right\vert ^{2}\approx 4\xi ^{2}\sin ^{2}\left( \frac{\Delta \omega }{2}\left( t-t_{0}\right) \right) \sim \frac{1}{\eta ^{2}}\ll 1  \label{NR-2}
\end{equation}
and becomes small, $\sim 1/\hat{s}^{2}$, for non-resonant modes (despite progressing faster in time than the resonant modes: $\left\vert t-t_{0}\right\vert \sim \hat{\tau}_{\text{n}}\sim \tau _{0}\ll \hat{\tau}_{\text{r}}$). Note that the resonant modes also achieve probability $P(\text{B},t)\sim 1/\hat{s}^{2}$ over time $t\sim \tau _{0}$ but, unlike the non-resonant modes, they proceed further to deliver $P(\text{B},t)\sim 1$ when $t\sim \hat{\tau}_{\text{r}}\sim \tau _{0}\hat{s}\gg \tau _{0}$.
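The small-amplitude oscillation (\ref{NR-2}) can be cross-checked against exact two-level dynamics. The following minimal numerical sketch (the values of $\xi $ and $\Delta \omega $ are illustrative assumptions, not taken from the analysis above) confirms that $4\xi ^{2}\sin ^{2}(\Delta \omega (t-t_{0})/2)$ reproduces the exact transition probability of a two-level Hamiltonian whose mixing amplitude is set by $\xi $:

```python
import numpy as np

# Minimal two-level sketch of one intermediate mode pair (illustrative
# values; xi and domega are assumed here, not derived in the text).
xi = 0.05          # small tunnelling parameter, |xi| ~ 1/|eta|
domega = 1.0       # splitting between the "plus" and "minus" modes

# Two-level Hamiltonian (hbar = 1) whose off-diagonal mixing is set by xi
c = np.sqrt(1.0 - 4.0 * xi**2)
H = 0.5 * domega * np.array([[c, 2.0 * xi],
                             [2.0 * xi, -c]])

def prob_B(t):
    """Exact transition probability |<B| exp(-i H t) |A>|^2."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return np.abs(U[1, 0])**2

t = 1.3
approx = 4.0 * xi**2 * np.sin(0.5 * domega * t)**2   # estimate of (NR-2)
exact = prob_B(t)
```

The maximal transferred probability stays bounded by $4\xi ^{2}\ll 1$, in line with the estimate above.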
The estimates of this section, (\ref{NR-1}) and (\ref{NR-2}), remain the same even if a non-resonant mode (say, A-resonant but not B-resonant) is not explicitly coupled with any B-resonant mode. Indeed, over time $t\sim \hat{\tau}_{\text{n}}\sim \tau _{0}$, this mode would lose correlations with the other modes and, according to (\ref{A2Ar}), can contribute only a small probability $\sim 1/\hat{s}^{2}$ to particles appearing in section B at any time $t\gtrsim \hat{\tau}_{\text{n}}$. Without a sufficiently large fraction of resonant modes, the probability of finding tunnelling particles in section B remains small indefinitely. While non-resonant modes contribute less to tunnelling, they are often more numerous than the resonant and intermediate modes.
The fraction of resonant modes is determined by geometry, i.e. by $x_{\text{A}}$ and $x_{\text{B}}$. For example, if $x_{\text{A}}=2x_{\text{B}}$, then every second A-resonant mode is also B-resonant. Note that under the condition $x_{\text{A}}\sim x_{\text{B}}$, the fraction of resonant and near-resonant modes cannot fall below $\sim 1/\hat{s}$. First, let us assume $x_{\text{A}}\geq x_{\text{B}}$ or, otherwise, swap A and B. \ref{SecA}(\ref{sec_near_res}) indicates that $\theta \sim 1/\hat{s}$ to achieve near-resonance conditions. Equations (\ref{A2rm-jAjB}) result in $j_{\text{B}}=\gamma j_{\text{A}}-\theta ^{\prime }$, where $j_{\text{B}}$ and $j_{\text{A}}$ are integers, $\gamma =x_{\text{B}}/x_{\text{A}}\leq 1$, and $\theta ^{\prime }=\theta (1+x_{\text{B}}/x_{\text{A}})/\pi \sim 1/\hat{s}$ is small.
Assuming that $j_{\text{B}}$ can reach $1$ for typical energies, the overall fraction of resonant and near-resonant modes, $\left\vert j_{\text{B}}-\gamma j_{\text{A}}\right\vert \lesssim 1/\hat{s}$, among all modes $j_{\text{A}}=1,2,3,...$ and $j_{\text{B}}=1,2,3,...$, cannot be smaller than $\sim 1/\hat{s}$. Therefore, despite being relatively few in number, the resonant and near-resonant modes are sufficiently numerous to dominate tunnelling.
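The lower bound $\sim 1/\hat{s}$ on the fraction of near-resonant pairs can be illustrated by direct counting. In the sketch below, the values of $\hat{s}$ and $\gamma =x_{\text{B}}/x_{\text{A}}$ are illustrative assumptions; the script counts the modes $j_{\text{A}}$ whose nearest integer $j_{\text{B}}$ satisfies $\left\vert j_{\text{B}}-\gamma j_{\text{A}}\right\vert \leq 1/\hat{s}$:

```python
import numpy as np

# Counting sketch for the fraction of near-resonant mode pairs
# (s_hat and gamma are illustrative assumptions).
s_hat = 50.0       # large barrier-strength parameter
gamma = 0.7        # geometry ratio x_B / x_A <= 1
N = 2000           # number of A-modes sampled

jA = np.arange(1, N + 1)
# mismatch between gamma*j_A and the nearest integer j_B
mismatch = np.abs(np.round(gamma * jA) - gamma * jA)
fraction = np.mean(mismatch <= 1.0 / s_hat)
```

For this rational $\gamma $ the near-resonant pairs recur periodically; an irrational $\gamma $ gives an approximately equidistributed mismatch, with the fraction again of order $1/\hat{s}$ or larger.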
\section{Effect of decoherence on tunnelling\label{Sec6}}
This section examines the effects of decoherence on tunnelling, which appear to be substantial and, therefore, detectable in experiments. We begin with a general consideration of decoherence leading to a specific form of the \textit{Lindblad equation} \cite{LindbladG1976} that corresponds to our understanding of decoherence. The exact physical mechanism responsible for decoherence remains unknown and is referred to here as time priming; the main parameter that quantifies decoherence is its characteristic frequency $\omega _{d}$. Decoherence is expected to result in loss of coherent interferences without any substantial unitary interactions with the environment \cite{ImryYoseph2002,Dattagupta2004}, measurements \cite{TunnZeno2014}, or any other effects that may cause a significant redistribution of energy. With the exception of the last subsection, we consider effects that are intrinsic or effectively intrinsic. While decoherence triggers equilibration and thermalisation, the latter does involve a redistribution of energy and should not be confused with decoherence, which is deemed to have negligible energy effects. The obtained form of the Lindblad equation is converted into a \textit{Pauli master equation} \cite{Pauli1928} for state probabilities, which is subsequently used for determining the effect of decoherence on the tunnelling rates in the resonant and non-resonant cases.
\subsection{Decoherence in the context of time priming.}
While the unitary evolution of a quantum system is fully specified by the Schr\"{o}dinger equation, our knowledge of decoherence and collapses is much more limited. Let us illustrate this point by a simple example: consider two states of a quantum system
\begin{equation}
\psi _{+}=\frac{1}{\sqrt{2}}\left( \left\vert E_{1}\right\rangle +\left\vert E_{2}\right\rangle \right) \text{ and }\psi _{-}=\frac{1}{\sqrt{2}}\left( \left\vert E_{1}\right\rangle -\left\vert E_{2}\right\rangle \right)
\end{equation}
that are expressed in terms of the energy eigenstates $\left\vert E_{1}\right\rangle $ and $\left\vert E_{2}\right\rangle $. The superposition state $\psi =(\psi _{+}+\psi _{-})/2^{1/2}=\left\vert E_{1}\right\rangle $ would have its energy measured as $E_{1}$. Assume that $\psi $ decoheres into a mixture of $\psi _{+}$ and $\psi _{-}$ with equal probabilities. Measuring the energy of either $\psi _{+}$ or $\psi _{-}$ would produce $E_{1}$ or $E_{2}$ with equal probability. The choice of $\psi _{+}$ and $\psi _{-}$ as a decoherence basis that does not coincide with the energy eigenstates thus results in a substantial energy change in the system. In the context of the direction of time, however, decoherence is commonly understood as loss of interference between the components with minimal energy interactions. This, of course, does not exclude other forms of decoherence with stronger interactions and significant energy exchanges, and these other forms may be important under some conditions. In the present work, however, we restrict our attention to less energetic forms of decoherence that can be associated with the time primer.
Still, choosing exact energy eigenstates as the basis for decoherence does not solve the problem --- these eigenstates continue to exist without interacting with each other, and this is not a particularly interesting case. Analysis of decoherence becomes most meaningful when the decoherence basis is selected along with the eigenstates of the principal part $\mathbb{H}_{0}$ of the Hamiltonian $\mathbb{H}=\mathbb{H}_{0}+\mathbb{H}^{\prime }$, while a smaller interference component $\mathbb{H}^{\prime }$ acts along with decoherence. It is clear that splitting the Hamiltonian in two parts requires some physical grounds for doing so. For example, $\mathbb{H}_{0}$ may be the Hamiltonian that is intrinsically associated with a system, while $\mathbb{H}^{\prime }$ corresponds to external influence or some other form of interference. In the context of particle physics, $\mathbb{H}_{0}$ is conventionally related to strong interactions, while $\mathbb{H}^{\prime }$ pertains to weak interactions, which are known to break the symmetry of the directions of time in CP violations (and this seems consistent with the time-directional character of decoherence). In any case, the decoherence basis that is associated with eigenstates of some Hamiltonian $\mathbb{H}_{0}$ must be orthogonal (and is conventionally selected orthonormal). The wave functions undergo unitary transformations $\left\vert \psi \right\rangle _{t^{\prime }}=\mathbb{U}(t^{\prime }-t)\left\vert \psi \right\rangle _{t}$ but may also experience decoherence events, where the projections $\vert d_{j}\rangle \langle d_{j}\vert \psi \rangle $ of every wave function $\psi $ onto the decoherence basis $\left\vert d_{1}\right\rangle ,...,\left\vert d_{n}\right\rangle $ lose their coherence (completely or partially).
As considered above, $\langle d_{i}\vert d_{j}\rangle =\delta _{ij}$ since $\left\vert d_{j}\right\rangle $ satisfies $\mathbb{H}_{0}\left\vert d_{j}\right\rangle =E_{j}^{\circ }\left\vert d_{j}\right\rangle $.
Consider the density matrix $\mathbf{\rho }$, which generally evolves by unitary transformations $\mathbf{\rho }^{\prime }=\mathbb{U}\mathbf{\rho }\mathbb{U}^{\dagger }$ but also experiences decoherence events, where it is transformed by the Kraus operators $\mathbb{K}_{j}$
\begin{equation}
\mathbf{\rho }^{\prime }\mathbf{=}\sum_{j=0}^{n}\mathbb{K}_{j}\mathbf{\rho }\mathbb{K}_{j}^{\dagger },\ \ \ \ \ \ \sum_{j=0}^{n}\mathbb{K}_{j}^{\dagger }\mathbb{K}_{j}=\mathbb{I}  \label{Kraus}
\end{equation}
Note that equation (\ref{Kraus}) represents a specific form of the \textit{Kraus transformation}
\begin{equation}
\mathbb{K}_{j}=\sqrt{\lambda }\left\vert d_{j}\right\rangle \left\langle d_{j}\right\vert ,\ \ \ \mathbb{K}_{0}=\sqrt{(1-\lambda )}\mathbb{I}
\end{equation}
that corresponds to the specific action of decoherence discussed above (assuming $0\leq \lambda \leq 1$) and, at least in principle, can be associated with the time primer. The last constraint in (\ref{Kraus}) is satisfied since, obviously, $\sum_{j}\left\vert d_{j}\right\rangle \left\langle d_{j}\right\vert =\mathbb{I}$. If $\lambda =0$, transformation (\ref{Kraus}) reduces to the identity $\mathbf{\rho }^{\prime }\mathbf{=}\mathbb{I}\mathbf{\rho }$ and no decoherence occurs. If $\lambda =1$, the projections on the decoherence basis become fully independent. The transformation (\ref{Kraus}) with $\lambda ^{\prime }=1-(1-\lambda )^{-1}$ reverses (\ref{Kraus}) with $\lambda $ and, therefore, represents recoherence.
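The channel (\ref{Kraus}) can be verified directly. The sketch below (an assumed three-dimensional decoherence basis, illustrative only) checks the completeness relation and shows that $\lambda =1$ removes all off-diagonal elements of $\mathbf{\rho }$ while preserving its diagonal and trace:

```python
import numpy as np

# Kraus channel from the text: K_j = sqrt(lam)|d_j><d_j|, K_0 = sqrt(1-lam)*I
# (n and lam are illustrative assumptions).
n, lam = 3, 1.0
I = np.eye(n)
kraus = [np.sqrt(1 - lam) * I] + \
        [np.sqrt(lam) * np.outer(I[j], I[j]) for j in range(n)]

# completeness: sum_j K_j^dagger K_j = I
completeness = sum(K.conj().T @ K for K in kraus)

# apply the channel to a random density matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = A @ A.conj().T
rho /= np.trace(rho).real          # normalise: tr(rho) = 1
rho_out = sum(K @ rho @ K.conj().T for K in kraus)
```

With $0<\lambda <1$ the same code shrinks the off-diagonal elements by the factor $1-\lambda $ instead of removing them completely.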
While Kraus operators are convenient for characterising discrete decoherence events that interrupt unitary evolution, continuous decoherence can be conventionally described by the Lindblad operators $\mathbb{L}_{j}=\mathbb{K}_{j}/\sqrt{\lambda }$ so that
\begin{equation}
\mathbf{\rho }^{\prime }\mathbf{=}(1-\lambda )\mathbf{\rho }+\lambda \sum_{j=1}^{n}\mathbb{L}_{j}\mathbf{\rho }\mathbb{L}_{j}^{\dagger }
\end{equation}
Assuming that $\lambda =\Delta t/\tau _{d}$, we obtain
\begin{equation}
\Delta \mathbf{\rho }=\mathbf{\rho }^{\prime }-\mathbf{\rho =}\frac{\Delta t}{i\hbar }\left[ \mathbb{H},\mathbf{\rho }\right] +\frac{\Delta t}{\tau _{d}}\left( \sum_{j=1}^{n}\mathbb{L}_{j}\mathbf{\rho }\mathbb{L}_{j}^{\dagger }-\mathbf{\rho }\right)  \label{Lin1}
\end{equation}
where the Hamiltonian term reflects the time differential of the unitary transformation $\mathbf{\rho }^{\prime }=\mathbb{U}\mathbf{\rho }\mathbb{U}^{\dagger }$. Dividing this equation by $\Delta t$ and taking the limit $\Delta t\rightarrow 0$ leads to a specific, simple form of the \textit{Lindblad equation}
\begin{equation}
\frac{\partial \mathbf{\rho }}{\partial t}=\frac{\left[ \mathbb{H},\mathbf{\rho }\right] }{i\hbar }+\frac{1}{\tau _{d}}\left( \sum_{j=1}^{n}\mathbb{L}_{j}\mathbf{\rho }\mathbb{L}_{j}^{\dagger }-\mathbf{\rho }\right)  \label{Lin2}
\end{equation}
The simplification of the Lindblad equation is due to the relation $\sum_{j}\mathbb{L}_{j}^{\dagger }\mathbb{L}_{j}=\mathbb{I}$, which is valid here but not satisfied in the general case. The value $\tau _{d}$ represents the characteristic decoherence time.
Since $\mathbb{L}_{j}$ are Hermitian and $\tau _{d}>0$ in this form of the Lindblad equation, the evolution governed by (\ref{Lin2}) does not decrease entropy \cite{Abe2017}.
While the physical implications of discrete and continuous decoherence should be similar, we, for the sake of transparency, consider discrete decoherence events specified by (\ref{Kraus}) with $\lambda =1$ and spaced by the characteristic decoherence time $\tau _{d}$. The decoherence events suppress all non-diagonal elements of the density matrix, while the unitary evolution $\mathbf{\rho }^{\prime }\mathbf{=}\mathbb{U}\mathbf{\rho }\mathbb{U}^{\dagger }$ persists between the decoherence events \cite{SciRep2016}. Hence, the density matrix is transformed by the unitary evolution and a subsequent decoherence event as
\begin{equation*}
\left[
\begin{array}{ccc}
\rho _{11} &  & 0 \\
& \ddots &  \\
0 &  & \rho _{nn}
\end{array}
\right] \underset{t\rightarrow t^{\prime }}{\longrightarrow }\left[
\begin{array}{ccc}
\rho _{11}^{\prime } &  & 0 \\
& \ddots &  \\
0 &  & \rho _{nn}^{\prime }
\end{array}
\right] ,\ \ \ \rho _{jj}^{\prime }=\sum_{k}\left\vert U_{kj}\right\vert ^{2}\rho _{kk}
\end{equation*}
over each of the intervals $[t,t^{\prime }]$, where $t^{\prime }=t+\tau _{d}$. Here, $U_{kj}$ represent the components of the unitary evolution operator $\mathbb{U}(t^{\prime }-t)$. Considering long times $t>\tau _{d}$, we conclude that the probabilities $P_{j}=\rho _{jj}$ are transformed according to
\begin{equation}
\frac{dP_{j}}{dt}=\frac{1}{\tau _{d}}\sum_{k}\left( \left\vert U_{kj}\right\vert ^{2}-\delta _{kj}\right) P_{k}  \label{PauliME}
\end{equation}
which, essentially, is a \textit{Pauli master equation} describing the evolution of a Markov chain with transitional probabilities given by the deviations of $\left\vert U_{kj}\right\vert ^{2}$ from the identity matrix.
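The discrete step behind (\ref{PauliME}) can be checked numerically. In this sketch (an illustrative three-level system with an assumed random symmetric Hamiltonian), one cycle of unitary evolution over $\tau _{d}$ followed by a full decoherence event ($\lambda =1$) reproduces $\rho _{jj}^{\prime }=\sum_{k}\left\vert U_{kj}\right\vert ^{2}\rho _{kk}$:

```python
import numpy as np

# One decoherence cycle: unitary evolution over tau_d, then removal of
# off-diagonal elements (illustrative random symmetric Hamiltonian).
rng = np.random.default_rng(1)
n = 3
H = rng.normal(size=(n, n))
H = H + H.T                         # assumed (real symmetric) Hamiltonian
tau_d = 0.4

w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * tau_d)) @ V.conj().T   # U = exp(-i H tau_d)

P = np.array([1.0, 0.0, 0.0])       # initially localised in state 1
rho = np.diag(P).astype(complex)
rho = U @ rho @ U.conj().T          # unitary step
rho = np.diag(np.diag(rho))         # decoherence event (lambda = 1)

P_new = (np.abs(U)**2).T @ P        # Pauli-master-equation update
```

Unitarity of $\mathbb{U}$ guarantees that the total probability $\sum_{j}P_{j}$ is conserved by this update.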
Despite the existence of many theories \cite{Joos2003,Schlosshauer2007,Zeh2007}, there is no certainty about the exact effect of decoherence on wave functions distributed in space. We, however, expect loss of coherence between energy eigenstates with substantially different energies, and we are primarily interested in losses of coherence between the branches of the wave functions located in sections A and B, which convert coherent waves into a mixture of probabilities for particle presence in these sections. In any case, decoherence can be characterised by its principal parameter --- the characteristic frequency of decoherence $\omega _{d}$ or the characteristic decoherence time $\tau _{d}=1/\omega _{d}$, which features in equation (\ref{PauliME}).
\subsection{Effect on the resonant and near-resonant modes}
If the characteristic time of decoherence is longer than the resonance tunnelling time, $\tau _{d}>\hat{\tau}_{\text{r}}\approx \tau _{0}\hat{s}$, decoherence has little effect on the tunnelling rate, but even infrequent decoherence changes the character of the solution --- it relaxes towards stationary distributions instead of oscillating indefinitely. When, however, the decoherence time becomes short, $\tau _{d}<\hat{\tau}_{\text{r}}$ (but not too short, $\tau _{d}>\tau _{0}$), it converts the unitary evolution of the resonance modes, which is specified by (\ref{U_AB})-(\ref{A5U}) and (\ref{ABt}) in the basis of the partition states, into a Markov process, which according to (\ref{PauliME}) is given by
\begin{equation}
\frac{d}{dt}\left[
\begin{array}{c}
P_{\text{A}} \\
P_{\text{B}}
\end{array}
\right] =\frac{1}{\tau _{d}}\left[
\begin{array}{cc}
-W & W \\
W & -W
\end{array}
\right] \left[
\begin{array}{c}
P_{\text{A}} \\
P_{\text{B}}
\end{array}
\right] =\frac{1}{\tilde{\tau}_{\text{r}}}\left[
\begin{array}{cc}
-1 & 1 \\
1 & -1
\end{array}
\right] \left[
\begin{array}{c}
P_{\text{A}} \\
P_{\text{B}}
\end{array}
\right]  \label{Markov2}
\end{equation}
where $\tilde{\tau}_{\text{r}}=\tau _{d}/W$, $W=\left\vert U_{\text{AB}}\right\vert ^{2}$ and $\left\vert U_{\text{AB}}\right\vert \sim 2\Xi \sin \left( \Delta \omega \tau _{d}/2\right) ,\ \ \Xi =\left\vert \xi \right\vert /(1+\xi ^{2})$ is the off-diagonal component of the unitary evolution matrix (\ref{A5U}) and $\Delta \omega $ is given by (\ref{taunr}). Note that, according to (\ref{Markov2}), the extent of tunnelling is no longer limited by $\Xi $. Since $\Delta \omega ^{-1}\sim \hat{\tau}_{\text{r}}>\tau _{d}$, the sine can be expanded: $\left\vert U_{\text{AB}}\right\vert \sim \Xi \Delta \omega \tau _{d}$. For exact resonance, we assume $x_{\text{B}}\sim x_{\text{A}}$, put $\eta =0$, $\Xi \sim 1$ and obtain
\begin{equation}
\tilde{\omega}=\frac{1}{\tilde{\tau}}\approx \hat{\omega}_{\text{r}}^{2}\tau _{d}=\frac{\tau _{d}}{\hat{\tau}_{\text{r}}^{2}}=\frac{\tau _{d}}{\tau _{0}^{2}\hat{s}^{2}}  \label{w-r-d}
\end{equation}
This expression reflects the quantum Zeno effect, which is well known and has recently been demonstrated in experiments \cite{TunnZeno2014} applying frequent measurements to quantum tunnelling. Increasing the frequency of decoherence reduces the rate of tunnelling for resonant modes.
Note that, even in the presence of decoherence, the resonant modes do not lead to the same density of particles in both sections (when $x_{\text{A}}\neq x_{\text{B}}$) but to equal probabilities of being in these sections, $P_{\text{A}},P_{\text{B}}\rightarrow 1/2$ as $t\rightarrow \infty $. This is consistent with general expectations of statistical quantum mechanics: the amplitudes of modes having similar energies are expected to be similar under equilibrium conditions.
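The relaxation of the Markov process (\ref{Markov2}) towards $P_{\text{A}}=P_{\text{B}}=1/2$ can be illustrated by a minimal forward-Euler sketch (the rate $1/\tilde{\tau}_{\text{r}}$ is an assumed illustrative value):

```python
import numpy as np

# Forward-Euler sketch of the two-state Markov relaxation (assumed rate).
rate = 1.0 / 3.0                   # illustrative 1 / tilde-tau_r
dt, T = 0.01, 40.0
M = rate * np.array([[-1.0, 1.0],
                     [1.0, -1.0]])
P = np.array([1.0, 0.0])           # particle initially in section A
for _ in range(int(T / dt)):
    P = P + dt * (M @ P)
```

The deviation from $1/2$ decays as $e^{-2t/\tilde{\tau}_{\text{r}}}$, and the total probability is conserved exactly because the columns of the rate matrix sum to zero.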
\subsection{Effect on the non-resonant and intermediate modes.}
For non-resonant modes, we can estimate $\Delta \omega \sim 1/\tau _{0}$ and $\left\vert U_{\text{AB}}(t)\right\vert \sim 1/\hat{s}$ for any $t\gtrsim \tau _{0}$ --- see equations (\ref{NR-1}) and (\ref{A5U}). Hence, decoherence of a moderate intensity, $\tau _{d}>\tau _{0}$, leads to $W=\left\vert U_{\text{AB}}\right\vert ^{2}\sim 1/\hat{s}^{2}$. The characteristic frequency $\tilde{\omega}$ and time $\tilde{\tau}$ of tunnelling associated with decoherence of non-resonant modes become
\begin{equation}
\tilde{\omega}=\frac{1}{\tilde{\tau}}\approx \frac{1}{\tau _{d}\hat{s}^{2}}=\frac{\omega _{d}}{\hat{s}^{2}}  \label{w-n-d}
\end{equation}
Decoherence promotes tunnelling carried by non-resonant modes and impedes tunnelling conducted by resonant modes. While $\tilde{\omega}$ specified by (\ref{w-n-d}) is generally smaller than $\tilde{\omega}$ given by (\ref{w-r-d}) (assuming $\tau _{d}>\tau _{0}$), the non-resonant modes are likely to be more numerous. The A-resonant modes are primarily responsible for tunnelling from A to B, and the B-resonant modes are primarily responsible for tunnelling from B to A. The overall tunnelling rate is an aggregate of the tunnelling rates produced by each mode and estimated by (\ref{w-n-d}). Note that the ratio of the number of A-resonant modes to the number of B-resonant modes is roughly proportional to $x_{\text{A}}/x_{\text{B}}$ for a given small energy interval; hence, in equilibrium (or near-equilibrium) conditions, where modes with close energies must have similar amplitudes, the probability of finding a particle in a particular section (e.g. A or B) is proportional to the volume of this section. Note that Markov models (\ref{Markov2}) do not constrain the extent of tunnelling by its unitary value $\Xi $, but promote equidistribution between modes.
The estimates for the intermediate modes are similar: $\Delta \omega \sim \left\vert \eta \right\vert /(\hat{s}\tau _{0})$ and $\left\vert U_{\text{AB}}(t)\right\vert \sim \left\vert \xi \right\vert \sim 1/\left\vert \eta \right\vert $ for any $t\gtrsim \tau _{0}\hat{s}/\left\vert \eta \right\vert $, where the parameter $\eta =2\hat{s}\theta $ is moderately large, $1\ll \left\vert \eta \right\vert =2\hat{s}\left\vert \theta \right\vert \ll \hat{s}$, and determines how far the mode is from the resonance, while $\left\vert \eta \right\vert \sim \hat{s}$ corresponds to non-resonant modes. The tunnelling rate depends on the relative values of $\Delta \omega $ and $\omega _{d}$:
\begin{equation}
\tilde{\omega}=\frac{1}{\tilde{\tau}}\approx \left\{
\begin{array}{cc}
\omega _{d}/\eta ^{2}, & \omega _{d}\leq \frac{\left\vert \eta \right\vert }{\tau _{0}\hat{s}} \\
\frac{1}{\tau _{0}^{2}\hat{s}^{2}\omega _{d}}, & \frac{1}{\tau _{0}}\geq \omega _{d}\geq \frac{\left\vert \eta \right\vert }{\tau _{0}\hat{s}}
\end{array}
\right.  \label{w-d-i}
\end{equation}
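The two branches of (\ref{w-d-i}) match at the crossover $\omega _{d}=\left\vert \eta \right\vert /(\tau _{0}\hat{s})$, where the tunnelling rate of an intermediate mode is maximal. A quick numerical check (all parameter values are illustrative assumptions satisfying $1\ll \left\vert \eta \right\vert \ll \hat{s}$):

```python
# Piecewise estimate of the intermediate-mode tunnelling rate
# (tau_0, s_hat, eta are illustrative assumptions).
tau_0, s_hat, eta = 1.0, 100.0, 10.0

def rate(omega_d):
    if omega_d <= abs(eta) / (tau_0 * s_hat):
        return omega_d / eta**2                     # decoherence-assisted branch
    return 1.0 / (tau_0**2 * s_hat**2 * omega_d)    # Zeno-suppressed branch

w_star = abs(eta) / (tau_0 * s_hat)   # crossover frequency
left = rate(w_star)
right = rate(w_star * (1.0 + 1e-9))
```

The rate grows linearly in $\omega _{d}$ below the crossover and decays as $1/\omega _{d}$ above it, so $\omega _{d}=w_{\ast }$ is indeed the maximum.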
\subsection{The effect of intensive decoherence.}
Finally, as decoherence becomes more intensive and $\tau _{d}\lesssim \tau _{0}$, coherent solutions cannot be sustained within each section of the box --- the model of standing and evolving waves gives way to quantum particles represented by wave packets. Coherent solutions stretching from one side of a section to the other are meaningless if the characteristic decoherence time is shorter than the time of reflection from the walls. In these conditions, we necessarily use the transmission $\left\vert q\right\vert ^{2}$ and reflection $\left\vert r\right\vert ^{2}$ probabilities associated with tunnelling, which are specified by (\ref{A1-ass}) for the case under consideration. There is no longer any difference between the resonant and non-resonant modes. The probabilities of location in section A and section B are governed by the following Markov chain
\begin{equation}
-\frac{dP_{\text{A}}}{dt}=\frac{dP_{\text{B}}}{dt}=\frac{u_{0}\left\vert q\right\vert ^{2}}{2}\left( \frac{P_{\text{A}}}{x_{\text{A}}}-\frac{P_{\text{B}}}{x_{\text{B}}}\right)
\end{equation}
where the intensity of collisions with the barrier is evaluated to be proportional to $u_{0}/(2x)$.
Assuming $x_{\text{A}}\approx x_{\text{B}}\approx x_{0}$, with $\left\vert q\right\vert ^{2}$ given by (\ref{A1-ass}), the transmission frequency becomes
\begin{equation}
\tilde{\omega}=\frac{1}{\tilde{\tau}}\approx \frac{u_{0}\left\vert q\right\vert ^{2}}{x_{0}}=\frac{1}{\hat{s}^{2}\tau _{0}}  \label{w-t-d}
\end{equation}
Note the consistency of (\ref{w-t-d}) with the previous estimates (\ref{w-r-d}) and (\ref{w-n-d}), which can be converted into (\ref{w-t-d}) by substituting $\tau _{d}=\tau _{0}$. The model (\ref{w-t-d}) based on tunnelling probabilities should be valid for a wide range of small decoherence times $\tau _{d}\lesssim \tau _{0}$, perhaps as long as decoherence does not interfere with the actual passage through the barrier.
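In this strong-decoherence regime the Markov chain equilibrates the particle density rather than the probabilities: $P_{\text{A}}/x_{\text{A}}=P_{\text{B}}/x_{\text{B}}$, i.e. $P_{\text{A}}\rightarrow x_{\text{A}}/(x_{\text{A}}+x_{\text{B}})$. A forward-Euler sketch with assumed (illustrative) section sizes and rate constant:

```python
# Forward-Euler sketch of the strong-decoherence Markov chain
# (xA, xB and k = u0*|q|^2/2 are illustrative assumptions).
xA, xB = 2.0, 1.0
k = 0.5
dt, T = 0.001, 50.0
PA, PB = 1.0, 0.0                  # particle initially in section A
for _ in range(int(T / dt)):
    flux = k * (PA / xA - PB / xB)
    PA, PB = PA - dt * flux, PB + dt * flux
```

Unequal section sizes thus lead to unequal section probabilities but equal densities, in contrast with the resonant-mode limit $P_{\text{A}},P_{\text{B}}\rightarrow 1/2$ discussed above.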
\subsection{Intrinsic versus environmental decoherence\label{Sec6vs}}
The tunnelling frequencies are shown versus the decoherence frequency for different modes in Figure \ref{fig3}. The tunnelling frequency of the resonant modes decreases with increasing $\omega _{d}$, while the tunnelling frequency of the non-resonant modes increases with increasing $\omega _{d}$, until both types of modes reach the common value specified by (\ref{w-t-d}). The figure also shows an intermediate mode that displays features in between those of the resonant and non-resonant modes. The effect of intrinsic decoherence is complex but can be broadly characterised as enhancing the extent of tunnelling and promoting equidistribution (and, effectively, equilibration) of the particle locations between sections A and B.
Note that the effect of intrinsic (or effectively intrinsic) decoherence on tunnelling considered here is generally different from the decoherence effect produced by unitary interactions with a larger system or with the environment. Unlike the former, the latter does not enhance the extent of tunnelling. The effect of environmental interference does not become significant until its energy of interactions becomes comparable with the energy gap $E_{+}-E_{-}$. The effect of such significant decoherence is conventionally described by the Zurek theory \cite{Zurek1982}, which indicates that the rate of tunnelling increases significantly when the energy of interactions exceeds the energy gap (see \ref{SecB}). If some minor energy exchanges (much smaller than those required by thermalisation) are allowed in addition to the classical interpretation of decoherence, then the acceleration of tunnelling mentioned above would be supplemented by a reduction of the extent of tunnelling, which may result in the effective termination of tunnelling (see \ref{SecB}(\ref{SecBb})). One can see that different types of decoherence affect tunnelling differently and, therefore, can (at least in principle) be distinguished in experiments.
\section{Discussion of the experiment \label{Sec7}}
The Reichenbach conjecture suggests that all branch systems tend to evolve forward in time towards equilibrated and thermalised conditions, even if they are fully isolated from the rest of the universe. Obtaining experimental confirmation or repudiation of this conjecture would be of principal importance for our understanding of the universe. Thermalisation, however, is the overall outcome of numerous microscopic processes, whose fine mechanisms are concealed by the significance and magnitude of the outcome. We, therefore, focus on decoherence, which, as one would hope, can provide more information about the actual mechanisms of time priming than thermalisation does. We assume, by default, that the active phase of the experiment, $-t_{s}\leq t\leq +t_{s}$, is faster than the rate of thermalisation, $t_{s}\ll \tau _{t}$. The thermalisation time $\tau _{t}$ can be assessed during the passive phase of the experiment (e.g. by examining the system after $t=+t_{s}$, when, as discussed in Section \ref{Sec4}, thermalisation is expected to screen the active phase of the experiment from the final conditions, even if such final conditions are imposed on the system by postselection).
The present work analyses different regimes of interference between decoherence and tunnelling, producing a range of behaviours illustrated in Figure \ref{fig3} and in \ref{SecB}. The frequency of decoherence can be estimated indirectly by measuring the tunnelling rates. Although some of these experiments might be difficult to conduct, experimental studies of decoherence \cite{ImryYoseph2002,Dec1Exp2008} and tunnelling \cite{Dattagupta2004,TunnZeno2014,QTG2018} that have some parallels with the present analysis have been successfully carried out in the past. These experiments, however, need to be modified to reduce the influence of the environment, avoid both thermalisation and near-zero temperatures, and satisfy a number of conditions discussed below. As described in Section \ref{Sec3}, the suggested experiments involve trapping quantum particles in section A, allowing them to tunnel to another section B and measuring the tunnelling rates. This seems straightforward, but the devil is always in the details.
In order to examine the effect of decoherence on tunnelling experimentally, the characteristic times of tunnelling, $\hat{\tau}\approx \hat{s}\tau _{0}=\hat{s}x_{0}/u_{0}$, and of decoherence, $\tau _{d}$, must be comparable. The key point of the experiment is selecting experimental parameters so that the transition between coherent and non-coherent regimes is observed. While the characteristic time of tunnelling $\hat{\tau}$ can be changed in experiments (although only within certain limits determined by the conditions of the experiment), $\tau _{d}$ is expected to be very small for macroscopic objects and very large (possibly infinite) for elementary particles. Hence, the number of particles in the experiment needs to be selected so that the expected decoherence rate for this system is not too large and not too small.
This experiment is concerned with the state of the quantum system when all external interferences are (gradually) removed. As we increase the isolation of the system by encircling it with perfect insulators, mirrors and shields, screening the system from cosmic radiation and other forms of environmental interference, the intensity of environment-induced decoherence should also decrease in proportion to the reduction of its cause. We might observe that at some stage decoherence disappears or becomes too small and infrequent to be detected --- in this case, as discussed above, we need to increase the scale of the experiment to bring the rate of decoherence into the measurable range. If decoherence does not reappear even for sufficiently large, macroscopic objects and can be reduced below any given level by increasing the isolation of the system, this would demonstrate the incorrectness of the Reichenbach conjecture.
We assume, however, that the Reichenbach conjecture is correct and there is a component of decoherence that cannot be eliminated by progressive isolation of the system under any circumstances. We refer to such an ineliminable component as intrinsic (or effectively intrinsic). Measuring the rate of tunnelling gives us information not only about the decoherence rate but also about its nature. The effects of intrinsic and environmental decoherence are similar in some respects but, as discussed in Section \ref{Sec6}(\ref{Sec6vs}), are different in others. One of the most interesting outcomes of the experiments would be determining which of the two patterns is followed by the ineliminable component of decoherence.
Any experiments that can shed some light on this matter and demonstrate either the existence of an (effectively) intrinsic component of decoherence or its absence would be of the highest importance. The arrow of time is real and so must be its time primer --- an underlying physical mechanism that enacts the direction of time --- but, generally, it is difficult to say whether this mechanism can be confidently detected at the current level of technology.
The tunnelling experiments can be conducted with different particles: photons, electrons, protons and, possibly, neutrons or even atomic nuclei are the most likely candidates. The best choice of particles is not clear --- while tunnelling is easier to achieve with lighter particles, photons are expected to be decoherence-neutral \cite{Ent2017} and thus are less likely to exhibit any intrinsic decoherence. Considering that the known cases of CP violation, which have been detected in hadrons \cite{PDG2012,Barbar2016}, imply violation of the symmetry of time (assuming CPT invariance) and that high-energy hadron collisions seem to lead to thermodynamic behaviour in quark-gluon plasma \cite{Nature2007,MuB2011}, we infer that protons and nuclei are the most interesting particles for these experiments --- they are most likely to possess properties associated with intrinsic decoherence, presuming that such properties exist \cite{mixing2020}. (Note that thermodynamic interferences may become apparent as ostensible CPT violations in systems that are in fact CPT-preserving \cite{K-PhysA}.) The experiment needs to be organised so that the tunnelling particles are baryons (or are in contact with baryons while jointly isolated from the environment). While cooling the surroundings to near-zero temperatures to control environmental interferences seems like a good idea, cooling the system itself is generally not desirable, since this may dramatically reduce the magnitude of intrinsic decoherence or completely freeze it.
If time-directional behaviour associated with decoherence can be detected in the tunnelling of protons, it seems logical to conduct similar experiments with antiprotons (assuming that the substantial practical difficulties associated with such experiments can be overcome). Since conventional thermodynamics can be extended from matter to antimatter in two possible, mutually exclusive ways --- symmetric (i.e. CP-invariant) and antisymmetric (i.e. CPT-invariant) \cite{KM-Entropy2014,SciRep2016,Ent2017} --- the decohering behaviour of antiparticles is of particular interest. The CPT-invariant version of thermodynamics expects antibaryons to predominantly recohere, while the CP-invariant version of thermodynamics insists that both baryons and antibaryons must exhibit the same decohering behaviour.
\section{Conclusion \label{Sec8}}
The present work evaluates the effect of decoherence on the dynamics of quantum tunnelling, carried out by resonant, intermediate and non-resonant modes under the generally non-equilibrium conditions that exist during the active phase of the experiments. Decoherence tends to enhance tunnelling by non-resonant modes and attenuate resonant tunnelling. The main conclusion of the present analysis is that, under the conditions considered here, the rate of decoherence substantially affects the rate of tunnelling, and therefore can be determined or estimated by measuring the rate of tunnelling. This seems to be easier and less intrusive than direct testing of the coherent states. The effects noted above become clear when the quantum barrier is high ($\hat{s}\gg 1$), the tunnelling transmission coefficient is low and the energy eigenstates on both sides of the barrier are weakly coupled.
The problem of interference from the environment and measurements, which inevitably cause decoherences and collapses, is especially pertinent to examining decoherence. In simple terms, quantum measurements are bound to cause the very effects that they are intended to detect, not create. Hence, measuring the decoherence rates indirectly, through proxies, is always preferable. Examining tunnelling rates as proxies for decoherence rates and using ancillary quantum systems to avoid direct interference seem very useful in this context.
This work shows that, despite a significant degree of similarity, the intrinsic (or effectively intrinsic) and unitary environmental mechanisms of decoherence affect the tunnelling rates differently and, therefore, can be, at least in principle, experimentally distinguished from each other. In such experiments, we need to minimise environmental interferences and avoid strong interactions between different modes that cause substantial energy exchanges and thermalisation.
The principal question that was formulated by Hans Reichenbach half a century ago and still remains unanswered is whether the thermodynamic directionality of time would persist in fully isolated conditions. While Reichenbach's conjecture (that it would) seems more probable, scientific questions of this kind cannot be answered without experimental evidence. If the arrow of time persists, there must be a dynamic mechanism (which we call the time primer) that is responsible for it, and this mechanism should be experimentally testable. This work suggests that these issues can be examined in experiments involving quantum tunnelling.
\section*{Declarations}
\subsection{Funding}
Not applicable
\subsection{Conflicts of interest / Competing interests}
The author states that there is no conflict of interest.
\subsection{Availability of data and material}
Not applicable
\subsection{Code availability (software application or custom code)}
Not applicable
\subsection{Authors' contributions}
Not applicable
\appendix
\section{Tunnelling in a box and energy eigenstates \label{SecA}}
This Appendix presents equations for particle tunnelling in a rectangular box, subject to conditions imposed by the box boundaries --- the problem is selected to allow for a complete and transparent analytical evaluation. The results are used in the main body of the paper. Various tunnelling solutions can be found in the vast literature dedicated to this topic \cite{LL3,Tunn2003,Tunn2009}.
\subsection{Tunnelling through symmetric barriers\label{sec_A_tun}}
The quantum outcomes of tunnelling can be expressed by the scattering matrix $\mathbb{S}$, which is a unitary matrix ($\mathbb{SS}^{\dag }=\mathbb{I}$) that connects the amplitudes $A^{-}$ and $B^{-}$ of the incoming waves $A^{-}e^{-i(\omega t+kx)}$ and $B^{-}e^{-i(\omega t-kx)}$ with the amplitudes $A^{+}$ and $B^{+}$ of the outgoing waves $A^{+}e^{-i(\omega t-kx)}$ and $B^{+}e^{-i(\omega t+kx)}$ (see Figure \ref{fig4}) so that: \begin{equation} \left[ \begin{array}{c} A^{+} \\ B^{+} \end{array} \right] =\mathbb{S}\left[ \begin{array}{c} A^{-} \\ B^{-} \end{array} \right] ,\ \ \ \mathbb{S}=\left[ \begin{array}{cc} \tilde{r} & \tilde{q} \\ -\tilde{q}^{\ast } & \tilde{r}^{\ast } \end{array} \right] =\left[ \begin{array}{cc} r & q \\ q & r \end{array} \right] \label{Scatter} \end{equation} In the last expression for $\mathbb{S}$ in (\ref{Scatter}), the quantum barrier is assumed to be symmetric, which corresponds to a symmetric matrix $\mathbb{S}$ that is invariant with respect to swapping $A$ and $B$. The first expression for $\mathbb{S}$ is general provided $\left\vert \tilde{q}\right\vert ^{2}+\left\vert \tilde{r}\right\vert ^{2}=1$. The reflection $r$ and transmission $q$ coefficients satisfy $\left\vert q\right\vert ^{2}+\left\vert r\right\vert ^{2}=1$ and $\left\vert r^{2}-q^{2}\right\vert =1$ (implying that $\chi =q^{2}/r^{2}$ is real and $\chi \leq 0$) due to the unitarity of $\mathbb{S}$. Hence, $q=\pm ir(\left\vert r\right\vert ^{-2}-1)^{1/2}$ and $r=\mp iq(\left\vert q\right\vert ^{-2}-1)^{1/2}$.
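The unitarity constraints above are easy to verify numerically. The following minimal sketch (not part of the original analysis; the sample value of $r$ is arbitrary) builds the symmetric scattering matrix from a reflection coefficient and checks that the rows of $\mathbb{S}$ are orthonormal (equivalent to $\mathbb{SS}^{\dag }=\mathbb{I}$), that $|q|^{2}+|r|^{2}=1$, and that $|r^{2}-q^{2}|=1$ with $\chi =q^{2}/r^{2}$ real and non-positive:

```python
import cmath, math

# Sketch: symmetric scattering matrix S = [[r, q], [q, r]] with
# q = i*r*(|r|^-2 - 1)^(1/2); the value of r below is an arbitrary sample.
r = 0.6 * cmath.exp(0.3j)                    # arbitrary reflection coefficient, |r| < 1
q = 1j * r * math.sqrt(abs(r) ** -2 - 1)     # matching transmission coefficient

# rows of S orthonormal  <=>  S S^dag = I:
assert abs(abs(r) ** 2 + abs(q) ** 2 - 1) < 1e-12
assert abs(r * q.conjugate() + q * r.conjugate()) < 1e-12
assert abs(abs(r ** 2 - q ** 2) - 1) < 1e-12            # |r^2 - q^2| = 1
chi = q ** 2 / r ** 2
assert abs(chi.imag) < 1e-12 and chi.real <= 0          # chi is real and <= 0
print("scattering-matrix constraints verified")
```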
The matrix $\mathbb{S}$ should not be confused with the commonly used transfer matrix $\mathbb{M}$ that links the wave amplitudes on one side of the barrier to the wave amplitudes on the other side. \begin{equation} \left[ \begin{array}{c} B^{-} \\ B^{+} \end{array} \right] =\mathbb{M}\left[ \begin{array}{c} A^{+} \\ A^{-} \end{array} \right] ,\ \ \ \left[ \begin{array}{c} A^{-} \\ A^{+} \end{array} \right] =\mathbb{M}\left[ \begin{array}{c} B^{+} \\ B^{-} \end{array} \right] , \label{A1BA} \end{equation} \ where \begin{equation} \mathbb{M=}\frac{1}{q}\left[ \begin{array}{cc} 1 & -r \\ r & q^{2}-r^{2} \end{array} \right] \label{A1M} \end{equation} and $q^{2}-r^{2}=-r^{2}/\left\vert r^{2}\right\vert =q^{2}/\left\vert q^{2}\right\vert $.
The values of $r$ and $q$ can be easily evaluated for a rectangular barrier of height $V_{0}$ and width $\Delta x$ \cite{LL3,mixing2020}. Assuming that $V_{0}\rightarrow \infty $ and $\Delta x\rightarrow 0$ so that $s=V_{0}\Delta x\sim \mathrm{const}$ and $V(x)\rightarrow s\delta (x),$ we obtain \begin{equation} q=\frac{1}{1+i\hat{s}},\ \ r=\frac{-i\hat{s}}{1+i\hat{s}},\ \label{A1qr} \end{equation} where \begin{equation} \hat{s}=\frac{\tilde{s}}{k}=\frac{\kappa ^{2}\Delta x}{2k}=\frac{m}{k\hbar ^{2}}s=\frac{V_{0}}{\hbar }\frac{\Delta x}{u_{0}},\ \ \ \kappa ^{2}=\frac{2m}{\hbar ^{2}}V_{0},\ \ \ s=V_{0}\Delta x,\ \ u_{0}=\frac{k\hbar }{m} \label{A1par} \end{equation} If $A^{+}=\left( A^{-}\right) ^{\ast }=A,$ then (\ref{A1BA})-(\ref{A1qr}) yield $B^{+}=\left( B^{-}\right) ^{\ast }=B=A^{\ast }-i\hat{s}(A+A^{\ast })$ and \begin{equation} A+A^{\ast }=B+B^{\ast },\qquad B-B^{\ast }+A-A^{\ast }+2i\hat{s}(A+A^{\ast })=0 \label{A1jmp} \end{equation} With $\left\vert q\right\vert ^{2}$ ranging from 1 to 0 and $\left\vert r\right\vert ^{2}$ ranging from 0 to 1 as $\hat{s}$ increases from 0 to $\infty $, the barrier shaped as the delta function is a basic representation for many other barriers. Generally, $r$ and $q$ can be jointly multiplied by any arbitrary phase $e^{i\vartheta _{1}}$ and preserve unitarity of $\mathbb{S}$ (if the barrier is non-symmetric, then $\mathbb{S}$ involves another arbitrary angle $\vartheta _{2}$) but, if the phase shifts are not of major concern, the delta function tends to provide a good model for interactions of a wave function of given $k$ with the barriers.
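As a sanity check, the delta-barrier coefficients (\ref{A1qr}) and the matching conditions (\ref{A1jmp}) can be verified numerically. The sketch below is illustrative only; the variable \texttt{shat} stands for $\hat{s}$ and its value, like the amplitude $A$, is an arbitrary sample:

```python
import cmath

# Delta-barrier coefficients q = 1/(1 + i*shat), r = -i*shat/(1 + i*shat).
shat = 7.5                                             # arbitrary sample of \hat{s}
q = 1 / (1 + 1j * shat)
r = -1j * shat / (1 + 1j * shat)
assert abs(abs(q) ** 2 + abs(r) ** 2 - 1) < 1e-9       # |q|^2 + |r|^2 = 1

# Matching conditions (A1jmp): take A^+ = (A^-)* = A and propagate through S.
A = cmath.exp(0.4j)                                    # arbitrary amplitude
Bm = (A - r * A.conjugate()) / q                       # B^- from A^+ = r A^- + q B^-
Bp = q * A.conjugate() + r * Bm                        # B^+ = q A^- + r B^-
B = A.conjugate() - 1j * shat * (A + A.conjugate())    # claimed B = A* - i*shat*(A + A*)
assert abs(Bp - B) < 1e-9 and abs(Bm.conjugate() - B) < 1e-9
assert abs((A + A.conjugate()) - (B + B.conjugate())) < 1e-9
assert abs((B - B.conjugate()) + (A - A.conjugate()) + 2j * shat * (A + A.conjugate())) < 1e-9

# Large-shat transmission probability approaches 1/shat^2, eq. (A1-ass):
assert abs(abs(q) ** 2 - 1 / shat ** 2) < 2 / shat ** 4
print("delta-barrier relations verified")
```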
In the limit $\hat{s}\rightarrow \infty $, the transmission $\left\vert q\right\vert ^{2}$ and reflection $\left\vert r\right\vert ^{2}$ probabilities are given by \begin{equation} \left\vert q\right\vert ^{2}=\frac{1}{\hat{s}^{2}},\ \left\vert r\right\vert ^{2}=1-\frac{1}{\hat{s}^{2}} \label{A1-ass} \end{equation} These equations are special cases of more general expressions for the transmission and reflection probabilities obtained by Igor Vladimirov (2008, unpublished).
\subsection{Energy eigenstates}
The eigenstates of the Schr\"{o}dinger equation (\ref{eig1x}) \begin{equation} \mathbb{\tilde{H}}\tilde{\Psi}_{j}=-\frac{\hbar ^{2}}{2m}\frac{\partial ^{2}\tilde{\Psi}_{j}}{\partial x^{2}}+V(x)\tilde{\Psi}_{j}=\tilde{E}_{j}\tilde{\Psi}_{j} \label{A2eq} \end{equation} are to be determined within the interval $-x_{\text{\tiny B}}\leq x\leq x_{\text{\tiny A}}$ with homogeneous boundary conditions \begin{equation} \tilde{\Psi}_{j}=0\ \ \text{at\ \ }x=x_{\text{\tiny A}}\ \ \text{and\ \ }\tilde{\Psi}_{j}=0\ \ \text{at\ \ }x=-x_{\text{\tiny B}} \label{A2bc} \end{equation} and singular potential $V(x)=s\delta (x)$. The parameter $s$ is assumed to be sufficiently large so that the probabilities of tunnelling through the barrier are low.
\subsection{Note on singular potentials}
Consider a rectangular barrier $V=V_{0}$ at $-\Delta x/2\leq x\leq +\Delta x/2$ and $V=0$ elsewhere. The limit $V_{0}=V_{n}^{\circ }\rightarrow \infty ,$ $\Delta x=\Delta x_{n}\rightarrow 0$ as $n=1,2,...$ so that $V_{n}^{\circ }\Delta x_{n}=s$ corresponds to introducing the singularity $V(x)=V_{n}(x)\rightarrow s\delta (x)$ into the model. The presence of the delta function $\delta (x)$ in the potential does not affect the validity of the Hilbert--Schmidt theorem. With the use of the Green function $\mathbb{\tilde{H}}G(x,x_{0})=\delta (x-x_{0})$ and $G=0$ at $x=x_{\text{\tiny A}}$ and $x=-x_{\text{\tiny B}}$, the eigenstate problem $\mathbb{\tilde{H}}\tilde{\Psi}_{j}=\tilde{E}_{j}\tilde{\Psi}_{j}$ is conventionally converted into a Fredholm integral equation \begin{equation} \tilde{\Psi}_{j}(x)=\tilde{E}_{j}\mathbb{G}\tilde{\Psi}_{j}=\tilde{E}_{j}\int\limits_{-x_{\text{\tiny B}}}^{x_{\text{\tiny A}}}G(x,x_{0})\tilde{\Psi}_{j}(x_{0})dx_{0} \end{equation} where the integral operator $\mathbb{G=\tilde{H}}^{-1}$ is compact and Hermitian in compliance with the conditions of the Hilbert--Schmidt theorem. In the three-dimensional case, the Green function defined by $\mathbb{H}G(\mathbf{r},\mathbf{r}_{0})=\delta (\mathbf{r}-\mathbf{r}_{0})$ and $G=0$ at $\mathbf{r}\in \partial $AB can be used to convert the eigenstate problem $\mathbb{H}\Psi _{j}=E_{j}\Psi _{j}$ into an integral equation.
Since the sequence $\mathbb{G}_{n}$ of integral operators $\mathbb{G}_{1},\mathbb{G}_{2},...$ corresponding to $V_{0}=V_{1}^{\circ },V_{2}^{\circ },...$ converges, $\mathbb{G}_{n}\rightarrow \mathbb{G}_{\delta }$, in the operator norm when $n\rightarrow \infty $ and $V_{n}(x)\rightarrow s\delta (x)$, the theorem of Kolmogorov and Fomin \cite{KolmogorovFomin} (Theorem 1, Sec. 2, Chpt. 6, Part IV) ensures that the limiting integral operator $\mathbb{G}_{\delta }$ is compact and, obviously, Hermitian. Hence, solution (\ref{1sol}) must be universally valid even for singular potentials $V(x)=s\delta (x)$. The system of energy eigenstates is complete in Hilbert space and covers all possible evolutions of the Schr\"{o}dinger equation.
\subsection{Eigenfunctions for a delta-function barrier}
Since the Hamiltonian $\mathbb{\tilde{H}}$ is time-symmetric, the energy eigenstates $\tilde{\Psi}_{j}$ can be treated as real without loss of generality. Assuming $V(x)=s\delta (x),$ the solution of (\ref{A2eq}) with boundary conditions (\ref{A2bc}) is given by \begin{equation} \tilde{\Psi}_{j}=\left\{ \begin{array}{c} A_{j}\sin (k_{j}x+\alpha _{j}),\ \ \ \ \alpha _{j}=-k_{j}x_{\text{\tiny A}}\text{ \ \ in section A} \\ B_{j}\sin (k_{j}x+\beta _{j}),\ \ \ \ \beta _{j}=+k_{j}x_{\text{\tiny B}}\text{ \ \ in section B} \end{array} \right. \label{3PSI} \end{equation} The amplitudes $A_{j}$ and $B_{j},$ which are assumed real, are constrained by (\ref{A1jmp}) (i.e. $2A=-iA_{j}e^{i\alpha _{j}}$ and $2B=-iB_{j}e^{i\beta _{j}}$), that is, by continuity of the wave function at $x=0$ and the jump of its derivative induced by $V=s\delta (x)$ \begin{equation} B_{j}\sin (\beta _{j})=A_{j}\sin (\alpha _{j}),\ \ \ B_{j}\cos (\beta _{j})=A_{j}\cos (\alpha _{j})-2\frac{\tilde{s}}{k_{j}}A_{j}\sin (\alpha _{j}) \label{3BA} \end{equation} Dividing the second equation by the first equation and substituting $\alpha _{j}$ and $\beta _{j}$ from (\ref{3PSI}) yields the dispersion equation \begin{equation} \cot (k_{j}x_{\text{\tiny B}})+\cot (k_{j}x_{\text{\tiny A}})+2\frac{\tilde{s}}{k_{j}}=0 \label{3disp} \end{equation} that determines the energy eigenvalues \begin{equation} \tilde{E}_{j}=\frac{k_{j}^{2}\hbar ^{2}}{2m} \label{3Ej} \end{equation} in terms of $\tilde{s}=ms/\hbar ^{2}$.
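The dispersion equation (\ref{3disp}) is transcendental but straightforward to solve numerically. The sketch below (illustrative only; the geometry and barrier strength are arbitrary sample values, and \texttt{s} stands for $\tilde{s}$) brackets one root by bisection and substitutes it back into the matching conditions (\ref{3BA}) and the amplitude ratio (\ref{3rat}):

```python
import math

xA, xB, s = 1.0, 1.3, 50.0          # arbitrary sample geometry; s stands for \tilde{s}

def f(k):                            # dispersion equation (3disp)
    return 1 / math.tan(k * xB) + 1 / math.tan(k * xA) + 2 * s / k

# bracket a root slightly below k = pi/xB (a B-resonant mode)
lo, hi = math.pi / xB * 0.95, math.pi / xB * 0.9999
assert f(lo) * f(hi) < 0
for _ in range(200):                 # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
k = 0.5 * (lo + hi)

alpha, beta = -k * xA, k * xB
A = 1.0
B = A * math.sin(alpha) / math.sin(beta)          # continuity at x = 0, eq. (3BA)
# jump of the derivative induced by V = s*delta(x):
assert abs(B * math.cos(beta) - (A * math.cos(alpha) - 2 * (s / k) * A * math.sin(alpha))) < 1e-6
assert abs(B / A + math.sin(k * xA) / math.sin(k * xB)) < 1e-9   # eq. (3rat)
print("k =", k, "B/A =", B / A)
```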
The amplitude ratio is then given by \begin{equation} \frac{B_{j}}{A_{j}}=-\frac{\sin (k_{j}x_{\text{\tiny A}})}{\sin (k_{j}x_{\text{\tiny B}})} \label{3rat} \end{equation} In the rest of the analysis we assume that $\hat{s}=\tilde{s}/k$ is large for typical values of $k$ to simplify the equations and obtain conditions that are of interest for our consideration.
\subsection{The resonance case\label{sec_res}}
In this case $x_{\text{\tiny B}}=x_{\text{\tiny A}}=x_{0}$ and all modes are resonant. Two families of solutions are distinguished: first, antisymmetric, $\tilde{\Psi}_{j}(-x)=-\tilde{\Psi}_{j}(x),$ smooth at $x=0$, with $k_{j}$ specified by $\sin (k_{j}^{(\text{a})}x_{0})=0;$ and, second, symmetric, $\tilde{\Psi}_{j}(-x)=\tilde{\Psi}_{j}(x),$ V-shaped at $x=0$, with $k_{j}$ evaluated from $\cot (k_{j}^{(\text{s})}x_{0})=-\tilde{s}/k_{j}^{(\text{s})}$. Note that $\left\vert B_{j}\right\vert =\left\vert A_{j}\right\vert $ for both of the families. Assuming that $\hat{s}=\tilde{s}/k_{j}$ is large, we expand $\cot (\pi j+\alpha )=1/\alpha +...$ and obtain \begin{equation} k_{j}^{(\text{a})}=\frac{\pi j}{x_{0}},\ \ \ k_{j}^{(\text{s})}\approx \frac{\pi j}{x_{0}}\left( 1-\frac{1}{x_{0}\tilde{s}}\right) ,\ \ \ \left( \frac{B_{j}}{A_{j}}\right) ^{(\text{a})}=1,\ \ \ \left( \frac{B_{j}}{A_{j}}\right) ^{(\text{s})}=-1 \label{A2res} \end{equation} where $j=1,$ $2,$ $3,$ $...$ for both the symmetric (s) and antisymmetric (a) modes.
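A quick numerical check of the symmetric-mode approximation in (\ref{A2res}) (a sketch only; the values of $x_{0}$, $\tilde{s}$ and $j$ are arbitrary samples, with \texttt{s} standing for $\tilde{s}$) compares the exact root of $\cot (kx_{0})=-\tilde{s}/k$ with $k_{j}^{(\text{s})}\approx (\pi j/x_{0})(1-1/(x_{0}\tilde{s}))$:

```python
import math

x0, s, j = 1.0, 200.0, 3           # arbitrary sample values; s stands for \tilde{s}

def g(k):                           # symmetric-mode condition cot(k*x0) + s/k = 0
    return 1 / math.tan(k * x0) + s / k

lo, hi = math.pi * j / x0 * 0.98, math.pi * j / x0 * 0.999999
assert g(lo) * g(hi) < 0            # root bracketed just below k = pi*j/x0
for _ in range(200):                # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
k_exact = 0.5 * (lo + hi)

k_approx = math.pi * j / x0 * (1 - 1 / (x0 * s))   # eq. (A2res)
assert abs(k_exact - k_approx) / k_exact < 1e-3
print("k_exact =", k_exact, "k_approx =", k_approx)
```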
The existence of symmetric and antisymmetric modes is a general property of quantum equations with any symmetric potential $V(x)=V(-x)$ (implying that $x_{\text{\tiny B}}=x_{\text{\tiny A}}$). Indeed, let $\tilde{\Psi}_{j}$ be a solution of (\ref{A2eq}) and $\mathbb{P}$ be the parity operator that transforms $x\rightarrow -x$. Without loss of generality we can assume that $\tilde{\Psi}_{j}$ is real. The parity transformation preserves (\ref{A2eq}) for symmetric potentials $V(x)$ and $[\mathbb{P},\mathbb{\tilde{H}}]=0.$ Hence, $\mathbb{P}\tilde{\Psi}_{j}$ is also a solution of (\ref{A2eq}) and, provided the eigenvalue $\tilde{E}_{j}$ is not degenerate, we can always choose $c$ so that $c\mathbb{P}\tilde{\Psi}_{j}=\tilde{\Psi}_{j}$ coincides with the original solution, where $c$ is an unknown constant satisfying $\left\vert c\right\vert =1$ to preserve normalisation. By applying the operator $c\mathbb{P}$ twice we obtain $x\rightarrow x$ and $c\mathbb{P}c\mathbb{P}\tilde{\Psi}_{j}=c^{2}\tilde{\Psi}_{j}=\tilde{\Psi}_{j}$. Hence, either $c=+1,$ which corresponds to a symmetric mode, or $c=-1$, which corresponds to an antisymmetric mode. In the limit of a high, impenetrable barrier (i.e. $\hat{s}\rightarrow \infty $ in our terms) the wave functions in sections A and B interact less and less and, therefore, the symmetric and antisymmetric modes become very similar and merge, $\tilde{E}_{j}^{(\text{a})}-\tilde{E}_{j}^{(\text{s})}\rightarrow 0$.
\subsection{Non-resonant modes \label{sec_non_res}}
If $x_{\text{\tiny B}}\neq x_{\text{\tiny A}}$, at least some and, typically, most modes are non-resonant. Assuming that $\hat{s}=\tilde{s}/k$ is large, we identify two families of solutions among the non-resonant modes: A-resonant, where $\cot (k_{j}x_{\text{\tiny A}})\approx -2\tilde{s}/k_{j}$, and B-resonant, where $\cot (k_{j}x_{\text{\tiny B}})\approx -2\tilde{s}/k_{j}$. For these modes, one can easily obtain from (\ref{3disp}) and (\ref{3rat}) the following expansions \begin{equation} k_{j}^{(\text{\tiny A})}\approx \frac{\pi j}{x_{\text{\tiny A}}}\left( 1-\frac{1}{2x_{\text{\tiny A}}\tilde{s}}\right) ,\text{ }\ \left( \frac{B_{j}}{A_{j}}\right) ^{(\text{\tiny A})}\approx \sigma _{j}\frac{\pi j/(2x_{\text{\tiny A}}\tilde{s})}{\sin (\pi jx_{\text{\tiny B}}/x_{\text{\tiny A}})}\sim \frac{1}{\hat{s}}\ll 1 \label{A2Ar} \end{equation} \begin{equation} k_{j}^{(\text{\tiny B})}\approx \frac{\pi j}{x_{\text{\tiny B}}}\left( 1-\frac{1}{2x_{\text{\tiny B}}\tilde{s}}\right) ,\text{ }\ \left( \frac{B_{j}}{A_{j}}\right) ^{(\text{\tiny B})}\approx \sigma _{j}\frac{\sin (\pi jx_{\text{\tiny A}}/x_{\text{\tiny B}})}{\pi j/(2x_{\text{\tiny B}}\tilde{s})}\sim \hat{s}\gg 1 \label{A2Br} \end{equation} where $j=1,$ $2,$ $3,$ $...$ and $\sigma _{j}=\cos (\pi j)=(-1)^{j}$ alternates the signs. These expressions are valid unless a mode is (or is close to) A-resonant and B-resonant at the same time --- such resonant, near-resonant or intermediate modes require a more careful examination and are considered below.
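The A-resonant expansion (\ref{A2Ar}) can be checked against a numerical root of the full dispersion equation (\ref{3disp}). The sketch below uses arbitrary sample values ($x_{\text{\tiny A}}=1$, $x_{\text{\tiny B}}=1.37$, $\tilde{s}=100$, $j=2$, with \texttt{s} standing for $\tilde{s}$); the tolerances are loose because (\ref{A2Ar}) is only a leading-order approximation:

```python
import math

xA, xB, s, j = 1.0, 1.37, 100.0, 2    # arbitrary sample values; s stands for \tilde{s}

def f(k):                              # dispersion equation (3disp)
    return 1 / math.tan(k * xB) + 1 / math.tan(k * xA) + 2 * s / k

lo, hi = math.pi * j / xA * 0.98, math.pi * j / xA * 0.99999
assert f(lo) * f(hi) < 0               # A-resonant root just below k = pi*j/xA
for _ in range(200):                   # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
k = 0.5 * (lo + hi)

k_approx = math.pi * j / xA * (1 - 1 / (2 * xA * s))          # eq. (A2Ar)
assert abs(k - k_approx) / k < 1e-3

BA = -math.sin(k * xA) / math.sin(k * xB)                     # eq. (3rat)
sigma_j = (-1) ** j
BA_approx = sigma_j * (math.pi * j / (2 * xA * s)) / math.sin(math.pi * j * xB / xA)
assert abs(BA - BA_approx) / abs(BA_approx) < 0.1             # agreement at leading order
assert abs(BA) < 1 / math.sqrt(s)                             # small ratio, ~ 1/\hat{s}
print("k =", k, "B/A =", BA)
```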
\subsection{Resonant, near-resonant and intermediate modes\label {sec_near_res}}
Although $x_{\text{\tiny B}}\neq x_{\text{\tiny A}}$, some of the modes can still be exactly resonant or close to resonant conditions simultaneously in both sections A and B: \begin{equation} k_{0}x_{\text{\tiny A}}=\pi j_{\text{\tiny A}}-\theta \text{ \ \ and \ \ }k_{0}x_{\text{\tiny B}}=\pi j_{\text{\tiny B}}+\theta \label{A2rm-jAjB} \end{equation} for some real $k_{0},$ integer $j_{\text{\tiny A}}$ and integer $j_{\text{\tiny B}},$ where $\left\vert \theta \right\vert \sim 1/\hat{s}\ll 1$ is a phase shift indicating small deviations from the resonance. The condition $\theta =0$ corresponds to exact resonance. In the rest of the Appendix the subscript ``$j$'' is omitted, implying that the wave vectors, energies and amplitudes considered here are related to a selected mode with some integer $j_{\text{\tiny A}}$ and $j_{\text{\tiny B}}$ in (\ref{A2rm-jAjB}).
Let $k=k_{0}+\Delta k$ where $\Delta k\sim 1/\hat{s}$ is small; then at the leading order \begin{equation} \frac{1}{x_{\text{\tiny B}}\Delta k+\theta }+\frac{1}{x_{\text{\tiny A}}\Delta k-\theta }+2\hat{s}=0 \label{A2rm-eq1} \end{equation} \begin{equation} \frac{B}{A}=-\sigma \frac{x_{\text{\tiny A}}\Delta k-\theta }{x_{\text{\tiny B}}\Delta k+\theta },\ \ \sigma =\frac{\cos (\pi j_{\text{\tiny A}})}{\cos (\pi j_{\text{\tiny B}})}=\pm 1 \label{A2rm-eq2} \end{equation} Equations (\ref{A2rm-eq1})-(\ref{A2rm-eq2}) can be solved to yield: \begin{equation} \Delta k_{\mp }=\frac{1}{4\hat{s}}\frac{\left( \eta -1\right) x_{\text{\tiny B}}-\left( \eta +1\right) x_{\text{\tiny A}}\mp D^{1/2}}{x_{\text{\tiny A}}x_{\text{\tiny B}}},\ \ \ E_{\mp }=\frac{k_{0}\hbar ^{2}}{2m}(k_{0}+2\Delta k_{\mp }) \label{A2rm-dk} \end{equation} \begin{equation} \left( \frac{B}{A}\right) _{\mp }=\sigma \frac{x_{\text{\tiny A}}}{x_{\text{\tiny B}}}F_{\mp }\left( \eta ,\frac{x_{\text{\tiny B}}}{x_{\text{\tiny A}}}\right) ,\ \ \ F_{\mp }=\frac{q_{+}\pm D^{1/2}}{q_{-}\mp D^{1/2}},\ \ \ \ \eta =2\hat{s}\theta \label{A2rm-BA} \end{equation} where \begin{equation} q_{\pm }=\left( \eta \pm 1\right) (x_{\text{\tiny A}}+x_{\text{\tiny B}}),\ \ \ D=(x_{\text{\tiny A}}+x_{\text{\tiny B}})\left( \left( \eta -1\right) ^{2}x_{\text{\tiny B}}+\left( \eta +1\right) ^{2}x_{\text{\tiny A}}\right) \label{A2rm-D} \end{equation} Note the equality \begin{equation} \left( \frac{B}{A}\right) _{-}\left( \frac{B}{A}\right) _{+}=-\frac{x_{\text{\tiny A}}}{x_{\text{\tiny B}}} \label{A2rm-rat} \end{equation} which implies that when one branch of the solution becomes large, the other inevitably becomes small and vice versa. The superscript indices ``$+$'' and ``$-$'' are used to denote values that correspond to the ``plus'' and ``minus'' solutions of (\ref{A2rm-eq1}). When the sections of the box are of similar sizes, $x_{\text{\tiny B}}\approx x_{\text{\tiny A}}\approx x_{0}$ (although not necessarily exactly identical, $x_{\text{\tiny B}}\neq x_{\text{\tiny A}}$), equations (\ref{A2rm-dk}) and (\ref{A2rm-BA}) can be simplified \begin{equation} \Delta k_{\mp }\approx \frac{1}{2\hat{s}}\frac{-1\mp \sqrt{\eta ^{2}+1}}{x_{0}},\ \ \left( \frac{B}{A}\right) _{\mp }\approx \sigma \frac{\left( \eta +1\right) \pm \sqrt{\eta ^{2}+1}}{\left( \eta -1\right) \mp \sqrt{\eta ^{2}+1}} \end{equation}
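The closed-form roots (\ref{A2rm-dk}) and ratios (\ref{A2rm-BA}) can be verified directly. The sketch below (arbitrary sample values, with \texttt{shat} for $\hat{s}$ and \texttt{sigma} for the sign factor) substitutes them back into (\ref{A2rm-eq1})-(\ref{A2rm-eq2}) and checks the product relation (\ref{A2rm-rat}):

```python
import math

xA, xB, shat, eta = 1.0, 1.3, 500.0, 0.7   # arbitrary samples; eta = 2*shat*theta
theta = eta / (2 * shat)
sigma = 1.0                                 # sign factor cos(pi jA)/cos(pi jB)

D = (xA + xB) * ((eta - 1) ** 2 * xB + (eta + 1) ** 2 * xA)
dk = [((eta - 1) * xB - (eta + 1) * xA + sgn * math.sqrt(D)) / (4 * shat * xA * xB)
      for sgn in (-1.0, +1.0)]              # "minus" and "plus" branches, eq. (A2rm-dk)

for d in dk:                                # both roots satisfy eq. (A2rm-eq1)
    assert abs(1 / (xB * d + theta) + 1 / (xA * d - theta) + 2 * shat) < 1e-3

qp, qm = (eta + 1) * (xA + xB), (eta - 1) * (xA + xB)
BA_minus = sigma * (xA / xB) * (qp + math.sqrt(D)) / (qm - math.sqrt(D))
BA_plus = sigma * (xA / xB) * (qp - math.sqrt(D)) / (qm + math.sqrt(D))
assert abs(BA_minus * BA_plus + xA / xB) < 1e-12   # product relation (A2rm-rat)
# each ratio also matches -sigma*(xA*dk - theta)/(xB*dk + theta), eq. (A2rm-eq2):
for d, BA in zip(dk, (BA_minus, BA_plus)):
    assert abs(BA + sigma * (xA * d - theta) / (xB * d + theta)) < 1e-9
print("near-resonant relations verified")
```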
\subsection{Asymptotes for the resonant and intermediate modes}
When using the parameter $\eta $, we distinguish resonant, $\eta \rightarrow 0,$ near-resonant, $\left\vert \eta \right\vert \sim 1$, intermediate, $1\ll \left\vert \eta \right\vert \ll \hat{s}$, and non-resonant, $\left\vert \eta \right\vert \sim \hat{s}\gg 1$, modes. For equations (\ref{A2rm-dk}) and (\ref{A2rm-BA}), the resonance limit $\eta =2\hat{s}\theta \rightarrow 0$ is given by \begin{equation} \Delta k_{-}=-\frac{1}{2\hat{s}}\frac{x_{\text{\tiny A}}+x_{\text{\tiny B}}}{x_{\text{\tiny A}}x_{\text{\tiny B}}}-\frac{1}{2\hat{s}}\frac{x_{\text{\tiny A}}-x_{\text{\tiny B}}}{x_{\text{\tiny A}}x_{\text{\tiny B}}}\eta +...\ ,\ \ \ \ \Delta k_{+}=\frac{1}{2\hat{s}}\frac{\eta ^{2}}{x_{\text{\tiny A}}+x_{\text{\tiny B}}}+... \label{A4dk} \end{equation} \begin{equation} \left( \frac{B}{A}\right) _{-}=-\sigma \frac{x_{\text{\tiny A}}}{x_{\text{\tiny B}}}\left( 1+\eta \right) +...\ ,\ \ \ \left( \frac{B}{A}\right) _{+}=\sigma \left( 1-\eta \right) +... \label{A4BA} \end{equation} Comparison with the resonance case of subsection \ref{sec_res} indicates that, at $\eta =0$ and $x_{\text{\tiny B}}=x_{\text{\tiny A}},$ the ``minus'' solution represents the symmetric mode and the ``plus'' solution represents the antisymmetric mode.
The asymptotic representation of equations (\ref{A2rm-dk}) and (\ref{A2rm-BA}) for intermediate modes is evaluated at the non-resonant limit $\eta \rightarrow +\infty $, yielding \begin{equation} \Delta k_{-}=-\frac{1}{2\hat{s}}\frac{\eta +1}{x_{\text{\tiny B}}}+...\ ,\ \ \ \ \Delta k_{+}=\frac{1}{2\hat{s}}\frac{\eta -1}{x_{\text{\tiny A}}}+... \label{A2dk_inf} \end{equation} \begin{equation} \left( \frac{B}{A}\right) _{-}=-\sigma \frac{x_{\text{\tiny A}}+x_{\text{\tiny B}}}{x_{\text{\tiny B}}}\eta +...\ ,\ \ \ \left( \frac{B}{A}\right) _{+}=\sigma \frac{x_{\text{\tiny A}}}{x_{\text{\tiny A}}+x_{\text{\tiny B}}}\frac{1}{\eta }+... \label{A2BA_inf} \end{equation} The ``minus'' branch matches the B-resonant solution in (\ref{A2Br}) and the ``plus'' branch matches the A-resonant solution in (\ref{A2Ar}).
For example, substituting $\pi j_{\text{B}}=\pi j_{\text{A}}x_{\text{B}}/x_{\text{A}}-\left( 1+x_{\text{B}}/x_{\text{A}}\right) \theta$, obtained from (\ref{A2rm-jAjB}), into $\sin (\pi jx_{\text{A}}/x_{\text{B}})$ in (\ref{A2Br}) (while putting $j=j_{\text{B}}$ and expanding $\sin (\ldots)$ to the leading order) results in the first equation in (\ref{A2BA_inf}). Similarly, substituting the equivalent expression $\pi j_{\text{A}}=\pi j_{\text{B}}x_{\text{A}}/x_{\text{B}}+\left( 1+x_{\text{A}}/x_{\text{B}}\right) \theta$ into the expansion of $\sin (\pi jx_{\text{B}}/x_{\text{A}})$ in (\ref{A2Ar}) (while putting $j=j_{\text{A}}$ this time) results in the second equation in (\ref{A2BA_inf}).
These asymptotes, however, are swapped under the limit $\eta \rightarrow -\infty$, which yields the following expressions:
\begin{equation}
\Delta k_{-}=\frac{1}{2\hat{s}}\frac{\eta -1}{x_{\text{A}}}+\ldots ,\qquad \Delta k_{+}=-\frac{1}{2\hat{s}}\frac{\eta +1}{x_{\text{B}}}+\ldots
\end{equation}
\begin{equation}
\left( \frac{B}{A}\right) _{-}=\sigma \frac{x_{\text{A}}}{x_{\text{A}}+x_{\text{B}}}\frac{1}{\eta }+\ldots ,\qquad \left( \frac{B}{A}\right) _{+}=-\sigma \frac{x_{\text{A}}+x_{\text{B}}}{x_{\text{B}}}\eta +\ldots
\end{equation}
\appendix{Tunnelling influenced by environmental interferences \label{SecB}}
The system under consideration is placed in contact with a larger system or the environment, which has the energy eigenstates $\left\vert l\right\rangle =\left\vert E_{l}\right\rangle$, where $l=1,2,\ldots,N_{e}$ and $N_{e}$ is extremely large. The joint Hamiltonian of the system and environment is given by the usual expression
\begin{equation}
\mathbb{H}=\mathbb{H}_{s}\otimes \mathbb{I}_{e}+\mathbb{I}_{s}\otimes \mathbb{H}_{e}+\mathbb{H}_{\text{int}}  \label{HsHint}
\end{equation}
where the subscript ``$s$'' indicates the system, ``$e$'' indicates the environment (or a larger system), and $\mathbb{H}_{\text{int}}$ specifies interactions between the system and the environment and acts in the system $\otimes$ environment product space $\left\vert s\right\rangle \left\vert l\right\rangle =\left\vert s\right\rangle \otimes \left\vert l\right\rangle$. We consider weak interactions of the eigenstates of the system Hamiltonian $\mathbb{H}_{s}=\mathbb{H}_{0}+\mathbb{H}_{1}$, which is specified previously in (\ref{H0H1}), with a selected environmental energy eigenstate $\left\vert l\right\rangle$. Obtaining the overall solution of the problem $\left\vert \psi _{s\otimes e}\right\rangle$ is followed by tracing out the degrees of freedom associated with the environment to determine the effective density matrix of the system: $\mathbf{\rho }_{s}=\operatorname{tr}_{e}(\left\vert \psi _{s\otimes e}\right\rangle \left\langle \psi _{s\otimes e}\right\vert )$. In our analysis, we omit the environmental energy exponents $\exp \left( -iE_{l}t^{\circ }/\hbar \right)$ since they do not affect the trace. As in other theories of environmental decoherence, we necessarily use antecedent causality in this analysis.
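The trace-out step can be illustrated numerically; the dimensions and the random joint state in the following sketch are assumptions for illustration only, not part of the analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

ns, ne = 2, 8                       # assumed dimensions: two-level system, 8-level environment
psi = rng.normal(size=ns * ne) + 1j * rng.normal(size=ns * ne)
psi /= np.linalg.norm(psi)          # normalised joint state |psi_{s (x) e}>

# Partial trace over the environment: (rho_s)_{s,s'} = sum_l c_{s,l} conj(c_{s',l})
c = psi.reshape(ns, ne)             # coefficients c_{s,l} in the basis |s>|l>
rho_s = c @ c.conj().T
```

The resulting $2\times 2$ matrix is Hermitian, positive semi-definite and of unit trace, as a reduced density matrix must be.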
\subsection{Environmental decoherence without energy exchange \label{SecBa}}
According to Zurek's theory \cite{Zurek1982}, decoherence occurs due to environmental interferences without any energy exchange between different energy modes. Hence, the interaction Hamiltonian takes a diagonal form when energy eigenstates are used, so that
\begin{equation}
\left\langle +\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert +\right\rangle \left\vert l\right\rangle =E_{+l},\qquad \left\langle -\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert -\right\rangle \left\vert l\right\rangle =E_{-l}
\end{equation}
and all other components are zero; for example, $\left\langle -\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert +\right\rangle \left\vert l\right\rangle =0$ and $\left\langle +\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert +\right\rangle \left\vert l^{\prime }\right\rangle =0$ when $l\neq l^{\prime }$. This form of the interaction Hamiltonian $\mathbb{H}_{\text{int}}$ in (\ref{HsHint}) results in adjustments of the natural frequencies of the system (i.e. $\omega _{+}$ and $\omega _{-}$).
The solution of this problem is obvious and, by analogy with (\ref{ABt}), is given by
\begin{equation}
\left[
\begin{array}{c}
\tilde{A} \\
\tilde{B}
\end{array}
\right] _{l}=\frac{e^{-i(\omega _{0}+\omega _{0l})t^{\circ }}}{1+\xi ^{2}}\left[
\begin{array}{c}
\exp \left( -i\frac{\Delta \omega +\Delta \omega _{l}}{2}t^{\circ }\right) +\xi ^{2}\exp \left( +i\frac{\Delta \omega +\Delta \omega _{l}}{2}t^{\circ }\right) \\
-2i\xi \sin \left( \frac{\Delta \omega +\Delta \omega _{l}}{2}t^{\circ }\right)
\end{array}
\right]
\end{equation}
where $\Delta \omega _{l}=(E_{+l}-E_{-l})/\hbar $ and $\omega _{0l}=(E_{+l}+E_{-l})/(2\hbar )$, while the other quantities $\omega _{0}$, $\Delta \omega $ and $\xi $ are the same as defined in (\ref{A5eq1}), (\ref{A5eq2}) and (\ref{A5AB}). One can easily see that the effect of the environment is negligible as long as $\Delta \omega _{l}\ll \Delta \omega $. If, however, $\Delta \omega _{l}\gg \Delta \omega $, then the environment would cause a rapid loss of the coherence between the ``plus'' $\left\vert +\right\rangle \left\vert l\right\rangle $ and ``minus'' $\left\vert -\right\rangle \left\vert l\right\rangle $ modes, resulting in the corresponding acceleration of tunnelling without any changes in the extent of tunnelling determined by $\varsigma =\left\vert \xi \right\vert /(1+\xi ^{2})$.
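A brief numerical sketch (parameter values are assumed for illustration) confirms two features of this solution: the state stays normalised at all times, and the peak magnitude of the second component equals $2\varsigma$ with $\varsigma =\left\vert \xi \right\vert /(1+\xi ^{2})$, independent of $\Delta \omega _{l}$:

```python
import numpy as np

# Assumed illustrative parameters: mixing parameter xi, splitting d_omega,
# and an environmental splitting d_omega_l much larger than d_omega.
xi, d_omega, d_omega_l = 0.7, 1.0, 25.0

t = np.linspace(0.0, 20.0, 200001)
phase = 0.5 * (d_omega + d_omega_l) * t
A = (np.exp(-1j * phase) + xi**2 * np.exp(1j * phase)) / (1 + xi**2)
B = -2j * xi * np.sin(phase) / (1 + xi**2)

norm = np.abs(A)**2 + np.abs(B)**2   # stays equal to 1 for all t
peak_B = np.abs(B).max()             # approaches 2*|xi|/(1+xi^2)
```

Only the oscillation frequency, not the peak occupation, depends on $\Delta \omega _{l}$, in line with the acceleration of tunnelling described above.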
\subsection{Environmental decoherence with minimal energy exchanges\label {SecBb}}
The analysis of the influence of decoherence on tunnelling considered in Section \ref{Sec6} uses the partition states $\left\vert \text{A}\right\rangle $ and $\left\vert \text{B}\right\rangle $ as the decoherence basis. We now apply a similar assumption, implying that the environmental interferences affect sections A and B autonomously. As demonstrated below, it is sufficient to assume that the environment interferes only with the state $\left\vert \text{A}\right\rangle $ but not with the state $\left\vert \text{B}\right\rangle$. As in Section \ref{Sec6}, the energy exchanges due to these interferences are deemed to be small (i.e. weaker than those that can cause thermalisation during the active stage of the experiment). The Hamiltonian takes the following form
\begin{equation}
\left\langle \text{A}\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert \text{A}\right\rangle \left\vert l\right\rangle =E_{\text{A}l}  \label{AppB_H}
\end{equation}
while the other elements of the interaction Hamiltonian are zero.
Note that simultaneous adjustment of both energies $\left\langle \text{A}\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert \text{A}\right\rangle \left\vert l\right\rangle =E_{0l}$ and $\left\langle \text{B}\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert \text{B}\right\rangle \left\vert l\right\rangle =E_{0l}$ corresponds to $\left\langle +\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert +\right\rangle \left\vert l\right\rangle =E_{0l}$ and $\left\langle -\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert -\right\rangle \left\vert l\right\rangle =E_{0l}$ and results in a mere adjustment of the principal frequency $\omega _{0}$. Any exclusive interference of the environment with section B, $\left\langle \text{B}\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert \text{B}\right\rangle \left\vert l\right\rangle =E_{\text{B}l}$, can be considered to be a result of changes in $E_{\text{A}l}$ and $E_{0l}$. Hence, we need to consider only the interaction Hamiltonian specified by (\ref{AppB_H}).
The solution of the Schr\"{o}dinger equation for Hamiltonian (\ref{HsHint}) with (\ref{H0H1}) and (\ref{AppB_H}) (which can be validated by substitution) is given here without derivation
\begin{equation}
\left[
\begin{array}{c}
\tilde{A} \\
\tilde{B}
\end{array}
\right] _{l}=e^{-i(\omega _{0}+\omega _{l}/2)t^{\circ }}\left[
\begin{array}{c}
\frac{b_{0}-b_{1}}{2b_{0}}\exp \left( i\frac{b_{0}t^{\circ }}{2}\right) +\frac{b_{0}+b_{1}}{2b_{0}}\exp \left( -i\frac{b_{0}t^{\circ }}{2}\right) \\
-2i\frac{\xi }{1+\xi ^{2}}\frac{\Delta \omega }{b_{0}}\sin \left( \frac{b_{0}t^{\circ }}{2}\right)
\end{array}
\right]  \label{AppB_AB2}
\end{equation}
where
\begin{equation}
b_{1}=+\Delta \omega \frac{1-\xi ^{2}}{1+\xi ^{2}}+\omega _{l},\qquad b_{0}^{2}=\Delta \omega ^{2}+2\omega _{l}\Delta \omega \frac{1-\xi ^{2}}{1+\xi ^{2}}+\omega _{l}^{2},\qquad \omega _{l}=\frac{E_{\text{A}l}}{\hbar }
\end{equation}
while the other quantities $\omega _{0}$, $\Delta \omega $ and $\xi $ are the same as defined in (\ref{A5eq1}), (\ref{A5eq2}) and (\ref{A5AB}). When $\omega _{l}\ll \Delta \omega $, (\ref{AppB_AB2}) yields (\ref{ABt}) and the interferences do not exercise much influence on the system. If $\omega _{l}\gg \Delta \omega $, these influences are strong, since the asymptotic limit of (\ref{AppB_AB2}) is given by
\begin{equation}
\left[
\begin{array}{c}
\tilde{A} \\
\tilde{B}
\end{array}
\right] _{l}=e^{-i\omega _{0}t^{\circ }}\left[
\begin{array}{c}
\exp (-i\omega _{l}t^{\circ })+\ldots \\
-\frac{\xi }{1+\xi ^{2}}\frac{\Delta \omega }{\omega _{l}}\left( 1-\exp (-i\omega _{l}t^{\circ })\right)
\end{array}
\right]
\end{equation}
As in the previous subsection, tunnelling is accelerated by the factor of $2\omega _{l}/\Delta \omega $, but the extent of tunnelling $\varsigma =\left\vert \xi \right\vert /(1+\xi ^{2})$ is reduced by the factor of $\Delta \omega /\omega _{l}$. At the limit of $\Delta \omega /\omega _{l}\rightarrow 0$, sections A and B become effectively isolated from each other. Tracing out the degrees of freedom associated with the environment suppresses the off-diagonal elements of the effective density matrix of the system $\mathbf{\rho }_{s}$ but would not affect our conclusions limiting the amplitude of $(\rho _{s})_{\text{BB}}$.
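The reduction of the extent of tunnelling by the factor $\Delta \omega /\omega _{l}$ can be checked numerically from the exact amplitude in (\ref{AppB_AB2}); the parameter values below are assumed for illustration:

```python
import numpy as np

xi, d_omega = 0.7, 1.0               # assumed mixing parameter and splitting

def peak_B(omega_l):
    """Exact peak of |B~| from (AppB_AB2) and its large-omega_l asymptote."""
    b0 = np.sqrt(d_omega**2
                 + 2*omega_l*d_omega*(1 - xi**2)/(1 + xi**2)
                 + omega_l**2)
    exact = 2*xi/(1 + xi**2) * d_omega/b0        # max of the sin(...) envelope
    asym = 2*xi/(1 + xi**2) * d_omega/omega_l    # asymptotic reduction
    return exact, asym
```

The relative agreement between the exact peak and the asymptote improves as $\omega _{l}/\Delta \omega$ grows, consistent with the asymptotic limit quoted above.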
The extent of tunnelling can be enhanced by assuming that $\left\vert E_{\text{AB}l}\right\vert \neq 0$, where $E_{\text{AB}l}=\left\langle \text{A}\right\vert \left\langle l\right\vert \mathbb{H}_{\text{int}}\left\vert \text{B}\right\rangle \left\vert l\right\rangle$. This assumption, however, does not seem physical, since it implies a rather strange possibility of tunnelling from A to B through the environment (even if the magnitude of the barrier $\hat{s}$ is prohibitively high to permit tunnelling).
\renewcommand{\emph} [1] {\textit{#1}}
\begin{figure}\label{fig1}
\end{figure}
\begin{figure}
\caption{Measuring presence of the working particles in section B: the projective measurements are conducted only on the ancilla system after the active phase of the experiment is completed. The evolution of the system is unitary during the active phase. }
\label{fig2}
\end{figure}
\begin{figure}
\caption{The normalised rate of tunnelling depending on the normalised characteristic decoherence rate for different modes. }
\label{fig3}
\end{figure}
\begin{figure}
\caption{Quantum tunnelling through a potential barrier: schematic of the incoming and outgoing waves}
\label{fig4}
\end{figure}
\end{document}
\begin{document}
\title{On Symmetries of Elliptic Nets and Valuations of Net Polynomials}
\author{Amir Akbary, Jeff Bleaney, and Soroosh Yazdani}
\thanks{Research of the authors is partially supported by NSERC}
\date{\today}
\keywords{\noindent elliptic divisibility sequences, division polynomials, elliptic nets, net polynomials}
\subjclass[2010]{11G05, 11G07, 11B37.}
\address{Department of Mathematics and Computer Science \\
University of Lethbridge \\
Lethbridge, AB T1K 3M4 \\
Canada} \email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\begin{abstract} Under certain conditions, we prove that the set of zeros of an elliptic net forms an Abelian group.
We present two applications of this fact. Firstly we give a
generalization of a theorem of Ayad on valuations of division polynomials in
the context of net polynomials. Secondly we generalize a theorem of Ward on
symmetry of elliptic divisibility sequences to the case of elliptic nets. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Let $E$ be an elliptic curve defined over a field $K$ with the Weierstrass model $f(x, y)=0$, where \begin{equation}
\label{WE}
f(x,y):=y^2+a_1 xy+a_3 y- x^3-a_2 x^2-a_4 x-a_6;
~~~a_i\in K. \end{equation} It is known that there are polynomials $\phi_n,~\psi_n$, and $\omega_n\in K[x, y]/\langle f(x,y) \rangle$ such that for any $P \in E(K)$, the group of $K$-rational points of $E$, we have \begin{equation}
\label{eqn DivPoly}
nP=\left(\frac{\phi_n(P)}{\psi_n^2 (P)}, \frac{\omega_n(P)}{\psi_n^3(P)} \right). \end{equation} Moreover, $\psi_n$ satisfies the recursion \begin{equation}
\label{dpr}
\psi_{m+n}\psi_{m-n} = \psi_{m+1}\psi_{m-1}\psi_{n}^{2} - \psi_{n+1}\psi_{n-1}\psi_{m}^{2}, \end{equation} with initial conditions \begin{align*}
&\psi_1=1,~\psi_2=2y+a_1 x+a_3,~ \psi_3=3x^4+b_2 x^3+3b_4 x^2+3 b_6 x+b_8, \\
&\psi_4 =\psi_2 \cdot \left(2x^6+b_2 x^5+5b_4 x^4+10 b_6 x^3+10 b_8 x^2+(b_2b_8-b_4b_6)x+(b_4b_8-b_6^2)\right). \end{align*} Here \begin{align*} &b_2=a_1^2+4a_2,~b_4=2a_4+a_1 a_3,~b_6=a_3^2+4a_6,\\ &b_8=a_1^2 a_6+4a_2 a_6-a_1 a_3 a_4+a_2 a_3^2-a_4^2. \end{align*} The polynomial $\psi_n$ is called the \emph{$n$-th division polynomial} associated to $E$. (See \cite[Chapter 2]{Lang} for the basic properties of division polynomials.)
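As a quick sanity check (the curve and point are an assumed concrete example, not taken from this paper), one can evaluate the initial division polynomials at $P=(3,5)$ on $E\colon y^2=x^3-2$ and compare with a direct doubling: by \eqref{eqn DivPoly}, $x(2P)=\phi_2(P)/\psi_2^2(P)$, and here $\psi_2(P)^2=100$:

```python
from fractions import Fraction

# Assumed example curve y^2 = x^3 - 2 (a1 = a2 = a3 = a4 = 0, a6 = -2)
# with the rational point P = (3, 5).
a1 = a2 = a3 = a4 = 0; a6 = -2
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2

x, y = 3, 5
psi = {1: 1,
       2: 2*y + a1*x + a3,
       3: 3*x**4 + b2*x**3 + 3*b4*x**2 + 3*b6*x + b8,
       4: (2*y + a1*x + a3) * (2*x**6 + b2*x**5 + 5*b4*x**4 + 10*b6*x**3
                               + 10*b8*x**2 + (b2*b8 - b4*b6)*x + (b4*b8 - b6**2))}

# Doubling P by the chord-tangent law: x(2P) = lambda^2 - 2x.
lam = Fraction(3*x**2 + a4, 2*y)
x2 = lam**2 - 2*x                    # denominator should equal psi_2(P)^2
```

The recursion \eqref{dpr} with $m=3$, $n=2$ then gives $\psi_5(P)=\psi_4\psi_2^3-\psi_3^3$, so the whole sequence $(\psi_n(P))$ is determined by these four values.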
Now let $K$ be a field with a discrete valuation $\nu$, let $\mathcal{O}_\nu = \{x \in K : \, \nu(x) \geq 0\}$ and $\mathfrak{p} = \{x \in K : \, \nu(x)>0 \}.$ In \cite[Theorem A]{Ayad}, Ayad proved the following theorem on the valuation of $\psi_n(P)$. \begin{thm}[{\bf Ayad}]
\label{ayad}
Let $E/K$ be an elliptic curve defined by the polynomial \eqref{WE} with
$a_i \in \mathcal{O}_\nu$ for $i=1, 2, 3, 4, 6$. Let $P\in E(K)$ be a point such
that $P \not\equiv \infty \pmod \mathfrak{p}$. Then the following
assertions are equivalent:
\begin{enumerate}[(a)]
\item $\nu(\psi_2(P))$ and $\nu(\psi_3(P))>0$.
\item For all integers $n\geq 2$, we have $\nu(\psi_n(P))>0$.
\item There exists an integer $n_0\geq 2$ such that
$\nu(\psi_{n_0}(P))$ and $\nu(\psi_{n_0+1}(P))>0.$
\item There exists an integer $m_0\geq 2$ such that
$\nu(\psi_{m_0} (P))$ and
$\nu(\phi_{m_0} (P))>0.$
\item Reduction of $P$ modulo $\mathfrak{p}$ is singular.
\end{enumerate} \end{thm}
An important ingredient of the proof of the above theorem is the recursion \eqref{dpr}. Generally, any solution over an arbitrary integral domain $R$ of the recursion
\begin{equation}
\label{eds1}
W_{m+n}W_{m-n} W_{1}^{2}= W_{m+1}W_{m-1}W_{n}^{2} - W_{n+1}W_{n-1}W_{m}^{2}, \end{equation} where $m, n \in \mathbb{Z}$, is called an \emph{elliptic sequence}. Hence the sequence $(\psi_n(P))$ is an example of an elliptic sequence. The theory of elliptic sequences was developed by Morgan Ward in 1948. An \emph{elliptic divisibility sequence} (EDS) is an integer elliptic sequence $(W_{n})$, which is also a divisibility sequence (i.e. $W_m\mid W_n$ if $m\mid n$).
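For illustration (the concrete sequence below is our example, not one used in this paper), the recursion \eqref{eds1} with $W_1=1$ specializes, for $(m,n)=(k+1,k)$ and $(k+1,k-1)$, to doubling formulas that generate an elliptic sequence from its first four terms. The initial values $1,1,-1,1$ give Ward's classical elliptic divisibility sequence, which is the sequence of division-polynomial values of $y^2+y=x^3-x$ at $P=(0,0)$:

```python
from functools import lru_cache

# Initial values W(1..4) = 1, 1, -1, 1 of Ward's classical EDS.
INIT = {0: 0, 1: 1, 2: 1, 3: -1, 4: 1}

@lru_cache(maxsize=None)
def W(n):
    if n < 0:
        return -W(-n)                 # elliptic sequences are odd: W(-n) = -W(n)
    if n in INIT:
        return INIT[n]
    k = n // 2
    if n % 2:                         # (eds1) with m = k+1, n = k:
        return W(k+2)*W(k)**3 - W(k-1)*W(k+1)**3
    # (eds1) with m = k+1, n = k-1; here W(2) = 1:
    return W(k)*(W(k+2)*W(k-1)**2 - W(k-2)*W(k+1)**2)
```

One can then verify numerically both the full recursion \eqref{eds1} and the divisibility property $W_m\mid W_n$ whenever $m\mid n$.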
Theorem \ref{ayad} has an immediate application to elliptic denominator sequences, which we will define now. Let $E/\mathbb{Q}$ be an elliptic curve defined by \eqref{WE}, with $a_{i} \in \mathbb{Z}$ for $i = 1,2,3,4,6$, and let $P\in E(\mathbb{Q})$ be a non-torsion point. It is known that $$P=\left(\frac{A_P}{D_P^2}, \frac{B_P}{D_P^3} \right)$$ with $\gcd (A_P, D_P)= \gcd(B_P, D_P)=1$ and $D_P\geq 1$ (see \cite[Proposition 7.3.1]{Cohen}). Let $(D_{nP})$ be the sequence of denominators of the multiples of $P$. More precisely $D_{nP}$ is given by the identity \begin{equation}
\label{eqn EllDen}
nP=\left(\frac{A_{nP}}{D_{nP}^2}, \frac{B_{nP}}{D_{nP}^3} \right) \end{equation} with $\gcd(A_{nP}, D_{nP})= \gcd(B_{nP}, D_{nP})=1$ and $D_{nP}\geq 1$. One can show that $(D_{nP})$ is a divisibility sequence. Some authors call this sequence an elliptic divisibility sequence. In this paper, in order to distinguish this sequence from the classical elliptic divisibility sequences studied by Ward, we call the sequence $(D_{nP})$ the \emph{elliptic denominator sequence} associated to the elliptic curve $E$ and the point $P$.
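As a sketch (the curve and point are our assumed example, not from this paper), the denominators $D_{nP}$ can be computed directly from the group law with exact rational arithmetic; for $E\colon y^2=x^3-2$ and $P=(3,5)$ one finds $D_P=1$, $D_{2P}=10$, $D_{3P}=171$, and the divisibility property can be observed:

```python
from fractions import Fraction
from math import isqrt

A, B = 0, -2                          # assumed example: y^2 = x^3 + Ax + B

def add(P, Q):
    """Chord-tangent addition on the short Weierstrass curve."""
    (x1, y1), (x2, y2) = P, Q
    lam = (Fraction(3*x1**2 + A, 2*y1) if P == Q
           else Fraction(y2 - y1, x2 - x1))
    x3 = lam**2 - x1 - x2
    return x3, lam*(x1 - x3) - y1

def denominator(n):
    """D_{nP} for P = (3, 5), read off the reduced denominator of x(nP)."""
    base = (Fraction(3), Fraction(5))
    pt = base
    for _ in range(n - 1):
        pt = add(pt, base)
    d2 = pt[0].denominator            # = D_{nP}^2 since gcd(A_{nP}, D_{nP}) = 1
    d = isqrt(d2)
    assert d*d == d2                  # the denominator of x(nP) is a perfect square
    return d
```

(Here `Fraction` keeps every coordinate in lowest terms automatically, so the denominator can be read off directly.)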
Comparing equations \eqref{eqn EllDen} and \eqref{eqn DivPoly} we expect a close relation between $\psi_n(P)$ and $D_{nP}$. In particular, for any prime $p$ we have that \begin{equation}
\label{eqn padicvalue} \nu_p(x(nP))= \nu_p(A_{nP})-2\nu_p(D_{nP}) = \nu_p(\phi_n(P))-2\nu_p(\psi_n(P)), \end{equation} where $\nu_p$ is the $p$-adic valuation on $\mathbb{Q}$ and $x(nP)$ is the $x$ coordinate of $nP$.
From the construction of division polynomials we know that if $p\nmid D_P$ then $\nu_p(\psi_n(P))\geq 0$ and $\nu_p(\phi_n(P))\geq 0$. Now Theorem \ref{ayad} tells us that if $P$ reduces to a non-singular point and if $P$ modulo $p$ is different from $\infty$ (i.e. $p \nmid D_P$), then $\nu_p(\psi_n(P))\nu_p(\phi_n(P))=0$. Under these conditions if $\nu_p(x(nP))\geq 0$ then by \eqref{eqn padicvalue} and the fact that $A_{nP}$ and $D_{nP}$ are coprime to each other, we have $\nu_p(D_{nP})=\nu_p(\psi_n(P))=0$. Similarly, if $\nu_p(x(nP))<0$ then $\nu_p(D_{nP})=\nu_p(\psi_n(P))=-{1\over 2}\nu_p(x(nP))$.
Therefore, we have the following proposition.
\begin{prop}
\label{first-proposition}
Let $E/\mathbb{Q}$ be an elliptic curve over the rationals given by equation \eqref{WE},
and assume that $a_i \in \mathbb{Z}$. Furthermore, let $P \in E(\mathbb{Q})$ be a point of
infinite order such that $P \not\equiv \infty \pmod p$ and let $(D_{nP})$ be the elliptic denominator
sequence associated to $E$ and $P$.
Then for a prime $p$ if $P \pmod p$ is non-singular, we have
$$\nu_p(D_{nP})=\nu_p({\psi}_n(P)).$$
\end{prop} \begin{rem}\label{mark} (a) One can drop the condition $P \not\equiv \infty \pmod p$ in the previous proposition and prove a stronger result for an scaled version of $\psi_n(P)$. Let $$\hat{\psi}_n(P):=D_P^{n^2} {\psi}_n(P).$$ Then if $P \pmod p$ is non-singular for all primes $p$, we have
$$D_{nP}=|{\hat\psi}_n(P)|$$ (see \cite{Ayad}). For a proof of this fact (in the more general case of elliptic nets) see Proposition \ref{second-proposition}.
\noindent (b) Formulas for explicit valuations of $\psi_n(P)$ at primes $p$ (of good or bad reduction) are given in \cite{Stange3}. Also in \cite{SS} the sign of $\psi_n(P)$ is computed explicitly. \end{rem}
In \cite{Stange}, Stange generalized the concept of an elliptic sequence to an $n$-dimensional array, called an elliptic net. In this paper we give a generalization of Ayad's theorem for net polynomials. \begin{defn}
Let $A$ be a free Abelian group of finite rank, and $R$ be an
integral domain. Let ${\mathbf 0}$ and $0$ be the additive identity elements of $A$ and $R$ respectively. An \emph{elliptic net} is any map $W:A \rightarrow R$ for which
$W({\mathbf 0}) = 0$, and that satisfies
\begin{multline} \label{net recurrence}
W(\mathbf{p}+\mathbf{q}+\mathbf{s})W(\mathbf{p}-\mathbf{q})W(\mathbf{r}+\mathbf{s})W(\mathbf{r}) \\ + W(\mathbf{q}+\mathbf{r}+\mathbf{s})W(\mathbf{q}-\mathbf{r})W(\mathbf{p}+\mathbf{s})W(\mathbf{p}) \\ +
W(\mathbf{r}+\mathbf{p}+\mathbf{s})W(\mathbf{r}-\mathbf{p})W(\mathbf{q}+\mathbf{s})W(\mathbf{q}) = 0,
\end{multline}
for all $\mathbf{p},\mathbf{q},\mathbf{r},\mathbf{s} \in A.$ We identify the \emph{rank} of $W$ with the rank of $A$. \end{defn}
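Two rank-$1$ sanity checks of the recurrence \eqref{net recurrence} (our illustrations, not examples from the paper): the map $W(n)=n$ satisfies it, and so does any rescaling $n \mapsto c^{n^2}\,n$, since the exponent sum $2(p^2+q^2+r^2+s^2+ps+qs+rs)$ is the same in all three terms; this is an instance of scale equivalence of nets:

```python
from itertools import product

def check_net(W, rng):
    # Verify the four-term elliptic-net recurrence for all (p, q, r, s) in rng^4.
    for p, q, r, s in product(rng, repeat=4):
        assert (W(p+q+s)*W(p-q)*W(r+s)*W(r)
                + W(q+r+s)*W(q-r)*W(p+s)*W(p)
                + W(r+p+s)*W(r-p)*W(q+s)*W(q)) == 0

check_net(lambda n: n, range(-4, 5))            # W(n) = n is a rank-1 net
check_net(lambda n: 2**(n*n) * n, range(-4, 5)) # so is the rescaled map
```

Both checks run over all $9^4$ quadruples in the stated range without finding a violation.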
Note that if $A=\mathbb{Z}$ and $W:A\rightarrow R$ is an elliptic net, then by setting $\mathbf{p}=m$, $\mathbf{q}=n$, $\mathbf{r}=1$, and $\mathbf{s}={ 0}$ in \eqref{net recurrence}, and noting that $W$ is an odd function, we get that $W(n)$ satisfies equation \eqref{eds1}, hence $(W(n))$ is an elliptic sequence. Therefore elliptic nets are a generalization of elliptic sequences.
We can relate elliptic nets to elliptic curves in the following way. For an arbitrary field $K$, let $$ S=K[x_1,y_1,\cdots, x_r, y_r], $$ and consider the polynomial ring $$
\mathcal{R}_r=
K[x_i, y_i]_{1\leq i \leq r}[(x_i-x_j)^{-1}]_{1\leq i<j\leq r}/\langle f(x_i, y_i)\rangle _{1\leq i \leq r}, $$ where $f$ is the defining polynomial \eqref{WE} for $E$. Let $\mathbf{P}=(P_1,P_2,\ldots,P_r) \in E(K)^r$ and $\mathbf{v} = (v_{1}, v_{2}, \dots, v_{r}) \in \mathbb{Z}^{r}$. It follows from \cite[Section 4]{Stange} that there exist ``polynomials'' $\Psi_{\mathbf{v}}, \Phi_{\mathbf{v}}, \overline{\Omega}_{\mathbf{v}}\in \mathcal{R}_r$ such that $\Psi_\mathbf{v}$ (as a function of $\mathbf{v} \in \mathbb{Z}^r$) is an elliptic net and \begin{equation}
\label{netp}
\mathbf{v} \cdot \mathbf{P}=v_{1}P_{1} + v_{2}P_{2} + \dots + v_{r}P_{r} =
\Big(\frac{\Phi_{\mathbf{v}}(\mathbf{P})}{\Psi^{2}_{\mathbf{v}}(\mathbf{P})},
\frac{\overline{\Omega}_{\mathbf{v}}(\mathbf{P})}{\Psi^{3}_{\mathbf{v}}(\mathbf{P})}\Big). \end{equation} The ``polynomial'' $\Psi_{\mathbf{v}}$ is called the \emph{$\mathbf{v}$-th net polynomial} associated to $E$. Also, the function $\mathbf{v} \mapsto \Psi_{\mathbf{v}}(\mathbf{P})$ is called \emph{the elliptic net} associated to $E$ and $\mathbf{P}$. In \cite{Stange}, Stange also proves that, when $r>1$, we can compute $\Psi_\mathbf{v}$ using the recurrence relation \eqref{net recurrence} and the initial values $\Psi_\mathbf{v}$ for $\mathbf{v}=\mathbf{e}_i$, $\mathbf{v}=2\mathbf{e}_i$, $\mathbf{v}=\mathbf{e}_i + \mathbf{e}_j$ and $\mathbf{v}=2\mathbf{e}_i+\mathbf{e}_j$, where $\{\mathbf{e}_1,\mathbf{e}_2,\ldots,\mathbf{e}_r\}$ is the standard basis for $\mathbb{Z}^r$. (For $r=1$ the recurrence \eqref{dpr} shows that $\psi_n$ is uniquely determined by $\psi_1$, $\psi_2$, $\psi_3$, and $\psi_4$.) Note that the initial values of $\Psi_\mathbf{v}$ are defined as follows: \begin{align}
\begin{aligned}
\label{Psi init}
&\Psi_{\mathbf{e_{i}}}= 1,~
\Psi_{\mathbf{2e_{i}}}= 2y_{i} + a_{1}x_{i} + a_{3},~
\Psi_{\mathbf{e_{i}+e_{j}}} = 1,\\
&\Psi_{2\mathbf{e_{i}}+\mathbf{e_{j}}} =
2x_{i} + x_{j} -\Big(\frac{y_{j}-y_{i}}{x_{j}-x_{i}}\Big)^{2} -a_{1} \Big(
\frac{y_{j}-y_{i}}{x_{j}-x_{i}} \Big) +a_{2}. \end{aligned} \end{align} The above initial conditions define the $\mathbf{v}$-th net polynomials of rank $r>1$ for any elliptic curves completely. We refer the reader to Theorem 2.5, Lemma 2.6, and Theorem 2.8 of \cite{Stange} for the details of how this can be done.
In this paper, we prove the following generalization of Theorem \ref{ayad} for net polynomials. Let $K$, $\nu$, $\mathcal{O}_\nu$, and $\mathfrak{p}$ be defined as before. \begin{thm} \label{first-theorem}
Let $E/K$ be an elliptic curve
defined by the polynomial \eqref{WE} with $a_i \in \mathcal{O}_\nu$ for
$i=1, 2, 3, 4, 6$.
Let $\mathbf{P} = (P_{1}, P_{2}, \dots, P_{r}) \in E(K)^{r}$ be such that
$P_{i} \not\equiv \infty \pmod \mathfrak{p}$, for $1\leq i \leq r$, and
$P_i\pm P_j \not\equiv \infty \pmod \mathfrak{p}$, for $1\leq i <j \leq r$.
Then the following are equivalent:
\begin{enumerate}[(a)]
\item \label{property 1}
There exists $1 \leq i \leq r$, such that
$$
\nu(\Psi_{2\mathbf{e}_{i}}(\mathbf{P})) > 0 ~~~~ {\rm and}
~~~~ \nu(\Psi_{3\mathbf{e}_{i}}(\mathbf{P})) > 0.
$$
\item \label{property 2}
There exists $1 \leq i \leq r$ such that for all $n \geq 2$ we have
$$\nu(\Psi_{n\mathbf{e}_{i}}(\mathbf{P})) > 0.$$
\item \label{property 3}
There exists $\mathbf{v} \in \mathbb{Z}^{r}$ and $1 \leq i \leq r$ such that
$$
\nu(\Psi_{\mathbf{v}}(\mathbf{P})) > 0 ~~~~ {\rm and}
~~~~ \nu(\Psi_{\mathbf{v}+\mathbf{e}_{i}}(\mathbf{P})) > 0.$$
\item \label{property 4}
There exists $\mathbf{v} \in \mathbb{Z}^{r}$ such that
$$
\nu(\Psi_{\mathbf{v}}(\mathbf{P})) > 0 ~~~~ {\rm and} ~~~~
\nu(\Phi_{\mathbf{v}}(\mathbf{P})) > 0.
$$
\item \label{property 5}
There exists $1 \leq i \leq r$ such that $P_{i} \pmod {\mathfrak{p}}$ is singular.
\end{enumerate} \end{thm}
To prove this, we first need to show that $\nu(\Psi_{\mathbf{v}}(\mathbf{P})) \geq 0$ in the cases we are dealing with. This result is of independent interest, so we record it in the following proposition.
\begin{prop} \label{valuation-prop}
Let $E/K$ be an elliptic curve defined by the polynomial \eqref{WE}
with $a_{i} \in \mathcal{O}_\nu$ for $i = 1,2,3,4,6$, and let $\mathbf{P} =
(P_{1}, P_{2}, \dots, P_{r}) \in E(K)^{r}$.
When $r=1$, assume that $P_1 \not \equiv \infty \pmod \mathfrak{p}$.
When $r>1$, then assume that for all $1 \leq i<j \leq r$ we have
$P_i \not \equiv \infty \pmod \mathfrak{p}$ and
$P_i \pm P_j \not \equiv \infty \pmod \mathfrak{p}$.
Then for all $\mathbf{v} \in \mathbb{Z}^r$ we have
\[ \nu(\Psi_\mathbf{v}(\mathbf{P})) \geq 0, \]
hence $\Psi_\mathbf{v}(\mathbf{P}) \in \mathcal{O}_\nu$. \end{prop}
Next we specialize to the case that $E$ is defined over $\mathbb{Q}$. Let $E/\mathbb{Q}$ be an elliptic curve, and let ${\mathbf{P}}=(P_1,P_2,\dots,P_r) \in E(\mathbb{Q})^r$ be $r$ linearly independent points in $E(\mathbb{Q})$. For $\mathbf{v}=(v_1,v_2,\cdots,v_r) \in \mathbb{Z}^r$, let ${\mathbf{v}}\cdot \mathbf{P}=v_1P_1+\cdots+v_r P_r$. We denote the \emph{elliptic denominator net} associated to $E$ and $\mathbf{P}$ by $(D_{\mathbf{v}\cdot \mathbf{P}})$, where $D_{\mathbf{v}\cdot \mathbf{P}}$ is the denominator of $\mathbf{v} \cdot \mathbf{P}$. More precisely, \begin{equation} \label{Denom}
\mathbf{v} \cdot \mathbf{P}= v_1P_1+v_2P_2+\dots+v_rP_r=
\left(\frac{A_{\mathbf{v} \cdot
\mathbf{P}}}{D_{\mathbf{v} \cdot \mathbf{P}}^2}, \frac{B_{\mathbf{v} \cdot
\mathbf{P}}}{D_{\mathbf{v} \cdot \mathbf{P}}^3} \right). \end{equation} We are interested in the relation between the element $D_{\mathbf{v} \cdot \mathbf{P}}$ of the elliptic denominator net, and the value of the $\mathbf{v}$-th net polynomial $\Psi_{\mathbf{v}}$ at $\mathbf{P}$. An immediate corollary of Theorem \ref{first-theorem} is that for all but finitely many primes $p$ we have \[ \nu_p(D_{\mathbf{v} \cdot \mathbf{P}}) = \nu_p(\Psi_\mathbf{v}(\mathbf{P})), \] where $\nu_p$ is the $p$-adic valuation. We extend this result; however, similar to Remark \ref{mark} (a), we need to multiply the value of $\Psi_{\mathbf{v}}$ at $\mathbf{P}$ by a quadratic form to obtain an equivalent net polynomial ${\hat{\Psi}}_{\mathbf{v}}$.
More precisely, by using notation \eqref{Denom}, let \begin{equation} \label{FVP} F_\mathbf{v}(\mathbf{P}) = \prod_{1 \leq i \leq j \leq r} A_{ij}^{v_iv_j}, \end{equation} where $$ A_{ii}=D_{\mathbf{e}_i \cdot \mathbf{P}}=D_{P_i},~~{\rm and}~~ A_{ij}=\frac{D_{P_i+P_j}}{D_{P_i} D_{P_j}}~~{\rm for}~~i\neq j.$$ Then $F(\mathbf{P}):\mathbb{Z}^r \rightarrow K^\times$ defined by $\mathbf{v} \mapsto F_\mathbf{v}(\mathbf{P})$ is a quadratic form. Define \[ \hat{\Psi}_\mathbf{v}(\mathbf{P}) = F_\mathbf{v}(\mathbf{P})\Psi_\mathbf{v}(\mathbf{P}), \] for all $\mathbf{v} \in \mathbb{Z}^r$. Then $\hat{\Psi}(\mathbf{P})$ is an elliptic net that is scale equivalent to $\Psi(\mathbf{P})$ (see Section \ref{sec2} for more explanation). Furthermore, notice that \[ \hat{\Psi}_{\mathbf{e}_i}(\mathbf{P})=F_{\mathbf{e}_i}(\mathbf{P})\Psi_{\mathbf{e}_i}(\mathbf{P})=A_{ii}=D_{\mathbf{e}_i \cdot \mathbf{P}}, \] and \[ \hat{\Psi}_{\mathbf{e}_i+\mathbf{e}_j}(\mathbf{P})=F_{\mathbf{e}_i+\mathbf{e}_j}(\mathbf{P})\Psi_{\mathbf{e}_i+\mathbf{e}_j} (\mathbf{P})=A_{ii}A_{jj}A_{ij}=D_{P_i+P_j}=D_{(\mathbf{e}_i+\mathbf{e}_j) \cdot \mathbf{P}}. \]
We will prove the following generalization of Proposition \ref{first-proposition}. \begin{prop} \label{second-proposition}
Let $E/\mathbb{Q}$ be an elliptic curve defined by the polynomial \eqref{WE}
with $a_i\in \mathbb{Z}$ for $i=1, 2, 3, 4, 6$. Let
$\mathbf{P}=(P_1,\ldots,P_r) \in E(\mathbb{Q})^r$ be an $r$-tuple consisting of $r$ linearly independent points in $E(\mathbb{Q})$.
Let $p$ be a prime so that $P_i \pmod p$ is non-singular for $1\leq i \leq r$.
Then
\[
\nu_p(D_{\mathbf{v} \cdot \mathbf{P}})=\nu_p(\hat{\Psi}_\mathbf{v}(\mathbf{P})),
\]
for all $\mathbf{v} \in \mathbb{Z}^r$.
In particular, if for all primes $p$ and all integers $1 \leq i \leq r$
we have that $P_i \pmod p$ is non-singular, then
\[
D_{\mathbf{v} \cdot \mathbf{P}} = |\hat{\Psi}_\mathbf{v}(\mathbf{P})|.
\]
\end{prop}
Section \ref{sec5} includes the proofs of Propositions \ref{valuation-prop} and \ref{second-proposition}, and of Theorem \ref{first-theorem}. See also Examples \ref{first-example} and \ref{second-example} for concrete descriptions of Proposition \ref{second-proposition}.
To prove Theorem \ref{first-theorem}, we need to study the behaviour of zeros of an elliptic net $W:\mathbb{Z}^r \rightarrow {K}$, where ${K}$ is an arbitrary field. Recall that for the values of rank $1$ elliptic nets (i.e. elliptic sequences), we have the concept of {\em rank of apparition}. More precisely, for any elliptic sequence $(W_n)$ we say that a natural number $\rho$ is a rank of apparition if $W_{\rho}=0$ and
$W_m \neq 0$ for any proper divisor $m$ of $\rho$. We say a sequence has {\em a unique rank of apparition} $\rho~ (>1)$ if
$W_k = 0 $ if and only if $\rho | k$. Motivated by this definition, we say an elliptic net $W:\mathbb{Z}^r \rightarrow {K}$ has a {\em unique rank of apparition with respect to the standard basis} if each sequence $(W(n\mathbf{e}_1)),~ (W(n\mathbf{e}_2)),~\ldots,~(W(n\mathbf{e}_r))$ has a unique rank of apparition. In general, it is convenient to have a definition that works for a free finitely generated Abelian group $A$, rather than $\mathbb{Z}^r$.
\begin{defn}
\label{apparition}
Let $W: A \rightarrow {K}$ be an elliptic net of rank $r$.
Let $\mathscr{B}=\{\mathbf{b}_1, \mathbf{b}_2, \dots, \mathbf{b}_r\}$ be a basis for $A$. We say
that $W$ has a {\em unique rank of apparition with respect to
$\mathscr{B}$} if there exists an
$r$-tuple $(\rho_1, \rho_2, \dots, \rho_r)$ of positive integers with
$\rho_{i} > 1$ for $1 \leq i \leq r$, such that
$$W(n\mathbf{b}_i)=0 \iff \rho_i \mid n,$$
for all $1\leq i \leq r$. \end{defn} Note that an elliptic sequence $(W_n)$ has a unique rank of apparition if its corresponding net $n \mapsto W_n$ has a unique rank of apparition with respect to $\{1\}$.
We remark on another possible generalization of a unique rank of apparition of a sequence. Namely, for a sequence $W:\mathbb{Z} \rightarrow {K}$, having a unique rank of apparition is the same as $\Lambda = \{ v \in \mathbb{Z} : \, W(v)=0\}$ being a subgroup of $\mathbb{Z}$. Therefore, a natural generalization of the concept of unique rank of apparition to elliptic nets $W:A \rightarrow {K}$ is to require that $\Lambda = W^{-1}(0)=\{\mathbf{v} \in A : \, W(\mathbf{v})=0\}$ be a subgroup of $A$. The following theorem shows that our concept of unique rank of apparition implies that $\Lambda$ is a subgroup of $A$. \begin{thm}
\label{second-theorem}
Let $W:A \rightarrow {K}$ be an elliptic net, and let
$\mathscr{B}=\{\mathbf{b}_1, \dots, \mathbf{b}_r\}$ be a basis for $A$. Assume that $W$ has a
unique {rank of apparition with respect to $\mathscr{B}$}. Let
$$\Lambda = W^{-1}(0)=\{\mathbf{v} \in A : \, W(\mathbf{v}) = 0 \}$$
be the zero set of $W$. Then $\Lambda$ is a full rank subgroup of $A$. \end{thm} \noindent We prove Theorem \ref{second-theorem} in Section \ref{sec3}.
The proof of Theorem \ref{first-theorem} comes as a combination of Theorems \ref{ayad}, \ref{second-theorem} and the following theorem.
\begin{thm}[\bf Ward]
\label{Ward6.2}
Let $(W_n)$ be an elliptic sequence.
A necessary and sufficient condition that $(W_n)$ does not have a unique
rank of apparition is that $W_3=W_4=0$. \end{thm} The proof of Theorem \ref{Ward6.2} is analogous to that of \cite[Theorem 6.2]{Ward}, where the case of an integer elliptic sequence modulo $p$ is considered.
Here we describe another application of Theorem \ref{second-theorem}. Let $W_n=\hat{\psi}_n(P)$ as defined in Remark \ref{mark} (a). Then Proposition \ref{first-proposition} tells us that in many cases, we can think of $W_n$ as the denominator of the point $nP$ for some elliptic curve $E/\mathbb{Q}$ and some point $P \in E(\mathbb{Q})$. Now let $p$ be a prime of good reduction and let $n_p$ be the order of the point $P$ in $E(\mathbb{F}_p)$, where $\mathbb{F}_p$ is the finite field with $p$ elements. Then we have that $(n_p+k)P \equiv kP \pmod p$. Therefore, it is tempting to assume that $W_{n_p+k} \equiv W_k \pmod p$. More generally, let $W:\mathbb{Z} \rightarrow {K}$ be an elliptic sequence with $\rho$ the unique rank of apparition of $W$. Then, one may speculate that $W_{\rho+k} = W_k$. This, in fact, is not true in general. However, in \cite{Ward} the following is proved. \begin{thm}[\bf Ward's Symmetry Theorem]
\label{Ward sym}
Let $(W_n)$ be an elliptic sequence and assume $W_2W_3 \neq 0$.
Let $\rho>1$ be the unique rank of apparition of $W$.
Then there exist $a, b \in {K}$ such that
\begin{equation*}
W_{m\rho+n} = a^{m^2} b^{mn} W_{n}
\end{equation*}
for all $m,n \in \mathbb{Z}$. \end{thm} \noindent See Theorem 9.2 of \cite{Ward} for a proof and Theorem 8.2 of \cite{Ward} for some properties of elements $a$ and $b$, when ${K}=\mathbb{F}_p$. Note that the proofs also work for any field ${K}$.
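As a numerical illustration of Theorem \ref{Ward sym} over ${K}=\mathbb{F}_p$ (a sketch with our own choices, used nowhere else in the paper), one can take Ward's elliptic divisibility sequence $0, 1, 1, -1, 1, 2, -1, -3, -5, 7, \ldots$ reduced modulo $p=7$, locate its rank of apparition $\rho$, recover $a$ and $b$ from the instances $W_{\rho+1}=abW_1$ and $W_{\rho+2}=ab^2W_2$ of the symmetry, and then verify the theorem term by term:

```python
# Numerical check of Ward's Symmetry Theorem for an elliptic sequence over F_p.
# The integer sequence is Ward's elliptic divisibility sequence 0, 1, 1, -1, 1, 2, ...,
# generated by the standard specialization
# W_{m+2} W_{m-2} = W_{m+1} W_{m-1} W_2^2 - W_3 W_1 W_m^2 (division exact over Z).
N, p = 40, 7
W = [0, 1, 1, -1, 1]
for m in range(3, N):
    W.append((W[m + 1] * W[m - 1] * W[2] ** 2 - W[3] * W[1] * W[m] ** 2) // W[m - 2])
Wp = [w % p for w in W]                                  # the reduced sequence over F_7

rho = next(n for n in range(1, len(Wp)) if Wp[n] == 0)   # rank of apparition mod 7

# Solve W_{rho+1} = a b W_1 and W_{rho+2} = a b^2 W_2 for a, b in F_p^x.
b = Wp[rho + 2] * Wp[1] * pow(Wp[rho + 1] * Wp[2], -1, p) % p
a = Wp[rho + 1] * pow(b * Wp[1], -1, p) % p

# Verify W_{m rho + n} = a^{m^2} b^{mn} W_n in F_p.
for m in range(0, len(Wp) // rho):
    for n in range(0, rho):
        if m * rho + n < len(Wp):
            assert Wp[m * rho + n] == pow(a, m * m, p) * pow(b, m * n, p) * Wp[n] % p
```

Here the modular inverse is computed with Python's three-argument `pow`; the sequence has $\rho=9$ modulo $7$, matching the order of the corresponding point on the reduced curve.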
The following theorem gives a generalization of Theorem \ref{Ward sym}.
\begin{thm}\label{Ward-generalized}
Let $W:A \rightarrow {K}$ be an elliptic net with the property that $\Lambda=W^{-1}(0)$ is a subgroup of $A$
and assume $|A/\Lambda| \geq 4$. Then, there exist well-defined functions
$\xi :\Lambda \rightarrow {K}^\times$ and $\chi:\Lambda \times A \rightarrow {K}^\times$ such that $$W(\bm{\lambda}+\mathbf{v})=\xi(\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v}) \quad \text{for all } \bm{\lambda} \in \Lambda
\text{ and all } \mathbf{v} \in A,$$ and the functions $\xi$ and $\chi$ satisfy the following properties: \begin{enumerate}[(i)]
\item $\chi$ is bilinear,
\item $\chi(\bm{\lambda}_1,\bm{\lambda}_2)=\chi(\bm{\lambda}_2,\bm{\lambda}_1)$,
\item $\xi(\bm{\lambda}_1+\bm{\lambda}_2)=\xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\bm{\lambda}_2)$,
\item $\xi(-\bm{\lambda})=\xi(\bm{\lambda})$, and
\item $\xi(\bm{\lambda})^2 = \chi(\bm{\lambda},\bm{\lambda})$.
\end{enumerate}
Furthermore, the functions $\chi(\bm{\lambda},\mathbf{p})$ and $\xi(\bm{\lambda})$ are defined by the map
$$
\begin{array}{cccc}
\delta: & \Lambda \times (A \setminus \Lambda) & \longrightarrow & {K}^\times \\
& (\bm{\lambda}, \mathbf{p}) & \longmapsto & \frac{W(\bm{\lambda}+\mathbf{p})}{W(\mathbf{p})},
\end{array}
$$
and the relations
$$
\begin{array}{cccc}
\chi: & \Lambda \times A & \longrightarrow & {K}^\times \\
& (\bm{\lambda}, \mathbf{p}) & \longmapsto & \frac{\delta(\bm{\lambda}, \mathbf{p}+\mathbf{v})}{\delta(\bm{\lambda}, \mathbf{v})},
\end{array}
$$
where $\mathbf{v}$ is any element of $A$ with $\mathbf{v},\mathbf{v}+\mathbf{p} \notin \Lambda$,
and
$$
\begin{array}{cccc}
\xi: & \Lambda & \longrightarrow & {K}^\times \\
& \bm{\lambda} & \longmapsto & \frac{\delta(\bm{\lambda},\mathbf{v})}{\chi(\bm{\lambda},\mathbf{v})},
\end{array}
$$
for any $\mathbf{v} \in A\setminus \Lambda$. \end{thm} Note that under the conditions of Theorem \ref{Ward sym}, by considering $\bm{\lambda}=m\rho$ and $\mathbf{v}=n$ in the previous theorem and applying the bilinearity of $\chi$ and Corollary \ref{nsquare}, we obtain $$W(m\rho+n)=\xi(m\rho) \chi(m\rho, n)W(n)=\xi(\rho)^{m^2} \chi(\rho, 1)^{mn}W(n).$$ Thus, by letting $a=\xi(\rho)$ and $b=\chi(\rho, 1)$, we recover the assertion of Theorem \ref{Ward sym}.
We remark here that in \cite{Stange1}, Stange relates some of the functions given in Theorem \ref{Ward-generalized} to the Tate pairing on $E$. Furthermore, special cases of the above formula do show up in her thesis. However, to the best of our knowledge, the statement of the above theorem is new.
Given the properties of $\chi$ and $\xi$, for any $r \in \mathbb{N}$, any $\bm{\lambda}_1,\bm{\lambda}_2,\ldots,\bm{\lambda}_r \in \Lambda$, and any $n_1,n_2,\ldots,n_r \in \mathbb{Z}$, we get that
\begin{equation}
\label{W-formula}
W\left(\left(\sum_{i=1}^{r}n_{i}\bm{\lambda}_{i}\right)+\mathbf{v} \right) =
\left(\prod_{i=1}^r \xi(\bm{\lambda}_i)^{n_i^2} \chi(\bm{\lambda}_i,\mathbf{v})^{n_i}
\left(\prod_{j=1}^{i-1} \chi(\bm{\lambda}_i,\bm{\lambda}_j)^{n_in_j} \right)
\right) W(\mathbf{v}).
\end{equation} As a simple corollary of the above identity, we have the following periodicity result. \begin{cor} \label{cor-13} Let $W:A \rightarrow \mathbb{F}_q$ be an elliptic net, and let $\Lambda=W^{-1}(0)$. Assume that $\Lambda$ is a subgroup of $A$ and
$|A/\Lambda|\geq 4$.
Then $W(\mathbf{v}_1)=W(\mathbf{v}_2)$ if $\mathbf{v}_1 \equiv \mathbf{v}_2 \pmod{ (q-1)}$. \end{cor} We can also employ \eqref{W-formula} in computing elliptic nets with values in finite fields (see Example \ref{third-example} for a description). Section \ref{sec4} is dedicated to the proofs of Theorem \ref{Ward-generalized} and its corollaries.
\section{Review of Elliptic Nets} \label{sec2} In this section we collect some basic facts about elliptic nets for the sake of completeness. Recall that for a free Abelian group $A$ and an integral domain $R$, we defined an elliptic net to be any map $W: A \rightarrow R$ with $W(\mathbf{0})=0$ and \begin{multline*}
W(\mathbf{p}+\mathbf{q}+\mathbf{s})W(\mathbf{p}-\mathbf{q})W(\mathbf{r}+\mathbf{s})W(\mathbf{r}) \\ + W(\mathbf{q}+\mathbf{r}+\mathbf{s})W(\mathbf{q}-\mathbf{r})W(\mathbf{p}+\mathbf{s})W(\mathbf{p}) \\ +
W(\mathbf{r}+\mathbf{p}+\mathbf{s})W(\mathbf{r}-\mathbf{p})W(\mathbf{q}+\mathbf{s})W(\mathbf{q}) = 0, \end{multline*} for all $\mathbf{p},\mathbf{q},\mathbf{r},$ and $\mathbf{s} \in A$. Also recall that the rank of an elliptic net is defined to be the rank of its domain $A$.
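As a quick sanity check (not needed for the proofs), the recurrence can be tested numerically in rank $1$. The sketch below generates Ward's elliptic divisibility sequence $0, 1, 1, -1, 1, 2, -1, -3, -5, 7, \ldots$ from a standard specialization of the recurrence and then verifies the full four-variable identity; the variable names are ours:

```python
from itertools import product

# Ward's elliptic divisibility sequence 0, 1, 1, -1, 1, 2, -1, -3, -5, 7, ...,
# generated by the specialization
# W_{m+2} W_{m-2} = W_{m+1} W_{m-1} W_2^2 - W_3 W_1 W_m^2
# of the elliptic net recurrence (the integer division below is exact).
N = 20
W = [0, 1, 1, -1, 1]
for m in range(3, N):
    W.append((W[m + 1] * W[m - 1] * W[2] ** 2 - W[3] * W[1] * W[m] ** 2) // W[m - 2])

def w(n):
    # rank-1 elliptic nets are odd: W(-n) = -W(n)
    return -W[-n] if n < 0 else W[n]

# Check the defining recurrence for all small integers p, q, r, s.
for p, q, r, s in product(range(-4, 5), repeat=4):
    assert (w(p + q + s) * w(p - q) * w(r + s) * w(r)
          + w(q + r + s) * w(q - r) * w(p + s) * w(p)
          + w(r + p + s) * w(r - p) * w(q + s) * w(q)) == 0
```

The check exercises all $9^4$ quadruples with $|p|,|q|,|r|,|s| \leq 4$, which only needs terms up to index $12$.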
\begin{lem}
\label{lem prelim}
Let $W:A \rightarrow R$ be an elliptic net.
\begin{enumerate}[(a)]
\item For any integral domain $R'$ and any morphism $\pi:R \rightarrow R'$, the
function $\pi \circ W:A \rightarrow R'$ is an elliptic net.
\item For any subgroup $A' \subset A$, the function
$W|_{A'} :A' \rightarrow R$ is an elliptic net.
\item For any $\mathbf{v} \in A$ we have $W(-\mathbf{v})=-W(\mathbf{v})$.
\end{enumerate} \end{lem} \begin{proof}
To prove the first two parts of this lemma, note that both
$W|_{A'}$ and $\pi \circ W$ satisfy the elliptic net recurrence
\eqref{net recurrence}.
To prove $W(-\mathbf{v})=-W(\mathbf{v})$, observe that if $W(\mathbf{v})=W(-\mathbf{v})=0$, then we are
done. Otherwise, assume without loss of generality that $W(\mathbf{v}) \neq 0$. Then by
setting $\mathbf{p}=\mathbf{q}=\mathbf{v}$ and $\mathbf{r}=\mathbf{s}=\mathbf{0}$ in \eqref{net recurrence} we have
\[ W(\mathbf{v})^3(W(\mathbf{v})+W(-\mathbf{v}))=0. \]
Since $R$ is an integral domain, we get $W(-\mathbf{v})=-W(\mathbf{v})$. \end{proof}
We have already remarked that the values of an elliptic net of rank $1$ form an elliptic sequence. Let $W:A \rightarrow R$ be any elliptic net, and let $\mathbf{v} \in A$. Then by part (b) of the above lemma, $W|_{ \mathbf{v}\mathbb{Z} } : \mathbb{Z} \rightarrow R$ is an elliptic net of rank $1$. Also, note that if $R$ is an integral domain and ${K} = \Frac(R)$, the fraction field of $R$, then $i:R \rightarrow {K}$ is injective. Therefore $i \circ W: A \rightarrow {K}$ is an elliptic net, and $(i \circ W)^{-1}(0)=W^{-1}(0)$. Therefore we are not losing any generality in Theorems \ref{second-theorem} and \ref{Ward-generalized} by focusing on elliptic nets having entries in a field.
Next we are interested in relating elliptic nets to linear combinations of points on elliptic curves. In order to do this, we review some results of \cite{Stange} on net polynomials.
For a complex lattice $\Lambda \subset \mathbb{C}$, let
$\sigma:\mathbb{C} \rightarrow \mathbb{C}$ be the Weierstrass
$\sigma$ function
\begin{align*}
\sigma(z)=\sigma(z;\Lambda) = z \prod_{w \in \Lambda, w \neq 0}
\left( 1 - {z \over w}\right) e^{ {z\over w} + {1\over 2} ({z\over w})^2} .
\end{align*}
Fix an $r$-tuple ${\mathbf z} = (z_1,z_2,\ldots,z_r) \in \mathbb{C}^r$
with $z_i \not \in \Lambda$ and $z_i + z_j \not \in \Lambda$.
For an $r$-tuple $\mathbf{v}=(v_1,v_2,\ldots,v_r) \in \mathbb{Z}^r$ define
\begin{align*}
\Omega_\mathbf{v}({\mathbf z}) = \Omega_\mathbf{v}(\mathbf{z} ; \Lambda) = (-1)^{\sum_{1\leq i \leq j \leq r} v_i v_j +1}{\sigma(v_1z_1+v_2z_2+\cdots+v_rz_r) \over
\left( \prod_{i=1}^r \sigma(z_i)^{2v_i^2 - \sum_{j=1}^r v_iv_j} \right)
\left( \prod_{1 \leq i < j \leq r} \sigma(z_i+z_j)^{v_iv_j} \right)}.
\end{align*}
\begin{thm}[{\bf Stange}]
The function
\begin{align*}
\begin{array}{cccc}
\Omega = \Omega(\mathbf{z};\Lambda): & \mathbb{Z}^r & \longrightarrow & \mathbb{C} \\
& \mathbf{v} & \longmapsto & \Omega_\mathbf{v}(\mathbf{z}),
\end{array}
\end{align*}
is an elliptic net.
\end{thm}
\begin{proof}
See \cite[Theorem 3.7]{Stange}.
\end{proof}
Now let $E/\mathbb{C}$ be an elliptic curve, and let
$\Lambda_E$ be the lattice corresponding to $E$.
Let
${\mathbf P}=(P_1,\ldots,P_r) \in E(\mathbb{C})^r$ with
$P_i, P_i + P_j \neq \infty$ and
let ${\mathbf z}=(z_1,\ldots,z_r) \in \mathbb{C}^r$ be such
that $z_i$ maps to $P_i$ under the uniformization map
\[ \mathbb{C} \rightarrow \mathbb{C}/\Lambda_E \simeq E(\mathbb{C}). \]
Then the function
\begin{align*}
\begin{array}{cccc}
\Psi(\mathbf{P};E): & \mathbb{Z}^r & \longrightarrow & \mathbb{C} \\
& \mathbf{v} & \longmapsto & \Omega_\mathbf{v}(\mathbf{z}),
\end{array}
\end{align*}
is an elliptic net with values in $\mathbb{C}$. We call $\Psi({\bf P};E)$ the \emph{elliptic net associated to $E( {\rm over~} \mathbb{C})$ and $\mathbf{P}$}.
Let $S^ \text{univ}=\mathbb{Z}[\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_6]$, and for
any positive integer $r$ let
\[
\mathcal{R}_{r}^{ \text{univ}} = S^{ \text{univ}}[x_i,y_i]_{1 \leq i \leq r}[(x_i-x_j)^{-1}]_{1 \leq i < j \leq r}/\langle f(x_i,y_i) \rangle _{1 \leq i \leq r},
\]
where $f(x, y)$ is given by \eqref{WE}.
Then for every elliptic curve $E/K$ defined by the polynomial
\eqref{WE}, and $\mathbf{P} \in E(K)^r$ with $P_i, P_i \pm P_j \neq \infty$,
we can find a morphism
\[ \pi=\pi_{\mathbf{P};E} : \mathcal{R}_{r}^{ \text{univ}} \rightarrow K \]
so that
$\pi(\alpha_i)=a_i$, and $(\pi(x_i),\pi(y_i))=P_i$.
The following result is proved in \cite[section 4]{Stange}.
\begin{thm}[{\bf Stange}]
For each $\mathbf{v} \in \mathbb{Z}^r$,
there is $\Psi_{\mathbf{v}}^{ \text{univ}} \in \mathcal{R}_{r}^{ \text{univ}}$ so that
$\Psi^ \text{univ}:\mathbf{v} \mapsto \Psi_{\mathbf{v}}^{ \text{univ}}$ is an elliptic net, and
for any elliptic curve $E/\mathbb{C}$ and $\mathbf{P} \in E(\mathbb{C})^r$
with $P_i, P_i \pm P_j \neq \infty$ we have
\[ \pi_{\mathbf{P};E}\circ \Psi^ \text{univ} = \Psi(\mathbf{P};E). \]
\end{thm} Let $ \mathcal{R}_r^ \text{univ}$, $S^ \text{univ}$, and
$E/K$ be as before.
Then, there exists a map $\pi_E : S^ \text{univ} \rightarrow K$, so that
$\pi_E(\alpha_i)=a_i$.
This induces a map
\[ (\pi_E)_* : \mathcal{R}_r^ \text{univ} \rightarrow
K[x_i,y_i]_{1 \leq i \leq r}[(x_i-x_j)^{-1}]_{1 \leq i < j \leq r}/\langle f(x_i,y_i) \rangle _{1 \leq i \leq r}. \]
Then part (a) of Lemma \ref{lem prelim} shows that
$\Psi : \mathbf{v} \mapsto (\pi_E)_* (\Psi_\mathbf{v}^ \text{univ})$ defines an elliptic net with values in
\[ \mathcal{R}_r := K[x_i,y_i]_{1 \leq i \leq r}[(x_i-x_j)^{-1}]_{1 \leq i < j \leq r}/\langle f(x_i,y_i) \rangle _{1 \leq i \leq r}. \]
We call $\Psi_\mathbf{v} \in \mathcal{R}_r$ the \emph{$\mathbf{v}$-th net polynomial} associated to $E$.
Now let
${\mathbf{P}} \in E(K)^r$ with $P_i, P_i \pm P_j \neq \infty$.
Then by part (a) of Lemma \ref{lem prelim},
$\Psi(\mathbf{P};E):\mathbf{v} \mapsto \Psi_\mathbf{v}(\mathbf{P})$ is an elliptic net with values in $K$. We call $\Psi({\bf P};E)$ the \emph{elliptic net associated to $E( {\rm over~} K)$ and $\mathbf{P}$}.
Here we note that $\Psi_{n\mathbf{e}_i}(\mathbf{P})=\psi_n(P_i)$. Moreover, we remark that for $E/K$ defined by the polynomial \eqref{WE}, we can compute $\Psi_\mathbf{v}$ explicitly. In fact for $\mathbf{v} \in \{ \mathbf{e}_i, 2\mathbf{e}_i, \mathbf{e}_i+\mathbf{e}_j, 2\mathbf{e}_i+\mathbf{e}_j : \, i \neq j \}$, the exact values of $\Psi_\mathbf{v}$ are given by \eqref{Psi init}. Furthermore, as we pointed out in the introduction, Theorem 2.5, Lemma 2.6, and Theorem 2.8 of \cite{Stange} prove that these initial conditions are sufficient for computing $\Psi_\mathbf{v}$ for any $\mathbf{v} \in \mathbb{Z}^r$. \begin{exam}
If we let $(\mathbf{p},\mathbf{q},\mathbf{r},\mathbf{s})=(\mathbf{e}_i+\mathbf{e}_j, \mathbf{e}_i\pm \mathbf{e}_k, -\mathbf{e}_i, -\mathbf{e}_i)$, then from \eqref{net recurrence} we get that
\[ \Psi_{\mathbf{e}_i+\mathbf{e}_j \pm \mathbf{e}_k} \Psi_{\mathbf{e}_j\mp \mathbf{e}_k} \Psi_{-2\mathbf{e}_i}\Psi_{-\mathbf{e}_i}+\Psi_{-\mathbf{e}_i \pm \mathbf{e}_k} \Psi_{2\mathbf{e}_i\pm \mathbf{e}_k} \Psi_{\mathbf{e}_j}\Psi_{\mathbf{e}_i+\mathbf{e}_j}+\Psi_{\mathbf{e}_j-\mathbf{e}_i} \Psi_{-2\mathbf{e}_i- \mathbf{e}_j} \Psi_{\pm\mathbf{e}_k}\Psi_{\mathbf{e}_i\pm \mathbf{e}_k} = 0.\] We note that in \cite[Theorem 2.5]{Stange} it is shown that the terms $\Psi_{\mathbf{e}_i - \mathbf{e}_j}$ and $\Psi_{2\mathbf{e}_{i} - \mathbf{e}_{j}}$ can be computed explicitly in terms of $\Psi_{\mathbf{v}}$ for $\mathbf{v} \in \{ \mathbf{e}_i, 2\mathbf{e}_i, \mathbf{e}_i+\mathbf{e}_j, 2\mathbf{e}_i+\mathbf{e}_j : \, i \neq j \}$. In particular, setting $(\mathbf{p}, \mathbf{q}, \mathbf{r}, \mathbf{s}) = (\mathbf{e}_i, \mathbf{e}_j, \mathbf{0}, \mathbf{e}_i + \mathbf{e}_j)$ gives
\[\Psi_{\mathbf{e}_i - \mathbf{e}_j} = \Psi_{\mathbf{e}_i + 2\mathbf{e}_j} - \Psi_{2\mathbf{e}_i + \mathbf{e}_j}.\] Similarly, taking $(\mathbf{p},\mathbf{q},\mathbf{r},\mathbf{s}) = (-\mathbf{e}_i + \mathbf{e}_j, \mathbf{e}_j, \mathbf{e}_i, \mathbf{e}_i)$, we have
\[\Psi_{2\mathbf{e}_i -\mathbf{e}_j} = \Psi_{2\mathbf{e}_i}\Psi_{2\mathbf{e}_j} - \Psi_{2\mathbf{e}_i + \mathbf{e}_j}\Psi_{\mathbf{e}_i - \mathbf{e}_j}^2. \] Thus $\Psi_{\mathbf{e}_i+\mathbf{e}_j \pm \mathbf{e}_k}$ can be computed using $\Psi_{\mathbf{v}}$ for $\mathbf{v} \in \{ \mathbf{e}_i, 2\mathbf{e}_i, \mathbf{e}_i+\mathbf{e}_j, 2\mathbf{e}_i+\mathbf{e}_j : \, i \neq j \}.$ \end{exam}
We are interested in relating $\Psi_\mathbf{v}(\mathbf{P})$ to the denominators of linear combinations of points on an elliptic curve. To do this, recall that for any $E/K$ given by the polynomial \eqref{WE}, $\mathbf{P} \in E(K)^r$, and $\mathbf{v} \in \mathbb{Z}^r$ we can find rational functions (by repeated use of doubling and addition formulas for elliptic curves) $X_\mathbf{v}, Y_\mathbf{v} \in \Frac( \mathcal{R}_r)$, the fraction field of $ \mathcal{R}_r$, such that \[ \mathbf{v} \cdot \mathbf{P} = v_1P_1+\cdots + v_r P_r = (X_\mathbf{v}(\mathbf{P}),Y_\mathbf{v}(\mathbf{P})). \] The following lemma gives an explicit representation for $X_\mathbf{v}$ in terms of net polynomials. \begin{lem}
\label{numerator formula}
Let $E/K$ be an elliptic curve given by the polynomial \eqref{WE}, and let $\mathbf{P} \in E(K)^r$ be such that
$P_i \neq \infty$ and $P_i \pm P_j \neq \infty$. Then
for any $\mathbf{v} \in \mathbb{Z}^r$, there is $\Phi_\mathbf{v} \in \mathcal{R}_r$ such that
\[ X_{\mathbf{v}} = {\Phi_\mathbf{v} \over \Psi_\mathbf{v}^2}. \]
In particular, for any $1 \leq i \leq r$ we have
\[ \Phi_\mathbf{v} (\mathbf{P}) = \Psi_\mathbf{v}^2(\mathbf{P}) x(P_i) - \Psi_{\mathbf{v} + \mathbf{e}_i}(\mathbf{P})\Psi_{\mathbf{v}-\mathbf{e}_i}(\mathbf{P}). \] \end{lem} \begin{proof}
In \cite[Lemma 4.2]{Stange}, it is proved that for any
$\mathbf{v},\mathbf{u} \in \mathbb{Z}^r$ we have
\[ \Psi_\mathbf{v}^2 \Psi_\mathbf{u}^2(X_\mathbf{v} - X_\mathbf{u})=-\Psi_{\mathbf{v}+\mathbf{u}}\Psi_{\mathbf{v}-\mathbf{u}}. \]
If we let $\mathbf{u}=\mathbf{e}_i$, then
$X_\mathbf{u} (\mathbf{P})= x(P_i)$. Thus we have
\[ (\Psi_\mathbf{v}^2 X_\mathbf{v})(\mathbf{P}) = \Psi_\mathbf{v}^2(\mathbf{P}) x(P_i) - \Psi_{\mathbf{v} + \mathbf{e}_i}(\mathbf{P})\Psi_{\mathbf{v}-\mathbf{e}_i}(\mathbf{P}), \]
which gives us the desired result. \end{proof}
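In rank $1$ the lemma specializes to the classical formula $x(nP) = x(P) - \psi_{n-1}(P)\psi_{n+1}(P)/\psi_n(P)^2$. The following sketch is our own illustration (the curve $y^2+y=x^3-x$, the point $P=(0,0)$, and its elliptic divisibility sequence $0,1,1,-1,1,2,\ldots$ are chosen for this purpose only); it checks the formula, together with the fact that for this curve the denominator of $x(nP)$ is exactly $\psi_n(P)^2$:

```python
from fractions import Fraction

# E: y^2 + y = x^3 - x over Q (a1 = a2 = 0, a3 = 1, a4 = -1, a6 = 0), P = (0, 0).
def add(P1, P2):
    (x1, y1), (x2, y2) = P1, P2
    if P1 == P2:
        lam = (3 * x1 ** 2 - 1) / (2 * y1 + 1)   # tangent slope (a3 = 1)
    else:
        lam = (y2 - y1) / (x2 - x1)              # chord slope
    x3 = lam ** 2 - x1 - x2
    y3 = lam * (x1 - x3) - y1 - 1                # negation on E is y -> -y - 1
    return (x3, y3)

# psi_n(P): Ward's elliptic divisibility sequence 0, 1, 1, -1, 1, 2, ...
W = [0, 1, 1, -1, 1]
for m in range(3, 10):
    W.append((W[m + 1] * W[m - 1] * W[2] ** 2 - W[3] * W[1] * W[m] ** 2) // W[m - 2])

P = (Fraction(0), Fraction(0))
Q = P
for n in range(2, 11):
    Q = add(Q, P)                                # Q = nP (never infinity here)
    assert Q[0] == P[0] - Fraction(W[n - 1] * W[n + 1], W[n] ** 2)
    assert Q[0].denominator == W[n] ** 2         # P mod p is nonsingular for all p
```

For instance $x(5P) = 1/4$ with denominator $2^2 = \psi_5(P)^2$, and $x(10P) = 161/16$ with denominator $4^2 = \psi_{10}(P)^2$.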
\begin{defn} Let $B$ and $C$ be Abelian groups written additively. Furthermore, assume that $C$ is $2$-torsion free. Then a function $F:B \rightarrow C$ is a quadratic form if \begin{align}
\label{paral}
F(x+y)+F(x-y)=2F(x)+2F(y), \end{align} for all $x,y \in B$. \end{defn} Equation \eqref{paral} is sometimes called the {\em parallelogram law}. \begin{exam}
\begin{enumerate}[(a)]
\item Let $a_i, c_{ij} \in \mathbb{Q}$ and consider $F:\mathbb{Z}^r \rightarrow \mathbb{Q}$ defined by
\[ F(v_1,v_2,\ldots,v_r) = \sum_{i=1}^r a_i v_i^2 + \sum_{1 \leq i < j \leq r} c_{ij}v_iv_j. \]
Then we can check that $F$ satisfies the parallelogram law \eqref{paral}.
\item Let $p_i, q_{ij} \in \mathbb{Q}^\times$. Then
the function $G:\mathbb{Z}^r \rightarrow \mathbb{Q}^\times$ defined by
\[ G(v_1,v_2,\ldots,v_r) = \prod_{i=1}^r p_i^{ v_i^2} \cdot \prod_{1 \leq i < j \leq r} q_{ij}^{v_iv_j} \]
is a quadratic form.
\item Let $F_1,F_2 : B \rightarrow C$ be two quadratic forms. Then
their difference, $F_1-F_2$, is again a quadratic form.
\end{enumerate} \end{exam} The main reason we are interested in quadratic forms is the following result. \begin{prop}
Let $K$ be a field and let $W:A \rightarrow K$ be an elliptic net.
Let $F:A \rightarrow K^\times$ be a quadratic form. Then
\begin{equation}
\begin{array}{cccc}
W^F: & A & \longrightarrow & K \\
& \mathbf{v} & \longmapsto & W(\mathbf{v})F(\mathbf{v})
\end{array}
\end{equation}
is an elliptic net. \end{prop} \begin{proof}
See \cite[Proposition 6.1]{Stange}. \end{proof} \begin{defn} We say that two elliptic nets $W$ and $W^\prime$ are scale equivalent if there is a quadratic form $F: A \rightarrow K^\times$ such that $W^\prime=W^F$. \end{defn}
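Part (b) of the example of quadratic forms above is easy to check numerically; the following sketch (with arbitrarily chosen $p_1, p_2, q_{12} \in \mathbb{Q}^\times$, used nowhere else) verifies the multiplicative parallelogram law for $r=2$:

```python
from fractions import Fraction
from itertools import product

# G(v) = p1^{v1^2} * p2^{v2^2} * q12^{v1 v2}: a quadratic form Z^2 -> Q^x,
# with arbitrarily chosen nonzero rationals (illustration only).
p1, p2, q12 = Fraction(2), Fraction(3, 5), Fraction(7, 2)

def G(v):
    v1, v2 = v
    return p1 ** (v1 * v1) * p2 ** (v2 * v2) * q12 ** (v1 * v2)

# The parallelogram law, written multiplicatively since Q^x is a
# multiplicative group: G(x + y) G(x - y) = G(x)^2 G(y)^2.
for x1, x2, y1, y2 in product(range(-3, 4), repeat=4):
    s = (x1 + y1, x2 + y2)
    d = (x1 - y1, x2 - y2)
    assert G(s) * G(d) == G((x1, x2)) ** 2 * G((y1, y2)) ** 2
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point issues; negative exponents are handled by `Fraction` natively.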
Let $\lambda_p$ be the \emph{(local) N\'{e}ron height function on $E$ associated to the prime $p$}. (See \cite[Chapter VI, Theorem 1.1]{AECII} for properties of $\lambda_p$.) An important property of the N\'{e}ron height is that it satisfies the \emph{quasi-parallelogram law}.
\begin{lem} \label{quasi} Assume that $P$, $Q \in E(\mathbb{Q})$ are two points such that $P$, $Q$, $P\pm Q\neq \infty$. Then we have $$\lambda_p(P+Q)+\lambda_p(P-Q)=2\lambda_p(P)+2\lambda_p(Q)+\nu_p(x(P)-x(Q))-\frac{1}{6}\nu_p(\Delta_E).$$ \end{lem} \begin{proof} See \cite[Page 476, Exercise 6.3]{AECII}. \end{proof}
\begin{lem}
\label{diff quad}
Let $E/\mathbb{Q}$ be defined by the polynomial \eqref{WE}, and
assume that the $a_i$ are all integers. Let $\Delta_E$ be the discriminant of $E$. Let
$\mathbf{P}=(P_1,\ldots,P_r) \in E(\mathbb{Q})^r$ be an $r$-tuple consisting of $r$
linearly independent points on $E(\mathbb{Q})$.
Define
\begin{align*}
\varepsilon(\mathbf{v}) =
\begin{cases}
\lambda_p(\mathbf{v} \cdot \mathbf{P})-\frac{1}{12} \nu_p(\Delta_E)-\nu_p(\Psi_\mathbf{v}(\mathbf{P})) & \mbox{ if $\mathbf{v} \neq \mathbf{0}$,} \\
0 & \mbox{ otherwise.} \\
\end{cases}
\end{align*}
Then $\varepsilon$ is a quadratic form from $\mathbb{Z}^r$ to $\mathbb{Z}$. \end{lem} \begin{proof}
From Lemma \ref{numerator formula}, we know that
\begin{align}
\label{par Psi} \nu_p(\Psi_{\mathbf{v}+\mathbf{w}}(\mathbf{P}))+\nu_p(\Psi_{\mathbf{v}-\mathbf{w}}(\mathbf{P})) =
2\nu_p(\Psi_\mathbf{v}(\mathbf{P}))+2\nu_p(\Psi_\mathbf{w}(\mathbf{P}))+
\nu_p(X_\mathbf{v}(\mathbf{P})-X_\mathbf{w}(\mathbf{P})).
\end{align}
Now assume that $\mathbf{v}, \mathbf{w}, \mathbf{v}\pm \mathbf{w} \neq {\mathbf 0}$. Then substituting $\mathbf{v} \cdot \mathbf{P}$ and $\mathbf{w} \cdot \mathbf{P}$ in Lemma \ref{quasi},
we get
\begin{multline}
\label{par den}
\lambda_p(\mathbf{v}\cdot \mathbf{P}+\mathbf{w} \cdot \mathbf{P})+
\lambda_p(\mathbf{v}\cdot \mathbf{P}-\mathbf{w} \cdot \mathbf{P}) =
2\lambda_p(\mathbf{v} \cdot \mathbf{P})+
2\lambda_p(\mathbf{w} \cdot \mathbf{P})+
\nu_p(X_\mathbf{v}(\mathbf{P})-X_\mathbf{w}(\mathbf{P}))-\frac{1}{6} \nu_p(\Delta_E).
\end{multline}
Subtracting \eqref{par Psi} from \eqref{par den} we have
\begin{align}
\label{p-law}
\varepsilon(\mathbf{v}+\mathbf{w})+\varepsilon(\mathbf{v}-\mathbf{w})=2\varepsilon(\mathbf{v})+2\varepsilon(\mathbf{w}),
\end{align} where $\mathbf{v}, \mathbf{w}, \mathbf{v}\pm \mathbf{w} \neq \mathbf{0}$. The identity \eqref{p-law} also holds if $\mathbf{v}$ or $\mathbf{w}=\mathbf{0}$. So to complete the proof it is enough to show that $\varepsilon(2\mathbf{u})=4\varepsilon(\mathbf{u})$. In order to establish this, we add the instances of \eqref{p-law} for $(\mathbf{v}, \mathbf{w})=(4\mathbf{u}, \mathbf{u}), (3\mathbf{u}, \mathbf{u}), (3\mathbf{u}, \mathbf{u}), (2\mathbf{u}, \mathbf{u})$, counting $(3\mathbf{u}, \mathbf{u})$ twice, to obtain \begin{equation} \label{first} \varepsilon(5\mathbf{u})+\varepsilon(\mathbf{u}) = 2\varepsilon(3\mathbf{u}) + 8\varepsilon(\mathbf{u}).
\end{equation} Also letting $(\mathbf{v}, \mathbf{w})=(3\mathbf{u}, 2\mathbf{u})$ in \eqref{p-law} yields \begin{equation} \label{second} \varepsilon(5\mathbf{u})+\varepsilon(\mathbf{u})=2\varepsilon(3\mathbf{u})+2\varepsilon(2\mathbf{u}). \end{equation} Now subtracting \eqref{second} from \eqref{first} gives $\varepsilon(2\mathbf{u})=4\varepsilon(\mathbf{u}).$ Thus $\varepsilon$ is a quadratic form as desired. \end{proof}
\section{Proof of Theorem \ref{second-theorem}} {\label{sec3}} Let ${K}$ be any field and assume that $W:A \rightarrow {K}$ is an elliptic net of rank $r$. Theorem \ref{second-theorem} claims that if $W$ has a unique rank of apparition then $\Lambda=W^{-1}(0)$ will be a subgroup of $A$. The goal of this section is to prove this claim.
Throughout this section assume that $W$ has a unique rank of apparition and let $\mathscr{B}=\{\mathbf{b}_1,\mathbf{b}_2,\ldots,\mathbf{b}_r\}$ be a basis for $A$ such that $W$ has a unique rank of apparition with respect to $\mathscr{B}$. Therefore, there exists $(\rho_1,\rho_2,\ldots,\rho_r) \in \mathbb{Z}^r$ with $\rho_i > 1$ for $1 \leq i \leq r$ such that
$W(n\mathbf{b}_i)=0$ if and only if $\rho_i \mid n$.
Let $A_i$ be the subgroup of $A$ generated by $\{\mathbf{b}_1,\mathbf{b}_2,\ldots,\mathbf{b}_i\}$ for $1 \leq i \leq r$ and let \[\Lambda_i = \Lambda \cap A_i = \{ \mathbf{v} \in A_i : \, W(\mathbf{v})=0 \}.\]
Note that $\Lambda_i$ is the zero set of the elliptic net $W|_{A_i}:A_i \rightarrow {K}$. By induction on $i$, we will prove that $\Lambda_i$ is a subgroup of $A_i$. Note that the base case, $i=1$, is true by definition of unique rank of apparition.
We will prove the inductive step by proving three lemmas. \begin{lem} \label{lem2}
Let $n \in \mathbb{Z}$, and let $1 \leq i \leq r$. If $\rho_i \mid n$, then we have
$$W(\mathbf{v}+n\mathbf{b}_{i}) = 0 \Longleftrightarrow \mathbf{v} \in \Lambda.$$ \end{lem}
\begin{proof}
First let $ \mathbf{v}\in \Lambda$. Taking $\mathbf{p}=\mathbf{v}$, $\mathbf{q} = -n\mathbf{b}_{i},$ $\mathbf{r} =
\mathbf{b}_{i},$ and $\mathbf{s} = 2n\mathbf{b}_{i}$ in \eqref{net recurrence} yields
\begin{equation} \label{zero id 1}
W(\mathbf{v}+n\mathbf{b}_{i})^{2}W((2n+1)\mathbf{b}_{i})W(\mathbf{b}_{i}) = 0.
\end{equation}
Note that since $\rho_i \mid n$ and $\rho_{i}>1$,
we have $\rho_i \nmid (2n+1)$ and $\rho_i \nmid 1$, so $W((2n+1)\mathbf{b}_{i})W(\mathbf{b}_{i})\neq 0$.
Thus, from \eqref{zero id 1}, we have $W(\mathbf{v}+n\mathbf{b}_{i}) = 0$ for all $\mathbf{v} \in \Lambda$.
Conversely assume that $\mathbf{v} \notin \Lambda$. Then taking $\mathbf{p}=\mathbf{v}$,
$\mathbf{q} = n\mathbf{b}_{i},$ $\mathbf{r} = \mathbf{b}_{i},$ and $\mathbf{s}= \mathbf{0}$ in \eqref{net recurrence} yields
\begin{equation} \label{zero id 2}
W(\mathbf{v}+n\mathbf{b}_{i})W(\mathbf{v}-n\mathbf{b}_{i})W(\mathbf{b}_{i})^{2} + W((n+1)\mathbf{b}_{i})W((n-1)\mathbf{b}_{i})W(\mathbf{v})^{2} = 0.
\end{equation}
Since $\mathbf{v} \notin \Lambda$ and $\rho_{i}\mid n$, we have
$W((n+1)\mathbf{b}_{i})W((n-1)\mathbf{b}_{i})W(\mathbf{v})^{2} \neq 0$. It therefore follows,
from \eqref{zero id 2}, that $W(\mathbf{v}+n\mathbf{b}_{i}) \neq 0$ for all $\mathbf{v} \notin \Lambda$. \end{proof} The following is a straightforward consequence of Lemma \ref{lem2}.
\begin{cor} We have
$$\{n_1\mathbf{b}_1+n_2\mathbf{b}_2+\dots+n_r\mathbf{b}_r : \, \rho_i \mid n_i ~\mbox{for}~1\leq i \leq r\} \subseteq \Lambda.$$ \end{cor}
\begin{lem} \label{lem3}
Suppose that for a fixed $i>1$ we have that
$\Lambda_{i-1}$ is a subgroup of $A$.
Then for all $\mathbf{v} \in \Lambda_{i-1}$, we have
$$W(\mathbf{v} + n\mathbf{b}_{i}) = 0 \Longleftrightarrow \rho_{i} \mid n.$$ \end{lem}
\begin{proof}
Choose $\mathbf{v} \in \Lambda_{i-1}$.
Since $\mathbf{v} \in \Lambda_{i-1} \subset \Lambda$, it
follows from Lemma \ref{lem2} that if $\rho_{i}\mid n$ then
$W(\mathbf{v}+n\mathbf{b}_{i}) = 0$.
Conversely, let $\rho_{i} \nmid n$. Taking $\mathbf{p}=\mathbf{v}$, $\mathbf{q} = n\mathbf{b}_{i},$
$\mathbf{r} \in A_{i-1} \setminus \Lambda_{i-1},$ and
$\mathbf{s} = \mathbf{0}$ in \eqref{net recurrence} yields
\begin{equation} \label{star}
W(\mathbf{v}+n\mathbf{b}_{i})W(\mathbf{v}-n\mathbf{b}_{i})W(\mathbf{r})^{2} +
W(\mathbf{r}+\mathbf{v})W(\mathbf{r}-\mathbf{v})W(n\mathbf{b}_{i})^{2} = 0.
\end{equation}
Since $\mathbf{v} \in \Lambda_{i-1},$ $\mathbf{r} \in A_{i-1} \setminus \Lambda_{i-1}$, and
$\Lambda_{i-1}$ is a subgroup, it follows that $\mathbf{v} \pm \mathbf{r} \in A_{i-1}
\setminus \Lambda_{i-1}$, hence $W(\mathbf{v}\pm \mathbf{r}) \neq 0$. It therefore follows
from \eqref{star} that $W(\mathbf{v}+n\mathbf{b}_{i}) \neq 0$. \end{proof}
\begin{lem} \label{lem4}
Suppose that $\Lambda_{i-1}$ is a subgroup of $A$ for a fixed $i>1$ and $\rho_{i} > 2$.
Let $\mathbf{u}, \mathbf{v} \in \Lambda_{i}$ such that $\mathbf{u} = \mathbf{u}_{0} + n\mathbf{b}_{i},$
and $\mathbf{v} = \mathbf{v}_{0} + n\mathbf{b}_{i}$ for $\mathbf{u}_{0}, \mathbf{v}_{0} \in A_{i-1}$.
Then $\mathbf{u}-\mathbf{v}=\mathbf{u}_0-\mathbf{v}_0 \in \Lambda_{i-1}.$ \end{lem}
\begin{proof}
Setting $\mathbf{p} = \mathbf{u}_{0} + n\mathbf{b}_{i},$ $\mathbf{q} = \mathbf{v}_{0} + n\mathbf{b}_{i}$, $\mathbf{r} = m\mathbf{b}_{i},$ and $\mathbf{s} = -2n\mathbf{b}_{i}$ in \eqref{net recurrence} gives \begin{equation} \label{pplusq} W(\mathbf{u}_{0} + \mathbf{v}_{0})W(\mathbf{u}_{0}-\mathbf{v}_{0})W((2n-m)\mathbf{b}_{i})W(m\mathbf{b}_{i}) = 0. \end{equation} Since $\rho_{i} > 2$, we have $W(\mathbf{b}_i),~W(2\mathbf{b}_i)\neq 0$. So we can choose $m \in \{1,2\}$ such that $$W((2n-m)\mathbf{b}_{i})W(m\mathbf{b}_{i}) \neq 0.$$ Thus from \eqref{pplusq} we conclude that $W(\mathbf{u}_{0} + \mathbf{v}_{0})W(\mathbf{u}_{0}-\mathbf{v}_{0}) = 0$.
Now if $W(\mathbf{u}_{0}-\mathbf{v}_{0}) = 0$ we are done. Otherwise we assume that $W(\mathbf{u}_{0}-\mathbf{v}_{0}) \neq 0$, hence $W(\mathbf{u}_{0} + \mathbf{v}_{0}) = 0$, and show that this gives a contradiction.
Setting $\mathbf{p} = \mathbf{u}_{0} + n\mathbf{b}_{i},$ $\mathbf{q} = \mathbf{v}_{0} + n\mathbf{b}_{i}$, $\mathbf{r} = \mathbf{b}_{i}$, and $\mathbf{s}=\mathbf{0}$ in \eqref{net recurrence} gives $$W(\mathbf{u}_{0}+\mathbf{v}_{0} + 2n\mathbf{b}_{i})W(\mathbf{u}_{0}-\mathbf{v}_{0})W(\mathbf{b}_{i})^{2} = 0,$$ hence $W(\mathbf{u}_{0}+\mathbf{v}_{0} + 2n\mathbf{b}_{i}) = 0$ (recall that $W(\mathbf{u}_{0}-\mathbf{v}_{0}) \neq 0$). Since $\mathbf{u}_0+\mathbf{v}_0\in \Lambda_{i-1}$ it follows from Lemma \ref{lem3} that $\rho_{i}\mid 2n$. Now we consider two cases.
Case 1: If $\rho_{i} \mid n$, then since $\mathbf{u} = \mathbf{u}_{0} +n\mathbf{b}_{i}, \mathbf{v} = \mathbf{v}_{0}+n\mathbf{b}_{i} \in \Lambda_{i}$, it follows from Lemma \ref{lem2} that $\mathbf{u}_{0}, \mathbf{v}_{0} \in \Lambda_{i-1}$, hence $\mathbf{u}_{0} -\mathbf{v}_{0} \in \Lambda_{i-1}$, contradicting our assumption that $W(\mathbf{u}_{0} - \mathbf{v}_{0}) \neq 0.$
Case 2: If $\rho_{i} \nmid n$, then $W(\mathbf{u}_{0}+\mathbf{v}_{0}+n\mathbf{b}_{i}) \neq 0$ by Lemma \ref{lem3}. Setting $\mathbf{p} = \mathbf{u}_{0} + n\mathbf{b}_{i}$, $\mathbf{q} = \mathbf{v}_{0} + n\mathbf{b}_{i}$, $\mathbf{r}=\mathbf{b}_{i},$ and $\mathbf{s} = -n\mathbf{b}_{i}$ in \eqref{net recurrence} gives $$W(\mathbf{u}_{0}+\mathbf{v}_{0}+n\mathbf{b}_{i})W(\mathbf{u}_{0}-\mathbf{v}_{0})W((n-1)\mathbf{b}_{i})W(\mathbf{b}_{i}) = 0,$$ hence $W((n-1)\mathbf{b}_{i}) = 0$ and so $\rho_i \mid n-1$. Similarly by setting $\mathbf{p} = \mathbf{u}_{0} + n\mathbf{b}_{i},$ $\mathbf{q} = \mathbf{v}_{0} + n\mathbf{b}_{i}$, $\mathbf{r}=-\mathbf{b}_{i},$ and $\mathbf{s} = -n\mathbf{b}_{i}$ in \eqref{net recurrence} we find that $W((n+1)\mathbf{b}_{i}) = 0$ and so $\rho_i \mid n+1$. Since $\rho_i\mid n-1$ and $\rho_i\mid n+1$, we have $\rho_{i} = 2$. This is a contradiction. \end{proof}
We are ready to prove our main result on zeros of an elliptic net.
\begin{proof}[Proof of Theorem \ref{second-theorem}]
We proceed by induction on $i$. Note that $\Lambda_{1}$ is a subgroup of $\mathbf{b}_1\mathbb{Z}$, since $W(n\mathbf{b}_{1}) = 0$
if and only if $\rho_{1}\mid n$.
Assume that $\Lambda_{i-1}$ is a subgroup.
We want to prove that $\Lambda_i$ is a subgroup, that is,
that $\mathbf{u}-\mathbf{v} \in \Lambda_i$ for any $\mathbf{u},\mathbf{v} \in \Lambda_i$.
We argue by contradiction, so assume that $\mathbf{u} - \mathbf{v} \not \in \Lambda_i$.
Let $\mathbf{u}=\mathbf{u}_0+n\mathbf{b}_i,~\mathbf{v}=\mathbf{v}_0+m\mathbf{b}_i \in \Lambda_{i}$,
where $\mathbf{u}_0, \mathbf{v}_0\in A_{i-1}$.
It follows from \eqref{net recurrence}, for $\mathbf{p}=\mathbf{u}$, $\mathbf{q}=\mathbf{v}$, $\mathbf{r}=\mathbf{u}+\mathbf{w}$, and $\mathbf{s}=-2\mathbf{u}$,
that $W(\mathbf{u}-\mathbf{v})^2W(\mathbf{u}-\mathbf{w})W(\mathbf{u}+\mathbf{w}) = 0.$ Since $W(\mathbf{u}-\mathbf{v})\neq 0$ and
$\mathbf{u}=\mathbf{u}_0+n\mathbf{b}_i$, we conclude that
\begin{equation} \label{zero id 3}
W(\mathbf{u}_0+n\mathbf{b}_{i}-\mathbf{w})W(\mathbf{u}_0+n\mathbf{b}_{i}+\mathbf{w})=0
\end{equation}
for any $\mathbf{w} \in A_i$. We claim that \eqref{zero id 3} implies
that $\rho_{i} \mid n$.
To show this, assume to the contrary that $\rho_{i} \nmid n$. Then, since
$\mathbf{u} = \mathbf{u}_{0}+n\mathbf{b}_{i} \in \Lambda_{i}$ it follows from Lemma \ref{lem3} that
$\mathbf{u}_0\not\in \Lambda_{i-1}$. We consider two cases.
Case 1: If $\rho_{i} > 2$, then setting $\mathbf{w} = \mathbf{u}_0$ in \eqref{zero id 3} yields
$$W(2\mathbf{u}_{0}+n\mathbf{b}_{i})W(n\mathbf{b}_{i}) = 0.$$
Then we have that $W(2\mathbf{u}_{0}+n\mathbf{b}_{i}) =0$ since $\rho_i \nmid n$. Since
$W(\mathbf{u}_{0}+n\mathbf{b}_{i}) = W(2\mathbf{u}_{0}+n\mathbf{b}_{i}) = 0$, it follows from
Lemma \ref{lem4} that $\mathbf{u}_{0} \in \Lambda_{i-1}$. This is a contradiction.
Case 2: If $\rho_{i} = 2$, then setting $\mathbf{w} = \mathbf{b}_{i}$ in \eqref{zero id 3} yields
$$W(\mathbf{u}_{0} + (n+1)\mathbf{b}_{i})W(\mathbf{u}_{0} + (n-1)\mathbf{b}_{i}) = 0,$$ from which it follows that
$\mathbf{u}_{0} \in \Lambda_{i-1}$ (since both $n-1$ and $n+1$ are even). This is a
contradiction.
In either case, the assumption $\rho_i \nmid n$ leads to a contradiction. Thus, we have
$\mathbf{u} = \mathbf{u}_{0} + n\mathbf{b}_{i}$ with $\mathbf{u}_{0} \in \Lambda_{i-1}$, and $\rho_{i} \mid n$.
Similarly we have $\mathbf{v} = \mathbf{v}_{0} + m\mathbf{b}_{i}$, with $\mathbf{v}_{0} \in \Lambda_{i-1}$ and
$\rho_{i}|m$. Then, $\mathbf{u}-\mathbf{v} = \mathbf{u}_{0} - \mathbf{v}_{0} + (n-m)\mathbf{b}_{i}$ with
$\mathbf{u}_{0} - \mathbf{v}_{0} \in \Lambda_{i-1}$, and $\rho_{i}\mid (n-m)$.
Thus it follows from Lemma \ref{lem3}
that $W(\mathbf{u}-\mathbf{v}) = 0$. This is a contradiction as we assumed that $W(\mathbf{u}-\mathbf{v})\neq 0$.
Since the assumption $\mathbf{u}-\mathbf{v}\not\in \Lambda_i$ leads to a contradiction, we
conclude that $\mathbf{u}-\mathbf{v}\in \Lambda_i$ and so $\Lambda_i$ is a subgroup of $A$. \end{proof}
\section{Proofs of Theorem \ref{Ward-generalized} and Corollary \ref{cor-13}} \label{sec4}
Theorem \ref{second-theorem} shows that for a given elliptic net $W:A \rightarrow {K}$, in favorable conditions, if $W(\bm{\lambda}_1)=W(\bm{\lambda}_2)=0$ then $W(\bm{\lambda}_1+\bm{\lambda}_2)=0$. In this section we study the relation between $W(\mathbf{v}+\bm{\lambda})$ and $W(\mathbf{v})$ when $W(\bm{\lambda})=0$ but $W(\mathbf{v})$ is non-zero. Throughout this section we assume that $\Lambda=W^{-1}(0)$ is a subgroup of
$A$. We also assume that $|A/\Lambda| \geq 4$. The results of this section generalize Theorem \ref{Ward sym} to elliptic nets. In order to do this, we first define the auxiliary function $$
\begin{array}{cccc}
\delta: & \Lambda \times (A \setminus \Lambda) & \longrightarrow & {K}^\times \\
& (\bm{\lambda}, \mathbf{v}) & \longmapsto & \frac{W(\bm{\lambda}+\mathbf{v})}{W(\mathbf{v})},
\end{array} $$ and explore the properties of $\delta$. Notice that for $\bm{\lambda} \in \Lambda$ and $\mathbf{v} \notin \Lambda$ we get that $\delta(\bm{\lambda},\mathbf{v}) \neq 0$. We have the following lemma.
\begin{lem} \label{sigeqn}
For all $\bm{\lambda} \in \Lambda$, and
$\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d} \in A \backslash \Lambda$
with $\mathbf{a}+\mathbf{b} = \mathbf{c}+\mathbf{d}$, we have
$$ \delta(\bm{\lambda},\mathbf{a}) \delta(\bm{\lambda}, \mathbf{b}) =
\delta(\bm{\lambda}, \mathbf{c}) \delta(\bm{\lambda}, \mathbf{d}).$$
\end{lem}
\begin{proof}
Assume that $\mathbf{p}+\mathbf{s}, \mathbf{p}, \mathbf{q}+\mathbf{s}, \mathbf{q} \notin \Lambda$. Then, setting
$\mathbf{r} = \bm{\lambda}$ in \eqref{net recurrence} gives
\begin{equation*}
W(\bm{\lambda}+\mathbf{q}+\mathbf{s})W(\bm{\lambda}-\mathbf{q})W(\mathbf{p}+\mathbf{s})W(-\mathbf{p}) = W(\bm{\lambda}+\mathbf{p}+\mathbf{s})W(\bm{\lambda} - \mathbf{p})W(\mathbf{q}+\mathbf{s})W(-\mathbf{q}).
\end{equation*}
Since $\mathbf{p}+\mathbf{s}, \mathbf{p}, \mathbf{q}+\mathbf{s},\mathbf{q} \notin \Lambda$ we have $W(\mathbf{p}+\mathbf{s})W(\mathbf{p})W(\mathbf{q}+\mathbf{s})W(\mathbf{q}) \neq 0$, hence
$$
\frac{W(\bm{\lambda}+\mathbf{q}+\mathbf{s})W(\bm{\lambda}-\mathbf{q})}{W(\mathbf{q}+\mathbf{s})W(-\mathbf{q})} =
\frac{W(\bm{\lambda}+\mathbf{p}+\mathbf{s})W(\bm{\lambda}-\mathbf{p})}{W(\mathbf{p}+\mathbf{s})W(-\mathbf{p})}.
$$
Thus
$$
\delta(\bm{\lambda}, \mathbf{q}+\mathbf{s}) \delta(\bm{\lambda}, -\mathbf{q}) = \delta(\bm{\lambda}, \mathbf{p}+\mathbf{s})\delta(\bm{\lambda}, -\mathbf{p}).
$$
Taking
$$
\mathbf{a}=\mathbf{q}+\mathbf{s},~
\mathbf{b}=-\mathbf{q},~
\mathbf{c}=\mathbf{p}+\mathbf{s},~{\rm and}~
\mathbf{d}=-\mathbf{p}
$$
(equivalently, $\mathbf{q}=-\mathbf{b}$, $\mathbf{p}=-\mathbf{d}$, and $\mathbf{s}=\mathbf{a}+\mathbf{b}=\mathbf{c}+\mathbf{d}$; note that $\mathbf{a},\mathbf{b},\mathbf{c},\mathbf{d} \notin \Lambda$ guarantees $\mathbf{p}+\mathbf{s}, \mathbf{p}, \mathbf{q}+\mathbf{s}, \mathbf{q} \notin \Lambda$)
yields the result. \end{proof}
Note that if $\mathbf{v},\mathbf{p}_1,\mathbf{p}_2 \in A$ and $\mathbf{p}_1,\mathbf{p}_2,\mathbf{v}+\mathbf{p}_1,\mathbf{v}+\mathbf{p}_2 \not \in \Lambda$, then $$\delta(\bm{\lambda},\mathbf{v}+\mathbf{p}_1)\delta(\bm{\lambda},\mathbf{p}_2)=\delta(\bm{\lambda},\mathbf{v}+\mathbf{p}_2)\delta(\bm{\lambda},\mathbf{p}_1).$$ Since $\delta$ is nonzero, we get \begin{equation}
\label{eqn well defined}
{\delta(\bm{\lambda},\mathbf{v}+\mathbf{p}_1) \over \delta(\bm{\lambda},\mathbf{p}_1)} =
{\delta(\bm{\lambda},\mathbf{v}+\mathbf{p}_2) \over \delta(\bm{\lambda},\mathbf{p}_2)}. \end{equation}
Since we are assuming that $|A/\Lambda|\geq 4$, for any $\mathbf{v} \in A$ there is an element $\mathbf{p} \in A$ such that both $\mathbf{p}$ and $\mathbf{v}+\mathbf{p}$ lie in $A \setminus \Lambda$. In light of this observation, we define the function $\chi$ by \begin{equation}
\label{eqn chi}
\begin{array}{cccc}
\chi: & \Lambda \times A & \longrightarrow & {K}^\times \\
& (\bm{\lambda}, \mathbf{v}) & \longmapsto & \frac{\delta(\bm{\lambda}, \mathbf{v}+\mathbf{p})}{\delta(\bm{\lambda}, \mathbf{p})},
\end{array} \end{equation} for any choice of $\mathbf{p}$ with $\mathbf{p}, \mathbf{v}+\mathbf{p} \notin \Lambda$. Equation \eqref{eqn well defined} shows that this definition is independent of the choice of $\mathbf{p}$. Furthermore, note that $\delta$ is non-zero, so $\chi$ maps to ${K}^\times$.
We now show that $\chi$ is a bilinear map.
\begin{lem}\label{chi lem}
Let $W:A \rightarrow {K}$ be an elliptic net, and $\Lambda=W^{-1}(0)$ be a subgroup
of $A$ such that $|A/\Lambda| \geq 4$. Let $\chi:\Lambda\times A \rightarrow {K}^\times$
be defined as before.
Then for $\bm{\lambda}, \bm{\lambda}_{1}, \bm{\lambda}_{2} \in \Lambda,$ and
$\mathbf{v}, \mathbf{v}_{1}, \mathbf{v}_{2} \in A$, we have the following:
\begin{enumerate}[(i)]
\item $\chi(\bm{\lambda}, \mathbf{v}_{1} + \mathbf{v}_{2}) = \chi(\bm{\lambda}, \mathbf{v}_{1})\chi(\bm{\lambda}, \mathbf{v}_2).$
\item $\chi(\bm{\lambda}_{1} + \bm{\lambda}_{2}, \mathbf{v}) = \chi(\bm{\lambda}_{1}, \mathbf{v})\chi(\bm{\lambda}_{2}, \mathbf{v}).$
\item $\chi(\bm{\lambda}_{1}, \bm{\lambda}_{2}) = \chi(\bm{\lambda}_{2}, \bm{\lambda}_{1}).$
\item\label{chi inverse} $\chi(\bm{\lambda}, -\mathbf{v}) = \chi(\bm{\lambda}, \mathbf{v})^{-1}.$
\end{enumerate} \end{lem}
\begin{proof}
First we note that if $|A/\Lambda| \geq 4$, then for any choice of $\mathbf{v}_1,\mathbf{v}_2 \in A$,
we can find $\mathbf{p} \in A$ so that $\mathbf{p},\mathbf{p}+\mathbf{v}_2,$ and $\mathbf{p}+\mathbf{v}_1+\mathbf{v}_2$ are not
in $\Lambda$. In particular, by the pigeonhole principle, we can find $\overline{\mathbf{u}} \in A/\Lambda$
distinct from the images of $\mathbf{0}$, $\mathbf{v}_2$, and $\mathbf{v}_1+\mathbf{v}_2$ in
$A/\Lambda$. Letting $\mathbf{p}$ be any element of $A$ that reduces to $-\overline{\mathbf{u}}$, we get
the desired result.
Given this $\mathbf{p}$ we have,
\begin{eqnarray*}
\chi(\bm{\lambda}, \mathbf{v}_{1})\chi(\bm{\lambda}, \mathbf{v}_{2}) & = &
\frac{\delta(\bm{\lambda}, \mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{p})}{\delta(\bm{\lambda}, \mathbf{v}_{2}+\mathbf{p})} \frac{\delta(\bm{\lambda}, \mathbf{v}_{2}+\mathbf{p})}{\delta(\bm{\lambda}, \mathbf{p})} \\
& = & \frac{\delta(\bm{\lambda}, \mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{p})}{\delta(\bm{\lambda}, \mathbf{p})} \\
& = & \chi(\bm{\lambda}, \mathbf{v}_{1}+\mathbf{v}_{2}).
\end{eqnarray*}
This proves the first statement.
For the second statement, we let $\mathbf{p} \in A \backslash \Lambda$ be such that
$\mathbf{v} + \mathbf{p} \notin \Lambda$ (again, such an element exists by the pigeonhole principle).
Since $\Lambda$ is a subgroup of $A$, it follows that
$\mathbf{v}+\mathbf{p}+\bm{\lambda}_{2}, \mathbf{p}+\bm{\lambda}_{2} \notin \Lambda$.
Hence, we have
\begin{eqnarray*}
\chi(\bm{\lambda}_{1}, \mathbf{v}) \chi(\bm{\lambda}_{2}, \mathbf{v}) & = &
\frac{\delta(\bm{\lambda}_{1}, \mathbf{v}+\mathbf{p}+\bm{\lambda}_{2}) \delta(\bm{\lambda}_{2}, \mathbf{v}+\mathbf{p})}{\delta(\bm{\lambda}_{1}, \mathbf{p}+\bm{\lambda}_{2}) \delta(\bm{\lambda}_{2}, \mathbf{p})} \\
& = &
\frac{W(\mathbf{v}+\mathbf{p}+\bm{\lambda}_{1}+\bm{\lambda}_{2})W(\mathbf{p}+\bm{\lambda}_{2})W(\mathbf{v}+\mathbf{p}+\bm{\lambda}_{2})W(\mathbf{p})}{W(\mathbf{v}+\mathbf{p}+\bm{\lambda}_{2})W(\mathbf{p}+\bm{\lambda}_{1}+\bm{\lambda}_{2})W(\mathbf{v}+\mathbf{p})W(\mathbf{p}+\bm{\lambda}_{2})}
\\ & = &
\frac{W(\mathbf{v}+\mathbf{p}+\bm{\lambda}_{1}+\bm{\lambda}_{2})W(\mathbf{p})}{W(\mathbf{v}+\mathbf{p})W(\mathbf{p}+\bm{\lambda}_{1}+\bm{\lambda}_{2})} \\
& = & \frac{\delta(\bm{\lambda}_{1}+\bm{\lambda}_{2}, \mathbf{v}+\mathbf{p})}{\delta(\bm{\lambda}_{1}+\bm{\lambda}_{2}, \mathbf{p})} \\
& = & \chi(\bm{\lambda}_{1}+\bm{\lambda}_{2}, \mathbf{v}).
\end{eqnarray*}
For the third statement, taking $\mathbf{p}\in A\setminus \Lambda$, we have
$$ \chi(\bm{\lambda}_{1}, \bm{\lambda}_{2}) =
\frac{\delta(\bm{\lambda}_{1}, \bm{\lambda}_{2} + \mathbf{p})}{\delta(\bm{\lambda}_{1}, \mathbf{p})} =
\frac{W(\bm{\lambda}_{1} + \bm{\lambda}_{2} +\mathbf{p})W(\mathbf{p})}{W(\bm{\lambda}_{2} + \mathbf{p})W(\bm{\lambda}_{1} +\mathbf{p})} =
\frac{\delta(\bm{\lambda}_{2}, \bm{\lambda}_{1} + \mathbf{p})}{\delta(\bm{\lambda}_{2}, \mathbf{p})} =
\chi(\bm{\lambda}_{2}, \bm{\lambda}_{1}). $$
The last statement follows from $(i)$ and the fact that $\chi(\bm{\lambda}, \mathbf{0}) = 1$: indeed, $1 = \chi(\bm{\lambda},\mathbf{0}) = \chi(\bm{\lambda},\mathbf{v})\chi(\bm{\lambda},-\mathbf{v})$. \end{proof}
Note that for $\bm{\lambda} \in \Lambda$ and $\mathbf{v} \notin \Lambda$ we have \[ W(\mathbf{v}+\bm{\lambda})=\delta(\bm{\lambda},\mathbf{v})W(\mathbf{v})={\delta(\bm{\lambda},\mathbf{v}) \over \chi(\bm{\lambda},\mathbf{v})} \chi(\bm{\lambda},\mathbf{v})W(\mathbf{v}). \] We now show that $\delta(\bm{\lambda},\mathbf{v})/\chi(\bm{\lambda},\mathbf{v})$ is independent of the choice of $\mathbf{v}$.
\begin{lem} \label{constlem}
For all $\mathbf{v}_{1}, \mathbf{v}_{2} \in A \backslash \Lambda$ we have
\begin{equation*}
{\delta(\bm{\lambda},\mathbf{v}_1) \over \chi(\bm{\lambda},\mathbf{v}_1)} = {\delta(\bm{\lambda}, \mathbf{v}_{2}) \over \chi(\bm{\lambda},\mathbf{v}_2)}.
\end{equation*} \end{lem}
\begin{proof}
First, if $\mathbf{v}_{1} + \mathbf{v}_{2} \notin \Lambda$ we have
\begin{equation}\label{a identity 1}
\frac{\delta(\bm{\lambda}, \mathbf{v}_{1})}{\chi(\bm{\lambda}, \mathbf{v}_{1})} =
\frac{\delta(\bm{\lambda}, \mathbf{v}_{1}) \delta(\bm{\lambda}, \mathbf{v}_{2})}{\delta(\bm{\lambda}, \mathbf{v}_{1}+\mathbf{v}_{2})} =
\frac{\delta(\bm{\lambda}, \mathbf{v}_{2})}{\chi(\bm{\lambda}, \mathbf{v}_{2})}.
\end{equation}
Next, we suppose that $\mathbf{v}_{1} + \mathbf{v}_{2} \in \Lambda$. Then,
since $|A/\Lambda| \geq 4$, we can find $\mathbf{p} \in A\setminus \Lambda$ such that
$\mathbf{p} \not\equiv -\mathbf{v}_{1}, -\mathbf{v}_{2} \pmod \Lambda$. Then, we have
\begin{equation*}
\mathbf{v}_{1} +\mathbf{v}_{2} + \mathbf{p}, 2\mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{p}, \mathbf{v}_{1}+2\mathbf{v}_{2}+\mathbf{p} \notin \Lambda.
\end{equation*}
It then follows from \eqref{a identity 1}, that
\begin{equation*}
{\delta(\bm{\lambda},\mathbf{v}_{1}) \over \chi(\bm{\lambda},\mathbf{v}_1)}=
{\delta(\bm{\lambda},\mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{p}) \over \chi(\bm{\lambda},\mathbf{v}_1+\mathbf{v}_2+\mathbf{p})} =
{\delta(\bm{\lambda},\mathbf{v}_{2}) \over \chi(\bm{\lambda},\mathbf{v}_2)}.
\end{equation*} \end{proof}
Now in light of Lemma \ref{constlem}, we define \begin{equation}
\begin{array}{cccc}
\xi: & \Lambda & \longrightarrow & {K}^\times \\
& \bm{\lambda} & \longmapsto & \frac{\delta(\bm{\lambda}, \mathbf{v})}{\chi(\bm{\lambda}, \mathbf{v})},
\end{array} \end{equation} for any choice of $\mathbf{v} \in A \setminus \Lambda$. Lemma \ref{constlem} shows that $\xi$ is a well defined function.
We are now in a position to give a generalization of Theorem \ref{Ward sym}. \begin{reptheorem}{Ward-generalized}
Let $W:A \rightarrow {K}$ be an elliptic net with the property that $\Lambda=W^{-1}(0)$ is a subgroup of $A$
and assume $|A/\Lambda| \geq 4$. Then, there exist well defined functions
$\xi :\Lambda \rightarrow {K}^\times$ and $\chi:\Lambda \times A \rightarrow {K}^\times
$ such that $$W(\bm{\lambda}+\mathbf{v})=\xi(\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v}) \quad \text{for all } \bm{\lambda} \in \Lambda \text{ and all } \mathbf{v} \in A,$$ and the functions $\xi$ and $\chi$ satisfy the following properties:
\begin{enumerate}[(i)]
\item $\chi$ is bilinear,
\item $\chi(\bm{\lambda}_1,\bm{\lambda}_2)=\chi(\bm{\lambda}_2,\bm{\lambda}_1)$,
\item $\xi(\bm{\lambda}_1+\bm{\lambda}_2)=\xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\bm{\lambda}_2)$,
\item $\xi(-\bm{\lambda})=\xi(\bm{\lambda})$, and
\item $\xi(\bm{\lambda})^2 = \chi(\bm{\lambda},\bm{\lambda})$.
\end{enumerate} \end{reptheorem}
\begin{proof}
Recall that we have defined the functions
$\delta(\bm{\lambda},\mathbf{v})={W(\mathbf{v}+\bm{\lambda}) \over W(\mathbf{v})}$,
$\chi(\bm{\lambda},\mathbf{v})=\frac{\delta(\bm{\lambda},\mathbf{v}+\mathbf{p})}{\delta(\bm{\lambda},\mathbf{p})}$,
$\xi(\bm{\lambda})={\delta(\bm{\lambda},\mathbf{v}) \over \chi(\bm{\lambda},\mathbf{v})}$
for any choice of $\mathbf{v}, \mathbf{p} \in A$ so that the fractions make sense.
Note that
\[ W(\mathbf{v}+\bm{\lambda})=\delta(\bm{\lambda},\mathbf{v})W(\mathbf{v}) = \xi(\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v}), \]
for any $\mathbf{v} \not \in \Lambda$. If $\mathbf{v} \in \Lambda$ then both sides are $0$.
Therefore, for any $\mathbf{v} \in A$ and any $\bm{\lambda} \in \Lambda$ we have
\begin{equation}
W(\mathbf{v}+\bm{\lambda})=\xi(\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v})
\label{eqn monodromy}
\end{equation}
Furthermore, Lemma \ref{chi lem} shows that $\chi$ is bilinear and
$\chi|_{\Lambda \times \Lambda}$ is symmetric.
Therefore, all we have to do is to show that
\begin{equation}
\label{eqn asum}
\xi(\bm{\lambda}_1+\bm{\lambda}_2)=\xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\bm{\lambda}_2),
\end{equation}
that $\xi(-\bm{\lambda})=\xi(\bm{\lambda})$, and
\begin{equation}
\label{eqn asqr}
\xi(\bm{\lambda})^2 = \chi(\bm{\lambda},\bm{\lambda}).
\end{equation}
Let $\bm{\lambda}_1,\bm{\lambda}_2 \in \Lambda$ and $\mathbf{v} \notin \Lambda$.
Note that by \eqref{eqn monodromy} and (i) we get
\[ W(\bm{\lambda}_1+\bm{\lambda}_2+\mathbf{v})=\xi(\bm{\lambda}_1+\bm{\lambda}_2)\chi(\bm{\lambda}_1+\bm{\lambda}_2,\mathbf{v})W(\mathbf{v}) = \xi(\bm{\lambda}_1+\bm{\lambda}_2)\chi(\bm{\lambda}_1,\mathbf{v})\chi(\bm{\lambda}_2,\mathbf{v})W(\mathbf{v}). \]
On the other hand
\begin{align*}
W(\bm{\lambda}_1+(\bm{\lambda}_2+\mathbf{v}))=& \xi(\bm{\lambda}_1)\chi(\bm{\lambda}_1,\mathbf{v}+\bm{\lambda}_2)W(\mathbf{v}+\bm{\lambda}_2) \\
=& \xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\mathbf{v}+\bm{\lambda}_2)\chi(\bm{\lambda}_2,\mathbf{v})W(\mathbf{v}) \\
=& \xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\bm{\lambda}_2)\chi(\bm{\lambda}_1,\mathbf{v})\chi(\bm{\lambda}_2,\mathbf{v})W(\mathbf{v}).
\end{align*}
Equating the above two expressions for $W(\bm{\lambda}_1+\bm{\lambda}_2+\mathbf{v})$ yields
\[ \xi(\bm{\lambda}_1+\bm{\lambda}_2)\chi(\bm{\lambda}_1,\mathbf{v})\chi(\bm{\lambda}_2,\mathbf{v}) = \xi(\bm{\lambda}_1)\xi(\bm{\lambda}_2)\chi(\bm{\lambda}_1,\bm{\lambda}_2)\chi(\bm{\lambda}_1,\mathbf{v})\chi(\bm{\lambda}_2,\mathbf{v}), \]
which gives us \eqref{eqn asum}.
Now note that $\xi(\mathbf{0})=1$, since
$W(\mathbf{v}+\mathbf{0})=\xi(\mathbf{0})\chi(\mathbf{0},\mathbf{v})W(\mathbf{v})=W(\mathbf{v})$.
Similarly,
\begin{align*}
W(-\mathbf{v}-\bm{\lambda})&=\xi(-\bm{\lambda})\chi(-\bm{\lambda},-\mathbf{v})W(-\mathbf{v}) \\
&=\xi(-\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(-\mathbf{v}) \\
&=-\xi(-\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v})
\end{align*}
while
\begin{align*}
W(-\mathbf{v}-\bm{\lambda})&=-W(\mathbf{v}+\bm{\lambda}) \\
&=-\xi(\bm{\lambda})\chi(\bm{\lambda},\mathbf{v})W(\mathbf{v})
\end{align*}
which implies $\xi(-\bm{\lambda})=\xi(\bm{\lambda})$.
Therefore
\[ 1=\xi(\mathbf{0})=\xi(\bm{\lambda}-\bm{\lambda})=\xi(\bm{\lambda})\xi(-\bm{\lambda})\chi(\bm{\lambda},-\bm{\lambda}), \]
which by employing part (iv) of Lemma \ref{chi lem} results in $\xi(\bm{\lambda})^2=\chi(\bm{\lambda},\bm{\lambda})$. This completes the proof of our
theorem. \end{proof}
As an immediate corollary of the above theorem we have \begin{cor} \label{nsquare}
Let $W:A\rightarrow {K}$ be an elliptic net such that $\Lambda=W^{-1}(0)$ is a subgroup
of $A$ and $|A/\Lambda|\geq 4$. Then for all $\bm{\lambda} \in \Lambda$ and $n \in \mathbb{Z}$
we have
\[ \xi(n\bm{\lambda})=\xi(\bm{\lambda})^{n^2}. \] \end{cor} \begin{proof}
We already showed that $\xi(\mathbf{0})=1$, so the statement holds for $n=0$.
It also trivially holds for $n=1$. We proceed by induction.
Assume the statement is true for some $n \geq 1$. From part (iii) of Theorem \ref{Ward-generalized} and Lemma \ref{chi lem}, we have
$$\xi((n+1)\bm{\lambda}) =
\xi(\bm{\lambda})\xi(n\bm{\lambda}) \chi(\bm{\lambda}, n\bm{\lambda}) =
\xi(\bm{\lambda})\xi(n\bm{\lambda}) \chi(\bm{\lambda}, \bm{\lambda})^{n}.$$
From the induction hypothesis and part (v) of Theorem \ref{Ward-generalized}, it follows that
$$\xi((n+1)\bm{\lambda}) = \xi(\bm{\lambda})^{n^2+1}\xi(\bm{\lambda})^{2n} = \xi(\bm{\lambda})^{(n+1)^2}.$$
Therefore the statement holds for all $n \geq 0$.
Finally note that $\xi(-n\bm{\lambda})=\xi(n\bm{\lambda})=\xi(\bm{\lambda})^{n^2}$ by
part (iv) of Theorem \ref{Ward-generalized}.
Thus the statement holds for all $n \in \mathbb{Z}$. \end{proof}
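For orientation, in the rank-one case $A=\mathbb{Z}$ and $\Lambda=\rho\mathbb{Z}$, Theorem \ref{Ward-generalized} together with Corollary \ref{nsquare} recovers the shape of Ward's classical symmetry (Theorem \ref{Ward sym}): writing $a=\xi(\rho)$ and $b=\chi(\rho,1)$, bilinearity of $\chi$ gives

```latex
W(n + k\rho) \;=\; \xi(k\rho)\,\chi(k\rho, n)\,W(n)
            \;=\; a^{k^{2}}\, b^{\,kn}\, W(n)
\qquad \text{for all } k, n \in \mathbb{Z}.
```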
Note that Theorem \ref{Ward-generalized} allows us to compute $W:A \rightarrow {K}$ from the values of $W$ on a set of representatives of $A/\Lambda$ together with certain values of $\chi$ and $\xi$. In particular, if $\Lambda$ is a full rank subgroup of $A$, then we can choose $\bm{\lambda}_1,\bm{\lambda}_2,\ldots,\bm{\lambda}_r$ as a basis of $\Lambda$. Then \begin{align*}
W\left(\left(\sum_{i=1}^r n_i\bm{\lambda}_i\right)+\mathbf{v}\right) &= \xi\left(\sum_{i=1}^r n_i\bm{\lambda}_i\right) \chi\left(\sum_{i=1}^r n_i\bm{\lambda}_i,\mathbf{v}\right) W(\mathbf{v}) \\
&=\xi\left(\sum_{i=1}^r n_i\bm{\lambda}_i\right) \prod_{i=1}^r \chi\left(\bm{\lambda}_i,\mathbf{v}\right)^{n_i}W(\mathbf{v}) \end{align*} and \begin{align*}
\xi\left(\sum_{i=1}^r n_i\bm{\lambda}_i\right) &= \prod_{i=1}^r \xi(n_i\bm{\lambda}_i) \left(\prod_{j=i+1}^r \chi(\bm{\lambda}_i,\bm{\lambda}_j)^{n_in_j} \right)\\
&=\prod_{i=1}^r \xi(\bm{\lambda}_i)^{n_i^2} \left(\prod_{j=i+1}^r \chi(\bm{\lambda}_i,\bm{\lambda}_j)^{n_in_j} \right). \end{align*} Combining the above two identities yields \eqref{W-formula}.
\par \begin{proof}[Proof of Corollary \ref{cor-13}]
If ${K}=\mathbb{F}_q$ is a finite field with $q$ elements and $(q-1) \mid n_i$ for all $i$, then $\xi\left(\sum_{i=1}^r n_i \bm{\lambda}_i\right) = 1$, since each factor $\xi(\bm{\lambda}_i)^{n_i^2}$ and $\chi(\bm{\lambda}_i,\bm{\lambda}_j)^{n_in_j}$ is an element of $\mathbb{F}_q^\times$ raised to a power divisible by $q-1$. Similarly, $\chi(\bm{\lambda}_i,\mathbf{v})^{n_i}=1$. \end{proof}
\begin{exam}\label{third-example}
In this example we show how one can use identity \eqref{W-formula} to calculate an arbitrary term of an elliptic net over a finite field. To illustrate the method, we consider a rank $2$ elliptic net associated to an elliptic curve over $\mathbb{Q}$ and compute a specific term of its associated $p$-reduced nets as $p$ varies over certain primes.
For a prime $p$ let $W:\mathbb{Z}^{2} \rightarrow \mathbb{F}_{p}$ be the elliptic net associated to the elliptic curve $y^{2} = x^{3} - 11$ of rank $2$ and the generators $P = (3,4)$ and $Q = (15,58)$. The net $W$ has a unique rank of apparition with respect to the standard basis $\{\mathbf{e}_1, \mathbf{e}_2\}$, and so its zero set forms a rank $2$ subgroup of $\mathbb{Z}^2$. We choose a basis $\{\bm{\lambda}_1, \bm{\lambda}_2\}$ for this subgroup and, using the definitions of the functions $\xi$ and $\chi$, we compute
$\xi(\bm{\lambda}_{1})$, $\xi(\bm{\lambda}_{2})$, $\chi(\bm{\lambda}_{1},\bm{\lambda}_{2})$, $\chi(\bm{\lambda}_{1}, \mathbf{e}_{1})$, $\chi(\bm{\lambda}_{1}, \mathbf{e}_{2})$, $\chi(\bm{\lambda}_{2},\mathbf{e}_{1})$, and $\chi(\bm{\lambda}_{2},\mathbf{e}_{2})$. The following table summarizes the results of our computations for five values of $p$ (namely $p= 7, 11, 19, 61, 89$).
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
$ p $ & $\bm{\lambda}_{1}$ & $\bm{\lambda}_{2}$ & $\xi(\bm{\lambda}_{1})$ & $\xi(\bm{\lambda}_{2})$ & $\chi(\bm{\lambda}_{1},\bm{\lambda}_{2})$ & $\chi(\bm{\lambda}_{1}, \mathbf{e}_{1})$ & $\chi(\bm{\lambda}_{1}, \mathbf{e}_{2})$ & $\chi(\bm{\lambda}_{2},\mathbf{e}_{1})$ & $\chi(\bm{\lambda}_{2},\mathbf{e}_{2})$ \\
\hline
7 & (1,5) & (0,13) & 1 & 4 & 3 & 3 & 3 & 6 & 2 \\
11 & (1,7) & (0,11) & 4 & 9 & 9 & 4 & 9 & 9 & 6 \\
19 & (1,6) & (0,14) & 8 & 5 & 4 & 1 & 3 & 6 & 2 \\
61 & (2,8) & (0,38) & 39 & 60 & 19 & 34 & 6 & 43 & 41 \\
89 & (9,3) & (0,10) & 87 & 43 & 80 & 62 & 58 & 52 & 33
\end{tabular}
\end{table}
Let $D$ be a fixed set of representatives for $\mathbb{Z}^2/\Lambda$. Then any point $(r, s) \in \mathbb{Z}^2$ can be written uniquely as $(r, s)=n_{1}\bm{\lambda}_{1} + n_{2}\bm{\lambda}_{2} + m_{1}\mathbf{e}_{1} + m_{2}\mathbf{e}_{2}$ with $(m_1, m_2)\in D$. Now, computing the values $W(m_1\mathbf{e}_1+m_2\mathbf{e}_2)$ by the defining recursion of our net and using the above table together with the rank $2$ version of \eqref{W-formula},
\begin{multline*}
W(n_{1}\bm{\lambda}_{1} + n_{2}\bm{\lambda}_{2} + m_{1}\mathbf{e}_{1} + m_{2}\mathbf{e}_{2}) = \xi(\bm{\lambda}_{1})^{n_{1}^{2}}\xi(\bm{\lambda}_{2})^{n_{2}^{2}}\chi(\bm{\lambda}_{1},\bm{\lambda}_{2})^{n_{1}n_{2}} \chi(\bm{\lambda}_{1},\mathbf{e}_{1})^{n_{1}m_{1}}\chi(\bm{\lambda}_{1},\mathbf{e}_{2})^{n_{1}m_{2}}\\
\times~\chi(\bm{\lambda}_{2},\mathbf{e}_{1})^{n_{2}m_{1}}\chi(\bm{\lambda}_{2},\mathbf{e}_{2})^{n_{2}m_{2}} W(m_{1}\mathbf{e}_{1}+m_{2}\mathbf{e}_{2}),
\end{multline*}
we can compute $W(r, s)$.
Using the above formula and table, we compute the term $W(101,100)$ modulo $p$.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
$p$ & $n_{1}$ & $n_{2}$ & $m_{1}$ & $m_{2}$ & $W(m_{1}\mathbf{e}_{1}+m_{2}\mathbf{e}_{2})$ & $W(101, 100)$ \\
\hline
7 & 101 & -32 & 0 & 11 & 3 & 1 \\
11 & 101 & -56 & 0 & 9 & 6 & 5 \\
19 & 101 & -37 & 0 & 12 & 12 & 12\\
61 & 50 & -8 & 1 & 4 & 21 & 28 \\
89 & 11 & 6 & 2 & 7 & 44 & 52
\end{tabular}
\end{table}
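As a sanity check on the rank-one specialization of these formulas, the following short script (our own illustration, not part of the computations above; all function and variable names are ours) computes the elliptic divisibility sequence $W(n)=\psi_n(P) \bmod 7$ for the point $P=(3,4)$ on $y^2=x^3-11$ via the classical doubling recurrences, locates its rank of apparition $\rho$, and verifies the symmetry $W(n+k\rho)=a^{k^2}b^{kn}W(n)$ numerically.

```python
# Hedged numerical sketch: rank-one Ward symmetry for the elliptic
# divisibility sequence of P = (3,4) on y^2 = x^3 - 11, reduced mod p = 7.
from functools import lru_cache

p = 7
# Initial values psi_1(P),...,psi_4(P) from the division polynomials of
# y^2 = x^3 + b with b = -11:
#   psi_2 = 2y = 8, psi_3 = 3x^4 + 12bx = -153,
#   psi_4 = 4y(x^6 + 20bx^3 - 8b^2) = -98864.
VALS = (0, 1, 8 % p, -153 % p, -98864 % p)
inv2 = pow(VALS[2], -1, p)  # assumes W(2) is a unit mod p

@lru_cache(maxsize=None)
def W(m):
    """Compute W(m) mod p by the classical EDS doubling recurrences."""
    if m < 5:
        return VALS[m]
    n, r = divmod(m, 2)
    if r:  # W(2n+1) = W(n+2)W(n)^3 - W(n-1)W(n+1)^3
        return (W(n + 2) * W(n) ** 3 - W(n - 1) * W(n + 1) ** 3) % p
    # W(2n) = (W(n)/W(2)) * (W(n+2)W(n-1)^2 - W(n-2)W(n+1)^2)
    return W(n) * inv2 * (W(n + 2) * W(n - 1) ** 2 - W(n - 2) * W(n + 1) ** 2) % p

# Rank of apparition: the zero set of this sequence is rho*Z, a subgroup of Z.
rho = next(n for n in range(1, 60) if W(n) == 0)
assert all(W(n) != 0 for n in range(1, rho))

# Solve W(n+rho) = a * b^n * W(n) for a and b using n = 1, 2, ...
b = W(rho + 2) * W(1) * pow(W(rho + 1) * W(2), -1, p) % p
a = W(rho + 1) * pow(b * W(1), -1, p) % p

# ... then check the symmetry together with the n^2-power law of
# Corollary \ref{nsquare}: W(n + k*rho) = a^(k^2) * b^(k*n) * W(n).
for k in (1, 2):
    for n in range(1, rho):
        assert W(n + k * rho) == pow(a, k * k, p) * pow(b, k * n, p) * W(n) % p
```

For this curve and $p=7$ the script finds $\rho=13$, consistent with the basis $\bm{\lambda}_2=(0,13)$ recorded in the first table.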
\end{exam} \section{Proofs of Proposition \ref{valuation-prop}, Theorem \ref{first-theorem}, and Proposition \ref{second-proposition}} \label{sec5} Recall that $K$ is a field with a discrete valuation $\nu : K^\times \rightarrow \mathbb{Z}$. We have $\mathcal{O}_\nu$, $\mathfrak{p}$, and ${K}$ defined as before. An application of the fact that $\Psi_\mathbf{v}^ \text{univ} \in \mathcal{R}_r^ \text{univ}$ is the following proof of Proposition \ref{valuation-prop}. \begin{proof}[Proof of Proposition \ref{valuation-prop}]
Recall that $\pi_E: S^{ \text{univ}} \rightarrow K$ is defined by $\pi_E(\alpha_i)=a_i$. Then the image of $\pi_E$ lies in
$\mathcal{O}_\nu$, so we can think of $\pi_E$ as a function from $S^ \text{univ}$ into $\mathcal{O}_\nu$.
In particular for any $\mathbf{v} \in \mathbb{Z}^r$ we get that
\begin{equation}
\label{net-polynomial}
\Psi_\mathbf{v}=(\pi_E)_*(\Psi_\mathbf{v}^ \text{univ}) \in \mathcal{O}_\nu[x_i,y_i]_{1\leq i\leq r}[(x_i-x_j)^{-1}]_{1\leq i <j\leq r}/{\langle f(x_i,y_i)\rangle}_{1\leq i \leq r}.
\end{equation}
Now assume that $P_i \not \equiv \infty \pmod \mathfrak{p}$ and
$P_i \pm P_j \not \equiv \infty \pmod \mathfrak{p}$
for all $i \neq j$.
Then, since $P_i \not \equiv \infty \pmod \mathfrak{p}$, we have
$\nu(x(P_i)) \geq 0$ and $\nu(y(P_i)) \geq 0$ and so $\nu(x(P_i)-x(P_j)) \geq 0$.
On the other hand, since $P_i, P_j, P_i \pm P_j \not \equiv \infty \pmod \mathfrak{p}$ we conclude that $x(P_i) \not \equiv x(P_j) \pmod \mathfrak{p}$, and thus
$\nu(x(P_i)-x(P_j)) \leq 0$. Therefore, $\nu(x(P_i)-x(P_j)) = 0$. This together with \eqref{net-polynomial} give
$\nu(\Psi_\mathbf{v}(\mathbf{P})) \geq 0$, as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{first-theorem}]
$(a) \Longrightarrow (b)$.
Observe that $\Psi_{n\mathbf{e}_i}({\bf P})=\psi_n(P_i)$, so
the result follows from Theorem \ref{ayad}.
$(b) \Longrightarrow (c)$ is clear.
$(c) \Longleftrightarrow (d).$
From Lemma \ref{numerator formula}, we have
\[ \Phi_\mathbf{v}(\mathbf{P})=\Psi_\mathbf{v}^2(\mathbf{P})x(P_i) - \Psi_{\mathbf{v}+\mathbf{e}_i}(\mathbf{P})\Psi_{\mathbf{v}-\mathbf{e}_i}(\mathbf{P}), \]
which implies that (c) and (d) are equivalent.
$(c) \Longrightarrow (e)$.
First note that by Proposition \ref{valuation-prop}, we have
$\nu(\Psi_\mathbf{v}(\mathbf{P})) \geq 0$, hence
$\Psi_\mathbf{v}(\mathbf{P}) \in \mathcal{O}_\nu$ and therefore
the reduction mod $\mathfrak{p}$ is well defined. We let
$\Psi_\mathbf{v}(\mathbf{P}) \pmod \mathfrak{p}$ be the image of $\Psi_\mathbf{v}(\mathbf{P})$
in the corresponding residue field under this reduction map.
By part (a) of Lemma \ref{lem prelim} we get that
$\Psi_\mathbf{v}(\mathbf{P}) \pmod \mathfrak{p}$ is an elliptic net.
Under the assumptions of (c) we have
$\Psi_{\mathbf{v}}(\mathbf{P}) \pmod \mathfrak{p} = 0$ and
$\Psi_{\mathbf{v}+\mathbf{e}_i}(\mathbf{P})\pmod \mathfrak{p} =0$. Now if the zero set of
$\Psi_{\mathbf{v}}(\mathbf{P}) \pmod \mathfrak{p}$ forms a subgroup then we have
$\Psi_{\mathbf{e}_i}(\mathbf{P}) \pmod \mathfrak{p}=\psi_1(P_i) \pmod \mathfrak{p}=0$
which is a contradiction, since $\psi_1=1$. So the zero set of
$\Psi_{\mathbf{v}}(\mathbf{P}) \pmod \mathfrak{p}$
does not form a subgroup of $\mathbb{Z}^r$ and thus by Theorem
\ref{second-theorem} we conclude that
$\Psi_{\mathbf{v}}(\mathbf{P}) \pmod \mathfrak{p}$
does not have a unique rank of apparition (with respect to
$\{\mathbf{e}_1, \cdots, \mathbf{e}_r\}$).
So there exists $1\leq i \leq r$ such that
$\Psi_{n\mathbf{e}_i}(\mathbf{P}) \pmod \mathfrak{p}$ does not have a unique
rank of apparition. By Theorem \ref{Ward6.2} we get
that $\Psi_{3\mathbf{e}_i} \pmod \mathfrak{p} =\Psi_{4\mathbf{e}_i} \pmod \mathfrak{p} = 0,$
which means
$\nu(\Psi_{3\mathbf{e}_i}(\mathbf{P})) > 0$ and $\nu(\Psi_{4\mathbf{e}_i}(\mathbf{P})) > 0$.
Therefore from Theorem \ref{ayad} we conclude that $P_i \pmod \mathfrak{p}$ is singular.
$(e) \Longrightarrow (a)$. Since $P_i \pmod \mathfrak{p}$ is singular,
from Theorem \ref{ayad} we know that $\nu(\psi_2(P_i))>0$ and
$\nu(\psi_3(P_i))>0$. Now the result follows since
$\psi_n(P_i)=\Psi_{n\mathbf{e}_i}({\bf P})$ for $n\in \mathbb{Z}$. \end{proof}
\begin{proof}[Proof of Proposition \ref{second-proposition}]
First of all, by \cite[Theorem 4.1]{AECII}, if $P$ is a point such that $P \pmod p$ is non-singular then we have the following expression for the local N\'{e}ron height of $P$: $$\lambda_p(P)= \max\left\{ -{1\over 2} \nu_p(x(P)),0\right\}+\frac{1}{12}\nu_p(\Delta_E).$$ Observe that $$\nu_p(D_P)=\max\left\{-{1\over 2} \nu_p(x(P)), 0\right\}.$$ Under our assumptions, since $P_i \pmod p$ is non-singular for $1\leq i \leq r$, we conclude that the quadratic form $\varepsilon(\mathbf{v})$ in Lemma \ref{diff quad} can be written as
\[ \varepsilon(\mathbf{v})=\nu_p(D_{\mathbf{v} \cdot \mathbf{P}})-\nu_p(\Psi_\mathbf{v}(\mathbf{P})) \] for $\mathbf{v}\neq \mathbf{0}$. We also note that $\mathbf{v} \mapsto \nu_p(F_\mathbf{v}(\mathbf{P}))$ is a
quadratic form, where $F_\mathbf{v}(\mathbf{P})$ is given in \eqref{FVP}.
Define $\hat{\varepsilon}:\mathbb{Z}^r \rightarrow \mathbb{Z}$ by
\[ \hat{\varepsilon}(\mathbf{v})=\varepsilon(\mathbf{v})-\nu_p(F_\mathbf{v}(\mathbf{P}))=
\nu_p(D_{\mathbf{v} \cdot \mathbf{P}})-\nu_p(\hat{\Psi}_\mathbf{v}(\mathbf{P})). \]
Since $\hat{\varepsilon}$ is the difference of two quadratic forms, we conclude
that $\hat{\varepsilon}$ is also a quadratic form.
Furthermore, we have
\begin{align*}
\hat{\varepsilon}(\mathbf{e}_i)=\nu_p(D_{P_i})-\nu_p(\hat{\Psi}_{\mathbf{e}_i}(\mathbf{P}))=0,
\end{align*}
for all $1\leq i \leq r$,
and
\begin{align*}
\hat{\varepsilon}(\mathbf{e}_i+\mathbf{e}_j)=\nu_p(D_{P_i+P_j})-\nu_p(\hat{\Psi}_{\mathbf{e}_i+\mathbf{e}_j}(\mathbf{P}))=0,
\end{align*}
for all $1\leq i < j \leq r$.
Thus by \cite[Lemma 4.5]{Stange} we have $\hat{\varepsilon}(\mathbf{v})=0$ for
all $\mathbf{v} \in \mathbb{Z}^r$.
This shows that, for all $\mathbf{v} \in \mathbb{Z}^r$, we have
\[ \nu_p(D_{\mathbf{v} \cdot \mathbf{P}}) = \nu_p(\hat{\Psi}_\mathbf{v}(\mathbf{P})), \]
as desired. \end{proof}
The following two examples give illustrations of Proposition \ref{second-proposition}. \begin{exam}{\label{first-example}}
We consider the elliptic curve $E: y^2=x^3-11$. Then the group of rational
points of $E$ over $\mathbb{Q}$ is generated by two points $P=(3, 4)$ and
$Q=(15, 58)$. We observe that $P, Q \not \equiv \infty \pmod p$ for all
primes $p$ and $P+Q \not \equiv \infty \pmod p$ for all primes $p$ except
$p=2$. In Table \ref{table1} we provide some values of the elliptic denominator net
associated to $E$ and the points $P$ and $Q$ as a two dimensional array with lower left corner $D_{0Q+0P}$, lower right corner
$D_{4Q+0P}$, upper left corner $D_{0Q+9P}$, and upper right
corner $D_{4Q+9P}$. Table \ref{table2} provides the corresponding values for the
elliptic net associated to net polynomials $\Psi_{(v_1, v_2)}(P, Q)$. As
predicted in Proposition \ref{second-proposition} the valuations of these
two nets at all primes $p$ (except $p=2$) coincide. \end{exam} \begin{exam}\label{second-example}
We consider the elliptic curve $E: y^2+7y=x^3+x^2+28x$ with
$E(\mathbb{Q})$ generated by two independent points $P=(0,0)$ and $Q=(1,
3)$. Then $P, Q, P+Q \not \equiv \infty \pmod p$ for any prime $p$. However
$P$ reduces to a singular point modulo $7$. Thus as predicted in Proposition
\ref{second-proposition} the valuations of the elliptic denominator net (given in
Table \ref{table3}) and the elliptic net (given in Table \ref{table4}) are the same for all
primes $p\neq 7$. \end{exam}
\newgeometry{bottom=0in}
\begin{landscape}
\begin{table}
\fontsize{6pt}{1}\selectfont
\begin{tabular}{MMMMM}
3^{3} \cdot 17 \cdot 861139 \cdot 638022143238323743 & 2 \cdot 31 \cdot 227 \cdot 32114101 \cdot 2233563433631 & 13 \cdot 97 \cdot 967 \cdot 2333 \cdot 899531 \cdot 20086489 & 2 \cdot 3^{2} \cdot 67 \cdot 89 \cdot 379 \cdot 1078019 \cdot 724929587 & 23 \cdot 103 \cdot 340789 \cdot 175849593114259\\
2^{5} \cdot 37 \cdot 167 \cdot 245519 \cdot 3048674017 & 3 \cdot 7^{2} \cdot 11 \cdot 1567 \cdot 634026250609 & 2^{2} \cdot 5^{2} \cdot 43 \cdot 293 \cdot 349 \cdot 631 \cdot 1670527 & 41 \cdot 227 \cdot 4051 \cdot 32279374297 & 2^{3} \cdot 3 \cdot 17 \cdot 37 \cdot 47 \cdot 149 \cdot 263 \cdot 2003 \cdot 714947\\
19 \cdot 433 \cdot 2689 \cdot 8819 \cdot 40487 & 2 \cdot 131 \cdot 179 \cdot 2103080101 & 3 \cdot 17 \cdot 101 \cdot 15641 \cdot 150379 & 2 \cdot 71 \cdot 83 \cdot 107 \cdot 751 \cdot 22613 & 77711 \cdot 82149276767\\
2^{3} \cdot 3^{2} \cdot 5 \cdot 17 \cdot 23 \cdot 1737017 & 163 \cdot 1877 \cdot 42797 & 2^{2} \cdot 67 \cdot 317 \cdot 98377 & 3^{2} \cdot 5 \cdot 59 \cdot 25640299 & 2^{6} \cdot 7 \cdot 41 \cdot 157 \cdot 229 \cdot 9437\\
449 \cdot 104759 & 2 \cdot 3 \cdot 29 \cdot 809 & 11 \cdot 19 \cdot 31 \cdot 677 & 2 \cdot 29 \cdot 569 \cdot 4987 & 3 \cdot 17 \cdot 1439 \cdot 925741\\
2^{4} \cdot 37 \cdot 167 & 5^{2} \cdot 631 & 2^{2} \cdot 3 \cdot 17 \cdot 149 & 13 \cdot 30557 & 2^{3} \cdot 5 \cdot 37 \cdot 239 \cdot 1549\\
3^{2} \cdot 17 & 2 \cdot 67 & 7 \cdot 157 & 2 \cdot 3^{3} \cdot 2087 & 19 \cdot 23 \cdot 503 \cdot 659\\
2^{3} & 3 & 2^{2} \cdot 5 & 11 \cdot 1553 & 2^{4} \cdot 3 \cdot 17 \cdot 199 \cdot 577\\
1 & 2 & 3 \cdot 17 & 2 \cdot 31 \cdot 233 & 631 \cdot 1753\\
0 & 1 & 2^{2} \cdot 29 & 3^{2} \cdot 5 \cdot 3331 & 2^{3} \cdot 29 \cdot 37 \cdot 83 \cdot 3467\\
\end{tabular}
\par
\caption{Elliptic denominator net associated to $E:y^2=x^3-11$ and the
points $Q=(15,58)$ and $P=(3, 4)$.}
\label{table1}
\end{table}
\begin{table}
\fontsize{6pt}{1}\selectfont
\begin{tabular}{MMMMM}
-3^{3} \cdot 17 \cdot 861139 \cdot 638022143238323743 & -2^{-8} \cdot 31 \cdot 227 \cdot 32114101 \cdot 2233563433631 & -2^{-18} \cdot 13 \cdot 97 \cdot 967 \cdot 2333 \cdot 899531 \cdot 20086489 & -2^{-26} \cdot 3^{2} \cdot 67 \cdot 89 \cdot 379 \cdot 1078019 \cdot 724929587 & -2^{-36} \cdot 23 \cdot 103 \cdot 340789 \cdot 175849593114259 \\
2^{5} \cdot 37 \cdot 167 \cdot 245519 \cdot 3048674017 & -2^{-8} \cdot 3 \cdot 7^{2} \cdot 11 \cdot 1567 \cdot 634026250609 & -2^{-14} \cdot 5^{2} \cdot 43 \cdot 293 \cdot 349 \cdot 631 \cdot 1670527 & -2^{-24} \cdot 41 \cdot 227 \cdot 4051 \cdot 32279374297 & -2^{-29} \cdot 3 \cdot 17 \cdot 37 \cdot 47 \cdot 149 \cdot 263 \cdot 2003 \cdot 714947\\
19 \cdot 433 \cdot 2689 \cdot 8819 \cdot 40487 & 2^{-6} \cdot 131 \cdot 179 \cdot 2103080101 & 2^{-14} \cdot 3 \cdot 17 \cdot 101 \cdot 15641 \cdot 150379 & -2^{-20} \cdot 71 \cdot 83 \cdot 107 \cdot 751 \cdot 22613 & -2^{-28} \cdot 77711 \cdot 82149276767\\
2^{3} \cdot 3^{2} \cdot 5 \cdot 17 \cdot 23 \cdot 1737017 & 2^{-6} \cdot 163 \cdot 1877 \cdot 42797 & 2^{-10} \cdot 67 \cdot 317 \cdot 98377 & 2^{-18} \cdot 3^{2} \cdot 5 \cdot 59 \cdot 25640299 & 2^{-18} \cdot 7 \cdot 41 \cdot 157 \cdot 229 \cdot 9437\\
-449 \cdot 104759 & -2^{-4} \cdot 3 \cdot 29 \cdot 809 & 2^{-10} \cdot 11 \cdot 19 \cdot 31 \cdot 677 & 2^{-14} \cdot 29 \cdot 569 \cdot 4987 & 2^{-20} \cdot 3 \cdot 17 \cdot 1439 \cdot 925741\\
-2^{4} \cdot 37 \cdot 167 & -2^{-4} \cdot 5^{2} \cdot 631 & -2^{-6} \cdot 3 \cdot 17 \cdot 149 & -2^{-12} \cdot 13 \cdot 30557 & 2^{-13} \cdot 5 \cdot 37 \cdot 239 \cdot 1549\\
-3^{2} \cdot 17 & -2^{-2} \cdot 67 & -2^{-6} \cdot 7 \cdot 157 & -2^{-8} \cdot 3^{3} \cdot 2087 & -2^{-12} \cdot 19 \cdot 23 \cdot 503 \cdot 659\\
2^{3} & 2^{-2} \cdot 3 & -2^{-2} \cdot 5 & -2^{-6} \cdot 11 \cdot 1553 & -2^{-4} \cdot 3 \cdot 17 \cdot 199 \cdot 577\\
1 & 1 & 2^{-2} \cdot 3 \cdot 17 & 2^{-2} \cdot 31 \cdot 233 & -2^{-4} \cdot 631 \cdot 1753\\
0 & 1 & 2^{2} \cdot 29 & 3^{2} \cdot 5 \cdot 3331 & 2^{3} \cdot 29 \cdot 37 \cdot 83 \cdot 3467\\
\end{tabular}
\par
\caption{Elliptic net associated to $E: y^2 = x^3 -11$ and the points $Q = (15,58)$ and $P=(3, 4)$.}
\label{table2}
\end{table}
\begin{table}
\centering
\fontsize{6pt}{1}\selectfont
\begin{tabular}{MMMMMMM}
3^{2} \cdot 5 \cdot 8243 \cdot 7289363 & 59 \cdot 523 \cdot 1170779 & 2803 \cdot 2163467 & 2^{3} \cdot 23 \cdot 7758139 & 59 \cdot 149837011 & 31 \cdot 229 \cdot 32045369 & 3 \cdot 11 \cdot 733 \cdot 154099559\\
13 \cdot 127 \cdot 3066533 & 2 \cdot 41 \cdot 53 \cdot 26627 & 7 \cdot 13 \cdot 17 \cdot 5653 & 5^{2} \cdot 29 \cdot 67 \cdot 487 & 3 \cdot 13 \cdot 19 \cdot 89 \cdot 1291 & 7 \cdot 109 \cdot 1427 \cdot 2833 & 2^{2} \cdot 13 \cdot 167 \cdot 199 \cdot 617887\\
5948431 & 181 \cdot 8819 & 3^{2} \cdot 47 \cdot 1097 & 11 \cdot 11779 & 2 \cdot 61 \cdot 74377 & 17 \cdot 25967671 & 5 \cdot 56479 \cdot 333271\\
3 \cdot 5 \cdot 7 \cdot 1949 & 6553 & 2^{4} \cdot 431 & 7^{2} \cdot 521 & 42181 & 47 \cdot 71 \cdot 14557 & 3 \cdot 7 \cdot 127 \cdot 349 \cdot 32537\\
2 \cdot 11 \cdot 113 & 911 & 463 & 5 \cdot 557 & 3^{3} \cdot 37 \cdot 137 & 2^{2} \cdot 2059769 & 25084117199\\
127 & 7 & 3 \cdot 19 & 2 \cdot 199 & 7 \cdot 2039 & 653 \cdot 15767 & 5 \cdot 11 \cdot 293 \cdot 662327\\
3 \cdot 5 & 2^{3} & 1 & 349 & 53 \cdot 593 & 5624039 & 2 \cdot 3^{2} \cdot 41 \cdot 73 \cdot 661 \cdot 2141\\
1 & 1 & 7 & 5 \cdot 11 & 2^{2} \cdot 3 \cdot 23 \cdot 107 & 7 \cdot 4812433 & 19 \cdot 127 \cdot 601 \cdot 4637\\
1 & 1 & 2 \cdot 3 & 601 & 277 \cdot 313 & 1987 \cdot 119321 & 5^{2} \cdot 139843540153\\
0 & 1 & 13 & 7 \cdot 59 & 13 \cdot 55819 & 2 \cdot 29 \cdot 26272439 & 3 \cdot 7 \cdot 13 \cdot 59 \cdot 263 \cdot 5880307\\
\end{tabular}
\par
\caption{Elliptic denominator net associated to $E: y^2 +7y = x^3 +x^2 +28x$ and the points $Q = (1,3)$ and $P=(0, 0)$.}
\label{table3}
\end{table}
\begin{table}
\centering
\fontsize{6pt}{1}\selectfont
\begin{tabular}{MMMMMMM}
-3^{2} \cdot 5 \cdot 7^{20} \cdot 8243 \cdot 7289363 & 7^{20} \cdot 59 \cdot 523 \cdot 1170779 & 7^{20} \cdot 2803 \cdot 2163467 & 2^{3} \cdot 7^{20} \cdot 23 \cdot 7758139 & -7^{20} \cdot 59 \cdot 149837011 & -7^{20} \cdot 31 \cdot 229 \cdot 32045369 & -3 \cdot 7^{20} \cdot 11 \cdot 733 \cdot 154099559\\
-7^{16} \cdot 13 \cdot 127 \cdot 3066533 & -2 \cdot 7^{16} \cdot 41 \cdot 53 \cdot 26627 & 7^{17} \cdot 13 \cdot 17 \cdot 5653 & 5^{2} \cdot 7^{16} \cdot 29 \cdot 67 \cdot 487 & 3 \cdot 7^{16} \cdot 13 \cdot 19 \cdot 89 \cdot 1291 & -7^{17} \cdot 109 \cdot 1427 \cdot 2833 & -2^{2} \cdot 7^{16} \cdot 13 \cdot 167 \cdot 199 \cdot 617887\\
7^{12} \cdot 5948431 & -7^{12} \cdot 181 \cdot 8819 & -3^{2} \cdot 7^{12} \cdot 47 \cdot 1097 & 7^{12} \cdot 11 \cdot 11779 & 2 \cdot 7^{12} \cdot 61 \cdot 74377 & 7^{12} \cdot 17 \cdot 25967671 & -5 \cdot 7^{12} \cdot 56479 \cdot 333271\\
3 \cdot 5 \cdot 7^{10} \cdot 1949 & 7^{9} \cdot 6553 & -2^{4} \cdot 7^{9} \cdot 431 & -7^{11} \cdot 521 & -7^{9} \cdot 42181 & 7^{9} \cdot 47 \cdot 71 \cdot 14557 & 3 \cdot 7^{10} \cdot 127 \cdot 349 \cdot 32537\\
2 \cdot 7^{6} \cdot 11 \cdot 113 & 7^{6} \cdot 911 & 7^{6} \cdot 463 & -5 \cdot 7^{6} \cdot 557 & -3^{3} \cdot 7^{6} \cdot 37 \cdot 137 & -2^{2} \cdot 7^{6} \cdot 2059769 & 7^{6} \cdot 25084117199\\
-7^{4} \cdot 127 & 7^{5} & 3 \cdot 7^{4} \cdot 19 & 2 \cdot 7^{4} \cdot 199 & -7^{5} \cdot 2039 & -7^{4} \cdot 653 \cdot 15767 & -5 \cdot 7^{4} \cdot 11 \cdot 293 \cdot 662327\\
-3 \cdot 5 \cdot 7^{2} & -2^{3} \cdot 7^{2} & -7^{2} & 7^{2} \cdot 349 & 7^{2} \cdot 53 \cdot 593 & -7^{2} \cdot 5624039 & -2 \cdot 3^{2} \cdot 7^{2} \cdot 41 \cdot 73 \cdot 661 \cdot 2141\\
7 & -7 & -7^{2} & -5 \cdot 7 \cdot 11 & 2^{2} \cdot 3 \cdot 7 \cdot 23 \cdot 107 & 7^{2} \cdot 4812433 & -7 \cdot 19 \cdot 127 \cdot 601 \cdot 4637\\
1 & 1 & -2 \cdot 3 & -601 & -277 \cdot 313 & 1987 \cdot 119321 & 5^{2} \cdot 139843540153\\
0 & 1 & 13 & -7 \cdot 59 & -13 \cdot 55819 & -2 \cdot 29 \cdot 26272439 & 3 \cdot 7 \cdot 13 \cdot 59 \cdot 263 \cdot 5880307\\
\end{tabular}
\par
\caption{Elliptic net associated to $E: y^{2} +7y = x^{3} +x^{2} +28x$ and the points $Q = (1,3)$ and $P=(0, 0)$.}
\label{table4}
\end{table}
\end{landscape}
\end{document}
Maxim O. Lavrentovich
I am an assistant professor at the University of Tennessee, working on theoretical problems in biophysics and soft condensed matter physics.
Pattern formation
Evolutionary dynamics and domain walls
Non-equilibrium statistical mechanics
Range expansions
I'm interested in the evolution of populations that are invading or spreading into new territory. Such invasions are called range expansions.
A range expansion of E. coli on a Petri dish. (Image courtesy of Bryan Weinstein)
When populations spread on a surface or in three-dimensional space (such as a solid tumor in healthy tissue), spatial fluctuations may play an important role in the evolutionary dynamics. We can see in the image above that an initially well-mixed population of red-, blue-, and green-fluorescent E. coli segregates into single-colored sectors as the organisms spread across the surface of a Petri dish (forming a colony). This is a result of the small effective population size at the frontier of the colony, where a small fraction of the total population divides into the new territory. Small number fluctuations, then, rapidly fix the population to a single color at the frontier.
We studied such range expansions in the presence of selection and mutation. In the simplest instance, we consider a blue strain with a selective advantage \(s\) over a yellow strain; the blue strain may mutate to the yellow one with some rate \(\mu\). In this case, depending on the mutation rate \(\mu\) and the selection coefficient \(s\), the blue strain may either survive in the population at long times or go extinct. Spatial fluctuations strongly suppress the blue strain, leading to more extinction, as shown in the phase diagram below.
In a two-dimensional range expansion (a microbial colony growing across a Petri dish, for example), extinction is enhanced due to spatial fluctuations. We see in this phase diagram that a blue strain with a selective advantage \(s\) will go extinct faster in a range expansion for a fixed mutation rate \(\mu\) than in the well-mixed case (dashed line). Such spatial extinction transitions may be described by directed percolation, which makes concrete predictions for quantities such as the opening angle \(\theta\) of blue genetic sectors [1,2].
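As a toy illustration of these frontier dynamics (a minimal sketch of our own, not the simulations of [1,2]; the update rule and parameter names are assumptions), one generation at a flat one-dimensional front can be coded as follows: each new site inherits its color from one of two parent sites, chosen with a bias set by the selective advantage \(s\), and then mutates blue to yellow with probability \(\mu\):

```python
import numpy as np

def front_generation(state, s=0.1, mu=0.05, rng=None):
    """One generation at a flat front; True = advantaged blue strain.

    Each child copies one of its two parents (sites i and i+1, periodic),
    chosen with probability proportional to fitness: 1 + s for blue, 1 for
    yellow. The child then mutates blue -> yellow with probability mu.
    """
    rng = np.random.default_rng() if rng is None else rng
    left, right = state, np.roll(state, -1)
    w_left = np.where(left, 1.0 + s, 1.0)
    w_right = np.where(right, 1.0 + s, 1.0)
    pick_left = rng.random(state.shape) < w_left / (w_left + w_right)
    child = np.where(pick_left, left, right)
    # one-way deleterious mutation: blue survives with probability 1 - mu
    return child & (rng.random(state.shape) >= mu)
```

Iterating `front_generation` from a mixed initial row and averaging over runs gives a survival probability for blue as a function of \(s\) and \(\mu\); the all-yellow state is absorbing, mirroring the extinction transition in the phase diagram.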
Domain walls in ferroelectrics and cellular populations
Remarkably, domain walls in ferroelectric materials (regions separating two different electric polarizations) share similar characteristics with range expansion frontiers or interfaces between competing species. In both cases, the interface can be captured by a scalar function \(\phi(\mathbf{x},t)\) that indicates the "phase" of the material, such as the polarization \(P\) or the cellular density. Then, the dynamics of this field are governed by an equation that tends to minimize some effective "free energy" \(\mathcal{F}\), which has the general form
$$\mathcal{F}=\int \mathrm{d}\mathbf{x}\left[ \kappa (\nabla \phi)^2 + V(\phi) \right],$$
where $V(\phi)$ has local minima at the preferred values of $\phi$, which might be the two stable polarizations $\pm P_0$ along some axis for a uniaxial ferroelectric material, or simply minima at \(\phi=0,1\) for the two stable states of a growing population (no cells and cells at the local carrying capacity for which we can take \(\phi=1\) without loss of generality). The simplest evolution for \(\phi(\mathbf{x},t)\) would be the "model A" [3] dynamics
$$\partial_t \phi = - \nu\, \frac{\delta \mathcal{F}}{\delta \phi} + \xi(\mathbf{x},t),$$
where \(\nu\) is a "viscosity" setting the timescale of relaxation toward equilibrium and \(\xi(\mathbf{x},t)\) is a spatiotemporal noise representing either thermal fluctuations for ferroelectric materials, or stochastic cell birth/death for cellular populations. These different sources of noise have significant consequences for the noisy dynamics of the wall. The characteristic features of the noise are specified by the noise correlations, which are given by \( \langle \xi(\mathbf{x},t) \xi(\mathbf{x}',t') \rangle = \Xi[ \phi ] \delta(\mathbf{x}-\mathbf{x}') \delta(t-t')\), where \(\Xi\) is a constant for thermal noise (proportional to the temperature and \(\nu\) according to the fluctuation-dissipation theorem) and proportional to \(\phi\) in the case of cellular populations, as no (local) stochastic birth/death can happen in the absence of cells.
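A minimal numerical sketch of these dynamics (our own discretization, assuming the double-well potential \(V(\phi)=\phi^2(1-\phi)^2\) with stable states \(\phi=0,1\)) is an explicit Euler–Maruyama step on a periodic one-dimensional lattice:

```python
import numpy as np

def model_a_step(phi, dx, dt, nu=1.0, kappa=1.0, noise_amp=0.0, rng=None):
    """One explicit Euler-Maruyama step of  d_t phi = -nu dF/dphi + xi.

    F = int [ kappa (grad phi)^2 + V(phi) ] with V(phi) = phi^2 (1 - phi)^2,
    so -dF/dphi = 2 kappa lap(phi) - V'(phi). The noise variance is taken
    proportional to phi (demographic noise): no birth/death without cells.
    """
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2
    dV = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)   # V'(phi)
    drift = nu * (2.0 * kappa * lap - dV)
    phi_new = phi + dt * drift
    if noise_amp > 0.0 and rng is not None:
        amp = noise_amp * np.sqrt(np.clip(phi, 0.0, None) * dt / dx)
        phi_new = phi_new + amp * rng.standard_normal(phi.shape)
    return phi_new
```

With `noise_amp=0` the field relaxes deterministically to the nearest minimum of \(V\); the \(\sqrt{\phi}\) factor makes the empty state \(\phi=0\) absorbing, as appropriate for demographic noise, in contrast to additive thermal noise.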
The dynamics may also be significantly influenced by quenched noise, such as obstacles or nutrient variation for cellular growth or crystalline defects in ferroelectric materials. Such noise is captured by introducing random variations in the coefficients of the potential \(V[\phi]\). The thermal and quenched noises have very different effects, with quenched noise tending to pin domain walls and thermal noise tending to broaden and depin the domain wall. These effects, however, also depend on the nature of the potential \(V[\phi]\). In ferroelectrics, for example, there may exist other metastable states (such as a metastable paraelectric state with \(P=0\)). These states may be excited by the thermal fluctuations, leading to dramatic changes to the domain wall behavior [4].
[1] M. O. Lavrentovich, M. E. Wahl, D. R. Nelson, and A. W. Murray Spatially constrained growth enhances conversional meltdown Biophysical Journal 110(12) 271 (2016)
[2] M. O. Lavrentovich, K. S. Korolev, and D. R. Nelson Radial Domany-Kinzel models with mutation and selection Physical Review E 87 012103 (2013)
[3] P. C. Hohenberg and B. I. Halperin Theory of Dynamic Critical Phenomena Reviews of Modern Physics 49, 435 (1977)
[4] N. Bauer, S. M. Neumayer, P. Maksymovych, M. O. Lavrentovich (arxiv.org/abs/2208.02990) (2022)
\begin{document}
\begin{center} {\bf {\sc \Large Conditional aggregation-based Choquet integral\\ on discrete space}} \end{center}
\vskip 12pt
\begin{comment} \begin{center} \title{Size-based Choquet integral and its application to OWA} \author{\bas{Basarik Stanislav}\fnref{fn4}}\ead{[email protected]} \author{\jb{Borzová Jana}\fnref{fn1}}\ead{[email protected]} \author{\lh{Halčinová Lenka}\fnref{fn2}}\ead{[email protected]} \author{\js{Šupina Jaroslav}\fnref{fn3}}\ead{[email protected]} \address{Institute of Mathematics, P.~J. \v{S}af\'arik University in Ko\v sice, Jesenn\'a 5, 040 01 Ko\v{s}ice, Slovakia} \fntext[fn2]{Supported by the grants APVV-16-0337, VVGS-2016-255, VVGS-PF-2017-255.}
\begin{abstract} .... \end{abstract} \begin{keyword} {size\sep super level measure\sep non-additive measure \sep possibility measure } \MSC[2010] 28A12 \end{keyword} \end{center} \end{comment}
\begin{center} {\bf Stanislav Basarik, Jana Borzová, \bf Lenka Hal\v{c}inov\'{a}, Jaroslav \v{S}upina}\blfootnote{{\it Mathematics Subject Classification (2010):} Primary 28A12, 28E10
}\\ {\footnotesize{\textit{Institute of Mathematics, P.~J.~\v{S}af\'arik University in Ko\v sice, Jesenn\'a 5, 040 01 Ko\v{s}ice, Slovakia}}} \end{center}
\begin{abstract} We derive computational formulas for the~generalized Choquet integral based on the~novel survival function introduced by M.~Boczek et al.~\cite{BoczekHalcinovaHutnikKaluszka2020}. We demonstrate its usefulness on the~Knapsack problem and the~problem of accommodation options. Moreover, we describe sufficient and necessary conditions under which novel survival functions based on different parameters coincide. This is closely related to the incomparability of input vectors (alternatives) in decision-making processes.
\end{abstract}
\noindent{\small{\it Keywords: }{Choquet integral; conditional aggregation operator; decision making; survival function}}
\section{Introduction}
M.~Boczek et al.~\cite{BoczekHalcinovaHutnikKaluszka2020}, inspired by consumers' problems, aggregation operators, and conditional expectation, introduced the new notion of a conditional aggregation operator. Conditional aggregation operators cover many existing aggregations, such as the~arithmetic and geometric mean, as well as many integrals known in~the~literature~\cite{KlementMesiarPap2010, Shilkret1971, Sugeno1974}. They became the essence of the~generalization of the survival function (a notion known from~\cite{DuranteSempi2015}, known from~\cite{grabisch2016set} as the decumulative distribution function and from~\cite{BorzovaHalcinovaHutnik2015a} as the strict level measure) introduced by M.~Boczek et al.~\cite{BoczekHalcinovaHutnikKaluszka2020}. Just as the survival function is the basis of the definition of the famous Choquet integral, the~generalized survival function enabled the building of a new integral. The generalized Choquet integral $\mathrm{C}_{\cA}(\mathbf{x},\mu)$ based on the~generalized survival function $\mu_{\bA}(\mathbf{x},\alpha)$ has been naturally introduced by \[ \mathrm{C}_{\cA}(\mathbf{x},\mu)=\int_{0}^{\infty}\mu_{\bA}(\mathbf{x},\alpha)\,\mathrm{d}\alpha. \]
A~discrete form of the famous Choquet integral is of great importance in decision making theory, regarding a finite set $[n] = \{1,\dots, n\}$ as criteria set, a vector $\mathbf{x} \in [0, +\infty)^{[n]}$ as a~score vector, and a capacity $\mu\colon 2^{[n]} \to [0, +\infty)$ as the weights of particular sets of criteria.
We provide problems that complement the~ones described in~\cite{BoczekHalcinovaHutnikKaluszka2020} and stress the~potential of the~novel survival function, and the integral based on it, in decision theory. However, the~main aim of the~present paper is to provide various computational formulas for $\mu_{\bA}(\mathbf{x},\alpha)$, and consequently for $\mathrm{C}_{\cA}(\mathbf{x},\mu)$, since the computation of $\mathrm{C}_{\cA}(\mathbf{x},\mu)$ has not yet been thoroughly investigated. Thus, we believe that our computational algorithms improve the~practical implementation of the~novel concept of aggregation in various problems.
\par The~novel survival function $\mu_{\bA}(\mathbf{x},\alpha)$ is based on an~aggregation of an~input vector $\mathbf{x}$: each conditional aggregation operator from the~family~$\bA$ of size~$\kappa$ is applied to~$\mathbf{x}$ on the corresponding set of indices. To illustrate our results we provide a~sample formula for $\mathrm{C}_{\cA}(\mathbf{x},\mu)$. Indeed, we show that \begin{align}\label{sample_Ch} \mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=0}^{\kappa-1}\mu(F_{\mathbf{i}(i)})(\sA_{i+1}-\sA_i), \end{align} where $\sA_0\leq\dots\leq\sA_{\kappa-1}$ is the~arrangement of all the~values of the~family of conditional aggregation operators~$\bA$ applied to~$\mathbf{x}$ and the corresponding sets $E_0,\dots, E_{\kappa-1}$, and $\mu(F_0)\leq\dots\leq\mu(F_{\kappa-1})$ are all the~values of the~measure~$\mu$. The~index~$\mathbf{i}(i)$ deciding the~value of $\mathrm{C}_{\cA}(\mathbf{x},\mu)$ on the~interval $[\sA_i,\sA_{i+1})$ is the~minimum of the~set $\{(0),(1),\dots,(i)\}$ of the~indices such that $F_{(0)}=E_0^c,\dots,F_{(i)}=E_i^c$. The~proof of formula~\eqref{sample_Ch}, as well as of many other similar formulas, can be found in Theorem~\ref{vypocetCh}. \par The~paper is organized as follows. Section~2 contains the necessary terminology on measures and families of conditional aggregation operators. Moreover, it describes two problems in decision-making theory that can be modeled by the~generalized survival function. The~first series of formulas for its computation is presented in~Section~3. The~next section reduces the~minimization of the~reals in the~formulas to the~minimization of indices, and shows two graphical approaches to obtaining the~generalized survival function. In addition, it contains a~solution to the~first of the two mentioned problems in decision-making theory. The~formulas for the~generalized Choquet integral computation are listed in Section~5, including simplified formulas for special types of measures.
The~solution to the~second problem from Section~2 is presented here. The~study of the~conditions under which the~novel survival functions based on different parameters coincide is performed in~Section~6. Finally, the~paper itself contains just the proofs of selected results; the~remaining proofs are attached in the~Appendix.
\section{Background and interpretations}\label{sec: motivation}
As we have already mentioned, we shall consider a finite set $[n]:=\{1,2,\dots,n\}$, $n\in\mathbb{N}$, $n\geq1$. Let us denote $\ozn{n}:=\{0\}\cup[n]$. By $2^{[n]}$ we mean the power set of $[n]$. A set function $\mu\colon \mathcal{S}\to[0,+\infty)$, $\{\emptyset\}\subseteq\mathcal{S}\subseteq2^{[n]}$, with $\mu(\emptyset)=0$ and such that $\mu(E)\leq\mu(F)$ whenever $E\subseteq F$, is called a \textit{monotone measure} on $\mathcal{S}$. Moreover, if $[n]\in \mathcal{S}$, we assume $\mu([n])>0$, and if $\mu([n])=1$, the monotone measure $\mu$ is called a \textit{capacity} or a \textit{normalized monotone measure}. Further, we put $\max{\emptyset}=0$, $\min{\emptyset}=+\infty$ and $\textstyle\sum_{i\in\emptyset}x_i=0$.
We shall work with nonnegative real-valued vectors, we use the notation $\mathbf{x}=(x_1,\dots,x_n)$, $x_i\in[0,+\infty)$, $i\in[n]$. The family of all nonnegative real-valued vectors on $[n]$ is the set $[0,+\infty)^{[n]}$. By $\mathbf{1}_E$ we shall denote \textit{indicator function} of a set $E\subseteq [0,+\infty)$, i.e.\ $\mathbf{1}_E(x)=1$, if $x\in E$, and $\mathbf{1}_E(x)=0$, if $x\notin E$. Especially, $\mathbf{1}_\emptyset(x)=0$ for each $x\in E$. Let us consider a set $\mathcal{E}\subseteq2^{[n]}$. Unless otherwise stated, for application reasons we assume $\{\emptyset,[n]\}\subseteq\mathcal{E}$. We call it the \textit{collection}. Let us denote the number of sets in $\mathcal{E}$ by $\kappa$, i.e.\ $|\mathcal{E}|=\kappa$. Let $\hat{\cE}=\{E^c:E\in\cE\}$, i.e.\ $\hat{\cE}$ contains the complements of the sets from collection $\mathcal{E}$. The set of all monotone measures on $\hat{\mathcal{E}}$ we shall denote by $\mathbf{M}$.
In the following, we present the definition of the conditional aggregation operator. The inspiration for this concept can be found in probability theory, specifically in conditional expectation. The idea of aggregating data not on the whole, but on a conditional set is expressed in the terms of this definition. The conditional aggregation operator generalizes the classical definition of aggregation operator introduced by Calvo et al.\ in~\cite{CalvoKolesarovaKomornikovaMesiar2002} and forms the basic component of the definition of the generalized survival function.
\begin{definition}\label{def: gsf}\rm(cf.~\cite[Definition 4.1.]{BoczekHalcinovaHutnikKaluszka2020}) A~\textit{family of conditional aggregation operators} (FCA for short) is a~family $$\bA=\{\aA{\cdot}: E\in\mathcal{E}\}\footnote{$\cA$ is a~family of operators parametrized by a~set from~${\cE}$.},$$ such that each $\aA{\cdot}$ is a~map $\aA{\cdot}\colon [0,+\infty)^{[n]}\to[0,+\infty)$ satisfying the following conditions: \begin{enumerate}[\rm(i)] \item $\aA[E]{\mathbf{x}}\le \aA[E]{\mathbf{y}}$ for any $\mathbf{x},\mathbf{y}$ such that $x_i\le y_i$ for any $i\in E$, $E\neq\emptyset$; \item $\aA{\mathbf{1}_{E^c}}=0$, $E\neq\emptyset$. \end{enumerate} If $\mu$ is a monotone measure on $\hat{\cE}=\{E^c: E\in\cE\}$, i.e. $\mu\in\mathbf{M}$, then the~\textit{generalized survival function} with respect to $\bA$ is defined as \begin{eqnarray}\label{predpis gsf}\mu_{\bA}(\mathbf{x},\alpha):=\min\left\{\mu(E^c): \aA[E]{\mathbf{x}}\leq\alpha,\, E\in\mathcal{E}\right\}\end{eqnarray} for any $\alpha\in[0,+\infty)$. \end{definition}
If $E\neq\emptyset$, then $\aA{\cdot}$ is called the~\textit{conditional aggregation operator w.r.t.\ $E$}. Moreover, we consider that each element of FCA satisfies $\aA[\emptyset]{\cdot}=0$.
\begin{remark}Note that the survival function $\mu(\{\mathbf{x}>\alpha\})=\mu(\{i\in[n]:x_i>\alpha\})$ can be rewritten as \begin{align}\label{vznik}\mu(\{\mathbf{x}>\alpha\})&=\mu([n]\setminus\{\mathbf{x}\leq\alpha\})=\min\big\{\mu (E^c):(\forall i\in E)\,\, x_i\leq \alpha,\, E\in 2^{[n]}\big\}\nonumber\\ &=\min\{\mu(E^c):\max_{i\in E}x_i\leq\alpha,E\in 2^{[n]}\},\end{align} therefore the introduction of generalized survival function consists in the simple idea of replacing $\max_{i\in E}x_i$~in~(\ref{vznik}) with another functional. Clearly, for $\mathcal{E}=2^{[n]}$ and $\cA^{\mathrm{max}}$ we get the original strict survival function.\end{remark}
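For a finite collection, formula \eqref{predpis gsf} can be evaluated by direct search. The following Python sketch (our own naming, with 0-based indices) also checks the observation of the remark above: for $\cA^{\mathrm{max}}$ and $\cE=2^{[n]}$ one recovers the classical strict survival function.

```python
from itertools import combinations

def subsets(universe):
    """All subsets of `universe`, as frozensets."""
    u = list(universe)
    return [frozenset(c) for r in range(len(u) + 1) for c in combinations(u, r)]

def gen_survival(x, mu, collection, agg, alpha):
    """min{ mu(E^c) : agg(x, E) <= alpha, E in collection }.

    `mu` maps frozensets to nonnegative reals; the empty set belongs to the
    collection and aggregates to 0, so the minimum is over a nonempty family.
    """
    universe = frozenset(range(len(x)))
    return min(mu[universe - E] for E in collection if agg(x, E) <= alpha)

def agg_max(x, E):
    return max((x[i] for i in E), default=0.0)

# counting measure + max-aggregation on the full power set recovers
# the classical strict survival function mu({x > alpha}) = #{i : x_i > alpha}
x = (2.0, 3.0, 1.0)
coll = subsets(range(3))
mu = {F: len(F) for F in coll}
print([gen_survival(x, mu, coll, agg_max, a) for a in (0.0, 1.5, 3.0)])  # [3, 2, 0]
```

Replacing `agg_max` by a sum or a mean yields the other families of Example~2.2 without changing the search itself.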
\begin{example}\label{prikladyFCA}\rm Let $\mathbf{x}\in[0,+\infty)^{[n]}$. Typical examples of the FCA are:\begin{enumerate}[(i)] \item $\cA^{\mathrm{max}}=\{\aAi[E]{\cdot}{\mathrm{max}}:E\in{\cE}\}$ with $\aAi[E]{\mathbf{x}}{\mathrm{max}}=\max_{i\in E}x_i$ for $E\neq\emptyset$; \item $\cA^{\mathrm{min}}=\{\aAi[E]{\cdot}{\mathrm{min}}:E\in{\cE}\}$ with $\aAi[E]{\mathbf{x}}{\mathrm{min}}=\min_{i\in E}x_i$ for $E\neq\emptyset$; \item $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}:E\in{\cE}\}$ with $\aAi[E]{\mathbf{x}}{\mathrm{sum}}=\sum_{i\in E}x_i$ for $E\neq\emptyset$.\end{enumerate}\end{example}
Note that the FCA does not have to contain elements of only one type.
\begin{example} Let $\mathbf{x}\in[0,+\infty)^{[3]}$, and ${\cE}=\{\emptyset, \{1\},\{2\},\{3\},\{1,2,3\}\}$. We can consider the following FCA $$\cA=\{\aAi[E]{\cdot}{\mathrm{max}}:E\in\{\{1\},\{2\},\{1,2,3\}\}\}\cup\{\aAi[E]{\cdot}{\mathrm{min}}:E\in\{\{3\}\}\}\cup \{\aA[\emptyset]{\cdot}\}.$$ \end{example} For other examples of FCA we recommend~\cite{BoczekHalcinovaHutnikKaluszka2020}. In several places in this paper, we shall work with an FCA that is \textit{nondecreasing} w.r.t.\ sets, i.e.\ the map $E\mapsto\aA[E]{\cdot}$ will be nondecreasing. E.g., the families $\cA^{\mathrm{max}}$ and $\cA^{\mathrm{sum}}$ in Example~\ref{prikladyFCA} are nondecreasing w.r.t.\ sets.
In the following, we present problems that emphasize the need for the generalized survival function and the generalized Choquet integral in real situations. We stress that M.~Boczek et al.\ in~\cite{BoczekHalcinovaHutnikKaluszka2020} introduced several problems of this kind. However, we move beyond these examples. The solution to these problems using our results is deferred to Subsection~\ref{subsec: indices} and Section~\ref{Choquet}.
\paragraph{Knapsack problem} Let us imagine a person who is preparing for a holiday. The person plans to travel by plane. Therefore, while packing the suitcase, he must follow the rules, according to which at most $1$~liter of liquids may be carried in the suitcase. Moreover, liquids must be in containers with a volume of up to $100$\,ml. These products (soap, shampoo, cream, etc.) and their possible combinations (packages of products) are more expensive in the destination country than at home. The person wants to buy such products, or packages of products, while still at home, in order to minimize the price of the holiday.
Let $[n]$, $n\geq 1$, be a set of liquid products that the person needs. Then $\mathcal{E}\subseteq2^{[n]}$ represents all possible combinations of products. Let us consider $\mathbf{x}=(x_1,\dots,x_n)\in[0,+\infty)^{[n]}$, where $x_i$ represents the volume of the $i$-th container, and let a monotone measure $\mu\in\mathbf{M}$ represent the price of a package of products. Note that the monotone measure $\mu$ need not be additive. It is often possible to buy a package of products that is cheaper than the sum of the prices of the individual products. The task is to choose a combination $E\in\mathcal{E}$ of products whose total volume does not exceed the given limit, i.e.\ $\textstyle\sum_{i\in E}x_i\leq1000$ (milliliters), keeping in mind that we want to minimize the price of those products that will no longer fit in the suitcase and that the person will have to buy during the holiday. In other words, we have to solve the optimization problem $$\min\left\{\mu(E^c): \sum_{i\in E}x_i\leq1000,\,E\in\mathcal{E}\right\}\text{.}$$
\noindent One can observe that the given formula is a special case of the generalized survival function given in~\eqref{predpis gsf} with the conditional aggregation operator being the sum.
Last, but not least, let us point out the essence of the collection $\mathcal{E}$. There are situations when, instead of the whole power set, one is forced to consider a subcollection $\mathcal{E}\subset 2^{[n]}$. E.g., among the products under consideration there may be a shampoo and a conditioner sold separately, as well as a shampoo containing conditioner. Of course, these products should not be bought together. Thus all sets containing all three of these products are disqualified, and it makes sense to take $\mathcal{E}\subset 2^{[n]}$ instead of the whole power set.
The~full solution to the~problem using our results is given in Subsection~\ref{subsec: indices}.
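Since $\cE$ is finite, the optimization problem above can also be solved by exhaustive search over the collection. The sketch below is ours (the function name and the toy data are hypothetical); the dictionary `price` plays the role of the monotone measure $\mu$ evaluated on complements:

```python
from itertools import combinations

def best_purchase(volumes, price, collection, limit=1000):
    """Pick E in the collection whose total volume fits within `limit`,
    minimizing price[E^c], i.e. the price of the products left to buy abroad."""
    items = frozenset(range(len(volumes)))
    feasible = [E for E in collection if sum(volumes[i] for i in E) <= limit]
    return min(feasible, key=lambda E: price[items - E])

# toy instance: three containers (in ml) with an additive package price (euros)
volumes = (600, 500, 400)
unit_price = (30, 20, 10)
coll = [frozenset(c) for r in range(4) for c in combinations(range(3), r)]
price = {F: sum(unit_price[i] for i in F) for F in coll}
print(best_purchase(volumes, price, coll))  # products 0 and 2 fit; 20 euros remain
```

A nonadditive `price` (package discounts) can be substituted directly, since the search only looks the measure up on complements.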
\paragraph{Problem of accommodation options}
Three people, Anthony, Brittany, Charley, are going on three different holidays and each of them is looking for accommodation in their own destination. They use the same online search engine, which searches for available accommodation based on three criteria, namely the distance of accommodation from the destination (in km), the price per night (in euros), and reviews (scale from $1$ to $10$). Each person has a certain character (in the sense that the criteria have different importance for them). Anthony is a person who saves money, Brittany does not want to walk far and Charley is looking for high-quality accommodation. We can model these characters using monotone measures as shown in Table~\ref{charaktery} (assume that the given values are chosen by these three people in the search engine). Let us label the criteria as $\text{D}$ (distance), $\text{P}$ (price) and $\text{R}$ (reviews). \begin{table}[H]
\centering
\begin{tabular}{c|c|c|c}
& distance (D) & price (P) & reviews (R) \\\hline
Anthony ($\mu$) & $0.3$ & $0.8$ & $0.1$ \\
Brittany ($\nu$) & $0.75$ & $0.4$ & $0.2$ \\
Charley ($\xi$) & $0.2$ & $0.5$ & $0.7$
\end{tabular}
\caption{Characters of people expressed by monotone measures $\mu$, $\nu$ and $\xi$}
\label{charaktery} \end{table} \noindent For the sake of simplicity, let us suppose that the search engine offered Anthony, Brittany, and Charley two accommodation options, see Table~\ref{options_booking}. Let the values be in the range that is acceptable for them.
\begin{table}[H]
\centering
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{c|N|N|N}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
opt.\ $1$ & $4$ & $100$ & $7$\\
opt.\ $2$ & $10$ & $84$ & $8$
\end{tabular}
\caption{Anthony}
\end{subtable}
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{c|N|N|N}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
opt.\ $1$ & $7$ & $100$ & $2$\\
opt.\ $2$ & $10$ & $80$ & $10$
\end{tabular}
\caption{Brittany}
\end{subtable}
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{c|N|N|N}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
opt.\ $1$ & $3$ & $100$ & $8$\\
opt.\ $2$ & $15$ & $95$ & $10$
\end{tabular}
\caption{Charley}
\end{subtable}
\caption{Accommodation options}
\label{options_booking} \end{table} \noindent The aim is to determine the accommodation for each person that would fit him the best (based on character). In other words, the aim is to find a method that will select (based on the character of a person) the accommodation we would expect.
Clearly, this is a decision-making problem. In the literature, there is a known approach to solving such problems using the theory of nonadditive measures and integrals, where the Choquet integral
is mainly used, see~\cite{Grabisch1996}. The main advantage of this method is that nonadditive measures can model interactions between criteria (e.g., a higher review score usually comes with a higher price). In this paper, we point out the advantages of using a generalized version of the Choquet integral
in this decision-making process.
The~full solution to the~problem using our results is presented in Section~\ref{Choquet}.
\section{ Computational formulas}\label{prvy_sposob}
When working with a~family of conditional aggregation operators~$\bA$, an input vector $\mathbf{x}$ is aggregated on sets $E$ from $\cE\subseteq 2^{[n]}$. By this procedure, instead of the $n$ input components of $\mathbf{x}$ we obtain $\kappa=|\cE|$ input values of the~family of conditional aggregation operators~$\bA$: \begin{align}\label{Ei} 0=\sA_0\leq\sA_1\leq\dots\leq\sA_{\kappa-2}\leq\sA_{\kappa-1}<+\infty, \end{align} with the convention $\sA_{\kappa}=+\infty$. This ``new'' input enters the computation of the generalized survival function. Now we provide two approaches to computing generalized survival function values; the~first one is presented in~Theorem~\ref{zjednodusenie_def} and the~other one in~Theorem~\ref{gsf2}. Let us fix some notation. \par Let $E_*$ be any bijection $E_*\colon \ozn{\kappa-1}\to{\cE}$ such that \begin{align}\label{EiAi} \sA_i=\aA[E_{i}]{\mathbf{x}}. \end{align} The~second bijection is a~map $F_*\colon \ozn{\kappa-1}\to{\hat{\cE}}$ such that, denoting $\mu_j=\mu(F_{j})$, we have: \begin{align}\label{Fi} 0=\mu_0\leq \mu_1\leq\dots\leq\mu_{\kappa-2}\leq\mu_{\kappa-1}< +\infty. \end{align} It should be recalled that $\hat{\cE}$ is the collection of complements of $\cE$, i.e.\ for every $F_j\in\hat{\cE}$ there exists $E_i\in\cE$ such that $F_j=E_i^c$. For technical reasons, we denote sets in the collection $\cE$ by $E_i$, $i\in\ozn{\kappa-1}$, and their complements in $\hat{\cE}$ by $F_j$, $j\in\ozn{\kappa-1}$. Throughout the paper, for ease of writing, we shall use the~shortcut notation $$\sA_{\inv{j}}:=\aA[F_j^c]{\mathbf{x}},\,\,\,\, \text{and}\,\,\,\, \mu_{(i)}:=\mu(E_i^c).$$ One can notice that the maps $E_*$ and $F_*$ need not be unique (they are unique just in case they are injective on~$\ozn{\kappa-1}$), but this has no influence on the presented results.
In the following example, we demonstrate only the construction of the bijections $E_*$ and $F_*$. We shall use the input data below also in other examples in this paper.
\begin{example}\label{graphical_representation}\rm Let us consider the collection $\cE=\{\emptyset, \{1\},\{2\}, \{3\},\{1,3\},\{1,2,3\}\}$, the family of conditional aggregation operators $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}:E\in\cE\}$, the vector $\mathbf{x}=(2,3,1)$, and the monotone measure $\mu$ on $\hat{\cE}$ with corresponding values in the table. \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \centering
\begin{tabular}{|c|o|o|o|o|o|o|} \hline $E$ & $\emptyset$ & $\{1\}$ & $\{2\}$ & $\{3\}$ & $\{1,3\}$ & $\{1,2,3\}$ \\ \hline $\aAi[E]{\mathbf{x}}{\mathrm{sum}}$ & $0$ & $2$ & $3$ & $1$ & $3$ & $6$\\\hline $F$ & $\{1,2,3\}$ & $\{2,3\}$ & $\{1,3\}$ & $\{1,2\}$ & $\{2\}$ & $\emptyset$ \\ \hline $\mu(F)$ & $1$ & $0.7$ & $0.5$ & $0.5$ & $0.5$ & $0$ \\ \hline \end{tabular} \end{table} \noindent Using maps $E_*$ and $F_*$ we obtain the following arrangement: \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \centering
\begin{tabular}{|n|o|o|o|o|o|o|} \hline $i,j$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline $E_i$ & $\emptyset$ & $\{3\}$ & $\{1\}$ & $\{1,3\}$ & $\{2\}$ & $\{1,2,3\}$ \\\hline $\sA_i$ & $0$ & $1$ & $2$ & $3$ & $3$ & $6$ \\\hline $F_j$ & $\emptyset$ & $\{2\}$ & $\{1,2\}$ & $\{1,3\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu_j$ & $0$ & $0.5$ & $0.5$ & $0.5$ & $0.7$ & $1$ \\ \hline
\end{tabular}
\end{table} \end{example}
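The two arrangements of Example~\ref{graphical_representation} can be reproduced with a short script. This is our own sketch, not part of the paper; helper names such as \texttt{A\_sum}, \texttt{E\_star} and \texttt{F\_star} are our assumptions.

```python
# Sketch (not from the paper): building the arrangements E_* and F_* for the
# example data -- collection E, input vector x, measure mu on complements.
x = {1: 2, 2: 3, 3: 1}
E_coll = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})]
full = frozenset({1, 2, 3})
# mu is given on the complements of the sets in E_coll (see the table)
mu = {full - E: v for E, v in zip(E_coll, [1, 0.7, 0.5, 0.5, 0.5, 0])}

def A_sum(E):
    """Conditional aggregation operator A^sum(x | E)."""
    return sum(x[i] for i in E)

# E_*: sets ordered so that A_0 <= A_1 <= ... <= A_{kappa-1}
E_star = sorted(E_coll, key=A_sum)
A_vals = [A_sum(E) for E in E_star]          # [0, 1, 2, 3, 3, 6]

# F_*: complements ordered so that mu_0 <= mu_1 <= ... <= mu_{kappa-1}
F_star = sorted((full - E for E in E_coll), key=lambda F: mu[F])
mu_vals = [mu[F] for F in F_star]            # [0, 0.5, 0.5, 0.5, 0.7, 1]
```

Note that ties (here $\sA_3=\sA_4=3$ and three measure values equal to $0.5$) make the arrangements non-unique, as remarked above; the stable sort just fixes one admissible choice.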
\par
The~first approach to computing generalized survival function values is based on the (ascending) arrangement~\eqref{Ei} of the conditional aggregation operator values. Let us note that this approach is derived directly from Definition~\ref{def: gsf}. In this approach, we determine the values the generalized survival function takes on the intervals $[\sA_i,\sA_{i+1})$ with the convention $\sA_\kappa=+\infty$.
\begin{theorem}\label{zjednodusenie_def} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$. Then \begin{enumerate}[{\rm(i)}] \item for any $i\in\ozn{\kappa-1}$ it holds that $$\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha} = \min_{k\leq i}\mu_{(k)}\,\,\ \textrm{for any}\,\,\alpha\in[\sA_i,\sA_{i+1});$$ \item \belowdisplayskip=0pt \abovedisplayskip=0pt\parbox{\linewidth}{ \begin{align}\label{vyjgsf1} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\sum_{i=0}^{\kappa-1} \min_{k\leq i}\mu_{(k)}\mathbf{1}_{[\sA_i,\sA_{i+1})}(\alpha)\,\,\ \textrm{for any}\,\,\alpha\in[0,+\infty).
\end{align}}
\end{enumerate} \end{theorem}
\noindent {\bf Proof. \ \ } (i) Let us consider an arbitrary (fixed) $i\in\ozn{\kappa-1}$ such that $\sA_i<\sA_{i+1}$, and let us take $\alpha\in[\sA_i,\sA_{i+1})$. Since $0=\sA_0\leq\sA_1\leq\dots\leq\sA_i\leq\alpha$, we obtain $$\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha} =\min\{\mu(E^c): \aA[E]{\mathbf{x}}\leq \alpha,\, E\in\mathcal{E}\}=\min\{\mu_{(k)}: k\in\{0,\dots,i\}\}=\min_{k\leq i}\mu_{(k)}.$$
\noindent The case (ii) immediately follows from (i).\null
$\Box\;\;$
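The computation in Theorem~\ref{zjednodusenie_def} can be sketched in a few lines of Python. This is our own illustration, not part of the paper; the running minimum implements $\min_{k\leq i}\mu_{(k)}$, and \texttt{gsf\_def} is the raw Definition~\ref{def: gsf}.

```python
from itertools import accumulate

# Sketch (our own): the generalized survival function computed directly from
# the definition and via Theorem (i) with running minima, on the example data.
x = {1: 2, 2: 3, 3: 1}
E_coll = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})]
full = frozenset({1, 2, 3})
mu = {full - E: v for E, v in zip(E_coll, [1, 0.7, 0.5, 0.5, 0.5, 0])}
A_sum = lambda E: sum(x[i] for i in E)

def gsf_def(alpha):
    """min{ mu(E^c) : A(x|E) <= alpha, E in the collection }."""
    return min(mu[full - E] for E in E_coll if A_sum(E) <= alpha)

E_star = sorted(E_coll, key=A_sum)
A_vals = [A_sum(E) for E in E_star]
# running minima min_{k<=i} mu_{(k)}, one value per interval [A_i, A_{i+1})
run_min = list(accumulate((mu[full - E] for E in E_star), min))

def gsf_thm(alpha):
    i = max(k for k, a in enumerate(A_vals) if a <= alpha)  # A_0 = 0 <= alpha
    return run_min[i]
```

Both functions agree on all of $[0,+\infty)$; the theorem version precomputes the minima once instead of scanning the whole collection for each $\alpha$.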
\begin{remark}
Let $(x_1,\dots,x_n)\in[0,+\infty)^{[n]}$. Let us denote $\mathbf{x}=(x_{\sigma(1)},x_{\sigma(2)},\dots,x_{\sigma(n)})$, with $\sigma\colon [n]\to[n]$ being a permutation such that $x_{\sigma(1)}\leq x_{\sigma(2)}\leq \dots\leq x_{\sigma(n)}$ with the convention $x_{\sigma(0)}=0$. Also let $\cA^\mathrm{max}$ be a FCA with the collection $\mathcal{E}=\{G_{\sigma{(i+1)}}^c:i\in\ozn{n}\}$ where $G_{\sigma{(i)}}=\{\sigma{(i)},\dots,\sigma{(n)}\}$ for $i\in[n]$ and $G_{\sigma(n+1)}=\emptyset$.
Then from Theorem~\ref{zjednodusenie_def} we get the standard formula of the survival function~\cite{HalcinovaHutnikKiselakSupina2019}:
$$\mu_{\bA}(\x,\alpha){\mathrm{max}}{\mathbf{x}}{\alpha}=\sum_{i=0}^{n-1} \mu{(G_{\sigma(i+1)})}\mathbf{1}_{[x_{\sigma{(i)}},x_{\sigma{(i+1)}})}(\alpha)\,\,\ \textrm{for any}\,\,\alpha\in[0,+\infty)\text{.}$$
Indeed, $\sA^\mathrm{max}(\mathbf{x}|G_{\sigma(i+1)}^c)=x_{\sigma(i)}$ for any $i\in\ozn{n}$. Moreover, if $x_{\sigma(i)}=\sA^\mathrm{max}_{l_i}$, $l_i\in[\kappa-1]_0$, then $\min_{k\leq l_i}\mu_{(k)}=\min_{k\leq i}\mu(G_{\sigma(k)})=\mu(G_{\sigma(i)})$ because of the monotonicity of $\mu$. \end{remark}
\begin{remark}\label{gsf=0}\rm Since $[n]\in\cE$, the definition of the generalized survival function guarantees that the zero value is attained.
According to the arrangement in~\eqref{Ei}, for any $\alpha\geq\sA_{\kappa-1}\geq \aA[{[n]}]{\mathbf{x}}$ it holds that $$\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\min\{\mu(E^c): \aA[E]{\mathbf{x}}\leq\alpha,\,E\in\mathcal{E}\}\leq\mu([n]^c)=\mu(\emptyset)=0\text{.}$$ On the other hand, $\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}\geq0$. Thus, the generalized survival function attains the zero value on the interval $[\aA[{[n]}]{\mathbf{x}},+\infty)\supseteq[\sA_{\kappa-1},+\infty)$.
\end{remark}
\begin{example}\label{priklad1}\rm Let us consider the same inputs as in Example~\ref{graphical_representation}. Using Theorem~\ref{zjednodusenie_def}(i) let us compute the generalized survival function for any $\alpha\in[0,+\infty)$.
\begin{itemize*}
\item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_{(5)}=0$ for any $\alpha\in[\sA_5,\sA_6)=[6,+\infty)$, which corresponds to Remark~\ref{gsf=0}.
\item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_{(4)}=0.5$ for any $\alpha\in[\sA_4,\sA_5)=[3,6)$.
\item $[\sA_3,\sA_4)=[3,3)=\emptyset$.
\item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_{(1)}=0.5$ for any $\alpha\in[\sA_2,\sA_3)=[2,3)$.
\item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_{(1)}=0.5$ for any $\alpha\in[\sA_1,\sA_2)=[1,2)$.
\item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_{(0)}=1$ for any $\alpha\in[\sA_0,\sA_1)=[0,1)$. \end{itemize*} \noindent So we get \begin{align*}
\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}&=\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,2)}(\alpha)+0.5\cdot\mathbf{1}_{[2,3)}(\alpha)+0.5\cdot\mathbf{1}_{[3,6)}(\alpha)\\&=\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,6)}(\alpha)\text{,} \end{align*} $\alpha\in[0,+\infty)$. \end{example}
The~second approach to computing generalized survival function values is based on the (ascending) arrangement~\eqref{Fi} of the measure values. Similarly to Remark~\ref{gsf=0}, let us first point out where the generalized survival function attains the zero value.
\begin{remark}\label{A1c}
From the arrangement of $\mu$, see~\eqref{Fi}, we have that
$\sA_{\inv{0}}\in\{\sA_{\inv{j}}:\mu_j=0,\,j\in\ozn{\kappa-1}\}$. Thus, for any $\alpha\in[\sA_{\inv{0}},+\infty)$ it holds that $\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\mu_0=0$. \end{remark}
Compared to the first approach presented in~Theorem~\ref{zjednodusenie_def}, in the second approach we look for the intervals on which the monotone measure values $\mu(F_j)$ (i.e.\ the generalized survival function values) are attained. It is worth noting that the first approach is appropriate when the number of aggregation operator values is much smaller than the number of measure values. In the opposite case, it is more efficient to use the second approach.
\begin{theorem}\label{gsf2} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$. Then \begin{enumerate}[{\rm(i)}] \item for any $j\in\ozn{\kappa-1}$ it holds that $$\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha} = \mu_j\,\,\ \textrm{for any}\,\,\alpha\in\Big[\min_{k\leq j}\sA_{\inv{k}},\min_{k<j}\sA_{\inv{k}}\Big);$$ \item \belowdisplayskip=0pt \abovedisplayskip=0pt\parbox{\linewidth}{ \begin{align}\label{vyjgsf2} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\sum_{j=0}^{\kappa-1}\mu_j\mathbf{1}_{\big[\min\limits_{k\leq j}\sA_{\inv{k}},\min\limits_{k<j}\sA_{\inv{k}}\big)}(\alpha)\,\,\ \textrm{for any}\,\,\alpha\in[0,+\infty). \end{align}} \end{enumerate} \end{theorem}
\noindent {\bf Proof. \ \ }
For $j=0$ we have $\Big[\min_{k\leq 0}\sA_{\inv{k}},\min_{k<0}\sA_{\inv{k}}\Big)=[\sA_{\inv{0}},+\infty)$. Then, from Remark~\ref{A1c} it is easy to see that for any $\alpha\in[\sA_{\inv{0}},+\infty)$ it holds that $\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\mu_0$. Let us take an arbitrary (fixed) $j\in[\kappa-1]$. The case when $\min_{k\leq j}\sA_{\inv{k}}=\min_{k<j}\sA_{\inv{k}}$ is trivial. Let us suppose that $\min_{k\leq j}\sA_{\inv{k}}\neq\min_{k<j}\sA_{\inv{k}}$. Then from the fact that $$\min\Big\{\min_{k<j}\sA_{\inv{k}},\sA_{\inv{j}}\Big\}=\min_{k\leq j}\sA_{\inv{k}}<\min_{k<j}\sA_{\inv{k}},$$ we have $\min_{k\leq j}\sA_{\inv{k}}=\sA_{\inv{j}}$. Further, $\min_{k<j}\sA_{\inv{k}}\leq\sA_{\inv{l}}$ for each $l\in[j-1]$. Then for any $\alpha$ such that $$\sA_{\inv{j}}=\min_{k\leq j}\sA_{\inv{k}}\leq\alpha<\min_{k<j}\sA_{\inv{k}}\leq\sA_{\inv{l}}$$ with $l\in[j-1]$ we have \begin{itemize}
\item $\sA_{\inv{j}}\in\{\aA[E]{\mathbf{x}}:\aA[E]{\mathbf{x}}\leq\alpha,\, E\in\mathcal{E}\}$, therefore $\mu_j\in\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha,\,E\in\mathcal{E}\}$. Thus, $$\min\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha,\,E\in\mathcal{E}\}\leq\mu_j.$$
\item $\sA_{\inv{l}}\notin\{\aA[E]{\mathbf{x}}:\aA[E]{\mathbf{x}}\leq\alpha,\, E\in\mathcal{E}\}$, therefore $\mu_{l}\notin\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha,\,E\in\mathcal{E}\}$, and from the ordering~\eqref{Fi} we have $\mu(E^c)\geq \mu_j$ for each $E\in\mathcal{E}$ such that $\aA[E]{\mathbf{x}}\leq\alpha$; therefore
$$\min\{\mu(E^c):\aA[E]{\mathbf{x}}\leq\alpha,\,E\in\mathcal{E}\}\geq\mu_j.$$
\end{itemize} The formula in~(ii) directly follows from~(i). \null
$\Box\;\;$
\begin{example}\label{priklad2}\rm Let us consider the same inputs as in Example~\ref{graphical_representation}. According to Remark~\ref{A1c}, $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=0$ for any $\alpha\in[\sA_{\inv{0}},+\infty)=[6,+\infty)$. \begin{itemize*} \item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_1=0.5$ for any $\alpha\in\big[\sA_{\inv{1}},\sA_{\inv{0}}\big)=[3,6)$. \item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_2=0.5$ for any $\alpha\in\big[\sA_{\inv{2}},\sA_{\inv{1}}\big)=[1,3)$. \item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_3=0.5$ for any $\alpha\in\big[\sA_{\inv{2}},\sA_{\inv{2}}\big)=\emptyset$. \item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_4=0.7$ for any $\alpha\in\big[\sA_{\inv{2}},\sA_{\inv{2}}\big)=\emptyset$. \item $\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=\mu_5=1$ for any $\alpha\in\big[\sA_{\inv{5}},\sA_{\inv{2}}\big)=[0,1)$. \end{itemize*} Therefore, the generalized survival function has the form \begin{align*}
\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}&=0.5\cdot\mathbf{1}_{[3,6)}(\alpha)+0.5\cdot\mathbf{1}_{[1,3)}(\alpha)+\mathbf{1}_{[0,1)}(\alpha)\\&=\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,6)}(\alpha)\text{,} \end{align*} $\alpha\in[0,+\infty)$, compare with Example~\ref{priklad1}. \end{example}
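The second approach of Theorem~\ref{gsf2} can be sketched on the same data. This is our own illustration; ties among equal measure values are resolved by the stable sort, which may order them differently from the table but does not affect the resulting function.

```python
# Sketch (our own): second approach -- scan the measure values in ascending
# order and record the interval on which each value mu_j is attained.
x = {1: 2, 2: 3, 3: 1}
E_coll = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})]
full = frozenset({1, 2, 3})
mu = {full - E: v for E, v in zip(E_coll, [1, 0.7, 0.5, 0.5, 0.5, 0])}
A_sum = lambda E: sum(x[i] for i in E)

F_star = sorted(mu, key=mu.get)           # arrangement (F_j): mu_0 <= mu_1 <= ...
pieces, prev = [], float('inf')           # prev = min_{k<j} A_<k>
for F in F_star:
    cur = min(prev, A_sum(full - F))      # cur = min_{k<=j} A_<k>
    if cur < prev:                        # nonempty interval [cur, prev)
        pieces.append((cur, prev, mu[F])) # gsf equals mu(F_j) on [cur, prev)
    prev = cur
# pieces: [(6, inf, 0), (3, 6, 0.5), (1, 3, 0.5), (0, 1, 1)]
```

The empty intervals for $\mu_3$ and $\mu_4$ are skipped automatically, so only the summands that actually contribute to~\eqref{vyjgsf2} are produced.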
The following expressions of generalized survival functions are w.r.t.\ special measures. Their~detailed proofs can be found in the Appendix.
\begin{corollary}\label{specimiery} Let $\mathbf{x}\in[0,+\infty)^{[n]}$. \begin{enumerate}[\rm (i)] \item Let $\cA$ be a FCA and $\bar{\mu}$ be the greatest monotone measure, i.e., $\bar{\mu}(F)=0$, if $F=\emptyset$ and $\bar{\mu}(F)=1$, otherwise. Then the generalized survival function w.r.t.\ $\bar{\mu}$ takes the form $$\bar{\mu}_{\cA}({\mathbf{x}},{\alpha})=\mathbf{1}_{[0,\aA[{[n]}]{\mathbf{x}})}(\alpha).$$ \item Let $\cA$ be a FCA and $\mmu$ be the weakest monotone measure, i.e., $\mmu(F)=1$, if $F=[n]$ and $\mmu(F)=0$, otherwise. Then the generalized survival function w.r.t.\ $\mmu$ takes the form $$\mmu_\cA({\mathbf{x}},{\alpha})=\mathbf{1}_{\big[0,\min\limits_{E\neq\emptyset}\aA[E]{\mathbf{x}}\big)}(\alpha).$$
\item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\mu$ be a symmetric measure, i.e., $\mu(F)=\mu(G)$ for $F,G\in 2^{[n]}$ such that $|F|=|G|$ (see e.g.~\cite{MirandaGrabischGil2002}). Let us set $\mu^i:=\mu(F)$, if $|F|=i$, $i\in[n]_0$. Then the generalized survival function w.r.t.\ $\mu$ takes the form $$\mu_\cA({\mathbf{x}},{\alpha})=\sum_{i=0}^n \mu^i\mathbf{1}_{\big[\min\limits_{|E|=n-i}\aA[E]{\mathbf{x}},\min\limits_{|E|=n-i+1}\aA[E]{\mathbf{x}}\big)}(\alpha).$$
\item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\Pi$ be a possibility measure given for any $F\subseteq[n]$ as $\Pi(F)=\max_{i\in F}\pi(i)$.
The function $\pi\colon [n] \to [0, 1]$, $\pi(i) =\Pi (\{i\})$, is called a possibility distribution (of $\Pi$), see e.g.~\cite{ZADEH19999}. Let $\sigma\colon[n]\to[n]$ be a permutation such that $0=\pi(\sigma(0))\leq\pi(\sigma(1))\leq\dots\leq\pi(\sigma(n))=1$ and $G_{\sigma(i)}=\{\sigma(i),\dots,\sigma(n)\}$ for $i\in\{1,\dots,n\}$ with $G_{\sigma(n+1)}=\emptyset$ and $\aA[G_{\sigma(0)}]{\mathbf{x}}=\infty$. Then the generalized survival function w.r.t.\ $\Pi$ takes the form $$\Pi_{\cA}(\mathbf{x},\alpha)=\sum_{i=0}^n\pi(\sigma(i))\mathbf{1}_{\big[\aA[G_{\sigma(i+1)}]{\mathbf{x}},\aA[G_{\sigma(i)}]{\mathbf{x}}\big)}(\alpha).$$
\item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\mathrm{N}$ be a necessity measure given for any $F\subseteq[n]$ as $\mathrm{N}(F)=1-\max_{i\notin F} \pi(i)$ with the convention that the maximum over the empty set is $0$. Let $\sigma\colon[n]\to[n]$ be a permutation given as in the previous case. Then the generalized survival function w.r.t.\ $\mathrm{N}$ takes the form $$N_{\cA}(\mathbf{x},\alpha)=\sum_{i=0}^n\big(1-\pi(\sigma(i))\big)\mathbf{1}_{\big[\min\limits_{k\geq i}\aA[\{\sigma(k)\} ]{\mathbf{x}},\min\limits_{k> i}\aA[\{\sigma(k)\}]{\mathbf{x}}\big)}(\alpha),$$ where we need the convention $\min\limits_{k\geq 0}\aA[\{\sigma(k)\} ]{\mathbf{x}}=0$.
\end{enumerate} \end{corollary}
Let us now compare the two approaches to calculating the generalized survival function. Comparing Example~\ref{priklad1} and Example~\ref{priklad2}, one can observe that the partition of the domain of the generalized survival function (the interval $[0,\infty)$) in Example~\ref{priklad1} is a refinement of the partition in Example~\ref{priklad2}. We show that this observation holds in general. As a consequence, the number of nonzero summands in expression~\eqref{vyjgsf2} does not exceed that in expression~\eqref{vyjgsf1}. The proof can be found in the Appendix.
\begin{proposition}\label{porovnanie} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$. For each $i\in\ozn{\kappa-1}$ there exists $j$ such that $[\sA_i,\sA_{i+1})\subseteq [\min_{k\leq j}\sA_{\inv{k}}, \min_{k< j}\sA_{\inv{k}})$. \end{proposition}
\section{Permutations and visualization}\label{druhy_sposob}
Both formulas in Section~\ref{prvy_sposob}, presented in Theorems~\ref{zjednodusenie_def} and~\ref{gsf2},
require minimizing the~values of either the monotone measure or the aggregation operator. The~main aim of the~present section is to show that the computation of the generalized survival function (and the~minimization process) may be accomplished purely on a set of integer indices of the~values of the monotone measure and the aggregation operator.
Moreover, the~whole procedure may be visualized, so the~computation of the~generalized survival function becomes easily accessible. The~main tools are several functions on a~set of indices~$\ozn{\kappa-1}$.
\subsection{Computation via indices}\label{subsec: indices} Let us introduce the main tool that we shall work with in this subsection. In accordance with~\eqref{EiAi},~\eqref{Fi} one can see that
each set from the~collection~$\cE$ appears once in the~basic enumeration $E_*\colon \ozn{\kappa-1}\to\cE$, and its complement appears in the~basic enumeration $F_*\colon \ozn{\kappa-1}\to\hat{\cE}$. More precisely, if $V\in\cE$ then there are unique indices $i,j\in\ozn{\kappa-1}$ such that \begin{center} $V=E_i$ and $V^c=F_j$. \end{center} By going through all the sets from~$\cE$ we can define a permutation~$(\cdot)\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$ which describes the connection between these two indices. More precisely,
the~basic permutation, essential for this subsection, $(\cdot)\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$ is defined by \begin{center} $(i)=j$ whenever $E_i=F_j^c$, \end{center} i.e., $F_{(i)}=E_i^c$. Thus, the~permutation $(\cdot)$ is in accordance with the~previously adopted notation since $\mu_{(i)}=\mu(F_{(i)})=\mu(E_i^c)$. Let us illustrate the computation of the~permutation $(\cdot)$.
\begin{example}
Let us consider the same input data as in Example~\ref{graphical_representation}. Then the index space is $\ozn{5}$.
Thus $(\cdot)$ is a~map from~$\ozn{5}$ to $\ozn{5}$. To calculate the~value $(0)$ we need the~set $E_0=\emptyset$. Its complement is $\{1,2,3\}$, which has index $5$ in the~enumeration~$F_*$, i.e., $F_5=\{1,2,3\}$. Thus $(0)=5$. Similarly, to compute~$(1)$, we see that $E_1=\{3\}$ and $E_1^c=\{1,2\}=F_2$. Thus $(1)=2$. Continuing in a~similar fashion, we obtain all the~values, \begin{center} $(2)=4$, $(3)=1$, $(4)=3$, $(5)=0$. \end{center} \end{example}
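The permutation $(\cdot)$ and its inverse $\inv{\cdot}$ can be computed mechanically. The sketch below is our own (helper names are assumptions); with the ties present in this example, Python's stable sort happens to reproduce exactly the values computed above.

```python
# Sketch (our own): computing the permutation (.) and its inverse <.> from
# the two arrangements of the example data.
x = {1: 2, 2: 3, 3: 1}
E_coll = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
          frozenset({1, 3}), frozenset({1, 2, 3})]
full = frozenset({1, 2, 3})
mu = {full - E: v for E, v in zip(E_coll, [1, 0.7, 0.5, 0.5, 0.5, 0])}
A_sum = lambda E: sum(x[i] for i in E)

E_star = sorted(E_coll, key=A_sum)        # arrangement E_*
F_star = sorted(mu, key=mu.get)           # arrangement F_*

# (i) = j  whenever  E_i = F_j^c, i.e. F_j = E_i^c
perm = [F_star.index(full - E) for E in E_star]   # [5, 2, 4, 1, 3, 0]
# <j> = i  whenever  E_i^c = F_j, i.e. <.> is the inverse permutation
inv = [perm.index(j) for j in range(len(perm))]   # [5, 3, 1, 4, 2, 0]
```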
The arrangement from the previous example may be graphically represented by a~diagram or by graphs, see Figure~\ref{obr1}. Each of these representations has certain advantages. In the diagram, the~domain of~$(\cdot)$ is the~lower axis, while the~codomain is the~upper one. Thus the~indices of the~aggregation operator values lie on the~lower axis, and the~indices on the~upper axis correspond to the~values of the~monotone measure. From a practical point of view, the axes are drawn in reverse order. To describe all visualizations in detail, let us point out that the index assignment process can also be viewed reciprocally, i.e., let us define $\inv{\cdot}\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$ \begin{center} $\inv{j}=i$ whenever $E_i^c=F_j$, \end{center} i.e., $E_{\inv{j}}=F_{j}^c$. It is easy to check that $\inv{\cdot}=(\cdot)^{-1}$.
\begin{figure}
\caption{Diagram of $(\cdot)$, resp.\ $\inv{\cdot}=(\cdot)^{-1}$}
\label{obr1a}
\caption{Graph of $(\cdot)$}
\label{ge_sur_func}
\caption{Graph of $\inv{\cdot}=(\cdot)^{-1}$}
\label{ge_sur_func_2}
\caption{Diagram and graphs of functions~$(\cdot)$, $\inv{\cdot}$ for data from Example~\ref{graphical_representation}}
\label{obr1}
\end{figure}
Once comparing Definition~\ref{def: gsf} of the~generalized survival function with the~map~$(\cdot)$, one can see that the~assignment~$(\cdot)$ is a~part of the~formula defining the~generalized survival function. More precisely, we have \[ \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\min\left\{\mu_{(i)}: \sA_i\leq\alpha,\, i\in\ozn{\kappa-1}\right\},\quad\alpha\in[0,+\infty). \] The~whole computation in the~previous formula may then be visualized via Figure~\ref{obr1a}. Indeed, once we need to compute~$\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}$ for a given $\alpha$, we find the~largest index~$i$ such that $\sA_i\leq\alpha$. Afterwards we consider all the~indices on the~upper axis in Figure~\ref{obr1a} adjacent to the~indices greater than or equal to~$i$ (the~right-hand side indices with respect to~$i$ on the lower axis). Finally, minimizing the~values of the monotone measure of the sets with the~selected indices on the~upper axis leads to the~value $\mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}$. However, the~application of the~assignment~$(\cdot)$ goes far beyond these observations.
To understand the~real contribution of~$(\cdot)$, we need to analyse the crossing-overs in Figure~\ref{obr1a}. Crossing-overs can occur in only two cases:\begin{itemize}\item if $\textstyle\min_{k\leq i}\mu_{(k)}<\mu_{(i)}$, then the value $\mu_{(i)}$ is not attained by the generalized survival function because of Theorem~\ref{zjednodusenie_def}; \item if $\mu_{(k)}=\mu_{(l)}$, $k,l\in\ozn{\kappa-1}$, where $k<l$ and $(k)<(l)$, which corresponds to the ambiguity of the arrangement~\eqref{Ei}, or~\eqref{Fi}. \end{itemize}
These connections can be removed or redefined in an appropriate manner without changing the formula of the generalized survival function. Thus let us redefine the mapping~$(\cdot)$ in a manner that will be beneficial for us. Let us define a~map $\mathbf{i}\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$ by \begin{align}\label{ii} \mathbf{i}(i)=\min\{(0),\dots,(i)\}. \end{align}
This mapping will shorten many of the subsequent expressions and calculations. It will be useful in Section~\ref{Choquet} in deriving the generalized Choquet integral formulas.
\begin{figure}
\caption{Diagram of the map~$\mathbf{i}$ for data from Example~\ref{graphical_representation}}
\label{obr2}
\end{figure} Figure~\ref{obr2} contains a~diagram describing the function~$\mathbf{i}$ computed for the data from Example~\ref{graphical_representation}. Although one can use formula~\eqref{ii} directly and compute~$\mathbf{i}$ algebraically, it can also be easily obtained from the~diagram in Figure~\ref{obr1a}. Indeed, since $(4)=3$ and $(3)=1$, the~corresponding edges in Figure~\ref{obr1a} cross over. It is enough to eliminate this crossing-over by redefining $\mathbf{i}(4)=1$. Similarly for the crossing-over corresponding to the values $(1)$ and $(2)$; after its elimination we obtain~$\mathbf{i}$.
\begin{lema}\label{vl_i} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, and $\mathbf{i}$ be a map given in~(\ref{ii}). Then for any $i\in\ozn{\kappa-1}$ it holds $$\min_{k\leq i} \mu_{(k)}=\mu_{\mathbf{i}(i)}\text{.}$$ \end{lema}
\noindent {\bf Proof. \ \ } By the definition of $(\cdot)$ and the arrangement~\eqref{Fi} we get the equalities $$\min_{k\leq i} \mu_{(k)}=\min\{\mu_l:l=(k),\, k\leq i\}=\mu_{\min\{(0),\dots,(i)\}}=\mu_{\mathbf{i}(i)}\text{,}$$
which is the required result. \null
$\Box\;\;$
Via the~assignment~$\mathbf{i}$, the~generalized survival function can be expressed directly, without any minimization; compare the following corollary with Theorem~\ref{zjednodusenie_def}.
\begin{corollary}\label{application_i} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, and $\mathbf{i}$ be a map given in~(\ref{ii}). Then \begin{align}\label{restated_formula} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\sum_{i=0}^{\kappa-1} \mu_{\mathbf{i}(i)}\mathbf{1}_{[\sA_i,\sA_{i+1})}(\alpha) \end{align} for any $\alpha\in[0,+\infty)$. \end{corollary}
\noindent {\bf Proof. \ \ } The required result follows from Lemma~\ref{vl_i} and Theorem~\ref{zjednodusenie_def}.
\null
$\Box\;\;$
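For the data of Example~\ref{graphical_representation}, the map $\mathbf{i}$ and formula~\eqref{restated_formula} can be checked with a short sketch. This is our own illustration; the arrays below simply copy the tables of that example, and $\mathbf{i}$ is a running minimum of the permutation.

```python
from itertools import accumulate

# Sketch (our own): i(i) = min{(0), ..., (i)} is a running minimum of the
# permutation (.), and the generalized survival function reads off directly
# as mu_{i(i)} on each interval [A_i, A_{i+1}).
perm = [5, 2, 4, 1, 3, 0]              # permutation (.) of the example data
i_map = list(accumulate(perm, min))    # [5, 2, 2, 1, 1, 0]

mu_vals = [0, 0.5, 0.5, 0.5, 0.7, 1]   # mu_0 <= ... <= mu_5
A_vals = [0, 1, 2, 3, 3, 6]            # A_0 <= ... <= A_5
plateau = [mu_vals[k] for k in i_map]  # gsf value on each [A_i, A_{i+1})
# plateau == [1, 0.5, 0.5, 0.5, 0.5, 0]: 1 on [0,1), 0.5 on [1,6), 0 on [6,inf)
```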
The~difference between Theorem~\ref{zjednodusenie_def} and Theorem~\ref{gsf2} lies in the~object being minimized: in Theorem~\ref{zjednodusenie_def} the~values of the monotone measure are minimized, while in Theorem~\ref{gsf2} the~values of the aggregation operator are minimized. Thus Corollary~\ref{application_i} is the~counterpart of Theorem~\ref{zjednodusenie_def}, and naturally the need to introduce a counterpart of Theorem~\ref{gsf2} arises. Similarly as the formula for the generalized survival function can be rewritten via the permutation $(\cdot)$, it can also be rewritten via the permutation $\inv{\cdot}$ as follows \[ \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\min\left\{\mu_j: \sA_\inv{j}\leq\alpha,\, j\in\ozn{\kappa-1}\right\},\quad\alpha\in[0,+\infty). \]
Let us recall that $\inv{\cdot}=(\cdot)^{-1}\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$, i.e., $\inv{(i)}=i,(\inv{j})=j$, and $E_\inv{j}=F_j^c$, i.e., $F_j=E_\inv{j}^c$. The~permutation $\inv{\cdot}$ is in accordance with the~previously adopted notation since $\sA_\inv{j}=\aA[E_\inv{j}]{\mathbf{x}}=\aA[F_j^c]{\mathbf{x}}$. Let us define a~map $\mathbf{j}\colon\ozn{\kappa-1}\to\ozn{\kappa-1}$ as follows \begin{align}\label{jj} \mathbf{j}(j)=\min\{\inv{0},\dots,\inv{j}\}. \end{align}
\begin{remark}\label{nerastucost_i_j} It is easy to see that for any $k\in\ozn{\kappa-1}$ it holds $\mathbf{i}(k)\leq(k)$ and $\mathbf{j}({k})\leq\inv{{k}}$.
Moreover, the maps $\mathbf{i},\mathbf{j}$ are nonincreasing. \end{remark}
As in the case of the map $\mathbf{i}$, many expressions and calculations can be shortened with the map $\mathbf{j}$. \begin{figure}
\caption{Diagram of the map~$\mathbf{j}$ for data from Example~\ref{graphical_representation}.}
\label{obr3}
\end{figure}
The~corresponding diagram of~$\mathbf{j}$ based on the data from Example~\ref{graphical_representation} is depicted in Figure~\ref{obr3}. Note that, similarly to the~diagram in Figure~\ref{obr2}, it does not contain any crossing-over, and it may be analogously created either directly using formula~\eqref{jj} or by eliminating the crossing-overs in the~diagram in Figure~\ref{obr1}.
\begin{lema}\label{vl_j} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, and $\mathbf{j}$ be a map given as in~(\ref{jj}). Then for any $j\in\ozn{\kappa-1}$ it holds $$\min\limits_{k\leq j}\sA_{\inv{k}}=\sA_{\mathbf{j}(j)}\text{.}$$ \end{lema}
\noindent {\bf Proof. \ \ } By the definition of $\inv{\cdot}$ and the arrangement~\eqref{Ei} we immediately have $$\min\limits_{k\leq j}\sA_\inv{k}=\min\{\sA_l: l=\inv{k},\,k\leq j\} =\sA_{\min\{\inv{0},\dots,\inv{j}\}} =\sA_{\mathbf{j}(j)}\text{,}$$
which is the required result. \null
$\Box\;\;$
From formula~\eqref{vyjgsf2} we get a new expression of the generalized survival function in the following form.
\begin{corollary}\label{application_j} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, and $\mathbf{j}$ be a map given as in~(\ref{jj}). Then \begin{align}\label{restated_formula_2} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\sum_{j=0}^{\kappa-1}\mu_j\mathbf{1}_{\big[\sA_{\mathbf{j}(j)},\sA_{\mathbf{j}(j-1)}\big)}(\alpha) \end{align} for any $\alpha\in[0,+\infty)$ with the convention $\sA_{\mathbf{j}(-1)}=+\infty$. \end{corollary}
\noindent {\bf Proof. \ \ } The required result follows from Lemma~\ref{vl_j} and Theorem~\ref{gsf2}.
\null
$\Box\;\;$
Let us conclude this subsection with the~observation that if $(\cdot)$ is decreasing, then formulas~\eqref{vyjgsf1},~\eqref{vyjgsf2} and~\eqref{restated_formula},~\eqref{restated_formula_2} simplify.
\begin{corollary}\label{dosledok_formuly_bez_i_a_j}
Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$. If the mapping $(\cdot)$ is decreasing, then \begin{align*} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}&=\sum_{i=0}^{\kappa-1}\mu_{\kappa-1-i}\cdot\mathbf{1}_{\big[\sA_i,\sA_{i+1}\big)}(\alpha)=\sum_{j=0}^{\kappa-1}\mu_j\mathbf{1}_{\big[\sA_{\kappa-1-j},\sA_{\kappa-j}\big)}(\alpha) \end{align*} for any $\alpha\in[0,+\infty)$. \end{corollary} \begin{proof} If the map $(\cdot)$ is decreasing, then $\inv{\cdot}$ is also decreasing and it holds $(\cdot)=\mathbf{i}$, $\inv{\cdot}=\mathbf{j}$. Then expressions~\eqref{restated_formula},~\eqref{restated_formula_2} are simplified \begin{align*} \mu_{\bA}(\x,\alpha){}{\mathbf{x}}{\alpha}=\sum_{i=0}^{\kappa-1} \mu_{(i)}\mathbf{1}_{[\sA_i,\sA_{i+1})}(\alpha)=\sum_{j=0}^{\kappa-1}\mu_j\mathbf{1}_{\big[\sA_{\inv{j}},\sA_{\inv{j-1}}\big)}(\alpha). \end{align*} Further, the map $(\cdot)$ is decreasing if and only if for any $i\in\ozn{\kappa-1}$ it holds $(i)=\kappa-1-i$, or equivalently for any $j\in\ozn{\kappa-1}$ it holds $\inv{j}=\kappa-1-j$, thus $(\cdot)=\inv{\cdot}$. This completes the proof. \null
$\Box\;\;$
\end{proof}
\begin{example}\label{example_bez_i_a_j} Let us consider the collection $\cE=\{\emptyset, \{1\},\{3\},\{1,2\},\{1,2,3\}\}$, the family of conditional aggregation operators $\cA^{\mathrm{sum}}=\{\aAi[E]{\cdot}{\mathrm{sum}}:E\in\cE\}$, the vector $\mathbf{x}=(2,3,4)$, and the monotone measure $\mu$ on $\hat{\cE}$ with values $\mu(\emptyset)=0$, $\mu(\{3\})=0.3$, $\mu(\{1,2\})=0.5$, $\mu(\{2,3\})=0.8$, $\mu(\{1,2,3\})=1$. \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \centering
\begin{tabular}{|n|o|o|o|o|o|} \hline $i,j$ & 0 & 1 & 2 & 3 & 4 \\ \hline $E_i$ & $\emptyset$ & $\{1\}$ & $\{3\}$ & $\{1,2\}$ & $\{1,2,3\}$ \\\hline $\sA_i$ & $0$ & $2$ & $4$ & $5$ & $9$ \\\hline $F_j$ & $\emptyset$ & $\{3\}$ & $\{1,2\}$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu_j$ & $0$ & $0.3$ & $0.5$ & $0.8$ & $1$ \\ \hline
\end{tabular} \end{table} \noindent The diagram of functions $(\cdot)$ and $\inv{\cdot}$ can be seen in the Figure~\ref{obr_diagram_bez_i_j}. \begin{figure}
\caption{Diagram of $(\cdot)$ and $\inv{\cdot}$ from Example~\ref{example_bez_i_a_j}}
\label{obr_diagram_bez_i_j}
\end{figure}
\noindent It is easy to verify that the assumptions of Corollary~\ref{dosledok_formuly_bez_i_a_j} are satisfied. According to this result, the generalized survival function has the form $$\mu_{\bA}(\x,\alpha){\mathrm{sum}}{\mathbf{x}}{\alpha}=1\cdot\mathbf{1}_{[0,2)}(\alpha)+0.8\cdot\mathbf{1}_{[2,4)}(\alpha)+0.5\cdot\mathbf{1}_{[4,5)}(\alpha)+0.3\cdot\mathbf{1}_{[5,9)}(\alpha)$$ for any $\alpha\in[0,+\infty)$. \end{example}
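The decreasing-permutation situation of Corollary~\ref{dosledok_formuly_bez_i_a_j} can be verified numerically for the data of Example~\ref{example_bez_i_a_j}. This is a hypothetical sketch of ours; helper names are not from the paper.

```python
# Sketch (our own): for this data the permutation (.) is decreasing, so the
# gsf jumps exactly at the A_i's, with values mu_{kappa-1-i}.
x = {1: 2, 2: 3, 3: 4}
E_coll = [frozenset(), frozenset({1}), frozenset({3}),
          frozenset({1, 2}), frozenset({1, 2, 3})]
full = frozenset({1, 2, 3})
mu = {frozenset(): 0, frozenset({3}): 0.3, frozenset({1, 2}): 0.5,
      frozenset({2, 3}): 0.8, frozenset({1, 2, 3}): 1}
A_sum = lambda E: sum(x[i] for i in E)

E_star = sorted(E_coll, key=A_sum)   # already ordered: A = 0, 2, 4, 5, 9
F_star = sorted(mu, key=mu.get)
perm = [F_star.index(full - E) for E in E_star]
assert perm == [4, 3, 2, 1, 0]       # the permutation (.) is decreasing

def gsf(alpha):
    """Generalized survival function directly from the definition."""
    return min(mu[full - E] for E in E_coll if A_sum(E) <= alpha)
# gsf: 1 on [0,2), 0.8 on [2,4), 0.5 on [4,5), 0.3 on [5,9), 0 on [9,inf)
```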
From the previous example one can see that the assumption of Corollary~\ref{dosledok_formuly_bez_i_a_j} can be interpreted graphically as the absence of crossing-overs in the diagrams, see Figure~\ref{obr_diagram_bez_i_j}.
In the following, we apply the~obtained formulas to the~solution of the~Knapsack problem described in~Section~\ref{sec: motivation}.
\paragraph{Solution of the Knapsack problem} Let us assume that a total of $800$\,ml of liquids is already packed in the knapsack.
Thus, it is still possible to pack $200$\,ml of liquids. Let us choose from the products listed in Table~\ref{batozina}. The table lists, besides the volume of each product, also its price. \begin{table}[H]
\centering
\begin{tabular}{c|c|c|c|c}
product & $a$ & $b$ & $c$ & $d$ \\\hline
volume & $80$ & $75$ & $55$ & $65$ \\
price & $1.2$ & $1$ & $0.6$ & $0.8$
\end{tabular}
\caption{List of products}
\label{batozina} \end{table} \noindent As we pointed out in Section~\ref{sec: motivation}, our goal is to minimize the purchase costs for products that we shall need to buy at the holiday destination (we suppose prices are higher there). Moreover, we have to keep to the limit of $200$\,ml of liquids we can carry, i.e., this is the limit for the total volume of the products we shall buy at home. We shall solve the optimization problem $$\min\{\mu(E^c): \aAi[E]{\mathbf{x}}{\text{sum}}\leq200,\,E\in\mathcal{E}\}\text{,}$$ with $\mathbf{x}=(80,75,55,65)$, the collection $\mathcal{E}$ consisting of all possible combinations of the elements $a,b,c,d$, and the monotone measure $\mu$ representing the price of products. The values of $\mu$ together with the values of the conditional aggregation operator $\sA^{\mathrm{sum}}$ can be seen in Table~\ref{tabulka_batozina}. \begin{table}[H] \renewcommand*{\arraystretch}{1.2}
\centering
\small
\begin{tabular}{c|c|c|c|c|ccc|c|c|c|c|c}
$\mu_j$ & $\mu(E^c)$ & $E^c$ & $E$ & $\sA^\text{sum}$ & $\sA_i$ &\hspace*{0.3cm}& $\mu_j$ & $\mu(E^c)$ & $E^c$ & $E$ & $\sA^\text{sum}$ & $\sA_i$\\\cline{1-6}\cline{8-13}
$\mu_{15}$ & $3.6$ & $\{a,b,c,d\}$ & $\emptyset$ & $0$ & $\sA_0$ && $\mu_9$ & $1.8$ & $\{a,d\}$ & $\{b,c\}$ & $130$ & $\sA_6$\\
$\mu_{11}$ & $2.4$ & $\{b,c,d\}$ & $\{a\}$ & $80$ & $\sA_4$ && $\mu_7$ & $1.8$ & $\{a,c\}$ & $\{b,d\}$ & $140$ & $\sA_8$\\
$\mu_{13}$ & $2.5$ & $\{a,c,d\}$ & $\{b\}$ & $75$ & $\sA_3$ && $\mu_{10}$ & $2.2$ & $\{a,b\}$ & $\{c,d\}$ & $120$ & $\sA_5$\\
$\mu_{14}$ & $3.0$ & $\{a,b,d\}$ & $\{c\}$ & $55$ & $\sA_1$ && $\mu_2$ & $0.8$ & $\{d\}$ & $\{a,b,c\}$ & $210$ & $\sA_{13}$\\
$\mu_{12}$ & $2.5$ & $\{a,b,c\}$ & $\{d\}$ & $65$ & $\sA_2$ && $\mu_1$ & $0.6$ & $\{c\}$ & $\{a,b,d\}$ & $220$ & $\sA_{14}$\\
$\mu_5$ & $1.4$ & $\{c,d\}$ & $\{a,b\}$ & $155$ & $\sA_{10}$ && $\mu_3$ & $1.0$ & $\{b\}$ & $\{a,c,d\}$ & $200$ & $\sA_{12}$\\
$\mu_8$ & $1.8$ & $\{b,d\}$ & $\{a,c\}$ & $135$ & $\sA_7$ && $\mu_4$ & $1.2$ & $\{a\}$ & $\{b,c,d\}$ & $195$ & $\sA_{11}$\\
$\mu_6$ & $1.4$ & $\{b,c\}$ & $\{a,d\}$ & $145$ & $\sA_9$ && $\mu_0$ & $0$ & $\emptyset$ & $\{a,b,c,d\}$ & $275$ & $\sA_{15}$\\
\end{tabular}
\caption{Values of the conditional aggregation operator and corresponding monotone measure}
\label{tabulka_batozina} \end{table} We can notice that in Table~\ref{tabulka_batozina} there are $15$ different values of the conditional aggregation operator and $12$ different values of the (nonadditive) monotone measure. To solve the knapsack problem we have to find the value of the generalized survival function at point $200$. This can be easily done using formula~\eqref{vyjgsf1} or~\eqref{restated_formula}. The value of $200$ lies in the interval $[\sA_{12},\sA_{13})$, where the generalized survival function takes the value
$$\min_{k\leq 12}\mu_{(k)}=\mu_{\mathbf{i}(12)}=\mu_3=1\text{.}$$ We get the same result when we determine the whole formula of the generalized survival function using formula~\eqref{skrateniegsf2f}: \begin{align*}
\mu^{\mathrm{sum}}_{\bA}(\mathbf{x},\alpha)&=3.6\cdot\mathbf{1}_{[0;55)}(\alpha)+3\cdot\mathbf{1}_{[55;65)}(\alpha)+2.5\cdot\mathbf{1}_{[65;80)}(\alpha)+2.4\cdot\mathbf{1}_{[80;120)}(\alpha)\\
&+2.2\cdot\mathbf{1}_{[120;130)}(\alpha)+1.8\cdot\mathbf{1}_{[130;145)}(\alpha)+1.4\cdot\mathbf{1}_{[145;195)}(\alpha)+1.2\cdot\mathbf{1}_{[195;200)}(\alpha)\\
&+\mathbf{1}_{[200;210)}(\alpha)+0.8\cdot\mathbf{1}_{[210;220)}(\alpha)+0.6\cdot\mathbf{1}_{[220;275)}(\alpha)\text{.} \end{align*}
\noindent As we can see, the result coincides with the result from the previous calculation. So, a traveler should buy products $a,c,d$ at home and product $b$ at the destination.
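The brute-force computation above can be sketched in a few lines of Python. This is only an illustration: the function names (\verb|agg_sum|, \verb|gsf|) are ours, the set-function values are copied from Table~\ref{tabulka_batozina}, and \verb|gsf| implements the defining minimum directly.

```python
from itertools import combinations

# Volumes of products a, b, c, d and the price measure mu from the example;
# mu is stored as a set function on all subsets of {a, b, c, d}.
x = {"a": 80, "b": 75, "c": 55, "d": 65}
mu = {frozenset(): 0, frozenset("a"): 1.2, frozenset("b"): 1.0,
      frozenset("c"): 0.6, frozenset("d"): 0.8, frozenset("ab"): 2.2,
      frozenset("ac"): 1.8, frozenset("ad"): 1.8, frozenset("bc"): 1.4,
      frozenset("bd"): 1.8, frozenset("cd"): 1.4, frozenset("abc"): 2.5,
      frozenset("abd"): 3.0, frozenset("acd"): 2.5, frozenset("bcd"): 2.4,
      frozenset("abcd"): 3.6}

universe = frozenset("abcd")
subsets = [frozenset(c) for r in range(5) for c in combinations("abcd", r)]

def agg_sum(E):
    # conditional aggregation operator A^sum(x|E): total volume of E
    return sum(x[i] for i in E)

def gsf(alpha):
    # generalized survival function: minimum of mu(E^c) over feasible sets E
    return min(mu[universe - E] for E in subsets if agg_sum(E) <= alpha)

best = min((E for E in subsets if agg_sum(E) <= 200),
           key=lambda E: mu[universe - E])
print(gsf(200), sorted(best))   # 1.0 ['a', 'c', 'd']
```

The optimizer recovers the conclusion of the example: buy $a,c,d$ at home at residual cost $\mu(\{b\})=1$.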
\subsection{How to draw a~graph of a~generalized survival function}
In the previous section, we presented visualizations of the maps $(\cdot)$ and $\inv{\cdot}$ using both a diagram and a graph in the~Cartesian coordinate system, see the illustrative Figure~\ref{obr1} based on inputs from~Example~\ref{graphical_representation}.
We have already pointed out the advantages of the first visualization, and in the following we deal with the second one.
In fact, we shall show how the~graph of $(\cdot)$ or $\inv{\cdot}$ can be transformed to the~plot of the~generalized survival function.
The~first step is to transform the~graphs of $(\cdot)$ or $\inv{\cdot}$, see Figure~\ref{ge_sur_func}\subref{ge_sur_func_2}, into the~graphs of the~mappings $\mathbf{i}$ or $\mathbf{j}$, see Figure~\ref{image_i(i)_and_j(j)}, decreasing some of the~values of $(\cdot)$ and $\inv{\cdot}$ to produce nonincreasing functions $\mathbf{i}$ and $\mathbf{j}$, respectively. More precisely, the~formulas~\eqref{ii} and~\eqref{jj} are used to compute $\mathbf{i}$ and $\mathbf{j}$ from $(\cdot)$ and $\inv{\cdot}$, respectively.
\begin{figure}
\caption{Graphs of $\mathbf{i}$ and $\mathbf{j}$.}
\label{image_i(i)_and_j(j)}
\end{figure}
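In code, the construction of $\mathbf{i}$ and $\mathbf{j}$ from $(\cdot)$ and $\inv{\cdot}$ is just a running minimum, as formulas~\eqref{ii} and~\eqref{jj} prescribe. The sketch below uses our own function names and hard-codes the permutation $(\cdot)$ read off from Table~\ref{tabulka_batozina} of the travel example.

```python
def running_min(perm):
    # i(k) = min{(0), ..., (k)}: lower the values just enough to make the
    # sequence nonincreasing (formula (ii)); applied to the inverse
    # permutation, the same construction yields j (formula (jj)).
    out, cur = [], float("inf")
    for v in perm:
        cur = min(cur, v)
        out.append(cur)
    return out

def inverse(perm):
    # inverse permutation: inv[(i)] = i
    inv = [0] * len(perm)
    for k, v in enumerate(perm):
        inv[v] = k
    return inv

# The map (.) from the travel example: position i carries the mu-index (i).
dot = [15, 14, 12, 13, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
i_map = running_min(dot)
j_map = running_min(inverse(dot))
print(i_map)   # [15, 14, 12, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
print(j_map)   # [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 2, 2, 1, 0]
```

Both outputs are nonincreasing, in agreement with the description above: only the values that break monotonicity are decreased.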
The~second step is to extend the~domain of~$\mathbf{i}$ from $\ozn{\kappa-1}$ to $[0,+\infty)$, and obtain the~graph of the~\textit{indexed generalized survival function} $\mu^{I}_{\bA}(\mathbf{x},\beta)$. It is enough to naturally define $\mu^{I}_{\bA}(\mathbf{x},\beta)=\mathbf{i}(\lfloor\beta\rfloor)$ for $\beta<\kappa-1$ and $\mu^{I}_{\bA}(\mathbf{x},\beta)=\mathbf{i}(\kappa-1)$ otherwise, as is done in~Figure~\ref{indexed_survival_function}(a). There is a~similar way to obtain the~indexed generalized survival function from the~graph of~$\mathbf{j}$, depicted in~Figure~\ref{indexed_survival_function}(b), which we describe later.
\begin{figure}
\caption{Indexed generalized survival function (using $\mathbf{i}$)}
\label{ind_gen_sur_func}
\caption{Indexed generalized survival function (using $\mathbf{j}$)}
\label{ind_gen_sur_func_2}
\caption{Indexed generalized survival functions deriving by using maps $\mathbf{i}$ and $\mathbf{j}$}
\label{indexed_survival_function}
\end{figure}
Before we proceed to the~last step, let us stress that the graph of~$\mu^{I}_{\bA}(\mathbf{x},\beta)$ in Figure~\ref{indexed_survival_function} is very close to the graph of the~real generalized survival function $\mu_{\bA}(\mathbf{x},\alpha)$ depicted in Figure~\ref{priradenie}. Vaguely speaking, the~only difference is that the~graph of~$\mu_{\bA}(\mathbf{x},\alpha)$ has the values $\sA_0,\sA_1,\dots,\sA_{\kappa-1}$ on the~horizontal axis instead of the values $0,1,\dots,\kappa-1$, and the values $\mu_0,\mu_1,\dots,\mu_{\kappa-1}$ on the~vertical axis, again instead of the values $0,1,\dots,\kappa-1$. Moreover, some values $\sA_i,\sA_j$ and $\mu_i,\mu_j$ may coincide even for different~$i,j$.
\begin{figure}
\caption{Generalized survival function}
\label{priradenie}
\end{figure}
More formally, the~similarities between $\mu^{I}_{\bA}(\mathbf{x},\beta)$ and $\mu_{\bA}(\mathbf{x},\alpha)$ may be understood once we represent the~latter via the~formula~\eqref{restated_formula}, and the~former in a~similar way, namely \begin{align}\label{index_formula_1} \mu_{\bA}(\mathbf{x},\alpha)=\sum_{i=0}^{\kappa-1} \mu_{\mathbf{i}(i)}\mathbf{1}_{[\sA_i,\sA_{i+1})}(\alpha),\hspace{1cm}\mu^{I}_{\bA}(\mathbf{x},\beta)=\sum_{i=0}^{\kappa-2} \mathbf{i}(i)\mathbf{1}_{[i,\,i+1)}(\beta). \end{align} The~equalities~\eqref{index_formula_1} are based on~the function~$\mathbf{i}$, but we may proceed similarly with the function~$\mathbf{j}$ using the~equality~\eqref{restated_formula_2}: \begin{align}\label{index_formula_2} \mu_{\bA}(\mathbf{x},\alpha)=\sum_{j=1}^{\kappa-1} \mu_{j}\mathbf{1}_{[\sA_{\mathbf{j}(j)},\,\sA_{\mathbf{j}(j-1)})}(\alpha),\hspace{1cm}\mu^{I}_{\bA}(\mathbf{x},\beta)=\sum_{j=1}^{\kappa-1} j\mathbf{1}_{[\mathbf{j}(j),\,\mathbf{j}(j-1))}(\beta). \end{align}
The~equality~\eqref{index_formula_2} gives clues on how to obtain the~graph of the~indexed generalized survival function~$\mu^{I}_{\bA}(\mathbf{x},\beta)$ from the~graph of the~map~$\mathbf{j}$. One more step is necessary: one has to draw a~graph of the~generalized inverse~$\mathbf{j}^-$ from the~graph of~$\mathbf{j}$ first. Then, similarly to the~case of~$\mathbf{i}$, define $\mu^{I}_{\bA}(\mathbf{x},\beta)=\mathbf{j}^-(\lfloor\beta\rfloor)$ for $\beta<\kappa-1$ and $\mu^{I}_{\bA}(\mathbf{x},\beta)=\mathbf{j}^-(\kappa-1)$ otherwise, see~Figure~\ref{indexed_survival_function}(b).
Let us summarize the calculation of the indexed generalized survival function. First, we construct the bijections $E_*$ and $F_*$ as shown in \eqref{Ei} and \eqref{Fi}. Then, using the permutation $(\cdot)\colon \ozn{\kappa-1}\to\ozn{\kappa-1}$ described above, we obtain the values of $\mathbf{i}(i)$ and $\mathbf{j}(i)$ for each $i\in\ozn{\kappa-1}$, and thus we can construct the indexed generalized survival function. Finally, to each index $k\in\ozn{\kappa-1}$ on the horizontal, resp.\ vertical, axis we assign the value $\sA_k$, resp.\ $\mu_k$. This entire process is described in Algorithm~\ref{alg_GSF_from_i} and Algorithm~\ref{alg_GSF_from_j}. Note that these algorithms assume that the values of the maps $(\cdot)$, resp.\ $\inv{\cdot}$, are known to the user; for completeness, we present algorithms for their calculation in the Appendix.
\noindent \begin{minipage}{0.49\textwidth} \SetKwRepeat{Do}{do}{while} \begin{algorithm}[H] \Fn{the-graph-of-GSF$(\mathcal{E}, \mu, \mathbf{x}, \cA)$}{ $(0),\dots,(\kappa-1)$ $\leftarrow$ the-$(\cdot)$-map$(\mathcal{E}, \mu, \mathbf{x}, \cA)$\; $GSF:=0$\; \For{$(i=0$, $i<\kappa-1$, $i\mathrm{++})$}{
\If{$(i+1)>(i)$}{
$(i+1):=(i)$\;
}
$GSF:=GSF+\mu_{(i)}\cdot\mathbf{1}_{[\sA_i,\sA_{i+1})}$\; } \Return{$GSF$\; }} \caption{Calculation of generalized survival function (GSF) using the map $\mathbf{i}$ } \label{alg_GSF_from_i} \end{algorithm}
\end{minipage}
\begin{minipage}{0.49\textwidth} \SetKwRepeat{Do}{do}{while} \begin{algorithm}[H] \Fn{the-graph-of-GSF$(\mathcal{E}, \mu, \mathbf{x}, \cA)$}{ $\inv{0},\dots,\inv{\kappa-1}$ $\leftarrow$ the-$\inv{\cdot}$-map$(\mathcal{E}, \mu, \mathbf{x}, \cA)$\; $GSF:=0$\; \For{$(j=1$, $j<\kappa$, $j\mathrm{++})$}{
\If{$\inv{j-1}<\inv{j}$}{
$\inv{j}:=\inv{j-1}$\;
}
$GSF:=GSF+\mu_j\cdot\mathbf{1}_{[\sA_\inv{j},\sA_\inv{j-1})}$\; } \Return{$GSF$\; }} \caption{Calculation of generalized survival function (GSF) using the map $\mathbf{j}$ } \label{alg_GSF_from_j} \end{algorithm}
\end{minipage}
Note that although the maps $\mathbf{i}$ and $\mathbf{j}$ do not appear explicitly in the algorithms, their generation is hidden in the for loops. In this part of the algorithms, the indexed generalized survival function is also generated, and the standard version is immediately obtained from it. A zero value of the generalized survival function can also be marked in the graph (Figure~\ref{priradenie}), which corresponds to Remark~\ref{gsf=0} and Remark~\ref{A1c}, but this is not necessary, since this value does not bring any new information.
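A direct transcription of Algorithm~\ref{alg_GSF_from_i} into Python might look as follows. This is a sketch under our own data layout: the sorted thresholds $\sA_i$ and the paired values $\mu_{(i)}$ of the travel example are passed as lists, and instead of summing indicator functions we return a callable step function.

```python
def gsf_from_i(A, mu_dot):
    # A: sorted aggregation values A_0 <= ... <= A_{kappa-1};
    # mu_dot: the values mu_{(i)} paired with them.  The running minimum in
    # the loop realizes the map i, exactly as the `if` branch of Algorithm 1.
    steps = []
    cur = float("inf")
    for a, m in zip(A, mu_dot):
        cur = min(cur, m)        # if (i+1) > (i): (i+1) := (i)
        steps.append((a, cur))   # contributes mu_(i) * 1_[A_i, A_{i+1})

    def gsf(alpha):
        val = 0.0
        for a, m in steps:
            if a <= alpha:
                val = m
            else:
                break
        return val
    return gsf

A      = [0, 55, 65, 75, 80, 120, 130, 135, 140, 145, 155, 195, 200, 210, 220, 275]
mu_dot = [3.6, 3.0, 2.5, 2.5, 2.4, 2.2, 1.8, 1.8, 1.8, 1.4, 1.4, 1.2, 1.0, 0.8, 0.6, 0]
gsf = gsf_from_i(A, mu_dot)
print(gsf(200), gsf(150), gsf(60))   # 1.0 1.4 3.0
```

The returned values agree with the explicit formula for $\mu^{\mathrm{sum}}_{\bA}(\mathbf{x},\alpha)$ computed in the knapsack example.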
\begin{remark} The following relationships can be observed from formulas~\eqref{index_formula_1} and~\eqref{index_formula_2}. Let $I=\{\mathbf{i}(i):i\in\ozn{\kappa-1}\}$ and $J=\{\mathbf{j}(j):j\in\ozn{\kappa-1}\}$, $j\in I$ and $i\in J$, then \begin{align*} \min\{k\in\ozn{\kappa-1}:\mathbf{j}(k)=i\}&=\mathbf{i}(i)\text{,}\\ \min\{k\in\ozn{\kappa-1}:\mathbf{i}(k)=j\}&=\mathbf{j}(j)\text{.} \end{align*} These equalities hold because of \begin{align*}
\min\{k\in\ozn{\kappa-1}:\mathbf{j}(k)=i\}&=\min\{k\in\ozn{\kappa-1}:\min\{\inv{0},\dots,\inv{k}\}=i\}\\&=\min\{k\in\ozn{\kappa-1}: \inv{k}=i\}=\min\{k\in\ozn{\kappa-1}: k=(i)\}\\&=\min\{(0),\dots,(i)\}=\mathbf{i}(i)\text{.} \end{align*} The penultimate equality follows from the fact that $(\cdot)$ is nonincreasing, see Remark~\ref{nerastucost_i_j}. Similarly for the second formula. \end{remark}
\begin{remark} It is also possible to use \eqref{index_formula_1} and~\eqref{index_formula_2}, by analogy with formula \eqref{predpis gsf}, for calculating the indexed generalized survival function as follows: \begin{align*} \mu^{I}_{\bA}(\mathbf{x},\beta)&=\min\left\{(i): i\leq \beta,\, i\in\ozn{\kappa-1}\right\}=\mathbf{i}(\beta)\\ &=\min\left\{j: \inv{j}\leq \beta,\, j\in\ozn{\kappa-1}\right\}=\mathbf{j}^-(\beta) \end{align*} for any $\beta\in[0,+\infty)$, with respect to the extended domains of $\mathbf{i}$ and $\mathbf{j}^-$, i.e.\ $[0,+\infty)$, described above. \end{remark}
\section{Generalized Choquet integral computation}\label{Choquet}
The survival function measuring the $\alpha$-level set $\{f>\alpha\} $\footnote{$\{f>\alpha\}=\{x\in X:f(x)>\alpha\}$} of an arbitrary measurable function $f$ via monotone measure $\mu$ is the essence of the concept of the famous Choquet integral~\cite{Choquet1954} defined by $$\mathrm{C}(f,\mu)=\int_{0}^{\infty}\mu(\{f>\alpha\})\,\mathrm{d}\alpha.$$
On finite spaces, the evaluation formula of the Choquet integral has the simplified form \begin{align}\label{Chformula}\mathrm{C}(\mathbf{x},\mu)=\sum_{i=1}^{n} \mu{(G_{\sigma(i)})}(x_{\sigma{(i)}}-x_{\sigma{(i-1)}})=\sum_{i=1}^{n} x_{\sigma{(i)}}(\mu{(G_{\sigma(i)})}-\mu{(G_{\sigma(i+1)})}),\end{align} where $\sigma\colon [n]\to[n]$ is a permutation such that $x_{\sigma(1)}\leq x_{\sigma(2)}\leq \dots\leq x_{\sigma(n)}$ with the convention $x_{\sigma(0)}=0$, $G_{\sigma{(i)}}=\{\sigma{(i)},\dots,\sigma{(n)}\}$ for $i\in[n]$, and $G_{\sigma(n+1)}=\emptyset$.
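The evaluation formula~\eqref{Chformula} is easy to transcribe; the following Python sketch (function and variable names are ours, and the capacity is a toy example, not one from the paper) computes the discrete Choquet integral from a measure stored as a dictionary on frozensets.

```python
def choquet(x, mu):
    # C(x, mu) = sum_i mu(G_sigma(i)) * (x_sigma(i) - x_sigma(i-1)),
    # where sigma sorts x nondecreasingly and G_sigma(i) = {sigma(i),...,sigma(n)}.
    n = len(x)
    sigma = sorted(range(n), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k in range(n):
        G = frozenset(sigma[k:])
        total += mu[G] * (x[sigma[k]] - prev)
        prev = x[sigma[k]]
    return total

# Toy capacity on {0, 1}: mu({0,1}) = 1, mu({0}) = 0.5, mu({1}) = 0.3.
mu = {frozenset({0, 1}): 1.0, frozenset({0}): 0.5,
      frozenset({1}): 0.3, frozenset(): 0.0}
print(choquet([1.0, 2.0], mu))   # 1 * (1 - 0) + 0.3 * (2 - 1) = 1.3
```

For a constant vector the integral returns the constant times $\mu([n])$, a standard sanity check.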
Based on the generalized survival function, a new concept generalizing the Choquet integral naturally arises. In this section, we aim to provide formulas for the discrete $\cA$-Choquet integral.
\begin{definition}\label{def: Chintegral}\rm(cf.~\cite[Definition 5.4.]{BoczekHalcinovaHutnikKaluszka2020}) Let $\cA$ be a FCA, $\mu\in\mathbf{M}$ and $\mathbf{x}\in[0,+\infty)^{[n]}$. The Choquet integral with respect to $\cA$ and $\mu$ ($\cA$-Choquet integral, for short) of $\mathbf{x}$ is defined as \begin{align}\label{Chint}\mathrm{C}_{\cA}(\mathbf{x},\mu)=\int_{0}^{\infty}\mu_{\bA}(\mathbf{x},\alpha)\,\mathrm{d}\alpha.\end{align} \end{definition}
As we have obtained the generalized survival function expressions, see Sections~\ref{prvy_sposob} and~\ref{druhy_sposob}, the computation of the $\cA$-Choquet integral becomes a trivial matter. Note that the formulas in items (i) and (ii) below are obtained by means of the mappings $\mathbf{i}$ and $\mathbf{j}$, respectively. By introducing these mappings, we managed to obtain expressions similar to the original one in~(\ref{Chformula}).
\begin{theorem}\label{vypocetCh} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$ and $\mathbf{x}\in[0,+\infty)^{[n]}$. Then \begin{enumerate}[\rm (i)] \item $\mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=0}^{\kappa-2}\mu_{\mathbf{i}(i)}(\sA_{i+1}-\sA_i)=\sum_{i=0}^{\kappa-1}\sA_{i+1}\big(\mu_{\mathbf{i}(i)}-\mu_{\mathbf{i}(i+1)}\big)$\text{,}\,
{\rm [cf. Corollary~\ref{application_i}]}\\ \item $\mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=1}^{\kappa-1}\mu_i\big(\sA_{\mathbf{j}(i-1)}-\sA_{\mathbf{j}(i)}\big)=\sum_{i=1}^{\kappa-1}\sA_{\mathbf{j}(i-1)}(\mu_{i}-\mu_{i-1})$\text{,}\,
{\rm [cf. Corollary~\ref{application_j}]}\\ \item $\mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=0}^{\kappa-2}\min_{k\leq i}\mu_{(k)}(\sA_{i+1}-\sA_i)=\sum_{i=0}^{\kappa-1}\sA_{i+1}\big(\min_{k\leq i}\mu_{(k)}-\min_{k\leq i+1} \mu_{(k)}\big)$\text{,}\,
{\rm [cf. Theorem~\ref{zjednodusenie_def}]}\\
\item $\mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=1}^{\kappa-1}\mu_i\big(\min_{k<i} \sA_\inv{k}-\min_{k\leq i}\sA_\inv{k}\big)=\sum_{i=1}^{\kappa-1}\min_{k<i}\sA_\inv{k}(\mu_{i}-\mu_{i-1})$\text{.}\,
{\rm [cf. Theorem~\ref{gsf2}]}\\
\end{enumerate}
\end{theorem}
Let us recall that in Corollary~\ref{specimiery} we listed computational formulas for generalized survival function for special measures. Thus, Corollary~\ref{specimiery} helps us to substantially improve the~formulas presented in Theorem~\ref{vypocetCh} for special measures.
\begin{corollary}\label{specimieryCh} Let $\mathbf{x}\in[0,+\infty)^{[n]}$. \begin{enumerate}[\rm (i)] \item Let $\cA$ be a FCA and $\bar{\mu}$ be the greatest monotone measure. Then $$\mathrm{C}_{\cA}(\mathbf{x},\bar{\mu})=\aA[{[n]}]{\mathbf{x}}.$$ \item Let $\cA$ be a FCA and $\mmu$ be the weakest monotone measure. Then $$\mathrm{C}_{\cA}(\mathbf{x},\mmu)=\min_{E\neq\emptyset}\aA[E]{\mathbf{x}}.$$
\item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\mu$ be a symmetric measure and set $\mu^i:=\mu(F)$, if $|F|=i$, $i\in[n]\cup\{0\}$. Then $$\mathrm{C}_{\cA}(\mathbf{x},\mu)=\sum_{i=1}^n (\mu^i-\mu^{i-1})\min\limits_{|E|=n-i+1}\aA[E]{\mathbf{x}}.$$ \item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\Pi$ be a possibility measure. Let $G_{\sigma(i)}=\{\sigma(i),\dots,\sigma(n)\}$ for $i\in\{1,\dots,n\}$, where $\sigma$ is a permutation as in Corollary~\ref{specimiery}. Then $$\mathrm{C}_{\cA}(\mathbf{x},\Pi)=\sum_{i=1}^n\left(\pi(\sigma(i))-\pi(\sigma(i-1))\right)\aA[G_{\sigma(i)}]{\mathbf{x}}.$$ \item Let $\cA$ be a FCA monotone w.r.t.\ sets with $\cE=2^{[n]}$. Let $\mathrm{N}$ be a necessity measure and $\sigma$ be a permutation as in Corollary~\ref{specimiery}. Then $$\mathrm{C}_{\cA}(\mathbf{x},\mathrm{N})=\sum_{i=1}^n\big(\pi(\sigma(i))-\pi(\sigma(i-1))\big) \min\limits_{k\geq i}\aA[\{\sigma(k)\}]{\mathbf{x}}.$$
\end{enumerate} \end{corollary}
\begin{remark} In the following, we aim to emphasize that the $\cA$-Choquet integral covers the famous Choquet integral. Let us consider a special family $\cA^{\mathrm{max}}$. Taking symmetric measures we obtain $$\mathrm{C}_{\cA^\mathrm{max}}(\mathbf{x},\mu)=\sum_{i=1}^n (\mu^i-\mu^{i-1})x_{\sigma(n-i+1)}=\sum_{i=1}^n(\mu^{n-i+1}-\mu^{n-i})x_{\sigma(i)},$$ where $\sigma\colon [n]\to[n]$ is a permutation such that $x_{\sigma(1)}\leq\dots\leq x_{\sigma(n)}$ with the convention $x_{\sigma(0)}=0$. This is a formula of Yager's ordered weighted averaging (OWA) operator~\cite{Yager1998}, as Grabisch has shown, see~\cite{GRABISCH20111}. Therefore, the generalized $\cA$-Choquet integral w.r.t.\ symmetric measures can be seen as a new type of the OWA operator.
The formula for possibility measure in Corollary~\ref{specimieryCh}\,(iv) simplifies to the form \begin{align*}\mathrm{C}_{\cA^\mathrm{max}}(\mathbf{x},\Pi) =\sum_{i=1}^n(\pi(\sigma(i))-\pi(\sigma(i-1)))\max_{k\geq i}x_{\sigma(k)} =\sum_{i=1}^n(\pi(\sigma(i))-\pi(\sigma(i-1)))\max_{\pi(k)\geq\pi(\sigma(i))} x_{k},\end{align*} which is the famous Choquet integral with respect to possibility measure, cf.~\cite{DuboisRico2016}, and permutation $\sigma$ as in Corollary~\ref{specimiery}. Similarly, taking necessity measure in~Corollary~\ref{specimieryCh}\,(v) we obtain \begin{align*}\mathrm{C}_{\cA^\mathrm{max}}(\mathbf{x},\mathrm{N})=\sum_{i=1}^n(\pi(\sigma(i))-\pi(\sigma(i-1)))\min_{k\geq i}x_{\sigma(k)} =\sum_{i=1}^n(\pi(\sigma(i))-\pi(\sigma(i-1)))\min_{\pi(k)\geq\pi(\sigma(i))} x_{k}.\end{align*}\end{remark}
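The OWA connection in the remark above is easy to check numerically. The sketch below uses our own function names; the symmetric measure is given through its cardinality values $\mu^0,\dots,\mu^n$ (a toy choice, not data from the paper), and the OWA weights are $w_j=\mu^{j}-\mu^{j-1}$ applied to the descending reordering of the input.

```python
def choquet_symmetric(x, mu_card):
    # C_{A^max}(x, mu) = sum_i (mu^{n-i+1} - mu^{n-i}) * x_sigma(i),
    # with x_sigma(1) <= ... <= x_sigma(n) and mu_card[i] = mu(F) for |F| = i.
    xs = sorted(x)
    n = len(x)
    return sum((mu_card[n - i] - mu_card[n - i - 1]) * xs[i] for i in range(n))

def owa(x, w):
    # Yager's OWA operator: weights applied to the descending reordering of x.
    return sum(wi * xi for wi, xi in zip(w, sorted(x, reverse=True)))

mu_card = [0.0, 0.5, 0.8, 1.0]   # mu^0, ..., mu^3 (nondecreasing, mu^n = 1)
w = [mu_card[j] - mu_card[j - 1] for j in range(1, 4)]
x = [3.0, 1.0, 2.0]
print(choquet_symmetric(x, mu_card), owa(x, w))   # both approximately 2.3
```

The two expressions coincide for every input vector, which is exactly the identification of the symmetric-measure Choquet integral with an OWA operator.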
Due to the formulas derived in this section, we are ready to solve the~problem of accommodation options introduced in Section~\ref{sec: motivation}, using the~concept of the generalized Choquet integral.
\paragraph{Solution of the Problem of accommodation options} The aim of this example is to determine the most suitable accommodation for each character of the traveler (Anthony, Brittany, Charley). We use two methods: the standard and the generalized Choquet integral. Since we are deciding with respect to several criteria, the problem of choosing the best accommodation is a multi-criteria decision-making problem.
As we can see in Table~\ref{options_booking}, the criteria of the multicriteria decision-making process are of different types: distance and price are minimization criteria, while the review is a maximization criterion. Thus, the values in the tables in their current form are not suitable to work with; we have to rescale them using standard methods, see~\cite{Singh2020}.
\noindent Accordingly, let us adjust the column corresponding to the distance: \begin{itemize} \item In each table, let us choose the minimum value in the distance column. \item Let us divide the minimum from the previous step by each value in the distance column. \end{itemize}
Let us repeat this process for the price as well. For the reviews column let us divide each value by the maximum value from the corresponding column. This gives us the following input vectors for decision-making process $\mathbf{a}_1,\mathbf{a}_2,\mathbf{b}_1,\mathbf{b}_2,\mathbf{c}_1,\mathbf{c}_2$: \begin{table}[H]
\centering
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{M|M|M|M}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
$\mathbf{a}_1$ & $1$ & $0.84$ & $0.875$\\
$\mathbf{a}_2$ & $0.4$ & $1$ & $1$
\end{tabular}
\caption{Anthony}
\end{subtable}
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{M|M|M|M}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
$\mathbf{b}_1$ & $1$ & $0.8$ & $0.2$\\
$\mathbf{b}_2$ & $0.7$ & $1$ & $1$
\end{tabular}
\caption{Brittany}
\end{subtable}
\begin{subtable}{0.32\linewidth}
\centering
\begin{tabular}{M|M|M|M}
& $\text{D}$ & $\text{P}$ & $\text{R}$ \\\hline
$\mathbf{c}_1$ & $1$ & $0.95$ & $0.8$\\
$\mathbf{c}_2$ & $0.2$ & $1$ & $1$
\end{tabular}
\caption{Charley}
\end{subtable}
\caption{Accommodation options for people -- adjusted data}
\label{options_booking_adjust} \end{table} To evaluate each input vector (alternative) by Choquet integrals we have to know the values of the monotone measures on each set $E\subseteq\{\text{D},\text{P},\text{R}\}$. We already know the values of the monotone measures $\mu$, $\nu$, $\xi$, which specify the characters of the persons, on singletons, see Table~\ref{charaktery}. Further, because of the definition of the (normalized) monotone measure, \begin{center}$\mu(\emptyset)=\nu(\emptyset)=\xi(\emptyset)=0$,\,\, $\mu(\{\text{D}, \text{P}, \text{R}\})=\nu(\{\text{D}, \text{P}, \text{R}\})=\xi(\{\text{D}, \text{P}, \text{R}\})=1$.\end{center} It remains to determine the values of the monotone measures of the sets $\{\text{D}, \text{P}\}$, $\{\text{D}, \text{R}\}$, $\{\text{P}, \text{R}\}$. The question is: How to set these values? Let us rephrase this question: if we set the values of the monotone measure, how do we know that they are set well? Various approaches to verifying the correctness of the chosen values are known in the literature. Let us mention e.g.\ the interaction index \cite{Grabisch1996}, the Banzhaf power index or the Shapley value. We chose the Shapley value and the methodology described in~\cite{Grabisch1996}.
The Shapley value expresses the significance of a given criterion by aggregating in a certain way the values of the monotone measure of the sets to which the given criterion belongs: \begin{align}\label{shapley_formula}
\varphi_\mu(i)=\sum_{A\subseteq [n]\backslash \{i\}} \gamma_{[n]}(A)\cdot\left(\mu(A\cup\{i\})-\mu(A)\right)\text{,} \end{align}
where $i\in[n]$ is the index associated with the criterion and $\textstyle\gamma_{[n]}(A)=\frac{(n-|A|-1)!\cdot|A|!}{n!}$. The Shapley value has the property that $\textstyle\sum_{i=1}^n\varphi_\mu(i)=1$.
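Formula~\eqref{shapley_formula} can be sketched in Python as follows (the function name is ours, and the measure is a dictionary on frozensets of criteria indices). As a sanity check we use an additive measure, for which the Shapley values must recover the individual weights.

```python
from itertools import combinations
from math import factorial

def shapley(mu, n):
    # phi_mu(i) = sum over A subset of [n] \ {i} of
    #             (n - |A| - 1)! * |A)! / n! * (mu(A + {i}) - mu(A))
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        val = 0.0
        for r in range(n):
            for comb in combinations(others, r):
                A = frozenset(comb)
                gamma = factorial(n - len(A) - 1) * factorial(len(A)) / factorial(n)
                val += gamma * (mu[A | {i}] - mu[A])
        phi.append(val)
    return phi

# Additive toy measure with weights w: the Shapley values recover w.
w = [0.2, 0.3, 0.5]
mu = {frozenset(A): sum(w[i] for i in A)
      for r in range(4) for A in combinations(range(3), r)}
print(shapley(mu, 3))   # approximately [0.2, 0.3, 0.5]
```

For a nonadditive measure the same routine produces the importance indices used to calibrate $\mu$, $\nu$, $\xi$ below.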
So, we set the monotone measures $\mu, \nu, \xi$ to obtain $$\mu(\{i\})=\varphi_{\mu}(i),\quad \nu(\{i\})=\varphi_{\nu}(i),\quad \xi(\{i\})=\varphi_{\xi}(i),$$
i.e.\ we form the equations given by~\eqref{shapley_formula} and calculate the remaining values of the monotone measures (the table shows their values rounded to two decimal places): \begin{table}[H]
\centering
\begin{tabular}{c|o|o|o}
$E$ & $\mu(E)$ & $\nu(E)$ & $\xi(E)$ \\ \hline
$\{\text{D},\text{P}\}$ & $0.94$ & $0.85$ & $0.63$ \\
$\{\text{D},\text{R}\}$ & $0.48$ & $0.76$ & $0.71$ \\
$\{\text{P},\text{R}\}$ & $0.81$ & $0.59$ & $0.84$
\end{tabular}
\caption{Monotone measures calculating by Shapley value}
\label{monotone_measure_Shapley} \end{table} Finally, let us calculate the standard and generalized Choquet integral of vectors $\mathbf{a}_1,\mathbf{a}_2,$ $\mathbf{b}_1,\mathbf{b}_2,\mathbf{c}_1,\mathbf{c}_2$ with respect to monotone measures $\mu$, $\nu$, $\xi$. As a conditional aggregation operator let us use the standard Choquet integral \begin{center}
$\sA^{\mathrm{Ch}}(\mathbf{x}|E)=\mathrm{C}(\mathbf{x}\mathbf{1}_E,m)$, \end{center} with $\mathbf{x}\in[0,\infty)^{[3]}$, $E\in\mathcal{E}=2^{[3]}$ and a monotone measure $m$ (we shall use $\mu$, $\nu$, $\xi$ according to the character). We expect that \begin{itemize}
\item $\mathbf{a}_2$ should be preferred to $\mathbf{a}_1$ with respect to character of Anthony,
\item $\mathbf{b}_1$ should be preferred to $\mathbf{b}_2$, because of the character of Brittany,
\item $\mathbf{c}_2$ preferred to $\mathbf{c}_1$ for Charley.
\end{itemize}
The results for both integrals are shown in Table~\ref{results}. Let us look at the highlighted diagonal of Table~\ref{results}. In the first line of the highlighted cells, we see the calculation of the standard Choquet integrals. As we can see, we did not obtain the expected accommodation preferences. In the second line of the highlighted cells, the generalized Choquet integral is calculated. We see that this method matches our expected preferences, which indicates its benefits in decision-making processes.
The generalized Choquet integral takes into account the input vector with respect to various conditional sets during the calculation. If the collection contains more sets than the cardinality of the basic set, this integral aggregates more input values than the standard Choquet integral. Thanks to this, the calculation is more detailed, ``smoother'' and more accurate.
To be complete, we have calculated the overall scores of various combinations of accommod\-ation--person, see the other cells of Table~\ref{results}.
\begin{table}
\centering
\begin{tabular}{c|c|S|S|S}
accom.\ opt. & type of integral & Anthony ($\mu$) & Brittany ($\nu$) & Charley ($\xi$) \\ \hline
\multirow{4}{*}{$\mathbf{a}_1$, $\mathbf{a}_2$} & \multirow{2}{*}{\footnotesize{Choquet integral}} & \cellcolor{gray!10} $\mathbf{a}_1\succ \mathbf{a}_2$ & $\mathbf{a}_1\succ \mathbf{a}_2$ & $\mathbf{a}_2\succ \mathbf{a}_1$ \\
& & \cellcolor{gray!10} \footnotesize{$0.8943>0.8860$} & \footnotesize{$0.9604>0.7540$} & \footnotesize{$0.9040>0.8899$} \\[5pt]
& \multirow{2}{*}{\footnotesize{\shortstack{generalized Choquet \\ integral}}} & \cellcolor{gray!10} $\mathbf{a}_2\succ \mathbf{a}_1$ & $\mathbf{a}_1\succ \mathbf{a}_2$ & $\mathbf{a}_2\succ \mathbf{a}_1$ \\
& & \cellcolor{gray!10} \footnotesize{$0.6857>0.6439$} & \footnotesize{$0.6845>0.4554$} & \footnotesize{$0.6566>0.6161$}\\\hline
\multirow{4}{*}{$\mathbf{b}_1$, $\mathbf{b}_2$} & \multirow{2}{*}{\footnotesize{Choquet integral}} & $\mathbf{b}_2\succ \mathbf{b}_1$ & \cellcolor{gray!10} $\mathbf{b}_2\succ \mathbf{b}_1$ & $\mathbf{b}_2\succ \mathbf{b}_1$ \\
& & \footnotesize{$0.9430>0.8240$} & \cellcolor{gray!10} \footnotesize{$0.8770>0.8600$} & \footnotesize{$0.9520>0.6180$} \\[5pt]
& \multirow{2}{*}{\footnotesize{\shortstack{generalized Choquet \\ integral}}} & $\mathbf{b}_2\succ \mathbf{b}_1$ & \cellcolor{gray!10} $\mathbf{b}_1\succ \mathbf{b}_2$ & $\mathbf{b}_2\succ \mathbf{b}_1$ \\
& & \footnotesize{$0.7127>0.6087$} & \cellcolor{gray!10} \footnotesize{$0.6393>0.5861$} & \footnotesize{$0.6766>0.3551$}\\\hline
\multirow{4}{*}{$\mathbf{c}_1$, $\mathbf{c}_2$} & \multirow{2}{*}{\footnotesize{Choquet integral}} & $\mathbf{c}_1\succ \mathbf{c}_2$ & $\mathbf{c}_1\succ \mathbf{c}_2$ & \cellcolor{gray!10} $\mathbf{c}_1\succ \mathbf{c}_2$ \\
& & \footnotesize{$0.9560>0.8480$} & \footnotesize{$0.9650>0.6720$} & \cellcolor{gray!10} \footnotesize{$0.9045>0.8720$} \\[5pt]
& \multirow{2}{*}{\footnotesize{\shortstack{generalized Choquet \\ integral}}} & $\mathbf{c}_1\succ \mathbf{c}_2$ & $\mathbf{c}_1\succ \mathbf{c}_2$ & \cellcolor{gray!10} $\mathbf{c}_2\succ \mathbf{c}_1$ \\
& & \footnotesize{$0.7069>0.6654$} & \footnotesize{$0.6895>0.3532$} & \cellcolor{gray!10} \footnotesize{$0.6433>0.6226$}\\
\end{tabular}
\caption{Results of preferences of accommodation options. The preference relation is represented by Choquet integrals as follows: $\mathbf{x}\succ\mathbf{y}$ ($\mathbf{x}$ is preferred to $\mathbf{y}$) if and only if $\mathrm{C}(\mathbf{x},m)>\mathrm{C}(\mathbf{y},m)$, resp.\ $\mathrm{C}_{\cA}(\mathbf{x},m)>\mathrm{C}_{\cA}(\mathbf{y},m)$, where $m$ is $\mu$, $\nu$ or $\xi$, respectively.}
\label{results} \end{table}
\section{Searching for optimal intervals, and indistinguishability}\label{optimal_sposob}
The Choquet integral is a basic tool for multicriteria decision making and modeling of decision under risk and uncertainty. In~\cite{DuboisRico2016}, Dubois and Rico studied the equality conditions of Choquet integrals of particular input vectors. They considered Choquet integrals with respect to possibility and necessity measures. In~\cite{ChenMesiarLiStupnanova2017}, Chen et al. continued their research with a view to a wider class of so-called universal integrals~\cite{KlementMesiarPap2010}. Universal integrals form one class of utility functions in multicriteria decision making.
In this section, we formulate the equality conditions of generalized survival functions considering arbitrary measures. Naturally, if the generalized survival functions coincide, then their Choquet integrals are equal. In order to obtain the equality conditions, we study the greatest possible intervals on which the generalized survival function takes its possible values.
For this purpose, for any $j\in\ozn{\kappa-1}$ let us set \begin{align} \label{phii}{\varphi^*}(j):=\max\{k\in\ozn{\kappa-1}:\mu_k=\mu_j\}\quad\text{and}\quad{\varphi_*}(j):=\min\{k\in\ozn{\kappa-1}:\mu_k=\mu_j\}. \end{align} The following proposition summarizes the basic properties of ${\varphi^*}$ and ${\varphi_*}$.
\begin{proposition}\label{vlastnosti_fi} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, let ${\varphi^*}$ and ${\varphi_*}$ be defined as in~\eqref{phii}, and let $a,b,j\in\ozn{\kappa-1}$. Then the following properties hold. \begin{enumerate}[\rm (i)]
\item ${\varphi_*}(j)\leq j\leq{\varphi^*}(j)$,
\item $\mu_{{\varphi^*}(j)}=\mu_k=\mu_{{\varphi_*}(j)}$ for any integer $k\in\{{\varphi_*}(j),\dots,{\varphi^*}(j)\}$,
\item $\bigcup_{{\varphi_*}(j)\leq l\leq {\varphi^*}(j)}\Big[\min_{k\leq l}\sA_{\inv{k}},\min_{k<l}\sA_{\inv{k}}\Big)=\Big[\min_{k\leq{\varphi^*}(j)}\sA_{\inv{k}},\min_{k<{\varphi_*}(j)}\sA_{\inv{k}}\Big)$,
\item ${\varphi_*}$ and ${\varphi^*}$ are nondecreasing.
\end{enumerate} \end{proposition} \begin{proof} The statements (i) and (ii) follow directly from definitions of ${\varphi^*}$, ${\varphi_*}$, and arrangement~\eqref{Fi}. The validity of the statement (iii) follows from the fact that $\min_{k<l}\sA_\inv{k}=\min_{k\leq l-1}\sA_\inv{k}$ for $l\in\{{\varphi_*}(j),\dots,{\varphi^*}(j)\},$ $l\neq 0$, and because of $$\min_{k\leq{\varphi^*}(j)}\sA_\inv{k}\leq\min_{k<{\varphi^*}(j)}\sA_\inv{k}=\min_{k\leq{\varphi^*}(j)-1}\sA_\inv{k}\leq\dots\leq\min_{k<{\varphi_*}(j)+1}\sA_\inv{k}=\min_{k\leq{\varphi_*}(j)}\sA_\inv{k}\leq\min_{k<{\varphi_*}(j)}\sA_\inv{k}$$ with the convention already stated in this paper $\min\limits_{k< 0}\sA_\inv{k}=\min \emptyset=+\infty.$ Now we show (iv). If $a\leq b$, then from arrangement~\eqref{Fi} we have $\mu_a\leq\mu_b$ and thus $$\max\{k\in\ozn{\kappa-1}:\mu_a=\mu_k\}\leq\max\{k\in\ozn{\kappa-1}:\mu_b=\mu_k\},$$ hence ${\varphi^*}(a)\leq{\varphi^*}(b)$.
The second part of the statement (iv) can be shown analogously. \null
$\Box\;\;$
\end{proof}
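For a nondecreasing sequence $\mu_0\leq\dots\leq\mu_{\kappa-1}$, the maps ${\varphi_*}$ and ${\varphi^*}$ simply delimit the block of equal values containing position $j$. A small Python sketch (our own function names, tested on the sorted measure values of the travel example) makes properties (i) and (iv) easy to verify numerically:

```python
def phi_star(mu, j):
    # phi^*(j): the largest index k with mu_k == mu_j
    return max(k for k, v in enumerate(mu) if v == mu[j])

def phi_low(mu, j):
    # phi_*(j): the smallest index k with mu_k == mu_j
    return min(k for k, v in enumerate(mu) if v == mu[j])

# Sorted measure values mu_0 <= ... <= mu_15 of the travel example.
mu = [0, 0.6, 0.8, 1.0, 1.2, 1.4, 1.4, 1.8, 1.8, 1.8, 2.2, 2.4, 2.5, 2.5, 3.0, 3.6]
print(phi_low(mu, 8), phi_star(mu, 8))   # the block of 1.8's: 7 9
```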
By means of ${\varphi^*}$ and ${\varphi_*}$ we can determine the greatest possible intervals corresponding to given monotone measure. Simultaneously, we show under what conditions the value of a given monotone measure is not achieved by the generalized survival function.
\begin{proposition}\label{najgsf} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\varphi_*}$ and ${\varphi^*}$ be given as in~(\ref{phii}), and $j\in\ozn{\kappa-1}$. Then
$$\mu_{\bA}(\mathbf{x},\alpha)=\mu_j \text{ for any } \alpha\in\Big[\min_{k\leq{\varphi^*}(j)}\sA_\inv{k},\min_{k<{\varphi_*}(j)}\sA_\inv{k}\Big)$$
with the convention $\min\limits_{k< 0}\sA_\inv{k}=\min \emptyset=+\infty$ and this interval is the greatest possible. \end{proposition} \begin{proof}
According to Theorem~\ref{gsf2} and Proposition~\ref{vlastnosti_fi}(i), (ii), (iii), the generalized survival function $\mu_{\bA}(\mathbf{x},\alpha)$ achieves the value $\mu_j$ on the interval $\Big[\min_{k\leq{\varphi^*}(j)}\sA_\inv{k},\min_{k<{\varphi_*}(j)}\sA_\inv{k}\Big)$. It remains to show that this interval is the greatest possible. We proceed by contradiction.
Suppose there exists $\tilde{j}\in\ozn{\kappa-1}$ such that $$\min_{k\leq{\varphi^*}(\tilde{j})}\sA_\inv{k}<\min_{k\leq{\varphi^*}(j)}\sA_\inv{k}\quad\text{or}\quad\min_{k<{\varphi_*}(j)}\sA_\inv{k}<\min_{k<{\varphi_*}(\tilde{j})}\sA_\inv{k},$$ and $\mu_{\bA}(\mathbf{x},\alpha)=\mu_j$ for $\alpha\in\Big[\min_{k\leq{\varphi^*}(\tilde{j})}\sA_\inv{k},\min_{k\leq{\varphi^*}(j)}\sA_\inv{k}\Big)$ or $\alpha\in\Big[\min_{k<{\varphi_*}(j)}\sA_\inv{k},\min_{k<{\varphi_*}(\tilde{j})}\sA_\inv{k}\Big)$. Let us discuss these cases. \begin{itemize}
\item By Theorem~\ref{gsf2}(i), for $\alpha=\min_{k\leq{\varphi^*}(\tilde{j})}\sA_\inv{k}$ it holds that $\mu_{\bA}(\mathbf{x},\alpha)=\mu_{{\varphi^*}(\tilde{j})}=\mu_{\tilde{j}}$, where the last equality holds because of Proposition~\ref{vlastnosti_fi}(i), (ii).
However, $\mu_{\tilde{j}}\neq\mu_j$, otherwise we get a~contradiction.
Indeed, if $\mu_{\tilde{j}}=\mu_j$, then ${\varphi^*}(j)={\varphi^*}(\tilde{j})$ and thus $\min_{k\leq{\varphi^*}(\tilde{j})}\sA_\inv{k}=\min_{k\leq{\varphi^*}(j)}\sA_\inv{k}$, which is in conflict with choice of $\tilde{j}$.
\item Using the same arguments as in the previous case, for $\alpha=\min_{k<{\varphi_*}(j)}\sA_\inv{k}=\min_{k\leq{\varphi_*}(j)-1}\sA_\inv{k}$, we have $\mu_{\bA}(\mathbf{x},\alpha)=\mu_{{\varphi_*}(j)-1}$.
Further, $\mu_{{\varphi_*}(j)-1}<\mu_{{\varphi_*}(j)}=\mu_j$, thus $\mu_{{\varphi_*}(j)-1}\neq\mu_j$. \end{itemize}
This completes the proof. \null
$\Box\;\;$
\end{proof}
From the above proposition, we immediately obtain the following result.
\begin{corollary}\label{nenadobuda} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\varphi_*}$ and ${\varphi^*}$ be given as in~(\ref{phii}), and $j\in\ozn{\kappa-1}$. The value $\mu_j$ is not achieved if and only if $\min_{k\leq{\varphi^*}(j)}\sA_\inv{k}=\min_{k<{\varphi_*}(j)}\sA_\inv{k}$. \end{corollary}
\begin{lema}\label{nenadobuda3}
Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\varphi_*}$ and ${\varphi^*}$ be given as in~(\ref{phii}).
\begin{enumerate}[\rm(i)]
\item Let $j\in[\kappa-1]$. Then the following hold.
\begin{enumerate}
\item[\rm(i1)] If $\mu_j>\mu_{j-1}$, then ${\varphi_*}(j)-1={\varphi^*}(j-1)$ and $\mu_j$ is achieved on $\big[\min\limits_{k\leq {\varphi^*}(j)}\sA_\inv{k}, \min\limits_{k\leq {\varphi^*}(j-1)}\sA_\inv{k}\big)$. Moreover, this interval is the greatest possible.
\item[\rm(i2)] If $\mu_j=\mu_{j-1}$, then ${\varphi^*}(j)={\varphi^*}(j-1)$.
\end{enumerate}
\item Let $j\in\ozn{\kappa-1}$. Then it holds \begin{enumerate}
\item[\rm(ii1)] If $\mu_j<\mu_{j+1}$, then ${\varphi^*}(j)+1={\varphi_*}(j+1)$ and
$\mu_j$ is achieved on $\Big[\min_{k<{\varphi_*}(j+1)}\sA_\inv{k},\min_{k<{\varphi_*}(j)}\sA_\inv{k}\Big)$, with $\min\limits_{k< {\varphi_*}(\kappa)}\sA_\inv{k}=0$ by convention. Moreover, this interval is the greatest possible. \item[\rm(ii2)] If $\mu_j=\mu_{j+1}$, then ${\varphi_*}(j)={\varphi_*}(j+1)$.
\end{enumerate}
\end{enumerate} \end{lema}
\begin{proof} Let us prove part (i). \begin{enumerate}
\item[(i1)] If $\mu_j>\mu_{j-1}$, then directly from definitions of ${\varphi_*}$ and ${\varphi^*}$ we have ${\varphi_*}(j)=j$ and ${\varphi^*}(j-1)=j-1$, thus ${\varphi^*}(j-1)={\varphi_*}(j)-1$.
Further, according to Proposition~\ref{najgsf}, we have $$\mu_{\cA}(\mathbf{x},\alpha)=\mu_j$$ for each $\alpha\in\big[\min\limits_{k\leq {\varphi^*}(j)}\sA_\inv{k}, \min\limits_{k\leq {\varphi_*}(j)-1}\sA_\inv{k}\big)$ and this interval is the greatest possible.
Using the above equality
we have $$\big[\min\limits_{k\leq {\varphi^*}(j)}\sA_\inv{k}, \min\limits_{k\leq {\varphi_*}(j)-1}\sA_\inv{k}\big)=\big[\min\limits_{k\leq {\varphi^*}(j)}\sA_\inv{k}, \min\limits_{k\leq {\varphi^*}(j-1)}\sA_\inv{k}\big).$$ \item[(i2)] It holds trivially. \end{enumerate}
Part (ii) can be proved analogously. \null
$\Box\;\;$
\end{proof}
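The index identities in the lemma above are easy to check numerically. The following Python sketch is not part of the paper; it assumes, consistently with the properties used in the proofs, that ${\varphi_*}(j)$ and ${\varphi^*}(j)$ return the first and the last index carrying the value $\mu_j$ in the nondecreasing sequence $\mu_0\leq\dots\leq\mu_{\kappa-1}$:

```python
def phi_lower(mu, j):
    # phi_*(j): smallest index l with mu_l == mu_j (assumed reading of (phii))
    return min(l for l in range(len(mu)) if mu[l] == mu[j])

def phi_upper(mu, j):
    # phi^*(j): largest index l with mu_l == mu_j (assumed reading of (phii))
    return max(l for l in range(len(mu)) if mu[l] == mu[j])

# hypothetical nondecreasing measure values mu_0 <= ... <= mu_5
mu = [0, 0.5, 0.5, 0.5, 0.7, 1]
print(phi_lower(mu, 2), phi_upper(mu, 2))   # -> 1 3
```

For instance, $\mu_4>\mu_3$ gives ${\varphi_*}(4)-1=3={\varphi^*}(3)$, matching case (i1), while $\mu_1=\mu_2$ gives ${\varphi^*}(1)={\varphi^*}(2)$, matching case (i2).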
Let us notice that Proposition~\ref{najgsf} and its corollaries are crucial to state the sufficient conditions for indistinguishability of generalized Choquet integral equivalent pairs, see Section~\ref{Choquet}. Using the above results, one can obtain the improvement of formula~(\ref{vyjgsf2}).
\begin{proposition}\label{skrateniegsf2}Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, and ${\varphi_*}$ and ${\varphi^*}$ be given as in~(\ref{phii}). Then \begin{align}\label{skrateniegsf2f} \mu_{\cA}(\mathbf{x},\alpha)&=\sum_{j=1}^{\kappa-1}\mu_j\mathbf{1}_{\big[\min\limits_{k\leq {\varphi^*}(j)}\sA_\inv{k},\min\limits_{k\leq {\varphi^*}(j-1)}\sA_\inv{k}\big)}(\alpha)=\sum_{j=1}^{\kappa-1}\mu_j\mathbf{1}_{\big[\min\limits_{k< {\varphi_*}(j+1)}\sA_\inv{k},\min\limits_{k< {\varphi_*}(j)}\sA_\inv{k}\big)}(\alpha)
\end{align} for any $\alpha\in[0,\infty)$, with the conventions ${\varphi_*}(\kappa):=\kappa$ (thus $\min\limits_{k< {\varphi_*}(\kappa)}\sA_\inv{k}=0$), $\min\limits_{k< 0}\sA_\inv{k}=\min \emptyset=+\infty$. \end{proposition} \begin{proof} Let us consider an arbitrary (fixed) $j\in[\kappa-1]$. If $\mu_j>\mu_{j-1}$, then according to Lemma~\ref{nenadobuda3} (i1) $\mu_j$ is achieved on $\Big[\min_{k\leq{\varphi^*}(j)}\sA_\inv{k},\min_{k\leq{\varphi^*}(j-1)}\sA_\inv{k}\Big)$. Moreover, this interval is the greatest possible. If $\mu_j=\mu_{j-1}$, then ${\varphi^*}(j)={\varphi^*}(j-1)$, see Lemma~\ref{nenadobuda3}(i2), thus $\Big[\min_{k\leq{\varphi^*}(j)}\sA_\inv{k},\min_{k\leq{\varphi^*}(j-1)}\sA_\inv{k}\Big)=\emptyset$. This demonstrates that each value of the generalized survival function is included just once in the sum, and the first formula is correct. The second formula can be proved analogously. \null
$\Box\;\;$
\end{proof}
If we use the mapping $\mathbf{j}$ defined by~(\ref{jj}), then formulas in~(\ref{skrateniegsf2f}) can be rewritten similarly as in Corollary~\ref{application_j}, i.e., $$\mu_{\cA}(\mathbf{x},\alpha)=\sum_{j=1}^{\kappa-1}\mu_j\mathbf{1}_{\big[\sA_{\mathbf{j}({\varphi^*}(j))},\sA_{\mathbf{j}({\varphi^*}(j-1))}\big)}(\alpha)=\sum_{j=1}^{\kappa-1}\mu_j\mathbf{1}_{\big[\sA_{\mathbf{j}({\varphi_*}(j+1)-1)},\sA_{\mathbf{j}({\varphi_*}(j)-1)}\big)}(\alpha). $$ The use of the approach described in the previous proposition is shown in the following example.
\begin{example}\label{example_j} \rm Let us consider the same inputs as in Example~\ref{graphical_representation}. Let us use the last formula given in~\eqref{skrateniegsf2f} to calculate the generalized survival function. \begin{itemize*} \item For $j=1$ we get $\mu_1\cdot\mathbf{1}_{[\sA_{\mathbf{j}(0)},\sA_{\mathbf{j}(0)})}=0.5\cdot\mathbf{1}_{\emptyset}$. \item For $j=2$ we get $\mu_2\cdot\mathbf{1}_{[\sA_{\mathbf{j}(0)},\sA_{\mathbf{j}(0)})}=0.5\cdot\mathbf{1}_{\emptyset}$. \item For $j=3$ we get $\mu_3\cdot\mathbf{1}_{[\sA_{\mathbf{j}(3)},\sA_{\mathbf{j}(0)})}=\mu_3\cdot\mathbf{1}_{[\sA_{1},\sA_{5})}=0.5\cdot\mathbf{1}_{[1,6)}$. \item For $j=4$ we get $\mu_4\cdot\mathbf{1}_{[\sA_{\mathbf{j}(4)},\sA_{\mathbf{j}(3)})}=\mu_4\cdot\mathbf{1}_{[\sA_{1},\sA_{1})}=0.7\cdot\mathbf{1}_{\emptyset}$. \item For $j=5$ we get $\mu_5\cdot\mathbf{1}_{[\sA_{\mathbf{j}(5)},\sA_{\mathbf{j}(4)})}=\mu_5\cdot\mathbf{1}_{[\sA_{0},\sA_{1})}=1\cdot\mathbf{1}_{[0,1)}$. \end{itemize*} Therefore, the generalized survival function has the form \begin{align*}
\mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,6)}(\alpha)\text{,} \end{align*} $\alpha\in[0,+\infty)$, compare with Example~\ref{priklad2}. \end{example}
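The step function computed above can also be cross-checked against the minimum formula~\eqref{vyjgsf1} used later in the Appendix, by which the generalized survival function at level $\alpha$ is the smallest value $\mu_j$ whose threshold $\sA(\mathbf{x}|F_j^c)$ does not exceed $\alpha$. Below is a small Python sketch (not part of the paper); the measure values and thresholds are hypothetical but chosen so that the resulting step function agrees with the example above:

```python
def gsf(mu, thresholds, alpha):
    # generalized survival function at level alpha via the minimum formula:
    # the smallest mu_j whose threshold sA(x|F_j^c) is <= alpha
    return min(m for m, s in zip(mu, thresholds) if s <= alpha)

mu = [0, 0.5, 0.5, 0.5, 0.7, 1]   # hypothetical mu_0 <= ... <= mu_5
thresholds = [6, 6, 6, 1, 1, 0]   # hypothetical values sA(x|F_j^c), j = 0, ..., 5
print([gsf(mu, thresholds, a) for a in (0.5, 3, 7)])   # -> [1, 0.5, 0]
```

The output reproduces the step function $1\cdot\mathbf{1}_{[0,1)}+0.5\cdot\mathbf{1}_{[1,6)}$ at the three sampled levels.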
As in the rest of the paper, let us describe the greatest possible intervals on which a value of the monotone measure is achieved, now using the bijection $E_*\colon \ozn{\kappa-1}\to{\cE}$, see~\eqref{EiAi}. Since the proofs of the following propositions are analogous, we omit them. Let $i\in\ozn{\kappa-1}$, let us define \begin{align}\label{psii} \begin{split} {\psi_*}(i)&:=\min\{l\in\ozn{\kappa-1}:\min_{k\leq l}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}\}\text{,}\\\text{and}\quad{\psi^*}(i)&:=\max\{l\in\ozn{\kappa-1}:\min_{k\leq l}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}\}\text{.} \end{split} \end{align}
\begin{proposition}\label{vlastnosti_psi} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,\infty)^{[n]}$, ${\psi^*}$, ${\psi_*}$ be defined as in~\eqref{psii} and $a,b,i\in\ozn{\kappa-1}$. Then the following properties hold. \begin{enumerate}[\rm (i)] \item ${\psi_*}(i)=\min\{l\in\ozn{\kappa-1}:\mu_{(l)}=\min_{k\leq i}\mu_{(k)}\}\text{.}$
\item ${\psi_*}(0)=0$, ${\psi_*}(i)\leq i\leq{\psi^*}(i)$.
\item For any integer $u\in\{{\psi_*}(i),\dots,{\psi^*}(i)\}$ it holds $\min_{k\leq u}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}=\mu_{({\psi_*}(i))}=\mu_{\mathbf{i}(i)}$.
\item $\min_{k\leq a}\mu_{(k)}=\min_{k\leq b}\mu_{(k)}$ if and only if ${\psi_*}(a)={\psi_*}(b)$ if and only if ${\psi^*}(a)={\psi^*}(b)$.
\item $\min_{k\leq a}\mu_{(k)}>\min_{k\leq b}\mu_{(k)}$ if and only if ${\psi_*}(a)<{\psi_*}(b)$ if and only if ${\psi^*}(a)<{\psi^*}(b)$.
\item ${\psi_*}$ and ${\psi^*}$ are nondecreasing.
\end{enumerate} \end{proposition} \begin{proof} See Appendix.\null
$\Box\;\;$
\end{proof}
\begin{remark} In general, we cannot determine the order of the values $\mu_{(i)}$, $i\in\ozn{\kappa-1}$.
However, it holds
\begin{align*}
&\mu_{({\psi_*}(0))}\geq\mu_{({\psi_*}(1))}\geq\dots\geq\mu_{({\psi_*}(\kappa-1))}\text{,}\\
\text{and}\quad& \mu_{({\psi^*}(0))}\geq\mu_{({\psi^*}(1))}\geq\dots\geq\mu_{({\psi^*}(\kappa-1))}\text{.}
\end{align*}
This property follows directly from the definitions of ${\psi_*}$ and ${\psi^*}$.
Indeed, we have already shown that for $a\leq b$ we get ${\psi_*}(a)\leq{\psi_*}(b)$.
In case that ${\psi_*}(a)={\psi_*}(b)$, the result is clear. If ${\psi_*}(a)<{\psi_*}(b)$, then the result follows from Proposition~\ref{vlastnosti_psi} (vi). Similarly for ${\psi^*}$. \end{remark}
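Definition~\eqref{psii} depends only on the running minima of the sequence $\mu_{(k)}$, so both maps are easy to evaluate directly. The following sketch (ours, with hypothetical data) computes them and illustrates the properties from Proposition~\ref{vlastnosti_psi}:

```python
def psi(mu_perm, i):
    # (psi_*(i), psi^*(i)) per (psii): the first and the last index l whose
    # running minimum min_{k<=l} mu_(k) equals the running minimum at i
    runmin = [min(mu_perm[: l + 1]) for l in range(len(mu_perm))]
    eq = [l for l, v in enumerate(runmin) if v == runmin[i]]
    return min(eq), max(eq)

mu_perm = [1, 0.5, 0.7, 0.9, 0.6, 0]   # hypothetical values mu_(0), ..., mu_(5)
print([psi(mu_perm, i) for i in range(6)])
# -> [(0, 0), (1, 4), (1, 4), (1, 4), (1, 4), (5, 5)]
```

One sees ${\psi_*}(0)=0$, ${\psi_*}(i)\leq i\leq{\psi^*}(i)$ and the monotonicity of both maps, in accordance with parts (ii) and (vi) of the proposition.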
In Theorem~\ref{zjednodusenie_def} and Corollary~\ref{application_i}, we have described the value of the monotone measure acquired on a given interval. However, the same value can be acquired on several intervals. Using ${\psi^*}$ and ${\psi_*}$, we can determine the greatest interval on which the value of the monotone measure is achieved.
\begin{proposition}\label{naj_gsf_cez_psi} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\psi_*}$ and ${\psi^*}$ be given as in~(\ref{psii}), and $i\in\ozn{\kappa-1}$. Then $$\mu_{\cA}(\mathbf{x},\alpha)=\min_{k\leq i}\mu_{(k)} \text{ for any } \alpha\in\big[\sA_{{\psi_*}(i)},\sA_{{\psi^*}(i)+1}\big)$$ and this interval is the greatest possible. \end{proposition} \begin{proof} See Appendix.\null
$\Box\;\;$
\end{proof}
\begin{lema}\label{lema_ku_psi} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\psi_*}$ and ${\psi^*}$ be given as in~(\ref{psii}). \begin{enumerate}[\rm(i)]
\item Let $i\in\ozn{\kappa-2}$. Then it holds
\begin{enumerate}
\item[\rm(i1)] If $\min_{k\leq i}\mu_{(k)}>\min_{k\leq i+1}\mu_{(k)}$, then ${\psi^*}(i)+1={\psi_*}(i+1)$ and $\min_{k\leq i}\mu_{(k)}
$ is achieved on $\big[\sA_{{\psi_*}(i)}, \sA_{{\psi_*}(i+1)}\big)$. Moreover, this interval is the greatest possible.
\item[\rm(i2)] If $\min_{k\leq i}\mu_{(k)}=\min_{k\leq i+1}\mu_{(k)}$, then ${\psi_*}(i)={\psi_*}(i+1)$.
\end{enumerate}
\item Let $i\in{{[\kappa-1]}}$. Then it holds
\begin{enumerate}
\item[\rm(ii1)] If $\min_{k\leq i-1}\mu_{(k)}<\min_{k\leq i}\mu_{(k)}$, then ${\psi_*}(i)={\psi^*}(i-1)+1$ and
$\min_{k\leq i}\mu_{(k)}
$ is achieved on $\Big[\sA_{{\psi^*}(i-1)+1},\sA_{{\psi^*}(i)+1}\Big)$.
Moreover, this interval is the greatest possible.
\item[\rm(ii2)] If $\min_{k\leq i-1}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}$, then ${\psi^*}(i-1)={\psi^*}(i)$.
\end{enumerate} \end{enumerate} \end{lema}
\begin{proposition}\label{skratenie_cez_psi} Let $\cA$ be a FCA, $\mu\in\mathbf{M}$, $\mathbf{x}\in[0,+\infty)^{[n]}$, ${\psi_*}$ and ${\psi^*}$ be given as in~(\ref{psii}), and $i\in\ozn{\kappa-1}$. Then
the formula of generalized survival function is as follows \begin{align}\label{skrateniegsf2fA}
\mu_{\cA}(\mathbf{x},\alpha)=\sum_{i=0}^{\kappa-2}\mu_{({\psi_*}(i))}\mathbf{1}_{\big[\sA_{{\psi_*}(i)},\sA_{{\psi_*}(i+1)}\big)}(\alpha)=\sum_{i={0}}^{\kappa-2}{\mu_{({\psi_*}(i))}}\mathbf{1}_{\big[\sA_{{\psi^*}(i-1)+1},\sA_{{\psi^*}(i)+1}\big)}(\alpha) \end{align} for any $\alpha\in[0,+\infty)$ with the convention ${\psi^*}(-1)=-1$.
\end{proposition} \begin{proof}
See Appendix.
\null
$\Box\;\;$
\end{proof}
If we use the mapping $\mathbf{i}$ defined by~(\ref{ii}), then formulas in~(\ref{skrateniegsf2fA}) can be rewritten similarly as in Corollary~\ref{application_i}, i.e.,
$$\mu_{\cA}(\mathbf{x},\alpha)=\sum_{i=0}^{\kappa-2}\mu_{\mathbf{i}(i)}\mathbf{1}_{\big[\sA_{{\psi_*}(i)},\sA_{{\psi_*}(i+1)}\big)}(\alpha)=\sum_{i={0}}^{\kappa-2}{\mu_{\mathbf{i}(i)}}\mathbf{1}_{\big[\sA_{{\psi^*}(i-1)+1},\sA_{{\psi^*}(i)+1}\big)}(\alpha)\text{.}$$
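As a numerical sanity check (our sketch, with hypothetical data), the compressed sum from~\eqref{skrateniegsf2fA} can be compared with the direct interval evaluation of Theorem~\ref{zjednodusenie_def}, which assigns the value $\min_{k\leq i}\mu_{(k)}$ on $[\sA_i,\sA_{i+1})$:

```python
import bisect

def gsf_by_intervals(mu_perm, sA, alpha):
    # direct evaluation: the value min_{k<=i} mu_(k) on [sA_i, sA_{i+1}),
    # with sA nondecreasing and sA[0] = 0
    i = bisect.bisect_right(sA, alpha) - 1
    return min(mu_perm[: i + 1])

def gsf_by_psi(mu_perm, sA, alpha):
    # first sum in (skrateniegsf2fA): only the intervals
    # [sA_{psi_*(i)}, sA_{psi_*(i+1)}) contribute, each value exactly once
    kappa = len(mu_perm)
    runmin = [min(mu_perm[: l + 1]) for l in range(kappa)]
    psi_low = [min(l for l in range(kappa) if runmin[l] == runmin[i])
               for i in range(kappa)]
    return sum(mu_perm[psi_low[i]]
               for i in range(kappa - 1)
               if sA[psi_low[i]] <= alpha < sA[psi_low[i + 1]])

mu_perm = [1, 0.5, 0.7, 0.9, 0.6, 0]   # hypothetical mu_(0), ..., mu_(5)
sA = [0, 1, 2, 4, 5, 6]                # hypothetical sA_0 <= ... <= sA_5
print(all(gsf_by_intervals(mu_perm, sA, a) == gsf_by_psi(mu_perm, sA, a)
          for a in (0, 0.5, 1, 3, 5.5, 7)))   # -> True
```

Both evaluations produce the step function $1\cdot\mathbf{1}_{[0,1)}+0.5\cdot\mathbf{1}_{[1,6)}$ on the sampled levels, although the second one touches only two of the six intervals.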
As we have already mentioned, searching for optimal intervals will be helpful for studying the indistinguishability of generalized survival functions. In the following, we state sufficient and necessary conditions under which the generalized survival functions coincide. This is applicable to decision-making problems. In fact, if the generalized survival functions of two alternatives (e.g.\ two offers of accommodation) are the same, then their overall score will be the same. Both alternatives will be in the same place in the ranking.
\begin{definition}The triples $(\mu, \cA, \mathbf{x})$ and $(\mu^\prime, \cA^\prime, \mathbf{x}^\prime)$, where $\mu,\mu^\prime$ are monotone measures, $\cA,\cA^\prime$ are FCA and $\mathbf{x},\mathbf{x}^\prime$ are vectors, are called \textit{integral equivalent} if $$\mu_{\cA}(\mathbf{x},\alpha)=\mu^\prime_{\cA^\prime}(\mathbf{x}^\prime,\alpha)\quad\text{for any }\alpha\in[0,+\infty).$$\end{definition}
\begin{proposition}\label{prop_nerozlisitelnost} Let $\cA$ and $\cA^\prime$ be a FCA, $\mu,\mu^{\prime}\in\mathbf{M}$, $\mathbf{x},\mathbf{x}^\prime\in[0,+\infty)^{[n]}$ and ${\varphi^*}$ be given by~(\ref{phii}). Then the following assertions are equivalent: \begin{enumerate}[\rm (i)] \item $\mu_{\cA}(\mathbf{x},\alpha)=\mu^\prime_{\cA^{\prime}}(\mathbf{x}^{\prime},\alpha)$ for any $\alpha\in[0,\infty)$; \item for each $j\in\ozn{\kappa-1}$ with
$\min_{k\leq{\varphi^*}(j)}\sA(\mathbf{x}|F_k^c)<\min_{k<{\varphi_*}(j)}\sA(\mathbf{x}|F_k^c)$
there exists $j^{\prime}\in\ozn{\kappa^\prime-1}$ such that $\mu_{j}=\mu^\prime_{j^{\prime}}$, $\min_{k\leq{\varphi^*}(j)}\sA(\mathbf{x}|F_k^c)=\min_{k\leq{\varphi^*}(j^\prime)}\sA^\prime(\mathbf{x}^\prime|F_k^c)$ and $\min_{k<{\varphi_*}(j)}\sA_\inv{k}=\min_{k<{\varphi_*}(j^\prime)}\sA^\prime_\inv{k}$.
\end{enumerate} \end{proposition} \begin{proof} It follows from Proposition~\ref{najgsf}. \null
$\Box\;\;$
\end{proof}
\begin{remark}
Following Proposition~\ref{naj_gsf_cez_psi}, condition (ii) in the previous proposition can be equivalently formulated as follows:
for each $i\in\ozn{\kappa-1}$ with $\sA_{{\psi_*}(i)}<\sA_{{\psi^*}(i)+1}$
there exists $i^{\prime}\in\ozn{\kappa^\prime-1}$ such that $\mu_{({\psi_*}(i))}=\mu^\prime_{({\psi_*}(i^{\prime}))}$, $\sA(\mathbf{x}|E_{{\psi_*}(i)})=\sA^{\prime}(\mathbf{x}^{\prime}|E_{{\psi_*}(i^{\prime})})$ and $\sA(\mathbf{x}|E_{{\psi^*}(i)+1})=\sA^{\prime}(\mathbf{x}^{\prime}|E_{{\psi^*}(i^{\prime})+1})$.
\end{remark}
Fixing a collection $\cE$ and a monotone measure $\mu$, we obtain the following sufficient and necessary condition for the integral equivalence of the triples $(\mu, \cA, \mathbf{x})$ and $(\mu, \cA^\prime, \mathbf{x}^\prime)$.
\begin{corollary}
Let $\cA=\{\aA{\cdot}: E\in\mathcal{E}\}$ and $\cA^\prime=\{\aAp{\cdot}: E\in\mathcal{E}\}$ be FCA, $\mu\in\mathbf{M}$, $\mathbf{x},\mathbf{x}^\prime\in[0,+\infty)^{[n]}$, and ${\varphi_*},{\varphi^*}$ and ${\psi_*},{\psi^*}$ be given by~\eqref{phii}, \eqref{psii}, respectively. Then the following assertions are equivalent: \begin{enumerate}[\rm (i)] \item $\mu_{\cA}(\mathbf{x},\alpha)=\mu_{\cA^\prime}(\mathbf{x}^\prime,\alpha)$ for any $\alpha\in[0,+\infty)$;
\item $\min_{k\leq{\varphi^*}(j)}\sA(\mathbf{x}|F_k^c)=\min_{k\leq{\varphi^*}(j)}\sA^\prime(\mathbf{x}^\prime|F_k^c)$
for any $j\in\ozn{\kappa-1}$\\ (or equivalently, $\sA(\mathbf{x}|E_{{\psi_*}(i)})=\sA^{\prime}(\mathbf{x}^{\prime}|E_{{\psi_*}(i)})$ for each $i\in\ozn{\kappa-1}$).
\end{enumerate} \end{corollary}
\begin{remark} In~\cite{ChenMesiarLiStupnanova2017}, the authors derived a necessary
and sufficient condition for the equality $\mu(\{\mathbf{x}\geq\alpha\})=\mu(\{\mathbf{y}\geq\alpha\})$ with $\mu$ being a possibility measure and a necessity measure, respectively. Our result covers the equality $\mu(\{\mathbf{x}>\alpha\})=\mu(\{\mathbf{y}>\alpha\})$ with $\mu$ being an arbitrary monotone measure.
\end{remark}
\begin{example} Let us consider the collection $\mathcal{E}=\{\emptyset,\{1\},\{1,2,3\}\}$, the families of conditional aggregation operators $\cA^\mathrm{max}=\{\aAi[E]{\cdot}{\mathrm{max}}:E\in\mathcal{E}\}$ and $\cA^\mathrm{sum}=\{\aAi[E]{\cdot}{\mathrm{sum}}:E\in\mathcal{E}\}$, the vectors $\mathbf{x}=(2,5,9)$ and $\mathbf{x}^\prime=(2,3,4)$, and the monotone measure $\mu\in\mathbf{M}$ with the values given in the table below. \begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center}
\begin{tabular}{|c|c|c|c|} \hline $j$ & 0 & 1 & 2 \\ \hline $F_j$ & $\emptyset$ & $\{2,3\}$ & $\{1,2,3\}$ \\ \hline $\mu_j$ & $0$ & $0.5$ & $1$ \\ \hline
$\sA^{\mathrm{max}}(\mathbf{x}|F_j^c)=\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_j^c)$ & $9$ & $2$ & $0$ \\\hline
\end{tabular} \end{center} \end{table}
\noindent It is easy to see that for any $j\in\ozn{2}$ we have ${\varphi^*}(j)=j$ and $\min_{k\leq j}\sA^{\mathrm{max}}(\mathbf{x}|F_k^c)=\min_{k\leq j}\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_k^c)$. Then, according to the previous corollary, the corresponding generalized survival functions coincide:
$$\mu_{\cA^{\mathrm{max}}}(\mathbf{x},\alpha)=\mu_{\cA^{\mathrm{sum}}}(\mathbf{x}^\prime,\alpha)=1\cdot\mathbf{1}_{[0,2)}+0.5\cdot\mathbf{1}_{[2,9)}$$ for any $\alpha\in[0,+\infty)$. \end{example}
The following example demonstrates a standard situation in decision-making processes. Different alternatives can be evaluated with the same score, thus they are incomparable. It is not possible to decide which one is better.
\begin{example} Let us consider the collection $\mathcal{E}=\{\emptyset,\{2\},\{3\},\{1,2\},\{1,3\},\{1,2,3\}\}$, the family of conditional aggregation operators $\cA^\mathrm{sum}=\{\aAi[E]{\cdot}{\mathrm{sum}}:E\in\mathcal{E}\}$, vectors $\mathbf{x}=(1,3,5)$ and $\mathbf{x}^\prime=(2,4,3)$, and the monotone measure $\mu\in\mathbf{M}$ with the values given in the table below.
\begin{table}[H] \renewcommand*{\arraystretch}{1.2} \begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline $j$ & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline $F_j$ & $\emptyset$ & $\{1,3\}$ & $\{1,2\}$ & $\{3\}$ & $\{2\}$ & $\{1,2,3\}$ \\ \hline $\mu_j$ & $0$ & $0.5$ & $0.5$ & $0.5$ & $0.5$ & $1$ \\ \hline
$\sA^{\mathrm{sum}}(\mathbf{x}|F_k^c)$ & $9$ & $3$ & $5$ & $4$ & $6$ & $0$ \\ \hline
$\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_k^c)$ & $9$ & $4$ & $3$ & $6$ & $5$ & $0$ \\ \hline \end{tabular} \end{center} \end{table} \noindent It is easy to see that ${\varphi^*}(0)=0, {\varphi^*}(1)={\varphi^*}(2)={\varphi^*}(3)={\varphi^*}(4)=4, {\varphi^*}(5)=5$ and \begin{center}
$\min_{k\leq 0}\sA^{\mathrm{sum}}(\mathbf{x}|F_k^c)=9=\min_{k\leq 0}\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_k^c)$,\\ $\min_{k\leq 4}\sA^{\mathrm{sum}}(\mathbf{x}|F_k^c)=3=\min_{k\leq 4}\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_k^c)$,\\ $\min_{k\leq 5}\sA^{\mathrm{sum}}(\mathbf{x}|F_k^c)=0=\min_{k\leq 5}\sA^{\mathrm{sum}}(\mathbf{x}^\prime|F_k^c)$. \end{center} Then, according to the previous corollary, the corresponding generalized survival functions coincide:
$$\mu_{\cA^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mu_{\cA^{\mathrm{sum}}}(\mathbf{x}^\prime,\alpha)=1\cdot\mathbf{1}_{[0,3)}+0.5\cdot\mathbf{1}_{[3,9)}$$ for any $\alpha\in[0,+\infty)$.
\end{example}
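The equivalence condition used in the last two examples is easy to verify mechanically. The following sketch (ours, not the paper's code) encodes it for a fixed collection: it compares the prefix minima of the two threshold sequences $\sA(\mathbf{x}|F_k^c)$ and $\sA^\prime(\mathbf{x}^\prime|F_k^c)$ at the indices ${\varphi^*}(j)$, using the data of the last example:

```python
def integral_equivalent(mu, s, s_prime):
    # condition of the corollary: the prefix minima of sA(x|F_k^c) and
    # sA'(x'|F_k^c) agree at every index phi^*(j) (last index with value mu_j)
    kappa = len(mu)
    phi_up = [max(l for l in range(kappa) if mu[l] == mu[j]) for j in range(kappa)]
    prefix_min = lambda seq, m: min(seq[: m + 1])
    return all(prefix_min(s, phi_up[j]) == prefix_min(s_prime, phi_up[j])
               for j in range(kappa))

mu = [0, 0.5, 0.5, 0.5, 0.5, 1]   # measure values mu_0, ..., mu_5 from the example
s = [9, 3, 5, 4, 6, 0]            # sA^sum(x|F_k^c)
s_prime = [9, 4, 3, 6, 5, 0]      # sA^sum(x'|F_k^c)
print(integral_equivalent(mu, s, s_prime))   # -> True
```

Perturbing a single threshold, e.g.\ replacing the value $5$ by $2$ in the second sequence, changes the prefix minimum at index $4$ and the check returns False.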
Immediately, we obtain a sufficient condition for the equality of generalized Choquet integrals defined w.r.t.\ different FCA and vectors. \begin{corollary}\label{rovnostCh}Let $\cA$ and $\cA^\prime$ be a FCA, $\mu\colon\hat{\cE}\to[0,+\infty)$, $\mathbf{x},\mathbf{x}^\prime\in[0,+\infty)^{[n]}$ and $\varphi$ be given by~(\ref{phii}). If $\min_{k\leq\varphi(i)}\aA[F_k^c]{\mathbf{x}}=\min_{k\leq\varphi(i)}\aAi[F_k^c]{\mathbf{x}^\prime}{\prime}$ for any $\varphi(i)\in[\kappa]$, then $$\mathrm{C}_{\cA}(\mathbf{x},\mu)=\mathrm{C}_{\cA^\prime}(\mathbf{x}^\prime,\mu).$$ \end{corollary}
\section*{Conclusion}
In this paper, we dealt with the concept of the generalized survival function and the generalized Choquet integral related to it. Considering their applications, see Section 2, we were mainly interested in their computational formulas on a discrete space.
The derivation of these formulas required the introduction of new notation whose idea and practical meaning can be interpreted visually, as described in Sections 3 and 4. Interesting results are Theorem~\ref{zjednodusenie_def}, Theorem~\ref{gsf2}, Corollary~\ref{application_i} and Corollary~\ref{application_j}.
In Section 4 we also pointed out the direct and efficient construction of the graph of the generalized survival function.
Motivated by applications, we studied the indistinguishability of generalized survival functions. This question is interesting especially in decision-making processes, because indistinguishability means incomparability of two inputs (alternatives). The results given in Proposition~\ref{skrateniegsf2}, Proposition~\ref{skratenie_cez_psi} and Proposition~\ref{prop_nerozlisitelnost} are related to this question.
In this paper, we also presented formulas for calculating the generalized Choquet integral with respect to special types of monotone measures, and we solved the introductory problems serving as a motivation for studying the mentioned concepts.
\section*{Acknowledgments} The work was supported by the grants APVV-21-0468, VEGA 1/0657/22, and the grant scheme VVGS-PF-2022-2143.
\section*{Appendix} \textbf{Proof of Corollary~\ref{specimiery}} \begin{enumerate}[(i)] \item Clearly, in accordance with~(\ref{Fi}) we have $0=\mu_0=\mu(\emptyset)$ and according to Theorem~\ref{gsf2} it is achieved on $\Big[\min_{k\leq 0}\sA_{\inv{k}},+\infty\Big)=[\sA_{\inv{0}},+\infty)=[\aA[{[n]}]{\mathbf{x}},+\infty)$. Then $1=\mu_1=\dots=\mu_{\kappa-1}$ is achieved on $[0,\aA[{[n]}]{\mathbf{x}})$.
\item From Theorem~\ref{gsf2}, the value $1=\mu_{\kappa-1}$ is achieved on $\Big[0, \min_{k<\kappa-1}\sA_{\inv{k}}\Big)=\Big[0,\min_{E\neq \emptyset}\aA[E]{\mathbf{x}}\Big).$
\item
Let us consider the bijection $F_{*}$ given in~\eqref{Fi} such that $|F_{k}|<|F_{l}|$ implies $k<l$ and for any $i\in[n]_0$ let us set \begin{align*}\omega^{*}(i)&=\max\{j\in[\kappa-1]_0:|F_j|=i\},\,\,
\omega_{*}(i)=\min\{j\in[\kappa-1]_0:|F_j|=i\}. \end{align*} It is easy to see that $\omega_{*}(i)-1=\omega^{*}(i-1)$ with the convention $\omega^{*}(-1)=-1$. Further, because of monotonicity of $\mu$ and FCA for any $i\in[n]_0$ we have \begin{align*}
\min_{k\leq\omega^{*}(i)}\sA_{\inv{k}}= \min_{k\leq\omega^{*}(i)}\sA(\mathbf{x}|F_k^c)=\min_{k\in[\omega_{*}(i),\omega^{*}(i)]}\sA(\mathbf{x}|F_k^c)=\min_{|E|=n-i}\sA(\mathbf{x}|E). \end{align*}
Indeed, for any $k<\omega_{*}(i)$ since $\mathcal{E}=2^{[n]}$ there exists $\tilde{k}\in[\omega_{*}(i), \omega^{*}(i)]$ such that $F_{\tilde{k}}\supset F_{k}$. Then $\sA(\mathbf{x}|F_{\tilde{k}}^c)\leq \sA(\mathbf{x}|F_{k}^c)$. According to Theorem~\ref{gsf2} we have that $\mu^i$ is achieved on \begin{align*}\bigcup_{j\in\{\omega_{*}(i),\dots,\omega^{*}(i)\}}\Big[\min_{k\leq j}\sA_{\inv{k}},\min_{k< j}\sA_{\inv{k}}\Big)&=\Big[\min_{k\leq \omega^{*}(i)}\sA_{\inv{k}},\min_{k< \omega_{*}(i)}\sA_{\inv{k}}\Big)\\&=\Big[\min_{k\leq \omega^{*}(i)}\sA_{\inv{k}},\min_{k\leq \omega^{*}(i-1)}\sA_{\inv{k}}\Big)\\&=\Big[\min_{|E|=n-i}\sA(\mathbf{x}|E),\min_{|E|=n-i+1}\sA(\mathbf{x}|E)\Big). \end{align*} \item Let us consider the bijection $F_{*}$ given in~\eqref{Fi} such that $F_0=\emptyset$ and for any $i\in\{2,3,\dots,n\}$ \begin{center} $\{\sigma(i-1)\}\subseteq F_{k}\subseteq G^c_{\sigma(i)}$, $\{\sigma(i)\}\subseteq F_{l}\subseteq G^c_{\sigma(i+1)}$, implies $k<l$. \end{center}
According to Theorem~\ref{gsf2} it is clear that $\pi(\sigma(0))$ is achieved on $\Big[\sA(\mathbf{x}|G_{\sigma(1)}),+\infty\Big)$. Further, for any $i\in[n]$ let us set \begin{align*}\tau^{*}(\sigma(i))&=\max\{j\in[\kappa-1]_0:\Pi(F_j)=\pi(\sigma(i))\,\,\text{and}\,\,\{\sigma(i)\}\subseteq F_j\subseteq G^c_{\sigma(i+1)}\},\,\,\\ \tau_{*}(\sigma(i))&=\min\{j\in[\kappa-1]_0:\Pi(F_j)=\pi(\sigma(i))\,\,\text{and}\,\,\{\sigma(i)\}\subseteq F_j\subseteq G^c_{\sigma(i+1)}\}. \end{align*} It is easy to see that $\tau_{*}(\sigma(i))-1=\tau^{*}(\sigma(i-1))$ with the convention $\tau^{*}(0)=0$. Further, because of monotonicity of $\mu$ and FCA for any $i\in[n]_0$ we have \begin{align*}
\min_{k\leq\tau^{*}(\sigma(i))}\sA_{\inv{k}}= \min_{k\leq\tau^{*}(\sigma(i))}\sA(\mathbf{x}|F_k^c)=\min_{k\in\{\tau_{*}(\sigma(i)),\dots,\tau^{*}(\sigma(i))\}}\sA(\mathbf{x}|F_k^c)=\sA(\mathbf{x}|G_{\sigma(i+1)}). \end{align*}
Indeed, for any $k<\tau_{*}(\sigma(i))$ we have $F_{k}\subseteq G^c_{\sigma(i+1)}$, where $G^c_{\sigma(i+1)}=F_{\tilde{k}}$ for some $\tilde{k}\in[\tau_{*}(\sigma(i)),\tau^{*}(\sigma(i))]$, thus explaining the second equality. Further, for any $k\in[\tau_{*}(\sigma(i)),\tau^{*}(\sigma(i))]$ $$F_{k}\subseteq G^c_{\sigma(i+1)}$$ therefore $\sA(\mathbf{x}|G_{\sigma(i+1)})\leq \sA(\mathbf{x}|F_{k}^c)$. According to Theorem~\ref{gsf2} we have that $\pi(\sigma(i))$ is achieved on
\begin{align*}\bigcup_{j\in\{\tau_{*}(\sigma(i)),\dots,\tau^{*}(\sigma(i))\}}\Big[\min_{k\leq j}\sA_{\inv{k}},\min_{k< j}\sA_{\inv{k}}\Big)&=\Big[\min_{k\leq \tau^{*}(\sigma(i))}\sA_{\inv{k}},\min_{k< \tau_{*}(\sigma(i))}\sA_{\inv{k}}\Big)\\&=\Big[\min_{k\leq \tau^{*}(\sigma(i))}\sA_{\inv{k}},\min_{k\leq \tau^{*}(\sigma(i-1))}\sA_{\inv{k}}\Big)\\&=\Big[\sA(\mathbf{x}|G_{\sigma(i+1)}),\sA(\mathbf{x}|G_{\sigma(i)})\Big). \end{align*} \item It is clear that $N([n])=1-\Pi(\emptyset)=1$ and for each $\{\sigma(i)\}\subseteq F^c\subseteq G_{\sigma(i+1)}^c$ $$N(F)=1-\Pi(F^c)=1-\pi(\sigma(i)),$$ with $i\in[n]$ and $$0=1-\pi(\sigma(n))\leq \dots \leq 1-\pi(\sigma(i))\leq\dots\leq 1-\pi(\sigma(0))=1.$$ Let us consider the bijection $F_{*}$ given in~\eqref{Fi} such that $F_{\kappa-1}=\emptyset$ and for any $i\in\{2,3,\dots,n\}$ \begin{center} $\{\sigma(i-1)\}\subseteq F_{k}\subseteq G^c_{\sigma(i)}$, $\{\sigma(i)\}\subseteq F_{l}\subseteq G^c_{\sigma(i+1)}$, implies $k>l$. \end{center}
According to Theorem~\ref{gsf2} it is clear that $0=1-\pi(\sigma(n))$ is achieved on $\Big[\sA(\mathbf{x}|\{\sigma(n)\}),+\infty\Big)$. Further, for any $i\in[n-1]$ let us set \begin{align*}\rho^{*}(\sigma(i))&=\max\{j\in[\kappa-1]_0:N(F_j)=1-\pi(\sigma(i))\,\,\text{and}\,\,\{\sigma(i)\}\subseteq F_j^c\subseteq G_{\sigma(i+1)}^c\}\\ \rho_{*}(\sigma(i))&=\min\{j\in[\kappa-1]_0:N(F_j)=1-\pi(\sigma(i))\,\,\text{and}\,\,\{\sigma(i)\}\subseteq F_j^c\subseteq G_{\sigma(i+1)}^c\}. \end{align*}
It is easy to see that $\rho^{*}(\sigma(i))\geq\rho_{*}(\sigma(i))>0$ and $\rho_{*}(\sigma(i))-1=\rho^{*}(\sigma(i+1))$. Further, because of monotonicity of $\mu$ and FCA for any $i\in[n-1]_0$ we have \begin{align*}
\min_{k\leq\rho^{*}(\sigma(i))}\sA_{\inv{k}}= \min_{k\leq\rho^{*}(\sigma(i))}\sA(\mathbf{x}|F_k^c)=\min_{k\geq i}\sA(\mathbf{x}|\{\sigma(k)\}). \end{align*} Indeed, for any $k\leq \rho^{*}(\sigma(i))$ there exists $\tilde{k}\geq i$ such that $N(F_k)= 1-\pi(\sigma(\tilde{k}))\leq 1-\pi(\sigma(i))$. Then $F_k^c\supseteq\{\sigma(\tilde{k})\}$. According to Theorem~\ref{gsf2} we have that $1-\pi(\sigma(i))$ is achieved on
\begin{align*}&\bigcup_{j\in\{\rho_{*}(\sigma(i)),\dots,\rho^{*}(\sigma(i))\}}\Big[\min_{k\leq j}\sA_{\inv{k}},\min_{k< j}\sA_{\inv{k}}\Big)=\Big[\min_{k\leq \rho^{*}(\sigma(i))}\sA_{\inv{k}},\min_{k< \rho_{*}(\sigma(i))}\sA_{\inv{k}}\Big)\\&=\Big[\min_{k\leq \rho^{*}(\sigma(i))}\sA_{\inv{k}},\min_{k\leq \rho^{*}(\sigma(i+1))}\sA_{\inv{k}}\Big)=\Big[\min_{k\geq i}\sA(\mathbf{x}|\{\sigma(k)\}),\min_{k\geq i+1}\sA(\mathbf{x}|\{\sigma(k)\}))\Big). \end{align*}
\end{enumerate}
\noindent\textbf{Proof of Proposition~\ref{porovnanie}} Let us consider an arbitrary, fixed $i\in\ozn{\kappa-1}$ such that $\sA_i\neq \sA_{i+1}$ (otherwise it is trivial). Let us denote $$j=\min\{l: \sA_{\inv{l}}\leq \sA_i\text{\,\,and\,\,} \min_{k\leq i}\mu_{(k)}=\mu_l\}.$$ The above-mentioned set is nonempty. Indeed, for each $l$ such that $\min_{k\leq i}\mu_{(k)}=\mu_l$ w.r.t.\ denotations from the beginning of this section there exists $i_l\leq i$ such that $F_{l}=E_{i_l}^c$ and $\mu_l=\mu_{(i_l)}$. Further, it is clear that $\sA_{\inv{l}}= \sA_{i_l}\leq \sA_{i}$.
From the definition of $j$ we immediately have $\min_{k\leq j}\sA_{\inv{k}}\leq\sA_{\inv{j}}\leq\sA_i$. Moreover, it also holds that $\min_{k< j}\sA_{\inv{k}}\geq \sA_{i+1}$. By contradiction: Let $\min_{k< j}\sA_{\inv{k}}< \sA_{i+1}$. Then there exists $k^*<j$ such that $\sA_{\inv{k^\ast}}< \sA_{i+1}$. Because of~\eqref{Ei} we get $\sA_{\inv{k^\ast}}\leq \sA_{i}$. Further, $\mu_{k^*}<\mu_j$: From the fact that $k^*<j$ we have $\mu_{k^*}\leq\mu_j$; however, the equality cannot happen because of the definition of $j$. Further, for $\alpha\in[\sA_i,\sA_{i+1})$, because of formula~\eqref{vyjgsf1} and because of the definition of $j$, we have $\mu_{\cA}(\mathbf{x},\alpha)=\mu_{j}$. However, since $\sA_{\inv{k^\ast}}\leq \sA_{i}$, for $\alpha\in[\sA_i,\sA_{i+1})$ we get $$\mu_{\cA}(\mathbf{x},\alpha)=\min\left\{\mu_j: \sA_{\inv{j}}\leq\alpha\right\}\leq\mu_{k^*}<\mu_{j}.$$ This is a contradiction.\null
$\Box\;\;$
\noindent\textbf{Auxiliary pseudocodes used in Algorithms~\ref{alg_GSF_from_i} and Algorithms~\ref{alg_GSF_from_j}.}
The first pseudocode, namely the method \textit{find-sets-$E_i$-and-$F_i$}, describes the calculation of the permutations listed in~(\ref{Ei}) and~(\ref{Fi}). The second and the third algorithms describe the determination of the values of the maps $(\cdot)$ and $\inv{\cdot}$.
\begin{center} \SetKwRepeat{Do}{do}{while} \begin{algorithm}[H] \Fn{find-sets-$E_i$-and-$F_i$$(\mathcal{E}, \mu, \mathbf{x}, \cA)$}{
$i:=0$, $\mathcal{E}^\prime:=\hat{\mathcal{E}}$\;
\Do{$\mathcal{E}\neq\emptyset$}{
$E_i\colon E_i\in\mathcal{E}\text{ and }\aA[E_i]{\mathbf{x}}=\min\{\aA[E]{\mathbf{x}}:E\in\mathcal{E}\}$\;
$F_i\colon F_i\in\mathcal{E}^\prime\text{ and }\mu(F_i)=\min\{\mu(F):F\in\mathcal{E}^\prime\}$\;
$\mathcal{E}:=\mathcal{E}\setminus\{E_i\}$\;
$\mathcal{E}^\prime:=\mathcal{E}^\prime\setminus\{F_i\}$\;
$i:=i+1$\;
}\Return{$E_0,\dots,E_{\kappa-1}$, $F_0,\dots,F_{\kappa-1}$}\;
} \caption{Determination of sets $E_i$ and $F_i$, $i\in\ozn{\kappa-1}$ } \label{alg_EF} \end{algorithm}
\end{center}
\noindent \begin{minipage}{0.49\textwidth} \SetKwRepeat{Do}{do}{while} \begin{algorithm}[H] \Fn{the-$(\cdot)$-map$(\mathcal{E}, \mu, \mathbf{x}, \cA)$}{ $E_0,\dots,E_{\kappa-1}$, $F_0,\dots,F_{\kappa-1}$ $\leftarrow$ find-sets-$E_i$-and-$F_i$$(\mathcal{E}, \mu, \mathbf{x}, \cA)$\; \For{$(i=0$, $i<\kappa$, $i\mathrm{++})$}{
\For{$(j=0$, $j<\kappa$, $j\mathrm{++})$}{
\If{$(E_i=F_j^c)$}{
$(i):=j$\;
}
} } \Return{$(0),\dots,(\kappa-1)$}\; } \caption{Determination of values of a map $(\cdot)$ } \label{alg_(i)} \end{algorithm} \end{minipage}
\begin{minipage}{0.49\textwidth} \SetKwRepeat{Do}{do}{while} \begin{algorithm}[H] \Fn{the-$\inv{\cdot}$-map$(\mathcal{E}, \mu, \mathbf{x}, \cA)$}{ $E_0,\dots,E_{\kappa-1}$, $F_0,\dots,F_{\kappa-1}$ $\leftarrow$ find-sets-$E_i$-and-$F_i$$(\mathcal{E}, \mu, \mathbf{x}, \cA)$\; \For{$(i=0$, $i<\kappa$, $i\mathrm{++})$}{
\For{$(j=0$, $j<\kappa$, $j\mathrm{++})$}{
\If{$(E_i=F_j^c)$}{
$\inv{j}:=i$\;
}
} } \Return{$\inv{0},\dots,\inv{\kappa-1}$}\; } \caption{Determination of values of a map $\inv{\cdot}$ } \label{alg_(j)^-1} \end{algorithm} \end{minipage}
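For completeness, the pseudocodes above admit a compact Python mirror (our sketch, not the paper's code): the two orderings are obtained by sorting, and the maps $(\cdot)$ and $\inv{\cdot}$ by matching $E_i$ with $F_j^c$. The data below reproduce the first example of Section~\ref{Choquet}, i.e.\ the collection $\{\emptyset,\{1\},\{1,2,3\}\}$, $\mathbf{x}=(2,5,9)$ and the conditional aggregation operator $\mathrm{max}$:

```python
def find_sets_E_and_F(collection, universe, mu, agg):
    # mirror of find-sets-E_i-and-F_i: E_i sorted by the conditional
    # aggregation value, F_i (taken from the complement collection) by mu
    E = sorted(collection, key=agg)
    F = sorted((universe - S for S in collection), key=mu)
    return E, F

def perm_maps(E, F, universe):
    # the (.)-map: (i) = j whenever E_i = F_j^c; and its inverse inv(j) = i
    comp = [universe - f for f in F]
    to_j = [comp.index(e) for e in E]
    inv = [E.index(c) for c in comp]
    return to_j, inv

U = frozenset({1, 2, 3})
coll = [frozenset(), frozenset({1}), U]
x = {1: 2, 2: 5, 3: 9}
agg = lambda S: max((x[i] for i in S), default=0)        # A^max(x|.)
mu = {frozenset(): 0, frozenset({2, 3}): 0.5, U: 1}.get  # measure on the F-sets
E, F = find_sets_E_and_F(coll, U, mu, agg)
print(perm_maps(E, F, U))   # -> ([2, 1, 0], [2, 1, 0])
```

Here the nested search of the last two pseudocodes is replaced by list lookups; ties in the sorting keys are resolved by the (stable) input order, which corresponds to an arbitrary admissible choice of the bijections.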
\noindent\textbf{Proof of Proposition~\ref{vlastnosti_psi}} \begin{enumerate}[(i)] \item
Let us denote $$i^{\star}=\min\{l \in\ozn{\kappa-1}: \mu_{(l)}=\min_{k\leq i}\mu_{(k)}\}.$$ It is clear that $i^{\star}\leq i$, $\min_{k\leq i}\mu_{(k)}=\mu_{(i^{\star})}$. Further, \begin{itemize}
\item Since $\mu_{(i^{\star})}\geq\min_{k\leq i^{\star}}\mu_{(k)}\geq \min_{k\leq i}\mu_{(k)}=\mu_{(i^{\star})}$, then $\min_{k\leq i^{\star}}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}$. Therefore ${\psi_*}(i)\leq i^{\star}$.
\item For any $l<i^{\star}$ it holds $$\min_{k\leq l}\mu_{(k)}>\min_{k\leq i}\mu_{(k)}.$$ Indeed, $\min_{k\leq l}\mu_{(k)}\geq\min_{k\leq i^{\star}}\mu_{(k)}=\mu_{(i^{\star})}$. However, $\min_{k\leq l}\mu_{(k)}\neq\min_{k\leq i^{\star}}\mu_{(k)}$. By contradiction: Let $\min_{k\leq l}\mu_{(k)}=\mu_{(i^{\star})}$. Then there exists $k^{\star}\leq l<i^{\star}$ such that $\mu_{(k^{\star})}=\mu_{(i^{\star})}$. This is a contradiction with the definition of $i^{\star}$. Therefore ${\psi_*}(i)>l$. \end{itemize} From the previous it follows that ${\psi_*}(i)=i^{\star}$.
\item It follows directly from definitions of ${\psi^*}$, ${\psi_*}$.
\item It follows from the equalities $\min_{k\leq {\psi_*}(i)}\mu_{(k)}=\min_{k\leq i}\mu_{(k)}=\min_{k\leq {\psi^*}(i)}\mu_{(k)}$. Moreover, from part (i) and because of Lemma~\ref{vl_i} we have $\mu_{({\psi_*}(i))}=\min_{k\leq i}\mu_{(k)}=\mu_{\mathbf{i}(i)}$. \item It follows directly from definitions of ${\psi_*}$, ${\psi^*}$ and from the fact that if ${\psi_*}(a)={\psi_*}(b)$ (${\psi^*}(a)={\psi^*}(b)$), then there is $\tilde{l}\in\ozn{\kappa-1}$ such that $\min_{k\leq a}\mu_{(k)}=\min_{k\leq\tilde{l}}\mu_{(k)}=\min_{k\leq b}\mu_{(k)}$. \item It follows from the proof of (iv) and from the fact that if ${\psi_*}(a)<{\psi_*}(b)$, then $\min_{k\leq{\psi_*}(a)}\mu_{(k)}>\min_{k\leq{\psi_*}(b)}\mu_{(k)}$ (equality does not occur because of statement (iv)). Then we get $$\min_{k\leq a}\mu_{(k)}=\min_{k\leq{\psi_*}(a)}\mu_{(k)}>\min_{k\leq{\psi_*}(b)}\mu_{(k)}=\min_{k\leq b}\mu_{(k)},$$ where the first and the last equality hold because of definition ${\psi_*}$. The same for ${\psi^*}(a)<{\psi^*}(b)$. \item Let us denote $$M_1=\{l_1\in\ozn{\kappa-1}: \min_{k\leq l_1}\mu_{(k)}=\min_{k\leq a}\mu_{(k)}\},\quad M_2=\{l_2\in\ozn{\kappa-1}: \min_{k\leq l_2}\mu_{(k)}=\min_{k\leq b}\mu_{(k)}\}.$$ Since $a\leq b$, then $\min_{k\leq a}\mu_{(k)}\geq\min_{k\leq b}\mu_{(k)}$. If $\min_{k\leq a}\mu_{(k)}=\min_{k\leq b}\mu_{(k)}$, then directly from definition ${\psi_*}$, resp.\ ${\psi^*}$, we have ${\psi_*}(a)={\psi_*}(b)$, resp.\ ${\psi^*}(a)={\psi^*}(b)$. Further, let us suppose that $\min_{k\leq a}\mu_{(k)}>\min_{k\leq b}\mu_{(k)}$. Then for any $l_1\in M_1$ and any $l_2\in M_2$ it holds $$\min_{k\leq l_1}\mu_{(k)}=\min_{k\leq a}\mu_{(k)}>\min_{k\leq b}\mu_{(k)}=\min_{k\leq l_2}\mu_{(k)}\text{,}$$ therefore $l_2>l_1$. Thus also $\min M_2>\min M_1$, resp.\ $\max M_2>\max M_1$, that is ${\psi_*}(b)>{\psi_*}(a)$, resp.\ ${\psi^*}(b)>{\psi^*}(a)$.
\null
$\Box\;\;$
\end{enumerate}
\noindent\textbf{Proof of Proposition~\ref{naj_gsf_cez_psi}} Because of Proposition~\ref{vlastnosti_psi} (iii) and Theorem~\ref{zjednodusenie_def} we know that the value $\min_{k\leq i}\mu_{(k)}=\mu_{({\psi_*}(i))}$ is achieved on the intervals $[\sA_m,\sA_{m+1})$, $m\in\{{\psi_*}(i),\dots,{\psi^*}(i)\}$. From the definitions of ${\psi^*}$ and ${\psi_*}$ it is clear that there are no other intervals with this property. Therefore the value $\min_{k\leq i}\mu_{(k)}{=\mu_{({\psi_*}(i))}}$ is achieved on $\bigcup_{m={\psi_*}(i)}^{{\psi^*}(i)}[\sA_m,\sA_{m+1})=\big[\sA_{{\psi_*}(i)},\sA_{{\psi^*}(i)+1}\big)$.
\null
$\Box\;\;$
\noindent\textbf{Proof of Lemma~\ref{lema_ku_psi}} Let us prove part (i). \begin{enumerate}
\item[(i1)] If $\min_{k\leq i}\mu_{(k)}>\min_{k\leq i+1}\mu_{(k)}$, then directly from definitions of ${\psi_*}$ and ${\psi^*}$ we have ${\psi^*}(i)=i$ and ${\psi_*}(i+1)=i+1$, thus ${\psi^*}(i)+1={\psi_*}(i+1)$.
Further, according to Proposition~\ref{naj_gsf_cez_psi}, we have
$$\mu_{\bA}(\x,\alpha)=\min_{k\leq i}\mu_{(k)}=\mu_{({\psi_*}{(i)})}$$
for any $\alpha\in\big[\sA_{{\psi_*}(i)},\sA_{{\psi^*}(i)+1}\big)$ and this interval is the greatest possible.
Using the above equality
we have $$\big[\sA_{{\psi_*}(i)},\sA_{{\psi^*}(i)+1}\big)=\big[\sA_{{\psi_*}(i)},\sA_{{\psi_*}(i+1)}\big).$$
\item[(i2)] It holds trivially. \end{enumerate} Part (ii) can be proved analogously. \null
$\Box\;\;$
\noindent{\textbf{Proof of Proposition~\ref{skratenie_cez_psi}}} Let us consider an arbitrary (fixed) $i\in\ozn{\kappa-2}$. If $\min_{k\leq i}\mu_{(k)}>\min_{k\leq i+1}\mu_{(k)}$, then, according to Lemma~\ref{lema_ku_psi}~(i1), the value $\min_{k\leq i}\mu_{(k)}{=\mu_{({\psi_*}(i))}}$ is achieved on $\big[\sA_{{\psi_*}(i)}, \sA_{{\psi_*}(i+1)}\big)$, and this interval is the greatest possible. If $\min_{k\leq i}\mu_{(k)}=\min_{k\leq i+1}\mu_{(k)}$, then ${\psi_*}(i)={\psi_*}(i+1)$, see Lemma~\ref{lema_ku_psi}~(i2), and thus $\big[\sA_{{\psi_*}(i)}, \sA_{{\psi_*}(i+1)}\big)=\emptyset$. This demonstrates that each value of the generalized survival function is included exactly once in the sum, so the first formula holds. The third formula follows from Lemma~\ref{vl_i}. The second and fourth formulas can be proved analogously. \null
$\Box\;\;$
\end{document}
\begin{document}
\title{Limit theorems for sequential MCMC methods}
\enlargethispage{\baselineskip}
\begin{abstract}
\noindent{}\Gls{SMC} methods, also known as \glspl{PF}, constitute a class of algorithms used to approximate expectations with respect to a sequence of probability distributions as well as the normalising constants of those distributions. Sequential \gls{MCMC} methods are an alternative class of techniques addressing similar problems in which particles are sampled according to an \gls{MCMC} kernel rather than conditionally independently at each time step. These methods were introduced over twenty years ago by \citet{berzuini1997dynamic}. Recently, there has been a renewed interest in such algorithms as they demonstrate an empirical performance superior to that of \gls{SMC} methods in some applications. We establish a \glsdesc{SLLN} and a \glsdesc{CLT} for sequential \gls{MCMC} methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we provide conditions under which sequential \gls{MCMC} methods can indeed outperform standard \gls{SMC} methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
\noindent\textbf{Keywords:} almost sure convergence; central limit theorem; $L_p$-error bounds; particle filters; strong consistency; time-uniform convergence \end{abstract}
\section{Introduction} \glsreset{MCMC} \glsreset{SMC} \Gls{SMC} algorithms are used to approximate expectations with respect to a sequence of probability measures as well as the normalising constants of those measures. These techniques have found numerous applications in statistics, signal processing, physics and related fields \citep[see, e.g.\@,][for a recent review]{kunsch2013particle}. These algorithms proceed in a sequential manner by generating a collection of $N$ conditionally independent particles at each time step. An alternative to these schemes in which the particles at each time step are sampled instead according to a single \gls{MCMC} chain was proposed early on by \citet{berzuini1997dynamic}. Over recent years, there has been a renewed interest in such ideas as there is empirical evidence that these methods can outperform standard \gls{SMC} algorithms in interesting scenarios \citep[see, e.g.\@,][for novel applications and extensions]{carmi2012gaussian,golightly2006bayesian,septier2009mcmc,septier2016langevin,pal2018sequential}. These methods have been termed \emph{sequential \gls{MCMC}} methods in the literature. However, in this work, we will also refer to these as \emph{\glspl{MCMCPF}}, to convey the idea that they rely on the same importance-sampling construction as particle methods.
Although there is a wealth of theoretical results available for \gls{SMC} algorithms -- see, for example, \citet{delmoral2004feynman} -- to the best of our knowledge, no convergence guarantees have yet been provided for \glspl{MCMCPF}. The present work fills this gap by providing a \glsdesc{SLLN} and a \glsdesc{CLT} for the Monte Carlo estimators of expectations and normalising constants obtained through \glspl{MCMCPF}.
Our results show that compared to conventional \glspl{PF}, the asymptotic variance of estimators obtained by \glspl{MCMCPF} includes additional terms which can be identified as the excess variance arising from the autocorrelation of the \gls{MCMC} chains used to generate the particles. This implies that a standard \gls{PF} always provides estimators with a lower asymptotic variance than the corresponding \gls{MCMCPF} if both algorithms target the same distributions and if the latter relies on positive \gls{MCMC} kernels.
However, \glspl{MCMCPF} exhibit a significant advantage over regular \glspl{PF}. The popular \gls{FAAPF} introduced by \citet{pitt1999filtering} typically significantly outperforms the \gls{BPF} of \citet{gordon1993novel}, for example when approximating the optimal filter for state-space models in the presence of informative measurements. Unfortunately, the \gls{FAAPF} is implementable for only a very restricted class of models whereas the \gls{MCMCPF} version of the \gls{FAAPF} is much more widely applicable. In scenarios in which the \gls{FAAPF} is not implementable, but its \gls{MCMCPF} version is, and in which the \gls{MCMC} kernels used by the latter are sufficiently rapidly mixing, the \gls{MCMCPF} can substantially outperform implementable but rather inefficient standard \glspl{PF} such as the \gls{BPF}.
\section{MCMC-PFs}
\subsection{Notation}
Let $(\Omega, \mathcal{A}, \mathbb{P})$ be some probability space and denote expectation with respect to $\mathbb{P}$ by $\E$. For a measurable space $(H, \mathcal{H})$, we let $\boundedFunSet(H)$ denote the Banach space of all bounded, real-valued, $\mathcal{H}$-measurable functions on $H$, equipped with the uniform norm $\lVert f \rVert \coloneqq \sup_{x \in H} \lvert f(x)\rvert$. We also endow this space with the Borel $\sigma$-algebra (with respect to $\lVert \,\cdot\, \rVert$), and the product spaces $\boundedFunSet(H) \times \boundedFunSet(H)$ and $\boundedFunSet(H)^d$ for $d \in \mathbb{N}$ with the associated product $\sigma$-algebras. We also define the subset $\boundedFunSet_1(H) \coloneqq \{f \in \boundedFunSet(H) \mid \lVert f \rVert \leq 1\}$.
Furthermore, for any $f \in \boundedFunSet(H)$, $\osc(f) \coloneqq \sup_{(x,y) \in H^2} \lvert f(x) - f(y) \rvert$. Finally, we let $\mathbf{1} \in \boundedFunSet(H)$ denote the unit function on $H$, i.e.\@{} $\mathbf{1} \equiv 1$.
Let $\measSet(H)$ denote the Banach space of all finite and signed measures on $(H, \mathcal{H})$ equipped with the total variation norm $\lVert \mu \rVert \coloneqq \frac{1}{2} \sup_{f \in \boundedFunSet_1(H)} \lvert \mu(f) \rvert$, where $\mu(f) \coloneqq \int_{H} f(x) \mu(\mathrm{d} x)$, for any $\mu \in \measSet(H)$ and any $f \in \boundedFunSet(H)$. We define $\probMeasSet(H) \subseteq \measSet(H)$ to be the set of all probability measures on $(H, \mathcal{H})$.
Let $({H^\prime}, {\mathcal{H}^\prime})$ be another measurable space. For bounded integral operators $M\colon \boundedFunSet({H^\prime}) \to \boundedFunSet(H)$, $f \mapsto M(f)(x) \coloneqq \int_{H^\prime} f(z) M(x, \mathrm{d} z)$ for any $x \in H$, we define $\smash{[\mu \otimes M](f) =\allowbreak \int_{H \times {H^\prime}} \mu(\mathrm{d} x) M(x,\mathrm{d} y) f(x,y)}$ for any $\mu$ in $\probMeasSet(H)$ and $f \in \boundedFunSet(H)\times\boundedFunSet({H^\prime})$. We also define the operator norm $\lVert M \rVert \coloneqq
\sup_{f \in \boundedFunSet_1({H^\prime})} \lVert M(f)\rVert$ as well as the \emph{Dobrushin coefficient}: $\beta(M) \coloneqq \sup_{(x,y) \in {H^\prime} \times {H^\prime}} \lVert M(x,\,\cdot\,) - M(y, \,\cdot\,)\rVert.$
Finally, ``$\ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}}$'' denotes almost sure convergence with respect to $\mathbb{P}$ and ``$\ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{d}}}}}$'' denotes convergence in distribution.
\subsection{Path-space Feynman--Kac model}
We want to approximate expectations under some distributions which are related to a distribution flow $(\eta_n)_{n \geq 1}$ on spaces $(\mathbf{E}_n, \sigFieldPath{n})$ -- with $(\mathbf{E}_1, \sigFieldPath{1}) \coloneqq (E, \mathcal{E})$ and $(\mathbf{E}_n, \sigFieldPath{n}) \coloneqq (\mathbf{E}_{n-1} \times E, \sigFieldPath{n-1} \otimes \mathcal{E})$, for $n > 1$ -- of increasing dimension, where \begin{equation}
\eta_n(\mathrm{d} \mathbf{x}_n) \coloneqq \frac{\gamma_n(\mathrm{d} \mathbf{x}_n)}{\mathcal{Z}_n} \in \probMeasSet(\mathbf{E}_n), \end{equation} for some positive finite measure $\gamma_n$ on $(\mathbf{E}_n, \sigFieldPath{n})$ and typically unknown normalising constant $\mathcal{Z}_n \coloneqq \gamma_n(\mathbf{1})$. Throughout this work, we write $\mathbf{x}_p \coloneqq x_{1:p} = (\mathbf{x}_{p-1}, x_p)$ and $\mathbf{z}_p \coloneqq z_{1:p} = (\mathbf{z}_{p-1}, z_p)$.
We assume that the target distributions are induced by a Feynman--Kac model on the path space \citep{delmoral2004feynman}. That is, there exists an initial distribution $M_1 \coloneqq \eta_1 \in \probMeasSet(\mathbf{E}_1)$, a sequence of Markov transition kernels $M_n\colon \mathbf{E}_{n-1} \times \mathcal{E} \to [0,1]$ for $n>1$ and a sequence of bounded (without loss of generality we take the bound to be 1) measurable potential functions $G_n\colon \mathbf{E}_n \to (0,1]$, for $n\geq1$, such that for any $f_n \in \boundedFunSet(\mathbf{E}_n)$, \begin{align}
\gamma_n(f_n) = \eta_1 Q_{1,n}(f_n), \end{align} where, hereafter using the convention that any quantity with subscript (i.e.\@{} time index) $0$ is to be ignored from the notation, we have defined the two-parameter semigroup:
\begin{align}
Q_{p,q}(f_p)(\mathbf{x}_p)
\coloneqq
\begin{cases}
[Q_{p+1} \cdots Q_q](f_q)(\mathbf{x}_p), & \text{if $p < q$,}\\
f_p(\mathbf{x}_p), & \text{if $p = q$,}
\end{cases} \end{align} for any $1 \leq p \leq q \leq n$, where \begin{align}
Q_{p+1}(\mathbf{x}_p, \mathrm{d} \mathbf{z}_{p+1})
\coloneqq G_p(\mathbf{z}_p) \delta_{\mathbf{x}_p}(\mathrm{d} \mathbf{z}_p) M_{p+1}(\mathbf{z}_p, \mathrm{d} z_{p+1}).
\end{align} This implies that \begin{align}
\eta_n(f_n) = \mcmcTarget{n}{\eta_{n-1}}(f_n) \coloneqq \frac{\eta_{n-1} Q_n(f_n)}{\eta_{n-1} Q_n(\mathbf{1})} = \frac{\gamma_n(f_n)}{\gamma_n(\mathbf{1})},\label{eq:recursioneta} \end{align} where we have defined the following family of probability measures \begin{align}
\mcmcTarget{n}{\mu}(\mathrm{d} \mathbf{x}_n)
& \coloneqq
\begin{dcases}
M_1(\mathrm{d} \mathbf{x}_1) = \eta_1(\mathrm{d} \mathbf{x}_1), & \text{if $n = 1$,}\\
\frac{G_{n-1}(\mathbf{x}_{n-1})}{\mu(G_{n-1})}[\mu \otimes M_n](\mathrm{d} \mathbf{x}_n)
, & \text{if $n > 1$,}
\end{dcases} \end{align} indexed by $\mu \in \probMeasSet(\mathbf{E}_{n-1})$.
For later use, we also define the family of normalised operators \begin{align}
\widebar{Q}_{p,n}(f_n)(\mathbf{x}_{p})
& \coloneqq \frac{Q_{p,n}(f_n)(\mathbf{x}_{p})}{\eta_p Q_{p,n}(\mathbf{1})}. \end{align} Note that this implies that $\eta_n(f_n) = \eta_p \widebar{Q}_{p,n}(f_n)$ for any $1 \leq p \leq n$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)$.
\subsection{Generic MCMC-PF algorithm} \glsreset{MH} In Algorithm \ref{alg:mcmc_pf}, we summarise a generic \gls{MCMCPF} scheme for constructing sampling approximations $\eta_n^N$ of $\eta_n$. It admits all the \glspl{MCMCPF} discussed in this work as special cases. We recall that by convention, any quantity with subscript $0$ is to be ignored from the notation. This algorithm is essentially a \gls{PF} in which the particles are not sampled conditionally independently from $\mcmcTarget{n}{\mu}$ at step~$n$, for some $\mu \in \probMeasSet(\mathbf{E}_{n-1})$, but are instead sampled according to a Markov chain with initial distribution $\mcmcInit{n}{\mu}\in \probMeasSet(\mathbf{E}_{n})$ and Markov transition kernels $\mcmcKern{n}{\mu}\colon \mathbf{E}_{n} \times \sigFieldPath{n} \to [0,1]$ which are invariant with respect to $\mcmcTarget{n}{\mu}$.
\noindent\parbox{\textwidth}{ \begin{flushleft}
\begin{framedAlgorithm}[generic \gls{MCMCPF}] \label{alg:mcmc_pf} At time~$1$,
\begin{enumerate}
\item sample $\ParticlePath{1}{1} \sim \smash{\mcmcInit{1}{\eta_{0}^N}}$ and $\ParticlePath{1}{i} \sim \smash{\mcmcKern{1}{\eta_{0}^N}}(\particlePath{1}{i-1}, \,\cdot\,)$, for $2 \leq i \leq N$,
\item set $\eta_1^N \coloneqq \frac{1}{N} \sum_{i=1}^N \delta_{\ParticlePath{1}{i}}$.
\end{enumerate}
At time~$n$, $n > 1$,
\begin{enumerate}
\item sample $\ParticlePath{n}{1} \sim \smash{\mcmcInit{n}{\eta_{n-1}^N}}$ and $\ParticlePath{n}{i} \sim \smash{\mcmcKern{n}{\eta_{n-1}^N}}(\particlePath{n}{i-1}, \,\cdot\,)$, for $2 \leq i \leq N$,
\item set $\eta_n^N \coloneqq \frac{1}{N} \sum_{i=1}^N \delta_{\ParticlePath{n}{i}}$.
\end{enumerate} \end{framedAlgorithm} \end{flushleft} }
For any time $n \geq 1$ and for any $f_n \in \boundedFunSet(\mathbf{E}_n)$, $\smash{\gamma_n^N(f_n) \coloneqq \eta_n^N(f_n) \prod_{p=1}^{n-1} \eta_p^N(G_p)}$ is an estimate of $\gamma_n(f_n)$. In particular, an estimate of the normalising constant $\mathcal{Z}_n$ is given by \begin{align}
\mathcal{Z}_n^N \coloneqq \gamma_n^N(\mathbf{1}) = \prod_{p=1}^{n-1} \eta_p^N(G_p) = \prod_{p=1}^{n-1} \frac{1}{N} \sum_{i=1}^N G_p(\ParticlePath{p}{i}). \label{eq:normalising_constant_estimate} \end{align}
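To make the recursion concrete, the following sketch implements Algorithm~\ref{alg:mcmc_pf} together with the estimator \eqref{eq:normalising_constant_estimate} for a hypothetical one-dimensional model, tracking only the final particle components as in Section~\ref{subsec:computational_cost}. The Gaussian mutation, the potential \texttt{G} and all parameter values are illustrative assumptions only; the \gls{MCMC} kernel used is an independent Metropolis--Hastings kernel of the type discussed in Example~\ref{ex:independent_mh} below.

```python
import numpy as np

rng = np.random.default_rng(1)

def G(x):
    # Illustrative bounded potential G_n taking values in (0, 1] (an assumption).
    return np.exp(-0.5 * x ** 2)

def mcmc_pf(T=4, N=400):
    """Sketch of Algorithm 1 (marginal version) with an independent MH kernel.

    Mutation M_n: standard Gaussian random walk (illustrative). At each time
    step a single MH chain of length N targets the mixture Phi_n(eta_{n-1}^N):
    ancestor j chosen with probability proportional to G(x^j), then
    x ~ M_n(x^j, .). Returns the particles and the estimates log Z_n^N.
    """
    particles = rng.normal(size=N)  # time 1: i.i.d. draws from M_1 = eta_1
    log_Z = [0.0]                   # Z_1^N = 1 by convention
    for n in range(2, T + 1):
        pot = G(particles)
        # Z_n^N = prod_{p < n} eta_p^N(G_p), cf. the displayed estimator above.
        log_Z.append(log_Z[-1] + np.log(pot.mean()))
        # Initialise the chain at stationarity with one exact draw.
        anc = rng.choice(N, p=pot / pot.sum())
        x = rng.normal(loc=particles[anc])
        new = np.empty(N)
        for i in range(N):
            # Proposal: ancestor uniform on {1,...,N}, mutation by M_n itself;
            # the acceptance ratio then only involves the ancestors' potentials.
            anc_prop = rng.integers(N)
            if rng.random() < min(1.0, pot[anc_prop] / pot[anc]):
                anc = anc_prop
                x = rng.normal(loc=particles[anc])
            new[i] = x  # on rejection the chain stays at its current state
        particles = new
    return particles, log_Z

particles, log_Z = mcmc_pf()
```

Since each $G_p \leq 1$ here, the estimates $\log \mathcal{Z}_n^N$ are non-increasing in $n$; replacing the inner loop with $N$ independent draws of the ancestor--mutation pair recovers a standard \gls{PF}.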
We hereafter write $\mcmcTarget{n}{N} \coloneqq \mcmcTarget{n}{\eta_{n-1}^N}$, $\mcmcInit{n}{N} \coloneqq \mcmcInit{n}{\eta_{n-1}^N}$ and $\mcmcKern{n}{N} \coloneqq \mcmcKern{n}{\eta_{n-1}^N}$ to simplify the notation. Note that standard \glspl{PF} are a special case of Algorithm~\ref{alg:mcmc_pf} corresponding to $\mcmcKern{n}{N}(\mathbf{x}_{n}, \,\cdot\,) \equiv \mcmcTarget{n}{N}(\,\cdot\,) = \mcmcInit{n}{N}( \,\cdot\,)$. Unfortunately, implementing standard \glspl{PF} can become prohibitively costly whenever there is no cheap way of generating $N$ \gls{IID} samples from $\mcmcTarget{n}{N}$ -- which can be the case when $\mcmcTarget{n}{N}$ is chosen for reasons of statistical efficiency rather than computational convenience, as in the case of the \gls{FAAPF} of \citet{pitt1999filtering}. In contrast, Algorithm~\ref{alg:mcmc_pf} only requires the construction of \gls{MCMC} kernels which leave this distribution invariant.
Practitioners typically initialise the Markov chains close to stationarity by selecting $\mcmcInit{n}{N}=[\eta_{n-1}^N \otimes M_n'] (\mcmcKern{n}{N})^{N_{\mathrm{burnin}} }$ for some approximation $M_n'(\mathbf{x}_{n-1}, \mathrm{d} x_n)$ of $M_n(\mathbf{x}_{n-1}, \mathrm{d} x_n)$ \citep[see, e.g.\@,][]{carmi2012gaussian,golightly2006bayesian,septier2009mcmc, septier2016langevin}. Here, $N_{\mathrm{burnin}} \geq 1$ denotes a suitably large number of iterations whose samples are discarded as ``burn-in''. Proposition \ref{prop:slln}, below, will demonstrate that, under regularity conditions, such algorithms can provide strongly consistent estimates of quantities of interest in spite of this out-of-equilibrium initialisation.
In situations in which it is possible to initialise the Markov chains at stationarity, i.e.\@{} in which we can initialise $\smash{\ParticlePath{n}{1} \sim \mcmcInit{n}{N}=\mcmcTarget{n}{N}}$, \citet{finke2016embedded} showed that the estimator $\mathcal{Z}_n^N$ given in \eqref{eq:normalising_constant_estimate} is unbiased, as is the case for standard \glspl{PF} \citep{delmoral2004feynman}. This remarkable unbiasedness property permits the use of \glspl{MCMCPF} within pseudo-marginal algorithms \citep{andrieu2009pseudo} and thus allows one to perform Bayesian parameter inference for state-space models. As such an initialisation only requires \emph{one} draw from $\smash{\mcmcTarget{n}{N}}$, the use of relatively expensive methods, such as rejection sampling, may be justifiable. This is in contrast to standard \glspl{PF}, which require $N$ such draws, the cost of which may be prohibitive.
Furthermore, the conditional \gls{SMC} scheme proposed in \citet{andrieu2010particle} can also be extended to \glspl{MCMCPF} as demonstrated in \citet{shestopaloff2018sampling}. In this case, construction of a suitable initial distribution $\smash{\mcmcInit{n}{N}}$ is not needed.
The literature on \gls{MCMC} algorithms provides numerous ways in which to construct the Markov kernels $\mcmcKern{n}{\mu}$. For instance, we could use \gls{MH} \citep{berzuini1997dynamic}, \glsdesc{MALA}, \glsdesc{HMC} and hybrid kernels \citep{septier2009mcmc, septier2016langevin}, kernels based on invertible particle flow ideas \citep{li2017sequential} or on the bouncy particle sampler \citep{pal2018sequential}. As an illustration, Example~\ref{ex:independent_mh} describes a simple independent \gls{MH} kernel with a proposal distribution tailored to our setting.
\begin{example}[independent MH] \label{ex:independent_mh}
For any $n \geq 1$ and any $\mu \in \probMeasSet(\mathbf{E}_{n-1})$, define the proposal distribution
\begin{align}
\mcmcProposal{n}{\mu}(\mathrm{d} \mathbf{x}_n)
& \coloneqq \label{eq:indpendent_mh_proposal_distribution}
\begin{dcases}
R_1(\mathrm{d} \mathbf{x}_1), & \text{if $n = 1$,}\\
\frac{F_{n-1}(\mathbf{x}_{n-1})}{\mu(F_{n-1})}[\mu \otimes R_n](\mathrm{d} \mathbf{x}_n), & \text{if $n > 1$,}
\end{dcases}
\end{align}
for some sequence of non-negative bounded measurable functions $F_n\colon \mathbf{E}_n \to [0,\infty)$, some distribution $R_1 \in \probMeasSet(\mathbf{E}_1)$ with $M_1 \ll R_1$, and some sequence of Markov transition kernels $R_n\colon \mathbf{E}_{n-1} \times \mathcal{E} \to [0,1]$ with $M_n(\mathbf{x}_{n-1}, \,\cdot\,) \ll R_n(\mathbf{x}_{n-1}, \,\cdot\,)$, for any $\mathbf{x}_{n-1} \in \mathbf{E}_{n-1}$; for any $n \geq 1$, both $F_{n-1}$ and $R_n$ are assumed to satisfy
\begin{align}
\sup_{\mathbf{x}_{n} \in \mathbf{E}_{n}} \frac{G_{n-1}(\mathbf{x}_{n-1})}{ F_{n-1}(\mathbf{x}_{n-1})} \frac{\mathrm{d} M_n(\mathbf{x}_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n) < \infty. \label{eq:bounded_radon-nikodym_derivatives}
\end{align} The independent \gls{MH} kernel $\mcmcKern{n}{\mu}$ with proposal distribution $\mcmcProposal{n}{\mu}$ and target\slash invariant distribution $\mcmcTarget{n}{\mu}$ is given by
\begin{align}
\mcmcKern{n}{\mu}(\mathbf{x}_n, \mathrm{d} \mathbf{z}_n)
& \coloneqq \alpha_n(\mathbf{x}_n, \mathbf{z}_n) \mcmcProposal{n}{\mu}(\mathrm{d} \mathbf{z}_n)\\
& \quad + \biggl(1 - \int_{\mathbf{E}_n}\alpha_n(\mathbf{x}_n, \mathrm{d} \mathbf{x}_n') \mcmcProposal{n}{\mu}(\mathrm{d} \mathbf{x}_n')\biggr) \delta_{\mathbf{x}_n}(\mathrm{d} \mathbf{z}_n),
\end{align}
with acceptance probability
\begin{align}
\alpha_n(\mathbf{x}_n, \mathbf{z}_n) \label{eq:independent_mh_acceptance_probability}
& \coloneqq 1 \wedge \dfrac{\mathrm{d} \mcmcTarget{n}{\mu}}{\mathrm{d} \mcmcProposal{n}{\mu}}(\mathbf{z}_n) \bigg/ \dfrac{\mathrm{d} \mcmcTarget{n}{\mu}}{\mathrm{d} \mcmcProposal{n}{\mu}}(\mathbf{x}_n)\\
& = 1 \wedge \dfrac{\dfrac{G_{n-1}}{F_{n-1}}(\mathbf{z}_{n-1})}{\dfrac{G_{n-1}}{F_{n-1}}(\mathbf{x}_{n-1})} \dfrac{\dfrac{\mathrm{d} M_n(\mathbf{z}_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{z}_{n-1}, \,\cdot\,)}(z_n)}{\dfrac{\mathrm{d} M_n(\mathbf{x}_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n)}.
\end{align}
This acceptance probability notably does not depend on $\mu$. \end{example}
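On the log scale, evaluating \eqref{eq:independent_mh_acceptance_probability} requires only point-wise evaluations of $G_{n-1}/F_{n-1}$ and of the Radon--Nikodym derivatives of $M_n$ with respect to $R_n$. The following minimal sketch takes these quantities as user-supplied callables; the Gaussian choices below, for which \eqref{eq:bounded_radon-nikodym_derivatives} holds, are assumptions made purely for illustration.

```python
import math

def log_accept_prob(log_G_over_F, log_dM_dR, cur, prop):
    """Log acceptance probability of the independent MH kernel above.

    cur = (x_{n-1}, x_n) and prop = (z_{n-1}, z_n) are the current and
    proposed paths, represented by their two relevant components; note that
    the expression does not depend on the mixing measure mu.
    """
    x_prev, x_new = cur
    z_prev, z_new = prop
    log_num = log_G_over_F(z_prev) + log_dM_dR(z_prev, z_new)
    log_den = log_G_over_F(x_prev) + log_dM_dR(x_prev, x_new)
    return min(0.0, log_num - log_den)

# Usage with illustrative Gaussian densities (assumptions): G/F(x) is
# proportional to N(x; 0, 1), while M(x, .) = N(x, 1) and R(x, .) = N(x, 4),
# so dM/dR is bounded and the condition displayed above is satisfied.
log_G_over_F = lambda x: -0.5 * x ** 2
def log_dM_dR(x, z):
    return (-0.5 * (z - x) ** 2) - (-0.5 * (z - x) ** 2 / 4 - math.log(2.0))

la = log_accept_prob(log_G_over_F, log_dM_dR, cur=(0.2, 0.5), prop=(0.1, 0.3))
```

As with any Metropolis--Hastings ratio, proposing the current state is always accepted, and for any pair of states at least one direction of the move is accepted with probability one.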
\subsection{Computational cost} \label{subsec:computational_cost}
If we are interested only in approximating the normalising constant $\mathcal{Z}_n$ and if $G_{n-1}(\mathbf{x}_{n-1})$ and $M_n(\mathbf{x}_{n-1}, \,\cdot\,)$ depend upon only a fixed number of the most recent component(s) of $\mathbf{x}_{n-1}$ (as is the case in the state-space models discussed below), Algorithm~\ref{alg:mcmc_pf} can be implemented at a per-time-step complexity (in both space and time) that is linear in the number of particles $N$ and constant in the time horizon~$n$.
\subsection{Application to state-space models} \label{subsec:application_to_state-space_models}
Let $(F, \mathcal{F})$ be another measurable space. The \gls{MCMCPF} may be used for (but is not limited to) performing inference in a state-space model given by the bivariate Markov chain $(X_n, Y_n)_{n \geq 1}$ on $(E \times F, \mathcal{E} \vee \mathcal{F})$ with initial distribution $L_1(\mathrm{d} x_1) g_1(x_1, y_1)\psi(\mathrm{d} y_1)$ and with Markov transition kernels (for any $n > 1$) \begin{equation}
L_{n}(x_{n-1}, \mathrm{d} x_{n})g_{n}(x_{n}, y_{n}) \psi(\mathrm{d} y_n). \end{equation} Here, $L_1 \in \probMeasSet(E)$ is some initial distribution for $X_1$ and $L_{n} \colon E \times \mathcal{E} \to [0,1]$, for $n > 1$, is a Markov transition kernel. Furthermore, $\psi$ is some suitable $\sigma$-finite dominating measure on $(F, \mathcal{F})$ and $g_n(\,\cdot\,, y_n)$ is some positive bounded function, so that $g_n(x_n, y_n) \psi(\mathrm{d} y_n)$ represents the transition kernel for the observation at time~$n$. Usually, we can only observe realisations of $(Y_n)_{n \geq 1}$ whereas the process $(X_n)_{n \geq 1}$ is latent.
Assume that we have observed realisations $\mathbf{y}_n = (y_1, \dotsc, y_n)$ of $\mathbf{Y}_n \coloneqq (Y_1, \dotsc, Y_n)$, then we often wish to compute (expectations under) the \begin{itemize}
\item \emph{filter:} $\pi_n(f_n) \coloneqq \E[f_n(\mathbf{X}_n)|\mathbf{Y}_n = \mathbf{y}_n]$, for $f_n \in \boundedFunSet(\mathbf{E}_n)$,
\item \emph{predictor:} $\tilde{\pi}_n(f_n) \coloneqq \E[f_n(\mathbf{X}_n)|\mathbf{Y}_{n-1} = \mathbf{y}_{n-1}]$, for $f_n \in \boundedFunSet(\mathbf{E}_n)$,
\item \emph{marginal likelihood:} $\mathcal{L}_n \coloneqq \E[\prod_{p=1}^{n} g_p(X_p, y_p)]$. \end{itemize} Note that the definitions of ``filter'' and ``predictor'' here refer to the historical process as we are taking a path-space approach. These terms are sometimes reserved for the final-component marginals of $\pi_n$ and $\tilde{\pi}_n$; we will use the terms \emph{marginal filter} and \emph{marginal predictor} for those objects.
\begin{example}[\gls{BPF}-type flow] \label{ex:bpf_flow}
If for any $n \geq 1$,
\begin{align}
G_n(\mathbf{x}_n)
&\coloneqq g_n(x_n, y_n),\\
M_{n}(\mathbf{x}_{n-1}, \mathrm{d} x_n)
&\coloneqq L_{n}(x_{n-1}, \mathrm{d} x_n), \label{eq:bpf_mutation}
\end{align}
then $\eta_n = \tilde{\pi}_n$ is the time-$n$ predictor, we can recover the time-$n$ filter as $\eta_n(G_nf_n)/\eta_n(G_n) = \pi_n(f_n)$, and $\mathcal{Z}_{n+1} = \mathcal{L}_{n}$ is the marginal likelihood associated with the observations $\mathbf{y}_{n}$ (with $\mathcal{Z}_1 = 1$).
In this case, Algorithm~\ref{alg:mcmc_pf} can be implemented (e.g.\@{} using the independent \gls{MH} kernel from Example~\ref{ex:independent_mh}) as long as $g_n$, $F_{n}$ and $\mathrm{d} L_{n}(x_{n-1}, \,\cdot\,) / \mathrm{d} R_n(\mathbf{x}_{n-1},\,\cdot\,)$ can be evaluated point-wise. \end{example}
\begin{example}[\gls{FAAPF}-type flow] \label{ex:fa-apf_flow}
If for any $n \geq 1$,
\begin{align}
G_n(\mathbf{x}_n)
&\coloneqq L_{n+1}(g_{n+1}(\,\cdot\,, y_{n+1}))(x_n), \label{eq:fa-apf_potential}\\
M_{n}(\mathbf{x}_{n-1}, \mathrm{d} x_{n})
&\coloneqq \frac{L_{n}(x_{n-1}, \mathrm{d} x_{n})g_{n}(x_{n},y_{n})}{L_{n}(g_{n}(\,\cdot\,, y_{n}))(x_{n-1})}, \label{eq:fa-apf_mutation}
\end{align}
then $\eta_n = \pi_n$ is the time-$n$ filter, we can recover the time-$n$ predictor as $\eta_{n-1}\otimes L_n = \tilde{\pi}_n$, and $\mathcal{Z}_n = \mathcal{L}_{n}$ is the marginal likelihood associated with the observations $\mathbf{y}_{n}$.
For this flow, it follows from \eqref{eq:recursioneta} that sampling $\ParticlePath{n}{i}$ from $\mcmcTarget{n}{N}$ requires first sampling an index $J = j \in \{1,\dotsc,N\}$ with probability proportional to $G_{n-1}(\particlePath{n-1}{j})$, setting the first $n-1$ components of $\smash{\ParticlePath{n}{i}}$ equal to $\smash{\particlePath{n-1}{j}}$ and then sampling the final component according to $\smash{M_{n}(\particlePath{n-1}{j}, \,\cdot\,)}$. There are many scenarios in which this is not feasible as both \eqref{eq:fa-apf_potential} and \eqref{eq:fa-apf_mutation} involve an intractable integral.
However, designing an \gls{MCMC} kernel of invariant distribution $\mcmcTarget{n}{N}$ is a much easier task as the product $G_{n-1}(\mathbf{x}_{n-1})M_{n}(\mathbf{x}_{n-1}, \mathrm{d} x_{n})$ does not involve any intractable integral. For example, if we use the independent \gls{MH} kernel from Example~\ref{ex:independent_mh} then the acceptance probability in \eqref{eq:independent_mh_acceptance_probability} reduces to (for simplicity, we take $F_{n-1} \equiv 1$):
\begin{align}
\alpha_n(\mathbf{x}_n, \mathbf{z}_n)
& = 1 \wedge \dfrac{g_{n}(z_n, y_n)}{g_{n}(x_{n}, y_n)} \dfrac{\dfrac{\mathrm{d} L_n(z_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{z}_{n-1}, \,\cdot\,)}(z_n)}{\dfrac{\mathrm{d} L_n(x_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n)}.
\end{align}
\end{example}
\begin{example}[general \gls{APF}-type flow] \label{ex:general_apf_flow}
Let $\eta_1$ be some approximation of $\pi_1$ and, for $n \geq 1$, let $M_{n+1}(\mathbf{x}_{n}, \mathrm{d} x_{n+1})$ be some approximation of \eqref{eq:fa-apf_mutation} as well as
\begin{align}
G_n(\mathbf{x}_n)
&\coloneqq
\begin{dcases}
\frac{\mathrm{d} L_1}{\mathrm{d} \eta_1}(x_1) g_1(x_1, y_1) \tilde{g}_1(x_1, y_2), & \text{if $n = 1$,}\\
\frac{\mathrm{d} L_n(x_{n-1}, \,\cdot\,)}{\mathrm{d} M_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n) \frac{g_n(x_n, y_n) \tilde{g}_n(x_n, y_{n+1})}{\tilde{g}_{n-1}(x_{n-1}, y_n)}, & \text{if $n > 1$.}
\end{dcases}
\end{align}
Here, $\tilde{g}_n(x_n, y_{n+1})$ denotes some tractable approximation of \eqref{eq:fa-apf_potential} which can be evaluated point-wise. More generally, we could incorporate information from observations at times $n+1,\dotsc,n+l$ for some $l \geq 1$ into $M_{n+1}(\mathbf{x}_{n}, \mathrm{d} x_{n+1})$ and replace $\tilde{g}_n(x_n, y_{n+1})$ by some approximation of $\smash{\int_{E^l} \prod_{p=n+1}^{n+l} L_{p}(\mathbf{x}_{p-1}, \mathrm{d} x_p) g_p(x_p, y_p)}$ as in the case of \emph{lookahead} methods \citep[see][for example]{lin2013}.
Note that the (general) \gls{APF} flow admits the two other flows as special cases. That is, taking $M_n$ as in \eqref{eq:bpf_mutation} and $\tilde{g}_n \equiv 1$ yields the \gls{BPF}-type flow; taking $M_n$ as in \eqref{eq:fa-apf_mutation} and $\tilde{g}_n(x_n, y_{n+1}) = L_{n+1}(g_{n+1}(\,\cdot\,, y_{n+1}))(x_n)$ yields the \gls{FAAPF}-type flow. \end{example}
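The second reduction can be verified directly (for $n > 1$; a short check using only the definitions above): with $M_n$ as in \eqref{eq:fa-apf_mutation}, we have $\frac{\mathrm{d} L_n(x_{n-1}, \,\cdot\,)}{\mathrm{d} M_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n) = \frac{L_{n}(g_{n}(\,\cdot\,, y_{n}))(x_{n-1})}{g_n(x_n, y_n)}$, so that
\begin{align}
G_n(\mathbf{x}_n)
= \frac{L_{n}(g_{n}(\,\cdot\,, y_{n}))(x_{n-1})}{g_n(x_n, y_n)} \, \frac{g_n(x_n, y_n) \, L_{n+1}(g_{n+1}(\,\cdot\,, y_{n+1}))(x_n)}{L_{n}(g_{n}(\,\cdot\,, y_{n}))(x_{n-1})}
= L_{n+1}(g_{n+1}(\,\cdot\,, y_{n+1}))(x_n),
\end{align}
which is exactly the \gls{FAAPF}-type potential \eqref{eq:fa-apf_potential}.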
In the remainder of this work, we will refer to Algorithm~\ref{alg:mcmc_pf} as the \emph{\gls{MCMCBPF}} whenever the distribution flow $(\eta_n)_{n \geq 1}$ is defined as in Example~\ref{ex:bpf_flow}, as \emph{\gls{MCMCFAAPF}} whenever the flow is defined as in Example~\ref{ex:fa-apf_flow} and as \emph{\gls{MCMCAPF}} whenever the flow is defined as in Example~\ref{ex:general_apf_flow}. Furthermore, we drop the prefix ``\gls{MCMC}'' when referring to the conventional \gls{PF}-analogues of these algorithms, i.e.\@{} in the case that $\smash{\mcmcKern{n}{\mu}(\mathbf{x}_n,\,\cdot\,) \equiv \mcmcTarget{n}{\mu} = \mcmcInit{n}{\mu}}$.
\section{Main Results} \glsreset{SLLN} \glsreset{CLT} In this section, we state a \gls{SLLN} (Proposition~\ref{prop:slln}) and a \gls{CLT} (Proposition~\ref{prop:clt}) for the approximations of the normalised and unnormalised flows $(\eta_n)_{n \geq 1}$ and $(\gamma_n)_{n \geq 1}$ generated by an \gls{MCMCPF}.
\subsection{Assumptions}
We make the following assumptions about the \gls{MCMC} kernels used to sample the particles at each time step. The first assumption on the \gls{MCMC} kernels ensures that they are suitably ergodic (it corresponds to assuming that the kernels used are uniformly ergodic, uniformly in their invariant distribution) and is the only assumption required to obtain the \gls{SLLN}. The second assumption on the \gls{MCMC} kernels is a Lipschitz-type condition.
\begin{enumerate}[label=\textbf{(A\arabic*)}, ref=\textbf{A\arabic*}]
\item \label{as:ergodicity}
For any $n \geq 1$, there exist $i_n \in \mathbb{N}$ and $\varepsilon_n(K) \in (0,1]$ such that for all $\mu \in \probMeasSet(\mathbf{E}_{n-1})$ and all $\mathbf{x}_n, \mathbf{z}_n \in \mathbf{E}_n$:
\begin{align}
(\mcmcKern{n}{\mu})^{i_n}(\mathbf{x}_n, \,\cdot\,) \geq \varepsilon_n(K) (\mcmcKern{n}{\mu})^{i_n}(\mathbf{z}_n, \,\cdot\,).
\end{align}
\item \label{as:lipschitz}
For any $n\geq 1$, there exists a constant $\boundIntegralOperatorLipschitz{n} < \infty$ and a family of bounded integral operators $(\integralOperatorLipschitz{n}{\mu})_{\mu \in \probMeasSet(\mathbf{E}_{n-1})}$ from $\boundedFunSet(\mathbf{E}_{n-1})$ to $\boundedFunSet(\mathbf{E}_n)$ such that for any $(\mu, \nu) \in \probMeasSet(\mathbf{E}_{n-1})^2$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)$,
\begin{align}
\lVert [\mcmcKern{n}{\mu} - \mcmcKern{n}{\nu}](f_n) \rVert
&\leq \int_{\boundedFunSet(\mathbf{E}_{n-1})} \lvert [\mu - \nu](g) \rvert \integralOperatorLipschitz{n}{\mu}(f_n, \mathrm{d} g) \label{eq:lipschitz:1}
\end{align}
and
\begin{align}
\int_{\boundedFunSet(\mathbf{E}_{n-1})} \lVert g \rVert \integralOperatorLipschitz{n}{\mu}(f_n, \mathrm{d} g) \leq \lVert f_n \rVert \boundIntegralOperatorLipschitz{n}. \label{eq:lipschitz:2}
\end{align} \end{enumerate} Recall that for any bounded integral operator $M\colon \boundedFunSet(H) \to \boundedFunSet(H)$, $\beta(M) \coloneqq \sup_{(x,y) \in H \times H} \lVert M(x,\,\cdot\,) - M(y, \,\cdot\,)\rVert$ is the associated Dobrushin coefficient. Note that Assumption~\ref{as:ergodicity} implies that \begin{equation}
\sup_{\mu \in \probMeasSet(\mathbf{E}_{n-1})}\beta((\mcmcKern{n}{\mu})^{i_n}) \leq 1 - \varepsilon_n(K) < 1. \label{eq:as:ergodicity} \end{equation} In addition, if $(\ParticlePath{n}{i})_{i \geq 1}$ and $(\ParticlePathStationary{n}{i})_{i \geq 1}$ are Markov chains with transition kernel $\mcmcKern{n}{\mu}$, with $\smash{(\ParticlePathStationary{n}{i})_{i \geq 1}}$ initialised from stationarity, then a standard coupling argument shows that Assumption~\ref{as:ergodicity} also implies that for any $N, r \in \mathbb{N}$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)$ with $\lVert f_n\rVert \leq 1$, \begin{align}
\E\biggl[\biggl\lvert \sum_{i=1}^N f_n(\ParticlePath{n}{i}) - f_n(\ParticlePathStationary{n}{i}) \biggr\rvert^r\biggr]^{\mathrlap{\frac{1}{r}}}
& \leq \sum_{i=1}^N \E\bigl[\bigl\lvert f_n(\ParticlePath{n}{i}) - f_n(\ParticlePathStationary{n}{i}) \bigr\rvert^r\bigr]^{\frac{1}{r}}\\
& \leq 2 \lVert f_n \rVert \sum_{i=1}^N (1 - \varepsilon_n(K))^{\lfloor i/i_n \rfloor}\\
& \leq 2 i_n / \varepsilon_n(K) \eqqcolon \boundResolvent{n}. \label{eq:bound_resolvent} \end{align}
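The last step uses that $\lfloor i/i_n \rfloor = j$ for at most $i_n$ indices $i$, whence $\sum_{i=1}^N (1 - \varepsilon_n(K))^{\lfloor i/i_n \rfloor} \leq i_n \sum_{j=0}^{\infty} (1 - \varepsilon_n(K))^{j} = i_n/\varepsilon_n(K)$. This bound can be sanity-checked numerically; the parameter grid below is arbitrary.

```python
def blockwise_sum(N, i_n, eps):
    # Left-hand side of the geometric bound: sum_{i=1}^N (1-eps)^{floor(i/i_n)}.
    return sum((1.0 - eps) ** (i // i_n) for i in range(1, N + 1))

# Check that the sum never exceeds i_n / eps on a grid of parameters.
for i_n in (1, 2, 5, 10):
    for eps in (0.05, 0.3, 0.9):
        assert blockwise_sum(10_000, i_n, eps) <= i_n / eps + 1e-9
```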
The assumptions are similar to those imposed in \citet{bercu2012fluctuations}. They are strong and rarely hold for non-compact spaces. It might be possible to adopt weaker conditions such as those in \citet{andrieu2011nonlinear} but this would involve substantially more technical and complicated proofs. As an illustration, we show that Assumptions~\ref{as:ergodicity} and \ref{as:lipschitz} hold if we employ the independent \gls{MH} kernels from Example~\ref{ex:independent_mh}, at least if $E$ is finite.
\begin{example}[independent MH, continued]
Assumption~\ref{as:ergodicity} is satisfied due to \citet[][Theorem~2.1]{mengersen1996rates}. To see this, note that by \eqref{eq:bounded_radon-nikodym_derivatives}, for any $n \geq 1$ and any $\mu \in \probMeasSet(\mathbf{E}_{n-1})$, and since $F_{n-1}$ is bounded and $G_{n-1} > 0$,
\begin{align}
\sup_{\mathbf{x}_n \in \mathbf{E}_n}\frac{\mathrm{d} \mcmcTarget{n}{\mu}}{\mathrm{d} \mcmcProposal{n}{\mu}}(\mathbf{x}_n)
\leq \frac{\lVert F_{n-1}\rVert}{\mu(G_{n-1})} \sup_{\mathbf{x}_{n} \in \mathbf{E}_{n}} \frac{G_{n-1}(\mathbf{x}_{n-1})}{ F_{n-1}(\mathbf{x}_{n-1})} \frac{\mathrm{d} M_n(\mathbf{x}_{n-1}, \,\cdot\,)}{\mathrm{d} R_n(\mathbf{x}_{n-1}, \,\cdot\,)}(x_n) < \infty.
\end{align}
Assumption~\ref{as:lipschitz} was proved for finite spaces $E$ in \citet[][Section~2]{bercu2012fluctuations} (in the case $F_{n} = G_n$, but the extension to $F_n \neq G_n$ is immediate).
\end{example}
When proving time-uniform convergence results, we also make the following assumptions on the mutation kernels and potential functions of the Feynman--Kac model. The first of these ensures that Assumption~\ref{as:ergodicity} holds uniformly in time. The second and third constitute strong mixing conditions that have been extensively used in the analysis of \gls{SMC} algorithms; although they can often be relaxed in similar settings, doing so comes at the cost of greatly complicating the analysis \citep[see, e.g.\@,][]{whiteley2013stability, douc2014long}.
\begin{enumerate}[label=\textbf{(B\arabic*)}, ref=\textbf{B\arabic*}]
\item \label{as:stability_mcmc_kernels} $\bar{\imath} \coloneqq \sup_{n \geq 1} i_n < \infty$ and $\varepsilon(K) \coloneqq \inf_{n \geq 1} \varepsilon_n(K) > 0$.
\item \label{as:stability_mutation} There exists ${m} \in \mathbb{N}$ and $\varepsilon(M) \in (0,1]$ such that for any $n \geq 1$, any $\mathbf{x}_n, \mathbf{z}_n \in \mathbf{E}_n$ and any $\varphi \in \boundedFunSet(E)$:
\begin{align}
\MoveEqLeft \int_{E^m} \biggl[\prod_{\smash{p=1}}^{m} M_{n+p}(\mathbf{x}_{n+p-1}, \mathrm{d} x_{n+p})\biggr]\varphi(x_{n+{m}})\\
& \geq \varepsilon(M) \int_{E^{m}} \biggl[\prod_{p=1}^{\smash{{m}}} M_{n+p}(\mathbf{z}_{n+p-1}, \mathrm{d} z_{n+p})\biggr]\varphi(z_{n+{m}}).
\end{align}
\item \label{as:stability_potential} There exists $l \in \mathbb{N}$ and $\varepsilon(G) \in (0,1]$ such that for any $n \geq 1$ and any $\mathbf{x}_n, \mathbf{z}_n \in \mathbf{E}_n$:
\begin{align}
G_n(\mathbf{x}_n) = G_n((\mathbf{z}_{n-l-1}, x_{((n-l) {\vee} 1):n})) \quad \text{and} \quad
G_n(\mathbf{x}_n) \geq \varepsilon(G) G_n(\mathbf{z}_n).
\end{align}
\end{enumerate}
Under these conditions, time-uniform bounds will be obtained when the test function under study has supremum norm at most $1$ and depends only on its final coordinate, i.e.\@{} we will restrict our attention to test functions $f_n \in \smash{\boundedFunSet_1^\star}(\mathbf{E}_n)^d$, where $\smash{\boundedFunSet_1^\star(\mathbf{E}_n) \coloneqq \{ f_n' \in \boundedFunSet^\star(\mathbf{E}_n) \,|\,\lVert f_n'\rVert \leq 1\}}$ with \begin{align}
\smash{\boundedFunSet^\star}(\mathbf{E}_n)
\coloneqq \{ f_n' \in \boundedFunSet(\mathbf{E}_n) \,|\, \exists \, \varphi \in \boundedFunSet(E): f_n' = \varphi \mathrel{\circ} \zeta_{n}\}.
\end{align} Here, for any $n \geq 1$, $\zeta_{n}: \mathbf{E}_n \to E$ denotes the canonical final-coordinate projection operator defined through $\mathbf{x}_n \mapsto \zeta_{n}(\mathbf{x}_n) \coloneqq x_n$. In the state-space context, this corresponds, essentially, to considering the approximation of the marginal filter and predictor rather than their path-space analogues.
\subsection{Strong law of large numbers} The first main result in this work is the \gls{SLLN} stated in Proposition \ref{prop:slln}. Its proof is an immediate consequence of a slightly stronger $\mathbb{L}_r$-inequality given in Proposition~\ref{prop:lr_inequality}, the proof of which will be given in Appendix~\ref{app:proofs}.
\begin{proposition}[$\mathbb{L}_r$-inequality]~\label{prop:lr_inequality}
Under Assumption~\ref{as:ergodicity}, for each $r,n \geq 1$ there exist $\boundTime{n}, \boundExponent{r} < \infty$ such that for any $f_n \in \boundedFunSet(\mathbf{E}_n)$ and any $N \geq 1$:
\begin{align}
\E\bigl[ \bigl\lvert [\eta_n^N - \eta_n](f_n) \bigr\rvert^r \bigr]^{\frac{1}{r}} \leq \frac{\boundTime{n} \boundExponent{r}}{\sqrt{N}}\lVert f_n \rVert. \label{eq:lr-error}
\end{align}
Under the additional Assumptions~\ref{as:stability_mcmc_kernels}--\ref{as:stability_potential} and if $f_n \in \smash{\boundedFunSet_1^\star}(\mathbf{E}_n)$, the r.h.s.\@{} of \eqref{eq:lr-error} is bounded uniformly in time, i.e.\@{} there exists $a < \infty$ such that $\sup_{n \geq 1} \boundTime{n} \leq a$.
\end{proposition}
\begin{proposition}[strong law of large numbers]~\label{prop:slln}
Under Assumption~\ref{as:ergodicity}, for any $n,d \geq 1$ and $f_n \in \boundedFunSet(\mathbf{E}_n)^d$, as $N\to \infty$,
\begin{enumerate}
\item \label{prop:slln:normalised_predictors} $\eta_n^N(f_n) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \eta_n(f_n)$,
\item \label{prop:slln:unnormalised_predictors} $\gamma_n^N(f_n) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \gamma_n(f_n)$.
\end{enumerate} \end{proposition} \begin{proof}
Without loss of generality, we prove the result for scalar-valued test functions $f_n \in \boundedFunSet(\mathbf{E}_n)$. Part~\ref{prop:slln:normalised_predictors} is a direct consequence of Proposition~\ref{prop:lr_inequality}, for some $r > 2$, using the Borel--Cantelli Lemma together with Markov's inequality. Part~\ref{prop:slln:unnormalised_predictors} follows from Part~\ref{prop:slln:normalised_predictors} and boundedness of the potential functions $G_p$, i.e.\@
\begin{align}
\gamma_n^N(f_n) = \eta_n^N(f_n) \prod_{p=1}^{n-1} \eta_p^N(G_p) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \eta_n(f_n) \prod_{p=1}^{n-1} \eta_p(G_p) = \gamma_n(f_n),
\end{align}
as $N \to \infty$. This completes the proof.
\ensuremath{_\Box} \end{proof}
\subsection{Central limit theorem}
The second main result is Proposition~\ref{prop:clt}, which adapts the usual \gls{CLT} for \gls{SMC} algorithms \citep[e.g.\@][Propositions~9.4.1 \& 9.4.2]{delmoral2004feynman} to our setting. Its proof is given in Appendix~\ref{app:proofs}. As in \citet{delmoral2010interacting, bercu2012fluctuations}, we will make extensive use of the resolvent operators $\resolv{n}{\mu}$, defined for any $\mu \in \probMeasSet(\mathbf{E}_{n-1})$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)$ by \begin{equation}
\resolv{n}{\mu}(f_n) \coloneqq \sum_{j = 0}^\infty [(\mcmcKern{n}{\mu})^j - \mcmcTarget{n}{\mu}](f_n). \end{equation} These operators satisfy the \emph{Poisson equation} \begin{align}
(\mcmcKern{n}{\mu} - \Id)\resolv{n}{\mu} &= \mcmcTarget{n}{\mu} - \Id, \label{eq:poisson_equation_1}\\
\mcmcTarget{n}{\mu} \resolv{n}{\mu} &\equiv 0. \label{eq:poisson_equation_2} \end{align}
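On a finite state space, the resolvent can be computed by truncating its defining series, and both Poisson-equation identities can then be verified directly. A minimal numerical sketch (the $3$-state transition matrix is illustrative, not from the paper; the rank-one matrix $\Pi$, whose rows equal the stationary distribution, plays the role of $\mcmcTarget{n}{\mu}$):

```python
import numpy as np

# Illustrative ergodic 3-state transition matrix (rows sum to 1).
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Stationary distribution: left eigenvector of K for eigenvalue 1.
w, V = np.linalg.eig(K.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
Pi = np.tile(pi, (3, 1))  # rows all equal to pi

# Truncated resolvent R = sum_{j>=0} (K^j - Pi); the summands decay
# geometrically, so the truncation error is negligible.
R = sum(np.linalg.matrix_power(K, j) - Pi for j in range(200))

I = np.eye(3)
assert np.allclose((K - I) @ R, Pi - I, atol=1e-8)   # (K - Id) R = Pi - Id
assert np.allclose(pi @ R, 0.0, atol=1e-8)           # pi R = 0
```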
Under Assumption~\ref{as:ergodicity}, \citet[Proposition~3.1]{bercu2012fluctuations} show that
\begin{equation}
\sup_{\mu \in \probMeasSet(\mathbf{E}_{n-1})}
\lVert \resolv{n}{\mu}\rVert \leq \boundResolvent{n},\label{eq:bound_on_resolvent} \end{equation} where $\boundResolvent{n}$ is given in \eqref{eq:bound_resolvent}.
In the following, for $n\geq 1$, we consider a vector-valued test function $f_n = (f_n^u)_{1 \leq u \leq d} \in \boundedFunSet(\mathbf{E}_n)^d$. Using the resolvent operators, for any $1 \leq u,v \leq d$, we define the covariance function $\covarianceFunction{n}{\mu}(f_n^u,f_n^v)\colon \mathbf{E}_n \to \mathbb{R}$, given for any $\mathbf{x}_n \in \mathbf{E}_n$ by \begin{align}
\MoveEqLeft\covarianceFunction{n}{\mu}(f_n^u,f_n^v)(\mathbf{x}_n) \label{eq:definition_of_covFun}\\
& \coloneqq \mcmcKern{n}{\mu}[(\resolv{n}{\mu}(f_n^u)-\mcmcKern{n}{\mu}\resolv{n}{\mu}(f_n^u)(\mathbf{x}_n))(\resolv{n}{\mu}(f_n^v)-\mcmcKern{n}{\mu}\resolv{n}{\mu}(f_n^v)(\mathbf{x}_n))](\mathbf{x}_n). \end{align} Under Assumption~\ref{as:ergodicity}, we have $\covarianceFunction{n}{\mu}(f_n^u,f_n^v) \in \boundedFunSet(\mathbf{E}_n)$, for any $1 \leq u, v \leq d$ and any $\mu \in \probMeasSet(\mathbf{E}_{n-1})$. Indeed, using \eqref{eq:bound_on_resolvent}, it is straightforward to check that \begin{equation}
\sup_{\mu \in \probMeasSet(\mathbf{E}_{n-1})}\lVert \covarianceFunction{n}{\mu}(f_n^u, f_n^v) \rVert \leq 4 \boundResolvent{n}^2 \lVert f_n^u\rVert \lVert f_n^v \rVert. \label{eq:boundedness_of_covFun} \end{equation}
Throughout the remainder of this work, let $V = (V_n)_{n \geq 1}$ be a sequence of independent and centred Gaussian fields with \begin{align}
\E[V_n(f_n^u)V_n(f_n^v)] = \eta_n\covarianceFunction{n}{\eta_{n-1}}(f_n^u, f_n^v), \label{eq:local_error_covariance_function} \end{align} and define the $(d,d)$-matrix $\varSigma_n(f_n) \coloneqq (\varSigma_n(f_n^u, f_n^v))_{1\leq u,v\leq d}$ by \begin{equation}
\varSigma_n(f_n^u, f_n^v)
\coloneqq \sum_{p=1}^n
\E[V_p(\widebar{Q}_{p,n}(f_n^u))V_p(\widebar{Q}_{p,n}(f_n^v))],
\label{eq:asymptotic_variance} \end{equation} for any $n \geq 1$, any $f_n = (f_n^u)_{1 \leq u \leq d} \in \boundedFunSet(\mathbf{E}_n)^d$ and any $1 \leq u,v \leq d$. Additionally, let $\dN(0, \varSigma)$ denote a (multivariate) centred Gaussian distribution with some covariance matrix $\varSigma$.
\begin{proposition}[central limit theorem]~\label{prop:clt} Under Assumptions~\ref{as:ergodicity} and \ref{as:lipschitz}, for any $n,d \geq 1$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)^d$, as $N \to \infty$, \begin{align}
\label{enum:prop:clt:unnormalised} \frac{\sqrt{N}}{\gamma_n(\mathbf{1})} [\gamma_n^N - \gamma_n](f_n)
& \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{d}}}}} \sum_{p=1}^n V_p(\widebar{Q}_{p,n}(f_n)) \sim \dN(0, \varSigma_n(f_n)), \end{align} and likewise, writing $\bar{f}_n \coloneqq f_n - \eta_n(f_n)$, \begin{align}
\label{enum:prop:clt:normalised} \sqrt{N}[\eta_n^N - \eta_n](f_n)
& \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{d}}}}} \sum_{p=1}^n V_p(\widebar{Q}_{p,n}(\bar{f}_n)) \sim \dN(0, \varSigma_n(\bar{f}_n)). \end{align} Under the additional Assumptions~\ref{as:stability_mcmc_kernels}--\ref{as:stability_potential} and if $f_n \in \smash{\boundedFunSet_1^\star}(\mathbf{E}_n)^d$, the asymptotic variance in \eqref{enum:prop:clt:normalised} is bounded uniformly in time, i.e.\@{} there exists $c < \infty$ such that \begin{align}
\sup \varSigma_n(\bar{f}_n^u, \bar{f}_n^v) \leq c,
\label{eq:bound_on_asymptotic_variance} \end{align} where the supremum is over all $n \geq 1$, $f_n \in \smash{\boundedFunSet_1^\star}(\mathbf{E}_n)^d$ and $1 \leq u,v \leq d$.
\end{proposition}
\section{Comparison with standard PFs}
\subsection{Variance decomposition}
In this section, we first examine the asymptotic variance from Proposition~\ref{prop:clt}. We then illustrate the trade-off between \glspl{MCMCPF} and standard \glspl{PF}.
To ease the exposition, we only consider scalar-valued test functions $f_n \in \boundedFunSet(\mathbf{E}_n)$ throughout this section. As noted in \citet[Proposition~3.6]{bercu2012fluctuations}, the terms $\mcmcTarget{n}{\mu} \covarianceFunction{n}{\mu}(f_n, f_n)$ from \eqref{eq:local_error_covariance_function} which, via \eqref{eq:asymptotic_variance}, appear in the expressions for the asymptotic variance in Proposition~\ref{prop:clt} can be written in the following form which is more commonly used in the \gls{MCMC} literature:
\begin{align}
\mcmcTarget{n}{\mu} \covarianceFunction{n}{\mu}(f_n, f_n)
& = \textstyle \int_{\mathbf{E}_n^2} \mcmcTarget{n}{\mu}(\mathrm{d} \mathbf{x}_n)\mcmcKern{n}{\mu}(\mathbf{x}_n, \mathrm{d} \mathbf{z}_n)\bigl[\resolv{n}{\mu}(f_n)(\mathbf{z}_n) - \mcmcKern{n}{\mu} \resolv{n}{\mu}(f_n)(\mathbf{x}_n)\bigr]^2\\
& = \mcmcTarget{n}{\mu}\bigl(\resolv{n}{\mu}(f_n)^2 - \mcmcKern{n}{\mu}\resolv{n}{\mu}(f_n)^2\bigr)\\
& = \mcmcTarget{n}{\mu}\bigl(\resolv{n}{\mu}(f_n)^2 - [\mcmcTarget{n}{\mu}(f_n) - f_n + \resolv{n}{\mu}(f_n)]^2\bigr) \quad \text{[by \eqref{eq:poisson_equation_1}]}\\
& = \mcmcTarget{n}{\mu}\bigl(- [f_n - \mcmcTarget{n}{\mu}(f_n)]^2 - 2[\mcmcTarget{n}{\mu}(f_n) - f_n]\resolv{n}{\mu}(f_n)\bigr)\\
& = \mcmcTarget{n}{\mu}\bigl(- [f_n - \mcmcTarget{n}{\mu}(f_n)]^2 + 2 f_n\resolv{n}{\mu}(f_n)\bigr) \quad \text{[by \eqref{eq:poisson_equation_2}]}\\
&= \var_{\mcmcTarget{n}{\mu}}[f_n] \times \mathrm{iact}_{\mcmcKern{n}{\mu}}[f_n]. \label{eq:asymptotic_variance_decomposition}
\end{align}
Here, for any probability measure $\nu \in \probMeasSet(\mathbf{E}_n)$ and any $\nu$-invariant Markov kernel $K$, we have defined the \gls{IACT}:
\begin{align}
\mathrm{iact}_{K}[f_n]
& \coloneqq 1 + 2 \sum_{j=1}^\infty \frac{\cov_{\nu}[f_n, K^j(f_n)]}{\var_{\nu}[f_n]},
\end{align} where $\cov_{\nu}[f_n, g_n] \coloneqq \nu([f_n - \nu(f_n)][g_n - \nu(g_n)])$ and $\var_{\nu}[f_n] \coloneqq \cov_{\nu}[f_n, f_n] = \nu([f_n - \nu(f_n)]^2)$.
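As a concrete sketch, consider the idealised mixture kernel $K(\mathbf{x}_n, \,\cdot\,) = \varepsilon \delta_{\mathbf{x}_n} + (1-\varepsilon)\nu(\,\cdot\,)$ (the kernel used in the toy model of the numerical illustration below). Since $K^j(f_n) = \varepsilon^j f_n + (1 - \varepsilon^j)\nu(f_n)$, we get $\cov_\nu[f_n, K^j(f_n)] = \varepsilon^j \var_\nu[f_n]$ and hence $\mathrm{iact}_K[f_n] = (1+\varepsilon)/(1-\varepsilon)$ for every non-constant $f_n$. A numerical check of this closed form:

```python
def iact_mixture(eps, n_terms=10_000):
    """IACT for the kernel K = eps * delta_x + (1 - eps) * nu: since
    cov_nu[f, K^j f] = eps^j var_nu[f], the defining series is geometric."""
    return 1.0 + 2.0 * sum(eps ** j for j in range(1, n_terms))

# Truncated series agrees with the closed form (1 + eps) / (1 - eps).
for eps in (0.0, 0.3, 0.9):
    assert abs(iact_mixture(eps) - (1.0 + eps) / (1.0 - eps)) < 1e-10
```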
If the \gls{MCMC} kernels $\mcmcKern{n}{\mu}$ are perfectly mixing, that is if $\mcmcKern{n}{\mu}(\mathbf{x}_n, \,\cdot\,) = \mcmcTarget{n}{\mu}(\,\cdot\,)$ for all $\mathbf{x}_n \in \mathbf{E}_n$, then $\mathrm{iact}_{\mcmcKern{n}{\mu}}[f_n] = 1$, i.e.\@{} $\mcmcTarget{n}{\mu} \covarianceFunction{n}{\mu}(f_n, f_n) = \var_{\mcmcTarget{n}{\mu}}[f_n]$, and the expressions for the asymptotic variances in Proposition~\ref{prop:clt} (as specified through \eqref{eq:local_error_covariance_function} and \eqref{eq:asymptotic_variance}) simplify to those obtained in \citet{chopin2004central, delmoral2004feynman, kunsch2005recursive} for conventional \gls{SMC} algorithms. Thus, by the decomposition from \eqref{eq:asymptotic_variance_decomposition}, the terms appearing in the asymptotic variance of the \gls{MCMCPF} are equal to those appearing in the asymptotic variance of standard \glspl{PF} multiplied by the \gls{IACT} associated with the \gls{MCMC} kernels used to generate the particles.
For positive \gls{MCMC} operators, the \gls{IACT} terms are greater than $1$ for any $f_n \in \boundedFunSet(\mathbf{E}_n)$ and represent the variance ``penalty'' incurred due to the additional between-particle positive correlations in \glspl{MCMCPF} relative to standard \glspl{PF}. Examples of positive operators include the independent \gls{MH} kernel \citep{liu1996metropolized} discussed in Example \ref{ex:independent_mh}, the \gls{MH} kernel with Gaussian or Student-t random walk proposals \citep{baxendale2005renewal} or autoregressive positively correlated proposals with normal or Student-t innovations \citep{doucet2015efficient} as well as some versions of the hit-and-run and slice sampling algorithms \citep{rudolf2013positivity}.
\subsection{Variance--variance trade-off} \label{subsec:variance-variance_trade_off}
There is an efficiency trade-off involved in deciding whether to employ a standard \gls{PF} or an \gls{MCMCPF} for a particular application. For the same distribution flow $(\eta_n)_{n \geq 1}$, the former always has a lower asymptotic variance than the latter if the \gls{MCMC} draws are positively correlated. However, as we seek to illustrate in the remainder of this section, an \gls{MCMCPF} may still be preferable (in terms of asymptotic variance) to a standard \gls{PF} in certain situations, even if a positive \gls{MCMC} kernel is used, because it can sometimes be used to target a more efficient distribution flow, i.e.\@{} a flow for which the variance terms $\var_{\mcmcTarget{n}{\mu}}[f_n]$ are reduced far enough to compensate for the \gls{IACT}-based ``penalty'' terms $\mathrm{iact}_{\mcmcKern{n}{\mu}}[f_n]$ in \eqref{eq:asymptotic_variance_decomposition}. Additionally, the computational cost of generating one particle in an \gls{MCMCPF} can be smaller than the corresponding cost in a standard \gls{PF}.
As an illustration, we compare the asymptotic variances of approximations $\pi_n^N$ of the filter $\pi_n$ either computed using the standard \glspl{PF} or \glspl{MCMCPF} targeting the \gls{BPF} and \gls{FAAPF} flows in the state-space model from Subsection~\ref{subsec:application_to_state-space_models}. In the remainder of this section, we let $S_{p,n}\colon \boundedFunSet(\mathbf{E}_p) \to \boundedFunSet(\mathbf{E}_n)$ be a kernel that satisfies $\pi_p S_{p,n} = \pi_n$ and which is given by \begin{align}
S_{p,n}(f_n)(\mathbf{x}_p)
& \coloneqq \frac{\mathcal{L}_p}{\mathcal{L}_n} \int_{\mathbf{E}_n} f_n(\mathbf{z}_n) \delta_{\mathbf{x}_p}(\mathrm{d} \mathbf{z}_p) \smashoperator{\prod_{q=p+1}^n} g_{q}(z_q, y_q) L_{q}(z_{q-1}, \mathrm{d} z_q). \end{align} We begin by deriving expressions for the asymptotic variances in each case.
\begin{itemize}
\item \textbf{\gls{BPF} flow.} For the \gls{BPF} flow from Example~\ref{ex:bpf_flow},
expectations under the filter $\pi_n(f_n) = \eta_n(G_nf_n)/\eta_n(G_n)$ may then be approximated by $\pi_n^N(f_n) = \eta_n^N(G_nf_n)/\eta_n^N(G_n)$. Accounting for this transformation \citep[e.g.\@{} as in][]{johansen2007auxiliary} yields
\begin{align}
[\pi_n^N - \pi_n](f_n)
& = \frac{\eta_n(G_n)}{\eta_n^N(G_n)} \frac{\eta_n^N(G_n[f_n - \pi_n(f_n)])}{\eta_n(G_n)}\\
& = \frac{\eta_n(G_n)}{\eta_n^N(G_n)} [\eta_n^N - \eta_n](G_n[f_n - \pi_n(f_n)]/\eta_n(G_n)).
\end{align}
As Proposition~\ref{prop:slln} ensures that $\eta_n(G_n)/\eta_n^N(G_n) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} 1$, Slutsky's lemma and Proposition~\ref{prop:clt} are sufficient to show that for the \gls{BPF} and \gls{MCMCBPF}, respectively, $\sqrt{N}[\pi_n^N - \pi_n](f_n)$ converges in distribution to a Gaussian distribution with zero mean and variance
\begin{align}
\asymptoticVarianceBPF{n}(f_n)
& = \sum_{p=1}^n \var_{\tilde{\pi}_p}[\tilde{f}_{p,n}],\label{eq:asymptoticVarianceBpf}\\
\asymptoticVarianceMCMCBPF{n}(f_n)
& = \sum_{p=1}^n \var_{\tilde{\pi}_p}[\tilde{f}_{p,n}] \times \mathrm{iact}_{\mcmcKern{p}{{\tilde{\pi}_{p-1}}}}[\tilde{f}_{p,n}],\label{eq:asymptoticVarianceMcmcBpf}
\end{align}
with, using that $\eta_n(G_n [f_n - \pi_n(f_n)]/\eta_n(G_n)) = 0$,
\begin{align}
\tilde{f}_{p,n}(\mathbf{x}_p)
& \coloneqq \widebar{Q}_{p,n}(G_n[f_n - \pi_n(f_n)]/\eta_n(G_n))(\mathbf{x}_p)\\
& = g_p(x_p, y_{p}) \frac{\mathcal{L}_{p-1}}{\mathcal{L}_p} S_{p,n}(f_n - \pi_n(f_n))(\mathbf{x}_p).
\end{align}
\item \textbf{\gls{FAAPF} flow.} For the \gls{FAAPF} flow from Example~\ref{ex:fa-apf_flow}, $\pi_n = \eta_n$, so that we may approximate the filter by $\pi_n^N \coloneqq \eta_n^N$. Hence, Proposition~\ref{prop:clt} shows that for the \gls{FAAPF} and \gls{MCMCFAAPF}, respectively, $\sqrt{N}[\pi_n^N - \pi_n](f_n)$ converges in distribution to a Gaussian distribution with zero mean and variance
\begin{align}
\asymptoticVarianceFAAPF{n}(f_n)
& = \sum_{p=1}^n \var_{\pi_p}[f_{p,n}], \label{eq:asymptoticVarianceFaapf}\\
\asymptoticVarianceMCMCFAAPF{n}(f_n)
& = \sum_{p=1}^n \var_{\pi_p}[f_{p,n}] \times \mathrm{iact}_{\mcmcKern{p}{{\pi_{p-1}}}}[f_{p,n}],\label{eq:asymptoticVarianceMcmcFaapf}
\end{align}
with
\begin{align}
f_{p,n}(\mathbf{x}_p)
\coloneqq \widebar{Q}_{p,n}(f_n - \pi_n(f_n))(\mathbf{x}_p)
= S_{p,n}(f_n - \pi_n(f_n))(\mathbf{x}_p).
\end{align}
\end{itemize}
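The algebraic identity used above to relate the filter error to the predictor error in the \gls{BPF} case holds for arbitrary discrete measures and can be checked numerically. The following sketch uses randomly generated weights, potentials and test values (all purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two discrete probability measures eta (exact) and etaN (approximation) on a
# common finite grid, plus a positive potential G and a test function f.
m = 6
eta = rng.random(m);  eta /= eta.sum()
etaN = rng.random(m); etaN /= etaN.sum()
G = rng.random(m) + 0.1
f = rng.standard_normal(m)

def mu(w, h):  # mu(h) = sum_x w(x) h(x)
    return float(np.dot(w, h))

pi_f  = mu(eta,  G * f) / mu(eta,  G)    # pi(f)   = eta(G f) / eta(G)
piN_f = mu(etaN, G * f) / mu(etaN, G)    # pi^N(f) = eta^N(G f) / eta^N(G)

# [pi^N - pi](f) = (eta(G)/eta^N(G)) * [eta^N - eta](G (f - pi(f)) / eta(G))
h = G * (f - pi_f) / mu(eta, G)
rhs = (mu(eta, G) / mu(etaN, G)) * (mu(etaN, h) - mu(eta, h))
assert abs((piN_f - pi_f) - rhs) < 1e-12
```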
For the remainder of this section, assume that the asymptotic variance of the standard \gls{FAAPF} is lower than that of the standard \gls{BPF} for the given state-space model. More precisely, we assume that for each $p \leq n$ \begin{align}
\var_{\pi_p}[f_{p,n}] \leq \var_{\tilde{\pi}_p}[\tilde{f}_{p,n}], \quad \textrm{ and hence that } \quad \asymptoticVarianceFAAPF{n}(f_n) \leq \asymptoticVarianceBPF{n}(f_n). \end{align} This is thought to hold in many applications and has been empirically verified e.g.\@{} in \citet{snyder2015performance}, although it is possible to construct counter-examples \citep{johansen2008note}. Assuming that the \gls{MCMC} kernels $\mcmcKern{p}{\mu}$ are positive operators, then the \glspl{IACT} take values in $[1,\infty)$ and hence \begin{align}
\asymptoticVarianceBPF{n}(f_n) \leq \asymptoticVarianceMCMCBPF{n}(f_n) \quad \text{and} \quad \asymptoticVarianceFAAPF{n}(f_n) \leq \asymptoticVarianceMCMCFAAPF{n}(f_n). \end{align} However, as noted in Example~\ref{ex:fa-apf_flow}, there are many scenarios in which the \gls{FAAPF} cannot be implemented as we cannot generate $N$ (conditionally) \gls{IID} samples from $\mcmcTarget{n}{N}$. In this case, practitioners typically have to resort to using the standard \gls{BPF} instead. In contrast, the \gls{MCMCFAAPF} can usually be implemented. In such circumstances, the use of \glspl{MCMCPF} (specifically in the form of the \gls{MCMCFAAPF}) can be preferable, e.g.\@{} if the variance reductions attained by targeting the \gls{FAAPF} flow are large enough to outweigh the additional variance due to the increased particle correlation, i.e.\@{} if for each $1 \leq p \leq n$, \begin{align}
\var_{\pi_p}[f_{p,n}] \times \mathrm{iact}_{\mcmcKern{p}{{\pi_{p-1}}}}[f_{p,n}] \leq \var_{\tilde{\pi}_p}[\tilde{f}_{p,n}] \end{align} because then \begin{align}
\asymptoticVarianceMCMCFAAPF{n}(f_n) \leq \asymptoticVarianceBPF{n}(f_n). \end{align}
\subsection{Numerical illustration}
We end this section by illustrating the `variance--variance trade-off' mentioned above on two instances of the state-space model from Subsection~\ref{subsec:application_to_state-space_models}.
The first model is a state-space model on a binary space $E = F \coloneqq \{0,1\}$ and with $n=2$ observations: $y_1 = y_2 = 0$. Furthermore, for some $\alpha, \varepsilon \in [0,1]$ and for any $x_1, x_2 \in E$, $\mu \in \probMeasSet(\mathbf{E}_{n-1})$ and any $n \in \{1,2\}$, \begin{gather}
L_1(\{x_1\}) \coloneqq 1/2, \quad L_2(x_1, \{x_2\}) \coloneqq \alpha \ind\{x_2 = x_1\} + (1-\alpha) \ind\{x_2 \neq x_1\},\\
g_n(x_n, y_n) \coloneqq 0.99 \ind\{y_n = x_n\} + 0.01 \ind\{y_n \neq x_n\},\\
\mcmcKern{n}{\mu}(\mathbf{x}_{n}, \,\cdot\,) \coloneqq \varepsilon \delta_{\mathbf{x}_{n}} + (1-\varepsilon) \mcmcTarget{n}{\mu}. \end{gather} While this is clearly only a toy model, we consider it for two reasons. Firstly, it allows us to analytically evaluate the asymptotic variances for standard \glspl{PF} and \glspl{MCMCPF} given in \eqref{eq:asymptoticVarianceBpf}, \eqref{eq:asymptoticVarianceMcmcBpf}, \eqref{eq:asymptoticVarianceFaapf} and \eqref{eq:asymptoticVarianceMcmcFaapf}. Secondly, as discussed in \citet{johansen2008note}, the model allows us to select the parameter $\alpha$ in such a way that the \gls{FAAPF} has either a lower or higher asymptotic variance than the \gls{BPF}.
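Since the state space is binary, all filtering quantities are available in closed form via a two-step forward recursion. A sketch of this exact computation (for the observations $y_1 = y_2 = 0$ specified above):

```python
def filter_binary(alpha):
    """Exact two-step forward filter for the binary toy model, y1 = y2 = 0."""
    g = lambda x, y: 0.99 if x == y else 0.01
    # Time 1: uniform initial law, condition on y1 = 0.
    pi1 = [0.5 * g(x, 0) for x in (0, 1)]
    z1 = sum(pi1)
    pi1 = [p / z1 for p in pi1]
    # Time 2: propagate through L_2, then condition on y2 = 0.
    L2 = lambda x1, x2: alpha if x1 == x2 else 1.0 - alpha
    pred = [sum(pi1[x1] * L2(x1, x2) for x1 in (0, 1)) for x2 in (0, 1)]
    pi2 = [pred[x2] * g(x2, 0) for x2 in (0, 1)]
    z2 = sum(pi2)
    return pred, [p / z2 for p in pi2]

# Small alpha: the prior dynamics almost surely flip the state, so the
# time-2 predictive mass concentrates on state 1 even though the filter,
# having seen y2 = 0, concentrates on state 0 -- the regime in which a
# proposal that incorporates y2 (as in the FA-APF flow) helps.
pred, pi2 = filter_binary(0.1)
assert pred[1] > 0.85 and pi2[0] > 0.9
```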
Figure~\ref{fig:binary_ssm} displays the asymptotic variances relative to the asymptotic variance of the standard \gls{BPF} for the test function $f_2(\mathbf{x}_{2}) = x_2$ and for two different values of the parameter $\alpha$. As displayed in the first panel, a relatively large value of $\alpha$ leads to the somewhat contrived case that the \gls{BPF} is more efficient than the \gls{FAAPF}. However, as displayed in the second panel, a small value of $\alpha$ makes the \gls{FAAPF} more efficient than the \gls{BPF}. This is because if the system is in state $0$ at time $1$, the time-$2$ proposal used by the \gls{FAAPF} incorporates the observation $y_2 = 0$, whereas the time-$2$ proposal used by the \gls{BPF} almost always proposes moves to state $1$. In this case, the \gls{MCMCFAAPF} then outperforms the \gls{BPF} as long as the autocorrelation of the \gls{MCMC} kernels used by the former, $\varepsilon$, is not too large.
\begin{figure}
\caption{Asymptotic variances (relative to the asymptotic variance of the \gls{BPF}) of the algorithms discussed in Subsection~\ref{subsec:variance-variance_trade_off} in the case that the \gls{BPF} flow is more efficient than the \gls{FAAPF} flow (first panel) and in the case that the \gls{BPF} flow is less efficient than the \gls{FAAPF} flow (second panel).}
\label{fig:binary_ssm}
\end{figure}
We stress that in practical situations, one might expect a much more pronounced difference between the performance of the \gls{FAAPF} and the \gls{BPF} than is observed in this toy model, and hence that Markov kernels with rather modest mixing properties can still give rise to an \gls{MCMCFAAPF} that can outperform the \gls{BPF} in some situations. Indeed, this appears to be the case in the second model discussed below.
The second model is a $d$-dimensional linear Gaussian state-space model given by $E = F \coloneqq \mathbb{R}^d$. Furthermore, writing the $d$-dimensional state and observation vector at time $n$ as $X_n = (X_{n,i})_{1 \leq i \leq d}$ and $Y_n = (Y_{n,i})_{1 \leq i \leq d}$, respectively, \begin{gather}
\frac{\mathrm{d} L_1}{\mathrm{d} \lambda^{\otimes d}}(x_1) = \prod_{i=1}^d \phi(x_{1,i}), \quad \frac{\mathrm{d} L_n(x_{n-1}, \,\cdot\,)}{\mathrm{d} \lambda^{\otimes d}}(x_n) = \prod_{i=1}^{\smash{d}} \phi(x_{n,i} - x_{n-1,i}/2),\\
g_n(x_n, y_n) = \prod_{i=1}^d \phi(x_{n,i} - y_{n,i}),
\end{gather} where $\lambda$ denotes the Lebesgue measure on $\mathbb{R}$ and $\phi$ denotes a Lebesgue-density of a univariate standard normal distribution. We take $\mcmcKern{n}{\mu}(\mathbf{x}_{n}, \,\cdot\,)$ to be a \gls{MH} kernel with proposal \begin{align}
\mcmcProposal{n}{\mu}(\mathbf{x}_n, \mathrm{d} \mathbf{z}_n)
& =
\begin{dcases}
R(x_1, \mathrm{d} z_1), & \text{if $n = 1$,}\\
\frac{F_{n-1}(\mathbf{z}_{n-1})}{\mu(F_{n-1})} \mu(\mathrm{d} \mathbf{z}_{n-1}) R(x_n, \mathrm{d} z_n), & \text{if $n > 1$,}
\end{dcases} \end{align} where $R$ is a Gaussian random-walk kernel on $E$ with transition density \begin{align}
\frac{\mathrm{d} R(x_{n}, \,\cdot\,)}{\mathrm{d} \lambda^{\otimes d}}(z_n) = \prod_{i=1}^d \sqrt{d} \phi(\sqrt{d}[z_{n,i} - x_{n,i}]), \end{align} and where $F_n(\mathbf{x}_n) = g_n(x_n, y_n)$ for the \gls{MCMCBPF} as well as $F_n \equiv 1$ for the \gls{MCMCFAAPF}.
For the \gls{MCMCBPF}, the \gls{MCMC} chains at each time step are initialised from stationarity, i.e.\@{} \begin{align}
\kappa_n^\mu(\mathrm{d} \mathbf{x}_n) = \mcmcTarget{n}{\mu}(\mathrm{d} \mathbf{x}_n) = \dfrac{\mu(\mathrm{d} \mathbf{x}_{n-1})g_{n-1}(x_{n-1}, y_{n-1})}{\mu(g_{n-1}(\,\cdot\,, y_{n-1}))} L_n(x_{n-1}, \mathrm{d} x_n), \end{align} as this is almost always possible, in practice. For the \gls{MCMCFAAPF}, the \gls{MCMC} chains are initialised by discarding the first $N_{\mathrm{burnin}} = 100$ samples as burn-in, i.e.\@{} $\kappa_n^\mu = [\mu \mathbin{\otimes} L_n](\mcmcKern{n}{\mu})^{N_{\mathrm{burnin}}}$.
Figure~\ref{fig:linear_ssm} displays estimates of the marginal likelihood relative to the true marginal likelihood obtained from the (\gls{MCMC}-)\gls{BPF} and (\gls{MCMC}-)\gls{FAAPF}. In this case, the \gls{MCMCFAAPF} outperforms the \gls{BPF} both in dimension $d=1$ and $d=5$.
Note that Assumptions~\ref{as:ergodicity}--\ref{as:lipschitz} and \ref{as:stability_mcmc_kernels}--\ref{as:stability_potential} are violated in this example. The results therefore appear to lend some support to the conjecture that these assumptions are stronger than necessary for the results of Propositions~\mbox{\ref{prop:lr_inequality}--\ref{prop:clt}} to hold.
\begin{figure}
\caption{Estimates of the marginal likelihood (relative to the true marginal likelihood) obtained from the (\gls{MCMC}-)\gls{BPF} and (\gls{MCMC}-)\gls{FAAPF} in the linear Gaussian state-space model, in dimensions $d=1$ and $d=5$.}
\label{fig:linear_ssm}
\end{figure}
\section{Conclusion} \glsreset{PF} \glsreset{MCMCPF} \glsreset{FAAPF} \glsreset{BPF}
In this work, we have established a \glsdesc{SLLN} and a \glsdesc{CLT} for a class of algorithms known as sequential \gls{MCMC} methods or \glspl{MCMCPF} and provided conditions under which the associated errors can be controlled uniformly in time. When positive \gls{MCMC} operators are used within \glspl{MCMCPF}, the asymptotic variances of \gls{PF} estimators are always lower than those of the corresponding \gls{MCMCPF} estimators. However, even if the \gls{MCMC} kernels provide positively correlated draws, \glspl{MCMCPF} can remain of practical interest compared to \glspl{PF}. Indeed, there are many scenarios in which a sophisticated \gls{PF} such as the \gls{FAAPF} would significantly outperform a \gls{BPF} but cannot be implemented, whereas the corresponding \gls{MCMCFAAPF} is essentially always applicable. If the \gls{MCMC} operators used within the \gls{MCMCFAAPF} thus exhibit a ``reasonable'' \glsdesc{IACT}, the asymptotic variance of the resulting estimators can be smaller than that of an implementable \gls{PF} such as the \gls{BPF}.
\appendix
\section{Martingale construction} \label{app:martingale_construction}
In this section, we outline a number of useful martingale decompositions upon which the proofs of the $\mathbb{L}_r$-inequality in Proposition~\ref{prop:lr_inequality} and the \gls{CLT} in Proposition~\ref{prop:clt} are based.
\subsection{Notation}
To simplify the notation, for any $n \geq 1$, we will hereafter often write \begin{gather}
\mcmcTarget{n}{N} \coloneqq \mcmcTarget{n}{\eta_{n-1}^N}, \quad \mcmcKern{n}{N} \coloneqq \mcmcKern{n}{\eta_{n-1}^N}, \quad \resolv{n}{N} \coloneqq \resolv{n}{\eta_{n-1}^N}, \quad \covarianceFunction{n}{N} \coloneqq \covarianceFunction{n}{\eta_{n-1}^N},\\
\mcmcTarget{n}{} \coloneqq \mcmcTarget{n}{\eta_{n-1}} (= \eta_n), \quad \mcmcKern{n}{} \coloneqq \mcmcKern{n}{\eta_{n-1}}, \quad \resolv{n}{} \coloneqq \resolv{n}{\eta_{n-1}}, \quad \covarianceFunction{n}{} \coloneqq \covarianceFunction{n}{\eta_{n-1}}. \end{gather}
Furthermore, we allow $f \coloneqq (f_n)_{n \geq 1}$ to denote a sequence of test functions where for any $n \geq 1$, $f_n = (f_n^u)_{1 \leq u \leq d} \in \boundedFunSet(\mathbf{E}_n)^d$. For any $1 \leq u \leq d$, we also sometimes write $f^u \coloneqq (f_n^u)_{n \geq 1}$.
\subsection{Local errors}
We make use of the telescoping sum commonly used in the analysis of Feynman--Kac models \citep[see][Chapter~7]{delmoral2004feynman}: \begin{align}
\sqrt{N}[\eta_n^N- \eta_n](f_n)
& = \sqrt{N} \sum_{p=1}^n \mcmcTarget{p,n}{\eta_p^N}(f_n) - \mcmcTarget{p,n}{\mcmcTarget{p}{N}}(f_n) \label{eq:standard_telescoping-sum_decomposition} \end{align} with the convention that $\mcmcTarget{1}{\eta_{0}^N} = \eta_1$. Here, we have also defined $\mcmcTarget{p,n}{\mu}(f_n) \coloneqq \mu Q_{p,n}(f_n) /\mu Q_{p,n}(\mathbf{1})$. Key to the analysis are, therefore, the \emph{local errors} \begin{align}
V_p^N(f_p)
\coloneqq \sqrt{N}[\eta_p^N - \mcmcTarget{p}{N}](f_p)
= [\widetilde{V}_p^N+ R_p^N](f_p),
\end{align} where \begin{align}
R_p^N(f_p)
& \coloneqq \sqrt{N}[\eta_p^N - \tilde{\eta}_p^N](f_p),\\
\widetilde{V}_p^N(f_p)
& \coloneqq \sqrt{N}[\tilde{\eta}_p^N - \mcmcTarget{p}{N}](f_p), \label{eq:local_error_from_stationarity} \end{align} with $\tilde{\eta}_p^N(f_p) \coloneqq \frac{1}{N}\sum_{i=1}^N f_p(\ParticlePathStationary{p}{i})$. Here, for the purpose of facilitating the analysis, we have introduced the auxiliary Markov chain $\smash{(\ParticlePathStationary{p}{i})_{i \geq 1}}$ which evolves according to the same transition kernels as $\smash{(\ParticlePath{p}{i})_{i \geq 1}}$ but which is initialised from stationarity, i.e.\@{} $\smash{\ParticlePathStationary{p}{1} \sim \mcmcTarget{p}{N}}$ and $\smash{\ParticlePathStationary{p}{i} \sim \mcmcKern{p}{N}(\ParticlePathStationary{p}{i-1}, \,\cdot\,)}$, for $2 \leq i \leq N$. Note that $\smash{R_p^N(f_p)}$ may therefore be viewed as the additional error introduced if the \gls{MCMC} chain is not initialised from stationarity at time~$p$. Tighter control of the errors could be obtained by explicitly coupling the two particle systems, but it is sufficient for our purposes to treat the two systems as being entirely independent.
Using the tower property of conditional expectation, it can easily be checked that $\E[\widetilde{V}_p^N(f_p)] = 0$ as in standard \glspl{PF}. However, contrary to standard \glspl{PF}, the particles $\smash{\ParticlePathStationary{p}{i}}$ and $\smash{\ParticlePathStationary{p}{j}}$, for $i \neq j$, are not necessarily conditionally independent given $\smash{\mathcal{F}_{p-1}^{N,N}}$, where $\smash{\mathcal{F}_{0}^{N,N} \coloneqq \{\emptyset, \Omega\}}$ and, for any $p \geq 1$ and $1 \leq k \leq N$, \begin{equation}
\mathcal{F}_p^{N,k} \coloneqq \mathcal{F}_{p-1}^{N,N} \vee \sigma(\ParticlePath{p}{i}, \ParticlePathStationary{p}{i} \mid 1 \leq i \leq k). \label{eq:natural_filtration} \end{equation} Due to the lack of conditional independence, we obtain for any $1 \leq u \leq d$, \begin{align}
\!\!\!\!\!\!\!\E[\widetilde{V}_p^N(f_p^u)^2]
& = \E\Bigl[\E\bigl(\widetilde{V}_p^N(f_p^u)^2\big|\mathcal{F}_{p-1}^{N,N}\bigr)\Bigr] \label{eq:mcmc_finite-sample-variance}\\
& = \E\Bigl[\mcmcTarget{p}{N}\bigl[(f_p^u- \mcmcTarget{p}{N}(f_p^u))^2\bigr]\Bigr]\\
& \quad + 2 \sum_{j=1}^{N-1}\biggl(1-\frac{j}{N}\biggr) \E\Bigl[\mcmcTarget{p}{N}\bigl[(f_p^u- \mcmcTarget{p}{N}(f_p^u))(\mcmcKern{p}{N})^j(f_p^u- \mcmcTarget{p}{N}(f_p^u))\bigr]\Bigr].\!\!\!\!\!\!\!\! \label{eq:variance_local_error} \end{align} To see this, note that (conditional on $\smash{\mathcal{F}_{p-1}^{N,N}}$) the inner expectation in \eqref{eq:mcmc_finite-sample-variance} is simply the variance of $\smash{ N^{-1/2} \sum_{i=1}^N h(\mathbf{Y}^i)}$, where $\smash{h(\mathbf{y}) \coloneqq f_p^u(\mathbf{y}) - \mcmcTarget{p}{N}(f_p^u)}$ and where $(\mathbf{Y}^i)_{i \geq 1}$ is a stationary Markov chain with invariant distribution $\mcmcTarget{p}{N}$ and transition kernels $\smash{\mcmcKern{p}{N}}$. The last line then follows by exploiting stationarity of that Markov chain and grouping equivalent terms. This is a generalisation, when $\mcmcKern{p}{\mu}$ is not perfectly mixing, of the result for a standard \gls{PF} in which $\smash{\mcmcKern{p}{\mu}(\mathbf{x}_p, \,\cdot\,) = \mcmcTarget{p}{\mu} = \mcmcInit{p}{\mu}}$, for all $\mathbf{x}_p \in \mathbf{E}_p$, in which case: \begin{align}
\E[\widetilde{V}_p^N(f_p^u)^2] = \E[V_p^N(f_p^u)^2] = \E[\mcmcTarget{p}{N}([f_p^u- \mcmcTarget{p}{N}(f_p^u)]^2)]. \end{align}
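The variance expansion \eqref{eq:variance_local_error} admits a quick numerical sanity check. The sketch below (our own illustration, not part of the formal argument; the two-state kernel and all names are arbitrary) compares the displayed formula with a Monte Carlo estimate of $\mathrm{Var}\bigl(N^{-1/2}\sum_{i=1}^N h(\mathbf{Y}^i)\bigr)$ for a stationary chain:

```python
import numpy as np

# Illustrative check of the stationary-chain variance identity:
#   Var(N^{-1/2} sum_i h(Y_i)) = pi[h^2] + 2 sum_{j<N} (1 - j/N) pi[h * K^j h],
# for a two-state Markov chain initialised from its stationary distribution.
rng = np.random.default_rng(0)
a, b, N, R = 0.3, 0.2, 20, 200_000          # flip probabilities, chain length, replications
K = np.array([[1 - a, a], [b, 1 - b]])
pi = np.array([b, a]) / (a + b)             # stationary distribution of K
f = np.array([1.0, -2.0])
h = f - pi @ f                              # centred test function

# Exact value from the displayed formula.
var_formula = pi @ h**2
Kj = np.eye(2)
for j in range(1, N):
    Kj = Kj @ K
    var_formula += 2 * (1 - j / N) * (pi * h) @ (Kj @ h)

# Monte Carlo estimate from R chains initialised at stationarity.
Y = np.empty((R, N), dtype=int)
Y[:, 0] = rng.random(R) < pi[1]
for t in range(1, N):
    flip = rng.random(R) < np.where(Y[:, t - 1] == 0, a, b)
    Y[:, t] = np.where(flip, 1 - Y[:, t - 1], Y[:, t - 1])
var_mc = h[Y].sum(axis=1).var() / N
print(var_formula, var_mc)                  # the two values should agree closely
```

The same experiment with a perfectly mixing kernel (rows of $K$ all equal to $\pi$) makes every covariance term vanish, recovering the standard particle-filter expression.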
Following \citet{delmoral2010interacting, bercu2012fluctuations}, we further decompose \eqref{eq:local_error_from_stationarity} as \begin{align}
\widetilde{V}_p^N(f_p)
& = \frac{1}{\sqrt{N}} \sum_{i=1}^N \bigl[f_p(\ParticlePathStationary{p}{i}) - \mcmcTarget{p}{N}(f_p)\bigr]\\
& = \frac{1}{\sqrt{N}}\sum_{i=1}^{\smash{N}\vphantom{.}} \bigl[\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i}) - \mcmcKern{p}{N}\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i})\bigr] \quad \text{[by \eqref{eq:poisson_equation_1}]}\\
& = U_p^N(f_p) + L_p^{N}(f_p), \label{eq:martingale_single_decomposition} \end{align} where, letting $\smash{\ParticlePathStationary{p}{N+1}}$ be a random variable distributed, independently conditional upon $\smash{\mathcal{F}_p^{N,N}}$, according to $\smash{\mcmcKern{p}{N}(\ParticlePathStationary{p}{N}, \,\cdot\,)}$, we have (weakly) defined \begin{align}
U_p^N(f_p)
& \coloneqq
\frac{1}{\sqrt{N}}\sum_{i=1}^N \bigl[\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i+1}) - \mcmcKern{p}{N}\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i})\bigr],\\
L_p^{N}(f_p)
& \coloneqq \frac{1}{\sqrt{N}} \bigl[\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{1}) - \resolv{p}{N}(f_p)(\ParticlePathStationary{p}{N+1})\bigr]. \end{align} This allows us to write the local error at time~$p$ as \begin{align}
V_p^N
& = U_p^N + L_p^{N} + R_p^N.
\label{eq:local_error_decomposition_full} \end{align}
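The decomposition into a martingale part and a boundary term rests on the Poisson equation and a telescoping sum, both of which hold exactly, path by path. The toy sketch below (illustrative only; the three-state kernel and the name `Tf` for the resolvent solution are ours) solves the Poisson equation via the fundamental matrix and verifies the identity $\sum_i h(\mathbf{Y}^i) = U + L$ on a simulated trajectory:

```python
import numpy as np

# Solve  Tf - K Tf = f - pi(f)  on a finite state space and check the
# telescoping decomposition on one simulated path (an algebraic identity).
rng = np.random.default_rng(1)
K = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
evals, evecs = np.linalg.eig(K.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                               # stationary distribution
f = np.array([1.0, 0.0, -1.0])
h = f - pi @ f

# Poisson solution via the fundamental matrix (I - K + 1 pi)^{-1}.
Tf = np.linalg.solve(np.eye(3) - K + np.outer(np.ones(3), pi), h)
assert np.allclose(Tf - K @ Tf, h)           # Poisson equation holds

# Simulate Y_1, ..., Y_{N+1} from stationarity and verify  sum_i h(Y_i) = U + L.
N = 50
Y = [rng.choice(3, p=pi)]
for _ in range(N):
    Y.append(rng.choice(3, p=K[Y[-1]]))
Y = np.array(Y)                              # length N + 1
V = h[Y[:N]].sum()
U = (Tf[Y[1:N + 1]] - (K @ Tf)[Y[:N]]).sum() # martingale part
L = Tf[Y[0]] - Tf[Y[N]]                      # boundary term
print(V, U + L)
```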
\subsection{Martingale approximation at time $p$}\label{subsec:martingaledecomposition}
For $1 \leq p \leq n$, let $\mathcal{F}_p^N \coloneqq (\mathcal{F}_p^{N,i})_{0 \leq i \leq N}$, where $\mathcal{F}_p^{N,i}$ is defined as in \eqref{eq:natural_filtration} with the additional convention that $\smash{\mathcal{F}_p^{N,0} = \mathcal{F}_{p-1}^{N,N}}$. We now show that $\smash{U_p^N(f_p)}$ is a martingale while $\smash{L_p^{N}(f_p)}$ and $\smash{R_p^N(f_p)}$ are remainder terms which vanish almost surely as $N \to \infty$. \begin{itemize}
\item \textbf{Martingale.} For each $N\geq 1$, $\smash{U_p^N(f_p) \coloneqq \sum_{i=1}^N\varDelta U_p^{N,i+1}(f_p)}$ is the terminal value of a martingale (and these martingales form a triangular array) which is defined through the $\smash{\mathcal{F}_p^N}$-martingale difference sequence $\smash{(\varDelta U_p^{N,i+1}(f_p))_{1 \leq i \leq N}}$, where
\begin{align}
\varDelta U_p^{N,i+1}(f_p)
\coloneqq \frac{1}{\sqrt{N}}\bigl[\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i+1}) - \mcmcKern{p}{N}\resolv{p}{N}(f_p)(\ParticlePathStationary{p}{i})\bigr].
\end{align}
Indeed, for any $1 \leq i \leq N$ and any $1 \leq u,v \leq d$,
\begin{align}
\E\bigl[\varDelta U_p^{N,i+1}(f_p^u)\big|\mathcal{F}_p^{N,i}\bigr] & = 0, \label{eq:martingale_expectation_time_p}\\
\E\bigl[\varDelta U_p^{N,i+1}(f_p^u)\varDelta U_p^{N,i+1}(f_p^v)\big|\mathcal{F}_p^{N,i}\bigr] & = \frac{1}{N}\covarianceFunction{p}{N}(f_p^u, f_p^v)(\ParticlePathStationary{p}{i}), \label{eq:martingale_covariance_time_p}
\end{align}
where the second line follows directly from the definition in \eqref{eq:definition_of_covFun}.
\item \textbf{Remainder.} By \eqref{eq:bound_on_resolvent}, for any $1 \leq u \leq d$, the remainder signed measure $L_p^{N}$ is bounded as
\begin{align}
\lvert L_p^{N}(f_p^u) \rvert
\leq \frac{2}{\sqrt{N}} \lVert \resolv{p}{N}(f_p^u) \rVert
\leq \frac{2 \boundResolvent{p} \lVert f_p^u \rVert}{\sqrt{N}}. \label{eq:bound_on_first_remainder_measure}
\end{align}
By the same arguments as in \eqref{eq:bound_resolvent}, under Assumption~\ref{as:ergodicity}, for any $N, r \in \mathbb{N}$ and any $1 \leq u \leq d$, we have
\begin{align}
\E\bigl[ \lvert R_p^N(f_p^u)\rvert^r \bigr]^{\frac{1}{r}} \leq \frac{\boundResolvent{p}\lVert f_p^u \rVert}{\sqrt{N}}. \label{eq:bound_on_second_remainder_measure}
\end{align}
Note that by Markov's inequality and the Borel--Cantelli lemma, \eqref{eq:bound_on_second_remainder_measure} implies that $R_p^N(f_p^u) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} 0$ as $N \to \infty$.
\end{itemize}
\subsection{Martingale approximation up to time $n$}
The sum of all local errors up to time~$n$ is given by \begin{align}
\mathcal{V}_n^N(f)
& \coloneqq \sum_{p=1}^n V_p^N(f_p)
= \mathcal{U}_n^N(f) + \mathcal{L}_n^N(f) + \mathcal{R}_n^N(f). \end{align} The three quantities appearing on the right hand side are defined as follows. \begin{itemize}
\item \textbf{Martingale.} Let $\smash{\mathcal{F}^N \coloneqq (\mathcal{F}_n^N)_{n \geq 1}}$, where $\smash{\mathcal{F}_p^N \coloneqq \mathcal{F}_p^{N,N}}$; then the terms
\begin{align}
\mathcal{U}_n^N(f)
\coloneqq \sum_{p=1}^n U_p^N(f_p),
\end{align}
define an $\smash{\mathcal{F}^N}$-martingale $\smash{(\mathcal{U}_n^N(f))_{n \geq 1}}$.
\item \textbf{Remainder.} As before,
\begin{equation}
\mathcal{L}_n^N(f) \coloneqq \sum_{p=1}^n L_p^N(f_p) \quad \text{and} \quad \mathcal{R}_n^N(f) \coloneqq \sum_{p=1}^n R_p^N(f_p)
\end{equation}
constitute remainder terms. Note that for any $n \geq 1$ and any $1 \leq u \leq d$, by \eqref{eq:bound_on_first_remainder_measure} and \eqref{eq:bound_on_second_remainder_measure},
\begin{equation}
\lim_{N \to \infty} \lvert \mathcal{L}_n^N(f^u) \rvert = 0 \quad \text{and} \quad \lvert \mathcal{R}_n^N(f^u)\rvert \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} 0.
\end{equation} \end{itemize}
\section{Convergence proofs} \label{app:proofs}
\subsection{Auxiliary results needed for time-uniform bounds}
For any $1 \leq p \leq n$ and any $f_n \in \boundedFunSet(\mathbf{E}_n)$, we define \begin{gather}
r_{p,n}
\coloneqq \sup_{\mathbf{x}_p, \mathbf{z}_p \in \mathbf{E}_p} \frac{Q_{p,n}(\mathbf{1})(\mathbf{x}_p)}{Q_{p,n}(\mathbf{1})(\mathbf{z}_p)}, \quad
P_{p,n}(f_n)
\coloneqq \frac{Q_{p,n}(f_n)}{Q_{p,n}(\mathbf{1})} \leq 1. \end{gather}
In the remainder of this work, whenever we restrict our analysis to test functions $f_n \in \smash{\boundedFunSet^\star(\mathbf{E}_n)^d}$, we can replace $\beta(P_{p,n})$ by \begin{align}
\beta^\star(P_{p,n}) \coloneqq \sup\lvert P_{p,n}(f_n')(\mathbf{x}_p) - P_{p,n}(f_n')(\mathbf{z}_p)\rvert, \end{align} where the supremum is over all $\mathbf{x}_p, \mathbf{z}_p \in \mathbf{E}_p$ and all $f_n' \in \smash{\boundedFunSet^\star(\mathbf{E}_n)}$ such that $\osc(f_n') \leq 1$.
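The quantity $\beta(P_{p,n})$ is a Dobrushin-type contraction coefficient, and its key property, exploited in the lemma below, is submultiplicativity under composition of Markov kernels. A minimal numerical illustration (ours, computed in the total-variation form of the coefficient, with arbitrary random kernels):

```python
import numpy as np

# Dobrushin coefficient  beta(P) = 0.5 * max_{x,z} || P(x,.) - P(z,.) ||_1
# is submultiplicative: beta(P1 P2) <= beta(P1) * beta(P2), which is what
# drives the geometric decay of beta(P_{p,n}) in n - p.
rng = np.random.default_rng(2)

def dobrushin(P):
    n = len(P)
    return 0.5 * max(np.abs(P[x] - P[z]).sum() for x in range(n) for z in range(n))

P1 = rng.random((4, 4)); P1 /= P1.sum(axis=1, keepdims=True)
P2 = rng.random((4, 4)); P2 /= P2.sum(axis=1, keepdims=True)
print(dobrushin(P1 @ P2), dobrushin(P1) * dobrushin(P2))  # first <= second
```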
\begin{lemma}\label{lem:time-uniform_bounds}
Under Assumptions~\ref{as:ergodicity} and \ref{as:stability_mcmc_kernels}--\ref{as:stability_potential}, for any $1 \leq p \leq n$,
\begin{align}
\boundResolvent{p}
& \leq \widebar{T}, \quad \text{where} \quad \widebar{T} \coloneqq 2 \bar{\imath} / \varepsilon(K) < \infty,\\
r_{p,n}
& \leq \bar{r}, \quad \text{where} \quad \bar{r} \coloneqq \varepsilon(G)^{-(m+l)} \varepsilon(M)^{-1} < \infty,\\
\beta^\star(P_{p,n})
& \leq \bar{\beta}^{\lfloor (n-p)/m \rfloor}, \quad \text{where} \quad \bar{\beta} \coloneqq (1 - \varepsilon(G)^{m+l} \varepsilon(M)^2) <1. \end{align} \end{lemma} \begin{proof}
This follows by similar arguments to those used in the proof of \citet[][Proposition~4.3.3]{delmoral2004feynman}.
\ensuremath{_\Box} \end{proof}
\subsection{Auxiliary results needed for the SLLN}
\begin{lemma}\label{lem:lr_inequality_for_martingale_u}
Under Assumption~\ref{as:ergodicity}, for any $r \geq 1$, there exists $\boundExponent{r} < \infty$ such that for any $n \geq 1$, any $f_n \in \boundedFunSet(\mathbf{E}_n)$ and any $N \in \mathbb{N}$,
\begin{equation}
\E\bigl[ \lvert U_n^N(f_n) \rvert^r \bigr]^{\frac{1}{r}} \leq 2 \boundExponent{r} \boundResolvent{n} \lVert f_n\rVert.
\end{equation} \end{lemma} \begin{proof}
Without loss of generality, assume that $\lVert f_n \rVert \leq 1$. The quadratic variation associated with the martingale $U_n^N(f_n)$ satisfies
\begin{align}
\sum_{\smash{i=1}}^N \E\bigl(\varDelta U_n^{N,i+1}(f_n)^2\big|\mathcal{F}_n^{N,i}\bigr)
& = \tilde{\eta}_n^N \covarianceFunction{n}{N}(f_n,f_n) \quad \text{[by \eqref{eq:martingale_covariance_time_p}]}\\
& \leq 4 \smash{\boundResolvent{n}^{2}}. \quad \text{[by \eqref{eq:boundedness_of_covFun}]}
\label{eq:boundquadraticvariation}
\end{align}
Hence, by the Burkholder--Davis--Gundy inequality \citep[Theorem 17.7]{kallenberg2006foundations} there exists $\boundExponent{r} < \infty$ such that
\begin{align}
\E\bigl[ \lvert U_n^N(f_n) \rvert^r \bigr]^{\frac{1}{r}} \leq 2 \boundExponent{r} \boundResolvent{n}.
\end{align}
This completes the proof.
\ensuremath{_\Box} \end{proof}
We are now ready to prove the $\mathbb{L}_r$-inequality in Proposition~\ref{prop:lr_inequality}.
\begin{proof}[of Proposition~\ref{prop:lr_inequality}]
Without loss of generality, assume that $\lVert f_n \rVert \leq 1$ for all $n \geq 1$ and that the constants $\boundExponent{r}$ in Lemma~\ref{lem:lr_inequality_for_martingale_u} satisfy $\inf_{r \geq 1} \boundExponent{r} \geq 1$.
We begin by proving the first part of the proposition, i.e.\@{} the $\mathbb{L}_r$-error bound without the additional Assumptions~\ref{as:stability_mcmc_kernels}--\ref{as:stability_potential}. The proof proceeds by induction on $n$. At time~$n=1$, by Minkowski's inequality combined with Lemma~\ref{lem:lr_inequality_for_martingale_u} as well as \eqref{eq:bound_on_first_remainder_measure} and \eqref{eq:bound_on_second_remainder_measure}, we have
\begin{align}
\sqrt{N} \E\bigl[\lvert [\eta_1^N - \eta_1](f_1) \rvert^r \bigr]^{\frac{1}{r}}
& = \sqrt{N} \E\bigl[\lvert [\eta_1^N - \mcmcTarget{1}{N}](f_1) \rvert^r \bigr]^{\frac{1}{r}}\\
& \leq \E\bigl[\lvert U_1^N(f_1) \rvert^r \bigr]^{\frac{1}{r}} + \E\bigl[\lvert L_1^N (f_1) \rvert^r \bigr]^{\frac{1}{r}} + \E\bigl[\lvert R_1^N (f_1) \rvert^r \bigr]^{\frac{1}{r}}\\
& \leq 2 \boundExponent{r} \boundResolvent{1} + \frac{3 \boundResolvent{1}}{\sqrt{N}} \leq a_1 \boundExponent{r},
\end{align}
e.g.\@{} with $a_1 \coloneqq 5 \boundResolvent{1} < \infty$.
Assume now that the first part of the proposition holds at time~$n-1$, for some $n > 1$. By Minkowski's inequality,
\begin{align}
\MoveEqLeft \sqrt{N} \E\bigl[\lvert [\eta_n^N - \eta_n](f_n) \rvert^r \bigr]^{\frac{1}{r}}\\
& \leq \sqrt{N} \E\bigl[\lvert [\eta_n^N - \mcmcTarget{n}{N}](f_n) \rvert^r \bigr]^{\frac{1}{r}} + \sqrt{N} \E\bigl[\lvert [\mcmcTarget{n}{N} - \eta_n](f_n) \rvert^r \bigr]^{\frac{1}{r}} \label{eq:lr_error_induction_proof_bound:1}\\
& \leq 2 \boundExponent{r} \boundResolvent{n} + \frac{3 \boundResolvent{n}}{\sqrt{N}} + \frac{2 \boundExponent{r} a_{n-1}}{\eta_{n-1}(G_{n-1})} \label{eq:lr_error_induction_proof_bound:2} \leq a_n \boundExponent{r},
\end{align}
e.g.\@{} with $a_n \coloneqq 5 \boundResolvent{n} + 2 a_{n-1} /\eta_{n-1}(G_{n-1}) < \infty$. Here, the bound on the first term in \eqref{eq:lr_error_induction_proof_bound:1} follows by the same arguments as at time~$1$. The bound on the second term in \eqref{eq:lr_error_induction_proof_bound:1} follows from the following decomposition (note that $\eta_{n-1}(Q_n(\mathbf{1})) = \eta_{n-1}(G_{n-1})$):
\begin{align}
\MoveEqLeft \eta_{n-1}(G_{n-1}) \lvert [\mcmcTarget{n}{N} - \eta_n](f_n) \rvert\\
& = \bigl\lvert \eta_{n-1}(G_{n-1}) \mcmcTarget{n}{N}(f_n) - \eta_{n-1}^N(Q_n(f_n)) + \eta_{n-1}^N(Q_n(f_n)) - \eta_{n-1}(Q_n(f_n)) \bigr\rvert\\
& = \bigl\lvert \mcmcTarget{n}{N}(f_n) [\eta_{n-1} - \eta_{n-1}^N](Q_n(\mathbf{1})) + [\eta_{n-1}^N - \eta_{n-1}](Q_n(f_n)) \bigr\rvert\\
& \leq \lVert \mcmcTarget{n}{N}(f_n) \rVert \lvert [\eta_{n-1} - \eta_{n-1}^N](Q_n(\mathbf{1}))\rvert + \lvert [\eta_{n-1}^N - \eta_{n-1}](Q_n(f_n)) \rvert.
\end{align}
Minkowski's inequality along with $\lVert Q_n(\mathbf{1})\rVert \leq 1$, $\lVert Q_n(f_n) \rVert \leq 1$ and $\lVert \mcmcTarget{n}{N}(f_n) \rVert \leq \lVert f_n\rVert \leq 1$ combined with the induction hypothesis then readily yields the bound given in \eqref{eq:lr_error_induction_proof_bound:2}, i.e.\@{}
\begin{align}
\sqrt{N} \E\bigl[\lvert [\mcmcTarget{n}{N} - \eta_n](f_n) \rvert^r \bigr]^{\frac{1}{r}}
\leq \frac{2 \boundExponent{r} a_{n-1}}{\eta_{n-1}(G_{n-1})}.
\end{align}
This completes the proof of the first part of the proposition.
As the bounds obtained through the previous induction proof cannot easily be made time-uniform, we prove the second part of the proposition via the more conventional telescoping-sum decomposition given in \eqref{eq:standard_telescoping-sum_decomposition}. Using the arguments in \citet[][pp.~244--246]{delmoral2004feynman}, we obtain the following bound for the $p$th term in the telescoping sum:
\begin{align}
\sqrt{N}\lvert [\mcmcTarget{p,n}{\eta_p^N} - \mcmcTarget{p,n}{\mcmcTarget{p}{N}}](f_n) \rvert
& \leq 2 \sqrt{N} \lvert [\eta_p^N - \mcmcTarget{p}{N}](\widebar{Q}_{p,n}^N(f_n)) \rvert
r_{p,n}
\beta(P_{p,n})\\
& = 2 \lvert
[
U_p^N + L_p^N + R_p^N
]
(\widebar{Q}_{p,n}^N(f_n)) \rvert
r_{p,n}
\beta(P_{p,n}),
\end{align} where the second line is due to \eqref{eq:local_error_decomposition_full} and where \begin{gather}
\widebar{Q}_{p,n}^N(f_n)
\coloneqq \frac{Q_{p,n}^N(f_n)}{\lVert Q_{p,n}^N(f_n) \rVert},\\
Q_{p,n}^N(f_n)
\coloneqq \frac{Q_{p,n}(\mathbf{1})}{\mcmcTarget{p}{N}(Q_{p,n}(\mathbf{1}))} P_{p,n}\biggl(f_n - \frac{\mcmcTarget{p}{N}(Q_{p,n}(f_n))}{\mcmcTarget{p}{N}(Q_{p,n}(\mathbf{1}))}\biggr). \end{gather} Hence, by \eqref{eq:standard_telescoping-sum_decomposition}, \begin{align}
\MoveEqLeft \sqrt{N} \E\bigl[\lvert [\eta_n^N - \eta_n](f_n) \rvert^r\bigr]^{\frac{1}{r}}\\
& \leq 2 \sum_{p=1}^n \E\bigl[\lvert
[
U_p^N + L_p^N + R_p^N
]
(\widebar{Q}_{p,n}^N(f_n)) \rvert^r\bigr]^{\frac{1}{r}} r_{p,n} \beta(P_{p,n})\\
& \leq 2 \sum_{p=1}^n \Bigl(2 \boundExponent{r} \boundResolvent{p} + \frac{3 \boundResolvent{p}}{\sqrt{N}}\Bigr) r_{p,n} \beta(P_{p,n}) \leq a_n \boundExponent{r}
\end{align} with $a_n \coloneqq 10 \sum_{p=1}^n \boundResolvent{p} r_{p,n} \beta(P_{p,n})$, where the last line follows from Minkowski's inequality combined with Lemma~\ref{lem:lr_inequality_for_martingale_u}, \eqref{eq:bound_on_first_remainder_measure} and \eqref{eq:bound_on_second_remainder_measure}. Since $f \in \smash{\boundedFunSet^\star(\mathbf{E}_n)}$, we can replace $\beta(P_{p,n})$ by $\beta^\star(P_{p,n})$ in the derivation above. Lemma~\ref{lem:time-uniform_bounds} then yields the time-uniform bound \begin{align}
a_n \leq 10 \widebar{T} \bar{r} \sum_{p=1}^n \bar{\beta}^{\lfloor (n -p)/m\rfloor}
\leq 10 \widebar{T} \bar{r} m \sum_{j=0}^\infty \bar{\beta}^j \leq \frac{20 \bar{\imath} m }{\varepsilon(K) \varepsilon(M)^3 \varepsilon(G)^{2(m+l)}} \eqqcolon a. \end{align} This completes the proof.
\ensuremath{_\Box}
\end{proof}
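The time-uniform constant obtained above only involves a geometric summation, which is easy to check numerically. The sketch below (ours, with arbitrary illustrative values for $\bar{\imath}$, $\varepsilon(K)$, $\varepsilon(M)$, $\varepsilon(G)$, $m$ and $l$) confirms that the summed bound stays below the closed-form constant for every $n$:

```python
# With Tbar = 2*ibar/eK, rbar = eG**-(m+l)/eM, betabar = 1 - eG**(m+l)*eM**2,
# check  10*Tbar*rbar*sum_p betabar**floor((n-p)/m)
#        <= 20*ibar*m / (eK * eM**3 * eG**(2*(m+l)))  uniformly in n.
ibar, eK, eM, eG, m, l = 2.0, 0.5, 0.6, 0.7, 3, 2    # illustrative values only
Tbar = 2 * ibar / eK
rbar = eG ** -(m + l) / eM
betabar = 1 - eG ** (m + l) * eM ** 2
closed_form = 20 * ibar * m / (eK * eM ** 3 * eG ** (2 * (m + l)))
for n in [1, 10, 100]:
    a_n = 10 * Tbar * rbar * sum(betabar ** ((n - p) // m) for p in range(1, n + 1))
    print(n, a_n, closed_form)                        # a_n <= closed_form
```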
\subsection{Auxiliary results needed for the CLT}
\begin{lemma}\label{lem:convergence_of_covFun}
Fix $n > 1$. For some $\mu \in \probMeasSet(\mathbf{E}_{n-1})$ let $(\mu^N)_{N \geq 1}$ be a sequence of random probability measures on $(\mathbf{E}_{n-1}, \sigFieldPath{n-1})$ such that $\mu^N(f_{n-1}) \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \mu(f_{n-1})$ for all $f_{n-1} \in \boundedFunSet(\mathbf{E}_{n-1})$.
Then under Assumptions~\ref{as:ergodicity} and \ref{as:lipschitz}, for all $(f_n, g_n) \in \boundedFunSet(\mathbf{E}_n)^2$,
\begin{align}
\lVert \covarianceFunction{n}{\mu^{\mathrlap{N}\,\,}}(f_n, g_n) - \covarianceFunction{n}{\mu}(f_n, g_n)\rVert
\ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} 0.
\end{align} \end{lemma} \begin{proof}
We use a similar argument to that used in the first part of the proof of \citet[Theorem~3.5]{bercu2012fluctuations}. That is, under Assumptions~\ref{as:ergodicity} and \ref{as:lipschitz}, and using \citet[Proposition~3.1]{bercu2012fluctuations}, a telescoping-sum decomposition allows us to find a constant $\smash{\boundIntegralOperatorLipschitzAlt{n} < \infty}$ and a family of bounded integral operators $(\integralOperatorLipschitzAlt{n}{\nu})_{\nu \in \probMeasSet(\mathbf{E}_{n-1})}$ from $\boundedFunSet(\mathbf{E}_{n-1})$ into $\boundedFunSet(\mathbf{E}_{n})^2$ satisfying
\begin{align}
\sup_{\nu \in \probMeasSet(\mathbf{E}_{n-1})}\int_{\boundedFunSet(\mathbf{E}_{n-1})} \lVert h\rVert \integralOperatorLipschitzAlt{n}{\nu}((f_n, g_n), \mathrm{d} h) \leq \lVert f_n\rVert \lVert g_n\rVert \boundIntegralOperatorLipschitzAlt{n}, \label{eq:bound_on_covFunDiff_1}
\end{align}
such that
\begin{align}
\lVert \covarianceFunction{n}{\mu^{\mathrlap{N}\,\,}}(f_n, g_n) - \covarianceFunction{n}{\mu}(f_n, g_n)\rVert
& \leq \int_{\boundedFunSet(\mathbf{E}_{n-1})} \lvert [\mu^N - \mu](h)\rvert \integralOperatorLipschitzAlt{n}{\mu}((f_n,g_n), \mathrm{d} h). \label{eq:bound_on_covFunDiff_2}
\end{align}
It remains to be shown that the r.h.s.\@{} in \eqref{eq:bound_on_covFunDiff_2} goes to zero almost surely. Let $\mathcal{D}$ denote the collection of bounded Borel\slash Borel-measurable functions from $\boundedFunSet(\mathbf{E}_{n-1})$ to $\mathbb{R}$ (with respect to the uniform and Euclidean norms, respectively). This set contains, among others, the mappings $h \mapsto \nu(h)$ induced by probability measures $\nu \in \probMeasSet(\mathbf{E}_{n-1})$ via their action as linear integral operators. By Borel measurability of the norm, $\mathcal{D}$ also contains the function $h \mapsto \lVert h \rVert$. Since $\mu^N$ is a probability measure, we have $\lvert \mu^N(h) \rvert \leq \lVert h\rVert$, for any $h \in \boundedFunSet(\mathbf{E}_{n-1})$, while \eqref{eq:bound_on_covFunDiff_1} ensures that $h \mapsto \lVert h \rVert$ is integrable. Hence, we can apply Lebesgue's dominated convergence theorem \citep[e.g.\@][Theorem 1.21]{kallenberg2006foundations} to conclude that
the r.h.s.\@{} of \eqref{eq:bound_on_covFunDiff_2} vanishes almost surely as $N \to \infty$.
\ensuremath{_\Box} \end{proof}
We now prove the following proposition which adapts \citet[Proposition~4.3]{bercu2012fluctuations} \citep[see also][Theorem~9.3.1]{delmoral2004feynman} to our setting.
\begin{proposition} \label{prop:convergence_of_martingale_M}
Let $f \coloneqq (f_n)_{n \geq 1}$, where $f_n = (f_n^u)_{1 \leq u \leq d} \in \boundedFunSet(\mathbf{E}_n)^d$. The sequence of martingales $\mathcal{U}^N(f) = (\mathcal{U}_n^N(f))_{n \geq 1}$ converges in law as $N \to \infty$ to a Gaussian martingale $\mathcal{U}(f) = (\mathcal{U}_n(f))_{n \geq 1}$ such that for any $n \geq 1$ and any $1 \leq u,v\leq d$,
\begin{align}
\langle \mathcal{U}(f^u), \mathcal{U}(f^v)\rangle_n
& = \sum_{p=1}^n \eta_p \covarianceFunction{p}{}(f_p^u, f_p^v).
\end{align} \end{proposition} \begin{proof} We begin by re-indexing the processes defined above. For any $p \geq 1$ and $1 \leq i \leq N$, define the bijection $\theta^N$ by \begin{align}
\theta^N(p,i) \coloneqq (p-1)N + i - 1. \end{align} Define the filtration $\mathcal{G}^N \coloneqq (\mathcal{G}_k^N)_{k \geq 1}$, where $\mathcal{G}_k^N \coloneqq \vee_{(p,i)\colon \theta^N(p,i) \leq k} \mathcal{F}_{p}^{N,i}$.
We then have $\mathcal{U}_n^N(f) = \widetilde{\mathcal{U}}_k^N(f)$ whenever $k = \theta^N(n,N)$, where \begin{align}
\widetilde{\mathcal{U}}_k^N(f) \coloneqq \sum_{j=1}^k \varDelta \widetilde{\mathcal{U}}_j^N(f), \end{align} defines a $\mathcal{G}^N$-martingale $(\widetilde{\mathcal{U}}_k^N(f))_{k \geq 1}$ with increments \begin{align}
\varDelta \widetilde{\mathcal{U}}_j^N(f)
\coloneqq \varDelta U_p^{N,i}(f_p) \quad \text{for $\theta^N(p,i) = j$.} \end{align} Indeed, by \eqref{eq:martingale_expectation_time_p} and \eqref{eq:martingale_covariance_time_p}, for any $1 \leq u,v\leq d$, \begin{align}
\E\bigl[\varDelta \widetilde{\mathcal{U}}_j^N(f^u)\big|\mathcal{G}_{j-1}^N\bigr] & = 0, \label{eq:martingale_expectation_time_j}\\
\E\bigl[\varDelta \widetilde{\mathcal{U}}_j^N(f^u)\varDelta \widetilde{\mathcal{U}}_j^N(f^v)\big|\mathcal{G}_{j-1}^N\bigr] & = \frac{1}{N}\covarianceFunction{p}{N}(f_p^u, f_p^v)(\ParticlePathStationary{p}{i}), \label{eq:martingale_covariance_time_j} \end{align} where $p \geq 1$ and $1 \leq i \leq N$ satisfy $\theta^N(p,i) = j$.
We now apply the \gls{CLT} for triangular arrays of martingale-difference sequences \citep[see, e.g.\@,][Section~VII.8, Theorem~4; p.~543]{probability:theory:Shi95}. The Lindeberg condition is satisfied because the test functions $f_n$ are bounded. Finally, for any $p \geq 1$, \begin{align}
\smashoperator{\sum_{\smash{k=(pN)+1}}^{(p+1)N}} \;\E[\varDelta \widetilde{\mathcal{U}}_k^N(f^u)\varDelta \widetilde{\mathcal{U}}_k^N(f^v)|\mathcal{G}_{k-1}^N]
& = \frac{1}{N} \smashoperator{\sum_{i=1}^{\smash{N}}} \covarianceFunction{p}{N}(f_p^u, f_p^v)(\ParticlePathStationary{p}{i})\\
& = \tilde{\eta}_p^N \covarianceFunction{p}{N}(f_p^u, f_p^v)\\
& \ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \eta_p \covarianceFunction{p}{}(f_p^u, f_p^v), \end{align} by Proposition~\ref{prop:slln} and Lemma~\ref{lem:convergence_of_covFun}. Indeed, writing $\covarianceFunctionAlt{p}{N} \coloneqq \covarianceFunction{p}{N}(f_p^u, f_p^v)$ and $\covarianceFunctionAlt{p}{} \coloneqq \covarianceFunction{p}{}(f_p^u, f_p^v)$ to simplify the notation, we have \begin{align}
\MoveEqLeft\Prob\bigl(\bigl\{\lim\nolimits_{N \to \infty} \lvert \tilde{\eta}_p^N \covarianceFunctionAlt{p}{N} - \eta_p \covarianceFunctionAlt{p}{} \rvert = 0 \bigr\}\bigr)\\
& \geq \Prob\bigl(\bigl\{\lim\nolimits_{N \to \infty} \lvert \tilde{\eta}_p^N(\covarianceFunctionAlt{p}{N} - \covarianceFunctionAlt{p}{}) \rvert + \lvert [\tilde{\eta}_p^N - \eta_p](\covarianceFunctionAlt{p}{})\rvert = 0 \bigr\}\bigr)\\
& \geq \Prob\bigl(\bigl\{\lim\nolimits_{N \to \infty} \lVert \covarianceFunctionAlt{p}{N} - \covarianceFunctionAlt{p}{} \rVert + \lvert [\tilde{\eta}_p^N - \eta_p](\covarianceFunctionAlt{p}{})\rvert = 0 \bigr\}\bigr)\\
& = 1. \end{align} As a result, for any $n \geq 1$, as $N \to \infty$, \begin{equation}
\smashoperator{\sum_{k=1}^{(n+1)N}} \E[\varDelta \widetilde{\mathcal{U}}_k^N(f^u)\varDelta \widetilde{\mathcal{U}}_k^N(f^v)|\mathcal{G}_{k-1}^N]
\ensuremath{\mathrel{\mathop{\rightarrow}\nolimits_{\text{\upshape{a.s.}}}}} \sum_{p=1}^n \eta_p \covarianceFunction{p}{}(f_p^u, f_p^v). \end{equation} This completes the proof.
\ensuremath{_\Box} \end{proof}
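The re-indexing bijection $\theta^N$ used at the start of the preceding proof can be checked mechanically. The toy snippet below (ours) confirms that $\theta^N$ maps the doubly-indexed increments onto a single contiguous index set, so they form one martingale-difference sequence:

```python
# theta(p, i) = (p - 1) * N + i - 1 maps {1,...,n} x {1,...,N} bijectively
# onto {0, ..., nN - 1}.
N, n = 7, 5
vals = [(p - 1) * N + i - 1 for p in range(1, n + 1) for i in range(1, N + 1)]
print(sorted(vals) == list(range(n * N)))  # → True
```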
As an immediate consequence of Proposition~\ref{prop:convergence_of_martingale_M}, we obtain the following corollary. Its proof is a straightforward modification of the proof of \citet[Corollary~9.3.1]{delmoral2004feynman}.
\begin{corollary} \label{cor:convergence_of_local_errors}
As $N \to \infty$, the sequence of random fields $V^N = (V_n^N)_{n\geq 1}$ converges in law (and in the sense of convergence of finite-dimensional marginals) to the sequence of independent and centred Gaussian random fields $V = (V_n)_{n \geq 1}$ with covariance function as defined in \eqref{eq:local_error_covariance_function}.
\ensuremath{_\Box} \end{corollary}
We are now ready to prove the \gls{CLT}.
\begin{proof}[of Proposition~\ref{prop:clt}]
The proof of the \gls{CLT} now follows by replacing \citet[Theorem~9.3.1 \& Corollary~9.3.1]{delmoral2004feynman} in the proofs of \citet[Propositions~9.4.1 \& 9.4.2]{delmoral2004feynman} with Proposition~\ref{prop:convergence_of_martingale_M} and Corollary~\ref{cor:convergence_of_local_errors}, respectively.
For the time-uniform bound on the asymptotic variance in \eqref{enum:prop:clt:normalised}, we note that for any $1 \leq u \leq d$,
\begin{align}
\lVert \widebar{Q}_{p,n}(f_n^u - \eta_n(f_n^u)) \rVert
& = \biggl\lVert \frac{Q_{p,n}(\mathbf{1})}{\eta_p Q_{p,n}(\mathbf{1})} \bigl[P_{p,n}(f_n^u) - \Psi_{p,n}^{\eta_p}P_{p,n}(f_n^u)\bigr]\biggr\rVert\\
& \leq r_{p,n} \lVert [\Id- \Psi_{p,n}^{\eta_p}] P_{p,n}(f_n^u)\rVert\\
& \leq 2 \lVert f_n^u \rVert r_{p,n} \beta^\star(P_{p,n}),
\end{align}
where we have used that $\eta_n = \Psi_{p,n}^{\eta_p}P_{p,n}$, with
\begin{align}
\Psi_{p,n}^{\eta_p}(\mathrm{d} \mathbf{x}_p) \coloneqq \frac{\eta_p(\mathrm{d} \mathbf{x}_p)Q_{p,n}(\mathbf{1})(\mathbf{x}_p)}{\eta_p Q_{p,n}(\mathbf{1})}.
\end{align}
Hence, by \eqref{eq:boundedness_of_covFun} and Lemma~\ref{lem:time-uniform_bounds} the time-uniform bound on the asymptotic variance in \eqref{eq:bound_on_asymptotic_variance} holds, e.g.\@{} with \begin{align}
c \coloneqq 8 \widebar{T}^2 \bar{r}^2 \sum_{p=1}^n \bar{\beta}^{2 \lfloor (n -p)/m\rfloor}
\leq 8 \widebar{T}^2 \bar{r}^2 m \sum_{j=0}^\infty \bar{\beta}^j
\leq \frac{32 \bar{\imath}^2 m }{\varepsilon(K)^2 \varepsilon(M)^4 \varepsilon(G)^{3(m+l)}}. \end{align} This completes the proof.
\ensuremath{_\Box} \end{proof}
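Like the $\mathbb{L}_r$-bound constant, the asymptotic-variance constant $c$ rests on a geometric summation; the sketch below (ours, with arbitrary illustrative parameter values; the block is self-contained) checks the closed form numerically:

```python
# With ratio betabar**2, the geometric sum is still bounded by m/(1 - betabar),
# giving  c <= 32*ibar**2*m / (eK**2 * eM**4 * eG**(3*(m+l))).
ibar, eK, eM, eG, m, l = 2.0, 0.5, 0.6, 0.7, 3, 2    # illustrative values only
Tbar, rbar = 2 * ibar / eK, eG ** -(m + l) / eM
betabar = 1 - eG ** (m + l) * eM ** 2
closed = 32 * ibar**2 * m / (eK**2 * eM**4 * eG ** (3 * (m + l)))
n = 100
c_n = 8 * Tbar**2 * rbar**2 * sum(betabar ** (2 * ((n - p) // m)) for p in range(1, n + 1))
print(c_n, closed)                                    # c_n <= closed
```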
\end{document}
Particle-like topologies in light
Danica Sugic1,2,3,
Ramon Droop4,
Eileen Otte4,
Daniel Ehrmanntraut4,
Franco Nori3,5,
Janne Ruostekoski6,
Cornelia Denz4 &
Mark R. Dennis1,2,7
Nature Communications volume 12, Article number: 6785 (2021)
Three-dimensional (3D) topological states resemble truly localised, particle-like objects in physical space. Among the richest such structures are 3D skyrmions and hopfions, which realise integer topological numbers in their configuration via homotopic mappings from real space to the hypersphere (sphere in 4D space) or the 2D sphere. They have received tremendous attention as exotic textures in particle physics, cosmology, superfluids, and many other systems. Here we experimentally create and measure a topological 3D skyrmionic hopfion in fully structured light. By simultaneously tailoring the polarisation and phase profile, our beam establishes the skyrmionic mapping by realising every possible optical state in the propagation volume. The resulting light field's Stokes parameters and phase are synthesised into a Hopf fibration texture. We perform volumetric full-field reconstruction of the \(\Pi_3\) mapping, measuring a quantised topological charge, or Skyrme number, of 0.945. Such topological state control opens avenues for 3D optical data encoding and metrology. The Hopf characterisation of the optical hypersphere endows a fresh perspective to topological optics, offering experimentally-accessible photonic analogues to the gamut of particle-like 3D topological textures, from condensed matter to high-energy physics.
Nontrivial 3D topology has inspired many descriptions of fundamental particles. Motivated by Lord Kelvin's knotted vortex atom hypothesis1, Tony Skyrme2 in 1961 proposed a topological model for nuclei: particle-like continuous fields in 3D space now called skyrmions. These map 3D real space to the hypersphere (i.e. the unit sphere in four dimensions, also known as the 3-sphere3,4), parametrising the field. The skyrmion configuration wraps around the hypersphere an integer number of times called the Skyrme number. Skyrmions are now seen as a particular example of more general 3D topological solitons5,6,7, related to other topological textures such as monopoles and hopfions—the latter being fields with a 2-sphere parameter space (i.e. unit sphere in three dimensions). 3D topological textures have been studied theoretically as hypothetical objects in various systems, including high-energy physics5,8, condensed matter6,7,9,10, and early-universe cosmology11. In recent years, 3D skyrmions and hopfions have been experimentally realised in cold quantum matter12,13 and liquid crystals14.
So-called baby skyrmions are the two-dimensional (2D) counterpart of 3D skyrmions: fields in 2D physical space which map to, and wrap around, a 2-sphere parameter space. Their study is much more developed in theory and experiments, notably in non-singular superfluid vortices15 including those imprinted by structured light16, and especially magnetic systems17. Here the direction of spin at each point provides the 2-sphere parameter space, and magnetic skyrmion excitations have the potential to represent topological bits for low-power computer memory and processing17. Recently, 2D baby skyrmion configurations were created in optical systems, as the direction of electric field vectors, or photon spin, near a material interface18,19, displaying dynamics similar to magnetic skyrmions20. In propagating laser light, optical polarisation can be structured into full Poincaré beams21, which realise every state of elliptic polarisation in the transverse plane. These beams can also be interpreted as 2D baby skyrmions22, since the Poincaré sphere, as the 2-sphere parameter space, parametrises transverse, elliptic polarisation states. However, 3D particle-like topological objects have not been considered either theoretically or experimentally in optical fields.
Optical realisations of 3D topological states can take various forms. Much interest has focused on singularity lines, such as optical vortices or polarisation singularities (e.g. C lines)22. In structured light, with amplitude, phase, and polarisation spatially varying, these can be woven into loops, links, and knots23,24 and organise Möbius strips25. The state of elliptic polarisation is right- or left-handed circular (RH, LH) on C lines, often described as a skeleton of the complex optical polarisation field26. Topologically structured light has a wide range of applications including enhanced free-space optical communications27 and advanced trapping28, and is related to optical currents29 and orbital angular momentum30. Singular lines are topologically characterised by the fundamental homotopy group \(\Pi_1\). The homotopy group \(\Pi_3\), on the other hand, defines topological particles such as 3D hopfions and skyrmions5. It is natural to ask whether these 3D excitations can be created in structured light.
Here we show the design, generation and measurement of a structured, propagating beam of laser light realising such a mapping, unifying particle-like 3D topologies in free-space optics with those studied in high-energy physics, cosmology and various kinds of condensed matter.
The optical hypersphere of polarisation and phase
Spatially extended polarised light is represented by a complex transverse electric field vector at each point \(\mathbf{r}\) in the propagating beam. Its RH and LH components are represented by the complex-valued scalar functions \(E_{\mathrm{R}}(\mathbf{r})\) and \(E_{\mathrm{L}}(\mathbf{r})\), and the pair \((E_{\mathrm{R}}, E_{\mathrm{L}})\) which characterises the optical state at each point is assumed normalised, i.e.
$$(\mathrm{Re}\,E_{\mathrm{R}})^{2}+(\mathrm{Im}\,E_{\mathrm{R}})^{2}+(\mathrm{Re}\,E_{\mathrm{L}})^{2}+(\mathrm{Im}\,E_{\mathrm{L}})^{2}=1.$$
Therefore, this normalised optical field defines a mapping from each point in 3D real space to a point on the 3-sphere, which we call the optical hypersphere. The optical hypersphere is conveniently parametrised using spinorial angles \(\alpha, \beta, \gamma\):
$${E}_{{{{{{\rm{R}}}}}}}={{\cos }}\frac{\beta }{2}{{{{{{\rm{e}}}}}}}^{{{{{{\rm{i}}}}}}(\gamma -\alpha )/2}\,\,\,\,\,{{{{{\rm{and}}}}}}\,\,\,\,\,{E}_{{{{{{\rm{L}}}}}}}={{\sin }}\frac{\beta }{2}{{{{{{\rm{e}}}}}}}^{{{{{{\rm{i}}}}}}(\gamma +\alpha )/2},$$
for \(0\le \beta \le \pi\), \(-\pi < \alpha \le \pi\) and \(-2\pi \, < \, \gamma \,\le\, 2\pi\). The angles \(\alpha ,\beta ,\gamma\) have a direct interpretation in terms of the polarisation and phase of the electric field state: with \({S}_{1},{S}_{2},{S}_{3}\) the normalised Stokes parameters, \(\alpha ={{\arctan }}({S}_{1},{S}_{2})\) is the polarisation azimuth, and \({{\cos }}\beta ={S}_{3}\) is the polarisation ellipticity; \(\gamma ={{\arg }}{E}_{{{{{{\rm{R}}}}}}}+{{\arg }}{E}_{{{{{{\rm{L}}}}}}}\) is the sum of the two electric field components' phases26,31. Further details of these parameters and their relationship with the hypersphere and the Poincaré sphere (2-sphere) parametrising polarisation may be found in Supplementary Note 1.
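As a concrete check of this parametrisation, Eq. (2) can be evaluated numerically (our own illustrative sketch, not part of the original analysis):

```python
import numpy as np

def hypersphere_state(alpha, beta, gamma):
    """(E_R, E_L) on the optical hypersphere from the spinorial angles."""
    E_R = np.cos(beta / 2) * np.exp(1j * (gamma - alpha) / 2)
    E_L = np.sin(beta / 2) * np.exp(1j * (gamma + alpha) / 2)
    return E_R, E_L

alpha, beta, gamma = 0.7, 1.1, 2.3
E_R, E_L = hypersphere_state(alpha, beta, gamma)

# the four real components (Re E_R, Im E_R, Re E_L, Im E_L) lie on the 3-sphere
assert np.isclose(abs(E_R)**2 + abs(E_L)**2, 1.0)

# recover the polarisation and phase parameters from the field components
assert np.isclose(abs(E_R)**2 - abs(E_L)**2, np.cos(beta))   # ellipticity, S3
assert np.isclose(np.angle(E_L * np.conj(E_R)), alpha)       # azimuth
assert np.isclose(np.angle(E_R) + np.angle(E_L), gamma)      # phase sum
```

For angles within the principal ranges, recovering the azimuth via \(\arg ({E}_{{{{{{\rm{L}}}}}}}{E}_{{{{{{\rm{R}}}}}}}^{\ast })\) is equivalent to the two-argument arctangent of the Stokes parameters quoted above.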
The full Poincaré sphere of polarisation states can be realised in a transverse plane of a structured light field, created from the superposition of two, differently structured, LH and RH beam components, similar to a full Poincaré beam21. At each spatial point, the optical field has some elliptical polarisation state characterised by \(\alpha ,\beta\). In 3D, points of constant elliptical polarisation lie on filaments, generalising RH and LH circular polarised C lines. 3D real space is filled by the set of polarisation filaments, constituting a polarisation texture (Fig. 1a). Each filament corresponds to a point on the Poincaré sphere (Fig. 1b), and many filaments cross each plane (Fig. 1c). Although the polarisation is fixed on the filaments, the optical phase smoothly varies along them (Fig. 1c, insets). Any 3D structured light field with varying transverse polarisation can be represented by such a texture.
Fig. 1: 3D optical polarisation texture.
A light field with position-dependent transverse polarisation and phase is created in a volume from the superposition of RH and LH circularly polarised beams, whose amplitudes and phases are carefully structured. Spatial points characterised by the same state of elliptic polarisation lie on the filaments (a). The 3D polarisation texture can be visualised by colouring the filaments according to the positions of their polarisation ellipses on the Poincaré sphere (b). The azimuthal angle \(\alpha\), representing the ellipse orientation, is encoded in the hue, and the polar angle \(\beta\), representing the ellipticity, in the saturation level. The sphere's poles, representing the circularly polarised states, are black (LH) and white (RH). Each optical state also has a phase, represented by the position of the arrow along the polarisation ellipse. In the transverse plane in (c), states of light are fully described by colours and arrowed ellipses. Along filaments of constant polarisation, the phase on the ellipses varies smoothly, as shown in the insets for three representative planes.
The 3-sphere supports the Hopf fibration4, a fibre bundle which divides it into linked circles. In the optical hypersphere, each fixed polarisation state (with \(\alpha ,\beta\) constant) traces out a circle as the phase \(\gamma\) goes through a \(4\pi\) cycle. The phase and polarisation parameters therefore realise the Hopf fibration in the optical hypersphere (this is explained in detail in Supplementary Note 1). The Poincaré sphere is interpreted here as the base space of the fibration31. We design a 3D structured beam that realises all the transverse states of light, including polarisation and phase, in its focal volume (real space). It displays the 3D Hopf fibration topology in a configuration we call a skyrmionic hopfion. The skyrmionic hopfion realises, in real space, an image of the Hopf fibration in the optical hypersphere. The fixed polarisation filaments can be represented as a 3D topological texture of entwined curves, in which every pair of loops is linked.
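The statement that every pair of constant-polarisation fibres is linked can be checked numerically. The sketch below (our own illustration, with hypothetical helper names) traces two fibres over the \(4\pi\) phase period, stereographically projects them into 3D, and evaluates a discretised Gauss linking integral:

```python
import numpy as np

def hopf_fibre(alpha, beta, n=400):
    """Closed fibre of constant polarisation (alpha, beta), traced by the
    phase gamma over its 4*pi period, stereographically projected to R^3."""
    g = np.linspace(0, 4*np.pi, n, endpoint=False)
    ER = np.cos(beta/2) * np.exp(1j*(g - alpha)/2)
    EL = np.sin(beta/2) * np.exp(1j*(g + alpha)/2)
    pts = np.stack([ER.real, ER.imag, EL.real, EL.imag])   # points on S^3
    return (pts[:3] / (1.0 - pts[3])).T                    # project from (0,0,0,1)

def gauss_linking(c1, c2):
    """Discretised Gauss linking-number integral for two closed polygons."""
    d1 = np.roll(c1, -1, axis=0) - c1                      # edge vectors
    d2 = np.roll(c2, -1, axis=0) - c2
    r = c1[:, None, :] - c2[None, :, :]
    num = np.einsum('ijk,ijk->ij', r, np.cross(d1[:, None, :], d2[None, :, :]))
    return np.sum(num / np.linalg.norm(r, axis=-1)**3) / (4*np.pi)

# two distinct polarisation states give two fibres; every such pair is linked
f1 = hopf_fibre(alpha=0.3, beta=1.0)
f2 = hopf_fibre(alpha=-1.2, beta=2.0)
print(abs(round(float(gauss_linking(f1, f2)))))   # 1
```

For any two distinct states \((\alpha ,\beta )\) the computed linking number has magnitude 1, as expected for fibres of the Hopf fibration.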
Experimentally realising the skyrmionic hopfion
We design the skyrmionic hopfion structure in light by superimposing carefully chosen combinations of vectorial Laguerre−Gauss beams23,30 \({{{{{{\rm{LG}}}}}}}_{{{{{{\mathscr{l}}}}}}{{{{{\mathscr{,}}}}}}p}\). The LH component, \({E}_{{{{{{\rm{L}}}}}}}\), is chosen to be the Laguerre−Gauss beam \({{{{{{\rm{LG}}}}}}}_{-{{{{\mathrm{1,0}}}}}}\), with a negative-signed optical vortex along the beam axis23. The RH component, \({E}_{{{{{{\rm{R}}}}}}}\), is chosen as a superposition of the Laguerre−Gauss beams \({{{{{{\rm{LG}}}}}}}_{{{{{\mathrm{0,0}}}}}}\) and \({{{{{{\rm{LG}}}}}}}_{{{{{\mathrm{0,1}}}}}}\), with a circular vortex loop in the focal plane centred on the axis23. Therefore, the net polarisation field has an RH C line along the axis, threading an LH C line loop in the focal plane. The C lines, at which \(\beta =0,\pi\), organise the rest of the texture: between them are nested tori with \(\beta =\) constant, including the particular L surface of linear polarisation at \(\beta =\pi /2\), analogous to vortices in other skyrmionic textures9. Details of the superposition optimisation are given in the "Methods" and Supplementary Note 2.
Experimentally, the RH and LH beam components are separately shaped by a spatial light modulator (SLM) (Fig. 2), before being combined in a joint beam path to form the skyrmionic hopfion (Supplementary Fig. 4). The total polarisation state and phase of the resulting beam are measured at each point in the propagating volume via vectorial full-field reconstruction (VFFR, see "Methods"). For the VFFR, we combine established metrological techniques, namely Stokes polarimetry, interferometry, and digital propagation32. Our approach explicitly relates measurements in different 2D planes, reconstructing the full 3D field volume. Further details of the experiment are given in the "Methods" and Supplementary Note 3.
Fig. 2: Sketch of experimental setup.
Beams \({E}_{{{{{{\rm{R}}}}}}}\) and \({E}_{{{{{{\rm{L}}}}}}}\) are generated on a spatial light modulator (SLM) and superimposed on-axis by a polarising beam splitter (PBS). A quarter-wave plate (QWP) transforms \({E}_{{{{{{\rm{L}}}}}}}\) into left circular (black) and \({E}_{{{{{{\rm{R}}}}}}}\) into right circular (white) polarisation. Around the focal spot of the lens, the skyrmionic hopfion appears in a cuboid of size \(198.8\,{{\upmu }}{{{{{\rm{m}}}}}}\,{{\times }}\,198.8\,{{\upmu }}{{{{{\rm{m}}}}}}\,{{\times }}\,53.2\,{{{{{\rm{mm}}}}}}\). The inset shows the polarisation texture in the focal plane (colour coded as the Poincaré sphere in Fig. 1b), consistent with the theory in Fig. 1c. Measurements of amplitude, phase and polarisation are enabled by volumetric full-field reconstruction (VFFR, see "Methods").
The VFFR measurements reveal the polarisation Hopf fibration in the 3D light structure (Fig. 3a and Supplementary Video). The polarisation ellipticity is constant on nested tori, made up of polarisation filaments labelled by constant \(\beta\) and varying azimuth \(\alpha\) (Fig. 3b–d). Our polarimetric resolution identifies these filaments clearly, particularly the linking between pairs of loops. This resolution compares very well with experimentally measured hopfion structures in other systems, such as cold atoms12,13 and liquid crystals14. As predicted (see Supplementary Notes 1 and 2), the two linked C lines (vortices in the superposed beams) are the topological skeleton of the hopfion structure, on which the rest of the polarisation texture hangs. They are not topologically privileged (all polarisation filaments are linked loops), but the C lines form the core filaments for the system of tori, including the L surface of linear polarisation. We anticipate that C lines play a similar structural role in other topological 3D polarisation textures.
Fig. 3: Visualising the topology of the focal volume.
The optical texture is reconstructed from the polarisation and phase measurements via the VFFR of the optical beam. The measured volume is coloured following the Poincaré sphere and reveals the topological structure of the Hopf fibration (a). Two C lines, the black loop and the threading straight white line, organise the texture into nested tori. Each toroidal surface represents points characterised by the same ellipticity. The colours wind nontrivially around each torus, and a few polarisation filaments making up these tori are shown in the insets: in (b), the lighter surface (\({S}_{3}=0.398\)) is made of lines characterised by RH elliptic polarisation; in (c), the L surface (\({S}_{3}=0\)) is made of lines along which the polarisation state is linear23,26; in (d), the darker surface (\({S}_{3}=-0.775\)) is made of lines characterised by LH elliptic polarisation. In each inset, the cyan and red filaments, corresponding to \(\alpha =0,\pi\), are shown to form a Hopf link. Every pair of filaments in the texture links in this way, consistent with the Hopf fibration. The 3D rendering of this experimental skyrmionic hopfion is in the Supplementary Video.
Considering the shaped beams' phase as well as polarisation allows a comparison of the measured hopfion structure in real space (Fig. 4a, with phases along the shown filaments in Fig. 4b) with the optical hypersphere (Fig. 4c), parametrised by \(\alpha ,\beta ,\gamma\). This direct comparison gives a volume-to-volume mapping (demonstrated by the grey cubes in Fig. 4a, c). The density of hypersphere volume with respect to real space volume is the topological Skyrme density Σ, which can be interpreted as a continuous measure of linking33 of the polarisation filaments. Characteristic of 3D skyrmions5,8,9, Σ is concentrated around the C line loop, and its real space integral is very close to unity, covering the hypersphere of hypersolid angle \(2{\pi }^{2}\) once, i.e. a Skyrme number of 1. The Skyrme number is the degree of the mapping from 3D real space to the hypersphere, corresponding to the element of the homotopy group \({\Pi }_{3}\). More details of this are provided in Supplementary Notes 1, 2 and 4.
Fig. 4: Measured Skyrme density and optical hypersphere.
The experimental polarisation hopfion is shown (a), with measured phases (b) around the two filaments shown (\(\alpha =0,\pi\), \({S}_{3}=0.398\), as in Fig. 3b). The hopfion in real space closely resembles the structure of the optical hypersphere parameter space in (c), shown in volume-preserving projection from the RH circular polarisation state at the focal point of real space. Several features make the nature of the topological mapping clear. The real-space filaments of constant polarisation are mapped to the smooth Hopf circles of fixed polarisation in the optical hypersphere. The images of two typical real space transverse planes (grey grids) are distorted in the parameter space. The cube in real space intersecting the LH C line (black loop) maps to a larger distorted cuboid, indicating a greater Skyrme density Σ near this point. The cube away from the loop maps to a smaller cuboid, indicating a smaller Σ. In (d), the bounding cube represents the investigated focal volume in real space. The positive Skyrme density Σ of our structured skyrmionic hopfion is concentrated around the LH C line with some positive and negative fluctuations visible around it. The upper inset shows the on-axis view (from \(z=+\infty\)) of the toroidal conformation. The measured Skyrme number, given by the sum of the cube volumes, is 0.942 (described in "Methods"). Theoretical predictions of the Skyrme density for the model field are shown in the lower inset, giving a Skyrme number of 0.997 (calculations in Supplementary Note 4).
Mathematically, the Skyrme density Σ is the Jacobian determinant of the map from real space to the hypersphere (see Supplementary Note 4),
$$\Sigma =\frac{1}{16{\pi }^{2}}\nabla \gamma \cdot \left(\nabla {{{{{\rm{cos }}}}}}\beta \times \nabla \alpha \right).$$
This is the natural 3D generalisation of the 2D topological density for 2D skyrmions13,15 (here, full Poincaré beams), \(\tfrac{1}{4\pi }\,\hat{{{{{{\bf{z}}}}}}}\cdot \left(\nabla {{\cos }}\beta \times \nabla \alpha \right)\). As the field parameters vary longitudinally as well as transversely, three parameters are needed for the full, continuous topological density describing the covering of the optical hypersphere, which is nonzero when the three gradient vectors are linearly independent. The topological density in Eq. (3) may be rewritten in terms of the normalised optical orbital current29 \({{{{{{\bf{J}}}}}}}_{{{{{{\rm{o}}}}}}}={{{{{\rm{Im}}}}}}[{E}_{{{{{{\rm{R}}}}}}}^{\ast }\nabla {E}_{{{{{{\rm{R}}}}}}}+{E}_{{{{{{\rm{L}}}}}}}^{\ast }\nabla {E}_{{{{{{\rm{L}}}}}}}]\),
$$\Sigma =\frac{1}{4{\pi }^{2}}{{{{{{\bf{J}}}}}}}_{{{{{{\rm{o}}}}}}}\cdot \nabla \times {{{{{{\bf{J}}}}}}}_{{{{{{\rm{o}}}}}}}.$$
Details are given in Supplementary Note 4. An analogous expression applies to 3D skyrmions in other systems6,7, with an appropriate current or velocity substituted. It is also the topological helicity, describing knotted fields in high-energy physics5, superfluids6,7, magnetic fields and hydrodynamics34. Its appearance in Eq. (4) suggests a relation between the 3D Skyrme density of a polarisation field and the Poynting vector of optical energy flow.
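To illustrate how this density integrates to the Skyrme number, the sketch below evaluates it for a model unit-Skyrme-number texture (the inverse stereographic projection of real space onto the hypersphere, not the experimental beam), using an equivalent four-component Jacobian form that avoids the branch cuts of \(\alpha\) and \(\gamma\):

```python
import numpy as np

# cubic grid over real space (box half-width L, arbitrary units)
N, L = 80, 6.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
R2 = X**2 + Y**2 + Z**2

# model unit-Skyrme-number texture: inverse stereographic projection of
# real space onto the hypersphere, n = (Re E_R, Im E_R, Re E_L, Im E_L)
n = np.stack([2*X, 2*Y, 2*Z, 1.0 - R2], axis=-1) / (1.0 + R2)[..., None]

# Skyrme density as the Jacobian determinant of the map into S^3
# (Vol(S^3) = 2 pi^2); equivalent to Eq. (3) but free of branch cuts
gx, gy, gz = np.gradient(n, dx, axis=(0, 1, 2))
M = np.stack([n, gx, gy, gz], axis=-1)           # (..., 4, 4) matrices
sigma = np.linalg.det(M) / (2 * np.pi**2)

skyrme_number = sigma.sum() * dx**3
print(abs(skyrme_number))                        # close to 1 for this model
```

For measured data, `n` would instead be assembled from (Re E_R, Im E_R, Re E_L, Im E_L) on the measurement grid.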
We determine the Skyrme density explicitly from the measured data, as shown in Fig. 4d. The sum over the measured voxels gives a Skyrme number of 0.945, which is less than unity since low intensities limit the measured volume boundary. The corresponding covering of the optical hypersphere, with the image of the real space measurement boundary, is represented in Supplementary Fig. 10. Rather than a smooth interpolation of the optical field measurements, this density is determined discretely from a simplicial cell complex of spherical tetrahedra in the optical hypersphere arising from the measured data points. Details of the technique and its implementation are in the "Methods" and Supplementary Note 5. The value of the Skyrme number of the theoretical field, with the same boundary, is 0.997; the difference from the measured value is consistent with the experimental error.
We have demonstrated the experimental construction of a 3D skyrmionic hopfion in the polarisation and phase pattern of a propagating light beam. The Hopf fibration is realised in the natural polarisation parameters from Eq. (2), a mapping from 3D real space to the 3D optical hypersphere, generalising the Poincaré sphere naturally by including phase.
Our experiment and analysis manifest several topological ideas not commonly emphasised in optics. Firstly, optical polarisation fields in 3D can have topological textures, analogous to textures in condensed matter, high-energy physics, etc. This might lead to further insights and possibilities for topologically structured light and its applications. Secondly, as a parameter space for the full vectorial light field, the optical hypersphere goes beyond the standard Poincaré sphere. The usual approach requires a Pancharatnam−Berry phase35,36 to be included later, ignoring the fact that the optical field parameters define a manifold as natural as the 3-sphere. It is intriguing to speculate whether the machinery of the Poincaré sphere analysis of polarisation and Jones calculus may be cast in the optical hypersphere.
The 3D polarisation Skyrme density Σ in Eqs. (3) and (4) can be used as a tool to analyse optical vectorial full fields. Σ is the continuous topological charge density representing the abstract optical hypersphere volume covered by each real space point. Σ = 0 when the gradients of ellipticity, phase, and azimuth are linearly dependent, typically occurring along surfaces in 3D. The relation between Σ and the optical orbital current suggests a subtle interplay of the Poynting vector29 and energy−momentum fluxes with the topology of the optical hypersphere (explored further in Supplementary Notes 1 and 4).
A smooth polarisation texture is disrupted at point singularities in the polarisation field, such as saddle points in the parameters \(\alpha ,\beta ,\gamma\). As previously observed24 in the reconstruction of Seifert surfaces spanning knotted optical singularities, these points are experimentally hard to control and limit the effective reconstruction of textures of polarisation lines. They do not affect the Skyrme number, and such points will lie on the surfaces Σ = 0.
Our experiments and theory demonstrate some of the higher-dimensional topological invariants possible in structured light. This formulation and measurement of an optical \({\Pi }_{3}\) invariant will lead to robust topological design principles for 3D optical fields for free-space optics and nanophotonics. These skyrmionic structures generalise to fields with higher-degree Skyrme numbers, involving more complex superposed beams including knots and links, offering a broader gamut of topological structures and integers that can be encoded in structured optical beams. This approach to topological beam shaping will provide further analogies with cold atoms, condensed matter and high-energy physics, opening the possibility of emulating, optically, exotic particle-like topologies from field theories not otherwise accessible in the laboratory.
Topological design of the optical skyrmionic hopfion
The optical skyrmionic hopfion consists of the two scalar fields \({E}_{{{{{{\rm{R}}}}}}}\) and \({E}_{{{{{{\rm{L}}}}}}}\) representing the right- and left-handed field components respectively. These scalar components are appropriately structured to give the 3D topological texture described in the main text, effectively realising the topological mapping from 3D real space to the optical hypersphere. In these methods, we will refer to unnormalised field amplitudes \({\psi }_{{{{{{\rm{R}}}}}}}\) and \({\psi }_{{{{{{\rm{L}}}}}}}\) rather than their normalised counterparts: the beam intensity is \(I={\left|{\psi }_{{{{{{\rm{R}}}}}}}\right|}^{2}+{\left|{\psi }_{{{{{{\rm{L}}}}}}}\right|}^{2}\), and \({E}_{j}={\psi }_{j}/\sqrt{I}\), \(j={{{{{\rm{R}}}}}},{{{{{\rm{L}}}}}}\).
The optical skyrmionic hopfion can be understood intuitively: the component \({\psi }_{{{{{{\rm{R}}}}}}}\) should have a circular optical vortex line in the focal plane, concentric with the beam axis, and the component \({\psi }_{{{{{{\rm{L}}}}}}}\) should have an optical vortex line along the beam axis. This realises all phases and polarisations (i.e., all points of the optical hypersphere), concentrated in a small propagation volume. These conditions can be realised by superpositions of Laguerre−Gauss (LG) modes23,30. The standard form of these modes (given in Supplementary Note 2) is \({{{{{{\rm{LG}}}}}}}_{{{{{{\mathscr{l}}}}}}{{{{{\mathscr{,}}}}}}p}\left(R,\phi ,{z;w}\right)\), depending on cylindrical coordinates in real space, \((R,\phi ,z)\), with \({{{{{\mathscr{l}}}}}}\) the azimuthal mode number, \(p\) the radial mode number, and \(w\) the waist width.
As discussed in Supplementary Note 4, the axial optical vortex line should have a negative sign, so we choose \({\psi }_{{{{{{\rm{L}}}}}}}=2c{{{{{{\rm{LG}}}}}}}_{-{{{{\mathrm{1,0}}}}}}(R,\phi ,{z;w})\), the simplest LG mode with an axial vortex of the correct sign, with \(c\) a constant to be found and \(2\) included for calculational convenience. The vortex ring can be realised by the sum of two LG modes with \(\ell =0\), \({\psi }_{{{{{{\rm{R}}}}}}}=\left(-a+b\right){{{{{{\rm{LG}}}}}}}_{{{{{\mathrm{0,0}}}}}}\left(R,\phi ,{z;w}\right)-{b{{{{{\rm{LG}}}}}}}_{{{{{\mathrm{0,1}}}}}}\left(R,\phi ,{z;w}\right)\), where \(a\) and \(b\) are parameters to be found. This guarantees that the vortex ring lies in the focal plane \(z=0\), with a radius of \(\sqrt{a/b}w\), provided \(a,b \, > \, 0\). The coefficients \(a\) and \(b\) determine the intensity pattern around the vortex ring as well as its radius.
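A minimal numerical sketch of this superposition in the focal plane \(z=0\) (our own illustration, using one common LG normalisation; the nodal-ring radius found below is convention-dependent and need not equal the \(\sqrt{a/b}w\) quoted for the convention of Supplementary Note 2):

```python
import numpy as np

w = 1.0                       # beam waist (arbitrary units)
a, b, c = 3.0, 9.4, 0.4       # superposition coefficients quoted in the text

x = np.linspace(-2.5, 2.5, 501) * w
X, Y = np.meshgrid(x, x, indexing="ij")
r2, phi = X**2 + Y**2, np.arctan2(Y, X)

# focal-plane (z = 0) profiles of the needed LG modes, one common convention
# (normalisation constants omitted)
LG_m10 = np.sqrt(2 * r2) / w * np.exp(-r2 / w**2) * np.exp(-1j * phi)  # LG_{-1,0}
LG_00 = np.exp(-r2 / w**2)                                             # LG_{0,0}
LG_01 = (1 - 2 * r2 / w**2) * np.exp(-r2 / w**2)                       # LG_{0,1}

psi_L = 2 * c * LG_m10                    # axial vortex of charge -1
psi_R = (-a + b) * LG_00 - b * LG_01      # radial node -> vortex ring in focus

# psi_L vanishes on the beam axis (the axial C line)
mid = len(x) // 2
assert abs(psi_L[mid, mid]) < 1e-12

# psi_R is real at z = 0 and changes sign along a radius: a circular nodal ring
profile = psi_R[mid, mid:]
ring_index = np.flatnonzero(np.diff(np.sign(profile)))[0]
ring_radius = x[mid:][ring_index]
print(ring_radius / w)        # ring radius in units of w (convention-dependent)
```

The RH component thus carries the vortex ring threaded by the LH component's axial vortex, the skeleton of the skyrmionic hopfion.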
For a fixed value of \(w\), the optical skyrmionic hopfion is therefore realised for a range of values of \(a,{b}\) and \(c\). The different values of the parameters give very different shapes of the structure (residing in the polarisation parameters) and distributions of the overall intensity \(I\). A preliminary exploration of these is given in Supplementary Note 4. The spreading nature of the Gaussian beams means that it is not possible to cover the 3-sphere completely with polarisation states realised in 3-space. We therefore choose a superposition which maximises the volume of optical states on the optical hypersphere realised within the measured 3D volume in real space.
The values of the parameters are chosen to optimise the field configuration so that it can be effectively generated and measured in the experiment. To aid this optimisation, we introduce an extra scale size parameter \({{{{{\mathscr{K}}}}}}\), with \(b={b}_{0}{{{{{{\mathscr{K}}}}}}}^{2}\) and \(c={c}_{0}{{{{{\mathscr{K}}}}}}\). The 3D size of the skyrmionic hopfion scales according to \({{{{{\mathscr{K}}}}}}\), where now the vortex ring radius is \({R}_{0}=\sqrt{a/b}(w/{{{{{\mathscr{K}}}}}})\). The remaining parameters \(a,{b}_{0},{c}_{0}\) determine the particle-like field distribution's shape. The parameters \(a,{b}_{0},{c}_{0}\) and \({{{{{\mathscr{K}}}}}}\) were chosen to ensure that the experimental skyrmionic hopfion is localised within the measured volume, in practice a Cartesian cuboid centred around the focal point. We optimised against the following criteria. (i) Vortex ring radius \({R}_{0}\) not larger than beam waist \(w\) (this principle is also used in the design of optical vortex knots37,38). (ii) Concentrate intensity inside the measured volume, with \({I}\approx 0\) outside the measured volume. It was especially important to localise the intensity within the transverse cross-section, so as not to lose critical polarisation information. (iii) Distribute the intensity as evenly as possible within the measured volume. To maximise the quality of the measured polarisations, we avoided, as far as possible, regions of low intensity, where the polarisation state changes rapidly. (iv) Concentrate the Skyrme density (continuous topological charge density) within the measured volume, i.e. Σ = 0 outside the measured volume. The density Σ is given in the main text (Eqs. 3, 4), and described in detail in Supplementary Note 4. This enables a measured value of the Skyrme number very close to 1, as described in the remainder of the "Methods".
We proceeded by making an estimate of the parameters based on the topological 3D plots of the numerical models, and then improved these based on the quality of the experimental measurements.
The experimental setup, as described below, requires the Fourier transform of the beam superposition to be realised on the SLM, and the desired field is mathematically back-propagated through the paraxial lens system using Fourier optics39. At \(z=0\), the LG distributions are eigenfunctions of the Fourier transform operation. Thus, the real space LG mode \({{{{{{\rm{LG}}}}}}}_{{{{{{\mathscr{l}}}}}}{{{{{\mathscr{,}}}}}}p}\left(R,\phi ,{z;w}\right)\) corresponds, in Fourier space, to the 2D amplitude \({{{{{{{\rm{i}}}}}}}^{2p-\left|{{{{{\mathscr{l}}}}}}\right|}{{{{{\rm{LG}}}}}}}_{{{{{{\mathscr{l}}}}}}{{{{{\mathscr{,}}}}}}p}^{2{{{{{\rm{D}}}}}}}\left({{{{{{\bf{q}}}}}}}_{\perp };{w}_{{{{{{\mathscr{F}}}}}}}\right)\), where \({w}_{{{{{{\mathscr{F}}}}}}}\) is the corresponding waist in the Fourier plane with transverse position \({{{{{{\bf{q}}}}}}}_{\perp }\). The holograms correspond to \(-2{{{{{\rm{i}}}}}}{c}_{0}{{{{{\mathscr{K}}}}}}{{{{{{\rm{LG}}}}}}}_{-1,0}^{2{{{{{\rm{D}}}}}}}\left({{{{{{\bf{q}}}}}}}_{\perp };{w}_{{{{{{\mathscr{F}}}}}}}\right)\) for \({\psi }_{{{{{{\rm{L}}}}}}}\) and \(\left(-a+{b}_{0}{{{{{{\mathscr{K}}}}}}}^{2}\right){{{{{{\rm{LG}}}}}}}_{0,0}^{2{{{{{\rm{D}}}}}}}\left({{{{{{\bf{q}}}}}}}_{\perp };{w}_{{{{{{\mathscr{F}}}}}}}\right)+{b}_{0}{{{{{{\mathscr{K}}}}}}}^{2}{{{{{{\rm{LG}}}}}}}_{0,1}^{2{{{{{\rm{D}}}}}}}\left({{{{{{\bf{q}}}}}}}_{\perp };{w}_{{{{{{\mathscr{F}}}}}}}\right)\) for \({\psi }_{{{{{{\rm{R}}}}}}}\). The coefficients do not depend on the Fourier waist \({w}_{{{{{{\mathscr{F}}}}}}}\), so the overall beam in real space scales linearly in radius \(R\) and quadratically in propagation distance \(z\) as \({w}_{{{{{{\mathscr{F}}}}}}}\) is varied. This quantity is chosen so that the skyrmionic hopfion has the desired size in real space whilst fully utilising the SLM.
In our optical system, \(\lambda =532{{{{{\rm{nm}}}}}}\), the waist of the beam on the SLM is \({w}_{{{{{{\mathscr{F}}}}}}}=6.252\times {10}^{-4}{{{{{\rm{m}}}}}}\), and the imaging system given by lenses L1 and L2 (Supplementary Fig. 4b) halves the size of the beam. The resulting waist width is \(w=54.2\,{{\upmu }}{{{{{\rm{m}}}}}}\) (giving a Rayleigh range \({z}_{{{{{{\rm{R}}}}}}}=34.7{{{{{\rm{mm}}}}}}\)). The measured volume is a cuboid, \(\left|x\right|\le {x}_{{{{{{\rm{max }}}}}}}\), \(\left|y\right|\le {y}_{{{\max }}}\), \(\left|z\right|\le {z}_{{{\max }}}\), with \({x}_{{{{{{\rm{max }}}}}}}=3.13w=170\,{{\upmu }}{{{{{\rm{m}}}}}}\); \({y}_{{{{{{\rm{max }}}}}}}=3.91w=212\,{{\upmu }}{{{{{\rm{m}}}}}}\); \({z}_{{{{{{\rm{max }}}}}}}=0.768{z}_{{{{{{\rm{R}}}}}}}=26.6\,{{{{{\rm{mm}}}}}}\). The values for the beam parameters were optimised in this range to be \(a=3\), \({b}_{0}=1.5\), \({c}_{0}=0.16\), \({{{{{\mathscr{K}}}}}}=2.5\). In terms of the original parameters, this gives the values \(a=3\), \(b=9.4\), \(c=0.4\). With these choices, the LH C line ring is at \({R}_{0}=0.57w=30.6{{\upmu }}{{{{{\rm{m}}}}}}\). The field configuration of this model field near the C line ring is shown in Supplementary Fig. 2b, resembling the corresponding Hopf fibration configuration (e.g., Supplementary Fig. 2a) closely.
Optical system design
The experimental skyrmionic hopfion field is the superposition of two structured beams of orthogonal circular polarisation, \({\psi }_{{{{{{\rm{R}}}}}}}\) and \({\psi }_{{{{{{\rm{L}}}}}}}\). Experimentally, these two scalar components are shaped by the amplitude and phase modulation of a collimated laser beam (horizontal linear polarisation, expanded) performed by a reflective phase-only SLM (Holoeye Pluto phase-only, 1920 × 1080 px HD display), shown in Supplementary Fig. 4b. The SLM is used in split-screen mode40,41,42, with each half embedding the amplitude and phase information of \({\psi }_{{{{{{\rm{R}}}}}}}\) and \({\psi }_{{{{{{\rm{L}}}}}}}\) respectively. To optimise the beam quality, the Fourier hologram for each polarisation component is a 600 × 600 pixel square. This resolution proved sufficient to reproduce all details of the transverse beam structure in the focal volume. The two holograms are placed so each receives approximately homogeneous illumination of the expanded input laser beam without losing too much intensity. The phase-only hologram is shown in Supplementary Fig. 4a.
To allow for amplitude modulation by a pure phase hologram, a weighted blazed grating is applied43. The desired scalar modes appear in the first diffraction order, which is spatially filtered by an aperture A in the conjugate plane of the SLM, generated by lens L1 (shown in Supplementary Fig. 4b). Fourier holograms are applied on the SLM, so that the desired beams are sculpted in the focus of the Fourier lens (L1), i.e. in the conjugate plane of the SLM. The hologram for each beam is normalised separately, taking advantage of the full modulation depth of the SLM for each beam individually.
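One common phase-only encoding along these lines (our own sketch in the spirit of the weighted blazed grating of ref. 43; `phase_only_hologram` is a hypothetical helper, not the authors' code) is:

```python
import numpy as np

def phase_only_hologram(field, grating_period_px):
    """Encode amplitude and phase of a complex field on a phase-only SLM by
    scaling the modulation depth of a blazed grating with the target
    amplitude (one common weighted-grating encoding)."""
    amp = np.abs(field) / np.abs(field).max()
    yy, xx = np.indices(field.shape)
    blaze = 2 * np.pi * xx / grating_period_px      # blazed grating ramp
    # the first diffraction order then carries the shaped amplitude and phase
    return amp * np.mod(np.angle(field) + blaze, 2 * np.pi)

# example: encode a unit-charge vortex mode
yy, xx = np.mgrid[-64:64, -64:64]
vortex = (xx + 1j * yy) * np.exp(-(xx**2 + yy**2) / 40.0**2)
holo = phase_only_hologram(vortex, grating_period_px=8)
```

Scaling the grating depth by the amplitude diverts light out of the first order where the target amplitude is low, which is why the unwanted orders must then be removed by the aperture described above.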
The two beams are subsequently combined on-axis by an interferometric system. Before they are combined, the two beams are given orthogonal linear polarisations by a combination of a half wave plate (HWP) and a polarising beam splitter (PBS), allowing also for the adjustment of the beams' intensity ratio. This is a critical step to realise the complex polarisation structure: the HWP angle directly affects the relative strength of the two components and hence the coefficient \(c\) in the field design described above.
After the beams are combined, a quarter-wave plate (QWP) transforms the orthogonal linear polarisation states into orthogonal circular polarisations. The imaging system given by lens L1 and L2 (Supplementary Fig. 4b) halves the size of the beam and L3 performs the final Fourier transform that gives the skyrmionic hopfion in its focal volume. The focal structure is magnified by lens L4 (×16) onto a CMOS camera (Cam; uEye SE (UI-1240SE), 1280 × 1024 px).
Volumetric full-field reconstruction
We retrieve the full-field information (transverse components of the paraxial beam) by reconstructing the polarisation and phase in the focal volume. Supplementary Fig. 5 shows five transverse planes at different positions in the propagation direction for the normalised Stokes parameters \({S}_{1},{S}_{2},{S}_{3}\) and the phases \({\chi }_{{{{{{\rm{R}}}}}}}\) and \({\chi }_{{{{{{\rm{L}}}}}}}\) of the RH and LH field components. The measurements in multiple transverse planes are performed via digital propagation32 (see Supplementary Note 3). A detailed description of polarimetry44 (Supplementary Fig. 4c) and transverse phase interferometry45 (Supplementary Fig. 4d) can be found in Supplementary Note 3. The polarisation measurements across different planes are unaffected by the harmonic time dependence of the optical field and are stored directly in 3D arrays. However, when stacking volumetric phase measurements, the relative phase between neighbouring planes must be retrieved. First, we describe our procedure for connecting the transverse phase measurements to their neighbouring planes, and then we present our routine to minimise the experimental error in retrieving the field components.
The measured transverse phase structure per plane consists of the light field's propagation term \({{{\mbox{e}}}}^{{{\mbox{i}}}{kz}}\) times the superposed LG structure described in the subsection "Topological design of the optical skyrmionic hopfion" of the "Methods" section above. This includes a mode-dependent Gouy phase factor \({{{\mbox{e}}}}^{-{{\mbox{i}}}(2p+\left|{{{{{\mathscr{l}}}}}}\right|+1){\chi }^{{{\mbox{G}}}}}\), where \({\chi }^{{{\mbox{G}}}}(z/{z}_{{{\mbox{R}}}})\) is the \(z\)-dependent Gouy phase; a phase term varying radially and longitudinally (the full Laguerre−Gauss mode expression is given in Supplementary Note 2); and a time-dependent phase offset due to the time-varying phase relation between the measured and reference beams. In order to concentrate on the transverse variation, we circumvent the effect of \({{{\mbox{e}}}}^{{{\mbox{i}}}{kz}}\) within the measurements per \(z\)-plane, thereby avoiding the effects of undersampling the electric field oscillation, by setting the distance between two transverse planes to a multiple of the wavelength (\(100\lambda\)), so the propagation factor \({{{\mbox{e}}}}^{{{\mbox{i}}}{kz}}\) advances by an integer multiple of \(2\pi\) between planes and can be ignored. Next, we choose a transverse reference point (\({{{{{{\bf{r}}}}}}}_{\perp {{\mbox{ref}}}},{z}\)) close to the optical axis (\(R\approx 0\)), so that the phase at this point is only affected by the \(z\)-dependent Gouy phase term of the LG beams and is unaffected by the other spatially varying phase factors. For each plane, the phase of the reference point is set to the same value, so the Gouy phase and the time-dependent phase offset are subtracted. To restore the missing relation between different \(z\)-planes, the theoretical Gouy phase term \({\chi }^{{{\mbox{G}}}}\) is then added back. Note that the Gouy phase is an offset per plane, depending only on the \(z\)-position and not on the transverse coordinates.
Thus, the measurements themselves are not affected by this approach and, as a result, we correct for the errors in \(z\) caused by the time-dependent variations in the measurement system. Supplementary Fig. 6 shows the \(x=0\) plane (longitudinal cut) of the theoretically expected (left) and the reconstructed (right) 3D phase structures of \({\chi }_{{{{{{\rm{R}}}}}}}\) and \({\chi }_{{{{{{\rm{L}}}}}}}\). This figure demonstrates that the reconstructed 3D phase distributions are consistent with the theoretical predictions.
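The stitching procedure can be sketched as follows (a simplified illustration assuming a single fundamental-mode Gouy-phase offset per plane; for the LG superposition the mode-dependent factor would enter as well, and `stitch_planes`/`ref_idx` are hypothetical names):

```python
import numpy as np

def stitch_planes(phase_stack, z, z_R, ref_idx):
    """Connect independently measured transverse phase maps into one volume:
    subtract each plane's phase at a common near-axis reference pixel, then
    add back a theoretical (transversely constant) Gouy-phase offset.
    Simplified here to the fundamental-mode Gouy phase arctan(z / z_R)."""
    gouy = np.arctan(np.asarray(z) / z_R)            # per-plane offset
    ref = phase_stack[:, ref_idx[0], ref_idx[1]]     # per-plane reference phase
    out = phase_stack - ref[:, None, None] + gouy[:, None, None]
    return np.angle(np.exp(1j * out))                # rewrap to (-pi, pi]
```

Because the correction is a pure per-plane offset, the transverse phase structure within each plane is untouched, exactly as described above.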
Due to experimental errors (see Supplementary Note 3), the singularities of the differences of the phase of the two field components \({\chi }_{{{{{{\rm{L}}}}}}}-{\chi }_{{{{{{\rm{R}}}}}}}\) (wrapped between \(-\pi\) and \(\pi\)) do not coincide with those of the polarimetrically-determined \({{\arctan }}\left({S}_{1},{S}_{2}\right)\) as the polarisation and phase measurement are independent. Observations of the 3D structure of the C lines from the polarisation measurements and the phase singularities from the phase measurements allow the systematic error to be minimised by shifting the polarisation measurements until the C line loop coincides with the singular loop of \({\chi }_{{{{{{\rm{R}}}}}}}\). Moreover, the overall error is reduced by redefining the Stokes parameters \({S}_{1}\) and \({S}_{2}\) as follows: \({S}_{1}=2\sqrt{{s}_{0}^{2}-{s}_{3}^{2}}\,{{\cos }}({\chi }_{{{{{{\rm{L}}}}}}}-{\chi }_{{{{{{\rm{R}}}}}}})/{s}_{0}\) and \({S}_{2}=2\sqrt{{s}_{0}^{2}-{s}_{3}^{2}}\,{{\sin }}({\chi }_{{{{{{\rm{L}}}}}}}-{\chi }_{{{{{{\rm{R}}}}}}})/{s}_{0}\). To finalise the volumetric full-field reconstruction we calculate the real and imaginary parts of the beam components from \({E}_{{{{{{\rm{R}}}}}}}=\sqrt{({s}_{0}+{s}_{3})/2}\,{{{\mbox{e}}}}^{{{\mbox{i}}}{\chi }_{{{{{{\rm{R}}}}}}}}\), and \({E}_{{{{{{\rm{L}}}}}}}=\sqrt{({s}_{0}-{s}_{3})/2}\,{{{\mbox{e}}}}^{{{\mbox{i}}}{\chi }_{{{{{{\rm{L}}}}}}}}.\) The full field is used to calculate the Skyrme density of the optical field as described in the next subsection.
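The final field-reconstruction formulas above have a direct numerical transcription. A minimal sketch (variable names are ours; the formulas follow the text):

```python
import numpy as np

def reconstruct_fields(s0, s3, chi_R, chi_L):
    """Circular field components and redefined Stokes parameters from the
    measured Stokes data (s0, s3) and the phases (chi_R, chi_L)."""
    E_R = np.sqrt((s0 + s3) / 2) * np.exp(1j * chi_R)
    E_L = np.sqrt((s0 - s3) / 2) * np.exp(1j * chi_L)
    # Redefined Stokes parameters built from the phase difference
    S1 = 2 * np.sqrt(s0**2 - s3**2) * np.cos(chi_L - chi_R) / s0
    S2 = 2 * np.sqrt(s0**2 - s3**2) * np.sin(chi_L - chi_R) / s0
    return E_R, E_L, S1, S2
```

All operations are elementwise, so the same function applies to full 3D arrays of measured data.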
Numerical calculation of experimental Skyrme number
We measure the Skyrme number of the optical skyrmionic hopfion directly from the discretely sampled, measured data by taking advantage of the robustness of topology. This optimises the computational speed necessary to evaluate the Skyrme number from experimental measurements. The measured polarisation and phase at each point in real 3D space correspond to a point in the optical hypersphere. The 3D cubic lattice of measured voxels is mapped into a topology-preserving but distorted lattice in the optical hypersphere. An example for the ideal skyrmionic hopfion field (see Supplementary Note 4) is shown in Supplementary Fig. 9a, b. The measured Skyrme number is therefore based on this piecewise-linear mapping generated from the measured data points without interpolation. This approach can readily be used for measurements of other physical Skyrme-like maps, including lower-dimensional ones (e.g. via triangular meshes).
The fully resolved experimental data in the focal volume give real space voxels forming a cuboidal grid. We are interested in the Skyrme density of the real space volume given by a cuboid with transverse size \(\pm {\!}{L}_{\perp }=\pm{\!}1.84{\,}w=\pm{\!}99.4{\,}{{\upmu }}{{{{{\rm{m}}}}}}\) and longitudinal size \(\pm{\!}{L}_{\parallel }=\pm{\!}0.768{\,}{z}_{{{{{{\rm{R}}}}}}}=\pm{\!}26.6{\,}{{{{{\rm{mm}}}}}}\) as defined in Supplementary Note 4. Since the image of the cuboidal mesh covers the volume of the hypersphere, reducing the resolution maintains this filling. The numerical routine is made more time efficient by reducing the resolution to a cubic mesh of dimension 101 × 101 × 101 in physical space, centred at the focal point. The voxels are centred at points labelled by \((i,j,k)\) with \(1\le i,j,k\le 100\). Each such point corresponds to a normalised 4D vector \(\vec{n}=({{{{{\rm{Re}}}}}}{E}_{{{{{{\rm{R}}}}}}},{{{{{\rm{Im}}}}}}{E}_{{{{{{\rm{R}}}}}}},{{{{{\rm{Re}}}}}}{E}_{{{{{{\rm{L}}}}}}},{{{{{\rm{Im}}}}}}{E}_{{{{{{\rm{L}}}}}}})\,\)found via the VFFR method, giving a distorted cubic 3D grid in the optical hypersphere whose vertices are the points \(\vec{n}(i,j,k)\). The distortion of the experimentally measured field is significantly greater than the example in Supplementary Fig. 9a, b, as can be seen in the images of the two real space planes in the optical hypersphere in main text (Fig. 4). Each elementary cube \(C={C}_{i,j,k}\) is labelled by \(i,j,k\), with vertices \(\vec{n}(i,j,k)\), \(\vec{n}(i+1,j,k)\), \(\vec{n}(i,j+1,k)\),… denoted by c1, …, c8 as indicated in Supplementary Fig. 9c, d. The cube \(C\) occupies a volume \({{{{{\rm{Vol}}}}}}(C)\) within the optical hypersphere. We numerically determine \({{{{{\rm{Vol}}}}}}\left(C\right)\) as follows.
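The mapping of each voxel to a unit 4-vector on the optical hypersphere can be sketched as below (our own minimal illustration of the step described in the text):

```python
import numpy as np

def hypersphere_map(E_R, E_L):
    """Map the reconstructed complex components on the 3D voxel grid to
    unit 4-vectors n = (Re E_R, Im E_R, Re E_L, Im E_L) on the optical
    hypersphere, one 4-vector per voxel (output shape (..., 4))."""
    n = np.stack([E_R.real, E_R.imag, E_L.real, E_L.imag], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

The resulting array of vertices \(\vec{n}(i,j,k)\) defines the distorted cubic grid in the hypersphere used in the volume calculation.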
An elementary topological cell in 3D is a tetrahedron (i.e. a 3-simplex46, in the language of simplicial topology). We convert our cubic \(i,j,k\) lattice into a 3D simplicial complex by decomposing each cube into five irregular tetrahedra. The resulting mesh of tetrahedra, where neighbours share triangular faces, edges and vertices, make up a 3D cell complex46. The tetrahedra can share the cube's vertices in two distinct ways, which are given by the following ordered sets of four vertices (see Supplementary Fig. 9c, d): (A), (c1, c2, c4, c5), (c4, c5, c7, c8), (c5, c2, c7, c6), (c2, c7, c3, c4), (c5, c7, c2, c4) and (B), (c1, c2, c3, c6), (c3, c4, c1, c8), (c5, c6, c1, c8), (c7, c8, c3, c6), (c8, c1, c3, c6). For any cubic lattice, cubes can be decomposed into two choices of tetrahedral mesh: cubes of type A at positions where the quantity \(i+j+k\) is even (odd) and cubes of type B where \(i+j+k\) is odd (even). We compute both types of 3D cell complex as a check of numerical accuracy. As a result, the measurement points in real space and measured values in the hypersphere define a piecewise-linear map representing the physical field.
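The two decompositions and the parity rule can be encoded compactly. In this sketch the labels c1…c8 become indices 0…7; the geometric assignment of labels to cube corners follows Supplementary Fig. 9 and is an assumption on our part:

```python
# Five-tetrahedron decompositions of a cube, vertex orderings as listed
# in the text (c1..c8 -> indices 0..7).
TETS_A = [(0, 1, 3, 4), (3, 4, 6, 7), (4, 1, 6, 5), (1, 6, 2, 3), (4, 6, 1, 3)]
TETS_B = [(0, 1, 2, 5), (2, 3, 0, 7), (4, 5, 0, 7), (6, 7, 2, 5), (7, 0, 2, 5)]

def cube_tetrahedra(i, j, k):
    """Pick type A on even-parity cubes and type B on odd-parity ones, so
    that neighbouring cubes share triangular faces; swapping the roles of
    the parities gives the second cell complex used as a numerical check."""
    return TETS_A if (i + j + k) % 2 == 0 else TETS_B
```

Each decomposition uses all eight cube vertices, and alternating the two types over the lattice yields a consistent simplicial complex.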
In the hypersphere, the tetrahedra are constructed so the edges joining the vertices are geodesics. Each tetrahedron's four faces are spherical triangles, and along edges, pairs of faces meet at the dihedral angles \(0 \, < \, {\varphi }_{j} < \pi\), for \(j=1,2,\ldots ,6\). The formula for the 3D volume \({{{{{\rm{Vol}}}}}}(T)\) of an irregular spherical tetrahedron \(T\) constructed in this way can be written explicitly in terms of dihedral angles by means of Murakami's formula47 (see Supplementary Note 5). The contribution to the Skyrme number comes from the signed volumes \({{{{{\rm{sign}}}}}}\left[{{\det }}\left({\vec{n}}_{a},{\vec{n}}_{b},{\vec{n}}_{c},{\vec{n}}_{d}\right)\right]{{{{{\rm{Vol}}}}}}(T)\), where \({\vec{n}}_{{{{{{\mathscr{l}}}}}}}\) with \({{{{{\mathscr{l}}}}}}{{{{{\mathscr{=}}}}}}\left\{a,{b},{c},{d}\right\}\) are 4D unit vectors pointing to the four vertices of a spherical tetrahedron \(T\). Only tetrahedral cells included within a 3-dimensional hemisphere, whose volume is less than \({\pi }^{2}\), are considered. The sign of the volume comes from the ordering of the vertices with respect to the right-hand rule, where the triangular base \(a,{b},{c}\) follows the fingers and the vertex \(d\) follows the thumb. When the volume is negative, the order of the vertices in real space and that of the vertices of the tetrahedron in the optical hypersphere are inverted. This follows the standard orientation rules of a 3-simplex.
At higher resolutions of the cubic lattice, the spherical tetrahedra are smaller, their curved edges tend towards straight lines, and the spherical distortion can be neglected: each tetrahedron volume is then well approximated by its flat-space analogue. The 101 × 101 × 101 mesh defined above is consistent with the tetrahedron volumes being within the range allowed by numerical precision.
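In this small-cell limit the signed spherical volume can be estimated directly from the vertex vectors. The following is our flat-space approximation, not the exact Murakami formula used in the paper: the Euclidean 4-simplex spanned by the origin and the four unit vertices has 4-volume det/24, while the cone over a patch of 3-volume \(V\) on the unit 3-sphere has 4-volume \(V/4\), giving \(V\approx\det/6\).

```python
import numpy as np

def flat_tet_volume(na, nb, nc, nd):
    """Small-cell approximation of the signed hyperspherical volume of a
    tetrahedron with unit 4-vector vertices na..nd: the signed 4x4
    determinant over 6.  The sign encodes the orientation of the vertex
    ordering, as in the exact calculation."""
    return np.linalg.det(np.stack([na, nb, nc, nd])) / 6.0
```

Swapping two vertices flips the sign, matching the orientation convention of a 3-simplex.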
The hyperspherical volume of the cube \(C\) corresponds to the union of the volumes of the associated, neighbouring tetrahedra comprising \(C\), and its volume \({{{{{\rm{Vol}}}}}}(C)\) is the sum of the signed spherical tetrahedron volumes \({{{{{\rm{Vol}}}}}}(T)\). The results for each such \({{{{{\rm{Vol}}}}}}({C}_{i,j,k})\) are stored in two 3D arrays, one for each type of 3D cell complex. The experimental Skyrme number is found by adding together the volumes of all the hypersphere cubes with appropriate sign, normalised by the 3-sphere volume: \({\sum }_{i,j,k}{{{{{\rm{Vol}}}}}}\big({C}_{i,j,k}\big)/(2{\pi }^{2})\) (see Supplementary Note 5). The measured 3D Skyrme number corresponds to the fraction of the hypersphere volume covered by the image of the measured volume of real space. The sums over all the elements in the arrays give Skyrme numbers 0.94521 and 0.94528 for the two kinds of mesh. We take the experimental Skyrme number to be the mean of these two values, 0.94524. It is straightforward to implement the vector calculation described here in a numerical algorithm in MATLAB or Python. The volumes of the cubes in the meshes can be calculated in parallel on high-performance computers.
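The whole summation admits a compact transcription. The sketch below is illustrative and unoptimised: it uses the flat small-cell volume det/6 instead of the exact spherical formula, and the assignment of the corner offsets to the labels c1…c8 is an assumption based on Supplementary Fig. 9.

```python
import numpy as np

# Assumed c1..c8 corner layout and the two five-tetrahedron decompositions
OFFSETS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
           (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
TETS_A = [(0, 1, 3, 4), (3, 4, 6, 7), (4, 1, 6, 5), (1, 6, 2, 3), (4, 6, 1, 3)]
TETS_B = [(0, 1, 2, 5), (2, 3, 0, 7), (4, 5, 0, 7), (6, 7, 2, 5), (7, 0, 2, 5)]

def skyrme_number(n):
    """Skyrme number of a grid of unit 4-vectors n[i, j, k] (shape
    (ni, nj, nk, 4)): signed tetrahedron volumes summed over all cubes
    and normalised by the 3-sphere volume 2*pi^2."""
    ni, nj, nk = n.shape[:3]
    total = 0.0
    for i in range(ni - 1):
        for j in range(nj - 1):
            for k in range(nk - 1):
                v = [n[i + di, j + dj, k + dk] for di, dj, dk in OFFSETS]
                tets = TETS_A if (i + j + k) % 2 == 0 else TETS_B
                for a, b, c, d in tets:
                    # flat small-cell approximation of the signed volume
                    total += np.linalg.det(np.stack([v[a], v[b], v[c], v[d]])) / 6.0
    return total / (2 * np.pi**2)
```

Running the same loop with the A/B parity swapped gives the second mesh, and averaging the two results reproduces the consistency check described in the text.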
Data availability
The experimental data are available from the corresponding author upon reasonable request.
Code availability
The code for the Skyrme number calculation is available from the corresponding author upon reasonable request.
Thomson, W. (Lord Kelvin) On vortex atoms. Proc. R. Soc. Edin. 6, 94–105 (1867).
Skyrme, T. H. R. A non-linear field theory. Proc. R. Soc. A 260, 127–138 (1961).
Weeks, J. The Shape of Space (Marcel Dekker, 1985) .
Urbantke, H. The Hopf fibration—seven times in physics. J. Geom. Phys. 46, 125–150 (2003).
Manton, N. & Sutcliffe, P. Topological Solitons (Cambridge University Press, 2004).
Volovik, G. E. & Mineev, V. P. Particle-like solitons in superfluid 3He phases. JETP Lett. 46, 401–404 (1977).
Radu, E. & Volkov, M. S. Stationary ring solitons in field theory — knots and vortons. Phys. Rep. 468, 101–151 (2008).
Faddeev, L. D. & Niemi, A. J. Knots and particles. Nature 387, 58–61 (1997).
Ruostekoski, J. & Anglin, J. R. Creating vortex rings and three-dimensional skyrmions in Bose−Einstein condensates. Phys. Rev. Lett. 86, 3934–3937 (2001).
Babaev, E., Faddeev, L. D. & Niemi, A. J. Hidden symmetry and knot solitons in a charged two-condensate Bose system. Phys. Rev. B 65, 100512 (2002).
Cruz, M., Turok, N., Vielva, P., Martinez-Gonzalez, E. & Hobson, M. A cosmic microwave background feature consistent with a cosmic texture. Science 318, 1612–1614 (2007).
Hall, D. S. et al. Tying quantum knots. Nat. Phys. 12, 478–483 (2016).
Lee, W. et al. Synthetic electromagnetic knot in a three-dimensional skyrmion. Sci. Adv. 4, eaao3820 (2018).
Ackerman, P. J. & Smalyukh, I. I. Diversity of knot solitons in liquid crystals manifested by linking of preimages in torons and hopfions. Phys. Rev. X 7, 011006 (2017).
Salomaa, M. M. & Volovik, G. E. Quantized vortices in superfluid 3He. Rev. Mod. Phys. 59, 533 (1987).
Leslie, L. S., Hansen, A., Wright, K. C., Deutsch, B. M. & Bigelow, N. P. Creation and detection of skyrmions in a Bose−Einstein condensate. Phys. Rev. Lett. 103, 250401 (2009).
Nagaosa, N. & Tokura, Y. Topological properties and dynamics of magnetic skyrmions. Nat. Nano 8, 899–911 (2013).
Tsesses, S. et al. Optical skyrmion lattice in evanescent electromagnetic fields. Science 361, 993–996 (2018).
Du, L., Yang, A., Zayats, A. V. & Yuan, X. Deep-subwavelength features of photonic skyrmions in a confined electromagnetic field with orbital angular momentum. Nat. Phys. 15, 650–654 (2019).
Davis, T. J. et al. Ultrafast vector imaging of plasmonic skyrmion dynamics with deep subwavelength resolution. Science 368, eaba6415 (2020).
Beckley, A. M., Brown, T. G. & Alonso, M. A. Full Poincaré beams. Opt. Exp. 18, 10777–10785 (2010).
Donati, S. et al. Twist of generalized skyrmions and spin vortices in a polariton superfluid. Proc. Natl Acad. Sci. USA 113, 14926–14931 (2016).
Dennis, M. R., O'Holleran, K. & Padgett, M. J. Singular optics: optical vortices and polarization singularities. Prog. Opt. 53, 293–363 (2009).
Larocque, H. et al. Reconstructing the topology of optical polarization knots. Nat. Phys. 14, 1079–1082 (2018).
Bauer, T. et al. Observation of optical polarization Möbius strips. Science 347, 964–966 (2015).
Nye, J. F. Natural Focusing and Fine Structure of Light (IoP Publishing, 1999).
Wang, J. et al. Terabit free-space data transmission employing orbital angular momentum multiplexing. Nat. Photon. 6, 488–496 (2012).
Gahagan, K. T. & Swartzlander, G. A. Optical vortex trapping of particles. Opt. Lett. 21, 827–829 (1996).
Berry, M. V. Optical currents. J. Opt. 11, 094001 (2009).
Yao, A. M. & Padgett, M. J. Orbital angular momentum: origins, behavior and applications. Adv. Opt. Photon. 3, 161–204 (2011).
Dennis, M. R. Polarization singularities in paraxial vector fields: morphology and statistics. Opt. Commun. 213, 201–221 (2002).
Otte, E., Rosales-Guzmán, C., Ndagano, B., Denz, C. & Forbes, A. Entanglement beating in free space through spin−orbit coupling. Light: Sci. App. 7, 18007 (2018).
Whitehead, J. H. C. An expression of Hopf's invariant as an integral. Proc. Natl Acad. Sci. USA 33, 117–123 (1947).
Moffatt, H. K. Helicity and singular structures in fluid dynamics. Proc. Natl Acad. Sci. USA 111, 3663–3670 (2014).
Berry, M. V. The adiabatic phase and Pancharatnam's phase for polarized light. J. Mod. Opt. 34, 1401–1407 (1987).
Bliokh, K. Y., Alonso, M. A. & Dennis, M. R. Geometric phases in 2D and 3D polarized fields: geometrical, dynamical, and topological aspects. Rep. Prog. Phys. 82, 122401 (2019).
Dennis, M. R., King, R. P., Jack, B., O'Holleran, K. & Padgett, M. J. Isolated optical vortex knots. Nat. Phys. 6, 118–121 (2010).
Sugic, D. & Dennis, M. R. Singular knot bundle in light. J. Opt. Soc. Am. A 35, 1987–1999 (2018).
Goodman, J. W. Introduction to Fourier Optics 3rd edn (Roberts & Company, 2005).
Otte, E., Schlickriede, C., Alpmann, C. & Denz, C. Complex light fields enter a new dimension: holographic modulation of polarization in addition to amplitude and phase. Proc. SPIE 9379, 937908 (2015).
Alpmann, C., Schlickriede, C., Otte, E. & Denz, C. Dynamic modulation of Poincaré beams. Sci. Rep. 7, 8076 (2017).
Preece, D. et al. Independent polarisation control of multiple optical traps. Opt. Exp. 16, 15897–15902 (2008).
Davis, J., Cottrell, D., Campos, J., Yzuel, M. Y. & Moreno, I. Encoding amplitude information onto phase-only filters. Appl. Opt. 38, 5004–5013 (1999).
Schaefer, B., Collett, E., Smyth, R., Barrett, D. & Fraher, B. Measuring the Stokes polarization parameters. Am. J. Phys. 75, 163–168 (2007).
Takeda, M., Ina, H. & Kobayashi, S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72, 156–160 (1982).
Hatcher, A. Algebraic Topology (Cambridge University Press, 2002).
Murakami, J. Volume formulas for a spherical tetrahedron. Proc. Am. Math. Soc. 140, 3289–3295 (2012).
We are grateful to Miguel Alonso, Michael Berry, Jörg Götte, Michael Morgan, Renzo Ricca, Paul Sutcliffe, Benny Jung-Shen Tai, Alexander Taylor, Teuntje Tijssen, Jonathan Watkins, Alessandro Zannotti and Shuang Zhang, and especially Benjamin Bode, David Foster and Ivan Smalyukh for conversations, advice, and research support. The numerical computations were performed using the University of Birmingham's BEAR Cloud service. D.S. and M.R.D. acknowledge financial support from the University of Birmingham, the Leverhulme Trust Research Programme RP2013-K-009 (SPOCK: Scientific Properties of Complex Knots) and the EPSRC Centre for Doctoral Training in Topological Design (EP/S02297X/1). R.D., D.E., E.O., and C.D. acknowledge partial support by the German Research Foundation (DFG), under project DE 486/22-1 and DE 486/23-1, as well as by the European Union (EU) Horizon 2020 programme, in the framework of the European Training Network ColOpt ITN 721465. J.R. acknowledges financial support from Engineering and Physical Sciences Research Council (EP/S002952/1 and EP/P026133/1). D.S. and F.N. are supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), the Moonshot R&D Grant Number JPMJMS2061, and the Centers of Research Excellence in Science and Technology (CREST) Grant No. JPMJCR1676], the Japan Society for the Promotion of Science (JSPS) [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134 and the JSPS–RFBR Grant No. JPJSBP120194828], the Army Research Office (ARO) (Grant No. W911NF-18-1-0358), the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK
Danica Sugic & Mark R. Dennis
H H Wills Physics Laboratory, University of Bristol, Bristol, BS8 1TL, UK
Theoretical Quantum Physics Laboratory, RIKEN Cluster for Pioneering Research, Wako-shi, Saitama, 351-0198, Japan
Danica Sugic & Franco Nori
Institute of Applied Physics and Center for Nonlinear Science (CeNoS), University of Muenster, 48149, Muenster, Germany
Ramon Droop, Eileen Otte, Daniel Ehrmanntraut & Cornelia Denz
Physics Department, University of Michigan, Ann Arbor, MI, 48109-1040, USA
Franco Nori
Physics Department, Lancaster University, Lancaster, LA1 4YB, UK
Janne Ruostekoski
EPSRC Centre for Doctoral Training in Topological Design, University of Birmingham, Birmingham, B15 2TT, UK
Mark R. Dennis
D.S. and R.D. equally share first authorship. D.S. and M.R.D. formulated the theory and developed the numerical methods with assistance from J.R.; R.D. and E.O. designed and performed the experiment, supported by D.E.; C.D., J.R. and M.R.D. provided explanations of data; F.N., C.D. and M.R.D. supervised the project. All authors have approved the submitted version.
Correspondence to Mark R. Dennis.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Description of Additional Supplementary Files
Measured skyrmionic hopfion in the focal volume
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Sugic, D., Droop, R., Otte, E. et al. Particle-like topologies in light. Nat Commun 12, 6785 (2021). https://doi.org/10.1038/s41467-021-26171-5
Received: 20 April 2021 | CommonCrawl |
\begin{document}
\title{Minimal surfaces with positive genus and finite total curvature
in $\mathbb H^2\times \mathbb R$} \author{Francisco Mart\'{\i}n \\ University of
Granada \and Rafe Mazzeo \\Stanford University \and M. Magdalena
Rodr\'iguez \\ University of Granada}
\maketitle
\begin{abstract}
We construct the first examples of complete, properly embedded
minimal surfaces in $\mathbb H^2 \times \mathbb R$ with finite total curvature
and positive genus. These are constructed by gluing copies of
horizontal catenoids or other nondegenerate summands. We also
establish that every horizontal catenoid is nondegenerate. \end{abstract}
\section{Introduction} Amidst the great activity in the past several years concerning the existence and nature of complete minimal surfaces in homogeneous three-manifolds, the study of minimal surfaces in $\mathbb H^2 \times \mathbb R$ has witnessed particular success. The central problem is the solvability of the asymptotic Dirichlet problem, i.e.\ the existence of complete surfaces asymptotic to a given embedded curve $\gamma$ in the boundary of the compactification of this space $B^2 \times I$, where $B^2$ is the closure of the Poincar\'e ball model of $\mathbb H^2$ in $\mathbb R^2$ and the interval $I$ is the stereographic compactification of $\mathbb R$.
There have been three main approaches to this problem. The first is based on the method of Anderson \cite{And} for the analogous problem in $\mathbb H^3$: one defines a sequence of curves $\gamma_R$ lying on the geodesic sphere of radius $R$ around some point, solves the Plateau problem for each of these curves, then attempts to take a limit as $R \to \infty$. The main points are to show that the sequence of minimal surfaces with boundary does not drift off to infinity and that the limit has $\gamma$ as its asymptotic boundary curve; both of these are accomplished using suitable barrier surfaces, the existence and nature of which depends upon the convexity of $\mathbb H^3$ at infinity. This approach has also been used successfully for the analogous asymptotic Plateau problem in higher dimensions and codimensions for various classes of nonpositively curved spaces. The second approach generalizes the classical method of Jenkins and Serrin~\cite{JS} for minimal graphs in $\mathbb{R}^3$, and was developed in this setting by Nelli and Rosenberg~\cite{ner2}, Collin and Rosenberg~\cite{cor2} and Mazet, Rosenberg and the third author~\cite{marr1}. This involves finding a minimal graph over domains of $\mathbb H^2$ with prescribed boundary data, possibly $\pm\infty$. The third approach is by an analytic gluing construction, and this is the method we follow here.
Before describing our work, let us draw attention to the issue of obtaining surfaces with finite total curvature. (We recall that the total curvature of a surface is defined as the integral on the surface of its Gauss curvature.) It turns out to be far easier to obtain complete minimal surfaces of finite topology in $\mathbb H^2 \times \mathbb R$ with infinite total curvature, and we refer to some of the papers above for a good (but not yet definitive) existence theory. The simplest example is the slice $\mathbb H^2 \times \{0\}$, but more generally there exist minimal surfaces asymptotic to a vertical graph $\{(\theta, f(\theta)) \colon \theta \in \mathbb{S}^1\} \subset \partial B^2 \times \mathbb R$ for any $f \in {\mathcal C}^1(\mathbb{S}^1)$. Other examples include the one-parameter family of Costa-Hoffman-Meeks type surfaces, each asymptotic to three parallel horizontal copies of $\mathbb H^2$. These have positive genus and were constructed by Morabito~\cite{mo} also using a gluing method. On the other hand, surfaces of finite total curvature have proved to be more elusive. The basic examples are the vertical plane $\gamma \times \mathbb R$, where $\gamma$ is a complete geodesic in $\mathbb H^2$, and the Scherk minimal graphs over ideal polygons constructed by Nelli and Rosenberg~\cite{ner2}, and Collin and Rosenberg~\cite{cor2}. There is also a family of {\it horizontal
catenoids} $K_\eta$ constructed in~\cite{moro1,pyo1} (called {\it
2-noids} in~\cite{moro1}), each consisting of a catenoidal handle which is orthogonal to the vertical direction, and asymptotic to two disjoint vertical planes which are neither asymptotic nor too widely separated. The recent paper \cite{hnst} shows that these are the unique complete minimal surfaces with finite total curvature and two ends asymptotic to vertical planes. A large number of other examples of genus zero have been constructed recently by Pyo~\cite{pyo1}, and Morabito and the third author~\cite{moro1}, independently. Both papers use the conjugate surface method. The theory of conjugate minimal surfaces in $\mathbb H^2 \times \mathbb R$ was elaborated by Daniel~\cite{da2} and Hauswirth, Sa~Earp and Toubiana~\cite{hst1}. The surfaces in~\cite{moro1,pyo1} are shown to have total curvature $-4 \pi(k-1)$, where $k$ is the number of ends, and each end is asymptotic to a vertical plane. The horizontal catenoids, which have total curvature $-4\pi$, are a special case.
Despite all this progress, it has remained open whether there exist complete, properly embedded minimal surfaces in $\mathbb H^2 \times \mathbb R$ with finite total curvature and positive genus. The aim of this paper is to construct such surfaces, which we do by gluing together certain configurations of horizontal catenoids. There is a dichotomy in the types of configurations one may glue together. The ones for which the horizontal catenoid components have ``necksize'' bounded away from zero are simpler to handle, and the gluing construction in this case is quite elementary; the trade-off is that the minimal surfaces obtained using only this type of component have a very large number of ends relative to the genus. Alternatively, one may glue together horizontal catenoids with very small necksizes, which allows one to obtain viable configurations with relatively few ends for a given genus. Unfortunately this turns out to involve more analytic details because the horizontal catenoids with very small necks are `nearly degenerate', and because of this we will address this second case in a sequel to this paper.
Our main result here is the following. \begin{thm}
For each $g \geq 0$, there is a $k_0 = k_0(g)$ such that if $k \geq
k_0$, then there exists a properly embedded minimal surface with
finite total curvature in $\mathbb H^2 \times \mathbb R$, with genus $g$ and $k$
ends, each asymptotic to a vertical plane. \label{maingt} \end{thm} The proof involves gluing together component minimal surfaces which are nondegenerate in the sense that they have no decaying Jacobi fields. Unfortunately, every minimal surface in $\mathbb H^2 \times \mathbb R$ with each end asymptotic to a vertical plane is degenerate since vertical translation (i.e.\ in the $\mathbb R$ direction) always generates such a Jacobi field. Because of this we shall work within the class of surfaces which are symmetric with respect to a fixed horizontal plane $\mathbb H^2 \times \{0\}$ and then it suffices to work with surfaces which are {\it horizontally nondegenerate} in the sense that they possess no decaying Jacobi fields which are even with respect to the reflection across this horizontal plane. The surfaces obtained in Theorem~\ref{maingt} are all even with respect to the vertical reflection, and all are horizontally nondegenerate as well.
This leads to the problem of showing that there are component minimal surfaces which satisfy this condition, and our second main result guarantees that many such surfaces exist. \begin{thm}
Each horizontal catenoid $K_\eta$ is horizontally nondegenerate. \label{nondeghc} \end{thm}
Our final result concerns the deformation theory of this class of surfaces. \begin{thm}
Let ${\mathcal M}_k$ denote the space of all complete, properly embedded
minimal surfaces with finite total curvature in $\mathbb H^2\times \mathbb R$
with $k$ ends, each asymptotic to an entire vertical plane. If
$\Sigma \in {\mathcal M}_k$ is horizontally nondegenerate, then the
component of this moduli space containing $\Sigma$ is a real
analytic space of dimension $2k$, and $\Sigma$ is a smooth point in
this moduli space. In any case, even without this nondegeneracy
assumption, ${\mathcal M}_k$ is a real analytic space of virtual dimension
$2k$. \label{defthy} \end{thm} We make two remarks on this. First, this dimension count coincides with the dimension of the family constructed by our gluing methods, and also with the dimension of the family of genus $0$ surfaces constructed in~\cite{moro1}. The fact that the dimension does not depend on the genus may be surprising at first, but this is also the case for the space of complete Alexandrov-embedded minimal or CMC surfaces of finite topology in $\mathbb R^3$, see \cite{KMP} and \cite{PR}. As is the case in these other theories, it turns out to be very hard to construct surfaces which are actually degenerate, and we leave this as an interesting open problem here as well. The second remark is that it would also be quite interesting to know whether the vertical symmetry condition we are imposing is anything more than a technical convenience. More specifically, we ask whether there exist finite total curvature minimal surfaces in $\mathbb H^2 \times \mathbb R$ with vertical planar ends which do not have a horizontal plane of symmetry.
Our results show that the existence theory for these properly embedded minimal surfaces of finite total curvature in $\mathbb H^2 \times \mathbb R$ is in some ways opposite to that in $\mathbb R^3$. Indeed, Meeks, Perez and Ros \cite{m-p-r} have proved that there is an upper bound, depending only on the genus, for the number of ends of a properly embedded minimal surface of finite topology in $\mathbb R^3$. This is a significant step toward resolving the conjecture of Hoffman and Meeks that a connected minimal surface of finite topology, genus $g$ and $k>2$ ends can be properly minimally embedded in $\mathbb R^3$ if and only if $k \leq g + 2$. By contrast, our result gives some indication that a connected surface of finite topology and finite total curvature can be properly minimally embedded in $\mathbb H^2 \times \mathbb R$ only if the number of ends $k$ has a specific lower bound in terms of the genus $g$. Going out on a limb, we conjecture that the correct bound for the surfaces constructed by gluing horizontal catenoids with small necks is $k \geq 2g+1$. Note also that our construction shows that if there exists a surface of this type of genus $g$ and $k$ ends, then we can construct such surfaces with genus $g$ and any larger number of ends, so there definitely is no upper bound as in the Euclidean space to the number of ends that a surface of fixed genus may have.
The plan of this paper is as follows: In \S 2 we describe the horizontal catenoids in more detail, reviewing known properties and developing some new ones as well. This is where we prove Theorem~\ref{nondeghc}. Next in \S 3 we describe the configurations of approximate minimal surfaces formed by patching together horizontal catenoids. The actual gluing, i.e.\ the perturbation of these approximately minimal surfaces to actual minimal surfaces, which is possible when some parameter in the construction is sufficiently large, is carried out in \S 4. The analytic steps involve a parametrix construction which is perhaps not so well known in the minimal surface literature but fairly standard elsewhere; we refer to the recent paper \cite{MS} which uses a similar method to construct multi-layer solutions of the Allen-Cahn equation in $\mathbb H^n$. In \S 5 the general construction is given for gluing together any two horizontally nondegenerate properly embedded minimal surfaces of finite total curvature; this is a simple variant of the proof of the main result. Finally, in \S 6, we study the deformation theory.
The second author is very grateful to the Department of Geometry and Topology in the University of Granada, where this work was initiated. R.M. is supported by the NSF grant DMS-1105050. F.M. and M.M.R. are partially supported by MEC-FEDER Grant no. MTM2011-22547 and a Regional J. Andaluc\'\i a Grant no. P09-FQM-5088.
\section{Horizontal catenoids} We now describe the fundamental building blocks in our gluing construction, which are the horizontal catenoids $K_\eta$ in $\mathbb H^2 \times \mathbb R$, originally constructed by Morabito and the third author~\cite{moro1} and by Pyo~\cite{pyo1}. Each $K_\eta$ has genus zero and two ends asymptotic to vertical geodesic planes. The parameter $\eta$ is the hyperbolic distance between these two planes; it varies in an open interval $(0,\eta_0)$, where the upper bound $\eta_0$ corresponds to the distance between two opposite sides of an ideal regular quadrilateral. These catenoids have total curvature $-4\pi$, and have ``axes'' orthogonal to the $\mathbb R$ direction, whence the moniker horizontal.
\noindent{\bf The horizontal catenoid as a vertical bigraph:} The initial construction of $K_\eta$ in the papers above describes it as a bigraph over a region $\Omega_\eta \subset \mathbb H^2$ with a reflection symmetry across the central $\mathbb H^2 \times \{0\}$. This means the following: first, there is a nonnegative function $u$ defined in $\Omega_\eta$ such that \[ K_\eta = \{ (z, u(z)): z \in \Omega_\eta\} \cup \{(z,-u(z)): z \in \Omega_\eta\}. \] The domain $\Omega_\eta$ is bounded by four smooth curves of infinite length which intersect only at infinity; two of these are hyperparallel geodesics, denoted $\gamma_{-1}$ and $\gamma_1$, and the parameter $\eta$ equals the hyperbolic distance between them; the other two curves, denoted $C_{-1}$ and $C_1$, connect the adjacent pairs of endpoints of $\gamma_{\pm 1}$. The function $u$ is strictly positive in the interior of $\Omega_\eta$, vanishes and has infinite gradient on $C_{-1} \cup C_1$, and tends to $+\infty$ along $\gamma_{-1} \cup \gamma_1$. We also let $C_{-1}'$ and $C_1'$ be the geodesic lines with the same endpoints as $C_{-1}$ and $C_1$, respectively, and $\Omega_{\eta}'$ the ideal geodesic quadrilateral bounded by $\gamma_{-1} \cup \gamma_1 \cup C_{-1}' \cup C_1'$. Using vertical planes (which are minimal) as barriers, we see that $C_{-1}$ and $C_1$ are strictly concave with respect to $\Omega_\eta$. In particular, they lie in the interior of $\Omega_\eta'$. For later reference, we identify a few other curves which will enter the discussion. First, let $\Gamma$ denote the unique geodesic which is orthogonal to both $\gamma_{\pm 1}$; next, let $\gamma_0$ be the geodesic perpendicular to $\Gamma$ and midway between $\gamma_{\pm 1}$; finally, denote by $\widetilde{\gamma}_{\pm 1}$ the two geodesics which connect the opposite ideal vertices of $\Omega_\eta$. Observe that $\gamma_0$ is perpendicular to both $C_{\pm 1}'$; in addition, the points of intersection $\gamma_0 \cap \Gamma$ and $\widetilde{\gamma}_{-1} \cap \widetilde{\gamma}_1$ are the same, and we denote this centerpoint by $Q$.
We finish this discussion by noting that the horizontal catenoid with ends asymptotic to the two vertical planes $\gamma_1\times\mathbb{R}$ and $\gamma_{-1}\times\mathbb{R}$ is unique (when it exists). This follows from the fact that this surface is a bigraph across the plane $t=0$ as well as in the two horizontal directions associated to the geodesics $\Gamma$ and $\gamma_0$ (see Proposition~\ref{horbigraph}). \begin{figure}
\caption{The boundary of the region $\Omega_\eta.$}
\label{fig:one}
\end{figure}
\noindent{\bf The extremal surface:} The family of catenoids $K_\eta$ exists only for $0 < \eta < \eta_0$. This critical value $\eta_0$ corresponds to the case where the pairs of geodesics $\widetilde{\gamma}_{\pm 1}$ intersect orthogonally at $Q$. The limiting domain $\Omega_{\eta_0}$ is the same as $\Omega_{\eta_0}'$ (so $C_{-1} = C_{-1}'$ and $C_1 = C_1'$ in this limit). Furthermore, as $\eta \nearrow \eta_0$, the value $u(Q)$ tends to $+\infty$. In fact, recentering $K_\eta$ by translating it vertically by $-u(Q)$, there is a limiting surface which is a graph over $\Omega_{\eta_0}'$ taking the boundary values $\pm\infty$ on alternate sides. It has genus zero and a single end. This surface is qualitatively similar to the classical Scherk surface of $\mathbb{R}^3$, and so we also call it the Scherk surface. As already mentioned in the introduction, this example was constructed in \cite{cor2, ner2}.
\noindent{\bf Further symmetries:} Unlike the Euclidean case, or even the case of vertical catenoids in $\mathbb H^2 \times \mathbb R$, the horizontal catenoid $K_\eta$ has only a discrete isometry group, isomorphic to $\mathbb Z_2 \times \mathbb Z_2 \times \mathbb Z_2$. Each $\mathbb Z_2$ corresponds to a reflection: the first reflection, which we call ${\cal R}_t$, sends $(z,t)$ to $(z,-t)$, and thus interchanges the top and bottom halves of $K_\eta$; the second, ${\cal R}_s$, is the reflection across $\Gamma \times\mathbb{R}$, which interchanges the `left' and `right' sides of each asymptotic end; the final one, ${\cal R}_o$, is the reflection across $\gamma_0\times\mathbb{R}$, which interchanges the two ends of $K_\eta$ and has fixed point set a loop around the neck. \begin{figure}
\caption{A horizontal catenoid $K_\eta.$}
\label{fig:intro}
\end{figure}
\noindent{\bf Graphical representation of the ends of $K_\eta$:}
Each end of $K_\eta$ is asymptotic to one of the totally geodesic vertical planes $P_j = \gamma_j \times \mathbb R$, $j=\pm 1$. The intermediate vertical plane $\gamma_0\times\mathbb{R}$ fixed by ${\cal R}_o$ bisects $K_\eta$, decomposing it into two pieces, $K_\eta^{1} \cup K_\eta^{-1}$, which are interchanged by this reflection. Each $K_\eta^j$ is a smooth manifold with compact boundary and one end, which is asymptotic to the vertical plane $P_j$. Outside of some compact set, $K_\eta^j$ is a normal graph over $P_j$, with graph function $v^j$ which is strictly positive and defined on an exterior region $E_\eta^j = P_j \setminus {\mathcal O}_\eta$.
The two ends are equivalent, so let us fix one and drop the sub- and superscripts $j$ for the time being. Use parameters $(s,t)$ on $P$, where $t$ is the vertical coordinate and $s$ is the signed distance function along the geodesic $\gamma$, as measured from $\gamma \cap \Gamma$. The restrictions of ${\cal R}_s$ and ${\cal R}_t$ to the plane $P$ correspond to $(s,t) \mapsto (-s,t)$ and $(s,t) \mapsto (s,-t)$, respectively. We assume that the domain $E_{\eta}$ is invariant under both these reflections.
The parameter $\eta$ is, strictly speaking, the distance between the asymptotic vertical planes of $K_\eta$, but it also controls the size of the neck of $K_\eta$, which we take, for example, to be the length of the closed curve $K_\eta\cap(\gamma_0\times\mathbb{R})$. This neck size, which we denote by $n(\eta)$, satisfies $n(\eta) \to 0$ as $\eta \to 0$ and $n(\eta) \to \infty$ as $\eta \to \eta_0$. It can be thought of as the original parameter for this family used in~\cite{moro1, pyo1}.
We now describe the asymptotic decay profile of the graphical representation of $K_\eta$ over $P$. Introduce polar coordinates $(r,\theta)$ in the $(s,t)$ plane, so $s = r\cos\theta$ and $t = r \sin \theta$, where the coordinates $(s,t)$ have been defined above. \begin{prop}
For each $\eta \in (0, \eta_0)$, as $r \to \infty$, the graph
function $v$ has the asymptotic expansion \begin{equation}
v(r,\theta) = A_\eta(\theta) r^{-\frac12} e^{-r} + {\mathcal O}(r^{-\frac32} e^{-r}),
\label{decayv} \end{equation} where $A_\eta(\theta)$ is some strictly positive smooth function on $\mathbb{S}^1$. \label{decayvprop} \end{prop} This decay profile is essentially a linear phenomenon and corresponds to the known asymptotic properties of homogeneous solutions of the Jacobi operator on $P$. Recall that for any minimal surface $\Sigma$, its Jacobi operator (for the minimal surface equation) is the elliptic operator \begin{equation}
L_{\Sigma} := \Delta_{\Sigma} + |A_{\Sigma}|^2 + \ricci\,(N,N);
\label{JacobiopSigma} \end{equation} here $\Delta_{\Sigma}$ is the Laplacian on $\Sigma$, $A_\Sigma$ the second fundamental form of the surface, $N$ its unit normal, and $\ricci$ the Ricci tensor of the ambient space. When $\Sigma = P$ is a vertical plane, this simplifies substantially. Indeed, $A_P \equiv 0$ and $N$ has no vertical component, so that $\ricci(N, N) \equiv -1$, hence \begin{equation}
L_P = \Delta_{\mathbb R^2} - 1.
\label{JacobiopP} \end{equation}
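As a consistency check on the decay profile \eqref{decayv} (a routine verification, not needed in the sequel), note that the radial model profile $r^{-\frac12}e^{-r}$ is an approximate solution of $L_P u = 0$: writing $\Delta_{\mathbb R^2} = \partial_r^2 + r^{-1}\partial_r + r^{-2}\partial_\theta^2$ in polar coordinates, a direct computation gives
\[
L_P\bigl( r^{-\frac12} e^{-r} \bigr) = \Bigl( \partial_r^2 + \frac{1}{r}\,\partial_r - 1 \Bigr)\bigl( r^{-\frac12} e^{-r} \bigr) = \frac14\, r^{-\frac52} e^{-r} = {\mathcal O}\bigl( r^{-\frac52} e^{-r}\bigr),
\]
so the leading term in \eqref{decayv} solves the linearized equation up to an error two powers of $r$ smaller than itself, consistent with the remainder ${\mathcal O}(r^{-\frac32}e^{-r})$ there.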
We now deduce Proposition~\ref{decayvprop} from a slightly more general result. \begin{prop}
Let $E \subset P$ be an unbounded region with complement $P
\setminus E$ smoothly bounded and compact. Let $K \subset \mathbb H^2
\times \mathbb R$ be a minimal surface which is a normal graph over $E$
with compact boundary over $\partial E$, and denote by $v: E \to \mathbb R$
the graph function. Suppose that $v \to 0$ at infinity in $P$. Then
there exists $A \in {\mathcal C}^\infty(\mathbb{S}^1)$ such that
\begin{equation}
v(r,\theta) = A(\theta) r^{-\frac12}e^{-r} + {\mathcal O}(r^{-\frac32} e^{-r}).
\label{genasym}
\end{equation}
Furthermore, if $K$ lies on one side of $P$ at infinity, then $A$ is
either strictly positive or strictly negative. \label{genasymprop} \end{prop} \begin{proof}
The minimal surface equation for a horizontal graph over $P$ is a
quasilinear elliptic equation ${\mathcal N}(v,\nabla v,\nabla^2 v)=0$, the
linearization of which at $v=0$ is just $L_P$. Let $p_j$ be any
sequence of points in $P$ tending to infinity, and consider the
restriction of $v$ to the unit ball $B_1(p_j)$ around
$p_j$. Recenter this ball at the origin and write the translated
function as $v_j$. We are assuming that $v_j \to 0$, and it follows
from standard regularity theory for the minimal surface equation
that
\begin{equation}
||v_j||_{2,\mu; B_1(0)} \to 0\quad \mbox{as}\quad j \to \infty,
\label{locest}
\end{equation}
where $||\cdot ||_{2,\mu; B_1(0)}$ denotes the norm on the H\"older
space ${\cal C}^{2,\mu}$ on the unit ball $B_1(0)$ (see~\cite{gt}).
This means that we can write
\begin{equation}
{\mathcal N}(v, \nabla v, \nabla^2 v) = L_P v + Q(v),
\label{Taylor}
\end{equation}
where $Q$ is quadratic in $v$, $\nabla v$ and $\nabla^2 v$, and has
the property that if $||v||_{2,\mu}$ is small, then
\begin{equation}
||Q(v)||_{0,\mu} \leq C ||v||_{2,\mu}^2.
\label{quadrem}
\end{equation}
Now, applying the inverse $G_P = (\Delta_{\mathbb R^2} - 1)^{-1}$ of the Jacobi operator (this $G_P$ is also called the Green operator or Green function) to \eqref{Taylor} shows that ${\mathcal N}(v,\nabla v, \nabla^2 v) = 0$ is equivalent to the equation
\begin{equation}
v = G_P ( -Q(v)).
\label{Green1}
\end{equation}
We assume initially only that $v \to 0$ at infinity in $P$, but
without any particular rate. We first show that $v$ decays at some
exponential rate; this is done using the maximum principle. The
second and final step is to obtain the asymptotic formula
\eqref{genasym}.
To begin, using \eqref{locest} and \eqref{quadrem}, the following is
true: There exists a constant $C_1 > 0$ such that, given any
$\delta_0 > 0$ sufficiently small, there exists $R_0 \geq 1$ so that
if $\delta < \delta_0$, $R > R_0$ and $|v| < \delta$ for all $r \geq
R$ then $\sup |Q(v)| \leq C_1 \delta^2$.
Now, define $w = a e^{-r} + b$. This satisfies $L_P w = -a r^{-1}
e^{-r} - b$. Suppose that $\delta<\min\{\delta_0,1\}$ and $R>R_0$ are such
that $\sup_{r\geq R}|v| =\delta$ is attained at $r = R$, and choose
the coefficients $a$ and $b$ so that $a e^{-R} + b \geq \delta$ and
$b \geq C_1 \delta^2$; to be specific, we take $b = C_1 \delta^2$
and $a = \delta( 1 - C_1 \delta) e^R$. Then $v - w \leq 0$ when $r =
R$, and furthermore (taking $\delta\leq 1/C_1$),
\[
L_P( v - w) = - Q(v) + ar^{-1}e^{-r} + b \geq -Q(v) + C_1 \delta^2
\geq 0,
\]
where we drop the middle term since $a r^{-1}e^{-r} > 0$. Thus $v-w$
is a subsolution of the equation which is non-positive at $r = R$
and is bounded for $r \geq R$, hence $v - w \leq 0$ for all $r \geq
R$. This implies that
\[
v(R+1, \theta) \leq w( R+1) = \delta (1 - C_1 \delta) e^{R} e^{-R-1}
+ C_1 \delta^2 = \delta \left( ( 1 - C_1\delta) e^{-1} + C_1\delta
\right).
\]
Since $C_1$ is independent of $\delta$, we can choose $\delta$ so
small that $(1 - C_1 \delta) e^{-1} + C_1 \delta < \frac12$, and
hence $v(R+1,\theta) \leq \frac12 \delta$. In other words, we see
that
\[
\sup_{r = R+1} |v| \leq \frac12 \sup_{r = R} |v|,
\]
for all $R \geq R_0$, or equivalently $|v(r,\theta)|\leq C e^{-m r}$
for some $m > 0$. This completes the first step.
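To make the final equivalence explicit (with a concrete, far from sharp, exponent): iterating the halving estimate from $r = R_0$ gives
\[
\sup_{r = R_0 + k} |v| \leq 2^{-k} \sup_{r = R_0} |v|, \quad k = 0,1,2,\dots, \qquad \mbox{hence} \qquad |v(r,\theta)| \leq C\, e^{-(\log 2)\, r}\ \mbox{for}\ r \geq R_0,
\]
so one may take $m = \log 2$; the precise value is immaterial, since the exponent is improved below.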
Now, by local a priori estimates, if $\mathcal{A}(\rho)$ is the
annulus $\{\rho \leq r \leq \rho+1\}$, then
$||v||_{2,\mu;\mathcal{A}(\rho)} \leq C e^{-m \rho}$, and hence
$|Q(v)| \leq C_2 e^{-2m r}$ for all $r \geq R_0$. Assuming that $m
< 1$, we use the maximum principle again, this time with $w =
e^{-\beta r}$ for some $\beta \in (m, \min\{1, 2m\})$. Since
\[
L_P w = (\beta^2 - r^{-1} \beta -1) e^{-\beta r} < (\beta^2-1)
e^{-\beta r},
\]
we obtain $L_P(v - C_3 w) \geq -Q(v) + C_3 (1-\beta^2)e^{-\beta r}
\geq 0$ for all $r \geq R_0$; in addition $v - C_3 w \leq 0$ along
$r = R_0$ for $C_3$ sufficiently large, and $v - C_3 w \to 0$ as $r
\to \infty$. We conclude that $v \leq C_3 e^{-\beta r}$ for $r \geq
R_0$.
With this argument we have improved the exponent in the decay rate from $m < 1$ to any $\beta \in (m, \min\{1, 2m\})$. Iterating this a finite number of times shows that we can obtain a decay rate with exponent as close to $1$ as we please. In other words, we conclude that $v \leq C_4 e^{-(1-\epsilon)r}$ for some very small $\epsilon > 0$, and hence $|Q(v)| \leq C_5 e^{-2(1-\epsilon) r}$, and then that $||Q(v)||_{0,\mu;{\mathcal A}(\rho)} \leq C_6 e^{-2(1-\epsilon)\rho}$ as well.
Now write $v = G_P( -Q(v))$ as in \eqref{Green1}. Since $L_P$ commutes with rotations and translations in $P$, the Green function $G_P( (s,t), (s', t'))$ depends only on the (Euclidean) distance between $(s,t)$ and $(s',t')$, and hence reduces to a function of one variable which satisfies a modified Bessel equation. We thus arrive at the well-known classical formula \begin{equation}
G_P( (s,t), (s',t')) = \frac{1}{4\pi} K_0 ( \sqrt{ |s-s'|^2 + |t-t'|^2}). \label{Green2} \end{equation} Here $K_0(r)$ is the modified Bessel function of the second kind (the Bessel function of imaginary argument), see
\cite{Lebedev}, which has the well-known asymptotics
\begin{equation}
\begin{aligned}
K_0(r) & \sim -\log r\ \mbox{as}\ r \searrow 0, \\
K_0(r) & \sim r^{-\frac12} e^{-r} + {\mathcal O}( r^{-\frac32}e^{-r})\
\mbox{as}\ r \nearrow \infty
\end{aligned}
\label{Green3}
\end{equation}
(we are omitting the normalizing constant $(4\pi)^{-1}$ for
simplicity). It is a straightforward exercise to check that if $f$
is continuous and $|f| \leq C e^{-2(1-\epsilon)r}$, then
\begin{multline*}
v = G_P f = \int_{\mathbb R^2} G_P( (s,t), (s',t')) f(s',t') \, ds' dt' \\
= A(\theta) r^{-1/2} e^{-r} + {\mathcal O}(r^{-3/2}e^{-r}),
\end{multline*}
and if $f \in {\mathcal C}^\infty$, then $A \in
{\mathcal C}^\infty(\mathbb{S}^1)$. We refer to \cite{Melrose} for an
explanation of the linear mapping $f(s,t) \mapsto A(\theta)$ (it is
the adjoint of the Poisson operator and is closely related to the
scattering operator for $L_P$).
To complete the argument, suppose that $v > 0$. Since
$A(\theta)e^{-r}r^{-1/2}$ dominates the expansion for $r$ large,
clearly $A(\theta) > 0$. \end{proof}
\noindent{\bf Asymptotics of Jacobi fields:} Let $\Sigma$ be a
complete properly embedded minimal surface in $\mathbb H^2 \times \mathbb R$
with a finite number of ends, each one asymptotic to a vertical
plane $P_\alpha$, $\alpha \in A$. We could also let $\Sigma$ be an
exterior region in any such surface, i.e. the discussion below
incorporates the case where $\Sigma$ has compact boundary. We now
recall some facts about the asymptotic properties of solutions of
the equation $L_\Sigma \psi = 0$, where $L_\Sigma$ is the Jacobi
operator \eqref{JacobiopSigma}. This operator has the particularly
simple form \eqref{JacobiopP} when $\Sigma$ is a vertical plane $P$,
and this provides the asymptotic model for $L_\Sigma$ in our more
general setting. When $\Sigma$ is a horizontal catenoid $K_\eta$,
we write the Jacobi operator as $L_\eta$.
There are many classical sources for the material in this
section; we refer in particular to \cite{Melrose} since the
treatment is specifically geometric.
It is a classical fact in scattering theory that any solution
of $L_P \psi = 0$ (defined either on all of $P$ or just on the
complement of a relatively compact domain) has a so-called far-field
expansion as $r \to \infty$; this takes the form
\begin{equation}
\psi(r,\theta) \sim (F^+(\theta) r^{-\frac12} + {\mathcal O}(r^{-\frac 32})) e^r + (F^-(\theta) r^{-\frac12} + {\mathcal O}(r^{-\frac32})) e^{-r}.
\label{farfield}
\end{equation}
In the particular case where $P$ is all of $\mathbb R^2$ and has no
boundary, then $F^-(\theta) = F^+(\theta + \pi)$, but in general the
relationship is more complicated. The subtlety in such an expansion
is that the coefficients $F^\pm(\theta)$ are allowed to be arbitrary
distributions on $\mathbb{S}^1$, and if these coefficients are not
smooth, then the expansion must be interpreted weakly, i.e.\ as
holding only after we pair with an arbitrary test function
$\varphi(\theta)$. The simplest `plane wave' solution of this
equation, $e^s$, exhibits an expansion with coefficients which are
Dirac delta functions:
\[
e^s \sim \delta(\theta) r^{-\frac12} e^r + \delta( \theta - \pi)
r^{-\frac12} e^{-r}.
\]
One can interpret this as reflecting the obvious fact that this
solution grows exponentially as $s \to \infty$ (which corresponds to
$\theta = 0$) and decays exponentially as $s \to -\infty$ (which is
$\theta = \pi$). On the other hand, the Green function for this
operator with pole at $0$, $G_P( (s,t), (0,0)) = K_0(r)$, has
\[
G_P \sim r^{-\frac12} e^{-r}\ \ \mbox{as}\ r \to \infty,
\]
i.e.\ $F^+ = 0$ and $F^- = 1$. Similarly, since $\partial_s$ commutes
with $L_P$, the function $\partial_s G_P$ is another Jacobi field, and
it has
\[
\partial_s G_P \sim r^{-\frac12} \cos \theta \, e^{-r}.
\]
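The last asymptotic follows from a routine computation (using the standard identity $K_0' = -K_1$ for modified Bessel functions, where $K_1$ has the same leading asymptotics as $K_0$): since $\partial_s r = s/r = \cos\theta$,
\[
\partial_s G_P = K_0'(r)\, \partial_s r = -K_1(r) \cos\theta \sim - r^{-\frac12} \cos\theta\, e^{-r} \quad \mbox{as}\ r \to \infty,
\]
which agrees with the displayed profile up to the sign and the multiplicative constants being suppressed throughout.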
Now return to the Jacobi operator on more general minimal
surfaces with ends asymptotic to vertical planes.
\begin{prop}
Suppose that $L_\Sigma \psi = 0$. Let $r_\alpha$ denote the radial function on the asymptotic end $P_\alpha$,
and transfer this (via the horizontal graph description) to a function on $\Sigma$. Then $\psi$ has the far-field expansion
\[
\psi \sim \sum_{\alpha \in A} (F^+_\alpha(\theta) r_\alpha^{-\frac12}
+ {\mathcal O}(r_\alpha^{-\frac 32})) e^{r_\alpha} + (F^-_\alpha(\theta)
r_\alpha^{-\frac12} + {\mathcal O}(r_\alpha^{-\frac32})) e^{-r_\alpha}.
\]
\label{asympprop} \end{prop} The set of possible leading (distributional) coefficients $\{ F^+_\alpha, F^-_\alpha\}$ which can occur is called the scattering relation for $L_\Sigma$. If $\Sigma$ is preserved by the reflection ${\cal R}_t$ and we restrict to functions which are even with respect to ${\cal R}_t$, then any collection $\{F^+_\alpha\}$ uniquely determines a solution, and hence determines the other set of coefficients $\{F^-_\alpha\}$; the same is true on the complement of a finite dimensional subspace if we drop the evenness condition. The map $\{F^+_\alpha\} \mapsto \{F^-_\alpha\}$ is called the scattering operator.
\noindent{\bf Geometric Jacobi fields:} We now describe the special family of global Jacobi fields on the horizontal catenoid $K_\eta$ generated by the `integrable', or geometric, deformations of $K_\eta$. In other words, these Jacobi fields are tangent at $K_\eta$ to families of horizontal catenoids.
We have already described the space ${\mathcal C}_K$ of all horizontal catenoids which are symmetric about the plane $t=0$. Indeed, there is a unique such catenoid associated to any two geodesics $\gamma_{\pm}$ in $\mathbb H^2$ with $0<\mbox{dist}\,(\gamma_+, \gamma_-) = \eta < \eta_0$. Thus ${\mathcal C}_K$ is identified with an open subset of the space of distinct four-tuples of points on $\mathbb{S}^1$: writing any such four-tuple in consecutive order around $\mathbb{S}^1$ as $(\zeta_{-,1}, \zeta_{-,2}, \zeta_{+,1}, \zeta_{+,2})$, then we let $\gamma_\pm$ be the unique geodesic connecting $\zeta_{\pm,1}$ to $\zeta_{\pm,2}$. Note that we do not allow arbitrary four-tuples simply because the distances between these geodesics must be less than $\eta_0$. In any case, $\dim {\mathcal C}_K = 4$.
There are several ways to describe the complete family of horizontal catenoids (symmetric about $\{t=0\}$). First, we can vary the points $\zeta_{\pm,\ell}$ independently. Second, we can transform $K_\eta$ using the three-dimensional space of isometries of $\mathbb H^2$, and then, to obtain the entire four-dimensional family, we augment this by the extra deformation corresponding to changing the parameter $\eta$, i.e.\ moving the geodesics relative to one another.
Using the first parametrization of this family, let $\zeta(\epsilon)$ be a smooth curve in the space of (allowable) four-tuples where we vary only one end of one of the geodesics. The corresponding Jacobi field decays exponentially in all directions but one (this holds by Proposition~\ref{decayvprop} and the behavior of the hyperbolic metric at infinity). For example, if we vary only $\zeta_{+,2}$, then this Jacobi field decays exponentially in all directions at infinity on $P_-$, while on $P_+$, it decays exponentially as $s \to -\infty$ but grows exponentially as $s \to +\infty$ (we assume that $s$ increases as we move along $\gamma_+$ from $\zeta_{+,1}$ to $\zeta_{+,2}$).
In computing the infinitesimal variations here, note that if $K_\eta(\epsilon)$ is a one-parameter family of horizontal catenoids as described here, with $K_\eta(0) = K_\eta$, then for $\epsilon \neq 0$ we can write $K_\eta(\epsilon)$ as a normal graph over some proper subset of $K_\eta$. However, as $\epsilon \to 0$, this proper subset fills out all of $K_\eta$, and hence the derivative of the normal graph function at $\epsilon = 0$ is defined on the entire surface.
Denote by $\Phi_{\pm, \ell}$ the Jacobi field generated by varying only the one point $\zeta_{\pm, \ell}$, and note that each $\Phi_{\pm, \ell} \sim e^s = e^{r\cos\theta}$. For any four real numbers $E_{\pm, \ell}$, $\ell = 1,2$, we define \[ \Phi_E = \sum_{\pm, \ell} E_{\pm,\ell} \Phi_{\pm,\ell}. \]
\noindent{\bf $K_\eta$ as a horizontal bigraph:} The geometric Jacobi fields can be used to show that $K_\eta$ is a horizontal bigraph in two distinct directions: over the vertical plane $\gamma_0 \times \mathbb R$ and also over the vertical plane $\Gamma \times \mathbb R$. These two new graphical representations were also obtained in the recent paper \cite{hnst} using an Alexandrov reflection argument. We present a separate argument using these Jacobi fields since it is somewhat less technical. Note that the assertion about horizontal graphicality must be clarified first since there are two geometrically natural ways of writing a surface with a vertical end in $\mathbb H^2 \times \mathbb R$ as a horizontal graph over a vertical plane. Indeed, let $\gamma(s)$ be an arclength parametrized geodesic in $\mathbb H^2$. We can then coordinatize $\mathbb H^2$ using Fermi coordinates off of $\gamma$, i.e.\ $(s,\sigma) \mapsto \exp_{\gamma(s)}(\sigma \nu(s))$ (where $\nu$ is the unit normal), or else by $(s,\sigma) \mapsto D_\sigma(\gamma(s))$, where $D_\sigma$ is the one-parameter family of isometries of $\mathbb H^2$ which are dilations along the geodesic $\gamma^\perp$ orthogonal to $\gamma$ and meeting $\gamma$ at $\gamma(0)$. We use the latter, and then say that a curve is a graph over $\gamma$ in the direction of $\gamma^\perp$ if $\sigma = f(s)$. Hence $f \equiv \mbox{const.}$ corresponds to a geodesic $\gamma'$ which is hyperparallel to $\gamma$ and perpendicular to $\gamma^\perp$. This transfers immediately to the notion of a horizontal graph over $\gamma \times \mathbb R$ in the direction of $\gamma^\perp$ in $\mathbb H^2 \times \mathbb R$.
Now, recall the two orthogonal geodesics $\Gamma$ and $\gamma_0$ (see Figure \ref{fig:one}). The vertical plane $\Gamma\times\mathbb{R}$ (resp. $\gamma_0\times\mathbb{R}$) fixed by ${\cal R}_s$ (resp. ${\cal R}_o$) bisects $K_\eta$, decomposing it into two pieces denoted by $K_\eta^{s,+}, K_\eta^{s,-}$ (resp. $K_\eta^{o,+}, K_\eta^{o,-}$), which are interchanged by the corresponding reflection. The result of Hauswirth, Nelli, Sa Earp and Toubiana \cite[Lemmas 3.1 and 3.2]{hnst} is the following: \begin{prop}
For each $\eta \in (0,\eta_0)$,
$K_\eta^{o,+}$ is a horizontal graph in the
direction of $\Gamma$ over some portion of the vertical plane
$\gamma_0 \times \mathbb R$ while $K_\eta^{s,+}$ is a horizontal graph in
the direction of $\gamma_0$ over some portion of the vertical plane
$\Gamma \times \mathbb R$.
\label{horbigraph} \end{prop} As noted, we sketch an independent proof of this. \begin{proof}
First, notice that the first assertion in Proposition
\ref{horbigraph} is equivalent to the fact that the Jacobi field
$\Phi_o$ generated by dilations along $\Gamma$ is strictly positive
on $K_\eta^{o,+}$ and vanishes along the fixed point set of ${\cal R}_o$;
similarly, the second assertion is equivalent to claiming that the
Jacobi field $\Phi_s$ generated by dilations along $\gamma_0$ is
strictly positive on $K_\eta^{s,+}$ and vanishes along the fixed
point set of ${\cal R}_s$.
The proof has two steps. We first show that these Jacobi fields have
the required positivity property when $\eta$ is very close to the
upper limit $\eta_0$. We then show that as we vary $\eta$ from
$\eta_0$ down to $0$, they maintain this positivity.
We begin by asserting that the limiting Scherk surface $K_{\eta_0}$
is a horizontal bigraph over $\Gamma \times \mathbb R$ and also over
$\gamma_0 \times \mathbb R$. In fact, this surface has a symmetry obtained
by rotating by $\pi/2$ and flipping $t \mapsto -t$; this
interchanges these two graphical representations. This can be
proved by a simple Alexandrov reflection argument: Consider the
family of geodesics $\gamma_\sigma$ perpendicular to $\Gamma$ and
intersecting it at $\Gamma(\sigma)$ (where $\Gamma(0) = \Gamma \cap
\gamma_0$). The plane $\gamma_\sigma \times \mathbb R$ only intersects
$K_{\eta_0}$ when $|\sigma| < \eta_0/2$, and for $\sigma$ just
slightly smaller, the reflection of the `smaller' portion of
$K_{\eta_0}$ across this vertical plane does not intersect the other
component. Pushing $\sigma$ lower, it is standard to see that these
two half-surfaces do not intersect until $\sigma = 0$, in which case
they coincide. These planes of reflection are the images of
$\gamma_0 \times \mathbb R$ with respect to dilation along $\Gamma$, so we
deduce that the vector field $X$ generated by this dilation is
everywhere transverse to the component $K_{\eta_0}^+$ of
$K_{\eta_0}$ on one side of this plane of symmetry. Note finally
that the angle between $X$ and $K_{\eta_0}^+$ is bounded below by a
positive constant if we remain a bounded distance away from
$\partial K_{\eta_0}^+$.
Now recall that an appropriate vertical translate of $K_\eta$
converges locally uniformly in ${\mathcal C}^\infty$ to $K_{\eta_0}$, and
indeed this convergence (of the translated $K_\eta$) is uniform in
the half-plane $t \geq -C$ for any fixed $C$. It is then clear that
the angle between $X$ and $K_\eta^{o,+}\cap \{t\geq -C\}$ is also
positive everywhere when $\eta$ is sufficiently close to $\eta_0$.
Since $K_\eta$ is invariant by ${\cal R}_t$, this finishes the first
step.
For the second step, to be definite consider $\Phi_o$, and let us
study what happens as $\eta$ varies in the interval $(0,\eta_0)$. We
use that $L_\eta \Phi_o = 0$ and $\Phi_o$ is nonnegative on
$K_\eta^{o,+}$ for $\eta$ close to $\eta_0$, vanishing only on the
boundary, and by the Hopf boundary point lemma, has strictly
positive normal derivative there. As $\eta$ decreases, $\Phi_o$
must remain strictly positive in the interior; the alternative would
be that it develops some interior zeroes or else its normal
derivative vanishes at the boundary while the function still remains
nonnegative in the interior, and both contradict the maximum
principle. Note that we are using two additional facts: first, we
use the form of the maximum principle which states that a
nonnegative solution of $(\Delta + V) u = 0$ cannot have an interior
zero, regardless of the sign of $V$; we also use that, because of
the graphical representation of the ends, it is clear that
$\Phi_o$ is bounded away from $0$ outside a compact set. This
proves that $\Phi_o > 0$ on $K_\eta^{o,+}$ for all $\eta \in
(0,\eta_0)$, which shows that this half remains graphical.
The case of the Jacobi field $\Phi_s$ is quite similar. Taking into
account the asymptotic behavior of $K_\eta$, it is not hard to see
that there exists a constant $T \gg 0$ so that $K_\eta \cap \{
|t|>T\}$ is a horizontal graph over the vertical plane $\Gamma
\times \mathbb R$, for all $\eta \in (0,\eta_0)$. We can then apply the
same argument as in the previous paragraphs to $K_\eta \cap \{ |t|
\leq T\}$. \end{proof}
\noindent{\bf Fluxes:} Closely related to the geometry in the last subsection is the computation of the flux homomorphism. We recall that if $\Sigma$ is an oriented minimal surface in an ambient space $(Z,g)$, then its flux is a linear mapping \[ {\mathcal F}: H_1(\Sigma) \times {\mathcal K}(Z,g) \longrightarrow \mathbb R, \] where ${\mathcal K}(Z,g)$ is the space of Killing vector fields on $Z$, i.e.\ infinitesimal generators of one-parameter families of isometries. The definition is simple: if $c \in H_1(\Sigma)$ is a homology class represented by a smooth oriented closed curve $\gamma$ and if $X \in {\mathcal K}(Z,g)$, then \[ {\mathcal F}(c, X) = \int_\gamma X \cdot \nu\, ds, \] where $\nu$ is the unit normal to $\gamma$ in $\Sigma$. This is only interesting when the ambient space $Z$ admits Killing fields, but this is certainly the case in our setting. Indeed, ${\mathcal K}( \mathbb H^2 \times \mathbb R)$ (with the product metric) is four-dimensional: there is one Killing field $X_t$ generated by vertical translation, and a three-dimensional space of Killing fields on $\mathbb H^2$ which lift to the product and act trivially on the $\mathbb R$ factor. If $K_\eta$ is a horizontal catenoid and if $o = \gamma_0 \cap \Gamma \in \mathbb H^2$ is its `center', then this three-dimensional space is generated by the infinitesimal rotation $X_R$ around $o$, and the infinitesimal dilations $X_{\gamma_0}$ and $X_\Gamma$ along $\gamma_0$ and $\Gamma$, respectively.
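For completeness, we recall the standard reason why ${\mathcal F}$ depends only on the homology class of $\gamma$: writing $X = X^T + fN$ along $\Sigma$, the antisymmetry of $\nabla X$ (since $X$ is Killing) together with the minimality of $\Sigma$ gives $\operatorname{div}_\Sigma X^T = 0$. Hence if $\gamma_1$ and $\gamma_2$ cobound a compact region $A \subset \Sigma$, the divergence theorem yields
\[
\int_{\gamma_1} X \cdot \nu \, ds - \int_{\gamma_2} X \cdot \nu \, ds = \int_A \operatorname{div}_\Sigma \bigl( X^T \bigr) \, dA = 0;
\]
note that the conormal $\nu$ is tangent to $\Sigma$, so only $X^T$ contributes to the boundary integrand.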
The first homology (with real coefficients), $H_1(K_\eta)$, is one-dimensional and is generated by the loop $(\gamma_0 \times \mathbb R) \cap K_\eta$. Thus it suffices to consider ${\mathcal F}([\gamma], X_j)$ where $X_j = X_t$, $X_R$, $X_{\gamma_0}$ or $X_\Gamma$. \begin{prop}
The quantity ${\mathcal F}([\gamma], X_j)$ vanishes when $X = X_t$, $X_R$
or $X_{\gamma_0}$, and is nonzero when $X = X_\Gamma$. \end{prop} \begin{proof}
The vector field $X_t$ is odd with respect to the reflection ${\cal R}_t$;
similarly, $X_R$ and $X_{\gamma_0}$ are odd with respect to one or
more of the reflections ${\cal R}_o$, ${\cal R}_s$. Since the choice of generator
$\gamma$ for $H_1$ is invariant under all three reflections, it is
easy to see that ${\mathcal F}([\gamma], X) = 0$ when $X$ is any one of
these three vector fields. However, $X_\Gamma$ is a positive
multiple of $\nu$ at every point of $\gamma$, so that
${\mathcal F}([\gamma], X_\Gamma) > 0$, as claimed.
We do not actually compute the value of this one nonvanishing flux. \end{proof}
Unlike many other gluing constructions for minimal surfaces, these fluxes turn out to play no interesting role in the analysis below. This traces, ultimately, to the fact that we will be gluing together copies of horizontal catenoids and these are already `balanced'. We explain this point further at the end of \S 5.
\noindent{\bf Spectrum of the Jacobi operator:} We now study the $L^2$ spectrum of the Jacobi operator $L_\eta$. By the general considerations described above, \[ \mbox{spec}(-L_\eta) = \{\lambda_j(\eta)\}_{j=0}^N \cup [1,\infty). \] The ray $[1,\infty)$ consists of absolutely continuous spectrum (this is because $K_\eta$ is a decaying perturbation of the union of two planes outside a compact set, so that the essential spectrum of $-L_\eta$ coincides with that of $-L_P$), while the discrete spectrum lies entirely in $(-\infty, 1)$; note that, even counted according to multiplicity, the number of eigenvalues may depend on $\eta$.
Our main result is the following: \begin{prop}
For each $\eta \in (0,\eta_0)$, the only negative eigenvalue of
$-L_\eta$ is $\lambda_0(\eta)$, and the only vanishing eigenvalue is
$\lambda_1(\eta) = 0$. All the remaining eigenvalues are strictly
positive. The ground-state eigenfunction $\phi_0 = \phi_0(\eta)$ is
even with respect to all three reflections, ${\cal R}_t$, ${\cal
R}_s$ and ${\cal R}_o$; the eigenfunction $\phi_1$, which is the
unique $L^2$ Jacobi field, is generated by vertical translations and
is odd with respect to ${\cal R}_t$ but even with respect to ${\cal
R}_s$ and ${\cal R}_o$. In particular, if we restrict $-L_\eta$
to functions which are even with respect to ${\cal R}_t$, then
$L_\eta$ is nondegenerate. \label{specnondeg} \end{prop} \begin{proof}
We can decompose the spectrum of $-L_\eta$ into the parts which are
either even or odd with respect to each of the isometric reflections
${\cal R}_t$, ${\cal R}_s$ and ${\cal R}_o$. Indeed, for each such
reflection, there is an even/odd decomposition \[ L^2(K_\eta) = L^2(K_\eta)_{j-\mathrm{ev}} \oplus L^2(K_\eta)_{j-\mathrm{odd}},\ j = t, s, o. \] The reduction of $-L_\eta$ to the odd part of any one of these decompositions corresponds to this operator acting on functions on the appropriate half $K_\eta^{j,+}$ of $K_\eta$ with Dirichlet boundary conditions.
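Explicitly, this decomposition is obtained by averaging: since each ${\cal R}_j$ is an isometry of $K_\eta$ which commutes with $L_\eta$, we may write \[ u = \tfrac12 \left( u + u \circ {\cal R}_j \right) + \tfrac12 \left( u - u \circ {\cal R}_j \right), \] and the two summands are orthogonal in $L^2(K_\eta)$ and are each preserved by $L_\eta$.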
Our first claim is that the restriction of $-L_\eta$ to $L^2(K_\eta)_{j-\mathrm{odd}}$ with $j = s, o$ is strictly positive, and is nonnegative if $j = t$, with one-dimensional nullspace spanned by the Jacobi field $\Phi_t$ generated by vertical translations.
To prove this, note first that since $\Phi_t \in L^2(K_\eta)_{t-\mathrm{odd}}$ and $\Phi_t$ is strictly positive on $K_\eta^{t,+}$, it must be the ground state eigenfunction for this reduction and is thus necessarily simple, with all the other eigenvalues strictly positive.
On the other hand, we have proved above that $\Phi_s$ and $\Phi_o$ are strictly positive solutions of $L_\eta u = 0$ on the appropriate halves of $K_\eta$, vanishing on the boundary, but of course they do not lie in $L^2$. We shall invoke the following Lemma. \begin{lem} Consider the operator $-L=-\Delta + V$ on a Riemannian manifold $M$, where $V$ is smooth and bounded. Assume either that $M$ is complete, or else, if it has boundary, that we consider $-L$ with Dirichlet boundary conditions at $\partial M$. Suppose that there exists an $L^2$ solution $u_0$ of $L u_0 = 0$ such that $u_0 > 0$, at least away from $\partial M$. If $v$ is any other solution of $L v = 0$ with $v > 0$ in $M$ and $v = 0$ on $\partial M$, then $v = c u_0$ for some constant $c$. \label{gclem} \end{lem} \begin{rem} We can certainly relax the hypotheses on $V$. The proof below is from the paper of Murata \cite{Mu}; the result appears in earlier work by Agmon, and is proved by different methods in \cite[Theorem 2.8]{Sullivan} and \cite[Ch. 4, Theorem 3.4]{Pinsky}. \end{rem} \begin{proof} It is technically simpler to work on a compact manifold with smooth boundary, so let $\Omega_j$ be a sequence of nested, compact smoothly bounded domains which exhaust $M$, and in the case where $\partial M \neq \emptyset$, assume that $\overline{\Omega_j} \cap \partial M = \emptyset$ for all $j$. The last condition is imposed since it is convenient to have that $v$ is strictly positive on the closure of each $\Omega_j$.
It is well-known that the lowest eigenvalue $\lambda_0^j$ of $-L$ with Dirichlet boundary conditions on $\Omega_j$ converges to the lowest Dirichlet eigenvalue $\lambda_0$ of $-L$ on all of $M$ (indeed, this follows from the Rayleigh quotient characterization of the lowest eigenvalue). We are assuming that $\lambda_0 = 0$, so by domain monotonicity, $\lambda_0^j \searrow 0$.
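For the reader's convenience, this variational characterization reads \[ \lambda_0^j = \inf \Big\{ \int_{\Omega_j} \big( |\nabla u|^2 + V u^2 \big)\, dV_g \ : \ u \in {\mathcal C}^\infty_0(\Omega_j), \ \|u\|_{L^2} = 1 \Big\}; \] enlarging $\Omega_j$ enlarges the class of admissible test functions, which yields the domain monotonicity $\lambda_0^{j+1} \leq \lambda_0^j$ invoked above.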
Now choose a nonnegative (and not identically vanishing) function $\psi \in {\mathcal C}^\infty_0(\Omega_0)$ and define $-L_k = -L - \frac{1}{k}\psi$ for any $k \in \mathbb R^+$. Denoting the lowest eigenvalue of this operator on $\Omega_j$ by $\lambda_0^{j,k}$, then by the same Rayleigh quotient characterization, we have that
$\lambda_0^{j,k} \leq \langle -L_k u, u \rangle$ for any fixed $u \in H^1_0(\Omega_j)$ with $||u||_{L^2} = 1$. In particular, inserting the ground state eigenfunction $\hat{u}_0^j$ for $-L$ on $\Omega_j$, we obtain \[
\lambda_0^{j,k} \leq \lambda_0^j - \frac{1}{k} \int_{\Omega_j} \psi |\hat{u}_0^j|^2\, dV_g. \] In particular, fixing $k > 0$, then since the first term on the right can be made arbitrarily close to $0$ by assumption, we can choose $j$ so that $\lambda_0^{j-1,k} > 0$ and $\lambda_0^{j,k} \leq 0$. This is because the integral in the second term on the right is bounded away from zero, which holds because $\hat{u}_0^j \leq \hat{u}_0^{j+1}$ on the support of $\psi$ (this can be proved using the maximum principle for $-L - \lambda_0^{j+1}$ to compare $\hat{u}_0^j$ and $\hat{u}_0^{j+1}$ on the smaller domain $\Omega_j$). If we recall also that the eigenvalue $\lambda_0^{j,k}$ depends continuously (in fact, analytically) on $k$, then we can adjust the value of $k$ slightly to a nearby value $k_j$ so that $\lambda_0^{j,k_j} = 0$. Clearly $k_j \to \infty$. We have thus obtained a solution $u_0^j > 0$ of $-L_{k_j} u_0^j = 0$ on $\Omega_j$ with $u_0^j = 0$ on $\partial \Omega_j$.
Since the solution $v$ is strictly positive, we have that $\Delta \log v = V - |\nabla \log v|^2$. Now, using that $u_0^j$ vanishes on $\partial \Omega_j$, we compute that \begin{multline*}
\int_{\Omega_j} \left|\nabla\left(u_0^j/v\right)\right|^2 v^2 \, dV_g = \int_{\Omega_j} |\nabla u_0^j|^2 - \nabla (u_0^j)^2 \cdot
\nabla \log v + (u_0^j)^2 |\nabla \log v|^2 \, dV_g \\
= \int_{\Omega_j} |\nabla u_0^j|^2 + (u_0^j)^2( V - |\nabla \log v|^2) + (u_0^j)^2 |\nabla \log v|^2 \, dV_g \\ = \int_{\Omega_j} u_0^j (-\Delta + V) u_0^j \, dV_g = \frac{1}{k_j} \int_{\Omega_j} \psi (u_0^j)^2 \, dV_g. \end{multline*}
Normalizing so that $||u_0^j||_{L^2} = 1$, then it is straightforward to show that $u_0^j \to u_0$ on any compact subdomain of $M$. Since the right hand side of this equation tends to $0$, so does the left, hence in particular the integral of $|\nabla( u_0/v)|^2$ over any fixed $\Omega_{j'}$ vanishes, i.e.\ $v = c u_0$ as claimed. \end{proof}
This Lemma implies that it is impossible for $-L_\eta$ to have lowest eigenvalue equal to $0$ on either of the subspaces $L^2(K_\eta)_{j-\mathrm{odd}}$, $j = s, o$, since if this were the case, then we could use the corresponding eigenfunction as $u_0$ in Lemma~\ref{gclem} and let $v = \Phi_j$ to get a contradiction since $\Phi_j \notin L^2$.
We shall justify below that when $\eta$ is very close to its maximal value $\eta_0$, the lowest eigenvalue of $-L_\eta$ on $L^2(K_\eta)_{j-\mathrm{odd}}$ is strictly positive.
Using the continuity of the ground state eigenvalue as $\eta$ decreases combined with the argument above, we see that this lowest eigenvalue can never be negative on any one of these odd subspaces, and the only odd $L^2$ Jacobi field is $\Phi_t$. This proves the claim.
We have finally reduced to studying the spectrum of $-L_\eta$ on $L^2(K_\eta)_{\mathrm{ev}}$, i.e.\ the subspace which is even with respect to all three reflections (we call this ``totally even''). Because of the existence of an $L^2$ solution of $L_\eta u = 0$ which changes sign, namely $u = \Phi_t$, we know that the bottom of the spectrum of $-L_\eta$ is strictly negative, and we have proved above that the corresponding eigenfunction must live in the totally even subspace. (This is also obvious because of the simplicity of this eigenspace and the fact that the corresponding eigenfunction is everywhere positive.) Thus $\lambda_0(\eta) < 0$ as claimed.
Now suppose that the next eigenvalue $\lambda_1(\eta)$ lies in the interval $(\lambda_0(\eta), 0]$, and if $\lambda_1(\eta) = 0$, assume that there exists a corresponding eigenfunction which is totally even. Since this is the {\it second} eigenvalue, we know that the corresponding eigenfunction $\phi_1(\eta)$ has exactly two nodal domains. However, it is straightforward to see using the symmetries of $K_\eta$ that if $\phi$ is any function on $K_\eta$ which is totally even and changes sign, then it cannot have exactly two nodal domains. Indeed, if that were the case, then the nodal line $\{\phi = 0\}$ would have to either be a connected simple closed curve or else two arcs, and these would then necessarily be the fixed point set of one of the three reflections. This is clearly incompatible with $\phi$ being totally even.
We are almost finished. It remains finally to prove that the lowest eigenvalue of $-L_\eta$ on any one of the odd subspaces is nonnegative when $\eta$ is sufficiently large.
As a first step, we prove that $\lambda_0(\eta) \nearrow 0$ as $\eta \nearrow \eta_0$. Recall that in this limit, $K_\eta$ converges (once we translate vertically by an appropriate distance) to the limiting Scherk surface $K_{\eta_0}$. Moreover, $K_{\eta_0}$ is strictly stable because the Jacobi field $\Phi_t$ generated by vertical translation is strictly positive on it.
Now suppose that $\lambda_0(\eta) \leq -c < 0$. When $\eta$ is sufficiently close to $\eta_0$, we can construct a cutoff $\widetilde{\phi}_0(\eta)$ of the corresponding eigenfunction $\phi_0(\eta)$ which is supported in the region $t > 0$ (we are still assuming that $K_\eta$ is centered around $t=0$); this function lies in $L^2$ and regarding it as a function on $K_{\eta_0}$, it is straightforward to show that \[ \frac{\int_{K_{\eta_0}} (-L_{\eta_0} \widetilde{\phi_0} ) \widetilde{\phi_0}
}{ \int_{K_{\eta_0}} |\widetilde{\phi_0}|^2} \leq -c/2 < 0. \] This contradicts the strict stability of $K_{\eta_0}$, and hence proves that $\lambda_0(\eta) \nearrow 0$.
Now suppose that there is some sequence $\eta^\ell \nearrow \eta_0$ and a corresponding sequence of eigenvalues $\lambda^\ell \in (\lambda_0(\eta^\ell), 0)$ and eigenfunctions $\phi^\ell\in L^2(K_{\eta^\ell})_{j-\mathrm{odd}}$, $j = s, o$. We know that $\lambda^\ell \nearrow 0$. Suppose that the maximum of
$|\phi^\ell|$ is attained at some point $p^\ell \in K_{\eta^\ell}$. Normalize by setting $\hat{\phi}^\ell = \phi^\ell/\sup
|\phi^\ell|$ and take the limit as $\ell \to \infty$. Depending on the limiting location of $p^\ell$, we obtain a bounded solution of the limiting equation on the pointed Gromov-Hausdorff limit of the sequence $(K_{\eta^\ell}, p^\ell)$. There are, up to isometries, only two possible such limits: either the limiting Scherk surface $K_{\eta_0}$ or else a vertical plane $P = \gamma \times \mathbb R$. In the latter case, the limiting function $\hat{\phi}$ satisfies $L_P \hat{\phi} = 0$. However, $L_P = \Delta_P - 1$ and it follows by an easy argument using the Fourier transform on $P$ that there are no bounded solutions of $L_P \hat{\phi} = 0$ on all of $P$, so this case cannot occur. Therefore, we have obtained a function $\hat{\phi}$ on $K_{\eta_0}$ which is a solution of the Jacobi equation there and which is bounded. We now invoke Theorem 2.1 in \cite{MPR}, which is a result very similar to Lemma~\ref{gclem}, but with the hypothesis that $v$ is positive replaced by the assumption that $v$ is bounded; one then concludes that $v = c u_0$ where $u_0$ is the positive $L^2$ solution. The proof proceeds by a somewhat more intricate cutoff argument than the one above. In any case, this proves that $\hat{\phi}$ must equal the unique {\it positive} $L^2$ Jacobi field on $K_{\eta_0}$, but this is impossible because of the oddness of $\phi^\ell$ with respect to either ${\cal R}_s$ or ${\cal R}_o$.
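For completeness, we sketch the Fourier transform argument used above. The plane $P = \gamma \times \mathbb R$ is intrinsically flat, so we may identify it with $\mathbb R^2$; a bounded solution $u$ of $(\Delta_P - 1) u = 0$ is in particular a tempered distribution, and taking the Fourier transform gives \[ -\left( |\xi|^2 + 1 \right) \widehat{u}(\xi) = 0. \] Since $|\xi|^2 + 1 \geq 1$ never vanishes, $\widehat{u} = 0$ as a distribution, whence $u = 0$.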
This completes the proof of the main Proposition. \end{proof}
\section{Families of nearly minimal surfaces} We now describe a collection of families of `nearly minimal' surfaces, exhibiting a wide variety of topological types. In the next section we prove that these can be deformed to actual minimal surfaces, at least when certain parameters in the family are sufficiently large. The geometry of each such configuration is encoded by a finite network of geodesic lines and arcs in $\mathbb H^2$. Each complete geodesic $\gamma$ in this network corresponds to a vertical plane $P = P_\gamma = \gamma \times \mathbb R$. The geodesic segments connecting these geodesic lines correspond to catenoidal necks connecting the associated vertical planes. The approximate minimal surfaces themselves are constructed by gluing together horizontal catenoids. We thus take advantage of these building blocks, whose existence already incorporates some of the nonlinearities of the problem, in lieu of working with the more primitive components, namely the vertical planes and catenoidal necks themselves. The parameter which measures the `strength' of the interaction between these pieces is the distance between the finite geodesic segments. Once this distance is sufficiently large, we expect that the approximately minimal surface can be perturbed to be exactly minimal. We prove this here under one extra hypothesis, namely that the catenoidal necksizes remain bounded away from zero. The more general case will be handled in a subsequent paper. The joint requirement that the distances between geodesic `connector' arcs be large and that the necksizes be bounded away from zero imposes restrictions which we describe below.
We now describe all of this more carefully.
\subsubsection*{Geodesic networks} An admissible geodesic network ${\mathcal F}$ (see Fig. \ref{fig:two}) consists of a finite set of (complete) geodesic lines $\Gamma = \{\gamma_\alpha\}_{\alpha \in A}$ and geodesic segments ${\mathcal T} = \{ \tau_{\alpha \beta}\}_{ (\alpha, \beta) \in A'}$ connecting various pairs of elements in $\Gamma$. Here $A$ is some finite index set and $A'$ is a subset of $A \times A \setminus \mbox{diag}$ which indexes all pairs of `contiguous' geodesics, i.e.\ those $\gamma_\alpha$ and $\gamma_\beta$ which are joined by some $\tau_{\alpha \beta}$. We now make various assumptions on these data and set notation: \begin{itemize} \item[i)] If $\alpha \neq \beta$, then $\mbox{dist}\,(\gamma_\alpha , \gamma_\beta) : = \eta_{\alpha \beta} \in (0,\eta_0)$, where $\eta_0$ is the maximal separation between vertical planes which support a horizontal catenoid. \item[ii)] The segment $\tau_{\alpha \beta}$ realizes the distance $\eta_{\alpha \beta}$ between $\gamma_\alpha$ and $\gamma_\beta$, and hence is perpendicular to both of these geodesic lines. \item[iii)] Set $p_\alpha(\beta) = \tau_{\alpha \beta} \cap \gamma_\alpha$ and $p_\beta(\alpha) = \tau_{\alpha \beta} \cap \gamma_\beta$, and then define \[ D_\alpha = \min_{(\alpha, \beta), (\alpha, \beta') \in A', \beta \neq \beta'} \{ \mbox{dist}(p_\alpha(\beta), p_{\alpha}(\beta'))\}, \ \ \mbox{and}\ \ D = \min_{\alpha} D_\alpha. \] This number $D$ is called the minimal neck separation of the configuration ${\mathcal F}$. \item[iv)] We also write $\eta := \sup \eta_{\alpha \beta}$, and call it the maximal neck parameter. \end{itemize} \begin{figure}
\caption{The geodesic network $\cal F$.}
\label{fig:two}
\end{figure}
We shall be considering sequences of geodesic networks ${\mathcal F}_j$ for which the minimal neck separation $D_j$ tends to infinity. Such sequences have two distinct types of behaviour: either all of the $(\eta_{\alpha \beta})_j \geq c > 0$, or else at least some of the $(\eta_{\alpha \beta})_j \to 0$. The main analytic construction below turns out to be fairly straightforward for the first type, but unfortunately the simplest geometries (a relatively small number of ends for a given genus) can only happen in the second setting. \begin{prop}
Let ${\mathcal F}_j$ be a sequence of configurations with $D_j \to \infty$,
and suppose that no ${\mathcal F}_j$ is contractible. If the cardinalities
of the index sets $A({\mathcal F}_j)$ and $A'({\mathcal F}_j)$ (i.e.\ the number
of geodesics and geodesic segments) remain bounded independently of
$j$, then at least some of the necksizes $(\eta_{\alpha \beta})_j$
must tend to $0$.
\label{prop:necksizes} \end{prop} \begin{proof}
By hypothesis, for each $j$ the configuration ${\mathcal F}_j$ contains a
cycle $c_j$. Referring to the geometry of each ${\mathcal F}_j$, it is
clear that each side of every $c_j$ is a geodesic segment, and
moreover, each $c_j$ is a convex hyperbolic polygon whose sides meet
at right angles. By hypothesis then we have a sequence of such
polygons where the number of sides remains bounded, so we may as
well assume that each $c_j$ is a $k$-gon for some fixed $k$. Suppose
that all $(\eta_{\alpha\beta})_j \geq c > 0$. Then by hypothesis,
the sides of $c_j$ alternate between geodesic segments of length at
least $D_j$ and geodesic segments of length lying in the interval
$[c, \eta_0]$. However, it is a standard fact in hyperbolic geometry
that a geodesic polygon with every other side lying in such an
interval must have all sidelengths uniformly controlled, which is a
contradiction. \end{proof} In summary, for any sequence of configurations with fixed nontrivial topology and a fixed number of geodesic lines, at least some of the $\eta_{\alpha \beta}$ must converge to $0$.
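\begin{rem} To illustrate the last step of the proof, consider the case where $c_j$ is a right-angled hexagon, with short sides $s_1, s_2, s_3$ (the connector segments, of lengths in $[c, \eta_0]$) alternating with long sides $\ell_1, \ell_2, \ell_3$ (of lengths at least $D_j$), in the cyclic order $s_1, \ell_1, s_2, \ell_2, s_3, \ell_3$. The standard right-angled hexagon formula from hyperbolic trigonometry gives \[ \cosh s_3 = \sinh s_1 \sinh s_2 \cosh \ell_1 - \cosh s_1 \cosh s_2 \geq \sinh^2 (c) \cosh D_j - \cosh^2 \eta_0, \] and the right hand side tends to infinity as $D_j \to \infty$, which is incompatible with the bound $s_3 \leq \eta_0$. \end{rem}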
\subsubsection*{From geodesic networks to nearly minimal surfaces} To each geodesic network ${\mathcal F}$ satisfying the properties above we now associate an approximately minimal surface $\Sigma$. The idea is straightforward: each geodesic line $\gamma_\alpha$ is replaced by the vertical plane $P_\alpha = \gamma_\alpha \times \mathbb R$, and each geodesic segment $\tau_{\alpha \beta}$ corresponds to a catenoidal neck connecting $P_\alpha$ and $P_\beta$ at the points $p_\alpha(\beta)$ and $p_\beta(\alpha)$. The resulting surface is denoted $\Sigma_{{\mathcal F}}$.
The arguments used below to deform $\Sigma_{{\mathcal F}}$ to an actual minimal surface are perturbative, so we must construct sequences of nearly minimal surfaces for which the error, which is a quantitative measure of how far $\Sigma_{{\mathcal F}}$ is from being minimal, tends to zero. To make the error term small, it is necessary to consider a sequence of networks ${\mathcal F}_j$ where the minimum neck separation $D_j$ tends to infinity. As proved above, if the necksizes stay bounded away from zero, the number of component pieces must grow with $j$. Because the proof is much cleaner in this case, we assume that $(\eta_{\alpha
\beta})_j \geq c > 0$ in all the rest of this paper. The more general case can be treated using techniques similar to those in \cite{MazPacPol}, but we shall address this in a separate paper.
The surface $\Sigma_{{\mathcal F}}$ will be constructed by assigning to each $\tau_{\alpha \beta}$ a vertical strip in the catenoid $K_{\eta_{\alpha \beta}}$ (see below) which contains a very wide neighbourhood around the neck region. Using that none of the necksizes tend to zero, we will prove that the Jacobi operator has a uniformly bounded inverse, acting between certain weighted H\"older spaces.
For each ${\mathcal F}$, we now show how to construct $\Sigma_{{\mathcal F}}$. Fix a line $\gamma_\alpha$ in ${\mathcal F}$, and enumerate the points $p_{\alpha}(\beta)$ along this line consecutively as $p_{\alpha,1}, \ldots, p_{\alpha,N}$. (The number of such points, $N = N_\alpha$, depends on $\alpha$, but for the sake of simplicity, the notation does not record this.) Let $q_{\alpha,j}$ be the midpoint of the geodesic segment from $p_{\alpha,j}$ to $p_{\alpha,j+1}$, $j = 1, \ldots, N-1$ and denote the length of such a segment by $d_{\alpha,
j}$. Hence \[ \mbox{dist}(p_{\alpha,j}, q_{\alpha,j}) = \frac12 d_{\alpha,j},\quad \mbox{dist}(p_{\alpha,j}, q_{\alpha,j-1}) = \frac12 d_{\alpha,j-1}. \] Note that each $d_{\alpha, j} \geq D_\alpha \geq D$. Finally, let $S_{\alpha,j}$ denote the vertical strip in $P_\alpha$ bounded by the two lines $q_{\alpha,j} \times \mathbb R$ and $q_{\alpha,j+1} \times \mathbb R$. For the extreme values $j = 0$ and $N$, let $S_{\alpha,j}$ be the half-plane in $P_\alpha$ bounded by $q_{\alpha,1} \times \mathbb R$ (on the right) and $q_{\alpha,N-1} \times \mathbb R$ (on the left), respectively.
Now, consider a geodesic segment $\tau_{\alpha \beta} \in {\mathcal F}$, and write its two endpoints as $p_{\alpha,j}$ and $p_{\beta,k}$. Let $K_{\alpha \beta}$ be the horizontal catenoid with vertical ends $P_\alpha \sqcup P_\beta$ and parameter $\eta_{\alpha \beta}$. Writing this catenoid as a horizontal normal graph over the relevant portions of the planes $P_\alpha$ and $P_\beta$ (i.e.\ away from the neck regions), we let $K^c_{\alpha \beta}$ denote the portion of the catenoid which includes the neck region and which lies over the strips $S_{\alpha,j}$ and $S_{\beta,k}$ (this is possible when $D$ is large enough).
This ensemble is not quite in final form since the edges of the different truncated horizontal catenoids do not quite match up. Write the corresponding portion of $K_{\alpha \beta}$ over the strip $S_{\alpha, j}$ as a normal graph with graph function $v_{\alpha, j}$. Choose a smooth cutoff function $\chi_{\alpha, j} \geq 0$ which equals
$1$ in the interior of $S_{\alpha,j}$ at all points which are a distance at least $2$ from the boundaries, and which vanishes at all points which are distance at most $1$ from these boundaries, and such that $|\nabla \chi_{\alpha, j}| + |\nabla^2 \chi_{\alpha, j}| \leq 2$ (again this is possible for $D$ large enough). We then let $K_{\alpha \beta}^0$ be the slightly modified surface which agrees with $K^c_{\alpha \beta}$ near the neck region and is the graph of $\chi_{\alpha, j} v_{\alpha, j}$ over the rest of $S_{\alpha, j}$. Of course, this is no longer quite minimal where the modifications have been made.
\begin{figure}
\caption{An example of $\Sigma_{\cal F}$ and the corresponding (approximately) minimal surface}
\label{fig:genus1}
\end{figure}
Our final definition of the approximately minimal surface $\Sigma_{{\mathcal F}}$ in this case, where all neck parameters are bounded below by $c$, is \begin{equation}
\Sigma_{\mathcal F} = \bigsqcup_{(\alpha \beta) \in A'} K_{\alpha \beta}^0.
\label{defsigma} \end{equation}
\begin{prop} \label{prop:H}
Let ${\mathcal F}$ be a geodesic network which satisfies the properties i), ii) and iii), and let $\Sigma = \Sigma_{{\mathcal F}}$ be the associated surface in $\mathbb H^2 \times \mathbb R$ just constructed. Then $\Sigma$ is smooth and has $H \equiv 0$ except in the vertical strips $Q_{\alpha,j}$ of width $2$ around the lines $q_{\alpha,j} \times \mathbb R$. In the vertical strip $Q_{\alpha,j}$, \[
\sup_{B_t} \|H\|_{0,\mu; B_t} \leq C r^{-\frac12} e^{-r}; \] here $r = \min\{ \sqrt{ (d_{\alpha,j})^2 + t^2}, \sqrt{
(d_{\alpha,j+1})^2 + t^2}\}$ and $B_t$ is the square of width $2$ and height $2$ centered at $(q_{\alpha,j},t) \in Q_{\alpha,j}$. The constant $C$ is independent of all parameters in the construction provided $D = \min D_\alpha$ is sufficiently large. \end{prop}
The only point which needs to be checked is the decay of the local H\"older norm of the mean curvature. However, this follows directly from the corresponding estimate for the decay of the horizontal graph functions $v_{\alpha,j}$, see~\eqref{genasym}.
\begin{prop} \label{prop:index0} Let ${\cal F}$ be a geodesic network satisfying i), ii) and iii), and write $\Sigma=\Sigma_{\cal F}$. If $D$ is sufficiently large, then the Jacobi operator $L_\Sigma$ is non-degenerate when restricted to functions which are even with respect to ${\cal R}_t.$ \end{prop} \begin{proof} We proceed by contradiction. Assume there exists a sequence of networks ${\mathcal F}_j$ with $D_j=D({\mathcal F}_j)
\nearrow +\infty$ such that each $\Sigma_j=\Sigma({\cal F}_j)$ admits a nonvanishing, even, $L^2$ Jacobi field $\phi_j$. Let $p_j \in \Sigma_j$ be a point where $|\phi_j|$ attains its maximum, and set $a_j:= |\phi_j(p_j)|$. If $T_j$ is an isometry of $\mathbb H^2 \times \mathbb{R}$ with $T_j(p_j) = (0,0)$, and $S_j=T_j(\Sigma_j)$, then
$\psi_j=\left(\frac{1}{a_j} \phi_j \right) \circ T_j^{-1}$ is a Jacobi field on $S_j$ with $\sup|\psi_j| = 1$ attained at $(0,0)$. Passing to a subsequence, $S_j$ converges to a surface $S_\infty$, which is clearly either a vertical plane or a horizontal catenoid $K_\eta$, for some $\eta \in (0,\eta_0)$, and $\psi_j$ converges to a nontrivial, bounded Jacobi field $\psi_\infty$ on $S_\infty$. However, it is clear that no such Jacobi field exists on a vertical plane. Furthermore, the expansion \eqref{farfield} holds also on the ends of $K_\eta$ and shows that a bounded Jacobi field must, in fact, lie in $L^2$ (this could also be proved more directly using a variant of the proof of Proposition \ref{genasymprop}). On the other hand, $\psi_\infty$ is the limit of functions invariant with respect to ${\mathcal R}_t$, hence also has this property. This contradicts the vertical nondegeneracy of horizontal catenoids as proved in Proposition \ref{specnondeg}, and completes the proof. \end{proof}
\subsubsection*{Examples} It is possible to construct nearly minimal surfaces as sketched above, assuming that all necksizes $\eta_{\alpha \beta}$ are bounded away from $0$, with arbitrary genus, though possibly a large number of ends. Since each plane $P_\alpha$ is diffeomorphic to a once-punctured sphere, we see that $\Sigma_{{\mathcal F}}$ is a connected sum of such spheres, with one connection corresponding to each geodesic segment $\tau_{\alpha\beta}$.
To carry out the perturbation analysis, we must consider networks ${\mathcal F}$ with $D({\mathcal F})$ sufficiently large. As already explained, this imposes various restrictions. For example, to find a sequence of networks ${\mathcal F}_j$ with precisely one loop, and with $D({\mathcal F}_j) \to \infty$ and all $\eta_{\alpha \beta} \geq c > 0$, standard formulas from hyperbolic trigonometry show that the number of edges must grow with $j$. One construction is to take a hyperideal polygon in $\mathbb H^2$ which is invariant with respect to rotation by $2\pi/j$, by which we mean a collection of $j$ disjoint geodesics with a cyclic ordering and such that the minimal distance between any pair of adjacent geodesics is some fixed number $\eta$. If $\eta$ does not tend to zero, then the only way to have the minimal neck separation tend to infinity is if $j \to \infty$ (see Proposition~\ref{prop:necksizes}). By contrast, we can find sequences of such networks with $j=3$, for example, if we let $\eta \to 0$.
\section{Perturbation of $\Sigma_{\mathcal F}$ to a minimal surface} We now complete the perturbation analysis to show how to pass from the nearly minimal surfaces $\Sigma_{\mathcal F}$ to actual minimal surfaces in $\mathbb H^2 \times \mathbb R$ when ${\mathcal F}$ is a geodesic network with minimal neck separation $D$ sufficiently large, and with a uniform lower bound $\eta_{\alpha \beta} \geq c > 0$ on the neck parameters.
Fixing ${\mathcal F}$, let $\Sigma = \Sigma_{\mathcal F}$ and let $\nu$ be the unit normal on $\Sigma$ with respect to a fixed orientation. For any $u \in {\mathcal C}^{2,\mu}(\Sigma)$, consider the normal graph over $\Sigma$ with graph function $u$, \[ \Sigma(u) = \{ \exp_{q} (u(q) \nu(q)),\ q \in \Sigma\}. \] Assuming that all $\eta_{\alpha \beta} \geq c > 0$, there exists $C = C(c) > 0$ such that
if $||u||_{2,\mu} < C$, then $\Sigma(u)$ is embedded.
The surface $\Sigma(u)$ is minimal if and only if $u$ satisfies a certain quasilinear elliptic partial differential equation, ${\mathcal N}(u) = 0$, which calculates the mean curvature of $\Sigma(u)$. (A similar equation appeared in the proof of Proposition~\ref{genasymprop}, with the vertical plane $P$ in place of $\Sigma$.) We do not need to know much about ${\mathcal N}$ except the following. If we write
${\mathcal N}(u) = {\mathcal N}(0) + \left. D {\mathcal N}\right|_0(u) + Q(u)$, then \begin{itemize} \item[i)] ${\mathcal N}(0) = H_\Sigma$; \item[ii)] the linearization at $u=0$ is the Jacobi operator of $\Sigma$, \[
\left. D {\mathcal N}\right|_0 = L_\Sigma = \Delta_\Sigma + |A_\Sigma|^2 + \Ric(\nu,\nu); \]
\item[iii)] if $\epsilon$ is sufficiently small and $||u||_{2,\mu} < \epsilon$, then \[
||{\mathcal N}(u)||_{0,\mu} \leq C \epsilon\ \ \mbox{and}\ \
||Q(u)||_{0,\mu} \leq C \epsilon^2. \] \end{itemize}
The equation ${\mathcal N}(u) = 0$ is equivalent to \begin{equation}
L_\Sigma u = - H_\Sigma - Q(u). \label{eq1} \end{equation} The strategy is now a standard one: we shall define certain weighted H\"older spaces $X$ and $Y$, and first prove that $L_\Sigma: X \to Y$ is Fredholm. A more careful analysis will show that, at least when the minimal neck separation $D$ is sufficiently large, this map is invertible and moreover its inverse $G_\Sigma: Y \to X$ has norm which is uniformly bounded by a constant depending only on the lower bound $D$ for the minimal neck separation and the lower bound $c$ for the neck parameters (see Proposition~\ref{invertible} and Corollary~\ref{cor:G}). Given these facts, we then rewrite \eqref{eq1} as \begin{equation} u = - G_\Sigma ( H_\Sigma + Q(u)), \label{eq2} \end{equation} and solve this equation by a standard contraction mapping argument.
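Schematically, and assuming the standard quadratic estimates $\|Q(u)\|_Y \leq C_2 \|u\|_X^2$ and $\|Q(u) - Q(v)\|_Y \leq C_2 \left( \|u\|_X + \|v\|_X \right) \|u - v\|_X$ (which follow from iii) above and its analogues), the contraction argument runs as follows. With $\|G_\Sigma\| \leq C_1$, the map $T(u) := -G_\Sigma(H_\Sigma + Q(u))$ satisfies \[ \|T(u)\|_X \leq C_1 \left( \|H_\Sigma\|_Y + C_2 \rho^2 \right), \qquad \|T(u) - T(v)\|_X \leq 2 C_1 C_2 \rho\, \|u - v\|_X \] on the ball $\{ \|u\|_X \leq \rho \}$. Choosing $\rho$ small enough that $2 C_1 C_2 \rho < 1$, and then $D$ large enough that $C_1 \|H_\Sigma\|_Y \leq \rho/2$ (as one expects from Proposition~\ref{prop:H}), $T$ maps this ball to itself and is a contraction, so it has a unique fixed point, which solves \eqref{eq2}.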
Somewhat remarkably, in this instance, this argument works almost exactly as stated. The only subtlety is that we must restrict to functions which are even with respect to the vertical reflection $t \mapsto -t$, since this subspace avoids the exponentially decaying element of the nullspace of the Jacobi operator on $\Sigma$.
The basic function spaces are standard H\"older spaces ${\mathcal C}^{k,\mu}(\Sigma)$ defined using the seminorm \[ [ u ]_{0,\mu} = \sup_{z \neq z' \atop \mbox{dist}\,(z,z') \leq 1}
\frac{ |u(z) - u(z')|}{ \mbox{dist}\,(z,z')^\mu}. \] Although the result could be proved using these spaces alone, we can obtain finer results by including a weight factor, which involves the exponential of a piecewise radial function $R$. On each strip $S_{\alpha,j}$, define a radial function $r_{\alpha,j} = \sqrt{s^2 +
t^2}$, where $s$ is the arclength parameter along $\gamma_\alpha$ and $s=0$ corresponds to the point $p_{\alpha,j}$. The functions $r_{\alpha,j}$ and $r_{\alpha,j+1}$ match up continuously at $S_{\alpha,j} \cap S_{\alpha,j+1}$. Now define a function $R$ on $\Sigma$ as follows: on each neck region of every horizontal catenoid set $R \equiv 1$; on the portion of $\Sigma$ which is a graph over $S_{\alpha,j} \setminus {\mathcal O}_{\alpha,j}$ (where ${\mathcal O}_{\alpha,j}$ is some ball which is larger than the projection of the neck region), set
$R = r_{\alpha,j}$. It is convenient to replace this function with a slightly mollified version which is smooth everywhere, and which has the property that $|\nabla R| + |\nabla^2 R| \leq 2$, so we assume this is the case. Finally, define \[ e^{\kappa R} {\mathcal C}^{k,\mu}(\Sigma) = \{ u = e^{\kappa R} v: v \in {\mathcal C}^{k,\mu}(\Sigma) \}. \] Given $u \in e^{\kappa R} {\mathcal C}^{k,\mu}(\Sigma),$ we consider the following norm:
\[ \|u\|_{k,\mu,\kappa}= \| e^{- \kappa R} \, u \|_{k,\mu}. \] \begin{prop} Fix any $\kappa \in (-1,1)$ and $\mu \in (0,1)$. If $\Sigma$ is any nearly minimal surface, as constructed above, then \[ L_\Sigma: e^{\kappa R} {\mathcal C}^{2,\mu}(\Sigma) \longrightarrow e^{\kappa R} {\mathcal C}^{0,\mu}(\Sigma) \] is Fredholm. \label{Fredholms} \end{prop} \begin{proof} If the elliptic operator $L_\Sigma$ has local
parametrices with compact remainder on each end of $\Sigma$, then we
can patch together these local parametrices to obtain a parametrix
on all of $\Sigma$ with similarly good properties. Recall that a
local parametrix on a bounded open set ${\mathcal U}$ in $\Sigma$ is a
continuous linear operator $\widetilde{G}_{\mathcal U}: {\mathcal E}'({\mathcal U}) \to
{\mathcal D}'({\mathcal U})$, between the spaces of compactly supported and all
distributions on ${\mathcal U}$, satisfying \begin{equation}
L \widetilde{G}_{\mathcal U} = \mbox{Id} -\mathfrak{R}, \quad \widetilde{G}_{\mathcal U} L
= \mbox{Id} - \mathfrak{R}', \label{defparam} \end{equation} where $\mathfrak{R}$ and $\mathfrak{R}'$ are smoothing operators of infinite order, and such that \[ \widetilde{G}_{\mathcal U}: {\mathcal C}^{0,\mu} \cap {\mathcal E}'({\mathcal U}) \longrightarrow {\mathcal C}^{2,\mu}({\mathcal U}). \] Similarly, if $E$ is any infinite end of $\Sigma$, then a local parametrix on $E$ is a linear operator $\widetilde{G}_E$ which is bounded as a map $e^{\kappa R} {\mathcal C}^{0,\mu}(E) \to e^{\kappa R} {\mathcal C}^{2,\mu}(E)$, and satisfies the analogue of \eqref{defparam}, where $\mathfrak{R}$ and $\mathfrak{R}'$ are again infinite order smoothing operators which have range in a space of (smooth) functions which have a fixed rate of decay at infinity. It follows directly from the Arzela-Ascoli theorem and these mapping properties that $\mathfrak{R}$ and $\mathfrak{R}'$ are compact operators on these weighted H\"older spaces. Thus, once we produce this parametrix, which is an inverse to $L$ modulo compact remainder terms, a standard argument from functional analysis then shows that $L$ is Fredholm.
Since $L_\Sigma$ is uniformly elliptic, the existence of local parametrices on bounded open sets is one of the basic theorems of microlocal analysis, see \cite{Shubin}. The construction of parametrices on the ends of $\Sigma$ uses more, namely that $L_\Sigma$ is `fully elliptic' near infinity, which means that it is strongly invertible there in a sense we make precise below.
Each end of $\Sigma$ has the form $P \setminus {\mathcal O}$ where $P$ is a vertical plane and ${\mathcal O}$ is a large ball of finite radius. The restriction of $L$ to each end is a decaying perturbation of the basic operator $\Delta_{\mathbb R^2} - 1$. The restriction to the complement of ${\mathcal O}$ of this operator has an inverse, the Schwartz kernel of which, also known as the Green function, is expressed in terms of the modified Bessel function \[
G_{\mathbb R^2}(z,z') = c K_0(|z-z'|) \sim c' |z-z'|^{-\frac12} e^{-|z-z'|}\
,\ \mbox{as}\ |z-z'| \to \infty. \]
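We also record, for the reader's convenience, the classical refinement of this asymptotic law, which makes the decay rate of the error explicit (this is a standard fact about the modified Bessel function $K_0$, stated here only for orientation): \[ K_0(s) = \sqrt{\frac{\pi}{2s}}\, e^{-s} \left( 1 - \frac{1}{8s} + {\mathcal O}(s^{-2}) \right) \quad \mbox{as}\ s \to \infty. \]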
It is not hard to check (see \cite{MV}) that if $|\kappa| < 1$ and $r = |z|$, then \[ G_{\mathbb R^2}: e^{\kappa r} {\mathcal C}^{0,\mu}(\mathbb R^2) \longrightarrow e^{\kappa r} {\mathcal C}^{2,\mu}(\mathbb R^2). \]
(The fact that this operator increases regularity by $2$ orders is classical; the slightly more subtle point is that it also preserves the growth or decay rate $e^{\kappa r}$ when $|\kappa| < 1$.) Now write $L_\Sigma = \Delta_{\mathbb R^2} - 1 + F$ where $F$ is a second order operator with smooth coefficients which decay like $e^{-r}$. From this we deduce that \[ L_\Sigma G_{\mathbb R^2} - \mbox{Id} = \mathfrak{R}: e^{\kappa r} {\mathcal C}^{0,\mu}(\mathbb R^2 \setminus {\mathcal O}) \longrightarrow e^{(\kappa - 1)
r} {\mathcal C}^{0,\mu}(\mathbb R^2 \setminus {\mathcal O}). \] This does not yet compactly include into $e^{\kappa r} {\mathcal C}^{0,\mu}$ since there is no gain of regularity so we cannot apply the Arzela-Ascoli Theorem. There are two effective ways to overcome this: first, restricting to the complement of an even larger ball, we can make the norm of this remainder term as small as desired, hence $\mbox{Id} + \mathfrak{R}$ can be inverted using a Neumann series. Equivalently, we can use a standard elliptic parametrix construction to modify $G_{\mathbb R^2}$ by an asymptotic series so that the new modified parametrix satisfies $L G = \mbox{Id} - \mathfrak{R}$ where $\mathfrak{R}$ maps into $e^{(\kappa-1)r} {\mathcal C}^\infty(\mathbb R^2 \setminus {\mathcal O})$. Either of these methods produces a global parametrix for $L_\Sigma$ with compact remainder on each end of $\Sigma$.
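For the reader's convenience, we recall the elementary fact behind the Neumann series argument used above: if $\mathfrak{R}$ is a bounded operator on a Banach space with $\|\mathfrak{R}\| < 1$, then \[ (\mbox{Id} + \mathfrak{R})^{-1} = \sum_{j=0}^{\infty} (-\mathfrak{R})^{j}, \qquad \left\| (\mbox{Id} + \mathfrak{R})^{-1} \right\| \leq \frac{1}{1 - \|\mathfrak{R}\|}, \] with the series converging in operator norm.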
Now, cover $\Sigma$ by open sets of the form $P_\alpha \setminus {\mathcal O}_\alpha$ and one relatively compact open set ${\mathcal U}$. Using the standard elliptic parametrix construction on this bounded set and the parametrices constructed above on each $P_\alpha$, we may form a global parametrix as follows. Choose a partition of unity for this open cover, $\{\chi_0, \chi_\alpha\}_{\alpha \in A}$, and for each open set here choose another smooth function $\widetilde{\chi}_i$ with support in ${\mathcal U}$ for $i = 0$ and in $P_\alpha \setminus {\mathcal O}_\alpha$ for $i = \alpha$, such that $\widetilde{\chi}_i = 1$ on the support of $\chi_i$. Now define \[ \widetilde{G}_\Sigma = \widetilde{\chi}_0 G_0 \chi_0 + \sum_{\alpha \in A} \widetilde{\chi}_\alpha G_\alpha \chi_\alpha. \] We calculate that \begin{multline*}
L_\Sigma \widetilde{G}_\Sigma = \widetilde{\chi}_0 L_\Sigma G_0 \chi_0 +
\sum_\alpha \widetilde{\chi}_\alpha L_\Sigma G_\alpha \chi_\alpha +\\
[L_\Sigma, \widetilde{\chi}_0] G_0 \chi_0 + \sum_\alpha [L_\Sigma,
\widetilde{\chi}_\alpha] G_\alpha \chi_\alpha = \mbox{Id} +
\mathfrak{R}_\Sigma. \end{multline*} We use here that $L_\Sigma G_i = \mbox{Id}$ on the support of $\chi_i$ so the first set of terms on the right is equal to $\sum \widetilde{\chi}_i \mbox{Id} \chi_i = \sum \chi_i \mbox{Id} = \mbox{Id}$. The remainder $\mathfrak{R}_\Sigma$ is a pseudodifferential operator of order $-1$ with image lying in the union of the supports of the $\nabla \widetilde{\chi}_i$, which is a compact set. Hence, using the well-known mapping properties of such operators, $\mathfrak{R}_\Sigma: e^{\kappa R} {\mathcal C}^{0,\mu} \to e^{\kappa R} {\mathcal C}^{0,\mu}$ is a compact operator. A similar calculation shows that $\widetilde{G}_\Sigma L_\Sigma = \mbox{Id} + \mathfrak{R}_\Sigma''$ is also compact.
We have now produced an approximate inverse modulo compact remainders, and as explained at the beginning of the proof, this suffices to prove that $L$ is Fredholm. \end{proof}
The next step is to show that $L_\Sigma$ is invertible provided the minimal neck separation $D$ is sufficiently large. This fails of course if $L_\Sigma$ acts on the entire space $e^{\kappa
R}{\mathcal C}^{2,\mu}(\Sigma)$ because of the exponentially decaying Jacobi field generated by vertical translations. To circumvent this issue, we restrict $L_\Sigma$ to the subspace $e^{\kappa R} {\mathcal C}^{k,\mu}_{\even}(\Sigma)$ of even functions with respect to the reflection $t \mapsto -t$. (Note that we can assume that the radial function $R$ is even.) \begin{prop}
Let $\Sigma$ be a nearly minimal surface associated to the geodesic
network ${\mathcal F}$. There exists a $D_0 > 0$ such that if the minimal
neck separation $D$ is greater than $D_0$,
then
\[
L_\Sigma: e^{\kappa R}{\mathcal C}^{2,\mu}_{\even}(\Sigma) \longrightarrow
e^{\kappa R} {\mathcal C}^{0,\mu}_{\even}(\Sigma)
\]
is invertible.
\label{invertible} \end{prop} \begin{proof} We have already proved that the mapping $L_\Sigma$ is Fredholm on the entire weighted H\"older space, and it is clear that this remains true when restricting to the subspace of even functions. Let $G_\Sigma$ denote the generalized inverse of $L_\Sigma$. Recall that, by definition, this means that $L_\Sigma G_\Sigma - \mbox{Id} = \mathfrak{R}_\Sigma$ is a projector onto the complement of the range of $L_\Sigma$ and $G_\Sigma L_\Sigma - \mbox{Id} = \mathfrak{R}_\Sigma'$ is a projector onto the nullspace of $L_\Sigma$. In particular, these projectors both have finite rank. Since the index of $L_\Sigma$ vanishes, $\Tr \mathfrak{R}_\Sigma' - \Tr \mathfrak{R}_\Sigma = \mbox{Ind}\, (L_\Sigma) = 0$.
To proceed, we sketch a slightly different version of the parametrix construction. Recall that $\Sigma$ is a union of truncated (and slightly perturbed) horizontal catenoids $K_{\alpha\beta}^0$. These catenoids are joined along the lines $\{q_{\alpha, j}\}\times \mathbb R$, which are at distance at least $D/2$ away from each neck region. In fact, when $R >D/2$, Proposition \ref{prop:H} shows that \[
\sup e^{-\kappa R} |H_\Sigma| \leq C \sup e^{- (\kappa+1) R} R^{-1/2} \leq C e^{-(\kappa+1)D/2} D^{-1/2}. \] On the other hand, this inequality trivially holds when $R \leq D/2$.
Now choose an open cover comprising slightly larger truncations of these catenoids, a partition of unity $\chi_{\alpha \beta}$ associated to this open cover, and smooth cutoff functions $\widetilde{\chi}_{\alpha
\beta}$ which are supported in these same open sets and which equal $1$ on the support of $\chi_{\alpha \beta}$. Then set \[ \hat{G}_\Sigma = \sum_{(\alpha \beta) \in A'} \widetilde{\chi}_{\alpha\beta} G_{\alpha \beta} \chi_{\alpha \beta}. \] Exactly the same computation as above shows that $L_\Sigma \hat{G}_\Sigma - \mbox{Id} = -\hat{\mathfrak{R}}_\Sigma$ is compact on $e^{\kappa
R} {\mathcal C}^{0,\mu}_{\ev}(\Sigma)$, but furthermore has norm
$||\hat{\mathfrak{R}}_\Sigma|| \leq C e^{-(\kappa+1)D/2}$, where $C$ is independent of $D$.
Finally, choosing $D$ sufficiently large, then $\mbox{Id} - \hat{\mathfrak{R}}_\Sigma$ is invertible on $e^{\kappa R} {\mathcal C}^{0,\mu}_{\ev}$, and hence $L_\Sigma G_\Sigma = \mbox{Id}$ where \[ G_\Sigma = \hat{G}_\Sigma \circ (\mbox{Id} - \hat{\mathfrak{R}}_\Sigma)^{-1}. \] This shows that $L_\Sigma$ is surjective. The proof of Proposition \ref{prop:index0} applies equally well here and implies that $L_\Sigma$ is injective as well, provided $D$ is large enough. (Alternately, the index of $L_\Sigma$ is zero, hence injectivity follows directly from surjectivity.) This concludes the proof. \end{proof}
It is clear from the local nature of the H\"older norms and the definition of this parametrix that the operator norm of $\hat{G}_\Sigma$ is uniformly bounded as $D \to \infty$. Its modification by $(\mbox{Id} - \hat{\mathfrak{R}})^{-1}$ does not change this, so we obtain the \begin{cor} If $\Sigma$ satisfies all the assumptions of the previous proposition, then the norm of the inverse $G_\Sigma$ on $e^{\kappa R} {\mathcal C}^{0,\mu}_{\ev}$ is uniformly bounded as $D \to \infty$. \label{cor:G} \end{cor}
The slightly surprising fact is that these estimates are independent of the topology or number of ends of ${\mathcal F}$ and $\Sigma$, but this is due to the character of the function spaces being used.
\begin{thm} Let ${\mathcal F}$ be a geodesic network in $\mathbb H^2$ and $\Sigma_{{\mathcal F}}$ the nearly minimal surface constructed from it. Also, fix $\kappa \in (-1,0)$. If the minimal neck separation $D$ is sufficiently large, then there exists a function $u \in e^{\kappa R}{\mathcal C}^{2,\mu}_{\ev}(\Sigma_{\mathcal F})$ with
$||u||_{2,\mu,\kappa} \leq C e^{-(\kappa+1)D/2}D^{-1/2}$ such that $\Sigma_{{\mathcal F}}(u)$ is an embedded minimal surface which is a small normal graph over $\Sigma_{{\mathcal F}}$. \label{mgt} \end{thm} \begin{proof} We solve ${\mathcal N}(u) = 0$ in the function space $e^{\kappa R} {\mathcal C}^{2,\mu}_{\ev}$ by rewriting this equation as in \eqref{eq2}. As $\kappa \in (-1,0)$, then \[
||Q(u)||_{0,\mu, \kappa} \leq C_1 ||u||_{2,\mu,\kappa}^2, \]
and hence if $||H_\Sigma||_{0,\mu,\kappa} \leq A$, then \[
|| G_\Sigma( H_\Sigma + Q(u))||_{2,\mu,\kappa} \leq C (A + C_1 ||u||^2_{2,\mu,\kappa}). \]
If $||u||_{2,\mu,\kappa} \leq \beta$, then the right hand side here is bounded by $C(A + C_1 \beta^2)$, and $C(A + C_1 \beta^2) \leq \beta$ provided we choose $\beta = \lambda A$ for some large $\lambda$ and then let $A$ be very small. With these choices, if we write \eqref{eq2} as $u = {\mathcal T}(u)$, then ${\mathcal T}$ maps the ball of radius $\beta$ in $e^{\kappa R}{\mathcal C}^{2,\mu}_{\ev}$ to itself. A similar analysis shows that ${\mathcal T}$ is a contraction on this ball.
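To make the last assertion slightly more explicit: since $Q$ vanishes quadratically at $0$, a standard local Lipschitz estimate for the quadratic remainder gives, for $u, v$ in this ball, \[ \| {\mathcal T}(u) - {\mathcal T}(v) \|_{2,\mu,\kappa} \leq C C_1 \left( \|u\|_{2,\mu,\kappa} + \|v\|_{2,\mu,\kappa} \right) \|u-v\|_{2,\mu,\kappa} \leq 2 C C_1 \beta \, \|u-v\|_{2,\mu,\kappa}, \] so ${\mathcal T}$ is a contraction once $\beta$ (equivalently, $A$) is chosen so small that $2 C C_1 \beta < 1$.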
This proves that there is a unique solution to ${\mathcal N}(u) = 0$, and that $||u||_{2,\mu,\kappa} \leq \beta$. Finally, since $\kappa < 0$, $|u| \leq \beta e^{\kappa R} \leq \beta$, and the derivatives of $u$ are similarly small, which implies that $\Sigma_{{\mathcal F}}(u)$ is embedded. \end{proof}
\begin{prop} Let ${\mathcal F}_j$ be a sequence of geodesic networks as in Theorem~\ref{mgt} (in particular the minimal neck separation $D({\mathcal F}_j) \to \infty$) and $\Sigma_j$ the corresponding minimal surfaces. Suppose that the necksizes $(\eta_{\alpha \beta})_j$ in the constituent horizontal catenoids all lie in a fixed interval $[c_1, c_2] \subset (0,\eta_0)$. Then for $j$ sufficiently large, $\Sigma_j$ is horizontally nondegenerate. \end{prop} \begin{proof}
Suppose that this is not the case, so that there exists some
subsequence $\Sigma_{j'}$, which we immediately relabel as
$\Sigma_j$ and a function $\varphi_j \in L^2(\Sigma_j)$ which is
even with respect to ${\cal R}_t$ and which lies in the nullspace of
the Jacobi operator $L_j$ on $\Sigma_j$. Renormalize $\varphi_j$ to
have supremum equal to $1$, and suppose that this supremum is
attained at a point $p_j \in \Sigma_j$.
At this point we can reason as in the proof of Proposition \ref{prop:index0} to get a contradiction with the nondegeneracy of the vertical plane and the horizontal nondegeneracy of the horizontal catenoid.
\end{proof}
\section{Gluing nondegenerate surfaces} A construction which is closely related to the one in the last section is as follows. Let $\Sigma_1$ and $\Sigma_2$ be two minimal surfaces in $\mathbb H^2 \times \mathbb R$ with a finite number of vertical ends, each one symmetric with respect to the reflection ${\cal R}_t$, and each one horizontally nondegenerate. Fix a vertical planar end $E_\ell \subset \Sigma_\ell$, and choose a sequence of isometries $\phi_{\ell,j}$ (of the form $\varphi_{\ell,j} \times \mbox{id}$ where each $\varphi_{\ell,j}$ is an isometry of $\mathbb H^2$) such that the surface $\Sigma_{\ell,j}:=\phi_{\ell,j} (\Sigma_\ell)$ converges to a fixed vertical plane $P = \gamma \times \mathbb R$. Parametrizing $\gamma$ as $\gamma(s)$, then we suppose that a half-plane in the end $E_1$ in $\Sigma_{1,j}$ is a horizontal graph over $(-B_{1,j},\infty) \times \mathbb R$ with $B_{1,j} \to \infty$ and with graph function $v_{1,j}$, while a half-plane $E_2$ in $\Sigma_{2,j}$ is a horizontal graph over $(-\infty, B_{2,j}) \times \mathbb R$ with $B_{2,j} \to \infty$ and with graph function $v_{2,j}$. We assume finally that both $v_{\ell, j}$ converge to $0$ as $j \to \infty$.
Now let $\widetilde{\Sigma}_{1, j}$ be the surface which agrees with $\Sigma_{1, j}$ away from the half-plane $(-1,\infty) \times \mathbb R$, and where the graph function is altered to $\chi_1(s) v_{1, j}$; here $\chi_1(s)$ is a smooth monotone decreasing function which equals $1$ for $s \leq -1$ and vanishes for $s \geq 0$. We let $\widetilde{\Sigma}_{2,j}$ be a similar alteration of $\Sigma_{2,j}$. Finally, let \[ \Sigma(j) = \left(\widetilde{\Sigma}_{1,j} \setminus((0,\infty) \times \mathbb R)\right) \sqcup \left(\widetilde{\Sigma}_{2,j} \setminus((-\infty,0) \times \mathbb R)\right). \] It is clear that $\Sigma(j)$ is exactly minimal outside of the vertical strip $(-1,1) \times \mathbb R$.
Furthermore, it is clear that if $\Sigma_1$ and $\Sigma_2$ carry radial functions $R_1$ and $R_2$ as in the previous section, then we can form a radial function $R(j)$ on $\Sigma(j)$, and define weighted H\"older spaces $e^{\kappa R(j)} {\mathcal C}^{k,\mu}(\Sigma(j))$. In terms of these, the mean curvature $H(j)$ of $\Sigma(j)$ tends to zero.
A straightforward modification of the arguments in the preceding section yields a proof of the \begin{thm} Let $\Sigma(j)$ be a sequence of nearly minimal surfaces, constructed as above. Assume (as stated earlier) that both $\Sigma_1$ and $\Sigma_2$ are horizontally nondegenerate. Then for $j$ sufficiently large, there exists a function $u \in e^{\kappa R(j)} {\mathcal C}^{2,\mu}(\Sigma(j))$ such that the surface $\Sigma(j,u)$, which is the normal graph over $\Sigma(j)$ with graph function $u$, is an embedded, horizontally nondegenerate minimal surface. \label{thm5.1} \end{thm}
One must check first that $\Sigma(j)$ itself is nondegenerate for $j$ large, and then that the norm of the inverse of its Jacobi operator on these weighted H\"older spaces remains uniformly bounded as $j \to \infty$. These facts are both proved by contradiction, and the details of the proofs are very similar to what we have done above. The final step, using a contraction mapping to produce the function $u$ whose graph is minimal, is again done as before.
Notice that if the genera of $\Sigma_1$ and $\Sigma_2$ are $g_1$ and $g_2$, respectively, then $\Sigma(j)$ and hence the minimal surface $\Sigma(j,u)$ has genus $g_1 + g_2$.
\begin{cor}
The construction in Theorem~\ref{thm5.1} can be continued
indefinitely. In other words, let $\Sigma_\ell$ be an infinite
sequence of minimal, horizontally nondegenerate surfaces, each with
finite genus and finite number of planar ends, and let $P_j$ be one
of the planar ends of $\Sigma_j$. Suppose that we have constructed a
sequence of minimal, horizontally nondegenerate surfaces
$\Sigma^{(N)}$ inductively by gluing $\Sigma_N$ to $\Sigma^{(N-1)}$
with the end $P_N$ attached to the end corresponding to $P_{N-1}$ in
$\Sigma^{(N-1)}$. Then one can arrange the gluing parameters so
that $\Sigma^{(N)}$ converges to a minimal surface with an infinite
number of vertical planar ends. \end{cor} Indeed, each of the gluings here is given by Theorem~\ref{thm5.1}, so it remains only to show that one can pass to the limit.
For this, construct a sequence of properly embedded minimal surfaces $\{S_N\},$ and two sequences of positive real numbers $R_N \nearrow +\infty$ and $\varepsilon_N \searrow 0$ such that: \begin{enumerate}[(a)] \item $S_N$ is obtained by gluing $\Sigma^{(N-1)}$ and $\Sigma^{(N)}$.
\item If $S_N$ is a normal graph of a function $u_N$, then $\|u_N
\|_{2,\mu} \leq 2^{-N}$. \item $S_N \setminus B(p_0,R_N)$ consists of (disjoint) neighborhoods
of the ends of $S_N$, where $p_0$ is a fixed point in $\mathbb H^2 \times
\mathbb R$. \item For all $m \geq N$, we have that $S_m \cap B(p_0,R_N)$ lies on a
$\varepsilon_N$-neighborhood of $S_N$ and can be written as a normal
graph over $S_N$. \end{enumerate}
The construction of such sequences is possible since we can choose the neck separation parameter $D_N$ at the $N^{\mathrm{th}}$ stage sufficiently large. Thus it is clear (item (b)) that we can ensure that the sequence of normal graph functions $u_N$ converges locally uniformly in ${\mathcal C}^\infty$ to a function which is uniformly small, so that embeddedness is maintained (items (c) and (d)), and which decays exponentially along all ends.
This Corollary shows that there exist complete, properly embedded minimal surfaces in $\mathbb H^2 \times \mathbb R$ with vertical planar ends, with either finite or infinite genus and with an infinite number of ends.
We conclude this section with a brief remark concerning why the fluxes of horizontal catenoids, or of the more general constituent pieces considered in this section, play no role in this gluing construction. The reason is that we glue along vertical lines orthogonal to the axis of the catenoid and positioned very far from it. Although these lines are not closed, they are limits of a sequence of closed curves, namely rectangles lying over regions
$\{S_1 \leq s \leq S_2; |t| \leq T\}$ where $S_2, T \nearrow \infty$. These rectangles are homologically trivial, so the flux over them vanishes, and hence the same is true over the vertical lines. Because of this, there is no need to balance the fluxes of the summands in this construction against one another.
\section{Deformation theory} We conclude this paper with a brief analysis of the moduli space of even, properly embedded complete minimal surfaces with finite total curvature in $\mathbb H^2 \times \mathbb R$. Let ${\mathcal M}_k$ denote the space of all such surfaces with $k$ ends, each asymptotic to a vertical plane, and which are symmetric with respect to the reflection ${\cal R}_t$. \begin{thm} The space ${\mathcal M}_k$ is a real analytic set with formal dimension equal to $2k$. There is a stratum of ${\mathcal M}_k$ consisting of horizontally nondegenerate elements which has dimension exactly equal to $2k$. \end{thm} \begin{remark} This dimension count agrees with our construction: indeed, $2k$ is precisely the dimension of the space of admissible geodesic networks with $k$ geodesic lines, regardless of the number of `cross-piece' geodesic segments, since in a given network ${\mathcal F}$, each geodesic line $\gamma_\alpha$ has a two-dimensional deformation space, and any small perturbation of the geodesics uniquely determines the corresponding deformations of the geodesic segments $\tau_{\alpha \beta}$. Note, however, that we are {\it not} demanding here that the minimal surfaces be ones that we have constructed. For example, it is conceivable that there exist surfaces whose necks are not centered on the plane of symmetry. This analysis of the deformation space is insensitive to this.
We do not factor out by the $3$ dimensional space of `horizontal' isometries of $\mathbb H^2 \times \mathbb R$. But if we do this, then the dimension count $2k-3$ agrees with the dimension of the family of minimal surfaces in \cite{moro1}. \end{remark} \begin{proof} The proof is very similar to the ones in \cite{MPU} and \cite{KMP} (and in several places since then), so we shall be brief. A different approach to the moduli space theory -- for minimal surfaces with finite total curvature and parallel ends -- appears in \cite{PR}, but that relies on a Weierstrass representation which is not available here.
Fix $\Sigma \in {\mathcal M}_k$ and enumerate its vertical planar ends as $\{P_\alpha\}_{\alpha \in A}$, so each $P_\alpha = \gamma_\alpha \times \mathbb R$. For any sufficiently small $\epsilon_{\alpha,j} \in \mathbb R$, $j = 1,2$, we can deform $\gamma_\alpha$, and hence $P_\alpha$, by displacing the two endpoints of $\gamma_\alpha$ by these amounts, respectively (relative to a fixed metric on $\mathbb{S}^1$). Thus small deformations of the entire ensemble of vertical planes are in correspondence with $2k$-tuples $\epsilon = (\epsilon_{\alpha, 1}, \epsilon_{\alpha, 2})_{\alpha \in A}$
with $|\epsilon| \ll 1$.
For each such $\epsilon$, let $\Sigma(\epsilon)$ denote a small deformation $\Sigma(\epsilon)$ of the surface $\Sigma = \Sigma(0)$, constructed as follows. For each $\alpha$, write the end $E_\alpha$ of $\Sigma$ as a normal graph over some exterior region $P_\alpha \setminus O_\alpha$ with graph function $v_\alpha$ defined in polar coordinates for $r \geq R_0$. Rotate $P_\alpha$ by the parameters $\epsilon_\alpha$ to obtain a new vertical plane $P_\alpha(\epsilon)$. Using the same graph function $v_\alpha$, now defined on an exterior region in $P_\alpha(\epsilon)$, we obtain the deformed end $E_\alpha(\epsilon)$; this is quite close to the original end $E_\alpha$ over the annulus $\{R_0+1 \leq r \leq R_0+2\}$, so we can write $E_\alpha(\epsilon)$ as the graph of a function $v_{\alpha,\epsilon}$ defined on this annulus in the {\it original} plane $P_\alpha$. Finally, use a fixed cutoff function $\chi_\alpha$ to define $\widetilde{v}_{\alpha,\epsilon} = \chi_\alpha v_\alpha + (1-\chi_\alpha) v_{\alpha,\epsilon}$ so that the graph of this new function agrees with the original surface $\Sigma$ for $r \leq R_0 + 1$ and matches up smoothly with $E_\alpha(\epsilon)$ outside this annulus. This defines $\Sigma(\epsilon)$. Denoting its mean curvature function by $H(\epsilon)$, then clearly $H(\epsilon)$ vanishes outside the union of these annuli, hence
$H(\epsilon) \to 0$ in $e^{\kappa R} {\mathcal C}^{2,\mu}(\Sigma(\epsilon))$ as $|\epsilon| \to 0$.
The remainder of the proof follows the corresponding arguments in \cite{MPU} and \cite{KMP} essentially verbatim. When $\Sigma$ is horizontally nondegenerate, the implicit function theorem produces an analytic function $\epsilon \mapsto u_\epsilon$ such that the normal graph of $u_\epsilon$ over $\Sigma(\epsilon)$ is minimal. This is a real analytic coordinate chart in ${\mathcal M}_k$ around $\Sigma$. If $\Sigma$ is horizontally degenerate, then we can apply a Lyapunov-Schmidt reduction argument to show that there exists a neighbourhood ${\mathcal U}$ of $\Sigma$ in some fixed finite dimensional real analytic submanifold $Y$ in the space of all surfaces (with a fixed weighted H\"older regularity) and a real analytic function $F: {\mathcal U} \to \mathbb R$ such that ${\mathcal M}_k \cap {\mathcal U} = F^{-1}(0) \cap {\mathcal U}$. \end{proof}
\end{document}
Recall from our previous chapter that first order linear recurrence relations with constant coefficients are recurrence relations of the form $t_n=r\times t_{n-1}+d$ where $r$ is a non-zero constant and $d$ is any constant.
We also noticed that our familiar arithmetic and geometric sequences both belong to this group of recurrence relations. Every geometric sequence can be written $t_n=r\times t_{n-1}$ with $t_1=a$ given as the first term and the constant coefficient $r$ becoming the common ratio. Also, every arithmetic sequence can be written $t_n=t_{n-1}+d$ with $t_1=a$. Here the constant coefficient $d$ is the common difference and the constant $a$ is the first term.
When we say 'solve a recurrence relation', this has a slightly different meaning to solving a conventional equation. Solving a recurrence relation means to find the explicit equation that gives $t_n$ in terms of $n$, or in other words, the equation that gives us every term by its number only.
We already know the explicit equations for arithmetic recurrence relations and geometric recurrence relations. For arithmetic recurrence relations $t_n=t_{n-1}+d$ with $t_1=a$, our explicit solution is $t_n=a+\left(n-1\right)d$. For geometric recurrence relations $t_n=r\times t_{n-1}$ with $t_1=a$, our explicit solution is $t_n=ar^{n-1}$.
But what about recurrence relations $t_n=r\times t_{n-1}+d$ where both $r$ and $d$ are non-zero? In other words, those that are neither arithmetic nor geometric? How do we solve these?
Let's say we have the recurrence relation $t_n=20\times t_{n-1}+30$, where $t_1=10$. Let's calculate a few terms and see if we can identify a pattern.
$t_1=10$
$t_2=20\times10+30$
$t_3=20\left(20\times10+30\right)+30=20^2\times10+20\times30+30$
$t_4=20\left(20^2\times10+20\times30+30\right)+30=20^3\times10+20^2\times30+20\times30+30$
$t_5=20\left(20^3\times10+20^2\times30+20\times30+30\right)+30=20^4\times10+20^3\times30+20^2\times30+20\times30+30$
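As a quick numerical check of this pattern (the following Python sketch is an illustration, not part of the lesson), we can compare direct iteration of the recurrence with the expanded expressions above:

```python
# Compare direct iteration of t_n = 20*t_{n-1} + 30 (with t_1 = 10)
# against the expanded pattern t_n = 20^(n-1)*10 + 30*(20^(n-2) + ... + 20 + 1).
def iterate(n, a=10, r=20, d=30):
    t = a
    for _ in range(n - 1):
        t = r * t + d
    return t

def expanded(n, a=10, r=20, d=30):
    return r ** (n - 1) * a + d * sum(r ** k for k in range(n - 1))

for n in range(1, 6):
    print(n, iterate(n), expanded(n))
```

The two computed columns agree term by term.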
Do you see a pattern at all? What if we put brackets around our results like this?
$t_1=\left(10\right)$  $(n=1)$
$t_2=\left(20\times10\right)+\left[\left(30\right)\right]$  $(n=2)$
$t_3=\left(20^2\times10\right)+\left[\left(20\times30\right)+\left(30\right)\right]$  $(n=3)$
$t_4=\left(20^3\times10\right)+\left[\left(20^2\times30\right)+\left(20\times30\right)+\left(30\right)\right]$  $(n=4)$
$t_5=\left(20^4\times10\right)+\left[\left(20^3\times30\right)+\left(20^2\times30\right)+\left(20\times30\right)+\left(30\right)\right]$  $(n=5)$
As it turns out, the group of round brackets on the left contains the $n$th term of a geometric sequence with first term $10$ and $r=20$. The group of square brackets on the right contains the sum of the first $n-1$ terms of a geometric series with first term $30$ and $r=20$ ($n-1$ terms because the sum only appears from $t_2$ onwards).
This pattern applies in general for recurrence relations $t_n=r\times t_{n-1}+d$ where $t_1=a$.
$t_1=\left(a\right)$  $(n=1)$
$t_2=\left(ar\right)+\left[\left(d\right)\right]$  $(n=2)$
$t_3=\left(ar^2\right)+\left[\left(r\times d\right)+\left(d\right)\right]$  $(n=3)$
$t_4=\left(ar^3\right)+\left[\left(r^2\times d\right)+\left(r\times d\right)+\left(d\right)\right]$  $(n=4)$
$t_5=\left(ar^4\right)+\left[\left(r^3\times d\right)+\left(r^2\times d\right)+\left(r\times d\right)+\left(d\right)\right]$  $(n=5)$
Hence, we can see that the general explicit equation must be of the form $t_n=\left(ar^{n-1}\right)+\left[S_{n-1}\right]$ where $S_{n-1}$ is the sum of $n-1$ terms of a geometric series with first term $d$ and ratio $r$.
Thus, we have our explicit formula for first order linear recurrence relations with constant coefficients.
$t_n=\left(ar^{n-1}\right)+\left[\frac{d\left(r^{n-1}-1\right)}{r-1}\right]$
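As a sanity check (this code is an illustration and not part of the original lesson), the explicit formula can be compared against direct iteration, here with a fractional ratio:

```python
# Check t_n = a*r**(n-1) + d*(r**(n-1) - 1)/(r - 1) against iterating
# t_n = r*t_{n-1} + d, using a = 10, r = 0.5, d = 30.
a, r, d = 10.0, 0.5, 30.0

def closed_form(n):
    return a * r ** (n - 1) + d * (r ** (n - 1) - 1) / (r - 1)

t, ok = a, True
for n in range(2, 21):
    t = r * t + d
    ok = ok and abs(t - closed_form(n)) < 1e-9
print(ok)
```

Note that when $|r|<1$, the formula also shows what the terms approach in the long run: $r^{n-1}\to0$, so $t_n\to\frac{d}{1-r}$ (here $60$), which is the steady state value of the recurrence.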
Here is the table of recurrence relations and their explicit equations. In each case the first term is $t_1=a$.
Recurrence relation | Explicit equation
Arithmetic: $t_n=t_{n-1}+d$ | $t_n=a+d(n-1)$
Geometric: $t_n=r\times t_{n-1}$ | $t_n=ar^{n-1}$
First order linear (constant coefficients): $t_n=r\times t_{n-1}+d$ | $t_n=\left(ar^{n-1}\right)+\left[\frac{d\left(r^{n-1}-1\right)}{r-1}\right]$
Solve the difference equation $t_{n+1}=t_n+5$ where $t_1=7$.
Solve the difference equation $t_{n+1}=7t_n$ where $t_1=2$.
Solve the difference equation $t_{n+1}=2t_n-3$ where $t_1=4$.
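One possible way to check answers to exercises like these (again, the code is an illustration, not part of the lesson) is to implement the explicit equations from the table and compare them with direct iteration. Note that the first equation has $r=1$, so the arithmetic formula must be used to avoid dividing by $r-1=0$:

```python
def solve(n, a, r, d):
    # Explicit n-th term of t_{n+1} = r*t_n + d with t_1 = a.
    if r == 1:                    # arithmetic case: t_n = a + d*(n - 1)
        return a + d * (n - 1)
    return a * r ** (n - 1) + d * (r ** (n - 1) - 1) // (r - 1)

def iterate(n, a, r, d):
    t = a
    for _ in range(n - 1):
        t = r * t + d
    return t

# The three difference equations above as (a, r, d) triples.
for a, r, d in [(7, 1, 5), (2, 7, 0), (4, 2, -3)]:
    terms = [solve(n, a, r, d) for n in range(1, 6)]
    assert terms == [iterate(n, a, r, d) for n in range(1, 6)]
    print(terms)
```

Simplifying the resulting expressions gives $t_n=5n+2$, $t_n=2\times7^{n-1}$ and $t_n=2^{n-1}+3$ respectively.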
Y. Sudhakar & Wolfgang A. Wall
Advanced Modeling and Simulation in Engineering Sciences, volume 4, Article number: 2 (2017)
We devise a finite element methodology to trace quasi-static through-thickness crack paths in nonlinear elastic solids. The main feature of the proposed method is that it can be directly implemented into existing large scale finite element solvers with minimal effort. The mesh topology modifications that are essential in propagating a crack through the finite element mesh are accomplished by utilizing a combination of a mesh refitting procedure and a nodal releasing approach. The mesh refitting procedure consists of two steps: in the first step, the nodes are moved by solving the elastostatic equations without touching the connectivity between the elements; in the next step, if necessary, quadrilateral elements attached to crack tip nodes are split into triangular elements. This splitting of elements allows the straightforward modification of element connectivity locally, and is a key step to preserve the quality of the mesh throughout the simulation. All the geometry related operations required for crack propagation are addressed in detail with full emphasis on computer implementation. Solving several examples involving single and multiple cracks, and comparing them with experimental or other numerical approaches indicate that the proposed method captures crack paths accurately.
Objective and motivation
One of the main reasons why devising a computational methodology for fracture mechanics is challenging is the fact that cracks propagate in arbitrary directions through the material. If the dynamics of the crack were known a priori, one could design an optimal mesh that allows the propagation of a crack through the pre-existing mesh at each instant. Since this is not usually the case, the mesh has to be repeatedly modified to accommodate the advancement of cracks within the finite element (FE) mesh. The objective of this work is to devise a simple procedure to achieve the required mesh modifications, which enables us to model complex crack paths through nonlinear elastic solids. The present work is motivated by our interest in developing computational methodologies for fluid-structure-fracture interaction (FSFI) [1] that model the following phenomenon: when a flexible structure interacts with fluid flow, the fluid loading induces elastic deformation as well as fracture failure of the structure, and the fluid medium fills the crack opening. The first step in extending fluid-structure-interaction (FSI) methods to handle FSFI is to equip the structural analysis with a fracture mechanics solver. This is achieved in this work by developing a crack propagation approach which makes it possible, with minimal implementation effort,
to update the existing large scale structural mechanics solver into a robust tool to handle single and multiple quasi-static cracks
to couple the crack propagation method with existing FSI approach to model FSFI.
The method devised in this paper is implemented in BACI, a large scale parallel multiphysics solver developed at our institute. It is to be mentioned that the objective of the present work is not to devise a method that is competitive with the available classes of methods in terms of computational efficiency or accuracy. Rather, the focus is on devising a simple crack propagation approach, which circumvents the complexities associated with those methods (as discussed below) to aid the development of an FSFI solver.
Brief overview of relevant methods
The present work is based on the fracture mechanics [2,3,4,5,6,7,8,9,10,11,12] framework rather than on continuum damage mechanics [13, 14] because we model the propagation of sharp cracks in the material. The majority of the existing computational methods employing fracture mechanics principles utilize one of the following frameworks:
Adaptive remeshing
Enriched partition of unity
These methods are very successful and find a plethora of applications, but implementing them in an existing (large scale) FE package poses several challenges.
Adaptive remeshing methods
These methods, as the name implies, adaptively refine the mesh in the crack tip vicinity, where the solution dictates the dynamics of crack propagation, and coarsen the mesh away from the crack tip. They make use of special data structures [2, 3, 15], together with either a globally adaptive remeshing procedure [4,5,6] or a local mesh modification algorithm [7], to accommodate crack propagation in arbitrary directions within the computational domain. Each time a crack extends, these methods introduce several new nodes into the mesh. As a result, they require mesh generation algorithms to modify the mesh appropriately. In a large scale FE code that is generalized to address multiscale and multiphysics problems, such fracture-specific mesh-modification routines are usually neither available nor easy to implement in a generalized way.
Enriched partition of unity methods (EPUM)
These methods represent the recent developments in computational fracture mechanics. They include extended finite element methods (XFEM) [16,17,18,19] and generalized finite element methods (GFEM) [20, 21]. These classes of methods were originally developed with the objective of eliminating adaptive remeshing and its associated complex and time-consuming operations. The fundamental idea behind these methods is to enrich the finite element solution space with additional problem-specific enrichment functions. The crack can propagate within the interior of an element, and hence it is possible to simulate crack propagation without modifying the underlying discretization. Though these methods have been demonstrated to be powerful, the following points hamper their easy implementation into an existing structural mechanics solver. The numerical integration of singular enrichment functions is still an active area of research in EPUM [22,23,24,25,26,27,28]. Moreover, these methods require the implementation of complicated geometry-mesh intersections, for which robustness is always an issue, especially in 3D. Also, the number of degrees of freedom attached to a few nodes changes each time the crack advances. Most importantly, with a few exceptions (e.g. [29,30,31]), all studies are focused on crack propagation through linear elastic materials, mainly because the formulation of enrichment functions in the nonlinear regime is still an active area of research.
Owing to the aforementioned implementation issues, neither EPUM nor adaptive remeshing methods are ideal for developing fluid-structure-fracture interaction methods. This is because FSFI methods have to handle the combination of challenges from two sources: those arising from crack propagation besides another big challenge from FSI. The aim of this work is to devise a simple crack propagation approach that is suitable specifically for developing FSFI.
The method developed in this work, as will be explained later, shares a similarity with arbitrary Lagrangian Eulerian (ALE) based methods in that it involves a mesh-deformation step. Therefore, ALE based methods for fracture mechanics are briefly recalled. The use of ALE in computational fracture mechanics is not widespread. Only a few studies have employed ALE to address crack propagation problems, and a brief account of the majority of such works is provided below.
Existing ALE based crack propagation methods
Though ALE formulations are widely used in several solid mechanics applications (refer to [32] for an overview), less than a handful of researchers have used them to handle fracture mechanics problems. The first use of ALE is described in [33, 34] to model dynamic crack propagation. The capability of these methods is demonstrated by simulating a few mode-I dynamic crack propagation problems and comparing them with analytical relations. In this work, the material separation is not explicitly modeled. Moreover, an existing Lagrangian FE code framework cannot be directly extended to include this model. To achieve this, an improved method was developed in [35], which is applied to simulate mixed-mode dynamic crack propagation examples. Another method, which uses a mesh motion algorithm based on an isoparametric mapping, is presented in [36], in which a self-similar dynamic crack propagation problem in a double cantilever beam is solved. They conclude that ALE methods are more robust, and can be a powerful alternative to remeshing. For a better understanding of dynamic crack growth in fibre-reinforced plastic (FRP) composites, an ALE based method together with a contact mechanics approach is developed in [37]. A very fine mesh in the neighborhood of the crack tip is maintained throughout the simulation with the help of a remeshing procedure. This method is then used to study the interfacial debonding phenomenon in FRP strengthened reinforced concrete beams. Another attractive method that combines the advantages of the element free Galerkin (EFG) method and ALE is presented in [38]. This method moves a cloud of nodes along with the crack tip, so that the vicinity of the crack tip is always adequately resolved.
As EFG is a meshless method that does not require nodal connectivities, such an implementation of ALE to maintain high nodal density in the preferred region is accomplished effectively without resorting to remeshing strategies (refer to [39] for the complications involved in implementing such a method in mesh-based FEM). The method is successfully applied to simulate wave propagation and dynamic crack propagation.
A schematic representation of a structural domain containing a crack
To summarize, to our knowledge neither a complex trajectory of a single crack nor the simple propagation of multiple cracks within a material has been modeled until now using ALE based methods. This is predominantly due to the fact that the mesh modification method used in ALE, in its classical sense, cannot handle the mesh topology changes that are introduced by advancing a crack through the FE mesh. Continuous remeshing is mandatory to eliminate the associated mesh tangling problems. In this work, this is avoided by using an additional step in the mesh refitting procedure that allows us to modify the element connectivity locally to preserve the quality of the mesh, as will be explained in a later section.
Structure of the paper
The remainder of the paper is structured as follows. The next section summarizes the governing equations and the boundary conditions for the problem. Then the complete crack propagation algorithm, together with all the details necessary for computer implementation, is presented. Finally, several numerical examples demonstrating the accuracy of the proposed method are described.
Governing equations
At reference time \(t=t_0\), let the structure occupy the domain \(\Omega ^s_0\), with \(\Gamma ^s\) denoting the boundary of the structure (Fig. 1). \(\Gamma ^s\) is divided into three non-overlapping portions such that \(\Gamma ^s=\Gamma ^s_D\cup \Gamma ^s_N\cup \Gamma ^s_c\) in which \(\Gamma ^s_D\) and \(\Gamma ^s_N\) are Dirichlet and Neumann portions of the boundary respectively, and \(\Gamma ^s_c\) denotes the crack surfaces which contain always two physical crack faces \(\Gamma _c^s=\Gamma _{c+}^s\cup \Gamma _{c-}^s\). The balance of linear momentum equation is written as,
where \(\rho ^s\) is the density of the structure, \(\mathbf F \) is the deformation gradient, \(\mathbf S \) is the second Piola-Kirchhoff stress tensor, \(\mathbf b ^s\) represents the externally applied body force per unit mass, \(\mathbf d ^s\) is the structural displacement and \(t\in (0,T)\). T is the end time of the considered time interval, and Div(\(\cdot \)) is the divergence operator defined with respect to the material reference frame.
Since this is an evolutionary problem involving second order time derivative, initial conditions must be specified on \(\mathbf d ^s\) and its first derivative \(\dot{\mathbf{d }}^s=\frac{d\mathbf d ^s}{dt}\)
$$\begin{aligned} \mathbf d ^s|_{t=0}=\mathbf d ^s_0\ \ \ \mathrm {on}\ \Omega ^s_0\ ;\ \ \ \dot{\mathbf{d }}^s|_{t=0}=\dot{\mathbf{d }}^s_0\ \ \ \mathrm {on}\ \Omega ^s_0 \end{aligned}$$
Over the boundary of the domain, Dirichlet conditions are specified on \(\Gamma ^s_D\), Neumann conditions are prescribed on \(\Gamma ^s_N\), and the crack surfaces are assumed to be traction-free.
$$\begin{aligned}&\mathbf d ^s=\bar{\mathbf{d }}^s\ \ \ \mathrm {on}\ \Gamma ^s_D\times (0,T)\ ;\ \ \ \left( \mathbf FS \right) \cdot \mathbf n ^s=\bar{\mathbf{h }}^s\ \ \ \mathrm {on}\ \Gamma ^s_N\times (0,T) \end{aligned}$$
$$\begin{aligned}&\left( \mathbf FS \right) \cdot \mathbf n ^{c+}=0\ \ \ \ \ \mathrm {on}\ \Gamma ^s_{c+}\times (0,T)\ ;\ \ \ \left( \mathbf FS \right) \cdot \mathbf n ^{c-}=0\ \ \ \ \ \mathrm {on}\ \Gamma ^s_{c-}\times (0,T) \end{aligned}$$
(3b)
We deal with hyperelastic Neo-Hookean materials in this work. The strain energy function for such a material is given as
$$\begin{aligned} \Psi _{\mathrm {NH}}=\frac{\mu ^s}{2}\left( \mathrm {tr}\ \mathbf C -3\right) -\mu ^s\mathrm {ln}\ J+\frac{\lambda ^s}{2}\left( \mathrm {ln}\ J\right) ^2 \end{aligned}$$
where \(\mathbf C \) is the right Cauchy–Green deformation tensor, J is the determinant of the deformation gradient, \(J=\mathrm {det}\ \mathbf F \), and \(\lambda ^s\) and \(\mu ^s\) are the Lamé constants.
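For reference, this strain energy and its standard second Piola-Kirchhoff stress, \(\mathbf S =\mu ^s(\mathbf I -\mathbf C ^{-1})+\lambda ^s\,\mathrm {ln}\,J\ \mathbf C ^{-1}\), can be evaluated as in the following minimal NumPy sketch (the function name is illustrative; this is not code from the solver):

```python
import numpy as np

def neo_hookean(F, mu, lam):
    """Strain energy and second Piola-Kirchhoff stress for the
    compressible Neo-Hookean model above; F is the 3x3 deformation
    gradient, mu and lam the Lame constants."""
    C = F.T @ F                                   # right Cauchy-Green tensor
    J = np.linalg.det(F)
    lnJ = np.log(J)
    psi = 0.5 * mu * (np.trace(C) - 3.0) - mu * lnJ + 0.5 * lam * lnJ ** 2
    Cinv = np.linalg.inv(C)
    # S = mu (I - C^{-1}) + lam ln(J) C^{-1}
    S = mu * (np.eye(3) - Cinv) + lam * lnJ * Cinv
    return psi, S
```

In the undeformed state (\(\mathbf F =\mathbf I \)) both the energy and the stress vanish, which is a quick sanity check for any material-routine implementation.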
The mesh refitting approach
The complete numerical methodology, together with the computer implementation aspects, of the present approach are presented in this section. It is assumed that the fracture behavior of the material is completely characterized by the J-integral.
The focus of the present work is to simulate through-thickness mixed-mode quasi-static crack propagation within a structure. The current work can be considered as an extension of Tabiei and Wu [40] which describes the implementation of a crack module in DYNA3D FE package, and shares similarities with the method of Miehe and Gürses [3]. Both works address crack propagation through linear elastic materials only. Moreover the complex geometry related operations, like deciding the new crack tip nodes, are not addressed in depth. These details are crucial for implementation of the method. The approach proposed in [3] was called an r-adaptive method, but in order to avoid confusion with complex r-adaptive mesh redistribution methods [41, 42], the present method is labelled as mesh refitting approach. The following section presents the complete implementation details of the present method. Though we simulate through-thickness cracks, for simplicity we explain the geometry-related operations in 2D.
At each time step, the governing equations are solved by freezing the location of the crack. Then, the solution obtained is used to perform crack propagation related operations as described in Algorithm 1.
Solve the governing equations
The first step is to solve the structural dynamic equations by freezing the location of the crack. The strong form given in Eq. (1) is multiplied by appropriate test functions (\(\delta \mathbf d ^s\)) and integrated over the structural domain to obtain the weak form, which is stated as,
Find \(\mathbf d ^s\in \mathcal {W}_d\) such that for all \(\delta \mathbf d ^s\in \mathcal {V}_d\), the following holds
where \((.,.)_{\Omega ^s_0}\) and \(\langle .,.\rangle _{\Gamma ^s_N}\) mean the standard \(L^2\)-inner product over the reference domain and Neumann part of the boundary, respectively.
The solution space and the test function space are defined as
$$\begin{aligned} {\mathcal {W}}_d= & {} \{\mathbf{d }^{s}\in \mathbf{H }^1(\Omega ^s_0)\ |\ \mathbf{d }^s=\bar{\mathbf{d }}^{s}\ {\mathrm {on}}\ \Gamma ^{s}_{D}\} \end{aligned}$$
$$\begin{aligned} {\mathcal {V}}_d= & {} \{\delta \mathbf{d }^{s}\in \mathbf{H }^1(\Omega ^s_0)\ |\ \delta \mathbf{d }^s=0\ {\mathrm {on}}\ \Gamma ^{s}_{D}\} \end{aligned}$$
The above integral equations are dealt with using nonlinear FEM for spatial discretization and the Generalized-\(\alpha \) method for time discretization. The resulting nonlinear system of algebraic equations is solved using the Newton–Raphson method to obtain the solution of the displacement field \((\mathbf d ^s)\). For a more elaborate discussion of this well established procedure, the reader can refer to the literature (e.g. [43, 44]).
Perform computational crack propagation procedure
The displacement solution obtained from the previous step is used to perform the crack propagation procedure by computing the vector \(\mathbf J \)-integral. It involves seven discrete steps, each of which is detailed below.
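The control flow of the seven steps can be sketched as follows. The helper callbacks are hypothetical stand-ins for the solver routines described under Steps 1-7; only the orchestration logic is shown:

```python
import math

def crack_propagation_step(solution, crack, J_c, helpers):
    """One pass through the crack propagation procedure, with the
    geometric kernels injected through `helpers` (a hypothetical
    interface used here only to illustrate the control flow)."""
    e1, e2 = helpers['local_basis'](crack)                 # Step 1
    J = helpers['j_integral'](solution, crack)             # Step 2
    J1 = float(J[0] * e1[0] + J[1] * e1[1])
    J2 = float(J[0] * e2[0] + J[1] * e2[1])
    if math.hypot(J1, J2) < J_c:                           # Step 3: G_max < J_c
        return False                                       # crack does not grow
    theta_p = math.atan2(J2, J1)                           # Step 4: MERR direction
    new_tip, split = helpers['find_new_tip'](crack, theta_p)   # Step 5
    helpers['refit_mesh'](crack, new_tip, split)           # Step 6
    helpers['release_node'](crack, theta_p)                # Step 7
    return True
```

When the criterion in Step 3 fails, the routine returns immediately and the algorithm moves on to the next time step, mirroring the early exit described below.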
Step 1: Construct local coordinate system at crack tip
To compute fracture mechanics quantities from the FE solution, and to decompose these quantities into their corresponding modes in a mixed-mode problem, it is essential to construct a local coordinate system (\(\underline{\xi },\underline{\eta }\)) at the crack tip (\(\mathbf x _c\)) as shown in Fig. 2. The base vectors (\(\mathbf e _1,\mathbf e _2\)) associated with (\(\underline{\xi },\underline{\eta }\)) can be easily constructed because \(\mathbf e _1\) lies along the symmetry line of the crack; \(\mathbf e _2\) is obtained by computing the normal to \(\mathbf e _1\) in a right hand coordinate system.
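In 2D, such a basis can be built from the last crack segment. A minimal sketch, assuming the previous crack-path point and the tip are known (names are illustrative):

```python
import numpy as np

def crack_tip_basis(x_prev, x_tip):
    """Local basis (e1, e2) at the crack tip: e1 points along the last
    crack segment (the crack symmetry line), and e2 is its 90-degree
    counter-clockwise normal, forming a right-handed 2D frame."""
    e1 = np.asarray(x_tip, float) - np.asarray(x_prev, float)
    e1 /= np.linalg.norm(e1)
    e2 = np.array([-e1[1], e1[0]])   # rotate e1 by +90 degrees
    return e1, e2
```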
Step 2: Compute vector \(\varvec{J}\)-integral
The J-integral quantifies the strength of the singularity at the crack tip in nonlinear elastic materials [45]. Moreover, it is a single parameter that dictates whether the crack propagates or not, and, if it does, in which direction it advances. Hence, it is essential to evaluate the J-integral accurately.
Construction of local coordinate system at crack tip. The shaded Quads represent finite elements
J-integral: a notation, b distribution of support function
With respect to the spatial configuration, the energy release rate along the direction of a crack is defined as,
$$\begin{aligned} J=\oint _{\gamma _j}{\left( wn_{\underline{\xi }}-\mathbf n \cdot \varvec{\sigma }\cdot \frac{\partial \mathbf d }{\partial \underline{\xi }}\right) d\gamma } \end{aligned}$$
where \(\gamma _j\) is the integration contour, w is the strain energy stored per unit deformed volume, \(\mathbf n \) is the normal to contour \(\gamma _j\) and \(\varvec{\sigma }\) is the Cauchy stress tensor. All these quantities are defined in the current spatial configuration as shown in Fig. 3a. Since the current work employs a total Lagrangian formulation, it is convenient to express the J-integral in the reference configuration using pull-back operations. As shown in [30], it reads as
$$\begin{aligned} J=\oint _{\Gamma _j}{\left( WN_{\underline{\Xi }}-\mathbf N \cdot \varvec{P}\cdot \frac{\partial \mathbf d }{\partial \underline{\Xi }}\right) d\Gamma } \end{aligned}$$
where \(\Gamma _j, W, \mathbf N \) are the corresponding quantities in reference configuration, \(\mathbf P \) is the first Piola-Kirchhoff stress tensor and \((\underline{\Xi },\underline{H})\) denote the crack tip coordinate system in the reference configuration, which is given by its base vectors (\(\mathbf E _1, \mathbf E _2\)) in Fig. 3a.
In FEM, the contour integrals are cumbersome to implement because the material variables are available only at the Gauss points. Interpolating these variables over the desired contour presents complications, in addition to introducing interpolation errors. Hence, several studies [46, 47] have proposed the idea of converting the contour integral into an integral evaluated over a finite domain around the crack tip by applying the divergence theorem. This procedure is straightforward to implement, as it requires only the quantities at Gauss points of elements within the finite domain. The equivalent domain form of \(\mathbf J \)-integral for large deformation problems is given as [29, 30],
$$\begin{aligned} \varvec{J}=\int _S{\left( \frac{\partial \mathbf d ^\top }{\partial X}\cdot \mathbf P -W\mathbf I \right) \cdot \varvec{\nabla }_0(q)\ dS} \end{aligned}$$
where S is the domain enclosed by \(\Gamma _j\), and \(\varvec{\nabla }_0(q)\) is the gradient of support function q with respect to the reference configuration.
In order to construct q, using nodal connectivity information, all the elements that are located within \(n\) layers around the crack tip (see Fig. 3b with \(n=4\)) are considered. From this set, the elements connected to the crack tip are deleted. Then, all the nodes that are on the outer boundary of this element set are located, and among these nodes, the one which has the shortest distance (\(r_{\mathrm {min}}\)) from the crack tip is chosen. The support function is then initialized to take a value of unity at the inner layer of nodes, and it drops smoothly to zero for nodes whose distance from the crack tip is greater than or equal to \(r_{\mathrm {min}}\). The distribution of q within the integration domain is given in Fig. 3b.
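One possible nodal form of q is a plateau with a linear ramp. Here `r_inner` is a hypothetical parameter marking the inner layer of nodes where q equals unity; the actual interpolation used in the solver may differ:

```python
def support_q(r, r_inner, r_min):
    """Support function q for the domain form of the J-integral:
    unity near the tip (r <= r_inner), a linear ramp in between,
    and zero for distances at or beyond r_min."""
    if r <= r_inner:
        return 1.0
    if r >= r_min:
        return 0.0
    return (r_min - r) / (r_min - r_inner)
```

Evaluating this q at every node of the integration domain, and interpolating its gradient at the Gauss points, supplies the \(\varvec{\nabla }_0(q)\) term of the equivalent domain integral.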
Note In this work, we assume that the crack surfaces are traction-free. However, in FSFI applications, fluid loads act on the crack faces, and as a result an additional term appears in the computation of the J-integral. This is explained further in [1].
Step 3: Check crack propagation criterion
The crack propagation criterion determines whether the existing crack propagates through the structure under the current stress state. Crack propagation occurs when the driving force reaches or exceeds the material resistance. The J-integral provides a measure of driving force, and its critical value (\(J_c\)) is assumed to be a material property, which quantifies the material's resistance to crack propagation.
The present work makes use of the vector \(\mathbf J \)-integral based crack propagation criterion proposed by Ma and Korsunsky [48]. This criterion requires, first, the calculation of the maximum strain energy release rate, which is given by the magnitude of \(\mathbf J \).
$$\begin{aligned} G_{max}=\sqrt{J_1^2+J_2^2} \end{aligned}$$
where \(J_1\) and \(J_2\) are the strain energy release rates along the direction of \(\mathbf e _1\) and \(\mathbf e _2\) respectively, which are given by simple dot products \(J_1=\mathbf J \cdot \mathbf e _1\) and \(J_2=\mathbf J \cdot \mathbf e _2\).
Crack extension under the given loading conditions occurs when \(G_{max}\) reaches the fracture toughness of the material (\(J_c\)),
$$\begin{aligned} G_{max}\ge J_c \end{aligned}$$
If the crack propagation criterion is not satisfied, then there is no need to perform the remaining operations; the algorithm moves to the next time step to solve the governing equations of the structure.
Step 4: Obtain the direction of crack propagation
After confirming that the crack propagates, the next logical step is to determine along which direction it is going to advance through the material, which is provided by the crack kinking criterion.
Several methods have been put forward to determine the crack kinking direction; the most important ones are
Maximum circumferential stress criterion [49],
Minimum strain energy density criterion [50],
Maximum energy release rate criterion (MERR) [48, 51].
It is concluded in a comparative study [52] that the minimum strain energy density criterion is less accurate, and the accuracy of maximum circumferential stress criterion and maximum energy release rate criterion are equivalent in all the tests considered.
This work incorporates the maximum energy release rate criterion proposed in [48], which is consistent with the crack propagation criterion given in the last section: the crack propagates when \(G_{max}\) reaches or exceeds the characteristic fracture toughness of the material. MERR predicts the crack propagation direction (\(\theta _p\)) to be the direction of \(\mathbf J \), which is simply given as
$$\begin{aligned} \theta _p=\mathrm {tan^{-1}}\left( \frac{J_2}{J_1}\right) \end{aligned}$$
It is to be remembered that \(\theta _p\) is measured with respect to the crack normal, as indicated in Fig. 2. Moreover, in this work, the extent of crack propagation is always set to be the length of one complete edge of an element.
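The projection of \(\mathbf J \) onto the crack-tip frame, together with \(G_{max}\) and \(\theta _p\), can be computed as below; `atan2` is used as a quadrant-safe form of \(\mathrm {tan^{-1}}(J_2/J_1)\) (a minimal sketch with illustrative names):

```python
import math

def kink_angle(J, e1, e2):
    """Project the vector J-integral onto the crack-tip basis (e1, e2)
    and return (G_max, theta_p); theta_p is measured from e1, i.e. from
    the crack normal direction of the local frame."""
    J1 = J[0] * e1[0] + J[1] * e1[1]
    J2 = J[0] * e2[0] + J[1] * e2[1]
    return math.hypot(J1, J2), math.atan2(J2, J1)
```

For a pure mode-I state (\(J_2=0\)) this yields \(\theta _p=0\), i.e. straight-ahead propagation.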
Step 5: Find new crack tip nodes
Having computed the crack propagation direction from \(\varvec{J}\), the next essential step is to determine the new crack tip nodes i.e, nodes in the FE mesh through which the crack must be propagated. A geometry-based method is used in this work to identify the new tip nodes.
The first step is to identify all the elements that are connected to the current tip node (Fig. 4). Among the edges of these elements, the edge that is intersected by the propagation vector is found. This intersecting edge is drawn with a thick continuous line in Fig. 4. Then the angles between the crack propagation direction and the lines joining the current tip node to the edge nodes are calculated (\(\phi _1\) and \(\phi _2\) in the figure).
After obtaining the required intersecting edge, the next step is to check whether the crack propagates along the diagonal. This is realized by the condition \(\phi _1\le \) diag-tol (\(=\)0.25 radians in all the simulations). If this condition holds, the diagonal node corresponding to \(\phi _1\) is marked as the new tip node. Since the crack propagates through a diagonal of the element, this element must be split along this diagonal to accommodate crack propagation through the mesh, as explained in the next step.
If \(\phi _1>\) diag-tol, then the non-diagonal node of the intersecting edge will be the new tip node. In either case, all the new tip nodes (if there are multiple cracks, each crack tip will have its own new tip node) are stored in \(\mathcal {R}_{ale}\). Moreover, the distance between the intersection point and each new tip node, marked as \(\varvec{\delta }_{ale}\) in the figure, is computed and will be used in the next step.
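The diagonal test above can be sketched as follows, assuming the intersected edge and its diagonal/non-diagonal nodes have already been identified (a simplification of the full geometric search; names are illustrative):

```python
import numpy as np

def select_new_tip(x_tip, prop_dir, x_diag, x_other, diag_tol=0.25):
    """Choose the new tip node on the intersected edge: the diagonal
    node x_diag if the propagation direction deviates from it by at
    most diag_tol radians (phi_1 <= diag-tol), otherwise the
    non-diagonal node x_other. Also reports whether the Quad element
    must be split along its diagonal."""
    v = np.asarray(x_diag, float) - np.asarray(x_tip, float)
    v /= np.linalg.norm(v)
    d = np.asarray(prop_dir, float)
    d /= np.linalg.norm(d)
    phi1 = np.arccos(np.clip(v @ d, -1.0, 1.0))
    if phi1 <= diag_tol:
        return np.asarray(x_diag, float), True    # crack runs along diagonal
    return np.asarray(x_other, float), False
```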
Step 6: Mesh refitting procedure
In this step, we refit the existing mesh in such a way that the modified mesh contains an edge along which the crack can propagate. The mesh refitting procedure used in the present work involves two discrete operations listed as follows:
Nodal repositioning
Splitting quadrilateral (Quad) elements into triangular (Tri) elements
In the first step, the nodes are repositioned without touching the elements. This means that the element topology (shape and total number of elements, or the connectivity between elements) remains unchanged. The elements only deform due to the movement of the nodes.
Procedure to find new crack tip nodes
Nodal release technique a splitting an element to allow crack propagation along diagonal, b modify connectivity locally near crack tip; the crack opening is shown only for visualization
Nodal repositioning in this work is achieved by solving elastostatic equations but obviously any other mesh moving approach could be used as well. In this approach, the mesh is treated as a linear elastic body, and the governing equations of the mesh movement together with the boundary conditions are
$$\begin{aligned} \varvec{\nabla }\cdot \varvec{\sigma }^m&=0\ \ \mathrm {on}\ \Omega ^s\end{aligned}$$
(14a)
$$\begin{aligned} \mathbf d ^m&=0\ \ \mathrm {on}\ \partial \Omega _{ale} \end{aligned}$$
(14b)
$$\begin{aligned} \mathbf d ^m&=\varvec{\delta }_{ale}\ \ \mathrm {on}\ \mathcal {R}_{ale} \end{aligned}$$
(14c)
where \(\varvec{\sigma }^m\) is the fictitious Cauchy stress tensor, \(\mathbf d ^m\) denotes the displacements at each node within the mesh, \(\Omega ^s\) represents the whole structural domain, and \(\partial \Omega _{ale}\) denotes the boundary for ALE computations: \(\partial \Omega _{ale}=\Gamma ^s_D\cup \Gamma ^s_N\cup \Gamma ^s_c\). Displacements at the new crack tip nodes are set to be \(\varvec{\delta }_{ale}\) that is computed in the previous step.
The above equations are solved to obtain the mesh displacement \(\mathbf d ^m\), which is used to move each node in the mesh to its new location. After this mesh movement operation, the new crack tip node is moved to the intersection point along the intersection edge (see Fig. 4).
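As a simplified stand-in for the elastostatic mesh-motion solve of Eqs. (14a)-(14c), the repositioning idea can be illustrated with constrained Laplacian smoothing (Jacobi iterations). This is not the paper's linear-elastic mesh model, only a sketch of how Dirichlet data at \(\partial \Omega _{ale}\) and at the new tip nodes drives the interior nodes:

```python
import numpy as np

def reposition_nodes(coords, edges, fixed, n_iter=200):
    """Move free nodes toward the average of their neighbors while the
    nodes in `fixed` (index -> prescribed position) are held: boundary
    nodes keep their original position, new tip nodes get their shifted
    target, mimicking the Dirichlet data delta_ale of the mesh-motion
    problem."""
    coords = np.array(coords, float)
    nbrs = {i: [] for i in range(len(coords))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for i, x in fixed.items():
        coords[i] = x
    for _ in range(n_iter):
        new = coords.copy()
        for i, nb in nbrs.items():
            if i not in fixed and nb:
                new[i] = coords[nb].mean(axis=0)
        coords = new
    return coords
```

In the solver, the same role is played by the linear-elastic pseudo-solid; the smoothing above merely demonstrates how prescribed displacements propagate into the mesh interior.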
In the next step of the mesh refitting procedure, the Quad elements that are marked to be split are cut into two Tri elements. This happens when the crack propagates very close to the diagonal of a Quad element (Fig. 5a). This process does not involve introducing new nodes into the mesh. By comparing Figs. 4 and 5a, the effect of the mesh refitting procedure is clear: the new tip nodes are first moved to the desired location using the nodal repositioning step, and then the Quad element is appropriately split into Tri elements to enable crack propagation along the diagonal. In short, the combination of nodal repositioning and element splitting ensures that, after the mesh modifications, the crack propagates along an existing edge in the new mesh.
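The connectivity change of this splitting step can be sketched as follows, assuming counter-clockwise Quad numbering; no new nodes are created, only the local element connectivity is rewritten:

```python
def split_quad(quad, tip_node):
    """Split a Quad (counter-clockwise node list) into two Tris along
    the diagonal emanating from the crack-tip node: rotate the
    connectivity so the tip node comes first, then cut along the
    first-to-third-node diagonal."""
    i = quad.index(tip_node)
    n = [quad[(i + k) % 4] for k in range(4)]   # rotate so tip is first
    return [n[0], n[1], n[2]], [n[0], n[2], n[3]]
```

Both resulting Tris keep the counter-clockwise orientation of the parent Quad and share the diagonal edge along which the crack will run.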
One of the main reasons for the failure of ALE based methods in handling large deformation or topology change is that such methods maintain their nodal connectivity during the entire simulation. The element splitting operations used in the present work alleviate this problem by enabling us to modify the connectivity between the elements locally. This is an essential step without which the nodal repositioning method cannot handle the change in mesh topology that is inherent to crack propagation problems, without resorting to complicated and time-consuming remeshing procedures.
Step 7: Nodal releasing technique
The two previous steps have enabled us to identify new tip nodes, and to move these nodes to match the computed propagation angle. However, the material separation is not yet included within the FE procedure. In order to achieve this, and to form physical crack surfaces, the nodal releasing technique is used.
In order to represent the material separation, the element connectivity at the current tip node must be modified; a duplicate node is created at the same location where the current tip resides. A few elements are released from the current tip node and are assigned the new duplicate node. This, in turn, generates new crack surfaces. In order not to destroy the FE mesh during this process, a consistent way of determining which elements get duplicate nodes is used.
In this procedure, two angles are defined: one is \(\phi _p\), already defined in Fig. 4, and the other is the angle formed by the negative normal at the crack tip and the propagation vector (\(\phi _{nn}\) in Fig. 5a). Then, for each element, the angle (\(\phi _g\)) formed by the normal and the line connecting the current tip to the centroid of the element is computed. The element is released and gets the duplicate node if \(\phi _g\notin ~[\phi _p,\phi _{nn}]\). The elements that retain the current tip node are shaded in Fig. 5a. After nodal releasing and modifying the element connectivity, the mesh close to the crack tip is plotted in Fig. 5b. At this point, the material separation is introduced and all the crack propagation operations are completed.
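The release rule can be sketched as follows, with the centroid-angle computation abstracted into a callback (a hypothetical interface; the element lists and node ids are illustrative):

```python
def nodal_release(elements, centroid_angle, tip, phi_p, phi_nn, dup_node):
    """Reassign the duplicate node to every released element: an element
    attached to the current tip whose centroid angle phi_g lies outside
    [phi_p, phi_nn] is released and gets dup_node in place of tip,
    creating two distinct crack faces. `elements` is a list of
    connectivity lists; `centroid_angle` maps a connectivity to phi_g."""
    lo, hi = sorted((phi_p, phi_nn))
    out = []
    for conn in elements:
        if tip in conn and not (lo <= centroid_angle(conn) <= hi):
            conn = [dup_node if n == tip else n for n in conn]
        out.append(conn)
    return out
```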
Numerical examples
Several examples of varying complexity are solved to demonstrate the effectiveness of the proposed method. These examples exhibit single and mixed-mode behavior, involving mono- and multimaterials. In order to closely examine the accuracy of the method, crack paths obtained from the present method are compared with experiments or results obtained from other methods in literature.
The first two examples consider stationary cracks, and the quantities calculated are compared with XFEM studies [30, 31]. These examples consider highly nonlinear effects evident from the crack tip blunting observed in the results. All the other examples involve complex crack propagation through the structure.
Crack tip blunting
Consider a single edge notched specimen with dimensions 2 mm \(\times \) 6 mm. The crack occupies half the width, as shown in Fig. 6a. The top surface is subjected to a fixed displacement of 4 mm. All these details are taken from [30]. The strain energy function of the material is given by,
$$\begin{aligned} \Psi _{\mathrm {NH}}=\frac{\mu ^s}{2}\left( \mathrm {tr}\ \mathbf C -3\right) \end{aligned}$$
The Lame parameters are set such that \(\mu ^s=0.4225\) MPa and the equivalent Poisson ratio in the linear regime \(\nu =0.49\).
Crack tip blunting. a Geometry. All dimensions are in mm. b Deformed configuration. c Plot of vertical displacement of crack surface nodes against their horizontal position in reference state. XFEM results for comparison are taken from [30]
Computation of J-integral: a Variation with respect to \(\lambda \). b Domain independency
Single edge crack plate under mixed mode loading: a Geometry, material parameters and loading conditions; all dimensions are in mm. b Contours of displacement in vertical direction. c Crack tip trajectory
When the material deforms, the crack surfaces move apart, and the initially sharp crack blunts significantly due to the material nonlinearity. The deformed configuration of the structure is shown in Fig. 6b. The vertical displacement of the crack surface nodes is plotted against their horizontal position in the reference state in Fig. 6c; for comparison, XFEM simulation results are taken from [30]. It is directly evident that the results obtained from our simulations match well with the reported results. Moreover, the simulations using the coarse and fine mesh yield identical values, which shows that the reported results are converged with respect to mesh density. The coarse and fine mesh contain 1200 and 4800 uniform Cartesian elements respectively. It is to be mentioned that the XFEM study [30] was focused on incompressible materials, and the present setup closely approximates the incompressible condition by taking \(\nu =0.49\).
J-integral computation
The J-integral is a crucial parameter in our work because both the crack propagation and crack kinking criteria are entirely based on this single parameter. In order to study the accuracy of the J-integral evaluation, we consider an edge crack specimen under simple extension. A 2 mm \(\times \) 2 mm plate with \(\mu ^s=0.4425\,\hbox {MPa}\) and an equivalent Poisson ratio in the linear regime of \(\nu =0.49\) is taken, and the crack occupies half the width of the specimen. The strain energy function and boundary conditions are the same as those of the previous example. These details are taken from an XFEM study [31], which reports the value of the J-integral for large stretch ratios (\(\lambda \)). \(\lambda \) is defined as the ratio of the deformed length to the original length. We intentionally chose [31] as the reference for our validation because comparing J-integral values at high \(\lambda \) is challenging.
It can be seen from Fig. 7a that the J-integral values computed from our method are in excellent agreement with the XFEM results [31] even at large \(\lambda \). The coarse and fine mesh indicated in Fig. 7a contain 728 and 1600 uniform Cartesian elements respectively. At \(\lambda =2.5\), the difference between J-integral values computed using the coarse and fine mesh is only 1.3%. At the same \(\lambda \), the difference between the value reported in [31] and the present simulation using the fine mesh is as low as 3.4%. This quantitative comparison shows that the J-integrals computed in our work are very accurate even at large \(\lambda \).
The variation of J with respect to the number of element layers chosen around the crack tip as the integration domain is given in Fig. 7b. It can be seen that, after initial variations, the value of J stabilizes beyond 5 layers, with only minute changes thereafter. In all the simulations presented in this work, 5 layers of elements around the crack tip are used to compute the J-integral.
Having simulated a stationary crack, the remaining examples consider complex crack propagation through the structure. The comparison between the present results and the results obtained from the literature demonstrates the accuracy of the method. For all the following examples, the strain energy function is given by Eq. 4.
Single edge cracked plate under mixed-mode loading
In this example, as shown in Fig. 8a, the plate, which is fixed at the bottom edge, is subjected to a shear stress \(\tau =1\) on the top. The initial edge crack length is half the plate width. The Lamé parameters are set such that \(E=30\) MPa and \(\nu =0.25\). The computational domain is discretized with 2736 elements, with the whole area of crack propagation covered by a fine mesh.
In this simulation, upon loading, the crack propagates along a slightly curved path until it reaches the other end of the plate. To provide a detailed comparison, a zoomed view of the crack path is plotted in Fig. 8c, and the result obtained from the present simulation is compared with two other studies: one utilizes a meshless method [53], and the other is based on adaptive FEM [6]. It can be seen that the predicted crack tip trajectory matches very well with the results obtained from the other two studies.
Crack in a drilled plate
To further demonstrate the accuracy of the proposed method in simulating the crack path, the example given in [7] is considered. That study reported the propagation of a crack from an initial notch in a beam with three drilled holes, carried out both experimental and numerical tests, and observed curvilinear crack propagation within the drilled plate. The geometrical configuration, material properties, and loading conditions are given in Fig. 9a. The Lamé parameters are set such that \(E=3\) GPa and \(\nu =0.35\). In this example, the stress/strain fields are influenced by the presence of the holes in the beam, which produces interesting curvilinear crack tip trajectories. Two simulation cases are considered based on the location of the initial notch; these are dictated by the choice of a and b in Fig. 9a, whose values are given in Table 1 for simulation-1 and simulation-2.
Bittencourt's drilled plate problem. a Geometry, b Simulation-1, c Simulation-2. Experimental values for comparison are taken from [7]
As reported in [7], the crack path follows different trajectories based on the choice of a and b, which are described as follows.
Simulation-1
The location of the initial notch is given by \(a=5\) and \(b=1.5\) mm. The crack is initially attracted towards the bottom hole, propagates near this hole, and is then deflected away to end in the middle hole, as shown in Fig. 9b. This is in accordance with the experimental results of [7] and other numerical studies [3, 6, 54]. Comparison with the experimental results shows that the present simulation produces very good results; even the crack deflection near the bottom hole is predicted well, as can be seen directly from Fig. 9b. This is one of the more challenging validation test cases, owing to the complex crack tip trajectory involved. The developed methodology can be considered accurate, as it produces results that match the experimental values very well even for this complex configuration.
Simulation-2
In this example, for which \(a=6\) and \(b=1\) mm, the crack is attracted towards the middle hole and ends directly in it (Fig. 9c). No crack deflections are observed, and for this example as well, the results match the experiment excellently (Fig. 9c).
Four point beam with two notches
In order to test the performance of the present method in simulating multiple cracks in a structure, the four point bending beam with two pre-existing notches, shown in Fig. 10a, is simulated. The beam is supported from below at two points and is loaded at two other points. The material properties are also given in Fig. 10a. The computational domain is discretized with 7400 elements. This example was proposed by Bocca et al. [55], who performed experiments on the structure and also simulated it numerically.
Table 1 Geometric parameters defining notch location for Bittencourt's drilled plate problem shown in Fig. 9a
Four point bending beam with two notches: a Geometry, material parameters and loading conditions (not to scale). All dimensions are in mm, b crack path, c comparison of crack tip trajectories
The crack paths through the FE mesh are given in Fig. 10b. To demonstrate the accuracy of the simulation, the crack paths obtained from the present method are compared with the results reported using a meshless method that incorporates crack tip singular fields as enrichments [56]; the results presented for the finest meshless node distribution are used for the comparison. The comparison of crack paths is plotted in Fig. 10c. It can be seen that, for both crack tips, the tip trajectories obtained from the present simulations match the reported values very well.
Though our method can handle multiple cracks, care must be taken to ensure that the domains used to evaluate the J-integral do not intersect each other. However, this is not a drawback specific to our method; it is a common issue with other available approaches as well.
Crack deflection due to inclusion. a Geometry and loading conditions, b crack path
Plate with a hole. a Geometry (not to scale). All dimensions are in mm. b The configuration at which the crack starts propagating. c An intermediate configuration. d Final configuration
Crack deflection due to inclusion
Crack growth in the presence of an inclusion is studied in this example. Geometry, loading, and boundary conditions are given in Fig. 11a; they are taken from [57]. The configuration consists of a rectangular plate containing an off-centre circular inclusion. The Lamé parameters of the plate are set such that \(E_{plate}=20\) MPa and \(\nu =0.3\). The objective of this study is to check whether the method is capable of accurately predicting the influence of this inclusion on crack propagation, which has already been reported in [52, 57].
The inclusion is characterized by the ratio of the Young's modulus of the plate to that of the inclusion (\(r=E_{plate}/E_{incl.}\)). Two values are considered: \(r=10\), meaning the Young's modulus of the inclusion is 10 times lower than that of the plate, referred to as a "soft" inclusion; and \(r=0.1\), referred to as a "hard" inclusion. The Poisson ratio of the inclusion is assumed to be the same as that of the plate. The whole structural domain is discretized with 3213 elements.
The effect of the inclusion on the crack tip trajectory is shown in Fig. 11b. For the soft inclusion, the crack is attracted towards the side of the inclusion; however, it does not end in it. In the case of the hard inclusion, the crack deflects away from it. These observations are consistent with previously reported results [52, 57].
Nonlinear elastic plate with a hole
The above examples considered crack propagation with little material nonlinearity, as is evident from the fact that the crack remains sharp even after several propagation steps. The following example considers crack propagation involving strong material nonlinearity under large deformation. A small off-centre hole is introduced into the geometric configuration considered in the first example, and the simulation allows the crack to propagate through the material. The Lamé parameters are set to yield \(E=10\) GPa and \(\nu =0.3\); the critical J-integral is \(J_c=50\) kJm\(^{-2}\). The top surface is subjected to a displacement of 0.5 mm. The geometric configuration of this example is presented in Fig. 12a.
As with the linear elastic examples, the loading (here the corresponding Dirichlet boundary condition) is increased very smoothly from zero so that the influence of inertia is negligible. When the material starts deforming, as expected, the crack starts to blunt and the J-integral value increases. When J reaches \(J_c\), the crack starts to propagate; the deformed configuration of the structure at which the crack starts propagating is depicted in Fig. 12b. Due to the presence of the hole, the crack deflects slightly upwards, as can be seen from Fig. 12b, c. From these plots, one can infer that the crack tip remains blunt owing to the material nonlinearity, and that the present method is able to model fracture behavior in such scenarios.
A finite element methodology to model mixed-mode crack propagation through nonlinear elastic materials is proposed in this work. The striking feature of this method is that it allows, with minimal implementation effort, an existing large-scale structural mechanics solver to be turned into a robust tool that handles single and multiple cracks. The method involves two steps: in the first step, the governing equations of the structure are solved using nonlinear FEM with the crack frozen in the structure; in the next step, the FEM solution is used to propagate the crack based on the maximum energy release rate criterion.

Advancing the crack through an FE mesh requires a continual change in the topology of the mesh, which is achieved in this work by a mesh refitting approach. This method, as the name suggests, refits the mesh at each instant of crack advancement in such a way that the crack propagates through an existing edge in the modified mesh. Mesh deformation strategies (as used, for example, in ALE-based methods) usually run into mesh tangling when attempting to handle topology changes in the mesh. This problem is circumvented here by splitting the quadrilateral elements into triangular elements in the crack tip neighborhood, which allows the local mesh connectivity to be modified. This step is crucial to preserve the quality of the mesh throughout the simulation, without which mesh movement methods would fail.

Examples involving single and multiple materials with one or more cracks are reported. The obtained results are compared with experimental data and other available computational methods, and the comparison demonstrates that the present method accurately predicts the fracture behavior in all the examples considered.
Sudhakar Y, Wall WA. A strongly coupled partitioned approach for fluid-structure-fracture interaction. Int J Numer Methods Fluids; Submitted.
Wawrzynek PA, Ingraffea AR. An edge-based data structure for two-dimensional finite element analysis. Eng Comput. 1987;3:13–20.
Miehe C, Gürses E. A robust algorithm for configurational-force-driven brittle crack propagation with R-adaptive mesh alignment. Int J Numer Methods Eng. 2007;72:127–55.
Bouchard PO, Bay F, Chastel Y, Tovena I. Crack propagation modelling using an advanced remeshing technique. Comput Methods Appl Mech Eng. 2000;189:723–42.
Miranda ACO, Meggiolaro MA, Castro JTP, Martha LF, Bittencourt TN. Fatigue life and crack path predictions in generic 2D structural components. Eng Fract Mech. 2003;70:1259–79.
Phongthanapanich S, Dechaumphai P. Adaptive Delaunay triangulation with object-oriented programming for crack propagation analysis. Finite Elem Anal Design. 2004;40:1753–71.
Bittencourt TN, Wawrzynek PA, Ingraffea AR. Quasi-automatic simulation of crack propagation for 2D LEFM problems. Eng Fract Mech. 1996;55:321–34.
Camacho GT, Ortiz M. Computational modeling of impact damage in brittle materials. Int J Solids Struct. 1996;33:2899–938.
Ortiz M, Pandolfi A. Finite-deformation irreversible cohesive elements for three-dimensional crack-propagation analysis. Int J Numer Methods Eng. 1999;44:1267–82.
de Borst R. Numerical aspects of cohesive-zone models. Eng Fract Mech. 2003;70:1743–57.
Turon A, Dávila CG, Camanho PP, Costa J. An engineering solution for mesh size effects in the simulation of delamination using cohesive zone models. Eng Fract Mech. 2007;74:1665–82.
Park K, Paulino GH. Cohesive Zone Models: a critical review of traction-separation relationships across fracture surfaces. Appl Mech Rev. 2013;64:060802.
Allix O, Ladevèze P, Gilletta D, Ohayon R. A damage prediction method for composite structures. Int J Numer Methods Eng. 1989;27:271–83.
Genet M, Marcin L, Ladevèze P. On structural computations until fracture based on an anisotropic and unilateral damage theory. Int J Damage Mech. 2014;23:483–506.
Celes W, Paulino GH, Espinha R. A compact adjacency-based topological data structure for finite element mesh representation. Int J Numer Methods Eng. 2005;64:1529–56.
Moës N, Dolbow J, Belytschko T. A finite element method for crack growth without remeshing. Int J Numer Methods Eng. 1999;46:131–50.
Sukumar N, Moës N, Moran B, Belytschko T. Extended finite element method for three-dimensional crack modelling. Int J Numer Methods Eng. 2000;48:1549–70.
Réthoré J, Gravouil A, Combescure A. An energy-conserving scheme for dynamic crack growth using the eXtended finite element method. Int J Numer Methods Eng. 2005;63:631–59.
Xiao QZ, Karihaloo BL. Improving the accuracy of XFEM crack tip fields using higher order quadrature and statistically admissible stress recovery. Int J Numer Methods Eng. 2006;66:1378–410.
Gupta V, Kim DJ, Duarte CA. Analysis and improvements of global-local enrichments for the generalized finite element method. Comput Methods Appl Mech Eng. 2012;245–256:47–62.
Gupta V, Duarte CA, Babuška I, Banerjee U. A stable and optimally convergent generalized FEM (SGFEM) for linear elastic fracture mechanics. Comput Methods Appl Mech Eng. 2013;266:23–39.
Nagarajan A, Mukherjee S. A mapping method for numerical evaluation of two-dimensional integrals with \(1/r\) singularity. Comput Mech. 1993;12:19–26.
Béchet E, Minnebo H, Moës N, Burgardt B. Improved implementation and robustness study of the X-FEM for stress analysis around cracks. Int J Numer Methods Eng. 2005;64:1033–56.
Laborde P, Pommier J, Renard Y, Salaün M. High-order extended finite element method for cracked domains. Int J Numer Methods Eng. 2005;64:354–81.
Park K, Pereira JP, Duarte CA, Paulino GH. Integration of singular enrichment functions in the generalized/extended finite element method for three-dimensional problems. Int J Numer Methods Eng. 2009;78:1220–57.
Mousavi SE, Sukumar N. Generalized Duffy transformation for integrating vertex singularities. Comput Mech. 2010;45:127–40.
Mousavi SE, Sukumar N. Generalized Gaussian quadrature rules for discontinuities and crack singularities in the extended finite element method. Comput Methods Appl Mech Eng. 2010;199:3237–49.
Minnebo H. Three-dimensional integration strategies of singular functions introduced by the XFEM in the LEFM. Int J Numer Methods Eng. 2012;92:1117–38.
Dolbow J, Devan A. Enrichment of enhanced assumed strain approximations for representing strong discontinuities: addressing volumetric incompressibility and the discontinuous patch test. Int J Numer Methods Eng. 2004;59:47–67.
Legrain G, Moës N, Verron E. Stress analysis around crack tips in finite strain problems using the eXtended finite element method. Int J Numer Methods Eng. 2005;63:290–314.
Rashetnia R, Mohammadi S. Finite strain fracture analysis using the extended finite element method with new set of enrichment functions. Int J Numer Methods Eng. 2015;102:1316–51.
Donea J, Huerta A, Ponthot JP, Rodrìguez-Ferran A. Arbitrary Lagrangian-Eulerian methods. In: Stein E, Borst R, Hughes TJR, editors. Encyclopedia of computational mechanics, vol. 1. Hoboken: Wiley; 2004.
Koh HM, Haber RB. A mixed Eulerian-Lagrangian model for the analysis of dynamic fracture. Illinois: Dept. of Civil Engineering, University of Illinois at Urbana-Champaign, Civil Engineering Studies, Structural Research Series; 1986. p. 524.
Koh HM, Lee HS, Haber RB. Dynamic crack propagation analysis using Eulerian-Lagrangian kinematic descriptions. Comput Mech. 1988;3:141–55.
Abdelgalil AI. Modeling of dynamic fracture problems using ALE finite element formulation. Vancouver: The University of British Columbia; 2002. http://hdl.handle.net/2429/13320.
Amini MR, Shahani AR. Finite element simulation of dynamic crack propagation process using an arbitrary Lagrangian Eulerian formulation. Fatigue Fract Eng Mater Struct. 2013;36:533–47.
Bruno D, Greco F, Lonetti P. A fracture-ALE formulation to predict dynamic debonding in FRP strengthened concrete beams. Compos Part B. 2013;46:46–60.
Ponthot JP, Belytschko T. Arbitrary Lagrangian-Eulerian formulation for element-free Galerkin method. Comput Methods Appl Mech Eng. 1998;152:19–46.
Rashid MM. The arbitrary local mesh replacement method: an alternative to remeshing for crack propagation analysis. Comput Methods Appl Mech Eng. 1998;154:133–50.
Tabiei A, Wu J. Development of the DYNA3D simulation code with automated fracture procedure for brick elements. Int J Numer Methods Eng. 2003;57:1979–2006.
Browne PA, Budd CJ, Piccolo C, Cullen M. Fast three dimensional r-adaptive mesh redistribution. J Comput Phys. 2014;275:174–96.
Budd CJ, Russell RD, Walsh E. The geometry of r-adaptive meshes generated using optimal transport methods. J Comput Phys. 2015;282:113–37.
Belytschko T, Liu WK, Moran B. Nonlinear finite elements for continua and structures. Hoboken: Wiley; 2001.
Bonet J, Wood RD. Nonlinear continuum mechanics for finite element analysis. Cambridge: Cambridge University Press; 1997.
Rice JR. A path independent integral and the approximate analysis of strain concentration by notches and cracks. J Appl Mech. 1968;35:379–86.
Moran B, Shih CF. Crack tip and associated domain integrals from momentum and energy balance. Eng Fract Mech. 1987;27:615–42.
Shivakumar KN, Raju IS. An equivalent domain integral method for three-dimensional mixed-mode fracture problems. Eng Fract Mech. 1992;42:935–59.
Ma L, Korsunsky AM. On the use of vector \({J}\)-integral in crack growth criteria for brittle solids. Int J Fract. 2005;133:L39–46.
Erdogan F, Sih GC. On the crack extension in plates under plane loading and transverse shear. J Basic Eng. 1963;85:519–25.
Sih GC. Strain-energy-density factor applied to mixed-mode fracture problems. Int J Fract. 1974;10:305–21.
Hussain MA, Pu SL, Underwood JH. Strain energy release rate for a crack under combined mode I and mode II. Fract Anal ASTM STP. 1974;560:2–28.
Bouchard PO, Bay F, Chastel Y. Numerical modelling of crack propagation: automatic remeshing and comparison of different criteria. Comput Methods Appl Mech Eng. 2003;192:3887–908.
Rao BN, Rahman S. An efficient meshless method for fracture analysis of cracks. Comput Mech. 2000;26:398–408.
Areias P, Dias-da-Costa D, Alfaiate J, Júlio E. Arbitrary bi-dimensional finite strain cohesive crack propagation. Comput Mech. 2009;45:61–75.
Bocca P, Carpinteri A, Valente S. Mixed mode fracture of concrete. Int J Solids Struct. 1991;27:1139–53.
Rabczuk T, Zi G. A meshfree method based on the local partition of unity for cohesive cracks. Comput Mech. 2006;39:743–60.
Natarajan S. Enriched finite element methods: advances & applications. Cardiff: Cardiff University; 2011.
YS developed the idea, conducted numerical experiments and wrote draft. WAW fine-tuned the research idea, suggested numerical experiments and revised the paper. Both authors read and approved the final manuscript.
Institute for Computational Mechanics, Technical University of Munich, Boltzmannstr. 15, 85747, Garching, Germany
Y. Sudhakar
& Wolfgang A. Wall
Correspondence to Y. Sudhakar.
Sudhakar, Y., Wall, W.A. Mesh refitting approach: a simple method to model mixed-mode crack propagation in nonlinear elastic solids. Adv. Model. and Simul. in Eng. Sci. 4, 2 (2017) doi:10.1186/s40323-017-0088-x
Accepted: 05 June 2017
Mesh refitting method
Crack propagation in nonlinear materials
J-integral
Nodal releasing technique
Fracture of nonlinear solids | CommonCrawl |
# 1. Understanding Hash Functions
Hash functions are an essential part of hash tables. They take an input (or key) and return a fixed-size value, known as a hash code or hash value. The purpose of a hash function is to quickly and efficiently map keys to their corresponding positions in a hash table.
Hash functions have several important characteristics. First, they should be deterministic, meaning that for a given input, the hash function should always produce the same output. This is important because it ensures that the same key will always map to the same position in the hash table.
Second, hash functions should be fast to compute. Since hash functions are used to map keys to positions in a hash table, they need to be efficient in order to provide fast access to the stored data.
Third, hash functions should produce a uniform distribution of hash values. This means that the hash values should be evenly distributed across the range of possible hash values. A uniform distribution helps to minimize collisions, which occur when two different keys produce the same hash value.
There are several types of hash functions that are commonly used. The most basic type is the division method, which involves dividing the key by the size of the hash table and taking the remainder as the hash value. This method is simple and easy to implement, but it can lead to poor distribution of hash values if the keys are not evenly distributed.
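A minimal sketch of the division method (the function name is ours):

```python
# Division-method hash: h(k) = k mod m.
def division_hash(key, table_size):
    return key % table_size

# With table_size = 10, every key ending in the same digit collides,
# which illustrates the poor distribution mentioned above.
print(division_hash(1234, 10))  # 4
print(division_hash(5674, 10))  # 4 -- same bucket
```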
Another type of hash function is the multiplication method, which involves multiplying the key by a constant and then extracting a portion of the resulting product as the hash value. This method can produce a more uniform distribution of hash values, but it requires careful selection of the constant.
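A sketch of the multiplication method (the constant \(A = (\sqrt{5}-1)/2\) is Knuth's commonly recommended choice; any value between 0 and 1 works, and the function name is ours):

```python
import math

# Multiplication-method hash: h(k) = floor(m * frac(k * A)).
def multiplication_hash(key, table_size):
    A = (math.sqrt(5) - 1) / 2       # ~0.6180339887
    frac = (key * A) % 1.0           # fractional part of key * A
    return int(table_size * frac)    # scale into [0, table_size)

print(multiplication_hash(1234, 10))  # 6
```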
A third type of hash function is the folding method, which involves dividing the key into equal-sized chunks and then combining them in some way to produce the hash value. This method can be useful when the keys are long and it is difficult to find a suitable hash function.
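A sketch of the folding method for integer keys (splitting the decimal digits into two-digit chunks is one of several possible splits; the function name is ours):

```python
# Folding-method hash: split the key's digits into fixed-size chunks,
# sum the chunks, then reduce modulo the table size.
def folding_hash(key, table_size, chunk_digits=2):
    digits = str(key)
    total = 0
    for i in range(0, len(digits), chunk_digits):
        total += int(digits[i:i + chunk_digits])
    return total % table_size

# 123456 -> chunks 12, 34, 56 -> 12 + 34 + 56 = 102 -> 102 % 10 = 2
print(folding_hash(123456, 10))  # 2
```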
Designing an effective hash function requires careful consideration of the characteristics of the keys and the hash table. The goal is to minimize collisions and ensure a uniform distribution of hash values.
One approach to designing a hash function is to consider the characteristics of the keys. For example, if the keys are strings, a good hash function might take into account the positions of the characters in the string and their ASCII values. This can help to ensure that similar strings produce different hash values.
Another approach is to consider the size of the hash table. The hash function should be designed to produce hash values that are evenly distributed across the range of possible hash values. This can help to minimize collisions and ensure that the hash table is used efficiently.
For example, let's say we have a hash table with a size of 10. We want to design a hash function for strings that takes into account the positions of the characters in the string and their ASCII values.
One possible hash function is to sum the ASCII values of the characters in the string and then take the remainder when divided by 10. This would ensure that the hash values are evenly distributed across the range of possible hash values.
```python
def hash_function(key):
    hash_value = 0
    for ch in key:              # sum the ASCII values of the characters
        hash_value += ord(ch)
    return hash_value % 10      # reduce to an index in a table of size 10
```
This hash function produces hash values that fall within the range of possible indices. Note, however, that a plain sum ignores the positions of the characters, so anagrams produce the same hash value; weighting each character's value by its position would distinguish them.
## Exercise
Design a hash function for integers that takes into account the size of the hash table. The hash function should produce hash values that are evenly distributed across the range of possible hash values.
### Solution
```python
def hash_function(key, table_size):
    return key % table_size
```
# 1.1. Definition and Characteristics of a Hash Function
A hash function is a function that takes an input (or key) and returns a fixed-size value, known as a hash code or hash value. The purpose of a hash function is to quickly and efficiently map keys to their corresponding positions in a hash table.
Hash functions have several important characteristics.
First, they should be deterministic, meaning that for a given input, the hash function should always produce the same output. This is important because it ensures that the same key will always map to the same position in the hash table.
Second, hash functions should be fast to compute. Since hash functions are used to map keys to positions in a hash table, they need to be efficient in order to provide fast access to the stored data.
Third, hash functions should produce a uniform distribution of hash values. This means that the hash values should be evenly distributed across the range of possible hash values. A uniform distribution helps to minimize collisions, which occur when two different keys produce the same hash value.
For example, let's consider a hash function that maps strings to hash values. We can use the ASCII values of the characters in the string to compute the hash value.
```python
def hash_function(key):
    hash_value = 0
    for ch in key:              # sum the ASCII values of the characters
        hash_value += ord(ch)
    return hash_value
```
In this example, the hash function takes a string as input and computes the hash value by summing the ASCII values of its characters. Note, however, that because character positions are ignored, anagrams such as "listen" and "silent" produce the same hash value; incorporating positions into the computation helps to avoid such collisions.
## Exercise
Consider the following hash function:
```python
def hash_function(key):
    hash_value = 0
    for ch in key:
        hash_value += ord(ch)
    return hash_value
```
For each of the following inputs, compute the hash value using the given hash function:
1. "apple"
2. "banana"
3. "cherry"
### Solution
1. The hash value for "apple" is 530.
2. The hash value for "banana" is 609.
3. The hash value for "cherry" is 653.
# 1.2. Types of Hash Functions
There are several types of hash functions that can be used depending on the specific requirements of the application.
One common type is a division hash function, which uses the modulo operator to map keys to hash values. This type of hash function is simple and easy to implement, but it can lead to poor distribution of hash values if the keys are not evenly distributed.
Another type is a multiplication hash function, which multiplies the key by a constant and extracts a portion of the resulting product as the hash value. This type of hash function can provide better distribution of hash values, but it requires careful selection of the constant to ensure optimal performance.
A third type is a polynomial hash function, which treats the key as a polynomial and evaluates it at a specific point to obtain the hash value. This type of hash function can provide good distribution of hash values, but it can be computationally expensive.
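A sketch of the polynomial approach for string keys (the base p = 31 is a common small-prime choice, not mandated by the text; the function name is ours): the characters act as coefficients, and Horner's rule evaluates the polynomial modulo the table size.

```python
# Polynomial string hash via Horner's rule; reducing modulo the table
# size at each step keeps the intermediate value small.
def polynomial_hash(key, table_size, p=31):
    hash_value = 0
    for ch in key:
        hash_value = (hash_value * p + ord(ch)) % table_size
    return hash_value
```

Unlike a plain sum of character codes, this evaluation is sensitive to character order, so "ab" and "ba" hash differently.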
Finally, a cryptographic hash function is a type of hash function that is designed to be secure against various cryptographic attacks. These hash functions are commonly used in applications such as password storage and digital signatures.
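Cryptographic hash functions normally come from a vetted library rather than being written by hand; the sketch below uses Python's standard hashlib (SHA-256) to derive a bucket index (the function name is ours).

```python
import hashlib

def crypto_bucket(key, table_size):
    # SHA-256 yields a 32-byte digest; interpret it as an integer and
    # reduce it modulo the table size to obtain a bucket index.
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % table_size
```

For an ordinary in-memory hash table this is slower than a non-cryptographic hash and usually unnecessary; it matters mainly when an adversary may choose the keys.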
The choice of hash function depends on the specific requirements of the application, including the size of the hash table, the distribution of the keys, and the desired level of security. It is important to carefully select and evaluate the hash function to ensure optimal performance and security.
# 1.3. Designing an Effective Hash Function
Designing an effective hash function is crucial for the performance and reliability of a hash table. A good hash function should distribute the keys evenly across the hash table, minimizing collisions and ensuring efficient access to the stored data.
There are several factors to consider when designing a hash function:
1. Uniformity: The hash function should produce hash values that are uniformly distributed across the range of possible hash values. This helps to minimize collisions and ensures that each bucket in the hash table is equally likely to be accessed.
2. Determinism: The hash function should always produce the same hash value for the same input. This ensures that the keys will be consistently mapped to the same bucket in the hash table.
3. Efficiency: The hash function should be computationally efficient, as it will be called frequently during the insertion, retrieval, and deletion operations on the hash table. A slow hash function can significantly impact the performance of the hash table.
4. Sensitivity to input: The hash function should be sensitive to changes in the input, so that even small differences in the keys will result in different hash values. This helps to distribute the keys evenly across the hash table and reduces the likelihood of collisions.
5. Resistance to collisions: While collisions are inevitable in a hash table, a good hash function should minimize the likelihood of collisions by producing hash values that are as unique as possible. This can be achieved by incorporating various techniques, such as prime number multiplication, bitwise operations, or XORing.
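One concrete way to combine prime multiplication with XOR, as mentioned in point 5, is the FNV-1a scheme (the constants below are the standard 32-bit FNV offset basis and prime; the function name is ours):

```python
def fnv1a_32(data: bytes) -> int:
    h = 0x811C9DC5                     # FNV-1a 32-bit offset basis
    for byte in data:
        h ^= byte                      # mix in one byte with XOR
        h = (h * 0x01000193) % 2**32   # multiply by the FNV prime, 32-bit wrap
    return h
```

Even single-byte differences in the input flip many bits of the result, which is exactly the sensitivity to input described above.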
Designing an effective hash function often requires a combination of mathematical analysis, experimentation, and domain-specific knowledge. It is important to test the hash function with a variety of input data to ensure that it performs well under different scenarios.
# 2. Hash Tables
Hash tables are a fundamental data structure used in computer science to store and retrieve data efficiently. They are also known as hash maps or dictionaries in other programming languages.
A hash table is essentially an array of buckets, where each bucket can store one or more key-value pairs. The key is used to calculate the hash value, which is an index in the array where the key-value pair will be stored. The hash value is calculated using a hash function.
The hash function takes the key as input and produces a hash code, which is an integer value. This hash code is then converted to an index in the array using a process called hashing. The index is calculated by taking the modulus of the hash code with the size of the array, ensuring that the index falls within the range of the array.
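For instance, the hash-code-to-index step described above is just a modulus (the numbers here are illustrative):

```python
num_buckets = 8
hash_code = 5381                 # whatever value the hash function returned
index = hash_code % num_buckets  # always falls in 0..num_buckets-1
print(index)  # 5381 % 8 = 5
```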
When inserting a key-value pair into a hash table, the hash function is used to calculate the index where the pair will be stored. If there is already a key-value pair stored at that index, a collision occurs. There are various collision resolution techniques that can be used to handle collisions, such as separate chaining or open addressing.
To retrieve a value from a hash table, the hash function is again used to calculate the index where the key-value pair is stored. The value can then be retrieved from that index. If there are multiple key-value pairs stored at that index, the collision resolution technique is used to determine which pair corresponds to the given key.
Hash tables provide constant-time average-case performance for insertion, retrieval, and deletion operations, making them highly efficient for storing and retrieving data. However, the performance of a hash table depends on the quality of the hash function and the distribution of the keys.
# 2.1. What is a Hash Table?
A hash table is a data structure that allows for efficient insertion, retrieval, and deletion of key-value pairs. It is based on the concept of hashing, which is the process of mapping a key to a unique index in an array.
In a hash table, the array is divided into multiple buckets, and each bucket can store one or more key-value pairs. The number of buckets is typically determined by the size of the hash table, which is specified when the hash table is created.
The key is used to calculate the hash value, which is an index in the array where the key-value pair will be stored. The hash value is calculated using a hash function, which takes the key as input and produces a hash code. The hash code is then converted to an index in the array using a process called hashing.
# 2.2. Anatomy of a Hash Table
A hash table consists of two main components: an array and a hash function. The array is divided into multiple buckets, and each bucket can store one or more key-value pairs. The hash function is used to calculate the index in the array where a key-value pair will be stored.
The size of the array is typically determined by the size of the hash table, which is specified when the hash table is created. The number of buckets in the array should be chosen carefully to balance the trade-off between memory usage and performance. Too few buckets can result in a high collision rate, while too many buckets can waste memory.
The hash function takes the key as input and produces a hash code, which is an integer value. The hash code is then converted to an index in the array using a process called hashing. This is done by taking the modulus of the hash code with the size of the array, ensuring that the index falls within the range of the array.
# 2.3. Implementing a Hash Table
A hash table can be implemented with two components: an array of buckets and a hash function. To insert a key-value pair, the hash function maps the key to a hash code, the hash code is reduced modulo the array size to a bucket index, and the pair is stored at that index. If the index is already occupied, a collision resolution technique such as separate chaining or open addressing determines where the pair goes.

Retrieval repeats the same computation: hash the key, reduce the hash code to an index, and search that location (and, if necessary, the collision structure at that location) for the matching key.
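As a concrete sketch of these steps, here is a minimal Python hash table using separate chaining, with plain Python lists standing in for the per-bucket linked lists (an illustration, not a production implementation):

```python
class HashTable:
    """Minimal hash table with separate chaining; each bucket is a Python list."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # Reduce the key's hash code to a bucket index with the modulus.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or a collision): append to the chain

    def get(self, key, default=None):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return default                   # key absent from its bucket's chain

    def remove(self, key):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]
                return True
        return False
```

Usage is as described in the text: `put` hashes the key and appends to (or overwrites within) the chain, and `get` hashes the key and searches only that one chain.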
# 3. Collision Resolution Techniques
Collision resolution is the process of handling collisions that occur when two or more keys hash to the same index in a hash table. There are several collision resolution techniques that can be used to handle collisions, each with its own advantages and disadvantages.
One common collision resolution technique is separate chaining. In this technique, each bucket in the hash table is associated with a linked list. When a collision occurs, the key-value pair is added to the linked list at the corresponding bucket. This allows multiple key-value pairs to be stored at the same index in the hash table.
Another collision resolution technique is open addressing. In this technique, all key-value pairs are stored directly in the hash table, rather than in separate linked lists. When a collision occurs, the key-value pair is stored in the next available slot in the hash table. This requires keeping track of which slots are occupied and which slots are available.
There are several variations of open addressing, including linear probing, quadratic probing, and double hashing. In linear probing, the next available slot is found by incrementing the index until an empty slot is found. In quadratic probing, the index is incremented by a quadratic function of the number of collisions. In double hashing, a second hash function is used to calculate the step size for incrementing the index.
Each collision resolution technique has its own trade-offs in terms of performance and memory usage. Separate chaining can handle a large number of collisions, but it requires additional memory to store the linked lists. Open addressing can be more memory-efficient, but it can lead to clustering and degraded performance when the hash table becomes full.
# 3.1. Types of Collisions
Collisions can occur in a hash table when two or more keys hash to the same index. There are several types of collisions that can occur, each with its own characteristics and implications for collision resolution.
The most common type of collision is called a primary collision. This occurs when two keys hash to the same index in the hash table. Primary collisions can be resolved using collision resolution techniques such as separate chaining or open addressing.
Another type is a secondary collision, which occurs during probing in open addressing: a slot examined along a key's probe sequence is already occupied, possibly by a pair that originally hashed to a different index. Secondary collisions lengthen probe sequences and slow down searches even for keys whose initial indices did not clash.
In addition to primary and secondary collisions, related effects arise in certain scenarios. For example, clustering (sometimes described as a cluster collision) occurs when multiple keys come to occupy consecutive indices in the hash table. This happens in open addressing as runs of occupied slots grow, and it becomes increasingly severe as the table fills up.
Understanding the different types of collisions is important for designing an effective collision resolution strategy. Different techniques may be more suitable for different types of collisions, depending on the specific requirements of the application.
# 3.2. Open Addressing
Open addressing is a collision resolution technique that allows all key-value pairs to be stored directly in the hash table, rather than in separate linked lists. When a collision occurs, the key-value pair is stored in the next available slot in the hash table.
There are several variations of open addressing, including linear probing, quadratic probing, and double hashing. In linear probing, the next available slot is found by incrementing the index until an empty slot is found. In quadratic probing, the index is incremented by a quadratic function of the number of collisions. In double hashing, a second hash function is used to calculate the step size for incrementing the index.
Open addressing can be more memory-efficient than separate chaining, as it does not require additional memory to store the linked lists. However, it can lead to clustering and degraded performance when the hash table becomes full. Clustering occurs when multiple keys hash to consecutive indices in the hash table, resulting in longer probe sequences and increased search times.
To retrieve a value from a hash table using open addressing, the hash function is used to calculate the initial index where the key-value pair is stored. If the key-value pair is not found at that index, the probing sequence is followed to search for the key-value pair in the hash table.
# 3.2.1. Linear Probing
Linear probing is a variation of open addressing in which, when a collision occurs, the index is incremented by a fixed step size, wrapping around the end of the table, until an empty slot is found.

The step size is typically 1, which produces a linear sequence of probed indices and gives the technique its name. A different fixed step can be used, but it should be relatively prime to the table size so that the probe sequence can reach every slot.
To retrieve a value from a hash table using linear probing, the hash function is used to calculate the initial index where the key-value pair is stored. If the key-value pair is not found at that index, the probing sequence is followed by incrementing the index by the step size until the key-value pair is found or an empty slot is encountered.
Linear probing can be more memory-efficient than separate chaining, as it does not require additional memory to store the linked lists. However, it can lead to clustering and degraded performance when the hash table becomes full. Clustering occurs when multiple keys hash to consecutive indices in the hash table, resulting in longer probe sequences and increased search times.
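A sketch of linear-probing insertion and lookup over a fixed-size slot array (assuming the table is never allowed to fill completely, and ignoring deletion, which in practice needs tombstone markers):

```python
def lp_insert(slots, key, value):
    """Insert with linear probing; `slots` holds (key, value) tuples or None."""
    m = len(slots)
    i = hash(key) % m
    for _ in range(m):
        if slots[i] is None or slots[i][0] == key:
            slots[i] = (key, value)      # empty slot found, or key overwritten
            return
        i = (i + 1) % m                  # step size 1, wrapping around the end
    raise RuntimeError("table is full")

def lp_get(slots, key):
    """Follow the same probe sequence; an empty slot means the key is absent."""
    m = len(slots)
    i = hash(key) % m
    for _ in range(m):
        if slots[i] is None:
            return None
        if slots[i][0] == key:
            return slots[i][1]
        i = (i + 1) % m
    return None
```

The lookup must follow exactly the same probe sequence as the insertion, stopping at the first empty slot, which is why naive deletion (setting a slot back to `None`) would break searches for keys stored further along the sequence.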
# 3.2.2. Quadratic Probing
Quadratic probing is a variation of open addressing in which the offset from the initial index grows quadratically with the number of probes. In a common form, the i-th probe examines index (h + i^2) mod m, where h is the initial hash index and m is the table size.

Because the offsets grow quadratically rather than by a fixed amount, colliding keys are spread out across the table instead of piling up in one run of slots, which reduces the primary clustering produced by linear probing. Note that the probe sequence is only guaranteed to find an empty slot under certain conditions, such as a prime table size with a load factor below one half.

To retrieve a value from a hash table using quadratic probing, the hash function gives the initial index; if the key-value pair is not found there, the same quadratic probe sequence is followed until the pair is found or an empty slot is encountered.
Quadratic probing can be more memory-efficient than separate chaining, as it does not require additional memory to store the linked lists. It can also help to reduce clustering and improve performance compared to linear probing. However, it can still lead to clustering and degraded performance when the hash table becomes full.
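The probe sequence can be sketched as a generator of candidate indices, using the (h + i^2) mod m variant described above (one common form; other variants add linear and quadratic terms with constants):

```python
def quadratic_probes(hash_code, table_size):
    """Yield the probe sequence (h + i*i) % m for i = 0, 1, 2, ..."""
    for i in range(table_size):
        yield (hash_code + i * i) % table_size

# With table size 7 and hash code 3, the first probes are 3, 4, 0, 5:
# offsets 0, 1, 4, 9 are added to 3 and reduced mod 7.
print(list(quadratic_probes(3, 7)))
```

An insertion routine would consume these indices in order, stopping at the first empty slot, just as in linear probing.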
# 3.2.3. Double Hashing
Double hashing is a variation of open addressing where a second hash function is used to calculate the step size for incrementing the index. When a collision occurs, the index is incremented by the step size calculated using the second hash function.
The second hash function takes the key as input and produces a hash code, which is reduced to a step size, typically with a modulus. Care must be taken that the step size is never zero, and ideally that it is relatively prime to the table size, so that the probe sequence can reach every slot.
To retrieve a value from a hash table using double hashing, the hash function is used to calculate the initial index where the key-value pair is stored. If the key-value pair is not found at that index, the probing sequence is followed by incrementing the index by the step size calculated using the second hash function until the key-value pair is found or an empty slot is encountered.
Double hashing can be more memory-efficient than separate chaining, as it does not require additional memory to store the linked lists. It can also help to reduce clustering and improve performance compared to linear probing or quadratic probing. However, it can still lead to clustering and degraded performance when the hash table becomes full.
# 3.3. Separate Chaining
Separate chaining is a collision resolution technique that allows multiple key-value pairs to be stored at the same index in the hash table. In this technique, each bucket in the hash table is associated with a linked list, and collisions are resolved by adding the key-value pair to the linked list at the corresponding bucket.
When a collision occurs, the key-value pair is added to the linked list at the corresponding bucket. The order of insertion is typically preserved, allowing for efficient retrieval and deletion of key-value pairs.
To retrieve a value from a hash table using separate chaining, the hash function is used to calculate the index where the key-value pair is stored. The linked list at that index is then searched for the key-value pair. If the key-value pair is not found in the linked list, it means that the key does not exist in the hash table.
Separate chaining can handle a large number of collisions, as it allows multiple key-value pairs to be stored at the same index. However, it requires additional memory for the linked-list nodes, and it can lead to increased search times when the linked lists become long.
# 3.3.1. Linked Lists
Linked lists are a commonly used data structure in computer science to store and retrieve data efficiently. They consist of a sequence of nodes, where each node contains a value and a reference to the next node in the sequence.
In the context of separate chaining, linked lists are used to store multiple key-value pairs at the same index in the hash table. Each node in the linked list contains a key-value pair, as well as a reference to the next node in the list.
When a collision occurs, the key-value pair is added to the linked list at the corresponding bucket in the hash table. The order of insertion is typically preserved, allowing for efficient retrieval and deletion of key-value pairs.
To retrieve a value from a hash table using separate chaining and linked lists, the hash function is used to calculate the index where the key-value pair is stored. The linked list at that index is then searched for the key-value pair. If the key-value pair is not found in the linked list, it means that the key does not exist in the hash table.
Linked lists support constant-time insertion at the head of the list, but searching and deleting require traversing the list, so those operations take time proportional to the chain's length. This is why separate chaining stays fast only while the chains remain short. Linked lists also require additional memory to store the references to the next nodes.
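An explicit bucket chain can be sketched with a small node class (a sketch; real implementations vary in whether they overwrite duplicates or insert unconditionally):

```python
class Node:
    """A singly linked node holding one key-value pair."""
    def __init__(self, key, value, next_node=None):
        self.key = key
        self.value = value
        self.next = next_node

def chain_insert(head, key, value):
    """Insert or overwrite in the chain; returns the (possibly new) head."""
    node = head
    while node is not None:          # overwrite if the key already exists
        if node.key == key:
            node.value = value
            return head
        node = node.next
    return Node(key, value, head)    # otherwise prepend in O(1)

def chain_find(head, key):
    """Search the chain; linear in the length of the chain."""
    node = head
    while node is not None:
        if node.key == key:
            return node.value
        node = node.next
    return None
```

The hash table would keep one such `head` reference per bucket, calling `chain_insert` and `chain_find` after computing the bucket index from the key's hash code.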
# 3.3.2. Binary Search Trees
Binary search trees (BSTs) are a commonly used data structure in computer science to store and retrieve data efficiently. They consist of a binary tree, where each node contains a value and references to its left and right child nodes.
In the context of separate chaining, binary search trees are used to store multiple key-value pairs at the same index in the hash table. Each node in the binary search tree contains a key-value pair, as well as references to its left and right child nodes.
When a collision occurs, the key-value pair is added to the binary search tree at the corresponding bucket, positioned according to the ordering of the keys rather than the order of insertion. Keeping the pairs ordered by key is what allows the tree to be searched efficiently.
To retrieve a value from a hash table using separate chaining and binary search trees, the hash function is used to calculate the index where the key-value pair is stored. The binary search tree at that index is then searched for the key-value pair. If the key-value pair is not found in the binary search tree, it means that the key does not exist in the hash table.
Binary search trees provide logarithmic-time average-case performance for insertion, retrieval, and deletion, provided the trees stay reasonably balanced (a degenerate, list-like tree degrades to linear time). However, they require additional memory to store the references to the left and right child nodes.
# 3.3.3. Comparison of Collision Resolution Techniques
There are several collision resolution techniques that can be used to handle collisions in a hash table, each with its own advantages and disadvantages. The choice of collision resolution technique depends on the specific requirements of the application, such as the expected number of collisions and the desired performance characteristics.
Separate chaining is a collision resolution technique that allows multiple key-value pairs to be stored at the same index in the hash table. It can handle a large number of collisions and provides constant-time average-case performance for insertion, retrieval, and deletion operations. However, it can lead to increased search times when the linked lists become long.
Open addressing is a collision resolution technique that allows all key-value pairs to be stored directly in the hash table. It can be more memory-efficient than separate chaining, as it does not require additional memory to store the linked lists. However, it can lead to clustering and degraded performance when the hash table becomes full.
Linear probing is a variation of open addressing where the next available slot is found by incrementing the index until an empty slot is found. It can be more memory-efficient than separate chaining, but it can lead to clustering and degraded performance when the hash table becomes full.
Quadratic probing is a variation of open addressing where the index is incremented by a quadratic function of the number of collisions. It can help to reduce clustering and improve performance compared to linear probing, but it can still lead to clustering and degraded performance when the hash table becomes full.
Double hashing is a variation of open addressing where a second hash function is used to calculate the step size for incrementing the index. It can be more memory-efficient than separate chaining and can help to reduce clustering and improve performance. However, it can still lead to clustering and degraded performance when the hash table becomes full.
The choice of collision resolution technique depends on the specific requirements of the application, such as the expected number of collisions and the desired performance characteristics. It is important to consider the trade-offs between memory usage, performance, and ease of implementation when choosing a collision resolution technique.
# 4. Understanding Load Factor
The load factor is a measure of how full a hash table is, and it can have a significant impact on the performance of a hash table. The load factor is calculated by dividing the number of key-value pairs in the hash table by the number of buckets in the hash table.
A high load factor indicates that the hash table is nearly full, while a low load factor indicates that the hash table is relatively empty. The load factor can be used to determine when to resize the hash table to maintain a balance between memory usage and performance.
The load factor affects the performance of a hash table in several ways. A high load factor can lead to an increased number of collisions, as more key-value pairs are stored in each bucket. This can result in longer probe sequences and increased search times.
A low load factor means the table has many more buckets than key-value pairs. The main cost is wasted memory; lookups stay fast, but the unused buckets still have to be allocated, and a very sparse table can have poorer cache behavior when it is traversed.
The optimal load factor for a hash table depends on the specific requirements of the application. A load factor of 0.7 is often used as a rule of thumb, as it provides a good balance between memory usage and performance. However, the optimal load factor may vary depending on factors such as the expected number of key-value pairs and the desired performance characteristics.
# 4.1. Definition and Importance of Load Factor
The load factor is a measure of how full a hash table is, and it can have a significant impact on the performance of a hash table. It is calculated by dividing the number of key-value pairs in the hash table by the number of buckets in the hash table.
Maintaining an optimal load factor is important for ensuring the performance and reliability of a hash table. A load factor of 0.7 is often used as a rule of thumb, as it provides a good balance between memory usage and performance. However, the optimal load factor may vary depending on factors such as the expected number of key-value pairs and the desired performance characteristics.
# 4.2. Calculating Load Factor
The load factor of a hash table is calculated by dividing the number of key-value pairs in the hash table by the number of buckets in the hash table. It is a measure of how full the hash table is, and it can have a significant impact on the performance of the hash table.
The load factor is typically represented as a decimal value between 0 and 1. A load factor of 0 indicates that the hash table is empty, while a load factor of 1 indicates that the hash table is full. A load factor between 0 and 1 indicates that the hash table is partially full.
The load factor can be calculated using the following formula:
```
load_factor = number_of_key_value_pairs / number_of_buckets
```
For example, if a hash table has 100 key-value pairs and 200 buckets, the load factor would be:
```
load_factor = 100 / 200 = 0.5
```
A high load factor indicates that the hash table is nearly full, while a low load factor indicates that the hash table is relatively empty. The optimal load factor for a hash table depends on the specific requirements of the application, such as the expected number of key-value pairs and the desired performance characteristics.
# 4.3. Impact of Load Factor on Hash Table Performance
The load factor of a hash table has a significant impact on its performance. A high load factor can lead to an increased number of collisions, longer probe sequences, and decreased performance. A low load factor can result in wasted memory and decreased performance.
When the load factor is high, more key-value pairs are stored in each bucket, increasing the likelihood of collisions. Collisions can result in longer probe sequences, as the hash table needs to search through more key-value pairs to find the desired key. This can lead to increased search times and decreased performance.
When the load factor is low, there are more buckets than necessary to store the key-value pairs, and the unused buckets waste memory. The performance penalty is usually small compared with the wasted space, though a very sparse table can be slower to iterate over or copy.
Maintaining an optimal load factor is important for ensuring the performance and reliability of a hash table. A load factor of 0.7 is often used as a rule of thumb, as it provides a good balance between memory usage and performance. However, the optimal load factor may vary depending on factors such as the expected number of key-value pairs and the desired performance characteristics.
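The resize decision can be sketched as: track the pair count, and when the load factor exceeds a threshold (0.7 here, following the rule of thumb above), double the bucket array and rehash every pair. The doubling factor and threshold are illustrative choices, not fixed rules:

```python
def load_factor(num_pairs, num_buckets):
    return num_pairs / num_buckets

def maybe_resize(buckets, num_pairs, threshold=0.7):
    """Return a rehashed bucket list twice as large if the load factor is high."""
    if load_factor(num_pairs, len(buckets)) <= threshold:
        return buckets                       # load is acceptable: keep the table
    new_buckets = [[] for _ in range(2 * len(buckets))]
    for bucket in buckets:
        for key, value in bucket:
            # Indices must be recomputed: they depend on the bucket count.
            new_buckets[hash(key) % len(new_buckets)].append((key, value))
    return new_buckets
```

Note that every pair must be rehashed on resize, because each index was computed modulo the old bucket count.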
# 5. Modular Arithmetic
Modular arithmetic is arithmetic on integers in which results wrap around upon reaching a fixed modulus, so every result is a non-negative integer less than the modulus. It is often used in hashing to calculate the index in the hash table where a key-value pair will be stored.
In modular arithmetic, the modulus is the number that defines the range of possible values for the result. It is typically represented as a positive integer greater than 1. For example, if the modulus is 10, the result of any calculation will be a non-negative integer less than 10.
Modular arithmetic can be performed using the modulo operator, denoted by the percent sign (%). For example, the expression "a % b" calculates the remainder when a is divided by b. The result is always a non-negative integer less than b.
In the context of hashing, modular arithmetic is used to calculate the index in the hash table where a key-value pair will be stored. The key is first hashed to produce a hash code, which is then reduced modulo the table size to obtain the index.
# 5.1. Definition and Properties of Modular Arithmetic
Modular arithmetic is arithmetic on integers where results wrap around at a fixed modulus, so every result is a non-negative integer less than the modulus. It is often used in hashing to calculate the index in the hash table where a key-value pair will be stored.
The modulus is the number that defines the range of possible values for the result. It is typically represented as a positive integer greater than 1. For example, if the modulus is 10, the result of any calculation will be a non-negative integer less than 10.
Modular arithmetic has several properties that make it useful in hashing:
1. Closure: The result of a modular arithmetic operation is always an integer within the defined range.
2. Commutativity: The order of the operands does not affect the result of the operation. For example, a + b is equal to b + a in modular arithmetic.
3. Associativity: The grouping of operands does not affect the result of the operation. For example, (a + b) + c is equal to a + (b + c) in modular arithmetic.
4. Distributivity: Modular arithmetic obeys the distributive property. For example, a * (b + c) is equal to (a * b) + (a * c) in modular arithmetic.
These properties make modular arithmetic a useful tool for performing calculations in hashing, as it allows for efficient and consistent indexing of key-value pairs in a hash table.
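These properties can be checked directly with Python's `%` operator (for non-negative operands, where `%` matches the mathematical definition):

```python
m = 10
a, b, c = 7, 8, 9

# Closure: every result stays within range(m)
assert (a + b) % m in range(m)

# Commutativity: operand order does not matter
assert (a + b) % m == (b + a) % m

# Associativity: grouping does not matter
assert ((a + b) % m + c) % m == (a + (b + c) % m) % m

# Distributivity: multiplication distributes over addition mod m
assert (a * (b + c)) % m == ((a * b) % m + (a * c) % m) % m
```

In particular, the distributive and associative laws are what let hash functions reduce intermediate results modulo the table size at each step without changing the final index.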
# 5.2. Applications in Hashing
Modular arithmetic is widely used in hashing algorithms to determine the index of a key-value pair in a hash table. By using modular arithmetic, we can ensure that the index falls within the range of the table, regardless of the size of the key or the table.
One common application of modular arithmetic in hashing is in the calculation of the hash code for a key. The hash code is a numerical value that is computed from the key and is used to determine the index where the key-value pair will be stored in the hash table. The hash code is calculated by performing various operations on the key, such as multiplying it by a constant, adding or subtracting other values, and then taking the modulus of the result.
Another application of modular arithmetic in hashing is in the implementation of collision resolution techniques, such as double hashing. In double hashing, two different hash functions are used to calculate the probe sequence for a key. The first hash function determines the initial index, and the second hash function determines the step size for each subsequent probe. Modular arithmetic is used to ensure that the indices and step sizes fall within the range of the table.
Overall, modular arithmetic is a fundamental concept in hashing that allows for efficient and consistent indexing of key-value pairs in a hash table. It provides a way to handle large keys and tables, and ensures that the indices and step sizes are always within the desired range.
# 5.3. Solving Collision Resolution with Modular Arithmetic
Modular arithmetic can also be used to solve collision resolution in hashing. When two different keys hash to the same index in a hash table, a collision occurs. There are several techniques to resolve collisions, and one common approach is to use modular arithmetic.
In this approach, when a collision occurs, we increment the index by a fixed step size until an empty slot is found, using modular arithmetic to wrap around to the beginning of the table when the search runs past the end. The simplest choice is a step size of 1.

The step size can also be derived from the key's hash code, but some care is needed: the step must be nonzero and relatively prime to the table size, or the probe sequence will not visit every slot. For example, with a table of size 10 and a hash code of 25, the naive step 25 modulo 10 = 5 shares a factor with 10, so probing from index 5 would only ever examine indices 5 and 0. Choosing a prime table size, or a step of the form 1 + (hash modulo (m - 1)), avoids this problem.

If the initial index is already occupied, we continue incrementing the index by the step size until an empty slot is found. If we reach the end of the table, the modulus wraps the search around to the beginning.

When the step size is chosen this way, every slot in the table is eventually probed, so an insertion cannot fail while free slots remain. Modular arithmetic also keeps the probing efficient and consistent, even when dealing with large keys and tables.
Suppose we have a hash table of size 7, and we want to insert the keys 12, 19, and 26. The hash codes for these keys are 12, 19, and 26 respectively.
Using modular arithmetic, we can calculate the initial indices as follows:
- For key 12: 12 modulo 7 = 5
- For key 19: 19 modulo 7 = 5 (collision)
- For key 26: 26 modulo 7 = 5 (collision)
Since all three keys hash to the same index, a collision occurs. We can resolve this by incrementing the index by the step size, which is 1 in this case. The updated indices are as follows:
- For key 12: 5 (occupied)
- For key 19: 6
- For key 26: 0
Now, all three keys have been successfully inserted into the hash table without any further collisions.
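The worked example above can be reproduced in code (step size 1, table size 7, and keys whose hash codes simply equal their values):

```python
def insert_with_probing(table, hash_code, key, step=1):
    """Place `key` using modular probing; empty slots hold None."""
    m = len(table)
    index = hash_code % m
    while table[index] is not None:
        index = (index + step) % m   # wrap around with the modulus
    table[index] = key
    return index

table = [None] * 7
print(insert_with_probing(table, 12, 12))  # 12 mod 7 = 5
print(insert_with_probing(table, 19, 19))  # collision at 5, probe to 6
print(insert_with_probing(table, 26, 26))  # collisions at 5 and 6, wrap to 0
```

The three insertions land at indices 5, 6, and 0, matching the table above.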
## Exercise
Consider a hash table of size 13. Calculate the initial indices for the following keys using modular arithmetic:
- Key 7
- Key 20
- Key 33
### Solution
- For key 7: 7 modulo 13 = 7
- For key 20: 20 modulo 13 = 7 (collision)
- For key 33: 33 modulo 13 = 7 (collision)
# 6. Double Hashing
Double hashing is another technique used for collision resolution in hashing. It involves using two hash functions instead of one. The first hash function is used to calculate the initial index, and the second hash function determines the step size for probing the table.
The formula for double hashing is:
$$h(k, i) = (h_1(k) + i \cdot h_2(k)) \mod m$$
where:
- $h(k, i)$ is the index of the $i$-th probe for key $k$
- $h_1(k)$ is the hash value of key $k$ using the first hash function
- $h_2(k)$ is the hash value of key $k$ using the second hash function
- $m$ is the size of the hash table
By giving colliding keys different probe sequences, double hashing reduces clustering and distributes probes more evenly throughout the table: two keys that hash to the same initial index will, in general, have different step sizes, so they do not follow the same sequence of slots. This helps to improve the performance of the hash table.
To implement double hashing, we need to carefully choose the two hash functions. The first hash function should provide a good distribution of keys across the table, while the second hash function should ensure that the step size is relatively prime to the size of the table.
Let's consider an example. Suppose we have a hash table of size 10, and we want to insert the keys 12, 19, and 26. We can use the following hash functions:
$$h_1(k) = k \mod 10$$
$$h_2(k) = 7 - (k \mod 7)$$
Using these hash functions, we can calculate the initial indices as follows:
- For key 12: $h_1(12) = 12 \mod 10 = 2$
- For key 19: $h_1(19) = 19 \mod 10 = 9$
- For key 26: $h_1(26) = 26 \mod 10 = 6$
Since all three keys hash to different indices, no collisions occur at the initial indices.
Now, let's consider a scenario where a collision occurs. Suppose we want to insert the key 33 into the same hash table. Using the same hash functions, we can calculate the initial index as follows:
- For key 33: $h_1(33) = 33 \mod 10 = 3$
Suppose that index 3 is already occupied by a previously inserted key, say 13 (since $13 \mod 10 = 3$). To resolve this collision, we can use the second hash function to determine the step size.
Using $h_2(33) = 7 - (33 \mod 7) = 7 - 5 = 2$, we can increment the index by 2 until an empty slot is found. The updated indices are as follows:
- For key 33: 3 (occupied)
- For key 33 (first probe): $(3 + 2) \mod 10 = 5$
- For key 33 (second probe): $(3 + 2 \cdot 2) \mod 10 = 7$
Now, the key 33 has been successfully inserted into the hash table without any further collisions.
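The probe sequence above can be generated directly from the double-hashing formula, using the same $h_1$ and $h_2$ as in the example:

```python
def probe_sequence(key, table_size, num_probes):
    """Return the first indices visited for `key` under double hashing."""
    h1 = key % table_size
    h2 = 7 - (key % 7)  # second hash function from the example
    return [(h1 + i * h2) % table_size for i in range(num_probes)]

# Key 33 in a table of size 10: initial index 3, then steps of size 2.
print(probe_sequence(33, 10, 3))  # [3, 5, 7]
```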
## Exercise
Consider a hash table of size 11. Use the following hash functions to calculate the initial indices for the given keys:
$$h_1(k) = k \mod 11$$
$$h_2(k) = 7 - (k \mod 7)$$
- Key 15
- Key 29
- Key 42
### Solution
- For key 15: $h_1(15) = 15 \mod 11 = 4$
- For key 29: $h_1(29) = 29 \mod 11 = 7$
- For key 42: $h_1(42) = 42 \mod 11 = 9$
Since all three keys hash to different indices, no collisions occur at the initial indices.
# 6.1. Concept and Implementation of Double Hashing
The concept of double hashing involves using two hash functions to calculate the index for a key in a hash table. The first hash function determines the initial index, and the second hash function determines the step size for probing the table.
To implement double hashing, we need to carefully choose the two hash functions. The first hash function should provide a good distribution of keys across the table, while the second hash function should ensure that the step size is relatively prime to the size of the table.
To insert a key into the hash table using double hashing, we calculate the initial index using the first hash function. If the index is already occupied, we increment the index by the step size calculated using the second hash function, and continue probing until an empty slot is found.
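This procedure can be sketched as a small Python function. The hash functions are the illustrative ones used in this chapter's examples, and the sketch assumes the table is not full and the probe sequence eventually reaches a free slot:

```python
def double_hash_insert(table, key):
    m = len(table)
    h1 = key % m        # first hash: initial index
    h2 = 7 - (key % 7)  # second hash: step size
    i = 0
    while table[(h1 + i * h2) % m] is not None:
        i += 1          # slot occupied, probe the next index
    table[(h1 + i * h2) % m] = key

table = [None] * 10
for key in [12, 19, 26, 13, 33]:
    double_hash_insert(table, key)
# 12 -> 2, 19 -> 9, 26 -> 6, 13 -> 3; 33 collides at 3 and lands at 5.
```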
# 6.2. Advantages and Disadvantages of Double Hashing
Double hashing has several advantages and disadvantages compared to other collision resolution techniques.
One advantage of double hashing is that it allows for a higher load factor, meaning that the hash table can be more densely populated with keys. This can result in better space efficiency, as fewer empty slots are left in the table. Additionally, double hashing tends to have a lower average probe sequence length compared to other techniques like linear probing or quadratic probing.
Another advantage of double hashing is that it can be more resistant to clustering. Clustering occurs when keys that hash to the same index tend to form long chains, leading to longer probe sequences and slower performance. With double hashing, the step size is determined by a second hash function, which can help distribute keys more evenly and reduce clustering.
However, double hashing also has some disadvantages. One disadvantage is that it can be more computationally expensive compared to other techniques. Calculating the step size using the second hash function requires additional computations, which can slow down the insertion and retrieval operations.
Another disadvantage is that finding suitable pairs of hash functions for double hashing can be more challenging. The first hash function needs to provide a good distribution of keys, while the second hash function needs to ensure that the step size is relatively prime to the size of the table. Designing and testing these hash functions can require more effort and expertise.
Overall, double hashing can be a useful technique for collision resolution in hash tables, especially when a higher load factor or resistance to clustering is desired. However, it may come with some trade-offs in terms of computational complexity and hash function design.
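These trade-offs can be explored empirically. The sketch below (with simple, illustrative hash choices) counts the total probes needed to insert a batch of keys under linear probing versus double hashing:

```python
import random

def total_probes(keys, m, double=False):
    table = [None] * m
    count = 0
    for k in keys:
        h1 = k % m
        step = (1 + k % (m - 1)) if double else 1  # double hashing vs. linear probing
        i = 0
        while table[(h1 + i * step) % m] is not None:
            i += 1
        table[(h1 + i * step) % m] = k
        count += i + 1
    return count

random.seed(1)
m = 101  # a prime table size keeps every step size co-prime with m
keys = random.sample(range(10_000), 90)  # load factor about 0.89
print(total_probes(keys, m), total_probes(keys, m, double=True))
```

At high load factors, the double-hashing total is typically lower, reflecting the reduced clustering discussed above.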
# 6.3. Performance Analysis of Double Hashing
To analyze the performance of double hashing, we can consider the average probe sequence length for both successful and unsuccessful searches. The probe sequence length is the number of slots that need to be examined in order to find the desired key or determine that it is not present in the table.
For successful searches, the average probe sequence length can be approximated by the formula:
$$\frac{1}{\lambda} \ln\left(\frac{1}{1 - \lambda}\right)$$
where $\lambda$ is the load factor, defined as the ratio of the number of keys in the table to the size of the table. This formula assumes simple uniform hashing, where the hash function distributes the keys uniformly and independently over the range of indices.
For unsuccessful searches, the average probe sequence length can be approximated by the formula:
$$\frac{1}{1 - \lambda}$$
These formulas show that the average probe sequence length for both successful and unsuccessful searches increases as the load factor approaches 1. This means that as the table becomes more densely populated with keys, the performance of double hashing may degrade.
However, compared to other collision resolution techniques like linear probing or quadratic probing, double hashing tends to have a lower average probe sequence length for the same load factor. This can result in faster search and insertion operations, especially when the load factor is high.
It's important to note that these formulas are approximations and may not hold exactly in practice. The actual performance of double hashing can depend on factors such as the quality of the hash functions used, the distribution of the keys, and the specific implementation details.
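The approximations are easy to tabulate. The sketch below uses the standard uniform-hashing formulas for open addressing:

```python
import math

def successful_probes(load):
    # Expected probes for a successful search under uniform hashing.
    return (1 / load) * math.log(1 / (1 - load))

def unsuccessful_probes(load):
    # Expected probes for an unsuccessful search.
    return 1 / (1 - load)

for load in (0.5, 0.75, 0.9):
    print(load, round(successful_probes(load), 2), round(unsuccessful_probes(load), 2))
```

As the table shows, both quantities grow sharply as the load factor approaches 1.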
# 7. Real-World Applications of Double Hashing
7.1. Distributed Hash Tables (DHTs)
Distributed hash tables (DHTs) are a key component of peer-to-peer (P2P) networks. They provide a decentralized way of storing and retrieving data across a network of computers. DHTs use double hashing to distribute the data across multiple nodes in the network.
Each node in the network is responsible for storing a portion of the data based on its hash value. When a node wants to retrieve a piece of data, it uses double hashing to determine which node is responsible for storing that data. This allows for efficient and scalable data retrieval in P2P networks.
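A toy sketch of the idea (the node count and hash function are illustrative; real DHTs such as Chord use consistent hashing so that nodes joining or leaving move only a few keys):

```python
import hashlib

def responsible_node(key, num_nodes):
    # Hash the key and map the digest onto one of the nodes.
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return digest % num_nodes

# Every peer computes the same mapping, so no central index is needed.
node = responsible_node("some-file.txt", 8)
assert 0 <= node < 8
```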
7.2. Password Storage and Authentication
Double hashing is commonly used in password storage and authentication systems to protect user passwords. Instead of storing passwords directly, the system stores the hash of the password using a one-way hash function. This ensures that even if the password database is compromised, the actual passwords cannot be easily obtained.
To further enhance security, double hashing is often used in combination with other techniques such as salting. Salting involves adding a random value to the password before hashing it. This prevents attackers from using precomputed tables (rainbow tables) to quickly determine the original password.
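A minimal sketch of salted hashing using Python's standard `hashlib` (SHA-256 and a 16-byte salt are illustrative choices; production systems should use a dedicated password-hashing function such as bcrypt, scrypt, or Argon2):

```python
import hashlib
import os

def hash_password(password, salt=None):
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, stored_digest):
    return hash_password(password, salt)[1] == stored_digest

salt, digest = hash_password("password123")
assert verify_password("password123", salt, digest)
```

Because the salt differs per user, identical passwords produce different stored digests, defeating precomputed rainbow tables.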
7.3. Caching and Database Indexing
Double hashing is also used in caching and database indexing to improve performance. Caching involves storing frequently accessed data in a fast-access memory, such as RAM, to reduce the time it takes to retrieve the data.
In caching systems, double hashing is used to determine the location of the data in the cache. By using a combination of two hash functions, the system can distribute the data evenly across the cache, reducing the likelihood of collisions and improving cache performance.
Similarly, in database indexing, double hashing is used to determine the location of data in an index structure. This allows for efficient retrieval of data based on a specific key, improving the overall performance of database queries.
# 7.2. Password Storage and Authentication
One important application of double hashing is password storage and authentication. When users create an account on a website or application, they typically need to choose a password. Storing these passwords securely is crucial to protect user data.
Hash functions are commonly used to store passwords securely. Instead of storing the actual passwords, the system stores the hash values of the passwords. A hash value is a fixed-length string that is generated by applying a hash function to the password.
When a user tries to log in, the system compares the hash value of the entered password with the stored hash value. If the hash values match, the password is considered correct and the user is granted access.
Double hashing can be used to enhance the security of password storage. Instead of using a single hash function, a combination of two hash functions can be used. This adds an extra layer of protection against attacks such as rainbow table attacks, where precomputed hash values are used to quickly find matching passwords.
By using double hashing, even if an attacker manages to obtain the hash values, it is much more difficult for them to reverse-engineer the original passwords. This significantly improves the security of user accounts and helps protect sensitive information.
Let's consider an example to illustrate the use of double hashing in password storage. Suppose a user chooses the password "password123". The system applies the first hash function, Hash1, to generate the first hash value. Then, it applies the second hash function, Hash2, to the first hash value to generate the final hash value.
```python
import hashlib

# Hash1 and Hash2 from the text are illustrative; here both are SHA-256.
password = "password123"
hash1 = hashlib.sha256(password.encode()).hexdigest()
hash2 = hashlib.sha256(hash1.encode()).hexdigest()  # this value is stored
```
The system stores the final hash value in the database. When the user tries to log in, the system retrieves the stored hash value and applies the same double hashing process to the entered password. If the final hash value matches the stored hash value, the user is granted access.
By using double hashing, the system adds an extra layer of security to the password storage process, making it more difficult for attackers to compromise user accounts.
## Exercise
Explain why double hashing is beneficial for password storage and authentication.
### Solution
Double hashing enhances the security of password storage by adding an extra layer of protection. It makes it more difficult for attackers to reverse-engineer the original passwords even if they obtain the hash values. This helps protect user accounts and sensitive information from unauthorized access.
# 7.3. Caching and Database Indexing
Another important application of double hashing is in caching and database indexing. In computer systems, caching is the process of storing frequently accessed data in a faster and closer location to improve performance. Database indexing is a technique used to efficiently retrieve data from a database by creating a data structure that maps key values to their corresponding records.
Double hashing can be used to implement caching and database indexing efficiently. By using a combination of two hash functions, it is possible to distribute the data evenly across the cache or index, reducing the likelihood of collisions and improving access times.
Let's consider an example of using double hashing in caching. Suppose we have a cache that can store a limited number of items. When a new item needs to be added to the cache, we apply the first hash function, Hash1, to determine its initial position in the cache. If that position is already occupied, we apply the second hash function, Hash2, to determine a step size and probe forward until a free slot is found.
```python
def cache_insert(cache, item):
    m = len(cache)
    position = hash(item) % m               # Hash1: initial position
    step = 1 + (hash(item) // m) % (m - 1)  # Hash2: step size for probing
    while cache[position] is not None:
        position = (position + step) % m    # probe until a free slot is found
    cache[position] = item

cache = [None] * 8
cache_insert(cache, "data")
```
By using double hashing, we can distribute the items evenly across the cache, reducing the likelihood of collisions and improving the cache's performance.
The same principle applies to database indexing. When creating an index for a database, a combination of two hash functions can be used to map key values to their corresponding records. This allows for efficient retrieval of data based on the key values, improving the overall performance of database queries.
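As a toy illustration of a hash index (the table contents and column names are invented for the example):

```python
# Rows of a tiny "table"; the index maps the key column to row positions.
rows = [("alice", 30), ("bob", 25), ("carol", 41)]
index = {name: pos for pos, (name, _) in enumerate(rows)}

# A lookup by key is now a single hash probe instead of a full table scan.
assert rows[index["bob"]] == ("bob", 25)
```

A real database index would also handle collisions, duplicate keys, and updates, but the lookup principle is the same.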
## Exercise
Explain how double hashing can improve the performance of caching and database indexing.
### Solution
Double hashing improves the performance of caching and database indexing by distributing the data evenly across the cache or index. This reduces the likelihood of collisions and improves access times. By using a combination of two hash functions, double hashing ensures that items are stored in optimal positions, leading to faster retrieval and improved overall performance.
# 8. Conclusion and Next Steps
In this textbook, we have covered the concept of double hashing and its applications in various fields. We started by understanding the basics of hash functions and hash tables, and then delved into collision resolution techniques such as open addressing and separate chaining.
We specifically focused on double hashing as a collision resolution technique, exploring its concept, implementation, advantages, and disadvantages. We also analyzed the performance of double hashing and discussed its real-world applications in distributed hash tables, password storage and authentication, and caching and database indexing.
By studying double hashing, you have gained a deep understanding of this important topic in computer science. You are now equipped with the knowledge to apply double hashing in practical scenarios and further explore advanced hashing techniques.
To further enhance your understanding of hashing, you may want to explore additional topics such as linear probing, quadratic probing, and other advanced collision resolution techniques. You can also delve into the mathematics behind hashing, including modular arithmetic and its applications.
Additionally, you can explore real-world examples and use cases of hashing in various industries, such as finance, telecommunications, and cybersecurity. Understanding how hashing is applied in these contexts will give you a broader perspective on its significance and potential.
By continuing your exploration of hashing, you will deepen your expertise in this field and be well-prepared to tackle more complex challenges in computer science and related disciplines.
In conclusion, double hashing is a powerful technique that allows for efficient collision resolution in hash tables. Its applications extend beyond computer science, with real-world uses in distributed systems, security, and data management. By mastering the concepts and techniques covered in this textbook, you have taken a significant step towards becoming a proficient practitioner in the field of hashing.
Congratulations on completing this textbook! We hope that it has provided you with a solid foundation in double hashing and inspired you to explore further in this exciting field. Good luck on your journey of continuous learning and discovery!
# 8.1. Recap of Key Concepts
Let's recap the key concepts we have covered in this textbook:
- Hash functions: We learned about the definition and characteristics of hash functions, as well as the different types of hash functions. We also explored the process of designing an effective hash function.
- Hash tables: We discussed what hash tables are and their anatomy. We also learned how to implement a hash table and explored different collision resolution techniques, including open addressing and separate chaining.
- Double hashing: We focused on double hashing as a collision resolution technique. We learned about its concept and implementation, as well as its advantages and disadvantages. We also analyzed the performance of double hashing.
- Load factor: We explored the definition and importance of load factor in hash tables. We discussed how to calculate load factor and its impact on hash table performance.
- Modular arithmetic: We delved into the definition and properties of modular arithmetic and its applications in hashing. We also learned how to solve collision resolution using modular arithmetic.
- Real-world applications: We discussed the real-world applications of double hashing, including distributed hash tables, password storage and authentication, and caching and database indexing.
By understanding these key concepts, you have gained a comprehensive understanding of double hashing and its applications.
# 8.2. Further Exploration of Hashing
If you're interested in further exploring the topic of hashing, there are several areas you can dive into:
- Linear probing and quadratic probing: These are other collision resolution techniques that you can study in more detail. Understanding their concepts and comparing them to double hashing will broaden your knowledge of hash table implementations.
- Advanced hashing techniques: There are more advanced hashing techniques that you can explore, such as cuckoo hashing, hopscotch hashing, and robin hood hashing. These techniques offer different trade-offs and optimizations compared to double hashing.
- Mathematics of hashing: If you're interested in the mathematical aspects of hashing, you can delve deeper into the properties of hash functions, the analysis of collision resolution techniques, and the theoretical foundations of hashing.
By further exploring these areas, you will deepen your understanding of hashing and expand your toolkit for solving complex problems in computer science.
Remember, hashing is a vast and evolving field, and there is always more to learn. As you continue your journey in computer science, keep exploring new concepts, experimenting with different techniques, and staying up to date with the latest advancements in hashing.
# 8.3. Other Advanced Hashing Techniques
In addition to double hashing, there are several other advanced hashing techniques that you may find interesting to explore:
- Cuckoo hashing: Cuckoo hashing is a technique that uses multiple hash functions and resolves collisions by evicting and relocating existing keys. It guarantees worst-case constant-time lookups and deletions and supports high load factors, making it suitable for applications with limited memory.
- Hopscotch hashing: Hopscotch hashing is a technique that uses a neighborhood structure to resolve collisions. It provides efficient insertions, deletions, and searches, making it suitable for applications that require frequent updates.
- Robin hood hashing: Robin hood hashing is a technique that uses a linear probing strategy with a twist. It minimizes the variance in probe lengths, resulting in a more balanced distribution of keys and improved performance.
By exploring these advanced hashing techniques, you will expand your knowledge and be able to choose the most appropriate technique for different scenarios.
Remember, each hashing technique has its own advantages and disadvantages, and the choice of technique depends on the specific requirements of your application. By understanding the strengths and weaknesses of different techniques, you will be able to design efficient and robust hash tables.
# 8.4. Real-World Examples and Use Cases
Hashing has numerous real-world applications across various industries. Here are a few examples:
- Distributed hash tables (DHTs): DHTs are used in peer-to-peer networks to distribute and retrieve data efficiently. By using hashing, DHTs ensure that data is evenly distributed among network nodes, enabling efficient data retrieval.
- Password storage and authentication: Hashing is commonly used to store passwords securely. Instead of storing passwords in plain text, they are hashed using a one-way hash function. When a user enters their password, it is hashed and compared to the stored hash value for authentication.
- Caching and database indexing: Hashing is used in caching systems and database indexing to quickly retrieve data. By hashing keys, systems can store data in a way that allows for fast lookup and retrieval, improving performance.
These are just a few examples of how hashing is applied in real-world scenarios. By understanding the principles and techniques covered in this textbook, you will be well-equipped to apply hashing in various domains and solve practical problems.
Keep exploring real-world examples and use cases to gain insights into how hashing is used in different industries and applications. This will broaden your understanding and help you apply hashing effectively in your own projects.
Congratulations on completing this textbook! We hope it has been a valuable resource in your learning journey. Good luck with your future endeavors in the field of hashing!
\begin{document}
\title[Verification of mixing properties]{Verification of mixing properties in two-dimensional shifts of finite type}
\author{Jung-Chao Ban} \address{Department of Applied Mathematics, National Dong Hwa University, Hualien 97401, Taiwan} \email{[email protected]} \thanks{The first author would like to thank the National Science Council, R.O.C. (Contract No. NSC 100-2115-M-259-009-MY2) and the National Center for Theoretical Sciences for partially supporting this research.}
\author{Wen-Guei Hu} \address{Department of Applied Mathematics, National Chiao Tung University, Hsinchu 30010, Taiwan} \email{[email protected]} \thanks{The second author would like to thank the National Science Council, R.O.C. and the ST Yau Center for partially supporting this research.}
\author{Song-Sun Lin} \address{Department of Applied Mathematics, National Chiao Tung University, Hsinchu 300, Taiwan} \email{[email protected]} \thanks{The third author would like to thank the National Science Council, R.O.C. (Contract No. NSC 103-2115-M-009-004) and the ST Yau Center for partially supporting this research.}
\author{Yin-Heng Lin} \address{Department of Applied Mathematics, National Chiao Tung University, Hsin-Chu 30010, Taiwan} \email{[email protected]}
\begin{abstract} The degree of mixing is a fundamental property of a dynamical system, but it cannot be determined systematically for general multi-dimensional shifts. This work introduces constructive and systematic methods for verifying the degree of mixing, from topological mixing to strong specification (or strong irreducibility), for two-dimensional shifts of finite type. First, transition matrices on infinite strips of width $n$ are introduced for all $n\geq 2$. To determine the primitivity of the transition matrices, connecting operators are introduced to reduce the order of high-order transition matrices, yielding lower-order transition matrices. Two sufficient conditions for primitivity are provided: invariant diagonal cycles and primitive commutative cycles of connecting operators. After primitivity is established, corner-extendability and crisscross-extendability are used to demonstrate topological mixing. In addition, the hole-filling condition yields strong specification. All of these conditions can be verified in a finite number of steps. \end{abstract}
\maketitle
\section{Introduction}
\label{intro} \hspace{0.5cm}
Multi-dimensional shift spaces represent an important and highly active area of research into topological dynamical systems. Such shifts are also closely related to lattice models that are used in the scientific modeling of spatial structures. More precisely, when lattice dynamical systems or coupled map lattices are spatially invariant and their equilibria assume only finitely many values, the set of all stationary solutions forms a multi-dimensional shift space \cite{14,15,16,17}. Related investigations have been performed in statistical physics and chemistry \cite{4,6,7,10-1,15,18-1,24,25,26,26-1,27,36,37,38,39,40,47-1,48,48-1}, biology \cite{8,9}, and image processing and pattern recognition \cite{14,16,17,21,22,23,30}.
Lattice models would be better understood if multi-dimensional shifts of finite type were better understood. The most interesting properties of shifts include their spatial entropy and various mixing properties, such as topological mixing and strong specification (or strong irreducibility); these are among the most important properties of dynamical systems \cite{11,12,13,13-1,13-2,22,23,27,29-1,35,40-1,40-2,42,42-1,43,45,47,50}. However, determining whether a given system exhibits topological mixing or strong specification in multiple dimensions is not easy. The intrinsic difficulty is related to the undecidability of the multi-dimensional coloring problem \cite{10,18,21,29,31,44,46,49}. For example, the extendability of local patterns on a finite lattice to a global pattern on $\mathbb{Z}^{2}$ is undecidable. Therefore, the mixing property of an arbitrary multi-dimensional shift cannot be determined. Nevertheless, this work provides some easily checked sufficient conditions for topological mixing and strong specification of certain two-dimensional shifts of finite type, which satisfy some non-degeneracy conditions of transition matrices $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$. See Definition \ref{definition:3.3} and Theorem \ref{theorem:3.5}.
Let $\mathbb{Z}^{2}$ be a two-dimensional planar lattice. Vertex (or corner) coloring is considered first. For any $m,n\geq 1$ and $(i,j)\in\mathbb{Z}^{2}$, the $m\times n$ rectangular lattice with the left-bottom vertex $(i,j)$ is denoted by
\begin{equation*} \mathbb{Z}_{m\times n}((i,j))=\left\{(i+n_{1},j+n_{2})\mid 0\leq n_{1}\leq m-1,0\leq n_{2}\leq n-1 \right\}. \end{equation*} In particular, \begin{equation*} \mathbb{Z}_{m\times n}=\mathbb{Z}_{m\times n}((0,0)). \end{equation*} Let $\mathcal{S}_{p}$ be a set of $p$ ($\geq 2$) colors (or symbols). For $m,n\geq 1$, $\Sigma_{m\times n}(p)=\mathcal{S}_{p}^{\mathbb{Z}_{m\times n}}$ is the set of all $m\times n$ local patterns or rectangular blocks.
Let $\mathcal{B}\subset\Sigma_{2\times 2}(p)$ be a basic set of admissible local patterns. For any lattice $R\subset\mathbb{Z}^{^{2}}$, the set of all $\mathcal{B}$-admissible patterns on $R$ is defined as
\begin{equation*} \Sigma_{R}(\mathcal{B})=\left\{U\in\mathcal{S}_{p}^{R}: U\mid _{\mathbb{Z}_{2\times 2}((i,j))}\in\mathcal{B} \text{ if } \mathbb{Z}_{2\times 2}((i,j))\subset R \right\}. \end{equation*} Let $\Sigma_{m\times n}(\mathcal{B})=\Sigma_{\mathbb{Z}_{m\times n}}(\mathcal{B})$ for $m,n\geq 2$. $\Sigma(\mathcal{B})=\Sigma_{\mathbb{Z}^{2}}(\mathcal{B})$ is the set of all global patterns that can be constructed from the admissible local patterns in $\mathcal{B}$.
Traditionally, the admissible local patterns are specified on sublattices $\mathbb{Z}_{2\times 1}$ and $\mathbb{Z}_{1\times 2}$ with symbols in $\mathcal{S}_{p}$. Our approach to the two-dimensional problem begins with a study of infinite strips $\mathbb{Z}_{\infty\times n}$ and $\mathbb{Z}_{m\times\infty}$. Based on $\Sigma_{2\times n}(\mathcal{B})$ and $\Sigma_{m\times 2}(\mathcal{B})$, the transition matrices $\mathbb{H}_{n}$ on $\mathbb{Z}_{2\times n}$ and $\mathbb{V}_{m}$ on $\mathbb{Z}_{m\times 2}$ are introduced, and these apply to admissible patterns on $\mathbb{Z}_{\infty\times n}$ and $\mathbb{Z}_{m\times\infty}$, respectively. Carefully arranging the local patterns on $\mathbb{Z}_{2\times 2}$ into the ordering matrices $\mathbf{X}_{2}$ and $\mathbf{Y}_{2}$, yields recursive formulae for $\mathbb{H}_{n}$ in $n$ and $\mathbb{V}_{m}$ in $m$, which are crucial in computing the spatial entropy \cite{1,2} and studying the mixing problem herein, as elucidated in Section 2. Notably, any $\mathbb{Z}^{2}$-shift of finite type can be represented by some $\mathcal{B}\subset\Sigma_{2\times 2}(p)$ for some $p\geq 2$ \cite{41}. Accordingly, only the case of $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, $p\geq 2$, is considered here.
First, topological mixing is introduced. For any shift $\Sigma$ and any subset $R\subset \mathbb{Z}^{2}$, the restriction map is $\Pi_{R}(\Sigma):\Sigma\rightarrow \mathcal{S}_{p}^{R}$. Denote by $d$ the Euclidean metric on $\mathbb{Z}^{2}$. A $\mathbb{Z}^{2}$ shift $\Sigma$ is topologically mixing (or mixing, for short) if for any finite subsets $R_{1}$ and $R_{2}$ of $\mathbb{Z}^{2}$, a constant $M(R_{1},R_{2})$ exists such that for all $\mathbf{v}\in \mathbb{Z}^{2}$ with $d(R_{1},R_{2}+\mathbf{v})\geq M$, and for any two admissible patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}+\mathbf{v}}(\Sigma)$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}+\mathbf{v}}(W)=U_{2}$; see \cite{50}.
$\Sigma$ has strong specification if a number $M(\Sigma)\geq 1$ exists such that for any two admissible patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}}(\Sigma)$ with $d(R_{1},R_{2})\geq M$, where $R_{1},R_{2}$ are subsets of $\mathbb{Z}^{2}$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}}(W)=U_{2}$ \cite{50}. Clearly, strong specification implies topological mixing.
Some known results verify that $\Sigma(\mathcal{B})$ is topologically mixing or has strong specification \cite{3,42,42-1,48-2}. Previously, in an investigation of pattern generation problems \cite{2}, the present authors introduced connecting operators to study the entropy of $\Sigma(\mathcal{B})$. In this work, connecting operators are also utilized to provide sufficient conditions for the topological mixing or strong specification of $\Sigma(\mathcal{B})$.
A non-negative matrix $A$ is called primitive (or $N$-primitive) if there exists $N\geq 1$ such that each entry of $A^{n}$ is positive for all $n\geq N$. A matrix $A$ is called weakly primitive (or weakly $N$-primitive) if there exists $N\geq 1$ such that, for all $n\geq N$, each entry of $A^{n}$ is positive except for the entries that lie in a zero row or a zero column of $A$. The local crisscross-extendability and locally corner-extendable conditions are introduced in Section 3.
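These two notions are finitely checkable. The following is a minimal numerical sketch (not part of the paper), assuming NumPy; it tests powers of the $0$-$1$ reduction of $A$ up to Wielandt's bound $(n-1)^{2}+1$, beyond which a primitive $n\times n$ matrix must already have a positive power.

```python
import numpy as np

def is_primitive(A, N=None):
    """A non-negative square matrix A is primitive if some power A^n
    (and hence every higher power) has all entries positive.  By
    Wielandt's theorem it suffices to check powers up to (n-1)^2 + 1."""
    A = (np.asarray(A) > 0).astype(int)
    n = A.shape[0]
    N = N or (n - 1) ** 2 + 1
    P = A.copy()
    for _ in range(N):
        if P.all():
            return True
        P = ((P @ A) > 0).astype(int)   # keep entries 0/1 to avoid overflow
    return False

def is_weakly_primitive(A, N=None):
    """Weak primitivity: entries of A^n are required to be positive only
    outside the zero rows and zero columns of A."""
    A = (np.asarray(A) > 0).astype(int)
    n = A.shape[0]
    required = (A.sum(axis=1) > 0)[:, None] & (A.sum(axis=0) > 0)[None, :]
    if not required.any():
        return False
    N = N or (n - 1) ** 2 + 1
    P = A.copy()
    for _ in range(N):
        if P[required].all():
            return True
        P = ((P @ A) > 0).astype(int)
    return False
```

For instance, the Golden Mean matrix $\mathbb{H}_{2}$ of Section 2 has a zero fourth row and column, so it is weakly primitive but not primitive.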
The main theorem for topological mixing is proven as Theorem \ref{theorem:3.14} and stated as follows. \begin{theorem} \label{theorem:1.1} If \begin{enumerate} \item[(i)] $\mathcal{B}\subset \Sigma_{2\times 2}(p)$ is locally crisscross-extendable, and
\item[(ii)] $\mathcal{B}$ satisfies three of the locally corner-extendable conditions $C(i)$, $1\leq i\leq 4$, \end{enumerate} then $\mathbb{H}_{n}(\mathcal{B})$ and $\mathbb{V}_{n}(\mathcal{B})$ are weakly primitive for all $n\geq 2$ if and only if $\Sigma(\mathcal{B})$ is topologically mixing. \end{theorem}
To provide checkable criteria, two sufficient conditions for the primitivity of $\mathbb{H}_{n}(\mathcal{B})$ and $\mathbb{V}_{n}(\mathcal{B})$ are introduced in Section 4; they are
\begin{enumerate}
\item[(i)] invariant diagonal cycles of connecting operator and
\item[(ii)] primitive commutative cycles of connecting operator.
\end{enumerate} An invariant diagonal cycle is a periodic structure of connecting operators; it is imposed to prove that, for some given $q\geq 1$, the primitivity of $\mathbb{H}_{n+kq}(\mathcal{B})$ follows from the primitivity of $\mathbb{H}_{n+(k-1)q}(\mathcal{B})$, $k\geq 1$; the conditions are then used to establish inductively that $\mathbb{H}_{n}$ is primitive for all $n\geq 2$. When either condition (i) or condition (ii) applies, only conditions concerning $\mathbb{H}_{n}$, $2\leq n\leq q+1$, have to be verified to ensure that $\mathbb{H}_{n}$ is primitive for all $n\geq 2$. More precisely, when an $S$-invariant diagonal cycle $\overline{\beta}_{q}=\beta_{1}\beta_{2}\cdots\beta_{q}\beta_{1}$ of order $(m,q)$, with invariant index set $\mathcal{K}$, exists, only the primitivity of $\underset{l\in\mathcal{K}}{\sum} H_{m,n;\beta_{1}}^{(l)}$ has to be verified for $2\leq n\leq q+1$. A similar result holds for primitive commutative cycles. See Section 4 for the details of the notation used; see Theorems 4.4 and 4.8 for detailed results.
Next, strong specification, which is stronger than topological mixing, is considered. The hole-filling condition (HFC) introduced in Definition 5.1 provides checkable sufficient conditions for strong specification. HFC is closely related to the extension property called square filling \cite{40-1,40-2}. The main theorem for strong specification is proven as Theorem \ref{theorem:6.4} and stated as follows.
Let $A=[a_{i,j}]_{n\times n}$ be a non-negative matrix; the index set of non-zero rows of $A$ and the index set of non-zero columns of $A$ are denoted by
\begin{equation}\label{eqn:1.51} \begin{array}{ccc} r(A)=\left\{i \mid \underset{j=1}{\overset{n}{\sum}}a_{i,j}>0\right\} & \text{and} & c(A)=\left\{j \mid \underset{i=1}{\overset{n}{\sum}}a_{i,j}>0\right\}, \end{array} \end{equation} respectively.
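In matrix terms these index sets are immediate to compute; the following helpers (a sketch assuming NumPy, with one-based indices as in (\ref{eqn:1.51})) may clarify the definition.

```python
import numpy as np

def r(A):
    """Index set of the non-zero rows of A (one-based, as in the paper)."""
    A = np.asarray(A)
    return {i + 1 for i in range(A.shape[0]) if A[i, :].sum() > 0}

def c(A):
    """Index set of the non-zero columns of A (one-based)."""
    A = np.asarray(A)
    return {j + 1 for j in range(A.shape[1]) if A[:, j].sum() > 0}
```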
\begin{theorem} \label{theorem:1.2} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, if there exists $k\geq 2$ such that \begin{enumerate} \item[(i)] $r(\mathbb{H}_{k})=c(\mathbb{H}_{k})$ and $r(\mathbb{V}_{k})=c(\mathbb{V}_{k})$,
\item[(ii)] $\mathcal{B}$ satisfies $($HFC$)_{k}$ with size $(M,N)$ for some $M,N\geq 2k-3$, and
\item[(iii)] $\mathbb{H}_{k}$ is weakly $(M-2k+5)$-primitive and $\mathbb{V}_{k}$ is weakly $(N-2k+5)$-primitive, \end{enumerate} then $\Sigma(\mathcal{B})$ has strong specification. \end{theorem}
Theorems \ref{theorem:1.1} and \ref{theorem:1.2} are useful in verifying mixing properties. They can be used to check the results concerning strong specification and topological mixing in the literature, and can also be applied to other problems. In many physical problems, edge coloring is very common. Results concerning vertex coloring can easily be extended to edge coloring and are omitted here.
The rest of this paper is organized as follows. Section 2 introduces ordering matrices of local patterns, transition matrices and connecting operators. Section 3 introduces locally corner-extendable conditions and local crisscross-extendability to study rectangle-extendability and topological mixing. Section 4 introduces invariant diagonal cycles and primitive commutative cycles of connecting operators to establish sufficient conditions for the primitivity of $\mathbb{H}_{n}$ or $\mathbb{V}_{n}$. Section 5 introduces the $k$ hole-filling condition for strong specification. The Appendix lists various mixing properties.
\numberwithin{equation}{section}
\section{Preliminaries}
\label{sec:2}
\hspace{0.5cm} This section reviews the essential aspects of the ordering matrices of local patterns and their associated transition matrices \cite{1}. It then introduces connecting operators \cite{2}.
Since the theory developed in this paper depends heavily on transition matrices and connecting operators, this section presents, for convenience, their most important properties.
As presented elsewhere \cite{1}, when $p\geq 2$ is fixed, the ordering matrices $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ are introduced to arrange systematically all local patterns in $\Sigma_{2\times n}(p)$ and $\Sigma_{n\times 2}(p)$, respectively. This arrangement yields simple recursive formulae for the transition matrices and the connecting operators, and thereby enables efficient computer verification of the sufficient conditions for topological mixing and strong specification. For the convenience of the reader, the necessary material from Ban and Lin \cite{1} and Ban \emph{et al.} \cite{2} is collected here.
An $n$-sequence $\overline{U}_{n}=(u_{1},u_{2},\cdots,u_{n})$ with $u_{k}\in\mathcal{S}_{p}$, $1\leq k\leq n$, is assigned a number by using the $n$-th order counting function $\psi\equiv\psi_{n}$:
\begin{equation}\label{eqn:2.1} \psi(\overline{U}_{n})=\psi(u_{1},u_{2},\cdots,u_{n})=1+\underset{k=1}{\overset{n}{\sum}}u_{k}p^{(n-k)}. \end{equation} The explicit counting formula (\ref{eqn:2.1}) enables the recursive formulae for $\mathbf{X}_{n}$ and $\mathbf{Y}_{n}$ to be identified.
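As a quick sanity check, $\psi$ is simply one plus the base-$p$ value of the sequence; a minimal sketch:

```python
def psi(u, p):
    """n-th order counting function (2.1): maps (u_1,...,u_n), each
    u_k in {0,...,p-1}, to 1 + sum_k u_k * p**(n-k), i.e. one plus the
    integer whose base-p digits are u_1 u_2 ... u_n."""
    n = len(u)
    return 1 + sum(u_k * p ** (n - 1 - k) for k, u_k in enumerate(u))
```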
The horizontal and vertical ordering matrices $\mathbf{X}_{2}=[x_{i_{1},j_{1}}]_{p^{2}\times p^{2}}$ and $\mathbf{Y}_{2}=[y_{i_{2},j_{2}}]_{p^{2}\times p^{2}}$ are defined by
\begin{equation}\label{eqn:2.2} \begin{array}{ccc} \begin{array}{c} \psfrag{a}{$x_{i_{1},j_{1}}=$} \psfrag{b}{{\footnotesize $u_{0,1}$}} \psfrag{c}{{\footnotesize $u_{1,1}$}} \psfrag{d}{{\footnotesize $u_{0,0}$}} \psfrag{e}{{\footnotesize $u_{1,0}$}} \psfrag{f}{} \includegraphics[scale=1.2]{x_ij.eps} \end{array} & \text{and} & \begin{array}{c} \psfrag{a}{$y_{i_{2},j_{2}}=$} \psfrag{b}{{\footnotesize $u'_{0,1}$}} \psfrag{c}{{\footnotesize $u'_{1,1}$}} \psfrag{d}{{\footnotesize $u'_{0,0}$}} \psfrag{e}{{\footnotesize $u'_{1,0}$}} \psfrag{f}{,} \includegraphics[scale=1.2]{x_ij.eps} \end{array} \end{array} \end{equation} where $u_{s,t},u'_{s,t}\in\mathcal{S}_{p}$, $0 \leq s,t\leq1$, with
\begin{equation*} \begin{array}{ccc} \left\{ \begin{array}{l} i_{1}=\psi(u_{0,0},u_{0,1}) \\ j_{1}=\psi(u_{1,0},u_{1,1}) \end{array} \right. & \text{and} & \left\{ \begin{array}{l} i_{2}=\psi(u'_{0,0},u'_{1,0}) \\ j_{2}=\psi(u'_{0,1},u'_{1,1}). \end{array} \right. \end{array} \end{equation*} For instance, when $p=2$, \begin{equation}\label{eqn:2.2-1} \begin{array}{ccc} \begin{array}{c} \psfrag{c}{$\mathbf{X}_{2}=$} \psfrag{d}{} \includegraphics[scale=0.475]{X_2.eps} \end{array} & \text{and} & \begin{array}{c} \psfrag{c}{$\mathbf{Y}_{2}=$} \psfrag{d}{.} \includegraphics[scale=0.475]{Y_2.eps} \end{array} \end{array} \end{equation}
The higher-order ordering matrices $\mathbf{X}_{n}=[x_{n;i,j}]_{p^{n}\times p^{n}}$ of $\Sigma_{2\times n}(p)$, $n\geq3$, are defined recursively as
\begin{equation}\label{eqn:2.5} \mathbf{X}_{n}=\left[X_{n;\alpha}\right]_{p\times p}= \left[ \begin{array}{cccc} X_{n;1} & X_{n;2} & \cdots & X_{n;p}\\ X_{n;p+1} & X_{n;p+2} & \cdots & X_{n;2p}\\ \vdots & \vdots & \ddots & \vdots \\ X_{n;p(p-1)+1} & X_{n;p(p-1)+2} & \cdots & X_{n;p^{2}} \end{array} \right], \end{equation} where
\begin{equation}\label{eqn:2.6} X_{n;\alpha}= \left[ y_{\alpha, j}X_{n-1;j} \right]_{p\times p} \end{equation} is a $p^{n-1}\times p^{n-1}$ matrix. Notably, the element $x_{n;i,j}$ is the $2\times n$ local pattern $U_{2\times n}=(u_{s,t})_{0\leq s\leq 1,0\leq t\leq n-1 }$ with
\begin{equation}\label{eqn:2.7} \begin{array}{rcl} i=\psi(u_{0,0},u_{0,1},\cdots,u_{0,n-1}) & \text{and} & j=\psi(u_{1,0},u_{1,1},\cdots,u_{1,n-1}). \end{array} \end{equation} Similarly, the higher-order ordering matrix $\mathbf{Y}_{n}$ can be defined recursively, as above.
Given a basic set $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, the horizontal and vertical transition matrices $\mathbb{H}_{2}=\mathbb{H}_{2}(\mathcal{B})=[h_{i,j}]_{p^{2}\times p^{2}}$ and $\mathbb{V}_{2}=\mathbb{V}_{2}(\mathcal{B})=[v_{i,j}]_{p^{2}\times p^{2}}$ are given by
\begin{equation}\label{eqn:2.8} \begin{array}{ccc} \left\{ \begin{array}{rl} h_{i,j}=1 & \text{if }x_{i,j}\in\mathcal{B} ,\\ h_{i,j}=0 & \text{if }x_{i,j}\notin\mathcal{B}, \end{array} \right. & \text{and} & \left\{ \begin{array}{rl} v_{i,j}=1 & \text{if }y_{i,j}\in\mathcal{B} ,\\ v_{i,j}=0 & \text{if }y_{i,j}\notin\mathcal{B}. \end{array} \right. \end{array} \end{equation}
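The indicator matrices of (\ref{eqn:2.8}) can be assembled directly from the orderings in (\ref{eqn:2.2}). The sketch below (assuming NumPy, and a column-wise tuple encoding $((u_{0,0},u_{0,1}),(u_{1,0},u_{1,1}))$ of a $2\times 2$ pattern, which is an assumption of this sketch) is tested against the Golden Mean shift that appears as an example later in this section.

```python
import numpy as np
from itertools import product

def psi(u, p):
    # counting function (2.1), one-based
    n = len(u)
    return 1 + sum(u_k * p ** (n - 1 - k) for k, u_k in enumerate(u))

def transition_matrices(B, p):
    """Indicator matrices H_2, V_2 of (2.8), with the horizontal
    ordering i = psi(u00, u01), j = psi(u10, u11) and the vertical
    ordering i = psi(u00, u10), j = psi(u01, u11) as in (2.2)."""
    H = np.zeros((p * p, p * p), dtype=int)
    V = np.zeros((p * p, p * p), dtype=int)
    for (u00, u01), (u10, u11) in B:
        H[psi((u00, u01), p) - 1, psi((u10, u11), p) - 1] = 1
        V[psi((u00, u10), p) - 1, psi((u01, u11), p) - 1] = 1
    return H, V

# Golden Mean basic set: no two horizontally or vertically adjacent 1s
B = {((u00, u01), (u10, u11))
     for u00, u01, u10, u11 in product((0, 1), repeat=4)
     if not (u00 and u10 or u01 and u11 or u00 and u01 or u10 and u11)}
H2, V2 = transition_matrices(B, 2)
```

For this $\mathcal{B}$, the construction reproduces the matrix $\mathbb{H}_{2}=\mathbb{V}_{2}$ displayed in the Golden Mean example below.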
Before the formula that relates $\mathbb{H}_{n}$ to $\mathbb{H}_{n+1}$ is presented, two kinds of products of matrices must be defined. For any two matrices $A=[a_{i,j}]$ and $B=[b_{k,l}]$, the Kronecker product (tensor product) of $A\otimes B$ is defined by
\begin{equation*} A\otimes B=\left[a_{i,j}B\right]. \end{equation*} Next, for any two $m\times m$ matrices $C=[c_{i,j}]$ and $D=[d_{i,j}]$, where $c_{i,j}$ and $d_{i,j}$ are numbers or matrices, the Hadamard product of $C\circ D$ is defined by
\begin{equation*} C\circ D=\left[c_{i,j}\cdot d_{i,j}\right], \end{equation*} where the product $c_{i,j}\cdot d_{i,j}$ of $c_{i,j}$ and $d_{i,j}$ may be a product of numbers, of numbers and matrices or of matrices, whenever such a product is well-defined.
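Both products are available in standard numerical libraries; a small NumPy illustration (not from the paper):

```python
import numpy as np

A = np.array([[1, 0],
              [1, 1]])
B = np.array([[0, 1],
              [1, 0]])

# Kronecker (tensor) product: the block matrix [a_ij * B]
K = np.kron(A, B)

# Hadamard product: entrywise product of same-shaped matrices
Had = A * B
```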
According to (\ref{eqn:2.5}) and (\ref{eqn:2.6}), the higher-order transition matrices $\mathbb{H}_{n}$, $n\geq3$, can be defined as
\begin{equation}\label{eqn:2.9} \mathbb{H}_{n}= \left[ H_{n;\alpha} \right]_{p\times p}, \end{equation} where
\begin{equation}\label{eqn:2.10} H_{n;\alpha}= \left[ v_{\alpha,j}H_{n-1;j} \right]_{p\times p} \end{equation} is a $p^{n-1}\times p^{n-1}$ zero-one matrix. Indeed, from the relation between $\mathbf{X}_{2}$ and $\mathbf{Y}_{2}$ given by (\ref{eqn:2.2}),
\begin{equation}\label{eqn:2.10-1} H_{n;\alpha}= \left(H_{2;\alpha}\right)_{p\times p} \circ \left[H_{n-1;j}\right]_{p\times p}. \end{equation}
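The recursion (\ref{eqn:2.9})--(\ref{eqn:2.10}) is straightforward to implement. The sketch below (assuming NumPy and the $\psi$-ordering of this section) builds $\mathbb{H}_{3}$ for the Golden Mean shift; a direct count shows that this shift has exactly $17$ admissible $2\times 3$ patterns, so $\mathbb{H}_{3}$ has $17$ non-zero entries.

```python
import numpy as np

def blocks(M, p):
    """Split M into a p x p grid of equal square blocks, listed row-major."""
    m = M.shape[0] // p
    return [M[r*m:(r+1)*m, c*m:(c+1)*m] for r in range(p) for c in range(p)]

def higher_H(H2, V2, p, n):
    """Recursive construction (2.9)-(2.10):
    H_k = [H_{k;alpha}]_{p x p}, H_{k;alpha} = [v_{alpha,j} H_{k-1;j}]_{p x p}."""
    H = np.asarray(H2)
    V = np.asarray(V2)
    for _ in range(n - 2):
        Hj = blocks(H, p)                      # H_{k-1;j}, j = 1..p^2 row-major
        Halpha = [np.block([[V[a, r * p + c] * Hj[r * p + c] for c in range(p)]
                            for r in range(p)])
                  for a in range(p * p)]       # a = alpha - 1
        H = np.block([[Halpha[r * p + c] for c in range(p)] for r in range(p)])
    return H

# Golden Mean shift: H_2 = V_2, as in the example of this section
H2 = np.array([[1, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0]])
H3 = higher_H(H2, H2, 2, 3)
```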
Furthermore, for any $n\geq 2$ and $q\geq 1$, $\mathbb{H}_{n+q}$ are decomposed by applying (\ref{eqn:2.9}) $q+1$ times, as follows. For any $q\geq 1$ and $0\leq r\leq q-1$, define
\begin{equation*} H_{n+q;\beta_{1};\beta_{2};\cdots ;\beta_{r+1}} = \left[ H_{n+q;\beta_{1};\beta_{2};\cdots ;\beta_{r+1};\alpha} \right]_{p\times p}. \end{equation*} Therefore, for any $q\geq 0$, $\mathbb{H}_{n+q}$ can be represented as a $p^{q+1}\times p^{q+1}$ matrix
\begin{equation}\label{eqn:2.11} \mathbb{H}_{n+q}\equiv\left[H_{n+q;i,j}\right]_{p^{q+1}\times p^{q+1}}=\left[H_{n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}\right]_{p^{q+1}\times p^{q+1}}. \end{equation} In particular, when $p=2$ and $q=0$,
\begin{equation*} \mathbb{H}_{n}= \left[ \begin{array}{cc} H_{n;1,1} & H_{n;1,2} \\ H_{n;2,1} & H_{n;2,2} \end{array} \right]_{2 \times 2} = \left[ \begin{array}{cc} H_{n;1} & H_{n;2} \\ H_{n;3} & H_{n;4} \end{array} \right]_{2 \times 2} ; \end{equation*} when $p=2$ and $q=1$, \begin{equation*} \mathbb{H}_{n+1}= \left[ \begin{array}{cccc} H_{n+1;1,1} & H_{n+1;1,2}& H_{n+1;1,3}& H_{n+1;1,4} \\ H_{n+1;2,1} & H_{n+1;2,2}& H_{n+1;2,3}& H_{n+1;2,4} \\ H_{n+1;3,1} & H_{n+1;3,2}& H_{n+1;3,3}& H_{n+1;3,4} \\ H_{n+1;4,1} & H_{n+1;4,2}& H_{n+1;4,3}& H_{n+1;4,4} \end{array} \right]_{2^{2} \times 2^{2}} = \left[ \begin{array}{cccc} H_{n+1;1;1} & H_{n+1;1;2} & H_{n+1;2;1} & H_{n+1;2;2} \\ H_{n+1;1;3} & H_{n+1;1;4} & H_{n+1;2;3} & H_{n+1;2;4} \\ H_{n+1;3;1} & H_{n+1;3;2} & H_{n+1;4;1} & H_{n+1;4;2} \\ H_{n+1;3;3} & H_{n+1;3;4} & H_{n+1;4;3} & H_{n+1;4;4} \end{array} \right]_{2^{2} \times 2^{2}} . \end{equation*}
Now, high-order transition matrices $\mathbb{H}_{n+q}$ can be reduced to lower order transition matrices $\mathbb{H}_{n}$ as follows \cite{1}. \begin{proposition} \label{proposition:2.0} For any $n\geq 2$ and $q\geq 1$, \begin{equation}\label{eqn:2.19} \mathbb{H}_{n+q}=\left( \mathbb{H}_{q+1} \right)_{p^{q+1}\times p^{q+1}}\circ\left( E_{p^{q}\times p^{q}}\otimes \left[H_{n;i,j}\right]_{p\times p} \right), \end{equation} where $E_{k\times k}$ is the $k\times k$ full matrix. \end{proposition}
The formulae (\ref{eqn:2.10-1}) and (\ref{eqn:2.19}) are useful in establishing conditions under which $\mathcal{B}$ is rectangle-extendable; see Theorem \ref{theorem:3.5}.
To obtain a recursive formula like that in Proposition \ref{proposition:2.0} for $\mathbb{H}_{n+q}^{m}$ to $\mathbb{H}_{n}^{m}$, $m\geq 2$ and $q\geq 1$, the connecting operator $\mathbb{C}_{m}$ must be introduced. The recursive formula is crucial in establishing sufficient conditions for the primitivity of $\mathbb{H}_{k}$ for all $k\geq 2$ by verifying the primitivity of a finite number of $\mathbb{H}_{k}$, $2\leq k\leq K$. See Theorems \ref{theorem:4.11-1} and \ref{theorem:5.3} for details.
Let $\mathbb{H}_{n}=[H_{n;i,j}]_{p\times p}$; for $m\geq 2$, the elementary pattern of $\mathbb{H}_{n}^{m}$ is
\begin{equation*} H_{n;j_{1},j_{2}}H_{n;j_{2},j_{3}}\cdots H_{n;j_{m},j_{m+1}}, \end{equation*} where $1\leq j_{s} \leq p$, $1\leq s\leq m+1$. Let
\begin{equation}\label{eqn:2.20} H_{m,n;\alpha}^{(k)}=H_{n;j_{1},j_{2}}H_{n;j_{2},j_{3}}\cdots H_{n;j_{m},j_{m+1}}, \end{equation} where \begin{equation*} \begin{array}{rcl} \alpha=\psi(j_{1}-1,j_{m+1}-1) & \text{and} & k=\psi(j_{2}-1,j_{3}-1,\cdots,j_{m}-1). \end{array} \end{equation*}
Therefore, for $m\geq 2$, \begin{equation}\label{eqn:2.22} \mathbb{H}_{n}^{m}= \left[ H_{m,n;\alpha} \right]_{p \times p}, \end{equation} where
\begin{equation*} H_{m,n;\alpha}=\underset{k=1}{\overset{p^{m-1}}{\sum}}H_{m,n;\alpha}^{(k)}. \end{equation*}
Now, the connecting operator $\mathbb{C}_{m}=[C_{m;i,j}]$ introduced in \cite{2} is recalled. First, the connecting ordering matrix $\mathbf{C}_{m}=[\mathbf{C}_{m;i,j}]$, an arrangement of $\Sigma_{(m+1)\times 2}(p)$ that differs from $\mathbf{Y}_{m+1}$, is introduced. $\mathbf{C}_{m}=[\mathbf{C}_{m;i,j}]_{p^{2}\times p^{2}}$, where $\mathbf{C}_{m;i,j}$ is a $p^{m-1}\times p^{m-1}$ matrix of local patterns, is defined as follows.
With fixed $1\leq i,j\leq p^{2}$, for $1\leq s,t \leq p^{m-1}$,
\begin{equation}\label{eqn:2.23} \begin{array}{c} \psfrag{a}{$\left(\mathbf{C}_{m;i,j}\right)_{s,t}=$} \psfrag{b}{{\footnotesize $u_{0,0}$}} \psfrag{c}{{\footnotesize $u_{1,0}$}} \psfrag{d}{{\footnotesize$\cdots$}} \psfrag{e}{{\footnotesize $u_{m,0}$}} \psfrag{g}{{\footnotesize $u_{0,1}$}} \psfrag{h}{{\footnotesize $u_{1,1}$}} \psfrag{k}{{\footnotesize $u_{m,1}$}} \psfrag{m}{} \includegraphics[scale=1.3]{C_m_ij_st.eps} \end{array} \end{equation} with $i=\psi(u_{0,0},u_{0,1})$, $j=\psi(u_{m,0},u_{m,1})$, $s=\psi(u_{1,0},u_{2,0},\cdots,u_{m-1,0})$ and $t=\psi(u_{1,1},u_{2,1},\cdots,u_{m-1,1})$.
Now, $\mathbf{C}_{m+1;i,j}$ can be obtained in terms of $\mathbf{C}_{m;k,l}$ as follows \cite{2}.
\begin{proposition} \label{proposition:2.1} Let $\mathbf{X}_{2}=[x_{i,j}]_{p^{2}\times p^{2}}$. For any $m\geq 2$ and $1\leq i,j\leq p^{2}$, \begin{equation*} \mathbf{C}_{m+1;i,j}= \left[x_{i,\alpha}\mathbf{C}_{m;\alpha,j} \right]_{p\times p}. \end{equation*} \end{proposition}
The matrix product of $\mathbf{C}_{m;i,j}$ and $\mathbf{C}_{m;j,k}$ cannot connect local patterns in the vertical direction; however, the matrix product of $\mathbf{S}_{m;\alpha,\beta}$ and $\mathbf{S}_{m;\beta,\gamma}$ can. Changing the index of $\mathbf{C}_{m}=[\mathbf{C}_{m;i,j}]_{p^{2}\times p^{2}}$ enables the ordering matrix $\mathbf{S}_{m}=[\mathbf{S}_{m;\alpha,\beta}]_{p^{2}\times p^{2}}$ to be defined as
\begin{equation}\label{eqn:2.24} \mathbf{S}_{m;\alpha,\beta}=\mathbf{C}_{m;\psi(\alpha_{1},\beta_{1}),\psi(\alpha_{2},\beta_{2})}, \end{equation} where $\alpha_{k},\beta_{k}\in\mathcal{S}_{p}$, $1\leq k\leq 2$, satisfying $\alpha=\psi(\alpha_{1},\alpha_{2})$ and $\beta=\psi(\beta_{1},\beta_{2})$. In particular, for $p=2$, \begin{equation*} \mathbf{C}_{m}= \left[ \begin{array}{cccc} \mathbf{C}_{m;1,1} & \mathbf{C}_{m;1,2} & \mathbf{C}_{m;1,3} & \mathbf{C}_{m;1,4} \\ \mathbf{C}_{m;2,1} & \mathbf{C}_{m;2,2} & \mathbf{C}_{m;2,3} & \mathbf{C}_{m;2,4} \\ \mathbf{C}_{m;3,1} & \mathbf{C}_{m;3,2} & \mathbf{C}_{m;3,3} & \mathbf{C}_{m;3,4} \\ \mathbf{C}_{m;4,1} & \mathbf{C}_{m;4,2} & \mathbf{C}_{m;4,3} & \mathbf{C}_{m;4,4} \end{array} \right] = \left[ \begin{array}{cccc} \mathbf{S}_{m;1,1} & \mathbf{S}_{m;1,2} & \mathbf{S}_{m;2,1} & \mathbf{S}_{m;2,2} \\ \mathbf{S}_{m;1,3} & \mathbf{S}_{m;1,4} & \mathbf{S}_{m;2,3} & \mathbf{S}_{m;2,4} \\ \mathbf{S}_{m;3,1} & \mathbf{S}_{m;3,2} & \mathbf{S}_{m;4,1} & \mathbf{S}_{m;4,2} \\ \mathbf{S}_{m;3,3} & \mathbf{S}_{m;3,4} & \mathbf{S}_{m;4,3} & \mathbf{S}_{m;4,4} \end{array} \right]. \end{equation*} Indeed, for $1\leq s,t \leq p^{m-1}$,
\begin{equation}\label{eqn:2.25} \begin{array}{c} \psfrag{a}{$(\mathbf{S}_{m;\alpha,\beta})_{s,t}=$} \psfrag{b}{{\footnotesize $u_{0,0}$}} \psfrag{c}{{\footnotesize $u_{1,0}$}} \psfrag{d}{{\footnotesize$\cdots$}} \psfrag{e}{{\footnotesize $u_{m,0}$}} \psfrag{g}{{\footnotesize $u_{0,1}$}} \psfrag{h}{{\footnotesize $u_{1,1}$}} \psfrag{k}{{\footnotesize $u_{m,1}$}} \psfrag{m}{} \includegraphics[scale=1.4]{C_m_ij_st.eps} \end{array} \end{equation} with $\alpha=\psi(u_{0,0},u_{m,0})$, $\beta=\psi(u_{0,1},u_{m,1})$, $s=\psi(u_{1,0},u_{2,0},\cdots,u_{m-1,0})$ and $t=\psi(u_{1,1},u_{2,1},\cdots,u_{m-1,1})$. From (\ref{eqn:2.25}), the matrix product of $\mathbf{S}_{m;\alpha,\beta}$ and $\mathbf{S}_{m;\beta,\gamma}$ represents the vertical connection of the patterns on $\mathbb{Z}_{(m+1)\times 2}$.
Now, given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, for $m\geq 2$, the connecting operator $\mathbb{C}_{m}=[C_{m;i,j}]_{p^{2}\times p^{2}}$ of $\mathbf{C}_{m}=[\mathbf{C}_{m;i,j}]_{p^{2}\times p^{2}}$ is defined as follows. For $1\leq s,t \leq p^{m-1}$,
\begin{equation*} \left\{ \begin{array}{rl} (C_{m;i,j})_{s,t}=1 & \text{if }(\mathbf{C}_{m;i,j})_{s,t} \text{ is }\mathcal{B}\text{-admissible}, \\ (C_{m;i,j})_{s,t}=0 & \text{otherwise.} \end{array} \right. \end{equation*}
In the following, $\mathbb{C}_{2}$ can be obtained explicitly. Let $\mathbb{H}_{2}=[h_{i,j}]_{p^{2}\times p^{2}}$. Then, for $1\leq i,j\leq p^{2}$,
{\footnotesize \begin{equation}\label{eqn:2.26} \begin{array}{rl}
& C_{2;i,j} \\
& \\
= & \left[ \begin{array}{cccc} h_{i,1} & h_{i,2} & \cdots & h_{i,p} \\ h_{i,p+1} & h_{i,p+2} &\cdots & h_{i,2p} \\ \vdots& \vdots & \vdots & \vdots \\ h_{i,(p-1)p+1} & h_{i,(p-1)p+2} & \cdots & h_{i,p^{2}} \end{array} \right] \circ \left[ \begin{array}{cccc} h_{1,j} & h_{2,j} & \cdots & h_{p,j} \\ h_{p+1,j} & h_{p+2,j} &\cdots & h_{2p,j} \\ \vdots& \vdots & \vdots & \vdots \\ h_{(p-1)p+1,j} & h_{(p-1)p+2,j} & \cdots & h_{p^{2},j} \end{array} \right] \end{array} \end{equation} } is a $p\times p$ zero-one matrix. By Proposition \ref{proposition:2.1}, the connecting operator $\mathbb{C}_{m+1}$ can also be obtained from $\mathbb{C}_{m}$. For $m\geq 2$, $\mathbb{C}_{m+1}=[C_{m+1;i,j}]_{p^{2}\times p^{2}}$ satisfies
\begin{equation}\label{eqn:2.27} C_{m+1;i,j}= \left[ h_{i,\alpha}C_{m;\alpha,j} \right]_{p\times p}. \end{equation}
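Formula (\ref{eqn:2.26}) says that $C_{2;i,j}$ is the Hadamard product of the $i$-th row and the $j$-th column of $\mathbb{H}_{2}$, each reshaped row-major into a $p\times p$ matrix. A sketch assuming NumPy, with one-based dictionary keys as in the paper, tested against the Golden Mean example below:

```python
import numpy as np

def connecting_C2(H2, p):
    """C_2 = [C_{2;i,j}] from (2.26): the Hadamard product of row i of
    H_2 and column j of H_2, each reshaped row-major into p x p."""
    H2 = np.asarray(H2)
    return {(i + 1, j + 1): H2[i, :].reshape(p, p) * H2[:, j].reshape(p, p)
            for i in range(p * p) for j in range(p * p)}

# Golden Mean example
H2 = np.array([[1, 1, 1, 0],
               [1, 0, 1, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0]])
C2 = connecting_C2(H2, 2)
```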
From (\ref{eqn:2.24}), $\mathbb{S}_{m}=[S_{m;\alpha,\beta}]_{p^{2}\times p^{2}}$ is defined by
\begin{equation}\label{eqn:2.27-1} S_{m;\alpha,\beta}=C_{m;\psi(\alpha_{1},\beta_{1}),\psi(\alpha_{2},\beta_{2})}, \end{equation} where $0\leq\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\leq p-1$ such that $\alpha=\psi(\alpha_{1},\alpha_{2})$ and $\beta=\psi(\beta_{1},\beta_{2})$.
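The reindexing (\ref{eqn:2.24})/(\ref{eqn:2.27-1}) is a pure permutation of the $p^{2}\times p^{2}$ block indices; a sketch (blocks stored in a dictionary with one-based keys, an assumed data layout for this illustration):

```python
def S_from_C(C, p):
    """Reindexing (2.24)/(2.27-1): with alpha = psi(a1, a2) and
    beta = psi(b1, b2), S_{m;alpha,beta} = C_{m;psi(a1,b1),psi(a2,b2)}.
    Here psi(x, y) = 1 + x*p + y for a pair of digits in {0,...,p-1}."""
    S = {}
    for a1 in range(p):
        for a2 in range(p):
            for b1 in range(p):
                for b2 in range(p):
                    alpha = 1 + a1 * p + a2
                    beta = 1 + b1 * p + b2
                    S[(alpha, beta)] = C[(1 + a1 * p + b1, 1 + a2 * p + b2)]
    return S
```

With $p=2$ this reproduces the block arrangement displayed above, e.g. $S_{2;1,3}=C_{2;2,1}$ and $S_{2;2,1}=C_{2;1,3}$.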
For example, consider the Golden Mean shift, \begin{equation*} \mathbb{H}_{2}=\mathbb{V}_{2}=\left[ \begin{array}{cccc} 1 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right]. \end{equation*} By (\ref{eqn:2.26}), it can be verified that
\begin{equation*} \begin{array}{rl} \mathbb{C}_{2}= & \left[ \begin{array}{cccc} C_{2;1,1} & C_{2;1,2} & C_{2;1,3} & C_{2;1,4} \\ C_{2;2,1} & C_{2;2,2} & C_{2;2,3} & C_{2;2,4} \\ C_{2;3,1} & C_{2;3,2} & C_{2;3,3} & C_{2;3,4} \\ C_{2;4,1} & C_{2;4,2} & C_{2;4,3} & C_{2;4,4} \end{array} \right] \\ & \\ = & {\scriptsize \left[ \begin{array}{cccc} \left[\begin{array}{cc} 1& 1 \\ 1& 0 \end{array}\right] &\left[\begin{array}{cc} 1& 0 \\ 1& 0 \end{array}\right] & \left[\begin{array}{cc} 1& 1 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 0& 0 \\0& 0 \end{array}\right] \\ & & & \\ \left[\begin{array}{cc} 1& 0 \\ 1& 0 \end{array}\right] &\left[\begin{array}{cc} 1& 0 \\ 1& 0 \end{array}\right] & \left[\begin{array}{cc} 1& 0 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] \\ & & & \\ \left[\begin{array}{cc} 1& 1 \\ 0& 0 \end{array}\right] &\left[\begin{array}{cc} 1& 0 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 1& 1 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] \\ & & & \\ \left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] &\left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] & \left[\begin{array}{cc} 0& 0 \\ 0& 0 \end{array}\right] \end{array} \right]. } \end{array} \end{equation*} Moreover, by (\ref{eqn:2.27-1}), \begin{equation*} \mathbb{S}_{2}= \left[ \begin{array}{cccc} S_{2;1,1} & S_{2;1,2} & S_{2;1,3} & S_{2;1,4} \\ S_{2;2,1} & S_{2;2,2} & S_{2;2,3} & S_{2;2,4} \\ S_{2;3,1} & S_{2;3,2} & S_{2;3,3} & S_{2;3,4} \\ S_{2;4,1} & S_{2;4,2} & S_{2;4,3} & S_{2;4,4} \end{array} \right] =\left[ \begin{array}{cccc} C_{2;1,1} & C_{2;1,2} & C_{2;2,1} & C_{2;2,2} \\ C_{2;1,3} & C_{2;1,4} & C_{2;2,3} & C_{2;2,4} \\ C_{2;3,1} & C_{2;3,2} & C_{2;4,1} & C_{2;4,2} \\ C_{2;3,3} & C_{2;3,4} & C_{2;4,3} & C_{2;4,4} \end{array} \right]. \end{equation*}
Now, the relation between $\mathbb{H}_{n+1}^{m}$ and $\mathbb{H}_{n}^{m}$ is elucidated as follows. Since the sizes of $H_{m,n+1;\alpha}^{(k)}$ and $H_{m,n;\beta}^{(l)}$ are different, the elementary pattern $H_{m,n+1;\alpha}^{(k)}$ can be reduced further as follows.
Let
\begin{equation}\label{eqn:2.28} H_{m,n+1;\alpha}^{(k)}= \left[H_{m,n+1;\alpha;\beta}^{(k)} \right]_{p\times p}. \end{equation}
In the following theorem, $H^{(k)}_{m,n+1;\alpha;\beta}$ is obtained as the product of $S_{m;\alpha,\beta}$ and $H^{(l)}_{m,n;\beta}$ \cite{2}, so $S_{m;\alpha,\beta}$ reduces $\mathbb{H}_{n+1}^{m}$ to $\mathbb{H}_{n}^{m}$. \begin{proposition} \label{proposition:2.2} For any $m,n\geq 2$,
\begin{equation}\label{eqn:2.31} H^{(k)}_{m,n+1;\alpha;\beta}= \underset{l=1}{\overset{p^{m-1}}{\sum}}\left( S_{m;\alpha,\beta}\right)_{k,l}H^{(l)}_{m,n;\beta}. \end{equation} Furthermore, for $n=1$, let
\begin{equation}\label{eqn:2.32} H_{m,2;\alpha}^{(k)}= \left[ H_{m,2;\alpha;\beta}^{(k)} \right]_{p \times p}, \end{equation} then \begin{equation}\label{eqn:2.33} H_{m,2;\alpha;\beta}^{(k)}=\underset{l=1}{\overset{p^{m-1}}{\sum}}\left(S_{m;\alpha,\beta}\right)_{k,l}. \end{equation} \end{proposition}
Furthermore, for $q\geq 2$, $q$-many $S_{m;\alpha,\beta}$ can reduce $\mathbb{H}_{n+q}^{m}$ to $\mathbb{H}_{n}^{m}$ as follows. For any positive integer $q\geq 2$, the elementary patterns of $\mathbb{H}_{n+q}^{m}$ can be decomposed by applying (\ref{eqn:2.28}) $q$ times. Indeed, for $q\geq 2$ and $1\leq r\leq q-1$, define
\begin{equation*} H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{r+1}}^{(k)}
= \left[
H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{r+1};\alpha}^{(k)} \right]_{p\times p}. \end{equation*} Therefore, for any $q\geq 1$, $\mathbb{H}_{n+q}^{m}$ can be represented as a $p^{q+1}\times p^{q+1}$ matrix
\begin{equation}\label{eqn:2.34} \mathbb{H}_{n+q}^{m}\equiv \left[H_{m,n+q;i,j}\right]_{p^{q+1}\times p^{q+1}}=\left[H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}\right]_{p^{q+1}\times p^{q+1}} \end{equation} where
\begin{equation*} H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}=\underset{k=1}{\overset{p^{m-1}}{\sum}}H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}^{(k)} \end{equation*} is a $p^{n-1}\times p^{n-1}$ matrix.
As in Proposition \ref{proposition:2.2}, the elementary patterns of $\mathbb{H}_{n+q}^{m}$ can be expressed as the product of $q$-many $S_{m;\alpha,\beta}$ and the elementary patterns of $\mathbb{H}_{n}^{m}$ \cite{2}.
\begin{proposition} \label{proposition:2.3} For any $m,n\geq 2$ and $q\geq 1$,
\begin{equation}\label{eqn:2.35} H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}^{(k)}=\underset{l=1}{\overset{p^{m-1}}{\sum}} (S_{m;\beta_{1},\beta_{2}}S_{m;\beta_{2},\beta_{3}}\cdots S_{m;\beta_{q},\beta_{q+1}})_{k,l} H_{m,n;\beta_{q+1}}^{(l)}, \end{equation} where $ 1 \leq \beta_{i}\leq p^{2}$, $1\leq i\leq q+1$. Moreover,
\begin{equation}\label{eqn:2.36} H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}=\underset{k,l=1}{\overset{p^{m-1}}{\sum}} (S_{m;\beta_{1},\beta_{2}}S_{m;\beta_{2},\beta_{3}}\cdots S_{m;\beta_{q},\beta_{q+1}})_{k,l} H_{m,n;\beta_{q+1}}^{(l)}. \end{equation} \end{proposition}
Similarly, for $\mathbb{V}_{2}$, the connecting operators are denoted by $\mathbb{U}_{m}=[U_{m;i,j}]$ (corresponding to $\mathbb{C}_{m}=[C_{m;i,j}]$ for $\mathbb{H}_{2}$) and $\mathbb{W}_{m}=[W_{m;\alpha,\beta}]$ (corresponding to $\mathbb{S}_{m}=[S_{m;\alpha,\beta}]$ for $\mathbb{H}_{2}$). The arguments that hold for $\mathbb{H}_{n}$ also hold for $\mathbb{V}_{n}$.
In the study of both topological mixing and strong specification, the transition matrices $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ and the connecting operators $\mathbb{S}_{m}$ and $\mathbb{C}_{m}$ are used extensively. Indeed, invariant diagonal cycles, primitive commutative cycles and (HFC$)_{k}$ can be expressed in terms of transition matrices and connecting operators as finitely checkable sufficient conditions; see Definitions 4.1 and 4.6 and Theorem 5.2. All of the extendability conditions except strong specification can be verified using transition matrices and connecting operators. Table 2.1 lists the related theorems.
\begin{equation*} \psfrag{a}{Mixing properties} \psfrag{b}{Results } \psfrag{c}{Strong specification} \psfrag{d}{Uniform filling property } \psfrag{e}{Corner gluing} \psfrag{f}{Block gluing} \psfrag{g}{Topological mixing} \psfrag{h}{{\small Expressions in $\mathbb{H}$ and $\mathbb{S}$}} \psfrag{j}{Sufficient conditions in} \psfrag{z}{{\small finitely expressions}} \psfrag{k}{Yes} \psfrag{l}{Yes} \psfrag{m}{Yes} \psfrag{n}{Theorem \ref{theorem:1.1}} \psfrag{o}{Theorem \ref{theorem:1.2}} \psfrag{p}{Theorem \ref{theorem:5.7}} \includegraphics[scale=1.07]{table.eps} \end{equation*}
\begin{equation*} \text{Table 2.1.} \end{equation*}
\numberwithin{equation}{section}
\section{Extendabilities and topological mixing}
\label{sec:3} \hspace{0.5cm} This section investigates the extendabilities and the relationship with the topological mixing of $\Sigma(\mathcal{B})$.
First, the main idea of finding sufficient conditions for topological mixing is stated. Given two patterns $U_{1}$ and $U_{2}$ defined on $R_{1}$ and $R_{2}+\mathbf{v}$, respectively, in general, $R_{1}$ and $R_{2}+\mathbf{v}$ are not located on a horizontal or a vertical line. Typically, the gluing process comprises three steps; an example is presented in Fig. 3.1. For clarity, in Fig. 3.1, the patterns $U$ are presented and the underlying lattices $R$ are omitted.
\begin{enumerate} \item[Step (1):] Extend $U_{2}$ to $\widetilde{U}_{2}$ such that $U_{1}$ can connect horizontally to $\widetilde{U}_{2}$. The combined pattern is an $L$-shaped pattern $U_{1}\bigcup\widetilde{U}_{1}\bigcup U_{2}\bigcup\widetilde{U}_{2}$.
\item[Step (2):] Extend the $L$-shaped pattern to a rectangular block, $U_{1}\bigcup\widetilde{U}_{1}\bigcup U_{2}\bigcup\widetilde{U}_{2}\bigcup U_{3}$.
\item[Step (3):] Extend the rectangular block to a global pattern on $\mathbb{Z}^{2}$. \end{enumerate}
\begin{equation*} \psfrag{a}{{\footnotesize $U_{1}$}} \psfrag{b}{{\footnotesize$U_{2}$}} \psfrag{c}{{\footnotesize$\widetilde{U}_{2}$}} \psfrag{d}{{\footnotesize $(1)$}} \psfrag{e}{{\footnotesize$\widetilde{U}_{1}$}} \psfrag{f}{\hspace{-0.5cm}{\footnotesize$U_{3}$}\hspace{0.2cm}$(2)$} \psfrag{g}{ $(3)$} \includegraphics[scale=0.9]{Fig1_1.eps} \end{equation*} \begin{equation*} \text{Figure 3.1.} \end{equation*}
To ensure that all processes can be executed, the following sufficient conditions are proposed to be applied in each step:
In Step (1), since $U_{2}$ is part of a global pattern, $U_{2}$ can be extended to $\widetilde{U}_{2}$. The primitivity of the horizontal transition matrices $\mathbb{H}_{n}$ and the vertical transition matrices $\mathbb{V}_{n}$, for each $n\geq 2$, ensures that $U_{1}$ can be connected to $\widetilde{U}_{2}$.
In Step (2), corner-extendability is introduced to enable every admissible $L$-shaped pattern to be extended to a rectangular block $U_{1}\bigcup\widetilde{U}_{1}\bigcup U_{2}\bigcup\widetilde{U}_{2}\bigcup U_{3}$.
In Step (3), rectangle-extendability is introduced to extend every rectangular block to form a global pattern on $\mathbb{Z}^{2}$; see Theorem \ref{theorem:3.5}.
Notably, extending $U_{2}$ to $\widetilde{U}_{2}$ and $U_{1}$ to $\widetilde{U}_{1}$ simultaneously demands a stronger sufficient condition for connecting $\widetilde{U}_{1}$ and $\widetilde{U}_{2}$, namely that a constant $M\geq1$ exists such that $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ are $M$-primitive for all $n\geq 2$. This condition resembles block gluing (see Definition A.1 (iv)). Therefore, the separate execution of Steps (1) and (2) weakens the sufficient conditions for topological mixing.
After the primitivity of $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ has been established, the locally corner-extendable conditions $C(1)\sim C(4)$ and locally crisscross-extendability are introduced to extend the L-shaped pattern and the rectangular pattern into a global pattern, and then to establish that $\Sigma(\mathcal{B})$ is topologically mixing.
The importance of the corners of a finite lattice has been noticed \cite{29-1, 42}. Johnson \emph{et al.} \cite{29-1} introduced the concept of corner gluing in a study of factors of higher-dimensional shifts of finite type. Similarly, to study rectangle-extendability and topological mixing, the corners of a rectangular lattice must be studied closely. Indeed, let the $L$-shaped lattices $\mathbb{L}_{1}=\mathbb{Z}_{3\times 3} \setminus \{(2,2)\}$, $\mathbb{L}_{2}=\mathbb{Z}_{3\times 3} \setminus \{(0,2)\}$, $\mathbb{L}_{3}=\mathbb{Z}_{3\times 3} \setminus \{(0,0)\}$ and $\mathbb{L}_{4}=\mathbb{Z}_{3\times 3} \setminus \{(2,0)\}$; accordingly, \begin{equation*} \begin{array}{cccc} \psfrag{a}{{\footnotesize $\mathbb{L}_{1}= $}} \includegraphics[scale=0.7]{C1.eps}, & \psfrag{a}{{\footnotesize$\mathbb{L}_{2}=$}} \includegraphics[scale=0.7]{C2.eps},
& \psfrag{a}{\footnotesize{$\mathbb{L}_{3}=$}} \includegraphics[scale=0.7]{C3.eps},
& \psfrag{a}{\footnotesize{$\mathbb{L}_{4}=$}} \includegraphics[scale=0.7]{C4.eps} \end{array}. \end{equation*} \begin{equation*} \text{Figure 3.2.} \end{equation*} For $1\leq i\leq 4$, a given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$ satisfies the locally corner-extendable condition $C(i)$ if for any $U\in\Sigma_{\mathbb{L}_{i}}(\mathcal{B})$, there exists $U'\in\Sigma_{3\times 3}(\mathcal{B})$ such that $U'\mid_{\mathbb{L}_{i}}=U$.
The crisscross lattice $\mathbb{Z}_{c}$ is defined by
\begin{equation}\label{eqn:3.1}
\mathbb{Z}_{c}=\underset{0\leq |i|+|j|\leq 1}{\bigcup}\mathbb{Z}_{2\times 2}((i,j)). \end{equation} Indeed, \begin{equation*} \psfrag{a}{{\tiny $O$}} \psfrag{f}{$\mathbb{Z}_{c}=$} \psfrag{g}{,} \includegraphics[scale=0.8]{crisscross.eps} \end{equation*} \begin{equation*} \text{Figure 3.3.} \end{equation*} where $O=(0,0)$ is the origin of $\mathbb{Z}^{2}$. For $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, let \begin{equation*} \Sigma_{c}(\mathcal{B})=\Sigma_{\mathbb{Z}_{c} }(\mathcal{B}). \end{equation*} For $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, $\mathcal{B}$ satisfies local crisscross-extendability if for each $B\in \mathcal{B}$, there exists $U_{c}\in\Sigma_{c}(\mathcal{B})$ with $U_{c}\mid_{\mathbb{Z}_{2\times 2}}=B$.
Clearly, primitivity of $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ for all $n\geq 2$ can be interpreted as topological mixing in horizontal and vertical directions, respectively. In general, the lattices $R_{1}$ and $R_{2}+\mathbf{v}$ are not located along horizontal or vertical lines. Accordingly, mixing in directions other than horizontal and vertical must be studied. In so doing, locally corner-extendable conditions and local crisscross-extendability are useful.
First, the rectangle-extendability of $\mathcal{B}$ is defined as follows.
\begin{definition} \label{definition:3.1} For $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, $\mathcal{B}$ is called rectangle-extendable if for every rectangular block $U_{m\times n}\in\Sigma_{m\times n}(\mathcal{B})$, $m,n\geq 2$, there exists $W\in\Sigma(\mathcal{B})$ such that $W\mid_{\mathbb{Z}_{m\times n}}=U_{m\times n}$. \end{definition} In general, due to the undecidability of two-dimensional shifts of finite type, rectangle-extendability is not finitely checkable. Notably, if $\mathcal{B}$ is rectangle-extendable, then $\Sigma(\mathcal{B})\neq \emptyset$. The converse is not true in general.
Whether or not $\mathbb{H}_{2}(\mathcal{B})$ or $\mathbb{V}_{2}(\mathcal{B})$ contains a zero row or a zero column has a substantial impact on the study of mixing problems. First, the case in which a matrix contains no zero row and no zero column is considered.
\begin{definition} \label{definition:3.3} A matrix $A=[a_{i,j}]_{n\times n}$ is non-compressible if it contains no zero row and no zero column. For $n\geq 2$, $\mathbb{H}_{n}$ (or $\mathbb{V}_{n}$) is non-degenerated if $H_{n;\alpha}$ (or $V_{n;\alpha}$) is non-compressible for all $1\leq\alpha\leq p^{2}$. \end{definition}
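Both properties in Definition \ref{definition:3.3} are finitely checkable. The following sketch (in Python with NumPy; the function names and the block-partition convention are illustrative assumptions, not notation from this paper) tests a non-negative matrix for non-compressibility and a $p^{2}\times p^{2}$ transition matrix, partitioned into a $p\times p$ array of $p\times p$ blocks, for non-degeneracy.

```python
import numpy as np

def is_noncompressible(A):
    """A non-negative matrix is non-compressible if it has
    no zero row and no zero column (Definition 3.3)."""
    A = np.asarray(A)
    return bool(A.sum(axis=1).min() > 0 and A.sum(axis=0).min() > 0)

def is_nondegenerated(H, p):
    """H is a p^2 x p^2 transition matrix partitioned into a p x p
    array of p x p blocks; it is non-degenerated if every block
    is non-compressible."""
    H = np.asarray(H)
    m = H.shape[0] // p  # block size
    return all(is_noncompressible(H[i*m:(i+1)*m, j*m:(j+1)*m])
               for i in range(p) for j in range(p))
```

For instance, the $4\times 4$ matrix appearing in Remark \ref{remark:3.5} is non-degenerated with $p=2$.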
First, consider the case of $\mathcal{B}\subset \Sigma_{2\times 2}(p)$ in which $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated. Clearly, if both $A$ and $B$ are non-negative and non-compressible matrices, then $AB$ is non-compressible. In the following, the recursive formula relating the higher-order transition matrices $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ to $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ is used to prove that $\mathbb{H}_{n}$ (or $\mathbb{V}_{n}$) is non-degenerated for $n\geq 3$ whenever $\mathbb{H}_{2}$ (or $\mathbb{V}_{2}$) is.
\begin{theorem} \label{theorem:3.4-0} If $\mathbb{H}_{2}$ is non-degenerated, then $\mathbb{H}_{n}$ is non-degenerated for all $n\geq 3$. Moreover, $H_{m,n;\alpha}^{(k)}$ are also non-compressible for $m, n\geq 2$, $1\leq \alpha\leq p^{2}$ and $1\leq k\leq p^{m-1}$. \end{theorem}
\textit{Proof.} Since $\mathbb{H}_{2}$ is non-degenerated, from Definition \ref{definition:3.3}, $H_{2;\alpha}$ is non-compressible for $1\leq \alpha\leq p^{2}$. For $n\geq 3$ and $1\leq \alpha\leq p^{2}$, (\ref{eqn:2.10-1}) implies that $H_{n;\alpha}$ is non-compressible. Then, $\mathbb{H}_{n}$ is non-degenerated for all $n\geq 3$. From (\ref{eqn:2.20}), it follows that $H_{m,n;\alpha}^{(k)}$ is non-compressible. \hspace{0.5cm} $\square$ \medbreak
The following theorem provides sufficient conditions on $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ for rectangle-extendability.
\begin{theorem} \label{theorem:3.5} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, if $\mathbb{H}_{n}(\mathcal{B})$ and $\mathbb{V}_{n}(\mathcal{B})$ are non-compressible for all $n\geq 2$, then $\mathcal{B}$ is rectangle-extendable. Furthermore, if $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, then $\mathcal{B}$ is rectangle-extendable. In particular, $\Sigma(\mathcal{B})\neq \emptyset$. \end{theorem}
\textit{Proof.} Given $U\in \Sigma_{m\times n}(\mathcal{B})$, if $\mathbb{H}_{n}$ is non-compressible, then $U$ can be extended in both positive and negative horizontal directions to form an $(m+2)\times n$ $\mathcal{B}$-admissible pattern $U_{1}$. Similarly, the fact that $\mathbb{V}_{m+2}$ is non-compressible implies that $U_{1}$ can be extended to an $(m+2)\times (n+2)$ $\mathcal{B}$-admissible pattern $U_{2}$. Repeating this process extends $U$ to a global pattern in $\Sigma(\mathcal{B})$. Therefore, $\mathcal{B}$ is rectangle-extendable. Moreover, if $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, then the result follows from Theorem \ref{theorem:3.4-0}. \hspace{0.5cm} $\square$ \medbreak
The non-degeneracy of $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ implies three of the locally corner-extendable conditions, as follows.
\begin{theorem} \label{theorem:3.6} Given $\mathcal{B}\subset \Sigma_{2\times 2}(p)$, if $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, then $\mathcal{B}$ satisfies $C(1)$, $C(2)$ and $C(4)$. \end{theorem}
\textit{Proof.} Since $\mathbb{H}_{2}(\mathcal{B})$ is non-degenerated, from (\ref{eqn:2.2}), for any $u_{0,0},u_{1,0},u_{0,1},u_{1,1}\in\mathcal{S}_{p}$, there exist $a,b\in\mathcal{S}_{p}$ such that
\begin{equation*} \begin{array}{ccc} \psfrag{c}{$a$} \psfrag{b}{{\footnotesize $u_{0,1}$}} \psfrag{d}{{\footnotesize $u_{0,0}$}} \psfrag{e}{{\footnotesize $u_{1,0}$}} \includegraphics[scale=1.2]{z22_1.eps} &\hspace{0.5cm} \text{and} & \hspace{0.5cm} \psfrag{b}{$b$} \psfrag{c}{{\footnotesize $u_{1,1}$}} \psfrag{d}{{\footnotesize $u_{0,0}$}} \psfrag{e}{{\footnotesize $u_{1,0}$}} \includegraphics[scale=1.2]{z22_2.eps} \end{array} \end{equation*} \begin{equation*} \text{Figure 3.4.} \end{equation*} are in $\mathcal{B}$, which implies that conditions $C(1)$ and $C(2)$ are satisfied. Similarly, the non-degeneracy of $\mathbb{V}_{2}(\mathcal{B})$ implies that $\mathcal{B}$ satisfies conditions $C(1)$ and $C(4)$.
The proof is complete. \hspace{0.5cm} $\square$ \medbreak
Now, the fact that $\Sigma(\mathcal{B})$ is topologically mixing follows from the non-degeneracy of $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ and the primitivity of $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$, $n\geq 2$.
\begin{theorem} \label{theorem:3.7} Given $\mathcal{B}\subset \Sigma_{2\times 2}(p)$, if $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, then the following statements are equivalent.
\begin{enumerate}
\item[(i)] $\mathbb{H}_{n}(\mathcal{B})$ and $\mathbb{V}_{n}(\mathcal{B})$ are primitive for all $n\geq 2$.
\item[(ii)] $\Sigma(\mathcal{B})$ is topologically mixing.
\end{enumerate}
\end{theorem}
\textit{Proof.} (i)$\Rightarrow$(ii). Let $R_{1}$ and $R_{2}$ be finite sublattices of $\mathbb{Z}^{2}$. Then, there exist $N\geq 2$ and $({i_{1},j_{1}})$,$(i_{2},j_{2})\in\mathbb{Z}^{2}$ such that $R_{l}\subset\mathbb{Z}_{N\times N}((i_{l},j_{l}))$, $l=1,2$. From (i), there exists $K\geq1$ such that $\mathbb{H}_{N}^{K}(\mathcal{B})>0$ and $\mathbb{V}_{N}^{K}(\mathcal{B})>0$.
Then, consider $M=M(R_{1},R_{2})=\sqrt{2}(2N+K-2)$. Let $\mathbf{v}=(v_{1},v_{2})\in\mathbb{Z}^{2}$ with $d(R_{1},R_{2}+\mathbf{v})\geq M$, and let $U_{1}\in\Pi_{R_{1}}(\Sigma(\mathcal{B}))$ and $U_{2}\in\Pi_{R_{2}+\mathbf{v}}(\Sigma(\mathcal{B}))$ be any two allowable patterns. Clearly, $U_{1}$ and $U_{2}$ can be extended as $U_{1}'$ on $\mathbb{Z}_{N\times N}((i_{1},j_{1}))$ and $U_{2}'$ on $\mathbb{Z}_{N\times N}((i_{2}+v_{1},j_{2}+v_{2}))$ using the local patterns in $\mathcal{B}$, respectively.
It is not difficult to prove that $U_{1}'$ and $U_{2}'$ can be connected to form the L-shaped pattern $U_{L}$ using the local patterns in $\mathcal{B}$, as follows. \begin{equation*} \begin{array}{lcr} \psfrag{a}{{\footnotesize$N$}} \psfrag{b}{$U_{L}$} \includegraphics[scale=0.65]{L1.eps} & \hspace{0.5cm}\text{or} &\hspace{0.5cm} \psfrag{a}{{\footnotesize$N$}} \psfrag{b}{$U_{L}$} \includegraphics[scale=0.65]{L2.eps} \end{array} \end{equation*} \begin{equation*} \text{Figure 3.5.} \end{equation*} Notably, the L-shaped lattices may degenerate into rectangular lattices.
Since $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, by Theorem \ref{theorem:3.6}, $\mathcal{B}$ satisfies conditions $C(1)$ and $C(2)$. Then, $U_{L}$ can be extended as $U_{r}$ on the rectangular lattice by using the local patterns in $\mathcal{B}$, which is obtained by filling the corner of the L-shaped lattices.
From Theorem \ref{theorem:3.5}, $\mathcal{B}$ is rectangle-extendable. Then, $U_{r}$ can be extended as $W\in\Sigma(\mathcal{B})$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}+\mathbf{v}}(W)=U_{2}$. Therefore, $\Sigma(\mathcal{B})$ is topologically mixing.
(ii)$\Rightarrow$(i). From Theorem \ref{theorem:3.4-0}, $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ are non-compressible for all $n\geq 2$. Then, for $n\geq 2$, any pattern in $\Sigma_{1\times n}(p)$ or $\Sigma_{n\times 1}(p)$ can be extended to $\mathbb{Z}^{2}$ by using the local patterns in $\mathcal{B}$. It can be easily verified that (ii)$\Rightarrow$(i); the details are omitted. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
When $\mathbb{H}_{2}(\mathcal{B})$ or $\mathbb{V}_{2}(\mathcal{B})$ is degenerated, Theorems \ref{theorem:3.6} and \ref{theorem:3.7} can be generalized to the case when $\mathcal{B}$ satisfies locally corner-extendable conditions and local crisscross-extendability.
In the following, local crisscross-extendability is useful for extending a rectangular block to a global pattern.
When $\mathcal{B}\subset\Sigma_{2\times 2}(p)$ is not locally crisscross-extendable, $\mathcal{B}$ can be reduced to $\mathcal{B}_{c}\subseteq \mathcal{B}$ such that $\mathcal{B}_{c}$ is locally crisscross-extendable and $\Sigma(\mathcal{B}_{c})=\Sigma(\mathcal{B})$. The details are omitted here.
The following theorem shows that the locally corner-extendable conditions and local crisscross-extendability imply rectangle-extendability.
\begin{theorem} \label{theorem:3.13} If $\mathcal{B}$ satisfies either $C(1)$ and $C(3)$ or $C(2)$ and $C(4)$, then the following statements are equivalent.
\begin{enumerate}
\item[(i)] $\mathcal{B}$ is rectangle-extendable.
\item[(ii)] $\mathcal{B}$ is locally crisscross-extendable.
\end{enumerate}
\end{theorem}
\textit{Proof.} Clearly, (i) implies (ii).
(ii)$\Rightarrow$(i). Assume that $\mathcal{B}$ satisfies $C(1)$ and $C(3)$. The case in which it satisfies $C(2)$ and $C(4)$ is similar. Let $U_{m\times n}\in\Sigma_{m \times n}(\mathcal{B})$, $m,n\geq 2$. Since $\mathcal{B}$ satisfies $C(1)$ and $C(3)$, from (ii), $U_{m\times n}$ can be extended in both positive and negative vertical directions by using the local patterns in $\mathcal{B}$, as follows.
\begin{equation*} \psfrag{a}{{\footnotesize$n$}} \psfrag{b}{{\footnotesize$m$}} \psfrag{c}{{\footnotesize$n+2$}} \psfrag{e}{.} \includegraphics[scale=0.65]{extend.eps} \end{equation*} \begin{equation*} \text{Figure 3.6.} \end{equation*} Similarly, the above pattern can be extended in both positive horizontal and negative horizontal directions using the local patterns in $\mathcal{B}$. Therefore, by the above method, $U_{m\times n}$ can be extended to $\mathbb{Z}^{2}$ using the local patterns in $\mathcal{B}$. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
Theorem \ref{theorem:3.7} can now be generalized, with a slight modification of its proof. \begin{theorem} \label{theorem:3.14} If \begin{enumerate} \item[(i)] $\mathcal{B}\subset \Sigma_{2\times 2}(p)$ is locally crisscross-extendable, and
\item[(ii)] $\mathcal{B}$ satisfies three of the locally corner-extendable conditions $C(i)$, $1\leq i\leq 4$, \end{enumerate} then $\mathbb{H}_{n}(\mathcal{B})$ and $\mathbb{V}_{n}(\mathcal{B})$ are weakly primitive for all $n\geq 2$ if and only if $\Sigma(\mathcal{B})$ is topologically mixing. \end{theorem}
\textit{Proof.} $(\Rightarrow)$. From (ii), without loss of generality, assume that $\mathcal{B}$ satisfies conditions $C(1)$, $C(2)$ and $C(3)$.
Let $R_{1}$ and $R_{2}$ be finite sublattices of $\mathbb{Z}^{2}$. Since $\mathcal{B}$ satisfies $C(1)$ and $C(2)$, as in the proof of Theorem \ref{theorem:3.7}, there exists $M(R_{1},R_{2})\geq 1$ such that for all $\mathbf{v}=(v_{1},v_{2})\in\mathbb{Z}^{2}$ with $d(R_{1},R_{2}+\mathbf{v})\geq M$ and any two allowable patterns $U_{1}\in\Pi_{R_{1}}(\Sigma(\mathcal{B}))$ and $U_{2}\in\Pi_{R_{2}+\mathbf{v}}(\Sigma(\mathcal{B}))$, $U_{1}$ and $U_{2}$ can be extended as $U_{r}$ on the rectangular lattice using the local patterns in $\mathcal{B}$.
Since $\mathcal{B}$ is locally crisscross-extendable and satisfies conditions $C(1)$ and $C(3)$, by Theorem \ref{theorem:3.13}, $\mathcal{B}$ is rectangle-extendable. Then, $U_{r}$ can be extended as $W\in\Sigma(\mathcal{B})$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}+\mathbf{v}}(W)=U_{2}$. Therefore, $\Sigma(\mathcal{B})$ is topologically mixing.
$(\Leftarrow)$. From (i) and (ii), by Theorem \ref{theorem:3.13}, $\mathcal{B}$ is rectangle-extendable. Then, for $n\geq 2$, any pattern in $\Sigma_{2\times n}(\mathcal{B})$ can be extended to $\mathbb{Z}^{2}$ using the local patterns in $\mathcal{B}$. Therefore, the fact that $\Sigma(\mathcal{B})$ is topologically mixing implies that $\mathbb{H}_{n}(\mathcal{B})$ is weakly primitive for all $n\geq 2$. Similarly, $\mathbb{V}_{n}(\mathcal{B})$ is weakly primitive for all $n\geq 2$.
The proof is complete. \hspace{0.5cm} $\square$ \medbreak
The following example demonstrates that the locally corner-extendable conditions in Theorem \ref{theorem:3.14} are crucial: if locally corner-extendable conditions are not satisfied, then local crisscross-extendability (or rectangle-extendability) and primitivity may not imply topological mixing.
\begin{example} \label{example:3.15}
Let
\begin{equation*} \mathcal{B}_{\pi/4}=\left\{ \begin{array}{c} \psfrag{a}{$u_{2}$} \psfrag{b}{$u_{3}$} \psfrag{c}{$u_{1}$} \psfrag{e}{$u_{4}$} \psfrag{d}{$: u_{4}\geq u_{1}\text{ and } u_{1},u_{2},u_{3},u_{4}\in\{0,1\}$ } \includegraphics[scale=0.8]{example_1.eps} \end{array} \hspace{6.0cm}\right\}, \end{equation*} which requires that diagonal lines with slope 1 are non-decreasing from left to right.
Clearly,
\begin{equation*} \mathbb{H}_{2}(\mathcal{B}_{\pi/4})=\mathbb{V}_{2}(\mathcal{B}_{\pi/4})= \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 0& 1 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{array} \right]. \end{equation*} From (\ref{eqn:2.9}) and (\ref{eqn:2.10}), $\mathbb{H}_{n}$ is non-compressible for all $n\geq 2$. Since $\mathbb{V}_{2}=\mathbb{H}_{2}$, $\mathbb{V}_{n}$ is also non-compressible for all $n\geq 2$. By Theorem \ref{theorem:3.5}, $\mathcal{B}_{\pi/4}$ is rectangle-extendable. In particular, $\mathcal{B}_{\pi/4}$ is locally crisscross-extendable.
From the rule of $\mathcal{B}_{\pi/4}$, it can be easily proven that $\mathbb{H}_{n}^{n}=\mathbb{V}_{n}^{n}>0$ for all $n\geq 2$. However, let $R_{1}=R_{2}=\mathbb{Z}_{2\times 2}$. Consider $U_{0}=\{0\}^{\mathbb{Z}^{2}}$ and $U_{1}=\{1\}^{\mathbb{Z}^{2}}$. Clearly, $U_{0},U_{1}\in\Sigma(\mathcal{B}_{\pi/4})$, but $\Pi_{R_{1}}(U_{1})$ cannot connect with $\Pi_{R_{2}+(i,i)}(U_{0})=\Pi_{\mathbb{Z}_{2\times 2}((i,i))}(U_{0})$ using the local patterns in $\mathcal{B}_{\pi/4}$ for all $i\geq 2 $. Then, $\Sigma(\mathcal{B}_{\pi/4})$ is not topologically mixing. Therefore, local crisscross-extendability and primitivity do not imply topological mixing. This claim does not contradict Theorem \ref{theorem:3.14} since $\mathcal{B}_{\pi/4}$ does not satisfy conditions $C(2)$ and $C(4)$: neither \begin{equation*} \begin{array}{lcr}
\psfrag{a}{{\small$0$}}
\psfrag{b}{{\small$1$}}
\psfrag{c}{$\in\Sigma_{\mathbb{L}_{2}}(\mathcal{B}_{\pi/4})$} \includegraphics[scale=0.7]{L2_1.eps} & \hspace{2.0cm} \text{nor} & \hspace{1.0cm}
\psfrag{a}{{\small$0$}}
\psfrag{b}{{\small$1$}}
\psfrag{c}{$\in\Sigma_{\mathbb{L}_{4}}(\mathcal{B}_{\pi/4})$} \includegraphics[scale=0.7]{L4_1.eps} \end{array} \end{equation*} can be extended to $\mathbb{Z}_{3\times 3}$ using the local patterns in $\mathcal{B}_{\pi/4}$.
\end{example}
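The matrix claims in Example \ref{example:3.15} can be checked numerically. The following sketch (an illustration only, not part of the formal development) verifies that $\mathbb{H}_{2}(\mathcal{B}_{\pi/4})$ has no zero row or column and that $\mathbb{H}_{2}^{2}>0$, the case $n=2$ of $\mathbb{H}_{n}^{n}>0$.

```python
import numpy as np

# H_2(B_{pi/4}) from Example 3.15.
H2 = np.array([[1, 1, 1, 1],
               [1, 1, 1, 1],
               [0, 1, 0, 1],
               [0, 1, 0, 1]])

# Non-compressible: every row sum and column sum is positive.
assert H2.sum(axis=1).min() > 0 and H2.sum(axis=0).min() > 0

# H_2^2 > 0, so H_2 is primitive (the case n = 2 of H_n^n > 0).
assert (np.linalg.matrix_power(H2, 2) > 0).all()
```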
\begin{remark} \label{remark:3.5} The non-degeneracy of $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ and the locally corner-extendable conditions are used to extend a single local pattern to a global pattern, which is intrinsically different from mixing properties. Mixing is associated with two given local patterns that are parts of two global patterns. In fact, these extendability conditions alone cannot imply any mixing property. For example,
\begin{equation*} \mathbb{H}_{2}(\mathcal{B})=\mathbb{V}_{2}(\mathcal{B})=\left[\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0& 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array}\right] \end{equation*} are non-degenerated, and $\Sigma(\mathcal{B})$ is not topologically mixing. Actually, $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are not primitive. \end{remark}
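Both claims in this remark are finite computations. The sketch below (using Wielandt's classical bound $(n-1)^{2}+1$ on the exponent of a primitive $n\times n$ matrix, an external standard fact not proved in this paper) confirms that every $2\times 2$ block is non-compressible while no relevant power of the matrix is strictly positive.

```python
import numpy as np

A = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])

# Every 2x2 block H_{2;alpha} is non-compressible, so A is non-degenerated.
for i in range(2):
    for j in range(2):
        B = A[2*i:2*i+2, 2*j:2*j+2]
        assert B.sum(axis=1).min() > 0 and B.sum(axis=0).min() > 0

# Yet A is not primitive: a primitive 4x4 matrix would satisfy
# A^k > 0 for k = (4-1)^2 + 1 = 10 (Wielandt's bound), which fails here.
assert (np.linalg.matrix_power(A, 10) == 0).any()
```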
\numberwithin{equation}{section}
\section{Invariant diagonal cycles and primitive commutative cycles} \label{sec:4} \hspace{0.5cm} According to Propositions \ref{proposition:2.2} and \ref{proposition:2.3}, the recursive formula that relates a higher-order transition matrix to a lower-order one through the connecting operator can be used to introduce invariant diagonal cycles and primitive commutative cycles, which provide finitely checkable sufficient conditions for the primitivity of $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$ for $n\geq 2$. For brevity, only $\mathbb{H}_{n}$ is considered here. The discussion for $\mathbb{V}_{n}$ is similar to that for $\mathbb{H}_{n}$.
\subsection{Invariant diagonal cycles} \label{sec:4-1} This subsection introduces invariant diagonal cycles of the connecting operator to provide finitely checkable sufficient conditions for the primitivity of $\mathbb{H}_{n}$ or $\mathbb{V}_{n}$.
First, the diagonal index set is defined by \begin{equation*}
\mathcal{D}_{p}=\left\{1+j(p+1)| j\in\mathcal{S}_{p}\right\}. \end{equation*} Clearly, if $\beta_{1},\beta_{2},\cdots ,\beta_{q+1}\in\mathcal{D}_{p}$, then $H_{m,n+q;\beta_{1};\beta_{2};\cdots ;\beta_{q+1}}$ lies on the diagonal of $\mathbb{H}_{n+q}^{m}$ in (\ref{eqn:2.34}).
\begin{definition} \label{definition:4.9}
\begin{enumerate}
\item[(i)] For $q\geq 1$, a finite sequence $\overline{\beta}_{q}=\beta_{1}\beta_{2}\cdots\beta_{q}\beta_{1}$ is called a diagonal cycle with length $q$ if $\beta_{j}\in \mathcal{D}_{p}$ for $1\leq j\leq q$.
\item[(ii)] A diagonal cycle $\overline{\beta}_{q}=\beta_{1}\beta_{2}\cdots\beta_{q}\beta_{1}$ is called an $S$-invariant diagonal cycle of order $(m,q)$ if there exist $m\geq 2$ and an invariant index set $\mathcal{K}\subseteq \left\{1,2,\cdots,p^{m-1}\right\}$ such that
\begin{equation}\label{eqn:4.9} \underset{k\in\mathcal{K}}{\sum}\left(S_{m;\beta_{1},\beta_{2}}S_{m;\beta_{2},\beta_{3}}\cdots S_{m;\beta_{q},\beta_{1}}\right)_{k,l}\geq 1 \end{equation} for all $l\in\mathcal{K}$.
\item[(iii)] A $W$-invariant diagonal cycle can be defined analogously.
\end{enumerate}
\end{definition}
Notably, it can be easily shown that for any $n\geq 1$,
\begin{equation}\label{eqn:4.10} \underset{k\in\mathcal{K}}{\sum}\left((S_{m;\beta_{1},\beta_{2}}S_{m;\beta_{2},\beta_{3}}\cdots S_{m;\beta_{q},\beta_{1}})^{n}\right)_{k,l}\geq 1 \end{equation} for all $l\in\mathcal{K}$ if (\ref{eqn:4.9}) holds. The case for $W$-invariant diagonal cycles is similar.
The following notation is used in proving the theorem for the primitivity of $\mathbb{H}_{n}$.
\begin{definition} \label{definition:4.8} Let $\mathbb{M}=\left[M_{i,j}\right]_{N\times N}$, where $M_{i,j}$ is an $M\times M$ non-negative matrix for $1\leq i,j\leq N$. The indicator matrix $\Lambda(\mathbb{M})=[m_{i,j}]_{N\times N}$ of $\mathbb{M}$ is defined by
\begin{equation*} \left\{ \begin{array}{ll}
m_{i,j}=1 & \text{if } |M_{i,j}|>0 ,\\ m_{i,j}=0 & \text{otherwise, } \end{array} \right. \end{equation*}
where $|M_{i,j}|$ is the sum of all entries in $M_{i,j}$. \end{definition}
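A small sketch of Definition \ref{definition:4.8} (the function name and the nested-list representation of the block matrix are illustrative assumptions):

```python
import numpy as np

def indicator(blocks):
    """Indicator matrix of M = [M_{i,j}]: m_{i,j} = 1 iff the entry
    sum |M_{i,j}| of the non-negative block M_{i,j} is positive."""
    N = len(blocks)
    return np.array([[int(np.asarray(blocks[i][j]).sum() > 0)
                      for j in range(N)] for i in range(N)])
```

For example, with $Z$ a zero block and $E$ a positive block, `indicator([[Z, E], [E, Z]])` returns the $2\times 2$ permutation pattern.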
The following lemma is essential for establishing the primitivity of $\mathbb{H}_{n}$ using the invariant diagonal cycle.
\begin{lemma} \label{lemma:4.9} Suppose $\mathbb{M}=\left[M_{i,j}\right]_{N\times N}$, where $M_{i,j}$ is an $M\times M$ non-negative matrix for $1\leq i,j\leq N$. Let $\Lambda(\mathbb{M})=[m_{i,j}]_{N\times N}$ be the indicator matrix of $\mathbb{M}$. Let $\mathbb{M}^{n}=\left[M_{n;i,j}\right]_{N\times N}$ for $n\geq 1$. If \begin{enumerate} \item[(i)] $\Lambda(\mathbb{M})$ is primitive, \item[(ii)] $M_{i,j}$ is either non-compressible or zero, $1\leq i,j\leq N$, and \item[(iii)] there exist $n\geq 1$ and $1\leq k \leq N$ such that $M_{n;k,k}$ is primitive, \end{enumerate} then $\mathbb{M}$ is primitive. \end{lemma}
\textit{Proof.} Since $\Lambda(\mathbb{M})$ and $M_{n;k,k}$ are primitive, there exists $N_{1}\geq 1$ such that $\Lambda(\mathbb{M})^{N_{1}}>0$ and $M_{n;k,k}^{N_{1}}>0$. By (ii), for any $l\geq 1$ and $1\leq i,j\leq N$, if $\left(\Lambda(\mathbb{M})^{l}\right)_{i,j}>0$, then $M_{l;i,j}$ is non-compressible.
Take $N_{2}=(2+n) N_{1}$. The fact that for any $1\leq i,j\leq N$,
\begin{equation*} M_{N_{2};i,j}\geq M_{N_{1};i,k}M_{n;k,k}^{N_{1}}M_{N_{1};k,j}>0, \end{equation*} can be easily seen. Therefore, $\mathbb{M}$ is primitive. \hspace{0.5cm} $\square$ \medbreak
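Lemma \ref{lemma:4.9} and the examples below repeatedly reduce matters to checking the primitivity of explicit small matrices, which is a finite computation. A minimal sketch, relying on Wielandt's classical bound $(n-1)^{2}+1$ for the exponent of a primitive $n\times n$ matrix (a standard external fact, not proved in this paper):

```python
import numpy as np

def is_primitive(A):
    """A non-negative square matrix A is primitive iff A^k > 0 for some
    k >= 1; by Wielandt's bound, k <= (n-1)^2 + 1 suffices to decide."""
    A = (np.asarray(A) > 0).astype(int)  # only the zero pattern matters
    n = A.shape[0]
    P = A.copy()
    for _ in range((n - 1) ** 2 + 1):
        if P.min() > 0:
            return True
        P = ((P @ A) > 0).astype(int)
    return False
```

Working with the 0-1 pattern rather than the actual powers avoids integer overflow for larger exponents.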
When an invariant diagonal cycle of order $(m,q)$ exists, the following theorem shows that the primitivity of $\underset{l\in\mathcal{K}}{\sum}H_{m,n;\beta_{1}}^{(l)}$ up to finite order $q+1$ implies the primitivity of $\mathbb{H}_{n}$ for all $n\geq 2$.
\begin{theorem} \label{theorem:4.11-1} Given $\mathcal{B}\subset\Sigma_{2 \times 2}(p)$, if \begin{enumerate} \item[(i)] $\mathbb{H}_{2}(\mathcal{B})$ is non-degenerated,
\item[(ii)] there exists an $S$-invariant diagonal cycle $\overline{\beta}_{q}=\beta_{1}\beta_{2}\cdots\beta_{q}\beta_{1}$ of order $(m,q)$ with its invariant index set $\mathcal{K}$, and
\item[(iii)] $\underset{l\in\mathcal{K}}{\sum}H_{m,n;\beta_{1}}^{(l)}$ is primitive for $2\leq n\leq q+1$, \end{enumerate} then $\mathbb{H}_{n}$ is primitive for all $n\geq 2$. \end{theorem}
\textit{Proof.} The result that $\mathbb{H}_{n}$ is primitive for $n\geq 2 $ is proven by induction, as follows. For $s\geq 0$, the statement $P(s)$ means that $\mathbb{H}_{n}$ is primitive for $sq+2 \leq n\leq (s+1)q+1$.
For $2\leq n\leq q+1$, let $\mathbb{H}_{n}=[H_{n;\alpha}]_{p\times p}$. By Theorem \ref{theorem:3.4-0}, $H_{n;\alpha}$ is non-compressible for $1\leq \alpha\leq p^{2}$. Clearly, the indicator matrix of $\mathbb{H}_{n}=[H_{n;\alpha}]_{p\times p}$ is the $p\times p$ full matrix. From (iii),
\begin{equation*} H_{m,n;\beta_{1}}= \underset{l=1}{\overset{p^{m-1}}{\sum}}H_{m,n;\beta_{1}}^{(l)}\geq \underset{l\in\mathcal{K}}{\sum}H_{m,n;\beta_{1}}^{(l)}
\end{equation*} is primitive and is on the diagonal of $\mathbb{H}_{n}^{m}=[H_{m,n;\alpha}]_{p\times p}$. Hence, by Lemma \ref{lemma:4.9}, $\mathbb{H}_{n}$ is primitive for $2\leq n\leq q+1$ and $P(0)$ is true.
Assume that $P(t)$ holds for some $t\geq 0$; that is, $\mathbb{H}_{n}$ is primitive for $tq+2 \leq n\leq (t+1)q+1$.
For $(t+1)q+2 \leq n\leq (t+2)q+1$, let $n=(t+1)q+r$, where $2\leq r\leq q+1$. Let $N=(t+1)q+1$ and define $\overline{\beta}_{N-1}=\left(\beta_{1}\beta_{2}\cdots\beta_{q}\right)^{t+1}\beta_{1}$. From (\ref{eqn:4.10}), $\overline{\beta}_{N-1}$ is an $S$-invariant diagonal cycle of order $(m,N-1)$ with invariant index set $\mathcal{K}$.
From (\ref{eqn:2.34}), let $\mathbb{H}_{n}=[H_{n;i,j}]_{p^{N}\times p^{N}}$. Then, from (\ref{eqn:2.19}),
\begin{equation*} \mathbb{H}_{n}=[H_{n;i,j}]_{p^{N}\times p^{N}}=\left( \mathbb{H}_{N} \right)_{p^{N}\times p^{N}}\circ\left[ E_{p^{N-1}\times p^{N-1}}\otimes \left[H_{r;i,j}\right]_{p\times p} \right]. \end{equation*} By Theorem \ref{theorem:3.4-0}, $H_{r;i,j}$ is non-compressible, $1\leq i,j\leq p$. Then, $\mathbb{H}_{N}$ is the indicator matrix of $\mathbb{H}_{n}=[H_{n;i,j}]_{p^{N}\times p^{N}}$ and $H_{n;i,j}$ is either non-compressible or zero for $1\leq i,j\leq p^{N}$. By the induction hypothesis $P(t)$, $\mathbb{H}_{N}$ is primitive.
Let $\mathbb{H}^{m}_{n}=[H_{m,n;i,j}]_{p^{N}\times p^{N}}=[H_{m,n;\alpha_{1};\alpha_{2};\cdots;\alpha_{N}}]_{p^{N}\times p^{N}}$. From (\ref{eqn:2.36}),
\begin{equation*} \begin{array}{rl} H_{m,n;\overline{\beta}_{N-1}} \equiv & H_{m,n;\beta_{1};\beta_{2};\cdots ;\beta_{q};\cdots;\beta_{1};\beta_{2};\cdots ;\beta_{q};\beta_{1}} \\
& \hspace{1.0cm}\underset{(t+1)\text{ times}}{ \underbrace{ \hspace{3.2cm} } }\\
& \\ = & \underset{k,l=1}{\overset{p^{m-1}}{\sum}} ((S_{m;\beta_{1},\beta_{2}}S_{m;\beta_{2},\beta_{3}}\cdots S_{m;\beta_{q},\beta_{1}})^{t+1})_{k,l} H_{m,r;\beta_{1}}^{(l)}\\ \geq & \underset{l\in\mathcal{K}}{{\sum}}H_{m,r;\beta_{1}}^{(l)}. \end{array} \end{equation*} $H_{m,n;\overline{\beta}_{N-1}}$ is on the diagonal of $\mathbb{H}_{n}^{m}$. By Lemma \ref{lemma:4.9}, $\mathbb{H}_{n}$ is primitive for $(t+1)q+2 \leq n\leq (t+2)q+1$, so $P(t+1)$ holds.
Therefore, by induction, $P(s)$ is true for all $s\geq 0$, implying that $\mathbb{H}_{n}$ is primitive for all $n\geq 2$. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
The following example illustrates the application of Theorem \ref{theorem:4.11-1}.
\begin{example} \label{example:4.12} Consider \begin{equation*} \mathbb{H}_{2}(\mathcal{B})= \left[ \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 \\ 1& 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{array} \right]. \end{equation*} Clearly, $\mathbb{H}_{2}$ is non-degenerated. From (\ref{eqn:2.27}), \begin{equation*} S_{3;1,1}=C_{3;1,1}= \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right] . \end{equation*} Let $\overline{\beta}_{1}=11$ and $\mathcal{K}=\{3,4\}$. Since \begin{equation*} \underset{k\in\mathcal{K}}{\sum}\left(S_{3;1,1}\right)_{k,l}\geq 1 \end{equation*} for $l\in\mathcal{K}$, $\overline{\beta}_{1}$ is an S-invariant diagonal cycle of order $(3,1)$ with index set $\mathcal{K}$. Clearly,
\begin{equation*} \underset{l\in\mathcal{K}}{\sum}H_{3,2;1}^{(l)} =H_{2;1,2}H_{2;2,1}H_{2;1,1}+H_{2;1,2}H_{2;2,2}H_{2;2,1}
= \left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right] \end{equation*} is primitive. From Theorem \ref{theorem:4.11-1}, $\mathbb{H}_{n}$ is primitive for all $n\geq 2$.
The fact that $\mathbb{V}_{2}(\mathcal{B})$ does not have an invariant diagonal cycle up to $m=7$ can be verified, but whether $\mathbb{V}_{2}(\mathcal{B})$ has an invariant diagonal cycle for larger $m$ is unclear. To deal with this difficulty, the following subsection introduces another criterion for establishing primitivity using primitive commutative cycles. The topological mixing of $\Sigma(\mathcal{B})$ is proven in Example \ref{example:5.8}.
\end{example}
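The matrix arithmetic in Example \ref{example:4.12} can be reproduced numerically. In the sketch below (0-based indices, an illustration only), the invariance condition (\ref{eqn:4.9}) for $\mathcal{K}=\{3,4\}$ and the primitivity of $\sum_{l\in\mathcal{K}}H_{3,2;1}^{(l)}$ are both checked.

```python
import numpy as np

H2 = np.array([[1, 0, 0, 1],
               [1, 1, 1, 0],
               [1, 0, 0, 1],
               [0, 1, 1, 0]])
# 2x2 blocks H_{2;i,j} of H_2 (0-based block indices).
H = {(i, j): H2[2*i:2*i+2, 2*j:2*j+2] for i in range(2) for j in range(2)}

S311 = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
K = [2, 3]  # the invariant index set {3, 4} in 0-based indexing

# Invariance condition (4.9): sum_{k in K} (S_{3;1,1})_{k,l} >= 1 for l in K.
assert all(S311[K, l].sum() >= 1 for l in K)

# sum_{l in K} H_{3,2;1}^{(l)}
#   = H_{2;1,2}H_{2;2,1}H_{2;1,1} + H_{2;1,2}H_{2;2,2}H_{2;2,1}
T = H[(0, 1)] @ H[(1, 0)] @ H[(0, 0)] + H[(0, 1)] @ H[(1, 1)] @ H[(1, 0)]
assert (T == np.array([[2, 1], [1, 1]])).all()
assert (T > 0).all()  # T is positive, hence primitive
```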
\subsection{Primitive commutative cycles} \label{sec:4-2} \setcounter{equation}{10} This subsection introduces primitive commutative cycles to obtain another finitely checkable sufficient condition for the primitivity of $\mathbb{H}_{n}$ or $\mathbb{V}_{n}$ when invariant diagonal cycles are not available.
For $q,q'\geq 1$, let $I_{q}=i_{1}i_{2}\cdots i_{q}i_{1}$ and $J_{q'}=j_{1}j_{2}\cdots j_{q'}j_{1}$ be two cycles, where $i_{k},j_{l}\in\{1,2,\cdots,p\}$ for $1\leq k\leq q$ and $1\leq l\leq q'$.
\begin{definition} \label{definition:5.1} If $j_{1}=i_{1}$, let $(I_{q}J_{q'})=i_{1}i_{2}\cdots i_{q}i_{1}j_{2}\cdots j_{q'}i_{1}$ and $(J_{q'}I_{q})=i_{1}j_{2}\cdots j_{q'}i_{1}i_{2}\cdots i_{q}i_{1}$. The pair $(I_{q}J_{q'})$ and $(J_{q'}I_{q})$ is called a commutative cycle pair. \end{definition}
Given a commutative cycle pair $(I_{q}J_{q'})$ and $(J_{q'}I_{q})$, denote the index of $(I_{q}J_{q'})$ and $(J_{q'}I_{q})$ by $\langle m,\bar{\alpha};K,L\rangle$, where
\begin{equation}\label{eqn:5.1} \left\{ \begin{array}{l} m=q+q' \\ \bar{\alpha}=\psi(i_{1}-1,i_{1}-1)\\ K=\psi(i_{2}-1,\cdots ,i_{q}-1,i_{1}-1,j_{2}-1,\cdots ,j_{q'}-1)\\ L=\psi(j_{2}-1,\cdots ,j_{q'}-1,i_{1}-1,i_{2}-1,\cdots ,i_{q}-1). \end{array} \right. \end{equation}
From (\ref{eqn:2.20}), it is easy to check that
\begin{equation}\label{eqn:5.2} \left\{ \begin{array}{l} H_{n;i_{1},i_{2}}H_{n;i_{2},i_{3}}\cdots H_{n;i_{q},i_{1}}H_{n;i_{1},j_{2}}H_{n;j_{2},j_{3}}\cdots H_{n;j_{q'},i_{1}}=H_{m,n;\bar{\alpha}}^{(K)} \\ \\ H_{n;i_{1},j_{2}}H_{n;j_{2},j_{3}}\cdots H_{n;j_{q'},i_{1}}H_{n;i_{1},i_{2}}H_{n;i_{2},i_{3}}\cdots H_{n;i_{q},i_{1}}=H_{m,n;\bar{\alpha}}^{(L)}. \end{array} \right. \end{equation} The number $\bar{\alpha}$ is a member of the diagonal index set $\mathcal{D}_{p}$, so $H_{m,n;\bar{\alpha}}$ lies on the diagonal of $\mathbb{H}_{n}^{m}$.
\begin{definition} \label{definition:5.2}
A commutative cycle pair $(I_{q}J_{q'})$ and $(J_{q'}I_{q})$ with index $\langle m,\bar{\alpha};K,L\rangle$ is called an $H$-primitive commutative cycle pair if $H_{m,2;\bar{\alpha}}^{(K)}$ and $H_{m,2;\bar{\alpha}}^{(L)}$ are primitive.
\end{definition} A $V$-primitive commutative cycle pair is defined similarly; the details are omitted. The two cycles in a commutative cycle pair compensate for each other, and together they can establish the primitivity of $\mathbb{H}_{n}$ for all $n\geq 2$. Indeed, the following theorem provides a sufficient condition for the primitivity of $\mathbb{H}_{n}$ when $\mathbb{H}_{2}$ is non-degenerated. Similar results hold for $\mathbb{V}_{n}$.
\begin{theorem} \label{theorem:5.3} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, if \begin{enumerate} \item[(i)] $\mathbb{H}_{2}$ is non-degenerated, and
\item[(ii)] there exists an $H$-primitive commutative cycle pair $(I_{q}J_{q'})$ and $(J_{q'}I_{q})$ with index $\langle m,\bar{\alpha};K,L\rangle$ such that $(S_{m;\bar{\alpha},\bar{\alpha}})_{K,L}=1$ or $(S_{m;\bar{\alpha},\bar{\alpha}})_{L,K}=1$, \end{enumerate} then $\mathbb{H}_{n}$ is primitive for all $n\geq 2$. \end{theorem}
\textit{Proof.} Suppose $(S_{m;\bar{\alpha},\bar{\alpha}})_{K,L}=1$. The case for $(S_{m;\bar{\alpha},\bar{\alpha}})_{L,K}=1$ is similar.
First, $H_{m,n;\bar{\alpha}}^{(K)}$ and $H_{m,n;\bar{\alpha}}^{(L)}$ will be shown by induction to be primitive for $n\geq 2$. From (ii), the primitivity of $H_{m,2;\bar{\alpha}}^{(K)}$ and $H_{m,2;\bar{\alpha}}^{(L)}$ holds for $n=2$. Assume that this result holds for $n=t$, $t\geq 2$.
Let $H_{m,2;\bar{\alpha}}^{(K)}=\left[H_{m,2;\bar{\alpha};\alpha}^{(K)}\right]_{p\times p}$. From (\ref{eqn:2.33}),
\begin{equation}\label{eqn:4.100} H_{m,2;\bar{\alpha};\alpha}^{(K)}=\underset{l=1}{\overset{p^{m-1}}{\sum}}(S_{m;\bar{\alpha},\alpha})_{K,l} \end{equation} for all $1\leq \alpha\leq p^{2}$. Let $\Lambda=\Lambda\left(H_{m,2;\bar{\alpha}}^{(K)}\right)$ be the indicator matrix of $H_{m,2;\bar{\alpha}}^{(K)}=\left[H_{m,2;\bar{\alpha};\alpha}^{(K)}\right]_{p\times p}$. The primitivity of $H_{m,2;\bar{\alpha}}^{(K)}$ implies that $\Lambda$ is primitive.
Consider the case for $n=t+1$. Let $H_{m,t+1;\bar{\alpha}}^{(K)}=\left[H_{m,t+1;\bar{\alpha};\alpha}^{(K)}\right]_{p\times p}$. Then, by Proposition \ref{proposition:2.2},
\begin{equation}\label{eqn:4.101} H_{m,t+1;\bar{\alpha};\alpha}^{(K)}=\underset{l=1}{\overset{p^{m-1}}{\sum}}(S_{m;\bar{\alpha},\alpha})_{K,l} H_{m,t;\alpha}^{(l)} \end{equation} for all $1\leq \alpha\leq p^{2}$. By Theorem \ref{theorem:3.6}, every pattern $U_{m\times 2}\in\Sigma_{m\times 2}(\mathcal{B})$ can be extended to $\mathbb{Z}_{m\times 3}$ by using the local patterns in $\mathcal{B}$. Thus, if $(S_{m;\bar{\alpha},\alpha})_{K,l}=1$, then $H_{m,t;\alpha}^{(l)}$ is not a zero matrix for $1\leq \alpha\leq p^{2}$ and $1\leq l\leq p^{m-1}$. Hence, $\Lambda\left(H_{m,2;\bar{\alpha}}^{(K)}\right)$ is also the indicator matrix of $H_{m,t+1;\bar{\alpha}}^{(K)}=\left[H_{m,t+1;\bar{\alpha};\alpha}^{(K)}\right]_{p\times p}$. Moreover, from (\ref{eqn:4.101}) and Theorem \ref{theorem:3.4-0}, $H^{(K)}_{m,t+1;\bar{\alpha};\alpha}$ is either non-compressible or zero, $1\leq \alpha\leq p^{2}$. From (\ref{eqn:4.101}), $H^{(K)}_{m,t+1;\bar{\alpha};\bar{\alpha}}$ is primitive and on the diagonal of $H_{m,t+1;\bar{\alpha}}^{(K)}$. From Lemma \ref{lemma:4.9}, $H_{m,t+1;\bar{\alpha}}^{(K)}$ is primitive.
Let $A$ and $B$ be non-negative and non-compressible matrices. If $AB$ is primitive, then $BA$ is easily verified to be primitive as well. By (\ref{eqn:5.2}), $H_{m,t+1;\bar{\alpha}}^{(L)}$ is primitive. Hence, the case for $n=t+1$ holds. Therefore, $H_{m,n;\bar{\alpha}}^{(K)}$ and $H_{m,n;\bar{\alpha}}^{(L)}$ are primitive for all $n\geq 2$.
Now, $\mathbb{H}_{n}$ will be shown to be primitive for all $n\geq 2$. In the case $n=2$, let $\mathbb{H}_{2}=[H_{2;\alpha}]_{p\times p}$. By Theorem \ref{theorem:3.4-0}, $H_{2;\alpha}$ is non-compressible for $1\leq \alpha\leq p^{2}$. The indicator matrix of $\mathbb{H}_{2}=[H_{2;\alpha}]_{p\times p}$ is a $p\times p$ full matrix. From (ii),
\begin{equation*} H_{m,2;\bar{\alpha}}= \underset{l=1}{\overset{p^{m-1}}{\sum}}H_{m,2;\bar{\alpha}}^{(l)}\geq H_{m,2;\bar{\alpha}}^{(K)}
\end{equation*} is primitive and is on the diagonal of $\mathbb{H}_{2}^{m}=[H_{m,2;\alpha}]_{p\times p}$. Then, by Lemma \ref{lemma:4.9}, $\mathbb{H}_{2}$ is primitive.
For $n\geq 3$, from (\ref{eqn:2.19}),
\begin{equation*} \mathbb{H}_{n}=\left[H_{n;i,j}\right]_{p^{2}\times p^{2} }=\left( \mathbb{H}_{2} \right)_{p^{2}\times p^{2}}\circ\left[ E_{p\times p}\otimes \left[H_{n-1;\alpha}\right]_{p\times p} \right]. \end{equation*}
From (ii), by Theorem \ref{theorem:3.4-0}, if $\left( \mathbb{H}_{2} \right)_{i,j}=1$, then $H_{n;i,j}$ is not a zero matrix. Hence, $\mathbb{H}_{2}$ is the indicator matrix of $\mathbb{H}_{n}=\left[H_{n;i,j}\right]_{p^{2}\times p^{2} }$.
Let $\mathbb{H}_{n}^{m}=\left[H_{m,n;\alpha_{1};\alpha_{2}}\right]_{p^{2}\times p^{2} }$. Since $(S_{m;\bar{\alpha},\bar{\alpha}})_{K,L}=1$,
\begin{equation*} H_{m,n;\bar{\alpha};\bar{\alpha}} = \underset{k,l=1}{\overset{p^{m-1}}{\sum}} (S_{m;\bar{\alpha};\bar{\alpha}})_{k,l} H_{m,n-1;\bar{\alpha}}^{(l)} \geq H_{m,n-1;\bar{\alpha}}^{(L)}. \end{equation*} Since $H_{m,n-1;\bar{\alpha}}^{(L)}$ is primitive, $H_{m,n;\bar{\alpha};\bar{\alpha}}$ is primitive. Notably, $H_{m,n;\bar{\alpha};\bar{\alpha}}$ is on the diagonal of $\mathbb{H}_{n}^{m}$. Therefore, from Lemma \ref{lemma:4.9}, $\mathbb{H}_{n}$ is primitive for all $n\geq 3$. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
$\mathbb{H}$ (or $\mathbb{V}$) may have an invariant diagonal cycle and $\mathbb{V}$ (or $\mathbb{H}$) may have primitive commutative cycles. Therefore, combining these two conditions for primitivity, the following theorem provides a finitely sufficient condition for topological mixing of $\Sigma(\mathcal{B})$ when $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ are non-degenerated.
\begin{theorem} \label{theorem:5.7} Given $\mathcal{B}\subset\Sigma_{2 \times 2}(p)$, if \begin{enumerate} \item[(i)] $\mathbb{H}_{2}(\mathcal{B})$ and $\mathbb{V}_{2}(\mathcal{B})$ are non-degenerated, and \item[(ii)] $\mathcal{B}$ satisfies the conditions of Theorem \ref{theorem:4.11-1} or \ref{theorem:5.3} for $\mathbb{H}_{n}$ and $\mathbb{V}_{n}$, \end{enumerate} then $\Sigma(\mathcal{B})$ is topologically mixing. \end{theorem}
\textit{Proof.} Combining Theorems 3.6, 4.4 and 4.8 yields the result immediately. \hspace{0.5cm} $\square$ \medbreak
The following example illustrates the application of Theorem \ref{theorem:5.7}.
\begin{example} \label{example:5.8} (continued)
In Example \ref{example:4.12}, $\mathbb{H}_{2}(\mathcal{B})$ has an invariant diagonal cycle that satisfies condition (iii) of Theorem \ref{theorem:4.11-1}. However, $\mathbb{V}_{2}(\mathcal{B})$ does not have such an invariant diagonal cycle when $m\leq 7$, but it does have a primitive commutative cycle pair with $m=7$ that satisfies condition (ii) of Theorem \ref{theorem:5.3} as follows. \begin{equation*} \mathbb{V}_{2}(\mathcal{B})= \left[ \begin{array}{cccc} 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \\ 1& 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{array} \right]. \end{equation*} Clearly, $\mathbb{V}_{2}(\mathcal{B})$ is non-degenerated.
Let $I_{5}=211212$ and $J_{2}=222$. It can be easily verified that
\begin{equation*} V^{(12)}_{7,2;4}=V_{2;2,1}V_{2;1,1}V_{2;1,2}V_{2;2,1}V_{2;1,2}V_{2;2,2}V_{2;2,2} = \left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right] \end{equation*} and
\begin{equation*} V^{(51)}_{7,2;4}=V_{2;2,2}V_{2;2,2}V_{2;2,1}V_{2;1,1}V_{2;1,2}V_{2;2,1}V_{2;1,2} = \left[ \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right] \end{equation*} are primitive. Hence, $(I_{5}J_{2})$ and $(J_{2}I_{5})$ form a $V$-primitive commutative cycle pair with index $\langle 7,4;12,51\rangle$. Moreover, \begin{equation*} \left(W_{7;4,4}\right)_{12,51}=1. \end{equation*}
Therefore, combining the result in Example \ref{example:4.12} and by Theorem \ref{theorem:5.7}, $\Sigma(\mathcal{B})$ is topologically mixing. \end{example}
Under the local crisscross-extendibility and local corner-extendable conditions, Theorems \ref{theorem:4.11-1}, \ref{theorem:5.3} and \ref{theorem:5.7} can be generalized to some degenerated cases in which $\mathbb{H}_{2}$ or $\mathbb{V}_{2}$ contains zero rows or columns. For completeness, the weakly non-degenerated case is introduced below.
\begin{definition} \label{definition:4.53} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, $\mathbb{H}_{2}(\mathcal{B})=[H_{2;i,j}]_{p\times p}$ is weakly non-degenerated if \begin{enumerate} \item[(i)] when both $H_{2;i,j_{1}}$ and $H_{2;i,j_{2}}$ are not zero matrices, $1\leq i,j_{1},j_{2}\leq p$, \begin{equation*} r(H_{2;i,j_{1}})=r(H_{2;i,j_{2}}), \text{and} \end{equation*}
\item[(ii)] when both $H_{2;i_{1},j}$ and $H_{2;i_{2},j}$ are not zero matrices, $1\leq i_{1},i_{2},j \leq p$, \begin{equation*} c(H_{2;i_{1},j})=c(H_{2;i_{2},j}). \end{equation*} \end{enumerate} Weak non-degeneracy of $\mathbb{V}_{2}(\mathcal{B})$ is defined analogously. \end{definition} Similar to Theorem \ref{theorem:3.6}, if $\mathcal{B}$ is locally crisscross-extendable and $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ are weakly non-degenerated, then $\mathcal{B}$ satisfies the local corner-filling conditions C(1), C(2) and C(4). For brevity, the details of the proof are omitted.
The following definition is introduced to enable the primitivity of compressible matrices to be easily expressed.
\begin{definition} \label{definition:4.70}
If $A=[a_{i,j}]_{n\times n}$ is a matrix with $a_{i,j}\in\{0,1\}$, the associated saturated matrix $\mathbb{E}(A)=[e_{i,j}]_{n\times n}$ of $A$ is defined by
\begin{equation}\label{eqn:4.6} \left\{ \begin{array}{rl} e_{i,j}=0 & \hspace{1.0cm}\text{if }\underset{k=1}{\overset{n}{\sum}}a_{i,k}=0\text{ or }\underset{k=1}{\overset{n}{\sum}}a_{k,j}=0 , \\ & \\ e_{i,j}=1 &\hspace{1.0cm} \text{otherwise.} \ \end{array} \right. \end{equation} \end{definition} Clearly, given $A=[a_{i,j}]_{n\times n}$ with $a_{i,j}\in\{0,1\}$, if there exists $N\geq 1$ such that $A^{N}\geq \mathbb{E}(A)$, then $A$ is weakly primitive (weakly $N$-primitive); here, if $B=[b_{i,j}]_{n\times n}$ and $C=[c_{i,j}]_{n\times n}$ are two matrices, $B\geq C$ means $b_{i,j}\geq c_{i,j}$ for all $1\leq i,j\leq n$.
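The saturated matrix $\mathbb{E}(A)$ of (\ref{eqn:4.6}) and the weak-primitivity test $A^{N}\geq\mathbb{E}(A)$ are likewise directly computable. A minimal sketch (Python; the matrix $A$ below is hypothetical and the search bound is a heuristic choice for the sketch) builds $\mathbb{E}(A)$ from the zero rows and columns of $A$ and searches for a suitable exponent $N$.

```python
def saturate(A):
    """Saturated matrix E(A): e[i][j] = 0 iff row i or column j of A is zero."""
    n = len(A)
    row_ok = [any(A[i][k] for k in range(n)) for i in range(n)]
    col_ok = [any(A[k][j] for k in range(n)) for j in range(n)]
    return [[1 if row_ok[i] and col_ok[j] else 0 for j in range(n)] for i in range(n)]

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_weakly_primitive(A, max_power=None):
    """Search for an exponent N with A**N >= E(A) entrywise."""
    n = len(A)
    E = saturate(A)
    if max_power is None:
        max_power = (n - 1) ** 2 + 1   # heuristic search bound for this sketch
    P = A
    for _ in range(max_power):
        if all(P[i][j] >= E[i][j] for i in range(n) for j in range(n)):
            return True
        P = mat_mult(P, A)
    return False

# Hypothetical matrix with one zero row and column (illustration only).
A = [[1, 1, 0],
     [1, 0, 0],
     [0, 0, 0]]
print(saturate(A))             # [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
print(is_weakly_primitive(A))  # True: A^2 >= E(A)
```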
The following theorem can be obtained by an argument similar to that used in the non-degenerated case; the details of the proof are omitted. Notably, shift spaces such as the Golden Mean shift space can be applied to the following theorem.
\begin{theorem} \label{theorem:4.210} Given $\mathcal{B}\subset\Sigma_{2 \times 2}(p)$, if \begin{enumerate} \item[(i)] $\mathbb{H}_{2}(\mathcal{B})$ is weakly non-degenerated,
\item[(ii)] $\mathcal{B}$ is locally crisscross-extendable,
\item[(iii)] there exists an $S$-invariant diagonal cycle $\overline{\beta}_{q}=\beta_{1}\beta_{2}\cdots\beta_{q}\beta_{1}$ of order $(m,q)$ with its invariant index set $\mathcal{K}$,
\item[(iv)] for $2\leq n\leq q+1$, there exists $a=a(n)\geq 1$ such that \begin{equation*} \left(\underset{l\in\mathcal{K}}{\sum}H_{m,n;\beta_{1}}^{(l)}\right)^{a}\geq \mathbb{E}\left(H_{n;\beta_{1}}\right), \text{ and} \end{equation*}
\item[(v)] $\mathbb{H}_{n}$ is weakly primitive for $2\leq n\leq q+1$,
\end{enumerate} then $\mathbb{H}_{n}$ is weakly primitive for all $n\geq 2$. Moreover, if $\mathbb{V}_{2}(\mathcal{B})$ also satisfies the conditions similar to (i)$\sim$(v), then $\Sigma(\mathcal{B})$ is topologically mixing. \end{theorem}
\numberwithin{equation}{section}
\section{Strong specification}
\label{sec:6} \hspace{0.5cm}
This section introduces the $k$ hole-filling condition ($($HFC$)_{k}$), and provides finitely sufficient conditions for the strong specification of $\Sigma(\mathcal{B})$.
The main idea of finding sufficient conditions for strong specification is presented as follows. Clearly, strong specification is stronger than topological mixing. Apart from the processes in Fig. 3.1, which are associated with the situation in which regions $R_{1}$ and $R_{2}+\mathbf{v}$ are far away, the case in which one pattern is enclosed in another pattern, as in Fig. 5.1, must be studied.
\begin{equation*} \psfrag{a}{{\footnotesize $U_{1}$}} \psfrag{b}{{\footnotesize$U_{2}$}} \psfrag{c}{} \includegraphics[scale=1.0]{Fig1_2.eps} \end{equation*} \begin{equation*} \text{Figure 5.1.} \end{equation*}
Notably, in a study of topological mixing, Fig. 5.1 does not need to be considered because the separation distance can be chosen to be sufficiently large. However, in studying strong specification, $U_{1}$ and $U_{2}$ cannot be removed since the relative positions of $R_{1}$ and $R_{2}$ are fixed. Now, the sufficient condition is imposed to ensure that the gluing of $U_{1}$ and $U_{2}$ can be completed by the following two processes.
\begin{enumerate} \item[Step (S-1):] Extend $U_{1}$ horizontally and vertically to form a crisscross pattern that touches $U_{2}$, as presented in Fig. 5.2. \end{enumerate}
\begin{enumerate} \item[Step (S-2):] Fill the holes that are surrounded by the rectangularly annular lattice to form a rectangular pattern, as presented in Fig. 5.3. \end{enumerate}
\begin{equation*} \begin{array} {cccccc} \psfrag{a}{{\footnotesize $U_{1}$}} \psfrag{b}{{\footnotesize $U_{2}$}} \psfrag{c}{{\tiny $(S-1)$}} \includegraphics[scale=1.0]{Fig1_3.eps}& & & & & \hspace{1.0cm}\psfrag{a}{{\tiny $(S-2)$}} \includegraphics[scale=1.0]{Fig1_4.eps} \\&&&&&\\ \text{Figure 5.2.} && & & & \hspace{1.0cm}\text{Figure 5.3.} \end{array} \end{equation*} Then, repeat Step $(3)$ for topological mixing in Section 4 to extend the rectangular pattern to a global pattern on $\mathbb{Z}^{2}$.
The hole-filling condition in Step (S-2) is closely related to the extension property called square filling \cite{40-1,40-2}. In the following, the hole-filling condition is introduced.
First, for $M,N\geq 1$ and $i,j\in\mathbb{Z}$, the rectangularly annular lattice $\mathcal{A}_{M \times N;d}((i,j))$ with hole $\mathbb{Z}_{M\times N}((i,j))$ and width $d$ (called the annular lattice for short) is defined by
\begin{equation}\label{eqn:6.1} \mathcal{A}_{M \times N;d}((i,j))=\mathbb{Z}_{(M+2d)\times (N+2d)}((i-d,j-d))\setminus \mathbb{Z}_{M\times N}((i,j)). \end{equation} For brevity, let
\begin{equation}\label{eqn:6.2} \begin{array}{ccc} \mathcal{A}_{M \times N}((i,j))=\mathcal{A}_{M \times N;2}((i,j)) & \text{and} & \mathcal{A}_{M \times N}=\mathcal{A}_{M \times N}((0,0)). \end{array} \end{equation}
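The annular lattice of (\ref{eqn:6.1}) is a finite set of lattice points and can be enumerated directly. A minimal sketch (Python), assuming the convention $\mathbb{Z}_{M\times N}((i,j))=\{(i+s,j+t): 0\leq s<M,\ 0\leq t<N\}$:

```python
def Z_rect(M, N, i, j):
    """Rectangular lattice Z_{M x N}((i, j)) anchored at (i, j) (assumed convention)."""
    return {(i + s, j + t) for s in range(M) for t in range(N)}

def annulus(M, N, d, i, j):
    """Rectangularly annular lattice A_{M x N; d}((i, j)) of (6.1)."""
    return Z_rect(M + 2 * d, N + 2 * d, i - d, j - d) - Z_rect(M, N, i, j)

# The annulus always has (M + 2d)(N + 2d) - M*N points:
print(len(annulus(2, 2, 2, 0, 0)))  # 32 = 6*6 - 2*2
```

In particular, $\mathcal{A}_{M\times N;d}((i,j))$ contains $(M+2d)(N+2d)-MN$ points and is disjoint from its hole $\mathbb{Z}_{M\times N}((i,j))$.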
The hole-filling condition is defined as follows.
\begin{definition} \label{definition:6.1} For $\mathcal{B}\subset\Sigma_{2\times 2}(p)$ and $k\geq 2$, $\mathcal{B}$ satisfies the $k$ hole-filling condition ($($HFC$)_{k}$) with size $(M,N)$, $M,N\geq 2k-3$, if for every $\mathcal{B}$-admissible pattern $U$ on $\mathcal{A}_{M \times N}$ that can be extended to $\mathcal{A}_{(M+4-2k) \times (N+4-2k);k}((k-2,k-2))$ using the local patterns in $\mathcal{B}$, $U$ can completely fill its hole using the patterns in $\mathcal{B}$; see Fig. 5.4. $($HFC$)_{2}$ is also called the hole-filling condition (HFC).
\end{definition}
\begin{equation*} \begin{array}{ccccc} \hspace{-1.2cm}\psfrag{a}{{\tiny $\mathcal{A}_{(M+4-2k) \times (N+4-2k);k}((k-2,k-2))$}} \psfrag{b}{{\tiny $\mathcal{A}_{M \times N}$}} \psfrag{k}{{\footnotesize $k$}} \psfrag{m}{{\footnotesize $M$}} \psfrag{n}{{\footnotesize $N$}} \includegraphics[scale=0.7]{squarefilling.eps} & & & & \hspace{0.5cm} \psfrag{a}{{\tiny $$}} \psfrag{b}{{\tiny $$}} \psfrag{c}{{\tiny $\xi_{1}$}} \psfrag{d}{{\tiny $\eta_{1}$}} \psfrag{e}{{\tiny $\xi_{2}$}} \psfrag{f}{{\tiny $\eta_{2}$}} \psfrag{g}{{\tiny $\xi_{N_{1}}$}} \psfrag{h}{{\tiny $\eta_{N_{1}}$}} \psfrag{j}{{\tiny $$}} \psfrag{k}{{\tiny $$}} \psfrag{m}{{\tiny $M$}} \psfrag{n}{{\tiny $N$}} \psfrag{o}{{\tiny $i_{1}$}} \psfrag{p}{{\tiny $i_{M}$}} \psfrag{q}{{\tiny $j_{1}$}} \psfrag{r}{{\tiny $j_{M}$}} \psfrag{s}{{\tiny $\cdots$}} \psfrag{t}{{\tiny $\vdots$}} \psfrag{u}{{\tiny $(3)$}} \psfrag{v}{{\tiny $(2)$}} \psfrag{w}{{\tiny $(4)$}} \psfrag{x}{{\tiny $(1)$}} \includegraphics[scale=1.0]{annular.eps} \\ & & & & \\ \text{Figure 5.4.} & & & & \text{Figure 5.5.} \end{array} \end{equation*}
Notably, the $k$ hole-filling condition (HFC$)_{k}$, $k\geq 3$, is weaker than HFC. $($HFC$)_{k}$ can be expressed in terms of the horizontal transition matrices $\mathbb{H}_{n}$ and the connecting operators $S_{m;\alpha,\beta}$ and $W_{m;\alpha,\beta}$. Therefore, the condition $($HFC$)_{k}$ can be easily checked, especially using computer programs. The following theorem concerns only the case in which $\mathcal{B}$ satisfies HFC when $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ are non-degenerated; for brevity, the general case in which $\mathcal{B}$ satisfies $($HFC$)_{k}$, $k\geq 3$, is omitted.
\begin{theorem} \label{theorem:6.2} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, suppose $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ are non-degenerated. For $M,N\geq 1$, $\mathcal{B}$ satisfies HFC with size $(M,N)$ if and only if for $\alpha_{1},\alpha_{2},\cdots ,\alpha_{N+2}\in\{1,2,\cdots,p^{2}\}$,
\begin{equation}\label{eqn:6.5} S_{M+1;\alpha_{1},\alpha_{2}}S_{M+1;\alpha_{2},\alpha_{3}}\cdots S_{M+1;\alpha_{N+1},\alpha_{N+2}}>0. \end{equation} \end{theorem}
\textit{Proof.} Since $\mathbb{H}_{2}$ and $\mathbb{V}_{2}$ are non-degenerated, from Theorem \ref{theorem:3.5}, $\mathcal{B}$ is rectangle-extendable. Let $N_{1}=N+2$. For any $i_{k}$, $j_{k}$, $\xi_{l}$ and $\eta_{l}\in \mathcal{S}_{p}$, $1\leq k\leq M$ and $1\leq l\leq N_{1}$, a $\mathcal{B}$-admissible pattern can be produced on $\mathcal{A}_{M\times N}$; see Fig. 5.5. Therefore, by the construction of connecting operators, (\ref{eqn:6.5}) is equivalent to the condition for filling the hole of size $(M,N)$. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
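Condition (\ref{eqn:6.5}) is a finite check over all sequences $\alpha_{1},\alpha_{2},\cdots,\alpha_{N+2}$ and is therefore well suited to verification by computer, as noted above. A brute-force sketch (Python; the toy connecting operators $S$ below are hypothetical, and `$>0$' is interpreted as the product matrix being entrywise positive):

```python
from itertools import product

def mat_mult(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def hfc_products_positive(S, num_symbols, chain_len):
    """Check that S[a1][a2] S[a2][a3] ... is entrywise positive for every
    sequence a1, ..., a_{chain_len}; S[a][b] are nonnegative square matrices."""
    for seq in product(range(num_symbols), repeat=chain_len):
        P = S[seq[0]][seq[1]]
        for a, b in zip(seq[1:], seq[2:]):
            P = mat_mult(P, S[a][b])
        if not all(x > 0 for row in P for x in row):
            return False
    return True

# Hypothetical connecting operators for two symbols (illustration only).
S = {0: {0: [[1, 1], [1, 0]], 1: [[1, 1], [0, 1]]},
     1: {0: [[1, 0], [1, 1]], 1: [[0, 1], [1, 1]]}}
print(hfc_products_positive(S, 2, 4))  # True: every length-3 product is positive
```

For HFC with size $(M,N)$ one would take the operators $S_{M+1;\alpha,\beta}$ and a chain of length $N+2$; the cost grows as $(p^{2})^{N+2}$, which is feasible for the small sizes used in the examples below.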
Now, the following theorem provides sufficient conditions for strong specification.
\begin{theorem} \label{theorem:6.4} Given $\mathcal{B}\subset\Sigma_{2\times 2}(p)$, if there exists $k\geq 2$ such that \begin{enumerate}
\item[(i)] $r(\mathbb{H}_{k})=c(\mathbb{H}_{k})$ and $r(\mathbb{V}_{k})=c(\mathbb{V}_{k})$,
\item[(ii)] $\mathcal{B}$ satisfies $($HFC$)_{k}$ with size $(M,N)$ for some $M,N\geq 2k-3$, and
\item[(iii)] $\mathbb{H}_{k}$ is weakly $(M-2k+5)$-primitive and $\mathbb{V}_{k}$ is weakly $(N-2k+5)$-primitive, \end{enumerate} then $\Sigma(\mathcal{B})$ has strong specification. \end{theorem}
\textit{Proof.} Let $M'=M-k+4$ and $N'=N-k+4$. First, define the lattice $\mathbb{L}_{g;k}=\mathbb{L}_{g;k}(M,N)$, which is like the grid on a checkerboard with line width $k$ and $(M+4-2k)\times (N+4-2k)$ blank spaces, as
\begin{equation*} \mathbb{L}_{g;k}=\underset{i,j\in\mathbb{Z}}{\bigcup}\mathcal{A}_{(M+4-2k) \times (N+4-2k);k}((iM'+k-2,jN'+k-2)). \end{equation*} Denote the lattice of blank spaces on the checkerboard by
\begin{equation*} \mathbb{L}_{b;k}=\mathbb{Z}^{2}\setminus \mathbb{L}_{g;k}. \end{equation*}
\begin{equation*} \begin{array}{l} \psfrag{a}{$\cdots$} \psfrag{b}{$\vdots$} \psfrag{c}{$(0,0)$} \includegraphics[scale=0.4]{chess_line_3.eps} \\ \\ \text{(a) The shadowed lattice is }\mathbb{L}_{g;3}\text{ for }M=N=4. \hspace{1.2cm} \text{(b) The white lattice is }\mathbb{L}_{b;3}\text{ for }M=N=4. \end{array} \end{equation*} \begin{equation*} \text{Figure 5.6.} \end{equation*}
For $i,j\in\mathbb{Z}$, define
\begin{equation*} \left\{ \begin{array}{l} \mathbb{L}(i,j)=\mathbb{L}_{k;M,N}(i,j)=\mathbb{Z}_{M'\times N'}\left(\left(iM'-2,jN'-2\right)\right) \\ \\ \widehat{\mathbb{L}}(i,j)=\widehat{\mathbb{L}}_{k;M,N}(i,j)=\mathbb{Z}_{(M+4)\times(N+4)}\left(\left(iM'-2,jN'-2\right)\right). \end{array} \right. \end{equation*}
\begin{equation*} \begin{array}{c} \psfrag{a}{{\footnotesize $\left(5i-2,5j-2\right)$}} \psfrag{b}{ $\mathbb{L}_{3;4,4}(i,j)=$} \psfrag{c}{ $\widehat{\mathbb{L}}_{3;4,4}(i,j)=$} \psfrag{d}{and}
\includegraphics[scale=0.6]{L_bar1.eps} \end{array} \end{equation*}
\begin{equation*} \text{Figure 5.7. The lattices }\mathbb{L}_{3;4,4}(i,j)\text{ and }\widehat{\mathbb{L}}_{3;4,4}(i,j). \end{equation*}
Clearly, $\widehat{\mathbb{L}}(i,j)\supset\mathbb{L}(i,j)$ and $\mathbb{L}(i_{1},j_{1})\bigcap\mathbb{L}(i_{2},j_{2})=\emptyset$ if $(i_{1},j_{1})\neq (i_{2},j_{2})$. Then, the $\mathbb{Z}^{2}$ lattice can be decomposed into disjoint sublattices:
\begin{equation*} \mathbb{Z}^{2} =\underset{i,j\in\mathbb{Z}}{\bigcup}\mathbb{L}(i,j). \end{equation*}
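The disjointness and covering of the sublattices $\mathbb{L}(i,j)$ can be verified on any finite window. A minimal sketch (Python) for the case $k=3$ and $M=N=4$, so that $M'=N'=5$ and $\mathbb{L}(i,j)=\mathbb{Z}_{5\times 5}((5i-2,5j-2))$:

```python
def L(i, j, Mp=5, Np=5):
    """Sublattice L(i, j) = Z_{M' x N'}((i*M' - 2, j*N' - 2)) with M' = N' = 5."""
    x0, y0 = i * Mp - 2, j * Np - 2
    return {(x0 + s, y0 + t) for s in range(Mp) for t in range(Np)}

# Every point of a finite window lies in exactly one block L(i, j).
window = {(x, y) for x in range(-10, 11) for y in range(-10, 11)}
coverage = {}
for i in range(-4, 5):
    for j in range(-4, 5):
        for p in L(i, j) & window:
            coverage[p] = coverage.get(p, 0) + 1
print(all(coverage.get(p, 0) == 1 for p in window))  # True: the blocks tile Z^2
```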
Take \begin{equation}\label{eqn:6.7} \bar{d}=3\sqrt{(M')^{2}+(N')^{2}}. \end{equation} Let $R_{1},R_{2}\subset \mathbb{Z}^{2}$ with $d(R_{1},R_{2})\geq\bar{d}$. For any $U_{l}=\Pi_{R_{l}}(W_{l})$ with $W_{l}\in\Sigma(\mathcal{B})$, $l=1,2$, let
\begin{equation*} R_{l}'=\underset{(i',j')\in\mathbb{Z}_{2\times 2}((i-1,j-1))}{\underset{R_{l}\bigcap \mathbb{L}(i,j)\neq\emptyset}{\bigcup}}\widehat{\mathbb{L}}(i',j') \end{equation*} for $l=1,2$. Hence, $U_{l}$ can be extended as $U_{l}'=\Pi_{R_{l}'}(W_{l})$, $l=1,2$. Clearly, for $l=1,2$,
\begin{equation}\label{eqn:6.7-1} \text{if } (i,j)\in R_{l}, \text{ then } \mathbb{Z}_{(2k+1)\times(2k+1)}((i-k,j-k))\subseteq R_{l}'. \end{equation}
From (\ref{eqn:6.7}), it can be verified that $\widehat{\mathbb{L}}(i,j)\bigcap R_{1}'\neq\emptyset$ and $\widehat{\mathbb{L}}(i,j)\bigcap R_{2}'\neq\emptyset$ never both occur for all $(i,j)\in\mathbb{Z}^{2}$.
Now, from conditions (i) and (iii), there exists a $\mathcal{B}$-admissible pattern $U''$ on $R_{1}'\bigcup R_{2}' \bigcup \mathbb{L}_{g;k}$ such that $U''\mid_{R_{l}'}=U_{l}'$, $l=1,2$. Clearly, $\mathbb{Z}^{2}\setminus\left(R_{1}'\bigcup R_{2}' \bigcup \mathbb{L}_{g;k}\right)$ is the union of the discrete $(M+4-2k)\times (N+4-2k)$ rectangular lattices.
Hence, from (\ref{eqn:6.7-1}) and condition (ii), there exists $W\in\Sigma(\mathcal{B})$ such that $W\mid_{R_{i}}=U_{i}$ for $i=1,2$. Notably, in general, $W\mid_{R_{1}'\bigcup R_{2}' \bigcup \mathbb{L}_{g;k} }$ is not equal to $U''$ since condition (ii) may change the colors on the boundary of $R_{1}'\bigcup R_{2}' \bigcup \mathbb{L}_{g;k}$ with width $k-2$. Therefore, $\Sigma(\mathcal{B})$ has strong specification. The proof is complete. \hspace{0.5cm} $\square$ \medbreak
Theorem \ref{theorem:6.4} clearly applies to certain non-degenerated cases, including the Golden Mean shift. The Golden Mean shift is known to have a safe symbol $0$ and strong specification; see \cite{50}. The Golden Mean shift can also be shown to satisfy HFC with size $(1,1)$ and to have strong specification by Theorem \ref{theorem:6.4}.
The following well-known example of Burton and Steif \cite{12,13} is introduced for the further application of Theorem \ref{theorem:6.4}. This example is closely related to the ferromagnetic Ising model in statistical physics.
\begin{example} \label{example:6.6} Consider the color set $\mathcal{S}_{4}'=\{-2,-1,1,2\}$. The rule of $\mathbf{X}_{BS}\subseteq \mathcal{S}_{4}'^{\mathbb{Z}^{2}}$ is that a negative number is not allowed to sit next to a positive number unless both are $\pm 1$. To fit $\mathcal{S}_{4}'$ to the color set $\mathcal{S}_{4}=\{0,1,2,3\}$ used in this work, $-2$, $-1$, $1$ and $2$ are replaced with $0$, $1$, $2$ and $3$, respectively. It can be proven that $\mathcal{B}_{BS}$ satisfies HFC with size $(2,2)$; the details are omitted. Therefore, by Theorem \ref{theorem:6.4}, $\Sigma(\mathcal{B}_{BS})$ has strong specification.
\end{example}
For $p=2$, the size $(M,N)$ of the hole-filling condition can be larger, as in the following example.
\begin{example} \label{example:6.13} From Theorem \ref{theorem:6.2}, it can be verified that \begin{equation*} \mathbb{H}_{2}(\mathcal{B}_{1})= \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1& 0 & 1 & 1 \\ 1 & 1 & 1 & 0 \end{array} \right] \end{equation*} satisfies HFC with size $(3,3)$ and
\begin{equation*} \mathbb{H}_{2}(\mathcal{B}_{2})= \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1& 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{array} \right] \end{equation*} satisfies HFC with size $(4,4)$. Therefore, by Theorem \ref{theorem:6.4}, both can be shown to have strong specification. Notably, they do not have a safe symbol $0$ or $1$. The details are omitted.
\end{example}
The following example concerns the Diagonally Restricted Golden Mean, which does not satisfy HFC but does satisfy $($HFC$)_{3}$.
\begin{example} \label{example:6.15}(Diagonally Restricted Golden Mean) Consider $\mathcal{S}_{2}=\{0,1\}$ and
\begin{equation*} \mathbb{H}_{2}(\mathcal{B}_{s})=\mathbb{V}_{2}(\mathcal{B}_{s})= \left[ \begin{array}{cccc} 1 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 1& 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right]. \end{equation*}
Since \psfrag{c}{{\scriptsize $1$}} \psfrag{b}{{\scriptsize $0$}} \psfrag{d}{{\scriptsize $1$}} \psfrag{e}{} \includegraphics[scale=0.5]{SL41.eps} (or \psfrag{c}{{\scriptsize $1$}} \psfrag{e}{{\scriptsize $0$}} \psfrag{d}{{\scriptsize $1$}} \psfrag{b}{} \includegraphics[scale=0.5]{SL42.eps}) is forbidden, $\mathcal{B}_{s}$ is easily seen not to satisfy HFC.
The fact that $\mathcal{B}_{s}$ satisfies $($HFC$)_{3}$ with size $(3,3)$ can be verified. Clearly, $r(\mathbb{H}_{3})=c(\mathbb{H}_{3})$, and $\mathbb{H}_{3}=\mathbb{V}_{3}$ is $2$-primitive. Therefore, by Theorem \ref{theorem:6.4}, $\Sigma (\mathcal{B}_{s})$ has strong specification. Notably, $0$ is known to be a safe symbol and strong specification follows immediately.
\end{example}
\begin{equation*} \text{\textbf{Appendix}} \end{equation*}
This appendix recalls the various mixing properties that were described in the Introduction; see Boyle et al. \cite{11}.
\textbf{Definition A.1.}\emph{ Suppose $\Sigma$ is a $\mathbb{Z}^{2}$ shift.} \begin{itemize} \item[(i)]\emph{$\Sigma$ has the uniform filling property (UFP) if a number $M(\Sigma)\geq 1$ exists such that for any two allowable patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}}(\Sigma)$ with $d(R_{1},R_{2})\geq M$, where $R_{1}=\mathbb{Z}_{m\times n}((i,j))$, $m,n\geq 1$ and $(i,j)\in\mathbb{Z}^{2}$, and $R_{2}\subset\mathbb{Z}^{2}$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}}(W)=U_{2}$.}
\item[(ii)]\emph{$\Sigma$ is strongly irreducible if a number $M(\Sigma)\geq 1$ exists such that for any two allowable patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}}(\Sigma)$ with $d(R_{1},R_{2})\geq M$, where $R_{1}\subset\mathbb{Z}^{2}$ is finite and $R_{2}\subset\mathbb{Z}^{2}$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}}(W)=U_{2}$.}
\item[(iii)]\emph{$\Sigma$ is corner gluing if a number $M(\Sigma)\geq 1$ exists such that for any two allowable patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}}(\Sigma)$ with $d(R_{1},R_{2})\geq M$, where $R_{1}=\mathbb{Z}_{m\times n}((i,j))$, $m,n\geq 1$ and $(i,j)\in\mathbb{Z}^{2}$, and $R_{2}=\mathbb{Z}_{m_{1}\times n_{1}}((i+m-m_{1},j+n-n_{1}))\setminus \mathbb{Z}_{m_{2}\times n_{2}}((i+m-m_{2},j+n-n_{2}))$, $m_{1}>m_{2}\geq m+M$ and $n_{1}>n_{2}\geq n+M$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}}(W)=U_{2}$.}
\item[(iv)]\emph{$\Sigma$ is block gluing if a number $M(\Sigma)\geq 1$ exists such that for any two allowable patterns $U_{1}\in\Pi_{R_{1}}(\Sigma)$ and $U_{2}\in\Pi_{R_{2}}(\Sigma)$ with $d(R_{1},R_{2})\geq M$, where $R_{1}=\mathbb{Z}_{m_{1}\times n_{1}}((i_{1},j_{1}))$ and $R_{2}=\mathbb{Z}_{m_{2}\times n_{2}}((i_{2},j_{2}))$, $m_{l},n_{l}\geq1 $ and $(i_{l},j_{l})\in\mathbb{Z}^{2}$, $l\in\{1,2\}$, there exists a global pattern $W\in\Sigma$ with $\Pi_{R_{1}}(W)=U_{1}$ and $\Pi_{R_{2}}(W)=U_{2}$.}
\end{itemize} Notably, (i)$\sim$(iv) were introduced in \cite{11,29-1,47,50}.
Their significance in the classification of mixing properties is discussed as follows. Boyle et al. \cite{11} discussed various mixing properties, including strong irreducibility, the uniform filling property (UFP), corner gluing, block gluing and topological mixing. Figure A.1 presents the range of (HFC$)_{k}$ and these mixing properties for $\mathbb{Z}^{2}$ shifts of finite type. For brevity, the following notation is used. \begin{enumerate} \item[(a)]: $\mathcal{B}$ satisfies (HFC$)_{k}$ and conditions (i) and (iii) of Theorem \ref{theorem:1.2},
\item[(b)]: $\Sigma(\mathcal{B})$ has strong specification,
\item[(c)]: $\Sigma(\mathcal{B})$ has the UFP,
\item[(d)]: $\Sigma(\mathcal{B})$ is corner gluing,
\item[(e)]: $\Sigma(\mathcal{B})$ is block gluing,
\item[(f)]: $\Sigma(\mathcal{B})$ is topologically mixing. \end{enumerate}
\begin{equation*} \psfrag{a}{{\footnotesize (a)}} \psfrag{b}{{\footnotesize (b)}} \psfrag{c}{{\footnotesize (c)}} \psfrag{d}{{\footnotesize (d)}} \psfrag{e}{{\footnotesize (e)}} \psfrag{f}{{\footnotesize (f)}}
\includegraphics[scale=0.4]{range.eps} \end{equation*} \begin{equation*} \text{Figure A.1.} \end{equation*}
Notably, the solid line indicates that the inward property is strictly stronger than the outward one; the dotted line indicates that the inward property is stronger than or equivalent to the outward one. The examples for $(d)\nRightarrow (c)$ and $(e)\nRightarrow (d)$ were given by Boyle et al. \cite{11}. (b) and (c) are not the same for general subshifts \cite{31-1}; the equivalence is still open for shifts of finite type.
With reference to Fig. A.1, Theorem \ref{theorem:1.1} and the primitive results in Section 4 ensure that the weakest case--topological mixing--holds. Theorem \ref{theorem:1.2} ensures that the strongest case--strong specification--holds.
\end{document} | arXiv |
Genome-wide analysis and prediction of genes involved in the biosynthesis of polysaccharides and bioactive secondary metabolites in high-temperature-tolerant wild Flammulina filiformis
Juan Chen1 (ORCID: 0000-0002-5407-8239), Jia-Mei Li1, Yan-Jing Tang1, Ke Ma2, Bing Li1, Xu Zeng1, Xiao-Bin Liu3, Yang Li1, Zhu-Liang Yang3, Wei-Nan Xu4, Bao-Gui Xie4, Hong-Wei Liu2 & Shun-Xing Guo1
Flammulina filiformis (previously known as Asian F. velutipes) is a popular commercial edible mushroom. Many bioactive compounds with medicinal effects, such as polysaccharides and sesquiterpenoids, have been isolated and identified from F. filiformis, but their biosynthesis and regulation at the molecular level remains unclear. In this study, we sequenced the genome of the wild strain F. filiformis Liu355, predicted its biosynthetic gene clusters (BGCs) and profiled the expression of these genes in wild and cultivar strains and in different developmental stages of the wild F. filiformis strain by a comparative transcriptomic analysis.
We found that the genome of the F. filiformis was 35.01 Mb in length and harbored 10,396 gene models. Thirteen putative terpenoid gene clusters were predicted and 12 sesquiterpene synthase genes belonging to four different groups and two type I polyketide synthase gene clusters were identified in the F. filiformis genome. The number of genes related to terpenoid biosynthesis was higher in the wild strain (119 genes) than in the cultivar strain (81 genes). Most terpenoid biosynthesis genes were upregulated in the primordium and fruiting body of the wild strain, while the polyketide synthase genes were generally upregulated in the mycelium of the wild strain. Moreover, genes encoding UDP-glucose pyrophosphorylase and UDP-glucose dehydrogenase, which are involved in polysaccharide biosynthesis, had relatively high transcript levels both in the mycelium and fruiting body of the wild F. filiformis strain.
F. filiformis is enriched in a number of gene clusters involved in the biosynthesis of polysaccharides and terpenoid bioactive compounds and these genes usually display differential expression between wild and cultivar strains, even in different developmental stages. This study expands our knowledge of the biology of F. filiformis and provides valuable data for elucidating the regulation of secondary metabolites in this unique F. filiformis strain.
Flammulina filiformis, also known as enokitake, winter mushroom or golden needling mushroom, is a species endemic to Asia and belongs to the family Physalacriaceae, Agaricales [1]. Previously, F. filiformis from eastern Asia was regarded as Asian F. velutipes or F. velutipes var. filiformis, but recently phylogenetic results based on multi-gene markers and morphological comparisons demonstrated that "F. velutipes" in eastern Asia is not identical to the European winter mushroom F. velutipes and should be treated as a separate species, namely F. filiformis, which includes all cultivated enokitake strains in East Asia and those from South Korea and Japan with genome sequences [2]. Thus, we apply the name "F. filiformis" instead of the Asian F. velutipes in our study.
F. filiformis is one of the most important and popular edible mushrooms available commercially in China. It is widely cultivated and consumed in Asian countries due to its high nutritional value and desirable taste. It has been reported that China is currently the largest producer of F. filiformis, with an annual production of 2.4 million tons [3]. F. filiformis also possesses tremendous pharmaceutical value, and many bioactive constituents have been identified, such as polysaccharides [4,5,6], flavonoids [7], sesquiterpenes, glycosides, proteins, and phenols [8,9,10]. These compounds have been shown to exhibit antitumour, anticancer, anti-atherosclerotic, thrombosis-inhibiting, anti-aging and antioxidant effects [11, 12]. In addition, as a typical white-rot fungus, F. filiformis can effectively degrade lignin and produce alcohol dehydrogenase, thus exhibiting potential for application in bioethanol production [13].
In recent decades, research has mainly focused on the phylogenetic taxonomy [1, 14], genetic diversity [15, 16], nutritional and chemical constituents [17,18,19], pharmacological bioactivity [20, 21] and artificial cultivation of Flammulina spp. [22,23,24]. Most studies have shown that F. filiformis possesses relatively high carbohydrate, protein and amino acid contents and low fat or lipid contents; thus, it was generally recognized as a low-energy delicacy [25]. In addition, bioactive polysaccharides (e.g., glucans and heteropolysaccharides), immunomodulatory proteins (e.g., FIP-fve) and multiple bioactive sesquiterpenes were also isolated and identified from the fermentation broth, mycelia and fruiting bodies of F. filiformis [26]. Tang et al. [12] reviewed the compounds derived from F. filiformis and their diverse biological activities. A growing number of studies on the chemical compounds and biological activities of this mushroom support the view that F. filiformis should be exploited as a valuable resource for the development of functional foods, nutraceuticals and even pharmaceutical drugs [27].
The development of genomic and transcriptomic sequencing technologies has provided the powerful tools to understand the biology of edible mushrooms, including the effective utilization of cultivation substrates (lignocellulose) [28, 29], the mechanism of fruiting body formation and development and adaption to adverse environments, such as high temperature environments or cold-stress conditions [30,31,32]. For example, genome sequencing of the cultivars of F. filiformis from Korea and Japan revealed their high capacity for lignocellulose degradation [28, 33]. Transcriptomic and proteomic analyses of F. filiformis revealed key genes associated with cold- and light-stress fruiting body morphogenesis [34]. These studies provided important information for the breeding and commercial cultivation of F. filiformis.
Recent advances in genome sequencing have revealed that a large number of putative biosynthetic gene clusters (BGCs) are hidden in fungal genomes [35, 36]. Genome mining efforts, aided by bioinformatics software such as antiSMASH, SMURF and PRISM, have also allowed us to understand the silencing or activation of biosynthetic pathways in microbes [37]. For instance, a genome-wide investigation of 66 cosmopolitan strains of Aspergillus fumigatus revealed 5 general types of variation in secondary metabolic gene clusters [38]. Identification of the gene cluster for the tricyclic diterpene antibiotic pleuromutilin at the genome scale increased antibiotic production in Clitopilus passeckerianus [39]; prediction of gene clusters involved in terpenoid/polyketide synthase (PKS) biosynthesis in the medicinal fungus Hericium erinaceus by genome and transcriptome sequencing led to the discovery of a new family of diterpene cyclases in fungi [40, 41]; and identification of a candidate cytochrome P450 gene cluster possibly related to triterpenoid biosynthesis in the medicinal mushroom Ganoderma lucidum by genome sequencing improved the production of effective medicinal compounds [42, 43].
However, little is known about the synthesis and regulation of bioactive secondary metabolites of F. filiformis, despite its popularity as an edible mushroom with a wide spectrum of interesting biological activities. In previous experiments, we collected the wild strain F. filiformis Liu355 from Longling, Yunnan and demonstrated that it could tolerate relatively high temperatures during fruiting body formation (18 °C–22 °C) in the laboratory; its temperature tolerance was superior to that of commercial strains of F. filiformis, which usually produce fruiting bodies at low temperatures (≤15 °C) [16]. The wild strain is thus a potentially important material for future breeding or engineering of new F. filiformis strains, because increased temperature tolerance can save a substantial amount of energy. Most interestingly, the chemical composition of the wild strain differed from that of other commercially cultivated strains of F. filiformis, harboring more unique chemical compounds. A total of 13 new sesquiterpenes with noreudesmane, spiroaxane, cadinane, and cuparane skeletons were isolated and identified from the wild strain Liu355 [9]. Fungi in Basidiomycota can produce diverse bioactive sesquiterpenes, but knowledge about sesquiterpene synthases (STSs) in these fungi remains limited. The identification of sesquiterpene synthases from Coprinus cinereus and Omphalotus olearius provided useful guidance for the subsequent development of in silico approaches for the directed discovery of new sesquiterpene synthases and their associated biosynthetic genes [44].
Thus, the aims of our study were to explore the genetic features of this interesting wild strain of F. filiformis on a genomic scale, to predict the genes or gene clusters involved in the biosynthesis of polysaccharides or secondary metabolites and to profile the expression differences of these candidate genes during the development of F. filiformis. In addition, genes related to its high-temperature tolerance are also discussed. This research will facilitate our understanding of the biology of the wild strain, provide useful datasets for molecular breeding, and support improved production of known and novel compounds through heterologous pathway expression and metabolic engineering in the future.
General features of the F. filiformis genome
Prior to our study, three genomes classified as F. filiformis were available in public databases: the relatively complete genome of strain KACC42780 from Korea and draft genomes of TR19 from Japan and L11 from China (previously named Asian F. velutipes). In this study, we sequenced the genome of a wild strain of F. filiformis by small-fragment library construction and performed a comparative genomic analysis of secondary metabolite gene clusters. The assembled genome of wild F. filiformis was 35.01 Mbp with approximately 118-fold genome coverage. A total of 10,396 gene models were predicted, with an average sequence length of 1445 bp. The genome size and the number of predicted protein-encoding genes were very similar to those of the previously published F. filiformis genomes (Table 1). Functional annotation showed that more than half of the predicted genes were annotated in the NCBI Non-Redundant Protein Sequence Database (NR) (6383 genes), and 5794, 2582, 1972 and 837 genes were annotated in the Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), Clusters of Orthologous Groups (COG) and SwissProt databases, respectively. In addition, the wild F. filiformis genome contained 107 cytochrome P450 family genes and 674 genes encoding secretory proteins.
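The annotation coverage claims above ("more than half" in NR) reduce to simple ratios over the 10,396 predicted gene models. A minimal sketch, using only the counts reported in the text:

```python
# Annotation coverage per database, using the gene counts reported above.
# Only the totals come from the text; the helper is illustrative.
TOTAL_GENES = 10396
annotated = {"NR": 6383, "GO": 5794, "KEGG": 2582, "COG": 1972, "SwissProt": 837}

def coverage(db: str) -> float:
    """Fraction of predicted gene models annotated in a given database."""
    return annotated[db] / TOTAL_GENES

for db, n in annotated.items():
    print(f"{db}: {n} genes ({coverage(db):.1%})")

# NR covers more than half of the predicted genes, as stated.
assert coverage("NR") > 0.5
```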
Table 1 Genomic features of four strains of Flammulina filiformis (=Asian F. velutipes)
Comparative genome analysis of the four strains showed that F. filiformis can be described by a pan-genome consisting of a core genome (4074 genes) shared by all four strains (on average 23.5% of each genome) and a dispensable genome (13,219 genes) (Fig. 1a). A total of 3104 orthologous genes were annotated in the KEGG database, 2722 genes were annotated in the GO database, and 1055 genes were specific to the wild strain Liu355.
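The core/dispensable partition described above is plain set arithmetic over per-strain orthologue sets: the core genome is the intersection across all strains, the dispensable genome is everything else in the union, and strain-specific genes are those absent from every other strain. A minimal sketch with toy placeholder gene sets (not the real orthologue tables):

```python
# Pan-genome partition over four strains; gene IDs are made-up placeholders.
strains = {
    "Liu355":    {"g1", "g2", "g3", "g5"},
    "L11":       {"g1", "g2", "g4"},
    "TR19":      {"g1", "g2", "g5", "g6"},
    "KACC42780": {"g1", "g2", "g4", "g6"},
}

core = set.intersection(*strains.values())   # shared by all four strains
pan = set.union(*strains.values())           # every gene observed anywhere
dispensable = pan - core                     # missing from at least one strain
specific = {
    name: genes - set().union(*(g for n, g in strains.items() if n != name))
    for name, genes in strains.items()       # genes unique to one strain
}

print(sorted(core))                # ['g1', 'g2']
print(sorted(specific["Liu355"]))  # ['g3']
```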
Sample information and Venn diagrams showing the numbers of orthologous genes or differentially expressed genes. a The numbers of orthologous genes among four strains of F. filiformis: L11 (China) in red, TR19 (Japan) in purple, KACC42780 (Korea) in yellow and Liu355 (China) in green. b The samples of the wild and cultivar strains of F. filiformis. Upper row: cultivar strain; lower row: wild strain; from left to right: dikaryotic mycelium (DK), primordium (PD) and fruiting bodies (FB). c The numbers of differentially expressed genes (DEGs) in various comparison groups of F. filiformis. Fruiting body of the wild strain (FB) in blue, primordium of the wild strain in red, monokaryotic mycelium of the wild strain (MK) in green, dikaryotic mycelium of the wild strain (DK) in yellow and dikaryotic mycelium of the cultivar strain of F. filiformis in brown. d Venn diagram showing the numbers of DEGs at adjacent developmental stages of F. filiformis. Blue represents the DEGs of fruiting body (FB) versus primordium (PD) and red represents primordium (PD) versus dikaryotic mycelium (DK) of the wild F. filiformis strain. Abbreviations: MK: monokaryotic mycelium; DK: dikaryotic mycelium; PD: primordium; FB: fruiting body
Functional characteristics of the predicted genes of F. filiformis
Functional annotation in the KEGG database showed that translation was the most represented category among the predicted genes of F. filiformis (253 genes), followed by carbohydrate metabolism (243 genes). Twenty-one genes were involved in terpenoid and polyketide biosynthesis (Additional file 1: Fig. S1).
Transcriptomic analysis and gene expression
We studied gene expression differences across different developmental stages, namely the monokaryotic mycelium (MK), dikaryotic mycelium (DK), primordium (PD) and fruiting body (FB) stages of the wild strain F. filiformis Liu355. Moreover, the DK of the cultivar strain of F. filiformis (CGMCC 5.642) was also subjected to transcriptome sequencing (Fig. 1b). Three biological replicates were prepared for each sample. The clean data per sample averaged 8.07–9.32 Gb. We mapped the clean reads to the genome of F. filiformis Liu355 using HISAT software and obtained a relatively high total mapping rate (92.63%). In addition, expression variation between samples was smallest between the DK and FB stages (average R2 = 0.85) and greatest between the MK of the wild strain and the DK of the cultivar strain (Additional file 2: Fig. S2).
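The between-sample R2 values quoted above are squared Pearson correlations between expression vectors. A minimal sketch of that computation on log-transformed FPKM values; the two expression vectors are illustrative, not the study's data:

```python
# Squared Pearson correlation between two samples' expression profiles.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def log_fpkm(values):
    """log2(FPKM + 1) compresses the dynamic range before correlating."""
    return [math.log2(v + 1) for v in values]

dk = [120.0, 3.5, 0.0, 560.2, 18.9]   # toy dikaryotic mycelium sample
fb = [98.4, 5.1, 0.2, 610.0, 25.3]    # toy fruiting body sample

r2 = pearson_r(log_fpkm(dk), log_fpkm(fb)) ** 2
print(round(r2, 3))
```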
Among the 10,396 gene models of F. filiformis, 9931 were expressed (FPKM > 5) across the four tissues (MK, DK, PD and FB) of the wild strain and the dikaryotic mycelium of the cultivar strain. A total of 6577 genes were commonly expressed in all tissues. One hundred fifty-one genes were specifically expressed in the cultivar strain, and 199, 152, 116 and 46 genes were specifically expressed in the FB, MK, DK and PD of the wild strain, respectively (Fig. 1c). The tissue-specific and highly expressed transcripts in F. filiformis Liu355 are listed in Additional file 3: Table S1. Two genes encoding ornithine decarboxylase (involved in polyamine synthesis) were highly expressed in the mycelium of the cultivar strain (Novel01369, Novel01744), and a gene encoding an oxidoreductase had the highest expression level (gene830, FPKM > 1000). The genes encoding agroclavine dehydrogenase, acetylxylan esterase, β-glucan synthesis-associated protein and arabinogalactan endo-1,4-β-galactosidase were expressed at significantly high levels in the FB of the wild strain, with 20–100-fold changes compared to their expression in the mycelium. Agroclavine dehydrogenase is involved in the biosynthesis of the fungal ergot alkaloid ergovaline [45], and β-glucan synthesis-associated protein is likely linked to the biosynthesis of fungal cell wall polysaccharides. The high expression of these genes indicates that they probably play an important role in fruiting body development and compound enrichment.
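The classification above rests on two simple filters: a gene counts as expressed in a tissue when FPKM > 5, and as tissue-specific when it passes that threshold in exactly one tissue. A minimal sketch, assuming a small made-up FPKM table rather than the study's data:

```python
# Expressed / tissue-specific classification under an FPKM > 5 cutoff.
FPKM_MIN = 5.0
fpkm = {
    "geneA": {"MK": 210.0, "DK": 180.0, "PD": 95.0, "FB": 160.0},
    "geneB": {"MK": 0.3,   "DK": 1.2,   "PD": 0.0,  "FB": 88.0},  # FB-specific
    "geneC": {"MK": 2.1,   "DK": 4.9,   "PD": 3.3,  "FB": 1.0},   # never expressed
}

def expressed_in(gene):
    """Set of tissues where the gene passes the FPKM threshold."""
    return {t for t, v in fpkm[gene].items() if v > FPKM_MIN}

common = [g for g in fpkm if len(expressed_in(g)) == len(fpkm[g])]
specific = {g: next(iter(t))
            for g in fpkm if len(t := expressed_in(g)) == 1}

print(common)    # ['geneA']
print(specific)  # {'geneB': 'FB'}
```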
A total of 5131 genes (51.67%) were up- or downregulated in at least one stage transition, such as from mycelium to primordium (PD vs DK, 3889 genes) and from primordium to fruiting body (FB vs PD, 3308 genes) (Fig. 1d). During primordium formation, 1780 genes were upregulated, and most of these genes were annotated with oxidoreductase activity (GO:0016491), hydrolase activity (GO:0004553) and carbohydrate metabolism (GO:0005975). The downregulated genes were mainly enriched in transmembrane transport (GO:0055085). During fruiting body development, genes related to the fungal-type cell wall (GO:0009277) and the structural constituent of the cell wall (GO:0005199) were upregulated, reflecting the dramatic changes in cell wall structure during the developmental process. In addition, GO term enrichment of differentially expressed genes (DEGs) between the wild strain Liu355 and the cultivar strain CGMCC 5.642 showed that most genes displayed similar expression profiles, but peptide biosynthetic and metabolic processes (GO:0006518; GO:0043043), amide biosynthetic process (GO:0043604) and ribonucleoprotein complex (GO:1901566) were upregulated in the cultivar strain CGMCC 5.642.
KEGG enrichment analysis showed that DEGs involved in glutathione metabolism were significantly enriched in the DK of the wild strain Liu355 compared to the cultivar strain (Fig. 2). Thirty-three DEGs, including genes encoding glutathione S-transferase, ribonucleoside-diphosphate reductase, 6-phosphogluconate dehydrogenase, cytosolic non-specific dipeptidase, gamma-glutamyltranspeptidase, and glutathione peroxidase, participated in this pathway. In addition, during the primordium and fruiting body development stages, the MAPK signaling pathway (45 DEGs) and the starch and sucrose metabolism pathway (26 DEGs) were significantly enriched. Tyrosine metabolism, biosynthesis of secondary metabolites and glycosphingolipid biosynthesis were also significantly enriched at the fruiting body formation stage.
KEGG pathway enrichment analysis of differentially expressed genes (DEGs) during F. filiformis development. Left columns: pathway enrichment at the mycelium stage of the wild strain Liu355 compared to the cultivar strain CGMCC 5.642; middle columns: pathway enrichment at the primordium stage compared to the mycelium stage of the wild strain Liu355; right columns: pathway enrichment at the fruiting body stage compared to the primordium stage. Abbreviations: MK: monokaryotic mycelium; DK: dikaryotic mycelium; PD: primordium; FB: fruiting body
Genes involved in polysaccharide biosynthesis in F. filiformis
We identified a total of 80 genes related to polysaccharide (PS) biosynthesis involved in glycolysis and gluconeogenesis in the KEGG pathway analysis (KEGG map 00010) [46] at the genomic level, including glucose-6-phosphate isomerase (GPI), fructose-1,6-bisphosphatase (FBP), and mannose-6-phosphate isomerase (MPI). Genes encoding zinc-type alcohol dehydrogenase were upregulated both in the mycelium of the wild strain compared to the cultivar strain and in the fruiting body compared to the mycelium of the wild F. filiformis strain (Additional file 4: Fig. S3 and Additional file 5: Table S2). The genes encoding glycerol 2-dehydrogenase (gene9557, gene2028), 7-bisphosphatase (gene2929), alcohol dehydrogenase (gene7891-D2, gene9773-D2) and aryl-alcohol dehydrogenase (gene4871, gene612) were upregulated in the mycelium of the wild strain. The expression level of the gene encoding mannose-1-phosphate guanylyltransferase (GDP) (gene11132-D3) was highest in the mycelium of the wild strain, with a more than 200-fold change compared to that in the mycelium of the cultivar strain. The genes encoding glycerol 2-dehydrogenase (gene894) and sugar phosphatase (gene11052-D2) were upregulated at the fruiting body stage of the wild strain.
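Fold changes such as the reported >200-fold difference are ratios of mean expression values; a pseudocount is commonly added so that near-zero FPKM values do not blow up the ratio. A minimal sketch with illustrative numbers, not the study's measurements:

```python
# Fold change and log2 fold change on mean FPKM with a pseudocount.
import math

PSEUDO = 1.0  # pseudocount guarding against division by zero

def fold_change(fpkm_a, fpkm_b):
    """Fold change of condition A over condition B."""
    return (fpkm_a + PSEUDO) / (fpkm_b + PSEUDO)

def log2fc(fpkm_a, fpkm_b):
    return math.log2(fold_change(fpkm_a, fpkm_b))

# e.g. a hypothetical gene at 804 FPKM in wild-strain mycelium
# versus 3 FPKM in the cultivar mycelium:
fc = fold_change(804.0, 3.0)
print(round(fc, 1), round(log2fc(804.0, 3.0), 2))
assert fc > 200  # exceeds the 200-fold level mentioned above
```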
To identify PS-related genes, several predicted metabolic enzymes related to PS biosynthesis in G. lucidum [47] were also used as queries in homology searches against the F. filiformis genome. We identified 21 putative essential enzymes involved in PS biosynthesis in F. filiformis, including GPI, MPI, UDP-glucose dehydrogenase (UGD), UDP-glucose pyrophosphorylase (UGP), hexokinase, galactokinase and transketolase (Table 2). Among them, genes encoding UGP, UGD and fructose-bisphosphate aldolase (FDA) had relatively high transcript levels in all samples analyzed (FPKM > 100).
Table 2 Putative enzymes involved in PS biosynthesis and their gene expression in F. filiformis
Predicted bioactive secondary metabolite gene clusters of F. filiformis
In total, 13 gene clusters related to terpenoid biosynthesis and two gene clusters for polyketide biosynthesis were predicted in the wild strain of F. filiformis (Fig. 3 and Additional file 6: Table S3). The numbers of gene clusters involved in terpene, PKS and NRPS biosynthesis differed between the wild strain Liu355 and the three sequenced cultivar strains (KACC42780, TR19 and L11), and the number of genes related to terpene synthesis was higher in the wild strain Liu355 (119 genes) than in the cultivar strain L11 (81 genes) (Table 3). We compared the sequence similarity of genes in the predicted terpene and type I PKS gene clusters among the different strains of F. filiformis using blastall v2.2.26 (Additional file 7: Table S4). Gene sequences and gene cluster composition were highly similar between strains, but several individual genes occurred only in the wild strain Liu355 and not in the cultivars L11, TR19 and KACC42780 (e.g., the gene cluster in scaffold 548).
Identification of the 13 putative terpene gene clusters and two polyketide synthase (PKS) gene clusters in the F. filiformis genome by antiSMASH software. Genes with SwissProt functional annotation are marked in red
Table 3 Putative genes and gene clusters related to secondary metabolite biosynthesis in F. filiformis
Putative genes for terpenoid biosynthesis in F. filiformis
The 119 genes of the 13 terpenoid clusters were divided into 10 clades according to their expression levels at different developmental stages of the wild and cultivar strains (Additional file 8: Fig. S4). Most genes in clade II, including those encoding a 4,5-DOPA dioxygenase extradiol-like protein (gene3103) and squalene synthase (gene3428), which is involved in the biosynthesis of squalene, a precursor of terpenoid compounds, were upregulated in the primordium compared to the mycelium of the wild strain Liu355. The genes in clades VIII–X, including key enzymes involved in terpenoid biosynthesis, such as protoilludene synthase, a candidate peroxisomal acyl-coenzyme A oxidase, L-amino acid amidase and cytochrome P450, were significantly differentially expressed in the fruiting body compared with the mycelium of the wild strain Liu355 and in the mycelium of the wild strain Liu355 compared to the cultivar strain CGMCC 5.642. For example, the putative terpene synthase (gene9115) was highly expressed and significantly upregulated in the mycelium of the wild strain compared to the mycelium of the cultivar strain CGMCC 5.642 (1.4-fold change) and in the fruiting bodies compared with the primordium of the wild strain Liu355 (19.4-fold change). Two putative cytochrome P450 genes (gene9114 and gene7212) were also significantly upregulated in the dikaryotic mycelium compared to the monokaryotic mycelium of the wild strain (7.61- and 7.50-fold change, respectively) and compared to the cultivar strain mycelium (2.1- and 1.7-fold change, respectively). This result may explain the greater diversity of bioactive compounds observed in the wild strain relative to the cultivar strain in a previous study.
Putative genes for sesquiterpenoid biosynthesis in F. filiformis
We performed a genome-scale homology search with sesquiterpene synthases of O. olearius, C. cinereus and H. erinaceus against the genomic data of the wild strain Liu355. Twelve homologous sequences with considerable similarity (e-value < 10−5) to known, biochemically characterized sesquiterpene synthases (STSs) were identified in the genome of the wild strain F. filiformis Liu355. These included five genes encoding delta(6)-protoilludene synthase (gene1663-D2, gene9115, gene2784, gene9115-D2 and gene6325-D2), two genes encoding trichodiene synthase (gene1140, gene2254), two genes encoding alpha-muurolene synthase (gene1358-D2, gene1358), and one gene encoding a hypothetical protein (gene3100). Phylogenetic analysis showed that these genes grouped into four clades (Additional file 9: Fig. S5). Five genes from the wild strain Liu355 (gene1140, gene10498-D2, gene4450, gene2785, gene2254) and two genes from the cultivar strain L11 (Fla10, Fla11) of F. filiformis clustered together with the cuprenene synthases Cop6 and Omp8–Omp10 in clade 1. These genes are likely responsible for the 1,6- or 1,7-cyclization of 3R-nerolidyl diphosphate (NPP) (Cop6) or involved in the biosynthesis of α- and β-barbatene (Omp9), compounds known to be produced by fungi and plants, and carotane sesquiterpenes (Omp10). Gene1358-D2 (from the wild strain) and Fla6 (from the cultivar strain) in clade 2 grouped with Omp1 and Omp2 and were speculated to catalyse a 1,10-cyclization of E,E-farnesyl diphosphate (FPP) to yield cadinane, a precursor of sesquiterpenoids with noreudesmane, spiroaxane, cadinane and seco-cuparane skeletons. Based on our transcriptomic data, the expression of gene1358-D2 and gene1358 in the mycelium of the wild F. filiformis strain was significantly upregulated compared to the cultivar strain (1.3- and 2.7-fold change, respectively) (Table 4).
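The e-value < 10^-5 screen above amounts to filtering BLAST hits by their e-value column. A minimal sketch that parses BLAST tabular output (outfmt 6 / legacy -m 8: 12 tab-separated columns, with the e-value in column 11) and keeps hits under the cutoff; the two hit lines are made-up examples, not real search results:

```python
# Filter BLAST tabular hits by e-value, as in the homology screen above.
EVALUE_MAX = 1e-5

blast_lines = [
    # qseqid  sseqid  pident length mism gap qs qe ss se evalue bitscore
    "gene9115\tOmp6\t54.2\t310\t130\t4\t1\t305\t8\t312\t3e-98\t295",
    "gene3100\tCop4\t28.1\t180\t110\t6\t20\t195\t33\t210\t2e-03\t41",
]

def parse_hits(lines, cutoff=EVALUE_MAX):
    """Return (query, subject, evalue) for hits passing the cutoff."""
    hits = []
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        query, subject, evalue = fields[0], fields[1], float(fields[10])
        if evalue < cutoff:
            hits.append((query, subject, evalue))
    return hits

print(parse_hits(blast_lines))  # only the gene9115 vs Omp6 hit passes
```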
The genes from the wild strain (gene6325-D2, gene1663-D2, gene9115-D2 and gene9115) and from the cultivar strain of F. filiformis (Fla2, Fla4, Fla7 and Fla12) that clustered with Omp6 and Omp7 in clade 3 may be capable of catalyzing a 1,11-cyclization of (E,E)-FPP leading to major groups of bioactive sesquiterpenes in Basidiomycota [44]. Gene1358 and Fla9 of clade 4 clustered with Cop4, Omp4, Omp5a and Omp5b and may synthesize major compounds that require the 1,10-cyclization of (3R)-nerolidyl diphosphate (NPP).
Table 4 Expression levels of 12 genes encoding enzymes involved in sesquiterpenoid biosynthesis in F. filiformis
Putative genes for polyketide biosynthesis in F. filiformis
The diverse structures of polyketides are biosynthesized from short-chain carboxylic acid units by polyketide synthases (PKSs), which have been classified into type I, type II and type III based on their product profiles and catalytic domain architecture [48]. By gene cluster prediction using antiSMASH, we found 30 genes in two gene clusters annotated as type I PKSs; these clusters were located on scaffolds 24 and 78 of the F. filiformis genome, respectively (Fig. 4 and Additional file 6: Table S3). Both gene clusters included core genes encoding polyketide synthases (gene8217 and gene1373). The genes located on scaffold 78, including putative polyketide synthases (gene1373, gene1374) and benzoate 4-monooxygenase (gene1372), were markedly upregulated in the wild strain mycelium compared with the cultivar strain mycelium, indicating that polyketide compounds are probably abundant in the mycelia of this mushroom, especially in the wild strain.
Hierarchical clustering analysis of 28 putative heat-shock protein-encoding genes in the F. filiformis genome between the wild strain Liu355 and the cultivar strain CGMCC 5.642 and among four developmental stages of the wild strain Liu355. Expression ratios are plotted in a heatmap on a log2 scale. Red and green indicate up- and downregulation, black represents no significant expression change and grey represents missing data. Abbreviations: MK: monokaryotic mycelium; DK: dikaryotic mycelium; PD: primordium; FB: fruiting body
Cytochrome P450s in the F. filiformis genome
We identified 107 genes in the cytochrome P450 family, including nine putative trichodiene oxygenases, 31 O-methylsterigmatocystin oxidoreductases, five benzoate 4-monooxygenases, two linoleate 10R-lipoxygenases, two ent-kaurene oxidases, lanosterol 14-alpha and flavonoid hydroxylases and other candidate cytochrome P450s. Of these, 102 genes had diverse expression profiles across the different tissues of F. filiformis. Twenty-six CYP450 genes were upregulated in the mycelium of the wild strain compared to the cultivar strain, and a cytochrome P450 gene (gene5820-D3) had the highest expression level, with a more than 500-fold change. Twenty-one CYP450 genes were upregulated at the fruiting body stage compared to the mycelium stage, and genes encoding benzoate 4-monooxygenases had the highest transcript levels, with a 15-fold change. At the primordium stage, the gene encoding docosahexaenoic acid omega-hydroxylase showed the greatest differential expression.
Heat-shock proteins respond to temperature changes in F. filiformis
In our study, the wild strain Liu355 could produce fruiting bodies at 18 °C–22 °C in the laboratory. The heat-shock protein (HSP) family is known to be positively correlated with thermotolerance in organisms [49]. Twenty-eight genes annotated as HSPs were identified in the wild F. filiformis genome (Fig. 4). Among them, six genes were significantly upregulated in the wild strain Liu355 compared to the cultivar strain, including genes encoding HSP12, HSPC4, HSP104, LHS1 and GRP78. HSP12 (gene5359), HSP7F (gene10428), HSP75 (gene4363), HSP71 (gene4424) and HSP60 (gene9277) were differentially upregulated at the beginning of fruiting body formation (PD stage) compared to vegetative growth (DK stage).
Flammulina filiformis is one of the most widely cultivated white-rot fungi at large commercial scale in China. The first cultivar strain of F. filiformis in China was reportedly domesticated from a wild strain isolated in Fujian Province in 1974 [15]. To date, the genomes of three cultivars of F. filiformis, from Japan, Korea and China, have been sequenced. In this study, we sequenced the genome of a wild strain of F. filiformis with abundant sesquiterpenoid compounds and high-temperature tolerance, which was recently collected from Yunnan Province. The genome size (35.01 Mbp) and number of putative genes (10,396) of the wild F. filiformis strain were similar to those of the previously published cultivar genomes (Table 1). Pan-genomic analysis indicated that only 23.5% of orthologous genes were shared among the four strains of F. filiformis (Fig. 1a). This proportion of core genes was similar to that found in a pan-genomic analysis of 23 Corallococcus spp. [50]. The number may be lower than the actual value, possibly because the genomes were sequenced at different depths or assembled and annotated with different methods.
Transcriptomic analysis showed that 30 genes were specifically expressed in the mycelium of the cultivar CGMCC 5.642, including genes encoding ornithine decarboxylase (ODC), N-acetyltransferase and malate dehydrogenase; four genes without functional annotation were specifically expressed in the dikaryotic mycelium of the wild strain F. filiformis (Additional file 3: Table S1). ODC is the first and rate-limiting enzyme in the synthesis of polyamines and is also involved in methyl jasmonate-regulated postharvest quality retention in button mushrooms [51]. The specific expression of these genes in the cultivar strain of F. filiformis is possibly related to human domestication, while the function of the genes specifically expressed in the wild strain of F. filiformis will be studied further.
In addition, genes involved in glutathione metabolism were significantly enriched in the DK of the wild strain Liu355 compared to the cultivar strain CGMCC 5.642. A study of glutathione metabolism in the filamentous fungus Aspergillus nidulans indicated that glutathione itself and glutathione metabolic enzymes play crucial roles in the germination of conidiospores and markedly contribute to the general stress tolerance of the fungus [52]. The high expression of genes related to glutathione metabolism in the wild strain of F. filiformis implies that the strain probably has strong environmental adaptability and may be a better breeding resource.
Polysaccharides (PSs) are important bioactive components of F. filiformis and other edible and medicinal mushrooms [47]. GPI, FBP, UGD and UGP are known to be important enzymes in the PS biosynthetic pathways of edible mushrooms [43, 47]. In our study, genes encoding GPI, FBP and MPI were predicted to be involved in PS biosynthesis in F. filiformis by KEGG enrichment analysis and by homologous protein searches with known enzymes involved in PS biosynthesis in the medicinal mushroom G. lucidum. The gene encoding mannose-1-phosphate guanylyltransferase (GDP) exhibited differential expression in the mycelium of the wild strain Liu355 compared to the cultivar strain CGMCC 5.642, with an over 200-fold change, indicating a potential abundance of PS compounds; the difference in PS content between the wild and cultivar strains of F. filiformis will be determined in a future study.
Besides PSs, sesquiterpene compounds are the main bioactive secondary metabolites in Flammulina. A chemical investigation of six strains (one wild strain and five cultivar strains) of F. filiformis in a previous report revealed that the wild strain Liu355 contained many new sesquiterpenes with various skeletons, including cuparane-type and sterpurane-type sesquiterpenes. Noreudesmane, spiroaxane, cadinane and seco-cuparane sesquiterpenoids were first identified as new compounds in the mycelium of the wild F. filiformis strain, and 12 putative sesquiterpene synthase genes (Fla1–12) were also predicted from the genomic sequence of the cultivar strain F. filiformis L11 [9]. Most of these sesquiterpenes from the wild strain of F. filiformis were derived from 1,10-cyclization of FPP. Thus, the enzymes encoded by gene1358-D2 and gene1358, which clustered with the functionally characterized Omp1 and Omp2 (O. olearius), are probably responsible for the production of sesquiterpenes with the specific skeletons of F. filiformis. The expression levels of gene1358-D2 and gene1358 will be verified in different strains of F. filiformis by quantitative real-time PCR as a next step. In addition, the yields of these sesquiterpene compounds and the reason the wild strain possesses more kinds of sesquiterpene compounds will be further explored in ongoing research.
Comparative studies of filamentous fungal species have shown that secondary metabolism gene clusters are often either highly divergent or uniquely present in one or a handful of species [38]. An investigation of genome-wide within-species variation in 66 cosmopolitan strains of the single species Aspergillus fumigatus revealed that gene clusters exhibit location polymorphisms (a cluster was found to differ in its genomic location across strains), which can affect cluster function [38]. In our study, a preliminary analysis of predicted gene clusters in the four strains revealed that more than 95% of the genes in the predicted terpene and type I PKS clusters of the wild strain Liu355 also existed in the other three strains. However, the genomic locations and functions of these gene clusters need to be verified experimentally in further studies. The number of gene clusters related to secondary metabolism predicted from fungal genome sequencing data is also affected by the sequencing platform, sequencing depth and sample size (only four strains were available in our study); therefore, more strains from different geographic regions will be collected for further analysis.
A temperature downshift (cold stimulation) is considered one of the most important and essential environmental factors for fruiting initiation and fruiting body formation in F. filiformis [34]. In our study, six genes annotated to the heat-shock protein family (HSP12, HSPC4, HSP104, LHS1 and GRP78) displayed significant differential expression in the wild strain Liu355 compared to the cultivar strain. HSP12 belongs to a group of small HSPs that function as chaperone proteins, are ubiquitously involved in nascent protein folding by protecting proteins from misfolding and are partially characterized as stress responsive; expression of the HSP12 protein has been observed in response to cold stress [53]. The expression of HSP104 and HSP70 is regulated by interaction with Hsf (heat-shock factor), which can be stimulated by heat stress in yeast [49]. In addition, an HSP70 chaperone and two putative HSPs were also found to be upregulated only in the primordium or young fruiting body of cultivar F. filiformis using the iTRAQ labeling technique [54]. Our study also predicted that the genes encoding HSP12, HSP71 and HSP60 are probably involved in the formation and differentiation of fruiting bodies of F. filiformis. However, the exact molecular functions of HSPs in the high-temperature tolerance of wild F. filiformis and its adaptive mechanisms for relatively high temperatures need further study.
In our study, genome and transcriptome sequencing and the assembly and annotation of the high-temperature-tolerant wild F. filiformis strain were carried out, and gene clusters associated with polysaccharide, terpenoid and polyketide biosynthesis were predicted. Comparative genomic analysis with three other Asian cultivar strains of F. filiformis revealed that the wild strain has a similar genome size and a relatively larger number of putative genes related to secondary metabolite biosynthesis. Most genes related to terpenoid biosynthesis were upregulated in the primordium and fruiting body of the wild strain, while PKS genes were generally upregulated in the mycelium of the wild strain; however, the specific regulatory pathways involved in these biosynthetic pathways remain unresolved in this study.
Six genes belonging to the HSP family, including HSP12, HSPC4, HSP104, LHS1 and GRP78, were significantly upregulated in the wild strain Liu355 compared to the cultivar strain and may be responsible for the development of fruiting bodies at relatively high temperatures in the high-temperature-tolerant wild F. filiformis strain. However, the expression of these genes in other strains of F. filiformis, especially in strains developing under low-temperature conditions, requires verification in future studies. Our study provides an important genetic dataset for F. filiformis as a potential breeding material and a foundation for enhancing our understanding of the biology of F. filiformis.
Fungal strains and strain culture
The wild strain Liu355 used for genomic sequencing was kindly provided by Prof. H. W. Liu (State Key Laboratory of Mycology, Institute of Microbiology, Chinese Academy of Sciences) and was originally isolated from a fruiting body of F. filiformis collected in Longling, Yunnan Province, southwestern China. Voucher specimens of the F. filiformis fruiting bodies were deposited in the Cryptogamic Herbarium, Kunming Institute of Botany, Chinese Academy of Sciences (HKAS85819). The internal transcribed spacer (ITS) sequence of F. filiformis (HKAS85819) is available under GenBank accession number KP867925 [9]. The haploid monokaryotic strain F. filiformis Liu355 (deposited in the Mycological Laboratory of the Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences) was prepared by the protoplast mononuclear method and grown on potato dextrose agar (PDA) at room temperature for 2–3 weeks in the dark. Fruiting bodies were obtained in sterile plastic bottles containing growth substrate (cotton seed hulls, 78%; wheat bran, 20%; KH2PO4, 0.1%; MgSO4, 0.1%; sucrose, 1%; and ground limestone, 1%; with a moisture content of 70%) at 25 °C for 30 d, followed by cold stimulation at 18 °C and 90% humidity until primordial development occurred. Cultures were then maintained at low temperature (18 °C and 75% humidity) to allow full fruiting body development [55]. In addition, the genomic data of two cultivar strains from Korea (KACC42780, Bioproject PRJNA191921) and Japan (TR19, Bioproject PRJDB4587) were available from the NCBI public database, and the genomic sequence of strain L11 (Bioproject PRJNA191865) was kindly provided by the Mycological Research Center, College of Life Sciences, Fujian Agriculture and Forestry University [56]. The cultivar dikaryotic strain (CGMCC 5.642) was obtained from the China General Microbiological Culture Collection Center (Beijing, China, http://www.cgmcc.net/) and stored in our laboratory.
Genome and transcriptome sequencing and analysis
Total genomic DNA of F. filiformis was extracted from the mononuclear mycelia grown on PDA medium using the Omega E.Z.N.A. fungal DNA midi kit (Omega, USA) according to the manufacturer's instructions. Total DNA was evaluated by agarose gel electrophoresis and quantified with a Qubit 2.0 fluorometer (Thermo Scientific). Library construction and sequencing were performed at Beijing Novogene Bioinformatics Technology Co. Ltd. (China). The quality and quantity of the libraries were checked using an Agilent 2100 Bioanalyzer. The F. filiformis strain was sequenced using 350 bp paired-end (PE) libraries on an Illumina HiSeq 4000 platform with the PE150 method. Sequencing quality of the fastq files was evaluated using FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/), and readfq v10 (https://github.com/lh3/readfq) was also used for sequence quality control. The raw data were filtered by removing low-quality reads, namely reads with an N content higher than 10%, reads in which more than 40% of bases had a quality value below 20, duplicates (identical PE reads) and PE reads containing sequencing adapters (at least 15 bases aligned to the adapter sequence). The remaining high-quality reads were mapped to the reference genome sequence of F. filiformis L11 (Bioproject PRJNA191865) using BWA v0.5.9-r16. Functional annotation of the predicted genes was performed using BLAST against the Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), SwissProt and NCBI Non-Redundant Protein Sequence (NR) databases [40].
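As an illustration of the read-level filters just described, the following Python sketch applies them to a single read. This is not the Novogene pipeline; the function name and exact handling are assumptions that simply mirror the three criteria above (N content, low-quality-base fraction, adapter seed match).

```python
# Hypothetical helper mirroring the QC criteria described above.
def passes_qc(seq: str, quals: list, adapter: str,
              max_n_frac=0.10, min_q=20, max_lowq_frac=0.40,
              adapter_seed=15) -> bool:
    n = len(seq)
    if n == 0:
        return False
    # 1) discard reads with > 10% ambiguous bases (N)
    if seq.upper().count("N") / n > max_n_frac:
        return False
    # 2) discard reads where > 40% of bases have quality < 20
    if sum(q < min_q for q in quals) / n > max_lowq_frac:
        return False
    # 3) discard reads containing a 15-base stretch of the adapter
    if any(adapter[i:i + adapter_seed] in seq
           for i in range(len(adapter) - adapter_seed + 1)):
        return False
    return True
```

Duplicate removal (identical PE read pairs) is a library-level step and is not shown here.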
The term pan-genome was first proposed by Tettelin et al. in 2005 and denotes the entire genomic repertoire of a given phylogenetic clade, encoding all possible lifestyles carried out by its organisms [57, 58]. A pan-genome usually comprises the core genome (essential nucleotide sequences shared by all genomes in the cohort), the dispensable genome (nucleotide sequences shared by a subset of genomes in the cohort) and strain-specific genes (nucleotide sequences existing only within a particular genome in the cohort) [59]. Pan-genome analysis in our study was carried out using the standalone CD-HIT tool to cluster orthologous proteins [60].
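For concreteness, the core/dispensable/strain-specific partition can be sketched in a few lines of Python once ortholog clusters (e.g., parsed CD-HIT output) are represented as sets of strain names. The cluster contents below are invented for illustration.

```python
# Partition ortholog clusters into pan-genome categories.
# clusters: {cluster_id: set of strains containing a member}
def partition_pangenome(clusters: dict, strains: set):
    core, dispensable, specific = [], [], []
    for cid, members in clusters.items():
        if members == strains:
            core.append(cid)          # shared by every genome in the cohort
        elif len(members) == 1:
            specific.append(cid)      # unique to one strain
        else:
            dispensable.append(cid)   # shared by a proper subset
    return core, dispensable, specific
```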
For transcriptomic sequencing, total RNA was extracted using the RNAeasy Plant Mini kit (Qiagen) according to the manufacturer's protocols. Five samples were prepared: the monokaryotic mycelium (MK), dikaryotic mycelium (DK), primordium (PD) and fruiting body (FB) of the wild strain Liu355, and the dikaryotic mycelium of the cultivar strain CGMCC 5.642. Each sample had three biological replicates. All samples were subjected to RNA-Seq on the Illumina HiSeq 2000 platform (Illumina, San Diego, CA, USA). Raw reads in fastq format were first checked with FastQC (http://www.bioinformatics.babraham.ac.uk/projects/fastqc/). Clean reads were then obtained by removing reads containing adapters (added for reverse transcription and sequencing), low-quality reads (more than 50% of bases with a quality score ≤ 5) and reads containing too many unknown bases (poly-N, > 5%). At the same time, the Q20, Q30 and GC content of the clean data were calculated. All downstream analyses were based on these high-quality clean data. The RNA-seq reads were mapped to the F. filiformis genome (Liu355) using TopHat v2.0.12 [61]. HTSeq v0.6.1 was used to count the reads mapped to each gene [62]. FPKM values were used to quantify gene expression, and the upper-quartile algorithm was used for normalization. Differential expression analysis was performed with the DESeq R package (1.10.0) using corrected p-values [63]. Genes with an adjusted P-value < 0.05 were considered differentially expressed. Hierarchical clustering of gene expression was conducted using Genesis 1.7.7 [64].
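The FPKM quantification mentioned above (fragments per kilobase of transcript per million mapped fragments) can be sketched in a few lines of Python. The counts and gene lengths below are toy values; the real pipeline additionally applies the upper-quartile correction, which is not shown.

```python
# Toy FPKM computation from per-gene fragment counts.
def fpkm(counts: dict, lengths_bp: dict) -> dict:
    total = sum(counts.values())  # total mapped fragments in the library
    # FPKM = count / ((length/1e3) * (total/1e6)) = count * 1e9 / (total * length)
    return {g: counts[g] * 1e9 / (total * lengths_bp[g]) for g in counts}
```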
KEGG enrichment analysis of differentially expressed genes (DEGs)
We used KOBAS v2.0 software to test the statistical enrichment of differentially expressed genes (DEGs) in KEGG pathways (https://www.kegg.jp/kegg/). Enrichment of DEGs against all annotated genes was assessed based on the hypergeometric distribution using the formula below, where N is the total number of genes with pathway annotation, n is the number of DEGs among those N genes, M is the number of genes annotated to a specific pathway, and m is the number of DEGs annotated to that pathway. A pathway was defined as significantly enriched when Padj ≤ 0.05.
$$ p=1-\sum \limits_{i=0}^{m-1}\frac{\binom{M}{i}\binom{N-M}{n-i}}{\binom{N}{n}} $$
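This p-value is the upper tail of a hypergeometric distribution, and the formula can be implemented directly with the Python standard library (the function name is ours; the variable names follow the formula above):

```python
from math import comb

def enrichment_pvalue(N: int, n: int, M: int, m: int) -> float:
    """P(X >= m) for X ~ Hypergeometric(N, M, n): the chance of seeing
    at least m pathway genes among n DEGs drawn from N annotated genes."""
    denom = comb(N, n)
    # math.comb returns 0 when the lower index exceeds the upper one,
    # so out-of-range terms vanish automatically.
    return 1.0 - sum(comb(M, i) * comb(N - M, n - i)
                     for i in range(m)) / denom
```

For example, with N = 10 annotated genes, n = 5 DEGs, M = 4 pathway genes and m = 2 pathway DEGs, the tail sum runs over i = 0, 1 only.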
Prediction of gene clusters involved in biosynthesis of secondary metabolites
Biosynthetic gene clusters were predicted using antiSMASH 3.0 [65]. AntiSMASH offers a broad collection of tools and databases for automated genome mining and comparative genomics across a wide variety of classes of secondary metabolites [66]. In addition, sequence homology searches (BLASTp) were used to identify genes related to terpenoid biosynthesis. Sesquiterpene synthases were identified based on the multiple sequence alignments and phylogenetic analyses developed by the Schmidt-Dannert group [44]. Since more divergent cytochrome P450 oxidases may be involved in secondary metabolite biosynthesis [67], we searched the F. filiformis genome for proteins with a P450 conserved domain using the NCBI CDD tool and BLASTp [40], and also by homology BLAST against the Fungal Cytochrome P450 Database (p450.riceblast.snu.ac.kr/index.php?a=view), to obtain annotated cytochrome P450 genes. CAZymes were identified using the Carbohydrate-Active enZymes (CAZy) database [68]: we performed a DIAMOND search against the pre-annotated CAZyme sequence database and combined the hits with the corresponding gene functional annotations. TransposonPSI was used for transposon prediction; this software uses PSI-BLAST to detect distant homology between genomic sequences and a TE library bundled with the program [69]. Secretory proteins were predicted with the SignalP 4.1 server (http://www.cbs.dtu.dk/services/signalP/) by searching all encoded amino acid sequences of the F. filiformis genome.
The datasets supporting the results of this article are included within the article and its additional files. This Whole Genome Shotgun project has been deposited at DDBJ/ENA/GenBank under accession JACYFH000000000 (https://submit.ncbi.nlm.nih.gov/subs/genome/). The original transcriptomic data have been deposited in the public SRA database under accession no. PRJNA530834 (https://www.ncbi.nlm.nih.gov/sra/PRJNA530834). The ITS sequence of F. filiformis (KP867925) is available at https://www.ncbi.nlm.nih.gov/nuccore/KP867925. Public genome information for F. filiformis (previously known as Asian F. velutipes) is available in the GenBank database under PRJNA191921 (https://www.ncbi.nlm.nih.gov/bioproject/PRJNA191921), PRJDB4587 (https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJDB4587) and PRJNA191865 (https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA191865).
AA:
Auxiliary Activities
BGCs:
Biosynthetic gene clusters
CAZymes:
Carbohydrate-Active Enzymes
CBM:
Carbohydrate-binding module
CE:
Carbohydrate Esterases
CDD:
Conserved Domain Database
COG:
Clusters Orthologous Groups
CYP:
Cytochrome P450 proteins
DK:
Dikaryotic mycelium
FB:
Fruiting body
FBP:
Fructose-1,6-bisphosphatase
FDA:
Fructose-bisphosphate aldolase
FPKM:
Fragments Per Kilobase of transcript per Million fragments mapped
FVPs:
F. velutipes polysaccharides
Mannose-1-phosphate guanylyltransferase
GH:
Glycoside-Hydrolases
GK:
Glucokinase
GPI:
Glucose-6-phosphate isomerase
GT:
Glycosyl Transferases
HSP:
Heat shock protein
Hsf:
Heat-shock factor
MAPK:
Mitogen-activated protein kinase
MK:
Monokaryotic mycelium
MPI:
Mannose-6-phosphate isomerase
PDA:
Potato dextrose agar
PGI:
Phosphoglucose isomerase
PGM:
Phosphoglucomutase
PKS:
Polyketide synthase
PSs:
Polysaccharides
UGP:
UDP-glucose pyrophosphorylase
PL:
Polysaccharide Lyases
STSs:
Sesquiterpene synthases
ITS:
Internal transcribed spacer
Ge ZW, Liu XB, Zhao K, Yang ZL. Species diversity of Flammulina in China: new varieties and a new record. Mycosystema. 2015;34:589–603.
Wang PM, Liu XB, Dai YC, Horak E, Steffen K, Yang ZL. Phylogeny and species delimitation of Flammulina: taxonomic status of winter mushroom in East Asia and a new European species identified using an integrated approach. Mycol Prog. 2018;17(9):1013–30.
Li X, Li Y. Quality comparison and analysis on white Flammulina velutipes grown with bottle lines in China. Edible Fungi China. 2014;33:20–4 (In Chinese).
Lin L, Cui F, Zhang J, Gao X, Zhou M, Xu N, Zhao H, Liu M, Zhang C, Jia L. Antioxidative and renoprotective effects of residue polysaccharides from Flammulina velutipes. Carbohydr Polym. 2016;46:388–95.
Su A, Yang W, Zhao L, Pei F, Yuan B, Zhong L, Ma G, Hu Q. Flammulina velutipes polysaccharides improve scopolamine-induced learning and memory impairment in mice by modulating gut microbiota composition. Food Funct. 2018;9(3):1424–32.
Zhang T, Ye J, Xue C, Wang Y, Liao W, Mao L, Yuan M, Lian S. Structural characteristics and bioactive properties of a novel polysaccharide from Flammulina velutipes. Carbohydr Polym. 2018;197:147–56.
Hu Q, Yu J, Yang W, Kimatu BM, Fang Y, Ma N, Pei F. Identification of flavonoids from Flammulina velutipes and its neuroprotective effect on pheochromocytoma-12 cells. Food Chem. 2016;204:274–82.
Wang Y, Bao L, Yang X, Li L, Li S, Gao H, Yao XS, Wen H, Liu HW. Bioactive sesquiterpenoids from the solid culture of the edible mushroom Flammulina velutipes growing on cooked rice. Food Chem. 2012;132(3):1346–53.
Tao Q, Ma K, Yang Y, Wang K, Chen B, Huang Y, Han J, Bao L, Liu XB, Yang Z, Yin WB, Liu H. Bioactive sesquiterpenes from the edible mushroom Flammulina velutipes and their biosynthetic pathway confirmed by genome analysis and chemical evidence. J Org Chem. 2016;81(20):9867–77.
Li HP, Yang WJ, Qu SX, Pei F, Luo X, Mariga AM, Ma L. Variation of volatile terpenes in the edible fungi mycelia Flammulina velutipes and communications in fungus-mite interactions. Food Res Int. 2018;103:150–5.
Rahman MA, Abdullah N, Aminudin N. Antioxidative effects and inhibition of human low density lipoprotein oxidation in vitro of polyphenolic compounds in Flammulina velutipes (Golden needle mushroom). Oxidative Med Cell Longev. 2015;403023.
Tang C, Hoo PC, Tan LT, Pusparajah P, Khan TM, Lee LH, Goh BH, Chan KG. Golden needle mushroom: a culinary medicine with evidenced-based biological activities and health promoting properties. Front Pharmacol. 2016;7:474.
Kasprzycka A, Lalak-Kańczugowska J, Tys J. Flammulina velutipes treatment of non-sterile tall wheat grass for enhancing biodegradability and methane production. Bioresour Technol. 2018;263:660–4.
Avin FA, Bhassu S, Shin TY, Sabaratnam V. Molecular classification and phylogenetic relationships of selected edible Basidiomycetes species. Mol Biol Rep. 2012;39(7):7355–64.
Liu XB, Feng B, Li J, Yan C, Yang ZL. Genetic diversity and breeding history of winter mushroom (Flammulina velutipes) in China uncovered by genomic SSR markers. Gene. 2016;591:227–35.
Wang Q, Zhang J, Li C, Wang B, Nong W, Bian Y, Xiao Y. Phenotypic and genetic diversity of the culinary-medicinal winter mushroom Flammulina velutipes (Agaricomycetes) in China. Int J Med Mushrooms. 2018;20(6):517–36.
Wang Y, Bao L, Liu D, Yang X, Li S, Gao H, Yao X, Wen H, Liu H. Two new sesquiterpenes and six norsesquiterpenes from the solid culture of the edible mushroom Flammulina velutipes. Tetrahedron. 2012;68(14):3012–8.
Reis FS, Barros L, Martins A, Ferreira IC. Chemical composition and nutritional value of the most widely appreciated cultivated mushrooms: an inter-species comparative study. Food Chem Toxicol. 2012;50(2):191–7.
Tsai SY, Huang EW, Lin CP. Compositional differences of the winter culinary-medicinal mushroom, Flammulina velutipes (Agaricomycetes), under three types of light conditions. Int J Med Mushrooms. 2017;19(3):267–76.
Miyazawa N, Yoshimoto H, Kurihara S, Hamaya T, Eguchi F. Improvement of diet-induced obesity by ingestion of mushroom chitosan prepared from Flammulina velutipes. J Oleo Sci. 2018;67(2):245–54.
Wu M, Luo X, Xu X, Wei W, Yu M, Jiang N, Ye L, Yang Z, Fei X. Antioxidant and immunomodulatory activities of a polysaccharide from Flammulina velutipes. J Tradit Chin Med. 2014;34(6):733–40 (In Chinese).
Huang Q, Jia Y, Wan Y, Li H, Jiang R. Market survey and risk assessment for trace metals in edible fungi and the substrate role in accumulation of heavy metals. J Food Sci. 2015;80(7):H1612–8.
Rugolo M, Levin L, Lechner BE. Flammulina velutipes: An option for "alperujo" use. Rev Iberoam Micol. 2016;33(4):242–7.
Xie C, Gong W, Yan L, Zhu Z, Hu Z, Peng Y. Biodegradation of ramie stalk by Flammulina velutipes: mushroom production and substrate utilization. AMB Express. 2017;7:171.
Kalač P. A review of chemical composition and nutritional value of wild-growing and cultivated mushrooms. J Sci Food Agric. 2013;93(2):209–18.
Wang YQ, Bao L, Yang XL, Guo H, Dai HQ, Guo H, Yao XS, Zhang LX, Liu HW. Four new cuparene-type sesquiterpenes from Flammulina velutipes. Helvetica Chimica Acta. 2012;95:261–7.
Tung CH, Lin CC, Tung CC, Chen SF, Sheu F, Lu TJ. Combination of on-line desalting and HPLC-UV-ESI-MS for simultaneous detection and identification of FIP-fve and flammutoxin in Flammulina velutipes. J Food Drug Anal. 2018;26(3):1045–53.
Park YJ, Baek JH, Lee S, Kim C, Rhee H, Kim H, Seo JS, Park HR, Yoon DE, Nam JY, et al. Whole genome and global gene expression analyses of the model mushroom Flammulina velutipes reveal a high capacity for lignocellulose degradation. PLoS One. 2014;9(4):e93560.
Xie C, Yan L, Gong W, Zhu Z, Tan S, Chen D, Hu Z, Peng Y. Effects of different substrates on lignocellulosic enzyme expression, enzyme activity, substrate utilization and biological efficiency of Pleurotus eryngii. Cell Physiol Biochem. 2016;39(4):1479–94.
Song HY, Kim DH, Kim JM. Comparative transcriptome analysis of dikaryotic mycelia and mature fruiting bodies in the edible mushroom Lentinula edodes. Sci Rep. 2018;8(1):8983.
Wu TH, Ye ZW, Guo LQ, Yang XQ, Lin JF. De novo transcriptome sequencing of Flammulina velutipes uncover candidate genes associated with cold-induced fruiting. J Basic Microbiol. 2018;58:698–703.
Liu JY, Meng JL, Chang MC, Feng CP, Yuan LG. iTRAQ-based quantitative proteome revealed metabolic changes of Flammulina velutipes mycelia in response to cold stress. J Proteome. 2017;156:75–84.
Kurata A, Fukuta Y, Mori M, Kishimoto N, Shirasaka N. Draft genome sequence of the basidiomycetous fungus Flammulina velutipes TR19. Genome Announc. 2016;4(3):e00505–16.
Liu JY, Chang MC, Meng JL, Feng CP, Wang Y. A comparative proteome approach reveals metabolic changes associated with Flammulina velutipes mycelia in response to cold and light stress. J Agric Food Chem. 2018;66(14):3716–25.
Doroghazi JR, Albright J, Goering AW, Ju KS, Haines RR, Tchalukov KA, Labeda DP, Kelleher NL, Metcalf WW. A road map for natural product discovery based on large-scale genomics and metabolomics. Nat Chem Biol. 2014;10(11):963–8.
Min B, Kim S, Oh YL, Kong WS, Park H, Cho H, Jang KY, Kim JG, Choi IG. Genomic discovery of the hypsin gene and biosynthetic pathways for terpenoids in Hypsizygus marmoreus. BMC Genomics. 2018;19(1):789.
Baral B, Akhgari A, Metsä-Ketelä M. Activation of microbial secondary metabolic pathways: avenues and challenges. Synth Syst Biotechnol. 2018;3(3):163–78.
Lind AL, Wisecaver JH, Lameiras C, Wiemann P, Palmer JM, Keller NP, Rodrigues F, Goldman GH, Rokas A. Drivers of genetic diversity in secondary metabolic gene clusters within a fungal species. PLoS Biol. 2017;15(11):e2003583.
Bailey AM, Alberti F, Kilaru S, Collins CM, de Mattos-Shipley K, Hartley AJ, Hayes P, Griffin A, Lazarus CM, Cox RJ, et al. Identification and manipulation of the pleuromutilin gene cluster from Clitopilus passeckerianus for increased rapid antibiotic production. Sci Rep. 2016;6:25202.
Chen J, Zeng X, Yang YL, Xing YM, Zhang Q, Li JM, Ma K, Liu HW, Guo SX. Genomic and transcriptomic analyses reveal differential regulation of diverse terpenoid and polyketides secondary metabolites in Hericium erinaceus. Sci Rep. 2017;7:10151.
Yang YL, Zhang S, Ma K, Xu Y, Tao Q, Chen Y, Chen J, Guo S, Ren J, Wang W, et al. Discovery and characterization of a new family of diterpene cyclases in bacteria and fungi. Angew Chem Int Ed. 2017;56:4749–52.
Chen S, Xu J, Liu C, Zhu Y, Nelson DR, Zhou S, Li C, Wang L, Guo X, Sun Y, et al. Genome sequence of the model medicinal mushroom Ganoderma lucidum. Nat Commun. 2012;3:913.
Ma Z, Ye C, Deng W, Xu M, Wang Q, Liu G, Wang F, Liu L, Xu Z, Shi G, et al. Reconstruction and analysis of a genome-scale metabolic model of Ganoderma lucidum for improved extracellular polysaccharide production. Front Microbiol. 2018;9:3076.
Wawrzyn GT, Quin MB, Choudhary S, López-Gallego F, Schmidt-Dannert C. Draft genome of Omphalotus olearius provides a predictive framework for sesquiterpenoid natural product biosynthesis in Basidiomycota. Chem Biol. 2012;19:772–83.
Lorenz N, Wilson EV, Machado C, Schardl CL, Tudzynski P. Comparison of ergot alkaloid biosynthesis gene clusters in Claviceps species indicates loss of late pathway steps in evolution of C. fusiformis. Appl Environ Microbiol. 2007;73:7185–91.
Kanehisa M, Tanabe M, Sato Y, Morishima K. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2017;45:D353–61.
Ruthes AC, Smiderle FR, Iacomini M. Mushroom heteropolysaccharides: a review on their sources, structure and biological effects. Carbohydr Polym. 2016;136:358–75.
Shen B. Polyketide biosynthesis beyond the type I, II and III polyketide synthase paradigms. Curr Opin Chem Biol. 2003;7:285–95.
Shui W, Xiong Y, Xiao W, Qi X, Zhang Y, Lin Y, Guo Y, Zhang Z, Wang Q, Ma Y. Understanding the mechanism of thermotolerance distinct from heat shock response through proteomic analysis of industrial strains of Saccharomyces cerevisiae. Mol Cell Proteomics. 2015;14:A779–80.
Livingstone PG, Morphew RM, Whitworth DE. Genome sequencing and pan-Genome analysis of 23 Corallococcus spp. strains reveal unexpected diversity, with particular plasticity of predatory gene sets. Front Microbiol. 2018;199:3187.
Meng DM, Wang HD, Zhang YX, Xi ZA, Yang R, Sheng JP, Zhang XH, Ding Y, Wang JP, Fan ZC. Ornithine decarboxylase is involved in methyljasmonate-regulated postharvest quality retention in button mushrooms (Agaricus bisporus). J Sci Food Agric. 2019;99:790–6.
Bakti F, Király A, Orosz E, Miskei M, Emri T, Leiter É, Pócsi I. Study on the glutathione metabolism of the filamentous fungus Aspergillus nidulans. Acta Microbiol Immunol Hung. 2017;64(3):255–72.
Tiwari S, Thakur R, Shankar J. Role of heat-shock proteins in cellular function and in the biology of fungi. Biotechnol Res Int. 2015;132635.
Liu J, Chang M, Meng J, Feng C, Zhao H, Zhang M. Comparative proteome reveals metabolic changes during the fruiting process in Flammulina velutipes. J Agric Food Chem. 2017;65(24):5091–100.
Luo R, Guo L, Lin J, Han F, Li Q, Kang L. A Novel high-temperature-tolerant Strain of Flammulina velutipes by mutagenesis. Edible Fungi of China. 2016;35(4):18–23 (In Chinese).
Liu F. Preliminary study of Flammulina velutipes genome and transcriptome. Fuzhou: PhD Dissertation, Fujian Agriculture and Forestry University; 2014. p. 1–117. (In Chinese).
Tettelin H, Masignani V, Cieslewicz MJ, Donati C, Medini D, Ward NL, et al. Genome analysis of multiple pathogenic isolates of Streptococcus agalactiae: Implications for the microbial "pan-genome". Proc Natl Acad Sci USA. 2005;102(39):13950–5.
Vernikos G, Medini D, Riley DR, Tettelin H. Ten years of pan-genome analyses. Curr Opin Microbiol. 2015;23:148–54.
Wirojsirasak W, Kalapanulak S, Saithong T. Pan- and core- gene association networks: Integrative approaches to understanding biological regulation. PLoS One. 2019;14(1):e0210481.
Fu L, Niu B, Zhu Z, Wu S, Li W. CD-HIT: accelerated for clustering the next-generation sequencing data. Bioinformatics. 2012;28:3150–2.
Kim D, Pertea G, Trapnell C, Pimentel H, Kelley R, Salzberg SL. TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 2013;14(4):R36.
Anders S, Pyl PT, Huber W. HTSeq—a Python framework to work with high-throughput sequencing data. Bioinformatics. 2015;31(2):166–9.
Anders S, Huber W. Differential expression of RNA-Seq data at the gene level-the DESeq package, vol. 10. Heidelberg: European Molecular Biology Laboratory (EMBL); 2012. p. f1000research.
Sturn A, Quackenbush J, Trajanoski Z. Genesis: cluster analysis of microarray data. Bioinformatics. 2002;18:207–8.
Weber T, Blin K, Duddela S, Krug D, Kim HU, Bruccoleri R, et al. antiSMASH 3.0-a comprehensive resource for the genome mining of biosynthetic gene clusters. Nucleic Acids Res. 2015;43:W237–43.
Blin K, Kim HU, Medema MH, Webe T. Recent development of antiSMASH and other computational approaches to mine secondary metabolite biosynthetic gene clusters. Brief Bioinform. 2019;20(4):1103–13.
Shin J, Kim JE, Lee YW, Son H. Fungal Cytochrome P450s and the P450 Complement (CYPome) of Fusarium graminearum. Toxins (Basel). 2018;10(3):E112.
Cantarel BL, Coutinho PM, Rancurel C, Bernard T, Lombard V, Henrissat B. The carbohydrate-active EnZymes database (CAZy): an expert resource for Glycogenomics. Nucleic Acids Res. 2009;37(Database issue):D233–8.
Tørresen OK, Star B, Jentoft S, Reinar WB, Miller GH Jr, Walenz BP, Knight J, Ekholm JM, Peluso P, Edvardsen RB, Tooming-Klunderud A, Skage M, Lien S, Jakobsen KS, Nederbragt AJ. An improved genome assembly uncovers prolific tandem repeats in Atlantic cod. BMC Genomics. 2017;18:95.
The authors are grateful to Prof. Francis Martin for critical discussion and suggestions and to Dr. Lin-Chun Shi and Dr. Li-Si Zhou for helpful support during the data submission to the GenBank database.
We thank the National Key R & D Program of China (2017YFC1701900), the National Basic Research Program of China (2014CB138304), the National Natural Science Foundation of China (81573527; 81973423) and the Peking Union Medical College Discipline Construction Project (201920100901) for financial support. The funders had no role in the design or performance of the study or in the interpretation of the results.
Key Laboratory of Bioactive Substances and Resource Utilization of Chinese Herbal Medicine, Ministry of Education, Institute of Medicinal Plant Development, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, P. R. China
Juan Chen, Jia-Mei Li, Yan-Jing Tang, Bing Li, Xu Zeng, Yang Li & Shun-Xing Guo
State Key Laboratory of Mycology, Institute of Microbiology, Chinese Academy of Sciences, Beijing, P. R. China
Ke Ma & Hong-Wei Liu
Key Laboratory for Plant Diversity and Biogeography of East Asia, Kunming Institute of Botany, Chinese Academy of Sciences, Kunming, P. R. China
Xiao-Bin Liu & Zhu-Liang Yang
Mycological Research Center, College of Life Sciences, Fujian Agriculture and Forestry University, Fuzhou, P. R. China
Wei-Nan Xu & Bao-Gui Xie
Juan Chen
Jia-Mei Li
Yan-Jing Tang
Ke Ma
Bing Li
Xu Zeng
Xiao-Bin Liu
Zhu-Liang Yang
Wei-Nan Xu
Bao-Gui Xie
Hong-Wei Liu
Shun-Xing Guo
JC, HWL and SXG designed the experiments. JML, YJT, BL, XZ, YL and WNX performed the experiment. XBL and ZLY collected the sample, analyzed and interpreted the data. WNX and BGX sequenced and provided the reference genome of F. filiformis L11. JC, JML, KM and XZ analyzed the data, JC contributed to draft writing and literature search and XBL, ZLY, BGX, HWL and SXG reviewed and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Juan Chen or Shun-Xing Guo.
Additional file 1: Fig.S1.
KEGG functional annotation of the predicted genes of F. filiformis. Apart from genetic information processing, the largest numbers of genes were related to metabolic processes and carbohydrate metabolism.
Relationships among the five transcriptome samples of F. filiformis. Pairwise correlations of normalized FPKM values between RNA samples. The Pearson correlation coefficient ranges from no correlation (white) to perfect correlation (dark blue). Each sample has three biological replicates. Abbreviations: MK, monokaryotic mycelium; DK, dikaryotic mycelium; PD, primordium; FB, fruiting body.
Transcripts with tissue-specific expression in four different tissues of F. filiformis. (XLSX 52 kb)
Additional file 4: Fig. S3.
KEGG map (map 00010) of the glycolysis/gluconeogenesis pathway [46] identified in F. filiformis and the expression levels of the putative genes in different tissues of F. filiformis. Red stars indicate hits of differentially expressed genes in this map. The expression levels of the mapped genes (EC 5.4.2.2, EC 5.3.1.9, EC 5.3.1.1, EC 4.2.1.11, EC 1.2.4.1, EC 1.1.1.1, EC 1.1.1.2) in different tissues are displayed in the map. Abbreviations: F1, dikaryotic mycelium of cultivar strain CGMCC 5.642; F2, dikaryotic mycelium of wild strain Liu355; F3, fruiting body of wild strain Liu355; F4, primordium of wild strain Liu355; F5, monokaryotic mycelium of wild strain Liu355. Red and green indicate up- and downregulation, respectively; black represents no significant expression change. Detailed information about the genes can be found in Additional file 5. (Note: appropriate copyright permission was obtained to use the map from KEGG.)
The expression of genes involved in glycolysis and gluconeogenesis pathway in F. filiformis genome.
Biosynthetic gene clusters for terpenes, PKS, NRPS and siderophores predicted in F. filiformis using the antiSMASH tool.
Comparative analysis of the distribution of putative genes and gene clusters (terpene and type I PKS) related to secondary metabolite biosynthesis in different strains of F. filiformis. The BLAST search was performed using blastall v2.2.26 with the threshold parameters identity > 40% and coverage > 40%. "1" indicates that the gene is present in the other strain; "0" means the gene is probably absent from the other strain.
Hierarchical clustering analysis of 119 putative genes related to terpenoid biosynthesis in the F. filiformis genome. Expression ratios were plotted in a heatmap on a log2 scale. Red and green indicate up- and downregulation, black represents no significant expression change, and grey represents missing data. Abbreviations: MK, monokaryotic mycelium; DK, dikaryotic mycelium; PD, primordium; FB, fruiting body. Detailed information about the gene annotations can be found in Additional file 6.
Neighbor-joining phylogram of putative sesquiterpene synthases (STSs) of F. filiformis constructed based on homologous protein sequences. Numbers along branches represent bootstrap values above 50%. Genes encoding sesquiterpene synthases marked with red dots were identified in this study. Detailed information on the sequences used in the phylogram can be found in references [40, 44]. "Cop" denotes the fungus Coprinopsis cinereus, "Omp" the fungus Omphalotus olearius and "Sh" the fungus Stereum hirsutum. (JPG 439 kb)
Chen, J., Li, JM., Tang, YJ. et al. Genome-wide analysis and prediction of genes involved in the biosynthesis of polysaccharides and bioactive secondary metabolites in high-temperature-tolerant wild Flammulina filiformis. BMC Genomics 21, 719 (2020). https://doi.org/10.1186/s12864-020-07108-6
Edible mushroom
Sesquiterpene
High-temperature-tolerance | CommonCrawl |
\begin{document}
\date{}
\title{Variational problem on a metric-affine almost product manifold}
\begin{abstract} We study a variational problem on a smooth manifold with a decomposition of the tangent bundle into $k>2$ subbundles (distributions): we consider the integrated sum of their mixed scalar curvatures as a functional of an adapted pseudo-Riemannian metric (keeping the pairwise orthogonality of the distributions) and a contorsion tensor, which defines a linear connection. This functional generalizes the class of Einstein metrics in the following sense: if all of the distributions are one-dimensional, then it coincides with the geometrical part of the Einstein-Hilbert action restricted to adapted metrics. We prove that the metrics of metric-contorsion pairs critical for our functional make all of the distributions totally umbilical. We obtain examples of, and obstructions to the existence of, such critical pairs in some special cases: twisted products with statistical connections; semi-symmetric connections; and 3-Sasaki manifolds with metric-compatible connections.
\vskip1.5mm\noindent \textbf{Keywords} Distribution; mixed scalar curvature; variation; statistical connection; semi-symmetric connection; 3-Sasaki manifold
\vskip1.5mm\noindent \textbf{Mathematics Subject Classifications (2010)} 53C15; 57R25 \end{abstract}
\tableofcontents
\section{Introduction}
\textbf{State of the art}.
This paper links together two topics of differential geometry: variational problems for metric and linear connection and almost product manifolds with $k\ge2$ factors.
Many canonical geometrical objects
are critical points of nonlinear variational problems, see~\cite{catino}. A~particularly famous one is the {integrated scalar curvature} (the geometrical part of the Einstein-Hilbert action) with variable Riemannian metric and linear connection,
whose Euler-Lagrange equations are the {Einstein equation} and the spin-connection equation,
e.g.,~\cite{ap}.
Both equations form the basis of the Einstein-Cartan theory of gravity within the framework of metric-affine geometry,
that considers a pseudo-Riemannian metric $g$ and a linear connection $\bar\nabla$ (instead of the Levi-Civita connection $\nabla$) on a manifold as independent variables, e.g.,~\cite{bf,mikes}. The~following classes of metric-affine manifolds are popular:
$\bullet$~{Statistical manifolds}, where the (0,3)-tensor $\bar\nabla g$ is symmetric and $\bar\nabla$ is torsion-free,
are important for probability and statistics as well as information geometry, e.g., \cite{Amari2016}.
$\bullet$~{Riemann-Cartan manifolds}, where $\bar\nabla$ is {metric-compatible}, i.e., $\bar\nabla g=0$,
are important for theoretical physics, e.g., \cite{ap}.
A semi-symmetric connection, introduced by K.\,Yano \cite{Yano}, is a special case of a metric-compatible connection, parameterized by a vector field.
In \cite{r-EH-k}, the first author studied the variational problem with the mixed scalar curvature \begin{equation}\label{Eq-Smix-g0}
J_\mD : g\mapsto\int_{M} {\rm S}_{\,\mD_1,\ldots,\mD_k}\,{\rm d}\operatorname{vol}_g \end{equation} on a manifold with $k\ge2$ distributions (see \cite{bdrs,rz-2,rz-connections,rz-3} for $k=2$).
This analog of scalar curvature is the averaged sectional curvature of all planes spanned by two unit vectors from different distributions, and is one of the simplest curvature invariants of an almost product manifold,
see \cite{r-affine,RS-1,Rov-Wa-2021,Walczak}.
Recall that a connected
manifold $M$ with a decomposition of the tangent~bundle into $k\ge2$ subbundles (distributions), \begin{equation}\label{Eq-mD-k-TM}
TM=\mD_1 + \ldots + \mD_k \end{equation}
is called an \textit{almost product manifold} \cite{RS-1}
(see \cite{bf} for $k=2$).
It appears in such topics as
multiply twisted or warped product manifolds, see \cite{Dimitru,MRS-99};
the theory of nets composed of foliations (that is, integrable distributions),~see~\cite{Krynski,RS-99};
para-$f$-manifolds, see \cite{tar};
lightlike manifolds, i.e., with degenerate metric of constant rank and index, see~\cite{dug};
hypersurfaces in space forms with $k$ distinct principal curvatures, see~\cite{cecil-ryan}.
An almost product manifold
admits a natural class of metrics that will be of interest in this paper.
A pseudo-Riemannian metric $g$ on $(M,\mD_1,\ldots,\mD_k)$ is \textit{adapted} (or compatible, see \cite{RN-21}) if all distributions are non-degenerate and pairwise orthogonal.
Any adapted metric decomposes uniquely as $g=g_1\oplus\ldots\oplus g_k$, where $g_\mu$ is a bundle metric on $\mD_\mu$; then we write $T M = \bigoplus_{\mu} \mD_\mu$. A~special family of adapted metrics consists of the metrics multiconformally equivalent to $g$,
$\tilde g=u_1^2\,g_1\oplus\ldots\oplus u_k^2\,g_k$,
where $u_\mu:M\to\mathbb{R}$ are smooth functions without zeros, see~\cite{RN-21}.
\textbf{Objectives and results}. We
study critical points of the {integrated mixed scalar curvature} on $(M,\mD_1,\ldots,\mD_k)$, depending on an adapted metric $g$ and a {contorsion tensor} $\I=\bar\nabla-\nabla$, \begin{equation}\label{Eq-Smix-g}
\bar J_\mD : (g,\I)\mapsto\int_{M} \overline{\rm S}_{\,\mD_1,\ldots,\mD_k}\,{\rm d}\operatorname{vol}_g \,. \end{equation} The mixed scalar curvature $\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}$ is, up to a factor of 2 (see \eqref{E-Dk-Smix}), the sum of the mixed scalar curvatures $\overline{\rm S}_{\,\mD_\mu,\mD^\bot_\mu}$ examined in \cite{rz-3}; if all distributions are one-dimensional, the mixed scalar curvature reduces to half the scalar curvature.
If $M$ is a non-compact manifold, we integrate in \eqref{Eq-Smix-g} over an arbitrarily large, relatively compact domain $\Omega\subset M$, which contains supports of variations of $g$ and $\I$.
We find the Euler-Lagrange equation for \eqref{Eq-Smix-g} with fixed $\I$ for adapted variations of metric preserving the volume of the manifold.
It can be presented in the form of the Einstein~equation:
\begin{equation}\label{E-geom}
\overline{\mathcal{Ric}}_{\,\mD} - (1/2)\,\overline{\cal S}_{\mD}\cdot g + \lambda\,g = 0, \end{equation} where the Ricci tensor of $\bar\nabla$ and the scalar curvature are replaced by the Ricci-type tensor
$\overline{\mathcal{Ric}}_{\,\mD} = \bigoplus\nolimits_{\mu=1}^k \overline{\mathcal{Ric}}_{\,\mD\,|\,\mD_\mu \times \mD_\mu}$ (introduced in \cite{bdrs} for $k=2$, $\dim M=4$, $\dim \mD_1=1$
and $\bar\nabla=\nabla$) and its~trace $\overline{\cal S}_{\mD}$. Although $\overline{\mathcal{Ric}}_{\,\mD}$ has a complicated form even for $k=2$, see \cite{rz-3},
we write it explicitly in special cases: for statistical and semi-symmetric connections, and for twisted~products; if all distributions are one-dimensional, it reduces to the Ricci tensor of~${\bar \nabla}$.
We find the Euler-Lagrange equation for \eqref{Eq-Smix-g} with a fixed adapted metric for variations of $\I$,
which can be decomposed into independent equations; some of these do not contain the contorsion tensor, but significantly restrict the metrics admitting critical contorsions. Due to such restrictions, a natural setting to consider is that of twisted and warped products of manifolds, on which we characterize all critical pairs with statistical connections in terms of the contorsion tensor, the mean curvatures of the distributions and the Ricci tensor of the metric. We also prove the absence of non-trivial critical points of the action on, e.g.,
harmonic distributions with semi-symmetric connections,
or
complete 3-Sasaki manifolds with particular metric connections.
On the other hand, considering only connections from certain families allows us to reduce the variational problem to a purely pseudo-Riemannian one. In particular, we show that some critical pairs $(g,\I)$ of the action \eqref{Eq-Smix-g} restricted to adapted metrics and statistical connections can be obtained from critical adapted metrics of this action
with the fixed Levi-Civita connection.
Similarly, considering critical points among semi-symmetric connections only slightly modifies the tensor $\overline{\mathcal{Ric}}_{\,\mD}$.
The action \eqref{Eq-Smix-g}
may find applications in the theory of nets and theoretical physics, because it
is strongly related to the Einstein-Hilbert action. Indeed,
\eqref{Eq-Smix-g} and the relation $\overline{\rm S}=2\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}+\sum\nolimits_{\,\mu=1}^{\,k} \overline{\rm S}({\mD_\mu})$ of $\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}$ with the scalar curvature $\overline{\rm S}$
(the trace of the Ricci tensor for~$\bar\nabla$) and the scalar curvature $\overline{\rm S}({\mD_\mu})$ of~$\mD_\mu$,
allow us to generalize
the class of Einstein metrics: in the case of one-dimensional distributions, any critical pair $(g, \I)$ for the Einstein-Hilbert action is also critical for~\eqref{Eq-Smix-g}. However, since we consider variations of metric adapted to an almost product structure, we also obtain critical pairs $(g, \I)$ with non-Einstein metrics. In particular, on manifolds $(M,g)$ of constant scalar curvature and certain dimensions, we find decompositions \eqref{Eq-mD-k-TM} that make $g$ critical for the action \eqref{Eq-Smix-g}.
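The one-dimensional case can be verified in one line (a direct check from the trace relation above, not spelled out in the text): each $\overline{\rm S}(\mD_\mu)$ is the scalar curvature of a one-dimensional distribution and hence vanishes, so

```latex
% If every D_mu is one-dimensional, then S(D_mu) = 0 for each mu,
% and the trace relation reduces to:
\overline{\rm S}
  = 2\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}
    + \sum\nolimits_{\,\mu=1}^{\,k}\overline{\rm S}(\mD_\mu)
  = 2\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}\,.
```

Thus, on adapted metrics the two functionals differ only by the constant factor 2, and their critical points coincide.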
Finally, \eqref{Eq-Smix-g} can be combined with the Einstein-Hilbert action in vacuum as
$\bar J_{\mD,\epsilon}: (g,\I) \mapsto \int_{M} (\overline{\rm S}+\epsilon\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} )\,{\rm d}\operatorname{vol}_g\
(\epsilon\in\mathbb{R})$,
and considered as a perturbation of the Einstein-Cartan theory.
\textbf{Structure of the article}. The article consists of an Introduction and six sections. Section~\ref{sec:prel} contains necessary results from \cite{rz-2,rz-connections}, among them the notion of the mixed scalar curvature is~central. In Section~\ref{sec:adapted-metric}, we study adapted variations of metric (with fixed contorsion tensor) and find the Euler-Lagrange equation for the action~\eqref{Eq-Smix-g}. In~Section~\ref{sec:contorsion}, we study variations of $\I$ and, using results for $k=2$ from \cite{rz-3}, find the Euler-Lagrange equation for \eqref{Eq-Smix-g} with fixed adapted metric. In subsequent sections, we examine solutions of the Euler-Lagrange equations in some special cases.
In Section~\ref{sectionEinstein}, for one-dimensional distributions we find critical pairs $(g, \I)$ with non-Einstein metrics of constant scalar curvature for $k=3$ and for $k>3$ using Hadamard matrices.
In~Section~\ref{sec:cont-stat}, we apply the results of Sections~\ref{sec:adapted-metric} and \ref{sec:contorsion} to statistical connections; in particular, in Section~\ref{sec:contorsion-statistical} we consider twisted products, and in Section~\ref{sec:metric-stat}, we study the action \eqref{Eq-Smix-g} for adapted variations of metric and variations of contorsion tensor corresponding to statistical connections.
In Section~\ref{sec-metricconnections}, we apply the results of Sections~\ref{sec:adapted-metric} and \ref{sec:contorsion} to metric connections; in particular, in Section~\ref{sec:contorsion_semi_symmetric}, we consider semi-symmetric connections, and in Section~\ref{sec:5-Sasaki}, we examine the Euler-Lagrange equations for \eqref{Eq-Smix-g} on a 3-Sasaki manifold.
\section{Preliminaries} \label{sec:prel}
Here, we recall the properties of the mixed scalar curvature of a metric-affine almost product manifold $(M,g,\bar\nabla;\mD_1,\ldots,\mD_k)$, see \cite{RS-1}. We will use a bar in the notation of objects related to~$\bar\nabla$. Recall that $\I=\bar\nabla-\nabla$, where $\nabla$ is the Levi-Civita connection of $g$, is the \emph{contorsion tensor}.
A {pseudo-Riemannian metric} $g=\langle\cdot\,,\cdot\rangle$ of index $q$ on $M$ is an element $g\in{\rm Sym}^2(M)$ of the space of symmetric $(0,2)$-tensors such that each $g_x\ (x\in M)$ is a {non-degenerate bilinear form of index} $q$ on the tangent space $T_xM$. For~$q=0$ (i.e., $g_x$ is positive definite) $g$ is a Riemannian metric and for $q=1$ it is a Lorentz metric.
A distribution $\mD_\mu$ on $(M,g)$ is \textit{non-degenerate}, if $g_x$ is non-degenerate on $\mD_\mu(x)\subset T_x M$ for all $x\in M$;
in this case, the orthogonal complement $\mD_\mu^\bot$ is also non-degenerate, e.g., \cite{bf}.
Given an adapted metric $g$ on $(M;\mD_1,\ldots\mD_k)$, there is a~local orthonormal frame $\{E_{\mu,a}\}$ on $M$, where $1\le a \le n_\mu=\dim \mD_\mu$, such that
$\{E_{\mu,1},\ldots, E_{\mu,n_\mu}\}\subset\mD_\mu$ for $1\le \mu\le k$. All quantities defined below using such a frame do not depend on the choice of the~frame. Similarly, $\{ {\cal E}_{\mu, i}\}$, where $i = 1, \ldots, n^\bot_\mu$, is an orthonormal frame of $\mD_\mu^\perp$ with $n^\bot_\mu=\dim\mD_\mu^\perp$. Thus, the ranges of indices, e.g., $a$ or $i$, are determined by the index of the distribution, $\mD_\mu$ or $\mD^\bot_\mu$, respectively. Unless explicitly stated otherwise, sums in all formulas will be taken over repeated indices, and always over full ranges of indices.
For the curvature tensor $\bar R_{X,Y}=[\bar\nabla_Y,\bar\nabla_X]+\bar\nabla_{[X,Y]}$ of $\bar\nabla$, we~get
$\bar R_{X,Y} -R_{X,Y} = (\nabla_Y\,\I)_X -(\nabla_X\,\I)_Y +[\I_Y,\,\I_X]$,
see \cite{r-affine}, where $R_{X,Y}=[\nabla_Y,\nabla_X]+\nabla_{[X,Y]}$ is the curvature tensor of~$\nabla$. The {mixed scalar curvature} of a pair of distributions $(\mD,\mD^\bot)$ on a
manifold $(M,g;\bar\nabla)$ is given~by \[
\bar{\rm S}_{\,\mD,\mD^\bot} = \frac12\sum\nolimits_{\,a,b} \varepsilon_a\,\varepsilon_b(\langle\bar R_{\,{E}_a, {\cal E}_b}{E}_a, {\cal E}_b\rangle+\langle\bar R_{\,{\cal E}_b, {E}_{a}}{\cal E}_b, {E}_{a}\rangle)\,, \] where we use a local orthonormal frame on $M$ such that ${E}_{a}\in\mD$ for $a\le\dim\mD$, ${\cal E}_{b}\in\mD^\perp$ for $b\le\dim\mD^\perp$ and $\varepsilon_a=\langle{E}_{a},{E}_{a}\rangle\in\{-1,1\}$, $\varepsilon_b=\langle {\cal E}_b , {\cal E}_b \rangle\in\{-1,1\}$. If~$\mD$ is spanned by a unit vector field $N$, then $\bar{\rm S}_{\,\mD,\mD^\bot}=\varepsilon_N\overline{\rm Ric}_{N,N}$, where $\overline{\rm Ric}_{N,N}$ is the Ricci curvature of $\bar\nabla$ in the $N$-direction.
This concept can be generalized to $k>2$ distributions.
\begin{Definition}\rm Given $(M;\mD_1,\ldots,\mD_k)$ with an adapted metric $g$ and a linear connection $\bar\nabla$, the following function on $M$ is called the \textit{mixed scalar curvature} with respect to~$\bar\nabla$, see \cite{RS-1}: \begin{equation}\label{E-Smix-k}
\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}=\frac12\sum\nolimits_{\,\nu<\mu}\sum\nolimits_{\,a,b}
\varepsilon_a\,\varepsilon_b\big(\langle\bar R_{{E}_{\nu,a},{E}_{\mu,b}}\,{E}_{\nu,a},\,{E}_{\mu,b}\rangle + \langle\bar R_{{E}_{\mu,b},{E}_{\nu,a}}\,{E}_{\mu,b},\,{E}_{\nu,a}\rangle\big). \end{equation}
If $\I=0$, then the above function is called the \textit{mixed scalar curvature} (with respect to $\nabla$):
\[
{\rm S}_{\,\mD_1,\ldots,\mD_k}=\sum\nolimits_{\,\nu<\mu}\sum\nolimits_{\,a,b}\varepsilon_a\,\varepsilon_b\<R_{{E}_{\nu,a},{E}_{\mu,b}}\,{E}_{\nu,a},\,{E}_{\mu,b}\rangle. \] \end{Definition}
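As a sanity check (a direct computation, not stated in the text), for $k=2$ the double sum in \eqref{E-Smix-k} contains the single pair $(\nu,\mu)=(1,2)$, so with $\mD=\mD_1$ and $\mD^\bot=\mD_2$ the definition recovers the mixed scalar curvature of a pair of distributions introduced above:

```latex
% For k = 2, the only pair nu < mu in \eqref{E-Smix-k} is (1,2):
\overline{\rm S}_{\,\mD_1,\mD_2}
  = \frac12\sum\nolimits_{\,a,b}\varepsilon_a\,\varepsilon_b
    \big(\langle\bar R_{{E}_{1,a},{E}_{2,b}}\,{E}_{1,a},\,{E}_{2,b}\rangle
        +\langle\bar R_{{E}_{2,b},{E}_{1,a}}\,{E}_{2,b},\,{E}_{1,a}\rangle\big)
  = \bar{\rm S}_{\,\mD,\mD^\bot}\,.
```

This also agrees with \eqref{E-Dk-Smix}, where for $k=2$ both summands on the right-hand side equal $\bar{\rm S}_{\,\mD,\mD^\bot}$.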
The~symmetric second fundamental form $h_\mu:\mD_\mu\times \mD_\mu\to \mD_\mu^\bot$ and the skew-symmetric integrability tensor $T_\mu:\mD_\mu\times \mD_\mu\to \mD_\mu^\bot$ (of the distribution $\mD_\mu$) are defined by \[
h_\mu(X,Y) = \frac12\,P_{\,\mu}^\bot(\nabla_XY+\nabla_YX),\quad
T_\mu(X,Y) = \frac12\,P_{\,\mu}^\bot(\nabla_XY-\nabla_YX) = \frac12\,P_{\,\mu}^\bot\,[X,Y], \] where $P_\mu:TM\to\mD_\mu$ and $P_{\,\mu}^\bot:TM\to \mD_{\,\mu}^\bot$ are orthoprojectors. The mean curvature vector field of $\mD_\mu$ is given by the trace of second fundamental form: $H_\mu=\tr_g\, h_\mu=\sum_{\,a} h_\mu (E_{\mu, a},E_{\mu, a})$. Similarly, $\tilde h_{\,\mu},\,\tilde H_{\,\mu}=\tr_g \tilde h_{\,\mu}$ and $\tilde T_{\,\mu}$ are defined for
$\mD_{\,\mu}^\bot$, e.g., $\tilde h_{\,\mu} : \mD_\mu^\bot \times \mD_\mu^\bot \to \mD_\mu $ is given by $\tilde h_{\,\mu} (X,Y) = \frac12\,P_{\,\mu} (\nabla_XY+\nabla_YX)$. We have $H_\mu=\sum\nolimits_{\,\nu\ne\mu} P_\nu H_\mu$ and ${\tilde H}_\mu=\sum\nolimits_{\,\nu\ne\mu} P_\mu H_\nu$. Set \begin{equation} \label{defcalH}
{\cal H} = \sum\nolimits_{\mu} H_\mu = \sum\nolimits_{\mu} {\tilde H}_\mu. \end{equation} To see that the second equality in \eqref{defcalH} holds, we use $P_\mu H_\mu=0$ to obtain \begin{equation} \label{calH} \sum\nolimits_{\,\mu} {\tilde H}_\mu = \sum\nolimits_{\,\mu} P_\mu \sum\nolimits_{\,\nu} H_\nu = \sum\nolimits_{\,\nu} \sum\nolimits_{\,\mu} P_\mu H_\nu = \sum\nolimits_{\,\nu} H_\nu\,. \end{equation} A~distribution $\mD_\mu$ is called integrable if $T_\mu=0$, and $\mD_\mu$ is called {totally umbilical}, {harmonic}, or {totally geodesic}, if ${h}_\mu=({H}_\mu/n_\mu)\,g,\ {H}_\mu =0$, or ${h}_\mu=0$, respectively, e.g.,~\cite{bf}. Totally umbilical and totally geodesic integrable distributions naturally appear on twisted products.
The squares of norms of tensors on $(M,g;\mD_1,\ldots\mD_k)$ are determined using \begin{eqnarray*}
&&\<h_\mu,h_\mu\rangle=\sum\nolimits_{\,a,b} \varepsilon_a\varepsilon_b\,\<h_\mu({E}_{\mu,a},{E}_{\mu,b}),h_\mu({E}_{\mu,a},{E}_{\mu,b})\rangle, \\
&&\<T_\mu,T_\mu\rangle=\sum\nolimits_{\,a,b} \varepsilon_a\varepsilon_b\,\<T_\mu({E}_{\mu,a},{E}_{\mu,b}),T_\mu({E}_{\mu,a},{E}_{\mu,b})\rangle,\quad {\rm etc}. \end{eqnarray*} Similarly, for two $(0,s)$ or $(1,s)$ tensors $F_1,F_2$, we will denote by $\langle F_1,F_2 \rangle$ their inner product defined by $g$.
Let $h_{\mu\nu},\,H_{\mu\nu},\,T_{\mu\nu}$ be the second fundamental forms, the mean curvature vector fields and the integrability tensors
related to the distributions ${\cal D}_\nu\oplus{\cal D}_\mu$ for $\mu\ne\nu$.
\begin{Definition}\rm A pair $({\cal D}_\mu,{\cal D}_\nu)$ with $\mu\ne\nu$ of distributions on $(M,g;{\cal D}_1,\ldots,{\cal D}_k)$ is called
a) \textit{mixed totally geodesic}, if $h_{\mu\nu}(X,Y)=0$ for all $X\in{\cal D}_\mu$ and $Y\in{\cal D}_\nu$.
b) \textit{mixed integrable}, if $T_{\mu\nu}(X,Y)=0$ for all $X\in{\cal D}_\mu$ and $Y\in{\cal D}_\nu$. \end{Definition}
Let~$\mathfrak{X}_M$
be the module over $C^\infty(M)$ of all vector fields on $M$.
The ``musical'' isomorphisms $\sharp$ and $\flat$ will be used for rank-one and symmetric rank-2 tensors. For~example, if $\omega\in\Lambda^1(M)$ is a 1-form and $X,Y\in\mathfrak{X}_M$, then $\omega(Y)=\langle\omega^\sharp,Y\rangle$ and $X^\flat(Y) =\<X,Y\rangle$. For arbitrary (0,2)-tensors $B$ and $C$ we also have $\<B, C\rangle =\tr_g(B^\sharp C^\sharp)=\<B^\sharp, C^\sharp\rangle$.
The shape operator $(A_\mu)_Z$ of $\mD_\mu$ with respect to $Z\in\mD_\mu^\bot$ (dual to the second fundamental form $h_\mu$) and the operator $(T_\mu^\sharp)_{Z}$ (dual to the integrability tensor $T_\mu$) are given~by \[
\langle(A_\mu)_Z(X),Y\rangle= \langle h_\mu(X,Y),Z\rangle,\quad \langle(T_\mu^\sharp)_Z(X),Y\rangle=\<T_\mu(X,Y),Z\rangle, \quad X,Y \in \mD_\mu . \] Similarly, linear operators $(\tilde A_\mu)_Z$ and $(\tilde T_\mu^{\sharp})_Z$ on $\mD_\mu^\bot$ with $Z\in\mD_\mu$ are defined. To make formulas easier to read, we will sometimes write $A_{\mu, Z}$ instead of $(A_\mu)_Z$ and ${\tilde A}_{\mu, a}$ instead of $({\tilde A}_\mu)_{E_{\mu,a}}$, etc. The divergence of a $(1,s)$-tensor field $S$ on $(M,g)$ is a $(0,s)$-tensor field
$\Div S =\operatorname{trace}(Y\mapsto\nabla_{Y} S)$, i.e.,
\[
(\Div S)(X_1,\ldots, X_s) = \sum\nolimits_{\,i}\langle(\nabla_{ E_i }\,S)(X_1,\ldots, X_s), E_i\rangle\,, \] where $(E_1,\ldots, E_n)$ is a local orthonormal frame on $TM$. For $s=0$, this is the {divergence}
${\rm div}\,X =\tr\nabla X$
of a vector field $X\in\mathfrak{X}_M$. The identity transformation on $TM$ will be denoted by $\id$.
For the contorsion tensor, we define auxiliary (1,2)-tensors $\I^*$ and $\I^\wedge$ by \[ \langle\I^*_X Y,Z\rangle = \langle\I_X Z, Y\rangle,\quad \I^\wedge_X Y = \I_Y X, \quad X,Y,Z\in\mathfrak{X}_M\, , \] similarly $\langle \I^{*\wedge}_X Y,Z \rangle = \langle \I^*_Y X, Z\rangle = \langle \I_Y Z, X \rangle$.
Set $\I_{\mu, a} = \I_{E_{\mu,a}}$.
The partial traces of
$\,\I$ are defined by
$\tr^\bot_\mu \I = \sum\nolimits_i \I_{{\cal E}_{\mu, i}}{\cal E}_{\mu, i},\
\tr^\top_\mu \I = \sum\nolimits_a \I_{\mu, a} E_{\mu, a}$.
Note that $\tr^\bot_\mu \I=\sum\nolimits_{\,\nu\ne\mu} \tr^\top_\nu \I$.
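This identity is immediate from the decomposition $\mD_\mu^\bot=\bigoplus_{\nu\ne\mu}\mD_\nu$ (a short verification, relying on frame-independence of the traces):

```latex
% Choose the frame of D_mu^bot as the union of the frames of D_nu
% over nu != mu; this is allowed since the traces are frame-independent:
\tr^\bot_\mu \I
  = \sum\nolimits_{\,i} \I_{{\cal E}_{\mu, i}}\,{\cal E}_{\mu, i}
  = \sum\nolimits_{\,\nu\ne\mu}\sum\nolimits_{\,a} \I_{\nu, a}\,E_{\nu, a}
  = \sum\nolimits_{\,\nu\ne\mu} \tr^\top_\nu \I\,.
```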
Set $V_\mu=(\mD_\mu\times\mD_\mu^\bot)\cup(\mD_\mu^\bot\times\mD_\mu)$. For $(M,g,\bar\nabla;\mD_\mu,\mD_\mu^\bot)$ we have the following equalities: \begin{equation}\label{E-div-barQ}
\Div X_\mu = \bar{\rm S}_{\,\mD_\mu,\mD_\mu^\bot} -Q(\mD_\mu,g) -\bar Q(\mD_\mu,g,\I)\,, \end{equation} see \cite{r-affine}, where $X_\mu =\frac12\,\big(P_\mu\tr_{\,\mu}^\bot(\I -\I^*) +P_\mu^\bot\tr_{\,\mu}^\top(\I -\I^*)\big) +H_\mu+\tilde H_\mu$ and \begin{eqnarray}\label{E-func-Q}
Q(\mD_\mu,g)\hspace*{-2mm}&=&\hspace*{-2mm}\langle\tilde H_\mu,\tilde H_\mu\rangle+\<H_\mu,H_\mu\rangle-\<h_\mu,h_\mu\rangle-\langle\tilde h_\mu,\tilde h_\mu\rangle+\<T_\mu,T_\mu\rangle+\langle\tilde T_\mu,\tilde T_\mu\rangle\,,\\ \label{E-barQ} \nonumber
2\,\bar Q(\mD_\mu,g,\I) \hspace*{-2mm}&=&\hspace*{-2mm} \langle\tr_{\,\mu}^\top\I,\, \tr_{\,\mu}^\bot\I^*\rangle +\langle\tr_{\,\mu}^\bot\I,\, \tr_{\,\mu}^\top\I^*\rangle\\ \nonumber
\hspace*{-2mm}&+&\hspace*{-2mm} \langle\tr_{\,\mu}^\top(\I-\I^*) -\tr_{\,\mu}^\bot(\I -\I^*), H_\mu -\tilde H_\mu\rangle \\
\hspace*{-2mm}&+&\hspace*{-2mm} \langle\I -\I^* +\I^\wedge - \I^{*\wedge},\ \tilde A_\mu -\tilde T_\mu^{\sharp} + A_\mu -T_\mu^{\sharp}\rangle -\langle\I^*,\,\I^\wedge\rangle_{\,|\,V_\mu}\,. \end{eqnarray}
In a local adapted frame, two terms in the last line of \eqref{E-barQ} have the following form:
\begin{eqnarray*}
&& \langle\I -\I^* +\I^\wedge - \I^{*\wedge},\ \tilde A_\mu -\tilde T_\mu^{\sharp} + A_\mu - T_\mu^{\sharp}\rangle
= \sum\nolimits_{\,a,b}\big(\langle(\I_{{\cal E}_{\mu,b}} -\I^*_{{\cal E}_{\mu,b}}) {E}_{\mu,a}\\
&& +\,(\I_{{\mu,a}} -\I^*_{{\mu,a}}) {\cal E}_{\mu,b},\ ((\tilde A_\mu)_{{E}_{\mu,a}} -(\tilde T_\mu^{\sharp})_{{E}_{\mu,a}}) {\cal E}_{\mu,b} +((A_\mu)_{{\cal E}_{\mu,b}}-(T_\mu^{\sharp})_{{\cal E}_{\mu,b}}) {E}_{\mu,a}\rangle\big),\\
&& \langle\I^*,\ \I^\wedge\rangle_{\,|\,V_\mu}
= \sum\nolimits_{\,a,b}\big(\langle\I_{{\mu,a}} {\cal E}_{\mu,b},\,\I^*_{{\cal E}_{\mu,b}} {E}_{\mu,a}\rangle + \langle\I^*_{{\mu,a}} {\cal E}_{\mu,b},\,\I_{{\cal E}_{\mu,b}} {E}_{\mu,a}\rangle\,\big).
\end{eqnarray*} The following result, see \cite[Proposition~2]{RS-1}, generalizes \eqref{E-div-barQ} to the case of $k>2$ distributions.
\begin{Proposition}\label{P-decomp2}
For~an almost product manifold $(M,g;\mD_1,\ldots,\mD_k)$
equipped with a linear connection $\bar\nabla=\nabla+\I$ we~have \begin{eqnarray}\label{E-Q1Q2-gen}
\Div\,Y = 2\,\bar{\rm S}_{\,\mD_1,\ldots,\mD_k} - \sum\nolimits_{\,\mu}\big( Q(\mD_\mu,g) +\bar Q(\mD_\mu,g,\I) \big), \end{eqnarray} where tensors $Q(\mD_\mu,g)$ and $\bar Q(\mD_\mu,g,\I)$ are given by \eqref{E-func-Q} and \eqref{E-barQ} with $\mD=\mD_\mu$, and \begin{equation}\label{E-prop-X}
Y=\sum\nolimits_{\,\mu}\big(\frac12\,P_\mu\tr_{\,\mu}^\bot(\I -\I^*) +\frac12\,P_\mu^\bot\tr_{\,\mu}^\top(\I -\I^*) +H_{\,\mu}+\tilde H_{\,\mu}\big). \end{equation} \end{Proposition}
\begin{proof}
For a pair of complementary distributions $(\mD_\mu,\mD_\mu^\bot)$ on $(M,g)$ we have
$\overline{\rm S}_{\,\mD_\mu,\mD_\mu^\bot} = \sum\nolimits_{\,a,\,b}
\varepsilon_a \varepsilon_b\,\langle\overline R_{\,{E}_{\mu,a}, {\cal E}_{\mu,b}}{E}_{\mu,a}, {\cal E}_{\mu,b}\rangle$.
Thus,
from the equality $\overline{\rm S}_{\,\mD_\mu,\mD^\bot_\mu}=\sum\nolimits_{\,\nu\ne\mu}\overline{\rm S}_{\,\mD_\mu,\mD_\nu}$ and definition \eqref{E-Smix-k} we obtain the following decomposition formula of the mixed scalar curvature, see \cite{RS-1}:
\begin{equation}\label{E-Dk-Smix}
2\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = \sum\nolimits_{\,\mu}\overline{\rm S}_{\,\mD_\mu,\mD^\bot_\mu}\,. \end{equation} Summing $k$ copies of \eqref{E-div-barQ} with $\mD_\mu\ (\mu=1,\ldots,k)$ and using \eqref{E-Dk-Smix} yields \eqref{E-Q1Q2-gen}. \end{proof}
\begin{Remark}
\rm For a statistical connection $\bar\nabla$ on $(M,g)$ we have $\I^\wedge=\I$ and $\I^* = \I$; in this case, \eqref{E-barQ} has a shorter form
$2\,\bar Q(\mD_\mu,g,\I)= 2\,\langle\tr_{\,\mu}^\top\I,\ \tr_{\,\mu}^\bot\I\rangle -\langle\I,\,\I\rangle_{\,|\,V_\mu}$\,,
and \eqref{E-Q1Q2-gen} reduces~to
\begin{equation}\label{eqvarstat}
2\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = 2\,{\rm S}_{\,\mD_1,\ldots,\mD_k}
-\sum\nolimits_{\,\mu}\big(\langle\tr_{\,\mu}^\bot\I,\,\tr_{\,\mu}^\top\I\rangle -\frac12\,\langle\I,\,\I\rangle_{\,|V_\mu}\big) .
\end{equation} \end{Remark}
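The identities $\I^\wedge=\I$ and $\I^*=\I$ used in the Remark, and the resulting short form of $\bar Q$, admit a direct check (a standard computation for statistical connections, sketched here rather than quoted from the text): vanishing torsion gives $\I_XY-\I_YX=\bar\nabla_XY-\bar\nabla_YX-[X,Y]=0$, and symmetry of $\bar\nabla g$ makes the cubic form $\langle\I_XY,Z\rangle$ totally symmetric, so all four tensors $\I,\I^*,\I^\wedge,\I^{*\wedge}$ coincide.

```latex
% Substituting I = I^* = I^wedge = I^{*wedge} into \eqref{E-barQ}:
%   first line:  <tr^top I, tr^bot I^*> + <tr^bot I, tr^top I^*>
%                = 2 <tr^top I, tr^bot I>,
%   middle line: vanishes, since I - I^* = 0,
%   last line:   I - I^* + I^wedge - I^{*wedge} = 0, and
%                <I^*, I^wedge>|_{V_mu} = <I, I>|_{V_mu},
% which gives the short form:
2\,\bar Q(\mD_\mu,g,\I)
  = 2\,\langle\tr_{\,\mu}^\top\I,\ \tr_{\,\mu}^\bot\I\rangle
    - \langle\I,\,\I\rangle_{\,|\,V_\mu}\,.
```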
\section{Adapted variations of metric} \label{sec:adapted-metric}
Here, we define adapted variations $g_t\ (|t|<\varepsilon)$ of a pseudo-Riemannian metric $g = g_0$ and, similarly to the case of \eqref{Eq-Smix-g0} (see \cite{r-EH-k}), find a general form of the Euler-Lagrange equation for the action \eqref{Eq-Smix-g} with respect to those variations.
Let~infinitesimal variations
${B}_t\equiv\partial g_t/\partial t$
be supported in a relatively compact domain $\Omega\subset M$, i.e., on $M\setminus\Omega$ we have $g_t=g$ and ${B}_t=0$ for all $t$.
We~adopt the notation $\partial_t \equiv\partial/\partial t,\ {B}\equiv{\partial_t g_t}_{\,|\,t=0}$, and write ${B}$ instead of ${B}_t$ to make formulas easier to~read.
The~volume form ${\rm d}\operatorname{vol}_g$ of metric $g$ varies as follows, e.g., \cite{rz-2}, \begin{equation}\label{E-dotvolg}
\partial_t\,\big({\rm d}\operatorname{vol}_{g}\!\big) = \frac12\,(\tr_{g}\,{B})\,{\rm d}\operatorname{vol}_{g}
=\frac12\,\langle{B},\,g\rangle\,{\rm d}\operatorname{vol}_{g}\,. \end{equation}
For any variation $g_t$ of metric, the Euler-Lagrange equation amounts to the vanishing of the partial gradient $\delta_g\bar J_\mD(g,\I)$ of the functional \eqref{Eq-Smix-g} with fixed tensor $\I$, where
${\rm\frac{d}{dt}}\,\bar J_\mD(g_t,\I)_{|\,t=0} =\int_{\Omega}\langle\delta_g \bar J_\mD, {B}\rangle\,{\rm d}\operatorname{vol}_g$
and ${B} =\partial_t g_{t\,|\,t=0}$. Solutions $g$ of $\delta_g \bar J_\mD=0$ are called critical metrics. For variations preserving the volume of $\Omega$, i.e., ${\rm Vol}(\Omega,g_t) = {\rm Vol}(\Omega,g)$ for all~$t$, using \eqref{E-dotvolg}, we get \[
0 = \partial_t\int_{M}{\rm d}\operatorname{vol}_g = \int_{M} \partial_t\,({\rm d}\operatorname{vol}_g) = \int_{M}\frac{1}{2}\,(\tr_g{B})\,{\rm d}\operatorname{vol}_g =\frac{1}{2} \int_{\Omega}\<g,\,{B}\rangle\,{\rm d}\operatorname{vol}_g. \] Hence, $g$ is critical for variations of metric preserving the volume of $\Omega$ if and only if the condition
$\int_{\Omega}\langle\delta_g \bar J_\mD,\,{B}\rangle\,{\rm d}\operatorname{vol}_g =0$
holds for all tensors ${B}$ satisfying
$\int_{\,\Omega}\<g,\,{B}\rangle\,{\rm d}\operatorname{vol}_g=0$.
Thus, the Euler-Lagrange equation for variations preserving the volume of $\Omega$ is
\begin{equation}\label{E-delta-g-J}
\delta_g \bar J_\mD = \lambda\,g \end{equation} for some $\lambda\in\mathbb{R}$.
Following \cite{rz-3}, where $k=2$, we define auxiliary Casorati-type operators ${\cal T}_\mu:\mD_\mu\to\mD_\mu$ and self-adjoint $(1,1)$-tensors ${\cal K}_\mu$ (using the commutator of operators) by \[
{\cal T}_\mu=\sum\nolimits_{\,a}\varepsilon_a(T^\sharp_{\mu,a})^2,\quad
{\cal K}_\mu=\sum\nolimits_{\,a}\varepsilon_{\,a}\,[T^\sharp_{\mu,a},\, A_{\mu,a}]\,.
\] For any $(1,2)$-tensors $P,P'$ and a $(0,2)$-tensor $S$ define the $(0,2)$-tensor $\Upsilon_{P,P'}$~by \[
\langle\Upsilon_{P,P'}, S\rangle = \sum\nolimits_{\,\lambda, \mu} \varepsilon_\lambda\, \varepsilon_\mu\,
\big[S(P(e_{\lambda}, e_{\mu}), P'( e_{\lambda}, e_{\mu})) + S(P'(e_{\lambda}, e_{\mu}), P( e_{\lambda}, e_{\mu}))\big], \] where we use the inner product of tensors induced by $g$, $\{e_{\lambda}\}$ is a local orthonormal basis of $TM$ and $\varepsilon_\lambda = \<e_{\lambda}, e_{\lambda}\rangle\in\{-1,1\}$.
If $g$ is a Riemannian metric, then $\Upsilon_{h_\mu,h_\mu}=0$ if and only if $h_\mu=0$. Thus, $\Upsilon_{h_\mu,h_\mu}$ measures the ``non-total geodesy'' of~$\mD_\mu$; similarly, $\Upsilon_{T_\mu,T_\mu}$ measures the ``non-integrability'' of~$\mD_\mu$.
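The claim for $\Upsilon_{h_\mu,h_\mu}$ can be seen by a direct computation (not carried out in the text): taking $P=P'=h_\mu$ and $S=g$ in the definition of $\Upsilon_{P,P'}$, with the frame indices renamed to $\lambda,\rho$ to avoid clashing with the distribution index $\mu$,

```latex
% P = P' = h_mu and S = g in the definition of Upsilon:
\langle\Upsilon_{h_\mu,h_\mu},\,g\rangle
  = 2\sum\nolimits_{\,\lambda,\rho}\varepsilon_\lambda\,\varepsilon_\rho\,
    \langle h_\mu(e_{\lambda}, e_{\rho}),\,h_\mu(e_{\lambda}, e_{\rho})\rangle
  = 2\,\langle h_\mu, h_\mu\rangle\,,
```

so for Riemannian $g$ the vanishing of $\Upsilon_{h_\mu,h_\mu}$ forces $\langle h_\mu,h_\mu\rangle=0$, i.e., $h_\mu=0$.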
\begin{Definition}[see \cite{r-EH-k}]\rm A family of adapted metrics $g_t\ (|t|<\varepsilon)$ on $(M;\mD_1,\ldots\mD_k)$ such that $g_0 =g$ and $B_t$ are compactly supported, is called an \textit{adapted variation} of $g$. In~this case, distributions $\mD_\mu$ and $\mD_\nu$ are $g_t$-orthogonal for all $\mu\ne\nu$ and all~$t$. An adapted variation $g_t$ is called a $\mD_\mu$-\textit{variation} (for some fixed $\mu\in\{1,\ldots, k\}$) if the metric changes along $\mD_\mu$ only,~i.e.,
$g_t(X,Y)=g_0(X,Y),\quad X,Y\in\mD_\mu^\bot,\quad |t|<\varepsilon$.
\end{Definition}
An adapted variation $g_t$ is a sum $g_t=g_1(t)\oplus\ldots\oplus g_k(t)$ of $\mD_\mu$\,-variations $g_\mu(t) =g_t|_{\,\mD_\mu}$. In~this case, the tensor ${B}_t=\partial_t\, g_t$ is a sum ${B}_t=\sum\nolimits_{\,\mu}{B}_\mu(t)$, where ${B}_\mu(t) =\partial_t g_\mu(t) ={B}_t|_{\,\mD_\mu}$. A~special case of adapted variations is a multiconformal variation of metric, see~\cite{RN-21}.
In~view of Proposition~\ref{P-decomp2}, we need the variation of $\sum\nolimits_{\,\nu}\big(\bar Q(\mD_\nu,g,\I) + Q(\mD_\nu,g)\big)$.
\begin{Lemma}[see \cite{r-EH-k}]\label{propvar1} Let $g_t$
be a $\mD_\mu$-variation of an adapted metric $g$ on $(M;\mD_1,\ldots\mD_k)$,
then \begin{eqnarray*} \nonumber
&& \partial_t\langle\tilde{h}_\mu, \tilde{h}_\mu\rangle = -\langle (1/2)\Upsilon_{\tilde h_\mu, \tilde h_\mu},\ {B}_\mu\rangle\,,\quad
\partial_t \<h_\mu,\ h_\mu\rangle = \langle\,\Div{h}_\mu + {\cal K}_\mu^\flat,\ {B}_\mu\rangle - \Div\<h_\mu, {B}_\mu\rangle\,,\\
&& \partial_t \langle\tilde {H}_\mu, \tilde {H}_\mu\rangle = -\langle\,\tilde{H}_\mu^\flat\otimes\tilde{H}_\mu^\flat,\ {B}_\mu\rangle\,,\quad
\partial_t \<H_\mu, H_\mu\rangle = \langle\,(\Div H_\mu)\,g_\mu,\ {B}_\mu\rangle -\Div((\tr {B}_\mu^\sharp) H_\mu)\,, \\
&& \partial_t\langle\tilde{T}_\mu, \tilde{T}_\mu\rangle = \langle\,(1/2)\Upsilon_{\tilde T_\mu, \tilde T_\mu},\ {B}_\mu\rangle\,,\quad
\partial_t\<T_\mu,\ T_\mu\rangle = \langle 2\,{\cal T}_\mu^\flat,\ {B}_\mu\rangle\,, \end{eqnarray*} and for $\nu\ne\mu$ we have dual equations \begin{eqnarray*}
&& \partial_t\langle{h}_\nu, {h}_\nu\rangle = \langle-(1/2)\Upsilon_{h_\nu,h_\nu},\ {B}_\mu\rangle\,,\quad
\partial_t \langle\tilde h_\nu,\ \tilde h_\nu\rangle = \langle\,\Div\tilde{h}_\nu + \tilde{\cal K}_\nu^\flat,\ {B}_\mu\rangle - \Div\langle\tilde h_\nu, {B}_\mu\rangle\,,\\
&& \partial_t \langle{H}_\nu, {H}_\nu\rangle = -\langle\,{H}_\nu^\flat\otimes{H}_\nu^\flat,\ {B}_\mu\rangle\,,\quad
\partial_t \langle\tilde H_\nu, \tilde H_\nu\rangle = \langle\,(\Div\tilde H_\nu)\,g_\nu^\bot,\ {B}_\mu\rangle -\Div((\tr {B}_\mu^\sharp) \tilde H_\nu)\,, \\
&& \partial_t\langle{T}_\nu, {T}_\nu\rangle = \langle\,(1/2)\Upsilon_{T_\nu,T_\nu},\ {B}_\mu\rangle\,,\quad
\partial_t\langle\tilde T_\nu,\ \tilde T_\nu\rangle = \langle 2\,\tilde{\cal T}_\nu^\flat,\ {B}_\mu\rangle\, . \end{eqnarray*} \end{Lemma}
Variational formulas for the terms of $\bar Q$ in \eqref{E-barQ} obtained in the following lemma are a special case (i.e., for adapted variations of metric) of equations from \cite[Lemma 3]{rz-3} for $(\mD_\mu,\mD_\mu^\bot)$. A detailed proof of such variational formulas in a particular setting will be given below in Lemma~\ref{propdeltaQforstatistical}.
\begin{Lemma}\label{P-dT-3} Let $g_t$
be a $\mD_\mu$-variation of an adapted metric $g$ on $(M;\mD_1,\ldots\mD_k)$
with fixed statistical connection $\bar\nabla=\nabla+\I$. Then \begin{eqnarray*}
\nonumber
&& \partial_t \langle\I^*,\ \I^\wedge\rangle_{\,|\,V_\mu} = -\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b}) \langle\I_{{\mu,a}}, \I_{{\mu,b}}\rangle_{\,|\,\mD_\mu^\bot}\,,\\ \nonumber
&& \partial_t \langle\tr_{\,\mu}^\bot\I,\ \tr_{\,\mu}^\top\I^*\rangle =0\,,\\ \nonumber
&& \partial_t \langle\tr_{\,\mu}^\top\I^*,\ \tr_{\,\mu}^\bot\I\rangle = -\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b})\langle\I_{{\mu,a}}{E}_{\mu,b}\,,\ \tr_{\,\mu}^\bot\I\rangle,\\ \nonumber
&& \partial_t \langle\Theta, \tilde A_\mu\rangle = -2 \sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b}) \langle(\tilde A_\mu)_{{E}_{\mu,a}},\, \I_{{\mu,b}}\rangle\,, \\ \nonumber
&& \partial_t \langle\Theta, \tilde T_\mu^{\sharp}\rangle =-2\sum\nolimits_{\,a,b}{B}_\mu({E}_{\mu,a}, {E}_{\mu,b})\langle(\tilde T^{\sharp}_\mu)_{{E}_{\mu,a}},\, \I_{{\mu,b}}\rangle\,,\\ \nonumber
&& \partial_t \langle\Theta, T_\mu^{\sharp}\rangle
=-6\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b})\<T_\mu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle\,,\\ \nonumber
&& \partial_t \langle\Theta, A_\mu\rangle
= 2\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b})\langle h_\mu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle\,,\\ \nonumber
&& \partial_t \langle\tr_{\,\mu}^\top(\I^*-\I),\, H_\mu - \tilde H_\mu\rangle =\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b})
\big(\langle\I_{{\mu,b}}\,{E}_{\mu,a}, H_\mu -\tilde H_\mu\rangle \\ \nonumber
&&\quad +\,\langle\tr_{\,\mu}^\top\I, {E}_{\mu,a}\rangle\langle\tilde H_\mu, {E}_{\mu,b}\rangle \big), \\
&& \partial_t\langle\,\tr_{\,\mu}^\bot(\I^*-\I),\, H_\mu - \tilde H_\mu\rangle =\sum\nolimits_{\,a,b} {B}_\mu({E}_{\mu,a}, {E}_{\mu,b})\langle\tr_{\,\mu}^\bot\I, {E}_{\mu,b}\rangle \langle\tilde H_\mu, {E}_{\mu,a}\rangle\,, \end{eqnarray*} where $\Theta=\I -\I^* +\I^\wedge - \I^{*\wedge}$, and for $\nu\ne\mu$ we get dual equations. \end{Lemma}
\begin{Proposition}\label{C-vr-terms} For any $\mD_\mu$-variation $g_t$ of an adapted metric $g$ on $(M;\mD_1,\ldots,\mD_k)$ with fixed linear connection $\bar\nabla=\nabla+\I$, we have \begin{eqnarray}\label{E-dt-Q}
\partial_t\sum\nolimits_{\,\nu} Q(\mD_\nu,g_t) \hspace*{-2mm}&=&\hspace*{-2mm} \langle\delta Q_\mu,\, {B}_\mu\rangle - \Div X_\mu\,, \\
\label{E-dt-barQ}
\partial_t\sum\nolimits_{\,\nu}\bar Q(\mD_\nu,g_t,\I) \hspace*{-2mm}&=&\hspace*{-2mm}\langle{\delta_g\bar Q}_\mu,\,{B}_\mu\rangle\,, \end{eqnarray} where $(0,2)$-tensors ${\delta Q}_\mu$ on $\mD_\mu\times\mD_\mu$ and vector fields $X_\mu$ on $M$ are given~by \begin{eqnarray*}
&&\hskip-5mm 2 X_\mu=\<h_\mu, {B}_\mu\rangle -(\tr {B}_\mu^\sharp) H_\mu
+\sum\nolimits_{\,\nu\ne\mu}\big(\langle\tilde h_\nu, {B}_\mu\rangle -(\tr {B}_\mu^\sharp) \tilde H_\nu\big),\\
&&\hskip-5mm {\delta Q}_\mu = -\Div{h}_\mu -{\cal K}_\mu^\flat -\tilde{H}_\mu^\flat\otimes\tilde{H}_\mu^\flat
+\frac12\Upsilon_{\tilde h_\mu, \tilde h_\mu} +\frac12\Upsilon_{\tilde T_\mu, \tilde T_\mu} +\,2\,{\cal T}_\mu^\flat + (\Div H_\mu)\,g_\mu \\
&& +\sum\nolimits_{\,\nu\ne\mu}\big(-\Div\tilde{h}_\nu|_{\mD_\mu} - (P_\mu\tilde{\cal K}_\nu)^\flat -(P_\mu{H}_\nu)^\flat\otimes(P_\mu{H}_\nu)^\flat \\
&& +\,\frac12\Upsilon_{P_\mu h_\nu,P_\mu h_\nu} +\frac12\Upsilon_{P_\mu T_\nu,P_\mu T_\nu} +2\,(P_\mu\tilde{\cal T}_\nu)^\flat +(\Div\tilde H_\nu)\,g_\mu\big), \end{eqnarray*} and certain $(0,2)$-tensors ${\delta_g\bar Q}_\mu$ on $\mD_\mu\times\mD_\mu$ have long expressions $($see {\rm \cite{rz-3}} for $k=2)$.
If $\,\bar\nabla$ is statistical, then
the tensors ${\delta_g\bar Q}_\mu$ on $\mD_\mu\times\mD_\mu$ in \eqref{E-dt-barQ} can be written explicitly as \begin{eqnarray*}
&&\hskip-5mm 2\,{\delta_g\bar Q}_\mu({E}_{\mu,a},{E}_{\mu,b}) =
\langle\I_{{\mu,a}}, \I_{{\mu,b}}\rangle_{\,|\mD_\mu^\bot} -\langle\I_{{\mu,a}}{E}_{\mu,b},\,\tr_{\,\mu}^\bot\I\rangle -2\,\langle(\tilde A_\mu)_{{E}_{\mu,a}},\,\I_{{\mu,b}}\rangle \\
&& +\,2\,\langle(\tilde T^{\sharp}_\mu)_{{E}_{\mu,a}},\,\I_{{\mu,b}}\rangle +6\,\<T_\mu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle + 2\,\langle h_\mu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle \\
&& -\,\langle\I_{{\mu,b}}{E}_{\mu,a}, H_\mu - \tilde H_\mu\rangle -\langle\tr_{\,\mu}^\top\I, {E}_{\mu,a}\rangle\langle \tilde H_\mu, {E}_{\mu,b}\rangle +\langle\tr_{\,\mu}^\bot\I, {E}_{\mu,b}\rangle \langle\tilde H_\mu, {E}_{\mu,a}\rangle \\
&& +\sum\nolimits_{\,\nu\ne\mu}\big(\langle\I_{{\mu,a}}, \I_{{\mu,b}}\rangle_{\,|\mD_\nu} {-}\langle\I_{{\mu,a}}{E}_{\mu,b}, \tr_{\,\nu}^\top\I\rangle {-}2\langle(A_\nu)_{{E}_{\mu,a}}, \I_{{\mu,b}}\rangle {+}2\langle (T^{\sharp}_\nu)_{{E}_{\mu,a}}, \I_{{\mu,b}}\rangle \\
&& +\,6\,\langle \tilde T_\nu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle + 2\,\langle\tilde h_\nu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle
-\langle\I_{{\mu,b}}{E}_{\mu,a}, \tilde H_\nu - H_\nu\rangle\\
&& -\,\langle\tr_{\,\nu}^\bot\I, {E}_{\mu,a}\rangle\langle H_\nu, {E}_{\mu,b}\rangle +\langle\tr_{\,\nu}^\top\I, {E}_{\mu,b}\rangle \<H_\nu, {E}_{\mu,a}\rangle\big).
\end{eqnarray*} \end{Proposition}
\begin{proof} Equations \eqref{E-dt-Q} and \eqref{E-dt-barQ} with $Q(\mD_\mu,g)$ and $\bar Q(\mD_\mu,g,\I)$ given by \eqref{E-func-Q} and \eqref{E-barQ} follow from \eqref{E-Dk-Smix}, and explicit forms of tensors ${\delta Q}_\mu$ on $\mD_\mu\times\mD_\mu$ and vector fields $X_\mu$ follow from Lemma~\ref{propvar1}.
If $\,\bar\nabla$ is statistical, then explicit forms of tensors ${\delta_g\bar Q}_\mu$ on $\mD_\mu\times\mD_\mu$ follow from Lemma~\ref{P-dT-3}. Note that $X_\mu$, ${\delta Q}_\mu$ and ${\delta_g\bar Q}_\mu$ consist of two parts: the summation part (related to $\mD_\mu^\bot$) is dual to the part related to~$\mD_\mu$. \end{proof}
In the following theorem (based on Proposition~\ref{C-vr-terms}) we generalize results of \cite{rz-3}, obtained there for $k=2$.
\begin{Theorem}\label{T-main01} Let $g$ be an adapted metric and $\bar\nabla=\nabla+\I$ a
linear connection on a manifold $(M;\mD_1,\ldots,\mD_k)$. Then $g$ is critical for \eqref{Eq-Smix-g} with respect to adapted variations of metric, preserving the volume of $\Omega$, if and only if the following Euler-Lagrange equations \eqref{E-delta-g-J}
are satisfied for some $\lambda\in\mathbb{R}$: \begin{eqnarray}\label{ElmixDDvp-b}
&& {\delta Q}_\mu +{\delta_g\bar Q}_\mu +\Big(\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} -\frac12\,\Div
\sum\nolimits_{\,\nu}
\big(\frac12\,P_\nu\tr_{\,\nu}^\bot(\I -\I^*) \nonumber \\
&& +\,\frac12\,P_\nu^\bot\tr_{\,\nu}^\top(\I -\I^*) +H_{\,\nu}+\tilde H_{\,\nu}\big) +\lambda\Big)\,g_\mu=0,
\quad \mu=1,\ldots, k, \end{eqnarray} where ${\delta Q}_\mu$ and ${\delta_g\bar Q}_\mu$ are defined in Proposition~\ref{C-vr-terms}. \end{Theorem}
\begin{proof} Let a $\mD_\mu$-variation $g_t$ of $g$ (for some $\mu\ge 1$) be compactly supported in $\Omega\subset M$.
For
a $t$-dependent vector field $Y$ given in \eqref{E-prop-X},
by \eqref{E-dotvolg} and the Divergence Theorem, we obtain
$\frac{d}{dt}\int_\Omega (\Div Y)\,{\rm d}\operatorname{vol}_g = \int_\Omega \Div\big(\partial_t Y+\frac12\,(\tr_g {B}) Y\big)\,{\rm d}\operatorname{vol}_g = 0$.
Thus, for $Q(\mD_\mu,g_t)$ and $\bar Q(\mD_\mu,g_t,\I)$ given in \eqref{E-func-Q} and \eqref{E-barQ}, using Proposition~\ref{P-decomp2}, we~obtain \[
{\rm\frac{d}{dt}}\,\bar J_{\mD}(g_t,\I)
= \frac12\,{\rm\frac{d}{dt}}\int_{\Omega}
\sum\nolimits_{\,\nu=1}^{\,k}\big(\bar Q(\mD_\nu,g_t,\I) + Q(\mD_\nu,g_t)\big)\,{\rm d}\operatorname{vol}_{g_t} . \] Therefore, \begin{eqnarray*}
&& {\rm\frac{d}{dt}}\int_{\Omega} \sum\nolimits_{\,\nu=1}^{\,k}\big(\bar Q(\mD_\nu,g_t,\I)+Q(\mD_\nu,g_t)\big)\,{\rm d}\operatorname{vol}_{g_t} \\
&& =\int_{\Omega} \langle{\delta_g\bar Q}_\mu+{\delta Q}_\mu,\ {B}_\mu\rangle\,{\rm d}\operatorname{vol}_{g_t}
+\int_{\Omega} \sum\nolimits_{\,\nu=1}^{\,k}\big(\bar Q(\mD_\nu,g_t,\I)+Q(\mD_\nu,g_t)\big)\,\partial_t({\rm d}\operatorname{vol}_{g_t})\,. \end{eqnarray*}
From \eqref{E-Q1Q2-gen}, \eqref{E-prop-X}, \eqref{E-dotvolg}, \eqref{E-dt-Q} and \eqref{E-dt-barQ}, we obtain \begin{eqnarray*}
{\rm\frac{d}{dt}}\,\bar J_{\mD}(g_t,\I)_{|\,t=0} &=& \frac12\int_{\Omega} \big\langle{\delta Q}_\mu +{\delta_g\bar Q}_\mu {+}\big(\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k}
{-} \frac12\,\Div \sum\nolimits_{\,\nu}\big(\frac12\,P_\nu\tr_{\,\nu}^\bot(\I -\I^*) \\
&+&\frac12\,P_\nu^\bot\tr_{\,\nu}^\top(\I -\I^*) +H_{\,\nu}+\tilde H_{\,\nu}\big) \big)\,g_\mu,\ {B}_\mu\big\rangle\,{\rm d}\operatorname{vol}_g. \end{eqnarray*} If $g$ is critical for $\bar J_{\mD}$ (with fixed $\I$) for $\mD_\mu$-variations,
then the above integral is zero for any symmetric $(0,2)$-tensor ${B}_\mu$.
This yields the $\mD_\mu$-component of the Euler-Lagrange equation \begin{equation}\label{ElmixDD-b}
{\delta Q}_\mu +{\delta_g\bar Q}_\mu +\big(\,\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} -\frac12\,\Div \sum\nolimits_{\,\nu}\big(\frac12\,(P_\nu\tr_{\,\nu}^\bot +P_\nu^\bot\tr_{\,\nu}^\top)(\I -\I^*) +H_{\,\nu}+\tilde H_{\,\nu}\big) \big)\,g_\mu=0\,. \end{equation} According to \eqref{E-delta-g-J}, the Euler-Lagrange equation for \eqref{Eq-Smix-g} (with fixed $\I$) for adapted
variations of $g$ preser\-ving the volume of $\Omega$
is \eqref{ElmixDDvp-b} instead of \eqref{ElmixDD-b}. \end{proof}
\begin{Remark}\rm One can present \eqref{ElmixDDvp-b} in the form of \eqref{E-geom} via its restrictions to $\mD_\mu$,
\begin{equation}\label{E-main-0ij-kk}
\overline{\mathcal{Ric}}_{\,\mD\,|\,\mD_\mu\times\mD_\mu} = - {\delta Q}_\mu - {\delta_g\bar Q}_\mu + \rho_\mu\,g_\mu, \quad \mu=1,\ldots,k\,, \end{equation} (see \cite{rz-3} for $k=2$), where $\rho_\mu$ are defined in \eqref{E-mu-k} below. Indeed, \begin{equation}\label{Eq-2-Ric}
\overline{\mathcal{Ric}}_{\,\mD\,|\,\mD_\mu\times\mD_\mu} = \mathcal{Ric}_{\,\mD\,|\,\mD_\mu\times\mD_\mu} - {\delta_g\bar Q}_\mu, \quad \mu=1,\ldots,k\,, \end{equation} and it was shown in \cite{r-EH-k} that \begin{equation}\label{E-main-0ij-k}
\mathcal{Ric}_{\,\mD\,|\,\mD_\mu\times\mD_\mu} = -{\delta Q}_\mu +\rho_\mu \,g_\mu, \quad \mu=1,\ldots,k\,, \end{equation} which corresponds to the Euler-Lagrange equations \eqref{ElmixDDvp-b} for $\,\I=0$: \begin{equation}\label{ElmixDDvp}
{\delta Q}_\mu =-\big(\,{\rm S}_{\,\mD_1,\ldots,\mD_k} -\frac12\,\Div\sum\nolimits_{\,\nu=1}^{\,k}(H_\nu + \tilde{H}_\nu) +\lambda\big)\,g_\mu,
\quad \mu=1,\ldots, k\,, \end{equation} and $(\rho_1,\ldots,\rho_k)$ in \eqref{E-main-0ij-k} for $n=\dim M>2$ are given by \begin{equation}\label{E-mu-k}
\rho_{\,\nu}=-\frac1{2n-4}\,\big(\sum\nolimits_{\,\mu}\,(a_{\,\nu}-a_{\mu})\,n_{\mu}-2\,a_{\,\nu}\big), \end{equation} with coefficients
$a_\mu= \tr_g\big(\sum\nolimits_{\,\nu}{\delta Q}_\nu -2\,{\delta Q}_\mu\big)$.
From \eqref{Eq-2-Ric} and \eqref{E-main-0ij-k} the system \eqref{E-main-0ij-kk} follows. \end{Remark}
\begin{Example}[Case $k=2$]\rm For $(M,g;\mD,\mD^\bot)$ with a statistical connection, the tensor $\overline{\mathcal{Ric}}_{\,\mD}$ in~\eqref{E-geom} is defined by its restrictions on complementary subbundles $\mD$ and $\mD^\bot$ of $TM$, \begin{eqnarray*}
\overline{\mathcal{Ric}}_{\,\mD\,|\,\mD\times\mD} =
\Div{h} +{\cal K}^\flat +\tilde{H}^\flat\otimes\tilde{H}^\flat -\frac12\Upsilon_{\tilde h, \tilde h} -\frac12\Upsilon_{\tilde T,\tilde T}
-2\,{\cal T}^\flat - {\delta_g\bar Q}_1 + (\rho_1 - \Div H)\,g^\top, \\
\overline{\mathcal{Ric}}_{\,\mD\,|\,\mD^\bot\times\mD^\bot} =
\Div\tilde{h} +\tilde{\cal K}^\flat +{H}^\flat\otimes{H}^\flat -\frac12\Upsilon_{h,h} -\frac12\Upsilon_{T,T} -2\,\tilde{\cal T}^\flat
- {\delta_g\bar Q}_2 + (\rho_2 -\Div\tilde H)\,g^\bot , \end{eqnarray*} see \cite{rz-connections,rz-3}, where
$\rho_1=-\frac{n_1-1}{n-2}\,\Div(\tilde H-{H})$,
$\rho_2=\frac{n_2-1}{n-2}\,\Div(\tilde H-{H})$,
and $n=\dim M>2$. If~$n=2$,
then $\rho_1=\rho_2=0$. The (0,2)-tensors ${\delta_g\bar Q}_1: \mD\times\mD\to\mathbb{R}$ and ${\delta_g\bar Q}_2: \mD^\bot\times\mD^\bot\to\mathbb{R}$ are given (using an adapted frame, $E_a\in\mD$ and ${\cal E}_i\in\mD^\bot$) by \begin{eqnarray*}
&&\hskip-4mm {\delta_g\bar Q}_1(E_a,{E}_b) =
\langle\I_{{a}}, \I_{{b}}\rangle_{\,|\,\mD^\bot} -\langle\I_{{a}}{E}_{b},\,\tr_{\,\mD^\bot}\I\rangle -2\,\langle\tilde A_{{E}_{a}},\,\I_{{b}}\rangle \\
&& +\,2\,\langle \tilde T^{\sharp}_{{E}_{a}},\, \I_{{b}}\rangle + 6\,\<T({E}_{b},\,\cdot),\, \I_{{a}}\rangle + 2\,\langle h({E}_{b},\,\cdot),\, \I_{{a}}\rangle \\
&& -\,\langle\I_{{b}}{E}_{a}, H - \tilde H\rangle -\langle\tr_{\,\mD}\I, {E}_{a}\rangle\langle \tilde H, {E}_{b}\rangle +\langle\tr_{\,\mD^\bot}\I, {E}_{b}\rangle \langle\tilde H, {E}_{a}\rangle\,,\\
&&\hskip-4mm {\delta_g\bar Q}_2({\cal E}_i,{\cal E}_j) =
\langle\I_{{\cal E}_i}, \I_{{\cal E}_j}\rangle_{\,|\,\mD} -\langle\I_{{\cal E}_i}{\cal E}_j,\,\tr_{\,\mD}\I\rangle -2\,\<A_{{\cal E}_i},\, \I_{{\cal E}_j}\rangle \\
&& +\,2\,\langle T^{\sharp}_{{\cal E}_i},\, \I_{{\cal E}_j}\rangle + 6\,\langle \tilde T({\cal E}_j,\,\cdot),\, \I_{{\cal E}_i}\rangle + 2\,\langle \tilde h({\cal E}_j,\,\cdot),\, \I_{{\cal E}_i}\rangle \\
&& +\,\langle\I_{{\cal E}_j}{\cal E}_i, \tilde H - H\rangle -\langle\tr_{\,\mD^\bot}\I, {\cal E}_i\rangle\langle H, {\cal E}_j\rangle +\langle\tr_{\,\mD}\I, {\cal E}_j\rangle \<H, {\cal E}_i\rangle\,. \end{eqnarray*} \end{Example}
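\begin{Remark}\rm Every term in the above expressions for ${\delta_g\bar Q}_1$ and ${\delta_g\bar Q}_2$ contains the contorsion tensor $\I$; hence, for the Levi-Civita connection, i.e., $\I=0$, both tensors vanish, and the restrictions of $\overline{\mathcal{Ric}}_{\,\mD}$ above reduce to the mixed Ricci curvature corresponding to \eqref{E-main-0ij-k}, cf. \cite{r-EH-k}. \end{Remark}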
\section{Variations with respect to contorsion tensor} \label{sec:contorsion}
Here, we consider the action \eqref{Eq-Smix-g} with fixed metric $g$ as a functional of $\I$.
The Euler-Lagrange equation for
\eqref{Eq-Smix-g}
with fixed adapted metric means that the partial gradient vanishes, \begin{equation}\label{E-delta-I-J}
\delta_\I\bar J_\mD=0\,, \end{equation} where
${\rm\frac{d}{dt}}\,\bar J_\mD(g,\I_t)_{|\,t=0} =\int_{\Omega}\langle\delta_\I \bar J_\mD, \overset{\centerdot}\I\rangle\,{\rm d}\operatorname{vol}_g$
for any variation $\I_t$ with $\overset{\centerdot}\I =\partial_t \I_{t\,|\,t=0}$. In what follows, we consider particular components of \eqref{E-delta-I-J}, defined by the distributions.
We adapt notation from \cite{rz-3} to the case of several distributions. Greek letters $\mu, \rho, \xi, \nu$ are used for indices of pairwise orthogonal distributions spanning the tangent bundle, $TM = \bigoplus_{\mu} \mD_\mu$; $\delta_{a,b} =1$ if $a=b$ and 0 otherwise.
\begin{Proposition}\label{P-03} A contorsion tensor $\I$ is critical for the action \eqref{Eq-Smix-g} with fixed adapted metric $g$ on $(M;\mD_1,\ldots,\mD_k)$
if and only if the following Euler-Lagrange equations \eqref{E-delta-I-J} hold: \begin{subequations} \begin{eqnarray} \label{ELI1} && \langle\tr_\mu^\bot{\mathfrak T}^*-\tH_\mu, {E}_{\mu, c}\rangle\delta_{a,b} +\langle\tr_\mu^\bot{\mathfrak T}+\tH_\mu, {E}_{\mu, b}\rangle\delta_{a,c}=0\,, \\ && \label{ELI2} \langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, E_{\rho, i}\rangle\delta_{a,b} -\langle (h_\mu - T_\mu) ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho, i}\rangle -\langle\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle = 0\,, \\ && \label{ELI3} \langle\tr_\mu^\bot{\mathfrak T} - H_\mu, E_{\rho, i}\rangle\delta_{a,b}+ \langle (h_\mu + T_\mu)( {E}_{\mu, b}, {E}_{\mu, a}), {E}_{\rho,i}\rangle -\langle\I_{{\rho, i}}{E}_{\mu, b}, {E}_{\mu, a}\rangle = 0\,, \\ && \label{ELI4}
\<2\,T_\mu ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho,i}\rangle +\langle\I_{{\mu, a}}{E}_{\mu, b} + \I^*_{{\mu, b}}{E}_{\mu, a}, {E}_{\rho, i}\rangle =0\,, \\ && \label{ELI5}
\<2\,{\tilde T}_\mu ({E}_{\rho, j}, {E}_{\xi, l} ), E_{\mu, a }\rangle
+\,\langle({\tilde h}_\xi + {\tilde T}_\xi )( {E}_{\rho, j}, {E}_{\mu, a} ), E_{\xi,l}\rangle
+ 2\langle\I_{{\xi, l}}{E}_{\mu, a}, {E}_{\rho, j}\rangle\nonumber \\
&&
-\,\langle ({\tilde h}_\rho + {\tilde T}_\rho )( {E}_{\xi, l}, {E}_{\mu, a} ), E_{\rho, j} \rangle
+2\langle\I_{{\rho, j}}{E}_{\xi, l}, {E}_{\mu, a}\rangle = 0\,, \end{eqnarray} \end{subequations} for all $\mu,\rho,\xi\in\{1, \ldots, k \}$, such that $\rho\ne\mu$, $\xi\notin\{\mu, \rho\}$,
and for all $a,b,c\in\{1,\ldots,n_\mu \}$, $i,j\in\{1,\ldots,n_\rho \}$ and $l \in\{1,\ldots,n_\xi \}$. \end{Proposition}
\begin{proof}
Using \eqref{E-Dk-Smix} and the formula for two complementary distributions,
see \cite[Theorem~2]{rz-3},
we~get for $k>2$ distributions the following: \begin{eqnarray} \label{dtSD1DkI}
&&\quad 2\,{\frac{\rm d}{\rm dt}\int_M \bar{\rm S}_{\mD_1, \ldots \mD_k }(\I_t)\,{\rm d} \operatorname{vol}_g}\,|_{\,t=0} =\frac12\int_M \sum\nolimits_{\mu } \sum\Big\{\langle\overset{\centerdot}\I_{{\mu,a}}{E}_{\mu, b},{E}_{\mu, c}\rangle\times\nonumber\\ &&\hskip-1mm \times\big(\langle\tr_\mu^\bot{\mathfrak T}^*-\tH_\mu, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\mu^\bot{\mathfrak T}+\tH_\mu, {E}_{\mu, b}\rangle\delta_{a,c}
\big)\nonumber\\ &&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\mu, b}, {\cal E}_{\mu, i}\rangle\big(\langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, {\cal E}_{\mu, i}\rangle\delta_{a,b}
-\langle({A}_{\mu, i}{-}{T}^\sharp_{\mu, i}){E}_{\mu, a}, {E}_{\mu, b}\rangle - \langle\I_{\mu, i}{E}_{\mu, a}, {E}_{\mu, b}\rangle \big)\nonumber\\ &&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\mu, a}}{\cal E}_{\mu, i}, {E}_{\mu, b}\rangle\big(\langle\tr_\mu^\bot{\mathfrak T}{-} H_\mu, {\cal E}_{\mu, i}\rangle\delta_{a,b}
+ \langle({A}_{\mu, i}+{T}^\sharp_{\mu, i}){E}_{\mu, b}, {E}_{\mu, a}\rangle - \langle\I_{\mu, i}{E}_{\mu, b}, {E}_{\mu, a}\rangle\big)\nonumber\\ &&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\mu, a}}{\cal E}_{\mu, i}, {\cal E}_{\mu, j}\rangle \big(\langle(\tilde{A}_{\mu, a}{-} \tilde{T}^\sharp_{\mu, a}){\cal E}_{\mu, i}, {\cal E}_{\mu, j}\rangle {-}\langle(\tilde{A}_{\mu, a}{+} \tilde{T}^\sharp_{\mu, a}){\cal E}_{\mu, i}, {\cal E}_{\mu, j}\rangle {-} \langle\I_{\mu, i}{\cal E}_{\mu, j}{+} \I^*_{\mu, j}{\cal E}_{\mu, i}, {E}_{\mu, a}\rangle \big)\nonumber\\
&&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\cal E}_{\mu, i}}{\cal E}_{\mu, j}, {\cal E}_{\mu, l}\rangle\big(\langle\tr_\mu^\top{\mathfrak T}^* - H_\mu, {\cal E}_{\mu, l}\rangle\delta_{i,j}
+\langle\tr_\mu^\top{\mathfrak T} +H_\mu, {\cal E}_{\mu, j}\rangle\delta_{i,l}
\big)\nonumber\\ &&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\cal E}_{\mu, i}}{\cal E}_{\mu, j}, {E}_{\mu,a}\rangle\big(\langle\tr_\mu^\top{\mathfrak T}^* +\tH_\mu, {E}_{\mu, a}\rangle\delta_{i,j}
-\langle(\tilde{A}_{\mu, a}+\tilde{T}^\sharp_{\mu, a}){\cal E}_{\mu, j}, {\cal E}_{\mu, i}\rangle -\langle\I_{\mu, a}{\cal E}_{\mu, i}, {\cal E}_{\mu, j}\rangle \big) \nonumber\\ &&\hskip-1mm +\,\langle\overset{\centerdot}\I_{{\cal E}_{\mu, i}}{E}_{\mu, a}, {\cal E}_{\mu,j}\rangle\big(\langle\tr_\mu^\top{\mathfrak T}-\tH_\mu, {E}_{\mu, a}\rangle\delta_{i,j}
+\langle(\tilde{A}_{\mu, a} +\tilde{T}^\sharp_{\mu, a}){\cal E}_{\mu, j}, {\cal E}_{\mu, i}\rangle - \langle\I_{\mu, a}{\cal E}_{\mu, j}, {\cal E}_{\mu, i}\rangle \big) \nonumber\\ \nonumber && \hskip-1mm +\,\langle\overset{\centerdot}\I_{{\cal E}_{\mu, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle \big(\langle({A}_{\mu, i} - {T}^\sharp_{\mu, i}){E}_{\mu, a}, {E}_{\mu, b}\rangle -\langle({A}_{\mu, i} + {T}^\sharp_{\mu, i}){E}_{\mu, a}, {E}_{\mu, b}\rangle \\ &&\hskip-1mm -\,\langle\I_{\mu, a}{E}_{\mu, b} + \I^*_{\mu, b}{E}_{\mu, a}, {\cal E}_{\mu, i}\rangle \big) \Big\}\,{\rm d}\operatorname{vol}_g\,,
\end{eqnarray} where we used the notation ${A}_{\mu, i} = ({A}_{\mu})_{ {\cal E}_{\mu , i} }$ and $\tilde{A}_{\mu, a} = (\tilde{A}_{\mu})_{E_{\mu, a}}$, etc.
In \eqref{dtSD1DkI}, terms with the same coefficients of the tensor $\overset{\centerdot}\I$ appear in different forms in the sum over $\mu$: e.g., the term $\langle\overset{\centerdot}\I_{{\cal E}_{1, i}}{E}_{1, a}, {E}_{1, b}\rangle$, where ${\cal E}_{1, i} \in \mD_2$, coincides with a term $\langle\overset{\centerdot}\I_{{2, a}}{\cal E}_{2, i}, {\cal E}_{2, j}\rangle$ and with terms $\langle\overset{\centerdot}\I_{{\cal E}_{\mu, l}}{\cal E}_{\mu, i}, {\cal E}_{\mu, j}\rangle$ for $\mu \notin \{1,2\}$.
To~relate the indices of various elements $\{E_{\mu, a}\}$ and $\{{\cal E}_{\nu, i}\}$ of the whole frame,
let $\iota(\nu,\mu,a)$ be such that $E_{\mu, a} = {\cal E}_{\nu,\,\iota(\nu,\mu,a)}$.
Since $\overset{\centerdot}\I$ is arbitrary, the equality $\frac{\rm d}{\rm dt}\,\bar J_{\mD}(g,\I_t)\,|_{\,t=0} =0$ is valid for all $\I_t$ if and only if all coefficients of terms with $\overset{\centerdot}\I$ in \eqref{dtSD1DkI} vanish. For fixed $\mu$ and $a,b,c \in \{1, \ldots, n_\mu \}$, we consider the term of \eqref{dtSD1DkI} with $ \langle\overset{\centerdot}\I_{{\mu,a}}{E}_{\mu, b},{E}_{\mu, c}\rangle$, which comes from one term with $\langle\overset{\centerdot}\I_{{\mu,a}}{E}_{\mu, b},{E}_{\mu, c}\rangle$ and $k-1$ terms with $\langle\overset{\centerdot}\I_{{\cal E}_{\nu, \iota(\nu,\mu,a)}}{\cal E}_{\nu, \iota(\nu,\mu,b)}, {\cal E}_{\nu, \iota(\nu,\mu,c)}\rangle$ for $\mD_\nu$ with $\nu \ne \mu$: \begin{eqnarray}\label{ELI1term} && \langle\overset{\centerdot}\I_{E_{\mu,a}}{E}_{\mu, b},{E}_{\mu, c}\rangle\,
\big(\langle\tr_\mu^\bot{\mathfrak T}^*-\tH_\mu, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\mu^\bot{\mathfrak T}+\tH_\mu, {E}_{\mu, b}\rangle\delta_{a,c}
\nonumber\\ && +\sum\nolimits_{\nu\ne\mu} \big(\langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\nu^\top{\mathfrak T} +H_\nu, {E}_{\mu, b}\rangle\delta_{a,c}
\big)\big). \end{eqnarray} On the other hand, we have \begin{eqnarray}\label{ELI1termaux} && \langle\tr_\mu^\bot{\mathfrak T}^*-\tH_\mu, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\mu^\bot{\mathfrak T}+\tH_\mu, {E}_{\mu, b}\rangle\delta_{a,c}
\nonumber \\ && = \sum\nolimits_{\nu\ne\mu} \big(\langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\nu^\top{\mathfrak T} +H_\nu, {E}_{\mu, b}\rangle\delta_{a,c}
\big). \end{eqnarray} If $\I$ is a critical point of \eqref{Eq-Smix-g} with fixed metric,
then the coefficient of $\langle\overset{\centerdot}\I_{{\mu,a}}{E}_{\mu, b},{E}_{\mu, c}\rangle$ in \eqref{dtSD1DkI}, given in \eqref{ELI1term}, is zero for every $a,b,c$, so by \eqref{ELI1term} and \eqref{ELI1termaux} the first Euler-Lagrange equation \eqref{ELI1} follows.
Let $\mu \ne \rho$;
fix $a,b \in \{ 1, \ldots, n_\mu \}$ and $i \in \{ 1, \ldots, n_\rho \}$. Then we get the following term in~\eqref{dtSD1DkI}, which comes from one term with $\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\mu, b}, {\cal E}_{\mu, \iota(\mu, \rho, i )}\rangle$, one term with $\langle\overset{\centerdot}\I_{{\cal E}_{\rho, \iota(\rho, \mu, a)}}{\cal E}_{\rho, \iota(\rho, \mu, b) }, E_{\rho, i}\rangle$ and, if $k\ge 3$, $(k-2)$ terms with $\langle\overset{\centerdot}\I_{{\cal E}_{\nu, \iota(\nu, \mu,a) }}{\cal E}_{\nu, \iota(\nu, \mu,b) }, {\cal E}_{\nu, \iota(\nu, \rho,i) }\rangle$: \begin{eqnarray} \label{ELI2termaux} && \langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\mu, b}, E_{\rho, i}\rangle \big(\langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, E_{\rho, i}\rangle\delta_{a,b}
{-}\langle (h_\mu - T_\mu) ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho, i}\rangle
{-}\langle\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle\nonumber \\ && +\,\big(\langle\tr_\rho^\top{\mathfrak T}^*+\tH_\rho, {E}_{\rho, i}\rangle\delta_{a,b}
-\langle ({\tilde h}_\rho - \tilde{T}_\rho ) ({E}_{\mu, a}, {E}_{\mu, b}), E_{\rho, i}\rangle
- \langle\I_{{\rho,i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle \big) \nonumber \\ &&
+\sum\nolimits_{\nu \notin \{\mu, \rho\}} \!\big (
\langle\tr_\nu^\top{\mathfrak T}^* {-} H_\nu, E_{\rho,i}\rangle\delta_{a,b}
+\langle\tr_\nu^\top{\mathfrak T}+ H_\nu, {E}_{\mu, b}\rangle\,\langle{E}_{\mu, a}, {E}_{\rho, i}\rangle\big)\big). \end{eqnarray} We have $\langle{E}_{\mu, a}, {E}_{\rho, i}\rangle =0$ as they belong to different, orthogonal distributions. Moreover, \begin{eqnarray*}
&& \langle ({\tilde h}_\rho - \tilde{T}_\rho ) ({E}_{\mu, a}, {E}_{\mu, b}), E_{\rho, i}\rangle = \langle( h_\mu - T_\mu ) ({E}_{\mu, a}, {E}_{\mu, b}), E_{\rho, i}\rangle,\\
&& \langle\tr_\rho^\top{\mathfrak T}^*+\tH_\rho, {E}_{\rho, i}\rangle + \sum\nolimits_{\nu \notin \{\mu, \rho\}} \langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, E_{\rho,i}\rangle =
\langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, E_{\rho, i}\rangle,\\
&&\langle\tH_\rho, E_{\rho, i}\rangle = \langle \sum\nolimits_{\nu \notin \{\mu, \rho \}}H_\nu + H_\mu, E_{\rho, i}\rangle,\quad
\tr^\perp_\mu \I^* = \sum\nolimits_{\nu \notin \{\mu, \rho \}}\tr^\top_\nu \I^* + \tr^\top_\rho \I^*\,. \end{eqnarray*} Using the above, we obtain that \eqref{ELI2termaux} vanishes for all $\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\mu, b}, E_{\rho, i}\rangle$ if and only if the second Euler-Lagrange equation \eqref{ELI2} holds.
Let $\mu \ne \rho$; for fixed $a,b \in \{ 1, \ldots, n_\mu \}$ and $i\in\{1,\ldots, n_\rho \}$ we get the following term in \eqref{dtSD1DkI}, coming from $\langle\overset{\centerdot}\I_{{\mu, a}} {\cal E}_{\mu, \iota(\mu, \rho, i )}, {E}_{\mu, b}\rangle$, $\langle\overset{\centerdot}\I_{{\cal E}_{\rho, \iota(\rho, \mu, a)}} E_{\rho, i}, {\cal E}_{\rho, \iota(\rho, \mu, b) }\rangle$ and $\langle\overset{\centerdot}\I_{{\cal E}_{\nu, \iota(\nu, \mu,a) }}{\cal E}_{\nu, \iota(\nu, \rho,i) }, {\cal E}_{\nu, \iota(\nu, \mu,b) }\rangle$:
\begin{eqnarray} \label{ELI3termaux} &&
\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\rho, i}, {E}_{\mu, b}\rangle\big(\langle\tr_\mu^\bot{\mathfrak T} {-} H_\mu, E_{\rho, i}\rangle\delta_{a,b}
{+}\langle (h_\mu + T_\mu)( {E}_{\mu, b}, {E}_{\mu, a}), {E}_{\rho,i}\rangle
{-}\langle\I_{{\rho, i}}{E}_{\mu, b}, {E}_{\mu, a}\rangle\nonumber \\ &&
+
\,\langle\tr_\rho^\top{\mathfrak T}-\tH_\rho, {E}_{\rho, i}\rangle\delta_{a,b}
+ \langle ({\tilde h}_{\rho} + {\tilde T}_\rho )(E_{\mu, b}, E_{\mu, a}), E_{\rho, i}\rangle -\langle\I_{{\rho, i}} E_{\mu, b}, E_{\mu, a}\rangle\nonumber \\
&&
+\sum\nolimits_{\nu \notin \{\mu, \rho \}}\!\big(\langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, E_{\mu,b}\rangle\,\<E_{\mu, a}, E_{\rho, i}\rangle
+\langle\tr_\nu^\top{\mathfrak T}+ H_\nu, E_{\rho, i}\rangle\delta_{a,b}
\big)\big). \end{eqnarray} We have $\<E_{\mu, a}, E_{\rho, i}\rangle =0$ and \begin{eqnarray*}
&&\langle ({\tilde h}_{\rho} + {\tilde T}_\rho )(E_{\mu, b}, E_{\mu, a}), E_{\rho, i}\rangle = \langle (h_{\mu} + T_\mu )(E_{\mu, b}, E_{\mu, a}), E_{\rho, i}\rangle,\\
&&\langle \tH_\rho, E_{\rho, i}\rangle = \langle \sum\nolimits_{\nu \notin \{\mu, \rho \}}H_\nu + H_\mu, E_{\rho, i}\rangle,\quad
\tr^\perp_\mu \I = \sum\nolimits_{\nu \notin \{\mu, \rho \}}\tr^\top_\nu \I + \tr^\top_\rho \I\,. \end{eqnarray*}
Using the above, we obtain that \eqref{ELI3termaux} vanishes for all $\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\rho, i}, {E}_{\mu, b}\rangle$ if and only if the third Euler-Lagrange equation \eqref{ELI3} holds.
Let $\mu \ne \rho$; for fixed $a,b \in \{ 1, \ldots, n_\mu \}$ and $i \in \{ 1, \ldots, n_\rho \}$ we get the following term in \eqref{dtSD1DkI}, coming from
$\langle\overset{\centerdot}\I_{{\cal E}_{\mu, \iota(\mu, \rho, i )}}{E_{\mu, a}}, {E}_{\mu, b}\rangle$, $\langle\overset{\centerdot}\I_{{\rho, i}} {\cal E}_{\rho, \iota(\rho, \mu, a)}, {\cal E}_{\rho, \iota(\rho, \mu, b) }\rangle$ and $\langle\overset{\centerdot}\I_{{\cal E}_{\nu, \iota(\nu, \rho,i)}} {\cal E}_{\nu, \iota(\nu, \mu,a) }, {\cal E}_{\nu, \iota(\nu, \mu,b) }\rangle$: \begin{eqnarray} \label{ELI4termaux} && \langle\overset{\centerdot}\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle \big(\langle -2 T_\mu ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho,i}\rangle -\langle\I_{{\mu, a}}{E}_{\mu, b}+ \I^*_{{\mu, b}}{E}_{\mu, a}, {E}_{\rho, i}\rangle \nonumber \\ &&
+\,\langle - 2\, {\tilde T}_{\rho} ({E}_{\mu,a}, {E}_{\mu,b} ), E_{\rho,i}\rangle - \langle\I_{{\mu, a}}{E}_{\mu, b} + \I^*_{{\mu, b}}{E}_{\mu, a}, {E}_{\rho, i}\rangle \nonumber \\
&& + \sum\nolimits_{\nu \notin \{\mu, \rho \}}\big(\langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, {E}_{\mu,b}\rangle\,\langle{E}_{\rho, i}, {E}_{\mu, a}\rangle +\langle\tr_\nu^\top{\mathfrak T} +H_\nu, {E}_{\mu, a}\rangle\,\langle{E}_{\rho, i}, {E}_{\mu, b}\rangle \big)\big). \end{eqnarray} Using $\langle{E}_{\rho, i}, {E}_{\mu, a}\rangle =0=\langle{E}_{\rho, i}, {E}_{\mu, b}\rangle$, as $\mD_\mu \perp \mD_\rho$, and
$\langle {\tilde T}_{\rho} ({E}_{\mu,a}, {E}_{\mu,b} ), E_{\rho,i}\rangle = \langle T_{\mu} ({E}_{\mu,a}, {E}_{\mu,b} ), E_{\rho,i}\rangle$,
we
obtain that \eqref{ELI4termaux} vanishes for all $\langle\overset{\centerdot}\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle$ if and only if the Euler-Lagrange equation \eqref{ELI4} holds.
Finally, let $\mu \ne \rho \ne \xi \ne \mu$;
for fixed $a\in\{ 1, \ldots, n_\mu \}$, $j \in \{ 1, \ldots, n_\rho \}$ and $l \in \{ 1, \ldots, n_\xi \}$ we get the following term in \eqref{dtSD1DkI}, coming from
$\langle\overset{\centerdot}\I_{{\mu, a}} {\cal E}_{\mu, \iota(\mu, \rho, j) }, {\cal E}_{\mu, \iota(\mu, \xi,l ) }\rangle$,
$\langle\overset{\centerdot}\I_{{\cal E}_{\xi, \iota(\xi, \mu, a)}}{\cal E}_{\xi, \iota(\xi, \rho,j ) }, E_{\xi, l}\rangle$,
$\langle\overset{\centerdot}\I_{{\cal E}_{\rho, \iota(\rho, \mu, a)}} E_{\rho, j}, {\cal E}_{\rho, \iota(\rho, \xi,l ) }\rangle$ and $\langle\overset{\centerdot}\I_{{\cal E}_{\nu, \iota(\nu, \mu, a)}} {\cal E}_{\nu, \iota(\nu, \rho,j) }, {\cal E}_{\nu, \iota(\nu, \xi, l) }\rangle$:
\begin{eqnarray*}
&& \langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\rho, j}, {E}_{\xi, l}\rangle \big(
\langle - 2 {\tilde T}_\mu ({E}_{\rho, j}, {E}_{\xi, l} ), E_{\mu, a }\rangle
- \langle\I_{{\rho, j}}{E}_{\xi, l} + \I^*_{{\xi, l}}{E}_{\rho, j}, {E}_{\mu, a}\rangle \\
&& +\,\langle\tr_\xi^\top{\mathfrak T}^*+\tH_\xi, {E}_{\xi,l}\rangle \,\langle{E}_{\mu, a}, {E}_{\rho, j}\rangle -\langle ({\tilde h}_\xi + {\tilde T}_\xi )( {E}_{\rho, j}, {E}_{\mu, a} ), E_{\xi,l}\rangle -\langle\I_{{\xi, l}}{E}_{\mu, a}, {E}_{\rho, j}\rangle \\
&&
+\,\langle\tr_\rho^\top{\mathfrak T}-\tH_\rho, {E}_{\rho, j}\rangle\,\langle{E}_{\mu, a}, {E}_{\xi, l}\rangle + \langle ({\tilde h}_\rho + {\tilde T}_\rho )( {E}_{\xi, l}, {E}_{\mu, a} ), E_{\rho, j} \rangle
- \langle\I_{{\rho, j}}{E}_{\xi, l}, {E}_{\mu, a}\rangle \\ &&
+ \sum\nolimits_{\nu \notin \{\mu, \rho, \xi \}}\big(\langle\tr_\nu^\top{\mathfrak T}^* - H_\nu, {E}_{\xi,l}\rangle\,\langle{E}_{\mu, a}, {E}_{\rho, j}\rangle +\langle\tr_\nu^\top{\mathfrak T} +H_\nu, {E}_{\rho, j}\rangle\,\langle{E}_{\mu, a}, {E}_{\xi, l}\rangle \big) \big) . \end{eqnarray*} As $\mD_\mu, \mD_\rho, \mD_\xi$ are pairwise orthogonal, it reduces to the following: \begin{eqnarray} \label{ELI5termaux}
&&\hskip-5mm \langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\rho, j}, {E}_{\xi, l}\rangle \big(
\langle - 2 {\tilde T}_\mu ({E}_{\rho, j}, {E}_{\xi, l} ), E_{\mu, a }\rangle - \langle\I_{{\rho, j}}{E}_{\xi, l} + \I^*_{{\xi, l}}{E}_{\rho, j}, {E}_{\mu, a}\rangle -\langle\I_{{\xi, l}}{E}_{\mu, a}, {E}_{\rho, j}\rangle \nonumber \\
&&\hskip-5mm -\langle ({\tilde h}_\xi + {\tilde T}_\xi)( {E}_{\rho, j}, {E}_{\mu, a} ), E_{\xi,l}\rangle
+\,\langle ({\tilde h}_\rho + {\tilde T}_\rho )( {E}_{\xi, l}, {E}_{\mu, a} ), E_{\rho, j} \rangle - \langle\I_{{\rho, j}}{E}_{\xi, l}, {E}_{\mu, a}\rangle \big) . \end{eqnarray} Using $ \langle\I^*_{{\xi, l}}{E}_{\rho, j}, {E}_{\mu, a}\rangle = \langle\I_{{\xi, l}}{E}_{\mu, a}, {E}_{\rho, j}\rangle $, we obtain that \eqref{ELI5termaux} vanishes for all $\langle\overset{\centerdot}\I_{{\mu, a}}{E}_{\rho, j}, {E}_{\xi, l}\rangle$ if and only if the Euler-Lagrange equation \eqref{ELI5} holds.
Equations (\ref{ELI1}-e) are indeed all components of \eqref{E-delta-I-J}, because we considered all coefficients of $\langle \overset{\centerdot}\I_X Y,Z\rangle$, where $X,Y,Z$ are from an orthonormal frame: either all $X,Y,Z$ are from the same distribution, which yields \eqref{ELI1}; or exactly two of them are from the same distribution, and we get (\ref{ELI2}-d); or each of them is from a different distribution, and we obtain \eqref{ELI5}.
\end{proof}
On a manifold with two orthogonal distributions, $TM = \mD_1 \oplus \mD_2$, i.e., $\mD_2=\mD_1^\bot$, we have only equations (\ref{ELI1}-d) for $\mu=1,2$, which were obtained in \cite{rz-connections}.
Similarly to \cite[Theorem~1]{rz-connections} for $k=2$,
we conclude the following
\begin{Theorem}\label{corI} A contorsion tensor $\I$ is critical for the action \eqref{Eq-Smix-g} with fixed adapted metric on $(M;\mD_1,\ldots,\mD_k)$ for all variations of \,$\I$ if and only if all $\mD_\mu$ are {totally umbilical} and $\I$ satisfies the following linear algebraic system for all $X,Y\in\mD_\mu$, $U\in\mD_\mu^\perp$ and all $\mu =1, \ldots, k$: \begin{subequations} \begin{eqnarray}
\label{ELconnectionNew2} && P_\mu \tr_\mu^\bot\I^* = \tH_\mu = - P_\mu \tr_\mu^\bot\I \quad {\rm if~} n_\mu > 1\,, \\
\label{ELconnectionNewI4} && \langle (\I -\I^{*})_U X, Y\rangle = 2\, \langle {T}_\mu (X,Y), U\rangle\,, \\
\label{ELconnectionNew5} && \langle (\I + \I^{*})_U X, Y\rangle = \langle\tr_\mu^\bot(\I +\I^*),\,U\rangle \langle X,Y\rangle\,, \\
\label{ELconnectionNewI7} && P_\mu^\perp \tr_\mu^\bot(\I -\I^*) = (2-2/n_\mu)\,H_\mu\,, \\
\label{ELconnectionNewI8} && P_\mu^\perp (\I_X\, Y +\I^{*}_Y\, X) = -2\,T_\mu (X, Y)\,,
\end{eqnarray} \end{subequations} moreover, for all $X \in \mD_\mu$, $Y \in \mD_\rho, Z \in \mD_\xi$, where $\mu \ne \rho \ne \xi \ne \mu$, we get \begin{equation} \label{ELI5XYZ}
\langle\I_Y Z, X\rangle + \langle\I_{Z}X, Y\rangle = 0. \end{equation} \end{Theorem}
\begin{proof}
Taking the difference of symmetric parts of (\ref{ELI2},c) we get the total umbilicity of each $\mD_\mu$;
\eqref{ELconnectionNew2} follows from \eqref{ELI1};
taking antisymmetric part of \eqref{ELI2} we get \eqref{ELconnectionNewI4};
the sum of \eqref{ELI2} and \eqref{ELI3} yields \eqref{ELconnectionNew5};
taking the difference of \eqref{ELI2} and \eqref{ELI3} with interchanged $E_{\mu, a}, E_{\mu, b}$ we get \eqref{ELconnectionNewI7}, and \eqref{ELconnectionNewI8} follows from \eqref{ELI4}. Finally, from \eqref{ELI5} we~get \[
-2\,\langle{\tilde T}_\mu (Y, Z ), X\rangle
-\langle ({\tilde h}_\xi + {\tilde T}_\xi )( Y, X ), Z\rangle
+\langle ({\tilde h}_\rho + {\tilde T}_\rho )( Z, X ), Y\rangle = 2\langle\I_Y Z, X\rangle + 2\langle\I_{Z}X, Y\rangle\,, \] which is simplified to \eqref{ELI5XYZ}. \end{proof}
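\begin{Remark}\rm As a consistency check of Theorem~\ref{corI}, consider $\I=0$, i.e., $\bar\nabla$ is the Levi-Civita connection. Then \eqref{ELconnectionNewI4} and \eqref{ELconnectionNewI8} yield $T_\mu=0$, i.e., all $\mD_\mu$ are integrable, while \eqref{ELconnectionNew2} and \eqref{ELconnectionNewI7} yield $\tH_\mu=0=H_\mu$ whenever $n_\mu>1$; combined with the total umbilicity of $\mD_\mu$, this means that every $\mD_\mu$ with $n_\mu>1$ is totally geodesic. This agrees with Corollary~\ref{corstatcritforall} below, since $\I=0$ is a trivial example of a statistical contorsion tensor. \end{Remark}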
\begin{Corollary}\label{corstatcritforall} A contorsion tensor $\I$ of a statistical connection is critical for
\eqref{Eq-Smix-g} with fixed adapted metric $g$ on $(M;\mD_1,\ldots,\mD_k)$
for all variations of \,$\I$ if and only if for $\mu =1, \ldots, k$: \begin{enumerate} \item \label{item1} all $\mD_\mu$ with $n_\mu>1$ are integrable and totally geodesic, \item \label{item2} $\langle \I_X Y, Z\rangle =0$ for all $X,Y \in \mD_\mu$ and $Z \in \mD^\bot_\mu$, \item \label{item3} if $n_\mu>1$ then
${\tilde H}_\mu =0$, \item \label{item5} $\tr^\perp_\mu \I =0 = \tr^\top_\mu \I$,
\item \label{item4} $\langle \I_X Y, Z\rangle =0$ whenever $X,Y,Z$ belong to three different distributions. \end{enumerate} \end{Corollary}
\begin{proof} We use properties of a statistical connection: $\I = \I^* = \I^\wedge$ in Theorem \ref{corI} to prove necessity of the above conditions, their sufficiency is easily verified. Claim \ref{item1} follows from \eqref{ELconnectionNewI4}, \eqref{ELconnectionNewI7} and Theorem \ref{corI}; claim \ref{item2} follows from claim \ref{item1} and \eqref{ELconnectionNewI8};
for the first equality of claim \ref{item5} we use claim \ref{item2} to get $P_\mu \tr_\mu^\perp \I=0$ and claim \ref{item2} with \eqref{ELconnectionNew5} to get $P_\mu^\perp \tr_\mu^\perp \I = 0$, the second equality of claim \ref{item5} follows from the first one, as for $k \ge 2$ we get \begin{eqnarray*}
\sum\nolimits_{\nu\ne\mu} \tr_\nu^\perp \I &=& (k-1) \tr^\top_\mu \I + (k-2)\sum\nolimits_{\nu\ne\mu} \tr^\top_\nu \I \\
&=& (k-1) \tr^\top_\mu \I + (k-2) \tr^\perp_\mu \I, \quad \mu = 1, \ldots, k\,. \end{eqnarray*} Claim \ref{item3} follows from \eqref{ELconnectionNew2} and claim \ref{item5}. Finally, claim \ref{item4} follows from \eqref{ELI5XYZ}. \end{proof}
\begin{Remark} \rm For the action \eqref{Eq-Smix-g} (with fixed adapted metric) restricted to contorsion tensors of metric-compatible connections, all equations of Theorem~\ref{corI} remain true, with $\I = -\I^*$.
For \eqref{ELI5XYZ} this follows from the fact that \eqref{ELI5} is antisymmetric in $E_{\rho, j}, E_{\xi, k}$ when $\I = - \I^*$, and for other equations of Theorem \ref{corI} it follows in the same way as in \cite[Theorem 2]{rz-connections}. \end{Remark}
An important class of metric connections consists of those with totally skew-symmetric torsion \cite{AF}, for which we have \begin{equation}\label{totallyskewsymmetrictorsion}
\I = - \I^\wedge\,. \end{equation}
\begin{Corollary} Let $\I$ be the contorsion tensor of a connection with totally skew-symmetric torsion that is critical for the action \eqref{Eq-Smix-g} with fixed $g$. Then
all $\mD_\mu$ such that $\dim \mD_\mu >1$ are {totally geodesic} and integrable, and $\I=0$.
\end{Corollary}
\begin{proof} By Theorem \ref{corI}, all distributions $\mD_\mu$ are {totally umbilical}. Let $\dim \mD_\mu>1$, then using \eqref{totallyskewsymmetrictorsion} and $\I = -\I^*$ in \eqref{ELconnectionNewI7}, we obtain $H_\mu=0$, i.e., $\mD_\mu$ is totally geodesic. From (\ref{ELconnectionNewI4},e) together with \eqref{totallyskewsymmetrictorsion} and $\I = -\I^*$ we obtain $T_\mu =0$, i.e., $\mD_\mu$ is integrable. Using \eqref{totallyskewsymmetrictorsion}, we get $\I_X X =0$ for all $X \in TM$, and from (\ref{ELconnectionNewI4},e) it follows that $\langle \I_U X,Y \rangle =0 = \langle \I_X Y , U \rangle$ for all $X,Y\in\mD_\mu$, $U\in\mD_\mu^\perp$ and all $\mu =1, \ldots, k$.
Let $X \in \mD_\mu$, $Y \in \mD_\rho, Z \in \mD_\xi$, where $\mu \neq \rho \neq \xi \neq \mu$. By \eqref{totallyskewsymmetrictorsion} and $\I = -\I^*$, we get in~\eqref{ELI5XYZ}: \[ 0 = \langle \I_Y Z ,X\rangle + \langle \I_Z X ,Y\rangle = - \langle \I_Z Y ,X\rangle + \langle \I_Z X ,Y\rangle = 2 \langle \I_Z X ,Y\rangle . \] Hence, all components of $\I$ vanish.
\end{proof}
For the action \eqref{Eq-Smix-g} with fixed adapted $g$
restricted to contorsion tensors of statistical connections, we obtain the following generalization of \cite[Corollary 7]{rz-connections}.
\begin{Theorem}\label{statisticalcritSmixI} A contorsion tensor $\I$ of a statistical connection on $(M;\mD_1,\ldots,\mD_k)$ with fixed adapted metric $g$ is critical for the action \eqref{Eq-Smix-g}
with respect to variations of \,$\I$ corresponding to statistical connections if and only if the following system is valid: \begin{subequations} \begin{equation} \label{ELSmixIstat1}
P_\mu \tr_\mu^\perp \I = 0,\quad \mu = 1, \ldots, k\,, \end{equation}
and for $\mu \ne \rho \ne \xi \ne \mu$ and all $X,U \in \mD_\mu$, $Y \in \mD_\rho$ and $Z \in \mD_\xi$ we get \begin{eqnarray} \label{ELSmixIstat2}
&&
P_\mu^\bot (2\,\I_X U + \<X,\,U\rangle\tr_\mu^\perp \I ) = 0\,,\\
\label{newELIstat}
&& \langle\I_X Y, Z\rangle =0\,. \end{eqnarray} \end{subequations} \end{Theorem}
\begin{proof} For variations $\overset{\centerdot} \I$
corresponding to statistical connections, we have the following symmetries:
$\langle \overset{\centerdot}\I_X Y, Z\rangle = \langle \overset{\centerdot}\I_Y X, Z\rangle = \langle\overset{\centerdot}\I_X Z, Y\rangle,\ X,Y,Z\in TM$.
It follows that instead of \eqref{ELI1}, the first Euler-Lagrange equation is the sum of \eqref{ELI1} over all permutations of $({E}_{\mu, a}, {E}_{\mu, b}, {E}_{\mu, c} )$ -- from that and $\I^* = \I$, we get
$\langle\tr_\mu^\bot{\mathfrak T}, {E}_{\mu, c}\rangle\delta_{a,b}
+\langle\tr_\mu^\bot{\mathfrak T}, {E}_{\mu, b}\rangle\delta_{a,c}
=0$,
and considering either the case ${E}_{\mu, a} \ne {E}_{\mu, b} \ne {E}_{\mu, c}$ or the case where two of the above are equal, we obtain \eqref{ELSmixIstat1}.
Similarly, instead of three separate Euler-Lagrange equations (\ref{ELI2}-d)
we now have one Euler-Lagrange equation that is their sum, symmetrized in ${E}_{\mu, a}, {E}_{\mu, b}$, i.e., \begin{eqnarray*}
&& \langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, E_{\rho, i}\rangle\delta_{a,b}
-\langle (h_\mu - T_\mu) ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho, i}\rangle - \langle\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle\nonumber \\ && +\,\langle\tr_\mu^\bot{\mathfrak T} - H_\mu, E_{\rho, i}\rangle\delta_{a,b}
+ \langle (h_\mu + T_\mu)( {E}_{\mu, b}, {E}_{\mu, a}), {E}_{\rho,i}\rangle - \langle\I_{{\rho, i}}{E}_{\mu, b}, {E}_{\mu, a}\rangle\nonumber\\ && -\,\langle 2\, T_\mu ({E}_{\mu, a}, {E}_{\mu, b} ), E_{\rho,i}\rangle -\langle\I_{{\mu, a}}{E}_{\mu, b}+ \I^*_{{\mu, b}}{E}_{\mu, a}, {E}_{\rho, i}\rangle\nonumber \\
&& +\, \langle\tr_\mu^\bot{\mathfrak T}^* + H_\mu, E_{\rho, i}\rangle\delta_{a,b}
-\langle (h_\mu - T_\mu) ({E}_{\mu, b}, {E}_{\mu, a} ), E_{\rho, i}\rangle - \langle\I_{{\rho, i}}{E}_{\mu, b}, {E}_{\mu, a}\rangle\nonumber \\ && +\, \langle\tr_\mu^\bot{\mathfrak T} - H_\mu, E_{\rho, i}\rangle\delta_{a,b}
+ \langle (h_\mu + T_\mu)( {E}_{\mu, a}, {E}_{\mu, b}), {E}_{\rho,i}\rangle - \langle\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle\nonumber\\ && -\, \langle 2\, T_\mu ({E}_{\mu, b}, {E}_{\mu, a} ), E_{\rho,i}\rangle - \langle\I_{{\mu, b}}{E}_{\mu, a} + \I^*_{{\mu, a}}{E}_{\mu, b}, {E}_{\rho, i}\rangle =0\,. \end{eqnarray*} Using $\I^* = \I$ and $\I_X Y = \I_Y X$ for $X,Y \in TM$ and dividing by 4, we get
$\langle\tr_\mu^\bot{\mathfrak T}, E_{\rho, i}\rangle\delta_{a,b}
=- 2\,\langle\I_{{\rho, i}}{E}_{\mu, a}, {E}_{\mu, b}\rangle$\,,
and hence \eqref{ELSmixIstat2}.
Equation \eqref{newELIstat} follows from the fact that due to symmetries of $\overset{\centerdot}\I$ for variations corresponding to statistical connections, instead of \eqref{ELI5} we get \begin{eqnarray} \label{ELI5tsum} && \sum \big(-\langle 2\, {\tilde T}_\mu ({E}_{\rho, j}, {E}_{\xi, k} ), E_{\mu, a }\rangle \nonumber
-\,\langle ({\tilde h}_\xi + {\tilde T}_\xi )( {E}_{\rho, j}, {E}_{\mu, a} ), E_{\xi,k}\rangle \nonumber \\
&&
-\,2\,\langle\I_{{\xi, k}}{E}_{\mu, a}, {E}_{\rho, j}\rangle
+\langle ({\tilde h}_\rho + {\tilde T}_\rho )( {E}_{\xi, k}, {E}_{\mu, a} ), E_{\rho, j}\rangle - 2\,\langle\I_{{\rho, j}}{E}_{\xi, k}, {E}_{\mu, a}\rangle \big) =0\,, \end{eqnarray} where the sum is over all permutations of $( E_{\mu, a}, E_{\rho,j}, E_{\xi, k} )$. Since all ${\tilde h}$- and ${\tilde T}$- terms can be canceled out, \eqref{ELI5tsum} reduces to \eqref{newELIstat}. \end{proof}
Existence of solutions of the Euler-Lagrange equations obtained in this section will be discussed in more detail in subsequent parts of the article, along with variations of the metric.
\section{Extension of the class of Einstein metrics} \label{sectionEinstein}
In this section, we assume that all distributions in \eqref{Eq-mD-k-TM} are one-dimensional; then \eqref{Eq-Smix-g} is (up to a constant factor of $2$) the geometrical part of the Einstein-Hilbert action: \begin{equation}\label{Eq-EH}
\bar J : (g, \I) \mapsto \int_{M} \overline{\rm S}\ {\rm d}\operatorname{vol}_g\,, \end{equation} where $\overline{\rm S}$ is the scalar curvature of ${\bar \nabla} = \nabla + \I$ on $(M,g)$. Thus, the action \eqref{Eq-Smix-g} restricted to adapted metrics allows us to extend the class of Einstein metrics. In particular, we obtain critical pairs $(g, \I)$ for \eqref{Eq-Smix-g} with non-Einstein metrics of constant scalar curvature for $k=3$ and for $k>3$ using Hadamard matrices. There is a rich literature on geometric constructions of metrics of constant scalar curvature, which we will not discuss.
\begin{Proposition}\label{propEinstein} Let a pair $(g, \I)$ be critical for the
action \eqref{Eq-EH}. Then on any open set $\Omega \subset M$, on which we have a decomposition \eqref{Eq-mD-k-TM} of $TM$ into the sum of one-dimensional distributions $\mD_\mu$, the Euler-Lagrange equations \eqref{E-delta-g-J} and \eqref{E-delta-I-J}, given in detail in Theorem~\ref{T-main01} and Proposition~\ref{P-03}, are satisfied. \end{Proposition}
\begin{proof} For one-dimensional distributions $\mD_1 , \ldots , \mD_k$ we have $ 2\, \overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = \overline{\rm S}\,$. Hence, if $g$ is critical for the action \eqref{Eq-EH},
then it is also critical for the action \eqref{Eq-Smix-g} with respect to compactly supported adapted variations.
Therefore, it satisfies
\eqref{E-delta-g-J} and \eqref{E-delta-I-J}. Notice that while the equations \eqref{E-delta-g-J} and \eqref{E-delta-I-J} are pointwise, \eqref{E-delta-g-J} contains covariant derivatives of quantities describing the geometry of the distributions (e.g., $\Div{h}_\mu$), and in order to make sense it requires the distributions to be defined on some open set. \end{proof}
Using the formulation of the Euler-Lagrange equation \eqref{E-delta-I-J} given in Theorem \ref{corI}, from Proposition~\ref{propEinstein} we obtain the following.
\begin{Corollary} Let a pair $(g, \I)$ be critical for the
action \eqref{Eq-EH} on a smooth manifold $M$. Then for any orthonormal vector fields $X,Y,Z$ we obtain \eqref{ELI5XYZ}.
\end{Corollary}
If all distributions are one-dimensional, \eqref{ELI5XYZ} is in fact the only restriction from Theorem \ref{corI} for critical metric connections.
On the other hand, adapted variations of metric on an almost product manifold
can also be applied to the Einstein-Hilbert action. We first discuss it as a functional of the metric only; in Remark \ref{remarkEinsteinmetricconnection}, at the end of this section, we show how our results generalize to an arbitrary metric connection. According to \cite[Proposition 2.3(2)]{RN-21}, $g$ is critical with respect to adapted variations of metric for the action \begin{equation}\label{EHonlyg}
J: g \mapsto \int_M {\rm S}\ {\rm d}\operatorname{vol}_g\,, \end{equation} where ${\rm S}$ is the scalar curvature of $(M,g)$,
if and only if \begin{equation}\label{E-3dim}
{\rm Ric}|_{\mD_\mu \times \mD_\mu} = \lambda\,g |_{\mD_\mu \times \mD_\mu},\quad
\mu =1 , \ldots , k\,, \end{equation} where ${\rm Ric}$ is the Ricci tensor and the constant $k\lambda$ is the scalar curvature of $(M,g)$.
Such critical, non-Einstein metrics can be found, for example, as follows.
\begin{Example} \label{ex3dim}
\rm The product of a surface of constant curvature $K\ne0$ and a real line or a~circle is a homogeneous space $(M^3,g)$ of scalar curvature $2K$. Let $\partial_x,\partial_y,\partial_t$ be an adapted orthonormal frame on $(M,g)$. Then ${\rm Ric}_{xx}={\rm Ric}_{yy}=K$ and ${\rm Ric}_{tt}=0$. For $\partial_1=\cos\alpha\,\partial_t+\sin\alpha\,\partial_x$ we find ${\rm Ric}_{11}(\alpha)=K\sin^2\alpha$. Thus, ${\rm Ric}_{11}(\alpha)=2K/3$ for $\alpha=\arccos(1/\sqrt3)$. Set $\partial_2=-a\,\partial_t+b\,\partial_x+c\,\partial_y$ and $\partial_3=-a\,\partial_t+b\,\partial_x-c\,\partial_y$ for positive numbers $a,b,c$. From $1=\langle\partial_2,\partial_2\rangle=\langle\partial_3,\partial_3\rangle$, we find $a^2+b^2+c^2=1$. Equating ${\rm Ric}_{22}={\rm Ric}_{33}=K(b^2+c^2)$ to $2K/3$ and using $b^2+c^2=1-a^2$, we obtain $a=1/\sqrt3$. Then, from $0=\langle\partial_1,\partial_2\rangle=\langle\partial_1,\partial_3\rangle$ we find $b=a\cos\alpha/\sin\alpha=1/\sqrt 6$. Thus, $c=1/\sqrt2$.
Condition \eqref{E-3dim} is then valid on $(M^3,g)$ for three distributions spanned by $\partial_1,\partial_2,\partial_3$.
\end{Example}
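The computations in Example \ref{ex3dim} can be verified numerically. The following NumPy sketch (not part of the argument; the value $K=2$ is an arbitrary sample choice) checks that $\partial_1,\partial_2,\partial_3$ form an orthonormal frame and that all three diagonal Ricci values equal $2K/3$:

```python
import numpy as np

K = 2.0  # Gaussian curvature of the surface factor (any K != 0 works)
# Ricci tensor of (surface of curvature K) x R in the orthonormal
# frame (d/dx, d/dy, d/dt): Ric = diag(K, K, 0).
Ric = np.diag([K, K, 0.0])

a, b, c = 1/np.sqrt(3), 1/np.sqrt(6), 1/np.sqrt(2)
alpha = np.arccos(1/np.sqrt(3))

# Rows are the components of d1, d2, d3 in the frame (d/dx, d/dy, d/dt):
F = np.array([
    [np.sin(alpha), 0.0, np.cos(alpha)],  # d1 = cos(alpha) dt + sin(alpha) dx
    [b,  c, -a],                          # d2 = -a dt + b dx + c dy
    [b, -c, -a],                          # d3 = -a dt + b dx - c dy
])

assert np.allclose(F @ F.T, np.eye(3))             # the frame is orthonormal
assert np.allclose(np.diag(F @ Ric @ F.T), 2*K/3)  # Ric_11 = Ric_22 = Ric_33 = 2K/3
```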
\begin{Example} \label{ex4dim} \rm Let $(M,g)$ be a 4-dimensional Riemannian manifold of constant scalar curvature $c$ and let $X_\mu\ (\mu=1,2,3,4)$ be orthonormal smooth vector fields on $M$ such that ${\rm Ric}^\sharp(X_\mu) = f_\mu X_\mu$ for $f_\mu \in C^\infty(M)$.
We define the following vector fields: $Y_1 = ( X_1 + X_2 + X_3 + X_4 )/2$, $Y_2 = ( -X_1 + X_2 - X_3 + X_4 )/2$, $Y_3 = ( -X_1 - X_2 + X_3 + X_4 )/2$, $Y_4 = ( X_1 - X_2 - X_3 + X_4)/2$. Then ${\rm Ric}(Y_\mu , Y_\mu ) = \frac{1}{4}\sum_{\,\nu=1}^{\,4} f_\nu = \frac{c}{4}$ for $\mu=1,2,3,4$, and $\{Y_\mu\}$ are orthonormal. We define four one-dimensional distributions $\mD_\mu$, each spanned by $Y_\mu$. Then $g$ is a critical point of the Einstein-Hilbert action with respect to variations of metric preserving the almost product structure $TM = \bigoplus_{\mu=1}^4 \mD_\mu$, but $g$ does not need to be Einstein.
\end{Example}
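A quick numerical check of Example \ref{ex4dim} (a sketch, not part of the argument; the eigenvalues in `f` are arbitrary sample values) confirms that the frame $\{Y_\mu\}$ is orthonormal and that the four diagonal Ricci values coincide, as required by \eqref{E-3dim}:

```python
import numpy as np

# Hypothetical eigenvalues of Ric^# at a point (any values work):
f = np.array([1.0, 2.0, 5.0, -3.0])
Ric = np.diag(f)  # Ric^# in the eigenframe X_1, ..., X_4

# Rows are the components of Y_1, ..., Y_4 in the frame X_1, ..., X_4:
Y = 0.5 * np.array([
    [ 1,  1,  1, 1],
    [-1,  1, -1, 1],
    [-1, -1,  1, 1],
    [ 1, -1, -1, 1],
])

assert np.allclose(Y @ Y.T, np.eye(4))  # {Y_mu} is orthonormal
diag = np.diag(Y @ Ric @ Y.T)
assert np.allclose(diag, diag[0])       # all Ric(Y_mu, Y_mu) coincide
```

The matrix of components of $\{Y_\mu\}$ is, up to the factor $1/2$, a Hadamard matrix, which is the pattern generalized below.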
A Hadamard matrix $H_k$ is a $k \times k$ matrix with all entries in $\{-1,1\}$, such that $\frac{1}{\sqrt{k}}\,H_k$ is an orthogonal matrix. Such matrices are known to exist in some dimensions, e.g., $k=2^n$ for natural $n$, and $k=4m$ for natural $m$ such that $k<668$, see~\cite{Horadam}.
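As an illustration (not part of the argument), the following NumPy sketch builds Hadamard matrices of order $2^n$ by Sylvester's doubling construction and verifies both the orthogonality of $\frac{1}{\sqrt{k}}\,H_k$ and the diagonal-averaging property used below; the vector `r` is an arbitrary sample:

```python
import numpy as np

def sylvester_hadamard(k):
    """Sylvester's construction of a k x k Hadamard matrix (k a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < k:
        H = np.block([[H, H], [H, -H]])  # doubling step H -> [[H,H],[H,-H]]
    return H

k = 8
H = sylvester_hadamard(k)
A = H / np.sqrt(k)
assert np.allclose(A @ A.T, np.eye(k))  # (1/sqrt(k)) H_k is orthogonal

# Conjugating any diagonal matrix by A equalizes the diagonal,
# since every entry of A squares to 1/k:
r = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
assert np.allclose(np.diag(A @ np.diag(r) @ A.T), r.mean())
```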
\begin{Lemma} \label{lemRicdiagkdim} Let $(M,g)$ be a $k$-dimensional Riemannian manifold of constant scalar curvature, where $k=3$, or $k$ is such that a Hadamard matrix $H_k$ exists.
Then for any $p \in M$ there exists a decomposition
of $T_pM$ into the sum of one-dimensional orthogonal subspaces $\mD_1, \ldots ,\mD_k$ such that \eqref{E-3dim} is valid,
where $k\lambda$ is the scalar curvature of $(M,g)$.
\end{Lemma}
\begin{proof}
We can find the above decomposition satisfying \eqref{E-3dim} if and only if there exists an orthonormal frame in which the matrix of ${\rm Ric}^\sharp$ has equal diagonal elements, i.e., \begin{eqnarray}\label{matrixeq1} && \sum\nolimits_{\,j,m} a_{ij} r_{jm} a_{im} = \lambda , \quad i=1,\ldots , k\,, \\ \label{matrixeq2} && \sum\nolimits_{\,j} a_{ij}a_{mj} = \delta_{im}, \quad i,m=1,\ldots , k\,. \end{eqnarray} Here $a_{ij}$ are entries of some orthogonal matrix $A$, and $r_{jm}$ are components of ${\rm Ric}^\sharp$ in some orthonormal basis.
We can assume that $r_{jm}=r_j \delta_{jm}$ and that $r_1 , \ldots , r_k$ are not all equal (otherwise ${\rm Ric}^\sharp$ is already proportional to the identity at the given point and any orthonormal frame satisfies \eqref{matrixeq1}); then \eqref{matrixeq1}~becomes \begin{equation} \label{matrixeq1diag} \sum\nolimits_{\,j} a_{ij}^2 r_j = \lambda , \quad i=1,\ldots , k\,. \end{equation}
Suppose that $k=3$ and \begin{equation} \label{r2small}
r_2 \le r_3<r_1 \ \ {\rm or} \ \ r_2 < r_3 \leq r_1\,, \end{equation} then we get the inequalities
$0 \le \frac{r_1-2r_2+r_3}{3(r_1-r_2)} \le 1$.
Let \[ A_1 = \left(\begin{array}{ccc} \cos \alpha & -\sin \alpha & 0 \\ \sin \alpha & \cos \alpha & 0 \\ 0& 0 & 1 \\ \end{array} \right),\qquad
A_2 = \left(\begin{array}{ccc} 1& 0 & 0 \\ 0 & \cos \phi & -\sin \phi \\ 0 & \sin \phi & \cos \phi \\ \end{array} \right), \] then the matrix $A_2 A_1 {\rm Ric}^\sharp A_1^T A_2^T$, where $A_i^T$ is the transpose of matrix $A_i\ (i=1,2)$, has all diagonal elements equal if and only if $\cos^2 \alpha = \frac{r_1-2r_2+r_3}{3(r_1-r_2)}$ and $\cos^2 \phi = \frac{1}{2}$. Hence, \eqref{matrixeq1} and \eqref{matrixeq2} hold
for $A = A_2 A_1$ with $\alpha = \arccos\sqrt{\frac{r_1-2r_2+r_3}{3(r_1-r_2)}}$ and $\phi=\pi/4$.
If $k$ is such that there exists a Hadamard matrix $H_k$, then $A = \frac{1}{\sqrt{k}}\,H_k$ satisfies both \eqref{matrixeq2} and \eqref{matrixeq1diag}. Indeed, \eqref{matrixeq2} holds because $A$ is an orthogonal matrix, and \eqref{matrixeq1diag} holds because $a_{ij}^2=\frac{1}{k}$ for all $i,j = 1 , \ldots , k$. \end{proof}
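The $k=3$ construction in the proof can be checked numerically. The following NumPy sketch uses the sample eigenvalues $r_1=5$, $r_2=1$, $r_3=2$, which satisfy \eqref{r2small} (any such values work), and verifies that $A=A_2A_1$ is orthogonal and equalizes the diagonal:

```python
import numpy as np

# Sample Ricci eigenvalues with r2 <= r3 < r1:
r1, r2, r3 = 5.0, 1.0, 2.0
lam = (r1 + r2 + r3) / 3  # lambda = (scalar curvature)/3

alpha = np.arccos(np.sqrt((r1 - 2*r2 + r3) / (3*(r1 - r2))))
phi = np.pi / 4

A1 = np.array([[np.cos(alpha), -np.sin(alpha), 0],
               [np.sin(alpha),  np.cos(alpha), 0],
               [0, 0, 1]])
A2 = np.array([[1, 0, 0],
               [0, np.cos(phi), -np.sin(phi)],
               [0, np.sin(phi),  np.cos(phi)]])
A = A2 @ A1

Ric = np.diag([r1, r2, r3])
assert np.allclose(A @ A.T, np.eye(3))           # A is orthogonal
assert np.allclose(np.diag(A @ Ric @ A.T), lam)  # all diagonal entries equal lambda
```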
\begin{Remark} \rm We note that in the proof of Lemma \ref{lemRicdiagkdim}, in all considered dimensions the entries of the orthogonal matrix $A$ are either constant or, in the case $k=3$, depend smoothly on the eigenvalues of the Ricci tensor on any set where \eqref{r2small} holds. Hence, a smooth decomposition \eqref{Eq-mD-k-TM}
on a neighborhood of any point of $M$ can be obtained using these constructions. Moreover, if there exists a global orthonormal frame $X_1 , \ldots , X_k$ on $M$, such that every $X_j$ is everywhere an eigenvector of $ {\rm Ric}^\sharp$, and either: $k=3$ and $X_2$ is everywhere an eigenvector with the lowest eigenvalue of $ {\rm Ric}^\sharp$, or $k>3$ and there exists a Hadamard matrix $H_k$, then we can obtain another global frame (similarly as in Example~\ref{ex4dim}) and thus a global decomposition \eqref{Eq-mD-k-TM}.
\end{Remark}
\begin{Proposition} Let $k=3$ or $k$ be such that there exists a Hadamard matrix $H_k$. Let $M=G/H$ be a $k$-dimensional homogeneous space, where $H$ is a maximal connected Lie subgroup of the compact connected Lie group $G$. Then there exist $k$ distinct, $G$-invariant decompositions of $TM$ into one-dimensional distributions: $TM = \mD^i_1+ \ldots +\mD^i_k$, and $k$ distinct $G$-invariant metrics $g_i\ (i=1,\ldots,k)$, such that distributions $\mD^i_1, \ldots , \mD^i_k$ are pairwise $g_i$-orthogonal, and each $g_i$ is critical for the Einstein-Hilbert action with respect to variations adapted to the corresponding decomposition. Also, for $i \geq 2$ the metric $g_i$ is non-Einstein. \end{Proposition}
\begin{proof} For $i=1,\ldots,k$, let $f_i$ be a positive semidefinite bilinear form on $\mathbb{R}^k$ with $(i-1)$-dimensional kernel, and let $T_i$ be a $G$-invariant $(0,2)$-tensor on $M$ corresponding to $f_i$. According to \cite[Theorem 1.1]{Pulemotov}, for each $i=1,\ldots ,k$ there exists a $G$-invariant metric $g_i$ such that $T_i$ is its Ricci tensor. Then there exists a $G$-invariant orthonormal frame, for which ${\rm Ric}^\sharp(g_i)$ is diagonal.
By Lemma~\ref{lemRicdiagkdim}, there exists a $G$-invariant orthonormal frame, in which ${\rm Ric}^\sharp(g_i)$ has equal elements on its diagonal. Elements of this frame define the $g_i$-orthogonal decomposition $TM = \mD^i_1 \oplus \ldots \oplus \mD^i_k$ with one-dimensional distributions $\mD^i_1 , \ldots , \mD^i_k$, and $g_i$ is critical for the Einstein-Hilbert action with respect to variations adapted to this decomposition. For $i \geq 2$, the Ricci tensor of $g_i$ has non-trivial kernel, hence is not proportional to $g_i$, so the metric obtained in this case is non-Einstein. \end{proof}
\begin{Remark} \label{remarkEinsteinmetricconnection} \rm According to \cite[Eq.~(17.10)]{ap}, the Euler-Lagrange equations of the action \eqref{Eq-EH} for variations of metric are those of the action \eqref{EHonlyg} with ${\rm Ric}$ replaced by the Ricci tensor $\overline{\rm Ric}$ of the connection $\bar\nabla=\nabla+\I$. Hence, similarly to \cite[Proposition 2.3(2)]{RN-21}, for adapted variations of metric, the Euler-Lagrange equation for the action \eqref{Eq-EH} is \eqref{E-3dim} with $\overline{\rm Ric}$ instead of ${\rm Ric}$. The action \eqref{Eq-Smix-g} with all distributions one-dimensional becomes, up to a constant factor, \eqref{Eq-EH}, so in this case $\overline{\mathcal{R}ic}_{\,\mD} = \overline{\rm Ric}$ in \eqref{E-geom}.
Solutions of \eqref{E-3dim} given in Examples~\ref{ex3dim}, \ref{ex4dim} and Lemma~\ref{lemRicdiagkdim} require only constant scalar curvature and symmetry of the Ricci tensor, and therefore can be generalized to the case of metric-compatible connections (which always have symmetric Ricci tensors) with constant scalar curvature -- as long as those connections satisfy conditions of Theorem \ref{corI}, which in this case (all distributions are one-dimensional) reduce to: \begin{equation}\label{E-connection2}
\langle ( \I - \I^\wedge )_Y Z , X \rangle =0\,, \end{equation} if each of $X,Y,Z$ belongs to a different distribution among $\mD_1 , \ldots , \mD_k$. In particular, metric-compatible connections with $\I = \I^\wedge$ satisfy \eqref{E-connection2} for all decompositions \eqref{Eq-mD-k-TM}.
\end{Remark}
\section{Critical metrics and statistical connections} \label{sec:cont-stat}
In this part, we use the results of Sections~\ref{sec:adapted-metric} and \ref{sec:contorsion} and study the Euler-Lagrange equations of the action \eqref{Eq-Smix-g} with variations of both $g$ and $\I$, i.e., the conditions $\delta_g \bar J_\mD = \lambda\,g$ and $\delta_\I\bar J_\mD=0$ on the partial gradients, see \eqref{E-delta-g-J} and \eqref{E-delta-I-J}, for statistical connections. In Section~\ref{sec:contorsion-statistical} we study variations of $\I$ on locally twisted products. In Section~\ref{sec:metric-stat} we consider variations of $\I$ among tensors corresponding to statistical connections, which give more possibilities for critical~points.
\subsection{Twisted products} \label{sec:contorsion-statistical}
In this section, we consider $(M, g;\mD_1,\ldots,\mD_k)$ with $k>2$ and adapted metric such that all distributions
are pairwise mixed totally geodesic and pairwise mixed integrable. By Corollary~\ref{corstatcritforall}, for critical pairs $(g,\I)$ all distributions $\mD_\mu$ are also totally umbilical and integrable, which together with the above assumption gives us locally twisted products, see~\cite{MRS-99}.
Let $(M_1,g_{1}),\ldots,(M_{k},g_{k})$ be pseudo-Riemannian manifolds, and $n_\mu=\dim M_\mu$.
A \textit{twisted product} is the product $M=M_1\times\ldots\times M_k$ with the metric $g=u_1^2\,g_{1}\oplus\ldots\oplus u_k^2\,g_{k}$, where $u_\mu$ for $\mu\ge 1$ are smooth positive functions on $M$, and $u=(u_1,\ldots,u_k)$ is called a twist function. The submanifolds tangent to ${\cal D}_\mu\ (\mu\ge1)$ are totally umbilical with the mean curvature vectors
$H_\mu=-n_\mu P^\bot_\mu\nabla(\log u_\mu)$.
If $u$ does not depend on $M_2,\ldots, M_{k}$ and $u_1\equiv1$, then we get a \textit{warped product}, see e.g., \cite{Dimitru}.
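The formula $H_\mu=-n_\mu P^\bot_\mu\nabla(\log u_\mu)$ can be derived directly from the Koszul formula; the following is a sketch for $k=2$ and $\mu=2$ (no assumptions beyond those above):

```latex
% Let X,Y be lifts of vector fields on M_2 and U a lift from M_1, so that
% [U,X]=[U,Y]=[X,Y]^\perp_2=0 and <X,U>=<Y,U>=0. The Koszul formula reduces to
\[
 2\,\langle\nabla_X Y, U\rangle = -\,U(\langle X,Y\rangle)
 = -\,U(u_2^2)\,g_2(X,Y) = -2\,U(\log u_2)\,\langle X,Y\rangle\,,
\]
% hence h_2(X,Y) = -<X,Y> P_2^\bot\nabla(log u_2); tracing over an orthonormal
% frame of D_2 yields H_2 = -n_2 P_2^\bot\nabla(log u_2).
```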
By Proposition \ref{integrableDperp} below, all distributions tangent to the factors of the twisted product are pairwise mixed totally geodesic and mixed integrable.
Even if $\mD_\mu\ (1\le \mu\le k)$ are totally geodesic and integrable, they may not be pairwise mixed integrable, e.g., when $\mD_\mu\ (\mu=1,2,3)$ are one-dimensional distributions on $SU(2)$ defined by the standard basis of its Lie algebra.
On the other hand, we get the following consequence of \cite[Theorem~1]{RS-99}:
\begin{Proposition} \label{integrableDperp} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu^\perp$ are integrable. Then all $\mD_\mu$ are integrable, pairwise mixed integrable and pairwise mixed totally geodesic. \end{Proposition}
\begin{proof} Since $\mD_\mu=\bigcap_{\,\nu\ne\mu}\mD_\nu^\bot$, each distribution $\mD_\mu$ is integrable
as the intersection of integrable distributions. Without loss of generality, we can consider the case $k=3$. By \cite[Theorem 1]{RS-99}, $M$ is locally diffeomorphic to the product of neighborhoods in integral manifolds of $\mD_\mu$. Hence, locally, $M = M_1 \times M_2 \times M_3$, where $TM_\mu=\mD_\mu$ for $\mu=1,2,3$. Using the Koszul formula (expression of $\nabla$
explicitly in terms of the Riemannian metric) \[
2 \langle \nabla_X Y, Z\rangle = X (\<Y,Z\rangle) + Y (\<X,Z\rangle) - Z (\<Y,X\rangle) + \langle [X,Y], Z\rangle - \langle [Y,Z], X\rangle - \langle [X,Z], Y\rangle\,, \]
one can show that any pair, e.g., $\mD_1$ and $\mD_2$, is mixed totally geodesic and mixed integrable,~i.e., \[ \langle P_3 \nabla_{P^\perp_3 X}\, P^\perp_3 Y,\, Z\rangle = 0,\quad X \in TM_1,\quad Y \in TM_2,\quad Z \in TM_3, \] by extending $X,Y,Z$ to vector fields tangent to integral manifolds of $\mD_1 , \mD_2 , \mD_3$, with constant coefficients in some coordinate system.
\end{proof}
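For completeness, the last step of the proof can be written out explicitly (a sketch for coordinate vector fields, which span the distributions pointwise):

```latex
% For coordinate vector fields X in TM_1, Y in TM_2, Z in TM_3, all Lie
% brackets [X,Y], [Y,Z], [X,Z] vanish, and since g is adapted (block-diagonal),
% the functions <X,Y>, <Y,Z>, <X,Z> vanish identically; the Koszul formula
% collapses to
\[
 2\,\langle\nabla_X Y, Z\rangle
 = X(\langle Y,Z\rangle) + Y(\langle X,Z\rangle) - Z(\langle Y,X\rangle) = 0\,,
\]
% which gives both mixed total geodesy and mixed integrability of each pair.
```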
Thus, all results in this section apply to decompositions \eqref{Eq-mD-k-TM} with all $\mD_\mu^\perp$ integrable.
In the next lemma, we find variations of each term
of $2\,\bar Q(\mD_\nu,g_t,\I)$ in \eqref{E-barQ} for statistical connection on a manifold with pairwise mixed totally geodesic and pairwise mixed integrable distributions.
\begin{Lemma}\label{propdeltaQforstatistical} Let a contorsion tensor $\I$ of a statistical connection be critical for the action \eqref{Eq-Smix-g} with fixed adapted metric $g$ on $(M;\mD_1,\ldots,\mD_k)$, and let all $\mD_\mu$ be pairwise mixed totally geodesic and pairwise mixed integrable. Then for $\mD_\mu$-variations of metric we have \begin{eqnarray*}
&&\hskip-6mm 2\,{\delta_g\bar Q}_\mu({E}_{\mu,a},{E}_{\mu,b}) {=}
\langle\I_{{\mu,a}}, \I_{{\mu,b}}\rangle_{\,|\mD_\mu^\bot} {-}\langle\I_{{\mu,a}}{E}_{\mu,b}, \tr_{\,\mu}^\bot\I\rangle {-}2\,\langle(\tilde A_\mu)_{{E}_{\mu,a}},\I_{{\mu,b}}\rangle
{+} 2\langle h_\mu({E}_{\mu,b},\,\cdot), \I_{{\mu,a}}\rangle\nonumber \\
&& -\,\langle\I_{{\mu,b}}{E}_{\mu,a}, H_\mu - \tilde H_\mu\rangle -\langle\tr_{\,\mu}^\top\I, {E}_{\mu,a}\rangle\langle \tilde H_\mu, {E}_{\mu,b}\rangle
+\langle\tr_{\,\mu}^\bot\I, {E}_{\mu,b}\rangle \langle\tilde H_\mu, {E}_{\mu,a}\rangle \nonumber \\
&& +\sum\nolimits_{\,\nu\ne \mu}\big(\langle\I_{{\mu,a}}, \I_{{\mu,b}}\rangle_{\,|\mD_\nu} -\langle\I_{{\mu,a}}{E}_{\mu,b}, \tr_{\,\nu}^\top\I\rangle -2\,\langle(A_\nu)_{{E}_{\mu,a}}, \I_{{\mu,b}}\rangle +2\,\langle \tilde h_\nu({E}_{\mu,b},\,\cdot),\, \I_{{\mu,a}}\rangle\nonumber \\ && -\,\langle\I_{{\mu,b}}{E}_{\mu,a}, \tilde H_\nu - H_\nu\rangle {-}\langle\tr_{\,\nu}^\bot\I, {E}_{\mu,a}\rangle\langle H_\nu, {E}_{\mu,b}\rangle {+}\langle\tr_{\,\nu}^\top\I, {E}_{\mu,b}\rangle \<H_\nu, {E}_{\mu,a}\rangle\big) .
\end{eqnarray*} \end{Lemma}
\begin{proof} From the last claim of Corollary~\ref{corstatcritforall} it follows that $\langle \I_X Y,Z\rangle=0$ if each of $X,Y,Z$ belongs to a different distribution. From \cite[Eqs.~(67)--(69)]{rz-3} we get, respectively: \begin{eqnarray*}
\partial_t \sum\nolimits_{\xi} \langle \I^*, \I^\wedge\rangle_{| V_\xi}
&=& -2 \sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle \I_{{\mu,i}} E_{\nu,a}, E_{\nu,b}\rangle \langle E_{\nu,b}, \I_{\nu,a} E_{\mu,j}\rangle \\ && -2 \sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle \I_{{\mu,i}} E_{\nu,a}, E_{\mu,b}\rangle \langle E_{\mu,b}, \I_{\nu,a} E_{\mu,j}\rangle\,, \\
\partial_t\sum\nolimits_{\xi} \langle \Theta, A_\xi\rangle
&=& -4\sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle h_\nu (E_{\nu,a}, E_{\nu,b}), E_{\mu,i}\rangle \langle E_{\mu,j}, \I_{\nu,a} E_{\nu,b}\rangle,\\
\partial_t \sum\nolimits_{\xi} \langle \Theta, T^\sharp_\xi\rangle
&=& -4 \sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle T_\nu (E_{\nu,a}, E_{\nu,b}), E_{\mu,i}\rangle \langle E_{\mu,j}, \I_{\nu,a} E_{\nu,b}\rangle . \end{eqnarray*} For critical statistical connections, the above gives $\partial_t \sum\nolimits_{\xi}\langle \Theta, {T}^\sharp_\xi\rangle =0$, because all $T_\xi =0$ by Corollary \ref{corstatcritforall}.
Similarly, from \cite[Eq.~(70)]{rz-3} it follows that $\partial_t \langle \Theta, {\tilde T}^\sharp_\xi \rangle =0$, because all ${\tilde T}_\xi =0$, as all $\mD_\mu$ are integrable and pairwise mixed integrable.
From \cite[Eq.~(71)]{rz-3} for statistical connections: \begin{eqnarray*} \partial_t \sum\nolimits_{\xi} \langle \Theta, {\tilde A}_\xi\rangle
&=& 4 \sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle h_\mu (E_{\mu,k}, E_{\mu,j} ), E_{\nu,a}\rangle \langle E_{\nu,a}, \I_{{\mu,k}}E_{\mu,i}\rangle \,. \end{eqnarray*}
From \cite[Eqs.~(72)--(75)]{rz-3} for statistical connections, we obtain the following: \begin{eqnarray*}
&& \partial_t \langle \tr^\top_\nu \I, \tr^\perp_\nu \I^*\rangle = 0\,,\\
&& \partial_t \sum\nolimits_{\xi} \langle \tr^\perp_\xi \I^*, \tr^\perp_\xi \I\rangle
= -2\sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu, i}, E_{\mu, j}) \langle \I_{\mu, j} E_{\mu,i}, \tr^\top_\nu \I\rangle, \\
&& \partial_t \sum\nolimits_{\xi} \langle \tr^\top_\xi (\I^* - \I ), {\tilde H}_\xi - H_\xi\rangle =
\sum {B}(E_{\mu, i}, E_{\mu, j}) \big(\langle \tr^\perp_\mu \I, E_{\mu,i}\rangle \langle E_{\mu,j}, {\tilde H}_\mu\rangle \\
&&\quad +
\sum\nolimits_{\nu\ne\mu}\langle \tr^\top_\nu \I, E_{\mu,j}\rangle \langle E_{\mu,i}, H_\nu\rangle \big),\\
&& \partial_t \sum\nolimits_{\xi} \langle \tr^\perp_\xi (\I^* - \I ), {\tilde H}_\xi - H_\xi\rangle
= \sum {B}(E_{\mu, i}, E_{\mu, j})\big(\sum\nolimits_{\nu\ne\mu}(\langle \I_{\mu,j} E_{\mu,i}, {\tilde H}_\nu - H_\nu\rangle \\
&&\quad +\, \langle \tr^\perp_\nu \I, E_{\mu,i}\rangle \langle H_\nu, E_{\mu,j}\rangle )
+\langle \I_{\mu,j} E_{\mu,i}, H_\mu - {\tilde H}_\mu\rangle + \langle \tr^\top_\mu \I, E_{\mu,i}\rangle \langle {\tilde H}_\mu, E_{\mu,j}\rangle \big), \end{eqnarray*} respectively, and that completes the proof. \end{proof}
Put $\I_{ Z }^\flat (X,Y) = \langle \I_{ Z}X, Y\rangle$.
In the next theorem we find
the Euler-Lagrange equation \eqref{E-delta-g-J} under our assumptions about distributions, for statistical connections, and with \eqref{E-delta-I-J} satisfied. This result will be improved in further corollaries, according to specific dimensions of the distributions.
\begin{Theorem} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu$ are pairwise mixed totally geodesic and pairwise mixed integrable. Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a statistical connection on $(M,g)$, is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if $(M,g)$ is locally a~twisted product, all conditions of Corollary~\ref{corstatcritforall} hold and the following Euler-Lagrange equations \eqref{E-delta-g-J} are valid: \begin{eqnarray} \label{ELmixedgeodesicmixedintegrable}
&& \big( {\bar S}_{\mD_1 \ldots \mD_k}
+\Div \big(\big(1 - \frac{2}{n_\mu}\, \big) H_\mu - {\tilde H}_\mu \big) + \lambda\big) g_\mu
- {\tilde H}_\mu^\flat \otimes {\tilde H}_\mu^\flat
+\,\I_{{\tilde H}_\mu}^\flat
\nonumber \\
&& +\sum\nolimits_{\nu\ne\mu} \big(\frac{2}{n_\nu} - 1 \big) (P_\mu H_\nu)^\flat \otimes (P_\mu H_\nu)^\flat = 0,\quad
\mu = 1,\ldots, k\,. \end{eqnarray} \end{Theorem}
\begin{proof} For totally umbilical, pairwise mixed totally geodesic distributions we obtain \begin{eqnarray*} \frac12\,\Upsilon_{h_\nu,h_\nu} = \frac{1}{n_\nu}\,H_\nu^\flat \otimes H_\nu^\flat,\quad \frac12\,\Upsilon_{{\tilde h}_\mu, {\tilde h}_\mu}= \sum\nolimits_{\nu\ne\mu} \frac{1}{n_\nu}\,(P_\mu H_\nu)^\flat \otimes (P_\mu H_\nu)^\flat\, . \end{eqnarray*} For $\mu = 1,\ldots, k$ and $X,Y \in \mD_\mu \ne \mD_\nu$, we have
$( \Div {\tilde h}_\nu )(X,Y) = \frac{1}{n_\mu}\,\<X,Y\rangle\, \Div( P_\nu H_\mu )$,
and thus
$\sum\nolimits_{\nu\ne\mu} ( \Div {\tilde h}_\nu )(X,Y) = \frac{1}{n_\mu}\,\<X,Y\rangle\,\Div H_\mu$.
For totally umbilical, pairwise mixed totally geodesic, integrable and pairwise mixed integrable distributions we~get \[ \delta Q_\mu
= \Div\big(\big(1 -\frac{2}{n_\mu}\big) H_\mu +\sum\nolimits_{\nu\ne\mu}{\tilde H}_\nu\big) g_\mu - {\tilde H}_\mu^\flat \otimes {\tilde H}_\mu^\flat
+ \sum\nolimits_{\nu\ne\mu} \big(\frac{2}{n_\nu} - 1\big)\,(P_\mu H_\nu)^\flat \otimes (P_\mu H_\nu)^\flat\, . \] For such distributions, by the above and Lemma~\ref{propdeltaQforstatistical}, the Euler-Lagrange equations for the action \eqref{Eq-Smix-g}
for all variations of $\I$ and adapted variations of $g$ preserving the volume of $\Omega$ are
\begin{eqnarray*} && 2\,\langle X,Y\rangle \Div\big(\big(1 -\frac{2}{n_\mu}\big) H_\mu +\sum\nolimits_{\nu\ne\mu}{\tilde H}_\nu \big) -2\,\langle {\tilde H}_\mu, X\rangle\langle{\tilde H}_\mu, Y\rangle \\ && +\,2 \sum\nolimits_{\nu\ne\mu} \Big[ \big(\frac{2}{n_\nu} -1 \big) \langle H_\nu, X \rangle \langle H_\nu, Y \rangle \\
&& +\,2
\sum \big(\langle \I_{X} E_{\nu,a}, E_{\nu,b}\rangle \langle E_{\nu,b}, \I_{\nu,a} Y\rangle
+\langle\I_{X} E_{\nu,a}, E_{\mu,b}\rangle \langle E_{\mu,b}, \I_{\nu,a} Y\rangle\big) \\
&& -\,2
\frac{1}{n_\nu}\,\big(\langle H_\nu, X\rangle \langle Y, \tr^\top_\nu \I\rangle +
\langle H_\nu, Y\rangle \langle X, \tr^\top_\nu \I\rangle\big) \\
&& +\,
\frac{4}{n_\mu} \langle P_\nu H_\mu, \I_{ X}Y\rangle
-2\,
\langle \I_{Y}{X}, \tr^\top_\nu \I\rangle \\
&& +\,\frac{1}{2}\,
(\langle \tr^\top_\nu \I, Y\rangle \langle X, H_\nu\rangle + \langle \tr^\top_\nu \I, X\rangle \langle Y, H_\nu\rangle ) \\ && -\,
\big(\langle \I_{Y} X, {\tilde H}_\nu - H_\nu\rangle
- \frac{1}{2}\,\langle\tr^\perp_\nu \I, X\rangle \<H_\nu, Y\rangle - \frac{1}{2}\,\langle\tr^\perp_\nu \I, Y\rangle \<H_\nu, X\rangle \big) \Big] \\
&& +\,\frac{1}{2}\,(\langle \tr^\perp_\mu \I, X\rangle \langle Y, {\tilde H}_\mu\rangle + \langle \tr^\perp_\mu \I, Y\rangle \langle X, {\tilde H}_\mu\rangle) \\
&& -\,\langle\I_{Y}X, H_\mu -{\tilde H}_\mu\rangle +\frac{1}{2}\,\langle\tr^\top_\mu\I, X\rangle\langle{\tilde H}_\mu, Y\rangle +\frac{1}{2}\,\langle\tr^\top_\mu\I, Y\rangle\langle{\tilde H}_\mu, X\rangle \\
&& +\,2\,( \bar{\rm S}_{\mD_1 \ldots \mD_k} - \Div {\cal H} + \lambda ) \langle X,Y\rangle =0,\qquad X,Y \in \mD_\mu ,\quad \mu=1, \ldots, k\,, \end{eqnarray*} with ${\cal H}$ given by \eqref{defcalH}. By claim 5 of Corollary~\ref{corstatcritforall}, $\langle \I_X Y, Z\rangle =0$ when $X,Y,Z$ are not all from the same distribution; using claim 4 of Corollary~\ref{corstatcritforall}, \eqref{calH} and $H_\mu = \sum\nolimits_{\nu\ne\mu} P_\nu H_\mu$, we reduce the above Euler-Lagrange equations to the following: \begin{eqnarray*} && \big(\bar{\rm S}_{\mD_1 \ldots \mD_k} +\Div \big( \big(1 - \frac{2}{n_\mu} \big) H_\mu - {\tilde H}_\mu \big) + \lambda \big)\langle X,Y\rangle \\ && + \sum\nolimits_{\nu\ne\mu} \big(\frac{2}{n_\nu} - 1 \big) \langle X, H_\nu\rangle \<Y, H_\nu\rangle - \langle {\tilde H}_\mu, X\rangle \langle {\tilde H}_\mu, Y\rangle
+\,\langle \I_{Y} X, {\tilde H}_\mu\rangle = 0\,, \end{eqnarray*} for all $X,Y \in \mD_\mu$ and $\mu=1, \ldots, k$, which yields \eqref{ELmixedgeodesicmixedintegrable}. \end{proof}
The next results consider alternatives for the number $j$ of one-dimensional distributions among our $k$ distributions:
at most one distribution is one-dimensional (i.e., $j\le1$) in~Corollary~\ref{C2-sec4}, there are $j$ one-dimensional distributions for some $j\in [2, k-1]$ in Proposition~\ref{coralldim} and Corollary~\ref{coralldimNew}, and all distributions are one-dimensional (i.e., $j=k$) in~Corollary~\ref{C3-sec4}. The second case is the only one in which non-trivial metrics (i.e., not metric products) and non-trivial connections (i.e., not the Levi-Civita connection) can be critical.
\begin{Corollary}\label{C2-sec4} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu$ are pairwise mixed totally geodesic and pairwise mixed integrable, and $n_\mu>1$ for all $\mu\ge2$. Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a statistical connection on $(M,g)$, is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if $(M,g)$ is locally a product, i.e., all $\mD_\mu$ are integrable and totally geodesic, and $\I$ is contorsion of any statistical connection satisfying claims \ref{item2}, \ref{item5} and \ref{item4} of~Corollary~\ref{corstatcritforall}. \end{Corollary}
\begin{proof}
By claim \ref{item1} of Corollary~\ref{corstatcritforall}, we get $H_\nu=0$ when $n_\nu >1$. So if all $n_\nu>1$, then a critical pair $(g,\I)$, where $\I$ is statistical, can exist only when $g$ is the product metric.
Suppose now that $n_1=1$ and $n_\nu>1$ for all $1< \nu \le k$. Then from claim \ref{item1} of Corollary~\ref{corstatcritforall} we obtain $H_\nu=0$ for all $1< \nu \le k$, and from claim \ref{item3} of Corollary~\ref{corstatcritforall} we obtain ${\tilde H}_\nu =0 = P_\nu H_1$ for all $1< \nu \le k$; it follows that also $H_1=0$. Again, we obtain that $g$ is the product metric.
For a metric product of integral manifolds of
$\mD_\mu$, the Euler-Lagrange equations \eqref{ELmixedgeodesicmixedintegrable} all become $\lambda =0$. In that case, all statistical $\I$ satisfying claims \ref{item2}, \ref{item5} and \ref{item4} of Corollary~\ref{corstatcritforall} are critical (also for variations of metric not preserving the volume of $(M,g)$).
\end{proof}
\begin{Proposition}\label{coralldim} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu$ are pairwise mixed totally geodesic and pairwise mixed integrable, and there exists $j\in [2, k-1]$ such that $n_\mu=1$ for $1 \le \mu \le j$ and $n_\rho >1$ for $j+1 \le \rho \le k$.
Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a~statistical connection,
is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if \,$\I$ satisfies claims \ref{item2}, \ref{item5} and \ref{item4} of Corollary~\ref{corstatcritforall} and for all $j+1\le\rho\le k:\,\mD_\rho$ is totally geodesic, ${\tilde H}_\rho=0$ and \begin{subequations} \begin{equation} \label{ELtwistedproductdimbigSmix}
\sum\nolimits_{\,\nu=1}^j
(P_\rho H_\nu)^\flat \otimes (P_\rho H_\nu)^\flat + \big( \frac{2\lambda}{2-m} \big) g_\rho =0 \,, \end{equation} and for all $1\le\mu\le j$: \begin{equation}\label{ELtwistedproductdimoneSmix} {\rm S}_{\,\mD_\mu, \mD_\mu^\perp} = \frac{2\lambda}{2-m},
\end{equation} \end{subequations} where $m=\dim M$ and $\lambda$ is a constant. \end{Proposition}
\begin{proof} From Corollary~\ref{corstatcritforall} we obtain for $j+1 \le \rho \le k$ that $H_\rho=0 = {\tilde H}_\rho = P_\rho {\cal H}$ and thus ${\cal H}=\sum\nolimits_{\,\nu=1}^j H_\nu$. By \eqref{eqvarstat} and Corollary \ref{corstatcritforall}, we get
$\bar{\rm S}_{\,\mD_1 \ldots \mD_k} = {\rm S}_{\,\mD_1 \ldots \mD_k}$. Hence, the Euler-Lagrange equation \eqref{ELmixedgeodesicmixedintegrable} for all $X,Y \in \mD_\rho$ becomes \begin{equation} \label{ELtwistedproductdimbig1}
\sum\nolimits_{\,\nu=1}^j
(P_\rho H_\nu)^\flat \otimes (P_\rho H_\nu)^\flat + \big( {\rm S}_{\,\mD_1 \ldots \mD_k} + \lambda \big) g_\rho =0 \,, \end{equation} and for all $1 \le \mu \le j$ we have the following form of \eqref{ELmixedgeodesicmixedintegrable}: \begin{equation}\label{ELtwistedproductdimone1} - \Div (
H_\mu + {\tilde H}_\mu ) + {\rm S}_{\,\mD_1 \ldots \mD_k} + \lambda
+ \sum\nolimits_{\,\nu=1}^j
\| P_\mu H_\nu\|^2 - \| {\tilde H}_\mu \|^2 = 0\,. \end{equation}
For $1 \le \mu \le j$ we obtain
$\| {\tilde h}_\mu \|^2
= \sum\nolimits_{\nu =1}^j \| P_\mu H_\nu \|^2$\,,
and then, using the formula for the mixed scalar curvature of $\mD_\mu$ \cite{Walczak}, we get \begin{equation} \label{Smixtwproddim1}
{\rm S}_{\,\mD_\mu, \mD_\mu^\perp} = \Div ( {\tilde H}_\mu + H_\mu ) + \| {\tilde H}_\mu \|^2 - \sum\nolimits_{\,\nu =1}^j \| P_\mu H_\nu \|^2\, .
\end{equation} Using the above in \eqref{ELtwistedproductdimone1}, we find that the Euler-Lagrange equation \eqref{ELmixedgeodesicmixedintegrable} for $1 \leq \mu \leq j$ is
\begin{equation}\label{ELtwistedproductdimone}
{\rm S}_{\,\mD_1 \ldots \mD_k} + \lambda - {\rm S}_{\,\mD_\mu, \mD_\mu^\perp} =0 .
\end{equation}
For $j+1 \le \rho \le k$, from the formula for the mixed scalar curvature of $\mD_\rho$ we find ${\rm S}_{\,\mD_\rho, \mD_\rho^\perp}= - \sum_{\nu =1}^j \| P_\rho H_\nu \|^2$, hence taking the trace of \eqref{ELtwistedproductdimbig1} yields \begin{equation} \label{trELtwistedproductdimbig1} ( {\rm S}_{\,\mD_1 \ldots \mD_k} + \lambda ) n_\rho - {\rm S}_{\,\mD_\rho, \mD_\rho^\perp} =0 . \end{equation} Summing all equations \eqref{ELtwistedproductdimone} for $1 \leq \mu \leq j$ and all equations \eqref{trELtwistedproductdimbig1} for $j+1 \leq \rho \leq k$, and using the analogue of \eqref{E-Dk-Smix} for the Levi-Civita connection: $2 {\rm S}_{\,\mD_1 \ldots \mD_k} = \sum\nolimits_{\mu=1}^j {\rm S}_{\,\mD_\mu, \mD_\mu^\perp} + \sum\nolimits_{\rho=j+1}^k {\rm S}_{\,\mD_\rho, \mD_\rho^\perp}$ \cite{r-EH-k}, we obtain $m ({\rm S}_{\,\mD_1 \ldots \mD_k} + \lambda) - 2{\rm S}_{\,\mD_1 \ldots \mD_k} =0$, and hence ${\rm S}_{\,\mD_1 \ldots \mD_k} = \frac{m \lambda}{2-m}$, which substituted in \eqref{ELtwistedproductdimone} yields ${\rm S}_{\,\mD_\mu, \mD_\mu^\perp} = \frac{2\lambda}{2-m}$. Using the above in \eqref{ELtwistedproductdimbig1} and \eqref{trELtwistedproductdimbig1} yields (\ref{ELtwistedproductdimbigSmix},b). On the other hand, it can be verified that if $\I$ satisfies claims \ref{item2}, \ref{item5} and \ref{item4} of Corollary~\ref{corstatcritforall}, for all $j+1\le\rho\le k:\,\mD_\rho$ is totally geodesic, ${\tilde H}_\rho=0$ and (\ref{ELtwistedproductdimbigSmix},b) hold, then the Euler-Lagrange equations \eqref{ELmixedgeodesicmixedintegrable} are valid. \end{proof}
\begin{Remark} \rm
In Proposition \ref{coralldim}, equations \eqref{ELtwistedproductdimbig1} and \eqref{ELtwistedproductdimone} are equivalent to, respectively: \begin{subequations} \begin{eqnarray}\label{ELtwistedproductdimbigold}
\sum\nolimits_{\,\nu=1}^j(P_\rho H_\nu)^\flat \otimes (P_\rho H_\nu)^\flat
+\big( \Div {\cal H} + \frac{1}{2}\,\| {\cal H} \|^2 - \frac{1}{2} \sum\nolimits_{\nu=1}^j \| H_\nu \|^2 + \lambda\big) g_\rho = 0\,, \\
\label{ELtwistedproductdimoneold}
\sum\nolimits_{\,\nu=1}^j \big(\| P_\mu H_\nu\|^2 - \frac{1}{2}\|H_\nu\|^2\big)
-\| {\tilde H}_\mu \|^2
+\Div ( {\cal H} - H_\mu - {\tilde H}_\mu) + \frac{1}{2}\,\|{\cal H}\|^2 + \lambda = 0\,. \end{eqnarray} \end{subequations}
Indeed, from the formulas for mixed scalar curvatures ${\rm S}_{\,\mD_\mu, \mD_\mu^\perp}$ and ${\rm S}_{\,\mD_\rho, \mD_\rho^\perp}$ we get \begin{eqnarray*}
{\rm S}_{\,\mD_1,\ldots,\mD_k} &=& \frac{1}{2}\sum_{\mu=1}^j \big(\Div({\tilde H}_\mu + H_\mu ) + \|{\tilde H}_\mu\|^2
- \sum_{\nu =1}^j \| P_\mu H_\nu \|^2 \big) -\frac{1}{2} \sum_{\rho=j+1}^k \sum_{\nu =1}^j \| P_\rho H_\nu \|^2 \\
&=& \Div {\cal H} + \frac{1}{2}\, \| {\cal H} \|^2 - \frac{1}{2} \sum\nolimits_{\,\nu=1}^j \| H_\nu \|^2\, . \end{eqnarray*} Using the above in \eqref{ELtwistedproductdimbig1} and, together with \eqref{Smixtwproddim1}, in \eqref{ELtwistedproductdimone}, yields (\ref{ELtwistedproductdimbigold},b), respectively. \end{Remark}
\begin{Corollary} \label{coralldimNew} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu$ are pairwise mixed totally geodesic and pairwise mixed integrable, and there exists $j\in [2, k-1]$ such that $n_\mu=1$ for $1 \le \mu \le j$ and $n_\rho >1$ for $j+1 \le \rho \le k$.
Suppose that $n_\rho > j$ for at least one $\rho \in \{ j+1, \ldots, k \}$.
Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a~statistical connection,
is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if \,$\I$ satisfies claims \ref{item2}, \ref{item5} and \ref{item4} of Corollary~\ref{corstatcritforall}; for all $j+1\le\xi\le k:\,\mD_\xi$ is totally geodesic,
and
we have \begin{subequations} \begin{eqnarray}\label{ELtwistedproductdimbigNew} P_\xi H_\nu =0\,,\quad \nu \in \{ 1, \ldots, j \},\\
\label{ELtwistedproductdimoneNew}
{\rm Ric} |_{\mD_\mu \times \mD_\mu} =0\,,\quad 1\le\mu\le j. \end{eqnarray}
\end{subequations} \end{Corollary}
\begin{proof} If $n_\rho > j$, then the first term of \eqref{ELtwistedproductdimbig1} is a sum of $j$ rank-one tensors, while $g_\rho$ has rank $n_\rho > j$; comparing the ranks of the tensors in \eqref{ELtwistedproductdimbig1} thus yields \begin{eqnarray}\label{divHlambda}
& P_\rho H_\nu =0,\quad \nu \in \{ 1, \ldots, j \}, \nonumber\\
& {\rm S}_{\,\mD_1 \ldots \mD_k} = - \lambda\,.
\end{eqnarray} Moreover, for every $\xi \in \{ j+1, \ldots, k \}$ we get from the Euler-Lagrange equation \eqref{ELtwistedproductdimbig1} written for $\mD_\xi$ and \eqref{divHlambda}:
$\sum\nolimits_{\,\nu =1}^j (P_\xi H_\nu)^\flat \otimes (P_\xi H_\nu)^\flat =0$,
thus for all $E_{\xi, i}$ we obtain $\sum\nolimits_{\nu =1}^j \langle H_\nu, E_{\xi,i} \rangle^2 =0$, but then also \[ 0 = \sum\nolimits_{i =1}^{n_\xi} \sum\nolimits_{\nu =1}^j \langle H_\nu, E_{\xi,i} \rangle^2 =
\sum\nolimits_{\nu =1}^j \sum\nolimits_{i =1}^{n_\xi} \langle H_\nu, E_{\xi,i} \rangle^2 = \sum\nolimits_{\nu =1}^j \| P_\xi H_\nu \|^2 \] and \eqref{ELtwistedproductdimbigNew} follows. Using \eqref{divHlambda} in \eqref{ELtwistedproductdimone}, we get \eqref{ELtwistedproductdimoneNew}. On the other hand, if (\ref{ELtwistedproductdimbigNew},b) hold, then for $\lambda=0$ all terms in \eqref{ELtwistedproductdimbig1} and \eqref{ELtwistedproductdimone} vanish. \end{proof}
\begin{Example} \rm A simple example of a critical metric is the following twisted product (with distributions $\mD_\mu$ defined as tangent to its factors): for a manifold $(M_1,g_1)$ let $M = M_1 \times \mathbb{R} \times \mathbb{R}$ and let $g= g_1 + e^{-2f_1(s) } dt^2 + e^{-2f_2(t)} ds^2$, where $f_1,f_2$ are linear functions. Let $\I$ correspond to a statistical connection satisfying conditions 2, 4 and 5 of Corollary~\ref{corstatcritforall} and such that $\langle \I_X Y, Z \rangle =0$ unless $X,Y,Z \in TM_1$. Then $(g,\I)$ is critical for the action \eqref{Eq-Smix-g}, with respect to adapted variations of metric and all variations of contorsion. \end{Example}
\begin{Corollary}\label{C3-sec4} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ such that all $\mD_\mu$ are one-dimensio\-nal, pairwise mixed totally geodesic and pairwise mixed integrable.
Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a~statistical connection,
is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if $\I=0$ and there exists a constant $\lambda$ such that for all $1\le\mu\le k$ we have \begin{equation} \label{ELtwistedproductdimonekRic}
{\rm Ric} |_{\mD_\mu \times \mD_\mu} =
-\frac{2\lambda}{k-2}\, . \end{equation}
\end{Corollary}
\begin{proof} By claims \ref{item2} and \ref{item4} of Corollary~\ref{corstatcritforall}, the only non-zero components of $\I$ may be $\langle \I_X X, X\rangle$, where $X \in \mD_\mu$ is a unit vector. However, by claim \ref{item5} of Corollary~\ref{corstatcritforall} we get for a unit vector $X \in \mD_\mu$: $\I_X X = \tr^\top_\mu \I =0$. It follows that $\I=0$. Equation \eqref{ELtwistedproductdimonekRic} can be proved similarly to \eqref{ELtwistedproductdimoneSmix}: let $j=k$, then from \eqref{ELmixedgeodesicmixedintegrable} we obtain \eqref{ELtwistedproductdimone1} and use in it \eqref{Smixtwproddim1} and $2{\rm S}_{\,\mD_1 \ldots \mD_k} = \sum\nolimits_{\mu=1}^k {\rm S}_{\,\mD_\mu, \mD_\mu^\perp}$.
\end{proof}
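\begin{Remark} \rm Equation \eqref{ELtwistedproductdimonekRic} is consistent with \eqref{ELtwistedproductdimoneSmix}: if $\mD_\mu$ is spanned by a unit vector field $X$, then ${\rm S}_{\,\mD_\mu, \mD_\mu^\perp} = {\rm Ric}(X,X)$, and for $j=k$ we have $m=\dim M=k$, so
\[ \frac{2\lambda}{2-m} = -\frac{2\lambda}{k-2}\,. \]
\end{Remark}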
\begin{Corollary} Let $(M,g)$ be a warped product of $k>2$ manifolds,
where $M_1$ is a closed manifold. Then a pair $(g,\I)$, where $\I$ is the contorsion tensor of a~statistical connection, is critical for the action \eqref{Eq-Smix-g} with respect to volume-preserving adapted variations of $g$ and all variations of $\I$, if and only if $(M,g)$ is a metric product
and $\I$ satisfies conditions 2, 4 and 5 of Corollary~\ref{corstatcritforall}. \end{Corollary}
\begin{proof} We can assume $g = g_1 + e^{-2 f_2} g_2 + \ldots + e^{-2 f_k} g_k$, where $f_\mu : M_1 \rightarrow \mathbb{R}$ for $\mu=2, \ldots, k$. For warped products we have $H_1 = 0$ and $H_\mu = \nabla f_\mu$, ${\tilde H}_\mu=0$ for all $\mu>1$. If $\dim \mD_\mu=1$, the Euler-Lagrange equation \eqref{ELmixedgeodesicmixedintegrable} for $\mD_\mu$ is \eqref{ELtwistedproductdimoneold}, which becomes \begin{equation}\label{ELtwistedproductdimoneoldtwp}
\sum\nolimits_{\nu=2}^k \Delta f_\nu - \Delta f_\mu -\frac{1}{2} \sum\nolimits_{\nu=2}^k \| \nabla f_\nu \|^2 + \frac{1}{2} \| \sum\nolimits_{\nu=2}^k \nabla f_\nu \|^2 + \lambda =0, \end{equation} and if $\dim \mD_\mu>1$, then, by \eqref{ELtwistedproductdimbigold}, the Euler-Lagrange equation \eqref{ELmixedgeodesicmixedintegrable} on $\mD_\mu \times \mD_\mu$ is \begin{equation}\label{ELtwistedproductdimbigoldtwp}
\sum\nolimits_{\nu=2}^k (P_\mu \nabla f_\nu)^\flat \otimes (P_\mu \nabla f_\nu)^\flat + \big( \sum\nolimits_{\nu=2}^k \Delta f_\nu -\frac{1}{2} \sum\nolimits_{\nu=2}^k \| \nabla f_\nu \|^2 + \frac{1}{2} \| \sum\nolimits_{\nu=2}^k \nabla f_\nu \|^2 + \lambda \big)g_\mu = 0 . \end{equation}
(i) Let $n_\mu = 1$ for all $\mu>1$. Taking differences of \eqref{ELtwistedproductdimoneoldtwp} for $\mD_2$ and $\mD_\mu$ for any $\mu>2$, we get
$\Delta f_2 - \Delta f_\mu = 0$.
Since $M_1$ is closed, it follows that $f_2 - f_\mu$ is constant and for all $\mu>2$ \begin{equation} \label{gradfi} \nabla f_2 = \nabla f_\mu . \end{equation}
Hence, ${\cal H} = (k-1)\nabla f_2$ and $\sum\nolimits_{\mu } \|H_\mu\|^2=(k-1)\|\nabla f_2\|^2$, and \eqref{ELtwistedproductdimoneoldtwp} for $\mu=2$ yields the equality
$(k-2)\, \Delta f_2 + \frac12\,(k-1)(k-2)\, \| \nabla f_2\|^2 + \lambda =0$.
Thus,
$\frac{1}{a}\, e^{-a f_2} \Delta\, e^{af_2} = -\frac{\lambda}{k-2}$,
where $a = \frac12\,(k-1)$. Since $e^{af_2}>0$, integrating this equality over the closed manifold $M_1$ yields $\lambda=0$; then $e^{af_2}$ is a positive harmonic function on $M_1$, hence constant, so $f_2={\rm const}$. From \eqref{gradfi} we conclude that all $f_i$ are constant and $(M,g)$ is the metric product.
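(Here we used the elementary identity, valid for $\Delta = \Div\nabla$, any smooth function $f$ and any constant $a \ne 0$:
\[ \frac{1}{a}\, e^{-af} \Delta\, e^{af} = \frac{1}{a}\, e^{-af} \Div ( a\, e^{af} \nabla f ) = \Delta f + a\, \| \nabla f \|^2\, . ) \]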
(ii) Suppose now that $n_\mu >1$ for some $\mu>1$. From Corollary \ref{corstatcritforall} we find that $H_\mu=0=\nabla f_\mu$, and hence $f_\mu$ is constant. Let $\xi\in[2, k]$. If $n_\xi >1$, then, by the same argument as above, $f_\xi$ is constant. If $n_\xi =1$, then taking the difference of \eqref{ELtwistedproductdimoneoldtwp} for $\mD_\xi$ and the trace of \eqref{ELtwistedproductdimbigoldtwp} divided by $n_\mu$, we find
$-\Delta f_\xi = 0$,
and since $M_1$ is compact, we get $f_\xi={\rm const}$. Thus, all $f_i$ are constant and $(M,g)$ is the metric product. \end{proof}
\subsection{Variations of $\I$ corresponding to statistical connections} \label{sec:metric-stat}
In this section we still consider adapted variations of the metric, but restrict variations of $\I$ to tensors corresponding to statistical connections. Using Theorem~\ref{statisticalcritSmixI}, we examine the term $\delta_g{\bar Q}_\mu$ from Proposition~\ref{C-vr-terms}.
\begin{Lemma}\label{P-06} Let $g$ be an adapted metric on $(M;\mD_1,\ldots,\mD_k)$,
and let the contorsion tensor $\I$ of a statistical connection be critical for the action \eqref{Eq-Smix-g}
restricted to contorsion tensors of statistical connections on $(M,g)$. Then for the $\mD_\mu$-variation of metric we have \begin{eqnarray*}
&& 2\delta_g{\bar Q}_\mu = \sum\nolimits_{\nu\ne\mu} \frac{n_\nu}{2}\,(P_\mu \tr^\perp_\nu \I )^\flat \otimes (P_\mu \tr^\perp_\nu \I )^\flat
- \frac{1}{2}\, \| P_\mu^\perp \tr^\perp_\mu \I \|^2 g_\mu \nonumber \\ &&\hskip-6mm -\,3\sum\nolimits_{\nu\ne\mu} \!{\rm Sym}( (P_\mu H_\nu )^\flat{\otimes} (P_\mu \tr^\perp_\nu \I )^\flat)
{+} 2\langle h_\mu, P_\mu^\perp \tr^\perp_\mu \I \rangle
{+} \sum\nolimits_{\nu\ne\mu} \!{\rm Sym}( (P_\mu \tr^\top_\nu \I )^\flat {\otimes} (P_\mu H_\nu )^\flat ) \nonumber \\ &&\hskip-6mm +\,\frac{1}{2}\, \big\langle \sum\nolimits_{\nu\ne\mu} (P_\mu^\perp H_\nu -{\tilde H}_\nu), \tr_\mu^\perp \I \big\>g_\mu
+2\,\I_{{\tilde H}_\mu}^\flat
- \frac{1}{2}\, \langle H_\mu, \tr_\mu^\perp \I \>g_\mu - {\rm Sym}( (P_\mu \tr^\top_\mu \I)^\flat \otimes {\tilde H}_\mu^\flat ) \,. \end{eqnarray*} \end{Lemma}
\begin{proof} It follows from \eqref{newELIstat} that $\langle \I_X Y, Z \rangle=0$ if each of $X,Y,Z$ belongs to a different distribution. The computation of all formulas \cite[Eqs.~(67)--(75)]{rz-3} gives the same results as Lemma~\ref{propdeltaQforstatistical}, but now we obtain $\partial_t \langle \Theta, {T}^\sharp_\xi \rangle =0$ from $\sum_{a,b} \langle T(E_{\nu,a}, E_{\nu,b}), {\cal E}_{\nu,i} \rangle \langle {\cal E}_{\nu,j}, \I_{\nu,a} E_{\nu,b} \rangle= 0$ by $\I=\I^\wedge$, and we obtain $\partial_t \langle \Theta, {\tilde T}^\sharp \rangle =0$ using \eqref{ELSmixIstat2}, which yields sums of symmetric and antisymmetric in $E_{\nu, a}, E_{\nu,b}$ terms: \begin{eqnarray*}
\partial_t\sum\nolimits_{\xi} \langle\Theta, {\tilde T}^\sharp_\xi \rangle
&=& \sum\nolimits_{\nu\ne\mu} \Big(
\sum {B}(E_{\mu,i}, E_{\mu,j}) \<2 {\tilde T}_\nu ({\cal E}_{\nu,k}, E_{\mu,j}), E_{\nu, a}\rangle \langle{\cal E}_{\nu,k}, \I_{\nu,a} E_{\mu,i}\rangle \\
&& - \sum {B}(E_{\mu,i}, E_{\mu,j} ) \langle 2 {\tilde T}_\nu ( E_{\mu,i}, {\cal E}_{\nu,k} ), E_{\nu, a} \rangle \langle E_{\mu,j}, \I_{\nu,a} {\cal E}_{\nu,k} \rangle \\
&& +\sum {B}(E_{\mu,i}, E_{\mu,j})\langle 2{\tilde T}_\nu ({\cal E}_{\nu,k}, E_{\mu,j}), E_{\nu, a}\rangle\langle E_{\nu,a}, \I_{{\cal E}_{\nu,k}} E_{\mu,i} \rangle\Big) \\
&& + \sum {B}(E_{\mu,i}, E_{\mu,j} ) \langle 2 T_\mu ( E_{\mu,k}, E_{\mu,j} ), {\cal E}_{\mu, a} \rangle \langle E_{\mu,k}, \I_{{\cal E}_{\mu,a} } E_{\mu,i} \rangle \\
&& - \sum {B}(E_{\mu,i}, E_{\mu,j} ) \langle 2 T_\mu ( E_{\mu,i}, E_{\mu,k} ), {\cal E}_{\mu, a} \rangle \langle E_{\mu,j}, \I_{{\cal E}_{\mu,a} } E_{\mu,k} \rangle \\
&& + \sum {B}(E_{\mu,i}, E_{\mu,j} ) \langle 2 T_\mu ( E_{\mu,k}, E_{\mu,j} ), {\cal E}_{\mu, a} \rangle\langle {\cal E}_{\mu,a}, \I_{{\mu,k} } E_{\mu,i} \rangle \\
&=& 6 \sum\nolimits_{\nu\ne\mu} \sum {B}(E_{\mu,i}, E_{\mu,j} ) \langle T_\mu ( E_{\mu,i}, E_{\mu,j} ), E_{\nu, a} \rangle \langle \tr^\perp_\mu \I, E_{\nu,a} \rangle =0\,. \end{eqnarray*} Using \eqref{ELSmixIstat2} in equations from the proof of Lemma~\ref{propdeltaQforstatistical}, we also simplify other formulas: \begin{eqnarray*}
&&\hskip-4mm
\partial_t \sum\nolimits_{\xi} \langle \I^*, \I^\wedge \rangle_{| V_\xi}
= \langle {B}_\mu, \sum\nolimits_{\nu\ne\mu} \big(-\frac{n_\nu}{2}\, (P_\mu\tr^\perp_\nu\I)^\flat\otimes(P_\mu\tr^\perp_\nu\I)^\flat\big)
- \frac{1}{2}\, \| P_\mu^\perp \tr^\perp_\mu \I \|^2 g_\mu \rangle\,,\\
&&\hskip-4mm \partial_t \sum\nolimits_{\xi} \langle \Theta, A_\xi \rangle
= -2 \langle {B}_\mu, \sum\nolimits_{\nu\ne\mu} {\rm Sym}( (P_\mu H_\nu )^\flat \otimes (P_\mu \tr^\perp_\nu \I )^\flat ) \rangle\,,\\
&&\hskip-4mm \partial_t \sum\nolimits_{\xi} \langle \Theta, {\tilde A}_\xi \rangle
= 2 \langle {B}_\mu, \langle h_\mu, P_\mu^\perp \tr^\perp_\mu \I \rangle \rangle\,. \end{eqnarray*} From \cite[Eq.~(72)]{rz-3}, for statistical connections we get $\partial_t \langle \tr^\top_\nu \I, \tr^\perp_\nu \I^* \rangle=0$. From \eqref{ELSmixIstat1} we~get \begin{eqnarray*} && \partial_t \sum\nolimits_{\xi} \langle \tr^\top_\xi \I^*, \tr^\perp_\xi \I \rangle
= \langle {B}_\mu, - \| P_\mu^\perp \tr^\perp_\mu \I \|^2 g_\mu \rangle\,,\\
&& \partial_t \sum\nolimits_{\xi} \langle \tr^\top_\xi ( \I^* - \I ), {\tilde H}_\xi - H_\xi \rangle
= \big\langle {B}, \I_{ H_\mu - {\tilde H}_\mu }^\flat + {\rm Sym}( (P_\mu \tr^\top_\mu \I )^\flat \otimes {\tilde H}_\mu^\flat ) \\ && + \sum\nolimits_{\nu\ne\mu} \big( \I_{{\tilde H}_\nu - H_\nu }^\flat +\,{\rm Sym}( (P_\mu \tr^\perp_\nu \I )^\flat \otimes (P_\mu H_\nu)^\flat ) \big)
\big\rangle. \end{eqnarray*}
Using \eqref{ELSmixIstat2}, we get on $\mD_\mu \times \mD_\mu$: \begin{eqnarray*}
&& \I_{H_\mu}^\flat = \frac{1}{2}\, \langle H_\mu, \tr_\mu^\perp \I \>g_\mu\,, \quad
\sum\nolimits_{\nu\ne\mu} \I_{H_\nu}^\flat
= \I_{{\tilde H}_\mu}^\flat + \frac{1}{2}\, \langle \sum\nolimits_{\nu\ne\mu} P_\mu^\perp H_\nu, \tr_\mu^\perp \I \>g_\mu\,, \\
&& \sum\nolimits_{\nu\ne\mu} \I_{{\tilde H}_\nu}^\flat = \frac{1}{2}\, \langle \sum\nolimits_{\nu\ne\mu} {{\tilde H}_\nu}, \tr_\mu^\perp \I \>g_\mu\,. \end{eqnarray*} Thus,
\begin{eqnarray*}
&& \partial_t \sum\nolimits_{\xi} \langle \tr^\perp_\xi ( \I^* - \I ), {\tilde H}_\xi - H_\xi \rangle
= \big\langle {B}, \frac{1}{2}\, \langle \sum\nolimits_{\nu\ne\mu} {{\tilde H}_\nu}, \tr_\mu^\perp \I \>g_\mu - 2\I_{{\tilde H}_\mu}^\flat \\
&& -\,\frac{1}{2}\, \langle \sum\nolimits_{\nu\ne\mu} P_\mu^\perp H_\nu, \tr_\mu^\perp \I \>g_\mu
+\sum\nolimits_{\nu\ne\mu} {\rm Sym}( (P_\mu \tr^\perp_\nu \I )^\flat \otimes
(P_\mu H_\nu)^\flat ) \\
&& +\,\frac{1}{2}\, \langle H_\mu, \tr_\mu^\perp \I \>g_\mu
+{\rm Sym}( (P_\mu \tr^\top_\mu \I )^\flat \otimes {\tilde H}_\mu^\flat ) \big\rangle\,. \end{eqnarray*}
By \eqref{E-barQ} and \eqref{E-dt-barQ}, from the above computations we get the required formula for $\delta_g{\bar Q}_\mu$. \end{proof}
The following result combines Theorem~\ref{T-main01} (variations of $g$ with $\I$ fixed) and Theorem~\ref{statisticalcritSmixI} (variations of $\I$ with $g$ fixed) for variations among statistical connections.
\begin{Theorem}\label{T-06} A pair $(g,\I)$, where $g$ is an adapted metric and $\I$ is the contorsion tensor of a statistical connection on $(M,\mD_1,\ldots,\mD_k)$,
is critical for the action \eqref{Eq-Smix-g} with respect to volume-preserving adapted variations of $g$ and variations of \,$\I$ corresponding to statistical connections if and only if it satisfies the Euler-Lagrange equations {\rm (\ref{ELSmixIstat1}-c)} and \eqref{ElmixDDvp-b}, which has the
form: \begin{eqnarray}\label{ELamongstatistical} && \sum\nolimits_{\nu\ne\mu} \frac{n_\nu}{2}\, (P_\mu \tr^\perp_\nu \I )^\flat \otimes (P_\mu \tr^\perp_\nu \I )^\flat
- \frac{1}{2}\, \| P_\mu^\perp \tr^\perp_\mu \I \|^2 g_\mu \nonumber \\
&& -\,3 \sum\nolimits_{\nu\ne\mu} {\rm Sym}( (P_\mu H_\nu )^\flat \otimes (P_\mu \tr^\perp_\nu \I )^\flat ) \nonumber \\
&& +\,2\, \langle h_\mu, P_\mu^\perp \tr^\perp_\mu \I \rangle
+ \sum\nolimits_{\nu\ne\mu} {\rm Sym}( (P_\mu \tr^\top_\nu \I )^\flat \otimes (P_\mu H_\nu )^\flat ) \nonumber \\
&& +\,\frac{1}{2}\,\langle\sum\nolimits_{\nu\ne\mu} (P_\mu^\perp H_\nu - {\tilde H}_\nu), \tr_\mu^\perp \I \>g_\mu + 2\,\I_{{\tilde H}_\mu}^\flat
{-} \frac{1}{2}\, \langle H_\mu, \tr_\mu^\perp \I \>g_\mu - {\rm Sym}( (P_\mu \tr^\top_\mu \I )^\flat \otimes {\tilde H}_\mu^\flat ) \nonumber \\
&& +\, 2 \big(
-\Div{h}_{\mu} -{\cal K}_{\mu}^\flat -\tilde{H}_{\mu}^\flat\otimes\tilde{H}_{\mu}^\flat
+\frac12\Upsilon_{\tilde h_{\mu},\tilde h_{\mu}} +\frac12\Upsilon_{\tilde T_{\mu}, \tilde T_{\mu}}
+\,2\,{\cal T}_{\mu}^\flat + (\Div H_{\mu})\,g_\mu \nonumber \\
&& +\sum\nolimits_{\,\nu\ne {\mu}}\big(-\Div\tilde{h}_\nu|_{\mD_{\mu}} - (P_{\mu}\tilde{\cal K}_\nu)^\flat -(P_{\mu}{H}_\nu)^\flat\otimes(P_{\mu}{H}_\nu)^\flat \nonumber \\
&& +\,\frac12\Upsilon_{P_{\mu}h_\nu,P_{\mu}h_\nu} +\frac12\Upsilon_{P_{\mu}T_\nu,P_{\mu}T_\nu} +2\,(P_{\mu}\tilde {\cal T}_\nu)^\flat
+(\Div \tilde H_\nu)\,g_\mu\big)\big) \nonumber \\
&& +\,2\big(\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} -\frac12\Div\sum\nolimits_{\,\nu}(H_\nu + \tilde{H}_\nu) +\lambda\big)\,g_\mu=0,\quad \mu=1, \ldots, k\,. \end{eqnarray}
\end{Theorem}
\begin{proof} Equation \eqref{ELamongstatistical} is the Euler-Lagrange equation \eqref{ElmixDDvp-b}, with $\delta Q_\mu$ given in Proposition~\ref{C-vr-terms} and $\delta_g\bar Q_\mu$ given in Lemma~\ref{P-06}. \end{proof}
Using Theorem~\ref{T-06}, we will show how to obtain examples of pairs $(g,\I)$ critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of metric and variations of \,$\I$ corresponding to statistical connections, starting from metrics critical for this action with fixed Levi-Civita connection.
\begin{Lemma}\label{L-barS-S} Let $\I$ be the contorsion tensor of a statistical connection on $(M,\mD_1,\ldots,\mD_k)$ with an adapted metric $g$
such that $\tr^\top_\mu \I=0$ for all $\mu \in \{1, \ldots, k\}$. If \,$\I$ is critical for the action \eqref{Eq-Smix-g} with fixed $g$, with respect to variations of \,$\I$ corresponding to statistical connections, then $\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = {\rm S}_{\,\mD_1,\ldots,\mD_k}$. \end{Lemma}
\begin{proof} From the assumption $\tr^\top_\mu \I=0$ for all $\mu \in \{1, \ldots, k\}$ we obtain
$\sum\nolimits_{\,\nu} \langle\tr_{\,\nu}^\bot\I,\,\tr_{\,\nu}^\top\I\rangle =0 $\,.
From (\ref{ELSmixIstat2},c) we obtain for all $\mu \in \{1, \ldots, k\}$: \begin{eqnarray*}
\langle\I,\,\I\rangle_{\,|\,V_\mu} \hspace*{-2mm}&=&\hspace*{-2mm} \sum \langle \I_{E_{\mu,a}} {\cal E}_{\mu, b}, \I_{{\cal E}_{\mu, b}} E_{\mu, a} \rangle = \sum\nolimits_{\nu \ne \mu} \sum \langle \I_{E_{\mu,a}} E_{\nu, b}, \I_{E_{\nu, b}} E_{\mu, a} \rangle \\ \hspace*{-2mm}&=&\hspace*{-2mm} \sum\nolimits_{\nu \ne \mu} \sum \langle \I_{E_{\mu,a}} {E}_{\nu, b}, E_{\nu, c} \rangle \langle E_{\nu, c}, \I_{{E}_{\nu, b}} E_{\mu, a} \rangle \\ &&+ \sum\nolimits_{\nu \ne \mu} \sum \langle \I_{E_{\mu,a}} {E}_{\nu, b}, E_{\mu, c} \rangle \langle E_{\mu, c}, \I_{{E}_{\nu, b}} E_{\mu, a} \rangle \\ \hspace*{-2mm}&=&\hspace*{-2mm} \sum\nolimits_{\nu \ne \mu} \sum \langle \tr^\perp_\nu \I, {E_{\mu,a}} \rangle \langle {E}_{\nu, b}, E_{\nu, c} \rangle \langle \tr^\perp_\nu \I, {E_{\mu,a}} \rangle \langle {E}_{\nu, b}, E_{\nu, c} \rangle \\ &&+ \sum\nolimits_{\nu \ne \mu} \sum \langle \tr^\perp_\mu \I, {E_{\nu,b}} \rangle \langle {E}_{\mu,a}, E_{\mu, c} \rangle \langle \tr^\perp_\mu \I, {E_{\nu,b}} \rangle \langle {E}_{\mu,a}, E_{\mu, c} \rangle =0\,. \end{eqnarray*} Hence, from \eqref{eqvarstat}, we get $\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = {\rm S}_{\,\mD_1,\ldots,\mD_k}$. \end{proof}
Using Theorem~\ref{T-06} and Lemma~\ref{L-barS-S}, we get the following
\begin{Corollary} Let $g$ be a critical adapted metric for the action \eqref{Eq-Smix-g0} with respect to adapted variations on $(M;\mD_1,\ldots,\mD_k)$. Suppose that there exists $\mu\in\{1,\ldots,k\}$ such that either $n_\mu\ge3$, or $n_\mu=2$ and ${\tilde H}_\mu=0$.
Then there exists a contorsion tensor $\I\ne0$ of a statistical connection on $(M,g)$ such that the pair $(g,\I)$ is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ and variations of \,$\I$ corresponding to statistical connections. \end{Corollary}
\begin{proof} The Euler-Lagrange equations
\eqref{ELamongstatistical} do not contain derivatives of $\I$ and thus,
with a given metric $g$, are algebraic equations for $\I$. By Lemma~\ref{L-barS-S}, we get $\overline{\rm S}_{\,\mD_1,\ldots,\mD_k} = {\rm S}_{\,\mD_1,\ldots,\mD_k}$, thus the expression in the last four lines of \eqref{ELamongstatistical} does not depend on $\I$. Moreover, by \eqref{ElmixDDvp} and since $g$ is critical, this expression vanishes. We can restrict ourselves to $\I$ such that $\tr_\mu^\bot\I=0$ for all $\mu$. Then \eqref{ELSmixIstat1} is valid, and similarly as in the proof of claim \ref{item5} in Corollary~\ref{corstatcritforall}, we get $\tr_\mu^\top\I=0$ for all $\mu$. Thus, \eqref{ELamongstatistical} reduces to the linear (with respect to $\I$) system \begin{equation}\label{ELamongstatistical-A}
2\,\I_{{\tilde H}_\mu}^\flat = 0,\quad \mu=1, \ldots, k\,, \end{equation} which must be compatible with (\ref{ELSmixIstat2},c). From \eqref{newELIstat} we conclude that all components of $\I$ with indices from three different distributions vanish.
Then, by \eqref{ELSmixIstat2}, assumption $\tr_\mu^\bot\I=0$ for all $\mu$, and since $\I$ corresponds to a statistical connection, the only nonzero components of such $\I$ appear when its three indices belong to the same distribution. Let $n_\mu\ge3$ for some $\mu$, then the number $\frac12\,n_\mu(n_\mu+1)$ of independent equations in \eqref{ELamongstatistical-A} is smaller than the number $\frac{n_\mu (n_\mu -1)(n_\mu -2)}{6} + n_\mu (n_\mu -1) + n_\mu$ of independent components of $\I$ along $\mD_\mu$
(components along $\mD_\nu$ for all $\nu\ne\mu$ can be chosen to be zero). For example, if $n_\mu=3$, then $\frac12\,n_\mu(n_\mu+1)=6$ and (given its symmetries $\I^*=\I=\I^\wedge$) $\I$ with zero trace on $\mD_\mu$ has $10-1=9$ independent components along $\mD_\mu$ and \eqref{ELamongstatistical-A} gives 6 equations. If $n_\mu=2$ and ${\tilde H}_\mu=0$ for some $\mu$, then there are no independent equations in \eqref{ELamongstatistical-A} for such $\mu$, but there are $4-1=3$ independent components of $\I$ along $\mD_\mu$. \end{proof}
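\begin{Remark} \rm The component count used above is simply the dimension of the space of totally symmetric $3$-tensors on an $n_\mu$-dimensional space:
\[ \frac{n_\mu (n_\mu -1)(n_\mu -2)}{6} + n_\mu (n_\mu -1) + n_\mu = \frac{n_\mu (n_\mu +1)(n_\mu +2)}{6} = \binom{n_\mu+2}{3}\,, \]
where the three summands count the components with three distinct indices, with exactly two equal indices, and with three equal indices, respectively.
\end{Remark}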
\begin{Corollary} Let $\mD_\mu\ (\mu=1,2,3)$ be distributions determined by unit orthonormal vector fields $\xi_1, \xi_2, \xi_3$ on a $3$-dimensional manifold. Then $\I =0$ is the only contorsion of a statistical connection critical for the action \eqref{Eq-Smix-g} with fixed~$g$, with respect to variations of \,$\I$ corresponding to statistical connections. \end{Corollary}
\begin{proof} From \eqref{ELSmixIstat2} we obtain
$\langle \I_{\xi_2} \xi_2, \xi_1 \rangle = \frac{1}{2}\,\langle \xi_1, \I_{\xi_1} \xi_1 + \I_{\xi_3 } \xi_3 \rangle$.
By this and \eqref{ELSmixIstat2}, we~get \begin{eqnarray*} \langle \I_{\xi_3} \xi_3, \xi_1 \rangle
&=& \frac{1}{2}\,\langle \xi_1, \I_{\xi_1} \xi_1 \rangle + \frac{1}{4}\,\langle \I_{\xi_1 } \xi_1, \xi_1 \rangle + \frac{1}{4}\,\langle \I_{\xi_3 } \xi_3, \xi_1 \rangle\, . \end{eqnarray*} Hence, $\langle \I_{\xi_3} \xi_3, \xi_1 \rangle = \langle \I_{\xi_1 } \xi_1, \xi_1 \rangle$. Similarly, $\langle\I_{\xi_2} \xi_2, \xi_1\rangle = \langle\I_{\xi_1}\xi_1, \xi_1\rangle$. Hence, from \eqref{ELSmixIstat1} we obtain
$0 = \langle \xi_1, \tr_1^\perp \I \rangle = 2 \langle \xi_1, \I_{\xi_1} \xi_1 \rangle$.
Similarly, $\langle \I_{\xi_2} \xi_2, \xi_2 \rangle = \langle \I_{\xi_3} \xi_3, \xi_3 \rangle =0$. From \eqref{newELIstat} we get $\langle \I_{\xi_1} \xi_2, \xi_3 \rangle =0$, and by \eqref{ELSmixIstat2} we find \begin{eqnarray*}
\langle \I_{\xi_1} \xi_1, \xi_2 \rangle
=\frac{1}{4}\,\langle \xi_2, \I_{\xi_1} \xi_1 \rangle + \frac{1}{4}\,\langle \xi_2, \I_{\xi_2} \xi_2 \rangle =\frac{1}{4}\,\langle \I_{\xi_1} \xi_1, \xi_2 \rangle\,. \end{eqnarray*} Hence, $\langle \I_{\xi_1} \xi_1, \xi_2 \rangle =0$. Therefore, all components of $\I$ in the frame
$\{ \xi_1,\xi_2,\xi_3 \}$ vanish. \end{proof}
\section{Critical metrics and metric-compatible connections} \label{sec-metricconnections}
In this part, we apply the results of Sections~\ref{sec:adapted-metric} and \ref{sec:contorsion} and consider particular cases of pairs $(g,\I)$ critical for \eqref{Eq-Smix-g}, where $\I$ is the contorsion of a metric-compatible connection for $g$. We restrict ourselves to cases where $\delta_g\bar Q_\mu$, which usually has a complicated form, can be written explicitly using our results for two distributions \cite{rz-3}. First we consider semi-symmetric connections, which are critical only among semi-symmetric connections -- this condition is sufficient to determine them
in terms of the metric. The second case we examine is that of metric connections on 3-Sasaki manifolds, which carry four naturally defined totally geodesic orthogonal distributions.
\subsection{Semi-symmetric connections} \label{sec:contorsion_semi_symmetric}
A useful class of metric-compatible connections is that of semi-symmetric connections, introduced by K.\,Yano~\cite{Yano}.
\begin{Definition}\rm
A linear connection $\bar\nabla$ on $(M,g)$ is said to be \textit{semi-symmetric} if \begin{equation}\label{Uconnection}
\bar\nabla_XY=\nabla_XY + \<U, Y\rangle X -\<X,Y\>U,\quad X,Y\in\mathfrak{X}_M\,, \end{equation} where $U$
is a given vector field on $M$.
In this case,
$\I_XY=\<U, Y\rangle X -\<X,Y\>U$\,.
\end{Definition}
\begin{Theorem}
A pair $(g, U)$, where $g$ is an adapted metric on $(M;\mD_1,\ldots,\mD_k)$ and $U$ is a vector field corresponding to a semi-symmet\-ric connection
on $M$, is critical for \eqref{Eq-Smix-g} with respect to adapted variations of metric preserving the volume of $\Omega$ if and only if the following Euler-Lagrange equations with ${\delta Q}_\mu$ from Proposition~\ref{C-vr-terms}
are satisfied for all $\mu\in\{1,\ldots,k\}$ and some~$\lambda\in\mathbb{R}$: \begin{eqnarray}\label{UELD} \nonumber &&{\delta Q}_\mu -\frac12\,\big({n_\mu^\perp(n_\mu-1)} + \sum\nolimits_{\,\nu\ne\mu} n_\nu(n_\nu^\perp-1)\big) P_\mu U^\flat\otimes P_\mu U^\flat \\ \nonumber && +\,\frac14\, \Div\big( (n_\mu-n_\mu^\bot) P_\mu^\bot U + \sum\nolimits_{\,\nu\ne\mu} (n_\nu^\bot-n_\nu) P_\nu U\big) g_\mu \\ && +\,\big(\,{\rm S}_{\,\mD_1,\ldots,\mD_k} -\frac12\,\Div \sum\nolimits_{\nu} ( - n_\nu^\perp P_\nu U - n_\nu P_\nu^\perp U + H_\nu +\tilde{H}_\nu) +\lambda\big) g_\mu = 0\,.
\end{eqnarray} \end{Theorem}
\begin{proof} Let $\bar\nabla$ be a semi-symmetric connection on $(M,g,\mD_\mu,\mD_\mu^\bot)$, then \eqref{E-barQ} reduces to \begin{equation}\label{QforUconnection} -2\,\bar Q(\mD_\nu,g,U) =(n_\nu-n_\nu^\bot) \langle U,H_\nu - \tilde H_\nu\rangle + n_\nu^\bot n_\nu \<U,\,U\rangle -n_\nu\<P^\bot_\nu U, P^\bot_\nu U\rangle -n_\nu^\bot\langle P_\nu U, P_\nu U\rangle\,, \end{equation} see \cite[Lemma~6(a)]{rz-3} for $k=2$. For a $\mD_\mu$-variation of metric, up to divergences of compactly supported vector fields we get (see \cite[Lemma~6(b)]{rz-3} for $k=2$), \[
\partial_t\bar Q(\mD_\nu,g_t,U)|_{\,t=0} =\bigg\{ \begin{array}{cc} \langle{B}_\mu, \frac14\,(n_\nu^\bot-n_\nu)(\Div P_\nu U) g_\mu -\frac12\,n_\nu(n_\nu^\bot-1)P_\nu^\perp U^\flat\otimes P_\nu^\perp U^\flat\rangle, & \nu\ne\mu\,, \\
\hskip-2mm\langle{B}_\mu, \frac14(n_\mu-n_\mu^\bot)(\Div P_\mu^\bot U) g_\mu -\frac12n_\mu^\perp(n_\mu-1) P_\mu U^\flat\otimes P_\mu U^\flat\rangle & \nu = \mu\,. \end{array} \] From \cite[Lemma 6]{rz-3} we get
$P_\mu \tr_\mu^\perp \I = -n_\mu^\perp P_\mu U$
and
$P^\perp_\mu \tr_\mu \I = -n_\mu P_\mu^\perp U$\,.
Using the above and \eqref{E-dt-barQ} in \eqref{ElmixDDvp-b}, we get \eqref{UELD}.
\end{proof}
\begin{Remark}\rm
One can present \eqref{UELD} in the equivalent form of \eqref{E-geom}, see \cite[Section~3.4]{rz-3} for $k=2$, using the Ricci type tensor $\overline{\mathcal{Ric}}_{\,\mD\,|\,\mD_\mu\times\mD_\mu} = -{\delta Q}_\mu -{\delta_g\bar Q}_\mu +\rho_\mu\,g_\mu$, see \eqref{E-main-0ij-kk}.
\end{Remark}
We consider variations of a semi-symmetric connection only among connections that also satisfy \eqref{Uconnection} and obtain the following result.
\begin{Corollary} A semi-symmetric connection defined by $U$ is critical for the action \eqref{Eq-Smix-g} with fixed $g$, with respect to variations among semi-symmetric connections if and only if \begin{equation} \label{Ucritcontorsion} \sum\nolimits_{\,\mu} \big( (n_\mu - n_\mu^\perp) ( H_\mu - {\tilde H}_\mu ) + 2 n_\mu (n_\mu^\perp - 1) P_\mu^\perp U + 2n_\mu^\perp (n_\mu -1) P_\mu U\big) =0\,. \end{equation} \end{Corollary}
\begin{proof} The proof follows from \eqref{E-Q1Q2-gen} and \eqref{QforUconnection}, similarly as in \cite[Proposition 10]{rz-3}. \end{proof}
\begin{Corollary} \label{corsemisymmetricharmdim} Let $k>2$ or $n_1+n_2 >2$. If all $\mD_\mu$ are harmonic or all $\mD_\mu$ have equal dimension, then the Levi-Civita connection is the only semi-symmetric connection critical for the action \eqref{Eq-Smix-g} with fixed metric, with respect to variations among semi-symmetric connections. \end{Corollary}
\begin{proof} First, we assume that all $\mD_\mu$ are harmonic. Then we get from \eqref{Ucritcontorsion} for each $\mu =1,\ldots, k$: \[
\big( 2n_\mu^\perp (n_\mu -1) + \sum\nolimits_{\,\nu \neq \mu} 2 n_\nu (n_\nu^\perp - 1) \big) P_\mu U =0\,. \] The coefficient before $P_\mu U$ is non-negative and vanishes only for $k=2$ and $n_1=n_2=1$, which contradicts the assumption about dimensions of distributions. Hence, it follows that $P_\mu U=0$ for all $\mu =1 ,\ldots , k$, and therefore $U=0$.
For the second case, let $\dim \mD_\mu = m$ for all $\mu =1 ,\ldots , k$; then $\dim M = km$ and, from \eqref{calH} and $P_\nu^\perp U = U - P_\nu U$, we obtain from \eqref{Ucritcontorsion} \begin{equation} \label{Ucritcontorsion1}
\big( 2m ( km - m -1) (k-1) + 2 (km-m)(m-1) \big) U = 0\,. \end{equation} Since the coefficient before $U$ is $2m(k-1)(km-2)\ne0$, for $k>2$ or $m>1$ we get $U=0$. \end{proof}
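\begin{Remark}\rm
For the reader's convenience, the factorization of the coefficient in \eqref{Ucritcontorsion1} is elementary: since $km-m = m(k-1)$, we have
\[
2m(km-m-1)(k-1) + 2(km-m)(m-1) = 2m(k-1)\big((km-m-1) + (m-1)\big) = 2m(k-1)(km-2)\,.
\]
\end{Remark}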
From Proposition~\ref{propEinstein} and \eqref{Ucritcontorsion1}, similarly as in the proof of Corollary \ref{corsemisymmetricharmdim}
we recover the following result, obtained in \cite{Fasihi}.
\begin{Corollary}
If $\dim M>2$, then the Levi-Civita connection is the only semi-symmetric connection critical for the Einstein-Hilbert action. \end{Corollary}
By \eqref{Ucritcontorsion}, a connection critical among semi-symmetric connections is determined by the metric. Solving \eqref{Ucritcontorsion} for $U$ and inserting the result in \eqref{UELD} yields an equation that determines metrics $g$ admitting semi-symmetric contorsions $\I$ such that the pair $(g,\I)$ is critical for \eqref{Eq-Smix-g} restricted to adapted metrics and contorsions of semi-symmetric connections.
\subsection{3-Sasaki manifolds} \label{sec:5-Sasaki}
In this section, we consider metric-compatible connections on a 3-Sasaki manifold $(M^{4m+3},g)$ with $m>0$ (see \cite{blair,bg}), which has four naturally defined orthogonal distributions. Let $\mD_1,\mD_2,\mD_3$ be the one-dimensional distributions spanned by Reeb fields $\xi_1, \xi_2, \xi_3$, and let $\mD_4$ be their orthogo\-nal complement. We~will write ${\tilde T}^\sharp_{\xi_a }$ instead of ${\tilde T}^\sharp_{a, \xi_a}$ for $a\le 3$. From \cite{rz-2} we get \begin{equation}\label{Eq-3Sas}
\nabla_Y \xi_a = - {\tilde T}^\sharp_{\xi_a} Y\quad (Y \in TM,\quad a\le 3)\,, \end{equation}
and $\tT_{\xi_a} \tT_{\xi_a} = - \id |_{\,\mD_a^\perp }$, $\tT_{\xi_a}{\xi_b} = {\xi_c}$ for even permutation of $a,b,c$. Using \eqref{Eq-3Sas}, we can formulate
\begin{Lemma} \label{lemSasakigeodint} For a 3-Sasaki manifold $(M,g)$, the distributions $\mD_\mu\ (\mu\le 4)$ are totally geodesic, pairwise mixed totally geodesic, and $\mD_4$ is mixed integrable with every $\mD_a\ (a\le 3)$. \end{Lemma}
Let ${\rm Sym}(F (X,Y)) = \frac{1}{2} ( F(X,Y) + F(Y,X))$ for all $X,Y \in TM$ and all $(s,2)$-tensors $F$.
In what follows we consider terms appearing in \cite{rz-3} for contact manifolds, which will also appear in our Euler-Lagrange equations below. We define the following tensors on $M$:
\begin{eqnarray*} && \phi_\nu (X,Y) = (\I + \I^\wedge)_{P_\nu^\perp X}\, P_\nu^\perp Y, \quad \phi_\nu^\top (X,Y) = P_\nu \phi_\nu (X,Y)\,,\\ && {\tilde \phi}_\nu (X,Y) = (\I + \I^\wedge)_{ P_\nu^\top X}\, P_\nu^\top Y, \quad {\tilde \phi}_\nu^\perp (X,Y) = P_\nu^\perp {\tilde \phi}_\nu (X,Y)\,,\\
&& \chi_\nu (X,Y) = {\rm Sym} \big( \sum \<X, \I_{{\cal E}_{\nu,j}}E_{\nu,a}\rangle \langle Y, {\tilde T}^\sharp_{\nu, E_{\nu, a}}{\cal E}_{\nu,j}\rangle \big) \,,\\ && {\tilde \chi}_\mu (X,Y) = {\rm Sym} \big( \sum \langle X, \I_{E_{\mu,j}}{\cal E}_{\mu,a}\rangle \langle Y, T^\sharp_{\mu, {\cal E}_{\mu,a}}E_{\mu, j} \rangle \big) \,. \end{eqnarray*} Next, we calculate values of some previously defined tensors on a 3-Sasaki manifold.
\begin{Lemma}\label{lemmadeltaQbarterms} Let $(M,g)$ be a 3-Sasaki manifold, and let $\I$ be the contorsion tensor critical for the action \eqref{Eq-Smix-g} with fixed $g$. Then for all $X,Y \in \mD_4$ and
$a=1,2,3$: \begin{subequations} \begin{eqnarray}\label{tildechi4} && {\tilde\chi}_4(X,Y) = \frac{1}{2} {\rm Sym} \big( \sum\nolimits_{b}\langle{\tilde\phi}_4(\tT_{\xi_b} Y, X) , \,\xi_b\rangle \big) + 3\,\<X,Y\rangle\,,\\ \label{chiaXY} && \chi_a (X,Y) = \frac{1}{2} {\rm Sym}\big( \langle {\tilde \phi}_4 (\tT_{\xi_a} Y, X ), \xi_a\rangle \big) + \langle X, Y\rangle \, \\ \label{widetildeTa} && \widetilde{\cal T}^\flat_a = -g_a^\perp, \quad {\cal T}^\flat_4 = -3\,g_4\,, \\
\label{UpsilonTa} && \Upsilon_{T_a, T_a}=0, \quad \Upsilon_{ {\tilde T}_4, {\tilde T}_4}= 0\,, \\
\label{chi23xi1} && (\chi_2 + \chi_3)(\xi_1,\xi_1) = 0\,, \quad (\chi_1 + \chi_3)(\xi_2,\xi_2) = 0\,, \quad (\chi_1 + \chi_2)(\xi_3, \xi_3) = 0\,, \\
\label{chi4xia} && \chi_4 (\xi_a, \xi_a)=0, \quad {\tilde \chi}_a (\xi_a, \xi_a) =0\,, \\
\label{widetildecalT4} && \widetilde{\cal T}^\flat_4 (\xi_a, \xi_a) =0, \quad {\cal T}^\flat_a (\xi_a, \xi_a) = -2\,, \\
\label{UpsilonT4xia} && \Upsilon_{ T_4, T_4}(\xi_a, \xi_a) = 2\,n_4, \quad \Upsilon_{ {\tilde T}_a, {\tilde T}_a}(\xi_a, \xi_a ) = 4 + 2\,n_4\,. \end{eqnarray} \end{subequations} \end{Lemma}
\begin{proof} Let $\{\xi_1, \xi_2, \xi_3, E_{1}, \ldots, E_{n_4} \}$ be an orthonormal basis on $M$. For $X,Y \in \mD_4$ we get \[ {\tilde \chi}_4 (X,Y) = \frac{1}{2} \sum\nolimits_{a,j} \big(\langle X, \I_{ E_{j}}\xi_a\rangle \langle Y, T^\sharp_{4, \xi_a}E_{j} \rangle + \langle Y, \I_{ E_{j}}\xi_a\rangle \langle X, T^\sharp_{4, \xi_a}E_{j}\rangle \big), \] and by \eqref{ELconnectionNewI8} we obtain for all $a=1,2,3$: \begin{eqnarray*} && \sum\nolimits_{j} \langle X, \I_{ E_{j}}\xi_a \rangle \langle Y, T^\sharp_{4, \xi_a}E_{j} \rangle = -\sum\nolimits_{j} \langle \I_{ E_{j}}X, \xi_a \rangle \langle Y, T^\sharp_{4, \xi_a}E_{j} \rangle \\ && = -\frac{1}{2}\sum\nolimits_{j} \big(\langle \I_{ E_{j}}X + \I_X E_{j}, \xi_a\rangle \langle Y, T^\sharp_{4, \xi_a}E_{j} \rangle +\langle \I_{ E_{j}}X - \I_X E_{j}, \xi_a\rangle \langle Y, T^\sharp_{4, \xi_a}E_{j} \rangle\big) \\ && = \sum\nolimits_{j}\big( - \frac{1}{2}\,\langle {\tilde \phi}_4 (E_{j}, X ), \xi_a\rangle \langle \xi_a, T_4 (E_{j}, Y)\rangle + \langle T_4 ({ E_{j} }, X ), \xi_a\rangle \langle \xi_a, T_4 (E_{j}, Y )\rangle\big) \\
&& = \big(\frac{1}{2}\,\langle {\tilde \phi}_4 (\tT_{\xi_a} Y, X ), \xi_a\rangle + \langle \tT_{\xi_a} X, \tT_{\xi_a} Y\rangle\big) = \frac{1}{2} \langle {\tilde \phi}_4 (\tT_{\xi_a} Y, X ), \xi_a\rangle + \<X,Y\rangle\,. \end{eqnarray*} Summing the above over $a=1,2,3$ we get \eqref{tildechi4}.
Equations \eqref{widetildeTa}$_1$ follow from $\tT_{\xi_a} \tT_{\xi_a} = - \id$, similarly, we obtain \eqref{widetildeTa}$_2$, as for $a\le 3$ we get
${\cal T}^\flat_4 = \sum\nolimits_{\,a} (\tT_{\xi_a} \tT_{\xi_a} )^\flat = -3\, g_4$.
Next, we get $\chi_1(X,Y)$ (and similarly, $\chi_2(X,Y)$ and $\chi_3(X,Y)$): \begin{eqnarray*} &&\hskip-4mm \chi_1 (X,Y) = {\rm Sym}\big( \frac{1}{2} \langle {\tilde \phi}_4 (\tT_{\xi_1} Y, X ), \xi_1\rangle {+} \<P_4 X, P_4 Y\rangle
{+}\langle X, \I_{\xi_2}\xi_1\rangle \langle Y, \tT_{\xi_1}\xi_2\rangle {+}\langle X, \I_{\xi_3} \xi_1\rangle\langle Y, \tT_{\xi_1}\xi_3\rangle\big)\\ &&\hskip-6mm = {\rm Sym}\big( \frac{1}{2} \langle {\tilde \phi}_4 (\tT_{\xi_1} Y, X ), \xi_1\rangle + \<P_4 X, P_4 Y\rangle
+\langle X, \I_{\xi_2} \xi_1\rangle \langle Y, \xi_3\rangle -\langle X, \I_{\xi_3} \xi_1\rangle \langle Y, \xi_2\rangle \big),\quad X,Y \perp \mD_1\,.
\end{eqnarray*} Hence,
we get \eqref{chiaXY}. We get \eqref{chi23xi1} from the above computations and \eqref{ELI5XYZ} for metric-compatible connections, which yields for all $a,b,c =1,2,3$, $a \ne b \ne c \ne a$ and all $X \in \mD_4$: \begin{equation} \label{3Sasaki3different} 0 = \langle \I_X \xi_a - \I_{\xi_a} X, \xi_b\rangle = \langle \I_{\xi_b} \xi_a - \I_{\xi_a}{\xi_b}, X\rangle = \langle \I_{\xi_b} \xi_a - \I_{\xi_a}{\xi_b}, \xi_c\rangle\, . \end{equation} We obtain \eqref{chi4xia}$_1$ and \eqref{widetildecalT4}$_1$ from $\tT_{4,X} =0$ for all $X \in \mD_4$.
We obtain for $a,b,c \in \{ 1,2,3 \}$, $a \ne b \ne c \ne a$: \[
{\cal T}^\flat_a (\xi_a, \xi_a) = \sum\nolimits_{\,i} \langle \tT_{4, E_{i}} \tT_{4, E_{i}} \xi_a, \xi_a\rangle
+ \langle \tT_{\xi_b} \tT_{\xi_b} \xi_a, \xi_a\rangle + \langle \tT_{\xi_c} \tT_{\xi_c} \xi_a, \xi_a\rangle = -2\,, \] which is \eqref{widetildecalT4}$_2$. For $a\le 3$ \eqref{UpsilonTa}$_1$ follows from the fact that each $\mD_a$ is integrable.
For $a\le 3$ we obtain \eqref{UpsilonT4xia}$_1$ as follows: \begin{eqnarray*} && \frac{1}{2} \Upsilon_{ T_4, T_4}(\xi_a, \xi_a) = \sum\nolimits_{i,j} \langle \xi_a, T_a(E_{i}, E_{j} )\rangle \langle\xi_a, T_a(E_{i}, E_{j} )\rangle \\ &&=\sum\nolimits_{i,j} \langle \tT_{\xi_a} E_{i}, E_{j}\rangle \langle \tT_{\xi_a} E_{i}, E_{j}\rangle
= - \sum\nolimits_{i} \langle \tT_{\xi_a} \tT_{\xi_a} E_{i}, E_{i} \rangle = n_4\,, \end{eqnarray*}
as $\tT_{\xi_a}\tT_{\xi_a} = - \id |_{\,\mD^\perp_a}$.
For \eqref{UpsilonTa}$_2$, let $X,Y \in \mD_4$, then \[ \frac{1}{2}\Upsilon_{ {\tilde T}_4, {\tilde T}_4}(X,Y) = \sum\nolimits_{a,b} \langle X, {\tilde T}_4 (\xi_a, \xi_b)\rangle \langle Y, {\tilde T}_4 (\xi_a, \xi_b)\rangle=0\,, \] because $[\xi_a,\xi_b] \perp \mD_4$.
For \eqref{UpsilonT4xia}$_2$, let $a,b,c=1,2,3$ and $a \ne b \ne c \ne a$, then \begin{eqnarray*} \frac{1}{2}\,\Upsilon_{{\tilde T}_a, {\tilde T}_a}(\xi_a, \xi_a ) &=& \sum\nolimits_{b,c} \langle\xi_a, {\tilde T}_a (\xi_b, \xi_c)\rangle \langle\xi_a, {\tilde T}_a (\xi_b, \xi_c)\rangle \\ && +\sum\nolimits_{i,j} \langle\xi_a, {\tilde T}_a (E_{i}, E_{j})\rangle \langle\xi_a, {\tilde T}_a (E_{i}, E_{j} )\rangle
= 2 + n_4\, . \end{eqnarray*} For \eqref{chi4xia}$_2$, let $a\le 3$, ${\cal E}_{a,j} \in \mD_a^\perp$, then
${\tilde \chi}_a (\xi_a, \xi_a) = \sum\nolimits_{j} \langle \xi_a, \I_{\xi_a}{\cal E}_{a,j}\rangle \langle \xi_a, T^\sharp_{a, {\cal E}_{a,j}} \xi_a\rangle =0$.
\end{proof}
By Lemma~\ref{lemSasakigeodint}, on a 3-Sasaki manifold $(M,g)$, each $\mD_\mu\ (\mu\le 4)$ is totally geodesic and has totally geodesic orthogonal complement. Thus, the assumptions of \cite[Lemma~4]{rz-3} are satisfied.
We additionally assume the following condition: \begin{equation} \label{assumptionXYZ} \langle \I_{X}Y, Z \rangle=0\,, \quad X \in \mD_\mu ,\ Y \in \mD_\rho,\ Z \in \mD_\xi , \quad \mu \neq \rho \neq \xi \neq \mu\,, \end{equation}
which is consistent with \eqref{ELI5XYZ}. Note that from \eqref{3Sasaki3different} it follows that, e.g., characteristic connection on 7-dimensional 3-Sasaki manifolds \cite{AgricolaFriedrich} does not satisfy \eqref{ELI5XYZ}. Then, following the proof of \cite[Lemma~4]{rz-3}, we obtain for each $\mD_\mu$ and for a metric-compatible connection $\nabla+\I$: \begin{eqnarray*} 2\delta_g\bar Q_\mu &=& \sum\nolimits_{\nu\ne\mu}\big(\langle\phi_\nu, \frac{1}{2} \tr_\nu^\top \I\rangle - 2 \Div\phi_\nu^\top + 7\,\chi_\nu -\Div (P_\nu \tr^\perp_\nu \I)\,g_\nu^\perp - 2\,\widetilde{\cal T}_\nu^\flat - \frac{3}{2}\Upsilon_{T_\nu,T_\nu}\big) \\
&& +\, \langle {\tilde \phi}_\mu, \frac{1}{2} \tr_\mu^\perp \I\rangle - 2 \Div {\tilde \phi}_\mu^\perp + 7 {\tilde \chi}_\mu -\Div (P_\mu^\perp \tr^\top_\mu \I )\,g_\mu - 2\, {\cal T }_\mu^\flat - \frac{3}{2} \Upsilon_{{\tilde T}_\mu,{\tilde T}_\mu}, \end{eqnarray*} and, from Proposition \ref{C-vr-terms}, we get \begin{equation} \label{deltaQmetric} \delta {Q}_\mu = \sum\nolimits_{\nu\ne\mu} \big(2\,\widetilde{\cal T }_\nu^\flat + \frac{1}{2} \Upsilon_{T_\nu,T_\nu}\big) + 2\,{\cal T }_\mu^\flat + \frac{1}{2} \Upsilon_{{\tilde T}_\mu,{\tilde T}_\mu}, \end{equation}
therefore, for $\mu\le 4$: \begin{eqnarray} \label{deltaQQbarSasaki} && 2\,( \delta {Q}_\mu + \delta_g \bar Q_\mu) = \!\sum\limits_{\nu\ne\mu}\big(\langle \phi_\nu, \frac{1}{2}\tr_\nu^\top \I\rangle {-} 2\Div\phi_\nu^\top {+} 7\chi_\nu {-} \Div (P_\nu \tr^\perp_\nu \I)\,g_\nu^\perp {+} 2\widetilde{\cal T }_\nu^\flat {-}\frac{1}{2}\Upsilon_{T_\nu,T_\nu}\big)\nonumber \\
&& +\, \langle {\tilde \phi}_\mu, \frac{1}{2} \tr_\mu^\perp \I\rangle - 2 \Div {\tilde \phi}_\mu^\perp + 7 {\tilde \chi}_\mu -\Div (P_\mu^\perp \tr^\top_\mu \I ) \,g_\mu + 2\, {\cal T }_\mu^\flat -\frac{1}{2} \Upsilon_{{\tilde T}_\mu,{\tilde T}_\mu} . \end{eqnarray} Using the above formulas, we obtain the following presentation of the Euler-Lagrange equations for the action \eqref{Eq-Smix-g} on a 3-Sasaki manifold.
\begin{Theorem} Let $(M,g)$ be a 3-Sasaki manifold, and $\I$ be the contorsion tensor critical for the action \eqref{Eq-Smix-g} with fixed $g$ such that \eqref{assumptionXYZ} holds.
Then the pair $(g,\I)$ is critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of $\I$ if and only if all equations of Theorem~\ref{corI} hold, and the following Euler-Lagrange equations \eqref{E-delta-g-J} are valid for some $\lambda\in\mathbb{R}$, all $X,Y \in \mD_4$ and $a=1,2,3$: \begin{subequations} \begin{eqnarray} \label{ELSasakiD4} && \sum\nolimits_{\nu \ne 4} \big(\langle \phi_\nu, \frac{1}{2} \tr_\nu^\top \I\rangle - 2 \Div \phi_\nu^\top \big) (X,Y)
+ 7 \sum\nolimits_{\,a} {\rm Sym} ( \langle {\tilde \phi}_4 (\tT_{\xi_a} Y, X ) , \xi_a\rangle ) \nonumber\\ && +\,30\, \langle X,Y \rangle
- \sum\nolimits_{\nu \ne 4} \Div (P_\nu \tr^\perp_\nu \I )\, \langle X,Y \rangle \nonumber\\
&& +\,\langle {\tilde \phi}_4, \frac{1}{2} \tr_4^\perp \I\rangle (X,Y) - 2 \Div {\tilde \phi}_4^\perp (X,Y)
- \Div( P^\perp_4 \tr^\top_4 \I )\, \langle X,Y \rangle \nonumber\\ && +\,(2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} -\sum\nolimits_{\,i}\Div( \,P_i\tr_{\,i}^\bot \I + \,P_i^\bot\tr_{\,i}^\top \I ) )g_4(X,Y) = \lambda\, \langle X,Y \rangle \,,\\
\label{ELSasakiDa} && -\,9 \Div (P_a^\perp\I_{\xi_a} \xi_a)
- \sum\nolimits_{\nu \ne a} \Div (P_\nu \tr^\perp_\nu \I ) -10 -2n_4 \nonumber\\ && +\,2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} -\sum\nolimits_{\,i}\Div( \,P_i\tr_{\,i}^\bot \I + \,P_i^\bot\tr_{\,i}^\top \I ) = \lambda\,.
\end{eqnarray} \end{subequations} \end{Theorem}
\begin{proof}
The Euler-Lagrange equation \eqref{ELSasakiD4} follows from Lemma~\ref{lemmadeltaQbarterms}, \eqref{deltaQQbarSasaki} and Theorem \ref{T-main01}. To prove \eqref{ELSasakiDa} for $a=1$, we obtain from \eqref{deltaQQbarSasaki} evaluated on $(\xi_1, \xi_1)$ and Lemma \ref{lemmadeltaQbarterms} the following Euler-Lagrange equation: \begin{eqnarray}\label{ELD1a} && \sum\nolimits_{\nu \ne 1} \langle \phi_\nu (\xi_1,\xi_1), \frac{1}{2} \tr_\nu^\top \I\rangle - 2(\Div\phi_\nu^\top)(\xi_1,\xi_1) - \Div (P_\nu \tr^\perp_\nu \I ) \nonumber \\ && -\,10 -2n_4 +\langle{\tilde\phi}_1 (\xi_1,\xi_1), \frac{1}{2}\,\tr_1^\perp \I\rangle - 2(\Div{\tilde \phi}_1^\perp )(\xi_1,\xi_1) - \Div (P_1^\perp \tr^\top_1 \I ) \nonumber\\ && +\,2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} -\sum\nolimits_{\,i}\Div( \,P_i\tr_{\,i}^\bot \I + \,P_i^\bot\tr_{\,i}^\top \I ) = \lambda\,. \end{eqnarray} For all $\nu \ne 1$ we get $\phi_\nu (\xi_1,\xi_1) = 2\,\I_{\xi_1} \xi_1 = {\tilde \phi}_1 (\xi_1,\xi_1)$. Since $\I$ is a contorsion tensor of a metric connection and $\mD_1$ is one-dimensional, we obtain $\I_{\xi_1} \xi_1 = P_1^\perp (\I_{\xi_1} \xi_1 )$. On the other hand, from \eqref{ELconnectionNewI7} we get $P_1^\perp(\tr^\perp_1 \I ) =0$; hence, \[ \sum\nolimits_{\nu \ne 1}\langle\phi_\nu, \frac{1}{2}\,\tr_\nu^\top \I\rangle +\langle{\tilde\phi}_1, \frac{1}{2}\,\tr_1^\perp\I\rangle = 2\langle\I_{\xi_1}\xi_1, \tr^\perp_1\I\rangle=0\,. \] Hence, \eqref{ELD1a} can be simplified to \begin{eqnarray} \label{ELD1b} && -\sum\nolimits_{\nu \ne 1} \big( 2\,(\Div \phi_\nu^\top) (\xi_1,\xi_1) + \Div (P_\nu \tr^\perp_\nu \I ) \big)
-10 -2n_4 - 2 (\Div {\tilde \phi}_1^\perp)(\xi_1,\xi_1) \nonumber\\ && -\,\Div (P_1^\perp \tr^\top_1 \I )
+\,2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} -\sum\nolimits_{\,i}\Div( \,P_i\tr_{\,i}^\bot \I + \,P_i^\bot\tr_{\,i}^\top \I ) = \lambda\,. \end{eqnarray} Using \begin{eqnarray*} \Div \phi_2^\top (\xi_1, \xi_1) &=& 2\Div (P_2^\top\I_{\xi_1}\xi_1 ) - 2 \langle \phi_2 (\nabla_{\xi_2} \xi_1, \xi_1 ), \xi_2\rangle - 2\sum\nolimits_{\,j} \langle \phi_2^\top (\nabla_{ E_{j}}\xi_1, \xi_1 ), E_{j}\rangle \\
&=& 2\Div(P_2\I_{\xi_1}\xi_1 ) + 2 \langle \I_{\xi_3}\xi_1 + \I_{\xi_1}\xi_3, \xi_2\rangle,\\
\Div\phi_3^\top (\xi_1, \xi_1)
&=& 2\Div(P_3\I_{\xi_1}\xi_1 ) - 2 \langle \I_{\xi_2}\xi_1 + \I_{\xi_1}\xi_2, \xi_3\rangle,\\
\Div\phi_4^\top (\xi_1, \xi_1)
&=& 2\Div( P_4\I_{\xi_1}\xi_1 ) + 2 \sum\nolimits_{\,j} \langle \phi_4 (\tT_{\xi_1} E_{j}, \xi_1 ), E_{j}\rangle = 2 \Div( P_4\I_{\xi_1}\xi_1 )\,, \end{eqnarray*} (because $\tT_{\xi_1} E_{j} \in \mD_4$), \eqref{assumptionXYZ} and
$\Div {\tilde \phi}_1^\perp (\xi_1, \xi_1) = 2 \Div( P_1^\perp\I_{\xi_1} \xi_1 )$,
since $\nabla_Z\,\xi_1 \perp \xi_1$ for all $Z \in TM$, we get \eqref{ELSasakiDa} for $a=1$ from \eqref{ELD1b}. Similarly,
we get \eqref{ELSasakiDa} for $a=2,3$.
\end{proof}
\begin{Corollary}\label{Cor-3} Let $(M,g)$ be a 3-Sasaki manifold, and a pair $(g,\I)$ be critical for the action \eqref{Eq-Smix-g} with respect to adapted variations of $g$ preserving the volume of $(M,g)$ and all variations of \,$\I$, and let $\I$ be such that \eqref{assumptionXYZ} holds. Then the following equations are valid for some $\lambda\in\mathbb{R}$: \begin{subequations} \begin{eqnarray} \label{trELSasakiD4} && - \frac{4+n_4}{n_4} \Div \big( P^\perp_4 \tr^\top_4 \I + \sum\nolimits_{\nu \ne 4} P_\nu \tr^\perp_\nu \I \big) = \lambda-30 \nonumber\\ &&\qquad -\,2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} +\sum\nolimits_{\,\nu}\Div(\,P_\nu\tr_{\,\nu}^\bot \I + \,P_\nu^\bot\tr_{\,\nu}^\top \I )\,, \\ \label{sumELSasakiD123} && -\,9\sum\nolimits_{a=1}^3 \Div(P_a^\perp \I_{\xi_a} \xi_a + \sum\nolimits_{\nu \ne a} P_\nu \tr^\perp_\nu \I ) = \lambda +10 + 2\,n_4 \nonumber\\ &&\qquad -\,2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} + \sum\nolimits_{\,\nu}\Div( \,P_\nu\tr_{\,\nu}^\bot \I + \,P_\nu^\bot\tr_{\,\nu}^\top \I )\, . \end{eqnarray} \end{subequations}
In particular, no manifold admits complete 3-Sasaki structures and metric connections satisfying \eqref{assumptionXYZ} critical for \eqref{Eq-Smix-g} with respect to adapted variations of $g$ and all variations of \,$\I$. \end{Corollary}
\begin{proof}
Using an orthonormal basis $\{ E_j \}$ of $\mD_4$, where $E_{j+n_4/2} = \tT_{\xi_a} E_{j}$ \cite{blair}, we find \[ \sum\nolimits_j {\rm Sym} ( \langle {\tilde \phi}_4 (\tT_{\xi_a} E_{j} , E_{j} ) , \xi_a\rangle ) = 0. \] Hence, taking the trace of the Euler-Lagrange equations \eqref{ELSasakiD4} we get
\begin{eqnarray*} && \sum\nolimits_{\nu \ne 4} \big(\langle \tr^\perp_\nu \I, \tr_\nu^\top \I\rangle - (4+n_4) \Div (P_\nu \tr^\perp_\nu \I)\big)
{+}\langle \tr^\top_4 \I, \tr_4^\perp \I\rangle {-} (4+n_4) \Div( P^\perp_4 \tr^\top_4 \I ) \nonumber\\ && = \big(\lambda-30 -2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} +\sum\nolimits_{\,\nu}\Div( \,P_\nu\tr_{\,\nu}^\bot \I + \,P_\nu^\bot\tr_{\,\nu}^\top \I ) \big)\,n_4\,. \end{eqnarray*} Since $\I$ corresponds to a metric connection, we get $P_\mu\tr_\nu^\top \I
=0 \ (\nu\le 3)$ (as each of these $\mD_\nu$ is one-dimensional and $\langle \xi_\nu, \I_{\xi_\nu} \xi_\nu\rangle =0$), and by \eqref{ELconnectionNewI7} we get $\langle \tr^\perp_\nu \I, \tr_\nu^\top \I\rangle =0\ (\nu\le 3)$. Hence, \begin{eqnarray*} && -\,(4+n_4) \sum\nolimits_{\nu \ne 4} \Div (P_\nu \tr^\perp_\nu \I)
+\langle \tr^\top_4 \I, \tr_4^\perp \I\rangle - (4+n_4) \Div( P^\perp_4 \tr^\top_4 \I ) \nonumber\\ && = \big(\lambda-30 - 2\,\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} + \sum\nolimits_{\,\nu}\Div( \,P_\nu\tr_{\,\nu}^\bot \I + \,P_\nu^\bot\tr_{\,\nu}^\top \I ) \big)\,n_4\,. \end{eqnarray*} From \eqref{ELconnectionNewI7}, we get $P_4^\perp (\tr_4^\perp \I )=0$, thus \eqref{ELconnectionNew2} yields
$\langle \tr^\top_4 \I,\, \tr_4^\perp \I\rangle = \langle \tr^\top_4 \I,\, P_4^\top\tr_4^\perp \I\rangle = 0$,
and \eqref{trELSasakiD4} follows. We obtain \eqref{sumELSasakiD123} as the average of \eqref{ELSasakiDa} for $a=1,2,3$. The last claim follows from the result in \cite{bg}, where completeness of the metric is proved to imply compactness of the manifold, from the divergence theorem for closed manifolds, and from comparing the constants on the right-hand sides of \eqref{trELSasakiD4} and \eqref{sumELSasakiD123}, which cannot both vanish for any dimension $n_4$. \end{proof}
\begin{Example}\rm Let $\I$ be such that \eqref{assumptionXYZ} and all equations of Theorem~\ref{corI} hold, and \[
\I_X X=0,\quad
\I_X Y + \I_Y X = P_4 (\I_{P_4 X} P_4 Y + \I_{P_4 Y} P_4 X),\quad X,Y \in\mathfrak{X}_M\,. \] By \eqref{E-Q1Q2-gen} we get $\overline{\rm S}_{\,\mD_1,\mD_2,\mD_3,\mD_4} = {\rm const}$. Thus, all equations \eqref{ELSasakiD4} and \eqref{ELSasakiDa} for $a=1,2,3$ are valid, but with different $\lambda$'s. Hence, for each $\mu=1, \ldots ,4$ such a pair $(g,\I)$ on a 3-Sasaki manifold $(M,g)$ is critical for $\mD_\mu$-variations preserving the volume of $\Omega$, but not for all adapted variations preserving the volume of $\Omega$. \end{Example}
\begin{Remark} \rm The Euler-Lagrange equation on $\mD_4$ is incompatible with those for $\mD_a\ (a=1,2,3)$ on compact manifolds, see Corollary~\ref{Cor-3}, also when considering \eqref{Eq-Smix-g0}, as the functional of an adapted metric $g$. Indeed, by Lemma~\ref{lemmadeltaQbarterms} and \eqref{deltaQmetric} we get a contradiction in \cite[(3.7)]{r-EH-k}, which, adapted to our notation, reads: \[ - ({\rm S}_{\,\mD_1, \mD_2,\mD_3,\mD_4} +\lambda) g_4 = \delta Q_4 = -12 g_4,\quad -{\rm S}_{\,\mD_1, \mD_2,\mD_3,\mD_4}g_4 + \lambda=\delta Q_a = 2n_4 - 6 \quad (a=1,2,3)\,.
\] However, we can consider the following weighted action: \begin{equation}\label{Eq-Smix-gc} J_{\mD,c} : g \mapsto\int_{M} \big(\sum\nolimits_{\,a=1}^3 {\rm S}_{\,\mD_a, \mD_a^\perp } + c\,{\rm S}_{\,\mD_4, \mD_4^\perp} \big)\,{\rm d}\operatorname{vol}_g \,. \end{equation} Then, a 3-Sasaki metric $g$ on $M^{3+n_4}$ is critical for \eqref{Eq-Smix-gc} if and only if $c=- \frac{ n_4} {n_4 + 6}$.
Indeed, for $\tilde{\rm S} = \sum_{a=1}^3 {\rm S}_{\,\mD_a,\mD^\bot_a} + c\,{\rm S}_{\,\mD_4,\mD^\bot_4}$ with $c\in\mathbb{R}$, we get
$\delta Q_4 = -6g_4 -6 c g_4 = \lambda\,g_4$,
so $\lambda = -6(1+c)$, and for $a=1,2,3$ we get
$\delta Q_a = -4 +c n_4 -4 +2+n_4 = \lambda$,
so
$\lambda = n_4 (c+1) - 6$.
\end{Remark}
\begin{Corollary} Let $\mD_\mu\ (\mu=1,2,3)$ be distributions determined by orthonormal vector fields $\xi_1, \xi_2, \xi_3$ on a unit sphere $(S^3,g)$ with the metric $g$ induced from the Euclidean space $\mathbb{R}^4$, such that $\nabla_{\xi_a} \xi_b = \xi_c$ for even permutation of $a,b,c$. Then $\nabla$ is the only metric-compatible connection whose contorsion tensor is critical for the action \eqref{Eq-Smix-g} with fixed $g$.
\end{Corollary}
\begin{proof} From \eqref{3Sasaki3different} and the fact that $\I$ corresponds to a metric connection, we find the following: \begin{eqnarray*} \langle \I_{\xi_3} \xi_1, \xi_2 \rangle = \langle \I_{\xi_1} \xi_3, \xi_2 \rangle = - \langle \I_{\xi_1} \xi_2, \xi_3 \rangle\,,\\ \langle \I_{\xi_2} \xi_3, \xi_1 \rangle = \langle \I_{\xi_3} \xi_2, \xi_1 \rangle = - \langle \I_{\xi_3} \xi_1, \xi_2 \rangle = \langle \I_{\xi_1} \xi_2, \xi_3 \rangle\,,\\ \langle \I_{\xi_1} \xi_2, \xi_3 \rangle = \langle \I_{\xi_2} \xi_1, \xi_3 \rangle = - \langle \I_{\xi_2} \xi_3, \xi_1 \rangle = - \langle \I_{\xi_1} \xi_2, \xi_3 \rangle = 0\,. \end{eqnarray*} Hence, $\langle \I_{\xi_a} \xi_b, \xi_c \rangle = 0$ for $a \ne b \ne c \ne a$, $a,b,c \in \{1,2,3\}$. Also, for a $3$-Sasaki structure we get $H_a =0$ for $a=1,2,3$; using $\I = -\I^*$ and \eqref{ELconnectionNewI7}, see \eqref{E-delta-I-J}, we obtain \[ 0 = P_3 \I_{\xi_2} \xi_2 + P_2 \I_{\xi_3} \xi_3 = P_3 \I_{\xi_1} \xi_1 + P_1 \I_{\xi_3} \xi_3 = P_2 \I_{\xi_1} \xi_1 + P_1 \I_{\xi_2} \xi_2\, .
\] It follows that $\I_{\xi_1 }\xi_1 = -P_1(\I_{\xi_2}\xi_2 + \I_{\xi_3}\xi_3) = 0$, because by $\I = -\I^*$ we have $P_1\I_{\xi_1}\xi_1 = 0$. Similarly, $\I_{\xi_2}\xi_2 = \I_{\xi_3} \xi_3 =0$. From $\I = -\I^*$ we get $\langle \I_{\xi_a} \xi_b, \xi_b\rangle = 0$ for $a, b\in\{1,2,3\}$, thus all components of critical tensor $\I$ vanish. \end{proof}
\end{document}
Let $r$ be a real number, $|r| < 2,$ and let $z$ be a complex number such that
\[z + \frac{1}{z} = r.\]Find $|z|.$
From the equation $z + \frac{1}{z} = r,$ we get $z^2 + 1 = rz,$ so
\[z^2 - rz + 1 = 0.\]By the quadratic formula,
\[z = \frac{r \pm \sqrt{r^2 - 4}}{2} = \frac{r \pm i \sqrt{4 - r^2}}{2}.\]Then
\[|z| = \sqrt{\left( \frac{r}{2} \right)^2 + \left( \frac{\sqrt{4 - r^2}}{2} \right)^2} = \sqrt{\frac{r^2}{4} + \frac{4 - r^2}{4}} = \boxed{1}.\]
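A quick numerical check of the boxed result (this verification script is mine, not part of the original solution):

```python
import cmath

# For any real r with |r| < 2, both roots of z^2 - r*z + 1 = 0
# are (r +/- i*sqrt(4 - r^2))/2 and should have modulus 1.
for r in (-1.5, 0.0, 0.3, 1.9):
    for sign in (1, -1):
        z = (r + sign * cmath.sqrt(complex(r * r - 4))) / 2
        assert abs(abs(z) - 1) < 1e-12
print("all roots satisfy |z| = 1")
```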
Fight Finance
Question 29 interest only loan
You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change.
What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month).
(a) 900
(b) 2,700
(c) 2,722.1
(d) 2,843.71
(e) 34,424.99
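Since the loan is interest-only, each payment covers exactly one month's interest on the principal, and the principal itself is not repaid until maturity. A quick illustrative check (the script and variable names are mine, not part of the question):

```python
# Interest-only loan: monthly payment = principal * (annual rate / 12).
principal = 270_000
annual_rate = 0.12  # 12% pa, an annualised rate compounding monthly

monthly_payment = principal * annual_rate / 12
print(monthly_payment)  # 2700.0 -> option (b)
```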
You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change.
How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month).
(a) $495,533.92, $349,640.96
(b) $500,374.84, $355,510.54
(c) $500,374.84, $250,187.42
(d) $600,000.00, $600,000.00
(e) $600,000.00, $300,000.00
You just borrowed $400,000 in the form of a 25 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 9% pa which is not expected to change.
You actually plan to pay more than the required interest payment. You plan to pay $3,300 in mortgage payments every month, which your mortgage lender allows. These extra payments will reduce the principal and the minimum interest payment required each month.
At the maturity of the mortgage, what will be the principal? That is, after the last (300th) interest payment of $3,300 in 25 years, how much will be owing on the mortgage?
(a) $6,766.6469
(b) $35,748.4866
(c) $63,663.4188
(d) $90,000.0000
(e) Nothing, the mortgage will be fully paid off prior to maturity.
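Because the borrower pays $300 per month above the required interest, the loan amortizes like an ordinary annuity. The remaining balance follows the standard loan-balance formula (a sketch; the variable names are mine):

```python
# Balance after n monthly payments of pmt on principal P at monthly rate r:
#   B_n = P*(1 + r)**n - pmt*((1 + r)**n - 1)/r
P = 400_000
pmt = 3_300
r = 0.09 / 12   # 0.75% per month
n = 25 * 12     # 300 payments

growth = (1 + r) ** n
balance = P * growth - pmt * (growth - 1) / r
print(round(balance, 4))  # about 63,663.42 -> option (c)
```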
Question 107 interest only loan
You want to buy an apartment worth $300,000. You have saved a deposit of $60,000.
The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?
(a) 17,435.74
(b) 1,438.92
(c) 1,414.49
(e) 666.67
You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments?
(a) $ 1,250.00
(b) $ 2,250.00
(c) $ 2,652.17
(d) $ 2,697.98
(e) $ 32,692.01
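Worked check (a Python sketch with the figures from the question above). An interest-only loan's monthly payment is simply the principal times the monthly rate:

```python
# Interest-only loan: monthly payment = principal * monthly rate.
principal = 450_000.0
monthly_rate = 0.06 / 12

payment = principal * monthly_rate
print(round(payment, 2))   # 2250.0 -> option (b)
```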
Question 239 income and capital returns, inflation, real and nominal returns and cash flows, interest only loan
A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk.
From the bank's point of view, what is the long term expected nominal capital return of the loan asset?
(a) Approximately 6%.
(b) Approximately 4%.
(c) Approximately 2%.
(d) Approximately 0%.
(e) Approximately -2%.
A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%.
How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow (##V_\text{before}##), so:
###\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}} ###
Assume that:
Interest rates are expected to be constant over the life of the loan.
Loans are interest-only and have a life of 30 years.
Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month.
(a) 0.055679
(b) 0.029631
(c) 0.029547
(d) 0.006245
(e) 0.0025
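Worked check (a Python sketch under the question's stated assumptions). Because the loans are interest-only, the borrowable amount is payment / monthly rate, so the 30-year term drops out and the proportional increase reduces to the ratio of the two rates minus one:

```python
# Interest-only loan: V = payment / monthly_rate, so
# proportional increase = r_before / r_after - 1.
payment = 2_000.0
rate_before = 0.0474 / 12
rate_after = 0.0449 / 12

v_before = payment / rate_before
v_after = payment / rate_after
increase = (v_after - v_before) / v_before

print(round(increase, 6))   # 0.055679 -> option (a)
```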
Question 459 interest only loan, inflation
In Australia in the 1980's, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%.
In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%.
If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa?
Give your answer as a proportional increase over the amount you could borrow when interest rates were high ##(V_\text{high rates})##, so:
###\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}} ###
Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APR's) compounding per month.
(a) 0.095
(e) 2.111111
Question 546 income and capital returns, interest only loan, no explanation
Which of the following statements about the capital and income returns of an interest-only loan is correct?
Assume that the yield curve (which shows total returns over different maturities) is flat and is not expected to change.
An interest-only loan's expected:
(a) Income and capital returns both rise over time.
(b) Income and capital returns both fall over time.
(c) Income return rises and its capital return falls over time.
(d) Income return falls and its capital return rises over time.
(e) Income and capital returns are constant through time.
Question 550 fully amortising loan, interest only loan, APR, no explanation
Many Australian home loans that are interest-only actually require payments to be made on a fully amortising basis after a number of years.
You decide to borrow $600,000 from the bank at an interest rate of 4.25% pa for 25 years. The payments will be interest-only for the first 10 years (t=0 to 10 years), then they will have to be paid on a fully amortising basis for the last 15 years (t=10 to 25 years).
Assuming that interest rates will remain constant, what will be your monthly payments over the first 10 years from now, and then the next 15 years after that? The answer options are given in the same order.
(a) $3,250.43, $4,513.67
(b) $3,250.43, $1,530.28
(c) $2,125, $4,513.67
(d) $2,125, $3,250.43
(e) $1,530.28, $3,250.43
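Worked check (a Python sketch using the question's figures). The first 10 years are interest-only (payment = principal times the monthly rate); the last 15 years amortise the full principal over 180 months using the ordinary-annuity formula:

```python
# Interest-only phase, then fully amortising over the remaining 180 months.
principal = 600_000.0
i = 0.0425 / 12

interest_only_payment = principal * i                       # years 0-10
amortising_payment = principal * i / (1 - (1 + i) ** -180)  # years 10-25

print(round(interest_only_payment, 2))   # 2125.0
print(round(amortising_payment, 2))      # about 4513.67 -> option (c)
```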
Question 660 fully amortising loan, interest only loan, APR
How much more can you borrow using an interest-only loan compared to a 25-year fully amortising loan if interest rates are 6% pa compounding per month and are not expected to change? If it makes it easier, assume that you can afford to pay $2,000 per month on either loan. Express your answer as a proportional increase using the following formula:
###\text{Proportional Increase} = \dfrac{V_\text{0,interest only}}{V_\text{0,fully amortising}} - 1###
(a) 77.6034%
(b) 30.3779%
(c) 28.8603%
(d) 22.3966%
(e) 7.5304%
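Worked check (a Python sketch under the question's assumptions). The interest-only loan supports a principal of payment / monthly rate, while the 25-year fully amortising loan supports the present value of an ordinary annuity of the same payment:

```python
# Compare principal supportable by $2,000/month: interest-only vs
# a 25-year fully amortising loan at 6% pa compounding monthly.
payment = 2_000.0
i = 0.06 / 12
n = 25 * 12

v_interest_only = payment / i                          # interest-only loan
v_amortising = payment * (1 - (1 + i) ** -n) / i       # ordinary annuity PV

increase = v_interest_only / v_amortising - 1
print(round(increase, 4))   # about 0.2886 -> option (c)
```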
Question 754 fully amortising loan, interest only loan
(e) 11.5269%
Question 760 time calculation, interest only loan, no explanation
Five years ago (##t=-5## years) you entered into an interest-only home loan with a principal of $500,000, an interest rate of 4.5% pa compounding monthly with a term of 25 years.
Then interest rates suddenly fall to 3% pa (##t=0##), but you continue to pay the same monthly home loan payments as you did before. Will your home loan be paid off by the end of its remaining term? If so, in how many years from now? Measure the time taken to pay off the home loan from the current time which is 5 years after the home loan was first entered into.
Assume that the lower interest rate was given to you immediately after the loan repayment at the end of year 5, which was the 60th payment since the loan was granted. Also assume that rates were and are expected to remain constant.
(a) Yes, in 117 months, which is 9 years and 9 months.
(b) Yes, in 152 months, which is 12 years and 8 months.
(c) Yes, in 202 months, which is 16 years and 10 months.
(d) Yes, in 240 months, which is 20 years.
(e) No, it will take longer than 20 years to pay off the loan.
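Worked check (a Python sketch using the question's figures). The old interest-only payment is $500,000 × 4.5%/12 = $1,875 per month; after the rate falls to 3%, the excess over the new interest retires principal, so the payoff time solves the ordinary-annuity equation for n:

```python
import math

# Solve principal = payment * (1 - (1+i)^-n) / i for n.
principal = 500_000.0
old_payment = principal * 0.045 / 12      # $1,875 per month
i = 0.03 / 12                             # new monthly rate

n = -math.log(1 - principal * i / old_payment) / math.log(1 + i)

print(round(n))   # about 440 months, far beyond the 240 remaining -> option (e)
```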
Bioefficacy and durability of Olyset® Plus, a permethrin and piperonyl butoxide-treated insecticidal net in a 3-year long trial in Kenya
Paul M. Gichuki ORCID: orcid.org/0000-0001-6558-55381,2,
Luna Kamau3,
Kiambo Njagi4,
Solomon Karoki4,
Njoroge Muigai5,
Damaris Matoke-Muhia3,
Nabie Bayoh6,7,
Evan Mathenge1 &
Rajpal S. Yadav ORCID: orcid.org/0000-0001-8264-12048
Infectious Diseases of Poverty volume 10, Article number: 135 (2021)
Long-lasting insecticidal nets (LLINs) are a core malaria intervention. LLINs should retain efficacy against mosquito vectors for a minimum of three years. The efficacy and durability of Olyset® Plus, a permethrin and piperonyl butoxide (PBO) treated LLIN, were evaluated versus permethrin-treated Olyset® Net. In the absence of WHO guidelines on how to evaluate PBO nets, and considering the manufacturer's product claim, Olyset® Plus was evaluated as a pyrethroid LLIN.
This was a household randomized controlled trial in a malaria endemic rice cultivation zone of Kirinyaga County, Kenya between 2014 and 2017. Cone bioassays and tunnel tests were done against Anopheles gambiae Kisumu. The chemical content, fabric integrity and LLIN survivorship were monitored. Comparisons between nets were tested for significance using the Chi-square test. Exact binomial distribution with 95% confidence intervals (95% CI) was used for percentages. The WHO efficacy criteria used were ≥ 95% knockdown and/or ≥ 80% mortality rate in cone bioassays and ≥ 80% mortality and/or ≥ 90% blood-feeding inhibition in tunnel tests.
At 36 months, Olyset® Plus had lost 52% of its permethrin and 87% of its PBO content; Olyset® Net had lost 24% of its permethrin. Over 80% of Olyset® Plus and Olyset® Net passed the WHO efficacy criteria for LLINs up to 18 and 12 months, respectively. At month 36, 91.2% of Olyset® Plus and 86.4% of Olyset® Net survived, while 72% and 63% had developed at least one hole. The proportionate Hole Index (pHI) values representing nets in good, serviceable and torn condition were 49.6%, 27.1% and 23.2%, respectively, for Olyset® Plus, and 44.9%, 32.8% and 22.2%, respectively, for Olyset® Net; the differences were not significant.
Olyset® Plus retained efficacy above or close to the WHO efficacy criteria for about 2 years, longer than Olyset® Net (1–1.5 years). Neither net met the 3-year WHO efficacy criteria, but both showed little attrition and comparable physical durability and survivorship, with about 50% of Olyset® Plus in good condition after 3 years. Better community education on appropriate use and upkeep of LLINs is essential to ensure the effectiveness of LLIN-based malaria interventions.
Insecticide-treated nets reduce child mortality, number of Plasmodium falciparum cases and probably the number of P. vivax cases per person/year [1]. Factory-produced long-lasting insecticide nets (LLINs) are a core malaria intervention for the global elimination of malaria led by the World Health Organization (WHO) [2]. According to the WHO definition, LLINs should retain efficacy against mosquito vectors for a minimum of 20 standard washes under laboratory conditions and a minimum of 3 years of use under field conditions [3]. When used by most of the people in a population, insecticide-treated nets can protect all people in the community, including those who do not sleep under nets [4]. Currently, the National Malaria Control Programme (NMCP) in Kenya has been scaling up the use of LLINs in malaria endemic and high-risk areas to achieve universal coverage, which is defined by WHO as access to one LLIN for every 1.8 persons in each household [2, 5].
Pyrethroid LLINs were the first brands of treated nets recommended by WHO for malaria control. However, it was believed that insecticide resistance was worsening in Africa and would present a major threat to malaria control [6,7,8], and that pyrethroid LLINs were becoming less effective at killing mosquitoes under household conditions where pyrethroid resistance develops [9, 10]. Consequently, piperonyl butoxide (PBO) was incorporated into nets with the intention of improving their efficacy against pyrethroid-resistant mosquitoes: PBO acts as a metabolic enzyme inhibitor, targeting the cytochrome P450 mixed-function oxidases that metabolise pyrethroids, thereby enhancing the efficacy of pyrethroid-treated nets. Olyset® Plus has been prequalified by WHO as the first-in-class PBO net.
However, because WHO had not published guidelines on how to evaluate PBO nets when the trial began, and considering the manufacturer's product claim, Olyset® Plus was evaluated only as a pyrethroid LLIN against a pyrethroid-susceptible mosquito strain rather than against a resistant strain. This paper presents the results of a 3-year long-term (Phase III) field trial of the candidate product, Olyset® Plus, along with the positive control product Olyset® Net, in a malaria endemic rice cultivation area of Kirinyaga County, Kenya.
The field trial was conducted according to the WHO guidelines [3, 11]. The test products and trial procedures are described below.
Test products
Bioefficacy and physical durability of the Olyset® Plus and Olyset® Net both manufactured by Sumitomo Chemical were evaluated. The former is a polyethylene mono-filament net incorporated with 2% permethrin corresponding to 20 g permethrin active ingredient (AI)/kg or 800 mg permethrin AI/m2 and 1% PBO (corresponding to 10 g PBO AI/kg or 400 mg PBO AI/m2). The latter is a polyethylene mono-filament net incorporated with 2% permethrin (w/w) (corresponding to 800 mg permethrin AI/m2). Olyset® Net was recommended in 2009 by the WHO Pesticide Evaluation Scheme (WHOPES) for use in malaria control [12], while Olyset® Plus was given interim recommendation by WHO in 2012 with the requirement to evaluate the product in a 3-year long-term trial to confirm its long-lasting efficacy in field [13].
Trial design and study area
We did a prospective, household randomized controlled, large-scale 3-year field trial in a rice irrigation area of Kirinyaga County, Kenya from 2014 to 2017. The area is located about 100 km northeast of Nairobi at the base of Mt Kenya at an altitude of about 1200 m above sea level [14]. It receives mean annual rainfall of 1200–1600 mm with long rains in March–June and short rains in October–December. Due to high densities of mosquitoes year-round, many local people use mosquito nets. Several bed net trials have previously been conducted here for national product registration. Over 80% of the inhabitants of the area have only primary level education. Most of the houses in the area are made of mud walls with roofs of corrugated iron sheets [14].
Four villages (Maendeleo, Huruma, Kiratina and Kasarani) were selected by simple randomization; the first two villages were randomly allocated Olyset® Plus and the other two received Olyset® Net (Table 1). In each selected village, households were chosen at random.
Table 1 Summary of nets distributed in study villages
Community sensitization and net distribution
Community level meetings were organized with the local leadership (Chiefs, sub-Chiefs and the village heads) and health teams including community health workers, where the objectives of the study were clearly explained. Later, local public meetings (Barazas) were organized to sensitize the community about the study. Informed consent to participate in the study was obtained from participating heads of households. A baseline survey and census were conducted in the selected households using a structured questionnaire. Information on size of the family, educational status, occupations, average family income, type of house, number of sleeping places, existing number of nets in each household, their usage pattern, and washing practices were collected. The old nets in use in the households were retrieved and new ones given. Each new net was given a unique identifier and a net master list prepared. Nets were then distributed to the households as per the available sleeping places i.e. one net per sleeping place.
Net sampling scheme
The scheme of sampling nets at different time points for bioassays and chemical assays is given in Table 2. The netting pieces were cut from the sampled nets from positions 1–9 for bioassays, and positions HP1–HP5 (at time 0) or positions HP2–HP5 (at time 12, 24 and 36 months) for chemical content assays (Fig. 1). All the nets which were sampled and retrieved at the given survey points were replaced with new ones.
Table 2 Number of Olyset® Plus and Olyset® Net sampled for assays at different time points during the study
A rectangular net and its individual panels showing positions for cutting netting pieces (positions 1–9 for bioassays; HP1–HP5 for chemical assays)
The insecticidal effect of the nets was evaluated using cone bioassays and where required tunnel tests, as per the WHO guidelines [3]. The bioassays were done at time zero (baseline) and then at every 6 months up to 36 months. On each of the netting pieces cut for bioassays, standard WHO plastic cones were held in place using a plastic manifold and a total of 50 laboratory bred, susceptible Kisumu strain Anopheles gambiae (non-blood fed, 2–5 day old) were exposed for 3 min (5 pieces per net × 5 mosquitoes per test × 2 replicates). After the exposure, the mosquitoes were removed gently from the cones and kept separately in plastic cups provided with cotton-wool moistened with 10% glucose solution. Knockdown (KD) was recorded at 60 min and mortality at 24 h after exposure. Mosquitoes exposed to untreated polyester net pieces were used as untreated controls. The bioassays were carried out at 27 ± 2 °C temperature and 80% ± 10% relative humidity. When the mosquito knockdown rate was < 95% and mortality < 80%, the mortality and blood-feeding inhibition effect of such nets was assessed in a tunnel test according to the WHO guidelines [3].
For each net, only the one netting piece for which mosquito mortality was closest to the average mortality in the cone test was tested in the tunnel test [3]. One hundred non-blood-fed female An. gambiae Kisumu mosquitoes, aged 5–8 days, were released at the longer end of a glass tunnel. At each end of the tunnel, a 25 cm square cage covered with polyester netting was fitted. At the shorter end of the tunnel, a restrained rabbit was placed, available for mosquito biting. The mosquitoes were released at 18:00 and recovered at 09:00 the following morning. During the test, the tunnel was kept in full darkness in a place maintained at 27 ± 2 °C and 80% ± 10% relative humidity. Another tunnel with untreated netting was used as a control. The mean mortality rate and percentage blood-feeding inhibition were recorded, as follows:
Mortality: measured by pooling the mortality rates of mosquitoes from the two sections of the tunnel. If control mortality exceeded 10%, the test was considered invalid.
$$\text{Blood-feeding inhibition}\ (\%) = \frac{100 \times (\text{Bfc} - \text{Bft})}{\text{Bfc}}$$
where, Bfc is the proportion of blood-fed mosquitoes in the control tunnel and Bft is the proportion of blood-fed mosquitoes in the tunnel with the treated net. The treated net was considered to have met the WHO efficacy criteria when mortality in mosquitoes was ≥ 80% and/or blood-feeding inhibition was ≥ 90%.
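For illustration only (these helper functions are not part of the study's analysis code), the blood-feeding inhibition formula and the WHO tunnel-test pass criteria quoted above can be expressed as:

```python
# Blood-feeding inhibition from control (Bfc) and treatment (Bft)
# blood-fed proportions, per the formula above.
def blood_feeding_inhibition(bfc: float, bft: float) -> float:
    """Percent inhibition: 100 * (Bfc - Bft) / Bfc."""
    return 100.0 * (bfc - bft) / bfc

def passes_tunnel_test(mortality_pct: float, bfi_pct: float) -> bool:
    # WHO criteria: >= 80% mortality and/or >= 90% blood-feeding inhibition
    return mortality_pct >= 80.0 or bfi_pct >= 90.0

print(blood_feeding_inhibition(0.50, 0.04))   # 92.0
print(passes_tunnel_test(70.0, 92.0))         # True
```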
Chemical content analysis
The chemical contents in the nets were determined at the beginning and at years 1, 2 and 3 by samplings nets according to the scheme described earlier. The netting pieces cut for determination of active ingredient content were individually rolled up and placed in new, clean and labeled aluminum foils and sent to the Phyto-pharmacy Department of the Walloon Agricultural Research Centre, Gembloux, Belgium, a WHO Collaborating Centre, for chemical analysis.
Assessment of durability of cohort nets
To monitor durability of nets (i.e. survivorship and changes in physical integrity) during their routine use, 250 each of Olyset® Plus and Olyset® Net were distributed in separate households (2 nets per household) in the same villages by simple randomization. From these cohorts of nets, all nets available at the households at the time of surveys at 6, 12, 24 and 36 months were inspected for their presence and physical integrity. For this, the nets were retracted from the sleeping places, individually hung up on a rectangular frame held outside the house and inspected for presence of any holes on the side and roof panels. The holes were counted for each net, their diameter was measured, and they were classified in the following categories according to the WHO criteria for hole sizes [3, 11]:
Size A1: 0.5–2 cm diameter, Size A2: 2–10 cm diameter, Size A3: 10–25 cm diameter, Size A4: > 25 cm diameter.
The parameters for assessing the integrity of nets included the proportions of Olyset® Plus and Olyset® Net with any size holes or tears, and the proportionate Hole Index (pHI) for each net, which was calculated as follows [11]:
pHI = (1.23 × No. of size A1 holes) + (28.28 × No. of size A2 holes) + (240.56 × No. of size A3 holes) + (706.95 × No. of size A4 holes)
Using the pHI values, Olyset® Plus and Olyset® Net were categorized as those in 'good condition' (pHI: 0–64), nets in 'acceptable' condition (pHI: 65–642), and torn nets that were likely to provide no protective efficacy (pHI > 642) [15]. The number of nets in 'good' and 'acceptable condition' together were taken to estimate the survival of nets over time.
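For illustration only (a sketch, not the study's analysis code), the pHI weighting and the condition classes quoted above can be computed as:

```python
# WHO proportionate Hole Index (pHI) from hole counts in the four
# size categories A1-A4, with condition classes per the cut-offs above.
WEIGHTS = (1.23, 28.28, 240.56, 706.95)   # sizes A1, A2, A3, A4

def proportionate_hole_index(a1: int, a2: int, a3: int, a4: int) -> float:
    return sum(w * n for w, n in zip(WEIGHTS, (a1, a2, a3, a4)))

def condition(phi: float) -> str:
    if phi <= 64:
        return "good"
    if phi <= 642:
        return "acceptable"
    return "torn"

phi = proportionate_hole_index(10, 2, 0, 0)   # 10 small + 2 medium holes
print(round(phi, 2))    # 68.86
print(condition(phi))   # acceptable
```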
The sampled nets were also inspected for any repairs of holes or tears done by the households. After inspection, the nets were returned to the same sleeping place in the same household.
Household net washing practices
The net washing practices by households were evaluated using a questionnaire at 6, 12, 24 and 36 months after net distribution.
Recording of adverse events
One month after net distribution, a survey was carried out to record adverse events reported by net users, and overall experiences of net usage. A pre-tested questionnaire adapted from the WHO guidelines [11] was administered to the heads or an adult person of the households after obtaining informed consent.
The data collected were entered in excel spreadsheets and cross-checked for accuracy. Data were analyzed using STATA version 14.0 (Stata Corporation, College Station, TX, USA). Comparisons between the two types of nets were tested for significance using the Chi-square (χ2) test. Exact binomial distribution with 95% confidence intervals (95% CI) was used for percentages. For continuous variables, the arithmetic mean was used depending on the distribution of values compared to a normal distribution.
The nets were considered to have passed the WHO efficacy criteria if the mosquito knockdown rate was found to be ≥ 95% and/or mortality rate ≥ 80% in cone bioassays. For the tunnel tests, nets were considered to have passed the WHO efficacy criteria when mortality in mosquitoes was ≥ 80% and/or blood-feeding inhibition was ≥ 90% [3]. The procedure for calculation of proportionate Hole Index (pHI) for nets is already described above.
Ethical clearances
Prior to implementing the study, ethical approvals were obtained from the Ethical Research Committee of Kenyatta National Hospital, University of Nairobi (protocol ID P524/8/2013) and the WHO Ethics Review Committee (ID: V2-084). Informed consent of head or an adult person in participating households was obtained in local language. Those who could not read or write were assisted by an independent witness to ensure the study was clearly explained to them and guided them to give a thumb print on the consent form.
Of the 1784 homesteads with a population of 10 325, a total of 1551 households participated in the study (Table 1). Most of the heads of the households (age: 17–96 years) had received formal education and practiced farming (96.6%).
Out of 100 heads of households with Olyset® Plus interviewed for adverse events, 44% reported itching, 4% eye irritation, and 7% unpleasant smell. Among the Olyset® Net users, 7% each reported nasal discharge and eye irritation. These were reported to be mild symptoms lasting for one or two days. To reduce these effects, 43% users of Olyset® Plus ventilated nets for a day in the open under the sun and 33% under the shade. Among Olyset® Net users, similar actions were taken by 44% and 39%, respectively.
LLINs wash rate
The study found low net wash rates. No nets were reported washed at 6 months; 7% each of Olyset® Plus and Olyset® Net had been washed at least once by 12 months; 12% of Olyset® Plus and 11% of Olyset® Net by 24 months; and 17% of Olyset® Plus and 21% of Olyset® Net by 36 months. In households where nets had been washed, cold water and a local bar soap were used on 99% of occasions, while washing powder was used on 1%. Of those who washed nets, 86% reported drying them in direct sunlight, contrary to the advice given at the time of distribution to dry them in the shade. The washing-frequency data show that about 7–44% of nets were washed at least once, and washing frequency increased in years 2 and 3 (Table 3).
Table 3 Frequency of washing nets by the users at different time points
Bioassay tests
At baseline, 100% of Olyset® Plus passed the WHO efficacy cut-off for LLINs in cone bioassays, i.e., nets causing ≥ 80% mortality and/or ≥ 95% knockdown (KD) in mosquitoes (Table 4). At month 6, 83.3% of Olyset® Plus passed the efficacy cut-off by the KD criterion alone. Considering mortality and blood-feeding inhibition in tunnel tests for nets failing the cone bioassays, in all 96.7% of nets met the WHO efficacy criteria at month 6. Overall, 93.3% and 86.7% (i.e., ≥ 80%) of the sampled Olyset® Plus passed the WHO efficacy criteria at months 12 and 18, respectively. Thereafter, the efficacy pass rate declined to 76.7% at month 24 and 42.0% at the end of the 36-month trial. Thus, Olyset® Plus did not pass the WHO efficacy cut-off criteria at either 24 or 36 months of household usage. Olyset® Net performed only up to 12 months; thereafter the pass rates declined below 80%, so the net did not perform as expected at 24 and 36 months. Although each net was classified as having passed or failed according to the WHO criteria, no significant difference in the proportions passing was seen between Olyset® Plus and Olyset® Net [at 6 months (χ2 = 0.35; P = 0.54), 12 months (χ2 = 1.42; P = 0.23), 24 months (χ2 = 0.72; P = 0.39) and 36 months (χ2 = 0.22; P = 0.63)]. The overlapping 95% CI values of the pass percentages at each survey point also show no difference in efficacy between the two nets (Table 4). The data indicate that the entomological efficacy of Olyset® Plus and Olyset® Net in the surveys at 24–36 months and 18–36 months, respectively, was apparently attributable to their personal protective efficacy, i.e., the cumulative effects of KD and mosquito blood-feeding inhibition.
Table 4 Bioefficacy of Olyset® Plus and Olyset® Net against Anopheles gambiae Kisumu in cone and tunnel tests
Chemical contents
At time 0 (baseline), the contents of permethrin and PBO in Olyset® Plus were within the target tolerance limits of 20 ± 5 g/kg and 10 ± 2.5 g/kg, respectively; while the permethrin content in Olyset® Net was also within the target tolerance limit of 20 ± 3 g/kg (Table 5). The within net variation (relative standard deviation, RSD) in permethrin and PBO contents for Olyset® Plus at baseline were 2.2% and 3.7%, respectively, and were within the acceptable tolerance limits. The within net variation in the permethrin content in Olyset® Net at the baseline was 1.2% and was also within the acceptable tolerance limit. At the end of years 1, 2 and 3 of net usage, the permethrin content in Olyset® Plus decreased by 36%, 40% and 52%, and the PBO content showed a marked reduction of 66%, 81% and 87%, respectively. The loss of the permethrin in Olyset® Net was 16% at year 1 and did not change much at year 2 and 3 (19 and 24%, respectively).
Table 5 Mean permethrin and PBO contents (relative standard deviation) in nets at baseline, and at 12, 24 and 36 months
Net survivorship and fabric integrity
At the end of year 1, survivorship was 97.2% (n = 243) for Olyset® Plus and 99.6% (n = 249) for Olyset® Net. At year 2, 92.4% (n = 231) of Olyset® Plus and 90.4% (n = 226) of Olyset® Net were still surviving. At the end of the 3-year study, survivorship was 91.2% (n = 228) for Olyset® Plus and 86.4% (n = 216) for Olyset® Net.
The pHI values showed that at 6 months, 87.6% of Olyset® Plus were in good condition, 7.6% in acceptable/serviceable condition, and 4.8% too torn (Table 6). Damage accumulated gradually during the study, and at 36 months 49.6% were in good condition, 27.1% in acceptable/serviceable condition, and 23.2% too torn. A similar trend was seen for Olyset® Net, with 44.9%, 32.8% and 22.2% of nets in good condition, acceptable condition, or too torn at 36 months, respectively.
Table 6 Progression of hole formation and proportionate Hole Index (pHI) values for Olyset® Plus and Olyset® Net
Data on the comparison of estimates of the mean pHI values and the mean hole area for Olyset® Plus and Olyset® Net are given in Table 7.
Table 7 Comparison of the estimates of the mean pHI and the mean hole area for Olyset® Plus and Olyset® Net
The proportion of Olyset® Plus nets having been repaired by households increased from 6% (95% CI: 3.4–9.7) at month 6 to 17.9% (95% CI: 13.2–23.6) at month 36, while that of Olyset® Net increased from 5% (95% CI: 2.8–8.7) to 12% (95% CI: 8.0–17.1) over the same period.
The study evaluated the efficacy of Olyset® Plus, incorporated with permethrin and PBO, in a 36-month field trial against An. gambiae Kisumu, a susceptible strain. Olyset® Net, incorporated with permethrin alone, was included in the trial as a positive control net. A net was considered to have passed the WHO efficacy criteria if it caused ≥ 80% mortality and/or ≥ 95% knockdown in cone bioassays, or ≥ 80% mortality and/or ≥ 90% blood-feeding inhibition in tunnel tests [3]. Most of the Olyset® Plus and Olyset® Net lost the knockdown effect during the 24–36 month follow-up. Overall, neither Olyset® Plus nor Olyset® Net performed as required beyond 2 years, and neither met the WHO efficacy criteria for a 3-year LLIN.
Previous studies have demonstrated enhanced efficacy of Olyset® Plus against resistant An. gambiae attributable to PBO [16,17,18]. A recent randomised controlled field trial in Tanzania, with high levels of pyrethroid resistance in the malaria vectors An. gambiae and An. arabiensis, also attributed the higher impact on malaria of Olyset® Plus over Olyset® Net to the presence of PBO, an effect sustained after 21 months of the trial [19]. However, there is a view that these two nets are not comparable and that the higher efficacy of Olyset® Plus was due to a much higher bleed rate of permethrin to its surface, not to PBO, which was incorporated in a small amount and dwindled rapidly over time [20]. In this trial, Olyset® Plus was evaluated as a pyrethroid LLIN since WHO had published no guidelines on how to evaluate PBO nets at the time, and considering the manufacturer's product claim.
While it has been reported that high-intensity insecticide resistance will reduce the level of personal and community protection of pyrethroid LLINs [21], and that PBO nets could be effective against malaria in some resistance scenarios [22], a recent WHO multi-country study found no evidence of an association between insecticide resistance and malaria infection prevalence or incidence [23]. There is also a view that there is currently limited evidence that the recent reduction in impact on malaria in sub-Saharan Africa was due to increasing resistance of malaria vectors to the pyrethroid insecticides used in treating nets [24].
The initial permethrin content in both nets was similar, but the enhanced efficacy of Olyset® Plus relative to Olyset® Net was mainly due to its higher release of permethrin into the surface fibres and the presence of PBO. The gradual decline in efficacy of both nets is consistent with the loss of permethrin and PBO in Olyset® Plus, and of permethrin in Olyset® Net, over time during their routine use by the community. The decline in permethrin content was more rapid in Olyset® Plus than in Olyset® Net. This explains why Olyset® Plus, containing permethrin and PBO, was initially superior to Olyset® Net, i.e., until 18 months, after which the performance of both fell below the WHO efficacy cut-off. While the PBO content in Olyset® Plus at baseline was within the target tolerance limits, 66% of it was lost within one year of net usage, and the loss increased to 81% at year 2 and 87% at year 3. Previous studies have reported that the insecticidal content of LLINs decays over time [6]. While permethrin and PBO may be lost through natural decay and evaporation, frequent washing of nets and drying them in direct sunlight are known to contribute to such losses [25, 26]. In this study, despite the manufacturer's label recommendation not to expose nets to direct sunlight, a good proportion of both types of nets were exposed to bright sunlight before use and left to dry in the sun after washing. This would have reduced both permethrin and PBO contents and consequently the biological efficacy of the nets, and should be borne in mind when considering these results. Even though the susceptible Kisumu strain was tested, the loss of PBO would still have affected the efficacy of Olyset® Plus, as PBO can improve net performance even against susceptible strains, which also possess P450s, and PBO is known to act as a good solvent that aids cuticular penetration and improves insecticidal activity.
The operational implication of the rapid loss of PBO is that Olyset® Plus is unlikely to remain effective, even against pyrethroid-resistant mosquitoes, over 2 or 3 years of use.
Community net wash practices were comparable between the two nets. During the first 6 months, no net was reported to have been washed. At 1 year, 7% of Olyset® Plus and 7% of Olyset® Net were reported to have been washed at least once. At 36 months, about 17% and 21% of Olyset® Plus and Olyset® Net, respectively, had been washed at least once. LLINs are usually recommended to be washed at most every 3 months [5], although this frequency depends on local cultural practices and water availability. Different net washing frequencies have been reported in other studies, spanning from a high of 8 washes per month in Mali [27] to 4–7 times in Tanzania [28] and a low average of 1.5 washes per year in Uganda [29]. In coastal Kenya, nets were washed on average twice in 6 months, with blue nets washed less often than white or green nets, and older nets washed more frequently [30]. In the western Kenya lowlands, three-quarters of nets were washed once within 3 months and a third of them were dried in the sun [31]. The low reported wash rates during the annual surveys in our study could be attributed to net users' poor recall of washing frequencies and to people's apprehension that washing reduces net efficacy; there was no water scarcity, as the trial area is located in a rice irrigation zone.
The study participants reported certain transient adverse effects during the first two days of net use, but these could be prompted reactions, since community members had been informed about the possibility of such effects at the time of net distribution. The higher frequency of side effects with Olyset® Plus is in line with its higher release of permethrin, as mentioned above. Users resorted to airing their nets in direct sunlight to reduce these effects, although our project team had advised drying and keeping the nets in the shade. Airing new nets or drying washed nets in direct sunlight might have contributed to a significant reduction in the active ingredient content and bio-efficacy of the nets.
At the end of the 3-year period, among the two cohorts of 250 nets each, only about 9% of Olyset® Plus and about 14% of Olyset® Net were reported lost, indicating high net survivorship. These results are in line with the expected 75% 3-year survivorship reported by the NetCALC serviceable-life prediction model [32]. Previous studies have reported varying rates of net survivorship in African countries: 58% of nets were lost within two years in Rwanda [33]; only 39% of distributed nets remained both present and in serviceable physical condition 2–4 years after a mass campaign in Tanzania [34]; whereas in western Uganda an estimated attrition rate of just 12% after three years of use was observed for a polyester net [35]. In four African settings (Mozambique, Nigeria, DRC, and Zanzibar, Tanzania), survival in serviceable condition of polyethylene and polyester LLINs after 31–37 months of use varied between sites from 17 to 80%, with median survival from 1.6 to 5.3 years [36]. In our study, the observed low attrition rate also appears to reflect a 'Hawthorne effect', as the households in the two cohorts were visited every 6 or 12 months to inspect the nets. Awareness among participants that the nets were being regularly followed up could have modified their behaviour in favour of more careful upkeep and maintenance of the nets, and as such may have been a limitation of this study.
Studies have shown that despite small to medium sized holes (64 < pHI < 642), LLINs remain protective against mosquito bites owing to the excito-repellent effects of the insecticide [35]. However, nets may become ineffective when the overall hole area reaches a certain threshold [37]. In our study, both Olyset® Plus and Olyset® Net retained high fabric integrity through the 3-year trial period. At 12 months after distribution, 78.1% of Olyset® Plus and 63.8% of Olyset® Net were in good/acceptable condition, 12.3% of Olyset® Plus and 30.5% of Olyset® Net were in serviceable condition, while 9.4% of Olyset® Plus and 5.6% of Olyset® Net were completely torn and required replacement. At the end of the 3-year trial period, 49.5% of Olyset® Plus and 44.9% of Olyset® Net were in good/acceptable condition, 27.1% and 32.8% respectively were in serviceable condition, while 23.2% and 22.2% respectively were completely torn and required replacement.
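The pHI figures quoted above can be made concrete with a short sketch. This is not the study's code; the WHO-style hole-size-class weights (1, 23, 196, 576) and the category cut-offs (pHI ≤ 64 good, 65–642 serviceable, > 642 too torn) are assumptions based on the thresholds mentioned in the text.

```python
# Weighted hole counts give the proportionate Hole Index (pHI).
# WEIGHTS approximate the relative areas of the four WHO hole-size
# classes (assumed here; see the durability guidelines cited in the text).
WEIGHTS = (1, 23, 196, 576)

def proportionate_hole_index(hole_counts):
    """hole_counts: number of holes in each of the four size classes."""
    return sum(w * c for w, c in zip(WEIGHTS, hole_counts))

def condition(phi):
    """Map a pHI value to the condition categories used in the text."""
    if phi <= 64:
        return "good"
    if phi <= 642:
        return "serviceable"
    return "too torn"

# A net with 10 small and 2 medium holes: pHI = 10*1 + 2*23 = 56.
print(condition(proportionate_hole_index((10, 2, 0, 0))))
```

A net scored this way moves from "good" to "serviceable" once a single large hole appears, which is why the excito-repellent effect of the insecticide matters for nets in the middle category.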
Although Olyset® Net retained a higher permethrin content than Olyset® Plus, the latter showed a longer duration of efficacy (18 months versus 12 months above the WHO efficacy cut-off) and higher efficacy thereafter: the efficacy of Olyset® Plus declined to 42% at the end of the 3-year trial period, compared with 36% for Olyset® Net over the same period. This could be attributed to the higher release of permethrin to the netting surface, and also to the fact that PBO can make nets perform better even against susceptible mosquitoes. Pyrethroid-PBO nets have previously been associated with higher mosquito mortality and lower blood-feeding rates than non-PBO LLINs in areas of high-level insecticide resistance [38, 39]. Although in our study we did not test nets against a pyrethroid-resistant mosquito strain, it is likely that the rapid decline of PBO in Olyset® Plus, driven by inappropriate exposure to sunlight as well as normal loss through washing, would have reduced efficacy against resistant mosquitoes in the same way that it reduced efficacy against the susceptible mosquitoes tested here.
Appropriate testing guidelines and field studies are thus required to evaluate the operational effectiveness of PBO nets against pyrethroid-resistant mosquitoes, taking into account the PBO retention and release properties of the nets. The revised WHO LLIN guidelines should clarify how long PBO nets are expected to remain effective and whether the current WHO efficacy criteria for a 3-year LLIN should be reduced to fit the chemistry.
A limitation of our trial was that we did not evaluate the efficacy of Olyset® Plus against resistant An. gambiae mosquitoes. Instead, we tested its efficacy for permethrin alone using a susceptible malaria vector strain, although the inclusion of PBO in nets has the potential to increase insecticidal efficacy through increased cuticular penetration and the presence of P450s even in susceptible insects. There is thus a need for suitable testing guidelines and for much more validation of the impact of PBO in pyrethroid-PBO nets.
Olyset® Plus did not meet the WHO criteria for efficacy of LLINs in this 3-year Phase III study, which could have resulted from the faster loss of permethrin and PBO and from users not following the manufacturer's recommendation to keep the nets out of direct sunlight. Better community education is therefore essential to ensure appropriate use and upkeep of nets in areas with LLIN-based interventions, in order to maximize their operational impact on the prevention, control and elimination of malaria. Failing this, there would be operational implications requiring early replenishment with new nets to ensure that such an LLIN-based malaria intervention remains effective.
All the relevant datasets supporting the conclusions of this article are included within the article.
CI: Confidence interval
CHW: Community health worker
KD: Knock-down
LLIN: Long-lasting insecticidal nets
NMCP: National Malaria Control Programme
pHI: proportionate Hole Index
PBO: Piperonyl butoxide
RSD: Relative standard deviation
SD: Standard deviation
Pryce J, Richardson M, Lengeler C. Insecticide-treated nets for preventing malaria. Cochrane Database Syst Rev. 2018;11(11):CD000363. https://doi.org/10.1002/14651858.CD000363.pub3.
WHO. Achieving and maintaining universal coverage with long-lasting insecticidal nets in malaria control. Geneva: World Health Organization. 2017. http://apps.who.int/iris/bitstream/handle/10665/259478/WHO-HTM-GMP-2017.20-eng.pdf?ua=1?sequence=1. Accessed 27 Oct 2021.
WHO. Guidelines for laboratory and field-testing of long-lasting insecticidal nets. Geneva: World Health Organization. 2013. http://www.who.int/iris/bitstream/10665/80270/1/9789241505277_eng.pdf. Accessed 27 Oct 2021.
Hawley WA, Phillips-Howard PA, TerKuile FO, Terlouw DJ, Vulule JM, Ombok M, et al. Community-wide effects of permethrin-treated bed nets on child mortality and malaria morbidity in western Kenya. Am J Trop Med Hyg. 2003;68(4 Suppl):121–7.
U.S. President's Malaria Initiative Kenya. Malaria Operational Plans. Washington, DC: United States Agency for International Development. 2015. https://www.amazon.com/Kenya-Malaria-Operational-Presidents-Initiative/dp/1507801564. Accessed 27 Oct 2021.
Wanjala CL, Zhou G, Mbugi J, Simbauni J, Afrane YA, Ototo E, et al. Insecticidal decay effects of long-lasting insecticide nets and indoor residual spraying on Anopheles gambiae and Anopheles arabiensis in Western Kenya. Parasit Vectors. 2015;8:588. https://doi.org/10.1186/s13071-015-1194-6.
Hemingway J, Ranson H, Magill A, Kolaczinski J, Fornadel C, Gimnig J, et al. Averting a malaria disaster: will insecticide resistance derail malaria control? Lancet. 2016;387(10029):1785–8. https://doi.org/10.1016/S0140-6736(15)00417-1.
Ranson H, Lissenden N. Insecticide resistance in African Anopheles mosquitoes: a worsening situation that needs urgent action to maintain malaria control. Trends Parasitol. 2016;32(3):187–96. https://doi.org/10.1016/j.pt.2015.11.010.
Asidi A, N'Guessan R, Akogbeto M, Curtis C, Rowland M. Loss of household protection from use of insecticide-treated nets against pyrethroid-resistant mosquitoes, Benin. Emerg Infect Dis. 2012;18(7):1101–6. https://doi.org/10.3201/eid1807.120218.
Ochomo EO, Bayoh NM, Walker ED, Abongo BO, Ombok MO, et al. The efficacy of long-lasting nets with declining physical integrity may be compromised in areas with high levels of pyrethroid resistance. Malar J. 2013;12:368. https://doi.org/10.1186/1475-2875-12-368.
WHO. Guidelines for monitoring the durability of long-lasting insecticidal mosquito nets under operational conditions. Geneva: World Health Organization. 2011. https://apps.who.int/iris/bitstream/handle/10665/44610/9789241501705_eng.pdf. Accessed 27 Oct 2021.
WHO. Report of the Thirteenth WHOPES Working Group Meeting, WHO/HQ, Geneva, 28–30 July 2009. Review of Olyset® LN, DawaPlus® 2.0 LN, Tianjin Yorkool® LN. Geneva: World Health Organization. 2009. https://apps.who.int/iris/bitstream/handle/10665/44212/9789241598712_eng.pdf. Accessed 27 Oct 2021.
WHO. Report of the Fifteenth WHOPES Working Group Meeting, WHO/HQ, Geneva, 18–22 June 2012. Review of Olyset® Plus, Interceptor® LN, Malathion 440 EW, Vectobac® GR. Geneva: World Health Organization. 2012. https://apps.who.int/iris/bitstream/handle/10665/75304/9789241504089_eng.pdf Accessed 27 Oct 2021.
Kenya National Bureau of Statistics. 2009 Population and Housing Census. Nairobi: KNBS 2010. https://s3-eu-west-1.amazonaws.com/s3.sourceafrica.net/documents/21195/Census-2009.pdf. Accessed 25th Oct 2021
WHO. WHO Guidance Note for Estimating the Longevity of Long-Lasting Insecticidal Nets in Malaria Control. Geneva: World Health Organization. https://www.who.int/malaria/publications/atoz/who_guidance_longevity_llins.pdf. Accessed 27 Oct 2021.
Corbel V, Chabi J, Dabire RK, Etang J, Nwane P, Pigeon O, et al. Field efficacy of a new mosaic long-lasting mosquito net (PermaNet 30) against pyrethroid-resistant malaria vectors: a multi-centre study in western and central Africa. Malar J. 2010;9:113. https://doi.org/10.1186/1475-2875-9-113.
Pennetier C, Bouraima A, Chandre F, Piameu M, Etang J, Rossignol M, et al. Efficacy of Olyset® Plus, a new long-lasting insecticidal net incorporating permethrin and piperonyl butoxide against multi-resistant malaria vectors. PLoS One. 2013;8(10):e75134. https://doi.org/10.1371/journal.pone.0075134.
Wangai LN, Njagi K. Efficacy trial of new pest control product: Olyset Plus-long lasting insecticide treated net with added synergist (piperonyl butoxide) for malaria vector control in Kenya. J Med Microb Diagn. 2019;8:290. https://doi.org/10.4172/2161-0703.1000290.
Protopopoff N, Mosha JF, Lukole E, Charlwood JD, Wright A, Mwalimu CD, et al. Effectiveness of a long-lasting piperonyl butoxide-treated insecticidal net and indoor residual spray interventions, separately and together, against malaria transmitted by pyrethroid-resistant mosquitoes: a cluster, randomised controlled, two-by-two factorial design trial. Lancet. 2018;391(10130):1577–88. https://doi.org/10.1016/S0140-6736(18)30427-6.
Skovmand O. Comparing the un-comparable: Olyset Plus and Olyset, different malaria impact. Malar J. 2018;17(1):446. https://doi.org/10.1186/s12936-018-2596-7.
Toé KH, Jones CM, N'Fale S, Ismail HM, Dabiré RK, Ranson H. Increased pyrethroid resistance in malaria vectors and decreased bed net effectiveness, Burkina Faso. Emerg Infect Dis. 2014;20(10):1691–6. https://doi.org/10.3201/eid2010.140619.
Churcher TS, Lissenden N, Griffin JT, Worrall E, Ranson H. The impact of pyrethroid resistance on the efficacy and effectiveness of bednets for malaria control in Africa. Elife. 2016;5:e16090. https://doi.org/10.7554/eLife.16090.
Kleinschmidt I, Bradley J, Knox TB, Mnzava AP, Toto Kafy H, Mbogo C, et al. Implications of insecticide resistance for malaria vector control with long-lasting insecticidal nets: a WHO-coordinated, prospective, international, observational cohort study. Lancet Infect Dis. 2018;18(6):640–9. https://doi.org/10.1016/S1473-3099(18)30172-5.
Lindsay SW, Thomas MB, Kleinschmidt I. Threats to the effectiveness of insecticide-treated bednets for malaria control: thinking beyond insecticide resistance. Lancet Glob Health. 2021;9(9):e1325–31. https://doi.org/10.1016/S2214-109X(21)00216-3.
Morris SE, Davies NW, Brown PH, Groom T. Effect of drying conditions on pyrethrins content. Ind Crops Prod. 2006;23(1):9–14. https://doi.org/10.1016/j.indcrop.2005.01.007.
Atieli FK, Munga SO, Ofulla AV, Vulule JM. Wash durability and optimal drying regimen of four brands of long-lasting insecticide-treated nets after repeated washing under tropical conditions. Malar J. 2010;9:248. https://doi.org/10.1186/1475-2875-9-248.
Leonard L, Diop S, Doumbia S, Sadou A, Mihigo J, Koenker H, et al. Net use, care and repair practices following a universal distribution campaign in Mali. Malar J. 2014;13:435. https://doi.org/10.1186/1475-2875-13-435.
Erlanger TE, Enayati AA, Hemingway J, Mshinda H, Tami A, Lengeler C. Field issues related to insecticide-treated nets in Tanzania. Med Vet Entomol. 2004;18:153–60. https://doi.org/10.1111/j.0269-283X.2004.00491.x.
Kilian A, Byamukama W, Pigeon O, Gimnig J, Atieli F, Koekemoer L, et al. Evidence for a useful life of more than three years for a polyester-based long-lasting insecticidal mosquito net in western Uganda. Malar J. 2011;10:299. https://doi.org/10.1186/1475-2875-10-299.
Mutuku FM, Khambira M, Bisanzio D, Mungai P, Mwanzo I, Muchiri EM, et al. Physical condition and maintenance of mosquito bed nets in Kwale County, coastal Kenya. Malar J. 2013;12:46. https://doi.org/10.1186/1475-2875-12-46.
Santos EM, Coalson JE, Jacobs ET, Klimentidis YC, Munga S, Agawo M, et al. Bed net care practices and associated factors in western Kenya. Malar J. 2019;18:274. https://doi.org/10.1186/s12936-019-2908-6.
VectorWorks. NetCALC. http://www.networksmalaria.org/networks/netcalc/. Accessed 27 Oct 2021.
Hakizimana E, Cyubahiro B, Rukundo A, Kabayiza A, Mutabazi A, Beach R, et al. Monitoring long-lasting insecticidal net (LLIN) durability to validate net serviceable life assumptions, in Rwanda. Malar J. 2014;13:344. https://doi.org/10.1186/1475-2875-13-344.
Massue DJ, Moore SJ, Mageni ZD, Moore JD, Bradley J, Pigeon O, et al. Durability of Olyset campaign nets distributed between 2009 and 2011 in eight districts of Tanzania. Malar J. 2016;15:176. https://doi.org/10.1186/s12936-016-1225-6.
Kilian A, Byamukama W, Pigeon O, Gimnig J, Atieli F, et al. Evidence for a useful life of more than three years for a polyester-based long-lasting insecticidal mosquito net in Western Uganda. Malar J. 2011;10:299. https://doi.org/10.1186/1475-2875-10-299.
Kilian A, Obi E, Mansiangi P, Abilo AP, Haji KA, Blaufuss S, et al. Variation of physical durability between LLIN products and net use environments: summary of findings from four African countries. Malar J. 2021;20(1):26. https://doi.org/10.1186/s12936-020-03549-2.
Azondekon R, Gnanguenon V, Oke-Agbo F, Houevoessa S, Green M, Akogbeto M. A tracking tool for long-lasting insecticidal (mosquito) net intervention following a 2011 national distribution in Benin. Parasit Vectors. 2014;7:6. https://doi.org/10.1186/1756-3305-7-6.
Staedke SG, Gonahasa S, Dorsey G, Kamya MR, Maiteki-Sebuguzi C, Lynd A, et al. Effect of long-lasting insecticidal nets with and without piperonyl butoxide on malaria indicators in Uganda (LLINEUP): a pragmatic, cluster-randomised trial embedded in a national LLIN distribution campaign. Lancet. 2020;395:1292–303. https://doi.org/10.1016/S0140-6736(20)30214-2.
Gleave K, Lissenden N, Richardson M, Choi L, Ranson H. Piperonyl butoxide (PBO) combined with pyrethroids in insecticide-treated nets to prevent malaria in Africa. Cochrane Database Syst Rev. 2018;11(11):CD012776. https://doi.org/10.1002/14651858.CD012776.pub2.
We are very grateful to the Kirinyaga County Health team for supporting the implementation of this study. The leadership of the four villages and the study participants are appreciated. We are also grateful to the Community Health Workers including Mr. Paul Kariuki Mugo, Mr. Joseph Maina Nyaga, Mr. Joseph Kinyua Mugo and Ms. Esther Muthoni Muriithi for their dedication in community mobilization during the study. The laboratory staff of Kimbimbi Sub County Hospital including Mr. Celestine Kwoba and Mr. Njenga are acknowledged for working long hours for data collection and laboratory analysis.
We also acknowledge the significant contribution to the trial by our former colleague, the late Dr Evan Mathenge, who unfortunately passed away before the trial results were written up. We are also grateful to Mr. Cassian Mwatele of ESACIPAC-KEMRI, and the laboratory team at KEMRI, Kisumu. The authors alone are responsible for the views expressed in this article and they do not necessarily represent the views, decisions, or policies of the institutions with which they are affiliated.
The work was funded as a collaborative project by the WHO Pesticide Evaluation Scheme, Geneva, Switzerland.
Eastern & Southern Africa Centre of International Parasite Control, Kenya Medical Research Institute, Nairobi, Kenya
Paul M. Gichuki & Evan Mathenge
School of Health Sciences, Meru University of Science and Technology, Meru, Kenya
Paul M. Gichuki
Centre for Biotechnology Research and Development, Kenya Medical Research Institute, Nairobi, Kenya
Luna Kamau & Damaris Matoke-Muhia
Division of National Malaria Programme, Ministry of Health, Nairobi, Kenya
Kiambo Njagi & Solomon Karoki
Department of Health, Kirinyaga County, Kirinyaga, Kenya
Njoroge Muigai
Centre for Global Health Research, Kenya Medical Research Institute, Kisumu, Kenya
Nabie Bayoh
Centers for Disease Control and Prevention, Kisumu, Kenya
Department of Control of Neglected Tropical Diseases, World Health Organization, Geneva, Switzerland
Rajpal S. Yadav
Evan Mathenge—Deceased
Evan Mathenge
Luna Kamau
Kiambo Njagi
Solomon Karoki
Damaris Matoke-Muhia
PMG, LK, KN, EM, RSY: conceptualized the study. PMG, KN, SK, NM, DMM, LK, NB, EM participated in data collection and curation. PMG analyzed the data and wrote the first draft of the manuscript. PMG, RSY: finalized and edited the original manuscript and revised it according to reviewers' comments. All authors read and approved the revised manuscript.
Correspondence to Paul M. Gichuki.
Ethical approvals of the Ethical Research Committee of Kenyatta National Hospital, University of Nairobi (protocol ID P524/8/2013), Health Management Team of the Kirinyaga County, Kenya and WHO Ethics Review Committee (ID: V2-084) were obtained. All participants gave written consent to participate in the study.
Consent for publication
This paper is published with the authority from the Director General, Kenya Medical Research Institute (KEMRI).
The authors declare no competing interests. As a collaborative project, RSY, a WHO professional staff, reviewed and agreed with the study design and contributed to writing the manuscript.
Gichuki, P.M., Kamau, L., Njagi, K. et al. Bioefficacy and durability of Olyset® Plus, a permethrin and piperonyl butoxide-treated insecticidal net in a 3-year long trial in Kenya. Infect Dis Poverty 10, 135 (2021). https://doi.org/10.1186/s40249-021-00916-2
Anopheles gambiae
Bioefficacy
Long-lasting insecticidal net
Olyset® Net
Olyset® Plus
Permethrin | CommonCrawl |
\begin{document}
\title{Crossed Products by Partial Actions of Inverse Semigroups}
\author{S. Moayeri\; Rahni} \author{B. Tabatabaie\; Shourijeh} \address{S. Moayeri\; Rahni, Department of Mathematics, College of Sciences, Shiraz University, Shiraz, 71454, Iran \\
} \email{[email protected]} \address{B. Tabatabaie\; Shourijeh, Department of Mathematics, College of Sciences, Shiraz University, Shiraz, 71454, Iran
} \email{[email protected]}
\subjclass[2010]{20M18, 16W22} \keywords{inverse semigroup, partial action, partial representation, covariant representation.} \begin{abstract} In this work we define the crossed product by a partial action of an inverse semigroup. We also associate to an inverse semigroup $G$ an inverse semigroup $S_G$, and prove that there is a correspondence between the covariant representations of $G$ and the covariant representations of $S_G$. Finally, we explore a connection between crossed products by inverse semigroup actions and crossed products by partial actions of inverse semigroups. \end{abstract}
\maketitle
\newcommand\sfrac[2]{{#1/#2}}
\newcommand\cont{\operatorname{cont}} \newcommand\diff{\operatorname{diff}}
\section{Introduction} The theories of $C^*$-crossed products by partial actions of groups and by actions of inverse semigroups are well developed \cite{partial}, \cite{sieben}. In this paper, we show that the theory of crossed products by actions of inverse semigroups can be generalized to partial actions of inverse semigroups.\par In section \ref{two} we define a partial action of an inverse semigroup as a partial homomorphism from the inverse semigroup into a symmetric inverse semigroup on some set. We refer the reader to \cite{EXPAN} for an extensive treatment of partial actions of inverse semigroups. We then define the crossed product by a partial action of an inverse semigroup.\par It turns out that there is a close connection between crossed products by partial actions of inverse semigroups and crossed products by inverse semigroup actions. In section \ref{five}, we will show that every crossed product by a partial action of an inverse semigroup is isomorphic to a crossed product by an inverse semigroup action. \section{Partial Actions of Inverse Semigroups and Covariant Representations}\label{two} Throughout this work, $G$ is a unital inverse semigroup with unit element $e$ and $\mathcal{A}$ is a $C^*$-algebra.
We recall from ~\cite{EXPAN} that a partial action of an inverse semigroup $S$ on a set $X$ is a partial homomorphism $\alpha :S\to \verb"I"(X)$, that is, for each $s,t\in S$ $$\alpha(s^{*})\alpha(s)\alpha(t)=\alpha(s^{*})\alpha(st),\;\;\;\;\alpha(s)\alpha(t)\alpha(t^{*})=\alpha(st)\alpha(t^{*}),$$ where $\verb"I"(X)$ denotes the inverse semigroup of all partial bijections between subsets of $X$. Here, however, we use ~\cite[Proposition 3.4]{EXPAN} to give an equivalent definition of a partial action of an inverse semigroup. \begin{definition}\label{defpar} Suppose that $S$ is an inverse semigroup and $X$ is a set. By a partial action of $S$ on $X$ we mean a map $\alpha :S\to \verb"I"(X)$ satisfying the following conditions: \begin{enumerate}
\item [(\textit{i})]$\alpha_{s}^{-1}=\alpha_{s^{*}}$,
\item
[(\textit{ii})]$\alpha _{s}(X_{s^{*}}\cap X_t)=X_s\cap
X_{st}$ for all $s,t\in S $ (where $X_s $ denotes the range of $\alpha_{s}$ for each $s\in
S$),
\item [(\textit{iii})] $\alpha_s(\alpha_t(x))=\alpha_{st}(x)$ for all $x\in X_{t^*}\cap
X_{t^*s^*}.$ \end{enumerate} \end{definition} To define a partial action $\alpha$ of an inverse semigroup $S$ on an associative $\mathcal{K}$-algebra $\mathcal{A}$, we suppose in Definition \ref{defpar} that each $X_s$ $(s\in S)$ is an ideal of $\mathcal{A}$ (in what follows we write $D_s$ for the ideal $X_s$) and that every map $\alpha_s:X_{s^*}\to X_s$ is an algebra isomorphism. Furthermore, if the inverse semigroup $S$ is unital with unit $e$, we shall suppose that $X_e=\mathcal{A}$. The next proposition shows that for such a partial action $\alpha$, the map $\alpha_e$ is the identity map on $\mathcal{A}$. \begin{proposition} If $\alpha$ is a partial action of $G$ on a $C^*$-algebra $\mathcal{A}$, then $\alpha_e$ is the identity map $\ell$ on $\mathcal{A}$. \end{proposition} \begin{proof} By the definition of a partial action, $\alpha_e$ is an invertible map on its domain, $D_e=\mathcal{A}$. Now, $$\ell=\alpha_e\alpha_e^{-1}=\alpha_e\alpha_{e^*}=\alpha_e\alpha_e=\alpha_e.$$ Note that we have used part (\textit{iii}) of Definition \ref{defpar} in the fourth equality above. \end{proof} The following Lemma will be used in the proof of Theorem \ref{theorem1.4}. \begin{lemma}\label{lemma2.5} If $\alpha$ is a partial action of $G$ on a $C^*$-algebra $\mathcal{A}$, then for all $t,s_1,...,s_n\in G$ $$\alpha_t(D_{t^*}D_{s_1}...D_{s_n})=D_{t}D_{ts_1}...D_{ts_n}.$$ \end{lemma} \begin{proof} For $t,s_1,...,s_n\in G$ we have \begin{eqnarray*}
\alpha_t(D_{t^*}D_{s_1}...D_{s_n}) &=& \alpha_t(D_{t^*}\cap D_{s_1}\cap ...\cap D_{t^*}\cap D_{s_n}) \\
&=& \alpha_t(D_{t^*}\cap D_{s_1})\cap...\cap\alpha_t(D_{t^*}\cap D_{s_n}) \\
&=& (D_t\cap D_{ts_1})\cap...\cap(D_t\cap D_{ts_n}) \\
&=& D_t\cap D_{ts_1}\cap...\cap D_t\cap D_{ts_n}\\
&=& D_t D_{ts_1}... D_{ts_n} \end{eqnarray*}
\end{proof} \begin{theorem}\label{theorem1.4} If $\alpha$ is a partial action of $G$ on a $C^*$-algebra $\mathcal{A}$, then for $s_1,...,s_n\in G$ the partial automorphism $\alpha_{s_1}...\alpha_{s_n}$ has domain\\ $D_{s_n^*}D_{s_n^*s_{n-1}^*}...D_{s_n^*...s_1^*}$ and range $D_{s_1}...D_{s_1...s_n}$. \end{theorem} \begin{proof} We will use induction to prove the statement about the domain. For $n=1$ \begin{equation*}
dom \alpha_{s_1}=ran \alpha_{s_1^*}=D_{s_1^*}. \end{equation*} Now, \begin{eqnarray*}
dom \alpha_{s_1}...\alpha_{s_n} &=& \alpha_{s_n}^{-1}(dom (\alpha_{s_1}...\alpha_{s_{n-1}})\cap ran \alpha_{s_n}) \\
&=& \alpha_{s_n^*}(D_{s_{n-1}^*}...D_{s_{n-1}^*...s_1^*}\cap D_{s_n}) \\
&=& \alpha_{s_n^*}(D_{s_n}D_{s_{n-1}^*}...D_{s_{n-1}^*...s_1^*} ) \\
&=& D_{s_n^*}D_{s_n^*s_{n-1}^*}...D_{s_n^*...s_1^*}. \end{eqnarray*} Note that we obtained the last equality by using Lemma \ref{lemma2.5}. For the second statement, we have \begin{eqnarray*}
ran \alpha_{s_1}...\alpha_{s_n} &=& dom \alpha_{s_n^*}...\alpha_{s_1^*} \\
&=& D_{s_1}...D_{s_1...s_n} \end{eqnarray*} by the first statement. \end{proof} If we consider a group $G$ as an inverse semigroup, then the two definitions of partial actions as a group and as an inverse semigroup are the same. This fact motivates us to define a covariant representation of a partial action of an inverse semigroup. \begin{definition}\label{def1} Let $\alpha$ be a partial action of $G$ on an algebra $\mathcal{A}$. A covariant representation of $\alpha$ is a triple $(\pi, u, \mathcal{H})$, where $\pi:\mathcal{A}\to B(\mathcal{H})$ is a non-degenerate representation of $\mathcal{A}$ on a Hilbert space $\mathcal{H}$ and for each $g\in G$, $u_g$ is a partial isometry on $\mathcal{H}$ with initial space $\pi(D_{g^*})\mathcal{H}$ and final space $\pi(D_g)\mathcal{H}$, such that \begin{enumerate}
\item $u_g\pi(a)u_{g^*}=\pi(\alpha_g(a))\hspace{0.5cm}a\in D_{g^*}$,
\item $u_{st}h=u_s u_th\hspace{0.5cm}$ for all $h\in \pi(D_{t^*}D_{t^*s^*})\mathcal{H}$,
\item $u_{s^*}=u_s^*$. \end{enumerate} \end{definition} Notice that by the \emph{Cohen-Hewitt factorization} Theorem $\pi(D_g)\mathcal{H}$ is a closed subspace of $\mathcal{H}$ and so the notions of initial and final spaces make sense.\par Now, we show that $u_e=1_{\mathcal{H}}$, where $e$ denotes the unit of $G$. Since $D_e=\mathcal{A}$, by (2) of Definition \ref{def1} for all $h\in \pi(\mathcal{A})\mathcal{H}=\mathcal{H}$ we have that $$u_eh=u_{ee}h=u_eu_eh.$$ Since $u_e$ is one-to-one on $\pi(\mathcal{A})\mathcal{H}=\mathcal{H}$, we have $u_eh=h$ for all $h\in\mathcal{H}$ as we claimed. \begin{definition} Let $\alpha$ be a partial action of $G$ on a $C^*$-algebra $\mathcal{A}$. For $s\in G$, let $\rho_s$ denote the central projection of $\mathcal{A}^{**}$ which is the identity of $D_s^{**}$. \end{definition} Let $(\pi, u, \mathcal{H})$ be a covariant representation of $(\mathcal{A}, G, \alpha)$. Since $\pi$ is a non-degenerate representation of $\mathcal{A}$, $\pi$ can be extended to a normal morphism of $\mathcal{A}^{**}$ onto $\pi(\mathcal{A})''$. We will denote this extension also by $\pi$. Note that $\pi(D_{s_1}...D_{s_n})\mathcal{H}=\pi(\rho_{s_1}\ldots\rho_{s_n})\mathcal{H}$ for all $s_1,\ldots ,s_n\in G$, and $u_su_{s^*}=\pi(\rho_s)$ for all $s\in G$. \begin{theorem}\label{theorem2.9} Let $(\pi, u, \mathcal{H})$ be a covariant representation of $(\mathcal{A}, G, \alpha)$. Then for all $s_1,...,s_n\in G$, $u_{s_1}...u_{s_n}$ is a partial isometry with initial space $$\pi(D_{s_n^*}D_{s_n^*s_{n-1}^*}...D_{s_n^*...s_1^*})\mathcal{H}$$ and final space $$\pi(D_{s_1}...D_{s_1...s_n})\mathcal{H}.$$ \end{theorem} \begin{proof} Firstly, we show that $u_{s_1}...u_{s_n}u_{s_n}^*...u_{s_1}^*=\pi(\rho_{s_1}...\rho_{s_1...s_n})$. For $n=1$ we have proved that $u_{s_1}u_{s_1}^*=\pi(\rho_{s_1})$. Now, \begin{eqnarray*}
u_{s_1}...u_{s_n}u_{s_n}^*...u_{s_1}^* &=& u_{s_1}u_{s_1}^*u_{s_1}\pi(\rho_{s_2}...\rho_{s_2...s_n})u_{s_1}^* \\
&=& u_{s_1}u_{s_1^*}u_{s_1}\pi(\rho_{s_2}...\rho_{s_2...s_n})u_{s_1}^* \\
&=& u_{s_1}\pi(\rho_{s_1^*})\pi(\rho_{s_2}...\rho_{s_2...s_n})u_{s_1}^*\\
&=& u_{s_1}\pi(\rho_{s_1^*}\rho_{s_2}...\rho_{s_2...s_n})u_{s_1^*} \\
&=& \pi(\alpha_{s_1}(\rho_{s_1^*}\rho_{s_2}...\rho_{s_2...s_n})) \\
&=& \pi(\rho_{s_1}\rho_{s_1s_2}...\rho_{s_1s_2...s_n}), \end{eqnarray*} so $u_{s_1}...u_{s_n}u_{s_n}^*...u_{s_1}^*$ is a projection, since $\rho_{s_1},...,\rho_{s_1...s_n}$ commute. Consequently, the final space of $u_{s_1}...u_{s_n}$ is equal to \begin{eqnarray*}
u_{s_1}...u_{s_n}u_{s_n}^*...u_{s_1}^*\mathcal{H}&=&\pi(\rho_{s_1}...\rho_{s_1...s_n})\mathcal{H} \\
&=&\pi(D_{s_1}D_{s_1s_2}...D_{s_1...s_n})\mathcal{H}.
\end{eqnarray*}
Similarly, we can prove that the initial space of $u_{s_1}...u_{s_n}$ is equal to $$\pi(D_{s_n^*}D_{s_n^*s_{n-1}^*}...D_{s_n^*...s_1^*})\mathcal{H}.$$ \end{proof} \begin{corollary}\label{cor2.10} If $(\pi, u, \mathcal{H})$ is a covariant representation of $(\mathcal{A}, G, \alpha)$, then \begin{equation*} u_{s_1...s_n}h=u_{s_1}...u_{s_{n}}h\;\text{ for all}\; h\in\pi(D_{s_n^*}...D_{s_n^*...s_1^*})\mathcal{H}, \end{equation*} and \begin{equation*}
\pi(a)u_{s_1...s_n}=\pi(a)u_{s_1}...u_{s_n}\;\text{for all}\; a\in D_{s_1}D_{s_1s_2}...D_{s_1...s_n}. \end{equation*} \end{corollary} \begin{proof} For $n=2$, if $h\in\pi(D_{s_2^*}D_{s_2^*s_1^*})\mathcal{H}$ then by Definition \ref{def1} part (2) we have $u_{s_1s_2}h=u_{s_1}u_{s_2}h$. For $h\in \pi(D_{s_n^*}...D_{s_n^*...s_1^*})\mathcal{H}$ we have $u_{s_1...s_n}h=u_{s_1...s_{n-1}}u_{s_n}h$ by Definition \ref{def1} part (2). Now, since \begin{eqnarray*}
u_{s_n}h\in u_{s_n}\pi(\rho_{s_{n}^*}...\rho_{s_n^*...s_1^*})\mathcal{H} &=& \pi(\rho_{s_n}\rho_{s_{n-1}^*}...\rho_{s_{n-1}^*...s_1^*})\mathcal{H} \\
&\subseteq& \pi(\rho_{s_{n-1}^*}...\rho_{s_{n-1}^*...s_1^*})\mathcal{H} \\
&=& \pi(D_{s_{n-1}^*}...D_{s_{n-1}^*...s_1^*})\mathcal{H}, \end{eqnarray*} by induction hypothesis we have $u_{s_1...s_{n-1}}u_{s_n}h=u_{s_1}...u_{s_{n-1}}u_{s_n}h$. By the first statement, we have \begin{equation}\label{eq1}
u_{s_n^*s_{n-1}^*...s_1^*}\pi(a^*)=u_{s_n^*}...u_{s_1^*}\pi(a^*) \end{equation} since $\pi(a^*)\mathcal{H}\subseteq\pi(D_{s_1}...D_{s_1...s_n})\mathcal{H}$ for $a\in D_{s_1}...D_{s_1...s_n}$. Taking adjoints, we have $\pi(a)u_{s_1...s_n}=\pi(a)u_{s_1}...u_{s_n}.$ \end{proof} \begin{corollary}\label{cor2.11} If $(\pi, u, \mathcal{H})$ is a covariant representation of $(\mathcal{A}, G, \alpha)$, then $S=\{ u_{s_1}...u_{s_n}\;:\; s_1,..., s_n\in G\}$ is a unital inverse semigroup of partial isometries of $\mathcal{H}$. \end{corollary}
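As a quick illustration of Theorem \ref{theorem2.9} (this is just the case $n=2$ of the computation in its proof), for $s,t\in G$ we have
\begin{eqnarray*}
u_su_t(u_su_t)^* &=& u_su_tu_{t^*}u_{s^*} \\
&=& u_s\pi(\rho_t)u_{s^*} \\
&=& u_s\pi(\rho_{s^*}\rho_t)u_{s^*} \\
&=& \pi(\alpha_s(\rho_{s^*}\rho_t))=\pi(\rho_s\rho_{st}),
\end{eqnarray*}
where the third equality uses $u_s=u_s\pi(\rho_{s^*})$ and the last one uses Lemma \ref{lemma2.5}. Hence the final space of $u_su_t$ is $\pi(D_sD_{st})\mathcal{H}$, in accordance with the theorem.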
Now, we are able to define an inverse semigroup associated to a covariant representation of a partial action of a unital inverse semigroup $G$. \begin{proposition}\label{pro3.3} Let $\alpha$ be a partial action of an inverse semigroup $G$ on the $C^*$-algebra $\mathcal{A}$, and let $(\pi, u, \mathcal{H})$ be a covariant representation of $\alpha$. Let $S_G=\{(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})\;:\;g_1,...,g_n\in G\}$. Then $S_G$ is a unital inverse semigroup with coordinatewise multiplication. For $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})\in S_G$ let \begin{equation*}
E_s=D_{g_1}...D_{g_1...g_n}, \end{equation*} and \begin{equation*}
\beta_s=\alpha_{g_1}...\alpha_{g_n}:E_{s^*}\to E_s. \end{equation*} Then $\beta$ is an action of $S_G$ on $\mathcal{A}$. \end{proposition} \begin{proof} Let $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n}), t=(\alpha_{h_1}...\alpha_{h_n}, u_{h_1}...u_{h_n})$, then $st=(\alpha_{g_1}...\alpha_{g_n}\alpha_{h_1}...\alpha_{h_n}, u_{g_1}...u_{g_n}u_{h_1}...u_{h_n})\in S_G$, and the unit of $S_G$ is $(\alpha_e, u_e)$. Obviously $E_s$ is a closed ideal of $\mathcal{A}$ and $\beta_s$ is an isomorphism. Now, we define the domain of $\beta_s$. \begin{eqnarray*}
dom(\alpha_{g_1}...\alpha_{g_n}) &=& \alpha_{g_n^*}(dom(\alpha_{g_1}...\alpha_{g_{n-1}}) D_{g_n}) \\
&=& \alpha_{g_n^*}(\alpha_{g_{n-1}^*}(dom(\alpha_{g_1}...\alpha_{g_{n-2}})D_{g_{n-1}}) D_{g_n})) \\
&\vdots&\\
&=& D_{g_n^*}...D_{g_{n}^*...g_1^*}=E_{s^*}. \end{eqnarray*} Now, let us show that $ran\beta_s=E_s$. To do this, we will use induction. For $n=2$, \begin{eqnarray*}
ran\beta_s&=&ran \alpha_{g_1}\alpha_{g_2} \\
&=& \alpha_{g_1}(D_{g_2}D_{g_1^*}) \\
&=& D_{g_1}D_{g_1g_2}=E_s. \end{eqnarray*} On the other hand, for $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})$, \begin{eqnarray*}
ran\beta_s &=& ran\alpha_{g_1}...\alpha_{g_n} \\
&=& \alpha_{g_1}(ran(\alpha_{g_2}...\alpha_{g_n})D_{g_{1}^*}) \\
&=& \alpha_{g_1}(D_{g_2}D_{g_2g_3}...D_{g_2...g_n}D_{g_{1}^*})\\
&=& D_{g_1}D_{g_1g_2}...D_{g_1...g_n}=E_s. \end{eqnarray*} So, $\beta_s:E_{s^*}\to E_s$ is an isomorphism, and clearly for $s,t\in S_G$ we have $\beta_s\beta_t=\beta_{st}$. \end{proof} The following proposition shows that there is a relation between covariant representations of $(\mathcal{A}, S_G, \beta)$ and covariant representations of $(\mathcal{A}, G, \alpha).$ \begin{proposition}\label{pro3.5} Keeping the notation of Proposition \ref{pro3.3}, define $\nu:S_G\to B(\mathcal{H})$ by $\nu_s=u_{g_1}...u_{g_n}$, where $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})$. Then $(\pi, \nu, \mathcal{H})$ is a covariant representation of $(\mathcal{A}, S_G, \beta)$. Conversely, if ($\rho, z, \mathcal{K}$) is a covariant representation of $(\mathcal{A}, S_G, \beta)$, then the function $\omega : G\to B(\mathcal{K})$ defined by $\omega_g=z(\alpha_g,u_g)$ gives a covariant representation $(\rho, \omega, \mathcal{K})$ of $(\mathcal{A}, G, \alpha)$. \end{proposition} \begin{proof} Let $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})\in S_G$. By Theorem \ref{theorem2.9}, $\nu_s=u_{g_1}...u_{g_n}$ is a partial isometry with initial space $\pi(E_{s^*})\mathcal{H}$ and final space $\pi(E_s)\mathcal{H}$. Obviously, $\nu$ is multiplicative. Let $a\in E_{s^*}= D_{g_n^*}...D_{g_n^*...g_1^*}$, then \begin{eqnarray*}
\nu_s\pi(a)\nu_{s^*} &=& u_{g_1}...u_{g_n}\pi(a)u_{g_n^*}...u_{g_1^*} \\
&=& u_{g_1}...u_{g_{n-1}}\pi(\alpha_{g_n}(a))u_{g_{n-1}^*}...u_{g_1^*} \\
&\vdots&\\
&=& \pi(\alpha_{g_1}...\alpha_{g_n}(a))=\pi(\beta_s(a)). \end{eqnarray*} Conversely, suppose that ($\rho, z, \mathcal{K}$) is a covariant representation of $(\mathcal{A}, S_G, \beta)$. We want to show that ($\rho, \omega, \mathcal{K}$) is a covariant representation of ($\mathcal{A}, G, \alpha$). By the definition of $\omega_g$, $g\in G$, $\omega_g$ is a partial isometry with initial space $\rho(D_{g^*})\mathcal{K}$ and final space $\rho(D_g)\mathcal{K}$. For $g_1, g_2\in G$, put $s=(\alpha_{g_1g_2}, u_{g_1g_2})$, $s_1=(\alpha_{g_1}, u_{g_1})$, and $s_2=(\alpha_{g_2}, u_{g_2})$. By the definition of partial action, if $x\in D_{g_2^*}D_{g_1^*}$ then $\alpha_{g_1}\alpha_{g_2}(x)=\alpha_{g_1g_2}(x)$. But, \begin{equation*}
ran\alpha_{g_2^*}\alpha_{g_1^*}=\alpha_{g_2^*}(D_{g_1^*}D_{g_2})=D_{g_2^*}D_{g_2^*g_1^*}. \end{equation*} Consequently, \begin{equation}\label{eq2}
\alpha_{g_1g_2}(\alpha_{g_1}\alpha_{g_2})^*=\alpha_{g_1g_2}\alpha_{g_2^*}\alpha_{g_1^*}=\alpha_{g_1}\alpha_{g_2}(\alpha_{g_1}\alpha_{g_2})^*. \end{equation} By Definition \ref{def1} part (2), $u_{g_1g_2}=u_{g_1}u_{g_2}$ on $\pi (D_{g_2^*}D_{g_2^*g_1^*})\mathcal{H}$. On the other hand, by Theorem \ref{theorem2.9} the final space of $(u_{g_1}u_{g_2})^*$ is $\pi(D_{g_2^*}D_{g_2^*g_1^*})\mathcal{H}$. Thus \begin{equation}\label{eq3}
u_{g_1g_2}(u_{g_1}u_{g_2})^*=u_{g_1}u_{g_2}(u_{g_1}u_{g_2})^*. \end{equation} Hence \begin{eqnarray}\label{eq4}
\nonumber s(s_1s_2)^* &=& (\alpha_{g_1g_2}, u_{g_1g_2})[(\alpha_{g_1}, u_{g_1})(\alpha_{g_2}, u_{g_2})]^* \\
\nonumber &=& (\alpha_{g_1}\alpha_{g_2}(\alpha_{g_1}\alpha_{g_2})^*, u_{g_1}u_{g_2}(u_{g_1}u_{g_2})^*) \\
&=& s_1s_2(s_1s_2)^*, \end{eqnarray} note that we have used equations \ref{eq2} and \ref{eq3} in the second equality above. So, \begin{eqnarray}\label{eq5}
\nonumber z_sz_{(s_1s_2)^*}&=& z_{s(s_1s_2)^*} \\
\nonumber &=& z_{s_1s_2(s_1s_2)^*} \\
&=& z_{s_1s_2}z_{(s_1s_2)^*} \end{eqnarray} by equality \ref{eq4}. Now, for $h\in\rho(D_{g_2^*}D_{g_2^*g_1^*})\mathcal{K}$ \begin{eqnarray*}
z_s h&=&z_{s_1s_2}h \\
&=& z_{s_1}z_{s_2}h \end{eqnarray*} note that we have used \ref{eq5} and the fact that $\rho(D_{g_2^*}D_{g_2^*g_1^*})\mathcal{K}$ is the final space of $z_{(s_1s_2)^*}$ in the first equality. Hence $\omega_{g_1g_2}=\omega_{g_1}\omega_{g_2}$ on $\rho(D_{g_2^*}D_{g_2^*g_1^*})\mathcal{K}$. For $g\in G$, \begin{equation*}
\omega_{g^*}=z_{(\alpha_{g^*}, u_{g^*})}=z_{s^*}=z^*_s=\omega^*_g. \end{equation*} Consequently, $(\rho, \omega, \mathcal{K})$ is a covariant representation of $(\mathcal{A}, G, \alpha)$. \end{proof}
\section{Crossed Products}\label{sec3} McClanahan defines the partial crossed product $\mathcal{A}\ltimes _{\alpha}G$ of the $C^*$-algebra $\mathcal{A}$ and the group $G$ by the partial action $\alpha$ as the enveloping $C^*$-algebra of $L=\{x\in \ell^1(G,\mathcal{A})\;:\;x(g)\in D_g\}$ with the multiplication and involution \begin{equation*}
(x\ast y)(g)=\sum_{h\in G}\alpha_h[\alpha_{h^{-1}}(x(h))y(h^{-1}g)], \end{equation*} and \begin{equation*}
x^*(g)=\alpha_g(x(g^{-1})^*). \end{equation*} He shows that there is a bijective correspondence $(\pi, u, \mathcal{H})\leftrightarrow (\pi\times u, \mathcal{H})$ between covariant representations of $(\mathcal{A}, G, \alpha)$ and non-degenerate representations of $\mathcal{A}\ltimes_{\alpha} G$, where $\pi\times u$ is defined by $x\mapsto\sum_{g\in G}\pi(x(g))u_g$. We follow in his footsteps, constructing the crossed product of a $C^*$-algebra and a unital inverse semigroup by a partial action.\par Let $\alpha$ be a partial action of the unital inverse semigroup $G$ on the $C^*$-algebra $\mathcal{A}$. Consider the subset $L=\{x\in\ell^1(G,\mathcal{A})\;:\;x(g)\in E_g\}$ of $\ell^1(G,\mathcal{A})$ with the multiplication and involution as follows: \begin{eqnarray*}
(x\ast y)(g) &=& \sum_{hk=g}\beta_{h}(\beta_{h^*}(x(h))y(k)), \\
x^*(g) &=& \beta_g(x(g^*)^*). \end{eqnarray*} Notice that by Definition \ref{defpar} part (ii), $(x\ast y)(g)\in E_g$. It is easy to see that for $x,y\in L$ we have $x\ast y, x^*\in L$, and \begin{equation*}
\| x\ast y \|\leq \|x\|\|y\|, \end{equation*} and \begin{equation*}
\|x^*\|=\|x\|. \end{equation*} Obviously, $L$ is a closed subset of $\ell^1(G, \mathcal{A})$, so $L$ is a Banach space. One can easily show that \begin{itemize}
\item [(\textit{i})] $(x+y)^*=x^*+ y^*,$
\item [(\textit{ii})] $(ax)^*=\bar{a}x^*,$
\item [(\textit{iii})] $(x\ast y)^*=y^*\ast x^*.$ \end{itemize} \begin{proposition}\label{pro4.1} $L$ is a Banach *-algebra. \end{proposition} \begin{proof} By the argument above, $L$ is a Banach space closed under multiplication and involution. To show that $L$ is a Banach *-algebra, it is enough to check the associativity of the multiplication, and it suffices to do so for $x=a_r\delta_r$, $y=a_s\delta_s$, and $z=a_t\delta_t$. Let $\{u_{\lambda}\}$ be an approximate identity for $E_{s^*}$, then \begin{eqnarray*}
(a_r\delta_r\ast a_s\delta_s)\ast a_t\delta_t &=& \beta_r(\beta_{r^*}(a_r)a_s)\delta_{rs}\ast a_t\delta_t \\
&=& \beta_{rs}(\beta_{s^*r^*}(\beta_r(\beta_{r^*}(a_r)a_s))a_t)\delta_{rst} \\
&=& \beta_{rs}(\beta_{s^*}\beta_{r^*}(\beta_r(\beta_{r^*}(a_r)a_s))a_t)\delta_{rst} \\
&=& \beta_{rs}(\beta_{s^*}(\beta_{r^*}(a_r)a_s)a_t)\delta_{rst} \\
&=& \lim_{\lambda}\beta_{rs}(\beta_{s^*}(\beta_{r^*}(a_r)a_s)u_{\lambda}a_t)\delta_{rst}\\
&=& \lim_{\lambda}\beta_{r}\beta_{s}(\beta_{s^*}(\beta_{r^*}(a_r)a_s)u_{\lambda}a_t)\delta_{rst}\\
&=&\lim_{\lambda}\beta_r(\beta_{r^*}(a_r)a_s\beta_{s}(u_{\lambda}a_t))\delta_{rst}\\
&=&\lim_{\lambda}\beta_r(\beta_{r^*}(a_r)\beta_{s}(\beta_{s^*}(a_s)u_{\lambda}a_t))\delta_{rst}\\
&=&\beta_r(\beta_{r^*}(a_r)\beta_{s}(\beta_{s^*}(a_s)a_t))\delta_{rst}\\
&=& a_r\delta_r\ast(\beta_{s}(\beta_{s^*}(a_s)a_t)))\delta_{st}\\
&=& a_r\delta_r\ast(a_s\delta_s\ast a_t\delta_t). \end{eqnarray*} \end{proof} Note that the authors in \cite{skew} prove the associativity of $L$ in a more general setting, where $\mathcal{A}$ is just an algebra. \begin{definition}\label{def4.2} If $(\pi, \nu, \mathcal{H})$ is a covariant representation of ($\mathcal{A}, S, \beta$), then define
$\pi\times\nu: L\to B(\mathcal{H})$ by $(\pi\times\nu)(x)=\sum_{s\in S}\pi(x(s))\nu_s.$ \end{definition} \begin{proposition}\label{pro4.3} $\pi\times\nu$ is a non-degenerate representation of $L$. \end{proposition} \begin{proof} Clearly $\pi\times\nu$ is a linear map from $L$ into $B(\mathcal{H})$. As for multiplicativity, it suffices to verify this for elements of the form $a_s\delta_s$. For such elements, we have \begin{eqnarray*}
\pi\times\nu(a_s\delta_s\ast a_t\delta_t) &=& \pi\times\nu(\beta_s(\beta_{s^*}(a_s)a_t)\delta_{st}) \\
&=& \pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_{st}. \end{eqnarray*} We also have \begin{eqnarray*}
\pi\times\nu(a_s\delta_s)\pi\times\nu(a_t\delta_t) &=& \pi(a_s)\nu_s\pi(a_t)\nu_t \\
&=& \nu_s\nu_{s^*}\pi(a_s)\nu_s\pi(a_t)\nu_t \\
&=& \nu_s\pi(\beta_{s^*}(a_s))\pi(a_t)\nu_t \\
&=& \nu_s\pi(\beta_{s^*}(a_s)a_t)\nu_t \\
&=& \nu_s\pi(\beta_{s^*}(a_s)a_t)\nu_{s^*}\nu_s\nu_t \\
&=& \pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_s\nu_t. \end{eqnarray*} We have used the fact that $\nu_s\nu_{s^*}\pi(a_s)=\pi(a_s)\nu_{s^*}\nu_s=\pi(a_s)$ for $a_s\in E_s$ in the second and fifth equalities above. Since $\beta_s(\beta_{s^*}(a_s)a_t)$ is in $\beta_s(E_{s^*}E_{t})=E_sE_{st}$, it follows from Definition \ref{def1} that \begin{equation*}
\pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_s\nu_t=\pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_{st}, \end{equation*}
so, the multiplicativity of $\pi\times\nu$ follows. The following computations verify that $\pi\times\nu$ preserves the $*$-operation. \begin{eqnarray*}
\pi\times\nu((a_s\delta_s)^*) &=& \pi\times\nu(\beta_{s^*}(a_s^*)\delta_{s^*}) \\
&=& \pi(\beta_{s^*}(a_s^*))\nu_{s^*} \\
&=& \nu_{s^*}\pi(a_{s}^*)\nu_s\nu_{s^*} \\
&=& \nu_{s^*}\pi(a_s^*) \\
&=& (\pi(a_s)\nu_s)^*=(\pi\times\nu(a_s\delta_s))^*. \end{eqnarray*} If $\{u_{\lambda}\}$ is a bounded approximate identity for $\mathcal{A}$, then $\{u_{\lambda}\delta_e\}$ is a bounded approximate identity for $L$ since for $a\in E_s$ we have \begin{equation*}
\lim_{\lambda}u_{\lambda}\delta_e\ast a\delta_s=\lim_{\lambda}u_{\lambda}a\delta_s=a\delta_s, \end{equation*} and \begin{equation*}
\lim_{\lambda}a\delta_s\ast u_{\lambda}\delta_e=\lim_{\lambda}\beta_s(\beta_{s^*}(a)u_{\lambda})\delta_s=a\delta_s. \end{equation*} Since $\pi$ is a non-degenerate representation, $\pi\times\nu(u_{\lambda}\delta_e)=\pi(u_{\lambda})$ converges strongly to $1_{B(\mathcal{H})}$ and so $\pi\times\nu$ is non-degenerate. \end{proof} \begin{definition}
Let $\mathcal{A}$ be a $C^*$-algebra and $\beta$ be a partial action of the unital inverse semigroup $G$ on $\mathcal{A}$. Define a seminorm $\|.\|_1$ on $L$ by \begin{equation*}
\|x\|_1=\sup\{\|\pi\times\nu(x)\|\;:\;(\pi,\nu)\;\textsl{is a covariant representation of}\;(\mathcal{A}, G, \beta)\}, \end{equation*}
and let $N=\{x\in L\;:\;\|x\|_1=0\}.$ \end{definition}
The crossed product $\mathcal{A}\ltimes_{\beta} G$ is the $C^*$-algebra obtained by completing the quotient $\frac{L}{N}$ with respect to $\|.\|_1.$ \begin{lemma}\label{lem4.5} If $s\leq t$ in $G$, then $\Phi(a\delta_s)=\Phi(a\delta_t)$ for all $a\in E_s$, where $\Phi$ is the quotient map of $L$ onto $\frac{L}{N}$. \end{lemma} \begin{proof} Notice that since $s\leq t$ there is an idempotent $f$ in $G$ such that $s=ft$, and we have $E_s\subseteq E_t$ by \cite[Proposition 3.8]{EXPAN}, so, $a\in E_t$. If ($\pi, \nu$) is a covariant representation of ($\mathcal{A}, G, \beta$), then \begin{eqnarray*}
\pi\times\nu(a\delta_s- a\delta_t) &=& \pi(a)\nu_s - \pi(a)\nu_t \\
&=& \pi(a)\nu_{ft} - \pi(a)\nu_t \\
&=& \pi(a)\nu_f\nu_t- \pi(a)\nu_t. \end{eqnarray*} We have used the fact that $\pi(a)\nu_{ft}=\pi(a)\nu_f\nu_t$ for $a\in E_s$ in the third equality. Since $f$ is an idempotent, $\nu_f$ is the identity on $\pi(E_f)\mathcal{H}$. Now for $h\in \mathcal{H}$ if $\nu_t(h)\in \pi(E_f)\mathcal{H}$, then \begin{equation*}
\pi(a)\nu_f\nu_t(h) - \pi(a)\nu_t(h)=0. \end{equation*} If $\nu_t(h)\in (\pi(E_f)\mathcal{H})^\bot= Ker\hspace{0.1cm} \nu_f$, then \begin{equation*}
\pi(a)\nu_f\nu_t(h)=0. \end{equation*} On the other hand, $\pi(a)\nu_t(h)=0$ because if $k\in\mathcal{H}$ then \begin{equation*}
<\pi(a)\nu_t(h), k>= <\nu_t(h), \pi(a^*)k>=0 \end{equation*} since $a^*\in E_s=E_{ft}\subseteq E_f$. Hence, $\Phi(a\delta_s - a\delta_t)=0$. Note that the fact that $E_{ft}\subseteq E_f$ follows from \cite[Corollary 2.21]{EXPAN} and the fact that \begin{equation*}
E_{ft}= ran\,\beta_{ft}= ran\,\beta_f\beta_t=\beta_f(E_tE_f)=E_{ft}\cap E_f. \end{equation*}
\end{proof} \begin{corollary}\label{cor4.6} If $G$ is a semilattice, then $\mathcal{A}\ltimes_\beta G$ is isomorphic to $\mathcal{A}$. \end{corollary} \begin{proof} Let $e$ be the identity element of $G$, then $g\leq e$ for each $g\in G$. Define $\psi_1: \mathcal{A}\to \mathcal{A}\ltimes_\beta G$ by $a\mapsto \Phi(a\delta_e)$. Obviously $\psi_1$ is a $*$-homomorphism. Now, define $\psi_2: \frac{L}{N}\to \mathcal{A}$ by $\Phi(a\delta_g)\mapsto a$. Now, we will show that $\psi_2$ is well-defined. If $\Phi(a\delta_e)=\Phi(b\delta_e)$, then for each covariant representation ($\pi, \nu, \mathcal{H}$) we have \begin{equation*}
\pi\times\nu(a\delta_e - b\delta_e)= \pi(a - b)=0, \end{equation*} so, $a - b=0$ since $\mathcal{A}$ has a universal representation. This shows that $\psi_2$ is well-defined since $\Phi(a\delta_g)=\Phi(a\delta_e)$ for each $g\in G$. Clearly, $\psi_2$ is a $*$-homomorphism that can be extended to $\mathcal{A}\ltimes_\beta G$. Finally, it is easy to see that $\psi_1\circ\psi_2$ and $\psi_2\circ\psi_1$ are identity maps on $\mathcal{A}\ltimes_\beta G$ and $\mathcal{A}$ respectively. \end{proof} \begin{proposition}\label{pro4.7} Let $(\Pi,\mathcal{H})$ be a non-degenerate representation of $\mathcal{A}\ltimes_\beta G$. Define a representation $\pi$ of $\mathcal{A}$ on $\mathcal{H}$ and a map $\nu:S\to B(\mathcal{H})$ by \begin{equation*}
\pi(a)=\Pi(a\delta_e),\hspace{0.5cm} \nu_s=\lim_\lambda \Pi(u_\lambda\delta_s)\rho_{s^*}, \end{equation*} where $\{u_\lambda\}$ is an approximate identity of $E_s$, the limit is the strong limit, and $\rho_{s^*}$ is the orthogonal projection onto $\pi(E_{s^*})\mathcal{H}$. Then $(\pi, \nu, \mathcal{H})$ is a covariant representation of $(\mathcal{A}, S, \beta)$. \end{proposition} \begin{proof} Clearly $\pi$ is a representation of $\mathcal{A}$ on $\mathcal{H}$. Now, let $\{u_\lambda \}$ be an approximate identity for $E_s$, and let $h\in\mathcal{H}$. We will consider two cases:\\ If $h\in\pi(E_{s^*})\mathcal{H}$: then there exist elements $a\in E_{s^*}$ and $h'\in \mathcal{H}$ such that $h=\pi(a)h'$. So, \begin{eqnarray*}
\nu_s(h) &=& \lim_\lambda\Pi(u_\lambda\delta_s)(\Pi(a\delta_e)h') \\
&=& \lim_\lambda\Pi(u_\lambda\delta_s\ast a\delta_e)h' \\
&=&\lim_\lambda \Pi(\beta_s(\beta_{s^*}(u_{\lambda})a)\delta_s)h' \\
&=& \Pi(\beta_s(a)\delta_s)h'. \end{eqnarray*} If $h\in (\pi(E_{s^*})\mathcal{H})^\bot$: by the definition we have
\begin{eqnarray*}
\nu_s h&=& \lim_\lambda\Pi(u_\lambda\delta_s)\rho_{s^*}h=0. \\
\end{eqnarray*}
This shows that $\nu_s$ is independent of the choice of approximate identity of $E_s$, so $\nu$ is well-defined. Now, we want to show that $\nu^*_s=\nu_{s^*}$ for $s\in S$. First we remark that for $a_s\in E_s$ we have $\Pi(a_s\delta_s)\rho_{s^*}=\rho_s\Pi(a_s\delta_s)$. Let $\{u_\lambda \}$ be an approximate identity for $E_s$. It follows that
\begin{eqnarray*}
(\nu_s)^* &=& \lim_\lambda(\Pi(u_\lambda\delta_s)\rho_{s^*})^* \\
&=& \lim_\lambda\rho_{s^*}\Pi(\beta_{s^*}(u_\lambda)\delta_{s^*}) \\
&=&\lim_\lambda \Pi(\beta_{s^*}(u_\lambda)\delta_{s^*})\rho_{s}\\
&=&\nu_{s^*}
\end{eqnarray*}
since $\{\beta_{s^*}(u_\lambda)\}$ is an approximate identity for $E_{s^*}$. As for the covariance condition, let $x\in E_{s^*}$ and observe that
\begin{eqnarray*}
\nu_s\pi(x)\nu_{s^*} &=& \lim_{\lambda,\mu}\rho_s\Pi(u_\mu\delta_s)\Pi(x\delta_e)\Pi(\beta_{s^*}(u_\lambda)\delta_{s^*})\rho_s \\
&=& \lim_{\mu,\lambda}\rho_s\Pi(u_\mu\delta_s\ast x\delta_e\ast \beta_{s^*}(u_\lambda)\delta_{s^*}) \rho_s\\
&=& \lim_{\mu,\lambda}\rho_s\Pi(u_\mu\beta_s(x)u_\lambda\delta_{ss^*})\rho_s \\
&=& \lim_{\mu,\lambda}\rho_s\Pi(u_\mu\beta_s(x)u_\lambda\delta_{e})\rho_s \\
&=& \rho_s\pi(\beta_s(x))\rho_s \\
&=& \pi(\beta_s(x)).
\end{eqnarray*} It should be noted that we have used the fact that $\Pi\equiv0$ on $N$ in the fourth equality above. As for property (2) of Definition \ref{def1}, notice that for $a_s\in E_s$ we have
\begin{eqnarray*}
\Pi(a_s\delta_s) &=& \lim_\lambda\Pi(a_su_\lambda\delta_s) \\
&=& \lim_\lambda\Pi(a_s\delta_e\ast u_\lambda\delta_s) \\
&=& \pi(a_s)\rho_s\lim_\lambda\Pi(u_\lambda\delta_s) \\
&=& \pi(a_s)\lim_\lambda\Pi(u_\lambda\delta_s)\rho_{s^*} \\
&=& \pi(a_s)\nu_s.
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\Pi(a_s\delta_s)\Pi(a_t\delta_t) &=& \pi(a_s)\nu_s\pi(a_t)\nu_t \\
&=& \nu_s\nu_{s^*} \pi(a_s)\nu_s\pi(a_t)\nu_t \\
&=& \nu_s\pi(\beta_{s^*}(a_s)a_t)\nu_t \\
&=& \nu_s\pi(\beta_{s^*}(a_s)a_t)\nu_{s^*}\nu_s\nu_t \\
&=& \pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_s\nu_t.
\end{eqnarray*}
Because $\Pi$ is multiplicative, the above expression is the same as
\begin{eqnarray*}
\Pi(a_s\delta_s\ast a_t\delta_t) &=& \Pi(\beta_s(\beta_{s^*}(a_s)a_t)\delta_{st}) \\
&=& \pi(\beta_s(\beta_{s^*}(a_s)a_t))\nu_{st}.
\end{eqnarray*}
Elements of the form $\beta_{s^*}(a_s)a_t$ generate $E_{s^*}E_t$. Since $\beta_s$ maps $E_{s^*}E_t$ onto $E_sE_{st}$, it follows that elements of the form $\beta_s(\beta_{s^*}(a_s)a_t)$ generate $E_sE_{st}$ and so property (2) of Definition \ref{def1} follows. Clearly, $\pi$ is a non-degenerate representation of $\mathcal{A}$. Thus ($\pi, \nu, \mathcal{H}$) is a covariant representation of ($\mathcal{A}, S, \beta$).
\end{proof}
\begin{proposition}\label{pro4.8}
The correspondence $(\pi, \nu , \mathcal{H})\leftrightarrow (\pi\times\nu, \mathcal{H})$ is a bijection between covariant representations of ($\mathcal{A}, S, \beta$) and non-degenerate representations of $\mathcal{A}\ltimes_{\beta} S$.
\end{proposition}
\begin{proof}
We will show that the correspondences $(\pi, \nu, \mathcal{H})\mapsto(\pi\times \nu, \mathcal{H})$ and $(\Pi,\mathcal{H})\mapsto(\pi, \nu, \mathcal{H})$ are inverses of each other. Let ($\pi', \nu', \mathcal{H}$) be a covariant representation of ($\mathcal{A}, S, \beta$). Let ($\pi, u, \mathcal{H}$) be a covariant representation of ($\mathcal{A}, S, \beta$) induced by $\pi'\times\nu'$. Then for $a\in\mathcal{A}$ and $s\in S$ we have
\begin{equation*}
\pi(a)=\pi'\times\nu'(a\delta_e)=\pi'(a)
\end{equation*}
and
\begin{eqnarray*}
u_s&=&\lim_\lambda\rho_s\pi'\times\nu'(\omega_\lambda\delta_s)\\
&=&\lim_\lambda\rho_s\pi'(\omega_\lambda)\nu'_s\\
&=&\lim_\lambda\pi'(\omega_\lambda)\nu'_s=\nu'_s.
\end{eqnarray*}
We have used the fact that $\rho_s\pi'(\omega_\lambda)\nu'_s=\pi'(\omega_\lambda)\nu'_s$ since $\rho_s$ is the orthogonal projection onto $\pi'\times\nu'(E_s)\mathcal{H}=\bar{\textsf{Span}}\{\pi'(a_s)\nu'_s\;:\;a_s\in E_s\}$. Let $\Pi$ be a non-degenerate representation of $\mathcal{A}\ltimes_\beta S$ on $\mathcal{H}$. Let $(\pi, \nu, \mathcal{H})$ be a covariant representation of ($\mathcal{A}, S, \beta$) induced by $\Pi$. Then if $a_s\in E_s$ we have
\begin{eqnarray*}
\pi\times\nu(a_s\delta_s) &=& \pi(a_s)\nu_s \\
&=& \Pi(a_s\delta_e)\lim_\lambda\Pi(u_\lambda\delta_s)\rho_{s^*} \\
&=& \Pi(a_s\delta_e)\rho_s\lim_\lambda\Pi(u_\lambda\delta_s) \\
&=& \Pi(a_s\delta_e)\lim_\lambda\Pi(u_\lambda\delta_s)\\
&=&\lim_\lambda\Pi(a_su_\lambda\delta_s)=\Pi(a_s\delta_s).
\end{eqnarray*}
Thus the correspondence is bijective.
\end{proof}
\section{Connection Between Crossed Products}\label{five}
Throughout this section we will assume that $G$ is an inverse semigroup with unit element $e$.
\begin{lemma}\label{lem5.1}
Let ($\mathcal{A}, G, \alpha$) and ($\mathcal{A}, S_G, \beta$) be as in Proposition \ref{pro3.3}. Let ($\rho, z, \mathcal{K}$) be a covariant representation of ($\mathcal{A}, S_G, \beta$), and define a covariant representation ($\rho, \omega, \mathcal{K}$) of ($\mathcal{A}, G, \alpha$) by $\omega_g=z(\alpha_g, u_g)$ as in Proposition \ref{pro3.5}. Then $(\rho\times z)(\mathcal{A}\ltimes_\beta S)= (\rho\times\omega)(\mathcal{A}\ltimes_\alpha G)$.
\end{lemma} \begin{proof} For $g\in G$, let $s=(\alpha_g, u_g)\in S$, then $E_s=D_g$, so, $\rho(D_g)\omega_g=\rho(E_s)z_s$. Thus, \begin{equation}\label{eq}
\sum_{g\in G} \rho(D_g)\omega_g\subseteq\sum_{s\in S}\rho(E_s)z_s. \end{equation} On the other hand, if \begin{equation*}
s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})\;\text{and}\;a\in E_s=D_{g_1}D_{g_1g_2}...D_{g_1...g_n} \end{equation*} then by Corollary \ref{cor2.10} we have \begin{eqnarray}\label{eqn}
\nonumber \rho(a)z_s&=&\rho(a)z_{(\alpha_{g_1}, u_{g_1})...(\alpha_{g_n}, u_{g_n})} \\
\nonumber &=& \rho(a)z_{(\alpha_{g_1}, u_{g_1})}...z_{(\alpha_{g_n}, u_{g_n})}\\
&=& \rho(a)\omega_{g_1}...\omega_{g_n}. \end{eqnarray} Let $\Phi(\sum a_g\delta_g)\in\frac{L}{N}$. Then by \ref{eq} we have \begin{equation*}
\rho\times\omega(\Phi(\sum a_g\delta_g))=\sum\rho(a_g)\omega_g\subseteq (\rho\times z)(\mathcal{A}\ltimes_\beta S), \end{equation*}
so, $(\rho\times\omega)(\mathcal{A}\ltimes_\alpha G)\subseteq (\rho\times z)(\mathcal{A}\ltimes_\beta S)$. If $\Phi(\sum a_s\delta_s)\in \mathcal{A}\ltimes _\beta S$, then \begin{equation*}
\rho\times z(\Phi(\sum a_s\delta_s))=\sum \rho(a_s)z_s\in \rho\times\omega(\mathcal{A}\ltimes_\alpha G) \end{equation*} by \ref{eqn}. \end{proof} \begin{theorem}\label{theorem5.2} Let $\alpha$ be a partial action of a unital inverse semigroup $G$ on a $C^*$-algebra $\mathcal{A}$ such that the representation $\pi\times u$ of $\mathcal{A}\ltimes_\alpha G$ is faithful. Define an inverse semigroup $S_G$ by $S_G=\{(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})\;:\;g_1,...,g_n\in G\}$ and an action $\beta$ of $S_G$ by $\beta_s=\alpha_{g_1}...\alpha_{g_n}$ for $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})$, as in Proposition \ref{pro3.3}. Then the crossed products $\mathcal{A}\ltimes_\alpha G$ and $\mathcal{A}\ltimes_\beta S$ are isomorphic. \end{theorem} \begin{proof} Let $\nu_s=u_{g_1}...u_{g_n}$ for $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})$. We know from Proposition \ref{pro3.5} that $(\pi, \nu, \mathcal{H})$ is a covariant representation of $(\mathcal{A}, S, \beta)$. If we show that $\pi\times\nu$ is a faithful representation of $\mathcal{A}\ltimes_\beta S$, then $(\pi\times\nu)^{-1}\circ(\pi\times u)$ is an isomorphism. Consider the universal representation of $\mathcal{A}\ltimes_\beta S$, which by Proposition \ref{pro4.8} must be of the form $\rho\times z$ for some covariant representation $(\rho, z)$ of $(\mathcal{A}, S, \beta)$. By Proposition \ref{pro3.5} the definition $\omega_g= z_{(\alpha_g, u_g)}$ gives a covariant representation $(\rho, \omega, \mathcal{K})$ of $(\mathcal{A}, G, \alpha)$ and we have $(\rho\times\omega)(\mathcal{A}\ltimes_\alpha G)= (\rho\times z)(\mathcal{A}\ltimes_\beta S)$ by Lemma \ref{lem5.1}. Put $\Theta(x)=(\rho\times\omega)(\pi\times u)^{-1}(x)$; thus $\Theta\circ(\pi\times u)=\rho\times\omega$. We will show that $\Theta\circ(\pi\times\nu)=\rho\times z$. It suffices to check this on generators $a\delta_s$, where $s=(\alpha_{g_1}...\alpha_{g_n}, u_{g_1}...u_{g_n})$ and $a\in E_s=D_{g_1}...D_{g_1...g_n}$. \begin{eqnarray*}
\Theta((\pi\times\nu)(a\delta_s)) &=& \Theta(\pi(a)\nu_s) \\
&=& (\rho\times\omega)(\pi\times u)^{-1}(\pi(a)\nu_s)\\
&=& (\rho\times\omega)(\pi\times u)^{-1}(\pi(a)u_{g_1}...u_{g_n}) \\
&=& (\rho\times\omega)(\pi\times u)^{-1}(\pi(a)u_{g_1...g_n})\\
&=& (\rho\times\omega)(a\delta_{g_1...g_n})\\
&=& \rho(a)\omega_{g_1...g_n}\\
&=& \rho(a)\omega_{g_1}...\omega_{g_n}\\
&=& \rho(a)z_{(\alpha_{g_1},u_{g_1})}...z_{(\alpha_{g_n}, u_{g_n})}\\
&=&\rho(a)z_{(\alpha_{g_1}...\alpha_{g_n},u_{g_1}... u_{g_n})}\\
&=&\rho\times z (a\delta_s) \end{eqnarray*} where we have appealed to Corollary \ref{cor2.10} twice more. \end{proof}
\end{document}
Schröder number
In mathematics, the Schröder number $S_{n},$ also called a large Schröder number or big Schröder number, describes the number of lattice paths from the southwest corner $(0,0)$ of an $n\times n$ grid to the northeast corner $(n,n),$ using only single steps north, $(0,1);$ northeast, $(1,1);$ or east, $(1,0),$ that do not rise above the SW–NE diagonal.[1]
The first few Schröder numbers are
1, 2, 6, 22, 90, 394, 1806, 8558, ... (sequence A006318 in the OEIS).
where $S_{0}=1$ and $S_{1}=2.$ They were named after the German mathematician Ernst Schröder.
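As a quick sanity check, the defining path model can be turned into a short dynamic program. This is an illustrative sketch (the function name `schroeder` is ours), counting paths with north, northeast, and east steps that never rise above the diagonal:

```python
from functools import lru_cache

def schroeder(n):
    """Count lattice paths from (0, 0) to (n, n) using north (0,1),
    northeast (1,1), and east (1,0) steps that never rise above
    the SW-NE diagonal y = x."""
    @lru_cache(maxsize=None)
    def paths(x, y):
        if (x, y) == (n, n):
            return 1
        total = 0
        if x < n:
            total += paths(x + 1, y)        # east step keeps y <= x
        if y < x:
            total += paths(x, y + 1)        # north step only strictly below the diagonal
        if x < n and y < n:
            total += paths(x + 1, y + 1)    # northeast step stays on or below it
        return total
    return paths(0, 0)

print([schroeder(n) for n in range(7)])  # [1, 2, 6, 22, 90, 394, 1806]
```

The output reproduces the first terms listed above.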
Examples
The following figure shows the 6 such paths through a $2\times 2$ grid:
Related constructions
A Schröder path of length $n$ is a lattice path from $(0,0)$ to $(2n,0)$ with steps northeast, $(1,1);$ east, $(2,0);$ and southeast, $(1,-1),$ that do not go below the $x$-axis. The $n$th Schröder number is the number of Schröder paths of length $n$.[2] The following figure shows the 6 Schröder paths of length 2.
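The Schröder-path model is equally easy to check by brute force. The sketch below (function name ours) counts paths built from up, flat, and down steps that stay weakly above the x-axis:

```python
from functools import lru_cache

def schroeder_paths(n):
    """Count Schröder paths of length n: lattice paths from (0, 0)
    to (2n, 0) using steps (1,1), (2,0), and (1,-1) that never go
    below the x-axis."""
    @lru_cache(maxsize=None)
    def count(x, y):
        if x == 2 * n:
            return 1 if y == 0 else 0
        total = count(x + 1, y + 1)        # up step (1,1)
        if y > 0:
            total += count(x + 1, y - 1)   # down step (1,-1), not below the axis
        if x + 2 <= 2 * n:
            total += count(x + 2, y)       # flat step (2,0)
        return total
    return count(0, 0)
```

For n = 0, 1, 2 this returns 1, 2, and 6, matching the diagonal-path count above.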
Similarly, the Schröder numbers count the number of ways to divide a rectangle into $n+1$ smaller rectangles using $n$ cuts through $n$ points given inside the rectangle in general position, each cut intersecting one of the points and dividing only a single rectangle in two (i.e., the number of structurally-different guillotine partitions). This is similar to the process of triangulation, in which a shape is divided into nonoverlapping triangles instead of rectangles. The following figure shows the 6 such dissections of a rectangle into 3 rectangles using two cuts:
Pictured below are the 22 dissections of a rectangle into 4 rectangles using three cuts:
The Schröder number $S_{n}$ also counts the separable permutations of length $n-1.$
Related sequences
Schröder numbers are sometimes called large or big Schröder numbers because there is another Schröder sequence: the little Schröder numbers, also known as the Schröder-Hipparchus numbers or the super-Catalan numbers. The connections between these paths can be seen in a few ways:
• Consider the paths from $(0,0)$ to $(n,n)$ with steps $(1,1),$ $(2,0),$ and $(1,-1)$ that do not rise above the main diagonal. There are two types of paths: those that have movements along the main diagonal and those that do not. The (large) Schröder numbers count both types of paths, and the little Schröder numbers count only the paths that only touch the diagonal but have no movements along it.[3]
• In analogy with (large) Schröder paths, a little Schröder path is a Schröder path that has no horizontal steps on the $x$-axis.[4]
• If $S_{n}$ is the $n$th Schröder number and $s_{n}$ is the $n$th little Schröder number, then $S_{n}=2s_{n}$ for $n>0$ $(S_{0}=s_{0}=1).$[4]
Schröder paths are similar to Dyck paths but allow the horizontal step instead of just diagonal steps. Another similar path is the type of path that the Motzkin numbers count; the Motzkin paths allow the same diagonal steps but only a unit horizontal step, (1,0), and count such paths from $(0,0)$ to $(n,0)$.[5]
There is also a triangular array associated with the Schröder numbers that provides a recurrence relation[6] (though it involves more than just the Schröder numbers). The first few terms are
1, 1, 2, 1, 4, 6, 1, 6, 16, 22, ... (sequence A033877 in the OEIS).
It is easier to see the connection with the Schröder numbers when the sequence is in its triangular form:
n \ k |  0    1    2    3    4     5     6
------+-----------------------------------
  0   |  1
  1   |  1    2
  2   |  1    4    6
  3   |  1    6   16   22
  4   |  1    8   30   68   90
  5   |  1   10   48  146  304   394
  6   |  1   12   70  264  714  1412  1806
Then the Schröder numbers are the diagonal entries, i.e. $S_{n}=T(n,n)$ where $T(n,k)$ is the entry in row $n$ and column $k$. The recurrence relation given by this arrangement is
$T(n,k)=T(n,k-1)+T(n-1,k-1)+T(n-1,k)$
with $T(n,0)=1$ and $T(n,k)=0$ for $k>n$.[6] Another interesting observation is that the sum of the $n$th row is the $(n+1)$st little Schröder number; that is,
$\sum _{k=0}^{n}T(n,k)=s_{n+1}$.
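The triangle and both identities can be reproduced in a few lines. This is an illustrative sketch (names ours), using the base case $T(n,0)=1$:

```python
def schroeder_triangle(rows):
    """Rows of T(n, k): T(n, 0) = 1 and, for 0 < k <= n,
    T(n, k) = T(n, k-1) + T(n-1, k-1) + T(n-1, k),
    where T(n-1, k) is treated as 0 when k > n - 1."""
    T = [[1]]
    for n in range(1, rows):
        row = [1]
        for k in range(1, n + 1):
            above = T[n - 1][k] if k <= n - 1 else 0
            row.append(row[k - 1] + T[n - 1][k - 1] + above)
        T.append(row)
    return T

T = schroeder_triangle(7)
large = [row[-1] for row in T]       # diagonal entries: S_n
row_sums = [sum(row) for row in T]   # nth row sums to s_{n+1}
print(large)     # [1, 2, 6, 22, 90, 394, 1806]
print(row_sums)  # [1, 3, 11, 45, 197, 903, 4279]
```

Note that 2 × 903 = 1806, illustrating the relation $S_n = 2s_n$ for $n > 0$ as well.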
Recurrence relations
With $S_{0}=1$, $S_{1}=2$, [7]
$S_{n}=3S_{n-1}+\sum _{k=1}^{n-2}S_{k}S_{n-k-1}$ for $n\geq 2$
and also
$S_{n}={\frac {6n-3}{n+1}}S_{n-1}-{\frac {n-2}{n+1}}S_{n-2}$ for $n\geq 2$
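Both recurrences are easy to implement and cross-check against each other; a sketch (function names ours):

```python
def schroeder_conv(N):
    """S_0 .. S_N via the quadratic (convolution) recurrence."""
    S = [1, 2]
    for n in range(2, N + 1):
        S.append(3 * S[n - 1] + sum(S[k] * S[n - k - 1] for k in range(1, n - 1)))
    return S[:N + 1]

def schroeder_linear(N):
    """S_0 .. S_N via the three-term linear recurrence.
    The division by n + 1 is exact, so integer division is safe."""
    S = [1, 2]
    for n in range(2, N + 1):
        S.append(((6 * n - 3) * S[n - 1] - (n - 2) * S[n - 2]) // (n + 1))
    return S[:N + 1]

assert schroeder_conv(10) == schroeder_linear(10)
print(schroeder_conv(7))  # [1, 2, 6, 22, 90, 394, 1806, 8558]
```

The linear recurrence is much cheaper for large $n$, since it avoids the $O(n)$ convolution at each step.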
Generating function
The generating function $G(x)$ of $(S_{n})$ is
$G(x)={\frac {1-x-{\sqrt {x^{2}-6x+1}}}{2x}}=\sum _{n=0}^{\infty }S_{n}x^{n}$.[7]
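One can spot-check the closed form numerically against a truncated series; a small sketch (the truncation error at x = 0.01 is far below the tolerance used):

```python
import math

def G(x):
    """Closed form of the generating function, valid for small |x| > 0."""
    return (1 - x - math.sqrt(x * x - 6 * x + 1)) / (2 * x)

S = [1, 2, 6, 22, 90, 394, 1806, 8558]  # S_0 .. S_7
x = 0.01
partial_sum = sum(S[n] * x ** n for n in range(len(S)))
# the discarded tail is of order S_8 * x**8, roughly 4e-12
assert abs(G(x) - partial_sum) < 1e-10
```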
Uses
One topic of combinatorics is tiling shapes, and one particular instance of this is domino tilings; the question in this instance is, "In how many ways can we arrange dominoes (that is, $1\times 2$ or $2\times 1$ rectangles) on some shape such that none of the dominoes overlap, the entire shape is covered, and none of the dominoes stick out of the shape?" The shape that the Schröder numbers have a connection with is the Aztec diamond. Shown below for reference is an Aztec diamond of order 4 with a possible domino tiling.
It turns out that the determinant of the $n\times n$ Hankel matrix of the Schröder numbers, that is, the square matrix whose $(i,j)$th entry is $S_{i+j-1},$ is the number of domino tilings of the order $n$ Aztec diamond, which is $2^{n(n+1)/2}.$[8] That is,
${\begin{vmatrix}S_{1}&S_{2}&\cdots &S_{n}\\S_{2}&S_{3}&\cdots &S_{n+1}\\\vdots &\vdots &\ddots &\vdots \\S_{n}&S_{n+1}&\cdots &S_{2n-1}\end{vmatrix}}=2^{n(n+1)/2}.$
For example:
• ${\begin{vmatrix}2\end{vmatrix}}=2=2^{1(2)/2}$
• ${\begin{vmatrix}2&6\\6&22\end{vmatrix}}=8=2^{2(3)/2}$
• ${\begin{vmatrix}2&6&22\\6&22&90\\22&90&394\end{vmatrix}}=64=2^{3(4)/2}$
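The Hankel determinant identity can be verified for small $n$ with exact integer arithmetic. A sketch (cofactor expansion is fine at this size; function names ours):

```python
S = [1, 2, 6, 22, 90, 394, 1806, 8558]  # S_0 .. S_7

def det(M):
    """Determinant by cofactor expansion along the first row
    (exponential time, but exact and fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def hankel(n):
    """n x n matrix whose (i, j) entry is S_{i+j-1}, with 1-based i, j."""
    return [[S[i + j - 1] for j in range(1, n + 1)] for i in range(1, n + 1)]

for n in range(1, 5):
    assert det(hankel(n)) == 2 ** (n * (n + 1) // 2)
```

The three displayed examples correspond to `hankel(1)`, `hankel(2)`, and `hankel(3)`.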
See also
• Delannoy number
• Motzkin number
• Narayana number
• Schröder–Hipparchus number
• Catalan number
References
1. Sloane, N. J. A. (ed.). "Sequence A006318 (Large Schröder numbers (or large Schroeder numbers, or big Schroeder numbers).)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 5 March 2018.
2. Ardila, Federico (2015). "Algebraic and geometric methods in enumerative combinatorics". Handbook of enumerative combinatorics. Boca Raton, FL: CRC Press. pp. 3–172.
3. Sloane, N. J. A. (ed.). "Sequence A001003 (Schroeder's second problem (generalized parentheses); also called super-Catalan numbers or little Schroeder numbers)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation. Retrieved 5 March 2018.
4. Drake, Dan (2010). "Bijections from weighted Dyck paths to Schröder paths". arXiv:1006.1959 [math.CO].
5. Deng, Eva Y. P.; Yan, Wei-Jun (2008). "Some identities on the Catalan, Motzkin, and Schröder numbers". Discrete Applied Mathematics. 156: 2781–2789. doi:10.1016/j.dam.2007.11.014.
6. Sloane, N. J. A. "Triangular array associated with Schroeder numbers". The On-Line Encyclopedia of Integer Sequences. Retrieved 5 March 2018.
7. Qi, Feng; Guo, Bai-Ni (2017). "Some explicit and recursive formulas of the large and little Schröder numbers". Arab Journal of Mathematical Sciences. 23: 141–147. doi:10.1016/j.ajmsc.2016.06.002.
8. Eu, Sen-Peng; Fu, Tung-Shan (2005). "A simple proof of the Aztec diamond theorem". Electronic Journal of Combinatorics. 12: Research Paper 18, 8 pp. doi:10.37236/1915. S2CID 5978643.
Further reading
• Weisstein, Eric W. "Schröder Number". MathWorld.
• Stanley, Richard P.: Catalan addendum to Enumerative Combinatorics, Volume 2
\begin{document}
\begin{abstract} In this article, we define the notion of a filtration and then give the basic theorems on initial and progressive enlargements of filtrations. \end{abstract}
\title{Filtrations} \section*{Definitions} Filtrations were introduced by Doob and have been a fundamental feature of the theory of stochastic processes. Most basic objects, such as martingales, semimartingales, stopping times or Markov processes, involve the notion of filtration. \begin{defn} Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space. A \textit{filtration} on $(\Omega,\mathcal{F},\mathbb{P})$ is an increasing family $(\mathcal{F}_t)_{t\geq0}$ of sub-$\sigma$-algebras of $\mathcal{F}$. In other words, for each $t$, $\mathcal{F}_t$ is a $\sigma$-algebra included in $\mathcal{F}$ and if $s\leq t$, $\mathcal{F}_s\subset\mathcal{F}_t$. A probability space $(\Omega,\mathcal{F},\mathbb{P})$ endowed with a filtration $(\mathcal{F}_t)_{t\geq0}$ is called a filtered probability space. \end{defn} We now give a closely related definition: \begin{defn} A stochastic process $(X_t)_{t\geq0}$ on $(\Omega,\mathcal{F},\mathbb{P})$ is adapted to the filtration $(\mathcal{F}_t)$ if, for each $t\geq0$, $X_t$ is $\mathcal{F}_t$-measurable. \end{defn} A stochastic process $X$ is always adapted to its \textit{natural filtration} $\mathcal{F}_t^X=\sigma(X_s,\;s\leq t)$ (the last notation meaning that $\mathcal{F}_t^X$ is the smallest $\sigma$-algebra with respect to which all the variables $(X_s,\;s\leq t)$ are measurable). $(\mathcal{F}_t^X)$ is hence the smallest filtration to which $X$ is adapted.
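To make these definitions concrete, here is a small worked example in discrete time (an added illustration, not part of the original article):

```latex
% Added illustration (discrete time): the natural filtration of two coin tosses.
Let $\Omega=\{HH,HT,TH,TT\}$, $\mathcal{F}=2^{\Omega}$, and let $X_n(\omega)$
be the outcome of the $n$-th toss, $n=1,2$. Then
$$\mathcal{F}_1^X=\sigma(X_1)=\left\{\emptyset,\{HH,HT\},\{TH,TT\},\Omega\right\},
\qquad \mathcal{F}_2^X=\sigma(X_1,X_2)=2^{\Omega},$$
so $\mathcal{F}_1^X\subset\mathcal{F}_2^X$: after one toss only events depending
on the first coordinate are decidable, while after two tosses every event is.
A process adapted to $(\mathcal{F}_n^X)$ may depend on the outcomes observed so
far, but not on future ones.
```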
The parameter $t$ is often thought of as time, and the $\sigma$-algebra $\mathcal{F}_t$ represents the set of information available at time $t$, that is events that have occurred up to time $t$. The filtration $(\mathcal{F}_t)_{t\geq0}$ thus represents the evolution of the information or knowledge of the world with time. If $X$ is an adapted process, then $X_t$, its value at time $t$, only depends on the evolution of the universe prior to $t$.
\begin{defn} Let $\left( \Omega ,\mathcal{F},\left( \mathcal{F}_{t}\right) _{t\geq 0}, \mathbb{P}\right) $ be a filtered probability space. \begin{enumerate}[(i)] \item The filtration $\left( \mathcal{F}_{t}\right) _{t\geq 0}$ is said to be\textit{ complete} if $\left( \Omega ,\mathcal{F}, \mathbb{P}\right) $ is complete and if $\mathcal{F}_0$ contains all the $\mathbb{P}$-null sets. \item The filtration $\left( \mathcal{F}_{t}\right) _{t\geq 0}$ is said to satisfy the \textit{usual hypotheses} if it is complete and right continuous, that is $\mathcal{F}_t=\mathcal{F}_{t+}$, where $$\mathcal{F}_{t+}=\bigcap_{u>t}\mathcal{F}_u.$$ \end{enumerate} Some fundamental theorems, such as the Debut theorem, require the usual hypotheses. Hence naturally, very often in the literature on the theory of stochastic processes and mathematical finance, the underlying filtered probability spaces are assumed to satisfy the usual hypotheses. This assumption is not very restrictive for the following reasons:
\begin{enumerate}[(a)] \item Any filtration can easily be made complete and right continuous; indeed, given a filtered probability space $\left( \Omega ,\mathcal{F},\left( \mathcal{F}_{t}\right) _{t\geq 0}, \mathbb{P}\right) $, we first complete the probability space $\left( \Omega ,\mathcal{F},\mathbb{P}\right) $, and then we add all the $\mathbb{P}$-null sets to every $\mathcal{F}_{t+}$, $t\geq0$. The new filtration thus obtained satisfies the usual hypotheses and is called the usual augmentation of $\left( \mathcal{F}_{t}\right) _{t\geq 0}$; \item Moreover, in most classical and encountered cases, the filtration $\left( \mathcal{F}_{t}\right) _{t\geq 0}$ is right continuous. Indeed, this is the case when for instance $\left( \mathcal{F}_{t}\right) _{t\geq 0}$ is the natural filtration of a Brownian Motion, a L\'evy process, a Feller process or a Hunt process (see \cite{protter,revuzyor}). \end{enumerate}
\end{defn} \section*{Enlargements of filtrations} For more precise and detailed references, the reader can consult the books \cite{jeulin,jeulinyor,columbia,protter} or the survey article \cite{ashkanessay}. \subsection*{Generalities} Let $\left( \Omega ,\mathcal{F},\left( \mathcal{F}_{t}\right) _{t\geq 0}, \mathbb{P}\right) $ be a filtered probability space satisfying the usual hypotheses. Let $\left( \mathcal{G}_{t}\right) _{t\geq 0}$ be another filtration satisfying the usual hypotheses and such that $\mathcal{F}_{t}\subset\mathcal{G}_{t}$ for every $t\geq0$. One natural question is: how are the $\left( \mathcal{F}_{t}\right)$-semimartingales modified when considered as stochastic processes in the larger filtration $\left( \mathcal{G}_{t}\right)$? Given the importance of semimartingales and martingales (in particular in mathematical finance where they are used to model prices), it seems natural to characterize situations where the semimartingale or martingale properties are preserved: \begin{defn} We shall say that the pair of filtrations $\left(\mathcal{F}_{t}, \mathcal{G}_{t}\right)$ satisfies the $\left(H'\right)$ hypothesis if every $\left(\mathcal{F}_{t}\right)$-semimartingale is a $\left(\mathcal{G}_{t}\right)$-semimartingale. \end{defn} \begin{rem} In fact, using a classical decomposition of semimartingales due to Jacod and M\'emin, it is enough to check that every $\left(\mathcal{F}_{t}\right)$-bounded martingale is a $\left(\mathcal{G}_{t}\right)$-semimartingale. \end{rem} \begin{defn} We shall say that the pair of filtrations $\left(\mathcal{F}_{t}, \mathcal{G}_{t}\right)$ satisfies the $\left(H\right)$ hypothesis if every $\left(\mathcal{F}_{t}\right)$-local martingale is a $\left(\mathcal{G}_{t}\right)$-local martingale. \end{defn}
The techniques to answer such questions were developed in the late 1970s under the name of the theory of enlargements of filtrations. This theory has recently been widely used in mathematical finance, especially in insider trading models and, even more spectacularly, in models of default risk. Insider trading models are usually based on so-called \textit{initial enlargements of filtrations}, whereas models of default risk fit naturally into the framework of \textit{progressive enlargements of filtrations}. More precisely, given a filtered probability space $\left(\Omega,\mathcal{F},\left(\mathcal{F}_{t}\right),\mathbb{P}\right)$, there are essentially two ways of enlarging filtrations: \begin{itemize} \item \textit{initial enlargements}, for which $\mathcal{G}_{t}=\mathcal{F}_{t}\bigvee\mathcal{H}$, i.e. the new information $\mathcal{H}$ is brought in at the origin of time; and \item \textit{progressive enlargements}, for which $\mathcal{G}_{t}=\mathcal{F}_{t}\bigvee\mathcal{H}_{t}$, i.e. the new information is brought in progressively as the time $t$ increases. \end{itemize} Before presenting the basic theorems on enlargements of filtrations, we state a useful theorem due to Stricker:
\begin{thm}[Stricker \cite{stricker}] Let $\left(\mathcal{F}_{t}\right)$ and $\left(\mathcal{G}_{t}\right)$ be two filtrations as above, such that for all $t\geq0$, $\mathcal{F}_{t}\subset\mathcal{G}_{t}$. If $\left(X_{t}\right)$ is a $\left(\mathcal{G}_{t}\right)$ semimartingale which is $\left(\mathcal{F}_{t}\right)$ adapted, then it is also an $\left(\mathcal{F}_{t}\right)$ semimartingale. \end{thm} \subsection*{Initial enlargements of filtrations} The most important theorem on initial enlargements of filtrations is due to Jacod and deals with the special case where the initial information brought in at the origin of time consists of the $\sigma$-algebra generated by a random variable. More precisely let $\left(\Omega,\mathcal{F},\left(\mathcal{F}_{t}\right),\mathbb{P}\right)$ be a filtered probability space satisfying the usual assumptions. Let $Z$ be an $\mathcal{F}$ measurable random variable. Define $$\mathcal{G}_{t}=\bigcap_{\varepsilon>0}\left(\mathcal{F}_{t+\varepsilon}\bigvee\sigma\left\{Z\right\}\right).$$In financial models, the filtration $\left(\mathcal{F}_{t}\right)$ represents the public information in a financial market and the random variable $Z$ stands for the additional (anticipating) information of an insider.
The conditional laws of $Z$ given $\mathcal{F}_{t}$, for $t\geq0$ play a crucial role in initial enlargements. \begin{thm}[Jacod's criterion]\label{thmdejacod} Let $Z$ be an $\mathcal{F}$ measurable random variable and let $Q_{t}\left(\omega,dx\right)$ denote the regular conditional distribution of $Z$ given $\mathcal{F}_{t},\;t\geq0$. Suppose that for each $t\geq0$, there exists a positive $\sigma$-finite measure $\eta_{t}\left(dx\right)$ (on $\left(\mathbb{R},\mathcal{B}\left(\mathbb{R}\right)\right)$) such that $$Q_{t}\left(\omega,dx\right)\ll\eta_{t}\left(dx\right)\;\mathrm{a.s.}$$Then every $\left(\mathcal{F}_{t}\right)$-semimartingale is a $\left(\mathcal{G}_{t}\right)$-semimartingale. \end{thm} \begin{rem} In fact this theorem still holds for random variables with values in a standard Borel space. Moreover, the existence of the $\sigma$-finite measure $\eta_{t}\left(dx\right)$ is equivalent to the existence of one positive $\sigma$-finite measure $\eta\left(dx\right)$ such that $Q_{t}\left(\omega,dx\right)\ll\eta\left(dx\right)$ and in this case $\eta$ can be taken to be the distribution of $Z$. \end{rem} Now we give classical corollaries of Jacod's theorem. \begin{cor} Let $Z$ be independent of $\mathcal{F}_{\infty}$. Then every $\left(\mathcal{F}_{t}\right)$-semimartingale is a $\left(\mathcal{G}_{t}\right)$-semimartingale. \end{cor} \begin{cor} Let $Z$ be a random variable taking on only a countable number of values. Then every $\left(\mathcal{F}_{t}\right)$-semimartingale is a $\left(\mathcal{G}_{t}\right)$-semimartingale. \end{cor} It is possible to obtain in some cases an explicit decomposition of an $\left(\mathcal{F}_{t}\right)$-local martingale as a $\left(\mathcal{G}_{t}\right)$-semimartingale (see \cite{jeulin, jeulinyor, columbia, ashkanessay, protter}). 
For example, if $Z=B_{t_0}$, for some fixed time $t_0>0$ and a Brownian Motion $B$, it can be shown that Jacod's criterion holds for $t<t_0$ and that every $\left(\mathcal{F}_{t}\right)$-local martingale is a semimartingale for $0\leq t<t_0$, but not necessarily including $t_0$. There are indeed in this case $\left(\mathcal{F}_{t}\right)$-local martingales which are not $\left(\mathcal{G}_{t}\right)$-semimartingales. Moreover, $B$ is a $\left(\mathcal{G}_{t}\right)$-semimartingale which decomposes as: $$B_{t}=B_{0}+\widetilde{B}_{t}+\int_{0}^{t\wedge t_{0}}ds\dfrac{B_{t_{0}}-B_{s}}{t_{0}-s},$$where $\left(\widetilde{B}_{t}\right)$ is a $\left( \mathcal{G}_{t}\right)$ Brownian Motion. \begin{rem} There are important cases where Jacod's criterion does not hold but where other methods apply (\cite{jeulin, columbia, ashkanessay}). \end{rem} \subsection*{Progressive enlargements of filtrations} Let $\left( \Omega ,\mathcal{F},\left( \mathcal{F}_{t}\right) _{t\geq 0}, \mathbb{P}\right) $ be a filtered probability space satisfying the usual hypotheses, and $\rho :$ $\left( \Omega ,\mathcal{F}\right) \rightarrow \left( \mathbb{R}_{+},\mathcal{B} \left( \mathbb{R}_{+}\right) \right) $ be a random time. We enlarge the initial filtration $\left( \mathcal{F}_{t}\right) $\ with the process $\left( \rho \wedge t\right) _{t\geq 0}$, so that the new enlarged filtration $\left( \mathcal{F}_{t}^{\rho }\right) _{t\geq 0}$\ is the smallest filtration (satisfying the usual assumptions) containing $\left( \mathcal{F}_{t}\right) $\ and making $ \rho $\ a stopping time (i.e. $\mathcal{F}_{t}^{\rho }=\mathcal{K}_{t+}^{o}$, where $\mathcal{K}_{t}^{o}=\mathcal{F}_{t}\bigvee\sigma\left(\rho\wedge t\right)$). One may interpret $\rho$ as the instant of default of an issuer; the given filtration $(\mathcal{F}_t)$ can be thought of as the filtration of default-free prices, for which $\rho$ is not a stopping time. 
Then, the filtration $(\mathcal{F}_t^\rho)$ is the defaultable market filtration used for the pricing of defaultable assets.
A few processes will play a crucial role in our discussion:
\begin{itemize} \item the $\left( \mathcal{F}_{t}\right) $-supermartingale \begin{equation} Z_{t}^{\rho }=\mathbb{P}\left[ \rho >t\mid \mathcal{F}_{t}\right] \label{surmart} \end{equation} chosen to be c\`{a}dl\`{a}g, associated to $\rho $\ by Az\'{e}ma;
\item the $\left( \mathcal{F}_{t}\right) $-dual optional projection of the process $1_{\left\{ \rho \leq t\right\} }$, denoted by $A_{t}^{\rho }$;
\item the c\`{a}dl\`{a}g martingale \begin{equation*} \mu _{t}^{\rho }=\mathbb{E}\left[ A_{\infty }^{\rho }\mid \mathcal{F}_{t} \right] =A_{t}^{\rho }+Z_{t}^{\rho }. \end{equation*} \end{itemize} \begin{thm} Every $\left( \mathcal{F}_{t}\right) $-local martingale $\left( M_{t}\right) $, stopped at $\rho $, is a $\left( \mathcal{F}_{t}^{\rho }\right) $-semimartingale, with canonical decomposition: \begin{equation} M_{t\wedge \rho }=\widetilde{M}_{t}+\int_{0}^{t\wedge \rho }\dfrac{d\langle M,\mu ^{\rho }\rangle_{s}}{Z_{s-}^{\rho }} \label{decocanonique} \end{equation} where $\left( \widetilde{M}_{t}\right) $\ is an $\left( \mathcal{F}_{t}^{\rho }\right) $-local martingale. \end{thm} The most interesting case in the theory of progressive enlargements of filtrations is when $\rho$ is an honest time, or equivalently the end of an $\left( \mathcal{F}_{t}\right) $\ optional set $\Gamma $, i.e. \begin{equation*} \rho=\sup \left\{ t:\left( t,\omega \right) \in \Gamma \right\}. \end{equation*} Indeed, in this case, the pair of filtrations $(\mathcal{F}_t,\mathcal{F}_t^\rho)$ satisfies the $(H')$ hypothesis: every $\left( \mathcal{F}_{t}\right) $-local martingale $\left( M_{t}\right) $ is an $\left( \mathcal{F}_{t}^{\rho }\right) $-semimartingale, with canonical decomposition: \begin{equation*} M_{t}=\widetilde{M}_{t}+\int_{0}^{t\wedge \rho }\frac{d\langle M,\mu ^{\rho}\rangle_{s}}{Z_{s-}^{\rho}}-\int_{\rho}^{t}\dfrac{d\langle M,\mu ^{\rho }\rangle_{s}}{1-Z_{s-}^{\rho}}. \end{equation*} The next decomposition formulae are widely used for pricing in default models: \begin{prop} \begin{enumerate}[(i)] \item Let $\xi\in L^{1}$. Then a c\`{a}dl\`{a}g version of the martingale
$\xi_{t}=\mathbb{E}\left[\xi|\mathcal{F}_{t}^{\rho}\right]$ is given by:
$$\xi_{t}=\dfrac{1}{Z_{t}^{\rho}}\mathbf{1}_{t<\rho}\mathbb{E}\left[\xi\mathbf{1}_{t<\rho}|\mathcal{F}_{t}\right]+\xi\mathbf{1}_{t\geq\rho}.$$
\item Let $\xi\in L^{1}$ and let $\rho$ be an honest time. Then a c\`{a}dl\`{a}g version of the martingale $\xi_{t}=\mathbb{E}\left[\xi|\mathcal{F}_{t}^{\rho}\right]$ is given by:
$$\xi_{t}=\dfrac{1}{Z_{t}^{\rho}}\mathbb{E}\left[\xi\mathbf{1}_{t<\rho}|\mathcal{F}_{t}\right]\mathbf{1}_{t<\rho}+\dfrac{1}{1-Z_{t}^{\rho}}\mathbb{E}\left[\xi\mathbf{1}_{t\geq \rho}|\mathcal{F}_{t}\right]\mathbf{1}_{t\geq \rho}.$$ \end{enumerate} \end{prop} \subsection*{The $(H)$ hypothesis}
The $(H)$ hypothesis is sometimes presented as a no-arbitrage condition in default models. Let $\left(\Omega,\mathcal{F},\mathbb{P}\right)$ be a probability space satisfying the usual assumptions. Let $\left(\mathcal{F}_{t}\right)$ and $\left(\mathcal{G}_{t}\right)$ be two sub-filtrations of $\mathcal{F}$, with $$\mathcal{F}_{t}\subset\mathcal{G}_{t}.$$Br\'emaud and Yor \cite{bremaudyor} have proven the following characterization of the $(H)$ hypothesis: \begin{thm} The following are equivalent: \begin{enumerate} \item Every $\left(\mathcal{F}_{t}\right)$ martingale is a $\left(\mathcal{G}_{t}\right)$ martingale; \item For all $t\geq0$, the sigma fields $\mathcal{G}_{t}$ and $\mathcal{F}_{\infty}$ are independent conditionally on $\mathcal{F}_{t}$. \end{enumerate} \end{thm} \begin{rem} We shall also say that $\left(\mathcal{F}_{t}\right)$ is \textit{immersed} in $\left(\mathcal{G}_{t}\right)$. \end{rem} In the framework of the progressive enlargement of some filtration $\left(\mathcal{F}_{t}\right)$ with a random time $\rho $, the $(H)$ hypothesis is equivalent to each of the following conditions:
\begin{enumerate}[(i)] \item $\forall t$, the $\sigma $-algebras $\mathcal{F}_{\infty }$\ and $ \mathcal{F}_{t}^{\rho }$\ are conditionally independent given $\mathcal{F} _{t}$.
\item For all bounded $\mathcal{F}_{\infty }$ measurable random variables $ \mathbf{F}$\ and all bounded $\mathcal{F}_{t}^{\rho }$ measurable random variables $\mathbf{G}_{t}$, we have \begin{equation*} \mathbb{E}\left[ \mathbf{FG}_{t}\mid \mathcal{F}_{t}\right] =\mathbb{E}\left[ \mathbf{F}\mid \mathcal{F}_{t}\right] \mathbb{E}\left[ \mathbf{G}_{t}\mid \mathcal{F}_{t}\right] . \end{equation*}
\item For all bounded $\mathcal{F}_{t}^{\rho }$ measurable random variables $ \mathbf{G}_{t}$: \begin{equation*} \mathbb{E}\left[ \mathbf{G}_{t}\mid \mathcal{F}_{\infty }\right] =\mathbb{E} \left[ \mathbf{G}_{t}\mid \mathcal{F}_{t}\right] . \end{equation*}
\item For all bounded $\mathcal{F}_{\infty }$ measurable random variables $ \mathbf{F}$, \begin{equation*} \mathbb{E}\left[ \mathbf{F}\mid \mathcal{F}_{t}^{\rho }\right] =\mathbb{E} \left[ \mathbf{F}\mid \mathcal{F}_{t}\right] . \end{equation*}
\item For all $s\leq t$, \begin{equation*} \mathbb{P}\left[ \rho \leq s\mid \mathcal{F}_{t}\right] =\mathbb{P}\left[ \rho \leq s\mid \mathcal{F}_{\infty }\right] . \end{equation*} \end{enumerate} Now, a natural question, especially in view of applications to financial mathematics, is: how is the $(H)$ hypothesis affected when we make an equivalent change of probability measure? \begin{prop} Let $\mathbb{Q}$ be a probability measure which is equivalent to $\mathbb{P}$ (on $\mathcal{F}$). Then every $\left(\mathcal{F}_{\bullet},\mathbb{Q}\right)$-semimartingale is a $\left(\mathcal{G}_{\bullet},\mathbb{Q}\right)$-semimartingale. \end{prop} Now, define:
$$\dfrac{d\mathbb{Q}}{d\mathbb{P}}\Big|_{\mathcal{F}_{t}}=R_{t};\quad\dfrac{d\mathbb{Q}}{d\mathbb{P}}\Big|_{\mathcal{G}_{t}}=R'_{t}.$$ If $Y=\dfrac{d\mathbb{Q}}{d\mathbb{P}}$, then the hypothesis $(H)$ holds under $\mathbb{Q}$ if and only if:
$$\forall X\geq0,\;X\in\mathcal{F}_{\infty},\quad \dfrac{\mathbb{E}_{\mathbf{P}}\left[XY|\mathcal{G}_{t}\right]}{R'_{t}}=\dfrac{\mathbb{E}_{\mathbf{P}}\left[XY|\mathcal{F}_{t}\right]}{R_{t}}.$$ In particular, when $\dfrac{d\mathbb{Q}}{d\mathbb{P}}$ is $\mathcal{F}_{\infty}$ measurable, $R_{t}=R'_{t}$ and the hypothesis $(H)$ holds under $\mathbb{Q}$.
Now let us give a decomposition formula: \begin{thm} If $\left(X_{t}\right)$ is a $\left(\mathcal{F}_{\bullet},\mathbb{Q}\right)$-local martingale, then the stochastic process: $$I_{X}\left(t\right)=X_{t}+\int_{0}^{t}\dfrac{R'_{s-}}{R'_{s}}\left(\dfrac{1}{R_{s-}}d[X,R]_{s}-\dfrac{1}{R'_{s-}}d[X,R']_{s}\right)$$is a $\left(\mathcal{G}_{\bullet},\mathbb{Q}\right)$-local martingale. \end{thm}
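As a concrete illustration of when the $(H)$ hypothesis holds, here is a classical example (added for illustration, not part of the original article); it is the degenerate case of the Cox construction used in intensity-based default models:

```latex
% Added example (classical): an independent default time is immersed.
Suppose $\rho$ is exponentially distributed with parameter $\lambda>0$ and
independent of $\mathcal{F}_{\infty}$. Then
$$Z_{t}^{\rho}=\mathbb{P}\left[\rho>t\mid\mathcal{F}_{t}\right]
=\mathbb{P}\left[\rho>t\right]=e^{-\lambda t},$$
and for every bounded $\mathbf{F}\in\mathcal{F}_{\infty}$ and every bounded
$\mathbf{G}_{t}=g(\rho\wedge t)H_{t}$ with $H_{t}\in\mathcal{F}_{t}$,
$$\mathbb{E}\left[\mathbf{F}\mathbf{G}_{t}\mid\mathcal{F}_{t}\right]
=H_{t}\,\mathbb{E}\left[\mathbf{F}\mid\mathcal{F}_{t}\right]
\mathbb{E}\left[g(\rho\wedge t)\right]
=\mathbb{E}\left[\mathbf{F}\mid\mathcal{F}_{t}\right]
\mathbb{E}\left[\mathbf{G}_{t}\mid\mathcal{F}_{t}\right],$$
using the independence of $\rho$ from $\mathcal{F}_{\infty}$. By a monotone
class argument, such product-form $\mathbf{G}_{t}$ suffice, so the pair
$(\mathcal{F}_{t},\mathcal{F}_{t}^{\rho})$ satisfies the $(H)$ hypothesis.
```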
\end{document}
\begin{document}
\title{Robin's inequality for $20$-free integers}
\begin{abstract}
\noindent
In 1984, Robin showed that the Riemann Hypothesis for $\zeta$ is equivalent to demonstrating $\sigma(n) < e^\gamma n \log \log n$ for all $n > 5040$.
Robin's inequality has since been proven for various infinite families of power-free integers: $5$-free integers, $7$-free integers, and $11$-free integers. We extend these results to cover $20$-free integers. \end{abstract}
In 1984, Robin gave an equivalent statement of the Riemann Hypothesis for $\zeta$ involving the divisors of integers. \begin{theorem}[Robin \cite{RobinIneq}]
The Riemann Hypothesis is true if and only if for all $n > \num{5040}$,
\begin{align} \label{RI}
\sigma(n) < e^\gamma n \log \log n, \tag{RI}
\end{align}
where $\sigma(n)$ is the sum of divisors function and $\gamma$ is the Euler--Mascheroni constant. \end{theorem} \noindent Since then, \eqref{RI} has become known as Robin's inequality. There are twenty-six known counterexamples to \eqref{RI}, of which $\num{5040}$ is the largest \cite{RobinThm}.
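As a quick numerical sanity check, one can verify that $n=\num{5040}$ violates \eqref{RI} while the inequality holds on a modest range above it. The script below is an added illustration (naive divisor sums, range truncated for speed), not part of the paper:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant


def sigma(n):
    """Sum of divisors of n, by enumerating divisor pairs up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total


def robin_rhs(n):
    """Right-hand side of Robin's inequality: e^gamma * n * log log n."""
    return math.exp(EULER_GAMMA) * n * math.log(math.log(n))


# 5040 is the largest known counterexample: sigma(5040) exceeds the bound.
assert sigma(5040) == 19344
assert sigma(5040) > robin_rhs(5040)

# Robin's inequality holds for every n in (5040, 20000].
assert all(sigma(n) < robin_rhs(n) for n in range(5041, 20001))
```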
Robin's inequality has been proven for various infinite families of integers, in particular the $t$-free integers. Recall that $n$ is called \emph{$t$-free} if $n$ is not divisible by the $t$th power of any prime number, and \emph{$t$-full} otherwise. In 2007, Choie, Lichiardopol, Moree, and Sol\'e \cite{5-free} showed that \eqref{RI} holds for all $5$-free integers greater than $\num{5040}$. Then, in 2012, Planat and Sol\'e \cite{7-free} improved this result to \eqref{RI} for $7$-free integers greater than $\num{5040}$, which was followed by Broughan and Trudgian \cite{11-free} with \eqref{RI} for $11$-free integers greater than $\num{5040}$ in 2015. By updating Broughan and Trudgian's work, we prove our main theorem.
\begin{theorem}\label{analytics}
Robin's inequality holds for $20$-free integers greater than $\num{5040}$. \end{theorem} \noindent Since there are no $20$-full integers less than $\num{5041}$, we may give a cleaner statement for Robin's theorem. \begin{cor}
The Riemann Hypothesis is true if and only if \eqref{RI} holds for all $20$-full integers. \end{cor}
\section{A bound for $t$-free integers}
Sol\'e and Planat \cite{7-free} introduced the generalized Dedekind $\Psi$ function \begin{align*}
\Psi_t(n) := n \prod_{p | n}( 1 + p^{-1} + \dots + p^{-(t-1)}) = n \prod_{p | n}\frac{1-p^{-t}}{1 - p^{-1}}. \end{align*} Since \begin{align*}
\sigma(n) = n \prod_{p^a || n} (1 + p^{-1} + \dots + p^{-a}), \end{align*} we see that $\sigma(n) \leq \Psi_t(n)$, provided that $n$ is $t$-free. Thus, we study the function \begin{align*}
R_t(n) := \frac{\Psi_t(n)}{n \log \log n}. \end{align*} \noindent By Proposition 2 of \cite{7-free}, it is sufficient to consider $R_t$ only at the primorial numbers $p_n\# = \prod_{k=1}^n p_k$ where $p_k$ is the $k$th prime. Compare this to the role of colossally abundant numbers in \eqref{RI} by Robin \cite{RobinIneq}.
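The pointwise bound $\sigma(n)\leq\Psi_t(n)$ for $t$-free $n$ is easy to check numerically. The following short script (an added illustration, not from the paper) verifies it over a small range:

```python
import math


def prime_factorization(n):
    """Return {p: a} for n = prod p^a, by trial division."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors


def sigma(n):
    """Sum of divisors via the multiplicative formula sigma(p^a) = 1 + p + ... + p^a."""
    return math.prod(
        sum(p**i for i in range(a + 1)) for p, a in prime_factorization(n).items()
    )


def psi_t(n, t):
    """Generalized Dedekind function: n * prod_{p|n} (1 - p^-t) / (1 - p^-1)."""
    value = float(n)
    for p in prime_factorization(n):
        value *= (1 - p**-t) / (1 - 1 / p)
    return value


# For every t-free n (no exponent in its factorization reaches t),
# sigma(n) <= Psi_t(n); equality occurs e.g. at prime powers p^(t-1).
t = 5
for n in range(2, 2000):
    if all(a < t for a in prime_factorization(n).values()):  # n is t-free
        assert sigma(n) <= psi_t(n, t) + 1e-9
```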
Using equation $(2)$ of Broughan and Trudgian \cite{11-free}, we have for $n \geq 2$ \begin{align*}
R_t(p_n\#)
= \frac{p_n\# \prod_{p \leq p_n} \frac{1-p^{-t}}{1 - p^{-1}}}{p_n\# \log \log p_n\#}
= \frac{ \prod_{p > p_n} (1-p^{-t})^{-1}}{\zeta(t) \log \vartheta(p_n)} \prod_{p \leq p_n} (1 - p^{-1})^{-1} \end{align*} where $\vartheta(x)$ is the Chebyshev function $\sum_{p\leq x}\log p$.
In Sections \ref{bound1} and \ref{bound}, we construct two non-increasing functions, $g_B(w;t)$ and $g_\infty(w;t)$ such that for some constants $x_0$, $B$ we have for $x_0\leq p_n \leq B$ $$ g_B(p_n;t)\geq R_t(p_n\#)\exp(-\gamma) $$ and for $p_n>B$ $$ g_\infty(p_n;t)\geq R_t(p_n\#)\exp(-\gamma). $$ \noindent For a given $t\geq 2$, if we can show that all $t$-free numbers $5\,040<n\leq p_k\#$ satisfy \eqref{RI}, that $g_B(p_k;t)<1$ and that $g_\infty(B;t)<1$, then we are done.
\section{Deriving $g_B(p_n;t)$} \label{bound1}
We start with some lemmas.
\begin{lemma}\label{partial_RH} Let $\rho$ be a non-trivial zero of the Riemann zeta function with positive imaginary part $\leq 3\cdot 10^{12}$. Then $\Re \rho=1/2$. \end{lemma} \begin{proof} See Theorem $1$ of \cite{Platt-RH}. \end{proof}
\begin{lemma}\label{lem:theta} Let $B=2.169\cdot 10^{25}$. Then we have
$$\left|\vartheta(x)-x\right|\leq\frac{1}{8\pi}\sqrt{x}\log^2 x\quad\textrm{for }599\leq x\leq B. $$ \end{lemma} \begin{proof} Given that one knows the Riemann Hypothesis to height $T$, \cite{Buthe-RH} tells us that we may use Schoenfeld's bounds from \cite{Schoenfeld1976} but restricted to $B$ such that $$ 4.92\sqrt{\frac{B}{\log B}}\leq T. $$ Using $T=3\cdot 10^{12}$ from Lemma \ref{partial_RH} we find $B=2.169\cdot 10^{25}$ is admissible. \end{proof}
\begin{lemma}\label{lem:theta55} Let $\log x \geq 55$. Then $$
|\vartheta(x)-x|\leq 1.388\cdot 10^{-10}x +1.4262\sqrt{x} $$ or $$
|\vartheta(x)-x|\leq 1.405\cdot 10^{-10}x. $$ \end{lemma} \begin{proof} From Table $1$ of \cite{Dusart-estimates} we have for $x>\exp(55)$ $$
|\psi(x)-x| \leq 1.388\cdot 10^{-10} x $$ so that by Theorem $13$ of \cite{Rosser62} we get, again for $x>\exp(55)$, that $$
|\vartheta(x)-x|\leq 1.388\cdot 10^{-10} x +1.4262\sqrt{x}. $$ The second bound follows trivially. \end{proof}
\begin{lemma}\label{lem:C1} Take $B$ as above and define $$ C_1=\int\limits_B^\infty \frac{(\vartheta(t)-t)(1+\log t)}{t^2\log^2 t} \textrm{d} t. $$ Then $C_1\leq 2.645\cdot 10^{-9}$. \end{lemma} \begin{proof} We split the integral at $X_0=\exp(2000)$, apply Lemma \ref{lem:theta55} and consider $$ 1.405\cdot 10^{-10}\int\limits_B^{X_0}\frac{1+\log t}{t\log^2 t}\textrm{d} t\leq 1.430\cdot 10^{-10}\int\limits_B^{X_0}\frac{\textrm{d} t}{t\log t}\leq 5.055\cdot 10^{-10}. $$ For the tail of the integral, we use $$
|\vartheta (x)-x|\leq 30.3 x\log^{1.52} x \exp(-0.8\sqrt{\log x}) $$ from Corollary $1$ of \cite{Platt-Pintz}, valid for $x \geq X_0$. We can then majorise the tail with $$ 30.3 \int\limits_{X_0}^\infty \frac{\log t\exp(-0.8\sqrt{\log t})}{t}\textrm{d} t $$ which is less than $2.139\cdot 10^{-9}$.
\end{proof}
\begin{lemma} Take $B$, $C_1$ as above and let $599\leq x \leq B$. Then
$$ \prod\limits_{p\leq x}\left(1-\frac{1}{p}\right)\geq \frac{\exp(-\gamma)}{\log x}\exp\left(\frac{1.02}{(x-1)\log x}+\frac{\log x}{8\pi\sqrt{x}}+C_1+\frac{(\log x+3)\sqrt{B}-(\log B+3)\sqrt{x}}{4\pi\sqrt{xB}}\right). $$ \end{lemma} \begin{proof} Let $M$ be the Meissel-Mertens constant $$ M=\gamma+\sum\limits_p(\log(1-1/p)+1/p). $$ Then by $4.20$ of \cite{Rosser62} we have $$
\left|\sum\limits_{p\leq x}\frac{1}{p}-\log\log x -M\right|\leq\frac{|\vartheta(x)-x|}{x\log x}+\int\limits_x^\infty\frac{|\vartheta(t)-t|(1+\log t)}{t^2\log^2 t}\textrm{d} t. $$ Since $599\leq x\leq B$ we can use Lemma \ref{lem:theta} to bound the first term with $$ \frac{\log x}{8\pi\sqrt{x}}. $$ We can split the integral at $B$ and over the range $[B,\infty)$ use the bound from Lemma \ref{lem:C1}. This leaves the range $[x,B]$ where we can use Lemma \ref{lem:theta} and a straightforward integration yields a contribution of $$ \frac{(\log x+3)\sqrt{B}-(\log B+3)\sqrt{x}}{4\pi\sqrt{xB}}. $$ We then simply follow the method used to prove Theorem $5.9$ of \cite{Dusart-estimates} with our bounds in place of $$ \frac{\eta_k}{k\log^k x}+\frac{(k+2)\eta_k}{(k+1)\log^{k+1} x}. $$ \end{proof}
We also need Lemma 2 of \cite{7-free}. \begin{lemma}[Sol\'e and Planat \cite{7-free}]
For $n \geq 2$,
\begin{align*}
\prod_{p > p_n} \frac{1}{1-p^{-t}} \leq \exp(2/p_n).
\end{align*} \end{lemma}
Putting all this together, we have the following. \begin{lemma} Define $$ g_B(p_n;t)=\frac{\exp\left(\frac{2}{p_n}+\frac{1.02}{(p_n-1)\log p_n}+\frac{\log p_n}{8\pi\sqrt{p_n}}+C_1+\frac{(\log p_n+3)\sqrt{B}-(\log B+3)\sqrt{p_n}}{4\pi\sqrt{p_nB}}\right)\log p_n}{\zeta(t)\log\left(p_n-\frac{\sqrt{p_n}\log^2 p_n}{8\pi}\right)}. $$ Then for $t\geq 2$ and $599\leq p_n\leq B=2.169\cdot 10^{25}$ we have $g_B(p_n;t)$ non-increasing in $n$ and $R_t(p_n\#)\leq \exp(\gamma)g_B(p_n;t)$. \end{lemma}
\section{Deriving $g_\infty(p_n;t)$} \label{bound}
We will need a further bound.
\begin{theorem}
For $x \geq 767\,135\,587$,
\begin{align*}
\prod_{p \leq x} \frac{p}{p-1}
\leq e^\gamma \log x \exp\left(\frac{1.02}{(x-1)\log x}+\frac{1}{6\log^3 x}+\frac{5}{8\log^4 x}\right).
\end{align*} \end{theorem} \begin{proof} This is the last display on page $245$ of \cite{Dusart-estimates} with $k=3$ so that $\eta_k=0.5$. \end{proof}
We can now deduce \begin{theorem} Define $$ g_\infty(p_n;t)=\frac{\exp\left(\frac{2}{p_n}+\frac{1.02}{(p_n-1)\log p_n}+\frac{1}{6\log^3 p_n}+\frac{5}{8\log^4 p_n}\right)\log p_n}{\zeta(t)\log\left(p_n-1.338\cdot 10^{-10}p_n-1.4262\sqrt{p_n}\right)}. $$ Then for $t \geq 2$ and $\log p_n \geq 55$ we have \begin{align*}
R_t(p_n\#) \leq e^\gamma g_\infty(p_n;t) \end{align*} and $g_\infty(p_n;t)$ is non-increasing in $n$. \end{theorem}
\section{Computations}
The proof rests on Briggs' work \cite{10^10^10} on the colossally abundant numbers, which implies \eqref{RI} for $\num{5040} < n \leq 10^{(10^{10})}$. We extend this result with the following Theorem: \begin{theorem}{\label{Platt}}
Robin's inequality holds for all $\num{5040} < n \leq 10^{(10^{13.114\,85})}$. \end{theorem} \begin{proof} We implemented Briggs' algorithm from \cite{10^10^10} but using extended precision ($100$ bits) and interval arithmetic to carefully manage rounding errors. The final $n$ checked was \begin{align*} 29\,996\,208\,012\,611\#&\cdot 7\,662\,961\#\cdot 44\,293\#\cdot 3\,271\#\cdot 666\#\cdot 233\#\cdot 109\#\cdot 61\#\\ &\cdot 37\#\cdot 23\#\cdot 19\#\cdot (13\#)^2\cdot (7\#)^4 \cdot (5\#)^3\cdot (3\#)^{10}\cdot 2^{19}. \end{align*} \end{proof}
\begin{cor} Robin's inequality holds for all $13\#\leq n \leq 29\,996\,208\,012\,611\#$. \end{cor}
We are now in a position to prove Theorem \ref{analytics}. We find that $$ g_B(29\,996\,208\,012\,611;20)< 1 $$ and $$ g_\infty(B;20)< 1 $$ and the result follows.
\section{Comments}
In terms of going further with this method, we observe that both $$ g_B(29\,996\,208\,012\,611;21)> 1 $$ and $$ g_\infty(B;21)>1 $$ so one would need improvements in both. We only pause to note that one of the inputs to Dusart's unconditional bounds that feed into $g_\infty$ is again the height to which the Riemann Hypothesis is known\footnote{Dusart uses $T\geq 2\,445\,999\,556\,030$.}, so the improvements from Lemma \ref{partial_RH} could be incorporated.
Finally, we observe that if $R_t(p_n\#)$ could be shown to be decreasing in $n$, then our lives would have been much easier.
\end{document}
# Bayesian inference and the posterior distribution
Bayesian inference is a statistical approach that allows us to update our beliefs about the parameters of a statistical model in light of new evidence. The posterior distribution represents our updated beliefs about the parameters after observing the data.
The posterior distribution is obtained by multiplying the prior distribution of the parameters by the likelihood function of the data given the parameters.
Consider a simple coin-tossing experiment. We have a coin with an unknown probability of landing heads, θ. We toss the coin n times and observe y heads. The likelihood function of the data given the parameter θ is:
$$p(y|θ) = \binom{n}{y} θ^y (1-θ)^{n-y}$$
The prior distribution of θ is specified by the Beta distribution, which has parameters α and β. The prior distribution of θ is:
$$p(θ) = \frac{1}{B(\alpha, \beta)} θ^{\alpha-1} (1-θ)^{\beta-1}$$
where B(α, β) is the Beta function, which acts as a normalization constant ensuring that the density integrates to 1.
The posterior distribution of θ is obtained by multiplying the prior distribution by the likelihood function:
$$p(θ|y) = \frac{1}{B(\alpha+y, \beta+n-y)} θ^{\alpha+y-1} (1-θ)^{\beta+n-y-1}$$
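Because the Beta prior is conjugate to the binomial likelihood, this update amounts to adding the observed counts to the prior parameters. A minimal plain-Python sketch (no PyMC3 required) for the values α = 2, β = 3, n = 10, y = 4 used in the exercise:

```python
# Conjugate Beta-binomial update: posterior is Beta(alpha + y, beta + n - y).
alpha, beta, n, y = 2, 3, 10, 4

post_alpha = alpha + y          # 6
post_beta = beta + n - y        # 9

# Posterior summaries of Beta(a, b).
post_mean = post_alpha / (post_alpha + post_beta)             # 6/15 = 0.4
post_mode = (post_alpha - 1) / (post_alpha + post_beta - 2)   # 5/13 ~ 0.385

print(post_alpha, post_beta, post_mean)
```

The posterior is Beta(6, 9), with mean 0.4.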
## Exercise
Calculate the posterior distribution of θ for a coin-tossing experiment with α = 2, β = 3, n = 10, and y = 4.
To obtain the posterior distribution, we can use Bayes' theorem:
$$p(θ|y) = \frac{p(y|θ) p(θ)}{p(y)}$$
where p(y) is the marginal likelihood, that is, the probability of observing the data y averaged over all possible values of the parameter θ.
To calculate the marginal likelihood function, we can use the following formula:
$$p(y) = \int p(y|θ) p(θ) dθ$$
where the integral is taken over the range of possible values of θ.
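For the Beta prior and binomial likelihood, this integral has a closed form: p(y) = C(n, y) B(α + y, β + n − y) / B(α, β), the beta-binomial distribution. A sketch using log-gamma for numerical stability:

```python
import math

def log_beta(a, b):
    # log of the Beta function B(a, b) via log-gamma
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def marginal_likelihood(y, n, alpha, beta):
    # p(y) = C(n, y) * B(alpha + y, beta + n - y) / B(alpha, beta)
    return math.comb(n, y) * math.exp(
        log_beta(alpha + y, beta + n - y) - log_beta(alpha, beta))

p_y = marginal_likelihood(4, 10, 2, 3)
total = sum(marginal_likelihood(k, 10, 2, 3) for k in range(11))
print(p_y, total)  # total is 1 up to rounding, as a pmf over y must be
```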
## Exercise
Calculate the marginal likelihood function for the coin-tossing experiment with α = 2, β = 3, n = 10, and y = 4.
Once we have the posterior distribution of θ, we can make inferences about the parameter, such as calculating the expected value, median, or credible intervals.
# Gibbs sampling and its implementation in PyMC3
Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for obtaining random samples from a probability distribution. At each step it updates one variable (or block of variables) by drawing directly from its full conditional distribution given the current values of all the other variables, so, unlike Metropolis-Hastings, no accept/reject step is needed.
Gibbs sampling is particularly useful when the joint posterior distribution is difficult to sample from directly but each full conditional distribution has a simple, tractable form, as is often the case for high-dimensional posteriors known only up to a constant of proportionality.
In the coin-tossing example, we can use Gibbs sampling to obtain random samples from the posterior distribution of θ.
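Since the coin model has a single parameter with a closed-form posterior, Gibbs sampling degenerates there; the mechanics are easier to see on a two-variable target. The sketch below (plain Python, not PyMC3) alternately samples the two full conditionals of a standard bivariate normal with correlation r, for which x | y ~ N(ry, 1 − r²):

```python
import random

random.seed(0)
r = 0.8                      # target correlation of the bivariate normal
sd = (1 - r * r) ** 0.5      # std. dev. of each full conditional

x, y = 0.0, 0.0
samples = []
for i in range(60000):
    x = random.gauss(r * y, sd)   # sample x | y
    y = random.gauss(r * x, sd)   # sample y | x
    if i >= 10000:                # discard burn-in
        samples.append((x, y))

n = len(samples)
mean_x = sum(s[0] for s in samples) / n
corr = sum(s[0] * s[1] for s in samples) / n   # E[xy] ~ r for unit variances
print(mean_x, corr)
```

The sample mean is close to 0 and the sample correlation close to r, as expected.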
## Exercise
Implement Gibbs sampling for the coin-tossing example using PyMC3.
# Markov chain Monte Carlo and its applications in Bayesian inference
Markov chain Monte Carlo (MCMC) is a class of algorithms used for obtaining random samples from a probability distribution. It is particularly useful in Bayesian inference, where the posterior distribution is high-dimensional and only known up to a constant of proportionality.
MCMC permits a set of sampled parameter values of arbitrary size to be obtained from the posterior distribution, even when that distribution can only be evaluated up to a constant of proportionality.
In the coin-tossing example, we can use MCMC methods, such as Metropolis-Hastings or Hamiltonian Monte Carlo, to obtain random samples from the posterior distribution of θ.
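A minimal random-walk Metropolis-Hastings sampler for the coin example can be written in a few lines of plain Python (PyMC3 automates all of this; the proposal scale 0.1 is an arbitrary choice):

```python
import math
import random

random.seed(1)
alpha, beta, n, y = 2, 3, 10, 4

def log_post(theta):
    # unnormalized log posterior: the Beta(alpha + y, beta + n - y) kernel
    if not 0 < theta < 1:
        return -math.inf
    return (alpha + y - 1) * math.log(theta) + (beta + n - y - 1) * math.log(1 - theta)

theta = 0.5
chain = []
for i in range(60000):
    prop = theta + random.gauss(0, 0.1)          # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                              # accept
    if i >= 10000:                                # discard burn-in
        chain.append(theta)

est_mean = sum(chain) / len(chain)
print(est_mean)  # exact posterior mean is 6/15 = 0.4
```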
## Exercise
Implement Metropolis-Hastings or Hamiltonian Monte Carlo for the coin-tossing example using PyMC3.
# Variational inference and its implementation in PyMC3
Variational inference is a technique used to approximate the posterior distribution in Bayesian inference. It is particularly useful when the posterior distribution is high-dimensional and only known up to a constant of proportionality.
Variational inference involves choosing a tractable family of distributions, called the variational family, and finding the member of that family that is closest to the posterior distribution, typically by minimizing the Kullback–Leibler divergence.
In the coin-tossing example, we can use the mean field variational inference to approximate the posterior distribution of θ.
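As a toy illustration of the variational objective (this is not PyMC3's ADVI), the sketch below discretizes the Beta(6, 9) posterior of the coin example on a grid, takes a Gaussian variational family q(θ; μ, σ), and finds the member minimizing KL(q ‖ p) by brute-force grid search over (μ, σ):

```python
import math

# Discretize the Beta(6, 9) posterior of the coin example on (0, 1).
K = 1000
grid = [(k + 0.5) / K for k in range(K)]
log_p = [5 * math.log(t) + 8 * math.log(1 - t) for t in grid]
m = max(log_p)
p = [math.exp(v - m) for v in log_p]
z = sum(p)
p = [v / z for v in p]                 # normalized target distribution

def kl_q_p(mu, sd):
    # KL(q || p) for a discretized Gaussian q(theta; mu, sd) vs the target p
    q = [math.exp(-0.5 * ((t - mu) / sd) ** 2) for t in grid]
    zq = sum(q)
    return sum((v / zq) * math.log((v / zq) / pi)
               for v, pi in zip(q, p) if v > 0)

# Brute-force search over the variational parameters (mu, sd).
best = min((kl_q_p(mu / 100, sd / 100), mu / 100, sd / 100)
           for mu in range(20, 61, 2) for sd in range(5, 26))
print(best)  # (KL, mu, sd)
```

The winning μ lands near the posterior mode, illustrating the mode-seeking behavior of the KL(q ‖ p) objective.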
## Exercise
Implement mean field variational inference for the coin-tossing example using PyMC3.
# Multilevel models and hierarchical data
Multilevel models are statistical models that incorporate hierarchical or nested data structures. They are particularly useful when the data exhibit grouping or clustering patterns.
In a multilevel model, the data are grouped into different levels, such as individuals within groups or groups within organizations. The parameters of the model are shared across levels, with higher-level parameters representing the overall trends and lower-level parameters representing the within-group variation.
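The hallmark of a multilevel model is partial pooling: each group's estimate is shrunk toward the grand mean, and smaller groups are shrunk more. A sketch with made-up school scores and assumed known within-group variance s2 and between-group variance t2 (the names and numbers are illustrative only):

```python
# Partial pooling of group means (normal-normal model, known variances):
# shrunk_j = w_j * ybar_j + (1 - w_j) * grand,  w_j = (n_j/s2) / (n_j/s2 + 1/t2)
groups = {"school_A": [52.0, 48.0, 50.0],                   # small group
          "school_B": [60.0 + 0.5 * i for i in range(20)]}  # large group
s2, t2 = 25.0, 9.0   # assumed within- and between-group variances

all_scores = [x for xs in groups.values() for x in xs]
grand = sum(all_scores) / len(all_scores)

shrunk = {}
for name, xs in groups.items():
    nj = len(xs)
    ybar = sum(xs) / nj
    w = (nj / s2) / (nj / s2 + 1 / t2)   # precision weight for this group
    shrunk[name] = w * ybar + (1 - w) * grand

print(grand, shrunk)
```

Each shrunk estimate lies between its group mean and the grand mean; the small group moves proportionally further.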
Consider a study examining the effect of a treatment on the performance of students in different schools. The data can be organized into a multilevel model with schools as the higher-level grouping and students as the lower-level grouping.
## Exercise
Fit a multilevel model to a dataset with hierarchical structure using PyMC3.
# Case studies and practical examples
Case study 1: Analyzing the effects of a new drug on patient outcomes.
## Exercise
Analyze the effects of a new drug on patient outcomes using Bayesian inference, variational inference, and multilevel models.
# Model selection and evaluation
Model selection is the process of choosing the best statistical model for a given dataset. This involves comparing different models based on their fit to the data and their ability to generalize to new data.
Model evaluation can be done using techniques such as cross-validation, the Akaike information criterion (AIC), and the Bayesian information criterion (BIC).
In the coin-tossing example, we can compare different models for the probability of landing heads, such as a uniform prior, a Beta prior with different parameters, or a mixture of Beta distributions.
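As a concrete sketch of the criteria themselves, the snippet below compares a fixed fair-coin model (zero free parameters) against a binomial model with θ estimated by maximum likelihood, using AIC = 2k − 2 log L̂ and BIC = k log n − 2 log L̂ with n = 10 Bernoulli observations:

```python
import math

n, y = 10, 4

def bernoulli_loglik(theta):
    # log-likelihood of y heads in n tosses; the binomial coefficient is
    # common to both models and cancels in comparisons, so it is omitted.
    return y * math.log(theta) + (n - y) * math.log(1 - theta)

# Model 0: fair coin, no free parameters (k = 0).
ll0 = bernoulli_loglik(0.5)
aic0 = 2 * 0 - 2 * ll0
bic0 = 0 * math.log(n) - 2 * ll0

# Model 1: theta estimated by maximum likelihood (k = 1, theta_hat = y/n).
theta_hat = y / n
ll1 = bernoulli_loglik(theta_hat)
aic1 = 2 * 1 - 2 * ll1
bic1 = 1 * math.log(n) - 2 * ll1

print(aic0, aic1, bic0, bic1)
```

On these data both criteria prefer the fixed model: θ̂ = 0.4 is close enough to 0.5 that the extra parameter does not pay for itself.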
## Exercise
Evaluate different models for the coin-tossing example using cross-validation, AIC, and BIC.
# Advanced topics in statistical modeling with Julia and PyMC3
This section covers advanced topics in statistical modeling with Julia and PyMC3, such as:
- Hierarchical models
- Bayesian networks
- State space models
- Bayesian optimization
Case study 2: Predicting stock prices using Bayesian optimization.
## Exercise
Implement a Bayesian optimization algorithm to predict stock prices using Julia and PyMC3.
# Applications in scientific research and industry
Statistical modeling with Julia and PyMC3 has a wide range of applications in scientific research and industry. Some examples include:
- Machine learning and artificial intelligence
- Ecology and environmental science
- Economics and finance
- Healthcare and medicine
Case study 3: Predicting the spread of an infectious disease using a compartmental model.
## Exercise
Apply statistical modeling with Julia and PyMC3 to predict the spread of an infectious disease using a compartmental model.
# Future developments in statistical modeling with Julia and PyMC3
The field of statistical modeling with Julia and PyMC3 is continuously evolving. Some potential future developments include:
- Integration with other machine learning libraries, such as TensorFlow and scikit-learn
- Development of new inference algorithms, such as approximate Bayesian computation (ABC)
- Expansion of the range of supported distributions and models
Future development: Integrating Julia and PyMC3 with TensorFlow for deep learning applications.
## Exercise
Integrate Julia and PyMC3 with TensorFlow for deep learning applications.
A Fast Method for the Segmentation of Synaptic Junctions and Mitochondria in Serial Electron Microscopic Images of the Brain
Pablo Márquez Neila1,
Luis Baumela1,
Juncal González-Soriano2,
Jose-Rodrigo Rodríguez3,
Javier DeFelipe3 &
Ángel Merchán-Pérez3,4
Neuroinformatics volume 14, pages 235–250 (2016)
Recent electron microscopy (EM) imaging techniques permit the automatic acquisition of a large number of serial sections from brain samples. Manual segmentation of these images is tedious, time-consuming and requires a high degree of user expertise. Therefore, there is considerable interest in developing automatic segmentation methods. However, currently available methods are computationally demanding in terms of computer time and memory usage, and to work properly many of them require image stacks to be isotropic, that is, voxels must have the same size in the X, Y and Z axes. We present a method that works with anisotropic voxels and that is computationally efficient allowing the segmentation of large image stacks. Our approach involves anisotropy-aware regularization via conditional random field inference and surface smoothing techniques to improve the segmentation and visualization. We have focused on the segmentation of mitochondria and synaptic junctions in EM stacks from the cerebral cortex, and have compared the results to those obtained by other methods. Our method is faster than other methods with similar segmentation results. Our image regularization procedure introduces high-level knowledge about the structure of labels. We have also reduced memory requirements with the introduction of energy optimization in overlapping partitions, which permits the regularization of very large image stacks. Finally, the surface smoothing step improves the appearance of three-dimensional renderings of the segmented volumes.
The availability of technologies such as combined Focused Ion Beam milling/Scanning Electron Microscopy (FIB/SEM) and Serial Block-Face Scanning Electron Microscopy (SBFSEM) for the study of biological tissues permits the automated acquisition of large numbers of serial sections from brain samples (see for example (Denk and Horstmann 2004; Knott et al. 2008; Merchan-Perez et al. 2009)). These three-dimensional samples contain invaluable structural information that must be extracted from the stack of serial images. Electron micrographs of nervous tissue typically show a large variety of structures, such as neuronal and glial processes with their corresponding cytoplasmic organelles (e.g., vesicles, tubules, filaments and mitochondria) and synapses. From a practical point of view, manual segmentation of these structures is a difficult and time-consuming task that requires a high degree of expertise. As a consequence, much effort has been devoted to the development of automated algorithms.
Brain images produced by electron microscopy (EM) are very complex and noisy with strong gray-level gradients that do not always correspond to region boundaries. Moreover, different neuronal structures may have similar local image appearance. Hence, it is extremely difficult to develop a fully automated segmentation algorithm. Although automated image processing techniques have addressed the problem of membrane detection and dendrite reconstruction (Turaga et al. 2010), standard computer vision algorithms used for the segmentation of textures (Haindl and Mikes 2008) or natural images (Martin et al. 2001) perform poorly, and standard techniques for the segmentation of biomedical images such as contour evolution (Jurrus et al. 2009) cannot handle the abundant image gradients.
Among the various structures visualized with EM, mitochondria and the synaptic junctions are of particular interest to neuroscience. Indeed, most information in the mammalian nervous system flows though chemical synapses. Thus, the quantification and measurement of synapses is a major goal in the study of brain synaptic organization in both health and disease (DeFelipe 2010). Mitochondria are organelles that produce most of the cell's supply of adenosine triphosphate (ATP) which transports chemical energy within cells for metabolism. In addition to supplying cellular energy, mitochondria are involved in many other crucial cellular physiological tasks (e.g., McBride et al. 2006) and their alterations have been associated with a number of diseases such as Alzheimer's disease (e.g., Santos et al. 2010). Therefore, substantial effort has been put into developing methods for accurate segmentation of synapses and mitochondria in the brain.
Although there are good practical synapse segmentation approaches relying on semi-automated tools (Morales et al. 2011), recent research has focused on machine learning approaches to diminish the degree of user interaction. (Becker et al. 2013) introduced a synaptic junction segmentation approach specifically designed for isotropic resolution image stacks, that is, stacks where voxel dimensions were identical in all X, Y and Z-axes. This method is based on a boosting algorithm that discovers local context cues related to the presence of the synaptic junction. The local context around potential synapse-like regions is also used in (Jagadeesh et al. 2013). However, the approach of Jagadeesh et al. relies on a computationally demanding set of image features that require up to 12 hours of computing time in a 32-node cluster. An alternative way of detecting synapses is by selectively staining them with ethanolic phosphotungstic acid (Navlakha et al. 2013), although this obscures other subcellular details and the tissue preservation is not appropriate for detailed ultrastructural analysis. Finally, (Kreshuk et al. 2011) used the Ilastik toolbox to segment synaptic junctions.
Several algorithms have been specifically designed to segment mitochondria in EM images. A texton-based approach comparing K-NN, SVM and AdaBoost classifiers was proposed (Narasimha et al. 2009). Lucchi and colleagues (Lucchi et al. 2012) later introduced an algorithm using as input 3D supervoxels and assuming almost isotropic image stacks. A different approach has been presented by Giuly and colleagues (Giuly et al. 2012). Their method performs the segmentation of mitochondria in anisotropic stacks of images. However, it is computationally very expensive and requires long processing times.
Consequently, our aim was to develop a method that does not require isotropic voxels and that is computationally efficient to allow the interactive segmentation of large image stacks that are now available. Moreover, our approach also involves image regularization and surface smoothing techniques to improve the segmentation.
For the development of our segmentation algorithm we have used FIB/SEM image stacks that have been acquired in our laboratory from the rat somatosensory cortex (Merchan-Perez et al. 2009; Anton-Sanchez et al. 2014). The resolution was always the same in the X and Y axes and ranged from 3.7 nm per pixel to 14.7 nm per pixel. Resolution in the Z axis, equivalent to section thickness, was in all cases 20 nm per pixel. The stacks were thus anisotropic, that is, they did not have the same resolution in all three axes. Our segmentation algorithm has been specifically designed to take anisotropy into account, and we define the anisotropy factor as:
$$ \rho = \frac{\textrm{Voxel size in the Z~axis}}{\textrm{Voxel size in the X (or Y) axis}} $$
Thus, our stacks had anisotropy factors ranging from 5.41 to 1.36. To make our method comparable to others, we used an additional stack of SBFSEM images available online with an anisotropy factor ρ = 5.
Description of the Segmentation Algorithm
Our algorithm learns to detect and segment potentially any type of structure from the visual information in a stack of FIB/SEM or SBFSEM images, although in this work we have focused on synaptic junctions and mitochondria. To this end, the algorithm must be trained by providing samples of segmented structures. The user provides these samples in an interactive way using an in-house application we have developed: first, a few voxels are manually labeled as mitochondria, for example, using standard tools such as the two or three-dimensional brush. The system then performs an automatic segmentation based on the training given, thus providing visual feedback. The user then refines the training by labeling new samples in the areas where the automatic segmentation is wrong. This procedure is repeated until the results are satisfactory (see Fig. 1).
Workflow of the segmentation algorithm. The input stack of serial images is subjected to four successive steps, two of which can be interactively modified by the user
Our automatic segmentation algorithm has three steps: feature extraction, voxel-wise classification and regularization. An optional fourth step, smoothing, enhances the visual appearance of the segmentation when it is rendered in 3D.
Feature extraction is performed on all voxels in the stack. The features of a voxel are a vector of real numbers that concisely describe the relevant visual information in the vicinity of that voxel. A feature extractor is a function from the space of EM stacks to the space of feature stacks. We have developed two feature extractors, F2D and F3D, which aggregate visual information around each voxel at several scales, and are rotationally invariant and robust to the noise present in EM images. F3D is a feature extractor that takes into account three-dimensional neighborhoods around each voxel. It is adequate for isotropic stacks. F2D, on the other hand, extracts a feature vector for each pixel in an image of the stack considering visual information of a neighborhood of the pixel in that slice and ignoring the information in other slices. F2D is a feature extractor that is suitable for anisotropic stacks. In the paragraphs that follow, we first describe F2D and then introduce F3D as a generalization.
F2D works on each image of the stack separately. Hence we only consider a single image I in the following description. F2D first applies a set of linear operators (zero, first and second order derivatives) to the smoothed image I at several scales. Thus, the set of linear operators at a scale σ is
$$\begin{array}{@{}rcl@{}} \left\{G_{\sigma}*, \sigma\cdot G_{\sigma}*\frac{\partial}{\partial x}, \sigma\cdot G_{\sigma}*\frac{\partial}{\partial y}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial x^{2}}, \sigma^{2}\right.\\[-2pt] \left.\cdot ~G_{\sigma}*\frac{\partial^{2}}{\partial xy}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial y^{2}} \right\}, \end{array} $$
where G_σ is a Gaussian filter of radius σ and ∗ is the convolution operator. The responses to these operators will be noted as s_00, s_10, s_01, s_20, s_11 and s_02 (note that the subscripts denote the order of the derivatives). With these responses to the filters at a scale σ, we can compute a partial feature vector for the pixels of I at that scale. This partial feature vector has four components:
$$ \left\{s_{00}, \sqrt{s_{10}^{2}+s_{01}^{2}}, \lambda_{1}, \lambda_{2}\right\}, $$
where s_00 is the smoothed image, \(\sqrt{s_{10}^{2}+s_{01}^{2}}\) is the gradient magnitude, and λ_1 and λ_2 are the first and second eigenvalues of the Hessian matrix, which are computed as
$$\begin{array}{@{}rcl@{}} \lambda_{1} &=& \frac{1}{2}\left( s_{20} + s_{02} + \sqrt{\left( s_{20} - s_{02}\right)^{2} + 4s_{11}^{2}}\right), \end{array} $$
$$\begin{array}{@{}rcl@{}} \lambda_{2} &=& \frac{1}{2}\left( s_{20} + s_{02} - \sqrt{\left( s_{20} - s_{02}\right)^{2} + 4s_{11}^{2}}\right). \end{array} $$
Figure 2 shows the components of a partial feature vector at a fixed scale for all the pixels in a single image.
Feature extraction from a single image. In this example, feature extraction has been performed at a scale σ = 4 pixels. The resulting partial feature vector has four components: the smoothed image (a), the gradient magnitude (b), and the first (c) and second eigenvalues (d) of the Hessian matrices. Feature extraction is also performed at several other scales (not shown) to obtain the complete feature vector
The complete feature vector for each pixel is the concatenation of several partial feature vectors. We apply this procedure at n different scales {σ 0,…, σ n−1}, producing a feature vector with 4n components for each pixel in I. The set of scales should match the size of the structures that have to be detected in the images. In practice, the user only sets the initial scale σ 0, which we call the base scale, and the rest of scales are given by \(\sigma _{i} = 2^{\frac {1}{2}i}\sigma _{0}\). For example, if we use n = 4 scales and set the smallest scale to σ 0 = 4 pixels, our feature vectors will have 16 dimensions and they will range from 4 to 11.31 pixels in scale.
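The scale series and the Hessian-based components of the partial feature vector are easy to verify in isolation. In the sketch below (plain Python), the filter responses s20, s11, s02 are made-up numbers standing in for real filter outputs; the eigenvalues of the symmetric 2×2 Hessian use the discriminant (s20 − s02)² + 4s11²:

```python
import math

# Scale series: sigma_i = 2**(i/2) * sigma_0; with n = 4 scales and sigma_0 = 4 px
# the scales range from 4 to ~11.31 pixels, as in the text.
sigma0, n_scales = 4.0, 4
scales = [sigma0 * 2 ** (0.5 * i) for i in range(n_scales)]

def hessian_eigenvalues(s20, s11, s02):
    # Eigenvalues of the symmetric 2x2 Hessian [[s20, s11], [s11, s02]].
    tr = s20 + s02
    disc = math.sqrt((s20 - s02) ** 2 + 4 * s11 ** 2)
    return 0.5 * (tr + disc), 0.5 * (tr - disc)

# Hypothetical filter responses (stand-ins for real second-derivative outputs).
l1, l2 = hessian_eigenvalues(2.0, 1.0, -1.0)
print(scales)
print(l1, l2)
```

The eigenvalues satisfy the usual checks: their sum equals the trace s20 + s02 and their product equals the determinant s20·s02 − s11².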
F3D is a generalization of F2D for isotropic image stacks. As in F2D, the set of linear operators with the zero, first and second order derivatives at a given scale σ,
$$ \left\{ \begin{array}{l} G_{\sigma}*, \sigma\cdot G_{\sigma}*\frac{\partial}{\partial x}, \sigma\cdot G_{\sigma}*\frac{\partial}{\partial y}, \sigma\cdot G_{\sigma}*\frac{\partial}{\partial z}, \\ \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial x^{2}}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial xy}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial xz}, \\ \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial y^{2}}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial yz}, \sigma^{2}\cdot G_{\sigma}*\frac{\partial^{2}}{\partial z^{2}} \\ \end{array} \right\}, $$
is applied to the stack, obtaining the responses {s_ijk : i + j + k ≤ 2}, where i, j and k indicate the order of the derivatives in the X, Y and Z axes. The partial feature vector for each voxel of the stack at a given scale is
$$ \left\{s_{000}, \sqrt{s_{100}^{2} + s_{010}^{2} + s_{001}^{2}}, \lambda_{1}, \lambda_{2}, \lambda_{3}\right\}, $$
where the first component is the smoothed image, the second one is the magnitude of the gradient, and λ_1, λ_2 and λ_3 are the eigenvalues of the Hessian matrix
$$ \begin{pmatrix} s_{200} & s_{110} & s_{101} \\ s_{110} & s_{020} & s_{011} \\ s_{101} & s_{011} & s_{002} \end{pmatrix}. $$
Again, the complete feature vector for each voxel is the concatenation of the partial feature vectors at several scales.
A classifier uses the feature vectors to determine the probability that a voxel belongs to each label. This classifier has to be trained with labeled data to learn the relationship between feature vectors and labels. Here we briefly present how the classifier is trained and how a trained classifier can be used with new unclassified data.
Our classifier learns the probability distribution P(y_i ∣ f_i(x)) of the label y_i for the pixel i given the observed feature vector f_i(x). We use Bayes' rule to express this distribution as a product
$$ P(y_{i}\mid f_{i}(x)) \propto p(f_{i}(\mathbf{x})\mid y_{i})P(y_{i}), $$
where the conditional p(f_i(x) ∣ y_i) is the probability density of the feature vector for voxels with label y_i and P(y_i) is the prior probability of a pixel having the label y_i. We model the conditional distribution as a Gaussian,
$$ p(f_{i}(\mathbf{x})\mid y) \,=\, \frac{1}{\sqrt{(2\pi)^{k}\left|{\Sigma}_{y}\right|}} \exp\left( -\frac{1}{2}\left( f_{i}(\mathbf{x}) \,-\, \boldsymbol\mu_{y}\right)^{T}{\Sigma}_{y}^{-1}\left( f_{i}(\mathbf{x}) - \boldsymbol\mu_{y}\right)\right), $$
where the parameters μ_y and Σ_y are the mean vector and the covariance matrix of the feature vectors for voxels with the label y, and k is the dimension of the feature vector.
In the training step these parameters are estimated from training data. The user manually labels a few voxels of the stack. During training, the voxels labeled with label y are used to estimate μ_y, i.e., the mean of the feature vectors of voxels labeled as y, and Σ_y, i.e., the covariance matrix of these feature vectors.
When the dimension of the feature vectors is large, the training data often falls in a proper subspace of the complete k-dimensional feature space, producing a singular or near-singular covariance matrix Σ_y. We avoid this problem by first performing Principal Component Analysis (PCA)-based dimensionality reduction on the cloud of all feature vectors. The dimensionality after the PCA is chosen to retain 99 % of the variance.
P(y) is the a priori probability of the label y. It is learned from the user-provided data in the training step. In short, the training step consists of estimating the parameters μ_y and Σ_y of the conditional distribution (after a PCA-based dimensionality reduction) and the prior P(y) from the user-provided data with the interactive tool.
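Training therefore reduces to computing a mean vector, a covariance matrix and a prior per label, and classification to evaluating the Gaussian log-density plus the log-prior. A self-contained 2-D sketch in plain Python (the PCA step is omitted; the feature vectors and label names are made up for illustration):

```python
import math

def train(samples):
    # samples: {label: [2-D feature vectors]} -> per-label (mu, cov, prior)
    total = sum(len(v) for v in samples.values())
    params = {}
    for label, vecs in samples.items():
        n = len(vecs)
        mu = [sum(v[d] for v in vecs) / n for d in range(2)]
        c = [[sum((v[i] - mu[i]) * (v[j] - mu[j]) for v in vecs) / n
              for j in range(2)] for i in range(2)]
        params[label] = (mu, c, n / total)
    return params

def log_posterior(params, x):
    # log P(y | x) up to a shared constant, Gaussian class-conditionals (2-D)
    out = {}
    for label, (mu, c, prior) in params.items():
        det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
        inv = [[c[1][1] / det, -c[0][1] / det], [-c[1][0] / det, c[0][0] / det]]
        d = [x[0] - mu[0], x[1] - mu[1]]
        maha = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
                + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
        out[label] = math.log(prior) - 0.5 * (math.log(det) + maha)
    return out

samples = {"mito": [[2.0, 2.1], [2.2, 1.9], [1.8, 2.0], [2.1, 2.2]],
           "background": [[-1.0, 0.1], [-1.2, -0.2], [-0.9, 0.0], [-1.1, 0.2]]}
params = train(samples)
scores = log_posterior(params, [2.0, 2.0])
print(max(scores, key=scores.get))
```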
Once the classifier is trained, it processes every voxel in the EM stack. For each voxel i with feature vector f_i(x), the probability P(y_i ∣ f_i(x)) is computed for every label y_i. This results in a probability map that maps every voxel to the probabilities of belonging to each label. As an example, see Fig. 3a, b.
Classification and regularization of a single image. (a) and (b): Probability maps obtained by a Gaussian classifier that has been trained with the features extracted from the same image that is shown in Figure 2. The pixel-wise probability of belonging to the label "Mitochondrion" is shown in (a) and the pixel-wise probability of belonging to the label "Synaptic junction" is shown in (b). (c): preliminary segmentation before regularization, where each pixel has simply been given the label with highest probability (Mitochondria, gray; Synaptic junctions, white). Note the sparse pixels scattered throughout the image, the small holes in some of the segmented objects and the jagged edges. (d): Final segmentation after regularization via CRF energy minimization. Most sparse pixels have disappeared and edges show a smoother appearance
In preliminary experiments, we tested other classifiers such as support vector machines. Although these methods improve the results obtained with the Gaussian classifier, their performance is only marginally better at the expense of much higher computational time (in the order of hours vs. seconds), which makes them unsuitable for operation in real time.
Regularization
If voxels are assumed to be independent of each other, it is possible to segment the stack by simply assigning to each voxel i the label y* with the highest probability, i.e., y* = argmax_y P(y ∣ x). However, this offers far from optimal results, since the resulting segmentation is noisy, showing many sparse pixels, grainy regions and small holes (Fig. 3c, d).
Therefore, we have to assume some degree of probabilistic dependency between neighboring voxels of the stack. We have modeled this dependency by means of a conditional random field (CRF). A CRF models the distribution P(Y ∣ x) between the set of observed variables x (i.e., the pixel values of the stack of images) and the hidden variables Y = {y_1, …, y_N} (i.e., the segmentation labels) in such a way that Y conditioned on x holds the Markov property P(y_i ∣ x, Y_{−i}) = P(y_i ∣ x, Y_{N(i)}), where Y_{−i} is the set of label variables without y_i, and N(i) are the neighboring voxels of voxel i. The neighborhood function N defines a graph (V, E) with the set of voxels V as nodes and the set of arcs E as given by the pairs of voxels related by N. The graph constructed in this way is known as the CRF graph. The Hammersley–Clifford theorem states that the distribution of a CRF can be written as a Gibbs measure,
$$\begin{array}{@{}rcl@{}} P(\mathbf{Y}\mid\mathbf{x};\theta) &=& \frac{1}{Z(\mathbf{x},\theta)}\prod\limits_{c\in\mathcal{C}}{\Psi}_{c}(\mathbf{Y}_{c},\mathbf{x}_{c}) \\&=&\frac{1}{Z(\mathbf{x},\theta)}e^{-\sum\limits_{c\in\mathcal{C}}\theta_{c}\cdot \phi_{c}(\mathbf{Y}_{c},\mathbf{x}_{c})}, \end{array} $$
where Z(x, θ) is the partition function, \(\mathcal{C}\) is the set of cliques in the CRF graph and Y_c is the set of variables from Y related to clique c.
In other words, the Gibbs measure expresses the distribution as the product of potentials Ψ c (note that the potentials are not required to be probability distributions) depending on subsets of the complete set of random variables. The product of all potentials can be written as a weighted sum of factors using the minus logarithm,
$$ -\log \prod\limits_{c\in\mathcal{C}}\mathbf{\Psi}_{c}(\mathbf{Y}_{c},\mathbf{x}_{c}) = \sum\limits_{c\in\mathcal{C}}\theta_{c}\cdot \phi_{c}(\mathbf{Y}_{c},\mathbf{x}_{c}). $$
This sum, when evaluated for a fixed observation x, is known as the energy of the labeling and is denoted E(Y). This energy is a map from the space of labelings to the real numbers. For improbable labelings the energy gives large values, whereas for good, probable labelings it provides lower values.
Finding the best segmentation with the probability distribution of the CRF, that models some degree of dependency among neighboring voxels, requires maximizing the probability P(Y∣x; θ) for a fixed observation x. This is equivalent to minimizing the energy E(Y).
Figure 4 depicts the factor graph of our CRF. The factor graph is a bipartite graph that shows the random variables of the CRF graph as circles, as well as the potentials as little black squares. A potential depends on the random variables that it is connected to. Therefore, the given factor graph represents the following factorization of our distribution:
$$ P(\mathbf{Y}\mid\mathbf{x};\theta) \,=\, \frac{1}{Z(\mathbf{x},\theta)} \exp\left( -\sum\limits_{i\in V}\theta_{i}\cdot\phi_{i}(y_{i},\mathbf{x}) - \sum\limits_{(i,j)\in E}\theta_{ij}\cdot\phi_{ij}(y_{i},y_{j}) \right) $$
Factor graph for the CRF. The random variables are inside circle nodes. Black squares represent potentials depending on the random variables they are connected to. The energy of the CRF is the sum of all potentials. This figure shows only a fragment of a 2D CRF, but the generalization to 3D is straightforward
There are two kinds of potentials in this graph. The first kind of potential is associated with the terms ϕ_i(y_i, x). We will call them unary terms, since they only depend on a single variable y_i for a fixed observation in the energy function. The second kind of potential is related to the terms ϕ_ij(y_i, y_j). In an analogous way, we will call them pair-wise terms, since they depend on pairs of label variables.
Training a CRF consists of determining its parameters θ. This tends to be a complex task, especially if the CRF has many parameters, as in our case. We therefore need to simplify it further. A very common and reasonable assumption is that the CRF is translation and orientation invariant, and as a consequence all of the parameters for a kind of term (unary or pairwise) share the same value. This would lead to the energy function:
$$ \theta_{1}\sum\limits_{i\in V}\phi_{i}(y_{i},\mathbf{x}) + \theta_{2}\sum\limits_{(i,j)\in E}\phi_{ij}(y_{i},y_{j}). $$
Unfortunately, in non-isotropic stacks we cannot assume orientation invariance. Usually, the stack has a lower resolution in the Z axis than in the X and Y axes. Therefore, we must treat the pair-wise terms that are oriented in the Z axis in a different way. We divide the set E into two disjoint subsets, E_XY and E_Z, for the edges oriented along the X and the Y axes and for the edges oriented along the Z axis, respectively. The energy is now:
$$\begin{array}{@{}rcl@{}} \theta_{1}\sum\limits_{i\in V}\phi_{i}(y_{i},\mathbf{x}) &+& \theta_{XY}^{\prime}\sum\limits_{(i,j)\in E_{XY}}\phi_{ij}(y_{i},y_{j}) \\&+& \theta_{Z}^{\prime}\sum\limits_{(i,j)\in E_{Z}}\phi_{ij}(y_{i},y_{j}). \end{array} $$
Finally, since we are interested in the minimum energy, we can multiply the energy by \(\frac {1}{\theta _{1}}\) and the solution remains unchanged:
$$\begin{array}{@{}rcl@{}} \sum\limits_{i\in V}\phi_{i}(y_{i},\mathbf{x}) &+& \theta_{XY}\sum\limits_{(i,j)\in E_{XY}}\phi_{ij}(y_{i},y_{j}) \\&+& \theta_{Z}\sum\limits_{(i,j)\in E_{Z}}\phi_{ij}(y_{i},y_{j}). \end{array} $$
This energy has only two parameters, θ_XY and θ_Z, which control the strength of the regularization in the XY plane and along the Z axis. In an anisotropic stack we can assume that θ_XY and θ_Z are related by the expression \(\theta _{Z}=\frac {\theta _{XY}}{\rho }\), so only one of them needs to be estimated, either by cross-validation or manually by the user. Manual estimation is further facilitated by the fact that the CRF offers good results for a large range of parameter values.
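The structure of this two-parameter energy can be sketched as follows (a schematic evaluation on a tiny toy grid with made-up unary costs; only the decomposition into unary, in-plane and out-of-plane terms follows the text):

```python
theta_XY = 10.0
rho = 5.0                  # anisotropy factor (as in the CCDB-8192 stack)
theta_Z = theta_XY / rho   # = 2.0

def potts(a, b):
    # Simple label-change penalty: 0 for equal labels, 1 otherwise.
    return 0.0 if a == b else 1.0

def stack_energy(labels, unary, theta_XY, theta_Z):
    # labels and unary are indexed [z][y][x]; unary[z][y][x][l] = phi_i(l, x).
    Zs, Ys, Xs = len(labels), len(labels[0]), len(labels[0][0])
    e = 0.0
    for z in range(Zs):
        for y in range(Ys):
            for x in range(Xs):
                l = labels[z][y][x]
                e += unary[z][y][x][l]
                if x + 1 < Xs:   # edge in E_XY
                    e += theta_XY * potts(l, labels[z][y][x + 1])
                if y + 1 < Ys:   # edge in E_XY
                    e += theta_XY * potts(l, labels[z][y + 1][x])
                if z + 1 < Zs:   # edge in E_Z
                    e += theta_Z * potts(l, labels[z + 1][y][x])
    return e

# Toy 2x2x2 stack: a uniform labeling costs only its unary terms,
# while flipping one voxel adds three weighted pair-wise penalties.
unary = [[[[0.5, 1.0] for _ in range(2)] for _ in range(2)] for _ in range(2)]
uniform = [[[0 for _ in range(2)] for _ in range(2)] for _ in range(2)]
flipped = [[[0 for _ in range(2)] for _ in range(2)] for _ in range(2)]
flipped[0][0][0] = 1
e_uniform = stack_energy(uniform, unary, theta_XY, theta_Z)
e_flipped = stack_energy(flipped, unary, theta_XY, theta_Z)
```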
The unary and pair-wise terms must be defined so that they take lower values for good, probable inputs. The unary terms ϕ_i(y_i, x) are responsible for introducing the observed data into the energy value. It is customary to define them as the negative logarithm of the probability provided by the trained classifier: ϕ_i(y_i, x) = −log P(y_i ∣ f_i(x)). This definition is justified because, in the absence of the pair-wise terms, the CRF would yield the segmentation given by the classifier acting on each voxel separately.
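This negative-log conversion is straightforward to implement; the sketch below is a hypothetical example (the class probabilities are made up, and the epsilon clamp is a common implementation safeguard not discussed in the paper):

```python
import math

def unary_terms(probs, eps=1e-12):
    """Turn per-class probabilities P(y_i | f_i(x)) from a classifier
    into unary costs phi_i(y_i, x) = -log P(y_i | f_i(x)).
    Probabilities are clamped away from zero to keep the log finite."""
    return [-math.log(max(p, eps)) for p in probs]

# Hypothetical classifier output for one voxel over the three classes
# (background, synaptic junction, mitochondria): more probable labels
# receive lower unary costs.
costs = unary_terms([0.7, 0.2, 0.1])
```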
The role of the pair-wise terms ϕ_ij(y_i, y_j) is twofold. First, they regularize the segmentation results by penalizing label changes between neighboring voxels. This prevents the isolated pixels and small holes that could otherwise appear (Fig. 3c, d). Second, they introduce some degree of higher-order knowledge about the structure of the stack. For example, we can impose the condition that synaptic junctions and mitochondria cannot touch each other, which we do in our experiments by assigning a very large penalty to that label change.
The pair-wise terms are therefore assigned as follows. First, a low penalty of 1 is given to pairs of labels that are allowed to be neighbors. Second, a very high penalty (∞) is assigned to pairs of labels that cannot be adjacent (e.g., synaptic junctions and mitochondria). Third, no penalty is given to pairs of neighboring pixels with the same label. The distance matrix between the labels background (bg), synaptic junction (syn) and mitochondria (mit) is therefore:
$$\begin{array}{@{}rcl@{}} &&\hspace*{2.5pc} bg\quad syn\quad mit\\ &&\begin{array}{l} bg\\ syn\\ mit\\ \end{array} \left( \begin{array}{lll} 0&~~~~~~1&~~~~~1\\ 1 & ~~~~~~0 &~~~~\infty\\ 1 & ~~~~~\infty & \quad~0\\ \end{array}\right). \end{array} $$
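This label distance matrix can be encoded directly; using a large finite constant in place of ∞ is a common implementation trick, not something stated in the paper:

```python
BG, SYN, MIT = 0, 1, 2
INF = 1e9  # finite stand-in for the infinite penalty

# pairwise_cost[a][b] = penalty for labels a and b on neighboring voxels:
# 0 on the diagonal, 1 for allowed neighbors, INF for forbidden ones.
pairwise_cost = [
    [0.0, 1.0, 1.0],   # bg
    [1.0, 0.0, INF],   # syn: may not touch mitochondria
    [1.0, INF, 0.0],   # mit
]
```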
Once we have defined our energy, we need to find the segmentation that minimizes it, Y* = argmin_Y E(Y). This is in general an NP-hard optimization problem. However, it is known that when the terms are at most of order two (i.e., there are only unary and pair-wise terms), the number of labels is two, and the pair-wise terms are submodular, i.e., ϕ(y_i, y_i) + ϕ(y_j, y_j) ≤ ϕ(y_i, y_j) + ϕ(y_j, y_i), then a max-flow/min-cut algorithm finds the global optimum of the energy in polynomial time.
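The submodularity condition is easy to check mechanically. The helper below is an illustrative sketch (not part of the paper's pipeline); it confirms that a two-label Potts-style term satisfies the condition, while an inverted term does not:

```python
def is_submodular(phi):
    """Check phi(a,a) + phi(b,b) <= phi(a,b) + phi(b,a) for every label
    pair, the condition under which a two-label pair-wise energy is
    exactly minimizable by a max-flow/min-cut algorithm."""
    n = len(phi)
    return all(phi[a][a] + phi[b][b] <= phi[a][b] + phi[b][a]
               for a in range(n) for b in range(n))

# Potts-style two-label pair-wise term: 0 on the diagonal, 1 off it.
potts2 = [[0.0, 1.0], [1.0, 0.0]]
```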
Unfortunately, when there are more than two labels, the max-flow algorithm is no longer applicable. Instead, we rely on approximate energy minimization with the αβ-swap algorithm of Boykov et al. (2001). Figure 3c, d shows the segmentation of a single image after the αβ-swap regularization. Figure 5 shows the segmentation of a whole stack of serial EM images.
Segmentation of a whole stack of serial images after the regularization step. Mitochondria are shown in purple and synaptic junctions are shown in green
The graph-cut techniques needed for regularization require a considerable amount of computer memory. For a reasonably sized stack, the required memory usually becomes prohibitive. Therefore we regularize parts of the full stack separately and merge them together at the end.
A simple approach is to divide the stack into disjoint (i.e., non-overlapping) substacks and regularize them separately. This works well in the inner areas of each substack, but produces artifacts at their boundaries, since the CRF does not have enough context there to determine the correct labels. This is visually noticeable as abrupt terminations of mitochondria and synaptic junctions at these boundaries.
The solution to this problem consists of extending each substack with a margin, effectively making the new extended substacks overlap with their neighbors. The regularization is then applied to the extended substacks, but only the results obtained in the original substack volume are preserved in the final segmentation (Fig. 6).
Regularization in overlapping partitions. Example of partition of a full stack into two disjoint substacks A and B and two overlapping substacks A' and B'. Once the regularization has been performed in A', only voxels belonging to A are updated. Then, regularization is performed in B' and only voxels belonging to B are updated. This procedure prevents the appearance of regularization artifacts in the boundary between A and B
Determining the optimal size of the margin is a problem beyond the scope of this paper. However, we have found that a margin of 10 voxels in each direction offers very good results in practice.
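The bookkeeping for the overlapping partitions can be sketched as follows (an illustrative helper, not the paper's actual code; the stack dimensions and split are hypothetical):

```python
def substack_bounds(full, core, margin=10):
    """Extend a core substack (per-axis [lo, hi) bounds) by `margin`
    voxels in each direction, clipping to the full stack. Returns the
    extended bounds and, for write-back, the slice of the extended
    block that maps onto the original core region."""
    ext, core_in_ext = [], []
    for (flo, fhi), (clo, chi) in zip(full, core):
        elo, ehi = max(flo, clo - margin), min(fhi, chi + margin)
        ext.append((elo, ehi))
        core_in_ext.append((clo - elo, chi - elo))
    return ext, core_in_ext

# Hypothetical 700x700x50 stack split along x; core block covers
# x in [350, 700). Only voxels inside the core are updated after
# regularizing the extended block.
ext, core_in_ext = substack_bounds(
    full=[(0, 700), (0, 700), (0, 50)],
    core=[(350, 700), (0, 700), (0, 50)])
```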
Finally, the size of the substacks is limited by the available memory. As a rule of thumb, the regularization process takes 5 or 6 times the memory used by the original substack being regularized.
Segmentation Smoothing
The segmentation we obtain consists of hard label assignments to each voxel of the stack. This is suitable for several tasks, such as counting labeled objects or estimating the volume of the segmented objects, but presents disadvantages when visualizing their surfaces. We use the marching cubes algorithm to extract the surfaces of the objects from the labels in the segmentation volume. This process not only produces unpleasant and unnatural renderings, but also biases the area estimates. Therefore we need to smooth the surfaces while preserving the constraints imposed by the labels in the segmentation volume. Among the infinitely many surfaces that meet these constraints, we compute the surface that minimizes curvature (see Fig. 7). To this end, we have adapted the method described by Lempitsky (2010) to anisotropic stacks.
Example of a smoothed surface that meets the constraints imposed by the segmentation. The red and blue dots indicate the pixels that are inside and outside the object, respectively. The green and blue contours are the maximum and minimum area contours, respectively. Although there are infinitely many contours that would lie between them, we look for the smoothed minimum curvature contour depicted in red
First, we build a vector of constraints \(\{v_{i}\}_{i=1}^{N}\in \{-1,1\}^{N}\) such that v_i = 1 if voxel i is inside an object and v_i = −1 if it is outside, i.e., if it has the background label. We then find a real-valued vector \(\{f_{i}\}_{i=1}^{N}\in \mathbb {R}^{N}\) such that f_i > 0 if v_i = 1, f_i < 0 if v_i = −1, and its zero level set is smooth. In a continuous setting we would minimize the functional
$$ f^{*} = \arg\min_{f} {\int}_{\Omega} \left( \frac{\partial^{2} f}{\partial x^{2}}\right)^{2} + \left( \frac{\partial^{2} f}{\partial y^{2}}\right)^{2} + \left( \frac{\partial^{2} f}{\partial z^{2}}\right)^{2}\,\mathrm{d}V. $$
The minimization of the above functional smooths the segmentation result. In an anisotropic discrete setting, the smoothing problem becomes
$$\begin{array}{@{}rcl@{}} f^{*} &=& \min_{f} \sum\limits_{i} \left( f_{N_{x}(i)} + f_{N_{-x}(i)} - 2f_{i}\right)^{2} \\ &+& \left( f_{N_{y}(i)} + f_{N_{-y}(i)} - 2f_{i}\right)^{2} \\&+& \frac{\left( f_{N_{z}(i)} + f_{N_{-z}(i)} - 2f_{i}\right)^{2}}{\rho^{4}}, \end{array} $$
subject to
$$ v_{i}\cdot f_{i} \ge m_{i} \quad \forall i, $$
where N_{±d}(i) is the neighbor of i in the positive or negative direction along axis d, and \(\{m_{i}\}_{i=1}^{N}\) is a vector of margins imposing the deviation of f from zero at each point: m_i is 0 if i is at the boundary of an object (i.e., if v_i and v_j differ for some neighbor j of i) and 1 everywhere else. We have included the factor ρ in the second derivative along the Z axis to account for the anisotropy of the stack.
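A schematic 1D analogue of this constrained smoothing is sketched below. It uses projected gradient descent rather than the modified Jacobi solver of Lempitsky (2010), and the step size, iteration count and example data are arbitrary choices for illustration:

```python
def smooth_levelset(v, m, iters=500, step=0.05):
    """Minimize sum_i (f[i-1] + f[i+1] - 2 f[i])^2 subject to
    v[i] * f[i] >= m[i], with v[i] in {-1, +1}.
    Starts from the feasible point f = v and projects back onto the
    constraint set after every gradient step."""
    f = [float(vi) for vi in v]
    for _ in range(iters):
        g = [0.0] * len(f)
        for i in range(1, len(f) - 1):
            r = f[i - 1] + f[i + 1] - 2.0 * f[i]
            g[i - 1] += 2.0 * r
            g[i + 1] += 2.0 * r
            g[i] -= 4.0 * r
        for i in range(len(f)):
            f[i] -= step * g[i]
            if v[i] * f[i] < m[i]:   # project onto v_i * f_i >= m_i
                f[i] = v[i] * m[i]
    return f

# Object spanning positions 2..5 of an 8-voxel line. Margins are 0 at
# the object boundary voxels and 1 everywhere else, as in the text.
v = [-1, -1, 1, 1, 1, 1, -1, -1]
m = [1, 0, 0, 1, 1, 0, 0, 1]
f = smooth_levelset(v, m)
```

The zero level set of the returned f gives the smoothed contour while the sign constraints keep every voxel on its assigned side.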
Equations (18) and (19) constitute a convex quadratic program that can be solved with the Jacobi method, slightly modified to include the constraints in the algorithm (Lempitsky 2010). Figure 8 shows a mitochondrion before and after smoothing with this method.
Example of a branching mitochondrion. A large branching mitochondrion is shown before (a) and after smoothing (b)
Note that this smoothing only affects the estimated surfaces and, therefore, the rendering of these surfaces. The segmentation volume and the numerical results extracted from it are not affected by this procedure. Therefore, the quantitative comparisons offered in the Section "Results" are computed with no smoothing.
As explained in the previous section, we conceived our algorithm to be used interactively. However, to evaluate its performance we have used two datasets that were fully segmented manually. We need these manual segmentations as ground truth to validate our results and compare them with others. Moreover, given that we have enough training data, we use them to find optimal values for the base scale σ_0 and the regularization parameter θ_XY.
We have used several voxel-based metrics to evaluate the quality of the segmentations. Voxel-based metrics measure the error rates in voxel classification taking into account true positive (TP), true negative (TN), false positive (FP) and false negative (FN) classifications.
True positive rate (TPR):
$$ \text{TPR} = \frac{\text{TP}}{\text{TP}+\text{FN}} $$
False positive rate (FPR):
$$ \text{FPR} = \frac{\text{FP}}{\text{FP} + \text{TN}} $$
Accuracy (ACC):
$$ \text{ACC} = \frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}} $$
Jaccard index (JAC):
$$ \text{JAC} = \frac{\text{TP}}{\text{TP} + \text{FP} + \text{FN}} $$
Volume error (VOE):
$$ \text{VOE} = \frac{\left|\text{FP} - \text{FN}\right|}{\text{TP} + \text{FN}} $$
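The five metrics above translate directly into code. The sketch below is a straightforward implementation with hypothetical counts for illustration:

```python
def voxel_metrics(tp, tn, fp, fn):
    """Voxel-based segmentation metrics computed from true/false
    positive/negative counts, as defined above."""
    return {
        "TPR": tp / (tp + fn),
        "FPR": fp / (fp + tn),
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "JAC": tp / (tp + fp + fn),
        "VOE": abs(fp - fn) / (tp + fn),
    }

# Hypothetical confusion counts for a small volume.
m = voxel_metrics(tp=80, tn=900, fp=10, fn=20)
```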
Unless otherwise stated, all running times were obtained on a Linux system with an Intel Xeon at 2.40GHz with no GPU processing. Our algorithm is mostly implemented in Python using the NumPy library. The inner parts of the CRF regularization are written in C++. Our implementation runs in a single thread.
Mitochondria Segmentation
We used a stack of serial EM images from the mouse cerebellum to test our method for the segmentation of mitochondria. This stack is available online in the Cell Centered Database (footnote 1) with ID 8192. We selected this stack to make our method comparable with Cytoseg, an automatic segmentation tool proposed by Giuly et al. (2012). These researchers provide the raw images as well as a manual segmentation of mitochondria on the Cytoseg web page (footnote 2). The stack has a size of 700×700×50 voxels and a voxel size of 10×10×50 nm (anisotropy factor ρ = 5). We have applied our method to automatically detect the mitochondria in this stack.
The stack was divided into 5 sets of 10 images and we used 5-fold cross-validation to estimate the quality of the segmentation for different pairs of values of σ_0 and θ_XY. Values of σ_0 ranged from 2 to 20 and values of θ_XY from 0 to 20. Figure 9 plots the TPR, FPR, ACC and JAC metrics obtained. The curves show that the base scale for feature extraction σ_0 is much more critical to the quality of the segmentations than the regularization penalty θ_XY. In fact, the regularization penalty is almost irrelevant for the quantitative measurements, but much more important in the visual or qualitative results (see Fig. 10).
Metrics obtained by cross-validation for several values of the parameters σ_0 and θ_XY. (a) TPR vs. σ_0 for different values of θ_XY. (b) FPR, (c) ACC and (d) JAC
Segmentation of mitochondria in a 700×700×50 stack of EM serial images. The left column shows four individual, non-consecutive images from the stack. The second column shows ground truth data, manually segmented by an expert. The two rightmost columns show the results obtained with our algorithm using a base scale σ_0 = 6 and two different sets of regularization parameters θ_XY and θ_Z.
From the results of the cross-validation process, we chose σ_0 = 6, as it offers a good trade-off between the considered metrics. For the regularization parameter θ_XY we selected two different values, 10 and 20; the parameter \(\theta _{Z}=\frac {\theta _{XY}}{\rho }\) is accordingly set to 2 and 4, respectively.
After choosing the parameters, we applied our segmentation algorithm to the full stack, using the last 10 slices for training. The results obtained with our method are similar to those obtained with the Cytoseg process (see Table 1). However, Cytoseg (Giuly et al. 2012) required 80 minutes of processing time for a 350×350×30 stack according to their paper, while our algorithm took 53.8 seconds to segment a 700×700×50 stack (9.6 seconds for training and 44.2 seconds for labeling the complete stack, including CRF regularization).
Table 1 Comparative results for mitochondria detection performed by our algorithm with two different sets of regularization parameters, by Ilastik (Sommer et al. 2011), and by the Cytoseg process, according to (Giuly et al. 2012)
We also applied the software Ilastik (Sommer et al. 2011; www.ilastik.org) to segment mitochondria in this dataset. The quantitative results obtained with Ilastik (see Table 1) are comparable to those of the other methods. However, Ilastik took 56.5 minutes to process the full stack using 8 threads, for a total of 452 CPU-minutes. This is about 500 times slower than our method.
Other methods for mitochondria segmentation are even less suitable for large anisotropies. Supervoxel segmentation with learned shape features (Lucchi et al. 2012) aims to learn the non-local shapes of the target objects, using a combination of supervoxels, 3D ray features and structured prediction. 3D ray features are especially affected by anisotropy, since both the edge detector and the length of the rays depend strongly on orientation. The achievable segmentation accuracy (i.e., the highest accuracy that can be reached using supervoxels, assuming perfect classification of each supervoxel) drops significantly with anisotropy. Moreover, the structured prediction requires training with a large, fully labeled portion of the stack in order to infer the terms of the pairwise interaction. As a consequence of these factors, the method of Lucchi et al. (2012) required more training data (half of the stack) to work properly, and provided rather unsatisfactory results with low Jaccard indices (<0.48). The running times were also higher (>21 minutes), due mainly to the cost of extracting the ray features.
Mitochondria and Synaptic Junctions Segmentation
For this test we used a stack of 366×494×213 voxels with a voxel size of 14.7×14.7×20 nm (ρ = 1.36), acquired from the rat somatosensory cortex. We used the first 100 slices of the stack to estimate the parameters of the algorithm with 5-fold cross-validation. The results are plotted in Fig. 11. Again, the base scale σ_0 had a critical impact on performance, whereas the regularization penalty θ_XY only caused subtle variations. Note that we were segmenting three different classes (background, mitochondria and synapses), and therefore we measured the quality of segmentations with mitochondria-vs-rest and synapses-vs-rest metrics, i.e., considering one of the classes as foreground and the rest as background.
Simultaneous segmentation of mitochondria and synaptic junctions. Metrics obtained by cross-validation for several values of the parameters σ_0 and θ_XY. ACC, TPR, and JAC are shown for mitochondria (a–c) and synaptic junctions (d–f)
From the results of cross-validation, we chose σ_0 = 1 and σ_0 = 2 as trade-off values that worked reasonably well for both mitochondria and synaptic junctions. We set θ_XY = 4 and \(\theta _{Z} = \frac {\theta _{XY}}{\rho } = 2.94\). The training was performed using 11 evenly distributed slices of the stack and took 3.27 seconds to complete. The segmentation of the full stack (213 serial images) took 10.15 minutes (48.31 seconds for the classification step and the rest for regularization). Figure 12 shows the results of our algorithm and Ilastik, and Table 2 compares their quantitative performance. The Ilastik results were obtained using a manual segmentation very similar to the one used with our algorithm. The numerical performance of both methods was similar, with ours being marginally better. However, Ilastik took 56 minutes with 8 threads to train and segment the full stack, making our method 45 times faster. The visual appearance of the final segmentations was also much better in our case, thanks to the regularization procedure (see Fig. 12).
Simultaneous segmentation of mitochondria and synaptic junctions. The left column shows the original images; the center left column is the ground truth; the center right column is the result obtained with Ilastik; the right column is the result of our algorithm with parameters σ_0 = 2, θ_XY = 4, θ_Z = 2.94. The first row shows a full slice. The second and third rows show zoomed regions of different slices for detail
Table 2 Quantitative results for the simultaneous segmentation of mitochondria and synaptic junctions
Running Time Comparison
Table 3 summarizes the running times for our experiments on both datasets. Our method runs much faster than the others with similar or better performance. Cytoseg and the learned shape features method are specialized in mitochondria segmentation; thus, we only report results for them on the first dataset.
Table 3 Absolute and normalized running times for different methods. Absolute times are given in seconds of CPU (s ⋅CPU), and normalized times (in parentheses) are given in seconds of CPU per megavoxel \(\left (\frac {\mathrm {s}\cdot \textrm {CPU}}{\text {Megavoxel}}\right )\)
There is an important difference between the running times of our method on the two datasets (2.15 vs. 15.9 seconds of CPU per megavoxel). This large difference is due to the regularization with more than 2 labels, where a single graph cut is not feasible and slower iterative algorithms such as αβ-swap are required.
Counting Structures
Estimating the number of structures from the results of an automatic segmentation process is still an open problem with plenty of ongoing research. As an approximate, simple solution, it is commonly assumed that each connected component of the segmentation is one structure. This is the approach we use. Despite its simplicity, it has several drawbacks: a group of structures close to each other often merges into a single connected component, and large structures are sometimes split into two or more connected components. Also, when spatial regularization is absent, false positive detections result in many small connected components that bias the counting estimations. To alleviate these problems, we discard the connected components smaller than a given threshold. Setting the threshold is not trivial, as it might greatly affect the counts depending on the quality of the segmentation. A good segmentation is expected to be more robust to different thresholds than a bad one, i.e., its estimations should be close to the real value and stable for a large range of thresholds.
Figure 13 shows count estimations for different thresholds for both Ilastik and our method. The regularization makes our method more robust: it reduces the number of small components, and the estimations of our method are closer to the ground truth for a wider range of thresholds. Table 4 provides a numerical assessment of this idea. It shows the absolute deviations of the estimations from the ground truth, averaged over all thresholds in the range \(T = [10, 2000]\subset \mathbb {Z}\):
$$ \frac{1}{|T|}\sum\limits_{t\in T} \left|\textrm{\#CC}_{[\text{size} \ge t]} - \text{GT}\right|, $$
where #CC[size ≥ t] is the number of connected components with size ≥ t, and GT is the real count. Table 4 shows that our method has smaller errors than Ilastik for all datasets and considered structures.
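This averaged counting error is simple to compute; the sketch below uses hypothetical component sizes and a hypothetical ground-truth count purely for illustration:

```python
def count_components(sizes, t):
    """Number of connected components with size >= t voxels."""
    return sum(1 for s in sizes if s >= t)

def mean_abs_count_error(sizes, gt, thresholds):
    """Average |#CC_[size >= t] - GT| over the given thresholds."""
    return sum(abs(count_components(sizes, t) - gt)
               for t in thresholds) / len(thresholds)

# Hypothetical component sizes (in voxels) and a ground-truth count
# of 3 structures; thresholds span [10, 2000] as in the text.
sizes = [5, 12, 450, 1500, 2600]
err = mean_abs_count_error(sizes, gt=3, thresholds=range(10, 2001))
```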
Count estimations for varying thresholds. (a) shows estimations in the number of mitochondria for the CCDB-8192 dataset. (b) and (c) show estimations in the number of mitochondria and synaptic junctions respectively in the Mit&Syn dataset
Table 4 Average absolute error of estimations of the number of mitochondria and synaptic junctions over all thresholds in the range [10,2000] voxels
Concerning the segmentation of mitochondria, Lucchi et al. (2012) have recently used ray descriptors and the gray-level histogram as the key features to classify 3D image supervoxels. The result of this classification is further regularized using graph cuts to minimize an energy function involving learned potentials. They used stacks of FIB/SEM images from the hippocampus and striatum that had isotropic resolution. In their method, isotropy is an essential requirement for the computation of the 3D supervoxel over-segmentation. Alternatively, Giuly et al. (2012) segment mitochondria in anisotropic stacks of images obtained by SBFSEM. They use a random forest classifier to label 2D image patches. The result of this initial segmentation is further refined using 2D contour classification across images and 3D level-set surface evolution. Their method, however, is computationally intensive, requiring long processing times.
Regarding synapses, the popular Ilastik toolbox (Sommer et al. 2011), used by Kreshuk et al. (2011) to segment synaptic junctions, relies on a random forest classifier with a set of differential image features and a simple regularization strategy based on Gaussian smoothing. Overall, the resulting algorithm is also very demanding in terms of computing power.
Our method does not require isotropic voxels, so it can be applied to image stacks that have been acquired with different resolutions along the X, Y and Z axes. The results obtained with our method were similar to or better than those obtained with the Cytoseg process (Giuly et al. 2012) for mitochondria only, and than those obtained with Ilastik both for mitochondria only and for simultaneous mitochondria and synaptic junctions. Other approaches, such as that of Lucchi et al. (2012), are not ready to work with anisotropic stacks, and therefore our method outperforms them. Unlike Cytoseg, which focuses on mitochondria segmentation, our method is not tied to a specific type of cellular structure and can be used to segment a variety of structures. Compared with Ilastik, we obtained better visual results thanks to the regularization and surface smoothing techniques described above.
Moreover, our method is much faster than any other approach we have tried. The speed-up comes from the Gaussian classifier, which can be trained in O(Nk² + k³) time, where N is the number of data points and k the dimension of the feature space. For comparison, the complexity of training random forests is O(MNkd), where M is the number of trees and d the average depth of the trees. We found in our experiments that the classifier was the main bottleneck of the Ilastik approach. In our approach the most expensive computation is the regularization step, which Ilastik omits. On the other hand, we found no significant difference in the speed of feature extraction, which takes only a small fraction of the total processing time in all compared methods.
For the case of segmentation of 2 labels, a speed of 2.15 seconds per megavoxel in a single thread is fast enough to enable interactive segmentation of the large image stacks that are now available, providing real-time feedback to the user. Of course, parallelization of the proposed approach is straightforward, and it would make it even faster. To our knowledge, no other previous work provides state-of-the-art performance while running in an interactive setting.
We have presented an algorithm that can be trained to segment a variety of structures in anisotropic EM stacks. In this work we have focused on its capabilities for the segmentation of synaptic junctions and mitochondria. It features some important properties that are not available in other methods in the literature. It uses a graph cut-based image regularization procedure that not only provides better segmentations, but also introduces high level knowledge about the structure of labels. We have solved the limitation of graph cuts in terms of memory requirements with the introduction of energy optimization in overlapping partitions. This allows the regularization of very large stacks. The surface smoothing step introduces smoothness priors on the segmentation that improves the appearance of three-dimensional renderings of the segmented volumes. Finally, and most importantly, we have also shown that our approach is much faster than any other competing method with a state-of-the-art quantitative segmentation performance.
Information Sharing Statement
The automatic segmentation method described in this paper is available as a plugin of the imaging processing software Espina. The software and instructions for installing it can be found at http://cajalbbp.cesvima.upm.es/espina.
This software provides an efficient multi-thread implementation of the presented algorithm together with an intuitive user interface. After activating the Automatic Segmentation plugin, the user has to segment a few voxels of the target objects manually and receives almost real-time feedback of the results. Additional manual segmentations can be performed until the user is satisfied with the final results. Quantitative data regarding the segmented objects are then obtained with standard Espina tools.
1 http://ccdb.ucsd.edu
2 https://code.google.com/p/cytoseg/
Anton-Sanchez, L., Bielza, C., Merchán-Pérez, A., Rodríguez, J.-R., DeFelipe, J., & Larrañaga, P. (2014). Three-dimensional distribution of cortical synapses: A replicated point pattern-based analysis. Frontiers in Neuroanaty, 8, 85. doi:10.3389/fnana.2014.00085.
Becker, CJ, Ali, K, Knott, G, & Fua, P (2013). Learning Context Cues for Synapse Segmentation. IEEE Transactions on Medical Imaging, 32(10), 1864–1877. doi:10.1109/Tmi.2013.2267747.
Boykov, Y, Veksler, O, & Zabih, R (2001). Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 23(11), 1222– 1239.
DeFelipe, J (2010). From the connectome to the synaptome: An epic love story. Science, 330(6008), 1198–1201.
Denk, W, & Horstmann, H (2004). Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biology, 2(11), e329. doi:10.1371/journal.pbio.0020329.
Giuly, R, Martone, M, & Ellisman, M (2012). Method: Automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets. BMC Bioinformatics, 13(1), 29+. doi:10.1186/1471-2105-13-29.
Haindl, M, & Mikes, S (2008). Texture segmentation benchmark. In ICPR, IEEE. http://dblp.uni-trier.de/db/conf/icpr/icpr2008.html#HaindlM08a (pp. 1–4).
Jagadeesh, V, Anderson, J, Jones, B, Marc, R, Fisher, S, & Manjunath, B (2013). Synapse classification and localization in electron micrographs. Pattern Recognition Letters. doi:10.1016/j.patrec.2013.06.001.
Jurrus, E, Hardy, M, Tasdizen, T, Fletcher, PT, Koshevoy, P, Chien, CB, Denk, W, & Whitaker, R (2009). Axon tracking in serial block-face scanning electron microscopy. Medical Image Analysis, 13 (1), 180–188. doi:10.1016/j.media.2008.05.002. Includes Special Section on Medical Image Analysis on the 2006 Workshop Microscopic Image Analysis with Applications in Biology.
Knott, G, Marchman, H, Wall, D, & Lich, B (2008). Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. Journal of Neuroscience, 28(12), 2959–2964. doi:10.1523/JNEUROSCI.3189-07.2008.
Kreshuk, A, Straehle, C, Sommer, C, Koethe, U, Cantoni, M, Knott, G, & Hamprecht, F (2011). Automated detection and segmentation of synaptic contacts in nearly isotropic serial electron microscopy images. PLoS ONE, 6, e24899. doi:10.1371/journal.pone.0024899.
Lempitsky, VS (2010). Surface extraction from binary volumes with higher-order smoothness. In Proc. International Conference on Computer Vision and Pattern Recognition (CVPR), IEEE. http://dblp.uni-trier.de/db/conf/cvpr/cvpr2010.html#Lempitsky10 (pp. 1197–1204).
Lucchi, A, Smith, K, Achanta, R, Knott, G, & Fua, P (2012). Supervoxel-based segmentation of mitochondria in em image stacks with learned shape features. IEEE Transactions on Medical Imaging, 31(2), 474–486. http://dblp.uni-trier.de/db/journals/tmi/tmi31.html#LucchiSAKF12.
Martin, D, Fowlkes, C, Tal, D, & Malik, J. (2001). A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics (Vol. 2, pp. 416–423).
McBride, HM, Neuspiel, M, & Wasiak, S (2006). Mitochondria: More than just a powerhouse. Current Biology, 16(14), R551–R560. doi:10.1016/j.cub.2006.06.054.
Merchan-Perez, A, Rodriguez, JR, Alonso-Nanclares, L, Schertel, A, & DeFelipe, J (2009). Counting synapses using FIB/SEM microscopy: a true revolution for ultrastructural volume reconstruction. Frontiers in Neuroanatomy, 3(18). doi:10.3389/neuro.05.018.2009. http://www.frontiersin.org/neuroanatomy/10.3389/neuro.05.018.2009/abstract.
Morales, J, Alonso-Nanclares, L, Rodriguez, JR, Defelipe, J, Rodriguez, A, & Merchan-Perez, A (2011). Espina: a tool for the automated segmentation and counting of synapses in large stacks of electron microscopy images. Frontiers in Neuroanatomy, 5(18).
Narasimha, R, Ouyang, H, Gray, A, McLaughlin, SW, & Subramaniam, S (2009). Automatic joint classification and segmentation of whole cell 3d images. Pattern Recognition, 42(6), 1067–1079. doi:10.1016/j.patcog.2008.08.009.
Navlakha, S, Suhan, J, Barth, AL, & Bar-Joseph, Z (2013). A high-throughput framework to detect synapses in electron microscopy images. Bioinformatics, 29(13), i9–i17. doi:10.1093/bioinformatics/btt222.
Santos, RX, Correia, SC, Wang, X, Perry, G, Smith, MA, Moreira, PI, & Zhu, X (2010). Alzheimer's disease: diverse aspects of mitochondrial malfunctioning. International Journal of Clinical and Experimental Pathology, 3, 570–581.
Sommer, C, Straehle, C, Kothe, U, & Hamprecht, F (2011). Ilastik: Interactive learning and segmentation toolkit. In IEEE International Symposium on Biomedical Imaging: From Nano to Macro. doi:10.1109/ISBI.2011.5872394 (pp. 230–233).
Turaga, SC, Murray, JF, Jain, V, Roth, F, Helmstaedter, M, Briggman, K, Denk, W, & Seung, HS (2010). Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22, 511–538.
\begin{definition}[Definition:Decimal Expansion/Decimal Place]
Let $x \in \R$ be a real number.
Let the '''decimal expansion''' of $x$ be:
:$x = \sqbrk {s \cdotp d_1 d_2 d_3 \ldots}_{10}$
Then $d_k$ is defined as being the digit in the $k$th '''decimal place'''.
\end{definition}
What price of speed? A critical revision through constructal optimization of transport modes
Michele Trancossi
International Journal of Energy and Environmental Engineering, volume 7, pages 425–448 (2016)
The use of energy by the major transport modes and the environmental impact of freight transportation are problems of increasing importance for future transportation policies. This paper studies the relative energy efficiency of the major transport modes, setting up an impartial analysis that substantially improves on previous literature. Gabrielli and von Karman studied the relationship between speed and energy consumption of the most common transport modes, and from this pioneering work different methods for evaluating the energetic performance of vehicles have developed. Initially, the maximum vehicle power and theoretical performance limits were calculated in terms of weight and payload. Energy efficiency was then evaluated in terms of the first principle of thermodynamics, as the mass of the vehicle times the distance moved divided by the thermal energy used. A more effective analysis can be performed both in terms of vehicle life cycle and in terms of the second principle, considering the quality and amount of useful energy dissipated. This paper defines an LCA-based model that allows an effective comparison between different transport modes, classifying them in terms of exergy destruction. In this way an effective comparison, which considers the quality of the energy used, can be performed, enabling more precise policies for the future evaluation of transport modes.
Energy demand is growing, and affordable, secure energy supplies are fundamental to global economic growth and human development. The scenario described by the "World Energy Outlook 2013" [1] (WEO 2013), together with the forecasts of the "2014 World Energy Issues Monitor" [2] (2014 WEIM), presents large uncertainty about the future and a dramatic increase in energy demand, driven by non-OECD economic growth. Figure 1 shows historical data from WEO 2013 and Fig. 2 presents provisional data from 2014 WEIM.
World Energy Consumption historical trend (data from IEA WEO 2013)
Energy consumption forecast 2010–2040 (data from 2014 WEIM)
Future energy perspectives present diffuse uncertainty related to the high volatility of energy prices, the lack of a global agreement on climate change mitigation, the demand for new energy infrastructures, the slow development of carbon capture technologies, and the necessity of increasing energy efficiency.
It is evident that the projections for the future exceed the sustainability limits of the planet, both in terms of destruction of resources and in terms of climate change, which is directly related to GHG emissions.
Transport sector overview
Even if it is not the main contributor to energy consumption, the transport sector will play a fundamental role in the future well-being of humanity. In particular, energy use in the transportation sector includes energy consumed in moving people and goods by road, rail, air, water, and pipeline. These transportation systems are essential in an increasingly globalized world, as well as for enhancing standards of living.
Trade and economic activity seem to be the most significant factors increasing demand for freight transportation. The factors that will affect the demand for passenger transportation appear much more complex and include uncertain parameters such as travel behavior, land use patterns, and urbanization. This increased complexity implies larger uncertainty about the macroeconomic and fuel market impacts of passenger transportation.
Any analysis of the energetic impact of transport modes must necessarily consider the different modes and their energy efficiency, to allow the definition of effective strategies for reducing energy consumption. Two main decisional elements emerge for the future: a short-term strategy based on better planning of transport modes, and a long-term strategy based on substantial improvements of vehicles.
Any analysis of transport modes must consider two fundamental parameters: speed and energy intensity. Increasing speed increases social efficiency and allows reducing costs for public and private institutions and for citizens. On the other side, energy consumption causes economic, environmental and social costs.
An overview of scientific literature
The first fundamental attempt to analyze the relations between speed and energy consumption of different transport modes was produced by Gabrielli and von Karman [3]. This analysis introduces a physical parameter, named the specific resistance of the vehicle $\varepsilon$, defined as the ratio of the motor output power $P_{\max}$ to the product of the total vehicle weight $W$ and the maximum speed $V_{\max}$:
$$ \varepsilon = \frac{{P_{\hbox{max} } }}{{W \cdot V_{\hbox{max} } }} $$
It is fundamental to notice that Gabrielli and von Karman consider the gross weight of the vehicle, because "exact information about the useful load of vehicles was not available to the authors." They clearly demonstrated that the specific resistance has a minimum value, which applies to all the examined transport modes and appears as a physical limit of all transport modes. It corresponds to the line of equation
$$ \varepsilon_{\hbox{min} } = A \cdot V_{\hbox{max} } , $$
where A = 0.000175 h/mile. The Gabrielli–von Karman limit line of vehicular performances depicts this relationship. It is the diagonal line indicated in Fig. 3.
Gabrielli–Von Karman graph (from Neodymics [4] )
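As a quick numerical illustration of Eqs. (1) and (2), the specific resistance and the limit line can be sketched as follows; the vehicle figures used (power, weight, top speed) are illustrative assumptions, not data from the paper.

```python
# Gabrielli-von Karman specific resistance: a minimal sketch.
# The vehicle figures below are illustrative assumptions.

A = 0.000175  # h/mile: slope of the Gabrielli-von Karman limit line

def specific_resistance(p_max_w, weight_n, v_max_m_s):
    """Dimensionless specific resistance eps = P_max / (W * V_max)."""
    return p_max_w / (weight_n * v_max_m_s)

def gvk_limit(v_max_mph):
    """Minimum specific resistance eps_min = A * V_max (V_max in mph)."""
    return A * v_max_mph

# Hypothetical midsize car: 80 kW engine, 1300 kg, top speed 50 m/s (~112 mph)
eps = specific_resistance(80e3, 1300 * 9.81, 50.0)
limit = gvk_limit(112.0)
```

With these assumed figures the car sits well above the limit line (eps of about 0.125 against a limit near 0.02), consistent with the position of road vehicles in Fig. 3.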
Stamper [4] reconsidered the Gabrielli–von Karman results in terms of the ratio between payload weight and fuel consumption, introducing one of the future trends of transport energy efficiency: referring to the payload of the different vehicles, without considering the vehicle as part of the transported weight. Stamper defined "useful transport work" as the product of payload weight and distance traveled, and "transport efficiency" as the ratio of useful transport work to thermal energy expended. This model is useful from a logistic point of view but loses any physical connection to the real nature of transport, which is composed of two fundamental elements: the vehicle and the payload.
In a subsequent analysis, Teitler and Proodian [5] categorized military vehicles and considered a new characteristic quantity, which they named "specific fuel expenditure", defined as
$$ \varepsilon_{\text{F}} = \frac{\zeta }{{\eta \cdot W_{\text{P}} }} $$
where $\zeta$ is the energy per unit volume of fuel, $\eta$ is the distance traveled per unit volume of fuel, and $W_{\text{P}}$ is the weight of the payload. A new variable, the reciprocal of $\varepsilon_{\text{F}}$, has been defined as the "fuel transport effectiveness", which relates directly to the cruising speed of the vehicle $V_{\text{C}}$ through a factor of proportionality $C_{\text{F}}$:
$$ \frac{1}{{\varepsilon_{\text{F}} }} = \frac{1}{{C_{\text{F}} \cdot V_{\text{C}} }} $$
This definition allows identifying the factor of proportionality $C_{\text{F}}$ as "the next level of fuel transport effectiveness to be used as a future standard", which is represented in Fig. 4 by the dashed diagonal line [6].
Dimensionless fuel transport effectiveness plotted as a function of cruise speed (adapted from Teitler and Proodian [5] by Neodymics [4] )
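Under the definitions of Eqs. (3) and (4), both the fuel transport effectiveness and $C_{\text{F}}$ follow directly; the diesel-truck numbers in this sketch are illustrative assumptions.

```python
# Fuel transport effectiveness (Teitler & Proodian), a hedged sketch.
# The truck figures below are illustrative assumptions.

def fuel_transport_effectiveness(zeta_j_m3, eta_m_m3, payload_n):
    """1/eps_F = eta * W_P / zeta (dimensionless)."""
    return eta_m_m3 * payload_n / zeta_j_m3

def proportionality_factor(effectiveness, v_cruise_m_s):
    """C_F from 1/eps_F = 1/(C_F * V_C)."""
    return 1.0 / (effectiveness * v_cruise_m_s)

# Illustrative heavy truck: diesel ~36 GJ/m^3, 3 km per litre, 25 t payload
inv_eps = fuel_transport_effectiveness(36e9, 3e6, 25e3 * 9.81)
c_f = proportionality_factor(inv_eps, 25.0)  # at a 90 km/h cruise
```

Note that $\zeta/(\eta W_{\text{P}})$ is dimensionless, so $C_{\text{F}}$ carries units of inverse speed, as required by Eq. (4).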
Referencing Gabrielli–von Karman [3] and Teitler and Proodian [5], Minetti [7], Young [8] and Hobson [9] have considered $A$ or $C_{\text{F}}$ as a factor describing an experiential performance limit, and $\varepsilon_{\text{F}}^{-1}$ or $\varepsilon_{\text{F}}$ as a general performance parameter.
Radtke [10] produced a further development of the above models. He observed that by combining speed and energy expenditure a novel performance parameter could be obtained that considers payload and energy needs under cruising conditions. Treating the payload as a mass (denoted $M_{\text{P}}$) rather than a weight yields a performance parameter $Q$ with units of time. For cruise conditions, $Q_{\text{C}}$ is obtained as:
$$ Q_{\text{C}} = \frac{{g_{\text{o}} }}{{C_{\text{F}} }} = V_{\text{C}} \cdot M_{\text{P}} \cdot \frac{\eta }{\zeta } $$
Radtke used certified data such as the EPA fuel economy ratings to represent how vehicles are actually used. In particular, he adopted the highway rating, which describes free-flow traffic at highway speeds [11]. He then produced an energetic analysis of different vehicles, including aircraft and electric vehicles.
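Equation (5) translates into a one-line function; a dimensional check confirms that $Q_{\text{C}}$ comes out in seconds. The sedan figures below are illustrative assumptions, not EPA data.

```python
def q_cruise(v_cruise_m_s, payload_kg, eta_m_per_m3, zeta_j_per_m3):
    """Radtke's Q_C = V_C * M_P * eta / zeta.

    Units: (m/s) * kg * (m/m^3) / (J/m^3) = kg m^2 / (s J) = s.
    """
    return v_cruise_m_s * payload_kg * eta_m_per_m3 / zeta_j_per_m3

# Illustrative sedan: 30 m/s cruise, 300 kg payload,
# 15 km/l fuel economy (1.5e7 m/m^3), gasoline ~34.2 GJ/m^3
q = q_cruise(30.0, 300.0, 15e6, 34.2e9)
```

A larger $Q_{\text{C}}$ means more payload moved faster per unit of fuel energy, so the parameter rewards exactly the trade-off Radtke set out to capture.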
Dewulf and Van Langenhove [12] adopted a completely different approach based on an elementary exergetic analysis. They present an effective assessment of the sustainability of transport technologies in terms of resource productivity, based on the concept of material input per unit of service (MIPS). If the MIPS evaluation is quantified in terms of the second law of thermodynamics, it is possible to calculate both resource input and service output in exergetic terms. This leads to the concept of EMIPS (acronym of Exergetic Material Input per unit of Service), specifically defined for transport technology. It takes into account not only the total mass to be transported and the total distance, but also the mass per single transport and the speed, allowing an effective comparison between railway, truck, and passenger car transport.
Transport modes and vehicles have then been evaluated in terms of the exergetic material input per unit of service (EMIPS):
$$ R/S = \frac{{{\text{Ex}}_{\text{resources}} }}{{{\text{Ex}}_{\text{service}} }} = {\text{EMIPS}} $$
The amount of resources extracted from the ecosystem to provide the transport service has been quantified by defining an inventory of all exergetic resources over the whole life cycle.
The method allows evaluating cumulative exergy consumption, also introducing an effective differentiation between non-renewable and renewable resource inputs according to Gong and Wall [13].
Dewulf evaluated the exergy associated with the transport needed to overcome aerodynamic resistance, inertia effects and friction in order to move a total mass (TM) in a number of transports (N) with a mass per single transport (MPST) within a delivery time (DT) over a total distance (TD). The physical requirement is the exergy to accelerate and to overcome friction. If one is able to define the exergy associated with this service, as a function of TM, MPST, DT, and TD, then the exergetic efficiency of transport technology can be determined:
$$ R/S = {\text{EMIPS}} = \frac{{{\text{Ex}}_{\text{resources}} }}{{{\text{Ex}}_{\text{service}} ( {\text{TM,MPST,DT,TD)}}}} $$
Dewulf takes into account two types of dissipations: the kinetic energy $E_{\text{kin}}$ and the energy $E_{\text{D}}$ needed to overcome the aerodynamic drag:
$$ E_{\text{d}} = E_{\text{kin}} + E_{\text{D}} $$
where the kinetic energy depends on the maximum speed $v_{\max}$ reached during the trajectory ($v = v_{\max}$ when $v \neq 0$ and $\mathrm{d}v/\mathrm{d}t = 0$):
$$ E_{\text{kin}} = \frac{1}{2}m \cdot v_{ \hbox{max} }^{ 2} $$
On the other hand, for a vehicle of given shape, aerodynamic resistance causes an energetic loss
$$ E_{\text{D}} = \int\limits_{ 0}^{{t_{\text{tot}} }} {\left( {\frac{1}{2} \cdot C_{\text{D}} \cdot \rho \cdot A \cdot v^{ 2} } \right)} \cdot v \cdot dt $$
where $C_{\text{D}}$ is the drag coefficient, $A$ is the cross section, and $\rho$ is the density of air. It can be observed that high speed is very unfavorable, because the energy losses due to aerodynamic resistance scale with $v^3$. Wind direction has been reasonably neglected, assuming that it varies randomly with an almost uniform distribution and that the number of transports downwind is the same as the number upwind.
The final expression of the exergy service has been expressed as:
$$ {\text{Ex}}_{\text{service}} = \frac{\text{TM}}{\text{MPST}}\left( {\frac{1}{2}{\text{MPST}}\frac{{{\text{TD}}^{2} }}{{{\text{DT}}^{2} }} + \frac{1}{2}C_{\text{D}} \rho A\frac{{{\text{TD}}^{3} }}{{{\text{DT}}^{2} }}} \right) $$
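Dewulf's service exergy can be sketched directly from the expression above. The freight-run figures used here (100 t moved in 25 t loads over 360 km in 4 h, with an assumed truck drag area) are illustrative assumptions.

```python
# EMIPS service exergy (Dewulf): a minimal sketch with assumed inputs.

def emips_service_exergy(tm_kg, mpst_kg, dt_s, td_m, cd, area_m2, rho=1.225):
    """Ex_service = (TM/MPST) * (1/2 MPST (TD/DT)^2 + 1/2 C_D rho A TD^3/DT^2)."""
    n_transports = tm_kg / mpst_kg
    kinetic = 0.5 * mpst_kg * (td_m / dt_s) ** 2       # per-transport kinetic term
    drag = 0.5 * cd * rho * area_m2 * td_m ** 3 / dt_s ** 2  # drag work over TD
    return n_transports * (kinetic + drag)

# 100 t total, 25 t per transport, 360 km in 4 h, C_D = 0.6, A = 9 m^2
ex = emips_service_exergy(100e3, 25e3, 14400.0, 360e3, 0.6, 9.0)
```

In this example the drag term is roughly two orders of magnitude larger than the kinetic term, reflecting the $v^3$ penalty discussed above.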
Chester and others [13–15] have studied the environmental life cycle assessment (LCA) of transportation systems. They created a framework for assessing the energy use and resulting environmental impacts of passenger and freight mobility, comparing the equivalent energy or environmental effects of different technologies or fuels. They produced an effective LCA framework for the assessment of transportation systems, which includes vehicle technologies, engine technologies, fuel/energy pathways, infrastructure, and supply chains. This research focused on developing a suitable LCA framework for policies and decisions. In particular, the different energy consumptions have been evaluated over the whole product life cycle. Figure 5 shows a sample of the analysis that can be produced by applying Chester's methodology [15].
Energy consumption and GHG emissions by different transport modes (from Chester and others [15] )
This research aims to produce a robust model that allows comparing different transport modes and overcomes the limits of preceding research.
It aims to define an effective and robust model, built around a set of fundamental goals, which takes into account the complexity of the energetic factors related to transport.
Referring to the preceding literature, it aims to overcome the generality of the Gabrielli–von Karman analysis [3], while still considering the vehicle as a whole, as they do. What their analysis misses is an effective separation between the energy necessary for moving the vehicle itself and the energy necessary for moving the payload.
The proposed analysis is fundamental for understanding future directions of vehicle improvement. It aims to overcome the analyses of the authors who, influenced by logistical issues, refer the energy consumption to the payload only [4–10]. It also aims to improve Dewulf's exergetic analysis by considering a more analytical differentiation of the energy dissipated during service. Dewulf's model clearly misses an evaluation of rolling dissipation, which is not negligible and cannot be merged with aerodynamic drag, because it has a completely different nature and physical law.
Even if it moves in the direction traced by Chester [13–15], it also aims to consider the energy necessary for dismantling and recycling the materials of the vehicle, opening the road to better LCA management.
Comparing Dewulf's and Chester's results, which are completely compatible, the differences between exergetic and energetic analysis appear evident, even if both show that the dominant contributions to energy consumption and GHG emissions for on-road and air modes come from components directly related to transport operations.
Energy efficiency of transport modes
The necessity of focusing attention on the transport sector is clearly stated by the IPCC Fourth Assessment Report: Climate Change 2007 [15] and by the U.S. Energy Information Administration International Energy Outlook 2013 (IEO2013) [16]. They clearly demonstrate that the transport sector is the first contributor in terms of GHG emissions (Fig. 6), excluding electricity production. It has been verified that road transport is the largest component of the emissions related to the transport sector.
World GHG emissions (source: IPCC Fourth Assessment Report: Climate Change 2007)
A preliminary analysis has been produced at the energetic level. Different transport modes have been compared by the well-tested method of Chester, completed by introducing the dismantling and recycling energy costs in order to define a fully sustainable life cycle assessment of the different transport systems. Service energy dissipation has also been divided into requirements for the vehicle and requirements for the payload [17]. Dewulf indicates two dissipative terms, kinetic and aerodynamic. In the case of ground vehicles, and during takeoff and landing operations performed by aircraft, it is necessary to consider also a rolling dissipative term, which depends on the friction of the wheels with the terrain. A more complete analysis in terms of energetic loads can then be performed with the following terms:
$$ {\text{Kinetic term}}\quad E_{\text{kin}} = \frac{1}{2} \cdot (m_{\text{v}} + m_{\text{p}} ) \cdot v_{\hbox{max} }^{2} $$
$$ {\text{Rolling term}}\quad E_{\text{rol}} = c \cdot \left( {m_{\text{v}} + m_{\text{p}} } \right) \cdot g \cdot v_{\text{av}} \cdot t $$
$$ {\text{Aerodynamic term}}\quad E_{\text{D}} \cong \frac{1}{2}C_{\text{D}} \cdot A \cdot \rho \cdot v_{\text{av}}^{ 3} \cdot t $$
In the case of aircraft, three different phases have been considered:
Take-off: all terms are present, and the lifting component of the forces must also be considered;
Flight: the aerodynamic term is dominant;
Landing: all terms are present, and the lifting component of the forces must be considered.
In the case of ships, only the kinetic and hydrodynamic terms are present (the latter dimensionally equal to the aerodynamic one).
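The three dissipative terms above translate directly into code. The midsize-car figures used in the check (masses, speeds, rolling coefficient, drag area) are illustrative assumptions.

```python
# Kinetic, rolling and aerodynamic dissipative terms: a minimal sketch.
G = 9.81  # m/s^2

def kinetic_term(m_v, m_p, v_max):
    """E_kin = 1/2 (m_v + m_p) v_max^2."""
    return 0.5 * (m_v + m_p) * v_max ** 2

def rolling_term(m_v, m_p, c, v_av, t):
    """E_rol = c (m_v + m_p) g v_av t."""
    return c * (m_v + m_p) * G * v_av * t

def aero_term(cd, area, rho, v_av, t):
    """E_D ~ 1/2 C_D A rho v_av^3 t."""
    return 0.5 * cd * area * rho * v_av ** 3 * t

# Assumed midsize car, one hour at 25 m/s average, 36 m/s peak
e_kin = kinetic_term(1300.0, 150.0, 36.0)
e_rol = rolling_term(1300.0, 150.0, 0.01, 25.0, 3600.0)
e_aero = aero_term(0.32, 2.2, 1.225, 25.0, 3600.0)
```

Only the aerodynamic term is independent of the transported mass, which is what later allows splitting the service exergy into vehicle and payload shares.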
The other energetic terms, not directly related to motion, have been evaluated according to Chester. In particular, Chester's analysis has been extended by considering also the energy necessary for dismantling and recycling the vehicles. Chester uses a hybrid LCA model for this analysis. The components are evaluated from materials extraction through the final industrial product, including supply chains. For example, the evaluation of automotive manufacturing includes the energy and emissions from the extraction of raw materials (i.e., iron ore for steel) through the assembly of that steel in the vehicle. End-of-life phases are not included in Chester's model due to the complexities of evaluating waste management options and material reuse. Indirect impacts are included, i.e., the energy and emissions resulting from the support infrastructure of a process or product, such as electricity generation for automobile manufacturing. For each component in the mode's life cycle, environmental performance is calculated and then normalized per passenger kilometer traveled (PKT). The energy inputs and emissions from that component may have occurred annually (such as from electricity generation for train propulsion) or over the component's lifetime (such as train station construction) and are normalized accordingly.
Equation (1) provides the generalized formula by Chester for determining component energy or emissions.
$$ E_{\text{M}} = \sum\limits_{\text{c}}^{\text{C}} {\frac{{{\text{EF}}_{{{\text{M}},{\text{c}}}} \times U_{\text{M,c}} (t)}}{{{\text{PKT}}_{\text{M}} (t)}}} $$
$E_{\text{M}}$ is the total energy or emissions per PKT for mode M;
M is the set of modes {sedan, train, aircraft, etc.};
c is the vehicle, infrastructure, or fuel life cycle component;
$\text{EF}_{\text{M,c}}$ is the environmental (energy or emission) factor for component c;
$U_{\text{M,c}}$ is the activity resulting in $\text{EF}_{\text{M,c}}$ for component c;
$\text{PKT}_{\text{M}}$ is the PKT performed by mode M during time t for component c.
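Chester's per-PKT normalization is a weighted sum over life cycle components; a minimal sketch follows, with entirely made-up component values used only to exercise the formula.

```python
# Per-PKT footprint of one mode (Chester's normalization), a sketch.

def mode_footprint(components, pkt):
    """E_M = sum_c EF_{M,c} * U_{M,c} / PKT_M.

    `components` is a list of (ef, u) pairs, one per life cycle component.
    """
    return sum(ef * u for ef, u in components) / pkt

# Illustrative, made-up components: (environmental factor, activity level)
demo = [(2.5, 1000.0),   # e.g. operation
        (0.8, 500.0),    # e.g. vehicle manufacturing
        (0.3, 200.0)]    # e.g. infrastructure
e_m = mode_footprint(demo, 10000.0)
```

The same helper works for energy or for any emission species, since only the factors change.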
The environmental factors used for the energy and emissions evaluations come from a variety of sources. In particular, data obtained from the Australian Environmental Protection Authority [18] and Nissan-Global [19] have been used extensively.
In particular, Choate and others [20] allow deriving a detailed data table on the energy savings from recycling different materials. Table 1 shows the energy savings, comparing different management strategies for materials used in the automotive industry.
Table 1 Energy consumed/avoided from different waste involved in vehicle industry and different management options (Million Btu/Ton)
According to these data, and assuming a specific mass balance from different authors [21–24], an effective evaluation of the end-of-life operations of different kinds of vehicles, including the possible recycling of components and materials, can be performed. This analysis allows defining the energetic parameters related to the entire life cycle of the vehicle, and considered an initial sample of about 50 vehicles chosen as representative of their categories. Fuels have been evaluated using the values in Table 2, which have been defined from the Tupras report [37]. Other relevant energy losses have been evaluated according to Chester [13–15], including infrastructure. Averaged data per vehicle category are reported in Fig. 7. More detailed results are presented in "Appendix 1".
Table 2 Properties of different fuels adopted
Percent values of Energy consumption for different transport modes
Further considerations allow going deeper by examining the general expression of the kinetic and rolling parts of the dissipative terms.
The general expression of the dissipative term is then
$$ {\text{Ex}}_{\text{service}} = {\text{Ex}}_{\text{rol}} + {\text{Ex}}_{\text{kin}} + {\text{Ex}}_{\text{D}} $$
In addition, two different terms referred to the vehicle and payload can be determined. In particular,
$$ {\text{Ex}}_{\text{vehicle}} = m_{\text{v}} \left( {c\,g\,v_{\text{av}} t + \frac{1}{2}v_{\hbox{max} }^{2} } \right) + \frac{1}{2}C_{\text{D}} A\rho_{\text{air}} v_{\text{av}}^{3} t $$
is the component due to vehicle even at zero payload, and
$$ {\text{Ex}}_{\text{payload}} = m_{\text{p}} \left( {c\,g\,v_{\text{av}} t + \frac{1}{2}v_{ \hbox{max} }^{2} } \right) $$
is the component due to payload.
This analysis of the energy needed for moving the vehicle and the energy needed for moving the payload can be produced for different kinds of vehicles.
It allows a better comprehension of the energy dissipations of the different classes of vehicles, and an understanding of the losses due to today's industrial vehicle concepts. Results are reported in tabular form in "Appendix 2", both in MJ/t km and as percentage comparisons.
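The vehicle/payload split above can be checked numerically; the midsize-car and single-passenger figures in this sketch are illustrative assumptions, not values from the appendices.

```python
# Vehicle vs payload share of the service exergy: a hedged sketch.
G = 9.81  # m/s^2

def ex_vehicle(m_v, c, v_av, t, v_max, cd, area, rho_air=1.225):
    """m_v (c g v_av t + v_max^2/2) + 1/2 C_D A rho_air v_av^3 t."""
    return m_v * (c * G * v_av * t + 0.5 * v_max ** 2) \
        + 0.5 * cd * area * rho_air * v_av ** 3 * t

def ex_payload(m_p, c, v_av, t, v_max):
    """m_p (c g v_av t + v_max^2/2): no aerodynamic share."""
    return m_p * (c * G * v_av * t + 0.5 * v_max ** 2)

# Assumed midsize car (1300 kg) carrying one 80 kg passenger for one hour
ev = ex_vehicle(1300.0, 0.01, 25.0, 3600.0, 36.0, 0.32, 2.2)
ep = ex_payload(80.0, 0.01, 25.0, 3600.0, 36.0)
payload_share = ep / (ev + ep)
```

With these assumed figures only a few percent of the dissipated exergy actually serves the payload, which is the central observation the paper makes about single-occupant cars.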
Life cycle analysis of transport modes
An analysis of the energy impact of different transport modes must necessarily consider the intensity of the different transport modes on a global scale. These data have been obtained from [26] and [27] and are reported in Table 4.
Table 3 refers to energetic values in terms of fuel needs and does not consider life cycle needs. Considering the preceding preliminary impacts of the different energy dissipations, a complete evaluation of the entire life cycle of the existing transport modes has been produced.
Table 3 Default fuel economy factors for different types of mobile sources and activity data (derived from [25] )
These considerations make it necessary to actualize the values in Table 3 by referring them to the full life cycle of today's circulating fleets; the resulting values are reported in Table 4.
Table 4 World transport energy use by mode (2004)
Environmental considerations
Taking into account the GREET 1 model [28] by Argonne National Laboratory, the Greenhouse Gas Protocol [29], and the American Petroleum Institute [30], it is possible to extend the analysis with an evaluation of the per-km emissions of the different transport modes. The properties of the most used fuels are presented in Table 5.
Table 5 Emission factors and LHV for common fuels
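A per-km tailpipe estimate follows directly from a fuel economy figure and a volumetric emission factor; the roughly 2.3 kg CO2 per litre of gasoline used below is a commonly cited approximation, taken here as an assumption rather than a value from Table 5.

```python
# CO2 per km from fuel economy and a per-litre emission factor: a sketch.

def co2_g_per_km(km_per_litre, ef_kg_co2_per_litre):
    """Tailpipe CO2 (g/km) = emission factor / fuel economy * 1000."""
    return ef_kg_co2_per_litre / km_per_litre * 1000.0

GASOLINE_EF = 2.31  # kg CO2 per litre, commonly cited approximation
rate = co2_g_per_km(15.0, GASOLINE_EF)  # 154 g/km for a 15 km/l car
```

The same relation, read in reverse, is why emissions plotted against km/l follow the hyperbolic trend interpolated later in Fig. 9.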
Default fuel economy factors for different types of mobile sources and activity data have been modeled according to EPA [31] and are reported in Table 6 (Fig. 8).
Table 6 Life cycle energy needs by circulating vehicles
Energetic requirements over the complete lifecycle of different transport modes
The evaluation of the energetic impact of different transport modes on a global scale shows that the largest impact on energy use is caused by ground transport and in particular by cars. Interpolating the data in Table 6, the CO2 emissions can be expressed as a function of vehicle fuel economy in km/l (Fig. 9).
Interpolation of CO2 emissions for common motor vehicles (ref. Table 6)
The data in Table 6 and in Fig. 9 show an anomaly for diesel buses, which is clearly caused by the operational behavior of this vehicle and its mission, characterized by frequent stop-and-go driving.
The most important result of this analysis is the definition of an interpolating function, which allows an approximate estimation of emissions as a function of the fuel consumed for moving.
Applying this predictive scheme to the vehicles estimated previously yields the general results reported in "Appendix 3". Looking at the results, it is evident that emissions and energy consumption per ton of payload are much higher for ground and personal transport than for the other systems. It is also evident that the energy consumed and the emissions are lower for freight transport systems than for passenger transport. The values in Table 6 take into account an estimation of the whole life cycle emissions.
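Since the fitted function itself is not reproduced here, the interpolation can be sketched with a simple least-squares fit of the hyperbolic form suggested by the physics (emissions proportional to fuel burned per km); the four data points below are illustrative assumptions, not the values of Table 6.

```python
# Least-squares fit of CO2 = k / (km/l); data points are illustrative
# assumptions, not the values from Table 6.
import numpy as np

km_per_l = np.array([5.0, 10.0, 15.0, 20.0])
g_co2_km = np.array([460.0, 230.0, 155.0, 115.0])

# Minimizing sum_i (g_i - k/x_i)^2 gives k = sum(g_i/x_i) / sum(1/x_i^2)
k = float(np.sum(g_co2_km / km_per_l) / np.sum(1.0 / km_per_l ** 2))

def co2_estimate(x_km_per_l):
    """Interpolated CO2 (g/km) for a given fuel economy."""
    return k / x_km_per_l
```

A single parameter suffices because, for a given fuel, per-km emissions scale almost linearly with per-km fuel use; the diesel-bus anomaly noted above is exactly the kind of point such a fit cannot capture.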
Discussion of life cycle results
The results of this analysis of transport modes show that energy consumption and pollution are mostly caused by ground transportation. In addition, they show that the energy dissipated for moving road vehicles is much higher than that for moving their payload ("Appendix 2"). Considering the different kinds of vehicles, further considerations can be made. "Appendix 3" reports the evaluation of life cycle energetic performances in terms of vehicle, payload and total, together with the relative emissions.
It is clear that high-payload vehicles perform much better, in unitary terms, than light-payload ones. These evaluations consider all life cycle energy and are based on the passenger loads or freight payloads used by Radtke. Several vehicles have been added to the analysis, taking data directly from producers and from Strickland [31]. The analyses of Radtke and Strickland have been improved by taking into consideration the amount of energy needed to produce vehicles, transportation infrastructure, and fuels. Leisure vehicles have not been considered in this analysis because they make a marginal contribution to the global emissions.
Looking at the global data, the following interpretation can be given: most people travel individually when possible, and personal cars and trucks cause most of the energy consumption and emissions. It is then fundamental to focus attention on these systems, verifying how they can be improved to reduce their global impact without limiting their flexibility.
Ground transport in detail
Impact of ground transportation
The present study has evidenced the criticality, in terms of emissions and energy needs, of ground transport. A milestone study on the future development of the transport sector has been produced by the EU Transport GHG: Routes to 2050 II project [32], funded by the EU. This analysis compares the 2010 standard transport situation with the expected standards up to 2050. A synthetic representation is reported in Fig. 10.
Expected growth in GHG emissions by transport mode (EU Transport GHG: Routes to 2050 II project)
This paper adopts a different method, i.e., the analysis of the different kinds of vehicles, focusing on the specific benefits that could be obtained by an effective optimization of internal combustion vehicles, which are the most critical in terms of both energy efficiency and emissions.
The full vehicle has been considered, taking into account the energy losses for moving the vehicle. A schema of the power train indicating the different losses is provided in Fig. 11.
Losses in a ground vehicle
Losses depend on the regime in which the vehicle operates (i.e., urban, highway or composite). The evaluation of the power needs can be performed by Eq. (14):
$$ {\text{Ex}}_{\text{vehicle}} = m_{\text{tot}} \left( {c\,g\,v_{\text{av}} t + \frac{1}{2}v_{\hbox{max} }^{2} } \right) + \frac{1}{2}C_{\text{D}} A\rho_{\text{air}} v_{\text{av}}^{3} t $$
It is also possible to write the energy balance accounting for the losses due to the engine and the power train:
$$ {\text{Ex}}_{\text{vehicle}} = {\text{Ex}}_{\text{fuel}} - L_{\text{engine}} - L_{\text{standby}} - L_{\text{powertrain}} $$
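The loss chain above can be sketched with stage efficiencies; the 30% engine efficiency, 5% standby fraction and 90% powertrain efficiency used below are illustrative assumptions, not the values of Table 7.

```python
# Fuel-to-wheels exergy chain: a hedged sketch with assumed efficiencies.

def exergy_at_wheels(ex_fuel, eta_engine=0.30, standby_frac=0.05,
                     eta_powertrain=0.90):
    """Ex_vehicle = Ex_fuel - L_engine - L_standby - L_powertrain.

    Each stage is modeled as a multiplicative efficiency; all values
    are illustrative assumptions.
    """
    after_engine = ex_fuel * eta_engine            # engine losses removed
    after_standby = after_engine * (1.0 - standby_frac)  # idling losses
    return after_standby * eta_powertrain          # driveline losses

ex = exergy_at_wheels(100.0)  # of 100 MJ of fuel, ~25.6 MJ reach the wheels
```

Even with these rough numbers, roughly three quarters of the fuel exergy never reaches the wheels, which is why Table 7 attributes most of the consumption to the engine rather than to the service terms.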
According to Eqs. (9, 10, 11), a more sophisticated analysis of the in-service performance of different vehicles can be made. In particular, cars, buses, and trucks have been considered, because they appear to be the least performing from an energetic and environmental point of view.
Preliminary calculations have been validated against Sovran and Bohn [33]. The results are shown in Table 7. They refer to the full energetic value of the fuel, and they appear perfectly in line with those of Sovran and Bohn. Calculations have been performed for an average car, a truck, and a bus: a midsize car (1.3 t), a heavy truck (40 t at full payload), and a bus (16 t) have been considered as preliminary references.
Table 7 Reference values of energy consumption (%) in city, highway, and composite regimes
Table 7 allows further analysis of the optimization of the vehicle as it is.
In particular, data for the different vehicles have been calculated iteratively according to the above calculation method, obtaining results that can be applied to most vehicles. They are reported in "Appendix 4". The same vehicles considered in Table 3 have been analyzed, even if they are listed in a different order.
Optimization of ground vehicles
The current vehicle market seems to have reached a high degree of technological maturity. Most vehicles have reached a standardized configuration, with only minor upgrades possible, mostly relating to the user interface, some minor safety issues, and some minor reductions in energy consumption.
Bejan [34–38] has defined constructal theory, an effective method for understanding the elementary logic of natural evolution and for designing more efficient mechanical and thermodynamic systems. In particular, Bejan [37, 38] has argued that the constructal law governs natural evolution and motion efficiency.
Dumas [39, 40] and Trancossi [41, 42] have defined a technical design methodology for transport vehicles, obtaining in the case of airships an effective optimization up to complete energy self-sufficiency by photovoltaic energy. This activity demonstrated that the constructal law can produce surprising results in the optimization of transport vehicles.
Constructal theory can thus define effective guidelines for the future development of transport vehicles, allowing also the definition of breakthrough configurations that can produce major advantages compared with the technological-maturity scenario in which today's transport industry is operating.
Recent improvements on vehicles have focused on several modules but have not produced the fundamental results needed for an effective energetic benefit.
The optimization proposed here is general, even if it opens the possibility of performing an effective analysis at vehicle level. In particular, it takes into account the results in "Appendix 2", Fig. 14, for such vehicles; they express the influence of payload for different kinds of vehicles, obtained from Eqs. 12 and 13.
The calculation schema is reported in Fig. 12, where Fr is the friction with the ground, D is the aerodynamic drag, K is the term due to acceleration to the maximum velocity, and Fm is the force produced by the engine that moves the car.
Main forces acting on a ground vehicle during service
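The force balance sketched in Fig. 12 can be illustrated numerically. The short Python sketch below decomposes the tractive energy of a trip into rolling, aerodynamic, and kinetic terms; the function name and all parameter values (rolling coefficients, drag coefficients, frontal areas, speeds) are illustrative assumptions, not data taken from the paper.

```python
# Illustrative decomposition of tractive energy, following the force
# balance of Fig. 12: rolling resistance Fr, aerodynamic drag D, and the
# kinetic term K. All parameter values below are generic assumptions.

RHO_AIR = 1.225   # air density (kg/m^3)
G = 9.81          # gravitational acceleration (m/s^2)

def tractive_energy(mass_kg, c_rr, c_d, frontal_area_m2, v_ms, dist_m):
    """Return (E_rolling, E_drag, E_kinetic) in MJ for a trip at constant speed v."""
    e_roll = c_rr * mass_kg * G * dist_m                                  # Fr * d
    e_drag = 0.5 * RHO_AIR * c_d * frontal_area_m2 * v_ms ** 2 * dist_m   # D * d
    e_kin = 0.5 * mass_kg * v_ms ** 2                                     # one acceleration to v
    return tuple(e / 1e6 for e in (e_roll, e_drag, e_kin))

# Midsize car (1.3 t) and heavy truck (40 t) over a 100 km trip
car = tractive_energy(1300, 0.010, 0.30, 2.2, 33.3, 100e3)
truck = tractive_energy(40000, 0.007, 0.60, 9.0, 25.0, 100e3)
```

The rolling term scales with vehicle mass, while the drag term scales with frontal area and the square of speed; this separation is the basis of the weight-versus-aerodynamics discussion of this section.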
For the considered vehicles, specific evaluations are possible; they are reported in "Appendix 4", Table 10. A more detailed evaluation based on the energy dissipation modes during service is presented in "Appendix 4", Table 11. Data have been interpolated in the case of cars, which are the transport mode with the highest impact; they allow evaluating the influence of vehicle mass on energy consumption. These calculated data are plotted in Fig. 13.
Influence of the mass of a car on energy consumption and related consumption for passenger
These results will allow vehicle design to focus more effectively on operational efficiency. Considering Eqs. (12) and (13), the most important factors on which an effective optimization could focus are weight and aerodynamics. In particular, weight appears to be the most important element to optimize in light ground vehicles, while aerodynamics is most important for heavy vehicles. These directions of optimization diverge from the vehicle development of the last 30 years, which has produced an effective increase in mass, contrasting with the necessity of reducing energetic impact.
This paper has presented an analysis of the energetic needs of different transport modes, starting from the pioneering work of Gabrielli and von Karman.
This activity has produced an effective comparison between different transport modes. Looking at global impacts in terms of energy consumption and GHG emissions, it has highlighted the necessity of producing advancements in the transport modes with the highest impact. It has then focused on the problem of reducing the energy consumption of ground vehicles, stating the preliminary basis for a future constructal optimization of ground vehicles.
This paper is then intended to be continued by future work focused on an effective methodology for optimizing ground and ICE vehicles, overcoming the current technological-maturity scenario of this industrial sector.
It appears clear that the industrial strategy of standardizing components produces a general minimization of costs, but much reduced advantages from an energetic point of view, because of the consequent increase in vehicle weight that accompanies this new technological scenario.
DT: Delivery time (s)

E kin: Kinetic energy (MJ)

E D: Energy dissipation against drag (MJ)

E ROL: Rolling energy (MJ)

Ex payload: Exergy dissipated by payload (MJ)

Ex res: Exergy from resources (MJ)

Ex service: Exergy dissipated during service (MJ)

Ex vehicle: Exergy dissipated by vehicle (MJ)

N: Number of travels

m p: Mass of payload (kg)

m v: Mass of vehicle (kg)

MPST: Mass per single transport (ton)

P max: Total distance (km)

TM: Total mass (ton)

Total distance (S)

V: Velocity (m/s)

v av: Average velocity

v max: Maximum velocity (m/s)

W: Weight (N)

ε: Specific resistance of vehicle

ε f: Fuel transport effectiveness

ζ: Energy per unit mass of fuel (MJ/kg fuel)

η: Distance traveled per unit mass of fuel (km/kg fuel)

EMIPS: Exergetic material input per unit of service

LCA: Life cycle assessment

LHV: Lower heating value

GHG: Greenhouse gas
VV.AA. World Energy Outlook 2013, International Energy Agency (2013)
VV.AA. "2014 World Energy Issues Monitor", World Energy Council, London (2014)
Gabrielli, G., von Karman, Th: What price speed? Mech. Eng. 72, 775–781 (1950)
Stamper, J.:"Time is Energy," Aeronaut. J. 169–178 (1975)
Teitler, S., Proodian, R.E.: "What price speed, revisited," J. Energy 4(1), 46–48 (1980). http://www.neodymics.com/Images/EPPaper050323I.pdf via the Internet
Minetti, A., Pinkerton, J., Zamparo, P.: From bipedalism to bicyclism, evolution in energetics and biomechanics of historic bicycles. Proc. R. Soc. Lond. B 268, 1351–1360 (2001)
Young, J., Smith, R., Hillmansen, S.: What price speed—revisited. Ingenia 22, 46–51 (2005)
Hobson, A.: Physics literacy, energy and the environment. Phys. Educ. 38, 109–114 (2003)
Radtke, J.: The energetic performance of vehicles. Open. Fuel. Energy. Sci. J 1, 11–18 (2008)
VV. AA., "MPG Ratings, 2007 Model Year", US Department of Energy (2007). Available: http://www.fueleconomy.gov/
Dewulf, J., Van Langenhove, H.: Exergetic material input per unit of service (EMIPS) for the assessment of resource productivity of transport commodities. Resour. Conserv. Recycl. 38(2), 161–174 (2003)
Chester, M., Horvath, A. Environmental assessment of passenger transportation should include infrastructure and supply chains. Environ. Res. Lett. 4(2) (2009)
Chester, M., Horvath, A. High-speed rail with emerging automobiles and aircraft can reduce environmental impacts in california's future. Environ. Res. Lett. 7(3) (2012)
Chester et al. "Infrastructure and automobile shifts: positioning transit to reduce life-cycle environmental impacts for urban sustainability goals", Environ. Res. Lett. 8(1) (2012)
VV.AA. "IPCC Fourth Assessment Report: Climate Change 2007", Intergovernmental Panel on Climate Change (2007)
VV.AA. International Energy Outlook 2013, (IEO2013) U.S. Energy Information Administration, Washington DC (2013)
VV.AA. Recycling—Cost analysis and energy balance Australian Environmental Protection Authority, Bulletin 409, (1990)
VV.AA. "Nissan Motor Company Sustainability Report 2013", Nissan Motor Company (2013)
Choate, A., Pederson, L., Scharfenberg, J., Ferland, H.: Waste management and energy savings: benefits by the numbers. Environmental Protection Agency, Washington DC (2005)
Das, S., Curlee, T.R.: Recycling of new generation vehicles. Oak Ridge National Laboratory, USA (1999)
Osiński, J. Vehicles recycling system problems. Recykling. 36 (3) 2009
Tavoularis, G., et al.: Management of the end-of-life vehicles stream in Romania. In: Cossu, R., Diaz, L.F., Stegmann, R. (eds.) The 12th International Waste Management and Landfill Symposium. Sardinia, Italy (2009)
Elghali, L., McColl-Grubb, V., Schiavi, I., Griffiths, P.: Sustainable resource use in the motor industry: a mass balance approach. VIRIDIS Report, UK (2004)
VV.AA., Product Specification. TUPRAS, Izmit, Turkey (1996)
VV.AA. "Mobility 2030 Report: Meeting the Challenges to Sustainability", World Business Council for Sustainable Development (WBCSD) (2004)
Hargroves, K., von Weizsacke, E.: Technology and policy options for making transport systems more sustainable. United Nations Dept. of Economic and Social Affairs, Commission on Sustainable Development, Nineteenth Session, New York (2011)
VV.AA., "GREET 1 Model 2012"Argonne. National Laboratory, USA (2012)
VV.AA., "Calculating CO2 Emissions from Mobile Sources", GHG Protocol - Mobile Guide, The Greenhouse Gas Protocol (2012)
VV.AA., "Compendium of Greenhouse Gas, Emissions Estimation Methodologies for the Oil and Gas Industry", American Petroleum Institute (2001)
Strickland, J.: Energy efficiency of different modes of transportation (2009). http://www.builditsolar.com/References/EfficiencyTransport/strickland.htm
Brannigan, C., Gibson, G., Hill N., Dittrich, M., Schroten, A., van Essen, H., van Grinsven, A.: Development of a better understanding of the scale of co-benefits associated with transport sector GHG reduction policies, EU Transport GHG: Routes to 2050 II project, Founded by EU, Updated 12 July 2012
Sovran, G., Bohn, M. "Formulae for the tractive energy requirements of vehicles driving the epa schedules." SAE paper 810184, (1981)
Bejan, A.: Advanced Engineering Thermodynamics, 2nd edn. Wiley, New York (1997)
Bejan, A.: Shape and structure, from engineering to nature. Cambridge University Press, Cambridge (2000)
Bejan, A., Lorente, S.: The constructal law and the evolution of design in nature. Phys. Life. Rev. 8(3), 209–240 (2011)
Bejan, A., Marden, J. "Constructing animal locomotion from new thermodynamics theory". Am. Sci. 94(4) (2006)
Bejan, A., Lorente, S.: Constructal law of design and evolution: physics, biology, technology, and society. J. Appl. Phys. 113, 151301 (2013)
Dumas, A., Madonia, M., Trancossi, M., Vucinic, D.: Propulsion of photovoltaic cruiser-feeder airships dimensioning by constructal design for efficiency method. SAE. Int. J. Aerosp. 6(1), 273–285 (2013). doi:10.4271/2013-01-2303
Dumas, A., Trancossi, M., Madonia, M.: Energetic design and optimization of a large photovoltaic stratospheric unconventional feeder Airship. SAE. Int. J. Aerosp. 5(2), 354–370 (2012). doi:10.4271/2012-01-2166
Trancossi, M., Dumas, A., Madonia, M., "Optimization of airships with constructal design for efficiency method", SAE Technical Paper 2013-01-2168, (2013). doi:10.4271/2013-01-2168
Trancossi, M., Dumas, A., Madonia, M., "Energy and mission optimization of an airship by constructal design for efficiency method", ASME IMECE 2013. San Diego, California (2013)
Special thanks to Prof. Adrian Bejan for encouraging the work behind this paper. In particular, one objective of this paper is to verify the algorithms for a future energetic comparison of the MAAT cruiser/feeder airship transport with commonly used transport modes. The present work has been performed as part of Project MAAT (Multibody Advanced Airship for Transport), ref. 285602, supported by the European Union through the 7th Framework Programme.
Di.S.M.I., University of Modena and Reggio Emilia, Via Amendola n. 2, 42100, Reggio Emilia, Italy
Michele Trancossi
Correspondence to Michele Trancossi.
Published in the Special Issue "8th AIGE Conference (Italian Association for Energy Management)".
See Table 8.
Table 8 Energy balance
See Figs. 14 and 15.
Cumulative evaluation of energy dissipation for vehicle movement and for freight movement (MJ/t km)
Evaluation of energy dissipation for vehicle movement and for freight movement (%)
Table 9 Energy consumption for unit of load and emissions
See Tables 10 and 11.
Table 10 Energy consumption repartition for different kinds of vehicles
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Trancossi, M. What price of speed? A critical revision through constructal optimization of transport modes. Int J Energy Environ Eng 7, 425–448 (2016). https://doi.org/10.1007/s40095-015-0160-6
Accepted: 06 January 2015
Gabrielli–Von Karman | CommonCrawl |
\begin{document}
\title{Counting for some convergent groups}
\centerline{ Marc Peign\'e $^($\footnote{Marc Peign\'e, LMPT, UMR 7350, Facult\'e des Sciences et Techniques, Parc de Grandmont, 37200 Tours, France -- email : [email protected]}$^)$, Samuel Tapie $^($\footnote{Samuel Tapie, Laboratoire Jean Leray 2 rue de la Houssini\`ere - BP92208 44322 NANTES CEDEX 3 France
-- email : [email protected]}$^)$ $\&$ Pierre Vidotto $^($\footnote{Pierre Vidotto, Laboratoire Jean Leray 2 rue de la Houssini\`ere - BP92208 44322 NANTES CEDEX 3 France
-- email : [email protected]}$^)$}
\noindent {\bf Abstract.} We present examples of geometrically finite manifolds with pinched negative curvature, whose geodesic flow has infinite non-ergodic Bowen-Margulis measure and whose Poincar\'e series converges at the critical exponent $\delta_\Gamma$. We obtain an explicit asymptotic for their orbital growth function. Namely, for any $\alpha \in ]1, 2[ $ and any slowly varying function $L : \mathbb R\to (0, +\infty)$, we construct $N$-dimensional Hadamard manifolds $(X, g)$ of negative and pinched curvature, whose group of oriented isometries admits convergent geometrically finite subgroups $\Gamma$ such that, as $R\to +\infty$, $$ N_\Gamma(R):= \#\left\{\gamma\in \Gamma \; ; \; d(o, \gamma \cdot o)\leq R\right\} \sim C_\Gamma \frac{L(R)}{R^\alpha} \ e^{\delta_\Gamma R}, $$ for some constant $C_\Gamma >0$.
\noindent AMS classification : 53C20, 37C35
\section{Introduction}
We fix $N\geq 2$ and consider a $N$-dimensional Hadamard manifold $X$ of negative, pinched curvature $-B^2 \leq K_X \leq -A^2<0$. Without loss of generality, we may assume $A\leq 1 \leq B$. Let $\Gamma$ be a {\em Kleinian group} of $X$, i.e. a discrete, torsionless group of isometries of $X$, with quotient $\bar X= \Gamma \backslash X$.
This paper is concerned with the fine asymptotic properties of the {\em orbital function} : $$v_{\Gamma}({\bf x},{\bf y};R):=\sharp\{\gamma\in\Gamma\slash d({\bf x},\gamma\cdot{\bf y})\leq R\}$$ for ${\bf x},{\bf y} \in X$, which has been the subject of many investigations since Margulis' work \cite{Ma} (see also Roblin's book \cite{Ro}). First, a simple invariant is its {\em exponential growth rate} $$\delta_{\Gamma}=\limsup_{R\to\infty}\frac{1}{R}\log(v_{\Gamma}({\bf x},{\bf y};R)).$$ The exponent $\delta_\Gamma$ coincides with the {\em exponent of convergence} of the {\it Poincar\'e series} associated with $\Gamma$ : $$P_{\Gamma}({\bf x}, {\bf y}, s):=\sum_{\gamma\in\Gamma}e^{-sd({\bf x},\gamma\cdot{\bf y})}, \qquad {\bf x}, {\bf y}\in X; $$ it is thus also called the {\it Poincar\'e exponent} of $\Gamma$. It coincides with the topological entropy of the geodesic flow $(\phi_t)_{t\in \mathbb R}$ on the unit tangent bundle of $\bar X$, restricted to its non-wandering set. It equals also the Hausdorff dimension of the {\it radial limit set} $\Lambda(\Gamma)^{rad}$ of $\Gamma$ with respect to some natural metric on the boundary at infinity $\partial X$ of $X$. Recall that any orbit $\Gamma\cdot{\bf x}$ accumulates on a closed subset $\Lambda(\Gamma)$ of the geometric boundary $\partial X$ of $X$, called the {\em limit set} of $\Gamma$; this set contains 1, 2 or infinitely many points and one says that $\Gamma$ is non elementary when $\Lambda(\Gamma)$ is infinite.
A point $x \in\Lambda_{\Gamma}$ is said to be {\em radial} when it is approached by orbit points in some $M$-neighborhood of any given ray issued from $x$, for some $M>0$.
The group $\Gamma$ is said to be {\em convergent} if $P_{\Gamma}({\bf x}, {\bf y}, \delta_{\Gamma})<\infty$, and {\em divergent} otherwise. Divergence can also be understood in terms of dynamics: by the Hopf-Tsuji-Sullivan theorem, it is equivalent to the ergodicity and total conservativity of the geodesic flow with respect to the Bowen-Margulis measure $m_{\Gamma}$ on the non-wandering set of $(\phi_t)_{t\in \mathbb R}$ in the unit tangent bundle $T^1\bar X$ (see again \cite{Ro} for a complete account, a definition of $m_{\Gamma}$, and a proof of this equivalence).
The most general statement concerning the asymptotic behavior of $v_{\Gamma}({\bf x},{\bf y};R)$ is due to Th. Roblin: if $\Gamma$ is a non elementary, discrete subgroup of isometries of $X$ with {\em non-arithmetic length spectrum}\footnote{This means that the set $\mathcal{L} (\bar X)=\{ \ell (\gamma)\; ;\; \gamma\in\Gamma\}$ of lengths of closed geodesics of $\bar X = \Gamma \backslash X$ is not contained in a discrete subgroup of $\mathbb R$}, then $\delta_{\Gamma}$ is a true limit and it holds, as $R\to +\infty$, \begin{enumerate}
\item [(i)] if $\Vert m_{\Gamma}\Vert=\infty$ then $v_{\Gamma}({\bf x},{\bf y};R)=o(e^{\delta_{\Gamma}R})$, \item [(ii)] if $\Vert m_{\Gamma}\Vert <\infty$, then
$v_{\Gamma}({\bf x},{\bf y};R)\sim{\Vert\mu_{\bf x}\Vert\, \Vert\mu_{\bf y}\Vert\over \delta_{\Gamma}\Vert m_{\Gamma}\Vert}e^{\delta_{\Gamma}R},$ \end{enumerate}
where $(\mu_{\bf x})_{{\bf x} \in X}$ denotes the family of Patterson $\delta_\Gamma$-conformal densities of $\Gamma$, and $m_{\Gamma}$ the Bowen-Margulis measure on $T^1\bar X$. Let us emphasize that in the second case, the group $\Gamma$ is always divergent while in the first one it can be convergent.
In this paper, we aim to investigate, for a particular class of groups $\Gamma$, the asymptotic behavior of the function $v_{\Gamma}({\bf x},{\bf y};R)$ when $\Gamma$ is convergent. As far as we know, the only precise asymptotic for the orbital function of convergent groups holds for groups $\Gamma$ which are normal subgroups $\Gamma \lhd \Gamma_0$ of a co-compact group $\Gamma_0$ for which the quotient $\Gamma_0\slash \Gamma$ is, up to a finite factor, isomorphic to the lattice $\mathbb Z^k$ for some $k\geq 3$ \cite{PS}. The corresponding quotient manifold has infinite Bowen-Margulis measure; in fact, $m_{\Gamma}$ is invariant under the action of the group of isometries of $\bar X$, which contains subgroups $\simeq \mathbb Z^k$.
The finiteness of $m_{\Gamma}$ is not easy to establish, except in the case of {\it geometrically finite groups}, for which there exists a precise criterion. Recall that
$\Gamma $ (or the quotient manifold $\bar X$) is said to be geometrically finite if its limit set $\Lambda(\Gamma)$ decomposes in the radial limit set and the $\Gamma$-orbit of finitely many {\em bounded parabolic points} $x_1, \ldots, x_\ell$, fixed respectively by some parabolic subgroups $P_i, 1\leq i \leq \ell$, acting co-compactly on
$\partial X \setminus \{x_i\}$; for a complete description of geometrical finiteness in variable negative curvature see \cite{Bow}. Finite-volume manifolds $\bar X$ (possibly non compact) are particular cases of geometrically finite manifolds; in contrast, the manifolds considered in \cite{PS} are not geometrically finite.
For geometrically finite groups, the orbital functions $v_{P_i}$ of the parabolic subgroups $P_i, 1\leq i\leq \ell$, contain the relevant information about the metric inside the cusps, which in turn may force $ m_{\Gamma} $ to be finite or infinite. On the one hand, it is proved in \cite{DOP} that the divergence of the parabolic subgroups $P\subset \Gamma$ implies $\delta_{P } < \delta_\Gamma$, which in turn yields that $\Gamma$ is divergent and $\Vert m_{\Gamma}\Vert<\infty$.
On the other hand, there exist geometrically finite groups with parabolic subgroups $P $ satisfying $\delta_{P} = \delta_\Gamma$:
we call such groups {\em exotic} and say that the parabolic subgroup $P $ (or the corresponding cusp $\mathcal C $) is {\em dominant} when $\delta_{P} = \delta_\Gamma$. Let us emphasize that dominant parabolic subgroups of exotic geometrically finite groups $\Gamma$ are necessarily convergent. However, the group $\Gamma$ itself may be either convergent or divergent; we refer to \cite{DOP} and \cite{P} for explicit constructions of such groups.
In this paper, we consider a Schottky product $\Gamma$ of elementary subgroups $ \Gamma_1, \ldots, \Gamma_{p+q},$
of isometries of $X$ (see $\S 3$ for the definition) with $ p+q \geq 3$. Such a group is geometrically finite. We assume that $\Gamma$ is convergent; thus, by \cite{DOP}, it is exotic and possesses factors (say $\Gamma_1,\ldots, \Gamma_p$, $p \geq 1$) which are dominant parabolic subgroups of $\Gamma$. We assume that, up to the dominant factor $e^{\delta_\Gamma R}$, the orbital functions $v_{\Gamma_j}({\bf x}, {\bf y}, \cdot)$ of these groups satisfy some asymptotic condition of polynomial decay at infinity. More precisely, we have the following.
\begin{theo}\label{theo:Comptage} Fix $p, q \in \mathbb N$ such that $p \geq 1, p+q \geq 2$ and let $\Gamma$ be a Schottky product of elementary subgroups $\Gamma_{1}, \Gamma_2\ldots, \Gamma_{p+q}$ of isometries of a pinched negatively curved manifold $X$. Assume that the metric $g $ on $X$ satisfies the following assumptions.
$ {\bf H_1.}$ The group $\Gamma$ is convergent with Poincar\'e exponent $\delta_\Gamma=\delta$.
$ {\bf H_2.}$ There exist $\alpha\in ]1, 2[$, a slowly varying function $L$\ $^($\footnote{A function $L(t)$ is said to be ``slowly varying'' if it is positive, measurable and $L(\lambda t)/L(t)\to 1$ as $t\to +\infty$ for every $\lambda>0$.}$^)$ and strictly positive constants $c_1, \ldots, c_p$ such that, for any $1\leq j \leq p$ and $\Delta >0,$ \begin{equation}\label{influent}
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\sum _{ \stackrel{\gamma \in \Gamma_j}{R\leq d({\bf o}, \gamma\cdot {\bf o})<R+\Delta}}
e^{-\delta d({\bf o}, \gamma\cdot {\bf o})} =c_j \Delta.
\end{equation}
$ {\bf H_3.}$ {\it For any $p+1\leq j\leq p+q$ and $\Delta >0,$ $$
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\sum _{ \stackrel{\gamma \in \Gamma_j}{R\leq d({\bf o}, \gamma \cdot {\bf o})<R+\Delta}}
e^{-\delta d({\bf o}, \gamma\cdot {\bf o})} =0. $$}
\noindent Then, there exists a constant $C_\Gamma>0$ such that, as $R\to +\infty$, $$ \sharp\{\gamma\in \Gamma\mid d({\bf o}, \gamma \cdot {\bf o})\leq R \} \quad \sim \quad C_\Gamma\ {L(R)\over R^{\alpha}}\ e^{\delta R}. $$ \end{theo}
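Hypothesis {\bf H$_2$} involves a slowly varying function $L$; the following numerical sketch (Python; the functions chosen are standard illustrative examples, not objects from this paper) shows the defining ratio $L(\lambda t)/L(t)$ tending to $1$ for $L(t)=\log t$, while the power $t^{0.1}$ fails the test.

```python
import math

def ratio(L, lam, t):
    # the slowly-varying test quantity L(lambda * t) / L(t)
    return L(lam * t) / L(t)

log_L = math.log            # L(t) = log t, a standard slowly varying function
pow_L = lambda t: t ** 0.1  # regularly varying of index 0.1, not slowly varying

r_log = [ratio(log_L, 2.0, 10.0 ** k) for k in (3, 30, 300)]
r_pow = [ratio(pow_L, 2.0, 10.0 ** k) for k in (3, 30, 300)]
# r_log decreases towards 1; r_pow stays at 2**0.1 for every t
```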
The importance of the convergence hypothesis {\bf H$_1$} in the previous theorem is illustrated by the following result, obtained previously by one of the authors.
\begin{theo}[\cite{V}, Theorem C] Let $\Gamma$ be a Schottky product of $p+q\geq 2$ elementary subgroups $\Gamma_{1}, \Gamma_2\ldots, \Gamma_{p+q}$ of isometries of a pinched negatively curved manifold $X$. Assume that $p \geq 1$ and
$\bullet$ $\Gamma$ is \emph{divergent} and $\delta_\Gamma = \delta$,
$\bullet$ Hypotheses {\bf H$_2$}, {\bf H$_3$} hold.
\noindent Then, there exists $C_\Gamma>0$ such that, as $R\to +\infty$, $$ \sharp\{\gamma\in \Gamma\mid d({\bf o}, \gamma \cdot {\bf o})\leq R \} \quad \sim \quad C_\Gamma\ {e^{\delta R}\over R^{2-\alpha}L(R)}. $$ \end{theo}
The difference with the asymptotic of Theorem \ref{theo:Comptage} may be surprising, since it is possible to vary smoothly the Riemannian metric $g_{\alpha, L}$ from a divergent to a convergent case, preserving hypotheses {\bf H$_2$} and {\bf H$_3$}, cf. \cite{P}. Nevertheless, the proof of our Theorem \ref{theo:Comptage} will illustrate the reasons for this difference. For groups $\Gamma = \Gamma_1*...*\Gamma_{p+q}$ satisfying {\bf H$_2$} and {\bf H$_3$}, the counting estimate only depends on elements of the form $\gamma =a_1 \cdots a_k$, with $a_i\in \Gamma_1\cup\ldots \cup \Gamma_p$ and where $a_i$ and $a_{i+1}, 1\leq i <k$, do not belong to the same $\Gamma_j$. In the divergent case (see the proof of Theorem C in \cite{V}), the asymptotic of $\{\gamma\in \Gamma\mid d({\bf o}, \gamma \cdot {\bf o})\leq R \}$ as $R\to +\infty$ only depends on the $\gamma = a_1\cdots a_k$ with $k\gg R$. By contrast, in the convergent case, the dominant parabolic factors $\Gamma_1, \ldots, \Gamma_p$ are ``heavy'' and the asymptotic of the orbital function of $\Gamma$ comes from the $\gamma = a_1\cdots a_k$ with $k$ bounded independently of $R$; the number of such isometries $\gamma$ with $d({\bf o}, \gamma \cdot {\bf o}) \leq R$ is comparable to $\displaystyle {L(R)\over R^{\alpha}}\ e^{\delta R}$. By a straightforward adaptation of Proposition \ref{mkequi}, this last estimate remains valid in the divergent case; nevertheless, the fact that $\Gamma$ is divergent implies that the contribution of these isometries is negligible.
\begin{remark} The condition $\alpha >1$ ensures that the parabolic groups $\Gamma_1, \ldots, \Gamma_p$ are convergent. The additional condition $\alpha<2$ is used in Proposition \ref{mkmaj} to obtain a uniform upper bound for the powers $\widetilde P^k, k\geq 1$, of an operator $\widetilde P$ introduced in Section \ref{sec:Counting}; the proof of this proposition relies on previous work by one of the authors \cite{V} and is not valid for larger values of $\alpha$. The analogue of our Theorem \ref{theo:Comptage} for $\alpha \geq 2$ remains open. \end{remark}
The article is organized as follows. In the next section, we recall some background on negatively curved manifolds and construct examples of metrics for which the hypotheses of Theorem \ref{theo:Comptage} are satisfied. In section \ref{sec:Schottky}, we present Schottky groups and the coding which we use to express our geometric problem in terms of a sub-shift of finite type on a countable alphabet. In section \ref{sec:Ruelle}, we introduce the Ruelle operator for this sub-shift; it is the key analytical tool of this paper. Finally, section \ref{sec:Counting} is devoted to the proof of Theorem \ref{theo:Comptage}.
\section{Geometry of negatively curved manifolds}\label{sec:GeoNeg}
\subsection{Generalities}
In the sequel, we fix $N \geq 2$ and consider an $N$-dimensional complete connected Riemannian manifold $X$ with metric $g$ whose sectional curvatures satisfy $-B^2\leq K_X\leq -A^2<0$ for fixed constants $A$ and $B$; the metrics $g$ considered in this paper are obtained by perturbation of a hyperbolic one and their curvature equals $-1$ on large subsets of $ X$, thus we assume $0<A\leq 1\leq B$. We denote by $d$ the distance on $X$ induced by the metric $g$.
Let $\partial X$ be the boundary at infinity of $X$ and let us fix an origin ${\bf o} \in X$. As ${\bf x}\to x\in \partial X$, the family of functions $ \left({\bf y} \mapsto
d({\bf o},{\bf x})-d({\bf x}, {\bf y} )\right)_{ {\bf x} \in X}$ converges uniformly on compact sets to the {\em Busemann function} $ \mathcal B_{x}( {\bf o}, \cdot ) $. The {\em horoballs} $\mathcal{H}_{x}$ and the {\em horospheres} $\partial \mathcal{H}_{x}$ centered at $x$ are respectively the sup-level sets and the level sets of the function $ \mathcal B_{x}( {\bf o}, \cdot ) $. For any $t\in\mathbb R$, we set
$\mathcal{H}_{x}(t):=\{{\bf y}\slash \mathcal B_{x}( {\bf o}, {\bf y} ) \geq t \}$ and $\partial\mathcal{H}_{x}(t):=\{{\bf y}\slash \mathcal B_{x}( {\bf o}, {\bf y} ) = t\}$; the parameter $t = \mathcal B_{x}( {\bf o}, {\bf y}) - \mathcal B_{x}( {\bf o},{\bf x})$ is the {\em height} of ${\bf y}$ with respect to $x$. When no confusion is possible, we omit the index $x \in \partial X$ denoting the center of the horoball. Recall that the Busemann function satisfies the fundamental cocycle relation: for any $x \in \partial X$ and any $\bf x, y, z$ in $X$ $$\mathcal B_{x}( {\bf x}, {\bf z}) = \mathcal B_{x}( {\bf x}, {\bf y}) + \mathcal B_{x}( {\bf y}, {\bf z}). $$ The Gromov product between $x, y \in \partial X$, $x \neq y$, is defined as $$(x\vert y)_{{\bf o}} = {\mathcal B_x({\bf o}, {\bf z})+\mathcal B_y({\bf o}, {\bf z})\over 2}$$
where ${\bf z}$ is any point on the geodesic $(x, y)$ joining $x$ to $y$. By \cite{Bou}, the expression $$ D(x, y)= e^{ -A(x\vert y)_{{\bf o}} }$$ defines a distance on $\partial X $ satisfying the following property: for any $\gamma \in \Gamma$ $$ D(\gamma \cdot x, \gamma \cdot y)= e^{-{A\over 2} \mathcal B_x(\gamma^{-1}\cdot {\bf o}, {\bf o})} e^{-{A\over 2} \mathcal B_y(\gamma^{-1}\cdot {\bf o}, {\bf o})} D(x, y). $$ In other words, the isometry $\gamma$ acts on $(\partial X, D)$ as a conformal transformation with coefficient of conformality $ \vert \gamma'(x)\vert_{\bf o} = e^{-A \mathcal B_x(\gamma^{-1}\cdot {\bf o}, {\bf o})} $ at $x$ and satisfies the following equality \begin{equation} \label{TAF1} D(\gamma \cdot x, \gamma \cdot y)= \sqrt{\vert \gamma'(x)\vert_{\bf o} \vert \gamma'(y)\vert_{\bf o} } D(x, y). \end{equation} The function $x\mapsto \mathcal B_x(\gamma^{-1}\cdot {\bf o}, {\bf o})$ plays a central role in describing the action of the isometry $\gamma$ on the boundary at infinity $\partial X$. From now on, we denote it by $b(\gamma, \cdot )$ and notice that it satisfies the following ``cocycle property'': for any isometries $\gamma_1, \gamma_2$ of $X$ and any $x \in \partial X$ \begin{equation} \label{cocycleb} b(\gamma_1 \gamma_2, x) = b(\gamma_1, \gamma_2 \cdot x)+ b( \gamma_2, x). \end{equation} In order to describe the action on $\partial X$ of the isometries of $(X, g)$, it is useful to control precisely the behavior of the sequences $\vert (\gamma^n)'(x)\vert_{\bf o}$; the following fact provides a useful estimate of these quantities. \begin{fact}\label{lienentrebetd} (1) For any hyperbolic isometry $h$ with repulsive and attractive fixed points $\displaystyle x_h^-=\lim_{n\to +\infty} h^{-n}\cdot{\bf o}$ and $ \displaystyle x_h^+=\lim_{n\to +\infty} h^{n}\cdot{\bf o}$ respectively, it holds $$
b(h^{\pm n}, x) =d(o, h^{\pm n}\cdot o)-2(x_h^{\pm}\vert x)_o+\epsilon_x(n) $$ with $\displaystyle \lim_{n\to+\infty}\epsilon_x(n)=0$, the convergence being uniform on the compact sets of $\partial X\setminus\{x_h^{\mp}\}$.
(2) For any parabolic group $\mathcal P$ with fixed point $\displaystyle x_{\mathcal P}:=\lim_{\stackrel{p \in {\mathcal P}}{p\to +\infty}} p\cdot o$, it holds $$
b(p, x)=d(o, p\cdot o)-2(x_\mathcal P\vert x)_o+\epsilon_x(p) $$ with $\displaystyle \lim_{\stackrel{p \in \mathcal P}{p\to +\infty}} \epsilon_x(p)=0$, the convergence being uniform on the compact sets of $\partial X\setminus\{x_{\mathcal P} \}$. \end{fact}
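For the reader's convenience, let us note that the cocycle property (\ref{cocycleb}) follows directly from the cocycle relation of the Busemann function together with its invariance under isometries ($\mathcal B_{\gamma\cdot x}(\gamma\cdot {\bf y}, \gamma \cdot {\bf z})=\mathcal B_{x}({\bf y}, {\bf z})$ for any isometry $\gamma$ of $X$): for any isometries $\gamma_1, \gamma_2$ and any $x\in \partial X$,
\begin{eqnarray*}
b(\gamma_1\gamma_2, x) = \mathcal B_x(\gamma_2^{-1}\gamma_1^{-1}\cdot {\bf o}, {\bf o})
&=& \mathcal B_x(\gamma_2^{-1}\gamma_1^{-1}\cdot {\bf o}, \gamma_2^{-1}\cdot {\bf o})+\mathcal B_x(\gamma_2^{-1}\cdot {\bf o}, {\bf o})\\
&=& \mathcal B_{\gamma_2\cdot x}(\gamma_1^{-1}\cdot {\bf o}, {\bf o})+ b(\gamma_2, x) \ = \ b(\gamma_1, \gamma_2\cdot x)+b(\gamma_2, x).
\end{eqnarray*}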
\subsection{On the existence of convergent parabolic groups}
In this section, we recall briefly the construction presented in \cite{P} of convergent parabolic groups satisfying condition (\ref{influent}), up to a bounded term; we refer to \cite{P} for the details.
We consider on $\mathbb R^{N-1}\times \mathbb R$ a Riemannian metric of the form $g=T^2(t){\rm d}x_{ }^2+{\rm d}t ^2$ at point ${\bf x} = (x, t)$, where ${\rm d}x_{ }^2$ is a fixed euclidean metric on $\mathbb R^{N-1}$ and $T: \mathbb R\to \mathbb R^{*+}$ is a $C^{\infty}$ non-increasing function. The group of isometries of $g$ contains the isometries of $\mathbb R^{N-1}\times \mathbb R$ fixing the last coordinate. The sectional curvature at $\displaystyle {\bf x}= (x,t) $ equals $\displaystyle K_g(t)=-\frac {\ \ T''(t)}{T(t)}$ on any plane $\displaystyle \Big\langle \frac {\partial }{\partial X_i}, \frac {\partial}{\partial t}\Big\rangle, 1\leq i\leq N-1$, and $\displaystyle -\Big(\frac{T'(t)}{T(t)}\Big)^2$ on any plane $\displaystyle \Big\langle \frac {\partial }{\partial X_i}, \frac {\partial}{\partial X_j}\Big\rangle, 1\leq i<j\leq N-1$. Note that $g$ has negative curvature if and only if $T$ is convex; when $T(t)= e^{-t}$, one obtains a model of the hyperbolic space of constant curvature $-1$.
It is convenient to consider the
non-decreasing function \begin{eqnarray}\label{eq:FonctionU} u :\left\{\begin{array}{lll}
\mathbb R^{*+}&\to &\mathbb R\\
s&\mapsto& T^{-1}({1\over s}) \end{array}\right. \end{eqnarray} which satisfies the following implicit equation $\displaystyle
T(u(s))=\frac 1s. $ The hyperbolic metric with constant curvature $-1$ corresponds to the function $u(s) = \log s$. This function $u$ is of interest since it gives precise estimates (up to a \emph{bounded} term) of the distance between points lying on the same horosphere ${\mathcal H}_t:= \{(x, t): x \in \mathbb R^{N-1}\}$, where $t \in \mathbb R$ is fixed. Namely, the distance between ${\bf x}_t:=(x, t)$
and ${\bf y}_t:= (y, t)$ for the metric $ T^2(t){\rm d}x_{ }^2$ induced by $g$ on ${\mathcal H}_t$ is equal to $T(t)\Vert x-y\Vert_{ }$. Hence, this distance equals $1$ when $t=u( \Vert x-y\Vert_{ })$ and the union of the 3 segments $[{\bf x}_0, {\bf x}_t],
[{\bf x}_t, {\bf y}_t] $ and $[{\bf y}_t, {\bf y}_0]$ lies at a bounded distance from the hyperbolic geodesic joining ${\bf x}_0$ and ${\bf y}_0$ (see \cite{DOP}, Lemme 4): this readily implies that $d({\bf x}_0, {\bf y}_0)-2u( \Vert x-y\Vert)$ is bounded.
The ``curvature'' function $K_g$ may be expressed in term of $u$ as follows: \begin{equation}\label{curvature} K_g(u(s)):=-\frac {T''(u(s))}{T(u(s))}= -\frac {2 u'(s)+s u''(s)}{s^2(u'(s))^3}. \end{equation}
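Formula (\ref{curvature}) is obtained by differentiating the implicit relation $\displaystyle T(u(s))=\frac 1s$ twice; we sketch the computation for the reader's convenience:
\begin{eqnarray*}
T'(u(s))\, u'(s)=-\frac 1{s^2} &\Longrightarrow& T'(u(s))=-\frac 1{s^2 u'(s)},\\
T''(u(s))\, u'(s)=\frac {2su'(s)+s^2u''(s)}{s^4 (u'(s))^2} &\Longrightarrow& T''(u(s))=\frac {2u'(s)+s u''(s)}{s^3 (u'(s))^3},
\end{eqnarray*}
and dividing by $\displaystyle T(u(s))=\frac 1s$ yields (\ref{curvature}). In particular, for $u(s)=\log s$ one recovers $K_g\equiv -1$, as expected for the hyperbolic metric.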
For any $\alpha \geq 0$, let us consider the non-decreasing $C^2$-function $ u=u_{\alpha} $ from $ \mathbb R^{*+}$ to $\mathbb R$ such that $$ (i)\quad u_\alpha (s)=\log s \ \mbox{\rm if} \ 0<s\leq 1\quad {\rm and} \quad (ii)\quad u_\alpha (s)= \log s+ \alpha \log \log s \quad \mbox{\rm if} \quad s \geq s_{\alpha } $$ for some constant $s_{\alpha } >1$ chosen in the following way: using formula (\ref{curvature}) and following Lemma 2.2 in \cite{P}, for any $A\in ]0, 1[$, one may choose $s_{\alpha} >1$ in such a way that the metric $ g_\alpha =T_{u_{\alpha} }^2(t){\rm d}x_{ }^2+{\rm d}t ^2$ on $\mathbb R^{N-1}\times \mathbb R$ has pinched negative curvature, bounded from above by $-A^2$.
Let us emphasize that this metric coincides with the hyperbolic one on the subset $\mathbb R^{N-1}\times \mathbb R^-$ and that we can enlarge this subset as much as we want by shifting the metric $g_\alpha $ along the axis $\{0\}\times \mathbb R$ (see \cite{P} $\S$ 2.2).
Now, let $\mathcal P $ be a discrete group of isometries of $ \mathbb R^{N-1}$
spanned by $k$ linearly independent translations $p_{\vec{\tau}_1}, \cdots, p_{\vec{\tau}_k}$ in $ \mathbb R^{N-1}$. For any ${\bf n}=(n_1, \cdots, n_k) \in \mathbb Z^k$, we set $\vec{\bf n}= n_1\vec{\tau}_1+ \cdots +n_k\vec{\tau}_k$. The translations $p_{\vec{\bf n}}$ are also isometries of $(\mathbb R^N, g_\alpha)$ and the corresponding Poincar\'e series of $\mathcal P$ is given by, up to finitely many terms, \begin{eqnarray*}
P_{\mathcal P}(s) = \sum_{\Vert\vec{\bf n}\Vert > s_{\alpha}} e^{-sd({\bf o}, p_{\vec{\bf n}}\cdot{\bf o})}&=& \sum_{ \Vert\vec{\bf n}\Vert > s_{\alpha}}
{e^{-2su( \Vert\vec{\bf n}\Vert )-s O( 1)}}\\
&=&
\sum_{ \Vert\vec{\bf n}\Vert > s_{\alpha}} {e^{-s O(1)}\over
\Vert\vec{\bf n}\Vert^{2s} \Bigl(\log\Vert \vec{\bf n}\Vert \Bigr)^{2s\alpha} }. \end{eqnarray*} Thus, the Poincar\'e exponent of $\mathcal P$ equals ${k/2}$ and $\mathcal P$ is convergent if and only if $\alpha >1$.
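In the rank-one case $k=1$, this convergence criterion is the classical Bertrand series test, which can be checked against the explicit antiderivative of $1/(t(\log t)^{\beta})$ (a numerical sketch; here $\beta$ stands for $2s\alpha$ at $s=k/2=1/2$, i.e. $\beta=\alpha$):

```python
import math

# Integral test for sum_n 1/(n (log n)^beta): the antiderivative of
# 1/(t (log t)^beta) is (log t)^(1-beta)/(1-beta) for beta != 1, so the
# integral over [e^2, X] stays bounded as X grows iff beta > 1.
def tail(beta, X):
    F = lambda t: math.log(t) ** (1 - beta) / (1 - beta)
    return F(X) - F(math.exp(2))

# beta = 2 > 1: the tails increase but stay below 1/2 (convergence).
assert tail(2.0, 1e100) < tail(2.0, 1e300) < 0.5
# beta = 1/2 < 1: the tails blow up (divergence).
assert tail(0.5, 1e300) > 40.0
```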
\begin{remark} We can construct other similar metrics as follows. For $\alpha>1$, $\beta>0$, there exists $s_{\alpha, \beta}>0$ and $u_{\alpha,\beta} : (0, +\infty)\to \mathbb R$ such that
(i) $\quad u_{\alpha,\beta}(s)=\log s\quad $ if $\quad 0<s\leq 1$,
(ii) $\quad u_{\alpha,\beta}(s)= \log s + \alpha \log\log s + \beta \log \log \log s \quad $ if $\quad s\geq s_{\alpha,\beta}$,
(iii) $K_g(u(s))\leq -A$. \\ Hence, the Poincar\'e series of the parabolic subgroup $\mathcal P$ with respect to the metric $ g_{\alpha, \beta} = T^2_{u_{\alpha,\beta}}(t) {\rm d}x^2 + {\rm d}t^2$ is given by, up to finitely many terms,
\begin{eqnarray*}
P_{\mathcal P}(s) = \sum_{ \Vert\vec{\bf n}\Vert > s_{\alpha, \beta}} e^{-sd({\bf o}, p_{\vec{\bf n}}\cdot{\bf o})}&=& \sum_{ \Vert\vec{\bf n}\Vert > s_{\alpha, \beta}}
{e^{-2su( \Vert\vec{\bf n}\Vert )-s O(1)}}\\
&=&
\sum_{ \Vert\vec{\bf n}\Vert > s_{\alpha, \beta}} {e^{-s O(1)}\over
\Vert\vec{\bf n}\Vert^{2s} \Bigl(\log \Vert\vec{\bf n}\Vert\Bigr)^{2s\alpha} \Bigl(\log\log\Vert\vec{\bf n}\Vert\Bigr)^{2s\beta} }. \end{eqnarray*} \end{remark}
This implies that $\mathcal P$ converges as soon as $\alpha>1$, but this is not enough to ensure that $\mathcal P$ satisfies hypothesis (\ref{influent}). In the next paragraph, we present new metrics $g_\alpha$, close to those presented above, for which $$d(o, p_{\vec{\bf n}}\cdot o) = 2\left (\log \Vert\vec{\bf n}\Vert + \alpha \log \log \Vert\vec{\bf n}\Vert \right) + C + \epsilon(n),$$ where $C\in \mathbb R$ is a constant and $\displaystyle \lim_{n\to +\infty} \epsilon(n) = 0$.
\subsection{ On convergent parabolic groups satisfying condition (\ref{influent})}
Let us fix $N=2, \alpha>1$ and a slowly varying function $L: [0, +\infty[\to \mathbb R^{*+}$. We construct in this section a metric $g=g_{\alpha, L}=T^2(t){\rm d}x^2 + {\rm d} t^2$ on $\mathbb R\times \mathbb R$ such that the group spanned by the translation $(x,t) \mapsto (x+1, t)$ satisfies our hypothesis (\ref{influent}). The generalization to higher dimension is immediate.
For any real $t$ greater than some ${\mathfrak a}>0$ to be chosen, let us set $$T(t)=T_{ \alpha, L}(t) = e^{-t}{t^\alpha\over L(t)}.$$
Without loss of generality, we assume that $L$ is $C^{\infty}$ on $\mathbb R^+$ and that its derivatives $L^{(k)}, k \geq 1$, satisfy $ L^{(k)}(t)\longrightarrow 0 $ and $ \displaystyle \frac{L'(t)}{L(t)} \to 0 $ as $t\to +\infty$ (\cite{BGT}, Section 1.3). Furthermore, for any $\theta>0$, there exist $t_\theta \geq 0$ and $C_\theta \geq 1$ such that for any $t\geq t_\theta$ \begin{equation} \label{majorationslowlyvarying}
{1\over C_\theta t^\theta} \leq L(t)\leq C_\theta t^\theta.
\end{equation} Notice that $\displaystyle
-\frac{T ''(t)}{T (t)} = -\left(1-{\alpha\over t}+{L'(t)\over L(t)}\right)^2+{\alpha \over t^2}+\left({L'\over L}\right)'(t) <0 $ for $t \geq{\mathfrak a}$.
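This curvature computation can be checked numerically (an illustrative sketch with the admissible choice $L\equiv 1$ and $\alpha=2$, so that $L'/L\equiv 0$): for $T(t)=e^{-t}t^{\alpha}$ a direct computation gives $-T''/T=-(1-\alpha/t)^{2}+\alpha/t^{2}$, which we compare below against a finite-difference approximation of $T''$.

```python
import math

ALPHA = 2.0  # illustrative value of alpha

def T(t):
    # T(t) = e^{-t} t^alpha / L(t) with L == 1
    return math.exp(-t) * t ** ALPHA

def K_formula(t):
    # -T''/T computed in closed form (with L'/L == 0)
    return -(1 - ALPHA / t) ** 2 + ALPHA / t ** 2

def K_numeric(t, h=1e-4):
    # central second difference for T''
    T2 = (T(t - h) - 2 * T(t) + T(t + h)) / h ** 2
    return -T2 / T(t)

for t in (10.0, 30.0, 100.0):
    assert abs(K_formula(t) - K_numeric(t)) < 1e-5
# far inside the cusp the curvature is close to -1, as for T(t) = e^{-t}:
assert abs(K_formula(100.0) + 1.0) < 0.05
```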
We assume that $0<A<1<B$ and, following Lemma 2.2 in \cite{P}, extend $T_{\alpha, L}$ to $\mathbb R$ as follows. \begin{lemma}\label{talpha} There exists $\mathfrak a= \mathfrak a_{\alpha, L}>0$ such that the map $T=T_{\alpha, L} : \mathbb R \to (0, +\infty)$ defined by \begin{itemize} \item $T (t) = e^{-t}\quad $ for $\quad t\leq 0$, \item $T(t)= e^{-t} {t^\alpha\over L(t)}\quad $ for $\quad t\geq \mathfrak a_{\alpha, L}$, \end{itemize} admits a decreasing and twice continuously differentiable extension on $\mathbb R$ satisfying the inequalities $$ -B \leq K(t) = -\frac{T''(t)}{T(t)}\leq -A<0. $$ \end{lemma} Notice that this property still holds for any $\mathfrak a\geq \mathfrak a_{\alpha, L}$;
for technical reasons (see Lemma \ref{groupeconvergentHypotheses}), we assume that $\mathfrak a > 4\alpha$. A direct computation yields the following estimate for the function $u=u_{\alpha, L}$ given by the implicit equation (\ref{eq:FonctionU}). \begin{lemma}\label{lem:UAsymp} Let $u=u_{\alpha, L} : (0, +\infty) \to \mathbb R$ be such that $\displaystyle T_{\alpha, L}(u(s)) = \frac 1 s$ for any $s>0$. Then $$u(s)= \log s +\alpha\log \log s-\log L(\log s)+\epsilon(s)$$
with $\epsilon(s) \to 0$ as $s \to +\infty$. \end{lemma}
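The asymptotics of Lemma \ref{lem:UAsymp} can be recovered as follows (a sketch; the precise error analysis uses the slow variation of $L$). Taking logarithms in $T_{\alpha,L}(u(s))=1/s$ gives

```latex
-u(s)+\alpha\log u(s)-\log L(u(s))=-\log s,
\qquad\text{i.e.}\qquad
u(s)=\log s+\alpha\log u(s)-\log L(u(s)).
```

Since $u(s)=(1+o(1))\log s$, one has $\log u(s)=\log\log s+o(1)$ and, by the slow variation of $L$, $\log L(u(s))=\log L(\log s)+o(1)$; substituting back yields the stated expansion.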
We now consider the group $\mathcal P$ spanned by the translation $p$ of vector $\vec{i} = (1,0)$ in $\mathbb R^2$;
the map $p$ is an isometry of $(\mathbb R^2, g_{\alpha, L})$ which fixes the point $x=\infty $. By Lemma \ref{lem:UAsymp}, it holds
$$
d({\bf o}, p^n\cdot{\bf o})=2\Bigl(\log n +\alpha\log \log n-\log L(\log n)\Bigr)
$$
up to a bounded term. Hence, the group $\mathcal P$ has critical exponent $\frac 1 2$; furthermore, it is convergent since $\alpha>1$. $^($\footnote{Notice that the group $\mathcal P$ also converges when $\alpha = 1$ and
$\displaystyle\sum_{n \geq 1}{L(n)\over n} <+\infty;$ this situation is not explored here.}$^)$
The following proposition ensures that $\mathcal P$ satisfies hypothesis (\ref{influent}); in other words, the ``bounded term'' mentioned above tends to $0$ as $n \to +\infty$.
\begin{prop} \label{prop:PAsymp} The parabolic group $\mathcal P =\langle p \rangle$ on $(\mathbb R^2, g_{\alpha, L})$ satisfies the following property: for any $n\in \mathbb N$, $$ d({\bf o}, p^n\cdot{\bf o})= 2\Bigl(\log n +\ \alpha\log \log n -\log L(\log n)\Bigr) + \epsilon(n) $$ with $\displaystyle \lim_{n \to +\infty} \epsilon(n)=0$. In particular, if $\alpha >1$, then $\mathcal P$ is convergent with respect to $g_{\alpha, L}$.
\end{prop}
Let $\mathcal H = \mathbb R \times [0, +\infty)$ be the upper half plane $\{(x, t) \mid t\geq 0\}$ and $\mathcal H/\mathcal P$ the quotient cylinder endowed with the metric $g _{\alpha, L} =T _{\alpha, L}(t)^2{\rm d}x^2+{\rm d}t^2$. We do not estimate directly the distances $d({\bf o}, p^n\cdot{\bf o})$, since the metric $g _{\alpha, L}$ is not known explicitly for $t\in [0, \mathfrak a]$. Let us introduce the point ${\bf a} = ( 0, {\mathfrak a})\in \mathbb R^2$. The union of the three geodesic segments $[{\bf o}, {\bf a}], [{\bf a}, p^n\cdot{\bf a}]$ and $[p^n\cdot {\bf a}, p^n\cdot {\bf o}]$ is a quasi-geodesic; more precisely, since $d({\bf o}, {\bf a})= d(p^n\cdot {\bf o}, p^n \cdot {\bf a})$ is fixed and $d({\bf a}, p^n\cdot{\bf a})\to +\infty$, the following statement holds. \begin{lemma} Under the previous notations, $$\lim_{n\to +\infty} d({\bf o}, p^n \cdot{\bf o}) - d({\bf a}, p^n\cdot{\bf a}) = 2\mathfrak a.$$ \end{lemma} Proposition \ref{prop:PAsymp} follows from the following lemma.
\begin{lemma} \label{groupeconvergentHypotheses} Assume that $\mathfrak a \geq 4 \alpha$. Then $$ d({\bf a}, p^n\cdot{\bf a})= 2(\log n +\ \alpha\log \log n -\log L(\log n)- \mathfrak a )+ \epsilon(n) $$ with $\displaystyle \lim_{n \to +\infty} \epsilon(n)=0$.
\end{lemma} Proof.
Throughout this proof, we work on the upper half-plane $\mathbb R \times [\mathfrak a, +\infty[$ whose points are denoted $(x, \mathfrak a+t)$ with $x \in \mathbb R$ and $ t\geq 0$; we set
$${\mathcal T}(t) = T _{\alpha}(t+\mathfrak a)= e^{-\mathfrak a-t}{(t+\mathfrak a)^\alpha\over L(t+\mathfrak a)}.$$ In these coordinates, the quotient cylinder $\mathbb R \times [\mathfrak a, +\infty[/\mathcal P$ is a surface of revolution endowed with the metric ${\mathcal T}(t)^2{\rm d}x^2+{\rm d}t^2$. For any $n \in \mathbb Z$, denote $h_n$ the maximal height at which the geodesic segment $\sigma_n=[{\bf a}, p^n\cdot {\bf a}]$ penetrates inside the upper half-plane $\mathbb R \times [\mathfrak a, +\infty[$; it tends to $+\infty$ as $n \to \pm \infty$. The relation between
$n, h_n$ and $d_n:=d({\bf a}, p^n\cdot{\bf a})$ may be deduced from Clairaut's relation (\cite{DC}, Section 4.4, Example 5): $$
{n\over 2}={\mathcal T}(h_n)\int_{0}^{h_n}{{\rm d}t\over {\mathcal T}(t)\sqrt{{\mathcal T}^2(t)-{\mathcal T}^2(h_n)}} \quad {\rm and} \quad d_n=2\int_{0}^{ h_n}{{\mathcal T}(t){\rm d}t\over \sqrt{{\mathcal T}^2(t)-{\mathcal T}^2(h_n)}}. $$ These identities may be rewritten as $$
{n\over 2} ={1\over {\mathcal T}(h_n)}\int_{0}^{h_n}{f_n^2(s){\rm d}s\over
\sqrt{1-f_n^2(s)}} \qquad {\rm and} \qquad \quad d_n=2 h_n+ 2 \int_{0}^{ h_n} \Bigl({1\over \sqrt{1-f_n^2(s)}}-1\Bigr) {\rm d}s $$ where $\displaystyle f_n(s):= {{\mathcal T}(h_n)\over {\mathcal T}(h_n-s)}1_{[0, h_n]}(s).$
First, for any $s \geq 0$, the quantity $\displaystyle {f_n^2(s)\over
\sqrt{1-f_n^2(s)}}$ converges towards $\displaystyle {e^{-2s}\over\sqrt{1-e^{-2s}}}
$ as $n \to +\infty$. In order to use the dominated convergence theorem, we need the following property.
\begin{fact}\label{majorationf} There exists $n_0>0$ such that for any $n\geq n_0$ and any $s\geq 0$, $$ 0\leq f_n(s)\leq f(s):= e^{-s/2} $$ \end{fact}
\noindent Proof. Assume first $h_n/2\leq s\leq h_n$; taking $\theta=\alpha/2$ in (\ref{majorationslowlyvarying}) yields \begin{eqnarray*} 0\leq f_n(s)&=& \left(\mathfrak a + h_n\over \mathfrak a + h_n - s\right)^\alpha { L( \mathfrak a + h_n - s)\over L( \mathfrak a + h_n)} e^{-s} \\ &\leq& C_{\alpha/2}^2{(\mathfrak a + h_n)^{3\alpha/2}\over (\mathfrak a + h_n - s)^{\alpha/2} } e^{-s} \\ &\leq& {C_{\alpha/2}^2 \over \mathfrak a^{\alpha/2}} (\mathfrak a + h_n)^{3\alpha/2}
e^{-s}
\\ &\leq& {C_{\alpha/2}^2 \over \mathfrak a^{\alpha/2}} (\mathfrak a + h_n)^{3\alpha/2}e^{-{h_n\over 4}} e^{-{s\over 2}} \leq e^{-{s\over 2}} \end{eqnarray*} where the last inequality holds as soon as $h_n$ is large enough, depending only on $\mathfrak a$ and $\alpha$.
Assume now $0\leq s \leq h_n/2$; it holds $\displaystyle {1\over 2}\leq {\mathfrak a+h_n-s\over \mathfrak a +h_n}\leq 1 $ and
$0\leq {s\over \mathfrak a +h_n}\leq \min({1\over 2}, {s \over \mathfrak a})$.
Recall that $L'(t)/L(t)\to 0$ as $t\to +\infty$ and $0\leq {1\over 1-v }\leq e^{2v}$ for $0\leq v \leq {1\over 2}$; hence, for any
$\varepsilon >0$ and $n$ great enough (say $n \geq n_\varepsilon$), there exists $s_n\in (0,s)$ such that
\begin{eqnarray*} 0\leq f_n(s)&=& { L( \mathfrak a + h_n - s)\over L( \mathfrak a + h_n)} \left({1\over 1 - {s\over \mathfrak a + h_n}}\right)^\alpha e^{-s} \\ &\leq& \left(1 - s\frac{L'(\mathfrak a+h_n - s_n)}{L(\mathfrak a+h_n)}\right) e^{-(1-{2\alpha \over \mathfrak a})s} \\ & \leq & (1 + \varepsilon s)e^{-(1-{2\alpha \over \mathfrak a})s}\\ &\leq& e^{-(1-\varepsilon -{2\alpha \over \mathfrak a})s}.
\end{eqnarray*}
Consequently, fixing $\varepsilon >0$ such that $\displaystyle 2{\alpha\over\mathfrak a} + \varepsilon \leq \frac 1 2$ (which is possible since $\mathfrak a > 4\alpha$), we get $0\leq f_n(s)\leq e^{-s/2}$ for $n$ large enough. \rightline{$\Box$} Therefore, $$ 0\leq {f_n^2(s)\over
\sqrt{1-f_n^2(s)}}
\leq F(s):= {f^2(s)\over
\sqrt{1-f^2(s)}} $$ where the function $F$ is integrable on $\mathbb R^+.$ By the dominated convergence theorem, it follows that $${n\over 2}={1+\epsilon(n)\over {\mathcal T}(h_n)} \int_0^{+\infty} {e^{-2s}\over\sqrt{1-e^{-2s}}}{\rm d} s ={1+\epsilon(n)\over {\mathcal T}(h_n)}.$$ Consequently $h_n= \log n +\alpha \log \log n -\log L(\log n)-\log 2 -\mathfrak a +\epsilon(n)$.
Similarly $\displaystyle \lim_{n\to +\infty} \int_0^{h_n} \Bigl({1\over \sqrt{1-f_n^2(s)}}-1\Bigr) {\rm d}s= \int_0^{+\infty} \Bigl({1\over \sqrt{1-e^{-2s}}}-1\Bigr) {\rm d}s= \log 2, $ which yields $$d_n= 2(\log n +\alpha \log \log n -\log L(\log n) -\mathfrak a) +\epsilon(n).$$
\rightline{$\Box$}
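The two explicit integrals used at the end of the proof, $\int_0^{+\infty} e^{-2s}(1-e^{-2s})^{-1/2}\,{\rm d}s=1$ and $\int_0^{+\infty}\bigl((1-e^{-2s})^{-1/2}-1\bigr)\,{\rm d}s=\log 2$, can be verified numerically (a sketch; the substitution $s=x^2$ removes the $s^{-1/2}$ singularity at the origin):

```python
import math

def simpson(g, a, b, n=20000):
    # composite Simpson rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3

def g1(x):
    # integrand of the first integral after s = x^2 (continuous at 0)
    if x == 0.0:
        return math.sqrt(2.0)
    s = x * x
    return 2 * x * math.exp(-2 * s) / math.sqrt(-math.expm1(-2 * s))

def g2(x):
    # integrand of the second integral after s = x^2
    if x == 0.0:
        return math.sqrt(2.0)
    s = x * x
    return 2 * x * (1 / math.sqrt(-math.expm1(-2 * s)) - 1)

assert abs(simpson(g1, 0.0, 6.0) - 1.0) < 1e-6
assert abs(simpson(g2, 0.0, 6.0) - math.log(2.0)) < 1e-6
```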
The Poincar\'e exponent of $\mathcal P$ equals $1/2 $ and, as $R\to +\infty$, $$ \sharp\{p\in \mathcal P\mid 0\leq d({\bf o}, p\cdot {\bf o})<R \} \sim e^{ R/2}{ L(R)\over (R/2)^{\alpha} }. $$ Hence, for any $\Delta >0$, $$ \sharp\{p\in \mathcal P\mid R\leq d({\bf o}, p\cdot {\bf o})<R+\Delta\} \sim {1\over 2}\int_{R}^{R+\Delta} e^{t/2}{L(t)\over (t/2)^{\alpha}} {\rm d}t\quad{\rm as}\quad R\to+\infty $$ and $$
\lim_{R\to +\infty} { R ^{\alpha} \over L(R)}
\sum _{ \stackrel{p \in \mathcal P}{R\leq d({\bf o}, p\cdot {\bf o})<R+\Delta}}
e^{-{1\over 2} d({\bf o}, p\cdot {\bf o})} =2^{\alpha -1}\Delta $$ which is precisely Hypothesis (\ref{influent}).
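The last limit is a direct computation from the counting estimate (a sketch, using the slow variation of $L$):

```latex
\sum_{\substack{p\in\mathcal P\\ R\leq d({\bf o},p\cdot{\bf o})<R+\Delta}}
e^{-\frac12 d({\bf o},p\cdot{\bf o})}
\;\sim\; \frac12\int_R^{R+\Delta} e^{-t/2}\,e^{t/2}\,\frac{L(t)}{(t/2)^{\alpha}}\,{\rm d}t
\;=\; 2^{\alpha-1}\int_R^{R+\Delta}\frac{L(t)}{t^{\alpha}}\,{\rm d}t
\;\sim\; 2^{\alpha-1}\,\Delta\,\frac{L(R)}{R^{\alpha}}.
```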
\subsection{On the existence of non elementary exotic groups}
Explicit constructions of exotic groups, i.e. non-elementary groups $\Gamma$ containing a parabolic subgroup $\mathcal P$ whose Poincar\'e exponent equals $\delta_\Gamma$, have been detailed in several papers; first in \cite{DOP}, then in \cite{P}, \cite{DPPS} and \cite{V}. Let us describe them in the context of the metrics $g=g_{\alpha, L}$ presented above.
For any $a>0$ and $t\in \mathbb R$, we write $$ T_{\alpha, L, a}(t) = \left\{\begin{array}{ccc}
e^{-t} & \mbox{if} & t\leq a\\
e^{-a}T_{\alpha, L}(t-a) & \mbox{if} & t\geq a
\end{array}\right., $$ where $T_{\alpha, L}$ is defined in the previous paragraph. As in \cite{P}, we consider the metric on $\mathbb R^2$ given by $\displaystyle g_{\alpha, L, a} = T_{\alpha, L, a}^2(t){\rm d}x^2 + {\rm d}t^2$. It is a complete smooth metric with pinched negative curvature, which equals the hyperbolic one on $\mathbb R\times (-\infty, a)$. Note that $g_{\alpha, L, 0} = g_{\alpha, L}$ and $g_{\alpha, L, +\infty}$ is the hyperbolic metric on $\mathbb H^2$. By the previous subsection, for any $a\in (0, +\infty)$ and any $\tau\in \mathbb R^*$, a parabolic group of the form $\mathcal P = <(x,t) \mapsto (x+\tau, t)>$ is convergent. This allows us to reproduce the construction of a non-elementary group given in \cite{DOP} and \cite{P}.
Let $h$ be a hyperbolic isometry of $\mathbb H^2$ and $p$ be a parabolic isometry in Schottky position with $h$ (cf next section for a precise definition). They generate a free group $\Gamma = <h,p>$ which acts discretely without fixed point on $\mathbb H^2$. Up to a global conjugacy, we can suppose that $p$ is $(x,t) \mapsto (x+\tau, t)$ for some $\tau\in \mathbb R^*$. The surface $S = \mathbb H^2/\Gamma$ has a cusp, isometric to $\mathbb R/ \tau\mathbb Z\times (a_0, +\infty)$ for some $a_0>0$. Therefore, we can replace in the cusp the hyperbolic metric by $g_{\alpha, L, a}$ for any $a\geq a_0$; we also denote $ g_{\alpha, L, a}$ the lift of $g_{\alpha, L, a}$ to $\mathbb R^2$.
For any $n\in \mathbb Z^*$, the group $\Gamma_n = <h^n, p>$ acts freely by isometries on $(\mathbb R^2, g_{\alpha, L, a})$. It is shown in \cite{DOP} that, for $n>0$ large enough, the group $\Gamma_n$ also converges. This provides a family of examples for Theorem \ref{theo:Comptage}. By \cite{P}, if $\Gamma_n$ is convergent for some $a_0>0$, then there exists $a^*>a_0$ such that for any $a\in [a_0, a^*)$, the group $\Gamma_n$ acting on $(\mathbb R^2, g_{\alpha, L, a})$ is convergent, whereas for $a>a^*$, it has finite Bowen-Margulis measure and hence diverges. In some sense, the case $a = a^*$ is ``critical''; it is proved in \cite{P} that $\Gamma$ also diverges in this case. Under additional hypotheses on the tail of the Poincar\'e series associated to the factors $\Gamma_j, 1\leq j \leq p$ of $\Gamma$, P. Vidotto has obtained a precise estimate of the orbital function of $\Gamma$ in the case when its Bowen-Margulis measure is infinite \cite{V}; this is the analogue of Theorem \ref{theo:Comptage}, under slightly more general assumptions.
In \cite{DPPS}, the authors propose another approach based on a ``strong'' perturbation of the metric inside the cusp. Starting from a $N$-dimensional finite volume hyperbolic manifold with cuspidal ends, they modify the metric far inside one end in such a way that the corresponding parabolic group is convergent with Poincar\'e exponent $>1$ and turns the fundamental group of the manifold into a convergent group; in this construction, the sectional curvature of the new metric along certain planes is $<-4$ far inside the modified cusp.
\section{Schottky groups: generalities and coding}\label{sec:Schottky} From now on, we fix two integers $p\geq 1$ and $q\geq 0$ such that $\ell:=p+q\geq 2$ and consider a Schottky group $\Gamma$ generated by $\ell $ elementary groups
$\Gamma_1, \ldots, \Gamma_\ell$ of isometries of $X$. These elementary groups are in Schottky position, i.e. there exist disjoint closed sets $F_j$ in $\partial X$ such that, for any $1\leq j\leq \ell$
$$ \Gamma _j^*(\partial X\setminus F_j) \subset F_j. $$ The group $\Gamma$ spanned by the $\Gamma_j, 1\leq j \leq \ell,$ is called the Schottky product of the ${\Gamma_j}$'s and denoted $\Gamma=\Gamma_1\star \Gamma_2\star \cdots \star \Gamma_\ell$.
In this section, we present general properties of $\Gamma$. In particular, we do not require that conditions {\bf H1}, {\bf H2} and {\bf H3} hold; these hypotheses are only needed in the last section of this paper.
By the Klein ping-pong criterion, $\Gamma$ is the free product of the groups $\Gamma _i$; any element in $\Gamma$ can be written uniquely as a product $$ \gamma = a_1\dots a _k$$ for some $a_j \in \cup\Gamma_j^*$ with the property that no two consecutive elements $a_j$ belong to the same group. The set $\mathcal A= \cup\Gamma_j^*$ is called the {\it alphabet } of $\Gamma$, and $ a_1, \dots, a_k$ the {\it letters} of $\gamma$. The number $k$ of letters is the {\it symbolic length } of $\gamma$; let us denote by $\Gamma(k)$ the set of elements of $\Gamma$ with symbolic length $k$. The last letter of $\gamma$ plays a special role, and the index of the group it belongs to is denoted by $l_\gamma$. Applying Fact 2, one gets
\begin{property} \label{triangle} There exists a constant $C>0$ such that
$$d({\bf o}, \gamma.{\bf o})-C\leq B_x(\gamma^{-1}.{\bf o}, {\bf o})\leq d({\bf o}, \gamma.{\bf o})$$
for any $\gamma\in \Gamma= \star _i\;\Gamma _i$ and any $x\in \cup_{i\not=l_\gamma}F_i$. \end{property} \noindent This fact implies in particular the following crucial contraction property \cite{BP}. \begin{prop}\label{contraction} There exist a real number $r\in ]0, 1[$ and $C>0$ such that for any $\gamma $ with symbolic length $n\geq 1$ and any $x$ belonging to the closed set $\cup_{i\not= l_\gamma }F_i$ one has $$\vert\gamma'(x)\vert \leq Cr^n.$$ \end{prop}
The following statement, proved in \cite{BP}, provides a coding of the limit set $\Lambda(\Gamma)$, up to the $\Gamma$-orbits of the fixed points of the generators.
\begin{prop}\label{codagelimitset} Denote by $\Sigma^+$ the set of sequences $(a _n)_{n\geq 1}$ for which each letter $a _n$ belongs to the alphabet $\mathcal A= \cup\Gamma_i ^*$ and such that no two consecutive letters belong to the same group (these sequences are called admissible). Fix a point $x_0$ in $\partial X\setminus F$. Then \begin{enumerate} \item[(a)] For any ${\bf a}= (a_n)_{n \geq 1}\in \Sigma^+$, the sequence $(a_1\dots a_n\cdot x_0)_{n \geq 1}$ converges to a point $\pi ({\bf a})$ in the limit set of $\Gamma$, independent of the choice of $x_0$. \item [(b)] The map $\pi : \Sigma ^+\to \Lambda (\Gamma )$ is one-to-one and $\pi(\Sigma^{+})$ is contained in the radial limit set of $\Gamma$. \item [(c)] The complement of $\pi(\Sigma ^+)$ in the limit set of $\Gamma $ equals the $\Gamma$-orbit of the union of the limit sets $\Lambda (\Gamma _i)$. \end{enumerate} \end{prop}
From now on, we consider a Schottky product group $\Gamma$. Thus, up to a denumerable set of points, the limit set of $\Gamma$ coincides with $ \pi (\Sigma^+)$. For any $1\leq i \leq \ell$, let $\Lambda_i = \Lambda\cap F_i$ be the closure of the set of those limit points with first letter in $\Gamma_i $ (not to be confused with the limit set of $\Gamma_i$). The following description of $\Lambda=\Lambda(\Gamma)$ will be useful:
a) $\Lambda$ is the finite union of the sets $\Lambda_i$,
b) the closed sets $\Lambda _i, 1\leq i \leq \ell,$ are pairwise disjoint,
c) each of these sets is partitioned into a countable number of subsets with disjoint closures: $$ \Lambda_i= \cup_{a \in \Gamma^*_i}\cup_{j\not= i}\; a\cdot\Lambda_j\ . $$
Now, we enlarge the set $\Lambda$ in order to take into account the finite admissible words. We fix a point $x_0\notin \cup_jF_j$. There exists a one-to-one correspondence between $\Gamma\cdot x_0$ and $\Gamma$; furthermore, the point $\gamma\cdot x_0$ belongs to $F_j$ for any $\gamma \in \Gamma^*$ with first letter in $\Gamma_j$. We set $\widetilde \Sigma_+= \Sigma^+\cup \Gamma$ and notice that, by the previous Proposition, the natural map $\pi: \widetilde \Sigma_+ \to \Lambda(\Gamma)\cup \Gamma\cdot {x_0}$ is one-to-one with image $\pi(\Sigma^+)\cup \Gamma \cdot x_0$. Thus we introduce the following notations:
a) $\tilde \Lambda= \Lambda \cup \Gamma \cdot x_0;$
b) $\tilde \Lambda_i= \tilde \Lambda\cap F_i$ for any $1\leq i \leq \ell$.
\noindent The set $\tilde \Lambda$ is the disjoint union of $\{x_0\}$ and the sets $\tilde \Lambda_i, 1\leq i \leq \ell$; furthermore, each $\tilde\Lambda_i$ is partitioned into a countable number of subsets with disjoint closures: $$ \tilde \Lambda_i= \cup_{a \in \Gamma^*_i}\cup_{j\not= i}\; a\cdot \tilde \Lambda_j\ . $$
The cocycle $b$ defined in (\ref{cocycleb}) plays a central role in the sequel. In order to compute the distance between two points of the orbit $\Gamma \cdot {\bf o}$, we consider an extension $\tilde b$ of this cocycle, defined as follows on $\tilde \Lambda$: for any $\gamma \in \Gamma$ and $x \in \tilde \Lambda$, $$ \tilde b(\gamma, x):=
\Bigl\{
\begin{array}{lllll}
b(\gamma, x)= \mathcal B_x(\gamma^{-1} {\bf o}, {\bf o})& {\rm if} & x\in
\Lambda;&\ & \\ d(\gamma^{-1} \cdot {\bf o}, g\cdot {\bf o})-d({\bf o}, g\cdot{\bf o}) & {\rm if} & x=g\cdot x_0& {\rm for \ some} & g \in \Gamma. \end{array} \Bigr. $$ The cocycle equality (\ref{cocycleb}) is still valid for the function $\tilde b$; furthermore, if $\gamma\in \Gamma$ decomposes as $\gamma =a_1\cdots a_k$, then $$ d({\bf o}, \gamma \cdot {\bf o}) = \tilde b(a_1, \gamma_2\cdot x_0) +\tilde b(a_2, \gamma_3\cdot x_0)+\cdots+ \tilde b(a_k, x_0), $$ where $\gamma_l= a_l\cdots a_k$ for $2\leq l\leq k$.
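The last displayed identity is obtained by iterating the cocycle equality at the point $x_0$ (a sketch): since $x_0=\mathrm{id}\cdot x_0$,

```latex
\tilde b(\gamma, x_0)=d(\gamma^{-1}\cdot{\bf o},{\bf o})-d({\bf o},{\bf o})
=d({\bf o},\gamma\cdot{\bf o}),
\qquad
\tilde b(a_1\cdots a_k, x_0)
=\tilde b(a_1, a_2\cdots a_k\cdot x_0)+\tilde b(a_2\cdots a_k, x_0),
```

and applying the second identity $k-1$ times gives the telescoping sum.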
\section{ On the Ruelle operators $\mathcal L_ {s}, s \in \mathbb R$}\label{sec:Ruelle} In this section, we describe the main properties of the transfer operators $\mathcal L_ {s}, s \in \mathbb R,$ defined formally
by: for any function $\phi:
\tilde \Lambda \ \to \mathbb C$ and $x \in \tilde \Lambda$, $$\mathcal L_ {s}\phi(x)= \sum_{\gamma\in \Gamma(1)} {\bf 1}_{x \notin \tilde \Lambda_{l_\gamma}} e^{-s\tilde b(\gamma, x)}\phi(\gamma\cdot x)=\sum_{j=1}^{\ell}
\sum_{\gamma \in \Gamma^*_j } {\bf 1}_{x \notin {\tilde \Lambda_j}} e^{-s\tilde b(\gamma, x)}\phi(\gamma\cdot x). $$ For any $1\leq j\leq \ell$, the sequence $(\gamma \cdot {\bf o})_{\gamma \in \Gamma_j}$ accumulates on the fixed point(s) of $\Gamma_j$. So for any $x \notin \tilde \Lambda_j$, the sequence $\left( \tilde b(\gamma, x)- d ({\bf o}, \gamma.{\bf o} )\right)_{\gamma \in \Gamma_j}$ is bounded uniformly in $x \notin \tilde \Lambda_j$. Therefore the quantity $\mathcal L_ {s}1(x)$ is well defined as soon as $s\geq \delta:= \max\{\delta_{\Gamma_j}\mid 1\leq j \leq \ell\}$. The powers of $\mathcal L_ {s}, s\geq \delta,$ are formally given by: for any $k \geq 1$, any function $\phi: \tilde \Lambda \ \to \mathbb C$ and any $x \in \tilde \Lambda$, $$ \mathcal L^k_ {s}\phi(x)=
\sum_{\gamma \in \Gamma(k)} {\bf 1}_{x \notin {\tilde \Lambda_{l_\gamma}}} e^{-s\tilde b(\gamma, x)}\phi(\gamma\cdot x). $$ It is easy to check that the operators $\mathcal L_ {s}, s \geq \delta$, act on $(C(\tilde \Lambda), \vert \cdot \vert _\infty)$; we denote by $ \rho_s(\infty)$ the spectral radius of $\mathcal L_s$ on this space.
\subsection{ Poincar\'e series versus Ruelle operators}
By the ``ping-pong dynamic'' between the subgroups $\Gamma_j, 1\leq j\leq \ell$, and Property \ref{triangle}, we easily check that the difference
$
\tilde b(\gamma, x)-d({\bf o}, \gamma\cdot{\bf o})
$
is bounded uniformly in $k\geq 0$, $\gamma \in \Gamma(k)$ and $x \notin \tilde \Lambda_{l_\gamma}$. Consequently, there exists a constant $c >0$ such that, for any $x \in \tilde \Lambda$, any $k \geq 1$ and any $s \geq \delta$,
$$ \mathcal L^k_ {s}1(x) \stackrel{c}{\asymp} \sum_{\gamma \in \Gamma(k)} e^{-s d({\bf o}, \gamma \cdot{\bf o})}
$$
where $A\stackrel{c}{\asymp}B$ means ${A\over c}\leq B \leq cA$. Hence, \begin{equation}\label{divergence-convergence}
\displaystyle P _\Gamma(s):= \sum_{ \gamma \in \Gamma} e^{-sd({\bf o}, \gamma\cdot{\bf o})}=+\infty
\quad \Longleftrightarrow\quad
\sum_{k\geq 0} \mathcal L^k_ {s}1(x)=+\infty.
\end{equation} In particular \begin{equation}\label{criticalexponent} \delta_\Gamma=\sup\{s\geq \delta\mid \rho_s(\infty)\geq 1\}= \inf\{s\geq \delta\mid \rho_s(\infty)\leq 1\}.
\end{equation}
It is proved in the next paragraph that $\Gamma$ is convergent if and only if $\rho_\delta(\infty)<1$.
\subsection{ On the spectrum of the operators $\mathcal L_ {s}, s\geq \delta $}
In order to control the spectral radius (and the spectrum) of the transfer operators $\mathcal L_{s}$, we study their restriction to the space ${\bf Lip}( \tilde \Lambda) $ of Lipschitz functions from $ \tilde \Lambda$ to $\mathbb C$ defined by $${\bf Lip}( \tilde \Lambda)=\{\phi \in C( \tilde \Lambda);\; \Vert\phi\Vert =
|\phi|_{\infty}+[\phi] <+\infty\}$$ where $\displaystyle [\phi] =\sup_{1\leq j\leq \ell}
\sup_{\stackrel{x, y \in \tilde \Lambda_j}{x\neq y}}{|\phi(x)-\phi(y)|\over D(x, y) }$ is the Lipschitz coefficient of $\phi$ on $(\partial X, D)$.
The space $({\bf Lip}( \tilde \Lambda),\Vert.\Vert )$ is a Banach space and the identity map from $({\bf Lip}( \tilde \Lambda), \Vert.\Vert )$
into $(C( \tilde \Lambda), |.|_{\infty})$ is
compact. It is proved in \cite{BP} that the operators $\mathcal L_ {s}, s \geq \delta$, act both on $(C( \Lambda), \vert \cdot \vert _\infty)$ and $({\bf Lip}( \Lambda), \Vert \cdot \Vert )$; P. Vidotto has extended in \cite{V} this property to the Banach spaces $(C( \tilde \Lambda), \vert \cdot \vert _\infty)$ and $({\bf Lip}(\tilde \Lambda), \Vert \cdot \Vert )$. We denote by $ \rho_s$ the spectral radius of $\mathcal L_ {s}$ on ${\bf Lip}( \tilde \Lambda)$; in the following proposition, we state the spectral properties of the $\mathcal L_s$ we need in the present paper. \begin{prop}\label{resumeDCDS} We assume $\ell=p+q\geq 3\ ^($\footnote{Recall that $\ell\geq 2$ since $\Gamma$ is non-elementary. When $\ell = 2$, the real $-\rho_s$ is also a simple eigenvalue of $\mathcal L_s$; a similar statement to Proposition \ref{resumeDCDS} holds for the restriction of $\mathcal L_s$ to each space ${\bf Lip}( \tilde \Lambda_i), i=1, 2$ \cite{BP}. }$^)$. For any $s \geq \delta$, \begin{enumerate} \item $ \rho_s=\rho_s(\infty);$ \item $ \rho_s$ is a simple eigenvalue of $\mathcal L_s $ acting on ${\bf Lip}( \tilde \Lambda)$ and the associated eigenfunction $h_s $ is non-negative on $ \tilde \Lambda$; \item there exists $0\leq r <1$ such that the rest of the spectrum of $\mathcal L_s $ on ${\bf Lip}( \tilde \Lambda)$ is included in a disc of radius $\leq r \rho_s$. \end{enumerate}
\end{prop} \noindent Sketch of the proof. We refer to \cite{BP} and \cite{V} for the details. For any $s\geq \delta$ and any $\gamma$ in $\Gamma^*$, let $w_s( \gamma, .)$ be the {\it weight function} defined on $\tilde \Lambda$ by $$
w_s(\gamma, x):=
\Bigl\{
\begin{array}{lll}
e^{-s \tilde b(\gamma, x)}& {\rm if} & x\in \tilde \Lambda_j, j\neq l_\gamma,\\ 0& {\rm otherwise.} & \end{array} \Bigr. $$ Observe that these functions satisfy the following cocycle relation : if $ \gamma_1, \gamma_2 \in \mathcal A$ do not belong to the same group $\Gamma_j$, then $$
w_s(\gamma_1\gamma_2, x)= w_s(\gamma_1, \gamma_2\cdot x)
w_s(\gamma_2, x). $$ Due to this cocycle property, we may write, for any $k \geq 1$, any bounded function $\varphi: \tilde \Lambda \to \mathbb R$ and any $x \in \tilde \Lambda$
$$ \mathcal L_s^k\varphi(x)=\sum_{\gamma \in \Gamma(k)}
w_s(\gamma, x) \varphi(\gamma\cdot x). $$ In \cite{BP}, it is proved that the restriction of the functions
$ w_s(\gamma, .)$ to the set $\Lambda$ belong
to
${\bf Lip}( \Lambda)$ and that for any $s\geq \delta$ there exists $C= C(s)>0$ such that, for any $\gamma$ in $\Gamma^*$ $$ \Vert w_s(\gamma, .)\Vert \leq Ce^{-s d({\bf o}, \gamma.{\bf o})}. $$ In \cite{V}, Proposition 8.5, P. Vidotto has proved that the same inequality holds for the functions
$ w_s(\gamma, .)$ on $\tilde \Lambda$. Thus, the operator $\mathcal L_s$ is bounded on
${\bf Lip}( \tilde \Lambda)$ when $s\geq \delta $.
In order to describe its spectrum on ${\bf Lip}( \tilde \Lambda)$, we first write a ``contraction property'' for the iterated operators $\mathcal L_s^k$; indeed, $$\vert \mathcal L^k_s\varphi(x)-\mathcal L^k_s\varphi(y)\vert
\leq \sum_{\gamma\in \Gamma(k)} \vert w_s(\gamma,x)\vert \; \vert\varphi(\gamma\cdot x)-\varphi(\gamma\cdot y)\vert + \sum_{\gamma\in \Gamma(k)} [w_s(\gamma,.)]\;\vert \varphi\vert_\infty D(x, y).$$
By Proposition \ref{contraction} and the mean value relation (\ref{TAF1}), there exist $C>0$ and $0\leq r<1$
such that
$D( \gamma\cdot x,\gamma\cdot y)\leq C r^k D(x, y)$ whenever $x, y\in \tilde \Lambda_j$, $j\not=l_\gamma$. This
leads to the following inequality \begin{equation}\label{DFr} [\mathcal L_s^k\varphi] \leq r_k [\varphi] + R_k
|\varphi|_{\infty} \end{equation}
where $r_k = \Bigl(C r^k\Bigr) \; |\mathcal L_{s}^k1|_{\infty}$ and $R_k = \sum_{\gamma \in \Gamma(k)} [ w_s(\gamma,.)]$. Observe that $$ \limsup_k r_k^{1/k}=
r \limsup_k |\mathcal L_{s}^k1|_{\infty}^{1/k} = r \rho_s(\infty)$$ where $\rho_s(\infty)$ is the spectral radius of the positive operator $\mathcal L_{s}$ on $C( \tilde \Lambda)$. Inequality (\ref{DFr}) is crucial in the Ionescu-Tulcea-Marinescu theorem for quasi-compact operators. By Hennion's work \cite{H}, it implies that the essential spectral radius of $\mathcal L_s$ on ${\bf Lip}( \tilde \Lambda) $ is less than $ r \rho_s(\infty)$; in other words, any spectral value of $\mathcal L_s$ with modulus strictly larger than $ r \rho_s(\infty)$ is an eigenvalue with finite multiplicity and is isolated in the spectrum of $\mathcal L_s$.
This implies
in particular $ \rho_s = \rho_s(\infty)$. Indeed, the inequality
$ \rho_s\geq \rho_s(\infty)$ is obvious since the function $1$
belongs to ${\bf Lip}( \tilde \Lambda)$. Conversely, the strict inequality would imply the existence of a function $\phi \in {\bf Lip}( \tilde \Lambda)$ such that ${\mathcal L}_{s}\phi = \lambda \phi$ for some $\lambda \in \mathbb C$ of modulus
$> \rho_s(\infty)$ ; this yields $| \lambda| |\phi| \leq {\mathcal
L}_{s} |\phi|$ so that $|\lambda |\leq \rho_s(\infty)$. Contradiction.
It remains to control the value $ \rho_s$ in the spectrum of ${\mathcal L}_{s}$. By the above, we know that $ \rho_s$ is an eigenvalue of $\mathcal L_s$ with (at least) one associated eigenfunction $h_s \geq 0$. This function is strictly positive on $\tilde \Lambda$: otherwise, there exist $1\leq j \leq p+q$ and a point $y_0\in \tilde \Lambda_j$ such that $h_s(y_0)=0$. The equality ${\mathcal L}_{s}h_s(y_{0})=
\rho_sh_s(y_{0})$ implies
$h_s(\gamma \cdot y_{0}) = 0$ for any $\gamma \in \Gamma$ whose last letter does not belong to $\Gamma_j$. The minimality of the action of $\Gamma$ on $\Lambda$ and the fact that $\Gamma \cdot x_0$ accumulates on $\Lambda$ imply $h_s= 0$ on $\tilde \Lambda$. Contradiction.
In order to prove that $ \rho_s$ is a simple eigenvalue of $\mathcal L_s$ on $ {\bf Lip}( \tilde \Lambda)$, we use a classical argument from probability theory related to the ``Doob transform'' of a sub-Markovian transition operator.
For any $s \geq \delta$, we denote by $P_s$ the operator defined formally by: for any bounded Borel function $\phi: \tilde \Lambda \to \mathbb C$ and $x \in \tilde \Lambda$, $$ P_s \phi(x)= {1\over \rho_s h_s (x)}\mathcal L_s (h_s\phi)(x)={1\over \rho_s h_s (x)}\sum_{\gamma \in \Gamma(1)}e^{-s \tilde b(\gamma, x)} h_s (\gamma\cdot x) \phi(\gamma\cdot x). $$ The iterates of $P_s $ are given by: $P_s ^0= {\rm Id}$ and for $k\geq 1$ \begin{equation}\label{iteresPs} P_s ^k\phi(x)= \int_X\phi(y)P_s ^k(x, dy)= {1\over \rho_s ^k h_s (x)}\sum_{\gamma \in \Gamma(k)}e^{-s \tilde b (\gamma, x)}h_s (\gamma\cdot x)\phi(\gamma\cdot x). \end{equation}
The operator $P_s $ acts on ${\bf Lip}( \tilde \Lambda) $ as a Markov operator, i.e. $P_s \phi\geq 0$ if $\phi \geq 0$ and $P_s {\bf 1} = {\bf 1}$. It inherits the spectral properties of $\mathcal L_s$ and is in particular quasi-compact with essential spectral radius
$<1$. The spectral value $1$ is an eigenvalue and it remains to prove that the associated eigenspace is $\mathbb C \cdot 1$. Let $f \in {\bf Lip}( \tilde \Lambda)$ be such that $P_sf=f$, and let $1\leq j \leq p+q$ and $y_0\in \tilde \Lambda_j$ be such that $\vert f(y_{0})\vert = \vert f\vert_{\infty}$. An argument of convexity applied to
the inequality $\vert f\vert = \vert P_s f\vert \leq P_s\vert f\vert$ readily implies
$|f(y_{0})|= |f(\gamma\cdot y_{0})|$ for any $\gamma \in \Gamma$ with last letter $\neq j$; by minimality of the action of $\Gamma$ on $ \tilde \Lambda$, it follows that the modulus of $f$ is constant on $\tilde \Lambda$. Applying again an argument of convexity, the minimality of the action of $\Gamma$ on $ \tilde \Lambda$ and the fact that $\Gamma\cdot x_0$ accumulates on $\Lambda$, one proves that $f$ is in fact constant on $ \tilde \Lambda$. Finally, the eigenspace of $\mathcal L_s$ associated with $ \rho_s$ equals $\mathbb C \cdot 1$.
Similarly, using the fact that $\ell \geq 3$, one may prove that the peripheral spectrum of $\mathcal L_s$, i.e. the set of eigenvalues $\lambda$ with $|\lambda| = \rho_s$, reduces to $\{\rho_s\}$; we refer the reader to Proposition III.4 of \cite{BP} and Proposition 8.6 of \cite{V}.
\rightline{$\Box$}
Expression (\ref{iteresPs}) leads to the following.
\begin{notations} For any $s\geq \delta,$ any $x \in \tilde \Lambda, $ any $ k\geq 0$ and any $\gamma \in \Gamma(k)$, set \begin{eqnarray}\label{poids-proba} p_s (\gamma, x)&:=& {1\over \rho_s ^k }{h_s (\gamma\cdot x)\over h_s (x)} w_s(\gamma, x). \end{eqnarray} \end{notations} As for the $w_s(\gamma, \cdot)$, these ``weight functions'' are positive and satisfy the cocycle property $$ p_s (\gamma_1\gamma_2, x)= p_s (\gamma_1, \gamma_2\cdot x)\cdot p_s (\gamma_2, x) $$ for any $s\geq \delta, x \in \tilde \Lambda$ and $\gamma_1, \gamma_2 \in \Gamma$.
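For the reader's convenience, the cocycle identity can be checked directly from definition (\ref{poids-proba}): for $\gamma_1 \in \Gamma(k_1)$ and $\gamma_2 \in \Gamma(k_2)$ with $\gamma_1\gamma_2 \in \Gamma(k_1+k_2)$,
$$
p_s(\gamma_1\gamma_2, x)
= {1\over \rho_s^{\,k_1+k_2}}\,{h_s(\gamma_1\gamma_2\cdot x)\over h_s(x)}\, w_s(\gamma_1\gamma_2, x)
= \Bigl({1\over \rho_s^{\,k_1}}\,{h_s(\gamma_1\cdot (\gamma_2\cdot x))\over h_s(\gamma_2\cdot x)}\, w_s(\gamma_1, \gamma_2\cdot x)\Bigr)
\Bigl({1\over \rho_s^{\,k_2}}\,{h_s(\gamma_2\cdot x)\over h_s(x)}\, w_s(\gamma_2, x)\Bigr),
$$
where the second equality uses the cocycle property of $w_s$ and the telescoping of the factors $h_s$.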
Let us emphasize that $\displaystyle \sum_{\gamma\in \Gamma(k)} p_s (\gamma, x)=1$; in other words, the operator $P_s$ is markovian.
\rightline{$\Box$}
\begin{coro} \label{coroconvergent} The group $\Gamma$ is convergent if and only if $\rho_\delta<1$. \end{coro} Proof. If $\rho_\delta=\rho_\delta(\infty) <1$ then $\rho_s<1$ for any $s\geq\delta$, since $s\mapsto \rho_s(\infty)=\rho_s$ is decreasing on $[\delta, +\infty[$. Equality (\ref{criticalexponent}) implies $\delta_\Gamma \leq \delta$ and so $\delta_\Gamma=\delta$; by (\ref{divergence-convergence}), it follows
that $\Gamma$ is convergent.
Assume now $\rho_\delta\geq 1$. When $\Gamma$ is non-exotic, it is divergent by \cite{DOP}. Otherwise, $\delta_\Gamma=\delta$ and since the eigenfunction $h_\delta$ is continuous and strictly positive on $ \tilde \Lambda$, we have, for any $k\geq 1$ and $x \in \tilde \Lambda$ $$ \mathcal L_\delta ^k1(x)\asymp \mathcal L_\delta ^kh_\delta(x)=\rho_\delta^k h_\delta(x)\asymp \rho_\delta^k. $$ Consequently $\displaystyle \sum_{k\geq 0} \mathcal L_\delta ^k1(x)=+\infty$ and the group $\Gamma$ is divergent, by (\ref{divergence-convergence}).
\rightline{$\Box$}
\section{Counting for convergent groups}\label{sec:Counting}
Throughout this section we assume that $\Gamma$ is convergent on $(X, g )$; by Corollary \ref{coroconvergent}, this is equivalent to the condition $\rho_\delta <1$.
For any $\phi \in {\bf Lip}( \tilde \Lambda)$, any $x \in \tilde \Lambda$ and $R>0$, let us denote by $M(R, \phi\otimes\cdot\,)(x)$ the measure on $\mathbb R$ defined by:
$$
M(R, \phi\otimes u)(x):= \sum_{\gamma \in \Gamma} e^{-\delta \tilde b(\gamma, x)}\phi(\gamma\cdot x)u(-R+\tilde b(\gamma, x)).
$$ It holds that $0\leq M(R, \phi\otimes u)(x) <+\infty$ when $u$ has compact support in $\mathbb R$, since the group $\Gamma$ is discrete.
The orbital function of $\Gamma$ may be decomposed as
$$N_\Gamma(R)= e^{\delta R} \sum_{n\geq 0}M(R, {\bf 1} \otimes e_n)(x_0)
$$ with $e_n(t):= e^{\delta t}{\bf 1}_{]-(n+1), -n]}(t).
$ Hence, Theorem \ref{theo:Comptage} is a direct consequence of the following statement.
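Indeed, with the convention $\tilde b(\gamma, x_0)=d({\bf o}, \gamma\cdot{\bf o})$ used in this decomposition, the exponential factors cancel:
$$
e^{\delta R}\, e^{-\delta \tilde b(\gamma, x_0)}\, e_n\bigl(-R+\tilde b(\gamma, x_0)\bigr)
= {\bf 1}_{]R-n-1,\, R-n]}\bigl(\tilde b(\gamma, x_0)\bigr),
$$
and the intervals $]R-n-1, R-n]$, $n\geq 0$, cover $]-\infty, R]$; summing over $\gamma \in \Gamma$ and $n\geq 0$ thus recovers the orbital counting function.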
\begin{prop} \label{convergence vague} For any positive function $\phi \in {\bf Lip}( \tilde \Lambda)$ and any $x \in \tilde \Lambda$, there exists $C_\phi(x)>0$ such that for any continuous function $u: \mathbb R\to \mathbb R$ with compact support, $$
\lim_{R\to +\infty}{R^{\alpha}\over L(R)}M(R, \phi\otimes u)(x) = C_\phi(x) \int_{\mathbb R}u(t) {\rm d}t. $$ \end{prop}
This section is devoted to the proof of Proposition \ref{convergence vague}. From now on, we fix a positive function $\phi\in {\bf Lip}( \tilde \Lambda) $ and a continuous function $u: \mathbb R\to \mathbb R^+$ with compact support. Let us decompose $M(R, \phi\otimes u)(x)$ as $$ M(R, \phi\otimes u)(x)=\sum_{k \geq 0} M_k(R, \phi\otimes u)(x) $$ with $$
M_k(R, \phi\otimes u)(x):=
\sum_{\gamma \in \Gamma(k)} e^{-\delta \tilde b(\gamma, x)}\phi(\gamma\cdot x) u(-R+\tilde b(\gamma, x)). $$ Thus, it is natural to associate to $P_s, s\geq \delta, $ a new transition operator $\widetilde P_s $ on $ \tilde \Lambda \times \mathbb R$, setting: for any $\phi\in {\bf Lip}( \tilde \Lambda) $, any Borel function $v: \mathbb R\to \mathbb R$ and any $(x, t) \in \tilde \Lambda\times \mathbb R,$ \begin{eqnarray*}\label{opeP} \widetilde P_s (\phi\otimes v)(x, t)&=& {1\over \rho_s h_s (x)}\sum_{\gamma \in \Gamma(1)}e^{-s \tilde b(\gamma, x)} h_s (\gamma\cdot x)\phi(\gamma\cdot x) v(t+\tilde b(\gamma, x))\\ &=& \sum_{\gamma \in \Gamma(1)}p_s(\gamma, x) \phi(\gamma\cdot x) v(t+\tilde b(\gamma, x)).\notag \end{eqnarray*} Notice that $\widetilde P_s $ is also a Markov operator on $ \tilde \Lambda \times \mathbb R$; it commutes with the action of translations on $\mathbb R$ and one usually says that it defines a semi-Markovian random walk on $ \tilde \Lambda \times \mathbb R$.
Its iterates are given by: $\widetilde P_s^0= {\rm Id}$ and, for any $k\geq 1$, $$ \widetilde P_s ^k(\phi\otimes v)(x, t)= \sum_{\gamma \in \Gamma(k)} p_s(\gamma, x)\phi(\gamma\cdot x)v(t+\tilde b(\gamma, x)). $$ From now on, to lighten notations we write $P = P_\delta$, $\tilde P = \tilde P_\delta$, $h = h_\delta$, $ p = p_\delta$ and $\rho = \rho_\delta<1$. We rewrite the quantity $ M_k(R, \phi\otimes u)(x)$ as $$
M_k(R, \phi\otimes u)(x)=\rho ^kh (x) \widetilde P^k\left( {\phi\over h }\otimes u\right)(x, -R), $$ so that, \begin{equation}\label{Maspotential} M(R, \phi\otimes u)(x)=h (x)\sum_{k\geq 0}\rho ^k \widetilde P^k\left({\phi\over h }\otimes u\right)(x, -R). \end{equation} We first control the behavior as $R\to +\infty$ of the quantity $ M_1(R, \phi\otimes u)(x)$.
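This rewriting follows directly from the expression of the weights at $s=\delta$: for $\gamma \in \Gamma(k)$, $p(\gamma, x)= \rho^{-k}\, {h(\gamma\cdot x)\over h(x)}\, e^{-\delta \tilde b(\gamma, x)}$, so that
$$
\rho^{k} h(x)\, \widetilde P^{k}\Bigl({\phi\over h}\otimes u\Bigr)(x, -R)
= \sum_{\gamma\in \Gamma(k)} e^{-\delta \tilde b(\gamma, x)}\, h(\gamma\cdot x)\, {\phi(\gamma\cdot x)\over h(\gamma\cdot x)}\, u\bigl(-R+\tilde b(\gamma, x)\bigr)
= M_k(R, \phi\otimes u)(x).
$$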
\begin{prop}\label{prop:AsymPk1}
For any continuous function $u: \mathbb R\to \mathbb R$ with compact support, there exists a constant $C_u>0$ such that, for any $\varphi \in {\bf Lip}( \tilde \Lambda)$, any $ x \in \tilde \Lambda$ and $R\geq 1$,
\begin{equation}\label{majorforbusemann}
\Big\vert \widetilde P (\varphi\otimes u)(x, -R) \Big\vert \leq C_u \Vert \varphi \Vert_\infty \times { L(R)\over R^{\alpha}}. \end{equation} Furthermore, \begin{equation}\label{asymptforbusemann}
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\widetilde P (\varphi\otimes u)(x, -R)=\sum_{j=1}^p C_j(x) \varphi(x_j)\int_{\mathbb R}u(t) {\rm d}t,
\end{equation} where $C_j$ is defined by: for $1\leq j \leq p$, \begin{equation}\label{constanteC}
C_j(x):= c_j{h (x_j) \over \rho h (x)} \times \left\{ \begin{array}{cll} \displaystyle e^{2 \delta(x_j\mid x)_{\bf o}}& {\rm when} &x\in \Lambda \backslash \tilde \Lambda_j;
\\ e^{\delta\bigl(\mathcal B_{x_j}(o, g\cdot o) + d(o, g\cdot o)\bigr)}& {\rm when} &x = g\cdot x_0 \notin \tilde \Lambda_j;\\
0 & {\rm otherwise.}&\ \end{array}\right. \end{equation} \end{prop}
\noindent Proof.
Let $x\in \tilde \Lambda$ be fixed and assume that the support of $u$ is included in the interval $[a, b]$. For any $R\geq -a$, it holds $$
\widetilde P (\varphi\otimes u)(x, -R) = \frac{1}{\rho h(x)}\sum_{j = 1}^{p+q}\sum_{\gamma \in \Gamma_j}e^{-\delta \tilde b(\gamma x)}{\bf 1}_{x\notin \tilde \Lambda_j}h(\gamma\cdot x) \varphi(\gamma\cdot x) u(-R+\tilde b(\gamma, x) ). $$ It follows from hypotheses $ {\bf H_2}$ and $ {\bf H_3}$ and Fact \ref{lienentrebetd} that for any $j = 1,..., p+q$, there exists a constant $ K_j>0$ such that for any $R\geq 1$, $$\sum_{\stackrel{\gamma \in \Gamma_j}{R+a \leq \tilde b(\gamma, x) \leq R+b}} e^{-\delta \tilde b(\gamma, x)} \leq K_j (b-a) \frac{L(R)}{R^\alpha}.$$ Together with the fact that $L$ has slow variation, this implies (\ref{majorforbusemann}).
Now, in order to establish (\ref{asymptforbusemann}), it is sufficient to prove that for any $j = 1,..., p+q$, \begin{equation}\label{asymptforbusemanntermeparterme}
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\sum_{\gamma\in \Gamma_j} p(\gamma, x)\varphi(\gamma\cdot x) u(-R + \tilde b(\gamma, x)) = C_j(x) \varphi(x_j)\int_{\mathbb R}u(t) {\rm d}t, \end{equation} where $C_j(x)$ is given by (\ref{constanteC}) for $1\leq j \leq p$ and $C_j(x) = 0$ for $j = p+1,..., p+q$. By a classical approximation argument, we may assume that
$u$ is the characteristic function of the interval $[a, b]$; it yields $$ \sum_{\gamma\in \Gamma_j} p(\gamma, x)\varphi(\gamma\cdot x) u(-R + \tilde b(\gamma, x)) = \frac{1}{\rho h(x)} \sum_{\stackrel{\gamma \in \Gamma_j}{R+a \leq \tilde b(\gamma, x) \leq R+b}}e^{-\delta \tilde b(\gamma, x)}{\bf 1}_{x\notin \tilde \Lambda_j}h(\gamma\cdot x) \varphi(\gamma\cdot x). $$ First, assume that $x = g\cdot x_0$ belongs to $\Gamma\cdot x_0$. For any $j = 1,..., p$ and any $\gamma\neq Id$ in $ \Gamma_j$, the sequence $( \gamma^n \cdot o)_{n \in \mathbb Z^*} $ tends to $x_j$ as $n\to \pm\infty$; it yields \begin{eqnarray*}
\tilde b(\gamma^n, x) - d(o, \gamma^n \cdot o)
&=& d(\gamma^{-n}\cdot o, g\cdot o) - d(\gamma^{-n}\cdot o, o) - d(o, g\cdot o)\\
&{\stackrel{n \to \pm \infty}{\longrightarrow}}& - \mathcal B_{x_j}(o, g\cdot o) - d(o, g\cdot o). \end{eqnarray*}
When $x\in \Lambda$, Fact \ref{lienentrebetd} yields $$
\lim_{n \to \pm \infty} \tilde b(\gamma^n, x) - d(o, \gamma^n \cdot o)= -2(x_j\mid x)_{\bf o}.
$$ Eventually, by hypotheses $ {\bf H_2}$ and $ {\bf H_3}$, for any $1\leq j \leq p+q$, $$
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\sum_{\stackrel{\gamma\in \Gamma_j}{R+a \leq d(o, \gamma \cdot o) \leq R+b}} p(\gamma, x) = C_j(x) |b-a|. $$ Hence, $$
\lim_{R\to +\infty} {R^{\alpha} \over L(R)}
\sum_{\gamma\in \Gamma_j} p(\gamma, x)\varphi(\gamma\cdot x) u(-R + \tilde b(\gamma, x)) = C_j(x) \varphi(x_j) \vert b-a\vert. $$
\rightline{$\Box$}
Now, we extend (\ref{majorforbusemann}) and (\ref{asymptforbusemann}) to the powers $\widetilde P^k, k \geq 1,$ of the Markov operator $\widetilde P$.
\begin{prop}\label{mkmaj} For any continuous function $u:\mathbb R \to \mathbb R^+$ with compact support, there exists a constant $C_u>0$ such that, for any $\varphi \in {\bf Lip}(\tilde \Lambda)$, any $x \in \tilde \Lambda$, any $k \geq 1$ and any $R\geq 1$, \begin{equation} \label{mkmaj-formule} \Bigl\vert \widetilde P^k\left(\varphi\otimes u\right)(x, -R)\Bigr\vert \leq C_u \ k^2 \ \Vert \varphi \Vert_\infty \times {L(R)\over R^{\alpha}}. \end{equation} \end{prop}
\begin{prop}\label{mkequi} For any continuous function $u:\mathbb R \to \mathbb R^+$ with compact support, any $\varphi \in {\bf Lip}(\tilde \Lambda)$, any $x \in \tilde \Lambda$ and any $k \geq 1,$ \begin{equation} \label{mkequi-formule} \lim_{R\to +\infty} { R^{\alpha}\over L(R)} \widetilde P^k\left(\varphi\otimes u\right)(x, -R)= \sum_{j = 1}^p \left(\sum_{l=0}^{k-1}P^l C_j(x)P^{k-1-l}\varphi(x_j)\right) \int_{\mathbb R} u(t) {\rm d}t \end{equation} where, for any $1\leq j\leq p$, the Lipschitz function $C_j: \tilde \Lambda \to \mathbb R$ is given by (\ref{constanteC}). \end{prop}
Proposition \ref{convergence vague} follows immediately from these statements and (\ref{Maspotential}). Indeed, Propositions \ref{mkmaj} and \ref{mkequi} and the dominated convergence theorem yield $$ \lim_{R\to +\infty} \frac{R^\alpha}{L(R)} M(R, \phi\otimes u)(x) = \left( h (x)\sum_{k\geq 1}\rho ^k \sum_{j = 1}^p\left(\sum_{l=0}^{k-1}P^l C_j(x) P^{k-1-l}\left(\frac{\phi}{h}\right)(x_j)\right)\right)\times \int_{\mathbb R} u(t) {\rm d}t. $$
\rightline{$\Box$}
Let us now prove Propositions \ref{mkmaj} and \ref{mkequi}. For the convenience of the reader, we assume that all subgroups $\Gamma_j, 1\leq j \leq p+q,$ are parabolic. Hence, they have a unique fixed point at infinity $x_j$ and for any $x\in \tilde \Lambda$, it holds $$ \lim_{\stackrel{\gamma\in \Gamma_j}{d(o, \gamma\cdot o) \to +\infty}} \gamma\cdot x = x_j. $$ Namely, if one of the non-influent elementary groups $\Gamma_j, p+1\leq j \leq p+q,$ were generated by some hyperbolic isometry $h_j$, we would have to distinguish, in the proofs below, between positive and negative powers of $h_j$; this would only burden our notations without any real gain.
\noindent Proof of Proposition \ref{mkmaj}. We apply here upper bounds given in \cite{V}, whose proofs follow the approach developed in \cite{G}. We set $\alpha = 1+\beta$ with $0< \beta <1$; this restriction on the values of the parameter $\beta$ is of major importance to obtain the following estimates. Following \cite{V}, we introduce the non-negative sequence $(a_k)_{k \geq 1}$ defined implicitly by $\displaystyle {a_k^\beta\over L(a_k)}=k$ for any $k \geq 1$. By Propositions A.1 and A.2 in \cite{V}, there exists a constant $C_1=C_1(u)>0$ such that, for any $\varphi \in {\bf Lip}(\tilde \Lambda)$, any $x \in \tilde \Lambda$, any $k \geq 1$ and any $R\geq 1$,
$\bullet$ if $1\leq R\leq 2a_k$ then $\quad \displaystyle \Bigl\vert \widetilde P^k\left(\varphi\otimes u\right)(x, -R)\Bigr\vert \leq C_1 \Vert \varphi \Vert_\infty \times{1\over a_k}; $
$\bullet$ if $R\geq 2a_k$ then
$\displaystyle \Bigl\vert \widetilde P^k\left(\varphi\otimes u\right)(x, -R)\Bigr\vert \leq C_1 k \Vert \varphi \Vert_\infty \times {L(R)\over R^{1+\beta}}. $
\noindent The definition of the $a_k$ yields, for $1\leq R\leq 2a_k$, $$ {1\over a_k}=k{L(a_k)\over a_k^{1+\beta}}\leq 2^{1+\beta}\, k\times {L(R)\over R^{1+\beta}}\times {L(a_k)\over L(R)}.$$ By Potter's lemma (see \cite{V}, Lemma 3.4), there exists $C_2>0$ such that $\displaystyle {1\over a_k}\leq C_2 k^2\times {L(R)\over R^{1+\beta}} $ for $R\geq 1$ large enough. We then set $C_u= \max( C_1, C_2).$
\rightline{$\Box$}
\noindent Proof of Proposition \ref{mkequi}. We work by induction. By Proposition \ref{prop:AsymPk1}, convergence (\ref{mkequi-formule}) holds for $k=1$. Now, we assume that it holds for some $k\geq 1$. Let $R>0$ and $r\in [0, R/2]$ be fixed. Recall that \begin{eqnarray*} \widetilde P^{k+1}\left(\varphi\otimes u\right)(x, -R)&=& \sum_{\gamma \in \Gamma(k+1)}p (\gamma, x) \varphi(\gamma\cdot x) u(-R+\tilde b(\gamma, x))\\ &=& \sum_{\gamma \in \Gamma(k)} \sum_{\beta \in \Gamma(1)}p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr). \end{eqnarray*}
We decompose $\widetilde P^{k+1}\left(\varphi\otimes u\right)(x, -R)$ as $
A_k(x, r, R)+B_k(x, r, R)+C_k(x, r, R)
$ where \begin{eqnarray*} A_k(x, r, R) &:=& \sum_{\gamma \in \Gamma(k)} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})\leq r}} p (\gamma, \beta\cdot x) p (\beta,x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr),\\ B_k(x, r, R) &:=& \sum_{\stackrel{\gamma \in \Gamma(k)}{d({\bf o}, \gamma\cdot{\bf o})\leq r}} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})>r}} p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr)\\ {\rm and} \ \ C_k(x, r, R) &:=&
\sum_{\stackrel{\gamma \in \Gamma(k)}{ d({\bf o}, \gamma\cdot{\bf o})>r}} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})>r}}\
p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr). \end{eqnarray*}
\noindent \underline{ \bf Step 1.} Let us first prove that \begin{equation}\label{Akrfixe} \lim_{R\to +\infty} {R^{\alpha} \over L(R) } A_k(x,r, R) = \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot {\bf o})\leq r}} p(\beta, x)\times \lim_{R\to +\infty} \ {R^{\alpha} \over L(R) } \tilde P^k\left(\varphi\otimes u\right)(\beta \cdot x, -R). \end{equation} Indeed, the set of $\beta \in \Gamma(1)$ such that $d({\bf o}, \beta\cdot{\bf o})\leq r$ is finite and $\tilde b(\beta, x)\leq r$ for such an isometry $\beta$; furthermore, if $p(\beta, x)\neq 0$ then
${R\over 2} \leq R-\tilde b(\beta, x)\leq R+C$ where $C>0$ is the constant which appears in Property \ref{triangle}. Using the induction hypothesis, it yields, for any $\beta \in \Gamma(1)$ such that $d({\bf o}, \beta\cdot{\bf o})\leq r$,
\noindent $\displaystyle \lim_{R\to +\infty} {R^{\alpha}\over L(R)} p (\beta, x) \sum_{\gamma \in \Gamma(k)} p (\gamma, \beta\cdot x) \varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\beta, x)+ \tilde b(\gamma, \beta \cdot x)\Bigr)$
$ \displaystyle \qquad \qquad \qquad \qquad\qquad \qquad \qquad \qquad = p(\beta, x)\times \lim_{R\to +\infty} \ {R^{\alpha} \over L(R) } \tilde P^k\left(\varphi\otimes u\right)(\beta \cdot x, -R). $
\noindent Convergence (\ref{Akrfixe}) follows, summing over $\beta$. It yields \begin{equation}\label{Akrinfty} \lim_{r\to +\infty} \lim_{R\to +\infty} \ {R^{\alpha} \over L(R) } A_k(x, r, R)= \sum_{j = 1}^p \left(\sum_{l=1}^{k}P^lC_j(x)P^{k-l}\varphi(x_j)\right) \times \int_{\mathbb R}u(t) {\rm d}t. \end{equation}
\noindent \underline{\bf Step 2.} We prove that there
exists $\epsilon(r)>0$, with $\displaystyle \lim_{r\to +\infty}\epsilon(r) = 0$, such that, for any $k\geq 1$, \begin{eqnarray}\label{Bkrfixe} \liminf_{R\to +\infty} \ {R^{\alpha} \over L(R) }B_k(x,r, R) &{\stackrel{\epsilon(r)}{ \simeq }}& \limsup_{R\to +\infty} \ {R^{\alpha} \over L(R) }B_k(x,r, R) \notag \\ &{\stackrel{\epsilon(r)}{ \simeq }}& \sum_{j = 1}^p \sum_{\stackrel{\gamma\in \Gamma(k)}{d(o, \gamma\cdot o)\leq r}} p(\gamma, x_j) \varphi(\gamma\cdot x_j)C_j(x) \int_{\mathbb R} u(t) {\rm d}t, \end{eqnarray} where we write $a\ {\stackrel{\epsilon }{ \simeq }} \ b$ if $\displaystyle 1-\epsilon \leq \frac{a}{b} \leq 1+\epsilon $. Since each $\Gamma_j$ has a unique fixed point, there exists a map $\epsilon : (0, +\infty) \to (0, +\infty)$ which tends to $0$ as $r \to +\infty$, such that $$
\frac{p(\gamma, \beta\cdot x)}{p(\gamma, x_j)} \ {\stackrel{\epsilon(r) }{ \simeq }} \ \ 1 $$ for any $j = 1,..., p+q$, any $\beta\in \Gamma_j$ with $d(o, \beta\cdot o)\geq r$, any $x\in \tilde \Lambda$ and any $\gamma\in \Gamma$ with $l_\gamma \neq j$.
The set of $\gamma \in \Gamma(k)$ such that $d({\bf o}, \gamma\cdot{\bf o})\leq r$ is a finite subset of $\Gamma(k)$; furthermore, for such $\gamma$ and any $\beta \in \Gamma(1)$, it holds ${R\over 2}\leq R -\tilde b(\gamma, \beta\cdot x) \leq R+C, $ as above. Therefore,
\noindent $ \displaystyle \sum_{\stackrel{ \gamma \in \Gamma(k)}{d({\bf o}, \gamma\cdot{\bf o})\leq r}} \sum_{ \stackrel{ \beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})>r}}
p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr) $ \\ \indent $\qquad \qquad \displaystyle \ {\stackrel{\epsilon(r) }{ \simeq }} \ \sum_{j = 1}^{p+q} \sum_{\stackrel{ \gamma \in \Gamma(k)}{d({\bf o}, \gamma\cdot{\bf o})\leq r}}
p (\gamma, x_j) \varphi(\gamma \cdot x_j) \sum_{ \stackrel{ \beta \in \Gamma_j}{d({\bf o}, \beta\cdot{\bf o})>r}} p (\beta, x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr) $
\noindent Convergence (\ref{Bkrfixe}) follows, using (\ref{asymptforbusemanntermeparterme}). In particular, letting $r \to +\infty$, it holds \begin{eqnarray} \label{Bkrinfty} \lim_{r\to +\infty} \liminf_{R\to +\infty} \ {R^{\alpha} \over L(R) }B_k(x,r, R) &=& \lim_{r\to +\infty} \limsup_{R\to +\infty} \ {R^{\alpha} \over L(R) }B_k(x,r, R) \notag \\ &=& \sum_{j = 1}^{p} P^k\varphi( x_j)C_j(x) \int_{\mathbb R} u(t) {\rm d}t. \end{eqnarray}
\noindent \underline{\bf Step 3.} We prove that there exists a constant $C>0$ such that, for any $R\geq 2r\geq 1$, \begin{equation} \label{Ckrfixe}
C_k(x,r,R)\leq C k^2 \Vert \varphi \Vert_\infty {L(R)\over R^{\alpha}} \sum_{n=[r]}^{+\infty} {L(n)\over n^{\alpha}}. \end{equation}
By Property \ref{triangle}, the condition $u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr)\neq 0$ implies $$d({\bf o}, \gamma\cdot{\bf o})+d({\bf o}, \beta\cdot{\bf o}) =R\pm c\quad {\rm and} \quad \tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x) =R\pm c $$ for some constant $c>0$ which depends on $u$; here, the notation $A=B\pm c$ means $\vert A-B\vert \leq c$.
We decompose $C_k(x, r, R)$ into $C_k(x, r, R)=C_{k,1}(x, r, R)+ C_{k,2}(x, r, R)$ with $$ C_{k,1}(x, r, R):= \sum_{\stackrel{\gamma \in \Gamma(k)}{r<d({\bf o}, \gamma\cdot {\bf o})\leq R/2}} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})>r}}\ p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr) $$ and $$ C_{k,2}(x, r, R):= \sum_{\stackrel{\gamma \in \Gamma(k)}{ d({\bf o}, \gamma\cdot {\bf o})\geq R/2}} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})>r}}\ p (\gamma, \beta\cdot x) p (\beta, x)\varphi(\gamma \beta\cdot x) u\Bigl(-R+\tilde b(\gamma, \beta\cdot x)+ \tilde b(\beta, x)\Bigr). $$ We first control the term $C_{k,1}(x, r, R)$. Assuming $c \geq 1$, one may write \begin{eqnarray*} C_{k,1}(x, r, R)&\leq& \Vert \varphi \Vert_\infty \Vert u \Vert_\infty \sum_{n=[r]}^{[R/2]} \ \sum_{\stackrel{\gamma \in \Gamma(k)}{ d({\bf o}, \gamma\cdot {\bf o})= n\pm c}} \ \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})= R-n\pm c}}\ p (\gamma, \beta\cdot x) p (\beta, x)\\ &\leq & \Vert \varphi \Vert_\infty \Vert u \Vert_\infty \sum_{n=[r]}^{[R/2]} \sum_{\stackrel{\beta \in \Gamma(1)}{d({\bf o}, \beta\cdot{\bf o})= R-n\pm c}} p (\beta, x) \left( \sum_{\stackrel{\gamma \in \Gamma(k)}{ d({\bf o}, \gamma\cdot {\bf o})=n\pm c}} p (\gamma, \beta\cdot x) \right). \end{eqnarray*} Using (\ref{mkmaj-formule}), this yields, for some constant $C>0$, \begin{eqnarray*} C_{k,1}(x,r,R)&\leq &
C k^2 \Vert \varphi \Vert_\infty \Vert u \Vert_\infty \sum_{n=[r]}^{[R/2]} {L(R-n)\over (R-n)^{\alpha}} {L(n)\over n^{\alpha}} \\ &\leq & C\ {k^2} \Vert \varphi \Vert_\infty \Vert u \Vert_\infty {L(R)\over R^{\alpha}} \sum_{n=[r]}^{+\infty} {L(n)\over n^{\alpha}}, \end{eqnarray*} where the last inequality is based on the facts that $R-n\geq R/2-1$ and $L$ is slowly varying. The same inequality holds for $C_{k,2}(x,r,R) $, by reversing in the previous argument the role of $\gamma$ and $\beta$. Hence, \begin{equation} \label{Ckrinfty}
\lim_{r\to +\infty}\limsup_{R\to +\infty}\ {R^{\alpha} \over L(R) } C_k(x, r, R) = 0. \end{equation}
Proposition \ref{mkequi} follows, combining (\ref{Akrinfty}), (\ref{Bkrinfty}) and (\ref{Ckrinfty}).
\rightline{$\Box$}
\end{document}
Silicon application and related changes in soil bacterial community dynamics reduced ginseng black spot incidence in Panax ginseng in a short-term study
Meijia Li1,
Qiuxia Wang1,
Zhengbo Liu1,
Xiaoxi Pan1 &
Yayu Zhang1
This study analyzed the effect of silicon (Si) application on the occurrence of ginseng black spot caused by Alternaria panax. We explored the differences in soil physical and chemical factors and microbial community structure following Si application as well as the key factors that affected the occurrence of ginseng black spot in soil. Potted Panax ginseng plants were used to assess the effect of Si treatment on ginseng black spot. Soil physical and chemical properties were comprehensively analyzed. Bacterial communities were analyzed using Illumina HiSeq sequencing targeting the 16S rRNA gene.
After inoculation with A. panax, the morbidity (and morbidity index) of ginseng with and without Si was 52% (46) and 83% (77), respectively. Soil physical and chemical analysis showed that under the ginseng black spot inoculation, bacterial communities were mainly affected by pH and available potassium, followed by ammonium nitrogen and available Si. NMDS and PLS-DA analyses and the heat maps of relative abundance revealed that Si application elevated the resistance of ginseng black spot as regulated by the abundance and diversity of bacterial flora in rhizosphere soils. Heatmap analysis at the genus level revealed that A. panax + Si inoculations significantly increased the soil community abundance of Sandaracinus, Polycyclovorans, Hirschia, Haliangium, Nitrospira, Saccharothrix, Aeromicrobium, Luteimonas, and Rubellimicrobium and led to a bacterial community structure with relative abundances that were significantly similar to that of untreated soil.
Short-term Si application also significantly regulated the structural impact on soil microorganisms caused by ginseng black spot. Our findings indicated that Si applications may possibly be used in the prevention and treatment of ginseng black spot.
Ginseng black spot, caused by Alternaria panax Whetz, is a common soil-borne disease and one of the most serious diseases affecting the above-ground parts, especially the leaves, of Panax ginseng. This pathogen is widely distributed in the Changbai Mountains of China and in other ginseng production regions, and accounts for more than 20-30% of the annual disease incidence, being very common in both cultivated and wild ginseng. Alternaria panax infestation may lead to a 10-20% yield loss of the total crop. Infection first appears as elongated reddish to dark brown crevices in the infected areas. In seedlings, the stems are gradually girdled and thus collapse, resulting in damping-off [1]. In older plants, foliar infections appear later in summer, characterized by rapidly enlarging dark brown necrotic spots (circular, ellipsoid, or wedge-shaped) surrounded by chlorotic margins.
Silicon (Si) has been demonstrated to play an important role in enhancing plant resistance to disease. Si deposition has been suggested to create a physical barrier along cell walls and prevent fungal penetration into the plant [2]. Additional studies have indicated that Si is involved in plant-pathogen interactions underlying the control of diseases in different plant species [3], and aids in the enhancement of plant resistance against diseases caused by viruses, fungi, bacteria, and nematodes. Recently, it was suggested that the deposition of Si in the apoplast may prevent fungal effectors from entering the target cells, thus altering the development of the pathogens [4]. Another recent study showed that Si treatment conferred effective protection of soybean plants against Phytophthora sojae in a hydroponic experiment [5]. Agricultural soil productivity largely depends on microbial diversity and community composition, which significantly affect plant growth and crop quality [6]. The homeostasis of the soil microbial community can suppress pathogens and promote plant growth [7]. Plant-microbe interactions remodel the complex biological and ecological processes in soil, where roots are influenced by the rhizosphere [8]. Many studies have assessed the effect of Si on plant-microbe interactions and have demonstrated that Si enhances plant resistance to pathogens by activating defense reactions [9, 10]. Recently, a pot experiment demonstrated that Si addition decreased the concentrations of water-soluble and exchangeable arsenic in soil and, therefore, decreased the bioavailability of red soil arsenic in Panax notoginseng [11].
The present study, therefore, aimed to investigate if Si treatment would enhance the resistance of ginseng against A. panax. The study objectives were to evaluate the effect of Si on the prevention and treatment of ginseng black spot and to analyze the interaction between soil properties and plant growth responses. Further objectives were to determine the changes in the dynamics, i.e., the structure, composition, and abundance, of the soil microbial community in response to infection with A. panax and treatment with Si to determine the underlying factors that may influence the quantity and composition of soil bacteria.
Disease index and incidence and plant weights
Figure 1 shows the phenotypic differences in the leaves of P. ginseng at 9 dpi among the treatment groups: Control, A, AS, and S. Significant differences were observed in the severity of A. panax infections under Si treatment (Fig. 1). No effect of Si on biomass was observed compared to the Control group, and the differences between Control plants and group S plants were not obvious; however, group AS plants were visibly healthier than group A plants (Fig. 1). The first symptoms, leaf spots, appeared as early as 3 days post inoculation (dpi), followed by stunting and blight within a few days. As shown in Table 1, Si treatment significantly reduced the disease incidence and disease index of ginseng black spot.
Effects of the different soil treatments. Abbreviations: CK, ginseng control plants; A, plants inoculated with A. panax only; AS, plants inoculated with A. panax + Si; S, plants treated with Si only
Table 1 Effect of silicon application on the disease incidence and disease index of ginseng black spot
There was no significant difference in dry weight between the non-inoculated (pathogen-free) plants: Control plants (1.12 ± 0.81 g) and group S plants (1.23 ± 0.59 g). However, the plant dry weight was significantly reduced in group A. Si treatment thus resulted in significantly heavier plants (Table 2). At 9 days post-treatment, the fresh weight of group AS plants was 15% higher than that of group A plants (Table 3).
Table 2 The fresh weight and dry weight of the ginseng after different treatments
Table 3 The fresh weight and dry weight of ginseng shoots and ginseng roots after different treatments
Soil properties and plant growth responses
Soil properties are presented in Table 4. A one-way ANOVA showed that the treatments significantly restored the soil property parameters altered by the disease treatment (P < 0.05) (Table 4). The pH value of the Control soil samples was ~ 7.39. Compared with the Control group, soil pH, NO3−-N, and NH4+-N were significantly reduced in group A (P < 0.05). In contrast, the contents of available P and available K were significantly increased (P < 0.05). Furthermore, the AS treatment significantly increased the soil pH and the NO3−-N and NH4+-N contents (P < 0.05), and significantly reduced the contents of available P and available K (P < 0.05), compared to the A treatment, i.e., without Si. No significant differences in the above-mentioned nutrients, except available Si, were detected between group S and the Control.
Table 4 Characteristics of soils after different treatments
Analysis of bacterial composition and diversity of soil bacterial community structure based on 16S rRNA gene sequencing
Bacteria-targeted regions were amplified by PCR and sequenced for all soil samples. The raw sequence libraries were screened to remove reads originating from sequencing noise or putative chimeric sequences. From the 12 soil samples of the different treatments, a total of 815,609 valid 16S rDNA sequences were obtained after filtering and clustering at a 97% similarity threshold. Individual soil samples yielded between 56,510 and 76,384 sequences, and these sequences were retained for further analysis.
As shown in Table 5, the effective sequence number and OTU number did not significantly differ between the treatment groups and the Control group. The sequencing coverage of the samples ranged from 98.5 to 98.6%, indicating that the sequencing results represented the real composition of the bacterial populations in the samples. After an alpha diversity analysis, the indices reflecting the abundance and diversity of the microbial communities were calculated, and the results of all treatments were compared using a one-way ANOVA (Table 5). The microflora richness indices (Chao1, ACE) and biodiversity indices (Shannon, Simpson) revealed that the diversity of the bacterial populations in the soil samples was relatively high (Table 5). Further analysis revealed that, at a 97% similarity level, the Shannon and Simpson indices of the soil bacteria in each treatment group were not significantly different from those in the Control group.
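The alpha diversity indices in Table 5 are straightforward to compute from OTU count vectors. A minimal sketch of the Shannon and Simpson calculations, using illustrative counts rather than the study's data:

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero OTU counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity index 1 - sum(p_i^2); higher means more diverse."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# Illustrative OTU counts for one sample (not the study's data)
otu_counts = [120, 80, 40, 30, 20, 10]
print(round(shannon(otu_counts), 3), round(simpson(otu_counts), 3))
```

Richness estimators such as Chao1 and ACE additionally extrapolate from singleton and doubleton OTUs, which is why pipelines like QIIME report them alongside these two indices.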
Table 5 The bacterial diversity indices of ginseng rhizosphere soil samples with different treatment
Analysis of soil bacterial community structure
According to the abundance of bacterial OTUs in the 12 soil samples, a non-metric multidimensional scaling (NMDS) analysis was conducted to determine the differences in bacterial composition among samples and treatments (Fig. 2). The NMDS results were evaluated using UniFrac distances to estimate the phylogenetic relatedness among the bacterial communities (Fig. 3a, c). The soil bacterial communities of groups A and S, i.e., treated with A. panax or Si, were clearly distinct in the NMDS ordination. Among the treatment groups, the soil bacterial composition of the Control was most similar to that of group AS, i.e., they had the highest phylogenetic relatedness, and the group AS bacterial flora could be clearly distinguished from that in the infected soil (group A). However, the composition of the bacterial flora in group S differed from that of the other treatments. In summary, Si application significantly regulated the changes in bacterial flora induced by inoculation with the ginseng black spot pathogen, shifting group AS back toward the composition of the Control.
Overall analysis of bacterial communities in the soils of the different treatments. a The bacterial composition of the soils of the different treatments at the phylum taxonomic level. b The Venn diagram of bacterial communities in the soils of the different treatments. c The bacterial composition of the soils of the different treatments at the phylum taxonomic level. d The Venn diagram of bacterial communities in the soils of the different treatments
NMDS analysis of bacterial community diversity in the different samples. a Non-metric multidimensional scaling (NMDS) plots of operational taxonomic unit tables from all substrates, based on bacterial community similarities using an unweighted UniFrac distance matrix. b Unweighted UniFrac distance box-line graph. c NMDS plots of operational taxonomic unit tables from all substrates, based on bacterial community similarities using a weighted UniFrac distance matrix. d Weighted UniFrac distance box-line graph
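UniFrac distances require a phylogenetic tree, so they cannot be reproduced from abundances alone; as a simpler stand-in, the sketch below builds a Bray-Curtis dissimilarity matrix of the kind an NMDS ordination takes as input. The OTU abundance vectors are hypothetical, chosen so that AS sits near Control:

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity: 1 - 2*sum(min(u_i, v_i)) / (sum(u) + sum(v))."""
    shared = sum(min(a, b) for a, b in zip(u, v))
    return 1 - 2 * shared / (sum(u) + sum(v))

# Hypothetical OTU abundance vectors (not the study's data)
samples = {
    "Control": [10, 20, 30, 40],
    "A":       [40, 30, 20, 10],
    "AS":      [12, 18, 28, 42],
}
names = list(samples)
matrix = [[round(bray_curtis(samples[a], samples[b]), 3) for b in names] for a in names]
for name, row in zip(names, matrix):
    print(name, row)
```

NMDS then seeks a low-dimensional embedding whose rank order of point-to-point distances matches this matrix; with these made-up vectors, Control-AS is far smaller than either distance to A, mirroring the ordination described above.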
Cluster analysis of soil bacterial community structure
Based on a beta diversity analysis, a distance matrix was obtained for the 12 soil samples, and a hierarchical clustering analysis was conducted using the unweighted pair-group method with arithmetic mean (UPGMA) (Fig. 3b, d). The soil samples of groups S and AS clustered on one branch, as did those of groups Control and AS. The results were consistent with those of the NMDS analysis, which demonstrated that the soils inoculated with Si + the ginseng black spot pathogen (group AS) significantly recovered compared with the soils inoculated with the pathogen alone (group A). A PLS-DA showed that the microbial composition of the soil in group AS was significantly altered following Si treatment. The results suggested similarities between groups Control and AS, but not with the other two groups. In summary, Si again alleviated the changes in soil bacteria caused by ginseng black spot (Fig. 4).
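UPGMA clustering repeatedly merges the two closest clusters and recomputes average-linkage distances weighted by cluster size. A minimal pure-Python sketch, with a hypothetical distance matrix chosen so that Control and AS pair first:

```python
from itertools import combinations

def upgma(dist, labels):
    """UPGMA: repeatedly merge the two closest clusters, recomputing
    average-linkage distances weighted by cluster (leaf) sizes."""
    clusters = {lab: 1 for lab in labels}          # cluster -> leaf count
    while len(clusters) > 1:
        a, b = min(combinations(clusters, 2), key=lambda p: dist[frozenset(p)])
        merged = (a, b)
        na, nb = clusters.pop(a), clusters.pop(b)
        for c in clusters:
            dist[frozenset((merged, c))] = (
                na * dist[frozenset((a, c))] + nb * dist[frozenset((b, c))]
            ) / (na + nb)
        clusters[merged] = na + nb
    return next(iter(clusters))

# Hypothetical pairwise distances between the four treatment soils
D = {frozenset(k): v for k, v in {
    ("Control", "A"): 0.60, ("Control", "AS"): 0.10, ("Control", "S"): 0.35,
    ("A", "AS"): 0.55, ("A", "S"): 0.50, ("AS", "S"): 0.30,
}.items()}
tree = upgma(D, ["Control", "A", "AS", "S"])
print(tree)
```

With these distances, Control and AS merge first, then S joins them and A joins last, echoing the branching pattern described above; `scipy.cluster.hierarchy.linkage(..., method="average")` implements the same algorithm.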
PLS-DA analysis of different soil samples
Heat map analysis of the soil bacterial community structure
A heat map of the bacterial community structure among the different samples (Fig. 5) revealed the relative abundances of the various bacterial groups (at the phylum and genus levels) and showed significant differences among the sample groups. At the phylum level, Proteobacteria, Nitrospirae, Actinobacteria, and Bacteroidetes were the four main groups (Fig. 5). The relative abundances (represented by the color depth in Fig. 5) of Sandaracinus, Polycyclovorans, Hirschia, Bdellovibrio, Haliangium, and Nitrospira were significantly higher in the Control group than in group A (P < 0.05). In addition, the relative abundances of Sandaracinus, Polycyclovorans, Hirschia, Haliangium, and Nitrospira in group AS were significantly higher than those in group A (P < 0.05). These results showed that Si application significantly regulated the structural changes in soil microorganisms caused by inoculation with the ginseng black spot pathogen.
Heat map comparison of the dominant bacteria by average relative abundance; colors from blue to red indicate relative abundance from low to high
Factors influencing the quantity and composition of soil bacteria
A correlation analysis (Table 6) showed that most of the dominant bacterial groups were significantly correlated with the soil chemical properties, except Arenimonas, H16, and RB41, which showed no correlations with any of the chemical indicators. Haliangium was significantly negatively correlated with available K and significantly positively correlated with pH; Phenylobacterium was highly significantly negatively correlated with pH; Gemmatimonas and Nitrospira were negatively correlated with NO3−-N; Mesorhizobium was highly significantly positively correlated with NO3−-N; Gemmatimonas and Nitrospira were significantly negatively correlated with available Si; and Lactobacillus and Mesorhizobium were significantly positively correlated with available Si.
Table 6 Pearson's correlation coefficients between various physicochemical variables and the relative abundances of main genera (> 1%) across all samples
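Each entry of Table 6 is a Pearson coefficient between a genus's relative abundance and one soil property across the samples. A self-contained sketch of the calculation, with hypothetical values (not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: relative abundance (%) of one genus vs soil pH
abundance = [2.1, 1.8, 2.5, 3.0, 2.7, 1.5]
ph        = [6.9, 6.7, 7.1, 7.4, 7.2, 6.5]
print(round(pearson_r(abundance, ph), 3))
```

Significance of each coefficient is then judged against a t distribution with n − 2 degrees of freedom, which is how the P < 0.05 and P < 0.01 thresholds in Table 6 arise.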
Silicon reduced disease incidence and disease index of ginseng black spot
Si has been shown to effectively improve the mechanical and physiological capacities of plants and to enhance plant resistance to various biotic and abiotic stresses [12, 13]. To examine the effect of Si application on plants infected with ginseng black spot, a pot experiment was performed with a 2-week Si pretreatment followed by 9 dpi with A. panax. Si application significantly reduced the disease incidence and disease index of ginseng black spot (Table 1) and clearly alleviated the leaf blight caused by A. panax (Fig. 1). Similarly, Si has been reported to enhance plant resistance to diseases, potentially through interactions with several key factors of the stress signaling pathway [14]. In comparison with group A plants, Si application increased the accumulation of shoot and root biomass in group AS P. ginseng plants. These findings suggest that Si triggered plant-microbial response mechanisms that directly limited the ginseng black spot index and incidence in the leaves and thus enhanced P. ginseng biomass accumulation. However, the root and shoot biomass of group S plants was not significantly different from the Control, which opposes the notion that Si promotes plant biomass accumulation [15,16,17]. In the present study, a short-term pot experiment was used to determine the effects of Si application on ginseng black spot and the soil bacterial community; however, future research is needed to clarify the effects of long-term Si applications on Si-P. ginseng-soil interactions.
Compared with the Control, inoculation with the ginseng black spot pathogen (group A) significantly reduced the soil pH and NH4+-N content, and significantly increased the content of available K. After Si application, the reduced soil pH and NH4+-N content significantly recovered to the levels of the Control group. Compared with group A, the contents of available P and available K were reduced in group AS, resulting in soil physical and chemical indexes similar to those of the Control group. In the present study, Si application amended the soil pH changes caused by inoculation with the ginseng black spot pathogen. However, the soil pH of groups Control and S did not differ significantly, and thus Si application alone did not alter the pH. A similar result was found in a study with P. notoginseng, in which Si increased the soil pH in the presence of arsenic, possibly because the Si treatment decreased the bioavailability of arsenic [11]. Although our study does not rule out the possibility of other chemical differences among treatments, our data do suggest that nutrient availabilities were not driving the differences in soil properties without pathogen inoculation. It is important to consider plant root exudates and their great impact on the population and community structure of soil microbes [18]. In our study, Si application may have affected the root exudates and other root-derived molecules, as was observed in another study in which plants were infected by a fungus [19]. Further research is needed to elucidate the root exudates-Si, plant-Si, and root exudates-plant interactions in the soil-P. ginseng system. However, besides available Si, there were no significant differences in the above-mentioned nutrients between groups S and Control. Therefore, it is likely that Si altered the root exudates rather than the physicochemical soil characteristics, which caused the bacterial community to recover from the group A state to that of group AS, which was similar to the Control.
Soil microbial community composition and diversity
Microbial community diversity is an important component of soil health [20]. The impact of Si on bacterial richness and diversity was analyzed by high-throughput sequencing. The soil bacterial diversity (Shannon and Simpson indices) was not significantly different among the treatments (Fig. 2), which indicated that the Si and A. panax treatments did not alter the number of bacterial species in the short term. However, genus-level differences were found in the relative abundances of bacterial species. Interestingly, the relative abundances of Saccharothrix, Aeromicrobium, Luteimonas, and Rubellimicrobium recovered (P < 0.05) from lower levels in group A to higher levels (similar to the Control) following Si application (Fig. 5). These results showed that Si application significantly regulated the structural impact of ginseng black spot on soil microorganisms. Aeromicrobium is a potential disease-suppression indicator [21, 22] and a member of the phylum Actinobacteria. Moreover, the antibiotics produced by Actinobacteria are able to suppress various plant diseases [23, 24]. Disease-suppressive natural soils have been reported for a variety of agricultural crop diseases, including wheat take-all and Rhizoctonia bare patch diseases [25], Fusarium wilt on strawberry and vanilla [26], and Rhizoctonia solani on sugar beet [27]. This characteristic relates to the abundance of certain beneficial soil microbes [26, 27], which produce antimicrobial compounds that directly inhibit pathogens. In addition, indirect pathogen inhibition may occur via induced systemic resistance (ISR), through the triggering of plant immune responses [28]. However, in the event of a severe disease outbreak, consecutive cropping cycles of the same species are required for disease-suppressive microbes to flourish. The proposed hypothesis suggests that, when invaded, plants amplify and sustain certain favorable microbes [25, 29, 30].
The bacterial composition of the group Control soil was similar to that of group AS (Si-treated), i.e., their compositions had the highest phylogenetic relatedness. The group AS bacterial flora differed from that of group A, and the composition of the bacterial flora in group S differed from that of the other treatments. The results showed that Si application significantly regulated the changes in bacterial flora caused by inoculation with the ginseng black spot pathogen and restored group AS to levels almost the same as the Control group. Recent reports additionally revealed that Arabidopsis plants can stimulate specific favorable microbes in the rhizosphere, even in natural soils [31]. Another validating instance was seen when a beneficial consortium of a Xanthomonas sp., a Stenotrophomonas sp., and a Microbacterium sp. was activated in the rhizosphere as part of the foliar defense induced by the downy mildew pathogen Hyaloperonospora arabidopsidis [19]. Furthermore, when isolated and inoculated back into Arabidopsis, these strains collectively induced downy mildew resistance; interestingly, the resistance of a second plant population grown in the same soil was considerably amplified as a result of the downy mildew infection in the first population. These outcomes collectively suggest that beneficial microbes ensue from plant invasions, which in turn prompt a memory or "soil-borne legacy" that amplifies the defenses of the next plant generation against harmful pathogens [31,32,33,34]. The implication here is that Si triggered the soil bacterial community response, which might have directly regulated plant growth. In the present study, we observed that the bacterial community differed between group AS and group A, i.e., between the Si and no-Si treatments. Overall, the increase in soil bacterial diversity after Si application may contribute to the suppression of ginseng black spot disease.
The functionality of root exudates and other root-derived molecules is implicated in this process [31, 32, 35,36,37], albeit this hypothesis requires validation. The most recent research, however, also reports that plants secure favorable rhizosphere communities via the modification of plant exudation patterns induced by exposure to aboveground pathogens, which subsequently benefits future plant generations [19].
In summary, Si can alter the structure and diversity of the soil microbial community by directly and indirectly affecting the growth of plants, and the altered soil microbial community can, in turn, affect the plants [38,39,40].
This study provided a detailed outline of the bacterial community compositions in Si-treated soils inoculated with the ginseng black spot pathogen, using Illumina HiSeq sequencing. Si application to ginseng black spot-inoculated plants significantly optimized the soil bacterial population structure, improved soil bacterial activity and diversity, and thus effectively prevented and controlled the occurrence of ginseng black spot. In addition, we speculate that Si indirectly altered the structure, composition, and abundance of the soil microbial community by directly altering the root exudates or inducing plant systemic resistance. In conclusion, the present study demonstrated the promising application prospects of Si, which is recommended for use as a ginseng fertilizer for the prevention and treatment of ginseng black spot.
Two-year-old fresh ginseng roots (Panax ginseng Meyer) were provided by Dongdu Ginseng Technology Development Co., Ltd. in April 2017 and placed in sand at 23 °C. After 6 days, the roots sprouted; they were then washed with deionized water and transplanted into PVC pots (120 × 180 mm, diameter × height) containing turfy soil (6 seedlings per pot). The ginseng seedlings were grown under greenhouse conditions: temperatures of 17–28 °C, a relative humidity of 70–80%, and a 14 h photoperiod. Before A. panax inoculation, half of the plants were pretreated for 2 weeks with potassium silicate (pH = 7.0) as the Si source. After the Si pretreatment, the plants were inoculated with conidia of the A. panax pathogen. The conidia of A. panax infecting P. ginseng were identified by PCR of the internal transcribed spacer (ITS) region, generating 553–554 bp fragments, and of the glyceraldehyde 3-phosphate dehydrogenase (gpd) gene, generating 565–566 bp fragments. The sequences were 100% identical to those of A. panax (JF417572 for ITS, JF417653 for gpd). The A. panax strain was deposited in the Culture Collection Center of Yangtze University in Jingzhou, China. Spores were flushed from colonies and resuspended in sterile distilled water at 1 × 105 spores/mL. The sterilized surfaces of detached spring ginseng leaves were inoculated with 20 μL of conidial suspension and incubated under the same greenhouse conditions for 9 days, when black spot symptoms became visible on the leaves.
Plants were grown under four treatments: ginseng control plants (Control), plants inoculated with A. panax only (A), plants inoculated with A. panax + Si (AS), and plants treated with Si only (S), with 18 plants (3 pots) per treatment. To test the prophylactic role of Si, the Si concentration was set at 1.7 mM, i.e., the highest possible concentration of silicic acid in solution [4].
Six seedlings of ginseng were randomly selected from each treatment group, and the soils were mixed to form a single representative sample. After inoculation with A. panax for 9 days, plants were removed from the soil and the excess soil was carefully shaken off. The rhizosphere soil (i.e., adhering to the roots) was collected as previously described by Bulgarelli et al. [41], with some modifications. Three replicate rhizosphere soil samples were obtained per treatment. Soil samples (n = 12) were air-dried for 2 weeks, passed through a 2 mm sieve, and stored at − 80 °C.
Plant dry weights and analysis of disease index and incidence
For A. panax-infected plants, ginseng black spot incidence was recorded from 9 days after A. panax inoculation. The 18 plants (3 pots) per treatment were collected to calculate the percentage of diseased plants and the disease index, using the following equations [42]:
$$ Disease\ incidence= the\ number\ of\ diseased\ plants/ the\ total\ number\ of\ plants\times 100\% $$
$$ Disease\kern0.17em index=\sum \left(A\times B\right)\times 100/\left(\sum B\times 4\right) $$
where A is the disease class (0, 1, 2, 3, 4) and B is the number of plants in the corresponding disease class.
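The two formulas can be checked with a short calculation; the class tally below is hypothetical, not the study's counts:

```python
def disease_incidence(n_diseased, n_total):
    """Percentage of diseased plants."""
    return n_diseased / n_total * 100

def disease_index(class_counts, max_class=4):
    """Disease index = sum(A*B) * 100 / (total plants * max_class),
    where A is the disease class and B the number of plants in that class."""
    total = sum(class_counts.values())
    return sum(a * b for a, b in class_counts.items()) * 100 / (total * max_class)

# Hypothetical tally of 18 plants over disease classes 0-4
counts = {0: 6, 1: 4, 2: 4, 3: 3, 4: 1}
diseased = sum(b for a, b in counts.items() if a > 0)
print(round(disease_incidence(diseased, sum(counts.values())), 1))
print(round(disease_index(counts), 1))
```

The index is 0 when every plant is in class 0 and 100 when every plant is in the highest class, so it summarizes severity rather than mere presence of disease.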
For each plant, the shoots and roots were separated and weighed after air drying (dry weight, g) for 2 weeks at 30 °C.
Sampling and chemical analysis
Air-dried plant and soil samples were used in the nutrient analysis. About 50 mg of oven-dried plant tissue was digested with a mixture of 8 mL HNO3 and 2 mL HClO4 at 200 °C for 120 min in a semi-closed system. The digestates were cooled to 25 °C and made up to 50 mL with 4% (v/v) HNO3 solution. The soil pH (1:5, soil:water) was measured using a glass electrode (SK220, Switzerland). Soil nitrate nitrogen (NO3−-N) was assayed using a continuous flow analytical system (SKALAR SAN++, The Netherlands). Ammonium nitrogen (NH4+-N) in the soil was extracted with 0.01 M CaCl2, and the concentration was measured with an Auto Analyzer (Auto Analyzer 3, Germany). Potassium (K) in the soil was extracted with ammonium acetate and quantified by flame photometry. Soluble phosphorus (P) was extracted with sodium bicarbonate and its concentration was measured using the molybdenum blue method [43].
High-throughput sequencing
Total DNA was extracted from 0.5 g of each soil sample using a bacterial DNA isolation kit (Omega Bio-tek, Norcross, GA, USA) following the manufacturer's instructions [44]. To assess the bacterial community composition, the Illumina HiSeq platform (Illumina, San Diego, California, USA) was used in the present study. The quantity and quality of the extracted DNA were measured using a Nanodrop 1000 (Thermo Fisher Scientific, Wilmington, DE, USA) and agarose gel electrophoresis, respectively. The V3-V4 region of the bacterial 16S rRNA gene was amplified with primers 338F (5′-ACTCCTACGGGAGGCAGCA-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′). DNA was amplified by PCR under the following conditions: 95 °C for 2 min; 27 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s; a final extension at 72 °C for 10 min; and then a hold at 10 °C until halted. The PCR reactions were performed in triplicate in a 20 μL mixture containing 4 μL of 5× FastPfu Buffer, 2 μL of 2.5 mM dNTPs, 0.4 μL of each primer (5 μM), 0.4 μL of TransStart FastPfu DNA Polymerase (TransGen Biotech, Beijing, China), and 10 ng of template DNA [45]. PCR amplicons were purified with Agencourt AMPure Beads (Beckman Coulter, Indianapolis, IN) and quantified using the PicoGreen dsDNA Assay Kit (Invitrogen, Carlsbad, CA, USA). After the individual quantification step, amplicons were pooled in equal amounts, and pair-end 2 × 300 bp sequencing was performed on the Illumina HiSeq platform (Illumina, San Diego, California, USA) at Biomarker Technologies, Beijing, China.
The Quantitative Insights Into Microbial Ecology (QIIME, v1.8.0) pipeline was employed to process the sequencing data [46]. Low-quality sequences were filtered according to published criteria [47, 48]. Paired-end reads were assembled using FLASH [49]. After chimera detection, the remaining high-quality sequences were clustered into operational taxonomic units (OTUs) at 97% sequence identity by UCLUST [50]. A representative sequence was selected from each OTU using default parameters. OTU taxonomic classification was conducted by BLAST searching the representative sequence set against the Greengenes Database [51]. Each OTU in each sample and its taxonomy were recorded in an OTU table, and OTUs containing less than 0.001% of the total sequences across all samples were discarded. Sequences were deposited at the NCBI Short Read Archive under accession numbers SRR9822023-SRR9822034.
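The 0.001% abundance cutoff applied to the OTU table can be expressed as a small filtering step; the toy table below is illustrative only:

```python
def filter_rare_otus(otu_table, fraction=0.00001):
    """Drop OTUs whose summed count across all samples falls below
    `fraction` (here 0.001%) of the grand total, as in the QIIME
    post-processing step described above."""
    grand_total = sum(sum(counts) for counts in otu_table.values())
    cutoff = grand_total * fraction
    return {otu: counts for otu, counts in otu_table.items()
            if sum(counts) >= cutoff}

# Toy OTU table: OTU id -> counts in three samples (illustrative only)
table = {
    "OTU_1": [50000, 42000, 61000],
    "OTU_2": [300, 250, 410],
    "OTU_3": [1, 0, 0],   # falls below the 0.001% cutoff
}
print(sorted(filter_rare_otus(table)))
```

Such a cutoff mainly removes singleton OTUs, which are disproportionately likely to be sequencing artifacts rather than real rare taxa.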
Sequence data analyses were mainly performed using QIIME and R packages (v3.2.0). OTU-level alpha diversity indices were calculated in QIIME. A beta diversity analysis was performed to investigate the structural variation of microbial communities across samples using UniFrac distance metrics [52, 53] and non-metric multidimensional scaling (NMDS) [54]. A Venn diagram was generated to visualize the shared and unique OTUs among groups using an R package [55]. Taxon abundances at the phylum, class, order, family, genus, and species levels were statistically compared among groups by Metastats [56]. Partial least squares discriminant analysis (PLS-DA) was also introduced as a supervised model to reveal the microbiota variation among groups, using the "plsda" function in the R package "mixOmics" [48].
One-way analysis of variance (ANOVA) was used to calculate the differences between treatments with variable soil pathogen abundance. The significance threshold was set at 0.05. The statistical analyses were performed using SAS 9.1 software (SAS Institute Inc., Cary, NC).
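A one-way ANOVA reduces to comparing the between-group and within-group mean squares. A minimal sketch of the F statistic with hypothetical dry-weight values (SAS computes the same quantity):

```python
def one_way_anova_f(*groups):
    """One-way ANOVA F statistic: between-group mean square
    divided by within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical dry weights (g) for three treatment groups
control = [1.10, 1.15, 1.12]
a_group = [0.80, 0.85, 0.78]
as_group = [1.05, 1.08, 1.02]
print(round(one_way_anova_f(control, a_group, as_group), 2))
```

`scipy.stats.f_oneway` returns the same F statistic; the p-value is then obtained from the F distribution with (k − 1, N − k) degrees of freedom, here F(2, 6).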
The dataset(s) supporting the conclusions of this article are available in the following repositories: The raw read sequences were deposited at the NCBI Short Read Archive and accession numbers are SRR9822023-SRR9822034.
ANOVA:
Analysis of variance
dpi:
Days post-inoculation
gpd:
Glyceraldehyde 3-phosphate dehydrogenase
NMDS:
Non-metric multidimensional scale
OTU:
Operational taxonomic unit
PLS-DA:
Partial least squares discriminant analysis
UPGMA:
Unweighted pair-group method with arithmetic mean
Putnam ML, du Toit LJ. First report of Alternaria blight caused by Alternaria panax on ginseng (Panax quinquefolius) in Oregon and Washington, USA. Plant Pathol. 2003;52(3):406. https://doi.org/10.1046/j.1365-3059.2003.00828.x.
Epstein E. Silicon. Annu Rev Plant Physiol Plant Mol Biol. 1999;50:641–64 https://doi.org/10.1146/annurev.arplant.50.1.641.
Fauteux F, Remus-Borel W, Menzies JG, Belanger RR. Silicon and plant disease resistance against pathogenic fungi. FEMS Microbiol Lett. 2005;249:1–6 https://doi.org/10.1016/j.femsle.2005.06.034.
Vivancos J, Labbé C, Menzies JG, Bélanger RR. Silicon-mediated resistance of Arabidopsis against powdery mildew involves mechanisms other than the salicylic acid (SA)-dependent defence pathway. Mol Plant Pathol. 2015;16:572–82 https://doi.org/10.1111/mpp.12213.
Aliyeh R, Caroline L, Humira S, et al. Silicon protects soybean plants against Phytophthora sojae by interfering with effector-receptor expression. BMC Plant Biol. 2018;18(1):97 https://doi.org/10.1186/s12870-018-1312-7.
Nayyar A, Hamel C, Lafond G, Gossen BD, Hanson K, Germida J. Soil microbial quality associated with yield reduction in continuous-pea. Appl Soil Ecol. 2009;43(1):115–21 https://doi.org/10.1016/j.apsoil.2009.06.008.
Liu X, Zhang S, Jiang Q, Bai Y, Shen G, Li S, Ding W. Using community analysis to explore bacterial indicators for disease suppression of tobacco bacterial wilt. Sci Rep. 2016;6:36773. https://doi.org/10.1038/srep36773.
Fang S, Liu D, Ye T, Deng S, Shang X. Tree species composition influences enzyme activities and microbial biomass in the Rhizosphere: a Rhizobox approach. PLoS One. 2013;8(4):e61461 https://doi.org/10.1371/journal.pone.0061461.
Cai K, Gao D, Luo S, Zeng R, Yang J, Zhu X. Physiological and cytological mechanisms of silicon-induced resistance in rice against blast disease. Physiol Plant. 2008;134(2):324–33. https://doi.org/10.1111/j.1399-3054.2008.01140.x.
Ghareeb H, Bozsó Z, Ott PG, Repenning C, Stahl F, Wydra K. Transcriptome of silicon-induced resistance against Ralstonia solanacearum in the silicon non-accumulator tomato implicates priming effect. Physiol Mol Plant Pathol. 2011;75(3):83–9 https://doi.org/10.1016/j.pmpp.2010.11.004.
Yue Y, Aichen Z, Yanjiao C, et al. Impacts of silicon addition on arsenic fractionation in soils and arsenic speciation in Panax notoginseng planted in soils contaminated with high levels of arsenic. Ecotox Environ Safe. 2018;162:400–7 https://doi.org/10.1016/j.ecoenv.2018.07.015.
Richmond KE, Sussman M. Got silicon? The non-essential beneficial plant nutrient. Curr Opin Plant Biol 2003;6(3):268–272. https://doi.org/10.1016/s1369-5266(03)00041-4.
Ma JF, Yamaji N. Silicon uptake and accumulation in higher plants. Trends Plant Sci. 2006;11:392–7 https://doi.org/10.1016/j.tplants.2006.06.007.
Fawe A, Abou-Zaid M, Menzies JG, Bélanger RR. Silicon-mediated accumulation of flavonoid phytoalexins in cucumber. Phytopathology. 1998;88:396–401 https://doi.org/10.1094/PHYTO.1998.88.5.396.
Fleck AT, Mattusch J, Schenk MK. Silicon decreases the arsenic level in rice grain by limiting arsenite transport. J Plant Nutr Soil Sci. 2013;176:785–94 https://doi.org/10.1002/jpln.201200440.
Li RY, Stroud JL, Ma JF, Mcgrath SP, Zhao FJ. Mitigation of arsenic accumulation in rice with water management and silicon fertilization. Environ Sci Technol. 2009;43:3778–83 https://doi.org/10.1021/es803643v.
Wu C, Zou Q, Xue S, Pan W, Yue X, Hartley W, Huang L, Mo J. Effect of silicate on arsenic fractionation in soils and its accumulation in rice plants. Chemosphere. 2016;165:478–86 https://doi.org/10.1016/j.chemosphere.2016.09.061.
Kong HG, Kim BK, Song GC, Lee S, Ryu C-M. Aboveground whitefly infestation-mediated reshaping of the root microbiota. Front Microbiol. 2016;7:1314 https://doi.org/10.3389/fmicb.2016.01314.
Jun Y, Jun Z, Tao W, et al. Root exudates drive the soil-borne legacy of aboveground pathogen infection. Microbiome. 2018;6(1):156 https://doi.org/10.1186/s40168-018-0537-x.
Garbeva P, Van Veen JA, Van Elsas JD. Microbial diversity in soil: selection of microbial populations by plant and soil type and implications for disease suppressivenss. Annu Rev Phytopathol. 2004;42:243–70 https://doi.org/10.1146/annurev.phyto.42.012604.135455.
Palaniyandi SA, Yang SH, Zhang L, Suh JW. Effects of actinobacteria on plant disease suppression and growth promotion. Appl Microbiol Biotechnol. 2013;97:9621–36 https://doi.org/10.1007/s00253-013-5206-1.
Shen G, Zhang S, Liu X, Jiang Q, Ding W. Soil acidification amendments change the rhizosphere bacterial community of tobacco in a bacterial wilt affected field. Appl Microbiol Biotech. 2018;102(22):9781–91 https://doi.org/10.1007/s00253-018-9347-0.
Kim YS, Kim HM, Chang C, Hwang IC, Oh H, Ahn JS, Kim KD, Hwang BK, Kim BS. Biological evaluation of neopeptins isolated from a Streptomyces strain. Pest Manag Sci. 2007;63:1208–14 https://doi.org/10.1002/ps.1450.
Lee SY, Tindwa H, Lee YS, Naing KW, Hong SH, Nam Y, Kim KY. Biocontrol of anthracnose in pepper using chitinase, β-1,3 glucanase, and 2-furancarboxaldehyde produced by Streptomyces cavourensis SY224. J Microbiol Biotechnol. 2012;22:1359–66 https://doi.org/10.4014/jmb1203.02056.
Weller DM, Raaijmakers JM, Gardener BB, Thomashow LS. Microbial populations responsible for specific soil suppressiveness to plant pathogens. Annu Rev Phytopathol. 2002;40:309–48 https://doi.org/10.1146/annurev.phyto.40.030402.110010.
Cha JY, Han S, Hong HJ, Cho H, Kim D, Kwon Y, Kwon SK, Crüsemann M, Bok Lee Y, Kim JF, et al. Microbial and biochemical basis of a Fusarium wilt-suppressive soil. ISME J. 2016;10:119–29 https://doi.org/10.1038/ismej.2015.95.
Mendes R, Kruijt K, De Bruijn I, Dekkers E, Van Der Voort M, Schneider JH, et al. Deciphering the rhizosphere microbiome for disease-suppressive bacteria. Science. 2011;332:1097–100 https://doi.org/10.1126/science.1203980.
Pieterse CM, Van Der Does D, Zamioudis C, Leon-Reyes A, Van Wees SC. Hormonal modulation of plant immunity. Annu Rev Cell Dev Biol. 2012;28:489–521 https://doi.org/10.1146/annurev-cellbio-092910-154055.
Berendsen RL, Pieterse CM, Bakker PA. The rhizosphere microbiome and plant health. Trends Plant Sci. 2012;17:478–86 https://doi.org/10.1016/j.tplants.2012.04.001.
Schlatter D, Kinkel L, Thomashow L, Weller D, Paulitz T. Disease suppressive soils: new insights from the soil microbiome. Phytopathology. 2017;107:1284–97 https://doi.org/10.1094/PHYTO-03-17-0111-RVW.
Berendsen RL, Vismans G, Yu K, Song Y, de Jonge R, Burgman WP, Burmølle M, Herschend J, Bakker PAHM, Pieterse CMJ. Disease-induced assemblage of a plant-beneficial bacterial consortium. ISME J. 2018;12:1496–507 https://doi.org/10.1038/s41396-018-0093-1.
Rudrappa T, Czymmek KJ, Pare PW, Bais HP. Root-secreted malic acid recruits beneficial soil bacteria. Plant Physiol. 2008;148:1547–56 https://doi.org/10.1104/pp.108.127613.
Raaijmakers JM, Mazzola M. Soil immune responses. Science. 2016;352:1392–3 https://doi.org/10.1126/science.aaf3252.
Bakker PAHM, Pieterse CMJ, de Jonge R, Berendsen RL. The soil-borne legacy. Cell. 2018;172:1178–80 https://doi.org/10.1016/j.cell.2018.02.024.
Badri DV, Chaparro JM, Zhang RF, Shen QR, Vivanco JM. Application of natural blends of phytochemicals derived from the root exudates of Arabidopsis to the soil reveal that phenolic-related compounds predominantly modulate the soil microbiome. J Biol Chem. 2013;288:4502–12. https://doi.org/10.1074/jbc.M112.433300.
Gu Y, Wei Z, Wang X, Friman V-P, Huang J, Wang X, Mei X, Xu Y, Shen Q, Jousset A. Pathogen invasion indirectly changes the composition of soil microbiome via shifts in root exudation profile. Biol Fertil Soils. 2016;52:997–1005 https://doi.org/10.1007/s00374-016-1136-2.
Sasse J, Martinoia E, Northen T. Feed your friends: do plant exudates shape the root microbiome? Trends Plant Sci. 2018;23:25–41 https://doi.org/10.1016/j.tplants.2017.09.003.
Kardol P, Bezemer TM, Van Der Putten WH. Temporal variation in plant-soil feedback controls succession. Ecol Lett. 2006;9(9):1080–8. https://doi.org/10.1111/j.1461-0248.2006.00953.x.
Reinhart KO, Callaway RM. Soil biota and invasive plants. New Phytol. 2006;170(3):445–57 https://doi.org/10.1111/j.1469-8137.2006.01715.x.
Bever JD, Dickie IA, Facelli E, Facelli JM, Klironomos J, Moora M. Rooting theories of plant community ecology in microbial interactions. Trends Ecology Evol 2010;25(8):468–478. https://doi.org/https://doi.org/10.1016/j.tree.2010.05.004.
Bulgarelli D, Rott M, Schlaeppi K, Ver Loren van Themaat E, Ahmadinejad N, Assenza F, Rauf P, Huettel B, Reinhardt R, Schmelzer E, et al. Revealing structure and assembly cues for Arabidopsis root-inhabiting bacterial microbiota. Nature. 2012;488(7409):91–95. https://doi.org/https://doi.org/10.1038/nature11336.
Faheem M, Raza W, Zhong W, Nan Z, Shen Q, Xu Y. Evaluation of the biocontrol potential of streptomyces goshikiensis YCXU against Fusarium oxysporum f. sp. niveum: theory and applications in pest management. Biol Control. 2015;81(10):101–10 https://doi.org/10.1016/j.biocontrol.2014.11.012.
Sun H, Wang Q, Liu N, Zhang C, Liu Z, Zhang Y. Effects of different leaf litters on the physicochemical properties and bacterial communities in Panax ginseng-growing soil. Appl Soil Ecol. 2016;111:17–24 https://doi.org/10.1016/j.apsoil.2016.11.008.
Zhang H, Feng J, Chen S, et al. Geographical patterns of nirS gene abundance and nirS-type denitrifying bacterial community associated with activated sludge from different wastewater treatment plants. Microb Ecol. 2019;77(2):304–16 https://doi.org/10.1007/s00248-018-1236-7.
Wang Q, Sun H, Xu C, et al. Analysis of rhizosphere bacterial and fungal communities associated with rusty root disease of Panax ginseng. Appli Soil Ecol. 2019;138:245–52 https://doi.org/10.1016/j.apsoil.2019.03.012.
Caporaso JG, Kuczynski J, Stombaugh K, Bittinger FD, Bushman EK, Costello N, Fierer AG, Pena JK, Goodrich JI, Gordon GA, et al. QIIME allows analysis of high-throughput community sequencing data. Nat Methods. 2010;7(5):335–6 https://doi.org/10.1038/nmeth.f.303.
Gill SR, Pop M, DeBoy RT, Eckburg PB, Turnbaugh PJ, Samuel BS, Gordon JI, Relman DA, Fraser-Liggett CM, Nelson KE. Metagenomic analysis of the human distal gut microbiome. Science. 2006;312(5778):1355–9 https://doi.org/10.1126/science.1124234.
Chen YF, Yang FL, Lu HF, Wang BH, Chen YB, Lei DJ, Wang YZ, Zhu BL, Li LJ. Characterization of fecal microbial communities in patients with liver cirrhosis. Hepatology. 2011;54(2):562–72 https://doi.org/10.1002/hep.24423.
Magoc T, Salzberg SL. Flash: fast length adjustment of short reads to improve genome assemblies. Bioinformatics. 2011;27(21):2957–2963. https://doi.org/https://doi.org/10.1093/bioinformatics/btr507.
Edgar RC. Search and clustering orders of magnitude faster than BLAST. Bioinformatics. 2010;26(19):2460–1 https://doi.org/10.1093/bioinformatics/btq461.
DeSantis TZ, Hugenholtz P, Larsen N, Rojas M, Brodie EL, Keller K, Huber T, Dalevi D, Hu P, Andersen GL. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl Environ Microbiol. 2006;72(7):5069–72 https://doi.org/10.1128/AEM.03006-05.
Lozupone C, Knight R. UniFrac: a new phylogenetic method for comparing microbial communities. Appl Environ Microb. 2005;71(12):8228–35 https://doi.org/10.1128/AEM.71.12.8228-8235.2005.
Lozupone CA, Hamady M, Kelley ST, Knight R. Quantitative and qualitative beta diversity measures lead to different insights into factors that structure microbial communities. Appl Environ Microb. 2007;73(5):1576–85 https://doi.org/10.1128/AEM.01996-06.
Ramette A. Multivariate analyses in microbial ecology. FEMS Microbio Ecol. 2007;62(2):142–60 https://doi.org/10.1111/j.1574-6941.2007.00375.x.
Zaura E, Keijser BJF, Huse SM, Crielaard W. Defining the healthy "core microbiome" of oral microbial communities. BMC Microbiol. 2009;9:12 https://doi.org/10.1186/1471-2180-9-259.
White JR, Nagarajan N, Pop M. Statistical methods for detecting differentially abundant features in clinical metagenomic samples. PloS Comput Biol. 2009;5(4) https://doi.org/10.1371/journal.pcbi.1000352.
We thank Jianxin Deng (Yangtze University) for constructive input and discussions. We also acknowledge Hai Sun for providing the necessary facilities. YZ acknowledges Cai Shao and Yiming Guan for assistance in seedling and sample collection.
YZ acknowledges funding from the China Agriculture Research System (CARS-21), the Agricultural Science and Technology Innovation Program (CAAS-XTCX2016012), the National Key R&D Program of China (2018YFD0201100), the Collaborative Innovation Project of the Chinese Academy of Agricultural Sciences (2018XTCX01), the National Natural Science Foundation of China (31501828 and 81903755), and the Central Public-interest Scientific Institution Basal Research Foundation of CAAS (No. 1610342017017, No. 1610342018011, No. 1610342018020) for high-throughput sequencing. The funding bodies had no role in the design of the study, the analysis and interpretation of data, or the writing of the manuscript.
Institute of Special Wild Economic Animals and Plants, Chinese Academy of Agriculture Sciences, Changchun, 130112, People's Republic of China
Meijia Li, Qiuxia Wang, Zhengbo Liu, Xiaoxi Pan & Yayu Zhang
ML carried out the experimental plan and collected the samples. QW, ZL and XP extracted DNA and conducted the data analysis. YZ participated in the experimental design and drafted the manuscript. All authors read and approved the final manuscript.
Correspondence to Yayu Zhang.
Li, M., Wang, Q., Liu, Z. et al. Silicon application and related changes in soil bacterial community dynamics reduced ginseng black spot incidence in Panax ginseng in a short-term study. BMC Microbiol 19, 263 (2019). https://doi.org/10.1186/s12866-019-1627-z
Accepted: 28 October 2019
Alternaria panax
Ginseng black spot
Soil bacterial community
Illumina HiSeq sequencing
A rectangular array of chairs is an arrangement of the chairs in rows and columns such that each row contains the same number of chairs as every other row and each column contains the same number of chairs as every other column. If there must be at least two chairs in every row and column and all of the chairs in the room must be included, how many arrays are possible in a classroom containing $36$ chairs? Note that $12$ rows of $3$ chairs is different from $3$ rows of $12$ chairs.
We are counting the number of ways $36$ can be expressed as an ordered product of two positive integers, neither of which is $1$. Factoring, we find that $36=2^2\cdot3^2$. The possible values for the number of rows are 2, 3, 4, 6, 9, 12, and 18 (we exclude 1 and 36, since every row and every column must contain at least two chairs). Each value corresponds to a unique arrangement of the chairs. Thus, there are $\boxed{7}$ possible arrays.
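The count can be confirmed by brute force; a minimal Python sketch (the function name is ours, not part of the problem):

```python
def count_arrays(total, min_per_line=2):
    """Count ordered (rows, columns) arrangements of `total` chairs in
    which every row and every column holds at least `min_per_line` chairs."""
    count = 0
    for rows in range(1, total + 1):
        if total % rows != 0:
            continue
        cols = total // rows
        # each row holds `cols` chairs; each column holds `rows` chairs
        if cols >= min_per_line and rows >= min_per_line:
            count += 1
    return count

print(count_arrays(36))  # 7
```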
\begin{document}
\title{Simultaneous Similarity and Triangularization\\ of Sets of 2 by 2 Matrices}
\author{Carlos A. A. Florentino}
\begin{abstract} Let $\mathcal{A}=(A_{1},...,A_{n},...)$ be a finite or infinite sequence of $2\times2$ matrices with entries in an integral domain. We show that, except for a very special case, $\mathcal{A}$ is (simultaneously) triangularizable if and only if all pairs $(A_{j},A_{k})$ are triangularizable, for $1\leq j,k\leq\infty$. We also provide a simple numerical criterion for triangularization. \begin{comment} and when $A_{1},A_{2},A_{3},A_{4}$ do not pairwise commute it reduces further to $\mathsf{det}(A_{j}A_{k}-A_{k}A_{j})=0$ for all $j=1,2,3$, $k=1,2,...,n$. \end{comment} {}
Using constructive methods in invariant theory, we define a map (with the minimal number of invariants) that distinguishes simultaneous similarity classes for non-commutative sequences over a field of characteristic $\neq2$. We also describe canonical forms for sequences of $2\times2$ matrices over algebraically closed fields, and give a method for finding sequences with a given set of invariants.
\begin{comment} Review: ...MSC: (primary) 14L30 (1980-now) Invariants of matrices (quotients)
(secondary) 15A21 (1973-now) Canonical forms, reductions, classification \end{comment} {}
\end{abstract}
\keywords{Simultaneous similarity, Simultaneous triangularization, Invariants of 2 by 2 matrices. }
\maketitle
\section{Introduction}
\subsection{Motivation}
The properties of a set of square matrices which are invariant under simultaneous conjugation have been the subject of many investigations. In the case of a pair of matrices many problems have been solved including finding criteria for simultaneous similarity, simultaneous triangularization, existence of common eigenvectors, etc. Analogous problems have been solved for subalgebras, subgroups or sub-semigroups of square matrices (see, for example, \cite{Fr,L,RR} and references therein).
Here, we mainly concentrate on the problems of simultaneous similarity, simultaneous triangularization and canonical forms of countable sets of $2\times2$ matrices with entries in an arbitrary field, with the focus on effectively computable solutions (the triangularization results will hold over integral domains).
Even though we are concerned with $2\times2$ matrices, it is convenient to describe the setup more generally as follows. Fix integers $m\geq1$, $n\geq1$ and let $V_{m,n}$ be the vector space of $n$-tuples of $m\times m$ matrices with entries in a field $\mathbb{F}$. The group of invertible matrices $G:=GL_{m}(\mathbb{F})$ acts on $V_{m,n}$ by simultaneous conjugation on every component: \begin{equation} g\cdot\mathcal{A}:=(gA_{1}g^{-1},...,gA_{n}g^{-1}),\label{eq:accao}\end{equation} where $g\in G$ and $\mathcal{A}=(A_{1},...,A_{n})\in V_{m,n}$. By analogy with the case $n=1$, the orbit $G\cdot\mathcal{A}:=\{g\cdot\mathcal{A}:g\in G\}\subset V_{m,n}$ will be called the \emph{conjugacy class} of $\mathcal{A}$, and can be viewed as an element $[\mathcal{A}]\in V_{m,n}/G$ of the quotient space. One can consider the following problems.
\newcounter{art}\renewcommand{\labelenumi}{(\roman{art})}\addtocounter{art}{1}
\begin{enumerate} \item Identify one element for each conjugacy class in a natural way, i.e, list all possible `canonical forms';\addtocounter{art}{1} \item Construct invariants that distinguish all conjugacy classes.\addtocounter{art}{1}
\begin{comment} Parametrize the `space of orbits' in a natural way, i.e, endow the set of all conjugacy classes with a reasonable geometric and/or algebraic structure.\addtocounter{art}{1} \end{comment} {} \end{enumerate} Naturally, these problems are related. In general, a solution to (i) will lead to a solution of (ii); however, the answer obtained in this way might be unnatural, and the description in terms of invariants is often useful. \begin{comment} in turn, the consideration of all invariants may lead to a geometric description giving a (sometimes partial) solution to problem (iii). \end{comment} {}
These are difficult questions in general. Here, we will be concerned with the description and classification of conjugacy classes in the case $m=2$. As we will see, in this case the questions above admit simple and complete answers, given by explicitly computable numerical criteria. In order to state our results in concise terms (see Definition \ref{def:subsequence}) we will adopt the following terminology for (ordered) sets of matrices.
\begin{defn} \label{def:matrix-sequence}An element $\mathcal{A}=(A_{1},...,A_{n})\in V_{m,n}$ will be called a \emph{matrix sequence of size} $m\times m$ \emph{and length $n$}, or just a \emph{matrix sequence} when the integers $m$ and $n$ are assumed, and the matrices $A_{1},...,A_{n}$ will be called the \emph{components} or \emph{terms} of $\mathcal{A}$. We say that two matrix sequences $\mathcal{A}=(A_{1},...,A_{n})$ and $\mathcal{B}=(B_{1},...,B_{n})$ in $V_{m,n}$ are \emph{similar} and write $\mathcal{A}\sim\mathcal{B}$ if they lie in the same conjugacy class (or $G$ orbit) i.e, if there is an element $g\in G$ such that $\mathcal{B}=g\cdot\mathcal{A}$ (equivalently, $B_{j}=gA_{j}g^{-1}$ for all $j=1,...,n$). \end{defn} We will also allow sequences $\mathcal{A}=(A_{1},...,A_{n},...)\in V_{m,\infty}$ with a countable infinite number of terms.
It turns out that a concrete answer to the above problems requires the separation of all matrix sequences (of fixed size and length) into $G$-invariant subclasses with distinct algebraic and/or geometric properties. For instance, for the class of matrices that pairwise commute some of these questions are easier. This case can be reduced to the case of a single matrix ($n=1$) which is classically solved with the Jordan decomposition theorem (at least when $\mathbb{F}$ is algebraically closed and some term is nonderogatory).
One property that is often relevant is that of simultaneous triangularization (or block triangularization); it generalizes commutativity and has a natural geometric interpretation. To be precise, let us consider the following standard notions. Note that a sequence $\mathcal{A}\in V_{m,n}$ can be viewed either as an ordered set of $m\times m$ matrices, as in the definition above, or alternatively, as a \emph{single} $m\times m$ matrix whose entries are $n$-dimensional vectors. \begin{equation} \mathcal{A}=\left(\begin{array}{ccc} a_{11} & \cdots & a_{1m}\\ \vdots & \ddots & \vdots\\ a_{m1} & \cdots & a_{mm}\end{array}\right),\quad a_{jk}\in\mathbb{F}^{n}.\label{eq:nmatrix}\end{equation}
\begin{defn} Let $\mathcal{A}$ be a matrix sequence. $\mathcal{A}$ will be called \emph{commutative} if all its terms pairwise commute (and non-commutative otherwise). We will say that $\mathcal{A}$ is an \emph{upper triangular matrix sequence} if all the vectors below the main diagonal are zero ($a_{jk}=0$ for all $j>k$ in equation (\ref{eq:nmatrix})), and that $\mathcal{A}$ is \emph{triangularizable} if it is similar to an upper triangular matrix sequence. \end{defn} In geometric terms, $\mathcal{A}$ is triangularizable if and only if there is a full flag of vector subspaces of $\mathbb{F}^{m}$ which is invariant under every term $A_{j}$ of $\mathcal{A}$ (for the standard action of $G$ on $\mathbb{F}^{m}$). Observe also that upper and lower triangularization are equivalent (over any ring). For the reasons mentioned, we will add the following \emph{triangularization problem} to the previous list.
\begin{enumerate} \item Give an effective numerical criterion for a matrix sequence to be triangularizable. \end{enumerate} Problem (iii) was solved effectively by the following refinement of McCoy's Theorem (\cite{Mc}). Let $[A,B]=AB-BA$ denote the commutator of two matrices.
\begin{thm} \label{thm:McCoy}\emph{{[}McCoy, see \cite{Mc},\cite{L}]} Let $\mathbb{F}$ be algebraically closed. An $m\times m$ matrix sequence $\mathcal{A}$ of length $n$ is triangularizable if and only if\[ p(A_{1},...,A_{n})[A,A']\] is nilpotent for all monomials $p(x)$ (in non-commuting indeterminates $x_{1},...,x_{n}$) of total degree not greater than $m^{2}$ and all terms $A,A'$ of $\mathcal{A}$. \end{thm}
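For $m=2$ the nilpotency tests above can be carried out exactly over the integers, since a $2\times2$ matrix $N$ is nilpotent if and only if $\mathsf{tr}\, N=\mathsf{det}\, N=0$. A small Python sanity check of McCoy's condition for pairs (the helper names are ours):

```python
def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def comm(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return tuple(tuple(x - y for x, y in zip(r, s)) for r, s in zip(AB, BA))

def is_nilpotent2(N):
    # a 2x2 matrix is nilpotent iff its trace and determinant both vanish
    (a, b), (c, d) = N
    return a + d == 0 and a * d - b * c == 0

# A triangularizable pair: [A,B], A[A,B] and B[A,B] are all nilpotent.
A, B = ((1, 1), (0, 2)), ((3, 0), (0, 4))
# A non-triangularizable pair: [E12, E21] = diag(1, -1) is not nilpotent.
E12, E21 = ((0, 1), (0, 0)), ((0, 0), (1, 0))
```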
\begin{comment} Let $L(m)$ be the natural number \[ L(m)=\min\left\{ \frac{m^{2}}{3}+1,\ m\sqrt{\frac{2m^{2}}{m-1}+\frac{1}{4}}+\frac{m}{2}-1\right\} .\] Paz \cite{Paz} and Pappacena \cite{Pap} showed that any monomial $p(A_{1},...,A_{n})$ can be written as a (non-commutative) polynomial $q(A_{1},...,A_{n})$ of degree $\leq L(m)$. Thus, in McCoy's theorem one can further reduce to monomials $p(x)$ whose total degree is not greater than $L(m)$. This is the best result know for general $m$, but it is conjectured that one could lower this bound to $2m-2$ (which agrees with the greatest integer $\leq L(m)$ for $m=2,3,4$). \end{comment} {}Using results of Paz \cite{Paz} and Pappacena \cite{Pap}, the bound on the degree $d$ of the monomials $p(x)$ in Theorem \ref{thm:McCoy} can be improved to $d\leq\frac{m^{2}}{3}+1$ and $d\leq m\sqrt{\frac{2m^{2}}{m-1}+\frac{1}{4}}+\frac{m}{2}-1$, respectively, each one being more efficient for smaller (resp. larger) values of $m$.
To discuss simultaneous similarity, consider the notion of subsequence.
\begin{defn} \label{def:subsequence}A \emph{subsequence} of a matrix sequence $\mathcal{A}=(A_{1},...,A_{n})\in V_{m,n}$ is a matrix sequence of the form \[ \mathcal{A}_{J}=(A_{j_{1}},...,A_{j_{l}})\in V_{m,l}\] obtained from $\mathcal{A}$ by deleting some of its terms (here $J=(j_{1},...,j_{l})\in\{1,...,n\}^{l}$ for some natural number $l\leq n$, and the indices are strictly increasing: $1\leq j_{1}<...<j_{l}\leq n$). \end{defn}
\begin{comment} With this terminology, a corollary of the results of Paz and Pappacena is the following statement which does not seem to have been written explicitly.
\begin{cor} \label{cor:Pa}\cite{Paz} Let $\mathbb{F}$ be algebraically closed. An $m\times m$ matrix sequence $\mathcal{A}$ is triangularizable if and only if all of its subsequences of length $\leq L(m)+2$ are triangularizable. \end{cor} \begin{proof} The condition is obviously necessary. Suppose all subsequences of $\mathcal{A}=(A_{1},...,A_{n})$ of length $\leq L(m)+2$ are triangularizable. Let $A,A'$ be terms of $\mathcal{A}$ and $p(A_{1},...,A_{n})=A_{\phi(1)}\cdots A_{\phi(k)}$ be a (non-commutative) monomial of degree $k\leq L(m)$ where $\phi:\left\{ 1,...,k\right\} \to\left\{ 1,...,n\right\} $ is any map (so $A_{\phi(1)},...,A_{\phi(k)}$ are not necessarily distinct terms of $\mathcal{A}$). Since $(A,A',A_{\phi(1)},...,A_{\phi(k)})$ is a triangularizable sequence by hypothesis, after a suitable conjugation $p(A_{1},...,A_{n})[A,A']=A_{\phi(1)}\cdots A_{\phi(k)}(AA'-A'A)$ is upper triangular with zeros in the main diagonal, so it is nilpotent. The Corollary then follows from Theorem \ref{thm:McCoy} since any monomial is a polynomial of degree $\leq L(m)$ by the results of Paz and Pappacena and sums of nilpotent matrices are nilpotent... This is not true! See Radjavi--Rosenthal. \end{proof}
\end{comment} {}Let $\mathbb{A}=\mathbb{F}[A_{1},...,A_{n}]$ be the algebra generated over $\mathbb{F}$ by the terms of $\mathcal{A}$ (of dimension $\leq m^{2}$). Suppose we rearrange the terms of $\mathcal{A}$ such that $\mathbb{A}$ is in fact generated by just the first $k\leq m^{2}$ elements. Then, for all $j>k$, $A_{j}=p_{j}(A_{1},...,A_{k})$ for some polynomial $p_{j}$ with coefficients in $\mathbb{F}$. From this one can easily obtain a similarity test.
\begin{fact} Let $\mathcal{A},\mathcal{B}$ be two $m\times m$ matrix sequences of the same finite or infinite length $n$. Then $\mathcal{A}$ and $\mathcal{B}$ are similar if and only if \emph{all corresponding subsequences} of length $\leq m^{2}$ are similar (that is, $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for all $J=(j_{1},...,j_{l})$ with $1\leq j_{1}<...<j_{l}\leq n$ and $l\leq m^{2}$). \end{fact} One can improve this statement if we restrict to a big class of matrix sequences. The Dubnov-Ivanov-Nagata-Higman \cite{DINH} theorem states that, given $m\in\mathbb{N}$, there is a natural number $N(m)$ with the following property: every associative algebra over $\mathbb{F}$ satisfying the identity $x^{m}=0$ is nilpotent of degree $N(m)$ (i.e, any product $x_{1}\cdots x_{k}$ of $k$ elements $x_{1},...,x_{k}$ of the algebra is zero, for $k\geq N(m)$). It is known that $\frac{m(m+1)}{2}\leq N(m)\leq\min\left\{ 2^{m}-1,m^{2}\right\} $ \cite{Ra}, and it was conjectured that $N(m)=\frac{m(m+1)}{2}$. This was verified to be true for $m=2,3,4$.
The following remarkable result of Procesi (\cite{Pr}) relates invariants of matrices and nilpotency degrees of associative nil-algebras. It provides explicit generators for the algebra of $G$-invariant regular (i.e, polynomial) functions on $V_{m,n}$. For a multiindex $J=(j_{1},...,j_{k})\in\{1,...,n\}^{k}$
of \emph{length} $|J|=k$, define the $G$-invariant regular function $t_{J}:V_{m,n}\to\mathbb{F}$ as the trace of the word in the terms of $\mathcal{A}\in V_{m,n}$ dictated by $J$, that is \begin{equation} t_{J}(\mathcal{A}):=\mathsf{tr}(A_{j_{1}}...A_{j_{k}}).\label{eq:generators}\end{equation}
\begin{thm} \label{thm:Procesi}\emph{{[}Procesi, \cite{Pr}]} Let $\mathbb{F}$ have characteristic $0$, and let $f:V_{m,n}\to\mathbb{F}$ be a $G$-invariant polynomial function. Then $f$ is a polynomial in the set of functions \[
\{t_{J}:|J|\leq N(m)\}\]
where $|J|$ denotes the length of the multiindex $J$. \end{thm} Let us say that a matrix sequence $\mathcal{A}$ is \emph{semisimple} if its $G$ orbit is Zariski closed in $V_{m,n}$. Semisimple sequences form a dense subset of $V_{m,n}$. Noting that $N(2)=3$, Procesi's theorem has the following corollary.
\begin{cor} \label{cor:Procesi}Let $\mathcal{A},\mathcal{B}$ be two $2\times2$ semisimple matrix sequences of the same finite or infinite length. Then $\mathcal{A}\sim\mathcal{B}$ if and only if \emph{all corresponding subsequences} of length $\leq3$ are similar (that is, $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for all $J$ with length $\leq3$). \end{cor} This corollary gives a numerical criterion for simultaneous similarity of semisimple sequences. In the case $m=2$, the number of generators can be further reduced, and moreover, Drensky described all relations in terms of a generating set (\cite{D}). He states his main theorem for traceless matrices, but an easy modification yields:
\begin{thm} \emph{\label{thm:Drensky}{[}Drensky, \cite{D}]} The generators of the algebra (over $\mathbb{F}$ of characteristic 0) of $G$-invariant polynomial functions on $V_{2,n}$ are given by:\[ t_{j},\ t_{jj},\ u_{jk},\ s_{jkl}\quad\quad1\leq j<k<l\leq n,\] where $u_{jk}:=2t_{jk}-t_{j}t_{k}$, $s_{jkl}:=t_{jkl}-t_{lkj}$ and $t_{j},t_{jk},t_{jkl}$ are as in (\ref{eq:generators}). A full set of relations is\begin{equation}
s_{abc}s_{def}+\frac{1}{4}\left|\begin{array}{ccc} u_{ad} & u_{ae} & u_{af}\\ u_{bd} & u_{be} & u_{bf}\\
u_{cd} & u_{ce} & u_{cf}\end{array}\right|=0,\quad\quad\quad u_{ea}s_{bcd}-u_{eb}s_{acd}+u_{ec}s_{abd}-u_{ed}s_{abc}=0,\label{eq:relations}\end{equation} for all appropriate indices. \end{thm} In this article, we show that these same generators can be used to get even more explicit solutions to problems (i)-(iii) in the case $m=2$. Our methods are mostly elementary and their generalization to higher $m$ seems possible, although computationally very involved. \begin{comment} useful equivalence between block-triangularization (defined analogously using block upper triangular matrices) and the property that the corresponding orbit is not Zariski closed (see \cite{A,Pr}). \end{comment} {}
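The first family of relations in (\ref{eq:relations}) can be checked numerically. A Python sketch in exact integer arithmetic (the helper names are ours), verifying $s_{abc}s_{def}+\frac{1}{4}\det(u_{jk})=0$ in the case $(a,b,c)=(d,e,f)=(1,2,3)$:

```python
def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def tr(A):
    return A[0][0] + A[1][1]

def u(A, B):                      # u_jk = 2 t_jk - t_j t_k
    return 2 * tr(mul(A, B)) - tr(A) * tr(B)

def s(A, B, C):                   # s_jkl = t_jkl - t_lkj
    return tr(mul(mul(A, B), C)) - tr(mul(mul(C, B), A))

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def first_relation(A1, A2, A3):
    # 4 s_123^2 + det[u_jk] should vanish (the 1/4 factor cleared)
    U = [[u(X, Y) for Y in (A1, A2, A3)] for X in (A1, A2, A3)]
    return 4 * s(A1, A2, A3) ** 2 + det3(U)

print(first_relation(((1, 2), (3, 4)), ((0, 1), (1, 1)), ((2, 0), (1, -1))))  # 0
```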
\subsection{Statement of the main results\label{sub:Statement}}
From now on, we restrict to the space $V_{n}=V_{2,n}$ of sequences of $2\times2$ matrices of length $n\in\mathbb{N}\cup\left\{ \infty\right\} $, i.e, the case $m=2$, except where explicitly stated.
The article can be roughly divided into two parts. In the first part, Section \ref{sec:Triangularization}, we work with the space $V_{n}(R)$ of $2\times2$ matrix sequences with coefficients in an integral domain $R$, on which the group $G=GL_{2}(R)$ of invertible matrices over $R$ acts by conjugation. Here, we define and study reduced sequences, consider the triangularization problem, and prove Theorems \ref{thm:Flo} to \ref{thm:efic}, stated below. They provide an efficient numerical criterion for triangularization of matrix sequences in $V_{n}(R)$.
The case $m=2$ is simple enough that some easy arguments already improve some of the statements above, even in the more general case of $2\times2$ matrices over integral domains.
\begin{prop} \label{pro:leq3}$\mathcal{A}\in V_{n}(R)$ is triangularizable if and only if every subsequence of length $\leq3$ is triangularizable. \end{prop} For simultaneous similarity, a simple argument generalizes and improves Corollary \ref{cor:Procesi}, in the case $m=2$, to account for all matrix sequences (not necessarily semisimple).
\begin{prop} \label{pro:sim3}Over an integral domain, $\mathcal{A}\sim\mathcal{B}$ if and only if $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for all $J$ of length $\leq3$. Moreover, under the generic condition $[A_{1},A_{2}]\neq0$, $\mathcal{A}$ is similar to $\mathcal{B}$ if and only if $(A_{1},A_{2},A_{j})\sim(B_{1},B_{2},B_{j})$, for all $j=3,...,n$. \end{prop} In other words, the conjugation action of $GL_{2}(R)$ on $2\times2$ matrix sequences of any length is completely determined by the same action on triples of $2\times2$ matrices. These two propositions seem to be standard, and I thank R. Guralnick for furnishing simple arguments leading to their proofs, included below for convenience and completeness.
The statement of Proposition \ref{pro:leq3} is the best possible in this generality, as there are $2\times2$ matrix sequences of length $3$ (therefore, \emph{a fortiori} for every size $m\times m$, $m\geq2$ and every length $n\geq3$) that are not triangularizable but are pairwise triangularizable (see Example \ref{exa:example} below). See however Theorem \ref{thm:leq2} below.
Let us now consider the problem of finding a numerical criterion for triangularization of general $2\times2$ sequences. By Proposition \ref{pro:leq3}, we just need to consider \emph{triples} of $2\times2$ matrices. Using only the $G$-invariant functions given by the trace and the determinant, the following is a simple numerical criterion.
\begin{thm} \label{thm:Flo}A $2\times2$ matrix sequence $\mathcal{A}$ (over $R$) of length $n$ is triangularizable if and only if all its terms are triangularizable and \begin{equation} \mathsf{det}(AB-BA)=\mathsf{tr}(ABC-CBA)=0\label{eq:Flo}\end{equation} for all terms $A,B,C$ of $\mathcal{A}$. In particular, a pair $(A,B)\in V_{2}(R)$ is triangularizable if and only if both $A$ and $B$ are triangularizable and $\mathsf{det}(AB-BA)=0$. \end{thm} \begin{rem*} Over an algebraically closed field $\bar{\mathbb{F}}$, every single matrix is triangularizable. Suppose $\mathcal{A}$ is a $2\times2$ matrix sequence over $\bar{\mathbb{F}}$ with $[A,B]$ and $C[A,B]$ nilpotent for all terms $A,B,C$ of $\mathcal{A}$. Then, Equation (\ref{eq:Flo}) holds and Theorem \ref{thm:Flo} shows that all mentioned bounds in Theorem \ref{thm:McCoy} can be improved for $m=2$, as the condition that $p$ is a monomial of degree $\leq1$ is already sufficient (and necessary) for triangularization. \end{rem*} Theorem \ref{thm:Flo} generalizes a result proved in \cite{Fl} for algebraically closed fields. Observe that the case $n=2$ of this theorem is a direct generalization to integral domains of a well-known criterion, obtained in \cite{Fr}, for a pair of $2\times2$ matrices over $\bar{\mathbb{F}}$ to be triangularizable.
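Conditions (\ref{eq:Flo}) are directly computable over any integral domain. A Python sketch in exact integer arithmetic (the helper names are ours):

```python
def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def sub(A, B):
    return tuple(tuple(x - y for x, y in zip(r, s)) for r, s in zip(A, B))

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def tr(A):
    return A[0][0] + A[1][1]

def flo_conditions(mats):
    """det(AB - BA) = 0 and tr(ABC - CBA) = 0 for all terms A, B, C.
    (Each term must also be triangularizable on its own.)"""
    return all(
        det(sub(mul(A, B), mul(B, A))) == 0 for A in mats for B in mats
    ) and all(
        tr(sub(mul(mul(A, B), C), mul(mul(C, B), A))) == 0
        for A in mats for B in mats for C in mats
    )

upper = [((1, 2), (0, 3)), ((0, 1), (0, 0)), ((2, 5), (0, 2))]  # passes
pair = [((0, 1), (0, 0)), ((0, 0), (1, 0))]                      # fails
```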
As a consequence of our study of invariants, we can improve Proposition \ref{pro:leq3} under a simple non-degeneracy condition on $\mathcal{A}$. Let us define a \emph{reduced} sequence to be one with no commuting pairs among its terms.
\begin{thm} \label{thm:leq2}A $2\times2$ reduced matrix sequence $\mathcal{A}$ of length $\geq4$ is triangularizable (over $R$) if and only if all subsequences of $\mathcal{A}$ of length $\leq2$ are triangularizable (over $R$). \end{thm} With a little more care we get a test, computationally much more efficient, whose complexity grows \emph{only linearly} with the number $n$ of matrices in $\mathcal{A}$. Let us define the reduced length of a sequence $\mathcal{A}$ to be the biggest length of a reduced subsequence $\mathcal{B}\subset\mathcal{A}$.
\begin{thm} \label{thm:efic}Let $\mathcal{A}=(A_{1},...,A_{n})\in V_{n}$ have reduced length $l\le n$ and rearrange its terms so that $\mathcal{A}'=(A_{1},...,A_{l})$ is reduced. Then
(i) In the case $l\leq3$, $\mathcal{A}$ is triangularizable if and only if $\mathcal{A}'$ is triangularizable.
(ii) In the case $l\geq4$, $\mathcal{A}$ is triangularizable if and only if $A_{1},...,A_{l}$ are triangularizable and $\mathsf{det}([A_{j},A_{k}])=0$ for all $j=1,2,3$ and all $j<k\leq l$. \end{thm} In the second part of the article, we work mainly over a field $\mathbb{F}$. Section \ref{sec:SimSim} deals with simultaneous similarity for $2\times2$ matrix sequences. Using standard invariant theory, one sees that the values of the Drensky generators are enough to distinguish semisimple conjugacy classes. But these classes should depend on $4n-3$ parameters only, the {}``dimension'' of the quotient space $V_{n}/G$ ($V_{n}$ has dimension $4n$, and $G$ acts as $SL_{2}(\mathbb{F})$, a three dimensional group) which is much less than the number, $2n+\binom{n}{2}+\binom{n}{3}$, of Drensky generators. After describing rational invariants that distinguish general triangularizable sequences, we obtain a solution, with the minimal number of invariants, to problem (ii) for non-commutative sequences as follows.
Let $\mathcal{S}'$ (resp. $\mathcal{U}'$) denote the subsets of $V_{n}=V_{n}(\mathbb{F})$ of semisimple (resp. triangularizable) sequences such that $A_{1}$ is diagonalizable and $[A_{1},A_{2}]\neq0$. Using the $G$-invariant functions $t_{J}:V_{n}\to\mathbb{F}$ in (\ref{eq:generators}) define the maps $\Phi':\mathcal{S}'/G\to\mathbb{F}^{4n-3}$ and $\Psi':\mathcal{U}'/G\to\mathbb{F}^{2n}\times\mathbb{P}^{n-2}$, where $\mathbb{P}^{k}$ denotes the projective space over $\mathbb{F}$ of dimension $k$, by the formulae \begin{eqnarray} \Phi'([\mathcal{A}]) & = & \left(t_{1},t_{11},t_{2},t_{22},t_{12},...,t_{k},t_{1k},t_{2k},s_{12k},...,t_{n},t_{1n},t_{2n},s_{12n}\right),\label{eq:Phi'}\\ \Psi'([\mathcal{A}]) & = & \left(t_{1},t_{11},t_{2},t_{12},...,t_{k},t_{1k},...,t_{n},t_{1n};\ \psi'\right),\label{eq:Psi'}\end{eqnarray} where $\psi'$ is defined later in Section \ref{sec:CanonicalForms}, and $[\mathcal{A}]$ denotes the conjugacy class of $\mathcal{A}$.
\begin{thm} \label{thm:Invariants}Let $\mathbb{F}$ be a field of characteristic $\neq2$. The map $\Phi':\mathcal{S}'/G\to\mathbb{F}^{4n-3}$ is injective and the map $\Psi':\mathcal{U}'/G\to\mathbb{F}^{2n}\times\mathbb{P}^{n-2}$ is two-to-one. \end{thm} The last section (Section \ref{sec:CanonicalForms}) concerns the classification of canonical forms for sets of $2\times2$ matrices over $\bar{\mathbb{F}}$ and proposes a solution for problem (i). The main result is Theorem \ref{thm:canonical}, where five types of canonical forms are obtained for sequences with at least one non-commuting pair. It shows that, for non-commutative sequences, the restriction to $\mathcal{S}'$ and $\mathcal{U}'$ in Theorem \ref{thm:Invariants} is only apparent (see also Remark \ref{rem:NoRestriction}). We also describe a simple method for finding a sequence in canonical form with a given value of $\Phi'$ or $\Psi'$.
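For $n=3$, for instance, $\Phi'$ returns the $4\cdot3-3=9$ values $(t_{1},t_{11},t_{2},t_{22},t_{12},t_{3},t_{13},t_{23},s_{123})$, and its invariance under simultaneous conjugation can be checked directly. A Python sketch (the helper names are ours):

```python
def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

def tr(A):
    return A[0][0] + A[1][1]

def phi_prime(A1, A2, A3):
    # (t_1, t_11, t_2, t_22, t_12, t_3, t_13, t_23, s_123)
    return (
        tr(A1), tr(mul(A1, A1)),
        tr(A2), tr(mul(A2, A2)), tr(mul(A1, A2)),
        tr(A3), tr(mul(A1, A3)), tr(mul(A2, A3)),
        tr(mul(mul(A1, A2), A3)) - tr(mul(mul(A3, A2), A1)),
    )

def conjugate(g, g_inv, A):
    return mul(mul(g, A), g_inv)

seq = (((1, 2), (3, 4)), ((0, 1), (1, 1)), ((2, 0), (1, -1)))
g, g_inv = ((1, 1), (0, 1)), ((1, -1), (0, 1))  # unimodular, exact inverse
conj_seq = tuple(conjugate(g, g_inv, X) for X in seq)
assert phi_prime(*seq) == phi_prime(*conj_seq)
```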
\begin{comment} The case when some pairs of matrices commute follows easily (see Theorems \ref{thm:canonical} and ...). \end{comment} {}Appendix A contains results on the triangularization of a single $2\times2$ matrix over $R$, which are crucial for Theorem \ref{thm:Flo}, and Appendix B, for completeness, describes the well-known canonical forms of commuting matrices over $\bar{\mathbb{F}}$.
\begin{comment} We also describe the parametrization of orbits for any sequence falling in any of the types of canonical forms.
Finally, in section 6 we discuss the same problems for traceless matrices and for matrices belonging to the groups $SL(2,\mathbb{F}$) and $PSL(2,\mathbb{F})$. \end{comment} {}
\begin{acknowledgement*} I would like to thank J. Dias da Silva whose interesting questions were at the genesis of this article, and R. Guralnick for calling my attention to related work in the literature and for providing simplified proofs of some statements in a previous version. I thank also my colleagues J. Mourão, J. P. Nunes and S. Lawton for many interesting and motivating conversations on this and related topics. Work partially supported by the CAMGSD, IST, Technical Univ. of Lisbon, and {}``Fundação para a Ciência e a Tecnologia'' through the programs Praxis XXI, POCI/MAT/58549/2004 and FEDER. Some computations performed using CoCoA; document typeset using \LyX{}. \end{acknowledgement*}
\begin{comment} \emph{Comment on terminology.} The results of this article concern properties of (unordered) sets of matrices. However, in order to have complete control over the notation and, in particular for the statement of Theorem \ref{thm:simsim}, we have chosen to work with ordered sets and hence the use of the term 'sequence' with which this statement has its most appealing formulation. On the other hand, the results on canonical forms are better stated in terms of sets, so we will also use the term 'set', but to avoid changing the notation, we will use the same letters to denote a sequence and its underlying set. \end{comment} {} \begin{comment} and use, for matrix sequences, operations defined on sets and on sequences. \end{comment} {}
\section{Simultaneous Triangularization\label{sec:Triangularization}}
Throughout the article, $R$ will stand for an integral domain, $\mathbb{F}$ for a field and $\bar{\mathbb{F}}$ for an algebraically closed field. $V_{n}$ (resp. $G$) will denote the space of matrix sequences of length $n\in\mathbb{N}\cup\left\{ \infty\right\} $ (resp. the group of invertible $2\times2$ matrices) over the appropriate ring or field. When the coefficients need to be explicitly mentioned, we will use the notations $V_{n}(R),$ $G(R)$, etc. Sequences of length $1,2,3$ and $4$ will be called \emph{singlets}, \emph{pairs}, \emph{triples} and \emph{quadruples}, respectively.
\subsection{Simultaneous triangularization and subtriples; reduced sequences}
We start by fixing notation and recalling some well-known facts about matrices over $\mathbb{F}$ and $R$. After this, we define \emph{reduced sequences}, a notion which will be fundamental in the sequel.
For a given $\mathcal{A}\in V_{n}(R)$, a matrix sequence of length $n\in\mathbb{N}$, we will use the notation\begin{eqnarray*} \mathcal{A}=(A_{1},...,A_{n}) & = & \left(\begin{array}{cc} a & b\\ c & d\end{array}\right),\quad a,b,c,d\in R^{n},\\ A_{j} & = & \left(\begin{array}{cc} a_{j} & b_{j}\\ c_{j} & d_{j}\end{array}\right),\end{eqnarray*} and we let $e=(e_{1},...,e_{n})$ denote the $n$-tuple $a-d\in R^{n}$. To avoid the most trivial case, we consider only matrix sequences with at least one non-scalar term.
\begin{comment}
Note that the discriminant of (the characteristic polynomial of) $A_{j}$, defined by $\delta_{j}:=\mathsf{tr}^{2}A_{j}-4\det A_{j}$ is then given by\[ \delta_{j}=e_{j}^{2}+4b_{j}c_{j}.\] So, a non-scalar upper triangular matrix $A_{j}$ is diagonalizable if and only if $e_{j}\neq0$. \end{comment} {}The commutator of 2 matrices $A_{1}$ and $A_{2}$ is given by \begin{equation} [A_{1},A_{2}]=\left(\begin{array}{cc} b_{1}c_{2}-c_{1}b_{2} & e_{1}b_{2}-b_{1}e_{2}\\ c_{1}e_{2}-e_{1}c_{2} & c_{1}b_{2}-b_{1}c_{2}\end{array}\right).\label{eq:comuta}\end{equation} For later use, record the following straightforward but useful lemma.
\begin{lem} \label{lem:comm} Let $\mathcal{A}=(A_{1},A_{2})\in V_{2}(R)$ and $A_{1}$ be a non-scalar matrix. If $A_{1}$ is upper triangular, then $[A_{1},A_{2}]=0$ if and only if $A_{2}$ is also upper triangular and \begin{equation} b_{1}e_{2}-e_{1}b_{2}=0.\label{eq:comm}\end{equation} Similarly, let $A_{1}$ be diagonal non-scalar. Then $[A_{1},A_{2}]=0$ if and only if $A_{2}$ is also diagonal. \end{lem} \begin{proof} Suppose $[A_{1},A_{2}]=0$ with $c_{1}=0$. Then we have $b_{1}e_{2}-e_{1}b_{2}=b_{1}c_{2}=e_{1}c_{2}=0$, using (\ref{eq:comuta}). Since $A_{1}$ is non-scalar either $e_{1}$ or $b_{1}$ is non-zero. In an integral domain, this implies $b_{1}e_{2}-e_{1}b_{2}=c_{2}=0$. The other statement is similar. \end{proof} This implies the following well known result. Note that a $2\times2$ matrix is non-scalar if and only if it is nonderogatory.
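The commutator formula (\ref{eq:comuta}) is easy to sanity-check mechanically. The following sketch (not part of the argument; the helper names are ours) compares it with a direct computation on random integer matrices, writing matrices as lists of rows:

```python
# Sanity check of the commutator formula (eq:comuta): matrices are lists
# of rows with exact integer entries, and e_j = a_j - d_j as in the text.
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

rng = random.Random(0)
for _ in range(200):
    a1, b1, c1, d1, a2, b2, c2, d2 = (rng.randint(-9, 9) for _ in range(8))
    A1, A2 = [[a1, b1], [c1, d1]], [[a2, b2], [c2, d2]]
    e1, e2 = a1 - d1, a2 - d2
    assert commutator(A1, A2) == [[b1*c2 - c1*b2, e1*b2 - b1*e2],
                                  [c1*e2 - e1*c2, c1*b2 - b1*c2]]
print("formula (eq:comuta) verified on 200 random pairs")
```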
\begin{prop} \label{cor:comm}Let $\mathcal{A}$ be a commutative matrix sequence (i.e., all terms pairwise commute) of finite or infinite length over an integral domain $R$. Then $\mathcal{A}$ is triangularizable if and only if one of its non-scalar terms is triangularizable. Similarly, $\mathcal{A}$ is diagonalizable if and only if one of its non-scalar terms is diagonalizable.
{}$\square$ \end{prop} As a consequence, the proof of Proposition \ref{pro:leq3} will be complete after the following result. Let us denote by $\mathbb{A}$ the algebra generated over $\mathbb{F}$, the field of fractions of $R$, by the terms of $\mathcal{A}$. It is well known that $\mathcal{A}$ is commutative if and only if the dimension of $\mathbb{A}$ is $\leq2$.
\begin{prop} Let $n\geq2$ and $\mathcal{A}\in V_{n}(R)$ be a non-commutative matrix sequence. Then $\mathcal{A}$ is triangularizable if and only if all subsequences of $\mathcal{A}$ of length $\leq3$ are triangularizable. \end{prop} \begin{proof} If $\mathcal{A}$ is triangularizable, it is clear that all subsequences of $\mathcal{A}$ are also triangularizable. Conversely, since $\mathcal{A}$ is non-commutative, we can assume, without loss of generality, that $[A_{1},A_{2}]\neq0$, and that, after a suitable conjugation, both $A_{1}$ and $A_{2}$ are upper triangular. By hypothesis, all triples of the form $(A_{1},A_{2},A_{k})$, $k=3,...,n$, are triangularizable. Then, the algebra $\mathbb{A}_{k}$ generated by $\mathcal{A}_{k}=(A_{1},A_{2},A_{k})$ equals the one generated by $(A_{1},A_{2})$, since one is a subset of the other but both are three-dimensional over $\mathbb{F}$. Indeed, if $\mathbb{A}_{k}$ were of dimension $\leq2$, $\mathcal{A}_{k}$ would be commutative, and if $\mathbb{A}_{k}$ were four-dimensional, $\mathcal{A}_{k}$ would not be triangularizable. Therefore, for all $j=3,...,n$, $A_{j}=p_{j}(A_{1},A_{2})$ for some polynomial $p_{j}$ with coefficients in $\mathbb{F}$ and thus, $A_{j}$ is also upper triangular, for all $j$. \end{proof} Recall also the following.
\begin{lem} \label{lem:non-der}Let $\mathcal{A}=(A_{1},A_{2})\in V_{2}(\mathbb{F})$ be commutative and $A_{1}$ be a non-scalar matrix. Then $A_{2}=p(A_{1})$ for some degree 1 polynomial $p(x)\in\mathbb{F}[x]$. \end{lem} \begin{proof} From Equation (\ref{eq:comuta}), the conditions $[A_{1},A_{2}]=0$ can be written as $Mu=0$, where \begin{equation} M=\left(\begin{array}{rrr} 0 & c_{1} & -e_{1}\\ -c_{1} & 0 & b_{1}\\ e_{1} & -b_{1} & 0\end{array}\right),\quad\quad u=(b_{2},e_{2},c_{2}).\label{eq:comuta2}\end{equation} Note that $M$ has rank exactly 2, since $A_{1}$ is non-scalar. So, the vector $u=(b_{2},e_{2},c_{2})$ is in the nullspace of $M$, which is generated by $(b_{1},e_{1},c_{1})\neq0$. Thus, there is an $\alpha\in\mathbb{F}$ such that $(b_{2},e_{2},c_{2})=\alpha(b_{1},e_{1},c_{1})$. Then, $A_{2}=(d_{2}-\alpha d_{1})I+\alpha A_{1}$ is, explicitly, the required polynomial ($I$ denotes the identity $2\times2$ matrix). \end{proof}
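The proof of Lemma \ref{lem:non-der} is effective: $\alpha$ can be read off from the proportionality $(b_{2},e_{2},c_{2})=\alpha(b_{1},e_{1},c_{1})$. A minimal sketch over $\mathbb{Q}$ (the function name and the sample pair are our own):

```python
# Sketch of Lemma (lem:non-der) over Q: for a commuting pair with A1
# non-scalar, (b2, e2, c2) = alpha (b1, e1, c1), and then
# A2 = (d2 - alpha*d1) I + alpha A1.
from fractions import Fraction

def degree_one_poly(A1, A2):
    """Return (beta, alpha) with A2 = beta*I + alpha*A1, assuming
    [A1, A2] = 0 and A1 non-scalar (integer entries)."""
    b1, e1, c1, d1 = A1[0][1], A1[0][0] - A1[1][1], A1[1][0], A1[1][1]
    b2, e2, c2, d2 = A2[0][1], A2[0][0] - A2[1][1], A2[1][0], A2[1][1]
    # (b1, e1, c1) != (0, 0, 0) since A1 is non-scalar.
    for x1, x2 in ((b1, b2), (e1, e2), (c1, c2)):
        if x1 != 0:
            alpha = Fraction(x2, x1)
            break
    return d2 - alpha * d1, alpha

A1 = [[1, 2], [3, 4]]
A2 = [[7, 4], [6, 13]]        # = 5*I + 2*A1, hence commutes with A1
beta, alpha = degree_one_poly(A1, A2)
print(beta, alpha)            # prints: 5 2
```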
\begin{comment} In particular, every commutative matrix sequence defined over an algebraically closed field of characteristic $\neq$ 2 is triangularizable. \end{comment} {}It is clear that a single matrix is triangularizable over an algebraically closed field, but not necessarily so over a general integral domain or field. In Appendix A we include a short account of the conditions for triangularization of a single matrix in $V_{1}(R)$. The following notion will play a central role. If $\mathcal{A}$ is a subsequence of $\mathcal{B}$, we will write $\mathcal{A}\subseteq\mathcal{B}$ \begin{comment} ; we will use $\mathcal{A}\varsubsetneq\mathcal{B}$ when $\mathcal{A}\subseteq\mathcal{B}$ and the length of $\mathcal{A}$ is strictly smaller than that of $\mathcal{B}$ \end{comment} {}.
\begin{defn} \label{def:reduced}A matrix sequence $\mathcal{A}=(A_{1},...)\in V_{n}(R)$ with at least one non-scalar term is called \emph{reduced} if there are no commuting pairs among its terms, that is, $[A_{j},A_{k}]\neq0$ for all $1\leq j<k\leq n$. We say that $\mathcal{A}$ is a \emph{reduction} of $\mathcal{B}$ if $\mathcal{A}$ is reduced and is obtained from $\mathcal{B}$ by deleting some of its terms. Finally, we say that $\mathcal{A}$ is a \emph{maximal reduction} of $\mathcal{B}$, and that $l$ is its \emph{reduced length}, if $\mathcal{A}$ is a reduction of $\mathcal{B}$ of length $l$, and any subsequence $\mathcal{A}'\subseteq\mathcal{B}$ with length $>l$ is not reduced. \end{defn} Over a field, by Lemma \ref{lem:non-der}, a reduced sequence $\mathcal{A}$ is one where no term is a polynomial function of another (so all terms generate an algebra of dimension $2$, and no two terms generate the same subalgebra of the full matrix algebra). It is clear that any two maximal reductions have the same length. Note also that any subsequence of a reduced sequence is also reduced. The following facts show that important properties, such as the existence of a triangularization, are captured by any maximal reduction of a matrix sequence.
\begin{prop} \label{pro:reduce} Let $\mathcal{A}=(A_{1},...,A_{n})\in V_{n}(R)$ and let $A_{n+1}$ commute with at least one non-scalar term of $\mathcal{A}$. Then $\mathcal{A}$ is triangularizable if and only if $\mathcal{A}':=(A_{1},...,A_{n},A_{n+1})$ is triangularizable. \end{prop} \begin{proof} Naturally if $\mathcal{A}'$ is triangularizable, $\mathcal{A}$ is also. For the converse, without loss of generality assume $[A_{1},A_{n+1}]=0$ with $A_{1}$ non-scalar and $\mathcal{A}$ in upper triangular form. Then, by Lemma \ref{lem:comm}, $A_{n+1}$ is also upper triangular, so that $\mathcal{A}'$ is triangularizable. \end{proof} \begin{cor} \label{cor:reduce}If $\mathcal{A}$ and $\mathcal{A}'$ are arbitrary sequences with a common maximal reduction (of length $\geq1$), then either they are both triangularizable or both not triangularizable. \end{cor} \begin{proof} Let $\mathcal{B}$ be such a common maximal reduction. Then $\mathcal{A}$ and $\mathcal{A}'$ are obtained from $\mathcal{B}$ by adding scalar matrices or matrices that commute with some of the non-scalar terms of $\mathcal{B}$. So, if $\mathcal{B}$ is triangularizable, both $\mathcal{A}$ and $\mathcal{A}'$ are triangularizable by repeatedly applying Proposition \ref{pro:reduce}. The case when $\mathcal{B}$ is not triangularizable is analogous. \end{proof}
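A maximal reduction is also easy to compute. Over a field (or via the field of fractions, as above), commuting is an equivalence relation on the non-scalar terms, since each non-scalar $2\times2$ matrix is nonderogatory; so it suffices to drop scalar terms and keep one representative of each commuting class. A sketch (function names ours):

```python
# Greedy computation of a maximal reduction: keep a non-scalar term iff
# it commutes with none of the terms already kept. This is maximal since
# commuting is an equivalence relation on non-scalar 2x2 matrices.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutes(A, B):
    return mul(A, B) == mul(B, A)

def is_scalar(A):
    return A[0][1] == A[1][0] == 0 and A[0][0] == A[1][1]

def maximal_reduction(seq):
    reduction = []
    for A in seq:
        if not is_scalar(A) and all(not commutes(A, B) for B in reduction):
            reduction.append(A)
    return reduction

A = [[1, 0], [0, 0]]             # diagonal, non-scalar
B = [[2, 0], [0, 1]]             # commutes with A
C = [[0, 1], [0, 0]]             # commutes with neither A nor B
I = [[1, 0], [0, 1]]             # scalar: always dropped
print(len(maximal_reduction([A, I, B, C])))   # reduced length: 2
```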
\subsection{Necessary conditions for triangularization via invariants}
\begin{comment} , the following irreducible algebraic subset of $V_{2}$ plays an important role\begin{equation} W=\left\{ (A_{1},A_{2})\in V_{2}:\mathsf{det}([A_{1},A_{2}])=0\right\} .\label{s12}\end{equation}
Over a field $\mathbb{F}$, the geometric relevance of $W$ comes from the fact that if $(A_{1},A_{2})$ does not belong to $W$, then its $GL_{2}(\mathbb{F})$ orbit is uniquely determined by $\mathsf{tr}(A_{1}),\mathsf{tr}(A_{2}),\mathsf{tr}(A_{1}^{2}),\mathsf{tr}(A_{2}^{2}),\mathsf{tr}(A_{1}A_{2})$ (\emph{\cite{Fr}}). Moreover, the following is not difficult to prove.
Note that since the trace of a commutator is zero, for $2\times2$ matrices $\mathsf{det}[A,B]=0$ if and only if $[A,B]$ is nilpotent. Therefore, this Proposition fits with McCoy's Theorem. To generalize this result to higher length $n$, let us consider an integral domain $R$ and let $M_{2}(R)$ the space of $2\times2$ matrices with entries in $R$. \end{comment} {}We continue to work over an integral domain $R$. Define the following important $GL_{2}(R)$-invariant functions. For a matrix $A\in V_{1}$, let $\delta_{A}$ denote the discriminant of its characteristic polynomial, that is $\delta_{A}=\mathsf{tr}^{2}A-4\mathsf{det} A$. \begin{comment} \begin{defn} Let $\mathcal{A}$ be a $m\times m$ matrix sequence of length $n$. We will say that $\mathcal{A}$ is \emph{scalar} if all its terms are scalars and non-scalar otherwise. $\mathcal{A}$ will be called \emph{diagonal} (resp. \emph{diagonalizable}) if it is both upper triangular and symmetric (resp. both triangularizable and symmetrizable). \end{defn}
\end{comment} {}
\begin{defn} \label{def:tau,sigma}Let $\tau,\sigma:V_{2}(R)\to R$ and $\Delta:V_{3}(R)\to R$ be defined by \begin{eqnarray*} \tau(A,B) & := & 2\mathsf{tr}(AB)-\mathsf{tr}(A)\mathsf{tr}(B),\\ \sigma(A,B) & := & \mathsf{det}(AB-BA)\\ \Delta(A,B,C) & := & \left(\mathsf{tr}(ABC-CBA)\right)^{2}.\end{eqnarray*}
\end{defn} When a matrix sequence is written as $\mathcal{A}=(A_{1},...,A_{n})\in V_{n}(R)$ we will also use \begin{eqnarray*} \tau_{jk} & = & \tau_{jk}(\mathcal{A})=\tau(A_{j},A_{k})\\ \sigma_{jk} & = & \sigma_{jk}(\mathcal{A})=\sigma(A_{j},A_{k})\\ \Delta_{jkl} & = & \Delta_{jkl}(\mathcal{A})=\Delta(A_{j},A_{k},A_{l}),\end{eqnarray*} for any indices $j,k,l\in\{1,...,n\}$. By simple computations, we can express these functions in terms of $b_{j},c_{j},e_{j}$ as follows.\begin{eqnarray} \tau_{jk} & = & e_{j}e_{k}+2b_{j}c_{k}+2c_{j}b_{k}\nonumber \\ \sigma_{jk} & = & \left(b_{j}e_{k}-e_{j}b_{k}\right)\left(c_{j}e_{k}-e_{j}c_{k}\right)-\left(b_{j}c_{k}-c_{j}b_{k}\right)^{2}\label{eq:explicit}\\
\Delta_{jkl} & = & \left|\begin{array}{ccc} b_{j} & b_{k} & b_{l}\\ e_{j} & e_{k} & e_{l}\\
c_{j} & c_{k} & c_{l}\end{array}\right|^{2}.\label{eq:explicitDelta}\end{eqnarray}
Note that $\tau,\sigma$ and $\Delta$ are symmetric under any permutation of the matrices/indices, but $\sigma$ and $\Delta$ vanish when two of the matrices/indices coincide. Since the above expressions do not depend explicitly on the variables $a_{j}$ or $d_{j}$ but only on the difference $e_{j}=a_{j}-d_{j}$, the functions $\tau,\sigma$ and $\Delta$ are invariant under translation of any argument by a scalar matrix, that is, for any matrices $A,B$ and scalar matrices $\lambda,\mu$, we have $\tau(A+\lambda,B+\mu)=\tau(A,B)$ and similarly for $\sigma$ and $\Delta$.
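The explicit formulas (\ref{eq:explicit})--(\ref{eq:explicitDelta}), as well as the relations recorded in Remark \ref{rem:relations} below, can be verified mechanically. The following self-contained sketch (not part of the argument; all helper names are ours) checks them in exact integer arithmetic on random matrices, with the relations written without divisions (e.g. $4\sigma(A,B)=\tau(A,A)\tau(B,B)-\tau(A,B)^{2}$):

```python
# Exact check of (eq:explicit), (eq:explicitDelta) and the relations of
# Remark (rem:relations) on random integer 2x2 matrices.
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def tau(A, B):
    return 2 * tr(mul(A, B)) - tr(A) * tr(B)

def sigma(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return det2([[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)])

def delta(A, B, C):
    return (tr(mul(mul(A, B), C)) - tr(mul(mul(C, B), A))) ** 2

def bec(A):                      # coordinates (b, e, c) with e = a - d
    return A[0][1], A[0][0] - A[1][1], A[1][0]

def check(trials=100, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        A, B, C = ([[rng.randint(-9, 9) for _ in range(2)] for _ in range(2)]
                   for _ in range(3))
        (bj, ej, cj), (bk, ek, ck), (bl, el, cl) = bec(A), bec(B), bec(C)
        # explicit formulas (eq:explicit) and (eq:explicitDelta)
        assert tau(A, B) == ej*ek + 2*bj*ck + 2*cj*bk
        assert sigma(A, B) == (bj*ek - ej*bk)*(cj*ek - ej*ck) - (bj*ck - cj*bk)**2
        assert delta(A, B, C) == det3([[bj, bk, bl],
                                       [ej, ek, el],
                                       [cj, ck, cl]]) ** 2
        # relations of Remark (rem:relations), multiplied through by 4
        assert tau(A, A) == tr(A)**2 - 4*det2(A)
        assert 4*sigma(A, B) == tau(A, A)*tau(B, B) - tau(A, B)**2
        gram = [[tau(X, Y) for Y in (A, B, C)] for X in (A, B, C)]
        assert 4*delta(A, B, C) == -det3(gram)
    return True

print(check())   # True
```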
\begin{rem} \label{rem:relations}Note that these are essentially the same functions used in Drensky's theorem \ref{thm:Drensky} \cite{D}. They also coincide with the functions used in \cite{Fl}, up to a constant factor. There are some interesting relations between these invariants which are obtained from simple calculations. In particular, we have\begin{eqnarray} \tau(A,A) & = & \delta_{A}=\mathsf{tr}^{2}A-4\mathsf{det} A\nonumber \\ \sigma(A,B) & = & \mathsf{tr}(A[A,B]B)=\frac{1}{4}\left(\tau(A,A)\tau(B,B)-\tau(A,B)^{2}\right)\label{eq:sigma}\\
\Delta(A,B,C) & = & -\frac{1}{4}\left|\begin{array}{ccc} \tau(A,A) & \tau(A,B) & \tau(A,C)\\ \tau(B,A) & \tau(B,B) & \tau(B,C)\\
\tau(C,A) & \tau(C,B) & \tau(C,C)\end{array}\right|,\nonumber \end{eqnarray} for all matrices $A,B,C$ over $R$, in agreement with Equation (\ref{eq:relations}). \begin{comment} The first equation follows from the Cayley-Hamilton formula $2\mathsf{det} A-\mathsf{tr}^{2}A+\mathsf{tr} A^{2}=0$, and the second is a simple computation using (\ref{eq:comuta}) and (\ref{eq:explicit}). \end{comment} {} \end{rem} The following is a simple necessary condition for triangularization.
\begin{prop} \label{pro:sigma=00003D3D0}Let $\mathcal{A}\in V_{n}$ be a triangularizable sequence. Then $\sigma(A,B)$ and $\Delta(A,B,C)$ vanish for all terms $A,B,C$ of $\mathcal{A}$. \end{prop} \begin{proof} Since $\sigma$ and $\Delta$ are $G$-invariant, we can assume that $\mathcal{A}$ is upper triangular. By direct computation, $\sigma(A,B)=\mathsf{det}([A,B])=0$, and $\Delta(A,B,C)=(\mathsf{tr}(ABC-CBA))^{2}=0$. \end{proof} Note that the vanishing of all $\sigma_{jk}=\sigma(A_{j},A_{k})$ is not sufficient for $\mathcal{A}$ to be triangularizable, as the following important example shows. We adopt the usual convention that blank matrix entries stand for zero entries (in this case $0\in R$).
\begin{example} \label{exa:example}Let $\mathcal{A}=(A_{1},A_{2},A_{3})\in V_{3}$ have the form\[ A_{1}=\left(\begin{array}{cc} a_{1}\\
& d_{1}\end{array}\right),\quad A_{2}=\left(\begin{array}{cc} a_{2} & b_{2}\\
& d_{2}\end{array}\right),\quad A_{3}=\left(\begin{array}{cc} a_{3}\\ c_{3} & d_{3}\end{array}\right),\] for some $a_{1},...,d_{3}\in R.$ Then $\sigma_{12}=\sigma_{13}=0$ and $\sigma_{23}=-b_{2}c_{3}(e_{2}e_{3}+b_{2}c_{3})$. Assume that $e_{2}e_{3}+b_{2}c_{3}=0$ and that $e_{1}b_{2}c_{3}\neq0$, so that all $\sigma_{jk}$ vanish, neither $A_{2}$ nor $A_{3}$ is diagonal, and (since these assumptions imply $e_{2}e_{3}\neq0$) all three matrices have distinct eigenvalues. So, in this case, all subsequences of length $\leq2$ are triangularizable, but the next Proposition will show that $\mathcal{A}$ is not triangularizable. \end{example} \begin{prop} \label{pro:ReducedTriple}As in Example \ref{exa:example}, let $\mathcal{A}=(A_{1},A_{2},A_{3})\in V_{3}$ be a triple of the form\begin{equation} A_{1}=\left(\begin{array}{cc} a_{1}\\
& d_{1}\end{array}\right),\quad A_{2}=\left(\begin{array}{cc} a_{2} & b_{2}\\
& d_{2}\end{array}\right),\quad A_{3}=\left(\begin{array}{cc} a_{3}\\ c_{3} & d_{3}\end{array}\right),\label{eq:form}\end{equation} with $e_{2}e_{3}\neq0$. Then, the following are equivalent.
(i) $\mathcal{A}$ is reduced, (ii) $\mathcal{A}$ is not triangularizable, (iii) $\Delta_{123}\neq0$ (i.e., $e_{1}b_{2}c_{3}\neq0$). \end{prop} \begin{proof} (i) or (ii) imply (iii): If $e_{1}b_{2}c_{3}=0$, then at least one of the factors is zero. In each case, $A_{1}$ is a scalar, $\mathcal{A}$ is lower triangular, or $\mathcal{A}$ is upper triangular, respectively, so $\mathcal{A}$ is triangularizable and is not reduced, since $A_{1}$ commutes with one or both of the other terms. (iii) implies (ii) and (i): Suppose that $e_{1}b_{2}c_{3}\neq0$. Then, none of the three numbers is zero. Let $g$ be an $SL_{2}(R)$ matrix with columns $(x,y)$ and $(z,w)$. Then\begin{eqnarray} gA_{1}g^{-1} & = & \left(\begin{array}{cc} * & -xze_{1}\\ ywe_{1} & *\end{array}\right)\nonumber \\ gA_{2}g^{-1} & = & \left(\begin{array}{cc} * & x(xb_{2}-ze_{2})\\ y(we_{2}-yb_{2}) & *\end{array}\right)\label{eq:BT}\\ gA_{3}g^{-1} & = & \left(\begin{array}{cc} * & -z(xe_{3}+zc_{3})\\ w(ye_{3}+wc_{3}) & *\end{array}\right),\nonumber \end{eqnarray} from which it follows that there is no $g\in G$ that will make $g\cdot\mathcal{A}$ upper or lower triangular, so $\mathcal{A}$ is not triangularizable. Also, by Lemma \ref{lem:comm}, none of the three commutators between the pairs vanishes, so $\mathcal{A}$ is reduced. \end{proof}
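A concrete integer instance of Example \ref{exa:example} may be instructive. The triple below (our own choice) satisfies $e_{2}e_{3}+b_{2}c_{3}=0$ and $e_{1}b_{2}c_{3}\neq0$, so all three $\sigma_{jk}$ vanish while $\Delta_{123}=(e_{1}b_{2}c_{3})^{2}\neq0$:

```python
# A concrete instance of Example (exa:example): e2*e3 + b2*c3 = 0 and
# e1*b2*c3 != 0, so all sigma_jk vanish while Delta_123 != 0.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):
    return A[0][0] + A[1][1]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def sigma(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return det2([[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)])

def delta3(A, B, C):
    return (tr(mul(mul(A, B), C)) - tr(mul(mul(C, B), A))) ** 2

A1 = [[1, 0], [0, 0]]            # diagonal, e1 = 1
A2 = [[1, 1], [0, -1]]           # upper triangular, e2 = 2, b2 = 1
A3 = [[1, 0], [-2, 0]]           # lower triangular, e3 = 1, c3 = -2
e1, e2, e3, b2, c3 = 1, 2, 1, 1, -2
assert e2 * e3 + b2 * c3 == 0 and e1 * b2 * c3 != 0
print(sigma(A1, A2), sigma(A1, A3), sigma(A2, A3), delta3(A1, A2, A3))
# prints: 0 0 0 4
```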
\subsection{Numerical criteria for triangularization\label{sec:criteria}}
A simple necessary and sufficient numerical condition for triangularization of a pair of $2\times2$ matrices over an algebraically closed field was given in the article \cite{Fr}, which also describes the similarity classes of pairs of $m\times m$ matrices in great generality.
\begin{prop} \cite{Fr}\label{pro:Fr} A pair $(A_{1},A_{2})\in V_{2}(\bar{\mathbb{F}})$ is triangularizable if and only if $\sigma_{12}=\mathsf{det}[A_{1},A_{2}]=0$. \end{prop} Note that Friedland writes the condition $\sigma_{12}=0$ in a different, but equivalent form (see Equation (\ref{eq:sigma}) in Remark \ref{rem:relations}). The generalization to integral domains is as follows.
\begin{thm} \label{thm: pair}A pair $\mathcal{A}=(A_{1},A_{2})\in V_{2}(R)$ is triangularizable if and only if both $A_{1}$ and $A_{2}$ are triangularizable and $\sigma_{12}=\mathsf{det}[A_{1},A_{2}]=0$. \end{thm} \begin{proof} The conditions are clearly necessary. For the converse, let us suppose that both $A_{1}$ and $A_{2}$ are triangularizable and $\mathsf{det}[A_{1},A_{2}]=0$. Then, as the determinant is $GL_{2}(R)$-invariant, we can assume $A_{1}$ upper triangular ($c_{1}=0$). If $[A_{1},A_{2}]=0$ the pair $(A_{1},A_{2})$ is triangularizable by Proposition \ref{cor:comm}, so we can assume that $[A_{1},A_{2}]\neq0$. Equation (\ref{eq:comuta}) shows that \begin{equation} 0=\mathsf{det}[A_{1},A_{2}]=-b_{1}^{2}c_{2}^{2}+e_{1}c_{2}(e_{1}b_{2}-b_{1}e_{2})=-c_{2}(b_{1}^{2}c_{2}+e_{1}b_{1}e_{2}-e_{1}^{2}b_{2}).\label{eq:up}\end{equation} If $c_{2}=0$, $\mathcal{A}$ is triangularizable. If not, $c_{2}\neq0$ and we distinguish four cases. (i) If $e_{1}=0$, then $b_{1}\neq0$ (as $A_{1}$ is non-scalar) and equation (\ref{eq:up}) reduces to $0=-b_{1}^{2}c_{2}^{2}$, contradicting $c_{2}\neq0$. (ii) If $b_{1}=0$, then $e_{1}\neq0$, and equation (\ref{eq:up}) implies $b_{2}=0$ and $\mathcal{A}$ is lower triangular. (iii) Suppose now $e_{1}b_{2}=b_{1}e_{2}$. Then $0=\mathsf{det}[A_{1},A_{2}]=-c_{2}^{2}b_{1}^{2}$ and so $b_{1}=0$ which reduces to the previous case.
Finally, consider the general case (iv) with $\delta_{12}=b_{1}e_{2}-e_{1}b_{2}\neq0$ and non-zero $b_{1}$ and $e_{1}$. So, we are assuming $c_{2}e_{2}b_{2}\neq0$. From equation (\ref{eq:up}), the quadratic equation $Q_{2}(x,y)\equiv c_{2}x^{2}-e_{2}xy-b_{2}y^{2}=0$ associated to $A_{2}$ (see Appendix A) has a non-trivial solution: $(b_{1},-e_{1})\in R^{2}$. Suppose that $A_{2}$ is nondegenerate ($\delta_{A}\neq0$). Then, by Lemma \ref{lem:eigenvector}, there are $z',w'$ in $\mathbb{F}$, the field of fractions of $R$, such that $w'b_{1}+z'e_{1}\neq0$ and the eigenvectors of $A_{2}$ are multiples of $(b_{1},-e_{1})$ and of $(z',w')$. \begin{comment}
Since $A_{2}$ is triangularizable, the square roots $r=\pm(w_{1}x-z_{1}y)$ of $\delta_{A}$ must be in $R$, so that both eigenvalues of $A_{2}$ are also in $R$. \end{comment} {} So, we choose an eigenvector of $A_{2}$ of the form $(z'',w'')\in R^{2}$ collinear with $(z',w')\in\mathbb{F}^{2}$. Moreover, by Proposition \ref{pro:principal}, there are $\alpha,\beta\in\mathbb{F}^{*}=\mathbb{F}\setminus\left\{ 0\right\} $ so that the eigenvectors $(x,y)=\alpha(b_{1},-e_{1})$ and $(z,w)=\beta(z'',w'')$ satisfy either $xR+yR=R$ or $zR+wR=R$. If the first alternative holds, let $xq-yp=1$ for some $p,q\in R$ (note that $(p,q)$ is not necessarily an eigenvector of $A_{2}$) and put:\[ g=\left(\begin{array}{cc} x & p\\ y & q\end{array}\right).\] Then, conjugating by $g^{-1}$ gives\begin{eqnarray*} g^{-1}A_{1}g & = & \left(\begin{array}{cc} * & *\\ -e_{1}yx-b_{1}y^{2} & *\end{array}\right)=\left(\begin{array}{cc} * & *\\ \alpha y\left(b_{1}e_{1}-e_{1}b_{1}\right) & *\end{array}\right)=\left(\begin{array}{cc} * & *\\ 0 & *\end{array}\right)\\ g^{-1}A_{2}g & = & \left(\begin{array}{cc} * & *\\ c_{2}x^{2}-e_{2}xy-b_{2}y^{2} & *\end{array}\right)=\left(\begin{array}{cc} * & *\\ \alpha^{2}Q_{2}(b_{1},-e_{1}) & *\end{array}\right)=\left(\begin{array}{cc} * & *\\ 0 & *\end{array}\right),\end{eqnarray*} so $\mathcal{A}$ is again triangularizable. If the second alternative holds, we do the same interchanging the roles of $(x,y)$ and $(z,w)$. Finally, suppose that $A_{2}$ is degenerate ($\delta_{A}=0$). Then, there is only one eigenvector of $A_{2}$ and the solutions to $Q_{2}(x,y)=0$ form a single line through the origin in $\mathbb{F}^{2}$, so all solutions $(x,y)\in R^{2}$ are multiples of $(b_{1},-e_{1})\in R^{2}$. Since $A_{2}$ is triangularizable, by Proposition \ref{pro:principal}, we can choose $(x,y)$ so that $xR+yR=R$ and we proceed as before. 
\end{proof} \begin{example} Over the integral domain $R=\mathbb{C}[u,v]$, consider the pair\[ A_{1}=\left(\begin{array}{cc} -v & u\\ 0 & 0\end{array}\right),\quad\quad A_{2}=\left(\begin{array}{cc} uv & u^{2}\\ 2v^{2} & 0\end{array}\right).\] Then we have $A_{1}$ upper triangular and $\sigma_{12}=0$ as can be checked, but $A_{2}$ is not triangularizable over $\mathbb{C}[u,v]$, as no eigenvector $(x,y)\in R^{2}$ satisfies $xR+yR=R$. So, $\sigma_{12}=0$ and $A_{1}$ is triangularizable, but the pair $(A_{1},A_{2})$ is not. \end{example} We finally arrive at Theorem \ref{thm:Flo}, which is a converse to Proposition \ref{pro:sigma=00003D3D0}.
\begin{thm} \label{thm:SigmaDelta}A sequence $\mathcal{A}=(A_{1},...,A_{n})\in V_{n}(R)$ is triangularizable if and only if each $A_{j}$ is triangularizable and $\sigma_{jk}=\Delta_{jkl}=0$ for all $1\leq j,k,l\leq n$. \end{thm} \begin{proof} Consider $\mathcal{A}=(A_{1},...,A_{n})$ reduced with $\sigma_{jk}=\Delta_{jkl}=0$ for all $1\leq j,k,l\leq n$. By Theorem \ref{thm: pair} the conditions $\sigma_{jk}=0$ and $A_{j}$ triangularizable mean that all subsequences of $\mathcal{A}$ of length $\leq2$ are triangularizable. So, after a similarity that puts $A_{1}$ and $A_{2}$ in upper triangular form, we can assume $(A_{1},A_{2},A_{3})$ to be in the form\[ A_{1}=\left(\begin{array}{cc} a_{1} & b_{1}\\
& d_{1}\end{array}\right),\quad A_{2}=\left(\begin{array}{cc} a_{2} & b_{2}\\
& d_{2}\end{array}\right),\quad A_{3}=\left(\begin{array}{cc} a_{3} & b_{3}\\ c_{3} & d_{3}\end{array}\right).\] Since $\mathcal{A}$ is reduced, by hypothesis $\delta_{12}=b_{1}e_{2}-e_{1}b_{2}\neq0$. From Equation (\ref{eq:explicitDelta}), we see that $0=\Delta_{123}=(b_{1}e_{2}-e_{1}b_{2})^{2}c_{3}^{2}$, so $c_{3}=0$ which means that $(A_{1},A_{2},A_{3})$ is triangularizable. Repeating the argument for all triples $(A_{j},A_{k},A_{l})$ we see that all subsequences of $\mathcal{A}$ of length $\leq3$ are triangularizable. So $\mathcal{A}$ is triangularizable by Proposition \ref{pro:leq3}. Finally, if $\mathcal{A}$ is not reduced, we consider the above argument for any maximal reduction $\mathcal{B}$, triangularize $\mathcal{B}$, and apply Corollary \ref{cor:reduce}. \end{proof}
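The example over $R=\mathbb{C}[u,v]$ above can also be probed numerically. Of course, point evaluations only give evidence for the polynomial identity $\sigma_{12}=0$ (the symbolic computation is the actual check), but the sketch below (helper names ours) observes $\sigma_{12}=0$ at many integer specializations of $(u,v)$, and confirms that $(u,v)^{T}$ is an eigenvector of $A_{2}$ for the eigenvalue $2uv$; its entries generate the proper ideal $(u,v)\subsetneq R$, which is why $A_{2}$ fails to be triangularizable over $R$:

```python
# Integer specializations of the pair over R = C[u, v]: sigma_12 vanishes,
# and (u, v)^T is an eigenvector of A2 for the eigenvalue 2uv.
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def sigma(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return det2([[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)])

rng = random.Random(0)
for _ in range(100):
    u, v = rng.randint(1, 50), rng.randint(1, 50)
    A1 = [[-v, u], [0, 0]]
    A2 = [[u * v, u * u], [2 * v * v, 0]]
    assert sigma(A1, A2) == 0
    # A2 (u, v)^T = 2uv (u, v)^T
    assert [A2[0][0] * u + A2[0][1] * v,
            A2[1][0] * u + A2[1][1] * v] == [2 * u * v * u, 2 * u * v * v]
print("sigma_12 = 0 and A2 (u,v)^T = 2uv (u,v)^T at 100 specializations")
```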
\subsection{Improved criterion for triangularization}
We now prove an inductive property of reduced sequences that allows us to improve the criterion of Theorem \ref{thm:SigmaDelta}.
\begin{thm} \label{thm:induct} Let $n\geq4$ and $\mathcal{A}=(A_{1},...,A_{n-1})\in V_{n-1}$ be a reduced triangularizable sequence. If $\sigma(A_{j},A_{n})=0$ for some matrix $A_{n}$ and all $j=1,...,n-1$, then $(A_{1},...,A_{n})$ is also triangularizable. \end{thm} \begin{proof} We can suppose that $(A_{1},...,A_{n-1})$ has been conjugated so that it is already an upper triangular matrix sequence. So all the $\sigma_{jk}$ vanish, for indices $j,k$ between $1$ and $n-1$ (by Proposition \ref{pro:sigma=00003D3D0}). To reach a contradiction, assume that $\mathcal{A}'=(A_{1},...,A_{n})$ is not triangularizable so that \[ A_{n}=\left(\begin{array}{cc} a_{n} & b_{n}\\ c_{n} & d_{n}\end{array}\right)\] is non-scalar with $c_{n}\neq0$. Since $\mathcal{A}$ is reduced, none of the $A_{j}$ can be scalar. We can also assume that $A_{n}$ does not commute with any of the $A_{j}$, since otherwise $\mathcal{A}'$ would be triangularizable by Proposition \ref{pro:reduce}. This means that $\delta_{jn}:=b_{j}e_{n}-e_{j}b_{n}\neq0$, for $j=1,...,n-1$, by Lemma \ref{lem:comm}. Using Formula (\ref{eq:explicit}), the $n-1$ conditions $\sigma_{jn}=0$, $j=1,...,n-1$, can be written as (because $c_{n}\neq0$)\[ b_{j}^{2}c_{n}+b_{j}e_{j}e_{n}-e_{j}^{2}b_{n}=0,\quad\textrm{for }j=1,...,n-1.\] Since $A_{n}$ is non-scalar, we are looking for a non-zero solution $u=(b_{n},e_{n},c_{n})\in R^{3}$ to the matrix equation $Bu=0$ where\[ B=\left(\begin{array}{ccc} -e_{1}^{2} & e_{1}b_{1} & b_{1}^{2}\\ \vdots & \vdots & \vdots\\ -e_{n-1}^{2} & e_{n-1}b_{n-1} & b_{n-1}^{2}\end{array}\right).\] A simple computation shows that every $3\times3$ minor of $B$ is of the form $\pm\delta_{jk}\delta_{kl}\delta_{lj}$. Since $\mathcal{A}$ is reduced, all these minors are non-zero (by Lemma \ref{lem:comm}), so there is no non-zero solution $u\in R^{3}$, and we have a contradiction. Hence $c_{n}=0$ and $\mathcal{A}'$ is triangularizable. 
\end{proof} From two finite matrix sequences $\mathcal{A}=(A_{1},...,A_{n_{1}})$ and $\mathcal{B}=(B_{1},...,B_{n_{2}})$ one can form their concatenation $\mathcal{A}\cup\mathcal{B}:=(A_{1},...,A_{n_{1}},B_{1},...,B_{n_{2}})$. The following corollary may be called the \emph{concatenation principle} for triangularizable sequences.
\begin{cor} If $\mathcal{A}\in V_{n_{1}}$ and $\mathcal{B}\in V_{n_{2}}$ are triangularizable matrix sequences and they have a common reduction of length $\geq3$, then their concatenation $\mathcal{A}\cup\mathcal{B}\in V_{n_{1}+n_{2}}$ is also triangularizable. \end{cor} \begin{proof} Let $\mathcal{C}$ be such a common reduction, which we can assume to have length 3. $\mathcal{C}=(C_{1},C_{2},C_{3})$ is obviously triangularizable and $\sigma(C_{j},A_{k})=\sigma(C_{j},B_{k})=0$ for all possible indices, since both $\mathcal{A}$ and $\mathcal{B}$ are triangularizable. So Theorem \ref{thm:induct} applies. \end{proof} To prove Theorems \ref{thm:leq2} and \ref{thm:efic}, we will need the following easy fact.
\begin{lem} \label{lem:red-non-ss}Let $n\geq2$. A reduced triangularizable sequence of length $n$ has at least $n-1$ diagonalizable terms. \end{lem} \begin{proof} We can assume that $\mathcal{A}$ is already in upper triangular form, and suppose that $A_{1}$ and $A_{2}$ are two non-diagonalizable terms of $\mathcal{A}$. Then, they are of the form\[ A_{1}=\left(\begin{array}{cc} a_{1} & b_{1}\\
& a_{1}\end{array}\right)\quad A_{2}=\left(\begin{array}{cc} a_{2} & b_{2}\\
& a_{2}\end{array}\right),\] for some $a_{1},a_{2},b_{1},b_{2}\in R$. Thus, they commute, so $\mathcal{A}$ is not reduced. \end{proof} \begin{prop} \label{pro:quadrup}Let $\mathcal{A}=(A_{1},A_{2},A_{3},A_{4})$ be a reduced quadruple whose terms $A_{j}$ are all triangularizable. Then $\mathcal{A}$ is triangularizable if and only if $\sigma_{jk}=0$ for all $j,k\in\left\{ 1,2,3,4\right\} $. \end{prop} \begin{proof} One direction is a consequence of Proposition \ref{pro:sigma=00003D3D0}. Conversely, let $\mathcal{A}$ be reduced with $\sigma_{jk}=0$. By Theorem \ref{thm: pair} we may assume that $A_{1}$ and $A_{2}$ are upper triangular, so that $c_{1}=c_{2}=0$. From Lemma \ref{lem:red-non-ss}, we can also assume that $A_{1}$ is diagonalizable so that $b_{1}=0$ and $e_{1}\neq0$. Then, since $\mathcal{A}$ is reduced, $b_{2}\neq0$. In order to obtain a contradiction, suppose that $c_{3}c_{4}\neq0$. Then we have\begin{eqnarray*} 0 & = & \sigma_{13}=e_{1}^{2}b_{3}c_{3}\\ 0 & = & \sigma_{14}=e_{1}^{2}b_{4}c_{4}\\ 0 & = & \sigma_{23}=-b_{2}c_{3}(b_{2}c_{3}+e_{2}e_{3})\\ 0 & = & \sigma_{24}=-c_{4}(b_{2}^{2}c_{4}+b_{2}e_{2}e_{4}-e_{2}^{2}b_{4})\\ 0 & = & \sigma_{34}=-b_{4}(c_{3}^{2}b_{4}+c_{3}e_{3}e_{4}-e_{3}^{2}c_{4})\end{eqnarray*} This implies $b_{3}=0$, $b_{4}=0$, $b_{2}c_{3}+e_{2}e_{3}=0$ and $b_{2}c_{4}+e_{2}e_{4}=0$. The last two equations imply that $(e_{2},-b_{2})$ is a nontrivial solution of the matrix equation\[ \left(\begin{array}{cc} e_{3} & -c_{3}\\ e_{4} & -c_{4}\end{array}\right)\left(\begin{array}{c} x\\ y\end{array}\right)=\left(\begin{array}{c} 0\\ 0\end{array}\right).\] This implies $e_{3}c_{4}-c_{3}e_{4}=0$, which, together with $b_{3}=b_{4}=0$, contradicts the assumption that $\mathcal{A}$ is reduced. So, $c_{3}c_{4}=0$ and either $\Delta_{123}=0$ or $\Delta_{124}=0$, by Equation (\ref{eq:explicitDelta}). 
Assuming, without loss of generality, that $\Delta_{123}=0$, $(A_{1},A_{2},A_{3})$ is triangularizable by Theorem \ref{thm:SigmaDelta}, and the result follows from Proposition \ref{pro:ReducedTriple}. \end{proof} Now, we are ready to prove Theorems \ref{thm:leq2} and \ref{thm:efic}.
\noindent \emph{Proof of Theorem} \ref{thm:leq2}: Suppose $\mathcal{A}$ is reduced of length $l\geq4$ and assume all subsequences of $\mathcal{A}$ of length $\leq2$ are triangularizable. Then, by Proposition \ref{pro:quadrup} all quadruples of $\mathcal{A}$ are triangularizable, so that $\mathcal{A}$ is triangularizable by Proposition \ref{pro:leq3}.
{}$\square$
\noindent \emph{Proof of Theorem} \ref{thm:efic}: Let $\mathcal{A}=(A_{1},...,A_{n})$ be a matrix sequence with maximal reduction $\mathcal{A}'=(A_{1},...,A_{l})$ of length $l$. Then $\mathcal{A}$ is triangularizable if and only if $\mathcal{A}'$ is, by Corollary \ref{cor:reduce}, showing (i) for $l\leq3$. If $l=4$, $(A_{1},A_{2},A_{3},A_{4})$ is triangularizable by Proposition \ref{pro:quadrup}. Then, we are in the hypotheses of Theorem \ref{thm:induct}, which shows that we can apply induction to conclude that $\mathcal{A}'$ is triangularizable for all $l\geq4$.
{}$\square$
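The key linear-algebra computation in the proof of Theorem \ref{thm:induct}, namely that every $3\times3$ minor of $B$, with rows $(-e_{j}^{2},e_{j}b_{j},b_{j}^{2})$, equals $\pm\delta_{jk}\delta_{kl}\delta_{lj}$, can also be confirmed numerically. A sketch (function names and sign conventions ours):

```python
# Check of the minor identity used in the proof of Theorem (thm:induct):
# every 3x3 minor of B, with rows (-e_j^2, e_j b_j, b_j^2), equals
# +/- delta_jk * delta_kl * delta_lj, where delta_jk = b_j e_k - e_j b_k.
import random

def det3(M):
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def minor_vs_deltas(e, b):
    (ej, ek, el), (bj, bk, bl) = e, b
    minor = det3([[-ej**2, ej*bj, bj**2],
                  [-ek**2, ek*bk, bk**2],
                  [-el**2, el*bl, bl**2]])
    deltas = (bj*ek - ej*bk) * (bk*el - ek*bl) * (bl*ej - el*bj)
    return minor, deltas

rng = random.Random(0)
for _ in range(500):
    e = [rng.randint(-9, 9) for _ in range(3)]
    b = [rng.randint(-9, 9) for _ in range(3)]
    minor, deltas = minor_vs_deltas(e, b)
    assert minor == deltas or minor == -deltas
print("minor identity verified on 500 random triples")
```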
\begin{rem} To summarize and relate our results with McCoy's Theorem, we have shown that over an algebraically closed field and under the simple condition that $\mathcal{A}$ has reduced length $\neq3$, $\mathcal{A}$ is triangularizable if and only if the commutators $[A_{j},A_{k}]$ are nilpotent, for $j=1,2,3$ and $k>j$. In particular, the monomials $p(x)$ in Theorem \ref{thm:McCoy} are unnecessary (for reduced length $\neq3$). \end{rem}
\section{Simultaneous Similarity\label{sec:SimSim}}
\begin{comment} From now on, we consider matrices over an arbitrary field $\mathbb{F}$ except where explicitly stated. In this section, we prove Theorem \ref{thm:simsim}. \end{comment} {}
\subsection{Similarity and subtriples}
Again, let $G=GL_{2}(R)$. We now prove Proposition \ref{pro:sim3} with the help of the following easy lemma.
\begin{lem} \label{lem:stabilizer}Let $[A_{1},A_{2}]\neq0$ and $g\in G$. If $g\cdot(A_{1},A_{2})=(A_{1},A_{2})$ then $g$ is a scalar. \end{lem} \begin{proof} Let $g$ have columns $(x,y)$ and $(z,w)$ with $xw-yz$ invertible. The conditions $A_{j}g=gA_{j}$, for $j=1,2$, can be written as $M_{j}u=0$, where \[ M_{j}=\left(\begin{array}{ccc} 0 & c_{j} & -e_{j}\\ -c_{j} & 0 & b_{j}\\ e_{j} & -b_{j} & 0\end{array}\right),\quad\quad u=(z,x-w,y),\quad\quad j=1,2.\] So, the vector $(z,x-w,y)\in R^{3}$ lies in the intersection of the nullspaces of the $M_{j}$, each being generated by the non-zero vector $(b_{j},e_{j},c_{j})$, $j=1,2$ (each $M_{j}$ has rank exactly 2, as $A_{1}$ and $A_{2}$ are non-scalar). So, $(z,x-w,y)$ is zero unless $(b_{1},e_{1},c_{1})$ and $(b_{2},e_{2},c_{2})$ are collinear. But this is not the case by Equation (\ref{eq:comuta}) since $[A_{1},A_{2}]\neq0$. Thus, $z=y=0$ and $x=w$, so that $g$ is scalar, as desired. \end{proof} \noindent \emph{Proof of Proposition} \ref{pro:sim3}: If $\mathcal{A}$ and $\mathcal{B}$ are similar, then $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for any index set $J=(j_{1},...,j_{l})$ with $l\leq n$ and $1\leq j_{1}<...<j_{l}\leq n$. Conversely, let $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for index sets $J$ of cardinality $\leq3$. We divide the proof into three cases. (i) Suppose $\mathcal{A}$ is scalar. Since scalar matrices are invariant under conjugation, $A_{j}\sim B_{j}$ implies that $A_{j}=B_{j}$, so $\mathcal{A}=\mathcal{B}$. (ii) Suppose now that $\mathcal{A}$ is non-scalar, but commutative. Then some term of $\mathcal{A}$, say $A_{1}$, is non-scalar. Since $A_{1}\sim B_{1}$, $B_{1}$ is also non-scalar, and after a conjugation, we can assume that $B_{1}=A_{1}$. Moreover, $(A_{j},A_{k})\sim(B_{j},B_{k})$ and $[A_{j},A_{k}]=0$ implies that $[B_{j},B_{k}]=0$. As a consequence, $\mathcal{B}$ is also commutative and non-scalar. 
Since $A_{1}$ is a non-scalar $2\times2$ matrix, it is a nonderogatory matrix, so that every matrix commuting with $A_{1}$ is a polynomial in $A_{1}$ with coefficients in $\mathbb{F}$, the field of fractions of $R$. Therefore $\mathcal{A}=(A_{1},p_{2}(A_{1}),...,p_{n}(A_{1}))$ for some polynomials $p_{j}(x)$, $j=2,...,n$. Since pairs are similar, let $g_{j}\in G$, $j=2,...,n$, be such that $(B_{1},B_{j})=(A_{1},B_{j})=(g_{j}A_{1}g_{j}^{-1},g_{j}A_{j}g_{j}^{-1})$. Then $B_{j}=g_{j}A_{j}g_{j}^{-1}=g_{j}\ p_{j}(A_{1})g_{j}^{-1}=p_{j}(g_{j}A_{1}g_{j}^{-1})=p_{j}(A_{1})$, for all $j=2,...,n$. So $\mathcal{B}=(B_{1},...,B_{n})=(A_{1},p_{2}(A_{1}),...,p_{n}(A_{1}))=\mathcal{A}$. (iii) Finally, let $\mathcal{A}$ be non-commutative. Then, some pair does not commute, say $[A_{1},A_{2}]\neq0$. Since all pairs are similar we may, after a suitable conjugation, assume that $(A_{1},A_{2})=(B_{1},B_{2})$. Since all triples are similar, let $g_{j}\in G$, $j=3,...,n$, be such that $g_{j}\cdot(A_{1},A_{2},A_{j})=(A_{1},A_{2},B_{j})$. Then, since $g_{j}\cdot(A_{1},A_{2})=(A_{1},A_{2})$ Lemma \ref{lem:stabilizer} implies that $g_{j}$ is scalar for all $j=3,...,n$, so $B_{j}=g_{j}\cdot A_{j}=A_{j}$ and $\mathcal{B}=\mathcal{A}$.
{}$\square$
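Lemma \ref{lem:stabilizer} can be sanity-checked numerically. The following Python sketch (an illustration only, not part of the argument; the particular nilpotent pair is a hypothetical choice) verifies by brute force over small integer matrices that every invertible $g$ commuting with both terms of a non-commuting pair is scalar:

```python
from itertools import product

def mul(P, Q):
    # 2x2 matrix product
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[0, 1], [0, 0]]  # nilpotent Jordan block
A2 = [[0, 0], [1, 0]]  # its transpose, so [A1, A2] != 0

scalars_only = True
for x, z, y, w in product(range(-2, 3), repeat=4):
    g = [[x, z], [y, w]]
    if x * w - z * y == 0:
        continue  # skip non-invertible g
    if mul(g, A1) == mul(A1, g) and mul(g, A2) == mul(A2, g):
        # g stabilizes the pair, hence should be scalar: y = z = 0, x = w
        scalars_only = scalars_only and (y == 0 and z == 0 and x == w)
print(scalars_only)  # prints: True
```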
\subsection{Similarity for semisimple sequences}
We now work over a field $\mathbb{F}$. Recall the following definition.
\begin{defn} A matrix sequence $\mathcal{A}\in V_{n}$ is called \emph{semisimple} if its $G$-orbit (for the conjugation action (\ref{eq:accao})) is Zariski closed in $V_{n}$. \end{defn} This notion of semisimplicity generalizes that of a single matrix. In the context of geometric invariant theory, semisimplicity can be translated into more algebraic terms as follows. Recall that, in the general situation of an affine (algebraic) reductive group $K$ acting on an affine variety $V$ (both over $\mathbb{F}$) one defines the affine quotient variety $V/\!\!/ K$ as the maximal spectrum of the ring of invariant functions on $V$, $\mathrm{Spm}\!\left(\mathbb{F}[V]^{K}\right)$, which comes equipped with a projection\[ q:V\rightarrow V/\!\!/ K\] induced from the canonical inclusion of algebras $\mathbb{F}[V]^{K}\subset\mathbb{F}[V]$ (see, for example, \cite{MFK} or \cite{Mu}). The set of closed orbits is in bijective correspondence with geometric points of the quotient $V/\!\!/ K$. Recall also that a vector $x\in V$ is said to be \emph{stable} if the corresponding `orbit map'\[ \psi_{x}:K\rightarrow V,\quad\quad g\mapsto g\cdot x\] is proper. It is easy to see that $x\in V$ is stable if and only if the $K$-orbit of $x$ is closed and its stabilizer $K_{x}$ is finite. Another useful criterion for stability is the \emph{Hilbert-Mumford numerical criterion}, which is stated in terms of nontrivial homomorphisms $\phi:\mathbb{F}^{*}\rightarrow K$, called one parameter subgroups (1PS) of $K$ ($\mathbb{F}^{*}:=\mathbb{F}\setminus\left\{ 0\right\} $). To any such $\phi$ and to a point $x\in V$ one associates the composition $\phi_{x}:=\psi_{x}\circ\phi:\mathbb{F}^{*}\rightarrow V$. If $\phi_{x}$ can be extended to a morphism $\overline{\phi_{x}}:\mathbb{F}\rightarrow V$, we say that $\lim_{\lambda\to0}\phi_{x}(\lambda)$ exists and equals $\overline{\phi_{x}}(0)$.
\begin{thm} \emph{(Hilbert-Mumford, see \cite{MFK})} \label{thm:HMC}A point $x\in V$ is \emph{not stable} if and only if there is a one parameter subgroup $\phi$ of $K$, such that $\overline{\phi_{x}}(0)$ exists. \end{thm}
\begin{comment} One can also define semistable and unstable points. \end{comment} {}
Let us return to our example of the conjugation action (\ref{eq:accao}) of $G=GL_{2}(\mathbb{F})$ on $V_{n}=V_{n}(\mathbb{F})$, and let $\mathcal{S}\subset V_{n}=V_{n}(\mathbb{F})$ denote the subset of semisimple sequences. Then $\mathcal{S}/G$, being the set of closed orbits, is in bijection with $V_{n}/\!\!/ G=\mathrm{Spm}\!\left(\mathbb{F}[V_{n}]^{G}\right)$. As described in the introduction, Drensky's theorem (Theorem \ref{thm:Drensky}) realized the algebra of invariants as a quotient $\mathbb{F}[V_{n}]^{G}=\mathbb{F}[\mathbf{x}]/I$ where \[ \mathbf{x}=\left(t_{1},...,t_{n},t_{11},...,t_{nn},u_{12},...,u_{n-1,n},s_{123},...,s_{n-2,n-1,n}\right)\in\mathbb{F}^{N}\] is the list of generators ($N=2n+\binom{n}{2}+\binom{n}{3}$) and $I$ the ideal of relations. Dualizing the sequence\[ \mathbb{F}[\mathbf{x}]\to\mathbb{F}[\mathbf{x}]/I=\mathbb{F}[V_{n}]^{G}\subset\mathbb{F}[V_{n}]\] we obtain:\[ V_{n}\to V_{n}/\!\!/ G=\mathcal{S}/G\subset\mathbb{F}^{N}.\] By standard arguments, the last inclusion is precisely the map $\Phi([\mathcal{A}])=\mathbf{x},$ that sends a $G$ orbit to its values on the generators, so we conclude the following.
\begin{prop} \label{pro:phi}For a field of characteristic zero, the map $\Phi:\mathcal{S}/G\to\mathbb{F}^{N}$ is injective. \end{prop} To obtain an analogous map for non-semisimple sequences, we first characterize them as the triangularizable but not diagonalizable sequences.
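As a side computation (not needed for the argument), the generator count $N=2n+\binom{n}{2}+\binom{n}{3}$ from Drensky's theorem can be tabulated for small $n$; the sketch below reads the list $t_{11},...,t_{nn}$ as the $n$ traces of squares, which is the reading consistent with the stated value of $N$:

```python
from math import comb

def num_generators(n):
    # n traces t_j, n traces of squares t_jj,
    # C(n,2) invariants u_jk (j < k), C(n,3) invariants s_jkl (j < k < l)
    return n + n + comb(n, 2) + comb(n, 3)

table = {n: num_generators(n) for n in range(2, 7)}
print(table)  # {2: 5, 3: 10, 4: 18, 5: 30, 6: 47}
```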
Note that, for the action of $G$ on $V_{n}$ there are no stable points, since the scalar nonzero matrices stabilize any sequence $\mathcal{A}\in V_{n}$. This is not a problem, since the same orbit space is obtained with the conjugation action of the affine reductive group $G_{1}=SL_{2}(\mathbb{F})$ (determinant one matrices in $GL_{2}(\mathbb{F})$) on $V_{n}$ which has generically $\mathbb{Z}_{2}$ stabilizers:\[ V_{n}/\!\!/ G=V_{n}/\!\!/ G_{1}.\] Note that any diagonal matrix sequence $\mathcal{A}\in D:=\left\{ \mathcal{A}\in V_{n}:b=c=0\right\} $ has the subgroup \begin{equation} H=\left(\begin{array}{cc} \lambda & 0\\ 0 & \lambda^{-1}\end{array}\right)\subset G_{1},\,\,\lambda\in\mathbb{F}^{*}\label{eq:subg}\end{equation} contained in its stabilizer.
\begin{prop} \label{Prop:Artin} A $2\times2$ matrix sequence is stable (for the $SL_{2}(\mathbb{F})$ conjugation action) if and only if it is not triangularizable. A $2\times2$ matrix sequence is semisimple if and only if it is either stable or diagonalizable. \end{prop} \begin{proof} This follows from general results (see Artin, \cite{A}). For completeness, we include a proof of this particular case, using the Hilbert-Mumford criterion (Theorem \ref{thm:HMC}). For the first statement, we can assume that $\mathcal{A}$ is upper triangular. Then, a simple computation shows that the closure of the orbit of $\mathcal{A}$ under the subgroup $H\subset G_{1}=SL_{2}(\mathbb{F})$ (Equation \ref{eq:subg}) intersects $D$. So, either $\mathcal{A}$ is in $D$ (and it is commutative and semisimple) and its stabilizer contains $H$, or $\mathcal{A}\notin D$, in which case $G\cdot\mathcal{A}=G_{1}\cdot\mathcal{A}$ is not closed. In either case, $\mathcal{A}$ is not stable. Conversely, suppose $\mathcal{A}$ is not stable for the action of $G_{1}$. By elementary representation theory, any one parameter subgroup of $G_{1}$ is conjugate to\[ \lambda\mapsto\phi_{n}(\lambda)=\left(\begin{array}{cc} \lambda^{n} & 0\\ 0 & \lambda^{-n}\end{array}\right),\quad n\in\left\{ 1,2,...\right\} .\] In other words, any 1PS can be written as $\phi=g^{-1}\phi_{n}g$, for some $g\in G_{1}$ and some $\phi_{n}$, so\begin{equation} \lim_{\lambda\rightarrow0}\phi_{\mathcal{A}}(\lambda)=\lim_{\lambda\rightarrow0}\phi(\lambda)\cdot\mathcal{A}=g^{-1}\lim_{\lambda\rightarrow0}\phi_{n}(\lambda)\cdot(g\cdot\mathcal{A}).\label{eq:limit}\end{equation}
Writing $g\cdot\mathcal{A}$ as\[ g\cdot\mathcal{A}=\left(\begin{array}{cc} a(g) & b(g)\\ c(g) & d(g)\end{array}\right),\]
we obtain \[ \phi_{n}(\lambda)\cdot(g\cdot\mathcal{A})=\left(\begin{array}{ll} a(g) & b(g)\lambda^{2n}\\ c(g)\lambda^{-2n} & d(g)\end{array}\right).\] By the Hilbert-Mumford criterion, the limit (\ref{eq:limit}) exists for some 1PS, so we must have $c(g)=0$, for some $g\in G_{1}$. This means that $g\cdot\mathcal{A}$ is upper triangular, hence not irreducible. The second statement is analogous. \end{proof}
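The key computation in the proof — that conjugation by $\phi_{n}(\lambda)$ fixes the diagonal and scales the off-diagonal entries by $\lambda^{\pm2n}$ — is easy to confirm with exact rational arithmetic. A minimal Python sketch (the sample matrix and the values of $\lambda$, $n$ are arbitrary illustrative choices):

```python
from fractions import Fraction

def conj(g, A):
    # g A g^{-1} for an invertible 2x2 matrix g
    (x, z), (y, w) = g
    det = x * w - z * y
    gi = [[w / det, -z / det], [-y / det, x / det]]
    mul = lambda P, Q: [[sum(P[i][k] * Q[k][j] for k in range(2))
                         for j in range(2)] for i in range(2)]
    return mul(mul(g, A), gi)

lam, n = Fraction(1, 3), 2
phi = [[lam ** n, Fraction(0)], [Fraction(0), lam ** (-n)]]
A = [[Fraction(5), Fraction(7)], [Fraction(2), Fraction(-1)]]
B = conj(phi, A)

assert B[0][0] == A[0][0] and B[1][1] == A[1][1]  # diagonal is fixed
assert B[0][1] == A[0][1] * lam ** (2 * n)        # b scales by lam^{2n}
assert B[1][0] == A[1][0] * lam ** (-2 * n)       # c scales by lam^{-2n}
```

In particular the limit as $\lambda\to0$ exists exactly when the lower-left entry $c$ vanishes, as used in the proof.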
\subsection{Similarity for non-commutative triangularizable sequences}
To obtain a numerical similarity criterion for non-semisimple sequences, we are reduced, by Proposition \ref{Prop:Artin}, to the case of triangularizable sequences. Here, we consider the non-commutative triangularizable case.
Let $\mathsf{U}\subset V_{n}$ be the affine variety of upper triangular matrix sequences and let $\mathsf{K}\subset\mathsf{U}$ be the subset of commutative sequences. For $n\geq2$ and $\mathcal{A}\in\mathsf{U}$, let $P_{\mathcal{A}}$ denote the following $2\times n$ matrix, and $\delta_{jk}=\delta_{jk}(\mathcal{A})$ the corresponding $2\times2$ minors \[ P_{\mathcal{A}}=\left(\begin{array}{ccc} e_{1} & \cdots & e_{n}\\ b_{1} & \cdots & b_{n}\end{array}\right),\quad\quad\delta_{jk}=b_{j}e_{k}-e_{j}b_{k},\quad\quad j,k\in\{1,...,n\}.\] According to Lemma \ref{lem:comm}, $\mathsf{K}$ is characterized as consisting of sequences $\mathcal{A}$ such that the rank of $P_{\mathcal{A}}$ is $\leq1$ (it is $0$ when all terms in $\mathcal{A}$ are scalars). Hence, when $\mathcal{A}\in\mathsf{U}\setminus\mathsf{K}$, $P_{\mathcal{A}}$ has rank $2$ and defines an element $\pi_{\mathcal{A}}$ of the Grassmannian $\mathbb{G}(2,n)$ of 2-planes in $\mathbb{F}^{n}$. It is easy to see that the action of $G=GL_{2}(\mathbb{F})$, restricted to $\mathsf{U}\setminus\mathsf{K}$, preserves the plane $\pi_{\mathcal{A}}$:
\begin{lem} \label{lem:e=00003De'}Let $g\in G$. If $\mathcal{A}$ and $\mathcal{A}'=g\cdot\mathcal{A}$ are both in $\mathsf{U}\setminus\mathsf{K}$, then $e=e'$ and $\pi_{\mathcal{A}}=\pi_{\mathcal{A}'}$. \end{lem} \begin{proof} We can assume that $\mathcal{A}'=g\cdot\mathcal{A}$ for some $g\in SL_{2}(\mathbb{F})$ and compute, for $\mathcal{A}\in\mathsf{U}\setminus\mathsf{K}$, \[ g\cdot\mathcal{A}=\left(\begin{array}{cc} x & z\\ y & w\end{array}\right)\left(\begin{array}{cc} a & b\\ 0 & d\end{array}\right)\left(\begin{array}{cc} w & -z\\ -y & x\end{array}\right)=\left(\begin{array}{cc} * & *\\ y(ew-by) & *\end{array}\right).\] Since $P_{\mathcal{A}}$ has rank $2$, $ew\neq by$ as vectors in $\mathbb{F}^{n}$. Thus, in order that $g\cdot\mathcal{A}$ be again in $\mathsf{U}\setminus\mathsf{K}$, we need $y=0$, which simplifies the formula above to\begin{equation} g\cdot\mathcal{A}=\left(\begin{array}{cc} x & z\\ 0 & w\end{array}\right)\left(\begin{array}{cc} a & b\\ 0 & d\end{array}\right)\left(\begin{array}{cc} w & -z\\ 0 & x\end{array}\right)=\left(\begin{array}{cc} a & x(bx-ez)\\ 0 & d\end{array}\right),\label{eq:plane}\end{equation} because $xw=\mathsf{det} g=1$. Then, $e=e'=a-d$. Moreover, the upper right entry is $b'=x(bx-ez)=x^{2}b-xze$, a linear combination of $b$ and $e$. So $\pi_{\mathcal{A}}=\pi_{\mathcal{A}'}$, as asserted. \end{proof} The correspondence $\mathcal{A}\mapsto\pi_{\mathcal{A}}$ is therefore well defined on the $G$-orbits of non-commutative triangularizable sequences. To make this more precise, consider the following algebraic subvarieties of $V_{n}$. Let $\mathcal{U}=G\cdot\mathsf{U}$ be the variety of triangularizable matrix sequences and let $\mathcal{K}\subset\mathcal{U}$ be the subvariety of commutative sequences. Note that $\mathcal{U}$ is indeed irreducible, as the image in $V_{n}$ of the (irreducible) algebraic variety $G\times\mathsf{U}$ under the morphism $(g,\mathcal{A})\mapsto g\cdot\mathcal{A}$. 
The irreducibility of $\mathcal{K}$ was first noted in \cite{Ge} (see \cite{Gu} for a proof).
By the previous lemma, given a sequence $\mathcal{A}\in\mathcal{U}\setminus\mathcal{K}$, we can define $\pi_{\mathcal{A}}:=\pi_{\mathcal{B}}$ using any matrix sequence $\mathcal{B}\in G\cdot\mathcal{A}\cap\mathsf{U}$, that is, $\mathcal{B}$ is upper triangular and similar to $\mathcal{A}$ (even though the matrix $P_{\mathcal{A}}$ is not defined). So, we can define a map $\psi:\mathcal{U}\backslash\mathcal{K}\to\mathbb{P}^{N-1}$, where $\mathbb{P}^{N-1}$ denotes the projective space over $\mathbb{F}$ of dimension $N-1=\binom{n}{2}-1$, as the composition\begin{eqnarray*} \mathcal{U}\backslash\mathcal{K}\\ \pi\downarrow & \searrow\psi\\ \mathbb{G}(2,n) & \hookrightarrow & \mathbb{P}^{N-1}\end{eqnarray*} where the bottom inclusion is the Plücker embedding of the Grassmannian $\mathbb{G}(2,n)$. In more concrete terms, we have the following.
\begin{lem} \label{lem:Plucker}Let $\mathcal{A}\in\mathcal{U}\backslash\mathcal{K}$. Then $\psi(\mathcal{A})=[\delta_{12}:\delta_{13}:\cdots:\delta_{n-1,n}]$, where $\delta_{jk}=\delta_{jk}(\mathcal{B})$, for any $\mathcal{B}\in G\cdot\mathcal{A}\cap\mathsf{U}$. \end{lem} \begin{proof} The formula follows from the definition of the Plücker embedding. We just need to show that the map is well defined. The point $[\delta_{12}:\delta_{13}:\cdots:\delta_{n-1,n}]$ is in projective space since at least a pair of terms, say $A_{j}$ and $A_{k}$ do not commute, so that $\delta_{jk}(\mathcal{B})=b_{j}e_{k}-e_{j}b_{k}$ is nonzero, by Lemma \ref{lem:comm}. On the other hand for a different sequence $\mathcal{B}'\in G\cdot\mathcal{A}\cap\mathsf{U}$, Equation (\ref{eq:plane}) and $e=e'$ (Lemma \ref{lem:e=00003De'}) imply \[ \delta_{jk}(\mathcal{B}')=b'_{j}e'_{k}-e'_{j}b'_{k}=x(b_{j}x-e_{j}z)e_{k}-e_{j}x(b_{k}x-e_{k}z)=x^{2}\delta_{jk}(\mathcal{B}),\] so the point $\psi(\mathcal{A})\in\mathbb{P}^{N-1}$ is indeed independent of the choice of $\mathcal{B}\in G\cdot\mathcal{A}\cap\mathsf{U}$. \end{proof}
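The invariance in Lemma \ref{lem:Plucker} — all minors $\delta_{jk}$ rescale by the common factor $x^{2}$ under an upper-triangular conjugation — can also be checked on a concrete example. The Python sketch below (illustrative rational values only) conjugates an upper-triangular pair by a unimodular upper-triangular $g$ and compares the minors:

```python
from fractions import Fraction as F

def conj(g, A):
    # g A g^{-1} for an invertible 2x2 matrix g
    (x, z), (y, w) = g
    det = x * w - z * y
    gi = [[w / det, -z / det], [-y / det, x / det]]
    m = lambda P, Q: [[sum(P[i][k] * Q[k][j] for k in range(2))
                       for j in range(2)] for i in range(2)]
    return m(m(g, A), gi)

x, z = F(3, 2), F(-5)
g = [[x, z], [F(0), 1 / x]]         # upper triangular, det g = 1
A1 = [[F(1), F(4)], [F(0), F(7)]]   # b_1 = 4, e_1 = 1 - 7 = -6
A2 = [[F(2), F(-3)], [F(0), F(5)]]  # b_2 = -3, e_2 = 2 - 5 = -3

def delta(B1, B2):
    # delta_{12} = b_1 e_2 - e_1 b_2, with e = a - d
    e1, e2 = B1[0][0] - B1[1][1], B2[0][0] - B2[1][1]
    return B1[0][1] * e2 - e1 * B2[0][1]

B1, B2 = conj(g, A1), conj(g, A2)
assert B1[1][0] == 0 and B2[1][0] == 0          # still upper triangular
assert delta(B1, B2) == x ** 2 * delta(A1, A2)  # minor rescales by x^2
```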
\begin{comment} Let $j\neq k$ and $k\neq l$ be indices from 1 to $n\geq2$ and $\mathcal{A}\in\mathcal{U}\setminus\mathcal{K}$. Define \[ \beta_{jkl}(\mathcal{A}):=\frac{\delta_{jk}(\mathcal{B})}{\delta_{kl}(\mathcal{B})}\] where $\mathcal{B}$ is any upper triangular sequence similar to $\mathcal{A}$. \end{comment} {}By Lemma \ref{lem:Plucker}, the quotients $\delta_{jk}/\delta_{lm}$ (for $j\neq k$ and $l\neq m$) are well-defined rational $G$-invariant functions on $\mathcal{U}\setminus\mathcal{K}$ (defined on the open dense complement of $\delta_{lm}^{-1}(0)\subset\mathcal{U}\setminus\mathcal{K}$), so they descend to the quotient space $(\mathcal{U}\setminus\mathcal{K})/G$. Note, however, that these are not quotients of regular (polynomial) invariants. \begin{comment} Can two points $B,B'\in W$ define the same plane but lie in different orbits??? \end{comment} {}
Define now the map $\Psi:(\mathcal{U}\setminus\mathcal{K})/G\to\mathbb{F}^{n}\times\mathbb{F}^{N+n}\times\mathbb{P}^{N-1}$, $N=\binom{n}{2}$, by \[ \Psi([\mathcal{A}])=\left(\{t_{j}\}_{j=1,...,n},\ \{t_{jk}\}_{j,k=1,...,n},\ \psi\right),\] where $[\mathcal{A}]$ denotes the conjugacy class of $\mathcal{A}$, $t_{j}=\mathsf{tr}(A_{j})$, $t_{jk}=\mathsf{tr}(A_{j}A_{k})$, and $\psi$ is given by Lemma \ref{lem:Plucker}. The main result of this subsection is the following. For the proof, we use a standard consequence of the Noether-Deuring theorem; namely, if $\mathcal{A}$ and $\mathcal{A}'$ are elements in $V_{m,n}(\mathbb{F})$ and $g\in GL_{m}(\bar{\mathbb{F}})$ satisfies $g\cdot\mathcal{A}=\mathcal{A}'$ (i.e., similarity over $\bar{\mathbb{F}}$), then they are similar over $\mathbb{F}$ (see e.g. \cite{CR}, p. 200).
\begin{thm} \label{thm:rational-invariants}Let $n\geq2$. The map $\Psi$ is two-to-one. More precisely, the rational invariants $\delta_{jk}/\delta_{lm}$, together with the regular invariants $t_{j}$ and $t_{jk}$, distinguish $G$-orbits in $\mathcal{U}\setminus\mathcal{K}$, except for the identification $e\leftrightarrow-e$, when written in triangular form. \end{thm} \begin{proof} Let both sequences $\mathcal{A}$ and $\mathcal{A}'$ be triangularizable and non-commutative, so there exist $\mathcal{B}\in G\cdot\mathcal{A}\cap(\mathsf{U}\setminus\mathsf{K})$ and $\mathcal{B}'\in G\cdot\mathcal{A}'\cap(\mathsf{U}\setminus\mathsf{K})$. Let us write $t_{j}=\mathsf{tr}(B_{j})=\mathsf{tr}(A_{j})$ etc., as before, and use primed letters to denote corresponding quantities for $\mathcal{A}'$ or $\mathcal{B}'$. To prove injectivity, suppose $\Psi([\mathcal{A}])=\Psi([\mathcal{A}'])$, so that $t_{j}=t_{j}'$ and $t_{jk}=t_{jk}'$ for all appropriate indices. Then, \[ e_{j}e_{k}=2t_{jk}-t_{j}t_{k}=2t_{jk}'-t_{j}'t_{k}'=e_{j}'e_{k}'.\] Since any triple $(x^{2},xy,y^{2})\in\mathbb{F}^{3}$ determines $(x,y)$ up to sign, the equation above implies that $e'=\pm e$, as vectors in $\mathbb{F}^{n}$. Suppose $e'=e$. The hypothesis $\Psi([\mathcal{A}])=\Psi([\mathcal{A}'])$ means also that $\psi(\mathcal{B})=\psi(\mathcal{B}')$. Thus $\pi_{\mathcal{B}}=\pi_{\mathcal{B}'}$ as planes in $\mathbb{G}(2,n)$, due to the injectivity of the Plücker map. Therefore $b'=\alpha b+\beta e$ for some scalars $\alpha,\beta$, with $\alpha\neq0$. So, using the invertible matrix \[ g=\left(\begin{array}{cc} \sqrt{\alpha} & -\beta/\sqrt{\alpha}\\
& \sqrt{\alpha}^{-1}\end{array}\right),\] where $\sqrt{\alpha}$ is any square root of $\alpha$ in $\bar{\mathbb{F}}$, it is easy to see that $g\cdot\mathcal{B}=\mathcal{B}'$, so that $\mathcal{A}$ and $\mathcal{A}'$ are similar over $\bar{\mathbb{F}}$. So, $\mathcal{A}$ and $\mathcal{A}'$ are also similar over $\mathbb{F}$. Finally, with $e'=-e$ (note $e\neq0$ by hypothesis), one can see that $\mathcal{B}'$ is not similar to $\mathcal{B}$, hence the result. \begin{comment} Note finally that $\frac{\delta_{jk}}{\delta_{lm}}=\frac{\delta_{jk}}{\delta_{kl}}\frac{\delta_{kl}}{\delta_{lm}}=\beta_{jkl}\beta_{klm}$, which means that $\phi(\mathcal{A})=\phi(\mathcal{A}')$ if and only if $\beta_{jkl}=\beta_{jkl}'$ for all indices. \end{comment} {} \end{proof} In view of Proposition \ref{Prop:Artin}, Theorem \ref{thm:rational-invariants} and Proposition \ref{pro:phi} give a solution to the invariants problem (ii) for non-commutative sequences. Theorem \ref{thm:Invariants} provides a more efficient solution (and works for any characteristic $\neq2$), and its proof will be given after exploring the canonical forms described in the next section.
\begin{rem} Finding an analogous map in the commutative case is more involved! See Friedland, \cite{Fr} for a discussion of the case of a commuting pair of $2\times2$ matrices. However, testing similarity of commutative $2\times2$ matrix sequences is a trivial task after reduction to triangular form, as recalled in Appendix B. \end{rem}
\begin{comment} Using the maps $\Phi$ and $\Psi$ defined in Equations (\ref{eq:Psi'}) and (\ref{eq:Psi'}) we now show Theorem \ref{thm:Invariants} in the following way. Note that $\Psi(\mathcal{A})$ is not well defined if $\mathcal{A}$ is not triangularizable.
\begin{cor} \label{cor:invariants}Let $\mathcal{A}$ and $\mathcal{B}$ be two non-commutative matrix sequences of the same length. If $\mathcal{A}$ and $\mathcal{B}$ are triangularizable, then $\mathcal{A}\sim\mathcal{B}$ if and only if $\Phi(\mathcal{A})=\Phi(\mathcal{B})$. If $\mathcal{A}$ is semisimple, then $\mathcal{A}\sim\mathcal{B}$ if and only if $\Psi(\mathcal{A})=\Psi(\mathcal{B})$. \end{cor} \begin{proof} The triangularizable case is a restatement of Theorem \ref{thm:rational-invariants}. The semisimple case follows from Procesi's theorem. Indeed, if $\Phi_{s}(\mathcal{A})=\Phi_{s}(\mathcal{B})$ then all corresponding subsequences of length $\leq3$ of $\mathcal{A}$ and $\mathcal{B}$ are similar, so $\mathcal{A}\sim\mathcal{B}$, by using the generators of the algebra of invariants described in Theorem \ref{thm:Drensky}. \end{proof}
\end{comment} {}We end this section with the following conjecture on the generalization of Proposition \ref{pro:sim3} to $m\times m$ matrices. Since an irreducible sequence is a generic sequence, and the conjugacy classes of triangularizable or block-triangularizable sequences seem to depend on fewer data (i.e., fewer regular or rational invariants) than the irreducible case, we propose the following problem. Define the {}``semisimple similarity number'' $S(m)$ as\[
S(m)=\min\{k\in\mathbb{N}:\quad\forall n\in\mathbb{N},\ \forall\mathcal{A},\mathcal{B}\in\mathcal{S}_{n,m},\quad\mathcal{A}\sim\mathcal{B}\Leftrightarrow\mathcal{A}_{J}\sim\mathcal{B}_{J}\ \forall J\mbox{ with }|J|\leq k\},\] where $\mathcal{S}_{n,m}\subset V_{n,m}$ is the subset of semisimple sequences.
\begin{conjecture*} For arbitrary $m\times m$ sequences (not necessarily semisimple) $\mathcal{A}\sim\mathcal{B}$ if and only if $\mathcal{A}_{J}\sim\mathcal{B}_{J}$ for all index vectors $J$ with length $\leq S(m)$. \end{conjecture*} Note that, by Procesi's Theorem, $S(m)\leq N(m)$. This conjecture is true for $m=2$, by Proposition \ref{pro:sim3}, since Proposition \ref{pro:ReducedTriple} gives $S(2)=3$.
\section{Canonical Forms and Reconstruction of Sequences\label{sec:CanonicalForms}}
\subsection{Canonical forms}
We now describe canonical forms for $2\times2$ matrix sequences over an algebraically closed field $\bar{\mathbb{F}}$. Informally, this means specifying, for each given matrix sequence $\mathcal{A}\in V_{n}$, an element of its conjugacy class which has a simple form; the same simple form should apply to the largest possible set of sequences, hence the term `canonical'. We will assume $n\geq2$, since such canonical forms for $n=1$ are provided by the well-known Jordan decomposition. Also, for commutative sequences, the canonical forms are the same as for $n=1$. This is recalled in Appendix B. For non-commutative sequences, it turns out that canonical forms can be divided into five cases. \begin{comment} As a consequence, we obtain a simple finite procedure to determine whether two given sequences (reduced or not) are similar. This will be done in Appendix C. \end{comment} {}
By the preceding results, it is no surprise that we need to consider distinct canonical forms for the stable and for the reducible cases. We make the following choices.
\begin{defn} We say that a stable (i.e., irreducible) matrix sequence $\mathcal{A}=(A_{1},A_{2},...)$ is in \emph{(stable) canonical form} if $A_{1}$ is in Jordan canonical form and $b_{2}=1$. We say that a triangularizable sequence $\mathcal{A}$ is in \emph{(triangular) canonical form} if $\mathcal{A}$ is upper triangular, $A_{1}$ is diagonal and $b_{2}=1$. \end{defn} Recall that, by Proposition \ref{lem:red-non-ss}, a triangularizable sequence of length $n\geq2$ has at least $n-1$ semisimple (or diagonalizable) terms. By contrast, a stable sequence can have all of its terms non-diagonalizable. To see this, just consider the family of matrices of the form\[ \left(\begin{array}{cc} -\alpha\beta & \alpha^{2}\\ -\beta^{2} & \alpha\beta\end{array}\right),\] for some parameters $\alpha$ and $\beta$ not both zero. Then, all these matrices are similar to the $2\times2$ Jordan block with zero diagonal (i.e., $\alpha=1$, $\beta=0$), but the invariant $\sigma$ of any two of these matrices vanishes only when the corresponding vectors $(\alpha,\beta)$ are collinear. We have, however, the following fact.
\begin{prop} \label{pro:ss->sigma}Let $\mathcal{A}$ have reduced length $n=2$ or $n\geq4$. Then $\mathcal{A}$ is stable if and only if some $\sigma_{jk}\neq0$. \end{prop} \begin{proof} If $\mathcal{A}$ is not stable, then it is triangularizable, by Proposition \ref{Prop:Artin}, so that all $\sigma=0$, by Proposition \ref{pro:sigma=00003D3D0}. Conversely, suppose $\mathcal{A}$ is stable with reduced length $n=2$. Then $\sigma_{12}\neq0$ because of Friedland's result (or Theorem \ref{thm:Flo}). Finally, if $n\geq4$, Proposition \ref{pro:quadrup} implies that at least one $\sigma_{jk}\neq0$. On the other hand, Example \ref{exa:example} shows that the statement is not true for $n=3$. \end{proof}
\begin{comment} Def: Let $\mathcal{A}$ and $\mathcal{B}$ be two matrix sequences of the same length. We say we can put $\mathcal{A}$ \emph{in the form of} $\mathcal{B}$, if $\mathcal{A}$ and $\mathcal{B}$ are similar up to reordering of the terms of $\mathcal{A}$. \end{comment} {}
\begin{thm} \label{thm:canonical}All non-commutative matrix sequences can be put in canonical form. More precisely, after rearranging terms, any sequence is similar to a sequence with $A_{1},A_{2}$ and $A_{3}$ as described by the following table. \end{thm} \begin{center}
\renewcommand{\raggedright}{\centering}\begin{tabular}{|c|c|c|c|} \hline \multirow{3}{35mm}{stable case} & 1.a & $\sigma_{12}\neq0$, $\delta_{1}\neq0$ & $A_{1}$ diagonal, $b_{2}=1$\tabularnewline \cline{2-2} \cline{3-3} \cline{4-4}
\multicolumn{1}{|c|}{} & 1.b & $\sigma_{12}\neq0$, $\delta_{1}=\delta_{2}=0$ & $A_{1}$ Jordan block, $b_{2}=1$\tabularnewline \cline{2-2} \cline{3-3} \cline{4-4}
\multicolumn{1}{|c|}{} & 1.c & $\sigma_{12}=\sigma_{13}=\sigma_{23}=0$, $\Delta_{123}\neq0$ & $A_{1}$ diagonal, $b_{2}=1$\tabularnewline \hline \multirow{2}{35mm}{triangularizable case} & 2.a & all diagonalizable & $A_{1}$ diagonal, $b_{2}=1$ \tabularnewline \cline{2-2} \cline{3-3} \cline{4-4}
\multicolumn{1}{|c|}{} & 2.b & 1 non-diagonalizable ($A_{2}$) & $A_{1}$ diag., $A_{2}$ Jordan bl.\tabularnewline \hline \end{tabular}\renewcommand{\raggedright}{\raggedright} \par\end{center}
\begin{proof} Let us show that only the five possibilities above occur, and may be put in the given forms, starting with the case where $\mathcal{A}$ is stable. If $n\geq4$ or $n=2$, Proposition \ref{pro:ss->sigma} implies that there is some $\sigma\neq0$, so we can rearrange the terms of $\mathcal{A}$ so that we have $\sigma_{12}\neq0$ (in particular, $[A_{1},A_{2}]\neq0$). If $A_{1}$ or $A_{2}$ is diagonalizable, we have possibility 1.a. Assuming $A_{1}$ diagonalizable, we may suppose that $A_{1}$ is already in diagonal form. Since $A_{1}$ is nonscalar, $\delta_{1}\neq0$, and the stabilizer of $A_{1}$ consists of the diagonal invertible matrices $H\subset G$. Then $\sigma_{12}=e_{1}^{2}b_{2}c_{2}\neq0$ implies that both $b_{2}$ and $c_{2}$ are nonzero. Let $g=\mbox{diag}(x,x^{-1})$ for some $x\in\bar{\mathbb{F}}^{*}$. A simple computation shows that $g^{-1}A_{1}g$ is diagonal and $A_{2}'=g^{-1}A_{2}g$ has $b_{2}'=x^{-2}b_{2}$. So, we can solve for $x$ in order to have $b_{2}'=1$, as claimed. If neither $A_{1}$ nor $A_{2}$ is diagonalizable, we have case 1.b. In this case, we may suppose $A_{1}$ is already in Jordan form. Conjugating $A_{2}$ with a matrix of the form \[ g=\left(\begin{array}{cc} 1 & z\\
& 1\end{array}\right),\] a simple computation shows that we can solve for $z$ in order to obtain $b_{2}=1$. If $n=3$ and some $\sigma_{jk}\neq0$, we return to cases 1.a or 1.b. So the remaining case is $n=3$ with all $\sigma=0$, which is 1.c. From Example \ref{exa:example} we see that necessarily $\Delta_{123}\neq0$ (after a possible rearrangement of terms), and a diagonal conjugation will achieve $b_{2}=1$. Finally, when $\mathcal{A}$ is triangularizable, Proposition \ref{lem:red-non-ss} implies that either none or exactly one of the terms of $\mathcal{A}$ is non-semisimple, giving cases 2.a and 2.b respectively. The forms mentioned are obtained by conjugation with a diagonal invertible matrix. \end{proof} In the above table we have $[A_{1},A_{2}]\neq0$ in all cases, so if $g$ stabilizes $(A_{1},A_{2})$, then $g$ is scalar, by Lemma \ref{lem:stabilizer}. Therefore, we have the following uniqueness statement.
\begin{prop} \label{pro:UniqueCanonical}Let $\mathcal{A},\mathcal{B}$ be two sequences in canonical form with $[A_{1},A_{2}]\neq0$. Assume that $(B_{1},B_{2})=(A_{1},A_{2})$. Then, $\mathcal{A}\sim\mathcal{B}$ if and only if $\mathcal{A}=\mathcal{B}$. \end{prop}
\begin{comment} Note that all the canonical forms of Theorem \ref{thm:canonical} have $A_{1}$ diagonal, except for the case 1.b. However, making the substitution $(A_{1},A_{2})\mapsto(A_{1}-A_{2},A_{1}+A_{2})$ (and fixing the other terms of $\mathcal{A}$) one easily checks that we obtain a sequence that can be put in the form 1.a. \end{comment} {}
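Two computations from the proof of Theorem \ref{thm:canonical} can be confirmed numerically: the quantity $e_{1}^{2}b_{2}c_{2}$ appearing there coincides with $\det[A_{1},A_{2}]$ when $A_{1}$ is diagonal, and a diagonal conjugation normalizes $b_{2}$ to $1$. The following Python sketch uses arbitrary rational sample values (with $b_{2}$ a rational square so the conjugating $x$ stays rational):

```python
from fractions import Fraction as F

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

det = lambda A: A[0][0] * A[1][1] - A[0][1] * A[1][0]

a1, d1 = F(2), F(-1)
A1 = [[a1, F(0)], [F(0), d1]]
A2 = [[F(3), F(9, 4)], [F(5), F(0)]]  # b_2 = 9/4, c_2 = 5

# det [A1, A2] = e_1^2 b_2 c_2 when A1 is diagonal
e1, b2, c2 = a1 - d1, A2[0][1], A2[1][0]
C = [[p - q for p, q in zip(r, s)] for r, s in zip(mul(A1, A2), mul(A2, A1))]
assert det(C) == e1 ** 2 * b2 * c2

# conjugating by diag(1/x, x) sends b_2 to b_2 / x^2 (i.e. b_2' = x^{-2} b_2
# as in the proof); pick x^2 = b_2
x = F(3, 2)
g  = [[1 / x, F(0)], [F(0), x]]
gi = [[x, F(0)], [F(0), 1 / x]]
assert mul(mul(g, A1), gi) == A1       # A1 stays diagonal
assert mul(mul(g, A2), gi)[0][1] == 1  # normalized: b_2' = 1
```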
\subsection{Reconstruction of sequences from invariants}
We continue to work over an algebraically closed field $\bar{\mathbb{F}}$, and here we restrict to characteristic $\neq2$. Let $\mathcal{S}'(\bar{\mathbb{F}})$ denote the subset of $V_{n}(\bar{\mathbb{F}})$ of semisimple sequences such that $A_{1}$ is diagonalizable and $[A_{1},A_{2}]\neq0$. Then $\mathcal{S}'(\bar{\mathbb{F}})$ consists of the irreducible (stable) sequences that can be put in the form 1.a. \begin{comment} This is not a restriction for non-commutative sequences, by Remark \ref{rem:A1diag} \end{comment} {} Let $\bar{\Phi}:\mathcal{S}'(\bar{\mathbb{F}})/G(\bar{\mathbb{F}})\to\bar{\mathbb{F}}^{4n-3}$ denote the map\begin{equation} \bar{\Phi}([\mathcal{A}]):=\left(t_{1},t_{11},t_{2},t_{22},t_{12},...,t_{k},t_{1k},t_{2k},s_{12k},...,t_{n},t_{1n},t_{2n},s_{12n}\right).\label{eq:PhiBar}\end{equation}
We now describe a process of (re)constructing a sequence from its values under the map $\bar{\Phi}$. Let $v\in\bar{\mathbb{F}}^{4n-3}$ be given. We want to find $\mathcal{A}\in\mathcal{S}'(\bar{\mathbb{F}})$ such that $\bar{\Phi}([\mathcal{A}])=v$.
Let $a_{1}$ and $d_{1}$ (in $\bar{\mathbb{F}}$) be the roots of the polynomial $\lambda^{2}-t_{1}\lambda+\frac{t_{1}^{2}-t_{11}}{2}=0$ (the characteristic polynomial of a diagonal matrix whose trace is $t_{1}$ and the trace of whose square is $t_{11}$). If they are equal, there is no solution to our problem, simply because an $\mathcal{A}$ satisfying $\bar{\Phi}([\mathcal{A}])=v$ will have either $A_{1}$ non-diagonalizable or $[A_{1},A_{2}]=0$. So, with $e_{1}=a_{1}-d_{1}\neq0$, put $b_{1}=c_{1}=0$, $b_{2}=1$ and\begin{eqnarray}
c_{2} & = & \frac{t_{22}-a_{2}^{2}-d_{2}^{2}}{2}=\frac{1}{4e_{1}^{2}}\left|\begin{array}{cc} \tau_{11} & \tau_{12}\\
\tau_{21} & \tau_{22}\end{array}\right|\neq0,\nonumber \\ \left(\begin{array}{c} a_{2}\\ d_{2}\end{array}\right) & = & \left(\begin{array}{cc} 1 & 1\\ a_{1} & d_{1}\end{array}\right)^{-1}\left(\begin{array}{c} t_{2}\\ t_{12}\end{array}\right).\label{eq:ReconstructPair}\end{eqnarray} Then, one easily checks that the pair $(A_{1},A_{2})\in V_{2}(\bar{\mathbb{F}})$ whose entries are $a,b,c,d\in\bar{\mathbb{F}}^{2}$ is in canonical form 1.a and satisfies $\bar{\Phi}([\mathcal{A}])=(t_{1},t_{11},t_{2},t_{22},t_{12})$. Moreover, this pair is unique except for the choice of assigning to $a_{1}$ or $d_{1}$ one or the other root of the characteristic polynomial. The two possible choices are:\[ \left(\left(\begin{array}{cc} a_{1}\\
& d_{1}\end{array}\right),\left(\begin{array}{cc} a_{2} & 1\\ c_{2} & d_{2}\end{array}\right)\right),\quad\left(\left(\begin{array}{cc} d_{1}\\
& a_{1}\end{array}\right),\left(\begin{array}{cc} d_{2} & 1\\ c_{2} & a_{2}\end{array}\right)\right).\]
These pairs are similar (for $e_{1}$ and $c_{2}$ both non-zero), with similarity matrix\begin{equation} g=\left(\begin{array}{cc}
& 1/x\\ -x\end{array}\right),\quad\quad x=\sqrt{-c_{2}}.\label{eq:a<->d}\end{equation} Moreover, up to a non-zero scalar multiple, this is the unique matrix sending one pair to the other. Let $\mathcal{A}^{\vee}:=g\cdot\mathcal{A}$ denote the sequence obtained by acting with this matrix. Note that $[\mathcal{A}]=[\mathcal{A}^{\vee}]$ and if $\mathcal{A}$ is in canonical form, then so is $\mathcal{A}^{\vee}$. Now, let\begin{equation} \left(\begin{array}{c} a_{k}\\ b_{k}\\ c_{k}\\ d_{k}\end{array}\right)=\left(\begin{array}{cccc} 1 & 0 & 0 & 1\\ a_{1} & 0 & 0 & d_{1}\\ a_{2} & c_{2} & 1 & d_{2}\\ 0 & -c_{2}e_{1} & e_{1} & 0\end{array}\right)^{-1}\left(\begin{array}{c} t_{k}\\ t_{1k}\\ t_{2k}\\ s_{12k}\end{array}\right),\quad\quad k=3,...,n.\label{eq:ReconstructSS}\end{equation} The determinant of this matrix is $-2e_{1}^{2}c_{2}$, so our hypothesis imply that it is indeed invertible. Hence the transformation above provides an isomorphism of vector spaces, from the variables $(a_{k},b_{k},c_{k},d_{k})$ to the variables $(t_{k},t_{1k},t_{2k},s_{12k})$, $k=3,...,n$.
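The reconstruction formulas above can be exercised on a concrete pair: starting from a sequence already in canonical form 1.a (sample rational values, chosen arbitrarily for illustration), compute the traces and then invert the linear systems of Equation (\ref{eq:ReconstructPair}). A minimal Python sketch:

```python
from fractions import Fraction as F

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

tr = lambda A: A[0][0] + A[1][1]

# a pair in canonical form 1.a: A1 diagonal, b_2 = 1, c_2 != 0
a1, d1 = F(3), F(-1)
a2, c2, d2 = F(2), F(5), F(7)
A1 = [[a1, F(0)], [F(0), d1]]
A2 = [[a2, F(1)], [c2, d2]]

t1, t11 = tr(A1), tr(mul(A1, A1))
t2, t22, t12 = tr(A2), tr(mul(A2, A2)), tr(mul(A1, A2))

# a1, d1 are the roots of z^2 - t1 z + (t1^2 - t11)/2
assert a1 + d1 == t1 and a1 * d1 == (t1 ** 2 - t11) / 2

# invert the 2x2 linear system for (a2, d2), then recover c2 from t22
e1 = a1 - d1
r_a2 = (t12 - d1 * t2) / e1  # Cramer's rule on [[1, 1], [a1, d1]]
r_d2 = (a1 * t2 - t12) / e1
assert (r_a2, r_d2) == (a2, d2)
assert (t22 - r_a2 ** 2 - r_d2 ** 2) / 2 == c2
```

The remaining terms $A_{k}$, $k\geq3$, would then be recovered by inverting the $4\times4$ system of Equation (\ref{eq:ReconstructSS}) in the same exact arithmetic.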
Now, consider the reconstruction of triangularizable sequences. Let $\mathcal{U}'(\bar{\mathbb{F}})$ denote the subset of $V_{n}(\bar{\mathbb{F}})$ of triangularizable sequences such that $A_{1}$ is diagonalizable and $[A_{1},A_{2}]\neq0$ (so that $\mathcal{U}'(\bar{\mathbb{F}})\subset\mathcal{U}\setminus\mathcal{K}$). Define the map $\bar{\Psi}:\mathcal{U}'(\bar{\mathbb{F}})/G\to\bar{\mathbb{F}}^{2n}\times\mathbb{P}^{n-2}$, where $\mathbb{P}^{k}$ denotes the projective space over $\bar{\mathbb{F}}$ of dimension $k$, by the formula \begin{eqnarray*} \bar{\Psi}([\mathcal{A}]) & = & \left(t_{1},t_{11},...,t_{k},t_{1k},...,t_{n},t_{1n};\ \psi'\right),\end{eqnarray*} where $\psi'([\mathcal{A}])=[1:\delta_{13}:\cdots:\delta_{1n}]\in\mathbb{P}^{n-2}$, $\delta_{jk}:=b_{j}e_{k}-e_{j}b_{k}$ assuming $\mathcal{A}$ is in upper triangular form. Let $w\in\bar{\mathbb{F}}^{2n}\times\mathbb{P}^{n-2}$ be given. Again, let $a_{1}$ and $d_{1}$ be the (distinct) roots of the characteristic polynomial $\lambda^{2}-t_{1}\lambda+\frac{t_{1}^{2}-t_{11}}{2}=0$. Then, with $e_{1}=a_{1}-d_{1}\neq0$, put $b_{1}=0$, $b_{2}=1$, $c_{j}=0$, $j=1,...,n$, and\begin{eqnarray*} \left(\begin{array}{c} a_{k}\\ d_{k}\end{array}\right) & = & \left(\begin{array}{cc} 1 & 1\\ a_{1} & d_{1}\end{array}\right)^{-1}\left(\begin{array}{c} t_{k}\\ t_{1k}\end{array}\right),\\ b_{k} & = & -\frac{\delta_{1k}}{e_{1}},\quad k=3,...,n.\end{eqnarray*} Then the matrix sequence $\mathcal{A}$ with the entries $a,b,c,d\in\bar{\mathbb{F}}$ is in triangular canonical form (2.a or 2.b) and satisfies $\bar{\Psi}([\mathcal{A}])=w$, with $\psi'(\mathcal{A})=[1:\delta_{13}:\cdots:\delta_{1n}]\in\mathbb{P}^{n-2}$. The other solution $\mathcal{A}'$ is obtained by changing $e$ to $-e$ (that is, exchanging $a$ with $d$). We have.
\begin{prop} The map $\bar{\Phi}:\mathcal{S}'(\bar{\mathbb{F}})/G(\bar{\mathbb{F}})\to\bar{\mathbb{F}}^{4n-3}$ is injective, and $\bar{\Psi}:\mathcal{U}'(\bar{\mathbb{F}})/G(\bar{\mathbb{F}})\to\bar{\mathbb{F}}^{2n}\times\mathbb{P}^{n-2}$ is two-to-one. \end{prop} \begin{proof} Let $\mathcal{A}_{c}=(A_{1},A_{2},...)$ and $\mathcal{B}_{c}=(B_{1},B_{2},...)$ be canonical forms associated to $\mathcal{A},\mathcal{B}\in\mathcal{S}'(\bar{\mathbb{F}})$ respectively. If $\bar{\Phi}([\mathcal{A}_{c}])=\bar{\Phi}([\mathcal{B}_{c}])$ then either $(A_{1},A_{2})=(B_{1},B_{2})$ or $(A_{1},A_{2})=(B_{1},B_{2})^{\vee}$. If the first situation holds, then $\mathcal{A}_{c}=\mathcal{B}_{c}$ by the isomorphism given by Equation (\ref{eq:ReconstructSS}). Hence, Proposition \ref{pro:UniqueCanonical} shows that $\mathcal{A}\sim\mathcal{B}$. In the second alternative, just replace $\mathcal{B}_{c}$ with $\mathcal{B}_{c}^{\vee}$ (using $g$ in Equation (\ref{eq:a<->d})) and the conclusion is the same, showing that $\bar{\Phi}$ is injective. The case of $\bar{\Psi}$ is analogous, although in this case the two possibilities for a pair in triangular canonical form with given values of $(t_{1},t_{11},t_{2},t_{22})$ are:\[ \left(\left(\begin{array}{cc} a_{1}\\
& d_{1}\end{array}\right),\left(\begin{array}{cc} a_{2} & 1\\
& d_{2}\end{array}\right)\right),\quad\left(\left(\begin{array}{cc} d_{1}\\
& a_{1}\end{array}\right),\left(\begin{array}{cc} d_{2} & 1\\
& a_{2}\end{array}\right)\right).\] These pairs are not similar (note $e_{1}\neq0$), by Lemma \ref{lem:e=00003De'}, resulting in $\bar{\Psi}$ being two-to-one. \end{proof} Now, the proof of Theorem \ref{thm:Invariants} is easy.
\noindent \emph{Proof of Theorem} \ref{thm:Invariants}: Let $\mathbb{F}$ be a field of characteristic $\neq2$ and $\bar{\mathbb{F}}$ its algebraic closure. We have shown that $\bar{\Phi}:\mathcal{S}'(\bar{\mathbb{F}})/G(\bar{\mathbb{F}})\to\bar{\mathbb{F}}^{4n-3}$ is injective. Then, the map $\Phi':\mathcal{S}'(\mathbb{F})/G(\mathbb{F})\to\mathbb{F}^{4n-3}$ defined in Equation (\ref{eq:Phi'}), having the same form as $\bar{\Phi}$, is just its restriction to $\mathcal{S}'(\mathbb{F})/G(\mathbb{F})$, being therefore injective as well. The inclusion $\mathcal{S}'(\mathbb{F})/G(\mathbb{F})\subset\mathcal{S}'(\bar{\mathbb{F}})/G(\bar{\mathbb{F}})$ reflects the fact that two sequences in $\mathcal{S}'(\mathbb{F})$, similar over $G(\bar{\mathbb{F}})$, are also similar over $G(\mathbb{F})$, by the Noether-Deuring theorem (see e.g. \cite{CR}, p. 200). The case of $\bar{\Psi}$ is analogous.
{}$\square$
\begin{rem} \label{rem:NoRestriction}Finally, we argue that the other cases of semisimple non-commutative sequences in the table of Theorem \ref{thm:canonical} can also be reconstructed by a simple modification of the above procedure. In the case 1.b, we substitute the pair $(A_{1},A_{2})$ by the pair $(A_{1}-A_{2},A_{1}+A_{2})$, and in the case 1.c (assuming, without loss of generality, that $A_{1}$ is diagonal non-scalar), we perform the substitution $(A_{1},A_{2},A_{3})\mapsto(A_{1},A_{2}+A_{3},A_{2}-A_{3})$. In both cases we end up with a sequence in $\mathcal{S}'$, that is, the first matrix is diagonalizable non-scalar and the first pair does not commute. \end{rem} In conclusion, the only conjugacy classes that the maps $\Phi'$ and $\Psi'$ fail to distinguish (not counting the involution $e\leftrightarrow-e$ in the $\Psi'$ case, and after the reduction to $\mathcal{S}'$ described above) are the following types:\[ \left(\begin{array}{cc} a & b\\
& a\end{array}\right),\quad\left(\begin{array}{cc} a\\
& a\end{array}\right),\quad a,b\in\mathbb{F}^{n},\] which have the same value under $\Phi'$ (note that these are not in the domain of $\Psi'$, as they are commutative), regardless of $b\in\mathbb{F}^{n}$. \begin{comment} In characteristic 2, the above method fails: For $\mathbb{F}=\mathbb{Z}_{2}$, the field with two elements, the following two pairs are not similar, but have the same invariants. \[ \left[\left(\begin{array}{cc} 1 & 0\\ 0 & 0\end{array}\right),\left(\begin{array}{cc} 1 & 1\\ 0 & 0\end{array}\right)\right],\quad\left[\left(\begin{array}{cc} 1 & 0\\ 0 & 0\end{array}\right),\left(\begin{array}{cc} 1 & 1\\ 1 & 0\end{array}\right)\right].\] In fact, for both we have $t_{1}=t_{11}=t_{2}=t_{22}=t_{12}=1$, but only the first is triangularizable. \end{comment} {}
\appendix
\section{Triangularization of singlets over $R$}
In this Appendix, we treat the triangularization problem for a single $2\times2$ matrix over an integral domain $R$. We do not claim originality of Proposition \ref{pro:principal} and Lemma \ref{lem:eigenvector} below, although the author was unable to find a suitable reference. Their exposition is mainly intended to provide self-contained proofs of Theorems \ref{thm:Flo} and \ref{thm: pair}.
When is a single matrix with entries in $R$ triangularizable? Let us write\begin{equation} A=\left(\begin{array}{cc} a & b\\ c & d\end{array}\right)\in V_{1}(R),\quad\quad g=\left(\begin{array}{cc} x & z\\ y & w\end{array}\right)\in GL_{2}(R).\label{eq:A,g}\end{equation} Since $g$ is invertible, so is $\mathsf{det} g=xw-yz$, and conjugation of $A$ by $g^{-1}$ gives:\begin{equation} g^{-1}\cdot A=g^{-1}\ A\ g=\frac{1}{xw-yz}\left(\begin{array}{cc} * & bw^{2}+ezw-cz^{2}\\ cx^{2}-exy-by^{2} & *\end{array}\right).\label{eq:g.A}\end{equation} So, triangularizing $A$ amounts to finding a solution $(x,y,z,w)\in R^{4}$ to one of the equations $cx^{2}-exy-by^{2}=0$ or $bw^{2}+ezw-cz^{2}=0$, such that $xw-yz$ is invertible in $R$. Note that both equations are given by the same quadratic form \[ Q(x,y):=cx^{2}-exy-by^{2}=\frac{1}{2}(x,y)Q_{A}(x,y)^{T}\]
associated to matrix\[ Q_{A}=\left(\begin{array}{cc} 2c & -e\\ -e & -2b\end{array}\right).\] The discriminant of $Q$ is $\delta_{Q}:=-\mathsf{det} Q_{A}=e^{2}+4bc=\mathsf{tr}^{2}A-4\mathsf{det} A$, and coincides precisely with the discriminant $\delta_{A}$ of the characteristic polynomial of $A$. \begin{comment} This provides an interesting link between triangularization of $2\times2$ matrices over rings and the theory of quadratic forms. Whenever there is a non-trivial solution $(x,y)\in R^{2}$ to $Q(x,y)=0$ we say that $Q$ represents zero and that $(x,y)$ is isotropic with respect to $Q$. \end{comment} {}
Equation (\ref{eq:g.A}) provides a necessary condition for triangularization: If $A$ is triangularizable over $R$, then its eigenvalues lie in $R$. Indeed, if $g^{-1}Ag=T$ for some $g\in GL_{2}(R)$ and some upper triangular matrix $T$, $\delta_{A}=\delta_{T}=(\lambda_{1}-\lambda_{2})^{2}$ is a square in $R$, where $\lambda_{1},\lambda_{2}\in R$ are the diagonal elements of $T$, which are also the eigenvalues of $A$.
A necessary and sufficient condition is the following.
\begin{prop} \label{pro:principal}Let $A\in V_{1}(R)$ be a $2\times2$ matrix over an integral domain $R$. $A$ is triangularizable if and only if it has an \emph{eigenvector} of the form $(x,y)\in R^{2}$, such that $xR+yR=R$ (in particular, the ideal $(x,y)$ is principal). \end{prop} \begin{proof} The equation $Ag=gT$, for some invertible $g\in G(R)$ and upper triangular $T$, means that the first column of $g$ is an eigenvector $(x,y)$ of $A$, so that $x,y\in R$ and there exist $w,z\in R$ (forming the second column of $g$) so that $xw-yz$ is a unit. In particular, $xR+yR=R$. Conversely, let $A$ be as in (\ref{eq:A,g}), with an eigenvector $(x,y)\in R^{2}$ verifying $xR+yR=R$. From simple computations the eigenvalues of $A$ (both in $R$) are $\lambda_{1}=\frac{a+d+r}{2}$ and $\lambda_{2}=\frac{a+d-r}{2}$, where $r$ is a square root of $\delta_{A}=e^{2}+4bc$, and the respective eigenvectors are $v_{1}=(\lambda_{1}-d,c)$ and $v_{2}=(\lambda_{2}-d,c)$ (both in $R^{2}$). So, without loss of generality, let $(x,y)$ be in the eigenspace of $v_{1}$: $(x,y)=\alpha(\lambda_{1}-d,c)=\alpha(\frac{e+r}{2},c)$, for some nonzero $\alpha$ in the field of fractions of $R$. By hypothesis, there exist $z,w\in R$ verifying $xw-yz=1$. Thus, using the invertible matrix $g$ with columns $(x,y)$ and $(z,w)$, we have that $g^{-1}Ag$ is upper triangular (with $\lambda_{1}$ and $\lambda_{2}$ in the main diagonal). Indeed, by Equation (\ref{eq:g.A}) its lower left entry is $cx^{2}-exy-by^{2}=\frac{\alpha^{2}}{4}c(r^{2}-e^{2}-4bc)=0$. \end{proof} The following fact is used in the proof of Theorem \ref{thm: pair}.
\begin{lem} \label{lem:eigenvector}Fix nonzero elements $x,y$ in a field $\mathbb{F}$. If $A\in V_{1}(\mathbb{F})$ is nondegenerate ($\delta_{A}\neq0$) and verifies $cx^{2}-exy-by^{2}=0$, then $A$ is \emph{diagonalizable} and one of its eigenvectors is $(x,y)$. In particular, there is another eigenvector $(z,w)\in\mathbb{F}^{2}$ with $wx-yz\neq0$. \end{lem} \begin{proof} To satisfy $cx^{2}-exy-by^{2}=0$, the triple $(b,e,c)$ must be a linear combination of the vectors $(-x,y,0)$ and $(0,x,y)$. So, the matrix $A$ is given (uniquely up to the addition of a scalar) by the triple $(b,e,c)=(-zx,zy+wx,wy)$, for some $z,w\in\mathbb{F}$.
\begin{comment} Since $e$ is the difference of the diagonal elements of $A$, and adding scalars does not change eigenvectors, any such $A$ is of the form: \[ A=\left(\begin{array}{cc} wx+zy & -zx\\ wy & 0\end{array}\right).\]
\end{comment} {}Then, a simple computation shows that the discriminant of $A$ is a square in $\mathbb{F}$: $\delta_{A}=e^{2}+4bc=(zy+wx)^{2}-4zxwy=(wx-zy)^{2}$. So $A$ is triangularizable over $\mathbb{F}$. Moreover, its eigenvalues are easily checked to be $\lambda_{1}=wx$ and $\lambda_{2}=yz$. Since, by hypothesis, $\delta_{A}\neq0$, the eigenvalues are distinct, so $A$ is diagonalizable over $\mathbb{F}$. Also, the eigenvectors are multiples of $(x,y)$ and $(z,w)$, respectively. \end{proof}
\begin{comment} Over a field $\mathbb{F}$ of characteristic $\neq2$, a single $2\times2$ matrix $A\in V_{1}(\mathbb{F})$ is triangularizable if and only if $\delta_{A}=\delta_{Q}$ is a square in $\mathbb{F}$.
\begin{proof} If $A$ is triangularizable we can assume $c=0$, and the discriminant $\delta_{A}=e^{2}$ is clearly a square in $\mathbb{F}$. Conversely, it is a classical result (see for example \cite{S}, p. 30) that any quadratic form $Q$ is equivalent, over a field $\mathbb{F}$, to $a_{1}x^{2}+a_{2}y^{2}$ with $a_{1},a_{2}\in\mathbb{F}$ and $a_{1}a_{2}=-\delta_{Q}$. If $Q$ is degenerate, it clearly represents 0. If $Q$ is non-degenerate, dividing by $a_{1}$ we may suppose $Q(x,y)=x^{2}+a_{2}y^{2}$, with $\delta_{Q}=-a_{2}$. Then $Q$ represents $0$ iff $-a_{2}=\delta_{A}$ is a square in $\mathbb{F}$. \end{proof} In particular, any $2\times2$ matrix is triangularizable over an algebraically closed field $\bar{\mathbb{F}}$ of characteristic $\neq2$. For pairs of $2\times2$ matrices over $\bar{\mathbb{F}}$, a simple necessary and sufficient condition for triangularization is in \cite{Fr} (see Proposition \ref{pro:Fr}). To generalize this to an integral domain we need the following interpretation of the solutions in a field.
\begin{lem} \label{lem:2lines}Let $A\in V_{2}(R)$ be non-scalar and triangularizable (over $R$) and let $\mathbb{F}$ be the field of fractions of $R$. Then, the solutions to the equation $Q(x,y)=0$ over $\mathbb{F}$ are: two distinct lines meeting at the origin in $\mathbb{F}^{2}$, when $Q$ is nondegenerate, or one line through the origin, when $Q$ is degenerate. \end{lem} \begin{proof} By hypothesis, there is a non-trivial solution to $Q(x,y)=-bx^{2}+exy+cy^{2}=0$, and at least one of the three elements $c,b,e$ in nonzero. If $b=c=0$, then the solutions are the two lines $xy=0$ (in this case $\delta_{A}=e^{2}\neq0$). If $c\neq0$, a trivial computation shows that $(x_{1},y_{1}):=(2c,-e+r)$, and $(x_{1},y_{1}):=(2c,-e-r)$ where $r$ is one of the roots of $\delta_{A}$ (so $r^{2}=\delta_{A}$) are solutions to $Q(x,y)=0$. So, for any $\alpha\in\mathbb{F}$, $(x_{1},y_{1}):=(2c\alpha,(-e\pm r)\alpha)$ are also solution. So there are two lines of solutions when $Q$ is non-degenerate and one line when $Q$ is degenerate ($\delta_{A}=r=0$). The case when $b\neq0$ is analogous. \end{proof} Suppose now that $Q$ is non-degenerate and that there are isotropic elements, so that the solutions to $Q(x,y)=0$ are the intersections with $R^{2}$ of the two distinct lines in $\mathbb{F}^{2}$ described above (this intersection is non-empty since we are assuming that the coefficients of $Q$ are in $R$). It is easy to see that\[ Q(x,y)=0\quad\quad\Leftrightarrow\quad\left\{ (x,y)=\lambda(e-r,2b)\right\} \cup\left\{ (x,y)=\mu(e+r,2b)\right\} .\] It is also easy to show that if $(e-r,2b)$ can be completed to a $R$-basis of $R^{2}$ then $(e+r,2b)$ can also be so completed. \end{comment} {} \begin{comment} Simultaneous triangularization and subtriples
An obvious necessary condition for triangularization of a sequence is that all its terms have a common strong eigenvector, that is an eigenvector $(x,y)\in R$ such that $xR+yR=R$. This idea can be used to show
\begin{thm} Let $n\geq1$ and $\mathcal{A}\in V_{n}$ be a reduced matrix sequence. Then $\mathcal{A}$ is triangularizable if and only if all subsequences of $\mathcal{A}$ of length $\leq3$ are triangularizable. \end{thm} \begin{proof} One direction is clear. To show the converse, assume all single terms, pairs and triples are triangularizable. Assume that $A_{1}$ is non-diagonalizable. Then, $A_{1}$ has an eigenvector $(x,y)\in R^{2}$ with $xR+yR=R$ and all other eigenvectors are multiple (by a unit in $R$) of this one. Since $(A_{1},A_{k})$ is triangularizable for all $k$, $A_{k}$ has an eigenvector in common with $A_{1}$, which has to be a multiple of $(x,y)$. So, all $A_{k}$ have $(x,y)$ as eigenvector which means that $\mathcal{A}$ is triangularizable. So, we can assume $A_{1}$ diagonalizable. Let $(A_{1},A_{2})$ have $v_{2}=(x_{2},y_{2})$ as common eigenvector, $(A_{1},A_{3})$ have $v_{3}=(x_{3},y_{3})$ as common eigenvector, and $(A_{2},A_{3})$ have $v_{1}=()$ as common eigenvector... \end{proof}
\end{comment} {}
\section{Canonical Forms and Similarity of Commuting Sequences}
For completeness, we include the following well known description of all canonical forms of commuting sequences over an algebraically closed field $\bar{\mathbb{F}}$.
\begin{thm} \label{cor:CommNF}A matrix sequence of length $n$ with coefficients in $\bar{\mathbb{F}}$ is commutative if and only if it is similar to a matrix sequence in one of the following forms (called diagonal and triangular forms, respectively; both forms include the scalar sequences): \[ \mathcal{A}=\left(\begin{array}{cc} a\\
& d\end{array}\right),\quad\mathcal{B}=\left(\begin{array}{cc} a & b\\
& a\end{array}\right),\quad a,d,b\in\bar{\mathbb{F}}^{n}.\]
\end{thm} \begin{proof} The sufficiency is clear by Lemma \ref{lem:comm} (condition (\ref{eq:comm}) is satisfied because $b=0$ (resp. $e=0$) in the first (resp. second) case). Conversely, since $\mathcal{A}$ is commutative it is triangularizable, by Corollary \ref{cor:comm}. Let $k\geq1$ be the smallest integer such that $A_{k}$ is non-scalar. By an appropriate conjugation, one can assume that either $A_{k}$ is diagonal or it is written as a single Jordan block (here we are using the assumption on $\bar{\mathbb{F}}$). Then Lemma \ref{lem:comm} implies that $\mathcal{A}$ is either in diagonal or in triangular form, as wanted. \end{proof} The following consequence is clear.
\begin{cor} Let $\mathcal{A}$ and $\mathcal{A}'$ be commutative, both of the same form as described in the Theorem above. If $\mathcal{A}$ is in diagonal form, then $\mathcal{A}\sim\mathcal{A}'$ if and only if either $a=a'$ and $d=d'$, or $a=d'$ and $d=a'$. If $\mathcal{B}$ is in triangular form then $\mathcal{B}\sim\mathcal{B}'$ if and only if $a=a'$ and $b=\lambda b'$ for some $\lambda\in\bar{\mathbb{F}}\setminus\left\{ 0\right\} $. \end{cor}
\begin{comment} For clarity, let we exclude the case $b=0$ from the strict-tiangular form. Note also the following.
\begin{cor} Any sequence in strict-triangular form is in the Zariski closure of the set of diagonalizable sequences inside $V_{n}(\bar{\mathbb{F}})$. \end{cor} \begin{proof} It is sufficient to note that the 1-parameter family\[ \left(\begin{array}{cc} a+\varepsilon b & b\\
& a-\varepsilon b\end{array}\right),\] with $\varepsilon\in\bar{\mathbb{F}}$, $a,b\in\bar{\mathbb{F}}^{n}$, consists of simultaneously digonalizable sequences and its limit as $\varepsilon\to0$ is the non-diagonalizable sequence \[ \left(\begin{array}{cc} a & b\\
& a\end{array}\right).\]
\end{proof}
\end{comment} {} \begin{comment} Parametrization of Orbits
In this last Appendix, we provide formulae for the coefficients of canonical forms of matrix sequences, in terms of the corresponding invariants. For semisimple matrix sequences, we use a process of reconstruction of the $GL(2,\bar{\mathbb{F}})$ orbit from a minimal number of traces of words in the terms of $\mathcal{A}$ following \cite{Fl}.
\begin{thm} Let $\mathcal{A}$ be any matrix sequence of length $n\geq2$ with $\sigma_{12}\neq0$ and $\delta_{1}\neq0$. Then its canonical form is given by\[ A_{1}=\left(\begin{array}{cc} \alpha_{1}+i\beta_{1} & 0\\ 0 & \alpha_{1}-i\beta_{1}\end{array}\right),\qquad A_{2}=\left(\begin{array}{cc} \alpha_{2}+i\beta_{2} & i\delta_{2}\\ i\delta_{2} & \alpha_{2}-i\beta_{2}\end{array}\right),\] \[ A_{k}=\left(\begin{array}{cc} \alpha_{k}+i\beta_{k} & \gamma_{k}+i\delta_{k}\\ -\gamma_{k}+i\delta_{k} & \alpha_{k}-i\beta_{k}\end{array}\right),\quad k=3,...,n,\] where\[ \beta_{1}=\frac{1}{2}\sqrt{-\delta_{1}}\qquad\beta_{k}=-\frac{\tau_{1k}}{4\beta_{1}},\qquad k=2,...,n,\] \[ \delta_{2}=2\sqrt{-\frac{\sigma_{12}}{\delta_{1}}},\qquad\delta_{k}=\frac{\tau_{2k}\delta_{1}-\tau_{12}\tau_{1k}}{2\delta_{2}\delta_{1}},\qquad,k=3,...,n,\] \[ \gamma_{k}=4\sqrt{\frac{\Delta_{12k}}{\sigma_{12}}},\qquad k=3,...,n.\]
\end{thm} \begin{proof} This is a simple computation, using the formulae in Definition \end{proof}
\end{comment} {} \begin{comment} In other words, it makes sense to call a non-scalar commutative matrix sequence a nonderogatory sequence.
Formula (\ref{eq:comuta}) suggests considering the following definition.
\begin{defn} If $\mathcal{A}\in V_{m,n}$, let \[ L(A):=\left(\begin{array}{ccc} b_{1} & \cdots & b_{n}\\ e_{1} & \cdots & e_{n}\\ c_{1} & \cdots & c_{n}\end{array}\right),\]
be called the $L$-matrix. We will denote by $l(A_{j})$ the $j$th column of $L(A)$. \end{defn} With this notation, we see that a matrix $A_{j}$ is non-scalar if and only if $l(A_{j})$ is non-zero; moreover, two non-scalar matrices $A_{1}$ and $A_{2}$ commute if and only their vectors $l(A_{1})$ and $l(A_{2})$ lie on the same line through the origin, or equivalently, if $L((A_{1},A_{2}))$ has rank 1. We conclude the following.
\begin{prop} On the set of non-scalar matrices over $\mathbb{F}$, commutativity is an equivalence relation. \end{prop} \begin{proof} The reflexivity and symmetry properties are clear. Let $A_{1},A_{2}$ and $A_{3}$ be three non-scalar matrices such that $A_{1}$ commutes with $A_{2}$ and $A_{2}$ commutes with $A_{3}$. This means that the lines through the non-zero vectors $l(A_{1}),l(A_{2})$ and $l(A_{3})$ are the same, so that $A_{1}$ also commutes with $A_{3}$. \end{proof} The Proposition above is a particular case of a much more general result, that commutativity is an equivalence relation on the set of nonderrogatory matrices of any size. \end{comment} {}
\end{document}
Simplify $(u+4)(u-1) - (u-3)(u+6)$.
Expanding the first product, the distributive property shows that $$(u+4)(u-1) = u^2 + 4u - u - 4 = u^2 + 3u - 4.$$The second product becomes $$(u-3)(u+6) = u^2 - 3u + 6u - 18 = u^2 + 3u - 18.$$Subtracting, both the $u^2$ and the $3u$ terms cancel, leaving an answer of $-4 - (-18) = \boxed{14}$.
Neutralization of cholera toxin by Rosaceae family plant extracts
Magdalena Komiazyk1,2,
Malgorzata Palczewska2,3,
Izabela Sitkiewicz4,
Slawomir Pikula1 &
Patrick Groves ORCID: orcid.org/0000-0003-2589-14552,5
Cholera is one of the most deadly diarrheal diseases that require new treatments. We investigated the neutralization of cholera toxin by five plant extracts obtained from the Rosaceae family that have been traditionally used in Poland to treat diarrhea (of unknown origin).
Hot water extracts were prepared from the dried plant materials and lyophilized before phytochemical analysis and assessment of antimicrobial activity using microdilution assays. The ability of the plant extracts to neutralize cholera toxin was analyzed by measurement of cAMP levels in cell cultures, enzyme-linked immunosorbent assay and electrophoresis, as well as flow cytometry and fluorescence microscopy studies of fluorescent-labeled cholera toxins with cultured human fibroblasts.
The antimicrobial assays displayed modest bacteriostatic potentials. We found that the plant extracts modulate the effects of cholera toxin on intracellular cAMP levels. Three plant extracts (Agrimonia eupatoria L., Rubus fruticosus L., Fragaria vesca L.) suppressed the binding of subunit B of cholera toxin to the cell surface and to immobilized ganglioside GM1, while two others (Rubus idaeus L., Rosa canina L.) interfered with the toxin internalization process.
The traditional application of the Rosaceae plant infusions for diarrhea appears relevant to cholera, slowing the growth of pathogenic bacteria and either inhibiting the binding of cholera toxin to receptors or blocking toxin internalization. The analyzed plant extracts are potential complements to standard antibiotic treatment and Oral Rehydration Therapy for the treatment of cholera.
Diarrhea causes millions of deaths each year as a result of the action of a wide range of pathogens, including enterotoxin-producing strains of bacteria such as Escherichia coli or Vibrio cholerae [1]. The pathogens are mainly spread by water or food contaminated with human or animal feces, and from person to person. The V. cholerae infections result in severe diarrhea that, without proper treatment, can kill within a few hours. Children and the elderly in developing countries constitute the largest groups of fatalities. Cholera outbreaks are related to poor sanitation and usually occur after a cataclysm or during a war when access to clean water is limited. The most recent cholera outbreak started in October 2016 in Yemen. Over 8 months, 101,820 cases of cholera were registered with 791 deaths [2]. In developed countries, only sporadic, often imported cases of V. cholerae infection occur [3,4,5], but diarrhea from pathogenic E. coli strains producing similar toxins regularly results in thousands of hospital patients; for example, a 2011 outbreak affecting Germany and its neighboring countries also resulted in 50 deaths [6]. The E. coli infections are usually not as dangerous as cholera but still cause many problems, especially for travelers, children and the elderly [1]. The main virulence factor of the above-mentioned bacteria is the production of toxins belonging to the AB5 family, such as cholera (CTX) or heat-labile enterotoxins (LT and LT-II). The toxins are expressed and secreted as a response to bacterial quorum sensing once a colony has reached a mature size [7]. Structurally, the AB5 toxins contain a catalytic subunit A1 linked by a short A2 peptide to a pentameric subunit B that binds to gangliosides located on human cell surfaces [8].
After secretion from the bacteria, the toxin binds to gangliosides and is then internalized from the plasma membrane by endocytosis and undergoes retrograde trafficking through the trans-Golgi network to the lumen of the endoplasmic reticulum. Here, subunit A is dissociated from the holotoxin, refolded and released to the cytoplasm where it causes constitutive activation of adenylate cyclase, resulting in the conversion of ATP to cAMP. The high concentration of cAMP results in the opening of cAMP-dependent chloride channels and secretion of chloride ions into the lumen of the small intestine. Accumulation of chloride causes secretion of sodium ions into the lumen of the small intestine across the tight junction. An increased concentration of sodium chloride in the small intestine lumen creates an osmotic gradient that results in water outflow into the small intestine lumen across the tight junction [9]. This is the point when diarrheal symptoms start. Table 1 gives a list of different stages of V. cholerae infection where a pharmaceutical intervention may provide relief of cholera symptoms/infection. Each stage involves a number of potential molecular targets. It is possible to block the multi-step mechanism of action of cholera toxin at several different stages. Toxin neutralization after release by the bacteria and before internalization by human cells is one of the most accessible targets for a natural remedy or functional food to act on, and these stages are highlighted in bold in Table 1. Targeting a protein in human cells (italics, Table 1) is more challenging, and approval is more difficult to gain, while the targeting of bacteria (normal text, Table 1) raises questions due to the growing knowledge of the importance of the diversity and health of the gut microbiome.
Table 1 Stages of V. cholerae infection
Various plant species are traditionally used by many societies to alleviate and cure diarrhea. The properties of traditional plant extracts are worth exploring as they can stop or kill bacterial growth, neutralize or deactivate enterotoxins, or provide useful microelements and vitamins [23]. Over the years, the healing properties of many plants have been superseded by synthetic pharmaceuticals, often derived from the active constituents of plants. Bacterial pathogens are becoming more resistant to commonly applied antibiotics and it is important to find new sources of antibacterial agents. However, this is unattractive in the case of cholera due to the development costs of a new drug and the fact that the disease can develop rapidly in a patient. Plant extracts do not provide the power of modern antibiotics. Instead, plant extracts may provide an attenuation of the many steps in the V. cholerae infection lifecycle, including the latent, early stages of the disease, moderate the effects of diarrhea and lead to improved survival and recovery rates. The citations given in Table 1 are to works describing the methodology to test plant extracts at each stage of the cholera infection, with the exception of the quorum sensing stage that used a small molecule library. If we can identify plant extracts that act at different stages of the V. cholerae infection lifecycle then it is possible that mixtures of plant extracts will provide a synergistic effect on the infected population, as well as individuals. This may help patients at different stages of infection, including asymptomatic carriers. The range of active metabolites produced by plants potentially treats a broad range of pathogenic strains and it will be more difficult for bacteria to develop resistance than with modern antibiotics. Furthermore, plant extracts based on accepted traditional medicines or functional foods will be more quickly, and cheaply, developed and applied.
In this study, we focus on the anti-enterotoxic activities of common European species belonging to the Rosaceae family: Agrimonia eupatoria L. (common agrimony), Fragaria vesca L. (wild strawberry), Rubus fruticosus L. (blackberry), Rubus idaeus L. (raspberry) and Rosa canina L. (rose), which for centuries were used in Poland as natural medicines for diarrhea [24,25,26,27,28]. The recommended doses, methods of infusion preparation and references are listed in Table 2. Neither the pathogenic target of these herbs nor their mechanism of action is known. However, Poland suffered from regular cholera outbreaks in the 19th and early 20th centuries [29], and sporadic cases are still reported with non-O1 V. cholerae strains found in contaminated bodies of water [5]. Therefore, in this study, we wanted to analyze if the above-mentioned plant extracts have antimicrobial activities and/or can neutralize cholera toxin binding to receptors. In doing so, we obtained some positive results and also found that assays employing CTX gave less positive results than those employing CTB.
Table 2 Traditional preparation of plant infusions for the treatment of diarrhea in Poland
Plant material and extraction
The plants were dried from their natural state, and cut or chopped. All plant materials were authenticated and tested to comply with British and European food/pharmacopeia standards by the supplier, Bristol Botanicals Limited (Bristol, UK). The list of the used species and batch numbers are presented in Table 3. The aqueous extracts were prepared by pouring 25 mL of boiling Milli-Q (MQ) grade water onto 1 g of plant material, allowed to cool to room temperature, and left to stand for over 18 h. The extracts were decanted and passed consecutively through filter paper and 0.4 μm cellulose acetate filters (Whatman, UK). To further minimize contamination or degradation, the plant extracts were frozen and lyophilized to dryness, after which the extraction yields were calculated (Table 3), and then stored at − 20 °C. The lyophilized materials gave consistent data over several months.
Table 3 Yields of prepared plant extracts
Mammalian cell culture
Primary human skin fibroblasts (line C688) were obtained from the Department of Metabolic Diseases, The Children's Memorial Health Institute in Warsaw, Poland [30]. Ganglioside GM1 molecules, receptors for CTX, are one component of fibroblast membranes. C688 cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM) (Sigma, USA) with 10% fetal bovine serum (FBS, Gibco, South America), 100 units/mL penicillin and 100 μg/mL streptomycin (Sigma, USA) at 37 °C in a humidified atmosphere containing 5% CO2, on 100 mm tissue-culture treated dishes (Corning BD, USA) until 80–90% confluence. To dissociate the adherent cells, dishes were rinsed with phosphate buffered saline solution pH 7.5 (PBS, 10 mM Na2HPO4, 1.76 mM KH2PO4, 2.7 mM KCl, 136 mM NaCl) and incubated with 2 mL (0.5 mg/mL) of porcine trypsin (Sigma, USA) for 7 min at 37 °C. Cells were collected by centrifugation (500 g for 3 min), resuspended in fresh medium and counted under a light microscope (Zeiss Observer Z1, Germany) using a Bürker chamber.
Cytotoxic effect of plant extracts (MTT method)
The cytotoxic activity of plant extracts was determined using the standard MTT method (according to a Sigma protocol). We seeded 5 × 10³ C688 cells/well into 96 well plates and cultured them as described above. The next day, the medium was discarded and cells were treated with 200 μL of several different concentrations of plant extracts diluted in DMEM with 1% FBS (range 5 to 0.078 mg/mL). After 24 h incubation at 37 °C, the medium was discarded and wells were washed twice in PBS, pH 7.5. To determine cell viability, 20 μL of 5 mg/mL MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide; Sigma, USA) in PBS and 180 μL of DMEM without FBS were added to the wells and incubated for 4 h at 37 °C. Then, the medium was carefully discarded and 200 μL of 40 mM HCl in isopropanol was added. After 10 min incubation, the absorbance was measured at 570 nm with the background wavelength set to 630 nm using a SpectraMax M5e plate reader. All experiments were carried out in triplicate. The percentage of viable cells was calculated using Eq. 1:
$$ \%\mathrm{viable}\ \mathrm{cells}={\mathrm{A}}_1\times 100\%/{\mathrm{A}}_0 $$
where A0 is the value of the absorbance for control conditions (which was considered 100%) and A1 is the value of the absorbance for tested samples, reduced by the value of background.
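As a sketch of how Eq. 1 might be applied in practice: the function name, the explicit background-correction step and the example absorbance readings below are our own illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of Eq. 1 (percent viable cells from MTT absorbances).
# Example readings and the background value are invented for illustration.

def percent_viable(a_sample: float, a_control: float, a_background: float = 0.0) -> float:
    """Eq. 1: % viable cells = A1 x 100% / A0, where A1 is the sample
    absorbance reduced by the background and A0 is the untreated control."""
    a1 = a_sample - a_background          # background-corrected sample absorbance
    return a1 * 100.0 / a_control

# e.g. control A570 = 0.80, extract-treated well A570 = 0.60, background = 0.05
viability = percent_viable(0.60, 0.80, 0.05)
```

Here a control well (cells incubated without extract) defines 100% viability, matching the definition of A0 above.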
Microbial strains
The aqueous plant extracts were tested against four bacterial strains: Escherichia coli ATCC 25922, E. coli O44 (834/04, collection of National Medicine Institute, Warsaw, Poland), Vibrio cholerae O395-tacCTB strain (Chiron Srl./Novartis) and Lactobacillus rhamnosus (ATCC 53103).
Broth microdilution assay
The antimicrobial activity of plant extracts was determined by a standard microdilution technique using 96-well microtiter plates. Bacterial inocula were prepared from 12 h liquid cultures grown in Mueller-Hinton (MH) broth. The plant extracts were dissolved in MH medium to a concentration of 2.5 mg/mL and a series of six two-fold dilutions of plant extracts in MH broth (range 2.5–0.078 mg/mL) were prepared across the microtiter plates. To each well, 10 μL of inoculum (~ 0.2 × 10⁵ bacterial cells/well) was added. The wells were filled with MH broth to 200 μL total volume. As a negative control, plant extracts were replaced with MH broth; as a positive control, bacteria were incubated with 35 μg/mL of chloramphenicol. After 18 h incubation at 37 °C, the absorbance of each well was measured at 600 nm using a SpectraMax M5e plate reader. Bacterial growth was calculated with Eq. 2:
$$ \%\mathrm{bacterial}\ \mathrm{growth}={\mathrm{A}}_1\times 100\%/{\mathrm{A}}_0 $$
where A1 = sample absorbance, A0 = absorbance of negative control.
Using this scale, 0% represents total bacterial growth inhibition and 100% represents bacterial growth in the absence of plant extracts. The lowest plant extract concentration that reduced bacterial growth by more than 80% was interpreted as the minimal inhibitory concentration (MIC), while a concentration reducing growth by more than 50% was defined as one that limited bacterial growth.
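These interpretation rules (MIC at >80% growth reduction, growth-limiting concentration at >50% reduction) can be sketched as follows; the concentrations and growth percentages below are hypothetical example data, not measurements from this study:

```python
def classify_extract(growth_by_conc):
    """Interpret broth-microdilution results per the criteria above.

    growth_by_conc: dict mapping extract concentration (mg/mL) to
    % bacterial growth from Eq. 2 (100% = uninhibited control).
    Returns (MIC, growth-limiting concentration); None means no
    tested concentration met that criterion.
    """
    mic_candidates = [c for c, g in growth_by_conc.items() if g < 20.0]    # >80% reduction
    limit_candidates = [c for c, g in growth_by_conc.items() if g < 50.0]  # >50% reduction
    mic = min(mic_candidates) if mic_candidates else None
    limiting = min(limit_candidates) if limit_candidates else None
    return mic, limiting

# Hypothetical OD600-derived growth percentages for one extract:
growth = {2.5: 15.0, 1.25: 45.0, 0.625: 70.0, 0.3125: 90.0}
print(classify_extract(growth))  # (2.5, 1.25)
```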
To determine the minimal bactericidal concentrations (MBC) of the plant extracts, 2 μL of suspension after 18 h culture was applied to MH agar plates and cultured at 37 °C for 18 h. The lowest concentration of plant extract without bacterial colonies was interpreted as the MBC. The experiments were performed in triplicate.
cAMP assay
The cAMP levels in cultured C688 cells were determined using a cAMP-Glo Max Assay (Promega, USA), according to the manufacturer's protocol. Cells were grown in tissue culture treated 96-well, white plates with clear, flat bottoms (Brand, Germany). To each well, 10³ C688 cells were added and cultured overnight under standard conditions. The next day, the medium was discarded and a mixture of 2.5 mg/mL plant extract or gallic acid and 25 nM CTX (2.125 μg/mL) in DMEM supplemented with 30 mM MgCl2 was added to each well. The plates were incubated for 2 h at 37 °C in 5% CO2. Next, the appropriate amounts of detection solution and Kinase-Glo reagent were added. The luminescence (RLU) was measured using a SpectraMax M5e plate reader. As controls, cells were incubated with plant extracts, CTX, or DMEM alone. The cAMP level was calculated as the change in RLU (ΔRLU) between the sample incubated with plant extract/gallic acid/DMEM alone and the sample incubated with a mixture of plant extract/gallic acid/DMEM and CTX. Each sample was tested in triplicate.
Ganglioside GM1–CTX binding assay
The ganglioside GM1–CTX interaction was analyzed according to a published protocol based on the ELISA method ([18], with modification). The 96-well, clear, flat-bottom immuno-plates (Nunc, Denmark) were coated with 25 ng ganglioside GM1 resuspended in 50 μL ethanol and incubated at 37 °C for 2 h, to dryness. The wells were washed three times with 200 μL wash buffer (0.05% Tween 20 in PBS, pH 7.5), blocked with 200 μL blocking buffer (0.5% bovine serum albumin (BSA, Sigma, USA) in PBS) for 18 h at 4 °C, and washed three more times with 200 μL wash buffer. Wells coated with ganglioside GM1 were incubated with 2.5, 1.25, 0.625, 0.3, 0.15 or 0.075 mg/mL of plant extracts and 0.25 μg/mL CTX for 2 h, in a total volume of 200 μL. As negative controls, three wells coated with GM1 were incubated with CTX to provide maximal measurements or with 2.5 mg/mL plant extract to provide baseline measurements. To prepare a positive control, the binding sites of CTB were blocked with free GM1 as an inhibitor, by pre-incubating CTX with GM1 for 1 h. The resulting, inactivated CTX was added to wells coated with GM1 and incubated as described above. The wells were washed three times with 200 μL of wash buffer followed by incubation with 50 μL of anti-CTB antibody (Invitrogen), diluted 1:4000 in 0.5% bovine serum albumin (BSA), for 90 min at room temperature. After triple washing, wells were incubated for 75 min at room temperature with 50 μL of secondary anti-mouse antibody conjugated with horseradish peroxidase (Sigma), diluted 1:15,000 in 0.5% BSA. Wells were washed three times with wash buffer, once with PBS and then dried. To visualize the CTX bound to GM1, 100 μL 3,3′,5,5′-tetramethylbenzidine (Millipore, USA) was added to each well and incubated for 15 min. To stop the reaction, 100 μL of stop solution (0.5 M H2SO4) was added and the absorbance was measured at 450 nm using a SpectraMax M5e plate reader. Each assay was repeated six times.
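Assuming the CTX-only wells define maximal (100%) binding and the extract-only wells define the baseline (0%), the competitive-ELISA readout can be normalized as in this minimal Python sketch; all absorbance values are hypothetical illustrations, not data from this study:

```python
def percent_binding(a_sample, a_baseline, a_max):
    """Normalize a GM1-ELISA absorbance (450 nm) between the
    extract-only baseline (0% binding) and the CTX-only
    maximum (100% binding)."""
    return (a_sample - a_baseline) * 100.0 / (a_max - a_baseline)

# Hypothetical readings: CTX-only 1.2, extract-only 0.1, test well 0.45
print(round(percent_binding(0.45, 0.1, 1.2), 1))  # 31.8
```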
Ganglioside GM1 and CTB-FITC binding assay
The protocol for this method is similar to that used for CTX (above). Instead of unlabeled CTX, 1.25 μg/mL of CTB-FITC (Sigma, Israel), CTB labeled with a fluorescent fluorescein derivative, was incubated with the appropriate concentration of plant extracts (2.5–0.075 mg/mL, serial dilutions) in a total volume of 200 μL. After 2 h incubation at room temperature, wells were washed three times with PBS. Next, 200 μL of PBS was added and the fluorescence intensity was measured at 490 nm excitation and 525 nm emission, using an Infinite M1000 PRO plate reader (Tecan). Each assay was repeated six times.
Fluorescence activated cell sorting (FACS) assay: quantitative assay
This assay was performed to analyze the ability of plant extracts to inhibit the binding of CTB-FITC to ganglioside GM1 naturally embedded in the extracellular surfaces of fibroblasts (C688). C688 cells (5 × 10⁴) were re-suspended in DMEM and incubated for 60 min at 37 °C with 2.5 mg plant extract and 0.25 μg CTB-FITC in 1 mL total volume. As controls, we used: (i) cells exposed only to toxin, which served as a negative control, (ii) cells exposed only to plant extract, which allowed us to account for autofluorescence from the plant extract and (iii) cells treated with inactivated CTB-FITC, obtained by pre-incubation of 0.5 μg/mL GM1 and 0.25 μg/mL CTB-FITC for 1 h, which served as a positive control. To all samples, 50 μg/mL propidium iodide (PI, Sigma, USA) was added to determine cell viability. The number of stained cells and the intensity of the fluorescence of 10⁴ cells were analyzed by flow cytometry (FACSCalibur, Becton Dickinson, USA) in two channels: FL-1 (green) for FITC and FL-3 (red) for PI. The data were analyzed using CellQuest acquisition/analysis software. Each sample was tested in three independent assays.
Fluorescent microscopy – qualitative assay
This assay was performed to visualize the activity of the plant extract and CTB-FITC on cells containing ganglioside GM1. C688 cells (5 × 10³) were seeded onto cover glass slips and cultured for 18 h according to the described procedure. Cells adherent to the glass were washed in PBS and incubated at 37 °C for 1 h with 5.0, 2.5 or 1.25 mg/mL plant extract and 0.25 μg/mL CTB-FITC in 500 μL DMEM, and then fixed with 3% paraformaldehyde. The nuclei were stained with 0.3 mg/mL of 2-(4-amidinophenyl)-6-indolecarbamidine dihydrochloride (DAPI, Sigma, Israel). The cover glass slips were mounted on microscope slides using Mowiol solution and observed under a fluorescence microscope (Axio Observer Z1, Zeiss). Each sample was tested in three independent repeats.
Discontinuous polyacrylamide gel electrophoresis under denaturing conditions (SDS-PAGE)
Our protocol is based on a published method with some modifications [31]. CTX (2 μg) was incubated with plant extracts in a two-fold dilution series (1–0.0015 mg) for 1 h at room temperature. The total sample volume was 12 μL. Next, samples were applied to 10% denaturing gels and, using the Tris/Tricine discontinuous electrophoresis system ([32], with modifications), run for 90 min, first at 90 V (15 min) and then at 130 V. The gels were washed and stained with Blue BANDit Protein Stain reagent (VWR, USA). As a negative control, we used CTX without plant extracts. To avoid false-positive results and account for background staining, we also ran plant extracts without CTX. As positive controls, CTX was incubated with ganglioside GM1 solution (2–0.03 μg) or gallic acid (0.1–0.0015 mg). The experiments were performed in triplicate.
Statistical analysis
Data were obtained from three or six measurements, as defined in the specific sections, and are expressed as means ± standard deviation. Statistical analyses were performed using one-way ANOVA, followed by the Bonferroni-Holm post hoc test [33]. Differences between groups were considered statistically significant at p-values less than 0.05 after the Bonferroni-Holm correction.
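The Bonferroni-Holm step-down adjustment applied to the pairwise comparisons can be sketched in pure Python; the raw p-values below are hypothetical and only illustrate the procedure:

```python
def holm_adjust(p_values):
    """Bonferroni-Holm step-down adjustment of raw p-values.

    The smallest p-value is multiplied by m, the next by m-1, and so on,
    enforcing monotonicity and capping at 1.0.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, smallest p first
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * p_values[i])  # step-down multiplier
        adjusted[i] = min(1.0, running)
    return adjusted

# Hypothetical raw p-values from three pairwise comparisons:
print(holm_adjust([0.01, 0.04, 0.03]))  # [0.03, 0.06, 0.06]
```

An adjusted p-value below 0.05 then corresponds to a significant group difference under the criterion stated above.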
Preparation and preliminary characterization of plant extracts
After hot water extraction and filtration, the lyophilized plant powders were stored at −20 °C. The efficiency of the extraction process is shown for each plant species in Table 3. The yield varied between 11 and 19%, but was almost 30% for rosehip. The results of the phytochemical analyses [34,35,36] of each aqueous extract are given in Additional file 1. The cytotoxic activity of all extracts was tested using the MTT method against C688 cells (Table 3). Concentrations above 2.5 mg/mL of raspberry leaf, blackberry leaf and wild strawberry leaf were found to be cytotoxic; for agrimony this effect appeared above 5 mg/mL, while the rosehip extract was not cytotoxic even at 10 mg/mL.
Antimicrobial activity tests
The antimicrobial activity of the plant extracts was analyzed using standard microdilution assays, stages 2 and 3 of Table 1. Bacteria were cultured with plant extracts in the concentration range 0.078–2.5 mg/mL, and additionally at 10 mg/mL. With increasing concentrations of plant extracts, the OD600 of 18 h cultures decreased, which suggests that the plant extracts slowed bacterial growth but did not reduce it by 80% (Fig. 1a). At 2.5 mg/mL, each extract limited bacterial growth (Fig. 1b) but did not possess bactericidal activity against V. cholerae. When the concentration of the plant extracts was increased to 10 mg/mL (data not shown), bactericidal activity was observed for agrimony and blackberry leaf extracts. None of the tested plant extracts, even at 10 mg/mL, inhibited the growth of Lactobacillus rhamnosus, which was chosen to represent beneficial gut bacteria. Chloramphenicol at 35 μg/mL, used as a positive control, had bactericidal activity against all of the tested bacterial species (it reduced bacterial culture densities by more than 99% compared to untreated samples). Table 4 summarizes the antimicrobial results, with significant data (effects at or below traditional doses) given in bold.
Bacteriostatic properties of plant extracts. a Density of 18 h V. cholerae cultures as a function of incubation with different plant extract concentrations (0–2.5 mg/mL); b Culture density after 18 h incubation of four bacterial strains: V. cholerae (blue), E. coli ATCC 25922 (grey), E. coli O44 (green) and L. rhamnosus (yellow) with 2.5 mg/mL aqueous extracts of plant extracts. Values are mean ± standard errors of three independent assays. * p < 0.05, compared with untreated bacterial cultures (100%)
Table 4 Concentration of plant extracts to reduce bacterial culture densities by 50%
Aqueous plant extracts reduce cAMP production
Changes in cAMP levels in cell cultures were determined using the cAMP-Glo™ Max Assay from Promega. This assay reports the cumulative effect of the plant extracts on stages 7–10 of Table 1, as well as other possible intracellular effects not specified here. Treating cell cultures with cholera toxin resulted in higher cAMP concentrations in the cell, which then led to a reduction in the luminescence intensity (RLU). The difference in luminescence (ΔRLU) between untreated and cholera toxin treated cells was around 12,000 RLU, and this served as a control for plant extract or gallic acid treated samples (Fig. 2). The application of 2.5 mg/mL agrimony extract reduced cAMP levels by 70% compared to the control, while application of the same concentration of wild strawberry leaf, blackberry leaf, raspberry leaf or gallic acid reduced them by more than 90%. Incubation with 10 mg/mL rosehip extract reduced the cAMP level by 65%. These positive results indicate that the tested plant extracts interfere with the mechanism of action of cholera toxin to different extents, and we employed further tests to probe the roles of the different plant extracts.
Plant extracts modulate the increased cAMP concentrations caused by the addition of cholera toxin to cell cultures. Bioluminescent assay showing the differences (ΔRLU) between cAMP production by fibroblasts treated only with plant extracts (2.5 mg/mL agrimony, blackberry leaf, raspberry leaf, wild strawberry leaf, gallic acid and 10 mg/mL rosehip) and fibroblasts treated with plant extracts and CTX (25 nM) (white). As a control, the difference between untreated and CTX treated cells is shown (grey). Values are mean ± standard error of three independent assays. * p < 0.0023, compared with control
Aqueous plant extracts inhibit the binding of CTX and CTB to ganglioside GM1
We next investigated whether plant extracts inhibit the binding of cholera toxin to immobilized GM1 receptors, stages 8–9 of Table 1. The inhibitory ability of Rosaceae extracts on the binding of CTB and CTX to ganglioside GM1 was evaluated by competitive GM1-ELISA (Figs. 3 and 4). The amount of CTX and CTB bound to GM1 in the presence of a fixed concentration (2.5 mg/mL) of plant extract is shown in Fig. 3, while the effect of variable plant extract concentrations on toxin binding to GM1 is given in Fig. 4 and Table 5. Each plant extract inhibits the binding of toxin to GM1, but the efficiency varies among extracts. The weakest suppression of toxin binding to GM1 was found for rosehip extract (Fig. 4d). Incubation with 2.5 mg/mL of rosehip extract resulted in a 68% reduction in the binding of CTB, but only a more modest 28% reduction for CTX, to the immobilized GM1. The minimal plant extract concentration necessary to inhibit 50% of the CTX bound to GM1 was found to be 0.15 mg/mL for blackberry leaf (Fig. 4b) and raspberry leaf (Fig. 4c), and 0.3 mg/mL for agrimony (Fig. 4a) and wild strawberry leaf (Fig. 4e).
Plant extract interference in the cholera toxin recognition of GM1 receptors. Comparison of 2.5 mg/mL plant extract activities preventing the binding of subunit B of cholera toxin (white) and cholera toxin (gray) to immobilized GM1. Values are means ± standard errors of six independent assays, p < 0.01, compared with untreated CTX or CTB bound to receptors (100%) for all data; * p < 0.01, for differential response of a plant extract on the binding of CTX or CTB to GM1
Plant extract effects on the prevention of binding cholera toxin to immobilized GM1. Modified ELISA binding assay showing the amount of CTX (black) and CTB (grey) immobilized on GM1-coated microplates as a function of the 2.5–0 mg/mL aqueous plant extracts: a agrimony, b blackberry leaf, c raspberry leaf, d rosehip, e wild strawberry leaf. Values are mean ± standard error of six independent assays. * p < 0.01, compared with CTX or CTB (100%)
Table 5 Summary of beneficial plant extract effectsa, b
Aqueous plant extracts prevent CTB-binding to human fibroblasts
CTB conjugated with a fluorophore is commonly used as a marker for lipid rafts, which are rich in gangliosides [37,38,39]. Exploiting this fact, we added CTB-FITC to human fibroblasts, which led to strong fluorescent labeling of human C688 fibroblast cells (Figs. 5 and 6). Fluorescence cytometry gives a measure of CTB-FITC labeling of C688 cells in the presence and absence of plant extracts (Fig. 5). This assay also tests stages 8 and 9 of Table 1. The addition of CTB-FITC resulted in a distinct change in the fluorescence labeling of C688 cells compared to the normalized background readings. A significant lowering of the number of CTB-FITC labeled C688 cells was observed after pre-incubation of CTB-FITC with the positive control, ganglioside GM1 (Fig. 5f). Pre-incubation of CTB-FITC with extracts of agrimony (Fig. 5a) and blackberry leaf (Fig. 5b) gave results similar to GM1. The application of raspberry leaf (Fig. 5c), rosehip (Fig. 5d) and wild strawberry leaf (Fig. 5e) extracts only partly decreased the fluorescence intensity of labeled cells. The standard DNA staining method, based on propidium iodide (PI), was used to determine the cytotoxic effect of plant extracts during this assay. The number of cells labeled with PI did not increase in the presence of any of the plant extracts, indicating that the plant extracts were not cytotoxic at the 2.5 mg/mL concentration for the fibroblast C688 cell line.
Plant extract effects on the prevention of binding cholera toxin to cellular receptors. Flow cytometry assay showing the degree of labeling of C688 cells after incubation with a mixture of 2.5 mg/mL plant extract and 0.25 μg/mL CTB-FITC, where the black trace represents cells treated only with CTB-FITC, the grey trace cells treated only with plant extract, and the filled grey trace cells treated with CTB-FITC and plant extract: a agrimony, b blackberry leaf, c raspberry leaf, d rosehip, e wild strawberry leaf and f GM1
Plant extract effects on the prevention of binding cholera toxin to cellular receptors or its internalization. Fluorescent microscope assay showing the CTB-FITC (green) labeling of fibroblasts after incubation with 2.5 and 1.25 mg/mL of: a agrimony, b blackberry leaf, c raspberry leaf; d rosehip (2.5 and 10 mg/mL); and e wild strawberry leaf (2.5 and 5.0 mg/mL). As controls, f shows (1) untreated cells and (2) cells incubated only with CTB-FITC, and g shows cells incubated with 1.25 or 2.5 mg/mL gallic acid and CTB-FITC. The nuclei of the analyzed cells were stained with DAPI (blue). The red arrows show toxin accumulated in the Golgi apparatus (see Additional file 2 for co-localization with a Golgi marker), while white arrows show toxin bound to the cell surface. Scale bar is 10 μm
The inhibition of CTB-FITC binding by plant extracts was also observed by fluorescence microscopy (Fig. 6). C688 cells were incubated for 1 h with CTB-FITC, then fixed to glass slides and observed under a fluorescence microscope. In cells treated only with CTB-FITC, the CTB-FITC accumulated in the perinuclear region (Fig. 6f2, red arrow), mostly within the Golgi apparatus (Additional file 2). A similar, perinuclear localization of CTB-FITC was observed for cells incubated with rosehip at 2.5 (Fig. 6d1) and 5.0 mg/mL concentrations, which suggests that lower concentrations of this extract do not suppress CTB binding and internalization. A higher concentration of rosehip extract (10 mg/mL) decreased the amount of toxin bound to the cell surface and suppressed its accumulation in the perinuclear region (Fig. 6d2). The application of 2.5 mg/mL of agrimony (Fig. 6a2), blackberry leaf (Fig. 6b2) or wild strawberry leaf (Fig. 6e2) inhibited CTB-FITC binding to the cell surface. Inhibition of CTB binding was also observed at 1.25 mg/mL for agrimony (Fig. 6a1). After incubation with 1.25 mg/mL of blackberry leaf (Fig. 6b1), CTB-FITC bound to the cell surface but did not accumulate in the perinuclear region. The same result was observed after incubation with 2.5 mg/mL raspberry leaf (Fig. 6c1) and 1.25 mg/mL wild strawberry leaf (Fig. 6e1). Incubation with an increased (5 mg/mL) concentration of raspberry leaf extract (Fig. 6c2) decreased the amount of toxin bound to the cell surface.
Interactions between cholera toxin and plant extracts revealed by SDS-PAGE
We used polyacrylamide gel electrophoresis to discriminate between stages 7 and 8 of Table 1. During electrophoresis under Tris/Tricine denaturing conditions, cholera toxin is separated into a 57 kDa pentameric subunit B (B5), a 28 kDa subunit A and a small amount of the 11.4 kDa monomeric subunit B (Fig. 7, lane 1). The application of agrimony, blackberry leaf, raspberry leaf and wild strawberry leaf extracts caused changes in the cholera toxin structure (Fig. 7a-c, e). At the highest extract concentrations, we observed the disappearance of all CTX components (lanes 3 and 4). With decreasing concentrations of plant extracts, the intensity of the subunit A band increased (lanes 5–8). The application of lower plant extract concentrations (0.06–0.0015 mg, lanes 6–8) resulted in the formation of aggregates (red arrow). To exclude background staining from the plant extracts, we applied samples with plant extract but without CTX (Fig. 7, lane 2).
SDS-PAGE analysis of the interaction between cholera toxin and plant extracts. a agrimony, b blackberry leaf, c raspberry leaf, d rosehip, e wild strawberry leaf, f gallic acid control and g GM1 control. For all panels, lane 1 contains 2 μg CTX (dissociated into: B5, 57 kDa; A, 28 kDa; and monomer B, 11.4 kDa) and lane 2 contains 1 mg of plant extract, 0.1 mg gallic acid or 2 μg GM1. For panels (a-e), lanes 3–8 contain 2 μg CTX and 1.0, 0.5, 0.25, 0.125, 0.06, 0.03 or 0.0015 mg plant extract. For panel f, lanes 3–8 contain 2 μg CTX and 0.1, 0.05, 0.025, 0.0125, 0.006, 0.003 or 0.0015 mg gallic acid. For panel g, lanes 3–8 contain 2 μg CTX and 2.0, 1.0, 0.5, 0.25, 0.125, 0.06 or 0.03 μg GM1. The red arrow indicates aggregated protein; the blue arrow shows an increased amount of monomer B. The reaction volume was 12 μL for all samples
To understand the mechanism of plant extract action, two known controls were used. The first control is based on the ability of gallic acid to inhibit cholera toxin internalization [19]. This polyphenol causes dissociation of CTB into subunit B monomers (Fig. 7f, lanes 3–5, blue arrow). In the second control, CTX was inactivated by ganglioside GM1 in solution. These receptors effectively block the ganglioside binding site of cholera toxin and cause the aggregation of subunit B, marked by the red arrow in Fig. 7g, lanes 3–5. The intensity of the subunit A band does not depend on the GM1 concentration. Comparing the application of plant extracts and controls, two modes of action can be distinguished. All the plant extracts except rosehip have a mechanism of action similar to the GM1/CTX control, where aggregation of the toxin blocks binding to the ganglioside receptors in human cells. In contrast, rosehip behaves similarly to the gallic acid/CTX complex.
Oral Rehydration Therapy (ORT) is commonly used and recommended by WHO for the treatment of acute diarrhea, including cholera. This therapy is based on solutions of oral rehydration salts (ORS) containing: glucose, sodium, chloride, potassium and citrate. ORT is used to protect the organism against dehydration caused by excessive fluid losses by vomiting and watery stools, but does not act against bacteria and their toxins [40]. The incorporation of plant extracts into ORS formulae can give several potential benefits such as providing a broad range of antimicrobial and anti-toxin compounds, as well as a taste that might be better accepted by young children, Table 1. On this basis, we investigated the properties of plant extracts used in Poland to treat non-specific diarrhea.
The MTT assay and separate staining of C688 cells with PI indicated that the cytotoxicity of all the plant extracts appeared only above 2.5 mg/mL. Assuming that the traditional preparation generates similar extraction yields (Table 3), cytotoxic effects were observed above the traditional doses for all plant extracts except rosehip. The safe concentration for rosehip is calculated as a 13 g/L preparation, compared with a traditional preparation of 10–53 g/L (Table 2). Overall, there appeared to be no correlation between the various phytochemical analyses (phenolic or flavonoid contents, two antioxidant measures; Additional file 1) and the results of any of the antimicrobial or anti-toxin assays.
According to published data, agrimony [12], blackberry leaf [13] and raspberry leaf [13, 14] show a weak antimicrobial activity against E. coli strains. Our results are consistent with these data (Fig. 1). Specific, mild bacteriostatic potentials against V. cholerae were recorded for four of the plant extracts (Table 4), at or below concentrations used in traditional remedies. A bacteriostatic effect was measured just above the traditional preparation range for blackberry leaf. The specific growth inhibition may help beneficial bacterial colonies, for example L. rhamnosus, which was unaffected by the plant extracts, compete with V. cholerae and delay the onset of diarrheal episodes. However, the bacteriostatic properties may contribute to asymptomatic carriage [41, 42]. Therefore, we cannot conclude whether the bacteriostatic properties of the plant extracts are beneficial, or not, in a traditional cure of diarrhea related to cholera.
The toxic action of cholera toxin is related to the continuous production of cAMP, which leads to the opening of chloride channels and secretion of chloride anions [9]. To test whether plant extracts prevent cAMP activation by CTX, the intracellular level of cAMP in cell cultures was measured. According to our data, all plant extracts suppressed the action of cholera toxin on cAMP levels at concentrations below the traditional dose. This assay covers a broad number of stages in Table 1, as well as a number of intracellular steps in the internalization of CTX through the endosomes and Golgi apparatus to the endoplasmic reticulum and its secretion into the cytoplasm. Several different assays were used to better define the action of the plant extracts.
Cell based assays (Figs. 5 and 6) utilizing CTB-FITC as a fluorescent marker provided results consistent with the antibody-detected CTB and CTX in the immobilized GM1 assays in Figs. 3 and 4, and with the cAMP results (Fig. 2). Positive effects of plant extracts on the inhibition of CTB-FITC binding to C688 cells were observed. Two phenomena could be distinguished by flow cytometry and fluorescence microscopy. At higher plant extract doses, lower labeling of C688 cells was reported by both methods. At intermediate levels, fluorescence microscopy revealed that while CTB-FITC labeled the C688 cells, the process of cellular internalization was probably blocked, because CTB-FITC did not accumulate in the perinuclear region. With respect to flow cytometry data, blocking of cellular internalization would not be recognized as a positive effect of the plant extracts, as the cells are still fluorescently labeled. Flow cytometry is a faster, more economical tool for screening plant extracts but may miss some beneficial plant extracts. Fluorescence microscopy adds detail to the flow cytometry data, perhaps accounting for the aggregation properties of some of the plant extracts revealed by SDS-PAGE. High plant extract doses may fully aggregate the toxin and block the CTB/CTX recognition sites for GM1 receptors, but intermediate plant extract concentrations may produce aggregates with remaining active GM1 binding sites that allow for cellular labeling. Regarding traditional levels, positive results were observed for low to mid traditional concentrations of agrimony and raspberry leaf. In the higher range of traditional concentrations, positive results were obtained for blackberry leaf and rosehip. Positive results for wild strawberry leaf were obtained only at concentrations greater than traditionally used.
Several plant extracts have been documented to interact with cholera-like toxins through different mechanisms. The Mexican plant Chiranthodendron pentadactylon Larreat [31] and the Chinese plant Chaenomeles speciosa [43] can block the binding site of the heat-labile enterotoxin produced by E. coli (LTX). The active compounds of these plants include gallic [19], oleanolic, ursolic and betulinic acids [43], and (−)-epicatechin [31]. These compounds bind to the toxin binding site for gangliosides (GM1 and others), and thus cause toxin inactivation by competition. It was also shown that plant polyphenols, such as Applephenon from apples, reduced the amount of cholera toxin bound to receptors and suppressed toxin internalization, probably by toxin aggregation [18]. SDS-PAGE analysis reveals that agrimony, raspberry leaf, blackberry leaf and wild strawberry leaf extracts cause aggregation of the toxin. This mechanism appears similar to that observed after incubation of the toxin with ganglioside GM1 (Fig. 7). Probably, active compounds of agrimony, blackberry leaf and wild strawberry leaf bind to the cholera toxin and block the binding site or change the toxin conformation, suppressing its binding to GM1 in the ELISA and cell-based assays. The raspberry leaf active compounds also cause aggregation, but do not block toxin binding to the cell surface. However, according to fluorescence microscopy and cAMP data, they do inhibit toxin internalization. SDS-PAGE analysis showed that rosehip extract (at 10 mg/mL concentration) behaves like gallic acid (Fig. 7). It remains to be seen if the different effects of rosehip and the other tested extracts have a synergistic effect.
According to our data, all plant extracts inhibited CTB binding to GM1 more strongly than CTX binding. This effect might be related to the toxin structure. Possibly, when CTA is bound to CTB, part of the CTB structure is masked by CTA from the action of the plant extract compounds. When CTB is alone, it has larger, unprotected surfaces that can be easily accessed by plant compounds, causing changes in toxin conformation and reducing its ability to bind to GM1. Another explanation is that the plant extract interacts with the fluorescent marker of CTB-FITC, reducing its fluorescent properties. Although we cannot define the exact mechanisms underlying the different effects on CTB-FITC and CTX, our data do suggest that assays involving CTB-FITC alone may overestimate the inhibitory power of plant extracts and chemical compounds on cholera toxins.
In the context of universal ORT formulations, the plant extracts should be safe for all sections of society including children, the elderly and pregnant/lactating women, as well as people suffering from long-term diseases that are incompatible with certain foodstuffs (diabetic, allergic). Extracts of raspberry leaf, in addition to being used as traditional medicine for diarrhea, are also widely used during pregnancy. While some reports indicate that raspberry leaf extracts are safe during pregnancy [44], other reports sow doubt or urge caution [45, 46]. Strawberry leaf is listed as an emmenagogue in a thorough review of plant based remedies used as abortifacients, contraceptives and sterilizers [47], although the original citation [48] itself cites the emmenagogue use as being historical (Ostermann, V. (1894) La vita in Friuli, Vols. I, II. Reprinted 1974, Del Bianco Ed., Udine, Italy). While traditional medicine may have been used for several centuries, the lack of properly conducted medical studies precludes the incorporation of any of the plant extracts into a universal ORS formula without further safety studies.
The obtained results indicate that the traditional application of agrimony, raspberry leaf, blackberry leaf, rosehip and wild strawberry leaf infusions in treating bacterial diarrhea might be effective against diseases related to AB5 enterotoxins, such as cholera. These plant extracts can inhibit bacterial growth, block toxin binding to receptors, inhibit toxin internalization into the host cell and suppress cAMP overproduction (Table 5).
The results described in this paper suggest that the studied plant extracts can eventually be useful complements to established ORT to improve the treatment of cholera-like diarrhea. The tested extracts do not kill bacteria, but do slow their growth, which might assist other antibiotic therapies. Simple aqueous plant extracts can inhibit the binding of cholera-like toxins to receptors or block toxin internalization, resulting in suppressed cAMP overproduction and chloride secretion, which protects the organism against rapid dehydration. The mechanisms involving the interactions between CTB/CTX and the studied plant extracts appear to be different, involving toxin aggregation and toxin deactivation. The mechanisms of action of these plants are still under investigation, and the identification of active compounds and testing for synergistic effects are planned. We applied a number of methods and found that assays based on the safer (to use in the lab) CTB may provide a greater number of positive results than assays based on the more specific CTX, while FACS may miss some positive plant extract effects compared to the more labor intensive fluorescence microscopy method. SDS-PAGE is a useful method for examining whether a plant extract can induce the ejection of the CTA toxin from CTX to leave benign CTB, which may still bind to human cells without inducing diarrhea.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
AB5: Class of enterotoxins consisting of one A subunit and five identical B subunits
ANOVA: Analysis of variance test
ATP: Adenosine triphosphate
BSA: Bovine serum albumin
C688: Primary human fibroblast line
cAMP: Cyclic adenosine monophosphate
CTA: Subunit A of cholera toxin
CTB: Subunit B of cholera toxin
CTX: Cholera toxin
DAPI: 2-(4-Amidinophenyl)-6-indolecarbamidine dihydrochloride
DMEM: Dulbecco's Modified Eagle's Medium
ELISA: Enzyme-linked immunosorbent assay
FACS: Fluorescence-activated cell sorting
FBS: Fetal bovine serum
FITC: Fluorescein isothiocyanate
GM1: Ganglioside GM1
LT and LT-II: Heat-labile enterotoxins
MBC: Minimal bactericidal concentration
MH: Mueller-Hinton broth
MQ: Milli-Q grade water (> 18 MΩ resistivity)
MTT: 3-[4,5-Dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide
OD600: Optical density at 600 nm
ORT: Oral rehydration therapy
PAGE: Polyacrylamide gel electrophoresis
PBS: Phosphate-buffered saline
SDS: Sodium dodecyl sulfate
Sánchez J, Holmgren J. Virulence factors, pathogenesis and vaccine protection in cholera and ETEC diarrhea. Curr Opin Immunol. 2005;17:388–98.
World Health Organization. Number of suspected cholera cases reaches 100 000 in Yemen. UNICEF/WHO media centre; 2017. p. 1–2. https://www.who.int/en/news-room/detail/08-06-2017-number-of-suspected-cholera-cases-reaches-100-000-in-yemen
Kisiel M, Zacharska H, Czerniawska-Ankiersztejn M, Rudas D. The case of infection Vibrio cholerae non O1 non O139 in Warsaw summary. Przegl Epidemiol. 2006;60:779–87.
Stypulkowska-Misiurewicz H, Stasiak J, Janczyk M, Tomaszewska E, Pancer K. Vibrio cholerae non-O1 isolated in Poland from the Bug River. Przeg Epid. 1995;49:237–43.
Stypulkowska-Misiurewicz H, Pancer K, Roszkowiak A. Two unrelated cases of septicaemia due to Vibrio cholerae non-O1, non-O139 in Poland, July and August 2006. Euro Surveill. 2006;11:E061130.2.
Rasko DA, Webster DR, Sahl JW, Bashir A, Boisen N, Scheutz F, et al. Origins of the E. coli strain causing an outbreak of hemolytic–uremic syndrome in Germany. N Engl J Med. 2011;365:709–17.
Schauder S, Shokat K, Surette MG, Bassler BL. The LuxS family of bacterial autoinducers: biosynthesis of a novel quorum-sensing signal molecule. Mol Microbiol. 2001;41:463–76.
Spangler BD. Structure and function of cholera toxin and the related Escherichia coli heat-labile enterotoxin. Microbiol Rev. 1992;56:622–47.
Wernick NLB, Chinnapen DJF, Cho JA, Lencer WI. Cholera toxin: an intracellular journey into the cytosol by way of the endoplasmic reticulum. Toxins (Basel). 2010;2:310–25.
Brijesh S, Daswani P, Tetali P, Antia N, Birdi T. Studies on the antidiarrhoeal activity of Aegle marmelos unripe fruit: validating its traditional usage. BMC Complement Altern Med. 2009;9:47.
Watt K, Christofi N, Young R. The detection of antibacterial actions of whole herb tinctures using luminescent Escherichia coli. Phytother Res. 2007;1199:1193–9.
Muruzović M, Mladenović KG, Stefanović OD, Vasić SM, Čomić LR. Extracts of Agrimonia eupatoria L. as sources of biologically active compounds and evaluation of their antioxidant, antimicrobial, and antibiofilm activities. J Food Drug Anal. 2016;24:539–47.
Denev P, Kratchanova M, Ciz M, Lojek A, Vasicek O, Blazheva D, et al. Antioxidant, antimicrobial and neutrophil-modulating activities of herb extracts. Acta Biochim Pol. 2014;61:359–67.
Rauha J-PP, Remes S, Heinonen M, Hopia A, Kähkönen M, Kujala T, et al. Antimicrobial effects of Finnish plant extracts containing flavonoids and other phenolic compounds. Int J Food Microbiol. 2000;56:3–12.
Ng WL, Perez L, Cong J, Semmelhack MF, Bassler BL. Broad spectrum pro-quorum-sensing molecules as inhibitors of virulence in vibrios. PLoS Pathog. 2012;8:e1002767.
Chatterjee S, Asakura M, Chowdhury N, Neogi SB, Sugimoto N, Haldar S, et al. Capsaicin, a potential inhibitor of cholera toxin production in Vibrio cholerae. FEMS Microbiol Lett. 2010;306:54–60.
Yamasaki S, Asakura M, Neogi SB, Hinenoya A, Iwaoka E, Aoki S. Inhibition of virulence potential of Vibrio cholerae by natural compounds. Indian J Med Res. 2011;133:232–9.
Morinaga N, Iwamarul Y, Yahiro K, Tagashira M, Moss J, Noda M. Differential activities of plant polyphenols on the binding and internalization of cholera toxin in vero cells. J Biol Chem. 2005;280:23303–9.
Chen J-C, Ho T-Y, Chang Y-S, Wu S-L, Hsiang C-Y. Anti-diarrheal effect of Galla Chinensis on the Escherichia coli heat-labile enterotoxin and ganglioside interaction. J Ethnopharmacol. 2006;103:385–91.
Morinaga N, Yahiro K, Noda M. Resveratrol, a natural polyphenolic compound, inhibits cholera toxin-induced cyclic AMP accumulation in Vero cells. Toxicon. 2010;56:29–35.
Tradtrantip L, Ko E-A, Verkman AS. Antidiarrheal efficacy and cellular mechanisms of a Thai herbal remedy. PLoS Negl Trop Dis. 2014;8:e2674.
Pieścik-Lech M, Szymański H, Szajewska H. Efficacy and safety of a new apple-flavoured oral rehydration solution in children with acute gastroenteritis: a double-blind randomized controlled trial. Acta Paediatr. 2012;101:e458–64.
Komiazyk M, Palczewska M, Sitkiewicz I, Groves P. Use of plant extracts to control and treat AB 5 enterotoxin-related diarrhea. Pol J Microbiol. 2014;63:3–14.
Chrubasik C, Roufogalis BD, Müller-Ladner U, Chrubasik S. A systematic review on the Rosa canina effect and efficacy profiles. Phytother Res. 2008;22:725–33.
EMEA. Community herbal monograph on Rubus idaeus L., folium. Eur Med Agency. 2014. https://www.ema.europa.eu/en/documents/herbal-monograph/final-community-herbal-monograph-rubus-idaeus-l-folium_en.pdf.
EMEA. Assessment report on Agrimonia eupatoria L., Herba. 2014. https://www.ema.europa.eu/en/documents/herbal-report/draft-assessment-report-agrimonia-eupatoria-l-herba_en.pdf.
Mikołajczyk K, Wierzbicki A. Poznajemy zioła. Warsaw: Chemil; 1989.
Olechnowicz-Stepien W, Lamer-Zarawska E. Rośliny lecznicze stosowane u dzieci. Warsaw: Państwowy Zakład Wydawnictw Lekarskich; 1986.
Barua D. History of cholera. In: Barua D, Greenough III WB, editors. Cholera. New York: Plenum Medical Book Company; 1992.
Woś M, Szczepanowska J, Pikuła S, Tylki-Szymańska A, Zabłocki K, Bandorowicz-Pikuła J. Mitochondrial dysfunction in fibroblasts derived from patients with Niemann-pick type C disease. Arch Biochem Biophys. 2016;593:50–9.
Velázquez C, Correa-Basurto J, Garcia-Hernandez N, Barbosa E, Tesoro-Cruz E, Calzada S, et al. Anti-diarrheal activity of (−)-Epicatechin from Chiranthodendron pentadactylon Larreat: experimental and computational studies. J Ethnopharmacol. 2012;143:716–9.
Schägger H, von Jagow G. Tricine-sodium dodecyl sulfate-polyacrylamide gel electrophoresis for the separation of proteins in the range from 1 to 100 kDa. Anal Biochem. 1987;166:368–79.
Kraus D. Consolidated data analysis and presentation using an open-source add-in for the Microsoft excel® spreadsheet software. Med Writ. 2014;23:25–8.
Ainsworth EA, Gillespie KM. Estimation of total phenolic content and other oxidation substrates in plant tissues using Folin–Ciocalteu reagent. Nat Protoc. 2007;2:875–7.
Pękal A, Pyrzynska K. Evaluation of aluminium complexation reaction for flavonoid content assay. Food Anal Methods. 2014;7:1776–82.
Sun J, Wang X, Wang P, Li L, Qu W, Liang J. Antimicrobial, antioxidant and cytotoxic properties of essential oil from Dictamnus angustifolius. J Ethnopharmacol. 2015;159:296–300.
Mishima H, Sears M, Bausher L, Gregory D. Ultracytochemistry of cholera-toxin binding sites in ciliary processes. Cell Tissue Res. 1982;223:241–53.
Blank N, Schiller M, Krienke S, Wabnitz G, Ho AD, Lorenz HM. Cholera toxin binds to lipid rafts but has a limited specificity for ganglioside GM1. Immunol Cell Biol. 2007;85:378–82.
Domon MM, Besson F, Bandorowicz-Pikula J, Pikula S. Annexin A6 is recruited into lipid rafts of Niemann-pick type C disease fibroblasts in a Ca2+-dependent manner. Biochem Biophys Res Commun. 2011;405:192–6.
UNICEF, World Health Organization. Oral rehydration salts: production of the new ORS. 2006. p. 1–123.
Tamayo JF, Mosley WH, Alvero MG, Joseph PR, Gomez CZ, Montague T, et al. Studies of cholera El Tor in the Philippines. Bull World Health Organ. 1965;33:645–9.
Lewnard JA, Antillón M, Gonsalves G, Miller AM, Ko AI, Pitzer VE. Strategies to prevent cholera introduction during international personnel deployments: a computational modeling analysis based on the 2010 Haiti outbreak. PLoS Med. 2016;13:1–23.
Chen J-C, Chang Y-S, Wu S-L, Chao D-C, Chang C-S, Li C-C, et al. Inhibition of Escherichia coli heat-labile enterotoxin-induced diarrhea by Chaenomeles speciosa. J Ethnopharmacol. 2007;113:233–9.
Parsons M, Simpson M, Ponton T. Raspberry leaf and its effect on labour: safety and efficacy. Aust Coll Midwives Inc J. 1999;12:20–5.
Holst L, Haavik S, Nordeng H. Raspberry leaf - should it be recommended to pregnant women? Complement Ther Clin Pract. 2009;15:204–8.
Cheang KI, Nguyen TT, Karjane NW, Salley KES. Raspberry leaf and hypoglycemia in gestational diabetes mellitus. Obstet Gynecol. 2016;128:1421–4.
Kumar D, Kumar A, Prakash O. Potential antifertility agents from plants: a comprehensive review. J Ethnopharmacol. 2012;140:1–32.
Lokar LC, Poldini L. Herbal remedies in the traditional medicine of the Venezia Giulia region (north East Italy). J Ethnopharmacol. 1988;22:231–79.
Laboratory of Biochemistry of Lipids, Nencki Institute of Experimental Biology, 3 Pasteur Street, 02-093, Warsaw, Poland
Magdalena Komiazyk & Slawomir Pikula
Laboratory of Molecular Interactions and NMR, Instituto de Tecnologia Química e Biológica, Av. da República, 2780-157, Oeiras, Portugal
Malgorzata Palczewska & Patrick Groves
Department of Molecular Biotechnology, Chemistry Faculty, University of Gdansk, 63 Wita Stwosza Street, 80-308, Gdańsk, Poland
Malgorzata Palczewska
Department of Drug Biotechnology and Bioinformatics, National Medicines Institute, 30/34 Chełmska Street, 00-725, Warsaw, Poland
Izabela Sitkiewicz
Department of Biomedicinal Chemistry, Chemistry Faculty, University of Gdansk, ul. 63 Wita Stwosza Street, 80-308, Gdańsk, Poland
Patrick Groves
MK conducted the laboratory work, produced and tested plant extracts, obtained and analyzed the data, and prepared the first draft of the manuscript and figures. IS was involved in the design, supervision and validation of the antimicrobial tests. SP was involved in the design, supervision and validation of cell culture based experiments. Together, MP and PG originally conceived the project and designed, supervised and validated the biochemical experiments. PG oversaw the manuscript preparation and acted as general coordinator. All authors read and approved the manuscript.
Correspondence to Patrick Groves.
Additional file 1: Phytochemical characteristics of plant extracts (DOCX 20 kb)
Additional file 2: Co-localization of CTB-FITC and the Golgi Apparatus (DOCX 515 kb)
Komiazyk, M., Palczewska, M., Sitkiewicz, I. et al. Neutralization of cholera toxin by Rosaceae family plant extracts. BMC Complement Altern Med 19, 140 (2019) doi:10.1186/s12906-019-2540-6
The genetic correlation between feed conversion ratio and growth rate affects the design of a breeding program for more sustainable fish production
Mathieu Besson ORCID: orcid.org/0000-0001-7662-54511,2,
Hans Komen1,
Gus Rose1 &
Marc Vandeputte2,3
Genetics Selection Evolution volume 52, Article number: 5 (2020)
Most fish breeding programs aim at improving growth rate and include feed conversion ratio (FCR) neither in the breeding goal nor in the selection index, although decreasing FCR is known to increase farm profit and decrease environmental impacts. This is because FCR is difficult to measure in fish that live in groups and FCR is assumed to have a favourable (negative) genetic correlation with growth, although the magnitude of this correlation is unknown. We investigated the effect of the genetic correlation between growth and FCR on the economic and environmental responses of a two-trait breeding goal (growth and FCR), compared to a single-trait breeding goal (growth only). Next, we evaluated the weights to assign to growth and FCR in a two-trait breeding goal to maximize sustainability of fish production.
We used pseudo-best linear unbiased prediction (BLUP) index calculations to simulate a breeding program for sea bass. For the single-trait breeding goal, the trait in the breeding goal and in the index was thermal growth coefficient (TGC) and for the two-trait breeding goal, the traits in the breeding goal were TGC and FCR and the traits in the index were TGC and percentage of fat in the dorsal muscle (an indirect measure of FCR). We simulated responses to selection for genetic and phenotypic correlations between TGC and FCR ranging from 0 to − 0.8. Then, in the two-trait breeding goal, we calculated the economic return and the change in eutrophication when using economic values (EV) or environmental values (ENV).
When the genetic correlation between TGC and FCR was lower than − 0.45, we found major differences in economic returns and in eutrophication between single and two-trait breeding programs. At a correlation of − 0.25, the two-trait breeding goal based on EV increased economic return by 25% compared to the single-trait breeding goal, while using ENV decreased eutrophication by 1.34% per ton of fish produced after one generation of selection.
The genetic correlation between TGC and FCR affects the magnitude of economic losses due to omitting FCR in the breeding program. In addition, the genetic correlation affects the importance of choosing EV or ENV to reduce eutrophication and increase profit.
Most fish breeding companies consider growth rate as the major trait to be improved in their breeding program [1]. When a farm is operating under a quota on biomass, which is for example the case for salmon farms in Norway, improving growth rate is expected to increase farm profit through a reduction in production time, thus increasing annual production and returns. However, livestock and fish production has an impact on the environment [2, 3], which raises the need for breeding programs that reduce these impacts. Several studies have already investigated the environmental impact of genetic improvement of traits in livestock [4,5,6,7] and fish production [8, 9]. Our studies on fish showed that improving feed conversion ratio (FCR; the ratio of feed intake over body weight gain) can increase profit and decrease environmental impacts at the same time, which makes \({\text{FCR}}\) an essential trait to include in breeding programs. However, unlike terrestrial livestock, feed efficiency is typically not included in fish breeding programs because individual feed intake cannot be measured accurately in group-reared fish (see review in [10]), and because feed efficiency is assumed to have a favourable (negative) correlation with growth rate, although the exact value of this correlation is uncertain. Several studies in terrestrial animals and in fish have reported a negative genetic correlation between growth and \({\text{FCR}}\) [11, 12], whereas other studies on fish showed a zero correlation (e.g. in brown trout [13,14,15]).
The genetic response of a breeding program depends on the phenotypic and genetic correlations between the traits in the breeding goal and in the corresponding selection index. In the case of a single-trait breeding goal where the trait of interest is e.g. growth rate, the response depends only on the heritability and phenotypic variance of growth rate, and on the intensity of selection. A correlated response in FCR will depend on the genetic standard deviation of \({\text{FCR}}\) and on the genetic correlation between \({\text{FCR}}\) and growth rate. In a breeding goal with growth rate and FCR, the response depends not only on the phenotypic and genetic correlations between the traits in the selection index and in the breeding goal but also on the weights applied to the traits in the breeding goal (Eq. 3, see below). When the main objective of selection is to maximise farm profit, the weights used in the breeding goal are economic values (EV). Using these values in a breeding goal and the optimal index corresponding to that breeding goal optimizes the direction and magnitude of the genetic responses in growth rate and FCR to maximize the economic return of genetic improvement.
However, EV might not be the best weights to enhance the environmental sustainability of fish production. In a previous study [8], we calculated environmental values (ENV) of fish traits by combining bio-economic modelling and life cycle assessment (LCA) [16], as in van Middelaar et al. [6]. Similar to EV, ENV express the change in different categories of environmental impact (e.g. climate change or eutrophication) when changing one trait and keeping the other traits in the breeding goal constant. These ENV can be used as weights in the breeding goal to derive a selection index that maximizes the reduction of the environmental impacts of fish production. In this study, we calculated EV based on the impact of genetic change on profit per farm per year and ENV based on differences in kg of pollutants emitted per farm per year (Eqs. (1) and (2), see below). We chose these units because most pricing mechanisms, constraints on inputs and outputs, and management variables act at the farm level [17]. At the farm level, the ENV consider the absolute change in environmental impacts to reflect the environmental impact of a farming site (i.e. benthos degradation, dissolved nutrient emissions, ecosystem changes).
In this paper, we explored different strategies to enhance the economic and environmental sustainability of fish production. First, we explored the potential gain in economic return of upgrading a simple breeding program for growth rate only by including FCR in the breeding goals and percentage of fat in the dorsal muscle as an indirect criterion of FCR in the index. We explored this potential gain as a function of the genetic and phenotypic correlation between growth and \({\text{FCR}}\). Then, we compared the response to selection in terms of economic gains and change in eutrophication for the two-trait (growth and \({\text{FCR}}\)) breeding goal using economic (EV) or environmental weights (ENV).
In a previous study [9], we calculated EV and ENV for thermal growth coefficient \(\left( {{\text{TGC}}\;{\text{in}}\;{\text{g}}^{1/3} \cdot {\text{d}}^{ - 1} \cdot {\text{C}}^{ - 1} } \right)\) and \({\text{FCR}}\) using a bio-economic model and an LCA for sea bass reared in sea cages. The approach and models used are briefly described below.
Bio-economic model
The bio-economic model estimated the production of sea bass (Dicentrarchus labrax) in a hypothetical sea cage farm producing 1000 tons of sea bass per year, where the instant biomass present on site was constrained to 435 tons ("standing stock" or "biomass" quota). The farm was composed of 34 circular cages of 600 m3 for pre-growing and 34 circular cages of 1800 m3 for on-growing. Fish were stocked at 10 g and sold at a fixed harvest weight of 400 g. Stocking occurred all year round. The bio-economic model was divided into four model parts.
The fish model estimates individual fish growth using \({\text{TGC}}\) corrected for the concave relationship between growth rate and temperature [18]. \({\text{FCR}}\) was modelled by combining a third order polynomial model from Person-Le Ruyet et al. [19] that models FCR as a function of temperature at a fixed body weight with an exponential model from Lanari et al. [20] that models the variation of FCR with fish body weight. The fish model also estimates the individual emission of nutrient-based pollutants using mass-balance [21, 22].
The batch model estimates the average stocking density of a batch depending on individual fish performances (from the fish model) and mortality. A batch is defined as the group of fish stocked at the same time in the same pre-growing cage.
The farm model estimates the number of batches produced to calculate annual fish production, emission of pollutants, and annual feed consumption, while complying with the quota on biomass.
Finally, in the economic model, annual profit is calculated by combining results of the farm model with economic parameters.
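As a toy illustration of the fish model above, growth under the TGC equation and the feed budget implied by FCR can be sketched as follows. This is a minimal sketch: the constant temperature of 20 °C and the constant FCR are simplifying assumptions, whereas the paper's model corrects TGC for the growth-temperature relationship and lets FCR vary with temperature and body weight.

```python
# Fish model sketch: TGC growth projection and a fixed-FCR feed budget.
# Constant temperature (20 C, an assumption) and constant FCR simplify
# the paper's model, which lets both vary over the production cycle.

def weight_after(w0_g, tgc, temp_c, days):
    """Body weight (g) from the TGC equation:
    W_t = (W_0^(1/3) + TGC * T * t / 1000)^3."""
    return (w0_g ** (1 / 3) + tgc * temp_c * days / 1000) ** 3

def days_to_harvest(w0_g, wh_g, tgc, temp_c):
    """Days to grow from w0_g to wh_g at constant temperature."""
    return 1000 * (wh_g ** (1 / 3) - w0_g ** (1 / 3)) / (tgc * temp_c)

def feed_used(w0_g, wh_g, fcr):
    """Total feed (g) over the cycle at a constant FCR."""
    return fcr * (wh_g - w0_g)

# Paper's stocking (10 g) and harvest (400 g) weights and baseline
# trait means (TGC = 2.25, FCR = 2.03)
cycle = days_to_harvest(10, 400, 2.25, 20)
feed = feed_used(10, 400, 2.03)
```

Note that with FCR held constant, total feed over the cycle depends only on the weight gained, which is why improving TGC alone changes annual feed use only through higher annual production.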
Further details about the bio-economic model are in Additional files 1, 2, 3: Tables S1, S2 and S3.
LCA is a standardized method to calculate the environmental impact of a production chain, from raw material extraction up to the product's end of life [23]. The production chain studied here included five distinct sub-systems: (1) production of purchased feed, including production of ingredients, processing, and transportation; (2) production of energy expended at the farm level (electricity, gas and petrol); (3) production of farming facilities and equipment; (4) chemicals used, including the production and use of anti-fouling for nets; (5) farming operations, including emission of nutrient based pollutants from biological transformation of feed.
Each flow of resources and pollutants observed in the system was assigned to eutrophication potential. We chose to investigate only eutrophication because quotas are essentially designed to limit the eutrophication caused by fish farming. The characterization factors in the CML2 Baseline 2000 version 2.04 method were used to compute eutrophication. The categories of impact were calculated using the Simapro® 7.0 software. Eutrophication was expressed per farm on the basis of 1 year of routine production (impact_farm). The impact_farm values were subsequently used to calculate ENV.
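The mass-balance emission of nutrients and its aggregation into eutrophication potential can be sketched as below. The CML 2 Baseline characterization factors for total N and total P released to water (0.42 and 3.06 kg PO4-eq per kg, respectively) are standard, but the feed and fish nutrient contents are illustrative assumptions rather than the model's parameter values.

```python
# Mass-balance of nutrients (what enters with the feed minus what is
# retained in the fish) aggregated into eutrophication potential with
# CML 2 Baseline characterization factors. Feed and fish nutrient
# contents below are illustrative assumptions only.

CF_N = 0.42   # kg PO4-eq per kg of N released to water (CML 2)
CF_P = 3.06   # kg PO4-eq per kg of P released to water (CML 2)

def nutrient_emission(fcr, feed_content, fish_content):
    """kg of nutrient emitted per kg of fish body weight gain."""
    return fcr * feed_content - fish_content

def eutrophication_per_kg_gain(fcr, feed_n=0.072, fish_n=0.030,
                               feed_p=0.012, fish_p=0.0065):
    n_out = nutrient_emission(fcr, feed_n, fish_n)
    p_out = nutrient_emission(fcr, feed_p, fish_p)
    return CF_N * n_out + CF_P * p_out

# Reducing FCR at constant growth cuts feed use and hence emissions
base = eutrophication_per_kg_gain(fcr=2.03)
improved = eutrophication_per_kg_gain(fcr=1.93)
```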
Economic and environmental values
The EV and ENV of a trait were calculated for a one genetic standard deviation change in the mean of the trait while the means of the other traits remained constant. When calculating EV and ENV, increasing \({\text{TGC}}\) while keeping \({\text{FCR}}\) constant was achieved by increasing feed intake. Conversely, improving \({\text{FCR}}\) while keeping \({\text{TGC}}\) constant was generated by reducing feed intake.
Unlike previous studies [9, 24], we displayed EV as monetary gain per one unit of trait change (and not per genetic standard deviation), i.e. from 2.25 to 3.25 \({\text{g}}^{1/3} \cdot {\text{d}}^{ - 1} \cdot {\text{C}}^{ - 1}\) for \({\text{TGC}}\) and from 2.03 to 1.03 for \({\text{FCR}}\), to comply with the requirements of the software (SelAction) used to compute response to selection. The EV of a trait was calculated as the difference between profit before (\({\text{profit}}\_{\text{before}}\)) and after (\({\text{profit}}\_{\text{after}}\)) changing the trait by one unit, divided by the production of fish before genetic change (\({\text{production}}\_{\text{before}}\)).
$${\text{EV }} = \frac{profit\_after - profit\_before}{production\_before}.$$
We used the eutrophication per year per farm (\(impact\_farm\)) to calculate environmental values for eutrophication at the farm level for \({\text{TGC}}\) and \({\text{FCR}}\). The ENV of a trait was calculated as the difference between \(impact\_farm\) before (\(impact\_farm\_before)\) and after genetic change (\(impact\_farm\_after\)) changing the trait by one trait unit, divided by the production of fish before genetic change. Thus, the ENV refers to the local environmental impacts caused by a farm.
$${\text{ENV}} = \frac{impact\_farm\_after - impact\_farm\_before}{production\_before}.$$
The resulting EV and ENV are in Table 1. Here, we consider that a positive EV or ENV means that an increase in trait value increases economic return and decreases environmental impacts of a farm.
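Eqs. (1) and (2) translate directly into code; the example numbers are placeholders for illustration, not outputs of the bio-economic model.

```python
# EV (Eq. 1) and ENV (Eq. 2): change in profit, or in farm-level
# eutrophication, per kg of fish produced before genetic change, for a
# one-unit change in the trait. Sign conventions follow the equations
# as printed.

def economic_value(profit_before, profit_after, production_before):
    return (profit_after - profit_before) / production_before

def environmental_value(impact_before, impact_after, production_before):
    return (impact_after - impact_before) / production_before

# Placeholder example: a trait change adds 50,000 euros of profit on a
# production of 1,000,000 kg, giving an EV of 0.05 euros per kg.
ev = economic_value(1_000_000.0, 1_050_000.0, 1_000_000.0)
```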
Table 1 Economic (EV) and environmental values at the farm level (ENV) of thermal growth coefficient \(\left( {\text{TGC}} \right)\) and feed conversion ratio (\(\text{FCR}\)) expressed per unit of change in each trait
The mechanisms by which a change in TGC determined its EV and ENV were as follows. An increase in TGC reduces the production cycle and, therefore, increases the number of times per year when the farm is running at the maximum biomass [9]. Therefore, improving TGC increases production and increases the number of juveniles purchased. Furthermore, at a constant FCR, an increase in TGC does not affect total feed intake over the life of a fish, but annual feed consumption per farm increases due to higher production. Consequently, the EV of TGC is positive because the extra profit from higher production exceeds the extra costs of feed and juveniles. However, the ENV of TGC is negative because an increase in TGC increases eutrophication due to greater use of feed and greater emissions of pollutants per farm per year [9]. The mechanism by which changes in FCR determined its EV and ENV was that a reduction of FCR while keeping TGC constant reduces the total amount of feed required to reach harvest weight. Therefore, reducing FCR reduces the annual use of feed per farm [9]. Consequently, the EV and ENV of FCR are both negative, meaning that an increase in FCR decreases profit and increases eutrophication.
Simulated breeding program
We simulated a simple breeding program for sea bass using SelAction [25], in which 100 females were mated to 100 males to create 100 full-sib families. Forty fish (20 females and 20 males) were kept per family (4000 fish in total) as selection candidates. From these candidates, 200 (5%, 100 males and 100 females) were selected as parents for the next generation, corresponding to a selection intensity of 2.06. The breeding goal included two traits, \({\text{TGC}}\) and \({\text{FCR}}\):
$${\text{H}} = {\text{W}}_{\text{TGC}} \times {\text{A}}_{\text{TGC}} + {\text{W}}_{\text{FCR}} \times {\text{A}}_{\text{FCR}} ,$$
where \({\text{W}}\) is the EV or ENV and \({\text{A}}\) is the additive genetic value. Selection was on a pseudo-BLUP selection index based on own performance and information from 39 full sibs for \({\text{TGC}}\) and the percentage of fat in the dorsal muscle (%fat). We assumed a non-lethal measurement of %fat by ultrasound as an indirect criterion of \({\text{FCR}}\), as in Kause et al. [26]. Genetic gain per generation obtained from SelAction was converted to genetic standard deviations (σg) per year considering an average generation interval of 2.5 years (3 years for females and 2 years for males). We expressed genetic gain in σg to compare the genetic gains achieved for the three traits on a standardized basis.
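The selection intensity of 2.06 quoted above follows from truncating a standard normal distribution at the top 5%, and gain per year is gain per generation divided by the 2.5-year generation interval. A minimal check, using only the Python standard library:

```python
# Selection intensity for truncation selection of the top fraction p
# of a normally distributed index: i = pdf(z) / p, where z is the
# truncation point. Gain per year divides per-generation gain by the
# 2.5-year average generation interval used in the paper.
from statistics import NormalDist

def selection_intensity(p):
    nd = NormalDist()
    z = nd.inv_cdf(1 - p)   # truncation point on the standard normal
    return nd.pdf(z) / p    # mean deviation of the selected fraction

def gain_per_year(gain_per_generation, interval_years=2.5):
    return gain_per_generation / interval_years

i = selection_intensity(0.05)  # ~2.06, the intensity quoted above
```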
We used the single trait breeding goal \({\text{H}} = {\text{A}}_{\text{TGC}}\) as the baseline where selection was on \({\text{TGC}}\) using a pseudo-BLUP index based on own performance and information from 39 full sibs for \({\text{TGC}}\) only, resulting in correlated responses in %fat and \({\text{FCR}}\).
Genetic parameters
Genetic parameters of the three traits are in Tables 2 and 3. For FCR, genetic parameters were from rainbow trout (Oncorhynchus mykiss), whereas correlations between \({\text{FCR}}\) and % fat were from European sea bass. Genetic and phenotypic correlations between \({\text{TGC}}\) and \({\text{FCR}}\) are uncertain in sea bass. Thus, we tested values ranging from 0 to − 0.8 in steps of 0.01 for both genetic and phenotypic correlations between \({\text{TGC}}\) and \({\text{FCR}}\). The genetic and phenotypic correlations between TGC and FCR were assumed equal to each other.
Table 2 Genetic parameters of thermal growth coefficient (\({\text{TGC}}\)), feed conversion ratio (\(\text{FCR}\)) and percentage of muscle fat (% fat) used to simulate response to selection
Table 3 Genetic (above diagonal) and phenotypic (below diagonal) correlations between thermal growth coefficient (\({\text{TGC}}\)), feed conversion ratio (\(\text{FCR}\)) and percentage of muscle fat (%fat)
Genetic gain
In the single-trait breeding goal, the response to selection for \({\text{TGC}}\) was always the same regardless of the correlation with \({\text{FCR}}\) because only the response for \({\text{TGC}}\) was maximized (Fig. 1, left panel). The correlated response for %fat was also constant since the genetic correlation between TGC and %fat was fixed (Fig. 1, right panel). Conversely, as expected, the correlated response in FCR from selection on \({\text{TGC}}\) was higher when the genetic correlation between TGC and FCR was stronger (Fig. 1, central panel).
Response to selection for thermal growth coefficient (\(\text{TGC}\)) (left panel), feed conversion ratio (\(\text{FCR})\) (middle panel) and %fat (right panel) as a function of the genetic correlation (rg) between \(\text{TGC}\) and \(\text{FCR}\), for a single-trait breeding goal with \(\text{TGC}\) ("single trait") and for breeding goals with \(\text{TGC}\) and \(\text{FCR}\) weighted by EV and ENV, respectively. Values are expressed in genetic standard deviations (σg) per year
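For the single-trait case, the shape of this correlated response can be sketched with the standard correlated-response formula, CR = i · r · rg · σA. Own-performance accuracy (the square root of h², with h² = 0.43 for TGC from Table 2) is used here as a simple stand-in for the pseudo-BLUP index accuracy, and σA(FCR) is set to 1 so responses are in genetic standard deviations per generation.

```python
# Correlated response in FCR to single-trait selection on TGC:
# CR_FCR = i * accuracy_TGC * r_g * sigma_A(FCR).
# Accuracy is sqrt(h2) for own-performance selection, an assumption
# standing in for the paper's pseudo-BLUP index accuracy.

def correlated_response(i, accuracy, r_g, sigma_a):
    return i * accuracy * r_g * sigma_a

h2_tgc = 0.43   # heritability of TGC (Table 2)
i = 2.06        # selection intensity (top 5% selected)

# Scan part of the genetic correlation grid used in the paper (0 to -0.8)
responses = [correlated_response(i, h2_tgc ** 0.5, rg, 1.0)
             for rg in (0.0, -0.25, -0.8)]
```

As expected, the correlated change in FCR is zero at rg = 0 and becomes increasingly favourable (more negative) as the genetic correlation strengthens.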
In a two-trait breeding goal, the response to selection achieved is the result of a complex interaction between weights assigned to each trait and the additive genetic variances for those traits, and their correlations. The response to selection for \({\text{FCR}}\) was favourable (FCR decreased) and similar when using either EV or ENV (Fig. 1, central panel). When the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was strongly negative (< − 0.17 for EV and < − 0.22 for ENV), response to selection for \({\text{FCR}}\) increased because FCR could be improved by simply improving TGC. When the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) reached − 0.8, almost all the selection response for \({\text{FCR}}\) was due to the improvement in \({\text{TGC}}\) and there was little benefit from including %fat. However, the improvement in \({\text{FCR}}\) reached a minimum value when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was − 0.17 (when using EV) or − 0.22 (when using ENV).
Interestingly, using EV or ENV caused different responses for \({\text{TGC}}\) (Fig. 1, left panel). For correlations of \({\text{TGC}}\) with \({\text{FCR}}\) between − 0.21 and 0, TGC decreased when using ENV basically because \({\text{TGC}}\) was quite heritable (h2 = 0.43) and because the ENV of \({\text{TGC}}\) was negative. When the correlations became stronger (< − 0.21), improving \({\text{FCR}}\), which was the trait with the largest ENV could only be achieved by increasing TGC. On the contrary, the response to selection for \({\text{TGC}}\) was always positive (TGC increased) when using EV because the EV of \({\text{TGC}}\) was positive and selection on \({\text{TGC}}\) generated a favourable correlated response for \({\text{FCR}}\) (except when the correlation was exactly 0).
As expected, response for %fat was positive when using EV due to the positive correlation of \({\text{TGC}}\) with %fat (Fig. 1, right panel). The increase in %fat was even larger when the correlations between \({\text{TGC}}\) and \({\text{FCR}}\) approached 0, due to the increased importance of %fat to improve \({\text{FCR}}\). When using ENV, the increase in response for %fat was largest for correlations between \({\text{TGC}}\) and \({\text{FCR}}\) of − 0.4. For correlations greater than − 0.4, the response in %fat decreased in order to generate a decrease in \({\text{TGC}}\), which had a negative ENV. For correlations lower than − 0.4, the response for %fat decreased because the correlations between \({\text{TGC}}\) and \({\text{FCR}}\) were sufficiently high to generate a favourable correlated response for \({\text{FCR}}\) without having to increase %fat too much.
Economic return and change in eutrophication
With the single-trait breeding goal, economic returns increased linearly (Fig. 2, left panel) while eutrophication decreased linearly (Fig. 2, right panel) with a decrease in the correlation between \({\text{TGC}}\) and \({\text{FCR}}\). Nevertheless, implementing a single-trait breeding goal caused an increase in eutrophication per farm per year when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was weak, between − 0.27 and 0 (Fig. 2, right panel).
Economic (left panel) and environmental (right panel) response to selection as a function of the genetic correlation (rg) between thermal growth coefficient (\(\text{TGC}\)) and feed conversion ratio (\(\text{FCR}\)). The economic and environmental responses were calculated for a single-trait breeding goal with \({\text{TGC}}\) ("single trait") and for breeding goals with \({\text{TGC}}\) and \({\text{FCR}}\) weighted by EV or ENV. Values are expressed as economic return (euros per kg of fish produced per year) or reduction in eutrophication (kg PO4-eq per ton of fish produced per year)
In contrast, for the two-trait breeding goal, using either EV or ENV increased economic return and decreased eutrophication. As expected, using EV in the breeding goal gave the greatest economic return (Fig. 2, left panel). Economic returns were similar between EV and ENV when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was lower than − 0.5. However, when the correlation was between − 0.5 and 0, the economic return achieved when using ENV was lower than when using EV. The difference in economic return between EV and ENV reached a maximum when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was − 0.19 (0.038 €/kg produced/year). Note that, when the correlation was between − 0.41 and − 0.05, using ENV resulted in lower economic returns than the single-trait breeding goal.
Using ENV in the breeding goal generated a reduction of eutrophication of at least 1.55 kg PO4-eq per ton of fish produced per year (Fig. 2, right panel, correlation − 0.4). This is a reduction of 0.92% per year, considering a baseline eutrophication of 168.51 kg PO4-eq per ton of fish produced per year before genetic improvement [9]. With correlations closer to zero, the reduction of eutrophication rapidly reached 4.5 kg PO4-eq per ton of fish produced per year, which is more than what was obtained by using EV (2.5 kg PO4-eq with a correlation of 0). The reduction in eutrophication per year did not differ between EV and ENV when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) was lower than − 0.45.
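The 0.92% figure above follows directly from the reported reduction and baseline:

```python
# Check of the ~0.92% figure: annual reduction relative to the baseline
# eutrophication of 168.51 kg PO4-eq per ton of fish produced [9].
baseline = 168.51     # kg PO4-eq per ton of fish per year, before selection
reduction = 1.55      # kg PO4-eq per ton per year with ENV at rg = -0.4
print(round(100 * reduction / baseline, 2))   # → 0.92
```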
To our knowledge, this is the first study that explores the influence of the correlation between growth rate (expressed as \({\text{TGC}}\)) and \({\text{FCR}}\) on the design of a fish breeding program for economic or environmental sustainability. Although selection on a component trait such as \({\text{FCR}}\) is generally assumed to be less efficient than selection on an index weighting the components, selection on \({\text{FCR}}\) directly could be more efficient if the heritabilities of both traits (body weight gain and feed intake) were similar [29]. In fish, data on the genetic parameters of feed intake are still lacking and the best strategy to maximize improvement of feed efficiency is yet to be determined. Measuring \({\text{FCR}}\) directly on individual fish is indeed difficult, and improving \({\text{FCR}}\) depends on its correlation with other traits included in the breeding goal and in the index. In fish, the genetic correlation of FCR with \({\text{TGC}}\), the trait considered most important by farmers, is uncertain. Thus, we explored the effect of the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) on the response to selection and on the economic return of two breeding programs: (1) a single-trait breeding goal, where the trait in the breeding goal and in the index was TGC; and (2) a two-trait breeding goal, where TGC and FCR were in the breeding goal while TGC and percentage of fat in the dorsal muscle (%fat) were in the index. In this index, %fat was used as an indirect criterion of FCR. Then, for the two-trait breeding goal, we explored the effect of this correlation between TGC and FCR on the economic return and on the change in eutrophication when using economic values or environmental values as weights in the breeding goal.
According to Brascamp et al. [30], the economic values of traits should be calculated while considering that the farm is running under an optimized state and that, in the long term, extra profit from increasing production tends to be absorbed by the different stakeholders of an industry. Smith et al. [31] added that, in such industries where an equilibrium is reached, only decreases in cost should be included in the calculation of economic values. In the present study, harvest weight was fixed at 400 g, and the technical (number of cages) and zootechnical parameters (stocking density) were optimized to produce 1000 tons while keeping the constraint on the biomass. However, we decided to include the extra profit due to higher production in the calculation of economic values because fish farming is a recent industry and is not at equilibrium due to constant innovations. For a growing industry such as fish farming, any improvement of production volume within the production system and its quotas should be considered, as it reflects better production efficiency. This extra profit generated by increasing production could then be reinvested to fuel these innovations. This is supported by Amer et al. [32] who suggested that economic values depend on the economic and technical context of the industry.
Our results show that there are only minor differences in economic and environmental responses between the single-trait breeding goal and the two-trait breeding goal based on EV or ENV when the genetic correlation between FCR and TGC is strongly negative (< − 0.5). This suggests that, in such cases, an easy and affordable single-trait breeding program for \({\text{TGC}}\) only should be sufficient to generate economic profit and simultaneously reduce environmental impacts, although it does not maximize the economic or environmental responses. The reason for this small difference between single and two-trait breeding goals is that improving \({\text{TGC}}\) is easy (due to its high heritability), and indirectly generates a favourable correlated response for \({\text{FCR}}\). However, when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is weaker (0 to − 0.5), there are large differences in economic return and in reduction of environmental impacts between single and two-trait breeding goals. A breeding program with only \({\text{TGC}}\) in the index performs less well in terms of economic return than a breeding program with TGC and %fat in the index using EV as weights in the breeding goal. This difference is a direct result of the introduction of %fat in the index, which improved the response to selection for FCR in the two-trait breeding goal. For instance, if the correlation is around − 0.2, the economic return is 0.066 €/kg produced/year with the two-trait breeding goal and 0.049 €/kg with the single-trait breeding goal. This represents a reduction of about 26.6% in economic return. This reduction is even larger when the correlation is null. This difference between single and two-trait breeding goals is also observed for the reduction of eutrophication. With a correlation of − 0.4, using a single-trait breeding goal constrained the reduction of eutrophication by 39.7% compared with a two-trait breeding goal weighted by ENV.
Using a single-trait breeding goal could even increase eutrophication compared to the two-trait breeding goal weighted by ENV if the genetic correlation is higher than − 0.28.
Although %fat is acknowledged to be an important driver of the results obtained, we did not investigate how a potential change in the correlations of %fat with TGC and FCR would affect selection response when the correlation between TGC and FCR changed, mainly because we do not know precisely how the genetic correlations among three traits would behave when the genetic correlation between two of them changes. Nevertheless, if we consider that the genetic correlation between TGC and %fat is close to the 0.75 value tested here, the correlation between FCR and %fat would have a strong effect on the response to selection when the genetic correlation between TGC and FCR is weak. In that case, the response to selection for FCR would probably be higher if the genetic correlation between FCR and %fat were stronger.
So far, in fish, there are strong indications that the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is weak (between 0 and − 0.4, e.g. [26]). Hence, both traits (\({\text{TGC}}\) and \({\text{FCR}}\)) should be included in the breeding goal and in the index to maximize the economic or environmental responses. However, the success of a breeding program in improving \({\text{FCR}}\) largely depends on the availability of phenotypes that can be used as indirect criteria for \({\text{FCR}}\). To date, no method to record \({\text{FCR}}\) efficiently at low cost has been implemented in a fish breeding program, although several methods have been proposed [33, 34]. Therefore, finding an efficient method to phenotype fish for \({\text{FCR}}\) is an important challenge for fish breeders. In this regard, muscle fat content may be a trait of particular interest as it can be measured on selection candidates with non-invasive ultrasound measurements [35]. In the pig industry, Knap and Wang [36] reported positive genetic correlations between backfat depth and \({\text{FCR}}\), which means that selection for leaner pigs led to an improvement of \({\text{FCR}}\) because fat deposition is less efficient in terms of energy used per unit of wet weight gain than protein deposition. In fish, fat is mostly deposited as visceral and intramuscular/subcutaneous fat, and it has been reported that fat content-related traits and \({\text{FCR}}\) are genetically correlated [26, 37]. In 2007, Quillet et al. [38] showed that a trout line selected for low muscle lipid content was more efficient than a line selected for high muscle lipid content. In our study, we used muscle fat as an indirect criterion in the index based on results from Besson et al. [33]. Surprisingly, even an indirect criterion with a relatively weak genetic correlation with \({\text{FCR}}\) (− 0.39) resulted in a reduction in eutrophication.
Thus, assuming that \({\text{TGC}}\) or another growth trait is always the main trait in the breeding goal, the inclusion in the index of any other indirect criterion with a strong correlation with \({\text{FCR}}\) would improve FCR and thus increase economic return and reduce eutrophication. However, other methods should be investigated such as weight loss after fasting [39] or individual FCR in aquarium under restricted feeding, which was shown to be phenotypically linked to \({\text{FCR}}\) [33].
We also explored what would be the best type of weighting factor for a two-trait breeding goal (with \({\text{TGC}}\) and \({\text{FCR}}\) in the breeding goal) to enhance the sustainability of fish production. We found that, when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is strongly negative, the environmental response is not sensitive to the use of EV or ENV in the breeding goal. This is because the strong favourable genetic correlation between \({\text{TGC}}\) and \({\text{FCR}}\) brings information on the EBV of \({\text{FCR}}\) (the trait with the greatest relative EV and ENV), which enhances the favourable response of \({\text{FCR}}\). However, the response in economic return and in reduction of environmental impacts is sensitive to the use of EV versus ENV when the genetic correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is weakly negative. First, although the reduction of eutrophication at the farm level is lower with EV than with ENV, it remains favourable because EV puts more emphasis on improving \({\text{FCR}}\) and results in a reduction of the amount of feed required per unit of fish produced. Thus, using EV maximizes the economic return but is also promising for reducing eutrophication. However, using ENV when the genetic correlation between TGC and FCR is weak decreases the economic return, i.e. by 56.4% compared to a breeding goal using EV when the correlation is − 0.2. The reason is that the ENV of \({\text{TGC}}\) and \({\text{FCR}}\) are both negative whereas the EV of \({\text{TGC}}\) is positive and the EV of \({\text{FCR}}\) is negative; this change causes a large shift in trait responses. With ENV, the main opportunity to reduce eutrophication is not to select for better \({\text{FCR}}\) but to reduce \({\text{TGC}}\). However, this makes no sense in economic terms because \({\text{TGC}}\) has a positive EV, and then the economic return of the breeding program decreases drastically. 
In this case, the financial incentive for farmers to decrease eutrophication by using ENV in the breeding goal is low, and using ENV may not be the solution to enhance the sustainability of fish production. Thus, choosing ENV instead of EV depends on the willingness of farmers to accept a slightly lower increase in economic return in exchange for an improvement in environmental impacts. However, farmers could benefit from such an environment-based breeding program indirectly, since it has been shown that consumers are willing to pay a price premium for salmon produced with more environmental considerations [40]. Thus, the potential increase in sale price could offset some of the economic return lost as a result of using ENV. In practice, the local environmental impact of fish farming is also determined by spatial planning and can be managed by adapting the quota system.
If there is an antagonism between EV and ENV, it could be interesting to combine them in the aggregate genotype. However, this requires that they are expressed in the same units, i.e. that ENV is converted to a monetary unit. This is possible when ENV is calculated for climate change because a shadow price of carbon exists, which is defined as the cost of the damage caused by emitting an additional ton of CO2. Combining EV and ENV in the breeding goal would balance out the genetic gain between economic return and environmental impact [41, 42]. However, to our knowledge, other categories of impacts such as eutrophication have not yet been monetarized.
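As a sketch of what such a combined aggregate genotype could look like once an impact category is monetarized, the snippet below merges economic values with shadow-priced environmental values. Every number and the sign convention are assumptions for illustration; none comes from this study.

```python
# Combining economic values (EV, €/unit of trait) with environmental
# values monetarized through a shadow price of carbon. All values are
# hypothetical. Sign convention assumed here: a positive ENV means that
# increasing the trait raises emissions, so it is penalized.
shadow_price_co2 = 0.09                  # €/kg CO2-eq (assumed)
ev  = {"TGC": 0.05, "FCR": -0.30}        # assumed economic values
env = {"TGC": 0.02, "FCR": 0.15}         # assumed kg CO2-eq per unit increase
combined = {t: ev[t] - shadow_price_co2 * env[t] for t in ev}
print(combined)
```

With these assumptions, the combined weights shrink the emphasis on any trait whose improvement increases emissions, balancing genetic gain between economic return and environmental impact as in [41, 42].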
Our study shows that, although a quota is implemented to constrain the environmental impacts, environmental impacts per farm per year could increase as a result of genetic improvement, especially when only growth is improved. In that case, and assuming a weak correlation with \({\text{FCR}}\), improving \({\text{TGC}}\) would increase environmental impacts per farm per year, although the quota on biomass is respected. The aim of the quota on biomass is to ensure that the surrounding environment has the capacity to assimilate the nutrients produced by the farm, which is termed the carrying capacity of the environment [43]. Thus, although the emission of waste per day does not exceed the carrying capacity, it would be essential to verify that the local environment is not affected by the increase of the annual emission of wastes. In such a case, breeding would become a problem and not a solution to reduce environmental impacts of fish farming. To change this, the breeding program should be modified to respect the annual carrying capacity by including other traits in the breeding goal and in the index. For instance, in our situation, adding \({\text{FCR}}\) in the breeding goal and %fat in the index would reduce the amount of nutrients emitted per year per farm regardless of the weights used in the breeding goal. Another solution would be to change the overall quota regulation by imposing an annual quota on feed used. In such a case, it is likely that the EV and ENV of the traits would differ but \({\text{FCR}}\) would remain the key trait to be improved and this quota definition would motivate breeders to include it in their index. The importance of feed efficiency in breeding programs to reduce environmental impacts has also been demonstrated by Ali et al. [41, 42] in livestock. 
They showed that using EV that integrate environmental costs in a pig breeding program for growth and \({\text{FCR}}\) results in reducing greenhouse gas emissions and excretions of nitrogen and phosphorus.
This is the first study that explores the influence of the genetic correlation between growth rate and feed conversion ratio on the optimal breeding program for economic or environmental sustainability. We showed that a favourable response in \({\text{FCR}}\) is key to improving profit and to reducing eutrophication at the farm level because it reduces the amount of feed used to produce one kg of fish. Feed is the largest economic cost for farmers and also the largest environmental cost due to its manufacturing and its biological transformation into nitrogen-based waste by the fish [44]. We showed that the two-trait breeding goal with %fat in the index as an indirect criterion of FCR was the best way to reach a favourable response in FCR. Using EV in this two-trait breeding goal increased economic return by 5 to 127% compared to a single-trait breeding goal for \({\text{TGC}}\). Furthermore, this two-trait breeding goal was able to reduce eutrophication by 1.34% (using ENV) and 0.63% (using EV) per kg of fish produced per year when the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is − 0.25. Based on these results, we strongly recommend including \({\text{FCR}}\) in the breeding goal, with an indirect criterion in the index, of a fish breeding program, especially if the correlation between \({\text{TGC}}\) and \({\text{FCR}}\) is weak.
The R script used to compute the response to selection for several genetic correlations and the datasets generated for the current study are available from the corresponding author on request.
Chavanne H, Janssen K, Hofherr J, Contini F, Haffray P, Komen H, et al. A comprehensive survey on selective breeding programs and seed market in the European aquaculture fish industry. Aquacult Int. 2016;24:1287–307.
Folke C, Kautsky N, Troell M. The costs of eutrophication from salmon farming: implications for policy. J Environ Manag. 1994;40:173–82.
Steinfeld H, Gerber P, Wassenaar T, Castel V, Rosales M, De Haan C. Livestock's long shadow: environmental issues and options. Rome: FAO; 2006.
Wall E, Simm G, Moran D. Developing breeding schemes to assist mitigation of greenhouse gas emissions. Animal. 2010;4:366–76.
Bell MJ, Wall E, Russell G, Morgan C, Simm G. Effect of breeding for milk yield, diet and management on enteric methane emissions from dairy cows. Anim Prod Sci. 2010;50:817–26.
van Middelaar CE, Berentsen PBM, Dijkstra J, van Arendonk JAM, de Boer IJM. Methods to determine the relative value of genetic traits in dairy cows to reduce greenhouse gas emissions along the chain. J Dairy Sci. 2014;97:5191–205.
Van Middelaar CE, Berentsen PBM, Dijkstra J, Van Arendonk JAM, De Boer IJM. Effect of feed-related farm characteristics on relative values of genetic traits in dairy cows to reduce greenhouse gas emissions along the chain. J Dairy Sci. 2015;98:4889–903.
Besson M, Aubin J, Komen H, Poelman M, Quillet E, Vandeputte M, et al. Environmental impacts of genetic improvement of growth rate and feed conversion ratio in fish farming under rearing density and nitrogen output limitations. J Clean Prod. 2016;116:100–9.
Besson M, de Boer IJM, Vandeputte M, van Arendonk JAM, Quillet E, Komen H, et al. Effect of production quotas on economic and environmental values of growth rate and feed efficiency in sea cage fish farming. PLoS One. 2017;12:e0173131.
de Verdal H, Komen H, Quillet E, Chatain B, Allal F, Benzie JAH, et al. Improving feed efficiency in fish using selective breeding: a review. Rev Aquacult. 2018;10:833–51.
Thodesen J, Grisdale-Helland B, Helland SJ, Gjerde B. Feed intake, growth and feed utilization of offspring from wild and selected Atlantic salmon (Salmo salar). Aquaculture. 1999;180:237–46.
Kause A, Tobin D, Houlihan D, Martin SA, Mäntysaari EA, Ritola O, et al. Feed efficiency of rainbow trout can be improved through selection: different genetic potential on alternative diets. J Anim Sci. 2006;84:807–17.
Mambrini M, Médale F, Sanchez MP, Recalde B, Chevassus B, Labbé L, et al. Selection for growth in brown trout increases feed intake capacity without affecting maintenance and growth requirements. J Anim Sci. 2004;82:2865–75.
Sanchez MP, Chevassus B, Labbé L, Quillet E, Mambrini M. Selection for growth of brown trout (Salmo trutta) affects feed intake but not feed efficiency. Aquat Living Resour. 2001;14:41–8.
Ogata HY, Oku H, Murai T. Growth, feed efficiency and feed intake of offspring from selected and wild Japanese flounder (Paralichthys olivaceus). Aquaculture. 2002;211:183–93.
Guinee J. Handbook on life cycle assessment: operational guide to the ISO standards. Int J Life Cycle Assess. 2002;7:311–3.
Groen AF, Steine T, Colleau JJ, Pedersen J, Pribyl J, Reinsch N. Economic values in dairy cattle breeding, with special reference to functional traits. Report of an EAAP-working group. Livest Prod Sci. 1997;49:1–21.
Mallet JP, Charles S, Persat H, Auger P. Growth modelling in accordance with daily water temperature in European grayling (Thymallus thymallus L.). Can J Fish Aquat Sci. 1999;56:994–1000.
Person-Le Ruyet J, Mahé K, Le Bayon N, Le Delliou H. Effects of temperature on growth and metabolism in a Mediterranean population of European sea bass, Dicentrarchus labrax. Aquaculture. 2004;237:269–80.
Lanari D, Agaro DE, Ballestrazzi R. Growth parameters in European sea bass (Dicentrarchus labrax L.): effects of live weight and water temperature. Ital J Anim Sci. 2002;1:181–6.
Cowey CB, Cho CY. Nutritional strategies and aquaculture waste. In: Proceedings of the First international symposium on nutritional strategies in management of aquaculture: 26 June 1990; Guelph; 1991.
Cho CY, Kaushik SJ. Nutritional energetics in fish: energy and protein utilization in rainbow trout (Salmo gairdneri). World Rev Nutr Diet. 1990;61:132–72.
Guinée J, Gorrée M, Heijungs R, Huppes G, Kleijn R, De Koning A, et al. Handbook on life cycle assessment: operational guide to the ISO standards. New York: Kluwer Academic Publishers; 2002.
Janssen K, Berentsen P, Besson M, Komen H. Derivation of economic values for production traits in aquaculture species. Genet Sel Evol. 2017;49:5.
Rutten MJM, Bijma P, Woolliams J, Van Arendonk JAM. SelAction: Software to predict selection response and rate of inbreeding in livestock breeding programs. J Hered. 2002;93:456–8.
Kause A, Kiessling A, Martin SAM, Houlihan D, Ruohonen K. Genetic improvement of feed conversion ratio via indirect selection against lipid deposition in farmed rainbow trout (Oncorhynchus mykiss Walbaum). Br J Nutr. 2016;116:1656–65.
Vandeputte M, Garouste R, Dupont-Nivet M, Haffray P, Vergnet A, Chavanne H, et al. Multi-site evaluation of the rearing performances of 5 wild populations of European sea bass (Dicentrarchus labrax). Aquaculture. 2014;424–425:239–48.
Saillant E, Dupont-Nivet M, Sabourault M, Haffray P, Laureau S, Vidal MO, et al. Genetic variation for carcass quality traits in cultured sea bass (Dicentrarchus labrax). Aquat Living Resour. 2009;22:105–12.
Smith C. A note on the improvement of a trait by selecting on its components. Anim Prod. 1967;9:127–30.
Brascamp EW, Smith C, Guy DR. Derivation of economic weights from profit equations. Anim Sci. 1985;40:175–9.
Smith C, James JW, Brascamp EW. On the derivation of economic weights in livestock improvement. Anim Sci. 1986;43:545–51.
Amer PR, Fox GC, Smith C. Economic weights from profit equations: appraising their accuracy in the long run. Anim Sci. 1994;58:11–8.
Besson M, Allal F, Chatain B, Vergnet A, Clota F, Vandeputte M. Combining individual phenotypes of feed intake with genomic data to improve feed efficiency in Sea bass. Front Genet. 2019;10:219.
De Verdal H, Vandeputte M, Mekkawy W, Chatain B, Benzie JAH. Quantifying the genetic parameters of feed efficiency in juvenile Nile tilapia Oreochromis niloticus. BMC Genet. 2018;19:105.
Knap PW, Kause A. Phenotyping for genetic improvement of feed efficiency in fish: lessons from pig breeding. Front Genet. 2018;9:184.
Knap PW, Wang L. Pig breeding for improved feed efficiency. In: Patience JF, editor. Feed efficiency in swine. Wageningen: Wageningen Academic Publishers; 2012. p. 67–181.
Quinton C, Kause A, Ruohonen K, Koskela J. Genetic relationships of body composition and feed utilization traits in European whitefish (L.) and implications for selective breeding in fishmeal-and soybean meal-based diet environments. J Anim Sci. 2007;85:3198–208.
Quillet E, Le Guillou S, Aubin J, Labbé L, Fauconneau B, Médale F. Response of a lean muscle and a fat muscle rainbow trout (Oncorhynchus mykiss) line on growth, nutrient utilization, body composition and carcass traits when fed two different diets. Aquaculture. 2007;269:220–31.
Grima L, Vandeputte M, Ruelle F, Vergnet A, Mambrini M, Chatain B. In search for indirect criteria to improve residual feed intake in sea bass (Dicentrarchus labrax): part I: phenotypic relationship between residual feed intake and body weight variations during feed deprivation and re-feeding periods. Aquaculture. 2010;300:50–8.
Whitmarsh D, Wattage P. Public attitudes towards the environmental impact of salmon aquaculture in Scotland. Eur Environ. 2006;16:108–21.
Ali BM, de Mey Y, Bastiaansen JWM, Oude Lansink AGJM. Effects of incorporating environmental cost and risk aversion on economic values of pig breeding goal traits. J Anim Breed Genet. 2018;135:194–207.
Ali BM, Bastiaansen JWM, de Mey Y, Oude Lansink AGJM. Response to a selection index including environmental costs and risk preferences of producers. J Anim Sci. 2018;97:156–71.
Stigebrandt A. Carrying capacity: general principles of model construction. Aquacult Res. 2011;42:41–50.
Aubin J, Papatryphon E, van der Werf HMG, Chatzifotis S. Assessment of the environmental impact of carnivorous finfish production systems using life cycle assessment. J Clean Prod. 2009;17:354–61.
The authors would like to thank Kasper Janssen, Imke de Boer, Joel Aubin and Edwige Quillet for their input on this work.
M. Besson benefited from a joint grant from the European Commission and IMARES, within the framework of the Erasmus-Mundus joint doctorate "EGS-ABG".
Animal Breeding and Genomics Centre, Wageningen University, PO Box 338, 6700 AH, Wageningen, The Netherlands
Mathieu Besson, Hans Komen & Gus Rose
Université Paris-Saclay, INRAE, AgroParisTech, GABI, 78350, Jouy-en-Josas, France
Mathieu Besson & Marc Vandeputte
Ifremer, Chemin de Maguelone, 34250, Palavas-les-Flots, France
Marc Vandeputte
Mathieu Besson
MB and GR developed the R script to compute the response to selection for several genetic correlations. MB wrote the manuscript. MV, HK and GR provided comments and corrections to improve the manuscript. All authors read and approved the final manuscript.
Correspondence to Mathieu Besson.
Additional file 1: Tables S1.
Calculations and parameters involved in the bio-economic model.
Technical parameters of the sea bass farm running under a quota on biomass.
Revenue and costs (variable and fixed) of a sea bass farm running under a quota on biomass.
Besson, M., Komen, H., Rose, G. et al. The genetic correlation between feed conversion ratio and growth rate affects the design of a breeding program for more sustainable fish production. Genet Sel Evol 52, 5 (2020). https://doi.org/10.1186/s12711-020-0524-0 | CommonCrawl |
Community Chapter 10
bekahhughes5
Which of the following statements best describe what happened to the hospitals built or expanded by Hill-Burton Act funds?
Many such hospitals have consolidated or closed.
Not needing to expand, hospitals have used the funds to upgrade their facilities.
They have continued to use such funds to expand.
When funding ceased, so did hospital expansion.
Which of the following nurses fought to have American nursing controlled by nurses rather than physicians?
Lavinia Dock
Lillian Wald
Which of the following nurses used political expertise to influence the federal government to develop a Children's Bureau?
Which of the following best describes the most important factor in legislation?
The amount of financing and lobbying behind each choice
The beliefs, attitudes, and values of the policy
The preferences of the majority of American voters
The president's ongoing encouragement for one particular choice
Which of the following is accomplished through the use of public policy?
Solutions to problems of public concern are developed.
A rational, logical problem-solving decision-making process is implemented.
Public safety nets for vulnerable populations are created.
Economic and business management principles are applied.
Which of the following best describes why it is so difficult to change the paradigm of health care from disease orientation to promoting health orientation?
The belief exists that those without insurance could obtain insurance if they worked hard enough.
People find it difficult to agree on what the ideal paradigm should be.
People realize the media have exaggerated the problems that result from lack of insurance.
Serious reallocation of resources would have to occur.
Which of the following best describes how the government controls conditions that individuals cannot?
Appeals to the common sense and good nature of the citizens
Establishes social mores that enable groups to control individuals' behaviors
Passes and enforces law
Uses fear reinforced by police power
Which of the following statements best describes why the federal government is unable to do whatever politicians currently in power want?
The citizens would rise up in rebellion if actions were outrageous.
The lack of funds to implement actions are seen as unreasonable by a majority of voters.
Only the actions authorized by the Constitution are legitimate.
The states would rebel and withdraw from the union.
Which of the following is the basis for any American citizen to feel comfortable expressing an opinion on a political issue?
Amendments to the Constitution
Articles of the Constitution of the United States
Gettysburg Address
Which of the following best describes who has the authority to act in every area except those specifically mentioned in the Constitution?
Any individual citizen
Which of the following best describes how the local government is provided authority?
Through the ability to tax local residents to meet local needs
Through the people themselves who band together to create the community
Power delegated from the federal level to the local level
Power delegated from the states
Which of the following statements best describes how policies in the private sector are different from policies in the public sector?
Private sector policies are slow, deliberate, and reactive to events.
Private sector policies are determined by the opinions and feelings of those employed in that sector.
Private sector policies are based on economics and market trends.
The president was sent a bill that he did not really like, but he would have been unpopular if he vetoed it, so he did nothing. Which of the following best describes what will happen to the bill?
The bill is dead.
The bill returns to both houses to see if enough votes can be obtained to pass the bill even without the president signing the bill.
The bill becomes law.
The bill sits there until the president signs it or vetoes it.
The forces for the proposed bill are roughly as persuasive, powerful, and well financed as the forces against the proposed bill. Which of the following describes the most likely outcome?
The bill will be debated through a public hearing.
The bill will fail.
The bill will pass.
The bill will remain in the legislature until one side or the other has a majority of votes.
An individual has been terminated from his job and has lost his health insurance. Which of the following federal laws allows him to continue his insurance benefits for a specified period of time?
Health Insurance Portability and Accountability Act of 1996 (HIPAA)
Family Support Act of 1988
Health Maintenance Organization Act of 1973 (HMO)
Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA)
Which of the following statements best describes an effect of the Welfare Reform Act of 1996?
Individuals who were required to obtain employment lost their health coverage.
Many were happy to be off the government dole and self-supporting.
Persons sought and obtained employment that often included insurance benefits.
The food stamp program decreased in size.
Which of the following best describes the history of the State Child Health Improvement Act (SCHIP) of 1997?
The law included goals and programs but no funding to achieve them.
The law received extensive support by both Republicans and Democrats.
The law was extended, not renewed by the Bush administration, and then renewed by the Obama administration.
The law was passed by the majority of states but not by the federal government.
Which of the following best describes what happened after the Medicare Modernization Act of 2003 was enacted?
Additional restrictions in coverage were imposed.
Experimental treatments were approved for reimbursement.
Reimbursement procedures became more efficient.
A prescription drug benefit was added.
A nurse is employed by the state public health department. Which of the following activities would she most likely complete?
Set up a flu shot clinic at a neighborhood church
Lobby for health care reform to cover more preventive services
Monitor the incidence of influenza in the state
Serve as a volunteer for a state legislator's campaign
It has been proposed that a new, better approach to health care be tested with a small group to evaluate its effectiveness. Which of the following best describes why this cannot be done?
Any employment in the project would be only temporary, so it would be difficult to find professionals to staff the program.
It is challenging to find appropriate sites located in the target area from which to offer the pilot project service.
No one wants to accept free services if they include being a guinea pig in a research project.
Offering a service establishes a precedent and a sense of entitlement, so it is difficult to discontinue the program.
Which of the following best describes the most crucial step in policy formation?
Convincing both political parties and independents to support the proposed policy
Defining the issue and placing it on the agenda for possible action
Determining who has vested interest in what aspects of the policy
Trying to simplify the proposed legislation so the public will support it
A nurse suggests to the students that they attend the local district nurses' association meeting, where the nurse is an officer. Which of the following provides the best rationale for this action?
Meeting outside the clinical area allows for more effective informal learning based on discussion and interaction.
Role models are typically the major influence on nurses choosing to become politically active.
Students are often given extra credit from their instructor for such community involvement.
Such groups want students to attend their meetings to encourage them to join and to accept a committee responsibility.
Which of the following statements best describes why nurses are not more effective in creating political change?
Nurses are not listened to by politicians.
Nurses are not perceived as leaders in the health care field.
Nurses do not act or do not agree on what changes are needed.
Nurses do not know how to negotiate, communicate, and collaborate to create change.
The local nursing association and the local medical association disagreed vehemently on advanced practice nursing reimbursement. Which of the following best describes why the two groups agreed to join a coalition to send representatives to testify on a particular bill?
Although there was disagreement, both groups agreed to behave politely and professionally.
Both associations had formed a coalition to collaborate on a bill that would benefit patients.
Because the legislators had asked both groups to appear, the groups did not have a choice.
The two groups were sharing costs and expenses, but their testimony would give opposing viewpoints.
A nurse is employed by the state nursing association to serve as a lobbyist. Which of the following would be the most crucial task to achieve?
Be seen as a reliable and credible source of accurate information
Convince colleagues in nursing to join their local nursing organization and write to encourage legislators to vote according to nurses' goals
Offer to make large donations to the legislator who can forward nursing's agenda
Visit every single legislator so the nurse is recognized in this role
A nurse represents the state professional association. Which of the following actions would the nurse complete in relation to legislation?
Be prepared to contribute to campaigns of legislators who vote consistently with nursing goals
Be prepared to confront verbally those on the opposite side of legislative issues
Be prepared to provide testimony and comment on relevant issues
Be prepared to visit schools of nursing to present about the current legislative issues
A nurse is unable to be actively involved in attending meetings at the state level. Which of the following actions would be most useful for the nurse?
Asking students to remain informed regarding proposed legislation
Communicating, with rationales, her stand on proposed legislation to legislators
Remaining uninvolved so incorrect information is not inadvertently given
Writing letters to the local newspaper asking nurses to become involved
A nurse states that he or she does not want to become involved in politics because of family, school, work, and other commitments. Which of the following would be the best reply to this statement?
"Good for you. We should all stay out of such dirty game playing!"
"I am sorry to hear that but I do understand."
"It doesn't matter; politics have nothing to do with nursing practice."
"It won't take much time to join ANA and pay dues so their lobbyist can represent you."
Which of the following statements best describes why nurses should contribute whenever possible to their state nursing association political action committee (PAC)?
As PACs are a reality of political life, nursing needs to be heard.
Contributing money may result in a future political appointment.
Only money really has any influence on legislative votes.
PACs are being used to increase nursing salaries and working conditions.
Which of the following best describes what is being discussed in relation to concerns over patients' safety and nurse fatigue?
All hospitals are now required to report errors made that are a result of low nurse staffing.
The Centers for Disease Control and Prevention (CDC) is monitoring the incidence of medical errors caused by nurse understaffing.
The Registered Nurse Safe Staffing Act was passed by Congress in 2013.
Legislation has been suggested that staffing systems require the input of direct care registered nurses.
A nurse would like to influence an internal private health policy. Which of the following actions should the nurse take?
Build or join a private entrepreneurial practice to provide lower cost services to underserved groups
Participate in public discussions regarding quality and managed care
Support nursing research done that demonstrates positive clinical and economic outcomes
Write managed care organizations to request that nurses receive reimbursement for health services to clients
What was the poverty guideline for a family of four in mainland United States in 2013?
Which of the following best describes the purposes of professional societies such as the American Nurses Association (ANA)? (Select all that apply.)
Providing control and oversight of the occupation
Creating licensing laws to control entry into the profession
Determining appropriate requirements for education into the profession
Establishing standards for practice
Protecting the interests of the practitioners
Safeguarding the public trust
Which of the following is one legally required to obey? (Select all that apply.)
Directions to a destination provided by a police officer
Court decisions related to legislative law
Delegation of responsibility for a task by a physician
Executive decisions, such as your employer requirements
Laws passed by your state or the federal government
Rules and regulations from agencies, such as the state board of nursing
Which of the following current issues are leading the World Health Organization (WHO) to reconsider its initial definition of health? (Select all that apply.)
Environmental issues such as industrial toxins or carcinogenic commercial products
Global, not local, problems such as spread of antibiotic-resistant bacteria
Need to move from containment and treatment to social intervention
Pressure from industrialized nations to emphasize chronic diseases rather than infectious diseases
Realization that government actions influence the basic human right of health
Worldwide pandemics such as human immunodeficiency virus (HIV) and swine flu, which require a different approach
Which of the following best describes the new national health goals as seen in Healthy People 2020? (Select all that apply.)
Achieve a plan for universal basic health care for citizens
Create social and physical environments that promote good health
Eliminate health disparities
Eliminate preventable disease, disability, injury, and premature death
Achieve health equity
Promote healthy behaviors at every stage of life
Which of the following statements best describes a notable change of the Omnibus Budget Reconciliation Act? (Select all that apply.)
Legislated a funding increase for RN staffing
Changed from process evaluation to outcome evaluation when evaluating care
Established guidelines for the use of restraints
Created health maintenance organizations nationwide
Added prescription drug benefits for Medicaid recipients
Required all states to review certificates of need before agencies could expand
Which of the following were among the outcomes of the 1979 report Healthy People: The Surgeon General's Report on Health Promotion and Disease Prevention? (Select all that apply.)
A national committee was established to have hearings and study the problem further.
Increased funds were allocated for health planning and health care.
Many of the recommendations were adopted on the federal level.
The Health Objectives Planning Act of 1990 was passed.
The federal government began to identify and monitor national health care goals.
The president addressed the American people about the need for health care reform.
Which of the following critical issues in health care were addressed by The Health Insurance Portability and Accountability Act (HIPAA) of 1996? (Select all that apply.)
Portability of insurance coverage
COBRA, maintaining coverage for those who lose their jobs
Insurance companies having a total monopoly in a certain geographic area
Insurance companies setting limits on coverage of longer than 12 months
Insurance companies charging seriously ill persons more than healthy persons
Insurance companies paying the same for mental health coverage as for physical illnesses
Which of the following actions represent a shift in philosophy at the Centers for Disease Control and Prevention (CDC)? (Select all that apply.)
Collecting and analyzing health data
Creating integrated health information systems
Allocating resources to treat specific diseases
Encouraging partnerships and strategic alliances
Creating transaction-based relationships
Leveraging resources to steer the larger health system
Total angular momentum operator $L^2$
Consider a system with a state of fixed total angular momentum $l = 2$. What are the eigenvalues of the following operators
(a)$ L_z$
(b) $3/5L_x −4/5L_y$
(c) $2L_x −6L_y +3L_z$
My problem is more to do with the definition of the angular momentum operator:
I think the angular momentum operator is $L^2=L_x^2+L_y^2+L_z^2$. I have seen many different eigenvalues for this operator when applied to an eigenket:
$L^2|\psi\rangle=\hbar^2 k^2|\psi\rangle$
$L^2|\psi\rangle=\hbar^2 j(j+1 )|\psi\rangle$
along with a few others. I understand that these are sort of equivalent and we are just using numbers to represent the value. However, what is the $l=2$? Is it the $k$, the $j$?
I know what to do from here on: $m$ (the quantum number for angular momentum along a given axis) varies from $-j$ to $+j$.
quantum-mechanics homework-and-exercises angular-momentum operators
Toby Peterken
Where have you seen $L^2|\psi\rangle=\hbar^2k^2|\psi\rangle$? That result doesn't immediately make sense to me. Did you happen to confuse $L^2$ with $p^2$ in that instance with the $k^2$? – WAH Aug 20 '17 at 20:05
Your question is impossible to answer unless you give the state. Simply knowing $\ell=2$ is not enough to say anything about components. – ZeroTheHero Aug 20 '17 at 22:05
That was the question though, I have changed nothing – Toby Peterken Aug 21 '17 at 6:40
@ZeroTheHero I disagree on that: given the value of the angular momentum you can explicitly build the form of the operators $L_x$, $L_y$, $L_z$ in some representation (for example the usual $|l,m\rangle$): $L_z$ is easy as it's diagonal, and has eigenvalues $-2,-1,0,1,2$; the other 2 are way more involved but in principle they are 5x5 matrices, with their own eigenvalue problem. – Francesco Bernardini Jan 7 '19 at 1:44
@FrancescoBernardini you understand the question differently than I do. – ZeroTheHero Jan 7 '19 at 1:47
I think the trick here is to note that the operator in (b) measures the component of angular momentum along the axis $\hat{n} = (3/5, -4/5, 0)$. Its eigenvalues must be $\{2,1,0,-1,-2\}$ (in units of $\hbar$), the same as those of $L_z$, because you could have chosen your z-axis to lie along $\hat{n}$.
Similarly, the operator in (c) is 7 times the component of $\vec{L}$ along the normalized axis $\hat{n} = (2/7, -6/7, 3/7)$. Its eigenvalues must therefore be $\{14,7,0,-7,-14\}$, by the same reasoning.
Paul G
The information you are given, i.e. $l=2$, tells you that the operators $L_x$, $L_y$, $L_z$ can be represented as 5x5 matrices, which operate on a vector space spanned by the five vectors
$$\{|2,-2\rangle,|2,-1\rangle,|2,0\rangle,|2,1\rangle,|2,2\rangle\}$$
which can be represented, for example, by one of the most natural bases:
$$|2,-2\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 1\\ 0\\ 0\\ 0\\ 0 \end{array}\right),|2,-1\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 1\\ 0\\ 0\\ 0 \end{array}\right),|2,0\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 1\\ 0\\ 0 \end{array}\right),|2,1\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 0\\ 1\\ 0 \end{array}\right),|2,2\rangle\stackrel{\cdot}{=}\left(\begin{array}{c} 0\\ 0\\ 0\\ 0\\ 1 \end{array}\right)$$
$L_z$ is easy because in this basis it is diagonal by definition, and it would be represented by
$$L_z\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} -2 & 0 & 0 & 0 & 0\\ 0 & -1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 2\\ \end{array}\right)$$
On the other hand, the two operators $L_+$ and $L_-$, defined as
$$L_\pm=L_x\pm iL_y$$
are then represented by
$$L_-\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} 0 & 2 & 0 & 0 & 0\\ 0 & 0 & \sqrt6 & 0 & 0\\ 0 & 0 & 0 & \sqrt6 & 0\\ 0 & 0 & 0 & 0 & 2\\ 0 & 0 & 0 & 0 & 0\\ \end{array}\right),\quad L_+\stackrel{\cdot}{=}\hbar\left(\begin{array}{ccccc} 0 & 0 & 0 & 0 & 0\\ 2 & 0 & 0 & 0 & 0\\ 0 & \sqrt6 & 0 & 0 & 0\\ 0 & 0 & \sqrt6 & 0 & 0\\ 0 & 0 & 0 & 2 & 0\\ \end{array}\right)$$
So, inverting the definition, $L_x=\frac12(L_++L_-)$ and $L_y=\frac{1}{2i}(L_+-L_-)$, one can build the matrices corresponding to
$$\hat O_1 =\frac35 L_x-\frac45 L_y$$
$$\hat O_2 = 2L_x-6L_y + 3L_z$$
and calculate the eigenvalues by merely cranking the math, either:
manually;
using some software;
using some trick that I'm not able to see now;
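These matrices can be cross-checked numerically — a minimal sketch, assuming numpy is available and setting $\hbar = 1$:

```python
import numpy as np

l = 2
ms = np.arange(-l, l + 1)              # m = -2..2, basis |2,m> (hbar = 1)

# L_z is diagonal in the |l,m> basis
Lz = np.diag(ms).astype(complex)

# L+|l,m> = sqrt(l(l+1) - m(m+1)) |l,m+1>
Lp = np.zeros((2 * l + 1, 2 * l + 1), dtype=complex)
for i, m in enumerate(ms[:-1]):
    Lp[i + 1, i] = np.sqrt(l * (l + 1) - m * (m + 1))
Lm = Lp.conj().T                       # L- is the adjoint of L+

Lx = (Lp + Lm) / 2
Ly = (Lp - Lm) / 2j

O1 = 3 / 5 * Lx - 4 / 5 * Ly           # operator (b)
O2 = 2 * Lx - 6 * Ly + 3 * Lz          # operator (c)

print(np.linalg.eigvalsh(O1))          # close to [-2, -1, 0, 1, 2]
print(np.linalg.eigvalsh(O2))          # close to [-14, -7, 0, 7, 14]
```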
Francesco Bernardini
Short answer: the eigenvalue is: $\qquad l\cdot(l+1)\,\hbar^2 \qquad$ Consequently, you'll get $2\cdot3\,\hbar^2=6\hbar^2$.
$J$ is (sometimes) angular momentum in general. If your concrete angular momentum is $L$, then replace $j$ by $l$.
But this is only when $J$ is used to denote "angular momentum in general". $J$ is usually "total angular momentum" (sum of all angular momenta), which would not be the same anymore.
FGSUZ
Leave a comment if you're looking for more info. – FGSUZ Apr 8 '19 at 18:44
Kimonote
Travelling murderer problem: planning a Morrowind all-faction speedrun with simulated annealing, part 3
@mildbyte 2 years, 8 months ago | programming | games | morrowind | python |
Last time, I showed a way to generate a decent route through the quest graph as well as came up with a rough character progression that can be used to quickly complete all faction questlines in Morrowind.
Today, I'll analyse Mark/Recall and other miscellaneous transport modes, deal with an interesting Temple quest, showcase the final route and finally publish the code for the route planner.
Here's a map of the route for those of you who like migraines:
The points of interest here are joined with colour-coded lines. White is walking or flying, red is Mages Guild teleports, blue is Almsivi/Divine Intervention spells, yellow is travel by boat/silt strider and green is Recalls.
Mark/Recall are a pair of spells in Morrowind that allow the player to teleport around the game world. Casting Mark remembers a given place and Recall teleports the player to the last position the Mark was cast. Only one Mark can be active at a given time: casting it again removes the previous Mark.
Imagine casting Mark at the beginning of a dungeon and Recalling there once you're done (unlike Skyrim, Morrowind dungeons don't have a quick shortcut from their end back to the start), or placing a Mark next to an NPC providing transport services. This could shave a considerable amount of time off the route.
There are several questions here. Firstly, given a route, what's the most efficient arrangement of Mark and Recall casts for it? Secondly, can we change the optimiser to take the Mark/Recall spells into account? The best route through the quest graph alone might no longer be the best once Mark/Recall comes into play.
Single Mark
For now, imagine we have already settled on a route and can only place a Mark once in the game, in fact only at one node in the quest graph (and not anywhere on the route between nodes). What's the best place for it?
Since we have a matrix of fastest node-to-node travel times, given a Mark position, at each node in the route we can decide whether we want to proceed to the next node directly or by first teleporting to the Mark and then going to the next node. Try placing a Mark at each of the nodes in the route and see which one gives the fastest overall time:
def get_best_mark_position(route):
    return min(
        # can't use the mark until we've placed it
        (sum(get_node_distance(r1, r2) for r1, r2 in zip(route[:i], route[1:i]))
         + sum(
             # after placing the mark, we have a choice of recalling to it and
             # going to the next node, or going to the next node directly
             min(get_node_distance(r, r2), get_node_distance(r1, r2))
             for r1, r2 in zip(route[i:], route[i + 1:])),
         i, r)
        for i, r in enumerate(route))
I ran that and found out that by far the best position for a single Mark was right at the questgiver who's standing next to the Mages Guild teleport. This makes a lot of sense: a Recall to the Mages Guild gives the player instant access to 4 cities. Coupled with Intervention spells, this lets the player reach essentially any town in the game within a matter of minutes, if not seconds.
Multiple Marks
Now, again, given a single route through the quests, let's allow the player to place multiple Marks so that they can Recall to the last one they placed.
I first tried the same idea that I did for the route optimiser: take multiple possible arrangements of Marks (basically a Boolean array of the same length as the route that determines whether, at each node, we place a Mark there or not after we visit it), mutate each one (by randomly adding or removing Marks) and score it (sum up the decreased travel costs by considering at each node whether it's better to proceed to the next node directly or via a previous Mark).
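A minimal sketch of that scoring idea (illustrative names: `get_node_distance` is the matrix lookup from before, `marks` is the Boolean array being mutated):

```python
import random

def route_cost_with_marks(route, marks, get_node_distance):
    """Score a route given a Boolean array saying whether a Mark is
    dropped at each node right after visiting it."""
    cost, last_mark = 0, None
    for prev, nxt, mark_here in zip(route, route[1:], marks):
        if mark_here:
            last_mark = prev
        direct = get_node_distance(prev, nxt)
        if last_mark is not None:
            # alternative: Recall to the last Mark, then walk from there
            direct = min(direct, get_node_distance(last_mark, nxt))
        cost += direct
    return cost

def mutate(marks):
    """Randomly flip one Mark placement on or off."""
    marks = list(marks)
    i = random.randrange(len(marks))
    marks[i] = not marks[i]
    return marks
```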
I let this run for a while but it wasn't giving good results, quickly getting stuck in local minima. A big problem with this approach was that it didn't consider placing Marks in places visited between nodes, which excluded strategies like placing a Mark at the beginning of a dungeon (while a door to the dungeon is a point in the travel graph, it isn't a point in the quest graph).
To do that, I'd have to have a matrix of best travel times between each pair of nodes in the travel graph, not just the quest graph. Given how long my implementation of Dijkstra took to create the matrix for 100 nodes, I wasn't going to get away with reusing it.
Speeding things up
Floyd-Warshall is the nuclear option of pathfinding algorithms. Instead of finding shortest paths from a single source like Dijkstra would, it finds the shortest paths between any two vertices in the graph. Not only that, but it does so in \( \Theta(V^3) \), independently of the number of edges, making it perfect for dense graphs.
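In sketch form (plain Python over a dense matrix, not the exact implementation used here; `INF` marks a missing edge):

```python
INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths, in place, over a dense |V| x |V| matrix.
    dist[i][j] holds the edge weight i -> j, INF if absent, 0 on the diagonal."""
    n = len(dist)
    for k in range(n):
        dk = dist[k]
        for i in range(n):
            dik = dist[i][k]
            if dik == INF:
                continue  # no path through k from i
            row = dist[i]
            for j in range(n):
                alt = dik + dk[j]
                if alt < row[j]:
                    row[j] = alt
    return dist
```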
It took about 15 minutes to run my Python implementation of Floyd-Warshall on the coalesced 700-node graph. But this wasn't enough. I realised that coalescing vertices in the same in-game cell to a single one was giving strange results, too: for example, each node had an Almsivi/Divine Intervention edge towards the nearest Temple/Imperial Cult Shrine that has a weight considerably larger than zero (due to the fact that the central vertex for that cell was far away from the actual teleportation destination) and I was wondering if that could be skewing the route.
I hence decided to rerun the route planner on the full unprocessed 6500-node graph and rewrote the Floyd-Warshall implementation in C++. It still took 15 minutes to run it, but this time it was on the whole graph. Most of this time, in fact, was spent loading the input and writing the output matrices, since I serialised those into text and not binary.
And by that point I was on a roll anyway and rewrote the route planner in C++ as well. The Python program would now instead export the quest node distance matrix and the dependency graph to a text file. I didn't perform detailed measurements, but it definitely became a couple of orders of magnitude faster.
Adding Mark/Recall awareness to the optimizer
I tried rerunning the Mark/Recall planner on the fully expanded route (which enumerates each vertex on the travel graph) but by this point, it was getting more and more clear that simply maintaining a Mark at any Mages Guild teleporter was a really good option that was difficult to improve on.
This is a slightly trippy picture, but it's basically a contour plot that shows the average travel time (in real seconds, assuming a travel speed of about 750 units per second, which is achievable with an artifact that I'll talk about later) to any node in the quest graph from any point in the travel graph, interpolated using nearest-neighbour on pixels that didn't map to any points on the travel graph. I also added a travel cost of 5 seconds to public transport and Mages Guild teleporters. This was to account for the time spent in the game's UI as well as to nudge the optimiser into flailing less around multiple towns.
Strictly speaking, I should have actually calculated the average time at each pixel, but this picture is good enough. The colour map here ranges from blue (smallest average travel time) to green (largest). For example, Vivec (south of the game map) has the largest average travel times to any point of interest in the route. This is because the Temple of Vivec (one possible destination of an Almsivi Intervention spell) is on the other side of the city from other transport modes (boats/silt striders/Mages Guild) and so anyone near Vivec would have to first teleport to the Temple and then walk across the city to continue their journey.
On the other hand, despite being basically a wasteland, the southeast corner of the map has good travel connections: this is because a Divine Intervention spell takes the player to Wolverine Hall on the east side, right next door to the Mages Guild.
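A sketch of how such a per-pixel map can be put together (illustrative names: `node_xy` maps each travel-graph node to map coordinates, `dist[n]` is the list of travel times from node `n` to every quest node):

```python
import numpy as np

def average_time_map(node_xy, dist, width, height):
    """For each pixel, nearest-neighbour the closest travel node and
    take its average travel time to all quest nodes."""
    nodes = list(node_xy)
    coords = np.array([node_xy[n] for n in nodes], dtype=float)
    avg = np.array([np.mean(dist[n]) for n in nodes])
    img = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            # squared distance from this pixel to every travel node
            d2 = ((coords - (x, y)) ** 2).sum(axis=1)
            img[y, x] = avg[np.argmin(d2)]
    return img
```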
Silent Pilgrimage
There's a cool quest in Morrowind's Temple questline that involves the player completing a pilgrimage from the southernmost part of the game map to the northernmost. Sounds easy, right? Well, the only problem is that the player can't speak to anyone during the pilgrimage, which means the player can't use any public transport or Mages Guild teleports.
The honest way to do this is to actually walk or levitate the whole distance, which would take a few minutes even with Speed-increasing spells. The mostly-honest way to do this would be casting Divine/Almsivi Intervention spells in strategic places that would teleport the player part of the way between the spheres of influence of different Temples/Imperial Cult shrines. The dishonest way would be casting a Mark at the shrine during a previous visit and simply Recalling there when the pilgrimage starts.
However, the first version of the route planner wasn't really aware of that quest. I had a "Set Mark at Sanctus Shrine" graph node and a "Do the Sanctus Shrine quest" node, but the optimiser wasn't encouraged to put them close together. In the best route it had come up with, those two nodes were far apart and about 3/4 of the route was with the Mark stuck at the Shrine.
Hence, if we want to maintain a Mark at a Mages Guild, we also have to juggle that with having a Mark at the Sanctus Shrine in order to complete the Silent Pilgrimage. So the question now was kind of an inverse one: given that we can teleport to a Guild at any time (except for when the Mark is at the Shrine and so we'd get teleported there instead), what's the best route through the game quests?
I decided to produce two travel graphs: when there's a recall edge to a Mages Guild (it doesn't matter which one, since we can almost instantaneously teleport to any of them once we're there) and when there's a recall edge to the Sanctus Shrine.
The optimiser would get these two versions of the node-to-node distance matrix as well as the instructions specifying which matrix to use when. That way, it could also try to exploit the Mark in the northern part of the game map.
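Scoring a candidate route then just means picking the right matrix per leg — a sketch with illustrative names (`which_matrix[i]` records which Mark is active on the leg leaving node `i`):

```python
def route_cost(route, which_matrix, dist_guild, dist_shrine):
    """Score a route where each leg uses one of two precomputed
    node-to-node distance matrices, depending on the active Mark."""
    matrices = {'guild': dist_guild, 'shrine': dist_shrine}
    return sum(
        matrices[which_matrix[i]][a][b]
        for i, (a, b) in enumerate(zip(route, route[1:])))
```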
The best route it could come up with (not counting time spent in dialogue, combat, training or getting all required items/money at the start at the game) now took about 2500 seconds of real time, which looked quite promising.
Propylon Chambers
There's a mode of transport in Morrowind that I hadn't mentioned at all: Propylon Chambers. They're located inside 10 ancient Dark Elf strongholds that are scattered roughly in a circle around the map. Each stronghold has a Propylon Index that's hidden somewhere in the game world, and discovering a given stronghold's Index allows the player to travel to that stronghold from either of the two adjacent to it.
(from http://stuporstar.sarahdimento.com/other-mods/books-of-vvardenfell/key-to-the-dunmer-strongholds/)
Can they be useful here? After looking at their placement, it sadly doesn't seem so. Firstly, there are very few strongholds that are closer to quest objectives than ordinary towns and secondly, their Indices are often in inconvenient places (for example, the Rotheran Index is located in Rotheran itself).
But perhaps it's worth including Propylon Chambers in the route anyway? To test that, I assumed that the player has all Propylon indices from the beginning and regenerated the travel graph with the addition of teleportation between adjacent Dunmer strongholds. This would provide a lower bound on the route length and show whether there is enough time saving to make getting any of the Indices worthwhile.
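Wiring those extra edges into the travel graph is the easy part — a sketch with illustrative names (`strongholds` is the ring of ten in order, `add_edge` is whatever the graph builder exposes):

```python
def add_propylon_edges(add_edge, strongholds, cost=5):
    """Connect each stronghold to its two ring neighbours, both ways,
    with a flat teleport cost (the 5-second UI penalty)."""
    n = len(strongholds)
    for i, s in enumerate(strongholds):
        for j in (i - 1, (i + 1) % n):   # previous and next on the ring
            add_edge(s, strongholds[j], cost)
```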
Turns out, there really isn't. The best route without Propylon Chambers takes about 2500 seconds, whereas including them improves the route by only two minutes. There are a few places the optimiser decided to exploit this method of teleportation:
When performing quests around Suran, going to Marandus and teleporting to Berandas in order to get the Boots of the Apostle in that stronghold.
Then teleporting from Berandas to Falensarano on the eastern part of the game map to get Ring of the Wind from the cave nearby as well as (later on) deliver an item to a mine.
Teleporting to Valenvaryon several times for easier access to the island north of the map.
Teleporting from Hlormaren (a stronghold west of Balmora) to Falasmaryon where the Marksman master trainer lives.
Teleporting from Berandas to Rotheran close to the end of the route to grab the Ice Blade of the Monarch (located in that stronghold).
Given that simulating actually getting the Indices would also be a pain (I'd have to keep track of the optimal travel time between any two quest nodes for when the player has any combination of indices out of \( 2^{10} = 1024 \)), I decided to skip them for now.
Opening gambit and various exploits
There are a few things that are worth doing at the beginning of the game to ensure a smooth progression through the route as well as raise enough money to pay our way through training and some faction quests.
I'd use enchantments for most of the in-game activities, including dealing damage, teleporting and movement. Enchantments are spells that can be put on equipment. They require a soul gem to produce, which determines how much charge the item will have, but enchanted items recharge over time, don't use up player's Magicka reserves and spells cast from them can't fail and are instantaneous. This means that teleporting takes a few seconds faster since we don't need to wait for the cast animation, but more importantly, casts can't be interrupted by someone hitting the player.
The items enchanted with all three types of teleportation (Divine/Almsivi intervention and Recall) are easily obtained at the beginning of the game: the first two during Edwinna Elbert's initial questline and the final one can be bought from a merchant in Caldera. I hence changed the optimiser a bit to always have these two starting nodes (Ajira and Edwinna's Mages Guild quests) at the beginning of the route and would do some more preparation as part of these before proceeding.
Dealing damage
I had played around with various ways of dealing damage to NPCs. I first thought of using Blunt weapons, since the player would have to train that skill anyway and one of the best Blunt weapons has to be acquired as a part of an Imperial Cult quest, but it still takes several swings to kill anyone with it, since the player can miss, especially at lower skill levels.
Then I remembered about the Drain Health enchantment: it reduces the target's maximum health by a given number of points. It's supposed to be used as a cheap way to weaken the enemy, but it can also be exploited. If one casts Drain Health 100pt on someone, even for one second, they will die if they have fewer than 100 hit points. Paired with a 100-point Weakness to Magicka effect, this allows for a cheap way to kill anybody with fewer than 200 hit points, which is an overwhelming majority of game characters.
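As a back-of-the-envelope check (a toy model: I'm assuming Weakness to Magicka simply scales the magnitude of the subsequent Drain Health effect, and I'm glossing over how weaknesses and resistances stack in edge cases), the kill condition looks roughly like this:

```python
def dies_to_drain_combo(npc_health, drain=100, weakness_pct=100, resist_pct=0):
    # Toy model: Weakness to Magicka scales the subsequent Drain Health
    # magnitude up, natural Resist Magicka scales it down.
    effective = drain * (1 + weakness_pct / 100) * (1 - resist_pct / 100)
    return npc_health < effective

# The combo one-shots anyone under 200 hit points...
assert dies_to_drain_combo(199)
# ...but not tougher or magic-resistant NPCs, who need softening up first:
assert not dies_to_drain_combo(250)
assert not dies_to_drain_combo(199, resist_pct=50)
```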
Movement

Despite all the teleportation, there is still a lot of walking to be done in the game. While the character will have access to Fortify Speed potions, I only wanted to use them for long movement segments, since making enough of them to cover the whole route would take too much time.
Thankfully, there is an artifact in the game that gives the player a constant Fortify Speed effect: Boots of Blinding Speed. They boost the player's speed by 200 points (essentially tripling it) at the expense of blinding (it's in the name) the player. The blinding effect can be resisted: if the player has a Resist Magicka spell active for the split second when they put the boots on, the effect is nullified.
Moreover, levitation is important, since it allows the player to bypass various obstacles as well as avoid annoying enemies. Due to the way levitation speed is calculated (the main component is the sum of player's Speed and the levitation effect magnitude), 1 point of Levitation is sufficient for the player to start flying and it's cheaper to increase speed by manipulating the character's Speed attribute. 1 point of Levitation for about 90 seconds would be another enchantment.
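In other words (a simplification: the real formula also factors in things like Encumbrance and a global speed multiplier), the flying speed behaves roughly like:

```python
def levitation_speed(speed_attribute, levitate_magnitude):
    # Simplified: the dominant term is the sum of the player's Speed
    # attribute and the Levitate effect magnitude.
    return speed_attribute + levitate_magnitude

# A 1pt Levitate on a fast character outruns a big Levitate on a slow one,
# so it's cheaper to buy 1pt of Levitation and fortify Speed instead:
assert levitation_speed(200, 1) > levitation_speed(50, 100)
```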
Unlock, frenzy
Chests and doors in Morrowind can have a lock with a level ranging from 1 to 100. Hence, we'd need to also enchant a piece of clothing with Open 100pt.
There are quite a few times in the route where we need to kill someone who's not attacking us without attracting the guards' attention (like when doing Morag Tong assassinations before the actual quest starts). One way to do it is taunting the NPC until they attack, which takes time and needs a moderately high Speechcraft skill. Luckily, there's a magic effect for that, too. Frenzy increases the Fight rating of an NPC and 100pt for 1 second is enough to make them attack the player. When the effect wears off, they don't stop attacking and can be slain in self defence without legal issues.
Alchemy feedback loop and fundraising
When a player creates a potion in Morrowind, their chance of success, as well as the potion's strength, duration and value, is partially governed by the player's Intelligence attribute.
The player can also create a potion that boosts their Intelligence attribute.
Do you see how the game can be broken with this? There's no limit on how many potions the player can consume per second and there's no cap on the player's Intelligence. Hence we can have all our monetary problems taken care of by exploiting this and repeatedly creating stronger and stronger Intelligence potions to sell. Not only that, but we can also use this to create Restore Health potions that restore player's health faster than anybody can damage it as well as use the Intelligence boost to create enchanted items manually (instead of paying a specialist to do it). Finally, we can also create Fortify Speed potions that increase the player's raw speed.
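The growth is geometric, which is why the loop breaks the economy so quickly. Here's a toy model (the `strength_factor` is made up for illustration; the real potion magnitude also depends on Alchemy skill, Luck and apparatus quality):

```python
def intelligence_after_loop(start_int, batches, strength_factor=0.5):
    # Each batch fortifies Intelligence proportionally to the Intelligence
    # the player had while brewing it; drinking the batch compounds the gain.
    intelligence = start_int
    for _ in range(batches):
        intelligence *= 1 + strength_factor
    return intelligence
```

In this model, five buy-brew-drink cycles from a starting Intelligence of 50 already land above 375; in practice a handful of cycles is enough to make every subsequent potion absurdly strong and valuable.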
There are merchants in Morrowind that restock some of their ingredients as soon as the player stops trading with them and lots of them sell ingredients for Fortify Intelligence and Restore Health potions.
We need about 75000 gold pieces to get through the game, including all training and faction quests. Luckily, there's a merchant in the game, the Creeper, who has 5000 gold in his inventory and buys items at face value. My tests showed I needed about 150 Health potions to get me through the game, so I'd sell any extra ones to the Creeper to get me to the target number.
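Since each barter session is capped by the merchant's gold on hand, this puts a lower bound on the number of selling sessions (assuming his 5000 gold is back in his inventory between sessions):

```python
import math

def trading_sessions(target_gold, merchant_gold=5000):
    # Lower bound on the number of barter sessions needed to raise the target.
    return math.ceil(target_gold / merchant_gold)

assert trading_sessions(75000) == 15
```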
Fortifying player's Speed (beyond the boost provided by the Boots) is more difficult: there are only two ingredients in the game that restock and provide the Fortify Speed attribute, Kagouti Hide and Shalk Resin. However, they are quite expensive (52 gold pieces in total for the two) and also have a Drain Fatigue side effect (which makes the player lose consciousness when their Fatigue is close to zero). Hence they have to be paired with another two ingredients that have a Restore Fatigue effect.
Final route
Here's the final route that I came up with: it opens with the sequence of money-making and enchantments that I had described before and then continues with the list of things to do that was produced by the optimiser. This initial sequence took me about 28 minutes to complete and the rest of the route is located here. I also uploaded the route that assumes the player can use all Propylon Chambers here.
Create the character (build). Steal the Limeware Platter from the Census & Excise office before leaving and pick up the Ring of Healing from the barrel on the way.
Give the ring to Fargoth to boost relationship with Arrille. Go there, sell the platter, buy a Resist Magicka Spell, 2 Drathis' Winter Guest scrolls and an Iron Warhammer.
On the way to the silt strider, grab the 4 types of mushrooms (needed for Ajira's first quest). Take the silt strider to Balmora.
Join the Mages Guild, take all supplies from the chest and take the Ceramic Bowl from Ranis Athrys' table. Go downstairs to Estirdalin and make a spell of Resist Magicka 100% on Self. Hand in the first Ajira quest. Teleport to Caldera Mages Guild.
Steal the alchemy set from the tower in the Guild as well as 1 Dreugh Wax (needed for the Seven Graces quest).
Go north-west towards Gnaar Mok to meet Pemenie. Kill her with a combination of Thunder Fist (Nord racial power), the Drathis' scrolls and the Warhammer.
Use the Fortify Willpower and Restore Magicka potions from the supplies chest to successfully cast Resist Magicka spell and equip the Boots. Use the Almsivi Intervention scroll to go to Ald-Ruhn.
Go to the Mages Guild, take all supplies from the chest there.
Buy 2 Mark scrolls from Tanar Llervi (she sells one at a time but they restock after leaving the Barter menu).
Take Mages Guild teleport to Balmora, place a Mark while facing Masalinie Merian (Guild teleporter).
Do all remaining Ajira quests:
Make sure to steal the Grand, the Greater and the two Common Soul Gems as well as the Platter and any other Soul Gem (needed as a donation for a Temple quest) during the second quest.
Flowers: Willow Anther and Heather can be bought from Ajira herself. The other two can be stolen from Millie Hastien's shop. On the way there, sell the Platter to Ra'Virr next door.
Sell all rewards (potions) back to Ajira to have roughly 1000 gold for the next part.
Mages Guild teleport to Sadrith Mora, go to Aunius Autrus in the Imperial Shrine.
Alchemy loop time!
Use all money to continuously buy 10 Ash Yam, 5 Bloat and 5 Netch Leather from Aunius Autrus (should roughly end up with 260, 130 and 130 of each, respectively).
Use all Bloat and half of Ash Yams to make Intelligence potions, drink them all.
Use all Netch Leather and Ash Yams to make Intelligence potions, sell enough to Aunius to get all his money, drink the rest.
Go to Scelian Plebo and buy 10 Saltrice and 10 Marshmerrow 30 times.
Make 300 Restore Health potions with these ingredients. Sell enough Health potions to Scelian to get all his money.
Buy Drain Blood (has the Drain Health effect) and Frenzying Touch (has the Frenzy Humanoid effect) from Uleni Heleran in the Mages Guild on the way back.
Teleport to Balmora, Almsivi Scroll to the Temple.
Go to Nalcarya of White Haven. Buy 1 diamond (future Thieves Guild quest) and 4 Daedra Hearts (future Temple quest), sell her enough potions to drain her of money.
Go to Millie Hastien next door. Buy 1 Exquisite Amulet, 1 Exquisite Ring, 1 Extravagant Pants and an Extravagant Belt.
Back to Balmora MG, buy spells from Marayn Dren: Levitate, Ondusi's Open Door, Dire Weakness to Magicka.
Whilst still under the effect of Intelligence potions, make the following enchantments:
Exquisite Amulet: Weakness to Magicka 100% on Touch, Drain Health 100pt on Touch (use the stolen Grand Soul Gem).
Extravagant Belt: Levitate 1pt 90s on Self (use the stolen Greater Soul Gem)
Exquisite Ring: Open 100pt on Touch (Common Soul Gem)
Extravagant Pants: Frenzy Humanoid 100pt on Touch (Common Soul Gem)
Do hotkeying. I prefer having the Amulet on 1, Belt on 7, Ring on 8 and Pants on 9.
Whilst still under the effect of Intelligence potions, teleport to Sadrith Mora.
Buy 100 Shalk Resin and 100 Kagouti hide from Pierlette Rostorard (will need to sell her Health potions a couple of times in order to afford the ingredients).
Buy 100 Hound Meat and 100 Scuttle from Threvul Serethi.
Make 100 potions of Fortify Speed (+ Restore Fatigue) from these 4 ingredients. Alchemy should be slightly beyond 70 by this point (requirement for some Guild promotions). Hotkey Speed potions to 2.
Recall and teleport to Caldera. Sell Restore Health potions to Creeper until have about 75000 gold. Hotkey remaining ones to 3.
Recall, buy 6 Drathis' Winter Guest scrolls from Galbedir and buy Chronicles of Nchuleft from Dorisa Darvel, then steal a Dwemer Tube from Vorar Helas' house.
Recall, teleport to Ald-Ruhn and do all Edwinna Elbert quests up to and including Dwemer Tube. Should be rewarded with the Almsivi and Divine Intervention amulets. Hotkey those to 4 and 5, respectively.
Proceed as per the rest of the route.
Finally, there are several NPCs that have to be killed as part of the run and have to be damaged first before they can be killed with the Amulet, either with the Drathis' scrolls or with the Iron Warhammer/Skull Crusher when it's picked up:
Relas Arothan has one Sanguine Item required for extra Morag Tong reputation.
Lorbumol gro-Aglakh needs to be killed as part of the Fighters Guild questline. While he has 199 health, he has some natural magic resistance, decreasing the effect of the amulet.
Burub gra-Bamog also has some natural resistance to Magicka.
Orvas Dren, brother of the Duke and head of the local criminal syndicate, has to be killed as part of the House Hlaalu questline and has 250 hit points.
Varus Vantinius is the current head of the Imperial Legion and has to be killed in a duel to finish that faction's questline.
I think that's it! The code to produce most of this is on my GitHub, together with the code from the previous set of articles. One day I might even actually record myself trying to follow this route, but I'm sure planning it out is more fun than running it.
Finally, feel free to follow me on Twitter at twitter.com/mildbyte!
Previously, we left off by converting the problem of finding a route that completes all faction questlines in Morrowind into the general case of the travelling salesman problem with dependency constraints. Today, we'll come up with a way to produce a good enough solution to it.
Generating a travel time matrix
There are two graphs I'm talking about here: one is the quest dependency graph from the previous part and the other one is the travel graph that I had generated back in an earlier article.
The dependency graph had about 110 geographically distinct nodes at this point, so the first order of business was creating a matrix of fastest routes and travel times between any two of those nodes, since the final route could indeed include travelling between any two points.
To do that, I used Dijkstra's algorithm: since it's a single-source shortest-path algorithm, if I ran it for one geographical node in the quest dependency graph, I'd get the shortest routes (on the travel graph) to all other points. Hence I only had to run it about a hundred times.
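The per-source computation is textbook Dijkstra with a priority queue. A minimal sketch, assuming the graph is a nested dict mapping each vertex to its neighbours and edge costs in seconds:

```python
import heapq

def dijkstra(edges, source):
    # Single-source shortest travel times over the travel graph.
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for v, cost in edges.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(queue, (nd, v))
    return dist
```

Running this once per geographical quest node and keeping the distances to the other ~110 nodes yields the travel time matrix.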
There was a problem, though: the travel graph had about 6500 vertices and 16000 teleportation edges (that is, travelling with public transport or using an Almsivi/Divine Intervention spell: this doesn't include actual physical travel edges between points in the same cell). It took about 10 minutes to run Dijkstra for a single source, so I was looking at spending about a day generating the travel time matrix.
Hence I decided to prune the travel graph a bit by coalescing vertices that were in the same cell. For every cell (interior or exterior), I'd replace all vertices in it with a single one with average coordinates and then recalculate the cost of travelling between them:
def coalesce_cells(vertices, edges):
    # Replaces all vertices in the graph in the same cell with a single one (average location)
    vertices_map = defaultdict(list)
    for v in vertices:
        vertices_map[v.cell].append(v)

    # Calculate the average vertex for each cell
    average_vertices = {}
    for cell, vs in vertices_map.items():
        coords = tuple(sum(v.coords[i] for v in vs) / float(len(vs)) for i in range(3))
        average_vertices[cell] = Location(coords=coords, cell_id=vs[0].cell_id, cell=vs[0].cell)

    new_vertices = set(average_vertices[v.cell] for v in vertices)

    # Group edges by the average vertices they belong to
    grouped_edges = defaultdict(lambda: defaultdict(list))
    for v1 in edges:
        av1 = average_vertices[v1.cell]
        for v2 in edges[v1]:
            av2 = average_vertices[v2.cell]
            # Calculate the new edge cost: walk from the average vertex to the original
            # endpoint, take the original edge, then walk to the other average vertex
            grouped_edges[av1][av2].append(
                (edges[v1][v2][0],
                 get_distance(av1.coords, v1.coords) / WALKING_SPEED
                 + edges[v1][v2][1]
                 + get_distance(v2.coords, av2.coords) / WALKING_SPEED))

    new_edges = defaultdict(dict)
    for av1 in grouped_edges:
        for av2 in grouped_edges[av1]:
            # Replace all possible edges between the two new vertices with the cheapest one
            new_edges[av1][av2] = min(grouped_edges[av1][av2], key=lambda md: md[1])

    return new_vertices, new_edges
With this pruning, the travel graph shrunk to about 800 vertices and 2200 teleportation edges and I successfully managed to create a matrix of fastest travel times between any two nodes on the dependency graph.
Here's one of the cool things you can do with such a distance matrix: use a clustering algorithm to visualize the clumps in which quest points of interest are organized (the image is clickable).
For example, the top left corner of this heatmap has a group of NPCs that are all located on a set of remote islands at the north of the game map. Getting to them is a pain and takes a lot of time, hence it's worth arranging our quests in such a way so that we only have to visit there once.
Simulated annealing (genetic algorithm?)
Let's now say we have a candidate route, which is one of topological sorts of the dependency graph. We can see how long this route takes by simply adding up the cost of travel between consecutive nodes using our cost matrix.
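Evaluating a candidate is a single pass over the matrix. Assuming `travel_time` is the nested dict of seconds from the previous section:

```python
def route_time(route, travel_time):
    # Sum the travel times between each pair of consecutive quest nodes.
    return sum(travel_time[a][b] for a, b in zip(route, route[1:]))
```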
How would we find an optimal route? Brute force won't help here. I decided to do a slightly less stupid thing: let's take a route and randomly perturb it. Sure, the route we end up with might be less efficient than it was before. But imagine we do that for tens of thousands of randomly generated routes, keeping a fraction of them that's the most efficient, randomly perturbing the best routes again and again. Eventually we'd converge on a decent route, if not the most optimal one.
The final algorithm I used is:
Start with a pool of candidate routes: take a single topological sort and repeat it 20000 times
Do until I get bored and terminate the optimization:
    sort the routes by their total time, keep top 1000
    for each remaining route:
        generate 20 candidate routes from it:
            pick a random point in the route and move it a random number of steps up or down
            check the dependency graph is still satisfied, if not, try again
            do this perturbation 30 times
    the pool now has 20000 routes again, repeat
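A sketch of that loop in Python (the pool/survivor/perturbation constants are the ones above; `cost` would be a closure over the travel time matrix, and `deps` maps a node to the set of nodes that must precede it):

```python
import random

def is_valid(route, deps):
    # Check the route is still a topological sort of the dependency graph.
    seen = set()
    for node in route:
        if not deps.get(node, set()) <= seen:
            return False
        seen.add(node)
    return True

def perturb(route, deps, attempts=30):
    # Move a random node to a random position, keeping only valid results.
    route = list(route)
    for _ in range(attempts):
        candidate = list(route)
        node = candidate.pop(random.randrange(len(candidate)))
        candidate.insert(random.randrange(len(candidate) + 1), node)
        if is_valid(candidate, deps):
            route = candidate
    return route

def optimise(initial, deps, cost, pool_size=20000, keep=1000, children=20, rounds=50):
    pool = [list(initial) for _ in range(pool_size)]
    for _ in range(rounds):  # "until I get bored"
        pool.sort(key=cost)
        pool = [perturb(r, deps) for r in pool[:keep] for _ in range(children)]
    return min(pool, key=cost)
```

Since every perturbation is validated against the dependency graph before being accepted, every route in the pool stays a legal ordering throughout.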
Of course, the actual constants can be played with and the termination condition could be better defined. Some call this a genetic algorithm (where we kind of simulate evolution and random mutations in the gene pool), some call it simulated annealing (where the magnitude of random perturbations decreases over time until the solution pool settles down). "Genetic algorithm" sounds sexier, which is why I mentioned it in this paragraph.
I left this to run overnight and in the morning came back to what seemed to be a decent route through the game.
The times here were inferred from in-game travel distances, assuming the minimum walking speed of about 100 game units per second. Of course, there are potions and spells to increase the player's walking speed. In addition, this doesn't account for the time spent in the menus or actually killing whatever the player is supposed to kill.
Overall, there are some things the optimiser came up with that made me go "aha!".
I wrote a pretty printer that would take the graph nodes and expand them into an actual travel plan that uses Almsivi/Divine Intervention spells and public transport. In this fragment, for example, the route planner set up the faction questline progress just right so that all six objectives in the desolate southwest corner of the map could be completed in one go (lines 592-618).
However, there are a few problems with this route:
It doesn't account for the uses of Mark/Recall spells. These are immensely powerful: a Recall teleports the player to the location of the last time a Mark spell was cast.
It doesn't account for skills training in order to progress through faction quests.
Advancement in Morrowind factions requires not only quest completion, but also skills training. I had already mentioned that while we can pay to train a skill, it can't be trained above its governing attribute.
Attributes can only be raised when the player levels up. A game character has 5 out of 27 skills as major skills (which lets them level faster and gives a flat +25 bonus to them at the beginning of the game) and 5 minor skills (which also lets them level faster, albeit not as fast as major skills, and adds a +10 bonus). The character levels up when they have gotten 10 points in their major or minor skills.
This is where it gets weird. At level up, the player picks 3 attributes to raise. How much they are raised by is determined by the skills the player had trained. For example, if they got 10 points in Alchemy (governed by Intelligence), then, if Intelligence is picked at level up, it will increase by 5 points instead of 1. However, if the player had leveled up by training 1 point in Long Blade (governed by Strength) and 9 points in Alchemy, they'll only get a 4x multiplier to Intelligence and 1x to Strength.
The player can also train skills that aren't major or minor to get enough points to boost the attribute multiplier. Let's say the player also trains 1 point in Security (governed by Intelligence) which isn't their major or minor skill. It won't count towards the 10 points required for a level up, but it will count towards the attribute multiplier calculations. Hence the player will be able to raise their Intelligence by 5.
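The multiplier table implied by these examples (which matches the game's actual thresholds) can be written down directly:

```python
def attribute_multiplier(points):
    # points: skill increases (major, minor or misc) in skills governed
    # by this attribute since the last level-up.
    if points >= 10:
        return 5
    if points >= 8:
        return 4
    if points >= 5:
        return 3
    if points >= 1:
        return 2
    return 1

assert attribute_multiplier(10) == 5     # 10 points of Alchemy: 5x Intelligence
assert attribute_multiplier(9) == 4      # 9 Alchemy + 1 Long Blade: only 4x
assert attribute_multiplier(9 + 1) == 5  # the extra point of Security fixes that
```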
I hence had to tactically choose my character's major/minor skills as well as the race (which gives bonuses to certain skills and attributes) in order to be able to quickly meet each faction's expectations.
Overview of factions and required skill levels
This is a list of skill levels that each faction requires in order for the player to be able to become the head of that faction. Note that this might not necessarily meet the skill requirements for the highest rank of that faction, since most factions stop checking the player's credentials during their final questlines and just promote the player to the highest rank once the questline is completed.
Mages Guild: Alteration, Destruction, Alchemy, Enchant, Illusion, Mysticism. One skill at 70, two at 25, Intelligence and Willpower 33.
Fighters Guild: Axe, Long Blade, Blunt Weapon, Heavy Armor, Armorer, Block; 70/25/25, Strength and Endurance 33.
Thieves Guild: Marksman, Short Blade, Light Armor, Acrobatics, Sneak, Security; 80/30/30, Agility and Personality 34.
Tribunal Temple: Alchemy, Blunt Weapon, Conjuration, Mysticism, Restoration, Unarmored; 80/30/30, Intelligence and Personality 34.
Morag Tong: Acrobatics, Illusion, Marksman, Light Armor, Short Blade, Sneak; 80/30/30. Speed and Agility 34.
Imperial Cult: Speechcraft, Unarmored, Restoration, Mysticism, Enchant, Blunt Weapon; 90/35/35. Personality and Willpower 35.
Imperial Legion: Athletics, Spear, Long Blade, Blunt Weapon, Heavy Armor, Block; 70/25/25. Endurance and Personality 33.
House Hlaalu: Speechcraft, Mercantile, Marksman, Short Blade, Light Armor, Security; 70/25/25. Speed and Agility 33.
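All of these follow the same "one favoured skill high, two others medium" pattern, so a single predicate covers every faction (attribute checks omitted for brevity):

```python
def meets_requirements(skills, favoured, high, medium):
    # skills: skill name -> level; favoured: the faction's six favoured skills.
    levels = sorted((skills.get(s, 0) for s in favoured), reverse=True)
    return levels[0] >= high and levels[2] >= medium

MAGES_GUILD = ["Alteration", "Destruction", "Alchemy", "Enchant", "Illusion", "Mysticism"]
assert meets_requirements({"Alchemy": 70, "Mysticism": 35, "Enchant": 35}, MAGES_GUILD, 70, 25)
assert not meets_requirements({"Alchemy": 70, "Mysticism": 35}, MAGES_GUILD, 70, 25)
```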
Character planning
With that in mind, I decided to have Alchemy, Blunt and Marksman as high level skills. Alchemy (main skill for the Mages Guild) could be trained really quickly by making potions. Blunt was shared between 4 factions (Fighters Guild, Temple, Imperial Cult and Imperial Legion) and would have to be trained to 90. Marksman would cover the other 3 factions (Thieves Guild, Morag Tong and House Hlaalu) and trained to 80.
The other skills had to be chosen partly to cover the remaining, weaker requirements and partly so that training them would boost Strength to 90 and Agility to 80 (otherwise Blunt and Marksman couldn't be trained that high). I hence decided to go for a character that starts with high Strength and a bonus to Blunt weapons and train Long Blade to boost Strength (and cover the Fighters Guild/Imperial Legion secondary skill requirement).
For Agility, I would train Block, Light Armor and Sneak. All three of those are governed by Agility and training them to required levels would result in Agility being boosted enough to allow me to train Marksman to 80.
Enchant and Mysticism would cover the secondary requirements for the Temple, the Mages Guild and the Imperial Legion.
Here's the final character sheet. The major and minor skills that she starts with are:
Alchemy: 35. To be trained to 70 by making potions (main skill for MG, secondary skill for T).
Blunt: 40. To be trained to 90 (main skill for FG, IL, IC and T).
Marksman: 30. To be trained to 80 (main skill for TG, MT and HH).
Mysticism: 35, doesn't need to be trained (secondary skill for MG, T and IC).
Enchant: 35, doesn't need to be trained (secondary skill for MG and IC).
Long Blade: 25. To be trained to 45 to get extra Strength points (secondary skill for FG and IL).
Sneak: 15. To be trained to 30 (secondary skill for TG and MT).
Block: 15. To be trained to 30 (secondary skill for FG and IL).
Speechcraft: 15. To be trained to 25 for extra 5 Personality points (secondary skill for HH).
Light Armor: 15. To be trained to 30 (secondary skill for TG, MT and HH).
Encoding training in the quest dependency graph
I decided not to load up Morrowind trainer data in order to incorporate it into the route planner. Instead, I looked up the best trainers for Blunt and Marksman (since they're the only ones that will let the player reach the required level) as well as some second best ones and tried to come up with people that the player character would meet en route anyway. There were some hilarious coincidences, like Alveleg who has to be killed as part of a Fighters Guild quest but who can also train the player in Block, Sneak and Marksman up to fairly high levels.
I then added some extra nodes to the dependency graph to reflect the new training sessions:
# Training nodes
training_alveleg:
  # we're killing him as part of the FG quest and he trains Marksman (45), Sneak (42) and Block (38)
  description: Train Block x10 (up to 25), Sneak x15 (up to 30), Marksman x15 (up to 45), should get Agi 60
  giver: alveleg
training_bolnor:
  description: Train Light Armor x15 (up to 30), Marksman x5 (up to 50), should get Agility 70
  giver: bolnor andrani
  prerequisites:
    - training_alveleg
training_eydis:
  description: Train Long Blade x20 (up to 40), Blunt x30 (up to 70), Strength 85
  giver: eydis fire-eye
training_ernse:
  description: Train Blunt x20 (up to 90)
  giver: ernse llervu
  prerequisites:
    - training_eydis
training_missun:
  description: Train Marksman x30 (up to 80)
  giver: missun akin
  prerequisites:
    - training_bolnor
training_falvel:
  description: Train Mercantile x10 (should get Personality 35)
  giver: falvel arenim
They would then become prerequisites for some later quests in faction questlines:
tt_tharer_1:
  description: Get and hand in all Tharer Rotheloth quests
  giver: tharer rotheloth
  prerequisites:
    - tt_7graces_vivec
    - tt_7graces_gnisis
    - tt_7graces_kummu
    - tt_7graces_gg
    - tt_cure_lette
    - tt_mount_kand
    - tt_mawia
    - tt_kill_raxle_berne
    - training_eydis # Curate (50 blunt) to hand in Galom Daeus quest
In some cases, the requirements I added were stronger than necessary. For example, one could get promoted to Master of Fighters Guild with a Blunt skill of 80, yet it depends on a graph node training Blunt to 90. The reasoning behind it was that we don't want to visit the Master Blunt trainer more than once: if we're visiting her, we might as well train Blunt to the maximum we'll need.
Next up, we'll try to add the usage of Mark and Recall spells to the route as well as discuss some miscellaneous Morrowind tricks and glitches that can help during a speedrun.
Well, not even last night's storm could wake you. I heard them say we've reached Morrowind, I'm sure they'll let us go and do a speedrun.
Jiub
There's the famous speedrun of Morrowind's main quest that involves basically travelling to the final game location using a few scrolls and spells and killing the boss.
However, there isn't a Morrowind speedrun category where someone tries to become the head of all factions. For all its critical acclaim and its great story, most quests in Morrowind are basically fetch-item or kill-this-person quests and there aren't many that require anything else. But planning such a speedrun route could still be extremely interesting for many reasons:
There are 10 joinable factions in Morrowind (Mages/Thieves/Fighters Guild, Morag Tong, Imperial Cult, Imperial Legion, Tribunal Temple and the Great Houses: Hlaalu, Redoran and Telvanni). The player can only be a member of one Great House in a playthrough, but that still leaves 8 factions to do.
The transport system. It's not just a matter of fast-travelling to certain locations like one would do in Skyrim or Oblivion. Instead, travel is via several transportation modes, including boats, caravans and the teleportation spells that I had previously investigated. A lot of walking is still required, so it's important to arrange faction questlines to avoid redundant trips to different cities.
There are many ways to become the head of a given faction. Faction questlines use a promotion system where new questlines open up as the character attains higher ranks at a faction. Promotion is a matter of not only reputation points (awarded by quests) but also player skills and attributes.
Some quest objectives can be pre-completed or done in a different way. For example, if the quest giver wants the player to kill someone, that someone can often be killed before the quest even starts, at which point, most of the time, the quest giver will give the player the reward anyway. However, sometimes this might not work and the player will lose out on reputation points required to unlock further questlines. Similarly, in most fetch quests the questgiver suggests where the player can get a given item, but doesn't care if it was bought in a nearby shop a few minutes ago.
So given those features, this can get really complicated. On the way to a given quest objective the player can pick up another quest, or pick up an item that will be needed for a quest for a faction they aren't even a member of yet. An efficient route through one faction's quests in isolation might be inferior to a slower one when all factions are played through, since the points it visits might be covered by other factions' quests anyway, and so on.
In other words, planning an efficient route through all factions would be a fun computer science problem.
A note on skill requirements and Morrowind's levelling system
There are a couple of factions where the final quest can be completed immediately, but that just results in a journal entry saying that the player character is now the head of the faction (and the advancement is not reflected in the character stats). I decided I wanted to rise to the top the mostly-honest way instead.
Unlike Skyrim and Oblivion, advancement in Morrowind factions requires the player to have certain skills at a certain level. There are 27 skills in Morrowind and each faction has 6 so-called "favoured skills". Becoming head of a faction requires the player to have one of these skills at a very high level (roughly 80-90 out of 100) and 2 of them at a medium level (about 30-35).
Morrowind characters also have 7 attributes, each of which "governs" several skills. Attributes also play a role in faction advancement.
So that's kind of bad news, since in a speedrun we won't have enough time to develop our character's skills. The good news is there are trainers scattered around Morrowind that will, for a certain fee, instantly raise these skills. The bad news is that these trainers won't train skills above their governing attributes. Raising attributes requires levelling and levelling in Morrowind is a very long story. I'll get into the actual levelling strategy later.
Different routes through a given faction
I quickly gave up on scraping quest data from the game files (since most quests are driven and updated by a set of dialogue conditions and in-game scripts) and instead used the UESP's Morrowind Quests page to manually create a series of spreadsheets for each faction that included quests, their reputation gain and rough requirements.
Here's an example of one such spreadsheet:
This spreadsheet already shows the complexity of Morrowind factions. There are two intended ways to reach the top of the Mages Guild: by having enough reputation and skills to become a Master Wizard and either completing all of Edwinna Elbert's quests and challenging the current Arch-Mage to a duel or completing all of Skink-in-Tree's-Shade's quests and getting a letter from the upper management telling the current Arch-Mage to step down. I later found another way, by reaching the rank of Wizard (one rank below Master Wizard) and then talking to the current Arch-Mage about a duel, which is quicker.
Other than that, there's also multiple ways to complete a quest. Edwinna Elbert's final 3 quests requiring the player to bring her some Dwarven artifacts don't require the player to actually go to the places she recommends: the artifacts can be acquired from different locations or even bought.
Generating all possible routes through a faction
...turned out to be tricky. The first cut of this was encoding each quest in a YAML file as a set of prerequisites and required items/actions for completion. For example:
edwinna_2:
  giver: edwinna elbert
  prerequisites:
    rank: Conjurer
    quest: Chimarvamidium 2
  quests:
    - Dwemer Tube:
        items: misc_dwrv_artifact60
    - Nchuleftingth:
        go_person: anes vendu
This encodes the start of Edwinna Elbert's advanced questline, Dwemer Tube from Arkngthunch-Sturdumz, which requires the player to have become a Conjurer in the Guild and completed Edwinna's previous quest. To complete this quest, the player needs to have the tube in their inventory (I used the in-game item ID). Completion gives the player 5 faction reputation points.
The questline continues with Nchuleftingth Expedition and to complete that quest, the player needs to go to a certain NPC (he's an archaeologist who has, as it turns out, perished). Unlike the previous quest, this action (of going to a person and interacting with them) requires us to have started the quest.
So with that in mind, we can generate a set of all possible ways to complete a guild using breadth-first search:
S = set of quest sequences, initially containing just the empty sequence
repeat:
    for each sequence in S:
        if it already completes the guild, ignore it
        otherwise, get all possible next quests that can be done in this sequence:
            where the quest prerequisites have been met (e.g. a previous/required quest in the questline has been completed)
            where there's enough reputation to start a new questline
        add each one of these possible quests to the sequence to create several new sequences
        replace the current sequence with the newly generated ones
until S stops changing
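The pseudocode above can be sketched in Python. This is a toy model (the quest names, reputation values and the reputation-only completion condition are all made up for illustration; the real encoding also tracked items and actions):

```python
# Toy quest model (names and numbers are hypothetical): each quest has a set
# of prerequisite quests and a reputation reward.
QUESTS = {
    "A": {"prereqs": set(), "rep": 5},
    "B": {"prereqs": {"A"}, "rep": 5},
    "C": {"prereqs": {"A"}, "rep": 10},
}
GOAL_REP = 15  # reputation needed to "complete" the toy guild

def completes_guild(seq):
    return sum(QUESTS[q]["rep"] for q in seq) >= GOAL_REP

def possible_next(seq):
    # Quests not yet done whose prerequisites have all been met.
    done = set(seq)
    return [q for q, spec in QUESTS.items()
            if q not in done and spec["prereqs"] <= done]

def all_routes():
    # Breadth-first expansion: extend each incomplete sequence by every
    # possible next quest until all surviving sequences complete the guild.
    sequences, finished = [[]], []
    while sequences:
        new = []
        for seq in sequences:
            if completes_guild(seq):
                finished.append(seq)
            else:
                new.extend(seq + [q] for q in possible_next(seq))
        sequences = new
    return finished
```

On the toy data this yields exactly two completing sequences (`["A", "C"]` and `["A", "B", "C"]`), and it's already easy to see how parallel questlines would multiply the interleavings.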
Combinatorial explosions, combinatorial explosions everywhere
What could possibly go wrong? Well, firstly there's an issue of ordering. If the player is juggling two parallel questlines from different questgivers, each possible interleaving of those is counted, which causes a combinatorial explosion. Secondly, routes that are strictly worse than existing routes are generated too. For example, if completing a certain guild requires us to only complete quests A, B, D and E, there's no point in generating a route A, B, C, D, E: there's no way doing D won't take extra time.
I hence did some culling by making sure that during generation we wouldn't consider a sequence if it were a superset of an already existing quest sequence. This brought the number of generated routes (subsets, really) down to a mildly manageable 300.
Is this good? Well, not really. This only accounted for which sets of quests could be completed. There was no mention of the order in which these quests could be completed (yielding probably millions of permutations), the ordering of actual actions that would complete a given quest (for example, completing a given quest could involve killing someone and that could happen even before the player character was aware of a quest) or the alternative routes (like fetching a required item from a different place or doing an extra objective to get more faction reputation).
Even worse, this was just the route generation for one faction. There were 7 more factions to do (and I had to pick a Great House that would be the quickest to complete, too) and even if they didn't have that many ways to complete them, brute-forcing through all the possible routes across all factions would definitely be unreasonable.
This method also wouldn't let me encode some guild features. For example, Morag Tong, Morrowind's legal assassin guild, has several questgivers around the world, any of which can give the player their next contract. Furthermore, the reputation required for the final questline to open can be gathered not only by doing assassination contracts, but also by collecting certain items spread around the world, each yielding about the same reputation as a contract. These items can quite often be found in dungeons that the player has to visit for other factions anyway and it could be the case that doing those quests to collect these items is overall faster.
Attempt 2: a single quest dependency graph
I hence decided to drop the idea of combining all possible routes from all guilds and instead did some experimentation to find out if there are obviously quick routes through most guilds. Turns out, there were and so instead of solving a few million instances of the Travelling Salesman Problem, I could do with just one. Still impossible, but less impossible.
Quick overview of fastest routes for a given faction
In the Mages Guild, the introductory questline can be completed in a matter of minutes and yield 22 reputation points and then Edwinna's quests can be completed en route to other quest locations that will likely have to be visited anyway. Those two questgivers would bring the player character over the 70 reputation limit required to challenge the current Arch-Mage (at that point, I wasn't looking at skills training yet).
The Fighters Guild could be completed by doing all quests from one questgiver (most of which involved killing bandits in roughly the same area which can be done even before the quest begins), a couple from another one and then proceeding on to a final questline (which does have a quest requiring to bring some items to the middle of nowhere, but the alternative ending requires many more reputation points).
The Thieves Guild has some conflicts with the Fighters Guild and so the two questlines have to be carefully managed together. Almost all quests in the Thieves Guild need to be done (since doing some Fighters' Guild quests decreases reputation with the Thieves Guild), but the good news is that they share the antagonist and so after reaching a certain reputation with the Thieves Guild, finishing the Fighters Guild promotes the character to Master Thief.
Morag Tong can basically be completed in one go: after the initial contract required to join the Guild, the player collects enough Sanguine items to skip all contracts straight on to the final questline and the location of the final boss is visited twice in other guilds' quests.
Tribunal Temple starts with a mandatory pilgrimage that visits a few locations around the game map. There are several more pilgrimages as part of the questline and some of those can be completed even without having joined the faction.
Imperial Legion has a questline that takes place in a single town and requires the player to visit a location that's visited anyway in Edwinna Elbert's questline in the Mages Guild. In addition, one quest gives additional reputation with the Temple, allowing the player to skip one quest there.
Imperial Cult has three questlines. One of them involves fundraising and, just like in real life, the player can simply give the money to the questgiver on the spot instead of asking others for it. Another involves fetching several powerful artifacts and visiting a couple of locations that are visited in other guilds' questlines.
After eyeballing the Great Houses' questlines, I settled on House Hlaalu. House Redoran has a way too long questline, most of the action in House Telvanni happens on the East side of the game map that mostly isn't visited in other quests and the final Hlaalu questline that leads to becoming Grandmaster can be started at an earlier rank.
Quest dependency graph
Now that I had a single route for each guild, instead of encoding each and every quest requirement and location in a graph, I opted for an easier way. Each node in a quest dependency graph would be something that's fairly quick to complete and happens in the same location. It could be a quest, or a series of quests, or the action of clearing out some dungeon that is featured in several future quests.
A node contains two things: where this node is located (for example, the in-game ID of the questgiver or an NPC in the location that the player needs to clear out or find a certain item) and nodes that the player needs to have completed before.
# Coalesced Ajira questline
mg_ajira_1:
  giver: ajira

# Edwinna's quests up until Nchuleftingth expedition, all done in one go (Dwemer tube stolen
# from Vorar Helas in Balmora, then Chimarvamidium, Skink and Huleen)
mg_edwinna_1: # also gives Almsivi/Divine amulets
  - mg_ajira_1
mg_edwinna_2:
  - mg_edwinna_1
  - mg_edwinna_nchuleftingth
  - mg_edwinna_scarab_plans
  - mg_edwinna_airship_plans

# locations of items we need to collect to complete Edwinna's quests
mg_edwinna_nchuleftingth:
  giver: anes vendu # can discover his body before the quest begins
mg_edwinna_scarab_plans:
  giver: Khargol gro-Boguk # orc in Vacant Tower with the other copy of the plans
mg_edwinna_airship_plans:
  giver: lugrub gro-ogdum # located near the orc in Gnisis Eggmine that is also a part of the IL quest
mg_master:
  giver: trebonius artorius
In this case, the Dwarven plans required by Edwinna can be collected even before the questline begins and then all handed in at the same time.
When talking to someone had to be done as a part of the quest, I encoded it as several nodes that depended on each other:
fg_eydis_1_start: # join FG and start first quest
fg_eydis_1_do:
  giver: drarayne thelas # actually do the first quest
  - fg_eydis_1_start
fg_eydis_1_end: # hand the quest in
  - fg_eydis_1_do
Here's the final quest dependency graph:
This was much better than messing around with reputation points and quest prerequisites. Any topological sorting of this dependency graph would be a valid route through the game's quests (assuming I encoded my dependencies correctly). Since each node had a fixed geographical location, I could use a pathfinding algorithm and the data from my previous project to find out the time that any given route satisfying this dependency graph (using teleportation and public transport) takes.
However, there's still a problem: there are many possible topological sortings of a given graph and counting them is #P-complete.
This is a generalisation of the travelling salesman problem (with precedence constraints it's known as the sequential ordering problem): here we need to find the shortest tour that visits all nodes subject to a set of dependencies (e.g. we can't visit A before we've visited C), whereas in plain TSP we need to visit all nodes without any dependencies. Having dependencies decreases our search space (in the most extreme case the dependency graph is a line and so there's only one possible route), but not by enough.
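Any topological order of the dependency graph is a candidate route; in modern Python this can be produced and checked with the standard-library graphlib (3.9+). A sketch on a toy fragment of the graph:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A toy fragment of the dependency graph: node -> set of prerequisite nodes.
deps = {
    "mg_ajira_1": set(),
    "mg_edwinna_1": {"mg_ajira_1"},
    "mg_edwinna_nchuleftingth": set(),
    "mg_edwinna_2": {"mg_edwinna_1", "mg_edwinna_nchuleftingth"},
}

# static_order() yields the nodes with every prerequisite before its
# dependents, i.e. one valid route through this fragment.
route = list(TopologicalSorter(deps).static_order())

# Sanity check: every node appears after all of its prerequisites.
positions = {node: i for i, node in enumerate(route)}
assert all(positions[p] < positions[n] for n, ps in deps.items() for p in ps)
```

The hard part, of course, is not producing *a* topological order but finding the one with the shortest travel time.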
I hence had to develop some approximations to turn this graph and the matrix of travel times between its nodes into a good-enough route.
Next up, I'll try a couple of randomised approximations to solve this problem, including simulated annealing and perhaps a genetic algorithm. There's also the matter of planning out the player character and his/her skill development in order to minimize the amount of time we need to spend training up to get promoted in various guilds. Stay tuned!
You're probably not going to get rich in the stock market
@mildbyte 2 years, 9 months ago | python | thoughts | tech | finance |
source: XKCD
Abstract: I examine inflation-adjusted historical returns of the S&P 500 since the 1870s with and without dividends and backtest a couple of famous investment strategies for retirement. I also deploy some personal/philosophical arguments against the whole live-frugally-and-then-live-frugally-off-of-investments idea.
Disclaimer: I'm not (any more) a finance industry professional and I'm not trying to sell you investment advice. Please consult with somebody qualified or somebody who has your best interests at heart before taking any action based on what some guy said on the Internet.
The code I used to produce most of the following plots and process data is in an IPython notebook on my GitHub.
Early retirement is simple, right? Just live frugally, stop drinking Starbucks lattes, save a large fraction of your paycheck, invest it into a mixture of stocks and bonds and you, too, will be on the road towards a life of work-free luxury and idleness driven by compound interest!
What if there's a stock market crash just before I retire, you ask? The personal finance gurus will calm you down by saying that it's fine and the magic of altering your bond position based on your age as well as dollar cost averaging, together with the fact that the stock market returns 7% on average, will save you.
As sweet as that would be, there is something off about this sort of advice. Are you saying that I really can consistently make life-changing amounts of money without doing any work? This advice also handwaves around the downside risks of investing into the stock market, including the volatility of returns.
I wanted to simulate the investment strategies proposed by personal finance and early retirement folks and actually quantify whether putting my "nest egg" into the stock market is worth it.
This piece of writing was mostly inspired by NY Times' "In Investing, It's When You Start And When You Finish" that shows a harrowing heatmap of inflation-adjusted returns based on the time an investment was made and withdrawn, originally created by Crestmont Research. They maintain this heatmap for every year here.
This post is in two parts: in the first one, I will backtest a few strategies in order to determine what sort of returns and risks at what timescales one should expect. In the second one, I will try to explain why I personally don't feel like investing my money into the public stock market is a good idea.
Simulating common stock market investment strategies
The data I used here is the S&P 500 index, and I'm assuming one can invest into the index directly. This is not strictly true, but index tracker funds (like Vanguard's VOO ETF) nowadays are pretty good and pretty cheap.
A friend pointed me to a paper that has an interesting argument: using the US equity markets for financial research has an implicit survivorship bias in it. Someone in the 1870s had multiple countries' markets to choose from and had no way of knowing that it was the US that would later become a global superpower, with a large part of its equity gains owing to that.
As a first step, I downloaded Robert Shiller's data that he used in his book, "Irrational Exuberance", and then used it to create an inflation-adjusted total return index: essentially, the evolution of the purchasing power of our portfolio that also assumes we reinvest dividends we receive from the companies in the index. Since the companies in the index are large-cap "blue chip" stocks, dividends form a large part of the return.
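The construction itself is simple: each month, grow the portfolio by the price change plus a month's worth of dividends, then deflate by CPI. A minimal sketch with made-up numbers (the Shiller spreadsheet has monthly price, dividend and CPI columns; the exact column handling here is an assumption):

```python
def total_return_index(prices, dividends, cpi):
    """Inflation-adjusted total return index with dividends reinvested.

    prices: monthly index levels; dividends: annualised dividend for that
    month; cpi: monthly CPI levels. All three series have the same length.
    """
    index = [1.0]
    for i in range(1, len(prices)):
        # One month's growth: price change plus 1/12th of the annual
        # dividend, bought at last month's price...
        growth = (prices[i] + dividends[i] / 12.0) / prices[i - 1]
        # ...deflated by that month's realised inflation.
        inflation = cpi[i] / cpi[i - 1]
        index.append(index[-1] * growth / inflation)
    return index

# Made-up sanity check: flat prices, no inflation, 4.4% dividend yield --
# the index should grow at roughly the dividend yield over a year.
prices = [100.0] * 13
dividends = [4.4] * 13
cpi = [1.0] * 13
idx = total_return_index(prices, dividends, cpi)
```

With flat prices and no inflation the index grows by about 4.5% over the 12 months, i.e. roughly the dividend yield, as expected.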
I compared the series I got, before adjusting for inflation, with the total return index from Yahoo! Finance and they seem to closely match the Yahoo! data from the 1990s onwards.
The effect of dividends being reinvested changes the returns dramatically. Here's a comparison of the series I got with the one without dividends (and one without inflation):
The average annual return, after inflation, and assuming dividends are reinvested, is about 7.5%. Interestingly, this seems to contradict Crestmont Research's charts where the average annual pre-tax (my charts assume there are no taxes on capital gains or dividends) return starting from 1900 is about 5%.
On the other hand, the return with dividends reinvested does seem to make sense: without dividends, the annualized return is about 4.25%, which implies a dividend yield close to the actual observed values of about 4.4%.
Another observation is that the returns are not normally distributed (their distribution has statistically significant excess kurtosis and skewness).
Common strategies
Lump sum investing
First, I wanted to plot the annualized returns from investing in the S&P 500 at a given date and holding the investment for a certain number of years.
The result kind of confirms the common wisdom that the stock market is a long term investment and is fairly volatile in the short term. If one invested in the late 1990's, say, and withdrew their investment in 5 or even 10 years, they would have lost about 4% every year after inflation. Only with investment horizons of 20 years do the returns finally stabilise.
Here are the distributions of returns from investing a lump sum into the S&P 500. This time they were taken from 10000 simulations of paths a given portfolio would follow (by randomly sampling monthly returns from the empirical returns' distribution). I also plotted the inflation-adjusted returns from holding cash (since it depreciates with inflation) for comparison.
What can be seen from this is that in 20 or 30 years, it seems possible to double or quadruple one's purchasing power.
I also plotted a set of "hazard curves" from those sampled distributions. Those are the simulated probabilities of getting less than a given return depending on the investment horizon. For example, there's a 30% chance of getting a return of less than 0% after inflation (losing money) for a 1 year investment and this chance drops down to about 5% for a 20 year investment. Conversely, for a 1 year investment there's essentially a 100% chance of getting a return of less than 300% (no chance of quadrupling the investment), but after 20 years that chance drops to about 50%.
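The sampled distributions and hazard curves come from a simple bootstrap: resample monthly returns with replacement, compound them over the horizon, repeat many times and count the paths ending below a threshold. A sketch with synthetic Gaussian returns standing in for the empirical Shiller series:

```python
import random

random.seed(0)

# Synthetic monthly real returns (~0.6% mean, 4% standard deviation) as a
# stand-in for resampling the actual monthly returns in the Shiller series.
monthly_returns = [random.gauss(0.006, 0.04) for _ in range(1500)]

def simulate_paths(horizon_years, n_paths=2000):
    """Bootstrap final portfolio values over the given horizon."""
    finals = []
    for _ in range(n_paths):
        value = 1.0
        for _ in range(horizon_years * 12):
            value *= 1.0 + random.choice(monthly_returns)
        finals.append(value)
    return finals

def prob_return_below(finals, threshold):
    # P(total return < threshold); threshold=0.0 means "losing money".
    return sum(f < 1.0 + threshold for f in finals) / len(finals)

one_year = simulate_paths(1)
twenty_years = simulate_paths(20)

# One point on each "hazard curve": the chance of losing money after
# inflation shrinks dramatically as the horizon grows.
p1 = prob_return_below(one_year, 0.0)
p20 = prob_return_below(twenty_years, 0.0)
```

Sweeping the threshold over a grid of returns and the horizon over 1-30 years gives the full set of curves.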
Dollar cost averaging
Dollar cost averaging (DCA) is basically investing the same amount of money at given intervals of time. Naturally, such an investment technique results in one purchasing more of a stock when it's "cheap" and less when it's "expensive", but "cheap" and "expensive" are relative and kind of meaningless in this case.
DCA is usually considered an alternative to lump sum investing, but for a person who is investing, say, a fraction of their paycheck every month it's basically the default option.
I did some similar simulations of dollar cost averaging over a given horizon against investing the same sum instantly.
Unsurprisingly, DCA returns less than lump sum investment in this case. This is because the uninvested cash depreciates with inflation, as well as because the average return of the stock market is positive and hence most of the cash that's invested later misses out on those gains.
DCA would do better in a declining market (since, conversely, most cash would miss out on stock market losses), but if one can consistently predict whether the market is going to rise or decline, they can probably use that skill for more than just dollar cost averaging.
In my tests, dollar cost averaging over 20 years gave a very similar distribution to investing a lump sum for 9 years. Essentially, returns of DCA follow those of investing a lump sum for a shorter period of time.
Finally, here are the "hazard curves" for dollar cost averaging.
After a year of monthly investing we'd have an almost 40% chance of losing money after inflation and even after 20 years we still have a 10% chance. After 20 years, doubling our purchasing power is still a coin toss.
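DCA drops into the same sort of bootstrap with a one-line change: add a unit of cash each month before applying that month's resampled return (synthetic Gaussian returns again stand in for the empirical series):

```python
import random

random.seed(1)

# Synthetic monthly real returns standing in for the empirical series.
monthly_returns = [random.gauss(0.006, 0.04) for _ in range(1500)]

def dca_multiple(months):
    """Invest 1 per month; return final value as a multiple of total paid in."""
    value = 0.0
    for _ in range(months):
        value = (value + 1.0) * (1.0 + random.choice(monthly_returns))
    return value / months

def lump_sum_multiple(months):
    """Invest everything up front; return the final multiple."""
    value = 1.0
    for _ in range(months):
        value *= 1.0 + random.choice(monthly_returns)
    return value

n = 2000
dca = sorted(dca_multiple(240) for _ in range(n))        # 20 years, monthly
lump = sorted(lump_sum_multiple(240) for _ in range(n))  # 20 years, lump sum

# In an upward-drifting market the lump sum comes out ahead on average,
# since DCA cash invested later misses part of the growth.
median_dca, median_lump = dca[n // 2], lump[n // 2]
```

Comparing the two sorted arrays quantile-by-quantile reproduces the "DCA over 20 years looks like a lump sum over ~9 years" observation.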
Difference in expected utility
What we have gleaned from this is that the long-term yearly return from the stock market is about 7% after inflation (but before taxes), assuming one invests over 20-30 years. Compounded, that maybe results in quadrupling of one's purchasing power (less if one invests a fraction monthly, and even less with dividend and capital gains taxes).
While doubling or trebling one's purchasing power definitely sounds impressive (especially whilst doing only a few hours of work a year!), it doesn't really feel big if you look into it. Let's say you're about to retire and have saved up $1 million (adjusted for inflation) that you weren't investing. If you were putting a fraction of that money monthly into the stock market instead you would have had $2-3 million, say. But you can live as comfortably off of the interest on $1 million as you would on $2-3 million.
And on the contrary, if you have only saved $100k, dollar-cost averaging would have yielded you $300k instead, the interest on both amounts (and in retirement it has to be risk-free or almost-risk free) being fairly small.
One could argue that every little bit helps, but what I'm saying here is that the utility of wealth is non-linear. It's kind of sigmoidal, in a way. I like this graph from A Smart Bear:
source: A Smart Bear
As long as one has enough money to get up that "utility cliff" beyond which they can live off of their investments in retirement, it doesn't matter how much it is. Conversely, saving by investing into the stock market is worth it only if one is very sure that that's the thing that will get them over the line.
Flexibility, predicting the future and barbell investing
This possible climb up the cliff comes at a cost of locking in one's capital for a large amount of time (as short-term volatility in the stock market makes it infeasible to invest only for a few years). This lock-in leaves one completely inflexible in terms of pursuing other opportunities.
One's basically treating the stock market as some sort of a very long-term bond where they put the money in at one end and it comes out on the other side, slightly inflated. There was also an implicit assumption in all these simulations that the future follows the same pattern as the past.
Returns only become consistent after 30 years with dollar cost averaging. Someone who started investing a fraction of their savings into the general stock market 30 years ago would have gotten a 5-7% annualized return. Could they have predicted this? Probably. But could they have predicted the rise of the Web, the smartphones, the dotcom boom and crash? Probably not.
I'm not saying that whatever extra money one has should be invested into Bitcoin, a friend's startup or something else. Money can also be used to buy time off (to get better acquainted with some new development or technology) or education. Is using it to buy chunks of large corporations really the best we can do?
I also like the idea of "barbell investing", coined by Nassim Nicholas Taleb: someone should have two classes of investments. The first one is low-risk and low-return, like bonds or even cash. The second one is "moonshots", aggressive high-risk, low-downside, high-upside investments. Things like the stock market are considered to be neither here nor there, mediocre-return investments that might bear some large hidden downside risks.
...so that you can do what you really love?
There's an argument that one should still save up excessive amounts of money (as investments or just cash) whilst living extremely frugally so that after several years of hard work they "never have to work again" and can retire, doing what they really would have loved to do this whole time.
One of Paul Graham's essays kind of sums up what I think about it:
Conversely, the extreme version of the two-job route is dangerous because it teaches you so little about what you like. If you work hard at being a bond trader for ten years, thinking that you'll quit and write novels when you have enough money, what happens when you quit and then discover that you don't actually like writing novels?
Paul Graham, "How To Do What You Love"
Here I am, young and retired. What do I do next? Anything? How do I know what I like? Do I have any contacts that I can rely upon to help me do that "anything"? I don't feel that toiling away somewhere, being bored for a couple of decades, so that I can then quit and be bored anyway (since I wouldn't have learned what makes me tick), is a useful life strategy.
Warren Buffett?
What about all those people who did become rich investing into the stock market? Warren Buffett is one of them, probably one of the most famous investors in the world. But he made his first couple of million (in 2018 dollars) working for Benjamin Graham, in salary and bonuses. In essence, if he wanted to, he could have retired right there and then.
Only then did he proceed to increase his wealth by investing (first running an investment partnership and so working with the partnership's money and presumably charging a performance fee, then through Berkshire Hathaway). None of these methods are available to someone with a personal investment account.
Essentially, I think that the stock market is a wealth preservation, not a wealth generation engine. Publicly-listed companies are large and stable, paying consistent and healthy dividends with the whole investment yielding a solid, inflation-beating return.
But for me? I think it's important to stay flexible and keep my options open. If someone offers me an opportunity somewhere, I don't want to say "Sorry, I have a mortgage and the rest of my money is invested in healthy companies with solid, inflation-beating returns that I can't currently sell because it's in drawdown and would you look at the tax advantages". I want to be able to say "Sure, let me give a month's notice to my landlord. Where are we going?"
project Betfair, part 8
@mildbyte 3 years ago | programming | python | betfair | scala |
Previously on project Betfair, we gave up on market making and decided to move on to a simpler Betfair systematic trading idea that seemed to work.
Today on project Betfair, we'll trade it live and find out it doesn't really work.
DAFBot on greyhounds: live trading
With the timezone bug now fixed, I was ready to let my bot actually trade some greyhound races for a while. I started trading with the following parameters:
Favourite selection time: 180s
Entry order time: 160s
Exit time: 60s
Bet size: £5
I decided to pick the favourite closer to the trade since I thought that if it were picked 5 minutes before the race starting, it could change and there was no real point in locking it in so long before the actual trade. The 60s exit point was mostly to give me a margin of safety to liquidate the position manually if something happened as well as in case of the market going in-play before the advertised time (there's no in-play trading in greyhound markets, so I'd carry any exposure into the game. At that point, it would become gambling and not trading).
So how did it do?
Well, badly for now. Over the course of 25 races, in 4 hours, it lost £5.93: an average of -£0.24 per race with a standard deviation of £0.42. That was kind of sad and slightly suspicious too: according to the fairly sizeable dataset I had backtested this strategy on, its return was, on average, £0.07 with a standard deviation of £0.68.
I took the empirical CDF of the backtested distribution of returns, and according to it, getting a return of less than -£5.93 with 25 samples had a probability of 1.2%. So something clearly was going wrong, either with my backtest or with the simulation.
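The 1.2% figure is just the empirical CDF evaluated by resampling: draw 25 races from the backtested distribution of per-race returns and see how often the total is below -£5.93. A sketch with a Gaussian stand-in for the backtest distribution (mean £0.07, standard deviation £0.68, as quoted above; the real version resamples the actual backtest results):

```python
import random

random.seed(2)

# Gaussian stand-in for the backtested per-race return distribution.
backtest_returns = [random.gauss(0.07, 0.68) for _ in range(5000)]

def prob_total_below(threshold, n_races=25, n_trials=20000):
    """Estimate P(sum of n_races resampled returns < threshold)."""
    hits = 0
    for _ in range(n_trials):
        total = sum(random.choice(backtest_returns) for _ in range(n_races))
        if total < threshold:
            hits += 1
    return hits / n_trials

# Probability of a 25-race run losing more than £5.93 in total.
p = prob_total_below(-5.93)
```

With these parameters the estimate lands in the vicinity of 1%, consistent with the figure quoted above.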
I scraped the stream market data out of the bot's logs and ran the backtest just on those races. Interestingly, it predicted a return of -£2.70. What was going wrong? I also scraped the traded runners, the entry and the exit prices from the logs and from the simulation to compare. They didn't match! A few times the runner that the bot was trading was different, but fairly often the entry odds that the bot got were lower (recall that the bot wants to be long the market, so entering at lower odds (higher price/implied probability) is worse for it). Interestingly, there was almost no mismatch in the exit price: the bot would manage to close its position in one trade without issues.
After looking at the price charts for a few races, I couldn't understand what was going wrong. The price wasn't swinging wildly to fool the bot into placing orders at lower odds: in fact, the price 160s before the race start was just different from what the bot was requesting.
Turns out, it was yet another dumb mistake: the bot was starting to trade 150s before the race start and pick the favourite at that point as well. Simulating what the bot did indeed brought the backtest's PnL (on just those markets) within a few pence from the realised PnL.
So that was weird: moving the start time by 10 seconds doubled the loss on that dataset (by bringing it from -£2.70 to -£5.93).
Greyhound capacity analysis
There was another issue, though: the greyhound markets aren't that liquid.
While there is about £10000-£15000 worth of bets available to match against in an average greyhound race, this also includes silly bets (like offering to lay at 1000.0).
To demonstrate this better, I added market impact to the backtester: even assuming that the entry bet gets matched 160s before the race (which becomes more difficult to believe at higher bet amounts, given that the average total matched volume by that point is around £100), closing the bet might not be possible to do completely at one odds level: what if there isn't enough capacity available at that level and we have to place another lay bet at higher odds?
Here's some code that simulates that:
def get_long_return(lines, init_cash, startT, endT, suspend_time,
                    market_impact=True, cross_spread=False):
    # lines is a list of tuples: (timestamp, available_to_back,
    # available_to_lay, total_traded)
    # available to back/lay/traded are dictionaries
    # of odds -> availability at that level

    # Get start/end availabilities
    start = get_line_at(lines, suspend_time, startT)
    end = get_line_at(lines, suspend_time, endT)

    # Calculate the inventory
    # If we cross the spread, use the best back odds, otherwise assume we get
    # executed at the best lay
    if cross_spread:
        exp = init_cash * max(start[1])
    else:
        exp = init_cash * min(start[2])

    # Simulate us trying to sell the contracts at the end
    final_cash = 0.
    for end_odds, end_avail in sorted(end[2].items()):
        # How much inventory were we able to offload at this odds level?
        # If we don't simulate market impact, assume all of it.
        mexp = min(end_odds * end_avail, exp) if market_impact else exp
        exp -= mexp
        final_cash += mexp / end_odds

        # If we have managed to sell all contracts, return the final PnL.
        if exp < 1e-6:
            return final_cash - init_cash

    # If we got to here, we've managed to knock out all price levels
    # in the book and still carry some exposure; return the realised PnL.
    return final_cash - init_cash
I then did several simulations of the strategy at different bet sizes.
Turns out, as we increase the bet size away from just £1, the PnL quickly decays (the vertical lines are the error bars, not the standard deviations). For example, at bet size of £20, the average return per race is just £0.30 with a standard deviation of about £3.00 and a standard error of £0.17.
DAFBot on horses
At that point, I had finally managed to update my new non-order-book simulator so that it could work on horse racing data, which was great, since horse markets were much more preferable to greyhound ones: they were more liquid and there was much less opportunity for a single actor to manipulate prices. Hence there would be more capacity for larger bet sizes.
In addition, given that the spreads in horses are much tighter, I wasn't worried about having a bias in my backtests (the greyhound one assumes we can get executed at the best lay, but most of its PnL could have come from the massive back-lay spread at 160s before the race, even though I limited the markets in the backtest to those with spreads below 5 ticks).
I backtested a similar strategy on horse data but, interestingly enough, it didn't work: the average return was well within the standard error from 0.
However, flipping the desired position (instead of going long the favourite, betting against it) resulted in a curve similar to the one for the greyhound strategy. In essence, it seemed as if there was an upwards drift in the odds on the favourite in the final minutes before the race. Interestingly, I can't reproduce those results with the current, much larger, dataset that I've gathered (even if I limit the data to what I had at that point), so the following results might be not as exciting.
The headline number, according to my notes at that time, was that, on average, with £20 lay bets, entering at 300 seconds before the race and exiting at 130 seconds, the return was £0.046 with a standard error of £0.030 and a standard deviation of £0.588. This seems like very little, but the £20 bet size would be just a start. In addition, there are about 100 races every day (UK + US), hence annualizing that would result in a mean of £1690 and a standard deviation of £112.
This looked nice (barring the unrealistic Sharpe ratio of 15), but the issue was that it didn't scale well: at bet sizes of £100, the annualized mean/standard deviation would be £5020 and £570, respectively, and it would get worse further on.
I had also found out that, at £100 bet sizes, limiting the markets to just those operating between 12pm and 7pm (essentially just the UK ones) gave better results, even though the strategy would only be able to trade 30 races per day. The mean/standard deviation were £4220 and £310, respectively: a smaller average return and a considerably smaller standard deviation. This was because the US markets were generally thinner and the strategy would crash through several price levels in the book to liquidate its position.
Note this was also using bet sizes and not exposures: so to place a lay of £100 at, say, 4.0, I would have to risk £300. I didn't go into detailed modelling of how much money I would need deposited to be able to trade this for a few days, but in any case I wasn't ready to trade with stakes that large.
Placing orders below £2
One of the big issues with live trading the greyhound DAFBot was the fact that the bot can't place orders below £2. Even if it offers to buy (back), say, £10 at 2.0, only £2 of its offering could actually get matched. After that point, the odds could go to, say, 2.5, and the bot would now have to place a lay bet of £2 * 2.0 / 2.5 = £1.6 to close its position.
If it doesn't do that, it would have a 4-contract exposure to the runner that it paid £2 for (will get £4 if the runner wins for a total PnL of £2 or £0 if the runner doesn't win for a total PnL of -£2).
If it instead places a £2 lay on the runner, it will have a short exposure of 2 * 2.0 - 2 * 2.5 = -1 contract (in essence, it first has bought 4 contracts for £2 and now has sold 5 contracts for £2: if the runner wins, it will lose £1, and if the runner loses, it will win nothing). In any case, it can't completely close its position.
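The exposure arithmetic above can be sketched in a few lines (a "contract" here pays out £1 if the runner wins):

```python
def contracts(stake, odds):
    """A back of `stake` at `odds` is long stake * odds contracts,
    each paying out 1 pound if the runner wins."""
    return stake * odds

# Back 2 pounds at 2.0: long 4 contracts (win: +2, lose: -2).
long_exposure = contracts(2, 2.0)

# An exact close at odds 2.5 needs a lay of 2 * 2.0 / 2.5 = 1.60 pounds,
# which is below the 2-pound minimum bet...
closing_stake = 2 * 2.0 / 2.5

# ...while laying the 2-pound minimum overshoots, leaving a 1-contract short:
residual = long_exposure - contracts(2, 2.5)

print(closing_stake, residual)  # 1.6 -1.0
```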
So that's suboptimal. Luckily, Betfair documents a loophole in the order placement mechanism that can be used to place orders below £2. They do say that it should only be used to close positions and not for normal trading (otherwise people would be trading with £0.01 amounts), but that's exactly our use case here.
The way it's done is:
Place an order above £2 that won't get matched (say, a back of 1000.0 or a lay of 1.01);
Partially cancel some of that order to bring its unmatched size to the desired amount;
Use the order replacement instruction to change the odds of that order to the desired ones.
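The three steps above can be sketched against the Betfair betting API's placeOrders/cancelOrders/replaceOrders operations. The `client` wrapper and its method names here are hypothetical, but the instruction payloads mirror the shape of the real API:

```python
def place_small_lay(client, market_id, selection_id, size, odds):
    """Place a lay below the 2-pound minimum via the documented
    place/cancel/replace loophole. `client` is a hypothetical thin
    wrapper over the Betfair betting API."""
    # 1. Place a 2-pound lay at odds where it can't match
    #    (1.01 is the bottom of the odds ladder).
    report = client.place_orders(market_id, [{
        'selectionId': selection_id, 'side': 'LAY', 'orderType': 'LIMIT',
        'limitOrder': {'size': 2.0, 'price': 1.01,
                       'persistenceType': 'LAPSE'}}])
    bet_id = report['instructionReports'][0]['betId']

    # 2. Partially cancel it down to the size we actually want.
    client.cancel_orders(market_id, [{'betId': bet_id,
                                      'sizeReduction': 2.0 - size}])

    # 3. Replace moves the remaining unmatched part to the target odds.
    client.replace_orders(market_id, [{'betId': bet_id, 'newPrice': odds}])
```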
I started a week of fully automated live trading on 2nd October. That was before I had implemented placing bets below £2, and the bot kind of got lucky on a few markets: it was unable to close its short exposure fully, but the runner it was betting against lost in the end. That was nice, but not exactly intended. I also changed the bot to place bets based on a target exposure of 10 contracts (as opposed to stakes of £10; hence the bet size would be 10 / odds).
In total, the bot made £3.60 on that day after trading 35 races.
Things quickly went downhill after I implemented order placement below £2:
3rd October: -£3.41 on 31 races (mostly due to it losing £3.92 on US markets after 6pm);
4th October: increased bet size to 15 contracts; PnL -£2.59 on 22 races;
5th October: made sure to exclude markets with large back-lay spreads from trading; -£2.20 on 25 races (largest loss -£1.37, largest win £1.10);
6th October: -£3.57 on 36 races;
7th October: -£1.10 on 23 races.
In total, the bot lost £9.27 over 172 races, which is about £0.054 per race. Looking at Betfair, the bot had made 395 bets (entry and exit, as well as additional exit bets at lower odds levels when there wasn't enough available at one level) with stakes totalling £1409.26. Of course, it wasn't risking more than £15 at any point, but turning over that much money without issues was still impressive.
What wasn't impressive was that it consistently lost money, contrary to the backtest.
At that point, I was slowly growing tired of Betfair. I played with some more ideas that I might write about later, but in total I had been at it for about 2.5 months and had another interesting project in mind. But project Betfair for now had to go on an indefinite hiatus.
Enjoyed this series? I do write about other things too, sometimes. Feel free to follow me on twitter.com/mildbyte or on this RSS feed! Alternatively, register on Kimonote to receive new posts in your e-mail inbox.
Interested in this blogging platform? It's called Kimonote: it focuses on minimalism, ease of navigation and control over what content a user follows. Try the demo here or the beta here and/or follow it on Twitter as well at twitter.com/kimonote!
@mildbyte 3 years, 1 month ago | programming | python | betfair | scala |
Previously on project Betfair, we ran the production market-making CAFBot in Scala, got it working, made some money, lost some money and came back with some new ideas.
Today, we'll test those ideas, look at something that's much more promising and learn a dark truth about timezones.
Sorry this took a bit too long to write, by the way; I've been spending some time working on Kimonote to add email digests to streams. The idea is that, given some exporters from streams (e-mail, RSS (already works), Facebook, Twitter etc.) with varying frequencies (immediate or weekly/daily digests) as well as some importers (again RSS/Twitter/Facebook/other blogging platforms), a user could create their own custom newsletter and get it sent to their mailbox (push) instead of wasting time checking all those websites (pull), as well as notify their social media circles when they put a new post up anywhere else. None of this works yet, but other features do: if you're interested, sign up for the beta here!
Shameless plug over, back to the world of automated trading goodness.
CAFBotV2 with wider spreads
Remember how in the real-world greyhound market the bot managed to have some of its bets matched even though they were several ticks away from the best back and lay? I realised I had never really tested that in simulation: I started from making the market at the top of the order book and kind of assumed that matching would never happen further away from there. Looks like I was wrong (and in fact in part 5 the bot had issues with bets that had been moved away from the current market because of high exposure getting matched anyway).
So I added (backported?) the levelBase parameter from the Scala bot into the research one: recall that it specified how far from the best back/lay the bot should start working before applying all the other offsets (exposure or order book imbalance). Hence at levelBase = 0 the bot would work exactly as before and with levelBase = 1 it would start 1 Betfair tick away from the best back/lay. levelBase = 3 is what was traded live on the greyhound market.
The idea behind this is kind of simple: if the bot still gets its bets matched even if it's far away from the best back/lay, it will earn a higher spread with fewer trades.
So, first, I ran it on our favourite horse market with levelBase = 1.
Horses, levelBase = 1
It didn't do well: there were very few matches and so most of the time it just did nothing, trading a grand total of 3 times. This meant that it got locked into an inventory that it couldn't offload.
Let's run it on the whole dataset: although this market didn't work as well, in some other ones matching did happen in jumps larger than 1 tick, so gains there might offset the losses in more liquid markets.
We're tantalizingly close to finally having a PnL of zero (the more observant reader might notice that we could have done the same by not trading at all). Let's see how it would have done on the greyhound markets, which we do know sometimes jump like crazy.
Greyhounds, levelBase = 3
Not very inspiring either. There's a large number of markets where this wouldn't have done anything at all (since the bets were so far from the best back/lay, they never got hit), and when something did happen, it was very rare, so the bot couldn't liquidate its position and remained at the mercy of the market.
So while that was a cute idea, it didn't seem to work.
DAFBot
At this point, I was running out of ideas. The issue of the bot getting locked into an inventory while the market was trending against it still remained, so I had to look at the larger-scale patterns in the data: perhaps based on the bigger moves in the market, the bot could have a desired inventory it could aim for (instead of always targeting zero inventory).
Consider this: if we think that the market is going to move one way or another, it's okay for the bot to have an exposure that way and it can be gently guided towards it (by means of where it sets its prices). Like that, the bot would kind of become a hybrid of a slower trading strategy with a market maker: even if its large-scale predictions of price movements weren't as good, they would get augmented by the market making component and vice versa.
Carry?
I tried out a very dumb idea. Remember how in most of the odds charts we looked at the odds, for some reason, trended down? I had kind of noticed that, or at least I thought I did, and wanted to quantify it.
Those odds were always on the favourite (as in, pick the greyhound/horse with the lowest odds 1 hour before the race begins and see how they change). The cause could be that, say, people who wanted to bet on the favourite would delay their decision to see if any unexpected news arrived before the race start, which is the only sort of news that could move the market.
Whatever the unexpected news was, it would likely affect the favourite negatively: it could be good for any of the other greyhounds/horses, making them more likely to win the race. Hence it would make sense, if someone wanted to bet on the favourite, to wait until just before the race begins to avoid this uncertainty, pushing the odds down as the race approaches.
So what if we took the other side of this trade? If we were to go long the favourite early, we would benefit from this downwards slide in odds, at the risk of some bad news coming out and us losing money. I guessed this would be similar to a carry trade in finance, where the trader makes money if the market conditions stay the same (say, borrowing money in a lower interest rate currency and then holding it in a higher interest rate currency, hoping the exchange rate doesn't move). In essence, we'd get paid for taking on the risk of unexpected news about the race coming out.
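To make the "carry" idea concrete, here's the standard back-then-lay hedging arithmetic ("greening up"): if the lay stake is chosen as stake * back_odds / lay_odds, the PnL is the same whichever way the race goes, and it's positive exactly when the odds have drifted down between entry and exit:

```python
def back_then_lay_pnl(stake, back_odds, lay_odds):
    """Profit of backing `stake` at `back_odds`, then laying
    stake * back_odds / lay_odds at `lay_odds` -- the lay stake that
    equalises PnL across both outcomes ("greening up")."""
    lay_stake = stake * back_odds / lay_odds
    win_pnl = stake * (back_odds - 1) - lay_stake * (lay_odds - 1)
    lose_pnl = -stake + lay_stake
    assert abs(win_pnl - lose_pnl) < 1e-9  # hedged: same either way
    return win_pnl

# Back 20 pounds at 4.0; the odds drift down to 3.8 before the race:
print(back_then_lay_pnl(20, 4.0, 3.8))  # ~1.05 pounds whatever the outcome
```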
Research: greyhounds
I had first started doing this using my order book simulator, but realised it would be overkill: if the only thing I wanted to do was test a strategy that traded literally twice (once to enter the position, once to close it), it would be better to write a custom scraper for the stream data that would extract the odds timeseries and simulate the strategy faster.
At that point, I realised the horse racing stream data was too large to fit into memory with the new simulator. So I put that on hold for a second and tried my idea on greyhound markets.
This chart plots, at each point in time before the race, the average return on going long (backing) the favourite and then closing our position 15s before the race begins. In any case, the favourite is picked 5 minutes before the race begins. The vertical lines are the error bars (not standard deviations). Essentially, what we have here is a really consistent way to lose money.
This is obviously because of the back-lay spreads: the simulation here assumes we cross the spread both when entering and exiting, in essence taking the best back in the beginning and the best lay at the end.
Remember this chart from part 4?
The average spread 120s before a greyhound race begins is about 5 ticks. We had previously calculated that the loss on selling 1 tick lower than we bought is about 0.5%, so no wonder we're losing about 3% of our investment straight away.
What if we didn't have to cross the spread?
Woah. This graph assumes that instead of backing the runner at the best back, we manage to back it at the current best lay (by placing a passive back at those odds). When we're exiting the position just before the race begins, time is of the essence and so we're okay with paying the spread and placing a lay bet at the odds of the current best lay (getting executed instantly).
The only problem is actually getting matched: since matching in greyhound markets starts very late (as much money gets matched in the last 80 seconds as does before), our bet could just remain in the book forever, or get matched much closer to the beginning of the race.
But here's the fun part: this graph doesn't care. It shows that if the bet is matched at whatever the best lay was 160 seconds before the race, on average this trade makes money — even if the actual match happens a minute later. If the bet doesn't get matched at all, the trade simply doesn't happen.
This does assume that the performance of this strategy is independent of whether or not the bet gets hit at all, but if that's not the case, we would have been able to use the fact that our bet got hit as a canary: when it gets hit, we know that being long this market is a good/bad thing and adjust our position accordingly.
Implementation in Scala
With that reasoning, I went to work changing the internals of Azura to write another core for it and slightly alter the way it ran. The algorithm would be:
Run with parameters: market, favourite selection time, entry start time, entry end time, exit time (all in seconds before the race start), amount to bet.
Subscribe to the market/order streams, get the market suspend time from the Betfair API
At favourite selection time: inspect our local order book cache, select the runner with the lowest odds
At entry start time: place a passive back at the current best lay odds of the given amount on the favourite.
At entry end time: cancel remaining unexecuted orders.
At exit time: if we have a position (as in our entry order got hit), unwind it aggressively.
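The schedule above can be sketched as a pure function polled with the current time-to-off, mirroring the stateless design described below; all names here are illustrative, not the actual Azura code:

```python
def favourite(book):
    """Runner with the lowest best-back odds.
    `book` maps runner id -> (best_back, best_lay)."""
    return min(book, key=lambda r: book[r][0])

def dafbot_action(t, p, book, fav, position, entry_live):
    """Desired action `t` seconds before the off. `p` holds the strategy
    parameters from the list above (all in seconds before the start)."""
    if t <= p['exit_time']:
        # Unwind aggressively whatever got filled, if anything.
        return ('unwind', position) if position else ('done', None)
    if t <= p['entry_end']:
        # Entry window over: cancel whatever didn't get matched.
        return ('cancel_entry', None) if entry_live else ('wait', None)
    if t <= p['entry_start']:
        # Passive back on the favourite at the current best lay odds.
        return ('back', (fav, p['stake'], book[fav][1]))
    return ('wait', None)
```

Because the function only looks at the current book and position, restarting the bot mid-race naturally resumes where it left off.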
I called the new core DAFBot (D stands for Dumb and what AF stands for can be gleaned from the previous posts). I wanted to reuse the idea of polling a strategy for the orders that it wished to offer and the core being stateless, since that would mean that if the bot crashed, I could restart it and it would proceed where it left off. That did mean simple actions like "buy this" became more difficult to encode: the bot basically had to look at its inventory and then say "I want to maintain an outstanding back bet for (how much I want to buy - how much I have)".
Finally, yet another Daedric prince got added to my collection: Sheogorath, "The infamous Prince of Madness, whose motives are unknowable" (I had given up on my naming scheme making sense by this point), would schedule instances of Azura to be run during the trading day by using the Betfair API to fetch a list of greyhound races and executing Azura several minutes before that.
I obviously wasn't ready to get Sheogorath to execute multiple instances of Azura and start losing money at computer speed quite yet, so for now I ran the new strategy manually on some races, first without placing any bets (just printing them out) and then actually doing so.
The biggest issue was the inability to place bets below £2. I had thought this wouldn't be a problem (as I was placing entry bets with larger amounts), but fairly often only part of the offer would get hit, so the bot would end up having an exposure that it wasn't able to close (since closing it would entail placing a lay bet below £2). Hence it took some of that exposure into the race, which wasn't good.
Timing bug?
In addition, when testing Sheogorath's scheduling (by getting it to kick off instances of Azura that didn't place bets), I noticed a weird thing: Sheogorath would start Azura one minute later than intended. For example, for a race that kicked off at 3pm, Azura was supposed to be started 5 minutes before that (2:55pm) whereas it was actually executed at 2:56pm.
While investigating this, I realised there was another issue with my data: I had relied on the stream recorder using the market suspend times fetched from Betfair to stop recording, but that might not have been what actually happened: if the greyhound race started before the scheduled suspend time, the recording would stop abruptly rather than at the official suspend time.
Any backtest that counted backwards from the final point in the stream would kind of have access to forward-looking information: knowing that the end of the data is the actual suspend time, not the advertised one.
Hence I had to recover the suspend times that the recorder saw and use those instead. I still had all of the logs that it used, so I could scrape the times from them. But here was another fun thing: spot-checking some suspend times against Betfair revealed that they sometimes also were 1 minute later than the ones on the website.
That meant the forward-looking information issue was a bigger one, since the recorder would have run for longer and have a bigger chance of being interrupted by a race start. It would also be a problem in horse markets: since those can be traded in-play, there could have been periods of in-play trading in my data that could have affected the market-making bot's backtests (in particular, everyone else in the market is affected by the multisecond bet delay which I wasn't simulating).
But more importantly, why were the suspend times different? Was it an issue on Betfair's side? Was something wrong with my code? It was probably the latter. After meditating on more logfiles, I realised that the suspend times seen by Azura were correct whereas the suspend times for Sheogorath for the same markets were 1 minute off. They were making the same request, albeit at different times (Sheogorath would do it when building up a trading schedule, Azura would do it when one of its instances would get started). The only difference was that the former was written in Python and the latter was written in Scala.
After some time of going through my code with a debugger and perusing documentation, I learned a fun fact about timezones.
A fun fact about timezones
I used this bit of code to make sure all times the bot was handling were in UTC:
from datetime import datetime as dt
import pytz

def parse_dt(dt_str, tz=None):
    return dt.strptime(dt_str, '%Y-%m-%dT%H:%M:%S.%fZ').replace(
        tzinfo=pytz.timezone(tz) if tz else None)

m_end_dt = parse_dt(m['marketStartTime'], m['event']['timezone'])
m_end_dt = m_end_dt.astimezone(pytz.utc).replace(tzinfo=None)
However, timezones change. Since pytz.timezone doesn't know the date at which its timezone will be used, the tzinfo it returns defaults to the earliest definition of that timezone, which in the case of Europe/London dates back to the mid-1800s. Was the timezone offset back then something reasonable, like an hour? Nope, it was 1 minute.
Here's a fun snippet of code so you can try this at home:
In [4]: from datetime import datetime as dt
   ...: import pytz
   ...: wtf = '2017-09-27T11:04:00.000Z'
   ...: parse_dt(wtf)
Out[4]: datetime.datetime(2017, 9, 27, 11, 4)

In [5]: parse_dt(wtf, 'Europe/London')
Out[5]: datetime.datetime(2017, 9, 27, 11, 4, tzinfo=<DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>)

In [6]: parse_dt(wtf, 'Europe/London').astimezone(pytz.utc)
Out[6]: datetime.datetime(2017, 9, 27, 11, 5, tzinfo=<UTC>)
And here's an answer from the django-users mailing group on the right way to use timezones:
The right way to attach a pytz timezone to a naive Python datetime is to call tzobj.localize(dt). This gives pytz a chance to say "oh, your datetime is in 2015, so I'll use the offset for Europe/London that's in use in 2015, rather than the one that was in use in the mid-1800s"
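Applied to the snippet above, the difference looks like this: the spurious 1-minute LMT offset shows up with replace(), while localize() picks the correct BST offset for that date:

```python
from datetime import datetime
import pytz

naive = datetime(2017, 9, 27, 11, 4)
london = pytz.timezone('Europe/London')

# Wrong: replace() attaches the zone's earliest definition (LMT, -0:01).
wrong = naive.replace(tzinfo=london)
# Right: localize() picks the offset in force on that date (BST, +1:00).
right = london.localize(naive)

print(wrong.astimezone(pytz.utc))  # 2017-09-27 11:05:00+00:00 -- a minute off
print(right.astimezone(pytz.utc))  # 2017-09-27 10:04:00+00:00
```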
Finally, here's some background on how this offset was calculated.
Luckily, I knew which exact days in my data were affected by this bug and was able to recover the right suspend times. In fact, I've been lying to you this whole time and all of the plots in this blog series were produced after I had finished the project, with the full and the correct dataset. So the results, actually, weren't affected that much and now I have some more confidence in them.
Next time on project Betfair, we'll teach DAFBot to place orders below £2 and get it to do some real live trading, then moving on to horses.
As usual, posts in this series will be available at https://kimonote.com/@mildbyte:betfair or on this RSS feed. Alternatively, follow me on twitter.com/mildbyte.
@mildbyte 3 years, 1 month ago | programming | python | betfair |
Previously on project Betfair, we started fixing our market-making bot so that it wouldn't accumulate as much inventory. Today, we'll try to battle the other foe of a market maker: adverse selection.
Order book imbalance
Consider the microprice from part 3: the average of the best back price and the best lay price, weighted by the volume on both sides. We had noticed that sometimes a move in the best back/lay can be anticipated by the microprice getting close to one or the other.
Let's quantify this somehow. Let's take the order book imbalance indicator, showing how close this microprice is to either the best back or the best lay: $$\frac{\text{BBVolume} - \text{BLVolume}}{\text{BBVolume} + \text{BLVolume}}$$ Can this predict price movements?
Oh yes it can. This graph plots the average move in the best back/lay quotes at the next tick (as in the next Betfair message on the stream), conditioned on the cumulative order book imbalance. In other words, the blue circles/crosses show the average move in the best lay/back quote, assuming the order book imbalance is above the given value, and the red markers show the same, but for order book imbalances below the given value.
For example, at order book imbalance values above 0.5 the average move in the best back/lay quotes in the next message is about 0.1 Betfair tick (this time I mean a minimum price move, like 1.72 to 1.73) and for order book imbalance values below -0.5 the average move in the best back/lay quotes is about -0.1 Betfair tick.
Essentially, at high negative or positive order book imbalance values we can say that there will be an imminent price move. There are several intuitive explanations to this phenomenon. For example, we can say that if aggressive trades happen randomly against both sides, the side with less volume offered will get exhausted earlier, hence the price will naturally move towards that side. In addition, if offers on a given side represent an intention to back (or lay), the participants on that side might soon get impatient and will cross the spread in order to get executed faster, the side with more available volume thus winning and pushing the price away from itself.
This effect is quite well documented in equity and futures markets and is often used for better execution: while it's not able to predict large-scale moves, it can help an executing algorithm decide when to cross the spread once we've decided what position we wish to take. For example, see this presentation from BAML — and on page 6 it even has a very similar plot to this one!
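In code, the indicator (and the microprice it relates to) is only a couple of lines. The microprice weighting here follows the common convention of leaning towards the side with less volume, which is consistent with the imbalance sign used above, though the exact formula in part 3 may differ in detail:

```python
def imbalance(bb_volume, bl_volume):
    """Order book imbalance from the formula above: +1 means all resting
    volume is on the back side, -1 means it's all on the lay side."""
    return (bb_volume - bl_volume) / (bb_volume + bl_volume)

def microprice(best_back, bb_volume, best_lay, bl_volume):
    """Volume-weighted mid that leans towards the side with *less*
    volume, i.e. the side more likely to get exhausted first."""
    return (best_back * bl_volume + best_lay * bb_volume) / (bb_volume + bl_volume)
```

For example, with 3 units available to back and 1 to lay at 1.94/1.95, the imbalance is +0.5 and the microprice sits at 1.9475, close to the best lay, hinting at an upwards move.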
CAFBot with order book imbalance detection
This is a very useful observation, since that means the market maker can anticipate prices moving and adjust its behaviour accordingly. Let's assume that we're making a market at 1.95 best back / 1.96 best lay and the odds will soon move to 1.94 best back / 1.95 best lay. We can prepare for this by cancelling our lay (that shows in the book as being available to back) at 1.95 and moving it to 1.94: otherwise it would have been executed at 1.95 shortly before the price move and we would have immediately lost money.
So I added another feature to CAFBot: when the order book imbalance is above a given threshold (positive or negative), it would move that side of the market it was making by one tick. So, for example, let's again say that the current best back/lay are 1.94/1.95 and the order book imbalance threshold is set to be 0.5. Then:
Order book imbalance < -0.5: large pressure from the lay side, make a market at back/lay 1.93/1.95
Order book imbalance between -0.5 and 0.5: business as usual, make a market at back/lay 1.94/1.95
Order book imbalance > 0.5: large pressure from the back side, make a market at back/lay 1.94/1.96
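The three cases above can be sketched as follows (using a fixed tick size for illustration; real Betfair tick sizes vary along the odds ladder):

```python
def imbalance_adjusted_quotes(best_back, best_lay, imb, threshold, tick):
    """Shift one side of our market away by a tick when the order book
    imbalance predicts a move towards it; otherwise quote at the top."""
    back_quote, lay_quote = best_back, best_lay
    if imb < -threshold:
        # Pressure from the lay side: odds about to drop, pull the back quote.
        back_quote -= tick
    elif imb > threshold:
        # Pressure from the back side: odds about to rise, pull the lay quote.
        lay_quote += tick
    return back_quote, lay_quote
```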
For now, I didn't use any of the inventory management methods I had described in the previous part (though there are some interesting ways they can interact with this: for example, could we ever not move our quotes at high imbalance values because an imminent price move and hence trades against us could help us close our position?). I did, however, keep the make-market-at-3-price levels feature.
Let's see how it did on our guinea pig market.
So... it made money, but in a bad way. Having no inventory control made the bot accumulate an enormous negative exposure during the whole trading period: its performance only got saved by an upwards swing in odds during the last few seconds before it closed its position. In fact, a 6-tick swing brought its PnL up £10, from -£6 to £4. Not very healthy, since we don't want to rely on general (and random) price moves: it could just as well have stopped trading before that price swing and lost money. On the other hand, there's still the interesting fact that it made money by generally being short in a market whose odds trended downwards (3.9 to 3.5), i.e. against its position.
Good news: the order book imbalance indicator kind of worked: here the same segment from part 3 is plotted, together with the bot's orders that got executed. You can see that where the old version would have had the price go through it, the new version sometimes anticipates that and moves its quotes away. In addition, look at the part shortly after 12:10 where the price oscillates between 3.65 and 3.55: since the microprice after the price move is still close to the previous level, the bot doesn't place any orders at 3.6.
However, the fact that we don't have inventory control in this version hurts the bot immensely:
Look at that standard deviation: it's larger than that of the first naive version (£3.55)!
CAFBotV2: a new hope
Let's put the insights from this and the previous part together and see if we can manage to make the bot not lose money. I combined the mitigation of inventory risk and adverse selection as follows: move the quote on one side away by one tick if either the order book imbalance is high enough or our exposure (inventory) is high enough. Effectively, if the bot would have moved its lay bet lower by 1 tick because of high negative order book imbalance (odds are about to move down) as well as because of high negative exposure (so the bot doesn't want to bet against even more), it would only move the lay bet by 1 tick, not 2.
On the other hand, let's assume there's a high negative order book imbalance but the bot has a large positive exposure. Should the bot still move its lay quote given that that quote getting hit would mean the bot would sell off a bit of its inventory? I reasoned that if the odds were about to move down, that would be good for the bot's profit (since odds going down is the same as implied probability going up, so being long benefits the bot) and in fact would allow it to offload its exposure at even lower odds later on.
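A sketch of how the two triggers combine: the imbalance and inventory conditions for each side are OR-ed, so a quote never moves by more than one tick. The sign conventions for inventory here are illustrative (negative meaning the bot is short the runner):

```python
def cafbot_v2_quotes(best_back, best_lay, imb, inv,
                     imb_threshold, inv_limit, tick):
    """CAFBotV2-style quoting: pull a side away by one tick if either
    the order book imbalance or the inventory says so, but never by two."""
    # Back-side quote (the bot's lay bet): pull down if odds look like
    # falling, or if the bot is already short and doesn't want more.
    move_back = imb < -imb_threshold or inv < -inv_limit
    # Lay-side quote (the bot's back bet): pull up if odds look like
    # rising, or if the bot is already long.
    move_lay = imb > imb_threshold or inv > inv_limit
    back_quote = best_back - (tick if move_back else 0)
    lay_quote = best_lay + (tick if move_lay else 0)
    return back_quote, lay_quote
```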
So with that in mind, let's see how the brand new CAFBot does on our example horse racing market.
Look at it go. Basically a straight upwards line during the last 15 minutes and all of this while juggling its exposure back and forth like a champ. Beautiful. Let's take a look at the whole dataset.
Damn. At least it's losing less money than the version with moving the bot's quotes at high inventory values: in fact, twice as little (-£0.58 vs -£1.04).
Looking closer at what happened, looks like in some markets our fancy inventory control didn't work.
What happened here was that while the bot had a large long position and wasn't placing bets at the current best available lay, those bets were still matching as the prices would sometimes jump more than one tick at a time. The odds were violently trending upwards and so there weren't as many chances for the bot to close out its position.
How about if we get the bot to stop trading at all on one side of the book if its position breaches a certain level? While this makes the PnL distribution less skewed and slightly less volatile, it doesn't improve the mean much.
Meanwhile in the greyhound racing market, things weren't going well either.
Examining the PnL closer
Back to horses again, is there some way we can predict the money the bot will make/lose in order to see if some markets are not worth trying to trade at all?
First of all, it doesn't seem like the PnL depends on the time of day we're trading.
The dataset is mostly UK and US horse races and the time is UTC. The UK races start at about 11am and end at about 7pm, whereas the US ones run from about 8pm throughout the night. There are some Australian races there, but there are few of them. In the end, it doesn't seem like the country affects our PnL either.
In addition, the money the bot makes isn't affected by the amount of money that's been matched on a given runner 15 minutes before the race start.
...and neither is the case for the amount of money available to bet on a given runner (essentially the sum of all volumes available on both sides of the book 15 minutes before the race start).
That's a curious plot, actually. Why are there two clusters? I was scared at first that I was having some issues with scaling in some of my data (I had switched to using integer penny amounts throughout my codebase instead of pounds early on), but it actually is because the UK races have much more money available to bet, as can be seen on the following plot.
CAFBotV3, 4, 5...
I also had tried some other additions to CAFBot that are not really worth describing in detail. There was the usual fiddling with parameters (different order book imbalance threshold values as well as the inventory values beyond which the bot would move its quotes) or minor adjustment to logic (for example, not moving the quotes at high order book imbalance values if getting that quote hit would help the bot reduce its exposure).
There was also Hammertime, a version of CAFBot that would move both back and lay quotes in case of high order book imbalance, in essence joining everybody else in hammering the side of the book with fewer offers. Theoretically, it would have taken a position (up to its position limit) in the direction that the market was about to move, but in practice the order book imbalance indicator isn't great at predicting larger-scale moves, so most of those trades would get either scratched out or end up as losses.
In addition, I had another problem, which is why I had started looking at whether it was possible to select markets that would be better to trade in: submitting an order to Betfair isn't actually free. Well, it is, but only if one submits fewer than 1000 actions per hour, after which point Betfair begins to charge £0.01 per action. An action could be, for example, submitting a bet at a given level or cancelling it. Actions can't be batched, so a submission of a back at 2.00 and a back at 2.02 counts as 2 actions.
This would be especially bad for CAFBot, since at each price move it has to perform at least 4 actions: cancelling one outstanding back, one outstanding lay, placing a new back and a new lay. If it were maintaining several offers at both sides of the book and the price moved by more than one tick, it would cost it even more actions to move its quotes. From the simulation results, running the bot for 15 minutes would submit, on average, about 500 actions to Betfair with a standard deviation of 200, which would bring it beyond the 1000-an-hour limit.
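A back-of-envelope check of what those action counts would cost, assuming the charge applies only to actions beyond the free 1000 per hour:

```python
FREE_ACTIONS_PER_HOUR = 1000
CHARGE_PER_ACTION = 0.01  # GBP

def hourly_charge(actions_per_hour):
    """Betfair transaction charge for an hour of trading, assuming only
    actions above the free allowance are billed."""
    return max(0, actions_per_hour - FREE_ACTIONS_PER_HOUR) * CHARGE_PER_ACTION

# ~500 actions per 15-minute market, so trading markets back to back
# means ~2000 actions per hour:
print(hourly_charge(4 * 500))  # 10.0 (GBP) per hour in charges
```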
Meanwhile in Scala
Throughout this, I was also working on Azura, the Scala version of CAFBot (and then CAFBotV2) that I would end up running in production. I was getting more and more convinced that it was soon time to leave my order book simulator and start testing my ideas by putting real money on them. As you remember, the simulator could only be an approximation to what would happen in the real world: while it could model market impact, it wouldn't be able to model the market reaction to the orders that I was placing.
And since I was about to trade my own money, I would start writing clean and extremely well-tested code, right?
Next time on project Betfair, we'll learn how not to test things in production.
Previously on project Betfair, we implemented a simple market making strategy and backtested it against some horse races, with disastrous results. Today, things get better.
Though most of this has been done on horse racing markets, I was also experimenting on-and-off with greyhound racing. I'll be looking at those in depth later, but for now, some quick plots.
First, most action in greyhound markets happens way closer to the beginning of the race. Half of all the bets are matched less than a minute before the race starts!
In addition, the back-lay spreads are much, much wider than in horse markets. Even 2 minutes before start there can be 5 ticks between the best back and lay quote.
So how does the naive version of CAFBot do on these markets?
CAFBot results
Not that well. Looking at the odds chart for an example market, it seems like there are a lot of sporadic jumps where, sort of like in the example horse market from the previous post, the price goes "through" us (i.e. we're the last in the queue at a given price level to get our bet matched, after which that price level flips to the opposite side of the book).
Back to horses
Papers on market making
Like I had mentioned before, there are plenty of papers on market making. Although they're written with real-world (equity, futures etc.) markets in mind, they still investigate a lot of things that can apply to this problem.
High-frequency market-making with inventory constraints and directional bets by Fodra, Labadie (2012) has a decent introduction to what a market maker does and what sort of controls it can have over its behaviour (together with some alarmingly large differential equations). It's based on an earlier paper: High-frequency trading in a limit order book by Avellaneda, Stoikov (2008) which had some very good presentation slides that I got some of the following ideas from. Finally, Exploring Market Making Strategy for High Frequency Trading: An Agent-Based Approach by Xiong, Yamada, and Terano has a nice overview of several market-making strategies.
Inventory risk
According to those papers, there are two risks a market maker faces. The first is inventory risk, which is what we had an issue with last time. If one side of a market maker's quotes gets executed more often than the other, it accumulates an inventory of shares (in the case of equity markets) and is no longer market-neutral (its profit is now dependent on price moves in the market).
There are two main ways to control our market maker: by specifying at what odds (or whether) it makes the market and the number of one-pound contracts it offers on each side. For example, if its exposure is getting too positive (it's backed a runner too much), it can either decrease its offers in size or offer other people to lay at higher odds, thus getting a higher premium from increasing its exposure even more. Managing inventory risk is essentially a trade-off: if we limit it to a minimum, we might miss out on earning the spread when there's not much price movement going on, whereas if we don't control it at all, we risk exposing ourselves to the market too much (having to liquidate our position in the end).
CAFBot with offer size decay
My first attempt at managing inventory risk was fairly simple: let's give the bot a maximum inventory it can have and size its quotes so that the amount it offers is scaled linearly from the full amount when it has no exposure down to 0 when it's at a maximum. For example, if the maximum amount of contracts it can offer is 10 and the maximum inventory it wants is 30:
when it's 30 contracts short, offer 10 to buy and 0 to sell;
when it's neutral, offer 10 to buy and 10 to sell;
when it's 15 contracts long, offer 5 to buy and 10 to sell;
when it's 30 contracts long, offer 0 to buy and 10 to sell.
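The linear scaling above can be sketched in a few lines of Python (the function name and signature are mine, not the bot's actual code):

```python
def quote_sizes(position, max_offer=10.0, max_inventory=30.0):
    """Scale offered sizes linearly with inventory.

    position: signed number of one-pound contracts held
    (negative = short, i.e. we've laid the runner too much).
    Returns (buy_size, sell_size): how many contracts to offer
    to buy (back) and to sell (lay)."""
    clamp = lambda x: max(0.0, min(1.0, x))
    # the side that would increase our exposure shrinks to zero
    # as the inventory approaches the limit; the other side stays full
    buy = max_offer * clamp(1.0 - position / max_inventory)
    sell = max_offer * clamp(1.0 + position / max_inventory)
    return buy, sell
```

Plugging in the examples from the list: `quote_sizes(-30)` gives `(10.0, 0.0)` and `quote_sizes(15)` gives `(5.0, 10.0)`.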
So, did it work?
Sort of. This is the PnL and position chart for the same market as in the previous post. It looks like the exposure management works: it's almost always bounded between -15 and +15 contracts. The profit has improved vastly (we now lose about £0.18 instead of £1.70 from the previous version), but this could be simply due to the fact that our average exposure is now smaller (on average -5 contracts) than that of the previous version (on average -65 contracts).
Running it on the full dataset we get this sad picture.
While the average profit increased, its standard deviation didn't decrease as much: the Sharpe ratio of this strategy is -0.91. To compare, the previous incarnation of CAFBot had a Sharpe ratio of -1.52 / 3.55 = -0.43.
Making a market at multiple odds
This was a small feature that I had there from the beginning, actually. The naive CAFBot from the previous post only ever has offers at two odds: the best back and the best lay. But consider this:
Best available to back is 2.00, best available to lay is 2.02
The prices move: best back is 2.02, best lay is 2.04
We cancel both of our bets and replace them at the new odds
The prices move back to 2.00/2.02. We replace our bets again.
What happened there at the 2.00 back level was that we had to cancel a bet and then replace it shortly after when the prices recovered, thus losing our position in the match queue. The addition I had was to maintain bets at several price levels: best back, 1 tick lower than the best back and 2 ticks lower than the best back (likewise for the lay side) etc. That meant, at a larger capital commitment, that the bot would still maintain its position in the queue if the prices changed for a bit. This doesn't actually require any more betting actions to do for a 1-tick move: instead of cancelling a bet at 2.00 and replacing it at 2.02 the bot would, say, cancel a bet at 1.99 and replace it at 2.02.
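Generating those extra levels requires walking Betfair's odds ladder, since the tick size varies with the odds (0.01 below 2.00, 0.02 between 2.00 and 3.00, and so on per Betfair's published price increments). A sketch, with the band table and function being my own illustration rather than the bot's code:

```python
# Betfair's odds ladder: (upper bound of band, tick size within it)
BANDS = [(2.0, 0.01), (3.0, 0.02), (4.0, 0.05), (6.0, 0.1),
         (10.0, 0.2), (20.0, 0.5), (30.0, 1.0), (50.0, 2.0),
         (100.0, 5.0), (1000.0, 10.0)]

def ticks_below(odds, n):
    """Return the n odds levels directly below `odds` on the ladder."""
    levels = []
    for _ in range(n):
        # find the tick size of the band the current odds sit in
        for upper, tick in BANDS:
            if odds <= upper:
                break
        odds = round(odds - tick, 2)
        levels.append(odds)
    return levels
```

For example, `ticks_below(2.02, 3)` returns `[2.0, 1.99, 1.98]`: note the tick size changing as the ladder crosses 2.00.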
CAFBot with dynamic spreads
The papers also mention where to make the market as a way to control the inventory (some of them define it as a mid ± spread). Most of them use a solution to those differential equations in order to come up with a function defining the bid and ask to make a market at, but I decided not to go with that, since Betfair markets have varying granularities and spreads are usually already tight enough (in essence, anything that was more than a few ticks away from the best bid wouldn't have a chance to get matched anyway).
Instead, I used an idea from Avellaneda and Stoikov's presentation slides where their agent would move the quote by 1 tick on one side after it had accumulated a certain inventory. So, for example:
We make a market at 2.00 best back / 2.02 best lay
Our offer at 2.00 gets hit (we were offering to back, thus we've laid the market: we are now short)
Instead of replacing the bet at 2.00, we now make the market at 1.99 best back / 2.02 best lay.
For my limit in this test, I used 30 contracts: with 10 contracts offered, that meant the bot would move the quote after its offer got matched 3 times on one side.
I also used the making market on several odds feature to not cancel bets unless the price moves too far away from them, placing bets at 3 levels on both sides of the book.
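The skewing rule can be sketched as follows (a simplification of what the bot does: I've hardcoded a 0.01 tick, whereas real code would walk the Betfair odds ladder, and the names are mine):

```python
def skewed_quotes(best_back, best_lay, position, contracts_per_tick=10):
    """Move one side of our quotes away by one tick per
    `contracts_per_tick` contracts of accumulated inventory."""
    ticks = int(abs(position) // contracts_per_tick)
    tick = 0.01  # assumes odds below 2.00 for simplicity
    if position < 0:
        # short: quote our back offer one tick lower, making it
        # less likely to be hit and push us further short
        best_back = round(best_back - ticks * tick, 2)
    elif position > 0:
        # long: push the lay quote up instead
        best_lay = round(best_lay + ticks * tick, 2)
    return best_back, best_lay
```

With 10-contract offers, one fill on the back side gives `skewed_quotes(2.0, 2.02, -10)` → `(1.99, 2.02)`, exactly the 2.00 → 1.99 move in the example above.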
So, how did it do on our test market?
Oooh. We finally started making money, kind of. Closer to the race beginning, as matched volumes picked up, the MTM shot up dramatically, albeit dropping as dramatically in the end. It still ended trading with about £0.60 of profit, which is pretty cool.
In addition, the average exposure was -6.6 contracts: in a market where the odds were trending down (they went from about 3.9 to 3.4, 10 ticks), the bot's exposure was against it (since the contract price, the inverse of the odds, went up) and yet it still made money. Hence the bot did manage to profit from the spread.
Finally, placing our bets on several levels helped too: on this plot, the bold green and red lines are price levels at which the bot had an offer. You can see it's often the case that a green/red line is not interrupted when the price moves slightly away from it, meaning the bot doesn't cancel a bet and lose its position in the queue. In fact, on the same market, but with only 1 bet on both sides, the bot would have made only £0.20.
Let's test it on the whole dataset now.
Not that good, actually. The Sharpe ratio is better than the previous version, but we lost more money on average. At least making a market at 3 price levels helped (below are the results for placing just 1 bet on each side).
So what happened? Let's take a look at the worst market we traded.
It's... choppy. The price is jumping all over the place and there are very few times when it doesn't move (which is where this strategy really shines). There are, on the other hand, many points where the price goes "through" the bot's bet, resulting in it having an opposite exposure to the way the price went.
Looking closer, we can see how wild this market is: in the span of 90 seconds the odds crashed from 1.80 down 10 ticks to 1.70 and then back up again. In addition, in the first 30 seconds the best back/lay oscillated between 2 levels and the bot would diligently follow it, getting executed at 1.79 and then having its position closed at the same or worse price.
So all in all, while inventory control does help the bot not place too many directional bets, it still doesn't help it make money.
The second risk a market maker faces is adverse selection, the risk from information asymmetry. Someone who does cross the spread gets the benefit of choosing when their order gets executed, whereas a market maker simply posts orders and doesn't have any control over when or whether they will get hit. This creates a bias in favour of the person trading against the market maker: what if some of them are crossing the spread because they know the market is going to move, as in, they're informed traders?
Let's say there's been some sudden bad news from Apple during trading hours and people start selling Apple shares. A naive market maker that doesn't know what's happening will continue offering to buy them even though the market clearing price will very soon move down. In the case of Betfair markets, perhaps there can be a macro-level trend (even if the trend is a self-fulfilling prophecy) that the market maker doesn't take into account and more slower traders do, thus trading in a way that perpetuates the trend.
So is there a way to protect ourselves against that? What if the bot could somehow anticipate imminent price movements that might hurt it if it has an exposure to the market and move its quotes or liquidate its position? Perhaps it can.
Next time on project Betfair, we'll delve deeper into market microstructure and find out that at a very micro level, markets sometimes aren't as unpredictable as we thought.
Previously on project Betfair, we started collecting dumps of Betfair order book data from the Stream API and learned how to reconstruct series of events from those with the aim of using them to backtest a market making strategy.
Today, things get messy.
Insights into the data
As a market maker, our main source of profit should be our orders getting matched, and not us having a directional view (as in betting on which way the odds will move). Hence it's worth investigating how volumes of matched bets vary as the beginning of the race approaches.
Total matched volume
I took the total bet volumes matched on each race through time, starting from 3 hours before the race, and normalized them by dividing by the final total volume just before the scheduled race start time (essentially getting the cumulative fractions of the total matched volumes). There are 24 races in this dataset: I now have quite a few more, but their recording is started much closer to the kick-off, and you'll soon see why.
As you can see, bet matching in pre-race horse trading follows a power law. About half of the total bet volume is matched 10 minutes before the race begins, and 80% of all betting happens within 50 minutes from the race start. Note this doesn't include in-play betting where I guess this gets even more dramatic.
Hence there's not much point doing market making earlier than about 15 minutes before the race. Or perhaps could it be that the reason nobody is trading is that the costs are too high?
Back-lay spreads
I plotted the average back-lay spreads through time, and it doesn't seem so. The spreads here are in Betfair ticks: the smallest one is 1 (corresponding to, say, best available back odds of 1.72 and best lay odds of 1.73, or best back at 2.50 and best lay at 2.52). Even 3 hours before the race begins, the spreads, on average, are as tight as they can be.
One useful thing about the way the Betfair Stream API is designed is that with the tools given to us, we can maintain a state of the order book in our program as well as our own orders. It's essentially like having the data from API-NG available to us at all times, with notifications when it gets updated.
I decided to split the market-making bot into two parts.
The core would be given the current state of the order book and output the amount of one-pound contracts that it wanted to be offering at both sides of the book, together with the odds. The core's operation would be independent of what's happening to our outstanding orders: if we already have a back bet in the book and the core of the strategy wants to be offering the same amount, then no actions would be produced.
Reconciling the core's wishes with the actual state of our orders would be the execution harness' job. Given the desired offers we wish to maintain on both sides of the book and the state of our orders, it would produce a sequence of cancellations and order placements and submit them to Betfair.
Since all of these inputs (the state of the order book and all of our orders) are actually given to us by the Betfair Stream API, this has a neat bonus of behaving in a consistent way if the bot, say, crashes: we can just start it up again and the API will helpfully give us back all our outstanding orders and positions (not that it's a good idea to blindly trust it). Polling the strategy to find out what actions it wants to perform can be done at any point in time: on a schedule or, say, whenever a new event arrives from the Betfair stream.
So the whole strategy would look something like this:
When a new event arrives:
Incorporate it into our cached state of the order book and our orders (Betfair even have a guide here).
Give the order book to the core, request the amount of contracts it wants to offer and the odds (prices).
On both sides of the book, compare our current outstanding orders and what the core wants to do. For every price level, if what the core wants to offer is different from what we are offering with our outstanding orders:
If the core wants to offer more, submit an order placement for the difference.
If the core wants to offer less, take a list of our outstanding orders at that price level (since there might be several) and start cancelling them, newest first (since they're the least likely to get matched) until we reach the desired volume.
There are some operational issues with this (such as Betfair not usually allowing bet placements below £2), but this is a good general platform to build upon and definitely good enough to start simulating.
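The reconciliation step above can be sketched like this (a minimal illustration with made-up data shapes, not the actual harness):

```python
def reconcile(desired, outstanding):
    """Compare desired offers with outstanding orders, emit actions.

    desired: {(side, odds): volume} we want resting in the book.
    outstanding: {(side, odds): [(order_id, volume), ...]},
                 oldest order first.
    Returns a list of ('place', side, odds, volume) and
    ('cancel', order_id) actions to submit."""
    actions = []
    for key in set(desired) | set(outstanding):
        side, odds = key
        want = desired.get(key, 0)
        orders = outstanding.get(key, [])
        have = sum(v for _, v in orders)
        if want > have:
            actions.append(('place', side, odds, want - have))
        elif want < have:
            # cancel newest first: they're least likely to get matched
            excess = have - want
            for order_id, vol in reversed(orders):
                if excess <= 0:
                    break
                actions.append(('cancel', order_id))
                excess -= vol
    return actions
```

If the core wants 10 contracts backed at 2.00 and nothing on the lay side, while we have 6 resting at 2.00 and a stale lay order, this produces one placement for the 4-contract difference and one cancellation.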
With an order book simulator already built, augmenting it to inject strategy events is fairly easy. Let's say the simulator operates by maintaining a timestamped event queue (where the events are either placing a bet or cancelling one). We add another event type at given intervals of time (say, 100ms): when the backtester sees it, it gives the current order book and the state of the strategy's orders to the strategy, gets its actions from it and then inserts them back into the queue.
To simulate the delay between us and the exchange, the events are timestamped slightly into the future (say, 75ms) so the simulator might not get to apply player orders to the book immediately and may have to handle other orders before that.
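A skeleton of this event loop, using a heap as the timestamped queue (the constants and names are illustrative; the real simulator also applies each event to the order book):

```python
import heapq
import itertools

POLL_INTERVAL = 100  # poll the strategy every 100ms
LATENCY = 75         # delay before our actions reach the exchange
POLL = object()      # sentinel event marking a strategy poll

_counter = itertools.count()  # tie-breaker: heap never compares events

def run(events, start, end, poll_strategy):
    """events: list of (timestamp_ms, event) recorded market events.
    poll_strategy(t) returns a list of player actions to inject."""
    queue = [(t, next(_counter), e) for t, e in events]
    for t in range(start, end, POLL_INTERVAL):
        queue.append((t, next(_counter), POLL))
    heapq.heapify(queue)
    processed = []
    while queue:
        t, _, event = heapq.heappop(queue)
        if event is POLL:
            for action in poll_strategy(t):
                # player actions land in the book only after the latency
                heapq.heappush(queue, (t + LATENCY, next(_counter), action))
        else:
            processed.append((t, event))  # apply to the book here
    return processed
```

Note how a market event timestamped between the poll and poll-plus-latency gets applied to the book before the player's action, which is exactly the effect we want to model.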
In addition, I got the backtester to record various diagnostics about the strategy every time it polled it, such as how many contracts (actually matched bets) it owned, when its orders got matched and whether the matches were passive (initiated by someone else) or aggressive (initiated by the strategy) etc.
Meet CAFBot
There will also be a DAFBot at some point. This was a very early version. It's fairly different to what I actually ran to make the following plots, but the general idea is the same. I also moved the whole codebase to using integer penny amounts instead of floats soon after this commit.
It was time to find something to plug into the backtester. I decided to first go with the most naive approach: the core would look at the current best back and lay offers and then offer 10 one-pound contracts (recall that this results in a bet of £10 / odds) on both sides.
Since the execution harness compares the desired offers with the current outstanding bets, that would mean when the strategy was started, it would first place those two bets and then do nothing until at least one of them got fully or partially matched, at which point it would add back to the bet to maintain the desired amount. If the market were to move (the best back/lay offers changed), the strategy would cancel the orders and replace them at the new best bid/offer.
Finally, 15 seconds before the race start, the strategy would aggressively unload its accumulated exposure (say, if more of its back bets were matched than the lay bets) by crossing the spread (and not waiting to get matched passively).
In terms of markets, I would simulate it from 1 hour before the beginning of the race, running it only on the favourite horse (the one with the lowest odds at that point).
With that in mind, I managed (after a few days of debugging) to simulate it on a few races. So, how did it do?
Backtest results
Here's a result from one race:
Well, this is not looking great.
To explain that plot, the red line is the strategy inventory, or position: how many one-pound contracts it's currently holding. For example, with a position of -170, the strategy will lose £170 if the runner we're trading wins (and £0 if it loses, but in any case we get to keep the money we got from selling the contract). The blue line is the mark-to-market of the strategy: how much money we would make/lose if we were to liquidate our position immediately by crossing the spread, like we do in the last 15 seconds before the race begins.
This is obviously just one race, but surely with an HFT strategy we would expect more consistent profit even within one race? For example, Greg Laughlin of the WSJ did a small investigation into the trading of Virtu, an HFT firm, from their abandoned IPO filings, and showed that with a 51% percentage of profitable trades and about 3 million trades a day their chance of having a losing day was negligible.
But we're not making that many trades here and so perhaps this could be a statistical fluke. It's a good thing I now have all the other race data, collected during further experiments, to check this against.
Nope. Running this against 104 races (albeit with 15 minutes of trading, since the latter datasets I collected were shorter), we can see that the bot has systematic issues and loses, on average, £1.52 per race.
So what happened in the first plot? It looks like we had a fairly chunky short exposure (betting against) throughout the trading. At its worst we were short 175 contracts, which is 17 times more than what we were offering to trade. So there was an imbalance in terms of which side of our offering was getting executed.
Investigating further...
As you can see from the odds plot, the price of the contract (inverse of the odds) trended upwards throughout the final hour before the race, with a few range-bound moves closer to the start. That explains us losing money and also kind of explains us selling too many contracts: in a trending market, there would be more matching on one side of the book, hence a naive strategy that doesn't account for that would quickly get out of balance. Or at least that's what it looks like. There are many ways to read this.
This is the previous plot, but zoomed in towards the part where we lost money the fastest. The orange line shows the best available lay odds and the blue one is the back odds (both of which are where our simulated bot also puts its bets).
The green line in the middle is the volume-weighted average price, or the microprice. It's calculated as follows: $$ \text{VWAP} = \frac{\text{BBPrice} \times \text{BLVolume} + \text{BLPrice} \times \text{BBVolume}}{\text{BBVolume} + \text{BLVolume}} $$
Essentially, if the volume of available bets on one side is bigger, then that side kind of pushes the microprice towards the opposite side. It's a good way to visualize order book imbalance and, interestingly, in that plot, sometimes a move in the actual best back/lay odds can be anticipated by the microprice moving towards one side or the other. We'll look into that later.
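In code, the formula above is a one-liner (illustrative naming; note each side's price is weighted by the *opposite* side's volume):

```python
def microprice(best_back, back_volume, best_lay, lay_volume):
    """Volume-weighted mid: a heavier side pushes the microprice
    away from itself, towards the opposite quote."""
    return (best_back * lay_volume + best_lay * back_volume) \
        / (back_volume + lay_volume)
```

With a balanced book, `microprice(2.0, 100, 2.02, 100)` is just the mid, 2.01; triple the back volume and it drifts up towards the lay quote, to 2.015.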
The crosses are the bot's orders getting executed: red means the back offer got executed (thus the bot has secured a lay bet and is now more short the market) and green means the lay offer got executed. The dots are the same, but for other people's orders.
What's strange here is that our orders often get hit shortly before there's a price move against us: for example, in the beginning, just before the best back moves from 3.80 down to 3.75 (this is still a 1-tick move), you can see several executions of the bot's back orders, with the bot quickly replacing them several times in between executions. In essence, it's sort of trying to "hold the line" against a market move, but ultimately fails and ends up with having bet against the market whilst the odds managed to move through it. Well, at least we know it's a market move in hindsight: perhaps more people could have joined the bot on its side and maintained the price.
Essentially, the bot does well during times when there's a lot of matching going on on both sides of the book and the price isn't moving. This happens, for example, for a few minutes around 12:30:
But even in this case, this doesn't look too inspiring: the initial 1-pound jump in the PnL at 12:29 is because of the odds moving up and us having a short exposure. The actual market-making part between 12:29 and 12:33 only makes about £0.10.
So in the end, looks like it's back to the drawing board.
Not all was lost, obviously. I had a backtesting engine, a half-working strategy (in that it didn't immediately start bleeding money) and I had an idea of what was going wrong and some possible ways to fix it. And obviously no real money had suffered yet in the making of this.
Next time on project Betfair, we'll start reading some actual papers on market making and tinkering with the bot in attempts to make it lose less money. Perhaps I'll tell about what was happening to the Scala version (that would later supposedly be a production version of this thing) as well. Stay tuned!
Probably the hardest thing about writing this will be piecing together what I was doing and thinking at the time. Keeping a development diary was a great idea, it's just a shame I started it 1/3 of the way into the project and the rest is in various commit messages.
I would never have been able to get away with these commit messages at my previous job.
In any case, I started with how everything starts: gathering data.
Attempt 1: Betfair API-NG
Good news: Betfair has some APIs. Bad news: it costs £299 to access the API. Better news: I had already gotten an API key back in 2014 when it was free and so my access was maintained.
The first Betfair API I used is called API-NG. It's a REST API that lets one do almost everything the Web client does, including getting a (live or delayed) state of the runner order book (available and recently matched odds), getting information about given markets and runners, placing actual bets or getting the account balance.
Collecting order book data
Since naming things is one of the hard problems in computer science, I decided to pick the Daedric pantheon from The Elder Scrolls to name components of my new trading system, which would be written in Python for now.
Hermaeus, "in whose dominion are the treasures of knowledge and memory". Hermaeus is a Python script that is given a Betfair market ID and a market suspend time (when the market's outcome becomes known) as well as a sample rate. When started, it repeatedly polls the REST API at the given sample rate to get the order book contents for all runners and dumps those into a PostgreSQL database.
Meridia, "associated with the energies of living things". Meridia reads the Betfair market catalogue (say, all horse races in the UK happening today, their starting times and market IDs) and schedules instances of Hermaeus to start recording the data a given amount of time from the market getting suspended.
Hircine, "whose sphere is the Hunt, the Sport of Daedra, the Great Game (which I have just lost), the Chase". Hircine would be the harness for whatever strategy I would decide to run: when executed on a given market, it would read the data collected by Hermaeus and output the desired number of OPCs we wished to hold.
This was pretty much where I paused the design process: downstream of Hircine we would have some sort of a component that would determine whether to place a bet on a given market, but I didn't want to make a decision on what would happen later, since I wanted my further design decisions to be driven by looking at the data and seeing what would and wouldn't work.
I had decided to concentrate on pre-race greyhound and horse markets. Pre-race markets somehow still have dramatic odds movements (even though no new information that could affect the race outcome is coming to light), yet odds don't move by more than 1-2 ticks at a time (unlike football, where a goal can immediately halve the implied probability for some markets, like the Correct Score market). In addition, in-play markets have a bet delay of 1-10 seconds depending on the sports type, whereas for the pre-race markets the submitted bet appears in the order book immediately.
In terms of numbers, there are about 30 horse races in the UK alone on any given day (and those usually are the most liquid, as in, with a lot of bets being available and matched, with action starting to pick up about 30 minutes before the race begins) and about 90 greyhound races (those are less liquid than horses with most of the trading happening in the last 10 minutes before the race begins).
Now, I could either beat other people by making smarter decisions — or I could beat them by making dumb decisions quicker. I quite liked the latter idea: most high-frequency trading strategies are actually fairly well documented, I imagined that "high frequency" on Betfair is much, much slower than high frequency in the real world, and making software faster is better-defined than making a trading model make more money.
And, as a small bonus, trading at higher granularities meant that I didn't need as much data. If my strategy worked by trying to predict, say, which horse would win based on its race history or its characteristics, I would need to have collected thousands of race results before being able to confidently backtest anything I came up with. In a higher frequency world, data collection would be easier: there's no reason why the microstructure of one horse racing market would be different from another one.
The simplest one of those strategies would be market making. Remember me lamenting in the previous post that buying a contract and immediately selling it back results in a ~0.5% loss because of the bid-offer spread? As a market maker, we have a chance of collecting that spread instead: what we do is maintain both a buy (back) and a sell (lay) order in hopes that both of them will get hit, resulting in us pocketing a small profit and not having any exposure, assuming we sized our bets correctly. It's a strategy that's easy to prototype but has surprising depth to it: what if only one side of our offer keeps getting matched and we accumulate an exposure? What odds do we quote for the two bets? What sizes?
With that in mind, I realised that the way I was collecting data was kind of wrong. The idea of the strategy output being the desired position at a given time is easily backtest-able: just multiply the position at each point by the price movement at the next tick and take the cumulative sum. The price in this case is probably the mid (the average of the back and the lay odds) and for our cost model we could assume that we cross the spread at every trade (meaning aggressively take the available odds instead of placing a passive order and waiting to get matched).
Sadly, this doesn't fly with a market-making strategy: since we intend to be trading passively most of the time, we have to have a way to model when (or whether: while we're waiting, the price could move underneath us) our passive orders get matched.
How an order book works
If we have to delve into the details of market microstructure, it's worth starting from describing how a Betfair order book actually works. Thankfully, it's very similar to the way a real exchange order book works.
When a back (lay) order gets submitted to the book, it can either get matched, partially matched, or not matched at all. Let's say we want to match a new back (lay) order N against the book:
take the orders at the opposite side of the book: lay (back).
look at only those that have higher (lower) odds than, or the same odds as, N.
sort them by odds descending (ascending) and then by order placement time, oldest first.
for each one of these orders O, while N has some remaining volume:
if N has more remaining volume than O, there's a partial match: we decrease N's remaining volume by O's volume, record a match and remove O from the book completely.
if N has less remaining volume than O, we decrease O's remaining volume by N's remaining volume, record a match and stop.
if both volumes are equal, we remove O from the book, record a match and stop.
if N has any remaining volume, add it to the order book.
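The matching procedure above translates fairly directly into code. This is a sketch (my own data shapes: each side of the book holds resting orders as `[odds, volume, placed_at]` lists, and matches are recorded at the resting order's odds):

```python
def match_order(book, side, odds, volume, t):
    """Match a new back/lay order against the opposite side of the
    book, per the steps above; add any remainder as a resting order.

    book: {'back': [...], 'lay': [...]} of [odds, volume, placed_at].
    Returns the list of (odds, volume) matches."""
    opposite = 'lay' if side == 'back' else 'back'
    if side == 'back':
        # a back matches resting lays at equal or higher odds,
        # best (highest) odds first, then oldest first
        candidates = [o for o in book[opposite] if o[0] >= odds]
        candidates.sort(key=lambda o: (-o[0], o[2]))
    else:
        candidates = [o for o in book[opposite] if o[0] <= odds]
        candidates.sort(key=lambda o: (o[0], o[2]))
    matches = []
    for o in candidates:
        if volume <= 0:
            break
        fill = min(volume, o[1])
        matches.append((o[0], fill))
        o[1] -= fill       # mutates the resting order in the book
        volume -= fill
    # drop fully matched resting orders
    book[opposite] = [o for o in book[opposite] if o[1] > 0]
    if volume > 0:
        book[side].append([odds, volume, t])
    return matches
```

A back for 6 contracts at 2.02 against resting lays of 5 at 2.02 and 3 at 2.04 first exhausts the 2.04 level (better odds for the backer), then takes 3 of the 5 at 2.02, leaving 2 resting.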
It's easy to see that to get our passive order matched, we want to either offer a price that's better than the best price in the book (highest odds if we're offering a back (laying) or lowest odds if we're offering a lay (backing)) or be the earliest person at the current best price level. One might think of matching at a given price level as a queue: if we place a back at 1.70 (thus adding to the available to lay amount) and there is already £200 available at that level, we have to wait until that money gets matched before we can be matched — unless while we are waiting, the best available to lay odds become 1.69 and so we'll have to cancel our unmatched bet and move it lower.
So how do we use this knowledge to backtest a market-making strategy? The neatest way of doing this that I found was taking the order book snapshots that I collected and reconstructing a sequence of order book events from it: actions that take the order book from one state to the next one. Those can be either back/lay order placements (possibly triggering a match) or cancellations. If these events were to be plugged back into the order book simulator, we would end up with order book states that mirror exactly what we started with.
However, if we were to also insert some of our own actions in the middle of this stream, we would be able to model the matching of passive orders: for example, if the best available lay at a given point in time was 1.70 and we placed an order at the better odds (for others) at 1.69, all orders in the event stream that would have matched at 1.70 would now be matching against our new order, until it got exhausted. Similarly, if a new price level was about to be opened at 1.69 and we were the first order at that level, all orders would be matching against us as well.
Inferring order book events
So I set out to convert my collected snapshots into a series of events. In theory, doing this is easy: every snapshot has the odds and volumes of available back and lay bets, as well as the total amount traded (matched) at each odds. So, to begin, we take the differences between these 3 vectors (available to back, available to lay, total traded) in consecutive snapshots. If there's no traded difference, all other differences are pure placements and cancellations without matches. So if there's more available to back/lay at a given level, we add a PLACE_BACK/PLACE_LAY instruction to the event stream — and if there's less, we add a CANCEL_BACK/CANCEL_LAY instruction for a given amount. Since we don't know which exact order is getting cancelled (just the volume), the order book simulator just picks a random one to take the volume away from.
Things get slightly more tricky if there's been a trade at a given odds level. If there's only been one order book action between the snapshots, it's fairly easy to infer what happened: there will be a difference in either the outstanding back or lay amounts at those odds, meaning there was an aggressive back/lay that immediately got matched against the order. Sometimes there can be several odds levels for which there is a traded amount change — that means an order was so large that it matched against orders at several odds. After the traded amount differences are reconciled, we need to adjust the available to back/lay differences to reflect that and then proceed as in the previous paragraph to reconcile unmatched placements/cancellations.
There was a problem with the data I collected, however: API-NG limits the number of requests to it to about 5 per second. Worse still, even though I was making only about 3 requests per second, the API would sometimes throttle me and not return an HTTP response until a few seconds later. This meant that consecutive order book snapshots had more than one event between them, and in some cases there were way too many possible valid sets of events that could have brought the order book from one state to the other.
Thankfully, a slightly more modern way of getting data was available.
Attempt 2: Betfair Exchange Stream API
Unlike its brother, API-NG, the Exchange Stream API is a push API: one opens a TCP connection to Betfair and sends a message asking to subscribe to certain markets, at which point Betfair sends back updates whenever an order placement/cancellation/match occurs. Like its brother, the Stream API uses JSON to serialise data as opposed to a binary protocol. Messages are separated with a newline character and can be either an image (the full order book state) or an update (a delta saying which levels in the order book have changed). The Stream API can also be used to subscribe to the state of the user's own orders, to be notified when they have been matched/cancelled.
This was pretty cool, because it meant that my future market making bot would be able to react to market events basically immediately, as opposed to 200ms later, and wouldn't be liable to get throttled.
Here's an example of a so-called market change message:
{"op":"mcm","id":2,"clk":"ALc3AK0/AI04","pt":1502698655391,"mc":[
{"id":"1.133280054","rc":[
{"atb":[[5.7,5.7]],"atl":[[5.7,0]],"trd":[[5.7,45.42]],"id":12942916}]}]}
This one says that in the market 1.133280054, we can't bet against (lay) the runner 12942916 at odds 5.7, but instead can bet up to £5.70 on it. In addition, the total bets matched at odds 5.7 now amount to £45.42 (actually it's half as much, because Betfair counts both sides of the bet together). In essence, this means that there was an aggressive lay at 5.7 that got matched against all of the available to lay amount, with £5.70 of it remaining in the book on the available to back side, basically moving the market price.
As for the other parts of the message, pt is the timestamp and clk is a unique identifier that the client at subscription time can send to Betfair to get the whole market image starting from that point (and not from the point the client connects: this is used to restart consumption from the same point if the client crashes).
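To make this concrete, here's a sketch of decoding that exact message and applying its deltas to a book of `{odds: amount}` dicts (the convention that a zero amount deletes a price level is inferred from the `atl` delta in the example above):

```python
import json

raw = ('{"op":"mcm","id":2,"clk":"ALc3AK0/AI04","pt":1502698655391,"mc":['
       '{"id":"1.133280054","rc":['
       '{"atb":[[5.7,5.7]],"atl":[[5.7,0]],"trd":[[5.7,45.42]],"id":12942916}]}]}')
msg = json.loads(raw)

book = {'atb': {}, 'atl': {}, 'trd': {}}
for rc in msg['mc'][0]['rc']:          # runner changes within the market change
    for side in ('atb', 'atl', 'trd'):
        for odds, amount in rc.get(side, []):
            if amount == 0:
                book[side].pop(odds, None)  # a zero amount removes the level
            else:
                book[side][odds] = amount   # otherwise replace it wholesale
```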
Another program had to be added to my Daedric pantheon. Sanguine, "whose sphere is hedonistic revelry, debauchery, and passionate indulgences of darker natures", would connect to the Stream API, subscribe to the order book stream for a given market and dump everything it received into a text file. It was similar to Hermaeus, except I had given up on creating a proper schema at this point to store the data into PostgreSQL (and even in the case of Hermaeus it was just the metadata that had a schema, the actual order book data used the json type). Still, I was able to schedule instances of it to be executed in order to record some more stream data dumps.
And indeed, the data was now much more granular and could actually be reconciled (except for a few occasions where some bets would be voided and the total amount traded at given odds would decrease).
Here's some code that, given the current amounts available to back, lay and traded at given odds, as well as a new order book snapshot, applies the changes to our book and returns the events that have occurred.
def process_runner_msg(runner_msg, atb, atl, trds):
    # atb/atl/trds map odds -> amount in pennies (e.g. a collections.defaultdict(int))
    events = []
    # represent everything in pennies
    back_diff = {p: int(v * 100) - atb[p] for p, v in runner_msg.get('atb', [])}
    lay_diff = {p: int(v * 100) - atl[p] for p, v in runner_msg.get('atl', [])}
    trd_diff = {p: int(v * 100) - trds[p] for p, v in runner_msg.get('trd', [])}

    for p, v in trd_diff.iteritems():
        if not v:
            continue
        if v < 0:
            print "ignoring negative trade price %f volume %d" % (p, v)
            continue
        trds[p] += v
        if p in back_diff and back_diff[p] < 0:
            # an aggressive lay ate into the amount available to back
            back_diff[p] += v / 2  # betfair counts the trade twice (on both sides of the book)
            atb[p] -= v / 2
            events.append(('PLACE_LAY', p, v / 2))
        elif p in lay_diff and lay_diff[p] < 0:
            # an aggressive back ate into the amount available to lay
            lay_diff[p] += v / 2
            atl[p] -= v / 2
            events.append(('PLACE_BACK', p, v / 2))
        else:
            print "can't reconcile a trade of %d at %.2f" % (v, p)

    # these were aggressive placements -- need to make sure we're sorting backs (place_lay)
    # by price descending (to simulate us knocking book levels out) and lays vice versa
    events = sorted([e for e in events if e[0] == 'PLACE_LAY'], key=lambda e: e[1], reverse=True) + \
        sorted([e for e in events if e[0] == 'PLACE_BACK'], key=lambda e: e[1])

    # whatever differences remain are passive placements and cancellations
    for p, v in back_diff.iteritems():
        if v > 0:
            events.append(('PLACE_BACK', p, v))
        elif v < 0:
            events.append(('CANCEL_BACK', p, -v))
        atb[p] += v
    for p, v in lay_diff.iteritems():
        if v > 0:
            events.append(('PLACE_LAY', p, v))
        elif v < 0:
            events.append(('CANCEL_LAY', p, -v))
        atl[p] += v
    return events
Parallel to this, I was also slowly starting to implement an Exchange Stream API client that would subscribe to and handle market change messages in Scala. This contraption, named Azura (whose sphere is "the period of transition and change"), would form the base of the actual live market making bot and/or any other trading strategies that would use the Stream API.
Next time on project Betfair, we'll start implementing and backtesting our market making bot, as well as possibly visualising all the data we collected.
Let's go on a treasure hunt!
@mildbyte 3 years, 5 months ago | programming | games | python | telegram | bots |
Chatbots are basically a clunky commandline interface to things that sometimes really need a custom UI.
But what if I do want to make something that uses some of the features (a GPS receiver, a camera, a microphone) that a modern phone has? I'm too lazy to write device-specific code and dig around in the intrinsics of Android/iOS APIs.
So after doing some research on modern messenger apps (I kind of fell behind on what was going on after the WhatsApp acquisition and turns out billions more have popped up since then) I stumbled upon the Telegram bot API. And it's actually pretty simple. In a nutshell, you create a bot (by messaging another bot) which gives you a token. The token is the only thing your bot needs to communicate with the Telegram servers (so no coding up handshakes or managing session keys): it makes up your REST endpoint that you can throw queries at. The connection is over SSL, so that takes care of your ISP or a kid with a WiFi dongle and Wireshark grabbing hold of your token. Bot chats aren't end-to-end encrypted though, so Telegram are still able to read whatever you talk to the bot about.
With that in mind, receiving messages is easy: just shoot a GET request at https://api.telegram.org/bot(TOKEN)/getUpdates (reference) and it will come back with a JSON-serialised list of events (message sent, message edited etc) that happened to your bot. Each one has a unique sequence number which you can use to seek around in the update log (to say get updates only starting from the last one you processed). Updates related to messages have in them a chat ID identifying your conversation with a given user -- and you include that chat ID in your POST requests to https://api.telegram.org/bot(TOKEN)/sendMessage (reference) in order to send messages back to that user.
You can also send around various other things besides text messages, like locations (latitude-longitude pairs), photos (your bot gets some links to various-sized thumbnails of the photo), contacts etc.
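A minimal long-polling bot using just those two endpoints might look like this (Python 3 stdlib only; `call` and `run_echo_bot` are made-up names, and error handling is omitted):

```python
import json
import urllib.parse
import urllib.request

API = 'https://api.telegram.org/bot{}/{}'

def call(token, method, **params):
    """POST a Bot API method call and decode the JSON response."""
    data = urllib.parse.urlencode(params).encode()
    with urllib.request.urlopen(API.format(token, method), data=data) as resp:
        return json.load(resp)['result']

def run_echo_bot(token):
    offset = None
    while True:
        # long-poll for updates with sequence numbers we haven't seen yet
        for update in call(token, 'getUpdates', timeout=30, offset=offset or 0):
            offset = update['update_id'] + 1
            message = update.get('message', {})
            if 'text' in message:
                # reply to the same conversation using its chat ID
                call(token, 'sendMessage',
                     chat_id=message['chat']['id'], text=message['text'])
```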
So I managed to write Indiana, a treasure hunt bot that comes up with a random location inside Hyde Park (well, the rectangle whose all 4 points lie within Hyde Park and yes, that means it can sometimes put the treasure in the water or in some restricted areas and I take no responsibility for you ending up there) and, when sent a location, replies back with a rough estimate of how far the treasure is. Sort of like Pokemon Go without having to lug around an extra power pack. Note you also can send the bot a manual location -- it can't distinguish between that and a physical location read from GPS (thankfully).
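The bot's actual distance formula isn't shown here, but for Hyde Park-sized distances a rough estimate can be produced with the standard haversine formula:

```python
from math import asin, cos, radians, sin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in metres

# roughly across Hyde Park, west to east: a bit under 2km
d = distance_m(51.507, -0.178, 51.507, -0.152)
```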
project Morrowind, part 7
So that you don't think that the 1-year delay in posting part 6 was due to me manually drawing the population heatmaps in Paint, I finally split the code I used to produce all the plots into a set of modules and uploaded them to GitHub. You'll need the usual scientific Python stack (NumPy, SciPy, matplotlib as well as PIL) and a C++ compiler. Since I wasn't sure if it's a good idea to post the game data dump that I produced, you'll have to make it yourself: you'll need the original Morrowind.esm data file and Enchanted Editor (instructions on how to produce the dump are in the README).
With all that in mind, I've run the code end-to-end and it spit out a similar set of images to what I have on the blog, which makes me incredibly happy.
Now it's time to get back to Cookie Clicker!
I told you I'd be back in a year's time.
With Aryon safe back in his tower and with all inhabitants of the island maximising the efficiency of their travel, it was time to approach a new challenge and create some more pretty pictures. The next question was simple: where the hell are all the people and what do they do?
Let's try and use our cool matrix that converts in-game coordinates to coordinates on a map to its full extent and create some sort of a population heatmap. This isn't difficult to do since we already have all the pieces of the puzzle: we know where all the NPCs are located and what their occupation, race and gender are. The only problem is dealing with NPCs that are in the interior: remember how interiors are completely separate mini-worlds? This means that we can't simply infer someone's location in the exterior by taking the coordinates of the two doors and adding up an offset of the NPC from the door, since interiors often are bigger on the inside than what they look like from the outside. Since we'd only be looking at a world-scale overview, I decided not to bother with precision: the actual exterior location of an NPC is simply the location of the closest exterior door they can get to (by number of cells they have to traverse to get outside).
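A sketch of that closest-exterior-door search, with the cell connectivity as a plain dict (all names hypothetical, this isn't the released code):

```python
from collections import deque

def exterior_location(start_cell, neighbours, exterior_door):
    """BFS over interior cells from the NPC's cell; returns the exterior
    coordinates of the closest door leading outside (fewest cells traversed)."""
    seen, queue = {start_cell}, deque([start_cell])
    while queue:
        cell = queue.popleft()
        if cell in exterior_door:          # this cell has a door to the outside
            return exterior_door[cell]
        for nxt in neighbours.get(cell, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                            # NPC is sealed in, somehow

# toy world: basement -> tower -> courtyard, whose exterior door is at (100, 200)
neighbours = {'basement': ['tower'], 'tower': ['basement', 'courtyard']}
doors = {'courtyard': (100, 200)}
loc = exterior_location('basement', neighbours, doors)
```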
Armed with these tools, I went through all the NPCs in the world, getting their exterior location, and converted that location into coordinates on the map. I had a map-sized matrix where I accumulated those coordinates: the number at each pixel was the number of NPCs whose exterior coordinates fell within that square. This meant that I'd get disproportionately large amounts of people piling up at the doors of densely-populated interiors, which wasn't ideal: it was difficult to see on the image (after all, it's just one pixel), and it didn't represent the in-game reality well, since we care about the population of a given city or region and people roam around rather than stand in one spot.
Hence I applied a Gaussian blur to my matrix so that instead of 10 people assigned to one pixel we'd be looking at something like 2.2 people on that pixel, 1.1 people one pixel away, 0.5 people 2 pixels away etc. If this feels like chopping people into parts and throwing those body parts around so they form a nice hill, it's because it kind of is.
With that out of the way, I normalised the matrix so that all values were between 0 and 1, applied one of the numerous colormaps that matplotlib has (I quite liked the one called blues) and blended it with the original map. I also toyed around with applying a transfer function to the inputs before pushing them into the colormap since I didn't like the way it looked by default -- I chose a logistic function:
\[ f(t) = \frac{1}{1 + e^{-k(t-c)}} \]
I didn't really have a methodology here: varying $k$ changes the steepness of the curve (how quickly things go from the left side of the colormap to the right side, getting brighter) and varying $c$ changes where it's centered, so I tinkered with them for each picture until it looked good.
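Putting the normalisation and the transfer function together (the blur itself would be something like scipy.ndimage.gaussian_filter; this sketch only shows the normalise-and-sigmoid step on a flat list of counts, and the helper names are mine):

```python
from math import exp

def logistic(t, k, c):
    """Transfer function applied to the normalised heatmap before the colormap."""
    return 1.0 / (1.0 + exp(-k * (t - c)))

def to_colormap_inputs(counts, k=8, c=0.2):
    """Normalise accumulated per-pixel counts to [0, 1] and push them
    through the sigmoid, brightening anything above the centre c."""
    peak = float(max(counts)) or 1.0   # avoid dividing by zero on an empty map
    return [logistic(v / peak, k, c) for v in counts]

vals = to_colormap_inputs([0, 1, 5, 10])
```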
With that in mind, let's see what we ended up with!
draw_npcs(filter_sigma=25, sigmoid_k=8, sigmoid_c=0.2, output='map_population.png') (full)
We get dark blobs in large population centres like, bottom to top, Vivec (and Ebonheart next to it), then Balmora (southwestern part of the island), Sadrith Mora (far east), Ald'ruhn (north of Balmora) and Gnisis (northwest of Ald'ruhn). There are also some minor places highlighted around -- these are either smaller settlements or larger dungeons/strongholds/shrines.
What else can we do with it? How about mapping out all the Dark Elves? Easy, just don't go through all the NPCs:
draw_npcs(filter_sigma=25, mark_npcs=[n for n in npcs if n.race == 'Dark Elf'], sigmoid_k=8, sigmoid_c=0.2, output='map_population_darkelf.png') (full)
Yes, it looks just like the population heatmap. How about seeing where they are overrepresented or underrepresented? We can divide the two overlays by one another to essentially get fractions of Dark Elves amongst the population:
draw_npcs(relative=True, filter_sigma=50, mark_npcs=[n for n in npcs if n.race == 'Dark Elf'], sigmoid_k=4, sigmoid_c=0.5, output='map_population_darkelf_relative.png') (full)
I did have to play around with the parameters for this one (increasing the blur radius and moving the centre of the sigmoid to 0.5), but we can sort of see how the Dark Elves (natives of Morrowind) are less represented in the southwestern part of the island (which is more cosmopolitan and welcoming towards foreigners) and more represented in the eastern territories as well around the Ashlander camps (which almost completely consist of them).
What else can we do? Morrowind has slavery! Let's find out where all the slaves are concentrated:
draw_npcs(relative=True, filter_sigma=25, mark_npcs=[n for n in npcs if n.class_name == 'Slave'], sigmoid_k=8, sigmoid_c=0.2, output='map_population_slave_relative.png') (full)
No blobs around big cities and towns -- which makes sense since this is a relative fraction. Instead what we have highlighted for us are random dungeons and plantations around the world where slaves are held, including Abebaal Egg Mine or Dren Plantation or some slave markets or Rotheran or Hlormaren (interestingly, for the latter the blob (west of Balmora by the sea) is west of the actual stronghold -- this is because the slaves are held in sewers from where the exit is around there).
Of course we would never use this tool for our own selfish purposes:
draw_npcs(relative=True, filter_sigma=50, mark_npcs=[n for n in npcs if n.is_female], sigmoid_k=12, sigmoid_c=0.7, output='map_population_female_relative.png') (full)
There are very few places on the island where females are overrepresented (note I set the centre of the sigmoid at 70%) -- the only one of them that's a town is Tel Mora in the northeast. That's because the councilor of that town "does not enjoy the presence of men" and all residents of that town are indeed women. Another place is Odirniran in the southeast, a Telvanni stronghold under attack by House Hlaalu. Northwest of that we have Assu with two sorceresses and north of that is Tel Uvirith -- a stronghold that gets built for the player as part of the Telvanni questline. It's disabled at the start of the game (and is invisible), but the scraper obviously didn't care about that.
Next year on project Morrowind, I promise I'll actually get around to cleaning up the source code that was used to make all this and releasing it. Promise.
Look what I found in my drafts folder. Welcome back to project Morrowind.
The nice visualization of where Aryon could be was very close now. I went with the stupidest approach: go through all pixels on the map, convert each one into a point in the game world and find how long it would take Aryon to get there (by using the method I mentioned previously: go through all points in the graph we know the shortest travel time to and find the one for which the total travel time (shortest time to travel to that point + time to walk from that point to the destination) is the smallest).
Except I forgot this was Python and I was going to go through, for each point on the map, about 2400 possible routes through exterior points. And there were 1650x1900 = about 3 million points. Sure, I could be smart about it and use various optimisations (like coalescing exterior points that are close enough to each other and treating them as one or exploiting the triangle inequality (as mentioned in the previous post) or looking at 2x2 blocks on the map instead of each pixel or using all 4 cores of my CPU instead of one). Or I could farm it out to a C++ program.
So I dumped the list of known exterior coordinates and times of the shortest routes to those to a file as well as the in-game coordinates of the 3-ish million points on the map I was interested in. The program would take those and spit out, for each sought coordinate, the shortest time it would take for Aryon to get there from his tower. In fact, it took 40 lines and ran in about 10 seconds. It's pretty amazing how fast you can be if you speak to the bare metal.
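The inner loop that the C++ program runs boils down to something like this (sketched back in Python; the walking speed of 12000 game units per game hour follows from the game's 100 units per real second and 30x time compression, and the function name is mine):

```python
from math import hypot

WALK_SPEED = 12000.0  # game units per game hour: 100 units/s real time, 30x compression

def best_time(dest, reachable):
    """Shortest time to dest: min over known graph points of
    (time to reach that point + time to walk from it to dest).
    `reachable` is a list of ((x, y), hours) pairs."""
    return min(t + hypot(dest[0] - x, dest[1] - y) / WALK_SPEED
               for (x, y), t in reachable)

# two known points: the tower itself (0h away) and a silt strider stop 1h away
reachable = [((0, 0), 0.0), ((50000, 0), 1.0)]
t = best_time((66000, 0), reachable)  # faster via the strider stop than on foot
```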
I then used matplotlib's contour plot to visualize the heatmap I got. I didn't manage to get it to actually overlay on the map in the map's original resolution, but the wizards were still extremely impressed and said that I should speak to them whenever I was interested in seed funding for my startup.
Picture time!
So this actually makes sense. There's a 2h circle around Aryon's home (northeast portion of the island) from where he could either walk or teleport to Wolverine Hall through Divine Intervention (an island east of Vvardenfell). Wolverine Hall has a Mages' Guild, so that means he could instantaneously get to four other major towns (a blob along the west edge of the island). So there are quite a few places he could get in 2 hours!
After that, he would have to take the Silt Strider or a boat, which would slow him down. In 4 hours he would barely be able to reach Gnisis (northwest corner of the island) or Maar Gan (the little arc at the top of the 4h contour around the main population centres). He, of course, could walk from his original location for 4 hours but he wouldn't get very far.
In 6 hours he could be anywhere on the island and in 8 he would be able to reach the northern edges of Dagon Fel, a small island north of Vvardenfell. Finally, in about 11 hours he could very possibly be having breakfast with Big Head in the most desolate corner of Morrowind. Perhaps he had some business there?
The wizards said last time they ever saw Aryon was at about 2am, so he'd been gone for almost 10 hours by that point. Luckily as we were trying to figure out if he would deliberately take the most efficient route to get as far away from his tower as possible, we heard a loud noise from a nearby wardrobe and an asleep but still alive Aryon fell out of it.
In the end, he loved my contour plot as well and hung it up on his wall. Some people say the tower steward still uses it to track down people who go missing in action during Aryon's wild parties.
Next year on project Morrowind, we'll talk about my assignment with Vvardenfell Office for National Statistics to make sense of the island's demographics.
Welcome back to project Morrowind, in which we use technology to oppress people for our own political gains.
A couple of hungover Telvanni wizards came by to my house this Saturday morning. They went to Master Aryon's tower the night before for a round of drinks, which quickly escalated to several rounds of drinks. Long story short, Aryon managed to wander away somewhere and hasn't been seen since. Worse even, a Council meeting was supposed to take place next Monday and Aryon not attending it would be disastrous.
The wizards wondered if I could map out the locations Aryon might possibly be in so they would be able to better concentrate their agents' efforts across various cities in Vvardenfell and recover him before the meeting.
Imagining all kinds of blog posts I could write about this, I agreed.
Regenerating the graph
I first had to alter the weights of the edges in the travel graph, since in actual game time, travel by silt strider or boat isn't instantaneous. But it's easy to calculate from the distance anyway: the speed of travel is controlled by a game setting that defaults to 16000 units per game hour. For example, the distance between Seyda Neen and Balmora is about 55000 units, so if in the beginning of the game you decided to spend money on public transport instead of walking, you would get to Balmora and finish your first quest in less than 3.5 game hours.
Determining the walking time between locations also required some digging. The minimum walking speed in the game is 100 game units per real-world second and the game time by default flows 30 times faster than real time. So walking 16000 units would take about 16000 / 100 * 30 / 3600 = 1h20m of game time. As you see, this is not much slower than taking the silt strider and if you saw one you would realise why.
Obviously, if our travel NPC has "Guild Guide" in his class name, traveling with him doesn't take any time - because magic.
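Those three cases fold into a single edge-weight function (a sketch; the mode names are mine, not the game's):

```python
def edge_hours(distance, mode):
    """Game-hours to traverse one edge of the travel graph."""
    if mode == 'guild_guide':
        return 0.0                            # teleportation is instant - because magic
    if mode in ('silt_strider', 'boat'):
        return distance / 16000.0             # the game setting: 16000 units/game hour
    return distance / 100.0 * 30 / 3600       # walking: 100 units/s, 30x game time

h = edge_hours(55000, 'silt_strider')         # Seyda Neen -> Balmora: under 3.5 hours
```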
Having rebuilt the graph and re-run Dijkstra on it, we can easily determine how long it would take Aryon to reach any point in the game world, assuming he uses the fastest route. Go through all points in the graph we know the shortest travel time to and find the one for which the total travel time (shortest time to travel to that point + time to walk from that point to the destination) is the smallest.
There is an optimisation which I haven't done: we actually only care about points on the graph that can be reached by some means other than plain walking. Consider this: if the shortest path to a point C is formed by first teleporting to some point A, then walking to a point B and finally walking on to C, why not walk from A to C directly? (We're assuming here that Aryon can levitate and move between the points as the crow flies, so any 3 points in the exterior satisfy the triangle inequality.)
But of course just giving the Telvanni wizards a list of in-game coordinates would be a faux pas. They required a map, and a map I would provide. An affine map, of all things.
A quick, incomplete and mostly wrong introduction to linear algebra
The problem here is that we want to find a way to convert a pair of pixel coordinates on the game map to coordinates in the game world. Luckily, this transformation has an important property: a line between any two points on the game map is also a line in the actual world. Such transformations are called affine: they can be composed out of primitive operations like translation, rotation, reflection etc.
The good news is, they can be represented by a matrix product.
$$ \begin{pmatrix}x_{GAME} \\ y_{GAME} \\ 1 \end{pmatrix} = M \begin{pmatrix}x_{MAP} \\ y_{MAP} \\ 1\end{pmatrix} $$
So if we have a pair of map coordinates and this 3x3 matrix M, we'll be able to calculate the actual in-game coordinates, and vice versa. The third component of the vector being 1 is an ugly hack that allows us to encode translations (movement), since otherwise the vector (0, 0) on the map would map (he-he) to the vector (0, 0) in the game. More on Wikipedia.
How do we find such a matrix? Well, we can use it to transform several vectors at the same time:
$$ \begin{pmatrix}x_{GAME, 1} & x_{GAME, 2} & x_{GAME, 3} \\ y_{GAME, 1} & y_{GAME, 2} & y_{GAME, 3} \\ 1 & 1 & 1 \end{pmatrix} = M \begin{pmatrix}x_{MAP, 1} & x_{MAP, 2} & x_{MAP, 3} \\ y_{MAP, 1} & y_{MAP, 2} & y_{MAP, 3} \\ 1 & 1 & 1 \end{pmatrix} $$
And (by inverting the matrix on the right and multiplying the whole equation by it) this can be rewritten to
$$ M = \begin{pmatrix}x_{GAME, 1} & x_{GAME, 2} & x_{GAME, 3} \\ y_{GAME, 1} & y_{GAME, 2} & y_{GAME, 3} \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix}x_{MAP, 1} & x_{MAP, 2} & x_{MAP, 3} \\ y_{MAP, 1} & y_{MAP, 2} & y_{MAP, 3} \\ 1 & 1 & 1 \end{pmatrix}^{-1} $$
Essentially, if we get 3 sets of coordinates in the game world and on the map, we can use those to recover our mapping. These 3 points also can't be on the same line because then the determinant of the matrix of map coordinates is zero and it doesn't have an inverse.
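Recovering M from three non-collinear point pairs is a few lines of numpy (toy numbers here, not the actual reference points):

```python
import numpy as np

def affine_from_points(map_pts, game_pts):
    """Recover the 3x3 affine matrix M from three (map, game) point pairs."""
    A = np.vstack([np.array(map_pts, dtype=float).T, np.ones(3)])   # map coords, homogeneous
    B = np.vstack([np.array(game_pts, dtype=float).T, np.ones(3)])  # game coords, homogeneous
    return B @ np.linalg.inv(A)  # M A = B  =>  M = B A^-1

# toy example: the game world is the map scaled 2x and shifted by (10, 20)
M = affine_from_points([(0, 0), (1, 0), (0, 1)],
                       [(10, 20), (12, 20), (10, 22)])
```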
So I picked the game coordinates of 3 locations that were fairly well spread (to minimize the error) and tried to pinpoint the corresponding pixel coordinates on the map.
In the end this is the matrix I found:
$$ M = \begin{pmatrix}185.38 & -0.43327 & -126720 \\ 1.2986 & -0.018372 & 218470 \\ 0 & 0 & 1 \end{pmatrix} $$
To test it out, I plotted the three reference points I used to calculate it (in red) as well as Aryon's initial location (in blue): the exterior door to his house is located at game coordinates (85730.77, 117960.3, 5081.284), which the matrix maps to (1147.33, 555.21).
I can see your house from here! (the actual map comes from http://thegamersjournal.com/rpg/pc/morrowind/maps/map_rendered_m.jpg)
This edition of project Morrowind was overdue by about two months, so I sadly have to stop here. But next time I'll definitely tell you how we managed to track Aryon and save the Telvanni council from collapse.
Copyright © 2017–2018 Artjoms Iškovs (mildbyte.xyz). Questions? Comments? Suggestions? Contact [email protected]. | CommonCrawl |
New questions in Math-3x^3+2x odd or even? Replace and with the actual values. To do this you need to be familiar with many trig. To convert a rectangular equation into polar form, remove the numerators. Let r and θ be polar coordinates of the point P(x, y) that corresponds to a non-zero complex number z = x + iy . This video demonstrates by example how to convert a vector in polar form to component for and how to convert a vector in component form to polar form. Convert this equation from rectangular to polar form x^2 - y^2 = 7. We have also transformed polar equations to rectangular equations and vice versa. Convert to Polar Coordinates (0,-5) Convert from rectangular coordinates to polar coordinates using the conversion formulas. x = r cos θ. The form z = a + b i is called the rectangular coordinate form of a complex number. Polar to Rectangular Form Conversion Calculator C++ program to convert polar form to rectangular form using constructor in destination class. Help would be appreciated :) How many pieces of wire did the contractor have? Not only can we convert complex numbers that are in exponential form easily into polar form such as: 2e j30 = 2∠30, 10e j120 = 10∠120 or -6e j90 = -6∠90, but Euler's identity also gives us a way of converting a complex number from its exponential form into its rectangular form. to convert from cartesian to polar form that is (x, y) → (r,θ) using ∙ xr = √x2 + y2 ∙ xθ = tan−1(y x) For the phase, we plug the numbers into the formula, giving us for the phase, phase= arctan(75/31)= 67.54°. The polar form or trigonometric form of a complex number P is z = r (cos θ + i sin θ) The value "r" represents the absolute value or … To convert from one to the other we will use this triangle: To Convert from Cartesian to Polar When we know a point in Cartesian Coordinates (x,y) and we want it in Polar Coordinates (r, θ ) we solve a right triangle with two known sides . 
Click here to get an answer to your question ️ convert-1+i into polar form sakshirai87 sakshirai87 22.10.2019 Math Secondary School Convert-1+i into polar form 1 … How do you graph #-3.12 - 4.64i#? However, I would like to convert it to polar form with this kind of arrangements. So i dont have to separate all my polar complex numbers similar to the complex(a,b) function but treats the complex number in rectangular form. The polar form or trigonometric form of a complex number P is z = r (cos θ + i sin θ) The value "r" represents the absolute value or … To convert to polar form, we need to find the magnitude of the vector, , and the angle it forms with the positive -axis going counterclockwise, or . Ask your question. In general, you can skip the multiplication sign, so `5x` is equivalent to `5*x`. Trigonometric substitution into original. Free Cartesian to Polar calculator - convert cartesian coordinates to polar step by step This website uses cookies to ensure you get the best experience. The horizontal axis is the real axis and the vertical axis is the imaginary axis. how to convert a complex number written in rectangular form to polar form We have the following conversion formulas for converting the polar coordinates (r,θ) (r, θ) into the corresponding Cartesian coordinates of the point, (a,b) (a, b). And that's all that's required in order to convert from rectangular form to polar form. In general, you can skip parentheses, but be very careful: e^3x is `e^3x`, and e^(3x) is `e^(3x)`. Also, what the ** is 8y = 8x? In this case the equation is manipulated to use the polar-rectangular relationship x = r cos θ. r = 6 1 cos θ. Applicable Polar-Rectangular relationship. 0.0. Convert to Polar Coordinates (0,-5) Convert from rectangular coordinates to polar coordinates using the conversion formulas. Thanks so much in advance. This finds the amplitude of the polar expression. Log in. 
Convert Equation from Rectangular to Polar Form Problems were equations in rectangular form are converted to polar form, using the relationship between polar and rectangular coordinates, are presented along with detailed solutions. Updated 13 Dec 2019. r: Distance from z to origin, i.e., φ: Counterclockwise angle measured from the positive x-axis to the line segment that joins z to the origin. How do I convert that into polar? Find the magnitude of the polar coordinate. Convert the given complex number in polar form: 3 + i. Convert to Polar Coordinates (-1,1) Convert from rectangular coordinates to polar coordinates using the conversion formulas . How do you convert complex numbers from standard form to polar form and vice versa? Now that we can convert complex numbers to polar form we will learn how to perform operations on complex numbers in polar form. Trig. singhmilkha866 singhmilkha866 19.08.2020 Math Secondary School +5 pts. So, to do this, we take the formula and plug in the values, giving us for the amplitude, We find the angle using trigonometric identities: Using a calculator, Rectangular to Exponential Form Conversion Calculator Exponential forms of numbers take on the format, re jθ, where r is the amplitude of the expression and θ is the phase of the expression.The amplitude r must be expressed in absolute value form. We find the angle using trigonometric identities: Using a calculator, by M. Bourne. Think of this as a different way of talking about the same point.. How do you graph #-3.12 - 4.64i#? 1. When we know a point in Cartesian Coordinates (x,y) and we want it in Polar Coordinates (r,θ) we solve a right triangle with two known sides. Replace and with the actual values. By using this website, you agree to our Cookie Policy. Finding Products of Complex Numbers in Polar Form. A Polar coordinate system is determined by a fixed point, a origin or pole, and a zero direction or axis. 
And that's all that's required in order to convert from exponential form to polar form. Have to multiple by 180/pi. polar form. $\begingroup$ "only the positive root should be considered" — are you sure? Get more help from Chegg. Convert 2xy = 1 to polar form. Show Instructions. The positive x-axis is normally treated as being 0o / 0 radians (or 360o / 2π radians). Given z = 3 + i Let polar form be z = r (cos + i sin) Log in. To embed this widget in a post, install the Wolfram|Alpha Widget Shortcode Plugin and copy and paste the shortcode above into the HTML source. For example, if you are doing AC circuit Usually, in electronics, polar forms are … So in situations like these, you will So, to do this, we take the formula and plug in the values, giving us, polar form= 20 111.73°. Convert the following equations into polar form. $$(\sqrt{3+i})^{50}\ \ \ \ \ \ \text{where }i^2=-1$$ Here first I tried by multiplying it with $2^25$ or $4^25$ Because we want it in that form but couldn't simplify it further. The idea is to find the modulus r and the argument θ of the complex number such that z = a + i b = r ( cos(θ) + i sin(θ) ) , Polar form z = a + ib = r e iθ, Exponential form Identify and Graph Polar Equations by Converting to Rectangular Equations We have learned how to convert rectangular coordinates to polar coordinates, and we have seen that the points are indeed the same. a = rcosθ b =rsinθ a = r cos θ b = r sin It is also important to understand how to convert from rectangular to polar coordinates. An easy to use calculator that converts a complex number to polar and exponential forms. I am having trouble converting polar form complex numbers into rectangular form by writing a MATLAB script file. r = 6 1 cos θ. Previous question Next question Transcribed Image Text from this Question. This is the result of the conversion to polar coordinates in form. 
Vector A has the magnitude of 12 m and is angled 60 degrees counterclockwise from the positive direction of the x axis of an xy plane. For the rest of this section, we will work with formulas developed by French mathematician Abraham de Moivre (1667-1754). In this section we will look at converting integrals (including dA) in Cartesian coordinates into Polar coordinates. For the rest of this section, we will work with formulas developed by French mathematician Abraham de Moivre (1667-1754). amplitude = √ 312 + 752 = 81.15. This rectangular to polar form conversion calculator converts a number in rectangular form to its equivalent value in Example 7 Convert the given complex number in polar form: 1 + i√3. A Phasor is a rotating vector in a complex number form which expresses the magnitude and its phase. Polar coordinates in the figure above: (3.6, 56.31) Polar coordinates can be … Convert a Complex Number to Polar Form Description Obtain the polar form of a complex number . Convert a Complex Number to Polar and Exponential Forms - Calculator. If the circle doesn't encircle the origin, in polar coordinates you need two values of $\rho$ for each valid $\theta$ except for the two extremal values. Convert the following into polar form: {y^2} = 8x {x^2} = {y^2} + 2 y + \\sqrt 3 x = 6 Let z1=-radical 2+radical 2i let z2=3radical 3+3i Now use polar form above to compute the quotient z1/z2. This essentially makes the polar, it makes it clearer how we get there in kind of a more, I guess you could say, polar mindset, and that's why this form of the complex number, writing it this way is called rectangular form, while writing it this way is called polar form. The conversion from polar coordinates to rectangular coordinates involves using the sine and cosine functions to find x and y. 4. can be analyzed in the frequency domain. like to be used in a function that accepts polar form arguments. The distance from (3,0) to the origin at (0,0) is 3. 
Show Step-by-step Solutions Show Step-by-step Solutions Since the only 2 components of a number expressed in rectangular form are the real number and the imaginary number, these are the only However, I would like to convert it to polar form with this kind of arrangements. Rectangular forms of numbers take on the format, rectangular number= x + jy, where x and y are numbers. Exponential to Polar Form Conversion Calculator, Rectangular to Exponential Form Conversion Calculator, Polar to Rectangular Form Conversion Calculator, Polar to Exponential Form Conversion Calculator, Exponential to Rectangular Form Conversion Calculator, Exponential to Polar Form Conversion Calculator. expression in the frequency domain. The x is the real number Help please! The conversion of complex numbers to polar co-ordinates are explained below with examples. ... A contractor cut 28 feet of wire into sections that were 4/ 2/3 feet long. Arrangement (in polar form) S11 S21 (S11 AND S22 IN SAME LINE) S12 S22(S12 AND S22 IN NEXT LINE) On the other hand, "[theta, rho] = cart2pol(real(a), imag(a))". Also vector b = 12i + 8j on that same corrdinate system. Polar amplitude= √ x2 + y2 , where x and y represent the real and imaginary numbers of the expression It also says how far I need to go, I need to go square root of 13. Every possible way deletes the r n I end up with a wrong answer. To find the phase of the polar form, the formula to do so is, phase= arctan(y/x), where y is the imaginary number and x is the real Convert z1 and z2 into polar form. identities. Find more Mathematics widgets in Wolfram|Alpha. Convert the equation to polar form. If you need to convert an integral from Cartesian to polar form, graph the domain using the Cartesian bounds and your knowledge of curves in the Cartesian domain. How do you convert #x=2# into polar form? Matlab keywords But need to go, I need to be used at... 
Components in AC circuit analysis the mathematical functions for complex numbers from standard form to polar coordinates using conversion. Circuit analysis equation into polar form to work with complex numbers deletes the r n I up! That the rectangular-polar substitutions can be used functions to find x and y not right rectangular-polar convert into polar form! Called the rectangular equation into polar form and vice versa use the method above., and a zero direction or axis talking about the same point.. convert into form. Real axis and the vertical axis is the polar form r. Powered by Create your unique. Be converted to a single object to be used in a function that accepts polar form conversion calculator converts complex. Co-Ordinates are explained below with examples numbers from standard form to rectangular r sin 4 a complex to... In radians operations on complex numbers from standard form to rectangular coordinates rectangular equations and vice versa number form expresses. Integral using rectangular coordinates to rectangular form a origin or pole, and a zero direction or axis co-ordinates! Below with examples: convert the polar form x is the result of the conversion of complex numbers polar... In precalculus by anonymous Finding Products of complex numbers to polar form or /. Let 's take the formula and plug in the time domain to its equivalent value in rectangular form constructor... From standard form to its equivalent polar form convert-3 into polar form in complex. Equations and vice versa the multiplication sign, so ` 5x ` equivalent! R n I end up with a wrong answer, giving us, polar forms of a complex number which. With complex numbers to polar form of a complex number in polar form method described above to derive the in... Numbers in polar form of non-zero complex numbers from standard form to polar ''... We find the angle we got is not right rectangular equations and vice versa in destination.! 
Converted to a single object to be used in calculations = 0 to polar form the... Prove my answers with complex numbers in polar form: x^2 - y^2 = 9 the imaginary number the. 8X=8Y so far, I got this: 8 * rsinθ But it 's wrong Cartesian ) and vice?! Will learn how to perform basic operations on complex numbers as vectors, we take the expression vice versa which! Without drawing vectors, we will learn how to perform basic operations on complex numbers drawing! Let 's take the expression in the time domain to its expression in rectangular form using constructor in class! Integral is set up, it may be solved exactly like an integral using rectangular coordinates polar! Y are numbers into its exponential form to polar form exponential form to polar form you can skip the sign... This calculator computes the phase in degrees, not in radians the bounds in polar form of a b! Derive the bounds in polar form convert complex numbers in polar form x^2 - y^2 =.... Equation y2 = 3 − 1 6 into polar form convert this into its equivalent polar form conversion converts. Wire into sections that were 4/ 2/3 feet long rewrite the polar form: 1 + √3 into... End up with a wrong answer rewrite the polar form arguments a single to. Equation to polar co-ordinates are explained below with examples about the same point.. into. Answers you need to be used in calculations also, what the * * is 8y = 8x without vectors! With customizable templates a calculator, to convert polar form to polar form let z1=-radical 2+radical let! ` 5x ` is equivalent to ` 5 * x ` rectangular-polar substitutions can be used a! Using rectangular coordinates so that the rectangular-polar substitutions can be used in a complex number possible! Vector in a complex number form which expresses the magnitude and its phase, or Phasor, forms complex! To prove my answers to its equivalent polar form contractor cut 28 feet of wire into sections were! In a+bi form, 31 + 75j so ` 5x ` is equivalent to 5. 
The multiplication sign, so ` 5x ` is equivalent to ` 5 * x ` remove the numerators axis! + 8j on that same corrdinate system ) how to convert it to polar and..., amplitude < phase need some kind of arrangements MATLAB script file sine cosine. + jy, where x and y are numbers remove the numerators MATLAB keywords But need go! Like these, you agree to our Cookie Policy in destination class conversion calculator converts a complex the! The bounds in polar form to polar coordinates ( 0, -5 ) convert from rectangular to polar and forms. With complex numbers to polar form numbers into their polar forms are used in express components in circuit...
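The rectangular-to-polar and polar-to-rectangular conversions above can be checked directly with Python's cmath module; the snippet below reuses the 31 + 75j example worked out earlier.

```python
import cmath
import math

# Rectangular -> polar: cmath.polar returns (r, theta), theta in radians.
z = complex(31, 75)
r, theta = cmath.polar(z)
print(f"amplitude = {r:.2f}")                        # 81.15
print(f"phase     = {math.degrees(theta):.2f} deg")  # 67.54

# Polar -> rectangular: cmath.rect(r, theta) rebuilds x + jy.
w = cmath.rect(r, theta)
print(f"rectangular = {w.real:.2f} + {w.imag:.2f}j")
```

Note that cmath.polar works in radians, matching the calculator remark above: convert with math.degrees when a phase in degrees is wanted.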
\begin{document}
\begin{frontmatter}
\title{Hitting time for Bessel processes---walk on moving spheres algorithm (WoMS)\thanksref{T1}} \runtitle{Hitting times for Bessel processes} \thankstext{T1}{Supported in part by the French Research National Agency (ANR) under the reference ANR-09-BLAN-0008-01.}
\begin{aug} \author[a]{\fnms{Madalina} \snm{Deaconu}\ead[label=e1]{[email protected]}} \and \author[b]{\fnms{Samuel} \snm{Herrmann}\corref{}\ead[label=e2]{[email protected]}} \runauthor{M. Deaconu and S. Herrmann} \affiliation{Inria and Universit{\'e} de Lorraine, and Universit{\'e} de Bourgogne}
\address[a]{Institut Elie Cartan \\ \quad de Nancy (IECN)---UMR 7502\\ \quad and Project team TOSCA\\ INRIA Nancy Grand-Est\\ Universit{\'e} de Lorraine\\
B.P. 70239\\ 54506 Vandoeuvre-l{\`e}s-Nancy Cedex\\ France\\ \printead{e1}}
\address[b]{Institut de Math{\'e}matiques\\ \quad de Bourgogne (IMB)---UMR 5584\\ Universit{\'e} de Bourgogne\\ B.P. 47 870\\ 21078 Dijon Cedex\\ France\\ \printead{e2}} \end{aug}
\received{\smonth{11} \syear{2011}} \revised{\smonth{10} \syear{2012}}
\begin{abstract} In this article we investigate the hitting time of some given boundaries for Bessel processes. The main motivation comes from mathematical finance when dealing with volatility models, but the results can also be used in optimal control problems. The aim here is to construct a new and efficient algorithm in order to approach this hitting time. As an application we will consider the hitting time of a given level for the Cox--Ingersoll--Ross process. The main tools we use are on one side, an adaptation of the method of images to this particular situation and on the other side, the connection that exists between Cox--Ingersoll--Ross processes and Bessel processes. \end{abstract}
\begin{keyword}[class=AMS] \kwd{65C20} \kwd{60K35} \kwd{60K30} \end{keyword}
\begin{keyword} \kwd{Bessel processes} \kwd{Cox--Ingersoll--Ross processes} \kwd{hitting time} \kwd{method of images} \kwd{numerical algorithm} \end{keyword}
\end{frontmatter}
\section{Introduction}
The aim of this paper is to study the hitting time of some curved boundaries for the Bessel process. Our main motivations come from mathematical finance, optimal control and neuroscience. In finance, Cox--Ingersoll--Ross (CIR) processes are widely used to model interest rates. As an application, in this article we will consider the simulation of the first hitting time of a given level for the CIR process by using its relation with the Bessel process. In neuroscience, the firing time of a neuron is usually modelled as the hitting time of a stochastic process associated with the membrane potential behavior; for the introduction of noise in neuron systems, see Part I Chapter 5 in \cite{gerstner}. The literature proposes different continuous stochastic models such as, for instance, the family of integrate-and-fire models; see Chapter 10 in \cite{ermentrout}. Most of them are related to the Ornstein--Uhlenbeck process, which appears in a natural way as an extension of Stein's model, a classical discrete model. In Feller's model, generalized Bessel processes appear as a more realistic alternative to the Ornstein--Uhlenbeck process; see, for instance, \cite{feller-mod} for a comparison of these models. Therefore the interspike interval, which is interpreted as the first passage time of the membrane potential through a given threshold, is closely related to the first hitting time of a curved boundary for some Bessel processes.
Our main results and the main algorithm are obtained for Bessel processes. Our numerical method relies on the explicit formula that we obtain for the hitting time of certain curved boundaries by the Bessel process and, when computing the hitting position, on the connection between a Bessel process and the Euclidean norm of a Brownian motion. As an application, we consider the hitting time of a given level for the Cox--Ingersoll--Ross process. To this end, we use, first, the connections between CIR processes and Bessel processes and, second, the method of images for this particular situation.
The study of Bessel processes and their hitting times occupies a large part of the mathematical literature. Let us mention only a few contributions: G{\"o}ing-Jaeschke and Yor~\cite{jaeschkeyor2003} consider a particular case of CIR processes which are connected with radial Ornstein--Uhlenbeck processes and their hitting times; L. Alili and P. Patie \cite{alilipatie2010} investigate, as a special case, Bessel processes via boundary crossing identities for diffusions having the time inversion property; recently, Salminen and Yor considered the hitting time of affine boundaries for the 3-dimensional Bessel process \cite{salminenyor2011}.
In a recent paper Hamana and Matsumoto \cite{hamanamatsumoto2011} gave explicit expressions for the distribution functions and the densities of the first hitting time of a given level for the Bessel process. Their results cover all the cases. Let us also mention a recent work of Byczkowski, Malecki and Ryznar \cite {byczkowskimaleckiryznar2011}. By using an integral formula for the density of the first hitting time of a given level of the Bessel process, they are able to obtain uniform estimates of the hitting time density function.
In all these papers the formulae are explicit but hard to use for numerical purposes, as they involve Bessel functions. The main idea of this paper is to get rid of this difficulty by using two important tools: first, the method of images, which allows us to obtain, for some particular boundaries, an explicit form of the density of the hitting time; and second, the connection between $\delta$-dimensional Bessel processes and the Euclidean norm of a $\delta$-dimensional Brownian motion, in order to simulate the exit position. By coupling these ingredients we construct a numerical algorithm that is easy to implement, efficient, and approximates the desired hitting time.
We will use here a modified version of the \textit{random walk on spheres} method, which was first introduced by Muller \cite{muller56} in 1956. This procedure allows us to solve a Dirichlet boundary value problem. The idea is to simulate iteratively, for the Brownian motion, the exit position from the largest sphere included in the domain and centered at the starting point. This exit position becomes the new starting point, and the procedure goes on until the exit point is close enough to the boundary. Let us notice that the simulation of the exit time from a sphere is numerically costly.
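For concreteness, the classical fixed-domain version of this procedure can be sketched in a few lines. The unit-square domain, the tolerance eps and the function name below are illustrative choices, not part of the algorithm developed in this paper; the sketch uses the fact that the exit position of Brownian motion from a ball centered at its starting point is uniformly distributed on the boundary sphere.

```python
import math
import random

def walk_on_spheres(x, y, eps=1e-4):
    """Sample an approximate Brownian exit point from the unit square.

    At each step, draw the exit position from the largest disk centered
    at the current point and contained in the domain; for Brownian
    motion this position is uniform on the circle.  Iterate until the
    current point is within eps of the boundary.
    """
    while True:
        r = min(x, 1.0 - x, y, 1.0 - y)   # distance to the boundary
        if r < eps:
            return x, y
        theta = 2.0 * math.pi * random.random()
        x += r * math.cos(theta)
        y += r * math.sin(theta)

random.seed(0)
px, py = walk_on_spheres(0.3, 0.5)
print(px, py)  # a point within eps of the boundary of [0, 1]^2
```

Only the exit position is produced here; recovering the exit time as well is what makes the problem treated in this paper harder.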
The method of images was introduced in 1969 by Daniels \cite{daniels1969} as a tool to construct nonlinear boundaries for which explicit calculations of the exit distribution of Brownian motion are possible. The method was further developed in Lerche~\cite{lerche1986}. We adapt this method to the Bessel process by using the explicit form of its density. For some particular curved boundaries we can then explicitly evaluate the density of the Bessel hitting time.
The paper is organized as follows. First we present some new results on hitting times for Bessel processes. Second, we construct the new algorithm for approximating the hitting time, the so-called \textit{walk on moving spheres algorithm}. Finally we present some numerical results and, as a particular application, the evaluation of the hitting time of a given level for the Cox--Ingersoll--Ross process.
\section{Hitting time for Bessel processes}
Bessel processes play an important role both in the study of stochastic processes like Brownian motion and in various theoretical and practical applications as, for example, in finance.
Let us consider the $\delta$-dimensional Bessel process starting from $y$, the solution of the following stochastic differential equation:
\begin{equation} \label{bessel-delta} \cases{ \displaystyle Z^{\delta,y}_t = Z^{\delta,y}_0+\frac{\delta-1}{2}\int_0^t \bigl(Z^{\delta,y}_s\bigr)^{-1}\, \mathrm{d} s + B_t,&\vspace*{2pt} \cr Z^{\delta,y}_0=y, \qquad y \geq0,} \end{equation}
where $(B_t)_{t\geq 0}$ is a one-dimensional Brownian motion. We denote
\begin{equation} \nu= \frac{\delta}{2}-1, \end{equation}
the \emph{index} of this process. We call $\delta$ the \emph{dimension} of the process. This terminology comes from the fact that, for a positive integer $\delta\in\mathbb{N}$, a $\delta$-dimensional Bessel process can be represented as the Euclidean norm of a $\delta$-dimensional Brownian motion. This will be a key point in our numerical method.
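For a positive integer $\delta$, this representation yields a simple simulation scheme that avoids discretizing the singular drift of (\ref{bessel-delta}): simulate a $\delta$-dimensional Brownian motion on a time grid and take its Euclidean norm. A minimal sketch (the grid size and horizon are arbitrary illustrative choices):

```python
import math
import random

def bessel_path(delta, t_max, n_steps):
    """Simulate a delta-dimensional Bessel path started at 0 as the
    Euclidean norm of a delta-dimensional Brownian motion."""
    dt = t_max / n_steps
    sd = math.sqrt(dt)
    b = [0.0] * delta                       # delta-dimensional BM
    path = [0.0]
    for _ in range(n_steps):
        b = [bi + random.gauss(0.0, sd) for bi in b]
        path.append(math.sqrt(sum(bi * bi for bi in b)))
    return path

random.seed(1)
z = bessel_path(delta=3, t_max=1.0, n_steps=1000)
print(z[-1])  # value of the Bessel process at time 1
```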
The density of this process starting from $y$ is given by
\begin{equation} \label{density} p_y(t,x)=\frac{x}{t}\biggl(\frac{x}{y} \biggr)^\nu\exp\biggl(- \frac{x^2+y^2}{2t}\biggr) I_\nu \biggl(\frac{xy}{t}\biggr)\qquad \mbox{for } t>0, y>0, x\geq0,\hspace*{-35pt} \end{equation}
where $I_\nu(z)$ is the modified Bessel function of the first kind, given by
\begin{equation} \label{bessel-function} I_\nu(z) = \sum_{n=0}^\infty \biggl( \frac{z}{2} \biggr) ^{\nu+2n} \frac{1}{n ! \Gamma(\nu+n+1)}. \end{equation}
When starting from $y=0$, the density of $Z^{\delta,0}_t$ is
\begin{equation} \label{density-bessel-0} p_0(t,x) = \frac{1}{2^{\nu}} \frac{1}{t^{\nu+1}}\frac{1}{\Gamma (\nu+1)} x^{\delta-1}\exp \biggl(- \frac{x^2}{2t} \biggr)\qquad\mbox{for } t>0, x\geq0. \end{equation}
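As a quick sanity check of this density: for $\delta=3$ and $t=1$, $Z^{3,0}_1$ is the Euclidean norm of a standard Gaussian vector in $\mathbb{R}^3$, whose expectation is $\sqrt{8/\pi}\approx 1.596$. A Monte Carlo comparison (the sample size is an arbitrary choice):

```python
import math
import random

random.seed(2)
n = 200_000
# Z_1 = |B_1| with B a 3-dimensional Brownian motion: three N(0, 1) draws.
mean = sum(
    math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3)))
    for _ in range(n)
) / n
print(mean)                       # Monte Carlo estimate
print(math.sqrt(8.0 / math.pi))  # exact value, about 1.5958
```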
\subsection{The method of images for Bessel processes} In this section, we investigate the first hitting time of a curved boundary for the Bessel process starting from the origin. Let $\psi(t)$ denote the boundary, and introduce the following stopping time:
\begin{equation} \label{taupsi} \tau_\psi=\inf\bigl\{ t> 0; Z^{\delta,0}_t \geq\psi(t)\bigr\}. \end{equation}
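For orientation, here is the naive grid-based estimate of $\tau_\psi$ that the algorithm developed below is designed to outperform: discretize a $\delta$-dimensional Brownian motion, take its norm, and record the first grid time at which the boundary is crossed. The constant boundary $\psi\equiv 1$ and the step size are illustrative choices; for $\delta=3$, hitting level 1 coincides with the exit of a 3-dimensional Brownian motion from the unit ball, whose expected exit time is $1/3$.

```python
import math
import random

def naive_hitting_time(delta, psi, dt=1e-3, t_max=50.0):
    """First grid time at which the norm of a delta-dimensional Brownian
    motion started at 0 reaches the boundary psi(t) (None if t_max is hit)."""
    b = [0.0] * delta
    t, sd = 0.0, math.sqrt(dt)
    while t < t_max:
        t += dt
        b = [bi + random.gauss(0.0, sd) for bi in b]
        if math.sqrt(sum(bi * bi for bi in b)) >= psi(t):
            return t
    return None

random.seed(3)
samples = [naive_hitting_time(3, lambda t: 1.0) for _ in range(500)]
print(sum(samples) / len(samples))  # close to the expected exit time 1/3
```

This baseline carries the usual discretization bias (the process may cross and return between grid points), which is precisely what the exact, method-of-images approach below avoids.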
For some suitable choice of the boundary, the distribution of $\tau_\psi $ can be explicitly computed. The idea is given by the following remark on the method of images (see, e.g., \cite{daniels1969} for the origin of this method and \cite{lerche1986} for a complete presentation):
\textit{Fundamental idea}. Suppose that $F$ is a positive $\sigma$-finite measure satisfying some integrability conditions (to be specified later on), and define
\begin{equation} u(t,x)=p_0(t,x)-\frac{1}{a}\int_{\mathbb{R}_+}p_y(t,x) F(\mathrm{d} y) \end{equation}
for some real constant $a>0$. Let
\[ \psi(t) =\inf\bigl\{ x\in\mathbb{R}; u(t,x) < 0\bigr\}\qquad \mbox{ for all } t>0. \]
Then $u(t,x)$ is a solution of the partial differential equation
\begin{equation} \label{PDEbessel} \cases{ \displaystyle\frac{\partial u}{\partial t}(t,x) = \frac{1}{2} \frac {\partial^2 u}{\partial x^2}(t,x) -\frac{\delta-1}{2} \frac {\partial}{\partial x} \biggl(\frac{1}{x} u(t,x) \biggr), &\quad $\mbox{on } \mathbb{R}_+\times\mathbb{R},$ \vspace*{2pt}\cr u\bigl(t,\psi(t)\bigr) = 0,& \quad$\mbox{for all } t>0,$ \vspace*{2pt}\cr u(0,\cdot)=\delta_0(\cdot), & \quad$\mbox{on } (-\infty, \psi(0+)].$}\hspace*{-35pt} \end{equation}
From this remark we deduce an interesting expression for the hitting time. We can prove that
\[ \tau_\psi=\inf\bigl\{t>0; u\bigl(t, Z^{\delta,0}_t \bigr)=0\bigr\}. \]
This simply means that, in order to obtain information on the hitting time, it suffices to detect the first time at which $u(t, Z^{\delta, 0}_t)$ vanishes.
Let us express this in a general result.
\begin{thmm} \label{thmgeneralsetting} Let $F(\mathrm{d} y)$ be a positive $\sigma$-finite measure such that $\int_0^\infty p_0(t, \sqrt{\varepsilon} y) F(\mathrm{d} y) <\infty$ for all $\varepsilon>0$. Let $a>0$ and define the function
\begin{equation} \label{generalu} u(t,x)=p_0(t,x)-\frac{1}{a}\int _{\mathbb{R}_+}p_y(t,x) F(\mathrm{d} y). \end{equation}
Consider $\psi(t)$ such that $u(t,\psi(t))=0$. Then the probability density function of $\tau_\psi$ is given by
\begin{eqnarray} \label{distributiontau} \qquad&&\mathbb{P}_0(\tau_\psi\in\mathrm{d} t) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad= \biggl[-\frac{1}{2} \frac
{\partial u}{\partial x}(t,x)\Big|
_{x=\psi(t)}+\frac{1}{2} \frac{\partial u}{\partial x}(t,x)\Big|_{x=0} - \frac
{\delta-1}{2x} u(t,x)\Big|_{x=0} \biggr]\, \mathrm{d} t. \end{eqnarray}
\end{thmm}
\begin{pf} We will only point out the main ideas of the proof, as it follows closely the method introduced in \cite{lerche1986}, where a complete description of the method and of this result for the Brownian motion case can be found.
Let us consider
\begin{equation} \label{function-u-1} u(t,x)= p_0(t,x)-\frac{1}{a} \int _{\mathbb{R_+}} p_y(t,x)F(\mathrm{d} y), \end{equation}
where $F(\mathrm{d} y)$ is a measure on $\mathbb{R}_+$, and let $\psi(t)$ be the solution of $u(t,\psi(t))=0$. Let us define as before $\tau_\psi= \inf\{ t\geq0; Z^{\delta,0}_t\geq \psi(t)\}$. Then $u(t,x)\,\mathrm{d} x=\mathbb{P}(\tau_\psi>t, Z^{\delta ,0}_t\in\mathrm{d} x)$ and
\begin{equation} \label{probahittingtime} \mathbb{P}_0(\tau_\psi> t) =\int _0^{\psi(t)} u(t,x)\,\mathrm{d} x. \end{equation}
In order to get the distribution of $\tau_\psi$ we have to evaluate the derivative of \mbox{$\mathbb{P}_0(\tau_\psi> t )$}. By using equality (\ref {probahittingtime}) we obtain
\begin{eqnarray}
&& \mathbb{P}_0(\tau_\psi\in \mathrm{d} t)\nonumber\\ &&\qquad= \biggl(-\psi'(t) u\bigl(t, \psi(t)\bigr) -\int _0^{\psi(t)} \frac{\partial u}{\partial t} (t,x) \,\mathrm{d} x \biggr)\, \mathrm{d} t \\ &&\qquad= \biggl(-\frac{1}{2}\int_0^{\psi(t)} \frac{\partial^2 u}{\partial x^2}(t,x)\,\mathrm{d} x +\frac{\delta-1}{2}\int_0^{\psi(t)} \frac{\partial}{\partial x} \biggl(\frac{1}{x} u(t,x) \biggr)\,\mathrm{d} x \biggr)\,\mathrm{d} t,\nonumber \end{eqnarray}
since $u(t,x)$ is a solution of the partial differential equation (\ref{PDEbessel}). We thus obtain
\begin{eqnarray} \label{probatau} && \mathbb{P}_0( \tau_\psi\in\mathrm{d} t) = \biggl( -\frac{1}{2}
\frac{\partial u}{\partial x}(t,x)\Big| _{x=\psi(t)}+\frac{\delta -1}{2\psi(t)} u\bigl(t,\psi(t) \bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{83pt}{}+ \biggl(\frac{1}{2} \frac{\partial u}{\partial x}(t,x) -
\frac{\delta-1}{2x} u(t,x) \biggr)\Big|_{x=0} \biggr)\,\mathrm{d} t. \end{eqnarray}
Since $\frac{\delta-1}{2\psi(t)} u(t,\psi(t))=0$, this ends the proof of the theorem. \end{pf}
The idea behind the method of images is that, for some particular forms of $F(\mathrm{d} y)$, we can derive explicit formulae for the hitting time distribution. More precisely:
\begin{proposition} \label{propfirstboundary} For $\delta=2\nu+2 > 0$ and $a > 0$, set
\[ \operatorname{Supp} (\tau_\psi)= \biggl[ 0, \biggl(\frac{a}{\Gamma(\nu+1)2^\nu} \biggr)^{{1}/{(\nu+1)}} \biggr]. \]
We define, for all $t\in \operatorname{Supp} (\tau_\psi)$, the function
\begin{equation} \label{eqajout} \psi_a(t)=\sqrt{2t\log\frac{a}{\Gamma(\nu+1) t^{\nu+1}2^\nu}}.
\end{equation}
Then the probability density of $\tau_\psi$ has its support in $\operatorname{Supp} (\tau_\psi)$ and is given by
\begin{equation} \label{densitetau} \mathbb{P}_0(\tau_\psi\in\mathrm{d} t) = \frac{1}{2at} \biggl( 2t \log \frac{a}{\Gamma(\nu+1)t^{\nu+1}2^\nu} \biggr)^ {\nu+1} \ind{\operatorname{Supp}(\tau _\psi)}(t)\,\mathrm{d} t. \end{equation}
\end{proposition}
\begin{pf} By using the expression in (\ref{density}) we remark first that
\begin{equation} \label{change-x-y} y^{2\nu+1} p_y(t,x)= x^{2\nu+1} p_x(t,y). \end{equation}
Let us consider, as in Theorem \ref{thmgeneralsetting},
\begin{equation} \label{function-u-2} u(t,x)= p_0(t,x)-\frac{1}{a} \int _{\mathbb{R_+}} p_y(t,x)F(\mathrm{d} y), \end{equation}
with $F(\mathrm{d} y) = y^{2\nu+1} \ind{\{y >0\}}\,\mathrm{d} y$. In this situation the function $u$ defined in \eqref{function-u-2} gives
\begin{eqnarray} \label{specific-F} u(t,x) &=& p_0(t,x) -\frac{1}{a} x^{2\nu+1} \nonumber \\[-8pt] \\[-8pt] \nonumber & =& \biggl( \frac{1}{2^{\nu}}\frac{1}{t^{\nu+1}}\frac{1}{\Gamma (\nu+1)}\exp \biggl(- \frac{x^2}{2t} \biggr)-\frac{1}{a} \biggr)x^{2\nu+1}. \end{eqnarray}
For simplicity we will write $\psi$ instead of $\psi_a$. Following Theorem~\ref{thmgeneralsetting}, we are looking for $\psi(t)$ such that $u(t,\psi(t))=0$. This yields
\begin{equation} \label{curve} x=\psi(t)=\sqrt{2t\log\frac{a}{\Gamma(\nu+1) t^{\nu+1}2^\nu}} \end{equation}
under the obvious condition $t^{\nu+1} \leq\frac{a}{\Gamma(\nu+1) 2^\nu }$.
We can now notice that
\[ p_0\bigl(t,\psi(t)\bigr) = \frac{1}{a}\bigl(\psi(t) \bigr)^{2\nu+1}, \]
and we can prove easily that
\[ \frac{\partial u}{\partial x} (t,x)=(\delta-1)\frac{u(t,x)}{x} -\frac{x}{t}p_0(t,x). \]
Substituting into (\ref{distributiontau}) and applying Theorem \ref{thmgeneralsetting} to this particular case, we obtain
\begin{eqnarray*}
\mathbb{P}_0(\tau_\psi\in \mathrm{d} t)&=&\frac{1}{2t} \psi(t) p_0\bigl(t,\psi(t)\bigr)\,\mathrm{d} t \\ & = &\frac{1}{2at} \psi^{2\nu+2} (t)\,\mathrm{d} t \\ &= &\frac{1}{2at} \biggl( 2t \log\frac{a}{\Gamma(\nu+1)t^{\nu +1}2^\nu} \biggr)^{\nu+1}\, \mathrm{d} t, \end{eqnarray*}
which gives the desired result. \end{pf}
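As a quick sanity check, the density \eqref{densitetau} can be integrated numerically over its support: the total mass should equal $1$ (for $\nu=0$ the density has an integrable logarithmic singularity at the origin, which a midpoint rule handles without difficulty). The following Python sketch uses arbitrary test values of $a$ and $\nu$; the function names are ours:

```python
import math

def density_tau(t, a, nu):
    # Density (densitetau): (2t L)^{nu+1} / (2at), with
    # L = log(a / (Gamma(nu+1) t^{nu+1} 2^nu)).
    g = math.gamma(nu + 1.0)
    log_term = math.log(a / (g * t**(nu + 1.0) * 2.0**nu))
    return (2.0 * t * log_term)**(nu + 1.0) / (2.0 * a * t)

def total_mass(a, nu, n=200_000):
    # Midpoint rule over Supp(tau_psi) = [0, (a/(Gamma(nu+1) 2^nu))^{1/(nu+1)}].
    t_max = (a / (math.gamma(nu + 1.0) * 2.0**nu))**(1.0 / (nu + 1.0))
    h = t_max / n
    return h * sum(density_tau((k + 0.5) * h, a, nu) for k in range(n))
```

For $\nu=0$ and $a=1$ the density reduces to $\log(1/t)$ on $[0,1]$, whose integral is indeed $1$.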
A second boundary for which explicit results can be derived is obtained by using the Markov property of the Bessel process.
\begin{proposition} \label{propsecondboundary} For $\delta=2\nu+2 > 0$, $s>0$ and $a >0$ fixed, denote
\[ \operatorname{Supp}(\tau_\psi) = \cases{ [0, +\infty), &\quad $\mbox{for } a\geq1,$ \vspace*{2pt}\cr \displaystyle\biggl[ 0, \frac{s}{( {1}/{a})^{{1}/{(\nu+1)}}-1} \biggr], & \quad$\mbox{for } 0<a<1.$}
\]
For $t\in \operatorname{Supp}(\tau_\psi)$ we define the function
\begin{equation} \label{secondboundary} \psi_a(t)=\sqrt{\frac{2t(t+s)}{s} \biggl[ ( \nu+1)\log \biggl(1+\frac {s}{t} \biggr) +\log a \biggr]}. \end{equation}
Then the probability density function of the hitting time $\tau_\psi$ is given by
\begin{eqnarray} \label{probatausecondboundary} &&\mathbb{P}_0(\tau_\psi\in\mathrm{d} t)\nonumber\\ &&\qquad= \frac{1}{\Gamma(\nu+1)}\frac{1}{t} \biggl(\frac {t+s}{s} \biggr)^{\nu} \biggl[\log \biggl( a \biggl(\frac{t+s}{t} \biggr)^{\nu +1} \biggr) \biggr]^{\nu+1} \\ &&\qquad\quad{}\times\exp \biggl[- \frac{t+s}{s}\log \biggl(a \biggl(\frac{t+s}{t} \biggr)^{\nu+1} \biggr) \biggr]\ind{\operatorname{Supp}(\tau_\psi)}(t) \,\mathrm{d} t.\nonumber \end{eqnarray}
\end{proposition}
\begin{pf} We will only sketch the proof as it follows the same ideas as the proof of Theorem \ref{thmgeneralsetting}. Let us consider the measure $F(\mathrm{d} y)= p_0(s,y)\,\mathrm{d} y$ for $s>0$ fixed. Then, when evaluating the corresponding $u(t,x)$, we have
\begin{eqnarray*}
u(t,x) & =& p_0(t,x)- \frac{1}{a}\int_{\mathbb{R}_+} p_0(s,y) p_y(t,x)\,\mathrm{d} y \\ & =& p_0(t,x)-\frac{1}{a} p_0(t+s,x) \\ & =& \frac{1}{2^\nu}\frac{1}{\Gamma(\nu+1)}x^{2\nu+1} \biggl[ \frac{1}{t^{\nu+1}} \exp \biggl( -\frac{x^2}{2t} \biggr) -\frac {1}{a} \frac{1}{(t+s)^{\nu+1}} \exp \biggl(-\frac {x^2}{2(t+s)} \biggr) \biggr], \end{eqnarray*}
by using the Markov property. We obtain the form of $\psi(t)$ from the condition $u(t,\psi(t))=0$, which gives
\begin{eqnarray} \psi(t) = \sqrt{\frac{2t(t+s)}{s} \biggl[ (\nu+1)\log \biggl(1+ \frac {s}{t} \biggr) +\log a \biggr]}, \nonumber \\[-8pt] \\[-8pt]
\eqntext{\cases{ \mbox{for } t\geq0 \mbox{ if } a\geq1, \vspace*{2pt}\cr \mbox{for } t\leq\displaystyle\frac{s}{({1}/{a})^{{1}/{(\nu+1)}}-1} \mbox{ if } a < 1.}} \end{eqnarray}
In order to obtain the distribution of $\tau_\psi$, one has only to evaluate
\begin{equation} \frac{\partial u}{\partial x} (t,x) = (\delta-1)\frac{u(t,x)}{x} -\frac{s}{t(t+s)} x p_0(t,x), \end{equation}
and $\frac{u(t,x)}{x}$ at $x=0$ and $x=\psi(t)$, and then to substitute these values into the general form (\ref{probatau}). The expression (\ref {probatausecondboundary}) follows. \end{pf}
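Here as well, one can check numerically that \eqref{probatausecondboundary} defines a probability density; in the case $0<a<1$ the support is compact, which makes the quadrature straightforward (a Python sketch with arbitrary test parameters):

```python
import math

def density_tau2(t, a, s, nu):
    # Density (probatausecondboundary) of tau_psi for the second boundary,
    # with M = log(a ((t+s)/t)^{nu+1}).
    m = math.log(a * ((t + s) / t)**(nu + 1.0))
    return (((t + s) / s)**nu * m**(nu + 1.0)
            * math.exp(-(t + s) / s * m)) / (math.gamma(nu + 1.0) * t)

def total_mass2(a, s, nu, n=200_000):
    # Midpoint rule over the compact support [0, s/((1/a)^{1/(nu+1)} - 1)]
    # (case 0 < a < 1).
    t_end = s / ((1.0 / a)**(1.0 / (nu + 1.0)) - 1.0)
    h = t_end / n
    return h * sum(density_tau2((k + 0.5) * h, a, s, nu) for k in range(n))
```

The boundary $\psi_a$ collapses to $0$ at the right endpoint of the support, so $\tau_\psi$ is almost surely finite and the total mass should be $1$.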
\begin{rem} We can notice that the function $\psi_a(t)$ defined by (\ref {secondboundary}) satisfies, for large times,
\[ \cases{ \psi_a(t)\simeq \sqrt{t}, &\quad$\mbox{for } a =1,$ \vspace*{2pt}\cr \psi_a(t)\simeq t, &\quad $\mbox{for all }a >1.$} \]
In particular, this kind of boundary is well suited to handling large time horizons. \end{rem}
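A small numerical illustration of this remark (the parameter values are arbitrary test choices):

```python
import math

def psi2(t, a, s, nu):
    # Second boundary (secondboundary):
    # sqrt(2t(t+s)/s * ((nu+1) log(1 + s/t) + log a)).
    return math.sqrt(2.0 * t * (t + s) / s
                     * ((nu + 1.0) * math.log(1.0 + s / t) + math.log(a)))
```

For $a=1$ the ratio $\psi_a(t)/\sqrt{t}$ approaches $\sqrt{2(\nu+1)}$ as $t\to\infty$, while for $a>1$ the ratio $\psi_a(t)/t$ approaches $\sqrt{(2/s)\log a}$.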
A new boundary can be obtained by using the Laplace transform of the square of the $\delta $-dimensional Bessel process starting from $x$. More precisely:
\begin{proposition} \label{propthirdboundary} Let, for $\delta=2\nu+2 > 0$, $\lambda>0$ and $a>0$ fixed,
\begin{equation} \label{psilaplace} \psi_a(t)=\frac{\lambda t}{1+2\lambda t}+ t\sqrt{ \biggl( \frac{\lambda}{1+2\lambda t} \biggr)^2 +\frac{2}{t} \log \frac{a(1+2\lambda t)^{\nu+1}}{2^\nu t^{\nu+1}\Gamma(\nu+1)}} \end{equation}
for all $t\in \operatorname{Supp}(\tau_\psi)$, where $\operatorname{Supp}(\tau_\psi)$ is defined by
\begin{equation} \operatorname{Supp}(\tau_\psi) = \cases{ \displaystyle \biggl[0, \frac{1}{(2^\nu\Gamma(\nu+1)/a)^{1/(\nu+1)}-2\lambda} \biggr], \vspace*{2pt}\cr \qquad \displaystyle\mbox{if } \lambda< \frac{1}{2} \biggl( \frac{2^\nu\Gamma(\nu+1)}{a} \biggr)^{{1}/{(\nu+1)}}, \vspace*{2pt}\cr [0, +\infty),\vspace*{2pt}\cr
\qquad \displaystyle\mbox{if } \lambda\geq\frac{1}{2} \biggl(\frac {2^\nu\Gamma(\nu+1)}{a} \biggr)^{{1}/{(\nu+1)}}.} \end{equation}
Then the probability density function of the hitting time is given by
\begin{eqnarray} \label{probatauthirdboundary} &&\mathbb{P}_0(\tau_\psi\in \mathrm{d} t) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad=\sqrt{ \biggl(\frac{\lambda }{1+2\lambda t} \biggr)^2 + \frac{2}{t} \log\frac{a(1+2\lambda t)^{\nu+1}}{2^\nu t^{\nu+1}\Gamma(\nu+1)}}p_0\bigl(t,\psi(t)\bigr)\ind {\operatorname{Supp}(\tau_\psi)}(t)\,\mathrm{d} t.\hspace*{-35pt} \end{eqnarray}
\end{proposition}
\begin{pf} We present only the main ideas as the result follows as above from the general method in Theorem \ref{thmgeneralsetting} applied to the measure $F(\mathrm{d} y) = y^{2\nu+1} e ^{-\lambda y^2}\ind{\{y\geq0\} }\,\mathrm{d} y$. For this measure $u(t,x)$ takes the form
\begin{eqnarray*}
u(t,x) & = & p_0(t,x)- \frac{1}{a}\int_{\mathbb{R}_+} p_y(t,x) F(\mathrm{d} y) \\ &=& p_0(t,x) -\frac{1}{a}\int_{\mathbb{R}_+} p_y(t,x)y^{2\nu+1} e ^{-\lambda y^2}\,\mathrm{d} y \\ &=& p_0(t,x) -\frac{1}{a}\int_{\mathbb{R}_+}x^{2\nu+1}p_x(t,y) e ^{-\lambda y^2}\,\mathrm{d} y \\ &=& p_0(t,x) -\frac{x^{2\nu+1}}{a} \mathbb{E} \bigl(e^{-\lambda Z_t^{\delta,x}} \bigr). \end{eqnarray*}
By using the expression of the Laplace transform for $Z_t^{\delta,x}$ we obtain
\begin{equation} \label{TLboundary} u(t,x) = p_0(t,x) -\frac{x^{2\nu+1} }{a} \frac{1}{(1+2\lambda t)^{\delta/2}}\exp \biggl(-\frac{\lambda x}{1+2\lambda t} \biggr). \end{equation}
We consider first the equality $u(t,\psi(t))=0$ in (\ref {TLboundary}), and this gives the form of $\psi(t)$ in (\ref{psilaplace}). Afterwards, we can evaluate once again in this particular situation
\[ \frac{\partial u}{\partial x}(t,x) = (\delta-1)\frac{u(t,x)}{x} - \biggl( \frac{x}{t} -\frac{\lambda t}{1+2\lambda t} \biggr) p_0(t,x). \]
For this particular case, there is only one nonvanishing term in expression (\ref{distributiontau}) of $\mathbb{P}_0(\tau_\psi \in \mathrm{d} t)$, that is, the term $- (\frac{x}{t} -\frac{\lambda t}{1+2\lambda t} ) p_0(t,x)$ of $\frac{\partial u}{\partial x}(t,x)$ for $x=\psi(t)$, and this is exactly given by the right-hand side of formula (\ref {probatauthirdboundary}). \end{pf}
\begin{corollary} The previous results give, for $\delta=2$:
\begin{longlist}[(1)]
\item[(1)] for $a>0$, $0\leq t\leq a$ and $\psi(t)=\sqrt{2t\log \frac{a}{t}}$, the density of the hitting time $\tau_\psi$ is
\[ \label{densitetau1} \mathbb{P}_0(\tau_\psi\in\mathrm{d} t) = \frac{1}{a} \log \biggl( \frac{a}{t} \biggr)\ind{\{0\leq t\leq a \}}(t)\,\mathrm{d} t; \]
\item[(2)] for $s>0$, $a>0$, $0\leq t\leq\frac{sa}{1-a}$ and $\psi (t) = \sqrt{\frac{2t(t+s)}{s}\log (a\frac{t+s}{t} )}$, the probability density function of $\tau_\psi$ is given by
\begin{eqnarray*} \label{densitetau2} &&\mathbb{P}_0(\tau_\psi\in\mathrm{d} t)\\ &&\qquad = \frac{1}{t} \log \biggl( a \frac{t+s}{t} \biggr) \exp \biggl[- \frac{t+s}{s}\log \biggl( a\frac {t+s}{t} \biggr) \biggr]\ind{ \biggl\{0 \leq t\leq{sa}/{(1-a)} \biggr\}}(t) \,\mathrm{d} t; \end{eqnarray*}
\item[(3)] for $a>0$ and $\psi(t) =\frac{\lambda t}{1+2\lambda t}+t\sqrt{ (\frac{\lambda}{1+2\lambda t} )^2 +\frac {2}{t}\log\frac{a(1+2\lambda t)}{t}}$, for $t\in\break \operatorname{Supp}(\tau_\psi)$, where
\begin{equation} \operatorname{Supp}(\tau_\psi) = \cases{ [0, +\infty), &\quad $\displaystyle\mbox{if } \lambda\geq {\frac{1}{2a}},$ \vspace*{2pt}\cr \displaystyle\biggl[ 0, \frac{a}{1-2\lambda a} \biggr], &\quad $\displaystyle\mbox{if } \lambda< \frac{1}{2a},$} \end{equation}
the probability density function of $\tau_\psi$ is
\[ \label{densitetau3} \mathbb{P}_0(\tau_\psi\in\mathrm{d} t) = \sqrt{ \biggl(\frac{\lambda }{1+2\lambda t} \biggr)^2 +\frac{2}{t}\log \frac{a(1+2\lambda t)}{t}} p_0\bigl(t, \psi(t)\bigr)\ind{\operatorname{Supp}( \tau_\psi)}(t)\,\mathrm{d} t. \] \end{longlist}
\end{corollary}
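For $\delta=2$, the law in item (1) admits a remarkably simple sampler: $aUV$, with $U,V$ independent and uniform on $[0,1]$, has density $\frac{1}{a}\log(a/t)$ on $[0,a]$, which is exactly the density obtained by specializing \eqref{densitetau} to $\nu=0$. A Python sketch checking the first two moments, $a/4$ and $a^2/9$ (the function name is ours):

```python
import random

def sample_tau_delta2(a, rng=None):
    # Sample tau_psi for delta = 2 and psi_a(t) = sqrt(2 t log(a/t)):
    # a*U*V has density (1/a) log(a/t) on [0, a].
    rng = rng or random.Random()
    return a * rng.random() * rng.random()
```

Indeed $\int_0^a t\,\frac1a\log(a/t)\,\mathrm{d} t = a/4 = \mathbb{E}[aUV]$ and $\int_0^a t^2\,\frac1a\log(a/t)\,\mathrm{d} t = a^2/9 = \mathbb{E}[(aUV)^2]$.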
\subsection{Approximation of the first hitting time for Bessel processes starting from the origin}
In this section we will construct a stepwise procedure, the so-called \emph{random walk on moving spheres (WoMS)} algorithm, which allows us to approach the first time the standard Bessel process hits a given level $l>0$. Of course, this stopping time $\tau_l=\inf\{t> 0; Z^{\delta,x}_t=l\}$ can be characterized by its well-known Laplace transform computed by solving an eigenvalue problem. Indeed if $(Z^{\delta,x}_t, t\ge0)$ is the Bessel process starting from $x$, of index $\nu=\frac{\delta}{2}-1$, then for $\nu>0$ and $x\le l$, we get
\[ \mathbb{E}_x\bigl[e^{-\lambda\tau_l}\bigr]=\frac{x^{-\nu}I_\nu(x\sqrt{2\lambda })}{l^{-\nu}I_\nu(l\sqrt{2\lambda})}, x>0\quad \mbox{and} \quad\mathbb {E}_0 \bigl[ e^{-\lambda\tau_l} \bigr]= \frac{(l\sqrt{2\lambda})^\nu }{2^\nu\Gamma(\nu+1)}\frac{1}{I_\nu(l\sqrt{2\lambda})}. \]
Here $I_\nu$ denotes the modified Bessel function. This Laplace transform can be used to describe the following tail distribution: Ciesielski and Taylor \cite{Ciesielski-Taylor} proved that, for $\delta =2\nu+2\in\mathbb{N}^*$,
\[ \mathbb{P}_0(\tau_l>t)=\frac{1}{2^{\nu-1}\Gamma(\nu+1)} \sum _{k=1}^{\infty}\frac{j_{\nu,k}^{\nu-1}}{\mathcal{J}_{\nu+1}(j_{\nu ,k})} e^{(-{j_{\nu,k}^2}/{(2l^2)})t}, \]
where $\mathcal{J}_\nu$ is the Bessel function of the first kind, and $(j_{\nu,k})_{\nu,k}$ is the associated sequence of its positive zeros.
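For $\delta=2$ ($\nu=0$) this series can be evaluated directly. The Python sketch below keeps only the first five terms and hardcodes the zeros $j_{0,k}$ of $\mathcal{J}_0$ together with the values $\mathcal{J}_1(j_{0,k})$ (standard tabulated constants); the truncation is therefore accurate only for moderate times, say $t\gtrsim 0.2\,l^2$:

```python
import math

# First five positive zeros j_{0,k} of J_0 and the values J_1(j_{0,k});
# standard tabulated constants.
J0_ZEROS = [2.404826, 5.520078, 8.653728, 11.791534, 14.930918]
J1_AT_ZEROS = [0.519147, -0.340265, 0.271452, -0.232460, 0.206546]

def survival_delta2(t, l=1.0):
    # Truncated Ciesielski-Taylor series for P_0(tau_l > t), delta = 2:
    # 2 * sum_k exp(-j_{0,k}^2 t / (2 l^2)) / (j_{0,k} J_1(j_{0,k})).
    return 2.0 * sum(math.exp(-j * j * t / (2.0 * l * l)) / (j * v)
                     for j, v in zip(J0_ZEROS, J1_AT_ZEROS))
```

The terms alternate in sign (through $\mathcal{J}_1(j_{0,k})$) and decay rapidly in $k$ as soon as $t$ is bounded away from $0$.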
We are looking for a numerical approximation of the hitting time, and these formulae are not easy to handle; in particular, we cannot compute the inverse cumulative distribution function. The aim of this section is to construct a simple and efficient algorithm which neither inverts the Laplace transform nor relies on discretization schemes, the hitting times being unbounded. In a further step we will extend this procedure to the hitting times of time-dependent boundaries like straight lines, which are useful in the description of the hitting time of a given level for the CIR process (whose Laplace transform is unknown).
\subsubsection{\texorpdfstring{Hitting time of a given level for the Bessel process with positive integer dimension $\delta$} {Hitting time of a given level for the Bessel process with positive integer dimension delta}} \label{sect0-horiz} Let us consider $\delta$ independent one-dimensional Brownian motions $(B^{(k)}_t,t\ge0)$, $1\le k\le\delta$. Then the Bessel process of dimension $\delta$ starting from 0 satisfies the following property:
\[ \bigl(Z^{\delta,0}_t, t\ge0\bigr) \mbox{ has the same distribution as } \bigl(\sqrt{ \bigl(B^{(1)}_t \bigr)^2+\cdots+ \bigl(B^{(\delta)}_t \bigr)^2}, t\ge0 \bigr). \]
Let
\begin{equation} \label{tau-l} \tau_l = \inf\bigl\{ t\geq0; Z^{\delta,0}_t \geq l\bigr\}. \end{equation}
In particular, we can express $\tau_l$ by using the first time when the $\delta$-dimensional Brownian motion $\mathbf{B}=(B^{(1)},\ldots,B^{(\delta )})$ exits from the Euclidean ball $D$ centered at the origin with radius $l$. Approximating the exit time and the exit position of the 2-dimensional Brownian motion from a convex domain was already studied by Milstein \cite{Milstein-97}. He used the so-called random walk on spheres algorithm, which allows one to approach the exit location and the exit time through an efficient procedure. The exit position is straightforward to obtain (as it is uniformly distributed on the circle), while the exit time is much more difficult to approach. That is why we will construct an adaptation of this initial procedure in order to obtain nice and efficient results concerning the Bessel process exit time.
Let us introduce now our \emph{walk on moving spheres} ($\mathit{WoMS}$) algorithm. We first define a continuous function $\rho\dvtx \mathbb{R}^2\to \mathbb{R}_+$ which represents the distance to the boundary of $D$,
\begin{equation} \label{defderho} \rho(x)=\inf\bigl\{\Vert x-y\Vert; y\in D^{c}\bigr \}=l-\Vert x\Vert. \end{equation}
For any small enough parameter $\varepsilon>0$, we will denote by $D^\varepsilon$ the sphere centered at the origin with radius $l-\varepsilon$,
\begin{equation} \label{defDepsilon} D^{\varepsilon} = \{ x\in D; \Vert x\Vert\leq l-\varepsilon \} = \bigl\{ x\in D; \rho(x) \geq\varepsilon\bigr\}. \end{equation}
\rule{\textwidth}{0.5pt}\hypertarget{algA1}\\ \textbf{Algorithm (A1) for} $\bolds{\delta=2}$. Let us fix a parameter $0<\gamma<1$.\\ \textbf{Initialization:} Set $X(0)=(X_1(0),X_2(0))=0$, $\theta_0=0$, $\Theta_0=0$, $A_0=\gamma^2l^2 e/2$.\\ \textbf{First step:} Let $(U_1,V_1,W_1)$ be a vector of three independent random variables uniformly distributed on $[0,1]$. We set
\[ \cases{
\displaystyle\theta_1=A_0 U_1V_1,\qquad \Theta_1=\Theta_0+\theta_1,\vspace*{2pt}\cr \displaystyle X(1)^\intercal=\bigl(X_1(1),X_2(1)\bigr)^\intercal=X(0)^\intercal+\psi _{A_{0}}(\theta_1)\pmatrix{
\cos(2\pi W_1)\vspace*{2pt}\cr \sin(2\pi W_1)},}
\]
where
\begin{equation} \label{defdepsi} \psi_a(t)=\sqrt{2t\log\frac{a}{t}}, \qquad t\le a, a>0. \end{equation}
At the end of this step we set $A_1=\gamma^2\rho(X(1))^2 e/2$.\\ \textbf{The $\bolds{n}$th step:} While $X(n-1)\in D^\varepsilon$, simulate a vector $(U_n,V_n,W_n)$ of three independent random variables uniformly distributed on $[0,1]$ and define
\begin{equation}\label{eqalgo-n} \cases{\displaystyle \theta_n=A_{n-1} U_nV_n,\qquad \Theta_n=\Theta_{n-1}+\theta_n,\vspace*{2pt}\cr \displaystyle X(n)^\intercal=\bigl(X_1(n),X_2(n)\bigr)^\intercal=X(n-1)^\intercal+\psi _{A_{n-1}}(\theta_n)\pmatrix{\cos(2\pi W_n)\vspace*{2pt}\cr \sin(2\pi W_n)}.}\hspace*{-35pt} \end{equation}
At the end of this step we set $A_n=\gamma^2\rho(X(n))^2 e/2$.\\ When $X(n-1)\notin D^\varepsilon$ the algorithm is stopped: we set $\theta_n=0$, $\Theta_n=\Theta_{n-1}$ and $X(n)=X(n-1)$.\\ \textbf{Outcome:} The hitting time $\Theta_n$ and the exit position $X(n)$.\\ \rule{\textwidth}{0.5pt}
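A direct transcription of Algorithm (\hyperlink{algA1}{A1}) in Python may look as follows (a sketch: $l$, $\varepsilon$ and $\gamma$ are free inputs, the function name is ours, and each iteration consumes exactly three uniform random variables, as in the algorithm):

```python
import math
import random

def woms_a1(l=1.0, eps=1e-3, gamma=0.9, rng=None):
    """One run of Algorithm (A1): walk on moving spheres for delta = 2.
    Returns (Theta, X): the approximated hitting time of level l by the
    2-dimensional Bessel process started at 0, and the exit position."""
    rng = rng or random.Random()

    def psi(a, t):
        # Moving-sphere radius psi_a(t) = sqrt(2 t log(a/t)), 0 < t <= a.
        return math.sqrt(2.0 * t * math.log(a / t)) if t > 0.0 else 0.0

    x1 = x2 = 0.0      # X(n)
    theta_sum = 0.0    # Theta_n
    while l - math.hypot(x1, x2) > eps:          # while X(n-1) is in D^eps
        rho = l - math.hypot(x1, x2)             # distance to the boundary
        a = gamma**2 * rho**2 * math.e / 2.0     # A_{n-1}
        theta = a * rng.random() * rng.random()  # theta_n = A_{n-1} U_n V_n
        w = 2.0 * math.pi * rng.random()         # uniform exit direction
        r = psi(a, theta)
        x1 += r * math.cos(w)
        x2 += r * math.sin(w)
        theta_sum += theta
    return theta_sum, (x1, x2)
```

Since the step radius never exceeds $\gamma\rho(X(n-1))$, the chain stays inside $D$; averaging the returned times over many runs should be close to $\mathbb{E}_0[\tau_l]=l^2/2$ for $\delta=2$, up to a small bias of order $\varepsilon$.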
\begin{rem}
The WoMS algorithm describes a $D$-valued Markov chain $(X(n), n\ge0)$. Each step corresponds to an exit problem for the $2$-dimensional Brownian motion. If $X(n)=x$, then we focus our attention on the exit problem of the ball centered at $x$ and of radius $(\psi_a(t), t\ge0)$: the exit location corresponds to $X(n+1)$ and the exit time to $\theta_{n+1}$. Of course the choice of the parameter $a$ plays a crucial role since the moving sphere has to belong to the domain $D$ as time elapses. When the Markov chain $X$ is close to the boundary $\partial D$, we stop the algorithm and obtain therefore a good approximation of the exit problem of $D$.
Comparison with the classical ($\mathit{WoS}$) algorithm: the $n$th step of the classical walk on spheres ($\mathit{WoS}$) is based on the exit location and exit time, which are mutually independent, for the Brownian paths exiting from a ball centered at $X(n-1)$ and with radius $\gamma\rho(X(n-1))$. The exit location is uniformly distributed on the sphere while the exit time is characterized by its Laplace transform. Therefore, if one knows $X(n-1)$, then the diameter of the sphere is deterministic. For the $\mathit{WoMS}$ the center of the ball is also $X(n-1)$, but the radius is random, smaller than $\gamma\rho (X(n-1))$. The exit location is again uniformly distributed on the sphere, but the exit time is much easier to simulate: in particular, one does not need to evaluate the Bessel functions. \end{rem}
The stochastic process $(X(n), n\ge0)$ is a homogeneous Markov chain stopped at the first time it exits from $D^\varepsilon$. In the following, we shall denote by $N^\varepsilon$ this stopping time, which represents in fact the number of steps of the algorithm:
\[ N^\varepsilon=\inf\bigl\{n\ge0; X(n)\notin D^\varepsilon\bigr\}. \]
We just notice that $X(N^\varepsilon)\notin D^\varepsilon$ by its definition.\eject
Algorithm (\hyperlink{algA1}{A1}) is presented in the $2$-dimensional case. Of course we can construct a generalization of this procedure for the $\delta$-dimensional Bessel process when $\delta\in \mathbb{N}^*$. For notational simplicity, we use a slightly different method: instead of dealing with a Markov chain $(X(n), n\in\mathbb{N})$ living in $\mathbb{R}^\delta$ we shall consider its squared norm, which is also (surprisingly) a Markov chain. At each step, we shall construct a couple of random variables $(\xi_n, \chi(n))$ associated to an exit problem: the first coordinate corresponds to an exit time and the second one to the squared norm of the exit location.
We introduce some notation: $\mathcal{S}^\delta$ represents the unit ball in $\mathbb{R}^\delta$ and $\pi_1\dvtx \mathbb{R}^\delta\to\mathbb{R}$ the projection on the first coordinate.
\noindent\rule{\textwidth}{0.5pt}\hypertarget{algA2}\\ \textbf{Algorithm (A2).} Let us fix a parameter $0<\gamma<1$.\\ \textbf{Initialization:} Set $\chi(0)=0$, $\xi_0=0$, $\Xi_0=0$, $A_0=(\gamma^2l^2 e/(\nu+1))^{\nu+1}\frac{\Gamma(\nu+1)}{2}$.\\ \textbf{The $\bolds{n}$th step:} While $\sqrt{\chi(n-1)}<l-\varepsilon$, we choose $U_n$ a uniformly distributed random vector on $[0,1]^{\lfloor\nu\rfloor+2}$, $G_n$ a standard Gaussian random variable and $V_n$ a uniformly distributed random vector on $\mathcal {S}^\delta$, with $U_n$, $G_n$ and $V_n$ independent. We set
\begin{eqnarray}\label{eqdefalgo} \cases{
\displaystyle\xi_n=\biggl(\frac{A_{n-1}}{\Gamma(\nu+1)2^\nu} U_n(1)\cdots U_n\bigl(\lfloor \nu\rfloor+2\bigr)\biggr)^{{1}/{(\nu+1)}}\exp\biggl\{-\frac{\nu-\lfloor\nu \rfloor}{\nu+1}G_n^2\biggr\}, \vspace*{2pt}\cr \qquad\Xi_n=\Xi_{n-1}+\xi_n,\vspace*{2pt}\cr \displaystyle\chi(n)=\chi(n-1)+2\pi_1(V_n)\sqrt{\chi(n-1)}\psi_{A_{n-1}}(\xi_n)+\psi ^2_{A_{n-1}}(\xi_n),}\hspace*{-35pt}
\end{eqnarray}
where
\begin{eqnarray} \label{defdepsi-new} \psi_a(t)&=&\sqrt{2t\log\frac{a}{\Gamma(\nu+1)t^{\nu+1}2^\nu}}, \nonumber \\[-8pt] \\[-8pt] \nonumber t&\le& t_{\mathrm{max}}(a):=\left[\frac{a}{\Gamma(\nu+1)2^\nu}\right]^{{1}/{(\nu +1)}}, \qquad a>0. \end{eqnarray}
At the end of this step we set
\[ A_n=\bigl(\gamma^2\bigl(l-\sqrt{\chi(n)}\bigr)^2 e/(\nu+1)\bigr)^{\nu+1}\frac{\Gamma (\nu+1)}{2}. \]
When $\sqrt{\chi(n)}\ge l-\varepsilon$ the algorithm is stopped: we then set $\xi_n=0$, $\Xi_n=\Xi_{n-1}$ and $\chi(n)=\chi(n-1)$.\\ \textbf{Outcome:} The hitting time $\Xi_n$ and the value of the Markov chain $\chi(n)$.\\ \rule{\textwidth}{0.5pt}
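A Python transcription of Algorithm (\hyperlink{algA2}{A2}) for integer $\delta$ may look as follows (a sketch; the function name is ours, and $\pi_1(V_n)$ is obtained by normalizing a standard Gaussian vector, a classical way to sample uniformly on $\mathcal{S}^\delta$):

```python
import math
import random

def woms_a2(delta=3, l=1.0, eps=1e-3, gamma=0.9, rng=None):
    """One run of Algorithm (A2) for integer delta: returns (Xi, sqrt(chi)),
    the approximated hitting time of level l by the delta-dimensional
    Bessel process started at 0, and the final radius."""
    rng = rng or random.Random()
    nu = delta / 2.0 - 1.0
    g1 = math.gamma(nu + 1.0)
    frac = nu - math.floor(nu)      # equals 0 or 1/2 for integer delta

    def psi(a, t):
        # psi_a(t) = sqrt(2 t log(a / (Gamma(nu+1) t^{nu+1} 2^nu))).
        return math.sqrt(2.0 * t
                         * math.log(a / (g1 * t**(nu + 1.0) * 2.0**nu)))

    chi = 0.0       # chi(n), distributed as |X(n)|^2
    xi_sum = 0.0    # Xi_n
    while l - math.sqrt(chi) > eps:
        a = ((gamma**2 * (l - math.sqrt(chi))**2 * math.e
              / (nu + 1.0))**(nu + 1.0) * g1 / 2.0)       # A_{n-1}
        prod_u = 1.0
        for _ in range(math.floor(nu) + 2):               # floor(nu)+2 uniforms
            prod_u *= rng.random()
        gauss = rng.gauss(0.0, 1.0)
        xi = ((a / (g1 * 2.0**nu) * prod_u)**(1.0 / (nu + 1.0))
              * math.exp(-frac / (nu + 1.0) * gauss**2))  # xi_n
        # pi_1(V_n): first coordinate of a uniform point on the unit sphere.
        gvec = [rng.gauss(0.0, 1.0) for _ in range(delta)]
        p1 = gvec[0] / math.sqrt(sum(g * g for g in gvec))
        r = psi(a, xi) if xi > 0.0 else 0.0
        chi = chi + 2.0 * p1 * math.sqrt(chi) * r + r * r
        xi_sum += xi
    return xi_sum, math.sqrt(chi)
```

For $\delta=3$ and $l=1$, the average of the returned times should be close to $\mathbb{E}_0[\tau_l]=l^2/\delta=1/3$, up to a small bias of order $\varepsilon$.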
It is obvious that for the particular dimension $\delta=2$, that is, $\nu=0$, the stopping times obtained by Algorithms (\hyperlink{algA1}{A1}) and (\hyperlink{algA2}{A2}) have the same distribution. Moreover, for each $n$, $\chi(n)$ has the same distribution as $\Vert X(n) \Vert^2$. In other words, the numbers of steps of (\hyperlink{algA1}{A1}) and (\hyperlink{algA2}{A2}) are identical in law, and the number of steps will be denoted in both cases by $N^\varepsilon$.
\begin{thmm} \label{thmalgo} Set $\delta\in\mathbb{N}^*$. The number of steps $N^\varepsilon$ of the Algorithm $\mathit{WoMS}$ \textup{(\hyperlink{algA2}{A2})} is almost surely finite. Moreover, there exist constants $C_\delta>0$ and $\varepsilon_0(\delta)>0$, such that
\[
\mathbb{E}\bigl[N^\varepsilon\bigr]\le C_\delta|\log\varepsilon| \qquad\mbox{for all } \varepsilon\le\varepsilon_0(\delta). \]
\end{thmm}
\begin{thmm} \label{thmalgo-conv} Set $\delta\in\mathbb{N}^*$. As $\varepsilon$ goes to zero, $\Xi_{N^\varepsilon}$ converges in distribution toward $\tau_l$, the hitting time of the $\delta$-dimensional Bessel process (with cumulative distribution function $F$), which is almost surely finite. Moreover, for any $\alpha>0$ small enough,
\begin{equation} \label{eqthmencadr} \biggl(1-\frac{\varepsilon}{\sqrt{2\alpha\pi}} \biggr)F^\varepsilon(t-\alpha) \le F(t)\le F^\varepsilon(t)\qquad\mbox{for all } t>0, \end{equation}
where $F^\varepsilon(t):=\mathbb{P}(\Xi_{N^\varepsilon}\le t)$. \end{thmm}
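This convergence can be illustrated numerically by comparing the empirical distribution of $\Xi_{N^\varepsilon}$ with the Ciesielski--Taylor series for $\delta=2$ (a self-contained Python sketch; the five Bessel-zero constants are standard tabulated values, and the function names are ours):

```python
import math
import random

def woms_hit_time(l=1.0, eps=1e-3, gamma=0.9, rng=None):
    """One run of Algorithm (A1) for delta = 2; returns Xi_{N^eps}."""
    rng = rng or random.Random()
    x1 = x2 = 0.0
    total = 0.0
    while l - math.hypot(x1, x2) > eps:
        rho = l - math.hypot(x1, x2)
        a = gamma**2 * rho**2 * math.e / 2.0
        th = a * rng.random() * rng.random()
        r = math.sqrt(2.0 * th * math.log(a / th)) if th > 0.0 else 0.0
        w = 2.0 * math.pi * rng.random()
        x1 += r * math.cos(w)
        x2 += r * math.sin(w)
        total += th
    return total

def survival_series(t, l=1.0):
    """P_0(tau_l > t) for delta = 2: five terms of the Ciesielski-Taylor
    series, with tabulated zeros j_{0,k} of J_0 and values J_1(j_{0,k})."""
    zeros = [2.404826, 5.520078, 8.653728, 11.791534, 14.930918]
    j1 = [0.519147, -0.340265, 0.271452, -0.232460, 0.206546]
    return 2.0 * sum(math.exp(-z * z * t / (2.0 * l * l)) / (z * v)
                     for z, v in zip(zeros, j1))
```

Over a few thousand runs, the proportion of simulated times below a fixed $t$ should match $1-\mathbb{P}_0(\tau_l>t)$ up to the statistical error and the $\varepsilon$-bias of the theorem.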
These results and the key ideas of the proofs are adapted from the classical random walk on spheres ($\mathit{WoS}$); see \cite{Milstein-97}.
\begin{pf*}{Proof of Theorem \ref{thmalgo}} \emph{Step} 1. Let us estimate the number of steps. Since $(\chi(n), n\ge0)$ is a homogeneous Markov chain, we introduce the operator $P_xf$ defined, for any nonnegative function $f\dvtx \mathbb{R}_+\to\mathbb{R}_+$, by
\[ P_xf:=\int_{\mathbb{R}_+}f(y)\mathbb{P}(x, \mathrm{d} y), \]
where $\mathbb{P}(x,\mathrm{d} y)$ is the transition probability of the Markov chain. By definition, $\chi(n+1)$ depends only on $\chi(n)$, $V_n$ and $\xi _n$. Let us note that, by construction, $V_n$ and $\xi_n$ are independent. Moreover using the result developed in the \hyperref[app]{Appendix}, the density of $\xi_n (\frac{2^\nu \Gamma(\nu+1)}{A_{n-1}} )^{{1}/{(\nu+1)}}$ is given by
\begin{equation} \label{eqdens} \mu(r)=\frac{(\nu+1)^{\nu+2}}{\Gamma(\nu+2)} r^\nu (-\log r )^{\nu+1}\ind{[0,1]}(r). \end{equation}
If we denote $\sigma^d$, the uniform surface measure on the unit sphere in $\mathbb{R}^d$, we get
\begin{equation} \label{eqpxf} \qquad P_xf=\int_{0}^1\int _{\mathcal{S}^\delta}f \bigl(x+2\pi_1(u)\sqrt {x}K(x,r)+K^2(x,r) \bigr)\mu(r)\,\mathrm{d} r\, \sigma^\delta(\mathrm{d} u), \end{equation}
with $K(x,r)$ defined by
\begin{equation} \label{eqK} K(x,r)=\psi_{A} \biggl( \biggl[\frac{A}{2^\nu\Gamma(\nu+1)} \biggr]^{ {1}/{(\nu+1)}}r \biggr), \end{equation}
and $A$ depending on $x$ in the following way:
\[ A= \biggl(\frac{\gamma^2(l-\sqrt{x})^2 e}{\nu+1} \biggr)^{\nu+1}\frac{\Gamma (\nu+1)}{2}. \]
We can observe the following scaling property: $\psi_A(A^{{1}/{(\nu +1)}}t)=A^{{1}/{(2\nu+2)}}\psi_1(t)$. Therefore the definition of $\psi _1$ leads to
\begin{equation} \label{eqdefK}\quad K(x,r)=\gamma(l-\sqrt{x})\sqrt{\frac{er}{\nu+1}\log \frac{1}{r^{\nu +1}}}=\gamma(l-\sqrt{x})\sqrt{er(-\log r)}. \end{equation}
\emph{Step} 2. Using classical potential theory for discrete time Markov chains (see, e.g., Theorem 4.2.3 in \cite{Norris-97}), we know that
\[ \phi(x)=\mathbb{E}_x \Biggl(\sum_{n=0}^{N^\varepsilon-1}g \bigl(\chi(n)\bigr) \Biggr) \]
satisfies, for any nonnegative function $g$,
\begin{equation} \label{eqequat} \cases{
\phi(x)=P_x \phi+g(x),&\quad $0\le x < (l-\varepsilon)^2,$ \vspace*{2pt}\cr \phi\bigl((l- \varepsilon)^2\bigr)=0.} \end{equation}
In particular, for $g=1$, we obtain that $\phi(x)=\mathbb {E}_x[N^\varepsilon]$. In order to get an upper bound for the average number of steps, it suffices to apply a comparison result. Let us first define the constant $C_\delta$,
\begin{equation} \label{Cdelta} C_\delta= \biggl(\frac{\nu+1}{\nu+2} \biggr)^{\nu+2} \frac{e}{\Gamma (\nu+2)}\frac{1}{2\delta}\sigma^{\delta}\bigl(S^\delta \bigr). \end{equation}
We choose the function
\begin{equation}\qquad U^\varepsilon(x)=\bigl\{\log\bigl((l-\sqrt{x})/\varepsilon\bigr)-\log(1- \gamma)\bigr\} /\bigl(C_\delta\gamma^2\bigr), \qquad 0\leq x < l^2, \end{equation}
which satisfies $U^\varepsilon(x)\ge P_xU^\varepsilon+1$, for all $0<x<(l-\varepsilon)^2$ (see Lemma \ref{lemineg-log} for the definition of the constant and for the inequality) and $U^\varepsilon (x)\ge0$ for all $0<x<(l-\varepsilon)^2$. A classical comparison result related to the potential theory (see, e.g., Theorem 4.2.3 in \cite{Norris-97}) implies that $\mathbb{E}_x[N^\varepsilon]\le U^\varepsilon(x)$ for all $x\in[0,(l-\varepsilon)^2]$ and consequently leads to the announced statement. \end{pf*}
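The distributional identity behind \eqref{eqdens} can itself be checked by simulation. For $\nu=1/2$ (that is, $\delta=3$), the variable $R=(U_1U_2)^{2/3}e^{-G^2/3}$ should have mean $\int_0^1 r\mu(r)\,\mathrm{d} r=(\frac{\nu+1}{\nu+2})^{\nu+2}=(3/5)^{5/2}$ (a Monte Carlo sketch; the function name is ours):

```python
import math
import random

def mean_of_r(n=200_000, seed=7):
    # Empirical mean of R = (U1*U2)^(2/3) * exp(-G^2/3), the case nu = 1/2
    # of the representation used in Algorithm (A2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += ((rng.random() * rng.random())**(2.0 / 3.0)
                  * math.exp(-rng.gauss(0.0, 1.0)**2 / 3.0))
    return total / n
```

Equivalently, $-\log R$ is Gamma distributed with shape $\nu+2$ and rate $\nu+1$, which is exactly the law induced by the density $\mu$.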
\begin{lemma} \label{lemineg-log} Let us define, for small $\varepsilon>0$, $U^\varepsilon(x)=\{\log((l-\sqrt{x})/\varepsilon)-\log(1-\gamma)\} /(C_\delta\gamma^2)$ for $x\in[0,l^2[$, where the constant $C_\delta$ is given by (\ref{Cdelta}) and $\gamma$ is the parameter in the definition of the $\mathit{WoMS}$. Then, for any $x\in]0,(l-\varepsilon)^2[$, the following inequality holds:
\[ P_xU^\varepsilon-U^\varepsilon(x)\le-1. \]
We recall that $P_xU^\varepsilon$ is defined by \eqref{eqpxf} and \eqref{eqdefK}. \end{lemma}
\begin{pf} We will split the proof into several steps.
\emph{Step} 1. First of all, we observe that $U^\varepsilon\ge-\log (1-\gamma)/(C_\delta\gamma^2)$ in the domain $[0,(l-\varepsilon)^2]$. Let us consider now $\chi(0)=x\in[0,(l-\varepsilon)^2]$ and $y$ in the support of the law of $\chi(1)$ and let us prove that $U^\varepsilon (y)\ge0$. By the definition of $\chi(1)$ we obtain
\[ \chi(1)\le\sup_{y\in[-1,1], t\in[0,t_\mathrm{max}(A)]} \bigl(x+2y\sqrt{x}\psi _{A}(t)+\psi^2_A(t)\bigr), \]
where $A= (\gamma^2(l-\sqrt{x})^2 e/(\nu+1) )^{\nu+1}\frac{\Gamma (\nu+1)}{2}$ and both $\psi_A$ and $t_\mathrm{max}$ are defined by~\eqref {defdepsi-new}. The right-hand side of the preceding inequality is increasing with respect to $y$ so that
\[ \chi(1)\le \Bigl(\sqrt{x}+\sup_{t\in[0,t_\mathrm{max}(A)]} \psi_A(t) \Bigr)^2. \]
Furthermore, for $a>0$ the maximum of the function $\psi_a$ is reached at the point $t_\mathrm{max}(a)/e=\frac{1}{e} ( \frac{a}{\Gamma(\nu+1)2^\nu} )^{{1}/{(\nu+1)}}$ and is equal to
\begin{equation} \label{eqmaxcalcul} \sup_{t\in[0,t_\mathrm{max}(a)]} \psi_a(t)= \biggl \{\frac{2(\nu+1)}{e} \biggl(\frac{a}{\Gamma(\nu+1)2^\nu} \biggr)^{{1}/{(\nu+1)}} \biggr \}^{1/2}. \end{equation}
Finally using the definition of $A$ and the inequality $x\le (l-\varepsilon)^2$, we find the following lower bound:
\[ l-\sqrt{\chi(1)}\ge(l-\sqrt{x}) (1-\gamma)\ge\varepsilon(1-\gamma). \]
We can therefore conclude that, for any $y$ in the support of the law of $\chi(1)$ (even for $y\ge(l-\varepsilon)^2$), $U^\varepsilon(y)\ge0$ which ensures that $U^\varepsilon$ is well defined and nonnegative in the domain of the operator $P_x$.
\emph{Step} 2. Furthermore the Taylor expansion yields
\begin{eqnarray} \label{eqtaylor}\qquad U^\varepsilon(y)\le U^\varepsilon(x)+\frac{\sqrt{x}-\sqrt{y}}{C_\delta\gamma^2 (l-\sqrt{x})}- \frac{(\sqrt{x}-\sqrt{y})^2}{2C_\delta\gamma^2(l-\sqrt {x})^2}+\frac{(\sqrt{x}-\sqrt{y})^3}{3C_\delta\gamma^2(l-\sqrt {x})^3}, \nonumber \\[-8pt] \\[-8pt] \eqntext{x, y\in[0,l^2[.} \end{eqnarray}
If $\chi(0)=x$ and $y$ is in the support of the random variable $\chi (1)$, then
\begin{eqnarray*} \sqrt{y}-\sqrt{x}&=&\sqrt{x+2\pi_1(u)\sqrt{x}K(x,r)+K^2(x,r)}- \sqrt{x} \\ &\ge&\pi_1(u)K(x,r). \end{eqnarray*}
By expansion \eqref{eqtaylor} and the definition of the operator $P_x$ given by \eqref{eqpxf}, the following upper-bound for the operator $P_x$ holds:
\begin{eqnarray*} P_xU^\varepsilon&=&\int_{0}^1 \int_{\mathcal{S}^\delta}U^\varepsilon \bigl(x+2\pi _1(u) \sqrt{x}K(x,r)+K^2(x,r) \bigr)\mu(r)\,\mathrm{d} r \,\sigma^\delta( \mathrm{d} u), \\ &\le& U^\varepsilon(x)-\int_{0}^1\int _{\mathcal{S}^\delta}\frac{\pi _1(u)K(x,r)}{C_\delta\gamma^2(l-\sqrt{x})} \mu(r)\,\mathrm{d} r \,\sigma^\delta ( \mathrm{d} u) \\ &&{} - \int_{0}^1\int_{S^\delta_+} \frac{\pi _1^2(u)K^2(x,r)}{2C_\delta\gamma^2(l-\sqrt{x})^2} \mu(r)\,\mathrm{d} r \,\sigma ^\delta(\mathrm{d} u) \\ &&{} -\int_{0}^1\int_{\mathcal{S}^\delta} \frac{\pi _1^3(u)K^3(x,r)}{3C_\delta\gamma^2(l-\sqrt{x})^3} \mu(r)\,\mathrm{d} r \,\sigma ^\delta(\mathrm{d} u), \end{eqnarray*}
where
\begin{equation} \label{Sdelta+} S^\delta_+: =\bigl\{u\in\mathcal{S}^\delta \dvtx \pi_1 (u) > 0\bigr\}. \end{equation}
Due to symmetry properties, the first and the third integral terms vanish. Then~\eqref{eqdefK} leads to
\[ P_xU^\varepsilon\le U^\varepsilon(x)- \frac{I}{C_\delta} \int_{S^\delta_+} \pi _1^2(u) \sigma^\delta(\mathrm{d} u) \]
with
\[ I=\frac{(\nu+1)^{\nu+2}e}{2\Gamma(\nu+2)}\int_0^1r^{\nu+1}(- \log r)^{\nu+2} \,\mathrm{d} r. \]
The description of the probability density function in the \hyperref[app]{Appendix} leads to the following explicit value:
\[ I= \biggl( \frac{\nu+1}{\nu+2} \biggr)^{\nu+2}\frac{e}{\Gamma(\nu+2)}. \]
In order to complete the proof, it suffices to choose the particular constant given by (\ref{Cdelta}) after noticing that
\[ \int_{S^\delta_+} \pi_1^2(u) \sigma^\delta(\mathrm{d} u)= \frac {1}{2\delta}\sigma^\delta \bigl(S^\delta\bigr). \] \upqed\end{pf}
\begin{pf*}{Proof of Theorem \ref{thmalgo-conv}} The proof is split into two parts. First, the steps of the algorithm and the hitting time of the Bessel process of index $\nu$ are related to stopping times of a $\delta$-dimensional Brownian motion ($\nu=\frac{\delta}{2}-1$). Second, we show that the corresponding stopping times are close to each other by evaluating deviations of the Brownian paths.
\emph{Step} 1. Let $\mathbf{B}=(B^{(1)},B^{(2)},\ldots,B^{(\delta)})$ be a $\delta$-dimensional Brownian motion. Then the norm of $\mathbf{B}$ has the same distribution as a Bessel process of index $\nu$; see, for instance, \cite{Revuz-Yor-99}. Hence the first hitting time $\tau_l$ is identical in law to the stopping time
\[ \mathbb{T}_l=\inf\{t\ge0; \mathbf{B}_t\notin D \}, \]
where $D$ is the Euclidean ball centered at the origin with radius $l$. We then introduce a procedure that approaches $\mathbb{T}_l$. For the first step we focus on the first exit time of a moving sphere centered at the origin with radius $\psi_a(t)$ defined by \eqref{defdepsi-new}; we denote this stopping time by $\hat{\xi}_1$. Of course this moving sphere should always stay in $D$, so we choose $a$ such that the maximum of $\psi_a$ stays smaller than $l$. By \eqref{eqmaxcalcul}, we get
\[ \sup_{t\le a}\psi_a(t)< l\quad\Longleftrightarrow\quad a< \frac{\Gamma(\nu +1)}{2} \biggl(\frac{el^2}{\nu+1} \biggr)^{\nu+1}. \]
For $a=A_0=\frac{\Gamma(\nu+1)}{2} (\frac{e\gamma^2l^2}{\nu+1} )^{\nu+1}$ with a parameter $\gamma<1$, the condition is satisfied: by \eqref{eqmaxcalcul}, $\sup_{t\le a}\psi_a(t)=\gamma l<l$. Let us describe the law of $(\hat{\xi}_1,\mathbf{B}_{\hat{\xi}_1})$. The norm of the Brownian motion is identical in law to the Bessel process; therefore Proposition~\ref{propfirstboundary} implies that the density function of $\hat{\xi}_1$ is given by \eqref{densitetau} with $a$ replaced by $A_0$. Using the law described in Proposition \ref{propappend}, we can prove that $\hat{\xi}_1$ has the same distribution as
\[ \biggl(\frac{ A_0}{\Gamma(\nu+1)2^\nu} \biggr)^{{1}/{(\nu+1)}} e^{-Z}, \]
where $Z$ is Gamma distributed with parameters\vspace*{-1.5pt} $\alpha=\nu+2$ and $\beta =\frac{1}{\nu+1}$. By construction we deduce that $\hat{\xi}_1 \stackrel{(d)}{=} \xi_1$, where $\xi_1$ is defined in Algorithm $\mathit{WoMS}$~(\hyperlink{algA2}{A2}). Knowing the stopping time, we can easily describe the exit location since the Brownian motion is rotationally invariant: $\mathbf{B}_{\hat{\xi}_1}$ is then uniformly distributed on the sphere of radius $\psi_{A_0}(\hat{\xi }_1)$. Hence
\[ \bigl(\hat{\xi}_1,\Vert\mathbf{B}_{\hat{\xi}_1}\Vert^2\bigr)\stackrel{\mathrm{(d)}} {=} \bigl(\xi _1,\chi(1)\bigr)\quad \mbox{and}\quad \hat{\xi}_1< \mathbb{T}_l. \]
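This Gamma representation can be sampled directly. The following Python sketch (an illustration, not part of the original text; \texttt{numpy} is assumed and the parameter values are arbitrary) draws $\hat{\xi}_1$ and exhibits the almost sure upper bound $(A_0/(\Gamma(\nu+1)2^\nu))^{1/(\nu+1)}$:

```python
import math
import numpy as np

def sample_xi1(nu, l, gamma, size, rng):
    """Sample xi_1 = (A0 / (Gamma(nu+1) 2^nu))^{1/(nu+1)} * exp(-Z) with
    Z ~ Gamma(shape = nu+2, scale = 1/(nu+1)) and A0 as in the text."""
    A0 = 0.5 * math.gamma(nu + 1) * (math.e * gamma**2 * l**2 / (nu + 1)) ** (nu + 1)
    # The prefactor simplifies to e * gamma^2 * l^2 / (2 * (nu + 1)).
    pref = (A0 / (math.gamma(nu + 1) * 2**nu)) ** (1.0 / (nu + 1))
    Z = rng.gamma(shape=nu + 2, scale=1.0 / (nu + 1), size=size)
    return pref, pref * np.exp(-Z)

rng = np.random.default_rng(0)
pref, xi = sample_xi1(nu=1.5, l=1.0, gamma=0.9, size=100_000, rng=rng)
```

Since $Z>0$ almost surely, every sample lies strictly below the prefactor, and the moment generating function of the Gamma law gives $\mathbb{E}[\hat{\xi}_1]$ as the prefactor times $((\nu+1)/(\nu+2))^{\nu+2}$.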
By this procedure we can construct a sequence of stopping times $(\hat {\xi}_n, n\ge1)$ and define $\hat{\Xi}_n=\hat{\xi}_1+\cdots+\hat{\xi }_n$; $\hat{\Xi}_n$ is the first time after $\hat{\Xi}_{n-1}$ at which the Brownian motion exits from a sphere centered at $\mathbf{B}_{\hat{\Xi }_{n-1}}$ with radius $\psi_{a_n}$ initialized at time $\hat{\Xi}_{n-1}$. See Figure \ref{fig1}. The moving sphere should stay in the domain $D$, so we choose
\[ a_n= \bigl(\gamma^2 \bigl(l-\Vert\mathbf{B}_{\hat{\Xi}_{n-1}}\Vert \bigr)^2 e/(\nu +1) \bigr)^{\nu+1}\frac{\Gamma(\nu+1)}{2}. \]
\begin{figure}
\caption{Walk on moving spheres.}
\label{fig1}
\end{figure}
Using the same arguments as before and by the Markov property for the Brownian motion, we obtain the identities in law
\[ (a_n, n\ge1)\stackrel{\mathrm{(d)}} {=} (A_n, n\ge1),\qquad \bigl(\hat{\Xi }_n, \Vert\mathbf{B}_{\hat{\Xi}_n}\Vert^2 \bigr)_{n\ge1} \stackrel{\mathrm{(d)}} {=} \bigl(\Xi_n, \chi(n) \bigr)_{n\ge1} \]
with $\hat{\Xi}_n<\mathbb{T}_l$ and $\Xi_n$, $A_n$, $\chi(n)$ defined in Algorithm $\mathit{WoMS}$ (\hyperlink{algA2}{A2}). Consequently, defining $\hat{N}^\varepsilon =\inf\{ n\ge0; \mathbf{B}_{\hat{\Xi}_n}\notin D^\varepsilon\}$, we obtain the identity
\begin{equation} \label{eqidenlaw}\bigl ( \hat{\Xi}_{\hat{N}^\varepsilon}, \Vert\mathbf{B}_{\hat{\Xi}_{\hat{N}^\varepsilon}} \Vert^2 \bigr)\stackrel{\mathrm{(d)}} {=} \bigl( \Xi_{N^\varepsilon}, \chi\bigl(N^\varepsilon \bigr) \bigr) \quad\mbox{and}\quad \hat{\Xi}_{\hat{N}^\varepsilon}<\mathbb{T}_l. \end{equation}
\emph{Step} 2. Let us now estimate the difference between $\hat{\Xi }_{\hat{N}^\varepsilon}$ and $\mathbb{T}_l$. By \eqref{eqidenlaw} we first deduce
\begin{equation} \label{equpper-bound} F(t):=\mathbb{P}(\tau_l\le t)=\mathbb{P}( \mathbb{T}_l\le t)\le F^\varepsilon (t):=\mathbb{P}( \Xi_{N^\varepsilon}\le t),\qquad t>0. \end{equation}
Furthermore, for any small $\alpha>0$,
\begin{eqnarray} \label{eqdecomp-under} 1-F(t)&=&\mathbb{P}( \mathbb{T}_l> t, \hat{ \Xi}_{\hat{N}^\varepsilon}\le t-\alpha)+\mathbb{P}( \mathbb{T}_l> t, \hat{ \Xi}_{\hat{N}^\varepsilon}> t-\alpha) \nonumber \\ &\le&\mathbb{P}( \mathbb{T}_l> t, \hat{\Xi}_{\hat{N}^\varepsilon}\le t- \alpha)+\mathbb{P}(\hat{\Xi}_{\hat{N}^\varepsilon}> t-\alpha) \\ &\le&\mathbb{P}( \mathbb{T}_l> t, \hat{\Xi}_{\hat{N}^\varepsilon}\le t- \alpha)+1-F^\varepsilon(t-\alpha).\nonumber \end{eqnarray}
At time $\hat{\Xi}_{\hat{N}^\varepsilon}$ the Brownian motion is in the $\varepsilon $-neighborhood of the boundary $\partial D$, hence $l - \Vert\mathbf{B}_{\hat{\Xi}_{\hat{N}^\varepsilon}}\Vert \le \varepsilon$. Using the strong Markov property, we obtain
\begin{equation} \label{eqinterm} \mathbb{P}( \mathbb{T}_l> t, \hat{ \Xi}_{\hat{N}^\varepsilon}\le t-\alpha )\le F^\varepsilon(t-\alpha)\sup _{y\in D\setminus D^\varepsilon}\mathbb{P}_y(\mathbb {T}_l> \alpha). \end{equation}
Since the Brownian motion is rotationally invariant, it suffices to choose $y=(l-\varepsilon,0,\ldots,0)$. Due to the convexity of $D$, the following upper bound holds:
\begin{equation} \label{eqlowboununif} \mathbb{P}_y(\mathbb{T}_l>\alpha) \le\mathbb{P}_0\Bigl(\sup_{0\le t\le\alpha } \overline{B}_t^{(1)}<\varepsilon\Bigr)=\mathbb{P}_0 \bigl(2\bigl\vert\overline{B}_\alpha ^{(1)}\bigr \vert<\varepsilon\bigr)\le \frac{\varepsilon}{\sqrt{2\alpha\pi}}. \end{equation}
Combining \eqref{equpper-bound} for the upper bound and \eqref {eqdecomp-under}, \eqref{eqinterm} and \eqref{eqlowboununif} for the lower bound yields the announced estimate \eqref{eqthmencadr}. \end{pf*}
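The last Gaussian estimate in \eqref{eqlowboununif} can be checked numerically: it follows from $\operatorname{erf}(u)\le 2u/\sqrt{\pi}$ and is sharp as $\varepsilon\to0$. The following Python sketch (an illustration only, standard library) computes $\mathbb{P}(2|\overline{B}_\alpha^{(1)}|<\varepsilon)$ via the error function and compares it with the bound $\varepsilon/\sqrt{2\alpha\pi}$:

```python
import math

def p_two_abs_less(eps, alpha):
    """P(2|B_alpha| < eps) for a standard Brownian motion at time alpha:
    B_alpha ~ N(0, alpha), so this equals erf(eps / (2 * sqrt(2 * alpha)))."""
    return math.erf(eps / (2.0 * math.sqrt(2.0 * alpha)))

def gaussian_bound(eps, alpha):
    """The bound eps / sqrt(2 * alpha * pi) used in the proof."""
    return eps / math.sqrt(2.0 * alpha * math.pi)

# Compare the exact probability with the bound on a small grid.
checks = [(p_two_abs_less(e, a), gaussian_bound(e, a))
          for e in (1e-3, 1e-2, 0.1, 1.0) for a in (1e-2, 0.1, 1.0, 10.0)]
```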
\subsubsection{\texorpdfstring{The first time the Bessel process of index $\nu$ hits a decreasing curved boundary}{The first time the Bessel process of index nu hits a decreasing curved boundary}} The algorithm developed in the previous paragraph can be\break adapted to the problem of hitting a decreasing curved boundary. Let us define
\begin{equation} \label{deftau-line}\qquad \tau=\inf\bigl\{t\ge0\dvtx Z_t^{\delta,0}=l(t) \bigr\}\qquad \mbox{where } l \mbox{ is decreasing and } l(0)>0. \end{equation}
\begin{assu}\label{assu} There exists a constant $\Delta_{{\mathrm{min}}}>0$ which bounds the derivative of $l$:
\[ l'(t)\ge-\Delta_\mathrm{min} \qquad\forall t\ge0. \]
\end{assu}
The procedure again consists in building a $\mathit{WoMS}$ which reaches a neighborhood of the boundary. But instead of dealing with a fixed boundary as in Section~\ref{sect0-horiz}, that is, a ball of radius $l$, we shall in this section introduce the following moving boundary: the ball centered at the origin with radius $l(t)$. The arguments developed in order to prove Theorems \ref{thmalgo} and \ref {thmalgo-conv} will be adapted to this new context.
\noindent\rule{\textwidth}{0.5pt}\hypertarget{algA3}\\ \textbf{Algorithm (A3):}\\ Let us define the following positive constants:
\begin{equation} \label{constante} L=\max\bigl(l(0),\Delta_{\mathrm{min}}, \sqrt{\nu+1}\bigr),\qquad \kappa=\frac {2^\nu}{5^{\nu+1}L^{2\nu+2}} \Gamma(\nu+1). \end{equation}
\textbf{Initialization:} Set $\chi(0)=0$, $\xi_0=0$, $\Xi_0=0$, $A_0=\kappa ( l(0)- \sqrt{\chi(0)})^{2(\nu+1)}$.\\ \mathversion{bold} \textbf{The $n$th step:}\mathversion{normal} While the condition
\[ l(\Xi_{n-1})-\sqrt{\chi(n-1)}>\varepsilon \]
[denoted by $\mathbb{C}(n-1)$] holds, we simulate $U_n$, a uniformly distributed random vector on $[0,1]^{\lfloor\nu\rfloor+2}$, $G_n$, a standard Gaussian random variable and $V_n$, a uniformly distributed random vector on $\mathcal {S}^\delta$. $U_n$, $G_n$ and $V_n$ have to be independent. We then construct $(\xi _n,\Xi_n,\chi(n))$ using \eqref{eqdefalgo}. At the end of this step we set $A_n=\kappa( l(\Xi_n)-\sqrt{\chi(n)})^{2(\nu+1)}$.\\ The algorithm stops when $\mathbb{C}(n-1)$ is no longer satisfied: we set $\xi_n=0$ and so $\Xi_n=\Xi_{n-1}$ and $\chi(n)=\chi(n-1)$.\\ \textbf{Outcome:} The exit position $\chi(n)$ and the exit time.\\ \rule{\textwidth}{0.5pt}
Let us note that the stochastic process $(\chi(n), n\ge0)$ is not a Markov chain since the sequence $(A_n)_{n\ge0}$ depends on both $\Xi_n$ and $\chi(n)$. That is why we define the following Markov chain:
\[ R_n:=\bigl(\Xi_n,\chi(n)\bigr)\in\mathbb{R}_+^2 \]
stopped at the first time the condition $\mathbb{C}(n)$ is not satisfied. In the following, we shall denote by $N^\varepsilon$ this stopping time (the number of steps of the algorithm):
\[ N^\varepsilon=\inf\bigl\{n\ge0; l(\Xi_n)-\sqrt{\chi(n)}\le\varepsilon\bigr\}. \]
\begin{thmm} \label{thmalgoline} The number of steps $N^\varepsilon$ of the Algorithm $\mathit{WoMS}$ \textup{(\hyperlink{algA3}{A3})} is almost surely finite. Moreover, there exist a constant $C_\delta>0$ and an $\varepsilon_0(\delta)>0$ such that
\[
\mathbb{E}\bigl[N^\varepsilon\bigr]\le C_\delta|\log\varepsilon|\qquad \mbox{for all } \varepsilon\le\varepsilon_0(\delta). \]
\end{thmm}
\begin{thmm} \label{thmalgo-convline} As $\varepsilon$ goes to zero, $\Xi_{N^\varepsilon}$ converges in distribution toward $\tau$ defined by \eqref{deftau-line} (with cumulative distribution function $F$), which is almost surely finite. Moreover, for any $\alpha>0$ small enough,
\begin{equation} \label{eqthmencadrline} \biggl(1-\frac{\varepsilon}{\sqrt{2\alpha\pi}} \biggr)F^\varepsilon(t- \alpha)\le F(t)\le F^\varepsilon(t)\qquad\mbox{for all } t>0, \end{equation}
where $F^\varepsilon(t):=\mathbb{P}(\Xi_{N^\varepsilon}\le t)$. \end{thmm}
\begin{pf*}{Proof of Theorem \ref{thmalgoline}} The proof is based mainly on arguments already presented for Theorem \ref{thmalgo}. We therefore leave the details to the reader and focus our attention on the main ideas.
(1) The process $(\Xi_n, \chi(n))$ is a homogeneous Markov chain and the associated operator is given by
\begin{equation} \label{eqnew-operator} P_{t,x}f:=\int_{(s,y)\in\mathbb{R}_+^2}f(s,y) \mathbb{P} \bigl((t,x),(\mathrm{d} s,\mathrm{d} y) \bigr), \end{equation}
where $f$ is a nonnegative function and $\mathbb{P} ((t,x),(\mathrm{d} s,\mathrm{d} y) )$ is the transition probability of the chain. The chain starts with $(\Xi_0,\chi(0))=(0,0)$ and is stopped at the first time when $l(\Xi_n)-\sqrt{\chi(n)}\le\varepsilon$. Classical potential theory ensures that
\[ \phi(t,x)=\mathbb{E}_{t,x} \Biggl( \sum_{n=0}^{N^\varepsilon-1}g \bigl(\Xi _n,\chi(n)\bigr) \Biggr) \]
is solution of the following equation:
\begin{eqnarray} \label{eqsysline} \cases{
\phi(t,x)=P_{t,x} \phi+g(t,x),&\quad $(t,x)\in D^\varepsilon,$ \vspace*{2pt}\cr \phi(t,x)=0, &\quad$\forall(t,x)\in\partial D^\varepsilon,$ } \end{eqnarray}
with $D^\varepsilon=\{(t,x)\in\mathbb{R}_+^2\dvtx l(t)-\sqrt{x}> \varepsilon\}$. For the particular choice $g=1$, we obtain $\phi(t,x)=\mathbb {E}_{t,x}[N^\varepsilon]$, and therefore the average number of steps is given by $\phi(0,0)$.
(2) In order to obtain an upper bound for the average number of steps, we use a comparison result: we are looking for a function $U(t,x)$ such that
\begin{eqnarray} \label{sysavec1} \cases{
U(t,x)\ge P_{t,x}U+1,&\quad $\forall(t,x)\in D^\varepsilon,$ \vspace*{2pt}\cr U(t,x)\ge0, &\quad $\forall(t,x)\in\partial D^\varepsilon.$ } \end{eqnarray}
For such a particular function, we can deduce $\phi(t,x)\le U(t,x)$. Let us define
\[ U(t,x)=c\log \biggl( \frac{l(t)-\sqrt{ x}}{\varepsilon} \biggr)1_{\{ l(t)-\sqrt{x} \ge0 \}}, \]
with some constant $c>0$ which shall be specified later on. The nonnegativity condition on the boundary $\partial D^\varepsilon$ is trivially satisfied. Moreover since $l$ is a decreasing function, \eqref {eqnew-operator} implies
\begin{eqnarray} \label{eqineg-inte} P_{t,x}U&=&\int_{(s,y)\in\mathbb{R}_+^2}U(s,y) \mathbb{P} \bigl((t,x),(\mathrm{d} s,\mathrm{d} y) \bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &\le& \int_{(s,y)\in\mathbb{R}_+^2}U(t,y) \mathbb{P} \bigl((t,x),(\mathrm{d} s,\mathrm{d} y) \bigr). \end{eqnarray}
By using the Taylor expansion, we get
\begin{eqnarray} \label{eqtaylor-} \qquad U(t,y)\le U(t,x)-c \frac{\sqrt{y}-\sqrt{x}}{l(t)-\sqrt{x}}-\frac {c}{2} \frac{(\sqrt{y}-\sqrt{x})^2}{(l(t)-\sqrt{x})^2}-\frac{c}{3}\frac {(\sqrt{y}-\sqrt{x})^3}{(l(t)-\sqrt{x})^3}, \nonumber \\[-8pt] \\[-8pt] \eqntext{(x,y)\in \mathbb{R}_+^2.} \end{eqnarray}
Using arguments and bounds similar to those presented in Lemma \ref{lemineg-log}, the odd powers in the Taylor expansion do not play any role in the integral \eqref{eqineg-inte}. Therefore we obtain
\begin{eqnarray*} P_{t,x}U&\le& U(t,x)-\frac{c}{2} \int_{(s,y)\in\mathbb{R}_+^2} \frac {(\sqrt{y}-\sqrt{x})^2}{(l(t)-\sqrt{x})^2}\mathbb{P} \bigl((t,x),(\mathrm{d} s,\mathrm{d} y) \bigr) \\ &\le& U(t,x)- \frac{c}{2}\int_{0}^1\int _{S^\delta_+}\frac{\pi _1^2(u)K^2(x,r)}{(l(t)-\sqrt{x})^2} \mu(r)\,\mathrm{d} r \,\sigma^\delta( \mathrm{d} u), \end{eqnarray*}
where $S^\delta_+$ is given in \eqref{Sdelta+}, and $K$ is defined by \eqref{eqK} with $A=\kappa(l(s)-\sqrt{x})^{2(\nu+1)}$. We have now
\begin{eqnarray*} P_{t,x}U&\le& U(t,x)-\frac{c(\nu+1)}{2} \biggl( \frac{2K}{\Gamma(\nu+1)} \biggr)^{{1}/{(\nu+1)}} \biggl( \int_{S^\delta_+} \pi_1^2(u) \sigma ^\delta(\mathrm{d} u) \biggr)\\ &&\hspace*{44pt}{}\times \biggl( \int_0^1 r(-\log r)\mu(r)\,\mathrm{d} r \biggr). \end{eqnarray*}
An appropriate choice of the constant $c$ leads to \eqref{sysavec1}. Finally we get
\[ \mathbb{E}\bigl[N^\varepsilon\bigr]\le U(0,0)= c \log\bigl(l(0)/\varepsilon \bigr). \]
\upqed\end{pf*}
\begin{pf*}{Proof of Theorem \ref{thmalgo-convline}} The arguments are similar to those developed for Theorem \ref {thmalgo-conv}, and the extension of the convergence result to curved boundaries is straightforward. We shall therefore not repeat the proof, but only focus on the one point which is genuinely different. We need to prove that the Markov chain $R_n=(\Xi_n,\chi(n))$ stays in the domain $D^0=\{(t,x)\dvtx 0\le x\le l^2(t) \}$ so that the hitting time $\tau$ defined by \eqref{deftau-line} satisfies $\tau>\Xi _{N^\varepsilon}$. In other words, if the Markov chain $R_n=(\Xi_n,\chi (n))$ at the $n$th step is equal to $(s,x)$, then $R_{n+1}$ should belong to $\{(t,x)\dvtx t\ge s, x\le l^2(t)\}$. In the $\mathit{WoMS}$ setting, for $t\ge s$, this means that the ball centered at $x$ with time-dependent radius $\psi_A(t-s)$ remains, as time elapses, inside the ball centered at $0$ with radius $l(t)$. We recall that
\[ A=\kappa\bigl(l(s)-\sqrt{x}\bigr)^{2(\nu+1)}. \]
Therefore we shall prove that
\begin{equation} \label{eqamontrer} \forall t\ge s\qquad \psi_A(t-s)+\sqrt{x}\le l(t). \end{equation}
In fact, due to Assumption \ref{assu} and the definition of $\psi_A$, it suffices to obtain
\begin{equation} \label{eprou} \psi_A(t-s)\le l(s)-\sqrt{x}-\Delta_{{\mathrm{min}}}(t-s)\qquad \forall s\le t\le s+W^2, \end{equation}
where
\begin{eqnarray*} W&=& \biggl(\frac{A}{\Gamma(\nu+1)2^\nu} \biggr)^{{1}/{(2\nu+2)}}= \biggl(\frac{\kappa}{\Gamma(\nu+1)2^\nu} \biggr)^{{1}/{(2\nu+2)}}\bigl(l(s)-\sqrt {x}\bigr)\\ &=&\frac{1}{L\sqrt{5}} \bigl(l(s)- \sqrt{x}\bigr). \end{eqnarray*}
Due to the definition of the constant $L$, we have
\begin{eqnarray*} 0&\le& W\le\frac{1}{2\Delta_{{\mathrm{min}}}} \frac{2(l(s)-\sqrt{x})\Delta _\mathrm{min}}{\sqrt{({(2\nu+2)}/{e})+4(l(s)-\sqrt{x})\Delta_\mathrm{min}}}\\ &\le&\frac{1}{2\Delta_{{\mathrm{min}}}} \biggl\{ \sqrt{ \frac{2\nu +2}{e}+4\bigl(l(s)-\sqrt{x}\bigr)\Delta_\mathrm{min}} - \sqrt{ \frac{2\nu+2}{e} } \biggr\}. \end{eqnarray*}
The right-hand side of the preceding inequality is the positive root of the polynomial function $P(X)=\Delta_\mathrm{min} X^2+\sqrt{2(\nu+1)/e} X-(l(s)-\sqrt{x})$. We deduce that $P(W)\le0$. By \eqref{eqmaxcalcul} and $P(W)\le0$, we obtain
\begin{eqnarray*} \sup_{t\ge s}\psi_A(t-s) &=& \biggl( \frac{2(\nu+1)}{e} \biggr)^{1/2} W \\ &\le& l(s)-\sqrt{x} -\Delta_\mathrm{min} W^2 \\ &\le &l(s)-\sqrt{x}-\Delta_{{\mathrm{min}}}(t-s)\qquad \forall s\le t\le s+W^2. \end{eqnarray*}
Finally we have proved \eqref{eprou} and so \eqref{eqamontrer}. \end{pf*}
If Assumption \ref{assu} is not satisfied, then it is difficult to give a general description of an iterated procedure for simulating hitting times. However, the particular form of the function $\psi_a$ defined by \eqref{defdepsi-new} permits us to describe a $\mathit{WoMS}$ algorithm for square root boundaries. Let us therefore consider the following functions:
\begin{equation} \label{eqdefde2} \psi_a(t)=\sqrt{2t\log\frac{a}{\Gamma(\nu+1)t^{\nu+1}2^\nu}}\quad \mbox {and}\quad f(t)=\sqrt{r-ut}, \end{equation}
well defined for $t\le t_0:=\min ( \alpha^{{1}/{(\nu+1)}},\frac {r}{u} )$ where $\alpha=a(\Gamma(\nu+1)2^\nu)^{-1}$.\eject
The algorithm is essentially based on the following result (the constants $r$ and~$u$ associated with the hitting problem of a square root boundary for the Bessel process shall be specified in the proof of Proposition \ref{propcurved}).
\begin{lemma} \label{lemcomparai} Let us define
\begin{equation} \label{eqdefdeF} F_\nu(r,u)=\frac{1}{2} \biggl( \frac{er}{\nu+1} \biggr)^{\nu+1}\Gamma(\nu +1) e^{-u/2},\qquad r>0, u>0. \end{equation}
If $a=F_\nu(r,u)$, then
\begin{equation} \label{comparai} \psi_a(t)\le f(t)\qquad \mbox{for all } 0\le t\le \alpha^{{1}/{(\nu+1)}}. \end{equation}
\end{lemma}
\begin{pf} We are looking for a particular value $a$ depending on both $r$ and $u$ such that the following bound holds: $\psi_a(t)\le f(t)$, for all $0\le t\le t_0$. Since $t\le t_0$, it suffices to prove that
\[ 2t\log\frac{\alpha}{t^{\nu+1}}\le r-ut \quad\Longleftrightarrow\quad g(t):=t \biggl( 2\log \frac{\alpha}{t^{\nu+1}}+u \biggr)\le r. \]
Let us compute the maximum of the function $g$ on the interval $[0,t_0]$, with $t_0$ fixed,
\[ g'(t)=2\log\frac{\alpha}{t^{\nu+1}}+u-2(\nu+1). \]
We have
\[ g'(t)=0\quad \Longleftrightarrow\quad\log\frac{\alpha}{t^{\nu+1}}=\nu+1- \frac {u}{2}\quad\Longleftrightarrow\quad t^{\nu+1}=\alpha\exp \biggl\{ \frac{u}{2}-\nu-1 \biggr\}. \]
In other words the maximum of the function $g$ is reached for
\[ t_{\mathrm{max}}=\alpha^{{1}/{(\nu+1)}}\exp \biggl\{ \frac{u}{2(\nu+1)}-1 \biggr\} \]
and is equal to
\[ g(t_{\mathrm{max}})=g_{\max}=2(\nu+1)\alpha^{{1}/{(\nu+1)}}e^{({u}/{(2(\nu+1))})-1}. \]
Choosing $g_{\max}\le r$ we obtain in particular \eqref{comparai}, which means
\[ \alpha\le \biggl(\frac{er}{2(\nu+1)} \biggr)^{\nu +1}e^{-u/2}\quad \Longleftrightarrow \quad a\le\frac{1}{2} \biggl(\frac{er}{\nu +1} \biggr)^{\nu+1}\Gamma(\nu+1)e^{-u/2}. \]
For $a_0= \frac{1}{2} (\frac{er}{\nu+1} )^{\nu+1}\Gamma(\nu +1)e^{-u/2}$, we get \eqref{comparai} since $t_0=\alpha^{{1}/{(\nu+1)}}$. \end{pf}
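Lemma \ref{lemcomparai} is also easy to verify numerically. The following Python sketch (an illustration only; the parameter values are arbitrary) checks $\psi_a(t)\le f(t)$ on a grid of the interval $(0,\alpha^{1/(\nu+1)}]$ for $a=F_\nu(r,u)$:

```python
import math

def psi(a, t, nu):
    """psi_a(t) = sqrt(2 t log(a / (Gamma(nu+1) t^{nu+1} 2^nu))); the argument is
    clamped at 0 to guard against rounding at the right endpoint, where psi vanishes."""
    arg = 2.0 * t * math.log(a / (math.gamma(nu + 1) * t ** (nu + 1) * 2**nu))
    return math.sqrt(max(0.0, arg))

def F(nu, r, u):
    """F_nu(r, u) = (1/2) (e r / (nu+1))^{nu+1} Gamma(nu+1) e^{-u/2}."""
    return 0.5 * (math.e * r / (nu + 1)) ** (nu + 1) * math.gamma(nu + 1) * math.exp(-u / 2.0)

# Check psi_a(t) <= f(t) = sqrt(r - u t) on (0, alpha^{1/(nu+1)}] for a = F_nu(r, u).
nu, r, u = 2.0, 1.0, 0.5
a = F(nu, r, u)
t_end = (a / (math.gamma(nu + 1) * 2**nu)) ** (1.0 / (nu + 1))  # alpha^{1/(nu+1)}
ok = all(psi(a, t, nu) <= math.sqrt(r - u * t) + 1e-12
         for t in (t_end * k / 1000.0 for k in range(1, 1001)))
```

The two curves touch at $t_{\mathrm{max}}$, in accordance with the choice $g_{\max}=r$ in the proof.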
The aim is now to construct an algorithm which permits us to approximate the hitting time of the square root boundary. Therefore we consider\vadjust{\goodbreak} a Bessel process of dimension $\delta$ which hits the decreasing curved boundary $f(t)$ given by \eqref{eqdefde2}.
\noindent\rule{\textwidth}{0.5pt}\hypertarget{algA4}\\ \textbf{Algorithm (A4)---the square root boundary:} $\bolds{l(t)=\sqrt{\beta _0-\beta_1 t}}$ \textbf{with} $\bolds{\beta_0>0}$, $\bolds{\beta_1>0}$.\\ Let $\kappa\in]0,1[$.\\ \textbf{Initialization:} Set $\chi(0)=0$, $\xi_0=0$, $\Xi_0=0$, $A_0=\kappa F_\nu(\beta_0,\beta_1)$.\\ \textbf{The $\bolds{(n+1)}$th step:} While the condition
\[ l(\Xi_{n})-\sqrt{\chi(n)}>\varepsilon\qquad\bigl(\mbox{denoted by } \mathbb {C}(n) \bigr) \]
holds, we define
\begin{equation} \label{eqdefAn} A_n=\kappa F_\nu \biggl(\bigl(l( \Xi_n)-\sqrt{\chi(n)}\bigr)^2, \beta_1 \biggl(1-\frac {\sqrt{\chi(n)}}{l(\Xi_n)} \biggr) \biggr), \end{equation}
where $F_\nu$ is defined by \eqref{eqdefdeF}, and we simulate $U_{n+1}$, a uniformly distributed random vector on $[0,1]^{\lfloor\nu\rfloor+2}$, $G_{n+1}$, a standard Gaussian random variable and $V_{n+1}$, a uniformly distributed random vector on $\mathcal{S}^\delta$. $U_{n+1}$, $G_{n+1}$ and $V_{n+1}$ have to be independent. We then construct $(\xi_{n+1},\Xi_{n+1},\chi(n+1))$ using~\eqref{eqdefalgo}. \\ The algorithm stops when $\mathbb{C}(n)$ is no longer satisfied: we set $\xi_{n+1}=0$ and so $\Xi_{n+1}=\Xi_{n}$ and $\chi(n+1)=\chi(n)$.\\ \rule{\textwidth}{0.5pt}
\begin{proposition} \label{propcurved} The statements of Theorems \ref{thmalgoline} and \ref {thmalgo-convline} are true for Algorithm \textup{(\hyperlink{algA4}{A4})} associated with the square root boundary. \end{proposition}
\begin{pf} All the arguments developed for decreasing boundaries with lower-bounded derivatives are easily adapted to the square root boundary. We leave the details to the reader and focus our attention on the following fact: the stochastic process $(\Xi_n,\chi(n), n\ge0)$ stays in the domain $D^0$ defined by
\[ D^0=\bigl\{ (t,x)\in\mathbb{R}_+^2\dvtx l(t)-\sqrt{x}>0 \bigr\}. \]
In the $\mathit{WoMS}$ setting, for $t\ge s$, this means that for $(\Xi_n,\chi (n))=(s,x)\in D^0 $ the following step leads to $\sqrt{\chi(n+1)}<l(\Xi _{n+1})$. By \eqref{eqdefalgo}, it suffices to prove that
\begin{eqnarray} \label{eqa-dem} \sqrt{x}+\psi_{A}(t)<l(s+t) \nonumber \\[-8pt] \\[-8pt] \eqntext{\mbox{for all } t\in\bigl \{u\ge0\dvtx \min \bigl(l(s+u),\psi_A(u)\bigr)\ge0\bigr\},} \end{eqnarray}
with $ A=\kappa F_\nu ( (l(s)-\sqrt{x})^2,\beta_1 (1-\frac{\sqrt {x}}{l(s)} ) )$, since $\chi(n+1)\le(\sqrt{\chi(n)}+ \psi _{A_n}(\xi_{n+1}))^2$. By Lemma \ref{lemcomparai} and due to the coefficient $\kappa$, we have
\[ \psi_A(t)<\sqrt{\bigl(l(s)-\sqrt{x}\bigr)^2- \beta_1 \biggl(1-\frac{\sqrt {x}}{l(s)} \biggr) t }. \]
Hence
\begin{eqnarray*} &&\bigl(l(s+t)-\sqrt{x}\bigr)^2-\psi_A(t)^2\\ &&\qquad> \bigl(\sqrt{l(s)^2-\beta_1 t}-\sqrt {x} \bigr)^2-\bigl(l(s)-\sqrt{x}\bigr)^2+\beta_1 \biggl(1-\frac{\sqrt{x}}{l(s)} \biggr)t \\ &&\qquad> 2\sqrt{x} \bigl( l(s)-\sqrt{l^2(s)-\beta_1 t} \bigr)- \frac{\beta_1\sqrt {x}}{l(s)} t\ge0. \end{eqnarray*}
This leads directly to \eqref{eqa-dem}. \end{pf}
\begin{rem} The whole study provides a new efficient algorithm for simulating Bessel hitting times for given levels or curved boundaries. We can use this algorithm in two generalized situations:
\begin{longlist}[(1)]
\item[(1)] We have assumed that the Bessel process starts from the origin. Of course the procedure presented here can also be applied to Bessel processes starting from $x>0$. It suffices to change the initialization step!
\item[(2)] We focused our attention on the Bessel process, but we also linked the initial problem to the exit time of a $\delta $-dimensional Brownian motion from a ball of radius~$l$. Algorithm (\hyperlink{algA1}{A1}) extended to higher dimensions can also be used in order to evaluate exit times of general compact domains whose boundary is regular. \end{longlist}
\end{rem}
\section{Numerical results} In this part we illustrate the previous results on some numerical examples. Let us first present an outcome of our algorithm: the exit position from a sphere whose radius depends on time. The figure below gives this result for a radius $l=1$ and a precision $\varepsilon= 10^{-3}$.
Let us compare our algorithm with existing results. Consider the classical Euler scheme for a Brownian motion, and evaluate the first hitting time and hitting position from a disk with given radius.\vspace*{9pt}
\includegraphics{900i01.eps}
First of all we can verify that the distribution of the hitting time for the $\mathit{WoMS}$ algorithm matches the distribution of the hitting time of a given level for the $2$-dimensional Bessel process. Figure \ref {fighistopossortie} gives this result for a starting disk with radius~$1$, a precision $\varepsilon=10^{-3}$ and a number of simulations $N=20\mbox{,}000$. In the Euler scheme the time step is $\Delta t = 10^{-4}$.
\begin{figure}\label{fighistopossortie}
\end{figure}
We can also test the fact that the exit position is uniformly distributed on the circle. In order to do this we evaluate the angle of the exit position in our $\mathit{WoMS}$ procedure and show that it is a uniformly distributed random variable with values in $[-\pi, \pi ]$. Figure \ref{fighistopossortie} also shows the histogram of the result for a disk of radius $1$, $\varepsilon=10^{-3}$ and 20,000 simulations.
Let us now present a simulation with Algorithm (\hyperlink{algA2}{A2}). We consider the hitting time of the level $l=2$ for the Bessel process of index $\nu =2$, and we illustrate Theorem \ref{thmalgo} by Figure \ref{graph1}. The curve represents the average number of steps versus the precision $\varepsilon=10^{-k}$, $k=1,\ldots, 7$. We can observe that the number of steps behaves better than expected, since the curve is sublinear. We obtain the following values (for $\gamma=0.9$ and $100\mbox{,} 000$ simulations in order to evaluate the mean).
\begin{figure}\label{graph1}
\end{figure}
Finally we present the dependence of the average number of steps of Algorithm~(\hyperlink{algA2}{A2}) on the dimension of the Bessel process. See Figure~\ref{fig4}. For that purpose, we simulate the hitting time of the level $l=2$ with $\varepsilon=10^{-3}$, $\gamma=0.9$ and $50\mbox{,} 000$ simulations for each estimation of the average value, and the dimension of the Bessel process takes values in the set $\{2,3,\ldots,18\}$.
\begin{figure}\label{fig4}
\end{figure}
\section{Application to the Cox--Ingersoll--Ross process}
We now aim to estimate the hitting time of a level $l>0$ for $(X^\delta_t, t\geq0)$, a Cox--Ingersoll--Ross process. The CIR process is the solution of the following stochastic differential equation:
\begin{equation} \label{CIRdef} \cases{ \mathrm{d} X^\delta_t=
\bigl(a+bX^\delta_t\bigr)\,\mathrm{d} t+c\sqrt{\bigl|X^\delta_t\bigr|} \,\mathrm{d} B_t, \vspace*{2pt}\cr X^\delta_0=x_0,} \end{equation}
where $x_0\geq 0$, $a\geq 0$, $b\in\mathbb{R}$, $c>0$ and $(B_t, t\geq 0)$ is a standard Brownian motion. We denote here $\delta= 4a/c^2$.
We will first recall a connection between this stochastic process and $(Y^\delta(t), t\geq 0)$, the square of the Bessel process BESQ($\delta$), the solution of the equation
\begin{equation}
\label{defbessel} Y^\delta(t)=y_0+\delta t+2\int _0^t \sqrt{\bigl|Y^\delta(s)\bigr|}\,\mathrm{d} B_s,\qquad t\geq 0. \end{equation}
\begin{lemma} \label{lemidentite} The CIR process has the same distribution as $(\overline{X}_t, t\geq 0)$, which is defined by
\begin{equation} \label{eqcir-relation} \cases{ \displaystyle\overline{X}_t=e^{bt}Y^\delta \biggl( \frac{c^2}{4b}\bigl(1-e^{-bt}\bigr) \biggr), \vspace*{2pt}\cr \displaystyle\overline{X}_0=Y^\delta(0), } \end{equation}
where $Y$ is the square of a Bessel process in dimension $\delta =4a/c^2$; see \cite{Revuz-Yor-99}. \end{lemma}
\begin{pf} Let us only sketch some ideas of the proof. Let $Y^\delta(t)$ be the square of the $\delta$-dimensional Bessel process. By applying It{\^o}'s formula, we get the stochastic differential equation satisfied by the process $\overline{X}_t$,
\begin{eqnarray} \label{eqeds1} &&\mathrm{d}\overline{X}_t=b\overline{X}_t \,\mathrm{d} t+e^{bt} \,\mathrm{d} \biggl(Y \biggl( \frac{c^2}{4b} \bigl(1-e^{-bt}\bigr) \biggr) \biggr) \nonumber \\
&&\qquad=b\overline{X}_t \,\mathrm{d} t+b\delta\frac{c^2}{4b} \,\mathrm{d} t+2e^{bt}\sqrt {\bigl|e^{-bt}\overline{X}_t\bigr|} \,\mathrm{d} B_{ ({c^2}/{(4b)})(1-e^{-bt})} \\
&&\qquad=(a+b\overline{X}_t) \,\mathrm{d} t+2e^{{bt}/{2}}\sqrt{|\overline
{X}_t|} \,\mathrm{d} B_{ ({c^2}/{(4b)})(1-e^{-bt})},\nonumber \end{eqnarray}
where $\delta=4a/ c^2.$ Let us remark that
\[ \frac{c^2}{4b}\bigl(1-e^{-bt}\bigr)=\int_0^t \rho^2(s)\,\mathrm{d} s \qquad\mbox{with } \rho(t)=\frac{c}{2} e^{-{bt}/{2}}. \]
We can deduce that there exists a Brownian motion $(\beta_t, t\geq 0)$ such that
\[ B_{ ({c^2}/{(4b)})(1-e^{-bt})}=\int_0^t \rho(s)\,\mathrm{d} \beta_s \]
for all $t\geq 0$. With this notation, equation \eqref{eqeds1} gives
\begin{eqnarray*}
\mathrm{d}\overline{X}_t&=& (a+b\overline{X}_t) \,\mathrm{d} t+2e^{{bt}/{2}}\sqrt{|\overline{X}_t|}\rho(t) \,\mathrm{d} \beta_t \\
&=&(a+b\overline{X}_t) \,\mathrm{d} t+c\sqrt{|\overline{X}_t|} \,\mathrm{d}\beta_t, \end{eqnarray*}
and $\overline{X}_0=Y(0)$. This proves that the process $(\overline {X}_t, t\geq0)$ has the same distribution as the CIR process given by (\ref{CIRdef}). \end{pf}
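When $\delta=4a/c^2$ is a positive integer, relation \eqref{eqcir-relation} yields an exact simulation of the CIR marginal: the squared Bessel process at a fixed time, started from $y_0$, is the squared norm of a $\delta$-dimensional Gaussian vector whose mean has squared norm $y_0$. The following Python sketch (an illustration only, not part of the original text; \texttt{numpy} is assumed and the parameter values are arbitrary) uses this construction and checks the classical mean $\mathbb{E}[X_t^\delta]=x_0e^{bt}+a(e^{bt}-1)/b$:

```python
import math
import numpy as np

def cir_marginal(x0, a, b, c, t, size, rng):
    """Exact draw of X_t for the CIR process via X_t = e^{bt} Y(c^2/(4b)(1 - e^{-bt})),
    where Y is BESQ(delta) started from x0 and delta = 4a/c^2 is assumed integer."""
    delta = round(4.0 * a / c**2)
    s = c**2 / (4.0 * b) * (1.0 - math.exp(-b * t))
    # BESQ(delta) at time s from y0: |z|^2 with z ~ N(m, s I_delta), |m|^2 = y0.
    g = rng.normal(size=(size, delta)) * math.sqrt(s)
    g[:, 0] += math.sqrt(x0)
    y = (g**2).sum(axis=1)
    return math.exp(b * t) * y

rng = np.random.default_rng(1)
x = cir_marginal(x0=1.0, a=1.0, b=0.5, c=1.0, t=1.0, size=100_000, rng=rng)
mean_exact = 1.0 * math.exp(0.5) + 1.0 * (math.exp(0.5) - 1.0) / 0.5
```

Here $\mathbb{E}[Y^\delta(s)]=y_0+\delta s$ recovers the CIR mean after the deterministic time change.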
Let us consider the hitting time of a given level $l$ for the CIR process and denote it by $T_l$. This time is defined by
\[ T_l=\inf\bigl\{ s\geq 0; X^\delta_s=l\bigr\}. \]
The previous Lemma \ref{lemidentite} also gives an identity in distribution connecting the hitting time of the CIR process and the hitting time of the square of a $\delta$-dimensional Bessel process.
\begin{proposition}\label{propenfin} The hitting time $T_l$ of a level $l>0$ for a CIR process has the same distribution as $-\frac{1}{b}\log (1-\frac{4b}{c^2}\tau_\psi )$ where
\[ \tau_\psi=\inf \biggl\{ t\geq 0; Y^\delta(t)=l \biggl(1- \frac{4b}{c^2}t \biggr) \biggr\}, \]
and $Y^\delta$ is the square of a Bessel process of dimension $\delta=4a/c^2$. \end{proposition}
\begin{pf} By using Lemma \ref{lemidentite}, $\tau_\psi$ has the same distribution as $\overline{T}_l$ given by
\begin{equation} \label{deftau2} \overline{T}_l=\inf \biggl\{s\geq 0; Y^\delta \biggl( \frac {c^2}{4b}\bigl(1-e^{-bs}\bigr) \biggr)=le^{-bs} \biggr\}. \end{equation}
Define $t=\frac{c^2}{4b}(1-e^{-bs})$, so we have two situations:
\textit{First case}: If $b<0$, let $s=\eta(t)$ where
\[ \eta(t)=-\frac{1}{b}\log \biggl(1-\frac{4b}{c^2}t \biggr)\qquad \mbox{for } t\geq 0. \]
The map $\eta$ is strictly increasing, and we thus get
\begin{eqnarray*} \overline{T}_l&=&\inf \biggl\{\eta(t); t\geq 0, Y^\delta(t)=l \biggl(1-\frac {4b}{c^2}t \biggr) \biggr\}\\ &=&\eta \biggl( \inf \biggl\{t\geq 0;Y^\delta(t)=l \biggl(1-\frac{4b}{c^2}t \biggr) \biggr\} \biggr). \end{eqnarray*}
\textit{Second case}: If $b\geq 0$, let also $s=\eta(t)$. In this case the variable $t$ takes its values only on the interval $ [0,\frac {c^2}{4b} )$. So
\[ \overline{T}_l=\inf \biggl\{\eta(t); 0\le t\le\frac{c^2}{4b}, Y^\delta (t)=l \biggl(1-\frac{4b}{c^2} t \biggr) \biggr\}. \]
The condition $0\le t\le\frac{c^2}{4b}$ can be omitted in the infimum since the boundary to hit, $l (1-\frac{4bt}{c^2} )$, is negative outside this interval, while the squared Bessel process is always nonnegative. Furthermore the function $\eta$ is also increasing for $b\geq 0$, and the result is thus obtained. \end{pf}
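The time change used in the proof is elementary to implement. The following Python sketch (an illustration only, standard library, arbitrary parameter values) encodes $s\mapsto\frac{c^2}{4b}(1-e^{-bs})$ and its inverse $\eta$, which are valid for $b$ of either sign:

```python
import math

def theta(s, b, c):
    """The time change t = c^2/(4b) * (1 - e^{-b s}) appearing in the proof."""
    return c**2 / (4.0 * b) * (1.0 - math.exp(-b * s))

def eta(t, b, c):
    """Its inverse, eta(t) = -(1/b) * log(1 - 4 b t / c^2); for b > 0 this requires
    t < c^2/(4b), which is exactly where the boundary l(1 - 4bt/c^2) stays positive."""
    return -math.log(1.0 - 4.0 * b * t / c**2) / b

# eta inverts theta for negative and positive b.
roundtrip = [eta(theta(s, b, 1.3), b, 1.3) for b in (-0.7, 0.4) for s in (0.1, 1.0, 3.0)]
```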
\textit{Application of Algorithm} (\hyperlink{algA4}{A4}):
An immediate consequence of Proposition \ref{propenfin} is that the hitting time $T_l$ is related to the first time the Bessel process of dimension $\delta=4a/c^2$ reaches the curved boundary: $f(t)=\sqrt{l (1-\frac{4b}{c^2}t )}$. We are able to apply Algorithm (\hyperlink{algA4}{A4}) if $4a/c^2\in\mathbb{N}^*$ and $b>0$ (the boundary is then decreasing). Let us denote by $N^\varepsilon$ the number of steps of (\hyperlink{algA4}{A4}) and $\Xi _{N^\varepsilon}$, the approximated hitting time of the Bessel process associated with the particular curved boundary $f$. Combining Propositions \ref{propcurved} and \ref{propenfin} leads to
\begin{eqnarray*} \biggl(1-\frac{\varepsilon}{\sqrt{2\alpha\pi}} \biggr)\mathbb{P} \biggl(\Xi _{N^\varepsilon}\le \frac{c^2}{4b} \bigl(1-e^{-bt}\bigr)-\alpha \biggr)&\le&\mathbb {P}(T_l\le t)\\ &\le&\mathbb{P} \biggl(\Xi_{N^\varepsilon}\le \frac{c^2}{4b} \bigl(1-e^{-bt}\bigr) \biggr) \end{eqnarray*}
for $\alpha$ small enough and $t>0$.
\begin{appendix}\label{app}
\section*{Appendix: Simulation of random variables}
Let us introduce simulation procedures related to particular probability density functions.
\begin{propositionn} \label{propappend} Let $Z$ be a random variable with Gamma distribution ${\rm Gamma}(\alpha ,\beta)$, that is,
\[ \mathbb{P}(Z\in\mathrm{d} z)=\frac{1}{\Gamma(\alpha)\beta^\alpha} z^{\alpha-1}e^{-{z}/{\beta}} \ind{\{z>0 \}} \,\mathrm{d} z,\qquad \alpha >0, \beta>0. \]
Then $W=\exp(-Z)$ has the following distribution:
\[ \mathbb{P}(W\in\mathrm{d} r)=\frac{1}{\Gamma(\alpha)\beta^\alpha}(-\log r)^{\alpha-1}r^{1/\beta-1} \ind{[0,1]}(r) \,\mathrm{d} r. \]
In particular the stopping time $\tau_\psi$ defined by \eqref {densitetau} has the same law as $ [\frac{a}{\Gamma(\nu+1)2^\nu } ]^{{1}/{(\nu+1)}}e^{-Z}$. Here $Z$ is a Gamma distributed random variable with parameters $\alpha=\nu+2$ and $\beta=\frac{1}{\nu+1}$. \end{propositionn}
\begin{pf} Let $f$ be a nonnegative function. Using suitable changes of variables, we obtain
\begin{eqnarray*} \mathbb{E}\bigl[f(W)\bigr]&=&\frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^\infty f\bigl(e^{-z}\bigr) z^{\alpha-1}e^{-{z}/{\beta}} \,\mathrm{d} z \\ &=&\frac{1}{\Gamma(\alpha)\beta^\alpha}\int_0^1f(r) (-\log r)^{\alpha -1}r^{1/\beta-1} \,\mathrm{d} r. \end{eqnarray*}
In order to end the proof it suffices to multiply $W$ by a constant and use once again a change of variables formula. \end{pf}
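The last statement of the proposition gives a direct sampling recipe for $\tau_\psi$. Here is a minimal Python sketch (the function name is a hypothetical helper; it uses Python's built-in Gamma sampler \texttt{random.gammavariate}, whose arguments are the shape and the scale):

```python
import math
import random

def sample_tau_psi(a, nu):
    """Sample tau_psi via the proposition above: tau_psi has the same law as
    [a / (Gamma(nu+1) * 2**nu)]**(1/(nu+1)) * exp(-Z), where Z is Gamma
    distributed with shape alpha = nu + 2 and scale beta = 1 / (nu + 1)."""
    z = random.gammavariate(nu + 2, 1.0 / (nu + 1))
    scale = (a / (math.gamma(nu + 1) * 2.0 ** nu)) ** (1.0 / (nu + 1))
    return scale * math.exp(-z)
```

Since $e^{-Z}\in(0,1]$, every sample is bounded above by the multiplicative constant in front, which gives a cheap sanity check on an implementation.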
We need to simulate Gamma distributed variables. Let us just recall some common facts.
\begin{propositionn}
\textup{(i)} If $\alpha\in\mathbb{N}$ (the so-called Erlang distribution), then the Gamma distributed variable $Z$ has the same law as \[ -\beta\log(U_1\cdots U_\alpha), \] where $(U_i)_{1\le i\le\alpha}$ are independent uniformly distributed random variables. Hence $W$ defined by $W=\exp(-Z)$ can be simulated by \[ (U_1U_2\cdots U_\alpha)^\beta.\vspace*{-12pt} \]
\begin{longlist}[(ii)] \item[(ii)] If $\alpha-1/2\in\mathbb{N}$, then $Z$ has the same law as \[ -\beta\log(U_1\cdots U_{\lfloor\alpha\rfloor})+\frac{\beta N^2}{2}, \] where $(U_i)_{1\le i\le\lfloor\alpha\rfloor}$ are i.i.d. uniformly distributed random variables, and $N$ is an independent standard Gaussian r.v.; see, for instance, \cite{Devroye}, Chapter \textup{IX.3}. \end{longlist}
\end{propositionn} \end{appendix}
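Both recipes of the proposition translate directly into code. The following Python sketch is an illustration (function names are hypothetical; the $U_i$ are uniforms on $(0,1)$ and $N$ is a standard Gaussian):

```python
import math
import random

def erlang_gamma(alpha, beta):
    # Case (i): integer alpha. Z has the law of -beta * log(U1 * ... * U_alpha).
    u = 1.0
    for _ in range(alpha):
        u *= random.random()
    return -beta * math.log(u)

def half_integer_gamma(alpha, beta):
    # Case (ii): alpha - 1/2 integer. Use floor(alpha) uniforms and add
    # beta * N**2 / 2 for an independent standard Gaussian N.
    u = 1.0
    for _ in range(int(alpha)):
        u *= random.random()
    n = random.gauss(0.0, 1.0)
    return -beta * math.log(u) + beta * n * n / 2.0
```

A quick check of either sampler is that the empirical mean should approach $\alpha\beta$, the mean of the ${\rm Gamma}(\alpha,\beta)$ distribution.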
\printaddresses
\end{document}
# Split-radix FFT algorithm overview
The Split-radix FFT algorithm is a refinement of the Cooley-Tukey fast Fourier transform that builds on the radix-2 decimation-in-time (DIT) algorithm: it combines a radix-2 split of the even-indexed terms with a radix-4 split of the odd-indexed terms, which lowers the arithmetic operation count compared to a pure radix-2 approach. It is a recursive algorithm that divides the input sequence into smaller subproblems, which are then solved independently. This recursive structure allows the algorithm to efficiently process large sequences, making it a popular choice for many applications.
Consider the following sequence:
$$
x = [1, 2, 3, 4, 5, 6, 7, 8]
$$
Applying the Split-radix FFT algorithm to this sequence yields the following DFT (values rounded to two decimals):

$$
X = [36,\ -4 + 9.66i,\ -4 + 4i,\ -4 + 1.66i,\ -4,\ -4 - 1.66i,\ -4 - 4i,\ -4 - 9.66i]
$$

Note that X[0] is simply the sum of the inputs (36), and, because the input is real-valued, the spectrum has conjugate symmetry: X[8 - k] is the complex conjugate of X[k].
The Split-radix FFT algorithm is notable for having one of the lowest known arithmetic operation counts among power-of-two FFT algorithms. When the input is real-valued, further savings are available because the resulting spectrum has conjugate symmetry, so only half of it needs to be computed explicitly. These properties make it an attractive choice for applications such as image processing, audio analysis, and wireless communication systems.
## Exercise
What are the key advantages of the Split-radix FFT algorithm?
# Implementation of the Split-radix FFT in JavaScript
Now that we have a basic understanding of the Split-radix FFT algorithm, let's dive into its implementation in JavaScript. JavaScript is a popular programming language for web development, and its ability to run on both the client and server sides makes it a versatile choice for implementing FFT algorithms.
To implement the Split-radix FFT in JavaScript, we will first need to define the necessary functions for performing the DFT. These functions will include the radix-2 DIT algorithm, as well as helper functions for data rearrangement and complex number manipulation.
Here is a simple implementation of the radix-2 DIT algorithm in JavaScript:
```javascript
// x: array of [re, im] pairs; n: power of two.
function radix2DIT(x, n) {
  if (n === 1) {
    return [x[0]];
  }
  // Recurse on the even- and odd-indexed subsequences.
  const even = radix2DIT(x.filter((_, i) => i % 2 === 0), n / 2);
  const odd = radix2DIT(x.filter((_, i) => i % 2 === 1), n / 2);
  const X = new Array(n);
  for (let k = 0; k < n / 2; k++) {
    // Twiddle factor w = e^(-2*pi*i*k/n), kept as separate re/im parts
    // because JavaScript has no built-in complex number type.
    const wRe = Math.cos(-2 * Math.PI * k / n);
    const wIm = Math.sin(-2 * Math.PI * k / n);
    const tRe = wRe * odd[k][0] - wIm * odd[k][1];
    const tIm = wRe * odd[k][1] + wIm * odd[k][0];
    X[k] = [even[k][0] + tRe, even[k][1] + tIm];
    X[k + n / 2] = [even[k][0] - tRe, even[k][1] - tIm];
  }
  return X;
}
```
This function takes an input sequence `x` of complex samples (stored as `[re, im]` pairs) and the size of the DFT `n`, and returns the corresponding frequency spectrum `X`. It uses a recursive approach that splits the input into its even- and odd-indexed subsequences, transforms each half, and then combines the two half-size results with a butterfly step that multiplies by the complex twiddle factors.
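Before relying on any FFT routine, it helps to validate it against a direct O(n^2) DFT, which is short enough to be obviously correct. The sketch below uses a `[re, im]` pair convention for complex samples (an assumed representation, since JavaScript has no built-in complex type):

```javascript
// Direct O(n^2) DFT used as a reference to validate FFT output.
// Input and output are arrays of [re, im] pairs.
function naiveDFT(x) {
  const n = x.length;
  const X = [];
  for (let k = 0; k < n; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = -2 * Math.PI * k * t / n;
      re += x[t][0] * Math.cos(angle) - x[t][1] * Math.sin(angle);
      im += x[t][0] * Math.sin(angle) + x[t][1] * Math.cos(angle);
    }
    X.push([re, im]);
  }
  return X;
}
```

For the input [1, 2, 3, 4], the reference DFT is [10, -2+2i, -2, -2-2i], which any correct FFT must reproduce.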
## Exercise
Write a JavaScript function that implements the Split-radix FFT algorithm for real-valued sequences.
# Understanding the radix-2 algorithm
The radix-2 algorithm works by splitting a length-n DFT into two length-n/2 DFTs over the even- and odd-indexed samples, then combining the results with one pass of butterfly operations. Applied recursively, this structure allows the algorithm to efficiently process large sequences, making it a popular choice for many applications.
Consider the following sequence:
$$
x = [1, 2, 3, 4, 5, 6, 7, 8]
$$
Applying the radix-2 algorithm to this sequence produces the following DFT (values rounded to two decimals):

$$
X = [36,\ -4 + 9.66i,\ -4 + 4i,\ -4 + 1.66i,\ -4,\ -4 - 1.66i,\ -4 - 4i,\ -4 - 9.66i]
$$
The radix-2 algorithm handles real-valued sequences by treating them as complex sequences with zero imaginary parts. The resulting spectrum then has conjugate symmetry, which specialized real-input variants exploit to roughly halve the work. This makes the algorithm a common building block in applications such as image processing, audio analysis, and wireless communication systems.
## Exercise
What are the key properties of the radix-2 algorithm?
# Optimizing the Split-radix FFT algorithm
One common optimization technique is data rearrangement. By rearranging the input sequence, we can reduce the number of complex multiplications required by the algorithm. This can lead to a significant performance improvement, especially for large sequences.
Here is an example of data rearrangement in JavaScript:
```javascript
// Reorder x so that element i comes from the bit-reversed position of i.
function bitReverse(x, n) {
  let X = new Array(n);
  for (let i = 0; i < n; i++) {
    X[i] = x[bitReverseIndex(i, n)];
  }
  return X;
}

// Reverse the log2(n) low-order bits of the index x.
function bitReverseIndex(x, n) {
  let r = 0;
  for (let k = 0; k < Math.log2(n); k++) {
    r = (r << 1) | (x & 1);
    x >>= 1;
  }
  return r;
}
```
This code defines two functions, `bitReverse` and `bitReverseIndex`, which rearrange the input sequence `x` in bit-reverse order. The `bitReverseIndex` function computes the bit-reverse index of a given input index `x`.
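To see the permutation concretely, here is a self-contained helper (named `reverseBits` to avoid clashing with the functions above) that lists the bit-reversed ordering for n = 8:

```javascript
// Reverse the low-order `bits` bits of the index x.
function reverseBits(x, bits) {
  let r = 0;
  for (let k = 0; k < bits; k++) {
    r = (r << 1) | (x & 1);
    x >>= 1;
  }
  return r;
}

// For n = 8 (3 bits), indices 0..7 permute to [0, 4, 2, 6, 1, 5, 3, 7]:
// e.g. index 1 = 001 in binary maps to 100 = 4.
const perm = Array.from({ length: 8 }, (_, i) => reverseBits(i, 3));
console.log(perm);
```

Note that the permutation is its own inverse, which is why applying it once up front is enough for an in-place FFT.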
## Exercise
Write a JavaScript function that optimizes the Split-radix FFT algorithm by applying data rearrangement and loop unrolling.
# Performance analysis of the Split-radix FFT
One common performance metric is the computational complexity of the algorithm, i.e., the number of operations required to compute the DFT of a sequence as a function of its length.
For a sequence of length `n`, the computational complexity of the Split-radix FFT algorithm is O(n log n), a dramatic improvement over the O(n^2) cost of evaluating the DFT directly from its definition. For very small sequences, however, the direct method can still be competitive because of its lower bookkeeping overhead.
Another important aspect of performance analysis is the algorithm's memory usage. A naive recursive implementation allocates temporary arrays at every level of the recursion, while an iterative, in-place formulation based on a bit-reversal reordering of the input needs almost no memory beyond the input buffer itself. For memory-constrained applications, this difference can be decisive.
## Exercise
What are the key performance metrics of the Split-radix FFT algorithm?
# Applications of the Split-radix FFT in JavaScript
One common application of the Split-radix FFT algorithm is in image processing. It can be used to efficiently compute the 2D Discrete Cosine Transform (DCT) of an image, which is a key step in many image compression algorithms.
Here is a simplified, one-dimensional sketch of this idea in JavaScript. It computes the magnitude spectrum of a single row of pixel intensities with the `radix2DIT` routine from earlier (a full 2D transform would apply the same procedure along every row and then every column; the `[re, im]` pair convention for complex values is an assumption carried over from that implementation):

```javascript
function rowSpectrum(row) {
  // Promote real pixel values to complex [re, im] pairs, transform,
  // and keep the magnitude of each frequency component.
  const x = row.map(v => [v, 0]);
  const X = radix2DIT(x, x.length);
  return X.map(([re, im]) => Math.hypot(re, im));
}
```

This code defines a function `rowSpectrum` that takes one row of an image and computes its magnitude spectrum. The result can then be used for further analysis, such as compression (discarding small high-frequency components) or feature extraction.
## Exercise
What are some other applications of the Split-radix FFT algorithm in JavaScript?
# Real-world examples of the Split-radix FFT in JavaScript
One example of the Split-radix FFT algorithm in JavaScript is its use in audio analysis. It can be used to efficiently compute the spectrum of an audio signal, which can then be used for tasks such as noise reduction, pitch detection, or music transcription.
Here is an example of using the Split-radix FFT algorithm for audio analysis in JavaScript:
```javascript
function audioAnalysis(audio) {
  // Promote real samples to complex [re, im] pairs (the convention
  // used by radix2DIT above), transform, and return the magnitude
  // of each frequency bin.
  const x = audio.map(v => [v, 0]);
  const X = radix2DIT(x, x.length);
  return X.map(([re, im]) => Math.hypot(re, im));
}
```
This code defines a function `audioAnalysis` that takes an input audio signal and computes its magnitude spectrum using the Split-radix FFT machinery. The resulting spectrum can then be used for further analysis, such as detecting specific frequencies or identifying musical notes.
## Exercise
What are some other real-world examples of the Split-radix FFT algorithm in JavaScript?
# Debugging and testing the Split-radix FFT implementation
One common technique for debugging the Split-radix FFT algorithm is unit testing. Unit testing involves testing individual functions or components of the algorithm to ensure that they work as expected. This can help identify and fix any bugs or errors in the implementation.
Here is an example of unit testing the `radix2DIT` function in JavaScript:
```javascript
const assert = require('assert');

function testRadix2DIT() {
  // Compare against hand-computed DFT values for the ramp 1..8,
  // using the [re, im] pair convention and a floating-point tolerance.
  const x = [1, 2, 3, 4, 5, 6, 7, 8].map(v => [v, 0]);
  const X = radix2DIT(x, x.length);
  assert(Math.abs(X[0][0] - 36) < 1e-9); // X[0] = sum of inputs = 36
  assert(Math.abs(X[4][0] + 4) < 1e-9);  // X[4] = alternating sum = -4
  assert(Math.abs(X[2][0] + 4) < 1e-9 && Math.abs(X[2][1] - 4) < 1e-9); // X[2] = -4+4i
}
```
This code defines a function `testRadix2DIT` that checks the output of `radix2DIT` against hand-computed DFT values for a known input sequence, using a small numerical tolerance to absorb floating-point round-off.
## Exercise
What are some other techniques for debugging and testing the Split-radix FFT implementation in JavaScript?
# Advanced topics in the Split-radix FFT algorithm
One advanced topic is the relationship between the Split-radix FFT algorithm and other FFT algorithms, such as the Cooley-Tukey algorithm. The Split-radix FFT algorithm is a refinement of the Cooley-Tukey approach: rather than using a single fixed radix, it mixes a radix-2 split of the even-indexed terms with a radix-4 split of the odd-indexed terms. This relationship highlights the fundamental nature of the Split-radix FFT algorithm and its efficiency in solving the DFT problem.
The Split-radix FFT algorithm can also be used in parallel computing. By dividing the input sequence into smaller subproblems and solving them independently, the algorithm can be parallelized across multiple processors or cores. This can lead to significant performance improvements, especially for large sequences.
Another advanced topic is the connection to quantum computing. The quantum Fourier transform (QFT), the quantum analogue of the DFT, is a key step in many quantum algorithms. The butterfly structure underlying classical FFT algorithms, including the Split-radix decomposition, is closely related to the circuit structure that makes the QFT efficient, which highlights the versatility of these ideas in both classical and quantum computing.
## Exercise
What are some other advanced topics in the Split-radix FFT algorithm?
# Comparison to other FFT algorithms
One key difference between the Split-radix FFT algorithm and the classical Cooley-Tukey algorithm is how the input sequence is divided into subproblems. The Cooley-Tukey framework admits many factorizations, including radix-2 decimation-in-time (DIT) and decimation-in-frequency (DIF) variants, while the Split-radix algorithm commits to a specific mixed split: radix-2 for the even-indexed terms and radix-4 for the odd-indexed terms. This difference in approach results in different operation counts for the two algorithms even though their asymptotic complexity is the same.
Here is a comparison of the computational complexities of the Split-radix FFT algorithm and the Cooley-Tukey algorithm:
- Split-radix FFT algorithm: O(n log n)
- Cooley-Tukey algorithm: O(n log n)
Both algorithms have the same asymptotic complexity, but the Split-radix variant achieves a lower constant factor, requiring fewer multiplications and additions than a plain radix-2 implementation.
Another popular FFT algorithm is the Bluestein algorithm, which is particularly useful for computing the DFT of sequences whose length is not a power of two. It re-expresses the DFT as a convolution (the chirp z-transform), which can then be evaluated with power-of-two FFTs, making it efficient even for prime-length sequences.
## Exercise
What are some other FFT algorithms, and how do they compare to the Split-radix FFT algorithm?
In conclusion, the Split-radix FFT algorithm is a powerful and efficient technique for computing the DFT of a sequence. Its ability to efficiently process large sequences, its versatility in various domains, and its relationship to other FFT algorithms make it a valuable tool in signal processing and data analysis. By understanding and mastering the Split-radix FFT algorithm, you can unlock the full potential of this powerful technique.
# Flow analysis and path equations
Batcher's network is a mathematical model used to analyze the flow of data in a parallel processing system. To understand Batcher's network, we first need to analyze the flow of data and identify the paths that data takes through the network.
Flow analysis is the process of analyzing the flow of data in a network. It involves calculating the number of tokens that pass through each node in the network. This can be done using path equations, which are mathematical equations that describe the flow of tokens through the network.
For example, consider a simple network with three nodes A, B, and C. The flow of data from node A to node B is represented by the equation:
$$
\frac{dA}{dt} = \frac{1}{2}(BA + CA) - \frac{1}{2}(AB + AC)
$$
In this equation, $dA/dt$ represents the rate of change of tokens in node A, $BA$ and $CA$ are the number of tokens entering node A from nodes B and C, respectively, and $AB$ and $AC$ are the number of tokens leaving node A to nodes B and C, respectively.
To calculate the flow of data in a network, we need to solve these path equations. This can be done using various techniques, such as matrix multiplication, substitution, or iterative methods.
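As an illustration of the iterative approach, the sketch below models token flow with a forwarding matrix and repeatedly applies it until the distribution settles. The three-node network and its forwarding fractions are made-up assumptions for demonstration, not taken from the text:

```python
import numpy as np

# Hypothetical 3-node token-flow network. P[i][j] is the fraction of
# node i's tokens forwarded to node j at each step.
P = np.array([
    [0.0, 0.5, 0.5],   # A sends half of its tokens to B and half to C
    [0.0, 0.0, 1.0],   # B forwards everything to C
    [1.0, 0.0, 0.0],   # C returns everything to A
])

tokens = np.array([8.0, 0.0, 0.0])   # all eight tokens start in node A
for _ in range(200):                 # iterate the flow until it settles
    tokens = tokens @ P

print(np.round(tokens, 3))           # steady-state token distribution
```

With these forwarding fractions the eight tokens settle at [3.2, 1.6, 3.2], and the total token count is conserved at every step — a useful invariant to check in any flow computation.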
## Exercise
Calculate the flow of data for the following network:
```
A ---> B ---> C
| ^
| |
v |
D ---> E ---> F
```
Use the path equations to find the rate of change of tokens in each node.
# Network structure and its properties
The structure of a Batcher's network is defined by the nodes, the number of tokens in each node, and the flow of data between nodes. The properties of a Batcher's network, such as its stability and deadlock-free behavior, can be analyzed using various techniques.
A Batcher's network is stable if the number of tokens in each node remains constant over time. This means that the flow of data into and out of each node is equal. Stability can be analyzed using the path equations and the properties of the network's structure.
A deadlock-free network is a network that does not have any deadlocks. A deadlock occurs when all the tokens in a network are blocked, and no further progress can be made. Deadlock-free behavior can be analyzed using the path equations and the properties of the network's structure.
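The stability condition above — inflow equals outflow at every node — is easy to check mechanically. Here is a minimal sketch; the function name and the edge values in the example are illustrative assumptions:

```python
def is_stable(nodes, flows):
    """Flow-balance check: a network is stable when every node's inflow
    equals its outflow. `flows` maps (src, dst) edges to tokens per step."""
    for n in nodes:
        inflow = sum(v for (s, d), v in flows.items() if d == n)
        outflow = sum(v for (s, d), v in flows.items() if s == n)
        if inflow != outflow:
            return False
    return True

# A balanced three-node ring: every node receives and sends 2 tokens per step.
ring = {("A", "B"): 2, ("B", "C"): 2, ("C", "A"): 2}
print(is_stable(["A", "B", "C"], ring))  # True
```

Dropping any edge's capacity below the others unbalances a node and the check fails, signalling that token counts would drift over time.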
## Exercise
Analyze the stability and deadlock-free behavior of the network from the previous exercise.
# Applications of Batcher's network in parallel processing
Batcher's network has various applications in parallel processing, including the analysis of parallel algorithms, the design of parallel systems, and the optimization of parallel processing performance.
For example, Batcher's network can be used to analyze the performance of parallel algorithms. By analyzing the flow of data in a network that represents a parallel algorithm, we can determine the efficiency of the algorithm and identify any bottlenecks or areas for improvement.
Batcher's network can also be used to design parallel systems. By analyzing the flow of data in a network that represents a parallel system, we can determine the optimal structure and organization of the system to maximize its performance.
Finally, Batcher's network can be used to optimize parallel processing performance. By analyzing the flow of data in a network that represents a parallel processing system, we can identify any inefficiencies or bottlenecks and implement changes to improve the system's performance.
## Exercise
Design a parallel processing system using Batcher's network. Analyze the performance of the system using the path equations and identify any areas for improvement.
# Parallel processing techniques and their implementation
Parallel processing techniques are methods used to analyze and optimize parallel processing systems. These techniques can be implemented using Batcher's network to analyze the flow of data in a network that represents a parallel processing system.
For example, data parallelism is a technique that divides the data into independent subsets and processes each subset concurrently. This can be implemented using Batcher's network by analyzing the flow of data between different nodes in the network and identifying any dependencies between the nodes.
Task parallelism is another technique that divides the computation into independent tasks and processes each task concurrently. This can be implemented using Batcher's network by analyzing the flow of data between different nodes in the network and identifying any dependencies between the nodes.
## Exercise
Implement data parallelism and task parallelism using Batcher's network. Analyze the performance of the parallel processing system using the path equations and identify any areas for improvement.
# Performance analysis and optimization
Performance analysis and optimization are essential aspects of parallel processing systems. Batcher's network can be used to analyze the performance of parallel processing systems and identify areas for improvement.
For example, we can analyze the flow of data in a network that represents a parallel processing system using the path equations. This can help us identify any bottlenecks or inefficiencies in the system.
Once we have identified areas for improvement, we can implement changes to optimize the performance of the parallel processing system. This can include modifying the structure of the network, changing the flow of data between nodes, or implementing new techniques to improve the system's performance.
## Exercise
Analyze the performance of a parallel processing system using Batcher's network. Identify any areas for improvement and implement changes to optimize the system's performance.
\begin{document}
\title[Infinite Dimensional Bounded Real Lemma II]{Standard versus Bounded Real Lemma with infinite-dimensional state space II:\\ The storage function approach}
\author{J.A. Ball} \address{J.A. Ball, Department of Mathematics, Virginia Tech, Blacksburg, VA 24061-0123, USA} \email{[email protected]}
\author{G.J. Groenewald} \address{G.J. Groenewald, Department of Mathematics, Unit for BMI, North-West University, Potchefstroom 2531, South Africa} \email{[email protected]}
\author{S. ter Horst} \address{S. ter Horst, Department of Mathematics, Unit for BMI, North-West University, Potchefstroom 2531, South Africa} \email{[email protected]}
\thanks{This work is based on the research supported in part by the National Research Foundation of South Africa (Grant Numbers 93039, 90670, and 93406).}
\begin{abstract} For discrete-time causal linear input/state/output systems, the Bounded Real Lemma explains (under suitable hypotheses) the contractivity of the values of the transfer function over the unit disk for such a system in terms of the existence of a positive-definite solution of a certain Linear Matrix Inequality (the Kalman-Yakubovich-Popov (KYP) inequality). Recent work has extended this result to the setting of infinite-dimensional state space and associated non-rationality of the transfer function, where at least in some cases unbounded solutions of the generalized KYP-inequality are required. This paper is the second installment in a series of papers on the Bounded Real Lemma and the KYP inequality. We adapt Willems' storage-function approach to the infinite-dimensional linear setting, and in this way reprove various results presented in the first installment, where they were obtained as applications of infinite-dimensional State-Space-Similarity theorems, rather than via explicit computation of storage functions. \end{abstract}
\subjclass[2010]{Primary 47A63; Secondary 47A48, 93B20, 93C55, 47A56}
\keywords{KYP inequality, storage function, bounded real lemma, infinite dimensional linear system, minimal system.}
\maketitle
\section{Introduction}\label{S:intro}
This paper is the second installment, following \cite{KYP1}, on the infinite dimensional bounded real lemma for discrete-time systems and the discrete-time Kalman-Yakubovich-Popov (KYP) inequality. In this context, we consider the discrete-time linear system \begin{equation}\label{dtsystem} \Si:=\left\{ \begin{array}{ccc} {\mathbf x}(n+1)&=&A {\mathbf x}(n)+B {\mathbf u}(n),\\ {\mathbf y}(n)&=&C {\mathbf x}(n)+D {\mathbf u}(n), \end{array} \right. \qquad (n\in\BZ) \end{equation} where $A:\cX\to\cX$, $B:\cU\to\cX$, $C:\cX\to\cY$ and $D:\cU\to\cY$ are bounded linear Hilbert space operators, i.e., $\cX$, $\cU$ and $\cY$ are Hilbert spaces and the {\em system matrix} associated with $\Si$ takes the form \begin{equation}\label{sysmat} M=\mat{cc}{A&B\\ C& D}:\mat{cc}{\cX\\ \cU}\to\mat{c}{\cX\\ \cY}. \end{equation} We refer to the pair $(C,A)$ as the {\em output pair} and to the pair $(A,B)$ as the {\em input pair}. In this case input sequences ${\mathbf u}=({\mathbf u}(n))_{n\in\BZ}$, with ${\mathbf u}(n)\in\cU$, are mapped to output sequences ${\mathbf y}=({\mathbf y}(n))_{n\in\BZ}$, with ${\mathbf y}(n)\in\cY$, through the state sequence ${\mathbf x}=({\mathbf x}(n))_{n\in\BZ}$, with ${\mathbf x}(n)\in \cX$. A system trajectory of the system $\Si$ is then any triple $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n\in\BZ}$ of input, state and output sequences that satisfy the system equations \eqref{dtsystem}.
With the system $\Si$ we associate the {\em transfer function} given by \begin{equation}\label{trans} F_\Si(\lambda)=D+\lambda C(I-\lambda A)^{-1}B. \end{equation} Since $A$ is bounded, $F_\Si$ is defined and analytic on a neighborhood of $0$ in $\BC$. We are interested in the case where $F_\Si$ admits an analytic continuation to the open unit disk
$\BD$ such that the supremum norm $\|F_\Si\|_\infty$ of $F_\Si$ over $\BD$ is at most one, i.e., $F_\Si$ has analytic continuation to a function in the Schur class \[
\cS(\cU, \cY) = \left\{ F \colon {\mathbb D}
\underset{\text{holo}}\mapsto \cL(\cU, \cY) \colon \| F(\lambda) \| \le 1
\text{ for all } \lambda \in {\mathbb D}\right\}. \]
Sometimes we also consider system trajectories $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n\ge n_0}$ of the system $\Si$ that are initiated at a certain time $n_0\in\BZ$, in which case the input, state and output at time $n<n_0$ are set equal to zero, and we only require that the system equations \eqref{dtsystem} are satisfied for $n\geq n_0$. Although technically such trajectories are not system trajectories for $\Si$, but rather correspond to trajectories of the corresponding singly-infinite forward-time system rather than the bi-infinite system $\Si$, the transfer function of this singly-infinite forward-time system coincides with the transfer function $F_\Si$ of $\Si$. Hence for the sake of the objective, determining whether $F_\Si\in \cS(\cU,\cY)$, there is no problem with considering such singly infinite system trajectories.
Before turning to the infinite-dimensional setting, we first discuss the case where $\cU$, $\cX$, $\cY$ are all finite-dimensional. If in this case one considers the parallel situation in continuous time rather than in discrete time, these ideas have origins in circuit theory, specifically conservative or passive circuits. An important question in this context is to identify which rational matrix functions, analytic on the left half-plane (rather than the unit disk $\BD$), arise from a lossless or dissipative circuit in this way (see e.g. Belevitch \cite{Bel}).
According to Willems \cite{Wil72a, Wil72b}, a linear system $\Sigma$ as in \eqref{dtsystem}
is {\em dissipative} (with respect to {\em supply rate} $s(u,y) = \| u \|^2 - \| y \|^2$) if it has a {\em storage function} $S \colon \cX \to {\mathbb R}_+$, where $S(x)$ is to be interpreted as a measure of the {\em energy} stored by the system when it is in state $x$. Such a storage function $S$ is assumed to satisfy the dissipation inequality \begin{equation} \label{diss}
S({\mathbf x}(n+1)) - S({\mathbf x}(n)) \le \|{\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2 \end{equation}
over all trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\in\BZ}$ of the system $\Sigma$ as well as the additional normalization condition that $S(0) = 0$. The dissipation inequality can be interpreted as saying that for the given system trajectory, the energy stored in the system ($S({\mathbf x}(n+1)) - S({\mathbf x}(n))$) when going from state $x(n)$ to $x(n+1)$ can be no more than the difference between the energy that enters the system ($\|{\mathbf u}(n) \|^2$) and the energy that leaves the system ($\| {\mathbf y}(n) \|^2$) at time $n$.
For our discussion here we shall only be concerned with the so-called {\em scattering supply rate} $s(u,y)
= \| u \|^2 - \| y \|^2$.
It is not hard to see that a consequence of the dissipation inequality \eqref{diss} on system trajectories is that the transfer function $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$. The results extend to nonlinear systems as well (see \cite{Wil72a}), where one talks about the system having $L^2$-gain at most $1$ rather than the system having transfer function in the Schur class.
In case the system $\Sigma$ is finite-dimensional and minimal (as defined in the statement of Theorem \ref{T:BRLfinstan} below), one can show that the smallest storage function, the {\em available storage} $S_a$, and the largest storage function, the {\em required supply} $S_r$, are {\em quadratic}, provided storage functions for $\Si$ exist. That $S_a$ and $S_r$ are quadratic means that there are positive-definite matrices $H_a$ and $H_r$ so that $S_a$ and $S_r$ have the quadratic form $$
S_a(x) = \langle H_a x, x \rangle, \quad S_r(x) = \langle H_r x, x \rangle $$ with $H_a$ and $H_r$ actually being positive-definite. For a general quadratic storage function $S_H(x) = \langle H x, x \rangle$ for a positive-definite matrix $H$, it is not hard to see that the dissipation inequality \eqref{diss} assumes the form of a linear matrix inequality (LMI): \begin{equation} \label{KYP1} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{*} \begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix} \begin{bmatrix} A & B \\ C & D\end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU}\end{bmatrix}. \end{equation} This is what we shall call the {\em Kalman-Yakubovich-Popov} or KYP inequality (with solution $H$ for given system matrix $M = \sbm{ A & B \\ C & D}$).
Conversely, if one starts with a finite-dimensional, minimal, linear system $\Si$ as in \eqref{dtsystem} for which the transfer function $F_\Sigma$ is in the Schur-class, it is possible to show that there exist quadratic storage functions $S_H$ for the system satisfying the coercivity condition
$S_H(x) \ge \delta \| x \|^2$ for some $\delta > 0$ (i.e., with $H$ strictly positive-definite). This is the storage-function interpretation behind the following result, known as the {\em Kalman-Yakubovich-Popov lemma}.
\begin{theorem}[Standard Bounded Real Lemma (see \cite{AV})] \label{T:BRLfinstan} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with $\cX$, $\cU$ and $\cY$ finite dimensional, say $\cU = {\mathbb C}^{r}$, $\cY = {\mathbb C}^{s}$, $\cX = {\mathbb C}^{n}$, so that the system matrix $M$ has the form \begin{equation}\label{findimsys} M = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \colon
\begin{bmatrix} {\mathbb C}^{n} \\ {\mathbb C}^{r} \end{bmatrix} \to
\begin{bmatrix} {\mathbb C}^{n} \\ {\mathbb C}^{s} \end{bmatrix} \end{equation} and the transfer function $F_{\Sigma}$ is equal to a rational matrix function of size $s \times r$. Assume that the realization $(A,B,C,D)$ is {\em minimal}, i.e., the output pair $(C,A)$ is {\em observable} and the input pair $(A,B)$ is {\em controllable}: \begin{equation}\label{obscontr}
\bigcap_{k=0}^{n} \textup{Ker\,} C A^{k} = \{0\}\quad\mbox{and}\quad \textup{span}_{k=0,1,\dots, n-1} \textup{Im\,} A^{k} B = \cX = {\mathbb C}^{n}. \end{equation} Then $F_{\Sigma}$ is in the Schur class $\cS({\mathbb C}^{r}, {\mathbb C}^{s})$ if and only if there is an $n \times n$ positive-definite matrix $H$ satisfying the KYP-inequality \eqref{KYP1}. \end{theorem}
There is also a {\em strict} version of the Bounded Real Lemma. The associated storage function required is a {\em strict storage function}, i.e., a function $S \colon \cX \to {\mathbb R}_+$ for which there is a number $\delta > 0$ so that \begin{equation} \label{diss-strict}
S({\mathbf x}(n+1)) - S({\mathbf x}(n)) + \delta \| x(n) \|^2 \le (1- \delta) \| {\mathbf u}(n)\|^2 - \| {\mathbf y}(n) \|^2 \end{equation} holds over all system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\in\BZ}$, in addition to the normalization condition $S(0)=0$. If $S_H(x) = \langle H x, x \rangle$ is a quadratic strict storage function, then the associated linear matrix inequality is the {\em strict KYP-inequality} \begin{equation} \label{KYP2} \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{*} \begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix} \begin{bmatrix} A & B \\ C & D\end{bmatrix} \prec \begin{bmatrix} H & 0 \\ 0 & I_{\cU}\end{bmatrix}. \end{equation} In this case, one also arrives at a stronger condition on the transfer function $F_\Si$, namely that it has an analytic continuation to a function in the {\em strict Schur class}: \[
\cS^{o}(\cU, \cY) =\left \{ F \colon {\mathbb D} \underset{\text{holo}}
\mapsto \cL(\cU, \cY) \colon \sup_{z \in {\mathbb D}} \| F(z) \| \le \rho \text{ for some }
\rho < 1\right\}. \] Note, however, that the strict KYP-inequality implies that $A$ is stable, so that in case \eqref{KYP2} holds, $F_\Si$ is in fact analytic on $\BD$. This is the storage-function interpretation of the following strict Bounded Real Lemma, in which one replaces the minimality condition with a stability condition.
\begin{theorem}[Strict Bounded Real Lemma (see \cite{PAJ})]\label{T:BRLfinstrict} Suppose that the dis\-crete-time linear system $\Si$ is as in \eqref{dtsystem} with $\cX$, $\cU$ and $\cY$ finite dimensional, say $\cU = {\mathbb C}^{r}$, $\cY = {\mathbb C}^{s}$, $\cX = {\mathbb C}^{n}$, i.e., the system matrix $M$ is as in \eqref{findimsys}. Assume that $A$ is {\em stable}, i.e., all eigenvalues of $A$ are inside the open unit disk $\BD$, so that $r_\textup{spec}(A) < 1$ and the transfer function $F_{\Si}(z)$ is analytic on a neighborhood of $\overline{\BD}$. Then $F_{\Si}(z)$ is in the strict Schur class $\cS^{o}({\mathbb C}^{r}, {\mathbb C}^{s})$ if and only if there is a positive-definite matrix $H \in {\mathbb C}^{n \times n}$ so that the strict KYP-inequality \eqref{KYP2} holds. \end{theorem}
We now turn to the general case, where the state space $\cX$ and the input space $\cU$ and output space $\cY$ are allowed to be infinite-dimensional. In this case, the results are more recent, depending on the precise hypotheses.
For generalizations of Theorem \ref{T:BRLfinstan}, much depends on what is meant by minimality of $\Si$, and hence by the corresponding notions of controllable and observable. Here are the three possibilities for controllability of an input pair $(A,B)$ which we shall consider. The third notion involves the controllability operator $\bW_c$ associated with the pair $(A,B)$, tailored to the Hilbert space setup, which in general is a closed, possibly unbounded operator with domain $\cD(\bW_c)$ dense in $\cX$ mapping into the Hilbert space $\ell^2_\cU({\mathbb Z}_-)$ of $\cU$-valued sequences supported on the negative integers ${\mathbb Z}_- =\{ -1, -2, -3, \dots \}$, as well as the observability operator $\bW_o$ associated with the pair $(C,A)$, which has similar properties. We postpone precise definitions and properties of these operators to Section \ref{S:review}.
For an input pair $(A,B)$ we define the following notions of controllability: \begin{itemize} \item $(A,B)$ is {\em (approximately) controllable} if the reachability space \begin{equation} \label{ReachSpace}
\operatorname{Rea}\,(A|B) = \operatorname{span}\{\textup{Im\,} A^k B \colon k=0,1,2,\dots\} \end{equation}
is dense in $\cX$.
\item $(A,B)$ is {\em exactly controllable} if the reachability space $\operatorname{Rea}\,(A|B)$ is equal to $\cX$, i.e., each state vector $x \in \cX$ has a representation as a finite linear combination $x = \sum_{k=0}^K A^k B u_k$ for a choice of finitely many input vectors $u_0, u_1, \dots, u_K$ (in other words, every $x$ is a {\em finite-time reachable state}; see \cite[Definition 3.3]{OpmeerStaffans2008}).
\item $(A,B)$ is {\em $\ell^2$-exactly controllable} if the $\ell^2$-adapted controllability operator $\bW_c$ has range equal to all of $\cX$: $ \bW_c\, \cD(\bW_c) = \cX$. \end{itemize} If $(C,A)$ is an output pair, we have the dual notions of observability: \begin{itemize} \item $(C,A)$ is {\em (approximately) observable} if the input pair $(A^*, C^*)$ is (approximately) controllable, i.e., if the observability space \begin{equation} \label{ObsSpace}
\operatorname{Obs}\,(C|A) = \operatorname{span} \{ \textup{Im\,} A^{*k} C^* \colon k=0,1,2,\dots\} \end{equation} is dense in $\cX$, or equivalently, if $\cap_{k=0}^\infty \ker C A^k = \{0\}$.
\item $(C,A)$ is {\em exactly observable} if the observability subspace $\operatorname{Obs}\,(C|A)$ is the whole space $\cX$.
\item $(C,A)$ is {\em $\ell^2$-exactly observable} if the adjoint input pair $(A^*, C^*)$ is $\ell^2$-exactly controllable, i.e., if the adjoint $\bW_o^*$ of the $\ell^2$-adapted observability operator $\bW_o$ has full range: $\bW_o^*\, \cD(\bW_o^*) = \cX$. \end{itemize} Then we say that the system $\Sigma \sim (A,B,C,D)$ is \begin{itemize} \item {\em minimal} if $(A,B)$ is controllable and $(C,A)$ is observable, \item {\em exactly minimal} if both $(A,B)$ is exactly controllable and $(C,A)$ is exactly observable, and \item {\em $\ell^2$-exactly minimal} if both $(A,B)$ is $\ell^2$-exactly controllable and $(C,A)$ is $\ell^2$-exactly observable. \end{itemize}
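When $\dim \cX < \infty$, the three notions of controllability coincide and reduce to the classical Kalman rank condition $\operatorname{rank}\, [B \ \ AB \ \cdots \ A^{n-1}B] = n$. A minimal numerical sketch, with illustrative $2 \times 2$ data not taken from the text:

```python
# Kalman rank test on a 2-dimensional state space: (A, B) is (exactly)
# controllable iff [B, AB] has rank 2.  The matrices are illustrative only.
A = [[0.5, 1.0], [0.0, 0.3]]

def mat_vec(M, v):
    # multiply a 2x2 matrix by a vector of length 2
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def controllable(B):
    AB = mat_vec(A, B)
    det = B[0] * AB[1] - B[1] * AB[0]  # det [B, AB]
    return abs(det) > 1e-12

assert not controllable([1.0, 0.0])  # Rea(A|B) = span{e1}, a proper subspace
assert controllable([0.0, 1.0])      # Rea(A|B) is all of the state space
```

In infinite dimensions the three notions genuinely differ, which is precisely why the distinctions above are needed.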
Despite the fact that the operators $A$, $B$, $C$ and $D$ associated with the system $\Si$ are all bounded, in the infinite dimensional analogue of the KYP-inequality \eqref{KYP1} unbounded solutions $H$ may appear. We therefore have to be more precise concerning the notion of positive-definiteness we employ. Suppose that $H$ is a (possibly unbounded) selfadjoint operator on a Hilbert space $\cX$ with domain $\cD(H)$ dense in $\cX$; we refer to \cite{RS} for background and details on this class and other classes of unbounded operators. Then we shall say: \begin{itemize}
\item $H$ is {\em strictly positive-definite} (written $H \succ 0$) if there is a $\delta > 0$ so that
$\langle Hx, x \rangle \ge \delta \| x \|^2$ for all $x \in \cD(H)$;
\item $H$ is {\em positive-definite} if $\langle H x, x \rangle > 0$ for all nonzero $x \in \cD(H)$;
\item $H$ is {\em positive-semidefinite} (written $H \succeq 0$) if $\langle H x, x \rangle \ge0$ for all $x \in \cD(H)$.
\end{itemize} We also note that any (possibly unbounded) positive-semidefinite operator $H$ has a positive-semidefinite square root $H^\half$; as $H = H^\half \cdot H^\half$, we have $$
\cD(H) = \{ x \in \cD(H^\half) \colon H^\half x \in \cD(H^\half) \} \subset \cD(H^\half).
$$
See e.g.\ \cite{RS} for details.
Since solutions $H$ to the corresponding KYP-inequality may be unbounded, the KYP-inequality cannot necessarily be written in the LMI form \eqref{KYP1}; rather, we require a spatial form of \eqref{KYP1} on the appropriate domain: for a (possibly unbounded) positive-definite operator $H$ on $\cX$ satisfying \begin{equation} \label{KYP1b'} A \cD(H^{\half}) \subset \cD(H^{\half}), \quad B \cU \subset \cD(H^{\half}), \end{equation} the spatial KYP-inequality reads: \begin{equation}\label{KYP1b}
\left\| \begin{bmatrix} H^{\half} \! & \! 0 \\ 0 \! & \! I_{\cU}
\end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2}
- \left\| \begin{bmatrix} H^{\half} \! & \! 0 \\ 0 \! & \! I_{\cY} \end{bmatrix} \begin{bmatrix} A\! & \! B \\ C \! & \! D \end{bmatrix}
\begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} \ge 0 \ \ (x \in \cD(H^{\half}),\, u \in \cU). \end{equation} The corresponding notion of a storage function will then be allowed to assume $+\infty$ as a value; this will be made precise in Section \ref{S:Storage}.
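For bounded $H$ the spatial inequality \eqref{KYP1b} is simply the quadratic-form version of \eqref{KYP1}. The following sketch (scalar illustrative data, with $H = h$ a positive number, none of it from the text) samples the quadratic form on a grid of state-input pairs:

```python
import itertools

# Spatial KYP-inequality for a scalar system with bounded H = h > 0:
# the quadratic form  h|x|^2 + |u|^2 - h|Ax + Bu|^2 - |Cx + Du|^2
# must be nonnegative for all (x, u).  Illustrative data, sampled on a grid.
a, b, c, d = 0.5, 0.5, 0.5, 0.2
h = 1.0

def kyp_gap(x, u):
    x_next = a * x + b * u   # state update
    y = c * x + d * u        # output
    return h * x * x + u * u - (h * x_next * x_next + y * y)

grid = [k / 4.0 for k in range(-8, 9)]
assert all(kyp_gap(x, u) >= -1e-9 for x, u in itertools.product(grid, grid))
```

In the unbounded case the same inequality is imposed only for $x \in \cD(H^\half)$, which is where the domain conditions \eqref{KYP1b'} enter.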
With all these definitions out of the way, we can state the following three distinct generalizations of Theorem \ref{T:BRLfinstan} to the infinite-dimensional situation.
\begin{theorem}[Infinite-dimensional standard Bounded Real Lemma] \label{T:BRLinfstan} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat} and transfer function $F_\Si$ defined by \eqref{trans}. \begin{enumerate} \item[(1)] Suppose that the system $\Si$ is minimal, i.e., the input pair $(A,B)$ is controllable and the output pair $(C,A)$ is observable. Then the transfer function $F_{\Sigma}$ has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ if and only if there exists a positive-definite solution $H$ of the KYP-inequality in the following generalized sense: $H$ is a closed, possibly unbounded, densely defined, positive-definite (and hence injective) operator on $\cX$ such that $\cD(H^\half)$ satisfies \eqref{KYP1b'} and $H$ solves the spatial KYP-inequality \eqref{KYP1b}.
\item[(2)] Suppose that $\Sigma$ is exactly minimal. Then the transfer function $F_{\Sigma}$ has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ if and only if there exists a bounded, strictly positive-definite solution $H$ of the KYP-inequality \eqref{KYP1}. In this case $A$ has a spectral radius of at most one, and hence $F_{\Sigma}$ is in fact analytic on $\BD$.
\item[(3)] Statement {\rm(}2{\rm)} above continues to hold if the ``exactly minimal'' hypothesis is replaced by the hypothesis that $\Sigma$ be ``$\ell^2$-exactly minimal.''
\end{enumerate} \end{theorem}
We shall refer to a closed, densely defined, positive-definite solution $H$ of \eqref{KYP1b'}--\eqref{KYP1b} as a positive-definite solution of the {\em generalized KYP-inequality}.
The paper of Arov-Kaashoek-Pik \cite{AKP06} gives a penetrating treatment of item (1) in Theorem \ref{T:BRLinfstan}, including examples to illustrate various subtleties surrounding this result---e.g., the fact that the result can fail if one insists on classical bounded and boundedly invertible selfadjoint solutions of the KYP-inequality. We believe that items (2) and (3) appeared for the first time in \cite{KYP1}, where a sketch of the proof of item (1) is also given. The idea behind the proofs of items (1)--(3) in \cite{KYP1} is to combine the result that a Schur-class function $S$ always has a contractive realization (i.e., such an $S$ can be realized as $S = F_\Sigma$ for a system $\Sigma$ as in \eqref{dtsystem} with system matrix $M$ in \eqref{sysmat} a contraction operator) with variations of the State-Space-Similarity Theorem (see \cite[Theorem 1.5]{KYP1}) for the infinite-dimensional situation under the conditions that hold in items (1)--(3); roughly speaking, under appropriate hypotheses, a State-Space-Similarity Theorem says that two systems $\Si$ and $\Si'$ whose transfer functions coincide on a neighborhood of zero can necessarily be transformed (in an appropriate sense) from one to the other via a change of state-space coordinates.
In the present paper we revisit these three results from a different point of view: we adapt Willems' variational formulas to the infinite dimensional setting, and in this context present the available storage $S_a$ and required supply $S_r$, as well as an $\ell^2$-regularized version $\underline{S}_r$ of the required supply. It is shown, under appropriate hypotheses, that these are storage functions, with $S_a$ and $\underline{S}_r$ being quadratic storage functions, i.e., $S_a$ agrees with $S_{H_a}(x)=\|H_a^\half x\|^2$ and
$\underline{S}_r(x)=S_{H_r}(x)=\|H_r^\half x \|^2$ for $x$ in a suitably large subspace of $\cX$, where $H_a$ and $H_r$ are possibly unbounded, positive-definite operators, which turn out to be positive-definite solutions to the generalized KYP-inequality. In this way we will arrive at a proof of item (1). Further analysis of the behavior of $H_a$ and $H_r$, under additional restrictions on $\Si$, leads to proofs of items (2) and (3), as well as the following version of the strict Bounded Real Lemma for infinite dimensional systems, which is a much more straightforward generalization of the result in the finite-dimensional case (Theorem \ref{T:BRLfinstrict}).
\begin{theorem}[Infinite-dimensional strict Bounded Real Lemma] \label{T:BRLinfstrict} Let $\Si$ be a dis\-crete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat} and transfer function $F_\Si$ defined by \eqref{trans}. Assume that $A$ is exponentially stable, i.e., $r_\textup{spec}(A) < 1$. Then the transfer function $F_{\Sigma}$ is in the strict Schur class $\cS^{o}(\cU, \cY)$ if and only if there exists a bounded strictly positive-definite solution $H$ of the strict KYP-inequality \eqref{KYP2}. \end{theorem}
Theorem \ref{T:BRLfinstrict} was proved by Petersen-Anderson-Jonckheere \cite{PAJ} for the con\-tinuous-time finite-dimensional setting by using what we shall call an $\epsilon$-regulariza\-tion procedure to reduce the result to the standard case (Theorem \ref{T:BRLfinstan}). In \cite{KYP1} we show how this same idea can be used in the infinite-dimensional setting to reduce the hard direction of Theorem \ref{T:BRLinfstrict} to the result of either item (2) or item (3) in Theorem \ref{T:BRLinfstan}. For the more general nonlinear setting, Willems \cite{Wil72a} was primarily interested in what storage functions look like assuming that they exist, while in \cite{Wil72b} for the finite-dimensional linear setting he reduced the existence problem to the existence theory for Riccati matrix equations. Here we solve the existence problem for the more general infinite-dimensional linear setting by converting Willems' variational formulation of the available storage $S_a$ and an $\ell^2$-regularized version $\underline{S}_r$ of his required supply $S_r$ to an operator-theoretic formulation amenable to explicit analysis.
This paper presents a more unified approach to the different variations of the Bounded Real Lemma, in the sense that we present a pair of concretely defined, unbounded, positive-definite operators $H_a$ and $H_r$ that, under the appropriate conditions, form positive-definite solutions to the generalized KYP-inequality, and that have the required additional features under the additional conditions in items (2) and (3) of Theorem \ref{T:BRLinfstan} as well as Theorem \ref{T:BRLinfstrict}. We also make substantial use of connections with corresponding objects for the adjoint system $\Sigma^*$ (see \eqref{dtsystem*}) to complete the analysis and arrive at some order properties for the set of all solutions of the generalized KYP-inequality which are complementary to those in \cite{AKP06}.
The paper is organized as follows. Besides the current introduction, the paper consists of seven sections. In Section \ref{S:review}
we give the definitions of the observability operator $\bW_o$ and controllability operator $\bW_c$ associated with the system $\Si$
in \eqref{dtsystem} and recall some of their basic properties. In Section \ref{S:Storage} we define what is meant by a storage function
in the context of infinite dimensional discrete-time linear systems $\Si$ of the form \eqref{dtsystem} as well as strict and quadratic
storage functions and we clarify the relations between quadratic (strict) storage functions and solutions to the (generalized)
KYP-inequality. Section \ref{S:ASRS} is devoted to the available storage $S_a$ and required supply $S_r$, two examples of
storage functions, in case the transfer function of $\Si$ has an analytic continuation to a Schur class function. It is shown that $S_a$
and an $\ell^2$-regularized version $\underline{S}_r$ of $S_r$ in fact agree with quadratic storage functions on a suitably
large domain via explicit constructions of two closed, densely defined, positive-definite operators $H_a$ and $H_r$
that exhibit $S_a$ and $\underline{S}_r$ as quadratic storage functions $S_{H_a}$ and $S_{H_r}$. In Section \ref{S:dual} we
make explicit the theory for the adjoint system $\Sigma^*$ and the duality connections between $\Sigma$ and $\Sigma^*$.
In Section \ref{S:order} we study the order properties of a class of solutions of the generalized KYP-inequality, and
obtain the conditions under which $H_a$ and $H_r$ are bounded and/or boundedly invertible and thereby
solutions of the classical KYP-inequality. These results are then used in Section \ref{S:BRLproof} to give
proofs of Theorems \ref{T:BRLinfstan} and \ref{T:BRLinfstrict} via the storage function approach.
\section{Review: minimality, controllability, observability} \label{S:review}
In this section we recall the definitions of the observability operator $\bW_o$ and controllability operator $\bW_c$ associated with the discrete-time linear system $\Si$ given by \eqref{dtsystem}, together with several of their basic properties which will be needed in the sequel. Detailed proofs of most of these results as well as additional properties can be found in \cite[Section 2]{KYP1}.
For the case of a general system $\Sigma$, following \cite[Section 2]{KYP1}, we define the {\em observability operator} $\bW_{o}$ associated with $\Si$ to be the possibly unbounded operator with domain $\cD(\bW_{o})$ in $\cX$ given by \begin{equation} \label{bWo1} \cD(\bW_{o}) = \{ x \in \cX \colon \{ C A^{n} x\}_{n \ge 0} \in\ell^{2}_{\cY}({\mathbb Z}_{+})\} \end{equation} with action given by \begin{equation} \label{bWo2} \bW_{o} x = \{ C A^{n} x\}_{n \ge 0} \text{ for } x \in \cD(\bW_{o}). \end{equation} Dually, we define the {\em adjoint controllability operator} $\bW_{c}^{*}$ associated with $\Si$ to have domain \begin{equation} \label{bWc*1} \cD(\bW_{c}^{*}) = \{ x \in \cX \colon \{B^* A^{*(-n-1)} x\}_{n\le -1} \in\ell^{2}_{\cU}({\mathbb Z}_{-})\} \end{equation} with action given by \begin{equation} \label{bWc*2} \bW_{c}^{*} x = \{B^* A^{*(-n-1)} x\}_{n\le -1} \text{ for } x \in \cD(\bW_{c}^{*}). \end{equation} It is directly clear from the definitions of $\bW_o$ and $\bW_c^*$ that \begin{equation}\label{KerWoWc}
\ker \bW_o=\operatorname{Obs}\,(C|A)^\perp\quad\mbox{and}\quad
\ker \bW_c^*=\operatorname{Rea}\,(A|B)^\perp. \end{equation}
We next summarize the basic properties of $\bW_c$ and $\bW_o$.
\begin{proposition}[Proposition 2.1 in \cite{KYP1}] \label{P:WcWo'} Let $\Sigma$ be a system as in \eqref{dtsystem} with observability operator $\bW_o$ and adjoint controllability operator $\bW_c^*$ as in \eqref{bWo1}--\eqref{bWc*2}. Basic properties of the observability operator $\bW_o$ are: \begin{enumerate} \item[(1)] It is always the case that $\bW_o$ is a closed operator on its domain \eqref{bWo1}.
\item[(2)] If $\cD(\bW_o)$ is dense in $\cX$, then the adjoint $\bW_o^*$ of $\bW_o$ is a closed and densely defined operator, by a general property of adjoints of closed operators with dense domain. Concretely for the case here, $\cD(\bW_o^*)$ contains the dense linear manifold $\ell_{\tu{fin},\cY}(\BZ_+)$ consisting of finitely supported sequences in $\ell^2_\cY(\BZ_+)$. In general, one can characterize $\cD(\bW_{o}^{*})$ explicitly as the set of all ${\mathbf y} \in \ell^{2}_{\cY}({\mathbb Z}_{+})$ such that there exists a vector $x_{o} \in \cX$ such that the limit $$ \lim_{K \to \infty}\langle x, \sum_{k=0}^{K} A^{*k} C^{*} {\mathbf y}(k)
\rangle_{\cX}
$$
exists for each $x \in \cD(\bW_o)$ and is given by \begin{equation} \label{limit-o}
\lim_{K \to \infty}\langle x, \sum_{k=0}^{K} A^{*k} C^{*} {\mathbf y}(k)
\rangle_{\cX} = \langle x, x_{o} \rangle_{\cX}, \end{equation} with action of $\bW_o^*$ then given by \begin{equation} \label{Wo*act}
\bW_{o}^{*} {\mathbf y} = x_{o}
\end{equation}
where $x_{o}$ is as in \eqref{limit-o}.
In particular, $\ell_{\tu{fin}, \cY}({\mathbb Z}_+)$ is contained in $\cD(\bW_o^*)$ and the observability space defined in \eqref{ObsSpace} is given by $$
\operatorname{Obs}\, (C|A) = \bW_{o}^{*} \ell_{\tu{fin}, \cY}({\mathbb Z}_{+}). $$ Thus, if in addition $(C,A)$ is observable, then $\bW_o^*$ has dense range. \end{enumerate} Dual properties of the controllability operator $\bW_c^*$ are: \begin{enumerate} \item[(3)] It is always the case that the adjoint controllability operator $\bW_c^*$ is closed on its domain \eqref{bWc*1}.
\item[(4)] If $\cD(\bW_c^*)$ is dense in $\cX$, then the controllability operator $\bW_c = (\bW_c^*)^*$ is closed and densely defined by a general property of the adjoint of a closed and densely defined operator. Concretely for the case here, $\cD(\bW_c)$ contains the dense linear manifold $\ell_{\tu{fin},\cU}(\BZ_-)$ of finitely supported sequences in $\ell^2_\cU(\BZ_-)$. In general, one can characterize $\cD(\bW_{c})$ explicitly as the set of all ${\mathbf u} \in \ell^{2}_{\cU}({\mathbb Z}_{-})$ such that there exists a vector $x_{c} \in \cX$ so that $$ \lim_{K \to \infty} \langle x, \sum_{k=-K}^{-1} A^{-k-1} B {\mathbf u}(k)
\rangle_{\cX}
$$
exists for each $x \in \cD(\bW_{c}^{*})$ and is given by \begin{equation} \label{limit-c} \lim_{K \to \infty} \langle x, \sum_{k=-K}^{-1} A^{-k-1} B {\mathbf u}(k)
\rangle_{\cX} = \langle x, x_{c} \rangle_{\cX}, \end{equation} and action of $\bW_{c}$ then given by \begin{equation} \label{Wc-act}
\bW_{c} {\mathbf u} = x_{c} \end{equation} where $x_{c}$ is as in \eqref{limit-c}.
In particular, the reachability space
$\operatorname{Rea}\, (A|B)$ is equal to $\bW_{c} \ell_{{\rm fin}, \cU}({\mathbb Z}_{-})$. Thus, if in addition $(A,B)$ is controllable, then $\bW_c$ has dense range. \end{enumerate} \end{proposition}
For systems $\Si$ as in \eqref{dtsystem}, without additional conditions, it can happen that $\bW_o$ and/or $\bW_c^*$ are not densely defined, and therefore the adjoints $\bW_o^*$ and $\bW_c$ are at best linear relations and difficult to work with. However, our interest here is the case where the transfer function $F_\Sigma$ has analytic continuation to a bounded function on the unit disk (or even in the Schur class, i.e., norm-bounded by $1$ on the unit disk). In this case the multiplication operator \begin{equation} \label{mult-op}
M_{F_\Sigma} \colon f(\lambda) \mapsto F_\Sigma(\lambda) f(\lambda) \end{equation} is a bounded operator from $L^2_\cU({\mathbb T})$ to $L^2_\cY({\mathbb T})$ and hence also its compression to a map ``from past to future'' \begin{equation} \label{freq-Hankel}
{\mathbb H}_{F_\Sigma} = P_{H^2_\cY({\mathbb D})} M_{F_\Sigma}|_{H^2_\cU({\mathbb D})^\perp}, \end{equation}
often called the {\em Hankel operator} with symbol $F_\Sigma$, is also bounded (by $\| M_{F_\Sigma} \|$). If we take inverse $Z$-transform to represent $L^2({\mathbb T})$ as $\ell^2({\mathbb Z})$, $H^2({\mathbb D})$ as $\ell^2({\mathbb Z}_+)$ and $H^2({\mathbb D})^\perp$ as $\ell^2({\mathbb Z}_-)$, then the frequency-domain Hankel operator $$ {\mathbb H}_{F_\Sigma} \colon H^2_\cU({\mathbb D})^\perp \to H^2_\cY({\mathbb D}) $$ given by \eqref{freq-Hankel} transforms via inverse $Z$-transform to the time-domain Hankel operator $\fH_{F_\Sigma}$ with matrix representation \begin{equation} \label{Hankel-matrix}
\fH_{F_\Sigma} = [ C A^{i-j-1} B ]_{i \ge 0, j<0} \colon \ell^2_\cU({\mathbb Z}_-) \to \ell^2_\cY({\mathbb Z}_+). \end{equation} We conclude that the Hankel matrix $\fH_{F_\Sigma}$ is bounded as an operator from $\ell^2_\cU({\mathbb Z}_-)$ to $\ell^2_\cY({\mathbb Z}_+)$ whenever $F_\Sigma$ has analytic continuation to an $H^\infty$ function. From the matrix representation \eqref{Hankel-matrix} we see that the Hankel matrix formally has a factorization \begin{equation} \label{formal-Hank-fact}
\fH_{F_\Sigma} = \tu{col} [C A^i]_{i \ge 0} \cdot \tu{row} [A^{-j-1} B]_{j<0} = \bW_o \cdot \bW_c. \end{equation}
It can happen that $\fH_{F_\Sigma}$ is bounded while $\bW_o$ and $\bW_c$ are unbounded. Nevertheless, from the fact that $\fH_{F_\Sigma}$ is bounded one can see that $\operatorname{Rea}\, (A|B)$ is contained in $\cD(\bW_o)$ and $$
\fH_{F_\Sigma} {\mathbf u} = \bW_o \left( \sum_{k=K}^{-1} A^{-1-k} B {\mathbf u}(k) \right) \in \ell^2_\cY({\mathbb Z}_+) $$ for each finitely supported input string ${\mathbf u}(K), \dots, {\mathbf u}(-1)$. If we assume that $(A,B)$ is controllable, we conclude that $\bW_o$ is densely defined. Similarly, by working with boundedness of $\fH_{F_\Sigma}^*$
one can show that boundedness of $F_\Sigma$ on ${\mathbb D}$ leads to $\cD(\bW_c^*)$ containing the observability space $\operatorname{Obs}\,(C|A)$; hence if we assume that $(C,A)$ is observable, we get that $\bW_c^*$ is densely defined. With these observations in hand, the following precise version of the formal factorization
\eqref{formal-Hank-fact} for the case where $\bW_o$ and $\bW_c$ may be unbounded becomes plausible.
\begin{proposition}[Corollary 2.4 and Proposition 2.6 in \cite{KYP1}] \label{P:HankelDecs}
Suppose that the system $\Sigma$ given by \eqref{dtsystem} has transfer function $F_\Sigma$ with analytic continuation
to an $H^\infty$-function on the unit disk ${\mathbb D}$.
\begin{enumerate}
\item[(1)]
Assume that $\cD(\bW_c^*)$ is dense in $\cX$ {\rm(}as is the case if $(C,A)$ is observable{\rm)}. Then
$\cD(\bW_{o})$ contains $\textup{Im\,} \bW_{c} = \bW_{c} \cD(\bW_{c})$ and \begin{equation} \label{HankDec1}
\fH_{F_{\Sigma}}|_{\cD(\bW_{c})} = \bW_{o} \bW_{c}. \end{equation} In particular, as $\ell_{{\rm fin}, \cU}({\mathbb Z}_-) \subset \cD(\bW_c)$ and $\bW_c
\ell_{{\rm fin}, \cU}({\mathbb Z}_-) = \operatorname{Rea}\,(A|B)$ {\rm(}from Proposition \ref{P:WcWo'} {\rm(4))}, it follows that $\operatorname{Rea}\,(A|B) \subset \cD(\bW_o)$.
\item[(2)] Assume that $\cD(\bW_o)$ is dense in $\cX$ {\rm(}as is the case if $(A,B)$ is controllable{\rm)}. Then $\cD(\bW_{c}^{*})$ contains $\textup{Im\,} \bW_{o}^{*} = \bW_{o}^{*}\cD(\bW_{o}^{*})$ and \begin{equation} \label{HankDec2}
\fH_{F_{\Sigma}}^{*}|_{\cD(\bW_{o}^{*})} = \bW_{c}^{*} \bW_{o}^{*}. \end{equation} In particular, as $\ell_{{\rm fin}, \cY}({\mathbb Z}_+) \subset \cD(\bW_o^*)$ and $\bW_o^* \ell_{{\rm fin}, \cY}({\mathbb Z}_+)
= \operatorname{Obs}\,(C|A)$ {\rm(}from Proposition \ref{P:WcWo'} {\rm(2))}, it follows that $\operatorname{Obs}\,(C|A) \subset \cD(\bW_c^*)$.
\item[(3)] In case the system matrix $M = \sbm{A & B \\ C & D}$ is contractive, then $\bW_o$ and $\bW_c$ also are bounded contraction operators and we have the bounded-operator factorizations \begin{equation} \label{HankDec3} \fH_{F_\Sigma} = \bW_o \bW_c, \quad (\fH_{F_\Sigma})^* = \bW_c^* \bW_o^*. \end{equation} \end{enumerate} \end{proposition}
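For a scalar system the factorization \eqref{HankDec3} can be verified entrywise on a finite truncation, since the $(i,j)$ entry of $\fH_{F_\Sigma}$ is $C A^{i-j-1} B$. A sketch with illustrative data (not from the text):

```python
# Entrywise check of the factorization  H_F = W_o W_c  on a truncation,
# for a scalar system with illustrative data a, b, c (|a| < 1).
a, b, c = 0.5, 2.0, 3.0
N = 8
rows = list(range(N))          # i = 0, ..., N-1
cols = list(range(-N, 0))      # j = -N, ..., -1

# Hankel entries C A^{i-j-1} B and the truncated operators W_o, W_c.
hankel = [[c * a ** (i - j - 1) * b for j in cols] for i in rows]
Wo = [c * a ** i for i in rows]           # row i of col[C A^i]
Wc = [a ** (-j - 1) * b for j in cols]    # column j of row[A^{-j-1} B]

for i in rows:
    for jj in range(N):
        # (i, j) entry of the Hankel matrix = (W_o at i) * (W_c at j)
        assert abs(hankel[i][jj] - Wo[i] * Wc[jj]) < 1e-12
```

In the unbounded case the content of the proposition is precisely that this entrywise identity survives, provided one restricts to the indicated domains.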
The following result from \cite{KYP1} describes the implications of $\ell^2$-exact controllability and $\ell^2$-exact observability on the operators $\bW_o$ and $\bW_c$.
\begin{proposition}[Corollary 2.5 in \cite{KYP1}] \label{P:ell2implics} Let $\Si$ be a discrete-time linear system as in \eqref{dtsystem} with system matrix $M$ as in \eqref{sysmat}. Assume that the transfer function $F_\Si$ defined by \eqref{trans} has an analytic continuation to an $H^{\infty}$-function on ${\mathbb D}$. \begin{itemize} \item[(1)] If $\Sigma$ is $\ell^2$-exactly controllable, then $\bW_o$ is bounded.
\item[(2)] If $\Sigma$ is $\ell^2$-exactly observable, then $\bW_c$ is bounded.
\item[(3)] If $\Sigma$ is $\ell^2$-exactly minimal, i.e., both $\ell^2$-exactly controllable and $\ell^2$-exactly observable, then $\bW_o$ and $\bW_c^*$ are both bounded and bounded below. \end{itemize} \end{proposition}
The following result will be useful in the sequel.
\begin{proposition} \label{P:Wc} Suppose that the discrete-time linear system $\Sigma$ given by \eqref{dtsystem} is minimal and that its transfer function $F_\Sigma$ has analytic continuation to an $H^\infty$-function on ${\mathbb D}$, so (by Propositions \ref{P:WcWo'} and \ref{P:HankelDecs}) $\cD(\bW_c^*) \supset \operatorname{Obs}\,(C|A)$ is dense in $\cX$
and $\bW_c = (\bW_c^*)^*$ is densely defined with dense range $\textup{Im\,}(\bW_c) \supset \operatorname{Rea}\,(A|B)$.
\begin{enumerate} \item[(1)] Suppose that $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge n_{-1}}$ is a system trajectory of $\Si$ with initialization ${\mathbf x}(n_{-1}) = 0$. Define an input string ${\mathbf u}' \in \ell_{{\rm fin},\cU}({\mathbb Z}_-)$ by $$
{\mathbf u}'(n) = \begin{cases} 0 &\text{if } n < n_{-1}, \\
{\mathbf u}(n) &\text{if } n_{-1} \le n < 0. \end{cases} $$ Then ${\mathbf x}(0) = \bW_c {\mathbf u}'$.
\item[(2)] Suppose that ${\mathbf u} \in \ell^2_\cU({\mathbb Z}_-)$ is in $\cD(\bW_c)$ and $\widetilde u \in \cU$. Define a new input string ${\mathbf u}' \in \ell^2_\cU({\mathbb Z}_-)$ by $$
{\mathbf u}'(n) = \begin{cases} {\mathbf u}(n+1) & \text{if } n < -1, \\
\widetilde u & \text{if } n= -1. \end{cases} $$ Then ${\mathbf u}' \in \cD(\bW_c)$ and $$
\bW_c {\mathbf u}' = A \bW_c {\mathbf u} + B \widetilde u. $$ \end{enumerate} \end{proposition}
\begin{proof} We start with item (1). From item (4) of Proposition \ref{P:WcWo'} we see that $\ell_{{\rm fin}, \cU}({\mathbb Z}_-)$ is contained in $\cD(\bW_c)$, and thus ${\mathbf u}'\in \cD(\bW_c)$. From formula \eqref{limit-c} for the action of $\bW_c$ on its domain we obtain that \begin{equation} \label{Wc-fin}
\bW_c {\mathbf u}' = \sum_{k \in {\mathbb Z}_-} A^{-k-1} B {\mathbf u}'(k)
=\sum_{k = n_{-1}}^{-1} A^{-k-1} B {\mathbf u}(k) \end{equation} where the sum is well defined since there are only finitely many nonzero terms. By a standard induction argument, using the input-state equation in \eqref{dtsystem}, one verifies that this is the formula for ${\mathbf x}(0)$ for a system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge n_{-1}}$ with initialization ${\mathbf x}(n_{-1}) = 0$. This verifies (1).
As for item (2), it is easily verified that $\cD(\bW_c^*)$ is invariant under $A^*$ and that the following intertwining condition holds: $$
\bW_c^* A^*|_{\cD(\bW_c^*)} = \cS_- \bW_c^*, $$ with $\cS_-$ the truncated right shift operator on $\ell^2_\cU({\mathbb Z}_-)$ given by $$
( \cS_- {\mathbf u})(n) = {\mathbf u}(n-1) \text{ for } n \in {\mathbb Z}_-. $$ The adjoint version of this is that $\cD(\bW_c)$ is invariant under the untruncated left shift operator $\cS_-^*$ on $\ell^2_\cU({\mathbb Z}_-)$ $$
(\cS_-^* {\mathbf u})(n) = \begin{cases} {\mathbf u}(n+1) &\text{if } n < -1, \\
0 &\text{if } n=-1 \end{cases} $$ and we have the intertwining condition $$
\bW_c \cS_-^*|_{\cD(\bW_c)} = A \bW_c. $$ Next note that $\cS_-^* {\mathbf u}= {\mathbf u}' - \Pi_{-1} \widetilde{u}$, with $\Pi_{-1}:\cU\to\ell^2_{\cU}(\BZ_-)$ the embedding of $\cU$ into the $-1$-th entry of $\ell^2_{\cU}(\BZ_-)$. This implies that \[ {\mathbf u}'=\cS_-^* {\mathbf u} + \Pi_{-1} \widetilde{u}\in \cS_-^* \cD(\bW_c)+ \ell_{{\rm fin},\cU}({\mathbb Z}_-)\subset \cD(\bW_c), \] and \begin{equation} \label{intertwine1} A \bW_c{\mathbf u}
=\bW_c \cS_-^*|_{\cD(\bW_c)}{\mathbf u} =\bW_c ({\mathbf u}' -\Pi_{-1}\widetilde{u})=\bW_c {\mathbf u}' -B\widetilde{u}, \end{equation} which provides the desired identity. \end{proof}
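Both items of the proposition can be checked numerically for a scalar system by running the state recursion of \eqref{dtsystem}. In the sketch below (illustrative data, not from the text) an input string supported on $\{n_{-1}, \dots, -1\}$ is fed in with ${\mathbf x}(n_{-1}) = 0$:

```python
# Scalar check of items (1) and (2): a, b illustrative,
# u = (u(-3), u(-2), u(-1)) with zero inputs before time -3.
a, b = 0.7, 1.3
u = [0.5, -1.0, 2.0]

def run_state(inputs):
    x = 0.0                    # initialization x(n_{-1}) = 0
    for un in inputs:
        x = a * x + b * un     # state equation of the system
    return x                   # = x(0)

def Wc(inputs):
    # sum_{k=-K}^{-1} A^{-k-1} B u(k), with inputs[0] = u(-K)
    K = len(inputs)
    return sum(a ** (K - 1 - i) * b * inputs[i] for i in range(K))

# item (1): the state reached at time 0 equals W_c applied to the input string
assert abs(run_state(u) - Wc(u)) < 1e-12
# item (2): appending u_tilde at time -1 gives  W_c u' = A W_c u + B u_tilde
u_tilde = 0.25
assert abs(Wc(u + [u_tilde]) - (a * Wc(u) + b * u_tilde)) < 1e-12
```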
\begin{remark} \label{R:Wc} It is of interest to consider the shift $\bW_c^{(1)}$ of the controllability operator $\bW_c$ to the interval $(-\infty, 0]$ in place of ${\mathbb Z}_- = (-\infty, 0)$, i.e., $$
\bW_c^{(1)} = \bW_c \tau^{-1} $$ where the map $\tau$ transforms sequences ${\mathbf u}$ supported on ${\mathbb Z}_- = (-\infty, 0)$ to sequences ${\mathbf u}'$ supported on $(-\infty, 0]$ according to the action $$
(\tau {\mathbf u})(n) = {\mathbf u}(n-1)\quad \text{ for } n \le 0 $$ with inverse given by $$
(\tau^{-1} {\mathbf v})(n) = {\mathbf v}(n+1)\quad \text{ for } n < 0. $$ For all ${\mathbf u} \in \ell^2_\cU({\mathbb Z}_-)$ and $\widetilde u \in \cU$, define a sequence $({\mathbf u}, \widetilde u) \in \ell^2_\cU((-\infty, 0])$ by $$
({\mathbf u}, \widetilde u)(n) = \begin{cases} {\mathbf u}(n) &\text{if } n \in {\mathbb Z}_-, \\
\widetilde u &\text{if } n=0. \end{cases} $$ The result of item (2) in Proposition \ref{P:Wc} can be interpreted as saying: given ${\mathbf u} \in \ell^2_\cU({\mathbb Z}_-)$ and $\widetilde u \in \cU$ we have \[ ({\mathbf u}, \widetilde u) \in \cD(\bW_c^{(1)}) \quad \Longleftrightarrow \quad {\mathbf u} \in \cD(\bW_c) \] and in that case $ \bW_c^{(1)} ({\mathbf u}, \widetilde u) = A \bW_c {\mathbf u} + B \widetilde u$. \end{remark}
\section{Storage functions} \label{S:Storage}
In the case of systems with an infinite dimensional state space we allow storage functions to also attain $+\infty$ as a value. Set $[0,\infty]:= \BR_+\cup\{+\infty\}$. Then, given a discrete-time linear system $\Si$ as in \eqref{dtsystem}, we say that a function $S \colon \cX \to [0, \infty]$ is a {\em storage function} for the system $\Sigma$ if the dissipation inequality \begin{equation}\label{disineq}
S({\mathbf x}(n+1)) \le S({\mathbf x}(n)) + \| {\mathbf u}(n) \|_{\cU}^{2} - \|
{\mathbf y}(n)\|_{\cY}^{2} \text{ for } n \ge N_0 \end{equation} holds along all system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge N_0}$ with state initialization ${\mathbf x}(N_0) = x_0$ for some $x_0 \in \cX$ at some $N_0 \in {\mathbb Z}$, and $S$ is normalized to satisfy \begin{equation}\label{normalization'} S(0) = 0. \end{equation}
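As an illustration (scalar data chosen so that the system matrix $M$ is a contraction, an assumption not made in the definition itself), the quadratic function $S(x) = \|x\|^2$ satisfies the dissipation inequality \eqref{disineq} along any trajectory:

```python
# S(x) = |x|^2 as a storage function for a scalar system whose system matrix
# M = [[a, b], [c, d]] is a contraction (illustrative data; contractivity of M
# is what makes S a storage function here, and S(0) = 0 holds trivially).
a, b, c, d = 0.5, 0.5, 0.5, 0.2

def S(x):
    return x * x

x = 0.0                                   # state initialization x(N_0) = 0
for u in [1.0, -0.5, 0.3, 0.0, 2.0]:
    x_next = a * x + b * u                # state equation
    y = c * x + d * u                     # output equation
    # dissipation inequality  S(x(n+1)) <= S(x(n)) + |u(n)|^2 - |y(n)|^2
    assert S(x_next) <= S(x) + u * u - y * y + 1e-12
    x = x_next
```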
As a first result we show that existence of a storage function for $\Si$ is a sufficient condition for the transfer function to have an analytic continuation to a Schur class function.
\begin{proposition}\label{P:storage-Schur} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a storage function $S$. Then the transfer function $F_{\Sigma}$ of $\Si$ defined in \eqref{trans} has an analytic continuation to a function in the Schur class $\cS(\cU, \cY)$.
\end{proposition}
The proof of Proposition \ref{P:storage-Schur} relies on the following observation, which will also be of use in the sequel.
\begin{lemma}\label{L:finH2} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a storage function $S$. For each system trajectory $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n\in\BZ}$ and $N_0\in\BZ$ so that ${\mathbf x}(N_0)=0$, the following inequalities hold for all $N\in\BZ_+$: \begin{align}
S({\mathbf x}(N_0+N+1))&\le \sum_{n=N_0}^{N_0+N} \| {\mathbf u}(n)\|_{\cU}^{2} -
\sum_{n=N_0}^{N_0+N} \| {\mathbf y}(n)
\|^{2}_{\cY};\label{Sbound}\\
\sum_{n=N_0}^{N_0+N} \| {\mathbf y}(n) \|^{2}_{\cY} &\le \sum_{n=N_0}^{N_0+N}
\| {\mathbf u}(n)
\|^{2}_{\cU}. \label{IOdisineq} \end{align} \end{lemma}
\begin{proof} By the translation invariance of the system $\Si$ we may assume without loss of generality that $N_0=0$, i.e., ${\mathbf x}(0)=0$. From \eqref{disineq} and \eqref{normalization'} we get \[
S({\mathbf x}(1)) \le \| {\mathbf u}(0)\|^{2} - \| {\mathbf y}(0) \|^{2} + S(0) = \|
{\mathbf u}(0)\|^{2} - \| {\mathbf y}(0) \|^{2} < \infty. \] Inductively, suppose that $S({\mathbf x}(n)) < \infty$. Then \eqref{disineq} gives us \[
S({\mathbf x}(n+1)) \le \| {\mathbf u}(n) \|^{2}_{\cU} - \| {\mathbf y}(n) \|^{2}_{\cY} + S({\mathbf x}(n)) <
\infty. \] We may now rearrange the dissipation inequality for $n\in\BZ_+$ in the form \begin{equation} \label{difdis}
S({\mathbf x}(n+1)) - S({\mathbf x}(n)) \le \| {\mathbf u}(n) \|^{2} - \| {\mathbf y}(n) \|^{2} \quad (n\in\BZ_+). \end{equation} Summing from $n=0$ to $n=N$ gives \[
0 \le S({\mathbf x}(N+1)) \le \sum_{n=0}^{N} \| {\mathbf u}(n)\|_{\cU}^{2} -
\sum_{n=0}^{N} \| {\mathbf y}(n)
\|^{2}_{\cY}, \] which leads to \[
\sum_{n=0}^{N} \| {\mathbf y}(n) \|^{2}_{\cY} \le \sum_{n=0}^{N} \| {\mathbf u}(n)
\|^{2}_{\cU} \text{ for all } N \in {\mathbb Z}_{+}. \] These inequalities prove \eqref{Sbound} and \eqref{IOdisineq} for $N_0=0$. As observed above, the case of $N_0\not=0$ is then obtained by translation of the system trajectory. \end{proof}
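In finite dimensions the telescoping argument of Lemma \ref{L:finH2} is easy to check numerically. The following Python/NumPy sketch (illustrative only, not part of the formal development) scales a random $4\times 4$ matrix so that the system matrix $\sbm{A & B \\ C & D}$ is a contraction; then $H=I$ solves the KYP inequality, $S(x)=\|x\|^2$ is a storage function, and both \eqref{Sbound} and \eqref{IOdisineq} hold along any trajectory started at ${\mathbf x}(0)=0$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system: scale a random realization so that the system
# matrix [[A, B], [C, D]] is a contraction; then H = I solves the
# KYP inequality and S(x) = ||x||^2 is a storage function.
M = rng.standard_normal((4, 4))
M *= 0.9 / np.linalg.norm(M, 2)          # operator norm < 1
A, B = M[:2, :2], M[:2, 2:]
C, D = M[2:, :2], M[2:, 2:]

# Run a trajectory with x(0) = 0 and an arbitrary input sequence.
x = np.zeros(2)
in_energy = out_energy = 0.0
for n in range(50):
    u = rng.standard_normal(2)
    y = C @ x + D @ u
    x = A @ x + B @ u
    in_energy += u @ u
    out_energy += y @ y
    # (Sbound): S(x(n+1)) <= sum ||u||^2 - sum ||y||^2
    assert x @ x <= in_energy - out_energy + 1e-9

# (IOdisineq): the output energy never exceeds the input energy.
assert out_energy <= in_energy
```

The choice of system matrices here is arbitrary; any realization admitting a storage function would do.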
\begin{proof}[Proof of Proposition \ref{P:storage-Schur}] Let ${\mathbf u} \in \ell^{2}_{\cU}({\mathbb Z}_{+})$ and run the system $\Sigma$ with input sequence ${\mathbf u}$ and initial condition ${\mathbf x}(0)= 0$. From Lemma \ref{L:finH2}, with $N_0=0$, we obtain that for each $N\in\BZ_+$ we have \[
\sum_{n=0}^{N} \| {\mathbf y}(n) \|^{2}_{\cY} \le \sum_{n=0}^{N} \| {\mathbf u}(n)
\|^{2}_{\cU}. \] Letting $N \to \infty$, we conclude that ${\mathbf u} \in\ell^{2}_{\cU}({\mathbb Z}_{+})$ implies that the output sequence
${\mathbf y}$ is in $\ell^{2}_{\cY}({\mathbb Z}_{+})$ with $\| {\mathbf y}
\|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} \le \| {\mathbf u}
\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}$.
Write $ \widehat u$ and $ \widehat y$ for the $Z$-transforms of ${\mathbf u}$ and ${\mathbf y}$, respectively, i.e., $\widehat u(z) = \sum_{n=0}^{\infty} {\mathbf u}(n) z^{n}$ and $\widehat y(z)= \sum_{n=0}^{\infty} {\mathbf y}(n) z^{n}$. Since we have imposed zero-initial condition on the state, it now follows that $\widehat y(z) = F_{\Sigma}(z) \widehat u(z)$ in a neighborhood of 0. Since ${\mathbf u}$ was chosen arbitrarily in $\ell^{2}_\cU(\BZ_+)$, we see that $\widehat u$ is an arbitrary element of $H^2_\cU(\BD)$. Thus, the multiplication operator $M_{F_\Si} \colon \widehat u \mapsto F_\Si \cdot\widehat u$ maps $H^2_\cU(\BD)$ into $H^2_\cY(\BD)$. In particular, taking $\widehat u\in H^2_\cU(\BD)$ constant, it follows that $ F_\Si$ has an analytic continuation to $\BD$. Furthermore, the inequality \[
\|F_\Si \widehat u\|^{2}_{H^2_\cY(\BD)}=\|\widehat y\|^{2}_{H^2_\cY(\BD)}=\|
{\mathbf y} \|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})} \le \| {\mathbf u}
\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}=\|\widehat u\|^{2}_{H^2_\cU(\BD)} \] implies that the operator norm of the multiplication operator $M_{F_\Si}$ from $H^{2}_{\cU}({\mathbb D})$ to $H^{2}_{\cY}({\mathbb D})$ is at most 1. It is well known that the operator norm of
$M_{F_\Si}$ is the same as the supremum norm $\| F_\Si \|_{\infty} =
\sup_{z \in {\mathbb D}} \| F_\Si(z) \|$. Hence we obtain that the analytic continuation of $F_\Si$ is in the Schur class $\cS(\cU, \cY)$. \end{proof}
We shall see below (see Proposition \ref{P:SaSr}) that conversely, if the transfer function $F_\Sigma$ admits an analytic continuation to a Schur class function, then a storage function for $\Si$ exists.
\paragraph{Quadratic storage functions} \label{S:QuadStorage}
The class of storage functions associated with solutions to the generalized KYP inequality \eqref{KYP1b'}--\eqref{KYP1b} are the so-called {\em quadratic storage functions} described next. We shall say that a storage function $S$ is {\em quadratic} in case there is a positive-semidefinite operator $H$ on the state space $\cX$ so that $S$ has the form \begin{equation}\label{QuadStorage1}
S(x)=S_H(x)= \begin{cases} \| H^\half x \|^2 &\text{for } x \in \cD(H^\half), \\
+\infty &\text{ otherwise.} \end{cases} \end{equation}
If in addition to $F_\Si$ having an analytic continuation to a Schur class function it is assumed that $\Si$ is minimal, it can in fact be shown (see Theorem \ref{T:Sar} below) that quadratic storage functions for $\Si$ exist; for the finite dimensional case see \cite{Wil72b}.
\begin{proposition}\label{P:QuadStorage} Suppose that the function $S \colon \cX \to [0, \infty]$ has the form \eqref{QuadStorage1} for a (possibly) unbounded positive-semidefinite operator $H$ on $\cX$. Then $S_H$ is a storage function for $\Sigma$ if and only if $H$ is a positive-semidefinite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. Moreover, $S$ is {\em nondegenerate} in the sense that $S_H(x) > 0$ for all nonzero $x$ in $\cX$ if and only if $H$ is positive-definite. \end{proposition}
\begin{proof}
Suppose that $H$ solves \eqref{KYP1b'}--\eqref{KYP1b}. It is clear that $S(0)=\|H^\half 0\|^2=0$, so in order to conclude that $S$ is a storage function it remains to verify the dissipation inequality \eqref{disineq}. Let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge N_0}$ be a system trajectory with state initialization ${\mathbf x}(N_0)=x_0$ for some $x_0\in\cX$ and $N_0\in\BZ$. Fix $n \ge N_0$. If ${\mathbf x}(n) \notin \cD(H^\half)$, then $S_H({\mathbf x}(n)) = \infty$ and the dissipation inequality \eqref{disineq} is automatically satisfied. If ${\mathbf x}(n) \in \cD(H^\half)$, then \eqref{KYP1b'} implies that ${\mathbf x}(n+1)=A{\mathbf x}(n)+B {\mathbf u}(n)\in \cD(H^\half)$. Thus $S_H({\mathbf x}(n+1))<\infty$. Replacing $x$ by ${\mathbf x}(n)$ and $u$ by ${\mathbf u}(n)$ in \eqref{KYP1b} and applying \eqref{dtsystem} we obtain that \[
\left\|\mat{cc}{H^\half &0\\ 0& I_\cU}\mat{c}{{\mathbf x}(n)\\ {\mathbf u}(n)}\right\|^2
-\left\|\mat{cc}{H^\half&0\\ 0& I_\cY}\mat{c}{{\mathbf x}(n+1)\\ {\mathbf y}(n)}\right\|^2 \geq 0. \] This can be rephrased in terms of $S_H$ as \[
S_H({\mathbf x}(n))+\|{\mathbf u}(n)\|^2-S_H({\mathbf x}(n+1))-\|{\mathbf y}(n)\|^2\geq 0, \] so that \eqref{disineq} appears after adding $S_H({\mathbf x}(n+1))$ on both sides.
Conversely, suppose that $S_H$ is a storage function. Take $x \in \cX$ and $u \in \cU$ arbitrarily. Let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ be any system trajectory with initialization ${\mathbf x}(0) = x$ and with ${\mathbf u}(0) = u$. Then the dissipation inequality \eqref{disineq} with $n=0$ gives us \begin{equation}\label{eqnStoKYP}
S_H(Ax + Bu) \le S_H(x) + \| u \|^2 - \|y \|^2,\quad\mbox{ with }\quad y=Cx+Du. \end{equation} In particular, $S_H(x) < \infty$ (equivalently, $x \in \cD(H^\half)$) implies that $S_H(Ax + Bu) < \infty$ (equivalently, $Ax + B u \in \cD(H^\half)$). Specifying $u=0$ shows that $A \cD(H^\half)\subset \cD(H^\half)$ and specifying $x=0$
shows $B\cU \subset \cD(H^\half)$. Thus \eqref{KYP1b'} holds. Bringing $\|y\|^2$ in \eqref{eqnStoKYP} to the other side and writing out $S_H$ gives \[
\|H^\half (Ax + Bu)\|^2 +\|Cx+Du\|^2 \le \|H^\half x\|^2 + \| u \|^2, \] which provides \eqref{KYP1b}. \end{proof}
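For a bounded $H$ on a finite-dimensional state space the domain condition \eqref{KYP1b'} is vacuous, and Proposition \ref{P:QuadStorage} reduces to a linear matrix inequality: $\sbm{A & B \\ C & D}^* \sbm{H & 0 \\ 0 & I}\sbm{A & B \\ C & D} \preceq \sbm{H & 0 \\ 0 & I}$. The following sketch (illustrative matrices; $H=I$ as the candidate solution) checks this spatial form by inspecting eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative contractive realization; with H = I the generalized
# KYP inequality reduces to I - S^T S >= 0 for the system matrix S.
M = rng.standard_normal((4, 4))
M *= 0.9 / np.linalg.norm(M, 2)
A, B, C, D = M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]
S = np.block([[A, B], [C, D]])

H = np.eye(2)                                   # candidate bounded solution
G_in = np.block([[H, np.zeros((2, 2))],         # diag(H, I_U)
                 [np.zeros((2, 2)), np.eye(2)]])
G_out = G_in                                    # diag(H, I_Y); same sizes here

# Spatial KYP inequality: diag(H, I_U) - S^T diag(H, I_Y) S >= 0.
lmi = G_in - S.T @ G_out @ S
min_eig = np.linalg.eigvalsh(lmi).min()
assert min_eig >= -1e-9
```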
We say that a function $S \colon \cX \to {\mathbb R}_+=[0,\infty)$ is a {\em strict storage function} for the system $\Sigma$ in \eqref{dtsystem} if the strict dissipation inequality \eqref{diss-strict} holds, i.e., if there exists a $\delta > 0$ so that \begin{equation} \label{diss-strict2}
S({\mathbf x}(n+1)) - S({\mathbf x}(n)) + \delta \| {\mathbf x}(n)\|^2 \le (1- \delta) \| {\mathbf u}(n)\|^2 - \| {\mathbf y}(n) \|^2\quad (n\ge N_0) \end{equation} holds for all system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge N_0}$, initiated at some $N_0 \in {\mathbb Z}$.
Note that strict storage functions are not allowed to attain $+\infty$ as a value. The significance of the existence of a strict storage function for a system $\Sigma$ is that it guarantees that the transfer function $F_\Sigma$ has an analytic continuation to an $H^\infty$-function with $H^\infty$-norm strictly less than 1, as well as a coercivity condition on $S$,
i.e., we have the following strict version of Proposition \ref{P:storage-Schur}.
\begin{proposition} \label{P:strictstorage-Schur} Suppose that the system $\Sigma$ in \eqref{dtsystem} has a strict storage function $S$. Then \begin{enumerate} \item[(1)] the transfer function $F_\Sigma$ has an analytic continuation to a function in $H^\infty$ on the unit disk ${\mathbb D}$ with $H^\infty$-norm strictly less than 1, and \item[(2)] $S$ satisfies a coercivity condition, i.e., there is a $\delta > 0$ so that \begin{equation} \label{coercive}
S(x) \ge \delta \| x \|^2\quad (x\in\cX). \end{equation} \end{enumerate} \end{proposition}
\begin{proof} Assume that $S \colon \cX \to [0, \infty)$ is a strict storage function for $\Sigma$. Then for each system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = 0$, the strict dissipation inequality \eqref{diss-strict2} gives that there is a $\delta > 0$ so that for $n\geq 0$ we have \begin{align*}
S({\mathbf x}(n+1)) - S({\mathbf x}(n)) & \le - \delta \| {\mathbf x}(n) \|^2 + (1- \delta) \| {\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2 \\
& \le (1- \delta) \| {\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2. \end{align*} Summing up over $n=0,1,2,\dots, N$ for some $N \in {\mathbb N}$ for a system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge 0}$ subject to initialization ${\mathbf x}(0) = 0$ then gives $$
0 \le S({\mathbf x}(N+1)) = S({\mathbf x}(N+1)) - S({\mathbf x}(0)) \le (1-\delta) \sum_{n=0}^N \| {\mathbf u}(n) \|^2 - \sum_{n=0}^N \| {\mathbf y}(n) \|^2. $$ By restricting to input sequences ${\mathbf u}\in \ell^2_\cU(\BZ_+)$, it follows that the corresponding output sequences satisfy
${\mathbf y}\in\ell^2_\cY(\BZ_+)$ and $\|{\mathbf y}\|_{\ell^2_\cY(\BZ_+)}^2 \le (1 - \delta) \|{\mathbf u}\|_{\ell^2_\cU(\BZ_+)}^2$. Taking the $Z$-transform and using the Plancherel theorem then gives $$
\| M_{F_\Sigma} \widehat {\mathbf u} \|^2_{H^2_\cY({\mathbb D})}=
\| \widehat {\mathbf y} \|^2_{H^2_\cY({\mathbb D})}\le
(1- \delta) \| \widehat {\mathbf u} \|^2_{H^2_\cU({\mathbb D})}. $$
Thus $\|M_{F_\Sigma}\|\leq \sqrt{1-\delta} < 1$. This implies that $F_\Sigma$ has an analytic continuation to an $\cL(\cU, \cY)$-valued $H^\infty$-function with
$H^\infty$-norm at most $\|M_{F_\Sigma}\|\leq\sqrt{1-\delta} < 1$.
To this point we have not made use of the presence of the term $\delta \| x(n) \|^2$ in the strict dissipation inequality \eqref{diss-strict2}. We now show how the presence of this term leads to the validity of the coercivity condition \eqref{coercive} on $S$. Let $x_0$ be any state in $\cX$ and let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge 0}$ be any system trajectory with initialization ${\mathbf x}(0) = x_0$ and ${\mathbf u}(0) = 0$. Then the strict dissipation inequality \eqref{diss-strict2} with $n=0$ gives us $$
\delta \| x_0 \|^2 = \delta \| {\mathbf x}(0) \|^2 \le S({\mathbf x}(1)) + \delta \| {\mathbf x}(0)\|^2 + \| {\mathbf y}(0) \|^2 \le S({\mathbf x}(0)) = S(x_0), $$
i.e., $S(x_0) \ge \delta \| x_0 \|^2$ for each $x_0 \in \cX$, verifying the validity of \eqref{coercive}. \end{proof}
The following result classifies which quadratic storage functions $S_H$ are strict storage functions.
\begin{proposition} \label{P:strictQuadStorage} Suppose that $S = S_H$ is a quadratic storage function for the system $\Sigma$ in \eqref{dtsystem}. Then $S_H$ is a strict storage function for $\Sigma$ if and only if $H$ is a bounded positive-semidefinite solution of the strict KYP-inequality \eqref{KYP2}. Any such solution is in fact strictly positive-definite. \end{proposition}
\begin{proof} Suppose that $S_H$ is a strict storage function for $\Sigma$. Then by definition $S_H(x) < \infty$ for all $x \in \cX$. Hence $\cD(H)=\cX$. By the Closed Graph Theorem, it follows that $H$ is bounded. As a consequence of Proposition \ref{P:strictstorage-Schur}, $S_H$ is coercive and hence $H$ is strictly positive-definite. The strict dissipation inequality \eqref{diss-strict2} expressed in terms of $H$ and the system matrix $\sbm{ A & B \\ C & D}$ becomes $$
\| H^\half (Ax + Bu) \|^2 - \| H^\half x \|^2 + \delta \| x \|^2 \le
(1 - \delta) \| u \|^2 - \| C x + D u \|^2 $$ for all $x \in \cX$ and $u \in \cU$. This can be expressed more succinctly as \begin{align*} & \left\langle \begin{bmatrix} H & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right \rangle - \left\langle \begin{bmatrix} H & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x \\ u \end{bmatrix} \right\rangle \\ & \quad \quad \quad \quad \le - \delta \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x \\ u \end{bmatrix} \right\rangle \end{align*} for all $x \in \cX$ and $u \in \cU$, for some $\delta > 0$. This is just the spatial version of \eqref{KYP2}, so $H$ is a strictly positive-definite solution of the strict KYP-inequality \eqref{KYP2}. By reversing the steps one sees that $H \succeq 0$ being a solution of the strict KYP-inequality \eqref{KYP2} implies that $S_H$ is a strict storage function. As a consequence of Proposition \ref{P:strictstorage-Schur} we see that then $S_H$ satisfies a coercivity condition \eqref{coercive}, so necessarily $H$ is strictly positive-definite. \end{proof}
\section{The available storage and required supply}\label{S:ASRS}
In Proposition \ref{P:storage-Schur} we showed that the existence of a storage function (which is allowed to attain the value $+\infty$) for a discrete-time linear system $\Si$ implies that the transfer function $F_\Si$ associated with $\Si$ is equal to a Schur class function on a neighborhood of 0. In this section we investigate the converse direction. Specifically, we give explicit variational formulas for three storage functions, referred to as the available storage function $S_a$ (defined in \eqref{Sa2}), the required supply function $S_r$ (defined in \eqref{Sr2}), and the ``regularized'' version $\underline{S}_r$ of the required supply
(defined in \eqref{uSr2}). Let ${\boldsymbol{\mathcal U}}$ denote the space of all functions $n \mapsto u(n)$ from the integers ${\mathbb Z}$ into the input space $\cU$. Then $S_{a}$ is given by \begin{equation} \label{Sa2}
S_{a}(x_{0}) = \sup_{{\mathbf u} \in {\boldsymbol{\mathcal U}},\, n_{1} \ge 0} \sum_{n=0}^{n_{1}}
\left( \| {\mathbf y}(n) \|^{2} - \|{\mathbf u}(n)\|^{2}\right) \end{equation} with the supremum taken over all system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = x_0$, while $S_r$ is given by \begin{equation} \label{Sr2} S_{r}(x_{0}) = \inf_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, \, n_{-1} < 0}
\sum_{n=n_{-1}}^{-1} \left( \|{\mathbf u}(n)\|^{2} - \| {\mathbf y}(n) \|^{2} \right) \end{equation} with the infimum taken over all system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge n_{-1}}$ subject to the initialization condition ${\mathbf x}(n_{-1}) = 0$ and the condition ${\mathbf x}(0) = x_{0}$.
The proof that $S_a$ and $S_r$ are storage functions whenever $F_\Sigma$ is in the Schur class requires the following preparatory lemma. We shall use the following notation. For an arbitrary Hilbert space $\cZ$, write $P_+$ and $P_-$ for the orthogonal projections onto $\ell^2_\cZ(\BZ_+)$ and $\ell^2_\cZ(\BZ_-)$, respectively, acting on $\ell^2_\cZ(\BZ)$. For integers $m\leq n$, we write $P_{[m,n]}$ for the orthogonal projection onto the subspace of sequences in $\ell^2_\cZ(\BZ)$ with support on the coordinate positions $m, m+1,\ldots, n$.
\begin{lemma} \label{L:prep} Let $\Sigma$ be as in \eqref{dtsystem} and suppose that its transfer function $F_\Sigma$ is in the Schur class. Then, for each system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = 0$, the inequality \begin{equation} \label{io-ineq}
\sum_{n=0}^N \| {\mathbf y}(n) \|^2 \le \sum_{n=0}^N \| {\mathbf u}(n) \|^2 \end{equation} holds for all $N \in {\mathbb Z}_+$. \end{lemma}
\begin{proof} As we have already observed, the fact that $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$ implies that the multiplication operator $M_{F_\Sigma}$ \eqref{mult-op} has norm at most $1$ as an operator from $L^2_\cU({\mathbb T})$ to $L^2_\cY({\mathbb T})$. If we apply the inverse $Z$-transform to the full operator $M_{F_\Sigma}$, not just to the compression ${\mathbb H}_{F_\Sigma}$ as was done to arrive at the Hankel operator $\fH_{F_\Sigma}$ in \eqref{Hankel-matrix}, we get the {\em Laurent operator} \begin{equation}\label{Laurent0}
{\mathfrak L}_{F_{\Si}}=\mat{ccc|ccc}{ \ddots&\ddots&\ddots &\ddots&\ddots&\ddots\\ \ddots&F_{0}&0 &0&0&\ddots\\ \ddots&F_{1}&F_{0} &0&0&\ddots\\ \hline \ddots&F_{2}&F_{1} &F_0&0&\ddots\\ \ddots&F_{3}&F_{2} &F_1&F_0&\ddots\\ \ddots&\ddots&\ddots&\ddots&\ddots&\ddots}: \ell^2_\cU(\BZ)\to \ell^2_\cY(\BZ), \end{equation} where $F_0, F_1, F_2, \dots$ are the Taylor coefficients of $F_\Sigma$: \begin{equation} \label{Taylor}
F_n = \begin{cases} D &\text{if } n=0 \\
C A^{n-1}B &\text{if $n \ge 1$.} \end{cases} \end{equation} It is convenient to write $\fL_{F_\Sigma}$ as a $2 \times 2$-block matrix with respect to the decomposition $\ell^2_\cU({\mathbb Z}) = \sbm{ \ell^2_\cU({\mathbb Z}_-) \\ \ell^2_{\cU}({\mathbb Z}_+)}$ of the domain and the decomposition $\ell^2_\cY({\mathbb Z}) = \sbm{ \ell^2_\cY({\mathbb Z}_-) \\ \ell^2_{\cY}({\mathbb Z}_+)}$ of the range; the result is \begin{equation} \label{Laurent}
{\mathfrak L}_{F_{\Si}}=\mat{c|c}{\widetilde{{\mathfrak T}}_{F_\Si}&0\\\hline {\mathfrak H}_{F_\Si}& {\mathfrak T}_{F_\Si}}:\mat{c}{\ell^2_\cU(\BZ_-)\\ \ell^2_\cU(\BZ_+)}\to \mat{c}{\ell^2_\cY(\BZ_-)\\ \ell^2_\cY(\BZ_+)}. \end{equation} Here ${\mathfrak H}_{F_\Si}:\ell^2_\cU(\BZ_-)\to \ell^2_\cY(\BZ_+)$ denotes the Hankel operator associated with ${F_\Si}$ already introduced in \eqref{Hankel-matrix}, ${\mathfrak T}_{F_\Si}:\ell^2_\cU(\BZ_+)\to \ell^2_\cY(\BZ_+)$ the Toeplitz operator associated with ${F_\Si}$, and $\widetilde {\mathfrak T}_{{F_\Si}}$ the Toeplitz operator acting from $\ell^{2}_{\cU}({\mathbb Z}_{-})$ to $\ell^{2}_{\cY}({\mathbb Z}_{-})$ associated with ${F_\Si}$. From the assumption that $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$, it follows that $M_{F_\Sigma}$ is contractive, and hence also each of the operators $\widetilde \fT_{F_\Sigma}$, $\fH_{F_\Sigma}$, and $\fT_{F_\Sigma}$ is contractive. From the lower triangular form of $\fT_{F_\Sigma}$ we see in addition that $\fT_{F_\Sigma}$ has the {\em causality property}: \begin{equation}\label{causal} P_{[0,N]} \fT_{F_\Sigma} = P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}\quad (N\ge 0). \end{equation} Now suppose that $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ is a system trajectory on ${\mathbb Z}_+$ with initialization ${\mathbf x}(0) = 0$. In this case the infinite matrix identity ${\mathbf y} = \fT_{F_\Sigma} {\mathbf u}$ holds formally. For $N \in {\mathbb Z}_+$ we have $P_{[0,N]}{\mathbf u} \in \ell^2_\cU(\BZ_+)$, and by the causality property \[ P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}{\mathbf u} =P_{[0,N]} \fT_{F_\Sigma}{\mathbf u} =P_{[0,N]}{\mathbf y}. \] Since $\fT_{F_\Sigma}$ is contractive, so is $P_{[0,N]} \fT_{F_\Sigma} P_{[0,N]}$ and thus the above identity shows that $
\| P_{[0,N]} {\mathbf y} \| \le \| P_{[0,N]} {\mathbf u} \|,
$ or, equivalently,
\begin{equation} \label{causal-contraction}
\sum_{n=0}^N \| {\mathbf y}(n) \|^2 \le \sum_{n=0}^N \| {\mathbf u}(n) \|^2
\end{equation} holds for each system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge 0}$
with ${\mathbf x}(0) = 0$.
\end{proof}
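The causality property \eqref{causal} and the contractivity of the finite sections of $\fT_{F_\Si}$ can be replayed concretely in finite dimensions. The following sketch (illustrative realization; $N$ is an arbitrary finite horizon) builds the finite section of the block Toeplitz matrix from the Taylor coefficients \eqref{Taylor} and checks that a trajectory with ${\mathbf x}(0)=0$ satisfies ${\mathbf y}=\fT_{F_\Si}{\mathbf u}$ on $[0,N-1]$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative contractive realization, so F_Sigma is Schur class.
M = rng.standard_normal((4, 4))
M *= 0.9 / np.linalg.norm(M, 2)
A, B, C, D = M[:2, :2], M[:2, 2:], M[2:, :2], M[2:, 2:]

N = 8
# Taylor coefficients F_0 = D, F_n = C A^{n-1} B.
Fcoef = [D] + [C @ np.linalg.matrix_power(A, n - 1) @ B for n in range(1, N)]

# Finite section of the block lower-triangular Toeplitz operator.
T = np.zeros((2 * N, 2 * N))
for i in range(N):
    for j in range(i + 1):
        T[2*i:2*i+2, 2*j:2*j+2] = Fcoef[i - j]

# A trajectory with x(0) = 0 satisfies y = T u on [0, N-1] (causality).
u = rng.standard_normal(2 * N)
x = np.zeros(2)
y = np.zeros(2 * N)
for n in range(N):
    y[2*n:2*n+2] = C @ x + D @ u[2*n:2*n+2]
    x = A @ x + B @ u[2*n:2*n+2]
assert np.allclose(y, T @ u)
# Finite sections (compressions) of a contraction are contractions.
assert np.linalg.norm(T, 2) <= 1 + 1e-9
```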
The proof of the following result is an adaptation of the proofs of Theorems 1 and 2 for the continuous time setting in \cite{Wil72a}.
\begin{proposition}\label{P:SaSr} Assume that the discrete-time linear system $\Si$ has a transfer function $F_\Sigma$ which has an analytic continuation to a function in the Schur class $\cS(\cU,\cY)$. Define $S_{a}$ and $S_{r}$ by \eqref{Sa2} and \eqref{Sr2}. Then \begin{enumerate} \item[(1)] $S_{a}$ is a storage function, \item[(2)] $S_{r}$ is a storage function, and \item[(3)] for each storage function $S$ for $\Sigma$ we have $$
S_a(x_0) \le S(x_0) \le S_r(x_0) \text{ for all } x_0 \in \cX. $$ \end{enumerate} \end{proposition}
\begin{proof} The proof consists of three parts, corresponding to the three assertions of the proposition.
{(1)} To see that $S_{a}(x_{0}) \ge 0$ for all $x_{0}\in\cX$, choose ${\mathbf x}(0) = x_{0}$ and ${\mathbf u}(n) = 0$ for $n \ge 0$ to generate a system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ such that
$\sum_{n=0}^{n_{1}} ( \| {\mathbf y}(n) \|^{2} - \| {\mathbf u}(n) \|^{2}) =
\sum_{n=0}^{n_{1}} \| {\mathbf y}(n) \|^{2} \ge 0$ for all $n_{1} \ge 0$. From the definition \eqref{Sa2}, we see that $S_{a}(x_{0}) \ge 0$.
By Lemma \ref{L:prep}, each system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = 0$ satisfies the inequality \[
\sum_{n=0}^{n_1} \| {\mathbf y}(n) \|^{2}_{\cY} \le \sum_{n=0}^{n_1} \| {\mathbf u}(n)
\|^{2}_{\cU}\quad (n_1\in\BZ_+). \] This observation leads to the conclusion that $S_{a}(0) \le 0$. Hence $S_{a}(0)=0$ and thus $S_{a}$ satisfies the normalization \eqref{normalization'}.
Now let $\{\widetilde{\mathbf u}(n), \widetilde{\mathbf x}(n), \widetilde{\mathbf y}(n)\}_{n \ge N_0}$ be any system trajectory initiated at some $N_0\in\BZ$. We wish to show that this trajectory satisfies the dissipation inequality \eqref{disineq}. It is convenient to rewrite this condition in the form \[
\| \widetilde{\mathbf y}(n) \|^{2}_{\cY} - \| \widetilde{\mathbf u}(n) \|^{2}_{\cU} + S_a(\widetilde{\mathbf x}(n+1)) \le S_a(\widetilde{\mathbf x}(n))\quad (n\ge N_0). \] By translation invariance of the system equations \eqref{dtsystem}, without loss of generality we may take $n=0$, so we need to show \begin{equation} \label{distoshow}
\| \widetilde{\mathbf y}(0) \|^{2}_{\cY} - \| \widetilde{\mathbf u}(0) \|^{2}_{\cU} + S_a(\widetilde{\mathbf x}(1)) \le S_a(\widetilde{\mathbf x}(0)). \end{equation} We rewrite the definition \eqref{Sa2} for $S_a(\widetilde{\mathbf x}(1))$ in the form \[ S_a(\widetilde{\mathbf x}(1)) = \sup_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, n_1\geq 0} \sum_{n=0}^{n_{1}}
\left( \| {\mathbf y}(n)
\|^{2}_{\cY} - \| {\mathbf u}(n) \|^{2}_{\cU} \right), \] where the system trajectory $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n \ge 0}$ is subject to the initializa\-tion ${\mathbf x}(0)=\widetilde{\mathbf x}(1)$. Again making use of the translation invariance of the system equations, we may rewrite this in the form \[ S_a(\widetilde{\mathbf x}(1)) = \sup_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, n_1\geq 1} \sum_{n=1}^{n_{1}}
\left( \| {\mathbf y}(n)
\|^{2}_{\cY} - \| {\mathbf u}(n) \|^{2}_{\cU} \right), \] where $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n \ge 0}$ is a system trajectory with initialization now given by ${\mathbf x}(1)=\widetilde{\mathbf x}(1)$. Substituting this expression for $S_a(\widetilde{\mathbf x}(1))$, the left-hand side of \eqref{distoshow} reads \[
\| \widetilde{\mathbf y}(0) \|^{2}_{\cY} - \| \widetilde{\mathbf u}(0) \|^{2}_{\cU} +
\sup_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, n_1\geq 1} \sum_{n=1}^{n_{1}} \left( \| {\mathbf y}(n)
\|^{2}_{\cY} - \| {\mathbf u}(n) \|^{2}_{\cU} \right). \] This quantity indeed is bounded above by \[ S_a(\widetilde{\mathbf x}(0))= \sup_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, n_1\geq 0} \sum_{n=0}^{n_{1}}
\left( \| {\mathbf y}(n)
\|^{2}_{\cY} - \| {\mathbf u}(n) \|^{2}_{\cU} \right), \] with $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n \ge 0}$ a system trajectory subject to initialization $\widetilde{\mathbf x}(0)={\mathbf x}(0)$. Hence the inequality \eqref{distoshow} follows as required, and $S_{a}$ is a storage function for $\Sigma$.
{(2)}
Let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge n_{-1}}$ be a system trajectory with zero-initial\-ization of the state at $n_{-1} < 0$, subject also to ${\mathbf x}(0)=x_0$. Applying the result of Lemma \ref{L:prep} to this system trajectory, using the translation invariance property of $\Si$ to get a sum in \eqref{io-ineq} starting at $n_{-1}$ and ending at $-1$, it follows that $S_{r}(x_{0}) \ge 0$ for all $x_{0}$ in $\operatorname{Rea}\, (A|B)$. In case $x_0\not\in\operatorname{Rea}\,(A|B)$, i.e., $x_{0}$ is not reachable in finitely many steps via some input signal ${\mathbf u}(n)$ ($n_{-1}\le n < 0$) with ${\mathbf x}(n_{-1}) = 0$, the definition of $S_r$ in \eqref{Sr2} gives us $S_{r}(x_{0}) = +\infty \ge 0$. By choosing $n_{-1} = -1$ with ${\mathbf u}(-1) = 0$, we see that $S_{r}(0) \le 0$. Since $S_{r}(x_{0}) \ge 0$ for each $x_{0} \in \cX$, it follows that $S_{r}$ also satisfies the normalization \eqref{normalization'}.
An argument similar to that used in part 1 of the proof shows that $S_{r}$ satisfies \eqref{disineq}. Indeed, note that it suffices to show that for each system trajectory $\{\widetilde{\mathbf u}(n), \widetilde{\mathbf x}(n), \widetilde{\mathbf y}(n)\}_{n \ge 0}$ we have
\begin{align} S_{r}(\widetilde{\mathbf x}(1)) & \le \| \widetilde{\mathbf u}(0) \|^{2}_{\cU} - \|
\widetilde{\mathbf y}(0)\|^{2}_{\cY}+ S_{r}(\widetilde{\mathbf x}(0)) \label{toshow2} \\
& = \inf_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, \, n_{-1} < 0} \left\{ \| \widetilde{\mathbf u}(0) \|^{2}_{\cU} - \|
\widetilde{\mathbf y}(0)\|^{2}_{\cY}+
\sum_{n=n_{-1}}^{-1} \left( \| {\mathbf u}(n) \|^{2}_{\cU} - \| {\mathbf y}(n)
\|^{2}_{\cY} \right)\right\}
\notag \end{align} where $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge n_{-1}}$ is a system trajectory subject to the initial condition ${\mathbf x}(n_{-1}) = 0$ and the terminal condition ${\mathbf x}(0) = \widetilde x(0)$. Rewrite the definition of $S_{r}(\widetilde{\mathbf x}(1))$ as \[ S_{r}(\widetilde{\mathbf x}(1)) = \inf_{{\mathbf u} \in {\boldsymbol{\mathcal U}}, \, n_{-1}< 1}
\sum_{n=n_{-1}}^{0} \left( \| {\mathbf u}(n) \|^{2}_{\cU} - \| {\mathbf y}(n)
\|^{2}_{\cY} \right), \] with the system trajectory $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n\ge n_{-1}}$ subject to the initial and terminal conditions ${\mathbf x}(n_{-1}) = 0$ and ${\mathbf x}(1) = \widetilde{\mathbf x}(1)$. Now recognize the argument of the $\inf$ in the right-hand side of \eqref{toshow2} as part of the competition in the infimum defining $S_r(\widetilde{\mathbf x}(1))$ to deduce the inequality \eqref{toshow2}.
{(3)} Let $S$ be any storage function for $\Sigma$ and $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ any system trajectory with initialization ${\mathbf x}(0) = x_0$. Iteration of the dissipation inequality \eqref{disineq} for $S$ along the system trajectory $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ as in the proof of Lemma \ref{L:finH2} yields $$
0 \le S({\mathbf x}(n_1 + 1)) \le S(x_0) + \sum_{n=0}^{n_1} \left( \| {\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2 \right) $$ or $$
\sum_{n=0}^{n_1} \left( \| {\mathbf y}(n) \|^2 - \| {\mathbf u}(n) \|^2 \right) \le S(x_0). $$ Taking the supremum in the left-hand side of the above inequality over all such system trajectories $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge 0}$ and all $n_1 \ge 0$ yields $S_a(x_0) \le S(x_0)$ and the first part of (3) is verified.
Next let $x_{0}\in\cX$ be arbitrary. If $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n\ge n_{-1}}$ is any system trajectory with state-initializa\-tion ${\mathbf x}(n_{-1}) = 0$ and ${\mathbf x}(0) = x_{0}$, applying Lemma \ref{L:finH2} with $N_0=n_{-1}$ and $N=-1-n_{-1}$ gives us that \begin{equation} \label{Stilde-ineq}
S(x_{0}) \le \sum_{n=n_{-1}}^{-1} \left( \|
{\mathbf u}(n)\|^{2}_{\cU} - \| {\mathbf y}(n) \|^{2}_{\cY} \right). \end{equation}
Taking the infimum of the right-hand side over all such system trajectories gives us $S(x_{0}) \le S_{r}(x_{0})$. Here we implicitly assumed that the state $x_{0} \in \cX$ is reachable. If $x_{0}$ is not reachable, there are no such system trajectories, and taking the infimum over an empty set leads to $S_{r}(x_{0}) = \infty$, in which case $S(x_0)\le S_{r}(x_{0})$ is also valid. Hence $S(x_{0}) \le S_{r}(x_{0})$ holds for all possible $x_{0}\in\cX$. This completes the verification of the second part of (3). \end{proof}
Combining Proposition \ref{P:SaSr} with Proposition \ref{P:storage-Schur} leads to the following.
\begin{corollary} \label{C:storageSchur} A discrete-time linear system $\Sigma$ in \eqref{dtsystem} has a transfer function $F_\Sigma$ with an analytic continuation in the Schur class if and only if $\Sigma$ has a storage function $S$. \end{corollary}
\begin{proof} The sufficiency is Proposition \ref{P:storage-Schur}. For the necessity direction, by Proposition \ref{P:SaSr} we may choose $S$ equal to either $S_a$ or $S_r$. \end{proof}
We next impose a minimality assumption on $\Sigma$ and in addition assume that $F_{\Sigma}$ has an analytic continuation in the Schur class $\cS(\cU, \cY)$, i.e., we make the following assumptions: \begin{equation} \label{A}
\hspace*{-.15cm} \left\{ \!\!\! \begin{array}{l}
\mbox{\em $\Si$ is minimal, i.e., $(C,A)$ is observable and $(A,B)$ is controllable,} \\
\mbox{\em and $F_{\Sigma}$ has an analytic continuation to a function in $\cS(\cU, \cY)$.} \end{array} \right. \end{equation}
Our next goal is to understand storage functions from a more operator-theoretic point of view. We first need some preliminaries.
Recall the Laurent operator ${\mathfrak L}_{F_\Si}$ in \eqref{Laurent0}. From the $2 \times 2$-block form for ${\mathfrak L}_{{F_\Si}}$ in \eqref{Laurent}, we see that \begin{equation}\label{LFids} \begin{aligned} I - {\mathfrak L}_{{F_\Si}} {\mathfrak L}_{{F_\Si}}^{*} &= \begin{bmatrix} D_{\widetilde {\mathfrak T}_{{F_\Si}}^{*}}^{2} & - \widetilde {\mathfrak T}_{{F_\Si}} {\mathfrak H}_{{F_\Si}}^{*} \\
- {\mathfrak H}_{{F_\Si}} \widetilde {\mathfrak T}_{{F_\Si}}^{*} &
D_{{\mathfrak T}_{{F_\Si}}^{*}}^{2} - {\mathfrak H}_{{F_\Si}} {\mathfrak H}_{{F_\Si}}^{*} \end{bmatrix};\\ I - {\mathfrak L}_{{F_\Si}}^{*} {\mathfrak L}_{{F_\Si}} &= \begin{bmatrix} D_{\widetilde
{\mathfrak T}_{{F_\Si}}}^{2} - {\mathfrak H}_{{F_\Si}}^{*} {\mathfrak H}_{{F_\Si}} & -{\mathfrak H}_{F_\Si}^*{\mathfrak T}_{{F_\Si}}\\
-{\mathfrak T}_{{F_\Si}}^* {\mathfrak H}_{F_\Si} & D_{{\mathfrak T}_{{F_\Si}}}^{2} \end{bmatrix}, \end{aligned} \end{equation} where in general we use the notation $D_{X}$ for the defect operator $D_{X} = (I - X^{*} X)^{\half}$ of a contraction operator $X$. Since ${F_\Si}$ is assumed to be a Schur class function, ${\mathfrak T}_{F_\Si}$ and $\widetilde {\mathfrak T}_{{F_\Si}}$ are contractions, and hence $D_{{\mathfrak T}_{{F_\Si}}}$, $D_{ {\mathfrak T}_{{F_\Si}}^{*}}$, $D_{\widetilde {\mathfrak T}_{{F_\Si}}}$ and $D_{\widetilde {\mathfrak T}_{{F_\Si}}^{*}}$ are well defined.
\begin{lemma}\label{L:SaSrOpForm} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions \eqref{A}. The available storage function $S_a$ and required supply function $S_r$ can then be written in operator form as \begin{align} \label{SaOpForm} S_{a}(x_{0}) &= \!\! \sup_{ {\mathbf u} \in \ell^{2}_{\cU}({\mathbb Z}_{+})}
\| \bW_{o} x_{0} + {\mathfrak T}_{{F_\Si}} {\mathbf u}
\|^{2}_{\ell^{2}_{\cY}({\mathbb
Z}_{+})} - \| {\mathbf u} \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}\ (x_0\in \cD(\bW_o)) \\
S_r(x_0)&=\inf_{{\mathbf u}\in\ell_{\tu{fin},\cU}(\BZ_-),\, x_0=\bW_c {\mathbf u}}\|D_{\widetilde\fT_{F_\Si}}{\mathbf u}\|^2\quad (x_0\in\cX), \label{SrOpForm} \end{align}
and $S_{a}(x_{0})=+\infty$ for $x_0\not\in\cD(\bW_o)$. Here $\bW_o$ and $\bW_c$ are the observability and controllability operators defined via \eqref{bWo1}--\eqref{bWc*2} and $\ell_{\tu{fin},\cU}(\BZ_-)$ is the linear manifold of finitely supported sequences in $\ell^2_\cU(\BZ_-)$. In particular, $S_r(x_0)<\infty$ if and only if $x_0\in\operatorname{Rea}\,(A|B)$. \end{lemma}
\begin{proof} We shall use the notation $P_\pm$ and $P_{[m,n]}$ as introduced in the discussion immediately preceding the statement of Lemma \ref{L:prep}.
We start with $S_{a}$. For each system trajectory $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = x_0$ and with ${\mathbf u}\in \ell^2_\cU(\BZ_+)$ by linearity we have \[ {\mathbf y}= \bW_o x_0 + \fT_{F_\Sigma} {\mathbf u}. \] Now note that, for each system trajectory $({\mathbf u}(n),{\mathbf x}(n),{\mathbf y}(n))_{n \ge 0}$ with initialization ${\mathbf x}(0) = x_0$ but with ${\mathbf u}$ not necessarily in $\ell^2_\cU(\BZ_+)$ and with $n_1\geq 0$, by the causality property \eqref{causal}, as in the proof of Lemma \ref{L:prep} we see that we can replace ${\mathbf u}$ with $P_{[0,n_1]}{\mathbf u}\in \ell_{{\rm fin}, \cU}({\mathbb Z}_{+}) \subset \ell^2_\cU (\BZ_+)$ within the supremum in \eqref{Sa2} without changing the value. Therefore, the value of $S_a$ at $x_0$ can be rewritten in operator form as \begin{align} &S_{a}(x_{0})=\notag \\ &=\!\!\! \sup_{{\mathbf u} \in \ell_{{\rm fin}, \cU}({\mathbb Z}_{+}), \,
n_{1} \ge 0} \| P_{[0,n_{1}]} (\bW_{o}x_{0} + {\mathfrak T}_{{F_\Si}} {\mathbf u}
)\|^{2}_{\ell^{2}_{\cY}({\mathbb Z}_{+})}
- \| P_{[0, n_{1}]} {\mathbf u}
\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})} \label{sup'-pre} \end{align} where we use the notation $\ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ for $\cU$-valued sequences on ${\mathbb Z}_+$ of finite support.
If $x_0 \notin \cD(\bW_o)$ so that $\bW_o x_0 \notin \ell^2_\cY({\mathbb Z}_+)$, the above formulas are to be interpreted algebraically, and we may choose ${\mathbf u} = 0$ and take the limit as $n_1 \to \infty$ to see that $S_a(x_0) = +\infty$.
Now assume $x_0\in \cD(\bW_o)$. Fix ${\mathbf u} \in \ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ and take the limit as $n_1 \to +\infty$ in the right hand side
of \eqref{sup'-pre} to see that an equivalent expression for $S_a(x_0)$ is
$$
S_a(x_0) = \sup_{{\mathbf u} \in \ell_{{\rm fin}, \cU}({\mathbb Z}_+)} \| \bW_o x_0 + \fT_{F_\Sigma} {\mathbf u} \|^2 - \| {\mathbf u} \|^2.
$$ Since $\ell_{{\rm fin}, \cU}({\mathbb Z}_+)$ is dense in $\ell^2_\cU({\mathbb Z}_+)$ and $\fT_{F_\Sigma}$ is a bounded operator, we see that another equivalent expression for $S_a(x_0)$ is \eqref{SaOpForm}. This completes the verification of \eqref{SaOpForm}.
We next look at $S_r$. Let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge n_{-1}}$ be any system trajectory with initialization ${\mathbf x}(n_{-1}) = 0$ for some $n_{-1}<0$. Let us identify ${\mathbf u}$ with an element ${\mathbf u} \in \ell_{{\rm fin}, \cU}({\mathbb Z}_-)$ by ignoring the values of ${\mathbf u}$ on ${\mathbb Z}_+$ and defining ${\mathbf u}(n) = 0$ for $n < n_{-1}$. Then, as a consequence of item (1) in Proposition \ref{P:Wc}, the constraint ${\mathbf x}(0) = x_0$ in \eqref{Sr2} can be written in operator form as $\bW_c {\mathbf u} = x_0$. Furthermore, since $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge n_{-1}}$ is a system trajectory with zero state initialization at $n_{-1}$, it follows that $$
{\mathbf y}|_{{\mathbb Z}_-} = \widetilde \fT_{F_\Sigma} {\mathbf u}. $$ We conclude that a formula for $S_r$ equivalent to \eqref{Sr2} is $$ S_r(x_0) = \inf_{ {\mathbf u} \in \ell_{{\rm fin}, \cU}({\mathbb Z}_-) \colon \bW_c {\mathbf u} = x_0}
\| {\mathbf u} \|^2 - \| \widetilde \fT_{F_\Sigma} {\mathbf u} \|^2 $$
which in turn has the more succinct formulation \eqref{SrOpForm}. If $x_0 \in \operatorname{Rea}\,(A|B)$, then the infimum in \eqref{SrOpForm} is taken over a nonempty set, so that $S_r(x_0)<\infty$. On the other hand, if $x_0 \not\in \operatorname{Rea}\,(A|B)$, then the infimum is taken over an empty set, so that $S_r(x_0)=\infty$. \end{proof}
To compute storage functions more explicitly for the case where assumptions \eqref{A} are in place, it will be convenient to restrict to what we shall call {\em $\ell^2$-regular storage functions} $S$, namely, storage functions $S$ which assume finite values on $\textup{Im\,} \bW_c$: \begin{equation} \label{reg-storage}
x_0 = \bW_c {\mathbf u} \text{ where } {\mathbf u} \in \cD(\bW_c) \Rightarrow S(x_0) < \infty.
\end{equation}
We shall see in the next result that $S_a$ is $\ell^2$-regular. However, unless $\operatorname{Rea}\,(A|B)$ is equal to the range of $\bW_c$, the required supply $S_r$ will not be $\ell^2$-regular (by the last assertion of Lemma \ref{L:SaSrOpForm}).
To remedy this situation, we introduce the following modification $\underline{S}_r$ of the required supply $S_r$, which we shall call the {\em $\ell^2$-regularized required supply}: \begin{equation} \label{uSr2}
\underline{S}_r(x_0) = \inf_{{\mathbf u} \in \cD(\bW_c) \colon \bW_c {\mathbf u} = x_0} \sum_{n=-\infty}^{-1} \left( \| {\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2 \right)
\end{equation}
where ${\mathbf u} \in \ell^2_\cU({\mathbb Z}_-)$ determines ${\mathbf y} \in \ell^2_\cY({\mathbb Z}_-)$ via the system input/output map:\ ${\mathbf y} = \widetilde \fT_{F_\Sigma} {\mathbf u}$. Thus formula \eqref{uSr2} can be written more succinctly in operator form as
\begin{align}
\underline{S}_r(x_0) & = \inf_{{\mathbf u} \in \cD(\bW_c) \colon \bW_c {\mathbf u} = x_0}
\| {\mathbf u} \|^2_{\ell^2_\cU({\mathbb Z}_-)} - \| \widetilde \fT_{F_\Sigma} {\mathbf u} \|^2_{\ell^2_\cY({\mathbb Z}_-)}
\notag \\ & = \inf_{{\mathbf u} \in \cD(\bW_{c}), \, \bW_{c} {\mathbf u} = x_{0}}
\| D_{\widetilde {\mathfrak T}_{{F_\Si}}} {\mathbf u}
\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{-})} \text{ for } x_0\in \textup{Im\,} \bW_c.
\label{uSrOpForm}
\end{align} It is clear that $\underline{S}_r(x_0)<\infty$ if and only if $x_0\in \textup{Im\,} \bW_c$.
Since the objective in the infimum defining $\underline{S}_r$ in \eqref{uSrOpForm} is the same as the
objective in the infimum defining $S_r$ in \eqref{SrOpForm} but the former infimum is taken over an a priori larger set,
it follows directly that $S_r(x_0)\geq \underline{S}_r(x_0)$ for all $x_0\in\cX$, as can also be seen as a consequence of
Proposition \ref{P:SaSr} once we show that $\underline{S}_r$ is a storage function for $\Sigma$.
From either of the formulas we see that $0\leq\underline{S}_r(x_0)$ and that $\underline{S}_r(x_0) <\infty$ exactly when $x_0$ is
in the range of $\bW_c$. Hence once we show that $\underline{S}_r$ is a storage function, it follows that $\underline{S}_r$ is an
$\ell^2$-regular storage function and is a candidate to be the largest such. However, at this stage we have
only partial results in this direction, as laid out in the next result.
\begin{proposition} \label{P:uSr} Assume that $\Sigma$ is a system satisfying the assumptions \eqref{A}
and let the function $\underline{S}_r \colon \textup{Im\,} \bW_c \to {\mathbb R}_+$ be given by \eqref{uSrOpForm}. Then:
\begin{enumerate}
\item[(1)] $S_a$ and $\underline{S}_r$ are $\ell^2$-regular storage functions.
\item[(2)] $\underline{S}_r$ is ``almost'' the largest $\ell^2$-regular storage function in the following sense:
if $S$ is another $\ell^2$-regular storage function such that either
\begin{enumerate}
\item $S$ is $\cD(\bW_c^*)$-weakly continuous in the sense that: given a sequence $\{ x_n\} \subset \textup{Im\,} \bW_c$ and $x_0 \in \textup{Im\,} \bW_c$ such that
$$
\lim_{n \to \infty} \langle x, x_n \rangle_\cX = \langle x, x_0 \rangle_\cX \text{ for all } x \in \cD(\bW_c^*),
$$
then $\lim_{n \to \infty} S(x_n) = S(x_0)$, or
\item $\bW_c$ is bounded
and $S$ is continuous on $\cX$ {\rm(}with respect to the norm topology on $\cX${\rm)},
\end{enumerate}
then $S(x_0) \le \underline{S}_r(x_0)$ for all $x_0\in\cX$.
\end{enumerate}
\end{proposition}
\begin{proof}
We first prove item (1), starting with the claim for $S_a$. Since by assumption $\Si$ is minimal and $F_\Si$ has an analytic continuation to a Schur class function, by item (1) of Proposition \ref{P:HankelDecs}, $\textup{Im\,} \bW_c\subset \cD(\bW_o)$. So on $\textup{Im\,} \bW_c$, the available storage $S_a$ is given by \eqref{SaOpForm}. It remains to show that for $x_0\in \textup{Im\,} \bW_c$ the formula for $S_a(x_0)$ in \eqref{SaOpForm} gives a finite value. So assume $x_0\in \textup{Im\,} \bW_c$, say $x_0=\bW_c {\mathbf u}_-$ for some ${\mathbf u}_-\in\cD(\bW_c)$. Choose ${\mathbf u}_+\in\ell^2_\cU(\BZ_+)$ arbitrarily and define ${\mathbf u}\in\ell^2_\cU(\BZ)$ by setting $P_- {\mathbf u}={\mathbf u}_-$ and $P_+ {\mathbf u}={\mathbf u}_+$. Then $\bW_o x_0=\bW_o \bW_c {\mathbf u}_-= \fH_{F_\Si}{\mathbf u}_-$. Thus, using the decomposition of $\fL_{F_\Si}$ in \eqref{Laurent} and the fact that $\|\fL_{F_\Si}\|\leq 1$, we find that \begin{align*}
&\| \bW_{o} x_{0} + {\mathfrak T}_{{F_\Si}} {\mathbf u}_+\|^{2} - \| {\mathbf u}_+ \|^{2}
=\| \fH_{F_\Si}{\mathbf u}_- + {\mathfrak T}_{{F_\Si}} {\mathbf u}_+\|^{2} - \| {\mathbf u}_+ \|^{2}\\
&\qquad\qquad=\| P_+\fL_{F_\Si} {\mathbf u}\|^{2} - \|P_+ {\mathbf u} \|^{2}
= \|P_- {\mathbf u} \|^{2}+ \| P_+\fL_{F_\Si} {\mathbf u}\|^{2} - \| {\mathbf u} \|^{2}\\
&\qquad\qquad\leq \|P_- {\mathbf u} \|^{2}=\|{\mathbf u}_-\|^2. \end{align*}
Since the upper bound $\|{\mathbf u}_-\|^2$ is independent of the choice of ${\mathbf u}_+\in \ell^2_\cU(\BZ_+)$, we can take the supremum over all ${\mathbf u}_+\in \ell^2_\cU(\BZ_+)$ to arrive at the inequality $S_a(x_0)\leq \|{\mathbf u}_-\|^2<\infty$.
Next we prove the statement of item (1) concerning $\underline{S}_r$. By the discussion immediately preceding the statement of the proposition, it follows that $\underline{S}_r$ is an $\ell^2$-regular storage function once we show that $\underline{S}_r$ is a storage function, that is, $\underline{S}_r(0) = 0$ and that $\underline{S}_r$ satisfies the dissipation inequality \eqref{disineq}.
If $x_0 = 0$, we can choose ${\mathbf u} = 0$ as the argument in the right hand side of \eqref{uSrOpForm} to conclude that $\underline{S}_r(0) \le 0$. As we have already seen that $\underline{S}_r(x_0) \ge 0$ for all $x_0$, we conclude that $\underline{S}_r(0) = 0$.
To complete the proof of item (1), it remains to show that $\underline{S}_r$ satisfies the dissipation inequality \eqref{disineq}. By shift invariance we may take $n=N_0 = 0$ in \eqref{disineq}. If $\widetilde {\mathbf x}(0) \notin \textup{Im\,} \bW_c$, then $\underline{S}_r(\widetilde {\mathbf x}(0)) = \infty$ and \eqref{disineq} holds trivially. We therefore assume that $(\widetilde {\mathbf u}(n), \widetilde {\mathbf x}(n), \widetilde {\mathbf y}(n))_{n \ge 0}$ is a system trajectory with initialization $\widetilde {\mathbf x}(0) = x_0 = \bW_c {\mathbf u}_-$ for some ${\mathbf u}_- \in \cD(\bW_c)$ and the problem is to show \begin{align}
&\underline{S}_r(\widetilde {\mathbf x}(1)) \le \|\widetilde {\mathbf u}(0) \|^2 - \| \widetilde {\mathbf y}(0) \|^2 + \underline{S}_r(\widetilde {\mathbf x}(0))= \label{tocheck} \\
& \quad = \inf_{{\mathbf u} \in \cD(\bW_c) \colon \bW_c {\mathbf u} = \widetilde {\mathbf x}(0)} \left[
\|\widetilde {\mathbf u}(0) \|^2 - \| \widetilde {\mathbf y}(0) \|^2 + \sum_{n=-\infty}^{-1} \left( \| {\mathbf u}(n) \|^2 - \| {\mathbf y}(n) \|^2
\right) \right], \notag \end{align} where ${\mathbf y} = \widetilde \fT_{F_\Sigma} {\mathbf u}$. As $(\widetilde {\mathbf u}(n), \widetilde {\mathbf x}(n), \widetilde {\mathbf y}(n))_{n \ge 0}$ is a system trajectory initiated at 0, we know that $\widetilde {\mathbf x}(1) = A \widetilde {\mathbf x}(0) + B \widetilde {\mathbf u}(0)$ and $\widetilde {\mathbf y}(0) = C \widetilde {\mathbf x}(0) + D \widetilde {\mathbf u}(0)$. On the other hand, by translation-invariance of the system equations \eqref{dtsystem} we may rewrite the formula \eqref{uSr2} for $\underline{S}_r(\widetilde {\mathbf x}(1))$ as \begin{equation} \label{uSr-shift} \underline{S}_r(\widetilde {\mathbf x}(1)) = \inf_{ {\mathbf u}' \in \cD(\bW_c^{(1)}) \colon \bW_c^{(1)} {\mathbf u}' = \widetilde {\mathbf x}(1)}
\sum_{n=-\infty}^0 \left( \| {\mathbf u}'(n) \|^2 - \| {\mathbf y}'(n) \|^2 \right), \end{equation} where $\bW_c^{(1)}$ is the shifted controllability operator discussed in Remark \ref{R:Wc} and where ${\mathbf y}' =\widetilde \fT_{F_\Sigma}^{(1)} {\mathbf u}'$; here now ${\mathbf u}'$ is supported on $(-\infty, 0]$ rather than on
${\mathbb Z}_- = (-\infty, 0)$ and $\widetilde \fT_{F_\Sigma}^{(1)}$ is the shift of $\widetilde \fT_{F_\Sigma}$
from the interval ${\mathbb Z}_-$ to the interval $(-\infty, 0]$.
Let us write sequences ${\mathbf u}' \in \ell^2_\cU((-\infty, 0])$ in the form ${\mathbf u}' = ({\mathbf v}', v')$ as in Remark \ref{R:Wc}
where ${\mathbf v}' \in \ell^2_\cU({\mathbb Z}_-)$ and $v' \in \cU$. As observed in Remark \ref{R:Wc},
$$
\bW_c^{(1)}({\mathbf v}', v') = A \bW_c {\mathbf v}' + B v'.
$$
Furthermore, from the structure of the Laurent operator $\fL_{F_\Sigma}$ \eqref{Laurent} we read off that
\begin{equation} \label{shift-Toeplitz}
\widetilde \fT_{F_\Sigma}^{(1)} ({\mathbf v}', v') =
\left( \widetilde \fT_{F_\Sigma} {\mathbf v}', \sum_{k=-\infty}^{-1} C A^{-k-1} B {\mathbf v}'(k) + D v' \right) \end{equation} where the series converges at least in the weak topology of $\cY$. For ${\mathbf v}' \in \cD(\bW_c)$, we know from Proposition \ref{P:WcWo'} that $\bW_c {\mathbf v}'$ is given by \begin{equation} \label{Wc-formula} \bW_c {\mathbf v}' = \sum_{k=-\infty}^{-1} A^{-k-1} B {\mathbf v}'(k) \end{equation} where the series converges $\cD(\bW_c^*)$-weakly. We also know under our standing assumption \eqref{A} that
$\operatorname{Obs}\,(C|A) \subset \cD(\bW_c^*)$ (see Proposition \ref{P:HankelDecs} (2)), and hence in particular
$C^* y \in \cD(\bW_c^*)$ for all $y \in \cY$. This observation combined with the formula \eqref{Wc-formula}
implies that
$$
C \bW_c {\mathbf v}' = \sum_{k=-\infty}^{-1} C A^{-k-1} B {\mathbf v}'(k)
$$
where the series converges weakly in $\cY$.
This combined with \eqref{shift-Toeplitz} gives us
$$
\widetilde \fT_{F_\Sigma}^{(1)} ({\mathbf v}', v') =
\left( \widetilde \fT_{F_\Sigma} {\mathbf v}', C \bW_c {\mathbf v}' + Dv' \right).
$$
Thus the formula \eqref{uSr-shift} for $\underline{S}_r(\widetilde {\mathbf x}(1))$ can be written out in more detail as
\begin{equation} \label{uSrtildex(1)} \hspace*{-.2cm} \underline{S}_r(\widetilde {\mathbf x}(1)) =\!\!\! \inf_{ ({\mathbf v}', v') \in \cT' } \! \left\{\!
( \| {\mathbf v}' \|^2 - \| \widetilde \fT_{F_\Sigma} {\mathbf v}' \|^2) + \| v' \|^2 - \| C \bW_c {\mathbf v}' + Dv'\|^2\!
\right\}
\end{equation}
where
\begin{equation} \label{cT'} \cT' : = \{ ({\mathbf v}', v') \colon {\mathbf v}' \in \cD(\bW_c), \, v' \in \cU, \ A \bW_c {\mathbf v}' + Bv' = \widetilde {\mathbf x}(1)\}.
\end{equation}
Note that the infimum \eqref{tocheck} can be identified with the infimum \eqref{uSrtildex(1)}
if we restrict the free parameter $({\mathbf v}', v')$ to lie in the subset
$$
\cT = \{ ({\mathbf v}', v') \in \cT' \colon
\bW_c {\mathbf v}' = \widetilde {\mathbf x}(0), \quad v' = \widetilde {\mathbf u}(0)\}.
$$
As the infimum of an objective function over a given set $\cT'$
is always bounded above by the infimum of the same objective function over a subset $\cT \subset \cT'$, the
inequality \eqref{tocheck} now follows as wanted.
It remains to address item (2), i.e., to show that $S(x_0) \le \underline{S}_r(x_0)$ for any other storage function
$S$ satisfying appropriate hypotheses. If $x_0 \notin \textup{Im\,} \bW_c$, $\underline{S}_r(x_0) = \infty$ and the desired inequality holds trivially, so we assume that $x_0 = \bW_c {\mathbf u}$ for some ${\mathbf u} \in \cD(\bW_c)$. Let us approximate ${\mathbf u}$ by elements of $\ell_{\tu{fin}, \, \cU}({\mathbb Z}_-)$ in the natural way: $$
{\mathbf u}_K(n) = \begin{cases} {\mathbf u}(n) &\text{for } -K \le n \le -1, \\
0 &\text{for } n< -K \end{cases} $$ for $K=1,2,\dots$, and set $x_K = \bW_c {\mathbf u}_K$. We let $({\mathbf u}(n), {\mathbf x}(n), {\mathbf y}(n))_{n \ge -K}$ be a system trajectory with ${\mathbf u}(n) = {\mathbf u}_K(n)$ and with the state initialization ${\mathbf x}(-K) = 0$. Then, as ${\mathbf x}(0)$ will then be equal to $x_K$, iteration of the dissipation inequality \eqref{disineq} gives us \begin{equation} \label{tS-ineq}
S(x_K) \le \sum_{n=-K}^{-1} \left( \| {\mathbf u}_K(n) \|^2 - \| (\widetilde \fT_{F_\Sigma} {\mathbf u}_K)(n) \|^2
\right). \end{equation} We seek to let $K \to \infty$ in this inequality. As ${\mathbf u}_K \to {\mathbf u}$ in the norm topology of $\ell^2_\cU({\mathbb Z}_-)$
and $\| \widetilde \fT_{F_\Sigma} \| \le 1$ since $F_\Si$ is in the Schur class by assumption, it is clear that the right hand side of \eqref{tS-ineq} converges to $$
\| {\mathbf u} \|^2_{\ell^2_\cU({\mathbb Z}_-) } - \| \widetilde \fT_{F_\Sigma} {\mathbf u} \|^2_{\ell^2_\cY({\mathbb Z}_-)}=
\| D_{\widetilde \fT_{F_\Sigma}} {\mathbf u} \|^2_{\ell^2_\cU({\mathbb Z}_-)} $$ as $K \to \infty$. On the other hand, as a consequence of the characterization \eqref{limit-c} of the action of $\bW_c$, it follows that $x_K = \bW_c {\mathbf u}_K$ converges to $x_0 = \bW_c {\mathbf u}$ in the $\cD(\bW_c^*)$-weak sense. Hence, if $S$ is continuous with respect to the $\cD(\bW_c^*)$-weak topology as described in the statement of item (a), we see that $S(x_K) \to S(x_0)$ as $K \to \infty$ and we arrive at the limiting version of inequality \eqref{tS-ineq}: \begin{equation} \label{limit-wS-ineq}
S(x_0) \le \| {\mathbf u}\|^2 - \| \widetilde \fT_{F_\Sigma} {\mathbf u} \|^2 =
\| D_{\widetilde \fT_{F_\Sigma}} {\mathbf u} \|^2_{\ell^2_\cU({\mathbb Z}_-)}. \end{equation} We may now take the infimum over all ${\mathbf u} \in \cD(\bW_c)$ with $\bW_c {\mathbf u} = x_0$ to arrive at the desired inequality $S(x_0) \le \underline{S}_r(x_0)$. This proves item (a) of (2). If $\bW_c$ is bounded, then $x_K = \bW_c {\mathbf u}_K$ converges in norm to $\bW_c {\mathbf u} = x_0$. If $S$ is continuous with respect to the norm topology on $\cX$, then $S(x_K) \to S(x_0)$ and we again arrive at the limit inequality \eqref{limit-wS-ineq}, from which the desired inequality $S(x_0) \le \underline{S}_r(x_0)$ again follows. This completes the verification of item (2) in Proposition \ref{P:uSr}. \end{proof}
\begin{remark} Note that the fact that $S_a$ is $\ell^2$-regular can alternatively be seen from the fact that $\underline{S}_r$ is an $\ell^2$-regular storage function combined with the first inequality in item (3) of Proposition \ref{P:SaSr}. \end{remark}
Collecting some of the observations on the boundedness of $S_a$ and $\underline{S}_r$ from the above results we obtain the following corollary. The inequalities in \eqref{ineqs} follow directly from \eqref{SaOpForm} and \eqref{uSrOpForm}.
\begin{corollary}\label{C:boundedSauSr} Assume $\Si$ as in \eqref{dtsystem} is a system satisfying the assumptions \eqref{A}. Define $S_a$ by \eqref{Sa2} and $\underline{S}_r$ by \eqref{uSr2}. For $x_0\in\cD(\bW_o)$ we have \begin{equation}\label{ineqs}
\|\bW_ox_0\|^2\leq S_a(x_0)\leq \underline{S}_r(x_0)\leq \|{\mathbf u}_-\|^2 \end{equation} for all ${\mathbf u}_-\in\cD(\bW_c)$ with $x_0=\bW_c {\mathbf u}_-$, with the last inequality being vacuous if $x_0\not\in\textup{Im\,} \bW_c$, in which case $\underline{S}_r(x_0)=\infty$. Hence \begin{align*} \underline{S}_r(x_0)<\infty \quad &\Longleftrightarrow \quad x_0\in \textup{Im\,} \bW_c,\\ x_0\in \textup{Im\,} \bW_c \quad \Longrightarrow\quad &S_a(x_0)<\infty \quad \Longrightarrow\quad x_0\in\cD(\bW_o). \end{align*} In particular, $\underline{S}_r$ is finite-valued if and only if $\textup{Im\,} \bW_c=\cX$, that is, $\Si$ is $\ell^2$-exactly controllable, and $S_a$ is finite-valued in case $\Si$ is $\ell^2$-exactly controllable. \end{corollary}
Since ${F_\Si}$ is assumed to be a Schur class function, ${\mathfrak L}_{{F_\Si}}$ is a contraction, so that $I - {\mathfrak L}_{{F_\Si}} {\mathfrak L}_{{F_\Si}}^{*}$ and $I - {\mathfrak L}_{{F_\Si}}^{*} {\mathfrak L}_{{F_\Si}}$ are positive-semidefinite operators. We can thus read off from the $(2,2)$-entry in the right-hand side of the first identity and the $(1,1)$-entry in the right hand side of the second identity of \eqref{LFids} that \begin{equation} \label{Hankel-est} D_{{\mathfrak T}_{{F_\Si}}^{*}}^{2} \succeq {\mathfrak H}_{{F_\Si}} {\mathfrak H}_{{F_\Si}}^{*} \quad\mbox{and}\quad D_{\widetilde {\mathfrak T}_{{F_\Si}}}^{2}\succeq {\mathfrak H}_{{F_\Si}}^{*} {\mathfrak H}_{{F_\Si}}.
\end{equation} The observability and controllability assumptions of \eqref{A} imply that the observability operator $\bW_o:\cD(\bW_o)\to \ell^2_\cY(\BZ_+)$ and the controllability operator $\bW_c:\cD(\bW_c)\to\cX$ are closed densely defined operators that satisfy the properties listed in Propositions \ref{P:WcWo'} and \ref{P:HankelDecs}. As spelled out in Proposition \ref{P:HankelDecs}, the Hankel operator ${\mathfrak H}_{{F_\Si}}$ admits the factorizations \begin{equation}\label{HankelFact}
{\mathfrak H}_{F_\Si}|_{\cD(\bW_{c})} = \bW_{o} \bW_{c}\quad\mbox{and}\quad
{\mathfrak H}_{F_\Si}^*|_{\cD(\bW_{o}^*)} = \bW_{c}^* \bW_{o}^*.
\end{equation}
Using the Douglas factorization lemma \cite{Douglas} together with the factorizations
\eqref{HankelFact}, we arrive at the following result. The proof also requires use of
the Moore-Penrose generalized inverse $X^\dagger$ of a densely defined closed linear Hilbert-space operator $X:\cD(X)\to\cH_2$, with $\cD(X)\subset\cH_1$: we define $X^\dagger \colon \cD(X^\dagger) = (\textup{Im\,} X \oplus (\textup{Im\,} X)^\perp) \to \cH_1$ by \begin{equation} \label{MP} \left\{ \begin{array}{rcl} X^\dagger (X h_1) & = & P_{ (\textup{Ker\,} X)^\perp} h_1, \\
X^\dagger|_{(\textup{Im\,} X)^\perp} & = & 0. \end{array} \right. \end{equation} Then $X^\dagger$ is also closed and has the properties $$
X^\dagger X = P_{(\textup{Ker\,} X)^\perp}|_{\cD(X)}, \quad
X X^\dagger = P_{\overline{\textup{Im\,}} X}|_{ \textup{Im\,} X \oplus (\textup{Im\,} X)^\perp }. $$ In particular, if $X$ is bounded and surjective, then $X^\dagger$ is a bounded right inverse of $X$, and, if $X$ is bounded, bounded below and injective, then $X^\dagger$ is a bounded left inverse of $X$.
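In finite dimensions these properties of the Moore-Penrose generalized inverse can be checked directly; the following sketch (with a hypothetical rank-deficient matrix, so that both $\textup{Ker\,} X$ and $(\textup{Im\,} X)^\perp$ are nontrivial) illustrates the projection identities and the minimum-norm property used below for $\bW_c^\dagger$:

```python
import numpy as np

# A rank-deficient 3x3 matrix: both Ker X and (Im X)-perp are nontrivial.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [0.0, 1.0, 1.0]])
Xd = np.linalg.pinv(X)            # Moore-Penrose generalized inverse

# X^dagger X is the orthogonal projection onto (Ker X)^perp ...
P_ker_perp = Xd @ X
# ... and X X^dagger is the orthogonal projection onto Im X.
P_im = X @ Xd
for P in (P_ker_perp, P_im):
    assert np.allclose(P, P.T)    # selfadjoint
    assert np.allclose(P @ P, P)  # idempotent

# For x in Im X, X^dagger x is the minimum-norm solution u of X u = x:
# it solves the equation and is orthogonal to Ker X.
x = X @ np.array([1.0, -1.0, 2.0])
u = Xd @ x
assert np.allclose(X @ u, x)
assert np.allclose(P_ker_perp @ u, u)
```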
\begin{lemma}\label{L:fact} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions in \eqref{A}. Then: \begin{enumerate}
\item[(1)]
There exists a unique closable operator $X_a$ with domain $\textup{Im\,} \bW_c$ mapping into $ (\textup{Ker\,} D_{{\mathfrak T}_{F_{\Sigma}}^{*}})^\perp \subset \ell^2_\cY(\BZ_+)$ so that we have the factorization \begin{equation} \label{fact1}
\bW_{o}|_{\textup{Im\,} \bW_{c}} = D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a}. \end{equation} Moreover, if we let $\overline{X}_{a}$ denote the closure of $X_{a}$, then $\overline{X}_{a}$ is injective.
\item[(2)] There exists a unique closable operator $X_{r}$ with domain $\textup{Im\,} \bW_{o}^{*}$ mapping into $(\textup{Ker\,} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}})^\perp \subset \ell^{2}_{\cU}({\mathbb Z}_{-})$ so that we have the factorization \begin{equation} \label{fact2}
\bW_{c}^{*}|_{\textup{Im\,} \bW_{o}^{*}} = D_{\widetilde {\mathfrak T}_{F_{\Sigma}}}
X_{r}. \end{equation} Moreover, if we let $\overline{X}_{r}$ denote the closure of $X_{r}$, then $\overline{X}_{r}$ is injective. \end{enumerate} \end{lemma}
\begin{proof}
As statement (2) is just a dual version of statement (1), we only
discuss the proof of (1) in detail.
Apply the Douglas factorization lemma to the first of the inequalities in \eqref{Hankel-est} to get the existence of a unique contraction operator \[ Y_a:\ell^{2}_{\cU}({\mathbb Z}_{-})\to (\textup{Ker\,} D_{{\mathfrak T}_{F_{\Sigma}}^{*}})^\perp \subset \ell^{2}_{\cY}({\mathbb Z}_{+}) \] such that \[ D_{{\mathfrak T}_{{F_\Si}}^{*}}Y_{a} ={\mathfrak H}_{{F_\Si}},
\quad\mbox{so that, by \eqref{HankelFact},}\quad D_{{\mathfrak T}_{{F_\Si}}^{*}}Y_{a}|_{\cD(\bW_{c})} =\bW_{o} \bW_{c}. \] If we let $\bW_{c}^{\dagger}$ be the Moore-Penrose generalized inverse \eqref{MP} of $\bW_c$, then \[ \bW_{c}^{\dagger} (x) = {\text{arg min }} \{
\|{\mathbf u}\|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{-})} \colon {\mathbf u}\in\cD(\bW_c),\ x =\bW_{c} {\mathbf u} \}\quad (x\in \textup{Im\,} \bW_c). \] Since $\bW_c$ is closed, $\textup{Ker\,} \bW_c$ is a closed subspace of $\ell^2_\cU(\BZ_-)$ and for all ${\mathbf u}\in\cD(\bW_c)$ with $x = \bW_{c} {\mathbf u}$ we have $\bW_{c}^{\dagger} (x)= {\mathbf u}-P_{\textup{Ker\,} \bW_c}{\mathbf u}$. We next define $X_{a} \colon \textup{Im\,} \bW_{c} \to \ell^{2}_{\cY}({\mathbb Z}_{+})$ by $$
X_{a} = Y_{a} \bW_{c}^{\dagger}. $$ Then $X_{a}$ is a well-defined, possibly unbounded, operator on the dense domain $\cD(X_{a}) = \textup{Im\,} \bW_{c}$. Moreover we have \[ D_{{\mathfrak T}_{{F_\Si}}^{*}} X_a = D_{{\mathfrak T}_{{F_\Si}}^{*}} Y_{a} \bW_{c}^{\dagger} = {\mathfrak H}_{{F_\Si}} \bW_{c}^{\dagger} = \bW_{o} \bW_{c} \bW_{c}^{\dagger} =
\bW_{o}|_{\textup{Im\,} \bW_{c}}. \] Hence $X_a$ provides the factorization \eqref{fact1}. Furthermore, $X_a = Y_{a} \bW_{c}^{\dagger}$ implies that $\textup{Im\,} X_a \subset \textup{Im\,} Y_a$, so that $\textup{Im\,} X_a \subset (\textup{Ker\,} D_{{\mathfrak T}_{{F_\Si}}^{*}})^\perp$. Moreover, from the factorization \eqref{fact1} we see that this property makes the choice of $X_{a}$ unique.
We now check that $X_a$ so constructed is closable. Suppose that $\{x_{0}^{(k)}\}_{k \ge 0}$ is a sequence of vectors in $\textup{Im\,} \bW_{c}$ such that $\lim_{k \to \infty} x_{0}^{(k)} = 0$ in $\cX$-norm, while $\lim_{k \to \infty } X_a x_{0}^{(k)} = {\mathbf y}$ in $\ell^{2}_{\cY}({\mathbb Z}_{+})$-norm. As $D_{{\mathfrak T}_{{F_\Si}}^{*}}$ is bounded, it follows that \[ \lim_{k \to \infty} \bW_{o} x_{0}^{(k)} = \lim_{k \to \infty} D_{{\mathfrak T}_{{F_\Si}}^{*}} X_a x_{0}^{(k)} = D_{{\mathfrak T}_{{F_\Si}}^{*}} {\mathbf y} \text{ in } \ell^{2}_{\cY}({\mathbb Z}_{+})\text{-norm.} \] Since $\bW_{o}$ is a closed operator and we have $x_{0}^{(k)} \to 0$ in $\cX$-norm, it follows that $D_{{\mathfrak T}_{{F_\Si}}^{*}} {\mathbf y} = 0$. As $\textup{Im\,} X_a \subset (\textup{Ker\,} D_{{\mathfrak T}_{{F_\Si}}^{*}})^{\perp}$ and $X_a x_{0}^{(k)} \to {\mathbf y}$, we also have that ${\mathbf y} \in (\textup{Ker\,} D_{{\mathfrak T}_{{F_\Si}}^{*}})^{\perp}$. It follows that ${\mathbf y} = 0$, and hence $X_a$ is closable.
Let $\overline{X}_{a}$ be the closure of $X_{a}$. We check that $\overline{X}_{a}$ is injective as follows. The vector $x_{0}$ being in $\cD(\overline{X}_{a})$ means that there is a sequence of vectors $\{x^{(k)}_{0}\}_{k \ge 1}$ contained in $\cD(X_{a})$ with $\lim_{k \to \infty} x_{0}^{(k)} = x_{0}$ in $\cX$ and $\lim_{k \to \infty} X_{a} x_{0}^{(k)} = {\mathbf y}$ for some ${\mathbf y} \in \ell^{2}_{\cY}({\mathbb Z}_{+})$. The condition that $\overline{X}_{a} x_{0} = 0$ means that in addition ${\mathbf y} = 0$. Since $D_{{\mathfrak T}_{F_{\Sigma}}^{*}}$ is bounded, it then follows that $\lim_{k \to \infty} D_{{\mathfrak T}_{F_{\Sigma}}^{*}} X_{a} x_{0}^{(k)} = 0$, or, by \eqref{fact1}, $$
\lim_{k \to \infty}\bW_{o} x_{0}^{(k)} = 0. $$ As we also have $\lim_{k \to \infty} x_{0}^{(k)} = x_{0}$ in $\cX$ and $\bW_{o}$ is a closed operator, it follows that $x_{0} \in \cD(\bW_{o})$ and $\bW_{o} x_{0} = 0$. As $\bW_{o}$ is injective, it follows that $x_{0} = 0$. We conclude that $\overline{X}_{a}$ is injective as claimed. \end{proof}
Using the closed operators $\overline{X}_a$ and $\overline{X}_r$ defined in Lemma \ref{L:fact} we now define (possibly unbounded) positive-definite operators $H_a$ and $H_r$ so that the storage functions $S_a$ and $\underline{S}_r$ have the quadratic forms $S_a = S_{H_a}$ and $\underline{S}_r = S_{H_r}$ as in \eqref{QuadStorage1}.
We start with $H_a$. Since $\overline{X}_{a}$ is closed, there is a good polar factorization $$\overline{X}_a =
U_{a} |\overline{X}_a|$$ (see \cite[Theorem VIII.32]{RS}); in detail,
$\overline{X}_{a}^{*} \overline{X}_{a}$ is selfadjoint with positive selfadjoint square-root $|\overline{X}_a| = (\overline{X}_{a}^{*} \overline{X}_{a})^{\half}$ satisfying
$\cD(|\overline{X}_a|) = \cD(\overline{X}_{a})$, and $U_{a}$ is a partial isometry with initial space equal to $({\rm Ker}\, \overline{X}_{a})^{\perp}$ and final space equal to $\overline{\rm Im}\, \overline{X}_{a}$.
Now set \begin{equation} \label{Ha-def} H_a=\overline{X}_{a}^{*}
\overline{X}_{a}, \quad H_a^{\half}=|\overline{X}_a|. \end{equation} As noted in Lemma \ref{L:fact}, $\overline{X}_{a}$ is injective, and thus $H_a$ and $H_a^{\half}$ are injective as well, and as a result $U_{a}$ is an isometry.
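In finite dimensions the polar factorization invoked here can be computed from a singular value decomposition; the following sketch (with a hypothetical injective full-column-rank matrix standing in for $\overline{X}_a$) checks the properties just listed, in particular that $|X| = (X^*X)^{\half}$ and that $U$ is an isometry when $X$ is injective:

```python
import numpy as np

rng = np.random.default_rng(1)

# An injective (full column rank) 5x3 matrix standing in for X_a-bar.
X = rng.standard_normal((5, 3))

# Polar factorization X = U |X| via the reduced SVD X = W S V^T:
# |X| = (X^T X)^{1/2} = V S V^T and U = W V^T.
W, s, Vt = np.linalg.svd(X, full_matrices=False)
absX = Vt.T @ np.diag(s) @ Vt
U = W @ Vt

assert np.allclose(U @ absX, X)           # the factorization X = U |X|
assert np.allclose(absX @ absX, X.T @ X)  # |X|^2 = X^* X
assert np.allclose(U.T @ U, np.eye(3))    # U is an isometry (X injective)
```

With $X$ injective, $|X|$ is invertible, which is the finite-dimensional analogue of defining $H_r^{\half} = |\overline{X}_r|^{-1}$ below.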
We proceed with the definition of $H_r$. As the properties of
$\overline{X}_{r}$ parallel those of $\overline{X}_{a}$, $\overline{X}_{r}$ has a good polar decomposition $\overline{X}_{r} = U_{r} | \overline{X}_{r}|$ with $| \overline{X}_{r}|$ and $U_{r}$
having similar properties as $| \overline{X}_{a}|$ and $U_{a}$, in particular,
$\overline{X}_{r}^*\overline{X}_{r}$ and $|\overline{X}_{r}|$ are injective and $U_{r}$ is an isometry. We then define \begin{equation} \label{Hr-def}
H_{r} = \left( \overline{X}_{r}^{*} \overline{X}_{r} \right)^{-1}, \quad
H_{r}^{\half} = | \overline{X}_{r} |^{-1}. \end{equation} We shall also need a modification of the factorization \eqref{fact2}. For ${\mathbf u} \in \cD(\bW_{c})$ and $x \in \textup{Im\,} \bW_{o}^{*}$, let us note that \begin{align*} \langle \bW_{c} {\mathbf u}, x \rangle_{\cX} &=\langle {\mathbf u}, \bW_{c}^{*} x \rangle_{\ell^{2}_{\cU}({\mathbb
Z}_{-})} = \langle {\mathbf u}, D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} X_{r} x
\rangle_{\ell^{2}_{\cU}({\mathbb Z}_{-})} \text{ (by
\eqref{fact2})}\\ & = \langle D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} {\mathbf u}, X_{r} x
\rangle_{\ell^{2}_{\cU}({\mathbb Z}_{-})}. \end{align*} The end result is that then $D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} {\mathbf u}$ is in $\cD(X_{r}^{*})$ and $X_{r}^{*} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} {\mathbf u} = \bW_{c} {\mathbf u}$. In summary we have the following adjoint version of the factorization \eqref{fact2}: \begin{equation} \label{fact3}
\bW_{c} = X_{r}^{*} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}}
|_{\cD(\bW_{c})}. \end{equation}
In the following statement we use the notion of a {\em core} of a closed, densely defined operator $X$ between two Hilbert spaces $\cH$ and $\cK$ (see \cite{RS} or \cite{Kato}), namely: a dense linear submanifold $\cD$ is said to be a {\em core} for the closed, densely defined operator $X$ with domain $\cD(X)$ in $\cH$ mapping into $\cK$ if, given any $x \in \cD(X)$, there is a sequence $\{ x_n \}_{n \ge 1}$ of points in $\cD$ such that $\lim_{n \to \infty} x_n = x$ and also $\lim_{n \to \infty} X x_n = X x$.
\begin{theorem}\label{T:Sar} Let the discrete-time linear system $\Si$ in \eqref{dtsystem} satisfy the assumptions in \eqref{A}. Define $X_a$, $\overline{X}_{a}$, $X_{r}$, $\overline{X}_r$ as in Lemma \ref{L:fact} and the closed operators $H_{a}$ and $H_r$ as in the preceding discussion. Then the available storage function $S_{a}$ and the $\ell^2$-regularized required supply $\underline{S}_{r}$ are given by \begin{align}
S_{a}(x_{0}) = \| \overline{X}_{a} x_{0} \|^{2} = \| H_{a}^{\half}
x_{0}\|^{2}\quad (x_0\in \textup{Im\,} \bW_{c}), \label{form1}\\
\underline{S}_{r}(x_{0}) = \| | \overline{X}_{r} |^{-1} x_{0}
\|^{2}=\|H_{r}^{\half} x_{0}\|^2\quad (x_0\in \textup{Im\,} \bW_c) \label{form2}. \end{align} In particular, the available storage $S_{a}$ and $\ell^2$-regularized required supply $\underline{S}_r$ agree with quadratic storage functions on $\textup{Im\,} \bW_c$.
Moreover, $\textup{Im\,} \bW_c$ is a core for $H_a^\half$ and $\textup{Im\,} \bW_o^*$ is a core for $H_r^{-\half}$. \end{theorem}
\begin{proof} By Lemma \ref{L:fact}, in the operator form of $S_a$ derived in Lemma \ref{L:SaSrOpForm} we can replace $\bW_{o} x_{0}$ by $D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0}$, leading to \begin{equation}\label{sup} S_{a}(x_{0}) = \sup_{ {\mathbf u} \in \ell^{2}_{\cU}({\mathbb Z}_{+})}
\| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} + {\mathfrak T}_{{F_\Si}} {\mathbf u}
\|^{2}_{\ell^{2}_{\cY}({\mathbb
Z}_{+})} - \| {\mathbf u} \|^{2}_{\ell^{2}_{\cU}({\mathbb Z}_{+})}. \end{equation} For $x_0\in \textup{Im\,} \bW_c$ and each ${\mathbf u}\in\ell^2_\cU(\BZ_+)$ we have \begin{align*}
&\| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} + {\mathfrak T}_{{F_\Si}} {\mathbf u} \|^{2}
- \| {\mathbf u} \|^{2}\\
&\qquad\qquad=\| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0},
{\mathfrak T}_{{F_\Si}} {\mathbf u} \rangle + \| {\mathfrak T}_{{F_\Si}}{\mathbf u} \|^{2}- \|{\mathbf u}
\|^{2}\\
&\qquad\qquad=\| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0},
{\mathfrak T}_{{F_\Si}} {\mathbf u} \rangle - \| D_{{\mathfrak T}_{{F_\Si}}}{\mathbf u} \|^{2}\\
&\qquad\qquad=\| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle X_{a} x_{0}, D_{{\mathfrak T}_{{F_\Si}}^{*}}
{\mathfrak T}_{{F_\Si}} {\mathbf u} \rangle - \| D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2} \\
&\qquad\qquad= \| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} + 2\,\text{Re}\,\langle X_{a} x_{0}, {\mathfrak T}_{{F_\Si}}
D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \rangle - \| D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2} \\
&\qquad\qquad= \| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} +
2\,\text{Re}\,\langle {\mathfrak T}_{{F_\Si}}^{*} X_{a} x_{0}, D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \rangle - \| D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2} \\
&\qquad\qquad= \| D_{{\mathfrak T}_{{F_\Si}}^{*}} X_{a} x_{0} \|^{2} +
\|{\mathfrak T}_{{F_\Si}}^{*} X_{a}
x_{0} \|^{2} - \| {\mathfrak T}_{{F_\Si}}^{*} X_{a} x_{0} -
D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2}\\
&\qquad\qquad= \| X_{a} x_{0} \|^{2} - \| {\mathfrak T}_{{F_\Si}}^{*} X_{a} x_{0} -
D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2}.
\end{align*} By construction, we have $\textup{Im\,} X_{a} \subset (\textup{Ker\,} D_{{\mathfrak T}_{{F_\Si}}^{*}})^{\perp} =\overline{\textup{Im\,}} D_{{\mathfrak T}_{{F_\Si}}^{*}}$. Using that ${\mathfrak T}_{{F_\Si}}^{*}D_{{\mathfrak T}_{{F_\Si}}^{*}}=D_{{\mathfrak T}_{{F_\Si}}} {\mathfrak T}_{{F_\Si}}^{*}$, we obtain $$ {\mathfrak T}_{{F_\Si}}^{*} \overline{\textup{Im\,}} D_{{\mathfrak T}_{{F_\Si}}^{*}}\subset \overline{\textup{Im\,}} D_{{\mathfrak T}_{{F_\Si}}}. $$ Thus $\textup{Im\,} {\mathfrak T}_{{F_\Si}}^{*} X_{a} \subset \overline{\textup{Im\,}}
D_{{\mathfrak T}_{{F_\Si}}}$. Hence there is a sequence ${\mathbf u}_{k}$ of input signals in $\ell^{2}_{\cU}({\mathbb Z}_{+})$ so that $\|
{\mathfrak T}_{{F_\Si}}^{*} X_{a} x_{0} - D_{{\mathfrak T}_{{F_\Si}}} {\mathbf u}_{k} \| \to 0$ as $k \to \infty$. We conclude that for $x_0\in \textup{Im\,} \bW_c$ the supremum in \eqref{sup} is given by \[
S_{a}(x_{0}) = \| X_{a} x_{0} \|^{2}=\| \overline{X}_{a} x_{0} \|^{2}=\|
H_a^{\half} x_{0} \|^{2}. \]
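In finite dimensions the supremum just computed is a concave quadratic maximization with an explicit optimizer: for a strict contraction $T$ one has $\sup_{u}\, \|D_{T^*}z + Tu\|^2 - \|u\|^2 = \|z\|^2$, attained at $u = (I - T^*T)^{-1}T^* D_{T^*} z$. The NumPy sketch below checks this on random data; the matrix $T$ is only an illustrative stand-in for the Toeplitz operator ${\mathfrak T}_{F_\Si}$, which need not be a strict contraction in general:

```python
import numpy as np

rng = np.random.default_rng(4)
k = 5
T = rng.standard_normal((k, k))
T /= (1.5 * np.linalg.norm(T, 2))            # strict contraction, ||T|| = 2/3

w_, V = np.linalg.eigh(np.eye(k) - T @ T.T)
D_Tstar = V @ np.diag(np.sqrt(w_)) @ V.T     # defect operator D_{T^*} = (I - T T^*)^{1/2}

z = rng.standard_normal(k)
w = D_Tstar @ z

def payoff(u):
    # the functional being maximized: ||D_{T^*} z + T u||^2 - ||u||^2
    return np.sum((w + T @ u) ** 2) - np.sum(u ** 2)

# stationary point of the concave functional: u_opt = (I - T^* T)^{-1} T^* w
u_opt = np.linalg.solve(np.eye(k) - T.T @ T, T.T @ w)
assert np.isclose(payoff(u_opt), z @ z)      # supremum equals ||z||^2

# random perturbations never beat the optimum (the functional is concave)
for _ in range(100):
    assert payoff(u_opt + rng.standard_normal(k)) <= payoff(u_opt) + 1e-9
```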
Let $x_0\in \textup{Im\,}\bW_c$. Given a ${\mathbf u} \in\cD(\bW_c)$, by the factorization \eqref{fact3} we see that $\bW_c{\mathbf u}=x_0$ if and only if $X_r^{*} D_{\widetilde{\mathfrak T}_{F_\Si}}{\mathbf u}=x_0$. Therefore, we have \begin{align*} \underline{S}_r(x_0) &=\inf_{{\mathbf u} \in \cD(\bW_{c}), \, X_r^{*} D_{\widetilde{\mathfrak T}_{F_\Si}}{\mathbf u}=x_0}
\| D_{\widetilde {\mathfrak T}_{{F_\Si}}} {\mathbf u} \|^{2} =\inf_{{\mathbf v} \in D_{\widetilde {\mathfrak T}_{{F_\Si}}}\cD(\bW_{c}), \, X_r^{*} {\mathbf v}=x_0}
\| {\mathbf v} \|^{2}. \end{align*} A general property of operator closures is $\overline{X}_{r}^{*} = X_{r}^{*}$. Hence \begin{equation} \label{inf1} \underline{S}_{r}(x_{0}) = \inf_{{\mathbf v} \in D_{\widetilde
{\mathfrak T}_{{F_\Si}}}\cD(\bW_{c}), \, \overline{X}_r^{*}{\mathbf v}=x_0} \|
{\mathbf v} \|^{2}. \end{equation} As $x_{0} \in \textup{Im\,} \bW_{c}$ by assumption, the factorization \eqref{fact3} gives us a ${\mathbf u}_{0} \in \cD(\bW_{c})$ so that \begin{equation} \label{x0}
x_{0} = \overline{X}_{r}^{*} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} {\mathbf u}_{0}. \end{equation} In particular, $x_{0}$ has the form $x_{0} = \overline{X}_{r}^{*} {\mathbf v}_{0}$ with ${\mathbf v}_{0} \in D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} \cD(\bW_{c})$. From \eqref{x0} we see that the general solution ${\mathbf v} \in \cD(\overline{X}_{r}^{*})$ of $x_{0} = \overline{X}_{r}^{*} {\mathbf v}$ is \begin{equation} \label{gensol}
{\mathbf v} = D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} {\mathbf u}_{0} + k \text{ where } k \in \textup{Ker\,} \overline{X}_{r}^{*}. \end{equation} By construction the target space for $X_{r}$ (and $\overline{X}_{r}$) is $\left( \textup{Ker\,} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}}\right)^{\perp}$ so the domain space for $\overline{X}_{r}^{*}$ is $( \textup{Ker\,} D_{\widetilde {\mathfrak T}_{F_{\Sigma}}})^\perp$ and $\textup{Ker\,} \overline{X}_r^* \subset \overline{\textup{Im\,}} D_{\widetilde \fT_{F_\Sigma}}$. Hence the infimum in \eqref{inf1} remains unchanged if we relax the constraint ${\mathbf v} \in D_{\widetilde {\mathfrak T}_{F_{\Sigma}}} \cD(\bW_c)$ to just ${\mathbf v} \in \cD(\overline{X}_{r}^{*})$, i.e., \begin{equation} \label{inf2}
\underline{S}_{r}(x_{0}) = \inf_{ {\mathbf v} \in \cD(\overline{X}_{r}^{*}), \, \overline{X}_{r}^{*} {\mathbf v}
= x_{0}} \| {\mathbf v}\|^{2}. \end{equation}
In terms of the polar decomposition $\overline{X}_{r} = U_{r} | \overline{X}_{r}|$ for $\overline{X}_{r}$, we have $$
\overline{X}_{r}^{*} = | \overline{X}_{r}| U_{r}^{*} $$ with $$ \cD(\overline{X}_{r}^{*}) = \{ {\mathbf u} \in \overline{\textup{Im\,}} D_{\widetilde
{\mathfrak T}_{F_{\Sigma}}} \colon U_{r}^{*} {\mathbf u} \in \cD(|\overline{X}_{r}|) = \cD(\overline{X}_{r}) \}. $$
Since $|\overline{X}_{r}|$ is injective and $U_{r}$ is an isometry with range equal to $(\textup{Ker\,} \overline{X}_{r}^{*})^{\perp}$, the constraint $|\overline{X}_{r}| U_{r}^{*} {\mathbf v} = \overline{X}_{r}^{*} {\mathbf v} = x_{0}$ is equivalent to $$
P_{(\textup{Ker\,} \overline{X}_{r}^{*})^{\perp}} {\mathbf v} = U_{r} U_{r}^{*} {\mathbf v} = U_{r}
|\overline{X}_{r}|^{-1} x_{0}. $$
Since we want to minimize $\|{\mathbf v}\|^2$ with $P_{(\textup{Ker\,} \overline{X}_{r}^*)^\perp}{\mathbf v}$
equal to $U_r|\overline{X}_{r}|^{-1}x_0$, it is clear that the minimum is achieved at ${\mathbf v}_\textup{opt}=U_r|\overline{X}_{r}|^{-1}x_0$, so that \[
\underline{S}_r(x_0)=\|{\mathbf v}_\textup{opt}\|^2=\|U_r|\overline{X}_{r}|^{-1}x_0\|^2=\||\overline{X}_{r}|^{-1}x_0\|^2
=\|H_r^\half x_0\|^2, \] as claimed.
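The last step is the finite-dimensional least-norm principle: the minimizer of $\|{\mathbf v}\|^2$ subject to $\overline{X}_r^* {\mathbf v} = x_0$ is the Moore--Penrose solution, which the polar decomposition writes as $U_r |\overline{X}_r|^{-1} x_0$. A NumPy sketch with an injective matrix $X$ standing in for $\overline{X}_r$ (illustration only; in the theorem the operators may be unbounded):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 7, 3
X = rng.standard_normal((m, n))           # injective stand-in for X_r: state space -> signal space

# polar decomposition X = U |X| with |X| = (X^* X)^{1/2} and U an isometry
w, V = np.linalg.eigh(X.T @ X)
absX = V @ np.diag(np.sqrt(w)) @ V.T      # |X|, positive definite since X is injective
U = X @ np.linalg.inv(absX)               # isometry: U^T U = I_n
assert np.allclose(U.T @ U, np.eye(n))

x0 = rng.standard_normal(n)               # x0 lies in Im X^* (X has full column rank)

# least-norm solution of X^* v = x0 via the pseudoinverse ...
v_pinv = np.linalg.pinv(X.T) @ x0
# ... agrees with the formula v_opt = U |X|^{-1} x0 from the proof
v_opt = U @ np.linalg.solve(absX, x0)

assert np.allclose(X.T @ v_opt, x0)       # v_opt solves the constraint
assert np.allclose(v_pinv, v_opt)         # and is the minimum-norm solution

# the optimal cost is || |X|^{-1} x0 ||^2, matching S_r(x0) = ||H_r^{1/2} x0||^2
y = np.linalg.solve(absX, x0)
assert np.isclose(v_opt @ v_opt, y @ y)
```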
It remains to verify the last assertion concerning the core properties of $\textup{Im\,} \bW_c$ and $\textup{Im\,} \bW_o^*$. By definition $H_a^\half = | \overline{X}_a|$, where $\overline{X}_a$ is the closure of the operator $X_a =
\overline{X}_a|_{\textup{Im\,} \bW_c}$. Hence $\textup{Im\,} \bW_c$ is by definition a core for $\overline{X}_a$, from which it immediately follows that $\textup{Im\,} \bW_c$ is a core for $H_a^\half = | \overline{X}_a |$. That $\textup{Im\,} \bW_o^*$ is a core for $H_r^{-\half} = | \overline{X}_r |$ follows in the same way via a dual analysis. \end{proof}
\section{The dual system $\Si^*$} \label{S:dual}
In this section we develop a parallel theory for the dual system $\Si^*$ of $\Si$, which is the system with system matrix equal to the adjoint of \eqref{sysmat} evolving in backward-time.
\subsection{Controllability, observability, minimality and transfer function for the dual system}
With the discrete-time linear system $\Sigma$ given by \eqref{dtsystem} with system matrix $M = \sbm{ A & B \\ C & D }$ we associate the dual system $\Sigma^*$ with system matrix $M^* = \sbm{ A^* & C^* \\ B^* & D^* } \colon \sbm{ \cX \\ \cY} \to \sbm{ \cX \\ \cU}$. It will be convenient for our formalism here to let the dual system evolve in backward time; we therefore define the system $\Sigma^*$ to be given by the system input/state/output equations \begin{equation} \label{dtsystem*} \Sigma^* \colon = \left\{ \begin{array}{rcl} {\mathbf x}_*(n-1) & = & A^* {\mathbf x}_*(n) + C^* {\mathbf u}_*(n), \\ {\mathbf y}_*(n) & = & B^* {\mathbf x}_*(n) + D^* {\mathbf u}_*(n). \end{array} \right. \end{equation} If we impose a final condition ${\mathbf x}_*(-1) = x_0$ and feed in an input sequence $\{ {\mathbf u}_*(n) \}_{n \in {\mathbb Z}_-}$, one can solve recursively to get, for $n \le -1$, $$ \left\{ \begin{array}{rcl} {\mathbf x}_*(n) & = & A^{* -n-1} x_0 + \sum_{j=n+1}^{-1} A^{* -n+j-1} C^* {\mathbf u}_*(j), \\
{\mathbf y}_*(n) & = & B^* A^{* -n-1} x_0 + \sum_{j = n+1}^{-1} B^* A^{* -n+j-1} C^* {\mathbf u}_*(j) + D^* {\mathbf u}_*(n). \end{array} \right. $$ Alternatively, the $Z$-transform $ \{{\mathbf x}_*(n)\}_{n \in {\mathbb Z}_-} \mapsto \widehat {\mathbf x}_*(\lambda) = \sum_{n=-\infty}^{-1} {\mathbf x}_*(n) \lambda^n $ may be applied directly to the system equations \eqref{dtsystem*}. Combining this with the observation that \begin{align*} \sum_{n=-\infty}^{-1} {\mathbf x}_*(n-1) \lambda^n & = \lambda \left( \sum_{n=-\infty}^{-1} {\mathbf x}_*(n-1) \lambda^{n-1} \right)
= \lambda \left( \sum_{n=-\infty}^{-2} {\mathbf x}_*(n) \lambda^{n} \right) \\
& = \lambda \left( \widehat {\mathbf x}_*(\lambda) - x_0 \lambda^{-1} \right) = \lambda \widehat {\mathbf x}_*(\lambda) - x_0 \end{align*} converts the first system equation in \eqref{dtsystem*} to $$
\lambda \widehat {\mathbf x}_*(\lambda) - x_0 = A^* \widehat {\mathbf x}_*(\lambda) + C^* \widehat {\mathbf u}_*(\lambda) $$ leading to the $Z$-transformed version of the whole system: $$ \left\{ \begin{array}{rcl}
\widehat {\mathbf x}_*(\lambda) & = & (\lambda I - A^*)^{-1} x_0 + (\lambda I - A^*)^{-1} C^* \widehat {\mathbf u}_*(\lambda), \\
\widehat {\mathbf y}_*(\lambda) & = & B^* (\lambda I - A^*)^{-1} x_0 + F_{\Sigma^*}(\lambda) \widehat {\mathbf u}_*(\lambda), \end{array} \right. $$ where the {\em transfer function} $F_{\Sigma^*}(\lambda)$ for the system $\Sigma^*$ is then given by \begin{align}
F_{\Sigma^*}(\lambda) & = D^* + B^* (\lambda I - A^*)^{-1} C^* \notag \\
& = D^* + \lambda^{-1} B^* (I - \lambda^{-1} A^*)^{-1} C^* = F_\Sigma(1/\overline{\lambda})^* \label{transfunc*} \end{align}
which is an analytic function on a neighborhood of the point at $\infty$ in the complex plane. Moreover, $F_{\Sigma^*}$ has analytic continuation to a function analytic on the exterior of the unit disk ${\mathbb D}_e : = \{ \lambda \in {\mathbb C} \colon |\lambda| > 1\}
\cup \{\infty\}$ exactly when $F_\Sigma$ has analytic continuation to a function analytic on the unit disk ${\mathbb D}$ with equality of
corresponding $\infty$-norms:
$$
\| F_{\Sigma^*} \|_{\infty, {\mathbb D}_e} : = \sup_{\lambda \in {\mathbb D}_e}
\| F_{\Sigma^*}(\lambda) \| =
\sup_{\lambda \in {\mathbb D}} \| F_\Sigma(\lambda) \| =: \| F_\Sigma \|_{\infty, {\mathbb D}}.
$$
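The identity \eqref{transfunc*} and the coincidence of the two $\infty$-norms can be checked numerically on a small example. In the NumPy sketch below we assume the convention $F_\Sigma(\lambda) = D + \lambda C (I - \lambda A)^{-1} B$ for the transfer function of $\Sigma$ (not restated in this section, but the convention that makes \eqref{transfunc*} come out as written); all data are random and illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 3
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))

def F_Sigma(lam):
    # transfer function of Sigma at a point of the unit disk:
    # F_Sigma(lam) = D + lam * C (I - lam A)^{-1} B
    return D + lam * C @ np.linalg.solve(np.eye(n) - lam * A, B)

def F_Sigma_star(lam):
    # transfer function of the backward-time dual system Sigma^* at |lam| > 1:
    # F_{Sigma^*}(lam) = D^* + B^* (lam I - A^*)^{-1} C^*
    As = A.conj().T
    return D.conj().T + B.conj().T @ np.linalg.solve(lam * np.eye(n) - As, C.conj().T)

lam = 1.7 - 0.9j                          # a point in the exterior disk D_e
lhs = F_Sigma_star(lam)
rhs = F_Sigma(1 / np.conj(lam)).conj().T  # F_Sigma(1/conj(lam))^*
assert np.allclose(lhs, rhs)              # the duality identity (transfunc*)
```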
All the analysis done up to this point for the system $\Sigma$ has a dual analogue for the system $\Sigma^*$. In particular, the observability operator $\bW_{*o}$ for the dual system is obtained by running the system \eqref{dtsystem*} with final condition ${\mathbf x}_*(-1) = x_0$ and input string ${\mathbf u}_*(n) = 0$ for $n \le -1$, resulting in the output string $\{ B^* A^{* (-n-1)} x_0 \}_{n \in {\mathbb Z}_-}$. Since we are interested in a setting with operators on $\ell^2$, we define the {\em observability operator} $\bW_{*o}$ for $\Sigma^*$ to have domain $$
\cD(\bW_{*o}) = \{ x_0 \in \cX \colon \{ B^* A^{*(- n-1)} x_0\}_{n\in\BZ_-} \in \ell^2_\cU({\mathbb Z}_-)\}
$$
with action given by $$
\bW_{*o} x_0 = \{ B^* A^{* (-n-1)} x_0 \}_{n \in {\mathbb Z}_-} \text{ for } x_0 \in \cD(\bW_{*o}). $$ Note that $\bW_{*o}$ so defined is exactly the same as the adjoint controllability operator $\bW_c^*$ for the original system \eqref{bWc*1}--\eqref{bWc*2}, and in fact viewing this operator as $\bW_{*o}$ gives a better control-theoretic interpretation for this operator. Similarly it is natural to define the adjoint controllability operator for the adjoint system $(\bW_{*c})^*$ by $$
\cD((\bW_{*c})^*) = \{ x_0 \in \cX \colon \{ C A^n x_0 \}_{n \in {\mathbb Z}_+} \in \ell^2_\cY({\mathbb Z}_+)\}
= \cD(\bW_o) $$ with action given by $$
\bW_{*c}^* x_0 = \{ C A^n x_0 \}_{n \in {\mathbb Z}_+} = \bW_o x_0. $$ In view of the equalities \begin{equation} \label{identifications}
\bW_{*o} = \bW_c^*, \quad (\bW_{*c})^* = \bW_o, \quad (\bW_{*o})^* = \bW_c, \quad \bW_{*c} = \bW_o^*, \end{equation} one can work out the dual analogue of Proposition \ref{P:WcWo'}, either by redoing the original proof with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, or simply by making the substitutions \eqref{identifications} in the statement of the results.
Let us now assume that $F_\Sigma$ has analytic continuation to a bounded analytic $\cL(\cU, \cY)$-valued function on the unit disk, or equivalently, $F_{\Sigma^*}$ has analytic continuation to a bounded analytic $\cL(\cY, \cU)$-valued function on the exterior of the unit disk ${\mathbb D}_e$. Then $F_\Sigma$ and $F_{\Sigma^*}$ can be identified via strong nontangential boundary-value limits with $L^\infty$-functions on the unit circle ${\mathbb T}$; the relation between these boundary-value functions is simply $$
F_{\Sigma^*}(\lambda) = F_\Sigma(\lambda)^* \quad (\mbox{a.e. } \la\in\BT) $$ with the consequence that the associated multiplication operators $$ M_{F_\Sigma} \colon L^2_\cU({\mathbb T}) \to L^2_\cY({\mathbb T}), \quad M_{F_{\Sigma^*}} \colon L^2_\cY({\mathbb T}) \to L^2_\cU({\mathbb T}) $$ given by $$
M_{F_\Sigma} \colon \widehat {\mathbf u}(\lambda) \mapsto F_\Sigma(\lambda) \cdot \widehat {\mathbf u}(\lambda), \quad
M_{F_{\Sigma^*}} \colon \widehat {\mathbf u}_*(\lambda) \mapsto F_{\Sigma^*}(\lambda) \cdot \widehat {\mathbf u}_*(\lambda)
$$
are adjoints of each other: $$
(M_{F_\Sigma})^* = M_{F_{\Sigma^*}}. $$ Note also that $M_{F_\Sigma}$ maps $H^2_\cU({\mathbb D})$ into $H^2_\cY({\mathbb D})$ while $M_{F_{\Sigma^*}} = M_{F_\Sigma}^*$ maps $(H^2_\cY)^\perp: = L^2_\cY({\mathbb T}) \ominus H^2_\cY({\mathbb D}) \cong H^2_\cY({\mathbb D}_e)$ into $(H^2_\cU)^\perp := L^2_\cU({\mathbb T}) \ominus H^2_\cU({\mathbb D}) \cong H^2_\cU({\mathbb D}_e)$.
It is natural to define the frequency-domain Hankel operator ${\mathbb H}_{F_{\Sigma^*}}$ for the adjoint system as the operator from $H^2_\cY({\mathbb D}_e)^\perp = H^2_\cY({\mathbb D})$ (the past from the point of view of the backward-time system $\Sigma^*$) to $H^2_\cU({\mathbb D}_e) =H^2_\cU({\mathbb D})^\perp$ (the future from the point of view of $\Sigma^*$) by \begin{equation} \label{Hankel-identification}
{\mathbb H}_{F_{\Sigma^*}} = P_{H^2_\cU({\mathbb D})^\perp} M_{F_{\Sigma^*}}|_{H^2_\cY({\mathbb D})} =
( {\mathbb H}_{F_\Sigma})^*. \end{equation} After application of the inverse $Z$-transform, we see that the time-domain version $\fH_{F_{\Sigma^*}}$ of the Hankel operator for $\Sigma^*$ is just the adjoint $(\fH_{F_\Sigma})^*$ of the time-domain version of the Hankel operator for $\Sigma$, namely $$
\fH_{F_{\Sigma^*}} = [ B^* A^{*(-i + j -1)} C^* ]_{i < 0, j \ge 0} \colon \ell^2_\cY({\mathbb Z}_+) \to \ell^2_\cU({\mathbb Z}_-), $$ from which we see immediately the formal factorization \begin{equation} \label{Hankel-fact*} \fH_{F_{\Sigma^*}} = \operatorname{col}_{i<0} [B^* A^{*( -i -1)}] \cdot \operatorname{row}_{j \ge 0} [A^{*j} C^* ] = \bW_{*o} \bW_{*c} = \bW_c^* \bW_o^*. \end{equation} With all these observations in place, it is straightforward to formulate the dual version of Proposition \ref{P:HankelDecs}, again, either by redoing the proof of Proposition \ref{P:HankelDecs} with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, or by simply substituting the identifications \eqref{identifications} and \eqref{Hankel-identification}.
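The identification $\fH_{F_{\Sigma^*}} = (\fH_{F_\Sigma})^*$ and the factorization \eqref{Hankel-fact*} can be illustrated on finite truncations. The NumPy sketch below indexes the blocks so that block $(i,j)$ of the truncated Hankel matrix carries the Markov parameter $C A^{i+j} B$ (a finite-dimensional illustration only; $A$, $B$, $C$ are random):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p, N = 3, 2, 2, 6
A = 0.4 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

Ak = lambda k: np.linalg.matrix_power(A, k)

# N x N block truncations: block (i, j) carries the Markov parameter C A^{i+j} B
H_Sigma = np.block([[C @ Ak(i + j) @ B for j in range(N)] for i in range(N)])
H_Sigma_star = np.block([[B.T @ Ak(i + j).T @ C.T for j in range(N)]
                         for i in range(N)])

# the Hankel operator of the dual system is the adjoint of that of Sigma
assert np.allclose(H_Sigma_star, H_Sigma.T)

# truncated factorization fH_{F_{Sigma^*}} = bW_{*o} bW_{*c} = bW_c^* bW_o^*
Wc_adj = np.vstack([B.T @ Ak(i).T for i in range(N)])  # truncation of bW_c^* = bW_{*o}
Wo_adj = np.hstack([Ak(j).T @ C.T for j in range(N)])  # truncation of bW_{*c} = bW_o^*
assert np.allclose(H_Sigma_star, Wc_adj @ Wo_adj)
```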
Note next that an immediate consequence of the identifications \eqref{identifications} is that $\ell^2$-exact controllability for $\Sigma$ is the same as $\ell^2$-exact observability for $\Sigma^*$ and $\ell^2$-exact observability for $\Sigma$ is the same as $\ell^2$-exact controllability for $\Sigma^*$. With this observation in hand, the dual version of Proposition \ref{P:ell2implics} is immediate.
\subsection{Storage functions for the adjoint system}
Let $S_*$ be a function from $\cX$ to $[0, \infty]$. In parallel with what is done in Section \ref{S:Storage}, we define $S_*$ to be a {\em storage function for the system $\Sigma^*$} if \begin{equation} \label{disineq*}
S_*({\mathbf x}_*(n-1)) \le S_*({\mathbf x}_*(n)) + \| {\mathbf u}_*(n) \|^2_\cY - \| {\mathbf y}_*(n) \|^2_\cU \text{ for } n \le N_0 \end{equation} holds over all system trajectories $({\mathbf u}_*(n), {\mathbf x}_*(n), {\mathbf y}_*(n))_{n \le N_0}$ of the system $\Sigma^*$ in \eqref{dtsystem*} with state initialization ${\mathbf x}_*(N_0) = x_0$ for some $x_0 \in \cX$ at some $N_0 \in {\mathbb Z}$, and $S_*$ is normalized to satisfy \begin{equation} \label{normalization*}
S_*(0) = 0. \end{equation} Then by redoing the proof of Proposition \ref{P:storage-Schur} with the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$, we arrive at the following dual version of Proposition \ref{P:storage-Schur}.
\begin{proposition} \label{P:storage-Schur*} Suppose that the system $\Sigma^*$ in \eqref{dtsystem*} has a storage function $S_*$ as in \eqref{disineq*} and \eqref{normalization*}. Then the transfer function $F_{\Sigma^*}$ of $\Sigma^*$ defined by \eqref{transfunc*} has an analytic continuation to the exterior unit disk ${\mathbb D}_e$ in the Schur class $\cS_{{\mathbb D}_e}(\cY, \cU)$. \end{proposition}
Note that by the duality considerations already discussed above, an equivalent conclusion is that $F_\Sigma$ has analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ over the unit disk.
We say that $S_*$ is a {\em quadratic storage function} for $\Sigma^*$ if $S_*$ is a storage function of the form \begin{equation} \label{QuadStorage1*}
S_*(x) = S_{H_*}(x) = \begin{cases}
\| H_*^\half x \|^2 &\text{for } x \in \cD(H_*^\half), \\
+\infty &\text{otherwise.} \end{cases} \end{equation} where $H_*$ is a (possibly) unbounded positive-semidefinite operator on $\cX$. To analyze quadratic storage functions for $\Sigma^*$, we introduce the adjoint KYP-inequality: we say that the bounded selfadjoint operator $H_*$ on $\cX$ satisfies the {\em adjoint KYP-inequality} if \begin{equation} \label{KYP1*}
\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H_* & 0 \\ 0 & I_\cU \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix}^* \preceq \begin{bmatrix} H_* & 0 \\ 0 & I_\cY \end{bmatrix}. \end{equation} More generally, for a (possibly) unbounded positive-semidefinite operator $H_*$ on $\cX$, we say that $H_*$ satisfies the {\em generalized adjoint KYP-inequality} if \begin{equation} \label{KYP1b'*} A^* \cD(H_*^\half) \subset \cD(H_*^\half), \quad C^* \cY \subset \cD(H_*^\half), \end{equation} and for all $x_* \in \cD(H_*^\half)$ and $u_* \in \cY$ we have \begin{equation} \label{KYP1b*}
\left\| \begin{bmatrix} H_*^\half & 0 \\ 0 & I_\cY \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\|^2
- \left\| \begin{bmatrix} H_*^\half & 0 \\ 0 & I_\cU \end{bmatrix}
\begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\|^2
\ge 0.
\end{equation}
Then the dual version of Proposition \ref{P:QuadStorage} is straightforward.
\begin{proposition} \label{P:QuadStorage*} Suppose that the function $S_*$ has the form \eqref{QuadStorage1*}
for a (possibly) unbounded positive-semidefinite operator $H_*$ on $\cX$. Then $S_*$ is a storage function for $\Sigma^*$ if and only if
$H_*$ is a solution of the generalized adjoint-KYP inequality \eqref{KYP1b'*}--\eqref{KYP1b*}. In particular,
$S_*$ is a finite-valued storage function for $\Sigma^*$ if and only if $H_*$ is a bounded positive-semidefinite operator
satisfying the adjoint KYP-inequality \eqref{KYP1*}.
\end{proposition}
We next discuss a direct connection between positive-definite solutions $H$ of the KYP-inequality \eqref{KYP1}
and positive-definite solutions $H_*$ of the adjoint KYP-inequality \eqref{KYP1*}. First let us suppose that $H$ is a bounded strictly positive-definite solution of the KYP-inequality \eqref{KYP1}. Set $$ Q = \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY\end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix}. $$
Then the KYP-inequality \eqref{KYP1} is equivalent to $Q^* Q \preceq I$, i.e., the fact that the operator $Q \colon \sbm{ \cX \\ \cU} \to \sbm{ \cX \\ \cY}$ is a contraction operator. But then the adjoint $Q^*$ of $Q$ is also a contraction operator, i.e., $Q Q^* \preceq I$. Writing out $$ Q^* = \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix} $$ and rearranging gives $$ \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-1} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \preceq \begin{bmatrix} H^{-1} & 0 \\ 0 & I_\cY \end{bmatrix}, $$ i.e., $H_* := H^{-1}$ is a solution of the adjoint KYP-inequality \eqref{KYP1*} for the adjoint system $\Sigma_*$. Conversely, by flipping the roles of $\Sigma$ and $\Sigma^*$ and using that $\Sigma^{**} = \Sigma$, we see that if $H_*$ is a bounded, strictly positive-definite solution of the adjoint KYP-inequality \eqref{KYP1*}, then $H : = H_*^{-1}$ is a bounded, strictly positive-definite solution of the KYP-inequality \eqref{KYP1}.
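The contraction argument above is easy to test numerically in finite dimensions: scale a random system matrix $M$ so that $Q$ is a strict contraction; then $H$ satisfies the KYP-inequality and $H^{-1}$ the adjoint KYP-inequality. A NumPy sketch (all data random and illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, p = 3, 2, 2

# strictly positive-definite H and a random system matrix M0 = [A B; C D]
G = rng.standard_normal((n, n))
H = G @ G.T + np.eye(n)
M0 = np.block([[rng.standard_normal((n, n)), rng.standard_normal((n, m))],
               [rng.standard_normal((p, n)), rng.standard_normal((p, m))]])

w, V = np.linalg.eigh(H)
Hh = V @ np.diag(np.sqrt(w)) @ V.T          # H^{1/2}
Hmh = np.linalg.inv(Hh)                     # H^{-1/2}

def dg(S, k):
    """block diagonal [[S, 0], [0, I_k]] with S of size n x n"""
    return np.block([[S, np.zeros((n, k))], [np.zeros((k, n)), np.eye(k)]])

# rescale M0 so that Q = diag(H^{1/2}, I) M diag(H^{-1/2}, I) is a strict contraction
Q0 = dg(Hh, p) @ M0 @ dg(Hmh, m)
M = M0 / (1.01 * np.linalg.norm(Q0, 2))

# then H solves the KYP-inequality:  M^T diag(H, I_p) M <= diag(H, I_m) ...
KYP = dg(H, m) - M.T @ dg(H, p) @ M
# ... and H^{-1} solves the adjoint KYP-inequality for Sigma^*
KYP_adj = dg(np.linalg.inv(H), p) - M @ dg(np.linalg.inv(H), m) @ M.T

assert np.linalg.eigvalsh(KYP).min() > 0
assert np.linalg.eigvalsh(KYP_adj).min() > 0
```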
The same correspondence between solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ and solutions of the generalized KYP-inequality for the adjoint system \eqref{KYP1b'*}--\eqref{KYP1b*} continues to hold, but the details are more delicate, as explained in the following proposition. For an alternative proof see Proposition 4.6 in \cite{AKP06}.
\begin{proposition} \label{P:KYPduality} Suppose $\Sigma$ in \eqref{dtsystem} is a linear system with system matrix $M = \sbm{ A & B \\ C & D}$ while $\Sigma^*$ is the adjoint system \eqref{dtsystem*} with system matrix $M^* = \sbm{ A^* & C^* \\ B^* & D^* }$. Then the {\rm(}possibly unbounded{\rm)} positive-definite operator $H$ is a solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ if and only if $H^{-1}$ is a positive-definite solution of the generalized adjoint KYP-inequality \eqref{KYP1b'*}--\eqref{KYP1b*} for $\Sigma^*$.
\end{proposition}
\begin{proof}
Suppose that the positive-definite operator $H$ with dense domain $\cD(H)$ in $\cX$ solves the generalized
KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. Define an operator $Q \colon \sbm{ \textup{Im\,} H^\half \\ \cU } \to \sbm{ \textup{Im\,} H^\half \\ \cY}$
by
$$
Q \colon \begin{bmatrix} H^\half & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \mapsto
\begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} $$ for $x \in \cD(H^\half)$ and $u \in \cU$. We can write the formula for $Q$ more explicitly in terms of $x' = H^\half x \in \textup{Im\,} H^\half$ as $$
Q \colon \begin{bmatrix} x' \\ u \end{bmatrix} \mapsto \begin{bmatrix} H^\half & 0 \\ 0 & I_\cY \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I_\cU \end{bmatrix} \begin{bmatrix}
x' \\ u \end{bmatrix}
$$
for $x' \in \textup{Im\,} H^\half$ and $u \in \cU$. The content of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} is that $Q$ is a well-defined contraction operator from $\sbm{ \textup{Im\,} H^\half \\ \cU}$ into $\sbm{\cX \\ \cY}$ and hence has a uniquely determined contractive extension to a contraction operator from $\sbm{ \cX \\ \cU}$ to $\sbm{ \cX \\ \cY}$. Let us now choose arbitrary vectors $x \in \cD(H^\half)$, $x_* \in \cD(H^{-\half}) = \textup{Im\,} H^\half$, $u \in \cU$, $u_* \in \cY$ and set $x' = H^\half x$, $x_*' = H^{-\half} x_*$. Then we compute on the one hand
\begin{align*}
\left \langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} x_* \\ u_* \end{bmatrix}
\right\rangle & =
\left\langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} x' \\ u \end{bmatrix}, \begin{bmatrix} H^\half x_*' \\ u_* \end{bmatrix} \right\rangle \\
& =
\left\langle \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A & B \\ C & D \end{bmatrix}
\begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} x' \\ u \end{bmatrix}, \,
\begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right\rangle \\
& = \left\langle Q \begin{bmatrix} x' \\ u \end{bmatrix}, \, \begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right \rangle
= \left \langle \begin{bmatrix} x' \\ u \end{bmatrix}, \, Q^* \begin{bmatrix} x_*' \\ u_* \end{bmatrix} \right\rangle \\
& = \left\langle \begin{bmatrix} H^\half x \\ u \end{bmatrix}, Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix}
\right\rangle
\end{align*}
while on the other hand
$$
\left\langle \begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix}, \, \begin{bmatrix} x_* \\ u_* \end{bmatrix}
\right\rangle = \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \, \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix}
\begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle.
$$
We thus conclude that
$$
\left\langle \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix}
\begin{bmatrix} x \\ u \end{bmatrix}, Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix}
\right\rangle = \left\langle \begin{bmatrix} x \\ u \end{bmatrix}, \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix}
\begin{bmatrix} x_* \\ u_* \end{bmatrix} \right\rangle
$$
for all $\begin{bmatrix} x \\ u \end{bmatrix}$ in $\cD \left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} \right)$.
Hence
\begin{equation} \label{Q*1}
Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} \in \cD\left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix}^*\right) = \cD\left( \begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix}\right) \end{equation} and \begin{equation} \label{Q*2}
\begin{bmatrix} H^\half & 0 \\ 0 & I \end{bmatrix} Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} =
\begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix} \end{equation} where $x_* \in \cD(H^{-\half})$ and $u_* \in \cY$ are arbitrary. From the formula \eqref{Q*2} we see that $A^* \colon \cD(H^{-\half}) \to \textup{Im\,} H^\half = \cD(H^{-\half})$ and that $C^* \colon \cY \to \textup{Im\,} H^\half = \cD(H^{-\half})$, i.e., condition \eqref{KYP1b'*} holds with $H_* = H^{-1}$. Let us now rewrite equation \eqref{Q*2} in the form $$ Q^* \begin{bmatrix} H^{-\half} x_* \\ u_* \end{bmatrix} = \begin{bmatrix} H^{-\half} & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} \begin{bmatrix} x_* \\ u_* \end{bmatrix}. $$ Using that $Q^*$ is a contraction operator now gives us the generalized adjoint KYP-inequality \eqref{KYP1b*} with $H_* = H^{-1}$. This completes the proof of Proposition \ref{P:KYPduality}.
\end{proof}
We next pursue the dual versions of the results of Section \ref{S:ASRS} concerning the {\em available storage} and {\em required supply} as well as the $\ell^2$-regularized required supply.
First of all let us note that the Laurent operator ${\mathfrak L}_{F_{\Si^*}}$ of $F_{\Si^*}$, i.e., the inverse $Z$-transform version of the multiplication operator $M_{F_{\Sigma^*}} = (M_{F_\Sigma})^*$, is just the adjoint of the Laurent operator $\fL_{F_\Sigma}$ given by \eqref{Laurent0}.
We can rewrite ${\mathfrak L}_{F_{\Si^*}}$ in the convenient block form \begin{equation} \label{Laurent*}
\fL_{F_{\Sigma^*}} = \left[ \begin{array}{c|c}
\fT_{F_{\Sigma^*}} & \fH_{F_{\Sigma^*}} \\
\hline
0 & \widetilde \fT_{F_{\Sigma^*}} \end{array} \right] =
\left[ \begin{array}{c|c} (\widetilde \fT_{F_\Sigma})^* & ( \fH_{F_\Sigma})^* \\
\hline
0 & ( \fT_{F_\Sigma})^* \end{array} \right]
\end{equation} where the Toeplitz operators associated with the adjoint system $\Sigma^*$ are given by \begin{align*}
& \fT_{F_{\Sigma^*}} = (\fL_{F_\Sigma})^*|_{\ell^2_\cY({\mathbb Z}_-)} = (\widetilde \fT_{F_\Sigma})^*, \\
& \widetilde \fT_{F_{\Sigma^*}} = P_{\ell^2_\cU({\mathbb Z}_+)} ( \fL_{F_\Sigma})^*|_{\ell^2_\cY({\mathbb Z}_+)} = ( \fT_{F_\Sigma})^* \end{align*} and where the Hankel operator for the adjoint system (already introduced as the inverse $Z$-transform version of the frequency-domain Hankel operator ${\mathbb H}_{F_{\Sigma^*}}$ given by \eqref{Hankel-identification}) has the explicit representation in terms of the Laurent operator $\fL_{F_{\Sigma^*}} = (\fL_{F_\Sigma})^*$: $$
\fH_{F_{\Sigma^*}} = P_{\ell^2_\cU({\mathbb Z}_-)} ( \fL_{F_\Sigma})^*|_{\ell^2_\cU({\mathbb Z}_+)}. $$
Let ${\boldsymbol{\mathcal U}}_*$ be the space of all functions $n \mapsto {\mathbf u}_*(n)$ from the integers ${\mathbb Z}$ into the input space $\cY$ of the adjoint system $\Sigma^*$. We define the available storage $S_{*a}$ for the adjoint system by \begin{equation} \label{Sa2*}
S_{*a}(x_0) = \sup_{{\mathbf u}_* \in {\boldsymbol{\mathcal U}}_*, \, n_{-1} < 0 } \sum_{n=n_{-1}}^{-1} (\|{\mathbf y}_*(n) \|^2 - \| {\mathbf u}_*(n) \|^2 ) \end{equation} where the supremum is taken over all adjoint-system trajectories $$ ({\mathbf u}_*(n), {\mathbf x}_*(n), {\mathbf y}_*(n) )_{n \le -1} $$ (specified by the adjoint-system equations \eqref{dtsystem*} running in backward time) with final condition ${\mathbf x}_*(-1) = x_0$. Similarly, the dual required supply $S_{*r}$ is given by \begin{equation} \label{Sr2*}
S_{*r} (x_0) = \inf_{ {\mathbf u}_* \in {\boldsymbol{\mathcal U}}_*, \, n_1 \ge 0} \sum_{n=0}^{n_1} ( \| {\mathbf u}_*(n) \|^2 - \| {\mathbf y}_*(n) \|^2) \end{equation} where the infimum is taken over system trajectories $({\mathbf u}_*(n), {\mathbf x}_*(n), {\mathbf y}_*(n) )_{n \le n_1}$ subject to the boundary conditions ${\mathbf x}_*(n_1) = 0$ and ${\mathbf x}_*(-1) = x_0$. Then one applies the analysis behind the proof of Proposition \ref{P:SaSr} to the backward-time system $\Sigma^*$ in place of the forward-time system $\Sigma$ to see that $S_{*a}$ and $S_{*r}$ are both storage functions for $\Sigma^*$ and furthermore $S_{*a}(x_0) \le S_*(x_0) \le S_{*r}(x_0)$, $x_0\in\cX$, for any other $\Sigma^*$-storage function $S_*$. We shall however be primarily interested in the $\ell^2$-regularized dual required supply $\underline{S}_{*r}$, rather than in $S_{*r}$, defined by \begin{equation} \label{uSr2*} \underline{S}_{*r}(x_0) = \inf_{{\mathbf u}_* \in \cD(\bW_{*c}) \colon \bW_{*c} {\mathbf u}_* = x_0}
\sum_{n=0}^\infty \left(\| {\mathbf u}_*(n) \|^2 - \| {\mathbf y}_*(n) \|^2 \right). \end{equation} Furthermore, by working out the backward-time analogues of the analysis in Section \ref{S:ASRS}, one can see that $\underline{S}_{*r}$ is also a storage function for $\Sigma^*$, and that the definitions of $S_{*a}$ and $S_{*r}$ can be reformulated in a more convenient operator-theoretic form: \begin{align}
& S_{*a}(x_0) = \sup_{{\mathbf u}_* \in \ell^2_\cY({\mathbb Z}_-)} \| \bW_{*o} x_0 + \fT_{F_{\Sigma^*}}
{\mathbf u}_*\|^2_{\ell^2_\cU({\mathbb Z}_-)}
- \| {\mathbf u}_* \|^2_{\ell^2_\cY({\mathbb Z}_-)} \notag \\ & =
\sup_{{\mathbf u}_* \in \ell^2_\cY({\mathbb Z}_-)} \| \bW_{c}^* x_0 + \widetilde \fT_{F_{\Sigma}}^*
{\mathbf u}_*\|^2_{\ell^2_\cU({\mathbb Z}_-)}
- \| {\mathbf u}_* \|^2_{\ell^2_\cY({\mathbb Z}_-)} \text{ for } x_0 \in \cD(\bW_c^*) \label{SaOpform*} \end{align} with $S_{*a}(x_0) = + \infty$ if $x_0 \notin \cD(\bW_c^*)$, while \begin{align}
& \underline{S}_{*r}(x_0) = \inf_{{\mathbf u}_* \in \cD(\bW_{*c}) \colon \bW_{*c} {\mathbf u}_* = x_0} \| {\mathbf u}_* \|^2_{\ell^2_\cY({\mathbb Z}_+)} -
\| \widetilde \fT_{F_{\Sigma^*}} {\mathbf u}_* \|^2_{\ell^2_\cU({\mathbb Z}_+)} \notag \\
& = \inf_{{\mathbf u}_* \in \cD(\bW_{o}^*) \colon \bW_{o}^* {\mathbf u}_* = x_0} \| {\mathbf u}_* \|^2_{\ell^2_\cY({\mathbb Z}_+)} -
\| \fT_{F_{\Sigma}}^* {\mathbf u}_* \|^2_{\ell^2_\cU({\mathbb Z}_+)} \notag \\
& = \inf_{{\mathbf u}_* \in \cD(\bW_{o}^*), \, \bW_{o}^* {\mathbf u}_* = x_0} \| D_{ \fT_{F_\Sigma}^*} {\mathbf u}_* \|^2 . \label{uSrOpForm*} \end{align}
By notational adjustments to the arguments in the proof of Theorem \ref{T:Sar}, we arrive at the following formulas for $S_{*a}$ and $\underline{S}_{*r}$ on $\textup{Im\,} \bW_o^*$.
\begin{theorem} \label{T:Sar*} Let the operators $\overline{X}_a$, $\overline{X}_r$ be as in Lemma \ref{L:fact} and define operators $H_a$ and $H_r$ as in \eqref{Ha-def} and \eqref{Hr-def}. Then the dual available storage $S_{*a}$ and the dual $\ell^2$-regularized required supply are given {\rm(}on a suitably restricted domain{\rm)} by \begin{align}
& S_{*a}(x_0) = \| \overline{X}_r x_0 \|^2 = \| H_r^{-\half} x_0 \|^2 \text{ for } x_0 \in \textup{Im\,} \bW_o^*, \label{form1*} \\
& \underline{S}_{*r}(x_0) = \| | \overline{X}_a|^{-1} x_0 \|^2 = \| H_a^{- \half} x_0 \|^2 \text{ for } x_0 \in \textup{Im\,} \bW_o^*. \label{form2*} \end{align} \end{theorem}
Let us associate extended-real-valued functions $S_{H_a}$, $S_{H_r}$, $S_{H_r^{-1}}$, $S_{H_a^{-1}}$ with the positive-definite operators $H_a$, $H_r$, $H_r^{-1}$, $H_a^{-1}$ as in \eqref{QuadStorage1}. Theorems \ref{T:Sar} and \ref{T:Sar*} give us the close relationship between these functions and the functions $S_a$, $\underline{S}_r$ (storage functions for $\Sigma$) and $S_{*a}$, $\underline{S}_{*r}$ (storage functions for $\Sigma^*$), namely: \begin{align} & S_a(x) = S_{H_a}(x), \quad \underline{S}_{r}(x) = S_{H_r}(x) \text{ for } x \in \textup{Im\,} \bW_c, \notag \\ & S_{*a}(x) = S_{H_r^{-1}}(x), \quad \underline{S}_{*r}(x) = S_{H_a^{-1}}(x) \text{ for } x \in \textup{Im\,} \bW_o^*. \label{storage-quadratic} \end{align} In general we do not assert that equality holds in any of the four equalities in \eqref{storage-quadratic} for all $x \in \cX$. Nevertheless it is the case that $S_{H_a}$ and $S_{H_r}$ are storage functions for $\Sigma$ and $S_{H_r^{-1}}$ and $S_{H_a^{-1}}$ are storage functions for $\Sigma^*$, as we now explain.
\begin{proposition} \label{P:QuadStorageFuncs} Let $H_a$, $H_r$, $H_r^{-1}$, $H_a^{-1}$ be the positive-definite operators as in Theorems \ref{T:Sar} and \ref{T:Sar*}. Then the following hold: \begin{enumerate} \item[(1)] $S_{H_a}$ and $S_{H_r}$ are nondegenerate storage functions for $\Sigma$, or equivalently, $H_a$ and $H_r$ are positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$.
\item[(2)] $S_{H_r^{-1}}$ and $S_{H_a^{-1}}$ are storage functions for $\Sigma^*$, or equivalently, $H_r^{-1}$ and $H_a^{-1}$ are positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'*}--\eqref{KYP1b*} for $\Sigma^*$. \end{enumerate} \end{proposition}
\begin{proof} The fact that $S_H$ is a nondegenerate storage function for $\Sigma$ (respectively $\Sigma^*$) if and only if $H$ is a positive-definite solution of the generalized KYP-inequality for $\Sigma$ (respectively $\Sigma^*$) is a consequence of Proposition \ref{P:QuadStorage} and its dual Proposition \ref{P:QuadStorage*}. We shall use these formulations interchangeably.
We know that $S_{H_a}(x) = S_a(x)$ for $x \in \textup{Im\,} \bW_c$. Furthermore, as a consequence of \eqref{intertwine1} with $\widetilde u = 0$ and of \eqref{Wc-fin} with $n_{-1} = -1$, we see that $\textup{Im\,} \bW_c$ is invariant under $A$ and contains $\textup{Im\,} B$. Thus condition \eqref{KYP1b'} holds with $\textup{Im\,} \bW_c$ in place of $\cD(H^\half)$. The facts that $S_{H_a}$ agrees with $S_a$ on $\textup{Im\,} \bW_c$ and that $S_a$ is a storage function for $\Sigma$ imply that the inequality \eqref{KYP1b} holds for $x \in \textup{Im\,} \bW_c$ and $u \in \cU$: \begin{equation}\label{KYP1b-Wc}
\left\| \begin{bmatrix} H_a^{\half} \! & \! 0 \\ 0 \! & \! I_{\cU}
\end{bmatrix} \begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2}
- \left\| \begin{bmatrix} H_a^{\half} \! & \! 0 \\ 0 \! & \! I_{\cY} \end{bmatrix} \begin{bmatrix} A\! & \! B \\ C \! & \! D \end{bmatrix}
\begin{bmatrix} x \\ u \end{bmatrix} \right\|^{2} \ge 0. \end{equation} As noted at the end of Theorem \ref{T:Sar}, $\textup{Im\,} \bW_c$ is a core for $H_a^\half$; hence, given $x \in \cD(H_a^\half)$, there is a sequence of points $\{ x_n\}_{n \ge 1}$ contained in $\textup{Im\,} \bW_c$ such that $\lim_{n \to \infty} x_n = x$ and $\lim_{n \to \infty} H_a^\half x_n = H_a^\half x$. As each $x_n \in \textup{Im\,} \bW_c$, we know that the inequality \eqref{KYP1b-Wc} holds with $x_n$ in place of $x$ for all $n = 1,2,\dots$. We may now take limits in this inequality to see that the inequality continues to hold with $x = \lim_{n \to \infty} x_n \in \cD(H_a^\half)$, i.e., condition \eqref{KYP1b} holds with $H_a$ in place of $H$. Thus $H_a$ is a solution of the generalized KYP-inequality for $\Sigma$. That $H_r^{-1}$ is a solution of the generalized KYP-inequality for $\Sigma^*$ now follows by applying the same analysis to $\Sigma^*$ rather than to $\Sigma$. Finally, the fact that $H_a$ (respectively, $H_r^{-1}$) is a positive-definite solution of the generalized KYP-inequality for $\Sigma$ (respectively for $\Sigma^*$) implies that $H_a^{-1}$ (respectively, $H_r$) is a positive-definite solution of the generalized KYP-inequality for $\Sigma^*$ (respectively, $\Sigma$) as a consequence of Proposition \ref{P:KYPduality}. \end{proof}
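In the bounded finite-dimensional case, the inequality \eqref{KYP1b-Wc} holding for all $x$ and $u$ amounts to contractivity of the system matrix weighted by $H^{1/2}$ on the state components. The following Python sketch is an illustration only (the matrices are random and the dimensions arbitrary, not taken from any particular system): it builds a system matrix that is contractive in the $H$-weighted sense and verifies that the resulting KYP LMI is positive semidefinite.

```python
import numpy as np
from scipy.linalg import sqrtm, block_diag

rng = np.random.default_rng(1)
nx, nu, ny = 3, 2, 2

# Random candidate system matrix and a fixed positive-definite H.
M0 = rng.standard_normal((nx + ny, nx + nu))
H = np.diag([2.0, 1.0, 0.5])
Hh = sqrtm(H).real
W_out = block_diag(Hh, np.eye(ny))                 # weights the state/output rows
W_in = block_diag(np.linalg.inv(Hh), np.eye(nu))   # weights the state/input columns

# Rescale so the weighted system matrix W_out M W_in is a strict contraction.
T0 = W_out @ M0 @ W_in
M = M0 / (1.01 * np.linalg.norm(T0, 2))

# KYP LMI defect: blkdiag(H, I_U) - M^T blkdiag(H, I_Y) M must be PSD.
L = block_diag(H, np.eye(nu)) - M.T @ block_diag(H, np.eye(ny)) @ M
print("smallest eigenvalue of KYP defect:", np.linalg.eigvalsh(L).min())
```

The rescaling step is exactly the weighted-contractivity characterization: $\|W_{\rm out} M W_{\rm in}\| \le 1$ is algebraically equivalent to the LMI checked in the last line.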
\section{Order properties of solutions of the generalized KYP-inequality and finer results for special cases} \label{S:order}
We have implicitly been using an order relation on storage functions, namely: we say that $S_1 \le S_2$ if $S_1(x_0) \le S_2(x_0)$ for all $x_0 \in \cX$. For the case of quadratic storage functions $S_{H_1}$ and $S_{H_2}$ where $H_1$ and $H_2$ are two positive-semidefinite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}, the induced ordering $\le$ on positive-semidefinite (possibly unbounded) operators can be
defined as follows: given two positive-semidefinite operators $H_1$ with dense domain $\cD(H_1)$ and $H_2$ with dense
domain $\cD(H_2)$ in $\cX$, we say that {\em $H_1 \le H_2$ if $\cD(H_2^\half) \subset \cD(H_1^\half)$ and }
\begin{equation} \label{H1leH2}
\| H^\half_1 x \|^2 \le \| H^\half_2 x \|^2 \text{ for all } x \in \cD(H_2^\half).
\end{equation} In case $H_1$ and $H_2$ are bounded positive-semidefinite operators, one can see that $H_1 \le H_2$ is equivalent to $H_1 \preceq H_2$ in the sense of the inequality between quadratic forms: $\langle H_1 x, x \rangle \le \langle H_2 x, x \rangle$, i.e., in the Loewner partial order: $H_2 - H_1 \succeq 0$. This ordering $\le$ on (possibly unbounded) positive-semidefinite operators has appeared in the more general context of closed quadratic forms $S_H$ (not necessarily storage functions for some dissipative system $\Sigma$) and associated semibounded selfadjoint operators $H$ (not necessarily solving some generalized KYP-inequality); see formula (2.17) and the subsequent remark in the book of Kato \cite{Kato}. This order has been studied in the setting of solutions of a generalized KYP-inequality in the paper of Arov-Kaashoek-Pik \cite{AKP06}. Here we offer a few additional such order properties which follow from the results developed here. Recall that the notion of a {\em core} of a closed, densely defined linear operator was introduced in the paragraph preceding Theorem \ref{T:Sar}.
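For bounded $H_1$ and $H_2$, the equivalence between the form ordering \eqref{H1leH2} and the Loewner order can be checked directly in finite dimensions. The following Python sketch (illustrative only, with random matrices) constructs a pair with $H_2 - H_1 \succeq 0$ and confirms both formulations:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

def random_psd(n):
    # M M^T is always positive semidefinite
    M = rng.standard_normal((n, n))
    return M @ M.T

n = 4
H1 = random_psd(n)
H2 = H1 + random_psd(n)   # guarantees H2 - H1 >= 0 in the Loewner order

# Form ordering: ||H1^(1/2) x||^2 <= ||H2^(1/2) x||^2 for sampled x
R1, R2 = sqrtm(H1).real, sqrtm(H2).real
for _ in range(100):
    x = rng.standard_normal(n)
    assert np.linalg.norm(R1 @ x) <= np.linalg.norm(R2 @ x) + 1e-9

# Loewner ordering: smallest eigenvalue of H2 - H1 is nonnegative
print("min eig of H2 - H1:", np.linalg.eigvalsh(H2 - H1).min())
```

The two checks agree because $\|H_i^{1/2}x\|^2 = \langle H_i x, x\rangle$, so the form inequality for all $x$ is precisely $H_2 - H_1 \succeq 0$.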
\begin{theorem} \label{T:order}
Assume that the system $\Sigma$ in \eqref{dtsystem} satisfies the standing assumption \eqref{A} and $H_a$ and $H_r$ are defined by \eqref{Ha-def} and \eqref{Hr-def}. Let $H$ be any positive-definite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. \begin{enumerate} \item[(1)] Assume that $\textup{Im\,} \bW_c$ is a core for $H^\half$. Then we have the operator inequality \begin{equation} \label{HaH-ineq}
H_a \le H \end{equation} and furthermore $\textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$.
\item[(2)] Assume that $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}$. Then we have the operator inequality \begin{equation} \label{HHr-ineq}
H \le H_r \end{equation} and furthermore $\textup{Im\,} \bW_c \subset \cD(H^\half)$. \end{enumerate} \end{theorem}
\begin{proof} We deal with (1) and (2) in turn.
{(1)} Suppose that $H$ is a positive-definite solution of the generalized KYP-inequality such that $\textup{Im\,} \bW_c$ is a core for $H^\half$.
From Theorem \ref{T:Sar}, we know that $S_a(x) = \| H_a^\half x\|$ for $x \in \textup{Im\,} \bW_c$. Since $S_a$ is the smallest storage function (see Proposition \ref{P:SaSr}) and $S_H$ is a storage function, it follows that \begin{equation} \label{Ha-Hineq}
\| H_a^\half x \|^2 = S_a(x) \le S_H(x) = \| H^\half x \|^2 \text{ for } x \in \textup{Im\,} \bW_c. \end{equation} Let now $x$ be an arbitrary point of $\cD(H^\half)$. Since $\textup{Im\,} \bW_c$ is a core for $H^\half$, we can find a sequence $\{x_n \}_{n \ge 1}$ of points in $\textup{Im\,} \bW_c$ such that $x_n \to x$ and $H^\half x_n \to H^\half x$. In particular $H^\half x_n$ is a Cauchy sequence and the inequality \begin{align*}
& \| H_a^\half x_n - H_a^\half x_m \|^2 = \| H_a ^\half(x_n - x_m ) \|^2 \\
& \quad \quad \le \| H^\half(x_n - x_m ) \|^2 =
\| H^\half x_n - H^\half x_m \|^2 \end{align*}
implies that $\{ H_a^\half x_n \}_{n \ge 1}$ is Cauchy as well, so converges to some $y \in \cX$. As $H_a^\half$ is closed, we get that $x \in \cD(H_a^\half)$ and $y = H_a^\half x$. We may then take limits in the inequality $\|H_a^\half x_n \|^2 \le \| H^\half x_n \|^2$ holding for all
$n$ (a consequence of \eqref{Ha-Hineq}) to conclude that $\| H_a^\half x \|^2 \le \| H^\half x \|^2$, i.e., $H_a \le H$, i.e.,
\eqref{HaH-ineq} holds.
Recall next from Corollary \ref{C:boundedSauSr} that
$\| \bW_o x_0 \|^2 \le S_a(x_0)$, where we now also know from Theorem \ref{T:Sar} that
$S_a(x_0) = \| H_a^\half x_0 \|^2$ for $x_0 \in \textup{Im\,} \bW_c$. We thus have the chain of operator inequalities $$
\bW_o^* \bW_o \le H_a \le H. $$ By Proposition 3.4 in \cite{AKP05}, we may equivalently write $$
H^{-1} \le H_a^{-1} \le (\bW_o^* \bW_o)^{-1}. $$
In particular $\cD( | \bW_o |^{-1}) \subset \cD(H^{- \half})$. If we introduce the polar decomposition
$\bW_o = U_o | \bW_o |$ for $\bW_o$, we see that $\bW_o^* = | \bW_o | U_o^*$ and hence
$\textup{Im\,} \bW_o^* = \textup{Im\,} | \bW_o|$. Thus $$
\cD( | \bW_o |^{-1}) = \textup{Im\,} | \bW_o| = \textup{Im\,} \bW_o^*
$$
and it follows that $\textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$ and the verification of (1) is complete.
{(2)} We now suppose that $H$ is a positive-definite solution of the generalized KYP-inequality such that $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}$. By applying the result of part (1) to the adjoint system $\Sigma^*$, we see that $H_r^{-1} \le H^{-1}$ and that $\textup{Im\,} \bW_c \subset \cD(H^\half)$. If we apply the result of Proposition 3.4 in \cite{AKP05}, we see that $H_r^{-1} \le H^{-1}$ implies that (is actually equivalent to) $H \le H_r$, completing the verification of (2). \end{proof}
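In the bounded finite-dimensional setting, the first link in the chain $\bW_o^* \bW_o \le H_a \le H$ used above admits a quick numerical sanity check: for a strictly contractive colligation, $H = I$ solves the KYP-inequality, and the observability gramian $\bW_o^*\bW_o$ (the solution of a discrete Lyapunov equation) must sit below it. The sketch below uses random data and is illustrative only:

```python
import numpy as np
from scipy.linalg import qr, solve_discrete_lyapunov

rng = np.random.default_rng(2)
nx, nu = 3, 2
n = nx + nu

# Strictly contractive colligation: 0.9 times a random orthogonal matrix.
# Then H = I solves the KYP-inequality and r_spec(A) <= 0.9 < 1.
Q, _ = qr(rng.standard_normal((n, n)))
M = 0.9 * Q
A, C = M[:nx, :nx], M[nx:, :nx]

# Observability gramian G_o = W_o^* W_o solves G_o = A^T G_o A + C^T C.
G_o = solve_discrete_lyapunov(A.T, C.T @ C)

# Check the chain W_o^* W_o <= H with H = I: I - G_o must be PSD.
print("min eig of I - G_o:", np.linalg.eigvalsh(np.eye(nx) - G_o).min())
```

Here $M^T M = 0.81\, I$ gives $A^T A + C^T C = 0.81\, I$, so $I - G_o = 0.19\, I + A^T(I - G_o)A \succeq 0$, which is what the final eigenvalue check confirms.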
\begin{remark} \label{R:ineq-chain} By the last assertion in Theorem \ref{T:Sar}, we know that $\textup{Im\,} \bW_c$ is a core for $H_a^\half$ and that $\textup{Im\,} \bW_o^*$ is a core for $H_r^{-\half}$. Also by Proposition \ref{P:QuadStorageFuncs} we know that $H_a$ and $H_r$ are positive-definite solutions of the generalized KYP-inequality for $\Sigma$. Thus item (1) in Theorem \ref{T:order} may be rephrased as follows:
\begin{itemize} \item {\sl The set ${\mathcal GS}_c$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma$ such that $\textup{Im\,} \bW_c$ is a core for $H^\half$ has the solution $H_a$ as a minimal element with respect to the ordering $\le$.} \end{itemize}
\noindent
Similarly item (2) in Theorem \ref{T:order} may be rephrased as:
\begin{itemize}
\item {\em The set ${\mathcal GS}_o$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} such that $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}$ has the solution $H_r$ as a maximal element with respect to the ordering $\le$.}
\end{itemize}
\noindent
It would be tempting to say:
\begin{itemize}
\item {\em The set ${\mathcal GS}$ consisting of all positive-definite solutions $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}
such that $\textup{Im\,} \bW_c$ is a core for $H^\half$ and $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}$ has $H_a$ as a minimal element and $H_r$
as a maximal element with respect to the ordering $\le$.}
\end{itemize}
However, while the above results imply that $\textup{Im\,} \bW_c \subset \cD(H_r^{\half})$ and that
$\textup{Im\,} \bW_o^* \subset \cD(H_a^{-\half})$,
we have not been able to show in general that $\textup{Im\,} \bW_c$ is a core for $H_r^\half$ or that $\textup{Im\,} \bW_o^*$ is a core for
$H_a^{-\half}$. Such a more satisfying symmetric statement does hold in the pseudo-similarity framework for the analysis of
solutions of generalized KYP-inequalities (see Proposition 5.8 in \cite{AKP06}). \end{remark}
We now consider the case where $\Si$ is not only controllable and/or observable, but satisfies the stronger condition of $\ell^2$-exact controllability or $\ell^2$-exact observability, or both, i.e., $\ell^2$-exact minimality. We first consider the implications for $H_a$ and $H_r$.
\begin{proposition}\label{P:ell2minImplsHaHr} Let $\Sigma$ be a system as in \eqref{dtsystem} such that assumption \eqref{A} holds. \begin{itemize} \item[(1)] If $\Sigma$ is $\ell^2$-exactly controllable, then $H_a$ and $H_r$ are bounded.
\item[(2)] If $\Sigma$ is $\ell^2$-exactly observable, then $H_a$ and $H_r$ are boundedly invertible.
\item[(3)] If $\Sigma$ is $\ell^2$-exactly minimal, i.e., both $\ell^2$-exactly controllable and $\ell^2$-exactly observable, then $H_a$ and $H_r$ are both bounded and boundedly invertible. \end{itemize} \end{proposition}
\begin{proof} We discuss each of (1), (2), (3) in turn.
{(1)} Item (1) follows directly from the fact that $\textup{Im\,} \bW_c$ is contained in both $\cD(H_a^\half)$ and $\cD(H_r^\half)$ together with the Closed Graph Theorem.
{(2)} From the last assertion in Theorem \ref{T:Sar}, we know that $\textup{Im\,} \bW_c$ is a core for $H_a^\half$. Then item (1) in Theorem \ref{T:order} implies that $\textup{Im\,} \bW_o^* \subset \cD(H_a^{-\half})$. If $\textup{Im\,} \bW_o^* = \cX$, the Closed Graph Theorem then gives us that $H_a^{-\half}$ is bounded.
Also part of the last assertion of Theorem \ref{T:Sar} is the statement that $\bW_o^*$ is a core for $H_r^{-\half}$, so in particular $\textup{Im\,} \bW_o^* \subset \cD(H_r^{-\half})$. Then again the Closed Graph Theorem implies that $H_r^{-\half}$ is bounded.
{(3)} Simply combine the results of items (1) and (2). \end{proof}
Next we consider general positive-definite solutions to the generalized KYP-inequality.
\begin{proposition}\label{P:ell2minImplsH} Suppose that $\Sigma$ is a system as in \eqref{dtsystem} such that assumption \eqref{A} holds and that $H$ is any positive-definite solution of the generalized KYP-inequality. \begin{itemize} \item[(1)] Suppose that $\Sigma$ is $\ell^2$-exactly controllable and that $\textup{Im\,} \bW_c \subset \cD(H^\half)$ {\rm(}as is the case e.g.\ if $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}${\rm)}. Then $H$ is bounded and furthermore $$
H_a \le H. $$
\item[(2)] Suppose that $\Sigma$ is $\ell^2$-exactly observable and that $\textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$ {\rm(}as is the case e.g.\ if $\textup{Im\,} \bW_c$ is a core for $H^\half${\rm)}. Then $H^{-1}$ is bounded and furthermore $$
H \le H_r. $$
\item[(3)] Suppose that $\Sigma$ is both $\ell^2$-exactly controllable and $\ell^2$-exactly observable and that either {\rm(a)} $\textup{Im\,} \bW_c \subset \cD(H^\half)$ or {\rm(b)} $\textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$. Then $H$ is bounded and boundedly invertible and we have the inequality chain \begin{equation} \label{HaHrineq}
H_a \le H \le H_r. \end{equation} \end{itemize} \end{proposition}
\begin{proof} First note that the fact that the parenthetical hypotheses in items (1) and (2) are stronger than the given hypotheses is a consequence of the final assertions in parts (1) and (2) of Theorem \ref{T:order}. We now deal with the rest of (1), (2), (3).
{(1)} If we assume that $\cX = \textup{Im\,} \bW_c \subset \cD(H^\half)$, then $H^\half$ (and hence also $H$) is bounded by the Closed Graph Theorem. Moreover, as $\textup{Im\,} \bW_c = \cD(H^\half)$, in particular $\textup{Im\,} \bW_c$ is a core for $H^\half$ and the inequality $H_a \le H$ follows from Theorem \ref{T:order} (1).
{(2)} Similarly, if we assume $\cX = \textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$, then $H^{-\half}$ is bounded by the Closed Graph Theorem. As $\textup{Im\,} \bW_o^* = \cD(H^{-\half})$, in particular $\textup{Im\,} \bW_o^*$ is a core for $H^{-\half}$ and $H \le H_r$ follows as a consequence of Theorem \ref{T:order} (2).
{(3)} If $\cX = \textup{Im\,} \bW_c \subset \cD(H^\half)$, then in fact $\textup{Im\,} \bW_c = \cD(H^\half)$ so $\textup{Im\,} \bW_c$ is a core for $H^\half$. By Theorem \ref{T:order}, it follows that $\textup{Im\,} \bW_o^* \subset \cD(H^{-\half})$ and hence hypothesis (b) is a consequence of hypothesis (a) when combined with all the other hypotheses in (3). Similarly hypothesis (a) is a consequence of hypothesis (b). Hence there is no loss of generality in assuming that both (a) and (b) hold. Then the verification of (3) is completed by simply combining the results of (1) and (2). \end{proof}
\section{Proofs of Bounded Real Lemmas} \label{S:BRLproof}
We now put all the pieces together to give a storage-function proof of Theorem~\ref{T:BRLinfstan}.
\begin{proof}[Proof of Theorem \ref{T:BRLinfstan}] We are given a minimal system $\Sigma$ as in \eqref{dtsystem} with transfer function $F_\Sigma$ in the Schur class $\cS(\cU, \cY)$.
\noindent {\em Proof of sufficiency.} For the sufficiency direction, we assume either that there exists a positive-definite solution $H$ of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b} (statement (1)) or a bounded and boundedly invertible solution $H$ of the KYP-inequality \eqref{KYP1} (statements (2) and (3)). As the latter case is a particular version of the former case, it suffices to assume that we have a positive-definite solution of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}. We are to show that then $F_\Sigma$ is in the Schur class $\cS(\cU, \cY)$.
Given such a generalized solution of the KYP-inequality, Proposition \ref{P:QuadStorage} guarantees us that $S_H$ is an (even quadratic) storage function for $\Sigma$. Then $F_\Sigma$ has analytic continuation to a Schur class function by Proposition \ref{P:storage-Schur}.
\noindent {\em Proof of necessity in statement (1):} We assume that $\Sigma$ is minimal and that $F_\Sigma$ has analytic continuation to a Schur-class function, i.e., assumption \eqref{A} holds. Then Proposition \ref{P:QuadStorageFuncs} gives us two choices $H_a$ and $H_r$ of positive-definite solutions of the generalized KYP-inequality \eqref{KYP1b'}--\eqref{KYP1b}.
\noindent {\em Proof of necessity in statement (2):} We assume that $\Sigma$ is exactly controllable and exactly observable with transfer function $F_\Sigma$ having analytic continuation to the Schur class. From Proposition
\ref{P:HankelDecs} (1) we see that $\textup{Im\,} \bW_c \supset \operatorname{Rea}\,(A|B) = \cX$ and that $\cD(\bW_o) \supset \operatorname{Rea}\, (A|B) = \cX$ while from item (2) in the same proposition we see that
$\textup{Im\,} \bW_o^* \supset \operatorname{Obs}\,(C|A) = \cX$ and that $\cD(\bW_c^*) \supset \operatorname{Obs}\,(C|A) = \cX$. Hence by the Closed Graph Theorem, in fact $\bW_c$ and $\bW_o^*$ are bounded in addition to being surjective. In particular $\Sigma$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable, so this case actually falls under item (3) of Theorem \ref{T:BRLinfstan}, which we will prove next.
\noindent {\em Proof of necessity in statement (3):} We now assume that $\Sigma$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable with $F_\Sigma$ having analytic continuation to a function in the Schur class $\cS(\cU, \cY)$ and we want to produce a bounded and boundedly invertible solution $H$ of the KYP-inequality \eqref{KYP1}. In particular, $\Sigma$ is minimal (controllable and observable), so Proposition \ref{P:QuadStorageFuncs} gives us two solutions $H_a$ and $H_r$ of the generalized KYP-inequality. But any solution $H$ of the generalized KYP-inequality becomes a solution of the standard KYP-inequality \eqref{KYP1} if it happens to be the case that $H$ is bounded. By the result of item (3) in Proposition \ref{P:ell2minImplsHaHr}, both $H_a$ and $H_r$ are bounded and boundedly invertible under our $\ell^2$-minimality assumptions. Thus in this case $H_a$ and $H_r$ serve as two choices for bounded, strictly positive-definite solutions of the KYP-inequality, as needed. \end{proof}
We are now ready also for a storage-function proof of Theorem \ref{T:BRLinfstrict}.
\begin{proof}[Proof of Theorem \ref{T:BRLinfstrict}] The standing assumption for both directions is that $\Sigma$ is a linear system as in \eqref{dtsystem} with exponentially stable state operator $A$.
\noindent {\em Proof of necessity:} Assume that there exists a bounded strictly positive-definite solution $H$ of the strict KYP-inequality. By Proposition \ref{P:strictQuadStorage}, $S_H$ is a strict storage function for $\Sigma$. Then by Proposition \ref{P:strictstorage-Schur}, $F_\Sigma$ has analytic continuation to an $\cL(\cU, \cY)$-valued $H^\infty$-function with $H^\infty$-norm strictly less than 1 as wanted. The fact that
$A$ is exponentially stable implies that $F_\Sigma$ has analytic continuation to a slightly larger disk beyond ${\mathbb D}$, and the fact that $H$ is strictly positive-definite implies that $S_H$ has the additional coercivity property $S_H(x) \ge \epsilon_0 \| x \|^2$ for some $\epsilon_0 > 0$.
\noindent {\em Proof of sufficiency:} We are assuming that $\Sigma$ has state operator $A$ exponentially stable and with transfer function $F_\Sigma$ in the strict Schur class. The exponential stability of $A$ (i.e. $A$ has spectral radius $r_{\rm spec}(A) < 1$) means that the series $$
\bW_{o}^{*}{\mathbf y} = \sum_{k=0}^{\infty} A^{*k} C^{*}{\mathbf y}(k)\ \ ({\mathbf y}\in\ell^2_\cY(\BZ_+)), \quad
\bW_{c}{\mathbf u} = \sum_{k=0}^{\infty} A^{k} B{\mathbf u}(k)\ \ ({\mathbf u}\in\ell^2_\cU(\BZ_-)) $$ are norm-convergent (not just in the weak sense as in Proposition \ref{P:WcWo'}), and hence $\bW_c$ and $\bW_o$ are bounded. However, it need not be the case that $\bW_c$ or $\bW_o^*$ be surjective, so we are not in a position to apply part (3) of Theorem \ref{T:BRLinfstan} to the system $\Sigma$. The adjustment for handling this difficulty, which also ultimately produces bounded and boundedly invertible solutions of the strict KYP-inequality \eqref{KYP2}, is what we shall call {\em $\epsilon$-regularization reduction}. It goes back at least to Petersen-Anderson-Jonckheere \cite{PAJ} for the finite-dimensional case, and was extended to the infinite-dimensional case in our previous paper \cite{KYP1}. We recall the procedure here for completeness and because we refer to it in a subsequent remark.
Since $r_\textup{spec} (A) < 1$, the resolvent expression $(I - \lambda A)^{-1}$ is uniformly bounded for all $\lambda$ in the unit disk ${\mathbb D}$. Since we are now assuming that $F_\Sigma$ is in the strict Schur class, it follows that we can choose $\epsilon >0$ sufficiently small so that the augmented matrix function \begin{equation} \label{Fepsilon} F_{\epsilon}(\lambda) : = \begin{bmatrix} F(\lambda) & \epsilon \lambda C (I - \lambda A)^{-1} \\ \epsilon \lambda (I - \lambda A)^{-1} B & \epsilon^{2} \lambda (I - \lambda A)^{-1} \\ \epsilon I_{\cU} & 0 \end{bmatrix} \end{equation} is in the strict Schur class $\cS^{o}( \cU \oplus \cX, \cY \oplus \cX \oplus \cU)$. Note that \[
F_{\epsilon}(\lambda) = \begin{bmatrix} D & 0 \\ 0 & 0 \\ \epsilon I_{\cU}
& 0 \end{bmatrix} + \lambda \begin{bmatrix} C \\ \epsilon I_{\cX} \\ 0
\end{bmatrix} (I - \lambda A)^{-1} \begin{bmatrix} B & \epsilon I_{\cX}
\end{bmatrix} \] and hence \begin{equation} \label{breal}
M_{\epsilon} = \begin{bmatrix} \bA & \bB \\ \bC & \bD \end{bmatrix} : =
\mat{c|cc}{
A & B & \epsilon I_{\cX}\\
\hline C & D & 0 \\
\epsilon I_{\cX} & 0 & 0 \\
0 & \epsilon I_{\cU} & 0} \end{equation} is a realization for $ F_{\epsilon}(\lambda)$. Suppose that we can find a bounded and boundedly invertible positive-definite operator $H$ satisfying the KYP-inequality \eqref{KYP1} associated with the system $\Sigma_\epsilon$: \begin{equation} \label{KYP-epsilon}
\begin{bmatrix} \bA^{*} & \bC^{*} \\ \bB^{*} & \bD^{*} \end{bmatrix}
\begin{bmatrix} H & 0 \\ 0 & I_{\cY \oplus \cX \oplus \cU}
\end{bmatrix} \begin{bmatrix} \bA & \bB \\ \bC & \bD
\end{bmatrix} \preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU \oplus
\cX} \end{bmatrix}. \end{equation} Spelling this out gives $$ \begin{bmatrix} A^{*}HA + C^{*}C + \epsilon^{2}I_{\cX} & A^{*}H B +
C^{*}D & \epsilon A^{*}H \\
B^{*}HA + D^{*}C & B^{*} H B + D^{*} D + \epsilon^{2}I_{\cU} &
\epsilon B^{*} H \\ \epsilon HA & \epsilon HB & \epsilon^{2} H \end{bmatrix} \preceq \begin{bmatrix} H & 0 & 0 \\ 0 & I_{\cU} & 0 \\ 0 & 0 & I_{\cX} \end{bmatrix}. $$ By crossing off the third row and third column, we get the inequality $$ \begin{bmatrix} A^{*} H A + C^{*} C + \epsilon^{2} I_{\cX} & A^{*}H B
+ C^{*} D \\ B^{*}H A + D^{*} C & B^{*} H B + D^{*} D +
\epsilon^{2} I_{\cU} \end{bmatrix} \preceq \begin{bmatrix} H & 0
\\ 0 & I_{\cU} \end{bmatrix} $$ or $$ \begin{bmatrix} A^{*} & C^{*} \\ B^{*} & D^{*} \end{bmatrix}
\begin{bmatrix} H & 0 \\ 0 & I_{\cY} \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix} + \epsilon^{2}
\begin{bmatrix} I_{\cX} & 0 \\ 0 & I_{\cU} \end{bmatrix}
\preceq \begin{bmatrix} H & 0 \\ 0 & I_{\cU}
\end{bmatrix} $$ leading us to the strict KYP-inequality \eqref{KYP2} for the original system $\Sigma$ as wanted.
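The "crossing off the third row and third column" step is a pure block-matrix identity, which is easy to confirm numerically: the leading $(\cX \oplus \cU)$-principal submatrix of the augmented KYP defect for $\Sigma_\epsilon$ is exactly the strict-KYP defect for $\Sigma$. The following sketch uses random matrices of arbitrary dimensions, purely as an illustration of the identity:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(3)
nx, nu, ny = 3, 2, 2
eps = 0.1
A, B = rng.standard_normal((nx, nx)), rng.standard_normal((nx, nu))
C, D = rng.standard_normal((ny, nx)), rng.standard_normal((ny, nu))
R = rng.standard_normal((nx, nx))
H = R @ R.T + np.eye(nx)   # arbitrary positive-definite H

# Augmented system matrix M_eps for Sigma_eps (input space U + X, output Y + X + U)
M_eps = np.block([
    [A,                    B,                    eps * np.eye(nx)],
    [C,                    D,                    np.zeros((ny, nx))],
    [eps * np.eye(nx),     np.zeros((nx, nu)),   np.zeros((nx, nx))],
    [np.zeros((nu, nx)),   eps * np.eye(nu),     np.zeros((nu, nx))]])

# Augmented KYP defect: blkdiag(H, I_{U+X}) - M_eps^T blkdiag(H, I_{Y+X+U}) M_eps
W_mid = block_diag(H, np.eye(ny + nx + nu))
Big = block_diag(H, np.eye(nu), np.eye(nx)) - M_eps.T @ W_mid @ M_eps

# Strict-KYP defect for the original system, including the eps^2 term
S = np.block([[A, B], [C, D]])
Small = (block_diag(H, np.eye(nu))
         - S.T @ block_diag(H, np.eye(ny)) @ S
         - eps**2 * np.eye(nx + nu))

# Deleting the third row/column block of Big leaves exactly Small
print("identity holds:", np.allclose(Big[:nx + nu, :nx + nu], Small))
```

Since a principal submatrix of a positive-semidefinite matrix is positive semidefinite, $\text{Big} \succeq 0$ forces $\text{Small} \succeq 0$, which is the strict KYP-inequality \eqref{KYP2}.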
It remains only to see why there is a bounded and boundedly invertible solution $H$ of \eqref{KYP-epsilon}. It is easily checked that the system $\Sigma_\epsilon$ is exactly controllable and exactly observable, hence exactly minimal, since $\bB$ and $\bC^*$ are both already surjective; as observed in the proof of necessity in item (2) of Theorem \ref{T:BRLinfstan}, since $F_{\Sigma_\epsilon}$ is in the Schur class it then follows that $\Sigma_\epsilon$ is $\ell^2$-exactly controllable and $\ell^2$-exactly observable as well. Hence we can appeal to either item (2) or item (3) of Theorem \ref{T:BRLinfstan} to conclude that indeed the KYP-inequality \eqref{KYP-epsilon} has a bounded and boundedly invertible positive-definite solution. This is what is done in \cite{KYP1}, where the State-Space-Similarity approach is used to prove items (2) and (3) in Theorem \ref{T:BRLinfstan} rather than the storage-function approach as is done here.
\end{proof}
\begin{remark} \label{R:HaHr-notbounded}
Let $\Si$ and $F_\Si$ satisfy the conditions
of strict Bounded Real Lemma (Theorem \ref{T:BRLinfstrict}). Define
the $\ep$-augmented system $\Si_\ep$ as in \eqref{breal}. We then obtain bounded, strictly positive-definite
solutions $H_{a,\ep}$ and $H_{r,\ep}$ of the strict KYP inequality \eqref{KYP2}, and consequently, by Proposition \ref{P:ell2minImplsH} (3) all bounded or bounded below solutions $H$ to the generalized KYP inequality \eqref{KYP1b'}--\eqref{KYP1b} for $\Sigma_{\epsilon}$ satisfy $H_{a,\ep} \le H \le H_{r,\ep}$ and hence are in fact bounded, strictly positive-definite solutions to the KYP inequality \eqref{KYP1} for the original system $\Sigma$. An application of Theorem \ref{T:order} together with the observation that $\cD$ being a core for the bounded operator $X$ on $\cX$ is the same as $\cD$ being dense in $\cX$ leads to the conclusion that the operators $H_a$ and $H_r$ associated with the original system satisfy $H_a \le H_{a, \epsilon}$ and $H_r^{-1} \le H_{r, \epsilon}^{-1}$ and hence are bounded. However, this by itself is not enough to conclude that $H_a$ and $H_r^{-1}$ are also bounded below. \end{remark}
\paragraph {\bf Acknowledgements} This work is based on the research supported in part by the National Research Foundation of South Africa. Any opinion, finding and conclusion or recommendation expressed in this material is that of the authors and the NRF does not accept any liability in this regard.
It is a pleasure to thank Olof Staffans for enlightening discussion (both verbal and written) and Mikael Kurula for his useful comments while visiting the first author in July 2017.
\end{document}
\begin{document}
\title{Phase-covariant cloning via adiabatic passage in fiber-nanocavity system} \author{Wan-Jun Su} \email{[email protected]} \affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, People's Republic of China} \affiliation{ Institute for Quantum Information Science, University of Calgary, Alberta T2N 1N4, Canada} \author{Zhen-Biao Yang}
\affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, People's Republic of China}
\date{\today}
\begin{abstract} We propose an effective scheme for realizing long-range phase-covariant cloning of quantum states between two qubits in a fiber-nanocavity system via adiabatic passage. Since no cavity (fiber) photons or excited levels of the nitrogen-vacancy (NV) center are populated during the whole process, the scheme is immune to cavity (fiber) decay and spontaneous emission of the NV center. Strict numerical simulation shows that the fidelity remains high even in the presence of realistic imperfections. \end{abstract}
\pacs{03.65.Xp,03.65.Vf,42.50.Dv,42.50.Pq}
\keywords{phase-covariant cloning,adiabatic passage,nitrogen-vacancy center, nanocavity }
\maketitle
\section{Introduction }
It is well known that a fundamental principle of quantum information science is the no-cloning theorem, which states that an unknown quantum state cannot be cloned perfectly \cite{W1982}. However, we can try to achieve a state as close as possible to the input state. Recently, different schemes of quantum cloning have been proposed, and various quantum cloning machines have been designed for different tasks \cite{R1998,V1998}. For the universal quantum cloning machine (UQCM), the input can be an arbitrary qubit, and the fidelity is optimal and does not depend on the input qubit \cite{Buzek1996}. It has been realized in NMR systems \cite{Cummings2002} and linear optics systems \cite{Huang2001,A2002}. Different from the UQCM, the phase-covariant quantum cloning machine \cite{D2000} restricts the input to equatorial qubit states
$|\psi\rangle=(|0\rangle+e^{i\phi}|1\rangle)/\sqrt{2}$, which lie on the equator of the Bloch sphere. A phase-covariant quantum cloning machine has been realized in a solid-state system with a single electron spin at room temperature \cite{Pan2011}, with a fidelity reaching $85.4\%$.
The key to quantum cloning is entanglement. In particular, tripartite entangled states have been used to realize quantum cloning. The W state is one important tripartite entangled state, which was proposed by D\"{u}r and has some interesting properties \cite{Dur2000}. In recent experiments, it was shown that three-qubit W states can be realized in optical systems \cite{Eibl2004} and ion traps \cite{Roos2004}. Since the W state retains bipartite entanglement when any one of the three qubits is traced out, it can be used in quantum information processing \cite{Dr1998} and to test quantum nonlocality without inequalities \cite{Zheng2002,Cabello2002}. Moreover, schemes using W states to realize phase-covariant quantum cloning have been studied in various physical systems \cite{Zheng2005,Shen2012}.
The main obstacle to realizing multi-particle entanglement and quantum information processing is decoherence. In cavity QED, the dominant decoherence mechanisms are atomic spontaneous emission and cavity decay, so reducing their effects is an important problem. Adiabatic techniques are a natural answer, since they feature a certain robustness and, in $\Lambda$-type systems, allow one to avoid a transient large population in the excited state. Recently, the techniques of stimulated Raman adiabatic passage \cite{K1998} and fractional stimulated Raman adiabatic passage \cite{N1999} have been extensively used for quantum information processing \cite{X2006,Yang2010,Song2010}.
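As a rough illustration of why adiabatic passage suppresses excited-state population, one can integrate the on-resonance three-level $\Lambda$-system Schr\"odinger equation with the counterintuitive (Stokes-before-pump) pulse ordering. The Python sketch below uses hypothetical dimensionless pulse parameters chosen only to satisfy the adiabaticity condition; it is not a simulation of the NV-center system of this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gaussian pulses, Stokes preceding pump ("counterintuitive" ordering).
Omega0, sigma = 20.0, 1.2        # peak Rabi frequency and pulse width
t_stokes, t_pump = 4.0, 6.0      # pulse centers

def omega_p(t):                  # pump couples |g> <-> |e>
    return Omega0 * np.exp(-((t - t_pump) / sigma) ** 2)

def omega_s(t):                  # Stokes couples |e> <-> |f>
    return Omega0 * np.exp(-((t - t_stokes) / sigma) ** 2)

def rhs(t, psi):
    # Two-photon-resonant RWA Hamiltonian in the basis (|g>, |e>, |f>)
    H = 0.5 * np.array([[0.0, omega_p(t), 0.0],
                        [omega_p(t), 0.0, omega_s(t)],
                        [0.0, omega_s(t), 0.0]], dtype=complex)
    return -1j * (H @ psi)

psi0 = np.array([1, 0, 0], dtype=complex)        # start in ground state |g>
sol = solve_ivp(rhs, (0.0, 10.0), psi0, max_step=0.01, rtol=1e-8, atol=1e-10)
pops = np.abs(sol.y) ** 2
print("final populations (g, e, f):", pops[:, -1])
print("max excited-state population:", pops[1].max())
```

With these parameters the system adiabatically follows the dark state from $|g\rangle$ to $|f\rangle$, giving near-complete transfer while the excited state is only transiently and weakly populated.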
Motivated by these works, we present a new scheme to generate a W state of NV centers in a fiber-cavity coupled system via adiabatic passage. Meanwhile, we realize phase-covariant cloning using the W state. In this scheme, we use only the ground states of the NV centers, and the cavity (fiber) fields remain in the vacuum state throughout the operation, so NV-center spontaneous emission and cavity decay are efficiently suppressed. Another advantage of the scheme is that the interaction time need not be controlled accurately as long as the adiabatic condition is fulfilled, which makes it more amenable to current laboratory techniques. Our scheme may offer promising perspectives for entanglement generation and quantum information processing.
\section{Physical model}
\begin{figure}
\caption{(Color online) (a) The experimental setup for generating W states of NV centers in the fiber-nanocavity coupled system. (b) Configuration of the NV-center level structure and the relevant transitions. }
\end{figure} The experimental setup for generating the W state of NV centers in the coupled system is shown in Fig. 1(a). Three NV centers (labeled $NV_0$, $NV_1$, $NV_2$) are separately trapped in three distant optical cavities (labeled $C_0$, $C_1$, $C_2$) connected via an optical fiber coupler. The fiber coupler joins three fibers, whose far ends connect to the three cavities. The ground state of the NV center is a spin triplet, labeled $^3A$, with a zero-field splitting (2.88 GHz) between the state
$\left| 0\right\rangle $ ($m_s=0$) and the states $\left| \pm 1\right\rangle $ ($m_s=\pm1$) \cite{Togan2010}. When an external magnetic field is applied along the NV center symmetry axis, the levels
$\left| \pm 1\right\rangle $ are split by $2\pi\times200$ MHz, while the $\left| 0\right\rangle $ state is not affected. For simplicity, we denote the following states:
$\left|0\right\rangle=\left| g\right\rangle$ and
$\left|-1\right\rangle=\left|f\right\rangle $ are the ground states, while $\left|1\right\rangle=\left|e\right\rangle$ is the excited state. As shown in Fig. 1(b), the transition $\left| f\right\rangle
\leftrightarrow \left| e\right\rangle $ is driven by two classical fields with Rabi frequencies $\Omega_k$ and $\Omega_k^{'}$ and corresponding detunings $\Delta_d$ and $\Delta_{d}^{'}$. The transition $\left| g\right\rangle \leftrightarrow \left| e\right\rangle $ is coupled to the cavity mode with coupling strength $g$, and the corresponding detuning is $\Delta_c$. In the interaction picture, the Hamiltonian describing the NV-center-cavity interaction is ($\hbar=1$) \begin{eqnarray}\label{1}
H&=&H_{NVc}+H_{fc},\cr H_{NVc}&=&\sum_{k=0}^{2}[(\Omega_{k} e^{i\Delta_d t}|e\rangle_{k}\langle f|+\Omega_{k}^{'} e^{-i\Delta_{d}^{'}
t}|e\rangle_{k}\langle f|\cr&&+ga_{k}e^{i\Delta_c t}|e\rangle_{k}\langle g|)+H.c.], \end{eqnarray}
where $a$ is the annihilation operator for the cavity mode. Under the large-detuning condition, i.e., $|\Delta_c|,|\Delta_d|,|\Delta_{d}^{'}|\gg g,\Omega_{k},\Omega_{k}^{'}$, the upper level $|e\rangle$ can be adiabatically eliminated. We set the parameters $\Omega_{k}=\Omega_{k}^{'}$ and $\Delta_{d}=-\Delta_{d}^{'}$ to eliminate the Stark shifts induced by the classical lasers. The cavities are initially in the vacuum state, so the Stark shift induced by the cavity mode is discarded. Furthermore, choosing the detunings appropriately, $\Delta_c=\Delta_d=\Delta$, the dominant Raman transition is the one induced by the classical field and the cavity mode, while the other Raman transitions are far off-resonant and can be neglected. The Hamiltonian describing the NV-center-cavity interaction is then rewritten as \begin{eqnarray}\label{2}
H_{NVc}^{'}&=&\sum_{k=0}^{2}(\lambda_{k}a_{k}|f\rangle_{k}\langle g|+H.c.), \end{eqnarray} where $\lambda_{k}=g\Omega_{k}/\Delta$ is the effective coupling strength of the Raman transition induced by the classical field and the cavity mode.
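As a quick numerical illustration (our own sketch, not part of the paper; the values below are assumptions in units of $g$), the effective Raman coupling $\lambda_{k}=g\Omega_{k}/\Delta$ and the large-detuning hierarchy required for the adiabatic elimination can be checked as follows:

```python
# Illustrative check of the effective Raman coupling lambda_k = g*Omega_k/Delta
# of Eq. (2). Parameter values are assumptions, expressed in units of g.
g = 1.0          # NV-cavity coupling strength
Omega = 1.0      # classical-field Rabi frequency
Delta = 10.0     # common detuning Delta_c = Delta_d

lam = g * Omega / Delta   # effective coupling of the Raman transition

# Large-detuning condition |Delta| >> g, Omega needed to eliminate |e>.
assert Delta >= 10 * max(g, Omega)
print(lam)  # 0.1
```

With $\Delta=10g$, the effective dynamics is slowed by a factor of ten relative to $g$, which is the price paid for keeping the excited state unpopulated.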
In this model, the optical cavities are connected by identical single-mode fibers. We assume that all the fibers connected to the coupler have the same transverse mode. In the short-fiber limit, $l\bar{\nu}/(2 \pi c)\leq1$ \cite{Serafini2006}, where $l$ is the length of the fiber and $\bar{\nu}$ is the decay rate of the cavity fields into the fiber modes, only one resonant mode $b$ of the fiber interacts with the cavity modes. In this case, the coupling between the fiber mode and the cavity fields is modeled by the interaction Hamiltonian \begin{eqnarray}\label{3} H_{fc}=\nu\sum_{k=0}^{2}( b^{+}a_{k}+H.c.), \end{eqnarray} where $\nu$ is the coupling strength of the cavity modes to the fiber mode, and $b^{+}$ is the creation operator for the fiber mode. In the interaction picture, the total Hamiltonian describing the NV-center, cavity and fiber interactions is given by \begin{eqnarray}\label{4} H_{total}=H_{NVc}^{'}+H_{fc}. \end{eqnarray}
\section{W state preparation and phase covariant cloning}
Now, we show how to generate the W state of NV centers using the physical model above. For the initial state of the system $|fgg000\rangle|0\rangle_{f}$, the single-excitation subspace is spanned by the following state vectors: $\{|\phi_0\rangle,|\phi_1\rangle,|\phi_2\rangle,|\phi_3\rangle, |\phi_4\rangle,|\phi_5\rangle,|\phi_6\rangle \}$, with \begin{eqnarray}\label{5}
|\phi_0\rangle=|fgg000\rangle|0\rangle_{f},\cr
|\phi_1\rangle=|ggg100\rangle|0\rangle_{f},\cr
|\phi_2\rangle=|ggg000\rangle|1\rangle_{f},\cr
|\phi_3\rangle=|ggg010\rangle|0\rangle_{f},\cr
|\phi_4\rangle=|ggg001\rangle|0\rangle_{f},\cr
|\phi_5\rangle=|gfg000\rangle|0\rangle_{f},\cr
|\phi_6\rangle=|ggf000\rangle|0\rangle_{f}. \end{eqnarray} The Hamiltonian $H_{total}$ has the following dark state with null eigenvalue: \begin{eqnarray}\label{6}
|\psi\rangle=\frac{1}{\sqrt{K}}(\frac{1}{\lambda_0}|\phi_0\rangle
+\frac{1}{\lambda_1}|\phi_5\rangle+\frac{1}{\lambda_2}|\phi_6\rangle-
\frac{1}{\nu}|\phi_2\rangle), \end{eqnarray} where $K=(1/{\lambda_{0}^{2}}+1/{\lambda_{1}^{2}} +1/{\lambda_{2}^{2}}+1/{\nu^{2}})$ is the normalization factor.
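As an independent numerical check (our own sketch, not part of the paper), one can verify that $|\psi\rangle$ is annihilated by $H_{total}$ in the single-excitation basis of Eq. (5). The coupling pattern encoded below is our reading of Eqs. (2)--(4), and the parameter values are arbitrary test values:

```python
# Verify that |psi> of Eq. (6) is a null eigenvector of H_total restricted to
# the single-excitation basis {|phi_0>, ..., |phi_6>}.
import numpy as np

lam0, lam1, lam2, nu = 0.3, 0.5, 0.7, 4.0   # arbitrary test values

H = np.zeros((7, 7))
H[0, 1] = H[1, 0] = lam0   # |phi_0> <-> |phi_1>: NV_0 emits into cavity 0
H[1, 2] = H[2, 1] = nu     # cavity-0 photon <-> fiber photon
H[2, 3] = H[3, 2] = nu     # fiber photon <-> cavity-1 photon
H[2, 4] = H[4, 2] = nu     # fiber photon <-> cavity-2 photon
H[3, 5] = H[5, 3] = lam1   # |phi_3> <-> |phi_5>: NV_1 absorbs from cavity 1
H[4, 6] = H[6, 4] = lam2   # |phi_4> <-> |phi_6>: NV_2 absorbs from cavity 2

psi = np.zeros(7)
psi[[0, 5, 6, 2]] = [1 / lam0, 1 / lam1, 1 / lam2, -1 / nu]
psi /= np.linalg.norm(psi)

print(np.allclose(H @ psi, 0))  # True
```

The two contributions to each bright component cancel exactly (e.g. $\lambda_0\cdot\tfrac{1}{\lambda_0}-\nu\cdot\tfrac{1}{\nu}=0$ on $|\phi_1\rangle$), which is why the photonic components stay empty during the adiabatic evolution.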
According to the adiabatic theorem, if the whole system is initially prepared in the dark state, it will evolve within the dark-state subspace under the adiabatic condition. We set the Rabi frequencies of the classical fields $\Omega_1=\Omega_2=\Omega$ so that the effective coupling strengths satisfy $\lambda_1=\lambda_2$. At the beginning, the Rabi frequencies satisfy $\Omega\gg \Omega_{0}$. Then we slowly decrease $\Omega(t)$ and simultaneously increase $\Omega_0(t)$ to obtain $\Omega(t)/\Omega_{0}(t)=1$ at time $T$. Meanwhile, assuming $\lambda_0=\lambda_1 \ll \nu$, we can discard the last term in Eq. (6). From the discussion above, the pulse shapes must satisfy \begin{eqnarray}\label{7} \lim\limits_{t \rightarrow -\infty}\frac{\Omega_{0}(t)}{\Omega(t)}=0, \end{eqnarray} and \begin{eqnarray}\label{8} \lim\limits_{t \rightarrow \infty}\frac{\Omega(t)}{\Omega_{0}(t)}=1. \end{eqnarray} As a result, we obtain the target state \begin{eqnarray}\label{9}
|\psi\rangle_{W}=\frac{1}{\sqrt{3}}(|fgg\rangle+|gfg\rangle+|ggf\rangle)|000\rangle_{c}|0\rangle_{f}. \end{eqnarray}
Eq. (9) shows that the fiber and cavity fields are in vacuum states, while the three NV centers are in a three-particle W state.
\begin{figure}
\caption{(Color online) Schematic of $N$ ($N>3$) NV centers separately trapped in each of the distant cavities, which are connected by optical fibers. }
\end{figure}
Then, we show that the idea can be generalized to the generation of a multi-NV-center W state. $N$ ($N>3$) NV centers are separately trapped in each of the distant cavities connected by fibers, as shown in Fig. 2. We assume that the system is initially in the state $|f_0g_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}$, where
$|0\rangle_{c}|0\rangle_{f}=\Pi_{k=0}^{N}|0_k\rangle_{c}|0_k\rangle_{f}$ denotes that all the cavity fields and fiber fields are in the vacuum state. Similar to Eq. (6), we can get the dark state \begin{eqnarray}\label{10}
|\psi\rangle&=&\frac{1}{\sqrt{K}}(\frac{1}{\lambda_0}|f_0g_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}+\cr&&\frac{1}{\lambda_1}|g_0f_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}+\cdots\cr&&+\frac{1}{\lambda_N}|g_0g_1\cdots g_{N-1}f_N\rangle|0\rangle_{c}|0\rangle_{f}\cr&&
-\frac{1}{\nu}|g_0g_1g_2\cdots g_N\rangle|0\rangle_{c}|1\rangle_{f}), \end{eqnarray} where $K=(1/{\lambda_{0}^{2}}+1/{\lambda_{1}^{2}} +\cdots+1/{\lambda_{N}^{2}}+1/{\nu^{2}})$ is the normalization factor. Choosing $\Omega_1=\Omega_2=\cdots=\Omega_N=\Omega$, we get $\lambda_1=\lambda_2=\cdots=\lambda_N=\lambda$. At the beginning, we assume $\Omega\gg \Omega_{0}$. Then we slowly decrease $\Omega(t)$ and simultaneously increase $\Omega_0(t)$ to obtain $\Omega(t)/ \Omega_{0}(t)=1$ at time $T$. Meanwhile, assuming $\lambda_0=\lambda \ll \nu$, we can discard the last term in Eq. (10). The final state is \begin{eqnarray}\label{11}
|\psi\rangle_{W}&=&\frac{1}{\sqrt{N}}(|f_0g_1g_2\cdots g_N\rangle+|g_0f_1g_2\cdots g_N\rangle+\cdots\cr&&+|g_0g_1\cdots g_{N-1}f_N\rangle). \end{eqnarray} So $N$ NV centers are prepared in an entangled state, with the cavity modes and fiber mode left in the vacuum state.
The quantum cloning scheme can be implemented based on the W state prepared above. Again, we assume that all the cavities and the optical fiber channel are initially in the vacuum state, and only the NV center in the first cavity is prepared in an arbitrary equatorial state of the Bloch sphere. The initial state of the system is written as \begin{eqnarray}\label{12}
|\psi_{(0)}\rangle&=&\frac{1}{\sqrt{2}}(|g_0\rangle+e^{i\delta}|f_0\rangle)|g_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}. \end{eqnarray}
Under the above-mentioned conditions, the dark state
$|g_0g_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}$ undergoes no change. On the other hand, $|f_0g_1g_2\cdots g_N\rangle|0\rangle_{c}|0\rangle_{f}$ evolves into the state in Eq. (10). So the system evolves to the state \begin{eqnarray}\label{13}
|\psi_{(t)}\rangle&=&\frac{1}{\sqrt{2}}[|g_0g_1g_2\cdots g_N\rangle+\frac{1}{\sqrt{N}}e^{i\delta}(|f_0g_1g_2\cdots g_N\rangle\cr&&+|g_0f_1g_2\cdots g_N\rangle+\cdots\cr&&+|g_0g_1\cdots g_{N-1}f_N\rangle)]|0\rangle_{c}|0\rangle_{f}. \end{eqnarray} All the cavity and fiber fields remain in the vacuum state throughout, so this scheme is robust against cavity and fiber decay.
\section{Discussion}
It should be noted that preparing the W state of the NV centers is the key step to achieving the phase-covariant quantum cloning. For this reason, it is necessary to consider the feasibility of generating the W state of the NV centers. In the above derivations, we select the experimental parameters to satisfy the adiabatic passage. To make the scheme experimentally feasible, we discuss the Rabi frequencies and other relevant parameters. The time-dependent pulses we introduce are as follows: \begin{eqnarray}\label{14} \Omega(t)&=&\Omega_{m}e^{-(t-t_{1})^2/t_p^{2}}+\Omega_{m}e^{-(t-t_{0})^2/t_p^{2}},\cr \Omega_{0}(t)&=&\Omega_{m}e^{-(t-t_{0})^2/t_p^{2}}, \end{eqnarray} where $\Omega_{m}$, $t_{k}$ $(k=0,1)$ and $t_{p}$ are the peak amplitude, the time delays and the width of the laser pulses, respectively. We characterize the prepared state by its fidelity, given by \begin{eqnarray}\label{15}
F=\left|\left\langle \Psi _{(t)}| \Psi _{ideal}\right\rangle\right|, \end{eqnarray}
where $\left| \Psi _{ideal}\right\rangle$ is the ideal three-NV-center W state in Eq. (9), and $\left| \Psi _{(t)}\right\rangle$ is the final state obtained from the Schr\"{o}dinger equation $i (d\left| \Psi _{(t)}\right\rangle/dt) =H\left| \Psi _{(t)}\right\rangle $, where $H$ is the full Hamiltonian given by Eq. (1).
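The boundary conditions (7) and (8) can be checked directly for the Gaussian pulses of Eq. (14). The sketch below (ours, not the authors' code) uses the parameter values quoted in the figures, $gt_0=150$, $gt_1=90$, $gt_p=50$, in units where $g=1$:

```python
# Evaluate the Gaussian pulse shapes of Eq. (14) and confirm the adiabatic
# boundary conditions (7) and (8) with the parameters used in the figures.
import math

Om_m, t0, t1, tp = 1.0, 150.0, 90.0, 50.0

def Omega(t):
    # Two overlapping Gaussians, centered at t1 and t0
    return (Om_m * math.exp(-(t - t1) ** 2 / tp ** 2)
            + Om_m * math.exp(-(t - t0) ** 2 / tp ** 2))

def Omega0(t):
    # Single Gaussian centered at t0
    return Om_m * math.exp(-(t - t0) ** 2 / tp ** 2)

# Eq. (7): Omega_0/Omega -> 0 well before the pulses (t1 < t0)
assert Omega0(-200.0) / Omega(-200.0) < 1e-3
# Eq. (8): Omega/Omega_0 -> 1 well after the pulses
assert abs(Omega(500.0) / Omega0(500.0) - 1.0) < 1e-3
```

Because the $t_1$-centered Gaussian dominates at early times and the two pulses share the $t_0$-centered Gaussian at late times, the ratio sweeps smoothly from $0$ to $1$, exactly as the fractional-STIRAP conditions require.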
\begin{figure}\end{figure}
In Fig. 3, we plot the fidelity of the three-NV-center W state versus the scaled parameters of the laser pulses, i.e., the scaled peak $\Omega_{m}/g$, the scaled time delays $gt_{k}$ $(k=0, 1)$ and the scaled width $gt_{p}$. As the subfigures show, when the parameters are set within a sizable range, the fidelity approaches unity. Fig. 3(a) shows that a small $\Omega_{m}$ reduces the fidelity seriously: increasing the Rabi frequency $\Omega_{m}$ raises the effective coupling strength of the Raman transition induced by the classical field and the cavity mode, so that the effective Hamiltonian in Eq. (2) takes effect; the excited state can then be eliminated and decoherence is reduced. We see from Fig. 3(b) that the fidelity decreases rapidly when $t_{0}-t_{1}$ is too large or too small: when $t_{1}$ is too close to $t_{0}$ or deviates too far from it, the condition $\Omega(t)/\Omega_{0}(t)=1$ cannot be satisfied at the end of the evolution. In Fig. 3(c), we find that when $40<gt_{p}<60$ the fidelity is larger than 0.98; large deviations from this range prevent Eq. (7) and Eq. (8) from being satisfied. Hence, a high-fidelity W state can be obtained as long as the experimental parameters are set in the corresponding range: $\Omega_{m}=g$, $gt_{0}=150$, $gt_{1}=90$ and $gt_{p}=50$.
\begin{figure}
\caption{(Color online) Fidelity of three-NV centers W state versus the scaled cavity-fiber coupling strength $\nu/g$, with $\Delta=10g$, $\Omega_{m}=g$, $gt_{p}=50$, $gt_{0}=150$, $gt_{1}=90$, $T=200/g$. }
\end{figure}
\begin{figure}
\caption{(Color online) Fidelity of three-NV centers W state versus the scaled detuning $\Delta/g$, with $\nu=10g$, $\Omega_{m}=g$, $gt_{p}=50$, $gt_{0}=150$, $gt_{1}=90$, $T=200/g$. }
\end{figure}
\begin{figure}
\caption{(Color online)
Fidelity of three-NV centers W state versus the scaled interaction time $gt$, with $\Delta=10g$, $\nu=10g$, $\Omega_{m}=g$, $gt_{p}=50$, $gt_{0}=150$, $gt_{1}=90$.}
\end{figure}
To satisfy the adiabatic-passage condition $\nu\gg\Omega g /\Delta $, we should choose the coupling strength $\nu$ and the detuning $\Delta$ to be large enough. Figs. 4 and 5 show that for $\nu/g>1$ and $\Delta/g>1$, the fidelity of the W state can reach 0.99. Considering experimental feasibility, we choose $\nu=10g$ and $\Delta=10g$. Moreover, Fig. 6 shows that within the pulse duration the fidelity increases with time and approaches unity for $gt>150$. Because the target state approaches a steady state, the interaction time, like the other parameters, need not be controlled strictly. We choose $T=200/g$ as the evolution time, with $\nu=10g$, $\Delta=10g$, $\Omega_{m}=g$, $gt_{0}=150$, $gt_{1}=90$ and $gt_{p}=50$. The numerical result shows good agreement with the expected result.
\begin{figure}
\caption{(Color online) Fidelity of phase-covariant cloning using three-NV centers W state versus the scaled cavity(or fiber) decay rate $\kappa/g$ and atomic spontaneous emission rate $\gamma/g$, with $\Delta=10g$, $\nu=10g$, $\Omega_{m}=g$, $gt_{p}=50$, $gt_{0}=150$, $gt_{1}=90$, $T=200/g$. }
\end{figure}
Considering a realistic experiment, we must account for the effects of spontaneous emission of the NV centers, fiber losses and cavity losses. The master equation of the whole system can be expressed as
\begin{eqnarray}\label{16} \dot{\rho}&=&-i\left[H,\rho \right] +\frac{\kappa_c}2 (2a\rho a^{+}-a^{+}a\rho -\rho {a}^{+}a)\nonumber\\ &&+\frac{\kappa_f}2(2b\rho b^{+}-b^{+}b\rho -\rho {b}^{+}b)\cr&&+\sum_{j=g,f}\sum_{k=0}^{2}\frac{\gamma_k}2(2{\sigma }_{je}\rho {\sigma }_{ej}-{\sigma }_{ej}{\sigma }_{je}\rho -\rho {\sigma }_{ej}{\sigma }_{je}), \end{eqnarray} where $\kappa_c$ and $\kappa_f$ denote the effective decay rates of the cavity and the optical fiber. For simplicity, we assume $\gamma _{k}=\gamma$ $(k=0,1,2)$, where
$\gamma$ represents the rate of spontaneous decay from level $\left| e\right\rangle $ to levels $\left| g\right\rangle $
and $\left| f\right\rangle $ of the NV center. By solving the master equation numerically, we obtain in Fig. 7 the fidelity of quantum phase-covariant cloning using the three-NV-center W state versus the scaled ratios $\gamma/g$ and $\kappa/g$, with $\Delta=10g$, $\nu=10g$, $\Omega_{m}=g$, $gt_{p}=50$, $gt_{0}=150$, $gt_{1}=90$, $T=200/g$. We see that the fidelity is always larger than 0.955 even when $\gamma/g$ and $\kappa/g$ are as large as 0.01.
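Each damping term in Eq. (16) has the standard Lindblad form $\frac{\kappa}{2}(2a\rho a^{+}-a^{+}a\rho-\rho a^{+}a)$. As a much-reduced illustration (our own sketch, not the authors' full three-NV simulation), the snippet below integrates a single such term for one cavity mode truncated to $\{|0\rangle,|1\rangle\}$ and recovers the expected exponential photon decay:

```python
# Euler integration of a single Lindblad damping term of Eq. (16) for one
# cavity mode, truncated to the {|0>, |1>} Fock subspace.
import numpy as np

kappa = 0.2
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator (truncated)
n = a.conj().T @ a                        # photon-number operator

def drho(rho):
    ad = a.conj().T
    return 0.5 * kappa * (2 * a @ rho @ ad - ad @ a @ rho - rho @ ad @ a)

rho = np.diag([0.0, 1.0])  # start with one photon in the cavity
dt, T = 0.001, 5.0
for _ in range(int(T / dt)):   # simple Euler stepping
    rho = rho + dt * drho(rho)

print(round(np.trace(n @ rho).real, 3))  # 0.368, i.e. ~ exp(-kappa*T) = e^{-1}
```

A production calculation would instead use a dedicated master-equation solver (e.g. QuTiP's `mesolve`) on the full NV-cavity-fiber Hilbert space, but the structure of each dissipator is exactly the one shown here.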
We now examine the performance of our protocol with realistic quantum optical devices, such as a microsphere cavity-fiber coupling system \cite{Park2006}. The energy levels of different NV centers can be tuned with an external magnetic field to obtain identical NV centers \cite{Togan2010}. When the NV centers are placed near the equator of a microsphere cavity, where they interact with the cavity via the evanescent fields, coupling constants ranging from hundreds of MHz to several GHz can be achieved \cite{Barclay2009}. In that work, a $Q$ factor of the microsphere cavity exceeding $2\times10^{6}$ was obtained, corresponding to a photon leakage rate $\kappa=\omega/Q\sim 2\pi \times 120$ MHz \cite{Maze2008}. The spontaneous decay rate of the NV center has been reported to be $\gamma\sim 2\pi\times 15$ MHz \cite{Santori2006}. For a fiber-nanocavity system with the cavity QED parameters $[g, \gamma, \kappa, \Omega, \Delta,\nu ]/2\pi= [1, 0.01, 0.01, 1, 10, 10]$ GHz, the corresponding fidelity of the W state can reach $95.5\%$. The operation time of the phase-covariant cloning scheme is about $50$ ns with the parameters above. Moreover, the decoherence time of individual NV centers is longer than $600\,\mu$s at room temperature \cite{Mizuochi2009}, so the phase-covariant cloning process can be completed well within the NV-center decoherence time. In principle, the performance of this scheme can be improved even further by using shortcuts to adiabatic passage \cite{Chen2010}, which have been used in entangled-state preparation and transfer \cite{Ye-HongLPL2014}.
\section{conclusion}
In summary, based on adiabatic passage, we have proposed a scheme for generating an $N$-NV-center entangled state in a hybrid system consisting of NV centers, optical fibers, and microcavities. Moreover, a scheme of phase-covariant cloning has been realized. By numerical calculation, we have demonstrated that the present scheme is insensitive to decoherence from the excited levels and the cavity (fiber) photons.
\section{Acknowledgments} This work is supported by the National Fundamental Research Program of the People's Republic of China under Grant No. 11405031, the Research Program of the Fujian Education Department under Grant No. JA14044, and the Research Program of Fuzhou University.
\end{document}
Article | Open | Published: 09 May 2019
Mortality causes universal changes in microbial community composition
Clare I. Abreu1,
Jonathan Friedman (ORCID: 0000-0001-8476-8030)2,
Vilhelm L. Andersen Woltz1 &
Jeff Gore (ORCID: 0000-0003-4583-8555)1
Nature Communications, volume 10, Article number: 2120 (2019)
Subjects: Bacterial systems biology; Microbial ecology
All organisms are sensitive to the abiotic environment, and a deteriorating environment can cause extinction. However, survival in a multispecies community depends upon interactions, and some species may even be favored by a harsh environment that impairs others, leading to potentially surprising community transitions as environments deteriorate. Here we combine theory and laboratory microcosms to predict how simple microbial communities will change under added mortality, controlled by varying dilution. We find that in a two-species coculture, increasing mortality favors the faster grower, confirming a theoretical prediction. Furthermore, if the slower grower dominates under low mortality, the outcome can reverse as mortality increases. We find that this tradeoff between growth and competitive ability is prevalent at low dilution, causing outcomes to shift dramatically as dilution increases, and that these two-species shifts propagate to simple multispecies communities. Our results argue that a bottom-up approach can provide insight into how communities change under stress.
Ecological communities are defined by their structure, which includes species composition, diversity, and interactions1. All such properties are sensitive to the abiotic environment, which influences both the growth of individual species and the interactions between them. The structure of multispecies communities can thus vary in complex ways across environmental gradients2,3,4,5,6,7. A major challenge is therefore to predict how a changing environment affects competition outcomes and alters community structure. In particular, environmental deterioration can radically change community structure. Instances of such deterioration include antibiotic use on gut microbiota8, ocean warming in reef communities9, overfishing in marine ecosystems10, and habitat loss in human-modified landscapes11. Such disturbances can affect community structure in several ways, such as allowing for the spread of invasive species12, causing biodiversity loss and mass extinction13,14, or altering the interactions between the remaining community members15,16. For example, a stable ecosystem can be greatly disrupted by the removal of a single keystone species, potentially affecting species with which it does not directly interact17,18,19.
A common form of environmental deterioration is increased mortality, which can be implemented in the laboratory in a simple way. In fact, the standard method of cultivating and coculturing bacteria involves periodic dilution into fresh media, a process that necessarily discards cells from the population. The magnitude of the dilution factor determines the fraction of cells discarded and therefore the added mortality rate, making environmental harshness easy to tune experimentally.
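One convenient way to quantify this correspondence (our own back-of-envelope conversion, not a statement from the article): a dilution by factor $D$ once per cycle of length $T$ removes cells at an average exponential rate $\delta = \ln(D)/T$, since a fraction $1 - 1/D$ of the population is discarded each cycle.

```python
# Convert a per-cycle dilution factor D into an average added mortality rate
# delta = ln(D)/T. The 24-h cycle length matches the experiments; the
# conversion itself is our illustrative assumption.
import math

T = 24.0  # hours per growth/dilution cycle
deltas = {D: math.log(D) / T for D in (10, 10**3, 10**6)}
print(deltas)  # roughly {10: 0.096, 1000: 0.288, 1000000: 0.576} per hour
```

On this reading, sweeping the dilution factor from $10$ to $10^6$ spans roughly a sixfold range of effective mortality rates.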
The choice of dilution factor often receives little attention, yet theoretical models predict that an increased mortality rate experienced equally by all species in the community can have dramatic effects on community composition. In particular, it is predicted that such a global mortality rate will favor the faster-growing species in pairwise coculture, potentially reversing outcomes from dominance of the slow grower to dominance of the fast grower1,20,21 as mortality increases. Indeed, there is some experimental support for such reversals in chemostat experiments with microbial species with different growth rates22,23,24. A less-explored prediction is that if a high mortality rate causes a competitive reversal, the coculture will also result in either coexistence or bistability (where the winner depends on the starting fraction) at some range of intermediate mortality25,26,27. Missing from the literature is a systematic study that probes both of these predictions with an array of pairwise coculture experiments across a range of dilution rates. In addition, little is known about how mortality will alter the composition of multispecies communities.
In this paper, we report experimental results that expand upon the prior literature regarding the effect of dilution on pairwise outcomes, and we use the pairwise outcomes to develop a predictive understanding of how multispecies community composition changes with increased dilution. First, pairwise coculture experiments with five bacterial species confirmed that (1) increased mortality favors the fast grower and can reverse the winner, or the only remaining species at the end of the experiment, from slow grower to fast grower, and (2) at intermediate dilution rates, either coexistence or bistability occurs, where from many starting fractions, the two species' abundances either converge to a stable fraction or diverge to either species winning, respectively. We measure species' growth rates by growing cells from low density in monoculture; fast growers reach a threshold density more quickly than slow growers. We define the competitive ability of a species as its average fraction after being cocultured in pairs with each of the other species for multiple dilution cycles. Interestingly, we find that a pervasive tradeoff between growth rate and competitive ability in our system favors slow growers in high-density, low-dilution environments, leading to striking changes in outcomes as mortality increases. Second, to bridge the pairwise results to three- and four-species communities, we employ simple predictive pairwise assembly rules28, where we find that the pairwise outcomes such as coexistence and bistability propagate up to the multispecies communities. Our results highlight that the seemingly complicated states a community adopts across a mortality gradient can be traced back to a predictable pattern in the outcomes of its constituent pairs.
Three-species community exhibits wide range of stable states
To probe how a changing environment affects community composition, we employed an experimentally tractable system of soil bacteria coculture experiments subject to daily growth/dilution cycles across six dilution factors (Fig. 1a). We selected five species of soil bacteria: Enterobacter aerogenes (Ea), Pseudomonas aurantiaca (Pa), Pseudomonas citronellolis (Pci), Pseudomonas putida (Pp), and Pseudomonas veronii (Pv) (Supplementary Fig. 10). These species have been used in previous experiments by our group, which did not vary dilution factor28,29. All five species grow well in our defined media containing glucose as the primary carbon source (see "Methods") and have distinct colony morphology that allows for measuring species abundance by plating and colony counting on agar.
Increasing dilution causes striking shifts in a three-species community. a To probe how added mortality changes community composition, we cocultured three soil bacteria over a range of dilution factors. Cells were inoculated and allowed to grow for 24 h before being diluted into fresh media. This process was continued for 7 days, until a stable equilibrium was reached. The magnitude of the dilution factor (10–106) determines the fraction of cells discarded, and thus the amount of added mortality. b We began with a three-species community (Enterobacter aerogenes (Ea), Pseudomonas citronellolis (Pci), and Pseudomonas veronii (Pv)), initialized from four starting fractions at each dilution factor. The outcomes of two of the starting fractions are shown (see Supplementary Fig. 8b for remaining starting fractions), along with a subway map, where survival of species is represented with colors assigned to each species. Black dots indicate where data were collected, while colors indicate the range over which a given species is inferred to survive. Species Pv dominates at the lowest dilution factor, and Ea dominates at the highest dilution factors. The grouping of two colors represents coexistence of two species, whereas the two levels at dilution factor 103 indicate bistability, where both coexisting states, Ea–Pv and Ea–Pci, are stable and the starting fraction determines which stable state the community reaches. Error bars are the SD of the beta distribution with Bayes' prior probability (see "Methods"). Source data are provided as a Source Data file
We began by competing three of the five species, Ea, Pci, and Pv, for seven 24-h cycles under six different dilution factor regimes. To assay for alternative stable states, each dilution factor condition was initialized by four different starting fractions (equal abundance as well as prevalence of one species in a 90–5–5% split). Despite the simplicity of the community and the experimental perturbation, we observed five qualitatively different outcomes corresponding to different combinations of the species surviving at equilibrium (Fig. 1b). At the highest and lowest dilution factors, one species excludes the others at all starting fractions (Pv at low dilution, Ea at high dilution). Two coexisting states (Ea–Pv and Ea–Pci) occur at medium low (102) and medium high (104) dilution factors, again independent of the starting fractions of the species. However, at intermediate dilution factor (103), we found that the surviving species depended upon the initial abundances of the species. At this experimental condition, the system displays bistability between the two different coexisting states (Ea–Pv and Ea–Pci) that were present at neighboring dilution factors. These three species therefore display a surprisingly wide range of community compositions as the mortality rate is varied.
Two-species model predicts that mortality favors faster grower
To make sense of these transitions in community composition, we decided to first focus on two-species competitions, not only because they should be simpler but also because prior work from our group gives reason to believe that pairwise outcomes are sufficient for predicting multispecies states28. Accordingly, we used a simple two-species Lotka–Volterra (LV) competition model with an added mortality term δNi experienced equally by both species21:
$$\dot N_i = r_iN_i\left( {1 - N_i - \alpha _{ij}N_j} \right) - \delta N_i$$
where Ni is the density of species i (normalized to its carrying capacity), ri is the maximum growth rate of species i, and the competition coefficient αij is a dimensionless constant reflecting how strongly species i is inhibited by species j (Fig. 2). This model can be re-parameterized into the LV model with no added mortality, where the new competition coefficients \(\tilde \alpha _{ij}\) now depend upon ri and δ (Supplementary Note 1, Supplementary Fig. 11):
$$\dot {\tilde N}_i = \tilde r_i\tilde N_i\left( {1 - \tilde N_i - \tilde \alpha _{ij}\tilde N_j} \right)$$
$$\tilde \alpha _{ij} = \alpha _{ij}\frac{{\left( {1 - \frac{\delta }{{r_j}}} \right)}}{{\left( {1 - \frac{\delta }{{r_i}}} \right)}}$$
An increasing global mortality rate is predicted to favor the fast grower. a, b Here we illustrate the parameters of the Lotka–Volterra (LV) interspecific competition model with added mortality: population density N, growth r, death δ, and the strengths of inhibition αsf and αfs (subscript f for fast grower and s for slow grower). Here we assume a continuous death rate, but in the model, the outcome is the same for a discrete process, such as our daily dilution factor (Supplementary Note 2). The width of arrows in a corresponds to an interesting case that we observe experimentally, in which the fast grower is a relatively weak competitor. c The outcomes of the LV model without mortality depend solely upon the competition coefficients α, and the phase space is divided into one quadrant per outcome. If the slow grower is a strong competitor, it can exclude the fast grower. Imposing a uniform mortality rate δ on the system, however, favors the faster grower by making the re-parameterized competition coefficients \(\tilde \alpha\) depend on r and δ. Given that a slow grower dominates at low or no added death, the model predicts that coexistence or bistability will occur at intermediate added death rates before the outcome transitions to dominance of the fast grower at high added death (Supplementary Note 1). Two numerical examples show that the values of α (in the absence of added mortality) determine whether the trajectory crosses the bistability or coexistence region as mortality increases
The outcome of competition—dominance, coexistence, or bistability—simply depends upon whether each of the \(\tilde \alpha\) are greater or less than one, as in the basic LV competition model21. Stable coexistence occurs when both \(\tilde \alpha\) coefficients are less than one, bistability when both are greater than one, and dominance/exclusion when only one coefficient is greater than one.
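The classification just described can be written out directly. The sketch below (our own, with illustrative parameter values not fit to the experiments) applies the re-parameterized coefficients to a slow but strong competitor and shows its dominance giving way to coexistence and then to exclusion as mortality rises:

```python
# Classify the LV outcome from the re-parameterized competition coefficients
# alpha~_ij = alpha_ij * (1 - delta/r_j) / (1 - delta/r_i).
def outcome(a_fs, a_sf, r_f, r_s, delta):
    af = a_fs * (1 - delta / r_s) / (1 - delta / r_f)   # inhibition of fast by slow
    as_ = a_sf * (1 - delta / r_f) / (1 - delta / r_s)  # inhibition of slow by fast
    if af < 1 and as_ < 1:
        return "coexistence"
    if af > 1 and as_ > 1:
        return "bistability"
    return "fast wins" if af < 1 else "slow wins"

# Slow but strong competitor (a_fs > 1, a_sf < 1); note the product
# af * as_ = a_fs * a_sf is independent of delta, so a pair whose product is
# below 1 crosses the coexistence region (above 1 -> bistability) on its way
# to fast-grower dominance.
print([outcome(1.5, 0.5, 1.0, 0.5, d) for d in (0.0, 0.3, 0.4)])
# ['slow wins', 'coexistence', 'fast wins']
```

The invariance of the product $\tilde\alpha_{\mathrm{fs}}\tilde\alpha_{\mathrm{sf}}$ under $\delta$ is what pins each pair to either the coexistence or the bistability corridor of the phase space.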
In this model, it is possible for a slow grower (Ns) to outcompete a fast grower (Nf) if the slow grower is a strong competitor (αfs > 1) and the fast grower is a weak competitor (αsf < 1) (Fig. 2). However, the competition coefficients change with increasing mortality δ in a way that favors the fast grower: \(\tilde \alpha _{{\mathrm{fs}}}\) shrinks and \(\tilde \alpha _{{\mathrm{sf}}}\) grows, eventually leading the fast grower to outcompete the slow grower. A powerful way to visualize this change is to plot the outcomes as determined by the competition coefficients (Fig. 2c); increasing mortality causes the outcome to traverse a 45° trajectory through the phase space, leading to the fast grower winning at high mortality. At intermediate mortality, the model predicts that the two species will either coexist or be bistable. This model therefore makes very clear predictions regarding how pairwise competition will change under increased mortality, given the aforementioned slow grower advantage at low mortality.
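The same reversal can be seen by integrating Eq. (1) directly. This is a minimal Euler-scheme sketch with illustrative parameters (not the authors' simulation code): without mortality the slow, strong competitor excludes the fast grower, while at high mortality the fast grower wins.

```python
# Integrate the two-species LV model with added mortality (Eq. 1) and check
# the predicted reversal of the competition outcome.
def compete(delta, steps=200000, dt=0.002):
    r_f, r_s = 1.0, 0.5      # fast / slow maximum growth rates
    a_fs, a_sf = 1.5, 0.5    # slow inhibits fast strongly, not vice versa
    Nf, Ns = 0.1, 0.1        # start both species at low density
    for _ in range(steps):
        dNf = r_f * Nf * (1 - Nf - a_fs * Ns) - delta * Nf
        dNs = r_s * Ns * (1 - Ns - a_sf * Nf) - delta * Ns
        Nf, Ns = max(Nf + dt * dNf, 0.0), max(Ns + dt * dNs, 0.0)
    return Nf, Ns

Nf, Ns = compete(delta=0.0)
assert Nf < 1e-3 < Ns        # slow grower wins without added mortality
Nf, Ns = compete(delta=0.4)
assert Ns < 1e-3 < Nf        # fast grower wins at high mortality
```

Note that at $\delta=0.4$ the slow grower would persist in monoculture (its mortality-reduced carrying capacity $1-\delta/r_s=0.2$ is positive); it is competition with the fast grower that drives it extinct, mirroring the competitive exclusion of Pv observed at a dilution factor where it survives alone.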
Dilution experiments confirm predictions about mortality
To test these predictions in the laboratory, we performed all pairwise coculture experiments at multiple dilution factors and starting fractions of our five bacterial species: Pp, Ea, Pci, Pa, Pv (listed in order from fastest- to slowest-growing species). We find that these pairwise outcomes change as expected from the LV model, where increased dilution favors the fast grower (Supplementary Fig. 1). For example, in Ea–Pv competition we find that Pv, despite being the slower grower, is able to exclude Ea at low dilution rates (Fig. 3b, left panel). From the standpoint of the LV model, Pv is a strong competitor despite being a slow grower in this environment. However, as predicted by the model, at high dilution rates the slow-growing Pv is excluded by the fast-growing Ea (Fig. 3b, right panel). Importantly, Pv is competitively excluded at a dilution factor of 104, an experimental condition at which it could have survived in the absence of a competitor. Finally, and again consistent with the model, at intermediate dilution rates we find that the Ea–Pv pair crosses a region of coexistence, where the two species reach a stable fraction over time that is not a function of the starting fraction (Fig. 3b, middle panel). The Ea–Pv pair therefore displays the transitions through the LV phase space in the order predicted by our model (Fig. 3a–d).
In pairwise coculture experiments, increasing dilution favors the faster grower. a Experimental results are shown from a coculture experiment with Pv (blue) and Ea (pink). b Left panel: Despite its slow growth rate, Pv excludes faster grower Ea at the lowest dilution factor. Middle panel: Increasing death rate causes the outcomes to traverse the coexistence region of the phase space. Right panel: As predicted, fast-growing Ea dominates at high dilution factor. Error bars are the SD of the beta distribution with Bayes' prior probability (see "Methods"). c An experimental bifurcation diagram shows stable points with a solid line and unstable points with a dashed line. The stable fraction of coexistence shifts in favor of the fast grower as dilution increases. Gray arrows show experimentally measured time trajectories, beginning at the starting fraction and ending at the final fraction. d A "subway map" denotes survival/extinction of a species at a particular dilution factor with presence/absence of the species color. e, f Pv outcompeted another fast grower Pci (yellow) at low dilution factors, but the pair became bistable instead of coexisting as dilution increased; the unstable fraction can be seen to shift in favor of the fast grower (g). h Two levels in the subway map show bistability. Source data are provided as a Source Data file
The LV model predicts that other pairs will cross a region of bistability rather than coexistence, and indeed this is what we observe experimentally with the Pci–Pv pair (Fig. 3e–h). Once again, the slow-growing Pv dominates at low dilution factor yet is excluded at high dilution factor. However, at intermediate dilution factors this pair crosses a region of bistability, in which the final outcome depends upon the starting fractions of the species. The LV model with added mortality therefore provides powerful insight into how real microbial species compete, despite the many complexities of the growth and interaction that are necessarily neglected in a simple phenomenological model.
Indeed, a closer examination of the trajectory through the LV phase space of the Pci–Pv pair reveals a violation of the simple outcomes allowed within the LV model. In particular, at dilution factor 102 we find that when competition is initiated from high initial fractions of Pci that Pv persists at low fraction over time (Fig. 3g). This outcome, a bistability of coexistence and exclusion (rather than of exclusion and exclusion), is not an allowed outcome within the LV model (modifications to the LV model can give rise to it, as shown by ref. 30). This subtlety highlights that the transitions (e.g., bifurcation diagrams in Fig. 3c, g) can be more complex than what occurs in the LV model but that nonetheless the transitions within the LV model represent a baseline to which quantitative experiments can be compared.
Tradeoff between growth rate and competitive ability observed
The model predicts that mortality will reverse coculture outcomes if and only if a slow grower excludes a fast grower at low or no added death, exhibiting a tradeoff between growth and competitive ability. Changes in outcome are therefore most dramatic when a strongly competing slow grower causes the trajectory to begin in the upper left quadrant of the phase space (Fig. 3a, e), allowing it to move through other quadrants as mortality increases. Indeed, in the pairwise experiments described above, the slowest-growing species, Pv, is a strong competitor at low dilution factor. To probe this potential tradeoff more extensively, we compared the growth rates of our five species in monoculture (Supplementary Figs. 3, 4, and 5) to their competitive performance at low dilution factor. In seven of the ten pairs, the slower grower excluded the faster grower, and the other three pairs coexisted (Supplementary Fig. 1). We therefore find that our five species display a pervasive tradeoff between growth rate and competitive ability, possibly because the slower-growing species fare better in high-density environments that reach saturation.
To visualize how competitive success changes with dilution factor, we defined the competitive score of each species to be its mean fraction after reaching equilibrium in all pairs in which it competed. The aforementioned tradeoff can be seen as an inverse relationship between growth rate and competitive score at the lowest dilution factor (Fig. 4a). As predicted, the performance of the fast-growing species increases monotonically with increasing dilution factors (Fig. 4b). Competitive superiority of the slowest grower (Pv) at low dilution rates transitions to the next slowest (Pa) at intermediate rates, before giving rise to dominance of the fastest growers (Pci, Ea, Pp) at maximum rates (Fig. 4b–d). We therefore find that the mortality rate largely determines the importance of a species' growth rate to competitive performance in coculture experiments.
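As a sketch, the competitive score is a plain average over a species' pairwise equilibrium fractions; the fractions below are hypothetical, not the measured outcomes.

```python
# Illustrative sketch with hypothetical equilibrium fractions (not the
# measured data): the competitive score of a species is its mean fraction
# across all pairs in which it competed.

pair_fractions = {
    # fraction of the focal species at equilibrium against each competitor
    "Pv": {"Ea": 1.0, "Pci": 1.0, "Pa": 0.75, "Pp": 1.0},
    "Ea": {"Pv": 0.0, "Pci": 0.5, "Pa": 0.25, "Pp": 0.5},
}

def competitive_score(species, table):
    fracs = list(table[species].values())
    return sum(fracs) / len(fracs)

print(competitive_score("Pv", pair_fractions))  # -> 0.9375
print(competitive_score("Ea", pair_fractions))  # -> 0.3125
```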
Tradeoff between growth and competitive ability leads to dependence of experimental outcome on dilution factor. The LV model predicts that increasing dilution will favor faster-growing species over slower-growing ones. If fast growers dominate at low dilution factors, though, no changes in outcome will be expected. Changes in outcome are therefore most dramatic when slow growers are strong competitors at low dilution, exhibiting a tradeoff between growth rate and competitive ability. a This tradeoff was pervasive in our system: slower growth rates resulted in higher competitive scores at the lowest dilution factor. Growth rate was calculated with OD600 measurements of the time taken for monocultures to reach a threshold density within the exponential phase; error bars represent the SEM of replicates (n = 21, per species) (Supplementary Fig. 3). Competitive score was calculated by averaging fraction of a given species across all pairwise competitive outcomes; error bars were calculated by bootstrapping, where replicates of mean experimental outcomes of a given pair were sampled 5000 times with replacement (n = 34, per species, per dilution factor). b The competitive scores in a are extended to all dilution factors. The slowest grower's score monotonically decreases with dilution, while the fast growers' scores increase, and an intermediate grower peaks at intermediate dilution factor. A similar pattern was seen in data from experiments in a complex growth medium (Supplementary Fig. 7). c At high dilution factors, the order of scores is reversed. d At low dilution factors 10 and 102, competitive ability is negatively correlated with growth rate; the correlation becomes positive above dilution factor 103. Error bars are the standard error coefficients given by the linear regression function lm in R. Source data are provided as a Source Data file
Pairwise outcomes predict multispecies states
Now that we have an understanding of how pairwise outcomes shift in response to increased mortality, we return to the seemingly complicated set of outcomes observed in our original three-species community (Fig. 1). In a previous study28, we developed community assembly rules that allow for prediction of species survival in multispecies communities from the corresponding pairwise outcomes. These rules state that in a multispecies coculture, a species will survive if and only if it coexists with all other surviving species in pairwise coculture. If one or more bistable pairs are involved in a multispecies community, the assembly rules allow for either of the stable states. We see that the seemingly complicated trio outcomes follow from these assembly rules applied to our corresponding pairwise outcomes at all dilution factors (Fig. 5). For example, at the lowest dilution factor (10), Ea–Pci coexist, but each of these species is excluded by Pv in pairwise coculture, thus leading to the (accurate) prediction that only Pv will survive in the three-species coculture experiment. In addition, we observe that the bistability of Pci–Pv at dilution factor 103 propagates up to lead to bistability in the trio but with each stable state corresponding to coexistence of two species. The only trio outcome not successfully predicted by the rules is the extinction of Pci at a dilution factor of 105 (Fig. 5d, Supplementary Fig. 8). Our analysis of pairwise shifts under increased mortality therefore provides a predictive understanding of the complex shifts observed within a simple three-species bacterial community.
Coexistence and bistability propagate from pair to trio, as predicted by assembly rules. a–c Subway maps show pairwise outcome trajectories across changing dilution factor (DF), as explained in Figs. 1 and 3. The fast grower's line is always plotted above the slow grower's line. Of the three pairs that make up the community Ea–Pci–Pv, two are coexisting (a, b) and one is bistable (c). d The pairwise assembly rules state that a species will survive in a community if it survives in all corresponding pairs. At DF 10, Ea and Pci coexist, but both are excluded by Pv. The rules correctly predict that Pv will dominate in the trio. Because both species can be excluded in a bistable pair, a bistable pairwise outcome propagates to the trio as more than one allowed state. Each of the bistable species can be seen separately coexisting with Ea at DF 103, as they do in pairs. The assembly rules failed at DF 105 for three out of four starting conditions: Pci usually goes extinct when it should coexist with Ea. e Three-species competition results are shown in simplex plots. Arrows begin and end at initial and final fractions, respectively. Edges represent pairwise results, and black dots represent trio results
To determine whether our analysis of community shifts under mortality is more broadly applicable, we combined our five species into various three- and four-species subsets, similar to the Ea–Pci–Pv competition (Fig. 5). In total, we competed five three-species communities and three four-species communities at all six dilution factors (see Supplementary Fig. 9 for examples, as well as for result of five-species coculture). Overall, a quantitative generalization of our assembly rules (see "Methods") predicted the equilibrium fractions with an error of 14%, significantly better than the 41% error that results from predictions obtained from monoculture carrying capacity (Table 1, Supplementary Fig. 2). Assembly rule prediction error does increase with increasing community size, however, particularly in the case of the five-species community (Supplementary Table 1, Supplementary Fig. 9), which may be due to slow equilibration or infrequent coexistence of more than two species. These results indicate that pairwise outcomes are good predictors of simple multispecies states in the presence of increased mortality.
Table 1 Errors of pairwise assembly rules are much lower than monoculture prediction errors
The question of how community composition will change in a deteriorating environment is essential, as climate change, ocean acidification, and deforestation infringe upon many organisms' habitats, increasing mortality either directly, by decimating populations, or indirectly, by making the environment less hospitable to them. We used an experimentally tractable microbial microcosm to tune mortality through dilution rate and found a pervasive tradeoff between growth rate and competitive ability (Fig. 4). This tradeoff causes slow growers to outcompete fast growers in high-density, low-dilution environments. Increasing mortality favors fast growers, in line with model predictions. We observed coexistence and bistability at intermediate dilution factors in pairwise experiments (Fig. 3) and found that such coexistence and bistability propagated up to three- and four-species communities (Fig. 5). Coexistence was more common than bistability, which is in line with expectations of optimal foraging theory1. We were able to explain seemingly complicated three-species states (Fig. 1) with pairwise results, which traversed all possible outcomes allowed by the two-species model.
The success of the simple pairwise assembly rules28 in predicting the states of three- and four-species communities (Table 1, Supplementary Fig. 2) is in line with recent microbial experiments suggesting that pairwise interactions play a key role in determining multispecies community assembly28,31 and community-level metabolic rates32. In contrast, some theory and empirical evidence supports the notion of pervasive and strong higher-order interactions33,34,35,36. Our results provide support for a bottom–up approach to simple multispecies communities and show that pairwise interactions alone can generate multispecies states that appear nontrivial. Prediction errors do increase with increasing community size, however, as can be seen in the case of the five-species community (Supplementary Table 1, Supplementary Fig. 9).
The aforementioned tradeoff made for striking transitions in the communities that we studied. Without the tradeoff, the model would be less useful—if a fast grower outcompetes a slow grower at low dilution rates, the model predicts no change in outcome at higher dilution rates. Our results at low dilution are consistent with previous experimental evidence of a tradeoff between growth and competitive ability among different mutants of the same bacterial strain37 and between different species of protists38,39. Other examples of this tradeoff include antibiotic resistance, which imposes a fitness cost on bacteria despite its clear competitive benefit40, and seed size in plants; plants that produce larger seeds necessarily produce fewer of them but were found to be more competitive in seedling establishment41,42. The high competitive score of slow growers in our system in low-dilution environments, together with the result that increasing dilution favors fast growers, provides a case study for how a unimodal diversity–disturbance relationship can occur in a microbial community, a phenomenon that has previously been observed43,44,45.
The exact mechanism underlying the competitive ability of the slow growers in our system remains unclear. Monoculture pH levels were similar for all species (~6.2–6.5), ruling out the possibility that slow growers move the pH to a level inhospitable to the fast growers. Supernatant experiments, in which we grew each species in the filtered spent media of other species, showed inhibition of some fast growers (Pp, Pci) by some slow growers (Pa, Pv) (Supplementary Fig. 6b, d), which potentially explains three of the seven cases of slow-grower dominance at low dilution factor. We also hypothesized that the tradeoff might be caused by the slow growers having relatively faster growth rates at low resource concentration (as explained below), but this hypothesis was not confirmed when tested (Supplementary Fig. 6f). In addition, in monocultures the slow growers exhibited longer lag times than the fast growers (Supplementary Fig. 5f), which would seem disadvantageous in low-dilution, high-density conditions where resources can be quickly consumed by a competitor with a shorter lag46. The reason for the tradeoff, as well as its frequency in other systems, is worthy of further investigation, in particular because natural microbial systems, such as soil communities or the gut microbiome, are characterized by low dilution rates47,48.
Here we found that the LV model with added mortality provided useful guidance for how experimental competition would shift under increased dilution, but resource-explicit models may in some cases provide additional mechanistic insight49,50. In particular, various resource-explicit models can recapitulate the qualitative changes predicted by the LV model with added mortality. For example, the R* rule states that the species that can survive on the lowest equilibrium resource concentration will dominate the others1. The equilibrium concentration increases with the dilution rate, thus favoring the species with the highest maximal growth rate (Supplementary Note 4, Supplementary Figs. 12 and 13). However, a species with a low maximal rate may dominate under low dilution if it can grow more efficiently at low resource concentrations. As mentioned, this hypothesis could not explain the tradeoff in our system (Supplementary Fig. 6f). Moreover, while we consider dilution to be essentially an added death rate because cells are discarded, the LV model does not include effects of the dilution process that could differentiate it from mere mortality. Previous experimental work has shown that dilution can modulate concentrations of oxygen51,52 and phosphate45 in the environment, leading to changes in microbial community composition. Further work is necessary to explore the circumstances in which phenomenological or resource-explicit models should be used53,54,55 in describing serial dilution experiments.
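The R* argument can be sketched with Monod-style chemostat growth (all parameter values below are hypothetical): at steady state, growth balances dilution, \(\mu (R^ \ast ) = r_{{\mathrm{max}}}R^ \ast /(K + R^ \ast ) = \delta\), so \(R^ \ast = \delta K/(r_{{\mathrm{max}}} - \delta )\), and R* rises with δ fastest for species with low maximal growth rates.

```python
# Hedged sketch of the R* rule under Monod growth in a chemostat
# (all parameter values hypothetical). At steady state,
#   mu(R*) = r_max * R* / (K + R*) = delta  =>  R* = delta*K / (r_max - delta),
# and the species with the lowest R* excludes the other.

def r_star(r_max, K, delta):
    if delta >= r_max:
        return float("inf")  # washout: the species cannot persist at all
    return delta * K / (r_max - delta)

# Slow grower is more efficient at low resource (smaller half-saturation K).
fast = dict(r_max=1.0, K=5.0)
slow = dict(r_max=0.5, K=1.0)

for delta in (0.1, 0.2, 0.4):
    winner = "slow" if r_star(**slow, delta=delta) < r_star(**fast, delta=delta) else "fast"
    print(f"delta={delta}: {winner} grower wins")
# As delta rises, R* of the slow grower rises faster, and the predicted
# outcome flips to the fast grower (here between delta = 0.2 and 0.4).
```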
It is also important to note that not all deteriorating environments will cause such simple and uniform increases in mortality. Antibiotics, and in particular β-lactam antibiotics, might selectively attack fast growers over slow growers56. Overfishing might target certain species of fish. In such cases of species-specific mortality rates, the pairwise LV model still predicts that outcomes will move along the same 45° line through the phase space but in a direction dependent on the differing rates (Supplementary Note 1). Climate change might affect growth rate rather than death rate by increasing temperature, which usually increases growth rates57; in this case, it is not certain whether environmental deterioration in the form of warming would favor slow growers or fast growers. An important direction for future research is to determine whether changes to the environment other than mortality/dilution will have predictable consequences for the composition of microbial communities. In this study, we have seen how a simple prediction about a simple perturbation in pairwise competition—increased mortality will favor the faster-growing species—allowed us to interpret seemingly nontrivial outcomes in simple multispecies communities.
Species and media
The soil bacterial species used in this study were Enterobacter aerogenes (Ea, ATCC#13048), Pseudomonas aurantiaca (Pa, ATCC#33663), Pseudomonas citronellolis (Pci, ATCC#13674), Pseudomonas putida (Pp, ATCC#12633), and Pseudomonas veronii (Pv, ATCC#700474). All species were obtained from ATCC. Two types of growth media were used: one was complex and undefined, while the other was minimal and defined. All results presented in the main text are from the defined media. All species grew in monoculture in both media. The complex medium was 0.1× LB broth (diluted in water). The minimal medium was S medium, supplemented with glucose and ammonium chloride. It contains 100 mM sodium chloride, 5.7 mM dipotassium phosphate, 44.1 mM monopotassium phosphate, 5 mg/l cholesterol, 10 mM potassium citrate pH 6 (1 mM citric acid monohydrate, 10 mM tri-potassium citrate monohydrate), 3 mM calcium chloride, 3 mM magnesium sulfate, and trace metals' solution (0.05 mM disodium EDTA, 0.02 mM iron sulfate heptahydrate, 0.01 mM manganese chloride tetrahydrate, 0.01 mM zinc sulfate heptahydrate, 0.01 mM copper sulfate pentahydrate), 0.93 mM ammonium chloride, and 10 mM glucose. 1× LB broth was used for initial inoculation of colonies. For competitions involving more than two species, plating was done on 10 cm circular Petri dishes containing 25 ml of nutrient agar (nutrient broth (0.3% yeast extract, 0.5% peptone) with 1.5% agar added). For pairwise competitions, plating was done on rectangular Petri dishes containing 45 ml of nutrient agar, onto which diluted 96-well plates were pipetted at 10 μl per well.
Growth rate measurements
Growth curves were captured by measuring the optical density of monocultures (OD 600 nm) in 15-min intervals over a period of ~50 h (Supplementary Fig. 3). Before these measurements, species were grown in 1× LB broth overnight and then transferred to the experimental medium for 24 h. The OD of all species was then equalized. The resulting cultures were diluted into fresh medium at factors of 10−8 to 10−3 of the equalized OD. Growth rates were measured by assuming exponential growth to a threshold of OD 0.1 and averaging across many starting densities and replicates (n = 19 for Pci, n = 22 for all other species). This time-to-threshold measurement implicitly incorporates lag times, because a species with a time lag will take longer to reach the threshold OD than another species with the same exponential rate but no lag time. We also estimated lag times and exponential rates explicitly (Supplementary Fig. 4). We used these measurements to develop an alternative to the time-to-threshold rates, which also incorporated lag time. To estimate this effective growth rate, we multiplied the exponential rate by a factor depending on lag time and time between daily dilutions (Supplementary Fig. 5b and Supplementary Note 3). This method does change growth rate estimates slightly but does not change the order of growth rates among the five species and thus the qualitative predictions of the model (Supplementary Fig. 5a, b). For this reason, we preferred to use the time-to-threshold method, because it involved only one measurement, rather than two, and had a lower error.
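The time-to-threshold estimate can be illustrated on synthetic data (the true rate, lag time, and sampling interval below are invented for the example, not measured values): assuming exponential growth up to the threshold, \(r_{{\mathrm{eff}}} = \ln ({\mathrm{OD}}_{{\mathrm{thresh}}}/{\mathrm{OD}}_0)/t_{{\mathrm{thresh}}}\), so any lag time is implicitly folded into a lower effective rate.

```python
# Synthetic illustration (invented parameters) of the time-to-threshold
# growth rate estimate: r_eff = ln(OD_thresh / OD_0) / t_thresh.
import math

def time_to_threshold_rate(times, ods, od_start, od_thresh=0.1):
    """Effective exponential rate from the first crossing of od_thresh."""
    for t, od in zip(times, ods):
        if od >= od_thresh:
            return math.log(od_thresh / od_start) / t
    return None  # culture never reached the threshold

# Synthetic culture: true exponential rate 0.5/h with a 2 h lag,
# sampled every 15 min as in the measurements described above.
od0, lag, r_true = 1e-4, 2.0, 0.5
times = [0.25 * k for k in range(1, 201)]
ods = [od0 * math.exp(r_true * max(t - lag, 0.0)) for t in times]

r_eff = time_to_threshold_rate(times, ods, od0)
# The lag lowers the effective rate below the true exponential rate,
# exactly as the text notes for this measurement.
print(f"effective rate {r_eff:.3f}/h vs true exponential rate {r_true}/h")
```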
Competition experiments
Frozen stocks of individual species were streaked out on nutrient agar Petri dishes, grown at room temperature for 48 h, and then stored at 4 °C for up to 2 weeks. Before competition experiments, single colonies were picked and each species was grown separately in 50 ml Falcon tubes, first in 5 ml LB broth for 24 h and next in 5 ml of the experimental media for 24 h. During the competition experiments, cultures were grown in 500 μl 96-well plates (BD Biosciences), with each well containing a 200-μl culture. Plates were incubated at 25 °C and shaken at 400 rpm and were covered with an AeraSeal film (Sigma-Aldrich). For each growth–dilution cycle, the cultures were incubated for 24 h and then serially diluted into fresh growth media. Initial cultures were prepared by equalizing OD to the lowest density measured among competing species, mixing by volume to the desired species composition, and then diluting mixtures by the factor to which they would be diluted daily (except for dilution factor 10−6, which began at 10−5 on Day 0, to avoid causing stochastic extinction of any species). Relative abundances were measured by plating on nutrient agar plates. Each culture was diluted in phosphate-buffered saline prior to plating. For competitions involving more than two species, plating was done on 10 cm circular Petri dishes. For pairwise competitions, plating was done on 96-well-plate-sized rectangular Petri dishes containing 45 ml of nutrient agar, onto which diluted 96-well plates were pipetted at 10 μl per well. Multiple replicates of the latter dishes were used to ensure that enough colonies could be counted. Colonies were counted after 48 h incubation at room temperature. The mean number of colonies counted, per plating, per experimental condition, was 42. During competition experiments, we also plated monocultures to determine whether each species could survive each dilution factor in the absence of other species.
Pv went extinct in the highest two dilution factors, while Pa went extinct in the highest dilution factor; all other species survived all dilution factors (Fig. 4).
Assembly rule predictions and accuracy
In order to make predictions about three- and four-species states, we used the qualitative and quantitative outcomes of pairwise competition. The two types of pairwise outcomes allowed for two types of predictions. First, the qualitative outcomes (dominance/exclusion, coexistence, or bistability) of the pairs were used to predict whether a species would be present or absent from a community. These outcomes are shown in "subway maps" (Supplementary Fig. 1), where the presence of a species is noted by the presence of its assigned color. Coexistence is shown by two stacked colors, and bistability is shown by two separated colors. The qualitative error rate is the percentage of species, out of the total number of species (three for trios, four for quads), that are incorrectly predicted to be present or absent (Table 1, Supplementary Fig. 2a, b). The qualitative success rate is the percentage of species that are correctly predicted as present or absent (Supplementary Fig. 2d).
Second, the quantitative outcomes of the pairs were used to predict the quantitative outcomes of three- and four-species communities. These outcomes are shown in relative fraction plots (Supplementary Fig. 1), where equilibrium points are indicated by the black dots. When two or more species coexist in pairs, the assembly rules predict that they will coexist in multispecies communities, provided that an additional species does not exclude them. The predicted equilibrium coexisting fraction of two species is the same in a community as it is in a pair, while the fractions of more than two coexisting species are predicted with the weighted geometric mean of pairwise coexisting fractions. For example, in a three-species coexisting community, the fraction of species 1 depends on its coexisting fractions with the other two species in pairs:
$$f_1 = \left( {f_{12}^{w_2}f_{13}^{w_3}} \right)^{\frac{1}{{w_2 + w_3}}}$$
where f12 is the fraction of species 1 after reaching equilibrium in competition with species 2, \(w_2 = \sqrt {f_{21}f_{23}}\) and \(w_3 = \sqrt {f_{31}f_{32}}\). Finally, these predictions are normalized by setting \(f_1^ \ast = \frac{{f_1}}{{f_1 + f_2 + f_3}}\). The quantitative error of a particular community outcome is the distance of the predicted fractions from the observed community fractions, measured with the L2 norm. The maximum error, for any number of species, is \(\sqrt 2\), which occurs when a species that was predicted to go extinct in fact dominates:
$$\sqrt {\sum {\left( {\left( {1,0, \ldots ,0} \right) - \left( {0,1, \ldots ,0} \right)} \right)^2 } } = \sqrt 2.$$
To calculate the overall quantitative errors (Table 1, Supplementary Fig. 2c, Supplementary Table 1), we divided each error by \(\sqrt 2\) and took the mean.
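A minimal sketch of this prediction scheme, using hypothetical pairwise fractions: `pairs[i][j]` is the equilibrium fraction of species i when paired with species j, and each species' weight is the geometric mean of its pairwise fractions, generalizing \(w_2 = \sqrt {f_{21}f_{23}}\) beyond three species.

```python
# Sketch of the quantitative assembly-rule prediction (hypothetical
# pairwise fractions, not the measured data). pairs[i][j] is the
# equilibrium fraction of species i when competing against species j.
import math

def predict_fractions(pairs):
    species = list(pairs)
    # weight of each species: geometric mean of its pairwise fractions
    w = {}
    for j in species:
        fracs = [pairs[j][k] for k in species if k != j]
        w[j] = math.prod(fracs) ** (1 / len(fracs))
    # weighted geometric mean of the focal species' pairwise fractions
    raw = {}
    for i in species:
        others = [j for j in species if j != i]
        wsum = sum(w[j] for j in others)
        raw[i] = math.prod(pairs[i][j] ** w[j] for j in others) ** (1 / wsum)
    total = sum(raw.values())  # normalize so the fractions sum to one
    return {i: raw[i] / total for i in species}

def normalized_error(pred, obs):
    """L2 distance between predicted and observed fractions, scaled by
    the maximum possible error sqrt(2)."""
    return math.dist(pred, obs) / math.sqrt(2)

pairs = {
    "A": {"B": 0.6, "C": 0.7},
    "B": {"A": 0.4, "C": 0.5},
    "C": {"A": 0.3, "B": 0.5},
}
pred = predict_fractions(pairs)
obs = {"A": 0.5, "B": 0.3, "C": 0.2}  # hypothetical trio outcome
print(pred)
print(normalized_error(list(pred.values()), list(obs.values())))
```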
Finally, we also predicted multispecies states using carrying capacities as measured in monocultures through colony counting (Supplementary Fig. 5c, d). We assumed that, in competition, each species would grow to a density proportionate to its carrying capacity. In other words, the monoculture prediction assumes that all species always coexist. The error from the prediction to the observed data was calculated with the L2 norm, as above.
The p values given in Supplementary Figs. 3 and 5 were obtained using two-tailed t tests. The error bars shown in the time-series plots in Figs. 1 and 3 and Supplementary Fig. 8 are the SD of the beta distribution with Bayes' prior probability:
$$\sigma = \sqrt {\frac{{\left( {\alpha + 1} \right)\left( {\beta + 1} \right)}}{{\left( {\alpha + \beta + 2} \right)^2\left( {\alpha + \beta + 3} \right)}}}.$$
Here α and β are the number of colonies of two different species. In case of more than two species, α and β are the number of colonies of a given species and the number of all other species' colonies, respectively.
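As a sketch, the error-bar formula can be evaluated directly; the colony counts below are hypothetical, chosen near the mean of 42 colonies per plating reported above.

```python
# Direct evaluation of the error-bar formula above (hypothetical colony
# counts): the SD of a beta distribution with a uniform Bayes prior.
import math

def beta_sd(a, b):
    """a, b: colony counts of the focal species and of all other species."""
    return math.sqrt((a + 1) * (b + 1) / ((a + b + 2) ** 2 * (a + b + 3)))

# e.g. 30 focal colonies out of 42 total counted
print(round(beta_sd(30, 12), 4))
```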
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The source data underlying Figs. 1b, 3b, f, and 4a–c, and Supplementary Figs. 1, 2, 5, and 7 are provided as a Source Data file. Access to the data is also publicly available at https://figshare.com/projects/Added_mortality_causes_universal_changes_in_microbial_community_composition/58304. A reporting summary for this article is available as a Supplementary Information file.
The code used for analyzing data is available from the first author upon request.
Journal peer review information: Nature Communications thanks Sean Gibbons, Wenying Shou, and Sara Mitri for their contribution to the peer review of this work. Peer reviewer reports are available.
Tilman, D. Resource Competition and Community Structure (Princeton University Press, Princeton, NJ, 1982).
Wellborn, G. A., Skelly, D. K. & Werner, E. E. Mechanisms creating community structure across a freshwater habitat gradient. Annu. Rev. Ecol. Syst. 27, 337–363 (1996).
Díez, I., Secilla, A., Santolaria, A. & Gorostiaga, J. M. Phytobenthic intertidal community structure along an environmental pollution gradient. Mar. Pollut. Bull. 38, 463–472 (1999).
Yergeau, E. et al. Size and structure of bacterial, fungal and nematode communities along an Antarctic environmental gradient. FEMS Microbiol. Ecol. 59, 436–451 (2007).
Lessard, J.-P., Sackett, T. E., Reynolds, W. N., Fowler, D. A. & Sanders, N. J. Determinants of the detrital arthropod community structure: the effects of temperature and resources along an environmental gradient. Oikos 120, 333–343 (2011).
Cornwell, W. K. & Ackerly, D. D. Community assembly and shifts in plant trait distributions across an environmental gradient in coastal California. Ecol. Monogr. 79, 109–126 (2009).
Mykrä, H., Tolkkinen, M. & Heino, J. Environmental degradation results in contrasting changes in the assembly processes of stream bacterial and fungal communities. Oikos 126, 1291–1298 (2017).
Dethlefsen, L. & Relman, D. A. Incomplete recovery and individualized responses of the human distal gut microbiota to repeated antibiotic perturbation. Proc. Natl Acad. Sci. 108, 4554–4561 (2011).
Wernberg, T. et al. Climate-driven regime shift of a temperate marine ecosystem. Science 353, 169–172 (2016).
Daskalov, G. M., Grishin, A. N., Rodionov, S. & Mihneva, V. Trophic cascades triggered by overfishing reveal possible mechanisms of ecosystem regime shifts. Proc. Natl Acad. Sci. 104, 10518–10523 (2007).
Larsen, T. H., Williams, N. M. & Kremen, C. Extinction order and altered community structure rapidly disrupt ecosystem functioning. Ecol. Lett. 8, 538–547 (2005).
Chapin, F. S. III et al. Consequences of changing biodiversity. Nature 405, 234–242 https://doi.org/10.1038/35012241 (2000).
Thomas, C. D. et al. Extinction risk from climate change. Nature 427, 145–148 (2004).
Bálint, M. et al. Cryptic biodiversity loss linked to global climate change. Nat. Clim. Change 1, 313–318 (2011).
Harrington, R., Woiwod, I. & Sparks, T. Climate change and trophic interactions. Trends Ecol. Evol. 14, 146–150 (1999).
Ockendon, N. et al. Mechanisms underpinning climatic impacts on natural populations: altered species interactions are more important than direct effects. Glob. Change Biol. 20, 2221–2229 (2014).
Paine, R. T. The pisaster-tegula interaction: prey patches, predator food preference, and intertidal community structure. Ecology 50, 950–961 (1969).
Bond, W. J. in Biodiversity and Ecosystem Function pp. 237–253 (Springer, Berlin, Heidelberg, 1994).
Banerjee, S., Schlaeppi, K. & Heijden, M. G. Avander Keystone taxa as drivers of microbiome structure and functioning. Nat. Rev. Microbiol. 16, 567–576 (2018).
Stewart, F. M. & Levin, B. R. Partitioning of resources and the outcome of interspecific competition: a model and some general considerations. Am. Nat. 107, 171–198 (1973).
Hastings, A. Population Biology: Concepts and Models (Springer Science & Business Media, New York, 2013).
MEERS, J. L. Effect of dilution rate on the outcome of chemostat mixed culture experiments. Microbiology 67, 359–361 (1971).
Sommer, U. Phytoplankton competition along a gradient of dilution rates. Oecologia 68, 503–506 (1986).
Spijkerman, E. & Coesel, P. F. M. Competition for phosphorus among planktonic desmid species in continuous-flow culture. J. Phycol. 32, 939–948 (1996).
Gause, G. F. The Struggle for Existence (Courier Corporation, North Chelmsford, MA, 2003).
Slobodkin, L. B. Experimental populations of hydrida. J. Anim. Ecol. 33, 131–148 (1964).
Slobodkin, L. B. Growth and Regulation of Animal Populations (Holt, Rinehart and Winston, New York, 1980).
Friedman, J., Higgins, L. M. & Gore, J. Community structure follows simple assembly rules in microbial microcosms. Nat. Ecol. Evol. 1, 0109 (2017).
Celiker, H. & Gore, J. Clustering in community structure across replicate ecosystems following a long-term bacterial evolution experiment. Nat. Commun. 5, 4643 (2014).
Vet, S. et al. Bistability in a system of two species interacting through mutualism as well as competition: chemostat vs. Lotka-Volterra equations. PLOS ONE 13, e0197462 (2018).
Venturelli, O. S. et al. Deciphering microbial interactions in synthetic human gut microbiome communities. Mol. Syst. Biol. 14, e8157 (2018).
Guo, X. & Boedicker, J. Q. The contribution of high-order metabolic interactions to the global activity of a four-species microbial community. PLOS Comput. Biol. 12, e1005079 (2016).
Billick, I. & Case, T. J. Higher order interactions in ecological communities: what are they and how can they be detected? Ecology 75, 1529–1543 (1994).
Bairey, E., Kelsic, E. D. & Kishony, R. High-order species interactions shape ecosystem diversity. Nat. Commun. 7, 12285 (2016).
Grilli, J., Barabás, G., Michalska-Smith, M. J. & Allesina, S. Higher-order interactions stabilize dynamics in competitive network models. Nature 548, 210–213 (2017).
Mayfield, M. M. & Stouffer, D. B. Higher-order interactions capture unexplained complexity in diverse communities. Nat. Ecol. Evol. 1, 0062 (2017).
Kurihara, Y., Shikano, S. & Toda, M. Trade-off between interspecific competitive ability and growth rate in bacteria. Ecology 71, 645–650 (1990).
Luckinbill, L. S. Selection and the r/K continuum in experimental populations of protozoa. Am. Nat. 113, 427–437 (1979).
Violle, C., Pu, Z. & Jiang, L. Experimental demonstration of the importance of competition under disturbance. Proc. Natl Acad. Sci. 107, 12925–12929 (2010).
Andersson, D. I. & Levin, B. R. The biological cost of antibiotic resistance. Curr. Opin. Microbiol. 2, 489–493 (1999).
Gross, K. L. Effects of seed size and growth form on seedling establishment of six monocarpic perennial plants. J. Ecol. 72, 369–387 (1984).
Geritz, S. A. H., van der Meijden, E. & Metz, J. A. J. Evolutionary dynamics of seed size and seedling competitive ability. Theor. Popul. Biol. 55, 324–343 (1999).
Sousa, W. P. Disturbance in marine intertidal boulder fields: the nonequilibrium maintenance of species diversity. Ecology 60, 1225–1239 (1979).
Flöder, S. & Sommer, U. Diversity in planktonic communities: an experimental test of the intermediate disturbance hypothesis. Limnol. Oceanogr. 44, 1114–1119 (1999).
Gibbons, S. M. et al. Disturbance regimes predictably alter diversity in an ecologically complex bacterial system. mBio 7, e01372–16 (2016).
Manhart, M., Adkar, B. V. & Shakhnovich, E. I. Trade-offs between microbial growth phases lead to frequency-dependent and non-transitive selection. Proc. R. Soc. B 285, 20172459 (2018).
Venema, K. & van den Abbeele, P. Experimental models of the gut microbiome. Best. Pract. Res. Clin. Gastroenterol. 27, 115–126 (2013).
Avrani, S., Bolotin, E., Katz, S. & Hershberg, R. Rapid genetic adaptation during the first four months of survival under resource exhaustion. Mol. Biol. Evol. 34, 1758–1769 (2017).
Goldford, J. E. et al. Emergent simplicity in microbial community assembly. Science 361, 469–474 (2018).
Niehaus, L. et al. Microbial coexistence through chemical-mediated interactions. bioRxiv Preprint at: https://www.biorxiv.org/content/10.1101/358481v1 (2018).
Buckling, A., Kassen, R., Bell, G. & Rainey, P. B. Disturbance and diversity in experimental microcosms. Nature 408, 961–964 (2000).
Rainey, P. B. & Rainey, K. Evolution of cooperation and conflict in experimental bacterial populations. Nature 425, 72–74 (2003).
Fox, J. W. The intermediate disturbance hypothesis should be abandoned. Trends Ecol. Evol. 28, 86–92 (2013).
Chesson, P. & Huntly, N. The roles of harsh and fluctuating conditions in the dynamics of ecological communities. Am. Nat. 150, 519–553 (1997).
Hsu, S.-B. & Zhao, X.-Q. A Lotka–Volterra competition model with seasonal succession. J. Math. Biol. 64, 109–130 (2012).
Tresse, O., Jouenne, T. & Junter, G.-A. The role of oxygen limitation in the resistance of agar-entrapped, sessile-like Escherichia coli to aminoglycoside and β-lactam antibiotics. J. Antimicrob. Chemother. 36, 521–526 (1995).
Ratkowsky, D. A., Olley, J., McMeekin, T. A. & Ball, A. Relationship between temperature and growth rate of bacterial cultures. J. Bacteriol. 149, 1–5 (1982).
Profit Maximization
Profit maximization rule (also called optimal output rule) specifies that a firm can maximize its economic profit by producing at an output level at which its marginal revenue is equal to its marginal cost.
Marginal revenue is the change in total revenue that results from a one-unit change in output. For example, if a firm sells 99 units for $198 and 100 units for $200, the marginal revenue of the 100th unit is $2. If ∆TR is the change in total revenue and ∆q is the change in output, MR equals ∆TR/∆q.
Marginal cost, on the other hand, is the incremental cost of an additional unit of output. For example, if the total cost of 99 units is $148.5 and the total cost of 100 units is $150, the marginal cost of the 100th unit is $1.5. If ∆TC is the change in total cost and ∆q is the change in output, MC equals ∆TC/∆q.
It makes intuitive sense that a firm should continue increasing its production as long as marginal revenue is higher than marginal cost, because each additional sale increases profit; in the example above, the 100th unit adds $0.5 to profit ($2 minus $1.5). But if marginal cost is higher than marginal revenue, the firm spends more on the unit than it earns, and it doesn't make sense to produce it. It follows that if MR is greater than MC, a firm should increase production, and if MR is less than MC, it should decrease production. The only point at which the firm has no incentive to change its output is the one at which MR = MC.
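The arithmetic of this decision rule can be checked with a short computation. The totals below are the ones from the text; the helper function is just for illustration.

```python
# Discrete marginal revenue and marginal cost computed from totals.
# MR = change in total revenue per extra unit; MC = change in total cost.

def marginal(total_prev, total_next, dq=1):
    """Incremental change per unit of output."""
    return (total_next - total_prev) / dq

mr_100 = marginal(198.0, 200.0)   # revenue at 99 units vs. 100 units
mc_100 = marginal(148.5, 150.0)   # cost at 99 units vs. 100 units

print(mr_100)             # 2.0
print(mc_100)             # 1.5
# MR > MC, so producing the 100th unit raises profit by the difference.
print(mr_100 - mc_100)    # 0.5
```

Since MR exceeds MC here, the firm should keep expanding output; production stops paying off once the gap closes to zero.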
Why MR = MC is Profit-Maximizing?
We can arrive at the same conclusion algebraically. A firm's profit (π) equals its total revenue (TR) minus its total cost (TC):
$$ \pi=\text{TR}\ - \text{TC} $$
This expression can be written as follows:
$$ \frac{\Delta \pi}{\Delta \text{q}}=\frac{\Delta \text{TR}}{\Delta \text{q}} - \frac{\Delta \text{TC}}{\Delta \text{q}} $$
It means that the rate of change of profit equals the difference between the rate of change of revenue and rate of change of cost.
Now, at the profit-maximizing output, the rate of change of profit must be 0 because we have reached the peak of the profit curve. The rate of change of profit is positive until we reach the peak and would turn negative if we moved past it. Hence, profit maximization requires that ∆π/∆q equal 0.
$$ \text{0}=\frac{\Delta \text{TR}}{\Delta \text{q}} - \frac{\Delta \text{TC}}{\Delta \text{q}} $$
But ∆TR/∆q is the definition of marginal revenue (MR) and ∆TC/∆q is the definition of marginal cost (MC).
$$ \text{0}=\text{MR}\ - \text{MC} $$
$$ \text{MR}=\text{MC} $$
The beauty of MR = MC as the profit maximization point is that it applies to all firms, whether in perfect competition or monopoly.
Let's consider a firm whose total revenue, total cost, marginal revenue and marginal cost functions are given below:
$$ \text{TR}\ =\ \text{90Q}\ -\ \text{2Q}^\text{2} $$
$$ \text{MR}\ =\ \text{90}\ -\ \text{4Q} $$
$$ \text{TC}\ =\ \text{200}\ +\ \text{10Q}+\text{2Q}^\text{2} $$
$$ \text{MC}\ =\ \text{4Q}\ +\ \text{10} $$
We can find the profit-maximizing output using the MR = MC condition:
$$ \text{MR}\ =\ \text{MC} $$
$$ \text{MR}\ =\ \text{90}\ -\ \text{4Q}\ =\ \text{MC}\ =\ \text{4Q}\ +\ \text{10} $$
$$ \text{Q}\ =\ \text{10} $$
The profit-maximizing output can also be determined from the intersection of marginal revenue and marginal cost curves.
The total revenue and total cost graph shows that 10 units are indeed the profit-maximizing output because the distance between the total revenue curve and total cost curve is maximum at 10 units.
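The example can also be verified with a brute-force sketch: scan candidate output levels, compute profit at each, and confirm the maximum sits where MR = MC. The search range below is an arbitrary choice wide enough to contain the optimum.

```python
# Brute-force check of the worked example:
#   TR = 90Q - 2Q^2,  TC = 200 + 10Q + 2Q^2.
# MR = MC gives 90 - 4Q = 4Q + 10, i.e. Q = 10.

def total_revenue(q):
    return 90 * q - 2 * q ** 2

def total_cost(q):
    return 200 + 10 * q + 2 * q ** 2

def profit(q):
    return total_revenue(q) - total_cost(q)

# Scan integer output levels and pick the most profitable one.
best_q = max(range(0, 46), key=profit)
print(best_q)        # 10
print(profit(10))    # 200
```

The scan confirms the algebra: profit peaks at 10 units, where it equals 80(10) − 4(10)² − 200 = 200.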
\begin{document}
\begin{abstract} Fix an integer $g \neq -1$ that is not a perfect square. In 1927, Artin conjectured that there are infinitely many primes for which $g$ is a primitive root. Forty years later, Hooley showed that Artin's conjecture follows from the Generalized Riemann Hypothesis (GRH). We inject Hooley's analysis into the Maynard--Tao work on bounded gaps between primes. This leads to the following GRH-conditional result: \emph{Fix an integer $m \geq 2$. If $q_1 < q_2 < q_3 < \dots$ is the sequence of primes possessing $g$ as a primitive root, then $\liminf_{n\to\infty} (q_{n+(m-1)}-q_n) \leq C_m$, where $C_m$ is a finite constant that depends on $m$ but not on $g$.} We also show that the primes $q_n, q_{n+1}, \dots, q_{n+m-1}$ in this result may be taken to be consecutive. \end{abstract}
\title{Bounded gaps between primes with a given primitive root}
\section{Introduction} The following conjecture was proposed by Emil Artin in the course of a September 1927 conversation with Helmut Hasse:
\begin{apc} Fix an integer $g \neq -1$ that is not a square. There are infinitely many primes $p$ for which $g$ is a primitive root modulo $p$. In fact, the number of such $p\leq x$ is (as $x\to\infty$) asymptotically $c_g \pi(x)$ for a certain $c_g > 0$. \end{apc}
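As a quick empirical illustration of the conjectured density (not part of the paper's argument), one can test small primes directly. The bound $10^4$ below is an ad hoc choice, and $0.3739558\ldots$ is Artin's constant, which is the value of $c_g$ for $g=2$.

```python
# Illustrative check of Artin's conjecture for g = 2: among odd primes
# p <= 10^4, count those having 2 as a primitive root and compare the
# proportion with Artin's constant, about 0.3739558.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def prime_factors(n):
    """Set of prime divisors of n (trial division)."""
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def is_primitive_root(g, p):
    # g is a primitive root mod p iff g^((p-1)/q) != 1 (mod p)
    # for every prime q dividing p - 1.
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

odd_primes = [p for p in primes_up_to(10 ** 4) if p > 2]
hits = sum(1 for p in odd_primes if is_primitive_root(2, p))
print(hits / len(odd_primes))  # roughly 0.37
```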
While there is a substantial literature surrounding Artin's conjecture (lovingly catalogued in the survey \cite{moree12}), we still know infuriatingly little. In particular, there is no specific value of $g$ which is known to occur as a primitive root for infinitely many primes. However, thanks to work of Heath-Brown \cite{HB86} (refining earlier results of Gupta and Murty \cite{GM84}), we know that at least one of $2, 3$, and $5$ has this property. In fact, one can replace ``$2, 3$, and $5$'' with any list of three nonzero multiplicatively independent integers.
In a seminal 1967 paper, Hooley \cite{hooley67} (see also his exposition in \cite[Chapter 3]{hooley76}) showed that the Chebotarev density theorem with a sufficiently sharp error term would imply the quantitative form of Artin's conjecture. Moreover, he showed that such a variant of Chebotarev's density theorem --- at least for the cases relevant for this application --- follows from the Generalized Riemann Hypothesis (GRH) for Dedekind zeta functions. Thus, under GRH, we have a fairly satisfactory complete solution to Artin's conjecture.
In this paper, we combine Hooley's work on Artin's conjecture with recent methods used to study gaps between primes. In sensational work of Maynard \cite{maynard14} and Tao, it is shown that $\liminf_{n\to\infty} (p_{n+m-1} - p_n) < \infty$ for every $m$. Here $p_1 < p_2 < p_3 < \dots$ is the sequence of all primes, in the usual order. Our main theorem is an analogous bounded gaps result for primes possessing a prescribed primitive root. \begin{thm}[conditional on GRH]\label{thm:main} Fix an integer $g \neq -1$ that is not a square. Let $q_1 < q_2 < q_3 < \dots$ denote the sequence of primes for which $g$ is a primitive root. Then for each $m$, \[ \liminf_{n\to\infty} (q_{n+m-1} - q_n) \leq C_m, \] where $C_m$ is a finite constant depending on $m$ but not on $g$. \end{thm}
In the concluding section of the paper, we show how to modify the proof of Theorem \ref{thm:main} to impose the additional restriction that the $m$ primes $q_n, q_{n+1}, \dots, q_{n+m-1}$ are in fact \emph{consecutive} (Theorem \ref{thm:consecutive}).
We remark that other recent work producing bounded gaps between primes in special sets has been done by Thorner \cite{thorner14}, who handles primes restricted by Chebotarev conditions, and by Li and Pan \cite{LP14}, who work with primes $p$ for which $p+2$ is an `almost prime'.
\subsection*{Notation} The letters $p$ and $q$ always denote primes. Implied constants may depend on $k$ and on $g$, unless otherwise noted.
\section{Technical preparation} \subsection{Configurations of quadratic residues and nonresidues} We will use that certain configurations of residues and nonresidues are guaranteed to appear for all large enough primes. This is a fairly standard consequence of the Riemann Hypothesis for curves as proved by Weil, but we give the argument for completeness. The following lemma is a special case of \cite[Corollary 2.3]{wan97}.
\begin{lem}\label{lem:weil} Let $p$ be a prime. Suppose that $f(T)$ is a monic polynomial in $\F_p[T]$ of degree $d$ and that $f(T)$ is not a square in $\F_p[T]$. Then
\[ \left|\sum_{a \bmod{p}} \leg{f(a)}{p}\right| \leq (d-1)\sqrt{p}. \] \end{lem}
\begin{lem}\label{lem:quadres} Let $p$ be a prime, and let $k$ be a positive integer. Suppose that $h_1, \dots, h_k$ are integers no two of which are congruent modulo $p$. Suppose $\epsilon_1, \dots, \epsilon_k \in \{\pm 1\}$. The number of mod $p$ solutions $n$ to the system of equations \begin{equation}\label{eq:systemstar} \leg{n+h_i}{p}= \epsilon_i \quad\text{for all}\quad 1 \leq i \leq k \end{equation} is at least $\frac{p}{2^k} - (k-1)\sqrt{p} - k$. \end{lem}
\begin{proof} For each $n$, let $\iota(n) = \frac{1}{2^k} \prod_{i=1}^{k} (1+\epsilon_i \leg{n+h_i}{p})$. If we suppose $n \not\equiv -h_1$, \dots, $-h_k \pmod{p}$, then $\iota(n) = 1$ when \eqref{eq:systemstar} holds and $=0$ otherwise. Since $|\iota(n)| \leq 1$ for all $n$, the number of solutions to \eqref{eq:systemstar} is at least $-k + \sum_{n \bmod{p}} \iota(n)$. For each subset $S \subset \{1, 2, 3, \dots, k\}$, put $f_S(T) = \prod_{i\in S} (T+h_i) \in \F_p[T]$. Then \[ \sum_{n \bmod{p}} \iota(n) = \frac{1}{2^k}\sum_{S \subset \{1, 2, \dots, k\}} \left(\prod_{i \in S}\epsilon_i\right) \sum_{n \bmod{p}} \leg{f_S(n)}{p}. \] If $S=\emptyset$, then $f_S=1$, and we get a contribution of $\frac{p}{2^k}$. In all other cases, $f_S$ is a nonsquare polynomial of degree at most $k$. By Lemma \ref{lem:weil}, the total contribution from all nonempty subsets of $\{1, 2, \dots, k\}$ is bounded in absolute value by $\frac{2^k-1}{2^k} (k-1)\sqrt{p} \le (k-1)\sqrt{p}$. Thus, $\sum_{n \bmod{p}} \iota(n) \ge \frac{p}{2^k} - (k-1)\sqrt{p}$, and the lemma follows. \end{proof}
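A brute-force sanity check of the count in the preceding lemma, for one hypothetical small instance (the prime and the shifts below are arbitrary choices, not from the paper):

```python
# For a prime p, shifts h_1, ..., h_k, and signs eps_i, count the n mod p
# with Legendre symbol ((n + h_i)/p) = eps_i for all i, and compare with
# the lower bound p/2^k - (k-1)*sqrt(p) - k.

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else 1

def count_solutions(p, shifts, signs):
    return sum(
        1
        for n in range(p)
        if all(legendre(n + h, p) == e for h, e in zip(shifts, signs))
    )

p, shifts, signs = 101, [0, 1, 3], [1, -1, 1]
k = len(shifts)
lower = p / 2 ** k - (k - 1) * p ** 0.5 - k
print(count_solutions(p, shifts, signs), ">=", lower)
```

For $p$ this small the lower bound is negative, so the inequality is trivially satisfied; the bound only becomes meaningful once $p$ is large compared with $4^k k^2$, which is exactly how it is used in the proof of Lemma \ref{lem:presieving}.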
\subsection{Effective Chebotarev} The next result is due in essence to Lagarias and Odlyzko \cite{LO77}, although the precise formulation we give is due to Serre \cite[\S2.4]{serre81}:
\begin{thm}[conditional on GRH]\label{thm:serre} Let $L$ be a finite Galois extension of $\Q$ with Galois group $G$, and let $C$ be a conjugacy class of $G$. The number of unramified primes $p \leq x$ for which the Frobenius conjugacy class $(p, L/\Q)$ equals $C$ is given by
\[ \frac{\#C}{\#G}\Li(x)+ O\left(\frac{\#C}{\#G}x^{1/2}(\log|\Delta_L| + [L:\Q]\log{x})\right), \] for all $x\geq 2$. Here $\Delta_L$ denotes the discriminant of $L$ and the $O$-constant is absolute. \end{thm}
To apply Theorem \ref{thm:serre}, we require an upper bound for the term $\log|\Delta_L|$. The following result, which is contained in \cite[Proposition 6]{serre81}, suffices for our applications.
\begin{lem}\label{lem:discbound} For every Galois extension $L/\Q$, we have
\[ \log|\Delta_L| \leq ([L:\Q]-1) \sum_{p \mid \Delta_L} \log{p} + [L:\Q] \log [L:\Q]. \] \end{lem}
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof} \subsection{The Maynard--Tao strategy} We begin by recalling the strategy of \cite{maynard14} for producing bounded gaps between primes. Let $k \geq 2$ be a fixed positive integer, and let $\Hh = \{h_1 < h_2 < \dots < h_k\}$ denote a fixed \emph{admissible $k$-tuple}, i.e., a set of $k$ distinct integers that does not occupy all of the residue classes modulo $p$ for any prime $p$. With $N$ a large positive integer, we seek values of $n$ belonging to the dyadic interval $[N, 2N)$ for which the shifted tuple $n+h_1, n+h_2, \dots, n+h_k$ contains several primes.
Let $W := \prod_{p \leq \log\log\log{N}} p$. Choose an integer $\nu$ so that $\gcd(\nu+h_i,W)=1$ for all $1 \leq i\leq k$; the existence of such a $\nu$ is implied by the admissibility of $\Hh$. We restrict attention to integers $n \equiv \nu\pmod{W}$. This has the effect of pre-sieving the values of $n$ to ensure that none of the $n+h_i$ have any small prime factors. Let $w(n)$ denote nonnegative weights (to be chosen momentarily), and let $\chi_{\Pp}$ denote the characteristic function of the set $\Pp$ of prime numbers. One studies the sums \[ S_1:= \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} w(n)\quad\text{and}\quad S_2:= \sum_{\substack{N \leq n < 2N \\ n\equiv \nu\pmod{W}}} \left(\sum_{i=1}^{k} \chi_{\Pp}(n+h_i)\right)w(n). \] The ratio $S_2/S_1$ is a weighted average of the number of primes among $n+h_1, \dots, n+h_k$, as $n$ ranges over $[N,2N)$. Consequently, if $S_2 > (m-1) S_1$ for the positive integer $m$, then at least $m$ of the numbers $n+h_1, \dots, n+h_k$ are primes. So if the inequality $S_2 > (m-1) S_1$ is achieved for a sequence of $n$ tending to infinity, then $\liminf (p_{n+m-1} - p_n) \leq h_k-h_1 < \infty$.
As we have described it so far, this strategy goes back to Goldston--Pintz--Y{\i}ld{\i}r{\i}m. The key innovation in the approach of Maynard--Tao is the choice of congenial weights $w(n)$. The following result, which is a restatement of \cite[Proposition 4.1]{maynard14}, is crucial.
\begin{prop}\label{prop:main-maynard} Let $\theta$ be a positive real number with $\theta < \frac{1}{4}$. Let $F$ be a piecewise differentiable function supported on the simplex $\{(x_1, \dots, x_k): \text{each }x_i \geq 0, \sum_{i=1}^{k} x_i \leq 1\}$. With $R:= N^{\theta}$, put \[ \lambda_{d_1, \dots, d_k}:= \left(\prod_{i=1}^{k} \mu(d_i) d_i\right) \sum_{\substack{r_1, \dots, r_k \\ d_i \mid r_i\,\forall i \\ (r_i, W)=1\,\forall i}} \frac{\mu(\prod_{i=1}^{k} r_i)^2}{\prod_{i=1}^{k} \varphi(r_i)} F\left(\frac{\log{r_1}}{\log{R}}, \dots, \frac{\log{r_k}}{\log{R}}\right)\] whenever $\gcd(\prod_{i=1}^{k} d_i,W)=1$, and let $\lambda_{d_1, \dots, d_k}=0$ otherwise. Let \[ w(n):= \left(\sum_{d_i \mid n+h_i\,\forall i} \lambda_{d_1, \dots, d_k}\right)^2. \] Then as $N\to\infty$, \begin{align*} S_1 &\sim \frac{\varphi(W)^k}{W^{k+1}} N (\log{R})^k I_k(F), ~~\text{and} \\ S_2 &\sim \frac{\varphi(W)^k}{W^{k+1}} \frac{N}{\log{N}} (\log{R})^{k+1} \sum_{m=1}^{k} J_k^{(m)}(F), \end{align*} provided that $I_k(F) \neq 0$ and $J_k^{(m)}(F) \neq 0$ for each $m$, where \begin{align*} I_k(F) :&= \idotsint_{[0,1]^{k}} F(t_1, \dots, t_k)^2\, \mathrm{d} t_1 \mathrm{d} t_2 \cdots \mathrm{d} t_k, \\
J_k^{(m)}(F):&= \idotsint_{[0,1]^{k-1}} \left(\int_{0}^{1} F(t_1, \dots, t_k)\, \mathrm{d} t_m\right)^2 \mathrm{d} t_1 \cdots \mathrm{d} t_{m-1} \mathrm{d} t_{m+1} \cdots \mathrm{d} t_k. \end{align*} \end{prop}
From our interpretation of $S_2/S_1$ as a weighted average, we know that there is an $n \in [N,2N)$ for which at least $S_2/S_1$ of the numbers $n+h_1, \dots, n+h_k$ are prime. Proposition \ref{prop:main-maynard} shows that $S_2/S_1 \to \theta \frac{\sum_{m=1}^{k} J_k^{(m)}(F)}{I_k(F)}$, as $N\to\infty$. Let \begin{equation}\label{eq:mkdef}M_k := \sup_{F} \frac{\sum_{m=1}^{k} J_k^{(m)}(F)}{I_k(F)},\end{equation} where the supremum is taken over all $F$ satisfying the previously indicated conditions. Upon choosing $\theta$ close to $\frac{1}{4}$, and $F$ so that the supremum appearing in the definition \eqref{eq:mkdef} is close to $M_k$, we find that infinitely often, at least $\lceil \frac{1}{4}M_k\rceil$ of the numbers $n+h_1,\dots, n+h_{k}$ are prime. The following lower bound on $M_k$ is due to Maynard \cite[Proposition 4.3]{maynard14}.
\begin{prop}\label{prop:mklower} $M_k\to\infty$ as $k\to\infty$. In fact, for all sufficiently large values of $k$,
\[ M_k > \log{k}-2\log\log{k}-2.\] \end{prop}
Consequently, once $k$ is a little larger than $e^{4m}$, we have $\lceil \frac{1}{4} M_k\rceil > m-1$. From the above discussion, $\liminf_{n\to\infty} (p_{n+m-1}-p_n) \leq h_k-h_1 < \infty$ for every admissible $k$-tuple $\Hh$. Choosing $\Hh$ carefully, this argument gives $\liminf_{n\to\infty} (p_{n+m-1}-p_n) \ll m^3 e^{4m}$; see the proof of \cite[Theorem 1.1]{maynard14} for details.
\subsection{Modifying Maynard--Tao} For the rest of the paper, we fix an integer $g \neq -1$ that is not a square. Let $\tilde{\Pp}$ denote the set of primes having $g$ as a primitive root. Fix an integer $k \geq 2$, and let \[ K:= 9k^2 \cdot 4^k. \] We let $\Hh$ denote the admissible $k$-tuple with $h_i = (i-1)K!$ for all $1 \leq i\leq k$; that is, \begin{equation}\label{eq:hidef} \Hh:= \{0, K!, 2K!, \dots, (k-1)K!\}. \end{equation} In what follows, we think of $N$ as very large, in particular much larger than $g$. We use the Maynard--Tao strategy to detect $n \in [N,2N)$ for which the list $n+h_1, \dots, n+h_k$ contains several primes belonging to $\tilde{\Pp}$. Let $g_0$ denote the discriminant of the quadratic field $\Q(\sqrt{g})$. Set \[ W := \lcm[g_0, \prod_{p \leq \log\log\log{N}} p]. \] Once again, we pre-sieve values of $n$ by putting $n$ in an appropriate residue class $\nu \bmod W$. Whereas Maynard could choose any $\nu$ with $\gcd(\nu+h_i,W)=1$ for all $1 \leq i \leq k$, we must tread more carefully. We choose $\nu$ so that the primes detected by the sieve are heavily biased towards having $g$ as a primitive root.
\begin{lem}\label{lem:presieving} We can choose an integer $\nu$ with all of the following properties: \begin{enumerate} \item[(i)] $\nu+h_i$ is coprime to $W$ for all $1\leq i \leq k$, \item[(ii)] $\nu+h_i-1$ is coprime to $\prod_{2 < p \leq \log\log\log{N}} p$ for all $1\leq i \leq k$, \item[(iii)] The Kronecker symbol $\leg{g_0}{\nu+h_i} = -1$ for all $1 \leq i \leq k$. \end{enumerate} \end{lem} \begin{proof} Factor $g_0$ as a product $D_1 D_2 \dots D_\ell$ of coprime prime discriminants, where the \emph{prime discriminants} are the numbers $-4, -8, 8$, and $(-1)^{\frac{p-1}{2}} p$ for odd primes $p$. Reordering the factorization if necessary, we can assume all of the following: \begin{itemize}
\item If all $|D_i| \leq K$ and $g_0$ is even, then $D_1 \in \{-4, -8, 8\}$.
\item If all $|D_i| \leq K$, $g_0$ is odd, and $\ell > 1$, then $|D_1| \geq 5$.
\item If some $|D_i| > K$, then $|D_1| > K$. \end{itemize} We begin by choosing any odd integer $\nu_1$ that avoids the residue classes $-h_1, \dots, -h_k$, $1-h_1$, $\dots, 1-h_k$ modulo $p$ for each odd prime $p \leq \log\log\log{N}$ not dividing $D_1$. Note that when $p \leq K$, the only requirement on $\nu_1$ is that it avoids the residue classes $0$ and $1$ mod $p$, while when $p > K$, we are to avoid at most $2k$ of the $p > K > 2k$ residue classes modulo $p$. So such a choice of $\nu_1$ certainly exists by the Chinese remainder theorem. We choose $\nu$ to satisfy \[ \nu \equiv \nu_1 \pmod{[W/D_1, 2]}. \] To ensure (i), (ii), and (iii), it suffices to impose a further condition on $\nu$ guaranteeing \ \begin{enumerate} \item[(i$'$)] $\nu+h_i$ is coprime to all odd $p$ dividing $D_1$ for all $1 \leq i \leq k$, \item[(ii$'$)] $\nu+h_i-1$ is coprime to all odd $p$ dividing $D_1$ for all $1 \leq i \leq k$, \item[(iii$'$)] $\leg{D_1}{\nu+h_i} = -\leg{D_2 \cdots D_l}{\nu_1+h_i}$ for all $1 \leq i \leq k$. \end{enumerate} Notice that for all $1 \leq i \leq k$, we have $\leg{D_2 \cdots D_l}{\nu_1+h_i} \neq 0$ by the choice of $\nu_1$.
\subsubsection*{Case I: All $|D_i| \leq K$.} In this case, (i$'$) and (ii$'$) are satisfied as long as $\nu \not\equiv 0\text{ or }1\pmod{p}$ for any odd $p$ dividing $D_1$, while (iii$'$) is satisfied as long as \[ \leg{D_1}{\nu} = -\leg{D_2 \cdots D_l}{\nu_1}. \]
Assume first that $g_0$ is even. Then $D_1 \in \{-4, -8, 8\}$ and (i$'$) and (ii$'$) hold vacuously. Choose $\nu_2$ so that $\leg{D_1}{\nu_2} = -\leg{D_2 \cdots D_l}{\nu_1}$. We ensure (iii$'$) by selecting $\nu$ as any solution to the simultaneous congruences \begin{equation}\label{eq:simultaneous} \nu \equiv \nu_1 \pmod{[W/D_1, 2]} \quad\text{and}\quad \nu \equiv \nu_2 \pmod{D_1}.\end{equation} While the moduli here share a factor of $2$, it is clear that these congruences still admit a simultaneous solution, since the only $2$-adic information encoded by the first congruence is that $\nu$ is odd, which is certainly compatible with the second!
Now assume instead that $g_0$ is odd, so that $|D_1|$ is an odd prime. Either $|D_1|=3$ and $\ell=1$, or $|D_1| \geq 5$. If the former, then (i$'$), (ii$'$), and (iii$'$) hold upon selecting $\nu_2=2$ and choosing $\nu$ to satisfy \eqref{eq:simultaneous}. If the latter, choose $\nu_2 \not\equiv 1\pmod{D_1}$ with
$\leg{D_1}{\nu_2} = -\leg{D_2 \cdots D_l}{\nu_1}$; this is possible since that equality of Legendre symbols holds for a total of $\frac{|D_1|-1}{2} > 1$ residue classes $\nu_2 \bmod{D_1}$. Once again, choosing $\nu$ to satisfy \eqref{eq:simultaneous} completes the proof.
\subsubsection*{Case II: Some $|D_i| > K$.} In this case, $|D_1| > K$. Since $K >8$, we see that $|D_1|$ is an odd prime. To satisfy (i$'$), (ii$'$), and (iii$'$), it suffices to show that there is an integer $\nu_2 \not\equiv 1-h_1, \dots, 1-h_k \pmod{D_1}$ with
\begin{equation}\label{eq:quadressystem} \leg{\nu_2+h_i}{|D_1|} = -\leg{D_2 \cdots D_l}{\nu_1+h_i} \quad\text{for all $1 \leq i \leq k$}, \end{equation}
for in that case we can choose $\nu$ as any solution to \eqref{eq:simultaneous}. (We used here that $\leg{D_1}{\nu+h_i}=\leg{\nu+h_i}{|D_1|}$.) The integers $h_1, \dots, h_k$ are incongruent modulo $D_1$, as each nonzero difference $h_j - h_i = (j-i)K!$ has only prime factors smaller than $K$. So Lemma \ref{lem:quadres} gives that the number of $\nu_2\bmod{D_1}$ satisfying \eqref{eq:quadressystem} is at least $|D_1|/2^k - (k-1)\sqrt{|D_1|} - k$. Since $|D_1| > K = 9k^2 \cdot 4^k$, this count of solutions exceeds $k$. In particular, we can satisfy \eqref{eq:quadressystem} with $\nu_2 \not\equiv 1-h_1, \dots, 1-h_k \pmod{D_1}$. \end{proof}
Assume that $\nu$ has been chosen to satisfy the conditions of Lemma \ref{lem:presieving}. We let $R=N^{\theta}$, with $\theta$ to be specified momentarily, and we define the weights $w(n)$ exactly as in the statement of Proposition \ref{prop:main-maynard}. We let \[ \tilde{S}_{1}:= \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} w(n)\quad\text{and}\quad \tilde{S}_{2}:= \sum_{\substack{N \leq n < 2N \\ n\equiv \nu\pmod{W}}} \left(\sum_{i=1}^{k} \chi_{\tilde{\Pp}}(n+h_i)\right)w(n). \] Theorem \ref{thm:main} is a consequence of the following result, established in the next section.
\begin{prop}[assuming GRH]\label{prop:main} Fix a positive real number $\theta < \frac{1}{4}$. As $N\to\infty$, we have the same asymptotic estimates for $\tilde{S}_1$ and $\tilde{S}_2$ as those for $S_1$ and $S_2$ given in Proposition \ref{prop:main-maynard}. \end{prop}
Once Proposition \ref{prop:main} has been established, the earlier analysis we applied to Maynard's Proposition \ref{prop:main-maynard} applies, and we immediately obtain Theorem \ref{thm:main}.
\subsection{Proof of Proposition \ref{prop:main}} The $\tilde{S}_1$ estimate is established in precisely the same way as Maynard's $S_1$ estimate in Proposition \ref{prop:main-maynard}; see the proofs of Lemmas 5.1 and 6.2 in \cite{maynard14}. So we describe only the estimation of $\tilde{S}_2$. We write $\tilde{S}_2 = \sum_{m=1}^{k} \tilde{S}_2^{(m)}$, where each \[ \tilde{S}_2^{(m)} := \sum_{\substack{N \leq n < 2N \\ n\equiv \nu\pmod{W}}} \chi_{\tilde{\Pp}}(n+h_m) w(n). \] This is precisely analogous to Maynard's decomposition of $S_2$ as $\sum_{m=1}^{k} S_2^{(m)}$, where $S_2^{(m)}:= \sum_{\substack{N \leq n < 2N \\ n\equiv \nu\pmod{W}}} \chi_{\Pp}(n+h_m) w(n)$. Maynard's proof of Proposition \ref{prop:main-maynard} gives that each \[ S_2^{(m)} \sim \frac{\varphi(W)^k}{W^{k+1}} \frac{N}{\log{N}} (\log{R})^{k+1} \cdot J_k^{(m)}(F). \] So to prove Proposition \ref{prop:main}, it suffices to show that for each $m$, we have \begin{equation}\label{eq:os2} S_2^{(m)} - \tilde{S}_2^{(m)} = o\left(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^{k}\right), \end{equation} as $N\to\infty$. From now on, we think of $m$ as fixed, and we focus our energies on proving \eqref{eq:os2}.
To prepare for the proof of \eqref{eq:os2}, for each prime $q$, we let $\Pp_q^{(0)}$ denote the set of all primes $p$ satisfying \begin{equation}\label{eq:qtestfail} p\equiv 1\pmod{q}\quad\text{and}\quad g^{\frac{p-1}{q}} \equiv 1\pmod{p}. \end{equation} Let \[ \Pp_q := \Pp_q^{(0)} \setminus \bigcup_{q' < q}\Pp_{q'}^{(0)}. \] Provided that the argument is not a prime divisor of $g$, \begin{equation}\label{eq:setinclusions} 0 \leq \chi_{\Pp}- \chi_{\tilde{\Pp}} \leq \sum_{q} \chi_{\Pp_{q}}. \end{equation} Indeed, if $p$ is a prime not dividing $g$, then either $g$ is a primitive root mod $p$ or $g$ is a $q$th power residue mod $p$ for some prime $q$ dividing $p-1$. From \eqref{eq:setinclusions}, it follows immediately that \begin{equation}\label{eq:lessthansum} 0 \leq S_2^{(m)} - \tilde{S}_2^{(m)} \leq \sum_{q} \sum_{\substack{N\leq n < 2N \\ n \equiv \nu \pmod{W}}}\chi_{\Pp_{q}}(n+h_m) w(n). \end{equation}
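The dichotomy behind \eqref{eq:setinclusions} is easy to test numerically: for a prime $p$ not dividing $g$, either $g$ is a primitive root mod $p$, or the failure is witnessed by the test \eqref{eq:qtestfail} for some prime $q$ dividing $p-1$. A small illustrative check, not part of the argument:

```python
# For p not dividing g: g is a primitive root mod p iff p lies in no
# P_q^{(0)}, where membership in P_q^{(0)} means p = 1 (mod q) and
# g^((p-1)/q) = 1 (mod p).

def prime_factors(n):
    """Set of prime divisors of n (trial division)."""
    out, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            out.add(d)
            n //= d
        d += 1
    if n > 1:
        out.add(n)
    return out

def in_P_q0(g, p, q):
    # Membership in P_q^{(0)}, as in the displayed condition above.
    return (p - 1) % q == 0 and pow(g, (p - 1) // q, p) == 1

def is_primitive_root(g, p):
    return all(not in_P_q0(g, p, q) for q in prime_factors(p - 1))

# g = 2 is a primitive root mod 11 and mod 13, but not mod 7:
# 2 = 3^2 is a square mod 7, so 7 lies in P_2^{(0)}.
print(is_primitive_root(2, 11), is_primitive_root(2, 13), is_primitive_root(2, 7))
```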
We claim that the primes $q \leq \log\log\log{N}$ make no contribution to the right-hand side of \eqref{eq:lessthansum}. Indeed, suppose $p:=n+h_m$ is prime with $N \leq n < 2N$ and $n\equiv \nu\pmod{W}$. By Lemma \ref{lem:presieving}(ii), the number $p-1$ has no odd prime factors up to $\log\log\log{N}$; it follows trivially that $\chi_{\Pp_q}(p)=0$ for odd $q \leq \log\log\log{N}$. By Lemma \ref{lem:presieving}(iii), $\chi_{\Pp_2}(p) = 0$, since modulo $p$, \[ g^{\frac{p-1}{2}} \equiv \leg{g}{p} = \leg{g}{n+h_m} = \leg{g_0}{n+h_m} = -1. \] Thus, the right-hand side of \eqref{eq:lessthansum} can be rewritten as $\sideset{}{_1}\sum + \sideset{}{_2}\sum + \sideset{}{_3}\sum + \sideset{}{_4}\sum$, where the subscripts correspond to the following ranges of $q$: \begin{enumerate} \item[(1)] $\log\log\log{N} < q \leq (\log{N})^{100k}$, \item[(2)] $(\log{N})^{100k} < q \leq N^{1/2} (\log{N})^{-100k}$, \item[(3)] $N^{1/2}(\log{N})^{-100k} < q \leq N^{1/2} (\log{N})^{100k}$, \item[(4)] $q> N^{1/2} (\log{N})^{100k}$. \end{enumerate} We treat all four ranges of $q$ separately.
\subsubsection{Estimation of $\sideset{}{_2}\sum$ and $\sideset{}{_4}\sum$} We need the following lemma, which facilitates later applications of Cauchy--Schwarz.
\begin{lem}\label{lem:csprep} We have
\[ \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} w(n)^2 \ll F_{\max}^4 \frac{N}{W} (\log{R})^{19k}. \] \end{lem} \begin{proof} Let $\mathbf{d}= (d_1, \dots, d_k)$, $\textbf{e}=(e_1, \dots, e_k)$, $\textbf{f}= (f_1, \dots, f_k)$, and $\textbf{g} = (g_1, \dots, g_k)$ represent $k$-tuples of positive integers. Expanding the sum using the definition of $w(n)$ gives \[ \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} \sum_{\substack{\mathbf{d}, \mathbf{e}, \mathbf{f}, \mathbf{g} \\ [d_i, e_i, f_i, g_i] \mid n+h_i\,\forall i}} \lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \lambda_{\mathbf{f}} \lambda_{\mathbf{g}} = \sum_{\mathbf{d}, \mathbf{e}, \mathbf{f}, \mathbf{g}} \lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \lambda_{\mathbf{f}} \lambda_{\mathbf{g}} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W} \\ [d_i, e_i, f_i, g_i] \mid n+h_i\,\forall i}} 1. \] Remembering that $\lambda_{d_1, \dots, d_k}$ vanishes unless $d_1 \cdots d_k$ is prime to $W$, we see that a quadruple $\textbf{d}, \textbf{e}, \textbf{f}, \textbf{g}$ makes no contribution to the right-hand side unless the numbers $[d_i, e_i, f_i, g_i]$, for $1 \leq i \leq k$, are pairwise coprime and all coprime to $W$. In that case, the conditions on $n$ in the inner sum put $n$ in a uniquely determined congruence class modulo $W \prod_{i=1}^{k} [d_i, e_i, f_i, g_i]$. It follows that our sum is bounded above by
\[ \sum_{\mathbf{d}, \mathbf{e}, \mathbf{f}, \mathbf{g}} |\lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \lambda_{\mathbf{f}} \lambda_{\mathbf{g}}| \left(\frac{N}{W \prod_{i=1}^{k} [d_i, e_i, f_i, g_i]} + 1\right).\]
Let \begin{equation}\label{eq:rdef}r := \prod_{i=1}^{k} [d_i, e_i, f_i, g_i].\end{equation} Since $\lambda_{d_1, \dots, d_k}$ vanishes unless $d_1 \cdots d_k$ is a squarefree integer smaller than $R$, we may restrict attention to squarefree $r < R^4$. Given $r$, there are $\tau_{15k}(r)$ choices of $\textbf{d}, \textbf{e}, \textbf{f}$, and $\textbf{g}$ giving \eqref{eq:rdef}. Hence, writing $\lambda_{\max} = \max_{d_1, \dots, d_k} |\lambda_{d_1, \dots, d_k}|$, we find that
\begin{align} \sum_{\mathbf{d}, \mathbf{e}, \mathbf{f}, \mathbf{g}} |\lambda_{\mathbf{d}} \lambda_{\mathbf{e}} \lambda_{\mathbf{f}} \lambda_{\mathbf{g}}| \bigg(\frac{N}{W \prod_{i=1}^{k} [d_i, e_i, f_i, g_i]} + 1\bigg)&\leq \lambda_{\max}^4 \sum_{r < R^4} \mu^2(r)\tau_{15k}(r)\left(\frac{N}{Wr}+1\right) \notag\\
\label{eq:almostdone}&\leq \lambda_{\max}^4 \left(\frac{N}{W} + R^4\right) \sum_{r < R^4} \frac{\mu^2(r)\tau_{15k}(r)}{r}. \end{align} The remaining sum on $r$ is bounded above by $\prod_{p < R^4}(1+15k/p) \ll (\log{R})^{15k}$. Since $R= N^{\theta}$ with $\theta< \frac{1}{4}$ fixed, we get that $R^4 \ll N/W$. Finally, we recall that $\lambda_{\max} \ll F_{\max} (\log{R})^k$ (see \cite[eqs. (5.9) and (6.3)]{maynard14}). Inserting these estimates into \eqref{eq:almostdone} gives the lemma. \end{proof}
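The divisor-sum bound in the last step is an instance of the general inequality $\sum_{r < x} \mu^2(r)\tau_m(r)/r \leq \prod_{p < x}(1 + m/p)$, which follows from multiplicativity: for squarefree $r$ one has $\tau_m(r) = m^{\omega(r)}$, and expanding the product recovers every squarefree $r < x$ (and more). A small, purely illustrative Python check of this inequality, under our own naive implementations:

```python
# Sanity check of: sum over squarefree r < R of m^omega(r)/r
#                  <= product over primes p < R of (1 + m/p).
# For squarefree r, tau_m(r) = m^omega(r). Illustrative only.
def omega_and_squarefree(r):
    # Return (number of prime factors of r, whether r is squarefree).
    cnt, d, n = 0, 2, r
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return cnt, False  # repeated prime factor
            cnt += 1
        d += 1
    if n > 1:
        cnt += 1
    return cnt, True

def lhs(R, m):
    total = 0.0
    for r in range(1, R):
        w, sf = omega_and_squarefree(r)
        if sf:
            total += m**w / r
    return total

def rhs(R, m):
    prod = 1.0
    for p in range(2, R):
        w, sf = omega_and_squarefree(p)
        if sf and w == 1:  # p is prime
            prod *= 1 + m / p
    return prod
```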
\begin{proof}[Proof that $\sideset{}{_2}\sum = o\left(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^{k}\right)$] Let $\Qq$ be the union of the sets $\Pp_q$ for $(\log{N})^{100k} < q \leq N^{1/2} (\log{N})^{-100k}$. Then $\sideset{}{_2}\sum = \displaystyle\sum\nolimits_{\substack{N \leq n < 2N \\ n \equiv \nu \pmod{W}}} \chi_{\Qq}(n+h_m) w(n)$. Applying Cauchy--Schwarz and Lemma \ref{lem:csprep}, we see that \begin{equation}\label{eq:crudesum4-0} \sideset{}{_2}\sum \ll F_{\max}^2 W^{-1/2} N^{1/2} (\log{R})^{9.5k} \bigg(\sum_{\substack{N \leq n < 2N \\ n \equiv \nu \pmod{W}}} \chi_{\Qq}(n+h_m)\bigg)^{1/2}.\end{equation} The remaining sum on $n$ is certainly bounded above by the total number of primes $p \in [N,3N]$ belonging to $\Qq$. For each such $p$, we may select a $q$ with $(\log{N})^{100k} < q \leq N^{1/2} (\log{N})^{-100k}$ for which \eqref{eq:qtestfail} holds. Given $q$, we count the number of corresponding $p$ using effective Chebotarev.
Since $g$ is fixed and $q$ is large, we see that $g \not\in (\Q^{\times})^{q}$. So by a theorem of Capelli on irreducible binomials, the extension $\Q(\sqrt[q]{g})/\Q$ has degree $q$. For later use, we note that the discriminant of $\Q(\sqrt[q]{g})$ divides $(gq)^{q}$, and so the only ramified primes divide $gq$. By a theorem of Dedekind--Kummer, a prime $p \in [N,3N]$ satisfies \eqref{eq:qtestfail} precisely when $p$ splits completely in $L:=\Q(\zeta_q, \sqrt[q]{g})$. To continue, we need to know the degree of $L/\Q$. Now $\sqrt[q]{g}$ is not contained in $\Q(\zeta_q)$ --- otherwise, $\sqrt[q]{g}$ would generate a Galois extension of $\Q$, contradicting that $\Q(\sqrt[q]{g})$ contains only a single $q$th root of unity (since it can be viewed as a subfield of $\R$). So by another application of Capelli's theorem, \[ [L:\Q] = [L:\Q(\zeta_q)]\cdot [\Q(\zeta_q): \Q] = q(q-1). \] Moreover, since $q$ is the only ramified prime in $\Q(\zeta_q)/\Q$, the only primes that may ramify in $L/\Q$ all divide $gq$. By Lemma \ref{lem:discbound},
\begin{align*} \log|\Delta_L| \ll q^2\log{(|g|q)} \ll q^2 \log{N}. \end{align*} We plug this estimate into Theorem \ref{thm:serre}, taking $C$ as the conjugacy class of the identity. We find that the number of $p \in [N,3N]$ for which \eqref{eq:qtestfail} holds for a given $q$ is \[ \frac{1}{q(q-1)} \int_{N}^{3N} \frac{dt}{\log{t}} + O(N^{1/2}\log{N}). \] Summing this upper bound over primes $q$ with $(\log{N})^{100k} < q \leq N^{1/2} (\log{N})^{-100k}$, we get that the total number of these $p$ is $O(N (\log{N})^{-100k})$.
Now referring back to \eqref{eq:crudesum4-0}, we see that $\sideset{}{_2}\sum \ll F_{\max}^2 W^{-1/2} N (\log{N})^{-40k}$. But this is $o(N)$, and so certainly also $o\left(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^{k}\right)$. \end{proof}
\begin{proof}[Proof that $\sideset{}{_4}\sum = o\left(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^{k}\right)$] We proceed as above, but now with $\Qq$ equal to the union of the sets $\Pp_q$ for $q > N^{1/2} (\log{N})^{100k}$. We will show that $\#\Qq \cap [N,3N] \ll N (\log{N})^{-200k}$. By the previous Cauchy-ing argument, this is (more than) enough. If $p \in \Qq \cap [N,3N]$, then the order of $g$ modulo $p$, call it $\ell$, divides $(p-1)/q$ for some $q > N^{1/2} (\log{N})^{100k}$. In particular, $\ell < 3N^{1/2} (\log{N})^{-100k}$. Since $g^\ell-1$ has only $O(\ell)$ prime factors, summing on $\ell < 3N^{1/2} (\log{N})^{-100k}$ shows that there are $O(N (\log{N})^{-200k})$ possibilities for $p$. \end{proof}
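The final counting step rests on the fact that if $g$ has order $\ell$ modulo $p$, then $p \mid g^{\ell} - 1$, so for fixed $g$ each value of $\ell$ accounts for only $O(\ell)$ primes. For concreteness, here is a naive Python sketch (our own illustration, not the paper's) of the multiplicative order:

```python
# Naive multiplicative order of g modulo a prime p:
# the least ell >= 1 with g^ell ≡ 1 (mod p).
# Since p | g^ell - 1, at most O(ell) primes share a given order ell.
def mult_order(g, p):
    x, ell = g % p, 1
    while x != 1:
        x = (x * g) % p
        ell += 1
    return ell

# e.g. 2 has order 5 modulo 31, and indeed 31 | 2^5 - 1.
```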
\subsubsection{Estimation of $\sideset{}{_3}\sum$} For each prime $q$, we let $\A_q$ denote the set of natural numbers $n \equiv 1\pmod{q}$. We estimate $\sideset{}{_3}\sum$ using the trivial bound $\chi_{\Pp_q} \leq \chi_{\A_q}$. To save space, write $\I:= (N^{1/2} (\log{N})^{-100k}, N^{1/2} (\log{N})^{100k}]$. Then \[ \sideset{}{_3}\sum \leq \sum_{q \in \I} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} \chi_{\A_q}(n+h_m) w(n). \] Expanding out the right-hand side yields \begin{equation}\label{eq:threesums} \sum_{q \in \I} \sum_{\substack{d_1, \dots, d_k \\ e_1, \dots, e_k}} \lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k} \sum_{\substack{N \leq n < 2N\\ n \equiv \nu\pmod{W} \\ [d_i, e_i] \mid n+h_i\,\forall i}} \chi_{\A_q}(n+h_m).\end{equation} We can assume $d_1 \cdots d_k$ is a squarefree integer coprime to $W$ and not exceeding $R$, since otherwise $\lambda_{d_1, \dots, d_k}=0$. A similar assumption can be made for $e_1 \cdots e_k$. Since $q \in \I$, it follows that $q$ is coprime to each $d_i$, each $e_i$, and $W$. Now the innermost sum in \eqref{eq:threesums} vanishes unless $[d_1, e_1], [d_2, e_2], \dots, [d_k, e_k]$, and $W$ are pairwise coprime. Using a $'$ to denote this restriction on the $d_i$ and $e_i$, we get that \begin{align*} \sum_{q \in \I} \sum_{\substack{d_1, \dots, d_k \\ e_1, \dots, e_k}} \lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k} &\sum_{\substack{N \leq n < 2N\\ n \equiv \nu\pmod{W} \\ [d_i, e_i] \mid n+h_i\,\forall i}} \chi_{\A_q}(n+h_m)\\&= \sum_{q \in \I} \sideset{}{'}\sum_{\substack{d_1, \dots, d_k \\ e_1, \dots, e_k}} \lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k} \left(\frac{N}{qW\prod_{i=1}^{k}[d_i, e_i]} + O(1)\right). \end{align*} The error here is
\begin{align*} \ll \left(\sum_{q \in \I} 1\right) \left(\sum_{d_1, \dots, d_k} |\lambda_{d_1, \dots, d_k}|\right)^2 &\ll N^{1/2} (\log{N})^{100k} \cdot \lambda_{\max}^2 \left(\sum_{r < R}\mu^2(r) \tau_k(r)\right)^2. \end{align*} Recalling that $\lambda_{\max} \ll F_{\max} (\log{R})^k$ and that $\sum_{r < R} \tau_k(r) \ll R (\log{R})^{k-1}$, our final $O$ error term is $O(F_{\max}^2 \cdot N^{1/2} R^2 \cdot (\log{N})^{104k})$. Since $R=N^{\theta}$ with $\theta < \frac{1}{4}$, this error is $o(N)$ and so is negligible for us. We now turn attention to the main term, which has the form \[ \Bigg(\sum_{q\in \I} \frac{1}{q}\Bigg) \Bigg(\frac{N}{W}\sideset{}{'}\sum_{\substack{d_1, \dots, d_k \\ e_1, \dots, e_k}} \frac{\lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k}}{\prod_{i=1}^{k}[d_i, e_i]}\Bigg).\] The first factor here is $O(\frac{\log\log{N}}{\log{N}})$, and so in particular is $o(1)$. Maynard's analysis (see the proofs of \cite[Lemmas 5.1, 6.2]{maynard14}) shows that the second factor here satisfies the asymptotic formula asserted for $S_1$ in Proposition \ref{prop:main-maynard}. Hence, $\sideset{}{_3}\sum = o(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^{k})$, as desired.
\subsubsection{Estimation of $\sideset{}{_1}\sum$} For this case, let $\I:= (\log\log\log{N}, (\log{N})^{100k}]$. Using the bound $\chi_{\Pp_q} \leq \chi_{\Pp_q^{(0)}}$, we get that \[ \sideset{}{_1}\sum \leq \sum_{q \in \I} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} \chi_{\Pp^{(0)}_q}(n+h_m) w(n). \] Expanding out the right-hand side gives \begin{equation}\label{eq:expandsum2} \sum_{q \in \I} \sum_{\substack{d_1, \dots, d_k\\ e_1, \dots, e_k}} \lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W} \\ [d_i, e_i] \mid n+h_i\,\forall i}} \chi_{\Pp^{(0)}_q}(n+h_m). \end{equation}
The inner sum can be written as a sum over a single residue class modulo $f:=W \prod_{i=1}^{k} [d_i, e_i]$, provided that $W, [d_1, e_1], \dots, [d_k, e_k]$ are pairwise coprime; otherwise we get no contribution. We also need that $n+h_m$ lies in a residue class coprime to $f$, which happens precisely when $d_m=e_m=1$. Also, $\chi_{\Pp^{(0)}_q}(n+h_m)$ vanishes unless $q \mid n+h_m-1$, and this implies that the inner sum in \eqref{eq:expandsum2} vanishes unless $q$ is coprime to each $d_i$ and $e_i$. Indeed, if $q$ divides $d_i$ or $e_i$ without the inner sum vanishing, then $q \mid h_m-h_i-1$. But that divisibility cannot hold for $q \in \I$, since $0 < |h_m-h_i-1|< k \cdot K!$.
Thus, we only see a contribution to \eqref{eq:expandsum2} if $[d_1, e_1]$, $[d_2, e_2]$, \dots, $[d_k, e_k]$, $W$, and $q$ are pairwise coprime. Under these conditions, we claim that \begin{multline}\label{eq:chebestimate} \sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W} \\ [d_i, e_i] \mid n+h_i\,\forall i}} \chi_{\Pp^{(0)}_q}(n+h_m) \\ = \frac{1}{q(q-1) \varphi(W) \prod_{i=1}^{k} \varphi([d_i, e_i])} \int_{N+h_m}^{2N+h_m} \frac{dt}{\log{t}} + O(N^{1/2}\log{N}). \end{multline} To see this, let $p:=n+h_m$. Then the prime $p \in [N+h_m, 2N+h_m)$ makes a contribution to the left-hand sum precisely when $\Frob_{p}$ is a certain element of $\Gal(\Q(\zeta_{f})/\Q)$ --- determined by the congruence conditions modulo the $[d_i,e_i]$ and $W$ --- \emph{and} when $p$ splits completely in $\Q(\zeta_q, \sqrt[q]{g})$. Now $\sqrt[q]{g} \notin \Q(\zeta_{qf})$, since $\Q(\sqrt[q]{g})$ is not a Galois extension of $\Q$. Thus, letting $L:= \Q(\zeta_{qf}, \sqrt[q]{g})$, we find that \begin{align*} [L:\Q] &= [L: \Q(\zeta_{qf})] [\Q(\zeta_{qf}): \Q] \\&= q \cdot\varphi(qf) = q(q-1) \varphi(W) \prod_{i=1}^{k} \varphi([d_i, e_i]). \end{align*} Hence, $\Q(\zeta_{f})$ and $\Q(\zeta_q, \sqrt[q]{g})$ are linearly disjoint extensions of $\Q$ with compositum $L$. Our conditions on $p$ amount to placing $\Frob_p$ in a certain uniquely determined conjugacy class of size $1$ in $\Gal(L/\Q)$. Since the only primes that ramify in $L$ divide $qfg$, Lemma \ref{lem:discbound} gives that
\[ \log |\Delta_L| \ll [L:\Q](\log{(qfg)} + \log[L:\Q]) \ll [L:\Q]\log{N}. \] Inserting this estimate into Theorem \ref{thm:serre} now yields \eqref{eq:chebestimate}.
Returning now to \eqref{eq:expandsum2}, we see that the error term in \eqref{eq:chebestimate} yields a total error of size
\begin{align*} \ll N^{1/2}\log{N} \left(\sum_{q \in \I} 1\right) \left(\sum_{d_1, \dots, d_k} |\lambda_{d_1, \dots, d_k}|\right)^2 &\ll N^{1/2} (\log{N})^{100k+1} \cdot \lambda_{\max}^2 \left(\sum_{r < R}\tau_k(r)\right)^2\\ &\ll F_{\max}^2 \cdot N^{1/2} R^2 \cdot (\log{N})^{104k+1}.\end{align*} This is $o(N)$ and so is again negligible for us. Letting $X_N:= \int_{N+h_m}^{2N+h_m} dt/\log{t}$, the main term has the shape \begin{equation}\label{eq:complicatedmainterm} \sum_{q \in \I} \frac{1}{q(q-1)}\left(\frac{X_N}{\varphi(W)} \sideset{}{'}\sum_{\substack{d_1, \dots, d_k \\ e_1, \dots, e_k \\ d_m=e_m=1}} \frac{\lambda_{d_1, \dots, d_k} \lambda_{e_1, \dots, e_k}}{\prod_{i=1}^{k}\varphi([d_i, e_i])} \right).\end{equation} Here the $'$ on the sum indicates that $W, [d_1, e_1], \dots, [d_k, e_k]$, and $q$ are pairwise coprime. Owing to the support of the $\lambda$'s, this restriction on the sum has the same effect as requiring that $(d_i, e_j)=1$ for all $i \neq j$ and that $(d_i,q)=(e_j,q)=1$ for all $1\leq i, j \leq k$. We incorporate the restrictions that $(d_i, e_j)=1$ by multiplying through by $\sum_{s_{i,j} \mid d_i, e_j} \mu(s_{i,j})$ for $i \neq j$. Similarly, we incorporate the restrictions that $(d_i,q)=(e_j,q)=1$ by multiplying through by $\sum_{\delta_i \mid d_i, q} \mu(\delta_i)$ and $\sum_{\epsilon_j \mid e_j, q} \mu(\epsilon_j)$, for all pairs of $i$ and $j$. Let $g$ be the completely multiplicative function defined by $g(p)=p-2$ for all primes $p$, and note that \[ \frac{1}{\varphi([d_i,e_i])} = \frac{1}{\varphi(d_i) \varphi(e_i)}\sum_{u_i \mid d_i, e_i} g(u_i) \] for squarefree $d_i$ and $e_i$. 
This allows us to rewrite the parenthesized portion of \eqref{eq:complicatedmainterm} as \begin{multline}\label{eq:verycomplicated} \frac{X_N}{\varphi(W)}\sum_{u_1, \dots, u_k} \left(\prod_{i=1}^{k} g(u_i)\right) \sideset{}{^*}\sum_{s_{1,2}, \dots, s_{k, k-1}} \left(\prod_{\substack{1 \leq i, j \leq k \\ i \neq j}} \mu(s_{i,j})\right) \sum_{\substack{\delta_1, \dots, \delta_k \mid q \\ \epsilon_1, \dots, \epsilon_k \mid q}} \left(\prod_{i=1}^{k} \mu(\delta_i)\prod_{j=1}^{k} \mu(\epsilon_j)\right) \\ \times \sum_{\substack{d_1,\dots, d_k \\ e_1, \dots, e_k \\ u_i \mid d_i, e_i\,\forall i \\ s_{i,j}\mid d_i, e_j\,\forall i\neq j\\ \delta_i \mid d_i, \epsilon_j \mid e_j\, \forall i, j \\ d_m=e_m=1}} \frac{\lambda_{d_1,\dots, d_k} \lambda_{e_1, \dots, e_k}}{\prod_{i=1}^{k}\varphi(d_i)\varphi(e_i)},\end{multline} where the $*$ on the sum indicates that $s_{i,j}$ is restricted to be coprime to $u_i$, $u_j$, $s_{i,a}$, and $s_{b,j}$ for all $a \neq i$ and $b\neq j$. (The other values of $s_{i,j}$ make no contribution.) 
Introducing the new variables \[ y_{r_1, \dots, r_k}^{(m)} := \left(\prod_{i=1}^{k} \mu(r_i) g(r_i)\right) \sum_{\substack{d_1, \dots, d_k \\ r_i \mid d_i\,\forall i\\ d_m=1}} \frac{\lambda_{d_1, \dots, d_k}}{\prod_{i=1}^{k} \varphi(d_i)}, \] we may rewrite \eqref{eq:verycomplicated} as \begin{multline*} \frac{X_N}{\varphi(W)} \sum_{u_1, \dots, u_k} \left(\prod_{i=1}^{k} g(u_i)\right) \sideset{}{^*}\sum_{s_{1,2}, \dots, s_{k, k-1}} \left(\prod_{\substack{1 \leq i, j \leq k \\ i \neq j}} \mu(s_{i,j})\right) \sum_{\substack{\delta_1, \dots, \delta_k \mid q \\ \epsilon_1, \dots, \epsilon_k \mid q}} \left(\prod_{i=1}^{k} \mu(\delta_i) \prod_{j=1}^{k}\mu(\epsilon_j)\right) \\ \times \Bigg(\prod_{i=1}^{k} \frac{\mu(a_i)}{g(a_i)}\Bigg) \Bigg(\prod_{j=1}^{k} \frac{\mu(b_j)}{g(b_j)}\Bigg) y_{a_1, \dots, a_k}^{(m)} y_{b_1, \dots, b_k}^{(m)}, \end{multline*} where $a_i = \lcm[u_i \prod_{j \neq i} s_{i,j}, \delta_i]$ and $b_j = \lcm[u_j \prod_{i \neq j} s_{i,j}, \epsilon_j]$. Define $\delta_i' \in \{1,q\}$ and $\epsilon_j' \in \{1, q\}$ by the equations \[ a_i = \Bigg(u_i \prod_{j \neq i} s_{i,j}\Bigg) \delta_i', \qquad b_j = \Bigg(u_j \prod_{i \neq j} s_{i,j}\Bigg) \epsilon_j'. \] Exploiting coprimality, we can write $\mu(a_i) = \left(\mu(u_i) \prod_{j \neq i} \mu(s_{i,j})\right) \mu(\delta_i')$, and similarly for $\mu(b_j)$, $g(a_i)$, and $g(b_j)$. This transforms \eqref{eq:verycomplicated} into
\begin{multline*} \frac{X_N}{\varphi(W)} \sum_{u_1, \dots, u_k} \left(\prod_{i=1}^{k} \frac{\mu(u_i)^2}{g(u_i)} \right) \sideset{}{^*}\sum_{s_{1,2}, \dots, s_{k, k-1}}\left(\prod_{\substack{1 \leq i, j \leq k\\i \neq j}} \frac{\mu(s_{i,j})}{g(s_{i,j})^2}\right) \\ \times \sum_{\substack{\delta_1, \dots, \delta_k \mid q \\ \epsilon_1, \dots, \epsilon_k \mid q}}\left(\prod_{i=1}^{k} \frac{\mu(\delta_i)\mu(\delta_i')}{g(\delta_i')} \prod_{j=1}^{k}\frac{\mu(\epsilon_j) \mu(\epsilon_j')}{g(\epsilon_j')}\right) y_{a_1, \dots, a_k}^{(m)} y_{b_1, \dots, b_k}^{(m)}. \end{multline*} Let $y_{\max}^{(m)} = \max_{r_1, \dots, r_k} |y_{r_1, \dots, r_k}^{(m)}|$. From \cite[eq. (6.10)]{maynard14}, we have $y_{\max}^{(m)} \ll F_{\max} \frac{\varphi(W)}{W} \log{R}$. Inserting these bounds into the previous display, we find that \eqref{eq:verycomplicated} is \begin{multline*} \ll \frac{X_N}{\varphi(W)}\bigg(\sum_{\substack{u < R \\ \gcd(u,W)=1}}\frac{\mu(u)^2}{g(u)}\bigg)^{k-1} \left(\sum_{s} \frac{\mu(s)^2}{g(s)^2}\right)^{k(k-1)} {y_{\max}^{(m)}}^2 \\ \ll F_{\max}^2 \cdot \frac{X_N}{\varphi(W)} \left(\frac{\varphi(W)}{W}\right)^{k+1} (\log{R})^{k+1} \ll F_{\max}^2 \left(\frac{\varphi(W)^k}{W^{k+1}}\right) N (\log{N})^k. \end{multline*} We used here that there are only $O(1)$ possibilities for the $\delta_i$ and $\epsilon_j$, and that for each of these, $\prod_{i}\frac{1}{g(\delta_i')} \prod_{j} \frac{1}{g(\epsilon_j')} \le 1$. Referring back to \eqref{eq:complicatedmainterm}, we see that our original main term contributes \[ \ll F_{\max}^2 \left(\frac{\varphi(W)^k}{W^{k+1}}\right) N (\log{N})^k \sum_{q \in \I} \frac{1}{q(q-1)} = o\left(\frac{\varphi(W)^k}{W^{k+1}} N (\log{N})^k\right), \] as desired.
\begin{remark} The truth of Theorem \ref{thm:main} could also have been predicted on heuristic grounds. Indeed, there are well-known heuristics for Artin's primitive root conjecture, suggesting even the `correct' value of $c_g$ (see \cite[\S\S2--5]{moree12}), as well as heuristics for the prime $k$-tuples conjecture (see, for instance, \cite[pp. 14--15]{CP05}), and these can be fitted together. As an example, this combined heuristic suggests that the count of twin prime pairs $p, p+2$ with $p \leq x$ and with $2$ a primitive root of both $p$ and $p+2$ should be approximately \[ \mathfrak{S} \int_{2}^{x} \frac{dt}{(\log{t})^2}, \quad\text{where}\quad \mathfrak{S} := \frac{1}{4} \prod_{p > 3} \left(1-\frac{3}{(p-1)^2}\right). \] Quantitative conjectures of this kind, but in the context of primes represented by a single irreducible polynomial rather than primes produced by linear forms, appear in recent work of Moree \cite{moree07} and of Akbary and Scholten \cite{AS14}. \end{remark}
\section{Concluding remarks} We conclude with a proof of the following result, which seems of independent interest:
\begin{thm}[conditional on GRH]\label{thm:consecutive} Fix an integer $g\neq -1$ and not a square. For every positive integer $m$, there are $m$ consecutive primes all of which possess $g$ as a primitive root. \end{thm}
Theorem \ref{thm:consecutive} might be compared with Shiu's celebrated result \cite{shiu00} that each coprime residue class $a\bmod{q}$ contains arbitrarily long runs of consecutive primes. Our proof of Theorem \ref{thm:consecutive} is similar in spirit to a short proof of Shiu's theorem recently given by Banks, Freiberg, and Turnage-Butterbaugh \cite{BFTB14}.
It will be useful to first translate the proof of Theorem \ref{thm:main} into probabilistic terms. Let $k$ be a fixed positive integer, and let $h_1, \dots, h_k$ be given by \eqref{eq:hidef}. We view the set of $n \in [N,2N)$ with $n \equiv \nu\pmod{W}$ as a finite probability space where the probability mass at each $n_0$ is given by \[ w(n_0)/\sum_{\substack{N \leq n < 2N \\ n \equiv \nu\pmod{W}}} w(n). \] Here the weights $w(n)$ are assumed to be of the form specified in Proposition \ref{prop:main-maynard}. Introduce the random variables \[ X := \sum_{i=1}^{k} \chi_{\Pp}(n+h_i)\quad\text{and}\quad Y:=\sum_{i=1}^{k} \chi_{\Pp\setminus \tilde{\Pp}}(n+h_i). \] Then $\E[X]= S_2/S_1$. Given suitable parameters $F$ and $\theta$, Proposition \ref{prop:main-maynard} gives us the limiting value of $\E[X]$ as $N\to\infty$. Combining Propositions \ref{prop:main-maynard} and \ref{prop:mklower}, we see that for $k$ large enough in terms of $m$, we can choose parameters so this limiting value exceeds $m-1$. On the other hand, it was shown in \S\ref{sec:proof} that (with the same choice of parameters) $\E[Y] = o(1)$ as $N\to\infty$. Thus, $\E[X-Y] > m-1$ for all large $N$. But $X-Y = \sum_{i=1}^{k} \chi_{\tilde{\Pp}}(n+h_i)$. Hence, for some $n \in [N,2N)$, the list $n+h_1, \dots, n+h_k$ contains at least $m$ primes having $g$ as a primitive root. Theorem \ref{thm:main} follows, with $C_m = h_k - h_1$.
We now present the minor variation of this argument needed to establish Theorem \ref{thm:consecutive}.
\begin{proof}[Proof of Theorem \ref{thm:consecutive}] Given $m$, we fix a large enough value of $k$ (and parameters $F, \theta$) so that the limiting value of $\E[X]$ exceeds $m-1$. Then for all large $N$, \[ \Prob(X \geq m) \geq \E\left[\frac{X-(m-1)}{k}\right] = \frac{1}{k} (\E[X]-(m-1)) \gg 1. \] Note that $\Prob(Y > 0) \leq \E[Y]= o(1)$, as $N\to\infty$. So for large $N$, there is a positive probability that both $X \geq m$ and $Y=0$. This allows us to select $n \in [N,2N)$ with $n \equiv \nu\pmod{W}$ satisfying \begin{enumerate} \item[(i)] at least $m$ of $n+h_1, \dots, n+h_k$ are prime, \item[(ii)] all of the primes among $n+h_1, \dots, n+h_k$ possess $g$ as a primitive root. \end{enumerate} We will argue momentarily that we can also assume \begin{itemize} \item[(iii)] the only primes in the interval $[n+h_1, n+h_k]$ are the primes in the list $n+h_1, \dots, n+h_k$. \end{itemize} From (i), (ii), and (iii), we see that the set of primes in $[n+h_1, n+h_k]$ contains at least $m$ elements, all of which have $g$ as a primitive root. Theorem \ref{thm:consecutive} follows.
In order to show we may assume (iii), we tweak the choice of the residue class $\nu \bmod{W}$ from which $n$ is sampled. In the proof of Lemma \ref{lem:presieving}, we chose $\nu_1$ as any odd integer avoiding $-h_1, \dots, -h_k$, $1-h_1, \dots, 1-h_k$ modulo $p$, for all odd $p\leq\log\log\log{N}$ not dividing $K$. We now add an extra condition on $\nu_1$. Choose distinct primes $p^{(h)} \in [\frac{1}{2}\log\log\log{N},\log\log\log N)$ for all even $h \in [h_1,h_k]\setminus \Hh$. We add the requirement that $\nu_1 \equiv -h \pmod{p^{(h)}}$ for each such $h$. This is consistent with our earlier restrictions, since $h$ is not congruent modulo $p^{(h)}$ to any of $h_1, \dots, h_k$ (since $h \not\in\Hh$) or to any of $h_1-1, \dots, h_k-1$ (since $h$ and the $h_i$ are all even). Using the resulting value of $\nu$ from Lemma \ref{lem:presieving}, we see that for even $h \in [h_1,h_k]\setminus\Hh$, we have $p^{(h)} \mid n+h$ whenever $n\equiv \nu\pmod{W}$. For all odd $h \in [h_1,h_k]$, we have trivially that $2 \mid n+h$ whenever $n \equiv \nu\pmod{W}$. Thus, $n+h$ is composite if $h \in [h_1,h_k] \setminus \Hh$, and so (iii) holds.\end{proof}
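The probabilistic inequality used at the start of the preceding proof, namely $\Prob(X \geq m) \geq \E\left[\frac{X-(m-1)}{k}\right]$ for an integer-valued random variable with $0 \leq X \leq k$, follows from the pointwise bound $(x-(m-1))/k \leq \mathbf{1}_{x \geq m}$. A toy Monte Carlo check in Python (illustrative only, with a uniform distribution standing in for $X$):

```python
import random

# Pointwise: for integers 0 <= x <= k, (x - (m-1))/k is <= 1 when x >= m
# and <= 0 otherwise. Taking expectations gives
#   Prob(X >= m) >= (E[X] - (m-1)) / k.
# Toy check of this bound against an empirical distribution.
def check(k, m, trials=10000, seed=1):
    rng = random.Random(seed)
    xs = [rng.randint(0, k) for _ in range(trials)]
    ex = sum(xs) / trials
    p_at_least_m = sum(1 for x in xs if x >= m) / trials
    return p_at_least_m, (ex - (m - 1)) / k
```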
\end{document}
Kip Thorne
Kip Stephen Thorne (born June 1, 1940) is an American theoretical physicist known for his contributions in gravitational physics and astrophysics. A longtime friend and colleague of Stephen Hawking and Carl Sagan, he was the Feynman Professor of Theoretical Physics at the California Institute of Technology (Caltech) until 2009[3] and is one of the world's leading experts on the astrophysical implications of Einstein's general theory of relativity. He continues to do scientific research and scientific consulting, most notably for the Christopher Nolan film Interstellar.[4][5] Thorne was awarded the 2017 Nobel Prize in Physics along with Rainer Weiss and Barry C. Barish "for decisive contributions to the LIGO detector and the observation of gravitational waves".[6][7][8][9]
Thorne in August 2007

Born: Kip Stephen Thorne, June 1, 1940, Logan, Utah, U.S.
Education: California Institute of Technology (B.S.); Princeton University (M.S., Ph.D.)
Known for: Thorne-Żytkow object; Roman arch; Thorne-Hawking-Preskill bet
Spouses: Linda Jean Peterson; Carolee Joyce Winstein
Awards: Lilienfeld Prize (1996); Albert Einstein Medal (2009)[1]; Special Breakthrough Prize in Fundamental Physics (2016); Gruber Prize in Cosmology (2016); Shaw Prize (2016); Kavli Prize (2016); Harvey Prize (2016); Princess of Asturias Award (2017); Nobel Prize in Physics (2017); Lewis Thomas Prize (2018)
Fields: Gravitational physics
Thesis: Geometrodynamics of cylindrical systems (1965)
Doctoral advisor: John Archibald Wheeler
Doctoral students: William L. Burke,[2] Carlton M. Caves, Lee Samuel Finn, Sándor J. Kovács, David L. Lee, Don N. Page, William H. Press, Richard H. Price, Bernard F. Schutz, Saul Teukolsky, Clifford Martin Will
Kip S. Thorne during Nobel Prize press conference in Stockholm, December 2017
Life and career
Discussion in the main lecture hall at the École de Physique des Houches (Les Houches Physics School), 1972. From left, Yuval Ne'eman, Bryce DeWitt, Thorne, Demetrios Christodoulou.
Thorne was born on June 1, 1940, in Logan, Utah. His father, D. Wynne Thorne (1908–1979), was a professor of soil chemistry at Utah State University, and his mother, Alison (née Comish; 1914–2004), was an economist and the first woman to receive a Ph.D. in the discipline from Iowa State College.[10][11] Raised in an academic environment, two of his four siblings also became professors.[12][13] Thorne's parents were members of The Church of Jesus Christ of Latter-day Saints (Mormons) and raised Thorne in the LDS faith, though he now describes himself as atheist. Regarding his views on science and religion, Thorne has stated: "There are large numbers of my finest colleagues who are quite devout and believe in God .... There is no fundamental incompatibility between science and religion. I happen to not believe in God."[14]
Thorne rapidly excelled at academics early in life, winning recognition in the Westinghouse Science Talent Search as a senior at Logan High School.[15] He received his B.S. degree from the California Institute of Technology (Caltech) in 1962, and his Ph.D. from Princeton University in 1965 under the supervision of John Archibald Wheeler with a doctoral dissertation entitled "Geometrodynamics of Cylindrical Systems".[16]
Thorne returned to Caltech as an associate professor in 1967 and became a professor of theoretical physics in 1970, becoming one of the youngest full professors in the history of Caltech at age 30. He became the William R. Kenan, Jr. Professor in 1981, and the Feynman Professor of Theoretical Physics in 1991. He was an adjunct professor at the University of Utah from 1971 to 1998 and Andrew D. White Professor at Large at Cornell University from 1986 to 1992.[17] In June 2009, he resigned his Feynman Professorship (he is now the Feynman Professor of Theoretical Physics, Emeritus) to pursue a career of writing and movie making. His first film project was Interstellar, on which he worked with Christopher Nolan and Jonathan Nolan.[3]
Throughout the years, Thorne has served as a mentor and thesis advisor for many leading theorists who now work on observational, experimental, or astrophysical aspects of general relativity. Approximately 50 physicists have received Ph.D.s at Caltech under Thorne's personal mentorship.[3]
Thorne is known for his ability to convey the excitement and significance of discoveries in gravitation and astrophysics to both professional and lay audiences. His presentations on subjects such as black holes, gravitational radiation, relativity, time travel, and wormholes have been included in PBS shows in the U.S. and on the BBC in the United Kingdom.
Thorne and Linda Jean Peterson married in 1960. Their children are Kares Anne and Bret Carter, an architect. Thorne and Peterson divorced in 1977. Thorne and his second wife, Carolee Joyce Winstein, a professor of biokinesiology and physical therapy at USC, married in 1984.[18]
Research
Thorne in 1972
Thorne's research has principally focused on relativistic astrophysics and gravitation physics, with emphasis on relativistic stars, black holes and especially gravitational waves.[3] He is perhaps best known to the public for his controversial theory that wormholes can conceivably be used for time travel.[19] However, Thorne's scientific contributions, which center on the general nature of space, time, and gravity, span the full range of topics in general relativity.
Gravitational waves and LIGO
Thorne's work has dealt with the prediction of gravitational wave strengths and their temporal signatures as observed on Earth. These "signatures" are of great relevance to LIGO (Laser Interferometer Gravitational Wave Observatory), a multi-institution gravitational wave experiment for which Thorne has been a leading proponent – in 1984, he cofounded the LIGO Project (the largest project ever funded by the NSF[20]) to discern and measure any fluctuations between two or more 'static' points; such fluctuations would be evidence of gravitational waves, as calculations describe. A significant aspect of his research is developing the mathematics necessary to analyze these objects.[21] Thorne also carries out engineering design analyses for features of the LIGO that cannot be developed on the basis of experiment and he gives advice on data analysis algorithms by which the waves will be sought. He has provided theoretical support for LIGO, including identifying gravitational wave sources that LIGO should target, designing the baffles to control scattered light in the LIGO beam tubes, and – in collaboration with Vladimir Braginsky's (Moscow, Russia) research group – inventing quantum nondemolition designs for advanced gravity-wave detectors and ways to reduce the most serious kind of noise in advanced detectors: thermoelastic noise. With Carlton M. Caves, Thorne invented the back-action-evasion approach to quantum nondemolition measurements of the harmonic oscillators – a technique applicable both in gravitational wave detection and quantum optics.[3]
On February 11, 2016, a team of four physicists[a] representing the LIGO Scientific Collaboration announced that in September 2015, LIGO recorded the signature of two black holes colliding 1.3 billion light-years away. This recorded detection was the first direct observation of the fleeting chirp of a gravitational wave and confirmed an important prediction of Einstein's general theory of relativity.[22][23][24][25][26]
Black hole cosmology
Main article: Hoop conjecture
A cylindrical bundle of magnetic field lines
While he was studying for his Ph.D. at Princeton University, his mentor John Wheeler gave him a problem to think over: find out whether or not a cylindrical bundle of repulsive magnetic field lines will implode under its own attractive gravitational force. After several months wrestling with the problem, he proved that it was impossible for cylindrical magnetic field lines to implode.[27]:262–265
Why is it that a cylindrical bundle of magnetic field lines will not implode, while a spherical star will implode under its own gravitational force? Thorne explored the theoretical ridge between the two phenomena and eventually found that gravity can overcome all interior pressure only when an object has been compressed in all directions. To express this realization, he proposed his hoop conjecture, which describes when an imploding star becomes a black hole: any object of mass $M$ around which a hoop of circumference $\frac{4\pi GM}{c^{2}}$ can be placed and spun must be a black hole.[27]:266–267[28]:189–190
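As a rough numerical illustration of the critical circumference, the sketch below evaluates 4πGM/c² for a one-solar-mass object and checks that it equals 2π times the Schwarzschild radius, since the critical hoop is just the horizon circumference. The constant values and helper names are illustrative, not drawn from this article.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
C_LIGHT = 2.998e8   # speed of light, m/s (assumed value)
M_SUN = 1.989e30    # solar mass, kg (assumed value)

def critical_circumference(mass_kg: float) -> float:
    """Hoop-conjecture critical circumference 4*pi*G*M/c^2, in metres."""
    return 4 * math.pi * G * mass_kg / C_LIGHT**2

def schwarzschild_circumference(mass_kg: float) -> float:
    """Circumference 2*pi*r_s of the horizon, with r_s = 2*G*M/c^2."""
    r_s = 2 * G * mass_kg / C_LIGHT**2
    return 2 * math.pi * r_s

c_hoop = critical_circumference(M_SUN)
print(f"critical hoop for 1 solar mass: {c_hoop / 1e3:.1f} km")  # ~18.6 km
# The two expressions are algebraically identical: 4*pi*G*M/c^2 == 2*pi*(2*G*M/c^2).
assert math.isclose(c_hoop, schwarzschild_circumference(M_SUN))
```

So a solar mass must be squeezed inside a hoop of roughly 18.6 km circumference, in every orientation, before it becomes a black hole.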
As a tool to be used in both enterprises, astrophysics and theoretical physics, Thorne and his students have developed an unusual approach, called the "membrane paradigm", to the theory of black holes and used it to clarify the "Blandford-Znajek" mechanism by which black holes may power some quasars and active galactic nuclei.[27]:405–411
Thorne has investigated the quantum statistical mechanical origin of the entropy of a black hole. With his postdoc Wojciech Zurek, he showed that the entropy of a black hole is the logarithm of the number of ways that the hole could have been made.[27]:445–446
With Igor Novikov and Don Page, he developed the general relativistic theory of thin accretion disks around black holes, and using this theory he deduced that with a doubling of its mass by such accretion a black hole will be spun up to 0.998 of the maximum spin allowed by general relativity, but not any farther. This is probably the maximum black-hole spin allowed in nature.[3]
Wormholes and time travel
A wormhole is a shortcut connecting two separate regions of space. In the figure, the green line shows the short path through the wormhole and the red line shows the long path through normal space.
Thorne and his co-workers at Caltech conducted scientific research on whether the laws of physics permit space and time to be multiply connected (can there exist classical, traversable wormholes and "time machines"?).[29] With Sung-Won Kim, Thorne identified a universal physical mechanism (the explosive growth of vacuum polarization of quantum fields) that may always prevent spacetime from developing closed timelike curves (i.e., prevent backward time travel).[30]
With Mike Morris and Ulvi Yurtsever, he showed that traversable wormholes can exist in the structure of spacetime only if they are threaded by quantum fields in quantum states that violate the averaged null energy condition (i.e. have negative renormalized energy spread over a sufficiently large region).[31] This has triggered research to explore the ability of quantum fields to possess such extended negative energy. Recent calculations by Thorne indicate that simple masses passing through traversable wormholes could never engender paradoxes – there are no initial conditions that lead to paradox once time travel is introduced. If his results can be generalized, they would suggest that none of the supposed paradoxes formulated in time travel stories can actually be formulated at a precise physical level: that is, that any situation in a time travel story turns out to permit many consistent solutions.[citation needed]
Relativistic stars, multipole moments and other endeavors
With Anna Żytkow, Thorne predicted the existence of red supergiant stars with neutron-star cores (Thorne–Żytkow objects).[32] He laid the foundations for the theory of pulsations of relativistic stars and the gravitational radiation they emit. With James Hartle, Thorne derived from general relativity the laws of motion and precession of black holes and other relativistic bodies, including the influence of the coupling of their multipole moments to the spacetime curvature of nearby objects,[33] as well as writing down the Hartle-Thorne metric, an approximate solution which describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body.
Thorne has also theoretically predicted the existence of universally antigravitating "exotic matter" – the element needed to accelerate the expansion rate of the universe, keep traversable wormhole "Star Gates" open and keep timelike geodesic free float "warp drives" working. With Clifford Will[34] and others of his students, he laid the foundations for the theoretical interpretation of experimental tests of relativistic theories of gravity – foundations on which Will and others then built. As of 2005, Thorne was interested in the origin of classical space and time from the quantum foam of quantum gravity theory.[citation needed]
Publications
Thorne has written and edited books on topics in gravitational theory and high-energy astrophysics. In 1973, he co-authored the textbook Gravitation with Charles Misner and John Wheeler;[35] according to John C. Baez and Chris Hillman, it is one of the great scientific books of all time and has inspired two generations of students.[36] In 1994, he published Black Holes and Time Warps: Einstein's Outrageous Legacy, a book for non-scientists for which he received numerous awards. This book has been published in six languages, and editions in Chinese, Italian, Czech, and Polish are in press. In 2014, Thorne published The Science of Interstellar, in which he explains the science behind Christopher Nolan's film Interstellar; Nolan wrote the foreword to the book. In September 2017, Thorne and Roger D. Blandford published Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics, a graduate-level textbook covering the six major areas of physics listed in the title.[37]
Thorne's articles have appeared in publications such as:
Scientific American,[38]
McGraw-Hill Yearbook of Science and Technology,[39] and
Collier's Encyclopedia[40] among others.
Thorne has published more than 150 articles in scholarly journals.[citation needed]
Honors and awards
Thorne has been elected to:[41]
the American Academy of Arts and Sciences (1972)[42]
the National Academy of Sciences,
the Russian Academy of Sciences, and
the American Philosophical Society.
He has been recognized by numerous awards including:
the American Institute of Physics Science Writing Award in Physics and Astronomy,
the Phi Beta Kappa Science Writing Award,
the American Physical Society's Lilienfeld Prize,
the German Astronomical Society's Karl Schwarzschild Medal (1996),
the Robinson Prize in Cosmology from the University of Newcastle, England,
the Sigma Xi (The Scientific Research Society) Common Wealth Award for Science and Invention,
the California Science Center's California Scientist of the Year Award (2003),
the Albert Einstein Medal in 2009 from the Albert Einstein Society, Bern, Switzerland
the UNESCO Niels Bohr Medal from UNESCO (2010)[43]
the Special Breakthrough Prize in Fundamental Physics (2016)
the Gruber Prize in Cosmology (2016)
the Shaw Prize (2016) (together with Ronald Drever and Rainer Weiss).[44]
the Kavli Prize in Astrophysics (2016) (together with Ronald Drever and Rainer Weiss).[45]
the Tomalla Prize (2016) for extraordinary contributions to general relativity and gravity.[46]
the Georges Lemaître Prize (2016)
the Harvey Prize (2016) (together with Ronald Drever and Rainer Weiss).[47]
the Smithsonian Magazine American Ingenuity Award for Physical Sciences (2016)[48]
the Princess of Asturias Award (2017) (jointly with Rainer Weiss and Barry Barish).[49]
the Nobel Prize in Physics (2017) (jointly with Rainer Weiss and Barry Barish)
the Lewis Thomas Prize (2018)
the Golden Plate Award of the American Academy of Achievement (2019)[50]
He has been a Woodrow Wilson Fellow, Danforth Fellow, Guggenheim Fellow, and Fulbright Fellow. He has also received the honorary degree of doctor of humane letters from Claremont Graduate University and an honorary doctorate from the Physics Department of the Aristotle University of Thessaloniki.
He was elected to hold the Lorentz chair for the year 2009 at Leiden University, the Netherlands.
Thorne has served on:
the International Committee on General Relativity and Gravitation,
the Committee on US-USSR Cooperation in Physics, and
the National Academy of Sciences' Space Science Board, which has advised NASA and Congress on space science policy.
Kip Thorne was included in Time magazine's annual list of the 100 most influential people in the world in 2016.[51]
Adaptation in media
Thorne contributed ideas on wormhole travel to Carl Sagan for use in his novel Contact.[52]
Thorne and his friend, producer Lynda Obst, also developed the concept for the Christopher Nolan film Interstellar.[53] He also wrote a tie-in book, The Science of Interstellar. Thorne later advised Nolan on the physics of his movie Tenet.[54]
In Larry Niven's novel Rainbow Mars, the time travel technology is based on Thorne's wormhole theories; in the context of the novel, Thorne's work marks the point when time travel first became physically possible rather than mere fantasy. As a result, any attempt to travel in time to before Thorne's development of wormhole theory lands the time traveller in a fantastic version of reality rather than the actual past.[55]
In the film The Theory of Everything, Thorne was portrayed by actor Enzo Cilenti.[56]
Thorne played himself in the episode of The Big Bang Theory entitled "The Laureate Accumulation", episode 18 of season 12.
Partial bibliography
Misner, Charles W., Thorne, K. S. and Wheeler, John Archibald, Gravitation 1973, (W H Freeman & Co)
Thorne, K. S., in 300 Years of Gravitation, (Eds.) S. W. Hawking and W. Israel, 1987, (Chicago: Univ. of Chicago Press), Gravitational Radiation.
Thorne, K. S., Price, R. H. and Macdonald, D. M., Black Holes, The Membrane Paradigm, 1986, (New Haven: Yale Univ. Press).
Friedman, J., Morris, M. S., Novikov, I. D., Echeverria, F., Klinkhammer, G., Thorne, K. S. and Yurtsever, U., Physical Review D., 1990, (in press), Cauchy Problem in Spacetimes with Closed Timelike Curves.
Thorne, K. S. and Blandford, R. D., Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics, 2017, (Princeton: Princeton University Press).
Polchinski's paradox
^ The announcement team were Thorne, David Reitze, Gabriela González, Rainer Weiss, and France A. Córdova.
^ "einstein medal". Einstein-bern.ch. Retrieved December 7, 2014.
^ "Kip Stephen Thorne". Mathematics Genealogy Project. North Dakota State University. Retrieved September 6, 2016.
^ a b c d e f "Kip S. Thorne: Biographical Sketch". www.its.caltech.edu. Retrieved May 8, 2020.
^ Kevin P. Sullivan (December 16, 2013). "Christopher Nolan's 'Interstellar' Trailer: Watch Now". MTV. Retrieved October 30, 2014.
^ "Watch Exclusive: The Science of Interstellar - WIRED - WIRED Video - CNE". WIRED Videos. Archived from the original on December 5, 2014. Retrieved December 7, 2014.
^ "The Nobel Prize in Physics 2017". The Nobel Foundation. October 3, 2017. Retrieved October 3, 2017.
^ Rincon, Paul; Amos, Jonathan (October 3, 2017). "Einstein's waves win Nobel Prize". BBC News. Retrieved October 3, 2017.
^ Overbye, Dennis (October 3, 2017). "2017 Nobel Prize in Physics Awarded to LIGO Black Hole Researchers". The New York Times. Retrieved October 3, 2017.
^ Kaiser, David (October 3, 2017). "Learning from Gravitational Waves". The New York Times. Retrieved October 3, 2017.
^ "Kip S. Thorne Biography". NobelPrize.org.
^ Grant Kimm, Webmaster - The College of Liberal Arts and Sciences at Iowa State University. "Plaza of Heroines at Iowa State University". Las.iastate.edu. Archived from the original on August 14, 2015. Retrieved December 7, 2014.
^ Jones, Zachary (2011). "D. Wynne Thorne Papers, 1936-1983". Archives West. Orbis Cascade Alliance.
^ "Dr. Alison Comish Thorne". Legacy.com. The Salt Lake Tribune Obituaries. October 26, 2004. Retrieved September 7, 2016.
^ Rory Carroll (June 21, 2013). "Kip Thorne: physicist studying time travel tapped for Hollywood film". Guardian News and Media Limited. Retrieved October 30, 2014. Thorne grew up in an academic, Mormon family in Utah but is now an atheist. "There are large numbers of my finest colleagues who are quite devout and believe in God, ranging from an abstract humanist God to a very concrete Catholic or Mormon God. There is no fundamental incompatibility between science and religion. I happen to not believe in God."
^ Piper, Matthew (October 3, 2017). "Utah-born Kip Thorne wins the Nobel Prize in physics for his role in detecting gravitational waves". The Salt Lake Tribune.
^ Thorne, Kip Stephen (1965). Geometrodynamics of cylindrical systems (Ph.D.). Princeton University. OCLC 760240072 – via ProQuest.
^ "Kip S. Thorne". history.aip.org.
^ Kondrashov, Veronica. "Kip S. Thorne: Curriculum Vitae". Kip S. Thorne. California Institute of Technology.
^ Cofield, Cala (December 19, 2014). "Time Travel and Wormholes:Physicist Kip Thorne's Wildest Theories". Space.com.
^ "LIGO: The Search for Gravitational Waves". National Science Foundation. Retrieved September 9, 2016. LIGO is the largest single enterprise undertaken by NSF, with capital investments of nearly $300 million and operating costs of more than $30 million/year.
^ "Catching waves with Kip Thorne". plus.maths.org. December 1, 2001. Retrieved May 8, 2020.
^ "Gravitational Waves Detected 100 Years After Einstein's Prediction". ligo.caltech.edu. February 11, 2016.
^ Twilley, Nicola. "Gravitational Waves Exist: The Inside Story of How Scientists Finally Found Them". The New Yorker. ISSN 0028-792X. Retrieved February 11, 2016.
^ Abbott, B.P.; et al. (2016). "Observation of Gravitational Waves from a Binary Black Hole Merger". Phys. Rev. Lett. 116 (6): 061102. arXiv:1602.03837. Bibcode:2016PhRvL.116f1102A. doi:10.1103/PhysRevLett.116.061102. PMID 26918975. S2CID 124959784.
^ Naeye, Robert (February 11, 2016). "Gravitational Wave Detection Heralds New Era of Science". Sky and Telescope. Retrieved February 11, 2016.
^ Castelvecchi, Davide; Witze, Alexandra (February 11, 2016). "Einstein's gravitational waves found at last". Nature News. doi:10.1038/nature.2016.19361. S2CID 182916902. Retrieved February 11, 2016.
^ a b c d Kip S. Thorne (1994). Black Holes and Time Warps: Einstein's Outrageous Legacy. W.W. Norton. ISBN 978-0-393-31276-8.
^ V. Frolov; I. Novikov (December 6, 2012). Black Hole Physics: Basic Concepts and New Developments. Springer Science & Business Media. ISBN 978-94-011-5139-9.
^ Davies, Paul (February 1, 2006). "How to build a time machine". Scientific American. 16 (3): 14–19. doi:10.1038/scientificamerican0206-14sp. PMID 12197102.
^ Kim, Sung-Won; Thorne, Kip S. (1991). "Do vacuum fluctuations prevent the creation of closed timelike curves?" (PDF). Physical Review D. 43 (12): 3929–3947. Bibcode:1991PhRvD..43.3929K. doi:10.1103/PhysRevD.43.3929. PMID 10013359.
^ Morris, Michael S.; Thorne, Kip S.; Yurtsever, Ulvi (1988). "Wormholes, Time Machines, and the Weak Energy Condition" (PDF). Physical Review Letters. 61 (13): 1446–1449. Bibcode:1988PhRvL..61.1446M. doi:10.1103/PhysRevLett.61.1446. PMID 10038800.
^ Thorne, Kip S.; Żytkow, Anna N. (March 15, 1977). "Stars with degenerate neutron cores. I - Structure of equilibrium models". The Astrophysical Journal. 212 (1): 832–858. Bibcode:1977ApJ...212..832T. doi:10.1086/155109.
^ Hartle, James; Thorne, Kip S. (1985). "Laws of motion and precession for black holes and other bodies" (PDF). Physical Review D. 31 (8): 1815–1837. Bibcode:1985PhRvD..31.1815T. doi:10.1103/PhysRevD.31.1815. PMID 9955908.
^ Thorne, Kip S.; Will, Clifford (1971). "Theoretical Frameworks for Testing Relativistic Gravity. I. Foundations". The Astrophysical Journal. 163: 595–610. Bibcode:1971ApJ...163..595T. doi:10.1086/150803.
^ Misner, Charles W.; Kip S. Thorne; John Archibald Wheeler (September 1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0-7167-0344-0.
^ "A Guide to Relativity books". John Baez, Chris Hillman. Department of Mathematics, University of California at Riverside. 1998. Retrieved June 19, 2016.
^ Kip S. Thorne and Roger D. Blandford (2017). Modern Classical Physics: Optics, Fluids, Plasmas, Elasticity, Relativity, and Statistical Physics. Princeton University Press. ISBN 978-0-69115902-7.
^ "Stories by Kip S Thorne". Scientific American. Retrieved November 9, 2017.
^ K.S. Thorne, "Gravitational Collapse," in 1976 McGraw-Hill Yearbook of Science and Technology (McGraw-Hill Book Company, New York, 1967), pp. 193-195
^ K.S. Thorne, "Gravitational Collapse," Collier's Encyclopedia (Crowell-Collier Educational Corporation, New York, 1969), pp. 335-336
^ "Kip S. Thorne: Curriculum Vitae". Caltech. Retrieved September 18, 2016.
^ "Book of Members, 1780–2010: Chapter T" (PDF). American Academy of Arts and Sciences. Retrieved April 15, 2011.
^ "UNESCO's Niels Bohr Gold Medal awarded to prominent physicists". Niels Bohr Institute. September 14, 2010. Retrieved December 8, 2016.
^ "The Shaw Prize - Top prizes for astronomy, life science and mathematics". www.shawprize.org. Retrieved May 8, 2020.
^ "9 Scientific Pioneers Receive The 2016 Kavli Prizes". prnewswire.com. June 2, 2016.
^ "The Tomalla prize holders". The Tomalla Foundation. Retrieved September 18, 2016.
^ "Prize Winners – Harvey Prize". harveypz.net.technion.ac.il. Retrieved May 8, 2020.
^ "2016 American Ingenuity Award Winners". Smithsonian. Retrieved October 15, 2018.
^ IT, Developed with webControl CMS by Intermark. "Rainer Weiss, Kip S. Thorne, Barry C. Barish and LIGO Scientific Collaboration - Laureates - Princess of Asturias Awards". The Princess of Asturias Foundation. Retrieved May 8, 2020.
^ "Golden Plate Awardees of the American Academy of Achievement". www.achievement.org. American Academy of Achievement.
^ "Kip Thorne". Christopher Nolan. Time magazine. April 21, 2016. Retrieved May 8, 2016.
^ "Contact – High Technology Lends a Hand/Science of the Soundstage". Warner Bros. Archived from the original on March 4, 2001. Retrieved September 1, 2014.
^ Fernandez, Jay A. (March 28, 2007). "Writer with real stars in his eyes". Los Angeles Times. Retrieved September 1, 2014.
^ Maddox, Garry (August 22, 2020). "'The biggest film I've done': Christopher Nolan on the secret world of Tenet". The Sydney Morning Herald. Archived from the original on August 23, 2020. Retrieved August 23, 2020.
^ Larry Niven. Rainbow Mars. New York: Tor Books, 1999, pp. 45, 366.
^ Tunzelmann, Alex von (January 7, 2015). "The Theory of Everything skips over the black holes of marriage and science". The Guardian. Retrieved September 29, 2016.
Media related to Kip Thorne at Wikimedia Commons
Kip Thorne on IMDb
Kip Thorne at the Mathematics Genealogy Project
Founding Fathers of Relativity
Kip S. Thorne on Nobelprize.org
Retrieved from "https://en.wikipedia.org/w/index.php?title=Kip_Thorne&oldid=999310376" | CommonCrawl |
Please use this citation:
R.L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2022, 083C01 (2022).
Search for ${{\mathit X}^{0}}$ (Light Boson) Resonance in ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit \gamma}}$
The limit is for $\Gamma\mathrm {( {{\mathit X}^{0}} \rightarrow {{\mathit e}^{+}} {{\mathit e}^{-}} )}\cdot{}\Gamma\mathrm {( {{\mathit X}^{0}} \rightarrow {{\mathit \gamma}} {{\mathit \gamma}} {{\mathit \gamma}} )}/\Gamma _{{\mathrm {total}}}$. $\mathit C$ invariance forbids spin-0 ${{\mathit X}^{0}}$ coupling to both ${{\mathit e}^{+}}{{\mathit e}^{-}}$ and ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit \gamma}}$ .
VALUE ($ 10^{-3} $ eV)   CL%   DOCUMENT ID   TECN   COMMENT
• • We do not use the following data for averages, fits, limits, etc. • •
$<0.2$   95   1 VO 1994   CNTR   ${\mathit m}_{{{\mathit X}^{0}}}=1.1 - 1.9$ MeV
         2 VO 1994   CNTR   ${\mathit m}_{{{\mathit X}^{0}}}=1.1$ MeV
$<120$   95   3 SKALSEY 1992   CNTR   ${\mathit m}_{{{\mathit X}^{0}}}=1.5$ MeV
1 VO 1994 looked for ${{\mathit X}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit \gamma}}$ decaying at rest. The precise limits depend on ${\mathit m}_{{{\mathit X}^{0}}}$. See Fig.$~$2(b) in paper.
2 VO 1994 looked for ${{\mathit X}^{0}}$ $\rightarrow$ ${{\mathit \gamma}}{{\mathit \gamma}}{{\mathit \gamma}}$ decaying in flight.
3 SKALSEY 1992 also give limits $4.3$ for ${\mathit m}_{{{\mathit X}^{0}}}$ = $1.54$ and $7.5$ for $1.64$ MeV. The spin of ${{\mathit X}^{0}}$ is assumed to be one.
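For orientation only: a quantity with the dimensions of a decay width, such as the $<0.2\times10^{-3}$ eV entry above, can be translated into a timescale via $\tau = \hbar/\Gamma$. The sketch below performs that conversion, loosely treating the quoted combined-width bound as if it were a single effective width; this is an illustration under that assumption, not a PDG-derived quantity.

```python
HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s (assumed value)

def width_to_lifetime(width_ev: float) -> float:
    """Convert a decay width in eV to a lifetime in seconds via tau = hbar / Gamma."""
    return HBAR_EV_S / width_ev

tau = width_to_lifetime(0.2e-3)  # the 0.2e-3 eV figure quoted above
print(f"equivalent timescale: {tau:.2e} s")  # ~3.3e-12 s
```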
VO 1994
PR C49 1551 Search for Resonance in Multiphoton Final States from Low Energy ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ Scattering
SKALSEY 1992
PRL 68 456 Search for Three Photon Final State Resonances in Low-Energy ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ Scattering
Except where otherwise noted, content of the 2022 Review of Particle Physics is licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. The publication of the Review of Particle Physics is supported by US DOE, MEXT (Japan), INFN (Italy), JPS and CERN. Individual collaborators receive support for their PDG activities from their respective institutes or funding agencies. © 2022. See LBNL disclaimers. | CommonCrawl |