\begin{document} \title{Approximation by multivariate\\ Kantorovich--Kotelnikov operators\thanks{This research is supported by the Volkswagen Foundation; the first author is also supported by the project AFFMA that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 704030; the second author is also supported by grants from RFBR \# 15-01-05796-a, St. Petersburg State University \#~9.38.198.2015.}} \begin{abstract} Approximation properties of multivariate Kantorovich--Kotelnikov type operators generated by different band-limited functions are studied. In particular, a wide class of functions with discontinuous Fourier transform is considered. The $L_p$-rate of convergence for these operators is given in terms of the classical moduli of smoothness. Several examples of the Kantorovich--Kotelnikov operators generated by the $\operatorname{sinc}$-function and its linear combinations are provided. \end{abstract} \textbf{Keywords} Kantorovich--Kotelnikov operator, band-limited function, approximation order, modulus of smoothness, matrix dilation. \textbf{AMS Subject Classification}: 41A58, 41A25, 41A63 \newtheorem{theo}{Theorem} \newtheorem{lem}[theo]{Lemma} \newtheorem {prop} [theo] {Proposition} \newtheorem {coro} [theo] {Corollary} \newtheorem {defi} [theo] {Definition} \newtheorem {rem} [theo] {Remark} \newtheorem {ex} [theo] {Example} \newtheorem{theorempart}{Theorem}[theo] \newtheorem{lemmapart}{Lemma}[theo] \newtheorem{proppart}{Proposition}[theo] \section{Introduction} The Kantorovich--Kotelnikov operator is an operator of the form \begin{equation}\label{KK} K_w(f,{\varphi} ;x)=\sum_{k\in {{\Bbb Z}}}\(w\int_{\frac kw}^{\frac{k+1}w}f(u)du\){\varphi}(w x-k),\quad x\in {\Bbb R},\quad w>0, \end{equation} where $f \,:\, {\Bbb R}\to {\Bbb C}$ is a locally integrable function and ${\varphi}$ is an appropriate kernel satisfying certain ``good'' properties; as a rule, ${\varphi}$ is a band-limited function or a compactly supported function, e.g., a $B$-spline. The operator $K_w$ was introduced in~\cite{BBSV}, although, in other forms, it was known previously, see, e.g.,~\cite{Jia1, Jia2, LJC}. During the last years, in view of some important applications, this operator has drawn the attention of many mathematicians and has been studied especially actively~\cite{BM, CS, CV2, CV0, FCMV, Jia2, KM, KKS, KS, KS1, OT15, VZ1, VZ2}. The operator $K_w$ has several advantages over the generalized sampling operators \begin{equation}\label{SS} S_w (f,{\varphi} ;x)=\sum_{k\in {{\Bbb Z}}}f\(\frac kw\){\varphi}(w x-k),\quad x\in {\Bbb R},\quad w>0. \end{equation} First of all, using the averages $w\int_{\frac kw}^{\frac{k+1}w}f(u)du$ instead of the sampled values $f(k/w)$ makes it possible to deal with discontinuous signals and to reduce the so-called time-jitter errors. Note that the latter property is very useful in signal and image processing. Moreover, unlike the generalized sampling operators $S_w$, the operator~\eqref{KK} is continuous on $L_p({\Bbb R})$ and, therefore, provides a better approximation order than $S_w$ in most cases. In the literature, there are several generalizations and refinements of the Kantorovich--Kotelnikov operator $K_w$ (see, e.g.,~\cite{BDR, v58, Jia2, KKS, KS, KS1, LJC, OT15, VZ1, VZ2}).
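As a simple illustration of the structure of~\eqref{KK} (recorded here only for orientation and not used in the sequel), note that the inner averages reproduce constants exactly: if $f\equiv c$, then $w\int_{\frac kw}^{\frac{k+1}w}c\,du=c$ for every $k$, and hence
$$
K_w(c,{\varphi};x)=c\sum_{k\in {{\Bbb Z}}}{\varphi}(wx-k),\quad x\in {\Bbb R}.
$$
Thus, $K_w$ reproduces constant functions whenever the kernel satisfies the partition-of-unity condition $\sum_{k\in {{\Bbb Z}}}{\varphi}(x-k)\equiv 1$, as is the case, e.g., for the kernel ${\varphi}(x)=\frac12\operatorname{sinc}^2\(\frac x2\)$ appearing in~\eqref{intrImp} below.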
In this paper, we study approximation properties of the following multivariate analogue of~\eqref{KK} \begin{equation}\label{gKK} Q_j (f,{\varphi},\widetilde{\varphi}; x)=\sum_{k\in {{\Bbb Z}}^d}\(m^j \int_{{\Bbb R}^d}f(u)\widetilde{\varphi} (M^ju+k) du\){\varphi}(M^j x+k),\quad x\in {\Bbb R}^d,\quad j\in {{\Bbb Z}}, \end{equation} where $M$ is a dilation matrix, $m=|{\rm det}\,M|$, and $\widetilde{\varphi}$ and ${\varphi}$ are appropriate functions. Note that if $d=1$ and $\widetilde{\varphi}(x)=\chi_{[0,1]}(x)$ (the characteristic function of $[0,1]$), then~\eqref{gKK} represents the standard Kantorovich--Kotelnikov operator $K_{m^j}$. Convergence and approximation properties of the operator~\eqref{gKK} have been actively studied by many authors (see~\cite{BDR, BM, CS, CV2, CV0, FCMV, v58, Jia1, Jia2, KKS, KS, KS1, LJC, OT15, VZ1, VZ2}). The most general results on estimates of the error of approximation by $Q_j$ have been obtained in the case of compactly supported ${\varphi}$ and $\widetilde{\varphi}$. In particular, the following result was proved in~\cite{Jia2} (see also~\cite{Jia1}): \emph{ if ${\varphi}$ and $\widetilde{\varphi}$ are compactly supported, ${\varphi} \in L_p({\Bbb R}^d)$, $\widetilde{\varphi}\in L_q({\Bbb R}^d)$, $1/p+1/q=1$, $M$ is an isotropic dilation matrix, and $Q_0$ reproduces polynomials of degree $n-1$, then for any $f\in L_p({\Bbb R}^d)$, $1\le p\le \infty$, and $n\in {{\Bbb N}}$ we have \begin{equation}\label{Qvved} \Vert f-Q_j(f,{\varphi},\widetilde{\varphi})\Vert_{L_p({\Bbb R}^d)}\le C({p,d,n,{\varphi},\widetilde{\varphi}})\omega_n(f,\|M^{-j}\|)_p, \end{equation}} \emph{where $\omega_n(f,h)_p$ is the modulus of smoothness of order $n$.} Concerning band-limited functions ${\varphi}$, it turns out that approximation properties of the operators $Q_j$ have been studied mainly in the case where $\widetilde{\varphi}$ is some characteristic function (see, e.g.,~\cite{BBSV, CS, CV2, CV0, FCMV, KKS, OT15, VZ1, VZ2}) and, unlike the case of compactly supported functions ${\varphi}$, there are several limitations and drawbacks in the available results. First of all, the methods previously used are essentially restricted to the case of integrable functions ${\varphi}$, which do not allow one to consider functions of the type $\operatorname{sinc}(x)=(\sin \pi x)/(\pi x)$. Secondly, the conditions imposed on the kernel ${\varphi}$ cannot provide a high rate of convergence of $Q_j(f)$ even for sufficiently smooth functions $f$. Moreover, the corresponding estimates are given in terms of Lipschitz classes. For example, it follows from~\cite{CV2} that \emph{for any $f\in L_p({\Bbb R})\cap {\rm Lip}(\nu)$, $1\le p\le \infty$, $0<\nu\le 1$, we have \begin{equation}\label{intrImp} \bigg\Vert f-\frac12\sum_{k\in {{\Bbb Z}}}\(w\int_{\frac kw}^{\frac{k+1}w}f(u)du\)\operatorname{sinc}^2\(\frac{wx-k}{2}\)\bigg\Vert_{L_p({\Bbb R})}=\mathcal{O}(w^{-\nu}),\quad w\to +\infty. \end{equation} } In this paper, we remedy the mentioned drawbacks and study the rate of convergence of the operator~\eqref{gKK} for a wide class of band-limited functions ${\varphi}$ including non-integrable ones. In particular, estimate~(\ref{Qvved}) is proved for a large class of functions $\widetilde\varphi$ including both compactly supported and band-limited functions, provided that $D^{\beta}(1-\widehat\varphi\overline{\widehat{\widetilde\varphi}})(\textbf{0}) = 0$ for all $\beta\in{\mathbb Z}^d_+$, $\|\beta\|_{\ell_1}<n$ (see Theorems~\ref{th2} and~\ref{th2}\,$'$).
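To illustrate this condition in the simplest univariate setting (the following computation is given only for orientation), take $d=1$, ${\varphi}=\operatorname{sinc}$, and $\widetilde{\varphi}=\chi_{[0,1]}$, so that $\widehat{\varphi}=\chi_{[-1/2,1/2]}$ and $\widehat{\widetilde{\varphi}}(\xi)=e^{-\pi i \xi}\,\frac{\sin\pi\xi}{\pi\xi}$. Then, near the origin,
$$
1-\widehat{\varphi}(\xi)\overline{\widehat{\widetilde{\varphi}}(\xi)}=1-e^{\pi i \xi}\,\frac{\sin\pi\xi}{\pi\xi}=-\pi i \xi+\mathcal{O}(\xi^2),\quad \xi\to 0,
$$
so this pair satisfies the above condition with $n=1$ but not with $n=2$; this agrees with the first- and second-order moduli of smoothness appearing in Section~6 for non-symmetric and symmetric averaging sets, respectively.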
In particular cases, this gives an answer to the question posed in~\cite{BBSV} about approximation properties of the following sampling series (see Section~6): \begin{equation*} \sum_{k\in {{\Bbb Z}}}\(w\int_{\frac kw}^{\frac{k+1}w}f(u)du\)\operatorname{sinc}(w x-k),\quad x\in {\Bbb R},\quad w>0. \end{equation*} The paper is organized as follows: in Section~2 we introduce notation and give some basic facts. In Section~3 we consider approximation properties of some generalized sampling operators of type $Q_j$. These operators are defined similarly to the operators $Q_j$ but with an appropriate tempered distribution in place of the function $\widetilde{\varphi}$, which, in particular, allows operators of type $S_w$ to be included in the consideration. The $L_p$-rate of convergence of such generalized sampling operators is given in terms of the Fourier transform of $f$; the corresponding estimates have several drawbacks, which we remedy in the subsequent sections. Section~4 is devoted to auxiliary results. In Section~5 we prove two main results that provide estimates of the error of approximation by the operator $Q_j$ in terms of the classical moduli of smoothness. The results of this section can be considered as counterparts of the corresponding results from Section~3. Finally, in Section~6 we consider some special cases and provide a number of examples. \section{Notation and basic facts} \label{notation} $\n$ is the set of positive integers, ${\mathbb R}$ is the set of real numbers, $\cn$ is the set of complex numbers. ${\mathbb R}^d$ is the $d$-dimensional Euclidean space, $x = (x_1,\dots, x_d)$ and $y = (y_1,\dots, y_d)$ are its elements (vectors), $(x, y)=x_1y_1+\dots+x_dy_d$, $|x| = \sqrt {(x, x)}$, ${\bf0}=(0,\dots, 0)\in{\mathbb R}^d$; $B_r=\{x\in{\mathbb R}^d:\ |x|\le r\}$, ${\mathbb T}^d=[-\frac 12,\frac 12]^d$; ${\mathbb Z}^d$ is the integer lattice in ${\mathbb R}^d$, $\z_+^d:=\{x\in{\mathbb Z}^d:~x\geq~{\bf0}\}.$ If $\alpha,\beta\in{\mathbb Z}^d_+$, $a,b\in{\mathbb R}^d$, we set $[\alpha]=\sum\limits_{j=1}^d \alpha_j$, $\alpha!=\prod\limits_{j=1}^d(\alpha_j!),$ $$\binom{\beta}{\alpha}=\frac{\beta!}{\alpha!(\beta-\alpha)!},\quad D^{\alpha}f=\frac{\partial^{[\alpha]} f}{\partial x^{\alpha}}=\frac{\partial^{[\alpha]} f}{\partial^{\alpha_1}x_1\dots \partial^{\alpha_d}x_d},$$ $\delta_{ab}$~is the Kronecker delta. A real $d\times d$ matrix $M$ whose eigenvalues are greater than $1$ in modulus is called a dilation matrix. Throughout the paper we assume that such a matrix $M$ is fixed; $m=|{\rm det}\,M|$, and $M^*$ denotes the matrix adjoint to $M$. Since the spectrum of the operator $M^{-1}$ is located in $B_r$, where $r=r(M^{-1}):=\lim_{j\to+\infty}\|M^{-j}\|^{1/j}$ is the spectral radius of $M^{-1}$, and there exists at least one point of the spectrum on the boundary of $B_r$, we have \begin{equation} \|M^{-j}\|\le {C_{M,\vartheta}}\,\vartheta^{-j},\quad j\ge0, \label{00++} \end{equation} for every positive number $\vartheta$ which is smaller than the modulus of any eigenvalue of $M$. In particular, we can take $\vartheta > 1$, and then $$ \lim_{j\to+\infty}\|M^{-j}\|=0. $$ $L_p$ denotes $L_p({\mathbb R}^d)$, $1\le p\le\infty$, with the usual norm $\Vert f\Vert_p=\Vert f\Vert_{L_p({\Bbb R}^d)}$. We say that ${\varphi}\in L_p^0$ if ${\varphi}\in L_p$ and ${\varphi}$ has a compact support. We use $W_p^n$, $1\le p\le\infty$, $n\in\n$, to denote the Sobolev space on~${\mathbb R}^d$, i.e.
the set of functions whose derivatives up to order $n$ are in $L_p$, with the usual Sobolev semi-norm given by $$ \|f\|_{\dot W_p^n}=\sum_{[\nu]=n}\Vert D^\nu f\Vert_p. $$ If $f, g$ are functions defined on ${\mathbb R}^d$ and $f\overline g\in L_1$, then $\langle f, g\rangle:=\int_{{\mathbb R}^d}f\overline g$. If $f\in L_1$, then its Fourier transform is $\mathcal{F}f(\xi)=\widehat f(\xi)=\int_{{\mathbb R}^d} f(x)e^{-2\pi i (x,\xi)}\,dx$. If $\varphi$ is a function defined on ${\mathbb R}^d$, we set $$ \varphi_{jk}(x):=m^{j/2}\varphi(M^jx+k),\quad j\in\z,\,\, k\in{\mathbb R}^d. $$ Denote by $\mathcal{S}$ the Schwartz class of functions defined on ${\mathbb R}^d$. The dual space of $\mathcal{S}$ is $\mathcal{S}'$, i.e. $\mathcal{S}'$ is the space of tempered distributions. The basic facts from distribution theory can be found, e.g., in~\cite{Vladimirov-1}. If $f\in \mathcal{S}$ and $\varphi \in \mathcal{S}'$, then $\langle \varphi, f\rangle:= \overline{\langle f, \varphi\rangle}:=\varphi(f)$. If $\varphi\in \mathcal{S}',$ then $\widehat \varphi$ denotes its Fourier transform defined by $\langle \widehat f, \widehat \varphi\rangle=\langle f, \varphi\rangle$, $f\in \mathcal{S}$. If $\varphi\in \mathcal{S}'$, $j\in\z, k\in{\mathbb Z}^d$, then we define $\varphi_{jk}$ by $ \langle f, \varphi_{jk}\rangle= \langle f_{-j,-M^{-j}k},\varphi\rangle$ for all $f\in \mathcal{S}$. Denote by $\mathcal{S}_N'$ the set of all tempered distributions $\varphi$ whose Fourier transform $\widehat{\varphi}$ is a function on ${\mathbb R}^d$ such that $|\widehat{\varphi}(\xi)|\le C_{\varphi} |\xi|^{N}$ for almost all $\xi\notin{\mathbb T}^d$, $N=N({\varphi})\ge 0,$ and $|\widehat{\varphi}(\xi)|\le C'_{{\varphi}}$ for almost all $\xi\in{\mathbb T}^d$. Denote by ${\cal B}={\cal B}({\Bbb R}^d)$ the class of functions $\varphi$ given by $$ \varphi(x)=\int\limits_{{\mathbb R}^d}\theta(\xi)e^{2\pi i(x,\xi)}\,d\xi, $$ where $\theta$ is supported in a parallelepiped $\Pi:=[a_1, b_1]\times\dots\times[a_d, b_d]$ and is such that $\theta\big|_\Pi\in C^d(\Pi)$. Let $1\le p \le \infty$. Denote by ${\cal L}_p$ the set $$ {\cal L}_p:= \left\{ \varphi\in L_p\,:\, \|\varphi\|_{{\cal L}_p}:= \bigg\|\sum_{k\in{\mathbb Z}^d} \left|\varphi(\cdot+k)\right|\bigg\|_{L_p({\mathbb T}^d)}<\infty \right\}. $$ With the norm $\|\cdot\|_{{\cal L}_p}$, ${\cal L}_p$ is a Banach space. The following simple properties hold: ${\cal L}_1=L_1,$ $\|\varphi\|_p\le \|\varphi\|_{{\cal L}_p}$, and $\|\varphi\|_{{\cal L}_q}\le \|\varphi\|_{{\cal L}_p}$ for $1\le q \le p \le\infty.$ Therefore, ${\cal L}_p\subset L_p$ and ${\cal L}_p\subset {\cal L}_q$ for $1\le q \le p \le\infty.$ If $\varphi\in L_p$ is compactly supported, then $\varphi\in {\cal L}_p$ for $p\ge1.$ If $\varphi$ decays fast enough, i.e. if there exist constants $C>0$ and $\varepsilon>0$ such that $|\varphi(x)|\le C( 1+|x|)^{-d-\varepsilon}$ for all $x\in{\mathbb R}^d,$ then $\varphi\in {\cal L}_\infty$. The modulus of smoothness $\omega_n(f,\cdot)_p$ of order $n\in {{\Bbb N}}$ of a function $f\in L_p$ is defined by $$ \omega_n(f,h)_p=\sup_{|\delta|<h,\, \delta\in {\Bbb R}^d} \Vert \Delta_\delta^n f\Vert_{p}, $$ where $$ \Delta_\delta^n f(x)=\sum_{\nu=0}^n (-1)^\nu \binom{n}{\nu} f(x+\delta \nu). $$ \section{Preliminary results} \label{scaleAppr} The scaling operator $\sum_{k\in{\mathbb Z}^d} \langle f, {\widetilde\varphi}_{jk}\rangle \varphi_{jk}$ is a good approximation tool for many appropriate pairs of functions $\varphi, \widetilde\varphi$.
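For orientation, consider the simplest case ${\varphi}=\widetilde{\varphi}=\operatorname{sinc}$ (the tensor product of univariate $\operatorname{sinc}$ functions, see Section~6) and $M=2I$; the following standard observation is recalled only as an illustration. In this case, $\{\varphi_{jk}\}_{k\in{\mathbb Z}^d}$ is an orthonormal basis of the Paley--Wiener space $\{g\in L_2\,:\, \operatorname{supp}\widehat g\subset 2^{j}{\mathbb T}^d\}$, and for $f\in L_2$ the series $\sum_{k\in{\mathbb Z}^d} \langle f, \varphi_{jk}\rangle \varphi_{jk}$ is simply the orthogonal projection of $f$ onto this space. The operators studied below may be viewed as extensions of this projection to more general pairs $\varphi,\widetilde\varphi$ and to the spaces $L_p$.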
Let us consider the case where $\widetilde\varphi$ is a tempered distribution, e.g., the delta function or a linear combination of its derivatives. In this case, the inner product $\langle f, \widetilde\varphi_{jk}\rangle$ is meaningful only for functions $f$ in $\mathcal{S}$. To extend the class of functions $f$, one can replace $\langle f, {\widetilde\varphi}_{jk}\rangle $ by $\langle \widehat f, \widehat{\widetilde\varphi_{jk}}\rangle$. Approximation properties of such operators for certain classes of distributions~$\widetilde\varphi$ and functions $\varphi$ were studied, e.g., in~\cite{KKS} and~\cite{KS1}. Repeating step-by-step the proof of Theorem~4 in~\cite{KS1} and using Corollary~10 in~\cite{KKS}, it is easy to prove the following result. \begin{prop} \label{theoQjnew} Let $N\in\z_+,$ $\widetilde\varphi\in \mathcal{S}_N'$, $\varphi \in \mathcal{B} $. Suppose that there exist $n\in\n$ and $\delta\in(0, 1/2)$ such that $\widehat\varphi\widehat{\widetilde\varphi}$ is boundedly differentiable up to order $n$ on $\{|\xi|<\delta\}$; ${\rm supp\,} \widehat\varphi\subset B_{1-\delta}$; $D^{\beta}(1-\widehat\varphi\widehat{\widetilde\varphi})(0) = 0$ for $[\beta]<n$. If $2\le p< \infty$, $1/p+1/q=1$, $\gamma\in(N+d/p, N+d/p+\varepsilon)$, $\varepsilon>0,$ and \begin{equation} f\in L_p,\ \ \widehat f\in L_q,\ \ \widehat f(\xi)=O(|\xi|^{-N-d-\varepsilon})\ \text{as }|\xi|\to\infty, \label{(c)} \end{equation} then \begin{equation*} \begin{split} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle \widehat{f}, \widehat{\widetilde\varphi_{jk}}\rangle \varphi_{jk}\bigg\|_p^q\le C_1 \|M^{*-j}\|^{\gamma q}&\int\limits_{|M^{*-j}\xi|\ge\delta} |\xi|^{q\gamma}| \widehat f(\xi)|^q d\xi\\ &+C_2 \|M^{*-j}\|^{nq}\int\limits_{|M^{*-j}\xi|<\delta} |\xi|^{qn}|\widehat f(\xi)|^q d\xi, \end{split} \end{equation*} where $C_1$ and $C_2$ do not depend on $j$ and $f$. \end{prop} Proposition~\ref{theoQjnew} does not provide an approximation order of $\sum_{k\in{\mathbb Z}^d} \langle \widehat{f}, \widehat{\widetilde\varphi_{jk}}\rangle \varphi_{jk}$ better than $\|M^{*-j}\|^{n}$ even for very smooth functions $f$. This can be fixed under the stronger restrictions on ${\varphi}$ given in the following definition. \begin{defi} \label{d1} A tempered distribution $\widetilde\varphi$ and a function $\varphi$ are said to be {\em strictly compatible} if there exists $\delta\in(0,1/2)$ such that $\overline{\widehat\varphi}(\xi)\widehat{\widetilde\varphi}(\xi)=1$ a.e. on $\{|\xi|<\delta\}$ and $\widehat\varphi(\xi)=0$ a.e. on $\{|l-\xi|<\delta\}$ for all $l\in\z^d\setminus\{0\}$. \end{defi} \begin{rem} \emph{ It is well known that the shift-invariant space generated by a function $\varphi$ has approximation order $n$ if and only if the Strang--Fix condition of order $n$ is satisfied for $\varphi$ (that is, $D^\beta\widehat\varphi(l)=0$ whenever $l\in{\mathbb Z}^d$, $ l\ne{\bf0}$, $[\beta]<n$). This fact has appeared in the literature in different situations many times (see, e.g.,~\cite{BDR}, \cite{DB-DV1}, \cite{v58}, and~\cite[Ch.~3]{NPS}). The condition $D^{\beta}(1-\widehat\varphi\widehat{\widetilde\varphi})(0) = 0$, $[\beta]<n$, is also a natural requirement for providing approximation order $n$ of scaling operators. This assumption often appears (especially in wavelet theory) in other terms, in particular, in terms of the polynomial reproduction property (see~\cite[Lemma~3.2]{Jia1}). It is clear that to provide an infinitely large approximation order, these conditions should be satisfied for any~$n$. Clearly, the latter holds for strictly compatible functions ${\varphi}$ and~$\widetilde{\varphi}$. } \emph{Supposing that $\widehat{\widetilde\varphi}(\xi)=1$ a.e. on $\{|\xi|<\delta\}$, it is easy to see that the simplest example of $\varphi$ satisfying Definition~\ref{d1} is the tensor product of the univariate $\operatorname{sinc}$ functions.} \end{rem} \begin{prop} {\sc \cite[Theorem~11]{KKS}} \label{theoQjcomp} Let $N\in\z_+,$ $\widetilde\varphi\in \mathcal{S}_N'$, $\varphi \in \mathcal{B} $, and let $\widetilde\varphi$ and $\varphi$ be strictly compatible. \noindent If $2\le p< \infty$, $1/p+1/q=1$, $\gamma\in(N+d/p, N+d/p+\varepsilon)$, $\varepsilon>0,$ and a function $f$ satisfies~(\ref{(c)}), then \begin{equation} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle \widehat{f}, \widehat{\widetilde\varphi_{jk}}\rangle \varphi_{jk}\bigg\|_p^q\le C \|M^{*-j}\|^{\gamma q} \int\limits_{|M^{*-j}\xi|\ge\delta} |\xi|^{q\gamma}| \widehat f(\xi)|^q d\xi, \label{8} \end{equation} where $C$ does not depend on $j$ and $f$. \end{prop} Note that Proposition~\ref{theoQjcomp} is an analogue of the following result of Brown~\cite{Brown}: $$ \left|f(x)- \sum_{k\in\,\z} f(-2^{-j}k)\,{\rm sinc}(2^jx+k)\right|\le C \int\limits_{|\xi|>2^{j-1}} |\widehat f(\xi)|d\xi,\quad x\in{\mathbb R}, $$ whenever the Fourier transform of $f$ is summable on ${\mathbb R}$. There are several drawbacks in Propositions~\ref{theoQjnew} and~\ref{theoQjcomp}. First, they are proved only in the case $p\ge 2$. Second, there are additional restrictions on the function $f$. Even in the case $\widetilde\varphi\in L_q^0$, where $\sum_{k\in{\mathbb Z}^d}\langle{f}, {\widetilde\varphi_{jk}}\rangle \varphi_{jk}$ is meaningful for every $f\in L_p$, the error estimate is obtained only for functions $f$ satisfying~(\ref{(c)}) with $N=0$. Third, the error estimate is given in terms of the decay of the Fourier transform, unlike common estimates, which are usually given in terms of moduli of smoothness. Below, under more restrictive conditions on $\widetilde{\varphi}$, we obtain analogues of the above propositions for all $f\in L_p$, $1\le p\le\infty$, and give estimates of the approximation error in terms of the classical moduli of smoothness. \section{Auxiliary results} The following auxiliary statements will be useful for us. \begin{lem} \label{prop2} Let either $1< p< \infty$, $\varphi\in \cal B$ or $1\le p\le \infty$, ${\varphi} \in \mathcal{L}_p$, and let $a=\{a_k\}_{k\in{\mathbb Z}^d}\in\ell_p$. Then \begin{equation} \label{201} \bigg\|\sum_{k\in{\mathbb Z}^d} a_k \varphi_{0k}\bigg\|_p\le C_{\varphi, p}\|a\|_{\ell_p}. \end{equation} \end{lem} {\bf Proof.} In the case $\varphi\in \cal B$, the proof of~\eqref{201} follows from~\cite[Proposition 9]{KKS}. For the case ${\varphi} \in \mathcal{L}_p$, see~\cite[Theorem 2.1]{v58}.~$\Diamond$ \begin{lem}\label{prop1Lq} Let $f\in L_p$. Suppose that either $1<p<\infty$, $\varphi\in \mathcal{B}$ or $1\le p\le\infty$, $\varphi\in L_q^0$, $1/p+1/q=1$. Then \begin{equation} \label{eqK1Lq} \bigg(\sum_{k\in{\mathbb Z}^d} |\langle f,{{\varphi}_{0k}}\rangle|^p\bigg)^\frac 1p\le C'_{\varphi, p}\|f\|_p. \end{equation} \end{lem} {\bf Proof.} In the case $\varphi\in \mathcal{B}$, see~\cite[Proposition~6]{KKS} for the proof of~\eqref{eqK1Lq}. The case $\varphi\in L_q^0$ follows from~\cite[Lemmas~4 and 5]{KS}.~$\Diamond$ Combining the above lemmas, we obtain the following statement. \begin{lem}\label{lemBound} Let $f\in L_p$, $1<p<\infty$, and $j\in {{\Bbb N}}$. Suppose ${\varphi}\in \mathcal{B}$ and $\widetilde{{\varphi}}\in \mathcal{B}\cup \mathcal{L}_q$, $1/p+1/q=1$. Then \begin{equation}\label{eq0} \bigg\|\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C\Vert f\Vert_p, \end{equation} where $C$ does not depend on $f$ and $j$. \end{lem} {\bf Proof.} If $\widetilde{\varphi}\in \mathcal{B}$, then the proof of~\eqref{eq0} directly follows from Lemmas~\ref{prop2} and~\ref{prop1Lq}. In the case $\widetilde\varphi\in \mathcal{L}_q$, we find $g\in L_q$, $\|g\|_q\le1$, such that \begin{equation}\label{eq+eq1} \bigg\|\sum_{k\in{\mathbb Z}^d} \big\langle f,\widetilde\varphi_{jk}\big\rangle \varphi_{jk}\bigg\|_p= \bigg|\bigg\langle\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}, g\bigg\rangle\bigg|=\bigg|\bigg \langle f, \sum_{k\in{\mathbb Z}^d} \langle\varphi_{jk}, g\rangle \widetilde\varphi_{jk}\bigg\rangle\bigg|. \end{equation} It follows from the H\"older inequality and Lemmas~\ref{prop2} and~\ref{prop1Lq} that \begin{equation}\label{eq+eq2} \begin{split} \bigg|\bigg \langle f, \sum_{k\in{\mathbb Z}^d} \langle\varphi_{jk}, g\rangle \widetilde\varphi_{jk}\bigg\rangle\bigg|&\le \|f\|_p\Big\| \sum_{k\in{\mathbb Z}^d} \langle\varphi_{jk}, g\rangle \widetilde\varphi_{jk} \Big\|_q\\ &\le C_{\widetilde\varphi, q}\|f\|_p\bigg( \sum_{k\in{\mathbb Z}^d}| \langle\varphi_{jk}, g\rangle|^q\bigg)^{1/q} \le C_{\widetilde\varphi, q}{ C'}_{\varphi, q}\|f\|_p. \end{split} \end{equation} Thus, combining \eqref{eq+eq1} and~\eqref{eq+eq2}, we obtain~(\ref{eq0}).~$\Diamond$ Similarly, one can prove the following generalization of Lemma~\ref{lemBound} to the limiting cases $p=1,\infty$. \noindent\textbf{Lemma~\ref{lemBound}$\,'$} \emph{ Let $f\in L_p$, $p=1$ or $p=\infty$, $j\in {{\Bbb N}}$. Suppose that} \noindent (i) {\it ${\varphi}\in L_1$ and $\widetilde{\varphi}\in \mathcal{L}_\infty$ in the case $p=1$;} \noindent (ii) {\it ${\varphi}\in \mathcal{L}_\infty$ and $\widetilde{\varphi}\in L_1$ in the case $p=\infty$.} \noindent\emph{Then \begin{equation*} \bigg\|\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C\Vert f\Vert_p, \end{equation*} where $C$ does not depend on $f$ and $j$.} \begin{lem}\label{corE} Let $N\in {{\Bbb Z}}_+$, $\widetilde\varphi\in \mathcal{S}_N'$, $\varphi\in { \mathcal{ B}}\cup { \mathcal{ L}}_2$, and let $\widetilde\varphi$ and $\varphi$ be strictly compatible. If a function $f\in L_2$ is such that its Fourier transform is supported in $\{\xi:\ |M^{*-j}\xi|<\delta\}$, where $\delta$ is from Definition~\ref{d1}, then \begin{equation} f=\sum_{k\in{\mathbb Z}^d} \langle \widehat{f}, \widehat{\widetilde\varphi_{jk}}\rangle \varphi_{jk}\quad a.e. \label{203} \end{equation} \end{lem} {\bf Proof.} In the case ${\varphi}\in \mathcal{B}$, the proof of the lemma follows from~(\ref{8}). In the case ${\varphi}\in \mathcal{L}_2$, it follows from~\cite[eq.~(4.15)]{KS1} that~(\ref{8}) holds and, therefore, one has~\eqref{203}.~$\Diamond$ Let $\mathcal{B}_p^{\sigma}$, $1\le p\le\infty$, ${\sigma}>0$, denote the set of all entire functions $f$ on ${\Bbb C}^d$ which are of (radial) exponential type ${\sigma}$ and whose restrictions to ${\Bbb R}^d$ belong to $L_p$. \begin{lem}\label{lem1} {\sc \cite[Theorem~3]{Wil}} Let $n\in {{\Bbb N}}$, $1\le p\le\infty$, $0<h<2\pi/{\sigma}$, $|\xi|=1$, and $P_{\sigma}\in \mathcal{B}_p^{\sigma}$.
Then $$ \Vert D^{n,\xi} P_{\sigma}\Vert_p\le \(\frac{{\sigma}}{2\sin ({\sigma} h/2)}\)^n \Vert \Delta_{h\xi}^n P_{\sigma}\Vert_p, $$ where $$ D^{n,\xi} f(x)=\mathcal{F}^{-1}((i\cdot,\xi)^n \widehat{f}(\cdot))(x). $$ \end{lem} \begin{coro}\label{coroNS} In terms of Lemma~\ref{lem1}, we have \begin{equation}\label{NS++} \Vert P_{\sigma}\Vert_{\dot W_p^n}\le C{\sigma}^n\omega_n(P_{\sigma},{\sigma}^{-1})_p, \end{equation} where $C$ does not depend on $P_{\sigma}$. \end{coro} {\bf Proof.} By Lemma~\ref{lem1}, for any $1\le j\le d$, we have that \begin{equation*}\label{NS} \bigg\Vert \frac{\partial^n}{\partial x_j^n} P_{\sigma}\bigg\Vert_{p}\le C{\sigma}^n\Vert \Delta_{{\sigma}^{-1} e_j}^n P_{\sigma}\Vert_p\le C{\sigma}^n\omega_n(P_{\sigma},{\sigma}^{-1})_p, \end{equation*} which obviously implies~\eqref{NS++}.~$\Diamond$ We need several basic properties of the modulus of smoothness (see, e.g.,~\cite[Ch.~4]{Nik}). \begin{lem}\label{lemmod} Let $f,g\in L_p$, $1\le p\le \infty$, and $n\in {{\Bbb N}}$. Then for any $\delta>0$, we have \noindent {\rm (i)} $\omega_n(f+g,\delta)_p\le \omega_n(f,\delta)_p+\omega_n(g,\delta)_p$; \noindent {\rm (ii)} $\omega_n(f,\delta)_p\le 2^n\Vert f\Vert_p$; \noindent {\rm (iii)} $\omega_n(f,\lambda\delta)_p\le (1+\lambda)^n\omega_n(f,\delta)_p,\, \lambda>0$. \end{lem} Let us also recall the Jackson-type theorem in $L_p$ (see, e.g.,~\cite[Theorem 5.2.1 (7)]{Nik} or~\cite[5.3.2]{Timan}). \begin{lem}\label{lemJ} Let $f\in L_p$, $1\le p\le \infty$, $n\in {{\Bbb N}}$, and ${\sigma}>0$. Then there exists $P_{\sigma}\in \mathcal{B}_p^{\sigma}$ such that \begin{equation}\label{J} \Vert f-P_{\sigma}\Vert_p\le C\omega_n(f,1/{\sigma})_p, \end{equation} where $C$ is a constant independent of $f$ and $P_{\sigma}$. \end{lem} \section{Main results} Our main results are based on the following lemma. \begin{lem}\label{lemJack} Let $1\le p\le \infty$, $n\in {{\Bbb N}}$, $\varphi\in ({\mathcal{ B}}\cup \mathcal{L}_2)\cap L_p$, and $\widetilde\varphi\in L_q$, $1/p+1/q=1$. Suppose that the functions ${\varphi}$ and $\widetilde{\varphi}$ are strictly compatible and there exists a constant $c=c(n,p,d,{\varphi},\widetilde{\varphi})$ such that \begin{equation}\label{eq0++} \bigg\|\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le c\Vert f\Vert_p. \end{equation} Then \begin{equation}\label{eqJ1} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C \omega_n\(f,\Vert M^{-j}\Vert\)_p, \end{equation} where $C$ does not depend on $f$ and $j$. \end{lem} {\bf Proof.} Let $g\in L_p\cap L_2$ be such that \begin{equation}\label{eq*} \Vert f-g\Vert_p\le \omega_n\(f,\Vert M^{*-j}\Vert\)_p. \end{equation} Using~\eqref{eq0++} and~\eqref{eq*}, we have \begin{equation}\label{eqJ2} \begin{split} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p &\le \Vert f-g\Vert_p+\bigg\|g-\sum_{k\in{\mathbb Z}^d} \langle g,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p+\bigg\|\sum_{k\in{\mathbb Z}^d} \langle g-f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\\ &\le (1+c)\omega_n\(f,\Vert M^{*-j}\Vert\)_p+\bigg\|g-\sum_{k\in{\mathbb Z}^d} \langle g,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p. 
\end{split} \end{equation} By Lemma~\ref{lemJ}, for any $n\in {{\Bbb N}}$ and $g\in L_p\cap L_2$ there exists a function $J_n(g)\,:\, {\Bbb R}^d\to \mathbb{C}$ such that $\operatorname{supp}\widehat{J_n(g)}\subset \{|\xi|<{\delta}{\Vert M^{*-j}\Vert^{-1}}\}$ and \begin{equation}\label{eqJ3} \Vert g-J_n(g)\Vert_p \le C_1\omega_n\(g,\delta^{-1}\Vert M^{*-j}\Vert\)_p, \end{equation} where $\delta$ is from Definition~\ref{d1} and $C_1$ does not depend on $g$ and $j$. By Lemma~\ref{corE}, taking into account that $ \langle \widehat{J_n(g)},\widehat{\widetilde\varphi_{jk}}\rangle= \langle J_n(g),\widetilde\varphi_{jk}\rangle$ and $\{|\xi|<{\delta}{\Vert M^{*-j}\Vert^{-1}}\}\subset \{|M^{*-j}\xi|<\delta\}$, we have \begin{equation}\label{eqJ4} \sum_{k\in{\mathbb Z}^d} \langle J_n(g),\widetilde\varphi_{jk}\rangle \varphi_{jk}=J_n(g). \end{equation} Thus, using~\eqref{eqJ3}, \eqref{eqJ4}, and~\eqref{eq0++}, we derive \begin{equation}\label{eqJ5} \begin{split} \bigg\|g-\sum_{k\in{\mathbb Z}^d} \langle g,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p &\le \Vert g-J_n(g)\Vert_p+\bigg\|\sum_{k\in{\mathbb Z}^d} \langle g-J_n(g),\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\\ &\le (1+c)\Vert g-J_n(g)\Vert_p\le C_2\omega_n\(g,\delta^{-1}\Vert M^{*-j}\Vert\)_p. \end{split} \end{equation} Next, using Lemma~\ref{lemmod} and~\eqref{eq*}, we get \begin{equation}\label{eqJ6} \begin{split} \omega_n\(g,\delta^{-1}\Vert M^{*-j}\Vert\)_p&\le (1+\delta^{-1})^n\omega_n\(g,\Vert M^{*-j}\Vert\)_p\\ &\le C_3\(\Vert f-g\Vert +\omega_n\(f,\Vert M^{*-j}\Vert\)_p\)\le 2C_3\omega_n\(f,\Vert M^{*-j}\Vert\)_p\\ &=C_4\omega_n\(f,\Vert M^{-j}\Vert\)_p. \end{split} \end{equation} Finally, combining~\eqref{eqJ2}, \eqref{eqJ5}, and~\eqref{eqJ6}, we obtain~\eqref{eqJ1}.~$\Diamond$ The following statement is a multivariate analogue of Theorem~7 in~\cite{Sk1}. It is also a counterpart of Proposition~\ref{theoQjcomp} in some sense. \begin{theo}\label{thJack} Let $f\in L_p$, $1<p<\infty$, and $n\in {{\Bbb N}}$. Suppose ${\varphi}\in \mathcal{B}$, $\widetilde\varphi\in \mathcal{B}\cup \mathcal{L}_q$, $1/p+1/q=1$, and $\varphi$ and $\widetilde\varphi$ are strictly compatible. Then \begin{equation} \label{eqJ1++} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C \omega_n\(f,\Vert M^{-j}\Vert\)_p, \end{equation} where $C$ does not depend on $f$ and $j$. \end{theo} {\bf Proof.} The proof follows from Lemma~\ref{lemJack} and Lemma~\ref{lemBound}. $\Diamond$ \begin{rem} \emph{By \eqref{00++}, it is easy to see that in~\eqref{eqJ1++} and in further similar estimates, the modulus $\omega_n\(f,\Vert M^{-j}\Vert\)_p$ can be replaced by $\omega_n\(f,\vartheta^{-j}\)_p$, where $\vartheta$ is a positive number smaller in modulus than any eigenvalue of $M$. Note that $\| M^{-1}\|$ may be essentially bigger than $\vartheta^{-1}$.} \end{rem} In the following result, we give an analogue of Theorem~\ref{thJack} for the limiting cases $p=1$ and $p=\infty$. \noindent \textbf{Theorem~\ref{thJack}\,$'$} {\it Let $f\in L_p$, $p=1$ or $p=\infty$, and $n\in {{\Bbb N}}$. Suppose the functions $\varphi$ and $\widetilde\varphi$ are strictly compatible and} \noindent (i) {\it ${\varphi}\in \mathcal{B}\cap L_1$ and $\widetilde{\varphi}\in \mathcal{L}_\infty$ in the case $p=1$;} \noindent (ii) {\it ${\varphi}\in \mathcal{L}_\infty$ and $\widetilde{\varphi}\in L_1$ in the case $p=\infty$. 
\noindent Then \begin{equation*} \bigg\|f-\sum_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C \omega_n\(f,\Vert M^{-j}\Vert\)_p, \end{equation*} where $C$ does not depend on $f$ and $j$.} \textbf{Proof.} The proof follows from Lemmas~\ref{lemJack} and~\ref{lemBound}$\,'$.~$\Diamond$ \begin{rem} It is easy to see that Theorem~\ref{thJack}\,$'$ is valid if we replace the condition ${\varphi}\in \mathcal{B}\cap L_1$ by ${\varphi}\in \mathcal{L}_2$. \end{rem} The next theorem is the main result of the paper. This result essentially extends the classes of functions ${\varphi}$ and $\widetilde{\varphi}$ from Theorem~\ref{thJack} and can be considered as a counterpart of Proposition~\ref{theoQjnew}. \begin{theo}\label{th2} Let $f\in L_p$, $1<p<\infty$, and $n\in\n$. Suppose $\varphi\in\cal B$, ${\rm supp\,} \widehat\varphi\subset B_{1-\varepsilon}$ for some $\varepsilon\in (0,1)$, $\widehat\varphi\in C^{n+d+1}(B_\delta)$ for some $\delta>0$; $\widetilde\varphi \in\mathcal{B}\cup \mathcal{L}_q$, $1/p+1/q=1$, $\widehat{\widetilde\varphi}\in C^{n+d+1}(B_\delta)$ and $D^{\beta}(1-\widehat\varphi\overline{\widehat{\widetilde\varphi}})({\bf 0}) = 0$ for all $\beta\in{\mathbb Z}^d_+$, $[\beta]<n$. Then \begin{equation}\label{1} \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C \omega_n\(f,\|M^{-j}\|\)_p, \end{equation} where $C$ does not depend on $f$ and $j$. \end{theo} \textbf{Proof.} First, let us prove that for any $f\in W_p^n$ \begin{equation}\label{2} \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C_{0}\|f\|_{\dot W_p^n}\|M^{-j}\|^n, \end{equation} where $C_{0}$ does not depend on $f$ and $j$. Choose $0<\delta'<\delta''$ such that $\widehat\varphi(\xi)\ne0$ on $\{|\xi|\le\delta'\}$ and $\delta'\le\delta$. Set $$ F(\xi)= \begin{cases} \displaystyle\frac{1-\overline{\widehat\varphi(\xi)}\widehat{\widetilde\varphi}(\xi)}{\overline{\widehat\varphi(\xi)}} &\mbox{if $|\xi|\le\delta'$,} \\ \displaystyle 0 &\mbox{if $|\xi|\ge\delta''$} \end{cases} $$ and extend this function so that $F\in C^{n+d+1}({\mathbb R}^d)$. Define $\widetilde\psi$ by $\widehat{\widetilde\psi}=F$. Obviously, the function $\widetilde\psi$ is continuous and $\widetilde\psi(x)=O(|x|^{-\gamma})$ as $|x|\to \infty$, where $\gamma>n+d$. Since $\widetilde\psi\in \cal B$ and $\widetilde\varphi\in \mathcal{L}_q\cup \mathcal{B}$, by Lemma~\ref{lemBound}, we have \begin{equation*} \begin{split} \bigg\Vert\sum_{k\in{\mathbb Z}^d}\langle f, \widetilde\varphi_{jk}+\widetilde\psi_{jk}\rangle {\varphi}_{jk}\bigg\Vert_p&\le \bigg\Vert\sum_{k\in{\mathbb Z}^d}\langle f, \widetilde\varphi_{jk}\rangle {\varphi}_{jk}\bigg\Vert_p+ \bigg\Vert\sum_{k\in{\mathbb Z}^d}\langle f, \widetilde\psi_{jk}\rangle {\varphi}_{jk}\bigg\Vert_p\le C_1\|f\|_p. \end{split} \end{equation*} Now, taking into account that $\overline{\widehat\varphi(\xi)}(\widehat{\widetilde\varphi}(\xi)+\widehat{\widetilde\psi}(\xi))=1$ whenever $|\xi|\le\delta'$, we obtain from Lemma~\ref{lemJack} that for every $f\in W_p^n$ $$ \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}+\widetilde\psi_{jk}\rangle \varphi_{jk}\bigg\|_p \le C_2\omega_n\(f, \|M^{-j}\|\)_p\le C_3\|f\|_{\dot W_p^n}\|M^{-j}\|^n.
$$ Thus, to prove~\eqref{2}, it remains to verify that for $f\in W_p^n$ \begin{equation}\label{dop1} \bigg\|\sum\limits_{k\in{\mathbb Z}^d}\langle f,\widetilde\psi_{jk}\rangle \varphi_{jk}\bigg\|_p \le C_4\|f\|_{\dot W_p^n}\|M^{-j}\|^n. \end{equation} Let $k\in{\mathbb Z}^d$, $z\in [-1/2,1/2]^d-k$, $y=M^{-j}z$. Since $D^{\beta}\widehat{\widetilde\psi}(0) = 0$ whenever $[\beta]<n$, we have $$ \int\limits_{{\mathbb R}^d}y^\alpha\widetilde\psi_{jk}(y)\,dy=0,\quad j\in\z,\quad \alpha\in{\mathbb Z}^d_+,\quad [\alpha]< n. $$ Hence, due to Taylor's formula with the integral remainder, \begin{equation*} \begin{split} |\langle f,\widetilde\psi_{jk}\rangle |&= \bigg|\int\limits_{{\mathbb R}^d}f(x)\overline{\widetilde\psi_{jk}(x)}\,dx\bigg| \\ &=\bigg|\int\limits_{{\mathbb R}^d}\, \overline{\widetilde\psi_{jk}(x)}\bigg(\sum\limits_{\nu=0}^{n-1}\frac{1}{\nu!}\left( (x_1-y_1)\partial_{1} + \dots + (x_d-y_d)\partial_{d} \right)^\nu f(y) \\ &\quad\quad\quad\quad+\int\limits_0^1\frac{(1-t)^{n-1}}{(n-1)!} \Big((x_1-y_1)\partial_{1} + \dots + (x_d-y_d)\partial_{d} \Big)^n f(y+t(x-y))\,dt\bigg)\,dx \bigg| \\ &\le \int\limits_{{\mathbb R}^d}\,dx\,|x-y|^n|\widetilde\psi_{jk}(x)|\,\int\limits_0^1\sum\limits_{[\beta]=n}|D^{\beta}f(y+t(x-y))|\,dt. \end{split} \end{equation*} From this, using H\"older's inequality and taking into account that $$ |x-y|^n\le\|M^{-j}\|^n|M^j x-z|^n, $$ $$ |\widetilde\psi_{jk}(x)|\le\frac{C_5 m^{j/2}}{{(1+|M^jx+k|)^\gamma}}\le \frac{C_6 m^{j/2}}{(1+|M^jx-z|)^\gamma}, $$ we obtain \begin{equation}\label{dlin} \begin{split} |\langle f,\widetilde\psi_{jk}\rangle |&\le C_6 m^{j/2}\|M^{-j}\|^n \int\limits_{{\mathbb R}^d}\,dx \frac{|M^j x-z|^n}{(1+|M^jx-z|)^\gamma} \int\limits_0^1\sum\limits_{[\beta]=n}|D^{\beta}f(y+t(x-y))|\,dt \\ &\le C_6 m^{j/2}\|M^{-j}\|^n \bigg(\int\limits_{{\mathbb R}^d}\frac{|M^j x-z|^{n}}{(1+|M^jx-z|)^\gamma}\,dx\bigg)^{1/q} \\ &\quad\quad\quad\quad\times\Bigg(\int\limits_{{\mathbb R}^d}\,\frac{|M^j x-z|^{n} dx}{(1+|M^jx-z|)^\gamma}\bigg(\int\limits_0^1\, \sum\limits_{[\beta]=n}|D^{\beta}f(y+t(x-y))|\,dt\bigg)^p\Bigg)^{\frac 1 p} \\ &=C_6 m^{\frac j2-\frac jq}\|M^{-j}\|^n \bigg(\int\limits_{{\mathbb R}^d}\frac{|x-z|^{n} \,dx}{(1+|x-z|)^\gamma}\bigg)^{1/q} \\ &\quad\quad\quad\quad\times\Bigg(\int\limits_{{\mathbb R}^d}\,\frac{|M^j x-z|^{n} dx}{(1+|M^jx-z|)^\gamma}\bigg(\int\limits_0^1\, \sum\limits_{[\beta]=n}|D^{\beta}f(y+t(x-y))|\,dt\bigg)^p\Bigg)^{\frac 1 p} \\ &\le C_7 m^{\frac j2-\frac jq}\|M^{-j}\|^n \Bigg(\int\limits_{{\mathbb R}^d}\,\frac{|M^j(x-y)|^{n} dx}{(1+|M^j(x-y)|)^\gamma}\int\limits_0^1\,\sum\limits_{[\beta]=n}|D^{\beta}f(y+t(x-y))|^p\,dt\Bigg)^{\frac 1 p} \\ &=C_7 m^{\frac j2-\frac jq}\|M^{-j}\|^n \Bigg(\int\limits_{{\mathbb R}^d}\,\frac{|M^ju|^n du}{(1+|M^ju|)^\gamma}\int\limits_0^1\,\sum\limits_{[\beta]= n}|D^{\beta}f(y+tu)|^p\,dt\Bigg)^{\frac 1 p}. 
\end{split} \end{equation} Next, it follows from~\eqref{dlin} and Lemma~\ref{prop2} that \begin{equation*} \begin{split} \bigg\|\sum\limits_{k\in\,{\mathbb Z}^d}\langle f,\widetilde\psi_{jk}\rangle\varphi_{jk}\bigg\|_p^p &\le C_8 m^{p(\frac j2-\frac jp)}\sum_{k\in{\mathbb Z}^d}|\langle f,\widetilde\psi_{jk}\rangle |^p \\ &=C_8m^{p(\frac j2-\frac jp)}\sum_{k\in{\mathbb Z}^d}\int\limits_{[-1/2,1/2]^d-k}\,dz|\langle f,\widetilde\psi_{jk}\rangle |^p \\ &\le C_9\|M^{-j}\|^{pn} \int\limits_{{\mathbb R}^d}dz \int\limits_{{\mathbb R}^d}\,\frac{|M^ju|^n du}{(1+|M^ju|)^\gamma}\int\limits_0^1\,\sum\limits_{[\beta]=n}|D^{\beta}f(M^{-j}z+tu)|^p\,dt \\ &=C_9 \|M^{-j}\|^{pn} \int\limits_{{\mathbb R}^d}dz \int\limits_{{\mathbb R}^d}\,\frac{|u|^n du}{(1+|u|)^\gamma}\int\limits_0^1\,\sum\limits_{[\beta]=n}|D^{\beta}f(z+tM^{-j}u)|^p\,dt \\ &=C_9 \|M^{-j}\|^{pn} \int\limits_{{\mathbb R}^d}\,\frac{|u|^n du}{(1+|u|)^\gamma}\int\limits_0^1\,dt \int\limits_{{\mathbb R}^d}\sum\limits_{[\beta]=n}|D^{\beta}f(z+tM^{-j}u)|^p\,dz \\ &=C_9 \|M^{-j}\|^{pn} \int\limits_{{\mathbb R}^d}\,\frac{|u|^n du}{(1+|u|)^\gamma}\int\limits_0^1\,dt\int\limits_{{\mathbb R}^d}\sum\limits_{[\beta]=n}|D^{\beta}f(z)|^p\,dz \\ &\le C_{10}\|M^{-j}\|^{pn}\|f\|_{\dot W^n_p}^p. \end{split} \end{equation*} This implies~\eqref{dop1} and, therefore,~\eqref{2}. Now let us prove inequality~\eqref{1}. By Lemma~\ref{lemJ}, there exists $P_{\sigma}\in \mathcal{B}_p^{\sigma}$, ${\sigma}=1/\|M^{-j}\|$, such that \begin{equation}\label{J} \Vert f-P_{\sigma}\Vert_p\le C_{11}\omega_n(f,1/{\sigma})_p. \end{equation} Using Lemma~\ref{lemBound}, we obtain \begin{equation}\label{1+} \begin{split} \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C_{12}\Vert f-P_{\sigma}\Vert_p+\bigg\|P_{\sigma}-\sum\limits_{k\in{\mathbb Z}^d} \langle P_{\sigma},\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p. \end{split} \end{equation} Next, by~\eqref{2} and Corollary~\ref{coroNS}, we derive \begin{equation}\label{2+} \begin{split} \bigg\|P_{\sigma}-\sum\limits_{k\in{\mathbb Z}^d} \langle P_{\sigma},\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C_0\|P_{\sigma}\|_{\dot W_p^n}\|M^{-j}\|^n\le C_{13}\omega_n\(P_{\sigma},1/{\sigma}\)_p. \end{split} \end{equation} Finally, combining~\eqref{J}--\eqref{2+} and using Lemma~\ref{lemmod}, we get \begin{equation*} \begin{split} \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p&\le C_{14}\(\Vert f-P_{\sigma}\Vert_p+\omega_n(P_{\sigma},1/{\sigma})_p\)\\ &\le C_{15}\(\Vert f-P_{\sigma}\Vert_p+\omega_n(f,1/{\sigma})_p\)\le C\omega_n\(f,\|M^{-j}\|\)_p, \end{split} \end{equation*} which implies~\eqref{1}.~$\Diamond$ In the following result, we give an analogue of Theorem~\ref{th2} for the limiting cases $p=1$ and $p=\infty$. \noindent \textbf{Theorem~\ref{th2}\,$'$} {\it Let $f\in L_p$, $p=1$ or $p=\infty$, and $n\in {{\Bbb N}}$. 
Suppose the functions ${\varphi}$ and $\widetilde{\varphi}$ are such that $\widehat\varphi, \widehat{\widetilde\varphi}\in C^{n+d+1}(B_\delta)$ for some $\delta>0$, $D^{\beta}(1-\widehat\varphi\overline{\widehat{\widetilde\varphi}})({\bf 0}) = 0$ for all $\beta\in{\mathbb Z}^d_+$, $[\beta]<n$, ${\varphi}\in \mathcal{B}$, \, ${\rm supp\,} \widehat\varphi\subset B_{1-\varepsilon}$ for some $\varepsilon\in (0,1)$, and} \noindent (i) {\it ${\varphi}\in L_1$ and $\widetilde{\varphi}\in \mathcal{L}_\infty$ in the case $p=1$;} \noindent (ii) {\it ${\varphi}\in \mathcal{L}_\infty$ and $\widetilde{\varphi}\in L_1$ in the case $p=\infty$. } \noindent{\it Then \begin{equation*} \bigg\|f-\sum\limits_{k\in{\mathbb Z}^d} \langle f,\widetilde\varphi_{jk}\rangle \varphi_{jk}\bigg\|_p\le C \omega_n\(f,\|M^{-j}\|\)_p, \end{equation*} where $C$ does not depend on $f$ and $j$. } \textbf{Proof.} The proof is similar to the proof of Theorem~\ref{th2}. We only note that one needs to use Lemma~\ref{lemBound}\,$'$ instead of Lemma~\ref{lemBound}. Note also that, in the case $p=\infty$, using the first inequality in~\eqref{dlin} and Lemma~\ref{prop2}, we get \begin{equation*} \begin{split} \bigg\|\sum\limits_{k\in\,{\mathbb Z}^d}\langle f,\widetilde\psi_{jk}\rangle\varphi_{jk}\bigg\|_\infty &\le C_1 m^{\frac j2}\sup_{k\in{\mathbb Z}^d}|\langle f,\widetilde\psi_{jk}\rangle |\\ &\le C_2 m^{j}\|M^{-j}\|^n \Vert f\Vert_{\dot W_\infty^n} \sup_{k\in{\mathbb Z}^d} \int\limits_{{\mathbb R}^d} \frac{|M^j x-z|^n}{(1+|M^jx-z|)^\gamma}\,dx\\ &\le C_3 \|M^{-j}\|^n \Vert f\Vert_{\dot W_\infty^n}. \end{split} \end{equation*} Theorem~\ref{th2}\,$'$ is proved.~$\Diamond$ \begin{rem} Note that any function $\widetilde{\varphi}\in L_p^0$, $1\le p\le \infty$, satisfies the assumptions $\widetilde{\varphi}\in \mathcal{L}_p\subset L_p$ and $\widehat{\widetilde\varphi}\in C^{n+d+1}(B_\delta)$, and hence can be used in Theorems~\ref{th2} and~\ref{th2}\,$'$. \end{rem} \section{Special Cases} I. Let us start with the classical multivariate Kotelnikov-type decomposition that can be obtained by using the function $$ \operatorname{sinc}(x):=\prod_{\nu=1}^d \frac{\sin(\pi x_\nu)}{\pi x_\nu},\quad x\in {\Bbb R}^d. $$ In what follows, we restrict ourselves to the case $1<p<\infty$. \begin{prop} \label{pr1 } Let $f\in L_p$, $1<p<\infty$, and let $U$ be a bounded measurable subset of ${\mathbb R}^d$. Then \begin{equation}\label{101} \bigg\Vert f-\sum_{k\in{\mathbb Z}^d}\frac{m^j}{\operatorname{mes} U}\int\limits_{M^{-j}{U}}f(-M^{-j}k+t)\,dt\,\operatorname{sinc}(M^j\cdot+k)\bigg\Vert_p\le C \omega_1(f,\|M^{-j}\|)_p, \end{equation} where the constant $C$ does not depend on $f$ and $j$. If, in addition, $U$ is symmetric with respect to the origin, then in~\eqref{101} the modulus of continuity $\omega_1(f,\|M^{-j}\|)_p$ can be replaced by the second-order modulus of smoothness $\omega_2(f,\|M^{-j}\|)_p$. \end{prop} \textbf{Proof.} We use Theorem~\ref{th2} for $$ \varphi(x)=\operatorname{sinc}(x)\quad\text{and}\quad\widetilde{\varphi} (x)=\frac1{\operatorname{mes} U}\chi_U(x). $$ Then, taking into account that $\widehat{\widetilde\varphi}({\bf0})=1$ and $$ \langle f,\widetilde\varphi_{jk}\rangle=\frac{m^{j/2}}{\operatorname{mes} U}\int\limits_{M^{-j}U}f(-M^{-j}k+t)\,dt, $$ we can verify that all assumptions of Theorem~\ref{th2} are satisfied with $n=1$, which provides inequality~\eqref{101}. Now, let $U$ be symmetric with respect to the origin. In this case, $$ \frac{\partial}{\partial x_j}(1-\widehat\varphi\overline{\widehat{\widetilde\varphi}})(\textbf{0})=-\frac{\partial \overline{\widehat{\widetilde\varphi}}}{\partial x_j}(\textbf{0})=-\frac{2\pi i}{\operatorname{mes} U}\int_U x_j\, dx=0,\quad 1\le j\le d. $$ Therefore, all assumptions of Theorem~\ref{th2} are satisfied with $n=2$. $\Diamond$ \begin{rem} Relation~(\ref{101}) gives a general answer to the question posed in~\cite{BBSV} concerning approximation properties of the sampling series given by $$ \sum_{k\in{\mathbb Z}^d}\frac{m^j}{\operatorname{mes} U}\int\limits_{M^{-j}{U}}f(-M^{-j}k+t)\,dt\,\operatorname{sinc}(M^j\cdot+k) $$ in the spaces $L_p$ for $1<p<\infty$. \end{rem} \begin{rem} It follows from Theorem~\ref{th2}\,$'$ that Proposition~\ref{pr1 } is valid for all $f\in L_p$, $1\le p\le\infty$, if we replace $\operatorname{sinc}(x)$ by $\operatorname{sinc}^2(x)$ in~\eqref{101}. In particular, this gives an improvement of estimate~\eqref{intrImp}. The same conclusion holds for all propositions presented below. \end{rem} II. Now let us show that using an appropriate linear combination of the function $\operatorname{sinc}(x)$ rather than this function itself can provide better rates of approximation by the corresponding sampling operator. \begin{prop} \label{pr2} Let $f\in L_p$, $1<p<\infty$, $n\in {{\Bbb N}}$, and let $U$ be a bounded measurable subset of ${\mathbb R}^d$. Then there exists a finite set of numbers $\{a_l\}_{l\in {{\Bbb Z}}^d}\subset {\Bbb C}$ depending only on $d$, $n$, and $U$ such that for \begin{equation}\label{eqfunc} \varphi(x)=\sum_l a_l \operatorname{sinc}(x+l) \end{equation} we have \begin{equation}\label{eqcor1++} \bigg\Vert f-\sum_{k\in{\mathbb Z}^d} \frac{m^{j}}{\operatorname{mes} U} \int\limits_{M^{-j}U}f(-M^{-j}k+t)\,dt\,\varphi(M^j\cdot+k)\bigg\Vert_p\le C \omega_n(f,\|M^{-j}\|)_p, \end{equation} where the constant $C$ does not depend on $f$ and $j$. \end{prop} \textbf{Proof.} Let $\widetilde{\varphi} (x)=\frac1{\operatorname{mes} U}\chi_U(x)$. Find complex numbers $c_\alpha$, $\alpha\in{\mathbb Z}^d_+$, $[\alpha]< n$, satisfying $$ c_{\bf0}=1, \quad\sum\limits_{{\bf0}\le\alpha\le\beta}\left({\beta\atop\alpha}\right) \overline{D^{\beta-\alpha}\widehat{\widetilde\varphi}({\bf0})}c_\alpha=0\quad \forall \beta\in{\mathbb Z}^d_+,\, {\bf0}<[\beta]< n $$ and set \begin{equation} T(\xi)=\sum_{{\bf0}\le[\alpha]\le n}c_\alpha \prod_{j=1}^dg_{\alpha_j}(\xi_j), \label{102} \end{equation} where $g_k$ is a trigonometric polynomial such that $\frac{d^l g_k}{dt^l}(0)=\delta_{kl}$ for all $l=0,\dots, k$. It is not difficult to deduce explicit recursive formulas for finding such polynomials (see, e.g.,~\cite[Sec. 3.4]{KPS} or~\cite{Sk3}). Obviously, $D^\alpha T({\bf0})=c_\alpha$ and $D^\alpha(T\cdot\widehat\varphi)({\bf0})=D^\alpha(T\cdot\chi_{[-1/2,1/2]^d})({\bf0})=c_\alpha$ for all $\alpha\in{\mathbb Z}^d_+$, $[\alpha]< n$. If now $T(\xi)=\sum_l a_l e^{2\pi i(l,\xi)}$, then setting $ \varphi(x)=\sum_l a_l \operatorname{sinc}(x+l), $ we obtain that $D^{\beta}(1-\widehat\varphi\overline{\widehat{\widetilde\varphi}})(\textbf{0}) = 0$ for all $\beta\in{\mathbb Z}^d_+$, $[\beta]<n$. Thus, due to Theorem~\ref{th2}, we have inequality~\eqref{eqcor1++}. $\Diamond$ Let us write explicit formulas for the function~\eqref{eqfunc} and for the polynomial $T$ given by~\eqref{102} in the cases $d=1, 2$ and $n=4$. \textbf{Example 1.} First, let $d=1$, $U=[-1/2, 1/2]$, and $M=2$. Then $$ \widehat{\widetilde\varphi}(0)=1,\ {\widehat{\widetilde\varphi}}'(0)=0,\ {\widehat{\widetilde\varphi}}''(0)=-\frac{\pi^2}{3},\ {\widehat{\widetilde\varphi}}'''(0)=0, $$ which yields $ c_0=1,\ c_1=0,\ c_2=\frac{\pi^2}{3},\ c_3=0, $ and $$ T(\xi)=1+\frac{\pi^2}{3}g_2(\xi), $$ where $$ g_2(u)=-\frac1{8\pi^2}(2-5e^{2\pi i u}+4e^{4\pi i u}-e^{6\pi iu}). $$ Hence $$ \varphi(x)=\frac{11}{12}\operatorname{sinc}(x)+\frac{5}{24}\operatorname{sinc}(x+1)-\frac{1}{6}\operatorname{sinc}(x+2)+\frac{1}{24}\operatorname{sinc}(x+3) $$ and, by Proposition~\ref{pr2}, we have $$ \bigg\Vert f-\sum_{k\in {{\Bbb Z}}}2^j\int\limits_{[-2^{-j-1},2^{-j-1}]}f(-2^{-j}k+t)\,dt\,\varphi(2^j\cdot+k)\bigg\Vert_p\le C \omega_4(f,2^{-j})_p. $$ \textbf{Example 2.} Now let $d=2$ and $U=[-1/2,1/2]^2$. Simple calculations show that in this case one has $$ T(\xi)=g_0(\xi_1)g_0(\xi_2)+\frac{\pi^2}3g_2(\xi_1)g_0(\xi_2)+\frac{\pi^2}3g_0(\xi_1)g_2(\xi_2)=1+\frac{\pi^2}3(g_2(\xi_1)+g_2(\xi_2)), $$ and, therefore, \begin{equation*} \begin{split} {\varphi}(x_1,x_2)=\frac56\operatorname{sinc} x_1\operatorname{sinc} x_2&+\frac{\operatorname{sinc} x_2}{24}(5\operatorname{sinc}(x_1+1)-4\operatorname{sinc}(x_1+2)+\operatorname{sinc}(x_1+3))\\ &+\frac{\operatorname{sinc} x_1}{24}(5\operatorname{sinc}(x_2+1)-4\operatorname{sinc}(x_2+2)+\operatorname{sinc}(x_2+3)). \end{split} \end{equation*} It follows from~\eqref{eqcor1++} that \begin{equation*} \bigg\Vert f-\sum_{k\in{{\Bbb Z}}^2}m^j\int\limits_{M^{-j}[-1/2,1/2]^2}f(-M^{-j}k+t)\,dt\,\varphi(M^j\cdot+k)\bigg\Vert_p\le C \omega_4(f,\|M^{-j}\|)_p. \end{equation*} \textbf{Example 3.} Similarly to Example 2, if $d=2$ and $U=B_1$, taking into account that $$ \widetilde{\varphi} (x)=\frac{\Gamma(1+d/2)}{\pi^{d/2}}\chi_{B_1}(x),\quad \widehat{\widetilde{{\varphi}}}(\xi)=\Gamma(1+d/2)\frac{J_{d/2}(2\pi|\xi|)}{(\pi|\xi|)^{d/2}}, $$ where $J_\lambda$ is the Bessel function of the first kind of order $\lambda$, we obtain $$ T(\xi_1,\xi_2)=1-\pi^2(g_2(\xi_1)+g_2(\xi_2)) $$ and \begin{equation*} \begin{split} {\varphi}(x_1,x_2)=\frac32\operatorname{sinc} x_1\operatorname{sinc} x_2&-\frac{\operatorname{sinc} x_2}{8}(5\operatorname{sinc}(x_1+1)-4\operatorname{sinc}(x_1+2)+\operatorname{sinc}(x_1+3))\\ &-\frac{\operatorname{sinc} x_1}{8}(5\operatorname{sinc}(x_2+1)-4\operatorname{sinc}(x_2+2)+\operatorname{sinc}(x_2+3)). \end{split} \end{equation*} Hence, \begin{equation*} \bigg\Vert f-\sum_{k\in{{\Bbb Z}}^2}\frac{m^j}{\pi}\int\limits_{M^{-j} B_1}f(-M^{-j}k+t)\,dt\,\varphi(M^j\cdot+k)\bigg\Vert_p\le C \omega_4(f,\|M^{-j}\|)_p. \end{equation*} III. Another improvement of the estimate given in Proposition~\ref{pr1 } can be obtained by using an appropriate linear combination of the averaging operator rather than linear combinations of the function ${\varphi}(x)=\operatorname{sinc}(x)$. Thus, the following estimate is a trivial consequence of Proposition~\ref{pr2}: \begin{equation}\label{103} \bigg\Vert f-\sum_{k\in{\mathbb Z}^d}\sum_la_l\frac{m^j}{\operatorname{mes} U}\int\limits_{M^{-j}(U+l)}f(-M^{-j}k+t)\,dt\,\operatorname{sinc}(M^j\cdot+k)\bigg\Vert_p\le C \omega_n(f,\|M^{-j}\|)_p, \end{equation} where $a_l$ is the $l$-th coefficient of the polynomial $T$ defined by~(\ref{102}) and the constant $C$ does not depend on $f$ and $j$. Now let us obtain an analogue of~(\ref{103}) for other functions $\varphi$. \begin{prop} \label{pr3} Let $f\in L_p$, $1<p<\infty$, $n\in {{\Bbb N}}$, and let $U$ be a bounded measurable subset of ${\mathbb R}^d$.
Suppose the function $\varphi$ satisfies the conditions of Theorem~\ref{th2}. Then there exists a finite set of numbers $\{b_l\}_{l\in {{\Bbb Z}}^d}\subset {\Bbb C}$ depending only on $d$, $n$, $U$, and ${\varphi}$ such that \begin{equation}\label{107} \bigg\Vert f-\sum_{k\in{\mathbb Z}^d}\sum_lb_l\frac{m^{j}}{\operatorname{mes} U}\int\limits_{M^{-j}(U-l)}f(-M^{-j}k+t)\,dt\,\varphi(M^j\cdot+k)\bigg\Vert_p\le C \omega_n(f,\|M^{-j}\|)_p, \end{equation} where the constant $C$ does not depend on $f$ and $j$. \end{prop} \textbf{Proof.} Find complex numbers $c'_\alpha$, $\alpha\in{\mathbb Z}^d_+$, $[\alpha]< n$, satisfying $$ c'_{\bf0}=1, \quad\sum\limits_{{\bf0}\le\alpha\le\beta}\left({\beta\atop\alpha}\right) \overline{D^{\beta-\alpha}\widehat{\varphi}({\bf0})}c'_\alpha=0\quad \forall \beta\in{\mathbb Z}^d_+,\, {\bf0}<[\beta]< n. $$ Next, for $\widetilde{\varphi} (x)=\frac1{\operatorname{mes} U}\chi_U(x)$, we find complex numbers $c_\alpha$, $\alpha\in{\mathbb Z}^d$, $[\alpha]< n$, satisfying $$ c_{\bf0}=1, \quad\sum\limits_{{\bf0}\le\alpha\le\beta}\left({\beta\atop\alpha}\right) \overline{D^{\beta-\alpha}\widehat{\widetilde\varphi}({\bf0})}c_\alpha=c'_\beta\quad \forall \beta\in{\mathbb Z}^d_+,\, {\bf0}<[\beta]< n, $$ and set \begin{equation} Q(\xi)=\sum_{{\bf0}\le[\alpha]\le n}c_\alpha \prod_{j=1}^dg_{\alpha_j}(\xi_j), \label{108} \end{equation} where the polynomials $g_k$ are as in~(\ref{102}). If now $Q(\xi)=\sum\limits_l b_l e^{2\pi i(l,\xi)}$, then setting $$ \widetilde\psi(x)=\sum_l b_l \widetilde\varphi(x+l), $$ we obtain that $D^{\beta}(1-\overline{\widehat\varphi}{\widehat{\widetilde\psi}})(\textbf{0}) = 0$ for all $\beta\in{\mathbb Z}^d_+$, $[\beta]<n$. Thus, due to Theorem~\ref{th2}, we have~\eqref{107}.~$\Diamond$ \textbf{Example 4.} In this example, we consider radial functions ${\varphi}(x)=R_\delta(x)$ given by the Bochner--Riesz type kernel $$ R_\delta(x):=\frac{\Gamma(1+\delta)}{\pi^\delta}\frac{J_{d/2+\delta}(2\pi|x|)}{|x|^{d/2+\delta}}. $$ Some results on approximation properties of sampling expansions generated by radial functions with a diagonal matrix $M$ can be found in~\cite{BFS}. Let also $d=2$, $n=4$, and $U=B_1$. By analogy with the above examples, we can compute that the polynomial $Q(\xi)$ given in~\eqref{108} has the form $$ Q(\xi)=1+(2\delta-\pi^2)(g_2(\xi_1)+g_2(\xi_2)) $$ and, therefore, by~\eqref{107}, we derive \begin{equation*} \bigg\Vert f-\sum_{k\in {{\Bbb Z}}^2}\sum_{l_1=0}^3 \sum_{l_2=0}^3 \frac{b_{l_1,l_2} m^{j}}{\pi}\!\!\!\!\!\!\!\int\limits_{M^{-j}(B_1-(l_1,l_2))}f(-M^{-j}k+t)\,dt\,R_\delta(M^j\cdot+k)\bigg\Vert_p\le C \omega_4(f,\|M^{-j}\|)_p, \end{equation*} where $$ b_{0,0}=1-\frac{2\delta-\pi^2}{2\pi^2},\quad b_{1,0}=b_{0,1}=\frac{5(2\delta-\pi^2)}{8\pi^2}, $$ $$ b_{2,0}=b_{0,2}=\frac{-(2\delta-\pi^2)}{2\pi^2},\quad b_{3,0}=b_{0,3}=\frac{2\delta-\pi^2}{8\pi^2}, $$ and $$ b_{l_1,l_2}=0 \quad\text{whenever } l_1\ge1 \text{ and } l_2\ge1. $$ IV. Finally, from Theorem~\ref{thJack}, we obtain the following result related to the classical Kotelnikov decomposition. \begin{prop} Let $f\in L_p$, $1<p<\infty$, let $\widehat f$ be locally summable, and let $n\in {{\Bbb N}}$. Then \begin{equation}\label{eqcor1+} \bigg\Vert f- m^{j/2}\sum_{k\in {{\Bbb Z}}^d}\int\limits_{[-1/2,1/2]^d} \widehat f({M^*}^j\xi)e^{-2\pi i (k,\xi)}d\xi\,\operatorname{sinc}(M^j\cdot+k) \bigg\Vert_p\le C\omega_n\(f,\|M^{-j}\|\)_p, \end{equation} where the constant $C$ does not depend on $f$ and $j$. \end{prop} \textbf{Proof.} We apply Theorem~\ref{thJack} for ${\varphi}(x)=\widetilde{\varphi}(x)=\operatorname{sinc}(x)$.
Since $\widehat {\operatorname{sinc}}(\xi)=\chi_{[-1/2,1/2]^d}(\xi)$ and $ \widehat {\operatorname{sinc}_{jk}}(\xi)=m^{-j/2}e^{2\pi i (k,{M^*}^{-j}\xi)}\chi_{[-1/2, 1/2]^d}({M^*}^{-j}\xi), $ we have $$ \langle f,\operatorname{sinc}_{jk}\rangle=\langle \widehat f,\widehat {\operatorname{sinc}_{jk}}\rangle=m^{j/2}\int\limits_{{\mathbb R}^d}\widehat f({M^*}^j\xi)\overline{ \widehat{\widetilde\varphi}({M^*}^j\xi)}d\xi=\int\limits_{[-1/2,1/2]^d} \widehat f({M^*}^j\xi)e^{-2\pi i (k,\xi)}d\xi, $$ which, by Theorem~\ref{thJack}, proves the proposition. $\Diamond$ Finally, let us note that~\eqref{eqcor1+} can be also written in the following form \begin{equation}\label{eqcor2+} \bigg\Vert f-\sum_{k\in {{\Bbb Z}}^d}\mathcal{F}^{-1}(\chi_{M^{*j}[-1/2,1/2]^d}\widehat f)(M^{*-j}k)\,\operatorname{sinc}(M^j\cdot+k) \bigg\Vert_p\le C\omega_n\(f,\|M^{-j}\|\)_p. \end{equation} \end{document}
\begin{document} \title[Gr\"obner bases of radical Li-Li type ideals]{Gr\"obner bases of radical Li-Li type ideals associated with partitions} \author{Xin Ren} \address{ Xin Ren, Department of Mathematics, Kansai University, Suita-shi, Osaka 564-8680, Japan. } \email{[email protected]} \author{Kohji Yanagawa} \address{ Kohji Yanagawa, Department of Mathematics, Kansai University, Suita-shi, Osaka 564-8680, Japan. } \email{[email protected]} \thanks{The second authors is partially supported by JSPS KAKENHI Grant Number 22K03258.} \date{\today} \keywords{ Gr\"obner bases, Specht polynomial, Specht ideal, Li-Li ideal} \subjclass[2020]{13P10, 05E40} \newcommand{\add}[1]{\ensuremath{\langle{#1}\rangle}} \newcommand{\tab}[1]{\ensuremath{{\rm Tab}({#1})}} \newcommand{\wtab}[1]{\ensuremath{\widehat{\rm Tab}({#1})}} \newcommand{\stab}[1]{\ensuremath{{\rm STab}({#1})}} \newcommand{\ini}[1]{\ensuremath{{\rm in}({#1})}} \newcommand{\operatorname{in}}{\operatorname{in}} \newcommand{{\mathcal F}}{{\mathcal F}} \newcommand{{\mathcal Y}}{{\mathcal Y}} \newcommand{{\mathbf I}}{{\mathbf I}} \newcommand{{\mathbf V}}{{\mathbf V}} \newcommand{{\mathfrak S}}{{\mathfrak S}} \newcommand{{\mathcal S}}{{\mathcal S}} \newcommand{{\sf c}}{{\sf c}} \newcommand{{\mathbb N}}{{\mathbb N}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\widetilde{T}}}{{\widetilde{T}}} \newcommand{{\widetilde{\lambda}}}{{\widetilde{\lambda}}} \newcommand{\operatorname{sh}}{\operatorname{sh}} \newcommand{{\mathcal T}}{{\mathcal T}} \newcommand{{\widetilde{\Tc}}}{{\widetilde{{\mathcal T}}}} \def\langle{\langle} \def\rangle{\rangle} \def\longrightarrow{\longrightarrow} \maketitle \begin{abstract} For a partition $\lambda$ of $n$, the {\it Specht ideal} $I_\lambda \subset K[x_1, \ldots, x_n]$ is the ideal generated by all Specht polynomials of shape $\lambda$. In their unpublished manuscript, Haiman and Woo showed that $I_\lambda$ is a radical ideal, and gave its universal Gr\"obner basis (Murai et al. published a quick proof of this result). On the other hand, an old paper of Li and Li studied analogous ideals, while their ideals are not always radical. The present paper introduces a class of ideals generalizing both Specht ideals and {\it radical} Li-Li ideals, and studies their radicalness and Gr\"obner bases. \end{abstract} \section{Introduction} Let $S=K[x_1, \ldots, x_n]$ be a polynomial ring over an infinite field $K$. For a subset $A=\{a_1, a_2, \ldots, a_m\}$ of $[n]:=\{1,2, \ldots, n\}$, let $$\Delta(A):=\prod_{1 \le i <j \le m}(x_{a_i} -x_{a_j}) \in S$$ be the difference product. For a sequence of subsets ${\mathcal Y}=(Y_1, Y_2, \ldots, Y_{k-1})$ with $[n]\supset Y_1 \supset Y_2 \supset \cdots \supset Y_{k-1}$, Li and Li \cite{LL} studied the ideal \begin{equation}\label{I_Y} I_{\mathcal Y}:=\left( \, \prod_{i=1}^{k-1} \Delta(X_i) \, \middle | \, X_i \supset Y_i \ \text{for all} \ i, \ \bigcup_{i=1}^{k-1} X_i=[n] \, \right ) \end{equation} of $S$ (more precisely, the polynomial ring in \cite{LL} is ${\mathbb Z}[x_1, \ldots, x_n]$). Among other things, they showed the following. \begin{theorem}[{c.f. Li-Li \cite[Theorem~2]{LL}}] With the above notation, $I_{\mathcal Y}$ is a radical ideal if and only if $\# Y_2 \le 1$. \end{theorem} A {\it partition} of a positive integer $n$ is a non-increasing sequence of positive integers $\lambda=(\lambda_1,\ldots,\lambda_p)$ with $\lambda_1+ \cdots+\lambda_p=n$. Let $P_n$ be the set of all partitions of $n$. A partition $\lambda$ is frequently represented by its Young diagram. 
For example, $(4,2,1)$ is represented as $\ytableausetup{mathmode, boxsize=0.5em}\ydiagram {4,2,1}$. A {\it (Young) tableau} of shape $\lambda \in P_n$ is a bijective filling of the squares of the Young diagram of $\lambda$ by the integers in $[n]$. For example, $$ \ytableausetup{mathmode, boxsize=1.2em} \begin{ytableau} 4 & 3 & 1&7 \\ 5 & 2 \\ 6 \\ \end{ytableau} $$ is a tableau of shape $(4,2,1)$. Let $\tab \lambda$ be the set of all tableaux of shape $\lambda$. Recall that the Specht polynomial $f_T$ of $T \in \tab \lambda$ is $\prod_{j=1}^{\lambda_1} \Delta(T(j))$, where $T(j)$ is the set of the entries of the $j$-th column of $T$ (here the entry in the $i$-th row is the $i$-th element of $T(j)$). For example, if $T$ is the above tableau, then $f_T=(x_4-x_5)(x_4-x_6)(x_5-x_6)(x_3-x_2)$. We call the ideal $$I_\lambda:=(f_T \mid T \in \tab{\lambda}) \subset S$$ the {\it Specht ideal} of $\lambda$. These ideals have been studied from several points of view (and under several names and characterizations), see for example, \cite{BGS, MW,MRV, SY}. The following is an unpublished result of Haiman and Woo (\cite{HW}), to which Murai, Ohsugi and the second author (\cite{MOY}) published a quick proof. \begin{theorem}[{Haiman-Woo \cite{HW}, see also \cite{MOY}}]\label{HWMOY} If ${\mathcal F} \subset P_n$ is a lower filter with respect to the dominance order $\unlhd$, then $I_{\mathcal F}:=\sum_{\lambda \in {\mathcal F}} I_\lambda$ is a radical ideal, for which $\{ f_T \mid T \in \tab{\mu}, \mu \in {\mathcal F} \}$ forms a universal Gr\"obner basis (i.e., a Gr\"obner basis with respect to all monomial orders). In particular, $I_\lambda$ is a radical ideal, for which $\{ f_T \mid T \in \tab{\mu}, \mu \unlhd \lambda \}$ forms a universal Gr\"obner basis. \end{theorem} Let us explain why the second assertion follows form the first. Since $\lambda \unrhd \mu$ for $\lambda, \mu \in P_n$ implies $I_\lambda \supset I_\mu$ (c.f. Lemma~\ref{inclusion}), we have $I_\lambda=I_{\mathcal F}$ for the lower filter ${\mathcal F}:=\{ \mu \in P_n \mid \mu \unlhd \lambda \}$. The Li-Li ideals $I_{\mathcal Y}$ and the Specht ideals $I_\lambda$ share common examples. In fact, for ${\mathcal Y}=(Y_1, Y_2, \ldots, Y_{k-1})$ with $\# Y_1 \le 1$, $Y_2=\cdots =Y_{k-1}=\emptyset$ and $\lambda=(\lambda_1, \ldots, \lambda_p) \in P_n$ with $\lambda_1=\cdots =\lambda_{p-1}=k-1$, we have $I_{\mathcal Y}=I_\lambda$ by \cite[Corollary~3.2]{LL}. In this paper, we study a common generalization of the {\it radical} Li-Li ideals and the Specht ideals, for which {\it almost} direct analogs of Theorem~\ref{HWMOY} hold. For example, in Sections 2 and 3, we take a positive integer $l$, and a partition $\lambda \in P_{n+l-1}$ with $\lambda_1 \ge l$, and consider tableaux like \begin{equation}\label{Tab(l)} \ytableausetup{mathmode, boxsize=1.2em} \begin{ytableau} 1& 1& 1& 1& 2 & 3 \\ 4 & 5 & 8 \\ 6 & 7\\ \end{ytableau} \end{equation} ($l=4$ in this case). Using these tableau, we define the ideal $I_{l, \lambda}$. The symmetric group ${\mathcal S}_{n-1}$ of the set $\{2, \ldots, n\}$ still acts on $I_{l, \lambda}$, so our ideals have representation theoretic interest. The following are other motivations of the present paper. \begin{itemize} \item[(1)] Recently, the defining ideals of subspace arrangements have been intensely studied (c.f. \cite{BPS, CT,S}). Our $I_{l, \lambda}$ and its generalization $\sqrt{I_{l,m,\lambda}}$ introduced in Section 4 give new classes of these ideals. 
Note that $I_{l,m,\lambda}$ is not a radical ideal in general, while Corollary~\ref{I_{l,m,lambda}} gives the generators of its radical explicitly. \item[(2)] A universal Gr\"obner basis is very important, since it is closely related to the Gr\"obner fan. While we can use a computer for explicit examples, it is extremely difficult to construct universal Gr\"obner bases for some (infinite) family of ideals. Theorem~\ref{HWMOY} gives universal Gr\"obner bases of Specht ideals $I_\lambda$. However, since $I_\lambda$ are symmetric, this case is exceptional. So it must be very interesting, if the Gr\"obner bases of non-symmetric ideals $I_{l, \lambda}$ given in Theorem~\ref{main1} are universal. Corollary~\ref{smallest & largest} is an affirmative evidence. \item[(3)] One of the motivations of the paper \cite{LL} of Li and Li is an application to graph theory (see \cite{deL} for further connection to Gr\"obner bases theory). We expect that the present paper gives a new inslight to this direction. \end{itemize} In the present paper, for the convention and notation of the Gr\"{o}bner bases theory, we basically follow \cite[Chapter 1]{HHO}. \section{A generalization of the case $\#Y_1=\cdots =\#Y_l=1$} We keep the same notation as Introduction, and fix a positive integer $l$. For $\lambda \in [P_{n+l-1}]_{\ge l}:=\{ \lambda \in P_{n+l-1} \mid \lambda_1 \ge l \}$, we consider a bijective filling of the squares of the Young diagram of $\lambda$ by the multiset $\{\overbrace{1, \ldots, 1}^{\text{$l$ copies}},2, \ldots, n\}$ such that no two copies of 1 are contained in the same column. Let $\tab {l, \lambda}$ be the set of such tableaux. For example, the tableau \eqref{Tab(l)} above is an element of $\tab{4, \lambda}$ for $\lambda=(6,3,2)$ (moreover, this is a {\it standard} tableau defined below). The Specht polynomial $f_T$ of $T \in \tab{l, \lambda}$ is defined by the same way as in the classical case. For example, if $T$ is the one in \eqref{Tab(l)}, then $f_T=(x_1-x_4)(x_1-x_6)(x_4-x_6)(x_1-x_5)(x_1-x_7)(x_5-x_7)(x_1-x_8)$. For $\lambda \in [P_{n+l-1}]_{\ge l}$, consider the ideal $$I_{l, \lambda}:=(f_T \mid T \in \tab{l, \lambda})$$ of $S$. Clearly, $\tab{1, \lambda}=\tab{\lambda}$ and $I_{1,\lambda}=I_\lambda$. For $\lambda=(\lambda_1,\dots,\lambda_p), \mu =(\mu_1,\dots,\mu_q) \in P_m$, we write $\lambda \unrhd \mu$ if $\lambda$ is equal to or larger than $\mu$ with respect to the {\it dominance order}, that is, $$\lambda_1+ \cdots+\lambda_i \geq \mu_1+ \cdots + \mu_i \ \ \ \mbox{ for }i =1,2,\dots,\min\{p, q\}.$$ In what follows, we regard $[P_{n+l-1}]_{\ge l}$ as a subposet of $P_{n+l-1}$. For $\lambda \in P_m$ and $j$ with $1 \le j \le \lambda_1$, let $\lambda^\perp_j$ be the length of the $j$-th column of the Young diagram of $\lambda$. Then $\lambda^\perp=(\lambda^\perp_1, \lambda^\perp_2, \ldots)$ is a partition of $m$ again. It is a classical result that $\lambda \unrhd \mu$ if and only if $ \lambda^\perp \unlhd \mu^\perp$. \begin{remark}\label{cover} By \cite[Proposition~2.3]{B}, if $\lambda$ covers $\mu$ (i.e., $\lambda \rhd \mu$, and there is no other partition between them), then there are two integers $i, i'$ with $i< i'$ such that $\mu_i=\lambda_i-1$, $\mu_{i'}=\lambda_{i'}+1$, and $\mu_k=\lambda_k$ for all $k\ne i, i'$, equivalently, there are two integers $j, j'$ with $j< j'$ such that $\mu^\perp_j=\lambda^\perp_j+1$, $\mu^\perp_{j'}=\lambda^\perp_{j'}-1$, and $\mu^\perp_k=\lambda^\perp_k$ for all $k\ne j, j'$. Clearly, $\mu_j^\perp \ge \mu_{j'}^\perp +2$ in this case. 
Here, we allow the case $i'$ is larger than the length $p$ of $\lambda$, where we set $\lambda_{i'}=0$. Similarly, the case $\mu^\perp_{j'}=0$ might occur. \end{remark} \begin{remark} By a similar argument to the proof of Lemma~\ref{stab}, to generate $I_{l,\lambda}$, it suffices to use $T \in \tab{l, \lambda}$ such that the left most $l$ squares in the first row are filled by 1. So, in manner of \eqref{I_Y}, the ideal $I_{l, \lambda}$ can be represented as follows. $$I_{l, \lambda}=\left( \, \prod_{i=1}^{\lambda_1} \Delta(X_i) \, \middle | \, 1 \in X_i \ \text{for} \ 1 \le i \le l, \ \# X_i=\lambda_i^\perp \ \text{for all} \ i, \ \bigcup_{i=1}^{\lambda_1} X_i=[n] \, \right )$$ \end{remark} \noindent{\bf Convention.} Throughout this paper, when we consider the Gr\"obner bases, we use the lexicographic order with $x_1 < \cdots < x_n$ unless otherwise specified (see Lemma~\ref{product of linear forms} below, which states that only the order among the variables $x_1, \ldots, x_n$ matters for our Gr\" obner bases), and the initial monomial $\operatorname{in}_<(f)$ of $0 \ne f \in S$ will be simply denoted by $\ini{f}$. For $T \in \tab{l,\lambda}$, recall that $T(j)$ is the set of the entries of the $j$-th column of $T$. If $\sigma$ is a permutation on $T(j)$, we have $f_{\sigma T} ={\rm sgn}(\sigma)f_T$ for each $j$. In this sense, to consider $f_T$, we may assume that $T$ is {\it column standard}, that is, all columns are increasing from top to bottom (in particular, all 1's appear in the 1st row). If $T$ is column standard and the number $i$ is in the $d_i$-th row of $T$, we have \begin{equation}\label{in(f)} \ini {f_T} = \prod_{i=1}^n x_i^{d_i-1} \end{equation} (recall our convention on the monomial order). If a column standard tableau $T \in \tab{l, \lambda}$ is also row semi-standard (i.e., all rows are non-decreasing from left to right), we say $T$ is {\it standard}. Let $\stab{l, \lambda}$ be the set of standard tableaux in $\tab{l, \lambda}$. We simply denote $\stab{1, \lambda}$ by $\stab{\lambda}$. The next result is very classical when $l=1$. \begin{lemma}\label{stab} For $\lambda \in [P_{n+l-1}]_{\ge l}$, $\{ f_T \mid T\in \stab{l, \lambda} \}$ forms a basis of the vector space $V$ spanned by $\{ f_T \mid T\in \tab{l, \lambda} \}$. Hence $\{ f_T \mid T\in \stab{l, \lambda} \}$ is a minimal system of generators of $I_{l, \lambda}$. \end{lemma} \begin{proof} In the classical case (i.e., when $l=1$), we can rewrite $f_T$ for $T \in \tab{\lambda}$ as a linear combination of $f_{T_i}$'s for $T_i \in \stab{\lambda}$ repeatedly using the relations given by {\it Garnir elements} (see \cite[\S 2.6]{Sa}). Such a relation concerns the $j$-th and the $(j+1)$-st columns of $T$. The classical argument directly works in our case unless both of these columns contain 1. So we assume that both columns have 1. Since $f_T=\prod_{j=1}^{\lambda_1} \Delta(T(j))$, we can concentrate on the $j$-th and $(j+1)$-st columns of $T$, and may assume that $T$ consists of two columns (i.e., $\lambda$ is of the form $(2,\lambda_2, \ldots, \lambda_p) \in P_{n+2-1}=P_{n+1}$) and $l=2$. Set ${\widetilde{\lambda}} :=(\lambda_2, \ldots, \lambda_p) \in P_{n-1}$. Removing the first row from $T \in \tab{2, \lambda}$, we have ${\widetilde{T}} \in \tab{{\widetilde{\lambda}}}$ (the set of the entries of ${\widetilde{T}}$ is $\{2, \ldots, n\}$). The converse operation $\tab{{\widetilde{\lambda}}} \ni {\widetilde{T}} \longmapsto T \in \tab{2, \lambda}$ also makes sense. 
Clearly, $f_T=(\prod_{i=2}^n(x_1-x_i)) \cdot f_{{\widetilde{T}}}$. Multiplying $\prod_{i=2}^n(x_1-x_i)$ to both sides of a Garnir relation $f_{\widetilde{T}} =\sum_{i=1}^k \pm f_{{\widetilde{T}}_i}$ (${\widetilde{T}}, {\widetilde{T}}_i \in \tab{{\widetilde{\lambda}}}$), we have the relation $f_T =\sum_{i=1}^k \pm f_{T_i}$ ($T, T_i \in \tab{2, \lambda}$). As in the classical case, $T_i$ need not to be standard, but is closer to standard than $T$. Using these relations, the argument in \cite[\S 2.6]{Sa} is applicable to our case, and we can show that $\{ f_T \mid T\in \stab{l, \lambda} \}$ spans $V$. As we have seen in \eqref{in(f)}, $\ini{f_T} \ne \ini{f_{T'}}$ holds for distinct $T,T'\in \stab{l, \lambda}$. So $\{ f_T \mid T\in \stab{l, \lambda} \}$ is linearly independent. \end{proof} For $\bm a =(a_1, \ldots, a_n) \in K^n$, there are distinct $\alpha_1, \ldots, \alpha_p \in K$ with $\{\alpha_1, \ldots, \alpha_p\} =\{a_1, \ldots, a_n\}$ as sets. Now we can define the partition $\mu =(\mu_1,\dots,\mu_p) \in P_n$ such that $\alpha_i$ appears $\mu_i$ times in $(a_1, \ldots, a_n)$ for each $i$. This partition $\mu$ will be denoted by $\Lambda(\bm a)$. For example, $\Lambda((1,0,2,1,2,2))=(3,2,1)$. For $\bm a \in K^n$, set $\bm a_{(l)}:=(\overbrace{a_1, \ldots, a_1}^{\text{$l$ copies}}, a_2, \ldots, a_n) \in K^{n+l-1}$ and $\Lambda_l(\bm a):=\Lambda(\bm a_{(l)}) \in [P_{n+l-1}]_{\ge l}$. For example, if $\bm a = (1,0,2,1,2,2)$, then $\bm a_{(3)}=(1,1,1, 0,2,1,2,2)$ and $\Lambda_3 (\bm a) =(4,3,1)$. When $l=1$, the following result is classical. \begin{lemma}[{c.f. \cite[Lemma~2.1.]{MOY}}] \label{f_T(a)=0} Let $\lambda \in [P_{n+l-1}]_{\ge l}$ and $T \in \tab{l, \lambda}$. For $\bm a \in K^n$ with $\Lambda_l(\bm a) \not \not \! \unlhd \lambda$, we have $f_T(\bm a)=0$. \end{lemma} \begin{proof} For $\bm a=(a_1, \ldots, a_n) \in K^n$, replacing $i$ with $a_i$ for each $i$ in $T$, we have a tableau $T(\bm a)$, whose entries are elements in $K$. It is easy to see that $f(\bm a) \ne 0$ if and only if the entries in each column of $T(\bm a)$ are all distinct. So the assertion follows from the same argument as \cite[Lemma~2.1]{MOY}. \end{proof} \begin{lemma}[{c.f. \cite[Theorem~1.1]{MRV}}] \label{inclusion} For $\lambda, \mu \in [P_{n+l-1}]_{\ge l}$ with $\lambda \unrhd \mu$, we have $I_{l, \lambda} \supset I_{l, \mu}$. \end{lemma} \begin{proof} The proof is essentially same as the classical case, while we have to care about one point. First, we will recall a basic property of difference products. For subsets $A=\{a_1, a_2, \ldots, a_k\}$ and $B=\{b_1, b_2, \ldots, b_{k'} \}$ of $[n]$ with $k \ge k'+2$, we have \begin{equation}\label{diff product} \Delta(A)\cdot \Delta(B)=\sum_{k-k' \le i \le k} (-1)^{i-k+k'}\left[\Delta(A\setminus \{a_i\}) \cdot \Delta(B \cup \{a_i\}) \cdot \prod_{1 \le i' < k-k'}(x_{a_{i'}}-x_{a_i}) \right] \end{equation} by \cite[Proposition~3.1]{LL}, where we regard $a_i$ as the last element of $B \cup \{a_i\}$. Let us start with the main body of the proof. To prove the assertion, we may assume that $\lambda$ covers $\mu$. By Remark~\ref{cover}, there are $j,j'$ with $j<j'$ such that $\mu^\perp_j=\lambda^\perp_j+1$, $\mu^\perp_{j'}=\lambda^\perp_{j'}-1$, and $\mu^\perp_i=\lambda^\perp_i$ for all $i \ne j, j'$. Take $T \in \tab{l, \mu}$, and let $A=\{a_1, \ldots, a_k\}$ (resp. $B=\{b_1, \ldots, b_{k'} \}$) be the set of the contents of the $j$-th (resp. $j'$-th) column of $T$. For $i$ with $k-k' \le i \le k$, consider the tableau $T_i$ whose $j$-th (resp. 
$j'$-th) column consists of the elements of $A\setminus \{a_i\}$ (resp. $B \cup \{a_i\}$) and the other columns are the same as those of $T$. Since $a_i \ge 2$ for $i \ge 2$, we have $T_i \in \tab{l, \lambda}$. By \eqref{diff product}, we have \begin{equation}\label{inclusion expansion} f_T=\sum_{k-k' \le i \le k} (-1)^{i-k+k'}\left[f_{T_i} \cdot \prod_{1 \le i' < k-k'}(x_{a_{i'}}-x_{a_i}) \right] \in I_{l, \lambda}, \end{equation} which shows that $I_{l, \lambda} \supset I_{l, \mu}$. \end{proof} We say that ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ is a {\it lower (resp. upper) filter} if $\lambda \in {\mathcal F}$, $\mu \in [P_{n+l-1}]_{\ge l}$ and $\mu \unlhd \lambda$ (resp. $\mu \unrhd \lambda$) imply $\mu \in {\mathcal F}$. For a {\it lower} filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$, set $$G_{l, {\mathcal F}} := \{f_T \mid T \in \stab{l, \lambda} \, \text{for} \, \lambda \in {\mathcal F} \},$$ and let $I_{l, {\mathcal F}} \subset S$ be the ideal generated by $G_{l, {\mathcal F}}$, equivalently, $$I_{l,{\mathcal F}}:=\sum_{\lambda \in {\mathcal F}} I_{l, \lambda}. $$ In particular, for $\lambda \in [P_{n+l-1}]_{\ge l}$, ${\mathcal F}_\lambda:=\{\mu \in [P_{n+l-1}]_{\ge l} \mid \mu \unlhd \lambda \}$ is a lower filter, and we have $I_{l, \lambda}=I_{l, {\mathcal F}_\lambda}$ by Lemma~\ref{inclusion}. For convenience, set $G_{l, \emptyset}=\emptyset$ and $I_{l, \emptyset}=(0)$. For an {\it upper} filter $\emptyset \ne {\mathcal F} \subset [P_{n+l-1}]_{\ge l}$, we consider the ideal $$J_{l, {\mathcal F}} :=( f \in S \mid f(\bm a)=0 \, \text{for $\forall \bm a \in K^n$ with $\Lambda_l(\bm a) \in {\mathcal F}$} ). $$ Clearly, $J_{l, {\mathcal F}}$ is a radical ideal. \begin{theorem}\label{main1} Let ${\mathcal F} \subsetneq [P_{n+l-1}]_{\ge l}$ be a lower filter, and ${\mathcal F}^{\sf c}:= [P_{n+l-1}]_{\ge l} \setminus {\mathcal F}$ its complement (note that ${\mathcal F}^{\sf c}$ is an upper filter). Then $G_{l, {\mathcal F}}$ is a Gr\"obner basis of $J_{l, {\mathcal F}^{\sf c}}$. \end{theorem} The following corollary is immediate from the theorem. \begin{corollary}\label{Cor to main1} With the above situation, we have $I_{l, {\mathcal F}} =J_{l, {\mathcal F}^{\sf c}}$, and $I_{l, {\mathcal F}}$ is a radical ideal. In particular, $I_{l, \lambda}$ is a radical ideal with $$I_{l, \lambda} = ( f \in S \mid f(\bm a)=0 \, \text{for $\forall \bm a \in K^n$ with $\Lambda_l(\bm a) \! \! \not \! \unlhd \lambda$} ),$$ for which $\{ f_T \mid T \in \stab{l,\mu}, \mu \unlhd \lambda \}$ forms a Gr\"obner basis. \end{corollary} The strategy of the proof of Theorem~\ref{main1} is essentially the same as that of \cite[Theorem~1.1]{MOY}, but we repeat it here for the reader's convenience. For a partition $\lambda=(\lambda_1,\ldots,\lambda_p) \in P_m$ and a positive integer $i$, we write $\lambda+\add i$ for the partition of $m+1$ obtained by rearranging the sequence $(\lambda_1,\dots,\lambda_i+1,\dots,\lambda_p)$, where we set $\lambda+\add i=(\lambda_1,\dots,\lambda_p,1)$ when $i>p$. For example, $(4,2,2,1)+\add 2=(4,2,2,1)+\add 3=(4,3,2,1)$, and $(4,2,2,1)+\add i=(4,2,2,1,1)$ for all $i \ge 5$. Since $\lambda \unlhd \mu$ implies $\lambda + \add i \unlhd \mu + \add i$ for all $i$, if ${\mathcal F} \subset P_m$ is an upper (resp. lower) filter, then so is $$ {\mathcal F}_i := \{ \mu \in P_{m-1} \mid \mu + \add i \in {\mathcal F}\}. $$ \begin{example} Even if a lower filter ${\mathcal F}$ has a unique maximal element, ${\mathcal F}_i$ does not in general.
For example, if ${\mathcal F} :=\{ \lambda \in [P_7]_{\ge l} \mid \lambda \unlhd (3,2,2) \}$ for $l=1,2$, then ${\mathcal F}_2$ has two maximal elements $(3,1,1,1)$ and $(2,2,2)$. \end{example} \begin{lemma}[{c.f. \cite[Lemma~3.3]{MOY}}] \label{keylemma} Let $\emptyset \ne {\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ be an upper filter, and let $f$ be a polynomial in $J_{l, {\mathcal F}}$ of the form $$ f= g_d x_n^d + \cdots + g_1 x_n +g_0, $$ where $g_0,\dots,g_d \in K[x_1,\dots,x_{n-1}]$ and $g_d\ne 0$. Then $g_0,\dots,g_d$ belong to $J_{l, {\mathcal F}_{d+1}}$. \end{lemma} \begin{proof} Let $\lambda=(\lambda_1,\dots,\lambda_p) \in {\mathcal F}_{d+1}$, and take $\bm a=(a_1, \ldots, a_{n-1}) \in K^{n-1}$ with $\Lambda_l(\bm a) =\lambda$. Then there are distinct elements $\alpha_1, \ldots, \alpha_p \in K$ such that $\alpha_i$ appears $\lambda_i$ times in $\bm a_{(l)}$ for $i=1, \ldots,p$. Since ${\mathcal F}$ is an upper filter, we have $\lambda + \add i \in {\mathcal F}$ for $i=1,2,\ldots,d+1$. We will consider two cases as follows (in the sequel, for $\alpha \in K$, $(\bm a, \alpha)$ means the point in $K^n$ whose coordinate is $(a_1, \ldots, a_{n-1}, \alpha)$): (i) If $p<d+1$, then $\lambda + \add{d+1} =(\lambda_1,\ldots,\lambda_p,1)$. Thus, for any $\alpha \in K \setminus \left\{\alpha_1, \alpha_2, \ldots, \alpha_p \right\}$, we have $\Lambda_l(\bm a,\alpha) =\lambda + \add{d+1}\in {\mathcal F}$, and hence $f(\bm a,\alpha)=0$. (ii) If $p \geq d+1$, then we have $\Lambda_l(\bm a, \alpha_i)=\lambda + \add{i} \in {\mathcal F}$ for any $i=1,\ldots,d+1$ (note that $\lambda + \add{i} \unrhd \lambda + \add{d+1} \in {\mathcal F}$ for these $i$), and hence $f(\bm a,\alpha_i)=0$. In both cases, it follows that the polynomial $f(\bm a, x_n)=\sum^{d}_{i=0}g_i(\bm a)x^i_{n} \in K[x_n]$ has at least $d+1$ zeros. Since the degree of $f(\bm a, x_n)$ is $d$, $f(\bm a, x_n)$ is the zero polynomial in $K[x_n]$. Thus, $g_i(\bm a)=0$ for $i=0,1,\ldots,d$. Hence, $g_0,\dots,g_d \in J_{l, {\mathcal F}_{d+1}}$. \end{proof} \noindent{\it The proof of Theorem~\ref{main1}.} First, we show that $G_{l, {\mathcal F}} \subset J_{l, {\mathcal F}^{\sf c}}$. Take $T \in \stab{l, \lambda}$ for $\lambda \in {\mathcal F}$, and $\bm a \in K^n$ with $\Lambda_l(\bm a) \in {\mathcal F}^{\sf c}$ (i.e., $\Lambda_l(\bm a) \not \in {\mathcal F}$). Since ${\mathcal F}$ is a lower filter, we have $\Lambda_l(\bm a) \not \!\! \unlhd \, \lambda$, and hence $f_T(\bm a)=0$ by Lemma~\ref{f_T(a)=0}. So $f_T \in J_{l, {\mathcal F}^{\sf c}}$. For $\mu \in [P_{n+l-2}]_{\geq l}$, it is easy to see that $$\mu \not \in ({\mathcal F}^{\sf c})_{i} \Longleftrightarrow \mu+\add i \not \in {\mathcal F}^{\sf c} \Longleftrightarrow \mu+\add i \in {\mathcal F} \Longleftrightarrow \mu \in {\mathcal F}_i,$$ so we have $[P_{n+l-2}]_{\geq l} \setminus ({\mathcal F}^{\sf c})_{i} ={\mathcal F}_i$. To prove the theorem, it suffices to show that the initial monomial $\ini{f}$ for all $0 \ne f \in J_{l, {\mathcal F}^{\sf c}}$ can be divided by $\ini{f_T}$ for some $f_T \in G_{l, {\mathcal F}}$. We will prove this by induction on $n$. The case $n=1$ is trivial. For $n\geq2$, let $f= g_d x_n^d + \cdots + g_1 x_n +g_0 \in J_{l, {\mathcal F}^{\sf c}}$, where $g_i \in K[x_1,\ldots,x_{n-1}]$ and $g_d\neq0$. By Lemma \ref{keylemma}, one has $g_d \in J_{l, ({\mathcal F}^{\sf c})_{d+1}}$. 
By the induction hypothesis, $G_{l, {\mathcal F}_{d+1}}\ (=G_{l,[P_{n+l-2}]_{\geq l} \setminus ({\mathcal F}^{\sf c})_{d+1}})$ is a Gr\"obner basis of $J_{l,({\mathcal F}^{\sf c})_{d+1}}$. Then there is $T \in \stab{l, \mu}$ for $\mu \in {\mathcal F}_{d+1}$ such that $\ini{f_T}$ divides $\ini{g_d}$. Set $\lambda :=\mu + \add{d+1} \in {\mathcal F}$. Let us consider the tableau $T' \in \stab{l, \lambda}$ such that the image of each $i = 1,2,\dots,n-1$ is the same for $T$ and $T'$. So $n$ is in the newly added square. Since $\lambda=\mu + \add{d+1}$, $n$ is in the $(q+1)$-st row of $T'$ for some $q \le d$. Since we have $\ini{f}=\ini{g_d x_n^d}=x_n^d \cdot \ini{g_d}$ and $\ini{f_{T'}}=x_n^q \cdot \ini{f_T}$ by \eqref{in(f)}, $\ini{f_{T'}}$ divides $\ini{f}$. Hence, the proof is completed. \qed For $\lambda=(\lambda_1, \ldots, \lambda_p) \in [P_{n+l-1}]_{\ge l}$, set $H_{l, \lambda}:=\{ \bm a \in K^n \mid \Lambda_l(\bm a) =\lambda\}$. Then we have the decomposition $K^n=\bigsqcup_{\lambda \in [P_{n+l-1}]_{\ge l}} H_{l, \lambda}$, and the dimension of $H_{l, \lambda}$ equals the length $p$ of $\lambda$. For an upper filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$, $S/J_{l, {\mathcal F}} \, (=S/I_{l, {\mathcal F}^{\sf c}} )$ is the coordinate ring of $\bigsqcup_{\lambda \in {\mathcal F}} H_{l, \lambda}$. \begin{proposition} The codimension of the ideal $I_{l, \lambda}$ is $\lambda_1-l+1$. \end{proposition} \begin{proof} By the above remark, the algebraic set defined by $I_{l, \lambda}$ is the union of $H_{l, \mu}$ for all $\mu \in [P_{n+l-1}]_{\ge l}$ with $\mu \not \!\! \unlhd \lambda$. Among these partitions, $\mu'=(\lambda_1+1, 1, 1, \ldots)$ has the largest length $n+l-1-\lambda_1$, and hence $\operatorname{codim} I_{l, \lambda}= n- \dim S/I_{l, \lambda}=n-(n+l-1-\lambda_1)=\lambda_1-l+1$. \end{proof} \begin{example} For $\lambda=(3,3,1)$, $\stab{2, \lambda}$ consists of the following 11 elements $$ \ytableausetup{mathmode, boxsize=1.2em} \begin{ytableau} 1 & 1 & 2 \\ 3 & 4 & 5 \\ 6 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 2 \\ 3 & 4 & 6 \\ 5 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 2 \\ 3 & 5 & 6 \\ 4 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 3 \\ 2 & 4 & 5 \\ 6 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 3 \\ 2 & 4 & 6 \\ 5 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 3 \\ 2 & 5 & 6 \\ 4 \\ \end{ytableau}, $$ $$ \ytableausetup{mathmode, boxsize=1.2em} \begin{ytableau} 1 & 1 & 4 \\ 2 & 3 & 5 \\ 6 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 4 \\ 2 & 3 & 6 \\ 5 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 4 \\ 2 & 5 & 6 \\ 3 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 5 \\ 2 & 3 & 6 \\ 4 \\ \end{ytableau}, \quad \begin{ytableau} 1 & 1 & 5 \\ 2 & 4 & 6 \\ 3 \\ \end{ytableau}, $$ so $I_{2, \lambda}$ is minimally generated by 11 elements. For a non-empty subset $F \subset [n]$, consider the ideal $P_F=(x_i-x_j \mid i, j \in F)$. Clearly, $P_F$ is a prime ideal of codimension $\# F-1$. By Corollary~\ref{Cor to main1}, $I_{2, \lambda}$ is a radical ideal whose minimal primes are $P_F$ for $F \subset [n]$ with either (i) $1 \in F$ and $\# F =3$, or (ii) $1 \not \in F$ and $\# F =4$. \end{example} \section{Under the opposite monomial order} Philosophically, we next treat the Gr\"obner basis of $I_{l,\lambda}$ with respect to the lexicographic order with $x_1 >x_2 >\cdots >x_n$, which is opposite to the one used in the previous section.
However, for notational simplicity, we keep using the lexicographic order with $x_1 < \cdots < x_n$, but we consider tableaux whose squares are bijectively filled by the multiset $\{1, \ldots, n-1, {\overbrace{n, \ldots, n}^{\text{$l$ copies}}}\}$. For $\lambda \in [P_{n+l-1}]_{\ge l}$, let $\tab{\lambda, l}$ be the set of such tableaux of shape $\lambda$. As in the previous section, we can define the standard-ness of $T \in \tab{\lambda, l}$. For example, the tableau $T$ in \eqref{stab(lambda, l)} below is standard. Let $\stab{\lambda, l}$ be the subset of $\tab{\lambda, l}$ consisting of standard tableaux. By the same argument as Lemma~\ref{stab}, we have the following. \begin{lemma}\label{stab2} For $\lambda \in [P_{n+l-1}]_{\ge l}$, $\{ f_T \mid T\in \stab{\lambda, l} \}$ forms a basis of the vector space spanned by $\{ f_T \mid T\in \tab{\lambda,l} \}$. \end{lemma} For $\lambda \in [P_{n+l-1}]_{\ge l}$, consider the ideal $$I_{\lambda,l}:=(f_T \mid T \in \tab{\lambda,l}) =(f_T \mid T \in \stab{\lambda,l} ).$$ For $\bm a \in K^n$, set $\bm a^{(l)}:=(a_1, \ldots, a_{n-1}, \overbrace{a_n, \ldots, a_n}^{\text{$l$ copies}}) \in K^{n+l-1}$ and $\Lambda^l(\bm a):=\Lambda(\bm a^{(l)}) \in [P_{n+l-1}]_{\ge l}$. Up to the automorphism of $S$ exchanging $x_1$ and $x_n$, the ideal $I_{\lambda, l}$ coincides with $I_{l,\lambda}$ treated in the previous section. Hence we have $I_{\mu, l} \subset I_{\lambda,l}$ for $\mu, \lambda \in [P_{n+l-1}]_{\ge l}$ with $\mu \unlhd \lambda$, and \begin{equation}\label{I_{F,l}=J} I_{\lambda, l} =( f \in S \mid f(\bm a)=0 \, \text{for $\forall \bm a \in K^n$ with $\Lambda^l(\bm a) \! \! \not \! \unlhd \lambda$}). \end{equation} In $T \in \stab{\lambda,l}$, all $n$'s are in the bottom of their columns. Let $w(T)$ denote the number of squares which locate above some $n$. For example, if \begin{equation}\label{stab(lambda, l)} T=\ytableausetup{mathmode, boxsize=1.2em} \begin{ytableau} 1& 2& 3 & 5& 8 & 8 \\ 4 & 6 & 8 \\ 7 & 8\\ \end{ytableau} \end{equation} ($n=8$ and $l=4$ in this case), then $w(T)=3$. In fact, the squares filled by 2, 6, and 3 are counted. For $T \in \tab{\lambda, l}$, the degree of $\ini{f_T}$ with respect to $x_n$ is $w(T)$. Let $\operatorname{sh}_{<n}(T) \in P_{n-1}$ denote the shape of $T'$, where $T'$ is the tableau obtained by removing all squares filled by $n$ from $T$. For example, if $T$ is the above one, we have $\operatorname{sh}_{<8}(T)=(4,2,1)$. For $\mu \in P_{n-1}$, set $$\langle \mu \rangle^l := \{ \lambda \in [P_{n+l-1}]_{\ge l} \mid \text{$\exists T \in \stab{\lambda,l}$ with $\operatorname{sh}_{<n}(T)=\mu$}\}. $$ For $\lambda \in \langle \mu \rangle^l$ and $T \in \stab{\lambda,l}$ with $\operatorname{sh}_{<n}(T)=\mu$, the positions of the squares filled by $n$ only depend on $\lambda$ and $\mu$. We call these squares {\it $n$-squares} of $\lambda$. Similarly, $w(T)$ does not depend on a particular choice of $T$, and we denote this value by $w_\mu(\lambda)$. \begin{lemma}\label{maximum element} Let $\lambda \in [P_{n+l-1}]_{\ge l}$, and ${\mathcal F}:=\{ \rho \in [P_{n+l-1}]_{\ge l} \mid \rho \unlhd \lambda \}$ the lower filter of $[P_{n+l-1}]_{\ge l}$. For $\mu \in P_{n-1}$, if $X:= \langle\mu\rangle^l \cap {\mathcal F}$ is non-empty, then there is the element ${\widetilde{\lambda}} \in X$ satisfying $w_\mu(\rho) > w_\mu({\widetilde{\lambda}})$ for all $\rho \in X \setminus \{ {\widetilde{\lambda}} \}$. 
\end{lemma} \begin{proof} We determine ${\widetilde{\lambda}} =({\widetilde{\lambda}}_1, {\widetilde{\lambda}}_2, \ldots, {\widetilde{\lambda}}_p)$ inductively from ${\widetilde{\lambda}}_1$. First, we set $${\widetilde{\lambda}}_1:=\max \{ \, \rho_1 \mid \rho=(\rho_1, \ldots, \rho_q) \in X \, \} \quad \text{and} \quad X_1:= \{\, \rho \in X \mid \rho_1 ={\widetilde{\lambda}}_1 \, \},$$ and next ${\widetilde{\lambda}}_2:=\max \{ \rho_2 \mid \rho \in X_1 \}$ and $X_2 := \{ \rho \in X_1 \mid \rho_2 ={\widetilde{\lambda}}_2 \}.$ We repeat this procedure until the sum ${\widetilde{\lambda}}_1+{\widetilde{\lambda}}_2 +\cdots$ reaches $n+l-1$. We will show that ${\widetilde{\lambda}}$ has the expected property. For $\rho \in X \setminus \{ {\widetilde{\lambda}} \}$, set $i_0:=\min\{ i \mid \rho_i \ne \lambda_i\}$. Then we have $\rho_{i_0} < {\widetilde{\lambda}}_{i_0}$, and $\rho$ has an $n$-square in the $i$-th row for some $i >i_0$. Let $i_1$ be the smallest $i$ with this property. Raising up the right most square in the $i_1$-th row to the right end of $i_0$-th row, we get $\rho' \in X$ (here we use the present form of ${\mathcal F}$). It is clear that $w_\mu(\rho) > w_\mu(\rho')$ and $\rho \lhd \rho'$. Repeating this argument until our partition reaches ${\widetilde{\lambda}}$, we get the expected inequality. \end{proof} \begin{example} In the above lemma, the case $\lambda \ne {\widetilde{\lambda}}$ might happen. For example, if $\lambda=(4,2,1)$ and $\mu=(3,3)$ (so $l=1$ now), we have ${\widetilde{\lambda}}=(3,3,1)$. \end{example} For a lower filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ and a non-negative integer $k$, set $${\mathcal F}^k := \{ \, \mu \in P_{n-1} \mid \text{$\exists \lambda \in \langle\mu\rangle^l \cap {\mathcal F}$ with $w_\mu(\lambda) \le k$} \, \}.$$ \begin{example} Consider the case $n=6, l=2$, $\lambda=(3,3,1)$ and ${\mathcal F}:=\{ \, \rho \in [P_7]_{\ge 2} \mid \rho \unlhd \lambda \, \}$. Then ${\mathcal F}^3, {\mathcal F}^2,{\mathcal F}^1, {\mathcal F}^0$ are the lower filters of $P_5$ whose unique maximal elements are $(3,2), (3,1,1), (2,1,1,1)$ and $(1,1,1,1,1)$, respectively. In the following diagrams, $\star$'s represent the positions of $n$-squares of the corresponding partitions of $n+l-1 \, (=7)$. It is also easy to see that ${\mathcal F}^k={\mathcal F}^3$ for all $k \ge 3$. $$ \lambda= \ytableausetup{mathmode, boxsize=1em} \begin{ytableau} {} & {} & {} \\ {} & {} & {} \\ {} \\ \end{ytableau} \qquad \qquad \ytableausetup{mathmode, boxsize=1em} \begin{ytableau} {} & {} & {} \\ {} & {} & \none[\star] \\ \none[\star] \\ \end{ytableau} \qquad \quad \ytableausetup{mathmode, boxsize=1em} \begin{ytableau} {} & {} & {} \\ {} & \none[\star] & \none[\star] \\ {} \\ \end{ytableau} \qquad \quad \ytableausetup{mathmode, boxsize=1em} \begin{ytableau} {} & {} & \none[\star] \\ {} & \none[\star] \\ {} \\ {} \\ \end{ytableau} \qquad \quad \ytableausetup{mathmode, boxsize=1em} \begin{ytableau} {} & \none[\star] & \none[\star] \\ {} \\ {} \\ {} \\ {} \\ \end{ytableau} $$ \end{example} \begin{lemma}\label{filter filter} If ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ is a lower filter, then ${\mathcal F}^k$ is a lower filter of $P_{n-1}$. \end{lemma} \begin{proof} It suffices to show that if $\mu \in {\mathcal F}^k$ covers $\nu \in P_{n-1}$ then $\nu \in {\mathcal F}^k$. In this situation, there are two integers $j, j'$ with $j< j'$ such that $\nu^\perp_j=\mu^\perp_j+1$, $\nu^\perp_{j'}=\mu^\perp_{j'}-1$, and $\nu^\perp_i=\mu^\perp_i$ for all $i \ne j, j'$. 
In other words, moving a square in the $j'$-th column of $\mu$ to the $j$-th column, we get $\nu$. Anyway, we can take $\lambda \in {\mathcal F} \cap \langle\mu \rangle^l$ with $w_\mu(\lambda) \le k$, and we want to construct $\rho \in {\mathcal F} \cap \langle\nu \rangle^l$ with $w_\nu(\rho) \le k$. For each $i$, we have $\mu_i^\perp \le \lambda_i^\perp \le \mu_i^\perp+1$, and there is an $n$-square in the $i$-th column of $\lambda$ if and only if $\lambda_i^\perp = \mu_i^\perp+1$. We have the following four cases. \begin{itemize} \item[(1)] $\lambda^\perp_j=\mu^\perp_j$ and $\lambda^\perp_{j'}=\mu^\perp_{j'}$. \item[(2)] $\lambda^\perp_j=\mu^\perp_j+1$ and $\lambda^\perp_{j'}=\mu^\perp_{j'}+1$. \item[(3)] $\lambda^\perp_j=\mu^\perp_j$ and $\lambda^\perp_{j'}=\mu^\perp_{j'}+1$. \item[(4)] $\lambda^\perp_j=\mu^\perp_j+1$ and $\lambda^\perp_{j'}=\mu^\perp_{j'}$. \end{itemize} In the case (4), we set $\rho=\lambda$, that is, exchanging the $n$-square in the $j$-th column and the bottom square of $j'$-th column, we get $\rho$ and $\nu$ from $\lambda$ and $\mu$. In the other cases, we first move the $n$-squares in the $j$-th and $j'$-th columns of $\lambda$ (their existence depends on the cases (1)-(3)) vertically along the change from $\mu$ to $\nu$. For example, in the case (2), the above operation is \begin{equation}\label{move of n squares} \ytableausetup{mathmode, boxsize=1.1em} \begin{ytableau} \none[\text{\tiny{$j$}}] & \none & \none[\text{\tiny{$j'$}}] \\ {} & \none[\hdots] & {} \\ {} & \none[\hdots] & {} \\ {} &\none & n\\ n \end{ytableau} \quad \begin{ytableau} \none \\ \none \\ \none[\longrightarrow] \\ \none \\ \none \end{ytableau} \quad \ytableausetup{mathmode, boxsize=1.1em} \begin{ytableau} \none[\text{\tiny{$j$}}] & \none & \none[\text{\tiny{$j'$}}] \\ {} & \none[\hdots] & {} \\ {} & \none[\hdots] & n\\ \\ \\ n \end{ytableau} \end{equation} Furthermore, if necessary, we apply a suitable column permutation as the following figure (in this situation, since $\mu^\perp_{j-1}=\nu^\perp_{j-1} \ge \nu^\perp_j=\mu^\perp_j +1$, we have $\mu^\perp_{j-1} > \mu^\perp_j$ and there is no $n$-square in the $(j-1)$-st column of the left and the middle diagrams). In any cases, we have $\rho \unlhd \lambda \in {\mathcal F}$ and $w_\nu(\rho) \le w_\mu(\lambda) \le k$, that is, $\rho$ satisfies the expected property. $$ \ytableausetup{mathmode, boxsize=1.1em} \begin{ytableau} \none[\vdots] & \none[\vdots] & \none[\tiny{\text{$j$}}] \\ {} & {} & {} \\ {} & {} & n \\ {} \\ \none[\vdots] \\ \end{ytableau} \qquad \begin{ytableau} \none \\ \none \\ \none[\longrightarrow] \\ \none \\ \none \end{ytableau} \qquad \begin{ytableau} \none[\vdots] & \none[\vdots] & \none[\tiny{\text{$j$}}] \\ {} & {} & {} \\ {} & {} & {} \\ {} & \none & n \\ \none[\vdots] \\ \end{ytableau} \qquad \begin{ytableau} \none \\ \none \\ \none[\longrightarrow] \\ \none \\ \none \end{ytableau} \qquad \begin{ytableau} \none[\vdots] & \none[\vdots] & \none[\tiny{\text{$j$}}] \\ {} & {} & {} \\ {} & {} & {} \\ {} & n \\ \none[\vdots] \\ \end{ytableau} $$ \end{proof} \begin{proposition}\label{keylemma2} Let $\lambda \in [P_{n+l-1}]_{\ge l}$, and ${\mathcal F}:=\{ \rho \in [P_{n+l-1}]_{\ge l} \mid \rho \unlhd \lambda \}$ the lower filter of $[P_{n+l-1}]_{\ge l}$. If $f \in I_{\lambda,l}$ is of the form $f= g_d x_n^d + \cdots + g_1 x_n +g_0 $ with $g_0,\dots,g_d \in K[x_1,\dots,x_{n-1}]$ and $g_d \ne 0$, then $g_0,\dots,g_d$ belong to $I_{{\mathcal F}^d}$. 
\end{proposition} \begin{proof} Assume that $g_m \not \in I_{{\mathcal F}^d}$ for some $m$. By the classical case (i.e., when $l=1$) of Corollary~\ref{Cor to main1}, there is some $\bm a \in K^{n-1}$ such that $\mu :=\Lambda(\bm a) \not \in {\mathcal F}^d$ and $g_m(\bm a) \ne 0$. If $\mu =(\mu_1, \ldots, \mu_p)$, there are distinct elements $\alpha_1, \ldots, \alpha_p \in K$ such that $\alpha_i$ appears $\mu_i$ times in $\bm a$ for $i=1, \ldots, p$. We have \begin{equation}\label{expression} f= \sum_{\substack{T \in \stab{\lambda', l} \\ \lambda' \in {\mathcal F}}}h_T \cdot f_T \end{equation} for some $h_T \in S$. For $T\in \stab{\lambda', l}$, replacing $i$ with $a_i$ in $T$ for all $1 \le i \le n-1$, and $n$ with $x_n$, we get the tableau $T(\bm a)$ whose entries are elements of $K \cup \{ x_n \}$. Take $\rho \in \langle \nu \rangle^l$ for some $\nu \in P_{n-1}$. A bijective filling ${\mathcal T}$ of the squares of the Young diagram of $\rho$ by the multiset $$\{ \, \overbrace{\alpha_1, \ldots, \alpha_1}^{\text{$\mu_1$ copies}}, \overbrace{\alpha_2, \ldots, \alpha_2}^{\text{$\mu_2$ copies}},\cdots, \overbrace{\alpha_p, \ldots, \alpha_p}^{\text{$\mu_p$ copies}}, \overbrace{x_n, \ldots, x_n}^{\text{$l$ copies}} \, \}$$ such that all $n$-squares are filled by $x_n$ is called an {\it $\bm a$-tableau}. We call $\rho$ the {\it shape} of ${\mathcal T}$, and denote it by $\operatorname{sh}({\mathcal T})$. We also denote $\nu$ by $\operatorname{sh}_{<n}({\mathcal T})$. A typical example of an $\bm a$-tableau is $T(\bm a)$ given above. We say that an $\bm a$-tableau ${\mathcal T}$ is {\it regular} if the entries in the $j$-th column of ${\mathcal T}$ are all distinct for each $j$. Note that $T(\bm a)$ is regular if and only if $f_T(\bm a, x_n) \ne 0$. For all $\alpha \in K$, we can show that $\bar{\mu}:=\Lambda^l(\bm a, \alpha)$ belongs to $\langle \mu \rangle^l$ (recall that $\mu=\Lambda(\bm a)$). For example, if $\alpha \not \in \{ \alpha_1, \ldots, \alpha_p \}$, we have $\bar{\mu}=(\mu_1, \ldots, \mu_j, l, \mu_{j+1}, \ldots, \mu_p)$, where $j:=\max \{ i \mid \mu_i \ge l \}$, and hence $\bar{\mu}_i \ge \mu_i$ and $\mu_i^\perp \le \bar{\mu}_i^\perp \le \mu_i^\perp+1$ for all $i$. The case $\alpha \in \{ \alpha_1, \ldots, \alpha_p \}$ can be shown by a similar argument. Anyway, if $\langle \mu \rangle^l \cap {\mathcal F} = \emptyset$, then $\Lambda^l(\bm a, \alpha) \not \in {\mathcal F}$, and hence $f(\bm a, \alpha)=0$ by \eqref{I_{F,l}=J}. This implies that $f(\bm a, x_n)=0$ and $g_m(\bm a)=0$, a contradiction. So $\langle \mu \rangle^l \cap {\mathcal F}$ is non-empty, and it has the element ${\widetilde{\lambda}}$ attaining the minimum of $w_{\mu}(-)$ by Lemma~\ref{maximum element}. Let ${\widetilde{\Tc}}$ be an $\bm a$-tableau of shape ${\widetilde{\lambda}}$ with $\operatorname{sh}_{<n}({\widetilde{\Tc}})=\mu$ such that all squares in the $i$-th row of $\mu$ are filled by $\alpha_i$. For each $i$ with $1 \le i \le p$, let $d_i$ be the number of times that $\alpha_i$ appears in squares above some $n$-squares in ${\widetilde{\Tc}}$. See Example~\ref{a-tableau} below. We have $\sum_{i=1}^p d_i =w_\mu({\widetilde{\lambda}}) >d$, where the inequality follows from the fact that $\mu \not \in {\mathcal F}^d$. \noindent{\bf Claim.} Let ${\mathcal T}$ be a regular $\bm a$-tableau with $\operatorname{sh}({\mathcal T}) \in {\mathcal F}$. For all $i$ with $1 \le i \le p$, $\alpha_i$ appears at least $d_i$ times in squares above some $n$-squares in ${\mathcal T}$.
\noindent{\it Proof of Claim.} Set $\nu := \operatorname{sh}_{<n}({\mathcal T}) \in P_{n-1}$. We will prove the assertion by induction on $\nu$ with respect to the dominance order. Since ${\mathcal T}$ is regular, it is easy to see that $\mu \unlhd \nu$ by the classical case (i.e., when $l=1$) of Corollary~\ref{Cor to main1}. If $\mu=\nu$, applying suitable actions of column stabilizers (i.e., permutations of entries in the same column), we may assume that each square in the $i$-th row of ${\mathcal T}$ is filled by $\alpha_i$ or $x_n$. So the assertion can be shown by an argument similar to the proof of Lemma~\ref{maximum element}. Next consider the case $\mu \lhd \nu$. As the induction hypothesis, we assume that the assertion holds for ${\mathcal T}'$ with $\mu \unlhd \operatorname{sh}_{<n}({\mathcal T}') \lhd \nu$. To argue by contradiction, assume that ${\mathcal T}$ does not satisfy the expected condition, that is, there is some $s$ such that $\alpha_s$ appears less than $d_s$ times in squares above some $n$-squares in ${\mathcal T}$. Since $\mu \lhd \nu$ now, there are some $t$ and $j,j'$ with $j <j'$ such that $\alpha_t$ appears in the $j'$-th column of ${\mathcal T}$, but does not appear in the $j$-th column. If $\alpha_s$ has this property, we take $t=s$. We move the square in the $j'$-th column filled by $\alpha_t$ to the $j$-th column, and get the partition $\nu' \in P_{n-1}$ (a suitable column permutation might be required). The following condition is crucial. \begin{itemize} \item[$(*)$] $s=t$ holds, and the bottom of the $j$-th column of ${\mathcal T}$ is an $n$-square, and that of the $j'$-th column is not. \end{itemize} If $(*)$ is not satisfied, we move the $n$-squares in these columns (if they exist) vertically like \eqref{move of n squares}, then apply a suitable column permutation if necessary (sometimes, we have to move the $j$-th column to the left and/or the $j'$-th column to the right). Finally, we get an $\bm a$-tableau ${\mathcal T}'$ with $\operatorname{sh}_{<n}({\mathcal T}')=\nu'$. On the other hand, if $(*)$ holds, we move the $n$-square in the $j$-th column to below the bottom of the $j'$-th column. Of course, do not forget to move the $\alpha_s$-square in the $j'$-th column to the $j$-th column. Applying a suitable column permutation if necessary, we get ${\mathcal T}'$ with $\operatorname{sh}_{<n}({\mathcal T}')=\nu'$. In this case, we have $\operatorname{sh}({\mathcal T})=\operatorname{sh}({\mathcal T}')$. In both cases, ${\mathcal T}'$ is regular, and $\alpha_s$ appears less than $d_s$ times in squares above some $n$-squares in ${\mathcal T}'$. Moreover, we have $\operatorname{sh}({\mathcal T}') \unlhd \operatorname{sh}({\mathcal T})$, and hence $\operatorname{sh}({\mathcal T}') \in {\mathcal F}$. Since $(\mu \unlhd) \, \nu' \lhd \nu$, this contradicts the induction hypothesis. \qed We now return to the proof of the proposition itself. Substituting $(\bm a, x_n)$ into \eqref{expression}, we have $$ f(\bm a, x_n)= \sum h_T(\bm a, x_n) \cdot f_T(\bm a, x_n). $$ If $f_T(\bm a, x_n) \ne 0$, then $T(\bm a)$ is regular. So, by the Claim, $f_T(\bm a, x_n)$ is divisible by $$R(x_n)=\prod_{1 \le i \le p}(x_n-\alpha_i)^{d_i},$$ and $f(\bm a, x_n)$ itself is divisible by $R(x_n)$. Note that $f(\bm a, x_n) \ne 0$, since $g_m(\bm a) \ne 0$. While the degree of $f(\bm a, x_n)$ is at most $d$, we have $\deg f(\bm a, x_n) \ge \deg R(x_n) = \sum_{i=1}^p d_i >d$. This is a contradiction.
\end{proof} \begin{example}\label{a-tableau} Consider the case $n=8, l=3$, and take the lower filter given by ${\mathcal F} =\{ \lambda \in [P_{10}]_{\ge 3} \mid \lambda \unlhd (4,4,2)\}$. If $\bm a =(\alpha_1, \alpha_1, \alpha_1, \alpha_2, \alpha_2, \alpha_3, \alpha_3)$ (hence $\mu=(3,2,2)$), the $\bm a$-tableau ${\widetilde{\Tc}}$ given in the proof of Proposition~\ref{keylemma2} is as follows: $$ \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} \alpha_1 & \alpha_1 & \alpha_1 & x_8 \\ \alpha_2 & \alpha_2 & x_8 \\ \alpha_3 & \alpha_3 \\ x_8 \end{ytableau} $$ Above the three $n\, (=8)$-squares, there are two copies of $\alpha_1$, so we have $d_1=2$. Similarly, since there is one $\alpha_2$ (resp. $\alpha_3$) above the three $n$-squares, we have $d_2=1$ (resp. $d_3=1$). The following are examples of regular $\bm a$-tableaux whose shapes belong to ${\mathcal F}$. In each case, there are at least $2$ copies of $\alpha_1$ (resp. $1$ copy of $\alpha_2$ and of $\alpha_3$) above $n$-squares. $$ \ytableausetup{mathmode, boxsize=1.3em} \begin{ytableau} \alpha_1 & \alpha_1 & \alpha_1 & x_8 \\ \alpha_2 & \alpha_2 \\ \alpha_3 & \alpha_3 \\ x_8 & x_8 \end{ytableau} \qquad \qquad \begin{ytableau} \alpha_1 & \alpha_1 & \alpha_1 & \alpha_2 \\ \alpha_2 & \alpha_3 & x_8 & x_8 \\ \alpha_3 & x_8 \end{ytableau} \qquad \qquad \begin{ytableau} \alpha_1 & \alpha_1 & \alpha_1 & \alpha_2 \\ \alpha_2 & \alpha_3 & \alpha_3 & x_8 \\ x_8 & x_8 \end{ytableau} $$ \end{example} \begin{theorem}\label{main1.5} For $\lambda \in [P_{n+l-1}]_{\ge l}$, let ${\mathcal F}:=\{ \rho \in [P_{n+l-1}]_{\ge l} \mid \rho \unlhd \lambda \}$ be the lower filter. Then $\{ f_T \mid T \in \stab{\rho,l}, \rho \in {\mathcal F} \}$ is a Gr\"obner basis of $I_{\lambda,l}$. \end{theorem} \begin{proof} It suffices to show that the initial monomial $\ini{f}$ for all $0 \ne f \in I_{\lambda,l}$ is divisible by $\ini{f_T}$ for some $T \in \stab{\rho,l}$ with $\rho \in {\mathcal F}$. Let $f= g_d x_n^d + \cdots + g_1 x_n +g_0$, where $g_i \in K[x_1,\ldots,x_{n-1}]$ and $g_d\neq0$. By Proposition~\ref{keylemma2}, one has $g_d \in I_{{\mathcal F}^d}$. By Theorem~\ref{HWMOY}, $\{ f_T \mid T \in \stab{\mu}, \mu \in {\mathcal F}^d \}$ is a Gr\"obner basis of $I_{{\mathcal F}^d}$ (since we fix the monomial order, it is enough to consider standard tableaux, see \cite[Remark~3.5]{MOY}), and there is a tableau $T \in \stab{\mu}$ for some $\mu \in {\mathcal F}^d$ such that $\ini{f_T}$ divides $\ini{g_d}$. So we can take $\rho \in \langle\mu \rangle^l \cap {\mathcal F}$ with $e:=w_\mu(\rho) \le d$. Let us consider the tableau $T' \in \stab{\rho,l}$ such that the image of each $i = 1, \dots,n-1$ is the same for $T$ and $T'$. Since we have $\ini{f}=x_n^d \cdot \ini{g_d}$ and $\ini{f_{T'}}=x_n^e \cdot \ini{f_T}$, $\ini{f_{T'}}$ divides $\ini{f}$. \end{proof} \begin{example} In contrast to Theorem~\ref{main1}, Theorem~\ref{main1.5} cannot be generalized to the ideal $I_{{\mathcal F},l}:=(f_T \mid T \in \tab{\lambda,l}, \lambda \in {\mathcal F})$ for a lower filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$. For example, if ${\mathcal F} \subset [P_8]_{\ge 2}$ is the lower filter whose maximal elements are $(4,2,1,1)$ and $(3,3,2)$, then $x_4^2x_5^3x_6x_7^2$ is a minimal generator of $\ini{I_{{\mathcal F}, 2}}$, but this cannot be represented in the form of $\ini{f_T}$ for $T \in \stab{\lambda,2}$. \end{example} The following fact might be well known to specialists, and is stated in \cite{MOY} without proof. Here we give a proof for the reader's convenience.
\begin{lemma}\label{product of linear forms} Let $I \subset S=K[x_1, \ldots, x_n]$ be a graded ideal, and $G \subset I$ a Gr\"obner basis of $I$ with respect to the lexicographic order $<$ with $x_1 < x_2 <\cdots < x_n$. If all elements of $G$ are products of linear forms, then $G$ is a Gr\"obner basis of $I$ with respect to any monomial order $\prec$ with $x_1 \prec x_2 \prec \cdots \prec x_n$. \end{lemma} The assumption that $I$ is graded is unnecessary, but we add it here for the simplicity. \begin{proof} Since $g \in G$ is a product of linear forms, we have $\operatorname{in}_{\prec}(g)=\operatorname{in}_<(g)$, and hence $$\operatorname{in}_<(I) =(\operatorname{in}_< (g) \mid g \in G) =(\operatorname{in}_\prec (g) \mid g \in G) \subset\operatorname{in}_{\prec}(I).$$ Since $\operatorname{in}_\prec(I)$ and $\operatorname{in}_<(I)$ have the same Hilbert function (in fact, they have the same Hilbert function as $I$ itself), we have $\operatorname{in}_\prec(I) =\operatorname{in}_<(I)$. It implies that $G$ is a Gr\"obner basis of $I$ with respect to $\prec$. \end{proof} \begin{corollary}\label{smallest & largest} Let $\lambda \in [P_{n+l-1}]_{\ge l}$. With respect to a monomial order in which $x_1$ is either the smallest or the largest among the variables $x_1, \ldots, x_n$, $\{ f_T \mid T \in \tab{l, \rho}, \rho \unlhd \lambda \}$ is a Gr\"obner basis of $I_{l, \lambda}$. \end{corollary} Since we consider several monomial orders, we have to treat $\tab{l, \lambda}$, not $\stab{l, \lambda}$. \begin{proof} First, we consider the case $x_1$ is the smallest among $x_1, \ldots, x_n$. Since the ideal $I_{l, \lambda}$ is symmetric for variables $x_2, \ldots,x_n$, and Specht polynomials are products of linear forms, we may assume that our monomial order is the lexicographic order with $x_1 < \cdots < x_n$ by Lemma~\ref{product of linear forms}, and the assertion follows from Corollary~\ref{Cor to main1}. Similarly, if $x_1$ is the largest, we may assume that our monomial order is the lexicographic order with $x_1 > \cdots > x_n$, and the assertion follows from Theorem~\ref{main1.5}. \end{proof} \begin{example} For $\lambda=(3,3)$, $I_{2,\lambda}$ is generated by 3 elements of degree 3. With respect to a monomial order in which $x_1$ is the smallest, $\ini{I_{2,\lambda}}$ is minimally generated by 3 elements of degree 3 and 2 elements of degree 4. On the other hand, with respect to an order in which $x_1$ is the largest, $\ini{I_{2,\lambda}}$ is minimally generated by 3 elements of degree 3, 3 elements of degree 4, and an element of degree 6. Computer experiment suggests that $\ini{I_{l, \lambda}}$ with respect to an order in which $x_1$ is the smallest requires fewer generators. \end{example} \begin{problem} With the notation of Corollary~\ref{smallest & largest}, is $\{ f_T \mid T \in \tab{l, \rho}, \rho \unlhd \lambda \}$ a universal Gr\" obner basis of $I_{l, \lambda}$? \end{problem} We have computed several partitions $\lambda$ up to $n=8$ using SageMath and {\it Macaulay2}, and we have not found a counter example yet. 
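For instance, the case $\lambda=(3,3)$, $l=2$ (so $n=5$) discussed in the example above can be checked with a few lines of SageMath code along the following lines. This is only a rough sketch: we assume SageMath with its default Gr\"obner engine, the three polynomials are the standard-tableau generators of $I_{2,(3,3)}$ (cf. Lemma~\ref{stab}) written out by hand, and the function name is ours.

\begin{verbatim}
# Sketch (run inside SageMath): degrees of the minimal monomial
# generators of in(I_{2,(3,3)}) for two lexicographic orders.

def initial_degrees(R):
    # The leading monomials of the reduced Groebner basis minimally
    # generate the initial ideal, so we just read off their degrees.
    g = R.gens_dict()
    x1, x2, x3, x4, x5 = g['x1'], g['x2'], g['x3'], g['x4'], g['x5']
    gens = [(x1 - x3)*(x1 - x4)*(x2 - x5),   # standard tableau 1 1 2 / 3 4 5
            (x1 - x2)*(x1 - x4)*(x3 - x5),   # standard tableau 1 1 3 / 2 4 5
            (x1 - x2)*(x1 - x3)*(x4 - x5)]   # standard tableau 1 1 4 / 2 3 5
    gb = R.ideal(gens).groebner_basis()      # reduced Groebner basis
    return sorted(p.lm().degree() for p in gb)

# lex order with x1 the smallest variable (variables listed largest first)
R1 = PolynomialRing(QQ, names='x5,x4,x3,x2,x1', order='lex')
# lex order with x1 the largest variable
R2 = PolynomialRing(QQ, names='x1,x2,x3,x4,x5', order='lex')
print(initial_degrees(R1), initial_degrees(R2))
\end{verbatim}

The printed degree lists can be compared with the counts of minimal generators of $\ini{I_{2,\lambda}}$ stated in the example above.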
\section{A generalization of the case $\#Y_1 \ge 2$ and $\#Y_2=\cdots =\#Y_l=1$} In this section, we fix a positive integer $m$ with $1 \le m \le n$, and set $$\Delta_m:=\Delta(\{1, \ldots, m\})=\prod_{1 \le i < j \le m}(x_i-x_j).$$ For $T \in \tab{l, \lambda}$ with $\lambda \in [P_{n+l-1}]_{\ge l}$, set $$f_{m,T}:= \operatorname{lcm}\{ f_T, \Delta_m \} \in S \quad \text{and} \quad I_{l,m,\lambda}:=( f_{m,T} \mid T \in \tab{l, \lambda}) \subset S.$$ Note that $I_{l,1,\lambda}=I_{l,\lambda}$ and $I_{l,n,\lambda}=(\Delta_n)$. \begin{example} Even if $l=1$, $I_{l,m,\lambda}$ is not a radical ideal in general, while its generators are squarefree products of linear forms $(x_i-x_j)$. For example, if $\lambda=(2,2)$, we have $$I_{1,3,\lambda}=( \Delta_3\cdot(x_1-x_4), \Delta_3\cdot(x_2-x_4), \Delta_3\cdot (x_3-x_4)),$$ where $\Delta_3= (x_1-x_2)(x_1-x_3)(x_2-x_3)$. (Note that an analog of Lemma~\ref{stab} does not hold here, so we also have to consider non-standard tableaux to generate $I_{l,m, \lambda}$.) Clearly, $\Delta_3 \not \in I_{1,3,\lambda}$, but we can show that $\Delta_3 \in \sqrt{I_{1,3,\lambda}}$ by Lemma~\ref{radical inclusion} below. Moreover, the statement corresponding to Lemma~\ref{inclusion} does not hold for $I_{l,m,\lambda}$. In fact, if $\lambda=(2,2)$ and $\mu=(2,1,1)$, then $\mu \lhd \lambda$, but $I_{1,3,\mu}=(\Delta_3) \not \subset I_{1,3,\lambda}$. \end{example} However, we have the following. \begin{lemma}\label{radical inclusion} For $\lambda, \mu \in [P_{n+l-1}]_{\ge l}$ with $\lambda \unrhd \mu$, we have $\sqrt{I_{l, m, \lambda}} \supset I_{l, m, \mu}$. \end{lemma} \begin{proof} It suffices to show that $f_{m, T} \in \sqrt{I_{l, m, \lambda}}$ for all $T \in \tab{l, \mu}$. By Lemma~\ref{inclusion}, there are some $k \in {\mathbb N}$, $T_1, \ldots, T_k \in \tab{l,\lambda}$ and $g_1, \ldots, g_k \in S$ such that $f_T=\sum g_i f_{T_i}$. Multiplying both sides by $\Delta_m$, we have $$\Delta_m \cdot f_T=\sum g_i \cdot (\Delta_m \cdot f_{T_i}).$$ Since $f_{m,T_i}$ divides $\Delta_m \cdot f_{T_i}$, we have $\Delta_m \cdot f_T \in I_{l,m,\lambda}$. Moreover, since $\Delta_m \cdot f_T$ divides $(f_{m,T})^2$, we have $(f_{m,T})^2 \in I_{l,m,\lambda}$, and hence $f_{m,T} \in \sqrt{I_{l,m,\lambda}}$. \end{proof} For a {\it lower} filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$, set $$G_{l, m, {\mathcal F}} := \{ f_{m,T} \mid T \in \tab{l, \lambda}, \lambda \in {\mathcal F} \} \quad \text{and} \quad I_{l,m,{\mathcal F}} := (G_{l, m, {\mathcal F}})=\sum_{\lambda \in {\mathcal F} } I_{l,m,\lambda}.$$ For an {\it upper} filter ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ with the lower filter ${\mathcal F}^{\sf c}:= [P_{n+l-1}]_{\ge l} \setminus {\mathcal F}$, we consider the ideal $$J_{l, m, {\mathcal F}} := (\Delta_m) \cap J_{l, {\mathcal F}} \ (= (\Delta_m) \cap I_{l, {\mathcal F}^{\sf c}}).$$ Since both $(\Delta_m)$ and $J_{l, {\mathcal F}}$ are radical ideals, so is $J_{l, m, {\mathcal F}}$. Since $J_{l,m,{\mathcal F}} \subset (\Delta_m)$, the codimension of $J_{l,m,{\mathcal F}}$ for $m \ge 2$ is 1 (unless ${\mathcal F}=[P_{n+l-1}]_{\ge l}$, equivalently, $J_{l,m,{\mathcal F}}=0$). \begin{theorem}\label{main2} Let ${\mathcal F} \subsetneq [P_{n+l-1}]_{\ge l}$ be a lower filter, and ${\mathcal F}^{\sf c}:= [P_{n+l-1}]_{\ge l} \setminus {\mathcal F}$ its complement. Then $G_{l, m, {\mathcal F}}$ is a Gr\"obner basis of $J_{l, m, {\mathcal F}^{\sf c}}$. Hence $J_{l, m, {\mathcal F}^{\sf c}}=I_{l, m, {\mathcal F}}$, and $I_{l, m, {\mathcal F}}$ is a radical ideal. \end{theorem} Let us prepare for the proof of Theorem~\ref{main2}.
\begin{lemma} \label{keylemma3} Let ${\mathcal F} \subset [P_{n+l-1}]_{\ge l}$ be an upper filter, and let $f$ be a polynomial in $J_{l,m, {\mathcal F}}$ of the form $$ f= g_d x_n^d + \cdots + g_1 x_n +g_0, $$ where $g_0,\dots,g_d \in K[x_1,\dots,x_{n-1}]$ and $g_d\ne 0$. If $m <n$, then $g_0,\dots,g_d$ belong to $J_{l, m, {\mathcal F}_{d+1}}$. \end{lemma} \begin{proof} Here we use the same notation as in the proof of Lemma~\ref{keylemma}. Take $\bm a=(a_1, \ldots, a_{n-1}) \in K^{n-1}$. Since $f \in (\Delta_m)$, if $a_i =a_j$ for some $1 \le i < j \le m$, then $f(\bm a, \alpha)=0$ for all $\alpha \in K$, and hence $g_i(\bm a)=0$ for all $i$. It means that each $g_i$ can be divided by $\Delta_m$ in $K[x_1, \ldots, x_{n-1}]$. So it remains to show that $g_i \in J_{l, {\mathcal F}_{d+1}}$, but it follows from Lemma~\ref{keylemma}, since $f \in J_{l, {\mathcal F}}$. \end{proof} \noindent{\it The proof of Theorem~\ref{main2}.} First, we show that $G_{l, m, {\mathcal F}} \subset J_{l, m, {\mathcal F}^{\sf c}}$. For any $f_{m,T}\in G_{l, m, {\mathcal F}}$, it is clear that $f_{m,T}\in (\Delta_m)$, and we have $f_{m,T} \in (f_T) \subset J_{l, {\mathcal F}^{\sf c}}$ by Theorem~\ref{main1}. Hence $f_{m,T} \in J_{l, m, {\mathcal F}^{\sf c}}$. So it remains to show that, for any $0 \ne f \in J_{l, m, {\mathcal F}^{\sf c}}$, there is some $f_{m,T} \in G_{l, m, {\mathcal F}}$ such that $\ini{f_{m,T}}$ divides $\ini{f}$, but it can be done by induction on $n-m$ (we fix $m$) in the same way as in the proof of Theorem~\ref{main1}, while we use Lemma~\ref{keylemma3} instead of Lemma~\ref{keylemma}. \qed The following corollary immediately follows from Theorem~\ref{main2}. \begin{corollary}\label{I_{l,m,lambda}} For $\lambda \in [P_{n+l-1}]_{\ge l}$, $$\bigcup_{\substack{\mu \in [P_{n+l-1}]_{\ge l} \\ \mu \unlhd \lambda}}G_{l,m,\mu}$$ is a Gr\"obner basis of $\sqrt{I_{l,m, \lambda}}=J_{l,m,{\mathcal F}}$, where ${\mathcal F}$ is the upper filter $\{\nu \in [P_{n+l-1}]_{\ge l} \mid \nu \not \! \! \unlhd \lambda\}$. In particular, $$\sqrt{I_{l,m, \lambda}}=\sum_{\substack{\mu \in [P_{n+l-1}]_{\ge l} \\ \mu \unlhd \lambda}}I_{l,m, \mu}.$$ \end{corollary} \begin{remark} If $\lambda=(\lambda_1, \ldots, \lambda_p) \in P_{n+l-1}$ is of the form $\lambda_1=\cdots =\lambda_{p-1}=k-1$ for some $k >l$, then our $\sqrt{I_{l, m, \lambda}} \ ( =\sum_{\mu \unlhd \lambda} I_{l,m, \mu})$ coincides with the Li-Li ideal $I_{\mathcal Y}$ for ${\mathcal Y}=(Y_1, Y_2, \ldots, Y_{k-1})$ with $Y_1=\{1, 2, \ldots, m\}$, $Y_2=\cdots =Y_l=\{1\}$ and $Y_{l+1}=\cdots =Y_{k-1}=\emptyset$ in the notation of the Introduction. \end{remark} \begin{proposition} $I_{l,m,\lambda}$ is a radical ideal for $m \le 2$. \end{proposition} \begin{proof} The case $m=1$ follows from Theorem~\ref{main1}. So we treat the case $m=2$. By Theorem~\ref{main2}, it suffices to show that $f_{2,T} \in I_{l,2, \lambda}$ for all $T \in \tab{l, \mu}$ with $\mu \unlhd \lambda$. If the letters 2 and (some copy of) 1 are not in the same column of $T$, then we have $f_{2,T}=(x_1-x_2)f_T$, and if they are in the same column, then we have $f_{2,T}=f_T$. We first treat the former case. Since $I_{l, \mu} \subset I_{l, \lambda}$ by Lemma~\ref{inclusion}, there are $g_1, \ldots, g_k \in S$ and $T_1, \ldots, T_k \in \tab{l, \lambda}$ such that $f_T=\sum_{i=1}^k g_i f_{T_i}$. Multiplying $(x_1-x_2)$ to the both sides, we have $$f_{2,T}=(x_1-x_2)f_T =\sum_{i=1}^k g_i \cdot (x_1-x_2)f_{T_i}.$$ Since $f_{2,T_i}$ divides $(x_1-x_2)f_{T_i}$, we have $f_{2,T} \in I_{l,2, \lambda}$. 
So the case when 1 and 2 are in the same column (equivalently, $f_{2,T}=f_T$) remains. We may assume that $\lambda$ covers $\mu$, and we want to modify the argument of the proof of Lemma~\ref{inclusion}, which shows that $I_{l, \mu} \subset I_{l, \lambda}$. In the sequel, we use the same notation as there. The crucial case is that $1,2 \in A$ (we may assume that $a_1=1, a_2=2$) and $1 \not \in B$. In fact, in other cases, it is easy to see that $f_{2,T_i}=f_{T_i}$ for all $i$. By \eqref{inclusion expansion}, we have $$f_T=\sum_{k-k' \le i \le k} (-1)^{i-k+k'} (x_1-x_{a_i})f_{T_i}$$ and $T_i \in \tab{l, \lambda}$ for all $i$. For $i \ge 3$, the letters 1 and 2 stay in the same column of $T_i$, and we have $f_{2,T_i}=f_{T_i}$. So the case $k-k' \ge 3$ is easy, and we may assume that $k-k'=2$. Then, among $T_2, \ldots, T_k$, only $T_2$ does {\it not} have 1 and 2 in the same column. Hence \begin{eqnarray*} f_{2,T}=f_T&=& (x_1-x_2) f_{T_2} + \sum_{3 \le i \le k} (-1)^i (x_1-x_{a_i})f_{T_i} \\ &=& f_{2, T_2} + \sum_{3 \le i \le k} (-1)^i (x_1-x_{a_i})f_{2, T_i} \in I_{l,2, \lambda}. \end{eqnarray*} \end{proof} \end{document}
Upper bounds for Z$_1$-eigenvalues of generalized Hilbert tensors

Juan Meng and Yisheng Song*, School of Mathematics and Information Science, Henan Normal University, Xinxiang, Henan, China

* Corresponding author: Yisheng Song

Received March 2018; Revised June 2018; Early access December 2018; Published March 2020

Fund Project: The first author is supported by the National Natural Science Foundation of China (Grant No. 11571095, 11601134, 11701154)

Abstract: The Z$_1$-eigenvalue of tensors (hypermatrices) is widely used to discuss the properties of higher-order Markov chains and transition probability tensors. In this paper, we extend the concept of Z$_1$-eigenvalue from finite-dimensional tensors to infinite-dimensional tensors, and discuss the upper bound of such eigenvalues for infinite-dimensional generalized Hilbert tensors. Furthermore, an upper bound of the Z$_1$-eigenvalue for finite-dimensional generalized Hilbert tensors is obtained as well.

Keywords: Generalized Hilbert tensor, $Z_1$-eigenvalue, spectral radius, Hilbert inequalities.

Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.

Citation: Juan Meng, Yisheng Song. Upper bounds for Z$_1$-eigenvalues of generalized Hilbert tensors. Journal of Industrial & Management Optimization, 2020, 16 (2): 911-918. doi: 10.3934/jimo.2018184
CommonCrawl
# Basic algebraic concepts and equations Before diving into calculus, it's important to have a solid understanding of basic algebraic concepts and equations. Algebra is the language of mathematics, and it provides the foundation for solving more complex problems. In this section, we will review some fundamental algebraic concepts and equations that you'll need to know for calculus. We'll cover topics such as variables, constants, expressions, equations, and inequalities. Let's start with variables. In algebra, a variable is a symbol that represents an unknown quantity. It can be any letter or combination of letters. For example, in the equation $y = mx + b$, $x$ and $y$ are variables. A constant, on the other hand, is a value that does not change. It is a fixed number. For example, in the equation $y = 2x + 3$, the numbers 2 and 3 are constants. An expression is a combination of variables, constants, and mathematical operations. It represents a value. For example, $2x + 3$ is an expression. We can evaluate this expression by substituting a value for $x$. If $x = 5$, then the expression becomes $2(5) + 3 = 13$. An equation is a statement that two expressions are equal. It contains an equal sign. For example, $2x + 3 = 13$ is an equation. We can solve this equation to find the value of $x$. In this case, $x = 5$. Inequalities are statements that compare two expressions. They use symbols such as $<$ (less than), $>$ (greater than), $\leq$ (less than or equal to), and $\geq$ (greater than or equal to). For example, $2x + 3 < 13$ is an inequality. We can solve this inequality to find the range of values for $x$ that satisfy the inequality. Now that we've reviewed these basic algebraic concepts, let's move on to using Excel for solving equations and graphing functions. # Using Excel for solving equations and graphing functions To solve an equation in Excel, we can use the Goal Seek feature. This feature allows us to find the value of a variable that makes a specific equation true. For example, let's say we have the equation $2x + 3 = 13$. We can use Goal Seek to find the value of $x$ that satisfies this equation. To use Goal Seek, we first need to set up our equation in Excel. We can create a table with two columns - one for the variable $x$ and one for the equation $2x + 3$. We can then enter different values for $x$ and use Excel's formulas to calculate the corresponding values for $2x + 3$. Once we have our table set up, we can use the Goal Seek feature to find the value of $x$ that makes $2x + 3$ equal to 13. We can specify the target value (13) and the cell that contains the equation ($2x + 3$). Excel will then calculate the value of $x$ that satisfies the equation. Let's walk through an example. Suppose we have the equation $2x + 3 = 13$. We can set up our table in Excel as follows: | x | 2x + 3 | |---|--------| | 0 | 3 | | 1 | 5 | | 2 | 7 | | 3 | 9 | | 4 | 11 | | 5 | 13 | To use Goal Seek, we can go to the Data tab in Excel and click on the What-If Analysis button. From there, we can select Goal Seek. In the Goal Seek dialog box, we can specify the target value (13) and the cell that contains the equation ($2x + 3$). Excel will then calculate the value of $x$ that makes the equation true. In this case, Excel will find that $x = 5$ is the value that satisfies the equation $2x + 3 = 13$. ## Exercise Use Excel's Goal Seek feature to solve the equation $3x + 2 = 14$. Enter the equation in an Excel table and use Goal Seek to find the value of $x$ that satisfies the equation. 
### Solution

To solve the equation $3x + 2 = 14$ using Excel's Goal Seek feature, follow these steps:

1. Set up an Excel table with two columns - one for $x$ and one for $3x + 2$.
2. Enter different values for $x$ and use Excel's formulas to calculate the corresponding values for $3x + 2$.
3. Go to the Data tab in Excel and click on the What-If Analysis button. Select Goal Seek.
4. In the Goal Seek dialog box, specify the target value (14) and the cell that contains the equation ($3x + 2$).
5. Click OK.

Excel will then calculate the value of $x$ that satisfies the equation. The value of $x$ that satisfies the equation $3x + 2 = 14$ is $x = 4$.

# Differentiation and its applications

To understand differentiation, let's start with the definition of a derivative. The derivative of a function $f(x)$ at a point $x$ is defined as the limit of the difference quotient as the change in $x$ approaches zero:

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$

The derivative represents the instantaneous rate of change of the function at a specific point. It tells us how the function is changing at that point.

To calculate derivatives in Excel, we can use a `=DERIVATIVE()` function. Note that `DERIVATIVE` is not one of Excel's standard built-in worksheet functions; it is assumed here to be provided by a calculus add-in (see the section on Excel add-ins later in this book). The function takes the form `=DERIVATIVE(function, variable, point)`, where `function` is the function we want to differentiate, `variable` is the variable with respect to which we want to differentiate, and `point` is the point at which we want to evaluate the derivative.

For example, let's say we have the function $f(x) = x^2$. We can use the `=DERIVATIVE()` function in Excel to calculate the derivative of $f(x)$ with respect to $x$ at a specific point.

To calculate the derivative of $f(x) = x^2$ with respect to $x$ at the point $x = 2$, we can use the formula `=DERIVATIVE("x^2", "x", 2)` in Excel. This will give us the value of the derivative at that point. The result of the `=DERIVATIVE()` function in this case is $4$. This means that the function $f(x) = x^2$ is changing at a rate of $4$ at the point $x = 2$.

## Exercise

Use the `=DERIVATIVE()` function in Excel to calculate the derivative of the function $f(x) = 3x^3 + 2x^2 - 5x + 1$ with respect to $x$ at the point $x = 1$.

### Solution

To calculate the derivative of $f(x) = 3x^3 + 2x^2 - 5x + 1$ with respect to $x$ at the point $x = 1$ in Excel, use the formula `=DERIVATIVE("3x^3 + 2x^2 - 5x + 1", "x", 1)`. Since $f'(x) = 9x^2 + 4x - 5$, the result of the `=DERIVATIVE()` function in this case is $8$. This means that the function $f(x) = 3x^3 + 2x^2 - 5x + 1$ is changing at a rate of $8$ at the point $x = 1$.

# Solving optimization problems using Excel

To solve an optimization problem in Excel, we can use the built-in Solver tool. The Solver tool is an add-in that allows us to find the maximum or minimum of a function, subject to certain constraints. It can handle both linear and nonlinear optimization problems.

To use the Solver tool in Excel, we first need to enable it. To do this, go to the "File" tab, click on "Options," and then select "Add-Ins." In the "Manage" box, select "Excel Add-ins" and click "Go." Check the box next to "Solver Add-in" and click "OK."

Once the Solver add-in is enabled, we can access it by going to the "Data" tab and clicking on "Solver" in the "Analysis" group. This will open the Solver Parameters dialog box.

Let's say we have a manufacturing company that produces two products, A and B. The company wants to maximize its profit by determining the optimal production quantities for each product.
The profit function is given by: $$P = 5A + 8B$$ The company has the following constraints: - The total production time cannot exceed 40 hours. - The maximum production quantity for product A is 10 units. - The maximum production quantity for product B is 15 units. To solve this optimization problem in Excel, we can set up the following spreadsheet: | | A | B | C | |---|----|----|----| | 1 | | A | B | | 2 | P | 5 | 8 | | 3 | | | | | 4 | | | | | 5 | | | | In cell B3, we enter the formula `=B2*B4` to calculate the total production time. In cell C3, we enter the formula `=B4*10` to calculate the production quantity for product A. In cell C4, we enter the formula `=B4*15` to calculate the production quantity for product B. Next, we go to the Solver Parameters dialog box and set up the following: - Set Objective: Cell B5 (the profit cell) - To: Max - By Changing Variable Cells: Cell B4 (the production quantity cell) - Subject to the Constraints: - Cell B3 <= 40 (the total production time constraint) - Cell C3 <= 10 (the maximum production quantity for product A constraint) - Cell C4 <= 15 (the maximum production quantity for product B constraint) After clicking "Solve," Excel will find the optimal solution that maximizes the profit. It will adjust the value in cell B4 (the production quantity cell) to achieve this. ## Exercise A company produces two products, X and Y. The profit function is given by: $$P = 4X + 6Y$$ The company has the following constraints: - The total production time cannot exceed 30 hours. - The maximum production quantity for product X is 8 units. - The maximum production quantity for product Y is 10 units. Set up the spreadsheet and use the Solver tool in Excel to find the optimal production quantities for products X and Y that maximize the profit. ### Solution | | A | B | C | |---|----|----|----| | 1 | | X | Y | | 2 | P | 4 | 6 | | 3 | | | | | 4 | | | | | 5 | | | | In cell B3, enter the formula `=B2*B4` to calculate the total production time. In cell C3, enter the formula `=B4*8` to calculate the production quantity for product X. In cell C4, enter the formula `=B4*10` to calculate the production quantity for product Y. Go to the Solver Parameters dialog box and set up the following: - Set Objective: Cell B5 (the profit cell) - To: Max - By Changing Variable Cells: Cell B4 (the production quantity cell) - Subject to the Constraints: - Cell B3 <= 30 (the total production time constraint) - Cell C3 <= 8 (the maximum production quantity for product X constraint) - Cell C4 <= 10 (the maximum production quantity for product Y constraint) After clicking "Solve," Excel will find the optimal solution that maximizes the profit. It will adjust the value in cell B4 (the production quantity cell) to achieve this. # Integration and its applications To understand integration, let's start with the definition of an integral. The integral of a function $f(x)$ over an interval $[a, b]$ is defined as the limit of a sum of areas of rectangles as the number of rectangles approaches infinity: $$\int_{a}^{b} f(x) dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i) \Delta x$$ Here, $\Delta x$ represents the width of each rectangle, and $x_i$ represents the $x$-value of the $i$-th rectangle. The integral represents the area under the curve of the function $f(x)$ between the limits $a$ and $b$. It tells us the total amount of a quantity represented by the function over the given interval. To calculate integrals in Excel, we can use the built-in function `=INTEGRAL()`. 
This function takes the form `=INTEGRAL(function, variable, lower_limit, upper_limit)`, where `function` is the function we want to integrate, `variable` is the variable with respect to which we want to integrate, `lower_limit` is the lower limit of integration, and `upper_limit` is the upper limit of integration. Like `DERIVATIVE`, `INTEGRAL` is not a standard built-in Excel worksheet function; it is assumed here to be supplied by a calculus add-in, and the same integrals can always be approximated with ordinary formulas, as shown in the next section.

For example, let's say we have the function $f(x) = 2x^2 + 3x + 1$. We can use the `=INTEGRAL()` function in Excel to calculate the integral of $f(x)$ with respect to $x$ over a specific interval.

To calculate the integral of $f(x) = 2x^2 + 3x + 1$ with respect to $x$ over the interval $[1, 3]$, we can use the formula `=INTEGRAL("2x^2 + 3x + 1", "x", 1, 3)` in Excel. This will give us the value of the integral over that interval. Since an antiderivative is $\frac{2x^3}{3} + \frac{3x^2}{2} + x$, the result in this case is $\frac{94}{3} \approx 31.33$. This means that the area under the curve of the function $f(x) = 2x^2 + 3x + 1$ between $x = 1$ and $x = 3$ is approximately $31.33$.

## Exercise

Use the `=INTEGRAL()` function in Excel to calculate the integral of the function $f(x) = \sin(x)$ with respect to $x$ over the interval $[0, \pi]$.

### Solution

To calculate the integral of $f(x) = \sin(x)$ with respect to $x$ over the interval $[0, \pi]$ in Excel, use the formula `=INTEGRAL("sin(x)", "x", 0, PI)`. The result of the `=INTEGRAL()` function in this case is $2$. This means that the area under the curve of the function $f(x) = \sin(x)$ between $x = 0$ and $x = \pi$ is $2$.

# Using Excel for numerical integration and area under a curve

In addition to calculating integrals analytically, we can also use numerical methods to approximate the area under a curve. Numerical integration involves dividing the interval of integration into smaller subintervals and approximating the area under each subinterval using a numerical method, such as the midpoint rule or the trapezoidal rule.

To perform numerical integration in Excel, we can use the built-in functions `=SUM()` and `=AVERAGE()`. These functions allow us to sum up the areas of the rectangles or trapezoids and to calculate averages such as midpoints, respectively.

Let's say we want to approximate the area under the curve of the function $f(x) = x^2$ over the interval $[0, 1]$ using the midpoint rule. The midpoint rule involves dividing the interval into $n$ subintervals and approximating the area under each subinterval using the midpoint of the subinterval. The formula for the midpoint rule is:

$$\int_{a}^{b} f(x) dx \approx \sum_{i=1}^{n} f\left(\frac{x_i + x_{i+1}}{2}\right) \Delta x$$

To perform numerical integration in Excel using the midpoint rule, we can follow these steps:

1. Create a column of $x$-values beginning with $x = 0$ and ending with $x = 1$ by using a small increment, such as $0.001$.
2. Create a column of "midpoint" values by calculating the average of each pair of consecutive $x$-values.
3. Create a column of $y$-values by evaluating the function $f(x)$ at each midpoint value.
4. Compute the areas of the rectangles by multiplying each $y$-value by the width of each subinterval, which is the same as the increment used in step 1.
5. Use the `=SUM()` function to add up the areas of the rectangles.

Let's perform numerical integration in Excel using the midpoint rule to approximate the area under the curve of the function $f(x) = x^2$ over the interval $[0, 1]$.

1. Create a column of $x$-values beginning with $x = 0$ and ending with $x = 1$ by using an increment of $0.001$. In cell C2, type "0" and in cell C3, type "=C2+0.001".
Use the copy function to extend the calculation to the whole column until reaching the number $1$. 2. Create a column of "midpoint" values by calculating the average of each pair of consecutive $x$-values. In cell D3, type "=(C2+C3)/2". Use the copy function to extend the calculation to the whole column until reaching the number $1$. 3. Create a column of $y$-values by evaluating the function $f(x)$ at each midpoint value. In cell E3, type "=D3^2". Use the copy function to extend the calculation to the whole column until reaching the number $1$. 4. Compute the sum of the areas of the rectangles by multiplying each $y$-value by the width of each subinterval, which is the same as the increment used in step 1. In cell F3, type "=E3*0.001". Use the `=SUM()` function to add up the areas of the rectangles. The result of the `=SUM()` function in this case is approximately $0.3335$. This is an approximation of the area under the curve of the function $f(x) = x^2$ over the interval $[0, 1]$ using the midpoint rule. ## Exercise Perform numerical integration in Excel using the midpoint rule to approximate the area under the curve of the function $f(x) = \sin(x)$ over the interval $[0, \pi]$. ### Solution 1. Create a column of $x$-values beginning with $x = 0$ and ending with $x = \pi$ by using a small increment, such as $0.001$. In cell C2, type "0" and in cell C3, type "=C2+0.001". Use the copy function to extend the calculation to the whole column until reaching the number $\pi$. 2. Create a column of "midpoint" values by calculating the average of each pair of consecutive $x$-values. In cell D3, type "=(C2+C3)/2". Use the copy function to extend the calculation to the whole column until reaching the number $\pi$. 3. Create a column of $y$-values by evaluating the function $f(x)$ at each midpoint value. In cell E3, type "=SIN(D3)". Use the copy function to extend the calculation to the whole column until reaching the number $\pi$. 4. Compute the sum of the areas of the rectangles by multiplying each $y$-value by the width of each subinterval, which is the same as the increment used in step 1. In cell F3, type "=E3*0.001". Use the `=SUM()` function to add up the areas of the rectangles. The result of the `=SUM()` function in this case is approximately $1.9993$. This is an approximation of the area under the curve of the function $f(x) = \sin(x)$ over the interval $[0, \pi]$ using the midpoint rule. # Regression analysis using Excel Regression analysis is a statistical technique used to model the relationship between a dependent variable and one or more independent variables. It is commonly used in fields such as economics, finance, and social sciences to analyze and predict the behavior of variables. In Excel, we can perform regression analysis using the built-in functions `=LINEST()` and `=FORECAST()`. The `=LINEST()` function calculates the coefficients of a linear regression model, while the `=FORECAST()` function predicts the value of the dependent variable based on the independent variables. To perform regression analysis in Excel, we need to have a set of data with both the dependent variable and the independent variables. The dependent variable is the variable we want to predict or explain, while the independent variables are the variables we use to predict or explain the dependent variable. 
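Before walking through the Excel workflow below, it can help to see the computation that `LINEST` performs, namely an ordinary least-squares fit. The following Python sketch is an illustration only; the variable names and numbers are made up and are not taken from the Excel example that follows.

```python
import numpy as np

# Made-up data: one dependent variable y and two predictors x1, x2.
y = np.array([12.0, 15.0, 14.0, 18.0, 21.0, 19.0])
x1 = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 4.0])
x2 = np.array([5.0, 4.0, 6.0, 5.0, 7.0, 6.0])

# Design matrix with a leading column of ones for the intercept.
X = np.column_stack([np.ones_like(x1), x1, x2])

# Least-squares fit: coef = (intercept, slope for x1, slope for x2).
coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print("intercept and slopes:", coef)

# Predict y for a new observation (x1 = 3, x2 = 6), the analogue of a spreadsheet prediction.
x_new = np.array([1.0, 3.0, 6.0])
print("prediction:", x_new @ coef)
```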
Let's say we have a dataset with the following variables:

- Dependent variable: Sales
- Independent variables: Advertising, Price, and Promotion

We can use the `=LINEST()` function in Excel to calculate the coefficients of the linear regression model. The formula for the `=LINEST()` function is `=LINEST(known_y's, [known_x's], [const], [stats])`, where `known_y's` is the range of cells containing the dependent variable, `known_x's` is the range of cells containing the independent variables, `const` is a logical value indicating whether to include a constant term in the regression model, and `stats` is a logical value indicating whether to return additional statistics.

Let's perform regression analysis in Excel using the `=LINEST()` function to model the relationship between the dependent variable Sales and the independent variables Advertising, Price, and Promotion.

1. Set up the dataset in Excel with the dependent variable Sales in one column and the independent variables Advertising, Price, and Promotion in separate columns.
2. In a new range, enter the formula `=LINEST(B2:B11, C2:E11, TRUE, TRUE)`. This will calculate the coefficients of the linear regression model.
3. The result of the `=LINEST()` function will be an array of values. In the first row, the slope coefficients appear in reverse order of the independent-variable columns, and the intercept term is the last value.
4. Use the `=FORECAST()` function in Excel to predict the value of the dependent variable based on an independent variable. The formula for the `=FORECAST()` function is `=FORECAST(x, known_y's, known_x's)`, where `x` is the value of the independent variable for which we want to predict the dependent variable; for a model with several independent variables, the `=TREND()` function plays the same role.

## Exercise

Perform regression analysis in Excel using the `=LINEST()` function to model the relationship between the dependent variable GPA and the independent variables SAT score and High school GPA.

### Answer

1. Set up the dataset in Excel with the dependent variable GPA in one column and the independent variables SAT score and High school GPA in separate columns.
2. In a new range, enter the formula `=LINEST(B2:B11, C2:D11, TRUE, TRUE)`. This will calculate the coefficients of the linear regression model.
3. The result of the `=LINEST()` function will be an array of values. In the first row, the slope coefficients appear in reverse order of the independent-variable columns, and the intercept term is the last value.
4. Use the `=FORECAST()` function in Excel to predict the value of the dependent variable based on an independent variable (or `=TREND()` when there are several independent variables). The formula for the `=FORECAST()` function is `=FORECAST(x, known_y's, known_x's)`, where `x` is the value of the independent variable for which we want to predict the dependent variable.

# Applications of regression in real-world problem solving

One example is in the field of finance, where regression analysis can be used to model the relationship between a company's stock price and various factors, such as interest rates, market indices, and financial ratios. By analyzing historical data, we can identify the factors that have a significant impact on the stock price and use regression analysis to predict future stock prices.

Another example is in the field of marketing, where regression analysis can be used to model the relationship between a company's sales and various marketing variables, such as advertising expenditure, price, and promotion. By analyzing sales data and marketing variables, we can identify the marketing strategies that are most effective in driving sales and optimize marketing campaigns.
Regression analysis can also be used in social sciences to analyze the relationship between variables, such as income and education, crime rates and poverty levels, or health outcomes and lifestyle factors. By analyzing data from surveys or experiments, we can identify the factors that influence these outcomes and develop policies or interventions to address them. Let's consider an example in the field of marketing. A company wants to analyze the relationship between its sales and various marketing variables, such as advertising expenditure, price, and promotion. The company collects data on sales and marketing variables for a period of time and performs regression analysis to model the relationship. The regression analysis reveals that advertising expenditure has a positive and significant impact on sales, while price has a negative and significant impact. Promotion, on the other hand, does not have a significant impact on sales. Based on these findings, the company can optimize its marketing strategy by increasing advertising expenditure and adjusting the price to maximize sales. It can also allocate resources more effectively by focusing on advertising channels that have a higher return on investment. ## Exercise Think of a real-world problem in a field of your interest where regression analysis can be applied. Describe the problem and how regression analysis can help solve it. ### Solution One example is in the field of healthcare. Regression analysis can be used to analyze the relationship between patient outcomes and various factors, such as treatment methods, patient characteristics, and hospital resources. By analyzing patient data and healthcare variables, we can identify the factors that influence patient outcomes, such as mortality rates or readmission rates, and develop strategies to improve healthcare quality and patient outcomes. Regression analysis can help us understand the impact of different treatment methods, patient characteristics, and hospital resources on patient outcomes, and guide decision-making in healthcare policy and practice. # Advanced Excel functions for calculus One of the most commonly used functions is the `SUM` function. This function allows you to add up a range of cells or values. It can be used to calculate the sum of a series of numbers, which is useful for calculating the definite integral of a function. Another useful function is the `PRODUCT` function. This function allows you to multiply a range of cells or values. It can be used to calculate the product of a series of numbers, which is useful for calculating the derivative of a function. Excel also has built-in functions for calculating trigonometric functions, such as `SIN`, `COS`, and `TAN`. These functions can be used to calculate the values of trigonometric functions at different angles, which is useful for solving trigonometric equations. In addition to these basic functions, Excel also has more advanced functions for calculus operations. For example, the `DERIVATIVE` function can be used to calculate the derivative of a function at a specific point. The `INTEGRAL` function can be used to calculate the definite integral of a function over a specified range. Let's say we have a function f(x) = x^2 + 3x + 2. We want to calculate the derivative of this function at x = 2 using Excel. To do this, we can use the `DERIVATIVE` function. The syntax of the `DERIVATIVE` function is `DERIVATIVE(function, variable, point)`. 
In this case, the function is "x^2 + 3x + 2", the variable is "x", and the point is 2.

```excel
=DERIVATIVE("x^2 + 3x + 2", "x", 2)
```

The result of this calculation is 7. This means that the derivative of the function f(x) = x^2 + 3x + 2 at x = 2 is 7.

## Exercise

Using Excel, calculate the definite integral of the function f(x) = 2x^3 + 5x^2 - 3x + 2 over the interval [0, 1].

### Answer

To calculate the definite integral of a function in Excel, we can use the `INTEGRAL` function. The syntax of the `INTEGRAL` function is `INTEGRAL(function, variable, lower_limit, upper_limit)`. In this case, the function is "2x^3 + 5x^2 - 3x + 2", the variable is "x", and the lower limit is 0 and the upper limit is 1.

```excel
=INTEGRAL("2x^3 + 5x^2 - 3x + 2", "x", 0, 1)
```

Since an antiderivative is $\frac{x^4}{2} + \frac{5x^3}{3} - \frac{3x^2}{2} + 2x$, the result of this calculation is $\frac{8}{3} \approx 2.67$. This means that the definite integral of the function f(x) = 2x^3 + 5x^2 - 3x + 2 over the interval [0, 1] is approximately 2.67.

# Excel add-ins for advanced calculus concepts

In addition to the built-in functions, Excel also offers add-ins that can be used for advanced calculus concepts. These add-ins provide additional functionality and tools that can enhance your calculus calculations.

One popular add-in is the Solver add-in. The Solver add-in allows you to solve optimization problems by finding the maximum or minimum of a function, subject to certain constraints. This can be useful for solving real-world problems that involve maximizing or minimizing a certain quantity.

Another useful add-in is the Analysis ToolPak. The Analysis ToolPak provides a set of data analysis tools that can be used for statistical analysis, regression analysis, and more. These tools can be helpful for analyzing data and making predictions based on mathematical models.

Excel also offers add-ins for numerical integration and solving differential equations. These add-ins provide more advanced methods for solving complex calculus problems.

Let's say we have a function f(x) = x^2 + 3x + 2 and we want to find the minimum value of this function (the parabola opens upward, so it has a minimum but no maximum). We can use the Solver add-in to solve this optimization problem.

First, we need to enable the Solver add-in. To do this, go to the "File" tab, click on "Options", and then select "Add-Ins". In the "Manage" box, select "Excel Add-ins" and click on "Go". Check the box next to "Solver Add-in" and click on "OK".

Once the Solver add-in is enabled, go to the "Data" tab and click on "Solver". In the Solver Parameters dialog box, set the "Set Objective" field to "Min", select the cell that contains the function (e.g. B2), and click on "Add". Then, set the "By Changing Variable Cells" field to the range of cells that contain the variable (e.g. A2), and click on "Add". Finally, click on "Solve" to find the minimum value of the function.

The Solver add-in will calculate the minimum value of the function and provide the corresponding values for the variable cells.

## Exercise

Using the Solver add-in in Excel, find the minimum value of the function f(x) = x^3 - 2x^2 + 5x - 3.

### Answer

To find the minimum value of a function using the Solver add-in, follow these steps: 1. Enable the Solver add-in by going to the "File" tab, clicking on "Options", selecting "Add-Ins", and checking the box next to "Solver Add-in". 2. Go to the "Data" tab and click on "Solver". 3. In the Solver Parameters dialog box, set the "Set Objective" field to "Min", select the cell that contains the function (e.g. B2), and click on "Add". 4.
Set the "By Changing Variable Cells" field to the range of cells that contain the variable (e.g. A2), and click on "Add". 5. Click on "Solve" to find the minimum value of the function. The Solver add-in will calculate the minimum value of the function and provide the corresponding values for the variable cells. # Case studies and real-world examples of calculus and Excel in action Case Study 1: Calculating the Area Under a Curve One common application of calculus is calculating the area under a curve. This can be useful in various fields, such as physics, economics, and engineering. Let's consider an example from physics. Suppose we have a velocity-time graph that represents the motion of an object. We want to find the total distance traveled by the object during a certain time interval. To do this, we need to calculate the area under the velocity-time curve. We can use calculus to solve this problem. By integrating the velocity function with respect to time over the given time interval, we can find the area under the curve, which represents the total distance traveled. To perform this calculation in Excel, we can use numerical integration techniques, such as the trapezoidal rule or Simpson's rule. These methods approximate the area under the curve by dividing it into smaller trapezoids or parabolic segments and summing their areas. We can then use Excel formulas and functions to implement these numerical integration methods and calculate the area under the curve. Example: Calculating the Area Under a Velocity-Time Curve Let's consider a specific example. Suppose we have the following velocity-time data for an object: | Time (s) | Velocity (m/s) | |----------|----------------| | 0 | 0 | | 1 | 5 | | 2 | 10 | | 3 | 8 | | 4 | 3 | To calculate the total distance traveled by the object during the time interval from 0 to 4 seconds, we can use numerical integration in Excel. First, we need to create a column for the time values and a column for the velocity values. We can then use the trapezoidal rule or Simpson's rule to calculate the area under the curve. Next, we can use Excel formulas and functions to perform the numerical integration. For example, we can use the TRAPZ function to calculate the area using the trapezoidal rule: ``` =TRAPZ(B2:B6, A2:A6) ``` This formula calculates the area under the curve defined by the velocity values in column B and the corresponding time values in column A. By applying this formula, we can find that the total distance traveled by the object during the time interval from 0 to 4 seconds is 26 square meters. ## Exercise Consider the following velocity-time data for a moving car: | Time (s) | Velocity (m/s) | |----------|----------------| | 0 | 0 | | 1 | 10 | | 2 | 15 | | 3 | 20 | | 4 | 18 | Using numerical integration in Excel, calculate the total distance traveled by the car during the time interval from 0 to 4 seconds. ### Solution To calculate the total distance traveled by the car during the time interval from 0 to 4 seconds, we can use numerical integration in Excel. First, create a column for the time values and a column for the velocity values. Next, use the TRAPZ function to calculate the area under the curve: ``` =TRAPZ(B2:B6, A2:A6) ``` This formula calculates the area under the curve defined by the velocity values in column B and the corresponding time values in column A. By applying this formula, we find that the total distance traveled by the car during the time interval from 0 to 4 seconds is 61 square meters.
Textbooks
How to calculate univariate conditional distribution of a trivariate Gaussian [closed as a duplicate of: Deriving the conditional distributions of a multivariate normal distribution]

I am trying to find the conditional distribution of a trivariate Gaussian. So here is a hypothetical trivariate Gaussian: $$\mathcal{N}(\mu_{ABC},\Sigma_{ABC}),\;\mu_{ABC}=\begin{bmatrix}\mu_A \\ \mu_B \\ \mu_C \end{bmatrix},\;\Sigma_{ABC}=\begin{bmatrix} \sigma^2_{AA} & \sigma^2_{AB} & \sigma^2_{AC}\\ \sigma^2_{BA} & \sigma^2_{BB} & \sigma^2_{BC} \\ \sigma^2_{CA} & \sigma^2_{CB} & \sigma^2_{CC} \end{bmatrix}$$ I would like to find the conditional distribution parameters $$\mu_{A|B,C} \quad\text{and}\quad \Sigma_{A|B,C}$$ to get the resulting distribution $$\mathcal{N}\left(\mu_{A|B,C},\,\Sigma_{A|B,C}\right).$$

Tags: normal-distribution, conditional-probability, covariance-matrix, joint-distribution, multivariate-normal-distribution. Asked by Matthew Jane.

Answer (Subrata Pal): Write the trivariate distribution in blocks first: the first block holds the first random variable, and the second block holds the remaining two variables. Write the mean and covariance matrix of the trivariate normal in this partitioned form, as in the standard conditional multivariate normal result. Then find the conditional mean and covariance, where the conditional covariance is the Schur complement of the bottom-right 2x2 block of the whole trivariate covariance matrix.
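A minimal numerical sketch of the block/Schur-complement recipe described in the answer, assuming NumPy is available; the mean vector, covariance matrix and observed values below are made-up illustrative numbers, not taken from the question.

```python
import numpy as np

# Illustrative parameters for (A, B, C).
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 3.0, 0.5],
                  [0.8, 0.5, 2.0]])

# Partition: A is the first variable, (B, C) are the remaining two.
mu_A, mu_BC = mu[0], mu[1:]
S_AA = Sigma[0, 0]
S_A_BC = Sigma[0, 1:]          # cross-covariance block between A and (B, C)
S_BC = Sigma[1:, 1:]           # bottom-right 2x2 block

bc_obs = np.array([2.5, 2.0])  # observed values of B and C

# Conditional mean: mu_A + S_A_BC * inv(S_BC) * (bc_obs - mu_BC)
mu_cond = mu_A + S_A_BC @ np.linalg.solve(S_BC, bc_obs - mu_BC)
# Conditional variance: Schur complement S_AA - S_A_BC * inv(S_BC) * S_BC_A
var_cond = S_AA - S_A_BC @ np.linalg.solve(S_BC, S_A_BC)
print(mu_cond, var_cond)
```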
CommonCrawl
Journal of Big Data A robust machine learning approach to SDG data segmentation Kassim S. Mwitondi ORCID: orcid.org/0000-0003-1134-547X1, Isaac Munyakazi2 & Barnabas N. Gatsheni3 Journal of Big Data volume 7, Article number: 97 (2020) Cite this article In the light of the recent technological advances in computing and data explosion, the complex interactions of the Sustainable Development Goals (SDG) present both a challenge and an opportunity to researchers and decision makers across fields and sectors. The deep and wide socio-economic, cultural and technological variations across the globe entail a unified understanding of the SDG project. The complexity of SDGs interactions and the dynamics through their indicators align naturally to technical and application specifics that require interdisciplinary solutions. We present a consilient approach to expounding triggers of SDG indicators. Illustrated through data segmentation, it is designed to unify our understanding of the complex overlap of the SDGs by utilising data from different sources. The paper treats each SDG as a Big Data source node, with the potential to contribute towards a unified understanding of applications across the SDG spectrum. Data for five SDGs was extracted from the United Nations SDG indicators data repository and used to model spatio-temporal variations in search of robust and consilient scientific solutions. Based on a number of pre-determined assumptions on socio-economic and geo-political variations, the data is subjected to sequential analyses, exploring distributional behaviour, component extraction and clustering. All three methods exhibit pronounced variations across samples, with initial distributional and data segmentation patterns isolating South Africa from the remaining five countries. Data randomness is dealt with via a specially developed algorithm for sampling, measuring and assessing, based on repeated samples of different sizes. Results exhibit consistent variations across samples, based on socio-economic, cultural and geo-political variations entailing a unified understanding, across disciplines and sectors. The findings highlight novel paths towards attaining informative patterns for a unified understanding of the triggers of SDG indicators and open new paths to interdisciplinary research. The 17 Sustainable Development Goals (SDGs) signed up by 193 United Nations member states in 2015, as the blueprint for achieving a better and more sustainable future for mankind and planet earth span across various aspects of life [1]. Each goal is defined with measurable aims for improving our quality of life to be achieved by 2030 [2]. Since then, governments, institutions, businesses and individual researchers across the world, have increasingly paid attention to the SDGs, mainly for national development strategies, technical and business improvements as well as theoretical and practical aspects of their implementation. The complex interactions of the SDGs, the magnitude and dynamics of inherent data attributes and the deep and wide socio-economic and cultural variations across the globe are both challenges and opportunities to the SDG project. In the light of the recent technological advances in computing power and explosions in data generation, this paper treats each SDG as a source of Big Data [3,4,5]. Across sectors and nations, Big Data challenges and opportunities manifest in technical and application forms. 
Technically, they are pathways towards addressing issues ranging from data infrastructure, governance, sharing, modelling and security and from an application perspective, they potentially lead to influential policies and improving decision making at institutional, national, regional and global levels. In particular, Big Data challenges and opportunities present potential knowledge for unlocking our understanding of the mutual impact—positive and negative, resulting from our interaction with our environment [6]. Indicators for the 17 SDGs pool together a wide range of issues—hunger, poverty, inequality, health, species facing extinction, land degradation, gender inequality, gaps in education quality, productivity and technological achievements. These issues span across sectors and regions and our sustainability requires an adaptive understanding of their triggers. It is in that context that we view them as highly voluminous, volatile and dynamic data attributes, the behaviour and variations of which we need to track and understand in a unified and interdisciplinary manner. A unified interdisciplinary understanding of the challenges we face hinges on the relationship between knowledge extraction from data and development, which is well-documented [7,8,9]. The United Nations has a series of publications relating to the relevance of Big Data to SDGs [10,11,12]-but none of these have specifically focused on Big Data modelling. Innovations in data acquisition, storage, dissemination and modelling have taken different forms at different levels, most notably visual pictures of what the world is like [13, 14], while the Millenium Institute (https://www.millennium-institute.org/isdg) has developed tools for simulating patterns based on alterations of some key SDG metrics. All these tools provide enhanced visualisation and are capable of generating an infinitely large number of patterns, depending on the choices or perturbations made. However, they can be viewed as enhanced descriptive statistics generators and often simulated patterns are based on pre-determined assumptions, parameters, environment etc, which vary invariably in a spatio-temporal context. A recent research work-Development Science Framework (DSF) for Big Data modelling of SDGs [15, 16], combines data streaming from external factors like Government policies, cross-border legislations, technical and socio-economic and cultural factors with data directly attributable to the SDG indicators. In the form of highly voluminous and dynamic data, the SDG indicators are inevitably associated with spatio-temporal and other forms of variation. It is in this context that this paper seeks to highlight paths for expounding triggers of SDGs indicators. Based on the original ideas in [15, 16], the approach is consilient in that it adopts an interdisciplinary approach to unifying the underlying principles, concepts and reasoning for a comprehensive SDG modelling. One of its major strengths is that it is designed to add a predictive power to existing tools for SDG data visualisation. The approach is adaptive to the well-documented root-cause analysis [17] and an automated observation mapping. It is modelled on existing knowledge systems and cross-sectoral governance arrangements [18], to extract huge chunks of data from selected SDGs for identifying and modelling triggers of indicators across SDGs. 
We shall be making some key assumptions-notably on regional homegeneity and heterogeneity, allowing data simulations based on one country's real data to be used as real data proxies. The paper is organised as follows. Section 1 presents the study background-including the study motivation, objectives and research question. The methodology-data sources and implementation strategy are in Section 2, followed by analyses and discussions in Section 3 and concluding remarks in Section 4. The motivation for this work derives from years of interdisciplinary work relating to modelling high-dimensional data. In particular, the complexity of SDGs interactions and dynamics renders itself readily to the problem of data randomness [19, 20]. Identifying triggers of the indicators amounts to uncovering what works in different sectors and countries which, given the spatio-temporal variations can be challenging. This work looks at variations in SDG data across sectors from the Southern, Eastern and Western parts of the African continent. Research question and objectives To uncover triggers of the indicators, we adopt a general pragmatic approach to examine similarities and dissimilarities among data attributes that could lead to uncovering potentially useful information in the attributes. Although this work is inspired by the narrative of SDG Big Data Modelling [15, 16], it does not carry out Big Data modelling, in the strictest sense of the word. Instead, it provides a pathway for a consilient approach to complex SDG data modelling via the research question How can interdisciplinary research revolutionise knowledge extraction from SDG data? It aims to demonstrate the complexity of answering the foregoing question through the following six objectives. To exhibit the impact of data randomness and variations through data visualisation. To promote interdisciplinary activities for problem identification and attainment of agenda 2030. To highlight and support interdisciplinary paths for a unified understanding of global phenomena. This paper adopts the concept of Development Science Framework-DSF [15, 16], the main idea of which is to view each SDG as a Big Data node and the UN SDG data repository as a multi-disciplinary data fabric. The DSF consists of two layers-the inner and outer shells, via which it associates data streams and variations with internal and external factors. The former relates to variations within the actual data attributes while the latter relates to factors such as infrastructure, legislations and other socio-economic and geo-political variations which directly impinge on data modelling. This section outlines the mechanics of the framework, based on those considerations. The main data source for this work is the United Nations SDG data repository [2] which holds the full list of targets and indicators for all the 17 SDGs from 2000 to 2018. The full description of the targets and indicators is provided by the United Nations [21]. This work focuses only on structured data from five of the 17 SDGs-i.e., # 1, 2, 3, 4 and 9, using hundreds of SDG indicators in six African countries–Botswana, Cameroon, Ghana, Kenya, Rwanda & South Africa. Indicators for each of these SDGs fall within specific targets as illustrated in Table 1. Table 1 Selected indicators, associated SDGs and number of cases used in the study Selection of the variables was guided by the research question. 
For instance, it was reasonable to assert that the level and quality of education in a country would impinge on the level of innovation and productivity, hence the attained level of manufacturing and Research and Development (R&D). The data attributes were cleaned and reformatted to fit in with the modelling strategy. Labelling of the data could be carried out in various ways—by country, by indicator, by region, etc. This work focuses on country variations, implying that performance within countries provides potentially useful information on triggers of indicators variations and that indicator variables are predictors of geographical locations. The implementation strategy adopted in this study is outlined below. Implementation strategy This work applies two commonly used techniques—i.e., Principal Component Analysis (PCA) and data clustering-a technique used to group data objects according to their homogeneity. For the latter we use the K-Means technique [22, 23]. Both methods use scaled matrix of sampled features to reduce the data dimensionality. Dimensional reduction using PCA Principal component analysis (PCA) seeks to transforms a number of correlated variables into a smaller number of uncorrelated variables, called principal components. It uses the correlation among variables to develop a small set of components, which empirically summarise the correlations among them. Its main goal is to reduce data dimensionality—i.e., reduce the number of variables while retaining most of the original variability in it. Principal components are extracted in succession, with the first component accounting for as much of the variability in the data as possible and each succeeding component accounting for as much of the remaining variability as possible. More specifically, PCA is concerned with explaining the variance-covariance structure of a high dimensional random vector through a few linear combinations of the original component variables. The indicators in Table 1 can be formulated in a generic form as in Eq. 1. $$\begin{aligned} \mathbf{X} =\begin{bmatrix} x_{11} &{} x_{12} &{} x_{13} \dots x_{1n}\\ x_{21} &{} x_{22} &{} x_{13} \dots x_{2n} \\ x_{31} &{} x_{32} &{} x_{33} \dots x_{3n} \\ \dots &{} \dots &{} \dots \dots \dots \\ x_{p1} &{} x_{p2} &{} x_{p3} \dots x_{pn}\end{bmatrix}=\left[ x_{ij}\right] \end{aligned}$$ Equation 1 corresponds to the data source notation in Algorithm 1 and, in this application, it describes the 11 numeric variables selected from Table 1, constituting the set $$\begin{aligned} \mathcal {SDGI}=\left\{ MA, NO, EB, MI, IN, MG, ME, MP, MO, UN, TR \right\} \subset \mathbb {R}^n \end{aligned}$$ Extracted components are inferred from the correlations among the indicator variables, with each component being estimated as a weighted sum of the variables. That is, we can extract 11 components as random variables such that $$\begin{aligned} \mathcal {PC}_k=\left\{ w_{ik}MA, w_{ik}NO, w_{ik}EB, w_{ik}MI, w_{ik}IN, w_{ik}MG, w_{ik}ME, w_{ik}MP, w_{ik}MO, w_{ik}UN, w_{ik}TR \right\} \end{aligned}$$ where \(k=1,2,3,\ldots ,10,11\) denoting the number of components and \(i=1,2,3,\dots , 10, 11,\) denoting the number of variables. The vectors \(w_{ik}\) are chosen such that the following conditions are met. 
\(\Vert w_{k}\Vert =1\) Each of the \(\mathcal {PC}_k\), maximises the variance \(V\left\{ w_k^{'}\mathcal {SDGI}_k\right\}\) and The covariance \(COV\left\{ w_{k}^{'}\mathcal {SDGI}_{k}\,w_{r}{'}\mathcal {SDGI}_{r}\right\} =0, \, \forall {k}<{r}\) In other words, the principal components are extracted from the linear combinations of the original variables maximising the variance and have zero covariance with the previously extracted components. It can be shown that the number of such linear combinations is exactly 11. Our applications will adopt this method. Underlying mechanics of data clustering Another common unsupervised learning method is cluster analysis [24, 25] groups data according to some measures of similarity, and it is generally described as follows. Given \(\mathcal {SDGI}\) data in Eq. 2 and, assuming k distinct clusters for \(\mathcal {SDGI},\,\,\text {i.e.,}~\mathcal {C}=\left\{ c_1,c_2,\dots ,c_k\right\} ,\) each with a specified centroid. Then, for each of the vectors \(j=1,2,\dots p,\) we can obtain the distance from \(\mathbf{v} _j\in \mathcal {SDGI}\) to the nearest centroid from the set \(\left\{ \mathbf{x} _1,\mathbf{x} _2,\dots \mathbf{x} _k\right\}\) as $$\begin{aligned} \mathcal {D}_j\left( \mathbf{x} _1,\mathbf{x} _2,\dots \mathbf{x} _k\right) =\min \limits _{1\le l\le k}d\left( \mathbf{x} _l,~\mathbf{v} _j\right) \end{aligned}$$ where \(d\left( .\right)\) is an adopted measure of distance and the clustering objective would then be to minimise the sum of the distances from each of the data points in \(\mathcal {SDGI}\) to the nearest centroid. That is, optimal partitioning of \(\mathcal {C}\) requires identifying k vectors \(\mathbf{x} ^*_1,\mathbf{x} ^*_2,\dots ,\mathbf{x} ^*_k\in \mathbb {R}^n\) that solve the continuous optimisation function in Eq. 5. $$\begin{aligned} \min \limits _{\left\{ \mathbf{x} _1,\dots ,\mathbf{x} _k\right\} \in \mathbb {R}^n} f\left( \mathbf{x} _1,\dots ,\mathbf{x} _k\right) =\sum _{j=1}^{p}\mathcal {D}_j\left( \mathbf{x} _1,\dots ,\mathbf{x} _k\right) \end{aligned}$$ Minimisation of the distances depends on the initial values in \(\mathcal {C},\) hence if we let \(z_{i=1,2,...,n}\) be an indicator variable denoting group membership with unknown values, the search for the optimal solution can be through iterative smoothing of the random vector \(x|(z=k),\) for which we can compute \(\bar{\mu }=\mathbf {E}(x) \quad \text {and}\quad \delta =\left\{ \mu _k-\bar{\mu }|y=k\in \mathbf{c _z}\right\} .\) In a labelled data scenario, \(\left\{ x_i,y_i\right\} ~i=1,2,\dots n,\) Eq. 5 amounts to minimising Eq. 6 $$\begin{aligned} f\left( \theta \right) =\sum _{i=1}^{n}\left[ y_i-g\left( x_i;~\theta \right) \right] ^2 \end{aligned}$$ where \(x_i\) are described by the parameters \(\left\{ \bar{\mu }~\text {and}~\delta \right\} \in \theta\) and \(g(x_i;~\theta )\) are fitted values. Equations 4 through 6 relate to the K-Means clustering algorithm [22, 23], which searches for clusters in numeric data based on pre–specified number of centroids. The decision on the initial number of centroids does ultimately impinge on the detected clusters and we shall be addressing this issue via the Algorithm in "The Sample-Measure-Assess (SMA) Algorithm" section. Whether we are looking for variations among countries, SDGs or their indicators, interest is in their variant or invariant behaviour across the set and over time. 
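As a concrete illustration of the two procedures just outlined, the sketch below scales a small synthetic indicator matrix, extracts principal components and then clusters the observations with K-Means. It assumes scikit-learn is available; the data are random placeholders rather than the Table 1 indicators, and the component and cluster counts are arbitrary choices.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 11))        # 60 observations of 11 indicator-like variables

X_scaled = StandardScaler().fit_transform(X)   # scale the features, as done before PCA/K-Means

pca = PCA(n_components=3)
scores = pca.fit_transform(X_scaled)           # component scores
print("explained variance ratio:", pca.explained_variance_ratio_)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(scores)            # cluster memberships
print("cluster sizes:", np.bincount(labels))
```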
Addressing spatio-temporal variations in SDG data appeals naturally to dealing with randomness in data [19, 20] and adopting interdisciplinary approaches to gaining a unified understanding and interpretation of data modelling. This work is not based on a complex high-dimensional dataset but, as explained in the "Research question and objectives" section, it demonstrates the techniques in anticipation of such volumes and dynamics. The Sample-Measure-Assess (SMA) algorithm [26], described in "The Sample-Measure-Assess (SMA) Algorithm" section, was developed to address variations in data due to inherent randomness.

The Sample-Measure-Assess (SMA) Algorithm

The SMA algorithm [15, 16] seeks to address issues of data randomness [19, 20]. It draws from existing modelling techniques such as the standard variants of cross-validation [27] and permutation feature importance [28]. Unlike many of its predecessors, the SMA has a built-in mechanism that allows it to handle data randomness more efficiently. Further, it is adaptable to a wide range of models and amenable to both clustering and classification problems. Its mechanics, described below, assume structured data [29], but can readily be extended to semi-structured and unstructured data. Its implementation is problem-specific and, in this case, the free parameters were chosen based on the assumptions made in the last paragraph of the "Introduction" section. For example, the decision to run the algorithm without the industrial manufacturing related variables was based on the prior knowledge that, relative to other African countries, these factors are predominantly influenced by South Africa. The dataset \(\mathbf{X} =\left[ x_{i,j}\right]\) corresponds to Table 1 and the learning model \(F\left( \phi \right)\) is, in this case, either PCA or K-Means. The constant \(\kappa\) used here is a free parameter, determined by the user. The algorithm draws samples from the full data, generating random training and testing samples with distributional parameters varying from sample to sample. The parameters \(\Theta _{tr}(.)\leftarrow \Theta _{tr}\) and \(\Theta _{ts}(.)\leftarrow \Theta _{ts}\) are updated by randomly drawn samples, \(\left[ x_{\nu ,\tau }\right] \leftarrow \left[ x_{i,j}\right] ,\) initialised in step 8, and sampled in steps 9 and 10. They are random and they remain stateless across all iterations. The same applies to \(\left[ x_{\nu ,\tau }\right] \leftarrow \left[ x_{l\ne i,j}\right] .\) The notation \(\mathcal {\hat{L}}_{tr,ts} \propto \Phi (.)_{tr,ts}\) represents multiple trained and tested machine learning models, adopted by the user. The loop from steps 11 through 19 involves sampling through the data with replacement, fitting the model and updating the parameters. The choice of the best performing model is carried out at step 20, where \(P\left( \Psi _{D,POP}\ge \Psi _{B,POP}\right)\) is the probability of the population error being greater than the training error.

Analyses, results and discussions

Analyses are presented from both descriptive and inferential perspectives in order to, firstly, grasp an understanding of the data we are looking at and, secondly, decide what to do with the data in the attributes. Due to constraints on time and space, we work only with a handful of selected indicators, some of which are shown in Table 1.

Exploratory data analyses

Exploratory data analysis (EDA) is a common good practice that helps gain insight into the data at hand, typically via visual inspection through graphs and numerical results.
These early investigations are extremely useful in that they serve as warnings, hints or both as to the overall behaviour of the data, e.g., the presence of outliers, missing data or systematic patterns. Figure 1 shows box plots of four of the selected indicators for the six countries. Each of the boxes is built on sorted scores, forming equal-sized groups referred to as quartiles. Each box spans the inter-quartile range, which contains the middle 50% of the data, with the median line dividing it into two parts. The lines extending from the top and bottom edges of the boxes, known as whiskers, represent data points outside the middle 50% and are indicators of outlying cases. For all four indicators the boxes show that South Africa has a high level of consistency in the data collected over the period, whereas there are relatively wide variations for Rwanda and Cameroon. In all four cases, South Africa is well isolated from the remaining five countries, which suggests a fundamental difference between the country and the rest. With the exception of Rwanda, where there are outlying cases in the upper quartile for maternal and infant mortality indicators, and Kenya, with outlying cases on undernourishment, the remaining boxes are quite evenly distributed. These variations warrant further investigations.

Box plot distributions for four indicator variables selected from Table 1

Another common EDA method for investigating data behaviour is to look at the individual univariate densities and try to hypothesise what message they provide. Note that a density estimator seeks to model the probability distribution that generated the data, and there cannot be a better example than the histogram. The challenges we face with histogram estimation, i.e., choosing the bin size and location, are typical of those encountered with density estimation. Figure 2 presents the same four indicators discussed above, drawn at a very small bandwidth of 0.01, the equivalent of choosing very small bin sizes for a histogram. The number of modes in each density suggests the existence of a group or a distinctive feature within that dataset; altering the bandwidth changes the number of these features and, as with histograms, various choices can lead to data representations with distinctively different features.

Density estimations for the four variables plotted at bandwidth 0.01

Univariate analysis has many limitations, which arise from factors outside that variable. With each SDG associated with hundreds of indicators, the need to explore interactions and variations among data attributes is apparent. One way to explore such relationships is through correlation analysis, which measures the strength of a linear association between two variables. The left-hand side panel in Fig. 3 shows all paired correlations, with the strongest positive correlations being between UNDERNOUR and EBI (79%) and between MATMORTRATIO and INFMORTPER1K (71%). The indicators MANUFPCTA and MATMORTRATIO exhibit strong negative correlation (−75%), while MANUFPCTA and EBI stand at −66%. The correlation plots for these four indicators are given on the right-hand side panel. Effectively, the correlation function attempts to draw a line of best fit through the paired indicators without regard to which influences the other, hence the old dictum that correlation does not imply causation.
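Pairwise correlations of the kind shown in Fig. 3 are straightforward to reproduce. The snippet below is an illustrative sketch only: the file name is a hypothetical export of the Table 1 attributes, and just the five column labels mentioned above are assumed.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("sdg_indicators.csv")   # hypothetical export of the Table 1 attributes
cols = ["UNDERNOUR", "EBI", "MATMORTRATIO", "INFMORTPER1K", "MANUFPCTA"]

corr = df[cols].corr()                   # pairwise Pearson correlations
print(corr.round(2))

# Rank indicator pairs by absolute correlation (upper triangle only)
rows, cols_idx = np.triu_indices(len(cols), k=1)
pairs = sorted(zip(rows, cols_idx), key=lambda rc: -abs(corr.values[rc]))
for i, j in pairs:
    print(f"{cols[i]:>14} vs {cols[j]:<14} r = {corr.values[i, j]:+.2f}")
```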
Correlations among selected indicator variables described in Table 1

The EDA methods presented in the "Exploratory data analyses" section provide good insights into the overall data behaviour, relationships and variations among the data attributes. This section focuses on unsupervised learning, a process of drawing inferences from unlabelled data. It implements two unsupervised learning models: PCA and clustering. We adopt PCA for a simple illustration of how to use the data matrix in Eq. 2 to try and uncover triggers of the indicators, via similarities and dissimilarities among countries, SDGs and indicators. Since PCA transforms indicators into linear combinations of an underlying set of hypothesised or unobserved components, each component may be associated with two or more of the original indicators. That is, rather than measuring infant and maternal mortality, say, we may have a single measure on the state of health services in a particular country. Table 2 exhibits PCA loadings, values that describe the specific association between factors and the original SDG indicators. In particular, the concept of "loadings" refers to the correlation between the indicators and the factors, and loadings are key to understanding the nature of a particular factor. Loadings derive from the magnitude of the eigenvalues associated with the individual indicator. Squared factor loadings indicate what percentage of the variance in an original SDG indicator is explained by a component. Consequently, it is necessary to find the loadings, then solve for the factors, which will approximate the relationship between the original indicators and underlying factors. The directions of the SDG indicators here reflect the role each indicator played in forming each of the eleven components. In interpreting extracted components, the main consideration is about these values, as each component is the direction that maximises variance among all directions orthogonal to the previous components.

Table 2 Loadings provide differentiation among extracted components

The two panels in Fig. 4 derive from the data in Table 1 and the methods in Eqs. 2 and 3. In the panel on the left, involving 140 observations on 11 SDG indicators, the first component accounted for 38.2% of the total variation. Despite the small size, clear country-specific patterns emerge: on manufacturing per capita, for instance, South Africa dominates, with deaths from non-communicable diseases and other aspects of manufacturing taking the same direction. On the other hand, issues relating to poverty (maternal and infant mortality, undernourishment and the employed population below the international poverty line) are dominated by statistics from Rwanda. The right-hand side panel, yielding a 39.4% explained variance by the first component, excludes the manufacturing-related variables (MANUFPCTA, NONCOMMDEATHS and MANUFGDP), which are mostly associated with South Africa.

The LHS and RHS panels are with and without manufacturing related variables respectively

In both cases in Fig. 4, multiple runs through the SMA algorithm, using randomly sampled data, gave eigenvalue-rule cut-off points of between two and three components. In searching for triggers of SDG indicators, a thorough understanding of these categories of variables is required. It is under such circumstances that interdisciplinarity plays a crucial role. For such a small study, it is important to interpret findings with care, as such patterns may arise from the level and quality of data.
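Loadings and explained-variance proportions of the kind reported in Table 2 and Fig. 4 can be obtained from the eigendecomposition of the indicators' correlation matrix. The sketch below is an illustration only, run on a synthetic 140 x 11 stand-in rather than the study data.

```python
import numpy as np

def pca_loadings(X):
    """Loadings (indicator-component correlations) and explained-variance
    ratios from the correlation matrix of a (records x indicators) array."""
    R = np.corrcoef(X, rowvar=False)            # correlation matrix of the indicators
    eigval, eigvec = np.linalg.eigh(R)          # eigh returns ascending eigenvalues
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]
    loadings = eigvec * np.sqrt(eigval)         # scale eigenvectors by sqrt(eigenvalue)
    explained = eigval / eigval.sum()
    return loadings, explained

X = np.random.default_rng(0).normal(size=(140, 11))   # stand-in indicator matrix
L, ev = pca_loadings(X)
print(f"variance explained by the first component: {100 * ev[0]:.1f}%")
print("loadings of the 11 indicators on that component:", np.round(L[:, 0], 2))
```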
We take a closer look at variations in patterns attributable to data randomness.

Data clustering

The K-Means clustering algorithm [22, 23] was applied to the data in Table 1 for a range of 2 to 7 clusters. The results for 2, 3, 4 and 5 clusters are summarised in Table 3, where it is evident that in all cases South Africa distinctively stands out in forming the clusters. With two clusters, the pattern is almost binary: South Africa versus the rest, which is similar to the pattern observed with PCA in Fig. 4. South Africa still dominates under \(K=3\), while Botswana and Ghana dominate one of the remaining clusters and Cameroon, Kenya and Rwanda dominate the other. Particularly important is the last column in Table 3, exhibiting the ratio of \(B_{ss}\) (the between-cluster sum of squares) to \(T_{ss}\) (the total sum of squares), that is, the proportion of the total variation accounted for by the clustering. The K-Means algorithm uses the minimum sum of squares to identify clusters. The algorithm iteratively updates cluster centres, allocating observations as it goes, and stops only when the maximum number of iterations is reached or the change of within-cluster sum of squares in two successive iterations is less than the set threshold. Thus, the values in the last column of Table 3 measure the total variance in the dataset that is due to that level of clustering. That is, by assigning the samples to the specified clusters rather than treating the data as a single group, we are able to show the reduction in the sum of squares that each level of clustering achieves.

Table 3 Country-by-country participation in the formation of clusters

The four cluster solutions presented in Table 3 are from the total of six solutions generated from the full dataset. The left-hand side panel in Fig. 5 exhibits the within-cluster variations per cluster for each of the six cluster categories. Notice the huge variation within the category \(K=2\) and the relatively high variation for \(K=3\) and \(K=4\) as compared to the remaining categories. This pattern is reflected by the six small panels to the right, corresponding to each of the six cluster categories. As noted above, the influence of industrial manufacturing factors is felt heavily.

The within cluster variations (LHS) and each of the six cluster categories (RHS)

To suppress the dominant influence of South Africa, we remove the three variables (NONCOMMDEATHS, MANUFGDP and MANUFPCTA) and run the K-Means clustering through the SMA Algorithm. Multiple samples of between 30 and 60 observations were randomly drawn from \(\mathbf{X} =\left[ x_{i,j}\right]\) fifty times, with the parameters of interest being the variation within and between clusters. The densities to the left of Figs. 6 and 7 exhibit the within-cluster variations for the clusters shown in the right-hand side panels. These were selected from 50 runs through the SMA algorithm. Both Figs. 6 and 7 provide insights into the naturally arising structures in the data in Table 1.

Within and between cluster variations for the clusters on the RHS based on samples of size 30

The six panels on the right-hand side of Fig. 6 exhibit within and between cluster variations based on the runs with samples of size 30. Each panel is a two-dimensional plot of the proportion of maternal mortality (MATMORTRATIO) versus the proportion of the employed population below the international poverty line (EBI), for 2, 3, 4, 5, 6 and 7 clusters. The plots indicate cluster overlaps as the number of clusters increases, making them increasingly less distinctive, which we can interpret as over-fitting. The plots in Fig.
7 are similar to those in Fig. 6, except that they are based on samples of size 60. They both provide insights into the naturally arising structures in the data in Table 1, and it can be seen that they exhibit consistent variations across samples as the sample size increases; that is, the higher the number of clusters, the more evident over-fitting becomes. Note that, via the SMA, samples of different sizes can be drawn and implemented with different numbers of centroids. While the final decision on the optimal number of clusters can be based on the set criteria for between-cluster variation, in practice it will also depend, inter alia, on the problem of interest. That is where interdisciplinarity comes in, as domain knowledge outside modelling plays a crucial role here. We can tell from Figs. 6 and 7 that while it is possible to capture key metrics on indicators, their triggers remain buried in the data. For example, the inclusion and omission of variables dominated by South Africa showed that socio-economic, cultural and geo-political variations make it impossible for an overarching strategy to be developed for the continent. Thus, attainment of agenda 2030 requires a unified understanding of the agenda at both low and high levels. Variations will typically arise from a wide range of causes, and it is imperative that SDG stakeholders, like the Sustainable Development Goals Center for Africa (SDGCA) [30] in Kigali, engage with individual Governments through relevant departments to monitor SDG dynamics in a spatio-temporal context. Initiatives geared towards accelerating attainment of agenda 2030 can be enhanced by adopting interdisciplinary approaches to providing relevant support to governments, civil society, businesses and academic institutions.

Summary of results

The analyses in this section sought to highlight the impact of data variation in addressing complex challenges in SDG monitoring. We considered soft and technical solutions, i.e., socio-economic and cultural variations and data interactions and dynamics. Initial EDA patterns isolated South Africa from the remaining five countries. The dominance of South Africa continued through the PCA and clustering analyses. We attempted to iron out the impact of data randomness on variations by deploying the SMA algorithm, taking repeated samples of sizes 20 through 75, two of which are presented in Figs. 6 and 7, exhibiting consistent variations across samples as the sample size increases. The indicators are not set in stone, and so a simple way to assess progress would be to think of how often they have been reviewed or whether there is a regular update forming posterior information. Posterior information forms the basis of the prior knowledge which decision makers need for any interventions. The decision to omit some of the manufacturing-related variables was based on prior knowledge we notionally generated from the same data. For example, not all the variables in Table 1 were used in PCA and data clustering. Some variables, like MOBCOVERAGE, were used externally to provide "prior knowledge" for segmenting the SDG data. For example, our SDG #9 data showed that the mobile coverage across the selected countries was exceptionally high, making South Africa hardly distinguishable from the rest. This raises the question as to what triggers such development. Talk to different sections of the population and you might get different answers.
Such disparate perceptions on the impact of mobile phones reflect the extent of data fragmentation among countries and even institutions within the same country. The socio-economic, cultural and geo-political variations make it impossible for an overarching strategy to be developed for the African continent, or indeed elsewhere. It is on those premises that we emphasise a unified understanding, across disciplines and sectors, of the 2030 agenda at both low and high levels. Given the magnitude of SDG indicators, the dataset used in this section was a drop in the ocean. The foregoing results are therefore not geared towards establishing unknown patterns among SDGs, but rather towards highlighting novel paths to attaining informative patterns. Our consilient approach was conceived in anticipation of infinitely many challenges that require data-driven solutions across the 17 SDGs and, particularly, our original ideas of the DSF [15, 16] that views each SDG as a source of Big Data. Success will come from sharing data, skills and resources and ensuring that open science becomes the norm. One of the most difficult tasks of this work was to collate the data attributes in Table 1 from the main database of SDG indicators, as the initial variable selection is problem-specific. The findings in this section should open new paths to interdisciplinary research for a unified understanding of the triggers of SDG indicators.

The paper proposed a robust machine learning approach to data segmentation, constituting what can be viewed as a consilient approach to expounding triggers of SDG indicators via interdisciplinary modelling. It examined a range of tools that have been developed to provide SDG visualisation [13, 14] which, while capturing key metrics on indicators, leave the triggers of those indicators buried in the data. Using selected SDG indicators, it fulfilled objective #1 by illustrating the impact of data randomness and variation through visual objects. The analyses exhibited potential knowledge gaps that may arise from including or excluding different data attributes. On objectives #2 and #3, it underlined interdisciplinarity in identifying actual and potential triggers, a major step towards attaining agenda 2030. Understanding the overall behaviour and development of SDGs requires taking a much broader perspective than just exploring individual SDGs. While interdisciplinarity may not have significantly featured in this work, the strong correlation among the SDGs and their span across disciplines and sectors imply that future applications of the proposed methods will adopt more interdisciplinary approaches. While the number of SDGs and indicators used in this paper may not represent highly voluminous data, the indicators and targets in the entire set of 17 SDGs do. It is apparent that the patterns in Figures 1 through 7 could have been fundamentally different if different sets of indicators, different SDGs or different countries had been used in the analyses. It is that level of intricacy that calls for interdisciplinary approaches in addressing SDGs and underlines the role of objective #2 in SDG modelling. All three objectives in the paper sought to promote interdisciplinary research. Objective #1 provided visualisation, while objectives #2 and #3, effectively, focused on a unified, interdisciplinary understanding of the visual images; together, they highlighted the importance of a clear definition of the problem to be tackled.
Through objectives #2 and #3, the paper calls for SDG monitoring teams to realise that attainment of agenda 2030 hinges on looking at all 17 SDGs as a Big Data challenge. The proposed methods are readily upscalable to higher data volumes. The paper's main output was not a pack of triggers, but a robust, unified approach driven by the three objectives. Data scientists may be content with the technical output of the adopted method, but it is combining domain knowledge and the power of modelling that yields reliable results. Agenda 2030 hinges heavily on this combination, and future research paths should focus on it.

As noted in the "Data sources" section, raw secondary data attributes on hundreds of SDG indicators in six African countries (Botswana, Cameroon, Ghana, Kenya, Rwanda and South Africa) were extracted from the United Nations Sustainable Development Indicators repository [2]. The data only relate to SDGs #1, 2, 3, 4 and 9, covering the period 2000 to 2018. The repository of raw SDG indicators [2] is open, hence freely available to the public. However, the data attributes used in this paper were obtained via a semi-automated selection and cleaning process by the authors. They were reformatted to fit in with the adopted modelling strategy; hence, the data are only available from the authors, who have retained both the raw and modified copies, should they be requested.

BDMSDG: Big Data Modelling of Sustainable Development Goals; DSF: Development Science Framework; EDA: Exploratory Data Analysis; PCA: Principal Component Analysis; RDBMS: Relational Database Management System; SDGCA: Sustainable Development Goals Centre for Africa; SDGI: Sustainable Development Goals Indicators; SMA: Sample-Measure-Assess; UN-SDG: United Nations Sustainable Development Goals.

SDG, Sustainable Development Goals. 2015; https://www.un.org/sustainabledevelopment/sustainable-development-goals/ SDGI, Sustainable Development Goals Indicators. 2017; https://unstats.un.org/sdgs/indicators/database/ Kharrazi A. Challenges and opportunities of urban big-data for sustainable development. Asia Pacific Tech Monitor. 2017;34:17–211. Kruse CS, Goswamy R, Raval Y, Marawi S. Challenges and opportunities of big data in health care: a systematic review. JMIR Med Inf. 2016;4:e38. Yan M, Haiping W, Lizhe W, Bormin H, Ranjan R, Zomaya A, Wei J. Remote sensing big data computing: challenges and opportunities. Future Gener Comput Syst. 2015;51:47–60. IUCN, In the spirit of nature, everything is connected. 2018; https://www.iucn.org/news/europe/201801/spirit-nature-everything-connected Mwitondi KS. Tracking the Potential, Development, and Impact of Information and Communication Technologies in Sub-Saharan Africa; International Council for Science (ICSU-ROA); 2018. Meusburger P. In Knowledge and the Economy; Meusburger P, Glückler J, el Meskioui M, Eds.; Springer Netherlands: Dordrecht, 2013; pp 15–42. Parr M, Musker R, Schaap B. GODAN's Impact 2014 to 2018 - Improving Agriculture, Food and Nutrition with Open Data; 2018. UN-Global-Pulse, Big Data for Development: Challenges and Opportunities. UN Global Pulse. 2012. UN-Global-Pulse, Big Data for Development and Humanitarian Action: Towards Responsible Governance. 2016. Bamberger M. Integrating Big Data Into the Monitoring and Evaluation of Development Programmes. 2016. Roser M, Ortiz-Ospina E, Ritchie H, Hasell J, Gavrilov D. Our World in Data: Research and interactive data visualizations to understand the world's largest problems; 2018. WBGroup, Atlas of Sustainable Development Goals From World Development Indicators. 2018. Mwitondi K, Munyakazi I, Gatsheni B.
An interdisciplinary data-driven framework for development science. DIRISA National Research Data Workshop, CSIR ICC, 19-21 June 2018, Pretoria, RSA; 2018. Mwitondi K, Munyakazi I, Gatsheni B. Amenability of the United Nations Sustainable Development Goals to Big Data Modelling. International Workshop on Data Science - Present and Future of Open Data and Open Science, 12-15 Nov 2018, Joint Support Centre for Data Science Research, Mishima Citizens Cultural Hall, Mishima, Shizuoka, Japan; 2018. Ishikawa K. Guide to quality control; Asian Productivity Organization; 1976. Primmer E, Furman E. Operationalising ecosystem service approaches for governance: do measuring, mapping and valuing integrate sector-specific knowledge systems? Ecosyst Serv. 2012;1:85–92. Mwitondi KS, Said RA. A data-based method for harmonising heterogeneous data modelling techniques across data mining applications. J Stat Appl Probab. 2013;2(3):293–305. Mwitondi KS, Moustafa RE, Hadi AS. A data-driven method for selecting optimal models based on graphical visualisation of differences in sequentially fitted ROC model parameters. Data Sci J. 2013;12:WDS247–WDS253. SDGTI, Sustainable Development Goals Targets & Indicators. 2020; https://unstats.un.org/sdgs/metadata/ Lloyd SP. Least squares quantization in PCM. Technical Report RR-5497, Bell Laboratories. 1957. MacQueen JB. Some methods for classification and analysis of multivariate observations. 1967;1:281–97. Chapmann J. Machine learning algorithms; CreateSpace Independent Publishing Platform, 2017. Kogan J. Introduction to clustering large and high-dimensional data. Cambridge: Cambridge University Press; 2007. Mwitondi KS, Zargari SA. An iterative multiple sampling method for intrusion detection. Inf Secur J. 2018;27:230–9. Bo L, Wang L, Jiao L. Feature scaling for Kernel Fisher discriminant analysis using leave-one-out cross validation. Neural Comput. 2006;18:961–78. Galkin F, Aliper A, Putin E, Kuznetsov I, Gladyshev VN, Zhavoronkov A. Human microbiome aging clocks based on deep learning and tandem of permutation feature importance and accumulated local effects. BioRxiv. 2018;1:507780. Codd EF. A relational model of data for large shared data banks. Commun ACM. 1970;13:377–87. SDGCA, Sustainable Development Goals. 2015; https://sdgcafrica.org/

This paper is part of on-going initiatives towards Big Data Modelling of Sustainable Development Goals (BDMSDG). We would like to thank many individuals and institutions who have discussed these initiatives with us at different stages of development. We particularly acknowledge the Data Intensive Research Initiative of South Africa (DIRISA), through the Council for Scientific and Industrial Research (CSIR) (https://www.csir.co.za/), who have invited us a couple of times to Pretoria to present our findings. We are also grateful to the Joint Support-Center for Data Science Research (DS) and the Polar Environment Data Science Center (PEDSC) of Japan (http://pedsc.rois.ac.jp/en/), the United Nations World Data Forum (UNWDF) (https://unstats.un.org/unsd/undataforum/index.html) and the University of Sussex's Sustainability Research Programme (https://www.sussex.ac.uk/ssrp/research/sdg-interactions). We also thank Sheffield Hallam University (https://www.shu.ac.uk/research/industry-innovation-research-institute) for meeting Springer's Gold Open Access publication costs.
This work has not been supported by any grant; rather, it is an outcome of the ordinary Research and Scholarly Activities (RSA) allocation to each of the three authors by their respective institutions.

College of Business, Technology and Engineering, Sheffield Hallam University, Sheffield, United Kingdom: Kassim S. Mwitondi. Ministry of Education, Kigali, Republic of Rwanda: Isaac Munyakazi. Department of Applied Information Systems, University of Johannesburg, Johannesburg, South Africa: Barnabas N. Gatsheni.

The individual contributions of the authors heavily overlapped, from previous joint work on SDGs. For this work, KS carried out most of the data cleaning and automated selection, estimated at approximately 40%. IM provided insights into designing the analyses layout based on his experiences with the education system in the Republic of Rwanda and particularly his connections at the SDG Centre for Africa in Kigali, Rwanda [30], estimated at about 30%, and BG carried out a background study on the impact of SDG initiatives within the Southern African Development Community (SADC) member states, in which he has previously published work, estimated at 30%. Correspondence to Kassim S. Mwitondi. All three authors declare that there are no competing interests in publishing this paper, be they financial or non-financial, and, as a co-authored paper, the costs of publishing will be shared among the three institutions.

Mwitondi, K.S., Munyakazi, I. & Gatsheni, B.N. A robust machine learning approach to SDG data segmentation. J Big Data 7, 97 (2020). https://doi.org/10.1186/s40537-020-00373-y

Keywords: consilience; data randomness; K-Means; Sample-Measure-Assess; unsupervised modelling.
Evidence of local and regional freshening of Northeast Greenland coastal waters

Mikael K. Sejr1,2, Colin A. Stedmon (ORCID: orcid.org/0000-0001-6642-9692)3, Jørgen Bendtsen (ORCID: orcid.org/0000-0003-1393-3072)4, Jakob Abermann5, Thomas Juul-Pedersen6, John Mortensen (ORCID: orcid.org/0000-0001-9863-5343)6 & Søren Rysgaard7,1,6

The supply of freshwater to fjord systems in Greenland is increasing as a result of climate change-induced acceleration in ice sheet melt. However, insight into the marine implications of the melt water is impaired by a lack of observations demonstrating the fate of freshwater along the Greenland coast and providing an evaluation basis for ocean models. Here we present 13 years of summer measurements along a 120 km transect in Young Sound, Northeast Greenland, and show that sub-surface coastal waters are decreasing in salinity at an average rate of 0.12 ± 0.05 per year. This is the first observational evidence of a significant freshening on a decadal scale of the waters surrounding the ice sheet, and it comes from a region where ice sheet melt has been less significant. It implies that ice sheet dynamics in Northeast Greenland could be of key importance, as freshwater is retained in southward-flowing coastal currents, thus reducing the density of water masses influencing major deep water formation areas in the Subarctic Atlantic Ocean. Ultimately, the observed freshening could have implications for the Atlantic meridional overturning circulation.

The Arctic Ocean has increased its freshwater content significantly over the past decade1 and is freshening at a rate of approximately 600 km3 per year2. A simultaneous freshening is expected in Greenland coastal waters, where current rates of ice loss from the Greenland Ice Sheet have more than doubled compared to rates from 1983–20033. The annual net loss of ice is estimated at 186 Gt (i.e. 186 km3), whereas the total ice loss during the seasonal melt in summer exceeds 1200 km3 annually4. Although much of the ice loss to date can be attributed to glaciers in West and Southeast Greenland, it is now evident that there is also accelerating ice loss in Northeast Greenland5. In addition to contributing to global sea level rise6, the increasing freshwater contribution from Greenland may impact local and global circulation patterns. Transport of freshwater in Greenlandic coastal currents and into the surface waters of the subarctic Atlantic could influence deep water convection and potentially the Atlantic meridional overturning circulation7,8. This represents an important potential climatic feedback mechanism not included in current IPCC models. The Greenland Ecosystem Monitoring programme has conducted systematic hydrographic measurements in Young Sound (NE Greenland, 74.24° N 20.17° W, http://www.g-e-m.dk) since 2003. The aim of this study was to analyse the dataset for evidence of changes in the freshwater budget. The hydrographic transect is sampled each year in early August and spans from the inner fjord to approximately 30 km off the coast (Fig. 1a). The fjord system is divided into an inner shallow basin (Tyroler Fjord, to the west) and an outer deep basin (Young Sound) separated from the shelf by a shallow (45 m depth) sill (Fig. 1b). There are no marine-terminating glaciers in the fjord, but there are several land-terminating glaciers within the catchment. Freshwater from snow and glacial melt and precipitation is supplied through several rivers running into the fjord along its length, supplying 0.9 to 1.4 km3 of freshwater annually9.
Coastal shelf waters in East Greenland are influenced by the East Greenland Current, which originates from the Arctic Ocean and follows the continental slope southwards. The current consists of Polar Surface Waters characterised by low salinities (S < 34.4) and sub-zero temperatures. Below the Polar Surface Waters, warm and saline re-circulating Atlantic Water is found, typically at a depth of 250 m. For fjords such as Young Sound with a shallow sill, the dense Atlantic Water is prevented from entering the fjord as bottom water (Fig. 1d). In summer, the local freshwater contribution to the fjord is largely retained at the surface, forming a shallow (5–10 m) surface lens with salinity below 25. The salinity and temperature of the surface waters are influenced by local atmospheric conditions, the presence of sea ice, melting snow and glacial meltwater discharge (Fig. 1b,d).

We separated the transect into two sections: one inside the shallow outer sill (termed fjord) and one outside (coastal water). To visualize the inter-annual change we averaged all profiles for fjord and coastal water for each year to produce a contour plot of changes in average salinity profiles from 2003 to 2015. In the fjord, salinity decreased throughout the water column, with a deepening of the 32 isohaline in the subsurface water and a deepening of the 33 isohaline near the bottom (Fig. 2a). In the coastal water (Fig. 2b), the most obvious change was in the subsurface water, where the 32 isohaline deepened after 2010 from approximately 25 to 50 m depth. Plots of average salinities in different depth strata show no significant changes in surface waters (Fig. 3a), whereas the subsurface (Fig. 3b) and deeper layers (Fig. 3c) in the fjord show a gradual freshening by more than one salinity unit over the 13-year period. Linear regressions indicate a statistically significant decrease in salinity in the fjord of approximately 0.1 salinity unit per year at both 30–50 m and 100–150 m. Even in the bottom waters of the fjord (250–300 m) a significant decrease of 0.022 ± 0.005 per year is apparent (Table 1). Water temperatures in the different depth strata did not show a significant linear trend, with the exception of the deepest part of the fjord, 250–300 m, where a significant warming was detected at a rate of 0.018 ± 0.001 °C yr−1. At 30–50 m in the coastal water, the significant linear decrease in salinity observed in the time series was 0.12 ± 0.05 per year. This is a significant freshening, of a magnitude similar to that found in the Beaufort Gyre23. To better assess the trend in freshening we calculated the integrated freshwater content (FWC) of the surface waters (0–50 m) in the fjord. We used the average salinity (2003 to 2015) at 50 m in Young Sound as the reference salinity. In the fjord, FWC for this layer increased from 0.9 ± 1.1 m (2003) to 3.7 ± 1.1 m in 2015 (Fig. 3d).

(a) Image of the study site in East Greenland with the hydrographical transect sampled from 2003 to 2015. (b) Locations of the sampling stations together with the distribution in salinity for a typical year (2011). (c) Seasonal variability in salinity at three depths from a mooring in the fjord (near the 70 km mark in Fig. 1b) from 2011 to 2012. (d) Typical temperatures along the transect in August. Figure 1b and d were created using Ocean Data View version 4.6. http://odv.awi.de. Figure 1a satellite image credit: NASA Goddard Space Flight Center.
(a) Time-depth isopleths of average salinity over the study period for the fjord and (b) the coastal water of the hydrographical transect. Changes in summer salinity of fjord and coastal waters at the study site in East Greenland for different depth strata: (a) 0–30 m; (b) 30–50 m and (c) 100–150 m. (d) Integrated estimate of freshwater content (FWC) in the top 50 m of the water column in the fjord. See text for definitions of FWC and FWC_S. The solid line represents the linear regression of FWC.

Table 1 Results of linear regressions of changes in salinity and temperature 2003–2015 along the hydrographical transect (split into two: fjord and coastal waters) in NE Greenland. Significant regressions are shown in bold.

There are two potential sources for the observed freshening in the fjord: the local terrestrial catchment or the coastal water outside the sill. Data on total discharge from the main river in the catchment (Zackenberg River) reveal no significant increasing linear trend during the study period (Table 2, n = 13, p = 0.30). On average, more than 80% of the annual discharge of the river has occurred by the date of the CTD survey, with no evidence that the discharge up until that date has increased over the study period (Table 2, n = 13, p = 0.26). Maximum snow depth on land has not increased (n = 13, p = 0.83) and neither has the number of positive degree days (n = 13, p = 0.50). This indicates that snow and glacial melt of the catchment area in general is not increasing the runoff to the fjord. Wind-driven vertical mixing can influence the vertical distribution of the freshwater. Young Sound typically becomes ice-free just 3–4 weeks prior to the CTD survey, and a strong halocline persists until autumn. It is therefore unlikely that inter-annual variation in vertical mixing before sampling is important for the observed decrease in salinity below 30 m in August. We estimated the annual wind stress by integrating the daily average wind speeds during the ice-free season (estimated from daily photos in the outer fjord) (Table 2). Neither this accumulated index of wind stress nor the duration of the ice-free season revealed an increase for the period. Brine production (associated with a winter polynya outside the fjord) is an additional process that may impact the seasonal and inter-annual variation in salinity along the studied transect24, but further studies are required before its importance for the inter-annual patterns can be assessed. To further explore the source of freshwater we calculated the seasonal freshwater content, FWC_S, based on the average salinity at 50 m depth in Young Sound for each individual year (rather than the 2003–2015 average). In August, the salinity at 65 m is approximately equal to early spring conditions7 (Fig. 1c), and since the runoff is predominantly found in the upper 30 m, and its residence time is relatively short7, the seasonal freshwater content becomes an estimate of the runoff from each year. Thus, the difference between FWC and FWC_S allows us to isolate a freshening driven by the fjord's catchment, as FWC_S only represents freshening occurring from the winter until the time of sampling, driven by the local runoff. As there is no significant change in FWC_S over the time series, this calculation indicates that the freshening of the incoming water is driving the observed trends in FWC (Fig. 3d). This supports the findings of no significant change in local river discharge, snow depth and degree days.
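The decadal trends reported in Table 1 and in the text (e.g., 0.12 ± 0.05 per year at 30–50 m in the coastal water) are ordinary least-squares regressions of layer-averaged summer salinity on year. A minimal sketch of such a fit is given below; the salinity values are synthetic stand-ins, not the Young Sound observations, and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import linregress

years = np.arange(2003, 2016)
# Synthetic layer-mean salinities (e.g. 30-50 m, coastal water); stand-ins only.
rng = np.random.default_rng(3)
salinity = 32.8 - 0.12 * (years - 2003) + rng.normal(0.0, 0.15, years.size)

fit = linregress(years, salinity)
print(f"trend = {fit.slope:.3f} +/- {fit.stderr:.3f} salinity units per year "
      f"(p = {fit.pvalue:.3f})")
```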
Although there is considerable inter-annual variability in runoff from the catchment there is no clear indication of a significant increase in runoff to the fjord. The observed increase in freshwater content (FWC) of the fjord can, therefore, be explained by the observed freshening of the water masses at sill depth outside the fjord (30–50 m). This is also the depth range with the largest rate of decrease in salinity (Table 1). Plotted in temperature-salinity space this can be clearly seen as a migration of source waters to lower salinities with time (Fig. 4). The question remains as to the source of the freshening in the coastal water. The East Greenland Current is characterised by a core of Polar water with its origins in the Arctic Halocline and Polar Surface waters10. The freshwater present in this water mass originates from a combination of runoff from land, sea ice melt, precipitation, meltwater from the Greenland Ice sheet and freshwater input from the Arctic Ocean through the Fram Strait. In 2005, an increased transport of freshwater was observed in the Fram Strait (compared to 1998) which was attributed to the release of river water temporarily stored on the Siberian shelf11. In 2011 to 2013 an increasing contribution from Pacific water to the freshwater content in the Denmark Strait was observed12. Both these studies sampled on the East Greenland shelf whereas this study is close to the coast line in what has been termed the Riverine Coastal Domain in the Arctic Ocean13 characterized by coastal trapped runoff from land. The observed freshening along our transect, therefore, likely represents a regional signal of increased ice mass loss from the Greenland Ice sheet north of our study area which is accelerating due to increasing air temperature5. Clearly the meltwater signal from the ice sheet is modified by other fresh water sources. For example, the 2007 peak in FWC and low salinities coincides with an exceptional amount of melting sea ice in outer Young Sound (monitored by daily images in Young Sound) and also coincided with higher than average sea ice cover along the east Greenland coast5. Table 2 Environmental data from the catchment area of Young Sound provided by the Greenland Ecosystem Monitoring Program. Temperature-Salinity plots of the subsurface (>30 m) coastal waters outside Young Sound in East Greenland. (a) The symbols are coloured by year and (b) by depth. PW: Polar water (T < 0, S < 34.4); RAW: Recirculating Atlantic Water (T > 0, S > 34.4). Superimposed on (b) are the average salinity and temperature values for coastal waters, at 30–50 m. (red dots, labelled with year). Meltwater from the Greenland Ice Sheet impacts the local marine ecosystem in numerous ways, providing organic carbon14 and nutrients15, influencing light availability16, productivity17 and pelagic18 and benthic19 ecosystem structure. Impacts of changing freshwater budgets in coastal waters is an additional manifestation of a warming climate, and in concert with changes in sea ice cover, ocean warming and acidification, freshening is likely an important driver for ecosystem change that remains poorly quantified in the Arctic. The salinity time series presented are unique for the region and the first to document a decreasing trend in salinity on decadal scale from the coastal ocean surrounding the ice sheet. 
The presence of a freshening signal in Northeast Greenland despite a modest increase in ice sheet melt (17%, compared to 48% for southern Greenland4) suggests that meltwater is efficiently retained near the coast, as has been shown in other Arctic regions13. Recent modelling studies have shown that meltwater from East Greenland may be efficiently transported to the Labrador Sea20,21,22. Combined with the decreasing salinity trend in our data, this suggests that a signal of ocean freshening is being transported downstream by a nearshore component of the East Greenland Current and will contribute to the freshwater content in the subarctic Atlantic.

Hydrography profiles were obtained using a Sea-Bird SBE19plus CTD. The instruments were calibrated by Sea-Bird every year before fieldwork, and data were averaged into 1 m intervals. For this analysis a database of about 300 CTD profiles has been accumulated and analysed. The salinity profiles in this analysis cover measurements during early August. The profiles were divided into two sectors: inside the fjord and outside in the coastal water masses. Only profiles with a water depth larger than 50 m were considered. Freshwater content (FWC) was calculated as:

$${\rm{FWC}}=50\,{\rm{m}}\times (1-{S}_{m}/{S}_{{\rm{ref}}})$$

where \(S_m\) is the mean salinity of the upper 50 m of a profile and \(S_{\rm ref}\) is a reference salinity characterizing water in the surface layer before spring. From observations and model simulations it was shown that salinity in Young Sound was relatively constant in the upper 50 m during winter and that variations below 30 m were relatively small during the year9. Therefore, the reference salinity was defined as the salinity at 50 m depth in the fjord averaged over the period 2003–2015 (Sref = 32.36). An additional measure of the annual change of freshwater content (FWC_S) was defined by applying the annually averaged value of Sref in Eq. 1 (Sref = 32.91, 32.88, 33.12, 32.70, 32.20, 32.39, 32.38, 32.25, 31.87, 32.00, 31.73, 32.19, 32.05 in 2003–2015). Wind speed was collected by the ClimateBasis component of the Greenland Ecosystem Monitoring programme (GEM). Average wind speeds (10-min intervals) collected at a height of 7.5 m and available in the GEM database were used. The dates of sea ice melt and sea ice formation were also extracted from the GEM database, where an automatic camera system provides information on daily local ice conditions in the outer part of Young Sound. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
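Equation 1 can be evaluated directly from a 1 m binned CTD profile. The sketch below is a minimal illustration of that calculation; the profile values are synthetic stand-ins, and only the climatological reference salinity (32.36) is taken from the text.

```python
import numpy as np

def freshwater_content(salinity_profile, s_ref=32.36, layer_m=50):
    """FWC = layer_m x (1 - S_m / S_ref), where S_m is the mean salinity of
    the upper `layer_m` metres of a profile binned at 1 m intervals (Eq. 1)."""
    s_m = np.mean(salinity_profile[:layer_m])
    return layer_m * (1.0 - s_m / s_ref)

# Synthetic August profile: a fresh surface lens over saltier sub-surface water.
depth = np.arange(1, 51)                          # 1 m bins, 1-50 m
salinity = np.where(depth <= 8, 24.0 + 0.8 * depth, 31.5 + 0.02 * depth)

print(f"FWC = {freshwater_content(salinity):.2f} m")
```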
McPhee, M. G., Proshutinsky, A., Morison, J. H., Steele, M. & Alkire, M. B. Rapid change in freshwater content of the Arctic Ocean. Geophys. Res. Lett. 36, L10602 (2009). Rabe, B. et al. Arctic Ocean liquid freshwater storage trend 1992−2012. Geophys. Res. Lett. 15, 8213 (2013). Kjeldsen, K. K. et al. Spatial and temporal distribution of mass loss from the Greenland Ice Sheet since AD 1900. Nature 528, 396–400 (2015). Bamber, J., Van Den Broeke, M., Ettema, J., Lenaerts, J. & Rignot, E. Recent large increases in freshwater fluxes from Greenland into the North Atlantic. Geophys. Res. Lett. 39, 8–11 (2012). Khan, S. A. et al. Sustained mass loss of the northeast Greenland ice sheet triggered by regional warming. Nat. Clim. Chang. 4, 292–299 (2014). Nick, F. M. et al. Future sea-level rise from Greenland's main outlet glaciers in a warming climate. Nature 497, 235–8 (2013). Rahmstorf, S. et al. Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation. Nat. Clim. Chang. 1–6, doi:10.1038/nclimate2554 (2015). Böning, C. W., Behrens, E., Biastoch, A., Getzlaff, K. & Bamber, J. L. Emerging impact of Greenland meltwater on deepwater formation in the North Atlantic Ocean. Nat. Geosci. 9, 523–528 (2016). Bendtsen, J., Mortensen, J. & Rysgaard, S. Seasonal surface layer dynamics and sensitivity to runoff in a high Arctic fjord (Young Sound/Tyrolerfjord, 74°N). J. Geophys. Res. C Ocean. 119, 6461–6478 (2014). Håvik, L. et al. Evolution of the East Greenland Current from Fram Strait to Denmark Strait: Synoptic measurements from summer 2012. J. Geophys. Res. Ocean. 122, 1974–1994 (2017). Rabe, B., Schauer, U. & Mackensen, A. Freshwater components and transports in the Fram Strait - recent observations and changes since the late 1990s. Ocean Sci. 5, 219–233 (2009). De Steur, L., Pickart, R. S., Torres, D. J. & Valdimarsson, H. Recent changes in the freshwater composition east of Greenland. Geophys. Res. Lett. 42, 2326–2332 (2015). Carmack, E., Winsor, P. & Williams, W. The contiguous panarctic Riverine Coastal Domain: A unifying concept. Prog. Oceanogr. 139, 13–23 (2015). Lawson, E. C. et al. Greenland Ice Sheet exports labile organic carbon to the Arctic oceans. Biogeosciences 11, 4015–4028 (2014). Meire, L. et al. High export of dissolved silica from the Greenland Ice Sheet. Geophys. Res. Lett. 43, 9173–9182 (2016). Murray, C. et al. The influence of glacial melt water on bio-optical properties in two contrasting Greenland fjords. Estuar. Coast. Shelf Sci. doi:10.1016/j.ecss.2015.05.041 (2015). Meire, L. et al. Marine-terminating glaciers sustain high productivity in Greenland fjords. Glob. Change Biol. 00, 1–14, doi.org/10.1111/gcb.13801 (2017). Middelboe, A. B., Sejr, M. K., Arendt, K. E. & Møller, E. F. Impact of glacial meltwater on spatiotemporal distribution of copepods and their grazing impact in Young Sound NE, Greenland: Impact of meltwater on copepod carbon cycling. Limnol. Oceanogr. doi:10.1002/lno.10633 (2017). Sejr, M. K., Włodarska-Kowalczuk, M., Legeżyńska, J. & Blicher, M. E. Macrobenthic species composition and diversity in the Godthaabsfjord system, SW Greenland. Polar Biol. 33, 421–431 (2009). Luo, H. et al. Oceanic transport of surface meltwater from the southern Greenland ice sheet. Nat. Geosci. 9, 528–532 (2016). Gillard, L. C., Hu, X., Myers, P. G. & Bamber, J. L. Meltwater pathways from marine terminating glaciers of the Greenland ice sheet. Geophys. Res. Lett. doi:10.1002/2016GL070969 (2016). Dukhovskoy, D. S. et al. Greenland freshwater pathways in the sub-Arctic Seas from model experiments with passive tracers. J. Geophys. Res. Ocean. 121, 877–907 (2016). Li, W. K. W., McLaughlin, F. A., Lovejoy, C. & Carmack, E. C. Smallest Algae Thrive As the Arctic Ocean Freshens. Science 326, 539 (2009). Dmitrenko, I. A. et al. Polynya impacts on water properties in a Northeast Greenland fjord. Estuar. Coast. Shelf Sci. 153, 10–17 (2015).

We would like to acknowledge Egon Frandsen for assistance during field work. MKS received financial support from DANCEA and Carlsberg. SR acknowledges financial support from the Canada Excellence Research Chair program. Data on local runoff were collected as part of the GEM programme and provided by the Department of Bioscience, Aarhus University, Denmark.
Arctic Research Centre, Ny Munkegade, Aarhus University, 8000, Aarhus C, Denmark: Mikael K. Sejr & Søren Rysgaard. Department of Bioscience, Vejlsøvej 25, Aarhus University, 8600, Silkeborg, Denmark: Mikael K. Sejr. National Institute of Aquatic Resources, Technical University of Denmark, Building 202, Kemitorvet, 2800, Kgs. Lyngby, Denmark: Colin A. Stedmon. ClimateLab, Fruebjergvej 3, box 98, 2100, Copenhagen O, Denmark: Jørgen Bendtsen. Asiaq, PO Box 1003, Qatserisut 8, 3900, Nuuk, Greenland: Jakob Abermann. Greenland Climate Research Centre, Greenland Institute of Natural Resources, Kivioq 2, 3900, Nuuk, Greenland: Thomas Juul-Pedersen, John Mortensen & Søren Rysgaard. Centre for Earth Observation Science, 584 Wallace Bldg, University of Manitoba, Winnipeg, MB R3T 2N2, Canada: Søren Rysgaard.

M.K.S. and C.A.S. wrote the manuscript with contributions from all co-authors. M.K.S., J.A., T.J.P. and S.R. conducted the field work. Analysis was done by M.K.S., J.B., C.A.S., J.M. and S.R., and figures were prepared by M.K.S., C.A.S. and J.B. Correspondence to Mikael K. Sejr.

Sejr, M.K., Stedmon, C.A., Bendtsen, J. et al. Evidence of local and regional freshening of Northeast Greenland coastal waters. Sci Rep 7, 13183 (2017). https://doi.org/10.1038/s41598-017-10610-9
Op amp applications

This article illustrates some typical operational amplifier applications. A non-ideal operational amplifier's equivalent circuit has a finite input impedance, a non-zero output impedance, and a finite gain. Some of the more common applications are: as a voltage follower, selective inversion circuit, a current-to-voltage converter, active rectifier, integrator, a whole wide variety of filters, and a voltage comparator. The first example is the differential amplifier, from which many of the other applications can be derived, including the inverting, non-inverting, and summing amplifier, the voltage follower, integrator, differentiator, and gyrator. Some of the types of op-amp include a differential amplifier, which is a circuit that amplifies the difference between two signals. An op-amp can be operated with both AC and DC signals; op-amps are extremely versatile and are used in a wide variety of electronic circuits. A circuit is said to be linear if there exists a linear relationship between its input and output; similarly, a circuit is said to be non-linear if there exists a non-linear relationship between its input and output. Op-amps can be used in both linear and non-linear applications.

According to the virtual short concept, the voltage at the inverting input terminal of an op-amp is the same as the voltage at its non-inverting input terminal. Physically, there is no short between those two terminals, but virtually they are in short with each other. Many commercial op-amp offerings provide a method for tuning the operational amplifier to balance the inputs (e.g., "offset null" or "balance" pins that can interact with an external voltage source attached to a potentiometer). Alternatively, a tunable external voltage can be added to one of the inputs in order to balance out the offset effect. In cases where a design calls for one input to be short-circuited to ground, that short circuit can be replaced with a variable resistance that can be tuned to mitigate the offset problem. Operational amplifiers using MOSFET-based input stages have input leakage currents that will be, in many designs, negligible. These currents flow through the resistances connected to the inputs and produce small voltage drops across those resistances.

Supply-related effects can be mitigated with appropriate use of bypass capacitors connected across each power supply pin and ground. When bursts of current are required by a component, the component can bypass the power supply by receiving the current directly from the nearby capacitor, which is then slowly recharged by the power supply. An external push-pull amplifier can also be controlled by the current into and out of the operational amplifier.
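The virtual short idea above is what makes these circuits easy to analyse: it reduces each configuration to a simple node equation. The snippet below is an illustrative symbolic check of that reasoning (it is not part of the original article); sympy is assumed to be available and the component names are generic.

```python
import sympy as sp

Vi, Vo, R1, Rf = sp.symbols("V_i V_o R_1 R_f", real=True)

# Inverting configuration: the virtual ground pins the inverting node at 0 V,
# so the current arriving through R1 must leave through Rf.
inverting_node = sp.Eq((0 - Vi) / R1 + (0 - Vo) / Rf, 0)
print(sp.solve(inverting_node, Vo)[0])                   # -> -R_f*V_i/R_1

# Non-inverting configuration: the virtual short forces the feedback divider
# tap, Vo*R1/(R1 + Rf), to equal the input V_i.
noninverting_node = sp.Eq(Vo * R1 / (R1 + Rf), Vi)
print(sp.simplify(sp.solve(noninverting_node, Vo)[0]))   # -> V_i*(R_1 + R_f)/R_1
```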
When bursts of current are required by a component, the component can bypass the power supply by receiving the current directly from the nearby capacitor (which is then slowly recharged by the power supply). In this active version, the problem is solved by connecting the diode in the negative feedback loop. Resistors used in practical solid-state op-amp circuits are typically in the kΩ range. A summing amplifier sums several (weighted) voltages. An instrumentation amplifier combines very high input impedance, high common-mode rejection, low DC offset, and other properties used in making very accurate, low-noise measurements. The manufacturer data sheet for the operational amplifier may provide guidance for the selection of components in external compensation networks. An operational amplifier can, if necessary, be forced to act as a comparator. Note that for an op-amp, the voltage at the inverting input terminal is equal to the voltage at its non-inverting input terminal. The op-amp compares the output voltage across the load with the input voltage and increases its own output voltage by the value of VF. Note that the gain of the inverting amplifier has a negative sign. A Wien bridge oscillator produces a very low distortion sine wave. The high input impedance and gain of an op-amp allow straightforward calculation of element values. In this article, we will see the different op-amp based differentiator circuits, their working and their applications. The following are the basic applications of op-amps. Hence, the voltage at the inverting input terminal of the op-amp is equal to $V_{0}$. In a practical application one encounters a significant difficulty. The circuit diagram of an inverting amplifier is shown in the following figure. These currents flow through the resistances connected to the inputs and produce small voltage drops across those resistances. The circuit diagram of a non-inverting amplifier is shown in the following figure. Basically, it performs the mathematical operation of integration. Or, expressed as a function of the common-mode input Vcom and difference input Vdif: in order for this circuit to produce a signal proportional to the voltage difference of the input terminals, the coefficient of the Vcom term (the common-mode gain) must be zero. With this constraint [nb 1] in place, the common-mode rejection ratio of this circuit is infinitely large, and the output depends only on the difference input Vdif. A common application is the control of motors or servos; however, it is usually better to use a dedicated comparator for this purpose, as its output has a higher slew rate and can reach either power supply rail. A non-inverting amplifier is a special case of the differential amplifier in which that circuit's inverting input V1 is grounded, and non-inverting input V2 is identified with Vin above, with R1 ≫ R2. The nodal equation at this terminal's node is as shown below: $$\frac{0-V_i}{R_1}+ \frac{0-V_0}{R_f}=0$$ $$\Rightarrow V_{0}=\left(\frac{-R_f}{R_1}\right)V_{i}$$ Due to the strong (i.e., unity gain) feedback and certain non-ideal characteristics of real operational amplifiers, this feedback system is prone to have poor stability margins. So, the voltage at the non-inverting input terminal of the op-amp will be $V_{i}$. You can operate an op-amp with both AC and DC signals. Op-amps are extremely versatile and are used in a wide variety of electronic circuits. Op-amps can be used in both linear and non-linear applications.
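To make the inverting-amplifier relation above concrete, here is a minimal Python sketch of the ideal-case gain $V_0 = -\frac{R_f}{R_1}V_i$; the resistor values are made-up illustrations, not taken from any particular design.

# Ideal inverting op-amp (virtual-ground assumption): Vout = -(Rf/R1) * Vin
def inverting_gain(r1, rf):
    return -rf / r1

for r1, rf in [(1e3, 10e3), (2.2e3, 22e3), (10e3, 4.7e3)]:
    gain = inverting_gain(r1, rf)
    print(f"R1={r1:.0f} ohm, Rf={rf:.0f} ohm -> gain {gain:.2f}, 0.1 V in -> {gain * 0.1:.3f} V out")

Note that the input impedance seen by the source in this configuration is simply R1, which is one reason practical designs keep the resistors in the kΩ range mentioned above.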
This chapter discusses the characteristics and types of op-amps. A voltage follower is an electronic circuit which produces an output that follows the input voltage. Basics of Integrated Circuits Applications. An operational amplifier (often op amp or op-amp) is a DC-coupled high-gain electronic voltage amplifier with a differential input and, usually, a single-ended output. In cases where a design calls for one input to be short-circuited to ground, that short circuit can be replaced with a variable resistance that can be tuned to mitigate the offset problem. A non-inverting amplifier takes the input through its non-inverting terminal, and produces its amplified version as the output. Operational amplifiers using MOSFET-based input stages have input leakage currents that will be, in many designs, negligible. You can put together basic op amp circuits to build mathematical models that predict complex, real-world behavior. Physically, there is no short between those two terminals, but virtually they behave as if shorted to each other. Alternatively, a tunable external voltage can be added to one of the inputs in order to balance out the offset effect. This amplifier not only amplifies the input but also inverts it (changes its sign). Similarly, a circuit is said to be non-linear if there exists a non-linear relationship between its input and output. This circuit is of limited use in applications relying on the back EMF property of an inductor, as this effect will be limited in a gyrator circuit to the voltage supplies of the op-amp. The gyrator simulates an inductor (i.e., provides inductance without the use of a possibly costly inductor). In the above circuit, the input voltage $V_{i}$ is directly applied to the non-inverting input terminal of the op-amp. By using the voltage division principle, we can calculate the voltage at the inverting input terminal of the op-amp as shown below: $$V_{1} = V_{0}\left(\frac{R_1}{R_1+R_f}\right)$$ The capacitor used in this circuit is smaller than the inductor it simulates and its capacitance is less subject to changes in value due to environmental changes. This is the same as saying that the output voltage changes over time t0 < t < t1 by an amount proportional to the time integral of the input voltage. This circuit can be viewed as a low-pass electronic filter, one with a single pole at DC (i.e., where the angular frequency ω = 0) and with gain. Here, the output is directly connected to the inverting input terminal of the op-amp. Fig.: circuit symbol for a general-purpose op-amp; the figure shows the symbol of an op-amp and the power supply connections needed to make it work. The above-mentioned general characteristics of op amps make them ideal for various buffering purposes as well as some other linear and non-linear applications. In this case, though, the circuit will be susceptible to input bias current drift because of the mismatch between the impedances driving the V+ and V− op-amp inputs. Additionally, the output impedance of the op amp is known to be low, perhaps on the order of a few tens of ohms or less. Thus, the operational amplifier may itself operate within its factory specified bounds while still allowing the negative feedback path to include a large output signal well outside of those bounds.[1]
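As a rough numerical check of the time-integral relation described above, the following sketch steps the ideal inverting integrator $V_{out}(t) = -\frac{1}{RC}\int V_{in}\,dt$ forward in time; the R, C, time-step and input values are arbitrary assumptions chosen only for illustration.

# Ideal inverting integrator: dVout/dt = -Vin / (R*C)
R, C = 10e3, 1e-6            # assumed values: 10 kOhm and 1 uF, so R*C = 10 ms
dt, vout = 1e-4, 0.0         # 0.1 ms time step, capacitor initially discharged
for _ in range(1000):        # simulate 100 ms of a constant 0.5 V input
    vin = 0.5
    vout += -vin / (R * C) * dt
print(round(vout, 3))        # about -5.0 V: the output ramps down linearly

A constant input therefore produces a ramp at the output, which is also why, as noted earlier, the output will eventually drift out of range unless the capacitor is periodically discharged.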
While reviewing Texas Instruments application notes, including those from Burr-Brown, I uncovered a couple of treasures: this handbook on op amp applications and one on active RC networks. The simplified circuit above is like the differential amplifier in the limit of R2 and Rg very small. Introduction: what is an op-amp, the mathematics of op-amps, characteristics of op-amps, the ideal op-amp, types of op-amps, applications of op-amps, and a description of op-amp applications. Op-amps can be used in both linear and non-linear applications. This implementation does not consider temperature stability and other non-ideal effects. An operational amplifier, also called an op-amp, is an integrated circuit which can be used to perform various linear, non-linear, and mathematical operations. Although power supplies are not indicated in the (simplified) operational amplifier designs below, they are nonetheless present and can be critical in operational amplifier circuit design. With these requirements satisfied, the op-amp is considered ideal, and one can use the method of virtual ground to quickly and intuitively grasp the 'behavior' of any of the op-amp circuits below. A circuit is said to be linear if there exists a linear relationship between its input and the output. According to the virtual short concept, the voltage at the inverting input terminal of an op-amp will be zero volts. For example, an operational amplifier may not be fit for a particular high-gain application because its output would be required to generate signals outside of the safe range generated by the amplifier. As the negative input of the op-amp acts as a virtual ground, the input impedance of this circuit is equal to Rin. Operational amplifiers are popular building blocks in electronic circuits and they find applications in … Op Amp Applications Handbook, edited by Walt Jung, published by Newnes/Elsevier, 2005, ISBN 0-7506-7844-5 (also published as Op Amp Applications, Analog Devices, 2002, ISBN 0-916550-26-5). The op amp circuit is a powerful tool in modern circuit applications. See Comparator applications for further information. Applications where this circuit may be superior to a physical inductor are simulating a variable inductance or simulating a very large inductance. It is brimming with application circuits, handy design tips, historical perspectives, and in-depth looks at the latest techniques to simplify designs and improve their … Fig. 1: An input signal Vin is applied through input resistor Ri to the minus input (inverting input), and the output is fed back to the same inverting input through feedback resistor Rf. It indicates that there exists a 180° phase difference between the input and the output. Some of the operational amplifiers can … However, the frequencies at which active filters can be implemented are limited; when the behavior of the amplifiers departs significantly from the ideal behavior assumed in elementary design of the filters, filter performance is degraded. Note that the gain of the non-inverting amplifier has a positive sign. The ideal op amp equations are developed … These old publications, from 1963 and 1966, respectively, are some of the finest works on op amp theory that I have ever seen. In other words, the op-amp voltage comparator compares the magnitudes of two voltage inputs and determines which is the largest of the two. Here a number of resistors are connected to the input node of the inverting op-amp, with each resistor returned to a different source.
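Under the same virtual-ground reasoning, each source in the summing configuration just introduced contributes a current Vk/Rk into the inverting node, and all of that current must flow through Rf. A small illustrative Python sketch follows; the component values are hypothetical.

# Inverting summing amplifier: Vout = -Rf * (V1/R1 + V2/R2 + ...)
def summing_output(rf, inputs):
    """inputs: list of (voltage, resistor) pairs feeding the inverting node."""
    return -rf * sum(v / r for v, r in inputs)

print(summing_output(10e3, [(0.2, 10e3), (0.3, 10e3)]))  # -0.5 V: plain inverted sum
print(summing_output(10e3, [(0.2, 10e3), (0.3, 5e3)]))   # -0.8 V: weighted sum

Choosing equal input resistors gives a plain (inverted) sum, while unequal resistors weight each source differently.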
When Vin ascends "above ground", the output Vout rises proportionately with the lever. In the circuit shown above, the non-inverting input terminal is connected to ground. We recognize an op-amp as a mass-produced component found in countless electronics. The inverting amplifier can be applied for unity gain if Rf = Ri (where Rf is the feedback resistor) … The voltage drop VF across the forward-biased diode in the circuit of a passive rectifier is undesired. [3][4] In the case of the ideal op-amp, with AOL infinite and Zdif infinite, the input impedance is also infinite. An op-amp has countless applications and forms the basic building block of linear and non-linear analogue systems. Commercial op amps first entered the market as integrated circuits in the mid-1960s, and by the early 1970s, they dominated the active device market in analog […] In the op amp integrator circuit the capacitor is … September 1, 2020 by Electricalvoice: an op-amp integrator is an electronic circuit that produces an output that is proportional to the integration of the applied input. That value is the parallel resistance of Ri and Rf, or Ri || Rf using the shorthand notation. The relationship between input signal and output signal is now … The circuit diagram of a voltage follower is shown in the following figure. As a consequence, when a component requires large injections of current (e.g., a digital component that is frequently switching from one state to another), nearby components can experience sagging at their connection to the power supply. Therefore, the gain of the inverting amplifier is equal to $-\frac{R_f}{R_1}$. Inverting summing amplifier. A mechanical analogy is a class-2 lever, with one terminal of R1 as the fulcrum, at ground potential. The input and output impedance are affected by the feedback loop in the same way as the non-inverting amplifier, with B=1.[3][4] An op-amp differentiator is an electronic circuit that produces an output that is proportional to the differentiation of the applied input. The output is fed back to the input of the op-amp through an external resistor, called the feedback resistor (Rf). The voltage follower is a simple circuit that requires only an operational amplifier; it functions as an effective buffer because it has high input impedance and low output impedance. The feedback connection provides a means to accurately control the gain of the op-amp, depending on the application. As the name suggests, this amplifier just amplifies the input, without inverting or changing the sign of the output. The inverting amplifier is an important circuit configuration using op-amps and it uses a negative feedback connection. The Analog Engineer's Circuit Cookbook: Op Amps provides operational amplifier (op amp) sub-circuit ideas that can be quickly adapted to meet your specific system needs, where $I_S$ is the saturation current; when the voltage is greater than zero, it can be approximated by: … Topics covered include operational amplifier parameter requirements, using power supply currents in the signal path, the differential amplifier (difference amplifier), and the voltage follower (unity buffer amplifier). If you think of the left-hand side of the relation as the closed-loop gain of the inverting input, and the right-hand side as the gain of the non-inverting input, then matching these two quantities provides an output insensitive to the common-mode voltage of the input. The heuristic rule is to ensure that the impedance "looking out" of each input terminal is identical. Therefore, we could say that the comparator is a modified version of the op-amp which is specially designed to give a digital output. Here, the feedback resistor Rf provides a discharge path for capacitor Cf, while the series resistor at the non-inverting input Rn, when of the correct value, alleviates input bias current and common-mode problems. The smallest difference between the input voltages will be amplified enormously, causing the output to swing to nearly the supply voltage. An operational amplifier (op-amp) is an integrated circuit that uses external voltage to amplify the input through a very high gain. The voltage follower is used as a buffer amplifier to eliminate loading effects (e.g., connecting a device with a high source impedance to a device with a low input impedance). $$V_{0}\left(\frac{R_1}{R_1+R_f}\right)=V_{i}$$ $$\Rightarrow \frac{V_0}{V_i}=\frac{R_1+R_f}{R_1}$$ Referring to the circuit immediately above: now, the ratio of the output voltage $V_{0}$ to the input voltage $V_{i}$, i.e. the voltage gain of the non-inverting amplifier, is equal to $1+\frac{R_f}{R_1}$. Analog Engineer's Circuit Cookbook: Op Amps. Operational amplifiers can be used in the construction of active filters, providing high-pass, low-pass, band-pass, reject and delay functions. This chapter discusses these basic applications in detail. The integrator is mostly used in analog computers, analog-to-digital converters and wave-shaping circuits. A real op-amp has a number of non-ideal features as shown in the diagram, but here a simplified schematic notation is used; many details such as device selection and power supply connections are not shown. Resistors much greater than 1 MΩ cause excessive thermal noise and make the circuit operation susceptible to significant errors due to bias or leakage currents. Thus, the gain of a voltage follower is equal to one, since both the output voltage $V_{0}$ and the input voltage $V_{i}$ of the voltage follower are the same. In this case, the ratio between the input voltage and the input current (thus the input resistance) is given by: … In general, the components need not be resistors; they can be any component that can be described with an impedance. That means zero volts is applied at the non-inverting input terminal of the op-amp. Operational amplifiers are optimised for use with negative feedback, and this article discusses only negative-feedback applications. Vin is at a length Rin from the fulcrum; Vout is at a length Rf. This circuit is used to toggle the output pin status of a flip-flop IC, using … It integrates (and inverts) the input signal Vin(t) over a time interval t, t0 < t < t1, yielding an output voltage at time t = t1 of … In particular, as a root locus analysis would show, increasing feedback gain will drive a closed-loop pole toward marginal stability at the DC zero introduced by the differentiator. It indicates that there is no phase difference between the input and the output. Appropriate design of the feedback network can alleviate problems associated with input bias currents and common-mode gain, as explained below. As a result, the voltage drop VF is compensated and the circuit behaves very nearly as an ideal (super) diode with VF = 0 V. The circuit has speed limitations at high frequency because of the slow negative feedback and due to the low slew rate of many non-ideal op-amps.
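A short sketch of the non-inverting gain $1+\frac{R_f}{R_1}$ derived above, including the voltage-follower limit discussed below (Rf taken to zero, or R1 taken to infinity); the resistor values are arbitrary illustrations.

# Ideal non-inverting op-amp: Vout = (1 + Rf/R1) * Vin
def noninverting_gain(r1, rf):
    return 1 + rf / r1

print(noninverting_gain(1e3, 9e3))            # 10.0
print(noninverting_gain(1e3, 0.0))            # 1.0 -> voltage follower (Rf = 0)
print(noninverting_gain(float("inf"), 10e3))  # 1.0 -> voltage follower (R1 infinite)

Consistent with the text, the gain of this configuration can never drop below 1.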
Practical operational amplifiers draw a small current from each of their inputs due to bias requirements (in the case of bipolar junction transistor-based inputs) or leakage (in the case of MOSFET-based inputs). By adding resistors in parallel on the inverting input pin of the inverting … For comparison, the old-fashioned inverting single-ended op-amps from the early 1940s could realize only parallel negative feedback by connecting additional resistor networks (an op-amp inverting amplifier is the most popular example). Input impedance (Z): input impedance is defined as the input voltage divided by the input current. Power supply inputs are often noisy in large designs because the power supply is used by nearly every component in the design, and inductance effects prevent current from being instantaneously delivered to every component at once. Alternatively, another operational amplifier can be chosen that has more appropriate internal compensation. The circuit shown computes the difference of two voltages, multiplied by some gain factor. The op-amp, or operational amplifier, is the backbone of analog electronics; among many applications, such as the summing amplifier, the differential amplifier and the instrumentation amplifier, the op-amp can also be used as an integrator, which is a very useful circuit in analog applications. The operational amplifier must have large open-loop signal gain (a voltage gain of 200,000 is obtained in early integrated circuit exemplars) and have an input impedance that is large with respect to the values present in the feedback network. The closed-loop gain is Rf / Rin. If we consider the value of the feedback resistor $R_{f}$ as zero ohms and (or) the value of the resistor $R_{1}$ as infinite ohms, then a non-inverting amplifier becomes a voltage follower. Op-amps can often be used as voltage comparators if a diode or transistor is added to the amplifier's output, but a real comparator is designed to have a faster switching time compared to the multipurpose op-amps. An inverting amplifier takes the input through its inverting terminal through a resistor $R_{1}$, and produces its amplified version as the output. What is an op amp? An operational amplifier (op-amp) is an integrated circuit that uses external voltage to amplify the input through a very high gain. A real op-amp has a number of non-ideal features as shown in the diagram, but here a simplified schematic notation is used; many details such as device selection and power supply connections are not shown. Basically, it performs the mathematical operation of differentiation. The transfer function of the inverting differentiator has a single zero at the origin (i.e., where the angular frequency ω = 0). Similar equations have been developed in other books, but the presentation here emphasizes material required for speedy op amp design.
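For the inverting differentiator mentioned above ($V_{out} = -RC\,\frac{dV_{in}}{dt}$), a minimal discrete-time sketch with assumed component values shows the output tracking the slope of a ramp input.

# Ideal inverting differentiator: Vout = -R*C * dVin/dt
R, C, dt = 10e3, 1e-6, 1e-4          # assumed: R*C = 10 ms, 0.1 ms time step
previous = 0.0
for step in range(1, 6):
    vin = 100.0 * step * dt           # ramp rising at 100 V/s
    vout = -R * C * (vin - previous) / dt
    previous = vin
print(round(vout, 3))                 # -1.0 V, i.e. -(R*C) * (100 V/s)

This slope-sensing behaviour is also why the real circuit amplifies high-frequency noise and is prone to the stability issues noted above.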
In this article, we will see the different op-amp based integrator circuits, their working and their applications. So, the voltage at the non-inverting input terminal of the op-amp is equal to $V_{i}$. This may well be the ultimate op amp book. To intuitively see this gain equation, use the virtual ground technique to calculate the current in resistor R1, then recall that this same current must be passing through R2. Unlike the inverting amplifier, a non-inverting amplifier cannot have a gain of less than 1.
CommonCrawl
What are some examples of "non-logical theorems" proven by Logic? I am still an undergraduate student, and so perhaps I just haven't seen enough of the mathematical world. Question: What are some examples of mathematical logic solving open problems outside of mathematical logic? Note that the Ax-Grothendieck Theorem would have been a perfect example of this (namely, if $P$ is a polynomial function from $\mathbb{C}^n$ to $\mathbb{C}^n$ and $P$ is injective, then $P$ is bijective). However, even though there is a beautiful logical proof of this result, it was first proven without specifically using mathematical logic. I'm curious as to whether there are any results where "the logicians got there first". Edit 1 (Bonus): I am quite curious if one can post an example from Reverse Mathematics. Edit 2: This post reminded me that the solution to Whitehead's Problem came from logic (a problem in group theory). According to the Wikipedia article, the proof by Shelah was 'completely unexpected'. It utilizes the fact that ZFC+(V=L) implies every Whitehead group is free, while ZFC+$\neg$CH+MA implies there exists a Whitehead group which is not free. Since both of these axiom systems are consistent (relative to ZFC), Whitehead's problem is undecidable in ZFC. Edit 3: A year later, we have some more examples: 1) Hrushovski's proof of the Mordell-Lang conjecture for function fields in all characteristics. 2) The André-Oort conjecture by Pila and Tsimerman. 3) Various results in o-minimality, including work by Wilkie and van den Dries (as well as others). 4) Zilber's pseudo-exponential field as work towards Schanuel's conjecture. logic soft-question math-history big-list Kyle $\begingroup$ What's "mathematical logic" in this case? $\endgroup$ – Asaf Karagila♦ Jul 28 '14 at 23:14 $\begingroup$ en.wikipedia.org/wiki/Ax%E2%80%93Kochen_theorem $\endgroup$ – Will Jagy Jul 28 '14 at 23:20 $\begingroup$ You can find many examples of applications of model theory in this MathOverflow thread. $\endgroup$ – Alex Kruckman Jul 28 '14 at 23:42 $\begingroup$ I believe Ramsey's theorem was motivated by a problem in mathematical logic, but I guess that's not what you're looking for. Well, sticking with Ramsey theory, what about the Halpern-Läuchli partition theorem? Wasn't that proved using mathematical logic? $\endgroup$ – bof Jul 28 '14 at 23:43 $\begingroup$ I agree with @bof that the Halpern-Läuchli theorem is a good example. The original proof involved a custom-designed deductive system for efficiently handling the complicated nested quantifier combinations that need to be manipulated in the proof. $\endgroup$ – Andreas Blass Jul 29 '14 at 7:13
This was a number theoretic problem, and he did not expect that such procedure can not exist. Proving non-existence though required developing a branch of mathematical logic now called computability theory (by Gödel, Church, Turing and others) that formalizes the notion of algorithm. Matiyasevich showed that any recursively enumerable set can be the solution set for a Diophantine equation, and since not all recursively enumerable sets are computable there can be no solvability algorithm. This example is typical of how logic serves all parts of mathematics by saving effort on doomed searches for impossible constructions, proofs or counterexamples. For instance, an analyst might ask if the plane can be decomposed into a union of two sets, one at most countable along every vertical line, and the other along every horizontal line. It seems unlikely and people could spend a lot of time trying to disprove it. In vain, because Sierpinski proved that existence of such a decomposition is equivalent to the continuum hypothesis, and Gödel showed that disproving it is impossible by an elaborate logical construction now called inner model. As is proving it as Cohen showed by an even more elaborate logical construction called forcing. A more recent example is the proof of the Mordell-Lang conjecture for function fields by Hrushovski (1996). The conjecture "is essentially a finiteness statement on the intersection of a subvariety of a semi-Abelian variety with a subgroup of finite rank". In characteristic $0$, and for Abelian varieties or finitely generated groups the conjecture was proved more traditionally by Raynaud, Faltings and Vojta. They inferred the result for function fields from the one for number fields using a specialization argument, another proof was found by Buium. Abramovich and Voloch proved many cases in characteristic $p$. Hrushovski gave a uniform proof for general semi-Abelian varieties in arbitrary characteristic using "model-theoretic analysis of the kernel of Manin's homomorphism", which involves definable subsets, Morley dimension, $\lambda$-saturated structures, model-theoretic algebraic closure, compactness theorem for first-order theories, etc. ConifoldConifold $\begingroup$ Do you mean "recursively enumerable"rather than "recursive" at the end of the second paragraph? $\endgroup$ – Ned Jul 29 '14 at 22:52 $\begingroup$ Yes, sorry about that $\endgroup$ – Conifold Jul 29 '14 at 23:07 $\begingroup$ Your first example sounds very convincing, but Hilbert's tenth problem has the concept of algorithm (or procedure, or whatever you call it) in the statement and so must be solved by developing a formalization of this. It's not surprising that this was, in fact, the case, and whether the resulting theory is actually a branch of logic is both an opinion and sort of incidental. $\endgroup$ – Ryan Reich Jul 30 '14 at 2:55 $\begingroup$ @ Ryan Reich Only if you assume it was impossible. Euclid's algorithm existed for centuries without formalization of algorithms, and belongs to number theory as would a Diophantine algorithm if it existed. The ellipse method for linear programming was developed after formalization, and didn't use much of it. Hilbert, a founder of logic, didn't think it was impossible or needed formalization. The arguments Matiyasevich used (recursive deconstruction of expressions) are exactly of the type used in logic, which is why computability theory was developed by many of the same people as logic. 
$\endgroup$ – Conifold Jul 30 '14 at 23:10 $\begingroup$ And even if you assume it was impossible, the non-existence of algorithms for squaring a circle with Euclidean tools, or for solving equations in radicals, was proved long before formalization of algorithms, so there is no "must" there, and it was "surprising". I am not sure what "incidental" means, but the type of computability theory used in the proof "is a field of mathematical logic" even according to Wikipedia, and their "main professional organization" is called the Association for Symbolic Logic en.wikipedia.org/wiki/Computability_theory#Name_of_the_subject. $\endgroup$ – Conifold Jul 30 '14 at 23:51 The Tarski–Seidenberg theorem says that the set of semialgebraic sets is closed under projection. It's a pure real-algebraic statement that was originally proved with logic. Jacobson says this in chapter 5 of his Basic Algebra I: More generally, Tarski's theorem implies his metamathematical principle that any "elementary" sentence of algebra which is true in one real closed field (e.g., the field of real numbers) is true in every real closed field. lhf Not sure if this qualifies, but Goodstein's theorem, which appears quite number theoretic and states that every Goodstein sequence eventually terminates at $0$, was proved using ordinals. apurv $\begingroup$ ... and it is interesting that you cannot do without ordinals, in some sense. (Details can be found at the wikipedia page.) $\endgroup$ – Martin Brandenburg Jul 29 '14 at 18:01 Automated theorem provers are an application of mathematical logic. Consequently, the solution of the Robbins conjecture by William McCune using EQP qualifies as solving an open problem outside of mathematical logic by using mathematical logic. Doug Spoonwood $\begingroup$ Impressive formulas: math.colgate.edu/~amann/MA/robbins_complete.pdf $\endgroup$ – Martin Brandenburg Jul 30 '14 at 19:46 The compactness theorem gives many ways to generate theorems of the form "if all finite $X$s have property $P$, then so do all infinite $X$s". A random example from Bell and Slomson's Models and Ultraproducts: Let $B$ be an infinite set of boys each of whom has at most a finite number of girlfriends. If for each integer $k$, any $k$ of the boys have between them at least $k$ girlfriends, then it is possible for each boy to marry one of his girlfriends without any of them committing bigamy. This can be deduced (with some work) by using the compactness theorem with this finite version of the lemma: If $C$ is a set of $m$ boys and for each $k\le m$, any $k$ of the boys have at least $k$ girlfriends between them, then it is possible for each boy to marry one of his girlfriends without any of them committing bigamy. There are a number of better examples here: Most astonishing applications of compactness theorem outside logic (Although if you're looking for real world applications of logic, I think type theory for programming languages is a big one.) Dan Piponi Terry Tao has been using Abraham Robinson's framework and more generally ultraproducts (often considered to be "logic" as you put it), as for example in his treatment of Hilbert's fifth problem in this recent book.
Mikhail Katz The Baumgartner–Hajnal theorem says that, for any countable ordinal $\alpha,$ and for any coloring of the edges of the complete graph on the vertex set $\omega_1$ (the first uncountable ordinal) with finitely many colors, there is a monochromatic complete subgraph of order type $\alpha.$ This theorem of classical set theory was proved using the method of forcing (in the form of Martin's axiom) and absoluteness. It seems reasonable to consider forcing and absoluteness as logical, and classical set theory as non-logical. bof
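To make the Goodstein example mentioned in one of the answers above concrete, here is a short Python sketch (my own illustration, not taken from any of the answers) that generates the first terms of a Goodstein sequence: write the current value in hereditary base-$b$ notation, replace every $b$ by $b+1$, then subtract one.

def bump(n, b):
    # Rewrite n in hereditary base-b notation, then replace every b by b+1.
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            total += digit * (b + 1) ** bump(exponent, b)
        n //= b
        exponent += 1
    return total

def goodstein(m, terms=8):
    seq, base = [], 2
    while m > 0 and len(seq) < terms:
        seq.append(m)
        m = bump(m, base) - 1
        base += 1
    return seq

print(goodstein(3))  # [3, 3, 3, 3, 2, 1] -- this one dies out quickly
print(goodstein(4))  # [4, 26, 41, 60, 83, 109, 139, 173] -- grows for a very long time, yet provably reaches 0

The theorem says every such sequence eventually hits 0, and, as the comment above notes, the standard proof really does need ordinals: the statement is unprovable in Peano arithmetic.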
CommonCrawl
Took full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day - except I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened so the morning wasn't as productive as it could have been - but my actual performance when I could be bothered was still pretty normal. That struck me as kind of interesting that I can feel very tired and not act tired, in line with the anecdotes. Piracetam boosts acetylcholine function, a neurotransmitter responsible for memory consolidation. Consequently, it improves memory in people who suffer from age-related dementia, which is why it is commonly prescribed to Alzheimer's patients and people struggling with pre-dementia symptoms. When it comes to healthy adults, it is believed to improve focus and memory, enhancing the learning process altogether. Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances. Another factor to consider is whether the nootropic is natural or synthetic. Natural nootropics generally have effects which are a bit more subtle, while synthetic nootropics can have more pronounced effects. It's also important to note that there are natural and synthetic nootropics. Some natural nootropics include Ginkgo biloba and ginseng. One benefit to using natural nootropics is they boost brain function and support brain health. They do this by increasing blood flow and oxygen delivery to the arteries and veins in the brain. Moreover, some nootropics contain Rhodiola rosea, panxax ginseng, and more. The stop-signal task has been used in a number of laboratories to study the effects of stimulants on cognitive control. In this task, subjects are instructed to respond as quickly as possible by button press to target stimuli except on certain trials, when the target is followed by a stop signal. On those trials, they must try to avoid responding. The stop signal can follow the target stimulus almost immediately, in which case it is fairly easy for subjects to cancel their response, or it can come later, in which case subjects may fail to inhibit their response. The main dependent measure for stop-signal task performance is the stop time, which is the average go reaction time minus the interval between the target and stop signal at which subjects inhibit 50% of their responses. 
De Wit and colleagues have published two studies of the effects of d-AMP on this task. De Wit, Crean, and Richards (2000) reported no significant effect of the drug on stop time for their subjects overall but a significant effect on the half of the subjects who were slowest in stopping on the baseline trials. De Wit et al. (2002) found an overall improvement in stop time in addition to replicating their earlier finding that this was primarily the result of enhancement for the subjects who were initially the slowest stoppers. In contrast, Filmore, Kelly, and Martin (2005) used a different measure of cognitive control in this task, simply the number of failures to stop, and reported no effects of d-AMP. MPH was developed more recently and marketed primarily for ADHD, although it is sometimes prescribed off label or used nonmedically to increase alertness, energy, or concentration in conditions other than ADHD. Both MPH and AMP are on the list of substances banned from sports competitions by the World Anti-Doping Agency (Docherty, 2008). Both also have the potential for abuse and dependence, which detracts from their usefulness and is the reason for their classification as Schedule II controlled substances. Although the risk of developing dependence on these drugs is believed to be low for individuals taking them for ADHD, the Schedule II classification indicates that these drugs have a high potential for abuse and that abuse may lead to severe dependence. As shown in Table 6, two of these are fluency tasks, which require the generation of as large a set of unique responses as possible that meet the criteria given in the instructions. Fluency tasks are often considered tests of executive function because they require flexibility and the avoidance of perseveration and because they are often impaired along with other executive functions after prefrontal damage. In verbal fluency, subjects are asked to generate as many words that begin with a specific letter as possible. Neither Fleming et al. (1995), who administered d-AMP, nor Elliott et al. (1997), who administered MPH, found enhancement of verbal fluency. However, Elliott et al. found enhancement on a more complex nonverbal fluency task, the sequence generation task. Subjects were able to touch four squares in more unique orders with MPH than with placebo. As for newer nootropic drugs, there are unknown risks. "Piracetam has been studied for decades," says cognitive neuroscientist Andrew Hill, the founder of a neurofeedback company in Los Angeles called Peak Brain Institute. But "some of [the newer] compounds are things that some random editor found in a scientific article, copied the formula down and sent it to China and had a bulk powder developed three months later that they're selling. Please don't take it, people!" Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement. The smart pill that FDA approved is called Abilify MyCite. This tiny pill has a drug and an ingestible sensor. The sensor gets activated when it comes into contact with stomach fluid to detect when the pill has been taken. The data is then transmitted to a wearable patch that eventually conveys the information to a paired smartphone app. 
Doctors and caregivers, with the patient's consent, can then access the data via a web portal. These pills don't work. The reality is that MOST of these products don't work effectively. Maybe we're cynical, but if you simply review the published studies on memory pills, you can quickly eliminate many of the products that don't have "the right stuff." The active ingredients in brain and memory health pills are expensive and most companies sell a watered down version that is not effective for memory and focus. The more brands we reviewed, the more we realized that many of these marketers are slapping slick labels on low-grade ingredients. Nevertheless, a drug that improved your memory could be said to have made you smarter. We tend to view rote memory, the ability to memorize facts and repeat them, as a dumber kind of intelligence than creativity, strategy, or interpersonal skills. "But it is also true that certain abilities that we view as intelligence turn out to be in fact a very good memory being put to work," Farah says. Take quarter at midnight, another quarter at 2 AM. Night runs reasonably well once I remember to eat a lot of food (I finish a big editing task I had put off for weeks), but the apathy kicks in early around 4 AM so I gave up and watched Scott Pilgrim vs. the World, finishing around 6 AM. I then read until it's time to go to a big shotgun club function, which occupies the rest of the morning and afternoon; I had nothing to do much of the time and napped very poorly on occasion. By the time we got back at 4 PM, the apathy was completely gone and I started some modafinil research with gusto (interrupted by going to see Puss in Boots). That night: Zeo recorded 8:30 of sleep, gap of about 1:50 in the recording, figure 10:10 total sleep; following night, 8:33; third night, 8:47; fourth, 8:20 (▇▁▁▁). With all these studies pointing to the nootropic benefits of some essential oils, it can logically be concluded then that some essential oils can be considered "smart drugs." However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however. The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow a inverted U-curve where too much or too little lead to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations). The fish oil can be considered a free sunk cost: I would take it in the absence of an experiment. The empty pill capsules could be used for something else, so we'll put the 500 at $5. Filling 500 capsules with fish and olive oil will be messy and take an hour. 
Taking them regularly can be added to my habitual morning routine for vitamin D and the lithium experiment, so that is close to free but we'll call it an hour over the 250 days. Recording mood/productivity is also free a sunk cost as it's necessary for the other experiments; but recording dual n-back scores is more expensive: each round is ~2 minutes and one wants >=5, so each block will cost >10 minutes, so 18 tests will be >180 minutes or >3 hours. So >5 hours. Total: 5 + (>5 \times 7.25) = >41. Smart drugs could lead to enhanced cognitive abilities in the military. Also known as nootropics, smart drugs can be viewed similarly to medical enhancements. What's important to remember though, is that smart drugs do not increase your intelligence; however, they may improve cognitive and executive functions leading to an increase in intelligence. Similar to the way in which some athletes used anabolic steroids (muscle-building hormones) to artificially enhance their physique, some students turned to smart drugs, particularly Ritalin and Adderall, to heighten their intellectual abilities. A 2005 study reported that, at some universities in the United States, as many as 7 percent of respondents had used smart drugs at least once in their lifetime and 2.1 percent had used smart drugs in the past month. Modafinil was used increasingly by persons who sought to recover quickly from jet lag and who were under heavy work demands. Military personnel were given the same drug when sent on missions with extended flight times. Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancer nootropic available today with the least side-effects. In some humans, a majorly extended use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which mostly constitutes of Bacopa Monnieri as one of the main ingredients. The compound is one of the best brain enhancement supplements that includes memory enhancement and protection against brain aging. Some studies suggest that the compound is an effective treatment for disorders like vascular dementia, Alzheimer's, brain stroke, anxiety, and depression. However, there are some side effects associated with Alpha GPC, like a headache, heartburn, dizziness, skin rashes, insomnia, and confusion. Does little alone, but absolutely necessary in conjunction with piracetam. (Bought from Smart Powders.) When turning my 3kg of piracetam into pills, I decided to avoid the fishy-smelling choline and go with 500g of DMAE (Examine.com); it seemed to work well when I used it before with oxiracetam & piracetam, since I had no piracetam headaches, and be considerably less bulky. Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. 
implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects. One reason I like modafinil is that it enhances dopamine release, but it binds to your dopamine receptors differently than addictive substances like cocaine and amphetamines do, which may be part of the reason modafinil shares many of the benefits of other stimulants but doesn't cause addiction or withdrawal symptoms. [3] [4] It does increase focus, problem-solving abilities, and wakefulness, but it is not in the same class of drugs as Adderall, and it is not a classical stimulant. Modafinil is off of patent, so you can get it generically, or order it from India. It's a prescription drug, so you need to talk to a physician.
CommonCrawl
Tagged: matrix Compute and Simplify the Matrix Expression Including Transpose and Inverse Matrices Let $A, B, C$ be the following $3\times 3$ matrices. Compute and simplify \[(A^{\trans}-B)^{\trans}+C(B^{-1}C)^{-1}.\] (The Ohio State University, Linear Algebra Midterm Exam Problem) Every Plane Through the Origin in the Three Dimensional Space is a Subspace Prove that every plane in the $3$-dimensional space $\R^3$ that passes through the origin is a subspace of $\R^3$. Quiz 4: Inverse Matrix / Nonsingular Matrix Satisfying a Relation (a) Find the inverse matrix of the given matrix $A$ if it exists. If you think there is no inverse matrix of $A$, then give a reason. (b) Find a nonsingular $2\times 2$ matrix $A$ such that \[A^3=A^2B-3A^2,\] where $B$ is the given $2\times 2$ matrix. Verify that the matrix $A$ you obtained is actually a nonsingular matrix. Let $V$ be the vector space of all $3\times 3$ real matrices. Let $A$ be the matrix given below and we define \[W=\{M\in V \mid AM=MA\}.\] That is, $W$ consists of matrices that commute with $A$. Then $W$ is a subspace of $V$. Determine which matrices are in the subspace $W$ and find the dimension of $W$. (a) \[A=\begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix},\] where $a, b, c$ are distinct real numbers. (b) \[A=\begin{bmatrix} 0 & a & 0 \\ 0 & 0 & b \end{bmatrix},\] where $a, b$ are distinct real numbers. Idempotent Matrices. 2007 University of Tokyo Entrance Exam Problem For a real number $a$, consider $2\times 2$ matrices $A, P, Q$ satisfying the following five conditions: (1) $A=aP+(a+1)Q$, (2) $P^2=P$, (3) $Q^2=Q$, (4) $PQ=O$, (5) $QP=O$, where $O$ is the $2\times 2$ zero matrix. Then do the following problems. (a) Prove that $(P+Q)A=A$. (b) Suppose $a$ is a positive real number and let \[ A=\begin{bmatrix} a & 0\\ 1& a+1 \end{bmatrix}.\] Then find all matrices $P, Q$ satisfying conditions (1)-(5). (c) Let $n$ be an integer greater than $1$. For any integer $k$, $2\leq k \leq n$, we define the matrix \[A_k=\begin{bmatrix} k & 0\\ 1& k+1 \end{bmatrix}.\] Then calculate and simplify the matrix product \[A_nA_{n-1}A_{n-2}\cdots A_2.\] (Tokyo University Entrance Exam 2007) by Yu · Published 01/19/2017 If the matrix product $AB$ is a square matrix, is $BA$ a square matrix? Let $A$ and $B$ be matrices such that the matrix product $AB$ is defined and $AB$ is a square matrix. Is it true that the matrix product $BA$ is also defined and $BA$ is a square matrix? If it is true, then prove it. If not, find a counterexample. The Matrix $XY-YX$ Can Never Be the Identity Matrix Let $I$ be the $n\times n$ identity matrix, where $n$ is a positive integer. Prove that there are no $n\times n$ matrices $X$ and $Y$ such that \[XY-YX=I.\] Row Equivalent Matrix, Bases for the Null Space, Range, and Row Space of a Matrix Let $A$ be the matrix given below. (a) Find a matrix $B$ in reduced row echelon form such that $B$ is row equivalent to the matrix $A$. (b) Find a basis for the null space of $A$. (c) Find a basis for the range of $A$ that consists of columns of $A$. For each column $A_j$ of $A$ that does not appear in the basis, express $A_j$ as a linear combination of the basis vectors. (d) Exhibit a basis for the row space of $A$. Determine a Matrix From Its Eigenvalue Let $A$ be a $2\times 2$ matrix whose first row is $(a, -1)$, where $a$ is some real number. Suppose that the matrix $A$ has an eigenvalue $3$. (a) Determine the value of $a$. (b) Does the matrix $A$ have eigenvalues other than $3$?
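As a numerical cross-check of the "matrices commuting with $A$" problem above, the sketch below computes $\dim W$ by treating $AM=MA$ as a linear system in the entries of $M$; since the concrete entries of $A$ are not reproduced here, it plugs in the hypothetical values $a=1$, $b=2$, $c=3$ for the diagonal case.

import numpy as np

def centralizer_dim(A):
    n = A.shape[0]
    # vec(AM - MA) = (I kron A - A^T kron I) vec(M); dim W = nullity of this map
    K = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
    return n * n - np.linalg.matrix_rank(K)

A = np.diag([1.0, 2.0, 3.0])   # hypothetical distinct values of a, b, c
print(centralizer_dim(A))      # 3: only the diagonal matrices commute with A

For a diagonal matrix with distinct diagonal entries, the commuting matrices are exactly the diagonal ones, so the dimension is $3$, matching the rank computation.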
Use Cramer's Rule to Solve a $2\times 2$ System of Linear Equations Use Cramer's rule to solve the system of linear equations \begin{align*} 3x_1-2x_2&=5\\ 7x_1+4x_2&=-1. \end{align*} Find a Matrix so that a Given Subset is the Null Space of the Matrix, hence it's a Subspace Let $W$ be the subset of $\R^3$ defined by \[W=\left \{ \mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}\in \R^3 \quad \middle| \quad 5x_1-2x_2+x_3=0 \right \}.\] Exhibit a $1\times 3$ matrix $A$ such that $W=\calN(A)$, the null space of $A$. Conclude that the subset $W$ is a subspace of $\R^3$. Is there an Odd Matrix Whose Square is $-I$? The Center of the Symmetric group is Trivial if $n>2$ Determine the Quotient Ring $\Z[\sqrt{10}]/(2, \sqrt{10})$
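A quick numerical sketch of the Cramer's rule exercise above, replacing one column of the coefficient matrix at a time; this is just a check of the arithmetic, not the intended hand computation.

import numpy as np

A = np.array([[3.0, -2.0], [7.0, 4.0]])
b = np.array([5.0, -1.0])
det_A = np.linalg.det(A)                 # 3*4 - (-2)*7 = 26

x = []
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                         # replace the i-th column by the right-hand side
    x.append(np.linalg.det(Ai) / det_A)

print(x)                                 # [9/13, -19/13], roughly [0.6923, -1.4615]
print(np.allclose(A @ np.array(x), b))   # True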
CommonCrawl
Question 476 income and capital returns, idiom The saying "buy low, sell high" suggests that investors should make a: (a) Positive income return. (b) Positive capital return. (c) Negative income return. (d) Negative capital return. (e) Positive total return. Question 272 NPV Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow? More than $102, exactly $102 or less than $102? Question 1 NPV Jan asks you for a loan. He wants $100 now and offers to pay you back $120 in 1 year. You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Remember: ### V_0 = \frac{V_t}{(1+r_\text{eff})^t} ### Will you accept or reject Jan's deal? Question 2 NPV, Annuity Katya offers to pay you $10 at the end of every year for the next 5 years (t=1,2,3,4,5) if you pay her $50 now (t=0). You can borrow and lend from the bank at an interest rate of 10% pa, given as an effective annual rate. Ignore credit risk. Will you accept or reject Katya's deal? Question 3 DDM, income and capital returns The following equation is called the Dividend Discount Model (DDM), Gordon Growth Model or the perpetuity with growth formula: ### P_0 = \frac{ C_1 }{ r - g } ### What is ##g##? The value ##g## is the long term expected: (a) Income return of the stock. (b) Capital return of the stock. (c) Total return of the stock. (d) Dividend yield of the stock. (e) Price-earnings ratio of the stock. Question 4 DDM For a price of $13, Carla will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy Carla's share or politely decline? For a price of $6, Carlos will sell you a share which will pay a dividend of $1 in one year and every year after that forever. The required return of the stock is 10% pa. Would you like to buy his share or politely decline? For a price of $1040, Camille will sell you a share which just paid a dividend of $100, and is expected to pay dividends every year forever, growing at a rate of 5% pa. So the next dividend will be ##100(1+0.05)^1=$105.00##, and the year after it will be ##100(1+0.05)^2=110.25## and so on. The required return of the stock is 15% pa. Would you like to buy the share or politely decline? For a price of $10.20 each, Renee will sell you 100 shares. Each share is expected to pay dividends in perpetuity, growing at a rate of 5% pa. The next dividend is one year away (t=1) and is expected to be $1 per share. Would you like to buy the shares or politely decline? Question 10 DDM For a price of $95, Sherylanne will sell you a share which is expected to pay its first dividend of $10 in 7 years (t=7), and will continue to pay the same $10 dividend every year after that forever. Question 11 bond pricing For a price of $100, Vera will sell you a 2 year bond paying semi-annual coupons of 10% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 8% pa. Would you like to buy her bond or politely decline? For a price of $100, Carol will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with similar risk, maturity and coupon characteristics trade at a yield of 12% pa. For a price of $100, Rad will sell you a 5 year bond paying semi-annual coupons of 16% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa.
Would you like to buy the bond or politely decline? For a price of $95, Nicole will sell you a 10 year bond paying semi-annual coupons of 8% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 8% pa. For a price of $100, Andrea will sell you a 2 year bond paying annual coupons of 10% pa. The face value of the bond is $100. Other bonds with the same risk, maturity and coupon characteristics trade at a yield of 6% pa. Question 478 income and capital returns Total cash flows can be broken into income and capital cash flows. What is the name given to the income cash flow from owning shares? (a) Dividends. (b) Rent. (c) Coupons. (d) Loan payments. (e) Capital gains. An asset's total expected return over the next year is given by: ###r_\text{total} = \dfrac{c_1+p_1-p_0}{p_0} ### Where ##p_0## is the current price, ##c_1## is the expected income in one year and ##p_1## is the expected price in one year. The total return can be split into the income return and the capital return. Which of the following is the expected capital return? (a) ##c_1## (b) ##p_1-p_0## (c) ##\dfrac{c_1}{p_0} ## (d) ##\dfrac{p_1}{p_0} -1## (e) ##\dfrac{p_1}{p_0} ## A share was bought for $30 (at t=0) and paid its annual dividend of $6 one year later (at t=1). Just after the dividend was paid, the share price fell to $27 (at t=1). What were the total, capital and income returns given as effective annual rates? The choices are given in the same order: ##r_\text{total}## , ##r_\text{capital}## , ##r_\text{dividend}##. (a) -0.1, -0.3, 0.2. (b) -0.1, 0.1, -0.2. (c) 0.1, -0.1, 0.2. (d) 0.1, 0.2, -0.1. (e) 0.2, 0.1, -0.1. Question 278 inflation, real and nominal returns and cash flows Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After one year, would you be able to buy more than, exactly the same as, or less than today with the money in this account? Question 353 income and capital returns, inflation, real and nominal returns and cash flows, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 3% pa. Inflation is expected to be 2% pa. All rates are given as effective annual rates. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.9216%, 2.9412%, 0.9804%. (b) 3.9216%, 0.9804%, 2.9412%. (c) 3.9216%, 0.9804%, 0.9804%. (d) 1.9804%, 1.0000%, 0.9804%. (e) 1.9608%, 0.9804%, 0.9804%. Question 466 limited liability, business structure Which business structure or structures have the advantage of limited liability for equity investors? (a) Sole traders. (b) Partnerships. (c) Corporations. (e) None of the above On his 20th birthday, a man makes a resolution. He will put $30 cash under his bed at the end of every month starting from today. His birthday today is the first day of the month. So the first addition to his cash stash will be in one month. He will write in his will that when he dies the cash under the bed should be given to charity. If the man lives for another 60 years, how much money will be under his bed if he dies just after making his last (720th) addition? Also, what will be the real value of that cash in today's prices if inflation is expected to be 2.5% pa? Assume that the inflation rate is an effective annual rate and is not expected to change.
The answers are given in the same order: the amount of money under his bed in 60 years, and the real value of that money in today's prices. (a) $21,600, $95,035.46 (b) $21,600, $49,515.44 (c) $21,600, $4,909.33 (d) $21,600, $2,557.86 (e) $11,254.05, $2,557.86 Question 526 real and nominal returns and cash flows, inflation, no explanation How can a nominal cash flow be precisely converted into a real cash flow? (a) ##C_\text{real, t}=C_\text{nominal,t}.(1+r_\text{inflation})^t## (b) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{(1+r_\text{inflation})^t} ## (c) ##C_\text{real,t}=\dfrac{C_\text{nominal,t}}{r_\text{inflation}} ## (d) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation} ## (e) ##C_\text{real,t}=C_\text{nominal,t}.r_\text{inflation}.t## Question 407 income and capital returns, inflation, real and nominal returns and cash flows A stock has a real expected total return of 7% pa and a real expected capital return of 2% pa. What is the nominal expected total return, capital return and dividend yield? The answers below are given in the same order. (a) 11.100%, 4.000%, 7.100%. (b) 11.140%, 4.040%, 7.100%. (c) 4.902%, 0.000%, 4.902%. (d) 9.140%, 4.040%, 5.100%. (e) 9.140%, 4.040%, 7.100%. Question 515 corporate financial decision theory, idiom The expression 'you have to spend money to make money' relates to which business decision? (a) Investment decision. (b) Financing decision. (c) Working capital decision. (d) Payout policy decision. (e) Diversification decision. Question 221 credit risk You're considering making an investment in a particular company. They have preference shares, ordinary shares, senior debt and junior debt. Which is the safest investment? Which will give the highest returns? (a) Junior debt is the safest. Preference shares will have the highest returns. (b) Preference shares are the safest. Ordinary shares will have the highest returns. (c) Senior debt is the safest. Ordinary shares will have the highest returns. (d) Junior debt is the safest. Ordinary shares will have the highest returns. (e) Senior debt is the safest. Junior debt will have the highest returns. Question 473 market capitalisation of equity The below screenshot of Commonwealth Bank of Australia's (CBA) details was taken from the Google Finance website on 7 Nov 2014. Some information has been deliberately blanked out. What was CBA's market capitalisation of equity? (a) $431.18 billion (b) $429 billion (c) $134.07 billion (d) $8.44 billion (e) $3.21 billion Question 444 investment decision, corporate financial decision theory The investment decision primarily affects which part of a business? (a) Assets. (b) Liabilities and owner's equity. (c) Current assets and current liabilities. (d) Dividends and buy backs. (e) Net income, also known as earnings or net profit after tax. Question 445 financing decision, corporate financial decision theory The financing decision primarily affects which part of a business? What is the present value of a real payment of $500 in 2 years? The nominal discount rate is 7% pa and the inflation rate is 4% pa. (a) $472.3557 (b) $471.298 (c) $436.7194 (d) $435.7415 (e) $405.8112 Question 152 NPV, Annuity The following cash flows are expected: 10 yearly payments of $80, with the first payment in 3 years from now (first payment at t=3). 1 payment of $600 in 5 years and 6 months (t=5.5) from now. What is the NPV of the cash flows if the discount rate is 10% given as an effective annual rate?
(a) $1,006.25 (b) $846.78 (c) $761.47 (d) $741.87 (e) $724.54 Question 126 IRR What is the Internal Rate of Return (IRR) of the project detailed in the table below? Assume that the cash flows shown in the table are paid all at once at the given point in time. All answers are given as effective annual rates. Project Cash Flows Time (yrs) Cash flow ($) (b) 0.105 (c) 0.1111 (d) 0.1 (e) 0 Question 37 IRR If a project's net present value (NPV) is zero, then its internal rate of return (IRR) will be: (a) Positive infinity (##+\infty##) (b) Zero (0). (c) Less than the project's required return. (d) More than the project's required return. (e) Equal to the project's required return. Question 46 NPV, annuity due The phone company Telstra have 2 mobile service plans on offer which both have the same amount of phone call, text message and internet data credit. Both plans have a contract length of 24 months and the monthly cost is payable in advance. The only difference between the two plans is that one is a: 'Bring Your Own' (BYO) mobile service plan, costing $50 per month. There is no phone included in this plan. The other plan is a: 'Bundled' mobile service plan that comes with the latest smart phone, costing $71 per month. This plan includes the latest smart phone. Neither plan has any additional payments at the start or end. The only difference between the plans is the phone, so what is the implied cost of the phone as a present value? Assume that the discount rate is 2% per month given as an effective monthly rate, the same high interest rate on credit cards. (b) $397.1924 Question 465 NPV, perpetuity The boss of WorkingForTheManCorp has a wicked (and unethical) idea. He plans to pay his poor workers one week late so that he can get more interest on his cash in the bank. Every week he is supposed to pay his 1,000 employees $1,000 each. So $1 million is paid to employees every week. The boss was just about to pay his employees today, until he thought of this idea so he will actually pay them one week (7 days) later for the work they did last week and every week in the future, forever. Bank interest rates are 10% pa, given as a real effective annual rate. So ##r_\text{eff annual, real} = 0.1## and the real effective weekly rate is therefore ##r_\text{eff weekly, real} = (1+0.1)^{1/52}-1 = 0.001834569## All rates and cash flows are real, the inflation rate is 3% pa and there are 52 weeks per year. The boss will always pay wages one week late. The business will operate forever with constant real wages and the same number of employees. What is the net present value (NPV) of the boss's decision to pay later? (a) $261.16 (b) $1,919.39 (c) $13,580.21 (d) $18,295.38 (e) $1,000,000.00 Question 60 pay back period The required return of a project is 10%, given as an effective annual rate. What is the payback period of the project in years? Assume that the cash flows shown in the table are received smoothly over the year. So the $121 at time 2 is actually earned smoothly from t=1 to t=2. (a) 2.7355 (b) 2.3596 (d) 1.2645 (e) 0.2645 Question 190 pay back period A project has the following cash flows: Normally cash flows are assumed to happen at the given time. But here, assume that the cash flows are received smoothly over the year. So the $500 at time 2 is actually earned smoothly from t=1 to t=2. (a) -0.80 (e) 2.20 Question 500 NPV, IRR The below graph shows a project's net present value (NPV) against its annual discount rate. 
For what discount rate or range of discount rates would you accept and commence the project? All answer choices are given as approximations from reading off the graph. (a) From 0 to 10% pa. (b) From 0 to 5% pa. (c) At 5.5% pa. (d) From 6 to 20% pa. (e) From 0 to 20% pa. Question 501 NPV, IRR, pay back period Which of the following statements is NOT correct? (a) When the project's discount rate is 18% pa, the NPV is approximately -$30m. (b) The payback period is infinite, the project never pays itself off. (c) The addition of the project's cash flows, ignoring the time value of money, is approximately $20m. (d) The project's IRR is approximately 5.5% pa. (e) As the discount rate rises, the NPV falls. You have $100,000 in the bank. The bank pays interest at 10% pa, given as an effective annual rate. You wish to consume an equal amount now (t=0) and in one year (t=1) and have nothing left in the bank at the end (t=1). How much can you consume at each time? (a) $57,619.0476 (b) $55,000 (c) $53,809.5238 (d) $52,380.9524 (e) $50,000 Question 542 price gains and returns over time, IRR, NPV, income and capital returns, effective return For an asset price to double every 10 years, what must be the expected future capital return, given as an effective annual rate? (a) 0.2 (b) 0.116123 (c) 0.082037 (d) 0.071773 (e) 0.06054 Question 502 NPV, IRR, mutually exclusive projects An investor owns an empty block of land that has local government approval to be developed into a petrol station, car wash or car park. The council will only allow a single development so the projects are mutually exclusive. All of the development projects have the same risk and the required return of each is 10% pa. Each project has an immediate cost and once construction is finished in one year the land and development will be sold. The table below shows the estimated costs payable now, expected sale prices in one year and the internal rates of returns (IRR's).
Mutually Exclusive Projects
Project | Cost now ($) | Sale price in one year ($) | IRR (% pa)
Petrol station | 9,000,000 | 11,000,000 | 22.22
Car wash | 800,000 | 1,100,000 | 37.50
Car park | 70,000 | 110,000 | 57.14
Which project should the investor accept? (a) Petrol station. (b) Car wash. (c) Car park. (d) None of the projects. (e) All of the projects. Question 532 mutually exclusive projects, NPV, IRR An investor owns a whole level of an old office building which is currently worth $1 million. There are three mutually exclusive projects that can be started by the investor. The office building level can be: Rented out to a tenant for one year at $0.1m paid immediately, and then sold for $0.99m in one year. Refurbished into more modern commercial office rooms at a cost of $1m now, and then sold for $2.4m when the refurbishment is finished in one year. Converted into residential apartments at a cost of $2m now, and then sold for $3.4m when the conversion is finished in one year. All of the development projects have the same risk so the required return of each is 10% pa. The table below shows the estimated cash flows and internal rates of returns (IRR's).
Project | Cash flow now ($) | Cash flow in one year ($) | IRR (% pa)
Rent then sell as is | -900,000 | 990,000 | 10
Refurbishment into modern offices | -2,000,000 | 2,400,000 | 20
Conversion into residential apartments | -3,000,000 | 3,400,000 | 13.33
(a) Rent then sell as is. (b) Refurbishment into modern offices. (c) Conversion into residential apartments. (e) Any of the above.
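The mutually exclusive project questions above (Questions 502 and 532) turn on ranking projects by NPV rather than IRR. As a rough illustration only, not part of the question bank, the Python sketch below prices the three Question 502 developments at the stated 10% pa required return; the project names and dollar figures come straight from that question, and for a one-period project the IRR is simply the sale price over the cost, minus one.

```python
# Rough sketch only: ranking the Question 502 developments by NPV and IRR.
# For a one-period project, NPV = -cost + sale/(1+r) and IRR = sale/cost - 1.

r = 0.10  # required return of each project, effective annual rate (from the question)

projects = {
    "Petrol station": (9_000_000, 11_000_000),
    "Car wash":       (800_000, 1_100_000),
    "Car park":       (70_000, 110_000),
}

for name, (cost, sale) in projects.items():
    npv = -cost + sale / (1 + r)
    irr = sale / cost - 1
    print(f"{name:15} NPV = ${npv:12,.2f}   IRR = {irr:.2%}")
```

Running this shows the car park has the highest IRR (57.14%) but the smallest NPV ($30,000), while the petrol station has the lowest IRR (22.22%) but the largest NPV ($1,000,000): the usual reason mutually exclusive projects are ranked by NPV rather than IRR.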
Question 505 equivalent annual cash flow A low-quality second-hand car can be bought now for $1,000 and will last for 1 year before it will be scrapped for nothing. A high-quality second-hand car can be bought now for $4,900 and it will last for 5 years before it will be scrapped for nothing. What is the equivalent annual cost of each car? Assume a discount rate of 10% pa, given as an effective annual rate. The answer choices are given as the equivalent annual cost of the low-quality car and then the high quality car. (a) $100, $490 (b) $909.09, $608.5 (c) $1,000, $980 (d) $1,000, $1578.3 (e) $1,100, $1,292.61 Question 180 equivalent annual cash flow, inflation, real and nominal returns and cash flows Details of two different types of light bulbs are given below: Low-energy light bulbs cost $3.50, have a life of nine years, and use about $1.60 of electricity a year, paid at the end of each year. Conventional light bulbs cost only $0.50, but last only about a year and use about $6.60 of energy a year, paid at the end of each year. The real discount rate is 5%, given as an effective annual rate. Assume that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the Equivalent Annual Cost (EAC) of the low-energy and conventional light bulbs. The below choices are listed in that order. (a) 1.4873, 6.7857 (b) 1.6525, 6.7857 (c) 2.1415, 7.1250 (d) 14.8725, 6.7857 (e) 2.0924, 7.1250 Carlos and Edwin are brothers and they both love Holden Commodore cars. Carlos likes to buy the latest Holden Commodore car for $40,000 every 4 years as soon as the new model is released. As soon as he buys the new car, he sells the old one on the second hand car market for $20,000. Carlos never has to bother with paying for repairs since his cars are brand new. Edwin also likes Commodores, but prefers to buy 4-year old cars for $20,000 and keep them for 11 years until the end of their life (new ones last for 15 years in total but the 4-year old ones only last for another 11 years). Then he sells the old car for $2,000 and buys another 4-year old second hand car, and so on. Every time Edwin buys a second hand 4 year old car he immediately has to spend $1,000 on repairs, and then $1,000 every year after that for the next 10 years. So there are 11 payments in total from when the second hand car is bought at t=0 to the last payment at t=10. One year later (t=11) the old car is at the end of its total 15 year life and can be scrapped for $2,000. Assuming that Carlos and Edwin maintain their love of Commodores and keep up their habits of buying new ones and second hand ones respectively, how much larger is Carlos' equivalent annual cost of car ownership compared with Edwin's? The real discount rate is 10% pa. All cash flows are real and are expected to remain constant. Inflation is forecast to be 3% pa. All rates are effective annual. Ignore capital gains tax and tax savings from depreciation since cars are tax-exempt for individuals. (a) $13,848.99 (b) $13,106.61 (c) $8,547.50 (d) $4,238.08 (e) -$103.85 Question 217 NPV, DDM, multi stage growth model A stock is expected to pay a dividend of $15 in one year (t=1), then $25 for 9 years after that (payments at t=2 ,3,...10), and on the 11th year (t=11) the dividend will be 2% less than at t=10, and will continue to shrink at the same rate every year after that forever. The required return of the stock is 10%. All rates are effective annual rates. What is the price of the stock now? 
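Question 217 above combines a deferred annuity with a shrinking perpetuity. The Python sketch below is a minimal illustration, not part of the question bank, of how the pieces can be discounted separately under one reading of the cash flows: the $15 dividend at t=1, the $25 dividends from t=2 to t=10, and a perpetuity growing at -2% pa from t=11 onwards, all at the stated 10% pa required return.

```python
# Minimal sketch: discounting Question 217's dividends piece by piece (assumptions as read above).

r = 0.10                                     # required return, effective annual

price = 15 / (1 + r)                         # $15 dividend at t=1
for t in range(2, 11):                       # $25 dividends at t=2, 3, ..., 10
    price += 25 / (1 + r) ** t

g = -0.02                                    # dividends shrink by 2% pa from t=11 onwards
div_11 = 25 * (1 + g)                        # first shrinking dividend, at t=11
price += (div_11 / (r - g)) / (1 + r) ** 10  # perpetuity valued at t=10, then discounted to t=0

print(round(price, 2))                       # roughly 223.24 under these assumptions
```

Under this reading the three pieces sum to roughly $223; treat the number as an illustration of the method rather than an official answer to the question.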
Question 498 NPV, Annuity, perpetuity with growth, multi stage growth model A business project is expected to cost $100 now (t=0), then pay $10 at the end of the third (t=3), fourth, fifth and sixth years, and then grow by 5% pa every year forever. So the cash flow will be $10.5 at the end of the seventh year (t=7), then $11.025 at the end of the eighth year (t=8) and so on perpetually. The total required return is 10% pa. Which of the following formulas will NOT give the correct net present value of the project? (a) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (b) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## (c) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^4} \right)}{(1+0.1)^2} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (d) ##-100+ \dfrac{10}{(1+0.1)^3} +\dfrac{10}{(1+0.1)^4} +\dfrac{10}{(1+0.1)^5} +\dfrac{10}{(1+0.1)^6} +\dfrac{\left(\dfrac{10.5}{0.1-0.05}\right)}{(1+0.1)^6} ## (e) ##-100+ \dfrac{ \dfrac{10}{0.1} \left(1-\dfrac{1}{(1+0.1)^3} \right)}{(1+0.1)^3} +\dfrac{\left(\dfrac{10}{0.1-0.05}\right)}{(1+0.1)^5} ## Question 372 debt terminology Which of the following statements is NOT correct? Borrowers: (a) Receive cash at the start and promise to pay cash in the future, as set out in the debt contract. (b) Are debtors. (c) Owe money. (d) Are funded by debt. (e) Buy debt. Which of the following statements is NOT correct? Lenders: (a) Are long debt. (b) Invest in debt. (c) Are owed money. (d) Provide debt funding. (e) Have debt liabilities. Question 290 APR, effective rate, debt terminology Which of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) An effective annual rate could be called: "a yearly rate compounding per year". (b) An APR compounding monthly could be called: "a yearly rate compounding per month". (c) An effective monthly rate could be called: "a yearly rate compounding per month". (d) An APR compounding daily could be called: "a yearly rate compounding per day". (e) An effective 2-year rate could be called: "a 2-year rate compounding every 2 years". Question 16 credit card, APR, effective rate A credit card offers an interest rate of 18% pa, compounding monthly. Find the effective monthly rate, effective annual rate and the effective daily rate. Assume that there are 365 days in a year. All answers are given in the same order: ### r_\text{eff monthly} , r_\text{eff yearly} , r_\text{eff daily} ### (a) 0.0072, 0.09, 0.0002. (b) 0.0139, 0.18, 0.0005. (c) 0.0139, 6.2876, 0.0055. (d) 0.015, 0.1956, 0.0005. (e) 0.015, 0.1956, 0.006. Question 131 APR, effective rate Calculate the effective annual rates of the following three APR's: A credit card offering an interest rate of 18% pa, compounding monthly. A bond offering a yield of 6% pa, compounding semi-annually. An annual dividend-paying stock offering a return of 10% pa compounding annually. ##r_\text{credit card, eff yrly}##, ##r_\text{bond, eff yrly}##, ##r_\text{stock, eff yrly}## (a) 0.1956, 0.0609, 0.1. (b) 0.015, 0.09, 0.1. (d) 0.1956, 0.0617, 0.1047. (e) 6.2876, 0.1236, 0.1. Question 134 fully amortising loan, APR You want to buy an apartment worth $400,000. You have saved a deposit of $80,000. The bank has agreed to lend you the $320,000 as a fully amortising mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change.
What will be your monthly payments? (e) $23,247.65 You just signed up for a 30 year fully amortising mortgage loan with monthly payments of $2,000. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 5 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 246,567.70, 93,351.63 (b) 246,567.70, 235,741.91 (c) 248,563.73, 96,346.75 (d) 248,563.73, 238,323.24 (e) 256,580.38, 245,314.97 How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. (a) 184,925.77, 164,313.82 (c) 186,422.80, 166,717.43 You just agreed to a 30 year fully amortising mortgage loan with monthly payments of $2,500. The interest rate is 9% pa which is not expected to change. How much did you borrow? After 10 years, how much will be owing on the mortgage? The interest rate is still 9% and is not expected to change. The below choices are given in the same order. (a) $320,725.47, $284,977.19 (b) $310,704.66, $277,862.39 (c) $310,704.66, $197,354.23 (d) $308,209.62, $273,856.37 (e) $308,209.62, $192,529.73 Question 29 interest only loan You want to buy an apartment priced at $300,000. You have saved a deposit of $30,000. The bank has agreed to lend you the $270,000 as an interest only loan with a term of 25 years. The interest rate is 12% pa and is not expected to change. What will be your monthly payments? Remember that mortgage payments are paid in arrears (at the end of the month). (a) 900 (b) 2,700 (c) 2,722.1 (d) 2,843.71 (e) 34,424.99 Question 107 interest only loan You want to buy an apartment worth $300,000. You have saved a deposit of $60,000. The bank has agreed to lend you $240,000 as an interest only mortgage loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 17,435.74 (b) 1,438.92 (c) 1,414.49 (e) 666.67 Question 239 income and capital returns, inflation, real and nominal returns and cash flows, interest only loan A bank grants a borrower an interest-only residential mortgage loan with a very large 50% deposit and a nominal interest rate of 6% that is not expected to change. Assume that inflation is expected to be a constant 2% pa over the life of the loan. Ignore credit risk. From the bank's point of view, what is the long term expected nominal capital return of the loan asset? (a) Approximately 6%. (b) Approximately 4%. (c) Approximately 2%. (d) Approximately 0%. (e) Approximately -2%. Question 128 debt terminology, needs refinement An 'interest payment' is the same thing as a 'coupon payment'. True or false? An 'interest rate' is the same thing as a 'coupon rate'. True or false? An 'interest rate' is the same thing as a 'yield'. True or false? Question 215 equivalent annual cash flow, effective rate conversion You're about to buy a car. These are the cash flows of the two different cars that you can buy: You can buy an old car for $5,000 now, for which you will have to buy $90 of fuel at the end of each week from the date of purchase. The old car will last for 3 years, at which point you will sell the old car for $500. Or you can buy a new car for $14,000 now for which you will have to buy $50 of fuel at the end of each week from the date of purchase. The new car will last for 4 years, at which point you will sell the new car for $1,000. Bank interest rates are 10% pa, given as an effective annual rate. Assume that there are exactly 52 weeks in a year.
Ignore taxes and environmental and pollution factors. Should you buy the old car or the new car? Details of two different types of desserts or edible treats are given below: High-sugar treats like candy, chocolate and ice cream make a person very happy. High sugar treats are cheap at only $2 per day. Low-sugar treats like nuts, cheese and fruit make a person equally happy if these foods are of high quality. Low sugar treats are more expensive at $4 per day. The advantage of low-sugar treats is that a person only needs to pay the dentist $2,000 for fillings and root canal therapy once every 15 years, whereas with high-sugar treats that treatment needs to be done every 5 years. The real discount rate is 10%, given as an effective annual rate. Assume that there are 365 days in every year and that all cash flows are real. The inflation rate is 3% given as an effective annual rate. Find the equivalent annual cash flow (EAC) of the high-sugar treats and low-sugar treats, including dental costs. The below choices are listed in that order. Ignore the pain of dental therapy, personal preferences and other factors. (a) -$332.8709, -$68.2065 (b) -$327.595, -$62.9476 (c) -$828.9808, -$808.5709 (d) -$829.6304, -$810.2613 (e) -$1,093.4153, -$1,594.5881 You just bought a nice dress which you plan to wear once per month on nights out. You bought it a moment ago for $600 (at t=0). In your experience, dresses used once per month last for 6 years. Your younger sister is a student with no money and wants to borrow your dress once a month when she hits the town. With the increased use, your dress will only last for another 3 years rather than 6. What is the present value of the cost of letting your sister use your current dress for the next 3 years? Assume: that bank interest rates are 10% pa, given as an effective annual rate; you will buy a new dress when your current one wears out; your sister will only use the current dress, not the next one that you will buy; and the price of a new dress never changes. (b) 283.1403 (c) 282.2849 (d) 257.4003 (e) 225.3944 Question 548 equivalent annual cash flow, time calculation, no explanation An Apple iPhone 6 smart phone can be bought now for $999. An Android Kogan Agora 4G+ smart phone can be bought now for $240. If the Kogan phone lasts for one year, approximately how long must the Apple phone last for to have the same equivalent annual cost? Assume that both phones have equivalent features besides their lifetimes, that both are worthless once they've outlasted their life, the discount rate is 10% pa given as an effective annual rate, and there are no extra costs or benefits from either phone. (a) 11 years (b) 9 years (c) 7 years (d) 5 years (e) 3 years Question 56 income and capital returns, bond pricing, premium par and discount bonds Which of the following statements about risk free government bonds is NOT correct? (a) Premium bonds have a positive expected capital return. (b) Discount bonds have a positive expected capital return. (c) Par bonds have a zero expected capital return. (d) Par bonds have a total expected yield equal to their coupon yield. (e) Zero coupon bonds selling at par would have zero expected total, income and capital yields. Hint: Total return can be broken into income and capital returns as follows: ###\begin{aligned} r_\text{total} &= \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0} \\ &= r_\text{income} + r_\text{capital} \end{aligned} ### The capital return is the growth rate of the price. The income return is the periodic cash flow.
For a bond this is the coupon payment. Question 25 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate A European company just issued two bonds, a 2 year zero coupon bond at a yield of 8% pa, and a 3 year zero coupon bond at a yield of 10% pa. What is the company's forward rate over the third year (from t=2 to t=3)? Give your answer as an effective annual rate, which is how the above bond yields are quoted. Question 87 fully amortising loan, APR You want to buy an apartment worth $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising mortgage loan with a term of 25 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) 1,500.00 You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as a fully amortising loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (e) $1,250.00 You just signed up for a 30 year fully amortising mortgage with monthly payments of $1,000 per month. The interest rate is 6% pa which is not expected to change. (b) 166,791.61, 90,073.45 (d) 165,177.97, 88,321.04 You want to buy a house priced at $400,000. You have saved a deposit of $40,000. The bank has agreed to lend you $360,000 as a fully amortising loan with a term of 30 years. The interest rate is 8% pa payable monthly and is not expected to change. (a) $1,000 (b) $1,106.6497 (c) $2,400 (d) $2,571.8327 (e) $2,641.5525 Question 550 fully amortising loan, interest only loan, APR, no explanation Many Australian home loans that are interest-only actually require payments to be made on a fully amortising basis after a number of years. You decide to borrow $600,000 from the bank at an interest rate of 4.25% pa for 25 years. The payments will be interest-only for the first 10 years (t=0 to 10 years), then they will have to be paid on a fully amortising basis for the last 15 years (t=10 to 25 years). Assuming that interest rates will remain constant, what will be your monthly payments over the first 10 years from now, and then the next 15 years after that? The answer options are given in the same order. (a) $3,250.43, $4,513.67 (b) $3,250.43, $1,530.28 (c) $2,125, $4,513.67 (d) $2,125, $3,250.43 (e) $1,530.28, $3,250.43 Question 551 fully amortising loan, time calculation, APR You just entered into a fully amortising home loan with a principal of $600,000, a variable interest rate of 4.25% pa and a term of 25 years. Immediately after settling the loan, the variable interest rate suddenly falls to 4% pa! You can't believe your luck. Despite this, you plan to continue paying the same home loan payments as you did before. How long will it now take to pay off your home loan? Assume that the lower interest rate was granted immediately and that rates were and are now again expected to remain constant. Round your answer up to the nearest whole month. (a) 301 months (b) 288 months (c) 271 months (d) 270 months (e) 75 months Question 63 bond pricing, NPV, market efficiency The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero. Considering this, which of the following statements is NOT correct? (a) The internal rate of return (IRR) of buying a bond is equal to the bond's yield. 
(b) The present value of a fairly priced bond's coupons and face value is equal to its price. (c) If the required return of a bond falls, its price will fall. (d) Fairly priced discount bonds' yield is more than the coupon rate, price is less than face value, and the NPV of buying them is zero. (e) The NPV of buying a fairly priced bond is zero. Question 18 DDM, income and capital returns The following equation is the Dividend Discount Model, also known as the 'Gordon Growth Model' or the 'Perpetuity with growth' equation. ### p_{0} = \frac{c_1}{r_{\text{eff}} - g_{\text{eff}}} ### What is the discount rate '## r_\text{eff} ##' in this equation? (a) The expected total return of the stock. (b) The expected capital return of the stock. (c) The expected dividend return of the stock. (d) The expected income return of the stock. (e) The expected growth rate of the stock price. Question 21 income and capital returns, bond pricing A fixed coupon bond was bought for $90 and paid its annual coupon of $3 one year later (at t=1 year). Just after the coupon was paid, the bond price was $92 (at t=1 year). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: ## r_\text{total},r_\text{capital},r_\text{income} ##. (a) -0.0556, -0.0222, -0.0333 (b) 0.0222, -0.0111, 0.0333. (e) 0.0556, 0.0333, 0.0222. What would you call the expression ## C_1/P_0 ##? (b) The expected income return of the stock. (c) The expected capital return of the stock. (d) The expected growth rate of the dividend. Question 30 income and capital returns A share was bought for $20 (at t=0) and paid its annual dividend of $3 one year later (at t=1). Just after the dividend was paid, the share price was $16 (at t=1). What was the total return, capital return and income return? Calculate your answers as effective annual rates. (a) -0.2, -0.35, 0.15. (b) -0.05, -0.2, 0.15. (c) -0.05, 0.15, -0.2. (d) 0.05, 0.2, -0.15. (e) 0.15, -0.05, -0.2. Question 328 bond pricing, APR A 10 year Australian government bond was just issued at par with a yield of 3.9% pa. The fixed coupon payments are semi-annual. The bond has a face value of $1,000. Six months later, just after the first coupon is paid, the yield of the bond decreases to 3.65% pa. What is the bond's new price? (c) $1,033.8330 Question 213 income and capital returns, bond pricing, premium par and discount bonds The coupon rate of a fixed annual-coupon bond is constant (always the same). What can you say about the income return (##r_\text{income}##) of a fixed annual coupon bond? Remember that: ###r_\text{total} = r_\text{income} + r_\text{capital}### ###r_\text{total, 0 to 1} = \frac{c_1}{p_0} + \frac{p_1-p_0}{p_0}### Assume that there is no change in the bond's total annual yield to maturity from when it is issued to when it matures. Select the most correct statement. From its date of issue until maturity, the income return of a fixed annual coupon: (a) Premium bond will increase. (b) Premium bond will decrease. (c) Premium bond will remain constant. (d) Par bond will increase. (e) Par bond will decrease. Question 491 capital budgeting, opportunity cost, sunk cost A man is thinking about taking a day off from his casual painting job to relax. He just woke up early in the morning and he's about to call his boss to say that he won't be coming in to work. But he's thinking about the hours that he could work today (in the future) which are: (a) A sunk cost. (b) An opportunity cost. 
(c) A negative side effect. (d) A capital expense. (e) A depreciation expense. A project's NPV is positive. Select the most correct statement: (a) The project should be rejected. (b) The project's IRR is more than its required return. (c) The project's IRR is less than its required return. (d) The project's IRR is equal to its required return. (e) The project will never pay itself off, assuming that the discount rate is positive. Question 533 NPV, no explanation You wish to consume twice as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. How much can you consume at time zero and one? The answer choices are given in the same order. (a) $52,380.95, $52,380.95 (b) $62,500, $31,250 (c) $66,666.67, $33,333.33 (d) $68,750, $34,375 (e) $70,967.74, $35,483.87 You wish to consume half as much now (t=0) as in one year (t=1) and have nothing left in the bank at the end. (a) $26,190.48, $52380.95 For an asset price to triple every 5 years, what must be the expected future capital return, given as an effective annual rate? (e) 0.219755 A project to build a toll road will take 3 years to complete, costing three payments of $50 million, paid at the start of each year (at times 0, 1, and 2). After completion, the toll road will yield a constant $10 million at the end of each year forever with no costs. So the first payment will be at t=4. The required return of the project is 10% pa given as an effective nominal rate. All cash flows are nominal. What is the payback period? (a) Negative since the NPV is negative. (b) Zero since the project's internal rate of return is less than the required return. (c) 15 years. (d) 18 years. (e) Infinite, since the project will never pay itself off. Which of the following statements is NOT correct? Bond investors: (a) Buy debt. (c) Have debt assets. (e) Are lenders. Question 582 APR, effective rate, effective rate conversion A credit card company advertises an interest rate of 18% pa, payable monthly. Which of the following statements about the interest rate is NOT correct? All rates are given to four decimal places. (a) The APR compounding monthly is 18.0000% per annum. (b) The effective monthly rate is 1.5000% per month. (c) The effective annual rate is 19.5618% per annum. (d) The effective quarterly rate is 4.5678% per quarter. (e) The APR compounding quarterly is 13.7035% per annum. Which of the following statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) Effective rates compound once over their time period. So an effective monthly rate compounds once per month. (b) APR's compound more than once per year. So an APR compounding monthly compounds 12 times per year. The exception is an APR that compounds annually (once per year) which is the same thing as an effective annual rate. (c) To convert an effective rate to an APR, multiply the effective rate by the number of time periods in one year. So an effective monthly rate multiplied by 12 is equal to an APR compounding monthly. (d) To convert an APR compounding monthly to an effective monthly rate, divide the APR by the number of months in one year (12). (e) To convert an APR compounding monthly to an effective weekly rate, divide the APR by the number of weeks in one year (approximately 52). Question 26 APR, effective rate A European bond paying annual coupons of 6% offers a yield of 10% pa. Convert the yield into an effective monthly rate, an effective annual rate and an effective daily rate. Assume that there are 365 days in a year. 
### r_\text{eff, monthly} , r_\text{eff, yearly} , r_\text{eff, daily} ### (b) 0.0080, 0.1, 0.0003. (c) 0.0083, 0.1, 0.0003. Question 49 inflation, real and nominal returns and cash flows, APR, effective rate In Australia, nominal yields on semi-annual coupon paying Government Bonds with 2 years until maturity are currently 2.83% pa. The inflation rate is currently 2.2% pa, given as an APR compounding per quarter. The inflation rate is not expected to change over the next 2 years. What is the real yield on these bonds, given as an APR compounding every 6 months? (a) 0.3086% (b) 0.6171% (c) 0.6181% (d) 0.6239% (e) 0.6300% Which of the following statements is NOT equivalent to the yield on debt? Assume that the debt being referred to is fairly priced, but do not assume that it's priced at par. (a) Debt coupon rate. (b) Required return on debt. (c) Total return on debt. (d) Opportunity cost of debt. (e) Cost of debt capital. You just signed up for a 30 year interest-only mortgage with monthly payments of $3,000 per month. The interest rate is 6% pa which is not expected to change. How much did you borrow? After 15 years, just after the 180th payment at that time, how much will be owing on the mortgage? The interest rate is still 6% and is not expected to change. Remember that the mortgage is interest-only and that mortgage payments are paid in arrears (at the end of the month). You want to buy an apartment priced at $500,000. You have saved a deposit of $50,000. The bank has agreed to lend you the $450,000 as an interest only loan with a term of 30 years. The interest rate is 6% pa and is not expected to change. What will be your monthly payments? (a) $ 1,250.00 (b) $ 2,250.00 (c) $ 2,652.17 (d) $ 2,697.98 (e) $ 32,692.01 A prospective home buyer can afford to pay $2,000 per month in mortgage loan repayments. The central bank recently lowered its policy rate by 0.25%, and residential home lenders cut their mortgage loan rates from 4.74% to 4.49%. How much more can the prospective home buyer borrow now that interest rates are 4.49% rather than 4.74%? Give your answer as a proportional increase over the original amount he could borrow (##V_\text{before}##), so: ###\text{Proportional increase} = \frac{V_\text{after}-V_\text{before}}{V_\text{before}} ### Assume that: Interest rates are expected to be constant over the life of the loan. Loans are interest-only and have a life of 30 years. Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates compounding per month. (a) 0.055679 Question 459 interest only loan, inflation In Australia in the 1980's, inflation was around 8% pa, and residential mortgage loan interest rates were around 14%. In 2013, inflation was around 2.5% pa, and residential mortgage loan interest rates were around 4.5%. If a person can afford constant mortgage loan payments of $2,000 per month, how much more can they borrow when interest rates are 4.5% pa compared with 14.0% pa? Give your answer as a proportional increase over the amount you could borrow when interest rates were high ##(V_\text{high rates})##, so: ###\text{Proportional increase} = \dfrac{V_\text{low rates}-V_\text{high rates}}{V_\text{high rates}} ### Mortgage loan payments are made every month in arrears and all interest rates are given as annualised percentage rates (APR's) compounding per month. (a) 0.095 What is the company's forward rate over the second year (from t=1 to t=2)? 
Give your answer as an effective annual rate, which is how the above bond yields are quoted. Question 267 term structure of interest rates 4 year zero coupon bond at a yield of 6.5% pa. What is the company's forward rate over the fourth year (from t=3 to t=4)? Give your answer as an effective annual rate, which is how the above bond yields are quoted. Question 143 bond pricing, zero coupon bond, term structure of interest rates, forward interest rate An Australian company just issued two bonds: A 6-month zero coupon bond at a yield of 6% pa, and A 12 month zero coupon bond at a yield of 7% pa. What is the company's forward rate from 6 to 12 months? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. A 1 year zero coupon bond at a yield of 8% pa, and A 2 year zero coupon bond at a yield of 10% pa. What is the forward rate on the company's debt from years 1 to 2? Give your answer as an APR compounding every 6 months, which is how the above bond yields are quoted. (a) 6.01% (b) 6.02% (c) 9.20% (d) 12.02% (e) 18.40% Question 479 perpetuity with growth, DDM, NPV Discounted cash flow (DCF) valuation prices assets by finding the present value of the asset's future cash flows. The single cash flow, annuity, and perpetuity equations are very useful for this. Which of the following equations is the 'perpetuity with growth' equation? (a) ##V_0=\dfrac{C_t}{(1+r)^t} ## (b) ##V_0=\dfrac{C_1}{r}.\left(1-\dfrac{1}{(1+r)^T} \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t}{(1+r)^t} \right) ## (c) ##V_0=\dfrac{C_1}{r-g}.\left(1-\left(\dfrac{1+g}{1+r}\right)^T \right)= \sum\limits_{t=1}^T \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## (d) ##V_0=\dfrac{C_1}{r} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t}{(1+r)^t} \right) ## (e) ##V_0=\dfrac{C_1}{r-g} = \sum\limits_{t=1}^\infty \left( \dfrac{C_t.(1+g)^t}{(1+r)^t} \right) ## Question 451 DDM The first payment of a constant perpetual annual cash flow is received at time 5. Let this cash flow be ##C_5## and the required return be ##r##. So there will be equal annual cash flows at time 5, 6, 7 and so on forever, and all of the cash flows will be equal so ##C_5 = C_6 = C_7 = ...## When the perpetuity formula is used to value this stream of cash flows, it will give a value (V) at time: (a) 0, so ##V_0=\dfrac{C_5}{r}## (b) 1, so ##V_1=\dfrac{C_5}{r}## (c) 4, so ##V_4=\dfrac{C_5}{r}## (d) 5, so ##V_5=\dfrac{C_5}{r}## (e) 6, so ##V_6=\dfrac{C_5}{r}## Question 201 DDM, income and capital returns The following is the Dividend Discount Model (DDM) used to price stocks: ###P_0=\dfrac{C_1}{r-g}### If the assumptions of the DDM hold, which one of the following statements is NOT correct? The long term expected: (a) Dividend growth rate is equal to the long term expected growth rate of the stock price. (b) Dividend growth rate is equal to the long term expected capital return of the stock. (c) Dividend growth rate is equal to the long term expected dividend yield. (d) Total return of the stock is equal to its long term required return. (e) Total return of the stock is equal to the company's long term cost of equity. A stock just paid its annual dividend of $9. The share price is $60. The required return of the stock is 10% pa as an effective annual rate. What is the implied growth rate of the dividend per year? (a) -0.8565 (b) -0.0500 (c) -0.0435 Question 497 income and capital returns, DDM, ex dividend date A stock will pay you a dividend of $10 tonight if you buy it today. 
Thereafter the annual dividend is expected to grow by 5% pa, so the next dividend after the $10 one tonight will be $10.50 in one year, then in two years it will be $11.025 and so on. The stock's required return is 10% pa. What is the stock price today and what do you expect the stock price to be tomorrow, approximately? (a) $200 today and $210 tomorrow. (b) $210 today and $220 tomorrow. (c) $220 today and $230 tomorrow. (d) $210 today and $200 tomorrow. (e) $220 today and $210 tomorrow. A two year Government bond has a face value of $100, a yield of 0.5% and a fixed coupon rate of 0.5%, paid semi-annually. What is its price? (b) $98.0346 (c) $99.0062 (d) $99.0087 (e) $99.0124 Question 254 time calculation, APR Your main expense is fuel for your car which costs $100 per month. You just refueled, so you won't need any more fuel for another month (first payment at t=1 month). You have $2,500 in a bank account which pays interest at a rate of 6% pa, payable monthly. Interest rates are not expected to change. Assuming that you have no income, in how many months time will you not have enough money to fully refuel your car? (a) In 23 months (t=23 months). (b) In 24 months (t=24 months). (c) In 25 months (t=25 months). (d) In 26 months (t=26 months). (e) In 27 months (t=27 months). Question 32 time calculation, APR You really want to go on a back packing trip to Europe when you finish university. Currently you have $1,500 in the bank. Bank interest rates are 8% pa, given as an APR compounding per month. If the holiday will cost $2,000, how long will it take for your bank account to reach that amount? (a) -3.74 years (b) 1.81 years (c) 3.33 years (d) 3.61 years (e) 3.74 years You're trying to save enough money for a deposit to buy a house. You want to buy a house worth $400,000 and the bank requires a 20% deposit ($80,000) before it will give you a loan for the other $320,000 that you need. You currently have no savings, but you just started working and can save $2,000 per month, with the first payment in one month from now. Bank interest rates on savings accounts are 4.8% pa with interest paid monthly and interest rates are not expected to change. How long will it take to save the $80,000 deposit? Round your answer up to the nearest month. (a) 27 months (t=27 months). (b) 38 months (t=38 months). (c) 40 months (t=40 months). (d) 43 months (t=43 months). (e) 79 months (t=79 months). A two year Government bond has a face value of $100, a yield of 2.5% pa and a fixed coupon rate of 0.5% pa, paid semi-annually. What is its price? (a) 90.6421 (b) 95.1524 (c) 95.2055 (d) 96.1219 Question 48 IRR, NPV, bond pricing, premium par and discount bonds, market efficiency (a) The internal rate of return (IRR) of buying a fairly priced bond is equal to the bond's yield. (c) If a fairly priced bond's required return rises, its price will fall. (d) Fairly priced premium bonds' yields are less than their coupon rates, prices are more than their face values, and the NPV of buying them is therefore positive. Question 165 DDM, PE ratio, payout ratio For certain shares, the forward-looking Price-Earnings Ratio (##P_0/EPS_1##) is equal to the inverse of the share's total expected return (##1/r_\text{total}##). For what shares is this true? Assume: The general accounting definition of 'payout ratio' which is dividends per share (DPS) divided by earnings per share (EPS). All cash flows, earnings and rates are real. (a) Shares of companies with a 100% payout ratio and high expected growth in earnings and dividends. 
(b) Shares of companies with a 100% payout ratio and negative expected growth in earnings and dividends. (c) Shares of companies with a 100% payout ratio and no expected growth in earnings or dividends. (d) Shares of companies with a 0% payout ratio and no expected growth in earnings or dividends. (e) Shares of companies with a 0% payout ratio and negative expected growth in earnings and dividends. Question 133 bond pricing A bond maturing in 10 years has a coupon rate of 4% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value of the bond is $100. What is its price? (a) $33.93 (b) $55.37 (c) $57.61 (d) $68.28 (e) $85.12 Question 138 bond pricing, premium par and discount bonds Bonds A and B are issued by the same Australian company. Both bonds yield 7% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond A pays coupons of 10% pa and bond B pays coupons of 5% pa. Which of the following statements is true about the bonds' prices? (a) The prices of bonds A and B will be more than $100. (b) The prices of bonds A and B will be less than $100. (c) Bond A will have a price more than $100, and bond B will have a price less than $100. (d) Bond A will have a price less than $100, and bond B will have a price more than $100. (e) Bonds A and B will both have a price of $100. Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100) and maturity (3 years). The only difference is that bond X and Y's yields are 8 and 12% pa respectively. Which of the following statements is true? (a) Bonds X and Y are premium bonds. (b) Bonds X and Y are discount bonds. (c) Bond X is a discount bond but bond Y is a premium bond. (d) Bond X is a premium bond but bond Y is a discount bond. (e) Bonds X and Y have the same price. A three year bond has a fixed coupon rate of 12% pa, paid semi-annually. The bond's yield is currently 6% pa. The face value is $100. What is its price? Bonds X and Y are issued by different companies, but they both pay a semi-annual coupon of 10% pa and they have the same face value ($100), maturity (3 years) and yield (10%) as each other. Which of the following statements is true? (e) Bonds X and Y are par bonds. A four year bond has a face value of $100, a yield of 6% and a fixed coupon rate of 12%, paid semi-annually. What is its price? Question 179 bond pricing, capital raising A firm wishes to raise $20 million now. They will issue 8% pa semi-annual coupon bonds that will mature in 5 years and have a face value of $100 each. Bond yields are 6% pa, given as an APR compounding every 6 months, and the yield curve is flat. How many bonds should the firm issue? (a) 140,202 (b) 184,280 (c) 184,460 (d) 186,881 (e) 200,000 A five year bond has a face value of $100, a yield of 12% and a fixed coupon rate of 6%, paid semi-annually. What is the bond's price? (e) 87.3629 Which one of the following bonds is trading at par? (a) a ten-year bond with a $4000 face value whose yield to maturity is 6.0% and coupon rate is 6.5% paid semi-annually. (b) a 6-year bond with a principal of $40,000 and a price of $45,000. (c) a 15-year bond with a $10,000 face value whose yield to maturity is 8.0% and coupon rate is 10.0% paid semi-annually. (d) a two-year bond with a $50,000 face value whose yield to maturity is 5.2% compounding semi-annually which has a price of $50,000. (e) None of the above bonds are trading at par. 
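Many of the bond questions above reduce to the same two-part present value: a coupon annuity plus the discounted face value, with the quoted yields treated as APRs compounding at the coupon frequency (as the questions state). The Python sketch below is a generic illustration rather than anything taken from the source; the function name and arguments are my own. It reproduces the $85.12 choice listed for Question 133 (10 year bond, 4% pa semi-annual coupons, 6% pa yield, $100 face value) and also prices the four year 12% coupon bond mentioned above.

```python
# Generic sketch (names are my own): price = PV of coupon annuity + PV of face value,
# treating the quoted yield as an APR compounding at the coupon frequency.

def bond_price(face, coupon_rate_pa, yield_pa, years, freq=2):
    c = face * coupon_rate_pa / freq           # coupon paid each period
    y = yield_pa / freq                        # periodic yield
    n = int(round(years * freq))               # number of coupon periods
    return c * (1 - (1 + y) ** -n) / y + face * (1 + y) ** -n

print(round(bond_price(100, 0.04, 0.06, 10), 2))  # Question 133 bond: 85.12
print(round(bond_price(100, 0.12, 0.06, 4), 2))   # four year, 12% coupon, 6% yield bond: about 121.06
```

A bond prices at par exactly when its coupon rate equals its yield under this quoting convention, which is the fact the 'trading at par' question immediately above is testing.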
A firm wishes to raise $8 million now. They will issue 7% pa semi-annual coupon bonds that will mature in 10 years and have a face value of $100 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. (b) 98,393 (c) 90,480 (d) 80,000 (e) 64,039 Which one of the following bonds is trading at a premium? (a) a ten-year bond with a $4,000 face value whose yield to maturity is 6.0% and coupon rate is 5.9% paid semi-annually. (b) a fifteen-year bond with a $10,000 face value whose yield to maturity is 8.0% and coupon rate is 7.8% paid semi-annually. (c) a five-year bond with a $2,000 face value whose yield to maturity is 7.0% and coupon rate is 7.2% paid semi-annually. (d) a two-year bond with a $50,000 face value whose yield to maturity is 5.2% and coupon rate is 5.2% paid semi-annually. (e) None of the above bonds are premium bonds. Question 452 limited liability, expected and historical returns What is the lowest and highest expected share price and expected return from owning shares in a company over a finite period of time? Let the current share price be ##p_0##, the expected future share price be ##p_1##, the expected future dividend be ##d_1## and the expected return be ##r##. Define the expected return as: ##r=\dfrac{p_1-p_0+d_1}{p_0} ## The answer choices are stated using inequalities. As an example, the first answer choice "(a) ##0≤p<∞## and ##0≤r< 1##", states that the share price must be larger than or equal to zero and less than positive infinity, and that the return must be larger than or equal to zero and less than one. (a) ##0≤p<∞## and ##0≤r< 1## (b) ##0≤p<∞## and ##-1≤r< ∞## (c) ##0≤p<∞## and ##0≤r< ∞## (d) ##0≤p<∞## and ##-∞≤r< ∞## (e) ##-∞<p<∞## and ##-∞<r< ∞## Question 120 credit risk, payout policy A newly floated farming company is financed with senior bonds, junior bonds, cumulative non-voting preferred stock and common stock. The new company has no retained profits and due to floods it was unable to record any revenues this year, leading to a loss. The firm is not bankrupt yet since it still has substantial contributed equity (same as paid-up capital). On which securities must it pay interest or dividend payments in this terrible financial year? (a) Preferred stock only. (b) The senior and junior bonds only. (c) Common stock only. (d) The senior and junior bonds and the preferred stock. (e) No payments on any security is required since the firm made a loss. A firm wishes to raise $10 million now. They will issue 6% pa semi-annual coupon bonds that will mature in 8 years and have a face value of $1,000 each. Bond yields are 10% pa, given as an APR compounding every 6 months, and the yield curve is flat. (a) 9,022.2 bonds (b) 10,000.0 bonds (c) 11,484.5 bonds (d) 12,712.9 bonds (e) 12,767.4 bonds A four year bond has a face value of $100, a yield of 9% and a fixed coupon rate of 6%, paid semi-annually. What is its price? (a) $94.6187 (c) $72.592 In these tough economic times, central banks around the world have cut interest rates so low that they are practically zero. In some countries, government bond yields are also very close to zero. A three year government bond with a face value of $100 and a coupon rate of 2% pa paid semi-annually was just issued at a yield of 0%. What is the price of the bond? (a) 94.20452353 (b) 100 (c) 106 (d) 112 (e) The bond is priceless. A 10 year bond has a face value of $100, a yield of 6% pa and a fixed coupon rate of 8% pa, paid semi-annually. What is its price? 
(d) $126.628 Bonds X and Y are issued by the same company. Both bonds yield 10% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 6% pa and bond Y pays coupons of 8% pa. Which of the following statements is true? Question 349 CFFA, depreciation tax shield Which one of the following will decrease net income (NI) but increase cash flow from assets (CFFA) in this year for a tax-paying firm, all else remaining constant? ###NI = (Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) An increase in revenue (Rev). (b) An increase in rent expense (part of fixed costs, FC). (c) An increase in depreciation expense (Depr). (d) A decrease in net working capital (ΔNWC). (e) An increase in dividends. Question 359 CFFA Which one of the following will have no effect on net income (NI) but decrease cash flow from assets (CFFA or FFCF) in this year for a tax-paying firm, all else remaining constant? ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c )### ###CFFA=NI+Depr-CapEx - ΔNWC+IntExp### (b) An increase in rent expense (a type of recurring fixed cost, FC). (d) An increase in inventories (a current asset). (e) A decrease in interest expense (IntExp). Question 377 leverage, capital structure Issuing debt doesn't give away control of the firm because debt holders can't cast votes to determine the company's affairs, such as at the annual general meeting (AGM), and can't appoint directors to the board. True or false? Question 379 leverage, capital structure, payout policy Companies must pay interest and principal payments to debt-holders. They're compulsory. But companies are not forced to pay dividends to shareholders. True or false? Question 94 leverage, capital structure, real estate Your friend just bought a house for $400,000. He financed it using a $320,000 mortgage loan and a deposit of $80,000. In the context of residential housing and mortgages, the 'equity' tied up in the value of a person's house is the value of the house less the value of the mortgage. So the initial equity your friend has in his house is $80,000. Let this amount be E, let the value of the mortgage be D and the value of the house be V. So ##V=D+E##. If house prices suddenly fall by 10%, what would be your friend's percentage change in equity (E)? Assume that the value of the mortgage is unchanged and that no income (rent) was received from the house during the short time over which house prices fell. ### r_{0\rightarrow1}=\frac{p_1-p_0+c_1}{p_0} ### where ##r_{0\rightarrow1}## is the return (percentage change) of an asset with price ##p_0## initially, ##p_1## one period later, and paying a cash flow of ##c_1## at time ##t=1##. (a) -100% (b) -50% (c) -12.5% (d) -10% (e) -8% Question 301 leverage, capital structure, real estate Your friend just bought a house for $1,000,000. He financed it using a $900,000 mortgage loan and a deposit of $100,000. In the context of residential housing and mortgages, the 'equity' or 'net wealth' tied up in a house is the value of the house less the value of the mortgage loan. Assuming that your friend's only asset is his house, his net wealth is $100,000. If house prices suddenly fall by 15%, what would be your friend's percentage change in net wealth? No income (rent) was received from the house during the short time over which house prices fell. Your friend will not declare bankruptcy; he will always pay off his debts.
(a) -1,000% (b) -150% (c) -100% (e) -10% Question 67 CFFA, interest tax shield Here are the Net Income (NI) and Cash Flow From Assets (CFFA) equations: ###NI=(Rev-COGS-FC-Depr-IntExp).(1-t_c)### ###CFFA=NI+Depr-CapEx - \varDelta NWC+IntExp### What is the formula for calculating annual interest expense (IntExp) which is used in the equations above? Select one of the following answers. Note that D is the value of debt which is constant through time, and ##r_D## is the cost of debt. (a) ##D(1+r_D)## (b) ##D/(1+r_D) ## (c) ##D.r_D ## (d) ##D / r_D## (e) ##NI.r_D## Question 206 CFFA, interest expense, interest tax shield Interest expense (IntExp) is an important part of a company's income statement (or 'profit and loss' or 'statement of financial performance'). How does an accountant calculate the annual interest expense of a fixed-coupon bond that has a liquid secondary market? Select the most correct answer: Annual interest expense is equal to: (a) the bond's face value multiplied by its annual yield to maturity. (b) the bond's face value multiplied by its annual coupon rate. (c) the bond's market price at the start of the year multiplied by its annual yield to maturity. (d) the bond's market price at the start of the year multiplied by its annual coupon rate. (e) the future value of the actual cash payments of the bond over the year, grown to the end of the year, and grown by the bond's yield to maturity. Question 223 CFFA, interest tax shield Which one of the following will increase the Cash Flow From Assets in this year for a tax-paying firm, all else remaining constant? (a) An increase in net capital spending. (b) A decrease in the cash flow to creditors. (c) An increase in interest expense. (d) An increase in net working capital. (e) A decrease in dividends paid. (a) An increase in revenue (##Rev##). (b) A decrease in revenue (##Rev##). (c) An increase in rent expense (part of fixed costs, ##FC##). (d) An increase in interest expense (##IntExp##). Question 68 WACC, CFFA, capital budgeting A manufacturing company is considering a new project in the more risky services industry. The cash flows from assets (CFFA) are estimated for the new project, with interest expense excluded from the calculations. To get the levered value of the project, what should these unlevered cash flows be discounted by? Assume that the manufacturing firm has a target debt-to-assets ratio that it sticks to. (a) The manufacturing firm's before-tax WACC. (b) The manufacturing firm's after-tax WACC. (c) A services firm's before-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm. (d) A services firm's after-tax WACC, assuming that the services firm has the same debt-to-assets ratio as the manufacturing firm. (e) The services firm's levered cost of equity. Question 89 WACC, CFFA, interest tax shield A retail furniture company buys furniture wholesale and distributes it through its retail stores. The owner believes that she has some good ideas for making stylish new furniture. She is considering a project to buy a factory and employ workers to manufacture the new furniture she's designed. Furniture manufacturing has more systematic risk than furniture retailing. Her furniture retailing firm's after-tax WACC is 20%. Furniture manufacturing firms have an after-tax WACC of 30%. Both firms are optimally geared. Assume a classical tax system. Which method(s) will give the correct valuation of the new furniture-making project? Select the most correct answer.
(a) Discount the project's unlevered CFFA by the furniture manufacturing firms' 30% WACC after tax.
(b) Discount the project's unlevered CFFA by the company's 20% WACC after tax.
(c) Discount the project's levered CFFA by the company's 20% WACC after tax.
(d) Discount the project's levered CFFA by the furniture manufacturing firms' 30% WACC after tax.
(e) The methods outlined in answers (a) and (c) will give the same valuations, both are correct.

Question 113 WACC, CFFA, capital budgeting
The US firm Google operates in the online advertising business. In 2011 Google bought Motorola Mobility which manufactures mobile phones. Assume the following:
Google had a 10% after-tax weighted average cost of capital (WACC) before it bought Motorola.
Motorola had a 20% after-tax WACC before it merged with Google.
Google and Motorola have the same level of gearing. Both companies operate in a classical tax system.
You are a manager at Motorola. You must value a project for making mobile phones. Which method(s) will give the correct valuation of the mobile phone manufacturing project? Select the most correct answer.
The mobile phone manufacturing project's:
(a) Unlevered CFFA should be discounted by Google's 10% WACC after tax.
(b) Unlevered CFFA should be discounted by Motorola's 20% WACC after tax.
(c) Levered CFFA should be discounted by Google's 10% WACC after tax.
(d) Levered CFFA should be discounted by Motorola's 20% WACC after tax.
(e) Unlevered CFFA by 15%, the average of Google and Motorola's WACC after tax.

Question 368 interest tax shield, CFFA
A method commonly seen in textbooks for calculating a levered firm's free cash flow (FFCF, or CFFA) is the following:
###\begin{aligned} FFCF &= (Rev - COGS - Depr - FC - IntExp)(1-t_c) + \\ &\space\space\space+ Depr - CapEx -\Delta NWC + IntExp(1-t_c) \\ \end{aligned}###
Does this annual FFCF include or exclude the annual interest tax shield?
One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use net operating profit after tax (NOPAT).
###\begin{aligned} FFCF &= NOPAT + Depr - CapEx -\Delta NWC \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC \\ \end{aligned} \\###

Question 77 interest tax shield
The equations for Net Income (NI, also known as Earnings or Net Profit After Tax) and Cash Flow From Assets (CFFA, also known as Free Cash Flow to the Firm) per year are:
For a firm with debt, what is the amount of the interest tax shield per year?
(a) ##IntExp.t_c##
(b) ##IntExp/t_c##
(c) ##IntExp.(1-t_c)##
(d) ##IntExp/(1-t_c)##
(e) ##IntExp.(t_c-1)##

Question 237 WACC, Miller and Modigliani, interest tax shield
Which of the following discount rates should be the highest for a levered company? Ignore the costs of financial distress.
(a) Cost of debt (##r_\text{D}##).
(b) Unlevered cost of equity (##r_\text{E, U}##).
(c) Levered cost of equity (##r_\text{E, L}##).
(d) Levered before-tax WACC (##r_\text{V, LxITS}##).
(e) Levered after-tax WACC (##r_\text{V, LwITS}##).

Question 240 negative gearing, interest tax shield
Unrestricted negative gearing is allowed in Australia, New Zealand and Japan. Negative gearing laws allow income losses on investment properties to be deducted from a tax-payer's pre-tax personal income. Negatively geared investors benefit from this tax advantage. They also hope to benefit from capital gains which exceed the income losses.
For example, a property investor buys an apartment funded by an interest only mortgage loan. Interest expense is $2,000 per month.
The rental payments received from the tenant living on the property are $1,500 per month. The investor can deduct this income loss of $500 per month from his pre-tax personal income. If his personal marginal tax rate is 46.5%, this saves $232.5 per month in personal income tax. The advantage of negative gearing is an example of the benefits of: (a) Diversification. (b) The time value of money. (c) Interest tax shields. (d) Depreciation tax shields. (e) A 'buy and hold' investment strategy. Question 337 capital structure, interest tax shield, leverage, real and nominal returns and cash flows, multi stage growth model A fast-growing firm is suitable for valuation using a multi-stage growth model. It's nominal unlevered cash flow from assets (##CFFA_U##) at the end of this year (t=1) is expected to be $1 million. After that it is expected to grow at a rate of: 12% pa for the next two years (from t=1 to 3), 5% over the fourth year (from t=3 to 4), and -1% forever after that (from t=4 onwards). Note that this is a negative one percent growth rate. The nominal WACC after tax is 9.5% pa and is not expected to change. The nominal WACC before tax is 10% pa and is not expected to change. The firm has a target debt-to-equity ratio that it plans to maintain. The inflation rate is 3% pa. All rates are given as nominal effective annual rates. What is the levered value of this fast growing firm's assets? (a) $13.19m (b) $12.36m (c) $11.77m (d) $11.53m (e) $11.20m One formula for calculating a levered firm's free cash flow (FFCF, or CFFA) is to use earnings before interest and tax (EBIT). ###\begin{aligned} FFCF &= (EBIT)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ &= (Rev - COGS - Depr - FC)(1-t_c) + Depr - CapEx -\Delta NWC + IntExp.t_c \\ \end{aligned} \\### Question 506 leverage, accounting ratio A firm has a debt-to-equity ratio of 25%. What is its debt-to-assets ratio? Question 236 diversification, correlation, risk Diversification in a portfolio of two assets works best when the correlation between their returns is: (b) -0.5 Question 559 variance, standard deviation, covariance, correlation Which of the following statements about standard statistical mathematics notation is NOT correct? (a) The arithmetic average of variable X is represented by ##\bar{X}##. (b) The standard deviation of variable X is represented by ##\sigma_X##. (c) The variance of variable X is represented by ##\sigma_X^2##. (d) The covariance between variables X and Y is represented by ##\sigma_{X,Y}^2##. (e) The correlation between variables X and Y is represented by ##\rho_{X,Y}##. Question 111 portfolio risk, correlation All things remaining equal, the variance of a portfolio of two positively-weighted stocks rises as: (a) The correlation between the stocks' returns rise. (b) The correlation between the stocks' returns decline. (c) The portfolio standard deviation declines. (d) Both stocks' individual variances decline. (e) Both stocks' individual standard deviations decline. Question 293 covariance, correlation, portfolio risk All things remaining equal, the higher the correlation of returns between two stocks: (a) The more diversification is possible when those stocks are combined in a portfolio. (b) The lower the variance of returns of an equally-weighted portfolio of those stocks. (c) The lower the volatility of returns of an equal-weighted portfolio of those stocks. (d) The higher the covariance between those stocks' returns. 
(e) The more likely that when one stock has a positive return, the other has a negative return. Question 557 portfolio weights, portfolio return An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 6% pa. Stock A has an expected return of 5% pa. Stock B has an expected return of 10% pa. What portfolio weights should the investor have in stocks A and B respectively? (a) 80%, 20% (b) 60%, 40% (c) 40%, 60% (d) 20%, 80% (e) 20%, 20% Question 556 portfolio risk, portfolio return, standard deviation An investor wants to make a portfolio of two stocks A and B with a target expected portfolio return of 12% pa. Stock A has an expected return of 10% pa and a standard deviation of 20% pa. Stock B has an expected return of 15% pa and a standard deviation of 30% pa. The correlation coefficient between stock A and B's expected returns is 70%. What will be the annual standard deviation of the portfolio with this 12% pa target return? (a) 24.28168% pa (b) 24% pa (c) 22.126907% pa (d) 19.697716% pa (e) 16.970563% pa Question 80 CAPM, risk, diversification Diversification is achieved by investing in a large amount of stocks. What type of risk is reduced by diversification? (a) Idiosyncratic risk. (b) Systematic risk. (c) Both idiosyncratic and systematic risk. (d) Market risk. (e) Beta risk. Question 326 CAPM A fairly priced stock has an expected return equal to the market's. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the stock's beta? (b) 0.5 Question 71 CAPM, risk Stock A has a beta of 0.5 and stock B has a beta of 1. Which statement is NOT correct? (a) Stock A has less systematic risk than stock B, so stock A's return should be less than stock B's. (b) Stock B has the same systematic risk as the market, so its return should be the same as the market's. (c) Stock B has the same beta as the market, so it also has the same total risk as the market. (d) If stock A and B were combined in a portfolio with weights of 50% each, the beta of the portfolio would be 0.75. (e) Stocks A and B have more systematic risk than the risk free security (government bonds) so their return should be greater than the risk free rate. Question 93 correlation, CAPM, systematic risk A stock's correlation with the market portfolio increases while its total risk is unchanged. What will happen to the stock's expected return and systematic risk? (a) The stock will have a higher return and higher systematic risk. (b) The stock will have a lower return and higher systematic risk. (c) The stock will have a higher return and lower systematic risk. (d) The stock will have a lower return and lower systematic risk. (e) The stock's return and systematic risk will be unchanged. Question 114 WACC, capital structure, risk A firm's WACC before tax would decrease due to: (a) the firm's industry becoming more systematically risky, for example if it was a mining company and commodities prices varied more strongly and were more positively correlated with the market portfolio. (b) the firm's industry becoming less systematically risky, for example if it was a child care centre and the government announced permanently higher subsidies for parents' child care expenses. (c) the firm issuing more debt and using the proceeds to repurchase stock. (d) the firm issuing more equity and using the proceeds to pay off debt holders. (e) none of the above. Question 408 leverage, portfolio beta, portfolio risk, real estate, CAPM You just bought a house worth $1,000,000. 
You financed it with an $800,000 mortgage loan and a deposit of $200,000. You estimate that: The house has a beta of 1; The mortgage loan has a beta of 0.2. What is the beta of the equity (the $200,000 deposit) that you have in your house? Also, if the risk free rate is 5% pa and the market portfolio's return is 10% pa, what is the expected return on equity in your house? Ignore taxes, assume that all cash flows (interest payments and rent) were paid and received at the end of the year, and all rates are effective annual rates. (a) The beta of equity is 5 and the expected return on equity is 30% pa. (b) The beta of equity is 4.2 and the expected return on equity is 26% pa. (c) The beta of equity is 0.86 and the expected return on equity is 9.3% pa. (d) The beta of equity is 1 and the expected return on equity is 10% pa. (e) The beta of equity is 0.6 and the expected return on equity is 8% pa. Question 66 CAPM, SML Government bonds currently have a return of 5% pa. A stock has an expected return of 6% pa and the market return is 7% pa. What is the beta of the stock? (a) -0.5 (c) 0.5 Question 72 CAPM, portfolio beta, portfolio risk return Standard deviation Correlation Beta Dollars A 0.2 0.4 0.12 0.5 40 B 0.3 0.8 1.5 80 What is the beta of the above portfolio? (b) 0.833333333 (d) 1.166666667 (e) 1.4 Government bonds currently have a return of 5%. A stock has a beta of 2 and the market return is 7%. What is the expected return of the stock? (a) 5% (b) 7% (c) 9% (e) 19% Question 86 CAPM Treasury bonds currently have a return of 5% pa. A stock has a beta of 0.5 and the market return is 10% pa. What is the expected return of the stock? (a) 5% pa (b) 7.5% pa (c) 10% pa (d) 12.5% pa (e) 20% pa Question 88 WACC, CAPM A firm can issue 3 year annual coupon bonds at a yield of 10% pa and a coupon rate of 8% pa. The beta of its levered equity is 2. The market's expected return is 10% pa and 3 year government bonds yield 6% pa with a coupon rate of 4% pa. The market value of equity is $1 million and the market value of debt is $1 million. The corporate tax rate is 30%. What is the firm's after-tax WACC? Assume a classical tax system. (b) 10.50% (c) 10.80% According to the theory of the Capital Asset Pricing Model (CAPM), total variance can be broken into two components, systematic variance and idiosyncratic variance. Which of the following events would be considered the most diversifiable according to the theory of the CAPM? (a) Global economic recession. (b) A major terrorist attack, grounding all commercial aircraft in the US and Europe. (c) An increase in corporate tax rates. (d) The outbreak of world war. (e) A company's poor earnings announcement. Question 92 CAPM, SML, CML Which statement(s) are correct? (i) All stocks that plot on the Security Market Line (SML) are fairly priced. (ii) All stocks that plot above the Security Market Line (SML) are overpriced. (iii) All fairly priced stocks that plot on the Capital Market Line (CML) have zero idiosyncratic risk. Select the most correct response: (a) Only (i) is true. (b) Only (ii) is true. (c) Only (iii) is true. (d) All statements (i), (ii) and (iii) are true. (e) Only statements (i) and (iii) are true. Question 98 capital structure, CAPM A firm changes its capital structure by issuing a large amount of debt and using the funds to repurchase shares. Its assets are unchanged. Ignore interest tax shields. According to the Capital Asset Pricing Model (CAPM), which statement is correct? (a) The beta of the firm's assets will increase. 
(b) The beta of the firm's assets will decrease. (c) The beta of the firm's equity will increase. (d) The beta of the firm's equity will decrease. (e) The beta of the firm's equity will be unchanged. A fairly priced stock has an expected return of 15% pa. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the beta of the stock? Question 110 CAPM, SML, NPV The security market line (SML) shows the relationship between beta and expected return. Investment projects that plot above the SML would have: (a) A positive NPV. (b) A zero NPV. (c) A negative NPV. (d) A large amount of diversifiable risk. (e) Zero diversifiable risk. A fairly priced stock has a beta that is the same as the market portfolio's beta. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. What is the expected return of the stock? (b) 7.5% Question 232 CAPM, DDM A stock has a beta of 0.5. Its next dividend is expected to be $3, paid one year from now. Dividends are expected to be paid annually and grow by 2% pa forever. Treasury bonds yield 5% pa and the market portfolio's expected return is 10% pa. All returns are effective annual rates. (c) $40.8 (d) $40 (e) $37.5 Question 235 SML, NPV, CAPM, risk Investment projects that plot on the SML would have: (a) A positive NPV and should be accepted. (c) A negative NPV and should be rejected. Question 244 CAPM, SML, NPV, risk Examine the following graph which shows stocks' betas ##(\beta)## and expected returns ##(\mu)##: Assume that the CAPM holds and that future expectations of stocks' returns and betas are correctly measured. Which statement is NOT correct? (a) Asset A is underpriced. (b) Asset B has a negative alpha (a negative excess return or abnormal return). (c) Buying asset C would be a positive NPV investment. (d) Asset D has less systematic variance than the market portfolio (M). (e) Asset E is fairly priced. Question 248 CAPM, DDM, income and capital returns The total return of any asset can be broken down in different ways. One possible way is to use the dividend discount model (or Gordon growth model): ###p_0 = \frac{c_1}{r_\text{total}-r_\text{capital}}### Which, since ##c_1/p_0## is the income return (##r_\text{income}##), can be expressed as: ###r_\text{total}=r_\text{income}+r_\text{capital}### So the total return of an asset is the income component plus the capital or price growth component. Another way to break up total return is to use the Capital Asset Pricing Model: ###r_\text{total}=r_\text{f}+β(r_\text{m}- r_\text{f})### ###r_\text{total}=r_\text{time value}+r_\text{risk premium}### So the risk free rate is the time value of money and the term ##β(r_\text{m}- r_\text{f})## is the compensation for taking on systematic risk. Using the above theory and your general knowledge, which of the below equations, if any, are correct? (I) ##r_\text{income}=r_\text{time value}## (II) ##r_\text{income}=r_\text{risk premium}## (III) ##r_\text{capital}=r_\text{time value}## (IV) ##r_\text{capital}=r_\text{risk premium}## (V) ##r_\text{income}+r_\text{capital}=r_\text{time value}+r_\text{risk premium}## Which of the equations are correct? (a) I, IV and V only. (b) II, III and V only. (c) V only. (d) All are true. (e) None are true. Question 303 WACC, CAPM, CFFA There are many different ways to value a firm's assets. Which of the following will NOT give the correct market value of a levered firm's assets ##(V_L)##? 
Assume that:
The firm is financed by listed common stock and vanilla annual fixed coupon bonds, which are both traded in a liquid market.
The bonds' yield is equal to the coupon rate, so the bonds are issued at par. The yield curve is flat and yields are not expected to change. When bonds mature they will be rolled over by issuing the same number of new bonds with the same expected yield and coupon rate, and so on forever.
Tax rates on the dividends and capital gains received by investors are equal, and capital gains tax is paid every year, even on unrealised gains regardless of when the asset is sold.
There is no re-investment of the firm's cash back into the business. All of the firm's excess cash flow is paid out as dividends so real growth is zero. The firm operates in a mature industry with zero real growth.
All cash flows and rates in the below equations are real (not nominal) and are expected to be stable forever. Therefore the perpetuity equation with no growth is suitable for valuation.
(a) ##V_L = n_\text{shares}.P_\text{share} + n_\text{bonds}.P_\text{bond}##
(b) ##V_L = n_\text{shares}.\dfrac{\text{Dividend per share}}{r_f + \beta_{EL}(r_m - r_f)} + n_\text{bonds}.\dfrac{\text{Coupon per bond}}{r_f + \beta_D(r_m - r_f)}##
(c) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC before tax}}##
(d) ##V_L = \dfrac{\text{CFFA}_{U}}{r_\text{WACC after tax}}##
(e) ##V_L = \dfrac{\text{CFFA}_{L}}{r_\text{WACC after tax}}##
###r_\text{WACC before tax} = r_D.\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital before tax}###
###r_\text{WACC after tax} = r_D.(1-t_c).\frac{D}{V_L} + r_{EL}.\frac{E_L}{V_L} = \text{Weighted average cost of capital after tax}###
###NI_L=(Rev-COGS-FC-Depr-\mathbf{IntExp}).(1-t_c) = \text{Net Income Levered}###
###CFFA_L=NI_L+Depr-CapEx - \varDelta NWC+\mathbf{IntExp} = \text{Cash Flow From Assets Levered}###
###NI_U=(Rev-COGS-FC-Depr).(1-t_c) = \text{Net Income Unlevered}###
###CFFA_U=NI_U+Depr-CapEx - \varDelta NWC= \text{Cash Flow From Assets Unlevered}###

Question 311 foreign exchange rate
When someone says that they're "buying American dollars" (USD), what type of asset are they probably buying? They're probably buying:
(a) Short term debt denominated in USD, for example lending to a US bank as a USD deposit.
(b) Long term debt denominated in USD, for example lending to a US company by buying their USD bonds.
(c) Shares denominated in USD, for example buying shares in Coca-Cola which is listed on the NYSE.
(d) Real estate denominated in USD, for example buying an apartment in Chicago.
(e) Commodities with USD, for example buying gold, wheat, or coal.

An Indonesian lady wishes to convert 1 million Indonesian rupiah (IDR) to Australian dollars (AUD). Exchange rates are 13,125 IDR per USD and 0.79 USD per AUD. How many AUD is the IDR 1 million worth?
(a) AUD 16,613,924,050.63
(b) AUD 10,368,750,000.00
(c) AUD 10,368.75
(d) AUD 96.44
(e) AUD 60.19

Question 602 foreign exchange rate, American and European terms
Chinese people usually quote the Chinese Yuan or Renminbi in RMB per 1 USD. For example, in October 2015 the Chinese Renminbi was 6.35 RMB per USD. Is this an American or European terms quote?

If the current AUD exchange rate is USD 0.9686 = AUD 1, what is the European terms quote of the AUD against the USD?
(a) 0.9686 USD per AUD
(b) 0.9686 AUD per USD
(c) 1.0324 USD per AUD
(d) 1.0324 AUD per USD
(e) 1.0324 AUD per EUR

Your friend overheard that you need some cash and asks if you would like to borrow some money.
She can lend you $5,000 now (t=0), and in return she wants you to pay her back $1,000 in two years (t=2) and every year after that for the next 5 years, so there will be 6 payments of $1,000 from t=2 to t=7 inclusive.
What is the net present value (NPV) of borrowing from your friend? Assume that banks loan funds at interest rates of 10% pa, given as an effective annual rate.
(a) -$1,000
(e) $1,400.611

Some countries' interest rates are so low that they're zero. If interest rates are 0% pa and are expected to stay at that level for the foreseeable future, what is the most that you would be prepared to pay a bank now if it offered to pay you $10 at the end of every year for the next 5 years?
In other words, what is the present value of five $10 payments at time 1, 2, 3, 4 and 5 if interest rates are 0% pa?
(a) $0
(b) $10
(c) $50
(d) Positive infinity
(e) Priceless

A stock is expected to pay its next dividend of $1 in one year. Future annual dividends are expected to grow by 2% pa. So the first dividend of $1 will be in one year, the year after that $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever. Its required total return is 10% pa. The total required return and growth rate of dividends are given as effective annual rates. Calculate the current stock price.
(a) $10
(b) $12.254902

A stock just paid a dividend of $1. Future annual dividends are expected to grow by 2% pa. The next dividend of $1.02 (=1*(1+0.02)^1) will be in one year, and the year after that the dividend will be $1.0404 (=1*(1+0.02)^2), and so on forever.

A stock is just about to pay a dividend of $1 tonight. Future annual dividends are expected to grow by 2% pa. The next dividend of $1 will be paid tonight, and the year after that the dividend will be $1.02 (=1*(1+0.02)^1), and a year later $1.0404 (=1*(1+0.02)^2) and so on forever.

The perpetuity with growth formula, also known as the dividend discount model (DDM) or Gordon growth model, is appropriate for valuing a company's shares. ##P_0## is the current share price, ##C_1## is next year's expected dividend, ##r## is the total required return and ##g## is the expected growth rate of the dividend.
The below graph shows the expected future price path of the company's shares. Which of the following statements about the graph is NOT correct?
(a) Between points A and B, the share price is expected to grow by ##r##.
(b) Between points B and C, the share price is expected to instantaneously fall by ##C_1##.
(c) Between points A and C, the share price is expected to grow by ##g##.
(d) Between points B and D, the share price is expected to grow by ##g##.
(e) Between points D and E, the share price is expected to instantaneously fall by ##C_1.(1+r)^1##.

###P_0=\frac{d_1}{r-g}###
A stock pays dividends annually. It just paid a dividend, but the next dividend (##d_1##) will be paid in one year. According to the DDM, what is the correct formula for the expected price of the stock in 2.5 years?
(a) ##P_{2.5}=P_0 (1+g)^{2.5} ##
(b) ##P_{2.5}=P_0 (1+r)^{2.5} ##
(c) ##P_{2.5}=P_0 (1+g)^2 (1+r)^{0.5} ##
(d) ##P_{2.5}=P_0 (1+r)^2 (1+g)^{0.5} ##
(e) ##P_{2.5}=P_0 (1+r)^3 (1+g)^{-0.5} ##

Question 289 DDM, expected and historical returns, ROE
In the dividend discount model:
###P_0 = \dfrac{C_1}{r-g}###
The return ##r## is supposed to be the:
(a) Expected future total return of the market price of equity.
(b) Expected future total return of the book price of equity.
(c) Actual historical total return on the market price of equity.
(d) Actual historical total return on the book price of equity.
(e) Actual historical return on equity (ROE) defined as (Net Income / Owners Equity).

Question 36 DDM, perpetuity with growth
A stock pays annual dividends which are expected to continue forever. It just paid a dividend of $10. The growth rate in the dividend is 2% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price?
(a) $127.5
(b) $125
(c) $102
(e) $100

A stock is expected to pay the following dividends:
Cash Flows of a Stock
Time (yrs):    0     1     2     3     4    ...
Dividend ($):  0.00  1.00  1.05  1.10  1.15 ...
After year 4, the annual dividend will grow in perpetuity at 5% pa, so: the dividend at t=5 will be $1.15(1+0.05), the dividend at t=6 will be $1.15(1+0.05)^2, and so on.
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What will be the price of the stock in three and a half years (t = 3.5)?
(e) $3.6341

### p_0 = \frac{d_1}{r - g} ###
Which expression is NOT equal to the expected dividend yield?
(a) ## r-g ##
(b) ## \dfrac{d_1}{p_0} ##
(c) ## \dfrac{d_5}{p_4} ##
(d) ## \dfrac{d_5(1+g)^2}{p_6} ##
(e) ## \dfrac{d_3}{p_0(1+r)^2} ##

A fairly valued share's current price is $4 and it has a total required return of 30%. Dividends are paid annually and next year's dividend is expected to be $1. After that, dividends are expected to grow by 5% pa in perpetuity. All rates are effective annual returns.
What is the expected dividend income paid at the end of the second year (t=2) and what is the expected capital gain from just after the first dividend (t=1) to just after the second dividend (t=2)? The answers are given in the same order, the dividend and then the capital gain.
(a) $1.3, $0.26
(b) $1.25, $0.25
(c) $1.1025, $0.2205
(d) $1.05, $0.21
(e) $1, $0.2

Question 580 price gains and returns over time, time calculation, effective rate
How many years will it take for an asset's price to quadruple (be four times as big, say from $1 to $4) if the price grows by 15% pa?
(a) 1.3685 years
(b) 3.4783 years
(c) 4.4808 years
(d) 9.919 years
(e) 11.5156 years

How many years will it take for an asset's price to double if the price grows by 10% pa?
(d) 11.5267 years

Question 341 Multiples valuation, PE ratio
Estimate Microsoft's (MSFT) share price using a price earnings (PE) multiples approach with the following assumptions and figures only:
Apple, Google and Microsoft are comparable companies,
Apple's (AAPL) share price is $526.24 and historical EPS is $40.32.
Google's (GOOG) share price is $1,215.65 and historical EPS is $36.23.
Microsoft's (MSFT) historical earnings per share (EPS) is $2.71.
Source: Google Finance 28 Feb 2014.
(e) $8.60

Question 348 PE ratio, Multiples valuation
Estimate the US bank JP Morgan's share price using a price earnings (PE) multiples approach with the following assumptions and figures only:
The major US banks JP Morgan Chase (JPM), Citi Group (C) and Wells Fargo (WFC) are comparable companies;
JP Morgan Chase's historical earnings per share (EPS) is $4.37;
Citi Group's share price is $50.05 and historical EPS is $4.26;
Wells Fargo's share price is $48.98 and historical EPS is $3.89.
Note: Figures sourced from Google Finance on 24 March 2014.

Question 488 income and capital returns, payout policy, payout ratio, DDM
Two companies BigDiv and ZeroDiv are exactly the same except for their dividend payouts.
BigDiv pays large dividends and ZeroDiv doesn't pay any dividends. Currently the two firms have the same earnings, assets, number of shares, share price, expected total return and risk. Assume a perfect world with no taxes, no transaction costs, no asymmetric information and that all assets including business projects are fairly priced and therefore zero-NPV.
All things remaining equal, which of the following statements is NOT correct?
(a) BigDiv is expected to have a lower capital return than ZeroDiv in the future.
(b) BigDiv is expected to have a lower total return than ZeroDiv in the future.
(c) ZeroDiv's assets are likely to grow faster than BigDiv's.
(d) ZeroDiv's share price will increase faster than BigDiv's.
(e) BigDiv currently has a higher payout ratio than ZeroDiv.

Question 333 DDM, time calculation
When using the dividend discount model, care must be taken to avoid using a nominal dividend growth rate that exceeds the country's nominal GDP growth rate. Otherwise the firm is forecast to take over the country since it grows faster than the average business forever.
Suppose a firm's nominal dividend grows at 10% pa forever, and nominal GDP growth is 5% pa forever. The firm's total dividends are currently $1 billion (t=0). The country's GDP is currently $1,000 billion (t=0).
In approximately how many years will the company's total dividends be as large as the country's GDP?
(a) 1,443 years
(b) 1,199 years
(c) 955 years
(d) 674 years
(e) 148 years

Below are some statements about loans and bonds. The first descriptive sentence is correct. But one of the second sentences about the loans' or bonds' prices is not correct. Which statement is NOT correct? Assume that interest rates are positive.
Note that coupons or interest payments are the periodic payments made throughout a bond or loan's life. The face or par value of a bond or loan is the amount paid at the end when the debt matures.
(a) A bullet loan has no interest payments but it does have a face value. Therefore it's a discount loan.
(b) A fully amortising loan has interest payments but does not have a face value. Therefore it's a premium loan.
(c) An interest only loan has interest payments and its price and face value are equal. Therefore it's a par loan.
(d) A zero coupon bond has no coupon payments but it does have a face value. Therefore it's a premium bond.
(e) A balloon loan has interest payments and its face value is more than its price. Therefore it's a discount loan.

Question 279 diversification
Do you think that the following statement is true or false? "Buying a single company stock usually provides a safer return than a stock mutual fund."

Question 125 option, speculation, market efficiency
Suppose that the US government recently announced that subsidies for fresh milk producers will be gradually phased out over the next year. Newspapers say that there are expectations of a 40% increase in the spot price of fresh milk over the next year.
Option prices on fresh milk trading on the Chicago Mercantile Exchange (CME) reflect expectations of this 40% increase in spot prices over the next year. Similarly to the rest of the market, you believe that prices will rise by 40% over the next year.
What option trades are likely to be profitable, or to be more specific, result in a positive Net Present Value (NPV)?
Only the spot price is expected to increase and there is no change in expected volatility or other variables that affect option prices. No taxes, transaction costs, information asymmetry, bid-ask spreads or other market frictions.
(a) Buy one year call options on fresh milk. (b) Buy one year put options on fresh milk. (c) Sell one year call options on fresh milk. (d) All of the above option trades are likely to have a positive NPV. (e) None of the above option trades are likely to have a positive NPV. (a) 5.8824%, 0.9804%, 4.902%. Question 522 income and capital returns, real and nominal returns and cash flows, inflation, real estate A residential investment property has an expected nominal total return of 6% pa and nominal capital return of 2.5% pa. Inflation is expected to be 2.5% pa. All of the above are effective nominal rates and investors believe that they will stay the same in perpetuity. What are the property's expected real total, capital and income returns? The answer choices below are given in the same order. (a) 3.4146%, 3.4146%, 0%. (c) 3.4146%, 0%, 3.4146%. (d) 0.9878%, 0.5%, 0.4878%. Question 480 NPV, real estate, DDM, no explanation What type of present value equation is best suited to value a residential house investment property that is expected to pay constant rental payments forever? Note that 'constant' has the same meaning as 'level' in this context. (a) The single cash flow equation. (b) The level annuity equation. (c) The annuity with growth equation. (d) The level perpetuity equation. (e) The perpetuity with growth equation. Question 242 technical analysis, market efficiency Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that: (a) Markets are weak-form efficient. (b) Markets are semi-strong-form efficient. (c) Past prices cannot be used to predict future prices. (d) Past returns can be used to predict future returns. (e) Stock prices reflect all publically available information. Question 100 market efficiency, technical analysis, joint hypothesis problem A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct? (I) Weak form market efficiency is broken. (II) Semi-strong form market efficiency is broken. (III) Strong form market efficiency is broken. (IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk. (a) Only III is true. (b) Only II and III are true. (c) Only I, II and III are true. (d) Only IV is true. (e) Either I, II and III are true, or IV is true, or they are all true. Question 339 bond pricing, inflation, market efficiency, income and capital returns Economic statistics released this morning were a surprise: they show a strong chance of consumer price inflation (CPI) reaching 5% pa over the next 2 years. This is much higher than the previous forecast of 3% pa. A vanilla fixed-coupon 2-year risk-free government bond was issued at par this morning, just before the economic news was released. What is the expected change in bond price after the economic news this morning, and in the next 2 years? Assume that: Inflation remains at 5% over the next 2 years. Investors demand a constant real bond yield. The bond price falls by the (after-tax) value of the coupon the night before the ex-coupon date, as in real life. (a) Today the price would have increased significantly. Over the next 2 years, the bond price is expected to increase, measured on each ex-coupon date. (b) Today the price would have increased significantly. 
Over the next 2 years, the bond price is expected to be unchanged, measured on each ex-coupon date.
(c) Today the price would have been unchanged.
(d) Today the price would have decreased significantly.
(e) Today the price would have decreased significantly.

Question 464 mispriced asset, NPV, DDM, market efficiency
A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return.
Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates.
The answer choices below are given in the same order (15% for 100 years, and 15% forever):
(a) $0, $0
(b) $1,977.19, $2,000
(c) $2,977.19, $3,000
(d) $499.96, $500
(e) $84,214.9, Infinite

If the current AUD exchange rate is USD 0.9686 = AUD 1, what is the American terms quote of the AUD against the USD?
(e) 1.0324 AUD per 2 USD

If the USD appreciates against the AUD, the American terms quote of the AUD will increase or decrease?
If the AUD appreciates against the USD, the European terms quote of the AUD will increase or decrease?
If the USD appreciates against the AUD, the European terms quote of the AUD will increase or decrease?

Question 319 foreign exchange rate, monetary policy, American and European terms
Investors expect the Reserve Bank of Australia (RBA) to keep the policy rate steady at their next meeting. Then unexpectedly, the RBA announce that they will increase the policy rate by 25 basis points due to fears that the economy is growing too fast and that inflation will be above their target rate of 2 to 3 per cent.
What do you expect to happen to Australia's exchange rate in the short term? The Australian dollar is likely to:
(a) Appreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will increase.
(b) Depreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will decrease.
(c) Appreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will decrease.
(d) Depreciate against the USD, so the 'European terms' quote of the AUD (AUD per USD) will increase.
(e) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.

Investors expect the Reserve Bank of Australia (RBA) to decrease the overnight cash rate at their next meeting. Then unexpectedly, the RBA announce that they will keep the policy rate unchanged.
(e) Be unaffected by the change in the policy rate, so the exchange rate will remain the same.

The market expects the Reserve Bank of Australia (RBA) to increase the policy rate by 25 basis points at their next meeting. Then unexpectedly, the RBA announce that they will increase the policy rate by 50 basis points due to high future GDP and inflation forecasts.
What do you expect to happen to Australia's exchange rate in the short term? The Australian dollar will:
(a) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will increase.
(b) Depreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.
(c) Appreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will decrease.
(d) Depreciate against the USD, so the 'American terms' quote of the AUD (USD per AUD) will increase.

The market expects the Reserve Bank of Australia (RBA) to decrease the policy rate by 25 basis points at their next meeting. Then unexpectedly, the RBA announce that they will decrease the policy rate by 50 basis points due to fears of a recession and deflation.
What do you expect to happen to Australia's exchange rate? The Australian dollar will:

As expected, the RBA increases the policy rate by 25 basis points.

Investors expect Australia's central bank, the RBA, to reduce the policy rate at their next meeting due to fears that the economy is slowing. Then unexpectedly, the policy rate is actually kept unchanged.
What do you expect to happen to Australia's exchange rate?
(a) The Australian dollar will appreciate against the USD, so the 'European terms' quote of the AUD exchange rate (AUD/USD) will increase.
(b) The Australian dollar will depreciate against the USD, so the 'European terms' quote of the AUD exchange rate (AUD/USD) will decrease.
(c) The Australian dollar will appreciate against the USD, so the 'European terms' quote of the AUD exchange rate (AUD/USD) will decrease.
(d) The Australian dollar will depreciate against the USD, so the 'European terms' quote of the AUD exchange rate (AUD/USD) will increase.
(e) The Australian dollar will appreciate against the USD, so the 'American terms' quote of the AUD exchange rate (USD/AUD) will decrease.

Vietnamese people usually quote the Vietnamese Dong in VND per 1 USD. For example, in October 2015 the Vietnamese Dong was 22,300 VND per USD. Is this an American or European terms quote?

Which of the following FX quotes (current in October 2015) is given in American terms?
(a) 1.55 USD per GBP
(b) 120.61 JPY per USD
(c) 6.32 RMB per USD
(d) 1 USD = 1.31 CAD
(e) 0.99 CHF = 1 USD

Question 513 stock split, reverse stock split, stock dividend, bonus issue, rights issue
(a) A 3 for 2 stock split means that for every 2 existing shares, all shareholders will receive 1 extra share.
(b) A 3 for 10 bonus issue means that for every 10 existing shares, all shareholders will receive 3 extra shares.
(c) A 20% stock dividend means that for every 10 existing shares, all shareholders will receive 2 extra shares.
(d) A 1 for 10 reverse stock split means that for every 10 existing shares, all shareholders will lose 9 shares, so they will only be left with 1 share.
(e) A 3 for 10 rights issue at a subscription price of $8 means that for every 10 existing shares, all shareholders can sell 3 of their shares back to the company at a price of $8 each, so shareholders receive money.

A company has:
140 million shares outstanding.
The market price of one share is currently $2.
The company's debentures are publicly traded and their market price is equal to 93% of the face value.
The debentures have a total face value of $50,000,000 and the current yield to maturity of corporate debentures is 12% per annum.
The risk-free rate is 8.50% and the market return is 13.7%.
Market analysts estimated that the company's stock has a beta of 0.90.
The corporate tax rate is 30%.
What is the company's after-tax weighted average cost of capital (WACC) in a classical tax system?
(a) 13.18%

Question 302 WACC, CAPM
Which of the following statements about the weighted average cost of capital (WACC) is NOT correct?
(a) WACC before tax ##= r_D.\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}## (b) WACC before tax ##= r_f + \beta_{VL}.(r_m - r_f)## (c) WACC after tax ##= r_D.(1-t_c).\dfrac{D}{V_L} + r_{EL}.\dfrac{E_L}{V_L}## (d) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f) - \dfrac{r_D.D.t_c}{V_L}## (e) WACC after tax ##= r_f + \beta_{VL}.(r_m - r_f)## Question 450 CAPM, risk, portfolio risk, no explanation The accounting identity states that the book value of a company's assets (A) equals its liabilities (L) plus owners equity (OE), so A = L + OE. The finance version states that the market value of a company's assets (V) equals the market value of its debt (D) plus equity (E), so V = D + E. Therefore a business's assets can be seen as a portfolio of the debt and equity that fund the assets. Let ##\sigma_\text{V total}^2## be the total variance of returns on assets, ##\sigma_\text{V syst}^2## be the systematic variance of returns on assets, and ##\sigma_\text{V idio}^2## be the idiosyncratic variance of returns on assets, and ##\rho_\text{D idio, E idio}## be the correlation between the idiosyncratic returns on debt and equity. Which of the following equations is NOT correct? (a) ##r_V = \dfrac{D}{V}.r_D + \dfrac{E}{V}.r_E## (b) ##\beta_V = \dfrac{D}{V}.\beta_D + \dfrac{E}{V}.\beta_E## (c) ##\sigma_\text{V syst}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D syst}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E syst}^2## (d) ##\sigma_\text{V idio}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D idio}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E idio}^2 + 2.\dfrac{D}{V}.\dfrac{E}{V}.\rho_\text{D idio, E idio}.\sigma_\text{D idio}.\sigma_\text{E idio}## (e) ##\sigma_\text{V total}^2 = \left(\dfrac{D}{V}\right)^2.\sigma_\text{D total}^2 + \left(\dfrac{E}{V}\right)^2.\sigma_\text{E total}^2## Question 617 systematic and idiosyncratic risk, risk, CAPM A stock's required total return will increase when its: (a) Systematic risk increases. (b) Idiosyncratic risk increases. (c) Total risk increases. (d) Systematic risk decreases. (e) Idiosyncratic risk decreases. Question 627 CAPM, SML, NPV, Jensens alpha Assets A, B, M and ##r_f## are shown on the graphs above. Asset M is the market portfolio and ##r_f## is the risk free yield on government bonds. Which of the below statements is NOT correct? (a) Asset A has a Jensen's alpha of 4.5% pa. (b) Asset A is under-priced. (c) The NPV of buying asset A is zero. (d) Asset B has a Jensen's alpha of zero. (e) Asset B is fairly priced. Question 628 CAPM, SML, risk, no explanation Assets A, B, M and ##r_f## are shown on the graphs above. Asset M is the market portfolio and ##r_f## is the risk free yield on government bonds. Assume that investors can borrow and lend at the risk free rate. Which of the below statements is NOT correct? (a) Asset A has the same systematic risk as asset B. (b) Asset A has more total variance than asset B. (c) Asset B has zero idiosyncratic risk. Asset B must be a portfolio of half the market portfolio and half government bonds. (d) If risk-averse investors were forced to invest all of their wealth in a single risky asset, so they could not diversify, every investor would logically choose asset A over the other three assets. (e) Assets M and B have the highest Sharpe ratios, which is defined as the gradient of the capital allocation line (CAL) from the government bonds through the asset on the graph of expected return versus total standard deviation. 
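Question 450 above treats a levered firm's assets as a two-asset portfolio of its debt and equity. As a quick numerical illustration of the standard two-asset portfolio formulas it relies on, here is a minimal Python sketch; all input figures are invented assumptions for demonstration only and are not taken from any of the questions.

```python
# Minimal sketch of the two-asset portfolio algebra behind Question 450:
# a firm's assets (V) seen as a portfolio of its debt (D) and equity (E).
# All numbers below are illustrative assumptions only.

D, E = 40.0, 60.0              # assumed market values of debt and equity
V = D + E                      # market value of assets, V = D + E
w_D, w_E = D / V, E / V        # portfolio weights of debt and equity

r_D, r_E = 0.05, 0.11          # assumed expected returns on debt and equity
beta_D, beta_E = 0.1, 1.3      # assumed CAPM betas of debt and equity
sigma_D, sigma_E = 0.04, 0.25  # assumed total standard deviations of returns
rho_DE = 0.3                   # assumed correlation between debt and equity returns

# Weighted-average return and beta of the assets (the forms in options (a) and (b)).
r_V = w_D * r_D + w_E * r_E
beta_V = w_D * beta_D + w_E * beta_E

# General two-asset portfolio variance, including the covariance term
# 2.w_D.w_E.rho.sigma_D.sigma_E; compare this with the variance equations in Q450.
cov_DE = rho_DE * sigma_D * sigma_E
var_V = w_D**2 * sigma_D**2 + w_E**2 * sigma_E**2 + 2 * w_D * w_E * cov_DE

print(f"r_V = {r_V:.4f}, beta_V = {beta_V:.4f}, var_V = {var_V:.6f}")
```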
Question 449 personal tax on dividends, classical tax system
A small private company has a single shareholder. This year the firm earned a $100 profit before tax. All of the firm's after tax profits will be paid out as dividends to the owner.
The corporate tax rate is 30% and the sole shareholder's personal marginal tax rate is 45%.
The United States' classical tax system applies because the company generates all of its income in the US and pays corporate tax to the Internal Revenue Service. The shareholder is also an American for tax purposes.
What will be the personal tax payable by the shareholder and the corporate tax payable by the company?
(a) Personal tax of $6.43 and corporate tax of $45.
(b) Personal tax of $15 and corporate tax of $30.
(c) Personal tax of $16.5 and corporate tax of $45.
(d) Personal tax of $31.5 and corporate tax of $30.
(e) Personal tax of $45 and corporate tax of $0.

Question 81 risk, correlation, diversification
Stock A and B's returns have a correlation of 0.3. Which statement is NOT correct?
(a) If stock A's return increases, stock B's return is also expected to increase.
(b) If stock A's return decreases, stock B's return is also expected to decrease.
(c) If stock A and B were combined in a portfolio, there would be no diversification at all since they are positively correlated.
(d) Stock A and B's returns have positive covariance.
(e) a and b.

Question 283 portfolio risk, correlation, needs refinement
Three important classes of investable risky assets are:
Corporate debt which has low total risk,
Real estate which has medium total risk,
Equity which has high total risk.
Assume that the correlation between total returns on:
Corporate debt and real estate is 0.1,
Corporate debt and equity is 0.1,
Real estate and equity is 0.5.
You are considering investing all of your wealth in one or more of these asset classes. Which portfolio will give the lowest total risk? You are restricted from shorting any of these assets. Disregard returns and the risk-return trade-off, pretend that you are only concerned with minimising risk.
(a) Most money in corporate debt, with some money in equity and real estate.
(b) Most money in equity, with some money in corporate debt and real estate.
(c) Most money in real estate, with some money in corporate debt and equity.
(d) Most money in corporate debt, with some money in real estate and none in equity.
(e) All money in corporate debt.

Question 284 covariance, correlation
The following table shows a sample of historical total returns of shares in two different companies A and B.
Stock Returns
Total effective annual returns
Year   ##r_A##   ##r_B##
2007   0.2       0.4
2008   0.04      -0.2
2009   -0.1      -0.3
2010   0.18      0.5
What is the historical sample covariance (##\hat{\sigma}_{A,B}##) and correlation (##\rho_{A,B}##) of stock A and B's total effective annual returns?
(a) 0.05696, 0.702247
(b) 0.05696, 0.238663
(c) 0.053333, 0.936329
(d) 0.040889, 0.930519
(e) 0.04, 0.930519

Question 564 covariance
What is the covariance of a variable X with a constant C? The cov(X, C) or ##\sigma_{X,C}## equals:
(a) var(X) or ##\sigma_X^2##
(b) sd(X) or ##\sigma_X##
(e) Mathematically undefined

Question 31 DDM, perpetuity with growth, effective rate conversion
What is the NPV of the following series of cash flows when the discount rate is 5% given as an effective annual rate?
The first payment of $10 is in 4 years, followed by payments every 6 months forever after that which shrink by 2% every 6 months.
That is, the growth rate every 6 months is actually negative 2%, given as an effective 6 month rate. So the payment at ## t=4.5 ## years will be ## 10(1-0.02)^1=9.80 ##, and so on. The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates. What is the current price of the stock? ### P_0 = \frac{d_1}{r-g} ### Assume that the assumptions of the DDM hold and that the time period is measured in years. Which of the following is equal to the expected dividend in 3 years, ## d_3 ##? (a) ## d_1(1+g)^3 ## (b) ## P_3(r-g) ## (c) ## d_2(1+g)^2 ## (d) ## P_0(1+g)^3(r-g) ## (e) ## P_0(1+g)^2(r-g) ## A stock pays semi-annual dividends. It just paid a dividend of $10. The growth rate in the dividend is 1% every 6 months, given as an effective 6 month rate. You estimate that the stock's required return is 21% pa, as an effective annual rate. Using the dividend discount model, what will be the share price? Question 54 NPV, DDM After year 4, the annual dividend will grow in perpetuity at -5% pa. Note that this is a negative growth rate, so the dividend will actually shrink. So, the dividend at t=5 will be ##$1(1-0.05) = $0.95##, the dividend at t=6 will be ##$1(1-0.05)^2 = $0.9025##, and so on. What is the current price of the stock? (a) $7.2968 (b) $7.5018 (c) $7.6667 (d) $7.7522 What will be the price of the stock in four and a half years (t = 4.5)? Question 150 DDM, effective rate A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock is 10% pa, given as an effective annual rate. What is the price of the share now? ###p_0=\frac{d_1}{r_\text{eff}-g_\text{eff}}### Which expression is NOT equal to the expected capital return? (a) ## g_\text{eff} ## (b) ## \dfrac{p_1}{p_0} -1 ## (c) ## \dfrac{d_5}{d_4} -1 ## (d) ## \dfrac{d_1}{p_0} - 1 ## (e) ## \dfrac{p_1-p_0}{p_0} ## A share just paid its semi-annual dividend of $10. The dividend is expected to grow at 2% every 6 months forever. This 2% growth rate is an effective 6 month rate. Therefore the next dividend will be $10.20 in six months. The required return of the stock 10% pa, given as an effective annual rate. Question 166 DDM, no explanation A stock pays annual dividends. It just paid a dividend of $3. The growth rate in the dividend is 4% pa. You estimate that the stock's required return is 10% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what will be the share price? (b) 31.16 (c) 31.2 (e) 52 The following is the Dividend Discount Model used to price stocks: ### p_0=\frac{d_1}{r-g} ### Which of the following statements about the Dividend Discount Model is NOT correct? (a) The dividend yield is equal to ##(r-g)##. (b) The dividend yield is equal to ##\left( \dfrac{d_1}{p_0} \right)##. (c) The total return of the stock is equal to ##(r)##. (d) The total return of the stock is equal to ##\left( \dfrac{d_1+p_0.g}{p_0} \right)##. (e) The total return of the stock is equal to ##(r+g)##. A stock pays annual dividends. It just paid a dividend of $5. The growth rate in the dividend is 1% pa. You estimate that the stock's required return is 8% pa. Both the discount rate and growth rate are given as effective annual rates. Question 184 NPV, DDM Dividend ($) 2 2 2 10 3 ... After year 4, the dividend will grow in perpetuity at 4% pa. 
The required return on the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.

You owe money. Are you a debtor or a creditor?
You are owed money. Are you a debtor or a creditor?
You own a debt asset. Are you a debtor or a creditor?
You buy a house funded using a home loan. Have you issued or bought debt?
You deposit cash into your bank account. Does the deposit account represent a debt or to you?
You deposit cash into your bank account. Have you lent or borrowed your money?

Question 616 idiom, debt terminology, bond pricing
"Buy low, sell high" is a phrase commonly heard in financial markets. It states that traders should try to buy assets at low prices and sell at high prices.
Traders in the fixed-coupon bond markets often quote promised bond yields rather than prices. Fixed-coupon bond traders should try to:
(a) Buy at low yields, sell at high yields.
(b) Buy at high yields, sell at low yields.
(c) Buy at high yields, sell at high yields.
(d) Buy at low yields, sell at low yields.
(e) There is no preferable yield to buy or sell fixed-coupon debt.

A share pays annual dividends. It just paid a dividend of $2. The growth rate in the dividend is 3% pa. You estimate that the stock's required return is 8% pa. Both the discount rate and growth rate are given as effective annual rates. Using the dividend discount model, what is the share price?
(c) 25.75
(e) 41.2

Question 198 NPV, DDM, no explanation
Dividend ($) 0 6 12 18 20 ...
After year 4, the dividend will grow in perpetuity at 5% pa. The required return of the stock is 10% pa. Both the growth rate and required return are given as effective annual rates.
(a) 302.10
(b) 315.76
(c) 329.42
(d) 344.45

Question 247 cross currency interest rate parity, no explanation
In the so called 'Swiss Loans Affair' of the 1980's, Australian banks offered loans denominated in Swiss Francs to Australian farmers at interest rates as low as 4% pa. This was far lower than interest rates on Australian Dollar loans which were above 10% due to very high inflation in Australia at the time.
In the late-1980's there was a large depreciation in the Australian Dollar. The Australian Dollar nearly halved in value against the Swiss Franc. Many Australian farmers went bankrupt since they couldn't afford the interest payments on the Swiss Franc loans because the Australian Dollar value of those payments nearly doubled. The farmers accused the banks of promoting Swiss Franc loans without making them aware of the risks.
What fundamental principle of finance did the Australian farmers (and the bankers) fail to understand?
(a) Home-made leverage.
(b) Diversification.
(c) Cross-currency interest rate parity.
(d) Time value of money.
(e) The undiversifiable nature of systematic risk.

Question 605 cross currency interest rate parity, foreign exchange rate
If the Reserve Bank of Australia is expected to keep its interbank overnight cash rate at 2% pa while the US Federal Reserve is expected to keep its federal funds rate at 0% pa over the next year, is the AUD expected to appreciate, depreciate, or remain unchanged against the USD over the next year?

Question 626 cross currency interest rate parity, foreign exchange rate, forward foreign exchange rate
The Australian cash rate is expected to be 2% pa over the next one year, while the Japanese cash rate is expected to be 0% pa, both given as nominal effective annual rates. The current exchange rate is 100 JPY per AUD.
What is the implied 1 year forward foreign exchange rate?
(a) 98.04 JPY per AUD.
(b) 100 JPY per AUD.
(c) 102 JPY per AUD.
(d) 1.02 AUD per JPY.
(e) 0.9804 AUD per JPY.
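Question 626 above can be sanity-checked with cross-currency (covered) interest rate parity: the forward price of one AUD in JPY is the spot rate scaled by the ratio of gross interest rates in the price currency (JPY) and the base currency (AUD). A minimal Python sketch, using only the figures stated in the question:

```python
# Sketch of cross-currency interest rate parity applied to Question 626.
# Forward (JPY per AUD) = Spot (JPY per AUD) * (1 + r_JPY) / (1 + r_AUD)

spot_jpy_per_aud = 100.0   # current exchange rate from the question
r_aud = 0.02               # Australian cash rate, nominal effective annual
r_jpy = 0.00               # Japanese cash rate, nominal effective annual

forward_jpy_per_aud = spot_jpy_per_aud * (1 + r_jpy) / (1 + r_aud)
print(round(forward_jpy_per_aud, 2))   # about 98.04 JPY per AUD
```

The higher-interest-rate currency (here the AUD) trades at a forward discount, so the forward JPY-per-AUD rate sits below the spot rate.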
Question 336 forward foreign exchange rate, no explanation The Australian cash rate is expected to be 6% pa while the US federal funds rate is expected to be 4% pa over the next 3 years, both given as effective annual rates. The current exchange rate is 0.80 AUD per USD. (a) 1.180572 AUD per USD (b) 0.848966 AUD per USD (c) 0.847047 AUD per USD (d) 0.755566 AUD per USD (e) 0.752954 AUD per USD Question 536 idiom, bond pricing, capital structure, leverage The expression 'my word is my bond' is often used in everyday language to make a serious promise. Why do you think this expression uses the metaphor of a bond rather than a share? (a) Because bonds are higher risk investments than shares. (b) Because issuers must pay bond holders or face bankruptcy, while payments on shares are discretionary. (c) Because bonds are paid last in the event of bankruptcy, while shares are paid first. (d) Because bonds always pay higher returns than shares. (e) Because bonds are generally more liquid than shares. Question 22 NPV, perpetuity with growth, effective rate, effective rate conversion What is the NPV of the following series of cash flows when the discount rate is 10% given as an effective annual rate? The first payment of $90 is in 3 years, followed by payments every 6 months in perpetuity after that which shrink by 3% every 6 months. That is, the growth rate every 6 months is actually negative 3%, given as an effective 6 month rate. So the payment at ## t=3.5 ## years will be ## 90(1-0.03)^1=87.3 ##, and so on. A three year corporate bond yields 12% pa with a coupon rate of 10% pa, paid semi-annually. Find the effective six month yield, effective annual yield and the effective daily yield. Assume that each month has 30 days and that there are 360 days in a year. ##r_\text{eff semi-annual}##, ##r_\text{eff yearly}##, ##r_\text{eff daily}##. (a) 0.06, 0.1236, 0.000324. (b) 0.058301, 0.12, 0.000315. (c) 0.05, 0.1025, 0.000271. (d) 0.05, 0.1, 0.000278. (e) 0.048809, 0.1, 0.000265. A 2 year government bond yields 5% pa with a coupon rate of 6% pa, paid semi-annually. Find the effective six month rate, effective annual rate and the effective daily rate. Assume that each month has 30 days and that there are 360 days in a year. ##r_\text{eff semi-annual}##, ##r_\text{eff yrly}##, ##r_\text{eff daily}##. (a) 0.024695, 0.05, 0.000136. (b) 0.025, 0.050625, 0.000137. (c) 0.029563, 0.06, 0.000162. (d) 0.03, 0.06, 0.000167. (e) 0.03, 0.0609, 0.000164. A 2 year corporate bond yields 3% pa with a coupon rate of 5% pa, paid semi-annually. Find the effective monthly rate, effective six month rate, and effective annual rate. ##r_\text{eff monthly}##, ##r_\text{eff 6 month}##, ##r_\text{eff annual}##. (a) 0.002466, 0.014889, 0.03. (b) 0.002485, 0.015, 0.030225. (c) 0.004074, 0.024695, 0.05. (d) 0.004124, 0.025, 0.050625. (e) 0.004167, 0.025, 0.05. Question 456 inflation, effective rate In the 'Austin Powers' series of movies, the character Dr. Evil threatens to destroy the world unless the United Nations pays him a ransom (video 1, video 2). Dr. Evil makes the threat on two separate occasions: In 1969 he demands a ransom of $1 million (=10^6), and again; In 1997 he demands a ransom of $100 billion (=10^11). If Dr. Evil's demands are equivalent in real terms, in other words $1 million will buy the same basket of goods in 1969 as $100 billion would in 1997, what was the implied inflation rate over the 28 years from 1969 to 1997? 
The answer choices below are given as effective annual rates: (a) 0.5086% pa (b) 1.5086% pa (c) 5.0859% pa (d) 50.8591% pa (e) 150.8591% pa Question 540 APR, effective rate, no explanation Which one of the below statements about effective rates and annualised percentage rates (APR's) is NOT correct? (a) An effective semi-annual rate could be called: "a six month rate compounding per six months". (b) An APR compounding semi-annually could be called: "a yearly rate compounding per six months". (c) An effective monthly rate could be called: "a monthly rate compounding every month". (d) An APR compounding monthly could be called: "a monthly rate compounding per year". (e) An effective daily rate could be called: "a daily rate compounding every day". Question 155 inflation, real and nominal returns and cash flows, Loan, effective rate conversion You are a banker about to grant a 2 year loan to a customer. The loan's principal and interest will be repaid in a single payment at maturity, sometimes called a zero-coupon loan, discount loan or bullet loan. You require a real return of 6% pa over the two years, given as an effective annual rate. Inflation is expected to be 2% this year and 4% next year, both given as effective annual rates. You judge that the customer can afford to pay back $1,000,000 in 2 years, given as a nominal cash flow. How much should you lend to her right now? (a) $838,907.00 (b) $838,986.09 (c) $841,754.97 (d) $889,996.44 (e) $944,108.22 Question 309 stock pricing, ex dividend date A company announces that it will pay a dividend, as the market expected. The company's shares trade on the stock exchange which is open from 10am in the morning to 4pm in the afternoon each weekday. When would the share price be expected to fall by the amount of the dividend? Ignore taxes. The share price is expected to fall during the: (a) Day of the payment date, between the payment date's morning opening price and afternoon closing price. (b) Night before the payment date, between the previous day's afternoon closing price and the payment date's morning opening price. (c) Day of the ex-dividend date, between the ex-dividend date's morning opening price and afternoon closing price. (d) Night before the ex-dividend date, between the last with-dividend date's afternoon closing price and the ex-dividend date's morning opening price. (e) Day of the last with-dividend date, between the with-dividend date's morning opening price and afternoon closing price. Question 329 DDM, expected and historical returns ### P_0= \frac{d_1}{r-g} ### The pronumeral ##g## is supposed to be the: (a) Expected future growth rate of the dividend. (b) Actual historical growth rate of the dividend. (c) Expected future growth rate of the total required return on equity r. (d) Actual historical growth rate of the market share price. (e) Actual historical growth rate of the return on equity (ROE) defined as (Net Income / Owners Equity). Question 573 bond pricing, zero coupon bond, term structure of interest rates, expectations hypothesis, liquidity premium theory, forward interest rate, yield curve In the below term structure of interest rates equation, all rates are effective annual yields and the numbers in subscript represent the years that the yields are measured over: ###(1+r_{0-3})^3 = (1+r_{0-1})(1+r_{1-2})(1+r_{2-3}) ### (a) If the expectations hypothesis is true, then the forward rates are the expected future spot rates. 
(b) If the liquidity premium theory is true, then the forward rates are lower than the expected future spot rates due to the liquidity premium. (c) The yield curve is normal when: ##r_{0-1} < r_{1-2} < r_{2-3}##. (d) The yield curve is flat when: ##r_{0-1} = r_{1-2} = r_{2-3}##. (e) The yield curve is inverse when: ##r_{0-1} > r_{1-2} > r_{2-3}##. Question 524 risk, expected and historical returns, bankruptcy or insolvency, capital structure, corporate financial decision theory, limited liability (a) Stocks are higher risk investments than debt. (b) Stocks have higher expected returns than debt. (c) Firms' past realised stock returns are always higher than their past realised debt returns. (d) In the event of bankruptcy, stock holders are paid after debt holders are fully paid. (e) Stock holders have a residual claim on the firm's assets. Question 115 capital structure, leverage, WACC A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of debt to raise money for new projects of similar risk to the company's existing projects. Assume a classical tax system. Which statement is correct? (a) The debt-to-assets (D/V) ratio will decrease. (b) The debt-to-equity ratio (D/E) will decrease. (c) The firm's cost of equity will decrease. (d) The company's after-tax WACC will decrease. (e) The company's before-tax WACC will decrease. Question 376 leverage, capital structure, no explanation Interest expense on debt is tax-deductible, but dividend payments on equity are not. or ? (c) 37.5% (e) 6.25% The perpetuity with growth equation is: Which of the following is NOT equal to the expected capital return as an effective annual rate? (a) ##g## (b) ##r+\dfrac{C_1}{P_0}## (c) ##\dfrac{C_2}{C_1} -1## (d) ##\dfrac{C_3-C_2}{C_2} ## (e) ##\left( \dfrac{P_5}{P_3} \right)^{1/2}-1## Total cash flows can be broken into income and capital cash flows. What is the name given to the cash flow generated from selling shares at a higher price than they were bought? Question 538 bond pricing, income and capital returns, no explanation Risk-free government bonds that have coupon rates greater than their yields: (a) Are discount bonds. (b) Have positive expected capital returns. (c) Have higher expected income returns than total returns. (d) Have prices less than their face values. (e) Have significant credit risk. Which of the following investable assets are NOT suitable for valuation using PE multiples techniques? (a) Common equity in small private companies. (b) Common equity in large private companies. (c) Common equity in listed public companies. (d) Fixed term bonds of listed public companies. (e) Real estate. Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive earnings, disregard firms with negative earnings and therefore negative PE ratios. (a) Illiquid small private companies. (b) High growth technology firms. (c) Firms expected to have temporarily low earnings over the next year, but with higher earnings later. (d) Firms with a very low level of systematic risk. (e) Firms whose assets include a very large proportion of cash. Estimate the Chinese bank ICBC's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that the renminbi (RMB) is the Chinese currency, also known as the yuan (CNY). 
The 4 major Chinese banks ICBC, China Construction Bank (CCB), Bank of China (BOC) and Agricultural Bank of China (ABC) are comparable companies; ICBC 's historical earnings per share (EPS) is RMB 0.74; CCB's backward-looking PE ratio is 4.59; BOC 's backward-looking PE ratio is 4.78; ABC's backward-looking PE ratio is also 4.78; Note: Figures sourced from Google Finance on 25 March 2014. Share prices are from the Shanghai stock exchange. (a) RMB 6.4595 (b) RMB 6.3739 (c) RMB 6.3311 (d) RMB 3.4903 (e) RMB 3.3966 Which firms tend to have low forward-looking price-earnings (PE) ratios? Only consider firms with positive PE ratios. (a) Highly liquid publically listed firms. (b) Firms in a declining industry with very low or negative earnings growth. (d) Firms whose returns have a very low level of systematic risk. Private equity firms are known to buy medium sized private companies operating in the same industry, merge them together into a larger company, and then sell it off in a public float (initial public offering, IPO). If medium-sized private companies trade at PE ratios of 5 and larger listed companies trade at PE ratios of 15, what return can be achieved from this strategy? The medium-sized companies can be bought, merged and sold in an IPO instantaneously. There are no costs of finding, valuing, merging and restructuring the medium sized companies. Also, there is no competition to buy the medium-sized companies from other private equity firms. The large merged firm's earnings are the sum of the medium firms' earnings. The only reason for the difference in medium and large firm's PE ratios is due to the illiquidity of the medium firms' shares. Return is defined as: ##r_{0→1} = (p_1-p_0+c_1)/p_0## , where time zero is just before the merger and time one is just after. (a) 300%. (b) 200% Question 537 PE ratio, Multiples valuation, no explanation Estimate the French bank Societe Generale's share price using a backward-looking price earnings (PE) multiples approach with the following assumptions and figures only. Note that EUR is the euro, the European monetary union's currency. The 4 major European banks Credit Agricole (ACA), Deutsche Bank AG (DBK), UniCredit (UCG) and Banco Santander (SAN) are comparable companies to Societe Generale (GLE); Societe Generale's (GLE's) historical earnings per share (EPS) is EUR 2.92; ACA's backward-looking PE ratio is 16.29 and historical EPS is EUR 0.84; DBK's backward-looking PE ratio is 25.01 and historical EPS is EUR 1.26; SAN's backward-looking PE ratio is 14.71 and historical EPS is EUR 0.47; UCG's backward-looking PE ratio is 15.78 and historical EPS is EUR 0.40; (a) EUR 209.6268 (b) EUR 52.4067 (c) EUR 8.6724 (d) EUR 3.9327 (e) EUR 2.1681 Question 547 PE ratio, Multiples valuation, DDM, income and capital returns, no explanation A firm pays out all of its earnings as dividends. Because of this, the firm has no real growth in earnings, dividends or stock price since there is no re-investment back into the firm to buy new assets and make higher earnings. The dividend discount model is suitable to value this company. The firm's revenues and costs are expected to increase by inflation in the foreseeable future. The firm has no debt. It operates in the services industry and has few physical assets so there is negligible depreciation expense and negligible net working capital required. Which of the following statements about this firm's PE ratio is NOT correct? The PE ratio should: (a) Equal the inverse of the firm's nominal income return. 
(b) Equal the inverse of the firm's real total return. (c) Equal the inverse of the firm's real capital return. (d) Equal the expected payback period of an investor who intends to buy the firm's equity at the current price. (e) Be lower than that of faster growing firms that pay lower dividends and re-invest in themselves. Note: The inverse of x is 1/x. Question 20 NPV, APR, Annuity Your friend wants to borrow $1,000 and offers to pay you back $100 in 6 months, with more $100 payments at the end of every month for another 11 months. So there will be twelve $100 payments in total. She says that 12 payments of $100 equals $1,200 so she's being generous. If interest rates are 12% pa, given as an APR compounding monthly, what is the Net Present Value (NPV) of your friend's deal? (a) -648.51 (e) 200 Question 44 NPV The required return of a project is 10%, given as an effective annual rate. Assume that the cash flows shown in the table are paid all at once at the given point in time. What is the Net Present Value (NPV) of the project? (a) -100 Question 58 NPV, inflation, real and nominal returns and cash flows, Annuity A project to build a toll bridge will take two years to complete, costing three payments of $100 million at the start of each year for the next three years, that is at t=0, 1 and 2. After completion, the toll bridge will yield a constant $50 million at the end of each year for the next 10 years. So the first payment will be at t=3 and the last at t=12. After the last payment at t=12, the bridge will be given to the government. The required return of the project is 21% pa given as an effective annual nominal rate. All cash flows are real and the expected inflation rate is 10% pa given as an effective annual rate. Ignore taxes. The Net Present Value is: (a) -$112,496,484 (b) -$32,260,693 (c) -$19,645,987 (d) $5,222,533 (e) $200,000,000 In Australia, domestic university students are allowed to buy concession tickets for the bus, train and ferry which sell at a discount of 50% to full-price tickets. The Australian Government do not allow international university students to buy concession tickets, they have to pay the full price. Some international students see this as unfair and they are willing to pay for fake university identification cards which have the concession sticker. What is the most that an international student would be willing to pay for a fake identification card? Assume that international students: consider buying their fake card on the morning of the first day of university from their neighbour, just before they leave to take the train into university. buy their weekly train tickets on the morning of the first day of each week. ride the train to university and back home again every day seven days per week until summer holidays 40 weeks from now. The concession card only lasts for those 40 weeks. Assume that there are 52 weeks in the year for the purpose of interest rate conversion. a single full-priced one-way train ride costs $5. have a discount rate of 11% pa, given as an effective annual rate. Approach this question from a purely financial view point, ignoring the illegality, embarrassment and the morality of committing fraud. Question 140 IRR, NPV, profitability index A project has an internal rate of return (IRR) which is greater than its required return. Select the most correct statement. (a) The NPV will be negative and the profitability index will be less than 1. (b) The NPV will be positive and the profitability index will be more than 1. 
(c) The payback period will be positive and the Accounting Rate of Return will be more than the target return. (d) The NPV will be positive and the profitability index will be negative. (e) The NPV will be negative and the profitability index will be positive. A text book publisher is thinking of asking some teachers to write a new textbook at a cost of $100,000, payable now. The book would be written, printed and ready to sell to students in 2 years. It will be ready just before semester begins. A cash flow of $100 would be made from each book sold, after all costs such as printing and delivery. There are 600 students per semester. Assume that every student buys a new text book. Remember that there are 2 semesters per year and students buy text books at the beginning of the semester. Assume that text book publishers will sell the books at the same price forever and that the number of students is constant. If the discount rate is 8% pa, given as an effective annual rate, what is the NPV of the project? (a) $543,004 (b) $594,444 (c) $1,211,234 (e) $1,489,423 Question 145 NPV, APR, annuity due A student just won the lottery. She won $1 million in cash after tax. She is trying to calculate how much she can spend per month for the rest of her life. She assumes that she will live for another 60 years. She wants to withdraw equal amounts at the beginning of every month, starting right now. All of the cash is currently sitting in a bank account which pays interest at a rate of 6% pa, given as an APR compounding per month. On her last withdrawal, she intends to have nothing left in her bank account. How much can she withdraw at the beginning of each month? A project's net present value (NPV) is negative. Select the most correct statement. (a) The project should be accepted. (b) The project's IRR is less than its required return. (c) The project's IRR is more than its required return. (e) The project is under-priced. Question 192 NPV, APR Harvey Norman the large retailer often runs sales advertising 2 years interest free when you purchase its products. This offer can be seen as a free personal loan from Harvey Norman to its customers. Assume that banks charge an interest rate on personal loans of 12% pa given as an APR compounding per month. This is the interest rate that Harvey Norman deserves on the 2 year loan it extends to its customers. Therefore Harvey Norman must implicitly include the cost of this loan in the advertised sale price of its goods. If you were a customer buying from Harvey Norman, and you were paying immediately, not in 2 years, what is the minimum percentage discount to the advertised sale price that you would insist on? (Hint: if it makes it easier, assume that you're buying a product with an advertised price of $100). What will be the price of the stock in 7 years (t = 7), just after the dividend at that time has been paid? If all of the dividends since time period zero were deposited into a bank account yielding 8% pa as an effective annual rate, how much money will be in the bank account in 2.5 years (in other words, at t=2.5)? (a) 14.85 (d) 19.20 (e) 21.82 Question 218 NPV, IRR, profitability index, average accounting return (a) If a project has a positive NPV, its Profitability Index (PI) will be more than 1. (b) If a project has a negative NPV and the discount rate is zero, then the payback period will be infinite so the project will never pay back its costs. (c) If a project's NPV is zero, then the PI must be 1. 
(d) If a project has a negative NPV, its IRR will be negative. (e) If a project has a positive NPV, its Average Accounting Return (AAR) is not necessarily positive, it could be positive, negative or zero. Question 228 DDM, NPV, risk, market efficiency A very low-risk stock just paid its semi-annual dividend of $0.14, as it has for the last 5 years. You conservatively estimate that from now on the dividend will fall at a rate of 1% every 6 months. If the stock currently sells for $3 per share, what must be its required total return as an effective annual rate? If risk free government bonds are trading at a yield of 4% pa, given as an effective annual rate, would you consider buying or selling the stock? The stock's required total return is: (a) 9.55%, so buy the stock since its required return is too high for its low risk. (b) 7.37%, so buy the stock since its required return is too high for its low risk. (c) 7.37%, so sell the stock since its required return is too high for its low risk. (d) 3.62%, so buy the stock since its required return is too low for its low risk. (e) 3.62%, so sell the stock since its required return is too low for its low risk. Question 295 inflation, real and nominal returns and cash flows, NPV When valuing assets using discounted cash flow (net present value) methods, it is important to consider inflation. To properly deal with inflation: (I) Discount nominal cash flows by nominal discount rates. (II) Discount nominal cash flows by real discount rates. (III) Discount real cash flows by nominal discount rates. (IV) Discount real cash flows by real discount rates. Which of the above statements is or are correct? (a) I only. (b) III only. (c) IV only. (d) I and IV only. (e) II and III only. Question 70 payout policy Due to floods overseas, there is a cut in the supply of the mineral iron ore and its price increases dramatically. An Australian iron ore mining company therefore expects a large but temporary increase in its profit and cash flows. The mining company does not have any positive NPV projects to begin, so what should it do? Select the most correct answer. (a) Pay out the excess cash by increasing the regular dividend, and cutting it later. (b) Pay out a special dividend. (c) Conduct an on or off-market share repurchase. (d) Conduct a share dividend (also called a 'bonus issue'). (e) Either b or c. Question 101 payout policy, no explanation An established mining firm announces that it expects large losses over the following year due to flooding which has temporarily stalled production at its mines. Which statement(s) are correct? (i) If the firm adheres to a full dividend payout policy it will not pay any dividends over the following year. (ii) If the firm wants to signal that the loss is temporary it will maintain the same level of dividends. It can do this so long as it has enough retained profits. (iii) By law, the firm will be unable to pay a dividend over the following year because it cannot pay a dividend when it makes a loss. (d) Only (i) and (ii) is true. (e) All statements (i), (ii) and (iii) are true. Question 447 payout policy, corporate financial decision theory Payout policy is most closely related to which part of a business? Question 402 PE ratio, no explanation Which of the following companies is most suitable for valuation using PE multiples techniques? (a) A company with positive earnings that does not have any comparable firms. (b) A company with positive earnings that has comparable firms with positive earnings. 
(c) A company with positive earnings that has comparable firms with negative earnings. (d) A company with negative earnings that has comparable firms with negative earnings. (e) A company with negative earnings that has comparable firms with positive earnings. Question 474 PE ratio What was CBA's backwards-looking price-earnings ratio? (a)17.06 A firm has 2m shares and a market capitalisation of equity of $30m. The firm just announced earnings of $5m and paid an annual dividend of $0.75 per share. What is the firm's (backward looking) price/earnings (PE) ratio? Question 52 IRR, pay back period A three year project's NPV is negative. The cash flows of the project include a negative cash flow at the very start and positive cash flows over its short life. The required return of the project is 10% pa. Select the most correct statement. (a) The payback period is negative. (b) The project's IRR is negative. (d) The project's IRR is more than its required return. (e) The project's IRR is equal to its required return. Question 33 bond pricing, premium par and discount bonds Bonds A and B are issued by the same company. They have the same face value, maturity, seniority and coupon payment frequency. The only difference is that bond A has a 5% coupon rate, while bond B has a 10% coupon rate. The yield curve is flat, which means that yields are expected to stay the same. Which bond would have the higher current price? (a) Bond A will have a higher price. (b) Bond B will have a higher price. (c) Bonds A and B will have the same price. (d) The higher priced bond can't be determined without knowing their yields. (e) The higher priced bond can't be determined without knowing the inflation rate. Bonds X and Y are issued by the same US company. Both bonds yield 6% pa, and they have the same face value ($100), maturity, seniority, and payment frequency. The only difference is that bond X pays coupons of 8% pa and bond Y pays coupons of 12% pa. Which of the following statements is true? Question 82 portfolio return deviation Correlation Dollars A 0.1 0.4 0.5 60 B 0.2 0.6 140 What is the expected return of the above portfolio? Question 558 portfolio weights, portfolio return, short selling (a) 200%, -100% (b) 200%, 100% (c) -100%, 200% (d) 100%, 200% (e) -100%, 100% Question 294 short selling, portfolio weights Which of the following statements about short-selling is NOT true? (a) Short sellers benefit from price falls. (b) To short sell, you must borrow the asset from person A and sell it to person B, then later on buy an identical asset from person C and return it to person A. (c) Short selling only works for assets that are 'fungible' which means that there are many that are identical and substitutable, such as shares and bonds and unlike real estate. (d) An investor who short-sells an asset has a negative weight in that asset. (e) An investor who short-sells an asset is said to be 'long' that asset. Question 292 standard deviation, risk Find the sample standard deviation of returns using the data in the table: Year Return pa 2010 -0.2 The returns above and standard deviations below are given in decimal form. Question 307 risk, variance Let the variance of returns for a share per month be ##\sigma_\text{monthly}^2##. What is the formula for the variance of the share's returns per year ##(\sigma_\text{yearly}^2)##? 
Assume that returns are independently and identically distributed (iid) so they have zero auto correlation, meaning that if the return was higher than average today, it does not indicate that the return tomorrow will be higher or lower than average. (a) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2## (b) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12## (c) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times 12^2## (d) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times \sqrt{12}## (e) ##\sigma_\text{yearly}^2 = \sigma_\text{monthly}^2 \times {12}^{1/3}## Question 471 risk, accounting ratio High risk firms in danger of bankruptcy tend to have: (a) Low debt to assets ratios. (b) High quick ratios. (c) Low interest coverage ratios. (d) Positive amounts of net working capital. (e) High net profit margins. Question 495 risk, accounting ratio, no explanation (b) Low quick ratios. (c) High interest coverage ratios. A share just paid its semi-annual dividend of $5. The dividend is expected to grow at 1% every 6 months forever. This 1% growth rate is an effective 6 month rate. Therefore the next dividend will be $5.05 in six months. The required return of the stock 8% pa, given as an effective annual rate. A company's shares just paid their annual dividend of $2 each. The stock price is now $40 (just after the dividend payment). The annual dividend is expected to grow by 3% every year forever. The assumptions of the dividend discount model are valid for this company. What do you expect the effective annual dividend yield to be in 3 years (dividend yield from t=3 to t=4)? (c) 0.044 ### p_0= \frac{c_1}{r-g} ### Which expression is equal to the expected dividend return? (a) ##(c_1/p_0 ) -1## (b) ##(p_1/p_0) -1## (c) ##(c_5/c_4) -1## (d) ##c_3/p_2## (e) ##(p_1-p_0)/p_0## Question 591 short selling, future, option After doing extensive fundamental analysis of a company, you believe that their shares are overpriced and will soon fall significantly. The market believes that there will be no such fall. Which of the following strategies is NOT a good idea, assuming that your prediction is true? (a) Sell any of the firm's shares that you already own. (b) Short-sell the firm's shares. (c) Sell futures written on the shares. (d) Buy put options written on the shares. (e) Buy call options written on the shares. Question 27 bill pricing, simple interest rate A 180-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 8% pa and there are 365 days in the year. What is its price now? (a) $15,400,000.00 (b) $8,371,559.63 Question 132 bill pricing, simple interest rate A 90-day Bank Accepted Bill (BAB) has a face value of $1,000,000. The simple interest rate is 10% pa and there are 365 days in the year. What is its price now? Question 147 bill pricing, simple interest rate, no explanation A 30-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 8% pa and there are 365 days in the year. What is its price now? (d) $3,400,000.00 (e) $11,550,632.91 A 90-day Bank Accepted Bill has a face value of $1,000,000. The interest rate is 6% pa and there are 365 days in the year. What is its price? (a) $913,907.4174 (b) $925,925.9259 (c) $974,611.7471 (d) $987,020.0108 (e) $987,428.5591 On 27/09/13, three month Swiss government bills traded at a yield of -0.2%, given as a simple annual yield. That is, interest rates were negative. 
If the face value of one of these 90 day bills is CHF1,000,000 (CHF represents Swiss Francs, the Swiss currency), what is the price of one of these bills? (a) CHF 999,507.09 (b) CHF 1,000,000.00 (c) CHF 1,000,493.39 (d) CHF 1,002,004.01 (e) CHF 1,002,498.39 Question 621 market efficiency, technical analysis, no explanation Technical traders: (a) Believe that asset prices follow a random walk. (b) Believe that markets are weak form efficient (c) Are pessimists, they believe that they cannot beat the market (d) Base their investment decisions on past publicly available news (e) Believe that the theory of weak form market efficiency is broken. Question 62 implicit interest rate in wholesale credit A wholesale building supplies business offers credit to its customers. Customers are given 60 days to pay for their goods, but if they pay within 7 days they will get a 2% discount. What is the effective interest rate implicit in the discount being offered? Assume 365 days in a year and that all customers pay on either the 7th day or the 60th day. All rates given below are effective annual rates. A share was bought for $10 (at t=0) and paid its annual dividend of $0.50 one year later (at t=1). Just after the dividend was paid, the share price was $11 (at t=1). What was the total return, capital return and income return? Calculate your answers as effective annual rates. The choices are given in the same order: ##r_\text{total}##, ##r_\text{capital}##, ##r_\text{dividend}##. (a) -0.15, -0.1, -0.05. (b) 0.05, 0.15, 0.1. (c) 0.1, 0.05, 0.05. (d) 0.15, 0.05, 0.1. (e) 0.15, 0.1, 0.05. Question 446 working capital decision, corporate financial decision theory The working capital decision primarily affects which part of a business? Question 544 bond pricing, capital raising, no explanation Question 448 franking credit, personal tax on dividends, imputation tax system The Australian imputation tax system applies because the company generates all of its income in Australia and pays corporate tax to the Australian Tax Office. Therefore all of the company's dividends are fully franked. The sole shareholder is an Australian for tax purposes and can therefore use the franking credits to offset his personal income tax liability. Question 469 franking credit, personal tax on dividends, imputation tax system, no explanation A firm pays a fully franked cash dividend of $70 to one of its Australian shareholders who has a personal marginal tax rate of 45%. The corporate tax rate is 30%. What will be the shareholder's personal tax payable due to the dividend payment? A three year bond has a face value of $100, a yield of 10% and a fixed coupon rate of 5%, paid semi-annually. What is its price? Question 84 WACC, capital structure, capital budgeting A firm is considering a new project of similar risk to the current risk of the firm. This project will expand its existing business. The cash flows of the project have been calculated assuming that there is no interest expense. In other words, the cash flows assume that the project is all-equity financed. In fact the firm has a target debt-to-equity ratio of 1, so the project will be financed with 50% debt and 50% equity. To find the levered value of the firm's assets, what discount rate should be applied to the project's unlevered cash flows? Assume a classical tax system. 
(a) The required return on equity, ##r_E## (b) The required return on debt, ##r_D## (c) The after-tax required return on debt, ##r_D.(1-t_c)## (d) The after-tax WACC, ##\text{WACC after tax}=\frac{D}{V_L}.r_D.(1-t_c )+\frac{E_L}{V_L}.r_E## (e) The pre-tax WACC, ##\text{WACC before tax}=\frac{D}{V_L}.r_D+\frac{E_L}{V_L}.r_E## Question 74 WACC, capital structure, CAPM A firm's weighted average cost of capital before tax (##r_\text{WACC before tax}##) would increase due to: (a) The firm issuing more debt and using the proceeds to repurchase stock. (b) The firm issuing more equity and using the proceeds to pay off debt holders. (c) The firm's industry becoming more systematically risky, for example if it was a mining company whose performance became more sensitive to countries' GDP growth, so the correlation of the firm's returns with the market was higher. (d) The firm's industry becoming less systematically risky, for example if it was a child care centre and the government announced higher subsidies for parents using child care centres, so the correlation of the firm's returns with the market was lower. Question 78 WACC, capital structure A company issues a large amount of bonds to raise money for new projects of similar risk to the company's existing projects. The net present value (NPV) of the new projects is positive but small. Assume a classical tax system. Which statement is NOT correct? (a) The debt-to-assets (D/V) ratio will increase. (b) The debt-to-equity ratio (D/E) will increase. (c) Firm value is likely to have increased due to the higher amount of interest tax shields, assuming that there will not be any costs of financial distress. (d) The company's after-tax WACC is likely to have decreased. (e) The company's before-tax WACC is likely to have decreased. A firm has a debt-to-assets ratio of 50%. The firm then issues a large amount of equity to raise money for new projects of similar systematic risk to the company's existing projects. Assume a classical tax system. Which statement is correct? (d) The company's after-tax WACC is likely to stay the same. (e) The company's before-tax WACC is likely to stay the same. Question 409 NPV, capital structure, capital budgeting A pharmaceutical firm has just discovered a valuable new drug. So far the news has been kept a secret. The net present value of making and commercialising the drug is $200 million, but $600 million of bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and bond raising to shareholders simultaneously in the same announcement. The bonds will be issued shortly after. Once the announcement is made and the bonds are issued, what is the expected increase in the value of the firm's assets (ΔV), market capitalisation of debt (ΔD) and market cap of equity (ΔE)? The triangle symbol is the Greek letter capital delta which means change or increase in mathematics. Ignore the benefit of interest tax shields from having more debt. Remember: ##ΔV = ΔD+ΔE## (a) ##ΔV=800m, ΔD = 600m, ΔE=200m## (b) ##ΔV=200m, ΔD = 600m, ΔE= 0## (c) ##ΔV=200m, ΔD =0m, \quad ΔE=200m## (d) ##ΔV=400m, ΔD = 600m, ΔE=-200m## (e) ##ΔV=800m, ΔD = 800m, ΔE= 0## Question 411 WACC, capital structure A firm plans to issue equity and use the cash raised to pay off its debt. No assets will be bought or sold. Ignore the costs of financial distress. Which of the following statements is NOT correct, all things remaining equal? (a) The firm's WACC before tax will rise. 
(b) The firm's WACC after tax will rise. (c) The firm's required return on equity will be lower. (d) The firm's net income will be higher. (e) The firm's free cash flow will be lower. A mining firm has just discovered a new mine. So far the news has been kept a secret. The net present value of digging the mine and selling the minerals is $250 million, but $500 million of new equity and $300 million of new bonds will need to be issued to fund the project and buy the necessary plant and equipment. The firm will release the news of the discovery and equity and bond raising to shareholders simultaneously in the same announcement. The shares and bonds will be issued shortly after. Once the announcement is made and the new shares and bonds are issued, what is the expected increase in the value of the firm's assets ##(\Delta V)##, market capitalisation of debt ##(\Delta D)## and market cap of equity ##(\Delta E)##? Assume that markets are semi-strong form efficient. The triangle symbol ##\Delta## is the Greek letter capital delta which means change or increase in mathematics. Remember: ##\Delta V = \Delta D+ \Delta E## (a) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 250## (b) ##\Delta V = 250m##, ##ΔD = 300m##, ##ΔE= 750## (c) ##\Delta V = 400m##, ##ΔD = 300m##, ##ΔE= -250## (d) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 750## (e) ##\Delta V = 1,050m##, ##ΔD = 300m##, ##ΔE= 250## Question 618 capital structure, no explanation Who owns a company's shares? The: (a) Company. (b) Debt holders. (c) Equity holders. (d) Chief Executive Officer (CEO). (e) Board of directors. Question 567 stock split, capital structure A company conducts a 4 for 3 stock split. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. (a) -33.33%, 50% (b) -25%, 33.33% (c) -25%, 25% (d) -20%, 25% (e) 33.33%, -25% Question 552 bond pricing, income and capital returns An investor bought a 10 year 2.5% pa fixed coupon government bond priced at par. The face value is $100. Coupons are paid semi-annually and the next one is in 6 months. Six months later, just after the coupon at that time was paid, yields suddenly and unexpectedly fell to 2% pa. Note that all yields above are given as APR's compounding semi-annually. What was the bond investors' historical total return over that first 6 month period, given as an effective semi-annual rate? An investor bought a 20 year 5% pa fixed coupon government bond priced at par. The face value is $100. Coupons are paid semi-annually and the next one is in 6 months. Six months later, just after the coupon at that time was paid, yields suddenly and unexpectedly rose to 5.5% pa. Note that all yields above are given as APR's compounding semi-annually. (a) -0.084351 (b) -0.059351 (c) -0.046851 (d) -0.034351 (e) -0.008907 Question 568 rights issue, capital raising, capital structure A company conducts a 1 for 5 rights issue at a subscription price of $7 when the pre-announcement stock price was $10. What is the percentage change in the stock price and the number of shares outstanding? The answers are given in the same order. Ignore all taxes, transaction costs and signalling effects. (b) -5%, 20% (c) 0%, 20% (d) 7.14%, 20% (e) 11.67%, 0% Question 274 derivative terminology, option The 'option price' in an option contract is paid at the start when the option contract is agreed to. or ? 
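Questions 567 and 568 above reduce to share-count arithmetic: a split rescales the price by the inverse of the split ratio, while a rights issue moves the price to the theoretical ex-rights price. A minimal Python sketch, assuming no taxes, transaction costs or signalling effects as the questions state; the helper names are illustrative only.

```python
def split_effect(new_for, old_for):
    """Fractional price change and share-count change for a new_for-for-old_for split."""
    return old_for / new_for - 1, new_for / old_for - 1

def rights_issue_effect(new_shares, old_shares, sub_price, cum_price):
    """Fractional price change (to the theoretical ex-rights price) and
    share-count change for a new_shares-for-old_shares rights issue."""
    terp = (old_shares * cum_price + new_shares * sub_price) / (old_shares + new_shares)
    return terp / cum_price - 1, new_shares / old_shares

print(split_effect(4, 3))                # (-0.25, 0.333...): price -25%, shares +33.33%
print(rights_issue_effect(1, 5, 7, 10))  # (-0.05, 0.2): price -5%, shares +20%
```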
Question 245 foreign exchange rate, monetary policy, foreign exchange rate direct quote, no explanation Investors expect Australia's central bank, the RBA, to leave the policy rate unchanged at their next meeting. Then unexpectedly, the policy rate is reduced due to fears that Australia's GDP growth is slowing. What do you expect to happen to Australia's exchange rate? Direct and indirect quotes are given from the perspective of an Australian. The Australian dollar will: (a) Appreciate against the USD, so the 'direct quote' of the AUD (AUD per USD) will increase. (b) Depreciate against the USD, so the 'direct quote' of the AUD (AUD per USD) will decrease. (c) Appreciate against the USD, so the 'direct quote' of the AUD (AUD per USD) will decrease. (d) Depreciate against the USD, so the 'direct quote' of the AUD (AUD per USD) will increase. (e) Depreciate against the USD, so the 'indirect quote' of the AUD (USD per AUD) will increase. Is it possible for all countries' exchange rates to appreciate by 5% in the same year? or ? An American wishes to convert USD 1 million to Australian dollars (AUD). The exchange rate is 0.8 USD per AUD. How much is the USD 1 million worth in AUD? (a) AUD 0.2 million. (b) AUD 0.8 million (c) AUD 1 million (d) AUD 1.25 million. (e) AUD 1.8 million. In the 1997 Asian financial crisis many countries' exchange rates depreciated rapidly against the US dollar (USD). The Thai, Indonesian, Malaysian, Korean and Filipino currencies were severely affected. The below graph shows these Asian countries' currencies in USD per one unit of their currency, indexed to 100 in June 1997. Of the statements below, which is NOT correct? The Asian countries': (a) Exports denominated in domestic currency became cheaper to foreigners. (b) Imports denominated in domestic currency became more expensive. (c) Citizens would have to pay more in their own currency when holidaying in the US. (d) USD interest payments on USD fixed-interest bonds became more expensive in their own currency. (e) Domestic currency interest payments on fixed-interest bonds denominated in domestic currency became cheaper in their own currency. Question 275 derivative terminology, future The 'futures price' in a futures contract is paid at the start when the futures contract is agreed to. or ? Question 45 profitability index What is the Profitability Index (PI) of the project? (d) 0.82845 Question 174 profitability index What is the Profitability Index (PI) of the project? Assume that the cash flows shown in the table are paid all at once at the given point in time. The required return is 10% pa, given as an effective annual rate. Question 212 rights issue In mid 2009 the listed mining company Rio Tinto announced a 21-for-40 renounceable rights issue. Below is the chronology of events: 04/06/2009. Share price opens at $69.00 and closes at $66.90. 05/06/2009. 21-for-40 rights issue announced at a subscription price of $28.29. 16/06/2009. Last day that shares trade cum-rights. Share price opens at $76.40 and closes at $75.50. 17/06/2009. Shares trade ex-rights. Rights trading commences. All things remaining equal, what would you expect Rio Tinto's stock price to open at on the first day that it trades ex-rights (17/6/2009)? Ignore the time value of money since time is negligibly short. Also ignore taxes. In late 2003 the listed bank ANZ announced a 2-for-11 rights issue to fund the takeover of New Zealand bank NBNZ. Below is the chronology of events: 23/10/2003. Share price closes at $18.30. 24/10/2003. 
2-for-11 rights issue announced at a subscription price of $13. The proceeds of the rights issue will be used to acquire New Zealand bank NBNZ. Trading halt announced in morning before market opens. 28/10/2003. Trading halt lifted. Last (and only) day that shares trade cum-rights. Share price opens at $18.00 and closes at $18.14. 29/10/2003. Shares trade ex-rights. All things remaining equal, what would you expect ANZ's stock price to open at on the first day that it trades ex-rights (29/10/2003)? Ignore the time value of money since time is negligibly short. Also ignore taxes. Why is Capital Expenditure (CapEx) subtracted in the Cash Flow From Assets (CFFA) formula? ###CFFA=NI+Depr-CapEx - \Delta NWC+IntExp### (a) CapEx is added in the Net Income (NI) equation so it needs subtracting in the CFFA equation. (b) CapEx is a financing cash flow that needs to be ignored. Therefore it should be subtracted. (c) CapEx is not a cash flow, it's a non-cash expense made up by accountants that needs to be subtracted. (d) CapEx is subtracted to account for the net cash spent on capital assets. (e) CapEx is subtracted because it's too hard to predict, therefore we exclude it. Find Trademark Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. Trademark Corp Income Statement for year ending 30th June 2013 COGS 25 Operating expense 5 Depreciation 20 Interest expense 20 Income before tax 30 Tax at 30% 9 Net income 21 as at 30th June 2013 2012 $m $m Current assets 120 80 Cost 150 140 Accumul. depr. 60 40 Carrying amount 90 100 Total assets 210 180 Current liabilities 75 65 Non-current liabilities 75 55 Owners' equity Retained earnings 10 10 Contributed equity 50 50 Total L and OE 210 180 Note: all figures are given in millions of dollars ($m). (a) -19 A firm has forecast its Cash Flow From Assets (CFFA) for this year and management is worried that it is too low. Which one of the following actions will lead to a higher CFFA for this year (t=0 to 1)? Only consider cash flows this year. Do not consider cash flows after one year, or the change in the NPV of the firm. Consider each action in isolation. (a) Buy less land, buildings and trucks than what was planned. Assume that this has no impact on revenue. (b) Pay less cash to creditors by refinancing the firm's existing coupon bonds with zero-coupon bonds that require no interest payments. Assume that there are no transaction costs and that both types of bonds have the same yield to maturity. (c) Change the depreciation method used for tax purposes from diminishing value to straight line, so less depreciation occurs this year and more occurs in later years. Assume that the government's tax department allow this. (d) Buying more inventory than was planned, so there is an increase in net working capital. Assume that there is no increase in sales. (e) Raising new equity through a rights issue. Assume that all of the money raised is spent on new capital assets such as land and trucks, but they will be fitted out and delivered in one year so no new cash will be earned from them. Find Sidebar Corporation's Cash Flow From Assets (CFFA), also known as Free Cash Flow to the Firm (FCFF), over the year ending 30th June 2013. 
Sidebar Corp COGS 100 Rent expense 22 Taxable Income 210 Taxes at 30% 63 Net income 147 Inventory 70 50 Trade debtors 11 16 Rent paid in advance 4 3 PPE 700 680 Trade creditors 11 19 Bond liabilities 400 390 Contributed equity 220 220 Retained profits 154 120 The cash flow from assets was: (a) $138m (b) $142m (c) $143m (d) $172m (e) $176m Over the next year, the management of an unlevered company plans to: Achieve firm free cash flow (FFCF or CFFA) of $1m. Pay dividends of $1.8m Complete a $1.3m share buy-back. Spend $0.8m on new buildings without buying or selling any other fixed assets. This capital expenditure is included in the CFFA figure quoted above. All amounts are received and paid at the end of the year so you can ignore the time value of money. The firm has sufficient retained profits to pay the dividend and complete the buy back. The firm plans to run a very tight ship, with no excess cash above operating requirements currently or over the next year. How much new equity financing will the company need? In other words, what is the value of new shares that will need to be issued? (a) $2.1m (b) $1.3m (c) $0.8m (d) $0.3m (e) No new shares need to be issued, the firm will be sufficiently financed. Make $5m in sales, $1.9m in net income and $2m in equity free cash flow (EFCF). Pay dividends of $1m. The firm has sufficient retained profits to legally pay the dividend and complete the buy back. (a) $2m (b) $1m Question 512 capital budgeting, CFFA Find the cash flow from assets (CFFA) of the following project. Project life 2 years Initial investment in equipment $6m Depreciation of equipment per year for tax purposes $1m Unit sales per year 4m Sale price per unit $8 Variable cost per unit $3 Fixed costs per year, paid at the end of each year $1.5m Tax rate 30% Note 1: The equipment will have a book value of $4m at the end of the project for tax purposes. However, the equipment is expected to fetch $0.9 million when it is sold at t=2. Note 2: Due to the project, the firm will have to purchase $0.8m of inventory initially, which it will sell at t=1. The firm will buy another $0.8m at t=1 and sell it all again at t=2 with zero inventory left. The project will have no effect on the firm's current liabilities. Find the project's CFFA at time zero, one and two. Answers are given in millions of dollars ($m). (a) -6, 12.25, 16.68 (b) -6.8, 13.25, 14.05 (c) -6.8, 13.25, 15.88 (d) -6.8, 13.25, 18.51 (e) -6.8, 13.25, 17.71 Unit sales per year 10m Fixed costs per year, paid at the end of each year $2m Note 1: Due to the project, the firm will have to purchase $40m of inventory initially (at t=0). Half of this inventory will be sold at t=1 and the other half at t=2. Note 2: The equipment will have a book value of $2m at the end of the project for tax purposes. However, the equipment is expected to fetch $1m when it is sold. Assume that the full capital loss is tax-deductible and taxed at the full corporate tax rate. Note 3: The project will be fully funded by equity which investors will expect to pay dividends totaling $10m at the end of each year. (a) -48, 54.5, 55.8 (b) -48, 54.5, 74.5 (c) -48, 74.5, 35.8 (d) -48, 74.5, 75.8 (e) -8, 74.5, 75.8 Copyright © 2014 Keith Woodward
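Several of the questions above lean on the CFFA identity quoted earlier, CFFA = NI + Depr − CapEx − ΔNWC + IntExp. A minimal Python sketch of that identity follows; the sample inputs are one possible reading of the flattened Trademark Corp statement (CapEx taken as the $10m rise in gross PPE, ΔNWC as the $30m rise in current assets net of current liabilities), so treat the printed figure as an illustration of the formula rather than a keyed answer.

```python
def cffa(net_income, depreciation, capex, delta_nwc, interest_expense):
    """Cash Flow From Assets (free cash flow to the firm), all in $m."""
    return net_income + depreciation - capex - delta_nwc + interest_expense

# One reading of the Trademark Corp figures: NI 21, Depr 20, CapEx 150 - 140 = 10,
# change in NWC (120 - 75) - (80 - 65) = 30, interest expense 20.
print(cffa(net_income=21, depreciation=20, capex=10, delta_nwc=30, interest_expense=20))  # 21 ($m)
```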
\begin{document} \title{Vortex-Meissner phase transition induced by two-tone-drive-engineered artificial gauge potential in the fermionic ladder constructed by superconducting qubit circuits} \author{Yan-Jun Zhao} \affiliation{Faculty of Information Technology, College of Microelectronics, Beijing University of Technology, Beijing, 100124, People's Republic of China} \author{Xun-Wei Xu} \affiliation{Department of Applied Physics, East China Jiaotong University, Nanchang 330013, China} \author{Hui Wang} \affiliation{Center for Emergent Matter Science, RIKEN, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan} \author{Yu-xi Liu} \affiliation{Institute of Microelectronics, Tsinghua University, Beijing 100084, China} \affiliation{Frontier Science Center for Quantum Information, Beijing, China} \author{Wu-Ming Liu} \affiliation{Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China} \keywords{superconducting qubit circuit, quantum simulation, artificial gauge potential, vortex phase, Meissner phase, chiral current} \pacs{} \begin{abstract} We propose to periodically modulate the onsite energy via two-tone drives, which can furthermore be used to engineer an artificial gauge potential. As an example, we show that the fermionic ladder model penetrated by an effective magnetic flux can be constructed by superconducting flux qubits using such a two-tone-drive-engineered artificial gauge potential. In this superconducting system, the single-particle ground state can range from the vortex phase to the Meissner phase due to the competition between the interleg coupling strength and the effective magnetic flux. We also present a method to experimentally measure the chiral currents via single-particle Rabi oscillations between adjacent qubits. In contrast to previous methods of generating artificial gauge potentials, our proposal does not need the aid of auxiliary couplers and in principle remains valid as long as the qubit circuit maintains enough anharmonicity. The fermionic ladder model with effective magnetic flux can also be interpreted as a one-dimensional spin-orbit-coupled model, which thus lays a foundation for the realization of the quantum spin Hall effect. \end{abstract} \revised{\today} \startpage{1} \endpage{ } \maketitle \section{Introduction} \begingroup\begin{figure*} \caption{(color online). Ladder model constructed by X-shape flux qubits with a gradiometer structure, which can cancel out common flux noise threading the two symmetric loops. The Josephson junctions, flux qubit loop, readout resonator, and classical flux bias are colored in gray, red, green, and blue, respectively. The Josephson energies of the big and small Josephson junctions are $E_J$ and $\alpha E_J$, respectively, where $\frac{1}{2}<\alpha <1$ should be satisfied to guarantee the nonlinearity of the flux qubit and, meanwhile, suppress the intercell tunnelling. The flux qubits are coupled to their nearest neighbours with X-shape mutual inductances that are mostly determined by the nearest edge on the loop. The microwave coplanar waveguide resonator (CPW) is shorted at the terminal near the flux qubit loop such that only inductive coupling is present.
The flux qubit loop is designed like a cross such that different coupling terms can be well separated to minimize the crosstalk.} \label{fig:schematic diagram} \end{figure*} \endgroup Gauge potential is a core ingredient of the electromagnetic interaction in electrodynamics~\cite{Jackson1999Book}, the standard model in particle physics~\cite{Gaillard1999RMP}, and even the topological phenomena in condensed matter physics~\cite{Hasan2010RMP}. However, the behaviours of microscopic particles in gauge potentials are rather difficult to study in natural systems, owing to their well-known low controllability. For example, strong magnetic fields are experimentally challenging to generate for electrons in solid systems. Therefore, engineering effective gauge potentials in artificial quantum platforms is a wise option in order to access higher tunability. Superconducting qubit circuits~\cite{Makhlin2001RMP,You2005Phys.Today,Wendin2007LTP,Clarke2008Nature,Schoelkopf2008Nature,Buluta2011RPP,You2001Nature,Xiang2013RMP,Gu2017PR}, which inherit the advantages of microwave circuits in flexibility of design, convenience of scaling up, and maturity of control technology, have recently attracted great attention for simulating the motion of microscopic particles placed in gauge potentials. In superconducting qubit circuits, photons play the role of carriers, which, in contrast to electrons, cause no backaction onto the artificial gauge potential owing to their charge neutrality. The engineering of artificial gauge potentials (mainly the effective magnetic flux) in superconducting qubit circuits greatly depends on the nonlinearity of Josephson junctions in auxiliary couplers~\cite{Koch2010PRA,Nunnenkamp2011NJP,Marcos2013PRL,Yang2016PRA,Roushan2017NP}. In this manner, chiral Fock-state transfer~\cite{Koch2010PRA}, the multiparticle spectrum modulated by effective magnetic flux in the Jaynes-Cummings model~\cite{Nunnenkamp2011NJP}, condensed-matter and high-energy physics phenomena in the quantum-link model~\cite{Marcos2013PRL}, and the flat band in the Lieb lattice~\cite{Yang2016PRA} have been studied theoretically. In experiment, effective-magnetic-flux-induced chiral currents of a single photon and of a single-photon vacancy have been observed in one-photon and two-photon states, respectively~\cite{Roushan2017NP}. By contrast, in cold atom systems, artificial gauge potentials are usually engineered using periodically modulated onsite energies~\cite{Aidelsburger2011PRL,Aidelsburger2013PRL,Miyake2013PRL,Atala2014NP}. This has motivated the similar proposal of engineering artificial gauge potentials via periodically modulating the Josephson energy of the transmon qubit circuit~\cite{Alaeian2019PRA}, which, however, remains valid only in the small-anharmonicity regime. To remedy this drawback, we propose to modulate the onsite energy of the coupled qubit chain with two-tone drives. This method can in principle be applied to a superconducting qubit circuit with any nonzero anharmonicity, which can thus simulate fermions rather than bosons as in Ref.~\cite{Alaeian2019PRA}. Besides, nonlinearity is known to be a key factor for demonstrating quantum phenomena~\cite{Wendin2007LTP}. Thus, periodically modulating the energy of a qubit circuit with better anharmonicity is significant for exploring nonequilibrium quantum physics.
Meanwhile, thanks to the recent experimental progress in the integration scale of superconducting qubit circuits~\cite{Barends2014Nature,Zheng2017PRL,Song2017PRL,Gong2019PRL}, quantum simulation research based on superconducting qubit circuits is now advancing from single or several qubits~\cite{Leak2007Science,Berger2012PRB,Berger2013PRA,Schroer2014PRL,Zhang2017PRA,Roushan2014Nature,Flurin2017PRX,Ramasesh2017PRL,Tan2017npj,Tan2018PRL,Zhong2016PRL,Guo2019PRapp} towards multiple qubits~\cite{Roushan2017NP,Nunnenkamp2011NJP,Mei2015PRA,Yang2016PRA,Tangpanitanon2016PRL,Gu2017arXiv,Roushan2017Science,Xu2018PRL,Yan2019Science,Ye2019PRL,Arute2019Nature}. However, most experiments are currently confined to the chain structure (one dimension)~\cite{Roushan2017NP,Roushan2017Science,Xu2018PRL,Yan2019Science}, which thus lacks one more dimension for realizing the two-dimensional topological phenomena induced by gauge potentials, e.g., the quantum Hall effect or the quantum spin Hall effect~\cite{Shen2012Book}. Recently, the quasi-two-dimensional ladder model~\cite{Ye2019PRL} and the true two-dimensional Sycamore processor~\cite{Arute2019Nature} have both been achieved with state-of-the-art technology in superconducting quantum circuits, but neither of them involves research on artificial gauge potentials. Therefore, the effect of artificial gauge potentials needs to be further explored beyond the one-dimensional system. In particular, the ladder model is almost the simplest two-dimensional model that implies rich physics, which, for example, can be mapped to the one-dimensional spin-orbit-coupled chain if penetrated by an effective magnetic flux~\cite{Atala2014NP,Livi2016PRL}. To make an initial attempt towards two-dimensional quantum simulation with artificial gauge potentials, we will design a concrete superconducting qubit circuit that realizes the ladder model penetrated by an effective magnetic flux. We will focus on the vortex and Meissner phase transitions induced by the competition of related parameters, such as the coupling strengths and the effective magnetic flux. Since the lattice number cannot be made as large as the atom number in cold atom systems, we will mainly concentrate on the practical case with a finite lattice number. Besides, the method to measure the two phases will also be discussed for the future experimental implementation. In Sec.~\ref{sec:Model}, we introduce the theoretical model that employs two-tone drives to engineer an artificial gauge potential in the ladder model constructed by superconducting qubit circuits. In Sec.~\ref{sec:phase transition}, we analyze the vortex-Meissner phase transition in different parameter regimes. In Sec.~\ref{Sec:ExpDetails}, we discuss the experimental feasibility of generating the single-particle ground state and measuring the vortex-Meissner phase transition. In Sec.~\ref{sec:DC}, we summarize the results and give some further discussion. \section{Two-tone drive induced artificial gauge potential\label{sec:Model}} \subsection{Theoretical model} As an example, we consider the ladder model constructed by X-shape gradiometer flux qubit circuits (see the schematic diagram in Fig.~\ref{fig:schematic diagram}). The individual flux qubit is manipulated by a classical direct-current flux bias and an alternating-current drive (colored in blue), and the states of the qubits are dispersively read out through a coplanar waveguide resonator (colored in green)~\cite{Blais2004PRA,Lin2014NC,Yan2016NC,Wu2018npj}.
The flux qubits are coupled to their nearest neighbours with mutual inductances that are mostly determined by the nearest edge on the loop. The flux qubit loop is designed like a cross~\cite{Barends2013PRL} such that different coupling terms can be well separated to minimize the crosstalk. The first (second) row of the qubits is called the left (right) leg of the ladder. The qubit parameters are assumed to be homogeneous along each leg. Then, the bare Hamiltonian without driving fields can be generally given by \begin{align} \hat{H}_{\text{b}} & =\sum_{d=\mathrm{L}}^{\mathrm{R}}\sum_{l}\frac{\hbar }{2}\omega_{d}\hat{\sigma}_{z}^{\left( d,l\right) }\nonumber\\ & -\sum_{d=\text{L,R}}\sum_{l}\hbar g_{0}\hat{\sigma}_{-}^{\left( d,l\right) }\hat{\sigma}_{+}^{\left( d,l+1\right) }+\text{H.c.},\nonumber\\ & -\sum_{l}\hbar K_{0}\hat{\sigma}_{-}^{\left( \text{L},l\right) } \hat{\sigma}_{+}^{\left( \text{R},l\right) }+\text{H.c..} \label{eq:Hb} \end{align} Here, according to the homogeneity assumption, all qubits along the $d$ leg have the identical frequency $\omega_{d}$, where $d=\mathrm{L}$ or $d=\mathrm{R}$ stands for left or right. The bare intraleg coupling strength on the left (L) or right (R) leg is given by $g_{d}=M_{d}I_{pd}^{2}/\hbar$ with $d=\mathrm{L,R}$, $M_{d}$ being the mutual inductance between adjacent qubits (e.g., $\sim10 \operatorname{pH} $), $I_{pd}$ the persistent current (e.g., $\sim0.1 \operatorname{\mu A} $), and $\hbar$ the reduced Planck constant. The persistent current and the qubit frequency can be tuned via designing the area ratio $\alpha$ between the small and large junctions~\cite{Zhu2010APL}. Therefore, we can give the qubits on different legs distinct qubit frequencies. This also leads to $I_{p\text{L}}\neq I_{p\text{R}}$; nevertheless, via a careful design of $M_{d}$, we can still make $g_{\text{L}}=g_{\text{R}}=g_{0}$ [e.g., $2\pi\times(1\sim300) \operatorname{MHz} $]. Thus, in Eq.~(\ref{eq:Hb}), the intraleg coupling strengths on both legs are $g_{0}$. Besides, $K_{0}$ denotes the interleg coupling strength, which is determined by the interleg mutual inductance $M$ and also the persistent currents of the flux qubit circuits on both legs, i.e., $K_{0}=M I_{p\mathrm{L}} I_{p\mathrm{R}}/\hbar$. To engineer the effective magnetic flux from the bare Hamiltonian $\hat {H}_{\text{b}}$, we first show that the qubit frequency can be periodically modulated with the assistance of classical driving fields, as discussed below. \subsection{Periodical modulation of the qubit frequency\label{sec:PM}} We now demonstrate the periodical modulation of the qubit frequency through two-tone drives. In our treatment, the flux qubit circuit is modelled as an ideal two-level system because of the high anharmonicity~\cite{Orlandao1999PRB,Liu2005PRL,Robertson2006PRB} it possesses. In this manner, the individual flux qubit at the $d$ leg and $l$th rung with two-tone drives can be characterized by the Hamiltonian \begin{equation} \hat{H}_{d,l} \! = \! \frac{\hbar}{2}\omega_{d}\hat{\sigma}_{z}^{\left( d,l\right) }+\frac{\hbar }{2}\sum_{j=1}^{2}\left[ \hat{\sigma}_{+}^{\left( d,l\right) }\Omega _{j}^{\left( d,l\right) }e^{-i\omega_{j}^{\left( d\right) }t} +\mathrm{H.c.}\right] , \label{eq:Hq} \end{equation} in the qubit basis, where the $j$th driving field ($j=1,2$) possesses the complex driving strength $\Omega_{j}^{\left( d,l\right) }$ at the frequency $\omega_{j}^{\left( d\right) }$. 
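As a rough consistency check of the coupling strengths quoted above, the following minimal Python sketch (purely illustrative and not part of the proposed experiment; the $\sim10 \operatorname{pH} $ mutual inductance and $\sim0.1 \operatorname{\mu A} $ persistent current are only the order-of-magnitude examples mentioned earlier) converts these circuit parameters into an intraleg coupling strength via $g_{d}=M_{d}I_{pd}^{2}/\hbar$.
\begin{verbatim}
import numpy as np

# Order-of-magnitude example values (assumptions taken from the text above)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
M_d  = 10e-12            # mutual inductance between adjacent qubits, ~10 pH
I_pd = 0.1e-6            # persistent current, ~0.1 uA

# Intraleg coupling g_d = M_d * I_pd^2 / hbar (angular frequency, rad/s)
g_rad = M_d * I_pd**2 / hbar
print(f"g_d/2pi = {g_rad/(2*np.pi)/1e6:.0f} MHz")
# -> roughly 150 MHz, within the 2pi x (1~300) MHz range quoted above
\end{verbatim}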
However, the transmon qubit~\cite{Koch2007PRA,Sank2014Thesis} has a weaker anharmonicity than the flux qubit, and thus its detailed model should include the higher energy levels, e.g., the second excited state (see Appendix~\ref{appendix:PMQF}). In Eq.~(\ref{eq:Hq}), the driving field is determined by the incident current $I_{j}^{\left( d,l\right) } \! \left( t\right) $ through the relation $\operatorname{Re}\{\frac{\hbar} {2}\Omega_{j}^{\left( d,l\right) }e^{-i\omega_{j}^{\left( d\right) } t}\}=-M_{d}I_{pd}I_{j}^{\left( d,l\right) } \! \left( t\right) $. The detunings of the driving frequencies $\omega _{j}^{\left( d\right) }$ from the qubit frequencies $\omega_{d}$ are kept identical for both ladder legs, i.e., $\delta_{j}\equiv\omega_{j}^{\left( d\right) }-\omega_{d}$ regardless of whether $d$ takes $\mathrm{L}$ or $\mathrm{R}$. In fact, this can be achieved via tuning the driving frequencies $\omega _{j}^{\left( d\right) }$ for the given qubit frequencies $\omega_{d}$. Besides, we assume that $\delta_{1}$ and $\delta_{2}$ are close to each other, i.e., $\left\vert \delta\right\vert \ll|\delta_{1}|,|\delta_{2}|$ with $\delta=\delta_{2}-\delta_{1}$. Also, we consider the large-detuning regime $|\Omega_{j}^{\left( d,l\right) }/\delta_{j}|^{2}\ll1$ and homogeneous (inhomogeneous) driving strengths (phases), i.e., $\Omega_{1}^{\left( d,l\right) }=\Omega_{1}$ and $\Omega_{2}^{\left( d,l\right) }=\Omega _{2}e^{-i\phi_{d,l}}$ with positive $\Omega_{j}$. Then, using the second-order perturbative method, the effective Hamiltonian can be obtained (see Appendix~\ref{appendix:PMQF}) as \begin{equation} \hat{H}_{d,l}^{\left( \text{eff}\right) } \! = \! \frac{\hbar}{2}\omega_{d}\hat{\sigma}_{z}^{\left( d,l\right) } \! -\frac{\hbar}{2} \! \left[ \omega_{s} \! + \! \Omega\cos(\delta t \! + \! \phi^{\left( d,l\right) })\right] \! \hat{\sigma}_{z}^{\left( d,l\right) }, \label{ea:Hq_modulated} \end{equation} where $\omega_{s}=\sum_{j=1}^{2}\frac{\Omega_{j}^{2}}{2\delta_{j}}$ is the Stark shift and $\Omega=|\frac{\Omega_{1}\Omega_{2}}{\delta_{1}}|$. The phase $\phi^{\left( d,l\right) }$ can be tuned by the driving field at the site $(d,l)$ and will not be specified at present. In Eq.~(\ref{ea:Hq_modulated}), we find that the qubit frequency is periodically modulated with the strength $\Omega$, the frequency $\delta$, and the phase $\phi^{\left( d,l\right) }$. Under our assumptions, the parameters can typically be $\delta_{1}^{\left( d\right) }/2\pi=1~\mathrm{GHz}$, $\delta_{2}^{\left( d\right) }/2\pi=1.1~\mathrm{GHz}$, and $\Omega_{1} /2\pi=\Omega_{2}/2\pi=178$~$\mathrm{MHz}$, in which case the Stark shift is $\omega_{s}/2\pi=30.24~\mathrm{MHz}$, the modulation strength is $\Omega /2\pi=31.7~\mathrm{MHz}$, and the modulation frequency is $\delta/2\pi =100~\mathrm{MHz}$. The qubit frequency $\omega_{d}/2\pi$ can be about $2~\mathrm{GHz}$; its exact value, together with the driving frequencies $\omega _{j}^{\left( d\right) }$, will be determined in the following. Note that a single driving field only induces transitions between the qubit basis states [see the individual driving term in Eq.~(\ref{eq:Hq})]. This is why we apply two-tone driving fields to achieve the periodical modulation of the qubit frequency. We must also mention that the method introduced here is applicable to all qubit circuits and is not merely confined to the flux qubit (see Appendix~\ref{appendix:PMQF}). Its validity does not require a negligibly small anharmonicity of the qubit circuit, as is required for the transmon circuit in Ref. 
~ \cite{Alaeian2019PRA}. Since nonlinearity is a key factor for demonstrating quantum phenomena ~ \cite{Wendin2007LTP}, periodically modulating the qubit frequency while maintaining enough anharmonicity can be significant for exploring nonequilibrium quantum physics. \subsection{Engineering effective magnetic flux} Based on the periodical modulation of the qubit frequency in Sec. ~ \ref{sec:PM}, we now continue to demonstrate how to engineer the effective magnetic flux. We assume each qubit in Fig. ~ \ref{fig:schematic diagram} is driven by two-tone fields such that the qubit frequency can be modulated as in Eq. ~ (\ref{ea:Hq_modulated}). To include the nearest qubit-qubit couplings, the full Hamiltonian can be represented as \begin{equation} \hat{H}_{\text{f}}=\hat{H}_{\text{b}}-\sum_{d=\mathrm{L}}^{\mathrm{R}}\sum _{l}\frac{\hbar}{2}\left[ \omega_{s}+\Omega\cos(\delta t+\phi^{\left( d,l\right) })\right] \hat{\sigma}_{z}^{\left( d,l\right) }. \label{eq:H} \end{equation} Note that $\hat{H}_{\text{b}}$ is the bare Hamiltonian given in Eq. ~ (\ref{eq:Hb}), $\omega_{s}$ is the Stark shift, and $\Omega$, $\delta$, and $\phi^{(d,l)}$ are respectively the periodical modulation strength, frequency, and phase of the qubit at $(d,l)$. To eliminate the time-dependent terms in Eq. ~ (\ref{eq:H}), we now apply to Eq. ~ (\ref{eq:H}) a unitary transformation \begin{equation} \hat{U}_{d}\left( t\right) = {\textstyle\prod\limits_{l}} {\textstyle\prod\limits_{d=\mathrm{L,R}}} \exp\left[ i\hat{F}_{l,d}\left( t\right) \right] , \end{equation} where the expression of $\hat{F}_{l,d}\left( t\right) $ is explicitly given by \begin{equation} \hat{F}_{l,d}\left( t\right) =\frac{1}{2}\hat{\sigma}_{z}^{\left( d,l\right) }\left[ \frac{\Omega}{\delta}\sin\left( \delta t+\phi _{d,l}\right) +\left( \omega_{d}-\omega_{s}\right) t\right] . \end{equation} After that, the assumptions $\phi_{d\text{,}l}=\phi_{d}-\phi l$, $\phi_{\text{L}}=-\phi_{\text{R}}=\phi_{0}$, and $\delta=\omega_{\text{R} }-\omega_{\text{L}}$ are made and the fast-oscillating terms are neglected, thus leading to the following qubit ladder Hamiltonian as \begin{align} \hat{H}_{\text{f}}^{\prime}= & -\sum_{d=\text{L}}^{\text{R}}\sum_{l}\hbar g\hat{\sigma}_{-}^{\left( d,l\right) }\hat{\sigma}_{+}^{\left( d,l+1\right) }+\text{H.c.}\nonumber\\ & -\sum_{l}\hbar K\hat{\sigma}_{-}^{\left( \text{L},l\right) }\hat{\sigma }_{+}^{\left( \text{R},l\right) }\exp\left( i\phi l\right) +\text{H.c..} \label{eq:Hld0} \end{align} Here, the intraleg coupling strength $g=g_{0}J_{0}\left( \eta_{x}\right) $, and the interleg coupling strength $K=K_{0}J_{1}\left( \eta_{y}\right) $, which are in principle tunable via modifying $\Omega$, since $\eta_{x} =\frac{2\Omega}{\delta}\sin(\frac{\phi}{2})$ and $\eta_{y}=\frac{2\Omega }{\delta}\sin\left( \phi_{0}\right) $ (see Appendix. ~ \ref{append:Treatment} for details). The symbol $J_{n}\left( x\right) $ represents the $n$th Bessel function of the first kind. For the typical parameters given previously, which yields $\Omega/2\pi=31.7 \operatorname{MHz} $ and $\delta/2\pi=100 \operatorname{MHz} $, we can further set $g_{0}/2\pi=3.5 \operatorname{MHz} $, and $K_{0}/2\pi=33 \operatorname{MHz} $. Then, the condition $|\eta_{x/y}/2|^{2}\ll1$ is fulfilled, which makes $g\approx g_{0}$ and $K\approx\frac{\eta_{y}}{2}K_{0}=\frac{\Omega}{\delta }K_{0}\sin\left( \phi_{0}\right) $. 
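As a numerical cross-check of the parameter values quoted in this subsection, the short Python sketch below (purely illustrative; the choices $\phi=\pi/2$ and $\phi_{0}=\pi/2$ are ours and only serve as an example) reproduces the Stark shift, the modulation strength, and the Bessel-function-dressed couplings $g=g_{0}J_{0}\left( \eta_{x}\right) $ and $K=K_{0}J_{1}\left( \eta_{y}\right) $.
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind J_n

# Drive and circuit parameters quoted in the text (values of x/2pi, in MHz)
delta1, delta2 = 1000.0, 1100.0    # detunings of the two drive tones
Omega1 = Omega2 = 178.0            # drive strengths
g0, K0 = 3.5, 33.0                 # bare intraleg / interleg couplings

omega_s = Omega1**2/(2*delta1) + Omega2**2/(2*delta2)   # Stark shift
Omega   = Omega1*Omega2/delta1                          # modulation strength
delta   = delta2 - delta1                               # modulation frequency
print(omega_s, Omega, delta)       # ~30.24, ~31.7, 100 MHz, as in the text

# Dressed couplings; phi and phi0 below are illustrative choices (pi/2 each)
phi, phi0 = np.pi/2, np.pi/2
eta_x = 2*Omega/delta*np.sin(phi/2)
eta_y = 2*Omega/delta*np.sin(phi0)
g, K = g0*jv(0, eta_x), K0*jv(1, eta_y)
print(g, K, 3*g*np.sin(phi0))      # K is indeed close to 3*g*sin(phi0)
\end{verbatim}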
In this case, the intraleg coupling strength is fixed at $g_{0}$, but the interleg coupling strength can also be equivalently represented as \begin{equation} K\approx3g\sin\phi_{0}. \end{equation} This implies that for given $g$, $K$ can be tuned via $\phi_{0}$ in the range $-3g\leq K\leq3g$ (see Fig.~\ref{fig:K_phi0}), which enables us to study the phase transition by adjusting $K$. The condition $\delta=\omega_{\text{R}}-\omega_{\text{L}}$ can be satisfied by setting the qubit frequencies $\omega_{\text{L}}/2\pi=1.9 \operatorname{GHz} $ and $\omega_{\text{R}}/2\pi=2 \operatorname{GHz} $ such that $\delta/2\pi=100 \operatorname{MHz} $. Furthermore, the driving frequencies should be $\omega_{1}^{\left( \text{L}\right) }/2\pi=2.9 \operatorname{GHz} $, $\omega_{2}^{\left( \text{L}\right) }/2\pi=3 \operatorname{GHz} $, $\omega_{1}^{\left( \text{R}\right) }/2\pi=3 \operatorname{GHz} $, and $\omega_{2}^{\left( \text{R}\right) }/2\pi=3.1 \operatorname{GHz} $, since we have assumed $\delta_{1}/2\pi=(\omega_{1}^{\left( d\right) }-\omega_{d})/2\pi=1 \operatorname{GHz} $ and $\delta_{2}/2\pi=(\omega_{2}^{\left( d\right) }-\omega_{d})/2\pi=1.1 \operatorname{GHz} $. So far, we have determined nearly all the necessary parameters of the qubits and driving fields, except for the phases $\phi$ and $\phi_{0}$ of the driving fields; the former acts as the effective magnetic flux per plaquette, while the latter is used to tune the interleg coupling strength $K$. \subsection{Fermionic ladder in the effective magnetic flux} To transform the qubit ladder into a fermionic ladder, we can make a Jordan-Wigner transformation~\cite{Mei2013PRB}, which is of the form \begin{align} \hat{\sigma}_{-}^{\left( \text{L},l\right) } & =\hat{b}_{\text{L},l} \prod_{l^{\prime}=1}^{l-1}\exp \! (i\pi\hat{b}_{\text{L},l^{\prime}}^{\dag}\hat{b}_{\text{L},l^{\prime}} ),\label{eq:Jordan-Wigner1}\\ \hat{\sigma}_{-}^{\left( \text{R},l\right) } & =\hat{b}_{\text{R},l} \prod_{l^{\prime}=1}^{l}\exp(i\pi\hat{b}_{\text{L},l^{\prime}}^{\dag}\hat{b} _{\text{L},l^{\prime}}) \! \prod_{l^{\prime}=1}^{l-1}\exp(i\pi\hat{b}_{\text{R},l^{\prime}}^{\dag}\hat {b}_{\text{R},l^{\prime}}). \label{eq:Jordan-Wigner2} \end{align} Here, $\hat{\sigma}_{z}^{\left( d,l\right) }=2\hat{b}_{d,l}^{\dag}\hat {b}_{d,l}-1$, and the fermionic anticommutation relations $\{\hat{b} _{d,l},\hat{b}_{d^{\prime},l^{\prime}}^{\dag}\}=\delta_{dd^{\prime}} \delta_{ll^{\prime}}$ and $\{\hat{b}_{d,l},\hat{b}_{d^{\prime},l^{\prime} }\}=0$ are fulfilled, where $\delta_{dd^{\prime}}$ and $\delta_{ll^{\prime}}$ are Kronecker deltas. Then, the qubit ladder Hamiltonian $\hat {H}_{\text{f}}^{\prime}$ in Eq.~(\ref{eq:Hld0}) can be transformed into the Hamiltonian of the fermionic ladder, i.e., \begin{align} \hat{H}_{\text{ld}}= & -\sum_{d=\text{L}}^{\text{R}}\sum_{l}\hbar g\hat {b}_{d,l}\hat{b}_{d,l+1}^{\dag}+\text{H.c.}\nonumber\\ & -\sum_{l}\hbar K\hat{b}_{\text{L},l}\hat{b}_{\text{R},l}^{\dag}\exp\left( i\phi l\right) +\text{H.c.,} \label{eq:Hld} \end{align} which describes the motion of \textquotedblleft fermionic\textquotedblright \ particles, governed by the effective magnetic flux $\phi$. We note that the above fermionic ladder model with an effective magnetic flux can also be interpreted as a one-dimensional spin-orbit-coupled model~\cite{Atala2014NP,Livi2016PRL}, which may thus inspire research towards the realization of the quantum spin Hall effect~\cite{Shen2012Book}. \begin{figure} \caption{(color online). 
Tunable interleg coupling strength $K$ plotted versus the phase $\phi_{0}$: $K=3g\sin\phi_{0}$, where, for simplicity, we have set the intraleg coupling strength $g=1$. Here, $\phi_{0}$ is determined by the phases of the driving fields. } \label{fig:K_phi0} \end{figure} \section{Vortex-Meissner phase transition\label{sec:phase transition}} \subsection{Infinite-length ladder\label{sec:InfLenLadder}} \begin{figure}\label{fig:spectrum} \end{figure} Now, we seek the energy spectrum of the ladder Hamiltonian $\hat{H} _{\text{ld}}$ in the infinite-chain case [see Eq.~(\ref{eq:Hld})], i.e., when the lattice site (or rung) number $N$ approaches infinity. To do this, we straightforwardly assume that the single-particle eigenstate at the energy $\hbar\omega$ is $\left\vert \omega\right\rangle =\sum_{d,l}\psi_{d,l}\left\vert d,l\right\rangle $. Here, the notation $\left\vert d,l\right\rangle =\hat{b}_{d,l}^{\dag}\left\vert 0\right\rangle $ represents the single-particle state at the site $\left( d,l\right) $ and $\left\vert 0\right\rangle $ is the ground state. Afterwards, we assume the wave function $\psi_{d,l}\equiv\psi_{d,l}\left( z\right) $, and further assume \begin{equation} \psi_{\text{L},l}=\psi_{\text{L},0}z^{l}e^{-i\frac{\phi}{2}l}\text{ and } \psi_{\text{R,}l}=\psi_{\text{R,}0}z^{l}e^{i\frac{\phi}{2}l} \label{eq:psi_d_l_z} \end{equation} before substituting the eigenstate vector $\left\vert \omega\right\rangle $ into the following secular equation \begin{equation} \hat{H}_{\text{ld}}\left\vert \omega\right\rangle =\hbar\omega\left\vert \omega\right\rangle . \end{equation} Then, the dispersion relation yields a two-band spectrum, i.e., \begin{equation} \omega=\omega_{\pm}=-2gz_{\text{p}}\cos\frac{\phi}{2}\pm\sqrt{K^{2} -4g^{2}z_{\text{m}}^{2}\sin^{2}\frac{\phi}{2}}, \label{eq:dispersion} \end{equation} where the intermediate parameters $z_{\text{p}}=(z+ z^{-1})/2$ and $z_{\text{m}}=(z- z^{-1})/2$. The corresponding wave function at $l=0$ is of the form \begin{equation} \psi_{\text{L},0} \! \left( z\right) =\omega+g(ze^{i\frac{\phi}{2}}+z^{-1}e^{-i\frac{\phi}{2} })\text{ and }\psi_{\text{R,}0} \! \left( z\right) =-K, \end{equation} where a global normalization constant has been discarded. To guarantee that $\omega$ is real, there are the following three cases, i.e., (i) $z=\exp\left( iq\right) $, (ii) $z=\exp\left( \lambda\right) $, and (iii) $z=-\exp\left( \lambda\right) $, where $q$ and $\lambda$ must lie in the ranges $-\pi\leq q\leq\pi$ and $-\ln\Lambda \leq\lambda\leq\ln\Lambda$, with the parameter $\Lambda=K/[2g\sin\frac{\phi }{2}]+\sqrt{K^{2}/[4g^{2}\sin^{2}\frac{\phi}{2}]+1}$. Here, case (i) gives a transmission state, case (ii) a decay state, and case (iii) a staggered decay state. In case (i), the value of $K$ controls the number of minima of $\omega_{-}$, for which there exists a critical interleg coupling strength with the analytical form \begin{equation} K_{\text{c}}=2g\tan\frac{\phi}{2}\sin\frac{\phi}{2}. \label{eq:Kc} \end{equation} The relation $K=K_{\text{c}}$ actually yields the vortex-Meissner transition boundary discussed afterwards. In detail, if $K<K_{\text{c}}$, the lower band $\omega_{-}$ has two minima, while otherwise there is only one minimum. This can be clearly seen from the dashed black curves in Figs.~\ref{fig:spectrum}(a)-\ref{fig:spectrum}(c) for $K$ taking $0.5$, $\sqrt{2}$, and $2.5$, respectively, where we specify $g=1$ and $\phi=\frac{\pi}{2}$ such that $K_{\text{c}}=\sqrt{2}$. 
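The statement about the number of band minima can be verified numerically; the following short Python sketch (purely illustrative) evaluates the lower transmission band of Eq.~(\ref{eq:dispersion}) with $z=e^{iq}$ for $g=1$ and $\phi=\pi/2$ and counts its local minima.
\begin{verbatim}
import numpy as np

g, phi = 1.0, np.pi/2
Kc = 2*g*np.tan(phi/2)*np.sin(phi/2)      # critical coupling, here sqrt(2)

def omega_minus(q, K):
    # Lower band of the dispersion with z = exp(iq): z_p = cos(q), z_m^2 = -sin(q)^2
    return (-2*g*np.cos(q)*np.cos(phi/2)
            - np.sqrt(K**2 + 4*g**2*np.sin(q)**2*np.sin(phi/2)**2))

q = np.linspace(-np.pi, np.pi, 4001)
for K in (0.5, np.sqrt(2.0), 2.5):
    w = omega_minus(q, K)
    n_min = np.sum((w[1:-1] < w[:-2]) & (w[1:-1] < w[2:]))   # local minima
    print(f"K = {K:.3f}: {n_min} minima (Kc = {Kc:.3f})")
# -> two minima for K < Kc and a single minimum at q = 0 otherwise
\end{verbatim}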
As $K$ is increased, the band gap between the two transmission bands $\omega_{+}$ and $\omega_{-}$ is also broadened. In Fig.~\ref{fig:spectrum}, where the energy bands $\omega_{\pm}$ of the decay and staggered decay states are also shown, we find that a given single-particle energy always corresponds to four degenerate states. This is critical for the existence of the single-particle eigenstates under the open boundary condition, which can in principle be constructed as linear superpositions of these four degenerate states. Only when the decay and staggered decay states are included can one ensure that the number of independent coefficients equals the number of boundary conditions, considering that the ladder has four terminals. However, in the simplest one-dimensional chain, which has only two terminals, the single-particle eigenstates under the open boundary condition are superpositions of only two transmission states, which differs from the quasi-two-dimensional ladder model that this paper concentrates on. \subsection{Open-boundary ladder with finite qubit number} Now we investigate the open-boundary condition for the ladder model. In cold atom systems, the ideal open boundary is a hard wall, which is very hard to realize~\cite{Atala2014Thesis}, and the open-boundary condition is approximately engineered by an external power-law potential. However, in superconducting qubit systems, the open-boundary condition is very convenient to realize, since the ladder length is finite in experiment. Suppose the ladder length is $N$; then the fermionic Hamiltonian in Eq.~(\ref{eq:Hld}) becomes \begin{align} \hat{H}_{\text{ld}}^{\left( N\right) }= & -\sum_{l=1}^{N-1}\sum _{d=\text{L}}^{\text{R}}\hbar g\hat{b}_{d,l}\hat{b}_{d,l+1}^{\dag} +\text{H.c.}\nonumber\\ & -\sum_{l=1}^{N}\hbar K\hat{b}_{\text{L},l}\hat{b}_{\text{R},l}^{\dag} \exp\left( i\phi l\right) +\text{H.c.,} \label{eq:Hld_N} \end{align} where the eigenstates are different from those of the infinite-length ladder and therefore must be revisited. In Figs.~\ref{fig:spectrum}(a)-\ref{fig:spectrum}(c), we find that in the infinite-length case, a definite $\omega$ corresponds to four states, which we denote by the characteristic constants $z=z_{1},$ $z_{2},$ $z_{3},$ and $z_{4}$, respectively. In our study, we are only interested in the low-energy states. Thus, the parameters $z_{j}$ can be determined by the relation $\omega =\omega_{-}$ [see Eq.~(\ref{eq:dispersion})], which yields \begin{align} z_{1,2} & \equiv z_{1,2}\left( \omega\right) =\frac{1}{2}(R_{-}\mp \sqrt{R_{-}^{2}-4}),\label{eq:z12}\\ z_{3,4} & \equiv z_{3,4}\left( \omega\right) =\frac{1}{2}(R_{+}\mp \sqrt{R_{+}^{2}-4}), \label{eq:z34} \end{align} where the compact symbols $R_{\pm}$, determined by $\omega$, are given by \begin{equation} R_{\pm}=-\frac{\omega}{g}\cos\frac{\phi}{2}\pm\sqrt{-\frac{\omega^{2}}{g^{2} }\sin^{2}\frac{\phi}{2}+\frac{K^{2}}{g^{2}}+4\sin^{2}\frac{\phi}{2}}. \end{equation} \begin{figure}\label{fig:wavefunction} \end{figure} For the open-boundary ladder with a finite qubit number, the single-particle eigenstate at the energy $\hbar\mu$ can be assumed as \begin{equation} \left\vert \mu\right\rangle =\sum_{d=\text{L}}^{\text{R}}\sum_{l=1}^{N} \chi_{d,l}\left\vert d,l\right\rangle . \label{eq:mu_expansion} \end{equation} Here, the eigen wave function $\chi_{d,l}\equiv\chi_{d,l} \! 
\left( \mu\right) $ must be the linear superposition of the four degenerate states at the energy $\omega=\mu$ of the infinite-length ladder, respectively denoted as $\psi_{d,l}^{\left( j\right) }\equiv\psi_{d,l}\left( z_{j}\left( \mu\right) \right) $ [see Eqs. ~ (\ref{eq:psi_d_l_z}), (\ref{eq:z12}), and (\ref{eq:z34})], i.e., \begin{equation} \chi_{d,l}=\sum_{j=1}^{4} \! A_{j}\psi_{d,l}^{\left( j\right) }. \end{equation} Then, by substituting the state vector expansion $\left\vert \mu\right\rangle $ in\ Eq. ~ (\ref{eq:mu_expansion}) into the secular equation \begin{equation} \hat{H}_{\text{ld}}^{\left( N\right) }\left\vert \mu\right\rangle =\hbar \mu\left\vert \mu\right\rangle , \label{eq:secular_eq} \end{equation} where the coefficients $A_{j}$ must be constrained nonzero, the eigen energies can in principle be discretized as $\mu=\hbar\mu_{n}$ ($n=1,2,...,2N$) with $\mu_{n}\leq\mu_{n+1}$ and the corresponding eigenstates can be assumed of the form \begin{equation} \left\vert \mu_{n}\right\rangle =\sum_{d=\text{L}}^{\text{R}}\sum_{l=1} ^{N}\chi_{d,l}^{\left( n\right) } \! \left\vert d,l\right\rangle . \end{equation} Here, the lowest energy eigenstate $|\mu_{1}\rangle$ is called the single-particle ground state, which is the major state we will study. The eigen wave function $\chi_{d,l}^{\left( n\right) }$ can also be expanded as the linear superposition of $\psi_{d,l}^{\left( n,j\right) }\equiv\psi _{d,l}\left( z_{j}\left( \mu_{n}\right) \right) $, the degenerate states in the infinite-length case, i.e., \begin{equation} \chi_{d,l}^{\left( n\right) }=\sum_{j=1}^{4}A_{j}^{\left( n\right) } \psi_{d,l}^{\left( n,j\right) }. \label{eq:expand_inf} \end{equation} However, straightforwardly solving Eq. ~ (\ref{eq:secular_eq}) is difficult, since a transcendental equation will be involved. Thus, in this paper, the determination of $A_{j}^{\left( n\right) }$ is achieved by fitting Eq. ~ (\ref{eq:expand_inf}) with the results obtained from direct numerical diagonalization of Eq. ~ (\ref{eq:secular_eq}). In Fig. ~ \ref{fig:wavefunction}, the wave functions of the single-particle ground state $\left\vert \mu_{1}\right\rangle $ and single-particle excited state $\left\vert \mu_{2}\right\rangle $ ($\mu_{1}<\mu_{2}$) have been shown for $K\ $taking $0.5$ [see Fig. ~ \ref{fig:wavefunction}(a)-\ref{fig:wavefunction}(d)] and $2.5$ [see Fig. ~ \ref{fig:wavefunction}(e)-\ref{fig:wavefunction}(h)], respectively, where the other parameters are $g=1$, $N=20$, and $\phi=\frac{\pi}{2}$ such that $K_{\text{c}}=\sqrt{2}$. The discrete circles represent the results from the direct numerical diagonalization using Eq. ~ (\ref{eq:secular_eq}), while the solid curves the fitting results using the expansion equation in Eq. ~ (\ref{eq:expand_inf}). Both results can be found to fit each other exactly. Also, the wave functions at $K=2.5>K_{\text{c}}$ appear smoother than those at $K=0.5<K_{\text{c}}$. Besides, when $K=2.5$, $|\chi_{d,l}^{\left( 2\right) }|$ exhibits an obvious dip near the middle lattice site, which nevertheless does not occur when $K=0.5$. Then, we investigate the properties of the single-particle ground state $\chi_{d,l}^{\left( 1\right) }$ using the expansion coefficients $A_{j}^{\left( n\right) }$ from fitting. From the discussions in Sec. 
~\ref{sec:InfLenLadder}, we know that if $K$ is less than $K_{\text{c}}$, all four characteristic constants $z_{j}$ corresponding to $\omega=\mu_{1}$ are complex numbers on the unit circle, while, if $K$ exceeds $K_{\text{c}}$, $z_{3}$ and $z_{4}$ become real and then only contribute to the population at the edges. Due to the effective magnetic flux, a complex characteristic constant $z_{j}=\exp\left( iq_{j}\right) $ corresponds to a plane wave with the quasimomentum $q_{j}-\phi/2$ ($q_{j}+\phi/2$) in the wave function of the L (R) ladder leg [see Eq.~(\ref{eq:psi_d_l_z})]. \begin{figure} \caption{(color online). Quasimomentum $q_{j}\pm\phi/2$ in the single-particle ground state wave function $\chi_{d,l}^{\left( 1\right) }$ for different interleg coupling strengths $K$. Here, the effective magnetic flux $\phi=\pi/2$, the intraleg coupling strength $g=1$, and the ladder length $N=20$. The color indicates the relative distribution intensity of the wave function on the quasimomentum component. Here, the quasimomentum $q_{j}-\phi/2$ ($q_{j}+\phi/2$) only occurs on the L (R) ladder leg. } \label{fig:ComponentSpectrum} \end{figure} In Fig.~\ref{fig:ComponentSpectrum}, we have plotted the quasimomentum $q_{j}\mp \phi/2$ versus the interleg coupling strength $K$ with $\phi=\pi/2$ and $N=20$, where the color represents the relative distribution intensity on a particular quasimomentum component [obtained by rescaling $|A_{j}^{\left( 1\right) }\psi_{d,0}^{\left( 1,j\right) }|$, with $d$ taking L (R) for $q_{j}-\phi/2$ ($q_{j}+\phi/2$)]. We can also see that if $\phi=\pi/2$ and $K$ is less than $K_{\text{c}}$, the particle is more likely to be populated on the L (R) leg for the components corresponding to the characteristic constants $z_{1,3}$ ($z_{2,4}$). However, if $K$ exceeds $K_{\text{c}}$, only $z_{1}$ and $z_{2}$ remain complex, and the particles corresponding to $z_{1,2}$ are populated approximately uniformly on both legs. Lastly, we mention that once the single-particle eigenstates $\chi _{d,l}^{\left( n\right) }$ are obtained, one can make the transformation $\hat{b}_{n}^{\dag}=\sum_{d=\text{L}}^{\text{R}}\sum_{l=1}^{N}\chi _{d,l}^{\left( n\right) }\hat{b}_{d,l}^{\dag}$, which finally transforms the Hamiltonian in Eq.~(\ref{eq:Hld_N}) into independent fermionic modes, i.e., \begin{equation} \hat{H}_{\text{ld}}^{\left( N\right) }=\sum_{n=1}^{2N}\hbar\mu_{n}\hat {b}_{n}^{\dag}\hat{b}_{n}. \end{equation} Here, $\hat{b}_{n}$ and $\hat{b}_{n}^{\dag}$ satisfy the fermionic anticommutation relations, i.e., $\{\hat{b}_{n},\hat{b}_{n^{\prime}}^{\dag }\}=\delta_{nn^{\prime}}$. Compared with the infinite-length scenario, the eigen energies are discretized, with the eigenstates being superpositions of those of the infinite-length scenario. \subsection{Chiral current} The current operator can be derived from the following continuity equation \begin{equation} \frac{\text{d}}{\text{d}t}(\hat{b}_{d,l}^{\dag}\hat{b}_{d,l})=\frac{[\hat {b}_{d,l}^{\dag}\hat{b}_{d,l},\hat{H}_{\text{ld}}]}{i\hbar}= \hat{j} _{l-1,l}^{\left( d\right) }+ \hat{j} _{l+1,l}^{\left( d\right) }+ \hat{j} _{l,\bar{d}d}, \end{equation} where $d,\bar{d}\in\{$L,R$\}$ and $\bar{d}\neq d$. Here, $ \hat{j} _{l,l+1}^{\left( d\right) }$ denotes the particle current flowing from the site $l$ to $l+1$ on the $d$ leg, while $ \hat{j} _{l,\bar{d}d}$ denotes the particle current flowing from the $\bar{d}$ leg to the $d$ leg at the $l$th site. The physical meaning is that the rate of change of the particle number at an individual site is determined by the currents that flow into it. 
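For reference, the direct numerical diagonalization used in the fitting above, together with the leg-averaged chiral current defined below, can be sketched in a few lines of Python (purely illustrative; the parameter values follow the examples used in the figures).
\begin{verbatim}
import numpy as np

def ladder_hamiltonian(N=20, g=1.0, K=0.5, phi=np.pi/2):
    # Single-particle block of Eq. (Hld_N); basis |d,l> with index d*N + l,
    # where d = 0 (1) labels the L (R) leg and l = 0,...,N-1 labels the rung.
    H = np.zeros((2*N, 2*N), dtype=complex)
    for d in (0, 1):                               # intraleg hopping -g
        for l in range(N - 1):
            H[d*N + l, d*N + l + 1] = H[d*N + l + 1, d*N + l] = -g
    for l in range(N):                             # interleg hopping -K e^{i phi l}
        H[N + l, l] = -K*np.exp(1j*phi*(l + 1))
        H[l, N + l] = np.conj(H[N + l, l])
    return H

def leg_current(chi_leg, g):
    # Site-averaged current (N-1)^{-1} sum_l i g (chi*_{l+1} chi_l - c.c.)
    j = 1j*g*(np.conj(chi_leg[1:])*chi_leg[:-1] - chi_leg[1:]*np.conj(chi_leg[:-1]))
    return j.real.mean()

N, g, phi = 20, 1.0, np.pi/2
for K in (0.5, np.sqrt(2.0), 2.5):
    E, V = np.linalg.eigh(ladder_hamiltonian(N, g, K, phi))
    chi = V[:, 0]                                  # single-particle ground state
    jC = leg_current(chi[:N], g) - leg_current(chi[N:], g)
    print(f"K = {K:.3f}: mu_1 = {E[0]:.3f}, chiral current j_C = {jC:.3f}")
\end{verbatim}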
The resulting current operators can be explicitly represented as \begin{align} \hat{j} _{l,l+1}^{\left( d\right) } & =ig(\hat{b}_{d,l+1}^{\dag}\hat{b}_{d,l} -\hat{b}_{d,l}^{\dag}\hat{b}_{d,l+1}),\\ \hat{j} _{l,\text{LR}} & =iK(\hat{b}_{\text{R},l}^{\dag}\hat{b}_{\text{L},l}e^{i\phi l}-\hat{b}_{\text{L},l}^{\dag}\hat{b}_{\text{R},l}e^{-i\phi l}). \end{align} For the specific single-particle ground state $\left\vert \mu_{1}\right\rangle =\sum_{d=\text{L}}^{\text{R}}\sum_{l=1}^{N}\chi_{d,l}^{\left( 1\right) }\left\vert d\text{,}l\right\rangle $, the average particle currents are given by \begin{equation} j_{l,l+1}^{\left( d\right) }=ig(\chi_{d,l+1}^{\left( 1\right) \ast} \chi_{d,l}^{\left( 1\right) }-\chi_{d,l+1}^{\left( 1\right) }\chi _{d,l}^{\left( 1\right) \ast}), \end{equation} which describes the flow from the site $l$ to $l+1$ on the $d$ leg, and \begin{equation} j_{l,\text{LR}}=iK(\chi_{\text{R},l}^{\left( 1\right) \ast}\chi_{\text{L} ,l}^{\left( 1\right) }e^{i\phi l}-\chi_{\text{R},l}^{\left( 1\right) } \chi_{\text{L},l}^{\left( 1\right) \ast}e^{-i\phi l}), \end{equation} which describes the flow from the L leg to the R leg at the $l$th site. The presence of the effective magnetic flux makes the system exhibit chirality. In detail, the particle currents on the two legs differ from each other. To quantify the difference, we define the chiral particle current as \begin{equation} j_{\text{C}}=j_{\text{L}}-j_{\text{R}}. \label{eq:chiral_current} \end{equation} Here, $j_{d}=(N-1)^{-1}\sum_{l=1}^{N-1}j_{l,l+1}^{\left( d\right) } $ with $d=\mathrm{L,R}$ is the site-averaged current on the $d$ leg. In Fig.~\ref{fig:chiralcurrentcalculation}, the chiral current strength is plotted as a function of the flux $\phi$ and the interleg coupling strength $K$. The Meissner and vortex phases are separated by a critical boundary, where $K=2g\tan\frac{\phi}{2}\sin\frac{\phi}{2}$ [see Eq.~(\ref{eq:Kc})] is fulfilled. This boundary corresponds to the degeneracy transition of the single-particle ground state in the infinite-length case [see Figs.~\ref{fig:spectrum}(a)-\ref{fig:spectrum}(c)]. For given $K=\sqrt{2}$, the chiral current first increases with $\phi$ until reaching its maximum at $\phi_{\text{c}}=\frac{\pi}{2}$ and then goes down towards zero, while, for given $\phi=\frac{\pi}{2}$, the chiral current also first increases with $K$ until reaching its maximum at $K_{\text{c}}=\sqrt{2}$ and then remains almost unchanged. The current patterns of the Meissner and vortex phases will be discussed below. \begin{figure}\label{fig:chiralcurrentcalculation} \end{figure} \subsection{Current patterns in the vortex and Meissner phases} \begingroup\begin{figure*}\label{fig:wavefunction6} \end{figure*} \endgroup The difference between the vortex and Meissner phases can be intuitively seen from their individual current patterns in Fig.~\ref{fig:wavefunction6}. In the vortex phase, currents circulate around particular cores, the number of which defines the vortex number. In the Meissner phase, the currents only flow along the edges of the ladder, which can therefore be regarded as a single large vortex. In Fig. 
~\ref{fig:wavefunction6}, the flux is $\phi=\pi/2$ for the left column and $-\pi/2$ for the right column, the intraleg coupling is $g=1$, the site number is $N=20$, and the corresponding critical interleg coupling is $K_{\text{c} }=\sqrt{2}$. When $K$ goes down from $2.5$ to the critical value $\sqrt{2}$, no vortex occurs except the single one circulating around the edges. However, as $K$ continues to decrease to $1$ and further to $0.5$, more vortices come into being. Moreover, before $K$ drops to $\sqrt {2}$, the particle density shows no periodical modulation, while, once $K$ falls below $\sqrt{2}$, more and more modulation periods appear as $K$ is decreased. We mention that, due to the effect of the open boundary, the particle density approaches zero near the chain ends. We also see that the current directions are reversed when the flux $\phi$ is flipped from $\pi/2$ [see Figs.~\ref{fig:wavefunction6}(a)-\ref{fig:wavefunction6}(d)] to $-\pi/2$ [see Figs.~\ref{fig:wavefunction6}(e)-\ref{fig:wavefunction6}(h)]. To numerically quantify the vortex density, i.e., the average vortex number per lattice site, we count one vortex for a particular plaquette whenever a clockwise or anticlockwise current pattern is present around it. Thus, if the total vortex number is $N_{\mathrm{V}}$, the vortex density is $D_{\mathrm{V}}=N_{\mathrm{V}}/N$. In Fig.~\ref{fig:vortexdensity4}, we have plotted the vortex density $D_{\text{V}}$ against the flux $\phi$ for different values of $K$ with $N=20$, $g=1$, and open boundary conditions. For each given $K$, there is a critical value of the flux $\phi_{\text{c}}$. Below $\phi_{\text{c}}$, the system is in the Meissner phase, possessing a constant vortex density $1/N=0.05$, while above $\phi_{\text{c}}$, the system is in the vortex phase, where the vortex density increases with the flux $\phi$. Since the vortex number must be an integer, the vortex density increases with $\phi$ in steps. Besides, the critical flux $\phi_{\text{c}}$ shifts to the right gradually when $K$ is increased. \begin{figure}\label{fig:vortexdensity4} \end{figure} \section{Experimental details\label{Sec:ExpDetails}} \subsection{Generating the single-particle ground state\label{Sec:Gen_SPG}} To observe the chiral particle current discussed above, we need to generate the single-particle ground state, i.e., the lowest-energy single-particle state $|\mu_{1}\rangle$. In principle, cold atoms can be condensed into one common single-particle state via laser cooling, thus forming the so-called Bose-Einstein condensate. However, since the particle number here is not conserved, unlike the atom number, the ladder realized by superconducting qubit circuits will decay to the ground state (with no particles present) under sufficient cooling in a conventional dilution refrigerator. Hence, in the following, we will demonstrate how to generate the single-particle ground state from the ground state. We now discuss a general method that generates the single-particle ground state from the ground state $\left\vert 0\right\rangle $ and simultaneously causes no unwanted excitations. In detail, we classically drive the qubits at all the sites, which appears in Eq.~(\ref{eq:H}) as an additional term \begin{equation} \hat{H}_{\text{g}}=\frac{\hbar}{2}\sum_{d=\text{L}}^{\text{R}}\sum_{l=1} ^{N}\hat{\sigma}_{+}^{\left( d,l\right) }B_{d,l}\exp\left( -i\nu _{d}t\right) +\text{H.c.}. \end{equation} When we further go to Eq. 
~(\ref{eq:Hld}), $\hat{H}_{\text{g}}$ is transformed into \begin{equation} \hat{H}_{\text{ld,g}}=\frac{\hbar}{2}\sum_{d=\text{L}}^{\text{R}}\sum _{l=1}^{N}\hat{\sigma}_{+}^{\left( d,l\right) }B_{d,l}^{\prime}\exp\left( -i\epsilon t\right) +\text{H.c.}. \label{eq:Hldg} \end{equation} Here, the driving strength $B_{d,l}^{\prime}=B_{d,l}J_{0}\left( \frac{\Omega }{\delta}\right) \approx B_{d,l}$, since $\left\vert \Omega/\delta\right\vert ^{2}\ll1$ is satisfied by the parameters in Sec.~\ref{sec:Model}, and the detuning $\epsilon\equiv\nu_{d}-\omega_{d}$ for $d=\mathrm{L,R}$ can be achieved via carefully tuning $\nu_{d}$. In Fig.~\ref{fig:E_versus_K}, it can be found that the eigenstates are approximately degenerate in pairs when $K<K_{\text{c}}$, while this approximate degeneracy is broken when $K>K_{\text{c}}$. Therefore, when we excite the single-particle ground state $\left\vert \mu_{1}\right\rangle $ from the ground state with $\epsilon=\mu_{1}$, at least the single-particle state $\left\vert \mu _{2}\right\rangle $ might also be excited, and so might the other single-particle states. \begin{figure}\label{fig:E_versus_K} \end{figure} To overcome this problem, we now make a unitary transformation of the single-particle creation operators, i.e., $\hat{\sigma}_{+}^{\left( d,l\right) }=\sum_{n=1}^{2N}\chi_{d,l}^{\left( n\right) \ast}\hat{\Sigma }_{n}^{+}$, and thus the interaction Hamiltonian in Eq.~(\ref{eq:Hldg}) becomes \begin{equation} \hat{H}_{\text{ld,g}}=\frac{\hbar}{2}\sum_{n=1}^{2N}C_{n}\hat{\Sigma}_{n} ^{+}\exp\left( -i\epsilon t\right) +\text{H.c.}. \label{eq:Hld_g_full} \end{equation} Here, the Pauli operator $\hat{\Sigma}_{n}^{+}$ represents the collective excitations of the qubits, and the driving strength $C_{n}=\sum_{d=\text{L} }^{\text{R}}\sum_{l=1}^{N}\chi_{d,l}^{\left( n\right) \ast}B_{d,l}^{\prime }$ can be controlled by the amplitude $B_{d,l}^{\prime}$ (or, equivalently, $B_{d,l}$). To avoid exciting the single-particle excited states (i.e., the states $\left\vert \mu_{n}\right\rangle $ with $n\geq2$), we should make $C_{n}=0$ for $n\geq2$, which yields the required driving strength \begin{equation} B_{d,l}^{\prime}=\sum_{n=1}^{2N}\chi_{d,l}^{\left( n\right) }C_{n} =\chi_{d,l}^{\left( 1\right) }C_{1} \label{eq:Bpdl} \end{equation} using the orthonormality condition of $\chi_{d,l}^{\left( n\right) }$. Obviously, the driving fields $B_{d,l}^{\prime}$ must possess the same profile as the single-particle ground state $\chi_{d,l}^{\left( 1\right) }$ except for a scaling factor, i.e., the Rabi frequency $C_{1}$. Then, Eq.~(\ref{eq:Hld_g_full}) can be simplified into \begin{equation} \hat{H}_{\text{ld,g}}^{\prime}=\frac{\hbar}{2}C_{1}\exp\left( -i\epsilon t\right) \hat{\Sigma}_{1}^{+}+\mathrm{H.c.}, \end{equation} where we assume that $C_{1}$ is tuned to be positive. From Eqs.~(\ref{eq:Jordan-Wigner1}) and (\ref{eq:Jordan-Wigner2}), we know that $\hat{\sigma}_{+}^{\left( d,l\right) }\left\vert 0\right\rangle =\hat {b}_{d,l}^{\dag}\left\vert 0\right\rangle $, thus yielding \begin{align} \hat{\Sigma}_{1}^{+}\left\vert 0\right\rangle & =\sum_{d=\text{L}} ^{\text{R}}\sum_{l=1}^{N}\chi_{d,l}^{\left( 1\right) }\hat{\sigma} _{+}^{\left( d,l\right) }\left\vert 0\right\rangle \nonumber\\ & =\sum_{d=\text{L}}^{\text{R}}\sum_{l=1}^{N}\chi_{d,l}^{\left( 1\right) }\hat{b}_{d,l}^{\dag}\left\vert 0\right\rangle =\left\vert \mu_{1} \right\rangle . 
\end{align} Since the single-particle ground state is generated from the ground state, we then have \begin{equation} \hat{H}_{\text{ld,g}}^{\prime}=\frac{\hbar}{2}C_{1}\exp\left( -i\epsilon t\right) \left\vert \mu_{1}\right\rangle \! \left\langle 0\right\vert +\mathrm{H.c.}. \label{eq:Hgld_remove} \end{equation} Thus, the unwanted excitations characterized by $C_{n}$ for $n\geq2$ are all removed via properly adjusting $B_{d,l}^{\prime}$. If the detuning is further taken as $\epsilon=\mu_{1}$ as expected, the system will evolve to the state $\cos\left( C_{1}t/2\right) \! \left\vert 0\right\rangle -i\sin\left( C_{1}t/2\right) \! \left\vert \mu_{1}\right\rangle $ in a time duration $t$. Applying a $\pi$ pulse, i.e., $C_{1}t=\pi$, the single-particle ground state $\left\vert \mu_{1}\right\rangle $ can thus be generated in just one step. If we specify the intraleg coupling strength $g/2\pi=3.5 \operatorname{MHz} $, the interleg coupling strength $K/2\pi=1.75 \operatorname{MHz} $, the ladder length $N=20$, the flux $\phi=\pi/2$, and the detuning $\epsilon/2\pi=\mu_{1}/2\pi=-210.4 \operatorname{MHz} $, the driving strength $B_{d,l}^{\prime}$ required to reach the desired Rabi frequencies $C_{1}/2\pi=1 \operatorname{MHz} $ and $C_{n}/2\pi=0$ ($n\geq2$) is shown in Fig.~\ref{fig:DrFld_gen}, which implies a generation time of $0.5 \operatorname{\mu s} $. Besides, we can verify that $\left\vert B_{d,l}^{\prime}\right\vert $ [see Fig.~\ref{fig:DrFld_gen}(a)] shares the same profile as $\left\vert \chi _{d,l}^{\left( 1\right) }\right\vert $ [see Figs.~\ref{fig:wavefunction}(a) and~\ref{fig:wavefunction}(b)] except for a scaling factor. \begin{figure}\label{fig:DrFld_gen} \end{figure} Having obtained the target Hamiltonian in Eq.~(\ref{eq:Hgld_remove}), we now investigate the effect of the environment on the state generation process, which is described by the Lindblad master equation \begin{equation} \frac{\text{d}\hat{\rho}}{\text{d}t}=\frac{1}{i\hbar}[\hat{H}_{\text{ld} }^{\left( N\right) }+\hat{H}_{\text{ld,g}}^{\prime},\hat{\rho} ]+\mathcal{L}_{\mu1}\left[ \hat{\rho}\right] . \label{eq:ME_Gen} \end{equation} Here, $\hat{\rho}$ is the density operator of the ladder, and $\mathcal{L}_{\mu 1}\left[ \hat{\rho}\right] $ represents the Lindblad dissipation terms, \begin{align} \mathcal{L}_{\mu1}\left[ \hat{\rho}\right] & =-\gamma_{1}\left\vert \mu_{1}\right\rangle \! \left\langle \mu_{1}\right\vert \left\langle \mu_{1}\right\vert \! \hat{\rho} \! \left\vert \mu_{1}\right\rangle \! + \! \gamma_{1}\left\vert 0\right\rangle \! \left\langle 0\right\vert \left\langle 0\right\vert \! \hat{\rho} \! \left\vert 0\right\rangle \! \nonumber\\ & - \! \frac{\Gamma_{1}}{2}\left\vert 0\right\rangle \! \left\langle \mu_{1}\right\vert \left\langle 0\right\vert \! \hat{\rho} \! \left\vert \mu_{1}\right\rangle \! - \! \frac{\Gamma_{1}}{2}\left\vert \mu_{1}\right\rangle \! \left\langle 0\right\vert \left\langle \mu_{1}\right\vert \! \hat{\rho} \! \left\vert 0\right\rangle , \end{align} where $\gamma_{1}$ ($\Gamma_{1}$) is the relaxation (dephasing) rate of the single-particle ground state $\left\vert \mu_{1}\right\rangle $. Using Eq.~(\ref{eq:ME_Gen}), we can find the exact solution of $\left\langle \mu _{1}\right\vert \! \hat{\rho}\left( t\right) \! \left\vert \mu_{1}\right\rangle $ (see Appendix~\ref{Append:exact}), i.e., the fidelity of the single-particle ground state at the time $t$. 
However, in the strong-coupling limit ($C_{1}\gg\gamma_{1}$, $\Gamma_{1}$), the generation fidelity can be approximated as \begin{equation} \left\langle \mu_{1}\right\vert \! \hat{\rho} \! \left\vert \mu_{1}\right\rangle =\frac{1}{2}\left[ 1-e^{-\frac{1}{2}\left( \gamma_{1}+\frac{\Gamma_{1}}{2}\right) t}\cos\left( C_{1}t\right) \right] . \end{equation} Suppose the relaxation (dephasing) rate of the qubit at the site $\left( d,l\right) $ is $\gamma_{d,l}$ ($\Gamma_{d,l}$); then $\gamma_{1}$ and $\Gamma_{1}$ can be estimated by \begin{equation} \gamma_{1}=\sum_{d,l}|\chi_{d,l}^{\left( 1\right) }|^{2}\gamma_{d,l}\text{ and }\Gamma_{1}=\sum_{d,l}|\chi_{d,l}^{\left( 1\right) }|^{2}\Gamma_{d,l}. \end{equation} We consider homogeneous qubit decay rates, e.g., $\gamma_{d,l}/2\pi\equiv0.05 \operatorname{MHz} $ and $\Gamma_{d,l}/2\pi\equiv0.1 \operatorname{MHz} $, while the other parameters remain unchanged. Then, after a $\pi$ pulse, the fidelity is about $\left\langle \mu_{1}\right\vert \! \hat{\rho}(\frac{\pi}{C_{1}}) \! \left\vert \mu_{1}\right\rangle =0.9273$. In Fig.~\ref{fig:exact-approximate}, we have shown the exact solution and the approximate one for the weak ($\Gamma_{1}=10C_{1}$), critical ($\Gamma _{1}=C_{1}$), and strong ($\Gamma_{1}=0.1C_{1}$) coupling cases, where good agreement is found in the last case. \begin{figure} \caption{(color online). Single-particle ground state fidelity $\left\langle \mu _{1}\right\vert \!\hat{\rho}\!\left\vert \mu_{1}\right\rangle $ as a function of the time $t$ under the effect of the environment, for the dephasing rate $\Gamma_{1}$ taking (a) $10C_{1}$, (b) $C_{1}$, and (c) $0.1C_{1}$, respectively. Here, $C_{1}/2\pi=1\operatorname{MHz}$ is the Rabi frequency. The relaxation rate takes $\gamma_{1}=0.5\Gamma_{1}$ in all plots. The solid red (dashed blue) curve denotes the exact solution (the approximate one in the strong-coupling limit $C_{1}\gg\gamma_{1},\Gamma_{1}$). } \label{fig:exact-approximate} \end{figure} \subsection{Measurement scheme\label{sec:Measurement Scheme}} To observe the vortex-Meissner phase transition, an indispensable task is to measure the particle currents between pairs of adjacent sites. In superconducting quantum circuits, the qubit state can be dispersively read out by a microwave resonator, which enables us to extract the particle current from the Rabi oscillation between a pair of adjacent sites. To achieve this, we can tune the energy levels of the flux qubits that connect to the pair of sites of interest such that both sites are decoupled from the others. For example, to investigate the Rabi oscillation between $\left( \text{L},l\right) $ and $\left( \text{L},l+1\right) $, we can tune the flux qubits at the sites $\left( \text{L},l-1\right) $, $\left( \text{L} ,l+2\right) $, $\left( \text{R},l\right) $, and $\left( \text{R} ,l+1\right) $ such that they are decoupled from the ones at $\left( \text{L},l\right) $ and $\left( \text{L},l+1\right) $. Then, the bare Hamiltonian that governs the evolution of the adjacent sites $\left( \text{L},l\right) $ and $\left( \text{L},l+1\right) $ can be given by $\hat{H}_{\text{L},l}=-\hbar g\hat{\sigma}_{+}^{\left( \text{L},l+1\right) }\hat{\sigma}_{-}^{\left( \text{L},l\right) }+\mathrm{H.c.}$. Unlike cold atoms in optical lattices, the particles stored in the flux qubits suffer from the relaxation rate $\gamma_{d,l}$ and the dephasing rate $\Gamma_{d,l}$ at the site $\left( d,l\right) $. 
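As a brief aside before turning to the open-system treatment of the readout, the $\pi$-pulse generation fidelity quoted above can be reproduced in a few lines of Python (purely illustrative; it simply evaluates the strong-coupling approximation with the homogeneous decay rates assumed in the text).
\begin{verbatim}
import numpy as np

# Rates as angular frequencies in rad/us (i.e., 2*pi times the MHz values)
C1     = 2*np.pi*1.0     # Rabi frequency of the generation pulse
gamma1 = 2*np.pi*0.05    # relaxation rate of |mu_1>, homogeneous qubits
Gamma1 = 2*np.pi*0.1     # dephasing rate of |mu_1>

def fidelity(t):
    # Approximate generation fidelity in the strong-coupling limit
    return 0.5*(1 - np.exp(-0.5*(gamma1 + Gamma1/2)*t)*np.cos(C1*t))

t_pi = np.pi/C1          # pi-pulse duration, 0.5 us
print(fidelity(t_pi))    # -> about 0.927, as quoted in the text
\end{verbatim}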
Thus, the interaction between the qubits at $\left( \text{L},l\right) $ and $\left( \text{L},l+1\right) $ should also be described by the Lindblad master equation, i.e., \begin{equation} \frac{\text{d}\hat{\rho}_{\text{L},l}}{\text{d}t}=\frac{[\hat{H}_{\text{L} ,l},\hat{\rho}_{\text{L},l}]}{i\hbar}+\mathcal{L}_{\text{L,}l}\left[ \hat{\rho}_{\text{L},l}\right] +\mathcal{L}_{\text{L,}l+1}\left[ \hat{\rho }_{\text{L},l}\right] . \end{equation} Here, $\hat{\rho}_{\text{L},l}=\sum_{l_{1}=l}^{l+1}\sum_{l_{2}=l} ^{l+1}\left\vert \text{L,}l_{1}\right\rangle \! \left\langle \text{L,}l_{1}\right\vert \hat{\rho}\left\vert \text{L,} l_{2}\right\rangle \! \left\langle \text{L,}l_{2}\right\vert $ is the subspace truncation of the global density operator $\hat{\rho}$, and the Lindblad terms \begin{align} \mathcal{L}_{\mathrm{L},l} \! \left[ \hat{\rho}_{\text{L},l}\right] & =-\gamma_{\mathrm{L},l}\left\vert \mathrm{L},l\right\rangle \! \left\langle \mathrm{L},l\right\vert \! \left\langle \mathrm{L},l\right\vert \! \hat{\rho}_{\text{L},l} \! \left\vert \mathrm{L},l\right\rangle \nonumber\\ & +\gamma_{\mathrm{L},l}\left\vert 0\right\rangle \! \left\langle 0\right\vert \! \left\langle 0\right\vert \! \hat{\rho}_{\text{L},l} \! \left\vert 0\right\rangle -\frac{\Gamma_{\mathrm{L},l}}{2}\left\vert 0\right\rangle \! \left\langle \mathrm{L},l\right\vert \! \left\langle 0\right\vert \! \hat{\rho}_{\text{L},l} \! \left\vert \mathrm{L},l\right\rangle \nonumber\\ & -\frac{\Gamma_{\mathrm{L},l}}{2}\left\vert \mathrm{L},l\right\rangle \! \left\langle 0\right\vert \! \left\langle \mathrm{L},l\right\vert \hat{\rho}_{\text{L},l} \! \left\vert 0\right\rangle \end{align} represent the dissipation into the environment. In the limit of strong coupling (i.e., $g\gg\gamma_{\mathrm{L},l},\Gamma_{\mathrm{L},l}$), the population difference between $\left( \text{L},l+1\right) $ and $\left( \text{L},l\right) $, which defined by $P_{\text{L,}l}\left( t\right) =\left\langle \text{L},l+1\right\vert \! \hat{\rho}_{\text{L},l} \! \left\vert \text{L},l+1\right\rangle -\left\langle \text{L},l\right\vert \! \hat{\rho}_{\text{L},l} \! \left\vert \text{L},l\right\rangle $, can be obtained using the Lindblad master equation as \begin{equation} P_{\text{L,}l}\left( t\right) \! = \! e^{-\tilde{\gamma}_{\text{L,}l}t} \! \left[ \cos\left( \tilde{g}t\right) \! P_{\text{L,}l} \! \left( 0\right) +\sin\left( \tilde{g}t\right) \! \frac{j_{l,l+1}^{\left( \text{L}\right) }}{g}\right] , \label{eq:PopulationDifference} \end{equation} where $\tilde{\gamma}_{\text{L},l}=\left( \gamma_{\text{L,}l}+\gamma _{\text{L,}l+1}+\Gamma_{\text{L,}l}+\Gamma_{\text{L,}l+1}\right) /4$ and $\tilde{g}=2g$. Now, we can confidently assert that the particle current $j_{l,l+1}^{\left( \text{L}\right) }$ can be extracted from the population difference after fitting the measured data using Eq. ~ (\ref{eq:PopulationDifference}). The discussions made above can also apply to extracting the particle current on the R leg, for which, the population difference between $\left( \text{R},l\right) $ and $\left( \text{R} ,l+1\right) $ is namely Eq. ~ (\ref{eq:PopulationDifference}) with the subscript L replaced with R. Similarly, the population difference between $\left( \text{R},l\right) $ and $\left( \text{L},l\right) $ is \begin{equation} P_{\text{LR,}l} \! \left( t\right) =e^{-\tilde{\gamma}_{\text{LR,}l}t} \!\! \left[ \cos(\tilde{K}t)P_{\text{LR,}l} \! \left( 0\right) +\sin(\tilde{K}t)\frac{j_{\text{LR},l}}{K}\right] \! 
\text{,} \end{equation} where $\tilde{\gamma}_{\text{LR},l}=\left( \gamma_{\text{L,}l}+\gamma _{\text{R,}l}+\Gamma_{\text{L,}l}+\Gamma_{\text{R,}l}\right) /4$, $\tilde {K}=2K$, and strong interleg coupling (i.e., $K\gg\gamma_{\mathrm{L},l} ,\Gamma_{\mathrm{L},l}$) has been assumed. In Fig.~\ref{fig:Measurement}, we present the population differences $P_{\text{L,}l}\left( t\right) $ and $P_{\text{LR,}l}\left( t\right) $ as functions of time for $l=N/2$, with the chain length $N=20$, the intraleg coupling strength $g/2\pi=3.5 \operatorname{MHz} $, the interleg coupling strength $K/2\pi=1.75 \operatorname{MHz} $ (such that $K/g=0.5$), the effective magnetic flux $\phi=\pi/2$, and the decay rates $\gamma_{d,l^{\prime}}/2\pi\equiv0.05 \operatorname{MHz} $ and $\Gamma_{d,l^{\prime}}/2\pi\equiv0.1 \operatorname{MHz} $. The corresponding particle currents are $j_{l,l+1}^{\left( \text{L}\right) }=0.43 \operatorname{MHz} $ and $j_{\text{LR},l}=-0.5785 \operatorname{MHz} $. We find that, in the strong-coupling limit, the approximate analytical solutions (solid blue) agree very well with the exact numerical simulation results (dashed green), especially in the first few periods. However, at longer times, some deviation between the approximate and numerical results appears. Thus, to improve the measurement accuracy, we advise fitting the data from the first few oscillation periods. Having measured the particle currents between adjacent sites, we can then calculate the chiral current given in Eq.~(\ref{eq:chiral_current}), which enables us to obtain the vortex-Meissner phase transition diagram for different interleg coupling strengths $K$ and effective magnetic fluxes $\phi$ (see Fig.~\ref{fig:chiralcurrentcalculation}). The current patterns (see Fig.~\ref{fig:wavefunction6}) can also be obtained from the particle currents, which enables us to calculate the vortex density for different $K$ and $\phi$ (see Fig.~\ref{fig:vortexdensity4}). In short, the vortex-Meissner phase transition can be determined from the measured particle currents between adjacent sites. \begin{figure}\label{fig:Measurement} \end{figure} \section{Conclusion\label{sec:DC}} We have introduced a circuit scheme for constructing a two-leg fermionic ladder with X-shape gradiometer superconducting flux qubits. In such a scheme, we have shown that, with two-tone driving fields, an artificial effective magnetic flux can be generated for each plaquette, which can be felt by the \textquotedblleft fermionic\textquotedblright\ particle and thus affects its motion. Compared with the previous method for generating an effective magnetic flux without the aid of couplers~\cite{Alaeian2019PRA}, our method does not require the qubit circuit to possess a weak anharmonicity; on the contrary, it has a simple analytical expression in the strong-anharmonicity regime. The maintenance of anharmonicity (or nonlinearity) is crucial, since it is indispensable for demonstrating quantum behaviors~\cite{Wendin2007LTP}. By modifying the interleg coupling strength or the effective magnetic flux, both of which are tunable via the phases of the classical driving fields, the vortex-Meissner phase transition, which originates from the competition between these two parameters, can in principle be observed in the single-particle ground state. 
In the vortex phase, the number of vortex cores is larger than one, while in the Meissner phase there is only one large vortex, with the currents mainly flowing along the boundaries of the ladder. The phase transition boundary is given analytically. Besides, the wave functions, current patterns, and quasimomentum distributions in both phases are exhaustively discussed. The vortex densities for different parameters have also been presented. Since the vortex and Meissner phases are discussed in the single-particle ground state, which is not the (global) ground state, we have proposed a method to generate the single-particle ground state from the ground state with just a one-step $\pi$ pulse, realized by simultaneously driving all the qubits while causing no undesired excitations. The required driving fields should share the same profile as the wave function of the single-particle ground state except for a scaling factor, namely the generation Rabi frequency. We have shown that the particle current between two adjacent sites can be extracted from the Rabi oscillations between them, assuming that the other sites connected to them are tuned to be decoupled. The detailed analytical expression has been given for fitting the experimentally measured data. The particle-current measurement between adjacent sites enables the calculation of the chiral particle current, which is critical for experimentally determining the vortex-Meissner phase transition. For rigor, the effects of the environment have also been considered for generating the single-particle ground state and for measuring the particle currents between adjacent sites. To guarantee the generation fidelity and measurement accuracy, we find that the sample needs to reach the strong-coupling regime, i.e., the coupling strength should be much larger than the decay rates. This condition, we think, should not be very difficult to meet, since ultrastrong coupling~\cite{Niemczyk2010NP,Forn2016NP,Yoshihara2017NP} and decoherence times of about tens of microseconds~\cite{Yan2016NC,Abdurakhimov2019APL} have both been reported in flux qubit systems. \section{Acknowledgments} We are grateful to Wei Han for helpful discussions. Y. J. Z. is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 11904013 and No. 11847165. Y. X. L. is supported by the Key-Area Research and Development Program of GuangDong Province under Grant No. 2018B030326001 and the National Basic Research Program (973) of China under Grant No. 2017YFA0304304. W. M. L. was supported by the National Key R\&D Program of China under Grant No. 2016YFA0301500, the NSFC under Grant No. 61835013, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grants No. XDB01020300 and No. XDB21030300. \appendix \section{Periodical modulation of the qubit frequency\label{appendix:PMQF}} Now, we investigate the periodical modulation of the qubit frequency for a general qubit (e.g., flux qubit, transmon qubit, etc.) with multiple energy levels. 
The qubit Hamiltonian with two-tone driving fields can be represented as \begin{equation} \hat{H}_{\text{q}}=\hat{H}_{0}+\frac{\hbar}{2}\sum_{n}^{N-1}\sum_{j=1} ^{2}\left( \hat{\sigma}_{n+1,n}\Omega_{jn}e^{-i\tilde{\omega}_{j} t}+\mathrm{H.c.}\right) , \label{eq:Hq_append} \end{equation} where $\hat{H}_{0}=\sum_{n}\hbar\omega_{qn}\hat{\sigma}_{nn}$ and $\hat {\sigma}_{nn}=\left\vert n\right\rangle \left\langle n\right\vert $ ($\hat{\sigma}_{n+1,n}=\left\vert n+1\right\rangle \left\langle n\right\vert )$ is the projection (ladder) operator. In the interaction picture defined by $\hat{U}_{0}\left( t\right) =e^{-i\hat{H}_{0}t}$, the Hamiltonian $\hat {H}_{\text{q}}$ is transformed into \begin{equation} \hat{H}_{\text{I}}\left( t\right) =\frac{\hbar}{2}\sum_{n}^{N-1}\sum _{j=1}^{2}\left( \hat{\sigma}_{n+1,n}\Omega_{jn}e^{-i\delta_{jn} t}+\mathrm{H.c.}\right) \text{,} \end{equation} where $\delta_{jn}=\tilde{\omega}_{j}-\left( \omega_{q,n+1}-\omega _{q,n}\right) $ is the detuning between the driving field and the applied energy level. To derive the effective Hamiltonian, we employ the second-order perturbation theory in the large-detuning regime $\left\vert \Omega_{jn}/\delta_{j^{\prime }n}\right\vert ^{2}\ll1$, thus resulting in the evolution operator in the interaction as \begin{align} \hat{U}_{\text{I}}\left( t\right) \cong & 1+\frac{1}{i\hbar}\int_{0} ^{t}\text{d}t^{\prime}\hat{H}_{\text{I}}\left( t^{\prime}\right) \nonumber\\ & +\frac{1}{\left( i\hbar\right) ^{2}}\int_{0}^{t}\text{d}t^{\prime}\hat {H}_{\text{I}}\left( t^{\prime}\right) \int_{0}^{t^{\prime}}\hat {H}_{\text{I}}\left( t^{\prime\prime}\right) \text{d}t^{\prime\prime}. \label{eq:pert} \end{align} In the time scale $t\gtrsim\frac{1}{\left\vert \Omega_{jn}\right\vert }$, which satisfies $t\gg\frac{1}{\left\vert \delta_{jn}\right\vert }$, the fast-oscillating term (i.e., the first-order perturbative term) in Eq. ~ (\ref{eq:pert}) can be neglected, thus resulting in \begin{align} \hat{U}_{\text{I}}\cong & 1 \! +\frac{1}{i^{2}}\sum_{n=0}^{N-1}\int_{0}^{t}\text{d}t^{\prime}\sum_{j=1} ^{2}\frac{\left\vert \Omega_{jn}\right\vert ^{2}}{4}\left( \frac{\hat{\sigma }_{n+1,n+1}}{i\delta_{jn}}-\frac{\hat{\sigma}_{n,n}}{i\delta_{jn}}\right) \nonumber\\ & +\frac{1}{4i^{2}}\sum_{n=0}^{N-1}\int_{0}^{t}\text{d}t^{\prime}\left( \frac{O_{n}}{i\delta_{1n}}\hat{\sigma}_{n+1,n+1}-\frac{O_{n}^{\ast}} {i\delta_{1n}}\hat{\sigma}_{n,n}\right) \nonumber\\ & + \! \frac{1}{4i^{2}} \! \sum_{n=0}^{N-1} \! \int_{0}^{t}\text{d}t^{\prime} \! \left( \frac{O_{n}^{\ast}}{i\delta_{2n}}\hat{\sigma}_{n+1,n+1}-\hat{\sigma }_{n,n}\frac{O_{n}}{i\delta_{2n}}\right) \! , \end{align} where the symbol $O_{n}\equiv O_{n}\left( t\right) =$ $\Omega_{1n}^{\ast }\Omega_{2n}e^{-i\tilde{\delta}t}$ and the detuning $\tilde{\delta}=$ $\delta_{2n}-\delta_{1n}=\tilde{\omega}_{2}-\tilde{\omega}_{1}$. 
Assuming $|\tilde{\delta}|\ll\left\vert \delta_{jn}\right\vert $, which implies $\delta_{1n}\approx\delta_{2n}$, we can obtain the effective Hamiltonian using the relation $\hat{H}_{\text{I,eff}}=i\hbar\partial_{t}\hat{U}_{\text{I}}\left( t\right) $ as \begin{align} \hat{H}_{\text{I,eff}} & =\sum_{j=1}^{2}\frac{\hbar\left\vert \Omega_{j0}\right\vert ^{2}}{4\delta_{j0}}\hat{\sigma}_{00}\nonumber\\ & +\sum_{n=0}^{N-1}\sum_{j}\left( \frac{\hbar\left\vert \Omega_{j,n+1}\right\vert ^{2}}{4\delta_{j,n+1}}-\frac{\hbar\left\vert \Omega_{jn}\right\vert ^{2}}{4\delta_{jn}}\right) \hat{\sigma}_{n+1,n+1}\nonumber\\ & -\sum_{n=0}^{N-1}\frac{\hbar}{2}\frac{\left\vert \Omega_{1n}\Omega_{2n}\right\vert }{\delta_{1n}}\hat{\sigma}_{n+1,n+1}\cos\left( \tilde{\delta}t+\phi_{n}\right) \nonumber\\ & +\sum_{n=0}^{N-1}\frac{\hbar}{2}\frac{\left\vert \Omega_{1n}\Omega_{2n}\right\vert }{\delta_{1n}}\hat{\sigma}_{nn}\cos\left( \tilde{\delta}t+\phi_{n}\right) , \end{align} where we have defined $\phi_{1n}-\phi_{2n}\equiv\phi_{n}$. Omitting an irrelevant constant, the effective Hamiltonian can be further represented as \begin{equation} \hat{H}_{\text{I,eff}}\cong\sum_{n=1}^{N}\hbar\left[ \nu_{n}+\eta_{n}\cos\left( \tilde{\delta}t+\phi_{n-1}\right) \right] \hat{\sigma}_{n,n}, \end{equation} where $\nu_{n}$ is the Stark shift and $\eta_{n}$ is the periodic modulation strength: \begin{align} \nu_{n} & =\sum_{j=1}^{2}\frac{\left\vert \Omega_{jn}\right\vert ^{2}}{4\delta_{jn}}-\frac{\left\vert \Omega_{j,n-1}\right\vert ^{2}}{4\delta_{j,n-1}}-\frac{\left\vert \Omega_{j0}\right\vert ^{2}}{4\delta_{j0}},\\ \eta_{n} & =\frac{1}{2}\left( \frac{\left\vert \Omega_{1n}\Omega_{2n}\right\vert }{\delta_{1n}}-\frac{\left\vert \Omega_{1,n-1}\Omega_{2,n-1}\right\vert }{\delta_{1,n-1}}-\frac{\left\vert \Omega_{10}\Omega_{20}\right\vert }{\delta_{10}}\right) . \end{align} Returning to the original frame, the effective Hamiltonian is transformed into the form \begin{equation} \hat{H}_{\text{eff}}\cong\sum_{n=1}^{N}\hbar\left[ \tilde{\omega}_{qn}+\eta_{n}\cos\left( \tilde{\delta}t+\phi_{n-1}\right) \right] \hat{\sigma}_{n,n}, \end{equation} where $\tilde{\omega}_{qn}=\omega_{qn}+\nu_{n}$. In the large-detuning regime, the Stark shift $\nu_{n}$ is a small quantity compared to $\omega_{qn}$. If the qubit circuit possesses adequate anharmonicity, and all the control pulses involved are carefully designed to avoid excitation to higher energy levels, then the Hamiltonian can be confined to the single-particle case, arriving at \begin{equation} \hat{H}_{\text{eff}}=\hbar\omega_{q1}\hat{\sigma}_{11}+\hbar\eta_{1}\cos\left( \tilde{\delta}t+\phi_{0}\right) \hat{\sigma}_{11}. \end{equation} If we further focus on the flux qubit circuit, which is typically treated as an ideal two-level system (formally $\delta_{11}=\infty$), we have the simple result $\eta_{1}\approx-\frac{\left\vert \Omega_{10}\Omega_{20}\right\vert }{\delta_{10}}$, and $\hat{H}_{\text{eff}}$ takes the form of Eq.~(\ref{ea:Hq_modulated}). Now, we discuss the limit in which the anharmonicity of the qubit is so weak that Eq.~(\ref{eq:Hq_append}) takes the form of a driven resonator. In this case, the parameters can be represented as $\omega_{qn}=n\bar{\omega}$, $\Omega_{jn}=\sqrt{n+1}\bar{\Omega}_{j}$, and $\delta_{jn}=\mathrm{const}$, where $\bar{\omega}$ is the fundamental frequency of the resonator and $\bar{\Omega}_{j}$ is the driving strength on the resonator. 
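As a quick numerical check of the expressions for $\nu_{n}$ and $\eta_{n}$, one can evaluate both quantities directly; the minimal sketch below uses assumed, purely illustrative parameter values (not taken from any experiment) and also evaluates the harmonic parametrization just introduced.
\begin{verbatim}
import numpy as np

# Minimal sketch: evaluate the Stark shift nu_n and the modulation strength
# eta_n from the closed-form expressions above.  All parameter values here
# are assumptions chosen only for illustration.

def nu_eta(Omega, delta, n):
    """Omega[j, m] and delta[j, m] for drives j = 0, 1 and levels m >= 0."""
    nu = sum(abs(Omega[j, n])**2 / (4 * delta[j, n])
             - abs(Omega[j, n - 1])**2 / (4 * delta[j, n - 1])
             - abs(Omega[j, 0])**2 / (4 * delta[j, 0]) for j in (0, 1))
    eta = 0.5 * (abs(Omega[0, n] * Omega[1, n]) / delta[0, n]
                 - abs(Omega[0, n - 1] * Omega[1, n - 1]) / delta[0, n - 1]
                 - abs(Omega[0, 0] * Omega[1, 0]) / delta[0, 0])
    return nu, eta

m = np.arange(4)
Om = np.array([0.05 * np.sqrt(m + 1), 0.04 * np.sqrt(m + 1)])  # Omega_{jm}

# Harmonic (resonator) parametrization: constant detunings -> nu_1 = eta_1 = 0.
print(nu_eta(Om, np.ones((2, 4)), 1))

# Anharmonic qubit: level-dependent detunings -> nonzero nu_1 and eta_1.
print(nu_eta(Om, np.array([1.0 + 0.3 * m, 1.0 + 0.3 * m]), 1))
\end{verbatim}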
Using such parameters, one finds that the Stark shift $\nu_{n}=0$ and the modulation strength $\eta_{n}=0$, and thus the periodic modulation of the qubit frequency vanishes. Therefore, to achieve the periodic modulation using two-tone driving fields, the superconducting qubit circuit should maintain a nonzero anharmonicity. In principle, the periodic modulation effect exists only if the anharmonicity of the qubit circuit of interest is nonzero. This requires a wider range of anharmonicity of the qubit circuit than in Ref.~\cite{Alaeian2019PRA}, where the anharmonicity of the transmon qubit circuit needs to be negligibly small. Since nonlinearity is a key factor for demonstrating quantum phenomena~\cite{Wendin2007LTP}, we think that periodically modulating a qubit circuit with better anharmonicity is significant for exploring nonequilibrium quantum physics. \section{Transformation to the interaction picture\label{append:Treatment}} The full Hamiltonian with periodically modulated qubit frequency is given by \begin{align} \hat{H}_{\text{f}}= & \sum_{l}\sum_{d=\text{L,R}}\left[ \frac{\hbar}{2}\omega_{d}\sigma_{z}^{\left( d,l\right) }-\frac{\hbar}{2}\Omega\cos\left( \delta t+\phi_{d,l}\right) \sigma_{z}^{\left( d,l\right) }\right] \nonumber\\ & -\sum_{l}\sum_{d=\text{L,R}}\left[ \hbar g_{0}\sigma_{-}^{\left( d,l\right) }\sigma_{+}^{\left( d,l+1\right) }+\text{H.c.}\right] \nonumber\\ & -\sum_{l}\left[ \hbar K_{0}\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{R},l\right) }+\text{H.c.}\right] , \end{align} where the subscripts L and R denote the left and right legs of the ladder, $l$ labels the lattice site, $\omega_{d}$ ($d=\mathrm{L,R}$) is the qubit frequency on leg $d$, $g_{0}$ is the intraleg tunneling rate, and $K_{0}$ is the interleg tunneling rate. To eliminate the time-dependent terms in Eq.~(\ref{eq:H}), we now apply to it a unitary transformation 
$U_{d}\left( t\right) = {\textstyle\prod\limits_{l}} {\textstyle\prod\limits_{d=\mathrm{L,R}}} \exp\left[ iF_{l,d}\left( t\right) \right] $ with \begin{equation} F_{l,d}\left( t\right) =\frac{\sigma_{z}^{\left( d,l\right) }}{2}\left[ \frac{\Omega}{\delta}\sin\left( \delta t+\phi_{d,l}\right) +\omega_{d}t\right] . \end{equation} In this manner we pass to the interaction picture and obtain the transformed Hamiltonian \begin{align} \hat{H}_{\text{f}}= & -\sum_{l}\left[ \hbar g_{0}\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{L},l+1\right) }e^{i\alpha_{\text{L},l}\left( t\right) }+\text{H.c.}\right] \nonumber\\ & -\sum_{l}\left[ \hbar g_{0}\sigma_{-}^{\left( \text{R},l\right) }\sigma_{+}^{\left( \text{R},l+1\right) }e^{i\alpha_{\text{R},l}\left( t\right) }+\text{H.c.}\right] \nonumber\\ & -\sum_{l}\left[ \hbar K_{0}\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{R},l\right) }e^{i\beta_{l}\left( t\right) }+\text{H.c.}\right] . \end{align} Here, the phase parameters $\alpha_{d,l}\left( t\right) $ and $\beta_{l}\left( t\right) $ are \begin{align} \alpha_{d,l}\left( t\right) & =\left[ \frac{2\Omega}{\delta}\sin\phi_{d,l}^{\left( -\right) }\right] \cos\left( \delta t+\phi_{d,l}^{\left( +\right) }\right) ,\quad d=\text{L,R},\\ \beta_{l}\left( t\right) & =\left[ \frac{2\Omega}{\delta}\sin\phi_{l}^{\left( -\right) }\right] \cos\left( \delta t+\phi_{l}^{\left( +\right) }\right) +\Delta t, \end{align} where $\phi_{d,l}^{\left( \pm\right) }=\left( \phi_{d,l}\pm\phi_{d,l+1}\right) /2$, $\phi_{l}^{\left( \pm\right) }=\left( \phi_{\text{L},l}\pm\phi_{\text{R},l}\right) /2$, and $\Delta=\omega_{\text{R}}-\omega_{\text{L}}$ is the qubit frequency difference between the two legs. Furthermore, we define $\phi_{d,l}=\phi_{d}-\phi l$ and $\phi_{\text{L}}=-\phi_{\text{R}}=\phi_{0}$, and use the relation $\exp\left( ix\sin\theta\right) =\sum_{n}J_{n}\left( x\right) e^{in\theta}$, where $J_{n}\left( x\right) $ is the $n$th Bessel function of the first kind, which yields the Hamiltonian~\cite{Liu2014NJP,Zhao2015PRA} \begin{align} \hat{H}_{\text{f}}^{\prime}= & -\sum_{ln}\hbar g_{0}\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{L},l+1\right) }J_{xnl}^{\left( +\right) }\left( t\right) +\text{H.c.}\nonumber\\ & -\sum_{ln}\hbar g_{0}\sigma_{-}^{\left( \text{R},l\right) }\sigma_{+}^{\left( \text{R},l+1\right) }J_{xnl}^{\left( -\right) }\left( t\right) +\text{H.c.}\nonumber\\ & -\sum_{ln}\hbar K_{0}\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{R},l\right) }J_{ynl}\left( t\right) +\text{H.c.} \end{align} Here, the parameters $J_{xnl}^{\left( \pm\right) }\left( t\right) $ and $J_{ynl}\left( t\right) $ can be explicitly given by \begin{align} J_{xnl}^{\left( \pm\right) } & =i^{n}J_{n}\left( \eta_{x}\right) \exp\left[ in\left( \delta t\pm\phi_{0}-\phi l-\frac{\phi}{2}\right) \right] ,\\ J_{ynl} & =i^{n}J_{n}\left( \eta_{y}\right) \exp\left[ in\left( \delta t-\phi l\right) +i\Delta t\right] , \end{align} where $\eta_{x}=\frac{2\Omega}{\delta}\sin\left( \frac{\phi}{2}\right) $ and $\eta_{y}=\frac{2\Omega}{\delta}\sin\left( \phi_{0}\right) $. 
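The Bessel weights of the individual sidebands can be evaluated directly; the short sketch below (with assumed, purely illustrative values of $\Omega$, $\delta$, $\phi$, and $\phi_{0}$) shows how the zeroth and first harmonics, which are the ones retained in the next step, are tuned by the two-tone driving strength $\Omega$.
\begin{verbatim}
import numpy as np
from scipy.special import jv

# Illustrative sketch: weights J_n(eta_x) and J_n(eta_y) of the n-th sideband
# in the expansion above.  Parameter values are assumptions, not from the paper.
delta = 2 * np.pi * 100e6            # assumed modulation frequency (rad/s)
phi, phi0 = np.pi / 2, np.pi / 4     # assumed phase gradients
for Omega in 2 * np.pi * np.array([10e6, 30e6, 60e6]):
    eta_x = 2 * Omega / delta * np.sin(phi / 2)
    eta_y = 2 * Omega / delta * np.sin(phi0)
    print(f"Omega/2pi = {Omega / (2 * np.pi * 1e6):4.0f} MHz: "
          f"J0(eta_x) = {jv(0, eta_x):+.3f}, J1(eta_y) = {jv(1, eta_y):+.3f}")
\end{verbatim}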
We now assume that the detuning $\delta$ is tuned to match $\Delta$, i.e., $\delta=\Delta$; then, neglecting fast-oscillating terms, we obtain the effective Hamiltonian \begin{align} \hat{H}_{\text{ld}}= & -\sum_{l}\sum_{d=\text{L,R}}\hbar g\sigma_{-}^{\left( d,l\right) }\sigma_{+}^{\left( d,l+1\right) }+\text{H.c.} \nonumber\\ & -\sum_{l}\hbar K\sigma_{-}^{\left( \text{L},l\right) }\sigma_{+}^{\left( \text{R},l\right) }\exp\left( i\phi l\right) +\text{H.c.,} \end{align} where $g=g_{0}J_{0}\left( \eta_{x}\right) $ and $K=K_{0}J_{1}\left( \eta_{y}\right) $ can in principle be tuned by modifying the two-tone driving strength $\Omega$. \section{Exact solution of the fidelity in the presence of the environment\label{Append:exact}} As the main text demonstrates, the effect of the environment on the state generation process can be described by the Lindblad master equation \begin{equation} \frac{\text{d}\hat{\rho}}{\text{d}t}=\frac{1}{i\hbar}\left[ \hat{H}_{\text{ld}}^{\left( N\right) }+\hat{H}_{\text{ld,g}}^{\prime},\hat{\rho}\right] +\mathcal{L}_{\mu1}\left[ \hat{\rho}\right] . \label{eq:ME_append} \end{equation} Here, $\hat{\rho}$ is the density operator of the ladder, and $\mathcal{L}_{\mu 1}\left[ \hat{\rho}\right] $ represents the Lindblad dissipation terms, \begin{align} \mathcal{L}_{\mu1}\left[ \hat{\rho}\right] & =-\gamma_{1}\left\vert \mu_{1}\right\rangle \left\langle \mu_{1}\right\vert \left\langle \mu_{1}\right\vert \hat{\rho}\left\vert \mu_{1}\right\rangle +\gamma_{1}\left\vert 0\right\rangle \left\langle 0\right\vert \left\langle \mu_{1}\right\vert \hat{\rho}\left\vert \mu_{1}\right\rangle \nonumber\\ & -\frac{\Gamma_{1}}{2}\left\vert \mu_{1}\right\rangle \left\langle 0\right\vert \left\langle \mu_{1}\right\vert \hat{\rho}\left\vert 0\right\rangle -\frac{\Gamma_{1}}{2}\left\vert 0\right\rangle \left\langle \mu_{1}\right\vert \left\langle 0\right\vert \hat{\rho}\left\vert \mu_{1}\right\rangle , \end{align} where $\gamma_{1}$ ($\Gamma_{1}$) is the relaxation (dephasing) rate of the single-particle ground state $\left\vert \mu_{1}\right\rangle $. Solving Eq.~(\ref{eq:ME_append}) in the Hilbert space spanned by $\left\{ \left\vert 0\right\rangle ,\left\vert \mu_{1}\right\rangle \right\} $, we can obtain the population of $\left\vert \mu_{1}\right\rangle $ after a time $t$, i.e., \begin{align} \rho_{11} & =\left\langle \mu_{1}\right\vert \hat{\rho}\left\vert \mu_{1}\right\rangle \nonumber\\ & =r_{0}-r_{0}\operatorname{Re}\left\{ \left( 1-\frac{i\gamma_{1}^{\prime}}{2C_{1}^{\prime}}\right) e^{-\frac{1}{2}\gamma_{1}^{\prime}t}\exp\left( itC_{1}^{\prime}\right) \right\} . \end{align} Here, the intermediate parameters are explicitly given as follows, \begin{align} r_{0} & =\frac{\frac{C_{1}^{2}}{2}}{C_{1}^{2}+\frac{\gamma_{1}\Gamma_{1}}{2}},\\ C_{1}^{\prime} & =\sqrt{C_{1}^{2}-\frac{1}{4}\left( \gamma_{1}-\frac{\Gamma_{1}}{2}\right) ^{2}},\\ \gamma_{1}^{\prime} & =\gamma_{1}+\frac{\Gamma_{1}}{2}, \end{align} and $\rho_{11}$ is also called the fidelity of $\left\vert \mu_{1}\right\rangle $. In the strong-coupling limit ($C_{1}\gg\gamma_{1},\Gamma_{1}$), we have $r_{0}\approx\frac{1}{2}$, $C_{1}^{\prime}\approx C_{1}$, and $\gamma_{1}^{\prime}/C_{1}^{\prime}\approx0$, which yields \begin{equation} \rho_{11}=\frac{1}{2}\left[ 1-e^{-\frac{1}{2}\gamma_{1}^{\prime}t}\cos\left( C_{1}t\right) \right] , \end{equation} so that $\rho_{11}=\frac{1}{2}$ in the steady state ($t\to\infty$). \end{document}
This paper studies Newtonian Sobolev-Lorentz spaces. We prove that these spaces are Banach. We also study the global $p,q$-capacity and the $p,q$-modulus of families of rectifiable curves. Under some additional assumptions (that is, $X$ carries a doubling measure and a weak Poincaré inequality), we show that when $1 \leq q < p$ the Lipschitz functions are dense in those spaces; moreover, in the same setting we also show that the $p,q$-capacity is Choquet provided that $q > 1$. We provide a counterexample for the density result in the Euclidean setting when $1 < p \leq n$ and $q = \infty$.
\begin{document} \title[]{Ergodic decomposition for folding and unfolding paths in Outer space} \author{Hossein Namazi, Alexandra Pettet, and Patrick Reynolds} \begin{abstract} We relate ergodic-theoretic properties of a very small tree or lamination to the behavior of folding and unfolding paths in Outer space that approximate it, and we obtain a criterion for unique ergodicity in both cases. Our main result is that non-unique ergodicity gives rise to a transverse decomposition of the folding/unfolding path. It follows that non-unique ergodicity leads to distortion when projecting to the complex of free factors, and we give two applications of this fact. First, we show that if a subgroup $H$ of $Out(F_N)$ quasi-isometrically embeds into the complex of free factors via the orbit map, then the limit set of $H$ in the boundary of Outer space consists of trees that are uniquely ergodic and have uniquely ergodic dual lamination. Second, we describe the Poisson boundary for random walks coming from distributions with finite first moment with respect to the word metric on $Out(F_N)$: almost every sample path converges to a tree that is uniquely ergodic and that has a uniquely ergodic dual lamination, and the corresponding hitting measure on the boundary of Outer space is the Poisson boundary. This improves a recent result of Horbez. We also obtain sublinear tracking of sample paths with Lipschitz geodesic rays. \end{abstract} \maketitle \section{Introduction}\label{sec: introduction} In \cite{Ma92}, Masur proved that the vertical foliation for a quadratic differential is uniquely ergodic if a corresponding Teichm\"uller geodesic ray projects to a recurrent ray in the moduli space. This observation, which is known as the Masur Criterion, provides the first step in understanding the dynamical behavior of Teichm\"uller geodesics and has had numerous applications. In particular, Kaimanovich-Masur \cite{KM96} used this to study random walks in the mapping class group of a surface and in Teichm\"uller space. They showed that for a non-elementary probability measure on the mapping class group, almost every sample path of the random walk has the property that its orbit in Teichm\"uller space converges to a uniquely ergodic measured foliation. Moreover, when the probability measure has finite entropy and finite first logarithmic moment with respect to the Teichm\"uller distance, the space of uniquely ergodic measured foliations is the Poisson boundary. Parallels between the mapping class group of a surface, with its action on Teichm\"uller space, and the group $\Out(F_N)$ of outer automorphisms of a free group, with its action on the Outer space $CV_N$ defined by Culler-Vogtmann \cite{CV86}, have been drawn, and in many ways the study of surfaces has been a guideline for understanding $\Out(F_N)$ and its action on $CV_N$. However, the geometry of $CV_N$ lacks many of the nice features of Teichm\"uller space; in particular, it does not have a natural symmetric metric. Folding paths in $CV_N$ come closest to playing the role of geodesic paths; however, the nature of the forward direction of a folding path, i.e., folding, is unlike that of the backward direction, i.e., unfolding. This makes it difficult to find analogues of the vertical and horizontal foliations of a quadratic differential, and there is no symmetry between them. Finally, cocompactness in $CV_N$ seems to be weaker than it is in Teichm\"uller space. 
In particular, it is easy to produce folding paths that are recurrent (after projecting to $CV_N/\Out(F_N)$) but not {\em uniquely ergodic} in either direction, and therefore the obvious analogue of the Masur Criterion fails. In this article we study how non-unique ergodicity results in a transverse decomposition of the graphs in the folding path. Our results are similar to a theorem of McMullen, which shows that distinct ergodic measures on a singular foliation on a surface give rise to distinct components of the limiting noded surface in the Deligne-Mumford compactification of moduli space \cite{McM13}. One application of our results is that a folding, \emph{resp.} unfolding, ray with non-uniquely ergodic limit tree, \emph{resp.} lamination, is ``slow'' when projecting to the {\em free factor graph}. To avoid technicalities, we state a result that is an easy consequence of our results, and we only consider folding. \begin{thm}\label{distortion} Let $\pi:CV_N \to \mathcal{FF}$ be the projection map. Suppose that $T_t$ is a folding path parameterized by arc length and with limit $T$. If $T$ is non-uniquely ergodic, then $\lim_{t \to \infty} d(\pi(T_0),\pi(T_t))/t=0$. \end{thm} Recall that points in the boundary of Culler-Vogtmann Outer space are represented by very small actions of $F_N$ on trees that either are not free or else are not simplicial. For such a tree $T$, $L(T)$ denotes the algebraic lamination associated to $T$. Use $M_N \subseteq Curr(F_N)$ to denote currents that represent elements from the minset of the $Out(F_N)$-action on $\mathbb{P}Curr(F_N)$. The tree $T$ is uniquely ergodic if $L(U)=L(T)$ implies that $U$ is homothetic to $T$. The lamination $L(T)$ is uniquely ergodic if whenever $\eta, \eta' \in M_N$ are both supported on $L(T)$, then $\eta$ is homothetic to $\eta'$. Let $\mathcal{UE}$ stand for the space of very small trees that are uniquely ergodic and which have uniquely ergodic lamination. Let $\mathcal{FF}$ denote the complex of free factors, and let $x_0 \in \mathcal{FF}$ be some point. We give two applications of Theorem \ref{distortion}: \begin{thm}\label{limit set} Let $H \leq Out(F_N)$. If the orbit map $H \to \mathcal{FF}:h \mapsto hx_0$ is a quasi-isometric embedding, then $\text{Limit}(H)\subseteq \mathcal{UE}$. \end{thm} In the statement $\text{Limit}(H)$ denotes the accumulation points of $H$ in compactified Outer space. The Poisson boundary for certain random walks on $Out(F_N)$ was recently described by Horbez \cite{Hor14b}. We improve the result of Horbez via the following application of Theorem \ref{distortion} and recent work of Tiozzo and Maher-Tiozzo \cite{Tio14, MT14}. \begin{thm}\label{random walks} Let $\mu$ be a non-elementary distribution on $Out(F_N)$ with finite first moment with respect to the Lipschitz metric on Outer space. Then \begin{enumerate} \item [(i)] There exists a unique $\mu$-stationary probability measure $\nu$ on the space $\partial CV_N$, which is purely non-atomic and concentrated on $\mathcal{UE}$; the measure space $(\partial CV_N, \nu)$ is a $\mu$-boundary, and \item [(ii)] For almost every sample path $(w_n)$, $w_nx_0$ converges in $\overline{CV}_N$ to $w_\infty \in \mathcal{UE}$, and $\nu$ is the corresponding hitting measure. \item [(iii)] For almost every sample path $(w_n)$, there is a Lipschitz geodesic $T_t$ such that \[ \lim_{n \to \infty} d(w_nx_0, T_{Ln})/n =0 \] \item [(iv)] The measure space $(\partial CV_N, \nu)$ is the Poisson boundary. 
\end{enumerate} \end{thm} The above applications use only a small part of our analysis of folding and unfolding paths in Outer space. We will now outline our main result. \subsection{Folding/unfolding sequences and the decomposition theorem} Our framework is that of folding/un-folding sequences; these objects live in the category of Stallings foldings, but we allow more than one fold at each step. A folding/un-folding sequence comes with vector spaces of \emph{length measures} and \emph{width measures}, and a main point of our approach is to study their interaction. \begin{rem} Every folding/un-folding sequence can be ``filled in'' to get a \emph{liberal folding path} in the sense of \cite{BF14}. In particular, every folding/un-folding sequence gives rise to Lipschitz geodesics in Outer space. Conversely, given a Lipschitz geodesic in Outer space, one can study the family of folding/un-folding paths arising from it. In short, our analysis provides a method for analyzing the behavior of a geodesic in Outer space via the dynamical properties of its endpoints. Analogous methods in Teichm\"uller space have been fruitful. \end{rem} Formally, a \emph{folding/unfolding sequence} is a sequence \[ \ldots \to G_{-1} \to G_0 \to G_1 \to \ldots \] of graphs together with maps $f_n:G_n \to G_{n+1}$ such that for any $m \leq n$, the composition \[ f_n \circ f_{n -1} \circ \ldots \circ f_m:G_m \to G_{n+1} \] is a \emph{change of marking}, \emph{i.e.} a homotopy equivalence that sends every edge of $G_m$ to a reduced edge path. We assume that folding/un-folding sequences are \emph{reduced}, which means that there cannot be an invariant subgraph as $n \to \pm \infty$ in which no folding/unfolding occurs. We assume that a marking between $G_0$ and a fixed base point in Outer space has been specified, so a folding/unfolding sequence is a sequence of open simplices in Outer space. A {\em length measure} on $(G_n)$ is a sequence of vectors $(\vec\lambda_n)$, where $\vec\lambda_n$ is a vector that assigns a non-negative number, \emph{i.e.} a length, to every edge of $G_n$, in such a way that, when the graphs are equipped with those lengths, each $f_n$ restricts to an isometry on every edge of $G_n$. Equivalently, for every $n$, \[ \vec\lambda_n = A_n^T \vec\lambda_{n+1} \] where $A_n$ is the incidence matrix of the graph map $f_n:G_n\to G_{n+1}$. When equipped with a length measure with nonzero components, $G_n$ becomes a marked metric graph; therefore, we obtain a sequence in the unprojectivized Outer space. As $n\to\infty$, the universal covers of the graphs $(G_n)$ converge to a topological tree $T$. A length measure $(\vec\lambda_n)$ on $(G_n)$ induces a pseudo-metric on $T$, which can be viewed as a non-atomic length measure on $T$ in the sense of Paulin; see \cite{Gui00} or below. Collapsing the subsets of diameter zero gives a topological tree that represents a simplex in the boundary of Outer space, and specifying a length measure gives a convergent sequence in compactified Outer space. We prove that the space of projectivized length measures on $(G_n)$ is linearly isomorphic to the space of projectivized non-atomic length measures on $T$ and is a finite dimensional linear simplex spanned by non-atomic ergodic length measures on $T$. We say $T$ or the sequence $(G_n)$ is {\em uniquely ergometric} if this simplex is degenerate. 
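To fix ideas, the following toy computation (with invented incidence matrices; it is only a sketch of the definition, not part of the results) illustrates how a length measure is pulled back along a short sequence via the transposed incidence matrices.
\begin{verbatim}
import numpy as np

# Toy sketch of a length measure on a short sequence G_0 -> G_1 -> G_2:
# a length vector on the last graph is pulled back by lambda_n = A_n^T lambda_{n+1}.
# The incidence matrices below are invented for illustration only.
A = [np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]]),   # A_0 for f_0 : G_0 -> G_1
     np.array([[2, 0, 1], [1, 1, 0], [0, 1, 1]])]   # A_1 for f_1 : G_1 -> G_2

lam = {2: np.array([1.0, 2.0, 1.0])}                # a length vector on G_2
for n in (1, 0):
    lam[n] = A[n].T @ lam[n + 1]                    # pull back to G_1, then G_0
print(lam[0], lam[1], lam[2])
\end{verbatim}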
In the opposite direction, we define a {\em width measure} (called a \emph{current} in the main body of the paper) on $(G_n)$ to be a sequence of vectors $(\vec\mu_n)$, where, again, each vector $\vec\mu_n$ assigns a non-negative number to every edge of $G_n$. However, for width measures, we require \[ \vec\mu_{n+1}= A_n \vec \mu_n.\] The space of projectivized width measures forms a finite dimensional linear simplex. In this case, the simplex naturally and linearly embeds in the space of projective {\em currents} for the free group. We prove that the set of currents corresponding to the width measures on $(G_n)$ is identified, via a linear isomorphism, with the set of currents supported on the {\em legal lamination} $\Lambda$ of the sequence. The legal lamination $\Lambda$ can be described as the collection of parametrized bi-infinite reduced paths in $G_0$ which can be {\em lifted} to bi-infinite reduced paths in $G_n$ for every $n<0$, \emph{i.e.}, they are images of bi-infinite reduced paths in $G_n$. If a tree $T$ in the compactification of Outer space is an accumulation point, as $n\to-\infty$, of the sequence $(G_n)$ equipped with a length measure, then $\Lambda$ is contained in the {\em dual lamination} of $T$. A current $\mu$ is supported on the dual lamination of $T$ if and only if $\langle T, \mu \rangle=0$, where $\langle \cdot,\cdot \rangle$ is the Kapovich-Lustig intersection function. So the width measures naturally form a simplex in the space of projectivized currents, which is spanned by ergodic currents supported on the legal lamination $\Lambda$. We say the lamination $\Lambda$ or the sequence $(G_n)$ is {\em uniquely ergodic} if this simplex is degenerate. Before stating our main results, we will briefly return to the case of surface theory in order to state a generalization of the Masur Criterion, which essentially follows from ideas of Masur. Assume $(g_t)_{t\ge0}$ is a Teichm\"uller geodesic ray in the Teichm\"uller space of a surface $S$ and that $(g_t)$ corresponds to a quadratic differential on $S$ with vertical foliation $\CF_v$ realized on $g_0$. Assume $\mu_1,\ldots,\mu_k$ are mutually singular ergodic transverse measures on $\CF_v$, and also assume there is a sequence $(g_{t_n})_n$ with $t_n\to\infty$ as $n\to\infty$, so that the projections of $(g_{t_n})_n$ to the moduli space of $S$ converge to $\Sigma$, where $\Sigma$ is a noded Riemann surface in the Deligne-Mumford compactification of the moduli space of $S$. Masur's arguments can be applied to prove that there is a decomposition of $\Sigma$ into $k$ subsets whose closures only intersect in nodes of $\Sigma$ and with the property that for $i\in\{1,\ldots,k\}$ and $\mu_i$-a.e. leaf $l$ of $\CF_v$, the image of $l$ in $\Sigma$, under the composition of the Teichm\"uller map $g_0\to g_{t_n}$ and the approximating map $g_{t_n}\to\Sigma$, accumulates on the $i$-th subset of the decomposition. This implies, in particular, that the complement of the nodes of $\Sigma$ has at least $k$ components, and one obtains the following theorem; see \cite{McM13}. \begin{thm}\label{mcmullen} Assume $(g_t)_{t\ge 0}$ is a Teichm\"uller geodesic ray whose projection to the moduli space accumulates on a noded Riemann surface $\Sigma$ such that $\Sigma \setminus \{{\rm nodes}\}$ has $k$ components. Then the vertical foliation for the quadratic differential associated to $(g_t)$ admits no more than $k$ mutually singular transverse measures. \end{thm} We now return to the case of a folding/unfolding sequence. 
Our main result is a statement that is similar to Theorem \ref{mcmullen}. By a {\em transverse decomposition} of a graph we mean a family of subgraphs whose edge sets are pairwise disjoint. Note that the moduli space of metric graphs of rank $\le N$ is compact, so when $(G_n)$ is equipped with a length measure, we can consider subsequential limits as $n\to \pm \infty$. By what we stated earlier, every length measure $\lambda$ on $(G_n)$ is a linear combination of ergodic length measures. The {\em ergodic components} of $\lambda$ are the ones that appear in this combination with nonzero coefficient. Recall that every leaf $\omega$ of the legal lamination is a bi-infinite reduced path in $G_0$ and for every $n<0$ can be lifted to a bi-infinite reduced path in $G_n$, which is denoted by $\omega_n$. Our main result is: \noindent {\bf Theorem}: \emph{ Assume that the unfolding sequence $(G_n)_{n \leq 0}$ has non-uniquely ergodic legal lamination $\Lambda$, and let $\mu^1,\ldots,\mu^k$ be mutually singular ergodic measures. Assume that $(G_n)$ is equipped with a length measure and that the subsequence $(G_{n_i})$ converges to a metric graph $G$ in the moduli space of metric graphs. Then there is a transverse decomposition $H^0, H^1,\ldots, H^k$ of $G$ with the property that for every $j\in\{1,\ldots,k\}$ and for $\mu^j$-a.e. leaf $\omega$ of $\Lambda$, the image of $\omega_{n_i}$ (the lift of $\omega$ to $G_{n_i}$) under the approximating map $G_{n_i}\to G$ accumulates inside $H^j$.} \emph{ Assume that the folding sequence $(G_n)_{n \geq 0}$ has non-uniquely ergodic limit tree $T$, and let $\lambda^1,\ldots,\lambda^k$ be the ergodic components of a length measure $\lambda$ on $T$. Assume that $(G_n)$ is equipped with a length measure giving rise to $\lambda$ and that the subsequence $(G_{n_i})$ converges in the moduli space of metric graphs to a metric graph $G$. Then there is a transverse decomposition $H^0, H^1,\ldots, H^k$ of $G$ with the property that for every $j\in\{1,\ldots,k\}$ and for $\lambda^j$-a.e. point $x\in G_0$ and $i$ sufficiently large, the image of $x$ under the composition of the map $G_0\to G_{n_i}$ and the approximating map $G_{n_i}\to G$ falls in $H^j$. } \begin{rem} Coulbois and Hilion studied currents supported on the dual lamination of a free tree in the boundary of Outer space; our techniques and results provide a generalization of those of Coulbois-Hilion in \cite{CH13}. \end{rem} \subsection{Outline of the paper} In Section 2 we collect background and terminology. In Section 3, we introduce our main object of study: folding/un-folding sequences. Length measures and currents on a folding/un-folding sequence are introduced. We introduce a restriction that will be in place during the sequel: folding sequences are reduced, meaning that there cannot be an invariant subgraph for all time in which no folding/un-folding occurs. In Section 4 we focus on un-folding paths and define the legal lamination. We pause briefly to discuss the implications of the reduced hypothesis. Our first main result is Theorem \ref{spaces of currents are isomorphic}, which gives that the combinatorially-defined space of currents on the un-folding sequence is isomorphic to the space of currents supported on the legal lamination. We then obtain Theorem \ref{transverse decomposition}, which ensures that distinct ergodic currents give rise to a transverse decomposition of the graphs in the unfolding sequence that is almost invariant near minus infinity. 
Next we focus on unfolding sequences whose underlying graphs converge in the compactified moduli space of graphs. Our next main result is Theorem \ref{frequency in generic leaves}, which ensures that homotopically non-trivial parts of our transverse decomposition survive even if the sequence of graphs degenerates. Finally, we focus on the case where the unfolding sequence is recurrent to the thick part of moduli space; in this case we obtain a much better bound on the number of ergodic measures in Theorem \ref{recurrent implies bounded number of measures}. In Section 5 we repeat the analysis of Section 4, but now focus on folding sequences. The differences between the two scenarios are subtle but significant. In Section 6, we apply our technical results from Sections 4 and 5 to study projections to the complex of free factors. Our main results are Theorems \ref{slow progress in FF for unfoldings} and \ref{slow progress in FF for foldings}, which ensure that a folding, \emph{resp.} un-folding, sequence with non-uniquely ergodic limit tree, \emph{resp.} legal lamination, makes slow progress when projected to the complex of free factors. In Section 7, we gather the required facts from the literature to apply our main results from Section 6 to settle Theorems \ref{limit set} and \ref{random walks}. \noindent \emph{Acknowledgements:} We wish to thank Jon Chaika for a pleasant and enlightening discussion at the beginning of this project. We also wish to thank Lewis Bowen, Thierry Coulbois, Arnaud Hilion, Ilya Kapovich, Joseph Maher, Amir Mohammadi, and Giulio Tiozzo for informative conversations and comments. We would also like to thank Jing Tao for pointing out to us the paper of McMullen. \section{Preliminaries} \label{sec: preliminaries} The reader can find background information about Outer space, the complex of free factors, and related issues at the beginning of Section \ref{applications}. \subsection{Paths in graphs}\label{subsec: paths in graphs} Given an oriented graph $G$, we define $\overline\Omega(G)$ to denote the set of reduced labeled directed paths in $G$; an element $\omega$ of $\overline\Omega(G)$ is an immersion from an interval $I$ of the form $[a,b], [a,\infty), (-\infty,b],$ or $(-\infty,\infty)$, with $a,b\in\BZ$, that sends $[i,i+1] \cap I$ homeomorphically to an edge $e_i$ of $G$. When $G$ is equipped with a length function, we assume the restriction of $\omega$ to $[i,i+1]$ has constant speed. When $\omega$ is defined on an interval $[a,b]$, the {\em simplicial length} of $\omega$ is $b-a$, denoted $|\omega|$; in this case, we say that $\omega$ is a \emph{finite labeled path}. Given $a<b$ in $\BZ\cap I$, we use the notation $\omega[a,b]$ to denote the restriction of $\omega$ to $[a,b]$. We also define $\overline\Omega_\infty(G) \subset \overline\Omega(G)$ to denote the subset consisting of bi-infinite labeled paths. The {\em shift map} $S :\overline\Omega(G)\to\overline\Omega(G)$ is defined as follows: if $\omega: I\to G$, then $S\omega(i)=\omega(i+1)$. The shift map restricts to a map $\overline\Omega_\infty(G) \to \overline\Omega_\infty(G)$. The quotient of $\overline \Omega(G)$, respectively $\overline\Omega_\infty(G)$, under the action of $S$ is denoted by $\Omega(G)$, respectively $\Omega_\infty(G)$. Given $\gamma\in\Omega(G)$, a {\em labeling} of $\gamma$ is an element of $\overline\Omega(G)$ that projects to $\gamma$ and has 0 in its domain. A special subset of $\Omega(G)$ is the set $EG$ of directed edges of $G$. 
A homotopy equivalence $G \to R_N$, where $R_N=\wedge_{j=1}^N S^1$, gives an identification of $\pi_1(G)$ with $F_N$; up to the action of $Out(F_N)$, all homotopy equivalences come from choosing a maximal tree $K \subseteq G$, mapping $K$ to the vertex of $R_N$, and choosing a bijection from edges in $G \smallsetminus K$ to edges of $R_N$. Choosing a lift of $K$ to the universal cover $\widetilde G$ of $G$ and requiring $\omega(0) \in K$ gives an embedding of $\overline\Omega_\infty(G)$ as a compact subset of $\partial^2 F_N$, the {\em double boundary} of $F_N$; we also have a natural surjective map $\partial^2 F_N\to \Omega_\infty(G)$. A finite directed path $\gamma\in\Omega(G)$ gives a subset ${\rm Cyl}(\gamma) \subseteq \overline\Omega_\infty(G)$, which consists of all $\omega\in\overline\Omega_\infty(G)$ with the property that $\omega[0,|\gamma|]$ projects to $\gamma$ with the same orientation. \subsection{Currents}\label{subsec: currents} A {\em current} $\mu$ on $F_N$ is an $F_N$-invariant and \emph{flip-invariant} positive Borel measure on $\partial^2F_N$ that takes finite values on compact subsets. Here, flip-invariant means invariant under the natural involution of $\partial^2 F_N$. Given a graph $G$ with $\pi_1(G)$ identified with $F_N$, using a natural embedding of $\overline\Omega_\infty(G)$ in $\partial^2 F_N$ mentioned earlier, a current $\mu$ on $F_N$ induces a finite measure on $\overline\Omega_\infty(G)$. Since $\mu$ is $F_N$-invariant, it is easy to see that this measure on $\overline\Omega_\infty(G)$ does not depend on which natural embedding of $\overline\Omega_\infty(G)$ in $\partial^2 F_N$ was used. The measure on $\overline\Omega_\infty(G)$, which is still denoted by $\mu$, is invariant under the shift map. In fact, the set of currents on $F_N$ is naturally identified with the set of shift-invariant finite measures on $\overline\Omega_\infty(G)$. Given a finite path $\gamma\in\Omega(G)$, define $\mu(\gamma):=\mu({\rm Cyl}(\gamma))$. In particular, for every directed edge $e$ of $G$, we have the weight $\mu_G(e)$. Regard the vector $\vec\mu_G \in \mathbb{R}^{|EG|}$ as a column vector: $\vec\mu_G(e)$, the component associated to $e\in EG$, is equal to $\mu_G(e)$. The numbers $\mu(\gamma)$ for all $\gamma\in\Omega(G)$ provide a coordinate system for the space of currents; indeed, the topology on the space of currents is the product topology corresponding to finite $\gamma \in \Omega(G)$ (see \cite{Kap06}). In particular, these coordinates $(\mu(\gamma))_{\gamma\in\Omega(G)}$ uniquely identify $\mu$. Moreover, given a collection $(\alpha(\gamma))_{\gamma\in\Omega(G)}$ of non-negative numbers, there is a unique current $\mu$ on $\pi_1(G)$ with $\mu(\gamma)=\alpha(\gamma)$ for every $\gamma\in\Omega(G)$ if and only if the following Kolmogorov extension property holds: for any path $\gamma$ in $\Omega(G)$, if $\gamma_1,\ldots,\gamma_k$ are all possible extensions of $\gamma$ to a longer path by adding a single edge at the end of $\gamma$, then \[ \alpha(\gamma) = \sum_{i=1}^k \alpha(\gamma_i).\] \subsection{Length vectors}\label{subsec: length vectors} A {\em length vector} on a graph $G$ is a vector $\vec \lambda_G$ in $\BR^{|EG|}$ whose component associated to every edge $e\in EG$, denoted by $\lambda_G(e)$, is a nonnegative real number. 
If all the components of $\vec\lambda_G$ are positive, the length vector induces a metric on $G$, which is obtained by identifying every edge $e\in EG$ with a closed interval of length $\lambda_G(e)$ and defining the distance between two points to be the length of the shortest path connecting them. In general, $\vec\lambda_G$ induces a pseudo-metric, where the distance between two distinct points is allowed to be zero. In this case, $\vec\lambda_G$ induces a metric on the graph which is obtained by collapsing all edges $e$ with $\lambda_G(e)=0$. Given a length vector $\vec\lambda_G$ on $G$, the length of a reduced path $\gamma$ in $G$ can be defined as the sum of the lengths of its edges. A special example of a length vector on $G$ is the {\em simplicial length} $\vec 1_G$, where every component is equal to $1$. The induced metric on $G$ is also called the {\em simplicial metric}. \subsection{Morphisms}\label{subsec: morphisms} A {\em morphism} between graphs $G$ and $H$ is a function $f:G\to H$ that takes edges of $G$ to non-degenerate reduced edge paths in $H$. The {\em incidence matrix} $M_f$ of the morphism is a matrix whose columns are indexed by elements of $EG$ and whose rows are indexed by elements of $EH$. Given $e\in EG$ and $e'\in EH$, $M_f(e',e)$, the $(e',e)$-entry of the matrix, is the number of occurrences of $e'$ in the reduced path $f(e)$. We restrict to {\em change of marking} morphisms, where, in addition, $f$ is required to be a homotopy equivalence. A morphism is a {\em permutation} if the incidence matrix is a permutation matrix. Given a length vector $\vec\lambda_H$ for $H$, the map $f$ naturally induces a length vector $\vec\lambda_G$ on $G$ and we have \begin{equation}\label{eq: morphisms and length} \vec\lambda_G = M_f^T \vec\lambda_H. \end{equation} In other words, $\vec\lambda_G$ gives the $f$-pull-back of the metric coming from $\vec\lambda_H$ on $H$; with these metrics, $f$ is a locally isometric immersion on every edge of $G$. By a morphism between two metric graphs, we always mean one which is a local isometry on every edge of the domain. We say a morphism $f$ {\em preserves simplicial length} if $\vec 1_G = M_f^T \vec 1_H$; equivalently, the image of every edge of $G$ is a single edge of $H$. We usually assume the vertices of graphs have valence at least $3$, but allowing graphs with valence-$2$ vertices, one gets the Stallings factorization of a morphism $f:G\to H$ as a composition $f_2\circ f_1$ of morphisms $f_1:G\to G'$ and $f_2:G'\to H$, where $f_1$ is topologically a homeomorphism and $f_2$ maps every edge to a single edge, i.e., preserves simplicial length. We refer to the Stallings factorization as the {\em natural factorization} of the morphism $f$. A topologically one-to-one morphism $f:G\to H$ induces a one-to-one map from $\Omega(G)$ to $\Omega(H)$. We can use this to obtain a one-to-one map $\overline\Omega_\infty(G)\to\overline\Omega_\infty(H)$, which sends $\omega_G\in\overline\Omega_\infty(G)$ to the element $\omega_H\in\overline\Omega_\infty(H)$ that labels $f\circ \omega_G$ in the same direction with $\omega_H(0)=f(\omega_G(0))$; we denote this map by $f$ as well. When $f$ is a homeomorphism, a left inverse for the induced map $\overline\Omega_\infty(G)\to\overline\Omega_\infty(H)$ can be defined, which sends $\omega_H$ to the $\omega_G$ with the property that $\omega_G[0,1]$ is mapped to $\omega_H[a,b]$ with $a\le 0$ and $b\ge1$. More precisely, assume the $f$-image of $e\in EG$ is the path $e_0\cdots e_k$ in $H$. 
Then for every $i=0,\ldots,k$, $S^if$, which is the composition of $f:\overline\Omega_\infty(G)\to\overline\Omega_\infty(H)$ with the $i$-th iterate of the shift map, provides a one-to-one correspondence between ${\rm Cyl}_G(e)$ and ${\rm Cyl}_H(e_i)$. In particular, when $\mu$ is a current on $\pi_1(G)=\pi_1(H)$, \begin{equation}\label{eq: disjoint union 0} \mu(e) = \mu(e_i), \quad i=0,\ldots,k. \end{equation} Given a morphism $f:G\to H$, we say a path in $\Omega(G)$ is {\em legal} if its $f$-image is reduced, i.e., is an element of $\Omega(H)$. We use $\Omega_\infty^L(G)$ to denote the set of bi-infinite legal paths in $G$, and $\overline\Omega_\infty^L(G)$ to denote all possible labelings of elements of $\Omega_\infty^L(G)$. The $f$-image of these paths gives a subset of $\Omega_\infty(H)$, which we denote by $\Lambda_f$. The lift $\overline\Lambda_f$ of $\Lambda_f$ to $\overline\Omega_\infty(H)$, the set of all possible labelings of elements of $\Lambda_f$, is a compact subset of $\overline\Omega_\infty(H)$. Since $f$ induces a monomorphism on the level of fundamental groups, it induces a one-to-one map from $\partial^2\pi_1(G)$ to $\partial^2\pi_1(H)$, and therefore the map from $\Omega_\infty^L(G)$ to $\Lambda_f$ is one-to-one and onto. There is also an induced one-to-one map from $\overline\Omega_\infty^L(G)$ to $\overline\Lambda_f$, denoted by $f$, which sends $\omega_G$ to $\omega_H$, so that $\omega_G(0)$ is mapped to $\omega_H(0)$. When $f:G\to H$ preserves simplicial length, the map $f:\overline\Omega_\infty^L(G)\to\overline\Lambda_f$ is invertible, and for every $\omega_H$ in $\overline\Lambda_f$ there is a unique $\omega_G$ whose image under $f$ is $\omega_H$. In particular, for every $e_H\in EH$, \begin{equation}\label{eq: disjoint union 1} {\rm Cyl}_H(e_H)\cap\overline\Lambda_f = \biguplus_{f(e_G)=e_H} \left(f({\rm Cyl}_G(e_G))\cap\overline\Lambda_f\right), \end{equation} where the disjoint union is over all edges $e_G\in EG$ which are mapped to $e_H$. Assume that we have a change of marking $f:G\to H$ and a current $\mu$ on $\pi_1(H)$. Recall that $\mu$ induces a shift-invariant finite measure on $\overline\Omega_\infty(H)$, as well as on $\overline\Omega_\infty(G)$ via the isomorphism $f_*:\pi_1(G)\to\pi_1(H)$, which we still call $\mu$. If $\mu$ is supported on $\overline\Lambda_f$, then it follows from \eqref{eq: disjoint union 1} that for every $e_H\in EH$ \begin{equation}\label{eq: disjoint union 2} \mu(e_H) = \sum_{\{ e\in EG\,:\, f(e)=e_H\}}\mu(e). \end{equation} When $f:G\to H$ is a change of marking morphism, we can use the natural factorization $f=f_2\circ f_1$ with $f_1:G\to G'$ a homeomorphism and $f_2:G'\to H$ preserving the simplicial length. Then $\Lambda_f = \Lambda_{f_2}$. For every $\omega_H\in\overline\Lambda_f$, we find a unique $\omega_{G'}\in\overline\Omega_\infty(G')$ with $f_2(\omega_{G'})=\omega_H$. Then we use the left inverse of the map $f_1$ explained earlier to find $\omega_G\in\overline\Omega_\infty(G)$ so that $S^if(\omega_G) = \omega_H$ with $0\le i < |f(\omega_G[0,1])|$ and $|f(\omega_G[0,1])|$ the simplicial length of the $f$-image of the edge $\omega_G[0,1]$. If $\mu$ is a current supported on $\overline\Lambda_f$ and $e_H\in EH$, by \eqref{eq: disjoint union 2} \[ \mu(e_H) = \sum_{\{e'\in EG'\,:\, f_2(e')=e_H\}}\mu(e').\] For every $e' \in EG'$, there is a unique $e\in EG$ so that $e'\subset f_1(e)$ as a sub-arc, and by \eqref{eq: disjoint union 0}, $\mu(e)=\mu(e')$. 
Hence \begin{equation}\label{eq: disjoint union} \mu(e_H) = \sum_{\{e'\in EG'\,:\, f_2(e')=e_H\}}\mu(e') = \sum_{\{e\in EG\,:\, f(e)\supset e_H\}} \mu(e). \end{equation} As a result: \begin{lem}\label{morphisms and currents} Let $f:G\to H$ be a change of marking morphism with incidence matrix $M_f$. If $\mu$ is a current supported on $\Lambda_f$ and $\vec\mu_G$ and $\vec\mu_H$ are respectively the corresponding vectors for $G$ and $H$, then \begin{equation}\label{eq: morphisms and currents} \vec\mu_H = M_f \vec\mu_G. \end{equation} \end{lem} \section{Folding/Unfolding Sequences}\label{sec: folding/unfolding} A {\em folding/unfolding sequence} is a sequence of graphs $(G_n)_{a\le n\le b}$, $a,b\in\BZ\cup\{\pm\infty\}$, with change of marking morphisms $f_{k,l}:G_k\to G_l$ for $k<l$ such that if $k<l<m$, then $f_{k,m}=f_{l,m} \circ f_{k,l}$. We use $f_n$ to denote $f_{n,n+1}$. Special cases are folding, respectively unfolding, sequences where $0\le n <\infty$, respectively $-\infty<n\le 0$. An {\em invariant sequence of subgraphs} is a sequence of non-degenerate proper subgraphs $E_n\subset G_n$ with the property that $f_n$ restricts to a change of marking morphism $E_n\to E_{n+1}$ for every $a\le n < b$. We say such a sequence is a {\em stabilized sequence of subgraphs} if the restriction of $f_n$ to $E_n\to E_{n+1}$ is a permutation for $n$ sufficiently large. A sequence is {\em reduced} if it admits no stabilized sequence of subgraphs. \subsection{Length measures}\label{subsec: length measures} A {\em length measure} for the folding/unfolding sequence $(G_n)_{a\le n \le b}$ is a sequence of length vectors $(\vec \lambda_n)_{a\le n\le b}$, where $\vec\lambda_n$ is a length vector on $G_n$ and for $a\le n <b$ \[ \vec\lambda_{n} = M_n^T\vec\lambda_{n+1}\] with $M_n = M_{f_n}$ the incidence matrix of $f_n$. When $b$ is finite, a length vector on $G_b$ provides a length measure on the sequence $(G_n)$. We define the {\em simplicial length measure} to be the length measure induced by the simplicial length vector $\vec 1_{G_b}$ on $G_b$. Given a length measure $(\vec\lambda_n)$ on $(G_n)$ and identifying $\pi_1(G_n)$ with $F_N$ for one of the graphs $G_n$, we can use the markings obtained from the maps $f_n$ and realize $(G_n,\lambda_n)$, the graph $G_n$ equipped with the metric induced by $\vec\lambda_n$, as a marked metric graph in $cv_N$. Therefore we obtain a sequence in $cv_N$, as well as its projection in $CV_N$. The set of sequences of vectors $(\vec v_n)_{a\le n \le b}$ with $\vec v_n\in \BR^{|EG_n|}$ and with \[ \vec v_n = M_n^T \vec v_{n+1}\] for every $a\le n <b$ is naturally a finite dimensional real vector space, whose dimension is bounded by $\liminf_{n\to\infty} |EG_n|$. The set of length measures on $(G_n)_{a\le n\le b}$, denoted $\CD((G_n)_n)$, is the cone of non-negative vectors in this space. \subsection{Currents on sequences and area}\label{subsec: currents on sequences} A {\em current} for the folding/unfolding sequence $(G_n)_{a\le n\le b}$ is a sequence of non-negative vectors $(\vec\mu_n)_{a\le n\le b}$, where each $\vec\mu_n$ is a non-negative vector in $\BR^{|EG_n|}$ and for $a\le n < b$ \[ \vec\mu_{n+1}=M_n\vec\mu_n.\] When $a$ is finite, currents for the sequence $(G_n)$ are completely determined by non-negative vectors in $\BR^{|EG_a|}$. In particular, the {\em frequency current} is defined by starting from $\vec\mu_a = \vec 1_{G_a}$ and defining $\vec\mu_{n+1} = M_n\vec\mu_n$ for $a\le n< b$. 
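The dual behaviour of currents and length measures can be illustrated with the same toy incidence matrices as in the earlier sketch (again purely illustrative and not part of the results): currents are pushed forward, length vectors are pulled back, and the pairing $\vec\mu_n^T\vec\lambda_n$, which is introduced as the area below, does not depend on $n$.
\begin{verbatim}
import numpy as np

# Toy sketch: the frequency current starts from the all-ones vector on the
# first graph and is pushed forward, mu_{n+1} = M_n mu_n, while a length
# measure is pulled back.  The pairing mu_n . lambda_n (the "area" defined
# below) is independent of n.  Matrices are invented for illustration only.
M = [np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]]),
     np.array([[2, 0, 1], [1, 1, 0], [0, 1, 1]])]

lam = {2: np.array([1.0, 2.0, 1.0])}
for n in (1, 0):
    lam[n] = M[n].T @ lam[n + 1]          # length measure: pulled back

mu = {0: np.ones(3)}                      # frequency current on G_0
for n in (0, 1):
    mu[n + 1] = M[n] @ mu[n]              # current: pushed forward

print([float(mu[n] @ lam[n]) for n in range(3)])   # constant: [18.0, 18.0, 18.0]
\end{verbatim}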
Similar to the case of length measures, we can consider the set $\CC((G_n)_n)$ of currents on the sequence $(G_n)_{a\le n\le b}$ as the cone of non-negative vectors in the finite dimensional vector space whose vectors are sequences $(\vec v_n)_{a\le n\le b}$ with $\vec v_n\in\BR^{|EG_n|}$ for every $a\le n\le b$ and \[ \vec v_{n+1} = M_n \vec v_n\] for every $a\le n < b$. The dimension of this space is evidently bounded by $\liminf_{n\to -\infty}|EG_n|$. Given a length measure $(\vec\lambda_n)_n$ and a current $(\vec\mu_n)_n$ for the sequence $(G_n)_n$, we define the {\em area} to be the scalar \[ A = \vec\mu_n^T\vec\lambda_n.\] The area is independent of $n$, and for reduced sequences we have: \begin{lem}\label{length and frequency decay} Given a reduced folding/unfolding sequence $(G_n)_{a\le n\le b}$, for every length measure $(\vec\lambda_n)_n$ on $(G_n)_n$ we have \[ \lim_{n\to\infty} \lambda_n(e_n) = 0 \quad {\rm and} \quad \lim_{n\to-\infty} \lambda_n(e_n) = \infty\] where $e_n\in EG_n$ is chosen arbitrarily. Similarly, if $(\vec\mu_n)_n$ is a current on $(G_n)_n$, then \[ \lim_{n\to\infty} \mu_n(e_n) = \infty \quad {\rm and} \quad \lim_{n\to-\infty} \mu_n(e_n) = 0\] for an arbitrary choice of $e_n\in EG_n$; the limits are considered only when $a=-\infty$ or $b=\infty$, respectively. \end{lem} \section{Unfolding sequences}\label{sec:legal sequence} Recall that an {\em unfolding sequence} $(G_n)_{n\le0}$ is given by \begin{equation}\label{eq: unfolding sequence} \xymatrix{ \cdots \ar[r]^{f_{n-1}} & G_{n} \ar[r]^{f_{n}} \ar[r] & \cdots \ar[r]^{f_{-1}} & G_0 } \end{equation} with each $f_n, n<0,$ a change of marking morphism. For every $n<0$, we define $\varphi_n = f_{-1}\circ\cdots\circ f_n:G_n\to G_0$. We continue with the assumption that $(G_n)_n$ is a reduced sequence. Also we assume that $(\vec\lambda_n)_n$ is the simplicial length measure, i.e., $\vec\lambda_0 = \vec 1_{G_0}$ and for every $n<0$ \[ \vec\lambda_n = M_{n}^T \vec\lambda_{n+1}.\] Equivalently, the component $\lambda_n(e)$ of $\vec\lambda_n$ associated to $e\in EG_n$ is the simplicial length of $\varphi_n(e)$ in $G_0$. \subsection{Legal lamination}\label{subsec: legal lamination} Given an unfolding sequence as in \eqref{eq: unfolding sequence}, we say a path in $\Omega(G_n)$ is {\em legal} if it is legal with respect to the morphism $\varphi_n$, i.e., its image under $\varphi_n$ is a reduced path. Every labeling of such a path in $\overline\Omega(G_n)$ is also called {\em legal}. As before, we use the notations $\Omega^L_\infty(G_n)$ and $\overline\Omega^L_\infty(G_n)$ to denote the sets of bi-infinite legal paths in $\Omega_\infty(G_n)$ and $\overline\Omega_\infty(G_n)$, respectively. We define \[\Lambda = \bigcap_n \Lambda_{\varphi_n} = \bigcap_n \varphi_n(\Omega^L_\infty(G_n)).\] The {\em legal lamination} $\overline\Lambda$ of the sequence is the subset of $\overline\Omega_\infty(G_0)$ that consists of all labellings of elements of $\Lambda$. An element of $\overline\Lambda$ is called a {\em leaf} of the lamination. One can also consider the pre-image of $\Lambda$ in the double boundary $\partial^2\pi_1(G_0)$, which is a $\pi_1(G_0)$-invariant closed subset of the double boundary and is a lamination in the conventional sense of \cite{CHL08a}. From our discussion of morphisms in \S \ref{subsec: morphisms}, since every leaf of $\overline\Lambda$ is the image of a legal bi-infinite path in $G_n$, we obtain: \begin{lem}\label{pre-image of leaves} Suppose $(G_n)_{n\le0}$ is an unfolding sequence with legal lamination $\overline\Lambda$. 
For every leaf $\omega$ of $\overline\Lambda$ and every $n$, there is a unique $\omega_n\in\overline\Omega_\infty(G_n)$ with the property that $\varphi_n(\omega_n)$ is a relabeling of $\omega$ (in the same direction) and $\varphi_n(\omega_n[0,1]) = \omega[a,b]$ for integers $a\le 0 < b$. \end{lem} \subsection{Legal laminations of reduced unfolding sequences}\label{subsec: reduced legal lamination} We pause to give an informal discussion about laminations that arise as the legal lamination of a reduced unfolding sequence. Although the hypothesis of being reduced may seem weak, it is strong enough to rule out certain pathologies that occur in general algebraic laminations on $F_N$. Suppose that $T$ is a very small tree with a non-abelian point stabilizer $H$, and let $L(T)$ denote the dual lamination of $T$. Then the subshift $L_v(T) \subseteq E^{\pm}R_N^\BZ$ of the full shift on the oriented edges of $R_N$, $L_v(T)=\{x\in \partial^2 F_N|$ the geodesic in $\widetilde R_N$ corresponding to $x$ contains $v\}$, where $v \in \widetilde R_N$ is some fixed vertex, has positive entropy, and the space of currents supported on $L(T)$ is infinite dimensional; this is because $L(T)$ contains $\partial^2 H$. The main results of this paper involve obtaining finiteness results about the space of currents supported on the legal lamination $\Lambda$ of a reduced unfolding sequence $(G_n)_n$. As a warm-up for these arguments, we prove: \begin{prop} If $\Lambda$ is the legal lamination of a reduced unfolding sequence $(G_n)_{n \leq 0}$, then $\Lambda$ has zero entropy. \end{prop} For the proof we consider $\Lambda$ as a subshift of the full shift on the oriented edges of $G_0$; recall that for a subshift $X$ of $\mathcal{A}^\BZ$, one has $h(X)=\lim \log |B_k|/k$, where $B_k$ is the set of length-$k$ strings that occur in elements of $X$. \begin{proof} For $l$ fixed, Lemma \ref{length and frequency decay} gives that for $n$ sufficiently negative, every loop in $G_n$ has length at least $l$. Notice that if a legal path $\gamma \in \Omega(G_n)$ has length at least $2||\vec \lambda_n||_1$, $\gamma$ must contain a loop; hence, if $\gamma$ has length $p$, then the frequency with which $\gamma$ visits a vertex of $G_n$ is certainly bounded by $(2N-2)p/l$. Define $L_n$ to be the total number of legal turns in $G_n$; then $L_n \leq N(N-1)-1$ for all $n$ ($G_n$ must have an illegal turn, else $f_{n-j,n}$ is the identity for $j>0$). Now assume that $l$ is large, that every loop in $G_n$ has length at least $l$, and that $m$ is large compared to $l$. Then counting legal paths in $G_n$ yields the estimate \[|B_m|\leq ||\lambda_n||_1 C^{\frac{(2N-2)L_n}{l}m}\] where $C\leq 2N-1$ certainly holds. It follows immediately that $h(\Lambda)=0$. \end{proof} We also mention the following: \begin{prop}\label{minimal components} $\Lambda$ contains at most $3N-3$ minimal sublaminations. \end{prop} \begin{proof} If $l_0, l_1 \in \Lambda$ satisfy that $\overline{\{S^il_0|i\in\BZ\}} \neq \overline{\{S^il_1|i\in\BZ\}}$, then for all sufficiently negative $n$, there must be $e_n^0, e_n^1\in EG_n$, such that $l_i$ crosses $e_n^i$ and does not cross $e_n^{1-i}$. \end{proof} The bound in Proposition \ref{minimal components} certainly is not sharp, as will become clear in the sequel. \subsection{Currents on the legal lamination}\label{subsec: currents on legal lamination} Recall that the set of currents on $\pi_1(G_0)$ is naturally identified with the set of finite shift-invariant measures on $\overline\Omega_\infty(G_0)$. 
In view of this correspondence, by a {\em probability current} we mean one that induces a shift-invariant probability measure on $\overline\Omega_\infty(G_0)$. Since \[ \overline \Omega_\infty(G_0) = \biguplus_{e\in EG_0} {\rm Cyl}(e),\] this is equivalent to \[ \sum_{e\in EG_0} \mu(e) =1,\] where $\mu(e) = \mu({\rm Cyl}(e))$. We also use this correspondence to define $\CC(\Lambda)$, {\em the set of currents supported on $\overline\Lambda$}, to denote those currents whose support in $\overline\Omega_\infty(G_0)$ is a subset of $\overline\Lambda$. The set of currents for $G_0$ is identified with the set of currents for $G_n$ via the identification of $\pi_1(G_n)$ and $\pi_1(G_0)$ provided by $\varphi_n$. Hence for $\mu\in\CC(\Lambda)$ and every $n\le 0$, we obtain a current $\mu_n$ for $G_n$. Obviously $\mu_n$ is supported on the set of bi-infinite paths which are legal with respect to $\varphi_n$. We claim that $\CC(\Lambda)$ is isomorphic to the space $\CC((G_n)_n)$ of currents on the unfolding sequence $(G_n)_n$. \begin{thm}\label{spaces of currents are isomorphic} Given a reduced unfolding sequence $(G_n)_{n\le0}$ with legal lamination $\overline\Lambda$, there is a natural linear isomorphism between $\CC((G_n)_n)$ and $\CC(\Lambda)$. \end{thm} \begin{proof} Assume $(\vec\mu_n)_n\in\CC((G_n)_n)$ is a current on the unfolding sequence $(G_n)_n$. We first show how this sequence induces a current supported on $\overline\Lambda$. Given a finite directed labeled path $\omega\in\overline\Omega(G_0)$ and a finite directed path $\gamma\in\Omega(G_0)$, we use the notation \[ \langle \gamma , \omega \rangle \] to denote the number of copies of $\gamma$ in $\omega$ that agree with the orientation of $\omega$. In particular, for $n\le 0$ and $e\in EG_n$, $\langle \gamma, \varphi_n(e)\rangle$ denotes the number of occurrences of $\gamma$ in $\varphi_n(e)$, considered as a directed path in $G_0$. We define \[ \alpha_n(\gamma) = \sum_{e\in EG_n}\mu_n(e) \langle \gamma,\varphi_n(e)\rangle. \] We claim that, for a fixed $\gamma$, we have $\alpha_0(\gamma)\le \alpha_{-1}(\gamma)\le\cdots\le\alpha_n(\gamma)\le\cdots$, i.e., the sequence is monotonically increasing as $n$ decreases. To see this, assume $e\in EG_n$ and $e'\in EG_{n+1}$ are given. The number of occurrences of $e'$ in $f_n(e)$ as a directed sub-path is given by $M_n(e',e)$, the entry of the incidence matrix of $f_n$ corresponding to $e'$ and $e$. Every occurrence of $\gamma$ as a directed sub-path of $\varphi_{n+1}(e')$ produces $M_n(e',e)$ occurrences of $\gamma$ as a directed sub-path of $\varphi_n(e)$. Hence \begin{align*} \alpha_n(\gamma) &= \sum_{e\in EG_n}\mu_n(e) \langle \gamma, \varphi_n(e) \rangle \\ & \ge \sum_{e\in EG_n} \mu_n(e) \sum_{e'\in EG_{n+1}} M_n(e',e) \langle\gamma,\varphi_{n+1}(e') \rangle \\ &= \sum_{e'\in EG_{n+1}} \langle\gamma, \varphi_{n+1}(e') \rangle \sum_{e\in EG_n} M_n(e',e)\mu_n(e) \\ & = \sum_{e'\in EG_{n+1}} \langle\gamma, \varphi_{n+1}(e') \rangle\, \mu_{n+1}(e') \\ & = \alpha_{n+1}(\gamma). \end{align*} Obviously, for every $e\in EG_n$, $\langle \gamma, \varphi_n(e) \rangle$ is bounded from above by $\lambda_n(e)$, the simplicial length of $\varphi_n(e)$. This implies that $\alpha_n(\gamma)$ is bounded for every $n$ by the area of the unfolding sequence with respect to the current $(\vec\mu_n)_n$ and the simplicial length $(\vec\lambda_n)_n$. Therefore $\mu(\gamma) = \lim_{n\to-\infty} \alpha_n(\gamma)$ exists. 
On the other hand if $\gamma_1,\ldots,\gamma_k\in \Omega(G_0)$ are all possible extensions of $\gamma$ in $G_0$ by adding a single edge, then we can see that every occurrence of $\gamma$ as a directed sub-path of $\varphi_n(e)$ either extends to an occurrence of $\gamma_i$ as a sub-path of $\varphi_n(e)$ for some $i=1,\ldots, k$ or $\gamma$ is the terminal sub-path of $\varphi_n(e)$ and $\varphi_n(e)$ does not extend beyond $\gamma$. The latter possibility contributes at most one occurrence of $\gamma$ as a sub-path of $\varphi_n(e)$ and therefore \[\sum_{i=1}^k \langle \gamma_i, \varphi_n(e) \rangle \le \langle \gamma,\varphi_n(e) \rangle \le \sum_{i=1}^k \langle \gamma_i, \varphi_n(e) \rangle + 1;\] multiplying by $\mu_n(e)$ and summing over all edges of $G_n$ gives \[ \sum_{i=1}^k \alpha_n(\gamma_i) = \sum_{i=1}^k \sum_{e\in EG_n} \mu_n(e)\langle \gamma_i,\varphi_n(e)\rangle \le \alpha_n(\gamma) \le \sum_{i=1}^k \alpha_n(\gamma_i) + \sum_{e\in EG_n} \mu_n(e).\] By lemma \ref{length and frequency decay}, $\mu_n(e)$ tends to zero for edges of $G_n$ as $n\to-\infty$ and therefore in the limit \[ \mu(\gamma) = \sum_{i=1}^k \mu(\gamma_i).\] Using the characterization of currents via a coordinate system described in \S \ref{subsec: currents}, we conclude that $\mu$ is a current on $\pi_1(G_0)$. Now we show that $\mu$ is supported on $\overline\Lambda$. Assume $n$ is fixed and $\gamma$ is a path in $\Omega(G_0)$ whose simplicial length is bigger than $2\lambda_n(e)$, twice the simplicial length of $\varphi_n(e)$, for every $e\in EG_n$. We claim that $\mu(\gamma)>0$ only when for some $e\in EG_n$, $\gamma$ contains $\varphi_n(e)$ as a subpath. This will imply that every leaf in the support of $\mu$ is a limit of $\varphi_n(e_n)$ for a sequence of edges $e_n\in EG_n$ and therefore is contained in $\overline\Lambda$. If $\mu(\gamma)>0$ then for every sufficiently negative $m$, there is $e'\in EG_m$ with $\langle \gamma, \varphi_m(e') \rangle >0$. But $\varphi_m(e')$ is a concatenation of subpaths each of which is the $\varphi_n$-image of an edge of $G_n$. Since $\gamma$ is a subpath of $\varphi_m(e')$ and its simplicial length is larger than twice the simplicial length of $\varphi_n(e)$ for every $e\in EG_n$, $\varphi_n(e)$ must be a subpath of $\gamma$ for some $e\in EG_n$. This proves the claim and we have constructed a natural map $\CC((G_n)_n)\to\CC(\Lambda)$. Now suppose $\mu\in\CC(\Lambda)$ is given. For every $n\le 0$, we define a vector $\vec \mu_n$ in $\BR_{\ge 0}^{|EG_n|}$ where $\mu_n(e)$, the component of $\vec\mu_n$ associated to $e\in EG_n$, is given by $\mu_n(e) = \mu_n({\rm Cyl}(e))$. It is a consequence of lemma \ref{morphisms and currents} that \[ \vec\mu_{n+1} = M_{n}\vec\mu_n, \quad n< 0.\] Hence the sequence of vectors $(\vec\mu_n)_n$ is a current for the unfolding sequence $(G_n)_n$ and $(\vec\mu_n)_n \in \CC((G_n)_n)$ and we have a map $\CC(\Lambda)\to \CC((G_n)_n)$. To prove that the two maps $\CC((G_n)_n)\to\CC(\Lambda)$ and $\CC(\Lambda)\to\CC((G_n)_n)$ are inverses of each other, we just need the next proposition. \begin{prop}\label{legal current determined by sequence} Suppose $(G_n)_{n\le 0}$ is a reduced unfolding sequence with legal lamination $\overline\Lambda$ and $\mu$ is a current supported on $\overline\Lambda$.
Then for every path $\gamma\in\Omega(G_0)$ \[ \mu(\gamma)= \mu({\rm Cyl}(\gamma)) = \lim_n \sum_{e\in EG_n} \mu_n(e) \langle \gamma, \varphi_n(e) \rangle.\] \end{prop} \begin{proof} We factor the morphism $\varphi_n:G_n\to G_0$ as a composition $\psi_n\circ\xi_n$ with $\xi_n: G_n\to G_n'$ a homeomorphism and $\psi_n:G_n'\to G_0$ a morphism that preserves the simplicial length. Then we can use a decomposition similar to \eqref{eq: disjoint union 1} \[ {\rm Cyl}_{G_0}(\gamma) \cap \overline\Lambda = \biguplus_{\psi_n(\gamma')=\gamma} (\psi_n({\rm Cyl}_{G_n'}(\gamma')) \cap \overline\Lambda),\] where the disjoint union is over all directed paths $\gamma'$ in $G_n'$ whose image is $\gamma$. As a result \[ \mu(\gamma) = \sum_{\psi_n(\gamma')=\gamma}\mu_{G_n'}(\gamma'), \] where $\mu_{G_n'}(\gamma')$ is the $\mu$-measure of ${\rm Cyl}_{G_n'}(\gamma')$. When the $\xi_n$-pre-image of $\gamma'$ is contained in an edge $e\in EG_n$ then the composition of $\xi_n$ with an iterate of the shift map on $\overline\Omega_\infty(G_n')$ identifies ${\rm Cyl}_{G_n}(e)$ with ${\rm Cyl}_{G_n'}(\gamma')$ and as a result $\mu_n(e) = \mu_{G_n'}(\gamma')$; there are $\langle \gamma, \varphi_n(e) \rangle$ such paths $\gamma'$ and we conclude \[ \mu(\gamma) \ge \sum_{e\in EG_n}\mu_n(e) \langle \gamma, \varphi_n(e) \rangle.\] Moreover the difference is the sum of $\mu_{G_n'}(\gamma')$ for all paths $\gamma'$ in $G_n'$ with $\psi_n(\gamma')=\gamma$ but so that the $\xi_n$-pre-image of $\gamma'$ in $G_n$ has a vertex of $G_n$ in its interior. The number of such paths $\gamma'$ is bounded by $(|\gamma|-1)(3N-3)^2$. Also ${\rm Cyl}_{G_n'}(\gamma')$ will be contained in the $\xi_n$-image of a cylinder ${\rm Cyl}_{G_n}(e)$ for an edge $e\in EG_n$ and therefore $\mu_{G_n'}(\gamma') \le \mu_n(e)$; hence \[ \mu(\gamma) - \sum_{e\in EG_n}\mu_n(e) \langle \gamma, \varphi_n(e) \rangle \le (|\gamma|-1)(3N-3)^2 \left(\max_{e\in EG_n}\mu_n(e)\right).\] However by lemma \ref{length and frequency decay} we know that $\max_{e\in EG_n}\mu_n(e) \to 0$ as $n\to-\infty$ and this proves the proposition. \end{proof} Therefore we have provided a bijection between $\CC(\Lambda)$ and $\CC((G_n)_n)$. The fact that this bijection is a linear isomorphism is straightforward. \end{proof} We obtain: \begin{cor}\label{dimension of space of currents} The dimension of the space $\CC(\Lambda)$ of currents on $\Lambda$ is bounded by $\liminf_n |EG_n|$, and, in particular, is bounded by $3N-3$. \end{cor} \subsection{Ergodic currents}\label{subsec: ergodic currents} As before assume $(G_n)_{n\le0}$ is a reduced unfolding sequence with legal lamination $\overline\Lambda$. A current $\mu\in\CC(\Lambda)$ is {\em ergodic} if whenever $\mu= c_1 \mu^1 + c_2\mu^2$ for $c_1,c_2>0$ and $\mu^1,\mu^2\in\CC(\Lambda)$, then $\mu^1$ and $\mu^2$ are homothetic to $\mu$. Since $\CC(\Lambda)$ is finite dimensional there are at most finitely many non-homothetic ergodic currents on $\overline\Lambda$; this is because $\CC(\Lambda)$ is a cone on a simplex, a point that is explained below. Assume $\mu^1,\ldots,\mu^k$ are the mutually singular ergodic probability currents on $\overline\Lambda$. Recall that given a finite directed labeled path $\omega\in\overline\Omega(G_0)$ and a finite directed (unlabeled) path $\gamma\in\Omega(G_0)$, we use the notation \[ \langle \gamma , \omega \rangle \] to denote the number of copies of $\gamma$ in $\omega$ in the same direction. To be more precise let $\chi_\gamma:\overline\Omega(G_0)\to\{0,1\}$ be the characteristic function for ${\rm Cyl}(\gamma)$.
Then \[ \langle \gamma , \omega \rangle = \sum_i \chi_\gamma(S^i\omega), \] where $S$ is the shift map on $\overline\Omega(G_0)$. Obviously $\int \chi_\gamma \,d\mu = \mu(\gamma)$ for every current $\mu$. By the Ergodic Theorem for the shift-invariant ergodic probability measure $\mu^i$ on $\overline\Lambda$, $i=1,\ldots,k$, the set of elements $\omega$ of $\overline\Lambda$ with the property that \begin{equation}\label{eq: generic frequency} \begin{aligned} \mu^i(\gamma) = \int\chi_\gamma\, d\mu^i & = \lim_{n\to\infty}\frac1n\sum_{j=0}^{n-1}\chi_\gamma(S^j\omega) = \lim_{n \to \infty} \frac{\langle \gamma, \omega[0,n] \rangle} n \\ & = \lim_{n\to\infty}\frac1n\sum_{j=-n}^{-1}\chi_\gamma(S^j\omega) = \lim_{n \to \infty} \frac{\langle \gamma, \omega[-n,0] \rangle} n \end{aligned} \end{equation} has full $\mu^i$-measure in $\overline\Lambda$. We can assume that a shift-invariant subset of full $\mu^i$-measure in $\overline\Lambda$ has been chosen so that \eqref{eq: generic frequency} holds for every $\gamma$ in $\Omega(G_0)$. It will also be enough, in what follows, to pick a path $\gamma$ with the property that $\mu^i(\gamma)\neq\mu^j(\gamma)$ for every pair $i\neq j$ and to use the above observation that \eqref{eq: generic frequency} holds for this $\gamma$ and for $\omega$ generic with respect to $\mu^i$. In particular, we have that the set of generic leaves for $\mu^i$ is disjoint from the set of generic leaves for $\mu^j$, $i\neq j$, and it follows that $\CC(\Lambda)$ is a cone on a simplex. \subsection{Transverse decomposition for an unfolding sequence}\label{subsec: transverse decomposition} Given a graph $G$, a {\em transverse decomposition} of $G$ is a collection $H^0,\ldots, H^k$ of subgraphs of $G$ with \[ EG = \biguplus_{i=0}^k EH^i, \] the disjoint union of the edges of $H^0,\ldots,H^k$. The subgraph $H^i$ is {\em degenerate} if $EH^i$ is empty. As before assume a reduced unfolding sequence $(G_n)_{n\le0}$ is given with legal lamination $\overline\Lambda$ and ergodic probability currents $\mu^1,\ldots,\mu^k$. We further assume $G$ is a graph with a given isomorphism to $G_n$ for every $n$. We use this isomorphism and for every edge $e$ of $G$, we denote the corresponding edge in $G_n$ by $e_n$. We now claim there is a transverse decomposition of $G$ induced from the currents $\mu^1,\ldots,\mu^k$. As before we assume $(\vec\lambda_n)_n$ is the simplicial length measure on $(G_n)_n$ induced from $G_0$ and for $e\in EG$, we use $\lambda_n(e)$ and $\mu^i_n(e)$ to denote the $\lambda_n$-length of $e_n$ and the $\mu^i_n$-measure of the cylinder ${\rm Cyl}(e_n)$, $i=1,\ldots,k$, respectively. Also $\langle \gamma , \varphi_n(e_n) \rangle$ denotes the number of occurrences of $\gamma$ in the path $\varphi_n(e_n)$ in $G_0$. \begin{thm}\label{transverse decomposition} Suppose a reduced unfolding sequence $(G_n)_n$ with legal lamination $\overline\Lambda$ and ergodic probability currents $\mu^1,\ldots,\mu^k$ is given. Also assume for every $n$, $G_n$ is identified with a fixed graph $G$.
After passing to a subsequence, there is a transverse decomposition $H^0,H^1,\ldots, H^k$ of $G$ so that for every distinct pair $i,j\in\{1,\ldots,k\}$ and $e\in EH^i$ \begin{enumerate} \item $\displaystyle \liminf_n \mu^i_{n}(e)\lambda_{n}(e) > 0,$ \item $\displaystyle \sum_n \mu_n^j(e)\lambda_n(e) < \infty,$ and \item \[ \lim_n \frac{\langle \gamma, \varphi_n(e_{n})\rangle}{\lambda_{n}(e)} = \mu^i(\gamma).\] \end{enumerate} Also given $e\in EH^0$ and $i\in \{1,\ldots,k\}$ \[ \sum_n \mu_n^i(e)\lambda_n(e) <\infty .\] \end{thm} \begin{rem} Conclusion (2) in the statement may seem somewhat artificial; we state it explicitly for convenience. \end{rem} \begin{proof} We can pass to a subsequence so that for every $e\in EG$, either there exists $i\in\{1,\ldots,k\}$ with $\liminf_n \mu_n^i(e)\lambda_n(e) >0$ or for every $i\in\{1,\ldots,k\}$, $\lim_n\mu_n^i(e)\lambda_n(e)=0$. Define $H^i$ to consist of the edges with \[ \liminf_n\mu_n^i(e)\lambda_n(e)>0\] and $H^0$ to be spanned by edges $e$ with \[ \lim_n\mu_n^i(e)\lambda_n(e)=0\] for every $i\in\{1,\ldots,k\}$. Recall by lemma \ref{pre-image of leaves} that for every leaf $\omega$ of $\overline\Lambda$ and every $n$, there is a unique bi-infinite legal path $\omega_n$ in $G_n$ whose $\varphi_n$-image is a relabeling of $\omega$ in the same direction. Moreover the $\varphi_n$-image of the edge $\omega_n[0,1]$ is a directed path that contains the edge $\omega[0,1]$ in the same direction. \begin{lem}\label{large frequency implies containing generic points} If $e\in EG$ is given with $\limsup_n\mu_n^i(e)\lambda_n(e) >0$ for some $i\in\{1,\ldots,k\}$ then there is a subset of positive $\mu^i$-measure of $\overline\Lambda$ with the property that for every leaf $\omega$ in this set $\omega_n[0,1]=e_n$ for infinitely many $n$. \end{lem} \begin{proof} Given an edge $e\in EG$ and $i\in\{1,\ldots,k\}$, let \[ A_n(e) = \{ \omega\in\overline\Lambda \, : \, \omega_n \in {\rm Cyl}(e_n) \}.\] It follows that \begin{equation}\label{eq: size of image of a cylinder} \mu^i (A_n(e)) = \mu^i_n(e)\,\lambda_n(e) \end{equation} for every $n$. So under the assumption of the lemma, infinitely many of these sets have $\mu^i$-measure bigger than some fixed $\epsilon>0$. It is then immediate that \[ \bigcap_m \bigcup_{n\le m} A_n(e) \] has $\mu^i$-measure at least $\epsilon$. This proves the lemma. \end{proof} \begin{lem}\label{containing generic implies generic frequency} Given $i\in\{1,\ldots,k\}$, there is a subset of full $\mu^i$-measure in $\overline\Lambda$ with the property that for every $\omega$ in this set \[ \lim_n \frac{\langle \gamma, \varphi_n(\omega_n[0,1])\rangle}{\lambda_n(\omega_n[0,1])} =\mu^i(\gamma). \] \end{lem} \begin{proof} Given $\omega$ in $\overline\Lambda$, let $e(\omega,n) = \omega_n[0,1]$ be the edge of $G_n$ that appears in the $[0,1]$ segment of $\omega_n$. Then by the definition of $\omega_n$, $\varphi_n(e(\omega,n)) =\omega[a_n,b_n]$ with $a_n\le 0 < b_n$ and $b_n - a_n = \lambda_n(e(\omega,n))$ is the simplicial length of the image in $G_0$. By lemma \ref{length and frequency decay}, $b_n-a_n\to \infty$ as $n\to-\infty$, and by \eqref{eq: generic frequency} for $\mu^i$-almost every $\omega$ \[ \lim_n\frac{\langle\gamma, \omega[a_n,b_n]\rangle}{b_n-a_n} = \mu^i(\gamma).\] This proves the lemma.
\end{proof} As a consequence of the above two lemmas, if $e\in EH^i$ is given, i.e., \[ \liminf_n \mu_n^i(e)\lambda_n(e) >0\] and $i\neq j\in\{1,\ldots,k\}$ we must have $\lim_n\mu^j_n(e)\lambda_n(e) =0$; otherwise by lemma \ref{large frequency implies containing generic points} we will end up with a $\mu^i$-generic $\omega^i$ and a $\mu^j$-generic $\omega^j$ so that for infinitely many $n$, $\omega^i_n[0,1] = \omega^j_n[0,1] = e_n$. But then lemma \ref{containing generic implies generic frequency} implies that along this subsequence \[ \frac{\langle \gamma, \varphi_n(e_n) \rangle}{\lambda_n(e_n)}\] converges both to $\mu^i(\gamma)$ and $\mu^j(\gamma)$ which is impossible by the assumption that $\mu^i(\gamma)\neq \mu^j(\gamma)$. Once we know that $\lim_n\mu^j_n(e)\lambda_n(e)=0$ for every $j\neq i$, we can obviously pass to further subsequences and assume \[ \sum_n \mu^j_n(e)\lambda_n(e) <\infty.\] This shows that edges of $H^i$ satisfy (1), (2), and (3) and moreover $H^i$ and $H^j$ share no edge for $i\neq j$. By definition of $H^0$, for every edge $e$ in $H^0$ and every $i\in\{1,\ldots,k\}$, $\lim_n \mu_n^i(e) \lambda_n(e) =0$. After passing to a subsequence we can also assume \[\sum_n\mu_n^i(e)\lambda_n(e) <\infty\] for every $i\in\{1,\ldots,k\}$. \end{proof} It follows from the above theorem and \eqref{eq: size of image of a cylinder} in the proof that if $e\in EG$ is not in $H^i$ then \[ \sum_n \mu^i(A_n(e)) = \sum_n \mu_n^i(e)\lambda_n(e) < \infty.\] By the Borel--Cantelli lemma, this implies that the set of $\omega$ that belong to infinitely many of the sets $A_n(e)$ has zero $\mu^i$-measure. Equivalently the set of $\omega\in\overline\Lambda$ with $\omega_n[0,1] = e_n$ for infinitely many $n$ has zero $\mu^i$-measure. \begin{prop}\label{image of generic before pinching} With the hypothesis of theorem \ref{transverse decomposition} and after passing to a subsequence for which the conclusion holds, for $\mu^i$-almost every $\omega$ in $\overline\Lambda$, $\omega_n[0,1]\in EH^i_n$ for $n$ sufficiently large (depending on $\omega$). In particular $H^i$ is non-degenerate for every $i\in\{1,\ldots,k\}$. \end{prop} Note that $H^i_n$ denotes the subgraph of $G_n$ identified with $H^i$ for every $n\le 0$ and $i\in\{0,1,\ldots,k\}$. \subsection{Pinching along an unfolding sequence}\label{subsec: pinching} As before assume $(G_n)_{n\le 0}$ is a reduced unfolding sequence with legal lamination $\overline\Lambda$ and ergodic probability currents $\mu^1,\ldots,\mu^k$. We also assume every $G_n$ is equipped with a metric $\lambda_n$ induced by the simplicial length vector $\vec\lambda_n$. We say the sequence $(G_n,\lambda_n)$ {\em converges in the moduli space of graphs} if there is a fixed graph $G$ with given isomorphisms to $G_n$ for every $n$ and so that for every $e\in EG$ \[ \lim_n \frac{\lambda_n(e)}{\lambda_n(G)} \] exists, where $\lambda_n(G)$ denotes the total $\lambda_n$-length of $G_n$. We call $G$ the {\em limit graph}, and the {\em length limit} of an edge of $G$ is the above limit. We define the {\em pinched part} to be the subgraph $E\subset G$ spanned by edges whose length limit is zero. Given a reduced unfolding sequence $(G_n)_n$, assume we have passed to a subsequence that converges in the moduli space of graphs with limit graph $G$ and pinched part $E$.
Moreover, after passing to a further subsequence, we assume the conclusion of theorem \ref{transverse decomposition} holds and we have the transverse decomposition $H^0,H^1,\ldots,H^k$ of $G$ associated to ergodic probability currents $\mu^1,\ldots,\mu^k$. Collapsing the pinched part $E$ of $G$ induces a transverse decomposition $H^0/E, H^1/E, \ldots, H^k/E$ of $G/E$. Note however that it is possible for $H^i/E$ to be degenerate. Recall that a leaf $\omega$ of $\overline\Lambda$ provides legal labeled bi-infinite paths $\omega_n\in \overline\Omega_\infty(G_n)$ for every $n$, whose $\varphi_n$-image is a relabeling of $\omega$ and the $\varphi_n$-image of $\omega_n[0,1]$ contains $\omega[0,1]$. Via the isomorphism between $G_n$ and $G$, the bi-infinite labeled path $\omega_n$ identifies with a bi-infinite labeled path in $G$ which is still called $\omega_n$. We say a path $\alpha$ in $G$ is an {\em accumulation of $\omega$} if there are integers $a<b$ and a subsequence of $(\omega_n[a,b])_n$ that converges to $\alpha$. We use $\alpha_n$ to denote the image of $\alpha$ in $G_n$; in particular $\alpha_n = \omega_n[a,b]$ for infinitely many $n$. \begin{thm}\label{frequency in generic leaves} Given $i\in\{1,\ldots,k\}$, for $\mu^i$-almost every $\omega\in\overline\Lambda$, if a path $\alpha$ in $G$ is an accumulation of $\omega$, \[ \frac{\lambda_n(\alpha_n \setminus H_n^i)}{\lambda_n(G)} \to 0\] as $n\to -\infty$, where $\alpha_n\setminus H_n^i$ is the part of $\alpha_n$ that falls outside of $H_n^i$. \end{thm} \begin{proof} Given a positive integer $c$, an edge $e\in EG$, and a leaf $\omega\in\overline\Lambda$, for every $n$ we consider the sub-path of $\omega$ given by the interval $\omega[-c\lambda_n(G),c\lambda_n(G)]$, with $\lambda_n(G)$ the total length of $G_n$. Obviously $2c\lambda_n(G)$, the simplicial length of these paths, tends to infinity as $n\to -\infty$. If $\alpha$ is an accumulation of $\omega$, then it is easy to see that we can choose $c$ large enough so that $\varphi_n(\alpha_n)$ is a sub-path of $\omega[-c\lambda_n(G),c\lambda_n(G)]$. Motivated by this observation define \[ f_n^c(\omega,e) = \frac1{2c\lambda_n(G)} \left| \omega[ -c\lambda_n(G) , c\lambda_n(G)] \cap \varphi_n(e_n) \right|, \] the proportion of the interval $\omega[-c\lambda_n(G),c\lambda_n(G)]$ which lies in $\varphi_n(e_n)$. Using the comments in the beginning of the proof, it will be enough to show that for $\mu^i$-almost every $\omega$ and every edge $e$ of $G$ outside of $H^i$, $f_n^c(\omega,e)\to 0$ as $n\to-\infty$. As before assume $A_n(e)$ denotes the set of $\omega\in\overline\Lambda$ with the property that $\omega_n[0,1]=e_n$ and $\chi_{A_n(e)}$ is the characteristic function of $A_n(e)$; then \[ f_n^c(\omega,e) = \frac1{2c\lambda_n(G)} \sum_{j=-c\lambda_n(G)}^{c\lambda_n(G)-1} \chi_{A_n(e)}(S^j\omega).\] (Recall that $S$ is the shift map on $\overline\Lambda$.)
Note that by \eqref{eq: size of image of a cylinder}, for every $i\in\{1,\ldots,k\}$ \[ \mu^i(A_n(e)) = \int_{\overline\Lambda}\chi_{A_n(e)}(\omega)\, d\mu^i(\omega) = \mu^i_n(e)\lambda_n(e).\] We use this to integrate the function $f_n^c(\omega,e)$: \begin{equation}\label{eq: integral of f unfolding} \begin{aligned} \int_{\overline\Lambda} f_n^c(\omega,e)\, d\mu^i(\omega) & = \frac1{2c\lambda_n(G)} \int_{\overline\Lambda} \left(\sum_{j=-c\lambda_n(G)}^{c\lambda_n(G)-1} \chi_{A_n(e)}(S^j\omega)\right) \, d\mu^i(\omega) \\ & = \frac1{2c\lambda_n(G)} \sum_{j=-c\lambda_n(G)}^{c\lambda_n(G)-1} \int_{\overline\Lambda} \chi_{A_n(e)}(S^j\omega)\, d\mu^i(\omega) \\ & = \frac1{2c\lambda_n(G)} \sum_{j=-c\lambda_n(G)}^{c\lambda_n(G)-1} \int_{\overline\Lambda} \chi_{A_n(e)}(\omega)\, d\mu^i(\omega) \\ & = \mu_n^i(e)\lambda_n(e), \end{aligned} \end{equation} where we have used the shift invariance of $\mu^i$. When $e$ is not in $H^i$, $\sum_n \mu_n^i(e)\lambda_n(e) <\infty$. Therefore by the Monotone Convergence Theorem \begin{align*} \int_{\overline\Lambda} \left( \sum_n f_n^c(\omega,e) \right) \, d\mu^i(\omega) & = \sum_n \int_{\overline\Lambda} f_n^c(\omega,e)\, d\mu^i(\omega) = \sum _n \mu_n^i(e)\lambda_n(e) < \infty. \end{align*} This implies that for $\mu^i$-almost every $\omega$ \[ \sum_n f_n^c(\omega,e) < \infty,\] and in particular $f_n^c(\omega,e)\to 0$ as $n\to-\infty$. \end{proof} \begin{cor}\label{projection of generic for unfolding} Given $i\in\{1,\ldots,k\}$, for $\mu^i$-almost every leaf $\omega$ of $\overline\Lambda$, if $\alpha$ is an accumulation of $\omega$ in $G$, then $\alpha$ projects to $H^i/E$. \end{cor} In the proof of theorem \ref{frequency in generic leaves}, we showed that if $e\in EG\setminus EH^i$ then for $\mu^i$-almost every $\omega$, $f_n^c(\omega,e)\to 0$ with $n$. For edges in $H^i$, we easily see the following alternative. \begin{lem}\label{positive frequency for generic edges} Given $e\in EH^i$, $i\in\{1,\ldots,k\}$, and $c>0$, there exists $\epsilon>0$ and a set of positive $\mu^i$-measure in $\overline\Lambda$ so that for every leaf $\omega$ in this set $f_n^c(\omega,e)>\epsilon$ for infinitely many $n$. \end{lem} \begin{proof} By \eqref{eq: integral of f unfolding}, $\int f_n^c(\omega,e)\, d\mu^i = \mu_n^i(e)\lambda_n(e)$ for every edge $e\in EG$. When $e\in EH^i$, $\liminf_n \mu_n^i(e)\lambda_n(e)>0$, and since $0\le f_n^c(\omega,e)\le 1$ we can find $\epsilon>0$ so that the $\mu^i$-measure of the set of $\omega$ with $f_n^c(\omega,e)>\epsilon$ is uniformly bounded from below by a positive constant. Hence there is a set of positive $\mu^i$-measure whose elements belong to infinitely many of these sets and this proves the claim. \end{proof} We use this to show one can find a path $\alpha$ in $G$ which is an accumulation of a generic $\omega$ and traverses $e$ as many times as required. This will immediately imply that a vertex of a non-degenerate component of $H^i/E$ has valence at least $2$. \begin{cor}\label{generic leaves traverse a generic edge many times for unfolding} Given an edge $e$ of $H^i, i\in\{1,\ldots,k\}$ and $l>0$, there is a set of positive $\mu^i$-measure in $\overline\Lambda$, so that for every $\omega$ in this set, there is an accumulation $\alpha$ of $\omega$ in $G$ which traverses $e$ at least $l$ times.
\end{cor} \begin{cor}\label{valence at least 2 after pinching unfolding sequence} Every vertex of a non-degenerate component of $H^i/E$, $i\in\{1,\ldots,k\}$, has valence at least $2$ (in $H^i/E$); also a component of $H^i$ cannot be contained in a component of $E$ which is a tree. \end{cor} \subsection{Recurrence for an unfolding sequence}\label{subsec: recurrence for unfolding} An unfolding sequence $(G_n)_{n\le0}$ is {\em recurrent} if (equipped with the simplicial lengths $(\lambda_n)_n$) it has a subsequence that converges in the moduli space of graphs and the pinched part for this subsequence is a forest. Equivalently $(G_n)_n$ is recurrent if the projections of $(G_n,\lambda_n)_n$ in $CV_N$ contain a subsequence that stays in a cocompact subset of $CV_N$. When $(G_n)_n$ is recurrent, we cannot conclude unique ergodicity but we can bound the dimension of the space of currents supported on the legal lamination. \begin{thm}\label{recurrent implies bounded number of measures} Assume $(G_n)_n$ is a recurrent unfolding sequence and $\overline\Lambda$ is the legal lamination. The dimension of the space of currents supported on $\overline\Lambda$ is at most $N$, i.e., there are at most $N$ mutually singular ergodic currents supported on $\overline\Lambda$. \end{thm} \begin{proof} Suppose $(G_{n_m})$ is a subsequence that converges in the closure of the moduli space of graphs with limit graph $G$ and the pinched part $E$, which is a forest in $G$. Also assume $H^0,H^1,\ldots,H^k$ is the transverse decomposition of $G$ corresponding to the collection of mutually singular ergodic currents supported on $\overline\Lambda$. By corollary \ref{valence at least 2 after pinching unfolding sequence}, no $H^i$, $i\in\{1,\ldots,k\}$, is contained in $E$ and after collapsing the pinched part $H^0/E, H^1/E, \ldots, H^k/E$ is a transverse decomposition of $G/E$ with non-degenerate $H^i/E$ for every $i>0$. In addition for $i>0$, every vertex of $H^i/E$ has degree at least $2$ in $H^i/E$ and this immediately implies $k\le N$: each $H^i/E$ contains an embedded loop, these loops are pairwise edge-disjoint, and a graph of first Betti number $N$ (such as $G/E$, since $E$ is a forest) contains at most $N$ pairwise edge-disjoint embedded loops. \end{proof} \section{Folding Sequences}\label{sec: folding sequences} Recall that a {\em folding sequence} is a sequence \begin{equation}\label{eq: folding sequence} \xymatrix{ G_0 \ar[r]^{f_0} & G_{1} \ar[r]^{f_{1}} & \cdots \ar[r]^{f_{n-1}} & G_n \ar[r]^{f_n} & \cdots } \end{equation} where every $f_n, n\ge 0$, is a change of marking morphism. We also define the morphism $\varphi_n:G_0\to G_n$ to be the composition $f_{n-1}\circ\cdots\circ f_0$. Similar to unfolding sequences, we assume the sequence $(G_n)_{n\ge0}$ is reduced, i.e., it does not have a stabilized sequence of subgraphs. In this section, we study the space of length measures on the folding sequence $(G_n)_n$. Note that the folding sequence lifts to a sequence \[ \xymatrix{ T_0 \ar[r]^{\widetilde f_0} & T_{1} \ar[r]^{\widetilde f_{1}} & \cdots \ar[r]^{\widetilde f_{n-1}} & T_n \ar[r]^{\widetilde f_n} & \cdots } \] where $T_n$ is the tree which is the universal cover of $G_n$. If $(\vec\lambda_n)_n$ is a length measure on $(G_n)_n$ and $\lambda_n$ is the induced metric on $G_n$, then it lifts to a metric for $T_n$ which is still denoted by $\lambda_n$. Also we will use the frequency current $(\vec\mu_n)_n$ on $(G_n)_n$. Recall that $\vec\mu_0 = \vec {\bf 1}$ and \[ \vec \mu_{n+1} = M_{f_n}\vec\mu_n \] where $M_{f_n}$ is the incidence matrix of $f_n$. For $e\in EG_n$, the component of $\vec \mu_n$ corresponding to $e$, denoted by $\mu_n(e)$, is the number of times that the $\varphi_n$-images of the edges of $G_0$ traverse $e$.
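As a toy illustration of this bookkeeping (the specific map below is chosen only for the example and plays no role in the sequel), let $G_0=G_1$ be the rose with two petals $a$ and $b$ and let $f_0$ be the change of marking morphism determined by $f_0(a)=ab$ and $f_0(b)=a$. Indexing rows and columns by $(a,b)$, we get \[ M_{f_0} = \begin{pmatrix} 1 & 1\\ 1 & 0\end{pmatrix}, \qquad \vec\mu_0=\begin{pmatrix}1\\1\end{pmatrix}, \qquad \vec\mu_1 = M_{f_0}\vec\mu_0 = \begin{pmatrix}2\\1\end{pmatrix}, \] in agreement with the fact that the paths $f_0(a)=ab$ and $f_0(b)=a$ together traverse $a$ twice and $b$ once.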
\subsection{The limit tree}\label{subsec: limit tree} We define the {\em limit tree} $T$ of the sequence of trees $T_n$. First, define an equivalence relation $\sim_0$ on $T_0$ where $x \sim_0 y$ if there is $p$ so that the images of $x$ and $y$ in $T_p$ coincide. The quotient space $T_0/\sim_0$ may not be Hausdorff, and so we let $\sim$ be the equivalence relation whose classes are the closures of the $\sim_0$-classes. In other words, $T_0/\sim$ is exactly the maximal Hausdorff quotient of $T_0/\sim_0$. For every $n$, we have an induced quotient map $\psi_n:T_n\to T$. Notice that $F_N$ acts by homeomorphisms on $T$. Using Lemma \ref{length and frequency decay}, one sees that this action has a dense orbit. Also for every $n$, the $\psi_n$-image of every edge of $T_n$ is an embedded arc in $T$. One checks that $T$ is uniquely path connected and locally path connected. Further, it is easy to see that $\sim_0$ arises from a pseudo-metric on $T_0$ and that $\sim$ arises from a metric, so $T$ is metrizable. Hence, by \cite{MO90}, $T$ is an $\mathbb{R}$-tree. We say a point of $T_m$ {\em maps to a branch point} if its image in $T_n$ for some $n\ge m$ is a vertex of $T_n$. Clearly the set of such points is countable. Given a length measure $(\vec\lambda_n)$ for the sequence we obtain a pseudo-distance function $\lambda$ on $T$, defined so that the $\lambda$-distance between $x$ and $y$ in $T$ is the infimum of the $\lambda_n$-distances between points $x_n, y_n\in T_n$ with $\psi_n(x_n)=x$ and $\psi_n(y_n)=y$. This is only a pseudo-distance since it is possible for the distance between distinct points to be zero. However, identifying every such pair of points, the quotient equipped with the induced metric, still denoted by $\lambda$, is an $\BR$-tree with an action of $F_N$ by isometries. This $\BR$-tree represents a point of the boundary of $CV_N$ and we have \begin{lem}\label{folding sequence converges to the boundary} The projections of the marked metric graphs $(G_n,\lambda_n)_n$ to $CV_N$ converge to the $\BR$-tree $(T,\lambda)$ obtained from the pseudo-distance $\lambda$ on $T$. \end{lem} As a consequence of lemma \ref{length and frequency decay} we have: \begin{lem}\label{limit tree has dense orbits} If $(G_n)_n$ is a reduced folding sequence that admits a length measure, the limit tree with the induced metric has dense orbits. \end{lem} Note that passing to a subsequence of $(G_n)_n$ does not change the limit tree and the set of length measures on $(G_n)_n$. \subsection{Length measures on a tree}\label{subsection: length measures of a tree} Suppose $T$ is an arbitrary tree. Following Guirardel \cite{Gui00}, we define a {\em length measure} on $T$ to be a collection of finite Borel measures $\lambda_I$ for every compact interval $I$ in $T$ and such that if $J\subset I$, $\lambda_J =(\lambda_I)|_J$. When $T$ is endowed with an action of a group (the free group $F_N$ in our setting), we also assume these measures are invariant under this action. A length measure $\lambda$ is {\em non-atomic} if every $\lambda_I$ is non-atomic. By default a length measure is assumed to be non-atomic and we denote the space of non-atomic invariant length measures on $T$ by $\CD(T)$. When $T$ is an $\BR$-tree, the collection of the Lebesgue measures of the intervals in $T$ provides a length measure which is referred to as the {\em Lebesgue measure} of $T$.
Also if $T$ is equipped with a pseudo-distance so that after collapsing diameter zero subsets one gets an $\BR$-tree, the pull-back of the Lebesgue measure provides a non-atomic length measure which we also refer to as the Lebesgue measure. \subsection{The space of length measures}\label{subsec: space of length measures} When $T$ is the limit tree for a folding sequence $(G_n)_{n\ge0}$, every non-atomic length measure $\lambda$ on $T$ naturally induces a length measure on $T_n$ for every $n$ so that the map $\psi_n$ restricted to an edge of $T_n$ preserves the measure. Since $\lambda$ is invariant under the action of $\pi_1(G_n)$, we also obtain a metric $\lambda_n$ on $G_n$ and a length vector $\vec\lambda_n$ where for every $e\in EG_n$, the component $\lambda_n(e)$ of the vector is equal to the $\lambda_n$-length of $e$. Hence we obtain a sequence of length vectors $(\vec\lambda_n)_n$ for $(G_n)$ with \[ \vec\lambda_{n} = M_{f_n}^T \vec\lambda_{n+1};\] that is, the sequence $(\vec\lambda_n)_n$ is a length measure for the folding sequence. \begin{lem}\label{length measure map is one to one} Given a reduced folding sequence $(G_n)$ and limit tree $T$, if $\lambda^1, \lambda^2$ are distinct non-atomic length measures, for $n$ sufficiently large, the induced length measures $\vec\lambda^1_n, \vec\lambda^2_n$ on $G_n$ are distinct. \end{lem} \begin{proof} Let $\lambda = \lambda^1+\lambda^2$. Since $\lambda^1\neq\lambda^2$, there is an arc $I$ in $T$ and a subset $S\subset I$ of positive $\lambda$-measure with the property that for every $x\in S$ there exists $\epsilon>0$ so that if $J\subset I$ is an interval of length $\le\epsilon$ containing $x$ then $\lambda^1(J)\neq\lambda^2(J)$. Essentially $S$ is the set of points in $I$ where the Radon-Nikodym derivatives $d\lambda^1/d\lambda$ and $d\lambda^2/d\lambda$ are not equal. We can assume $I$ is contained in the $\psi_0$-image of an edge $e_0$ of $T_0$. Since the set of points that map to a branch point is countable and $\lambda$ is non-atomic, we can further assume that $S$ does not have any such point. Hence for every $n$, we can choose $x\in S$ and an edge $e_n$ of $T_n$ contained in the $\varphi_n$-image of $e_0$, so that $\psi_n(e_n)$ in $T$ is a sub-interval of $I$ containing $x$ as an interior point. By lemma \ref{length and frequency decay}, the $\lambda_n$-length of $e_n$ tends to zero as $n\to\infty$ and therefore $\lambda^1_n(e_n) \neq \lambda^2_n(e_n)$ for $n$ sufficiently large. This proves the lemma. \end{proof} Conversely if the sequence $(\vec\lambda_n)$ is a length measure for the folding sequence $(G_n)$, we showed that on $T$ one obtains a pseudo-distance and the induced Lebesgue measure is a non-atomic length measure on $T$. Putting these together we have: \begin{prop}\label{equivalence of length measures} Given a reduced folding sequence $(G_n)$ and limit tree $T$, there is a linear isomorphism between the space $\CD((G_n)_n)$ of length measures on $(G_n)$ and the space $\CD(T)$ of invariant non-atomic length measures on $T$. \end{prop} This immediately shows that $\CD(T)$ is a positive cone in a vector space whose dimension is bounded by $\liminf_n |EG_n|$ and in particular by $3N-3$. Recall that given the length measure $\lambda=(\vec\lambda_n)_n$ and the frequency vectors $(\vec\mu_n)_n$, the area is equal to \[ \vec\mu_n^T\vec\lambda_n, \] which is independent of $n$.
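For the reader's convenience, the independence of $n$ is a one-line check from the two recursions recalled above: \[ \vec\mu_{n+1}^T\vec\lambda_{n+1} = (M_{f_n}\vec\mu_n)^T\vec\lambda_{n+1} = \vec\mu_n^T\bigl(M_{f_n}^T\vec\lambda_{n+1}\bigr) = \vec\mu_n^T\vec\lambda_n. \]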
We define $\CD_0(T)$ to be the subset of $\CD(T)$ consisting of area one length measures, i.e., those $\lambda$ for which \[ \vec {\bf 1}^T\vec\lambda_0 = 1.\] Note that the above quantity is equal to the total $\lambda$-length of $G_0$ and therefore \[ \lambda_0(G_0) = \sum_{e\in EG_0}\lambda_0(e) =1.\] The subset $\CD_0(T)$ is compact and is a section of the cone $\CD(T)$. \subsection{Ergodic length measures}\label{subsec: ergoidic length measures} We say a length measure $\lambda\in \CD(T)$ is {\em ergodic} if whenever $\lambda = c_1\lambda^1+c_2\lambda^2$ for $c_1,c_2>0$ and $\lambda^1,\lambda^2\in \CD(T)$, then $\lambda^1$ and $\lambda^2$ are homothetic to $\lambda$. It is a standard consequence of the Radon-Nikodym Theorem that non-homothetic ergodic length measures are mutually singular and therefore linearly independent. This immediately implies the following. \begin{thm}\label{finite ergodic length measures} Given a reduced folding sequence $(G_n)_n$ with limit tree $T$, the number of pairwise non-homothetic ergodic length measures on $T$ is bounded by one less than the dimension of $\CD(T)$ and therefore by $3N-4$. \end{thm} This also follows from a result of Guirardel \cite{Gui00}, but, as we have seen, the proof is easier in this case. Note that every $\lambda\in\CD(T)$ has an {\em ergodic decomposition} $\lambda = \lambda^1+\cdots+\lambda^k$ where each $\lambda^i\in\CD(T)$ is ergodic and $\lambda^1,\ldots,\lambda^k$ are the {\em ergodic components} of $\lambda$. Using Guirardel's terminology, when $\lambda$ is ergodic the limit tree with the length measure induced by $\lambda$ is {\em uniquely ergometric}. Using the above theorem, for a reduced folding sequence $(G_n)$ with limit tree $T$, assume $\lambda^1, \ldots, \lambda^k$ is a maximal collection of pairwise non-homothetic ergodic length measures on $T$ and define $\lambda = \lambda^1+\cdots+\lambda^k$. Suppose $T$ is equipped with the distance given by $\lambda$ and every $G_n$ and $T_n$ is equipped with the induced metric $\lambda_n$. In particular every edge $e$ of $T_0$ is identified with the interval $[0,\lambda_0(e)]$. We say a point $x$ in the interior of $e$ is {\em $\lambda^i$-generic}, $i=1,\ldots,k$, if it does not map to a branch point, and \begin{equation}\label{eq: generic for length} \lim_{\epsilon\to 0} \frac{\lambda^i_0([x-\epsilon,x])}\epsilon = \lim_{\epsilon\to 0} \frac{\lambda^i_0([x,x+\epsilon])}\epsilon = 1. \end{equation} Recall that $e$ is identified with the interval $[0,\lambda_0(e)]$ and therefore $[x-\epsilon,x]$ and $[x,x+\epsilon]$ represent subarcs of $\lambda_0$-length $\epsilon$ of $e$ with endpoint $x$. It is a standard consequence of ergodicity that in $T$ (respectively $T_n$ or $G_n$) for $\lambda^i$-almost (resp. $\lambda^i_n$-almost) all points \eqref{eq: generic for length} holds. We sometimes refer to these points as $\lambda^i$-generic. It is also easy to see that the sets of $\lambda^i$-generic and $\lambda^j$-generic points are disjoint for $i\neq j$. \subsection{Transverse decomposition for a folding sequence} \label{subsec: transverse decomposition for folding} Assume (perhaps after passing to a subsequence) that every $G_n$ is identified with a fixed graph $G$. We use this identification and for an edge $e\in EG$, we denote the image in $EG_n$ by $e_n$. Also for $n\ge 0$ and $i=1,\ldots, k$, $\lambda_n(e)$, $\lambda^i_n(e)$, and $\mu_n(e)$ respectively denote the $\lambda_n$-length of $e_n$, the $\lambda^i_n$-length of $e_n$, and the corresponding component of the frequency vector $\vec\mu_n$.
\begin{thm}\label{transverse decomposition for folding} Suppose a reduced folding sequence $(G_n)$ with pairwise non-homothetic non-atomic ergodic length measures $\lambda^1,\ldots,\lambda^k$ is given and every $G_n$ is isomorphic to a fixed graph $G$. After passing to a subsequence, there is a transverse decomposition $H^0,H^1,\ldots,H^k$ of $G$, such that for every distinct pair $i, j\in\{1,\ldots,k\}$ and $e\in EH^i$ \begin{enumerate} \item $\displaystyle\liminf_n \mu_n(e)\lambda^i_n(e) >0$, \item $\displaystyle\sum_n \mu_n(e)\lambda^j_n(e) < \infty$, and \item $\displaystyle\lim_n\frac{\lambda^j_n(e)}{\lambda^i_n(e)}=0$. \end{enumerate} Also given $e\in EH^0$ and $i$ in $\{1,\ldots,k\}$ \[ \sum_n \mu_n(e)\lambda^i_n(e) < \infty.\] \end{thm} \begin{proof} Let $\lambda = \lambda^1+\cdots+\lambda^k$. First we pass to a subsequence, so that for every $e\in EG$, either $\displaystyle\liminf \mu_n(e)\lambda_n(e) >0$ or $\displaystyle\lim \mu_n(e)\lambda_n(e) = 0$. Then since $\lambda = \lambda^1+\cdots+\lambda^k$ and after passing to a subsequence, we can assume for every $e\in EG$, either $\displaystyle \lim \mu_n(e)\lambda_n(e) =0$ or there is $i\in\{1,\ldots,k\}$ so that $\displaystyle \liminf \mu_n(e)\lambda^i_n(e) >0$. We define $H^i$ to consist of all edges $e$ with $\displaystyle\liminf \mu_n(e)\lambda^i_n(e) >0$ and define $H^0$ to consist of the edges $e$ with $\displaystyle\lim \mu_n(e)\lambda_n(e)=0$. We proceed to prove this is a transverse decomposition of $G$ and satisfies the conclusion of the theorem. \begin{lem}\label{decent frequency implies containing generic points} If $e\in EG$ has the property that $\limsup_n \mu_n(e)\lambda_n^i(e) >0$ then there is a subset of positive $\lambda^i_0$-measure in $G_0$ so that for every $x$ in this subset $\varphi_n(x)\in e_n$ for infinitely many $n$. \end{lem} \begin{proof} Assume $\limsup_n \mu_n(e)\lambda^i_n(e) >\epsilon>0$ and let \[ B_n(e) = \varphi_n^{-1}(e_n)\] denote the $\varphi_n$-pre-image of $e_n$ in $G_0$. Then \begin{equation}\label{eq: length of preimage} \lambda^i_0(B_n(e)) = \mu_n(e)\lambda^i_n(e) \end{equation} for every $n$. This is a consequence of the fact that $\mu_n(e)$ is the number of components of the pre-image of $e_n$ in $G_0$. Hence $\lambda^i_0(B_n(e))>\epsilon$ for infinitely many $n$. This implies that \[ \bigcap_m \bigcup_{n\ge m}B_n(e) \] has $\lambda^i_0$-measure at least $\epsilon$. Obviously if $x$ is in the above set, then $\varphi_n(x)\in e_n$ for infinitely many $n$ and this proves the lemma. \end{proof} \begin{lem}\label{containing a generic point implies generic} Suppose $x$ is a $\lambda^i$-generic point of $G_0$ for some $i\in\{1,\ldots,k\}$ and $\varphi_n(x)$ is in the edge $e(x,n)$ of $G_n$. Then \[ \lim_n\frac{\lambda^i(e(x,n))}{\lambda(e(x,n))} = 1.\] \end{lem} \begin{proof} Since $\varphi_n(x)$ is in $e(x,n)$, we can choose a $\varphi_n$-pre-image $I_n$ of $e(x,n)$ which is an arc in $G_0$ containing $x$. Because $(G_n)$ is reduced, lemma \ref{length and frequency decay} gives $\lambda_0(I_n)=\lambda_n(e(x,n))\to 0$ as $n\to\infty$. Then since $x$ is $\lambda^i$-generic and $\varphi_n$ from $I_n$ to $e(x,n)$ is measure preserving with respect to $\lambda$ and $\lambda^i$, \[ \lim_n\frac{\lambda^i(e(x,n))}{\lambda(e(x,n))} = \lim_n\frac{\lambda^i(I_n)}{\lambda(I_n)} =1.\] \end{proof} Combining the above two lemmas we can see that if $\liminf \mu_n(e)\lambda^i_n(e) >0$, then $\lim_n \mu_n(e)\lambda^j_n(e) = 0$ for every $j\neq i$.
Otherwise, using lemma \ref{decent frequency implies containing generic points}, we find $\lambda^i$-generic $x$ and $\lambda^j$-generic $y$ so that $\varphi_n(x) \in e_n$ and $\varphi_n(y)\in e_n$ for infinitely many $n$. This however contradicts lemma \ref{containing a generic point implies generic} and the fact that $\lambda_n(e)\ge \lambda_n^i(e)+\lambda^j_n(e)$. Using the definition of $H^i$ given above, we know that (1) holds for every $e\in EH^i$. By what we just stated $\lim_n \mu_n(e)\lambda^j_n(e) =0$ for every $j\neq i$; we can pass to a further subsequence and assume in addition that $\sum_n \mu_n(e)\lambda_n^j(e) <\infty$ for every $j\neq i$. Part (3) is an immediate corollary of (1) and (2). By the definition of $H^0$, for every edge $e$ of $H^0$, $\lim_n \mu_n(e)\lambda_n(e) =0$. Again we pass to a further subsequence to assume that for every such $e$, $\sum_n \mu_n(e)\lambda_n(e) <\infty$ and therefore the last assertion of the theorem also holds. \end{proof} With a simple modification of the above proof, it is possible to start with a length measure on the folding sequence and find a transverse decomposition corresponding to the ergodic components of that length measure. \begin{thm}\label{transverse decomposition for a length measure} Suppose $(G_n)$ is a reduced folding sequence and $\lambda=(\vec\lambda_n)$ is a length measure with ergodic components $\lambda^1,\ldots,\lambda^k$. Also assume every $G_n$ is isomorphic to a fixed graph $G$. After passing to a subsequence, there is a transverse decomposition $H^0,H^1,\ldots,H^k$ of $G$, such that for every distinct pair $i,j\in\{1,\ldots,k\}$ and $e\in EH^i$ \begin{enumerate} \item $\displaystyle\liminf_n \mu_n(e)\lambda^i_n(e) >0$, \item $\displaystyle\sum_n \mu_n(e)\lambda^j_n(e) < \infty$, and \item $\displaystyle\lim_n\frac{\lambda^j_n(e)}{\lambda^i_n(e)}=0$. \end{enumerate} Also given $e\in EH^0$ \[ \sum_n \mu_n(e)\lambda_n(e) <\infty.\] \end{thm} From \eqref{eq: length of preimage} and when the conclusion of theorem \ref{transverse decomposition for folding} holds, we can see that for $i\in\{1,\ldots,k\}$ if $e$ is an edge of $G$ which is not in $H^i$ then \[ \sum_n \lambda^i_0(B_n(e)) = \sum_n \mu_n(e)\lambda^i_n(e) < \infty.\] By the Borel--Cantelli lemma, this implies that the set of points $x$ with $\varphi_n(x)\in e_n$ for infinitely many $n$ has $\lambda^i_0$-measure zero. Equivalently: \begin{prop}\label{generic points fold to corresponding subgraphs} For $\lambda^i_0$-almost every $x$ in $G_0$, $\varphi_n(x) \in H^i_n$ for $n$ sufficiently large (depending on $x$), where $H^i_n$ is the subgraph of $G_n$ corresponding to $H^i$. In particular $H^i$ is non-degenerate. \end{prop} \subsection{Pinching along a folding sequence}\label{subsec: pinching along folding} Assume $(G_n)_{n\ge0}$ is a reduced folding sequence and $(\vec\lambda_n)_n$ is a length measure on $(G_n)_n$. Recall that $\vec\lambda_n$ equips $G_n$ with a metric $\lambda_n$. As in \S \ref{subsec: pinching} assume we have passed to a subsequence so that every $G_n$ is isomorphic to a fixed graph $G$ and this subsequence converges in the moduli space of graphs. Also let $E$ denote the pinched part of $G$. After passing to a further subsequence we can assume the conclusion of theorem \ref{transverse decomposition for a length measure} holds for a transverse decomposition $H^0,H^1,\ldots,H^k$ associated to the ergodic components $\lambda^1,\ldots,\lambda^k$ of $\lambda$. Collapsing the pinched part of $G$ induces a transverse decomposition $H^0/E,H^1/E,\ldots,H^k/E$ of $G/E$.
Assume $x$ is a point of $G_0$ which does not map to a branch point. We say a path $\beta$ in $G$ is an {\em accumulation of germs at $x$} if there is a sequence of paths $I_n$ in $G_0$ containing $x$ and with $\lambda_0(I_n)\to 0$ as $n\to\infty$ so that the path $\varphi_{n}(I_{n})$ in $G_n$ is identified with $\beta$ for infinitely many $n$. We use $\beta_n$ to denote the image of $\beta$ in $G_n$. In particular $\beta_n = \varphi_n(I_n)$ for infinitely many $n$. By proposition \ref{generic points fold to corresponding subgraphs}, for $\lambda^i$-almost every $x\in G_0$, if an edge $e$ of $G$ is an accumulation of germs at $x$, then $e\in H^i$. A generalization to longer paths holds. \begin{thm}\label{accumulations of germs respect decomposition} Given $i\in\{1,\ldots,k\}$ for $\lambda^i$-almost every $x\in G_0$, if a path $\beta$ in $G$ is an accumulation of germs at $x$, then \[ \frac{\lambda_n( \beta_n \setminus H^i_n)}{\lambda_n(G)} \to 0\] as $n\to\infty$, where $\beta_n\setminus H^i_n$ is the subset of $\beta_n$ outside of $H^i_n$. \end{thm} \begin{proof} Given $c>0$ and $n$, for a point $x$ in the interior of an edge of $G_0$, we define $I_n^c(x)$ to denote the set of all points in the interior of the same edge whose $\lambda$-distance to $x$ is at most $c\,\lambda_n(G)$. Obviously if $t\in I_n^c(x)$ then $x\in I_n^c(t)$. Also, given $e\in EG$ and as in the proof of theorem \ref{transverse decomposition for folding}, we define $B_n(e)$ to be the $\varphi_n$-pre-image of $e_n$ in $G_0$. We define \[ f_n^c(x,e) = \frac1{2c\lambda_n(G)}\lambda(I_n^c(x) \cap B_n(e)) = \frac1{2c\lambda_n(G)}\int_{I_n^c(x)} \chi_{B_n(e)}(t)\, d\lambda(t).\] After integrating with respect to $\lambda$: \begin{equation}\label{eq: integral of f folding} \begin{aligned} \int_{G_0} f_n^c(x,e)\, d\lambda(x) &= \frac1{2c\lambda_n(G)}\int_{G_0}\int_{I_n^c(x)}\chi_{B_n(e)}(t)\, d\lambda(t)\, d\lambda(x) \\ &= \frac1{2c\lambda_n(G)}\int_{G_0}\chi_{B_n(e)}(t) \int_{I_n^c(t)}d\lambda(x)\, d\lambda(t)\\ &\le \int_{G_0} \chi_{B_n(e)}(t)\, d\lambda(t)\\ &= \mu_n(e)\lambda_n(e). \end{aligned} \end{equation} When $e\in H^0$, we know that $\sum_n\mu_n(e)\lambda_n(e) <\infty$. Therefore by the Monotone Convergence Theorem \[ \int_{G_0}\left(\sum_n f_n^c(x,e)\right)\, d\lambda(x) = \sum_n \int_{G_0}f_n^c(x,e)\, d\lambda(x) \le \sum_n \mu_n(e)\lambda_n(e) <\infty.\] In particular for $\lambda$-almost every $x$, $\sum_n f_n^c(x,e) <\infty$ and as a result $f_n^c(x,e)\to 0$ as $n\to\infty$. On the other hand, assume $i,j\in\{1,\ldots,k\}$ are distinct and $e\in H^j$. By part (3) of theorem \ref{transverse decomposition for a length measure} $\lambda^j_n(e)/\lambda_n(e) \to 1$ as $n\to\infty$. So for a given $\epsilon>0$ and $n$ sufficiently large we can assume $\lambda^j_n(e)>(1-\epsilon)\lambda_n(e)$. Using this for every component of the pre-image of $e_n$ in $I_n^c(x)$, we have \[ \lambda^j(I_n^c(x)) \ge \lambda^j(I_n^c(x)\cap B_n(e)) \ge (1-\epsilon) \lambda(I_n^c(x)\cap B_n(e));\] so \[ \frac{\lambda^j(I_n^c(x))}{2c\lambda_n(G)} \ge (1-\epsilon) f_n^c(x,e).\] But since $2c\lambda_n(G) = \lambda(I_n^c(x))\to 0$ as $n\to\infty$ and by \eqref{eq: generic for length}, for $\lambda^i$-generic $x$ the fraction on the left tends to zero as $n\to\infty$. Hence $f_n^c(x,e)\to 0$ as $n\to\infty$ for $\lambda^i$-almost every $x\in G_0$. We have shown that for $\lambda^i$-almost every $x\in G_0$ and $c>0$ fixed, \[ \frac1{\lambda(I_n^c(x))} \lambda(I_n^c(x)\cap \varphi_n^{-1}(G_n \setminus H^i_n)) \to 0\] as $n\to\infty$.
Equivalently for $\lambda^i$-almost every $x\in G_0$, $1/\lambda_n(G)$ times the $\lambda_n$-length of the intersection of $\varphi_n(I_n^c(x))$ and the complement of $H_n^i$ tends to zero as $n\to\infty$: \[ \lim_n \frac{\lambda_n( \varphi_n(I_n^c(x))\setminus H^i_n)}{\lambda_n(G)} =0.\] Given $\beta$ in $G$ which is an accumulation of germs at $x$, we can choose $c>0$, so that the path $\beta_n$ in $G_n$ is contained in $\varphi_n(I_n^c(x))$ for every $n$. Then it follows that for $\lambda^i$-almost every such $x$ \[ \lim_n \frac{ \lambda_n(\beta_n\setminus H^i_n)}{\lambda_n(G)} = 0.\] \end{proof} \begin{cor}\label{accumulation of germs after pinching} Given $i=1,\ldots,k$, for $\lambda^i$-almost every $x\in G_0$ if a path $\beta$ in $G$ is an accumulation of germs at $x$, then $\beta$ projects to $H^i/E$. \end{cor} \begin{lem}\label{frequency in an accumulation} Given $e$ in $H^i$, $i\in\{1,\ldots,k\}$, and $c>0$, there exist $\epsilon>0$ and a set of positive $\lambda^i$-measure in $G_0$, so that for every $x$ in this set, $f_n^c(x,e)>\epsilon$ for infinitely many $n$. \end{lem} \begin{proof} By \eqref{eq: integral of f folding} in the previous proof, we know that \[ \int_{G_0}f_n^c(x,e)\, d\lambda^i(x) = \mu_n(e)\lambda_n^i(e).\] When $e$ is in $H^i$, $\liminf \mu_n(e)\lambda_n^i(e)>0$. From this we can find $\epsilon>0$ and a set of positive $\lambda^i$-measure, so that $f_n^c(x,e) >\epsilon$ for infinitely many $n$. \end{proof} \begin{cor}\label{recurrence after pinching} Given $e$ in $H^i, i\in\{1,\ldots,k\}$ and $l>0$, there is a set of positive $\lambda^i$-measure in $G_0$, so that for every $x$ in this set there is a path $\beta$ in $G$ which is an accumulation of germs at $x$ and $\beta$ traverses $e$ at least $l$ times. \end{cor} Corollaries \ref{accumulation of germs after pinching} and \ref{recurrence after pinching} imply in particular that: \begin{cor}\label{valence two after pinching and folding} Given $i\in\{1,\ldots,k\}$, in every non-degenerate component of $H^i/E$, every vertex has valence at least $2$ (in $H^i/E$). \end{cor} \subsection{Recurrence for a folding sequence}\label{subsec: recurrence for folding} As in the case of an unfolding sequence, a folding sequence $(G_n)_{n\ge0}$ with a length measure $\lambda = (\vec\lambda_n)_n$ is {\em recurrent} if it has a subsequence that converges in the moduli space of graphs and the pinched part is a forest. \begin{thm}\label{recurrence for folding} Assume $(G_n)_n$ is a reduced folding sequence and equipped with a length measure $\lambda=(\vec\lambda_n)_n$, it is recurrent. Then the ergodic decomposition of $\lambda$ consists of at most $N$ components. \end{thm} \section{Folding/unfolding sequences and progress in $\CF\CF(N)$}\label{sec: progress in FF} We say a folding/unfolding sequence $(G_n)_{a\le n\le b}$ {\em moves linearly} or is {\em linear} if there exists $M>0$, so that for every $n$ every entry of the incidence matrix of $f_n$ is bounded by $M$. From this one easily concludes: \begin{lem}\label{linear implies linear speed} Given a linear folding/unfolding sequence, the distances in $CV_N$ grow at most linearly, i.e., \[ d_{CV_N}(G_m, G_n) = O(n-m).\] \end{lem} \begin{rem} In fact, when $(G_n)_n$ is cocompact, we can show the distances grow at least linearly. \end{rem} Suppose $(G_n)_{n\le 0}$ is an unfolding sequence with legal lamination $\overline\Lambda$ and ergodic probability currents $\mu^1,\ldots,\mu^k$.
Recall that $\mu^i$-almost every leaf $\omega$ of $\overline\Lambda$ is generic, i.e., \[ \lim_n \frac{\langle\gamma,\omega[0,n]\rangle}n = \lim_n\frac{\langle\gamma,\omega[-n,0]\rangle}n = \mu^i(\gamma)\] for a fixed choice of $\gamma\in\Omega(G_0)$ (or for every $\gamma$ if one insists). The following lemma shows for $a\le 0<b$ and $b-a$ large, a large (relative to $b-a$) subsegment of $\omega[a,b]$ also has the property that the frequency of occurrence of $\gamma$ is close to $\mu^i(\gamma)$. \begin{lem}\label{frequency in a subpath} Given $\theta, \epsilon>0$ and $\mu^i$-generic leaf $\omega$ of $\overline\Lambda$ there exists $n_0>0$ so that if $a\le 0\le b$ are given with $b-a\ge n_0$ and $[c,d]\subset[a,b]$ is a sub-interval with $d-c > \theta (b-a)$ \[ \left| \frac{ \langle \gamma , \omega[c,d] \rangle}{d-c} - \mu^i(\gamma) \right| < \epsilon.\] \end{lem} \qed We have shown in theorem \ref{transverse decomposition} that there is a subsequence $(G_{n_m})_m$ of $(G_n)_n$ with $G_{n_m}$ isomorphic to a fixed graph $G$ and there is a transverse decomposition $H^0,H^1,\ldots,H^k$ of $G$ associated to the ergodic currents $\mu^1,\ldots,\mu^k$ so that the conclusion of the theorem holds. As usual we use the simplicial length $(\vec\lambda_n)_n$ for every $G_n$. To simplify notation, we also use $\vec{P\lambda_n}$ to denote a multiple of $\vec\lambda_n$ whose component associated to $e\in EG_n$ is $P\lambda_n(e)=\lambda_n(e)/\lambda_n(G_n)$, where $\lambda_n(G_n)$ is the total length of $G_n$. Given a positive integer $l>0$, we assume we have passed to a further subsequence of $(G_{n_m})_m$ so that for every $m$, $p=0,\ldots,l$, and $e\in EG_{n_m+p}$ either $P\lambda_{n_m+p}(e)$ is bounded from below by a positive constant or tends to zero with $m$. More precisely, there is a subgraph $E_{n_m+p}\subset G_{n_m+p}$ (another sequence of ``pinched'' subgraphs) so that for every $e_{m,p}\in EE_{n_m+p}$, $P\lambda_{n_m+p}(e_{m,p})\to 0$ as $m\to\infty$, while for every $e\in EG_{n_m+p}$ which is not in $E_{n_m+p}$, $P\lambda_{n_m+p}(e)>\epsilon$ for a uniform constant $\epsilon>0$. Given $m$ and $p=0,\ldots,l$ we can use the composition \[ f_{n_m+p-1} \circ \cdots \circ f_{n_m+1} \circ f_{n_m} : G_{n_m}\to G_{n_m+p}\] and compose it with the collapse map $G_{n_m+p}\to G_{n_m+p}/E_{n_m+p}$. Assuming that the sequence $(G_n)_n$ is linear we claim: \begin{lem}\label{disjoint images after collapse for unfolding} For $m$ sufficiently large depending on $l$, and a distinct pair $i,j\in\{1,\ldots,k\}$, the images of $H^i_{n_m}$ and $H^j_{n_m}$ in $G_{n_m+p}/E_{n_m+p}$ do not share any edge. \end{lem} \begin{proof} Assume $e_1\in EH^i_{n_m}$ and $e_2\in EH^j_{n_m}$ are given and $e'$ is an edge of $G_{n_m+p}/E_{n_m+p}$ which is in the images of both $e_1$ and $e_2$. This implies there are sub-arcs $e_1'\subset e_1$ and $e_2'\subset e_2$ and the restrictions of the map $G_{n_m}\to G_{n_m+p}/E_{n_m+p}$ to those are homeomorphisms onto $e'$. Since $(G_n)_n$ is linear, it follows that $ \lambda_{n_m}(G_{n_m}) < (MN)^p \lambda_{n_m+p}(G_{n_m+p})$ with $M$ the upper bound for the entries of the incidence matrices. Since $e'$ corresponds to an edge of $G_{n_m+p}$ which is not in $E_{n_m+p}$, its length is at least $\epsilon \lambda_{n_m+p}(G_{n_m+p})$. Therefore lengths of $e_1'$ and $e_2'$ are at least $\epsilon \lambda_{n_m}(G_{n_m})/(MN)^p$. This implies that $\lambda_{n_m}(e_1')>\theta \lambda_{n_m}(e_1)$ and $\lambda_{n_m}(e_2')>\theta\lambda_{n_m}(e_2)$ with \[ \theta = \epsilon/(MN)^l.
\] Then we can use lemma \ref{frequency in a subpath} to conclude that \[ \frac{\langle \gamma, \varphi_{n_m}(e_1') \rangle}{\lambda_{n_m}(e_1')} \to \mu^i(\gamma) \quad {\rm and} \quad \frac{\langle \gamma, \varphi_{n_m}(e_2') \rangle}{\lambda_{n_m}(e_2')} \to \mu^j(\gamma)\] as $m\to\infty$. But $\varphi_{n_m}(e_1') = \varphi_{n_m}(e_2')$ since they both factor through $e'$ and this contradicts the fact that $\mu^i(\gamma)\neq \mu^j(\gamma)$. \end{proof} For $m$ sufficiently large and $p=0,\ldots,l$, choose an edge $e_{m,p}$ of $G_{n_m+p}$ which is not in $E_{n_m+p}$. By the above lemma, $e_{m,p}$ is in the image of at most one $H^i_{n_m}$, for $i=1,\ldots,k$. So if $k>1$, we can choose $j\in\{1,\ldots,k\}$ so that the image of $H^j_{n_m}$ in $G_{n_m+p}$ does not contain $e_{m,p}$. By corollary \ref{valence at least 2 after pinching unfolding sequence}, $H^j_{n_m}/E_{n_m}$ contains a loop; let $\delta$ be a primitive loop in $G_{n_m}$ which projects to this loop. Then we claim that for every $p\in \{0,\ldots, l\}$ (and $m$ sufficiently large), the image of $\delta$ in $G_{n_m+p}$ does not fill (misses at least one edge). The idea is that every edge of $\delta$ is either in $H^j_{n_m}$ or is in $E_{n_m}$. We had chosen $e_{m,p}$ so that it is not in the image of $H^j_{n_m}$. Also, for $m$ sufficiently large, $e_{m,p}$ is not in the image of $E_{n_m}$ because (by the argument in the proof of the previous lemma) the length of $e_{m,p}$ is at least $\epsilon\lambda_{n_m}(G_{n_m})/(MN)^p$, while for edges of $E_{n_m}$ the ratio of their length over $\lambda_{n_m}(G_{n_m})$ goes to zero as $m\to\infty$. So $e_{m,p}$ cannot be in the image of $E_{n_m}$. We have shown that for every $l$, if $m$ is chosen to be large enough, then there exists a primitive loop $\delta$ in $G_{n_m}$ whose image does not fill $G_{n_m+p}$ for every $p\in\{0,\ldots,l\}$. This implies that $d_{\CF\CF}(\delta,G_{n_m+p})\le 3$ where the distance is the $\CF\CF$-distance between the cyclic free factor generated by $\delta$ and a projection of $G_{n_m+p}$ to the free factor complex (obtained by taking the free factor represented by removing an edge of $G_{n_m+p}$). Consequently $d_{\CF\CF}(G_{n_m},G_{n_m+p}) \le 6.$ \begin{prop} Given $l$, for $m$ sufficiently large and every $p\in\{0,\ldots,l\}$ \[ d_{\CF\CF}(G_{n_m},G_{n_m+p}) \le 6.\] \end{prop} We have proved: \begin{thm}\label{slow progress in FF for unfoldings} Let $(G_n)_{n\le 0}$ be a linear unfolding sequence. If $(G_n)_n$ is not dually uniquely ergodic, i.e., the legal lamination of $(G_n)_n$ admits at least two non-homothetic ergodic currents, then for every $l$, there exists $n$ large so that \[ d_{\CF\CF}(G_n, G_{n+p})\le 6\] for every $p\in\{0,\ldots,l\}$. \end{thm} Now assume $(G_n)_{n\ge0}$ is a reduced folding sequence with limit tree $T$ and length measure $\lambda=(\vec\lambda_n)_n$. We also assume every graph $G_n$ is equipped with the metric induced by $\lambda$ and $\lambda^1,\ldots,\lambda^k$ are the ergodic components of $\lambda$. Similar to lemma \ref{frequency in a subpath} we show that for a sub-interval of significant length in a small interval about a $\lambda^i$-generic point $x\in G_0$, most of the length of the sub-interval is induced from $\lambda^i$.
More precisely: \begin{lem}\label{length in a subinterval} Given $\theta, \epsilon>0$, for $\lambda^i$-almost every $x\in G_0$ there exists $\delta>0$ so that if $I$ is an interval of $\lambda$-length at most $\delta$ in $G_0$ containing $x$ and $J\subset I$ is a sub-interval of $\lambda$-length at least $\theta\lambda(I)$, then \[ \frac{\lambda^i(J)}{\lambda(J)} > 1-\epsilon.\] \end{lem}\qed We use theorem \ref{transverse decomposition for a length measure} to find a subsequence $(G_{n_m})_m$ of $(G_n)_n$ with each $G_{n_m}$ isomorphic to a fixed graph $G$ and a transverse decomposition $H^0,H^1,\ldots, H^k$ of $G$ which satisfies the conclusion of theorem \ref{transverse decomposition for a length measure}. We use the normalized metric $P\lambda_n$ for the graph $G_n$, which is obtained by dividing the metric $\lambda_n$ by the total length $\lambda_n(G_n)$ of $G_n$. Given a positive integer $l$, we assume a further subsequence has been chosen so that for every $m$, $p\in\{0,\ldots,l\}$, and edge $e_{m,p}$ of $G_{n_m+p}$, either $P\lambda_{n_m+p}(e_{m,p}) >\epsilon$ for a constant $\epsilon>0$ independent of $m$, or $P\lambda_{n_m+p}(e_{m,p})\to 0$ with $m$. We define $E_{n_m+p}\subset G_{n_m+p}$ to be the subgraph consisting of the edges of $G_{n_m+p}$ whose $P\lambda_{n_m+p}$-length is at most $\epsilon$. (This is the pinched part of the graph.) Given $m$ and $p\in\{0,\ldots,l\}$ we consider the composition of the morphism \[ f_{n_m+p-1} \circ \cdots \circ f_{n_m+1} \circ f_{n_m} : G_{n_m}\to G_{n_m+p}\] with the collapse map $G_{n_m+p}\to G_{n_m+p}/E_{n_m+p}$. Analogously to lemma \ref{disjoint images after collapse for unfolding}, when $(G_n)_n$ is linear we claim: \begin{lem}\label{disjoint images after collapse for folding} For $m$ sufficiently large (depending on $l$), $0\le p\le l$, and a distinct pair $i,j\in\{1,\ldots,k\}$, the images of $H^i_{n_m}$ and $H^j_{n_m}$ in $G_{n_m+p}/E_{n_m+p}$ do not share any edges. \end{lem} \begin{proof} The proof follows the same outline as the proof of lemma \ref{disjoint images after collapse for unfolding}. Assume the images of $e_1\in EH^i_{n_m}$ and $e_2\in EH^j_{n_m}$ in $G_{n_m+p}$ both traverse an edge $e'$ outside of $E_{n_m+p}$. We choose subintervals $e_1'\subset e_1$ and $e_2'\subset e_2$ which are mapped isometrically to $e'$. Using linearity of $(G_n)_n$ there exists $\theta$ independent of $m$, so that \[ \lambda_{n_m}(e_1')>\theta \lambda_{n_m}(e_1) \quad {\rm and} \quad \lambda_{n_m}(e_2') > \theta \lambda_{n_m}(e_2).\] Then we use lemma \ref{length in a subinterval} to conclude that \[ \left| \frac{\lambda^i_{n_m}(e_1')}{\lambda_{n_m}(e_1')} - 1\right| \quad {\rm and} \quad \left| \frac{\lambda^j_{n_m}(e_2')}{\lambda_{n_m}(e_2')} - 1\right| \] will be arbitrarily small for $m$ large. But $e_1'$ and $e_2'$ both map to $e'$ and therefore their $\lambda, \lambda^i,$ and $\lambda^j$-lengths are equal. For $m$ sufficiently large, this contradicts the fact that $\lambda^i+\lambda^j \le \lambda$. \end{proof} With this lemma we proceed as in the case of linear unfolding sequences and prove that for $m$ sufficiently large, there is a primitive loop $\delta$ whose image in $G_{n_m+p}$ does not fill. This implies: \begin{prop} Given a positive integer $l$, if $m$ is chosen large enough then for every $p\in\{0,\ldots,l\}$ \[ d_{\CF\CF}(G_{n_m},G_{n_m+p})\le 6.\] \end{prop} And we have proved: \begin{thm}\label{slow progress in FF for foldings} Let $(G_n)_{n\ge 0}$ be a linear folding sequence.
If $(G_n)_n$ is not uniquely ergodic, i.e., the limit tree admits more than one non-homothetic non-atomic length measure, then for every $l$, there exists $n$ large so that \[ d_{\CF\CF}(G_n, G_{n+p})\le 6\] for every $p\in\{0,\ldots,l\}$. \end{thm} \section{Applications}\label{applications} We deduce two consequences of our main results. First, we will need to collect some background material. For the remainder of the paper $N \geq 3$. \subsection{Outer space} We give a very brief review of Outer space; for more details, see \cite{FM11, Bes11, BF14}. Use $\overline{cv}_N$ to denote the set of \emph{very small} actions of $F_N$ on $\mathbb{R}$-trees; elements of $\overline{cv}_N$ are called trees. Each $T \in \overline{cv}_N$ has an associated translation length function $l_T:F_N \to \mathbb{R}:g \mapsto \inf_{x \in T} d_T(x,gx)$. The function $\overline{cv}_N \to \mathbb{R}^{F_N}$, $T \mapsto l_T$, is injective \cite{CM87}, and $\overline{cv}_N$ is given the subspace topology, where $\mathbb{R}^{F_N}$ has the product topology. The subspace $cv_N$ consists of trees $T$ with a free, discrete action; the quotient $T/F_N$ is then a metric graph that comes with an identification $\pi_1(T/F_N) \cong F_N$ that is well-defined up to inner automorphisms of $F_N$. Associated to $T \in cv_N$ is the cone on an open simplex, obtained by equivariantly varying the lengths of the edges of $T$ while requiring that each edge retain positive length. Let $T, T' \in cv_N$. Notice that a change of marking $f:T/F_N \to T'/F_N$ has lifts $\widetilde{f}:T \to T'$ that are $F_N$-equivariant and which restrict to linear functions on edges; conversely, given any such function $g:T \to T'$, one obtains a change of marking $\overline{g}:T/F_N \to T'/F_N$. So, we will use the term \emph{change of marking} to refer either to a change of marking as defined before or to a lift of a change of marking. Notice that the Lipschitz constant $Lip(f)$ of $f$ is equal to the maximum slope of $f$ restricted to an edge of $T$. Define: \[ d(T,T')=\inf \log(\text{Lip}(f)) \] \noindent where $f$ ranges over all change of marking maps $T \to T'$. The function $d$ is called the \emph{Lipschitz distance} on $cv_N$; $d$ is positive definite and satisfies the triangle inequality, but is not symmetric. A change of marking $f:T\to T'$ is called \emph{optimal} if $d(T,T')=\log Lip(f)$; optimal maps always exist. Assume that $f:T \to T'$ is optimal. Let $T_0$ denote the topological tree $T$ equipped with the $f$-pullback metric from $T'$; there is a path in a simplex of $cv_N$ from $T$ to $T_0$. The induced function $f_0:T_0 \to T'$ is a \emph{morphism}, i.e., $f_0$ has slope one on every edge of $T_0$. The tree $T_0$ can be \emph{folded} according to the map $f_0$: if two edges $e,e'$ of $T_0$ with the same initial vertex are identified by $f_0$, then one can form $T_\epsilon^{e,e'}$ by identifying the $\epsilon$ initial segments of these edges, and there is an induced morphism $f_\epsilon^{e,e'}:T_\epsilon^{e,e'} \to T'$. Folding all edges that are identified by $f_0$ in this way gives a \emph{greedy folding path} $T_t$, which we parameterize by $vol(T_t/F_N)$. For $0\leq s<t\leq \beta$, one has a morphism $f_{s,t}:T_s \to T_t$ so that the following equalities hold: $f_{0,t}=f_{s,t} \circ f_{0,s}$ and $f_{s, \beta}=f_{t,\beta}\circ f_{s,t}$. A \emph{liberal folding path} is defined similarly to a greedy folding path, except that one is not required to fold all illegal turns simultaneously; see the Appendix of \cite{BF14}.
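To fix ideas, we include a small illustrative computation of our own (it is not drawn from the references cited above). Let $T$ and $T'$ both be the rank three rose with petals $a,b,c$, marked identically, with edge lengths $(\tfrac13,\tfrac13,\tfrac13)$ in $T$ and $(\tfrac14,\tfrac14,\tfrac12)$ in $T'$. The identity change of marking $T\to T'$ is linear on edges with slopes $\tfrac{1/4}{1/3},\tfrac{1/4}{1/3},\tfrac{1/2}{1/3}$, so its Lipschitz constant is $\tfrac32$; on the other hand, every change of marking $f$ satisfies $l_{T'}(c)\le Lip(f)\, l_T(c)$, i.e., $Lip(f)\ge \tfrac32$. Hence $d(T,T')=\log\tfrac32$, and the same reasoning applied to the petal $a$ gives $d(T',T)=\log\tfrac43$, which illustrates that $d$ is not symmetric.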
Our reduced folding/unfolding paths are exactly the restriction of a liberal folding path to a discrete subspace of the parameter space. Indeed, starting with a folding/unfolding path $(G_n)_n$, we may insert Stallings factorizations if necessary and then parameterize the new folding/unfolding sequence as a continuous process. \begin{lem}\label{2 gates} Let $(G_n)_n$ be a reduced folding/unfolding path. For every $n$ and every edge $e$ of $G_n$ with terminal vertex $v$, there is an edge $e'$ with initial vertex $v$ such that the path $ee'$ is legal. \end{lem} \begin{proof} Assume that an oriented edge $e$ of $G_n$ with terminal vertex $v$ is such that there is no edge $e'$ of $G_n$ with initial vertex $v$ such that $ee'$ is legal; call the oriented edge $e$ a $v$-dead-end. Notice that for $k<n$, the only edges that can fold over $e$ via $f_{k,n}$ are also dead ends. The collection of all dead ends in $G_k$ that fold over $e$ via $f_{k,n}$ is then an invariant subgraph, which must be proper, since no leaf of $\Lambda$ can cross any such edge. This contradicts the assumption that the path is reduced. \end{proof} Hence, legal paths in $G_n$ can be extended; in particular, there are legal loops. By the definition of the metrics $\vec \lambda_n$ and by Lemma \ref{2 gates}, we have that $\varphi_n$ is an optimal map. Conversely, any liberal folding path can be discretized to give a folding/unfolding path. Hence, we use the term folding path to refer to a liberal folding path or a folding/unfolding path. Regard $CV_N \subseteq cv_N$ as the subspace of graphs with volume one; $CV_N$ is called \emph{Outer space} \cite{CV86}. We also consider the images of folding paths in $CV_N$, which are also called folding paths. These paths, which are read left to right, are geodesics and are always parameterized with respect to arc length. The $\epsilon$-\emph{thick part}, denoted $CV_N^\epsilon$, of $CV_N$ consists of those $T \in CV_N$ such that the injectivity radius of $T/F_N$ is at least $\epsilon$; $cv_N^\epsilon$ consists of those trees that are homothetic to a tree in $CV_N^\epsilon$. Let us observe that if $T_t$ is a folding path in $CV_N$ and if $T_t \in CV_N^\epsilon$ for all $t$, then by choosing $t_{n+1}-t_n$ to be bounded, we get a linear folding/unfolding path $G_n=T_{t_n}/F_N$. According to Corollary 2.10 of \cite{AK11}, there is a constant $C=C(\epsilon)$ such that $d$ is $C$-bi-Lipschitz equivalent to $\check{d}(T,U)=d(U,T)$ on $CV_N^\epsilon$. Hence, we also have that $d$ is bi-Lipschitz equivalent to $d_s=\max\{d, \check{d}\}$; $d_s$ is a metric, but it is not geodesic. The action of $Out(F_N)$ on $CV_N^\epsilon$ is co-compact, so both $(CV_N,d)$ and $(CV_N,d_s)$ are quasi-isometric to $Out(F_N)$. \begin{lem}\label{thick distance} There is $B(\epsilon)$ such that if $T \in CV_N$ and $U \in CV_N^\epsilon$, then $d(U,T) \leq B(\epsilon)+ C(\epsilon)d(T,U)$. \end{lem} \begin{proof} Let $\sigma_T, \sigma_U \subseteq CV_N$ be the open simplices containing $T, U$, respectively. Choose $T' \in CV_N^\epsilon \cap \sigma_T$; since $T'$ is $\epsilon$-thick, $d(T',\cdot):\sigma_T \to \mathbb{R}$ is bounded by some constant $A(\epsilon)$. Then one has $d(U,T) \leq d(U,T') +d(T',T) \leq C(\epsilon)d(T',U)+A(\epsilon) \leq C(\epsilon)(d(T',T)+d(T,U))+A(\epsilon)\leq C(\epsilon)(A(\epsilon)+d(T,U))+A(\epsilon)$. \end{proof} If $H \leq F_N$ is finitely generated and non-trivial, then $T_H$ denotes the minimal $H$-invariant subtree of $T$; so $T_H/H$ is the core of the $H$-cover of $T/F_N$, and we have an immersion $T_H/H \to T/F_N$ representing the conjugacy class of $H$.
The graph $T_H/H$ inherits a metric from $T$, and if a legal structure is specified on $T$, then there is an obvious induced legal structure on $T_H/H$. If $g \in F_N$ is non-trivial, then $T_g=T_{\langle g \rangle}$ is well-defined, and the immersion $T_g/g \to T/F_N$ represents the conjugacy class of $g$. The volume of $T_g/g$ is the translation length $l_T(g)$ of the loxodromic isometry $g$. A result of Tad White, see \cite{FM11, AK11}, gives that \[ d(T,U)=\log(\sup_{1 \neq g \in F_N} l_U(g)/l_T(g)) \] \noindent Moreover, the supremum is realized by a conjugacy class whose simplicial length in $T$ is uniformly bounded. Specifically, use $c(T)$ to denote the set of conjugacy classes of non-trivial elements $g \in F_N$ that satisfy: \begin{itemize} \item $T_g/g \to T/F_N$ has image a subgraph of rank at most two, \item a point in the interior of an edge of $T/F_N$ has at most two pre-images in $T_g/g$, and \item the set of edges of $T/F_N$ whose interiors contain a point with two pre-images in $T_g/g$ either is empty or else is topologically a segment, whose interior separates the image of $T_g/g$ into two components, both of which are circles. \end{itemize} The elements of $c(T)$ are called \emph{candidates}. Tad White proved that \[ d(T,U)=\log(\max_{g \in c(T)} l_U(g)/l_T(g)) \] Notice that since $N>2$, the image in $T/F_N$ of the immersion representing any $g \in c(T)$ is contained in a proper subgraph of $T/F_N$. \subsection{The complex of free factors} A \emph{factor} is a conjugacy class of non-trivial proper free factors of $F_N$. The \emph{complex of free factors} is the simplicial complex whose $(k-1)$-simplices are collections of factors $[F^1],\ldots, [F^k]$, which have representatives satisfying, after possibly reordering, $F^1<\ldots<F^k$. Let $\mathcal{FF}$ denote the complex of free factors, and equip $\mathcal{FF}$ with the simplicial metric, denoted $d_{\mathcal{FF}}$. Bestvina and Feighn established the following: \begin{prop}\cite{BF11} The metric space $(\mathcal{FF},d_{\mathcal{FF}})$ is hyperbolic. \end{prop} An essential tool in their approach is a coarsely defined projection map from $cv_N$ to $\mathcal{FF}$. This projection, denoted $\pi$, is defined as follows: $\pi:cv_N \to \mathcal{FF}:T \mapsto \mathcal{F}(T)$, where $\mathcal{F}(T)$ is the set of factors that are represented by subgraphs of $T/F_N$. \begin{prop}\cite{BF11}\label{projection properties} There is a number $K$ such that for any $T,U \in cv_N$, $\text{diam}(\pi(T)\cup \pi(U)) \leq Kd(T,U)+K$. \end{prop} The conclusion of Proposition \ref{projection properties} is that $\pi$ is coarsely well-defined and coarsely Lipschitz in the sense that $\pi(T)$ has bounded diameter and that distances between the sets of images behave like distances under a Lipschitz map. Now suppose that $F'$ is a factor and that $T_t$ is the image in $CV_N$ of a greedy folding path and that this path has been parameterized by arc length. An immersed segment in $T_H/H$ is called \emph{illegal} if it does not contain a legal subsegment of length at least 3. Following \cite{BF14}, define $I=(18m(3N-3)+6)(2N-1)$ and put: \[ \text{left}_{T_t}(F')=\inf \{t \mid (T_t)_{F'}/F' \text{ contains an immersed legal segment of length} > 2 \} \] \[ \text{right}_{T_t}(F')=\sup\{t \mid (T_t)_{F'}/F' \text{ contains an immersed illegal segment of length} \leq I\} \] When the folding path $T_t$ is understood, we use $\text{left}(\cdot)$ to mean $\text{left}_{T_t}(\cdot)$.
When $U \in CV_N$, we define $\text{left}(U)=\min \{\text{left}(F')\mid F' \in \mathcal{F}(U)\}$ and $\text{right}(U)=\max \{\text{right}(F')\mid F' \in \mathcal{F}(U)\}$. The \emph{left projection} of $U$ to $T_t$ is $\text{Left}(U)=T_{\text{left}(U)}$, and the \emph{right projection} of $U$ to $T_t$ is $\text{Right}(U)=T_{\text{right}(U)}$. We will need the following simple observation: \begin{lem}\label{nearest point} Let $U \in CV_N$ and let $T_t \in CV_N$, $t\in [a,b]$, be the image in $CV_N$ of a greedy folding path, and suppose that $T_c$ minimizes $t \mapsto d(U,T_t)$ on $[a,b]$. If $d(U,T_c)$ is large, then $c \in [\text{left}(U),\text{right}(U)]$. \end{lem} The meaning of the word \emph{large} is clear from the following proof; it seems uninformative to explicitly specify the constants involved. \begin{proof} By definition of $\text{left}(U)$, for $t<\text{left}(U)$, no element $g \in c(U)$ has image in $T_t$ containing a legal segment of length at least three, hence the number of illegal turns in $(T_t)_g/g$ is at least $l_{T_t}(g)/2$. By the Derivative Formula \cite[Lemma 4.4]{BF11}, we have that $l_{T_t}(g)>l_{T_{\text{left}(U)}}(g)$. By the definition of $\text{right}(U)$, for $t>\text{right}(U)$, every element $g \in c(U)$ contains a bounded number of illegal segments, so there are at least $l_{T_t}(g)/3I$ disjoint legal segments of length at least 3. Again, by the Derivative Formula \cite[Lemma 4.4]{BF11}, we have that $l_{T_t}(g)>l_{T_{\text{right}(U)}}(g)$. \end{proof} For the remainder of the paper \emph{uniform quasi-geodesic} will mean a quasi-isometric embedding of an interval of $\mathbb{R}$ with a fixed constant. We keep in mind that set-valued maps are acceptable in the quasi-isometric category as long as the image of every point has uniformly bounded diameter. If $(X,d)$ is a metric space, then a \emph{uniform re-parameterized quasi-geodesic} is a function $f:\mathbb{R} \to X$ such that there are $\ldots < n_{-1}<n_0<n_1<\ldots \in \mathbb{R}$ such that $g:\mathbb{Z} \to X:i \mapsto f(n_i)$ is a uniform quasi-geodesic and $f([n_i,n_{i+1}])$ is uniformly bounded. We have the following: \begin{prop}\label{liberal quasi-geodesic} If $T_t$ is a liberal folding path, then $\hat{\pi}(T_t)$ is a reparameterized uniform quasi-geodesic. \end{prop} \begin{proof} Theorem A.12 of \cite{BF14} gives that $T_t$ is a reparameterized uniform quasi-geodesic in the simplicial completion $(\mathcal{S},d_{\mathcal{S}})$ of $CV_N$, equipped with the simplicial metric. The space $\mathcal{S}$ is called the \emph{free splitting complex}. Theorem 1.1 of \cite{KR12} then gives that $\pi(T_t)$ is a reparameterized uniform quasi-geodesic in $\mathcal{FF}$. \end{proof} The following result of Bestvina and Feighn generalizes \cite{AK11}: \begin{prop}\label{strongly contracting}\cite[Corollary 7.3]{BF11} Let $U, V \in CV_N$, and suppose that $T_t$ is the image in $CV_N$ of a greedy folding path and has been parameterized by arc length. Further, suppose that $d(U,T_t)>M$ for all $t$ and that $d(U,V) \leq M$. If $\pi(T_t)$ is a uniform quasi-geodesic, then $\text{right}(U)-\text{left}(U)$ and \[ \sup_{s\in [\text{left}(U),\text{right}(U)], t\in [\text{left}(V),\text{right}(V)]}d(T_s,T_t) \] are uniformly bounded. \end{prop} Notice that it is certainly implied that in this case there is $\epsilon>0$ such that $T_t \in CV_N^\epsilon$. Geodesics satisfying the conclusion of Proposition \ref{strongly contracting} are called \emph{strongly contracting}.
The following is a consequence of the work of Bestvina-Feighn; we are including a proof for the convenience of the reader. \begin{cor}\label{uniformly stable}\cite{BF11} Suppose that $T_t$ is a strongly contracting folding path in $CV_N$. If $T,U \in CV_N$ are such that $d(T,\text{Image}(T_t))$ and $d(U,\text{Image}(T_t))$ are bounded, then any geodesic from $T$ to $U$ lies in a bounded $d_s$-neighborhood of $\text{Image}(T_t)$. \end{cor} \begin{proof} In light of Proposition \ref{strongly contracting} and Lemma \ref{thick distance}, supposing the conclusion is false would produce a contradiction to the triangle inequality. \end{proof} \subsection{The boundary of the factor complex} Additional details about the subject matter of this section can be found in \cite{BR12, R12}. Use $\overline{CV}_N$ to denote the space of homothety classes of elements of $\overline{cv}_N$; this is the usual compactification of Outer space \cite{CV86}. Put $\partial cv_N=\overline{cv}_N \smallsetminus cv_N$ and $\partial CV_N=\overline{CV}_N \smallsetminus CV_N$. Associated to $T \in \partial cv_N$ is an $F_N$-invariant, closed subspace $L(T) \subseteq \partial^2 F_N$ called the \emph{lamination} of $T$ defined as follows: \[ L(T)=\cap_{\delta>0} \overline{\{(g^{-\infty},g^\infty)|l_T(g)< \delta\}} \] Where $g^{\pm \infty}\in \partial F_N$ is $g^{\pm \infty}=\lim_{n \to \pm \infty} g^n1$. Invariance of $L(T)$ follows from the fact that $l_T(\cdot)$ is constant on conjugacy classes. See \cite{CHL08a, CHL08b}. Given $T \in \partial cv_N$ and a factor $F'$, say that $F'$ \emph{reduces} $T$ if there is an $F'$-invariant subtree $Y \subseteq T$ with $vol(Y/F')=0$. This means that the action of $F'$ on $Y$ has a dense orbit; notice that $Y$ could be a point. Define $\mathcal{R}(T)=\{F'|F' \text{ is a factor reducing } T\}$. We use the notation $\mathcal{AT}=\{T \in \partial cv_N|\mathcal{R}(T)=\emptyset\}$, and the elements of $\mathcal{AT}$ are called \emph{arational trees}. It is shown in \cite{R12} that $T \in \mathcal{AT}$ if and only if either $T$ is free and \emph{indecomposable} in the sense of Guirardel \cite{Gui08} or else $T$ is dual to an arational measured foliation on a once-punctured surface. Put a relation $\sim$ on $\mathcal{AT}$ by declaring $T \sim U$ if and only if $L(T)=L(U)$. The boundary $\partial \mathcal{FF}$ of $\mathcal{FF}$ was considered in \cite{BR12}; see also \cite{H12}. We have the following: \begin{prop}\label{delF}\cite[Main Results]{BR12} There is a (closed) quotient map $\partial \pi:\mathcal{AT} \to \partial \mathcal{FF}$ such that $\partial \pi(T)=\partial \pi(U)$ if and only if $T \sim U$. If $T_n \in cv_N$ converge to $T \in \partial cv_N$, then: \begin{enumerate} \item [(i)] If $T \notin \mathcal{AT}$, then $\hat{\pi}(T_n)$ does not converge in $\mathcal{FF}$, and \item [(ii)] If $T \in \mathcal{AT}$, then $\hat{\pi}(T_n)$ converges to $\partial \pi(T)$. \end{enumerate} Furthermore, if $T_n, U_n \in cv_N$ converge to $T,U \in \mathcal{AT}$, if $T \nsim U$, and if $V_t^n$ is a greedy folding path from $\lambda_nT_n$ to $U_n$, then $V_t^n$ converges uniformly on compact subsets to a greedy folding path $V_t$ with $\lim_{t \to -\infty} V_t/\mu_t \sim T$ and $\lim_{t \to \infty} V_t \sim U$, for some constants $\lambda_n$, $\mu_t$. \end{prop} \begin{rem} If $(V_n)_n$ is a reduced unfolding path with $\lim_{n \to -\infty} V_t=T$, then Lemma \ref{length and frequency decay} gives that $\Lambda((V_n)_n) \subseteq L(T)$. 
\end{rem} Recall that $Curr(F_N)$ denotes the space of currents on $F_N$ and that $Curr(F_N)$ is given the weak-* topology. When $g \in F_N$ is non-trivial and when no element of the form $h^k$, $k>1$, belongs to the conjugacy class of $g$, the notation $\eta_g$ is used to mean the sum of all Dirac masses on the distinct translates of $(g^{-\infty},g^\infty)$ in $\partial^2 F_N$. The following useful results were established by Kapovich and Lustig: \begin{prop}\cite[Main Results]{KL09a, KL10d}\label{KL} There is a unique $Out(F_N)$-invariant continuous function $\langle \cdot, \cdot \rangle: \overline{cv}_N \times Curr(F_N) \to \mathbb{R}_{\geq 0}$ that satisfies: \begin{enumerate} \item [(i)] $\langle T, \eta_g\rangle=l_T(g)$, \item [(ii)] $\langle T, \mu \rangle =0$ if and only if $\text{Supp}(\mu)\subseteq L(T)$. \end{enumerate} \end{prop} Let $g \in F_N$ be a primitive element, \emph{i.e.} $\langle g \rangle$ represents a factor. Define $\mathbb{P}M_N=\overline{Out(F_N)[\eta_g]}$, where $[\eta_g]$ is the homothety class of $\eta_g$ in the projective space $\mathbb{P}Curr(F_N)=Curr(F_N)/\mathbb{R}_{\geq 0}$. The subspace $\mathbb{P}M_N$ is the minset of the action of $Out(F_N)$ on $\mathbb{P}Curr(F_N)$ \cite{MarPhD, KL07}. Use $M_N$ to denote the pre-image of $\mathbb{P}M_N$ in $Curr(F_N)$. If $X$ is a topological space, then $X'$ denotes the set of non-isolated points in $X$, $X''=(X')'$, and so on. We have the following, which is the content of Proposition 4.2, Corollary 4.3, and Theorem 4.4 of \cite{BR12}: \begin{prop}\label{uniqueduality}\cite{BR12} Let $T \in \mathcal{AT}$ and $U \in \partial cv_N$. \begin{enumerate} \item [(i)] $L'''(T)$ is perfect, \item [(ii)] If $L'''(T) \subseteq L(U)$, then $L(T)=L(U)$, and \item [(iii)] If $\mu \in M_N$ satisfies $\langle T,\mu \rangle=0$, then $Supp(\mu)=L'''(T)$. \end{enumerate} In particular, if $\langle T, \mu \rangle = 0 = \langle U, \mu \rangle$ for some $\mu \in M_N$, then for any $\eta \in M_N$, one has that $\langle T, \eta \rangle=0$ if and only if $\langle U, \eta \rangle=0$, and $U \in \mathcal{AT}$. \end{prop} If $L_n$ and $L$ are algebraic laminations, say that the sequence $L_n$ \emph{super-converges} to $L$ if for any $l \in L$, there is a sequence $l_n \in L_n$ such that $l_n$ converges to $l$. It is immediate from the definition of $L(\cdot)$ and the definition of the topology on $\overline{cv}_N$ that for $T_n,T \in \partial cv_N$, if $T_n$ converge to $T$, then $L(T_n)$ super-converges to $L(T)$. For $T \in \mathcal{AT}$ define $\Lambda(T)=L'''(T)$, and use the notation $\mathcal{AL}=\{\Lambda(T)|T \in \mathcal{AT}\}$. Call the topology induced by the surjection $\mathcal{AT} \to \mathcal{AL}$ the \emph{super-convergence topology}. A rephrasing of the first statement of Proposition \ref{delF} is: \begin{cor}\label{delF2} $\partial \mathcal{FF} \cong \mathcal{AL}$ \end{cor} Define $\mathcal{UET}=\{T \in \mathcal{AT}|L(T)=L(T') \text{ implies } T'=\alpha T, \alpha \in \mathbb{R}\}$ and $\mathcal{UEL}=\{\Lambda \in \mathcal{AL}\mid\text{Supp}(\eta)=\Lambda=\text{Supp}(\eta'), \ \eta, \eta' \in M_N, \text{ implies } \eta'=\alpha\eta, \alpha \in \mathbb{R}\}$. Finally set $\mathcal{UE}=\mathcal{UET} \cap \{T |\Lambda(T) \in \mathcal{UEL}\}$. \subsection{Limit Sets of Certain Subgroups of $Out(F_N)$} If $H \leq Out(F_N)$, we use $\text{Limit}(H) \subseteq \partial cv_N$ to mean representatives of accumulation points of $x_0H$ in $\overline{CV}_N$, where $x_0$ is any base point. \begin{thm}\label{Limit Set} Let $H \leq Out(F_N)$.
If the orbit map $H \to \mathcal{FF}:h \mapsto h\pi(x_0)$ is a quasi-isometric embedding, then $\text{Limit}(H)\subseteq \mathcal{UE}$. \end{thm} \begin{proof} First notice that the conclusion of Corollary \ref{uniformly stable} also holds for uniform quasi-geodesics from $T$ to $U$. It follows that geodesics between points in $x_0H$ project to uniform quasi-geodesics in $\mathcal{FF}$. Now apply Proposition \ref{delF} to get that bi-infinite geodesics arising as limits of geodesics between points in $x_0H$ also project to uniform quasi-geodesics in $\mathcal{FF}$. Applying Proposition \ref{delF} again, we get that for any $U \in \text{Limit}(H)$, there are geodesics $T_t, \check{T}_t$ with $\lim_{t \to \infty} T_t \sim U$ and $\lim_{t \to -\infty}\check{T}_t \sim U$. Now apply Theorem \ref{slow progress in FF for foldings} to get that $\lim_{t\to \infty} T_t \in \mathcal{UET}$, which implies that $\lim_{t \to \infty} T_t=U$ and $\lim_{t \to -\infty} \check{T}_t=U$. Now apply Theorem \ref{slow progress in FF for unfoldings} to get that $\Lambda(\lim_{t \to -\infty} \check{T}_t) \in \mathcal{UEL}$. \end{proof} \subsection{Random walks on $Out(F_N)$} The goal of this section is to combine our results about folding/unfolding paths and our background material with recent work of Tiozzo \cite{Tio14} and Maher-Tiozzo \cite{MT14} to study random walks on $Out(F_N)$. Our main application is the following: \begin{thm}\label{Poisson boundary} Let $\mu$ be a non-elementary distribution on $Out(F_N)$ with finite first moment with respect to $d_s$. Almost every sample path for the corresponding random walk converges to a point $T \in \partial cv_N$, and the corresponding harmonic measure $\nu$ is concentrated on the subspace of uniquely ergodic and dually uniquely ergodic arational trees and is the Poisson boundary. Furthermore, for almost every sample path $(w_n)_n$, there is a Lipschitz geodesic $T_t$ with image $\gamma$, such that \[ \lim d_s(w_n0, \gamma)/n=0 \] \end{thm} First, we define the framework, following the discussion in \cite{MT14}. We use $\Gamma$ to denote a finitely generated discrete group. A \emph{distribution} on $\Gamma$ is a positive vector $\mu \in L^1(\Gamma)$ of norm one. The associated \emph{step space} is $\Omega=(\Gamma^\mathbb{Z},\mu^\mathbb{Z})$, and we use $\mathbb{P}$ to denote the product measure $\mathbb{P}=\mu^\mathbb{Z}$. The \emph{shift map} on $\Omega$ is $S:\Omega \to \Omega:(g_n) \mapsto (g_{n+1})$; $S$ is ergodic. Given a $\Gamma$-space $X$ and base point $x_0 \in X$, one obtains a discrete random variable with values in $X$ via $\Gamma^\mathbb{Z} \ni (g_n) \mapsto w_nx_0$, which is called a \emph{random walk} on $X$; we consider the push forward of $\mathbb{P}$ on the orbit $\Gamma x_0$. The \emph{location} $w_n$ of the random walk at time $n$ is: \[ w_n = \left\{ \begin{array}{ll} g_1g_2\ldots g_n & \quad n>0 \\ 1 & \quad n=0 \\ g_{0}^{-1}g_{-1}^{-1}\ldots g_{n}^{-1} & \quad n<0\\ \end{array} \right. \] For negative time, the behavior of paths is guided by the \emph{reflected measure} $\check{\mu}(g)=\mu(g^{-1})$. The sequences $(w_n)_{n>0}$ and $(w_n)_{n<0}$ are called \emph{unilateral paths} and are governed by the measures $\mu$ and $\check{\mu}$ respectively; we also say that $(w_n)_{n>0}$, \emph{resp.} $(w_n)_{n<0}$, is a \emph{sample path} for $\mu$, \emph{resp.} $\check{\mu}$. When we use the term sample path without reference to a measure $\mu$ or $\check{\mu}$, we shall mean a unilateral path $(w_n)_{n>0}$.
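To make this construction concrete, the following is a minimal simulation sketch of our own; it is purely illustrative and plays no role in the arguments below. The free group $F_2$ represented by reduced words, and the finitely supported distribution in the sketch, are hypothetical stand-ins for $\Gamma$ and $\mu$.
\begin{verbatim}
import random

# Elements of F_2 = <a, b> as reduced words over {'a','A','b','B'}, 'A' = a^{-1}.
def reduce_word(w):
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()              # cancel adjacent inverse letters
        else:
            out.append(x)
    return ''.join(out)

def multiply(u, v):
    return reduce_word(u + v)

def inverse(u):
    return ''.join(x.swapcase() for x in reversed(u))

# A finitely supported distribution mu: support and weights (summing to 1).
support = ['a', 'A', 'b', 'B', 'ab']
weights = [0.25, 0.25, 0.2, 0.2, 0.1]

def sample_path(n, seed=0):
    """Locations w_1, ..., w_n of the walk, w_k = g_1 g_2 ... g_k with g_i ~ mu.
    (For negative times one would instead multiply by inverse(g) with g ~ mu,
    which corresponds to the reflected measure.)"""
    rng = random.Random(seed)
    w, path = '', []
    for _ in range(n):
        g = rng.choices(support, weights=weights)[0]
        w = multiply(w, g)
        path.append(w if w else '1')   # '1' denotes the identity element
    return path

print(sample_path(10))
\end{verbatim}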
The paths $(w_n)$ are called \emph{bilateral paths}. The notation $\textbf{w}$ may be used for a sample path. When $(X,\rho)$ is a $\Gamma$-metric space with base point $0$, one can consider the average step size, which is the first moment of $\mu$ with respect to $\rho$: \[ \sum_{g \in \Gamma}\mu(g)\rho(0,g0) \] If $Y$ is a topological space and if $\Gamma$ has a Borel action on $Y$, then $\Gamma$ acts on the space of Borel probability measures $\mathcal{M}(Y)$ via $g\nu(A)=\nu(g^{-1}(A))$. The space $\mathcal{M}(Y)$ gets the weak-* topology, and it is a standard fact that compactness of $Y$ implies compactness of $\mathcal{M}(Y)$. The distribution $\mu$ gives rise to the convolution operator on $\mathcal{M}(Y)$: for $\nu \in \mathcal{M}(Y)$ \[ \mu \ast \nu(A)=\sum_{g \in \Gamma}\mu(g)g\nu(A) \] A measure $\nu$ on $Y$ is called $\mu$-\emph{stationary} if it is a fixed point of the convolution operator. Use $\mu^{(n)} \ast \nu$ to denote the $n$-fold application of the convolution to $\nu$. A measure $\nu \in \mathcal{M}(Y)$ is called a $\mu$-\emph{boundary} if $\nu$ is stationary and for almost every sample path $(w_n)$, one has that $\lim_n w_n\nu$ is a Dirac mass. The maximal $\mu$-boundary is called the \emph{Poisson boundary}. \begin{lem}\label{AT Borel} $\mathcal{AT}$ is a Borel set. \end{lem} \begin{proof} Let $\{F^1,F^2,\ldots,F^k,\ldots\}$ be the list of all factors; use $\mathcal{R}_k=\{T \in \partial cv_N|\exists F' \in \mathcal{R}(T), F' \leq F^k\}$. By definition, $\mathcal{AT}=\cap_k (\partial cv_N \smallsetminus \mathcal{R}_k)$. We will finish by showing that $\mathcal{R}_k$ is closed. Note that $\partial cv_N$ is separable and metrizable. Suppose that $T_j \in \mathcal{R}_k$ converge to $T \in \partial cv_N$. If $F^k$ fixes a point in a subsequence of the $T_j$, then by definition of the topology on $\overline{cv}_N$, $F^k$ fixes a point in $T$ as well. Hence, we assume that $(T_j)_{F^k}$ is not a point. By the definition of $\mathcal{R}_k$, we have that $(T_j)_{F^k} \in \partial cv(F^k)$, where $\overline{cv}(F^k)$ is the space of very small $F^k$-trees. Further, notice that the definition of the topology of $\overline{cv}_N$ ensures that the restriction map $T \mapsto T_{F^k}$ is continuous. The fact that $\mathcal{R}_k$ is closed then follows from the fact that $\partial cv(F^k)$ is closed. \end{proof} For $T \in \mathcal{AT}$, use $[T]$ to denote the $\sim$-class of $T$. Define $B =\mathcal{AT} \times \mathcal{AT} \smallsetminus \text{Graph}(\sim)$. For $(T,U) \in B$, define $\mathbb G(T,U)=\mathbb G([T],[U])$ to be the set of all geodesics $G_t$ in $CV_N$ with $\lim_{t \to -\infty} G_t \in [T]$ and $\lim_{t \to \infty} G_t \in [U]$. By Proposition \ref{delF}, $\mathbb G(T,U) \neq \emptyset$. If $G_t \in \mathbb{G}(T,U)$, use $G$ for the image of $G_t$. Fix a base point $x_0 \in cv_N$. Define a function $\Phi:B \to \mathbb{R}$ by \[ \Phi(T,U)=\sup_{G_t\in \mathbb G(T,U)} d_s(x_0,G) \] \begin{lem} $\Phi(T,U)$ is defined for every $(T,U) \in B$. \end{lem} \begin{proof} Let $G_t^n \in \mathbb{G}(T,U)$ be a sequence. According to \cite[Corollary 6.8]{BR12}, we have that the sequence of folding paths $G_t^n|_{[-n,n]}$ must accumulate on a point $y \in cv_N$. It follows that $\Phi(T,U)$ is defined. \end{proof} \begin{lem}\label{phi is measurable} $\Phi$ is a Borel function. \end{lem} \begin{proof} Suppose $(T_i,U_i) \in B$ converge to $(T,U) \in B$. Choose $s_i \to -\infty$ and $t_i \to \infty$ and apply Proposition \ref{delF} to greedy folding paths $G_t^i$, $s_i \leq t \leq t_i$.
After passing to a subsequence we get uniform convergence on compact sets to $G_t \in \mathbb{G}(T,U)$. By definition of $\Phi$, we then have that $\lim_i \Phi(T_i,U_i) \leq \Phi(T,U)$, and so $\Phi$ is Borel. \end{proof} If $\Gamma$ acts on a hyperbolic space $X$, then we say that a distribution $\mu$ is \emph{non-elementary} (relative to the $\Gamma$-action on $X$) if the semigroup generated by the support of $\mu$ does not fix a finite subset of $X \cup \partial X$. Notice that $\mu$ is non-elementary if and only if $\check{\mu}$ is non-elementary. \begin{rem} By the subgroup classification theorem of Handel-Mosher \cite{HM09} and Horbez \cite{Hor14a}, one has that a distribution $\mu$ on $Out(F_N)$ is non-elementary with respect to the action on $\mathcal{FF}$ if and only if \end{rem} Recently, Maher and Tiozzo have shown the following: \begin{prop}\cite[Theorems 1.1, 1.2]{MT14}\label{Tiozzo Maher} Let $\Gamma$ be a group that acts on a separable hyperbolic space $(X,d_X)$ with base point $0$. Assume that $\mu$ is a non-elementary distribution on $\Gamma$ with finite first moment. \begin{enumerate} \item [(i)] For almost every sample path $(w_n)$, one has that $w_n0$ converges to a point of $\partial X$, and the corresponding hitting measure on $\partial X$ is non-atomic and is the unique $\mu$-stationary measure. \item [(ii)] There is $L_0>0$ such that for almost every sample path $(w_n)$, one has \[ \lim_n d_X(0,w_n0)/n=L_0 \] \end{enumerate} \end{prop} We will also need the following simple lemma, which is immediate from the triangle inequality and Kingman's Subadditive Ergodic Theorem: \begin{lem}\label{escape rate} Let $\mu$ be a distribution on $\Gamma$, and let $(X,d_X)$ be a $\Gamma$-metric space with base point $0$. If $\mu$ has finite first moment, then there is $L_1$ such that for almost every sample path $(w_n)$, one has \[ \lim_n d_X(0,w_n0)/n=L_1 \] \end{lem} The following is established by Tiozzo: \begin{lem}\label{Tiozzo lemma}\cite[Lemma 7]{Tio14} Let $\Omega$ be a measure space with a probability measure $\lambda$, and let $T : \Omega \to \Omega$ be a measure-preserving, ergodic transformation. Let $f:\Omega \to \mathbb{R}$ be a non-negative, measurable function, and define the function $g : \Omega \to \mathbb{R}$ as \[ g(\omega) := f (T \omega) - f (\omega) \quad \forall \omega \in \Omega \] If $g \in L^1(\Omega, \lambda)$, then, for $\lambda$-almost every $\omega \in \Omega$, one has \[ \lim f(T^n \omega)/n=0 \] \end{lem} \begin{thm}\label{mu boundary} Let $\mu$ be a non-elementary distribution on $Out(F_N)$ with finite first moment with respect to $d_s$. Then \begin{enumerate} \item [(i)] There exists a unique $\mu$-stationary probability measure $\nu$ on the space $\partial CV_N$, which is purely non-atomic and concentrated on $\mathcal{UE}$; the measure space $(\partial CV_N, \nu)$ is a $\mu$-boundary, and \item [(ii)] For almost every sample path $(w_n)$, $w_n0$ converges in $\overline{CV}_N$ to $w_\infty \in \mathcal{UE}$, and $\nu$ is the corresponding hitting measure. \end{enumerate} \end{thm} \begin{proof} Notice that finite first moment with respect to $d_s$ certainly gives finite first moment with respect to $d_{\mathcal{FF}}$ by Proposition \ref{projection properties}. According to Proposition \ref{Tiozzo Maher}, there is a unique $\mu$-stationary, \emph{resp.} $\check{\mu}$-stationary, purely non-atomic measure $\widetilde \nu_+$, \emph{resp.} $\widetilde \nu_{-}$, on $\partial \mathcal{FF}$.
By Proposition \ref{delF}, we have that $\partial \mathcal{FF} \cong \mathcal{AT}/\sim$; in particular, $\mathcal{AT} \ni T \mapsto [T] \in \partial \mathcal{FF}$ is continuous. Further, by Lemma \ref{AT Borel}, we have that $\mathcal{AT}$ is measurable. Let $\nu_+$, \emph{resp.} $\nu_{-}$, be a $\mu$-stationary, \emph{resp.} $\check{\mu}$-stationary, measure on $\partial CV_N$; the existence of $\nu_+$, $\nu_{-}$ is guaranteed by compactness of $\partial CV_N$; see, for example, \cite[Lemma 2.2.1]{KM96}. By the previous paragraph, $\nu_+$, $\nu_{-}$ must be concentrated on $\mathcal{AT}$, and $(\partial \mathcal{FF}, \widetilde{\nu}_+)$, \emph{resp.} $(\partial \mathcal{FF}, \widetilde \nu_{-})$ is a quotient space of $(\partial CV_N, \nu_+)$, \emph{resp.} $(\partial CV_N, \nu_{-})$. It follows that $\nu_+,\nu_{-}$ are non-atomic. We will show that these quotients are in fact isomorphisms by showing that $\nu_+$, $\nu_{-}$ are concentrated on $\mathcal{UE}$. By \cite[Lemma 2.2.3]{KM96} for almost every sample path $(w_n)$ (corresponding to $\textbf{g}$), the weak-* limits $\lim_{n \to \pm \infty} w_n\nu=\lambda(w_{\pm \infty})$ exist, and by Proposition \ref{Tiozzo Maher}, $\lambda(w_{\pm \infty})$ are each concentrated on a single $\sim$-class in $\mathcal{AT}$, which we denote by $\textbf{bnd}_{\pm}(\textbf{g})$. \begin{claim} For almost every sample path $(w_n)$ there is a Lipschitz geodesic $T_t$ with endpoints $T,U \in \mathcal{AT}$ such that \[ \lim d_s(w_n0,T_{L_1n})/n=0 \] \end{claim} The proof follows \emph{verbatim} Tiozzo's proof of \cite[Theorem 6]{Tio14}; in particular, Tiozzo never uses that $\partial X$ in the statement of Theorem 6 is in any way related to $X$. Tiozzo's proof is included for the convenience of the reader. \begin{proof}\cite{Tio14} We apply Lemma \ref{Tiozzo lemma} to the space of increments with the shift operator and with $f$ defined by \[ f(\textbf{g}):=\Phi(\textbf{bnd}_{-}(\textbf g),\textbf{bnd}_+(\textbf g)) \] For $\textbf{g} \in Out(F_N)^\mathbb{Z}$, we have: \[ f(S^k\textbf{g})=\sup_{G \in \mathbb G(\textbf{bnd}_-(S^k\textbf g),\textbf{bnd}_+(S^k\textbf g))} d_s(0,G) \] It is easy to see that $\textbf{bnd}_{\pm}(S^k \textbf g)=w_k^{-1}\textbf{bnd}_{\pm}(\textbf g)$, so \[ f(S^k \textbf g)=\sup_{G \in \mathbb G(\textbf{bnd}_-(\textbf g),\textbf{bnd}_+(\textbf g))} d_s(w_k0,G) \] It is evident from the definition that $\mathbb G(\cdot, \cdot)$ is equivariant, hence so is $\Phi$. Hence, we have that $|g(\textbf g)| \leq d_s(0,g_1^{-1}0)$ and the first moment assumption gives that $g \in L^1(Out(F_N)^\mathbb{Z},\mathbb P)$. Hence, Lemma \ref{Tiozzo lemma} gives \[ \lim f(S^n \textbf g)/n=0 \] almost surely. This certainly means that for any $G \in \mathbb G(\textbf{bnd}_-(\textbf g),\textbf{bnd}_+(\textbf g))$ we have that \[ \lim_n d_s(w_n0,G)/n=0 \] Suppose that $G$ is the image of $T_t$; then we have times $t_n$ such that \[ \lim_n d_s(w_n0,T_{t_n})/n=0 \] By Lemma \ref{escape rate}, we must have that $\lim_n t_n/n=\pm L_1$. \end{proof} Recall that from Proposition \ref{Tiozzo Maher} we have $L_0>0$ such that almost surely $\lim d_{\mathcal{FF}}(0,w_n0)/n=L_0$. Combining this with Lemma \ref{escape rate}, we have $L_1>0$ such that $\lim d_s(0,w_n0)/n=L_1$. By the triangle inequality and Proposition \ref{projection properties}, we have \[ d_s(0,T_{L_1n}) \geq d_s(0,w_n0)-d_s(T_{L_1n},w_n0) \] So \[ \lim d_{\mathcal{FF}}(\pi(0),\pi(T_{L_1n}))/n = L_0 \] In particular, the positive ray of $T_t$ projects quasi-isometrically to $\mathcal{FF}$.
If we were to assume that $U=\textbf{bnd}_+(\textbf g) \notin \mathcal{UET}$, then we would obtain a contradiction from Theorem \ref{slow progress in FF for foldings}. By applying exactly the same string of arguments to $\textbf{bnd}_-(\textbf g)$, except using Theorem \ref{slow progress in FF for unfoldings} in place of Theorem \ref{slow progress in FF for foldings}, we get that $\Lambda(T) \in \mathcal{UEL}$. Hence, we have established that $\nu_{\pm}$ are concentrated on $\mathcal{UE}$. Hence, the map $\partial \pi$ provides an isomorphism of $(\partial CV_N, \nu_{\pm})$ and $(\mathcal{AT}/\sim, \widetilde{\nu}_{\pm})$. \begin{claim}\label{convergence in cv} For almost every sample path, one has that $w_n0$ converges in $\overline{CV}_N$. \end{claim} \begin{proof} We have that $w_n0$ tracks sublinearly along a geodesic $T_t$ that converges to $T \in \mathcal{UET}$. Observe that $T_t/e^t$ converges in $\overline{cv}_N$. For a conjugacy class $g$, use $|g|$ to denote its length in $T_0$. Let $g_n$ be an embedded loop in $T_n$; then $\eta_{g_n}/|g_n|$ converges in $Curr(F_N)$ to $\eta \in M_N$ with $\langle T, \eta \rangle =0$; by Proposition \ref{uniqueduality}, we have that $Supp(\eta)=\Lambda(T)$. If $U \in \partial cv_N$ satisfies $\langle U, \eta \rangle =0$, then by Proposition \ref{KL}, we have that $\Lambda(T) \subseteq L(U)$. Again applying Proposition \ref{uniqueduality}, we have that $L(U)=L(T)$, and since $T \in \mathcal{UET}$, we get that $U=T$. Hence, we will finish by showing that $\lim_{n \to \infty} \langle w_n0, \eta \rangle=0$. Set $U_n=w_n0$. We have $\langle U_n, \eta_{g_n} \rangle \leq \langle T_n, \eta_{g_n} \rangle e^{d(T_n,U_n)}$. Further, $d(T_0,T_n) \leq d(T_0,U_n)+d(U_n,T_n) \leq d(T_0, U_n) + C(\epsilon) d(T_n,U_n) +B(\epsilon)$ by Lemma \ref{thick distance}. Hence, \begin{align*} \frac{\langle U_n, \eta_{g_n} \rangle}{|g_n|e^{d(T_0,U_n)}} & \leq \frac{\langle U_n, \eta_{g_n} \rangle}{|g_n|e^{d(T_0,T_n)- C(\epsilon)d(T_n,U_n)-B(\epsilon)}} \\ \, & \leq \frac{\langle T_n, \eta_{g_n} \rangle e^{d(T_n,U_n)}}{|g_n|e^{d(T_0,T_n)-C(\epsilon)d(T_n,U_n)-B(\epsilon)}} \\ \, & = \frac{\langle T_n, \eta_{g_n} \rangle}{|g_n|e^{d(T_0,T_n)-o(n)}} \to 0 \end{align*} \end{proof} \end{proof} We get: \begin{thm} Let $\mu$ be a non-elementary distribution on $Out(F_N)$ with finite first moment with respect to the word metric. The unique $\mu$-stationary measure $\nu$ on $\partial CV_N$ is the Poisson boundary. \end{thm} \begin{proof} Since Outer space has exponentially bounded growth, Theorem \ref{mu boundary} gives that the hypotheses of the Ray Criterion of Kaimanovich are satisfied \cite{Kai85}. \end{proof} \begin{rem} We make two observations: \begin{enumerate} \item [(i)] We had the option as well to check the Strip criterion. Indeed, from what we have shown and the contraction results of \cite{BF11}, one has that $\mathbb{G}(T,U)$ is contained in a bounded neighborhood of any of its elements for generic $(T,U)$, and the construction of strips from \cite{KM96} can be carried out. \item [(ii)] The subspace of $\mathcal{AT}$ consisting of trees that are dual to a foliation on a surface admits a countable partition $\{Z_i\}$ that satisfies $Z_i\Phi=Z_i$ or $Z_i\Phi \cap Z_i=\emptyset$ for any $\Phi \in Out(F_N)$; indeed, each tree is contained in a copy of $\mathcal{PML}(S)$ in $\partial CV_N$, where $S$ is a once-punctured surface with fundamental group $F_N$.
Then \cite[Lemma 2.2.2]{KM96} gives that either $\nu$ is supported on a finite union of these $\mathcal{PML}(S)$'s or else $\nu$ is concentrated on trees that are free and indecomposable. \end{enumerate} \end{rem} {\sc \tiny \noindent Hossein Namazi, Department of Mathematics, University of Texas\\ Alexandra Pettet, Department of Mathematics, University of British Columbia\\ Patrick Reynolds, Department of Mathematics, Miami University} \end{document}
Lexicographic Order forms Well-Ordering on Ordered Pairs of Ordinals

The lexicographic order $\operatorname{Le}$ is a strict well-ordering on $\left({\operatorname{On} \times \operatorname{On}}\right)$. This is an instance of Finite Lexicographic Order on Well-Ordered Sets is Well-Ordering.

Total Ordering

Suppose $\left({x, y}\right) \operatorname{Le} \left({x, y}\right)$. Then $x < x \lor \left({x = x \land y < y}\right)$. Both disjuncts are contradictory, so $\operatorname{Le}$ is irreflexive. $\Box$

Now suppose $\left({\alpha, \beta}\right) \operatorname{Le} \left({\gamma, \delta}\right)$ and $\left({\gamma, \delta}\right) \operatorname{Le} \left({\epsilon, \zeta}\right)$. There are two cases:

Let $\alpha < \gamma$. Then $\alpha < \epsilon$, so $\left({\alpha, \beta}\right) \operatorname{Le} \left({\epsilon, \zeta}\right)$.

Let $\alpha = \gamma$. Then $\beta < \delta$, and also $\alpha < \epsilon \lor \left({\alpha = \epsilon \land \delta < \zeta}\right)$. In the first case $\left({\alpha, \beta}\right) \operatorname{Le} \left({\epsilon, \zeta}\right)$ at once; in the second case $\beta < \delta < \zeta$, so $\left({\alpha = \epsilon \land \beta < \zeta}\right)$ and again $\left({\alpha, \beta}\right) \operatorname{Le} \left({\epsilon, \zeta}\right)$.

In either case $\left({\alpha, \beta}\right) \operatorname{Le} \left({\epsilon, \zeta}\right)$, so $\operatorname{Le}$ is transitive. So $\operatorname{Le}$ is a strict ordering.

Strict Total Ordering

Suppose $\neg \left({\alpha, \beta}\right) \operatorname{Le} \left({\gamma, \delta}\right)$ and $\neg \left({\gamma, \delta}\right) \operatorname{Le} \left({\alpha, \beta}\right)$. Then $\neg \alpha < \gamma$ and $\neg \gamma < \alpha$, so $\alpha = \gamma$. Similarly, $\neg \beta < \delta$ and $\neg \delta < \beta$, so $\beta = \delta$. By Equality of Ordered Pairs: $\left({\alpha, \beta}\right) = \left({\gamma, \delta}\right)$. Therefore $\operatorname{Le}$ is a strict total ordering.

Well-Ordering

Let $A$ be a nonempty subset of $\left({\operatorname{On} \times \operatorname{On}}\right)$. (In fact $A$ may be any class; this is not strictly necessary, but it does not alter the proof.) Let the mapping $1^{st}$ send each ordered pair $\left({x, y}\right)$ to its first member $x$: $\displaystyle 1^{st} = \left\{ {\left({\left({x, y}\right), z}\right): z = x}\right\}$. Then $1^{st}: A \to \operatorname{On}$ is a mapping. Take $\operatorname{Im}\left({A}\right)$, the image of $A$ under $1^{st}$. $\operatorname{Im}\left({A}\right) \subseteq \operatorname{On}$, so by Subset of Ordinals has Minimal Element, $\operatorname{Im}\left({A}\right)$ has a minimal element; let this minimal element be $\alpha$. Let $B = \left\{ {y \in \operatorname{On} : \left({\alpha, y}\right) \in A}\right\}$. Since $\alpha$ is an element of $\operatorname{Im}\left({A}\right)$, we have $\left({\alpha, y}\right) \in A$ for some $y \in \operatorname{On}$. Therefore $B$ is nonempty. Furthermore, $B$ is a subset of the ordinals, so by Subset of Ordinals has Minimal Element it follows that $B$ has a minimal element; let this minimal element be $\beta$. Then $\left({\alpha, \beta}\right) \in A$. Suppose there is some element $\left({\gamma, \delta}\right)$ of $A$ such that $\left({\gamma, \delta}\right) \operatorname{Le} \left({\alpha, \beta}\right)$. Then $\gamma \le \alpha$. But for all ordered pairs in $A$, $\alpha$ is a minimal first element. Therefore $\gamma = \alpha$. But this implies that $\delta < \beta$ and $\left({\alpha, \delta}\right) \in A$.
This contradicts the fact that $\beta$ is the minimal element satisfying $\left({\alpha, \beta}\right) \in A$. From this contradiction it follows that $\left({\alpha, \beta}\right)$ is the $\operatorname{Le}$-minimal element of $A$. Hence every nonempty subset of $\left({\operatorname{On} \times \operatorname{On}}\right)$ has an $\operatorname{Le}$-minimal element, so $\operatorname{Le}$ is a strict well-ordering. $\blacksquare$

Sources: 1971: Gaisi Takeuti and Wilson M. Zaring: Introduction to Axiomatic Set Theory: $\S 7.54 \ (1)$
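As a quick computational sanity check of our own (illustrative only; natural numbers stand in for finite ordinals), the $\operatorname{Le}$-minimal element of a finite set of pairs can be found exactly as in the proof: first minimise the first coordinate, then the second coordinate among the pairs attaining that minimum.

def le(p, q):
    """(x, y) Le (z, w) iff x < z, or x = z and y < w."""
    (x, y), (z, w) = p, q
    return x < z or (x == z and y < w)

def le_minimal(pairs):
    """Le-minimal element of a nonempty finite set of pairs, mirroring the proof."""
    alpha = min(x for x, _ in pairs)               # minimal element of Im(A) under 1st
    beta = min(y for x, y in pairs if x == alpha)  # minimal y with (alpha, y) in A
    return (alpha, beta)

A = {(3, 7), (1, 9), (1, 2), (4, 0)}
m = le_minimal(A)
assert all(p == m or le(m, p) for p in A)
print(m)  # (1, 2)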
Milk protein fractions moderately extend the duration of satiety compared with carbohydrates independently of their digestive kinetics in overweight subjects
British Journal of Nutrition, Volume 112, Issue 4, 28 August 2014, pp. 557-564
Agnès Marsset-Baglieri (a1) (a2), Gilles Fromentin (a1) (a2), Gheorghe Airinei (a1) (a2), Camilla Pedersen (a1) (a2), Joëlle Léonil (a3), Julien Piedcoq (a1) (a2), Didier Rémond (a4) (a5), Robert Benamouzig (a1) (a2), Daniel Tomé (a1) (a2) and Claire Gaudichon (a1) (a2)
(a1) INRA, CRNH-IdF, UMR 914 Nutrition Physiology and Ingestive Behavior, F-75005 Paris, France
(a2) AgroParisTech, CRNH-IdF, UMR 914 Nutrition Physiology and Ingestive Behavior, 16 rue Claude Bernard, F-75005 Paris, France
(a3) INRA, AgroCampus Ouest, UMR STLO, F-35000 Rennes, France
(a4) INRA, UMR 1019 Nutrition Humaine, F-63122 Saint Genès, Champanelle, France
(a5) Univ Clermont 1, UFR Médecine, UMR 1019 Nutrition Humaine, F-63000 Clermont-Ferrand, France
Copyright: © The Authors 2014. Published online by Cambridge University Press: 23 June 2014.
Table 1: Characteristics of the study subjects (mean values and standard deviations).
Fig. 1: Experimental design. Eighty-two subjects, divided into three groups (casein (CAS), whey protein (WP) and an equal mix of the two (MIX)), followed two satiety tests after ingestion of a control snack (without protein) on day 1 (D1) and a protein snack on day 7 (D7). A subgroup of twenty-four subjects underwent a digestive and metabolic exploration after ingesting the protein snack on day 10 (D10).
Fig. 2: Flow rates of dietary nitrogen in the jejunum (a) and dietary amino acid appearance in the plasma (b) in a subgroup of twenty-four subjects, after ingestion of a protein snack containing 15N-labelled milk proteins (casein, whey protein and an equal mix of the two).
Values are means (n 8 per group), with their standard errors represented by vertical bars. * Time point at which a group effect was observed (P < 0·05). The effects of group, time and interaction were tested in a mixed model with time as a repeated factor. There were significant effects observed for time (P < 0·0001, for both (a) and (b)) and the time × group interaction ((a) P = 0·001 and (b) P = 0·006).
Fig. 3: Feelings of hunger (a) and fullness (b) after ingestion of the control (n 82) or protein (n 26–28 per group) snack. The snack was given at t = 0 min. Values are means, with their standard errors represented by vertical bars. There was no significant effect observed for snack (protein v. control) or the type of protein snack (casein, whey protein and an equal mix of the two; mixed model with protein as a factor and time as a repeated factor).
Fig. 4: Effects of the protein snack v. the control carbohydrate snack on the time period elapsing before the request for lunch in all subjects (n 82) (a) and in early eaters (n 41) (b). The satiating effect of the protein snack is represented whatever the type of protein snack as well as in each protein group (intention-to-treat, n 26–28; early eaters, n 10–16). Data are expressed as the difference between the values obtained after ingestion of the protein snack and the control snack, respectively. Early and late eaters were split on the basis of the time period elapsing after ingesting the control snack (cut-off point: median 126 min). Values are means, with their standard errors represented by vertical bars. Mean value was significantly different from 0 min: * P ≤ 0·05, ** P ≤ 0·01, *** P ≤ 0·001. There was no effect observed for the protein group (ANOVA with group as a factor). CAS, casein; MIX, equal mix of casein and whey protein; WP, whey protein.
Digestive kinetics are believed to modulate satiety through the modulation of nutrient delivery. We hypothesised that the duration of satiety could be extended by modulating the kinetics of dietary amino acid delivery in overweight subjects, using snacks containing casein and whey protein. In the present study, eighty-two subjects underwent a first satiety test where they received a control snack containing 60 g maltodextrin. For the next 5 d, the subjects consumed a liquid protein snack containing 30 g carbohydrates and 30 g proteins (casein, whey protein or an equal mix of the two; n 26–28 per group). The subjects then underwent a second satiety test after ingesting the protein snack. The time period elapsing between the snack and request for lunch, food intake at lunch and satiety scores were recorded. A subgroup of twenty-four subjects underwent a digestive and metabolic investigation after ingesting their protein snack. Gastric emptying times were 2·5, 4 and 6 h for whey protein, mix and casein, respectively, leading to different kinetics of appearance of dietary N in plasma but without affecting pancreatic and gastrointestinal hormones. Compared with the control snack, proteins extended the duration of satiety (+17 min, P= 0·02), with no difference between the protein groups. The satiating effect of proteins was greater in subjects who ate their lunch early after the snack (below the median value, i.e. 2 h) at the control test (+32 min, P= 0·001). Energy intake at lunch was not modulated by proteins. The satiating effect of proteins is efficient in overweight subjects, especially when the duration of satiety is short, but independently of their digestive and plasma amino acid kinetics. Dietary proteins are frequently claimed to reduce appetite and, in turn, food intake during subsequent meals beyond what can be accounted for by their energy content in humans( 1 – 4 ). The mechanisms underlying these protein-induced satiety effects, involving different and complex pathways that include both indirect vagus-mediated signals and the direct sensing of blood amino acids (AA), nutrients and hormones by specific brain areas, are not fully understood( 5 ). The current thinking is that proteins are more efficient than fats( 6 ) or carbohydrates (CHO)( 7 , 8 ) in inducing satiety. However, discrepancies have been observed between studies regarding the satiating capacity of proteins( 9 ), although these could be attributed to methodological conditions that hamper the interpretation of the results, such as the macronutrient content and other food parameters, texture( 10 ), structure, energy level or the time period elapsing between the preload and the test meal( 11 ). This period is frequently fixed, and only a few studies have observed the satiety response in human subjects who were free to request the next meal at any time( 12 – 14 ). Subjects may also differ in their ability to sense nutrient and energy intakes and adapt their overall food intake accordingly, which may particularly be the case in overweight or obese subjects as opposed to lean subjects( 15 , 16 ). Lastly, different protein sources have also been suspected of differently influencing the induction of satiety, with contrasting results( 1 , 2 , 17 , 18 ).
In particular, the satiating effect of whey protein and casein in relation to their digestive kinetics and subsequent postprandial changes in plasma AA and hormones has been extensively addressed but remains controversial( 3 , 19 , 20 ). The direct link between satiety responses to milk protein fractions and their digestion rate can, however, hardly be tested directly within a single protocol, owing to the problems inherent in obtaining access to the gastrointestinal tract. The present study was designed to assess whether the type of milk protein fractions, particularly differing in terms of their digestion kinetics, could modulate the satiating effect of a protein load in overweight subjects. For this purpose, the satiating effect of three different protein snacks (casein, whey protein or a 50:50 mixture of the two) was assessed in each subject against a basal CHO load, especially on the basis of the duration of satiety. Digestive kinetics and subsequent postprandial changes in plasma AA induced by different milk proteins were determined using a 15N-labelled meal test in a subgroup of subjects equipped with a jejunal tube. All participants were certified to be in good health after a thorough physical examination performed by medical staff in the Human Nutrition Research Centre (HNRC) at Avicenne Hospital (Bobigny, France), as well as routine biochemical tests. The inclusion criteria were 25 < BMI < 30 kg/m2 and 18 < age < 40 years. The exclusion criteria were positive serology for HIV, hepatitis B and C virus, any pathology, allergy to dairy proteins, pregnancy or an absence of contraception in women. The purpose and potential risks of the study were fully explained to the subjects. The study was conducted according to the guidelines laid down in the Declaration of Helsinki, and all procedures involving human subjects were approved by the Ethics Committee of Saint-Germain-en-Laye Hospital. Written informed consent was obtained from all participants. The study was registered at www.ClinicalTrials.gov (NCT00862329, SURPROL). It was performed in the HNRC at Avicenne Hospital under single-blind conditions according to a three-arm parallel design. Subjects were recruited between March 2008 and June 2010, and recruitment was halted when the groups contained at least twenty-five subjects, including eight subjects for a digestive and metabolic investigation. Finally, eighty-two healthy overweight subjects took part in the study: thirty-eight females and forty-four males, with a mean BMI of 28 (sd 1·8) kg/m2 and a mean age of 29 (sd 7) years. The subjects were allocated by the data manager to one of three groups (n 26–28 per group): casein (CAS group); whey protein (WP group); a 50:50 mix of the two (MIX group). During the final third of the study, BMI, age and sex were used to homogenise the groups. Sex ratio, weight, BMI and BMR values did not differ between the groups (Table 1). CAS, casein; WP, whey protein; MIX, equal mix of casein and whey protein. * BMR was calculated from the equations of Harris and Benedict. For satiety, each subject completed a control (snack without protein) and treatment (snack with protein) test. The subjects were thus their own controls (Fig. 1). A subgroup of twenty-four subjects underwent a digestive and metabolic test after ingesting the protein snack (n 8 per group). On day 1, the subjects underwent a control satiety test. They visited the HNRC in the morning after an overnight fast.
The subjects were placed in a room with no time cues (closed curtains and no television, watch or personal computer). They were allowed to read and listen to pre-recorded music. They were given a standardised breakfast (1170 kJ) at 08.00 hours, including 120 ml skimmed milk, 30 g cornflakes, 100 ml orange juice, 20 g sugar, and tea or coffee, which they had to ingest in 30 min. At 11.00 hours, they had to ingest in 15 min a liquid control snack (1003 kJ) composed of 60 g maltodextrin (Roquette). The snack was dissolved in water to reach a final volume of 500 ml and flavoured with orange. After the control snack, the subjects were asked to request for lunch when they felt hungry. The meal proposed contained an excess quantity of food, including pasta, tomato sauce, cottage cheese, fruit salad and water. The time period elapsing between the snack and the spontaneous meal request was recorded, as was the energy intake at lunch. Regularly throughout the satiety test, the subjects completed visual analogue scales to evaluate their appetite feelings. After the control test, volunteers returned home and were asked to consume every morning one liquid protein snack for 5 d. The protein snacks were isoenergetic with the control snack and composed of 30 g maltodextrin, and 30 g casein, whey protein or mix (1003 kJ). Proteins were purchased from Ingredia. The subjects were given five shakes containing the protein load powder and orange flavour for self-administration after its dissolution in 500 ml water. The nature of the protein load was not revealed to the subjects in respect of the single-blind design. On day 7, the subjects came back to the HNRC for the same satiety test as on day 1, but with the protein snack. The satiating power of the protein snack was assessed in each subject as against the control snack. Satiety parameters were thus the main outcome of the study. On day 8, a subgroup of twenty-four subjects (n 8 per group) was used to investigate digestive, hormonal and metabolic parameters that might be associated with satiety. They were equipped with a double-lumen nasogastric tube that migrated to the proximal jejunum, as described previously( 21 , 22 ). The location of the sampling site was controlled by radiography. On day 9, after the subjects had fasted overnight, a solution of polyethylene glycol (PEG)-4000, used as a non-absorbable marker, was infused through the intestinal tube. After the baseline sampling of jejunal effluents and plasma, the subjects ingested their liquid protein snack in the same way as during the control satiety test, but in this case the proteins were intrinsically labelled with 15N, as described previously( 23 ). Intestinal effluents were collected continuously on ice for 6 h after the snack, and blood samples were collected every 30 min for 3 h and hourly thereafter. The subjects were given 80 ml water every hour. At the end of this test period, the gastrointestinal tube was removed and the subjects returned home after eating a complete meal. Effluents were pooled into 30 min periods and freeze-dried until analysis. The concentration of PEG-4000 in digesta samples was measured using a turbidimetric method, as described previously( 24 ), to determine the liquid flow rate. Enrichment of total N and 15N was determined by the elemental analysis-isotope ratio MS method, as described previously( 25 ). Plasma AA were analysed by ion-exchange chromatography after protein precipitation, with the addition of norleucine as an internal standard (Biotech Instrument). 
The 15N enrichment of AA was determined by isotope ratio MS (IsoPrime, GV Instrument) coupled to an elemental analyser (Euro Elemental Analyser 3000; EuroVector), after purification on Dowex AG-50 X8 resin, as described previously( 23 , 26 ). The dietary N level in plasma AA was calculated as follows: $$\begin{eqnarray} Dietary\,AA\,(mmol) = N_{plasma\,AA}\,(mmol/l)\times APE_{sample}/APE_{meal}, \end{eqnarray}$$ where Nplasma AA was calculated from the sum of N in individual plasma AA (on the basis of concentrations) and the estimated volume of plasma as 5 % of body weight( 27 ). APEsample and APEmeal are the 15N enrichment in the digesta and meal, respectively, where APE stands for atom percentage excess. Concentrations of plasma insulin, glucagon, glucose-dependent insulinotropic peptide (GIP), glucagon-like peptide 1 (GLP-1), peptide YY and ghrelin (active form) were analysed using a human endocrine panel (Milliplex; Millipore) on a Bioplex 200 system (Bio-Rad Laboratories, Inc.). The intra-assay CV ranged from 5·5 % for GLP-1, insulin and peptide YY to 10 % for glucagon. The inter-assay CV ranged from 6·5 % for GIP to 23 % for GLP-1. Data are presented as means with their standard errors. The number of subjects necessary to recruit was calculated from a power test based on the time period elapsing between the snack and request for lunch, using the software GPower 3.1. Only a few data were available in the literature to draw hypothesis on the differences between a CHO and protein snack, depending on the type of protein. In the study of Marmonier et al. ( 28 ), the time period elapsing between the snack and meal request was 471 min after a CHO snack v. 429 min after a high-protein snack, associated with a within standard deviation of 30 min. We targeted differential effects of 10 min between two groups with a standard deviation of 25, resulting in an effect size of 0·35. It was then calculated that at least eighty subjects needed to be enrolled to ensure a statistical power of 80 % and thus to detect differences between the three groups using a one-way ANOVA, with an α level of 5 %. The effect of qualitative (group and sex) and quantitative (BMI, weight, BMR and values of the control snack) variables on satiety parameters was also tested within ANOVA or ANCOVA models, using the generalised linear model procedure of SAS. Changes in visual analogue scales after the snack were compared between the control and protein snacks using a mixed model with two repeated factors, time and test (PROC MIXED, SAS 9.1). The effects of the protein snack on digestive kinetics and hormones were compared between groups using a mixed model, with time as the repeated factor and BMR as a covariate, to take into account the fact that the amount of the snack was similar among subjects. P≤ 0·05 was taken as the criterion for statistical significance. Flow rates of dietary N in the jejunum (Fig. 2(a)) differed significantly as a function of group, with a substantial delivery during the first 3 h after ingesting whey protein or mix while casein was delivered progressively in two phases during the first 3 h and between 4 and 5 h. The rate of appearance of dietary AA in plasma (Fig. 2(b)) reflected the rapid and slow gastric emptying of whey protein and casein, respectively, with an intermediate value for the mix of the two. Indeed, the casein snack did not trigger any peak for dietary AA but a progressive rise throughout the 6 h, whereas whey protein resulted in a massive appearance within 3 h. 
After the mix snack, a maximal appearance of dietary AA was observed as soon as 1 h and was followed by a plateau. Fig. 2 Flow rates of dietary nitrogen in the jejunum (a) and dietary amino acid appearance in the plasma (b) in a subgroup of twenty-four subjects, after ingestion of a protein snack containing 15N-labelled milk proteins (casein ( ), whey protein ( ) and an equal mix of the two ( )). Values are means (n 8 per group), with their standard errors represented by vertical bars. * Time point at which a group effect was observed (P< 0·05). The effects of group, time and interaction were tested in a mixed model with time as a repeated factor. There were significant effects observed for time (P< 0·0001, for both (a) and (b)) and the time × group interaction ((a) P= 0·001 and (b) P= 0·006). Levels of gastrointestinal and pancreatic hormones were also determined (see Fig. S1, available online). The secretion of insulin and GIP in response to the ingestion of the protein snack was similar in the three groups, with a peak at 0·5 h to reach approximately 300 pmol/l and a return to the baseline at 3 h. Peptide YY, glucagon and GLP1 did not vary significantly over time while ghrelin levels gradually rose during the investigation. A trend for a group effect (P= 0·06) was obtained with GLP-1, with a globally lower secretion in the CAS group than in the other two groups. Post-snack feelings of hunger (Fig. 3(a)) and fullness (Fig. 3(b)) did not differ between the groups (mixed model with time and snack as repeated factors). There was also no significant difference between the control and protein snacks. Fig. 3 Feelings of hunger (a) and fullness (b) after ingestion of the control (n 82) or protein (n 26–28 per group) snack. The snack was given at t= 0 min. Values are means, with their standard errors represented by vertical bars. There was no significant effect observed for snack (protein v. control ( )) or the type of protein snack (casein ( ), whey protein ( ) and an equal mix of the two ( ); mixed model with protein as a factor and time as a repeated factor). The time period elapsing between the control CHO snack and request for lunch was 133 (sem 7) min (median 126 min, range 15–385 min). The time period elapsing after the protein snack, whatever the type of protein, was 150 (sem 7·5) min (median 150 min, range 15–345 min). As a result, the protein snack significantly increased this period by 17 (sem 7) min compared with the control snack (P= 0·02). There was no significant effect of group (CAS, MIX or WP) on either the time period elapsing after the protein snack or the difference from the values obtained after the control snack, as shown in Fig. 4(a). Besides, the time period elapsing after the protein snack was significantly dependent on the time period elapsing after the control snack (ANCOVA, P< 0·0001), indicating that the duration of satiety was strongly subject-dependent. In contrast, there was no effect of subject-related variables such as sex, body weight, BMI or BMR. Owing to the significant relationship between the control test and the satiety response to the protein snack, a split analysis was performed in subjects displaying an initially short time period (early eaters) or long time period (late eaters) for meal request after the control snack (cut-off point: median 126 min). In early eaters, the time period for lunch request after the protein snack was increased by 32·4 (sem 7·5) min (P= 0·0001), whereas in late eaters, the protein snack did not have any effect. 
Although the ANOVA did not reveal any significant effect of the type of protein snack on the delay in this subgroup of subjects (Fig. 4(b)), the extension of the duration of satiety compared with the control snack was only significant in the WP and MIX groups, but not in the CAS group. Energy intake at lunch was 4·2 (sem 0·1) MJ after the control snack (median 4·1 MJ, range 1·8–6·5 MJ). After the protein snack, energy intake at lunch was 4·2 (sem 1·3) MJ and did not differ compared with the control snack. Energy intake after the protein snack was not influenced by the type of protein but by the energy intake at the control test (ANCOVA, P< 0·0001). In contrast to the time period elapsing between the snack and request for lunch, subject-related variables influenced energy intake at lunch, such as sex (P= 0·0005) and BMR (P= 0·0007). Baseline energy intake was also positively linked to weight (R 0·027, P= 0·01) but not to BMI. Energy intake at lunch was not influenced by the length of time after the control snack. The present study addressed the sensitivity of overweight subjects to the satiating effect of different milk protein snacks, differing in terms of protein type and consequently digestive kinetics. Milk proteins, whatever their type, moderately but significantly extended the appetite-suppressant effect of a liquid CHO snack. A post hoc analysis revealed that the satiating effect of proteins were only efficient in subjects displaying a short duration of satiety. The absence of any effect of milk protein fractions despite the marked difference in their digestion kinetics indicates that in our experimental conditions, the modulation of the kinetics of dietary AA delivery did not influence the duration of satiety. Although kinetic profiles were modulated between the three different protein snacks, displaying marked differences in the rate of appearance of dietary AA in plasma, there was no difference between the protein sources in relation to the effect on the duration of satiety. This modulation was characterised in a subgroup of subjects, but not at the same time as the satiety test, because the presence of the intestinal tube was not compatible with assessing satiety. At the jejunal level, the half delivery time of dietary N was 50 min with whey protein and 2 h with casein, but with an estimated complete emptying time (based on cumulated recovery) of 2·5 h with whey protein and 6 h with casein. The mix snack displayed similar kinetics to that for whey protein (half delivery time of 1 h) but with an estimated complete digestion time of 4 h. The appearance of 15N in plasma AA provided a good discrimination of kinetics, with a peak time at 90 min for whey protein and the absence of a peak for casein, while the mix of the two displayed an early maximum (1 h) followed by a plateau. These digestive and plasma AA kinetic profiles are in agreement with those observed previously( 23 , 29 , 30 ), with the peculiarity that due to clotted aggregates in the stomach, casein did not trigger a plasma AA peak. We were thus able to verify that mixing casein and whey protein in equal proportion produced an intermediate kinetic. Despite these differences, there were no effects on appetite rating. Several studies have compared the satiating effects of different milk proteins, of which the most widely studied are whey protein and casein or total milk proteins. However, the results of the studies reported to date have been contradictory. 
Some authors have found a stronger effect of whey protein( 3 ), casein( 31 ) or total milk proteins( 19 ), or no difference( 2 ). This latter study was performed in overweight women in order to compare liquid preloads (milk shake, 1·1 MJ) containing 50 g whey protein, soya, gluten or glucose. Energy intake during a subsequent buffet meal was lower by 10 % 3 h after ingesting all the protein preloads when compared with the glucose treatment, but without there being any differences between the protein sources. It should be noted that unlike the present experiment, all these studies implemented a fixed period between the load and the meal. Interestingly, in young men, a modification to food texture by increasing the viscosity of a liquid casein snack using transglutaminase increased fullness compared with a liquid casein or whey protein snack, but decreased the secretion of GLP-1 and cholecystokinin( 20 ). In the present study, no differences between casein and whey protein were observed, although the total gastric emptying time after ingestion of the casein snack was delayed by about 4 h. This could suggest that an excessive slowing of digestive kinetics, together with a high-protein dose (50 g), is necessary to obtain an effect on the duration of satiety. Furthermore, it shows that higher peaks of plasma GLP-1 and cholecystokinin observed during the early postprandial phase are not necessarily linked to a stronger satiating effect. We also did not find any marked effects of the type of protein snack on hormonal profiles. A high variability was observed, especially for gastrointestinal hormones, which may have been partly due to the presence of the intestinal tube and to the fact that the energy load of the snack was unique whatever the corpulence of the subjects, resulting in the contribution of 10 to 17 % to the BMR, although without any differences between the protein groups. Insulin and GIP were principally triggered by the presence of maltodextrin (30 g) in the snack, buffering the insulinotropic capacity of whey protein( 32 ). The only effect on GLP-1 was global, without affecting the kinetics, with a lower level in the CAS group. The absence of difference between casein and whey protein is in line with most previous findings( 2 , 20 , 33 ), but not all( 3 ). However, all these studies were performed with 50 g protein without CHO, whereas the present study used a dose more compatible with a supplementation strategy, meaning 30 g. At a lower dose (15 g), higher levels of GLP-1 and cholecystokinin secretion have been reported after ingestion of total milk proteins than that of whey protein( 34 ). The present study also addressed the effect of the protein snack against the CHO snack, and particularly in relation to the duration of satiety, a parameter that has been scarcely assessed. In one study, an increase in time period elapsing after a high-protein load compared with a high-CHO load( 28 ) has been reported; however, most of the investigations were performed with a fixed time interval between the load and the meal. We found that the duration of the satiating effect was increased by approximately 17 min when the snack contained proteins, in comparison to an isovolumic and isoenergetic (1 MJ) snack containing maltodextrin. The result of the present study is consistent with that of Douglas et al. ( 14 ) who found an increase in the duration of satiety from 150 min with a low-protein yogurt to 180 min with a high-protein one. In contrast, there was no effect on energy intake at lunch. 
This illustrates the fact that because the period was not fixed and subjects received their lunch when they were hungry, this criterion was no longer discriminating. Chungchunlam et al. ( 35 ) did not find a significant effect of the timing of the preload varying from 30 to 120 min on energy intake; however, this does not preclude the fact that after 120 min, the effect of the preload altered the result. Moreover, by contrast, with the time period elapsing after the snack, we found that energy intake was influenced by sex, as shown previously( 18 ), and also by BMI and BMR, thus increasing the factors influencing this criterion. Therefore, under the present experimental conditions, energy intake at meal was not a sensitive criterion, while the time period elapsing was the most appropriate one. As stated by Blundell et al. ( 11 ), the period during which a preload maintains a state of satiety can be considered as a good indicator to assess its satiety power. Interestingly, we found that the periods obtained after the protein snack varied considerably (15–385 min) and were strongly dependent on the periods observed following the control snack. We thus reanalysed the results after splitting the subjects as early and late eaters, and found that the duration of the satiating effect of proteins was extended by 32 min in early eaters but not in late eaters. This shows that the efficacy of proteins in extending the duration of satiety depends on their sensitivity to an energy load. In subjects in whom a non-protein snack triggers a sufficient duration of satiety, proteins do not exert any additional benefit. Interestingly, it has recently been shown that the usual mealtime interval of subjects influences the extent to which they could lose weight during dietary intervention( 36 ). This split was performed with two subgroups in order to retain sufficient statistical power to further test the effect of proteins. However, an analysis with three subgroups produced similar results, with a significant effect of proteins in early and medium eaters, and no effect in late eaters. The present study thus provides new insights that should be investigated specifically in further studies. Lastly, the split analysis suggests that the presence of whey protein could extend the duration of satiety in early eaters. Indeed, compared with the CHO snack, the satiating effect of proteins was only significant in the WP and MIX groups, but we were unable to find a group effect, probably due to a loss of statistical power subsequent to splitting. A specific study on subjects recruited for their ability to respond to an energy load would therefore be necessary to ascertain a higher satiating power of whey protein in subjects displaying a short duration of satiety. The present study was performed using an original approach that could investigate in the same subjects both the jejunal flux of dietary protein and the satiety response to a snack. A second original approach was that the design of the present study first included a control satiety test, allowing the classification of subjects in the absence of any protein in the snack, and then a second test to determine the additional effect of proteins, after habituation to the snack. Because of this original approach, the present study had some limitations. In particular, a cross-over design could not be implemented, leading to a loss of statistical power for between-group analysis. Furthermore, we could not verify the effect of snack order. 
Lastly, although the subjects were habituated to the snack – a design that has been rarely employed in other studies – the habituation period remained short. It would be interesting to evaluate the effect of a longer adaptation period and its subsequent impact on food consumption and weight. To our knowledge, very few studies have assessed this long-term effect. A sustainable effect on hunger feelings has been reported in subjects regularly consuming a satiating snack for 18 weeks after weight loss, together with better weight maintenance( 37 ); however, this result cannot be generalised. The present study focused on the duration of satiety and did not allow for any speculation regarding its possible effects on food intake. In conclusion, the kinetics of dietary AA delivery do not play a major role in satiety responses. The present results also show that the effect of proteins in inducing satiety needs to be interpreted in terms of target subjects, particularly relative to their sensitivity to satiety. The variability in the duration of satiety was extremely high among individuals, as has already been shown for daily energy intake between individuals of the same sex, age, body composition and activity levels( 38 ), and we showed that proteins were efficient when compared with CHO alone in early eaters. To our knowledge, this is the first study to demonstrate that sensitivity to satiety (short or long satiety period) influences the effect of specific macronutrients on satiety responses. Such new criteria could be taken into account when using loads enriched with macronutrients for the control of energy intake. To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S0007114514001470 The authors thank Marie Claude Amar from the Volunteer Research Center at Avicenne Hospital for recruiting the subjects and collecting the samples, as well as Laure Chevalier and Cécile Bos for their active participation in the study. The authors are indebted to the Corrine Marmonier and Jean-Pierre Guyonnet, from the CNIEL, for constructive scientific discussions. The present study was supported by a grant from the CNIEL and the French Agency for Research and Technology (Program PNRA 2006; SURPROL). The authors are participants of the EU-funded COST action INFOGEST (COST FA 1005). The authors' contributions are as follows: A. M.-B. and C. G. designed the research; A. M.-B., C. G., G. A., C. P., J. P., R. B. and D. R. conducted the research; J. L. supplied the labelled milk proteins; C. G., A. M.-B. and C. P. analysed the data; A. M.-B., C. G., G. F. and D. T. wrote the paper; A. M.-B. and C. G. had primary responsibility for the final content. All authors read and approved the final version of the paper. The authors declare that there are no conflicts of interest. 1 Anderson, GH, Tecimer, SN, Shah, D, et al. (2004) Protein source, quantity, and time of consumption determine the effect of proteins on short-term food intake in young men. J Nutr 134, 3011–3015. 2 Bowen, J, Noakes, M & Clifton, PM (2006) Appetite regulatory hormone responses to various dietary proteins differ by body mass index status despite similar reductions in ad libitum energy intake. J Clin Endocrinol Metab 91, 2913–2919. 3 Hall, WL, Millward, DJ, Long, SJ, et al. (2003) Casein and whey exert different effects on plasma amino acid profiles, gastrointestinal hormone secretion and appetite. Br J Nutr 89, 239–248. 4 Westerterp-Plantenga, MS, Nieuwenhuizen, A, Tome, D, et al. 
(2009) Dietary protein, weight loss, and weight maintenance. Annu Rev Nutr 29, 21–41. 5 Fromentin, G, Darcel, N, Chaumontet, C, et al. (2012) Peripheral and central mechanisms involved in the control of food intake by dietary amino acids and proteins. Nutr Res Rev 25, 29–39. 6 Porrini, M, Santangelo, A, Crovetti, R, et al. (1997) Weight, protein, fat, and timing of preloads affect food intake. Physiol Behav 62, 563–570. 7 Bertenshaw, EJ, Lluch, A & Yeomans, MR (2008) Satiating effects of protein but not carbohydrate consumed in a between-meal beverage context. Physiol Behav 93, 427–436. 8 Poppitt, SD, McCormack, D & Buffenstein, R (1998) Short-term effects of macronutrient preloads on appetite and energy intake in lean women. Physiol Behav 64, 279–285. 9 Raben, A, Agerholm-Larsen, L, Flint, A, et al. (2003) Meals with similar energy densities but rich in protein, fat, carbohydrate, or alcohol have different effects on energy expenditure and substrate metabolism but not on appetite and energy intake. Am J Clin Nutr 77, 91–100. 10 Wanders, AJ, Mars, M, Borgonjen-van den Berg, KJ, et al. (2014) Satiety and energy intake after single and repeated exposure to gel-forming dietary fiber: post-ingestive effects. Int J Obes (Lond) 38, 794–800. 11 Blundell, J, de Graaf, C, Hulshof, T, et al. (2010) Appetite control: methodological aspects of the evaluation of foods. Obes Rev 11, 251–270. 12 Chapelot, D & Payen, F (2010) Comparison of the effects of a liquid yogurt and chocolate bars on satiety: a multidimensional approach. Br J Nutr 103, 760–767. 13 Marmonier, C, Chapelot, D & Louis-Sylvestre, J (2000) Effects of macronutrient content and energy density of snacks consumed in a satiety state on the onset of the next meal. Appetite 34, 161–168. 14 Douglas, SM, Ortinau, LC, Hoertel, HA, et al. (2013) Low, moderate, or high protein yogurt snacks on appetite control and subsequent eating in healthy women. Appetite 60, 117–122. 15 Brennan, IM, Luscombe-Marsh, ND, Seimon, RV, et al. (2012) Effects of fat, protein, and carbohydrate and protein load on appetite, plasma cholecystokinin, peptide YY, and ghrelin, and energy intake in lean and obese men. Am J Physiol Gastrointest Liver Physiol 303, G129–G140. 16 Ho, A, Kennedy, J & Dimitropoulos, A (2012) Neural correlates to food-related behavior in normal-weight and overweight/obese participants. PLOS ONE 7, e45403. 17 Lang, V, Bellisle, F, Oppert, JM, et al. (1998) Satiating effect of proteins in healthy subjects: a comparison of egg albumin, casein, gelatin, soy protein, pea protein, and wheat gluten. Am J Clin Nutr 67, 1197–1204. 18 Lam, SM, Moughan, PJ, Awati, A, et al. (2009) The influence of whey protein and glycomacropeptide on satiety in adult humans. Physiol Behav 96, 162–168. 19 Lorenzen, J, Frederiksen, R, Hoppe, C, et al. (2012) The effect of milk proteins on appetite regulation and diet-induced thermogenesis. Eur J Clin Nutr 66, 622–627. 20 Juvonen, KR, Karhunen, LJ, Vuori, E, et al. (2011) Structure modification of a milk protein-based model food affects postprandial intestinal peptide release and fullness in healthy young men. Br J Nutr 106, 1890–1898. 21 Bos, C, Airinei, G, Mariotti, F, et al. (2007) The poor digestibility of rapeseed protein is balanced by its very high metabolic utilization in humans. J Nutr 137, 594–600. 22 Bos, C, Juillet, B, Fouillet, H, et al. (2005) Postprandial metabolic utilization of wheat protein in humans. Am J Clin Nutr 81, 87–94. 23 Lacroix, M, Bos, C, Leonil, J, et al. 
(2006) Compared with casein or total milk protein, digestion of milk soluble proteins is too rapid to sustain the anabolic postprandial amino acid requirement. Am J Clin Nutr 84, 1070–1079. 24 Hyden, S (1955) A turbidimetric method for the determination of higher polyethylene glycol in biological materials. Kungl Lanthbrukshogskolans Annaler 22, 139–145. 25 Boutrou, R, Gaudichon, C, Dupont, D, et al. (2013) Sequential release of milk protein-derived bioactive peptides in the jejunum in healthy humans. Am J Clin Nutr 97, 1314–1323. 26 Airinei, G, Gaudichon, C, Bos, C, et al. (2011) Postprandial protein metabolism but not a fecal test reveals protein malabsorption in patients with pancreatic exocrine insufficiency. Clin Nutr 30, 831–837. 27 Gannong, W (1969) Review of Medical Physiology. Los Altos, CA: Lang Medical Publications. 28 Marmonier, C, Chapelot, D, Fantino, M, et al. (2002) Snacks consumed in a nonhungry state have poor satiating efficiency: influence of snack composition on substrate utilization and hunger. Am J Clin Nutr 76, 518–528. 29 Boirie, Y, Dangin, M, Gachon, P, et al. (1997) Slow and fast dietary proteins differently modulate postprandial protein accretion. Proc Natl Acad Sci U S A 94, 14930–14935. 30 Mahé, S, Benamouzig, R, Gaudichon, C, et al. (1996) Nitrogen movements in the upper jejunum lumen in humans fed low amounts of caseins or β-lactoglobulin. Gastroenterol Clin Biol 19, 20–26. 31 Abou-Samra, R, Keersmaekers, L, Brienza, D, et al. (2011) Effect of different protein sources on satiation and short-term satiety when consumed as a starter. Nutr J 10, 139. 32 Calbet, JA & MacLean, DA (2002) Plasma glucagon and insulin responses depend on the rate of appearance of amino acids after ingestion of different protein solutions in humans. J Nutr 132, 2174–2182. 33 Calbet, JA & Holst, JJ (2004) Gastric emptying, gastric secretion and enterogastrone response after administration of milk proteins or their peptide hydrolysates in humans. Eur J Nutr 43, 127–139. 34 Diepvens, K, Haberer, D & Westerterp-Plantenga, M (2008) Different proteins and biopeptides differently affect satiety and anorexigenic/orexigenic hormones in healthy humans. Int J Obes (Lond) 32, 510–518. 35 Chungchunlam, SM, Moughan, PJ, Henare, SJ, et al. (2012) Effect of time of consumption of preloads on measures of satiety in healthy normal weight women. Appetite 59, 281–288. 36 Garaulet, M, Gomez-Abellan, P, Alburquerque-Bejar, JJ, et al. (2013) Timing of food intake predicts weight loss effectiveness. Int J Obes (Lond) 37, 604–611. 37 Diepvens, K, Soenen, S, Steijns, J, et al. (2007) Long-term effects of consumption of a novel fat emulsion in relation to body-weight management. Int J Obes (Lond) 31, 942–949. 38 George, V, Tremblay, A, Despres, JP, et al. (1991) Further evidence for the presence of "small eaters" and "large eaters" among women. Am J Clin Nutr 53, 425–429.
CommonCrawl
# Setting up a Svelte project with interact.js

To get started with implementing drag and drop functionality with Svelte and interact.js, you'll first need to set up a Svelte project and install the necessary dependencies.

1. Create a new Svelte project using the following command:

```
npx degit sveltejs/template svelte-app
```

2. Change into the newly created project directory:

```
cd svelte-app
```

3. Install the interact.js library as a dependency:

```
npm install interactjs
```

4. Open the `src/App.svelte` file and import interact.js:

```javascript
import interact from 'interactjs';
```

Now you're ready to start creating draggable and droppable components using Svelte and interact.js.

## Exercise

Instructions:

- Create a new Svelte project using the `degit` command.
- Change into the project directory.
- Install the interact.js library as a dependency.
- Open the `src/App.svelte` file and import interact.js.

### Solution

- `npx degit sveltejs/template svelte-app`
- `cd svelte-app`
- `npm install interactjs`
- `import interact from 'interactjs';`

# Creating draggable and droppable components

1. Create a new Svelte component called `Draggable.svelte`:

```
src/Draggable.svelte
```

2. In `Draggable.svelte`, add the following code to make the component draggable. interact.js needs a real DOM node, so the element is captured with `bind:this` and wired up in `onMount`:

```html
<script>
  import { onMount } from 'svelte';
  import interact from 'interactjs';

  let element;           // bound to the root element below
  let draggable = false;

  function handleDragStart(event) {
    draggable = true;
  }

  function handleDragEnd(event) {
    draggable = false;
  }

  onMount(() => {
    interact(element).draggable({
      onstart: handleDragStart,
      onend: handleDragEnd,
    });
  });
</script>

<div bind:this={element}>
  <slot />
</div>
```

3. Create another new Svelte component called `Droppable.svelte`:

```
src/Droppable.svelte
```

4. In `Droppable.svelte`, add the following code to make the component droppable:

```html
<script>
  import { onMount } from 'svelte';
  import interact from 'interactjs';

  let element;           // bound to the root element below
  let droppable = false;

  function handleDrop(event) {
    droppable = true;
  }

  onMount(() => {
    interact(element).dropzone({
      ondrop: handleDrop,
    });
  });
</script>

<div bind:this={element}>
  <slot />
</div>
```

Now you have two components that can be dragged and dropped using interact.js.

## Exercise

Instructions:

- Create a new Svelte component called `Draggable.svelte`.
- Add code to make the component draggable.
- Create another new Svelte component called `Droppable.svelte`.
- Add code to make the component droppable.

### Solution

- `src/Draggable.svelte`
- `import { onMount } from 'svelte'; import interact from 'interactjs';`
- `let element; let draggable = false;`
- `function handleDragStart(event) { draggable = true; }`
- `function handleDragEnd(event) { draggable = false; }`
- `onMount(() => { interact(element).draggable({ onstart: handleDragStart, onend: handleDragEnd }); });`
- `src/Droppable.svelte`
- `let element; let droppable = false;`
- `function handleDrop(event) { droppable = true; }`
- `onMount(() => { interact(element).dropzone({ ondrop: handleDrop }); });`

# Handling drag and drop events with Svelte

1. In `Draggable.svelte`, add an `onmove` handler so the component can react while it is being dragged:

```javascript
function handleDrag(event) {
  // Handle drag event (event.dx and event.dy are the movement since the last event)
}

onMount(() => {
  interact(element).draggable({
    onstart: handleDragStart,
    onend: handleDragEnd,
    onmove: handleDrag,
  });
});
```

2. In `Droppable.svelte`, add an `ondrop` event listener to the component:

```javascript
function handleDrop(event) {
  // Handle drop event (event.relatedTarget is the element that was dropped here)
}

onMount(() => {
  interact(element).dropzone({
    ondrop: handleDrop,
  });
});
```

3. In the `handleDrag` and `handleDrop` functions, you can access the event data to get information about the drag and drop actions.

Now you have event handlers for drag and drop events using Svelte and interact.js.

## Exercise

Instructions:

- Add an `onmove` drag handler to `Draggable.svelte`.
- Add an `ondrop` event listener to `Droppable.svelte`.
- Access event data in the event handlers to handle drag and drop actions.

### Solution

- `function handleDrag(event) { /* Handle drag event */ }`
- `function handleDrop(event) { /* Handle drop event */ }`
- `onMount(() => { interact(element).draggable({ onstart: handleDragStart, onend: handleDragEnd, onmove: handleDrag }); });`
- `onMount(() => { interact(element).dropzone({ ondrop: handleDrop }); });`

# Implementing drag and drop functionality with Svelte and interact.js

1. In `App.svelte`, import the `Draggable.svelte` and `Droppable.svelte` components:

```javascript
import Draggable from './Draggable.svelte';
import Droppable from './Droppable.svelte';
```

2. Add the components to the `App.svelte` markup:

```html
<Draggable />
<Droppable />
```

3. Run the Svelte project to see the drag and drop functionality in action.

Now you have implemented drag and drop functionality using Svelte and interact.js.

## Exercise

Instructions:

- Import `Draggable.svelte` and `Droppable.svelte` into `App.svelte`.
- Add the components to the `App.svelte` markup.
- Run the Svelte project to see the drag and drop functionality.

### Solution

- `import Draggable from './Draggable.svelte';`
- `import Droppable from './Droppable.svelte';`
- `<Draggable />`
- `<Droppable />`

# Styling and customizing the drag and drop interface

1. In `Draggable.svelte`, add a `<style>` block to define the component's styles:

```html
<style>
  .draggable {
    /* Add your draggable styles here */
  }
</style>
```

2. In `Droppable.svelte`, add a `<style>` block to define the component's styles:

```html
<style>
  .droppable {
    /* Add your droppable styles here */
  }
</style>
```

3. Apply the styles to the component's markup (keeping the `bind:this` binding used to wire up interact.js):

```html
<div bind:this={element} class="draggable">
  <!-- Add your draggable content here -->
</div>
```

```html
<div bind:this={element} class="droppable">
  <!-- Add your droppable content here -->
</div>
```

Now you have styled and customized the drag and drop interface using Svelte and interact.js.

## Exercise

Instructions:

- Add a `<style>` block to `Draggable.svelte` and define the component's styles.
- Add a `<style>` block to `Droppable.svelte` and define the component's styles.
- Apply the styles to the component's markup.

### Solution

- `<style>.draggable { /* Add your draggable styles here */ }</style>`
- `<style>.droppable { /* Add your droppable styles here */ }</style>`
- `<div bind:this={element} class="draggable">`
- `<div bind:this={element} class="droppable">`

# Handling edge cases and error handling

1. In `Draggable.svelte`, handle edge cases such as dragging outside the viewport or dragging over fixed elements (for example, by constraining movement with interact.js's `restrictRect` modifier).
2. In `Droppable.svelte`, handle edge cases such as dropping outside the viewport or dropping over fixed elements.
3. Implement error handling for any issues that may arise during the drag and drop process.

Now you have handled edge cases and implemented error handling when implementing drag and drop functionality with Svelte and interact.js.

## Exercise

Instructions:

- Handle edge cases in `Draggable.svelte` and `Droppable.svelte`.
- Implement error handling for drag and drop issues.

### Solution

- `/* Handle edge cases in Draggable.svelte and Droppable.svelte */`
- `/* Implement error handling for drag and drop issues */`

# Integrating with other Svelte components and libraries

1. Import and use other Svelte components or libraries in `Draggable.svelte` and `Droppable.svelte`.
2. Pass data between the components and the drag and drop functionality.
3. Update the components based on the drag and drop events.

Now you have integrated the drag and drop functionality with other Svelte components and libraries. A concrete sketch of one way to share drop state between components follows.
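One possible wiring for this integration is to publish drop events through a small Svelte store that any other component can subscribe to. The sketch below is illustrative: the `src/stores.js` module, the `droppedItems` store name and the `data-id` attribute on the draggable markup are assumptions made for this example, not something required by interact.js itself.

```javascript
// src/stores.js — shared state that other components can subscribe to
import { writable } from 'svelte/store';

// Ids of items that have been dropped onto a Droppable so far
export const droppedItems = writable([]);
```

```javascript
// In Droppable.svelte — record each drop in the shared store
import { droppedItems } from './stores.js';

function handleDrop(event) {
  // event.relatedTarget is the draggable element released over this dropzone;
  // this assumes that element carries a data-id attribute
  const id = event.relatedTarget.dataset.id;
  droppedItems.update((items) => [...items, id]);
}
```

```html
<!-- In any other component — react to drops without knowing about interact.js -->
<script>
  import { droppedItems } from './stores.js';
</script>

<p>{$droppedItems.length} item(s) dropped so far</p>
```

Because the store is the only point of contact, components that display or process dropped items stay decoupled from the drag and drop implementation.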
## Exercise Instructions: - Import and use other Svelte components or libraries in `Draggable.svelte` and `Droppable.svelte`. - Pass data between the components and the drag and drop functionality. - Update the components based on the drag and drop events. ### Solution - `import OtherComponent from './OtherComponent.svelte';` - `let data = '';` - `function handleDrag(event) { data = event.data; }` - `function handleDrop(event) { /* Update the component based on the drop event */ }` # Optimizing performance for larger applications 1. Use Svelte's built-in optimizations, such as `bind:this` and `bind:group`. 2. Implement lazy loading for components or data when needed. 3. Use the `use` directive to optimize the drag and drop functionality. Now you have optimized performance for larger applications when implementing drag and drop functionality with Svelte and interact.js. ## Exercise Instructions: - Use Svelte's built-in optimizations in `Draggable.svelte` and `Droppable.svelte`. - Implement lazy loading for components or data. - Use the `use` directive to optimize the drag and drop functionality. ### Solution - `import { onMount } from 'svelte';` - `onMount(() => { /* Implement lazy loading for components or data */ });` - `use:interact` # Testing and debugging drag and drop functionality 1. Write unit tests for the drag and drop functionality. 2. Use the browser's developer tools to debug any issues that may arise. 3. Use Svelte's built-in reactive statements, such as `$$restProps` and `$$slots`, to ensure the drag and drop functionality is working as expected. Now you have tested and debugged the drag and drop functionality when implementing it with Svelte and interact.js. ## Exercise Instructions: - Write unit tests for the drag and drop functionality. - Use the browser's developer tools to debug issues. - Use Svelte's built-in reactive statements to ensure the drag and drop functionality is working as expected. ### Solution - `import { render } from '@testing-library/svelte';` - `test('Drag and drop functionality', () => { /* Test the drag and drop functionality */ });` - `$$restProps` and `$$slots` # Implementing drag and drop functionality in a real-world application 1. Create a new Svelte project and set up the necessary dependencies. 2. Design and implement the drag and drop functionality in the application. 3. Test and debug the drag and drop functionality to ensure it works as expected. 4. Optimize the performance of the application for larger use cases. Now you have implemented drag and drop functionality in a real-world application using Svelte and interact.js. ## Exercise Instructions: - Create a new Svelte project and set up the necessary dependencies. - Design and implement the drag and drop functionality in the application. - Test and debug the drag and drop functionality to ensure it works as expected. - Optimize the performance of the application for larger use cases. ### Solution - `npx degit sveltejs/template svelte-app` - `cd svelte-app` - `npm install interactjs` - `import interact from 'interactjs';` - `/* Design and implement the drag and drop functionality */` - `/* Test and debug the drag and drop functionality */` - `/* Optimize the performance of the application */`
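To tie the preceding sections together, here is a minimal, self-contained sketch of an `App.svelte` that moves a box with a CSS transform while dragging and reports when it is released over a drop target. The element names, class names, colours and messages are illustrative choices for this example, assuming the same `interactjs` dependency installed earlier.

```html
<script>
  import { onMount } from 'svelte';
  import interact from 'interactjs';

  let box;      // draggable element (bound below)
  let zone;     // drop target element (bound below)
  let x = 0;    // accumulated drag offset
  let y = 0;
  let status = 'Drag the box onto the target';

  onMount(() => {
    const draggable = interact(box).draggable({
      // event.dx / event.dy are the pointer movements since the previous move event
      onmove: (event) => {
        x += event.dx;
        y += event.dy;
      },
    });

    const dropzone = interact(zone).dropzone({
      accept: '.box',
      ondrop: () => {
        status = 'Dropped!';
      },
    });

    // Remove interact.js listeners when the component is destroyed
    return () => {
      draggable.unset();
      dropzone.unset();
    };
  });
</script>

<div bind:this={zone} class="zone">{status}</div>
<div bind:this={box} class="box" style="transform: translate({x}px, {y}px)">Box</div>

<style>
  .zone { width: 160px; height: 120px; border: 2px dashed #888; margin-bottom: 1rem; }
  .box  { width: 80px; height: 80px; background: #ff3e00; color: white; touch-action: none; }
</style>
```

The `touch-action: none` rule follows interact.js's recommendation for draggable elements on touch devices, and the accumulated `x`/`y` offsets are what keep the element under the pointer between Svelte re-renders.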
Textbooks
Rota's Basis Conjecture Posted on February 24, 2014 by Guest Contributor Guest post by Jan Draisma, Eindhoven University of Technology. I'd like to discuss some old and new developments on Rota's basis conjecture. Rota's basis conjecture. Let $V$ be an $n$-dimensional vector space over a field $K$, and for each $i=1,\ldots,n$ let $v_{i1},\ldots,v_{in}$ be a basis of $V$. Then there exist permutations $\pi_1,\ldots,\pi_n \in S_n$ such that for each $j\in [n]$ the transversal $v_{1,\pi_1(j)},v_{2,\pi_2(j)},\ldots,v_{n,\pi_n(j)}$ is a basis of $V$, as well. I first learned about this conjecture in 2005 from the beautiful chapter Ten abandoned goldmines [Cra01] in a tribute to Gian-Carlo Rota. Ignorantly, I sat down on my balcony and thought, this must be easy… let's identify $V$ with $K^n$ and introduce a variable $x_{ijk}$ for the $k$-th coordinate of $v_{ij}$. These are arranged in a cube whose horizontal slices are matrices with non-zero determinants. Let $\mathrm{hdet}$ denote the product of these horizontal determinants. For each choice $\pi (\pi_1,\ldots,\pi_n)$ let $\mathrm{vdet}_\pi$ be the product of the vertical determinants $\det(x_{i,\pi_i(j),k})_{i,k}$ for $j=1,\ldots,n$. Both $\mathrm{hdet}$ and $\mathrm{vdet}_\pi$ are degree-$n^2$ polynomials in the $x_{ijk}$, and the conjecture is equivalent (if $K$ is algebraically closed) to the statement that some power of $\mathrm{hdet}$ lies in the ideal generated by $\{\mathrm{vdet}_\pi \mid \pi \in S_n^n\}$. But they have the same degree, so perhaps $\mathrm{hdet}$ itself is already a linear combination of the $\mathrm{vdet}_\pi$? There is a natural guess for the coefficients: the polynomial \[ P=\sum_{\pi \in S_n^n} \mathrm{sign}(\pi_1) \cdots \mathrm{sign}(\pi_n) \mathrm{vdet}_\pi \] is multilinear in the $n^2$ vectors $v_{ij}$ and changes sign when you swap two vectors in the same row. There is only a one-dimensional space of such polynomials, and it is spanned by $\mathrm{hdet}$. So $P=c \cdot \mathrm{hdet}$ for some constant $c \in K$ and we're done…oh, but wait a minute, let's evaluate $c$. Each multilinear monomial in the $n^2$ vectors $v_{ij}$ is a product over all $(i,j)$ of a coordinate $x_{ijk}$ of $v_{ij}$ and can therefore be represented by an $n \times n$-matrix with entries $k \in [n]$. So for instance the monomial $m:=\prod_{i=1}^n (x_{i11} x_{i22} x_{i33} \cdots x_{inn})$ corresponds to the matrix with all rows equal to $1,2,\ldots,n$. The monomial $m$ appears in $\mathrm{vdet}_\pi$ if and only if, for each $j \in [n]$, the numbers $\pi_1(j),\ldots,\pi_n(j)$ are all distinct, i.e., if the array $(\pi_i(j))_{ij}$ is a Latin square (LS). The coefficient of $m$ in $P$ is then \[ \mathrm{sign}(\pi_1) \cdots \mathrm{sign} (\pi_n) \cdot \mathrm{sign}(\sigma_1) \cdots \mathrm{sign}(\sigma_n), \] where $\sigma_j:i \mapsto \pi_i(j)$ are the permutations given by the columns of the LS. Call an LS even if the coefficient above is $1$ and odd if it is $-1$. Then we have \[ c=\mathrm{ELS}(n)-\mathrm{OLS}(n) \] where $\mathrm{ELS}(n)$ is the number of even LS of order $n$ and $\mathrm{OLS}$ is the number of odd LS. Now if the characteristic of $K$ does not divide $c$, then we are done. Hmm, but why should $c$ even be non-zero? Of course, I found out that the identity \[ P=(\mathrm{ELS}(n)-\mathrm{OLS}(n)) \mathrm{hdet} \] had been discovered long before me [Onn97]. For odd $n$ it is useless, since swapping the last two rows gives a sign-reversing involution on the set of LS, so that $c=0$. 
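To make the sign convention concrete, it is worth writing out the smallest even case $n=2$. There are exactly two Latin squares of order $2$, namely $\left(\begin{smallmatrix}1&2\\2&1\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}2&1\\1&2\end{smallmatrix}\right)$. In the first, the rows are $\pi_1=\mathrm{id}$, $\pi_2=(1\,2)$ and the columns are $\sigma_1=\mathrm{id}$, $\sigma_2=(1\,2)$, so the product of signs is $(+1)(-1)(+1)(-1)=+1$; in the second, both the rows and the columns are $(1\,2),\mathrm{id}$, again giving $+1$. Hence $\mathrm{ELS}(2)=2$, $\mathrm{OLS}(2)=0$ and $c=2$, which is non-zero unless $\mathrm{char}\,K=2$. This little computation also shows why the row-swapping involution fails for even $n$: swapping the two rows leaves the multiset of row permutations unchanged and flips the sign of each of the two columns, so the total sign is preserved.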
For even $n$, the non-zeroness of $\mathrm{ELS}(n)-\mathrm{OLS}(n)$ was a already famous conjecture due to Alon and Tarsi [AT92]. So I decided to read up on the history of these and related conjectures. What led Rota and his student Rosa Huang to the basis conjecture was their work in invariant theory [HR94], and in his talk [Rot98] Rota expressed the opinion that new insights in invariant theory would be required to settle the conjecture. Infinitely many cases of the non-zeroness of $\mathrm{ELS}(n)-\mathrm{OLS}(n)$ were settled in [Dri97] (for primes plus one) and [Gly10] (for primes minus one). See also [Ber12] for an elementary proof of both results using Glynn's determinantal identity. For odd dimensions, recent work by Aharoni and Kotlar [AK11] relates an odd-$n$ version of Alon and Tarsi's conjecture to a weaker version of the basis conjecture. Another line of research concerns the natural generalisation of the basis conjecture to arbitrary matroids. This turns out to be true for $ n=3$ [Cha95] and for n=4 [Che12] and for paving matroids [GH06]. For general matroids, the following weakening is proved in [GW07]: a $k \times n$-array of elements in a matroid whose rows are bases has $n$ disjoint independent transversals if $n > \binom{k+1}{2}$. A reduction from Rota's conjecture to a conjecture concerning only three bases was established in [Cho10]. A third direction, which I recently explored with Eindhoven Master's student Guus Bollen [BD13], concerns the following strengthening of the basis conjecture. Suppose that the rows of the array $(v_{ij})_{ij}$ are given to you sequentially, and that you have to fix $\pi_i$ immediately after receiving the $i$-th row, without knowledge of the remaining rows. Does such an online algorithm exist for producing $\pi_1,\ldots,\pi_n$ satisfying the requirement in the conjecture? We prove the following surprising dichotomy: for even $n$, the non-zeroness of $\mathrm{ELS}(n)-\mathrm{OLS}(n)$ in $K$ implies the existence of such an algorithm, while for odd $n$ any online algorithm can be forced to make an error. This leads to the following question: does the online version make sense for general matroids? Well, yes, but it is easy to come up with counterexamples of format $n \times n$ for $n \geq 3$, even among linear matroids: for the online algorithm above it is essential that the algorithm gets the vectors as input, not just their linear dependencies. But what about $k \times n$-arrays for smaller $k$? It is quite conceivable that the following problem has an easy solution. Determine the maximal value of $k$, as a function of $n$, such that the online version of the basis conjecture has a positive answer for $k \times n$-arrays in general matroids. To conclude, here is a rather wild speculation related to Rota's remark that new techniques in invariant theory are needed. Among the most powerful new tools in invariant theory are invariant Hilbert schemes [Ber08,Bri13]. They might play a role as follows. Suppose that $v=(v_{ij})_{ij}$ is a counterexample to the basis conjecture. Think of the $v_{ij}$ as points in projective space $\mathbb{P}V$, and assume that no two rows represent the same set of $n$ projective points. Then the orbit of $v$ under the semi-direct product $G$ of $S_n^n$ (for permutations within rows) with $S_n$ (for permutations of the rows) is a set of $(n!)^{n+1}$ points in $(\mathbb{P}V)^{(n^2)}$. 
This finite $G$-stable set of points in the ambient space $(\mathbb{P}V)^{(n^2)}$ with $G$-action corresponds to a single point in a suitable $G$-invariant Hilbert scheme. Applying invertible linear transformations of $V$ yields further points in the Hilbert scheme. The power of Hilbert schemes is that they also parameterise non-reduced sets, which arise, for instance, by allowing some or all of the points in the set to collide (think of a double root of a polynomial, which encodes not just a root but also a tangency). So one would like to phrase the property of being a counterexample as a closed condition on all points of the invariant Hilbert scheme, use a one-parameter group of basis transformations to degenerate the alleged counterexample to a point in the Hilbert scheme corresponding to a non-reduced counterexample (where all $(n!)^{n+1}$ points collide but the scheme encodes an interesting higher-order tangency structure), and then use the $G$-module structure of the coordinate ring of the limit to show that it is, in fact, not a counterexample… but all of this is, for the time being, just science fiction! [AK11] Ron Aharoni and Daniel Kotlar. A weak version of Rota's basis conjecture for odd dimensions. SIAM J. Discrete Math., 2011. To appear, preprint here. [AT92] Noga Alon and Michael Tarsi. Colorings and orientations of graphs. Combinatorica, 12(2):125-134, 1992. [BD13] Guus P. Bollen and Jan Draisma. An online version of Rota's basis conjecture. 2013. Preprint, available here. [Ber12] Jochem Berndsen. Three problems in algebraic combinatorics. Master's thesis, Eindhoven University of Technology, 2012. Available electronically here. [Ber08] José Bertin. The punctual Hilbert scheme, lecture notes of the 2008 Grenoble Summer school in mathematics, available here. [Bri13] Michel Brion. Invariant Hilbert schemes, pages 63-118. Number 24 in Advanced Lectures in Mathematics. International Press, 2013. [Cha95] Wendy Chan. An exchange property of matroid. Discrete Math., 146:299-302, 1995. [Che12] Michael Sze-Yu Cheung. Computational proof of Rota's basis conjecture for matroids of rank 4. See this page, 2012. [Cho10] Timothy Y. Chow. Reduction of Rota's basis conjecture to a problem on three bases. SIAM J. Discrete Math., 23(1):369-371, 2009. [Cra01] H. Crapo. Ten abandoned gold mines. In Algebraic combinatorics and computer science. A tribute to Gian-Carlo Rota, pages 3-22. Milano: Springer, 2001. [Dri97] Arthur A. Drisko. On the number of even and odd Latin squares of order $p+1$. Adv. Math., 128(1):20-35, 1997. [GH06] Jim Geelen and Peter J. Humphries. Rota's basis conjecture for paving matroids. SIAM J. Discrete Math., 20(4):1042-1045, 2006. [Gly10] David G. Glynn. The conjectures of Alon-Tarsi and Rota in dimension prime minus one. SIAM J. Discrete Math., 24(2):394-399, 2010. [GW07] Jim Geelen and Kerri Webb. On Rota's basis conjecture. SIAM J. Discrete Math., 21(3):802-804, 2007. [HR94] Rosa Huang and Gian-Carlo Rota. On the relations of various conjectures on latin squares and straightening coefficients. Discrete Math., 128:225-236, 1994. [Onn97] Shmuel Onn. A colorful determinantal identity, a conjecture of Rota, and Latin squares. Am. Math. Mon., 104(2):156-159, 1997. [Rot98] Gian-Carlo Rota. Ten mathematics problems I will never solve. Mitt. Dtsch. Math.-Ver., 2:45-52, 1998.
2 thoughts on "Rota's Basis Conjecture"

Jim Geelen on February 24, 2014 at 4:14 pm said: Timothy Chow's conjecture [Cho10], to which you refer, was refuted in: Harvey, N.J.A., Kiraly, T., Lau, L.C., On disjoint common bases in two matroids, SIAM J. Discrete Math. 25 (2011) 1792-1803.

Jan Draisma on February 25, 2014 at 4:33 am said: Thanks for pointing this out, Jim!
Strategy (game theory)

In game theory, a player's strategy is any of the options which they choose in a setting where the outcome depends not only on their own actions but on the actions of others.[1] The discipline mainly concerns how the action of a player in a game affects the behavior or actions of other players. Some examples of "games" include chess, bridge, poker, Monopoly, Diplomacy and Battleship.[2] A player's strategy determines the action the player will take at any stage of the game. In studying game theory, economists analyze decisions through a rational lens rather than through the psychological or sociological perspectives used in other disciplines to study the relationships between the decisions of two or more parties.

The strategy concept is sometimes (wrongly) confused with that of a move. A move is an action taken by a player at some point during the play of a game (e.g., in chess, moving White's bishop from a2 to b3). A strategy, on the other hand, is a complete algorithm for playing the game, telling a player what to do for every possible situation throughout the game. It is helpful to think of a "strategy" as a list of directions and of a "move" as a single turn on that list.

A strategy is evaluated by the payoff or outcome of each action. The goal of each agent is to consider its payoff given the competitor's action. For example, competitor A can assume that competitor B enters the market and compare the payoffs it receives from entering and not entering; it can then assume that competitor B does not enter and again compare the payoffs from entering and not entering. This technique can identify dominant strategies, where a player has an action that maximizes the payoff no matter what the competitor does. It also helps players to identify Nash equilibria, which are discussed in more detail below.

A strategy profile (sometimes called a strategy combination) is a set of strategies for all players which fully specifies all actions in a game. A strategy profile must include one and only one strategy for every player.

Strategy set

A player's strategy set defines what strategies are available for them to play; a strategy profile selects exactly one strategy from each player's set. A player has a finite strategy set if they have a number of discrete strategies available to them. For instance, a game of rock paper scissors comprises a single move by each player, made without knowledge of the other's move (it is not a response), so each player has the finite strategy set {rock, paper, scissors}. A strategy set is infinite otherwise. For instance, the cake cutting game has a bounded continuum of strategies in the strategy set {Cut anywhere between zero percent and 100 percent of the cake}.

In a dynamic game, a game played over a series of stages in time, the strategy set consists of the possible rules a player could give to a robot or agent on how to play the game. For instance, in the ultimatum game, the strategy set for the second player would consist of every possible rule for which offers to accept and which to reject.

In a Bayesian game, in which players have incomplete information about one another, the strategy set is similar to that in a dynamic game: it consists of rules for what action to take for any possible private information.
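To make the notions of a finite strategy set and a strategy profile concrete, here is a minimal Python sketch that enumerates every strategy profile of rock paper scissors; the win/lose/draw payoff convention is an assumption chosen for the example.

```python
from itertools import product

# Finite strategy sets for the two players.
strategy_sets = {
    "P1": ["rock", "paper", "scissors"],
    "P2": ["rock", "paper", "scissors"],
}

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(s1, s2):
    """Return (payoff to P1, payoff to P2): +1 for a win, -1 for a loss, 0 for a draw."""
    if s1 == s2:
        return (0, 0)
    return (1, -1) if BEATS[s1] == s2 else (-1, 1)

# A strategy profile assigns exactly one strategy to every player.
for profile in product(strategy_sets["P1"], strategy_sets["P2"]):
    print(profile, "->", payoff(*profile))
```

Enumerating the profiles this way makes it clear that a profile is simply one pick from each player's strategy set.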
Choosing a strategy set

In applied game theory, the definition of the strategy sets is an important part of the art of making a game simultaneously solvable and meaningful. The game theorist can use knowledge of the overall problem, that is, the friction between two or more players, to limit the strategy spaces and ease the solution. For instance, strictly speaking, in the Ultimatum game a player can have strategies such as: reject offers of ($1, $3, $5, ..., $19) and accept offers of ($0, $2, $4, ..., $20). Including all such strategies makes for a very large strategy space and a somewhat difficult problem. A game theorist might instead believe they can limit the strategy set to: {Reject any offer ≤ x, accept any offer > x; for x in ($0, $1, $2, ..., $20)}.

Pure and mixed strategies

A pure strategy provides a complete definition of how a player will play a game. It can be thought of as a single concrete plan, contingent on the observations the player makes during the course of play. In particular, it determines the move a player will make for any situation they could face. A player's strategy set is the set of pure strategies available to that player.

A mixed strategy is an assignment of a probability to each pure strategy. Mixed strategies are typically used when no single pure strategy stands out as the rational choice; they allow a player to select a pure strategy at random. (See the following section for an illustration.) Since probabilities are continuous, there are infinitely many mixed strategies available to a player. Because probabilities are assigned to strategies, the payoff of a scenario under a mixed strategy must be referred to as an "expected payoff". Of course, one can regard a pure strategy as a degenerate case of a mixed strategy, in which that particular pure strategy is selected with probability 1 and every other strategy with probability 0. A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. (Totally mixed strategies are important for equilibrium refinements such as trembling hand perfect equilibrium.)

Mixed strategy

Illustration

In a soccer penalty kick, the kicker must choose whether to kick to the right or left side of the goal, and simultaneously the goalie must decide which way to block it. Also, the kicker has a direction they are best at shooting, which is left if they are right-footed. The matrix for the soccer game illustrates this situation, a simplified form of the game studied by Chiappori, Levitt, and Groseclose (2002).[3] It assumes that if the goalie guesses correctly, the kick is blocked, which is set to the base payoff of 0 for both players. If the goalie guesses wrong, the kick is more likely to go in if it is to the left (payoffs of +2 for the kicker and -2 for the goalie) than if it is to the right (the lower payoff of +1 to the kicker and -1 to the goalie).

Payoffs for the soccer game (Kicker, Goalie):

                         Goalie
                    Lean Left    Lean Right
Kicker  Kick Left     0,  0       +2, -2
        Kick Right   +1, -1        0,  0

This game has no pure-strategy equilibrium, because one player or the other would deviate from any profile of strategies; for example, (Kick Left, Lean Left) is not an equilibrium because the Kicker would deviate to Kick Right and increase their payoff from 0 to 1.
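Both claims in this illustration, that no pure-strategy equilibrium exists and (as derived in the next paragraphs) that the mixed equilibrium is Prob(Kick Left) = 1/3 and Prob(Lean Left) = 2/3, can be checked mechanically; the following minimal Python sketch uses the payoff matrix above.

```python
from fractions import Fraction

# Payoffs (kicker, goalie) from the matrix above.
payoffs = {
    ("Kick Left",  "Lean Left"):  (0, 0),
    ("Kick Left",  "Lean Right"): (2, -2),
    ("Kick Right", "Lean Left"):  (1, -1),
    ("Kick Right", "Lean Right"): (0, 0),
}
kicker_moves = ["Kick Left", "Kick Right"]
goalie_moves = ["Lean Left", "Lean Right"]

def is_pure_nash(kick, lean):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    uk, ug = payoffs[(kick, lean)]
    return (all(payoffs[(k, lean)][0] <= uk for k in kicker_moves)
            and all(payoffs[(kick, g)][1] <= ug for g in goalie_moves))

print([p for p in payoffs if is_pure_nash(*p)])   # [] : no pure-strategy equilibrium

# Mixed equilibrium: each player randomizes so that the other is indifferent.
g = Fraction(2, 3)   # Prob(goalie leans left)
k = Fraction(1, 3)   # Prob(kicker kicks left)
assert g * 0 + (1 - g) * 2 == g * 1 + (1 - g) * 0        # kicker indifferent
assert k * 0 + (1 - k) * (-1) == k * (-2) + (1 - k) * 0  # goalie indifferent
print(f"Mixed equilibrium: Prob(Kick Left) = {k}, Prob(Lean Left) = {g}")
```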
The kicker's mixed-strategy equilibrium is found from the fact that they will deviate from randomizing unless their payoffs from Kick Left and Kick Right are exactly equal. If the goalie leans left with probability g, the kicker's expected payoff from Kick Left is g(0) + (1-g)(2), and from Kick Right is g(1) + (1-g)(0). Equating these yields g = 2/3. Similarly, the goalie is willing to randomize only if the kicker chooses mixed strategy probability k such that Lean Left's payoff of k(0) + (1-k)(-1) equals Lean Right's payoff of k(-2) + (1-k)(0), so k = 1/3. Thus, the mixed-strategy equilibrium is (Prob(Kick Left) = 1/3, Prob(Lean Left) = 2/3). Note that in equilibrium the kicker kicks to their best side only 1/3 of the time. That is because the goalie is guarding that side more. Also note that in equilibrium the kicker is indifferent which way they kick, but for it to be an equilibrium they must choose exactly 1/3 probability. Chiappori, Levitt, and Groseclose try to measure how important it is for the kicker to kick to their favored side, add center kicks, etc., and look at how professional players actually behave. They find that the players do randomize, and that kickers kick to their favored side 45% of the time and goalies lean to that side 57% of the time. Their article is well known as an example of how people in real life use mixed strategies.

Significance

In his famous paper, John Forbes Nash proved that there is an equilibrium for every finite game. One can divide Nash equilibria into two types. Pure strategy Nash equilibria are Nash equilibria where all players are playing pure strategies. Mixed strategy Nash equilibria are equilibria where at least one player is playing a mixed strategy. While Nash proved that every finite game has a Nash equilibrium, not all have pure strategy Nash equilibria. For an example of a game that does not have a Nash equilibrium in pure strategies, see Matching pennies. However, many games do have pure strategy Nash equilibria (e.g. the Coordination game, the Prisoner's dilemma, the Stag hunt). Further, games can have both pure strategy and mixed strategy equilibria. An easy example is the pure coordination game, where in addition to the pure strategies (A,A) and (B,B) a mixed equilibrium exists in which both players play either strategy with probability 1/2.

Interpretations of mixed strategies

During the 1980s, the concept of mixed strategies came under heavy fire for being "intuitively problematic", since they are weak Nash equilibria, and a player is indifferent about whether to follow their equilibrium strategy probability or deviate to some other probability.[4][5] Game theorist Ariel Rubinstein describes alternative ways of understanding the concept. The first, due to Harsanyi (1973),[6] is called purification, and supposes that the mixed strategies interpretation merely reflects our lack of knowledge of the players' information and decision-making process. Apparently random choices are then seen as consequences of non-specified, payoff-irrelevant exogenous factors.[5] A second interpretation imagines the game players standing for a large population of agents. Each of the agents chooses a pure strategy, and the payoff depends on the fraction of agents choosing each strategy. The mixed strategy hence represents the distribution of pure strategies chosen by each population. However, this does not provide any justification for the case when players are individual agents.
Later, Aumann and Brandenburger (1995)[7] re-interpreted Nash equilibrium as an equilibrium in beliefs, rather than actions. For instance, in rock paper scissors an equilibrium in beliefs would have each player believing the other was equally likely to play each strategy. This interpretation weakens the descriptive power of Nash equilibrium, however, since it is possible in such an equilibrium for each player to actually play a pure strategy of Rock in each play of the game, even though over time the probabilities are those of the mixed strategy.

Behavior strategy

While a mixed strategy assigns a probability distribution over pure strategies, a behavior strategy assigns at each information set a probability distribution over the set of possible actions. While the two concepts are very closely related in the context of normal form games, they have very different implications for extensive form games. Roughly, a mixed strategy randomly chooses a deterministic path through the game tree, while a behavior strategy can be seen as a stochastic path. The relationship between mixed and behavior strategies is the subject of Kuhn's theorem. The result establishes that in any finite extensive-form game with perfect recall, for any player and any mixed strategy, there exists a behavior strategy that, against all profiles of strategies (of other players), induces the same distribution over terminal nodes as the mixed strategy does. The converse is also true. A famous example of why perfect recall is required for the equivalence is given by Piccione and Rubinstein (1997) with their Absent-Minded Driver game.

Outcome equivalence

Outcome equivalence relates the mixed and behavior strategies of Player i to the pure strategies of Player i's opponents: a mixed strategy U_i and a behavior strategy β_i of Player i are outcome equivalent if, against every pure strategy profile s_-i of the opponents, they induce the same distribution over outcomes. This can be written as Q^(U_i, s_-i)(z) = Q^(β_i, s_-i)(z) for every terminal node z, where Q denotes the induced outcome distribution, U_i is Player i's mixed strategy, β_i is Player i's behavior strategy, and s_-i is the opponents' strategy profile.[8]

Strategy with perfect recall

Perfect recall is defined as the ability of every player in the game to remember and recall all of their past actions within the game. Perfect recall is required for the equivalence: in finite games with imperfect recall there exist mixed strategies of Player i for which no equivalent behavior strategy exists. This is fully described in the Absent-Minded Driver game formulated by Piccione and Rubinstein. In short, the game is based on the decision-making of a driver with imperfect recall who needs to take the second exit off the highway to reach home but does not remember which intersection they are at when they reach it. Without perfect recall, players make a choice at each decision node without knowledge of the decisions that preceded it. Therefore, a player's mixed strategy can produce outcomes that their behavior strategy cannot, and vice versa. This is demonstrated in the Absent-Minded Driver game.
With perfect recall and information, the driver has a single optimal pure strategy, [continue, exit], because they know which intersection (decision node) they are at when they arrive at it. With imperfect recall, on the other hand, only a planning-stage behavior strategy is available: the driver continues at each intersection with some probability p, and the expected payoff is maximized at p = 2/3. This simple one-player game demonstrates the importance of perfect recall for outcome equivalence, and its impact on normal and extensive form games.[9]

See also

• Nash equilibrium
• Haven (graph theory)
• Evolutionarily stable strategy

References

1. Ben Polak, Game Theory: Lecture 1 Transcript, ECON 159, 5 September 2007, Open Yale Courses.
2. Aumann, R. (2017). Game Theory. London: Palgrave Macmillan. ISBN 978-1-349-95121-5.
3. Chiappori, P.-A.; Levitt, S.; Groseclose, T. (2002). "Testing Mixed-Strategy Equilibria when Players Are Heterogeneous: The Case of Penalty Kicks in Soccer". American Economic Review. 92 (4): 1138. CiteSeerX 10.1.1.178.1646. doi:10.1257/00028280260344678.
4. Aumann, R. (1985). "What is Game Theory Trying to Accomplish?". In Arrow, K.; Honkapohja, S. (eds.). Frontiers of Economics. Oxford: Basil Blackwell. pp. 909–924.
5. Rubinstein, A. (1991). "Comments on the Interpretation of Game Theory". Econometrica. 59 (4): 909–924. doi:10.2307/2938166. JSTOR 2938166.
6. Harsanyi, John (1973). "Games with randomly disturbed payoffs: a new rationale for mixed-strategy equilibrium points". Int. J. Game Theory. 2: 1–23. doi:10.1007/BF01737554. S2CID 154484458.
7. Aumann, Robert; Brandenburger, Adam (1995). "Epistemic Conditions for Nash Equilibrium". Econometrica. 63 (5): 1161–1180. CiteSeerX 10.1.1.122.5816. doi:10.2307/2171725. JSTOR 2171725.
8. Shimoji, Makoto (2012). "Outcome-equivalence of self-confirming equilibrium and Nash equilibrium". Games and Economic Behavior. 75 (1): 441–447. doi:10.1016/j.geb.2011.09.010. ISSN 0899-8256.
9. Kak, Subhash (2017). "The Absent-Minded Driver Problem Redux". arXiv:1702.05778 [cs.AI].
AgNO3 charge

Silver nitrate (AgNO3) is an ionic compound built from the silver cation, Ag+, and the nitrate ion, NO3-, in which the central nitrogen atom is covalently bonded to three oxygen atoms and the ion as a whole carries a net charge of -1. Although silver is a transition metal, in its common ionic compounds it takes only a +1 charge, so the +1 of Ag+ exactly balances the -1 of NO3- and the formula unit is electrically neutral. (Almost all the common positive ions are metallic; the most notable exception is NH4+, and the charges on metals in the representative groups can be read off from their position in the periodic table.) The formal charge of an atom is found by subtracting the number of lone (non-bonding) electrons and half the number of bonded electrons from its number of valence electrons; in the nitrate ion this gives nitrogen a formal charge of +1 and the oxygen atoms a combined -2, for a net charge of -1, and the ion is stabilized by resonance.

AgNO3 is a white crystal at room temperature with a melting point of 212 °C, a density of 4.35 g/cm3 and a molar mass of 169.87 g/mol, and it is highly soluble in water. When a salt such as AgNO3 dissolves, its dissociated ions can move freely in solution, allowing charge to flow, so the solution conducts electricity; conductivity is most commonly reported in microsiemens per centimetre (µS/cm) or millisiemens per centimetre (mS/cm), and increasing the number of ions in solution increases the amount of charge that can be carried between the electrodes. The oxidation state of an atom is the charge this atom would have after an ionic approximation of its heteronuclear bonds; for a monatomic ion the oxidation number equals the charge of the ion, so Fe2+ is an iron(II) ion and Pb4+ is a lead(IV) ion.

Silver nitrate takes part in familiar displacement and precipitation reactions. If a small amount of silver nitrate solution is added to copper metal, silver crystals grow on the copper:

2 AgNO3 + Cu → Cu(NO3)2 + 2 Ag

Here copper is oxidized and the silver ions are reduced; another example of reduction is the formation of solid copper from copper ions in solution, as when copper metal collects on solid magnesium, the colourless Mg2+ ions pass into solution, and the blue colour of the Cu2+ ions fades. When AgNO3 is added to a potassium iodide solution, insoluble silver iodide precipitates:

KI + AgNO3 → AgI + KNO3

Ions of the same charge usually repel each other, but ions of opposite charge may combine to form a stable molecule or solid. Applied to the skin and mucous membranes, silver nitrate is used either in stick form, as lunar caustic, or in solution; it is caustic and irritating to the skin, eyes and mucous membranes, producing a persistent black stain and possible tissue destruction.
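The formal-charge bookkeeping described above can be checked in a few lines; the following minimal Python sketch applies the rule stated earlier (formal charge = valence electrons minus lone electrons minus number of bonds) to one assumed resonance structure of the nitrate ion, with one N=O double bond and two N-O single bonds.

```python
def formal_charge(valence, lone_electrons, bonds):
    # formal charge = valence electrons - non-bonding electrons - half the bonding electrons
    return valence - lone_electrons - bonds

# One resonance structure of NO3-: N forms four bonds (one double, two single) with no lone pair;
# the double-bonded O keeps two lone pairs, each single-bonded O keeps three lone pairs.
atoms = {
    "N":            formal_charge(5, 0, 4),   # +1
    "O (double)":   formal_charge(6, 4, 2),   #  0
    "O (single) 1": formal_charge(6, 6, 1),   # -1
    "O (single) 2": formal_charge(6, 6, 1),   # -1
}
print(atoms, "-> net charge of NO3-:", sum(atoms.values()))   # net charge: -1
```

The net charge of -1 recovered here is exactly the charge that the +1 of Ag+ neutralizes in AgNO3.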
Precipitation reactions can be represented using several types of chemical equations: complete-formula ("molecular") equations, complete ionic equations, and net ionic equations, each providing a different perspective on the chemicals involved. In a complete ionic equation every dissolved strong electrolyte is written as its separate ions, with the formula and charge of each ion, coefficients giving the number of each ion, and (aq) after each ion to show that it is in aqueous solution. The net ionic equation keeps only the species that actually participate; the spectator ions are omitted, charge is conserved, and anything written with (s), (l) or (g) is left unchanged. Mixing aqueous silver nitrate and sodium chloride, for example, precipitates silver chloride:

AgNO3(aq) + NaCl(aq) → AgCl(s) + NaNO3(aq)
net ionic: Ag+(aq) + Cl-(aq) → AgCl(s)

AgNO3, NaCl and NaNO3 are all soluble and exist as ions in aqueous solution, whereas silver chloride (AgCl) is a white crystalline solid well known for its low solubility in water, behaviour reminiscent of the chlorides of Tl+ and Pb2+; it is also insoluble in alcohol and dilute acids but dissolves in ammonia and in solutions of alkali cyanides and thiosulfates. Sodium carbonate and silver nitrate likewise give a silver carbonate precipitate,

Na2CO3(aq) + 2 AgNO3(aq) → Ag2CO3(s) + 2 NaNO3(aq),

sodium chromate gives silver chromate (2 AgNO3 + Na2CrO4 → Ag2CrO4 + 2 NaNO3), and sodium phosphate gives Ag3PO4, since three Ag+ ions are needed to balance the 3- charge of the phosphate ion (not "AgPO4" or "Ag(PO4)2"). The tip-off for this kind of problem is being asked to predict whether a precipitation reaction takes place when two aqueous solutions of ionic compounds are mixed and to write the complete and net ionic equations if it does; a solubility table decides whether any combination of the exchanged ions is insoluble.

Because most silver compounds are insoluble (and many of them are white), silver nitrate is the standard reagent for detecting halide ions. A halogenoalkane, for instance, is first warmed so that a substitution reaction turns the bound halogen into a halide ion, and the halide is then tested with silver nitrate solution, which precipitates the silver halide. Quantitatively, the mass fraction of iodide in a sample can be determined by a Volhard titration: an excess of standard AgNO3 is added, the precipitate is allowed to form, and the remaining silver is back-titrated with potassium thiocyanate solution. The qualitative analysis of cations more generally draws on acid-base equilibria, complex equilibria and solubility.

Redox equations are often so complex that fiddling with coefficients does not balance them, so two systematic methods are used. In the oxidation number change method the underlying principle is that the gain in oxidation number by one reactant must equal the loss by the other. In the ion-electron (half-reaction) method the reaction is split into oxidation and reduction half-reactions, electrons are added to balance charge, and the half-reactions are multiplied by a common factor so that the electrons cancel; the 2+ charge on a zinc ion, for instance, is balanced by two electrons, 2e-, giving zero net charge on both sides of its half-reaction. In the displacement of silver by copper described above, Cu is oxidized and Ag+ is reduced. In the reaction of sodium with chlorine, Na goes from an oxidation number of 0 to +1 and Cl from 0 to -1, so Na is oxidized by the oxidizing agent Cl2 and Cl2 is reduced by the reducing agent Na; any free, uncombined element has an oxidation state of zero.
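A typical calculation built on this precipitation is worked out in the short Python sketch below; the volumes, concentrations and rounded molar masses are made-up illustrative values.

```python
# AgNO3(aq) + NaCl(aq) -> AgCl(s) + NaNO3(aq)   (1:1:1 mole ratio)
# Illustrative data: mix 45.0 mL of 0.150 M AgNO3 with 85.0 mL of 0.135 M NaCl.
M_AgCl = 107.87 + 35.45              # approximate molar mass of AgCl in g/mol

mol_AgNO3 = 0.150 * 0.0450           # molarity (mol/L) x volume (L)
mol_NaCl  = 0.135 * 0.0850

mol_AgCl  = min(mol_AgNO3, mol_NaCl) # the limiting reactant controls the yield
mass_AgCl = mol_AgCl * M_AgCl

print(f"AgNO3: {mol_AgNO3:.5f} mol, NaCl: {mol_NaCl:.5f} mol")
print(f"AgCl precipitate: {mol_AgCl:.5f} mol = {mass_AgCl:.3f} g")
```

Because silver nitrate is the limiting reactant in this example, the moles of AgCl formed equal the moles of AgNO3 supplied, which is exactly what the 1:1 net ionic equation says.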
Silver nitrate has a long list of practical uses that trace back to the reactivity of the Ag+ ion. It is used to cauterize superficial blood vessels in the nose to help prevent nosebleeds, dentists sometimes use silver nitrate-infused swabs to heal oral ulcers, and some podiatrists use it to kill cells located in the nail bed. Silver is widely distributed in the earth's crust and is found in soil, fresh and sea water, and the air; the silver-catalyzed formation of disulfide bonds can lead to changes in protein structure and the inactivation of key enzymes, such as those needed for cellular respiration (Davies and Etris, 1997), which is one basis of its antimicrobial action. With ammonia, silver nitrate forms the diamminesilver(I) complex Ag(NH3)2+, the main component of Tollens' reagent; the Tollens (silver-mirror) test distinguishes aldehydes, which are readily oxidized, from ketones, which are not. Because of its high solubility in water and other polar solvents, its much lower light sensitivity compared with the silver halides, and its lower cost, silver nitrate is used as a universal precursor for the synthesis of many other silver compounds, from positively charged silver nanoparticles made by photoreduction of AgNO3 with branched polyethyleneimine (BPEI) in HEPES solutions to silver-decorated graphene oxide reduced and stabilized by walnut green husk extract.

Water is polar (the oxygen end is the negative pole and the hydrogen end the positive), so it exerts electrostatic forces on the Ag+ and NO3- ions of the salt; this is why AgNO3 dissolves so readily and why the compound is classified as polar. When writing formulas, the charges simply have to cancel: Na+ and NO3- criss-cross one-to-one to give NaNO3, calcium forms a 2+ ion and therefore needs two nitrate or two chloride anions (Ca(NO3)2, CaCl2), and Mg2+ pairs with the doubly charged sulfate anion SO4^2- in MgSO4. For metals with more than one possible charge, the older naming convention ends the ion of lesser charge in -ous and the ion of greater charge in -ic, while the systematic (Stock) method indicates the charge with a Roman numeral in parentheses immediately after the ion's name.

The charge carried by dissolved ions also explains the surface charge of silver halide colloids. When silver nitrate is added to an excess of potassium iodide solution, the precipitated silver iodide adsorbs iodide ions from the dispersion medium (AgI + I- → (AgI)I-), so a negatively charged colloidal solution results; the older view that the charge arises from frictional electrification between the dispersed phase and the dispersion medium is not satisfactory. Likewise, as an AgCl precipitate forms in the presence of excess Cl-, the excess chloride ions form a layer of negative charge on the precipitate surface, and during such a titration the dichlorofluorescein indicator molecules exist as negatively charged anions in solution. The presence of a colloidal suspension can be detected by the reflection of a laser beam from the particles; in the reaction of sodium thiosulfate with hydrochloric acid, the colloidal sulfur produced clouds the solution and, as its concentration increases, scatters the shorter wavelengths while the longer ones pass through, giving the transmitted light a reddish tinge.

In electrochemistry, electrolysis of a silver nitrate solution produces oxygen at the anode and silver at the cathode. Faraday's first law states that the amount of substance deposited is proportional to the quantity of electric charge passed through the electrolyte (1 C = 1 A·s). To release one mole of a singly charged ion at an electrode, exactly one Faraday of charge (96,500 C) must pass; an ion with a double charge needs two moles of electrons (2 Faradays) per mole of ions. In a galvanic cell, electrons move through the external wire to the cathode, where a reduction reaction occurs and consumes them; without a path for ions to flow between the two solutions, the charge buildup would oppose the current from anode to cathode and effectively stop the electron flow. Anhydrous copper nitrate, for comparison, forms deep blue-green crystals and sublimes in a vacuum at 150-200 °C.
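The electrolysis arithmetic above is easy to script. The following Python sketch computes the mass of silver plated at the cathode from Faraday's first law; the 2.0 A and 30 minute figures are made-up example inputs.

```python
FARADAY = 96500.0   # coulombs per mole of electrons (the approximate value used above)
M_AG    = 107.87    # molar mass of silver, g/mol
Z_AG    = 1         # Ag+ is singly charged, so one electron deposits one silver atom

def silver_deposited(current_amperes, time_seconds):
    """Mass of Ag deposited at the cathode, from Faraday's first law."""
    charge = current_amperes * time_seconds     # Q = I * t, and 1 C = 1 A*s
    moles_of_electrons = charge / FARADAY
    moles_of_silver = moles_of_electrons / Z_AG
    return moles_of_silver * M_AG

# Example with made-up numbers: 2.0 A passed for 30 minutes.
print(f"{silver_deposited(2.0, 30 * 60):.3f} g of silver")   # about 4.02 g
```

For a doubly charged ion the same function would use Z = 2, reflecting the two Faradays per mole mentioned above.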
Copper nitrate also occurs as five different hydrates, the most common ones being the trihydrate and the hexahydrate.

Returning to silver nitrate: a precipitation reaction is a reaction in which soluble ions in separate solutions are mixed together to form an insoluble compound that settles out of solution as a solid, driven by the attraction between oppositely charged ions from the two dissolved salts. The dissolution of silver nitrate itself is simply the separation of its ions,

AgNO3(s) → Ag+(aq) + NO3-(aq),

which is why two dissolved salts can exchange partners at all. Formulas carry subscripts and parentheses because compounds must have a neutral electric charge, so the number of each ion changes with the charges involved; the ionic charge on the orthophosphate ion, for example, whether H2PO4- or HPO4^2-, depends on the pH conditions prevailing in solution. In the language of chemistry, an ion is an atom or group of atoms that carries a charge, positive or negative, as the result of losing or gaining electrons; the oxidation number of a monatomic ion is equal to the charge of the ion, and "oxidation number" is synonymous with "oxidation state". The formula weight of a compound is computed by multiplying the atomic weight of each element in the formula by the number of atoms of that element present and adding the products together.

Silver nitrate is a good example of an ionic compound, a chemical formed from the mutual attraction of oppositely charged atomic groups: silver cations paired with nitrate anions, an anion being a negatively charged particle. Silver itself is readily absorbed into the human body with food and drink and through inhalation, but the levels of silver commonly present in the bloodstream are low. Finally, on redox bookkeeping: in the displacement reaction Cu(s) + 2 AgNO3(aq) → Cu(NO3)2(aq) + 2 Ag(s), the copper half-reaction says that solid copper, with no charge, is oxidized, losing electrons to form a copper ion with a +2 charge, while the Ag+ ions are reduced to silver metal; assigning oxidation numbers in this way is what makes the systematic balancing of redox equations possible. Solubility itself is often classified roughly as soluble (more than 1 g per 100 g of water), of low solubility (0.01 g to 1 g per 100 g of water), or insoluble (less than 0.01 g per 100 g of water); on this scale silver chloride is practically insoluble in water, while nitrates are always soluble.
Builds On:Work with atoms and their structures begins with the sixth grade SOL and increases in complexity throughout the study of science. AGNO3 (DS) The primary charge should be constructed separately, e. A student The student then mixes the solution with excess AgNO3 solution, causing AgCl to precipitate. The possibility of studying the gaming table. c) particle/mass charge. 284 g O>1 g N; 2 General description Silver carbonate,prepared by precipitation or ion-exchange method can behave as a visible light sensitive semiconductor photocatalyst. perchloric acid, HCIO, and potassium hydroxide, KOH net ionic equation: C. Some, such as tert-butyl, are localized. mu g/L) and in key tissues like liver and kidney have not been associated with any disease or disability. Ag is oxidized and NO3 is reduced. How to Write a Chemical Equation. The other reactant will be a compound. When a 1. Also, in the above reaction, H + is considered an oxidizer, because it is taking away electrons from the magnesium metal. Iron can be found as either Fe 3+ or Fe 2+, with a difference in redox state as well as ionic charge. k3po4(aq) + mgcl2(aq) - brainsanswers. Looking at oxygen first, each oxygen atom has 2 electrons in its inner shell, and 6 in its second shell. By definition, one coulomb of charge is transferred when a 1-amp current flows for 1 second. Two other flasks are labeled X and Y. Silver nitrate solution can be used to find out which halogen is present in a suspected halogenoalkane. A The highest positive potential is found by using the Zr oxidation half-reaction. Identify Galvanic cells, also known as voltaic cells, are electrochemical cells in which spontaneous oxidation-reduction reactions produce electrical energy. It's chemical formula is Ag(NO3)2 because Silver originally had a charge of 2+ while Nitrate only had a charge of 1- to obtain a neutral charge you can multiply the 1- from nitrate by 2. At this point, you should have already learned how to write these formulas: Na + and SO 4 2-make Na 2 SO 4 and not NaSO 4-and not NaSO 4. Observe the growth of the metal crystals under the stereoscope. Copper(II) nitrate, Cu(NO 3) 2, is an inorganic compound that forms a blue crystalline solid. The primary purpose of this layer is to functionalize the PPy surface with the phosphonate groups. monatomic ion equals the charge of the ion. Therefore the formula of calcium chloride is CaCl2. The dissolving of AgNO3(s) in pure water is represented by the equation weight of the charge was determined periodically. A good way to think about a chemical reaction is the process of baking cookies. charge. a hole lets 4. 05322 M KSCN, requiring 35. Step 4. Reach out to suppliers directly and ask for the lowest price, discount, and small shipping fees. Remember you still have multiple procedure codes associated with Three Faradays of charge is supplied to bivalent metal what is the number of electrons involved? [BPKIHS 1999] a. Temp. Synthesis of positively charged silver nanoparticles via photoreduction of AgNO3 in branched polyethyleneimine/HEPES solutions. In single replacement, one reactant is always an element. 1 g mol C6H12 O6 = 2. Singh. 5. o Identify the critical assumptions and logic used in a line of Silver nitrate, caustic chemical compound, important as an antiseptic, in the industrial preparation of other silver salts, and as a reagent in analytical chemistry. 
Substance Classification (ICAO) Drill & Practice : As "Citizens of Science," when given a substance—ICAO on it—giving it specific substance identifiers as described in the previous video (). 1 mol L−1) If possible, dry 5 g of AgNO3 for 2 hours at 100°C and allow to cool. AgCl is insoluble so it will precipitate in the solution as a solid. Silver nitrate is an extremely versatile analytical laboratory reagent for both the qualitative and quantitative determination of halides, solution to carry the charge from one electrode to another. The silver nitrate is a silver salt of a chemical formula of AgNO3. Finally, an ammonium ion would be written [] + because each molecule contains 1 nitrogen and 4 hydrogen atoms and has a charge of 1+. In writing the equations, it is often convenient to separate the oxidation-reduction reactions into half-reactions to facilitate balancing the overall equation and to emphasize the actual chemical transformations. What is the type of charge on AgI colloidal sol formed when AgNO3 solution is added to KI solution - Chemistry - Haloalkanes and Haloarenes Precipitations Reactions . You see this clearly in nitric acid HNO3: H+1 and NO3 -1. 873 Da; Monoisotopic mass168. Solubility Rules for Ionic Compounds Details of the supplier of the safety data sheet Emergency Telephone Number CHEMTRECÒ, Inside the USA: 800-424-9300 CHEMTRECÒ, Outside the USA: 001-703-527-3887 2. 50) Balance each equation. Way to go Sak****, you've found an over nine year old thread and added to the conversation. charge distribution for CO adsorption on all Ag clusters given in Table 1 indicates Ag2O and AgNO3 represent progressively larger positive charge on silver: Precipitation reactions. The key to being able to write net ionic equations is the ability to recognize monoatomic and polyatomic ions, and the solubility rules. The corresponding flat band overpotentials are 11, 28 and 46 mV. Silver Nitrate, Crystal, Reagent, ACS is an inorganic compound with the formula AgNO3. I've think I know how to get started on it. Find an answer to your question Write the net ionic equation for the reaction between agno3(aq) and na2so4(aq). Stop stirring as soon as all of the AgNO3 is added. STUDY. Correct answer to the question: Complete and balance the following equation. com. Enter the net ionic equation, including phases, for the reaction of AgNO3(aq) and KCl(aq). A molecular equation is an equation in which the formulas of the Best Answer: 2AgNO3 + FeSO4 --> Ag2SO4 + Fe(NO3)2 Basically, you have to switch the cations and anions around for the 2 compounds, making sure to keep their charges are well eg: NO3 has a charge of 1- Ag has a charge of 1+ Lead(II) nitrate is an inorganic compound with the chemical formula Pb(NO 3) 2. Illustrate variety of substances of which an element can be part: metal --> blue solution --> blue solid --> black solid --> blue solution (again) --> metal (again). 76) What is the ionic charge for the cobalt ion in C03N2? A) zero E) none of the above 77) What is the chemical formula for the binary compound composed of C03+ and S2- ions? A) c03S2 C) cos E) none of the above What are the products of the reaction AgNO3 + NaBH4. In all of these reactions, one is the Because spectator ions don't actually participate in the chemistry of a reaction, you don't always need to include them in a chemical equation. H2CO3 d. 41 mA/cm2 for 1, 0. It is obvious from Fig. A. 
1 Expert Answer(s) - 29878 - When excess of AgNO3 is treated with KI solution, AgI form a) +ve charged sols b) -ve charged sols . Ionic equations and net ionic equations are usually written only for reactions that occur in solution and are an attempt to show how the ions present are reacting. suppose If it loses an electron, it will have a positive charge. In fact what must be impregnated in the beaker is reduced silver (if it is dark) from the decomposition of Sciencemadness Discussion Board » Special topics » Energetic Materials » Ag2C2. 45 M) was mixed with solution of NaCl (85 mL/1. Nitrate has a -1 charge, so AgNO3 has Ag in the +1 oxidation state, and Cu(NO3)2 has copper in the +2 oxidation state. However, in the deductive process, To balance a chemical equation, enter an equation of a chemical reaction and press the Balance button. The valve on the buchner funnel is opened so as to provide a steady trickle of water into the calcium carbide. Let's start with a: Ag has an oxidation state of +1 in the reactants (since NO3 has a charge of -1) and a state of +1 in the products (since Cl has a charge of -1). While ionic equations show all of the substances present in solution, a net ionic equation shows only those that are changed during the course of the reaction. 0000-g sample of the mixture is dissolved inwater and reacted with excess silver nitrate, 1. 5 You are presented with three white solids, A, B, and C, which are glucose (a sugar substance), NaOH, and AgBr. Over forty thousand views for such a boring thread with only (up until now) five posts. You perform a chemical analysis and you find that the red sample has a Fe/O mass ratio of 2. 6 ̊F), boiling point 444 ̊C (831. atoms have a formal charge of zero. agno3 charge pyo1, j2z4i, xfda9xj, nzo, m0pwm, rquud, d21fqu, u9nfuqt0w, o9hjnj, ylyv0, rjdtdph,
CommonCrawl
What is the value of $c$ if $x\cdot(3x+1)<c$ if and only if $x\in \left(-\frac{7}{3},2\right)$?

When $x\in \left(-\frac{7}{3},2\right)$, we have $x\cdot(3x+1)-c<0$. Since the inequality holds exactly on this open interval, the endpoints of the interval are the roots, so $x(3x+1)-c=0$ at $x=-\frac{7}{3}$ and $x=2$. Thus $x(3x+1)-c$ is a quadratic with leading coefficient $3$ and roots $x=-\frac{7}{3}$ and $x=2$, and we can use these roots to reconstruct it in factored form: $x=-\frac{7}{3}$ gives the factor $(3x+7)$ and $x=2$ gives the factor $(x-2)$. \begin{align*} x(3x+1)-c&=(3x+7)(x-2)\\ &=3x^2+x-14\\ &=x(3x+1)-14. \end{align*} So, $c=\boxed{14}$.
Math Dataset
\begin{definition}[Definition:Miscellaneous Volume Measure] This page gathers together various units of measurement of volume which can be found in the literature. \end{definition}
ProofWiki
# Varieties of Lattices

Peter Jipsen and Henry Rose

## Synopsis

An interesting problem in universal algebra is the connection between the internal structure of an algebra and the identities which it satisfies. The study of varieties of algebras provides some insight into this problem. Here we are concerned mainly with lattice varieties, about which a wealth of information has been obtained in the last twenty years. We begin with some preliminary results from universal algebra and lattice theory. The second chapter presents some properties of the lattice of all lattice subvarieties. Here we also discuss the important notion of a splitting pair of varieties and give several characterizations of the associated splitting lattice. The more detailed study of lattice varieties splits naturally into the study of modular lattice varieties and nonmodular lattice varieties, dealt with in the third and fourth chapter respectively. Among the results discussed there are Freese's theorem that the variety of all modular lattices is not generated by its finite members, and several results concerning the question which varieties cover a given variety. The fifth chapter contains a proof of Baker's finite basis theorem and some results about the join of finitely based lattice varieties. Included in the final chapter is a characterization of the amalgamation classes of certain congruence distributive varieties and the result that there are only three lattice varieties which have the amalgamation property.

## Introduction

The study of lattice varieties evolved out of the study of varieties in general, which was initiated by Garrett Birkhoff in the 1930's. He derived the first significant results in this subject, and further developments by Alfred Tarski and later, for congruence distributive varieties, by Bjarni Jónsson, laid the groundwork for many of the results about lattice varieties. During the same period, investigations in projective geometry and modular lattices, by Richard Dedekind, John von Neumann, Garrett Birkhoff, George Grätzer, Bjarni Jónsson and others, generated a wealth of information about these structures, which was used by Kirby Baker and Rudolf Wille to obtain some structural results about the lattice of all modular subvarieties. Nonmodular varieties were considered by Ralph McKenzie, and a paper of his published in 1972 stimulated a lot of research in this direction. Since then the efforts of many people have advanced the subject of lattice varieties in several directions, and many interesting results have been obtained. The purpose of this book is to present a selection of these results in a (more or less) self-contained framework and uniform notation.

In Chapter 1 we recall some preliminary results from the general study of varieties of algebras, and some basic results about congruences on lattices. This chapter also serves to introduce most of the notation which we use subsequently. Chapter 2 contains some general results about the structure of the lattice $\Lambda$ of all lattice subvarieties and about the important concept of "splitting". We present several characterizations of splitting lattices and Alan Day's result that splitting lattices generate all lattices. These results are applied in Chapter 4 and 6. Chapters $3-6$ each begin with an introduction in which we mention the important results that fall under the heading of the chapter. Chapter 3 then proceeds with a review of projective spaces and the coordinatization of (complemented) modular lattices.
These concepts are used to prove the result of Ralph Freese, that the finite modular lattices do not generate all modular lattices. In the second part of the chapter we give some structural results about covering relations between modular varieties. In Chapter 4 we concentrate on nonmodular varieties. A characterization of semidistributive varieties is followed by several technical lemmas which lead up to an essentially complete description of the "almost distributive" part of $\Lambda$. We derive the result of Bjarni Jónsson and Ivan Rival, that the smallest nonmodular variety has exactly 16 covers, and conclude the chapter with results of Henry Rose about covering chains of join-irreducible semidistributive varieties. Chapter 5 is concerned with the question which varieties are finitely based. A proof of Kirby Baker's finite basis theorem is followed by an example of a nonfinitely based variety, and a discussion about when the join of two finitely based varieties is again finitely based. In Chapter 6 we study amalgamation in lattice varieties, and the amalgamation property. The first half of the chapter contains a characterization of the amalgamation class of certain congruence distributive varieties, and in the remaining part we prove that there are only three lattice varieties that have the amalgamation property.

By no means can this monograph be regarded as a full account of the subject of lattice varieties. In particular, the concept of a congruence variety (i.e. the lattice variety generated by the congruence lattices of the members of some variety of algebras) is not included, partly to avoid making this monograph too extensive, and partly because it was felt that this notion is somewhat removed from the topic and requires a wider background of universal algebra. For the basic concepts and facts from lattice theory and universal algebra we refer the reader to the books of George Grätzer [GLT], [UA] and Peter Crawley and Robert P. Dilworth [ATL]. However, we denote the join of two elements $a$ and $b$ in a lattice by $a+b$ (rather than $a \vee b$) and the meet by $a \cdot b$, or simply $a b$ (instead of $a \wedge b$; for the meet of two congruences we use the symbol $\cap$). When using this plus, dot notation, it is traditionally assumed that the meet operation has priority over the join operation, which reduces the apparent complexity of a lattice expression. As a final remark, when we consider results that are applicable to wider classes of algebras (not only to lattices) then we aim to state and prove them in an appropriate general form.

## Chapter 1

## Preliminaries

### The Concept of a Variety

Lattice varieties. Let $\mathcal{E}$ be a set of lattice identities (equations), and denote by Mod $\mathcal{E}$ the class of all lattices that satisfy every identity in $\mathcal{E}$. A class $\mathcal{V}$ of lattices is a lattice variety if
$$
\mathcal{V}=\operatorname{Mod} \mathcal{E}
$$
for some set of lattice identities $\mathcal{E}$. The class of all lattices, which we will denote by $\mathcal{L}$, is of course a lattice variety since $\mathcal{L}=\operatorname{Mod} \emptyset$.
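As a concrete illustration (a standard example added here, not part of the original text), a single identity already separates classes of lattices: the five-element pentagon $N_{5}$ fails the modular law $x y+x z=x(y+x z)$ that defines the variety $\mathcal{M}$ listed below. Writing $N_{5}=\{0, a, b, c, 1\}$ with $0<a<b<1$, $a c=b c=0$ and $a+c=b+c=1$, the substitution $x=b$, $y=c$, $z=a$ gives
$$
x y+x z=b c+b a=0+a=a, \qquad x(y+x z)=b(c+b a)=b(c+a)=b \cdot 1=b \neq a,
$$
so $N_{5} \notin \operatorname{Mod}\{x y+x z=x(y+x z)\}$, and the variety generated by $N_{5}$ is therefore not contained in $\mathcal{M}$.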
We will also frequently encounter the following lattice varieties:
$$
\begin{array}{rlrl}
\mathcal{T} & =\operatorname{Mod}\{x=y\} & - & \text { all trivial lattices } \\
\mathcal{D} & =\operatorname{Mod}\{x y+x z=x(y+z)\} & - & \text { all distributive lattices } \\
\mathcal{M} & =\operatorname{Mod}\{x y+x z=x(y+x z)\} & - & \text { all modular lattices }
\end{array}
$$
Let $\mathcal{J}$ be the (countable) set of all lattice identities. For any class $\mathcal{K}$ of lattices, we denote by Id $\mathcal{K}$ the set of all identities which hold in every member of $\mathcal{K}$. A set of identities $\mathcal{E} \subseteq \mathcal{J}$ is said to be closed if
$$
\mathcal{E}=\operatorname{Id} \mathcal{K}
$$
for some class of lattices $\mathcal{K}$. It is easy to see that for any lattice variety $\mathcal{V}$, and for any closed set of identities $\mathcal{E}$,
$$
\mathcal{V}=\operatorname{Mod} \operatorname{Id} \mathcal{V} \quad \text { and } \quad \mathcal{E}=\operatorname{Id} \operatorname{Mod} \mathcal{E},
$$
whence there is a bijection between the collection of all lattice varieties, denoted by $\Lambda$, and the set of all closed subsets of $\mathcal{J}$. Thus $\Lambda$ is a set, although its members are proper classes. $\Lambda$ is partially ordered by inclusion, and for any collection $\left\{\mathcal{V}_{i}: i \in I\right\}$ of lattice varieties
$$
\bigcap_{i \in I} \mathcal{V}_{i}=\operatorname{Mod} \bigcup_{i \in I} \operatorname{Id} \mathcal{V}_{i}
$$
is again a lattice variety, which implies that $\Lambda$ is closed under arbitrary intersections. Since $\Lambda$ also has a largest element, namely $\mathcal{L}$, we conclude that $\Lambda$ is a complete lattice with intersection as the meet operation. For any class of lattices $\mathcal{K}$,
$$
\mathcal{K}^{\mathcal{V}}=\operatorname{Mod} \operatorname{Id} \mathcal{K}=\bigcap\{\mathcal{V} \in \Lambda: \mathcal{K} \subseteq \mathcal{V}\}
$$
is the smallest variety containing $\mathcal{K}$, and we call it the variety generated by $\mathcal{K}$. Now the join of a collection of lattice varieties is the variety generated by their union. We discuss the lattice $\Lambda$ in more detail in Section 2.1.

Varieties of algebras. Many of the results about lattice varieties are valid for varieties of other types of algebras, which are defined in a completely analogous way. When we consider a class $\mathcal{K}$ of algebras, then the members of $\mathcal{K}$ are all assumed to be algebras of the same type with only finitary operations. We denote by

$\mathbf{H} \mathcal{K}$ - the class of all homomorphic images of members of $\mathcal{K}$

$\mathbf{S} \mathcal{K}$ - the class of all subalgebras of members of $\mathcal{K}$

$\mathbf{P} \mathcal{K}$ - the class of all direct products of members of $\mathcal{K}$

$\mathbf{P}_{S} \mathcal{K}$ - the class of all subdirect products of members of $\mathcal{K}$.

(Recall that an algebra $A$ is a subdirect product of algebras $A_{i}(i \in I)$ if there is an embedding $f$ from $A$ into the direct product $\prod_{i \in I} A_{i}$ such that $f$ followed by the $i$ th projection $\pi_{i}$ maps $A$ onto $A_{i}$ for each $i \in I$.) The first significant results in the general study of varieties are due to Birkhoff [35], who showed that varieties are precisely those classes of algebras that are closed under the formation of homomorphic images, subalgebras and direct products, i.e.
$$
\mathcal{V} \text { is a variety if and only if } \mathbf{H} \mathcal{V}=\mathbf{S} \mathcal{V}=\mathbf{P} \mathcal{V}=\mathcal{V} .
$$
Tarski [46] then put this result in the form
$$
\mathcal{K}^{\mathcal{V}}=\mathbf{H S P} \mathcal{K}
$$
and later Kogalovskiï [65] showed that
$$
\mathcal{K}^{\mathcal{V}}=\mathbf{H P}_{S} \mathcal{K}
$$
for any class of algebras $\mathcal{K}$.

### Congruences and Free Algebras

Congruences of algebras. Let $A$ be an algebra, and let $\operatorname{Con}(A)$ be the lattice of all congruences on $A$. For $a, b \in A$ and $\theta \in \operatorname{Con}(A)$ we denote by:

$a / \theta$ - the congruence class of $a$ modulo $\theta$

$A / \theta$ - the quotient algebra of $A$ modulo $\theta$

$\mathbf{0}, \mathbf{1}$ - the zero and unit of $\operatorname{Con}(A)$

$\operatorname{con}(a, b)$ - the principal congruence generated by $(a, b)$

(i.e. the smallest congruence that identifies $a$ and $b$). $\operatorname{Con}(A)$ is of course an algebraic (= compactly generated) lattice with the finite joins of principal congruences as compact elements. (Recall that a lattice element $c$ is compact if whenever $c$ is below the join of a set of elements $\mathcal{C}$ then $c$ is below the join of a finite subset of $\mathcal{C}$. A lattice is algebraic if it is complete and every element is a join of compact elements.) For later reference we recall here a description of the join operation in $\operatorname{Con}(A)$.

Lemma 1.1 Let $A$ be an algebra, $x, y \in A$ and $\mathcal{C} \subseteq \operatorname{Con}(A)$. Then
$$
x\left(\sum \mathcal{C}\right) y \quad \text { if and only if } \quad x=z_{0} \psi_{1} z_{1} \psi_{2} z_{2} \cdots z_{n-1} \psi_{n} z_{n}=y
$$
for some $n \in \omega, \psi_{1}, \psi_{2}, \ldots, \psi_{n} \in \mathcal{C}$ and $z_{0}, z_{1}, \ldots, z_{n} \in A$.

The connection between congruences and homomorphisms is exhibited by the homomorphism theorem: for any homomorphism $h: A \rightarrow B$ the image $h(A)$ is isomorphic to $A / \operatorname{ker} h$, where
$$
\operatorname{ker} h=\left\{\left(a, a^{\prime}\right) \in A^{2}: h(a)=h\left(a^{\prime}\right)\right\} \in \operatorname{Con}(A) .
$$
We will also make use of the second isomorphism theorem which states that for (fixed) $\theta \in \operatorname{Con}(A)$ and any $\phi \in \operatorname{Con}(A)$ containing $\theta$ there exists a congruence $(\phi / \theta) \in \operatorname{Con}(A / \theta)$, defined by
$$
a / \theta(\phi / \theta) b / \theta \quad \text { if and only if } \quad a \phi b,
$$
such that $(A / \theta) /(\phi / \theta)$ is isomorphic to $A / \phi$. Furthermore the map $\phi \mapsto \phi / \theta$ defines an isomorphism from the principal filter $[\theta)=\{\phi \in \operatorname{Con}(A): \theta \leq \phi\}$ to $\operatorname{Con}(A / \theta)$. The homomorphism theorem implies that an algebra $A$ is a subdirect product of quotient algebras $A / \theta_{i}\left(\theta_{i} \in \operatorname{Con}(A)\right)$ if and only if the meet (intersection) of the $\theta_{i}$ is the $\mathbf{0}$ of $\operatorname{Con}(A)$.
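To make the last criterion concrete, here is a small worked example (a standard one, added for illustration and not taken from the text): the three-element chain $\mathbf{3}=\{0<m<1\}$ is a subdirect product of two copies of the two-element lattice $\mathbf{2}$. The partitions
$$
\theta_{1}=\{\{0, m\},\{1\}\}, \qquad \theta_{2}=\{\{0\},\{m, 1\}\}
$$
are congruences of $\mathbf{3}$ with $\theta_{1} \cap \theta_{2}=\mathbf{0}$ and $\mathbf{3} / \theta_{1} \cong \mathbf{3} / \theta_{2} \cong \mathbf{2}$, so $x \mapsto\left(x / \theta_{1}, x / \theta_{2}\right)$ is a subdirect representation of $\mathbf{3}$ in $\mathbf{2} \times \mathbf{2}$. In particular $\mathbf{3}$ is not subdirectly irreducible in the sense defined next.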
An algebra $A$ is subdirectly irreducible if and only if $A$ satisfies anyone of the following equivalent conditions: (i) whenever $A$ is a subdirect product of algebras $A_{i}(i \in I)$ then $A$ is isomorphic to one of the factors $A_{i}$; (ii) the $\mathbf{0}$ of $\operatorname{Con}(A)$ is completely meet irreducible; (iii) there exist $a, b \in A$ such that $\operatorname{con}(a, b)$ is the smallest non-0 element of $\operatorname{Con}(A)$. $A$ is said to be finitely subdirectly irreducible if whenever $A$ is a subdirect product of finitely many algebras $A_{1}, \ldots, A_{n}$ then $A$ is isomorphic to one of the $A_{i}(1 \leq i \leq n)$, or equivalently if the 0 of $\operatorname{Con}(A)$ is meet irreducible. For any variety $\mathcal{V}$ of algebras we denote by $\mathcal{V}_{S I}$ - the class of all subdirectly irreducible members of $\mathcal{V}$ $\mathcal{V}_{F S I}$ - the class of all finitely subdirectly irreducible members of $\mathcal{V}$. We can now state Birkhoff's [44] subdirect representation theorem: Every algebra is a subdirect product of its subdirectly irreducible homomorphic images. This can be deduced from the following result concerning decompositions in algebraic lattices: THEOREM 1.2 ([ATL] p.43). In an algebraic lattice every element is the meet of completely meet irreducible elements. Relatively free algebras. Let $\mathcal{K}$ be a class of algebras, and let $F$ be an algebra generated by a set $X \subseteq F$. We say that $F$ is $\mathcal{K}$-freely generated by the set $X$ if any map $f$ from the set of generators $X$ to any $A \in \mathcal{K}$ extends to a homomorphism $\bar{f}: F \rightarrow A$. If, in addition, $F \in \mathcal{K}$, then $F$ is called the $\mathcal{K}$ - free algebra on $\alpha=|X|$ generators or the $\alpha$-generated free algebra over $\mathcal{K}$, and is denoted by $F_{\mathcal{K}}(X)$ or $F_{\mathcal{K}}(\alpha)$. The extension $\bar{f}$ is unique (since $X$ generates $F$ ), and it follows that $F_{\mathcal{K}}(\alpha)$ is uniquely determined (up to isomorphism) for each cardinal $\alpha$. However, $F_{\mathcal{K}}(\alpha)$ need not exist for every class $\mathcal{K}$ of algebras. Birkhoff [35] found a simple method of constructing $\mathcal{K}$-freely generated algebras in general, and from this he could deduce that for any nontrivial variety $\mathcal{V}$ and any cardinal $\alpha \neq 0$, the $\mathcal{V}$-free algebra on $\alpha$ generators exists. We briefly outline his method below (further details can be found in $[\mathbf{U A}])$. Let $W(X)$ be the word algebra over the set $X$, i.e. $W(X)$ is the set of all terms (= polynomials or words) in the language of the algebras in $\mathcal{K}$, with variables taken from the set $X$, and with the operations defined on $W(X)$ in the obvious way (e.g. for lattices the join of two terms $p, q \in W(X)$ is the term $(p+q) \in W(X))$. It is easy to check that, for any class $\mathcal{K}$ of algebras, $W(X)$ is $\mathcal{K}$-freely generated by the set $X$. Other $\mathcal{K}$-freely generated algebras can be constructed as quotient algebras of $W(X)$ in the following way: Let $$ \begin{aligned} \theta_{\mathcal{K}} & =\prod\{\operatorname{ker} h \mid h: W(X) \rightarrow A \text { is a homomorphism, } A \in \mathcal{K}\} \\ & \left(=\sum\{\operatorname{con}(p, q) \mid p, q \in W(X) \text { and } p=q \in \operatorname{Id} \mathcal{K}\}\right) . 
\end{aligned}
$$
We claim that $F=W(X) / \theta_{\mathcal{K}}$ is $\mathcal{K}$-freely generated by the set $\bar{X}=\left\{x / \theta_{\mathcal{K}}: x \in X\right\}$. Indeed, given a map $f: \bar{X} \rightarrow A \in \mathcal{K}$, define $f^{\prime}: X \rightarrow A$ by $f^{\prime}(x)=f\left(x / \theta_{\mathcal{K}}\right)$, then $f^{\prime}$ extends to a homomorphism $\bar{f}^{\prime}: F \rightarrow A$. If $\mathcal{K}$ contains a nontrivial algebra, then $|\bar{X}|=|X|$ (if not, then $\theta_{\mathcal{K}}$ identifies all of $W(X)$, hence $F$ is trivial), and by construction $F$ is a subdirect product of the algebras $W(X) / \operatorname{ker} h$, which are all members of $\mathbf{S} \mathcal{K}$. Consequently, if $\mathcal{K}$ is closed under the formation of subdirect products, then $F \in \mathcal{K}$, whence Birkhoff's result follows.

If an identity holds in every member of a variety $\mathcal{V}$, then it must hold in $F_{\mathcal{V}}(n)$ for each $n \in \omega$, and if an identity fails in some member of $\mathcal{V}$, then it must fail in some finitely generated algebra (since an identity has only finitely many variables), and hence it fails in $F_{\mathcal{V}}(n)$ for some $n \in \omega$. Thus
$$
\mathcal{V}=\left\{F_{\mathcal{V}}(n): n \in \omega\right\}^{\mathcal{V}}
$$
and it now follows from Birkhoff's subdirect representation theorem that every variety is generated by its finitely generated subdirectly irreducible members. In fact, a similar argument to the one above shows that
$$
\mathcal{V}=\left\{F_{\mathcal{V}}(\omega)\right\}^{\mathcal{V}}
$$
whence every variety is generated by a single (countably generated) algebra. To obtain an interesting notion of a finitely generated variety, we define this to be a variety that is generated by finitely many finite algebras, or equivalently, by a finite algebra (the product of the former algebras).

### Congruence Distributivity

An algebra $A$ is said to be congruence distributive if the lattice $\operatorname{Con}(A)$ is distributive. A variety $\mathcal{V}$ of algebras is congruence distributive if every member of $\mathcal{V}$ is congruence distributive.

Congruence distributive algebras have factorable congruences. What is interesting about algebras in a congruence distributive variety is that they satisfy certain conditions which do not hold in general. The most important ones are described by Jónsson's Lemma (1.4) and its corollaries, but we first point out a result which follows directly from the definition of congruence distributivity.

Lemma 1.3 Suppose $A$ is the product of two algebras $A_{1}, A_{2}$ and $\operatorname{Con}(A)$ is distributive. Then $\operatorname{Con}(A)$ is isomorphic to $\operatorname{Con}\left(A_{1}\right) \times \operatorname{Con}\left(A_{2}\right)$ (i.e. congruences on the product of two algebras can be factored into two congruences on the algebras).

Proof. Let $L=\operatorname{Con}\left(A_{1}\right) \times \operatorname{Con}\left(A_{2}\right)$, and for $\theta=\left(\theta_{1}, \theta_{2}\right) \in L$ define a relation $\bar{\theta}$ on $A$ by
$$
\left(a_{1}, a_{2}\right) \bar{\theta}\left(b_{1}, b_{2}\right) \quad \text { if and only if } \quad a_{1} \theta_{1} b_{1} \text { and } a_{2} \theta_{2} b_{2} \text {.
} $$ One easily checks that $\bar{\theta} \in \operatorname{Con}(A)$, and that if $\psi=\left(\psi_{1}, \psi_{2}\right) \in L$, then $$ \bar{\theta} \subseteq \bar{\psi} \quad \text { if and only if } \quad \theta_{1} \subseteq \psi_{1} \text { and } \theta_{2} \subseteq \psi_{2} $$ Thus the map $\theta \mapsto \bar{\theta}$ from $L$ to $\operatorname{Con}(A)$ is one-one and order preserving, so it remains to show that it is also onto. Let $\phi \in \operatorname{Con}(A)$, and for $i=1,2$ define $\rho_{i}=\operatorname{ker} \pi_{i}$, where $\pi_{i}$ is the projection from $A$ onto $A_{i}$. Clearly $\rho_{1} \cap \rho_{2}=0$ in $\operatorname{Con}(A)$, hence $\phi=\left(\phi+\rho_{1}\right) \cap\left(\phi+\rho_{2}\right)$ by the distributivity of $\operatorname{Con}(A)$. Observe that for $i=1,2 \rho_{i} \subseteq \phi+\rho_{i}$ and $A / \rho_{i} \cong A_{i}$, so from the second isomorphism theorem we obtain $\theta_{i} \in \operatorname{Con}\left(A_{i}\right)$ such that for $a=\left(a_{1}, a_{2}\right), b=$ $\left(b_{1}, b_{2}\right) \in A$ $$ a\left(\phi+\rho_{i}\right) b \quad \text { if and only if } \quad a_{i} \theta_{i} b_{i} \quad(i \in\{1,2\}) . $$ Therefore $a \phi b$ iff $a\left(\phi+\rho_{1}\right) b$ and $a\left(\phi+\rho_{2}\right) b$ iff $a_{1} \theta_{1} b_{1}$ and $a_{2} \theta_{2} b_{2}$. Letting $\theta=\left(\theta_{1}, \theta_{2}\right) \in L$, we see that $\phi=\bar{\theta}$. A short review of filters, ideals and ultraproducts. Let $L$ be a lattice and $F$ a filter in $L$ (i.e. $F$ is a sublattice of $L$, and for all $y \in L$, if $y \geq x \in F$ then $y \in F$ ). A filter $F$ is proper if $F \neq L$, it is principal if $F=[a)=\{x \in L: a \leq x\}$ for some $a \in L$, and $F$ is prime if $x+y \in F$ implies $x \in F$ or $y \in F$ for all $x, y \in L$. An ultrafilter is a maximal proper filter (maximal with respect to inclusion). In a distributive lattice the notions of a proper prime filter and an ultrafilter coincide. The notion of a (proper / principal / prime) ideal is defined dually, and principal ideals are denoted by $(a]$. Let $\mathcal{I} L$ be the collection of all ideals in $L$. Then $\mathcal{I} L$ is closed under arbitrary intersections and has a largest element, hence it is a lattice, partially ordered by inclusion. Dually, the collection of all filters of $L$, denoted by $\mathcal{F} L$, is also a lattice. The order on $\mathcal{F} L$ is reverse inclusion i.e. $F \leq G$ iff $F \supseteq G$. With these definitions it is not difficult to see that the map $x \mapsto[x)(x \mapsto(x])$ is an embedding of $L$ into $\mathcal{F} L$ $(\mathcal{I} L)$, and that $\mathcal{F} L$ and $\mathcal{I} L$ satisfy the same identities as $L$. It follows that whenever $L$ is a member of some lattice variety then so are $\mathcal{F} L$ and $\mathcal{I} L$. For an arbitrary set $I$ we say that $\mathcal{F}$ is a filter over $I$ if $\mathcal{F}$ is a filter in the powerset lattice $\mathcal{P} I$ (= the collection of all subsets of $I$ ordered by inclusion). Since $\mathcal{P} I$ is distributive, a filter $\mathcal{U}$ is an ultrafilter if and only if $\mathcal{U}$ is a proper prime filter, and this is equivalent to the condition that whenever $I$ is partitioned into finitely many disjoint blocks, then $\mathcal{U}$ contains exactly one of these blocks. Let $A=\chi_{i \in I} A_{i}$ be a direct product of a family of algebras $\left\{A_{i}: i \in I\right\}$. If $\mathcal{F}$ is a filter over $I$ (i.e. 
a filter in the powerset lattice $\mathcal{P} I$ ), then we can define a congruence $\phi_{\mathcal{F}}$ on $A$ by $$ a \phi_{\mathcal{F}} \boldsymbol{b} \quad \text { if and only if } \quad\left\{i \in I: a_{i}=b_{i}\right\} \in \mathcal{F} $$ where $a_{i}$ is the $i$ th coordinate of $a$. If $A$ is a direct product of algebras $A_{i}(i \in I)$ and $\mathcal{U}$ is an ultrafilter over $I$, then the quotient algebra $A / \phi_{\mathcal{U}}$ is called an ultraproduct. For any given class of algebras $\mathcal{K}$, we denote by $$ \mathbf{P}_{U} \mathcal{K} \text { - the class of all ultraproducts of members of } \mathcal{K} . $$ Jónsson's Lemma. We are now ready to state and prove this remarkable result. Lemma 1.4 (Jónsson [67]). Suppose $B$ is a congruence distributive subalgebra of a direct product $A=\chi_{i \in I} A_{i}$, and $\theta \in \operatorname{Con}(B)$ is meet irreducible. Then there exists an ultrafilter $\mathcal{U}$ over $I$ such that $\phi_{\mathcal{U}} \mid B \subseteq \theta$. Proof. We will denote by $[J)$ the principal filter generated by a subset $J$ of $I$, and to simplify the notation we set $\psi_{J}=\phi_{[J)} \mid B$. Clearly $a \psi_{J} b$ if and only if $J \subseteq\left\{i \in I: a_{i}=b_{i}\right\}$, for $a, b \in B \subseteq A$. If $\theta=1 \in \operatorname{Con}(B)$ then any ultrafilter over $I$ will do. So assume $\theta<1$ and let $\mathcal{C}$ be the collection of all subsets $J$ of $I$ such that $\psi_{J} \subseteq \theta$. We claim that $\mathcal{C}$ has the following properties: (i) $I \in \mathcal{C}, \emptyset \notin \mathcal{C}$ (ii) $K \supseteq J \in \mathcal{C}$ implies $K \in \mathcal{C}$; (iii) $J \cup K \in \mathcal{C}$ implies $J \in \mathcal{C}$ or $K \in \mathcal{C}$. (i) and (ii) hold because $\psi_{I}, \psi_{\emptyset}$ are the zero and unit of $\operatorname{Con}(B)$, and $K \supseteq J$ implies $\psi_{K} \leq \psi_{J}$. To prove (iii), observe that $J \cup K \in \mathcal{C}$ implies $$ \theta=\theta+\psi_{J \cup K}=\theta+\left(\psi_{J} \cap \psi_{K}\right)=\left(\theta+\psi_{J}\right) \cap\left(\theta+\psi_{K}\right) $$ by the distributivity of $\operatorname{Con}(B)$, and since $\theta$ is assumed to be meet irreducible, it follows that $\theta=\theta+\psi_{J}$ or $\theta=\theta+\psi_{K}$, whence $J \in \mathcal{C}$ or $K \in \mathcal{C}$. $\mathcal{C}$ itself need not be a filter, but using Zorn's Lemma we can choose a filter $\mathcal{U}$ over $I$, maximal with respect to the property $\mathcal{U} \subseteq \mathcal{C}$. It is easy to see that $$ \phi_{\mathcal{U}} \mid B=\bigcup_{J \in \mathcal{U}} \psi_{J} \subseteq \theta $$ so it remains to show that $\mathcal{U}$ is an ultrafilter. Suppose the contrary. Then there exists a set $H \subseteq I$ such that neither $H$ nor $I-H$ belong to $\mathcal{U}$. If $H \cap J \in \mathcal{C}$ for all $J \in \mathcal{U}$, then by (i) and (ii) $\mathcal{U} \cup\{H\}$ would generate a filter contained in $\mathcal{C}$, contradicting the maximality of $\mathcal{U}$, and similarly for $I-H$. Hence we can find $J, K \in \mathcal{U}$ such that $H \cap J,(I-H) \cap K \notin \mathcal{C}$. But $J \cap K \in \mathcal{U}$, and $$ J \cap K=(H \cap J \cap K) \cup((I-H) \cap J \cap K) $$ which contradicts (iii). This completes the proof. Corollary 1.5 (Jónsson [67]). Let $\mathcal{K}$ be a class of algebras such that $\mathcal{V}=\mathcal{K}^{\mathcal{V}}$ is congruence distributive. 
Then (i) $\mathcal{V}_{S I} \subseteq \mathcal{V}_{F S I} \subseteq \mathbf{H S P}_{U} \mathcal{K}$; (ii) $\mathcal{V}=\mathrm{P}_{S} \mathbf{H S P}{ }_{U} \mathcal{K}$. Proof. (i) We always have $\mathcal{V}_{S I} \subseteq \mathcal{V}_{F S I}$. Since $\mathcal{V}=\operatorname{HSPK}$, every algebra in $\mathcal{V}$ is isomorphic to a quotient algebra $B / \theta$, where $B$ is a subalgebra of a direct product $A=$ $\chi_{i \in I} A_{i}$ and $\left\{A_{i}: i \in I\right\} \subseteq \mathcal{K}$. If $B / \theta$ is finitely subdirectly irreducible, then $\theta$ is meet irreducible, hence by the preceding lemma there exists an ultrafilter $\mathcal{U}$ over $I$ such that $\phi=\phi_{\mathcal{U}} \mid B \subseteq \theta$. Thus $B / \theta$ is a homomorphic image of $B / \phi$, which is isomorphic to a subalgebra of the ultraproduct $A / \phi_{\mathcal{U}}$. (ii) follows from (i) and Birkhoff's subdirect representation theorem. To exhibit the full strength of Jónsson's Lemma for finitely generated varieties, we need the following result: Lemma 1.6 (Frayne, Morel and Scott [62]). If $\mathcal{K}$ is a finite set of finite algebras, then every ultraproduct of members of $\mathcal{K}$ is isomorphic to a member of $\mathcal{K}$. Proof. Let $A=\mathcal{X}_{i \in I} A_{i}$ be a direct product of members of $\mathcal{K}$, and define an equivalence relation $\sim$ on $I$ by $i \sim j$ if and only if $A_{i} \cong A_{j}$. Since $\mathcal{K}$ is finite, $\sim$ partitions $I$ into finitely many blocks $I_{0}, I_{1}, \ldots, I_{n}$. If $\mathcal{U}$ is an ultrafilter over $I$, then $\mathcal{U}$ contains exactly one of these blocks, say $J=I_{k}$. Let $\overline{\mathcal{U}}=\{U \cap J: U \in \mathcal{U}\}$ be the ultrafilter over $J$ induced by $\mathcal{U}$, and let $\bar{A}=\mathcal{X}_{j \in J} A_{j}$. We claim that $$ A / \phi_{\mathcal{U}} \cong \bar{A} / \phi_{\overline{\mathcal{U}}} \cong B $$ where $B$ is an algebra isomorphic to each of the $A_{j}(j \in J)$, and hence to a member of $\mathcal{K}$ as required. Consider the epimorphism $h: A \rightarrow \bar{A} / \phi_{\overline{\mathcal{U}}}$ given by $h(a)=\bar{a} / \phi_{\overline{\mathcal{U}}}$, where $\bar{a}$ is the restriction of $a$ to $J$. We have $$ \begin{array}{lll} a \text { kerh } b & \text { iff } & \bar{a} \phi_{\overline{\mathcal{U}}} \bar{b} \quad \text { iff } \quad\left\{j \in J: a_{j}=b_{j}\right\} \in \overline{\mathcal{U}} \\ & \text { iff } \quad\left\{i \in I: a_{i}=b_{i}\right\} \in \mathcal{U} \text { iff } a \phi_{\mathcal{U}} b \end{array} $$ so ker $h=\phi_{\mathcal{U}}$, whence the first isomorphism follows. To establish the second isomorphism, observe that $\bar{A} \cong B^{J}$, and therefore $\bar{A} / \phi_{\overline{\mathcal{U}}}$ is isomorphic to an ultrapower $B^{J} / \phi_{\overline{\mathcal{U}}}$ over the finite algebra $B$. In this case the canonical embedding $B \hookrightarrow B^{J} / \phi_{\overline{\mathcal{U}}}$, given by $\boldsymbol{b} \mapsto \hat{b} / \phi_{\overline{\mathcal{U}}}$ $\left(\hat{b}=(b, b, \ldots) \in B^{J}\right)$, is always onto (hence an isomorphism) because for each $c \in B^{J}$ we can partition $J$ into finitely many blocks $J_{c, b}=\left\{j \in J: c_{j}=b\right\}$ (one for each $b \in B$ ), and then one of these blocks, say $J_{c, b^{\prime}}$ must be in $\overline{\mathcal{U}}$, hence $b^{\prime} \mapsto c / \phi_{\overline{\mathcal{U}}}$. Corollary 1.7 (Jónsson [67]). 
Let $\mathcal{K}$ be a finite set of finite algebras such that $\mathcal{V}=\mathcal{K}^{\mathcal{V}}$ is congruence distributive. Then (i) $\mathcal{V}_{S I} \subseteq \mathcal{V}_{F S I} \subseteq \mathbf{H S} \mathcal{K}$; (ii) $\mathcal{V}$ has up to isomorphism only finitely many subdirectly irreducible members, each one finite; (iii) $\mathcal{V}$ has only finitely many subvarieties; (iv) if $A, B \in \mathcal{V}_{S I}$ are nonisomorphic and $|A| \leq|B|$, then there is an identity that holds in $A$ but not in $B$. Proof. (i) follows immediately from 1.5 and 1.6, and (ii) follows from (i), since HS $\mathcal{K}$ has only finitely many non isomorphic members. (iii) holds because every subvariety is determined by its subdirectly irreducible members, and (iv) follows from the observation that if both $A$ and $B$ are finite, nonisomorphic and $|A| \leq|B|$, then $B \notin \mathbf{H S}\{A\}$, which implies $B \notin\{A\}^{\mathcal{V}}$ by part (i). Lattices are congruence distributive algebras. This is one of the most important results about lattices, since it means that we can apply Jónsson's Lemma to lattices. We first give a direct proof of this result. Theorem 1.8 (Funayama and Nakayama [42]). For any lattice $L, C o n(L)$ is distributive. Proof. Let $\theta, \psi, \phi \in \operatorname{Con}(L)$ and observe that the inclusion $(\theta \cap \psi)+(\theta \cap \phi) \subseteq \theta \cap(\psi+\phi)$ holds in any lattice. So suppose for some $x, y \in L$ the congruence $\theta \cap(\psi+\phi)$ identifies $x$ and $y$. We have to show that $(\theta \cap \psi)+(\theta \cap \phi)$ identifies $x$ and $y$. By assumption $x \theta y$ and $x(\psi+\phi) y$, hence by Lemma 1.1 $$ x=z_{0} \psi z_{1} \phi z_{2} \psi z_{3} \phi z_{4} \cdots z_{n}=y $$ for some $z_{0}, z_{1}, \ldots, z_{n} \in L$. If we can replace the elements $z_{i}$ by $z_{i}^{\prime}$, which all belong to the same $\theta$-class as $x$ and $y$, then $$ x=z_{0}^{\prime}(\theta \cap \psi) z_{1}^{\prime}(\theta \cap \phi) z_{2}^{\prime}(\theta \cap \psi) z_{3}^{\prime}(\theta \cap \phi) z_{4}^{\prime} \cdots z_{n}^{\prime}=y, $$ whence $x(\theta \cap \psi)+(\theta \cap \phi) y$ follows. One way of making this replacement is by taking $z_{i}^{\prime}=x z_{i}+y z_{i}+x y$ (the median polynomial), then any congruence which identifies $z_{i}$ with $z_{i+1}$, also identifies $z_{i}^{\prime}$ with $z_{i+1}^{\prime}$, and (since $L$ is a lattice) $x y \leq z_{i}^{\prime} \leq x+y$, whence $z_{i}^{\prime} \in x / \theta$ for all $i=0,1, \ldots, n$. Consequently every lattice variety is congruence distributive. Notice that the proof appeals to the lattice properties of $L$ only in the last few lines. Jónsson polynomials.The next theorem is a generalization of the above result. Theorem 1.9 (Jónsson [67]). A variety $\mathcal{V}$ of algebras is congruence distributive if and only if for some positive integer $n$ there exist ternary polynomials $t_{0}, t_{1}, \ldots, t_{n}$ such that for $i=0,1, \ldots, n$, the following identities hold in $\mathcal{V}$ : $$ \begin{aligned} & t_{0}(x, y, z)=x, \quad t_{n}(x, y, z)=z, \quad t_{i}(x, y, x)=x \\ & t_{i}(x, x, z)=t_{i+1}(x, x, z) \quad \text { for } i \text { even } \\ & t_{i}(x, z, z)=t_{i+1}(x, z, z) \quad \text { for } i \text { odd. } \end{aligned} $$ Proof. Suppose $\mathcal{V}$ is congruence distributive and consider the algebra $F_{\mathcal{V}}(\{a, b, c\})$. Define $\theta=\operatorname{con}(a, c), \phi=\operatorname{con}(a, b)$ and $\psi=\operatorname{con}(b, c)$. 
Then $(a, c) \in \theta \cap(\phi+\psi)=(\theta \cap \phi)+(\theta \cap \psi)$. By Lemma 1.1 there exist $d_{0}, d_{1}, \ldots, d_{n} \in A$ such that
$$
a=d_{0} \theta \cap \phi d_{1} \theta \cap \psi d_{2} \theta \cap \phi d_{3} \theta \cap \psi \cdots d_{n}=c .
$$
Each $d_{i}$ is of the form $d_{i}=t_{i}(a, b, c)$ for some ternary polynomial $t_{i}$, and it remains to show that the identities $(*)$ are satisfied for $x=a, y=b$ and $z=c$ in $F_{\mathcal{V}}(\{a, b, c\})$, since then they must hold in every member of $\mathcal{V}$. The first two identities follow from the fact that $d_{0}=a$ and $d_{n}=c$. For the third identity let $h: F_{\mathcal{V}}(\{a, b, c\}) \rightarrow F_{\mathcal{V}}(\{a, b\})$ be the homomorphism induced by the map $a, c \mapsto a, b \mapsto b$. Then $h(a)=h(c)$ implies $\theta \subseteq \operatorname{ker} h$, and since each $d_{i}=t_{i}(a, b, c)$ is $\theta$-related to $a$ we have
$$
a=h(a)=h\left(t_{i}(a, b, c)\right)=t_{i}(a, b, a) .
$$
Now suppose $i$ is even and consider $h: F_{\mathcal{V}}(\{a, b, c\}) \rightarrow F_{\mathcal{V}}(\{a, c\})$ induced by the map $a, b \mapsto a, c \mapsto c$. Then $\phi \subseteq \operatorname{ker} h$, and since
$$
t_{i}(a, b, c) \phi t_{i+1}(a, b, c)
$$
it follows that
$$
t_{i}(a, a, c)=h\left(t_{i}(a, b, c)\right)=h\left(t_{i+1}(a, b, c)\right)=t_{i+1}(a, a, c) .
$$
The proof for odd $i$ is similar.

Now assume the identities $(*)$ hold in $\mathcal{V}$ for some ternary polynomials $t_{0}, t_{1}, \ldots, t_{n}$, let $A \in \mathcal{V}$ and $\theta, \phi, \psi \in \operatorname{Con}(A)$. To prove that $\mathcal{V}$ is congruence distributive it suffices to show that $\theta \cap(\phi+\psi) \subseteq(\theta \cap \phi)+(\theta \cap \psi)$. Let $(a, c) \in \theta \cap(\phi+\psi)$. Then $(a, c) \in \theta$ and by Lemma 1.1 there exist $b_{0}, b_{1}, \ldots, b_{m} \in A$ such that
$$
a=b_{0} \phi b_{1} \psi b_{2} \phi b_{3} \psi \cdots b_{m}=c .
$$
So for each $i=0,1, \ldots, n$ we have
$$
t_{i}\left(a, b_{0}, c\right) \phi t_{i}\left(a, b_{1}, c\right) \psi t_{i}\left(a, b_{2}, c\right) \phi \cdots t_{i}\left(a, b_{m}, c\right) .
$$
The identity $t_{i}(x, y, x)=x$ together with $(a, c) \in \theta$ implies that the elements $t_{i}\left(a, b_{j}, c\right) \in a / \theta$, whence
$$
t_{i}\left(a, b_{0}, c\right) \theta \cap \phi t_{i}\left(a, b_{1}, c\right) \theta \cap \psi t_{i}\left(a, b_{2}, c\right) \theta \cap \phi \cdots t_{i}\left(a, b_{m}, c\right) .
$$
It follows that $t_{i}(a, a, c)(\theta \cap \phi)+(\theta \cap \psi) t_{i}(a, c, c)$ holds for each $i=0,1, \ldots, n$ and the remaining identities now give $(a, c) \in(\theta \cap \phi)+(\theta \cap \psi)$.

The polynomials $t_{0}, t_{1}, \ldots, t_{n}$ are known as Jónsson polynomials, and will be of use in Chapter 5. Here we just note that for lattices we can deduce Theorem 1.8 from Theorem 1.9 if we take $n=2, t_{0}(x, y, z)=x, t_{1}(x, y, z)=x y+z y+z x$ (the median polynomial) and $t_{2}(x, y, z)=z$.

### Congruences on Lattices

Prime quotients and unique maximal congruences. Let $L$ be a lattice and $u, v \in L$ with $v \leq u$. By a quotient $u / v$ (alternatively interval $[v, u]$) we mean the sublattice $\{x \in L: v \leq x \leq u\}$. We say that $u / v$ is nontrivial if $u>v$, and prime if $u$ covers $v$ (i.e. $u / v=\{u, v\}$, in symbols: $u \succ v$). If $L$ is subdirectly irreducible and $\operatorname{con}(u, v)$ is the smallest non-0 congruence of $L$, then $u / v$ is said to be a critical quotient.
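For a concrete instance (a standard example, added here for illustration), consider the pentagon $N_{5}=\{0, a, b, c, 1\}$ with $0<a<b<1$, $a c=b c=0$ and $a+c=b+c=1$. The equivalence relation whose only nontrivial block is $\{a, b\}$ is a congruence; the only compatibility check that is not immediate involves $c$:
$$
a+c=1=b+c \quad \text { and } \quad a c=0=b c .
$$
One checks further that every other nonzero congruence of $N_{5}$ also collapses $a$ and $b$, so $\operatorname{con}(a, b)$ is the smallest nonzero congruence, $N_{5}$ is subdirectly irreducible, and the prime quotient $b / a$ is a critical quotient.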
LEMMA 1.10 If $u / v$ is a prime quotient of $L$, then there exists a unique maximal congruence $\theta$ that does not identify $u$ and $v$. Proof. Let $\mathcal{C} \subseteq \operatorname{Con}(L)$ be the set of all congruences of $L$ that do not identify $u$ and $v$. Take $\theta=\sum \mathcal{C}$, and suppose $\theta$ identifies $u$ and $v$. By Lemma 1.1 we can find $\psi_{1}, \psi_{2}, \ldots, \psi_{n} \in \mathcal{C}$ and $z_{0}, z_{1}, \ldots, z_{n} \in L$ such that $$ u=z_{0} \psi_{1} z_{1} \psi_{2} z_{2} \cdots z_{n-1} \psi_{n} z_{n}=v $$ Replacing $z_{i}$ by $z_{i}^{\prime}=u z_{i}+v z_{i}+v$ we see that $u=z_{0}^{\prime} \psi_{1} z_{1}^{\prime} \psi_{2} z_{2}^{\prime} \cdots z_{n-1}^{\prime} \psi_{n} z_{n}^{\prime}=v$ and $v \leq z_{i}^{\prime} \leq u$ for all $i=1, \ldots, n$. Since $u / v$ is assumed to be prime, we must have $z_{i}^{\prime}=u$ or $z_{i}^{\prime}=v$ for all $i$, which implies $u \psi_{i} v$ for some $i$, a contradiction. Thus $\theta \in \mathcal{C}$, and it is clearly the largest element of $\mathcal{C}$. Weak transpositions. Given two quotients $r / s$ and $u / v$ in $L$, we say that $r / s$ transposes weakly up onto $u / v$ (in symbols $r / s \nearrow_{w} u / v$ ) if $r+v=u$ and $s \leq v$. Dually, we say that $r / s$ transposes weakly down onto $u / v$ (in symbols $r / s \searrow w u / v$ ) if $s u=v$ and $r \geq u$. We write $r / s \sim_{w} u / v$ if $r / s$ transposes weakly up or down onto $u / v$. The quotient $r / s$ projects weakly onto $u / v$ in $n$ steps if there exists a sequence of quotients $x_{i} / y_{i}$ in $L$ such that $$ r / s=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=u / v . $$ Note that the symbols $\nearrow_{w}, \searrow_{w}$ and $\sim_{w}$ define nonsymmetric binary relations on the set of quotients of a lattice. Some authors (in particular [GLT], [ATL] and Rose [84]) define weak transpositions in terms of the inverses of the above relations, but denote these inverse relations by the same symbols. Usually the phrase "transposes weakly into" (rather than "onto") is used to distinguish the two definitions. The usefulness of weak transpositions lies in the fact that they can be used to characterize principal congruences in arbitrary lattices. LemMa 1.11 (Dilworth [50]). Let $r / s$ and $u / v$ be quotients in a lattice $L$. Then con $(r, s)$ identifies $u$ and $v$ if and only if for some finite chain $u=t_{0}>t_{1}>\ldots>t_{m}=v$, the quotient $r / s$ projects weakly onto $t_{i} / t_{i+1}$ (all $i=0,1, \ldots, m-1$ ). Notice that if $u / v$ is a prime critical quotient of a subdirectly irreducible lattice $L$, then by the above lemma every nontrivial quotient of $L$ projects weakly onto $u / v$. Bijective transpositions and modularity. We say that $r / s$ transposes up onto $u / v$ (in symbols $r / s \nearrow u / v$ ) or equivalently $u / v$ transposes down onto $r / s$ (in symbols $u / v \searrow r / s$ ) if $r+v=u$ and $r v=s$. We write $r / s \sim u / v$ if either $r / s \nearrow u / v$ or $r / s \searrow u / v$. Note that $\sim$ is a symmetric relation, and that $$ r / s \sim_{w} u / v \text { and } u / v \sim_{w} r / s \quad \text { if and only if } r / s \sim u / v . $$ Suppose $r / s \nearrow u / v$ and, in addition, for every $t \in r / s$ and $t^{\prime} \in u / v$ we have $$ t=(t+v) r \quad \text { and } \quad t^{\prime}=t^{\prime} r+v . 
$$ Then the map $t \mapsto t+v$ is an isomorphism from $r / s$ to $u / v$, and in this case we say that $r / s$ transposes bijectively up onto $u / v$ (in symbols $r / s \nearrow_{\beta} u / v$ ), or equivalently $u / v$ transposes bijectively down onto $r / s$ (in symbols $u / v \searrow_{\beta} r / s$ ). In a modular lattice every transpose is bijective, since $t \leq r$ and modularity imply $$ (t+v) r=t+v r=t+s=t $$ and similarly $t^{\prime}=t^{\prime} r+v$. It follows that for any sequence of weak transpositions $x_{0} / y_{0} \sim_{w}$ $x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}$ we can find subquotients $x_{i}^{\prime} / y_{i}^{\prime}$ of $x_{i} / y_{i}(i=0,1, \ldots, n-1)$ such that $$ x_{0} / y_{0} \supseteq x_{0}^{\prime} / y_{0}^{\prime} \sim x_{1}^{\prime} / y_{1}^{\prime} \sim \ldots \sim x_{n} / y_{n} . $$ In this case we say that the two quotients $x_{0}^{\prime} / y_{0}^{\prime}$ and $x_{n} / y_{n}$ are projective to each other, and by Lemma 1.11 this concept is clearly sufficient for describing principal congruences in modular lattices. Congruence lattices of modular lattices. The symbol 2 denotes a two element lattice, and a complemented distributive lattice will be referred to as a Boolean algebra (although we do not include complementation, zero and unit as basic operations). We need the following elementary result about distributive lattices: Lemma 1.12 Let $D$ be a finite distributive lattice. If the largest element of $D$ is a join of atoms, then $D$ is a Boolean algebra. Proof. It suffices to show that $D$ is complemented. Let $a \in D$ and define $\bar{a}$ to be the join of all atoms that are not below $a$. By assumption $a+\bar{a}=1_{D}$ and by distributivity $a \bar{a}=0_{D}$, whence $\bar{a}$ is the complement of $a$. A chain $C$ is a linearly ordered subset of a lattice, and if $|C|$ is finite then the length of $C$ is defined to be $|C|-1$. A lattice $L$ is said to be of length $n$ if there is a chain in $L$ that has length $n$ and all chains in $L$ are of length $\leq n$. Recall the Jordan-Hölder Chain condition ([GLT] p.172): if $M$ is a (semi-) modular lattice of finite length then any two maximal chains in $M$ have the same length. In such lattices the length is also referred to as the dimension of the lattice. ## LeMma 1.13 Let $M$ be a modular lattice. (i) If $u / v$ is a prime quotient of $M$, then $\operatorname{con}(u, v)$ is an atom of $\operatorname{Con}(M)$. (ii) If $M$ has finite length $m$, then $\operatorname{Con}(M)$ is isomorphic to a Boolean algebra $\mathbf{2}^{n}$, where $n \leq m$. Proof. (i) If $\operatorname{con}(u, v) \supseteq \operatorname{con}(r, s)$ for some $r \neq s \in M$, then $u / v$ and a prime subquotient of $r+s / r s$ are projective to each other, which implies that $\operatorname{con}(u, v)=\operatorname{con}(r, s)$. It follows that $\operatorname{con}(u, v)$ is an atom. (ii) Let $z_{0} \prec z_{1} \prec \ldots \prec z_{m}$ be a maximal chain in $M$. Then the principal congruences $\operatorname{con}\left(z_{i}, z_{i+1}\right)(i=0,1, \ldots, m-1)$ are atoms (not necessarily distinct) of $\operatorname{Con}(M)$, and since their join collapses the whole of $M$, the result follows from the distributivity of $\operatorname{Con}(M)$ and the preceding lemma. As a corollary we have that every subdirectly irreducible modular lattice of finite length is simple (i.e. $\operatorname{Con}(M) \cong 2$ ). ## Chapter 2 ## General Results ### The Lattice $\Lambda$ $\Lambda$ is a dually algebraic distributive lattice. 
In Section 1.1 it was shown that the collection $\Lambda$ of all lattice subvarieties of $\mathcal{L}$ is a complete lattice, with intersection as meet. A completely analogous argument shows that this result is true in general for the collection of all subvarieties of an arbitrary variety $\mathcal{V}$ of algebras. We denote by $$ \Lambda_{\mathcal{V}} \text { - the lattice of all subvarieties of the variety } \mathcal{V} $$ (If $\mathcal{V}=\mathcal{L}$ then we usually drop the subscript $\mathcal{V}$.) Call a variety $\mathcal{V}^{\prime} \in \Lambda_{\mathcal{V}}$ finitely based relative to $\mathcal{V}$ if it can be defined by some finite set of identities together with the set Id $\mathcal{V}$. If $\mathcal{V}$ is finitely based relative to the variety Mod $\emptyset$ (= the class of all algebras of the same type as $\mathcal{V}$ ) then we may omit the phrase "relative to $\mathcal{V}$ ". THEOREM 2.1 For any variety $\mathcal{V}$ of algebras, $\Lambda_{\mathcal{V}}$ is a dually algebraic lattice, and the varieties which are finitely based relative to $\mathcal{V}$ are the dually compact elements. Proof. Let $\mathcal{V}^{\prime}, \mathcal{V}_{i}(i \in I)$ be subvarieties of $\mathcal{V}$, and suppose that $\mathcal{V}^{\prime} \supseteq \bigcap_{i \in I} \mathcal{V}_{i}=$ $\operatorname{Mod}\left(\bigcup_{i \in I} \operatorname{Id} \mathcal{V}_{i}\right)$. If $\mathcal{V}^{\prime}$ is finitely based relative to $\mathcal{V}$, then $\mathcal{V}^{\prime}=\mathcal{V} \cap \operatorname{Mod} \mathcal{E}$ for some finite set $\mathcal{E} \subseteq \operatorname{Id} \mathcal{V}^{\prime}$. It follows that $\mathcal{E} \subseteq \bigcup_{i \in I} \operatorname{Id} \mathcal{V}_{i}$, and since $\mathcal{E}$ is finite, it will be included in the union of finitely many Id $\mathcal{V}_{i}$. Clearly the finite intersection of the corresponding subvarieties is included in $\mathcal{V}^{\prime}$, whence $\mathcal{V}^{\prime}$ is dually compact. Conversely, suppose $\mathcal{V}^{\prime}$ is dually compact. We always have $$ \text { (*) } \quad \mathcal{V}^{\prime}=\operatorname{Mod} \operatorname{Id} \mathcal{V}^{\prime}=\mathcal{V} \cap \bigcap_{\varepsilon \in \operatorname{Id} \mathcal{V}^{\prime}} \operatorname{Mod}\{\varepsilon\} $$ so by dual compactness $\mathcal{V}^{\prime}=\mathcal{V} \cap \bigcap_{i=1}^{n} \operatorname{Mod}\left\{\varepsilon_{i}\right\}$ for some finite set $\mathcal{E}=\left\{\varepsilon_{1}, \ldots, \varepsilon_{n}\right\} \subseteq \operatorname{Id} \mathcal{V}^{\prime}$. Hence $\mathcal{V}^{\prime}$ is finitely based relative to $\mathcal{V}$. Finally $(*)$ implies that every element of $\Lambda_{\mathcal{V}}$ is a meet of dually compact elements, and so $\Lambda_{\mathcal{V}}$ is dually algebraic. Let $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$ denote the collection of all varieties in $\Lambda_{\mathcal{V}}$ that cover $\mathcal{V}^{\prime}$. We say that $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$ strongly covers $\mathcal{V}^{\prime}$ if any variety that properly contains $\mathcal{V}^{\prime}$, contains some member of $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$. Recall that a lattice $L$ is weakly atomic if every nontrivial quotient of $L$ contains a prime subquotient. 
An algebraic (or dually algebraic) lattice $L$ is always weakly atomic, since for any nontrivial quotient $u / v$ in $L$ we can find a compact element $c \leq u$, $c \not\leq v$, and using Zorn's Lemma we can choose a maximal element $d$ of the set $\{x \in L: v \leq x<c+v\}$, which then satisfies $v \leq d \prec c+v \leq u$. In particular, if $u$ is compact, then there exists $d \in L$ such that $v \leq d \prec u$.

THEOREM 2.2 Let $\mathcal{V}^{\prime}$ be a subvariety of a variety $\mathcal{V}$.

(i) If $\mathcal{V}^{\prime}$ is finitely based relative to $\mathcal{V}$ then $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$ strongly covers $\mathcal{V}^{\prime}$.

(ii) If $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$ is finite and strongly covers $\mathcal{V}^{\prime}$ then $\mathcal{V}^{\prime}$ is finitely based relative to $\mathcal{V}$.

Proof. (i) $\mathcal{V}^{\prime}$ is dually compact, so by the remark above, any variety which properly contains $\mathcal{V}^{\prime}$ contains a variety that covers $\mathcal{V}^{\prime}$.

(ii) Suppose $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)=\left\{\mathcal{V}_{1}, \ldots, \mathcal{V}_{n}\right\}$ for some $n \in \omega$. Then for each $i=1, \ldots, n$ there exists an identity $\varepsilon_{i} \in \operatorname{Id} \mathcal{V}^{\prime}$ such that $\varepsilon_{i}$ fails in $\mathcal{V}_{i}$. Let $\mathcal{V}^{\prime \prime}=\mathcal{V} \cap \operatorname{Mod}\left\{\varepsilon_{i}: i=1, \ldots, n\right\}$. We claim that $\mathcal{V}^{\prime}=\mathcal{V}^{\prime \prime}$. Since each $\varepsilon_{i}$ holds in $\mathcal{V}^{\prime}$, we certainly have $\mathcal{V}^{\prime} \subseteq \mathcal{V}^{\prime \prime}$. If $\mathcal{V}^{\prime} \neq \mathcal{V}^{\prime \prime}$ then the assumption that $C_{\mathcal{V}}\left(\mathcal{V}^{\prime}\right)$ strongly covers $\mathcal{V}^{\prime}$ implies that $\mathcal{V}_{i} \subseteq \mathcal{V}^{\prime \prime}$ for some $i \in\{1, \ldots, n\}$. But this is a contradiction since $\varepsilon_{i}$ fails in $\mathcal{V}_{i}$.

We now focus our attention on congruence distributive varieties, since we can then apply Jónsson's Lemma to obtain further results.

THEOREM 2.3 (Jónsson [67]). Let $\mathcal{V}$ be a congruence distributive variety of algebras and let $\mathcal{V}^{\prime}, \mathcal{V}^{\prime \prime} \in \Lambda_{\mathcal{V}}$. Then

(i) $\left(\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime}\right)_{S I}=\mathcal{V}_{S I}^{\prime} \cup \mathcal{V}_{S I}^{\prime \prime}$;

(ii) $\Lambda_{\mathcal{V}}$ is a distributive lattice;

(iii) if $\mathcal{V}^{\prime}$ is finitely generated, then $\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime} / \mathcal{V}^{\prime \prime}$ is a finite quotient in $\Lambda_{\mathcal{V}}$.

Proof. (i) We always have $\mathcal{V}_{S I}^{\prime} \cup \mathcal{V}_{S I}^{\prime \prime} \subseteq\left(\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime}\right)_{S I}$. Conversely, if $A \in\left(\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime}\right)_{S I}$ then Jónsson's Lemma implies that $A \in \mathbf{HSP}_{U}\left(\mathcal{V}^{\prime} \cup \mathcal{V}^{\prime \prime}\right)$. It is not difficult to see that $\mathbf{HSP}_{U}\left(\mathcal{V}^{\prime} \cup \mathcal{V}^{\prime \prime}\right)=\mathbf{HSP}_{U} \mathcal{V}^{\prime} \cup \mathbf{HSP}_{U} \mathcal{V}^{\prime \prime}=\mathcal{V}^{\prime} \cup \mathcal{V}^{\prime \prime}$, and since $A$ is subdirectly irreducible, we must have $A \in \mathcal{V}_{S I}^{\prime}$ or $A \in \mathcal{V}_{S I}^{\prime \prime}$.
(ii) If $\mathcal{V}_{1}, \mathcal{V}_{2}, \mathcal{V}_{3} \in \Lambda_{\mathcal{V}}$, then (i) implies that every subdirectly irreducible member of $\mathcal{V}_{1} \cap\left(\mathcal{V}_{2}+\mathcal{V}_{3}\right)$ belongs to either $\mathcal{V}_{1} \cap \mathcal{V}_{2}$ or $\mathcal{V}_{1} \cap \mathcal{V}_{3}$, whence $\mathcal{V}_{1} \cap\left(\mathcal{V}_{2}+\mathcal{V}_{3}\right) \subseteq\left(\mathcal{V}_{1} \cap \mathcal{V}_{2}\right)+\left(\mathcal{V}_{1} \cap \mathcal{V}_{3}\right)$ The reverse inclusion is always satisfied. (iii) By Corollary 1.7(iii), the quotient $\mathcal{V}^{\prime} / \mathcal{V}^{\prime} \cap \mathcal{V}^{\prime \prime}$ is finite, and it transposes bijectively up onto $\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime} / \mathcal{V}^{\prime \prime}$ since $\Lambda_{\mathcal{V}}$ is distributive. The fact that, for any congruence distributive variety $\mathcal{V}$, the lattice $\Lambda_{\nu}$ is dually algebraic and distributive can also be derived from the following more general result, due to B. H. Neumann [62]: Theorem 2.4 For any variety $\mathcal{V}$ of algebras, $\Lambda_{\mathcal{V}}$ is dually isomorphic to the lattice of all fully invariant congruences on $F_{\mathcal{V}}(\omega)$. (A congruence $\theta \in \operatorname{Con}(A)$ is fully invariant if $a \theta b$ implies $f(a) \theta f(b)$ for all endomorphisms $f: A \hookrightarrow A)$. However, we will not make use of this result. Some properties of the variety $\mathcal{L}$. For any class $\mathcal{K}$ of algebras, denote by $$ \mathcal{K}_{F} \quad \text { - the class of all finite members of } \mathcal{K} \text {. } $$ The variety $\mathcal{V}=\mathcal{L}$ of all lattice varieties has the following interesting properties: (P1) $\mathcal{V}$ is generated by its finite (subdirectly irreducible) members (i.e. $\mathcal{V}=\left(\mathcal{V}_{F}\right)^{\mathcal{V}}$ ). (P2) Every member of $\mathcal{V}$ can be embedded in a member of $\mathcal{V}_{S I}$ (i.e. $\mathcal{V} \subseteq \mathrm{S} \mathcal{V}_{S I}$ ). (P3) Every finite member of $\mathcal{V}$ can be embedded in a finite member of $\mathcal{V}_{S I}$. That $\mathcal{L}$ satisfies (P1) was proved by Dean [56], who showed that if an identity fails in some lattice, then it fails in some finite lattice (see Lemma 2.23). In Section 2.3 we prove an even stronger result, namely that $\mathcal{L}$ is generated by the class of all splitting lattices (which are all finite). (P2) follows from the result of Whitman [46] that every lattice can be embedded in a partition lattice, which is simple (hence subdirectly irreducible, see also Jónsson [53]). (P3) follows from the analogous result for finite lattices and finite partition lattices, due to Pudlák and Tuma [80]. Theorem 2.5 (McKenzie [72]). Let $\mathcal{V}$ be a variety of algebras and consider the following statements about a subvariety $\mathcal{V}^{\prime}$ of $\mathcal{V}$ : (i) $\mathcal{V}^{\prime}$ is completely join prime in $\Lambda_{\mathcal{V}}$ (i.e. 
$\mathcal{V}^{\prime} \leq \sum_{i \in I} \mathcal{V}_{i}$ implies $\mathcal{V}^{\prime} \leq \mathcal{V}_{i}$ some $i \in I$ ); (ii) $\mathcal{V}^{\prime}$ can be generated by a finite subdirectly irreducible member; (iii) $\mathcal{V}^{\prime}$ is completely join irreducible in $\Lambda_{\mathcal{V}}$; (iv) $\mathcal{V}^{\prime}$ can be generated by a finitely generated subdirectly irreducible member; (v) $\mathcal{V}^{\prime}$ can be generated by a (single) subdirectly irreducible member; (vi) $\mathcal{V}^{\prime}$ is join irreducible in $\Lambda_{\mathcal{V}}$; Then we always have (iii) $\Rightarrow(\mathrm{iv}) \Rightarrow(\mathrm{v})$. If (P1) holds, then (i) $\Rightarrow(\mathrm{ii})$, and if $\mathcal{V}$ is congruence distributive then (ii) $\Rightarrow$ (iii) and (v) $\Rightarrow$ (vi). Proof. (iii) $\Rightarrow$ (iv) Every variety is generated by its finitely generated subdirectly irreducible members, so if $\mathcal{V}^{\prime}$ is completely join irreducible, then it must be generated by one of them. (iv) $\Rightarrow(\mathrm{v})$ is obvious. Suppose now that $\mathcal{V}=\left(\mathcal{V}_{F}\right)^{\mathcal{V}}$ (i.e. (P1) holds). Then $\mathcal{V}$ is the join of all its finitely generated subvarieties. If $\mathcal{V}^{\prime} \subseteq \mathcal{V}$ is completely join prime, then it is included in one of these, and therefore $\mathcal{V}^{\prime}$ is itself finitely generated. This means that $\mathcal{V}^{\prime}$ can be generated by finitely many finite subdirectly irreducible algebras, and since it is also join irreducible, it must be generated by just one of them, i.e. (ii) holds. If $\mathcal{V}$ is congruence distributive and (ii) holds, then Theorem 2.3(i) implies that $\mathcal{V}^{\prime}$ is join irreducible, and by Corollary 1.7(iii), $\mathcal{V}^{\prime}$ has only finitely many subvarieties, hence it is completely join irreducible. (v) $\Rightarrow$ (vi) also follows from Theorem 2.3(i). Thus for $\mathcal{V}=\mathcal{L}$ we have (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow(v) \Rightarrow($ vi). McKenzie also gives examples of lattice varieties which show that, in general, none of the reverse implications hold. If $\mathcal{V}^{\prime}$ is assumed to be finitely generated, then of course (ii)-(vi) are equivalent. THEOREM 2.6 (Jónsson [67]). Let $\mathcal{V}$ be a congruence distributive variety of algebras. Then (i) (P1) implies that every proper subvariety of $\mathcal{V}$ has a cover in $\Lambda_{\mathcal{V}}$; (ii) (P2) implies that $\mathcal{V}$ is join irreducible in $\Lambda_{\mathcal{V}}$; (iii) (P1) and (P2) imply that $\mathcal{V}$ has no dual cover. Proof. (i) If $\mathcal{V}^{\prime}$ is a proper subvariety of $\mathcal{V}=\left(\mathcal{V}_{F}\right)^{\mathcal{V}}$, then there exists a finite algebra $A \in \mathcal{V}$ such that $A \notin \mathcal{V}^{\prime}$. By Theorem 2.3(iii) the quotient $\{A\}^{\mathcal{V}}+\mathcal{V}^{\prime} / \mathcal{V}^{\prime}$ is finite and therefore contains a variety that covers $\mathcal{V}^{\prime}$. (ii) If $\mathcal{V}^{\prime}$ and $\mathcal{V}^{\prime \prime}$ are proper subvarieties of $\mathcal{V}$, then there exist algebras $A^{\prime}$ and $A^{\prime \prime}$ in $\mathcal{V}$ such that $A^{\prime} \notin \mathcal{V}^{\prime}$ and $A^{\prime \prime} \notin \mathcal{V}^{\prime \prime}$. 
Assuming that $\mathcal{V} \subseteq \mathbf{S} \mathcal{V}_{S I}$, we can find a subdirectly irreducible algebra $A \in \mathcal{V}$ which has $A^{\prime} \times A^{\prime \prime}$ as a subalgebra. Then $A \notin \mathcal{V}^{\prime}$ and $A \notin \mathcal{V}^{\prime \prime}$, so Theorem 2.3(i) implies that $A \notin \mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime}$, whence $\mathcal{V}^{\prime}+\mathcal{V}^{\prime \prime} \neq \mathcal{V}$.

(iii) Again we let $\mathcal{V}^{\prime}$ be a proper subvariety of $\mathcal{V}$. By (P1) there exists a finite algebra $A \in \mathcal{V}$ such that $A \notin \mathcal{V}^{\prime}$. Now (P2) implies that $\{A\}^{\mathcal{V}} \neq \mathcal{V}$, whence by (ii) $\mathcal{V}$ properly contains $\mathcal{V}^{\prime}+\{A\}^{\mathcal{V}}$, which in turn properly contains $\mathcal{V}^{\prime}$. Consequently $\mathcal{V}^{\prime}$ is not a dual cover of $\mathcal{V}$.

The cardinality of $\Lambda$. Let $\mathcal{J}$ be the (countable) set of all lattice identities. Since every variety in $\Lambda$ is defined by some subset of $\mathcal{J}$, we must have $|\Lambda| \leq 2^{\omega}$. The same argument shows that if $\mathcal{V}$ is any variety of algebras (of finite or countable type), then $\left|\Lambda_{\mathcal{V}}\right| \leq 2^{\omega}$. Whether this upper bound on the cardinality is actually attained depends on the variety $\mathcal{V}$. For $\mathcal{V}=\mathcal{L}$, the answer is affirmative, as was proved independently by McKenzie [70] and Baker [69] (see also Wille [72] and Lee [85]). Baker in fact shows that the lattice $\Lambda_{\mathcal{M}}$ of all modular subvarieties contains the Boolean algebra $\mathbf{2}^{\omega}$ as a sublattice. We postpone the proof of this result until we have covered some theory of projective spaces in the next chapter. In Section 4.3 we give another result, from Lee [85], which shows that there are $2^{\omega}$ almost distributive lattice varieties (to be defined). In contrast, Jónsson's Lemma implies that any finitely generated congruence distributive variety $\mathcal{V}$ has only finitely many subvarieties and therefore $\Lambda_{\mathcal{V}}$ is finite. An as yet unsolved problem about lattice varieties is whether the converse of the above observation is true, i.e. if a lattice variety has only finitely many subvarieties, is it finitely generated? This problem can also be approached from below: if a lattice variety $\mathcal{V}$ is finitely generated, is every cover of $\mathcal{V}$ finitely generated? Sometimes these problems are phrased in terms of the height of a variety $\mathcal{V}$ in $\Lambda$ (= length of the ideal $(\mathcal{V}]$). Since $\Lambda$ is distributive, to be of finite height is of course equivalent to having only finitely many subvarieties. Call a variety $\mathcal{V}$ locally finite if every finitely generated member of $\mathcal{V}$ is finite. For locally finite congruence distributive varieties, the above problem is easily solved.

THEOREM 2.7 Every finitely generated variety of algebras is locally finite. Conversely, if $\mathcal{V}$ is a locally finite congruence distributive variety, then (i) every join irreducible subvariety of $\mathcal{V}$ that has finite height in $\Lambda_{\mathcal{V}}$ is generated by a finite subdirectly irreducible member, and (ii) every variety of finite height in $\Lambda_{\mathcal{V}}$ is finitely generated.

Proof. If $\mathcal{V}$ is finitely generated, then $\mathcal{V}=\{A\}^{\mathcal{V}}$ for some finite algebra $A \in \mathcal{V}$.
For $n \in \omega$ the elements of $F_{\mathcal{V}}(n)$ are represented by $n$-ary polynomial functions from $A^{n}$ to $A$, of which we can have at most $|A|^{|A|^{n}}$. Hence $F_{\mathcal{V}}(n)$ is finite for each $n \in \omega$, and this is equivalent to $\mathcal{V}$ being locally finite. Conversely, assume that $\mathcal{V}$ is a locally finite congruence distributive variety. (i) If a subvariety $\mathcal{V}^{\prime}$ of $\mathcal{V}$ is join irreducible and has finite height in $\Lambda_{\mathcal{V}}$, then it is in fact completely join irreducible, whence Theorem 2.5 implies that $\mathcal{V}^{\prime}$ is generated by a finitely generated subdirectly irreducible algebra, which must be finite. (ii) follows from (i) and the fact that a variety of finite height is the join of finitely many join irreducible varieties. Nonfinitely based and nonfinitely generated varieties. It is easy to see that a variety can have at most countably many finitely based or finitely generated subvarieties, hence McKenzie's and Baker's result $\left(|\Lambda|=\left|\Lambda_{\mathcal{M}}\right|=2^{\omega}\right)$ imply that there are both nonfinitely based and nonfinitely generated (modular) lattice varieties. An example of a modular variety that is not finitely based is given in Section 5.3, and $\mathcal{L}$ and $\mathcal{M}$ are examples of varieties that are not finitely generated. In fact Freese [79] showed that, unlike $\mathcal{L}, \mathcal{M}$ is not even generated by its finite members (see Section 3.3). Other such varieties were previously discovered by Baker [69] and Wille [69]. ### The Structure of the Bottom of $\Lambda$ Covering relations between modular varieties. The class of all trivial (one-element) lattices, denoted by $\mathcal{T}=\operatorname{Mod}\{x=y\}$, is the smallest lattice variety and hence the least element of $\Lambda$. If a lattice variety $\mathcal{V}$ properly contains $\mathcal{T}$, then $\mathcal{V}$ must contain a lattice which has the two-element chain 2 as sublattice, hence $2 \in \mathcal{V}$. It is well known that, up to isomorphism, 2 is the only subdirectly irreducible distributive lattice, and therefore generates the variety of all distributive lattices, $\mathcal{D}=\{\mathbf{2}\}^{\mathcal{V}}=\operatorname{Mod}\{x(y+z)=x y+x z\}$. It follows that $\mathcal{D}$ is the unique cover of $\mathcal{T}$ in the lattice $\Lambda$. The next important identity is the modular identity $x y+x z=x(y+x z)$, which defines the variety $\mathcal{M}$ of all modular lattices. Of course every distributive lattice is modular, and a lattice $L$ satisfies the modular identity if and only if, for all $u, v, w \in L$ with $u \leq w$ we have $u+v w=(u+v) w$ (for arbitrary lattices we only have $\leq$ instead of equality). The diamond $M_{3}$ (see Figure 2.1) is the smallest example of a nondistributive modular lattice. A well known result due to Birkhoff [35] states that $M_{3}$ is in fact a sublattice of every nondistributive modular lattice. We give a sketch of the proof. Let $x, y, z \in L$ and define $u=x y+x z+y z$ and $v=(x+y)(x+z)(y+z)$. Then clearly $u \leq v$, and the elements $$ \begin{aligned} & a=u+x v=(u+x) v \\ & b=u+y v=(u+y) v \\ & c=u+z v=(u+z) v \end{aligned} $$ generate a diamond with least element $u$ and greatest element $v$. On the other hand, if $u=v$ for all choices of $x, y, z \in L$, then the identity $$ x y+x z+y z=(x+y)(x+z)(y+z) $$ holds in L, and it is not difficult to see that this identity is equivalent to the distributive identity. 
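The construction of the diamond just described is easy to verify computationally on a small example. The sketch below is an added illustration (Python; the encoding of $M_{3}$ by its principal filters and all names are ad hoc assumptions, not notation from the text): for every triple $x, y, z$ of the five-element lattice $M_{3}$ with $u<v$ it checks that $u, a, b, c, v$ form a diamond.

```python
from itertools import product

# The diamond M3: bottom 0, three atoms p, q, r, top 1.
ELEMENTS = ["0", "p", "q", "r", "1"]
UP = {"0": {"0", "p", "q", "r", "1"}, "p": {"p", "1"}, "q": {"q", "1"},
      "r": {"r", "1"}, "1": {"1"}}

def leq(x, y): return y in UP[x]

def join(x, y):
    ubs = [z for z in ELEMENTS if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in ELEMENTS if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

def is_diamond(bot, a, b, c, top):
    # five distinct elements whose pairwise meets are bot and pairwise joins are top
    if len({bot, a, b, c, top}) != 5:
        return False
    return all(meet(s, t) == bot and join(s, t) == top
               for s, t in [(a, b), (a, c), (b, c)])

found = 0
for x, y, z in product(ELEMENTS, repeat=3):
    u = join(join(meet(x, y), meet(x, z)), meet(y, z))   # u = xy + xz + yz
    v = meet(meet(join(x, y), join(x, z)), join(y, z))   # v = (x+y)(x+z)(y+z)
    if u == v:
        continue                                          # the two terms agree; no diamond arises
    a = join(u, meet(x, v))                               # a = u + xv
    b = join(u, meet(y, v))                               # b = u + yv
    c = join(u, meet(z, v))                               # c = u + zv
    assert is_diamond(u, a, b, c, v)
    found += 1
print(found, "triples with u < v, each generating a diamond")
```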
It follows that every nondistributive modular lattice contains a sublattice isomorphic to $M_{3}$, and consequently the variety $\mathcal{M}_{3}=\left\{M_{3}\right\}^{\mathcal{V}}$ covers $\mathcal{D}$. More generally, since the lattices $M_{n}$ (see Figure 2.1) are simple (hence subdirectly irreducible) modular lattices for each $n \geq 3$, and since $M_{n}$ is a sublattice of $M_{n+1}$, it follows from Corollary 1.7(i) that, up to isomorphism, $\left(\mathcal{M}_{n}\right)_{S I}=\{\mathbf{2}\} \cup\left\{M_{k}: 3 \leq k \leq n\right\}$, where $\mathcal{M}_{n}=\left\{M_{n}\right\}^{\mathcal{V}}$. Hence the varieties $\mathcal{M}_{n}$ form a countable chain of join irreducible modular subvarieties of $\mathcal{M}$, with $\mathcal{M}_{n+1}$ covering $\mathcal{M}_{n}$ for $n \geq 3$. Jónsson [68] proved that for $n \geq 4, \mathcal{M}_{n+1}$ is in fact the only join irreducible cover of $\mathcal{M}_{n}$, and that $\mathcal{M}_{3}$ has exactly two join irreducible covers, $\mathcal{M}_{3^{2}}$ and $\mathcal{M}_{4}$. This result is presented in Section 3.4. Further remarks about the covers of $\mathcal{M}_{3^{2}}$ and various other modular varieties appear at the end of Chapter 3. Covering relations between nonmodular varieties. A lattice variety is said to be nonmodular if it contains at least one nonmodular lattice $L$ (i.e. $L \notin \mathcal{M}$ ). If $L \in \mathcal{V}$ is nonmodular, then we can infer the existence of three elements $u, v, w \in L$ such that $u \leq w$ and $u+v w<(u+v) w$. In that case the elements $a=u+v w, b=v$ and $c=(u+v) w$ generate a sublattice of $L$ that is isomorphic to the pentagon $N$ with critical quotient $c / a$ (see Figure 2.1). Since the pentagon is nonmodular, one obtains the well known result of Dedekind [00]: Every nonmodular lattice contains a sublattice isomorphic to $N$. Many of the later results will be of a similar flavor, in the sense that a certain property is shown to fail precisely because of the presence of some particular sublattices. If $L$ and $K$ are lattices, we say that $L$ excludes $K$ if $L$ does not have a sublattice isomorphic to $K$. Otherwise we say that $L$ includes $K$. In this terminology, modularity is said to be characterized by the exclusion of the pentagon. An immediate consequence is that the variety generated by the pentagon (denoted by $\mathcal{N}$ ) is the smallest nonmodular variety. Again, Jónsson's Lemma enables us to compute $\mathcal{N}_{S I}=\{\mathbf{2}, N\}$ and hence $\mathcal{N}$ is a join irreducible cover of the distributive variety $\mathcal{D}$. Since every lattice is either modular or nonmodular, we conclude that $\mathcal{M}_{3}$ and $\mathcal{N}$ are the only covers of $\mathcal{D}$. In a paper of McKenzie [72] there is a list of 15 subdirectly irreducible lattices $L_{1}, L_{2}, \ldots, L_{15}$ (see Figure 2.2) with the following property: If we let $\mathcal{L}_{i}=\left\{L_{i}\right\}^{\mathcal{V}}(i=$ $1, \ldots, 15)$ then each of them satisfy $\left(\mathcal{L}_{i}\right)_{S I}=\left\{\mathbf{2}, N, L_{i}\right\}$. Hence each $\mathcal{L}_{i}$ is a join irreducible cover of the variety $\mathcal{N}$. It is a nontrivial result, due to Jónsson and Rival [79], that McKenzie's list is complete. A proof of this result appears in Chapter 4. 
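Similarly, the passage from a modularity failure $u+v w<(u+v) w$ to a pentagon can be checked by brute force. The sketch below is again only an added illustration (Python; the five-element pentagon and its labels are ad hoc choices): it searches all triples $u, v, w$ with $u \leq w$ and verifies that each failure yields the five elements $v w$, $a$, $b$, $c$, $u+v$ of a pentagon with critical quotient $c / a$.

```python
from itertools import product

# The pentagon N: 0 < s < t < 1 and 0 < m < 1, with m incomparable to s and t.
ELEMENTS = ["0", "s", "m", "t", "1"]
UP = {"0": {"0", "s", "m", "t", "1"}, "s": {"s", "t", "1"},
      "m": {"m", "1"}, "t": {"t", "1"}, "1": {"1"}}

def leq(x, y): return y in UP[x]

def join(x, y):
    ubs = [z for z in ELEMENTS if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in ELEMENTS if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

failures = 0
for u, v, w in product(ELEMENTS, repeat=3):
    if not leq(u, w):
        continue
    a = join(u, meet(v, w))                  # a = u + vw
    c = meet(join(u, v), w)                  # c = (u+v)w
    if a == c:
        continue                             # modularity holds for this triple
    failures += 1
    b, bot, top = v, meet(v, w), join(u, v)
    # {bot, a, b, c, top} should be a pentagon with critical quotient c/a:
    assert len({bot, a, b, c, top}) == 5
    assert leq(a, c) and not leq(c, a)
    assert meet(a, b) == meet(c, b) == bot
    assert join(a, b) == join(c, b) == top
print(f"{failures} modularity failure(s) found, each yielding a pentagon")
```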
Figure 2.2 (the lattices $L_{1}, \ldots, L_{15}$)

Figure 2.3 (the lattices $V_{1}, \ldots, V_{8}$)

Rose [84] proves that above each of the varieties $\mathcal{L}_{6}, \mathcal{L}_{7}, \mathcal{L}_{8}, \mathcal{L}_{9}, \mathcal{L}_{10}, \mathcal{L}_{13}, \mathcal{L}_{14}$ and $\mathcal{L}_{15}$ there is a chain of varieties $\mathcal{L}_{i}^{n}(n \in \omega)$, each one generated by a finite subdirectly irreducible lattice $L_{i}^{n}\left(L_{i}^{0}=L_{i}\right)$, such that $\mathcal{L}_{i}^{n+1}$ is the unique join irreducible cover of $\mathcal{L}_{i}^{n}$ $(i=6,7,8,9,10,13,14,15)$. Lattice varieties which do not include any of the lattices $M_{3}, L_{1}, \ldots, L_{12}$ are called almost distributive by Lee [85]. They are all locally finite, and Lee shows that their finite subdirectly irreducible members can be characterized in a certain way which, in principle, enables us to determine the position of any finitely generated almost distributive variety in the lattice $\Lambda$. Ruckelshausen [78] investigates the covers of $\mathcal{M}_{3}+\mathcal{N}$, and further results by Nation [85] [86] include a complete list of the covers of the varieties $\mathcal{L}_{1}$ and $\mathcal{L}_{11}, \mathcal{L}_{12}$. Nation also shows that above $\mathcal{L}_{11}$ and $\mathcal{L}_{12}$ there are exactly two covering chains of join irreducible varieties. These results are discussed in more detail at the end of Chapter 4. A diagram of $\Lambda$ is shown in Figure 2.4.

### Splitting Lattices and Bounded Homomorphisms

The concept of splitting. A pair of elements $(x, y)$ in a lattice $L$ is said to be a splitting pair of $L$ if $L$ is the disjoint union of the principal ideal $(x]$ and the principal filter $[y)$ (or equivalently, if for any $z \in L$ we have $z \leq x$ if and only if $y \not\leq z$). The following lemma is an immediate consequence of this definition.

Lemma 2.8 In a complete lattice $L$ the following conditions are equivalent: (i) $(x, y)$ is a splitting pair of $L$; (ii) $x$ is completely meet prime in $L$ and $y=\prod_{z \not\leq x} z$; (iii) $y$ is completely join prime in $L$ and $x=\sum_{z \not\geq y} z$.

The notion of "splitting" a lattice into a (disjoint) ideal and filter was originally investigated by Whitman [43]. In McKenzie [72] this concept is applied to the lattice $\Lambda$, as a generalization of the familiar division of $\Lambda$ into a modular and a nonmodular part. What is noteworthy about McKenzie's and subsequent investigations by others is that they yield greater insight, not only into the structure of $\Lambda$, but also into the structure of free lattices. In this section we first present some basic facts about splitting pairs of varieties in general and then discuss some related concepts and their implications for lattices. Let $\mathcal{V}$ be a variety of algebras and suppose $\left(\mathcal{V}_{0}, \mathcal{V}_{1}\right)$ is a splitting pair in $\Lambda_{\mathcal{V}}$. By Lemma 2.8, $\mathcal{V}_{0}$ is completely meet prime, hence completely meet irreducible, and since $$ \mathcal{V}_{0}=\operatorname{Mod} \operatorname{Id} \mathcal{V}_{0}=\bigcap_{\varepsilon \in \operatorname{Id} \mathcal{V}_{0}} \operatorname{Mod}\{\varepsilon\} $$ it follows that $\mathcal{V}_{0}$ can be defined by a single identity $\varepsilon_{0}$.
Dually, since every variety is generated by its finitely generated subdirectly irreducible members, $\mathcal{V}_{1}=\{A\}^{\mathcal{V}}$ for some finitely generated subdirectly irreducible algebra $A$. We shall refer to such an algebra $A$ (which generates a completely join prime subvariety of $\mathcal{V}$) as a splitting algebra in $\mathcal{V}$, and to the variety $\mathcal{V}_{0}$ as its conjugate variety, defined by the conjugate identity $\varepsilon_{0}$. Note that if $\mathcal{V}$ is generated by its finite members, then we can assume, by Theorem 2.5, that $A$ is a finite algebra. If, in addition, $\mathcal{V}$ is congruence distributive, then Corollary 1.7 (iv) implies that $A$ is unique. In particular, if $\left(\mathcal{V}_{0}, \mathcal{V}_{1}\right)$ is a splitting pair in $\Lambda$, then $\mathcal{V}_{1}=\{L\}^{\mathcal{V}}$ where $L$ is a finite subdirectly irreducible lattice, and we refer to such a lattice as a splitting lattice. The two standard examples of splitting pairs in $\Lambda$ are $(\mathcal{T}, \mathcal{D})$ and $(\mathcal{M}, \mathcal{N})$.

Projective Algebras. An algebra $P$ in a class $\mathcal{K}$ of algebras is said to be projective in $\mathcal{K}$ if for any homomorphism $h: P \rightarrow B$ and epimorphism $g: A \rightarrow B$ with $A, B \in \mathcal{K}$, there exists a homomorphism $f: P \rightarrow A$ such that $h=g f$ (Figure 2.5(i)). An algebra $B$ is a retract of an algebra $A$ if there exist homomorphisms $f: B \rightarrow A$ and $g: A \rightarrow B$ such that $g f$ is the identity on $B$. Clearly $f$ must be an embedding, and $g$ is called a retraction of $f$.

LEMMA 2.9 Let $\mathcal{K}$ be a class of algebras in which $\mathcal{K}$-free algebras exist. Then, for any $P \in \mathcal{K}$, the following conditions are equivalent:

(i) $P$ is projective in $\mathcal{K}$;

(ii) For any algebra $A \in \mathcal{K}$ and any epimorphism $g: A \rightarrow P$ there exists an embedding $f: P \hookrightarrow A$ such that $g f$ is the identity on $P$;

(iii) $P$ is a retract of a $\mathcal{K}$-free algebra.

Proof. (ii) is a special case of (i), with $B=P$ and $h$ the identity on $P$. Clearly $f$ must be an embedding in this case. Suppose (ii) holds, and let $X$ be a set with $|X|=|P|$. Then there exists an epimorphism $g: F_{\mathcal{K}}(X) \rightarrow P$. Since $F_{\mathcal{K}}(X) \in \mathcal{K}$, it follows from (ii) that $P$ is a retract of $F_{\mathcal{K}}(X)$. Lastly, suppose $P$ is a retract of some $\mathcal{K}$-free algebra $F_{\mathcal{K}}(X)$, and let $h: P \rightarrow B$ and $g: A \rightarrow B$ be given $(A, B \in \mathcal{K})$. Then there exist $f^{\prime}: P \rightarrow F_{\mathcal{K}}(X)$ and $g^{\prime}: F_{\mathcal{K}}(X) \rightarrow P$ such that $g^{\prime} f^{\prime}$ is the identity on $P$ (Figure 2.5(ii)). Since $g$ is onto, we can define a map $k: X \rightarrow A$ satisfying $g k(x)=h g^{\prime}(x)$ for all $x \in X$. Let $\bar{k}$ be the extension of $k$ to $F_{\mathcal{K}}(X)$; then $\bar{k} f^{\prime}$ is the required homomorphism from $P$ to $A$. Thus, for any variety $\mathcal{V}$, every $\mathcal{V}$-free algebra is projective in $\mathcal{V}$.

LEMMA 2.10 (Rose [84]). Let $\mathcal{K}$ be a class of algebras and suppose that $P \in \mathcal{K}^{\mathcal{V}}$ is subdirectly irreducible and projective in $\mathcal{K}^{\mathcal{V}}$. Then $P$ is isomorphic to a subalgebra of some member of $\mathcal{K}$.

Proof. Since $P \in \mathcal{K}^{\mathcal{V}}=\mathbf{HSP} \mathcal{K}$, Lemma 2.9 (ii) implies that $P \in \mathbf{SP} \mathcal{K}$.
Hence we can assume that $P$ is a subalgebra of a direct product $\prod_{i \in I} A_{i}$, where $A_{i} \in \mathcal{K}$ for $i$ in some index set $I$. Denoting the projection map from the product to each $A_{i}$ by $\pi_{i}$, we see that $P$ is a subdirect product of the family of algebras $\left\{\pi_{i}(P): i \in I\right\}$. But $P$ is assumed to be subdirectly irreducible, so there exists $j \in I$ such that $\pi_{j}(P)$ is isomorphic to $P$. Therefore $\pi_{j}: P \rightarrow A_{j}$ is an embedding.

Recall from Section 1.1 that $F_{\mathcal{V}}(X)$ can be constructed as a quotient algebra of the word algebra $W(X)$, whence every element of $F_{\mathcal{V}}(X)$ can be represented by a term of $W(X)$. Also if $A$ is an algebra and $p, q$ are terms in $W(X)$, then the identity $p=q$ holds in $A$ if and only if $h(p)=h(q)$ for every homomorphism $h: W(X) \rightarrow A$. Notice that if $\mathcal{V}$ is a variety containing $A$, then any such $h$ can be factored through $F_{\mathcal{V}}(X)$. The following theorem was proved by McKenzie [72] for $\mathcal{L}$-free lattices, and then generalized to projective lattices by Wille [72] and to projective algebras by Day [75].

Theorem 2.11 Let $\mathcal{V}$ be a variety of algebras, suppose $P \in \mathcal{V}$ is projective in $\mathcal{V}$, and for some $a, b \in P$ there is a largest congruence $\psi \in \operatorname{Con}(P)$ which does not identify $a$ and $b$. Then $P / \psi$ is a splitting algebra in $\mathcal{V}$, and if $f: P \hookrightarrow F_{\mathcal{V}}(X)$ is an embedding and $g: F_{\mathcal{V}}(X) \rightarrow P$ is a retraction of $f$ (i.e. $g f=i d_{P}$) then for any terms $p, q$ which represent $f(a), f(b)$ respectively, the identity $p=q$ is a conjugate identity of $P / \psi$.

Proof. It is enough to show that $p=q$ fails in $P / \psi$ but holds in any subvariety of $\mathcal{V}$ that does not contain $P / \psi$. Let $\gamma: P \rightarrow P / \psi$ be the canonical epimorphism. Then $\gamma g$ does not identify $f(a)$ and $f(b)$, hence by the remark above, $p=q$ fails in $P / \psi$. Suppose now that $\mathcal{V}^{\prime}$ is any subvariety of $\mathcal{V}$ not containing $P / \psi$, and let $h: F_{\mathcal{V}}(X) \rightarrow F_{\mathcal{V}^{\prime}}(X)$ be the extension of the identity map on the generating set $X$. Clearly the identity $p=q$ will hold in $\mathcal{V}^{\prime}$ if and only if $h f(a)=h f(b)$. Suppose to the contrary that $a$ and $b$ are not identified by $\operatorname{ker} h f$. Since $\psi$ is assumed to be the largest such congruence, we have $\operatorname{ker} h f \subseteq \psi$, and it follows from the second homomorphism theorem that $\gamma: P \rightarrow P / \psi$ factors through $F_{\mathcal{V}^{\prime}}(X)$, hence $P / \psi \in \mathcal{V}^{\prime}$, a contradiction. Therefore $p=q$ holds in $\mathcal{V}^{\prime}$. In particular, any projective subdirectly irreducible algebra is splitting. Combining Lemma 1.10 with the above theorem we obtain the result of Wille [72]:

Corollary 2.12 Let $\mathcal{V}$ be a lattice variety, $u / v$ a prime quotient in some lattice $P \in \mathcal{V}$ which is projective in $\mathcal{V}$, and suppose $\theta$ is the largest congruence on $P$ that does not collapse $u / v$. Then $P / \theta$ is a splitting lattice in $\mathcal{V}$.

If we take $\mathcal{V}$ to be the variety $\mathcal{L}$ of all lattices, then the converse of the above corollary is also true.
In fact it follows from a result of McKenzie (Corollary 2.26) that every splitting lattice in $\mathcal{L}$ is isomorphic to a quotient lattice of some finitely generated free lattice $F(n)$ modulo a congruence $\theta$ which is maximal with respect to not collapsing some prime quotient of $F(n)$. Before we can prove this result, however, we need some more information about free lattices, which is due to Whitman [41] [42] and can also be found in [ATL] and [GLT]. Whitman showed that a lattice $L$ is ($\mathcal{L}$-)freely generated by a set $X \subseteq L$ if and only if for all $x, y \in X$ and $a, b, c, d \in L$ the following four conditions are satisfied:

(W1) $x \leq y$ implies $x=y$ (i.e. generators are incomparable)

(W2) $a b \leq y$ implies $a \leq y$ or $b \leq y$

(W3) $x \leq c+d$ implies $x \leq c$ or $x \leq d$

(W) $a b \leq c+d$ implies $a \leq c+d$ or $b \leq c+d$ or $a b \leq c$ or $a b \leq d$.

(These conditions are also known as Whitman's solution to the word problem for lattices since they provide an algorithm for testing when two lattice terms represent the same element of a free lattice.) In fact Jónsson [70] showed that if $\mathcal{V}$ is a nontrivial lattice variety then (W1), (W2) and (W3) hold in any $\mathcal{V}$-freely generated lattice. We give a proof of this result. A subset $X$ of a lattice $L$ is said to be irredundant if for any distinct $x, x_{1}, x_{2}, \ldots, x_{n} \in X$ $$ \begin{array}{ll} \left(\mathrm{W} 2^{\prime}\right) & x \not\geq x_{1} x_{2} \ldots x_{n} \quad \text { and } \\ \left(\mathrm{W} 3^{\prime}\right) & x \not\leq x_{1}+x_{2}+\ldots+x_{n} . \end{array} $$ We also require the important notion of a join-cover. Let $U$ and $V$ be two nonempty finite subsets of a lattice $L$. We say that $U$ refines $V$ (in symbols $U \ll V$) if for every $u \in U$ there exists $v \in V$ such that $u \leq v$. $V$ is a join-cover of $a \in L$ if $a \leq \sum V$, and a join-cover $V$ of $a$ is nontrivial if $a \not\leq v$ for all $v \in V$. Observe that any join-cover of $a$, which refines a nontrivial join-cover of $a$, is itself nontrivial. The notion of a meet-cover is defined dually.

Lemma 2.13 (Jónsson [70]). Let $L$ be a lattice generated by a set $X \subseteq L$.

(i) If every join-cover $U \subseteq X$ of an element $a \in L$ is trivial, then every join-cover of $a$ is trivial.

(ii) $X$ is irredundant if and only if for all $x, y \in X$ and $a, b, c, d \in L$ (W1), (W2) and (W3) hold.

Proof. (i) Let $Y$ be a subset of $L$ for which every join-cover $U \subseteq Y$ of $a$ is trivial, and let $S(Y)$ and $P(Y)$ denote the sets of all elements of $L$ that are (finite) joins and meets of elements of $Y$ respectively. Since every join-cover $V \subseteq S(Y)$ of $a$ is refined by a join-cover $U \subseteq Y$, $V$ is trivial. We claim that the same holds for every join-cover $V \subseteq P(Y)$ of $a$. Suppose $V$ is a finite nonempty subset of $P(Y)$ such that for all $v \in V$, $a \not\leq v$. Each element $v$ is the meet of a nonempty set $U_{v} \subseteq Y$. Since $a \not\leq v$, there exists an element $u_{v} \in U_{v}$ such that $a \not\leq u_{v}$. Each $u_{v}$ belongs to $Y$, so the set $W=\left\{u_{v}: v \in V\right\}$ cannot be a join-cover of $a$, i.e. $a \not\leq \sum W$. But $v=\prod U_{v} \leq u_{v}$, and hence $\sum V \leq \sum W$. It follows that $a \not\leq \sum V$, which means that $V$ is not a join-cover of $a$. This contradiction proves the claim. Now let $Y_{0}=P(X)$ and $Y_{n+1}=P S\left(Y_{n}\right)$ for $n \in \omega$.
If every join-cover $U \subseteq X$ of $a$ is trivial, then by the above this is also true for $X$ replaced by $Y_{n}$. Since $X$ generates $L$, we have $L=\bigcup_{n \in \omega} Y_{n}$, so if $V$ is any join-cover of $a$, then $V \subseteq Y_{m}$ for some $m \in \omega$, and therefore $V$ is trivial.

(ii) If $X$ is irredundant, then clearly the elements of $X$ must be incomparable, so (W1) is satisfied. It also follows that any join-cover $U \subseteq X$ of a generator $x \in X$ is trivial, hence by part (i) every join-cover of $x$ is trivial. This implies (W3), and (W2) follows by duality. Conversely, if $x \leq x_{1}+x_{2}+\cdots+x_{n}$ with all $x, x_{1}, \ldots, x_{n} \in X$ distinct, then repeated application of (W3) yields $x \leq x_{i}$ and by (W1) $x=x_{i}$ for some $i=1, \ldots, n$. This contradiction, and its dual argument, shows that $X$ is irredundant.

Theorem 2.14 (Jónsson [70]). Let $\mathcal{K}$ be a class of lattices that contains at least one nontrivial lattice. If $F$ is a lattice that is $\mathcal{K}$-freely generated by a set $X$, then $X$ is irredundant.

Proof. Suppose $x, x_{1}, \ldots, x_{n} \in X$ are distinct, and let $L \in \mathcal{K}$ be a lattice with more than one element. Then there exist $a, b \in L$ such that $a \not\leq b$. Choose a map $f: X \rightarrow L$ such that $f(x)=a$ and $f\left(x_{i}\right)=b$ (all $i$). Then the extension of $f$ to a homomorphism $\bar{f}: F \rightarrow L$ satisfies $\bar{f}(x)=a \not\leq b=\bar{f}\left(x_{1}+\ldots+x_{n}\right)$, and hence $x \not\leq x_{1}+\ldots+x_{n}$. By duality $X$ is irredundant.

The last condition (W) is usually referred to as Whitman's condition, and it may be considered as a condition applicable to lattices in general, since it makes no reference to the generators of $L$. Clearly, if a lattice $L$ satisfies (W), then every sublattice of $L$ again satisfies (W). In particular Whitman's result and Lemma 2.9 show that every projective lattice satisfies (W). Day [70] found a very simple proof of this fact based on a construction which we will use several times in this section and in Chapters 4 and 6. Given a lattice $L$ and a quotient $I=u / v$ in $L$, we construct a new lattice $$ L[I]=(L-I) \cup(I \times \mathbf{2}) $$ with the ordering $x \leq y$ in $L[I]$ if and only if one of the following conditions holds:

(i) $x, y \in L-I$ and $x \leq y$ in $L$;

(ii) $x \in L-I$, $y=(b, j)$ and $x \leq b$ in $L$;

(iii) $y \in L-I$, $x=(a, i)$ and $a \leq y$ in $L$;

(iv) $x=(a, i)$, $y=(b, j)$ and $a \leq b$ in $L$, $i \leq j$ in $\mathbf{2}$.

$L[I]$ is referred to as $L$ with doubled quotient $u / v$, and it is easy to check that $L[I]$ is in fact a lattice ($\mathbf{2}=\{0,1\}$ is the two-element chain with $0<1$). Also there is a natural epimorphism $\gamma: L[u / v] \rightarrow L$ defined by $$ \gamma(x)= \begin{cases}x & \text { if } \quad x \in L-u / v \\ a & \text { if } \quad x=(a, i) \text { for some } \quad i \in \mathbf{2} .\end{cases} $$ We say that a 4-tuple $(a, b, c, d) \in L^{4}$ is a (W)-failure of $L$ if $a b \leq c+d$ but $a b \not\leq c, d$ and $a, b \not\leq c+d$.

Lemma 2.15 (Day [70]). Let $(a, b, c, d)$ be a (W)-failure of $L$, and let $u / v=c+d / a b$. Then there does not exist an embedding $f: L \hookrightarrow L[u / v]$ such that $\gamma f$ is the identity map on $L$ (i.e. there exists no coretraction of $\gamma$).

Proof. Suppose the contrary. Then $f(x)=x$ for each $x \in L-u / v$, and $f(v) \leq f(u)$. But $f(v)=f(a b)=f(a) f(b)=(v, 1)$, since $a, b \not\leq u$ implies $f(a)=a$ and $f(b)=b$, and the meet of $a$ and $b$ in $L[u / v]$ is $(v, 1)$; dually $f(u)=(u, 0)$.
This is a contradiction since by definition $(v, 1) \not \leq(u, 0)$. From the equivalence of (i) and (ii) of Lemma 2.9 we now obtain: Corollary 2.16 (i) Every lattice which is projective in $\mathcal{L}$ satisfies (W). (ii) Every free lattice satisfies (W1), (W2), (W3) and (W). As mentioned before, the converse of (ii) is also true. A partial converse of (i) is given by Theorem 2.19. Bounded homomorphisms. A lattice homomorphism $f: L \rightarrow L^{\prime}$ is said to be upper bounded - if for every $b \in L^{\prime}$ the set $f^{-1}(b]=\{x \in L: f(x) \leq b\}$ is either empty or has a greatest element, denoted by $\alpha_{f}(b)$; lower bounded - if for every $b \in L^{\prime}$ the set $f^{-1}[b)=\{x \in L: f(x) \geq b\}$ is either empty or has a least element, denoted by $\beta_{f}(b)$; bounded - if $f$ is both upper and lower bounded; (upper/lower) bounded - if a given one of the above three properties holds. The following lemma lists some easy consequences of the above definitions. LemMa 2.17 Let $f: L \rightarrow L^{\prime}$ and $g: L^{\prime} \rightarrow L^{\prime \prime}$ be two lattice homomorphisms. (i) If $L$ is a finite lattice, then $f$ is bounded. (ii) If $f$ and $g$ are (upper/lower) bounded then $g f$ is (upper/lower) bounded. (iii) If $g f$ is (upper/lower) bounded and $f$ is an epimorphism, then $g$ is (upper/lower) bounded. (iv) If $g f$ is (upper/lower) bounded and $g$ is an embedding, then $f$ is (upper/lower) bounded. (v) If $f$ is an upper bounded epimorphism and $b \in L^{\prime}$ then $\alpha_{f}(b)$ is the greatest member of $f^{-1}\{b\}$, and the map $\alpha_{f}: L^{\prime} \hookrightarrow L$ is meet preserving. Dually, if $f$ is a lower bounded epimorphism then $\beta_{f}(b)$ is the least member of $f^{-1}\{b\}$, and $\beta_{f}: L^{\prime} \hookrightarrow L$ is join preserving. A lattice is said to be upper bounded if it is an upper bounded epimorphic image of some free lattice, and lower bounded if the dual condition holds. A lattice which is a bounded epimorphic image of some free lattice is said to be bounded (not to be confused with lattices that have a largest and a smallest element: such lattices will be referred to as 0,1 - lattices). Of course every bounded lattice is both upper and lower bounded. We shall see later (Theorem 2.23) that, for finitely generated lattices, the converse also holds. The notion of a bounded homomorphism was introduced by McKenzie [72], and he used it to characterize splitting lattices as subdirectly irreducible finite bounded lattices (Theorem 2.25). We first prove a result of Kostinsky [72] which shows that every bounded lattice that satisfies Whitman's condition (W) is projective. For finitely generated lattices this result was already proved in McKenzie [72]. LEMMA 2.18 Suppose $F(X)$ is a free lattice generated by the set $X$ and let $f: F(X) \rightarrow L$ be a lower bounded epimorphism. Then for each $b \in L$ the set $\{x \in X: f(x) \geq b\}$ is finite. Proof. Since $f$ is lower bounded and onto, $\beta_{f}(b)$ exists. Suppose it is represented by a term $t\left(x_{1}, \ldots, x_{n}\right) \in F(X)$ for some $x_{1}, \ldots, x_{n} \in X$, then clearly $$ \{x \in X: f(x) \geq b\}=\left\{x \in X: x \geq t\left(x_{1}, \ldots, x_{n}\right)\right\} \subseteq\left\{x_{1}, \ldots, x_{n}\right\} . $$ In particular note that if a finite lattice is a (lower) bounded homomorphic image of a free lattice $F$, then $F$ must necessarily be finitely generated. Theorem 2.19 (Kostinsky [72]). 
Every bounded lattice that satisfies (W) is projective (in $\mathcal{L}$ ). Proof. Let $f$ be a bounded homomorphism from some free lattice $F(X)$ on to a lattice $L$, and suppose that $L$ satisfies (W). We show that $L$ is a retract of $F(X)$, and then apply Lemma 2.9. To simplify the notation we will denote the maps $\alpha_{f}$ and $\beta_{f}$ simply by $\alpha$ and $\beta$. Let $h: F(X) \rightarrow F(X)$ be the endomorphism that extends the map $x \mapsto \alpha f(x), x \in X$. We claim that for each $a \in F(X), \beta f(a) \leq h(a) \leq \alpha f(a)$, from which it follows that $f h(a)=f(a)$. This is clearly true for $x \in X$. Suppose it holds for $a, a^{\prime} \in F(X)$. Since $\beta$ is join-preserving $$ \begin{aligned} \beta f\left(a+a^{\prime}\right) & =\beta f(a)+\beta f\left(a^{\prime}\right) \leq h(a)+h\left(a^{\prime}\right) \\ & =h\left(a+a^{\prime}\right) \leq \alpha f(a)+\alpha f\left(a^{\prime}\right) \leq \alpha f\left(a+a^{\prime}\right) \end{aligned} $$ and similarly since $\alpha$ is meet-preserving $\beta f\left(a a^{\prime}\right) \leq h\left(a a^{\prime}\right) \leq \alpha f\left(a a^{\prime}\right)$. This establishes the claim. We now define the map $g: L \rightarrow F(X)$ by $g=h \beta$. Then $g$ is clearly join-preserving. Also $f g(b)=f h \beta(b)=f \beta(b)=b$ for all $b \in L$ (since $f h=f)$. Thus we need only show that $g$ is meet-preserving. Since $h \beta$ is orderpreserving, it suffices to show that $(*) \quad h \beta(b c) \geq h(\beta(b) \beta(c)) \quad$ for all $\quad b, c \in L$. We first observe that, by the preceding lemma, the set $S=\{x \in X: f(x) \geq b c\}$ is finite. Let $$ u=\left\{\begin{array}{lll} \beta(b) \beta(c) \Pi S & \text { if } \quad S \neq \emptyset \\ \beta(b) \beta(c) & \text { if } \quad S=\emptyset . \end{array}\right. $$ Then clearly $f(u)=b c$. We claim that $u=\beta(b c)$. To see this, let $Y$ be the set of all $a \in F(X)$ such that $b c \leq f(a)$ implies $u \leq a$. By definition $X \subseteq Y$, and $Y$ is closed under meets. Suppose $a, a^{\prime} \in Y$ and $b c \leq f(a)+f\left(a^{\prime}\right)$. Since $L$ satisfies (W), we have $$ b \leq f(a)+f\left(a^{\prime}\right) \text { or } c \leq f(a)+f\left(a^{\prime}\right) \text { or } b c \leq f(a) \text { or } b c \leq f\left(a^{\prime}\right) . $$ Applying $\beta$ to the first two cases we obtain $u \leq \beta(b) \leq a+a^{\prime}$ or $u \leq \beta(c) \leq a+a^{\prime}$, and in the third and fourth case we have $u \leq a \leq a+a^{\prime}$ or $u \leq a^{\prime} \leq a+a^{\prime}$ (since $a, a^{\prime} \in Y$ ). Thus $Y=F(X)$, and it follows that $u$ is the least element for which $f(u) \geq b c$, i.e. $u=\beta(b c)$. Now consider $h(u)=h \beta(b c)$. If $S=\emptyset$ then $\beta(b c)=\beta(b) \beta(c)$ and so $(*)$ holds. If $S \neq \emptyset$ then $h \beta(b c)=h \beta(b) h \beta(c) \prod h(S)$ so to prove $(*)$, it is enough to show that $h(\beta(b) \beta(c)) \leq h(x)$ for all $x \in S$. But any such $x$ satisfies $f(\beta(b) \beta(c))=b c \leq f(x)$, and applying $\alpha$ we get $\beta(b) \beta(c) \leq \alpha f(x)=h(x)$. Thus $h(\beta(b) \beta(c)) \leq h h(x)$ and, since we showed that $h(a) \leq \alpha f(a)$, we have $h h(x) \leq \alpha f h(x)=\alpha f(x)=h(x)$ as required. A complete characterization of the projective lattices in $\mathcal{L}$ can be found in Freese and Nation [78]. We now describe a particularity elegant algorithm, due to Jónsson, to determine whether a finitely generated lattice is bounded. 
Let $D_{0}(L)$ be the set of all $a \in L$ that have no nontrivial join-cover (i.e. the set of all join-prime elements of $L$). For $k \in \omega$ let $D_{k+1}(L)$ be the set of all $a \in L$ such that if $V$ is any nontrivial join-cover of $a$, then there exists a join-cover $U \subseteq D_{k}(L)$ of $a$ with $U \ll V$. Note that $D_{0}(L) \subseteq D_{1}(L)$, and if we assume that $D_{k-1}(L) \subseteq D_{k}(L)$ for some $k \geq 1$, then for any $a \in D_{k}(L)$ and any nontrivial join-cover $V$ of $a$ there exists a join-cover $U \subseteq D_{k-1}(L) \subseteq D_{k}(L)$ of $a$ with $U \ll V$, whence $a \in D_{k+1}(L)$. So, by induction we have $$ D_{0}(L) \subseteq D_{1}(L) \subseteq \cdots \subseteq D_{k}(L) \subseteq \cdots $$ Finally, let $D(L)=\bigcup_{k \in \omega} D_{k}(L)$ and define the sets $D_{k}^{\prime}(L)$ and $D^{\prime}(L)$ dually. Jónsson's algorithm states that a finitely generated lattice $L$ is bounded if and only if $D(L)=L=D^{\prime}(L)$. This result will follow from Theorem 2.23.

Lemma 2.20 (Jónsson and Nation [75]). Suppose $L$ is a lattice generated by a set $X \subseteq L$, let $H_{0}$ be the set of all (finite) meets of elements of $X$, and for $k \in \omega$ let $H_{k+1}$ be the set of all meets of joins of elements of $H_{k}$. Then $D_{k}(L) \subseteq H_{k}$ for all $k \in \omega$.

Proof. Note that the $H_{k}$ are closed under meets, $H_{0} \subseteq H_{1} \subseteq \ldots$ and $L=\bigcup_{k \in \omega} H_{k}$. Suppose $a \in D_{0}(L)$ and let $m \in \omega$ be the smallest number for which $a \in H_{m}$. If $m>0$, then $a$ is of the form $a=\prod_{i=1}^{n} \sum U_{i}$, where each $U_{i}$ is a finite subset of $H_{m-1}$. $U_{i}$ is a join-cover of $a$ for each $i=1, \ldots, n$, and since $a \in D_{0}(L)$, it must be trivial. Hence there exists $a_{i} \in U_{i}$ with $a \leq a_{i}$ for each $i$. But then $a=\prod_{i=1}^{n} a_{i} \in H_{m-1}$, a contradiction. Thus $D_{0}(L) \subseteq H_{0}$. We proceed by induction. Suppose $D_{k}(L) \subseteq H_{k}$, $a \in D_{k+1}(L)$ and $m \geq k+1$ is the smallest number for which $a \in H_{m}$. Again $a$ is of the form $a=\prod_{i=1}^{n} \sum U_{i}$ with $U_{i} \subseteq H_{m-1}$ and each $U_{i}$ is a join-cover of $a$. If $U_{i}$ is trivial, pick $a_{i} \in U_{i}$ with $a \leq a_{i}$, and if $U_{i}$ is nontrivial, pick a join-cover $V_{i} \subseteq D_{k}(L)$ of $a$ with $V_{i} \ll U_{i}$. By assumption each $V_{i}$ is a subset of $H_{k}$. If $m>k+1$ then $\sum V_{i} \in H_{k+1}$ and $a$ is the meet of these elements $\sum V_{i}$ and the $a_{i}$, so $a \in H_{m-1}$, a contradiction. Thus $m=k+1$ and $D_{k+1}(L) \subseteq H_{k+1}$.

For the next lemma, note that Whitman's condition (W) is equivalent to the following: for any two finite subsets $U, V$ of $L$, if $a=\prod U \leq \sum V=b$, then $V$ is a trivial join-cover of $a$ or $U$ is a trivial meet-cover of $b$.

Lemma 2.21 Suppose $L=F(X)$ is freely generated by $X$, and let $H_{k}$ be as in the previous lemma. Then $D_{k}(L)=H_{k}$ and therefore $D(L)=L$.

Proof. By the previous lemma, it is enough to show that $H_{k} \subseteq D_{k}(L)$ for each $k \in \omega$. If $a \in H_{0}$, then $a=\prod_{i=1}^{n} x_{i}$ for some $x_{i} \in X$, and Whitman's condition (W) and (W3) imply that any join-cover of $a$ must be trivial, hence $a \in D_{0}(L)$. Next suppose $a \in H_{1}$. Then $a=\prod_{i=1}^{n} \sum U_{i}$ for some finite sets $U_{i} \subseteq H_{0}=D_{0}(L)$, some $n \in \omega$.
If $V$ is a nontrivial join-cover of $a$, then (W) implies that for some $i_{0}$ we have $\sum U_{i_{0}} \leq \sum V$ (see remark above). Since $U_{i_{0}} \subseteq D_{0}(L)$, $V$ is a trivial join-cover of each $u \in U_{i_{0}}$, and therefore $U_{i_{0}} \ll V$. Since $U_{i_{0}}$ is also a join-cover of $a$, it follows that $a \in D_{1}(L)$. Proceeding by induction, suppose now that $D_{k}(L)=H_{k}$ and $a \in H_{k+1}$, for some $k \geq 1$. Then $a=\prod_{i=1}^{n} \sum U_{i}$ for some $U_{i} \subseteq H_{k}$, some $n \in \omega$, and each $U_{i}$ is a join-cover of $a$. Let $V$ be any nontrivial join-cover of $a$. As before (W) implies that $\sum U_{i_{0}} \leq \sum V$ for some $i_{0}$. Let $W$ be the set of all $u \in U_{i_{0}}$ such that $V$ is a nontrivial join-cover of $u$, and set $W^{\prime}=U_{i_{0}}-W$. Since $W \subseteq H_{k}=D_{k}(L)$, there exists for each $u \in W$ a join-cover $V_{u} \subseteq D_{k-1}(L)$ of $u$ with $V_{u} \ll V$. It is now easy to check that $U=W^{\prime} \cup \bigcup_{u \in W} V_{u}$ is a join-cover of $a$ which refines $V$ and is contained in $D_{k}(L)$. Hence $a \in D_{k+1}(L)$.

A join-cover $V$ of $a$ in a lattice $L$ is said to be irredundant if no proper subset of $V$ is a join-cover of $a$, and minimal if for any join-cover $U$ of $a$, $U \ll V$ implies $V \subseteq U$. Observe that every join-cover contains an irredundant join-subcover (since it is a finite set) and that the elements of an irredundant join-cover are noncomparable. Also, every minimal join-cover is irredundant.

LEMMA 2.22 (Jónsson and Nation [75]). If $F$ is a free lattice and $V$ is a nontrivial join-cover of some $a \in D_{k}(F)$, then there exists a minimal join-cover $V_{0}$ of $a$ with $V_{0} \ll V$ and $V_{0} \subseteq D_{k-1}(F)$.

Proof. First assume that $F$ is freely generated by a finite set $X$. Suppose $V$ is a nontrivial join-cover of $a \in D_{k}(F)$, and let $\mathcal{C}$ be the collection of all irredundant join-covers $U \subseteq D_{k-1}(F)$ of $a$ which refine $V$. $\mathcal{C}$ is nonempty since $a \in D_{k}(F)$, and by Lemma 2.20 $D_{k}(F)$ is finite, hence $\mathcal{C}$ is finite. Note that if $U \in \mathcal{C}$ and $U \ll W \ll U$ for some subset $W$ of $F$, then for each $u \in U$ there exists $w \in W$ and $u^{\prime} \in U$ such that $u \leq w \leq u^{\prime}$, and since the elements of $U$ are noncomparable, we must have $u=w=u^{\prime}$ and therefore $U \subseteq W$. In particular, it follows that $\mathcal{C}$ is partially ordered by the relation $\ll$. Let $V_{0}$ be a minimal (with respect to $\ll$) member of $\mathcal{C}$, and suppose $U$ is any join-cover of $a$ with $U \ll V_{0}$. Because $V$ is assumed to be nontrivial, so are $V_{0}$ and $U$, and since $a \in D_{k}(F)$, there exists a join-cover $U_{0} \subseteq D_{k-1}(F)$ of $a$ with $U_{0} \ll U$. It follows that $U_{0} \ll V_{0}$, and since we may assume that $U_{0} \in \mathcal{C}$, we have $U_{0}=V_{0}$. Therefore $V_{0} \ll U \ll V_{0}$ which implies $V_{0} \subseteq U$. Thus $V_{0}$ is a minimal join-cover of $a$. Assume now that $X$ is infinite, and choose a finite subset $Y$ of $X$ such that $V \cup\{a\}$ is contained in the sublattice $F^{\prime}$ generated by $Y \subseteq F$. By the first part of the proof, there exists a set $V_{0} \subseteq D_{k-1}\left(F^{\prime}\right) \subseteq D_{k-1}(F)$ such that $V_{0} \ll V$ and $V_{0}$ is a minimal join-cover of $a$ in $F^{\prime}$. We show that $V_{0}$ is also a minimal join-cover of $a$ in $F$.
Let $F_{0}$ be the lattice obtained by adjoining a smallest element 0 to $F$, and let $h$ be the endomorphism of $F_{0}$ that maps each member of $Y$ onto itself and all the remaining elements of $X$ onto 0. Then $h$ maps every member of $F^{\prime}$ onto itself, and $h(u) \leq u$ for all $u \in F_{0}$. Hence, if $U$ is any join-cover of $a$ in $F$ with $U \ll V_{0}$, then the set $U^{\prime}=h(U)-\{0\}$ is a join-cover of $a$ in $F^{\prime}$ with $U^{\prime} \ll V_{0}$, so that $V_{0} \subseteq U^{\prime} \ll U$. Thus $V_{0} \ll U \ll V_{0}$, which implies that $V_{0} \subseteq U$.

For finitely generated lattices we can now give an internal characterization of lower boundedness. This result, together with its dual, implies that an upper and lower bounded finitely generated lattice is bounded.

Theorem 2.23 (Jónsson and Nation [75]). For any finitely generated lattice $L$, the following statements are equivalent:

(i) $L$ is lower bounded;

(ii) $D(L)=L$;

(iii) Every homomorphism of a finitely generated lattice into $L$ is lower bounded.

Proof. Suppose (i) holds, let $f$ be a lower bounded epimorphism that maps some free lattice $F$ onto $L$, and denote by $\beta(a)=\beta_{f}(a)$ the smallest element of the set $f^{-1}[a)$ for all $a \in L$. We show by induction that
$$
\beta(a) \in D_{k}(F) \quad \text { implies } \quad a \in D_{k}(L);
$$
then $D(L)=L$ follows from the result $D(F)=F$ of Lemma 2.21. Since $\beta$ is a join-preserving map, the image $\beta(U)$ of a join-cover $U$ of $a$ is a join-cover of $\beta(a)$. If $\beta(U)$ is trivial, then $\beta(a) \leq \beta(u)$ for some $u \in U$, hence $a=f \beta(a) \leq f \beta(u)=u$ and $U$ is also trivial. It follows that $\beta(a) \in D_{0}(F)$ implies $a \in D_{0}(L)$. Suppose now that $\beta(a) \in D_{k}(F)$ and that $U$ is a nontrivial join-cover of $a$. Then $\beta(U)$ is a nontrivial join-cover of $\beta(a)$ and by Lemma 2.22 there exists a minimal join-cover $U_{0} \subseteq D_{k-1}(F)$ of $\beta(a)$ with $U_{0} \ll \beta(U)$. Clearly $f\left(U_{0}\right)$ is a join-cover of $a$ and $f\left(U_{0}\right) \ll U$. Furthermore, the set $\beta f\left(U_{0}\right)$ is a join-cover of $\beta(a)$ with $\beta f\left(U_{0}\right) \ll U_{0}$. By the minimality of $U_{0}$, we have $U_{0} \subseteq \beta f\left(U_{0}\right)$, and since $U_{0}$ is finite, this implies $\beta f\left(U_{0}\right)=U_{0} \subseteq D_{k-1}(F)$. It now follows from the induction hypothesis that $f\left(U_{0}\right) \subseteq D_{k-1}(L)$, and therefore $a \in D_{k}(L)$.

Now assume $D(L)=L$, and consider a homomorphism $f: K \rightarrow L$ where $K$ is generated by a finite set $Y$. Let $H_{0}=Y$, and for $k \in \omega$ let $H_{k+1}$ be the set of all joins of meets of elements in $H_{k}$. For each $k \in \omega$ define maps $\beta_{k}: L \rightarrow K$ by
$$
\beta_{k}(a)=\prod\left\{y \in H_{k}: f(y) \geq a\right\}
$$
(in particular $\prod \emptyset=\sum Y=1_{K}$). We claim that for every $k \in \omega$ and $a \in D_{k}(L)$, if the set $f^{-1}[a)$ is nonempty, then $f^{-1}[a)=\left[\beta_{k}(a)\right)$, and $\beta_{k}(a)$ is therefore the smallest element of this set. Since $D(L)=L$ it then follows that $f$ is lower bounded. So suppose that $a \in D_{k}(L)$ for some $k$, and $f^{-1}[a)$ is nonempty. If $x \in K$ and $x \geq \beta_{k}(a)$ then
$$
f(x) \geq f \beta_{k}(a)= \begin{cases}f\left(1_{K}\right) & \text { if }\left\{y \in H_{k}: f(y) \geq a\right\}=\emptyset \\ \prod\left\{f(y): y \in H_{k} \text { and } f(y) \geq a\right\} & \text { otherwise, }\end{cases}
$$
and in both cases $f(x) \geq a$ (note that $f\left(1_{K}\right) \geq a$ since $f^{-1}[a)$ is nonempty). Thus $\left[\beta_{k}(a)\right) \subseteq f^{-1}[a)$. For the reverse inclusion we have to show that for all $x \in K$
$$
(*) \quad f(x) \geq a \quad \text { implies } \quad x \geq \beta_{k}(a) .
$$
The set of elements $x \in K$ that satisfy $(*)$ contains $Y$ and is closed under meets, hence it is enough to show that it is also closed under joins. For $k=0$ we have $a \in D_{0}(L)$, so $\sum f(U) \geq a$ implies $f(u) \geq a$ for some $u \in U$, and since $u$ satisfies $(*)$, $u \geq \beta_{0}(a)$, hence $\sum U \geq \beta_{0}(a)$. Suppose now that $(*)$ holds for all values less than some fixed $k>0$. Let $x=\sum U$ and assume $f(x) \geq a$, i.e. $f(U)$ is a join-cover of $a$. If it is trivial, then $(*)$ is satisfied as before, so assume it is nontrivial. Then there exists a join-cover $V \subseteq D_{k-1}(L)$ of $a$ with $V \ll f(U)$, and by the inductive hypothesis $x \geq \beta_{k-1}(v)$ for all $v \in V$. Now the elements $\beta_{k-1}(v)$ are meets of elements in $H_{k-1}$, and the element $z=\sum \beta_{k-1}(V)$ therefore belongs to $H_{k}$. Since $f(z) \geq \sum V \geq a$, it follows from the definition of $\beta_{k}$ that $z \geq \beta_{k}(a)$, hence $x \geq \beta_{k}(a)$ as required.

That (iii) implies (i) follows immediately from the assumption that $L$ is finitely generated. The equivalence of (i) and (iii) was originally proved by McKenzie [72]. Note that (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) is true for any lattice $L$, so the above theorem implies that if $L$ is lower bounded then $D(L)=L$, and the converse holds whenever $L$ is finitely generated. Together with Lemma 2.21 we also have that every finitely generated sublattice of a free lattice is bounded.

It is fairly easy to compute $D(L)$ and $D^{\prime}(L)$ for any given finite lattice $L$. Thus one can check that the lattices $N, L_{6}, L_{7}, \ldots, L_{15}$ are all bounded, and since they also satisfy Whitman's condition (W), Theorem 2.19 implies that they are projective (hence sublattices of a free lattice). On the other hand $L_{1}$ fails to be upper bounded (dually for $L_{2}$) and $M_{3}, L_{3}, L_{4}$ and $L_{5}$ are neither upper nor lower bounded (see Figures 2.1 and 2.2). Note that if $U$ is a finite subset of $D_{k}(L)$ then $\sum U \in D_{k+1}(L)$. Since every join irreducible element of a distributive lattice is join prime, and dually, it follows that every finite distributive lattice is bounded.

We now recall a construction which is usually used to prove that the variety of all lattices is generated by its finite members. In Section 1.2 it was shown that every free lattice $F(X)$ can be constructed as a quotient algebra of a word algebra $W(X)$, whence elements of $F(X)$ are represented by lattice terms (words) of $W(X)$. The length $\lambda$ of a lattice term is defined inductively by $\lambda(x)=1$ for each $x \in X$ and $\lambda(p+q)=\lambda(p q)=\lambda(p)+\lambda(q)$ for any terms $p, q \in W(X)$. Let $X$ be a finite set, and for each $k \in \omega$, construct a finite lattice $P(X, k)$ as follows: Take $W$ to be the finite subset of the free lattice $F(X)$ which contains all elements that can be represented by lattice terms of length at most $k$, and let $P(X, k)$ be the set of all finite meets of elements from $W$, together with the largest element $1_{F}=\sum X$ of $F(X)$.
Then $P(X, k)$ is a finite subset of $F(X)$, and it is a lattice under the partial order inherited from $F(X)$, since it is closed under meets and has a largest element. However, $P(X, k)$ is in general not a sublattice of $F(X)$, because for $a, b \in P(X, k)$ we have $a+_{P} b \geq a+_{F} b$, and equality holds if and only if $a+_{F} b \in P(X, k)$. Nevertheless, $P(X, k)$ is clearly generated by the set $X$.

Lemma 2.24 (i) If $p=q$ is a lattice identity that fails in some lattice, then $p=q$ fails in a finite lattice of the form $P(X, k)$ for some finite set $X$ and $k \in \omega$.

(ii) If $h: F(X) \rightarrow P(X, k)$ is the extension of the identity map on $X$, then $h$ is upper bounded.

Proof. (i) Let $X$ be the set of variables that occur in $p$ and $q$, and let $k$ be the greater of the lengths of $p$ and $q$. Since $p=q$ fails in some lattice, $p$ and $q$ represent different elements of the free lattice $F(X)$ and therefore also different elements of $P(X, k)$. Thus $p=q$ fails in $P(X, k)$.

(ii) Note that for $a \in P(X, k) \subseteq F(X)$, $h(a)=a$, and in general $h(b) \geq_{F} b$ for any $b \in F(X)$. Therefore $h(b) \leq_{P} a$ implies $b \leq_{F} h(b) \leq_{P} a$, and conversely $b \leq_{F} a$ implies $h(b) \leq_{P} h(a)=a$, whence $a$ is the largest element of $h^{-1}(a]$ for all $a \in P(X, k)$.

Theorem 2.25 (McKenzie [72]). $S$ is a splitting lattice if and only if $S$ is a finite subdirectly irreducible bounded lattice.

Proof. Suppose $S$ is a splitting lattice, and $p=q$ is its conjugate identity. We have to show that $S$ is a bounded epimorphic image of some free lattice $F(X)$. As we noted in the beginning of Section 2.3, every splitting lattice is finite, so there exists an epimorphism $h: F(X) \rightarrow S$ for some finite set $X$. The identity $p=q$ does not hold in $S$, hence it fails in $F(X)$ and also in $P(X, k)$ for some large enough $k \in \omega$ (by Lemma 2.24 (i)). Therefore $S \in\{P(X, k)\}^{\mathcal{V}}$, and since $S$ is subdirectly irreducible and $P(X, k)$ is finite, it follows from Jónsson's Lemma (Corollary 1.7(i)) that $S \in \mathbf{H S}\{P(X, k)\}$. So there exists a sublattice $L$ of $P(X, k)$ and an epimorphism $g: L \rightarrow S$. Since $g$ is onto, we can choose for each $x \in X$ an element $a_{x} \in L$ such that $g\left(a_{x}\right)=h(x)$. Let $f: F(X) \rightarrow L$ be the extension of the map $x \mapsto a_{x}$. By Lemma 2.24 (ii) $P(X, k)$ is an upper bounded image of $F(X)$, hence by the equivalence of (i) and (iii) of (the dual of) Theorem 2.23, $f$ is upper bounded. Since $L$ is finite, $g$ is obviously bounded, and therefore $h=g f$ is upper bounded. A dual argument shows that $h$ is also lower bounded, whence $S$ is a bounded lattice.

Conversely, suppose $S$ is a finite subdirectly irreducible lattice, and let $u / v$ be a prime critical quotient of $S$. If $S$ is bounded, then there exists a bounded epimorphism $h$ from some free lattice $F(X)$ onto $S$. Let $r$ be the smallest element of $h^{-1}[u)$ and let $s$ be the largest element of $h^{-1}(v]$. Now $r+s / s$ is a prime quotient of $F(X)$, for if $s<t \leq r+s$ then $h(t)=u=h(t r)=h(r)$, and by the choice of $r$, $t r=r$, hence $r \leq t$ and $t=r+s$. By Lemma 1.10 there exists a largest congruence $\theta$ on $F(X)$ which does not identify $r+s$ and $s$. Since $h(r+s)=u \neq v=h(s)$ we have $\ker h \subseteq \theta$, and equality follows from the fact that $u / v$ is a critical quotient of $S$. Now Corollary 2.12 implies that $S$ is a splitting lattice.
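The remark after Theorem 2.23, that $D(L)$ and $D^{\prime}(L)$ are easy to compute for a finite lattice, can be made concrete. The sketch below is a naive, brute-force transcription of the definitions (it enumerates all join-covers, so it is only practical for very small examples, and it is not the efficient procedure hinted at in the text); the encoding of the lattice by an explicit order relation and all helper names are ours. The example lattice is the pentagon $N$ ($0<a<c<1$, $0<b<1$), for which the computation should return $D(N)=N$, consistent with the remark that $N$ is bounded.

```python
# Brute-force computation of D_0(L), D_1(L), ... and D(L) for a small finite lattice.
from itertools import combinations

elements = ['0', 'a', 'b', 'c', '1']                       # the pentagon N
order = {('0', 'a'), ('0', 'b'), ('0', 'c'), ('0', '1'),
         ('a', 'c'), ('a', '1'), ('b', '1'), ('c', '1')} | {(x, x) for x in elements}

def leq(x, y):
    return (x, y) in order

def join(U):
    """Least upper bound of a nonempty subset U (exists: we are in a finite lattice)."""
    ubs = [z for z in elements if all(leq(u, z) for u in U)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def refines(U, V):
    """U << V : every u in U lies below some v in V."""
    return all(any(leq(u, v) for v in V) for u in U)

def nontrivial_join_covers(a):
    for r in range(1, len(elements) + 1):
        for V in combinations(elements, r):
            if leq(a, join(V)) and not any(leq(a, v) for v in V):
                yield set(V)

def next_D(Dk):
    """D_{k+1}(L): every nontrivial join-cover of a is refined by a join-cover inside Dk."""
    result = set()
    for a in elements:
        if all(any(leq(a, join(U)) and refines(U, V)
                   for r in range(1, len(Dk) + 1)
                   for U in combinations(sorted(Dk), r))
               for V in nontrivial_join_covers(a)):
            result.add(a)
    return result

D = {a for a in elements if not any(True for _ in nontrivial_join_covers(a))}   # D_0(L)
while True:
    D_next = next_D(D)
    if D_next == D:
        break
    D = D_next
print(sorted(D))   # D(L); for the pentagon this should be all of L
```

The dual sets $D_{k}^{\prime}(L)$ and $D^{\prime}(L)$ can be obtained by running the same routine on the dual order.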
Referring to the remark after Theorem 2.23 we note that the lattices $L_{6}, L_{7}, \ldots, L_{15}$ are examples of splitting lattices. In fact McKenzie [72] shows how one can effectively compute a conjugate identity for any such lattice. For the details of this procedure we refer the reader to his paper and also to the more recent work of Freese and Nation [83]. From Corollary 2.12 and the proof of the above theorem we obtain the following:

Corollary 2.26 (McKenzie [72]). A lattice $L$ is a splitting lattice if and only if $L$ is isomorphic to $F(n) / \psi_{r s}$, where $F(n)$ is some finitely generated free lattice and $\psi_{r s}$ is the largest congruence that does not identify some covering pair $r \succ s$ of $F(n)$.

Canonical representations and semidistributivity. A finite subset $U$ of a lattice $L$ is said to be a join representation of an element $a$ in $L$ if $a=\sum U$. Thus a join representation is a special case of a join-cover. $U$ is a canonical join representation of $a$ if it is irredundant (i.e. no proper subset of $U$ is a join representation of $a$) and refines every other join representation of $a$. Note that an element can have at most one canonical join representation, since if $U$ and $V$ are both canonical join representations of $a$ then $U \ll V \ll U$, and because the elements of an irredundant join representation are noncomparable it follows that $U \subseteq V \subseteq U$. However canonical join representations do not exist in general (consider for example the largest element of $M_{3}$). Canonical meet representations are defined dually and have the same uniqueness property. A fundamental result of Whitman's [41] paper is that every element of a free lattice has a canonical join representation and a canonical meet representation. We briefly outline the proof of this result. Denote by $\bar{p}$ the element of $F(X)$ represented by the term $p$. A term $p$ is said to be minimal if the length of $p$ is minimal with respect to the lengths of all terms that represent $\bar{p}$. If $p$ is formally a join of simpler terms $p_{1}, \ldots, p_{n}$, none of which is itself a join, then these terms will be called the join components of $p$. The meet components of $p$ are defined dually. Note that every term is either a variable or it has join components or meet components.

Theorem 2.27 (Whitman [41]). A term $p$ is minimal if and only if $p=x \in X$ or $p$ has join components $p_{1}, \ldots, p_{n}$ and for each $i=1, \ldots, n$

(1) $p_{i}$ is minimal,

(2) $\bar{p}_{i} \not \leq \sum_{j \neq i} \bar{p}_{j}$,

(3) for any meet component $r$ of $p_{i}$, $\bar{r} \not\leq \bar{p}$,

or the duals of (1), (2) and (3) hold for the meet components of $p$.

Proof. All $x \in X$ are minimal, so by duality we may assume that $p$ has join components $p_{1}, \ldots, p_{n}$. If (1), (2) or (3) fails, then we can easily construct a term $q$ such that $\bar{q}=\bar{p}$ but $\lambda(q)<\lambda(p)$, which shows that $p$ is not minimal. (If (1) fails, replace a nonminimal $p_{i}$ by a minimal term; if (2) fails, omit the $p_{i}$ for which $\bar{p}_{i} \leq \sum_{j \neq i} \bar{p}_{j}$; if (3) fails, replace $p_{i}$ by its meet component $r$ which satisfies $\bar{r} \leq \bar{p}$.) Conversely, suppose $p$ satisfies (1), (2) and (3), and let $q$ be a minimal term such that $\bar{q}=\bar{p}$. We want to show that $\lambda(p)=\lambda(q)$; then $p$ is also minimal.
First observe that $q$ must have join components, for if $q \in X$ or if $q$ has meet components $q_{1}, \ldots, q_{m}$, then $\bar{q} \leq \bar{p}_{1}+\ldots+\bar{p}_{n}$ together with (W3) or (W) imply $\bar{q} \leq \bar{p}_{i} \leq \bar{p}$ for some $i$ or $\bar{q} \leq \bar{q}_{j} \leq \bar{p}$ for some $j$, and since $\bar{q}=\bar{p}$ we must have equality throughout, which contradicts the minimality of $q$. So let $q_{1}, \ldots, q_{m}$ be the join components of $q$. For each $i \in\{1, \ldots, n\}$, $\bar{p}_{i} \leq \bar{q}_{1}+\ldots+\bar{q}_{m}$ and either $p_{i} \in X$ or $p_{i}$ has meet components (whose images in $F(X)$ are not below $\bar{p}$ by condition (3)), so we can use (W3) or (W) to conclude that $\bar{p}_{i} \leq \bar{q}_{i^{*}}$ for some unique $i^{*} \in\{1, \ldots, m\}$. Similarly, since $q$ is minimal, it satisfies (1)–(3), and so for each $j \in\{1, \ldots, m\}$ there exists a unique $j_{*} \in\{1, \ldots, n\}$ such that $\bar{q}_{j} \leq \bar{p}_{j_{*}}$. Thus $\bar{p}_{i} \leq \bar{q}_{i^{*}} \leq \bar{p}_{(i^{*})_{*}}$ and $\bar{q}_{j} \leq \bar{p}_{j_{*}} \leq \bar{q}_{(j_{*})^{*}}$, whence (2) implies $i=(i^{*})_{*}$ and $j=(j_{*})^{*}$. It follows that the map $i \mapsto i^{*}$ is a bijection, $m=n$, $\bar{p}_{i}=\bar{q}_{i^{*}}$, and since both terms are minimal by (1), $\lambda\left(p_{i}\right)=\lambda\left(q_{i^{*}}\right)$. Consequently $\lambda(p)=\lambda(q)$.

Corollary 2.28 Every element of a free lattice has a canonical join representation and a canonical meet representation.

Proof. Suppose $u$ is an element of a free lattice, and let $p$ be a minimal term such that $\bar{p}=u$. If $p$ has no join components, then (W3) or (W) imply that $u$ is join irreducible, in which case $\{u\}$ is the canonical join representation of $u$. If $p$ has join components $p_{1}, \ldots, p_{n}$ then condition (2) above implies that $U=\left\{\bar{p}_{1}, \ldots, \bar{p}_{n}\right\}$ is irredundant, and (W3) or (W) and condition (3) imply that $U$ is a canonical join representation of $u$. The canonical meet representation is constructed dually.

The existence of canonical representations is closely connected to the following weak form of distributivity: A lattice $L$ is said to be semidistributive if it satisfies the following two implications for all $u, x, y, z \in L$:
$$
\begin{array}{llll}
\left(\mathrm{SD}^{+}\right) & u=x+y=x+z & \text { implies } & u=x+y z, \\
\left(\mathrm{SD}^{\cdot}\right) & u=x y=x z & \text { implies } & u=x(y+z).
\end{array}
$$

Lemma 2.29 If every element of a lattice $L$ has a canonical join representation then $L$ satisfies $\left(\mathrm{SD}^{+}\right)$.

Proof. Let $u=\sum V$ be a canonical join representation of $u$, and suppose $u=x+y=x+z$. Then for each $v \in V$ we have $v \leq x$ or $v \leq y, z$. It follows that $v \leq x+y z$ for each $v \in V$, which implies $u \leq x+y z$. The reverse inclusion always holds, hence $\left(\mathrm{SD}^{+}\right)$ is satisfied.

Now Corollary 2.28 and the preceding lemma together with its dual imply that every free lattice is semidistributive. The next lemma extends this observation to all bounded lattices.

Lemma 2.30 (i) Bounded epimorphisms preserve semidistributivity.

(ii) Every bounded lattice is semidistributive.

Proof. (i) Suppose $L$ is semidistributive and $f: L \rightarrow L^{\prime}$ is a bounded epimorphism. Let $u, x, y, z \in L^{\prime}$ be such that $u=x+y=x+z$.
Then $\beta(u)=\beta(x)+\beta(y)=\beta(x)+\beta(z)=\beta(x)+\beta(y) \beta(z)$, where $\beta=\beta_{f}: L^{\prime} \hookrightarrow L$ is the join-preserving map associated with $f$ (the last equality holds since $L$ satisfies $\left(\mathrm{SD}^{+}\right)$). Hence
$$
u=f \beta(u)=f(\beta(x)+\beta(y) \beta(z))=x+y z,
$$
which shows that $\left(\mathrm{SD}^{+}\right)$ holds in $L^{\prime}$. $\left(\mathrm{SD}^{\cdot}\right)$ follows by duality (using $\alpha_{f}$), whence $L^{\prime}$ is semidistributive. Now (ii) follows immediately from the fact that every free lattice is semidistributive.

Since there are lattices in which the semidistributive laws fail (the simplest one is the diamond $M_{3}$) it is now clear that these lattices cannot be bounded. However, there are also semidistributive lattices which are not bounded. An example of such a lattice is given at the end of this section (Figure 2.2). For finite lattices the converse of Lemma 2.29 also holds. To see this we need the following equivalent form of the semidistributive laws.

Lemma 2.31 (Jónsson and Kiefer [62]). A lattice $L$ satisfies $\left(\mathrm{SD}^{+}\right)$ if and only if for all $u, a_{1}, \ldots, a_{m}, b_{1}, \ldots, b_{n} \in L$
$$
(*) \quad u=\sum_{i=1}^{m} a_{i}=\sum_{j=1}^{n} b_{j} \quad \text { implies } \quad u=\sum_{i=1}^{m} \sum_{j=1}^{n} a_{i} b_{j} .
$$

Proof. Assuming that $L$ satisfies $\left(\mathrm{SD}^{+}\right)$, we will prove by induction that the statement
$$
\mathrm{P}(m, n): \quad u=w+\sum_{i} a_{i}=w+\sum_{j} b_{j} \quad \text { implies } \quad u=w+\sum_{i} \sum_{j} a_{i} b_{j} \quad \text{(for all } w, a_{1}, \ldots, a_{m}, b_{1}, \ldots, b_{n} \in L\text{)}
$$
holds for all $m, n \geq 1$. Then $(*)$ follows if we choose any $w \leq \sum_{i} \sum_{j} a_{i} b_{j}$. $\mathrm{P}(1,1)$ is precisely $\left(\mathrm{SD}^{+}\right)$, so we assume that $n>1$, and that $\mathrm{P}\left(1, n^{\prime}\right)$ holds whenever $1 \leq n^{\prime}<n$. By hypothesis $u=w+a_{1}=w+b_{n}^{\prime}+b_{n}$, where $b_{n}^{\prime}=\sum_{j=1}^{n-1} b_{j}$. Therefore
$$
\begin{aligned}
& u=\left(w+b_{n}^{\prime}\right)+a_{1}=\left(w+b_{n}^{\prime}\right)+b_{n} \quad \text { implies } \\
& u=w+b_{n}^{\prime}+a_{1} b_{n} \quad \text { by }\left(\mathrm{SD}^{+}\right), \text { and now } \\
& u=w+a_{1} b_{n}+a_{1}=w+a_{1} b_{n}+\sum_{j=1}^{n-1} b_{j} \quad \text { implies } \\
& u=w+a_{1} b_{n}+\sum_{j=1}^{n-1} a_{1} b_{j}=w+\sum_{j=1}^{n} a_{1} b_{j} \quad \text { by } \mathrm{P}(1, n-1) .
\end{aligned}
$$
Hence $\mathrm{P}(1, n)$ holds for all $n$. Now assume that $m>1$ and that $\mathrm{P}\left(m^{\prime}, n\right)$ holds for $1 \leq m^{\prime}<m$. By hypothesis $u=w+a_{m}^{\prime}+a_{m}=w+\sum_{j=1}^{n} b_{j}$, where $a_{m}^{\prime}=\sum_{i=1}^{m-1} a_{i}$. Consequently
$$
\begin{aligned}
& u=w+a_{m}^{\prime}+a_{m}=w+a_{m}^{\prime}+\sum_{j=1}^{n} b_{j} \quad \text { implies } \\
& u=w+a_{m}^{\prime}+\sum_{j=1}^{n} a_{m} b_{j} \quad \text { by } \mathrm{P}(1, n), \text { and now } \\
& u=\left(w+\sum_{j=1}^{n} a_{m} b_{j}\right)+\sum_{i=1}^{m-1} a_{i}=\left(w+\sum_{j=1}^{n} a_{m} b_{j}\right)+\sum_{j=1}^{n} b_{j} \quad \text { implies } \\
& u=w+\sum_{j=1}^{n} a_{m} b_{j}+\sum_{i=1}^{m-1} \sum_{j=1}^{n} a_{i} b_{j}=w+\sum_{i=1}^{m} \sum_{j=1}^{n} a_{i} b_{j}
\end{aligned}
$$
by $\mathrm{P}(m-1, n)$ as required. Therefore $\mathrm{P}(m, n)$ holds for all $m, n$. Conversely, if $(*)$ holds and $u=a+b=a+c$ for some $u, a, b, c \in L$, then $u=aa+ab+ac+bc=a+bc$. Hence $\left(\mathrm{SD}^{+}\right)$ holds in $L$.

Corollary 2.32 (Jónsson and Kiefer [62]).
A finite lattice satisfies $\left(\mathrm{SD}^{+}\right)$ if and only if every element has a canonical join representation.

Proof. Let $L$ be a finite lattice that satisfies $\left(\mathrm{SD}^{+}\right)$, and suppose that $V$ and $W$ are two join representations of $u \in L$. By the preceding lemma the set $\{a b: a \in V, b \in W\}$ is again a join representation of $u$, and it clearly refines both $V$ and $W$. Since $L$ is finite, $u$ has only finitely many distinct join representations. Combining these in the same way we obtain a join representation $U$ that refines every other join representation of $u$. Clearly a canonical join representation is given by a subset of $U$ which is an irredundant join representation of $u$. The converse follows from Lemma 2.29.

Thus finite semidistributive lattices have the same property as free lattices in the sense that every element has canonical join and meet representations. Further results about semidistributivity appear in Section 4.2.

Cycles in semidistributive lattices. We shall now discuss another way of characterizing splitting lattices, due to Jónsson and Nation [75]. Let $L$ be a finite lattice and denote by $J(L)$ the set of all nonzero join-irreducible elements of $L$. Every element $p \in J(L)$ has a unique lower cover, which we denote by $p_{*}$. We define two binary relations $A$ and $B$ on the set $J(L)$ as follows: for $p, q \in J(L)$ we write
$$
\begin{aligned}
& p A q \quad \text { if } \quad p<q+x,\ q<p,\ q \not\leq x \text { and } q_{*} \leq x \text { for some } x \in L, \\
& p B q \quad \text { if } \quad p \leq p_{*}+q,\ p \not\leq q \text { and } p \not\leq p_{*}+q_{*}.
\end{aligned}
$$
A third relation $\sigma$ is defined by $p \sigma q$ if $p A q$ or $p B q$. Note that if $p A q$ then $q x=q_{*}$, $p+x=q+x$ and $p x \geq q_{*}$. So, depending on whether or not the last inequality is strict, the elements $p, q, x$ generate a sublattice of $L$ that is isomorphic to either $A_{1}$ or $A_{2}$ (Figure 2.6). Also if $p B q$, then $p_{*} \not \leq q_{*}$ (else $p \leq p_{*}+q \leq q_{*}+q=q$, contradicting $p \not\leq q$) and $p+q=p_{*}+q \geq p+q_{*}$. Now the elements $p, p_{*}, q, q_{*}$ generate a sublattice of $L$ isomorphic to
$$
\begin{array}{lll}
B_{1} & \text { if } & q_{*} \not\leq p_{*} \text { and } p_{*}+q>p+q_{*}, \\
B_{2} & \text { if } & q_{*} \leq p_{*}, \\
B_{3} & \text { if } & q_{*} \not\leq p_{*} \text { and } p_{*}+q=p+q_{*} .
\end{array}
$$
If we assume that $L$ is semidistributive, then the last case is excluded since $B_{3}$ fails $\left(\mathrm{SD}^{+}\right)$ ($p_{*}+q_{*}+p=p_{*}+q_{*}+q \neq p_{*}+q_{*}+p q$). Observe also that in general the element $x$ in the definition of $A$ is not unique, but in the presence of $\left(\mathrm{SD}^{\cdot}\right)$ we can always take $x=\kappa(q)$, where
$$
\kappa(q)=\sum\left\{x \in L: q \not\leq x \text { and } q_{*} \leq x\right\}=\sum\left\{x \in L: q x=q_{*}\right\},
$$
since $L$ is finite and by $\left(\mathrm{SD}^{\cdot}\right)$ $\kappa(q)$ itself satisfies $q \kappa(q)=q_{*}$. In this case $x$ is covered by $q+x$. The following lemma from Jónsson and Nation [75] motivates the above definitions. Note that if $D(L) \neq L$ for some finite lattice $L$, then some join-irreducible element of $L$ is not in $D(L)$, since for any nonempty subset $U$ of $D_{k}(L)$, $\sum U$ is an element of $D_{k+1}(L)$.

Lemma 2.33 If $L$ is a finite semidistributive lattice and $p \in J(L)-D(L)$ then there exists $q \in J(L)-D(L)$ with $p \sigma q$.

Proof.
Since $p \notin D(L)$, there exists a nontrivial join-cover $V$ of $p$ such that no join-cover $U \subseteq D(L)$ of $p$ refines $V$. Since $p \leq \sum V$, we have $\sum V \not\leq \kappa(p)$, whence $v_{0} \not\leq \kappa(p)$ for some $v_{0} \in V$. Choose $y \leq v_{0}$ minimal with respect to the property $y \not \leq \kappa(p)$. Clearly $y \in J(L)$, and $p \not\leq y$ since $V$ is a nontrivial join-cover. Note that $y \not \leq \kappa(p)$ if and only if $p \leq p_{*}+y$, so by the minimality of $y$, $p \not \leq p_{*}+y_{*}$. Thus $p B y$, and if $y \notin D(L)$, then $q=y$ yields the desired conclusion.

(Figure 2.6: the lattices $A_{1}$, $A_{2}$, $B_{1}$, $B_{2}$ and $B_{3}$.)

Otherwise $y \in D(L)$, and we can choose an element $z \leq p_{*}$ minimal subject to the condition $p<y+z$. We claim that $z \notin D(L)$. Assume the contrary. Since $z<p \leq \sum V$, $V$ is a join-cover of $z$, which is either trivial or nontrivial. If it is trivial, we let $U=\{y, z\}$, and if it is nontrivial, then there exists a join-cover $W \subseteq D(L)$ of $z$ which refines $V$, and we let $U=W \cup\{y\}$. In both cases $U$ is a subset of $D(L)$ and a join-cover of $p$ which refines $V$ (since $y \leq v_{0}$), contradicting the assumption $p \notin D(L)$. By Corollary 2.32 every element of $L$ has a canonical join representation, so there exists a finite set $U_{0} \subseteq L$ such that $z=\sum U_{0}$ is the canonical join representation of $z$. Since $z \notin D(L)$, there exists $q \in U_{0}$ such that $q \notin D(L)$. Letting $q^{\prime}=\sum\left(U_{0}-\{q\}\right)$ and $x=y+q_{*}+q^{\prime}$ we see that $x+q \geq y+z>p$, $q_{*} \leq x$ and $q<p$ (since $z \leq p_{*}$). Furthermore, $q_{*}+q^{\prime}<z$ and therefore $p \not \leq x$ by the minimality of $z$. Consequently $p A q$.

By a repeated application of the preceding lemma we obtain elements $p_{i} \in J(L)$ with $p_{i} \sigma p_{i+1}$ for $i \in \omega$. Since $L$ is finite, this sequence must repeat itself eventually, so we can assume that $p_{0} \sigma p_{1} \sigma \cdots \sigma p_{n} \sigma p_{0}$. Such a sequence is called a cycle, and it follows that the nonexistence of such cycles in a finite semidistributive lattice $L$ implies that $D(L)=L$. The next result, also from Jónsson and Nation [75], shows that the converse is true in an arbitrary lattice.

Theorem 2.34 If $L$ is any lattice which contains a cycle, then $D(L) \neq L$.

Proof. Suppose $p_{0} \sigma p_{1} \sigma \cdots \sigma p_{n} \sigma p_{0}$ for some $p_{i} \in J(L)$. From Figure 2.6 we can see that each $p_{i}$ has a nontrivial join-cover, so $p_{i} \notin D_{0}(L)$. Suppose no $p_{i}$ belongs to $D_{k-1}(L)$, but say $p_{0} \in D_{k}(L)$. If $p_{0} A p_{1}$, then $p_{0}<p_{1}+x$, $p_{1}<p_{0}$, $p_{1} \not \leq x$ and $p_{1 *} \leq x$ for some $x \in L$. Since $\left\{p_{1}, x\right\}$ is a nontrivial join-cover of $p_{0}$, there exists a join-cover $U \subseteq D_{k-1}(L)$ of $p_{0}$ with $U \ll\left\{p_{1}, x\right\}$. For each $u \in U$, either $u \leq x$ or $u<p_{1}$ (since $p_{1} \notin D_{k-1}(L)$), but if $u<p_{1}$, then $u \leq p_{1 *} \leq x$. Hence $u \leq x$ for all $u \in U$, so that $p_{0} \leq \sum U \leq x$, a contradiction. If $p_{0} B p_{1}$, then $\left\{p_{0 *}, p_{1}\right\}$ is a nontrivial join-cover of $p_{0}$, and therefore there exists a join-cover $U \subseteq D_{k-1}(L)$ of $p_{0}$ with $U \ll\left\{p_{0 *}, p_{1}\right\}$. For each $u \in U$, either $u \leq p_{0 *}$ or $u<p_{1}$, i.e. $u \leq p_{0 *}$ or $u \leq p_{1 *}$. Therefore $p_{0} \leq \sum U \leq p_{0 *}+p_{1 *}$, again a contradiction. By induction we have $p_{i} \notin D(L)$ for all $i$, and so $D(L) \neq L$.
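For a small finite lattice the relations $A$, $B$ and $\sigma$ can be computed directly from their definitions, and the presence of a cycle can then be tested by a reachability check on the resulting digraph. The sketch below does this for the pentagon used in the earlier sketch; the encoding and helper names are ours, and the brute-force search over all $x \in L$ in the definition of $A$ is only practical for tiny lattices. The absence of cycles it should report is consistent with the corollary that follows ($D(L)=L$ for a finite semidistributive lattice without cycles).

```python
# Computing the relations A, B, sigma on J(L) and testing for cycles (illustrative only).
elements = ['0', 'a', 'b', 'c', '1']                       # the pentagon again
order = {('0', 'a'), ('0', 'b'), ('0', 'c'), ('0', '1'),
         ('a', 'c'), ('a', '1'), ('b', '1'), ('c', '1')} | {(x, x) for x in elements}
leq = lambda x, y: (x, y) in order
bottom = '0'

def join(x, y):
    ubs = [z for z in elements if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def lower_covers(p):
    below = [x for x in elements if leq(x, p) and x != p]
    return [x for x in below
            if not any(leq(x, y) and leq(y, p) and y not in (x, p) for y in elements)]

J = [p for p in elements if p != bottom and len(lower_covers(p)) == 1]
star = {p: lower_covers(p)[0] for p in J}                  # p_* : unique lower cover

def A(p, q):
    # p A q : p < q+x, q < p, q not<= x and q_* <= x for some x in L
    return any(leq(p, join(q, x)) and p != join(q, x)
               and leq(q, p) and q != p
               and not leq(q, x) and leq(star[q], x)
               for x in elements)

def B(p, q):
    # p B q : p <= p_* + q, p not<= q and p not<= p_* + q_*
    return (leq(p, join(star[p], q)) and not leq(p, q)
            and not leq(p, join(star[p], star[q])))

sigma = {(p, q) for p in J for q in J if p != q and (A(p, q) or B(p, q))}

def has_cycle(edges, nodes):
    reach = set(edges)
    for _ in nodes:                                        # transitive closure, step by step
        reach |= {(x, w) for (x, y) in reach for (z, w) in edges if y == z}
    return any((p, p) in reach for p in nodes)

print(sorted(sigma), has_cycle(sigma, J))                  # expect no cycle for the pentagon
```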
Corollary 2.35 For a finite semidistributive lattice $L$, $D(L)=L$ if and only if $L$ contains no cycles.

An example of a finite semidistributive lattice which contains a cycle is given in Figure 2.7 at the end of this section.

Day's characterization of finite bounded lattices. The results of this section are essentially due to Alan Day, but the presentation here is taken from Jónsson and Nation [75]. A more general treatment can be found in Day [79]. We investigate the relationship between $J(L)$ and $J(\operatorname{Con}(L))$. By transitivity, a congruence relation on a finite lattice is determined uniquely by the prime quotients which it collapses. The next lemma shows that we need only consider prime quotients of the form $p / p_{*}$, where $p \in J(L)$.

Lemma 2.36 Let $L$ be a finite lattice, and suppose $\theta \in \operatorname{Con}(L)$. Then

(i) $\theta \in J(\operatorname{Con}(L))$ if and only if $\theta=\operatorname{con}(u, v)$ for some prime quotient $u / v$ of $L$;

(ii) if $u / v$ is a prime quotient of $L$ then there exists $p \in J(L)$ such that $p / p_{*} \nearrow u / v$;

(iii) if $L$ is semidistributive, then the element $p$ in (ii) is unique.

Proof. (i) In a finite lattice
$$
\theta=\sum\{\operatorname{con}(u, v): u / v \text { is prime and } u \theta v\},
$$
so if $\theta$ is join-irreducible then $\theta=\operatorname{con}(u, v)$ for some prime quotient $u / v$. Conversely, suppose $\phi \in \operatorname{Con}(L)$ is strictly below $\theta=\operatorname{con}(u, v)$. Then $\phi \subseteq \psi_{u v} \cap \operatorname{con}(u, v)$, where $\psi_{u v}$ is the unique largest congruence that does not identify $u$ and $v$. Hence $\psi_{u v} \cap \operatorname{con}(u, v)$ is the unique dual cover of $\operatorname{con}(u, v)$, and it follows that $\operatorname{con}(u, v) \in J(\operatorname{Con}(L))$.

To prove (ii) we simply choose $p$ minimal with respect to the condition $u=v+p$. Then $p \in J(L)$ and $p / p_{*} \nearrow u / v$.

(iii) Suppose for some $q \in J(L)$, $q \neq p$, we also have $q / q_{*} \nearrow u / v$. Then $u=v+p=v+q$, and by semidistributivity $u=v+p q$. Now $p \neq q$ implies $p q<q$ or $p q<p$, so $p q \leq p_{*}$ or $p q \leq q_{*}$. But then $p q \leq v$, hence $u=v+p q=v$, which is a contradiction.

From (i) and (ii) we conclude that for any finite lattice $L$ the map $p \mapsto \operatorname{con}\left(p, p_{*}\right)$ from $J(L)$ to $J(\operatorname{Con}(L))$ is onto. Day [79] shows that the map is one-one if and only if $L$ is lower bounded. Let us say that a set $Q$ of prime quotients in $L$ corresponds to a congruence relation $\theta$ on $L$ if $\theta$ collapses precisely those prime quotients in $L$ that belong to $Q$.

Lemma 2.37 A set of prime quotients $Q$ in a finite lattice $L$ corresponds to some $\theta \in \operatorname{Con}(L)$ if and only if

(*) For any two prime quotients $r / s$ and $u / v$ in $L$, if $r / s \in Q$ and if either $s \leq v<u \leq r+v$ or $s u \leq v<u \leq r$, then $u / v \in Q$.

Proof. The two conditions of $(*)$ can be rewritten as either $r / s \nearrow r+v / v \supseteq u / v$ or $r / s \searrow u / s u \supseteq u / v$, hence if $r / s$ is collapsed by some congruence, so is $u / v$. Therefore $(*)$ is clearly necessary. Suppose $(*)$ holds and let $\theta=\sum\{\operatorname{con}(u, v): u / v \in Q\}$. If $x / y$ is a prime quotient that is collapsed by $\theta$, then $\operatorname{con}(x, y) \subseteq \theta$.
But $\operatorname{con}(x, y)$ is join irreducible and $\operatorname{Con}(L)$ is distributive, so $\operatorname{con}(x, y)$ is in fact join prime, whence $\operatorname{con}(x, y) \subseteq \operatorname{con}(u, v)$ for some $u / v \in Q$. By Lemma 1.11 $u / v$ projects weakly onto $x / y$, and since $(*)$ forces each quotient in the sequence of transposes to be in $Q$, it follows that $x / y \in Q$.

Theorem 2.38 (Jónsson and Nation [75]). If $L$ is a finite semidistributive lattice and $S \subseteq J(L)$, then the following conditions are equivalent:

(i) There exists $\theta \in \operatorname{Con}(L)$ such that for all $p \in J(L)$, $p \theta p_{*}$ if and only if $p \in S$.

(ii) For all $p, q \in J(L)$, $p \sigma q$ and $q \in S$ imply $p \in S$.

Proof. Assume (i) and let $p, q \in J(L)$ with $p \sigma q$ and $q \in S$. Then Figure 2.6 shows that $q / q_{*}$ projects weakly onto $p / p_{*}$, so if $\theta$ collapses $q / q_{*}$, it also collapses $p / p_{*}$, which implies $p \in S$.

Conversely, suppose (ii) holds, and let $Q$ be the set of all prime quotients $u / v$ in $L$ such that the unique member $p \in J(L)$ with $p / p_{*} \nearrow u / v$ belongs to $S$. Then $p / p_{*} \in Q$ if and only if $p \in S$, so by the preceding lemma it suffices to show that $Q$ satisfies $(*)$. Consider two prime quotients $r / s$ and $u / v$ in $L$ and let $p$ and $q$ be the corresponding members of $J(L)$, so that $q / q_{*} \nearrow r / s$ and $p / p_{*} \nearrow u / v$. If $r / s \in Q$, then by uniqueness $q \in S$. If $p=q$, then $p \in S$ and hence $u / v \in Q$ by definition. Assuming that $p \neq q$, we are going to show that

(1) if $s \leq v<u \leq r+v$ then $p B q$, and

(2) if $s u \leq v<u \leq r$ then $p A q$.

Statement (ii) then implies $p \in S$, whence $u / v \in Q$ as required. Under the hypothesis of (1) we need to show that $p \leq p_{*}+q$, $p \not\leq q$ and $p \not\leq p_{*}+q_{*}$. Since $q_{*} \leq s \leq v$ and $p_{*} \leq v$, we must have $p \not\leq p_{*}+q_{*}$, else $p \leq v$. For the same reason $p \nless q$, and $p \neq q$ by assumption, so $p \not\leq q$. Finally, $p \not \leq p_{*}+q$ would imply $p\left(p_{*}+q\right)=p_{*}$, which together with $p v=p_{*}$ gives $p(v+q)=p_{*}$ by semidistributivity. This, however, is impossible since $v+q=s+v+q=r+v \geq u \geq p$. This proves (1). Now suppose that the hypothesis of (2) is satisfied. Clearly $q \not \leq s$ and $q_{*} \leq s$, so to prove $p A q$ it suffices to show that $q<p$ and $p<q+s$. We certainly have $q+s=r \geq p$ by the hypothesis, and this inclusion must be strict, since $p=r>s$ would imply $p=q$ by the join irreducibility of $p$. Observe that $p \not \leq s$ because $p s \leq s u \leq v$ and $p \not \leq v$. Since $p \leq r$, this implies $r=s+p$ which, together with $r=s+q$, yields $r=s+p q$ by semidistributivity. Now $q \not\leq p$ would imply $p q \leq q_{*} \leq s$, which is impossible because $s+p q=r>s$. Thus $q<p$ as required.

Figure 2.7

Theorem 2.39 For any finite semidistributive lattice $L$, the following conditions are equivalent:

(i) $|J(L)|=|J(\operatorname{Con}(L))|$;

(ii) $D(L)=L$;

(iii) $D^{\prime}(L)=L$;

(iv) $L$ is bounded.

Proof. It follows from the preceding theorem that, for two distinct elements $p$ and $q$ of $J(L)$, $\operatorname{con}\left(p, p_{*}\right)=\operatorname{con}\left(q, q_{*}\right)$ if and only if there exists a cycle containing both $p$ and $q$. Consequently the map $p \mapsto \operatorname{con}\left(p, p_{*}\right)$ from $J(L)$ to $J(\operatorname{Con}(L))$ is one-one if and only if $L$ contains no cycles if and only if $D(L)=L$ by Corollary 2.35.
Therefore (i) is equivalent to (ii). Lemma 2.36 (iii) and its dual imply that the number of meet irreducible elements of $L$ is equal to the number of join irreducible elements (to every prime quotient $m^{*} / m$, where $m$ is meet irreducible and $m^{*}$ is its unique upper cover, corresponds a unique $p \in J(L)$ such that $p / p_{*} \nearrow m^{*} / m$, and vice versa). Therefore the condition $|J(L)|=|J(\operatorname{Con}(L))|$ is equivalent to its own dual, and hence to the dual of $D(L)=L$, namely $D^{\prime}(L)=L$. Lastly (ii) and (iii) together are equivalent to (iv) by Theorem 2.23 and its dual.

It is interesting to examine how the above conditions fail in the semidistributive lattice in Figure 2.7, which contains the cycle $p_{0} A p_{1} A p_{2} B p_{3} B p_{0}$. If we add an element $a$ on the edge $c p_{2}$, then we obtain an example of a subdirectly irreducible semidistributive lattice which is not a splitting lattice. It is not difficult to prove that every critical quotient of a splitting lattice must be prime, but the example we just mentioned shows that the converse does not hold even for semidistributive lattices.

### Splitting lattices generate all lattices

We will now prove a few lemmas which lead up to the result of Day [77], that the variety $\mathcal{L}$ of all lattices is generated by the class of all splitting lattices. This result and some of the characterizations of splitting lattices will be used at the end of Chapter 6. Let $\mathcal{B}$ be the class of all bounded lattices and let $\mathcal{B}_{F}$ be the class of finite members of $\mathcal{B}$. By Theorem 2.25, $\left(\mathcal{B}_{F}\right)_{S I}$ is the class of all splitting lattices, and it is clearly sufficient to show that $\mathcal{L}=\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$.

Lemma 2.40 $\mathcal{B}_{F}$ is closed under sublattices, homomorphic images and direct products with finitely many factors.

Proof. If $L$ is a sublattice of a lattice $B \in \mathcal{B}_{F}$, then by Lemma 2.17 (iv) any $f: F(X) \rightarrow L$, where $F(X)$ is a finitely generated free lattice, is bounded. If $L$ is a homomorphic image of $B$, say $h: B \rightarrow L$, then there exists $g: F(X) \rightarrow B$ such that $f=h g$, and by the equivalence of (i) and (iii) of Theorem 2.23, $g$ is bounded. $h$ is bounded since $B$ is finite, hence $f$ is also bounded. Lastly, if $B_{1}, B_{2} \in \mathcal{B}_{F}$ and $f: F(X) \rightarrow B_{1} \times B_{2}$ is an epimorphism, then $\pi_{i} f$ is bounded, where $\pi_{i}: B_{1} \times B_{2} \rightarrow B_{i}$ is the projection map $(i=1,2)$. For $b_{i} \in B_{i}$, let $\beta_{i}\left(b_{i}\right)$ be the least preimage of $b_{i}$ under the map $\pi_{i} f$, and denote the zero of $B_{i}$ by $0_{i}$. Since $\left(b_{1}, 0_{2}\right)$ is the least element of $\pi_{1}^{-1}\left\{b_{1}\right\}$, $\beta_{1}\left(b_{1}\right)$ is also the least element of $f^{-1}\left\{\left(b_{1}, 0_{2}\right)\right\}$, and similarly $\beta_{2}\left(b_{2}\right)$ is the least element of $f^{-1}\left\{\left(0_{1}, b_{2}\right)\right\}$. It follows that $\beta_{1}\left(b_{1}\right)+\beta_{2}\left(b_{2}\right)$ is the least preimage of $\left(b_{1}, 0_{2}\right)+\left(0_{1}, b_{2}\right)=\left(b_{1}, b_{2}\right)$ under $f$. Hence $f$ is lower bounded, and a dual argument shows that $f$ is also upper bounded.

The two element chain is a splitting lattice, so the above lemma implies that every finite distributive lattice is bounded.
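Whitman's condition (W) and the two semidistributive laws are also straightforward to test by brute force in a finite lattice, which is one way to carry out the checks mentioned after Theorem 2.23 (for instance, that $M_{3}$ is not semidistributive and hence, by Lemma 2.30, not bounded). The sketch below uses our own encoding of the diamond $M_{3}$ and is illustrative only; it should report that $M_{3}$ satisfies (W) but fails both semidistributive laws.

```python
# Brute-force checks of (W), (SD+) and (SD.) in a small finite lattice (here M_3).
from itertools import product

elements = ['0', 'a', 'b', 'c', '1']
order = ({('0', x) for x in elements} | {(x, '1') for x in elements}
         | {(x, x) for x in elements})
leq = lambda x, y: (x, y) in order

def join(x, y):
    ubs = [z for z in elements if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in elements if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

def whitman():
    # (W): ab <= c+d implies a <= c+d or b <= c+d or ab <= c or ab <= d
    return all(not leq(meet(a, b), join(c, d))
               or leq(a, join(c, d)) or leq(b, join(c, d))
               or leq(meet(a, b), c) or leq(meet(a, b), d)
               for a, b, c, d in product(elements, repeat=4))

def sd_join():
    # (SD+): u = x+y = x+z implies u = x + yz
    return all(join(x, y) != join(x, z) or join(x, y) == join(x, meet(y, z))
               for x, y, z in product(elements, repeat=3))

def sd_meet():
    # the dual law (SD.)
    return all(meet(x, y) != meet(x, z) or meet(x, y) == meet(x, join(y, z))
               for x, y, z in product(elements, repeat=3))

print(whitman(), sd_join(), sd_meet())   # for M_3: expect True, False, False
```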
Recall the construction of the lattice $L[u / v]$ from a lattice $L$ and a quotient $u / v$ of $L$ (see above Lemma 2.15).

Lemma 2.41 (Day [77]). If $L \in \mathcal{B}_{F}$ and $I=u / v$ is a quotient of $L$, then $L[I] \in \mathcal{B}_{F}$.

Proof. By assumption $L$ is a finite lattice, hence $L[I]$ is also finite. Let $X$ be a finite set with $f: F(X) \rightarrow L[I]$ a lattice epimorphism and let $\gamma: L[I] \rightarrow L$ be the natural epimorphism. Since $L \in \mathcal{B}_{F}$, $h=\gamma f: F(X) \rightarrow L$ is bounded, so for each $b \in L$ there exists a least member $\beta_{h}(b)$ of $h^{-1}\{b\}$. By definition of $\gamma$, we have
$$
\gamma^{-1}\{b\}= \begin{cases}\{b\} & \text { if } \quad b \in L-I \\ \{(b, 0),(b, 1)\} & \text { if } \quad b \in I,\end{cases}
$$
hence $\beta_{h} \gamma(b)$ is the least member of $f^{-1}\{b\}$ for each $b \in(L-I) \cup I \times\{0\}$. Note that for any $a_{1}, a_{2} \in F(X)$ and $b_{1}, b_{2} \in L[I]$, if $a_{i}$ is the least member of $f^{-1}\left\{b_{i}\right\}$ $(i=1,2)$ then $a_{1}+a_{2}$ is the least member of $f^{-1}\left\{b_{1}+b_{2}\right\}$. Since for any $t \in u / v$, $(t, 1)=(t, 0)+(v, 1)$, it is enough to show that $f^{-1}\{(v, 1)\}$ has a least member. $f$ is surjective, so there exists a $w \in F(X)$ with $f(w)=(v, 1)$. Define
$$
\bar{w}=w \cdot \prod\{x \in X:(v, 1) \leq f(x)\} \cdot \prod\left\{\beta_{h}(b): b \in L-I \text { and } v<b\right\} .
$$
Clearly $f(\bar{w})=(v, 1)$, and if
$$
S=\{p \in F(X):(v, 1) \leq f(p) \text { implies } \bar{w} \leq p\}
$$
then $X \subseteq S$ and $S$ is closed under meets. We need to show that $S$ is also closed under joins. Let $p, q \in S$ and suppose that $(v, 1) \leq f(p+q)=f(p)+f(q)$. Note that by construction of $L[I]$, $r+s \in I \times\{1\}$ implies $r \in I \times\{1\}$ or $s \in I \times\{1\}$, so if $f(p+q) \in I \times\{1\}$, then $(v, 1) \leq f(p)$ or $(v, 1) \leq f(q)$, whence $\bar{w} \leq p$ or $\bar{w} \leq q$, which certainly implies $\bar{w} \leq p+q$. On the other hand, if $f(p+q) \in L-I$, then $\gamma f(p+q)=f(p+q)$, and so $\bar{w} \leq \beta_{h} \gamma f(p+q) \leq p+q$. Therefore $f$ is lower bounded, with $\beta_{f}: L[I] \hookrightarrow F(X)$ given by
$$
\beta_{f}(b)= \begin{cases}\beta_{h} \gamma(b) & \text { if } \quad b \in(L-I) \cup I \times\{0\} \\ \bar{w}+\beta_{h} \gamma(b) & \text { if } \quad b \in I \times\{1\} .\end{cases}
$$
A dual argument shows that $f$ is also upper bounded, hence $L[I] \in \mathcal{B}_{F}$.

Let $W(L)$ be the set of all (W)-failures of the lattice $L$ (see Lemma 2.15) and define
$$
\mathcal{I}_{W}(L)=\{c+d / a b:(a, b, c, d) \in W(L)\} .
$$

Lemma 2.42 (Day [77]). If $L$ is a lattice that fails (W), then there exists a lattice $\bar{L}$ and a bounded epimorphism $\rho: \bar{L} \rightarrow L$ satisfying: for any $\left(a_{1}, a_{2}, a_{3}, a_{4}\right) \in W(L)$ and any $x_{i} \in \rho^{-1}\left\{a_{i}\right\}$ $(i=1,2,3,4)$, $x_{1} x_{2} \not\leq x_{3}+x_{4}$.

Proof. For each $I \in \mathcal{I}_{W}(L)$ we construct the lattice $L[I]$ and denote by $\gamma_{I}$ the natural epimorphism from $L[I]$ onto $L$. Note that $\gamma_{I}$ is bounded, with the upper and lower bounds of $\gamma_{I}^{-1}\{b\}$ given by
$$
\alpha_{I}(b)=\begin{cases} b & \text { if } b \in L-I \\ (b, 1) & \text { if } b \in I\end{cases} \qquad \beta_{I}(b)=\begin{cases} b & \text { if } b \in L-I \\ (b, 0) & \text { if } b \in I\end{cases}
$$
respectively.
Let $L^{\prime}$ be the product of all the $L[I]$ as $I$ ranges through $\mathcal{I}_{W}(L)$, and let $\pi_{I}: L^{\prime} \rightarrow L[I]$ be the $I$th projection map. Recall that for $f, g: L^{\prime} \rightarrow L$ we can define a sublattice of $L^{\prime}$ by
$$
\mathrm{Eq}(f, g)=\left\{x \in L^{\prime}: f(x)=g(x)\right\}.
$$
Let $\bar{L}=\bigcap\left\{\mathrm{Eq}\left(\gamma_{I} \pi_{I}, \gamma_{J} \pi_{J}\right): I, J \in \mathcal{I}_{W}(L)\right\}$ and take $\rho: \bar{L} \rightarrow L$ to be the restriction of $\gamma_{I} \pi_{I}$ to $\bar{L}$. Now, for every $y \in L$, $\gamma_{I} \alpha_{I}(y)=y=\gamma_{J} \alpha_{J}(y)$, hence the tuple $\left(\alpha_{I}(y)\right)_{I}$ is an element of $\bar{L}$, and clearly $\alpha_{\rho}(y)=\left(\alpha_{I}(y)\right)_{I}$ is the greatest element of $\rho^{-1}\{y\}$. Similarly $\beta_{\rho}(y)=\left(\beta_{I}(y)\right)_{I}$, and therefore $\rho$ is a bounded epimorphism. To verify the last part of the lemma, it is sufficient to show that for all $(a, b, c, d) \in W(L)$, $\beta_{\rho}(a) \beta_{\rho}(b) \not\leq \alpha_{\rho}(c)+\alpha_{\rho}(d)$. This is indeed the case, since $c+d / a b=I \in \mathcal{I}_{W}(L)$ implies
$$
\beta_{I}(a) \beta_{I}(b)=(a b, 1) \not \leq(c+d, 0)=\alpha_{I}(c)+\alpha_{I}(d) .
$$
Note that if $L \in \mathcal{B}_{F}$, then $\bar{L}$ is a sublattice of a finite product of lattices $L[I]$, hence by Lemmas 2.40 and 2.41, $\bar{L} \in \mathcal{B}_{F}$.

Theorem 2.43 (Day [77]). For any lattice $L$, there is a lattice $\hat{L}$ satisfying (W) and a bounded epimorphism $\hat{\rho}: \hat{L} \rightarrow L$.

Proof. Let $L_{0}=L$ and, for each $n \in \omega$, let $L_{n+1}=\bar{L}_{n}$ and $\rho_{n+1}: L_{n+1} \rightarrow L_{n}$ be given by the preceding lemma. $\hat{L}$ is defined to be the inverse limit of the $L_{n}$, $\rho_{n}$, i.e. $\hat{L}$ is the sublattice of the product $\times_{n \in \omega} L_{n}$ defined by
$$
x \in \hat{L} \quad \text { if and only if } \quad \rho_{n+1}\left(x_{n+1}\right)=x_{n} \quad \text { for all } n \in \omega,
$$
where $x_{i} \in L_{i}$ is the image of $x$ under the projection $\pi_{i}: \times_{n \in \omega} L_{n} \rightarrow L_{i}$. We claim that $\hat{L}$ satisfies (W). Suppose $a, b, c, d \in \hat{L}$ with $a, b \not\leq c+d$ and $a b \not \leq c, d$. Then there exist indices $j, k, l, m$ such that
$$
a_{j} b_{j} \not \leq c_{j}, \quad a_{k} b_{k} \not \leq d_{k}, \quad a_{l} \not \leq c_{l}+d_{l} \quad \text { and } \quad b_{m} \not \leq c_{m}+d_{m} .
$$
Since each $\rho_{i}$ is order-preserving, we have that for any $i \geq \max \{j, k, l, m\}$,
$$
a_{i}, b_{i} \not \leq c_{i}+d_{i} \quad \text { and } \quad a_{i} b_{i} \not \leq c_{i}, d_{i} .
$$
Now if $a_{i} b_{i} \not\leq c_{i}+d_{i}$, then $a b \not\leq c+d$ and we are done. If $a_{i} b_{i} \leq c_{i}+d_{i}$, then $\left(a_{i}, b_{i}, c_{i}, d_{i}\right) \in W\left(L_{i}\right)$, so by the previous lemma $a_{i+1} b_{i+1} \not \leq c_{i+1}+d_{i+1}$, and again $a b \not \leq c+d$. Hence $\hat{L}$ satisfies (W). Let $\hat{\rho}=\pi_{0} \mid \hat{L}: \hat{L} \rightarrow L$, let $\bar{\alpha}_{0}=\bar{\beta}_{0}$ be the identity map on $L_{0}=L$, and for $n \geq 1$ define the maps $\bar{\alpha}_{n}, \bar{\beta}_{n}: L \hookrightarrow L_{n}$ by
$$
\bar{\alpha}_{n}=\alpha_{\rho_{n}} \bar{\alpha}_{n-1} \quad \text { and } \quad \bar{\beta}_{n}=\beta_{\rho_{n}} \bar{\beta}_{n-1} .
$$
Then it is easy to check that for $y \in L$ the sequences $\left(\bar{\alpha}_{n}(y)\right)$ and $\left(\bar{\beta}_{n}(y)\right)$ are the greatest and least elements of $\hat{\rho}^{-1}\{y\}$ respectively, hence $\hat{\rho}$ is a bounded epimorphism.
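The interval-doubling construction $L[u/v]$ that drives Lemma 2.41, Lemma 2.42 and Theorem 2.46 is easy to carry out explicitly for a finite lattice: the quotient $I=u/v$ is replaced by $I \times\{0,1\}$ with the product order, and all comparabilities involving $L-I$ are inherited from $L$. The sketch below (our encoding; the helper names are ours and merely illustrative) doubles the quotient $c/a$ in the pentagon used in the earlier sketches and records the natural epimorphism $\gamma$ back onto $L$.

```python
# A small sketch of Day's interval doubling L[I] for a finite lattice.
elements = ['0', 'a', 'b', 'c', '1']                       # the pentagon
order = {('0', 'a'), ('0', 'b'), ('0', 'c'), ('0', '1'),
         ('a', 'c'), ('a', '1'), ('b', '1'), ('c', '1')} | {(x, x) for x in elements}
leq = lambda x, y: (x, y) in order

def double(elements, leq, v, u):
    """Return the carrier and order of L[I] for the quotient I = u/v."""
    I = [x for x in elements if leq(v, x) and leq(x, u)]
    new_elements = ([x for x in elements if x not in I]
                    + [(x, i) for x in I for i in (0, 1)])

    def new_leq(p, q):
        a = p[0] if isinstance(p, tuple) else p
        b = q[0] if isinstance(q, tuple) else q
        if not leq(a, b):
            return False
        if isinstance(p, tuple) and isinstance(q, tuple):
            return p[1] <= q[1]            # product order inside I x {0,1}
        return True                        # comparabilities with L - I come from L

    return new_elements, new_leq

L2, leq2 = double(elements, leq, 'a', 'c')
gamma = {x: (x[0] if isinstance(x, tuple) else x) for x in L2}   # natural epimorphism onto L
print(len(elements), '->', len(L2))        # 5 -> 7 elements after doubling c/a
```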
Theorem 2.44 (Day [77]). $\mathcal{L}$ is generated by the class of all splitting lattices.

Proof. Let $L=F_{\mathcal{D}}(3)$, the free distributive lattice on three generators, say $x, y, z$, and consider the lattice $\hat{L}$ constructed in the preceding theorem. $L$ is a finite distributive lattice, hence $L \in \mathcal{B}_{F}$, and it follows that $\hat{L} \in\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$. Choose elements $\hat{x}, \hat{y}, \hat{z} \in \hat{L}$ which map to $x, y, z$ under $\hat{\rho}: \hat{L} \rightarrow L$. Since the set $X=\{x, y, z\}$ satisfies (W2$'$) and (W3$'$), so does the set $\hat{X}=\{\hat{x}, \hat{y}, \hat{z}\}$. In addition $\hat{L}$ satisfies (W), hence the sublattice of $\hat{L}$ generated by $\hat{X}$ is isomorphic to $F_{\mathcal{L}}(3)$. By a well-known result of Whitman [42], the free lattice on countably many generators is a sublattice of $F_{\mathcal{L}}(3)$, and therefore $F_{\mathcal{L}}(\omega) \in\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$. The result now follows.

The two statements of the following corollary were proven equivalent to the above theorem by A. Kostinsky (see McKenzie [72]).

Corollary 2.45 (i) $F_{\mathcal{L}}(n)$ is weakly atomic for each $n \in \omega$.

(ii) For any proper subvariety $\mathcal{V}$ of $\mathcal{L}$, there is a splitting pair $\left(\mathcal{V}_{1}, \mathcal{V}_{2}\right)$ of $\mathcal{L}$ such that $\mathcal{V} \subseteq \mathcal{V}_{1}$.

Proof. (i) By the above theorem $F_{\mathcal{L}}(n)$ is a subdirect product of splitting lattices $S_{i}$ $(i \in I)$. Let $f: F_{\mathcal{L}}(n) \hookrightarrow \times_{i \in I} S_{i}$ be the subdirect representation, and suppose $r / s$ is a nontrivial quotient of $F_{\mathcal{L}}(n)$. Then for some index $i \in I$, $\pi_{i} f(r) \neq \pi_{i} f(s)$. Since $S_{i}$ is finite, we can choose a prime quotient $p / q \subseteq \pi_{i} f(r) / \pi_{i} f(s)$. By Theorem 2.25, $\pi_{i} f: F_{\mathcal{L}}(n) \rightarrow S_{i}$ is a bounded epimorphism, so there exists a greatest preimage $v$ of $q$ and a least preimage $u$ of $p$, and it is easy to check that $u+v / v$ is a prime subquotient of $r / s$ (see proof of Theorem 2.25).

(ii) This is an immediate consequence of the preceding theorem, since $\mathcal{L}=\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$ if and only if every proper subvariety of $\mathcal{L}$ does not contain all splitting lattices.

Note that if $F_{\mathcal{L}}(n)$ is weakly atomic for $n \in \omega$, then by Corollary 2.26, $F_{\mathcal{L}}(n)$ is a subdirect product of splitting lattices, hence $F_{\mathcal{L}}(n) \in\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$ for each $n \in \omega$. This clearly implies $\mathcal{L}=\left(\mathcal{B}_{F}\right)^{\mathcal{V}}$.

Using some of the results of this section, we prove one last characterization of finite bounded lattices.

Theorem 2.46 (Day [79]). A finite lattice $L$ is bounded if and only if there is a sequence of lattices $1=L_{0}, L_{1}, \ldots, L_{n+1}=L$ and a sequence of quotients $u_{0} / v_{0}, \ldots, u_{n} / v_{n}$ with $u_{i} / v_{i} \subseteq L_{i}$ such that $L_{i+1} \cong L_{i}\left[u_{i} / v_{i}\right]$ $(i=0,1, \ldots, n)$.

Proof. The reverse implication follows from Lemma 2.41, since the trivial lattice 1 is obviously bounded. To prove the forward implication, let $\theta$ be an atom in $\operatorname{Con}(L)$. We need only show that $L$ can be obtained from $L_{n}=L / \theta$ by finding a suitable quotient $u_{n} / v_{n}$ in $L_{n}$ such that $L_{n}\left[u_{n} / v_{n}\right] \cong L$.
Since $L_{n}$ is again a finite bounded lattice, we can then repeat this process to obtain $L_{n-1}, L_{n-2}, \ldots, L_{0}=1$. By Theorem 2.39 the map $p \mapsto \operatorname{con}\left(p, p_{*}\right)$ is a bijection from $J(L)$ to $J(\operatorname{Con}(L))$, and since $\theta \in J(\operatorname{Con}(L))$, there exists a unique $p \in J(L)$ with $\theta=\operatorname{con}\left(p, p_{*}\right)$. $L$ is semidistributive, so by the dual of Lemma 2.36 (iii) we can find a unique meet irreducible $m \in L$ such that $m^{*} / m \searrow p / p_{*}$, where $m^{*}$ is the unique cover of $m$. We claim that

(1) $m / p_{*}$ transposes bijectively up onto $m^{*} / p$, and

(2) $x \theta y$ if and only if $x=y$ or $\{x, y\}=\{z, p+z\}$ for some $z \in m / p_{*}$.

Letting $u_{n}=m / \theta$ and $v_{n}=p / \theta$, we then have $L_{n}\left[u_{n} / v_{n}\right] \cong L$. To prove (1), suppose $x \in m / p_{*}$ but $x<(p+x) m$. Then we can find $q \in J(L)$ such that $q \leq(p+x) m$ and $q \not\leq x$. Now $p_{*} \theta p$ implies $x \theta(p+x) m$, which in turn implies $q_{*} \theta q$. Since the map $p \mapsto \operatorname{con}\left(p, p_{*}\right)$ is one-one, this forces $p=q \leq m$, a contradiction. Dually one proves that for $x \in m^{*} / p$, $x=m x+p$. Since $L$ is a finite lattice we need only check the forward implication of (2) for pairs $(x, y) \in \theta$ of the form $x \prec y$. Clearly $\operatorname{con}(x, y) \leq \operatorname{con}\left(p, p_{*}\right)$, and since $\operatorname{con}\left(p, p_{*}\right)$ is an atom of $\operatorname{Con}(L)$, equality holds. This means that $p$ is the unique join irreducible for which $p / p_{*} \nearrow y / x$, and therefore $\{x, y\}=\{x, p+x\}$. The reverse implication follows from the observation that if $x=z$, say, and $z \in m / p_{*}$, then $x \geq p_{*}$, $x \not\geq p$ and $y=p+x$. This implies that $p / p_{*} \nearrow y / x$, whence $x \theta y$.

### Finite lattices that satisfy (W)

We conclude this chapter with a result about finite lattices that satisfy Whitman's condition (W), and some remarks about finite sublattices of a free lattice.

Theorem 2.47 (Davey and Sands [77]). Suppose $f$ is an epimorphism from a finite lattice $K$ onto a lattice $L$. If $L$ satisfies (W), then there exists an embedding $g: L \hookrightarrow K$ such that $f g$ is the identity map on $L$.

Proof. Let $f$ be the epimorphism from $K$ onto $L$. Since $K$ is finite, $f$ is bounded, so we obtain the meet preserving map $\alpha_{f}: L \hookrightarrow K$ and the join preserving map $\beta_{f}: L \hookrightarrow K$ (see Lemma 2.17 (v)). Let $M$ be the collection of all join preserving maps $\gamma: L \rightarrow K$ which are pointwise below $\alpha_{f}$ (i.e. $\gamma(b) \leq \alpha_{f}(b)$ for all $b \in L$). $M$ is not empty since $\beta_{f} \in M$. Now define a map $g: L \rightarrow K$ by $g(b)=\sum\{\gamma(b): \gamma \in M\}$. Then $g$ is clearly join preserving and is in fact the largest element of $M$ (in the pointwise order). Also, since $\beta_{f}(b) \leq g(b) \leq \alpha_{f}(b)$ for all $b \in L$, we have
$$
b=f \beta_{f}(b) \leq f g(b) \leq f \alpha_{f}(b)=b,
$$
which implies that $f g$ is the identity map on $L$. It remains to show that $g$ is meet preserving; then $g$ is the desired embedding of $L$ into $K$. Suppose $g(a b) \neq g(a) g(b)$ for some $a, b \in L$. Since $g$ is order preserving, we actually have $g(a b)<g(a) g(b)$.
Define $h: L \rightarrow K$ by
$$
h(x)= \begin{cases}g(x) & \text { if } \quad a b \not\leq x \\ g(x)+g(a) g(b) & \text { if } \quad a b \leq x.\end{cases}
$$
Then $h \notin M$ because $h(a b)=g(a) g(b)>g(a b)$, but $h$ is pointwise below $\alpha_{f}$ since for $a b \leq x$ we have $h(x)=g(x)+g(a) g(b) \leq \alpha_{f}(x)+\alpha_{f}(a) \alpha_{f}(b)=\alpha_{f}(x)+\alpha_{f}(a b)=\alpha_{f}(x)$. It follows that $h$ is not join preserving, so there exist $c, d \in L$ such that $h(c+d) \neq h(c)+h(d)$. From the definition of $h$ we see that this is only possible if $a b \leq c+d$, $a b \not\leq c$ and $a b \not\leq d$. Thus (W) implies that $a \leq c+d$ or $b \leq c+d$. However, either one of these conditions leads to a contradiction, since then
$$
h(c+d)=g(c+d)+g(a) g(b)=g(c+d)=g(c)+g(d)=h(c)+h(d) .
$$
Actually the result proved in Davey and Sands [77] is somewhat more general, since it suffices to require that every chain of elements in $K$ is finite.

Finite sublattices of a free lattice. Another result worth mentioning is that any finite semidistributive lattice which satisfies Whitman's condition (W) can be embedded in a free lattice. This longstanding conjecture of Jónsson was finally proved by Nation [83]. Following an approach originally suggested by Jónsson, Nation proves that a finite semidistributive lattice $L$ which satisfies (W) cannot contain a cycle. By Corollary 2.35 and Theorem 2.39, $L$ is bounded, and it follows from Theorem 2.19 that $L$ can be embedded in a free lattice. (Note that (W) fails in the lattice of Figure 2.7.) Of course any finite sublattice of a free lattice is semidistributive and satisfies (W) (Corollary 2.16, Lemma 2.30). So in particular Nation's result shows that the finite sublattices of free lattices can be characterized by the first-order conditions $\left(\mathrm{SD}^{+}\right)$, $\left(\mathrm{SD}^{\cdot}\right)$ and (W).

## Chapter 3

## Modular Varieties

### Introduction

Modular lattices were studied in general by Dedekind around 1900, and for quite some time they were referred to as Dedekind lattices. The importance of modular lattices stems from the fact that many algebraic structures give rise to such lattices. For example the lattice of normal subgroups of a group, the lattice of subspaces of a vector space, and the lattice of subspaces of a projective space (projective geometry) are modular. The Jordan–Hölder Theorem of group theory depends only on the (semi-) modularity of normal subgroup lattices, and the theorem of Kuroš and Ore holds in any modular lattice. Projective spaces play an important role in the study of modular varieties because their subspace lattices provide us with infinitely many subdirectly irreducible (complemented) modular lattices of arbitrary dimension. They also add a geometric flavor to the study of modular lattices. The Arguesian identity was introduced by Jónsson [53] (see also Schützenberger [45]). It implies modularity and is a lattice equivalent of Desargues' Law for projective spaces. Some of the results about Arguesian lattices are discussed in Section 3.2, but to keep the length of this presentation within reasonable bounds, most proofs have been omitted. As we have mentioned before, McKenzie [70] and Baker [69] (see also Wille [72] and Lee [85]) showed independently that the lattice $\Lambda$ of all lattice subvarieties has $2^{\omega}$ members. Moreover, Baker's proof shows that the lattice $\Lambda_{\mathcal{M}}$ of all modular lattice subvarieties contains the Boolean algebra $2^{\omega}$ as a sublattice.
Continuous geometries, as introduced by von Neumann [60], are complemented modular lattices, and von Neumann's coordinatization of these structures demonstrates an important connection between rings and modular lattices. Using the notion of an $n$-frame and its associated coordinate ring (due to von Neumann), Freese [79] shows that the variety $\mathcal{M}$ of all modular lattices is not generated by its finite members. Herrmann [84] extends this result by showing that $\mathcal{M}$ is not even generated by its members of finite length. The structure of the bottom end of $\Lambda_{\mathcal{M}}$ is investigated in Grätzer [66] and Jónsson [68], where it is shown that the variety $\mathcal{M}_{3}$ generated by the diamond $M_{3}$ is covered by exactly two varieties, $\mathcal{M}_{4}$ and $\mathcal{M}_{3^{2}}$. Furthermore, Jónsson [68] proved that above $\mathcal{M}_{4}$ we have a chain of varieties $\mathcal{M}_{n}$, each generated by a finite modular lattice of length 2, such that $\mathcal{M}_{n+1}$ is the only join irreducible cover of $\mathcal{M}_{n}$. Hong [72] adds further detail to this picture by showing, among other things, that $\mathcal{M}_{3^{2}}$ has exactly five join irreducible covers. The methods developed by Jónsson and Hong have proved to be very useful for the investigation of modular varieties generated by lattices of finite length and/or finite width (= maximal number of pairwise incomparable elements). Freese [72] extends these methods and gives a complete description of the variety generated by all modular lattices of width 4.

### Projective Spaces and Arguesian Lattices

We begin with a discussion of projective spaces, since many of the results about modular varieties make use of some of the properties of these structures. Some of the results reviewed here will also be used in Chapter 6.

Definition of a projective space. In this section we will be concerned with pairs of sets $(P, L)$, where $P$ is a set of points and $L$ is a collection of subsets of $P$, called lines. If a point $p \in P$ is an element of a line $l \in L$, then we say that $p$ lies on $l$, and $l$ passes through $p$. A set of points is collinear if all the points lie on the same line. A triangle is an ordered triple of noncollinear (hence distinct) points $(p, q, r)$. $(P, L)$ is said to be a projective space (sometimes also called a projective geometry) if it satisfies:

(P1) each line contains at least two points;

(P2) any two distinct points $p$ and $q$ are contained in exactly one line (denoted by $\overline{\{p, q\}}$);

(P3) for any triangle $(p, q, r)$, if a line intersects two of the lines $\overline{\{p, q\}}$, $\overline{\{p, r\}}$ or $\overline{\{q, r\}}$ in distinct points, then it meets the third side (i.e. coplanar lines intersect, see Figure 3.1).

The two simplest projective spaces, which have no lines at all, are $(\emptyset, \emptyset)$ and $(\{p\}, \emptyset)$, while $(\{p, q\},\{\{p, q\}\})$ and $(\{p, q, r\},\{\{p, q, r\}\})$ have one line each, and
$$
(\{p, q, r\},\{\{p, q\},\{p, r\},\{q, r\}\})
$$
has three lines. The last two examples show that we can have different projective spaces defined on the same set of points. However we will usually be dealing only with one space $(P, L)$ at a time, which we then simply denote by the letter $P$. A subspace $S$ of a projective space $P$ is a subset $S$ of $P$ such that every line which passes through any two distinct points of $S$ is included in $S$ (i.e.
$p, q \in S$ implies $\overline{\{p, q\}} \subseteq S$, where we define $\overline{\{p, p\}}=\{p\}$). The collection of all subspaces of $P$ is denoted by $\mathcal{L}(P)$.

Projective spaces and modular geometric lattices. A lattice $L$ is said to be upper semimodular or simply semimodular if $a \prec b$ in $L$ implies $a+c \prec b+c$ or $a+c=b+c$ for all $c \in L$. Clearly every modular lattice is semimodular. A geometric lattice is a semimodular algebraic lattice in which the compact elements are exactly the finite joins of atoms. The next theorem summarizes the connection between projective spaces and (modular) geometric lattices.

Theorem 3.1 Let $P$ be an arbitrary projective space. Then

(i) $(\mathcal{L}(P), \subseteq)$ is a complete modular lattice;

(ii) associated with every modular lattice $M$ is a projective space $P(M)$, where $P(M)$ is the set of all atoms of $M$, and a line through two distinct atoms $p$ and $q$ is the set of atoms below $p+q$ (i.e. $\overline{\{p, q\}}=\{r \in P(M): r \leq p+q\}$);

(iii) $P(\mathcal{L}(P)) \cong P$;

(iv) for any modular lattice $M$, if $M^{\prime}$ is the sublattice of $M$ generated by the atoms of $M$, and $\mathcal{I} M^{\prime}$ is the ideal lattice of $M^{\prime}$, then $\mathcal{I} M^{\prime} \cong \mathcal{L}(P(M))$;

(v) $\mathcal{L}(P)$ is a modular geometric lattice;

(vi) $\mathcal{L}(P(M)) \cong M$ for any modular geometric lattice $M$.

Proof. (i) $\mathcal{L}(P)$ is closed under arbitrary intersections and $P \in \mathcal{L}(P)$, hence $\mathcal{L}(P)$, ordered by inclusion, is a complete lattice. For $S, T \in \mathcal{L}(P)$ the join can be described by
$$
S+T=\bigcup\{\overline{\{p, q\}}: p \in S, \quad q \in T\}
$$
(here we use (P3), see [GLT] p.203). Suppose $R \in \mathcal{L}(P)$ and $R \supseteq T$. To prove $\mathcal{L}(P)$ modular, we need only show that $R(S+T) \subseteq R S+T$. Let $r \in R(S+T)$. Then $r \in R$ and $r \in S+T$, which implies $r \in \overline{\{p, q\}}$ for some $p \in S, q \in T \subseteq R$. If $r=q$ then $r \in T$, and if $r \neq q$ then $p \in \overline{\{r, q\}} \subseteq R$ (by (P2)), hence $p \in R S$. In either case $r \in R S+T$ as required.

(ii) We have to show that $P(M)$ satisfies (P1), (P2) and (P3). (P1) holds by construction, and (P2) follows from the fact that, by modularity, the join of two atoms covers both atoms. To prove (P3), suppose $(p, q, r)$ form a triangle, and $x, y$ are two distinct points (atoms) such that $x \leq p+q$ and $y \leq q+r$ (see Figure 3.1). It suffices to show that $\overline{\{x, y\}} \cap \overline{\{p, r\}} \neq \emptyset$. Since $p, q, r$ are noncollinear, $p+q+r$ covers $p+r$ by (upper semi-) modularity. Also $x+y \leq p+q+r$, hence $x+y=(x+y)(p+q+r)$, which covers or equals $(x+y)(p+r)$ by (lower semi-) modularity. If $x+y=(x+y)(p+r)$, then $x+y \leq p+r$, and since $x+y$ and $p+r$ are elements of height 2, we must have $x+y=p+r$. In this case $\overline{\{x, y\}}=\overline{\{p, r\}}$ and their intersection is certainly nonempty. If $x+y \succ(x+y)(p+r)=z$, then $z$ must be an atom, and is in fact the point of intersection of $\overline{\{x, y\}}$ and $\overline{\{p, r\}}$.

(iii) A point $p \in P$ corresponds to the one element subspace (atom) $\{p\} \in \mathcal{L}(P)$, and it is easy to check that this map extends to a correspondence between the lines of $P$ and $\mathcal{L}(P)$.
(iv) Each ideal of $M^{\prime}$ is generated by the set of atoms it contains, every subspace of $P(M)$ is the set of atoms of some ideal of $M^{\prime}$, and the (infinite) meet operations (intersection) of both lattices are preserved by this correspondence. Hence the result follows.

(v) By (iii) $P \cong P(M)$ for some modular lattice $M$, hence (iv) implies that $\mathcal{L}(P) \cong \mathcal{I} M^{\prime}$. It is easy to check that $\mathcal{I} M^{\prime}$ is a geometric lattice, and modularity follows from (i). Now (vi) follows from (iv) and the observation that if $M$ is a geometric lattice, then $\mathcal{I} M^{\prime} \cong M$.

Lemma 3.2 (Birkhoff [35']). Every geometric lattice is complemented.

Proof. Let $L$ be a geometric lattice and consider any nonzero element $x \in L$. By Zorn's Lemma there exists an element $m \in L$ that is maximal with respect to the property $x m=0_{L}$. We want to show $x+m=1_{L}$. Every element of $L$ is the join of all the atoms below it, so if $x+m<1_{L}$, then there is an atom $p \nleq x+m$, and by semimodularity $m \prec m+p$. We show that $x(m+p)=0_{L}$, which then contradicts the maximality of $m$, and we are done. Suppose $x(m+p)>0_{L}$. Then there is an atom $q \leq x(m+p)$, and $q \nleq m$ since $q \leq x$. Again by semimodularity $m \prec m+q$. Also $q \leq m+p$, hence $m \prec m+q \leq m+p$, and together with $m \prec m+p$ we obtain $m+q=m+p$. But this implies $p \leq m+q \leq x+m$, a contradiction.

In fact MacLane [38] showed that every geometric lattice is relatively complemented (see [GLT] p.179). The next theorem is a significant result that is essentially due to Frink [46], although Jónsson [54] made the observation that the lattice $L$ is in the same variety as $K$.

Theorem 3.3 Let $\mathcal{V}$ be a variety of lattices. Then every complemented modular lattice $K \in \mathcal{V}$ can be 0,1-embedded in some modular geometric lattice $L \in \mathcal{V}$.

Proof. Let $M=\mathcal{F} K$ be the filter lattice of $K$, ordered by reverse inclusion. Then $M$ satisfies all the identities which hold in $K$, hence $M$ is modular and $M \in \mathcal{V}$. For $L$ we take the subspace lattice of the projective space $P(M)$ associated with $M$. Note that the points of $P(M)$ are the maximal (proper) filters of $K$. By Theorem 3.1 (v), $L$ is a modular geometric lattice, and by (iv) $L \cong \mathcal{I} M^{\prime}$, which implies that $L$ is also in $\mathcal{V}$. Define a map $f: K \rightarrow L$ by
$$
f(x)=\{F \in P(M): x \in F\}
$$
for each $x \in K$. It is easy to check that $f(x)$ is in fact a subspace of $P(M)$, that $f\left(0_{K}\right)=\emptyset$, $f\left(1_{K}\right)=P(M)$ and that $f$ is meet preserving, hence isotone. To conclude that $f$ is also join preserving, it is therefore sufficient to show that $f(x+y) \subseteq f(x)+f(y)$. This is trivial for $x$ or $y$ equal to $0_{K}$, so suppose $x, y \neq 0_{K}$ and $F \in f(x+y)$. Then $x+y \in F$, and we have to show that there exist two maximal filters $G \in f(x), H \in f(y)$ such that $F \leq G+H$ (i.e. $F \supseteq G+H$). If $x \in F$, then we simply take $G=F$, and $H$ as any maximal filter containing $y$, and similarly for $y \in F$ (here, and subsequently, we use Zorn's Lemma to extend any filter to a maximal filter). Thus we may assume that $x, y \notin F$.
Further we may assume that $x y=0_{K}$, since if $x y>0_{K}$, then we let $y^{\prime}$ be a relative complement of $x y$ in the quotient $y / 0_{K}$ (it is easy to see that every complemented modular lattice is relatively complemented). Clearly $x y^{\prime}=x y y^{\prime}=0_{K}$, $x+y^{\prime}=x+x y+y^{\prime}=x+y$ and any filter that does not contain $y$ must also exclude $y^{\prime}$, so we can replace $y$ by $y^{\prime}$.

Now $y \notin F$ implies $[y)<[y)+F$, where $[y)$ is the principal filter generated by $y$. Hence by modularity, we see that
$$
\left[0_{K}\right)=[x) \cdot[y)<[x) \cdot([y)+F)
$$
(else $[y)$, $[y)+F$ and $[x)$ generate a pentagon). So there is a maximal filter $G \leq[x) \cdot([y)+F)$, whence it follows that $x \in G$ and $[y)+F=[y)+F+G$. This time $x \notin F$ gives $G<G+F$, and to avoid a pentagon, we must have $\left[0_{K}\right)=[y) \cdot G<[y) \cdot(G+F)$. Hence there is a maximal filter $H \leq[y) \cdot(G+F)$, and $x \in G, y \in H$ and $x y=0_{K}$ shows that $G \neq H$. Consequently $F, G$ and $H$ are three distinct atoms of $\mathcal{F} K$, and since $H \leq G+F$, they generate a diamond. Thus $F \leq G+H$ as required.

To see that $f$ is one-one, suppose $x \nleq y$, and let $x^{\prime}$ be a relative complement of $x y$ in $x / 0_{K}$. If $F$ is a maximal filter containing $x^{\prime}$, then $F \in f(x)$ but $F \notin f(y)$ since $x^{\prime} y=0_{K}$. Therefore $f(x) \neq f(y)$.

The above result is not true if we allow $K$ to be an arbitrary modular lattice. Hall and Dilworth [44] construct a modular lattice that cannot be embedded in any complemented modular lattice.

Coordinatization of projective spaces. The dimension of a subspace is defined to be the cardinality of a minimal generating set. This is equal to the height of the subspace in the lattice of all subspaces. If it is finite, then it is one greater than the usual notion of Euclidean dimension, since a line is generated by a minimum of two points. A two-dimensional projective (sub-) space is called a projective line and a three-dimensional one is called a projective plane. It is easy to characterize the subspace lattices of projective lines: they are all the (modular) lattices of length 2, excluding the three element chain. Note that except for the four element Boolean algebra, these lattices are all simple.

A projective space in which every line has at least three points is termed nondegenerate. A simple geometric argument shows that the lines of a nondegenerate projective space all have the same number of points. Nondegenerate projective spaces are characterized by the fact that their subspace lattices are directly indecomposable (not the direct product of subspace lattices of smaller projective spaces) and, in the light of the following theorem, they form the building blocks of all other projective spaces.

Theorem 3.4 (Maeda [51]). Every (modular) geometric lattice is the product of directly indecomposable (modular) geometric lattices.

A proof of this theorem can be found in [GLT] p.180. There it is also shown that a directly indecomposable modular geometric lattice is subdirectly irreducible (by Lemma 1.13, it will be simple if it is finite dimensional). An important type of nondegenerate projective space is constructed in the following way: Let $D$ be a division ring (i.e. a ring with unit, in which every nonzero element has a multiplicative inverse), and let $V$ be an $\alpha$-dimensional vector space over $D$.
(For $\alpha=n$ finite, $V$ is isomorphic to ${ }_{D} D^{n}$, otherwise $V$ is isomorphic to the vector subspace of ${ }_{D} D^{\alpha}$ generated by the set $\left\{e_{\gamma}: \gamma \in \alpha\right\}$, where the coordinates of the $e_{\gamma}$ are all zero except for a 1 in the $\gamma$th position.) It is not difficult to check that the lattice $\mathcal{L}(V, D)$ of all vector subspaces of $V$ over $D$ is a modular geometric lattice, so by Theorem 3.1, $\mathcal{L}(V, D)$ determines a projective space $P$ such that $\mathcal{L}(V, D) \cong \mathcal{L}(P)$. Clearly $P$ has dimension $\alpha$, and the points of $P$ are the one-dimensional vector subspaces of $V$. Note that $P$ is nondegenerate, for if $p_{u}=\{a u: a \in D\}$ and $p_{v}=\{a v: a \in D\}$ are two distinct points of $P$ (i.e. $u, v \in V$, $u \neq a v$ for any $a \in D$), then the line through these two points must contain the point $p_{u-v}$, which is different from $p_{u}$ and $p_{v}$ (else $u-v=a v$, say, giving $u=(a+1) v$ and therefore $p_{u}=p_{v}$). Observe also that the number of points on each line (= number of one-dimensional subspaces in any two-dimensional subspace) is $|D|+1$. The smallest nondegenerate projective space is obtained from $\mathcal{L}\left(Z_{2}^{3}, Z_{2}\right)$, where $Z_{2}$ is the two element field. The subspace lattice, denoted by $P_{2}$, is given in Figure 3.7.

We say that a nondegenerate projective space $P$ can be coordinatized if $\mathcal{L}(P) \cong \mathcal{L}(V, D)$ for some vector space $V$ over some division ring $D$. To answer the question which projective spaces can be coordinatized, we need to recall Desargues' Law. Two triangles $a=\left(a_{0}, a_{1}, a_{2}\right)$ and $b=\left(b_{0}, b_{1}, b_{2}\right)$ in a projective space $P$ are centrally perspective if $\overline{\left\{a_{i}, a_{j}\right\}} \neq \overline{\left\{b_{i}, b_{j}\right\}}$ and for some point $p$ the points $a_{i}, b_{i}, p$ are collinear $(i, j \in\{0,1,2\})$. If we think of the points $a_{i}, b_{i}$ as atoms of the lattice $\mathcal{L}(P)$, then we can express this condition by
$$
\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right) \leq a_{2}+b_{2} .
$$
The triangles are said to be axially perspective if the points $c_{0}, c_{1}, c_{2}$ are collinear, where $c_{k}=\left(a_{i}+a_{j}\right)\left(b_{i}+b_{j}\right)$, $\{i, j, k\}=\{0,1,2\}$ (see Figure 3.2). This can be expressed by
$$
c_{2} \leq c_{0}+c_{1} .
$$
Desargues' Law states that if two triangles are centrally perspective then they are also axially perspective. A projective space which satisfies Desargues' Law is said to be Desarguesian. It is a standard result of projective geometry that every projective space associated with a vector subspace lattice is Desarguesian. Conversely, we have the classical coordinatization theorem of projective geometry, due to Veblen and Young [10] for the finite dimensional case and Frink [46] in general.

Theorem 3.5 Let $P$ be a nondegenerate Desarguesian projective space of dimension $\alpha \geq 3$. Then there exists a division ring $D$, unique up to isomorphism, such that $\mathcal{L}(P) \cong \mathcal{L}\left(D^{\alpha}, D\right)$.

For a proof of this theorem and further details, the reader should consult [ATL] p.111 or [GLT] p.208. Here we remark only that to construct the division ring $D$ which coordinatizes $P$ we may choose an arbitrary line $l$ of $P$ and define $D$ on the set $l-\{p\}$ where $p$ is any point of $l$.
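To see these counts in a concrete case, the following small sketch (an added illustration, not part of the original text; the names `vectors`, `add` and `lines` are ad hoc) builds the smallest nondegenerate example $\mathcal{L}\left(Z_{2}^{3}, Z_{2}\right)$ directly. Over $Z_{2}$ a one-dimensional subspace is determined by its single nonzero vector, so the nonzero vectors of $Z_{2}^{3}$ serve as the points, and a line consists of the points inside a two-dimensional subspace. The output confirms 7 points, 7 lines and $3=\left|Z_{2}\right|+1$ points on every line, in agreement with the remarks above.

```python
from itertools import product

# Points of the projective space coming from L(Z_2^3, Z_2): the one-dimensional
# subspaces of GF(2)^3.  Over GF(2) each of these is {0, v}, so the nonzero
# vectors themselves can stand for the points.
vectors = [v for v in product((0, 1), repeat=3) if any(v)]

def add(u, v):
    """Vector addition in GF(2)^3."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# A line is the set of points lying in a two-dimensional subspace, i.e. the
# span of two distinct points with the zero vector removed: {u, v, u+v}.
lines = {frozenset({u, v, add(u, v)}) for u in vectors for v in vectors if u != v}

print(len(vectors), len(lines))    # 7 7   (the Fano plane)
print({len(l) for l in lines})     # {3}   every line has |Z_2| + 1 = 3 points
```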
The 0 and 1 of $D$ may also be chosen arbitrarily, and the addition and multiplication are then defined with reference to the lattice operations in $\mathcal{L}(P)$. This leads to the following observation:

Lemma 3.6 Let $P$ and $Q$ be two nondegenerate Desarguesian projective spaces of dimension $\geq 3$ and let $D_{P}$ and $D_{Q}$ be the corresponding division rings which coordinatize them. If $\mathcal{L}(P)$ can be embedded in $\mathcal{L}(Q)$ such that the atoms of $\mathcal{L}(P)$ are mapped to atoms of $\mathcal{L}(Q)$, then $D_{P}$ can be embedded in $D_{Q}$.

It is interesting to note that projective spaces of dimension 4 or more automatically satisfy Desargues' Law ([GLT] p.207), hence any noncoordinatizable projective space is either degenerate, or a projective plane that does not satisfy Desargues' Law, or a projective line that has $k+1$ points, where $k$ is a finite number that is not a prime power.

Arguesian lattices. The lattice theoretic version of Desargues' Law can be generalized to any lattice $L$ by considering arbitrary triples $a, b \in L^{3}$ (also referred to as triangles in $L$) instead of just triples of atoms. We now show that under the assumption of modularity this form of Desargues' Law is equivalent to the Arguesian identity:
$$
\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)\left(a_{2}+b_{2}\right) \leq a_{0}\left(a_{1}+d\right)+b_{0}\left(b_{1}+d\right)
$$
where $d$ is used as an abbreviation for
$$
d=c_{2}\left(c_{0}+c_{1}\right)=\left(a_{0}+a_{1}\right)\left(b_{0}+b_{1}\right)\left(\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}\right)+\left(a_{0}+a_{2}\right)\left(b_{0}+b_{2}\right)\right) .
$$
A lattice is said to be Arguesian if it satisfies this identity.

Lemma 3.7 Let $p=\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)\left(a_{2}+b_{2}\right)$, then

(i) the identity $p \leq a_{0}\left(a_{1}+d\right)+b_{0}$ is equivalent to the Arguesian identity,

(ii) every Arguesian lattice is modular and

(iii) to check whether the Arguesian identity holds in a modular lattice, it is enough to consider triangles $a^{\prime}=\left(a_{0}^{\prime}, a_{1}^{\prime}, a_{2}^{\prime}\right)$ and $b^{\prime}=\left(b_{0}^{\prime}, b_{1}^{\prime}, b_{2}^{\prime}\right)$ which satisfy
$$
\text { (*) } \quad a_{i}^{\prime}+b_{i}^{\prime}=a_{i}^{\prime}+p^{\prime}=b_{i}^{\prime}+p^{\prime}, \quad(i=0,1,2)
$$
where $p^{\prime}$ is defined in the same manner as $p$.

Proof. Since we always have $b_{0}\left(b_{1}+d\right) \leq b_{0}$, the Arguesian identity clearly implies $p \leq a_{0}\left(a_{1}+d\right)+b_{0}$. Conversely, let $L$ be a lattice which satisfies the identity $p \leq a_{0}\left(a_{1}+d\right)+b_{0}$. We first show that $L$ is modular. Given $u, v, w \in L$ with $u \leq w$, let $a_{0}=v, b_{0}=u$ and $a_{1}=a_{2}=b_{1}=b_{2}=w$. Then $p=(v+u) w$ and $d=w$, whence the identity implies $(v+u) w \leq v w+u$. Since $u+v w \leq(u+v) w$ holds in any lattice, we have equality, and so $L$ is modular. This proves (ii). To complete (i), observe that $p$ and $d$ are unchanged if we swap the $a_{i}$'s with their corresponding $b_{i}$'s, hence we also have $p \leq b_{0}\left(b_{1}+d\right)+a_{0}$. Combining these two inequalities gives
$$
\begin{aligned}
p & \leq\left(a_{0}\left(a_{1}+d\right)+b_{0}\right)\left(b_{0}\left(b_{1}+d\right)+a_{0}\right) \\
& =a_{0}\left(a_{1}+d\right)+b_{0}\left(b_{0}\left(b_{1}+d\right)+a_{0}\right) \\
& =a_{0}\left(a_{1}+d\right)+a_{0} b_{0}+b_{0}\left(b_{1}+d\right)
\end{aligned}
$$
by modularity.
Also $a_{0} b_{0} \leq c_{2}, c_{1}$ shows $a_{0} b_{0} \leq d$ and therefore $a_{0} b_{0} \leq a_{0}\left(a_{1}+d\right)$. This means we can delete the term $a_{0} b_{0}$ and obtain the Arguesian identity. Now let $a, b \in L^{3}$ and define $a_{i}^{\prime}=a_{i}\left(b_{i}+p\right), b_{i}^{\prime}=b_{i}\left(a_{i}+p\right)$. Since we are assuming that $L$ is modular, $$ \begin{aligned} a_{i}^{\prime}+b_{i}^{\prime}=a_{i}\left(b_{i}+p\right)+b_{i}\left(a_{i}+p\right) & =\left(a_{i}\left(b_{i}+p\right)+b_{i}\right)\left(a_{i}+p\right) \\ & =\left(b_{i}+p\right)\left(a_{i}+b_{i}\right)\left(a_{i}+p\right) \\ & =\left(b_{i}+p\right)\left(a_{i}+p\right)=\left(b_{i}+p\right) a_{i}+p=a_{i}^{\prime}+p \\ & =b_{i}\left(a_{i}+p\right)+p=b_{i}^{\prime}+p . \end{aligned} $$ Thus $p^{\prime}=\left(a_{0}^{\prime}+p\right)\left(a_{1}^{\prime}+p\right)\left(a_{2}^{\prime}+p\right) \geq p$, while $a_{i}^{\prime} \leq a_{i}$ and $b_{i}^{\prime} \leq b_{i}$ imply $p^{\prime} \leq p$. So we have $p=p^{\prime}$, and condition $(*)$ is satisfied. If the Arguesian identity holds for $a^{\prime}, b^{\prime}$ and we define $d^{\prime}$ in the same way as $d$, then clearly $d^{\prime} \leq d$ and $$ p=p^{\prime} \leq a_{0}^{\prime}\left(a_{1}^{\prime}+d^{\prime}\right)+b_{0}^{\prime} \leq a_{0}\left(a_{1}+d\right)+b_{0}, $$ hence the identity holds for the triangles $a, b$. Theorem 3.8 If a modular lattice $L$ satisfies Desargues' $L a w$ then $L$ is Arguesian. Conversely, if $L$ is Arguesian, then $L$ satisfies Desargues' Law. Proof. Let $a_{0}, a_{1}, a_{2}, b_{0}, b_{1}, b_{2} \in L, p=\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)\left(a_{2}+b_{2}\right), c_{k}=\left(a_{i}+a_{j}\right)\left(b_{i}+b_{j}\right)$, $(\{i, j, k\}=\{0,1,2\})$ and $d=c_{2}\left(c_{0}+c_{1}\right)$ as before. By part (iii) of the preceding lemma we may assume that $$ \text { (*) } \quad a_{i}+b_{i}=a_{i}+p=b_{i}+p \quad i=0,1,2 . $$ Define $\bar{b}_{2}=b_{2}+b_{0}\left(a_{1}+b_{1}\right)$. The following calculation shows that the triangles $\left(a_{0}, a_{1}, a_{2}\right)$ and $\left(b_{0}, b_{1}, \bar{b}_{2}\right)$ are centrally perspective: $$ \begin{aligned} \left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right) & =\left(p+b_{0}\right)\left(a_{1}+b_{1}\right) \quad \text { by }(*) \\ & =p+b_{0}\left(a_{1}+b_{1}\right) \quad \text { by modularity } \\ & \leq a_{2}+b_{2}+b_{0}\left(a_{1}+b_{1}\right)=a_{2}+\bar{b}_{2} . \end{aligned} $$ Therefore Desargues' Law implies that $$ \begin{aligned} c_{2} & \leq\left(a_{1}+a_{2}\right)\left(b_{1}+\bar{b}_{2}\right)+\left(a_{2}+a_{0}\right)\left(\bar{b}_{2}+b_{0}\right) \\ & =\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}+b_{0}\left(a_{1}+b_{1}\right)\right)+\left(a_{2}+a_{0}\right)\left(b_{2}+b_{0}\right) \\ & =\left(a_{1}+a_{2}\right)\left(b_{2}+\left(b_{1}+b_{0}\right)\left(a_{1}+b_{1}\right)\right)+c_{1} \\ & =\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}+a_{1}\left(b_{0}+b_{1}\right)\right)+c_{1} \\ & =\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}\right)+a_{1}\left(b_{0}+b_{1}\right)+c_{1}=c_{0}+c_{1}+a_{1}\left(b_{0}+b_{1}\right), \end{aligned} $$ whence $c_{2}=c_{2}\left(c_{0}+c_{1}+a_{1}\left(b_{0}+b_{1}\right)\right)=c_{2}\left(c_{0}+c_{1}\right)+a_{1}\left(b_{0}+b_{1}\right)=d+a_{1}\left(b_{0}+b_{1}\right)$. 
It follows that $$ \begin{array}{rlr} a_{1}+d=a_{1}+c_{2} & =a_{1}+\left(a_{0}+a_{1}\right)\left(b_{0}+b_{1}\right) & \\ & =\left(a_{1}+b_{1}+b_{0}\right)\left(a_{0}+a_{1}\right) & \\ & =\left(a_{1}+p+b_{0}\right)\left(a_{0}+a_{1}\right) & \text { by }(*) \\ & =\left(a_{1}+p+a_{0}\right)\left(a_{0}+a_{1}\right) & \text { by }(*) \\ & =\left(a_{0}+a_{1}\right) \geq a_{0}, & \end{array} $$ so we finally obtain $a_{0}\left(a_{1}+d\right)+b_{0}=a_{0}+b_{0} \geq p$. Conversely, suppose $L$ is Arguesian (hence modular) and $\left(a_{0}, a_{1}, a_{2}\right),\left(b_{0}, b_{1}, b_{2}\right)$ are centrally perspective, i.e. $\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right) \leq a_{2}+b_{2}$. Let $\bar{c}_{0}=\left(a_{1}+a_{2}\right)\left(c_{1}+c_{2}\right)$ and take $a_{0}^{\prime}=\bar{c}_{0}, a_{1}^{\prime}=b_{1}, a_{2}^{\prime}=a_{1}, b_{0}^{\prime}=c_{1}, b_{1}^{\prime}=b_{0}, b_{2}^{\prime}=a_{0}$ in the (equivalent form of the) Arguesian identity $p^{\prime} \leq a_{0}^{\prime}\left(a_{1}^{\prime}+d^{\prime}\right)+b_{0}^{\prime}$. We claim that under these assignments $p^{\prime}=c_{2}$ and $a_{0}^{\prime}\left(a_{1}^{\prime}+d^{\prime}\right)+b_{0}^{\prime} \leq c_{0}+c_{1}$ from which it follows that the two triangles are axially perspective. Firstly, $$ \begin{aligned} \bar{c}_{0}+c_{1} & =\left(a_{1}+a_{2}+c_{1}\right)\left(c_{1}+c_{2}\right) \\ & =\left(a_{1}+\left(a_{0}+a_{2}\right)\left(b_{0}+a_{2}+b_{2}\right)\right)\left(c_{1}+c_{2}\right) \\ & \geq\left(a_{1}+\left(a_{0}+a_{2}\right)\left(b_{0}+\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)\right)\right)\left(c_{1}+c_{2}\right) \\ & =\left(a_{1}+\left(a_{0}+a_{2}\right)\left(a_{0}+b_{0}\right)\left(b_{0}+a_{1}+b_{1}\right)\right)\left(c_{1}+c_{2}\right) \\ & \geq\left(a_{1}+a_{0}\left(b_{0}+a_{1}+b_{1}\right)\right)\left(c_{1}+c_{2}\right) \\ & =\left(a_{1}+a_{0}\right)\left(b_{0}+a_{1}+b_{1}\right)\left(c_{1}+c_{2}\right) \\ & \geq\left(a_{0}+a_{1}\right)\left(b_{0}+b_{1}\right)\left(c_{1}+c_{2}\right)=c_{2} \end{aligned} $$ so $p^{\prime}=\left(\bar{c}_{0}+c_{1}\right)\left(b_{1}+b_{0}\right)\left(a_{1}+a_{0}\right)=c_{2}$. Secondly, $$ \begin{aligned} d^{\prime} & =\left(\bar{c}_{0}+b_{1}\right)\left(c_{1}+b_{0}\right)\left(\left(\bar{c}_{0}+a_{1}\right)\left(c_{1}+a_{0}\right)+\left(b_{1}+a_{1}\right)\left(b_{0}+a_{0}\right)\right) \\ & \leq\left(b_{0}+b_{2}\right)\left(\left(a_{0}+a_{2}\right)\left(a_{1}+a_{2}\right)+\left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)\right) \\ & \leq\left(b_{0}+b_{2}\right)\left(\left(a_{0}+a_{2}\right)\left(a_{1}+a_{2}\right)+b_{2}\right) \quad \text { (by central persp.) } \\ & =\left(b_{0}+b_{2}\right)\left(a_{0}+a_{2}\right)\left(a_{1}+a_{2}\right)+b_{2}=c_{1}\left(a_{1}+a_{2}\right)+b_{2} \end{aligned} $$ which implies $$ \begin{aligned} a_{0}^{\prime}\left(a_{1}^{\prime}+d^{\prime}\right)+b_{0}^{\prime} & =\bar{c}_{0}\left(b_{1}+d^{\prime}\right)+c_{1} \\ & \leq\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}+c_{1}\left(a_{1}+a_{2}\right)\right)+c_{1} \\ & =\left(a_{1}+a_{2}\right)\left(b_{1}+b_{2}\right)+c_{1}\left(a_{1}+a_{2}\right)+c_{1}=c_{0}+c_{1} \end{aligned} $$ The first statement of this theorem appeared in Grätzer, Jónsson and Lakser [73], and the converse is due to Jónsson and Monk [69]. In [GLT] p.205 it is shown that for any Desarguesian projective plane $P$ the atoms of $\mathcal{L}(P)$ satisfy the Arguesian identity and that this implies that $\mathcal{L}(P)$ is Arguesian. 
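Since the Arguesian identity is a single lattice inequality in six variables, it can also be verified mechanically on small lattices. The following sketch (an added illustration, not part of the original text; the function name `arguesian` is ad hoc) checks it on $M_{3}$, the subspace lattice of a two-dimensional vector space over $Z_{2}$; being a subspace lattice, $M_{3}$ is linear and hence Arguesian by Jónsson's result on linear lattices mentioned below. Replacing $M_{3}$ by the pentagon $N_{5}$ makes the test fail, in accordance with Lemma 3.7 (ii), since the Arguesian identity implies modularity.

```python
from itertools import product

def arguesian(elems, join, meet):
    """Brute-force test of the Arguesian identity
    (a0+b0)(a1+b1)(a2+b2) <= a0(a1+d) + b0(b1+d)."""
    for a0, a1, a2, b0, b1, b2 in product(elems, repeat=6):
        c0 = meet(join(a1, a2), join(b1, b2))
        c1 = meet(join(a0, a2), join(b0, b2))
        c2 = meet(join(a0, a1), join(b0, b1))
        d = meet(c2, join(c0, c1))
        lhs = meet(meet(join(a0, b0), join(a1, b1)), join(a2, b2))
        rhs = join(meet(a0, join(a1, d)), meet(b0, join(b1, d)))
        if meet(lhs, rhs) != lhs:      # i.e. lhs <= rhs fails
            return False
    return True

# M3: bottom '0', atoms 'p', 'q', 'r', top '1' -- the subspace lattice of Z_2^2.
M3 = ['0', 'p', 'q', 'r', '1']
def join(x, y):
    if x == y or y == '0': return x
    if x == '0': return y
    return '1'
def meet(x, y):
    if x == y or y == '1': return x
    if x == '1': return y
    return '0'

print(arguesian(M3, join, meet))   # True
```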
Hence it follows from the preceding theorem that $P$ is Desarguesian if and only if $\mathcal{L}(P)$ satisfies (the generalized version of) Desargues' Law. Since modularity is characterized by the exclusion of the pentagon $N$, which is isomorphic to its dual, it follows that the class of all modular lattices $\mathcal{M}$ is self-dual (i.e. $M \in \mathcal{M}$ implies that the dual of $M$ is also in $\mathcal{M}$ ). The preceding theorem can be used to prove the corresponding result for the variety of all Arguesian lattices. Lemma 3.9 (Jónsson[72]). The variety of all Arguesian lattices is self-dual. Proof. For modular lattices the Arguesian identity is equivalent to Desargues' Law by Theorem 3.8. Let $L$ be an Arguesian lattice and denote its dual by $\bar{L}$. Lemma 3.7 (ii) implies that $L$ is modular, and by the above remark, so is $\bar{L}$. We show that the dual of Desargues' Law holds in $L$, i.e. for all $x_{0}, x_{1}, x_{2}, y_{0}, y_{1}, y_{2} \in L$ $$ (*) \quad x_{0} y_{0}+x_{1} y_{1} \geq x_{2} y_{2} $$ implies that $$ (* *) \quad x_{0} x_{1}+y_{0} y_{1} \geq\left(x_{1} x_{2}+y_{1} y_{2}\right)\left(x_{0} x_{2}+y_{0} y_{2}\right) . $$ Then $\bar{L}$ satisfies Desargues' Law and is therefore Arguesian. Assume ( $*$ ) holds, and let $a_{0}=x_{0} x_{2}, a_{1}=y_{0} y_{2}, a_{2}=x_{0} y_{0}, b_{0}=x_{1} x_{2}, b_{1}=y_{1} y_{2}$, $b_{2}=x_{1} y_{1}$ and $c_{k}=\left(a_{i}+a_{j}\right)\left(b_{i}+b_{j}\right)(\{i, j, k\}=\{0,1,2\})$. Then $$ \left(a_{0}+b_{0}\right)\left(a_{1}+b_{1}\right)=\left(x_{0} x_{2}+x_{1} x_{2}\right)\left(y_{0} y_{2}+y_{1} y_{2}\right) \leq x_{2} y_{2} \leq a_{2}+b_{2} $$ by (*), so it follows from Desargues' Law that $c_{2} \leq c_{0}+c_{1}$. But $c_{0} \leq y_{0} y_{1}, c_{1} \leq x_{0} x_{1}$ and $c_{2}$ equals the right hand side of $(* *)$. Therefore $(* *)$ is satisfied. So far we have only considered the most basic properties of Arguesian lattices. Extensive research has been done on these lattices, and many important results have been obtained in recent years. We mention some of the results now. Recall that the collection of all equivalence relations (partitions) on a fixed set form an algebraic lattice, with intersection as meet. If two equivalence relations permute with each other under the operation of composition then their join is simply the composite relation. A lattice is said to be linear if it can be embedded in a lattice of equivalence relations in such a way that any pair of elements is mapped to a pair of permuting equivalence relations. (These lattices are also referred to as lattices that have a type 1 representation, see [GLT] p.198). An example of a linear lattice is the lattice of all normal subgroups of a group (since groups have permutable congruences), and similar considerations apply to the "subobject" lattices associated with rings, modules and vectorspaces. Jónsson [53] showed that any linear lattice is Arguesian, and posed the problem whether the converse also holds. A recent example of Haiman [86] shows that this is not the case, i.e. there exist Arguesian lattices which are not linear. Most of the modular lattices which have been studied are actually Arguesian. The question how a modular lattice fails to be Arguesian is investigated in Day and Jónsson [89]. Pickering [84] [a] proves that there is a non-Arguesian, modular variety of lattices, all of whose members of finite length are Arguesian. 
This result shows that Arguesian lattices cannot be characterized by the exclusion of a finite list of lattices or even infinitely many lattices of finite length. For reasons of space the details of these results are not included here. The cardinality of $\Lambda_{\mathcal{M}}$. In this section we discuss the result of Baker [69] which shows that there are uncountably many modular varieties. We begin with a simple observation about finite dimensional modular lattices. Lemma 3.10 Let $L$ and $M$ be two modular lattices, both of dimension $n<\omega$. If a map $f: L \hookrightarrow M$ is one-one and order-preserving then $f$ is an embedding. Proof. We have to show that $$ (*) \quad f(x+y)=f(x)+f(y) \quad \text { and } \quad f(x y)=f(x) f(y) . $$ Since $f$ is assumed to be order-preserving, $f(x+y) \geq f(x)+f(y), f(x y) \leq f(x) f(y)$ and equality holds if $x$ is comparable with $y$. For $x, y$ noncomparable we use induction on the length of the quotient $x+y / x y$. Observe firstly that if $u \succ v$ in $L$ then $u, v$ are successive elements in some maximal chain of $L$, and since $M$ has the same dimension as $L$ and $f$ is one-one it follows that $f(u) \succ f(v)$. If the length of $x+y / x y$ is 2 , then $x$ and $y$ cover $x y$ and $x, y$ are both covered by $x+y$, so $(*)$ holds in this case. Now suppose the length of $x+y / x y$ is $n>2$ and $(*)$ holds for all quotients of length $<n$. Then either $x+y / x$ or $x / x y$ has length $\geq 2$. By modularity $x / x y \cong x+y / y$, and by symmetry we can assume that there exists $x^{\prime}$ such that $x<x^{\prime}<x+y$. The quotients $x^{\prime}+y / x^{\prime} y$ and $x+x^{\prime} y / x\left(x^{\prime} y\right)=x^{\prime} / x y$ have length $<n$, hence $$ f\left(x^{\prime}\right)=f\left(x+x^{\prime} y\right)=f(x)+f\left(x^{\prime}\right) f(y)=(f(x)+f(y)) f\left(x^{\prime}\right) . $$ It follows that $f\left(x^{\prime}\right) \leq f(x)+f(y)$ and so $f(x+y)=f\left(x^{\prime}+y\right)=f\left(x^{\prime}\right)+f(y) \leq f(x)+f(y)$. Similarly $f(x y) \geq f(x) f(y)$. Let $P$ be a finite partially ordered set and define $\mathrm{N}(P)$ to be the class of all lattices that do not contain a subset order-isomorphic to $P$. For example if $\mathbf{5}$ is the linearly ordered set $\{0,1,2,3,4\}$ then $\mathbf{N}(\mathbf{5})$ is the class of all lattices of length $\leq 4$. LemMa 3.11 For any finite partially ordered set $P$ (i) $\mathbf{N}(P)$ is closed under ultraproducts, sublattices and homomorphic images; (ii) any subdirectly irreducible lattice in the variety $\mathrm{N}(P)^{\mathcal{V}}$ is a member of $\mathrm{N}(P)$. Proof. (i) The property of not containing a finite partially ordered set can be expressed as a first-order sentence and is therefore preserved under ultraproducts. If $L$ is a lattice and a sublattice of $L$ contains a copy of $P$, then of course so does $L$. Finally, if a homomorphic image of $L$ contains $P$ then for each minimal $p \in P$ choose an inverse image $\bar{p} \in L$, and thereafter choose an inverse image $\bar{q}$ of each $q \in P$ covering a minimal element in $P$ such that $\bar{q} \geq \sum\{\bar{p}: p \leq q, p \in P\}$. Proceeding in this way one obtains a copy of $P$ in $L$. (ii) This is an immediate consequence of Corollary 1.5 . ThEorem 3.12 (Baker [69]). There are uncountably many modular lattice varieties. Proof. Let $\Pi$ be the set of all prime numbers, and for each $p \in \Pi$ denote by $F_{p}$ the $p$-element Galois field. 
Let $L_{p}=\mathcal{L}\left(F_{p}^{3}, F_{p}\right)$ and observe that each $L_{p}$ is a finite subdirectly irreducible lattice since it is the subspace lattice of a finite nondegenerate projective space. We also let $\mathcal{A}$ be the class of all Arguesian lattices of length $\leq 4$. Now define a map $f$ from the set of all subsets of $\Pi$ to $\Lambda_{\mathcal{M}}$ by $$ f(S)=\mathcal{A}^{\mathcal{V}} \cap \bigcap\left\{\mathrm{N}\left(L_{q}\right)^{\mathcal{V}}: q \notin S\right\} . $$ We claim that $f$ is one-one. Suppose $S, T \subseteq \Pi$ and $p \in S-T$. Then $f(T) \subseteq \mathrm{N}\left(L_{p}\right)^{\mathcal{V}}$ and since $L_{p} \notin \mathrm{N}\left(L_{p}\right)$ and $L_{p}$ is subdirectly irreducible it follows from the preceding lemma that $L_{p} \notin f(T)$. On the other hand we must have $L_{p} \in f(S)$ since $L_{p} \notin \mathrm{N}\left(L_{q}\right)$ for some $q \notin S$ would imply that $L_{p}$ contains a subset order-isomorphic to $L_{q}$. By Lemma $3.10 L_{q}$ is actually a sublattice of $L_{p}$ and it follows from Lemma 3.6 that $F_{q}$ is a subfield of $F_{p}$. This however is impossible since $q \neq p \in S$. By a more detailed argument one can show that the map $f$ above is in fact a lattice embedding, from which it follows that $\Lambda_{\mathcal{M}}$ contains a copy of $\mathbf{2}^{\omega}$ as a sublattice. ## 3 n-Frames and Freese's Theorem Products of projective modular lattices. By a projective modular lattice we mean a lattice which is projective in the variety of all modular lattices. LemMa 3.13 (Freese[76]). If $A$ and $B$ are projective modular lattices with greatest and least element then $A \times B$ is a projective modular lattice. Proof. Let $f$ be a homomorphism from a free modular lattice $F$ onto $A \times B$, and choose elements $u, v \in F$ such that $f(u)=\left(1_{A}, 0_{B}\right)$ and $f(v)=\left(0_{A}, 1_{B}\right)$. By Lemma 2.9 it suffices to produce an embedding $g: A \times B \hookrightarrow F$ such that $f g$ is the identity on $A \times B$. Clearly $f$ followed by the projection $\pi_{A}$ onto the first coordinate maps the quotient $u / u v$ onto $A$. Assuming that $A$ is projective modular, there exists an embedding $g_{A}$ : $A \hookrightarrow u / u v$ such that $\pi_{A} f g_{A}$ is the identity on $A$. Similarly, if $B$ is projective modular, there exists an embedding $g_{B}: B \hookrightarrow v / u v$ such that $\pi_{B} f g_{B}=\mathrm{id}_{B}$. Define $g$ by $g(a, b)=$ $g_{A}(a)+g_{B}(b)$ for all $(a, b) \in A \times B$. Then $g$ is join preserving, and clearly $f g$ is the identity on $A \times B$. To see that $g$ is also meet preserving, observe that by the modularity of $F$ $$ g(a, b)=g_{A}(a)+v u+g_{B}(b)=\left(g_{A}(a)+v\right) u+g_{B}(b)=\left(g_{A}(a)+v\right)\left(u+g_{B}(b)\right) . $$ Hence $$ \begin{aligned} g(a, b) g(c, d) & =\left(g_{A}(a)+v\right)\left(u+g_{B}(b)\right)\left(g_{A}(c)+v\right)\left(u+g_{B}(d)\right) \\ & =\left(g_{A}(a) g_{A}(c)+v\right)\left(u+g_{B}(b) g_{B}(d)\right)=g(a c, b d), \end{aligned} $$ where the middle equality follows from the fact that in a modular lattice the map $t \mapsto t+v$ is an isomorphism from $u / u v$ to $u+v / v$. Von Neumann $n$-frames. Let $\left\{a_{i}: i=1, \ldots, n\right\}$ and $\left\{c_{1 j}: j=2, \ldots, n\right\}$ be subsets of a modular lattice $L$ for some finite $n \geq 2$. 
We say that $\phi=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame in $L$ if the sublattice of $L$ generated by the $a_{i}$ is a Boolean algebra $2^{n}$ with atoms $a_{1}, \ldots, a_{n}$, and for each $j=2, \ldots, n$ the elements $a_{1}, c_{1 j}, a_{j}$ generate a diamond in $L$ (i.e. $a_{1}+c_{1 j}=a_{j}+c_{1 j}=a_{1}+a_{j}$ and $a_{1} c_{1 j}=a_{j} c_{1 j}=a_{1} a_{j}$). The top and bottom element of the Boolean algebra are denoted by $0_{\phi}\left(=a_{1} a_{2}\right)$ and $1_{\phi}\left(=\sum_{i=1}^{n} a_{i}\right)$ respectively, but they need not equal the top and bottom of $L$ (denoted by $0_{L}$ and $1_{L}$). If they do, then $\phi$ is called a spanning $n$-frame.

If the elements $a_{1}, \ldots, a_{n} \in L$ are the atoms of a sublattice isomorphic to $2^{n}$, then they are said to be independent over $0=a_{1} a_{2}$. If $L$ is modular this is equivalent to the conditions $a_{i} \neq 0$ and $a_{i} \sum_{j \neq i} a_{j}=0$ for all $i=1, \ldots, n$ (see [GLT] p.167). The index 1 in $c_{1 j}$ indicates that an $n$-frame determines further elements $c_{i j}$ for distinct $i, j \neq 1$ as follows: let $c_{j 1}=c_{1 j}$ and
$$
c_{i j}=\left(a_{i}+a_{j}\right)\left(c_{i 1}+c_{1 j}\right) .
$$
These elements fit nicely into the $n$-frame, as is shown by the next lemma.

Lemma 3.14 Let $\phi=\left(a_{i}, c_{1 j}\right)$ be an $n$-frame in a modular lattice and suppose $c_{i j}$ is defined as above. Then, for distinct $i, j \in\{1, \ldots, n\}$

(i) $a_{i}+c_{i j}=a_{i}+a_{j}=c_{i j}+a_{j}$;

(ii) $c_{i j} \sum_{r \neq j} a_{r}=0_{\phi}$;

(iii) $a_{i} \sum_{r \neq k} c_{k r}=0_{\phi}$ for any fixed index $k$;

(iv) $a_{i}, c_{i j}, a_{j}$ generate a diamond;

(v) $c_{i j}=\left(a_{i}+a_{j}\right)\left(c_{i k}+c_{k j}\right)$ for any $k$ distinct from $i, j$.

Proof. (i) Using modularity and the $n$-frame relations, we compute
$$
\begin{aligned}
a_{i}+c_{i j} & =a_{i}+\left(a_{i}+a_{j}\right)\left(c_{i 1}+c_{1 j}\right) \\
& =\left(a_{i}+a_{j}\right)\left(a_{i}+c_{i 1}+c_{1 j}\right) \\
& =\left(a_{i}+a_{j}\right)\left(a_{i}+a_{1}+a_{j}\right)=a_{i}+a_{j} .
\end{aligned}
$$
The second part follows by symmetry.

(ii) We first show that $c_{i j} \sum_{r \neq j} a_{r} \leq a_{i}$.
$$
\begin{aligned}
a_{i}+c_{i j} \sum_{r \neq j} a_{r} & =\left(a_{i}+c_{i j}\right) \sum_{r \neq j} a_{r} \quad \text { by modularity since } i \neq j \\
& =\left(a_{i}+a_{j}\right) \sum_{r \neq j} a_{r}=a_{i} \quad \text { since the } a_{i} \text { 's generate } 2^{n} .
\end{aligned}
$$
Hence if $i=1$ then $0_{\phi} \leq c_{1 j} \sum_{r \neq j} a_{r} \leq c_{1 j} a_{1}=0_{\phi}$. The general case will follow in the same way once we have proved (iv).

(iii) We first fix $i=k=1$ and show that $a_{1} \sum_{r=2}^{m} c_{1 r} \leq \sum_{r=2}^{m-1} c_{1 r}$ for $3 \leq m \leq n$.
$$
\begin{aligned}
a_{1} \sum_{r=2}^{m} c_{1 r}+\sum_{r=2}^{m-1} c_{1 r} & =\left(c_{12}+\ldots+c_{1 m}\right)\left(a_{1}+c_{12}+\ldots+c_{1 m-1}\right) \\
& =\left(c_{12}+\ldots+c_{1 m}\right)\left(a_{1}+a_{2}+\ldots+a_{m-1}\right) \\
& =c_{12}+\ldots+c_{1 m-1}+c_{1 m} \sum_{r=1}^{m-1} a_{r} \\
& =c_{12}+\ldots+c_{1 m-1}+0_{\phi} \quad \text { by part (ii). }
\end{aligned}
$$
Thus $0_{\phi} \leq a_{1} \sum_{r=2}^{n} c_{1 r} \leq a_{1} \sum_{r=2}^{n-1} c_{1 r} \leq \ldots \leq a_{1} c_{12}=0_{\phi}$. Let $e=\sum_{r=2}^{n} c_{1 r}$ and suppose $i \neq 1$. Then $c_{1 i}+a_{i} e=\left(c_{1 i}+a_{i}\right) e=\left(c_{1 i}+a_{1}\right) e=c_{1 i}+a_{1} e=c_{1 i}+0_{\phi}$, so $a_{i} e \leq a_{i} c_{1 i}=0_{\phi}$. Hence (iii) holds for $k=1$ and any $i$.
Now (iv) follows from (i) and the calculation $$ 0_{\phi} \leq a_{i} c_{i j}=a_{i}\left(a_{i}+a_{j}\right)\left(c_{i 1}+c_{i j}\right)=a_{i}\left(c_{1 i}+c_{1 j}\right) \leq a_{i} e=0_{\phi} . $$ Therefore (ii) holds in general. Using this one can show in the same way as for $k=1$, that $a_{k} \sum_{r=k+1}^{m} c_{k r} \leq \sum_{r=k+1}^{m-1} c_{k r}$ for $k+1 \leq m \leq n$ and, letting $c^{\prime}=\sum_{r=k+1}^{n} c_{k r}$, $a_{k}\left(c^{\prime}+\sum_{r=1}^{m} c_{k r}\right) \leq c^{\prime}+\sum_{r=1}^{m-1} c_{r k}$ for $1 \leq m \leq k-1$. Assuming $i \neq k$, let $e=\sum_{r \neq k} c_{r k}$. Then one shows as before that $c_{k i}+a_{i} e=c_{k i}$, whence $a_{i} e=0_{\phi}$. Thus (iii) holds in general. For $k=1(\mathrm{v})$ holds by definition. Suppose $i=1 \neq j, k$. $$ \begin{aligned} & \left(a_{1}+a_{j}\right)\left(c_{1 k}+c_{k j}\right)=\left(a_{1}+a_{j}\right)\left(c_{1 k}+\left(a_{k}+a_{j}\right)\left(c_{k 1}+c_{1 j}\right)\right) \\ & =\left(a_{1}+a_{j}\right)\left(c_{1 k}+a_{k}+a_{j}\right)\left(c_{k 1}+c_{1 j}\right) \\ & =\left(a_{1}+a_{j}\right)\left(a_{1}+a_{k}+a_{j}\right) c_{k 1}+c_{1 j}=0_{\phi}+c_{1 j} \text { by (ii). } \end{aligned} $$ The case $j=1 \neq i, k$ is handled similarly. Finally suppose that $i, j, k, 1$ are all distinct and note that $c_{i k} \leq a_{i}+a_{j}+a_{k}$. $$ \begin{aligned} \left(a_{i}+a_{j}\right)\left(c_{i k}+c_{k j}\right) & =\left(a_{i}+a_{j}\right)\left(\left(a_{i}+a_{j}+a_{k}\right)\left(c_{i 1}+c_{1 k}\right)+c_{k j}\right) \\ & =\left(a_{i}+a_{j}\right)\left(c_{i 1}+c_{1 k}+c_{k j}\right) \\ & =\left(a_{i}+a_{j}\right)\left(c_{i 1}+\left(a_{1}+a_{j}\right)\left(c_{1 k}+c_{k j}\right)\right) \\ & =\left(a_{i}+a_{j}\right)\left(c_{i 1}+c_{1 j}\right)=c_{i j} \end{aligned} $$ A concept equivalent to that of an $n$-frame is the following: A modular lattice $L$ contains an $n$-diamond $\delta=\left(a_{1}, \ldots, a_{n}, e\right)$ if the $a_{i}$ are independent over $0_{\delta}=a_{1} a_{2}$ and $e$ is a relative complement of each $a_{i}$ in $1_{\delta} / 0_{\delta}\left(1_{\delta}=\sum_{j=1}^{n} a_{j}\right)$. The concept of an $n$-diamond is due to Huhn [72] (he referred to it as an $(n-1)$-diamond). Note that although $e$ seems to be a special element relative to the $a_{i}$, this is not really true since any $n$ elements of the set $\left\{a_{1}, \ldots, a_{n}, e\right\}$ are independent, and the remaining element is a relative complement of all the others. LeMma 3.15 Let $\delta=\left(a_{i}, e\right)$ be an $n$-diamond and define $c_{1 j}=e\left(a_{1}+a_{j}\right)$, then $\phi_{\delta}=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame. Conversely, if $\phi=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame and $e=\sum_{j=2}^{n} c_{1 j}$ then $\delta_{\phi}=\left(a_{i}, e\right)$ is an $n$-diamond. Furthermore $\phi_{\delta_{\phi}}=\phi$ and $\delta_{\phi_{\delta}}=\delta$. Proof. Since $e$ is a relative complement for each $a_{i}$ in $1_{\delta} / 0_{\delta}, a_{1} e\left(a_{1}+a_{j}\right)=0_{\delta}=$ $a_{j} e\left(a_{1}+a_{j}\right)$ and $$ a_{1}+e\left(a_{1}+a_{j}\right)=\left(a_{1}+e\right)\left(a_{1}+a_{j}\right)=1\left(a_{1}+a_{j}\right)=a_{1}+a_{j}=a_{j}+e\left(a_{1}+a_{j}\right), $$ so $\phi_{\delta}=\left(a_{i}, e\left(a_{1}+a_{j}\right)\right)$ is an $n$-frame. 
Conversely, if $e=\sum_{j=2}^{n} c_{1 j}$ then $a_{i} e=0_{\phi}$ by Lemma 3.14 (iii) and
$$
\begin{aligned}
a_{i}+e & =c_{12}+\ldots+a_{i}+c_{1 i}+\ldots+c_{1 n} \\
& =c_{12}+\ldots+a_{i}+a_{1}+\ldots+c_{1 n} \\
& =a_{1}+\ldots+a_{n}=1
\end{aligned}
$$
Hence $\delta_{\phi}=\left(a_{i}, e\right)$ is an $n$-diamond. Also $e\left(a_{1}+a_{j}\right)=c_{1 j}+\left(\sum_{r \neq j} c_{1 r}\right)\left(a_{1}+a_{j}\right)=c_{1 j}$ since $\left(\sum_{r \neq j} c_{1 r}\right)\left(a_{1}+a_{j}\right)=0_{\phi}$ can be proved similarly to Lemma 3.14 (iii). Finally, if $\delta=\left(a_{i}, e\right)$ is any $n$-diamond, and we let $e^{\prime}=\sum_{j=2}^{n} c_{1 j}$, then $e^{\prime} \leq e$ and in fact $e^{\prime}=e^{\prime}+\left(a_{1}+a_{2}\right) e=\left(e^{\prime}+a_{1}+a_{2}\right) e=1_{\delta} e=e$.

Lemma 3.16 (Freese [76]). Suppose $\beta=\left(a_{i}, e\right)$ is an $(n+1)$-tuple of elements of a modular lattice such that the $a_{i}$ form an independent set over $0_{\beta}=a_{1} a_{2}$, $0_{\beta} \leq e \leq 1_{\beta}=\sum_{i=1}^{n} a_{i}$ and $e$ is incomparable with each $a_{i}$. Define ($i$ ranges over $1, \ldots, n$)
$$
\begin{gathered}
b=\sum a_{i} e \leq e \leq c=\prod\left(a_{i}+e\right) \\
d=\sum\left(a_{i}+b\right) c=b+\sum a_{i} c=\sum a_{i} c \quad \text { and } \\
\beta^{*}=\left(a_{i}+b, e\right), \quad \beta_{*}=\left(a_{i} c, e d\right), \quad {\beta^{*}}_{*}=\left(\left(a_{i}+b\right) c, e d\right) .
\end{gathered}
$$

(i) If $a_{i}+e=1$ for all $i$ then $\beta^{*}$ is an $n$-diamond in $1 / b$.

(ii) If $a_{i} e=0_{\beta} \neq a_{i} c$ for all $i$ then $\beta_{*}$ is an $n$-diamond in $d / 0_{\beta}$.

(iii) If $b \neq\left(a_{i}+b\right) c$ for all $i$ then ${\beta^{*}}_{*}$ is an $n$-diamond in $d / b$.

Proof. (i) Since $b \leq e$ and $a_{i} \not \leq e$ we have $a_{i}+b \neq b$ for all $i$. The following calculation shows that the $a_{i}+b$ are independent over $b$:
$$
\begin{aligned}
\left(a_{i}+b\right) \sum_{j \neq i}\left(a_{j}+b\right) & =b+a_{i}\left(\sum_{j \neq i} a_{j}+\sum_{k=1}^{n} a_{k} e\right) \\
& =b+a_{i}\left(\sum_{j \neq i} a_{j}+a_{i} e\right) \\
& =b+a_{i} \sum_{j \neq i} a_{j}+a_{i} e \\
& =b+0_{\beta}+a_{i} e=b .
\end{aligned}
$$
Furthermore $\left(a_{i}+b\right)+e=a_{i}+e=1$ by assumption, and $\left(a_{i}+b\right) e=b+a_{i} e=b$.

(ii) Since $a_{i} c \neq 0_{\beta}$ and $0_{\beta} \leq a_{i} c \sum_{j \neq i} a_{j} c \leq a_{i} \sum_{j \neq i} a_{j}=0_{\beta}$, the $a_{i} c$ are independent over $0_{\beta}$. Also $e \leq c$ and $a_{i} e \leq a_{i} c \leq d$ imply $\left(a_{i} c\right)(e d)=a_{i} e d=a_{i} e=0_{\beta}$ by assumption, and $a_{i} c+e d=\left(a_{i} c+e\right) d=c\left(a_{i}+e\right) d=c d=d$.

Now (iii) follows from (i) and (ii).

Suppose $M$ and $L$ are two modular lattices and $f$ is a homomorphism from $M$ to $L$. If $\phi=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame in $M$ and the elements $f\left(a_{i}\right), f\left(c_{1 j}\right)$ are all distinct, then $\left(f\left(a_{i}\right), f\left(c_{1 j}\right)\right)$ is an $n$-frame in $L$ (since the diamonds generated by $a_{1}, c_{1 i}, a_{i}$ are simple lattices). Risking a slight abuse of notation, we will denote this $n$-frame by $f(\phi)$. Of course similar considerations apply to $n$-diamonds. The next result shows that $n$-diamonds (and hence $n$-frames) can be "pulled back" along epimorphisms.

Corollary 3.17 (Huhn [72], Freese [76]). Let $M$ and $L$ be modular lattices and let $f: M \rightarrow L$ be an epimorphism.
If $\delta=\left(a_{i}, e\right)$ is an $n$-diamond in $L$ then there is an $n$-diamond $\hat{\delta}=\left(\hat{a}_{i}, \hat{e}\right)$ in $M$ such that $f(\hat{\delta})=\delta$.

Proof. It follows from Lemma 3.13 that $2^{n}$ is a projective modular lattice, so we can find $\bar{a}_{1}, \ldots, \bar{a}_{n} \in M$ such that $f\left(\bar{a}_{i}\right)=a_{i}$ and the $\bar{a}_{i}$ are independent over $\bar{a}_{1} \bar{a}_{2}$. Choose $\bar{e} \in f^{-1}\{e\}$ such that $\bar{a}_{1} \bar{a}_{2} \leq \bar{e} \leq \sum_{i=1}^{n} \bar{a}_{i}$ and let $\beta=\left(\bar{a}_{i}, \bar{e}\right)$. Since $\delta$ is an $n$-diamond, each $\bar{a}_{i}$ is incomparable with $\bar{e}$. Defining $\bar{b}, \bar{c}, \bar{d}$ in the same way as $b, c, d$ in the previous lemma, we see that $f(\bar{b})=0_{\delta}$, $f(\bar{c})=1_{\delta}$ and $f\left(\left(\bar{a}_{i}+\bar{b}\right) \bar{c}\right)=a_{i}$. Therefore $\bar{b} \neq\left(\bar{a}_{i}+\bar{b}\right) \bar{c}$, whence $\hat{\delta}=\beta^{*}{ }_{*}$ is the required $n$-diamond.

Lemma 3.18 (Herrmann and Huhn [76]). Let $\phi=\left(a_{i}, c_{1 j}\right)$ be an $n$-frame in a modular lattice $L$ and let $u_{1} \in L$ satisfy $0_{\phi} \leq u_{1} \leq a_{1}$. Define $u_{i}=a_{i}\left(u_{1}+c_{1 i}\right)$ for $i \neq 1$ and $u=\sum_{i=1}^{n} u_{i}$. Then $\phi^{u}=\left(u+a_{i}, u+c_{1 j}\right)$ and $\phi_{u}=\left(u a_{i}, u c_{1 j}\right)$ are $n$-frames in $1_{\phi} / u$ and $u / 0_{\phi}$ respectively.

A proof of this result can be found in Freese [79]. We think of $\phi^{u}$ ($\phi_{u}$) as being obtained from $\phi$ by a reduction over (under) $u$.

The canonical $n$-frame. The following example shows that $n$-frames occur naturally in the study of $R$-modules: Let $\left(R,+,-, \cdot, 0_{R}, 1_{R}\right)$ be a ring with unit, and let $\mathcal{L}\left(R^{n}, R\right)$ be the lattice of all (left-) submodules of the (left-) $R$-module $R^{n}$. We denote the canonical basis of $R^{n}$ by $e_{1}, \ldots, e_{n}$ (i.e. $e_{i}=\left(0_{R}, \ldots, 0_{R}, 1_{R}, 0_{R}, \ldots, 0_{R}\right)$ with the $1_{R}$ in the $i$th position), and let
$$
\begin{aligned}
a_{i} & =R e_{i}=\left\{r e_{i}: r \in R\right\} \\
c_{i j} & =R\left(e_{i}-e_{j}\right) \quad i, j=1, \ldots, n \quad i \neq j .
\end{aligned}
$$
Then it is not difficult to check that $\mathcal{L}\left(R^{n}, R\right)$ is a modular lattice and that $\phi_{R}=\left(a_{i}, c_{1 j}\right)$ is a (spanning) $n$-frame in $\mathcal{L}\left(R^{n}, R\right)$, referred to as the canonical $n$-frame of $\mathcal{L}\left(R^{n}, R\right)$.

Definition of the auxillary ring. Let $L$ be a modular lattice containing an $n$-frame $\phi=\left(a_{i}, c_{1 j}\right)$ for some $n \geq 3$. We define an auxillary ring $R_{\phi}$ associated with the frame $\phi$ as follows:
$$
R_{\phi}=R_{12}=\left\{x \in L: x a_{2}=a_{1} a_{2} \text { and } x+a_{2}=a_{1}+a_{2}\right\}
$$
and for some $k \in\{3, \ldots, n\}$, $x, y \in R_{\phi}$
$$
\begin{aligned}
\pi(x) & =\left(x+c_{1 k}\right)\left(a_{2}+a_{k}\right), \quad \pi^{\prime}(x)=\left(x+c_{2 k}\right)\left(a_{1}+a_{k}\right) \\
x \oplus y & =\left(a_{1}+a_{2}\right)\left[\left(x+a_{k}\right)\left(c_{1 k}+a_{2}\right)+\pi(y)\right] \\
x \ominus y & =\left(a_{1}+a_{2}\right)\left[a_{k}+\left(c_{2 k}+x\right)\left(a_{2}+\pi^{\prime}(y)\right)\right] \\
x \odot y & =\left(a_{1}+a_{2}\right)\left[\pi(x)+\pi^{\prime}(y)\right] \\
0_{R} & =a_{1}, \quad 1_{R}=c_{12} .
\end{aligned} $$ Theorem 3.19 If $n \geq 4$, or $L$ is an Arguesian lattice and $n=3$, then $\left(R_{\phi}, \oplus, \ominus, \odot, 0_{R}, 1_{R}\right)$ is a ring with unit, and the operations are independent of the choice of $k$. This theorem is due to von Neumann [60] for $n \geq 4$ and Day and Pickering [83] for $n=3$. The presentation here is derived from Herrmann [84], where the theorem is stated without proof in a similar form. The proof is long, as many properties have to be checked, and will be omitted here as well. The theorem however is fundamental to the study of modular lattices. It is interesting to compare the definition of $R$ with the definition $D$ in the classical coordinatization theorem for projective spaces ( $[G L T]$ p.209). The element $a_{2}$ corresponds to the point at infinity, and the operations of addition and multiplication are defined in the same way. There is nothing special about the indices 1 and 2 in the definition of $R_{\phi}=R_{12}$. We can replace them throughout by distinct indices $i$ and $j$ to obtain isomorphic rings $R_{i j}$. For example the isomorphism between $R_{12}$ and $R_{1 j}(j \neq 1,2)$ is induced by the projectivity $$ R_{12} \subseteq a_{1}+a_{2} / 0 \nearrow a_{1}+a_{2}+c_{2 j} / c_{2 j} \searrow a_{1}+a_{j} / 0 \supseteq R_{1 j} $$ (Since in a modular lattice every transposition is bijective, it only remains to show that this induced map preserves the respective operations. For readers more familiar with von Neumann's $L$-numbers, we note that they are $n(n-1)$ - tuples of elements $\beta_{i j} \in R_{i j}$, which correspond to each other under the above isomorphisms.) Coordinatization of complemented modular lattices. The auxillary ring construction is actually part of the von Neumann coordinatization theorem, which we will not use, but mention here briefly (for more detail, the reader is referred to von Neumann [60]). Theorem 3.20 Let $L$ be a complemented modular lattice containing a spanning $n$-frame $(4 \leq n \in \omega)$ and let $R$ be the auxillary ring. Then $L$ is isomorphic to the lattice $\mathcal{L}_{f}\left(R^{n}, R\right)$ of all finitely generated submodules of the $R$-module $R^{n}$. Notice that if $R$ happens to be a division ring $D$, then $D^{n}$ will be a vector space over $D$, and $\mathcal{L}_{f}\left(D^{n}, D\right) \cong \mathcal{L}\left(D^{n}, D\right)$. Hence the above theorem extends the coordinatization of (finite dimensional) projective spaces to arbitrary complemented modular lattices containing a spanning $n$-frame $(n \geq 4)$. Moreover, Jónsson [59] [60'] showed that if $L$ is a complemented Arguesian lattice, then the above theorem also holds for $n=3$. Further generalizations to wider classes of modular lattices appear in Baer [52], Inaba [48], Jónsson and Monk [69], Day and Pickering [83] and Herrmann [84]. Characteristic of an $n$-frame. Recall that the characteristic of a ring $R$ with unit $1_{R}$ is the least number $r=\operatorname{char} R$ such that adding $1_{R}$ to itself $r$ times equals $0_{R}$. If no such $r$ exists, then $\operatorname{char} R=0$. We define a related concept for $n$-frames as follows: Let $\phi=\left(a_{i}, c_{1 j}\right)$ be an $n$-frame in some modular lattice $L(n \geq 3)$, and choose $k \in$ $\{3, \ldots, n\}$. 
The projectivity $$ a_{1}+a_{2} / 0 \nearrow a_{1}+a_{2}+a_{k} / a_{k} \searrow c_{1 k}+a_{2} / 0 \nearrow a_{1}+a_{2}+a_{k} / c_{2 k} \searrow a_{1}+a_{2} / 0 $$ induces an automorphism $\alpha_{\phi}$ on the quotient $a_{1}+a_{2} / 0$ given by $$ \alpha_{\phi}(x)=\left(\left(x+a_{k}\right)\left(c_{1 k}+a_{2}\right)+c_{2 k}\right)\left(a_{1}+a_{2}\right) . $$ Let $r$ be a natural number and denote by $\alpha_{\phi}^{r}$ the automorphism $\alpha_{\phi}$ iterated $r$ times. We say that $\phi$ is an $n$-frame of characteristic $r$ if $\alpha_{\phi}^{r}\left(a_{1}\right)=a_{1}$. LEMMA 3.21 Suppose $\phi=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame of characteristic $r$, and $R$ is the auxillary ring of $\phi$. Then the characteristic of $R$ divides $r$. Proof. By definition $0_{R}=a_{1}$ and $1_{R}=c_{12}$. From the definition of $x \oplus y$ we see that for $x \in R, \alpha_{\phi}(x)=x \oplus 1_{R}$ (since $\left.\pi\left(1_{R}\right)=\left(c_{21}+c_{1 k}\right)\left(a_{2}+a_{k}\right)=c_{2 k}\right)$. This also shows that $\alpha_{\phi} \mid R$ is independent of the choice of $k$. The condition $\alpha_{\phi}^{r}\left(a_{1}\right)=a_{1}$ therefore implies $$ 0_{R} \oplus \overbrace{1_{R} \oplus 1_{R} \oplus \ldots \oplus 1_{R}}^{r \text { terms }}=0_{R} $$ whence the result follows. The next result shows that the automorphism $\alpha_{\phi}$ is compatible with the operation of reducing an $n$-frame. For a proof the reader is referred to the original paper. Lemma 3.22 (Freese [79]). Let $\phi, u, \phi^{u}$ and $\phi_{u}$ be as in Lemma 3.18. If $x \in a_{1}+a_{2} / 0_{\phi}$ then (i) $x+u \in a_{1}+a_{2}+u / u$ and $\alpha_{\phi^{u}}(x+u)=\alpha_{\phi}(x)+u$; (ii) $x u \in a_{1} u+a_{2} u / 0_{\phi}$ and $\alpha_{\phi_{u}}(x u)=\alpha_{\phi}(x) u$. It follows that if $\phi$ is an $n$-frame of characteristic $r$, then so are $\phi^{u}$ and $\phi_{u}$. The lemma below shows how one can obtain an $n$-frame of any given characteristic. LEMMA 3.23 (Freese [79]). Let $\phi=\left(a_{i}, c_{1 j}\right)$ be an $n$-frame and $r$ any natural number. If we define $a=\alpha_{\phi}^{r}\left(a_{1}\right), u_{2}=a_{2}\left(a+a_{1}\right), u_{1}=a_{1}\left(u_{2}+c_{12}\right), u_{i}=a_{i}\left(u_{1}+c_{1 i}\right)$ and $u=\sum_{i=1}^{n} u_{i}$ then $\phi^{u}$ is an $n$-frame of characteristic $r$. Proof. Note that $u_{2}$, defined as above, agrees with the definition of $u_{2}$ in Lemma 3.18 since $$ \begin{aligned} a_{2}\left(u_{1}+c_{12}\right) & =a_{2}\left(a_{1}\left(u_{2}+c_{12}\right)+c_{12}\right) \\ & =a_{2}\left(u_{1}+c_{12}\right)\left(u_{2}+c_{12}\right) \\ & =a_{2}\left(u_{2}+c_{12}\right)=a_{2} c_{12}+u_{2}=u_{2} . \end{aligned} $$ Let $R$ be the auxillary ring of $\phi$. By definition the elements of $R$ are all the relative complements of $a_{2}$ in $a_{1}+a_{2} / 0_{\phi}$. Since $x \in R$ implies $\alpha_{\phi}(x)=x \oplus 1_{R} \in R$, it follows that $\alpha_{\phi}(x)$ is again a relative complement of $a_{2}$ in $a_{1}+a_{2} / 0_{\phi}$ (this can also be verified easily from the definition of $\left.\alpha_{\phi}\right)$. Thus $a=\alpha_{\phi}^{r}\left(a_{1}\right) \in R$ and $a+a_{2}=a_{1}+a_{2}$. By the preceding lemma $$ \begin{aligned} \alpha_{\phi^{u}}^{r}\left(a_{1}+u\right) & =\alpha_{\phi}^{r}\left(a_{1}\right)+u \\ & =a+u+a_{2}\left(a+a_{1}\right) \\ & =\left(a+a_{2}\right)\left(a+a_{1}\right)+u \\ & =\left(a_{1}+a_{2}\right)\left(a+a_{1}\right)+u=a+a_{1}+u . 
\end{aligned}
$$
Also $a_{1}+u_{2}=\left(a_{1}+a_{2}\right)\left(a_{1}+a\right)=a_{1}+a$ shows that $a_{1}+u \geq a$, whence $\alpha_{\phi^{u}}^{r}\left(a_{1}+u\right)=a_{1}+u$. $\square$

We can now prove the result corresponding to Corollary 3.17 for $n$-frames of a given characteristic.

Theorem 3.24 (Freese [79]). Let $M$ and $L$ be modular lattices and let $f: M \rightarrow L$ be an epimorphism. If $\phi=\left(a_{i}, c_{1 j}\right)$ is an $n$-frame of characteristic $r$ in $L$, then there is an $n$-frame $\hat{\phi}=\left(\hat{a}_{i}, \hat{c}_{1 j}\right)$ of characteristic $r$ in $M$ such that $f(\hat{\phi})=\phi$.

Proof. From Corollary 3.17 we obtain an $n$-frame $\bar{\phi}=\left(\bar{a}_{i}, \bar{c}_{1 j}\right)$ in $M$ such that $f(\bar{\phi})=\phi$. If we let $u_{2}=\bar{a}_{2}\left(\alpha_{\bar{\phi}}^{r}\left(\bar{a}_{1}\right)+\bar{a}_{1}\right)$ and $u$ be as in the preceding lemma, then we see that $\bar{\phi}^{u}=\left(\bar{a}_{i}+u, \bar{c}_{1 j}+u\right)$ is an $n$-frame of characteristic $r$ in $M$. Since $\phi$ has characteristic $r$ by assumption,
$$
f\left(u_{2}\right)=f\left(\bar{a}_{2}\left(\alpha_{\bar{\phi}}^{r}\left(\bar{a}_{1}\right)+\bar{a}_{1}\right)\right)=a_{2}\left(\alpha_{\phi}^{r}\left(a_{1}\right)+a_{1}\right)=a_{2}\left(a_{1}+a_{1}\right)=0_{\phi}
$$
from which it easily follows that $f\left(u_{1}\right)=f\left(\bar{a}_{1}\left(u_{2}+\bar{c}_{12}\right)\right)=0_{\phi}$ and $f\left(u_{i}\right)=0_{\phi}$. Therefore $f(u)=0_{\phi}$ and $f\left(\bar{\phi}^{u}\right)=\phi$, so we can take $\hat{\phi}=\bar{\phi}^{u}$.

$\mathcal{M}$ is not generated by its finite members. This is the main result of Freese [79], and follows immediately from the theorem below, where $\mathcal{M}_{F}$ is the class of all finite modular lattices.

Theorem 3.25 (Freese [79]). There exists a modular lattice $L$ such that $L \notin\left(\mathcal{M}_{F}\right)^{\mathcal{V}}$.

Proof. The lattice $L$ is constructed (using a technique due to Hall and Dilworth [44]) as follows: Let $F$ and $K$ be two countably infinite fields of characteristic $p$ and $q$ respectively, where $p$ and $q$ are distinct primes. Let $L_{p}=\mathcal{L}\left(F^{4}, F\right)$ be the subspace lattice of the 4-dimensional vector space $F^{4}$ over $F$ and let $\phi=\left(a_{i}, c_{1 j}\right)$ be the canonical 4-frame in $L_{p}$ (the index $i=1,2,3,4$ and $j=2,3,4$ throughout). Note that $\phi$ is a spanning 4-frame of characteristic $p$. Similarly let $L_{q}=\mathcal{L}\left(K^{4}, K\right)$ with canonical 4-frame $\phi^{\prime}=\left(a_{i}^{\prime}, c_{1 j}^{\prime}\right)$ of characteristic $q$.

Since $|K|=\omega$, there are precisely $\omega$ one-dimensional subspaces in the quotient $a_{1}^{\prime}+a_{2}^{\prime} / 0_{\phi^{\prime}}$, hence $a_{1}^{\prime}+a_{2}^{\prime} / 0_{\phi^{\prime}} \cong M_{\omega}$ (the two-dimensional lattice with countably many atoms). The quotient $1_{\phi} / a_{3}+a_{4}$ of $L_{p}$ is isomorphic to $a_{1}+a_{2} / 0_{\phi}$ via the map $x \mapsto x\left(a_{1}+a_{2}\right)$ and since $|F|=\omega$, we see that $1_{\phi} / a_{3}+a_{4}$ is also isomorphic to $M_{\omega}$.
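The cardinality count used in this step can be made concrete. For a finite field $GF(q)$ the corresponding quotient is $M_{q+1}$, since a two-dimensional space over $GF(q)$ has exactly $q+1$ one-dimensional subspaces; over a countably infinite field the same count yields $\omega$ atoms, which gives the $M_{\omega}$ used above. The sketch below is an added illustration, not part of the original text (the function name `lines_through_origin` is ad hoc); it lists these subspaces explicitly for a few small primes.

```python
from itertools import product

def lines_through_origin(q):
    """The one-dimensional subspaces of GF(q)^2 (q prime), each represented
    by its set of nonzero vectors."""
    nonzero = [v for v in product(range(q), repeat=2) if any(v)]
    subspaces = set()
    for x, y in nonzero:
        subspaces.add(frozenset(((a * x) % q, (a * y) % q) for a in range(1, q)))
    return subspaces

for q in (2, 3, 5, 7):
    print(q, len(lines_through_origin(q)))   # prints q followed by q + 1
```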
Let $\sigma: 1_{\phi} / a_{3}+a_{4} \rightarrow a_{1}^{\prime}+a_{2}^{\prime} / 0_{\phi^{\prime}}$ be any isomorphism which satisfies $$ \begin{aligned} \sigma\left(a_{1}+a_{3}+a_{4}\right) & =a_{1}^{\prime} \\ \sigma\left(a_{2}+a_{3}+a_{4}\right) & =a_{2}^{\prime} \\ \sigma\left(c_{12}+a_{3}+a_{4}\right) & =c_{12}^{\prime} \end{aligned} $$ The lattice $L$ is constructed by "loosely gluing" the lattice $L_{q}$ over $L_{p}$ via the isomorphism $\sigma$, i.e. let $L$ be the disjoint union of $L_{p}$ and $L_{q}$ and define $x \leq y$ in $L$ if and only if $$ \begin{aligned} & x, y \in L_{p} \quad \text { and } \quad x \leq y \quad \text { in } L_{p} \text { or } \\ & x, y \in L_{q} \text { and } x \leq y \text { in } L_{q} \text { or } \\ & x \in L_{p}, y \in L_{q} \quad \text { and } \quad x \leq z, \sigma(z) \leq y \text { for some } z \in 1_{\phi} / a_{3}+a_{4} \text {. } \end{aligned} $$ Then it is easy to check that $L$ is a modular lattice. (The conditions on $\sigma$ are needed to make the two 4-frames fit together nicely.) Let $D$ be the finite distributive sublattice of $L$ generated by the set $\left\{a_{i}, a_{i}^{\prime}: i=1,2,3,4\right\}$ (see Figure 3.3 (i)). Notice that $D$ is the product of the four element Boolean algebra and the lattice in Figure 3.3 (ii). Both these lattices are finite projective modular lattices, so by Lemma 3.13 $D$ is a projective modular lattice. Suppose now that $L \in\left(\mathcal{M}_{F}\right)^{\mathcal{V}}=\mathrm{HSP} \mathcal{M}_{F}$. Then $L$ is a homomorphic image of some lattice $\bar{L} \in \mathbf{S P} \mathcal{M}_{F}$, and hence $\bar{L}$ is residually finite (i.e. a subdirect product of finite lattices). But below we show that any lattice which has $L$ as a homomorphic image cannot be residually finite, and this contradiction will conclude the proof. Let $f$ be the homomorphism from $\bar{L}$ onto $L$. Since $D$ is a projective modular sublattice of $L$, we can find elements $\bar{a}_{i}, \bar{a}_{i}^{\prime}$ in $\bar{L}$ which generate a sublattice isomorphic to $D$, and $f\left(\bar{a}_{i}\right)=a_{i}, f\left(\bar{a}_{i}^{\prime}\right)=a_{i}^{\prime}$. Let us assume for the moment that (*) there exist further elements $\bar{c}_{1 j}$ and $\bar{c}_{1 j}^{\prime}$ in $\bar{L}$ such that $\bar{\phi}=\left(\bar{a}_{i}, \bar{c}_{1 j}\right)$ is a 4-frame of characteristic $p$, $\bar{\phi}^{\prime}=\left(\bar{a}_{i}^{\prime}, \bar{c}_{1 j}^{\prime}\right)$ is a 4-frame of characteristic $q$ and $f(\bar{\phi})=\phi$, $f\left(\bar{\phi}^{\prime}\right)=\phi^{\prime}$. If $\bar{L}$ is residually finite, then we can find a finite modular lattice $M$ and a homomorphism $g: \bar{L} \rightarrow M$ which maps the (finite) 4-frames $\bar{\phi}$ and $\bar{\phi}^{\prime}$ in a one-to-one fashion into $M$, where we denote them by $\hat{\phi}=\left(\hat{a}_{i}, \hat{c}_{1 j}\right)$ and $\hat{\phi}^{\prime}=\left(\hat{a}_{i}^{\prime}, \hat{c}_{1 j}^{\prime}\right)$ respectively. By Lemmas 3.19 and 3.21 they give rise to two auxiliary rings $R \subseteq \hat{a}_{1}+\hat{a}_{2} / 0_{\hat{\phi}}$ and $R^{\prime} \subseteq \hat{a}_{1}^{\prime}+\hat{a}_{2}^{\prime} / 0_{\hat{\phi}^{\prime}}$ of characteristic $p$ and $q$ respectively. Since $M$ is a finite lattice, $R$ and $R^{\prime}$ are finite rings, so $|R|=p^{m}$ and $\left|R^{\prime}\right|=q^{n}$ for some $n, m \in \omega$. Also, since $0_{R}=\hat{a}_{1} \neq \hat{c}_{12}=1_{R}$, $R$ has at least two elements. 
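(The step from "$R$ is a finite ring of characteristic $p$" to $|R|=p^{m}$ is standard; we record the one-line reason here as an aside, since it is not spelled out in the proof:
$$
p x=0 \ \text{ for all } x \in R \ \Longrightarrow\ (R,+) \text{ is a vector space over } \mathbb{Z}_{p} \ \Longrightarrow\ |R|=p^{\dim (R,+)}=p^{m},
$$
and similarly $\left|R^{\prime}\right|=q^{n}$.)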
Now in $M$ the elements $\hat{a}_{i}, \hat{a}_{i}^{\prime}$ generate a sublattice $\hat{D} \cong D$, hence $\hat{a}_{1}+\hat{a}_{2} / 0_{\hat{\phi}} \nearrow$ $\hat{a}_{1}^{\prime}+\hat{a}_{2}^{\prime} / 0_{\hat{\phi}^{\prime}}, \hat{a}_{1}+0_{\hat{\phi}^{\prime}}=\hat{a}_{1}^{\prime}$ and $\hat{a}_{2}+0_{\hat{\phi}^{\prime}}=\hat{a}_{2}^{\prime}$ (see Figure 3.3 (i)). It follows that the two quotients are isomorphic, and checking the definition of $R_{\phi}$ above Theorem 3.19, we see that this isomorphism restricts to an isomorphism between $R$ and $R^{\prime}$. Thus $|R|=\left|R^{\prime}\right|$, which is a contradiction, as $p$ and $q$ are distinct primes and $|R| \geq 2$. Consequently $\bar{L}$ is not residually finite, which implies that $L$ is not a member of $\left(\mathcal{M}_{F}\right)^{\mathcal{V}}$.

We now complete the proof with a justification of $(*)$. This is done by adjusting the elements $\bar{a}_{i}, \bar{a}_{i}^{\prime}$ in several steps, thereby constructing the required 4-frames. Since we will be working primarily with elements of $\bar{L}$, we first of all change the notation, denoting the 4-frames $\phi, \phi^{\prime}$ in $L$ by $\hat{\phi}=\left(\hat{a}_{i}, \hat{c}_{1 j}\right), \hat{\phi}^{\prime}=\left(\hat{a}_{i}^{\prime}, \hat{c}_{1 j}^{\prime}\right)$ and the $\bar{a}_{i}, \bar{a}_{i}^{\prime}$ in $\bar{L}$ by $a_{i}, a_{i}^{\prime}$. Also the condition that the elements $a_{i}, a_{i}^{\prime}$ generate a sublattice isomorphic to $D$ (Figure 3.3 (i)) will be abbreviated by $D\left(a_{i}, a_{i}^{\prime}\right)$. To check that $D\left(a_{i}, a_{i}^{\prime}\right)$ holds, one has to verify that the $a_{i}$ are independent over $a_{1} a_{2}$, the $a_{i}^{\prime}$ are independent over $a_{1}^{\prime} a_{2}^{\prime}=0^{\prime}, 1 / a_{3}+a_{4} \nearrow a_{1}^{\prime}+a_{2}^{\prime} / 0^{\prime}$, $a_{1}^{\prime}=a_{1}+0^{\prime}$ and $a_{2}^{\prime}=a_{2}+0^{\prime}$. Actually, once the transposition has been established, it is enough to show that $a_{1} \leq a_{1}^{\prime}$ and $a_{2} \leq a_{2}^{\prime}$ since then $0^{\prime} \leq a_{1}^{\prime}\left(a_{2}+0^{\prime}\right) \leq a_{1}^{\prime} a_{2}^{\prime}=0^{\prime}$ implies $$ a_{1}^{\prime}=\left(1+0^{\prime}\right) a_{1}^{\prime}=\left(a_{1}+a_{2}+0^{\prime}\right) a_{1}^{\prime}=a_{1}+\left(a_{2}+0^{\prime}\right) a_{1}^{\prime}=a_{1}+0^{\prime} $$ and $a_{2}^{\prime}=a_{2}+0^{\prime}$ follows similarly.

Step 1: Let $1^{\prime}=\sum_{i=1}^{4} a_{i}^{\prime}$ and $\hat{e}^{\prime}=\hat{c}_{12}^{\prime}+\hat{c}_{13}^{\prime}+\hat{c}_{14}^{\prime}$. By Lemma 3.15, $\left(\hat{a}_{i}^{\prime}, \hat{e}^{\prime}\right)$ is a 4-diamond in $L$. Since $\hat{e}^{\prime} \in 1_{\hat{\phi}^{\prime}} / 0_{\hat{\phi}^{\prime}}$, we can choose $e^{\prime} \in 1^{\prime} / 0^{\prime}$ such that $f\left(e^{\prime}\right)=\hat{e}^{\prime}$. Clearly $e^{\prime}$ is incomparable with each $a_{i}^{\prime}$. 
Defining $b^{\prime}=\sum a_{i}^{\prime} e^{\prime}, c^{\prime}=\prod\left(a_{i}^{\prime}+e^{\prime}\right), d^{\prime}=\sum\left(a_{i}^{\prime}+b^{\prime}\right) c^{\prime}$ $(i=1,2,3,4)$ corresponding to $b, c, d$ in Lemma 3.16, it is easy to check that $b^{\prime} \leq d^{\prime} \leq c^{\prime}$, $\left(a_{i}^{\prime}+b^{\prime}\right) c^{\prime}=a_{i}^{\prime} c^{\prime}+b^{\prime}, f\left(b^{\prime}\right)=0_{\hat{\phi}^{\prime}}, f\left(c^{\prime}\right)=1_{\hat{\phi}^{\prime}}=f\left(d^{\prime}\right)$ and $f\left(\left(a_{i}^{\prime}+b^{\prime}\right) c^{\prime}\right)=\hat{a}_{i}^{\prime}$, so combining part (iii) of that lemma with Lemma 3.15 shows that $$ \bar{\phi}^{\prime}=\left(\bar{a}_{i}^{\prime}, \bar{c}_{1 j}^{\prime}\right)=\left(a_{i}^{\prime} c^{\prime}+b^{\prime},\left(a_{1}^{\prime} c^{\prime}+a_{j}^{\prime} c^{\prime}+b^{\prime}\right) e^{\prime} d^{\prime}\right) $$ is a 4-frame in $d^{\prime} / b^{\prime}$ and $f\left(\bar{\phi}^{\prime}\right)=\hat{\phi}^{\prime}$. Now let $\overline{0}=\left(a_{1}+a_{2}\right) b^{\prime}$ and consider the elements $\bar{a}_{i}=a_{i} d^{\prime}+\overline{0}$. Since $D\left(\hat{a}_{i}, \hat{a}_{i}^{\prime}\right)$ holds, $\left(\hat{a}_{1}+\hat{a}_{2}\right) 0_{\hat{\phi}^{\prime}}=0_{\hat{\phi}}$, whence $f(\overline{0})=0_{\hat{\phi}}$ and $f\left(\bar{a}_{i}\right)=\hat{a}_{i}$. In particular it follows that $\bar{a}_{i} \neq \overline{0}$. We show that the $\bar{a}_{i}$ are independent over $\overline{0}$. Observe that $a_{3}+a_{4} \leq 0^{\prime} \leq b^{\prime} \leq d^{\prime}$ implies $\bar{a}_{3}=a_{3}+\overline{0}$ and $\bar{a}_{4}=a_{4}+\overline{0}$. Since $a_{1} \leq a_{1}^{\prime}, a_{2} \leq a_{2}^{\prime}, d^{\prime} \leq c^{\prime}$ and $\bar{\phi}^{\prime}$ is a 4-frame, $$ \begin{aligned} \bar{a}_{1} \sum_{i \neq 1} \bar{a}_{i} & =\left(a_{1} d^{\prime}+\overline{0}\right)\left(a_{2} d^{\prime}+a_{3}+a_{4}+\overline{0}\right) \\ & \leq\left(a_{1}^{\prime} c^{\prime}+b^{\prime}\right)\left(a_{2}^{\prime} c^{\prime}+b^{\prime}\right)=b^{\prime} \end{aligned} $$ Since $\bar{a}_{1} \leq a_{1}+a_{2}$, the left hand side is $\leq\left(a_{1}+a_{2}\right) b^{\prime}=\overline{0}$, and the opposite inequality is obvious. Similarly $\bar{a}_{2} \sum_{i \neq 2} \bar{a}_{i}=\overline{0}$. Also $$ \begin{aligned} \bar{a}_{3} \sum_{i \neq 3} \bar{a}_{i} & =\left(a_{3}+\overline{0}\right)\left(a_{1} d^{\prime}+a_{2} d^{\prime}+a_{4}+\overline{0}\right) \\ & =\overline{0}+a_{3}\left(a_{1} d^{\prime}+a_{2} d^{\prime}+a_{4}+\overline{0}\right)=\overline{0} \end{aligned} $$ because $a_{1} d^{\prime}+a_{2} d^{\prime}+a_{4}+\overline{0} \leq a_{1}+a_{2}+a_{4}$, and likewise for $\bar{a}_{4} \sum_{i \neq 4} \bar{a}_{i}=\overline{0}$. Let $\overline{1}=\sum_{i=1}^{4} \bar{a}_{i}$ and observe that $0_{\bar{\phi}^{\prime}}=b^{\prime}$. We proceed to show that $D\left(\bar{a}_{i}, \bar{a}_{i}^{\prime}\right)$ holds. Note firstly that $a_{i}^{\prime} d^{\prime}=a_{i}^{\prime} c^{\prime}$ since $a_{i}^{\prime} d^{\prime}=a_{i}^{\prime} \sum_{k=1}^{4} a_{k}^{\prime} c^{\prime}=a_{i}^{\prime}\left(a_{i}^{\prime} c^{\prime}+\sum_{k \neq i} a_{k}^{\prime} c^{\prime}\right)=a_{i}^{\prime} c^{\prime}+0^{\prime}=a_{i}^{\prime} c^{\prime}$. 
Now $$ \begin{aligned} \overline{1}+b^{\prime} & =a_{1} d^{\prime}+a_{2} d^{\prime}+a_{3}+a_{4}+\overline{0}+b^{\prime} \\ & \leq a_{1}^{\prime} d^{\prime}+a_{2}^{\prime} d^{\prime}+b^{\prime}=a_{1}^{\prime} c^{\prime}+a_{2}^{\prime} c^{\prime}+b^{\prime}=\bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime}, \end{aligned} $$ and $a_{1} d^{\prime}+b^{\prime} \geq a_{1} d^{\prime}+\overline{0}=\left(a_{1}+0^{\prime}\right) d^{\prime}=a_{1}^{\prime} d^{\prime}$ together with a similar computation for $a_{2}$ shows that $\overline{1}+b^{\prime}=\bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime}$. Furthermore $$ \begin{aligned} \overline{1} b^{\prime} & =\left(a_{1} d^{\prime}+a_{2} d^{\prime}+a_{3}+a_{4}+\overline{0}\right) b^{\prime} \\ & =a_{3}+a_{4}+\overline{0}+\left(a_{1} d^{\prime}+a_{2} d^{\prime}\right) b^{\prime} \\ & =a_{3}+a_{4}+\overline{0}=\bar{a}_{3}+\bar{a}_{4} . \end{aligned} $$ Since $\bar{a}_{i}=a_{i} d^{\prime}+\left(a_{1}+a_{2}\right) b^{\prime} \leq a_{i}^{\prime} c^{\prime}+b^{\prime}=\bar{a}_{i}^{\prime}$ we have $D\left(\bar{a}_{i}, \bar{a}_{i}^{\prime}\right)$.

Step 2: Using Lemmas 3.18 and 3.23 we now construct a new 4-frame $$ \phi^{\prime}=\left(a_{i}^{\prime}, c_{1 j}^{\prime}\right)={\overline{\phi^{\prime}}}^{u}=\left(\bar{a}_{i}^{\prime}+u, \bar{c}_{1 j}^{\prime}+u\right), $$ where $u$ is derived from $u_{2}=\bar{a}_{2}^{\prime}\left(\alpha_{\bar{\phi}^{\prime}}^{q}\left(\bar{a}_{1}^{\prime}\right)+\bar{a}_{1}^{\prime}\right)$. By Lemma 3.23, $\phi^{\prime}$ is a 4-frame of characteristic $q$. Since $\hat{\phi}^{\prime}$ in $L$ has characteristic $q$, it follows that $f(u)=0_{\hat{\phi}^{\prime}}$ and $f\left(\phi^{\prime}\right)=\hat{\phi}^{\prime}$ (see proof of Theorem 3.24). Moreover, if we define $0=\left(\bar{a}_{1}+\bar{a}_{2}\right) u$ and $a_{i}=\bar{a}_{i}+0$ then $f\left(a_{i}\right)=\hat{a}_{i}$ and calculations similar to the ones in Step 1 show that the $a_{i}, a_{i}^{\prime}$ generate a copy of $D$.

Step 3: In this step we first construct a new 4-frame $\bar{\phi}=\left(\bar{a}_{i}, \bar{c}_{1 j}\right)$ derived from the elements $a_{i}$ of Step 2 such that $f(\bar{\phi})=\hat{\phi}$. Then we adjust $\phi^{\prime}$ accordingly to obtain $\bar{\phi}^{\prime}=\left(\bar{a}_{i}^{\prime}, \bar{c}_{1 j}^{\prime}\right)$ satisfying $f\left(\bar{\phi}^{\prime}\right)=\hat{\phi}^{\prime}, D\left(\bar{a}_{i}, \bar{a}_{i}^{\prime}\right)$ and $\bar{c}_{12}^{\prime}=\bar{c}_{12}+0_{\bar{\phi}^{\prime}}$. Since $\hat{c}_{23}+\hat{c}_{24} \leq \hat{a}_{2}+\hat{a}_{3}+\hat{a}_{4}$, it is possible to choose $\bar{e} \in \bar{L}$ such that $f(\bar{e})=\hat{c}_{23}+\hat{c}_{24}$ and $a_{1} a_{2} \leq \bar{e} \leq a_{2}+a_{3}+a_{4}$. Let $c_{12}=c_{12}^{\prime}\left(a_{1}+a_{2}\right)$ and observe that $f\left(c_{12}\right)=\hat{c}_{12}^{\prime}\left(\hat{a}_{1}+\hat{a}_{2}\right)=\hat{c}_{12}$ by the choice of $\sigma$ in the construction of $L$. Let $e=c_{12}+\bar{e}$ and define $b, c, d$ as in Lemma 3.16. Since $f(e)=\hat{c}_{12}+\hat{c}_{23}+\hat{c}_{24}=\hat{e}$ is a relative complement of each $\hat{a}_{i}$, considerations similar to the ones in Step 1 show that $$ \bar{\phi}=\left(\bar{a}_{i}, \bar{c}_{1 j}\right)=\left(a_{i} c+b,\left(a_{1} c+a_{j} c+b\right) e d\right) $$ is a 4-frame in $d / b$ and $f(\bar{\phi})=\hat{\phi}$. 
Let $u_{1}^{\prime}=a_{1} c+0_{\phi^{\prime}}$, $u_{i}^{\prime}=a_{i}^{\prime}\left(u_{1}^{\prime}+c_{1 i}^{\prime}\right)$, $u^{\prime}=\sum_{i=1}^{4} u_{i}^{\prime}$, and $v_{1}^{\prime}=a_{1} e+0_{\phi^{\prime}}$, $v_{i}^{\prime}=a_{i}^{\prime}\left(v_{1}^{\prime}+c_{1 i}^{\prime}\right)$, $v^{\prime}=\sum_{i=1}^{4} v_{i}^{\prime}$. Then $0_{\phi^{\prime}} \leq v_{1}^{\prime} \leq u_{1}^{\prime} \leq a_{1}^{\prime}$ and two applications of Lemma 3.18 show that $$ \bar{\phi}^{\prime}=\left(\bar{a}_{i}^{\prime}, \bar{c}_{1 j}^{\prime}\right)=\left(\phi_{u^{\prime}}^{\prime}\right)^{v^{\prime}}=\left(a_{i}^{\prime} u^{\prime}+v^{\prime}, c_{1 j}^{\prime} u^{\prime}+v^{\prime}\right) $$ is a 4-frame in $u^{\prime} / v^{\prime}$. Also since $\phi^{\prime}$ has characteristic $q$, Lemma 3.22 implies that $\bar{\phi}^{\prime}$ has characteristic $q$. Furthermore, it is easy to check that $f\left(u^{\prime}\right)=1_{\hat{\phi}^{\prime}}$ and $f\left(v^{\prime}\right)=0_{\hat{\phi}^{\prime}}$, whence $f\left(\bar{\phi}^{\prime}\right)=\hat{\phi}^{\prime}$. We now show that the elements $\bar{a}_{i}, \bar{a}_{i}^{\prime}$ generate a copy of $D$. From $D\left(a_{i}, a_{i}^{\prime}\right)$ we deduce that $$ \begin{aligned} c_{12}+0_{\phi^{\prime}} & =c_{12}^{\prime}\left(a_{1}+a_{2}\right)+0_{\phi^{\prime}}=c_{12}^{\prime}\left(a_{1}+a_{2}+0_{\phi^{\prime}}\right)=c_{12}^{\prime}\left(a_{1}^{\prime}+a_{2}^{\prime}\right)=c_{12}^{\prime} \\ a_{1}+c_{12} & =a_{1}+c_{12}^{\prime}\left(a_{1}+a_{2}\right)=\left(a_{1}+c_{12}^{\prime}\right)\left(a_{1}+a_{2}\right) \\ & =\left(a_{1}+0_{\phi^{\prime}}+c_{12}^{\prime}\right)\left(a_{1}+a_{2}\right)=\left(a_{1}^{\prime}+a_{2}^{\prime}\right)\left(a_{1}+a_{2}\right)=a_{1}+a_{2} . \end{aligned} $$ Similarly $a_{2}+c_{12}=a_{1}+a_{2}$ and $a_{1} c_{12}=a_{1} a_{2}=a_{2} c_{12}$. Since $c_{12} \leq e \leq c$, $$ \begin{gathered} c_{12}+a_{1} e=\left(c_{12}+a_{1}\right) e=\left(c_{12}+a_{2}\right) e=c_{12}+a_{2} e \\ c_{12}+a_{1} c=c_{12}+a_{2} c . \end{gathered} $$ Moreover $$ \begin{aligned} u_{2}^{\prime} & =a_{2}^{\prime}\left(a_{1} c+0_{\phi^{\prime}}+c_{12}^{\prime}\right)=a_{2}^{\prime}\left(a_{1} c+0_{\phi^{\prime}}+c_{12}\right) \\ & =a_{2}^{\prime}\left(a_{2} c+0_{\phi^{\prime}}+c_{12}\right)=a_{2}^{\prime}\left(a_{2} c+c_{12}^{\prime}\right) \\ & =a_{2} c+a_{2}^{\prime} c_{12}^{\prime}=a_{2} c+0_{\phi^{\prime}} \end{aligned} $$ A similar calculation yields $v_{2}^{\prime}=a_{2} e+0_{\phi^{\prime}}$. Now $a_{2} c+u_{3}^{\prime}+u_{4}^{\prime} \leq a_{2}^{\prime}+a_{3}^{\prime}+a_{4}^{\prime}$ implies $$ a_{1}^{\prime} u^{\prime}=a_{1}^{\prime}\left(a_{1} c+a_{2} c+0_{\phi^{\prime}}+u_{3}^{\prime}+u_{4}^{\prime}\right)=a_{1} c+0_{\phi^{\prime}}, $$ and similarly $a_{2}^{\prime} u^{\prime}=a_{2} c+0_{\phi^{\prime}}$. 
Together with $a_{3} c+a_{4} c \leq v^{\prime}$ we compute $$ \begin{aligned} d+v^{\prime} & =\sum_{i=1}^{4} a_{i} c+v^{\prime}=\sum_{i=1}^{4} a_{i} c+0_{\phi^{\prime}}+v^{\prime} \\ & =a_{1}^{\prime} u^{\prime}+a_{2}^{\prime} u^{\prime}+v^{\prime}=\bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime}, \\ d v^{\prime} & =d\left(a_{1}^{\prime}+a_{2}^{\prime}\right)\left(a_{1} e+a_{2} e+0_{\phi^{\prime}}+v_{3}^{\prime}+v_{4}^{\prime}\right) \\ & =d\left(a_{1} e+a_{2} e+0_{\phi^{\prime}}+\left(a_{1}^{\prime}+a_{2}^{\prime}\right)\left(v_{3}^{\prime}+v_{4}^{\prime}\right)\right) \\ & =d\left(a_{1} e+a_{2} e+0_{\phi^{\prime}}\right) \\ & =\left(a_{1} c+a_{2} c+a_{3} c+a_{4} c\right) 0_{\phi^{\prime}}+a_{1} e+a_{2} e \\ & =a_{3} c+a_{4} c+\left(a_{1} c+a_{2} c\right) 0_{\phi^{\prime}}+a_{1} e+a_{2} e \\ & =a_{3} c+a_{4} c+b=\bar{a}_{3}+\bar{a}_{4} . \end{aligned} $$ Since $\bar{a}_{1}+v^{\prime}=a_{1} c+b+v^{\prime}=a_{1} c+0_{\phi^{\prime}}+v^{\prime}=a_{1}^{\prime} u^{\prime}+v^{\prime}=\bar{a}_{1}^{\prime}$ and similarly $\bar{a}_{2}+v^{\prime}=\bar{a}_{2}^{\prime}$, we have $D\left(\bar{a}_{i}, \bar{a}_{i}^{\prime}\right)$. Lastly we want to show that $\bar{c}_{12}^{\prime}=\bar{c}_{12}+0_{\bar{\phi}^{\prime}}$. Since $d=\sum_{i=1}^{4} a_{i} c+b$, $\bar{e} \leq a_{2}+a_{3}+a_{4}$ and $\bar{e} \leq e \leq c$, $$ \begin{aligned} \bar{c}_{12} & =\left(a_{1} c+a_{2} c+b\right) e d=\left(a_{1} c+a_{2} c\right) e+b \\ & =\left(a_{1} c+a_{2} c\right)\left(a_{1}+a_{2}\right)\left(c_{12}+\bar{e}\right)+b \\ & =\left(a_{1} c+a_{2} c\right)\left(c_{12}+\left(a_{1}+a_{2}\right) \bar{e}\right)+b \\ & =\left(a_{1} c+a_{2} c\right)\left(c_{12}+a_{2} \bar{e}\right)+b \\ & =c_{12}\left(a_{1} c+a_{2} c\right)+a_{2} \bar{e}+b \\ & \leq c_{12}^{\prime} u^{\prime}+v^{\prime}=\bar{c}_{12}^{\prime}, \end{aligned} $$ where we used $c_{12} \leq c_{12}^{\prime}, a_{1} c+a_{2} c \leq u^{\prime}$ and $a_{2} \bar{e} \leq b \leq v^{\prime}$ in the last line. Also $\bar{a}_{2} \leq \bar{a}_{2}^{\prime}$ implies $v^{\prime} \leq\left(\bar{a}_{2}+v^{\prime}\right) \bar{c}_{12}^{\prime} \leq \bar{a}_{2}^{\prime} \bar{c}_{12}^{\prime}=v^{\prime}$, and since we already know that $d / \bar{a}_{3}+\bar{a}_{4} \nearrow \bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime} / v^{\prime}$ the calculation $$ \begin{aligned} \bar{c}_{12}^{\prime} & =\left(d+v^{\prime}\right) \bar{c}_{12}^{\prime}=\left(\bar{a}_{1}+\bar{a}_{2}+v^{\prime}\right) \bar{c}_{12}^{\prime} \\ & =\left(\bar{c}_{12}+\bar{a}_{2}+v^{\prime}\right) \bar{c}_{12}^{\prime}=\bar{c}_{12}+\left(\bar{a}_{2}+v^{\prime}\right) \bar{c}_{12}^{\prime}=\bar{c}_{12}+v^{\prime} \end{aligned} $$ completes this step.

Step 4: As in Step 2 we use Lemma 3.18 to construct a new 4-frame $$ \phi=\left(a_{i}, c_{1 j}\right)=\bar{\phi}^{u}=\left(\bar{a}_{i}+u, \bar{c}_{1 j}+u\right), $$ where $u$ is derived from $u_{2}=\bar{a}_{2}\left(\alpha_{\bar{\phi}}^{p}\left(\bar{a}_{1}\right)+\bar{a}_{1}\right)$ and $u_{1}=\bar{a}_{1}\left(u_{2}+\bar{c}_{12}\right)$. By Lemma 3.23, $\phi$ is a 4-frame of characteristic $p$ and as before $f(\phi)=\hat{\phi}$. Let $w_{1}^{\prime}=u_{1}+v^{\prime}, w_{i}^{\prime}=\bar{a}_{i}^{\prime}\left(w_{1}^{\prime}+\bar{c}_{1 i}^{\prime}\right), w^{\prime}=\sum_{i=1}^{4} w_{i}^{\prime}$ and consider the 4-frame $$ \phi^{\prime}=\left(a_{i}^{\prime}, c_{1 j}^{\prime}\right)=\bar{\phi}^{\prime\, w^{\prime}}=\left(\bar{a}_{i}^{\prime}+w^{\prime}, \bar{c}_{1 j}^{\prime}+w^{\prime}\right) . $$ Since $\bar{\phi}^{\prime}$ was of characteristic $q$, so is $\phi^{\prime}$ (Lemma 3.22). 
It remains to show that $D\left(a_{i}, a_{i}^{\prime}\right)$ holds. Note that $1_{\phi}=1_{\bar{\phi}}=d$ and $0_{\phi^{\prime}}=w^{\prime} \geq w_{i}^{\prime} \geq v^{\prime} \geq \bar{a}_{3}+\bar{a}_{4}$. Therefore $$ \begin{aligned} 1_{\phi}+w^{\prime} & =\bar{a}_{1}+\bar{a}_{2}+\bar{a}_{3}+\bar{a}_{4}+w^{\prime} \\ & =\bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime}+w^{\prime}=a_{1}^{\prime}+a_{2}^{\prime} . \end{aligned} $$ Also $u_{2}=\bar{a}_{2}\left(u_{1}+\bar{c}_{12}\right)$ (see proof of Lemma 3.23) and $u_{1}=\bar{a}_{1}\left(u_{2}+\bar{c}_{12}\right)$ imply $u_{2}+\bar{c}_{12}=u_{1}+\bar{c}_{12}$. Together with $\bar{c}_{12}^{\prime}=\bar{c}_{12}+v^{\prime}$ from Step 3 we have $$ \begin{aligned} w_{1}^{\prime}+\bar{c}_{12}^{\prime} & =u_{1}+\bar{c}_{12}+v^{\prime}=u_{2}+\bar{c}_{12}+v^{\prime} \\ w_{2}^{\prime} & =\bar{a}_{2}^{\prime}\left(w_{1}^{\prime}+\bar{c}_{12}^{\prime}\right)=\bar{a}_{2}^{\prime}\left(u_{2}+\bar{c}_{12}+v^{\prime}\right) \\ & =u_{2}+v^{\prime}+\bar{a}_{2}^{\prime} \bar{c}_{12}=u_{2}+v^{\prime} . \end{aligned} $$ A last calculation shows that $$ \begin{aligned} 1_{\phi} w^{\prime} & =1_{\phi}\left(u_{1}+u_{2}+w_{3}^{\prime}+w_{4}^{\prime}\right) \\ & =u_{1}+u_{2}+1_{\phi}\left(\bar{a}_{1}^{\prime}+\bar{a}_{2}^{\prime}\right)\left(w_{3}^{\prime}+w_{4}^{\prime}\right) \\ & =u_{1}+u_{2}+1_{\phi} v^{\prime}=u_{1}+u_{2}+\bar{a}_{3}+\bar{a}_{4} \\ & =u+\bar{a}_{3}+\bar{a}_{4}=a_{3}+a_{4} . \end{aligned} $$ Since $\bar{a}_{1} \leq \bar{a}_{1}^{\prime}, \bar{a}_{2} \leq \bar{a}_{2}^{\prime}$ and $u=0_{\phi} \leq w^{\prime}$ it follows that $a_{1} \leq a_{1}^{\prime}$ and $a_{2} \leq a_{2}^{\prime}$. Hence $D\left(a_{i}, a_{i}^{\prime}\right)$ holds. Denoting $\phi, \phi^{\prime}$ by $\bar{\phi}, \bar{\phi}^{\prime}$ and $\hat{\phi}, \hat{\phi}^{\prime}$ by $\phi, \phi^{\prime}$ we see that condition $(*)$ is now satisfied.

Note that the lattice $L$ in the preceding theorem has finite length. Let $\mathcal{M}_{F l}$ be the class of all modular lattices of finite length, and denote by $\mathcal{M}_{Q}$ the collection of all subspace lattices of vector spaces over the rational numbers. By a result of Herrmann and Huhn [75] $\mathcal{M}_{Q} \subseteq\left(\mathcal{M}_{F}\right)^{\mathcal{V}}$. Furthermore Herrmann [84] shows that any modular variety that contains $\mathcal{M}_{Q}$ cannot be both finitely based and generated by its members of finite length. From these results and Freese's Theorem one can obtain the following conclusions.

Corollary 3.26 (i) Both $\left(\mathcal{M}_{F}\right)^{\mathcal{V}}$ and $\left(\mathcal{M}_{F l}\right)^{\mathcal{V}}$ are not finitely based. (ii) $\left(\mathcal{M}_{F}\right)^{\mathcal{V}} \subset\left(\mathcal{M}_{F l}\right)^{\mathcal{V}} \subset \mathcal{M}$ and all three varieties are distinct. (iii) The variety of Arguesian lattices is not generated by its members of finite length.

### Covering Relations between Modular Varieties

The structure of the bottom of $\Lambda_{\mathcal{M}}$. In Section 2.1 we saw that the distributive variety $\mathcal{D}$ is covered by exactly two varieties, $\mathcal{M}_{3}$ and $\mathcal{N}$. The latter is nonmodular, and its covers will be studied in the next chapter. Which varieties cover $\mathcal{M}_{3}$? Grätzer [66] showed that if a finitely generated modular variety $\mathcal{V}$ properly contains $\mathcal{M}_{3}$, then $M_{4} \in \mathcal{V}$ or $M_{3^{2}} \in \mathcal{V}$ or both these lattices are in $\mathcal{V}$ (see Figures 2.1 and 3.6). 
The restriction that $\mathcal{V}$ should be finitely generated was removed by Jónsson [68]. In fact Jónsson showed that for any modular variety $\mathcal{V}$ the condition $M_{3^{2}} \notin \mathcal{V}$ is equivalent to $\mathcal{V}$ being generated by its members of length $\leq 2$. The next few lemmas lead up to the proof of this result. Recall from Section 1.4 that principal congruences in a modular lattice can be described by sequences of transpositions, which are all bijective. Two nontrivial quotients in a modular lattice are said to be projective to each other if they are connected by some (alternating) sequence of (bijective) transpositions. For example the sequence $a_{0} / b_{0} \nearrow a_{1} / b_{1} \searrow \ldots \nearrow a_{n} / b_{n}$ makes $a_{0} / b_{0}$ and $a_{n} / b_{n}$ projective to each other in $n$ steps. This sequence is said to be normal if $b_{k}=b_{k-1} b_{k+1}$ for even $k$ and $a_{k}=a_{k-1}+a_{k+1}$ for odd $k$ $(k=1, \ldots, n-1)$. It is strongly normal if in addition for even $k$ we have $b_{k-1}+b_{k+1} \geq a_{k}$ and for odd $k$, $a_{k-1} a_{k+1} \leq b_{k}$.

Lemma 3.27 (Grätzer [66]). In a modular lattice any alternating sequence of transpositions can be replaced by a normal sequence of the same length.

Proof. Pick any three consecutive quotients from the sequence. By duality we may assume that $a / b \nearrow x / y \searrow c / d$. If this part of the sequence is not normal (i.e. $a+c<x$ ), then we replace $x / y$ by $a+c / y(a+c)$. To see that $a / b \nearrow a+c / y(a+c)$ we only have to observe that $a y(a+c)=a y=b$ and, by modularity, $a+y(a+c)=(a+y)(a+c)=x(a+c)=a+c$. Similarly $a+c / y(a+c) \searrow c / d$. Notice also that the normality of adjacent parts of the sequence is not disturbed by this procedure, for suppose $u / v$ is the quotient that precedes $a / b$ in the sequence, then $v y=b$ implies $v y(a+c)=b(a+c)=b$. Thus we can replace quotients as necessary, until the sequence is normal.

Grätzer also observed that the six elements of a normal sequence $a / b \nearrow x / y \searrow c / d$ are generated by $a, y$ and $c$. Hence, in a modular lattice, they generate a homomorphic image of the lattice in Figure 3.4 (i) (this is the homomorphic image of the free modular lattice $F_{\mathcal{M}}(a, y, c)$ subject to the relations $a+y=a+c=y+c$). If the sequence is also strongly normal, then $y=y+a c$, and so $a, y$ and $c$ generate a homomorphic image of the lattice in Figure 3.4 (ii). Of course the dual lattices are generated by a (strongly) normal sequence $a / b \searrow x / y \nearrow c / d$. Figure 3.4 (ii) also shows that strongly normal sequences cannot occur in a distributive lattice, unless all the quotients are trivial. However Jónsson [68] proved the following:

Lemma 3.28 Suppose $L$ is a modular lattice and $p / q$ and $r / s$ are nontrivial quotients of $L$ that are projective in $n$ steps. If no nontrivial subquotients of $p / q$ and $r / s$ are projective in fewer than $n$ steps, then either $n \leq 2$ or else $p / q$ and $r / s$ are connected by a strongly normal sequence, also in $n$ steps.

Proof. Let $p / q=a_{0} / b_{0} \sim a_{1} / b_{1} \sim \ldots \sim a_{n} / b_{n}=r / s$ (some $n \geq 3$ ) be the sequence that connects the two quotients. By Lemma 3.27 we can assume that it is normal. If it is not strongly normal, then for some $k$ with $0<k<n$ we have $a_{k-1} / b_{k-1} \nearrow a_{k} / b_{k} \searrow a_{k+1} / b_{k+1}$ but $a_{k-1} a_{k+1} \not\leq b_{k}$, or dually. 
Let $c_{k-1}=b_{k-1}+a_{k-1} a_{k+1}$, and for $i \leq n, i \neq k-1$ we define $c_{i}$ to be the element of $a_{i} / b_{i}$ that corresponds to $c_{k-1}$ under the given (bijective) transpositions. With reference to Figure 3.4 (i) it is straightforward to verify that $$ c_{k-1} / b_{k-1} \searrow a_{k-1} a_{k+1} / b_{k-1} b_{k+1} \nearrow c_{k+1} / b_{k+1} . $$ Since $0<k<n$ and $n \geq 3$ we have $k>1$ or $k<n-1$ (or both). In the first case $$ c_{k-2} / b_{k-2} \searrow a_{k-1} a_{k+1} / b_{k-1} b_{k+1} \nearrow c_{k+1} / b_{k+1} $$ and in the second $$ c_{k-1} / b_{k-1} \searrow a_{k-1} a_{k+1} / b_{k-1} b_{k+1} \nearrow c_{k+2} / b_{k+2} $$ Either way it follows that the nontrivial subintervals $c_{0} / b_{0}$ of $p / q$ and $c_{n} / b_{n}$ of $r / s$ are projective in $n-1$ steps. This however contradicts the assumption of the lemma.

Lemma 3.29 (Jónsson [68]). Let $L$ be a modular lattice such that $M_{3^{2}}$ is not a homomorphic image of a sublattice of $L$. If $(v<x, y, z<u)$ and $\left(v^{\prime}<x^{\prime}, y^{\prime}, z^{\prime}<u^{\prime}\right)$ are diamonds in $L$ such that $y^{\prime}=y u^{\prime}$ and $z=z^{\prime}+v$, then $u / v \searrow u^{\prime} / v^{\prime}$ (refer to Figure 3.5 (i)).

Proof. Observe firstly that the conditions imply $u / y \searrow z^{\prime} / v^{\prime}$, since $y+z^{\prime}=(y+v)+z^{\prime}=y+z=u$ and $y z^{\prime}=y\left(u^{\prime} z^{\prime}\right)=y^{\prime} z^{\prime}=v^{\prime}$. Let $w=v+u^{\prime}$. Then $w \geq v+z^{\prime}=z$ and $u \geq y^{\prime}, z^{\prime}$ imply $u \geq u^{\prime}$, hence $w \in u / z$. We show that $w=u$ and dually $v u^{\prime}=v^{\prime}$, which gives the desired conclusion. Note that we cannot have $w=z$ since then the two diamonds would generate a sublattice that has $M_{3^{2}}$ as homomorphic image. So suppose $z<w<u$. Because all the edges of a diamond are projective to one another, the six elements $w, v, x, y, z, u$ generate the lattice in Figure 3.5 (ii). Under the transposition $u / y \searrow z^{\prime} / v^{\prime}$ the element $x w+y$ is sent to $w^{\prime}=z^{\prime}(x w+y)$, and together with $v^{\prime}, x^{\prime}, y^{\prime}, z^{\prime}, u^{\prime}$ these elements generate the lattice in Figure 3.5 (iii). It is easy to check that $(x w+y w<x+y w, x w+y, w<u)$ and $\left(\left(x^{\prime}+w^{\prime}\right)\left(y^{\prime}+w^{\prime}\right)<y^{\prime}+w^{\prime}, z^{\prime}+x^{\prime}\left(y^{\prime}+w^{\prime}\right), x^{\prime}+w^{\prime}<u^{\prime}\right)$ are diamonds (they appear in Figure 3.5 (ii) and (iii)). We claim that $w / x w+y w \searrow u^{\prime} / y^{\prime}+w^{\prime}$. Indeed, since $u^{\prime} \leq w \leq\left(x+u^{\prime}\right)\left(y+u^{\prime}\right)$ we have $$ \begin{aligned} x w+y w+u^{\prime} & =u^{\prime}+x w+u^{\prime}+y w=\left(u^{\prime}+x\right) w+\left(u^{\prime}+y\right) w=w \\ y^{\prime}+w^{\prime} & =y^{\prime}+z^{\prime}(x w+y)=\left(y^{\prime}+z^{\prime}\right)(x w+y) \\ & =u^{\prime}(x w+y)=u^{\prime}(x w+y) w=u^{\prime}(x w+y w) . \end{aligned} $$ But this means that the two diamonds form a sublattice of $L$ that is isomorphic to the lattice in Figure 3.5 (iv), and therefore has $M_{3^{2}}$ as a homomorphic image. This contradiction shows that we must have $u=w$.

Lemma 3.30 (Jónsson [68]). If $L$ is a modular lattice such that $M_{3^{2}}$ is not a homomorphic image of a sublattice of $L$, then any two quotients in $L$ that are projective to each other have nontrivial subquotients that are projective to each other in three steps or less. 
Proof. It is enough to prove the lemma for two quotients $a / b$ and $c / d$ that are projective in four steps, since longer sequences can be handled by repeated application of this case. We assume that no nontrivial subquotients of $a / b$ and $c / d$ are projective in less than four steps and derive a contradiction. By Lemma 3.28 there exists a strongly normal sequence of transpositions $$ a / b=a_{0} / b_{0} \nearrow a_{1} / b_{1} \searrow a_{2} / b_{2} \nearrow a_{3} / b_{3} \searrow a_{4} / b_{4}=c / d $$ or dually. Associated with this sequence are three diamonds $\left(b_{0}+b_{2}<a_{0}+b_{2}, b_{1}, b_{0}+a_{2}<a_{1}\right)$, $\left(b_{2}<b_{1} a_{3}, a_{2}, a_{1} b_{3}<a_{1} a_{3}\right)$ and $\left(b_{2}+b_{4}<a_{2}+b_{4}, b_{3}, b_{2}+a_{4}<a_{3}\right)$. The first and the second, and the second and the third diamond satisfy the conditions of Lemma 3.29, since $$ \begin{aligned} & b_{1} a_{3}=b_{1}\left(a_{1} a_{3}\right), \quad b_{0}+a_{2}=a_{2}+\left(b_{0}+b_{2}\right) \\ & a_{1} b_{3}=b_{3}\left(a_{1} a_{3}\right), \quad a_{2}+b_{4}=a_{2}+\left(b_{2}+b_{4}\right) \end{aligned} $$ whence we conclude that $$ \text { (*) } \quad a_{1} / b_{0}+b_{2} \searrow a_{1} a_{3} / b_{2} \nearrow a_{3} / b_{2}+b_{4} . $$ This enables us to show that $a / b$ and $c / d$ are projective in 2 steps. In fact $$ (* *) \quad a_{0} / b_{0} \nearrow a_{1}+a_{3} / a_{2}+b_{0}+b_{4} \searrow a_{4} / b_{4} $$ as is shown by the following calculations $$ \begin{array}{rlr} a_{0}+\left(a_{2}+b_{0}+b_{4}\right) & =a_{1}+b_{4}=a_{1}+a_{1} a_{3}+b_{2}+b_{4} \quad\left(a_{1} \geq a_{1} a_{3}+b_{2}\right) \\ & =a_{1}+a_{3} \quad\left(a_{1} a_{3}+\left(b_{2}+b_{4}\right)=a_{3} \text { by }(*)\right) \\ a_{0}\left(a_{2}+b_{0}+b_{4}\right) & =b_{0}+a_{0}\left(a_{2}+b_{4}\right) \quad \text { (by modularity) } \\ & =b_{0}+a_{0} a_{1}\left(a_{2}+b_{4}\right) \quad\left(a_{1} \geq a_{0}\right) \\ & =b_{0}+a_{0}\left(a_{2}+a_{1} b_{4}\right) \quad \text { (by modularity) } \\ & \leq b_{0}+a_{0}\left(a_{2}+b_{2}\right)=b_{0}+a_{1} a_{2}=b_{0}, \end{array} $$ where the inequality holds since $a_{1} b_{4}=a_{1} a_{3} b_{4} \leq a_{1} a_{3}\left(b_{2}+b_{4}\right)=b_{2}$ by $(*)$. The second part of $(* *)$ follows by symmetry. Since $a / b$ and $c / d$ were assumed to be projective in not less than 4 steps, this contradiction completes the proof.

Lemma 3.31 Suppose $L$ is a modular lattice with $b<a \leq d<c$ in $L$. If $a / b$ and $c / d$ are projective in three steps, then $a / b$ transposes up onto a lower edge of a diamond and $c / d$ transposes down onto an upper edge of a diamond.

Proof. Since $a \leq d$, no nontrivial subintervals of $a / b$ and $c / d$ are projective to each other in less than three steps. Hence by Lemma 3.28, $a / b$ and $c / d$ are connected by a strongly normal sequence of length 3, say $$ a / b=a_{0} / b_{0} \nearrow a_{1} / b_{1} \searrow a_{2} / b_{2} \nearrow a_{3} / b_{3}=c / d $$ (the dual case cannot apply). Then $a / b$ transposes up onto $a_{0}+b_{2} / b_{0}+b_{2}$ of the diamond $\left(b_{0}+b_{2}<a_{0}+b_{2}, b_{1}, b_{0}+a_{2}<a_{1}\right)$ (see Figure 3.4 (ii)) and $c / d$ transposes down onto $a_{1} a_{3} / b_{1} b_{3}$ of $\left(b_{2}<b_{1} a_{3}, a_{2}, a_{1} b_{3}<a_{1} a_{3}\right)$ as required.

Theorem 3.32 (Jónsson [68]). 
For any variety $\mathcal{V}$ of modular lattices the following conditions are equivalent: (i) $M_{3^{2}} \notin \mathcal{V}$; (ii) every subdirectly irreducible member of $\mathcal{V}$ has dimension two or less; (iii) the inclusion $a(b+c d)(c+d) \leq b+a c+a d$ holds in $\mathcal{V}$.

Proof. Suppose $M_{3^{2}} \notin \mathcal{V}$ but some subdirectly irreducible lattice $L$ in $\mathcal{V}$ has dimension greater than two. Then $L$ contains a four element chain $a>b>c>d$. Since $L$ is subdirectly irreducible, $\operatorname{con}(a, b)$ and $\operatorname{con}(b, c)$ cannot have trivial intersection, and therefore some nontrivial subquotients $a^{\prime} / b^{\prime}$ of $a / b$ and $p / q$ of $b / c$ are projective to each other. By Lemma 3.30 we can assume that they are projective in three steps. Similarly some nontrivial subquotients $p^{\prime} / q^{\prime}$ of $p / q$ and $c^{\prime} / d^{\prime}$ of $c / d$ are projective to each other, again in three steps. Since all transpositions are bijective, $p^{\prime} / q^{\prime}$ is also projective to a subquotient of $a^{\prime} / b^{\prime}$ in three steps. From Lemma 3.31 we infer that $p^{\prime} / q^{\prime}$ transposes up onto a lower edge of a diamond and down onto an upper edge of a diamond. It follows that the two diamonds generate a sublattice of $L$ which has $M_{3^{2}}$ as homomorphic image. This contradicts (i), thus (i) implies (ii). Every variety is generated by its subdirectly irreducible members, so to prove that (ii) implies (iii), we only have to observe that the inclusion $a(b+c d)(c+d) \leq b+a c+a d$ holds in every lattice of dimension 2. Indeed, in such a lattice we always have $c \leq d$ or $d \leq c$ or $c d=0$. In the first case $a(b+c d)(c+d)=a(b+c) d \leq a d \leq b+a c+a d$, in the second $a(b+c d)(c+d)=a(b+d) c \leq a c \leq b+a c+a d$ and in the third $a(b+c d)(c+d)=a b(c+d) \leq b \leq b+a c+a d$. Finally, Figure 3.6 shows that the inclusion fails in $M_{3^{2}}$, and therefore (iii) implies (i).

For any cardinal $\alpha \geq 3$ there exists up to isomorphism exactly one lattice $M_{\alpha}$ with dimension 2 and $\alpha$ atoms (see Figure 2.2). For $\alpha=n \in \omega$ each $M_{n}$ generates a variety $\mathcal{M}_{n}$, while for $\alpha \geq \omega$ all the lattices $M_{\alpha}$ generate the same variety $\mathcal{M}_{\omega}$ since they all have the same finitely generated sublattices. Clearly $\mathcal{M}_{n} \subseteq \mathcal{M}_{\omega}$, and by Jónsson's Lemma $\mathcal{M}_{n+1}$ covers $\mathcal{M}_{n}$ for $3 \leq n \in \omega$. The above theorem implies that $M_{3^{2}} \notin \mathcal{M}_{n}$ for all $n \geq 3$, and conversely, if $\mathcal{V}$ is a variety of modular lattices that satisfies $M_{3^{2}} \notin \mathcal{V}$, then $\mathcal{V}$ is either $\mathcal{T}, \mathcal{D}, \mathcal{M}_{\omega}$ or $\mathcal{M}_{n}$ for some $n \in \omega, n \geq 3$. Thus we obtain:

Corollary 3.33 (Jónsson [68]). In the lattice $\Lambda$, the variety $\mathcal{M}_{n}$ $(3 \leq n \in \omega)$ is covered by exactly three varieties: $\mathcal{M}_{n+1}, \mathcal{M}_{n}+\mathcal{M}_{3^{2}}$ and $\mathcal{M}_{n}+\mathcal{N}$.

Proof. $\mathcal{M}_{n} \cap \mathcal{M}_{3^{2}}=\mathcal{M}_{3}$ is covered by $\mathcal{M}_{3^{2}}$, and $\mathcal{M}_{n} \cap \mathcal{N}=\mathcal{D}$ is covered by $\mathcal{N}$. By the distributivity of $\Lambda$, $\mathcal{M}_{n}+\mathcal{M}_{3^{2}}$ and $\mathcal{M}_{n}+\mathcal{N}$ cover $\mathcal{M}_{n}$. Suppose a variety $\mathcal{V}$ properly includes $\mathcal{M}_{n}$. 
If $\mathcal{V}$ contains a nonmodular lattice, then $N \in \mathcal{V}$, hence $\mathcal{M}_{n}+\mathcal{N} \subseteq \mathcal{V}$. If $\mathcal{V}$ contains only modular lattices, then either $M_{3^{2}} \in \mathcal{V}$ or $M_{3^{2}} \notin \mathcal{V}$. In the first case we have $\mathcal{M}_{n}+\mathcal{M}_{3^{2}} \subseteq \mathcal{V}$, while in the latter case Theorem 3.32 implies that $\mathcal{V}=\mathcal{M}_{k}$ for some $n<k \in \omega$. Hence $\mathcal{M}_{n+1} \subseteq \mathcal{V}$, and the proof is complete.

The proof in fact shows that, for $n \geq 3$, $C\left(\mathcal{M}_{n}\right)=\left\{\mathcal{M}_{n+1}, \mathcal{M}_{n}+\mathcal{M}_{3^{2}}, \mathcal{M}_{n}+\mathcal{N}\right\}$ strongly covers $\mathcal{M}_{n}$ (see Section 2.1). But this is to be expected in view of Theorem 2.2 and the result that every finitely generated lattice variety is finitely based (Section 5.1).

Figure 3.7 shows the lattices $A_{1}, A_{2}, A_{3}, M_{3}, M_{3^{3}}$ and $M_{3^{n}}$.

Observe that $\mathcal{M}_{3}$ has two join irreducible covers ($\mathcal{M}_{4}$ and $\mathcal{M}_{3^{2}}$) whereas $\mathcal{M}_{n}$ $(4 \leq n \in \omega)$ has only one.

Further results on modular varieties. Consider the lattices in Figure 3.7. The main result of Hong [72] is the following:

Theorem 3.34 Let $L$ be a subdirectly irreducible modular lattice and suppose $$ A_{1}, A_{2}, A_{3}, M_{3^{n}} \notin \mathbf{H S}\{L\} . $$ Then the dimension of $L$ is less than or equal to $n$.

The proof of this theorem is based on a detailed analysis of how the diamonds that are associated with a normal sequence of quotients fit together. We list some consequences of this result. Let $\mathcal{M}_{3^{n}}, \mathcal{A}_{1}, \mathcal{A}_{2}, \mathcal{A}_{3}$ and $\mathcal{P}_{2}$ be the varieties generated by the lattices $M_{3^{n}}, A_{1}, A_{2}, A_{3}$ and $P_{2}$ respectively.

Corollary 3.35 (Hong [70]). For $2 \leq n \in \omega$ the variety $\mathcal{M}_{3^{n}}$ is covered by the varieties $\mathcal{M}_{3^{n+1}}, \quad \mathcal{M}_{3^{n}}+\mathcal{M}_{4}, \quad \mathcal{M}_{3^{n}}+\mathcal{A}_{1}, \quad \mathcal{M}_{3^{n}}+\mathcal{A}_{2}, \quad \mathcal{M}_{3^{n}}+\mathcal{A}_{3}, \quad \mathcal{M}_{3^{n}}+\mathcal{P}_{2}, \quad \mathcal{M}_{3^{n}}+\mathcal{N}$.

Let $\mathcal{M}_{n}^{m}$ be the variety generated by all modular lattices whose length does not exceed $m$ and whose width does not exceed $n$ $(1 \leq m, n \leq \infty)$. Note that Theorem 3.32 implies $\mathcal{M}_{\infty}^{2}=\mathcal{M}_{\omega}$, and since every lattice of length at most 3 can be embedded in the subspace lattice of a projective plane ([GLT] p.214), $\mathcal{M}_{\infty}^{3}$ is the variety generated by all such subspace lattices.

Corollary 3.36 (Hong [72]). The variety $\mathcal{M}_{\infty}^{3}$ is strongly covered by the collection $$ \left\{\mathcal{M}_{\infty}^{3}+\mathcal{M}_{3^{2}}, \mathcal{M}_{\infty}^{3}+\mathcal{A}_{1}, \mathcal{M}_{\infty}^{3}+\mathcal{A}_{2}, \mathcal{M}_{\infty}^{3}+\mathcal{A}_{3}, \mathcal{M}_{\infty}^{3}+\mathcal{N}\right\} $$ From Theorem 2.2 one may now deduce that $\mathcal{M}_{\infty}^{3}$ is finitely based. Considering the varieties $\mathcal{M}_{n}^{\infty}$, we first of all note that since $M_{3}$ has width $3$, $\mathcal{M}_{1}^{\infty}$ and $\mathcal{M}_{2}^{\infty}$ are both equal to the distributive variety $\mathcal{D}$. The two modular varieties which cover $\mathcal{M}_{3}$ are generated by modular lattices of width 4, hence $\mathcal{M}_{3}^{\infty}=\mathcal{M}_{3}$. The variety $\mathcal{M}_{4}^{\infty}$ is investigated in Freese [77]. 
It is not finitely generated since it contains simple lattices of arbitrary length (Figure 3.8 (i)). Freese obtains the following result:

Theorem 3.37 The variety $\mathcal{M}_{4}^{\infty}$ is strongly covered by the following collection of ten varieties: $$ \left\{\mathcal{M}_{4}^{\infty}+\{L\}^{\mathcal{V}}: L=A_{2}, A_{3}, \ldots, A_{8}, M_{5}, P_{2}, N\right\} $$ He also gives a complete list of the subdirectly irreducible members of this variety, and shows that it has uncountably many subvarieties. Further remarks about the varieties $\mathcal{M}_{n}^{m}$ appear at the end of Chapter 5.

Figure 3.8 shows the lattices $A_{4}, A_{5}=M_{3^{3}}, A_{6}, A_{7}, A_{8}, M_{5}$ and $P_{2}=\mathcal{L}\left(Z_{2}^{3}, Z_{2}\right)$.

## Chapter 4

## Nonmodular Varieties

### Introduction

The first significant results specifically about nonmodular varieties appear in a paper by McKenzie [72], although earlier studies by Jónsson concerning sublattices of free lattices contributed to some of the results in this paper (see also Kostinsky [72], Jónsson and Nation [75]). Splitting lattices are characterized as subdirectly irreducible bounded homomorphic images of finitely generated free lattices, and an effective procedure is given for deciding whether a lattice is splitting and for finding its conjugate equation (see Section 2.3). Also included in McKenzie's paper are several problems which stimulated a lot of research in this direction. One of these problems was solved when Day [77] showed that the class of all splitting lattices generates the variety of all lattices (Section 2.3). McKenzie [72] also lists fifteen subdirectly irreducible lattices $L_{1}, L_{2}, \ldots, L_{15}$ (see Figure 2.2), each of which generates a join irreducible variety that covers the smallest nonmodular variety $\mathcal{N}$. Davey, Poguntke and Rival [75] proved that a variety generated by a lattice which satisfies the double chain condition is semidistributive if and only if it does not contain one of the lattices $M_{3}, L_{1}, \ldots, L_{5}$. Jónsson proved the same result without the double chain condition restriction, and in Jónsson and Rival [79] this is used to show that McKenzie's list of join irreducible covers of $\mathcal{N}$ is complete. Further results in this direction by Rose [84] prove that there are eight chains of semidistributive varieties, each generated by a finite subdirectly irreducible lattice $L_{6}^{n}, L_{7}^{n}$, $L_{8}^{n}, L_{9}^{n}, L_{10}^{n}, L_{13}^{n}, L_{14}^{n}, L_{15}^{n}$ ($n \geq 0$, see Figure 2.2), such that $L_{i}^{0}=L_{i}$, and $\left\{L_{i}^{n+1}\right\}^{\mathcal{V}}$ is the only join irreducible cover of $\left\{L_{i}^{n}\right\}^{\mathcal{V}}$ for $i=6,7,8,9,10,13,14,15$. Extending some results of Rose, Lee [85] gives a fairly complete description of all the varieties which do not contain any of $M_{3}, L_{2}, L_{3}, \ldots, L_{12}$. In particular, these varieties turn out to be locally finite. Ruckelshausen [78] obtained some partial results about the covers of $\mathcal{M}_{3}+\mathcal{N}$, and Nation [85], [86] has developed another approach to finding the covers of finitely generated varieties, which he uses to show that $\left\{L_{1}\right\}^{\mathcal{V}}$ has ten join irreducible covers, and that above $\left\{L_{12}\right\}^{\mathcal{V}}$ there are exactly two join irreducible covering chains of varieties. These results are mentioned again in more detail at the end of Section 4.4. 
The notions of splitting lattices and bounded homomorphic images have been discussed in Section 2.3, so this chapter covers the results of Jónsson and Rival [79], Rose [84] and Lee [85].

Figure 4.1 gives the defining relations: $M_{3}(x, y, z)$: $x+y=y+z=x+z$, $x y=y z=x z$; $L_{1}(x, y, z)$ ($L_{2}$ is dual): $y(x+z)=x z$, $(x+y) z=x y$, $(x+y)(x+z)=x$, $x \leq y+z$; $L_{3}(x, y, z)$: $x+z=y+z$, $x y=x z$, $x+y z=x+y$, $(x+y) z=y z$.

### Semidistributivity

Recall from Section 2.3 that a lattice $L$ is semidistributive if for any $u, x, y, z \in L$ $$ \begin{array}{lll} \left(\mathrm{SD}^{+}\right) & u=x+y=x+z \text { implies } u=x+y z & \text { and dually } \\ \left(\mathrm{SD}^{*}\right) & u=x y=x z \text { implies } u=x(y+z) . \end{array} $$ A glance at Figure 4.1 shows that the lattices $M_{3}, L_{1}, L_{2}, L_{3}, L_{4}$ and $L_{5}$ fail to be semidistributive (for example, in $M_{3}$ the atoms satisfy $x+y=x+z=1$ but $x+y z=x<1$), and hence none of them can be a sublattice of any semidistributive lattice. The next lemma implies that for finite lattices the converse is also true. Given a lattice $L$ and three noncomparable elements $x, y, z \in L$ we will write $L_{i}(x, y, z)$ to indicate that these elements generate a sublattice of $L$ isomorphic to $L_{i}, i=1,2,3,4,5$ (Figure 4.1). Algebraically this is verified by checking that the corresponding defining relations (below Figure 4.1) hold. We denote by $\mathcal{I} L$ and $\mathcal{F} L$ the ideal and filter lattice of $L$ respectively ($\mathcal{F} L$ is ordered by reverse inclusion). $L$ is embedded in $\mathcal{I} L$ via the map $x \mapsto(x]$ and in $\mathcal{F} L$ via $x \mapsto[x)$. We identify $L$ with its image in $\mathcal{I} L$ and $\mathcal{F} L$. Of course $\mathcal{I} L$ ($\mathcal{F} L$) is (dually) algebraic with the (dually) compact elements being the principal ideals (filters) of $L$. Hence both lattices are weakly atomic (i.e. in any quotient $u / v$ we can find $r, s \in u / v$ such that $r \succ s$). In particular, given $a, b \in L$, there exists $c \in \mathcal{I} L$ satisfying $a \leq c \prec a+b$. Note also that $\mathcal{I} L$ is upper continuous, i.e. for any $x \in \mathcal{I} L$ and any chain $C \subseteq \mathcal{I} L$, we have $x \sum C=\sum_{y \in C} x y$ (see [ATL] p. 15).

Lemma 4.1 (Jónsson and Rival [79]). If a lattice $L$ is not semidistributive, then either $\mathcal{I F I} L$ or $\mathcal{F I F} L$ contains a sublattice isomorphic to one of the lattices $M_{3}, L_{1}, L_{2}, L_{3}, L_{4}$ or $L_{5}$.

Proof. Suppose that $L$ is not semidistributive. By duality we may assume that there exist $u, x, y, z \in L$ such that $$ (*) \quad u=x+y=x+z \quad \text { but } \quad x+y z<u . $$ As a first observation we have that $x, y$ and $z$ must be noncomparable. By the weak atomicity of $\mathcal{I} L$, we can find $x^{\prime} \in \mathcal{I} L$ such that $x+y z \leq x^{\prime} \prec u$, whence it follows that $$ u=x^{\prime}+y=x^{\prime}+z, \quad y z \leq x^{\prime} \prec u . $$ In $\mathcal{F I} L$ we can then find minimal elements $y^{\prime}, z^{\prime}$ subject to the conditions $u=x^{\prime}+y^{\prime}=x^{\prime}+z^{\prime}, y^{\prime} \leq y$ and $z^{\prime} \leq z$. Now $x^{\prime} y^{\prime}<y^{\prime}$ since equality would imply $x^{\prime}=u$. Furthermore, if $x^{\prime} y^{\prime}<w \leq y^{\prime}$, then $x^{\prime}<x^{\prime}+w$ (equality would imply $w \leq x^{\prime} y^{\prime}$) and $x^{\prime}+w \leq x^{\prime}+y^{\prime}=u$. 
Hence $x^{\prime}+w=u$ and by the minimality of $y^{\prime}, w=y^{\prime}$. It follows that $y^{\prime}$ covers $x^{\prime} y^{\prime}$, and similarly $z^{\prime}$ covers $x^{\prime} z^{\prime}$. So, dropping the primes, we have found $u, x, y, z \in \mathcal{F I} L$ satisfying $(*)$ and $$ (**) \quad y z \leq x \prec u, \quad x y \prec y, \quad x z \prec z \quad \text {(see Figure 4.2 (i)).} $$ Since $x y \prec y$, we have either $y(x y+z)=x y$ or $y \leq x y+z$, and similarly $z(x z+y)=x z$ or $z \leq x z+y$. We will show that in each of the four cases that arise, the lattice $\mathcal{F I} L$ or $\mathcal{I F I} L$ must contain $M_{3}$ or one of the $L_{i}$ $(i=1, \ldots, 5)$ as a sublattice.

Case 1: $y \leq x y+z$ and $z \leq x z+y$. Since $x, y$ and $z$ are noncomparable, so are $x y$ and $x z$ ($x y \leq x z$ would imply $y \leq x y+z \leq x z+z=z$). Let $w=x y+x z$, then $y z \leq w \leq x$ and $w \not\leq y, z$ since $x y \prec y$ and $x z \prec z$. The following calculations show that we in fact have $L_{2}(w, y, z)$: $w y+w z=x y+x z=w$; $w \geq x y z=y z$; $y+x z \leq y+z$ and equality follows from the assumption that $y+x z \geq z$; similarly $z+x y=y+z$ (see Figure 4.1 (ii), 4.2 (i) and (ii)).

Case 2: $y \leq x y+z$ and $z(x z+y)=x z$. Let $s=x(y+z)$ and $t=x z+y$. The most general relationship between $x, s, t$ and $z$ is pictured in Figure 4.2 (iii). We will show that either $L_{5}(s, t, z)$ or $L_{3}(z, s+t, x)$ (see Figure 4.1). Clearly $s z=x(y+z) z=x z=t z$ by assumption. Furthermore $t$ and $z$ are noncomparable ($z \not\leq t$ since $t z=x z<z$; $t \not\leq z$ since $y \leq t$ and $y \not\leq z$), as are $s$ and $z$ ($z \not\leq s \leq x$; $s \not\leq z$, else $s+z=z=y+z$), and $t \not\leq s$ since $y \leq t$, $s \leq x$ but $y \not\leq x$. Suppose now that $s+t=y+z$. Then $s \not\leq t$ (else $y+z=s+t=t \geq z$, contradicting $t \not\geq z$) and therefore $s, t, z$ are noncomparable. Also $s t=x t$ since $t \leq y+z$, thus $s t+z=x t+z \geq x y+z \geq y+z$ (by assumption), whence $s t+z=y+z$. This shows $L_{5}(s, t, z)$. On the other hand $s+t<y+z$ implies that $z, s+t, x$ are noncomparable, and so $L_{3}(z, s+t, x)$ follows from the calculations: $$ \begin{array}{rlrl} u & =z+x=s+t+x & & (\text {since } x \prec u) \\ z(s+t) & =z x & & (\text {since } x z \prec z) \\ z+x(s+t) & =z+s=z+(s+t) & & \\ (z+s+t) x & =(y+z) x=s=(s+t) x & \end{array} $$

Case 3: $y(x y+z)=x y$ and $z \leq x z+y$. This case is symmetric to the preceding case.

Case 4: $y(x y+z)=x y$ and $z(x z+y)=x z$. We claim that for $n=0,1, \ldots$ one can find increasing chains of elements $y_{n} \in y+z / y$ and $z_{n} \in y+z / z$ such that $(*)$ and $(* *)$ hold with $y$ and $z$ replaced by $y_{n}$ and $z_{n}$. Indeed, let $y_{0}=y, z_{0}=z$ and $$ y_{n+1}=y_{n}+x z_{n}, \quad z_{n+1}=z_{n}+x y_{n} . $$ Then $y_{0} \leq y+z$ and $z_{0} \leq y+z$ and if we suppose that $y_{n}, z_{n} \leq y+z$ then clearly $y_{n+1}=y_{n}+x z_{n} \leq y+z$ and $z_{n+1} \leq y+z$. Now suppose that (*) and (**) hold with $y$ and $z$ replaced by $y_{n}$ and $z_{n}$ for some $n \geq 0$. We show that the same is true for $y_{n+1}$ and $z_{n+1}$. Firstly $x+y_{n+1}=x+y_{n}+x z_{n}=x+y_{n}=u$ by hypothesis, and similarly $x+z_{n+1}=u$. Further we may assume that $y_{n} z_{n+1}=x y_{n}$ and $z_{n} y_{n+1}=x z_{n}$, for otherwise one of the three previous cases would apply. Now $z \leq z_{n+1}$ and $z \not\leq x$, so $z_{n+1} \not\leq x$ and hence $x z_{n+1}<z_{n+1}$. 
If $x z_{n+1}<t<z_{n+1}$ then put $s=z_{n}+x t$ (Figure 4.3 (i)). We show that either $L_{3}\left(z_{n}, t, x\right)$ or $L_{4}(s, t, x)$ or $L_{3}\left(z_{n}, s t, x\right)$, from which it follows that we may assume $x z_{n+1} \prec z_{n+1}$. $u=z_{n}+x=t+x$ since $x \prec u$, and $x z_{n}=t z_{n}$ since $x z_{n} \prec z_{n}$. Also $x t \leq x\left(z_{n}+t\right) \leq x z_{n+1} \leq x t$ shows that $x t=x\left(z_{n}+t\right)=x z_{n+1}$ and $x, t, z_{n}$ are noncomparable. Now either $z_{n}+t=s$, which implies $L_{3}\left(z_{n}, t, x\right)$, or $z_{n}+t>s$ in which case we have $L_{4}(s, t, x)$ (if $s t=x t$) or $L_{3}\left(z_{n}, s t, x\right)$ (if $s t>x t$). Similarly we may assume that $x y_{n+1} \prec y_{n+1}$. Finally we can assume that $y_{n+1} z_{n+1} \leq x$, otherwise we obtain $L_{5}\left(y_{n+1} z_{n+1}, x y_{n+1}, y_{n}\right)$ (Figure 4.3 (ii)). In $\mathcal{I F I} L$ we now form the join $y_{\infty}$ of all the $y_{n}$ and the join $z_{\infty}$ of all the $z_{n}$. Clearly $$ u=x+y_{\infty}=x+z_{\infty}, \quad y_{\infty}+z_{\infty}=y+z $$ Furthermore, $x \not\leq y_{\infty}$ since $x$ is compact and $x \not\leq y_{n}$ for all $n$. Therefore $x y_{\infty}<y_{\infty}$ and if $x y_{\infty} \leq t<y_{\infty}$ then there exists $m \in \omega$ such that for all $n \geq m$, $t \nsupseteq y_{n}$, hence $t y_{n}=x y_{n}$ (Figure 4.3 (iii)). We compute $$ t=t y_{\infty}=\sum_{n \geq m} t y_{n}=\sum_{n \geq m} x y_{n}=x y_{\infty} $$ where the second and last equality make use of the upper continuity of $\mathcal{I F} \mathcal{I} L$. Thus $x y_{\infty} \prec y_{\infty}$ and similarly $x z_{\infty} \prec z_{\infty}$. Also, for each $m$, $y_{m} z_{\infty}=\sum_{n \in \omega} y_{m} z_{n} \leq x$, hence $y_{\infty} z_{\infty} \leq x$. Lastly, $x y_{n} \leq x z_{n+1}$ implies $$ x y_{\infty}=\sum x y_{n} \leq \sum x z_{n+1}=x z_{\infty} $$ and similarly $x z_{\infty} \leq x y_{\infty}$. Consequently $x y_{\infty}=x z_{\infty}=y_{\infty} z_{\infty}$. Dropping the subscripts we now have $u, x, y, z \in \mathcal{I F I} L$ satisfying $(*),(* *)$ and $x y=x z=y z$. Let $t=x(y+z)$. If $t=y z$ then $L_{4}(y, z, x)$ holds, and if $t>y z$ then we consider the four cases depending on whether or not the equations $y+z=y+t$ and $y+z=t+z$ hold. If both hold, then we get $M_{3}(t, y, z)$, if both fail then we let $s=(y+t)(t+z)$ to obtain $L_{1}(s, y, z)$ (here we use $x y \prec y$, $x z \prec z$, see Figure 4.3 (iv)), and if just one equation holds, say $y+t<z+t=y+z$, then $L_{4}(y, t, z)$ follows. This completes the proof.

Semidistributive varieties. If $L$ is a finite lattice, then $\mathcal{I F I} L \cong \mathcal{F I F} L \cong L$, so $L$ is semidistributive if and only if $L$ excludes $M_{3}, L_{1}, L_{2}, L_{3}, L_{4}$ and $L_{5}$. We say that a variety $\mathcal{V}$ of lattices is semidistributive if every member of $\mathcal{V}$ is semidistributive. The next theorem characterizes all the semidistributive varieties.

Theorem 4.2 (Jónsson and Rival [79]). For a given lattice variety $\mathcal{V}$, the following statements are equivalent: (i) $\mathcal{V}$ is semidistributive. (ii) $M_{3}, L_{1}, L_{2}, L_{3}, L_{4}, L_{5} \notin \mathcal{V}$. (iii) Both the filter and ideal lattice of $F_{\mathcal{V}}(3)$ are semidistributive. (iv) Let $y_{0}=y, z_{0}=z$ and, for $n \in \omega$ let $y_{n+1}=y+x z_{n}$ and $z_{n+1}=z+x y_{n}$. Then for some natural number $m$ the identity $$ \left(\mathrm{SD}_{m}^{*}\right) \quad x(y+z)=x y_{m} $$ and its dual $\left(\mathrm{SD}_{m}^{+}\right)$ hold in $\mathcal{V}$. 
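(As an illustration of condition (iv) — an aside of ours, not needed for the proof below — in the distributive variety $\mathcal{D}$ one may already take $m=1$, since distributivity gives
$$
x y_{1}=x(y+x z)=x y+x z=x(y+z),
$$
so $\left(\mathrm{SD}_{1}^{*}\right)$ holds in $\mathcal{D}$, and dually for $\left(\mathrm{SD}_{1}^{+}\right)$.)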
Proof. Since each of the lattices in (ii) fails to be semidistributive, (i) implies (ii), and (ii) implies (i) follows from Lemma 4.1 and the fact that $L \in \mathcal{V}$ implies $\mathcal{I F I} L, \mathcal{F I F} L \in \mathcal{V}$. Also (i) implies (iii) since $\mathcal{I} F_{\mathcal{V}}(3), \mathcal{F} F_{\mathcal{V}}(3) \in \mathcal{V}$. (iii) implies (iv): By duality it suffices to show that, for some $m$, $x(y+z)=x y_{m}$ in the free lattice $F_{\mathcal{V}}(3)$ of $\mathcal{V}$ generated by $x, y, z$. By induction one easily sees that $y_{n} \leq y_{n+1}$ and $z_{n} \leq z_{n+1}$. In $\mathcal{I} F_{\mathcal{V}}(3)$ we define $y_{\infty}$ as the join of all the $y_{n}$ and $z_{\infty}$ as the join of all the $z_{n}$. Now $x y_{n} \leq x z_{n+1}$ and $x z_{n} \leq x y_{n+1}$, hence by the upper continuity of $\mathcal{I} F_{\mathcal{V}}(3)$, $x y_{\infty}=x z_{\infty}=v$, say. Also $y_{n}+z_{n}=y+z$ for each $n$ implies $y_{\infty}+z_{\infty}=y+z$. By semidistributivity we therefore have $v=x\left(y_{\infty}+z_{\infty}\right)=x(y+z)$. Hence $v=x(y+z)$ is a compact element of $\mathcal{I} F_{\mathcal{V}}(3)$, and since $v=\sum_{n \in \omega} x y_{n}$, we get $x(y+z)=x y_{m}$ for some $m \in \omega$. (iv) implies (i): If $L \in \mathcal{V}$ is not semidistributive, then there are elements $x, y, z$ in $L$ such that $x y=x z<x(y+z)$ or dually. Then, for all $n$, $y_{n}=y$ and $z_{n}=z$, whence $x y_{n}<x(y+z)$. Consequently the identity fails for each $m$.

The fourth statement shows that semidistributivity cannot be characterized by a set of identities, and so the class of all semidistributive lattices does not form a variety.

Semidistributivity and weak transpositions. For the notions of weak projectivity we refer the reader to Section 1.4. The next result concerns the possibility of shortening a sequence of weak transpositions. Suppose in some lattice $L$ a quotient $x_{0} / y_{0}$ projects weakly onto another quotient $x_{n} / y_{n}$ in $n>2$ steps, say $$ x_{0} / y_{0} \nearrow_{w} x_{1} / y_{1} \searrow_{w} x_{2} / y_{2} \nearrow_{w} \ldots x_{n} / y_{n} . $$ If there exists a quotient $u / v$ such that $$ x_{0} / y_{0} \searrow_{w} u / v \nearrow_{w} x_{2} / y_{2} $$ then we can shorten the sequence of weak transpositions by replacing the quotients $x_{1} / y_{1}$ and $x_{2} / y_{2}$ by the single quotient $u / v$. In a distributive lattice this can always be done, since we may take $u / v=x_{0} x_{2} / y_{0} y_{2}$. The nonexistence of such a quotient $u / v$ is therefore connected with the presence of a diamond or a pentagon as a sublattice of $L$. If $L$ is semidistributive, then this sublattice must of course be a pentagon. The aim of Lemma 4.3 is to describe the location of the pentagon relative to the quotients $x_{i} / y_{i}$. We introduce the following terminology and notation: A quotient $c / a$ in a lattice $L$ is said to be an $N$-quotient if there exists $b \in L$ such that $a+b=c+b$ and $a b=c b$. In this case $a, b, c \in L$ generate a sublattice isomorphic to the pentagon $N$, a condition which we abbreviate by writing $N(c / a, b)$.

Lemma 4.3 (Jónsson and Rival [79]). Let $L$ be a semidistributive lattice and suppose $x_{0} / y_{0} \nearrow_{w} x_{1} / y_{1} \searrow_{w} x_{2} / y_{2}$ in $L$. 
Then either (i) there exist $a, b, c \in L$ with $N(c / a, b)$, and $b / b c$ is a subquotient of $x_{0} / y_{0}$ or (ii) there exist $a, b, c, t \in L$ with $N(c / a, b), y_{0}<t \leq x_{0}$ and $t / y_{0} \nearrow_{w} a+b / b$ or (iii) there exists a subquotient $p / q$ of $x_{0} / y_{0}$ such that $p / q \searrow u / v \nearrow x_{2} / y_{2}$ for some quotient $u / v$. Proof. Let $x_{0}^{\prime}=x_{0}\left(y_{1}+x_{2}\right)$. (i) If $x_{0}^{\prime}+y_{1}<x_{2}+y_{1}$, then the elements $a=x_{0}^{\prime}+y_{1}$, $b=x_{0}$ and $c=y_{1}+x_{2}$ give $N(c / a, b)$ and $b / b c=x_{0} / x_{0}^{\prime} \subseteq x_{0} / y_{0}$ (Figure 4.4 (i)). (ii) Suppose $x_{0}^{\prime}+y_{1}=x_{2}+y_{1}$. By the semidistributivity of $L, x_{2}+y_{1}=x_{0}^{\prime} x_{2}+y_{1}=$ $x_{0} x_{2}+y_{1}$, hence $\left(x_{0} x_{2}+y_{2}\right)+y_{1}=x_{2}+y_{1}$. If $x_{0} x_{2}+y_{2}<x_{2}$, then the elements $a=x_{0} x_{2}+y_{2}, b=y_{1}, c=x_{2}$ satisfy $n(c / a, b)$, and $a+b / b$ transposes down on to the subquotient $x_{0}^{\prime} / x_{0} y_{1}$ of $x_{0} / y_{0}$ (Figure 4.4 (ii)). (i) (ii) Figure 4.4 (iii) If $x_{0} x_{2}+y_{2}=x_{2}$, then $$ x_{0} / y_{0} \supseteq p / q=x_{0} y_{1}+x_{0} x_{2} / x_{0} y_{1} \searrow x_{0} x_{2} / x_{0} y_{2} \nearrow x_{2} / y_{2} . $$ LemMa 4.4 (Rose [84]). If $L$ is a semidistributive lattice and $x_{0} / y_{0} \nearrow_{\beta} x_{1} / y_{1} \searrow_{\beta} x_{2} / y_{2}$ in $L$, then $x_{0} / y_{0} \searrow x_{0} x_{2} / y_{0} y_{2} \nearrow x_{2} / y_{2}$. Proof. Since we are dealing with transpositions, $y_{0}=x_{0} y_{1}, y_{2}=x_{2} y_{1}$ and $x_{1}=y_{1}+x_{0}=$ $y_{1}+x_{1}$. By semidistributivity $x_{1}=y_{1}+x_{0} x_{2}$. Now the bijectivity of the transpositions implies $y_{0}+x_{0} x_{2}=\left(y_{0}+x_{0} x_{2}+y_{2}\right) x_{0}=x_{1} x_{0}=x_{0}$, and similarly $y_{2}+x_{0} x_{2}=x_{2}$. Also $y_{0}\left(x_{0} x_{2}\right)=x_{0} y_{1} x_{2}=y_{0} y_{1}$ and $y_{2}\left(x_{0} x_{2}\right)=y_{0} y_{1}$. 3-generated semidistributive lattices. Let $F_{1}, F_{2}, F_{3}, F_{4}$ be the lattices in Figure 4.5. It is easy to check that each of these lattices is freely generated by the elements $x, y, z$ subject to the defining relations listed below. Lemma 4.5 (Jónsson and Rival [79], Rose [84]). Let $L$ be a semidistributive lattice generated by the three $x, y, z$ with $x \leq x y+z$ and $x z \leq y$. (i) If $L$ excludes $L_{12}$ then $L \in \mathbf{H} F_{1}$. (ii) If $L$ excludes $L_{12}$ and $L_{7}$ then $L \in \mathbf{H} F_{2}$. (iii) If $L$ excludes $L_{12}$ and $L_{8}$ then $L \in \mathbf{H} F_{3}$. (iv) If $L$ excludes $L_{12}, L_{7}$ and $L_{8}$ then $L \in \mathbf{H} F_{4}$. Proof. (i) $x \leq x y+z$ is equivalent to $x y+z=x+z$, so it suffices to show that under the above assumptions $(x+y) z=y z$. The free lattice determined by the elements $x, z, x y, y z$ and the defining relations $x \leq x y+z$ and $x z \leq y$ is pictured in Figure 4.6(i). Let $y_{0}=x+y z, y_{1}=x y+y_{0} z, y_{2}=y z+x y_{1}$ and $w=x y+y z$. To avoid $L_{12}$ we must have $y_{1}=y_{2}$. $$ \begin{array}{cccc} F_{1} & F_{2} & F_{3} & F_{4} \\ (x+y) z=y z & (x+y) z=y z & (x+y) z=y z & (x+y) z=y z \\ x y+z=x+z & x y+z=x+z & x y+z=x+z & x y+z=x+z \\ & & (x+y z) y & (x+y z) y \\ & x+y(x+z) & =x y+y z & =x y+y z \\ & =(x+y)(x+z) & & x+y(x+z) \\ & & & =(x+y)(x+z) \end{array} $$ Figure 4.5 (i) (ii) (iii) Figure 4.6 (i) (ii) (iii) Figure 4.7 Since $w+y_{0} z=y_{1}=y_{2}=w+x y_{1}$, semidistributivity implies $y_{1}=y_{2}=w y_{0} z x y_{1}=w$. 
Further we compute $y_{0} z=y_{1} z=w z=y z$. Again by semidistributivity $y z=\left(y_{0}+y\right) z=$ $(x+y z+y) z=(x+y) z$, as required. (ii) Let $s=(x+y)(y+z)$ and $t=x+y(x+z)$, then the sublattice of $F_{1}$ generated by $y, z$ and $t$ is isomorphic to $L_{7}$ (Figure 4.6 (iii)) with critical quotient $s / t$. (iii) is dual to (ii), and (iv) follows from (ii) and (iii). LemMa 4.6 (Rose [84]). Let $L$ be a semidistributive lattice that excludes $L_{11}$ and $L_{12}$. If $a, b, c, u, v \in L$ with $N(u / v, b), a<c$ and $u / v$ projects weakly onto $c / a$, then $N(c / a, b)$. Proof. We show that $u / v \nearrow_{w} c / a$ implies $N(c / a, b)$, then the result follows by repeated application of this result and its dual. Let $x=u, y=a$ and $z=b$ (Figure 4.7 (i)). Then $x \leq x y+z$ and $x z \leq y$, hence by Lemma $4.5 x, y, z$ generate a homomorphic image of $F_{1}$ (Figure 4.5). Computing in this lattice, we have $b c=z(x+y)=z y=b a$ and $b+a=z+y=z+(x+y)=b+c$, which implies $N(c / a, b)$. Corollary 4.7 Suppose $u / v$ and $c / a$ are nontrivial quotients in a semidistributive lattice $L$ that excludes $L_{11}$ and $L_{12}$. If $b \in L$ and $(a, c) \in \operatorname{con}(u, v)$, then $N(u / v, b)$ implies $N(c / a, b)$. Proof. By Lemma 1.11 there is a sequence $a=e_{0}<e_{1}<\ldots<e_{n}=c$ such that $u / v$ projects weakly onto $e_{i} / e_{i-1}$ for each $i=1, \ldots, n$. By Lemma $4.6 N(u / v)$ implies $N\left(e_{i} / e_{i-1}, b\right)$, hence $e_{i} b<e_{i-1}$ and $e_{i-1}+b>e_{i}$. It follows that $a b=e_{1} b=\ldots=e_{n} b=c b$ and $a+b=c+b$, whence $N(c / a, b)$. Figure 4.7 (ii) shows that the above result does not hold if $L$ includes $L_{11}$ or (by duality) $L_{12}$. The next result shows just how useful the preceding few lemmas are. THEOREM 4.8 (Rose [84]). Let $L$ be a subdirectly irreducible semidistributive lattice that excludes $L_{11}$ and $L_{12}$. Then $L$ has a unique critical quotient. Proof. Suppose to the contrary that $c / a$ and $p / q$ are two distinct critical quotients of $L$. Then $(p, q) \in \operatorname{con}(a, c)$, hence by Lemma 1.11 there exists $p^{\prime} \in p / q$ such that $p^{\prime}>q$ and $c / a$ projects weakly onto $p^{\prime} / q$ in $k$ steps. We may assume that $c / a \nsubseteq p / q$ (else $p / c$ or $a / q$ is critical and can replace $p / q$ ), and $p / q \nsubseteq c / a$. Consequently $k \geq 1, p^{\prime} / q \nsubseteq c / a$ and therefore we can find a nontrivial quotient $u / v \nsubseteq c / a$ such that $c / a \sim_{w} u / v$. By duality, suppose that $c / a \nearrow_{w} u / v$, and put $a^{\prime}=c v$. Since $c / a^{\prime}$ is also critical, we get $\left(a^{\prime}, c\right) \in \operatorname{con}\left(a^{\prime}, v\right)$. Again by Lemma 1.11 there exists $c^{\prime} \in c / a, c^{\prime}>a^{\prime}$ such that $v / a^{\prime}$ projects weakly onto $c^{\prime} / a^{\prime}$ (see Figure 4.7 (iii)). Consider a shortest sequence $$ v / a^{\prime}=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=c^{\prime} / a^{\prime} $$ Clearly $n \geq 2$ since $c^{\prime} \not v$. Observe also that if $n=2$, then we cannot have $v / a^{\prime} \searrow w$ $x_{1} / y_{1} \nearrow{ }_{w} c^{\prime} / a^{\prime}$, since that would imply $c^{\prime}=a^{\prime}+x_{1} \leq v$. First suppose that $v / a^{\prime} \nearrow_{w} x_{1} / y_{1} \searrow_{w} x_{2} / y_{2}$. Then only (i) or (ii) of Lemma 4.2 can apply, since the sequence cannot be shortened if $n \geq 3$, and for $n=2$ this follows from the observation above. 
If (i) holds, then there exist $a^{\prime \prime}, b, c^{\prime \prime} \in L$ such that $N\left(c^{\prime \prime} / a^{\prime \prime}, b\right)$ and $b / b c^{\prime \prime} \subseteq v / a^{\prime}$. Since $c^{\prime} / a^{\prime}$ is critical, $\left(a^{\prime}, c^{\prime}\right) \in \operatorname{con}\left(a^{\prime \prime}, c^{\prime \prime}\right)$, whence by Corollary 4.7 we have $N\left(c^{\prime} / a^{\prime}, b\right)$. If (ii) holds, then there exist $a^{\prime \prime}, b, c^{\prime \prime}, t \in L$ such that $N\left(c^{\prime \prime} / a^{\prime \prime}, b\right), t / a^{\prime \prime} \subseteq v / a^{\prime}$ and $t / a^{\prime} \nearrow_{w} a^{\prime \prime}+b / b$. Again we get $N\left(c^{\prime} / a^{\prime}, b\right)$ from Corollary 4.7. But in both cases we also have $b \leq a^{\prime}$, so this is a contradiction. Now suppose that $v / a^{\prime} \searrow_{w} x_{1} / y_{1} \nearrow_{w} x_{2} / y_{2}$. As we already noted, this implies $n \geq 3$, so we may only apply the dual parts of (i) or (ii) of Lemma 4.2. That is, there exist $a^{\prime \prime}, b, c^{\prime \prime}, t \in L$ with $N\left(c^{\prime \prime} / a^{\prime \prime}, b\right)$ and either $a^{\prime \prime}+b / b \subseteq v / a^{\prime}$ or $v / t \subseteq v / a^{\prime}, v / t \searrow_{w} b / b c^{\prime \prime}$. Again Corollary 4.7 gives $N\left(c^{\prime} / a^{\prime}, b\right)$. In the first case this contradicts $b \geq a^{\prime}$, and in the second, since $b / b c^{\prime \prime} \nearrow v / t$, we have $v=b+t \geq b+a^{\prime} \geq c^{\prime}$, and this contradicts $a^{\prime}=v c^{\prime} . \square$ Notice that if a lattice has a unique critical quotient $c / a$, then this quotient is prime (i.e. $c$ covers $a$ ), $c$ is join irreducible, $a$ is meet irreducible, and $\operatorname{con}(a, c)$ identifies no two distinct elements except $c$ and $a$. To get a feeling for the above theorem, the reader should check that the lattices $N, L_{6}, L_{7}, L_{8}, L_{9}, L_{10}, L_{13}, L_{14}$ and $L_{15}$ each have a unique critical quotient, where as $L_{11}$ and $L_{12}$ each have two. ### Almost Distributive Varieties Recall the definition of the identities $\left(\mathrm{SD}_{m}^{*}\right)$ and $\left(\mathrm{SD}_{m}^{+}\right)$in Theorem 4.2 . Of course $\left(\mathrm{SD}_{0}^{+}\right)$ and $\left(\mathrm{SD}_{0}^{+}\right)$only hold in the trivial variety, while $$ \left(\mathrm{SD}_{1}\right) \quad x(y+z)=x(y+x z) $$ holds in the distributive variety, but fails in $M_{3}$ and $N$ (Figure 4.8). Thus ( $\mathrm{SD}_{i}$ ) (and by duality $\left(\mathrm{SD}_{1}^{+}\right)$) is equivalent to the distributive identity. The first identities that are of interest are therefore $$ \begin{aligned} \left(\mathrm{SD}_{2}\right) & x(y+z) & =x(y+x(z+x y)) \\ \left(\mathrm{SD}_{2}^{+}\right) & x+y z & =x+y(x+z(x+y)) \end{aligned} $$ Neardistributive lattices. A lattice, or a lattice variety, is said to be neardistributive if it satisfies the identities $\left(\mathrm{SD}_{2}^{-}\right)$and $\left(\mathrm{SD}_{2}^{+}\right)$. This definition appears in Lee [85]. By Theorem 4.2 every neardistributive lattice is semidistributive, and it is not difficult (though somewhat tedious) to check that $N, L_{6}, \ldots, L_{10}, L_{13}, L_{14}$ and $L_{15}$ are all neardistributive. (i) (ii) (iii) Figure 4.8 On the other hand Figure 4.8 (iii) shows that $\left(\mathrm{SD}_{2}\right)$ fails in $L_{11}$, and by duality $\left(\mathrm{SD}_{2}^{+}\right)$ fails in $L_{12}$. Theorem 4.9 (Lee [85]). 
A lattice variety $\mathcal{V}$ is neardistributive if and only if $\mathcal{V}$ is semidistributive and contains neither $L_{11}$ or $L_{12}$. Proof. The forward implication follows immediately from the remarks above. Conversely, suppose that $L_{11}, L_{12} \notin \mathcal{V}$ and $\mathcal{V}$ is semidistributive but not neardistributive. We show that this leads to a contradiction. By duality we may assume that $\left(\mathrm{SD}_{2}^{*}\right)$ does not hold in $\mathcal{V}$, so for some lattice $L \in \mathcal{V}, x, y, z, a, c \in L$ we have $x(y+x(z+x y))=a<c=x(y+z)$. Let $\bar{L}$ be a homomorphic image of $L$ such that $\bar{c} / \bar{a}$ is a critical quotient in $\bar{L}$. Clearly $\bar{L}$ excludes $L_{11}$ and $L_{12}$, hence Theorem 4.8 implies that $\bar{c} / \bar{a}$ is prime and $\bar{a}$ is meet irreducible. Thus $\bar{a}=\bar{x}$ or $\bar{a}=\bar{y}+\bar{x}(\bar{z}+\overline{x y})$. But $\bar{c} \leq \bar{x}$, whence $\bar{a}=\bar{y}+\bar{x}(\bar{z}+\overline{x y}) \geq \bar{y}$. This however is impossible, since $\bar{x} \geq \bar{y}$ implies $\bar{a}=\bar{y}+\bar{x}(\overline{z y})=\bar{y}+\bar{c}=\bar{c}$. For finite lattices we can get an even stronger result. Theorem 4.10 (Lee [85]). A finite lattice $L$ is neardistributive if and only if $L$ is semidistributive and excludes $L_{11}$ and $L_{12}$. Proof. The forward direction follows immediately from Theorem 4.9. Conversely, suppose $L$ is finite, semidistributive and excludes $L_{11}$ and $L_{12}$, but is not neardistributive. Then by Theorem 4.9, $\{L\}^{\mathcal{V}}$ contains a lattice $K$, where $K$ is one of the lattices $M_{3}, L_{1}, \ldots, L_{5}, L_{11}, L_{12}$. Since $K$ is subdirectly irreducible, Jónsson's Lemma implies $K \in \mathbf{H S}\{L\}$. It is also easy to check that every choice of $K$ satisfies Whitman's condition (W), hence Theorem 2.47 implies that $K$ is isomorphic to a sublattice of $L$. This however contradicts the assumption that $L$ is semidistributive and excludes $L_{11}, L_{12}$. It is not known whether the above theorem also holds for infinite lattices. Note that Theorem 4.8 implies that any finite subdirectly irreducible neardistributive lattice has a unique critical quotient. Rose [84] observed that any semidistributive lattice which contains a cycle must include either $L_{11}$ or $L_{12}$ (refer to Section 2.3 for the definition of a cycle). This follows easily from Corollary 4.7 and the fact that if $p_{1} \sigma p_{2} \sigma \ldots \sigma p_{n} \sigma p_{0}$ is a cycle then $L_{6}$ $L_{8}$ $L_{9}$ Figure 4.9 $\operatorname{con}\left(p_{i}, p_{i_{*}}\right) \subseteq \operatorname{con}\left(p_{i+1}, p_{i+1}\right)$ (see Figure 2.6), whence all the quotients $p_{i} / p_{i_{*}}$ generate the same congruence. In particular, Theorem 4.10 and Corollary 2.35 therefore imply that every finite neardistributive lattice is bounded. Almost distributive lattices. The next definition is also from Lee [85]. A lattice or a lattice variety is said to be almost distributive if it is neardistributive and satisfies the inclusion $$ \text { (AD ) } \quad v(u+c) \leq u+c(v+a), \quad \text { where } a=x y+x z, c=x(y+x z), $$ and it dual $\left(\mathrm{AD}^{+}\right)$. Every distributive lattice is almost distributive, since the distributive identity implies $u+c(v+a)=(u+c)(u+v+a) \geq(u+c) v$. On the other hand $L_{11}$ and $L_{12}$ fail to be almost distributive, since they are not neardistributive. 
Further investigation shows that (AD) fails in $L_{6}, L_{8}$ and $L_{9}$ (see Figure 4.9), and by duality $\left(\mathrm{AD}^{+}\right.$) does not hold in $L_{7}$ and $L_{10}$, while the next lemma shows that $N, L_{13}, L_{14}$ and $L_{15}$ are almost distributive. Recall from Section 2.3 Day's construction of "doubling" a quotient $u / v$ in a lattice $L$ to obtain a new lattice $L[u / v]$. Here we only need the case where $L$ is a distributive lattice $D$, and $u=v=d \in D$. In this case we denote the new lattice by $D[d]$. Note that $N, L_{13}, L_{14}$ and $L_{15}$ can be obtained from a distributive lattice in this way. Lemma 4.11 (Lee [85]). For any distributive lattice $D$ and $d \in D$, the lattice $D[d]$ is almost distributive. Proof. By duality it suffices to show that $D[d]$ satisfies $\left(\mathrm{SD}_{2}^{*}\right)$ and $\left(\mathrm{AD}^{*}\right)$. If $\left(\mathrm{SD}_{2}^{*}\right)$ fails, then we can find $x, y, z, a, c \in D[d]$ such that $x(y+x(z+x y))=a<c=x(y+z)$. Let $\bar{u}$ be the image of $u \in D[d]$ under the natural epimorphism $D[d] \rightarrow D$ (i.e. $\bar{u}=u$ for all $u \neq d$, and $\overline{(d, 0)}=\overline{(d, 1)}=d$ ). Since $D$ is distributive, we must have $\bar{a}=\bar{c}$, whence $a=(d, 0)$ and $c=(d, 1)$. Clearly $a$ is meet irreducible by the construction of $D[d]$, and this leads to a contradiction as in Theorem 4.9. To show that (AD) holds in $D[d]$, let us now denote by $u, v, x, y, z, a, c$ the elements of $D[d]$ corresponding to an assignment of the (same) variables of (AD ). If $a=c$, then (AD) obviously holds. If $a \neq d$ (i.e. $a<c$ ), then the distributivity of $D$ again implies that $\bar{a}=\bar{c}$, hence $a=(d, 0)$ and $c=(d, 1)$. Now (i) (ii) (iii) Figure 4.10 $v \leq a$ implies $u+c(v+a)=u+a \geq v \geq v(a+c)$, while $v \leq a$ and the meet irreducibility of $a$ imply $v+a \geq c$, whence $u+c(v+a)=u+c \geq v(u+c)$. Thus (AD') holds in all cases. Of course not every almost distributive lattice is of the form $D[d]$ (take for example $\mathbf{2} \times \mathbf{2}$, or any one element lattice), but we shall see shortly that all subdirectly irreducible almost distributive lattices can indeed be characterized in this way. Lemma 4.12 (Jónsson and Rival [79]). Let $L$ be a semidistributive lattice which excludes $L_{12}$, and suppose that $a, b, c, a^{\prime}, b^{\prime} \in L$ with $N(c / a, b)$ and $c / a \nearrow c^{\prime} / a^{\prime}$. Set $r=a^{\prime} b+c$ and $s=(b+c) a^{\prime}$. Then (i) $c / a \nearrow_{\beta} r / a^{\prime} r$ or $L$ includes $L_{8}$ or $L_{10}$; (ii) $c^{\prime} / a^{\prime} \searrow_{\beta} c+s / s$ or $L$ includes $L_{7}$ or $L_{9}$; (iii) $r / a^{\prime} r \nearrow_{\beta} c+s / s$ or $L$ includes $L_{6}$. Proof. (i) Note that the lattice in Figure 4.10 (i) is isomorphic to $F_{4}$ in Figure 4.5. Assume $L$ excludes $L_{8}$ and $L_{10}$ (as well as $L_{12}$ ), and take $x=c, y=a^{\prime}, z=b$ in Lemma 4.5 (iii). Then $L$ is a homomorphic image of $F_{3}$, and since we have $N\left(c^{\prime} / a^{\prime}, b\right)$ and (i) (ii) Figure 4.11 $r a^{\prime}=a^{\prime} b+a$ in $F_{3}$, the same is true in $L$. For $t \in c / a$ we must have $\left(t+a^{\prime} b\right) c=t$, else $b, a^{\prime} r, t,\left(t+a^{\prime} b\right) c$ generate a sublattice isomorphic to $L_{10}$ (see Figure 4.10 (ii)). Similarly, for $t^{\prime} \in r / a^{\prime} r$ we must have $c t^{\prime}+a^{\prime} b=t^{\prime}$ to avoid $L_{8}\left(t^{\prime}, c, b\right)$ (see Figure 4.10 (iii)). 
This shows that $c / a$ transposes up onto $r / a^{\prime} r$ and also proves that this transposition is bijective. (ii) is dual to (i). Lastly (iii) hold because for $t \in r / a^{\prime} r$ and $t^{\prime} \in c+s / s$, we must have $(t+s) r=t$ and $t^{\prime} r+s=t^{\prime}$, to avoid $L_{6}((t+s) r / t, s, b)$ and $L_{6}\left(t^{\prime} r+s / t^{\prime}, r, b\right)$ (see Figure 4.11). Characterizing almost distributive varieties. The next theorem is implicit in Jónsson and Rival [79] and appeared in the present form in Rose [84]. Theorem 4.13 Let $L$ be a subdirectly irreducible semidistributive lattice. Then the following conditions are equivalent: (i) $L$ excludes $L_{6}, L_{7}, L_{8}, L_{9}, L_{10}, L_{11}, L_{12}$; (ii) $L$ has at most one $N$-quotient; (iii) $L \cong D[d]$ for some distributive lattice $D$ and some $d \in D$. Proof. Assume that (i) holds, and consider an $N$-quotient $u / v$ in $L$. By Theorem 4.8, $L$ has a unique critical quotient which we denote by $c / a$. It follows that $c / a$ is prime, and Lemma 1.11 implies that $u / v$ projects weakly onto $c / a$, say $$ u / v=x_{0} / y_{0} \nearrow_{w} x_{1} / y_{1} \searrow w \ldots \nearrow_{w} x_{n} / y_{n}=c / a . $$ Of course this implies that $x_{i} / y_{i} \in \operatorname{con}(u, v)$ for each $i=0,1, \ldots, n$, and since $u / v$ is assumed to be an $N$-quotient, we have $N(u / v, b)$ for some $b \in L$. Thus Corollary 4.7 implies $N\left(x_{i} / y_{i}, b\right)$ for each $i$. In particular, it follows that $c / a$ is an $N$-quotient. We show that it is the only one. Note that $x_{i} / y_{i} \searrow w x_{i+1} / y_{i+1}$ implies $x_{i+1} / y_{i+1} \nearrow y_{i}+x_{i+1} / y_{i}$, whence by Lemma 4.12 (i), (ii) and (iii) this transposition is bijective. Similarly the dual of Lemma 4.12 shows that $x_{i} / y_{i} \nearrow_{w} x_{i+1} / y_{i+1}$ implies $x_{i+1} / y_{i+1} \searrow \beta x_{i} / x_{i} y_{i+1}$. Hence $c / a$ is projective to a subquotient of $u / v$ (see Figure 4.12). By Theorem 4.8 this subquotient must equal $c / a$, otherwise $L$ would have two critical quotients. Furthermore $u<v$ implies Figure 4.12 that $u / c$ is also an $N$-quotient, and for the same reason as above, $c / a$ would have to be a subquotient of $u / c$. But this is clearly impossible, hence $u=c$, and similarly $v=a$. If (ii) holds then $L$ excludes $L_{11}$ and $L_{12}$, so again Theorem 4.8 implies that $c / a$ is a prime quotient and con $(a, c)$ identifies no two distinct elements of $L$ except $a$ and $c$. Let $D=L / \operatorname{con}(a, c)$. Clearly $D$ cannot include $M_{3}$, otherwise the same result would be true for $L$, contradicting semidistributivity. $D$ also excludes the pentagon $N$, since $\operatorname{con}(a, c)$ collapses the only $N$-quotient of $L$. Hence $D$ is distributive. Let $d=a, c \in D$, then it is easy to check that the map $x \mapsto\{x\}(x \neq c, a), c \mapsto(d, 1)$ and $a \mapsto(d, 0)$ is an isomorphism from $L$ to $D[d]$. To prove that (iii) implies (i), we first note that since $D$ is distributive, the natural homomorphism from $D[d]$ to $D$ must collapse any $N$-quotient in $D[d]$. Hence $(d, 1) /(d, 0)$ is the only $N$-quotient, and as each of the lattices $L_{6}, L_{7}, \ldots, L_{12}$ has at least two $N$-quotients, (i) must hold. The following corollary summarizes the results that have been obtained about almost distributive lattices and varieties. 
CorollaRY 4.14 (i) A subdirectly irreducible lattice $L$ is almost distributive if and only if $L \cong D[d]$ for some distributive lattice $D$ and $d \in D$. (ii) A lattice variety is almost distributive if and only if it is semidistributive and contains none of the lattices $L_{6}, L_{7}, \ldots, L_{12}$. (iii) Every finitely generated subdirectly irreducible almost distributive lattice is finite. (iv) Every almost distributive variety that has finitely many subvarieties is generated by a finite lattice. (v) Every join irreducible almost distributive variety of finite height is generated by a finite subdirectly irreducible lattice. (vi) Every finite almost distributive lattice is bounded (in the sense of Section 2.3). Proof. (i) The forward implication follows from Theorem 4.13, since $L$ is certainly semidistributive and, as $L_{6}, L_{7}, \ldots, L_{12}$ all fail to be almost distributive, $L$ must exclude these lattices. Theorem 4.11 provides the reverse implication. (ii) The forward direction is trivial. Conversely, if $\mathcal{V}$ is semidistributive and contains none of $L_{6}, L_{7}, \ldots, L_{12}$, then Theorem 4.11 and Theorem 4.13 imply that every subdirectly irreducible member, and hence every member of $\mathcal{V}$ is almost distributive. (iii) If the lattice $L \cong D[d]$ in (i) is finitely generated, so is the distributive lattice $D$. It follows that $D$ is finite, and since $|L|=|D[d]|=|D|+1, L$ is also finite. Now (iv) and (v) follow from Lemma 2.7, and (vi) is a consequence of Lemmas 2.40 and 2.41. In particular the last result shows that any finite subdirectly irreducible almost distributive lattice is a splitting lattice. Part (iii) above says that almost distributive varieties are locally finite, and this is the reason why they are much easier to describe than arbitrary varieties. More generally we have the following result: Lemma 4.15 (Rose [84]). Let $L$ be a finitely generated subdirectly irreducible lattice all of whose critical quotients are prime. If $c / a$ is a critical quotient of $L$, and if $L / \operatorname{con}(a, c)$ belongs to a variety that is generated by a finite lattice, then $L$ is finite. Proof. Since every critical quotient of $L$ is prime, each congruence class of $\operatorname{con}(a, c)$ has at most two elements. Thus the assumption that $L$ is infinite implies that $L / \operatorname{con}(a, c)$ is infinite as well. However, this leads to a contradiction, since the lattice $L / \operatorname{con}(a, c)$ is finitely generated and belongs to a variety generated by a finite lattice, hence $L / \operatorname{con}(a, c)$ must be finite. Subdirectly irreducible lattices of the form $D[d]$. We continue our investigation of semidistributive lattices that exclude $L_{11}$ and $L_{12}$, which will then lead to a characterization of all the finite subdirectly irreducible almost distributive lattices. Lemma 4.16 (Rose [84]). Let $L$ be a subdirectly irreducible semidistributive lattice that excludes $L_{11}$ and $L_{12}$, and suppose $c / a$ is the (unique) critical quotient of $L$. Then (i) the sublattices $[a)$ and (c] of $L$ are distributive; (ii) for any nontrivial quotient $u / v$ of (c] there exist $b, v^{\prime} \in L$ with $v \leq v^{\prime}<u$ such that $N(c / a, b), b \leq u, b \not \leq v^{\prime}$ and $v^{\prime}+b / v^{\prime} \searrow a+b /(a+b) v^{\prime}$ (Figure 4.13). (iii) if $u \succ v=c$ in (ii), then we also have $u=a+b$. Proof. (i) By semidistributivity, $L$ excludes $M_{3}$. 
Suppose that for some $u, v, b \in L$ we have $N(u / v, b)$. Then Corollary 4.7 implies $N(c / a, b)$, whence $b \notin[a)$ and $b \notin(c]$. It follows that $[a)$ and $(c]$ also exclude the pentagon, and are therefore distributive. (ii) Choose a shortest possible sequence $$ u / v=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=c / a . $$ Since $v \geq c$, we must have $n \geq 3$. Suppose that $u / v \nearrow_{w} x_{1} / y_{1} \searrow_{w} x_{2} / y_{2}$. By the minimality of $n$, only part (i) or (ii) of Lemma 4.3 can apply. That is, there exist $a^{\prime}, b, c^{\prime}, u^{\prime} \in L$ with $N\left(c^{\prime} / a^{\prime}, b\right)$ and $v<u^{\prime} \leq u$ such that $b / b c^{\prime} \subseteq u / v$ or $u^{\prime} / v \nearrow_{w} a^{\prime}+b / b$. But Corollary 4.7 implies $N(c / a, b$, which is impossible since in both cases $b \geq a$. Thus we must have $u / v \searrow_{w} x_{1} / y_{1} \nearrow x_{2} / y_{2}$. By the dual of Lemma 4.3 (i) and (ii), there exist $a^{\prime}, b, c^{\prime}, v^{\prime} \in L$ with $N\left(c^{\prime} / a^{\prime}, b\right)$ and $v \leq v^{\prime}<u$ such that either $a^{\prime}+b / b \subseteq u / v$ or $u / v^{\prime} \searrow_{w} b / b c^{\prime}$. Only the latter is possible, since we again have $N(c / a, b)$ by Corollary 4.7 . Now $a, b<u$ implies Figure 4.13 $a+b \leq u$, and $a<v^{\prime}$ implies $v^{\prime}+b=v^{\prime}+(a+b)$, hence $v^{\prime}+b / v^{\prime} \searrow a+b /(a+b) v^{\prime}$. Also $b v^{\prime}=b c^{\prime} \neq b$ implies $b \not z v$, and the bijectivity of the transposition follows from the distributivity of $[a$ ) (part (i)). (iii) is a special case if (ii). ThEOREM 4.17 (Rose [84]). Let $D$ be a finite distributive lattice and $d \in D$. Then $D[d]$ is subdirectly irreducible if and only if all of the following conditions hold: (i) every cover of $d$ is join reducible, (ii) every dual cover of $d$ is meet reducible, and (iii) every prime quotient in $D$ is projective to a prime quotient $p / q$ with $p=d$ or $q=d$. Proof. Suppose $L=D[d]$ is subdirectly irreducible. Let $a=(d, 0)$ and $c=(d, 1)$. Notice that $D=d$ implies that (i), (ii) and (iii) are satisfied vacuously. If $u \in D$ covers $d$, then $u$ covers $c$ in $L$, whence by Lemma 4.16 (iii) there exists $b \in L$ noncomparable with $c$, and $u=a+b$. Thus $u$ is join reducible in $L$ and also in $D$. Dually, every element that is covered by $d$ is meet irreducible. To prove (iii), consider a prime quotient $u / v \neq c / a$ in $L$, and choose a sequence $$ u / v=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n} \supseteq c / a $$ with $n$ as small as possible. For $i<n$ none of the quotients $x_{i} / y_{i}$ contains $c / a$, and is therefore isomorphic to $\overline{x_{i}} / \overline{y_{i}}$ in $D$ (where $\bar{x}$ denotes the image of $x$ under the natural epimorphism $D[d] \rightarrow D$ ). Since $D$ is distributive and $\bar{u} / \bar{v}$ is prime, each $\overline{x_{i}} / \overline{y_{i}}$ is prime, whence $\overline{x_{0}} / \overline{y_{0}} \sim \overline{x_{1}} / \overline{y_{1}} \sim \ldots \sim \overline{x_{n}} / \overline{y_{n}}$. It follows that $\overline{x_{n}}=d$ or $\overline{y_{n}}=d$, so (iii) holds with $p / q=\overline{x_{n}} / \overline{y_{n}}$. Conversely, suppose (i), (ii) and (iii) hold. Since $D$ and hence $L$ are finite, it suffices to show that every prime quotient of $L$ projects weakly onto $c / a$. 
We begin by showing that every prime quotient $u / v \neq c / a$ is projective to a prime quotient $x / y$ with $x=a$ or $y=c$. Since $c$ is the only cover of $a$ (and dually), we cannot have $v=a$ or $u=c$. Also if $u=a$ or $v=c$, then we take $x / y=u / v$. Otherwise, using (iii), we may assume by duality, that $\bar{u} / \bar{v}$ is projective to a prime quotient $\bar{x} / d$ with $x \succ c$. Since $D$ is distributive, this means that $\bar{u} \not \leq d$ and $\bar{u} / \bar{v} \nearrow \bar{u}+\bar{x} / \bar{v}+\bar{c} \searrow \bar{x} / \bar{c}$, i.e. $\bar{u}+\bar{c}=\bar{u}+\bar{x}=\bar{x}+\bar{v}, \bar{u}(\bar{v}+\bar{c})=\bar{v}$ and $\bar{x}(\bar{v}+\bar{c})=\bar{c}$. Hence $u+c=u+x=x+v$, and further more $\bar{v} \neq d$ implies $u(v+c)=v$ and $x(v+c)=c($ since $a<c$ ). Thus $u / v \nearrow u+x / v+x \searrow x / c$. Now we apply (i) to obtain $b \in L$ with $\bar{b}<\bar{x}$ and $\bar{x}=\bar{c}+\bar{b}$. It follows that $x=c+b=a+b$ (since $a$ is meet irreducible), while the join-irreducibility of $c$ implies $c b=a b$. Thus we have $N(c / a, b)$, and since $\operatorname{con}(u, v)$ identifies $x$ and $c$, it also identifies $c$ and $a$. Consequently $L$ is subdirectly irreducible. Varieties covering that smallest nonmodular variety. From the results obtained so far one can now prove the following: THEOREM 4.18 The variety $\mathcal{N}$ is covered by precisely three almost distributive varieties, $\mathcal{L}_{13}, \mathcal{L}_{14}$ and $\mathcal{L}_{15}$. Proof. With the help of Jónsson's Lemma it is not difficult to check that each of the varieties $\mathcal{L}_{i}\left(=\left\{L_{i}\right\}^{\mathcal{V}}\right)$ cover $\mathcal{N}(i=1, \ldots, 15)$. So let $\mathcal{V}$ be an almost distributive variety that properly includes $\mathcal{N}$. We have to show that $\mathcal{V}$ includes at least one of $\mathcal{L}_{13}, \mathcal{L}_{14}$ or $\mathcal{L}_{15}$. Every variety is determined by its finitely generated subdirectly irreducible members, hence we can find such a lattice $L \in \mathcal{V}$ not isomorphic to $N$ or 2. By Corollary 4.14 (i) and (iii), $L \cong D[d]$ for some finite distributive lattice $D, d \in D$. We show that $D[d]$ contains one of $L_{13}, L_{14}$ or $L_{15}$ as a sublattice. $D$ is nontrivial since $D[d] \not 2$. Let $0_{D}$ and $1_{D}$ be the smallest and largest element of $D$ respectively. Theorem 4.17 (i) and (ii) imply that $d \neq 0_{D}, 1_{D}$. Also, $0_{D} \prec d \prec 1_{D}$ would imply $D[d] \cong N$, so by duality we can find $u, v \in D$ such that $v \prec u \prec d$. By Theorem 4.17 (iii) $u / v$ is projective to a prime quotient $p / q$ such that $p=d$ or $q=d$. Since $D$ is distributive and $u<d$, the case $q=d$ is excluded, hence $u / v$ is projective to $d / q$. Again, by the distributivity of $D, u \neq q$ and therefore $d=u+q$ (see Figure 4.14 (i)). By Theorem 4.17 (ii) $u$ and $q$ are meet reducible, so there exist $x, y \in D$ such that $u=x d, q=y d$ and since $D$ is finite we may assume that $x \succ u$ and $y \succ v$. The sublattice $D^{\prime}$ of $D$ generated by $x, d, y$ is a homomorphic image of the lattice in Figure 4.14 (ii) (the distributive lattice with generators $x, d, y$ and defining relation $d=x d=y d$ ). Since $x \succ u$ and $y \succ v, D^{\prime}$ must in fact be isomorphic to the lattice in Figure 4.14 (iii) or (iv). Consequently $D^{\prime}[d]$, as a sublattice of $D[d]$, is isomorphic to $L_{14}$ or $L_{15}$. A sublattice isomorphic to $L_{13}$ is obtained from the dual case when $d \prec v \prec u$. 
The above theorem and Corollary 4.14 (ii) now imply: Theorem 4.19 (Jónsson and Rival [79]). In the lattice $\Lambda$ of all lattice subvarieties, the variety $\mathcal{N}$ is covered by exactly 16 varieties: $\mathcal{M}_{3}+\mathcal{N}, \mathcal{L}_{1}, \mathcal{L}_{2}, \ldots, \mathcal{L}_{15}$. Representing finite almost distributive lattices. Building on Theorem 4.17, Lee [85] gives another criterion for the subdirect irreducibility of a lattice $D[d]$, where $D$ is distributive, and he also sets up a correspondence between finite subdirectly irreducible almost distributive lattices and certain matrices of zeros and ones. Before discussing his results, we recall some facts about distributive lattices which can be found in [GLT]. Figure 4.14 Lemma 4.20 Let $D$ and $D^{\prime}$ be finite distributive lattices and denote by $J(D)$ the poset of all nonzero join irreducible elements of $D$. Then (i) any poset isomorphism from $J(D)$ to $J\left(D^{\prime}\right)$ can be extended to an isomorphism from $D$ to $D^{\prime}$. (ii) every maximal chain of $D$ has length $|J(D)|$. Given a finite distributive lattice $D$ and $d \in D$, let $B=\left\{b_{1}, \ldots, b_{m}\right\}$ be the set of all meet reducible dual covers of $d, C=\left\{c_{1}, \ldots, c_{n}\right\}$ the set of all join reducible covers of $d$, and consider the set $$ X_{d}(D)=\{x \in D: x d \prec d \prec x+d\} . $$ We define two partitions $\left\{B_{1}, \ldots, B_{m}\right\}$ and $\left\{C_{1}, \ldots, C_{n}\right\}$ of $X_{d}(D)$, referred to as the natural partitions of $X_{d}(D)$, as follows: $$ B_{i}=\left\{x \in X_{d}(D): x d=b_{i}\right\}, \quad C_{j}=\left\{x \in X_{d}(D): x+d=c_{j}\right\} . $$ (We assume here, and subsequently, that the index $i$ ranges from 1 to $m$ and $j$ ranges from 1 to $n$.) By the distributivity of $D$, two blocks $B_{i}$ and $C_{j}$ have at most one element in common, so we can define an $m \times n$ matrix $A(D[d])$ of 0 's and 1's by $a_{i j}=|B \cap C|$. (If any, and hence all, of the sets $B, C$ or $X_{d}(D)$ is empty, then $A(D[d])=($ ), the $0 \times 0$ matrix with no entries.) $A(D[d])$ is called the matrix associated with $D[d]$, but notice that because the elements of $B$ and $C$ were labeled arbitrarily, $A(D[d])$ is determined only up to the interchanging of any rows or any columns. Observe also that $A(D[d])$ does not have any rows or columns with just zeros, since $\left\{B_{i}\right\}$ and $\left\{C_{j}\right\}$ are partitions of the same set $X_{d}(D)$. As examples we note that $A(\mathbf{2})=(), A(N)=(1), A\left(L_{13}\right)=(1,1), A\left(L_{14}\right)=\left(\begin{array}{l}1 \\ 1\end{array}\right)$ and $A\left(L_{15}\right)=\left(\begin{array}{ll}1 & 0 \\ 0 & 1\end{array}\right)$ or $\left(\begin{array}{ll}0 & 1 \\ 1 & 0\end{array}\right)$ (see Figure 4.15). We will also be concerned with the sublattice $D^{*}$ of $D$ generated by the set $U=$ $X_{d}(D) \cup\{d\}$. Let $1^{*}=\sum U, 0^{*}=\prod U$, then clearly the elements $c_{1}, \ldots, c_{n}$ will be atoms in the quotient $1^{*} / d$ and $\sum_{j} c_{j}=1^{*}$, so by Lemma $1.12,1^{*} / d$ is isomorphic to the Boolean algebra $2^{n}$, and the elements $c_{1}, \ldots, c_{n}$ are the only covers of $d$ in $D^{*}$. 
Dually we have $$ A(N)=(1) $$ $$ A\left(L_{14}\right)=\left(\begin{array}{l} 1 \\ 1 \end{array}\right) $$ $A\left(L_{13}\right)=\left(\begin{array}{ll}1 & 1\end{array}\right)$ $$ A\left(L_{15}\right)=\left(\begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) $$ Figure 4.15 that $d / 0^{*}$ is isomorphic to the Boolean algebra $2^{m}$ and that $b_{1}, \ldots, b_{m}$ are the only dual covers of $d$ in $D^{*}$. It follows that $D^{*}$ has length $m+n$ and therefore, by Lemma 4.20 (ii), $\left|J\left(D^{*}\right)\right|=m+n$. We can in fact describe the elements of $J\left(D^{*}\right)$ : LemMa 4.21 (Lee [85]). $J\left(D^{*}\right)=\left\{b_{\mathbf{1}}^{\prime}, \ldots, b_{m}^{\prime}, c_{1}^{\prime}, \ldots, c_{n}^{\prime}\right\}$ where $b_{i}^{\prime}$ is the complement of $b_{i}$ in $d / 0^{*}$, and $c_{j}^{\prime}=\prod C_{j}$. All elements of $J\left(D^{*}\right)$ are incomparable except: $b_{i}^{\prime} \leq c_{j}^{\prime}$ if and only if $B_{i} \cap C_{j}=\emptyset$. Proof. It is clear that the $b_{i}^{\prime}$ are distinct atoms of $D^{*}$ and therefore pairwise incomparable and join irreducible in $D^{*}$. As for the $c_{j}^{\prime}$, we first note that in a distributive lattice every join irreducible element is a meet of generators. Thus if $c$ is join irreducible and $c<c_{j}^{\prime}$, then $c \leq c_{j}^{\prime} x$ for some $x \in U-C_{j}$. Now $c_{j}^{\prime}+d=c_{j} \neq x+d$, hence $c_{j}^{\prime} x d=\left(c_{j}^{\prime}+d\right)(x+d)=d$, which shows that $c \leq c_{j}^{\prime} x \leq d$. But $c_{j}^{\prime} \leq d$ and therefore $c_{j}^{\prime}$ cannot be a join of join irreducibles strictly less that itself. It follows that $c_{j}^{\prime}$ is join irreducible. Also $c_{1}^{\prime}, \ldots, c_{n}^{\prime}$ are pairwise incomparable, since $c_{j}^{\prime} \leq c_{k}^{\prime}$ for some $j \neq k$ implies $c_{j}^{\prime}=c_{j}^{\prime} c_{k}^{\prime} \leq c_{j}^{\prime} x$ for any $x \in C_{k}$, and $c_{j}^{\prime} x \leq d$ as above, which contradicts $c_{j}^{\prime} \not d$. Clearly also $c_{j}^{\prime} \not b_{i}^{\prime}$ for any $i$ and $j$, since $b_{i}^{\prime} \leq d$. Therefore it remains to show that $b_{i}^{\prime} \leq c_{j}^{\prime}$ if and only if $B_{i} \cap C_{j}=\emptyset$. If $x \in B_{i} \cap C_{j}$, then $b_{i}^{\prime} \leq d$ and $c_{j}^{\prime} \leq x$, so $b_{i}^{\prime} c_{j}^{\prime}=b_{i}^{\prime}(d x) c_{j}^{\prime}=\left(b_{i}^{\prime} b_{i}\right) c_{j}^{\prime}=0^{*} c_{j}^{\prime}=0^{*}$ and hence $b_{i}^{\prime} \not \leq c_{j}^{\prime}$. Conversely $B_{i} \cap C_{j}=\emptyset$ implies $C_{j} \subseteq U-B_{i}$, and since $b_{i}^{\prime}$ is a meet of generators, $b_{i}^{\prime}=\prod U-B_{i} \leq \prod C_{j}=c_{j}^{\prime}$. So, given any $m \times n$ matrix $A=\left(a_{i j}\right)$ of 0's and 1's, we define a finite distributive lattice $D_{A}$ and an element $d_{A}$ as follows: Suppose $2^{m+n}$ be the Boolean algebra generated by the $m+n$ atoms $p_{1}, \ldots, p_{m}$, $q_{1}, \ldots, q_{n}$. Put $$ d_{A}=\sum p_{i}, \quad \bar{b}_{k}=\sum_{i \neq k} p_{i} \quad \text { and } \quad x_{i j}=\bar{b}_{i}+q_{j} $$ and let $X_{A}=\left\{x_{i j}: a_{i j}=1\right\}$. Now we let $D_{A}$ be the sublattice of $2^{m+n}$ generated by $X_{A} \cup\left\{d_{A}\right\}$. Lemma 4.22 (Lee [85]). For no proper subset $U$ of $X_{d}(D)$ does $U \cup\{d\}$ generate $D^{*}$. Proof. We may assume that $X_{d}(D)$ is nonempty. Suppose to the contrary that $U=$ $X_{d}(D)-\left\{x_{0}\right\}$ for some $x_{0} \in X_{d}(D)$, and $U \cup\{d\}$ generates $D^{*}$. 
Then $x_{0} \in C_{j}$ for some block $C_{j}$ of the natural partition $\left\{C_{1}, \ldots, C_{n}\right\}$ of $X_{d}(D)$. Let $c_{j}^{\prime}=\prod C_{j} \in D^{*}$. By Lemma $4.21 c_{j}^{\prime}$ is join irreducible, and since $U \cup\{d\}$ is a generating set, $c_{j}^{\prime}$ is the meet of a subset $V$ of $U \cup\{d\}$. Notice that $x \prec x+d=c_{j}^{\prime}$ for each $x \in C_{j}$, so by the dual of Lemma $1.12 C_{j}$ generates a Boolean algebra with least element $c_{j}^{\prime}$. Hence $V$ is not a proper subset of $C_{j}$ and, as $x_{0} \notin V$, we also cannot have $V=C_{j}$. Choose $x \in V-C_{j}$. Then $x+d \neq x_{0}+d=c_{j}$, so $$ d=(x+d)\left(x_{0}+d\right)=x x_{0}+d \geq x x_{0} \geq \prod C_{j}=\prod V=c_{j}^{\prime} . $$ However this contradicts $c_{j}^{\prime} \notin d$. Theorem 4.23 (Lee [85]). Let $D$ be a finite distributive lattice and $d \in D$. Then the following are equivalent: (i) $D[d] \cong D_{A}\left[d_{A}\right]$ where $A=A(D[d])$ and $d \in D$ corresponds to $d_{A} \in D_{A}$; (ii) the set $X_{d}(D) \cup\{d\}$ generates $D$ (i.e. $D^{*}=D$ ); (iii) $D[d]$ is subdirectly irreducible. Proof. (i) implies (ii): Let $d_{A}, \overline{b_{k}}, x_{i j}$ and $X_{A}$ be defined as above. Clearly $x_{i j} d_{A} \prec$ $d_{A} \prec x_{i j}+d_{A}$ for all $i, j$, hence $X_{A} \subseteq X_{d_{A}}\left(D_{A}\right)$. Since $X_{A} \cup\{d\}$ generates $D_{A}$, so does $X_{d_{A}}\left(D_{A}\right) \cup\{d\}$. Notice that $D_{A}^{*}=D_{A}$, and by Lemma $4.22 X_{A}=X_{d_{A}}\left(D_{A}\right)$. (ii) implies (i): Again suppose $2^{m+n}$ is the Boolean algebra generated by the atoms $p_{1}, \ldots, p_{m}, q_{1}, \ldots, q_{n}$, and let $\overline{c_{j}}=d_{A}+q_{j}$. We claim that the elements $\overline{b_{1}}, \ldots, \overline{b_{m}}, \overline{c_{1}}, \ldots, \overline{c_{n}}$ are all in $D_{A}$. This follows because $x_{i j} d_{A}=\left(\overline{b_{i}}+q_{j}\right) d_{A}=\overline{b_{i}} d_{A}+q_{j} d_{A}=\overline{b_{i}}$ and $x_{i j}+d_{A}=$ $\overline{b_{i}}+q_{j}+d_{A}=d_{A}+q_{j}=\overline{c_{j}}$ for all $i, j$, and $A=A(D[d])$ has no rows or columns of zeros, hence for any given $i$ (or $j$ ) there exists $j$ (respectively $i$ ) such that $x_{i j} \in X_{A}$. Clearly the $\overline{c_{j}}$ are covers of $d_{A}$, and they are the only ones, since by Lemma $1.12 \sum \overline{c_{j}}=\sum X_{A}=1_{A}^{*}$. Dually the $\overline{b_{i}}$ are all the dual covers of $d_{A}$. Let $\left\{B_{1}, \ldots, B_{m}\right\}$ and $\left\{C_{1}, \ldots, C_{n}\right\}$ be the natural partitions of $X_{d_{A}}\left(D_{A}\right)=X_{A}$. By Lemma $4.21 J\left(D_{A}^{*}\right)=\left\{\overline{b_{1}^{\prime}}, \ldots, \overline{b_{m}^{\prime}}, \overline{c_{1}^{\prime}}, \ldots, \overline{c_{n}^{\prime}}\right\}$ where $\overline{b_{i}^{\prime}}=p_{i}$ and $\overline{c_{j}^{\prime}}=\prod \overline{C_{j}}$. Now $B_{i} \cap C_{j} \neq \emptyset$ iff $a_{i j}=1$ in $A(D[d])$ iff $\overline{b_{i}} \leq x_{i j} \leq \overline{c_{j}}$ in $D_{A}$ iff $\overline{B_{i}} \cap \overline{C_{j}} \neq \emptyset$. Consequently the map $b_{i}^{\prime} \mapsto \overline{b_{i}^{\prime}}, c_{j}^{\prime} \mapsto \overline{c_{j}^{\prime}}$ from $J\left(D^{*}\right)$ to $J\left(D_{A}^{*}\right)$ is a poset isomorphism which extends to an isomorphism $D^{*} \cong D_{A}^{*}$ by Lemma 4.20 (i). $D^{*}$ is the sublattice of $D$ generated by $X_{d}(D) \cup\{d\}$, so by assumption $D^{*}=D$, and we always have $D_{A}^{*}=D_{A}$. 
Clearly also $d=\sum b_{i}^{\prime}$ is mapped to $d_{a}=\sum \overline{b_{i}^{\prime}}$ by the isomorphism. (ii) implies (iii): We verify that the conditions (i), (ii) and (iii) of Theorem 4.17 hold. By Lemma 1.12, the join reducible covers of $d$ in $D^{*}=D$ are in fact all the covers of $d$, and dually, which implies that the first two conditions hold. Also, if $u \prec v$ in $D$, then the length of $D / \operatorname{con}(u, v)$ is less that the length of $D$. It follows that $\operatorname{con}(u, v)$ identifies $d$ with one of its covers or dual covers, hence condition (iii) of Theorem 4.17 holds. (iii) implies (ii): Suppose $D[d]$ is subdirectly irreducible, but $D^{*}$ is a proper sublattice of $D$. Let $0^{*}$ be the smallest and $1^{*}$ the largest element of $D^{*}$, and choose an element $z \in D-D^{*}$. Case 1: $z \leq 1^{*}$ or $z \geq 0^{*}$. Then one of the quotients $z+1^{*} / 1^{*}$ or $0^{*} / z 0^{*}$ is nontrivial. Observe that in any distributive lattice, if $v<u \leq v^{\prime}<u^{\prime}$, then the quotients $u / v$ and $u^{\prime} / v^{\prime}$ cannot project onto each other. Hence no prime quotients in $z+1^{*} / 1^{*}$ or $0^{*} / z 0^{*}$ project onto any prime quotient $p / q$ with $p=d$ or $q=d$, since $p, q \in D^{*}$ by condition (i) and (ii) of Theorem 4.17. This however contradicts condition (iii) of the same theorem. Case 2: $0^{*} \leq z \leq 1^{*}$. Choose $z$ such that the height of $z$ is as large as possible, and let $z^{*}$ be a cover of $z$. Then $z^{*} \in D^{*}$, and $z^{*}$ is the only cover of $z$, else $z$ would be the meet of two elements from $D^{*}$ and would also belong to $D^{*}$. By Theorem 4.17 (iii), the prime quotient $z^{*} / z$ projects onto a prime quotient $p / q$ with $p=d$ or $q=d$, and since $D$ is distributive, this implies $z^{*} / z \nearrow u / v \searrow p / q$ for some quotient $u / v$. Since $z^{*}$ is the unique cover of $z$, we must have $u / v=z^{*} / z \searrow p / q$. Suppose $p=d$. Then $z^{*} / z \searrow d / z d$, and the two quotients are distinct, otherwise Theorem 4.17 (ii) implies $z \in D^{*}$. Consequently $z^{*} / d \searrow_{\beta} z / z d$. As before Lemma 1.12 implies that $1^{*} / d$ is a Boolean algebra, hence $z^{*} / d$ is a Boolean algebra, and so is $z / z d$ (via the bijective transposition). Therefore $z$ is the join of the atoms of $z / z d$, which are in fact elements of $X_{d}(D)$. This implies $z \in D^{*}$, a contradiction. Next suppose $q=d$. Since, by Lemma $1.12,1^{*} \subseteq D^{*}$ we would again have $z \in D^{*}$, a contradiction. Thus we conclude that $D^{*}=D$. Given any matrix $A$ of 0 's and 1's with no rows or columns of zeros, the equivalence of (ii) and (iii) tells us that $D_{A}\left[d_{A}\right]$ is subdirectly irreducible. Conversely, for any subdirectly irreducible lattice $D[d]$, the matrix $A(D[d])$ has no rows or columns of zeros, and it is not difficult to see that, up to the interchanging of some rows or columns, the matrices $A$ and $A\left(D_{A}\left[d_{A}\right]\right)$ are the same. Furthermore, given any lattice $D[d]$ and $X^{\prime} \subseteq X_{d}(D)$, the sublattice $D^{\prime}$ generated by $X^{\prime} \cup\{d\}$ is subdirectly irreducible, and by Lemma 4.22 $X_{d}\left(D^{\prime}\right)=X^{\prime}$. 
Rephrased in terms of the matrices that represent the lattices $D$ and $D^{\prime}$ we have the following: Corollary 4.24 Let $A=A(D[d])$ for some finite distributive lattice $D, d \in D$, and suppose $D^{\prime}$ is the sublattice generated by some $X^{\prime} \subseteq X_{d}(D)$. Then the matrix $A^{\prime}$ which represents $D^{\prime}[d]$ is obtained from $A$ by changing each 1 corresponding to an element of $X_{d}(D)-X^{\prime}$ to 0 and deleting any rows or columns of zeros that may have arisen. Conversely any matrix obtained from $A$ in this way represents a (subdirectly irreducible) sublattice of $D[d]$. Covering chains of almost distributive varieties. The next lemma, which was proved by Rose [84] directly from Theorem 4.17, can now be derived from the above corollary. Lemмa 4.25 Let $L$ be a finite subdirectly irreducible almost distributive lattice, $L \varsubsetneqq \mathbf{2}, N$. (i) If $L_{14}, L_{15} \notin\{L\}^{\mathcal{V}}$ then $L \cong L_{13}^{k}$, (ii) if $L_{13}, L_{15} \notin\{L\}^{\mathcal{V}}$ then $L \cong L_{14}^{k}$ and (iii) if $L_{13}, L_{14} \notin\{L\}^{\mathcal{V}}$ then $L \cong L_{15}^{k}$ for some $k \in \omega$ (see Figure 2.2). Proof. (i) By Corollary 4.14 (i) $L \cong D[d]$ for some finite distributive lattice $D$ and $d \in D$. Let $A=A(D[d])$ be the matrix representing $D[d]$ and suppose $A$ has more than one row. If $A$ has no column with two 1's in it, then it has at least two columns (since it has at least two rows, and no rows of 0's), and we can therefore find two entries equal to 1 in two different columns and rows. Deleting all other rows and columns, it follows from Corollary 4.24 that $L_{\mathbf{1 5}}$ is a sublattice of $L$. Hence if $L_{\mathbf{1 4}}, L_{\mathbf{1 5}} \notin\{L\}^{\mathcal{V}}$, then $A$ has only one row with all entries equal to 1 . This is the matrix representing $L_{13}^{k}$, where $k+2$ is the number of columns of $A$ (see Figure 4.16 (i)). Similar arguments prove (ii) and (iii). $\square$ Theorem 4.26 (Rose [84]). For each $i \in\{13,14,15\}$ and $n \in \omega$ the variety $\mathcal{L}_{i}^{n+1}$ is the only join irreducible cover of $\mathcal{L}_{i}^{n}$. Proof. Let $i=13$ and suppose $\mathcal{V}$ is a join irreducible variety that covers $\mathcal{L}_{13}^{n}=\left\{L_{13}^{n}\right\}^{\mathcal{V}}$. $\mathcal{V}$ must be almost distributive, otherwise, by Corollary 4.14 (ii), $\mathcal{V}$ contains one, say $L$, of the lattices $M_{3}, L_{1}, L_{2}, \ldots, L_{12}$, in which case $\mathcal{V} \geq \mathcal{L}_{13}^{n}+\{L\}^{\mathcal{V}}>\mathcal{L}_{13}^{n}$, hence either $\mathcal{V}$ is not a cover of $\mathcal{L}_{13}^{n}$ or $\mathcal{V}$ is join reducible. $\mathcal{V}$ is of finite height, thus by Corollary 4.14 (i), (iii) and Lemma $2.7 \mathcal{V}$ is generated by a finite subdirectly irreducible lattice $L=D[d]$, where $D$ is distributive. Since $\mathcal{V}$ is join irreducible, $L_{14}, L_{15} \notin \mathcal{V}$ so by Lemma 4.25 (i) $L=L_{13}^{k}$, and since $\mathcal{V}$ covers $\mathcal{L}_{13}^{n}$, we must have $k=n+1$. The proof for $i=14$ and 15 is completely analogous. 
The smallest subdirectly irreducible almost distributive lattice that is not of the form 2 , $$ A\left(L_{13}^{n}\right)=\left(\begin{array}{lll} 1 & 1 & \ldots \end{array}\right) $$ $$ A(L)=\left(\begin{array}{ll} 1 & 1 \\ 1 & 0 \end{array}\right) $$ $$ A\left(L_{15}^{n}\right)=I_{n+2}=\left(\begin{array}{ccc} 1 & 0 & 0 \\ & \ddots & \\ 0 & 0 & 1 \end{array}\right) $$ Figure 4.16 ## Further results on almost distributive varieties. THEOREM 4.27 (Lee [85]). Every almost distributive lattice variety of finite height has only finitely many covers. Proof. Let $\mathcal{V}$ be an almost distributive variety of finite height. Then $\mathcal{V}$ is generated by finitely many subdirectly irreducible almost distributive lattices, and by Corollary 4.14 (i) these lattices are of the form $D_{1}\left[d_{1}\right], \ldots, D_{n}\left[d_{n}\right]$ for some finite distributive lattices $D_{1}, \ldots, D_{n}$. Let $k=\max \left\{\left|X_{d_{i}}\left(D_{i}\right)\right|: i=1, \ldots, n\right\}$. By Corollary 4.14 (iv), each join irreducible cover of $\mathcal{V}$ is generated by a finite subdirectly irreducible lattice $D[d]$, and clearly we must have $\left|X_{d}(D)\right|=k+1$. By Theorem $4.23 D[d]$ can be represented by a matrix of 0's and 1's with at most $k+1$ rows and $k+1$ columns, hence $\mathcal{V}$ has only finitely many join irreducible covers. On the other hand, each join reducible cover of $\mathcal{V}$ is a join of $\mathcal{V}$ and a join irreducible cover of a subvariety of $\mathcal{V}$. Therefore $\mathcal{V}$ also has only finitely many join reducible covers. LemMa 4.28 (Lee $[\mathbf{8 5}]$ ). Let $\bar{D}$ be a sublattice of a finite distributive lattice $D$, and $d \in \bar{D}$. If $\bar{D}[d]$ is subdirectly irreducible, then $\bar{D}[d] \cong D^{\prime}[d]$, where $D^{\prime}$ is generated by $d$ and a subset of $X_{d}(D)$. Proof. Let $\left\{B_{1}, \ldots, B_{m}\right\}$ and $\left\{C_{1}, \ldots, C_{n}\right\}$ be the natural partitions of $X_{d}(\bar{D})$, and let $b_{i}=x d$ with $x \in B_{i}, c_{j}=x+d$ with $x \in C_{j}$. Choose $b_{1}^{\prime}, \ldots, b_{m}^{\prime}, c_{1}^{\prime}, \ldots, c_{n}^{\prime} \in D$ such that $b_{i} \leq b_{i}^{\prime} \prec d$ and $d \prec c_{j}^{\prime} \leq c_{j}$. For each $x \in X_{d}(\bar{D})$ we have $x \in B_{i} \cap C_{j}$ for unique $i, j$, in which case we define $x^{\prime}=x c_{j}^{\prime}+b_{i}^{\prime}$. By distributivity $x^{\prime} d=\left(x c_{j}^{\prime}+b_{i}^{\prime}\right) d=x d+b_{i}^{\prime}$ and $x^{\prime}+d=(x+d) c_{j}^{\prime}$, hence $x d=b_{i}$ implies $x^{\prime} d=b_{i}^{\prime}$, and $x+d=c_{j}$ implies $x^{\prime}+d=c_{j}^{\prime}$. It follows that the set $X^{\prime}=\left\{x^{\prime}: x \in X_{d}(\bar{D})\right\}$ is a subset of $X_{d}(D)$, and since the elements $b_{i}^{\prime}$ and $c_{j}^{\prime}$ all have to be distinct, the map $x \mapsto x^{\prime}$ is bijective. Let $D^{\prime}$ be the sublattice of $D$ generated by $X^{\prime} \cup\{d\}$. By Lemma $4.22 X_{d}\left(D^{\prime}\right)=x^{\prime}$. We show that $\bar{D}$ and $D^{\prime}$ have the same matrix representation, then it follows from Theorem 4.23 that $\bar{D} \cong D^{\prime}$. By Lemma 1.12 the elements $c_{1}^{\prime}, \ldots, c_{n}^{\prime}$ are all the (join reducible) covers of $d$, and dually for $b_{1}^{\prime}, \ldots, b_{m}^{\prime}$. 
Let $B_{i}^{\prime}=\left\{x^{\prime} \in X^{\prime}: x^{\prime} d=b_{i}^{\prime}\right\}$ and $C_{j}^{\prime}=\left\{x^{\prime} \in X^{\prime}: x^{\prime}+d=c_{j}^{\prime}\right\}$ be the blocks of the natural partitions of $X^{\prime}$. Clearly $x \in B_{i}$ implies $x^{\prime} \in B_{i}^{\prime}$ for all $i$, and the converse must also hold, since the map $x \mapsto x^{\prime}$ is bijective and the blocks $B_{i}, b_{i}^{\prime}$ are finite. Similarly $x \in C_{j}$ if and only if $x^{\prime} \in C_{j}^{\prime}$. Hence $\left|B_{i} \cap C_{j}\right|=\mid B_{i}^{\prime} \cap C_{j}^{\prime}$, which implies $A(\bar{D}[d])=A\left(d^{\prime}[d]\right)$. Lemma 4.29 (Lee [85]). Let $D$ be a finite distributive lattice, $d \in D$, and let $D^{*}$ be the sublattice of $D$ generated by $X_{d}(D) \cup\{d\}$. Then $D^{*}[d]$ is a retract of $D[d]$. In particular, $D^{*}[d]$ is the smallest homomorphic image of $D[d]$ separating $(d, 0)$ and $(d, 1)$. Proof. Since $(d, 0) \prec(d, 1)$, by Lemma 1.11 there is a unique subdirectly irreducible homomorphic image $\overline{D[d]}$ of $D[d]$ such that $\overline{(d, 1)} / \overline{(d, 0)}$ is a critical quotient of $\overline{D[d]}$. By Theorem 4.23 $D^{*}[d]$ is also subdirectly irreducible with critical quotient $(d, 1) /(d, 0)$, hence $D^{*}[d]$ is isomorphic to its image $\overline{D^{*}[d]} \subseteq \overline{D[d]}$. We have to show that $\overline{D^{*}[d]}=\overline{D[d]}$. The epimorphism $D[d] \rightarrow \overline{D[d]}$ induces an epimorphism $D \rightarrow \bar{D}$, where $\bar{D}$ is obtained from $\overline{D[d]}$ by collapsing the quotient $\overline{(d, 1)} / \overline{(d, 0)}$. Then $D^{*} \cong \overline{D^{*}} \subseteq \bar{D}$, and it suffices to show that $\overline{D^{*}}=\bar{D}$. Consider $x \in D$ such that $\bar{x} \in X_{\bar{d}}(\bar{D})$. $x$ must be noncomparable with $d$, so we can find $b, c \in D$ with $x d \leq b \prec d \prec c \leq x+d$. Let $x_{0}=x c+b$, then one easily $$ A\left(K_{i}\right)=\left(\begin{array}{ccccccc} 1 & 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 \\ 1 & 0 & 0 & 0 & \cdots & 0 & 1 \end{array}\right) $$ Figure 4.17 checks that $x_{0} \in X_{d}\left(D^{*}\right)$ and $b, c \in D^{*}$. Notice that $\bar{d} \neq \bar{c}$, for otherwise the epimorphism $D[d] \rightarrow \overline{D[d]}$ would collapse the quotient $c /(d, 1)$ and, as $x_{0},(d, 0)$ and $(d, 1)$ generate a pentagon, it would also identify $(d, 0)$ and $(d, 1)$. Similarly $\bar{b} \neq \bar{d}$, whence $\bar{b} \prec \bar{d} \prec \bar{c}$ in $\overline{D^{*}}$. Because $\bar{x} \in X_{\bar{d}}(\bar{D})$, it follows that $\bar{x} \bar{d}=\bar{b}, \bar{x}+\bar{d}=\bar{c}$ and since $\bar{b}, \bar{c} \in \overline{D^{*}}$, we in fact have $\bar{x} \in X_{\bar{d}}\left(\overline{D^{*}}\right)$. Thus $X_{\bar{d}}(\bar{D}) \subseteq X_{\bar{d}}\left(\overline{D^{*}}\right) \subseteq \overline{D^{*}} \subseteq \bar{D}$. By Theorem $4.23 X_{\bar{d}}(\bar{D}) \cup\{d\}$ is a generating set for $\bar{D}$, hence $\bar{D}^{*}=\bar{D}$, and therefore $D^{*}[d] \cong \overline{D^{*}[d]}=\overline{D[d]}$. Lemma 4.30 (Lee [85]). Let $D$ be a finite distributive lattice and $d \in D$. Then every subdirectly irreducible member of $\{D[d]\}^{\mathcal{V}}$ is isomorphic to $D^{\prime}[d]$, where $D^{\prime}$ is a sublattice of $D$ generated by $d$ and a subset of $X_{d}(D)$. Proof. 
Let $L$ be a subdirectly irreducible member of $\{D[d]\} \mathcal{V}$. By Jónsson's Lemma $L \in \mathbf{H S}\{D[d]\}$, so there is a sublattice $L_{0}$ of $D[d]$ and an epimorphism $f: L_{0} \rightarrow L$. If $(d, 1) /(d, 0) \nsubseteq L_{0}$, then $L_{0}$ is distributive and hence $L \cong 2$. If $(d, 1) /(d, 0) \subseteq L_{0}$ then $L_{0}=D_{0}[d]$ for a sublattice $D_{0}$ of $D$. But if $(d, 1) /(d, 0)$ is collapsed by $f$, then again $L \cong 2$. Suppose $(d, 1) /(d, 0)$ is not collapsed by $f$. Since $L$ is subdirectly irreducible, and $f(d, 1) / f(d, 0)$ is critical, $L$ is a smallest homomorphic image of $D_{0}[d]$ separating $(d, 0)$ and $(d, 1)$. By Lemma 4.29 , the same holds for $D_{0}^{*}[d]$. Hence $L \cong D_{0}^{*}[d]$. Also $D_{0}^{*}$ is a sublattice of $D$, and $D_{0}^{*}[d]$ is subdirectly irreducible, therefore Lemma 4.28 implies that $L \cong D_{0}^{*}[d]$ is isomorphic to $D^{\prime}[d]$, where $D^{\prime}$ is a sublattice of $D$ generated by $d$ and a subset $X_{d}(D)$. Notice that there are at least $\left|X_{d}(D)\right|+1$ nonisomorphic subdirectly irreducible members in $\{D[d]\}^{\mathcal{V}}$, since if $U, V$ are two subset of different cardinality, then $U \cup\{d\}$ and $V \cup\{d\}$ generate two nonisomorphic sublattices. We now consider an interesting sequence of almost distributive lattices which is given in Lee [85], and was originally suggested by Jónsson. Let $K_{i}$ be the finite subdirectly irreducible almost distributive lattice represented by the $(i+1) \times(i+1)$ matrix $A\left(K_{i}\right)$ in Figure 4.17, and let $\mathcal{V}_{0}=\left\{K_{1}, K_{2}, K_{3}, \ldots\right\}^{\mathcal{V}}$, $$ \mathcal{V}_{i}=\left\{K_{1}, \ldots, K_{i-1}, K_{i+1}, \ldots\right\}^{\mathcal{V}} \quad \text { for } i=1,2,3, \ldots $$ LEMMA $4.31 K_{i} \notin \mathcal{V}_{i}$ for $i \in\{1,2,3 \ldots\}$. Proof. By Corollary 4.14 (vi) $K_{i}=D_{i}\left[d_{i}\right]$ for some finite distributive lattice $D_{i}, d_{i} \in D_{i}$, and $K_{i}$ is a splitting lattice, so it generates a completely join prime variety for each $i$ (Lemma 2.8). Since $\mathcal{V}_{i}=\sum_{j \neq i}\left\{K_{j}\right\}^{\mathcal{V}}$ it suffices to show that $K_{i} \notin\left\{K_{j}\right\}^{\mathcal{V}}$ for any $i \neq j$. By the preceding lemma any subdirectly irreducible lattice in $\left\{K_{j}\right\}$ is isomorphic to a sublattice of $K_{j}=D_{j}\left[d_{j}\right]$ generated by a subset of $X_{d_{j}}\left(D_{j}\right)$. If $j<i$ then $\left|X_{d_{j}}\left(D_{j}\right)\right|<$ $\left|X_{d_{i}}\left(D_{i}\right)\right|$ which certainly implies $K_{i} \notin\left\{K_{j}\right\}^{\mathcal{V}}$. 
Now suppose $j>i$ and let $X_{d_{i}}\left(D_{i}\right)=\left\{x_{1}, \ldots, x_{2 i+2}\right\}$ with corresponding natural partitions
$$
\begin{aligned}
& \left\{B_{1}, B_{2}, \ldots, B_{i+1}\right\}=\left\{\left\{x_{1}, x_{2}\right\},\left\{x_{3}, x_{4}\right\}, \ldots,\left\{x_{2 i+1}, x_{2 i+2}\right\}\right\} \\
& \left\{C_{1}, C_{2}, \ldots, C_{i+1}\right\}=\left\{\left\{x_{2}, x_{3}\right\},\left\{x_{4}, x_{5}\right\}, \ldots,\left\{x_{2 i+2}, x_{1}\right\}\right\}
\end{aligned}
$$
and $X_{d_{j}}\left(D_{j}\right)=\left\{y_{1}, \ldots, y_{2 j+2}\right\}$ with natural partitions
$$
\begin{aligned}
& \left\{B_{1}^{\prime}, B_{2}^{\prime}, \ldots, B_{j+1}^{\prime}\right\}=\left\{\left\{y_{1}, y_{2}\right\},\left\{y_{3}, y_{4}\right\}, \ldots,\left\{y_{2 j+1}, y_{2 j+2}\right\}\right\} \\
& \left\{C_{1}^{\prime}, C_{2}^{\prime}, \ldots, C_{j+1}^{\prime}\right\}=\left\{\left\{y_{2}, y_{3}\right\},\left\{y_{4}, y_{5}\right\}, \ldots,\left\{y_{2 j+2}, y_{1}\right\}\right\} .
\end{aligned}
$$
If $f$ is an embedding of $K_{i}$ into $K_{j}$, then we can assume without loss of generality that $f\left(x_{1}\right)=y_{1}$. As an embedding $f$ must map $B$-blocks onto $B^{\prime}$-blocks and $C$-blocks onto $C^{\prime}$-blocks, hence $f\left(x_{2}\right)=y_{2}, \ldots, f\left(x_{2 i+2}\right)=y_{2 i+2}$. But $f\left(C_{i+1}\right)=\left\{f\left(x_{2 i+2}\right), f\left(x_{1}\right)\right\}=\left\{y_{2 i+2}, y_{1}\right\} \notin\left\{C_{1}^{\prime}, \ldots, C_{j+1}^{\prime}\right\}$, which is a contradiction. Therefore $K_{i}$ is not isomorphic to a sublattice of $K_{j}$, and consequently $K_{i} \notin\left\{K_{j}\right\}^{\mathcal{V}}$.

Theorem 4.32 (Lee [85]). Let $\mathcal{A}$ be the variety of all almost distributive lattices and let $\mathcal{V}_{0}, \mathcal{V}_{i}$ be defined as above.

(i) $\left|\Lambda_{\mathcal{A}}\right|=2^{\omega}$.

(ii) There is an infinite descending chain of almost distributive varieties.

(iii) $\mathcal{V}_{0}$ has infinitely many dual covers.

(iv) There is an almost distributive variety with infinitely many covers in $\Lambda_{\mathcal{A}}$.

Proof. (i) By the preceding lemma, distinct subsets of $\left\{K_{1}, K_{2}, K_{3}, \ldots\right\}$ generate distinct subvarieties of $\mathcal{V}_{0}$.

(ii) Let $\mathcal{V}_{i}^{\prime}=\left\{K_{i}, K_{i+1}, K_{i+2}, \ldots\right\}^{\mathcal{V}}$ for each $i \in \omega$. Then $\mathcal{V}_{0}=\mathcal{V}_{1}^{\prime}>\mathcal{V}_{2}^{\prime}>\mathcal{V}_{3}^{\prime}>\ldots$ follows again by Lemma 4.31.

(iii) We claim that $K_{i}$ is the only finitely generated (hence finite) subdirectly irreducible member of $\mathcal{V}_{0}$ that is not in $\mathcal{V}_{i}$, from which it then follows that $\mathcal{V}_{0} \succ \mathcal{V}_{i}$ for each $i$. By Lemma 4.31, $K_{i} \notin \mathcal{V}_{i}$. Every finite subdirectly irreducible member $L \in \mathcal{V}_{0}$ is a splitting lattice, so $L \in\left\{K_{j}\right\}^{\mathcal{V}}$ for some $j$. If $i \neq j$ then $L \in \mathcal{V}_{i}$, and if $L \in\left\{K_{i}\right\}^{\mathcal{V}}$ and $L$ is not isomorphic to $K_{i}$ then, by looking at the matrix that represents $L$, we see that $L \in\left\{K_{j}\right\}^{\mathcal{V}}$ for any $j>i$, so we also have $L \in \mathcal{V}_{i}$. This proves the claim.

(iv) Let $\overline{\mathcal{V}}_{i}$ be the conjugate variety of $K_{i}$ relative to $\Lambda_{\mathcal{A}}$ ($i \in \omega$), and let $\overline{\mathcal{V}}=\bigcap_{i \in \omega} \overline{\mathcal{V}}_{i}$.
We show that $\overline{\mathcal{V}} \prec \overline{\mathcal{V}}+\left\{K_{i}\right\}^{\mathcal{V}}$ for each $i$. By Theorem 2.3 (i) every subdirectly irreducible member of $\overline{\mathcal{V}}+\left\{K_{i}\right\}^{\mathcal{V}}$ belongs to $\overline{\mathcal{V}}$ or $\left\{K_{i}\right\}^{\mathcal{V}}$. Let $L$ be a subdirectly irreducible lattice in $\left\{K_{i}\right\}^{\mathcal{V}}$. Lemma 4.30 implies that $L$ is a sublattice of $K_{i}$, so $K_{j} \notin\{L\}^{\mathcal{V}}$ for any $j \neq i$. It follows that $L \in \overline{\mathcal{V}}$ or $L \cong K_{i}$, hence $K_{i}$ is the only subdirectly irreducible lattice in $\overline{\mathcal{V}}+\left\{K_{i}\right\}^{\mathcal{V}}$ which is not in $\overline{\mathcal{V}}$.

### Further Sequences of Varieties

In Section 4.3 we saw that above each of the varieties $\mathcal{L}_{13}, \mathcal{L}_{14}$ and $\mathcal{L}_{15}$ there is exactly one covering sequence of join irreducible varieties (Theorem 4.26). These results are due to Rose [84], and he also proved the corresponding results for $\mathcal{L}_{6}, \ldots, \mathcal{L}_{10}$. Since these varieties are not almost distributive, the proofs are more involved. Here we only consider the sequence $\mathcal{L}_{6}^{n}$ above $\mathcal{L}_{6}$.

Some technical results.

Let $L$ be a lattice and $X$ a subset of $L$. An element $z \in L$ is said to be $X$-join isolated if $z=x+y$ and $x, y<z$ imply $x, y \in X$. The notion of an $X$-meet isolated element is defined dually. A quotient $u / v$ of $L$ is said to be isolated if every element of $u / v$ is $u / v$-join isolated and $u / v$-meet isolated. The next four lemmas (4.33-4.36) appear in Rose [84], where they are used to prove that the variety $\mathcal{L}_{i}^{n+1}$ is the only join irreducible cover of $\mathcal{L}_{i}^{n}$ for $i \in\{6,7,8,9,10\}$ (see Figure 2.2). These lemmas only apply to lattices satisfying certain conditions, summarized here as Condition (*).

Condition (*). $L$ is a finite subdirectly irreducible neardistributive lattice with critical quotient $c / a$ (which is unique by Theorem 4.8). Furthermore $c^{\prime} / a^{\prime}$ is a quotient of $L$ such that

(i) $a^{\prime} \leq a<c \leq c^{\prime}$;

(ii) any $z \in c^{\prime} / a^{\prime}-\left\{a^{\prime}\right\}$ is $c^{\prime} / a^{\prime}$-join isolated;

(iii) any $z \in c^{\prime} / a^{\prime}-\left\{c^{\prime}\right\}$ is $c^{\prime} / a^{\prime}$-meet isolated.

Observe that if $b \notin c^{\prime} / a^{\prime}$ and $b$ is noncomparable with some $z \in c^{\prime} / a^{\prime}$, then $b$ is noncomparable with all the elements of $c^{\prime} / a^{\prime}$. Moreover, $a^{\prime}+b=z+b=c^{\prime}+b$ and $a^{\prime} b=z b=c^{\prime} b$, which implies $N\left(c^{\prime} / a^{\prime}, b\right)$. Hence, for any $b \notin c^{\prime} / a^{\prime}$, the conditions $N\left(c^{\prime} / a^{\prime}, b\right)$ and $N(c / a, b)$ are equivalent.

Lemma 4.33. Suppose $L$ satisfies condition (*).

(i) If $u \succ c^{\prime}$ in $L$, then there exists $b \in L$ such that $N\left(c^{\prime} / a^{\prime}, b\right)$ and $u=a^{\prime}+b \succ b \succ a^{\prime} b$.

(ii) If $L$ excludes $L_{7}$, then we also have $a^{\prime} \succ a^{\prime} b$.

Proof. (i) By Lemma 4.16 (iii) there exists $b \in L$ such that $N(c / a, b)$, $b \leq u$, $b \not\leq c^{\prime}$ and $u / c^{\prime} \searrow_{\beta} a+b /(a+b) c^{\prime}$.
Since $b$ is noncomparable with $c^{\prime}$, $N\left(c^{\prime} / a^{\prime}, b\right)$ follows from the remark above, and we cannot have $(a+b) c^{\prime}<c^{\prime}$, since then $(a+b) c^{\prime}$ would not be $c^{\prime} / a^{\prime}$-meet isolated. So $(a+b) c^{\prime}=c^{\prime}$ and therefore $u=a+b=a^{\prime}+b$. Since $L$ is finite we can choose $t$ such that $b \leq t \prec a^{\prime}+b$. Then $t$ is also noncomparable with $c^{\prime}$, so we get $N\left(c^{\prime} / a^{\prime}, t\right)$, and of course $u=a^{\prime}+t$. Hence we may assume that $u \succ b$. Also $b \succ a^{\prime} b$, since $a^{\prime} b<t<b$ would imply $N\left(b / t, a^{\prime}\right)$, hence $N(b / t, a)$, and by Corollary 4.7 $N(c / a, a)$, which is impossible.

(ii) Suppose, to the contrary, that $a^{\prime} b<t \prec a^{\prime}$ for some $t \in L$. By the dual of part (i) there exists $b_{0} \in L$ with $N\left(c^{\prime} / a^{\prime}, b_{0}\right)$ and $t=a^{\prime} b_{0} \prec b_{0} \prec a^{\prime}+b_{0}$ (Figure 4.18 (i)). Since $a^{\prime}+b \succ b$, we have $t+b=a^{\prime}+b$ and so $N\left(a^{\prime} / t, b\right)$. Now $a^{\prime} / t \nearrow a^{\prime}+b_{0} / b_{0}$ and Corollary 4.7 imply $N\left(a^{\prime}+b_{0} / b_{0}, b\right)$. Thus $a^{\prime}+b_{0} \not\geq a^{\prime}+b$, which clearly implies that $b_{0}$ and $a^{\prime}+b_{0}$ are noncomparable with $a^{\prime}+b$. Since $a^{\prime}+b \succ c^{\prime}$, $b \succ a^{\prime} b$ and $b_{0} \succ t$, we must have $\left(a^{\prime}+b\right)\left(a^{\prime}+b_{0}\right)=c^{\prime}$, $\left(a^{\prime}+b_{0}\right) b=a^{\prime} b$ and $\left(a^{\prime}+b\right) b_{0}=t$. Hence the elements $a^{\prime}, b$ and $b_{0}$ generate $L_{7}$ (Figure 4.18 (i)), and this contradiction completes the proof.

Figure 4.18

We now add the following condition.

Condition (**). $b$ is an element of $L$ such that $N\left(c^{\prime} / a^{\prime}, b\right)$ and $a^{\prime}+b / a^{\prime} b=\left\{b, a^{\prime}+b, a^{\prime} b\right\} \cup c^{\prime} / a^{\prime}$.

Lemma 4.34. If $L$ satisfies conditions (*), (**) and excludes $L_{14}$, then for $x, y \in L$,

(i) $a^{\prime}+b=a^{\prime}+y>y$ implies $y \leq b$;

(ii) $a^{\prime}+b=x+b>x$ implies $x \leq c^{\prime}$.

Proof. (i) If $y \not\leq b$, then $y$ is noncomparable with $b$ and with $a^{\prime}$. We claim that $y$ can be chosen so that $a^{\prime}, c^{\prime}, b, y$ generate $L_{14}$ (see Figure 4.18 (ii)). We may assume that $y \prec a^{\prime}+y$. If $a^{\prime} y<t<y$, then we would have $N\left(y / t, a^{\prime}\right)$, hence $N(y / t, a)$, and by Corollary 4.7 $N(c / a, a)$, which is impossible. Therefore $y \succ a^{\prime} y$. By semidistributivity
$$
a^{\prime}+b=a^{\prime}+y=b+y=a^{\prime} b+a^{\prime} y+b y .
$$
From this it follows that the elements $a^{\prime} b=c^{\prime} b$, $a^{\prime} y=c^{\prime} y$ and $b y$ are noncomparable, and therefore
$$
a^{\prime}=a^{\prime} b+a^{\prime} y, \quad b=a^{\prime} b+b y \quad \text { and } \quad y=a^{\prime} y+b y .
$$
This shows that $a^{\prime}, b$ and $y$ generate an eight-element Boolean algebra. Since $N\left(c^{\prime} / a^{\prime}, b\right)$ and $N\left(c^{\prime} / a^{\prime}, y\right)$ hold, $L$ includes $L_{14}$.

(ii) If $x \not\leq c^{\prime}$, then $a^{\prime}+x=a^{\prime}+b$, and since we cannot have $x \leq b$, the argument of part (i) implies that $L$ includes $L_{14}$, contrary to hypothesis.

Lemma 4.35. If $L$ satisfies conditions (*), (**) and excludes $L_{7}, L_{13}$ and $L_{15}$, then $c^{\prime}$ is meet irreducible.
Proof. Suppose $c^{\prime}$ is meet reducible. Then there exists an element $x$ covering $c^{\prime}$ such that $c^{\prime}=x\left(a^{\prime}+b\right)$. By Lemma 4.33 (i) there exists $b_{0} \in L$ with $N\left(c^{\prime} / a^{\prime}, b_{0}\right)$ and $x=a^{\prime}+b_{0} \succ b_{0} \succ a^{\prime} b_{0}$. The elements $a^{\prime}+b, a^{\prime}+b_{0}$ and $b+b_{0}$ generate a lattice $K$ that is a homomorphic image of the lattice in Figure 4.19 (i). If $K$ is isomorphic to that lattice, then $b b_{0} \leq a^{\prime}$, since $b b_{0} \not\leq a^{\prime}$ would imply $b b_{0}+a^{\prime} \in c^{\prime} / a^{\prime}-\left\{a^{\prime}\right\}$, contradicting the assumption that every element of $c^{\prime} / a^{\prime}-\left\{a^{\prime}\right\}$ is $c^{\prime} / a^{\prime}$-join isolated. In fact we must have $b b_{0}<a^{\prime}$, since $a^{\prime}$ is $c^{\prime} / a^{\prime}$-meet isolated. Thus $K \cup\left\{a^{\prime}\right\}$ is a sublattice of $L$ isomorphic to $L_{13}$, contrary to the hypothesis. We infer that $K$ is a proper homomorphic image of the lattice in Figure 4.19 (i), and since $a^{\prime}+b, a^{\prime}+b_{0}$ and $c^{\prime}$ are distinct, it follows that $c^{\prime}<b+b_{0}$. Now Figure 4.19 (ii) shows that if $a^{\prime} b$ and $a^{\prime} b_{0}$ are noncomparable, then $L$ includes $L_{15}$, while $a^{\prime} b<a^{\prime} b_{0}$ or $a^{\prime} b_{0}<a^{\prime} b$ imply that $L$ includes $L_{7}$. Finally, we cannot have $a^{\prime} b=a^{\prime} b_{0}$, since then $L$ includes $L_{1}$, which contradicts the semidistributivity of $L$.

Figure 4.19

Figure 4.20

Lemma 4.36. If $L$ satisfies conditions (*), (**) and excludes $L_{9}, L_{13}, L_{14}$ and $L_{15}$, then $b$ is meet irreducible.

Proof. To avoid repetition, we first establish two technical results:

(A) If $N(u / v, z)$ for some $u / v \nsubseteq c^{\prime} / a^{\prime}$ and $z \notin c^{\prime} / a^{\prime}$, then there exists $y \in L$ with $N\left(c^{\prime} / a^{\prime}, y\right)$ such that $N\left(a^{\prime}+y / a^{\prime}, z\right)$, $N\left(y / a^{\prime} y, z\right)$ and $a^{\prime}+y \succ c^{\prime}$ (Figure 4.20 (i)), or dually.

Consider a sequence $u / v=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=c / a$. Since $c / a$ is a subquotient of $c^{\prime} / a^{\prime}$ and $u / v$ is not, there is an index $i$ such that $x_{i} / y_{i} \nsubseteq c^{\prime} / a^{\prime}$ and $x_{i+1} / y_{i+1} \subseteq c^{\prime} / a^{\prime}$. By duality, suppose that $x_{i} / y_{i} \searrow_{w} x_{i+1} / y_{i+1}$. Since $y_{i+1}$ is $c^{\prime} / a^{\prime}$-meet isolated, $y_{i} \in c^{\prime} / a^{\prime}$, and since $x_{i} c^{\prime}<c^{\prime}$ would also imply $x_{i} \in c^{\prime} / a^{\prime}$, we must have $x_{i} c^{\prime}=c^{\prime}$, and therefore $x_{i}>c^{\prime} \geq y_{i}$. Now Lemma 4.6, and the fact that $u / v$ projects weakly onto $x_{i} / y_{i}$ and $c / a$, imply $N\left(x_{i} / y_{i}, z\right)$ and $N(c / a, z)$. Since $z \notin c^{\prime} / a^{\prime}$ we must have $N\left(c^{\prime} / a^{\prime}, z\right)$. Choose $x \in L$ such that $c^{\prime} \prec x \leq x_{i}$; then clearly $N\left(x / a^{\prime}, z\right)$ holds (Figure 4.20 (ii)). By Lemma 4.33 (i) there exists $y \in L$ with $N\left(c^{\prime} / a^{\prime}, y\right)$ and $x=a^{\prime}+y$. Since $x / a^{\prime} \searrow y / a^{\prime} y$, Lemma 4.6 again implies $N\left(y / a^{\prime} y, z\right)$. This proves (A).
(B) If for some $u, v, z \in L$ with $z \geq b$ we have $N(u / v, z)$, then $u / v \subseteq c^{\prime} / a^{\prime}$.

Suppose $u / v \nsubseteq c^{\prime} / a^{\prime}$. Since clearly $z \notin c^{\prime} / a^{\prime}$, (A) implies that there exists $b_{0} \in L$ such that $N\left(c^{\prime} / a^{\prime}, b_{0}\right)$ and either

(1) $N\left(a^{\prime}+b_{0} / a^{\prime}, z\right)$, $N\left(b_{0} / a^{\prime} b_{0}, z\right)$ and $c^{\prime} \prec a^{\prime}+b_{0}$, or

(2) $N\left(c^{\prime} / a^{\prime} b_{0}, z\right)$, $N\left(a^{\prime}+b_{0} / b_{0}, z\right)$ and $a^{\prime} b_{0} \prec a^{\prime}$.

We will show that, contrary to the hypothesis of the lemma, the elements $a^{\prime}, c^{\prime}, b$ and $b_{0}$ generate $L_{15}$. Since we already know that $N\left(c^{\prime} / a^{\prime}, b\right)$ and $N\left(c^{\prime} / a^{\prime}, b_{0}\right)$, it suffices to check that $a^{\prime} b+a^{\prime} b_{0}=a^{\prime}$ and $\left(a^{\prime}+b\right)\left(a^{\prime}+b_{0}\right)=c^{\prime}$. Either of (1) or (2) implies that $z$ is noncomparable with $a^{\prime} b_{0}$ and $a^{\prime}+b_{0}$. Since $a^{\prime} b<b \leq z$ we must have $a^{\prime} b_{0} \not\leq a^{\prime} b$. Strict inclusion $a^{\prime} b<a^{\prime} b_{0}$ is also not possible, because $a^{\prime} b \prec a^{\prime}$ and $a^{\prime} b_{0}<a^{\prime}$. Thus $a^{\prime} b$ and $a^{\prime} b_{0}$ are noncomparable, and since $a^{\prime} b \prec a^{\prime}$, it follows that $a^{\prime} b+a^{\prime} b_{0}=a^{\prime}$. Next note that $a^{\prime} z=a^{\prime} b$, because $a^{\prime} b \prec a^{\prime}$ and $a^{\prime} b \leq a^{\prime} z<a^{\prime}$. Hence $a^{\prime} z=a^{\prime} b>a^{\prime} b b_{0}=a^{\prime} b_{0} z$, so we cannot have $N\left(c^{\prime} / a^{\prime} b_{0}, z\right)$ in (2). Therefore (1) must hold, and in particular $N\left(a^{\prime}+b_{0} / a^{\prime}, z\right)$, whence it follows that $a^{\prime}+b_{0} \nsucceq b$. Thus $c^{\prime} \leq\left(a^{\prime}+b\right)\left(a^{\prime}+b_{0}\right)<a^{\prime}+b$, and since $c^{\prime} \prec a^{\prime}+b$, $c^{\prime}=\left(a^{\prime}+b\right)\left(a^{\prime}+b_{0}\right)$. This proves (B).

Proceeding now with the proof of the lemma, suppose $b$ is meet reducible. Then we can find $z \succ b$ such that $b=\left(a^{\prime}+b\right) z$. Consider a shortest sequence
$$
z / b=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=c / a .
$$
Clearly $n \geq 2$. The case $n=2$ can also be ruled out, since $z / b$ is a transpose of $x_{1} / y_{1}$, while Theorem 4.8 implies that $c / a$ is a subquotient of $x_{n-1} / y_{n-1}$, hence $x_{1} / y_{1}=x_{n-1} / y_{n-1}$ would imply $z=x_{1}+b \geq a^{\prime}+b$ or $b=y_{1} z \leq a b$, both of which are impossible. Thus $n \geq 3$. If $z / b \nearrow x_{1} / y_{1} \searrow x_{2} / y_{2}$, then $x_{1} / y_{1}$ is prime, since $y_{1}<t<x_{1}$ would imply $N\left(t / y_{1}, z\right)$, and by (B) $t / y_{1} \subseteq c^{\prime} / a^{\prime}$, which leads to a contradiction, as $b \not\leq c^{\prime}$. Similarly $x_{2} / y_{2}$ must be prime, because $y_{2}<t<x_{2}$ would imply $N\left(x_{2} / t, y_{1}\right)$, whence (B) gives $x_{2} / t \subseteq c^{\prime} / a^{\prime}$. This contradicts the semidistributivity of $L$, since $x_{1}=y_{1}+z=y_{1}+x_{2}$, but $x_{2} z=c^{\prime} z<y_{1}$. Hence $z / b \nearrow_{\beta} x_{1} / y_{1} \searrow_{\beta} x_{2} / y_{2}$, and now Lemma 4.4 implies that the sequence can be shortened, contrary to our assumption. Consequently we must have $z / b \searrow x_{1} / y_{1} \nearrow x_{2} / y_{2}$.
Observe that $x_{1} \notin c^{\prime} / a^{\prime}$, for otherwise $z=x_{1}+b=a^{\prime}+b$. Again the quotient $x_{1} / y_{1}$ is prime, since $y_{1}<t<x_{1}$ would imply $N\left(x_{1} / t, b\right)$, contradicting (B). However $x_{2} / y_{2}$ cannot be prime because of the minimality of $n$. So there exist $u, v \in L$ with $y_{2}<v<u<x_{2}$ such that $N\left(u / v, x_{1}\right)$ holds (Figure 4.21 (i)). By Corollary 4.7 we have $N\left(c / a, x_{1}\right)$, and since $x_{1} \notin c^{\prime} / a^{\prime}$, $N\left(c^{\prime} / a^{\prime}, x_{1}\right)$ holds. Notice that $y_{1}=\left(a^{\prime}+b\right) z x_{1}=\left(a^{\prime}+b\right) x_{1} \geq a^{\prime} x_{1}$. We claim that $y_{1}=a^{\prime} x_{1}$. Suppose to the contrary that $a^{\prime} x_{1}<y_{1}$. Then $u / v \nsubseteq c^{\prime} / a^{\prime}$ since $u x_{1}=y_{1} \neq a^{\prime} x_{1}$. By (A) there exists $b_{0} \in L$ with $N\left(c^{\prime} / a^{\prime}, b_{0}\right)$ such that $N\left(a^{\prime}+b_{0} / a^{\prime}, x_{1}\right)$ and $a^{\prime}+b_{0} \succ c^{\prime}$, or dually $N\left(c^{\prime} / a^{\prime} b_{0}, x_{1}\right)$ and $a^{\prime} b_{0} \prec a^{\prime}$.

First suppose that $N\left(a^{\prime}+b_{0} / a^{\prime}, x_{1}\right)$. We cannot have $a^{\prime}+b_{0}<a^{\prime}+b$ since $a^{\prime}+b \succ c^{\prime}$. On the other hand $a^{\prime}+b \leq a^{\prime}+b_{0}$ implies $N\left(a^{\prime}+b / a^{\prime}, x_{1}\right)$, whence $a^{\prime} x_{1}=\left(a^{\prime}+b\right) x_{1}=y_{1}$, a contradiction. Therefore $a^{\prime}+b$ and $a^{\prime}+b_{0}$ are noncomparable and $\left(a^{\prime}+b\right)\left(a^{\prime}+b_{0}\right)=c^{\prime}$. Since $L$ excludes $L_{13}$ and $L_{15}$, it follows as in the proof of Lemma 4.35 that $a^{\prime}, c^{\prime}, b$ and $b_{0}$ generate $L_{7}$. Thus $a^{\prime}+b<b+b_{0}$, and as $a^{\prime} \succ a^{\prime} b$, we can only have $a^{\prime} b_{0}<a^{\prime} b$. By Lemma 4.6, $N\left(a^{\prime}+b_{0} / a^{\prime}, x_{1}\right)$ and $a^{\prime}+b_{0} / a^{\prime} \searrow b_{0} / a^{\prime} b_{0}$ imply $N\left(b_{0} / a^{\prime} b_{0}, x_{1}\right)$. Hence $a^{\prime} b_{0}+x_{1}=b_{0}+x_{1}$, and together with $a^{\prime} b_{0} \leq z$ and $x_{1} \leq z$ this implies $b_{0} \leq b_{0}+x_{1}=a^{\prime} b_{0}+x_{1} \leq z$. It follows that $a^{\prime}+b<b+b_{0} \leq z$, which is a contradiction. Now suppose that $N\left(c^{\prime} / a^{\prime} b_{0}, x_{1}\right)$. Since we are also assuming that $L$ excludes $L_{14}$, we can dualize the above argument to again obtain a contradiction. Thus $y_{1}=a^{\prime} x_{1}$.

We complete the proof by showing that $a^{\prime}, c^{\prime}, b$ and $x_{1}$ generate $L_{9}$ (Figure 4.21 (ii)). Clearly $a^{\prime} \geq a^{\prime} x_{1}=y_{1}$ implies $a^{\prime} b \geq y_{1} b=y_{1}$. In fact we must have $a^{\prime} b>y_{1}$, since $a^{\prime} b=y_{1}=a^{\prime} x_{1}<x_{1}$ would imply $x_{1} \geq b$ by the dual of Lemma 4.34 (i), a contradiction. Also $a^{\prime}\left(a^{\prime} b+x_{1}\right)=a^{\prime} b<a^{\prime} b+x_{1}$, since $a^{\prime} \succ a^{\prime} b$, and now the dual of Lemma 4.34 (i) implies $a^{\prime} b+x_{1} \geq b$. Hence $a^{\prime} b+x_{1}=b+x_{1}=z$. Finally $a^{\prime}+b / c^{\prime} \searrow b / a^{\prime} b$, $N\left(b / a^{\prime} b, x_{1}\right)$ and Lemma 4.6 imply $N\left(a^{\prime}+b / c^{\prime}, x_{1}\right)$, whence $a^{\prime} x_{1}=\left(a^{\prime}+b\right) x_{1}$.

Figure 4.21

The sequence $\mathcal{L}_{6}^{n}$.
The next theorem is in preparation for proving the result, due to Rose [84], that $\mathcal{L}_{6}^{n+1}$ is the only join irreducible cover of $\mathcal{L}_{6}^{n}$. A quotient $c / a$ of a lattice $L$ is an $L_{6}^{n}$-quotient if for some $b, b_{0}, \ldots, b_{n} \in L$ the set $\left\{a, c, b, b_{0}, \ldots, b_{n}\right\}$ generates a sublattice of $L$ isomorphic to $L_{6}^{n}$, with $c / a$ as critical quotient (Figure 2.2). In this case we shall write $L_{6}^{n}\left(c / a, b, b_{0}, \ldots, b_{n}\right)$.

Theorem 4.37 (Rose [84]). Let $L$ be a subdirectly irreducible lattice, and assume that the variety $\{L\}^{\mathcal{V}}$ contains none of the lattices $M_{3}, L_{1}, \ldots, L_{5}, L_{7}, \ldots, L_{15}$. Suppose further that, for some $k \in \omega$, $c / a$ is an $L_{6}^{k}$-quotient of $L$. Then

(i) $c / a$ is the unique critical quotient of $L$, and $L / \operatorname{con}(a, c)$ has no $L_{6}^{k}$-quotients;

(ii) if $L$ is finite and $L \not\cong L_{6}^{k}$, then $c / a$ is an $L_{6}^{k+1}$-quotient.

Proof. (i) By Theorem 4.1, $\{L\}^{\mathcal{V}}$ is semidistributive, and by Theorem 4.8, $L$ has a unique critical quotient, which we denote by $x / y$. Choose $b, b_{0}, \ldots, b_{k}$ so that $L_{6}^{k}\left(c / a, b, b_{0}, \ldots, b_{k}\right)$ holds. We will prove several statements, the last of which shows that $x / y=c / a$. The first three are self-evident.

(A) Any nontrivial subquotient $c^{\prime} / a^{\prime}$ of $c / a$ is an $L_{6}^{k}$-quotient.

(B) Suppose that for some $a^{\prime}, c^{\prime}, z \in L$ we have $N\left(c^{\prime} / a^{\prime}, z\right)$ with $a \leq a^{\prime} z<a^{\prime}+z \leq c$. Then $L_{6}^{k+1}\left(c^{\prime} / a^{\prime}, z, b, b_{0}, \ldots, b_{k}\right)$ holds (see Figure 4.22).

(C) Suppose that for some $z \in L$ we have $N\left(a+b_{i} / a b_{i}, z\right)$ ($i \in\{0, \ldots, k\}$). Then $L_{6}^{i+1}\left(c / a, b, b_{0}, \ldots, b_{i}, z\right)$ holds, and similarly if $N(a+b / a b, z)$ then we have $L_{6}^{0}(c / a, b, z)$.

Figure 4.22

(D) For any quotients $u / v$ and $p / q$ in $L$, if $u / v \nearrow p / q \searrow c / a$, then $u / v \searrow u c / v a \nearrow c / a$, and all four transpositions are bijective.

By Lemma 4.5 the lattice generated by $q, c, b$ is a homomorphic image of the lattice in Figure 4.23 (i). The pentagon $N(r / d, b)$ is contained in $a+b / a b$, whence it follows that $L_{6}^{k}\left(r / d, b, b_{0}, \ldots, b_{k}\right)$. From this we infer that $r / d$ is distributive, for otherwise $r / d$ would contain a pentagon $N\left(c^{\prime} / a^{\prime}, b^{\prime}\right)$ (by semidistributivity $L$ excludes $M_{3}$), and we would have
$$
L_{6}^{k+1}\left(c^{\prime} / a^{\prime}, b^{\prime}, b, b_{0}, \ldots, b_{k}\right) .
$$
Hence the transposition $r / s \searrow e / d$ is bijective. By Lemma 4.6, the transpositions $p / q \searrow r / s$ and $e / d \searrow c / a$ are also bijective, and we consequently have $p / q \searrow_{\gamma} c / a$. Again by Lemma 4.5, the lattice generated by $q, u, b$ is a homomorphic image of the lattice in Figure 4.23 (ii). Note that $a b \leq b q<b \leq a+b$, whence $N\left(b / b q, b_{0}\right)$. Since $v+b / d^{\prime} \searrow b / b q$, it follows by still another application of Lemma 4.5 that the lattice generated by $d^{\prime}, b$ and $b_{0}$ is a homomorphic image of the lattice in Figure 4.23 (iii), and by Lemma 4.12 the transposition $v+b / d^{\prime} \searrow r^{\prime \prime} / s^{\prime \prime}$ is bijective.
Put $t=r^{\prime}\left(b+b_{0}\right)$ to obtain $N\left(t / s^{\prime \prime}, b\right)$ and therefore $L_{6}^{k}\left(t / s^{\prime \prime}, b, b_{0}, \ldots, b_{k}\right)$. This implies that $t / s^{\prime \prime}$ is distributive, and so is $r^{\prime} / d^{\prime}$, since the two quotients are isomorphic. The transposition $e^{\prime} / d^{\prime} \nearrow r^{\prime} / s^{\prime}$ is therefore bijective, and the bijectivity of $u / v \nearrow e^{\prime} / d^{\prime}$ and $r^{\prime} / s^{\prime} \nearrow p / q$ follows from Lemma 4.12. Consequently $u / v \nearrow_{\beta} p / q$. Now semidistributivity (Lemma 4.4) implies $u / v \searrow u c / v a \nearrow c / a$. By duality, these two transpositions must also be bijective.

Figure 4.23

(E) If $c / a$ projects weakly onto a quotient $u / v$, then $u / v \searrow u c^{\prime} / v a^{\prime} \nearrow c^{\prime} / a^{\prime}$ for some subquotient $c^{\prime} / a^{\prime}$ of $c / a$.

Assume that $c / a=x_{0} / y_{0} \sim_{w} x_{1} / y_{1} \sim_{w} \ldots \sim_{w} x_{n} / y_{n}=u / v$, where the transpositions alternate up and down. We use induction on $n$. The cases $n=0,1$ are trivial, so by duality we may assume that $c / c y_{1} \nearrow x_{1} / y_{1} \supseteq y_{1}+x_{2} / y_{1} \searrow x_{2} / y_{2}$. Since $c / c y_{1}$ is also an $L_{6}^{k}$-quotient, we can apply (D) to conclude that the first transpose must be bijective. Hence $y_{1}+x_{2} / y_{1}$ transposes bijectively onto a subquotient $c^{\prime} / a^{\prime}$ of $c / a$ ($a^{\prime}=c y_{1}$). A second application of (D) gives $c^{\prime} / a^{\prime} \searrow c^{\prime} x_{2} / a^{\prime} y_{2} \nearrow x_{2} / y_{2}$, proving the case $n=2$, while for $n>2$ the sequence can now be shortened by one step. The result follows by induction.

(F) $x / y=c / a$.

Since $x / y$ is critical and prime, $c / a$ projects weakly onto $x / y$. By (E) $x / y$ projects onto a subquotient $c^{\prime} / a^{\prime}$ of $c / a$ and since $x / y$ is the only critical quotient of $L$, we must have $x / y=c^{\prime} / a^{\prime}$. If $x<c$, then the hypothesis of part (i) (of the theorem) is satisfied with $a$ replaced by $x$, and we infer that $x / y$ is a subinterval of $c / x$, which is impossible. Hence $x=c$, and similarly $y=a$, which also shows that $c / a$ is the only $L_{6}^{k}$-quotient of $L$.

To complete the proof of part (i), suppose $\bar{L}=L / \operatorname{con}(a, c)$ contains an $L_{6}^{k}$-quotient, i.e. for some $u, v, d, d_{0}, \ldots, d_{k} \in L$ we have $L_{6}^{k}\left(\bar{u} / \bar{v}, \bar{d}, \bar{d}_{0}, \ldots, \bar{d}_{k}\right)$ in $\bar{L}$. If $c=u$ in $L$, then $u=c>a>v$ and $L_{6}^{k}\left(u / v, d, d_{0}, \ldots, d_{k}\right)$, which would contradict the fact that $c / a$ is the only $L_{6}^{k}$-quotient of $L$. Thus $c \neq u$ and, similarly, $a \neq u$ and $c, a \neq v$. If $a=d$, then we must have $N(u / v, a)$ in $L$. But by Corollary 4.7 this would imply $N(c / a, a)$, which is impossible. So $a \neq d$ and, more generally, $c, a \notin\left\{d, d_{0}, \ldots, d_{k}\right\}$. Since $\operatorname{con}(a, c)$ identifies only $a$ and $c$, we infer that $L_{6}^{k}\left(u / v, d, d_{0}, \ldots, d_{k}\right)$ holds in $L$ with $u / v \neq c / a$, and this contradiction concludes part (i).

For the proof of part (ii), we will use the concept of an isolated quotient and all its implications (Lemmas 4.33-4.36). Let $c^{\prime} / a^{\prime}$ be an isolated quotient of $L$ such that $c / a \subseteq c^{\prime} / a^{\prime}$.
(G) Suppose that for some $b \in L$ we have $N\left(c^{\prime} / a^{\prime}, b\right)$ with $a^{\prime} b \prec a^{\prime}$, $c^{\prime} \prec a^{\prime}+b$ and $a^{\prime} b \prec b \prec a^{\prime}+b$. Then

(1) $a^{\prime}+b / a^{\prime} b=c^{\prime} / a^{\prime} \cup\left\{a^{\prime} b, b, a^{\prime}+b\right\}$;

(2) $a^{\prime}+b / a^{\prime} b$ is an isolated quotient of $L$.

Assume (1) fails. Then there exists $x \in L$ such that $x \in a^{\prime}+b / a^{\prime} b$ but $x \notin c^{\prime} / a^{\prime} \cup\left\{a^{\prime} b, b, a^{\prime}+b\right\}$. Since $a^{\prime} b<x<a^{\prime}+b$ and $a^{\prime} b \prec b \prec a^{\prime}+b$, it follows that $b$ and $x$ are noncomparable and $x b=a^{\prime} b$, $x+b=a^{\prime}+b$. Furthermore, as $c^{\prime} / a^{\prime}$ is isolated, $x$ is noncomparable with $a^{\prime}$ and $c^{\prime}$, whence $a^{\prime}+x=a^{\prime}+b$. This, however, contradicts the semidistributivity of $L$, since $a^{\prime}+b \neq a^{\prime}+x b=a^{\prime}$. Therefore (1) holds.

To prove (2), it suffices to show that

(3) $a^{\prime}$ is join irreducible and $c^{\prime}$ is meet irreducible;

(4) $b$ is both join and meet irreducible;

(5) $x \in L$ and $a^{\prime}+b=x+b>x$ imply $x \in c^{\prime} / a^{\prime}$;

(6) $y \in L$ and $a^{\prime}+b=a^{\prime}+y>y$ imply $y=b$;

(7) $x \in L$ and $a^{\prime} b=x b<x$ imply $x \in c^{\prime} / a^{\prime}$;

(8) $y \in L$ and $a^{\prime} b=a^{\prime} y<y$ imply $y=b$.

(3) and (4) follow from Lemmas 4.35 and 4.36 and their duals respectively. Suppose $a^{\prime}+b=x+b>x$. Then $x \leq c^{\prime}$ by Lemma 4.34 (ii) and, since $x+b \neq b$, we have $x \not\leq c^{\prime} b=a^{\prime} b$. Now $x \nless a^{\prime}$, because $a^{\prime} b$ is the only dual cover of $a^{\prime}$. Since $c^{\prime} / a^{\prime}$ is isolated, this implies $x \in c^{\prime} / a^{\prime}$, whence (5) holds. If $a^{\prime}+b=a^{\prime}+y>y$, then Lemma 4.34 (i) implies that $y \leq b$, and from the join irreducibility of $b$ we infer $y=b$, thereby proving (6). Finally, (7) and (8) are the duals of (5) and (6).

(H) If $L_{6}^{k}\left(c / a, b, b_{0}, \ldots, b_{k}\right)$, then the elements $b, b_{0}, \ldots, b_{k} \in L$ can be chosen such that
$$
\begin{aligned}
a+b / a b & =c / a \cup\{a b, b, a+b\}, \\
a+b_{0} / a b_{0} & =a+b / a b \cup\left\{a b_{0}, b_{0}, a+b_{0}\right\}, \\
a+b_{i} / a b_{i} & =a+b_{i-1} / a b_{i-1} \cup\left\{a b_{i}, b_{i}, a+b_{i}\right\} \quad \text { for } \quad i \in\{1,2, \ldots, k\},
\end{aligned}
$$
and all these quotients are isolated.

By Lemma 4.35 and its dual, the quotient $c / a$ is isolated. Choose $x \in L$ with $c \prec x \leq a+b$. Since $L$ excludes $L_{7}$, Lemma 4.33 (i), (ii) and (G) above imply the existence of $b^{\prime} \in L$ with $N\left(c / a, b^{\prime}\right)$ and $a+b^{\prime}=x$, such that the sublattice generated by $c / a$ and $b^{\prime}$ is the interval $a+b^{\prime} / a b^{\prime}$ of $L$ and is isolated. Since $a$ is join irreducible and $a b^{\prime} \prec a$, we infer that $a b^{\prime} \geq a b$. Thus $a b \leq a b^{\prime} \prec b^{\prime} \prec x \leq a+b$, whence it follows that $L_{6}^{k}\left(c / a, b^{\prime}, b_{0}, \ldots, b_{k}\right)$. So we may replace $b$ by $b^{\prime}$, and continuing in this way we prove (H).

Since $c / a$ is a prime $L_{6}^{k}$-quotient of $L$, (H) implies that we can find $b, b_{0}, \ldots, b_{k}$ in $L$ such that the sublattice generated by $c / a$ and these $b$'s is an interval of $L$.
Since $L \not\cong L_{6}^{k}$, there exists $u \in L$ with $u \succ a+b_{k}$ or $u \prec a b_{k}$, and from Lemma 4.33 (i) or its dual we obtain $b^{\prime} \in L$ such that $N\left(a+b_{k} / a b_{k}, b^{\prime}\right)$, which by (C) implies $L_{6}^{k+1}\left(c / a, b, b_{0}, \ldots, b_{k}, b^{\prime}\right)$ as required.

After much technical detail we can finally prove:

Theorem 4.38 (Rose [84]). $\mathcal{L}_{6}^{n+1}$ is the only join irreducible cover of $\mathcal{L}_{6}^{n}$.

Proof. Suppose to the contrary that for some natural number $n$, the variety $\mathcal{L}_{6}^{n}=\left\{L_{6}^{n}\right\}^{\mathcal{V}}$ has a join irreducible cover $\mathcal{V} \neq \mathcal{L}_{6}^{n+1}$. Choose $n$ as small as possible. Since $\mathcal{V}$ has finite height in $\Lambda$, it is completely join irreducible, so it follows from Theorem 2.5 that $\mathcal{V}=\{L\}^{\mathcal{V}}$ for some finitely generated subdirectly irreducible lattice $L$. Note that $L_{6}^{n} \in\{L\}^{\mathcal{V}}$. Using the results of Section 2.3 one can check that $L_{6}^{n}$ is a splitting lattice, and since it also satisfies Whitman's condition (W), Theorem 2.19 implies that $L_{6}^{n}$ is projective in $\mathcal{L}$. By Lemma 2.10, $L_{6}^{n}$ is a sublattice of $L$, so for some $a, c, b, b_{0}, \ldots, b_{n} \in L$ we have $L_{6}^{n}\left(c / a, b, b_{0}, \ldots, b_{n}\right)$. By Theorem 4.37 (i) $c / a$ is critical, and $L / \operatorname{con}(a, c)$ has no $L_{6}^{n}$-quotients. Again, since $L_{6}^{n}$ is subdirectly irreducible and projective, Lemma 2.10 implies that $L_{6}^{n}$ is not a member of the variety generated by $L / \operatorname{con}(a, c)$. This, together with the minimality of $n$, implies that, for $n=0$, $L / \operatorname{con}(a, c)$ is a member of $\mathcal{N}$ and, for $n>0$, $L / \operatorname{con}(a, c)$ is in $\mathcal{L}_{6}^{n-1}$. By Lemma 4.15, $L$ is finite and, since $L \not\cong L_{6}^{n}$, it follows from Theorem 4.37 (ii) that $L$ includes $L_{6}^{n+1}$. This contradiction completes the proof.

By a similar approach Rose [84] proves that $\mathcal{L}_{i}^{n+1}$ is the only join irreducible cover of $\mathcal{L}_{i}^{n}$ for $i=7$ and 9 (the cases $i=8$ and 10 follow by duality). A slight complication arises due to the fact that $L_{7}^{n}$ and $L_{9}^{n}$ are not projective for $n \geq 1$, since the presence of doubly reducible elements implies that (W) fails in these lattices. As a result the final step requires an inductive argument. For the details we refer the reader to the original paper of Rose [84].

Further results about nonmodular varieties.

The variety $\mathcal{M}_{3}+\mathcal{N}$ is the only join reducible cover of $\mathcal{N}$ (and $\mathcal{M}_{3}$), and its covers have been investigated by Ruckelshausen [78]. His results show that the varieties $\mathcal{V}_{1}, \ldots, \mathcal{V}_{8}$ generated by the lattices $V_{1}, \ldots, V_{8}$ in Figure 2.4 are the only join irreducible covers of $\mathcal{M}_{3}+\mathcal{N}$ that are generated by a planar lattice of finite length. The techniques used in the preceding investigations make extensive use of Theorem 4.8, and are therefore unsuitable for the study of varieties above $\mathcal{L}_{11}$ or $\mathcal{L}_{12}$. Rose [84] showed that $\mathcal{L}_{12}$ has at least two join irreducible covers, generated by the two subdirectly irreducible lattices $L_{12}^{1}$ and $G$ respectively (see Figure 4.24; dual considerations apply to $\mathcal{L}_{11}$).
Using methods developed by Freese and Nation [83] for the study of covers in free lattices, Nation [85] proves that these are the only join irreducible covers of $\mathcal{L}_{12}$, and that above each of these is exactly one covering sequence of join irreducible varieties $\mathcal{L}_{12}^{n}$ and $\mathcal{G}^{n}=\left\{G^{n}\right\}^{\mathcal{V}}$ (Figure 4.24). By a result of Rose [84], any semidistributive lattice which fails to be bounded contains a sublattice isomorphic to $L_{11}$ or $L_{12}$ (see remark after Theorem 4.10). Thus it is interesting to note that the lattices $L_{12}^{n}$ and $G^{n}$ are again splitting lattices. In Nation [86] similar techniques are used to find a complete list of covering varieties of $\mathcal{L}_{1}$ (and $\mathcal{L}_{2}$ by duality). The ten join irreducible covers are generated by the subdirectly irreducible lattices $L_{16}, \ldots, L_{25}$ in Figure 4.25.

Figure 4.24

Figure 4.25

## Chapter 5

## Equational Bases

### Introduction

An equational basis for a variety $\mathcal{V}$ of algebras is a collection $\mathcal{E}$ of identities such that $\mathcal{V}=\operatorname{Mod} \mathcal{E}$. An interesting problem in the study of varieties is that of finding equational bases. Of course the set $\operatorname{Id} \mathcal{V}$ of all identities satisfied by members of $\mathcal{V}$ is always a basis, but this set is generally highly redundant, so we are interested in finding proper (possibly minimal) equational bases for $\mathcal{V}$. In particular we would like to know under what conditions $\mathcal{V}$ has a finite equational basis. It might seem reasonable to conjecture that every finitely generated variety is finitely based, but this is not the case in general. Lyndon [54] constructed a seven-element algebra with one binary operation which generates a nonfinitely based variety, and later a four-element and a three-element example were found by Višin [63] and Murskiĭ [65] respectively. On the other hand Lyndon [51] proved that any two-element algebra with finitely many operations does generate a finitely based variety. The same is true for finite groups (Oates and Powell [64]), finite lattices (even with finitely many additional operations, McKenzie [70]), finite rings (Kruse [77], L'vov [77]) and various other finite algebras. Shortly after McKenzie's result, Baker discovered that any finitely generated congruence distributive variety is finitely based. Actually his result is somewhat more general and, moreover, the proof is constructive, meaning that for a particular finitely generated congruence distributive variety one can follow the proof to obtain a finite basis. However the proof, which only appeared in its final version in Baker [77], is fairly complicated and several nonconstructive shortcuts have been published (see Herrmann [73], Makkai [73], Taylor [78] and also Burris and Sankappanavar [81]). The proof that is presented in this chapter is due to Jónsson [79] and is a further generalization of Baker's theorem. In contrast to these results on finitely based lattice varieties, McKenzie [70] gives an example of a lattice variety that is not finitely based. Another example by Baker [69], constructed from lattices corresponding to projective planes, shows that there is a nonfinitely based modular variety.
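For orientation, here are two standard examples of finite equational bases; they are well-known facts recorded for the reader's convenience, not taken from the papers cited above, and the symbols $\mathcal{E}_{\mathcal{L}}$ and $\mathcal{D}$ are introduced only for this illustration. If $\mathcal{E}_{\mathcal{L}}$ is a finite basis for the variety $\mathcal{L}$ of all lattices and $\mathcal{D}$ denotes the variety of distributive lattices, then $\mathcal{D}$ and the variety $\mathcal{M}$ of modular lattices are each obtained by adding a single identity:

$$
\mathcal{D}=\operatorname{Mod}\left(\mathcal{E}_{\mathcal{L}} \cup\{x(y+z)=x y+x z\}\right), \qquad \mathcal{M}=\operatorname{Mod}\left(\mathcal{E}_{\mathcal{L}} \cup\{x(y+x z)=x y+x z\}\right),
$$

so both are finitely based. The question taken up in this chapter is when a finite basis of this kind exists for other lattice varieties, and how small it can be chosen.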
Clearly an equational basis for the meet (intersection) of two varieties is given by the union of equational bases for the two varieties, which implies that the meet of two finitely based varieties is always finitely based. An interesting question is whether the same is true for the join of two finitely based varieties. This is not the case, as was independently discovered by Jónsson [74] and Baker. The example given in Baker [77'] is included in this chapter and actually shows that even with the requirement of modularity the above question has a negative answer. In Jónsson's paper, however, we find sufficient conditions for a positive answer, and these ideas are generalized further by Lee [85']. One consequence is that the join of the variety $\mathcal{M}$ of all modular lattices and the smallest nonmodular variety $\mathcal{N}$ is finitely based. This variety, denoted by $\mathcal{M}^{+}$ ($=\mathcal{M}+\mathcal{N}$), is a cover of $\mathcal{M}$, and an equational basis for $\mathcal{M}^{+}$ consisting of just eight identities is presented in Jónsson [77]. Recently Jónsson showed that the join of two finitely based modular varieties is finitely based whenever one of them is generated by a lattice of finite length. A generalization of this result and further extensions to the case where one of the varieties is nonmodular appear in Kang [87].

Although Baker's theorem allows one to construct, in principle, finite equational bases for any finitely generated lattice variety, the resulting basis is usually too large to be of any practical use. In Section 5.4 we give some examples of finitely based varieties for which reasonably small equational bases have been found. These include the varieties $\mathcal{M}_{n}$ ($n \in \omega$, from Jónsson [68]), $\mathcal{N}$ (McKenzie [72]) and the variety $\mathcal{M}^{+}$ referred to above.

### Baker's Finite Basis Theorem

Some results from model theory.

A class $\mathcal{K}$ of algebras is an elementary class if it is the class of all algebras which satisfy some set $\mathcal{S}$ of first-order sentences (i.e. $\mathcal{K}=\operatorname{Mod} \mathcal{S}$), and $\mathcal{K}$ is said to be strictly elementary if $\mathcal{S}$ may be taken to be finite or, equivalently, if $\mathcal{K}$ is determined by a single first-order sentence (the conjunction of the finitely many sentences in $\mathcal{S}$). (These concepts from model theory are applicable to any class of models of some given first-order language. Here we assume this to be the language of the algebras in $\mathcal{K}$. For a general treatment consult Chang and Keisler [73] or Burris and Sankappanavar [81].) The problem of finding a finite equational basis is a particular case of the following more general question: When is an elementary class strictly elementary?

Recall the definition of an ultraproduct from Section 1.3. The nonconstructive shortcuts to Baker's finite basis theorem make use of the following well-known result about ultraproducts:

Theorem 5.1 (Łoś [55]). Let $A=\prod_{i \in I} A_{i}$ and suppose $\phi_{\mathcal{U}}$ is the congruence induced by some ultrafilter $\mathcal{U}$ over the index set $I$. Then, for any first-order sentence $\sigma$, the ultraproduct $A / \phi_{\mathcal{U}}$ satisfies $\sigma$ if and only if the set $\left\{i \in I: A_{i} \text { satisfies } \sigma\right\}$ is in $\mathcal{U}$.

In particular this theorem shows that elementary classes are closed under ultraproducts. But it also has many other consequences.
For example we can deduce the following two important results:

Theorem 5.2 (Frayne, Morel and Scott [62], Kochen [61]). An elementary class $\mathcal{K}$ of algebras is strictly elementary if and only if the complement of $\mathcal{K}$ is closed under ultraproducts. The complement can be taken relative to any strictly elementary class containing $\mathcal{K}$.

Proof. Suppose $\mathcal{B}$ is an elementary class that contains $\mathcal{K}$. If $\mathcal{K}$ is strictly elementary, then membership in $\mathcal{K}$ can be described by a first-order sentence. By the preceding theorem the negation of this sentence is preserved by ultraproducts, so any ultraproduct of members of $\mathcal{B}-\mathcal{K}$ must again be in $\mathcal{B}-\mathcal{K}$.

Conversely, suppose $\mathcal{K}$ is elementary and is contained in a strictly elementary class $\mathcal{B}$. Assuming that $\mathcal{B}-\mathcal{K}$ is closed under ultraproducts, let $\mathcal{S}$ be the set of all sentences that hold in every member of $\mathcal{K}$, and let $I$ be the collection of all finite subsets of $\mathcal{S}$. Since $\mathcal{B}$ is strictly elementary, $\mathcal{B}=\operatorname{Mod} \mathcal{S}_{0}$ for some $\mathcal{S}_{0} \in I$. If $\mathcal{K}$ is not strictly elementary then, for each $i \in I$, there must exist an algebra $A_{i}$ not in $\mathcal{K}$ such that $A_{i}$ satisfies every sentence in the finite set $i \cup \mathcal{S}_{0}$. Note that this implies $A_{i} \in \mathcal{B}-\mathcal{K}$. We construct an ultraproduct $A / \phi_{\mathcal{U}} \in \mathcal{K}$ as follows: Let $A=\prod_{i \in I} A_{i}$ and, for each $i \in I$, define $J_{i}=\{j \in I: j \supseteq i\}$. Then $J_{i} \neq \emptyset$ and $J_{i} \cap J_{k}=J_{i \cup k}$ for all $i, k \in I$, whence $\mathcal{F}=\left\{J \subseteq I: J_{i} \subseteq J \text { for some } i\right\}$ is a proper filter over $I$, and by Zorn's Lemma $\mathcal{F}$ can be extended to an ultrafilter $\mathcal{U}$. We claim that $A / \phi_{\mathcal{U}}$ satisfies every sentence in $\mathcal{S}$. This follows from Theorem 5.1 and the observation that for each $\sigma \in \mathcal{S}$,
$$
\left\{j \in I: A_{j} \text { satisfies } \sigma\right\} \supseteq J_{\{\sigma\}} \in \mathcal{U} .
$$
Since $\mathcal{K}$ is an elementary class, we have $A / \phi_{\mathcal{U}} \in \mathcal{K}$. But the $A_{i}$ are all members of $\mathcal{B}-\mathcal{K}$, so this contradicts the assumption that $\mathcal{B}-\mathcal{K}$ is closed under ultraproducts. Therefore $\mathcal{K}$ must be strictly elementary.

Theorem 5.3 Let $\mathcal{K}$ be an elementary class, and suppose $\mathcal{S}$ is some set of sentences such that $\mathcal{K}=\operatorname{Mod} \mathcal{S}$. If $\mathcal{K}$ is strictly elementary, then $\mathcal{K}=\operatorname{Mod} \mathcal{S}_{0}$ for some finite set of sentences $\mathcal{S}_{0} \subseteq \mathcal{S}$.

Proof. Suppose to the contrary that for every finite subset $\mathcal{S}_{0}$ of $\mathcal{S}$, $\operatorname{Mod} \mathcal{S}_{0}$ properly contains $\mathcal{K}$. As in the proof of the previous theorem we can then construct an ultraproduct $A / \phi_{\mathcal{U}} \in \mathcal{K}$ of algebras $A_{i}$ not in $\mathcal{K}$. This, however, contradicts the result that the complement of $\mathcal{K}$ is closed under ultraproducts.

Every identity is a first-order sentence and every variety is an elementary class, so the second result tells us that if a variety is definable by a finite set of first-order sentences, then it is finitely based.
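To make the last remark concrete (a routine translation, included only as an illustration): an identity such as the modular law is shorthand for the universally quantified first-order sentence

$$
\forall x\, \forall y\, \forall z\;\bigl(x(y+x z)=x y+x z\bigr),
$$

so, by Theorem 5.3 applied with $\mathcal{S}=\operatorname{Id} \mathcal{V}$, a variety is finitely based exactly when it is determined by a single such sentence, i.e. exactly when it is strictly elementary.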
The following theorem, from Jónsson [79], uses Theorem 5.2 to give another sufficient condition for a variety to be finitely based.

Theorem 5.4 Let $\mathcal{V}$ be a variety of algebras contained in some strictly elementary class $\mathcal{B}$. If there exists an elementary class $\mathcal{C}$ such that $\mathcal{B}_{S I}$ is contained in $\mathcal{C}$ and $\mathcal{V} \cap \mathcal{C}$ is strictly elementary, then $\mathcal{V}$ is finitely based.

Proof. Suppose $\mathcal{V}$ is not finitely based. Then Theorem 5.2 implies that $\mathcal{B}-\mathcal{V}$ is not closed under ultraproducts. Hence, for some index set $I$, there exist $A_{i} \in \mathcal{B}-\mathcal{V}$ and an ultrafilter $\mathcal{U}$ over $I$ such that the ultraproduct $A / \phi_{\mathcal{U}} \in \mathcal{V}$, where $A=\prod_{i \in I} A_{i}$. Each $A_{i}$ has at least one subdirectly irreducible image $A_{i}^{\prime}$ not in $\mathcal{V}$. On the other hand, if we let $A^{\prime}=\prod_{i \in I} A_{i}^{\prime}$ then $A^{\prime} / \phi_{\mathcal{U}} \in \mathcal{V}$ since it is a homomorphic image of $A / \phi_{\mathcal{U}}$. $\mathcal{B}$ need not be closed under homomorphic images, so the $A_{i}^{\prime}$ are not necessarily in $\mathcal{B}$, but $A^{\prime} / \phi_{\mathcal{U}} \in \mathcal{V} \subseteq \mathcal{B}$ and $\mathcal{B}$ strictly elementary imply that $\left\{i \in I: A_{i}^{\prime} \in \mathcal{B}\right\}$ is in $\mathcal{U}$. Therefore, restricting the ultraproduct to this set, we can assume that every $A_{i}^{\prime} \in \mathcal{B}_{S I} \subseteq \mathcal{C}$ and, because $\mathcal{C}$ is an elementary class, it follows that $A^{\prime} / \phi_{\mathcal{U}} \in \mathcal{V} \cap \mathcal{C}$. This contradicts Theorem 5.2 since $\mathcal{V} \cap \mathcal{C}$ is strictly elementary (by assumption), and $A^{\prime} / \phi_{\mathcal{U}}$ is an ultraproduct of algebras not in $\mathcal{V} \cap \mathcal{C}$.

Finitely based congruence distributive varieties.

Let $\mathcal{V}$ be a congruence distributive variety of algebras (with finitely many operations). By Theorem 1.9 this is equivalent to the existence of $n+1$ ternary polynomials $t_{0}, t_{1}, \ldots, t_{n}$ such that $\mathcal{V}$ satisfies the following identities:
$$
\begin{aligned}
&t_{0}(x, y, z)=x, \qquad t_{n}(x, y, z)=z, \qquad t_{i}(x, y, x)=x, \\
&t_{i}(x, x, z)=t_{i+1}(x, x, z) \quad \text { for } i \text { even}, \\
&t_{i}(x, z, z)=t_{i+1}(x, z, z) \quad \text { for } i \text { odd}.
\end{aligned}
$$
In the remainder of this section we let $\mathcal{V}_{t}$ be the finitely based congruence distributive variety defined by these identities. Clearly $\mathcal{V} \subseteq \mathcal{V}_{t}$.

Translations, boundedness and projective radius.

The notion of weak projectivity in lattices and its application to principal congruences can be generalized for an arbitrary algebra $A$ by considering translations of $A$ (i.e. polynomial functions on $A$ with all but one variable fixed in $A$). A 0-translation is any map $f: A \rightarrow A$ that is either constant or the identity map. A 1-translation is a map $f: A \rightarrow A$ that is obtained from one of the basic operations of $A$ by fixing all but one variable in $A$. For our purposes it is convenient to also allow maps that are obtained from one of the polynomials $t_{i}$ above. Equivalently we could assume that the $t_{i}$ are among the basic operations of the variety. A $k$-translation is any composition of $k$ 1-translations, and a translation is a map that is a $k$-translation for some $k \in \omega$.
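Since our applications are to lattice varieties, it may help to record how these notions read for lattices; this is a standard specialization, stated here with meet written as juxtaposition, and not quoted from the sources above. Lattices admit such a sequence of polynomials already with $n=2$, using the median polynomial:

$$
t_{0}(x, y, z)=x, \qquad t_{1}(x, y, z)=x y+y z+z x, \qquad t_{2}(x, y, z)=z .
$$

Indeed $t_{1}(x, y, x)=x y+x y+x=x$, $t_{1}(x, x, z)=x+x z+x z=x=t_{0}(x, x, z)$ and $t_{1}(x, z, z)=x z+z+x z=z=t_{2}(x, z, z)$, so the identities above hold. Moreover, for a lattice $L$ the basic 1-translations coming from the lattice operations are simply the maps $x \mapsto u+x$ and $x \mapsto u x$ with $u \in L$ fixed, so the relations $\Gamma_{k}(a, b)$ defined next are close relatives of the weak projectivities between quotients used in the earlier chapters.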
For $a, b \in A$ define the relation $\Gamma_{k}(a, b)$ on $A$ by
$$
(c, d) \in \Gamma_{k}(a, b) \quad \text { if } \quad\{c, d\}=\{f(a), f(b)\}
$$
for some $k$-translation $f$ of $A$. Let $\Gamma(a, b)=\bigcup_{k \in \omega} \Gamma_{k}(a, b)$. This relation can be used to characterize the principal congruences of $A$ (implicit in Mal'cev [54], see [UA] p.54) as follows: For $a, b \in A$ we have $(c, d) \in \operatorname{con}(a, b)$ if and only if there exists a sequence $c=e_{0}, e_{1}, \ldots, e_{m}=d$ in $A$ such that $\left(e_{i}, e_{i+1}\right) \in \Gamma(a, b)$ for $i<m$.

Two pairs $(a, b),\left(a^{\prime}, b^{\prime}\right) \in A \times A$ are said to be $k$-bounded if $\Gamma_{k}(a, b) \cap \Gamma_{k}\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0}$, and they are bounded if $\Gamma(a, b) \cap \Gamma\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0}$. Observe that if $A$ has only finitely many operations, then $k$-boundedness can be expressed by a first-order formula.

The projective radius (2-radius in Baker [77]) of an algebra $A$, written $R(A)$, is the smallest number $k>0$ such that for all $a, b, a^{\prime}, b^{\prime} \in A$
$$
\operatorname{con}(a, b) \cap \operatorname{con}\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0} \quad \text { implies } \quad \Gamma_{k}(a, b) \cap \Gamma_{k}\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0}
$$
(if it exists, else $R(A)=\infty$). For a class $\mathcal{K}$ of algebras, we let $R(\mathcal{K})=\sup \{R(A): A \in \mathcal{K}\}$. The next few lemmas show that under certain conditions a class of finitely subdirectly irreducible algebras (see Section 1.2) is elementary if and only if it has finite projective radius. These results first appeared in a more general form in Baker [77] (using $n$-radii) but we follow a later presentation due to Jónsson [79].

Lemma 5.5 If $A \in \mathcal{V}_{t}$, $e_{0}, e_{1}, \ldots, e_{m} \in A$ and $e_{0} \neq e_{m}$, then there exists a number $p<m$ such that $\left(e_{0}, e_{m}\right)$ and $\left(e_{p}, e_{p+1}\right)$ are 1-bounded.

Proof. Consider the 1-translations $f_{i}(x)=t_{i}\left(e_{0}, x, e_{m}\right)$, $i \leq n$. Then $f_{0}\left(e_{j}\right)=e_{0}$ and $f_{n}\left(e_{j}\right)=e_{m}$ for all $j \leq m$, hence there exists a smallest index $q \leq n$ such that the elements $f_{q}\left(e_{j}\right)$ are not all equal to $e_{0}$. If $q$ is odd, then $f_{q}\left(e_{0}\right)=f_{q-1}\left(e_{0}\right)=e_{0}$, so we can choose $p<m$ such that $c=f_{q}\left(e_{p}\right)=e_{0} \neq f_{q}\left(e_{p+1}\right)=d$. It follows that $(c, d) \in \Gamma_{1}\left(e_{p}, e_{p+1}\right)$, and the 1-translation $f(x)=t_{q}\left(e_{0}, e_{p+1}, x\right)$ shows that $(c, d) \in \Gamma_{1}\left(e_{0}, e_{m}\right)$. For even $q$ we have $f_{q}\left(e_{m}\right)=f_{q-1}\left(e_{m}\right)=e_{0}$. Choosing $p<m$ such that $c=f_{q}\left(e_{p}\right) \neq e_{0}=f_{q}\left(e_{p+1}\right)=d$, we again see that $(c, d) \in \Gamma_{1}\left(e_{p}, e_{p+1}\right)$, and now the 1-translation $g(x)=t_{q}\left(e_{0}, e_{p}, x\right)$ gives $(c, d) \in \Gamma_{1}\left(e_{0}, e_{m}\right)$. In either case $(c, d) \in \Gamma_{1}\left(e_{0}, e_{m}\right) \cap \Gamma_{1}\left(e_{p}, e_{p+1}\right)$, which implies that $\left(e_{0}, e_{m}\right)$ and $\left(e_{p}, e_{p+1}\right)$ are 1-bounded.
Lemma 5.6 For all $A \in \mathcal{V}_{t}$ and $a, b, a^{\prime}, b^{\prime} \in A$,
$$
\operatorname{con}(a, b) \cap \operatorname{con}\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0} \quad \text { implies } \quad \Gamma(a, b) \cap \Gamma\left(a^{\prime}, b^{\prime}\right) \neq \mathbf{0} .
$$

Proof. Suppose $(c, d) \in \operatorname{con}(a, b) \cap \operatorname{con}\left(a^{\prime}, b^{\prime}\right)$ for some $c, d \in A$, $c \neq d$. Since $(c, d) \in \operatorname{con}(a, b)$, there exists (by Mal'cev) a sequence $c=e_{0}, e_{1}, \ldots, e_{m}=d$ in $A$ such that $\left(e_{i}, e_{i+1}\right) \in \Gamma(a, b)$ for $i<m$. As before let $f_{i}(x)=t_{i}\left(e_{0}, x, e_{m}\right)$, and choose $p<m$, $q<n$ such that $c^{\prime}=f_{q}\left(e_{p}\right) \neq f_{q}\left(e_{p+1}\right)=d^{\prime}$. By composition of translations $\left(c^{\prime}, d^{\prime}\right) \in \Gamma(a, b)$. Also $\left(c^{\prime}, d^{\prime}\right) \in \operatorname{con}\left(a^{\prime}, b^{\prime}\right)$, since $\operatorname{con}\left(a^{\prime}, b^{\prime}\right)$ identifies $e_{0}$ with $e_{m}$ and hence all elements of the form $t_{i}\left(e_{0}, e_{j}, e_{m}\right)$ with $t_{i}\left(e_{0}, e_{j}, e_{0}\right)=e_{0}$. Again there exists a sequence $c^{\prime}=e_{0}^{\prime}, e_{1}^{\prime}, \ldots, e_{m^{\prime}}^{\prime}=d^{\prime}$ with $\left(e_{i}^{\prime}, e_{i+1}^{\prime}\right) \in \Gamma\left(a^{\prime}, b^{\prime}\right)$ for $i<m^{\prime}$. From Lemma 5.5 we obtain an index $p^{\prime}<m^{\prime}$ such that $\left(c^{\prime}, d^{\prime}\right)$ and $\left(e_{p^{\prime}}^{\prime}, e_{p^{\prime}+1}^{\prime}\right)$ are 1-bounded, and it follows via a composition of translations that $(a, b)$ and $\left(a^{\prime}, b^{\prime}\right)$ are bounded.

Recall from Section 1.2 that an algebra $A$ is finitely subdirectly irreducible if $\mathbf{0} \in \operatorname{Con}(A)$ is not the meet of finitely many non-$\mathbf{0}$ congruences, and that $\mathcal{V}_{F S I}$ denotes the class of all finitely subdirectly irreducible members of $\mathcal{V}$.

Lemma 5.7 Let $\mathcal{C}$ be an elementary subclass of $\mathcal{V}_{t}$. Then $R\left(\mathcal{C}_{F S I}\right)$ is finite if and only if $\mathcal{C}_{F S I}$ is elementary.

Proof. By assumption algebras in $\mathcal{V}_{t}$ have only finitely many basic operations, so there exists a first-order formula $\phi_{k}\left(x, y, x^{\prime}, y^{\prime}\right)$ such that for all $A \in \mathcal{V}_{t}$, $A$ satisfies $\phi_{k}\left(a, b, a^{\prime}, b^{\prime}\right)$ if and only if $(a, b)$ and $\left(a^{\prime}, b^{\prime}\right)$ are $k$-bounded. Suppose $R\left(\mathcal{C}_{F S I}\right)=k<\infty$. Then an algebra $A \in \mathcal{C}$ is finitely subdirectly irreducible iff it satisfies the sentence $\sigma_{k}$: for all $x, y, x^{\prime}, y^{\prime}$, $x=y$ or $x^{\prime}=y^{\prime}$ or $\phi_{k}\left(x, y, x^{\prime}, y^{\prime}\right)$. Hence $\mathcal{C}_{F S I}$ is elementary.

Conversely, suppose $\mathcal{C}_{F S I}$ is an elementary class. Lemma 5.6 implies that $A \in \mathcal{V}_{t}-\mathcal{C}_{F S I}$ iff $A$ satisfies the negation of $\sigma_{k}$ for each $k \in \omega$. So $\mathcal{V}_{t}-\mathcal{C}_{F S I}$ is also elementary and hence (by Theorem 5.2) strictly elementary, i.e. it is defined by finitely many of the $\neg \sigma_{k}$. Since $\neg \sigma_{k+1}$ implies $\neg \sigma_{k}$, we in fact have $A \in \mathcal{V}_{t}-\mathcal{C}_{F S I}$ iff $A$ satisfies $\neg \sigma_{k}$ for just one particular $k$ (the largest).
It follows that all algebras in $\mathcal{C}_{F S I}$ satisfy $\sigma_{k}$, whence $R\left(\mathcal{C}_{F S I}\right)=k$.

Lemma 5.8 If $R\left(\mathcal{V}_{F S I}\right)=k<\infty$, then $R(\mathcal{V}) \leq k+2$.

Proof. Let $R\left(\mathcal{V}_{F S I}\right)=k<\infty$ and suppose $(c, d) \in \operatorname{con}\left(a_{0}, b_{0}\right) \cap \operatorname{con}\left(a_{1}, b_{1}\right)$ for some $A \in \mathcal{V}$ and $a_{0}, b_{0}, a_{1}, b_{1}, c, d \in A$, $c \neq d$. Then there exists a subdirectly irreducible epimorphic image $A^{\prime}$ of $A$ with $c^{\prime} \neq d^{\prime}$ and hence $a_{0}^{\prime} \neq b_{0}^{\prime}$ and $a_{1}^{\prime} \neq b_{1}^{\prime}$ (primes denote images in $A^{\prime}$). By assumption $\left(a_{0}^{\prime}, b_{0}^{\prime}\right)$ and $\left(a_{1}^{\prime}, b_{1}^{\prime}\right)$ are $k$-bounded, i.e. for some distinct $u, v \in A^{\prime}$, $(u, v) \in \Gamma_{k}\left(a_{0}^{\prime}, b_{0}^{\prime}\right) \cap \Gamma_{k}\left(a_{1}^{\prime}, b_{1}^{\prime}\right)$. For $i=0,1$ choose $u_{i}, v_{i} \in A$ such that $\left(u_{i}, v_{i}\right) \in \Gamma_{k}\left(a_{i}, b_{i}\right)$ and $u_{i}^{\prime}=u$, $v_{i}^{\prime}=v$. Such elements exist since if $f^{\prime}$ is a $k$-translation in $A^{\prime}$ with $f^{\prime}\left(a_{i}^{\prime}\right)=u$ and $f^{\prime}\left(b_{i}^{\prime}\right)=v$, then we can construct a corresponding $k$-translation $f$ in $A$ by replacing each fixed element of $A^{\prime}$ by one of its preimages in $A$, and we let $u_{i}=f\left(a_{i}\right)$ and $v_{i}=f\left(b_{i}\right)$. Now choose $j<n$ such that $u^{*}=t_{j}\left(u_{0}, u_{1}, v_{0}\right) \neq t_{j}\left(u_{0}, v_{1}, v_{0}\right)=v^{*}$. This is possible since in $A^{\prime}$, $t_{j}(u, u, v)$ and $t_{j}(u, v, v)$ must be distinct for some $j<n$, else
$$
u=t_{0}(u, u, v)=t_{1}(u, u, v)=t_{1}(u, v, v)=t_{2}(u, v, v)=\ldots=t_{n}(u, v, v)=v .
$$
The 1-translations $t_{j}\left(u_{0}, x, v_{0}\right)$, $t_{j}\left(u_{0}, u_{1}, x\right)$ and $t_{j}\left(u_{0}, v_{1}, x\right)$ now show that $\left(u^{*}, v^{*}\right) \in \Gamma_{1}\left(u_{1}, v_{1}\right)$ and $\left(u^{*}, u_{0}\right),\left(u_{0}, v^{*}\right) \in \Gamma_{1}\left(u_{0}, v_{0}\right)$. Lemma 5.5 applied to the sequence $u^{*}, u_{0}, v^{*}$ implies that either $\left(u^{*}, v^{*}\right)$ and $\left(u^{*}, u_{0}\right)$ are 1-bounded or $\left(u^{*}, v^{*}\right)$ and $\left(u_{0}, v^{*}\right)$ are 1-bounded. In either case $\left(u_{0}, v_{0}\right)$ and $\left(u_{1}, v_{1}\right)$ are 2-bounded and therefore $\left(a_{0}, b_{0}\right)$ and $\left(a_{1}, b_{1}\right)$ are $(k+2)$-bounded.

With the help of these four lemmas and Theorem 5.4, we can now prove the following result:

Theorem 5.9 (Jónsson [79]). If $\mathcal{V}$ is a congruence distributive variety of algebras and $\mathcal{V}_{F S I}$ is strictly elementary, then $\mathcal{V}$ is finitely based.

Proof. By Lemma 5.7, $R\left(\mathcal{V}_{F S I}\right)=k$ for some $k \in \omega$, and by the above lemma $R(\mathcal{V}) \leq k+2$. Let $\mathcal{B}$ be the class of all $A \in \mathcal{V}_{t}$ with $R(A) \leq k+2$. Since the condition $R(A) \leq k+2$ can be expressed by a first-order formula, and since $\mathcal{V}_{t}$ is strictly elementary, so is $\mathcal{B}$. Clearly $R\left(\mathcal{B}_{F S I}\right) \leq k+2$, hence Lemma 5.7 implies that $\mathcal{B}_{F S I}$ is elementary.
By assumption $\mathcal{V} \cap \mathcal{B}_{F S I}=\mathcal{V}_{F S I}$ is strictly elementary, so applying Theorem 5.4 with $\mathcal{C}=\mathcal{B}_{F S I}$, we conclude that $\mathcal{V}$ is finitely based. Assuming that $\mathcal{V}$ is a finitely generated congruence distributive variety, Corollary 1.7 implies that up to isomorphism $\mathcal{V}_{F S I}$ is a finite set of finite algebras. Since such a collection is always strictly elementary, one obtains Baker's result from the preceding theorem: THEOREM 5.10 (Baker [77]). If $\mathcal{V}$ is a finitely generated congruence distributive variety of algebras then $\mathcal{V}$ is finitely based. ### Joins of finitely based varieties In this section we first give an example which shows that the join of two finitely based modular varieties need not be finitely based. Lemma 5.11 (Baker [77']). There exist finitely based modular varieties $\mathcal{V}$ and $\mathcal{V}^{\prime}$ such that the complement of $\mathcal{V}+\mathcal{V}^{\prime}$ is not closed under ultraproducts. Proof. Let $M$ be the modular lattice of Figure 5.1 (i) and let $\mathbf{N}(M)$ be the class of all lattices that do not contain a subset order-isomorphic to $M$ regarded as a partially ordered set). By Lemma 3.10 (ii) $M \notin \mathbf{N}(M)$, so there exists an identity $\varepsilon \in \operatorname{Id} \mathbf{N}(M)$ that does not hold in $M$. Let $\mathcal{V}$ be the variety of modular lattices that satisfy $\varepsilon$ (i.e. $\mathcal{V}=\operatorname{Mod}\left\{\varepsilon, \varepsilon_{m}\right\}$, where $\varepsilon_{m}$ is the modular identity) and let $\mathcal{V}^{\prime}$ be the variety of all the dual lattices of members in $\mathcal{V}$. Since the modular variety is self-dual, $\mathcal{V}^{\prime}$ is defined by $\varepsilon_{m}$ and the dual identity $\varepsilon^{\prime}$ of $\varepsilon$. Hence $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are both finitely based. Let $K_{n}$ be the lattice of Figure 5.1 (ii). (i) $M$ (ii) Figure 5.1 We claim that, for each $n \in \omega, K_{n} \notin \mathcal{V}+\mathcal{V}^{\prime}$. Note that $K_{n}$ is subdirectly irreducible (in fact simple), and since $K_{n}$ contains a copy of $M$ and its dual as sublattices, both $\varepsilon$ and $\varepsilon^{\prime}$ fail in $K_{n}$. Hence $K_{n} \notin \mathcal{V} \cup \mathcal{V}^{\prime}$, and the claim follows from Theorem 2.3 (i). Now let $K=\prod_{n \in \omega} K_{n}$ and choose any nonprincipal ultrafilter $\mathcal{U}$ over $\omega$. We show that the ultraproduct $\bar{K}=K / \phi_{\mathcal{U}}$ is in $\mathcal{V}+\mathcal{V}^{\prime}$. Notice that an order-isomorphic copy of $M$ is situated only at the bottom of each $K_{n}$. This fact can be expressed as a firstorder sentence and, by Theorem 5.1, also holds in $\bar{K}$. Similarly, the dual of $M$ can only be situated at the top of $\bar{K}$. The local structure of the middle portion of $K_{n}$ can also be described by a first order sentence, whence $\bar{K}$ looks like an infinite version of $K_{n}$. Interpreting Figure 5.1 (ii) as a diagram of $\bar{K}$, we see that $M$ is not order-isomorphic to any subset of $\bar{K} / \operatorname{con}(c, d)$ since $\operatorname{con}(c, d)$ collapses the only copy of $M$ in $\bar{K}$. Consequently $\bar{K} / \operatorname{con}(c, d) \in \mathcal{V}$ and by a dual argument $\bar{K} / \operatorname{con}(a, b) \in \mathcal{V}^{\prime}$. 
Observe that $\bar{K}$ is not a simple lattice since principal congruences can only identify quotients reachable by finite sequences of transpositions (Theorem 1.11). In fact $\operatorname{con}(a, b) \cap \operatorname{con}(c, d)=\mathbf{0}$, and hence $\bar{K}$ can be embedded in $K / \operatorname{con}(a, b) \times K / \operatorname{con}(c, d)$. Therefore $\bar{K} \in \mathcal{V}+\mathcal{V}^{\prime}$. Together with Theorem 5.2 and Theorem 5.3, the above lemma implies: THEOREM 5.12 (Baker [77']). The join of two finitely based (modular) varieties need not be finitely based. In view of this theorem it is natural to look for sufficient conditions under which the join of two finitely based varieties is finitely based. In what follows, we shall assume that $\mathcal{V}_{c}$ is a congruence distributive variety, and that $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are two subvarieties defined relative to $\mathcal{V}_{c}$ by the identities $p=q$ and $p^{\prime}=q^{\prime}$ respectively. By an elementary result of lattice theory, any finite set of lattice identities is equivalent to a single identity (relative to the class of all lattices, [GLT] p.28). Moreover Baker [74] showed that this result extends to congruence distributive varieties in general. Consequently, the above condition on the varieties $\mathcal{V}$ and $\mathcal{V}^{\prime}$ is equivalent to them being finitely based relative to $\mathcal{V}_{c}$. The next two lemmas are due to Jónsson [74], though the second one has been generalized to congruence distributive varieties. If $p$ is an $n$-ary polynomial function (= word or term with at most $n$ variables) on an algebra $A$ and $u_{1}, \ldots, u_{n} \in A$ then we will abbreviate $p\left(u_{1}, \ldots, u_{n}\right)$ by $p(u), u \in A^{\omega}$, thereby assuming that only the first $n$ components of $u$ are used to evaluate $p$. Lemma 5.13 An algebra $A \in \mathcal{V}_{c}$ belongs to $\mathcal{V}+\mathcal{V}^{\prime}$ if and only if for all $u, v \in A^{\omega}$ $$ (*) \quad \operatorname{con}(p(u), q(u)) \cap \operatorname{con}\left(p^{\prime}(v), q^{\prime}(v)\right)=\mathbf{0} . $$ Proof. Let $\theta=\sum\left\{\operatorname{con}(p(u), q(u)): u \in A^{\omega}\right\}$ and $\theta^{\prime}=\sum\left\{\operatorname{con}\left(p^{\prime}(u), q^{\prime}(u)\right): u \in A^{\omega}\right\}$. By the (infinite) distributivity of $\operatorname{Con}(A)$ we have that $(*)$ holds if and only if $\theta \cap \theta^{\prime}=\mathbf{0}$. This in turn is equivalent to $A$ being a subdirect product of $A / \theta$ and $A / \theta^{\prime}$. Since $A / \theta \in \mathcal{V}$ and $A / \theta^{\prime} \in \mathcal{V}^{\prime}$, it follows that $A \in \mathcal{V}+\mathcal{V}^{\prime}$. On the other hand Jónsson's Lemma implies that any $A \in \mathcal{V}+\mathcal{V}^{\prime}$ can be written as a subdirect product of two algebras $A / \phi$ and $A / \phi^{\prime}$. Notice that $\theta$ and $\theta^{\prime}$ above are the smallest congruences on $A$ for which $A / \theta \in \mathcal{V}$ and $A / \theta^{\prime} \in \mathcal{V}^{\prime}$, hence $\theta \subseteq \phi$ and $\theta^{\prime} \subseteq \phi^{\prime}$. Since $\phi \cap \phi^{\prime}=\mathbf{0}$ we conclude that $\theta \cap \theta^{\prime}=\mathbf{0}$. Recall the notion of $k$-boundedness defined in the previous section. 
It is an elementary property, so we can construct a first-order sentence $\sigma_{k}$ such that an algebra $A \in \mathcal{V}_{c}$ satisfies $\sigma_{k}$ if and only if for $u, v \in A^{\omega}(p(u), q(u))$ and $\left(p^{\prime}(v), q^{\prime}(v)\right)$ are not $k$-bounded. Lemma $5.14 \mathcal{V}+\mathcal{V}^{\prime}$ is finitely based relative to $\mathcal{V}_{c}$ if and only if the following property holds for some positive integer $n$ : $\mathrm{P}(n):$ For any $A \in \mathcal{V}_{c}$, if $A$ satisfies $\sigma_{n}$, then $A$ satisfies $\sigma_{k}$ for all $k>1$. Proof. Firstly, we claim that, relative to $\mathcal{V}_{c}$, the variety $\mathcal{V}+\mathcal{V}^{\prime}$ is defined by the set of sentences $\mathcal{S}=\left\{\sigma_{1}, \sigma_{2}, \sigma_{3}, \ldots\right\}$. Indeed, by Lemma 5.6 we have that $$ \begin{gathered} \operatorname{con}(p(u), q(u)) \cap \operatorname{con}\left(p^{\prime}(v), q^{\prime}(v)\right)=\mathbf{0} \\ \text { if and only if } \\ \Gamma_{k}(p(u), q(u)) \cap \Gamma_{k}\left(p^{\prime}(v), q^{\prime}(v)\right)=\mathbf{0} \end{gathered} $$ for all $k>0$. Hence by Lemma 5.13 an algebra $A \in \mathcal{V}_{c}$ belongs to $\mathcal{V}+\mathcal{V}^{\prime}$ if and only if $A$ satisfies $\sigma_{k}$ for all $k>0$. We can now make use of Theorem 5.3 to conclude that $\mathcal{V}+\mathcal{V}^{\prime}$ will have a finite basis relative to $\mathcal{V}_{c}$ if and only if it is defined, relative to $\mathcal{V}_{c}$, by a finite subset of $\mathcal{S}$, or equivalently by a single sentence $\sigma_{k}$, since $\sigma_{k}$ implies $\sigma_{m}$ for all $m<k$. If $\mathrm{P}(n)$ holds, then clearly $\mathcal{V}+\mathcal{V}^{\prime}$ is defined relative to $\mathcal{V}_{c}$ by the sentence $\sigma_{n}$. On the other hand, if $\mathrm{P}(n)$ fails, then there must exist an algebra $A \in \mathcal{V}_{c}$ such that $A$ satisfies $\sigma_{n}$ but fails $\sigma_{m}$ for some $m>n$. If this is true for any positive integer $n$, then $\mathcal{V}+\mathcal{V}^{\prime}$ cannot be finitely based relative to $\mathcal{V}_{c}$. Although $\mathrm{P}(n)$ characterizes all those pairs of finitely based congruence distributive subvarieties whose join is finitely based, it is not a property that is easily verified. Fortunately, for lattice varieties, $k$-boundedness can be expressed in terms of weak projectivities. More precisely, if we exclude the use of the polynomials $t_{i}$ in the definition of a $k$-translation then a $k$-translation from one quotient of a lattice to another is nothing else but a sequence of $k$ weak transpositions. Two quotients $a / b$ and $a^{\prime} / b^{\prime}$ are then said to be $k$-bounded if they both project weakly onto some nontrivial quotient $c / d$ in less than or equal to $k$ steps. Furthermore, if $p=q$ is a lattice identity then we can assume that the inclusion $p \leq q$ holds in any lattice (if not, replace $p=q$ by the equivalent identity $p q=p+q)$ and the sentence $\sigma_{k}$ can be rephrased as: $L \in \mathcal{V}_{c}$ satisfies $\sigma_{k}$ if and only if, for all $u, v \in L^{\omega}$, the quotients $q(u) / p(u)$ and $q^{\prime}(v) / p^{\prime}(v)$ do not both project weakly onto a common nontrivial quotient in $k$ (or less) steps. The following is a slightly sharpened version (for lattices) of Lemma 5.8. LemMa 5.15 Let $\bar{L}$ be a homomorphic image of $L$ and let $x / y$ be a prime quotient in $L$. 
For any quotient $a / b$ of $L$, if $\bar{a} / \bar{b}$ projects weakly onto $\bar{x} / \bar{y}$ in $n$ steps, then $a / b$ projects weakly onto $x / y$ in $n+1$ steps if $n>0$, and in $t$ wo steps if $n=0$. Proof. Suppose $\bar{a} / \bar{b}$ projects onto $\bar{x} / \bar{y}$ in 0 steps, i.e. $\bar{a}=\bar{x}$ and $\bar{b}=\bar{y}$. Then $$ \begin{aligned} & a / b \nearrow_{w} a+y / b+y \searrow_{w}(a+y) x /(b+y) x=x / y \\ & \text { and } a / b \searrow_{w} a x / b x \nearrow_{w} a x+y / b x+y=x / y, \end{aligned} $$ since $y \leq(b+y) x<(a+y) x \leq x$ and $y \leq b x+y<a x+y \leq x$. Now suppose that $\bar{a} / \bar{b}$ projects weakly onto $\bar{x} / \bar{y}$ in $n>0$ steps. Since the other cases can be treated similarly, we may assume that $\bar{a} / \bar{b} \nearrow_{w} \bar{a}_{1} / \bar{b}_{1} \searrow_{w} \ldots \searrow_{w} \bar{a}_{n-1} / \bar{b}_{n-1} \nearrow_{w} \bar{x} / \bar{y}$ for some $b_{i}, a_{i} \in L$, $i=1, \ldots, n-1$. In this case $\bar{b} \leq \bar{b}_{1}$ implies that there exists $b_{1}^{\prime} \in L$ with $\bar{b}_{1}^{\prime}=\bar{b}_{1}$ and $b \leq b_{1}^{\prime}$. Letting $a_{1}^{\prime}=b_{1}^{\prime}+a$ we have $\bar{a}_{1}^{\prime}=\bar{a}_{1}$ and $a / b \nearrow_{w} a_{1}^{\prime} / b_{1}^{\prime}$. Next, there exists $a_{2}^{\prime} \in L$ such that $\bar{a}_{2}^{\prime}=\bar{a}_{2}$ and $a_{2}^{\prime} \leq a_{1}^{\prime}$. Letting $b_{2}^{\prime}=a_{2}^{\prime} b_{1}^{\prime}$ we have $\bar{b}_{2}^{\prime}=\bar{b}_{2}$ and $a_{1}^{\prime} / b_{1}^{\prime} \searrow_{w} a_{2}^{\prime} / b_{2}^{\prime}$. Repeating this process we get $$ a / b \nearrow_{w} a_{1}^{\prime} / b_{1}^{\prime} \searrow_{w} a_{2}^{\prime} / b_{2}^{\prime} \nearrow_{w} \ldots \searrow_{w} a_{n-1}^{\prime} / b_{n-1}^{\prime} \nearrow_{w} x^{\prime} / y^{\prime} $$ where $\bar{x}^{\prime}=\bar{x}$ and $\bar{y}^{\prime}=\bar{y}$. By the first argument $x^{\prime} / y^{\prime} \nearrow_{w} x^{\prime}+x / y^{\prime}+x \searrow_{w} x / y$, so $a / b$ projects weakly onto $x / y$ in $n+2$ steps. One of the steps $\left(x^{\prime} / y^{\prime}\right)$ can still be eliminated, hence the result follows. Given a variety $\mathcal{V}$, we denote by $(\mathcal{V})^{n}$ the variety that is defined by the identities of $\mathcal{V}$ which have $n$ or less variables for some positive integer $n$. Clearly $\mathcal{V} \subseteq(\mathcal{V})^{n}$ and $F_{\mathcal{V}}(m)=F_{(\mathcal{V})^{n}}(m)$ for any $m \leq n$. Another nice consequence of this definition is the following lemma, which appears in Jónsson [74]. LeMma 5.16 If $F_{\mathcal{V}}(n)$ and $F_{\mathcal{V}^{\prime}}(n)$ are finite for some lattice varieties $\mathcal{V}$ and $\mathcal{V}^{\prime}$, then $\left(\mathcal{V}+\mathcal{V}^{\prime}\right)^{n}$ is finitely based. Proof. In general, if $F_{\mathcal{V}}(n)$ is finite, then $(\mathcal{V})^{n}$ is finitely based. Now $F_{\mathcal{V}+} \mathcal{V}^{\prime}(n)$ is a subdirect product of the two finite lattices $F_{\mathcal{V}}(n)$ and $F_{\mathcal{V}^{\prime}}(n)$, hence finite, and so the result follows. We can now give sufficient conditions for the join of two finitely based lattice varieties to be finitely based. This result appeared in Lee [85'] and is a generalization of a result of Jónsson [74]. 
THEOREM 5.17 If $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are finitely based lattice varieties with $\mathcal{V} \cap \mathcal{V}^{\prime}=\mathcal{W}$ and $R\left(\mathcal{W}_{S I}\right)=r<\infty$ and if $F_{\mathcal{V}}(r+3)$ and $F_{\mathcal{V}^{\prime}}(r+3)$ are finite, then $\mathcal{V}+\mathcal{V}^{\prime}$ is finitely based. Figure 5.2 Proof. We can assume that $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are defined by the identities $p=q$ and $p^{\prime}=q^{\prime}$ respectively, relative to the variety of all lattices, and that the inequalities $p \leq q$ and $p^{\prime} \leq q^{\prime}$ hold in any lattice. Let $\mathcal{K}$ and $\mathcal{K}^{\prime}$ be the classes of $(r+3)$-generated subdirectly irreducible lattices in $\mathcal{V}$ and $\mathcal{V}^{\prime}$ respectively, and define $h=\max (R(\mathcal{K}), R(\mathcal{K}))$. We only consider $h>0$ since if $h=0$, then $\mathcal{V}, \mathcal{V}^{\prime} \subseteq \mathcal{D}$, the variety of distributive lattices, in which case the theorem holds trivially. Let $\mathcal{V}_{c}=\left(\mathcal{V}+\mathcal{V}^{\prime}\right)^{r+3}$, then $\mathcal{V}_{c}$ is finitely based by Lemma 5.16. If we can show that the condition $\mathrm{P}(n)$ in Lemma 5.14 holds for some $n$, then $\mathcal{V}+\mathcal{V}^{\prime}$ will be finitely based relative to $\mathcal{V}_{c}$ and hence relative to the variety of all lattices. So let $L \in \mathcal{V}_{c}$ and suppose that for some $u, u^{\prime} \in L^{\omega}$ the quotients $q(u) / p(u)$ and $q^{\prime}\left(u^{\prime}\right) / p^{\prime}\left(u^{\prime}\right)$ are bounded, that is they project weakly onto a common quotient $c / d$ of $L$ in $m$ and $m^{\prime}$ steps respectively. Property $\mathrm{P}(n)$ demands that $m, m^{\prime} \leq n$ for some fixed integer $n$. Take $n=\max (2 h+5, h+r+5)$ and assume that $u, u^{\prime}, c, d$ have been chosen so as to minimize the number $m+m^{\prime}$. We will show that if $m>n$ then there is another choice for $u, u^{\prime}, c, d$ such that the corresponding combined number of steps in the weak projectivities is strictly less than $m+m^{\prime}$. This contradiction, together with the same argument for $m^{\prime}$, proves the theorem. By assumption $q(u) / p(u)=a_{0} / b_{0} \sim_{w} a_{1} / b_{1} \sim_{w} \ldots \sim_{w} a_{m} / b_{m}=c / d$ for some quotients $a_{i} / b_{i}$ in $L$ which transpose weakly alternatingly up and down onto $a_{i+1} / b_{i+1}$ $(i=0,1, \ldots, m-1)$. Since $m>\max (2 h+5, h+r+5)$, we can always find an integer $k$ such that $\max (h+2, r+2)<k<m-h-2$. Consider the $r+3$ quotients up to and including $a_{k} / b_{k}$ in the above sequence. Since the other cases can be treated similarly, we may assume that $$ a_{k-r-2} / b_{k-r-2} \nearrow_{w} a_{k-r-1} / b_{k-r-1} \searrow_{w} \ldots \nearrow_{w} a_{k} / b_{k} $$ Let $L_{0}$ be the sublattice of $L$ generated by the $r+3$ elements $$ a_{k-r-2}, b_{k-r-1}, a_{k-r}, \ldots, a_{k-1}, b_{k} . $$ Notice that $L_{0} \in \mathcal{V}+\mathcal{V}^{\prime}$, and $L_{0}$ is a finite lattice because $F_{\mathcal{V}+\mathcal{V}^{\prime}}(r+3)=F_{\mathcal{V}_{c}}(r+3)$ is a subdirect product of $F_{\mathcal{V}}(r+3)$ and $F_{\mathcal{V}}(r+3)$, and is therefore finite. $a_{k} / b_{k}\left(=a_{k-1}+b_{k} / b_{k}\right)$ can be divided into (finitely many) prime quotients in $L_{0}$ and at least one of these prime quotients, say $x / y$, must project weakly onto a nontrivial subquotient of $c / d$. 
Let $\bar{L}_{0}$ be the unique subdirectly irreducible quotient lattice of $L_{0}$ in which $\bar{x} / \bar{y}$ is a critical quotient. Then $\bar{L}_{0} \in \mathcal{V}+\mathcal{V}^{\prime}$, and hence Theorem 2.3 (i) implies $\bar{L}_{0} \in \mathcal{V} \cup \mathcal{V}^{\prime}$. We examine each of the three cases that arise: Case 1: $\bar{L}_{0} \in \mathcal{V}$ and $\bar{L}_{0} \notin \mathcal{V}^{\prime}$. Since $\bar{L}_{0} \notin \mathcal{V}^{\prime}$, there exists $v \in L_{0}{ }^{\omega}$ such that $p^{\prime}(\bar{v})<$ $q^{\prime}(\bar{v})$. $\bar{L}_{0} \in \mathcal{V}$ implies $R\left(\bar{L}_{0}\right) \leq h$. Also $p^{\prime}(\bar{v})=\overline{p^{\prime}(v)}$ and $q^{\prime}(\bar{v})=\overline{q^{\prime}(v)}$. So by Lemma 5.15 $q^{\prime}(v) / p^{\prime}(v)$ projects weakly onto $x / y$ in $h+1$ steps. Now $q(u) / p(u)$ projects weakly onto $c / d$ in $k$ steps, hence onto $x / y$ in $k+1$ steps. But $h+1+k+1<m \leq m+m^{\prime}$, so this contradicts the minimality of $m+m^{\prime}$. Case 2: $\bar{L}_{0} \notin \mathcal{V}$ and $\bar{L}_{0} \in \mathcal{V}^{\prime}$. Since $\bar{L}_{0} \notin \mathcal{V}, p(v)<q(v)$ for some $v \in L_{0}{ }^{\omega}$. As above, since $\bar{L}_{0} \in \mathcal{V}^{\prime}, R\left(L_{0}\right) \leq h$, and hence $q(v) / p(v)$ projects weakly onto $x / y$ in $h+1$ steps and from there onto a nontrivial subquotient $c^{\prime} / d^{\prime}$ of $c / d$ in $m-k$ steps. By the choice of $k$ we have $h+1+m-k<m-1$. Also $q^{\prime}\left(u^{\prime}\right) / p^{\prime}\left(u^{\prime}\right)$ projects weakly onto $c / d$ in $m^{\prime}+1$ steps so again we get a contradiction. Case 3: $\bar{L}_{0} \in \mathcal{V} \cap \mathcal{V}^{\prime}=\mathcal{W}$. First suppose that $r>0$, hence $\mathcal{W} \neq \mathcal{D} . R\left(\mathcal{W}_{S I}\right)=r$ implies $\bar{a}_{k-r-2} / \bar{b}_{k-r-2}$ projects weakly onto $\bar{x} / \bar{y}$ in $r$ steps, so by Lemma $5.15 a_{k-r-2} / b_{k-r-2}$ projects weakly onto $x / y$ in $r+1$ steps. Now either $$ \begin{aligned} & a_{k-r-2} / b_{k-r-2} \searrow_{w} \boldsymbol{a}_{k-r-1}^{\prime} / b_{k-r-1}^{\prime} \nearrow_{w} \ldots \searrow_{w} a_{k-2}^{\prime} / b_{k-2}^{\prime} \nearrow_{w} x / y \text { or } \\ & a_{k-r-2} / b_{k-r-2} \nearrow_{w} a_{k-r-1}^{\prime} / b_{k-r-1}^{\prime} \searrow_{w} \ldots \nearrow_{w} a_{k-2}^{\prime} / b_{k-2}^{\prime} \searrow_{w} x / y \end{aligned} $$ for some quotients $a_{k-r-1}^{\prime} / b_{k-r-1}^{\prime}, \ldots, a_{k-2}^{\prime} / b_{k-2}^{\prime}$ in $L$. Since $$ a_{k-r-3} / b_{k-r-3} \searrow_{w} a_{k-r-2} / b_{k-r-2} \text { and } a_{k} / b_{k} \searrow_{w} a_{k+1} / b_{k+1} $$ we have that $q(u) / p(u)$ projects weakly onto $x / y$ in $k-2$ steps and hence onto a nontrivial subquotient $c^{\prime} / d^{\prime}$ of $c / d$ in $m-2$ steps. As before $q\left(u^{\prime}\right) / p\left(u^{\prime}\right)$ projects weakly onto $c^{\prime} / d^{\prime}$ in $m^{\prime}+1$ steps which again contradicts the minimality of $m+m^{\prime}$. Now suppose that $r=1$, which implies $\mathcal{W}=\mathcal{D}$ and $\bar{L}_{0}=\mathbf{2}$. Hence in $L_{0}$ we have $$ \begin{gathered} a_{k-2} / b_{k-2} \nearrow a_{k-2}+y / b_{k-2}+y \searrow x / y \text { and } \\ a_{k-2} / b_{k-2} \searrow a_{k-2} x / b_{k-2} x \nearrow x / y . \end{gathered} $$ It follows that we can shorten the sequence of weak projectivities from $q(u) / p(u)$ onto a nontrivial subquotient $c^{\prime} / d^{\prime}$ of $c / d$ to $m-2$ steps. 
Again $q^{\prime}\left(u^{\prime}\right) / p^{\prime}\left(u^{\prime}\right)$ projects weakly onto $c^{\prime} / d^{\prime}$ in $m^{\prime}+1$ steps, giving rise to another contradiction. This concludes the proof. We end this section with a theorem that summarizes some conditions under which the join of two finitely based varieties is known to be finitely based. Parts (i) and (ii) are from Lee [85'], and they follows easily from the preceding theorem. Part (iii) is due to Jónsson and the remaining results are from Kang $[87]$. THEOREM 5.18 Let $\mathcal{V}$ and $\mathcal{V}^{\prime}$ be two finitely based lattice varieties. If one of the following conditions holds then $\mathcal{V}+\mathcal{V}^{\prime}$ is finitely based: (i) $\mathcal{V}$ is modular and $\mathcal{V}^{\prime}$ is generated by a finite lattice that excludes $M_{3}$. (ii) $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are locally finite and $R\left(\mathcal{V} \cap \mathcal{V}^{\prime}\right)$ is finite. (iii) $\mathcal{V}$ and $\mathcal{V}^{\prime}$ are modular and $\mathcal{V}^{\prime}$ is generated by a lattice of finite length. (iv) $\mathcal{V}$ is modular and $\mathcal{V}^{\prime}$ is generated by a lattice with finite projective radius. (v) $\mathcal{V} \cap \mathcal{V}^{\prime}=\mathcal{D}$, the distributive variety. Lee [85'] also showed that any almost distributive (see Section 4.3) subdirectly irreducible lattice has a projective radius of at most 3 . Since any almost distributive variety is locally finite, it follows from Theorem 5.17 that the join of two finitely based almost distributive varieties is again finitely based. ### Equational Bases for some Varieties A variety $\mathcal{V}$ is usually specified in one of two ways: either by a set $\mathcal{E}$ of identities that determine $\mathcal{V}$ (i.e. $\mathcal{V}=\operatorname{Mod} \mathcal{E}$ ) or by a class $\mathcal{K}$ of algebras that generate $\mathcal{V}$ (i.e. $\mathcal{V}=\mathcal{K}^{\mathcal{V}}$ ). In the first case $\mathcal{E}$ is of course an equational basis for $\mathcal{V}$, so here we are interested in the second case. A lattice inclusion or inequality of the form $p \leq q$ will also be referred to as a lattice identity, since it is equivalent to the identity $p=p q$. Theorem 3.32 shows that the variety $\mathcal{M}_{\omega}$, generated by all lattices of length 2 , has an equational basis consisting of one identity $\varepsilon: x_{0}\left(x_{1}+x_{2} x_{3}\right)\left(x_{2}+x_{3}\right) \leq x_{1}+x_{0} x_{2}+x_{0} x_{3}$. Jónsson [68] observed that if one adds to this the identity $$ \varepsilon_{n}: \quad x_{0} \prod_{1 \leq i, j \leq n}\left(x_{i}+x_{j}\right) \leq \sum_{1 \leq i \leq n} x_{0} x_{i} $$ then one obtains an equational basis for $\mathcal{M}_{n}=\left\{M_{n}\right\}^{\mathcal{V}}(3 \leq n \in \omega)$. To see this, note that $\varepsilon_{n}$ holds in a lattice of length 2 whenever two of the variables $x_{0}, x_{1}, \ldots, x_{n}$ are assigned to the same element or one of them is assigned to 0 or 1 , but fails when they are assigned to $n+1$ distinct atoms. Therefore $\varepsilon_{n}$ holds in $\mathcal{M}_{n}$ and fails in $M_{n+1}$. For $\mathcal{M}_{3}$ this basis may be simplified even further by observing that $\varepsilon_{3}$ implies $\varepsilon$, hence $$ \mathcal{M}_{3}=\operatorname{Mod}\left\{x_{0}\left(x_{1}+x_{2}\right)\left(x_{2}+x_{3}\right)\left(x_{3}+x_{1}\right) \leq x_{0} x_{1}+x_{0} x_{2}+x_{0} x_{3}\right\} $$ An equational basis for $\mathcal{N}$ was found by McKenzie [72]. 
It is given by the identities $$ \begin{aligned} & \eta_{1}: \quad x(y+u)(y+v) \leq x(y+u v)+x u+x v \\ & \eta_{2}: \quad x(y+u(x+v))=x(y+u x)+x(x y+u v) \end{aligned} $$ McKenzie shows that $\eta_{1}$ and $\eta_{2}$ hold in any lattice of width $\leq 2$, whence $\mathcal{N} \subseteq \operatorname{Mod}\left\{\eta_{1}, \eta_{2}\right\}$, and then proves by direct computation that any identity which holds in $\mathcal{N}$ is implied by $\eta_{1}$ and $\eta_{2}$. In view of Theorem 4.19 the second part may now also be verified by checking that either $\eta_{1}$ or $\eta_{2}$ fail in each of the lattices $M_{3}, L_{1}, L_{2}, \ldots, L_{15}$ (see Figure 2.2). Theorem 5.17 implies that the variety $\mathcal{M}^{+}=\mathcal{M}+\mathcal{N}$ is finitely based ( $\mathcal{M}$ is the variety of all modular lattices). Note that since $\mathcal{N}$ is the only nonmodular variety that covers the distributive variety, $\mathcal{M}^{+}$is the unique cover of $\mathcal{M}$. Jónsson [77] derives the following equational basis for $\mathcal{M}^{+}$consisting of 8 identities: (i) $((x+c) y+z)(x+z+a)=(x+a) y+z$ (ii) $(x+c) y \leq x+(y+a) c$ (iii) $((t+x) y+a) c=((c t+x) y+a) c+((b t+x) y+a) c$ $$ \begin{aligned} \text { (iv) }((c t+x) y+a) c & =(((c t+x) c+a+x y) y+a) c \\ \text { (v) }((b t+x) y+a) c & =((b t+a) c+x y)((b+x) y+a) c \end{aligned} $$ and the duals of (iii), (iv) and (v), where $a=p q+p r, b=q$ and $c=p(q+r q)$. (Note that (ii) is the identity (AD) which forms part of the equational basis for the variety of all almost distributive lattices in Section 4.3.) Varieties generated by lattices of bounded length or width. Let $\mathcal{V}_{n}^{m}$ be the lattice variety generated by all lattices of length at most $m$ and width at most $n(1 \leq m, n \leq \infty)$ and recall from Section 3.4 the varieties $\mathcal{M}_{n}^{m}$ which are defined similarly for modular lattices. For $m, n<\infty$ all these varieties are finitely generated, hence finitely based (Theorem 5.10), and it would be interesting to find a finite equational basis for each of them. Apart from several trivial cases, and the case $\mathcal{M}_{n}^{2}=\mathcal{M}_{n}$, not much is known about these varieties. Nelson [68] showed that $\mathcal{V}_{2}^{\infty}=\mathcal{N}\left(=\mathcal{V}_{2}^{n}\right.$ for $\left.n \geq 3\right)$. With the help of Theorem 4.19, this follows from the observation that each of the lattices $M_{3}, L_{1}, L_{2}, \ldots, L_{15}$ has width $\geq 3$. Baker [77] proves that $\mathcal{V}_{4}^{\infty}$ and $\mathcal{M}_{5}^{\infty}$ are not finitely based, and the same holds for $\mathcal{V}_{n}^{\infty}, \mathcal{M}_{n}^{\infty} n \geq 5$. The proofs are similar to the proof of Lemma 5.11. As mentioned at the end of Chapter $3 \mathcal{M}_{3}^{\infty}=\mathcal{M}_{3}$, and by a result of Freese $[77] \mathcal{M}_{4}^{\infty}$ is finitely based. Whether $\mathcal{V}_{3}^{\infty}$ is finitely based is apparently still an unresolved question. ## Chapter 6 ## Amalgamation in Lattice Varieties ### Introduction The word amalgamation generally refers to a process of combining or merging certain structures which have something in common, to form a larger or more complicated structure which incorporates all the individual features of its substructures. In the study of varieties, amalgamation, of course, has a very specific meaning, which is defined in the following section. 
This leads to the formulation of the amalgamation property, which has been of interest for quite some time in several related areas of mathematics such as the theory of field extensions, universal algebra, model theory and category theory. Amalgamations of groups were originally considered by Schreier [27] in the form of free products with amalgamated subgroup. Implicit in his work, and in the subsequent investigations of B. H. Neumann [54] and H. Neumann [67], is the result that the variety of all groups has the amalgamation property. The first definition of this property in a universal algebra setting can be found in Fraïssé [54]. The strong amalgamation property appears in Jónsson [56] and [60] among a list of properties used for the construction of universal (and homogeneous) models of various first order theories, including lattice theory. One of the results in the [56] paper is that the variety $\mathcal{L}$ of all lattices has the strong amalgamation property. Interesting applications of the amalgamation property to free products of algebras can be found in Jónsson [61], Grätzer and Lakser [71] and [GLT]. The property also plays a role in the theory of algebraic field extensions (Jónsson [62]) and can be related to the solvability of algebraic equations (Hule $[\mathbf{7 6}],[78],[79]$ ). However, it soon became clear that not many of the better known varieties of algebras satisfy the amalgamation property. Counterexamples showing that it fails in the variety of all semigroups are given in Kimura [57] and Howie [62], and these can be used to construct counterexamples for the variety of all rings. As far as lattice varieties are concerned, it follows from Pierce [68] that the variety of all distributive lattices does have the amalgamation property, but Grätzer, Jónsson and Lakser [73] showed that this was not true for any nondistributive modular subvariety. Finally Day and Ježek [84] completed the picture for lattice varieties, by showing that the amalgamation property fails in every nondistributive proper subvariety of $\mathcal{L}$. A comprehensive survey of the amalgamation $A, B, C, D \in \mathcal{K}$ Figure 6.1 property and related concepts for a wide range of algebras can be found in Kiss, Márki, Pröhle and Tholen [83]. Because of all the negative results, investigations in this field are now focusing on the amalgamation class $\operatorname{Amal}(\mathcal{K})$ of all amalgamation bases of $\mathcal{K}$, which was first defined in Grätzer and Lakser [71]. A syntactic characterization, and some general facts about the structure of $\operatorname{Amal}(\mathcal{K}), \mathcal{K}$ an elementary class, appear in Yasuhara [74]. Bergman [85] gives sufficient conditions for a member of a residually small variety $\mathcal{V}$ of algebras to be an amalgamation base of $\mathcal{V}$, and Jónsson [90] showed that for finitely generated lattice varieties these conditions are also necessary and, moreover, that it is effectively decidable whether or not a finite lattice is a member of the amalgamation class of such a variety. In Section 6.3 we present some of Bergman's results, and a generalization of Jónsson's results to residually small congruence distributive varieties whose members have one-element subalgebras (due to Jipsen and Rose [89]). 
In Grätzer, Jónsson and Lakser [73] it is shown that the two-element chain does not belong to the amalgamation class of any finitely generated nondistributive lattice variety, and that the amalgamation class of the variety of all modular lattices does not contain any nontrivial distributive lattice. On the other hand Berman [81] constructed a nonmodular variety $\mathcal{V}$ such that the two-element chain is a member of $\operatorname{Amal}(\mathcal{V})$. Lastly, whenever the amalgamation property fails in some variety $\mathcal{V}$, then $\operatorname{Amal}(\mathcal{V})$ is a proper subclass of $\mathcal{V}$, and it would be of interest to know what kind of class we are dealing with. In particular, is $\operatorname{Amal}(\mathcal{V})$ an elementary class (i.e. can membership be characterized by some collection of first order sentences)? Using results of Albert and Burris [88], Bergman [89] showed that the amalgamation class of any finitely generated nondistributive modular variety is not elementary. In contrast Bruyns, Naturman and Rose [a] show that for the variety generated by the pentagon, the amalgamation class is elementary, and is in fact determined by Horn sentences. ### Preliminaries The amalgamation class of a variety. By a diagram in a class $\mathcal{K}$ of algebras we mean a quintuple $(A, f, B, g, C)$ with $A, B, C \in \mathcal{K}$ and $f: A \hookrightarrow B, g: A \hookrightarrow C$ embeddings. An amalgamation in $\mathcal{K}$ of such a diagram is a triple $\left(f^{\prime}, g^{\prime}, D\right)$ with $D \in \mathcal{V}$ and $f^{\prime}: B \hookrightarrow D$, $g^{\prime}: C \hookrightarrow D$ embeddings such that $f^{\prime} f=g^{\prime} g$ (see Figure 6.1). A strong amalgamation is an amalgamation with the additional property that $$ f^{\prime}(B) \cap g^{\prime}(C)=f^{\prime} f(A) $$ An algebra $A \in \mathcal{K}$ is called an amalgamation base for $\mathcal{K}$ if every $\operatorname{diagram}(A, f, B, g, C)$ can be amalgamated in $\mathcal{K}$. The class of all amalgamation bases for $\mathcal{K}$ is called the amalgamation class of $\mathcal{K}$, and is denoted by $\operatorname{Amal}(\mathcal{K}) . \mathcal{K}$ is said to have the (strong) amalgamation property if every diagram can be (strongly) amalgamated in $\mathcal{K}$. We are interested mainly in the case where $\mathcal{K}$ is a variety. Some general results about $\operatorname{Amal}(\mathcal{V})$. We summarize below some results, the first of which is due to Grätzer, Jónsson and Lakser [73] and the others are from Yasuhara [74]. Theorem 6.1 Let $\mathcal{V}$ be a variety of algebras. (i) If $f: A \hookrightarrow A^{\prime} \in A m a l(\mathcal{V})$, and for every $g: A \hookrightarrow C, f$ and $g$ can be amalgamated in $\mathcal{V}$, then $A \in A \operatorname{mal}(\mathcal{V})$. (ii) Every $A^{\prime} \in \mathcal{V}$ can be embedded in some $A \in A m a l(\mathcal{V})$, with $|A| \leq\left|A^{\prime}\right|+\omega$. (iii) $A \operatorname{mal}(\mathcal{V})$ is a proper class. (iv) The complement of Amal( $\mathcal{V})$ is closed under reduced powers. (v) If $A \times A^{\prime} \in A m a l(\mathcal{V})$, and if $A^{\prime}$ has a one element subalgebra, then $A \in A m a l(V)$. In general we know very little about the members of $\operatorname{Amal}(\mathcal{V})$. Take for example $\mathcal{V}=\mathcal{M}$, the variety of all modular lattices: as yet nobody has been able to construct a nontrivial amalgamation base for $\mathcal{M}$. 
In fact, we do not even know whether $\operatorname{Amal}(\mathcal{M})$ has any finite members except the trivial lattices. As we shall see below, the situation is somewhat better if we restrict ourselves to residually small varieties (defined below). Essential extensions and absolute retracts. An extension $B$ of an algebra $A$ is said to be essential if every nontrivial congruence on $B$ restricts to a nontrivial congruence on $A$. An embedding $f: A \hookrightarrow B$ is an essential embedding if $B$ is an essential extension of $f(A)$. Notice that if $A$ is $(a, b)$-irreducible (i.e. con(a,b) is the smallest nontrivial congruence on $A)$ and $f: A \hookrightarrow B$ is an essential embedding, then $B$ is $(f(a), f(b))$-irreducible. Lemma 6.2 If $h: A \hookrightarrow B$ is any embedding, then there exists a congruence $\theta$ on $B$ such that $h$ followed by the canonical epimorphism from $B$ onto $B / \theta$ is an essential embedding of $A$ into $B / \theta$. Proof. By Zorn's Lemma we can choose $\theta$ to be maximal with respect to not identifying any two members of $h(A)$. An algebra $A$ in a variety $\mathcal{V}$ is an absolute retract of $\mathcal{V}$ if, for every embedding $f$ : $A \hookrightarrow B$ with $B \in \mathcal{V}$, there exists an epimorphism (retraction) $g: B \rightarrow A$ such that the composite $g f$ is the identity map on $A$. ThEorem 6.3 (Bergman [85]). Every absolute retract of a variety $\mathcal{V}$ is an amalgamation base of $\mathcal{V}$. Proof. Suppose $A$ is an absolute retract of $\mathcal{V}$ and let $(A, f, B, g, C)$ be a diagram in $\mathcal{V}$. Then there exist epimorphisms $h$ and $k$ such that $f h=\operatorname{id}_{A}=k g$. To amalgamate the diagram, we take $D=B \times C$ and define $f^{\prime}: B \hookrightarrow D$ by $f^{\prime}(b)=(b, g h(b))$ and $g^{\prime}: C \hookrightarrow D$ by $g^{\prime}(c)=(f k(c), c)$, then $f^{\prime} f(a)=(f(a), g(a))=g^{\prime} g(a)$ for all $a \in A$. Recall that $\mathcal{V}_{S I}$ denotes the class of all subdirectly irreducible algebras of $\mathcal{V}$ and consider the following two subclasses (referred to as the class of all maximal irreducibles and all weakly maximal irreducibles respectively): $$ \begin{aligned} \mathcal{V}_{M I} & =\left\{M \in \mathcal{V}_{S I}: M \text { has no proper extension in } \mathcal{V}_{S I}\right\} \\ \mathcal{V}_{W M I} & =\left\{M \in \mathcal{V}_{S I}: M \text { has no proper essential extension in } \mathcal{V}_{S I}\right\} . \end{aligned} $$ Clearly $\mathcal{V}_{M I} \subseteq \mathcal{V}_{W M I}$ Lemma $6.4 M \in \mathcal{V}_{W M I}$ if and only if $M \in \mathcal{V}_{S I}$ and $M$ is an absolute retract in $\mathcal{V}$ Proof. Let $M \in \mathcal{V}_{W M I}$ and suppose $f: M \hookrightarrow B \in \mathcal{V}$ is an embedding. By Lemma 6.2, $f$ induces an essential embedding $f^{\prime}: M \hookrightarrow B / \theta$ for some $\theta \in \operatorname{Con}(B)$. Since $M$ has no proper essential extensions, $f^{\prime}$ must be an isomorphism, so the canonical epimorphism from $B$ to $B / \theta$ followed by the inverse of $f^{\prime}$ is the required retraction. The converse follows from the observation that an absolute retract of $\mathcal{V}$ cannot have a proper essential extension in $\mathcal{V}$. Theorem 6.3 together with Lemma 6.4 implies that $\mathcal{V}_{W M I} \subseteq \operatorname{Amal}(\mathcal{V})$. 
Observe also that if $\mathcal{V}$ is a finitely generated congruence distributive variety, then $\mathcal{V}_{S I}$ is a finite set of finite algebras (Corollary 1.7), and so we can determine the members of $\mathcal{V}_{M I}$ by inspection. Residually small varieties. A variety $\mathcal{V}$ is said to be residually small if the subdirectly irreducible members of $\mathcal{V}$ form, up to isomorphism, a set, or equivalently, if there exists an upper bound on the cardinality of the subdirectly irreducible members of $\mathcal{V}$. For example, any finitely generated congruence distributive variety is residually small. ThEorem 6.5 (Taylor [72]). If $\mathcal{V}$ is a residually small variety, then every member of $\mathcal{V}_{S I}$ has an essential extension in $\mathcal{V}_{W M I}$. Proof. The union of a chain of essential extensions is again an essential extension, so we can apply Zorn's Lemma to the set $\mathcal{V}_{S I}$ (ordered by essential inclusion) to obtain its maximal elements. Clearly these are all the elements of $\mathcal{V}_{W M I}$. In fact Taylor [72] also proved the converse of the above theorem, but we won't make use of this result. Note that if $\mathcal{V}$ is a finitely generated congruence distributive variety then every member of $\mathcal{V}_{S I}$ has an essential extension in $\mathcal{V}_{M I}$. Amalgamations constructed from factors. The following lemma is valid in any class of algebras that is closed under products, and makes the problem of amalgamating a diagram somewhat more accessible. Lemma 6.6 (Grätzer and Lakser [71]). A diagram $(A, f, B, g, C)$ in a variety $\mathcal{V}$ can be amalgamated if and only if for all $u \neq v \in B$ there exists a $D \in \mathcal{V}$ and homomorphisms $f^{\prime}: B \rightarrow D$ and $g^{\prime}: C \rightarrow D$ such that $f^{\prime} f=g^{\prime} g$ and $f^{\prime}(u) \neq f^{\prime}(v)$, and the same holds for $C$. Proof. The condition is clearly necessary. To see that it is sufficient, we need only observe that the diagram can be amalgamated by the product of these $D$ 's, generated as $u$ and $v$ run through all distinct pairs of $B$ and $C$. Figure 6.2 ### Amal $(\mathcal{V})$ for Residually Small Varieties Property (Q). An algebra $A$ in a variety $\mathcal{V}$ is said to have property $(Q)$ if for any embedding $f: A \hookrightarrow B \in \mathcal{V}$ and any homomorphism $h: A \rightarrow M \in \mathcal{V}_{W M I}$ there exists a homomorphism $g: B \rightarrow M$ such that $h=g f$. This property was used in Grätzer and Lakser [71], Bergman [85] and Jónsson [90] to characterize amalgamation classes of certain varieties. Theorem 6.7 (Bergman [85]). Let $\mathcal{V}$ be a residually small variety. If $A \in \mathcal{V}$ has property (Q), then $A \in A \operatorname{mal}(\mathcal{V})$. Proof. We use Lemma 6.6. Let $(A, f, B, g, C)$ be any diagram in $\mathcal{V}$ and let $u \neq v \in B$. Choose a maximal congruence $\theta$ on $B$ such that $\theta$ does not identify $u$ and $v$. Then $B / \theta \in \mathcal{V}_{S I}$ and hence by Theorem $6.5 B / \theta$ has an essential extension $M \in \mathcal{V}_{W M I}$. Let $f^{\prime}$ be the canonical homomorphism $B \rightarrow B / \theta$, but considered as a map into $M$. Clearly $f^{\prime}(u) \neq f^{\prime}(v)$, and since $A$ has property (Q), the homomorphism $f^{\prime} f: A \rightarrow M$ can be extended to a homomorphism $g^{\prime}: C \rightarrow M$ such that $f^{\prime} f=g^{\prime} g$. 
We argue similarly for $u \neq v \in C$, hence Lemma 6.6 implies $A \in \operatorname{Amal}(\mathcal{V})$. We show that, for certain congruence distributive varieties of algebras (including all residually small lattice varieties), the converse of the above theorem also holds. We first prove two simple results. LEMMA 6.8 (Jipsen and Rose [89]). Let $A$ and $B$ be algebras in a congruence distributive variety $\mathcal{V}, a \in A$ and suppose $\{a\}$ is a subalgebra of $A$. Let $h_{a}: B \hookrightarrow A \times B$ be an embedding such that $h_{a}(b)=(a, b)$ for all $b \in B$. Then the projection $\pi_{B}: A \times B \rightarrow B$ is the only retraction of $h_{a}$ onto $B$. Proof. Suppose $g: A \times B \rightarrow B$ is a retraction, that is $g h_{a}$ is an identity map on $B$. By Lemma 1.3 there exist $\theta \in \operatorname{Con}(A)$ and $\phi \in \operatorname{Con}(B)$ such that for $(x, y),\left(x^{\prime}, y^{\prime}\right) \in A \times B$ $$ (x, y) \text { ker } g\left(x^{\prime}, y^{\prime}\right) \quad \text { if and only if } \quad x \theta x^{\prime} \text { and } y \phi y^{\prime} . $$ Since $g h_{a}$ is an identity map on $B, \phi$ must be a trivial congruence on $B$. To prove that $g=\pi_{B}$ it suffices to show that for any $a^{\prime} \in A$ we have $a \theta a^{\prime}$. Suppose the contrary. First observe that for $b, b^{\prime} \in B$ with $b \neq b^{\prime}$ we always have $$ g(a, b)=b \neq b^{\prime}=g\left(a, b^{\prime}\right) . $$ Now if $\left(a, a^{\prime}\right) \notin \theta$ for some $a^{\prime} \in A$, then there exist $b, b^{\prime} \in B$ such that $$ g(a, b)=b \neq b^{\prime}=g\left(a^{\prime}, b\right) $$ Thus $g\left(a, b^{\prime}\right)=g\left(a^{\prime}, b\right)$ and so Lemma 1.3 implies $a \theta a^{\prime}$ and $b \phi b^{\prime}$, a contradiction. Counterexample. The assumption that $h=h_{a}$ is a one-element subalgebra embedding cannot be dropped. Indeed, let $\mathbf{2}=\{0,1\}$ be the two-element chain and consider a lattice embedding $h: \mathbf{2} \hookrightarrow \mathbf{2} \times \mathbf{2}$ given by $h(0)=(0,0)$ and $h(1)=(1,1)$. Then both projections on $2 \times 2$ are retractions onto 2 . Corollary 6.9 Let $\mathcal{V}$ be congruence distributive, $A, B \in \mathcal{V}$, and suppose $A$ has a oneelement subalgebra $\{a\}$ and $B$ is an absolute retract in $\mathcal{V}$. If $k: A \times B \hookrightarrow C \in \mathcal{V}$ is an embedding, then the projection $\pi_{B}: A \times B \rightarrow B$ can be extended to an epimorphism of $C$ onto $B$. Proof. If $h_{a}: B \hookrightarrow A \times B$ is an embedding as in Lemma 6.8, then $k h_{a}$ is an embedding of $B$ into $C$. Since $B$ is an absolute retract in $\mathcal{V}$ there is a retraction $p$ of $C$ onto $B$. It follows from Lemma 6.8 that $p \mid A \times B=\pi_{B}$. The characterization theorem. The following theorem is a generalization of a result of Jónsson [90]. There the result was proved for finitely generated lattice varieties. ThEorem 6.10 (Jipsen and Rose $[\mathbf{8 9}]$ ). Let $\mathcal{V}$ be a residually small congruence distributive variety, $A \in \mathcal{V}$ and suppose $A$ has a one-element subalgebra. Then the following conditions are equivalent: (i) A satisfies property (Q); (ii) $A \in A \operatorname{mal}(\mathcal{V})$; (iii) Let $h: A \rightarrow M \in \mathcal{V}_{W M I}$ be a homomorphism and $k: A \hookrightarrow A \times M$ be an embedding given by $k(a)=(a, h(a))$ for all $a \in A$. 
If $f: A \hookrightarrow B \in \mathcal{V}$ is an essential embedding then the diagram $(A, f, B, k, A \times M)$ can be amalgamated in $\mathcal{V}$. Proof. (i) implies (ii) by Theorem 6.7 , and trivially (ii) implies (iii). Suppose (iii) holds. It follows from Lemma 6.2, that in order to prove (i), we may assume that the embedding $f: A \hookrightarrow B$ is essential. Let $h: A \rightarrow M \in \mathcal{V}_{W M I}$ be any homomorphism, and define an embedding $k: A \hookrightarrow A \times M$ by $k(a)=(a, h(a))$ for all $a \in A$. Notice that $h=\pi_{M} k$ where $\pi_{M}$ is the projection from $A \times M$ onto $M$. By (iii) the diagram $(A, f, B, k, A \times M)$ has an amalgamation $\left(C, f^{\prime}, k^{\prime}\right)$ in $\mathcal{V}$. It follows from Corollary 6.9 that there is a retraction $p: C \rightarrow M$ such that $h=p k^{\prime} k=p f^{\prime} f$ (see Figure 6.3). Letting $g=p f^{\prime}$ we have $h=g f . \square$ In case $\mathcal{V}$ is a finitely generated congruence distributive variety, we have that each $M \in \mathcal{V}_{W M I}$ is embedded in some $M^{\prime} \in \mathcal{V}_{M I}$ (= the set of all maximal extensions in the finite set $\left.\mathcal{V}_{W M I}\right)$. Since members of $\mathcal{V}_{W M I}$ are absolute retracts, we only have to test property (Q) for all homomorphisms $h: A \rightarrow M^{\prime} \in \mathcal{V}_{M I}$ (this is how property (Q) is defined in Jónsson [90]). If $A \in \mathcal{V}$ is a finite algebra, then $A$ has only finitely many nonisomorphic essential extensions $B \in \mathcal{V}$ and there are only finitely many possibilities for the homomorphisms $h: A \rightarrow M^{\prime} \in \mathcal{V}_{M I}$. In each case one can effectively determine if there exists a homomorphism $g: B \rightarrow M^{\prime}$ such that $g \mid A=h$. Thus we obtain: Figure 6.3 CorollaRy 6.11 (Jónsson [90]). Let $\mathcal{V}$ be a finitely generated congruence distributive variety. If $A \in \mathcal{V}$ is a finite algebra with a one-element subalgebra, then it is effectively decidable whether or not $A \in A m a l(\mathcal{V})$. Property (P). We conclude this section by stating without proof further interesting results from Bergman $[\mathbf{8 5}]$ and Jónsson $[\mathbf{9 0}]$. For an algebra $A$ in a variety $\mathcal{V}$, we let $A^{\#}$ be the direct product of all algebras $A / \theta$ with $\theta \in \operatorname{Con}(A)$ and $A / \theta \in \mathcal{V}_{S I} \cap \operatorname{Amal}(\mathcal{V})$, and we let $\mu_{A}$ be the canonical homomorphism of $A$ into $A^{\#}$. An algebra $A$ in a variety $\mathcal{V}$ is said to have property $(P)$ if $\mu_{A}$ is an embedding of $A$ into $A^{\#}$, and for every homomorphism $g: A \rightarrow M \in \mathcal{V}_{M I}$ there exists a homomorphism $h: A^{\#} \rightarrow M$ with $h \mu_{A}=g$. THEOREM 6.12 (Bergman [85]). For any finitely generated variety of modular lattices and $A \in \mathcal{V}$, we have $A \in A m a l(\mathcal{V})$ if and only if $A$ is congruence extensile and has property $(P)$. Bergman also showed that the above theorem holds for $\mathcal{V}=\mathcal{N}$, the smallest nonmodular variety (see Jónsson [00]), and that the reverse implication holds for all finitely generated lattice varieties, but it is not known whether the same is true for the forward implication. Theorem 6.13 (Jónsson [90]). A finite lattice $L \in \mathcal{N}$ belongs to Amal( $\mathcal{N}$ ) if and only if $L$ is a subdirect power of $N$ and $L$ does not have the three element chain as a homomorphic image. 
It is not known whether this theorem is also true for infinite members of $\mathcal{N}$. ### Products of absolute retracts It is shown in Taylor [73] that, in general, the product of absolute retracts is not an absolute retract even if $\mathcal{V}$ is a congruence distributive variety. Theorem 6.14 however shows that absolute retracts are preserved under arbitrary products in a congruence distributive varieties, provided that every member of this variety has a one-element subalgebra. The theorem is a generalization of a result of Jónsson [90], which states that if $\mathcal{V}$ is a finitely generated lattice variety then any product of members of $\mathcal{V}_{M I}$ is an absolute retract in $\mathcal{V}$. Theorem 6.14 (Jipsen and Rose [89]). Let $\mathcal{V}$ be a congruence distributive variety and assume that every member of $\mathcal{V}$ has a one-element subalgebra. Then every direct product of absolute retracts in $\mathcal{V}$ is an absolute retract in $\mathcal{V}$. Proof. Suppose $A=\prod_{i \in I} A_{i}$ is a direct product of absolute retracts in $\mathcal{V}$, and consider an embedding $f: A \hookrightarrow B \in \mathcal{V}$. For $i \in I$, let $\pi_{i}: A \rightarrow A_{i}$ be a projection. By Corollary 6.9 there is a homomorphism $h_{i}: B \rightarrow A_{i}$ such that $\pi_{i}=h_{i} f$. Consider a homomorphism $h: B \rightarrow A$ given by $\pi_{i} h=h_{i}$ for each $i \in I$. Then $\pi_{i} h f=h_{i} f=\pi_{i}$ and so $h f: A \rightarrow A$ is the identity map. ### Lattices and the Amalgamation Property Given the characterization of $\operatorname{Amal}(\mathcal{V})$ (Theorem 6.10), some well known results about the amalgamation classes of finitely generated lattice varieties can be derived easily. Theorem 6.15 (Pierce [68]). The variety $\mathcal{D}$ of all distributive lattices has the amalgamation property. Proof. We show that property (Q) is satisfied for any $A \in \mathcal{D} . \mathcal{D}_{S I}=\mathcal{D}_{W M I}=\mathcal{D}_{M I}=\{\mathbf{2}\}$, so let $A, B \in \mathcal{D}$ with embedding $f: A \hookrightarrow B$ and homomorphism $h: A \rightarrow \mathbf{2}$. If $h(A)=\{0\}$ or $h(A)=\{1\}$ then trivially there exists $g: B \rightarrow \mathbf{2}$ such that $h=g f$. On the other hand, if $h$ is onto, then ker $h$ splits $A$ into an ideal $I$ and a filter $F$, say. Extend $f(I)$ to the ideal $I^{\prime}=\{b \in B: b \leq a \in f(I)\}$ and $f(F)$ to the filter $F^{\prime}=\{b \in B: b \geq a \in f(F)\}$. Clearly $I^{\prime} \cap F^{\prime}=\emptyset$, hence by Zorn's Lemma and the distributivity of $B, I^{\prime}$ can be enlarged to a maximal ideal $P$, which is also disjoint from $F^{\prime}$. Define $g: B \rightarrow \mathbf{2}$ by $g(b)=0$ if $b \in P$, and $g(b)=1$ otherwise. Then one easily checks that $g$ is a homomorphism and that $h=g f$. In fact, one can show more generally that if $\mathcal{V}$ is any congruence distributive variety which is generated by a finite simple algebra that has no nontrivial subalgebra, then $\mathcal{V}$ has the amalgamation property. This result is essentially contained in Day [72]. Theorem 6.16 For any nondistributive finitely generated lattice variety $\mathcal{V}$ we have $2 \notin$ Amal $(\mathcal{V})$ and consequently the amalgamation property fails in $\mathcal{V}$. Proof. Since $\mathcal{V}$ is nondistributive, $M_{3}$ or $N$ is a member of $\mathcal{V}$. Let $f$ be a map that embeds 2 into a prime critical quotient of $L=M_{3}$ or $L=N$, depending on which lattice is in $\mathcal{V}$. 
Also, since each $M \in \mathcal{V}_{M I}$ is finite, we can define the $\operatorname{map} h: \mathbf{2} \hookrightarrow M$ by $h(0)=0_{M}$ and $h(1)=1_{M}$. Now it is easy to see that there does not exist a homomorphism $g: L \rightarrow M$ such that $h=g f$ (see Figure 6.4). Berman [81] showed that there exists a lattice variety $\mathcal{V}$ such that $\mathbf{2} \in \operatorname{Amal}(\mathcal{V})$. In fact Berman considers the variety $\mathcal{V}=\left\{L_{6}^{n}: n \in \omega\right\}^{\mathcal{V}}$ (see Figure 2.2) and proves that all of its finitely generated subdirectly irreducible members are amalgamation bases. $\mathcal{L}$ has the strong amalgamation property. Next we would like to prove the result of Jónsson [56], that the variety $\mathcal{L}$ of all lattices has the strong amalgamation property. Since $\mathcal{L}$ is not residually small, we cannot make use of the results in the previous section. Figure 6.4 We first consider a notion weaker than that of an amalgamation: Let $L_{1}$ and $L_{2}$ be two lattices with $L^{\prime}=L_{1} \cap L_{2}$ a sublattice of both $L_{1}$ and $L_{2}$. A completion of $L_{1}$ and $L_{2}$ is a triple $\left(f_{1}, f_{2}, L_{3}\right)$ such that $L_{3}$ is any lattice and $f_{i}: L_{i} \hookrightarrow L_{3}$ are embeddings $(i=1,2)$ with $f_{1}\left|L^{\prime}=f_{2}\right| L^{\prime}$. Suppose $L_{1}$ and $L_{2}$ are members of some lattice variety $\mathcal{V}$, and let us denote the inclusion map $L^{\prime} \hookrightarrow L_{i}$ by $j_{i}(i=1,2)$. Then clearly $\left(f_{1}, f_{2}, L_{3}\right)$ is an amalgamation of the diagram $\left(L^{\prime}, j_{1}, L_{1}, j_{2}, L_{2}\right)$ in $\mathcal{V}$ if and only if $L_{3} \in \mathcal{V}$. How to construct a completion of $L_{1}$ and $L_{2}$ ? This can be done in various ways, of which we consider two, namely the ideal completion and the filter completion. We begin by setting $P=L_{1} \cup L_{2}$ and defining a partial order $\leq_{P}$ on $P$ as follows: on $L_{1}$ and $L_{2}$, $\leq_{P}$ agrees with the existing order (which we denote by $\leq_{1}$ and $\leq_{2}$ respectively), and if $x \in L_{i}, y \in L_{j}(\{i, j\}=\{1,2\})$ then (equivalently $\leq_{P}=\leq_{1} \cup \leq_{2} \cup \leq_{1} \circ \leq_{2} \cup \leq_{2} \circ \leq_{1}$ ). It is straightforward to verify that $\leq_{P}$ is indeed a partial order on $P$. Define a subset $I$ of $P$ to be a $\left(L_{1}, L_{2}\right)$-ideal of $P$ if $I$ satisfies $$ \begin{aligned} & \text { (1) } x \in I \quad \text { and } \quad z \leq_{P} x \text { imply } z \in I \text { and } \\ & \text { (2) } x, y \in I \cap L_{i} \text { implies } \quad x+_{i} y \in I \quad(i=1,2) \text {. } \end{aligned} $$ Let $\mathcal{I}\left(L_{1}, L_{2}\right)$ be the collection of all $\left(L_{1}, L_{2}\right)$-ideals of $P$ together with the empty set. $\mathcal{I}\left(L_{1}, L_{2}\right)$ is closed under arbitrary intersections, so it forms a complete lattice with $I \cdot J=$ $I \cap J$ and $I+J$ equal to the $\left(L_{1}, L_{2}\right)$-ideal generated by $I \cup J$ for any $I, J \in \mathcal{I}\left(L_{1}, L_{2}\right)$. For each $x \in L_{1} \cup L_{2}$ the principal ideal $(x]$ is in $\mathcal{I}\left(L_{1}, L_{2}\right)$, so we can define the maps $$ f_{i}: L_{i} \hookrightarrow \mathcal{I}\left(L_{1}, L_{2}\right) \quad \text { by } \quad f_{i}(x)=(x] \quad(i=1,2) $$ Lemma $6.17\left(f_{1}, f_{2}, \mathcal{I}\left(L_{1}, L_{2}\right)\right)$ is a completion (the ideal completion) of $L_{1}$ and $L_{2}$. Proof. Let $x, y \in L_{i}(i=1$ or 2$)$. 
Then $f_{i}(x+i y)=\left(x+{ }_{i} y\right] \supseteq(x]+(y]$ since $x, y \leq_{P} x+i y$. On the other hand we have $x, y \in(x]+(y]$ so by (2) $x+i y \in(x]+(y]$, Figure 6.5 hence $f_{i}\left(x+{ }_{i} y\right)=f_{i}(x)+f_{i}(y)$. Similarly $f_{i}\left(x \cdot_{i} y\right)=f_{i}(x) f_{i}(y)$ and clearly $f_{1}$ and $f_{2}$ are one-one, with $f_{1}\left|L=f_{2}\right| L$. The notion of a $\left(L_{1}, L_{2}\right)$-filter and the filter completion $\left(g_{1}, g_{2}, \mathcal{F}\left(L_{1}, L_{2}\right)\right)$ are defined dually. As an easy consequence we now obtain: THEorEM 6.18 (Jónsson [56]). $\mathcal{L}$ has the strong amalgamation property. Proof. Let $(A, f, B, g, C)$ be a diagram in $\mathcal{L}$. Since $\mathcal{L}$ is closed under taking isomorphic copies, we may assume that $A$ is a sublattice of $B$ and $C$, and that $A=B \cap C$ with $f$ and $g$ as the corresponding inclusion maps. Now $\mathcal{I}(B, C) \in \mathcal{L}$, so the ideal completion $\left(f_{1}, f_{2}, \mathcal{I}(B, C)\right)$ is in fact an amalgamation of the diagram. To see that this is a strong amalgamation we need only observe that if $(x] \in f_{1}(B) \cap f_{2}(C)$ then $x \in A$. Observe that we could not have used the above approach to prove the amalgamation property for the variety $\mathcal{D}$, since the ideal completion of two distributive lattices need not be distributive. Indeed, let $B=M_{2}(a, b)(=\mathbf{2} \times \mathbf{2}$ generated by $a, b)$ and $C=M_{2}(a, c)$ with $a+b=a+c$ and $a b=a c$ (see Figure 6.5). Then $B \cup C=M_{3}(a, b, c)$ is already a lattice, and therefore a sublattice of $\mathcal{I}(B, C)$. However $B \cup C$ is nondistributive, hence $\mathcal{I}(B, C) \notin \mathcal{D}$. The same holds for any other lattice $D$ in which we might try to amalgamate $B$ and $C$, so it follows that $\mathcal{D}$ does not have the strong amalgamation property. ### Amalgamation in modular varieties No nondistributive modular variety has the amalgamation property. In our presentation of this result we follow Grätzer, Jónsson and Lakser [73]. We begin with a technical lemma. LEMMA 6.19 Let $\mathcal{V}$ be a variety of algebras that has the amalgamation property, and let $A, B, C, D \in \mathcal{V}$. (i) If $D$ is an extension of $C$, and $f$ is any embedding of $C$ into $D$, then there exists an extension $E \in \mathcal{V}$ of $D$ and an embedding $g: D \hookrightarrow E$ such that $g \mid C=f$. (ii) If $B$ is an extension of $A$, and $\alpha$ is an automorphism of $A$, then there exists an extension $\bar{B} \in \mathcal{V}$ of $B$ and an automorphism $\bar{\alpha}$ of $\bar{B}$ such that $\bar{\alpha} \mid A=\alpha$ Figure 6.6 Proof. (i) Let $j$ be the inclusion map $C \hookrightarrow D$ and consider the diagram $(C, f, D, j, D)$ which by assumption has an amalgamation $\left(f^{\prime}, g, E\right)$. The result follows if we identify $D$ with its isomorphic image $f^{\prime}(D)$ in $E$. To prove (ii), let $B_{0}=A, B_{1}=B$, and consider $h_{0}=\alpha$ as an embedding of $B_{0}$ into $B_{1}$. We now apply part (i), with $C=h_{0}\left(B_{0}\right)$ and $f=h_{0}^{-1}$ to obtain an extension $B_{2}$ of $B_{1}$ and an embedding $h_{1}: B_{1} \hookrightarrow B_{2}$ satisfying $h_{1} \mid h_{0}\left(B_{0}\right)=h_{0}^{-1}$. 
Repeating this process for $n=2,3, \ldots$ we get a sequence of extensions $A=B_{0} \subseteq B_{1} \subseteq B_{2} \subseteq \cdots$ and a sequence of embeddings $h_{n}: B_{n} \hookrightarrow B_{n+1}$ such that $h_{n+1} \mid h_{n}(B_{n})=h_{n}^{-1}$ (see Figure 6.6). We can now define
$$
\bar{B}=\bigcup_{n \in \omega} B_{n} \quad \text{and} \quad \bar{\alpha}=\bigcup_{n \in \omega} h_{2 n},
$$
then clearly $\bar{\alpha}$ is an embedding of $\bar{B}$ into $\bar{B}$. To see that $\bar{\alpha}$ is also onto, choose any $y \in \bar{B}$, then there is an $n \in \omega$ such that $y \in B_{2 n+1}$. Put $x=h_{2 n+1}(y)$, then $\bar{\alpha}(x)=h_{2 n+2}(x)=y$, since $h_{2 n+2}$ is an extension of $h_{2 n+1}^{-1}$. Hence $\bar{\alpha}$ is an automorphism of $\bar{B}$, and by construction $\bar{\alpha} \mid A=\alpha$.

Theorem 6.20 (Grätzer, Jónsson and Lakser [73]). No nondistributive subvariety of the variety $\mathcal{M}$ of all modular lattices has the amalgamation property.

Proof. Let us assume to the contrary that there exists a variety $\mathcal{V}$ such that $\mathcal{D} \subset \mathcal{V} \subseteq \mathcal{M}$ and $\mathcal{V}$ has the amalgamation property. Under these assumptions we will prove a number of statements about $\mathcal{V}$ that will eventually lead to a contradiction. The proof does require the coordinatization theorem of projective spaces (see Section 3.2). As a simple observation we have that $M_{3} \in \mathcal{V}$.

Statement 1: Every member of $\mathcal{V}$ can be embedded in a simple complemented lattice that also belongs to $\mathcal{V}$.

Clearly any lattice in $\mathcal{V}$ can be embedded into some lattice $L \in \mathcal{V}$ which has a least and a greatest element, denoted by $0_{L}$ and $1_{L}$ respectively (e.g. we can take $L$ to be the ideal lattice). Given $x \in L$, $x \neq 0_{L}, 1_{L}$, we can embed the three-element chain $\mathbf{3}=\{0<1<2\}$ into $L$ and into $M_{3}(a, b, c)$ such that
$$
\begin{aligned}
& 0 \mapsto 0_{L}, \quad 1 \mapsto x, \quad 2 \mapsto 1_{L} \quad \text{in } L \\
& 0 \mapsto a b, \quad 1 \mapsto a, \quad 2 \mapsto a+b \quad \text{in } M_{3}(a, b, c).
\end{aligned}
$$
By the amalgamation property, there exists a lattice $L_{1} \in \mathcal{V}$ with $L$ as a 0,1-sublattice (i.e. $0_{L}=0_{L_{1}}$ and $1_{L}=1_{L_{1}}$) such that $\{0_{L}, x, 1_{L}\}$ is contained in a diamond sublattice of $L_{1}$. Iterating this process for each $x \in L$, $x \neq 0_{L}, 1_{L}$, we obtain a lattice $C_{1} \in \mathcal{V}$ that contains $L$ as a sublattice, and for each $x \in L$, $x \neq 0_{L}, 1_{L}$, there exist $y, z \in C_{1}$ such that $\{0_{L}, x, y, z, 1_{L}\}$ forms a diamond sublattice. Repeating this process, we obtain an infinite sequence
$$
L=C_{0} \subseteq C_{1} \subseteq C_{2} \subseteq \cdots \subseteq C_{n} \subseteq \cdots
$$
of lattices in $\mathcal{V}$, each with the same 0 and 1 as $L$ and satisfying: for all $n \in \omega$, all $x \in C_{n}$, if $x \neq 0_{L}, 1_{L}$ then $x$ belongs to a diamond $\{0_{L}, x, y, z, 1_{L}\}$ in $C_{n+1}$. Letting $C_{\infty}=\bigcup_{n \in \omega} C_{n}$, we have that $C_{\infty} \in \mathcal{V}$, and clearly $C_{\infty}$ is complemented. In a complemented lattice, a congruence $\theta$ is determined by the ideal $0_{L} / \theta$, hence if $\theta$ is a nontrivial congruence on $C_{\infty}$, then $x \theta 0_{L}$ for some $x \in C_{\infty}$, $x \neq 0_{L}$.
Now $x=1_{L}$ implies that $\theta$ collapses all of $C_{\infty}$, and $x<1_{L}$ implies that $x$ belongs to a diamond $\{0_{L}, x, y, z, 1_{L}\}$ in $C_{\infty}$, so again $\theta$ collapses all of $C_{\infty}$, since the diamond $M_{3}$ is simple. Hence $C_{\infty}$ is a simple complemented lattice in $\mathcal{V}$ containing $L$ as a sublattice. Note that since Hall and Dilworth [44] constructed a modular lattice that cannot be embedded in any complemented modular lattice, this statement suffices to conclude that $\mathcal{M}$ does not have the amalgamation property.

Statement 2: For every $L \in \mathcal{V}$ there exists an infinite dimensional nondegenerate projective space $P$ such that $L$ can be embedded in $\mathcal{L}(P)$ and $\mathcal{L}(P) \in \mathcal{V}$.

We may assume that $L$ has a greatest and least element, and that it contains an infinite chain, for if it does not, then we adjoin an infinite chain above the greatest element of $L$ and the resulting lattice is still a member of $\mathcal{V}$. By Statement 1, $L$ can be embedded in a simple complemented lattice $C \in \mathcal{V}$, and by Theorem 3.3, $C$ can be 0,1-embedded in some modular geometric lattice $M \in \mathcal{V}$. $M$ can be represented as a product of modular geometric lattices $M_{i}$ $(i \in I)$ which correspond to nondegenerate projective spaces $P_{i}$, in the sense that $M_{i} \cong \mathcal{L}(P_{i})$. Let $f_{i}$ denote the embedding of $C$ into $M$ followed by the $i$th projection. Since $f_{i}$ preserves 0 and 1, it cannot map $C$ onto a single element, hence by the simplicity of $C$, $f_{i}$ must be an embedding of $C$ into $M_{i} \cong \mathcal{L}(P_{i})$. Also $P_{i}$ is infinite dimensional since $C$ contains an infinite chain.

Statement 3: There exists a projective plane $Q$ such that $\mathcal{L}(Q) \in \mathcal{V}$ and $Q$ has at least six points on each line.

By Statement 2, there exists an infinite dimensional nondegenerate projective space $P$ such that $\mathcal{L}(P)$ belongs to $\mathcal{V}$. If every line of $P$ has at least six points, then we can take $Q$ to be any projective plane in $P$, and $\mathcal{L}(Q) \in \mathcal{V}$ since $\mathcal{L}(Q)$ is a sublattice of $\mathcal{L}(P)$. If the lines of $P$ have fewer than six points, then by Theorem 3.5, $\mathcal{L}(P)$ is isomorphic to $\mathcal{L}(V, F)$, where $V$ is an infinite dimensional vector space and $F$ is a field with 2, 3 or 4 elements. Let $K$ be a finite field extension of $F$ with $|K| \geq 5$, and let $W$ be a three dimensional vector space over $K$. As in Section 3.2, $\mathcal{L}(W, K)$ determines a projective plane $Q$, such that $\mathcal{L}(W, K) \cong \mathcal{L}(Q)$. Since $F$ is embedded in $K$, $\mathcal{L}(W, K)$ is a sublattice of $\mathcal{L}(W, F)$, and since $V$ is infinite dimensional, $\mathcal{L}(W, F)$ is isomorphic to a sublattice of $\mathcal{L}(V, F)$. It follows that $\mathcal{L}(Q)$ can be embedded in $\mathcal{L}(P)$, and is therefore a member of $\mathcal{V}$. By construction, every line of $Q$ has at least $|K|+1 \geq 6$ points.

With the help of these three statements and Lemma 6.19 we can now produce the desired contradiction. Let $Q$ be the projective plane in the last statement. By Statement 2, there exists an infinite dimensional nondegenerate projective space $P$ such that $\mathcal{L}(P) \in \mathcal{V}$ and $\mathcal{L}(Q)$ is isomorphic to a sublattice of $\mathcal{L}(P)$.
By the coordinatization theorem (see Section 3.2), there exists a vector space $V$ over a division ring $D$ such that $\mathcal{L}(P)$ is isomorphic to $\mathcal{L}(V, D)$. So $\mathcal{L}(Q)$ is embedded in $\mathcal{L}(V, D) \subseteq \mathcal{S}(V)$ (= the lattice of subgroups of the abelian group $V$), which implies that the arguesian identity holds in $\mathcal{L}(Q)$. This means $Q$ can be coordinatized in the standard way by choosing any line $l$ in $Q$ and two distinct points $a_{0}, a_{\infty}$ on $l$. The division ring structure is then defined on the set $K=l-\{a_{\infty}\}$. Here we require only the definition of the addition operation $\oplus$ ([GLT] p.208): Choose two distinct points $p$ and $q$ of $Q$ that are collinear with $a_{0}$ but are not on $l$. Given $x, y \in K$, let
$$
\begin{array}{ll}
\text{(1)} & u=(x+p)(q+a_{\infty}), \quad v=(y+q)(p+a_{\infty}) \\
\text{(2)} & x \oplus y=(u+v) l=(u+v)(a_{0}+a_{\infty})
\end{array}
$$
The operation $\oplus$ is independent of the choice of $p$ and $q$, and $(K, \oplus, a_{0})$ is an abelian group. Any permutation of the points of $l$ induces an automorphism of the quotient $l / 0 \subseteq \mathcal{L}(Q)$. Since $l$ has at least six points, we can find $x, y \in K-\{a_{0}\}$ such that $x \oplus y \neq a_{0}$. Let $\alpha$ be an automorphism of $l / 0$ that keeps $a_{0}, a_{\infty}, x, y$ fixed and maps $x \oplus y$ to a point $z \neq x \oplus y$. By Lemma 6.19 (ii) there exists an extension $L$ of $\mathcal{L}(Q)$ such that $\alpha$ extends to an automorphism $\beta$ of $L$. By Statement 2 and the same argument as above, there exists an abelian group $A$ and an embedding $f: L \hookrightarrow \mathcal{S}(A)$. We claim that $f(x \oplus y)$ is the subgroup of $A$ which satisfies:
$$
\text{(3)}\quad a \in f(x \oplus y) \quad \text{iff} \quad \text{for some } b \in f(x),\ c \in f(y) \text{ we have } a \ominus b,\ a \ominus c \in f(a_{\infty}) \text{ and } a \ominus b \ominus c \in f(a_{0})
$$
(where $a \ominus b=a \oplus(\ominus b)$ and $\ominus b$ is the additive inverse of $b$). Indeed, let $u, v$ be as in (1), and assume $a \in f(x \oplus y)$. Since $x \oplus y \leq u+v$, we have $f(x \oplus y) \subseteq f(u+v)=f(u)+f(v)$ $(=\{r \oplus s: r \in f(u), s \in f(v)\})$. So there exists $d \in A$ satisfying
$$
d \in f(u) \quad \text{and} \quad a \ominus d \in f(v).
$$
Since $u \leq x+p$ and $v \leq y+q$, it follows that there exist $b, c \in A$ such that
$$
b \in f(x), \quad d \ominus b \in f(p) \quad \text{and} \quad c \in f(y), \quad a \ominus d \ominus c \in f(q).
$$
$a \ominus b$ belongs to $f(x \oplus y)+f(x)$ and $f(v)+f(p)$, and since
$$
((x \oplus y)+x)(v+p) \leq l(v+p) \leq a_{\infty}
$$
we have that $a \ominus b \in f(a_{\infty})$. Similarly $a \ominus c \in f(a_{\infty})$ and $a \ominus b \ominus c \in f(a_{0})$. Conversely, assume the right hand side of (3) holds. Since $a_{0} \leq p+q$, there exists $d \in A$ such that
$$
d \in f(p) \quad \text{and} \quad a \ominus b \ominus c \ominus d \in f(q).
$$
Now (1) implies $b \oplus d \in f(u)$ and $a \ominus b \ominus d \in f(v)$, and from (2) we get $a \in f(x \oplus y)$. Also, in the above argument we can replace $f$ by $f^{\prime}$, which we define by
$$
f^{\prime}(t)=f(\beta(t)) \quad \text{for all} \quad t \in L.
$$
Since $f$ and $f^{\prime}$ agree on $a_{0}, a_{\infty}, x$ and $y$, it follows from (3) that
$$
f(x \oplus y)=f^{\prime}(x \oplus y)=f(z).
$$
This is a contradiction, since $f$ is an embedding and $z \neq x \oplus y$.

### The Day–Ježek Theorem

In this section we will prove the result of Day and Ježek [84]: if $\mathcal{V}$ is a lattice variety that has the amalgamation property and contains the pentagon $N$, then $\mathcal{V}$ must be the variety $\mathcal{L}$ of all lattices. Together with the preceding result, this implies that $\mathcal{T}, \mathcal{D}$ and $\mathcal{L}$ are the only varieties that have the amalgamation property. The proof makes use of the result that $\mathcal{L}$ is generated by the class $\mathcal{B}_{F}$ of all finite bounded lattices (see Section 2.2). Partial results in this direction had previously been obtained by Berman [81], who showed that if a variety has the amalgamation property and includes $N$, then it must also include $L_{3}$, $L_{6}, L_{7}, L_{9}, L_{11}$ and $L_{15}$.

**The notion of $A$-decomposability of a finite lattice.** This concept was introduced by Slavik [83]. Let $L$ be a finite lattice with $L_{1}$ and $L_{2}$ proper sublattices of $L$ and $L=L_{1} \cup L_{2}$. $L$ is said to be $A$-decomposable by means of $L_{1}$ and $L_{2}$ (written $L=A(L_{1}, L_{2})$) if whenever $(f_{1}, f_{2}, L_{3})$ is a completion of $L_{1}$ and $L_{2}$, then $f=f_{1} \cup f_{2}$ is an embedding of $L$ into $L_{3}$. So in a sense $A(L_{1}, L_{2})$ is the smallest completion of $L_{1}$ and $L_{2}$. In particular, if we let $j_{i}$ denote the inclusion map of $L_{1} \cap L_{2}$ into $L_{i}$ $(i=1,2)$, then $A(L_{1}, L_{2})$ is by definition embeddable into any amalgamation of the diagram $(L_{1} \cap L_{2}, j_{1}, L_{1}, j_{2}, L_{2})$. Hence if $\mathcal{V}$ is a variety having the amalgamation property, and $L_{1}, L_{2} \in \mathcal{V}$, then $A(L_{1}, L_{2}) \in \mathcal{V}$. This, together with the fact that $A$-decomposability can be characterized by three easily verifiable conditions on $L_{1}$ and $L_{2}$, makes it a very useful concept. For any element $z \in L$ we define $C(z)$ to be the set of all covers of $z$, and $C^{d}(z)$ the set of all dual covers of $z$.

Lemma 6.21 (Day and Ježek [84]). Let $L=L_{1} \cup L_{2}$ be a finite lattice with $L_{1}$ and $L_{2}$ proper sublattices of $L$. Then $L$ is $A$-decomposable by means of $L_{1}$ and $L_{2}$ if and only if $L_{1}$ and $L_{2}$ also satisfy:

(1) $x \in L_{i}, y \in L_{j}$ and $x \leq y$ imply $x \leq z \leq y$ for some $z \in L_{1} \cap L_{2}$ $(\{i, j\}=\{1,2\})$;

(2) $z \in L_{1} \cap L_{2}$ implies $C^{d}(z) \subseteq L_{1}$ or $C^{d}(z) \subseteq L_{2}$;

(3) $z \in L_{1} \cap L_{2}$ implies $C(z) \subseteq L_{1}$ or $C(z) \subseteq L_{2}$.

Proof. Suppose $L=A(L_{1}, L_{2})$ and let $(f_{1}, f_{2}, \mathcal{I}(L_{1}, L_{2}))$ be the ideal completion of $L_{1}$ and $L_{2}$ (see Lemma 6.17). By definition the map
$$
f=f_{1} \cup f_{2}: L \rightarrow \mathcal{I}(L_{1}, L_{2}) \quad \text{given by} \quad f(x)=(x]
$$
is a lattice embedding. Let $x \in L_{i}, y \in L_{j}$ $(\{i, j\}=\{1,2\})$ and $x \leq y$. Then $f(x)=(x] \subseteq(y]=f(y)$, hence $x \leq_{P} y$, which implies that there exists a $z \in L_{1} \cap L_{2}$ such that $x \leq z \leq y$. Therefore (1) holds. Suppose to the contrary that (2) fails.
Then there exists $z \in L_{1} \cap L_{2}$ with dual covers $x \in L_{1}-L_{2}$ and $y \in L_{2}-L_{1}$. Clearly $z=x+y$ so $f(z)=(z]=(x]+(y]$. But $(x] \cup(y]$ is already an $(L_{1}, L_{2})$-ideal, so we should have $(x] \cup(y]=(x]+(y]$. This is a contradiction, since $(z] \neq(x] \cup(y]$. Dually, the existence of the filter completion implies that (3) holds.

Conversely, suppose that (1), (2) and (3) hold, and let $(f_{1}, f_{2}, L_{3})$ be any completion of $L_{1}$ and $L_{2}$. We must show that $f=f_{1} \cup f_{2}$ is an embedding of $L$ into $L_{3}$. Firstly, $x<y$ implies $f(x)<f(y)$, since if $x, y \in L_{i}$ this follows from the fact that $f_{i}$ is an embedding, and if $x \in L_{i}-L_{j}$, $y \in L_{j}-L_{i}$ then by (1) there exists a $z \in L_{1} \cap L_{2}$ such that $x \leq z \leq y$. Because $x, y \notin L_{1} \cap L_{2}$, we must have $x<z<y$, giving $f(x)<f(z)<f(y)$. This shows that $f$ is one-one and order preserving. To see that $f$ is in fact a homomorphism requires a bit more work. Since $f \mid L_{i}=f_{i}$ is a homomorphism, we only have to consider $x \in L_{1}-L_{2}$, $y \in L_{2}-L_{1}$, and show that $f(x+y)=f(x)+f(y)$ ($f(x y)=f(x) f(y)$ follows by duality). We define two maps $\mu_{i}: L \rightarrow L_{i}$ $(i=1,2)$ by $\mu_{i}(u)=\sum\{v \in L_{i}: v \leq u\}$. Note that the join is taken in $L$, so $\sum \emptyset=0_{L}$. Also, clearly $\mu_{i}$ is order-preserving, and $\mu_{i} \mid L_{i}$ is the identity map on $L_{i}$. Define two increasing sequences of elements $x_{n} \in L_{1}$, $y_{n} \in L_{2}$ by $x_{0}=x$, $y_{0}=y$ and
$$
x_{n+1}=x_{n}+\mu_{1}(y_{n}), \quad y_{n+1}=y_{n}+\mu_{2}(x_{n}).
$$
By induction one can easily see that $x_{n}+y_{n}=x+y$ and $f(x_{n})+f(y_{n})=f(x)+f(y)$ for all $n \in \omega$. We show that for some $n=k$ we have $x_{k}=x+y$ or $y_{k}=x+y$; then $f(x+y)=f(x_{k})+f(y_{k})=f(x)+f(y)$ as required. Suppose $x_{n}, y_{n}<x+y$ for all $n$. Since $L$ is finite, this implies that there exists a $k$ such that $x_{k+1}=x_{k}$ and $y_{k+1}=y_{k}$, so by definition $\mu_{1}(y_{k}) \leq x_{k}$ and $\mu_{2}(x_{k}) \leq y_{k}$. We always have $\mu_{1}(y_{k}) \leq y_{k}$ and $\mu_{2}(x_{k}) \leq x_{k}$, hence $\mu_{2}(x_{k}), \mu_{1}(y_{k}) \leq x_{k} y_{k}$. If $x_{k} y_{k} \in L_{1}$ then $x_{k} y_{k} \leq \mu_{1}(y_{k})$, so we have $\mu_{1}(y_{k})=x_{k} y_{k} \in L_{1}$. Since $y_{k} \in L_{2}$, condition (1) implies that there exists $z \in L_{1} \cap L_{2}$ such that $\mu_{1}(y_{k}) \leq z \leq y_{k}$. But then $z \leq \mu_{1}(y_{k})$, so $z=\mu_{1}(y_{k})=x_{k} y_{k} \in L_{1} \cap L_{2}$. Similarly, if $x_{k} y_{k} \in L_{2}$ then we also get $x_{k} y_{k} \in L_{1} \cap L_{2}$, hence we actually have
$$
\mu_{2}(x_{k})=x_{k} y_{k}=\mu_{1}(y_{k}) \in L_{1} \cap L_{2}.
$$
However $x_{k} \notin L_{1} \cap L_{2}$, else $\mu_{2}(x_{k})=x_{k}$, which gives $y_{k+1}=y_{k}+x_{k}=x+y$, contrary to the initial assumption that $y_{n}<x+y$ for all $n$. Similarly $y_{k} \notin L_{1} \cap L_{2}$, so there exist covers $u, v$ of $x_{k} y_{k}$ such that $u \leq x_{k}$ and $v \leq y_{k}$. But $\mu_{2}(x_{k}) \prec u \leq x_{k}$ implies $u \in L_{1}-L_{2}$ (else $u \leq \mu_{2}(x_{k})$) and $\mu_{1}(y_{k}) \prec v \leq y_{k}$ implies $v \in L_{2}-L_{1}$ (else $v \leq \mu_{1}(y_{k})$).
Since this contradicts condition (3), it follows that $x_{n}=x+y$ or $y_{n}=x+y$ for some $n$.

Two easy consequences of the above characterization are:

Corollary 6.22 (i) If $L=A(L_{1}, L_{2})$ and, for $i=1,2$, $L_{i}$ is a sublattice of $L_{i}^{\prime}$, which in turn is a proper sublattice of $L$, then $L=A(L_{1}^{\prime}, L_{2}^{\prime})$.

(ii) If $L=[a) \cup(b]$ for some $a, b \in L$ with $0_{L} \leq a \leq b \leq 1_{L}$ then $L=A([a),(b])$.

**$\mathcal{L}$ is the only nonmodular variety that has the amalgamation property.** We also need the following lemma about the lattice $L[I]$ constructed by Day [70] (see Section 2.2).

Lemma 6.23 Let $I=u / v$ be a quotient in a lattice $L$, $\theta \in \operatorname{Con}(L)$ and put $J=(u / \theta) /(v / \theta)$. If $I=\bigcup J$, then $L[I]$ is a sublattice of the direct product of $L$ and $L / \theta[J]$.

Proof. Recall that if $\psi, \phi$ are two congruences on an algebra $A$ such that $\psi \cap \phi$ is the zero of $\operatorname{Con}(A)$, then $A$ is a subdirect product of $A / \psi$ and $A / \phi$. So we need only define $\psi$ and $\phi$ on $L[I]$ in such a way that $L[I] / \psi$ is a sublattice of $L$, $L[I] / \phi$ is a sublattice of $L / \theta[J]$, and $\psi \cap \phi=\mathbf{0}$. Let $\psi=\operatorname{ker} \gamma$, where $\gamma: L[I] \rightarrow L$ is the natural epimorphism, then $L[I] / \psi$ is of course isomorphic to $L$. Define $\phi$ by
$$
x \phi y \quad \text{if and only if} \quad \gamma(x)\,\theta\,\gamma(y) \quad \text{and} \quad \begin{aligned} & x, y \in L-I \quad \text{or} \\ & x, y \in I \times\{i\} \quad (i=0,1). \end{aligned}
$$
With this definition $\phi$ is a congruence, since $\gamma$ is a homomorphism and $\theta \in \operatorname{Con}(L)$. Moreover, since $I=\bigcup J$,
$$
\begin{aligned}
& x \in L-I \quad \text{implies} \quad x / \phi=x / \theta \quad \text{and} \\
& (x, i) \in I \times\{i\} \quad \text{implies} \quad(x, i) / \phi=(x / \theta, i) \quad(i=0,1)
\end{aligned}
$$
whence $L[I] / \phi$ is a subset of $L / \theta[J]$. By examining several cases of meets and joins in $L[I] / \phi$, one sees that it is in fact a sublattice of $L / \theta[J]$. Suppose now that $x, y \in L[I]$ and $x(\psi \cap \phi) y$. Then $\gamma(x)=\gamma(y)$ and $x, y \in L-I$ or $x, y \in I \times\{i\}$ ($i=0$ or $1$). In all cases it follows that $x=y$, so we have $\psi \cap \phi=\mathbf{0}$ as required.

Suppose $L$ is a finite lattice. As in Section 2.2, we let $\kappa(p)=\sum\{x \in L: x \ngeq p \text{ and } x \geq p_{*}\}$, where $p$ is any join irreducible of $L$ and $p_{*}$ is its unique dual cover. Dually we define $\lambda(m)=\prod\{x \in L: x \nleq m \text{ and } x \leq m^{*}\}$ for any meet irreducible $m \in L$.

Corollary 6.24 Let $L$ be a finite semidistributive lattice, and let $I=u / v$ be a quotient in $L$ with $0_{L} \prec v \leq u \prec 1_{L}$. Then $L[I]$ is a sublattice of $L \times N$, where $N$ denotes the pentagon.

Proof. Clearly $v$ is join irreducible and $u$ is meet irreducible. By semidistributivity, $L$ is the disjoint union of the quotients $u / v$, $u \kappa(v) / 0_{L}$, $1_{L} / v+\lambda(u)$ and $\kappa(v) / \lambda(u)$, where the last quotient might be empty if $\kappa(v) \not\geq \lambda(u)$. This defines an equivalence relation $\theta$ on $L$ with the quotients as $\theta$-classes. $\theta$ is a congruence relation since $L$ is semidistributive, and $L / \theta$ is isomorphic to a sublattice of $\mathbf{2} \times \mathbf{2}$.
Letting $J=(u / \theta) /(v / \theta)=\{u / \theta\}$ we have $\bigcup J=u / v$. Thus $L / \theta[J]$ is isomorphic to a sublattice of $N$, and the result follows from the preceding lemma.

The following crucial lemma forces larger and larger bounded lattices into any nonmodular variety that has the amalgamation property.

Lemma 6.25 Let $\mathcal{V}$ be a variety that has the amalgamation property and contains $N$. If $L \in \mathcal{B}_{F} \cap \mathcal{V}$ and $v \leq u \in L$, then $L_{i}=(L \times \mathbf{2})[(u, i) /(v, i)] \in \mathcal{B}_{F} \cap \mathcal{V}$ $(i=0,1)$.

Proof. It follows from Section 2.2 that all lattices in $\mathcal{B}$ are semidistributive, $\mathbf{2} \in \mathcal{B}_{F}$, $\mathcal{B}_{F}$ is closed under the formation of finite products and $L \in \mathcal{B}_{F}$ implies $L[I] \in \mathcal{B}_{F}$ for any quotient $I$ of $L$, so $L \in \mathcal{B}_{F}$ implies $L_{i} \in \mathcal{B}_{F}$ $(i=0,1)$. We proceed by induction on $|L|$. Assume $i=1$. If $u=1_{L}$ then $L_{1}$ is a sublattice of $L \times \mathbf{3} \in \mathcal{V}$, hence $L_{1} \in \mathcal{V}$. If $u<1_{L}$ then there is a co-atom $w$ such that $u \leq w \prec 1_{L}$, and $L=(w] \cup[\lambda(w))$. Therefore $L \times \mathbf{2}$ can be pictured as in Figure 6.7 (i). Let $I=(w, 1) /(0,1)$, then $(L \times \mathbf{2})[I]$ is a sublattice of $(L \times \mathbf{2}) \times N$ (by Corollary 6.24), hence a lattice in $\mathcal{V}$. The congruence classes modulo the induced homomorphism $h:(L \times \mathbf{2})[I] \rightarrow N$ produce the diagram in Figure 6.7 (ii). Since $B_{0}$ is one of these congruence classes, we can double it, again using Day's construction, to obtain a lattice $L^{\prime}$ as in Figure 6.8. Clearly $L^{\prime}=(L \times \mathbf{2})[I][B_{0}] \in \mathcal{V}$ by Lemma 6.23. Let $J$ be the quotient $u / v$ considered as lying in the congruence class labeled $B$ in Figure 6.8, and consider the lattice $L^{\prime}[J]=A \cup B_{0} \cup B_{1} \cup C \cup D \cup B[J]$. If we define $C_{1}=A \cup B_{0} \cup B_{1} \cup C \cup D$ and $C_{2}=B_{0} \cup B_{1} \cup B[J]$, then we have $L^{\prime}[J]=A(C_{1}, C_{2})$. Now $C_{1}=(L \times \mathbf{2})[I] \in \mathcal{V}$ and $C_{2}=A(B_{0} \cup B[J], B_{1} \cup B[J])$, hence $C_{2} \in \mathcal{V}$ if and only if these two lattices belong to $\mathcal{V}$. But $B=w / 0$, so $|B|<|L|$, and $B_{i} \cup B[J]=(B \times \mathbf{2})[(u, i) /(v, i)]$. By induction then $C_{2} \in \mathcal{V}$ and this gives $L^{\prime}[J] \in \mathcal{V}$. Since $L_{1}$ is isomorphic to $A \cup B[J] \cup C \cup D$, which is a sublattice of $L^{\prime}[J]$, $L_{1}$ is also in $\mathcal{V}$. The proof for $i=0$ follows by symmetry.

Theorem 6.26 (Day and Ježek [84]). The only lattice varieties that have the amalgamation property are the variety $\mathcal{T}$ of all trivial lattices, the variety $\mathcal{D}$ of all distributive lattices, and the variety $\mathcal{L}$ of all lattices.

Proof. If $\mathcal{V}$ is a nondistributive lattice variety that has the amalgamation property, then $N \in \mathcal{V}$ by Theorem 6.20. The preceding lemma implies that for every $L \in \mathcal{B}_{F}$ and any $v \leq u \in L$, if $L \in \mathcal{V}$ then $L[u / v] \in \mathcal{V}$, since $L[u / v]$ is a sublattice of $(L \times \mathbf{2})[(u, i) /(v, i)]$. It follows from Theorem 2.38 that $\mathcal{B}_{F} \subseteq \mathcal{V}$, and since the finite bounded lattices generate all lattices (Theorem 2.36), we have $\mathcal{V}=\mathcal{L}$.
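Lemma 6.21 turns $A$-decomposability into three conditions that can be checked mechanically on a finite lattice. As a purely illustrative aside (our addition, not part of the original text), the following Python sketch tests conditions (1)-(3) for a finite lattice presented by its carrier and its order relation; all names (`leq`, `up_covers`, `is_A_decomposable`) are ours, and the sketch assumes the order relation is given as a reflexive, transitive set of pairs and that $L_{1}$, $L_{2}$ really are proper sublattices with $L=L_{1} \cup L_{2}$.

```python
# Illustrative check of conditions (1)-(3) of Lemma 6.21 for a finite lattice
# L = L1 ∪ L2.  `elems` is the carrier of L, `order` is a set of pairs (x, y)
# meaning x <= y (reflexive pairs included), and L1, L2 are Python sets.

def leq(order, x, y):
    return (x, y) in order

def up_covers(order, elems, z):
    """Covers of z: minimal elements strictly above z."""
    ups = [y for y in elems if y != z and leq(order, z, y)]
    return {y for y in ups
            if not any(w != y and leq(order, w, y) for w in ups)}

def dual_covers(order, elems, z):
    """Dual covers of z: maximal elements strictly below z."""
    downs = [x for x in elems if x != z and leq(order, x, z)]
    return {x for x in downs
            if not any(w != x and leq(order, x, w) for w in downs)}

def is_A_decomposable(elems, order, L1, L2):
    common = L1 & L2
    # (1): x in Li, y in Lj and x <= y imply x <= z <= y for some z in L1 ∩ L2.
    for Li, Lj in ((L1, L2), (L2, L1)):
        for x in Li:
            for y in Lj:
                if leq(order, x, y) and not any(
                        leq(order, x, z) and leq(order, z, y) for z in common):
                    return False
    # (2) and (3): dual covers / covers of each z in L1 ∩ L2 lie wholly in L1 or in L2.
    for z in common:
        if not (dual_covers(order, elems, z) <= L1 or dual_covers(order, elems, z) <= L2):
            return False
        if not (up_covers(order, elems, z) <= L1 or up_covers(order, elems, z) <= L2):
            return False
    return True
```

Applied to the example following Theorem 6.18, the check fails at condition (2): in $M_{3}(a, b, c)$ the dual covers of $a+b$ are $a, b, c$, and these lie neither wholly in $M_{2}(a, b)$ nor wholly in $M_{2}(a, c)$, which is consistent with the fact that $M_{3}$ need not embed into an amalgamation of that diagram.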
### $\operatorname{Amal}(\mathcal{V})$ for some finitely generated lattice varieties

Let $\mathcal{V}$ be a variety which fails to satisfy the amalgamation property. In this case $\operatorname{Amal}(\mathcal{V})$ is a proper subclass of $\mathcal{V}$, and it is interesting to find out whether or not $\operatorname{Amal}(\mathcal{V})$ is an elementary class. In this section we outline the proofs of two results in this direction. They concern the amalgamation classes of finitely generated lattice varieties, and they contrast sharply with each other: if $\mathcal{V}$ is a finitely generated nondistributive modular lattice variety then $\operatorname{Amal}(\mathcal{V})$ is not elementary; on the other hand if $\mathcal{V}$ is the variety generated by the pentagon then $\operatorname{Amal}(\mathcal{V})$ is an elementary class determined by Horn sentences.

**Finitely generated modular varieties.** We begin with the following:

Definition 6.27 (Albert and Burris [88]). (i) Let $\mathcal{V}$ be a variety, and suppose that the diagram $(A, f, B, g, C)$ cannot be amalgamated in $\mathcal{V}$. An obstruction is any subalgebra $C^{\prime}$ of $C$ such that the diagram $(A^{\prime}, f^{\prime}, B, g^{\prime}, C^{\prime})$ cannot be amalgamated in $\mathcal{V}$, where $A^{\prime}=g^{-1}(C^{\prime})$ and $f^{\prime}$ and $g^{\prime}$ are the restrictions of $f$ and $g$ to $A^{\prime}$.

(ii) Let $\mathcal{V}$ be a locally finite variety. $\operatorname{Amal}(\mathcal{V})$ is said to have the bounded obstruction property with respect to $\mathcal{V}$ if for every $k \in \omega$ there exists an $n \in \omega$ such that the following holds: If $C \in \operatorname{Amal}(\mathcal{V})$, $|B|<k$ and the diagram $(A, f, B, g, C)$ cannot be amalgamated in $\mathcal{V}$, then there is an obstruction $C^{\prime} \leq C$ such that $|C^{\prime}|<n$.

Theorem 6.28 (Albert and Burris [88]). Let $\mathcal{V}$ be a finitely generated variety of finite type. Then $\operatorname{Amal}(\mathcal{V})$ is elementary if and only if it has the bounded obstruction property.

Using the preceding theorem, Bergman [89] proved the following result.

Theorem 6.29 Let $\mathcal{V}$ be a finitely generated nondistributive modular variety. Then $\operatorname{Amal}(\mathcal{V})$ is not elementary.

Outline of proof. Let $\mathcal{V}$ be as in the theorem and let $L$ be a finite modular nondistributive lattice which generates $\mathcal{V}$. If $S$ is a subdirectly irreducible member of $\mathcal{V}$ then $|S| \leq|L|$, since $S$ is an image of a sublattice of $L$. Thus $S$ is simple, and since $S$ has a diamond as a sublattice, we have $|S| \geq 5$. Pick $S$ with largest possible cardinality. Let $z$ be the bottom element of $S$ and let $a \in S$ such that $a$ covers $z$. Define $B=S \times \mathbf{2}$ and let $f: \mathbf{2} \hookrightarrow B$ be an embedding with $f(\mathbf{2})=\{(z, 0),(a, 1)\}$. Then there is $C \in \operatorname{Amal}(\mathcal{V})$ and embeddings $g_{n}: \mathbf{2} \hookrightarrow C$ such that for each $n \in \omega$, the diagram $(\mathbf{2}, f, B, g_{n}, C)$ cannot be amalgamated in $\mathcal{V}$, and every obstruction has cardinality at least $n$. (For the details the reader is referred to Bergman [89].) Thus by Theorem 6.28, $\operatorname{Amal}(\mathcal{V})$ is not elementary.

**The variety generated by the pentagon.** As before, let $\mathcal{N}$ be the variety generated by the pentagon. The following result appears in Bruyns, Naturman and Rose [a].

Theorem 6.30 $\operatorname{Amal}(\mathcal{N})$ is an elementary class.
It is closed under reduced products and therefore is determined by Horn sentences. Furthermore, if $B$ is an image of $A \in \operatorname{Amal}(\mathcal{N})$ and $B$ is a subdirect power of the pentagon then $B \in \operatorname{Amal}(\mathcal{N})$.

The full proof of the above theorem is too long to give here. It requires several definitions and intermediate results. We will list some of them first, and then outline the proof of the theorem. For more details the reader is referred to Bruyns, Naturman and Rose [a].

Definition 6.31 (i) Let $\theta$ be a congruence on a lattice $A$. We shall say that $\theta$ is a 2-congruence if $A / \theta \cong \mathbf{2}$.

(ii) Let $A$ be a subdirect product of lattices $\{A_{i}: i \in I\}$ and let $B=\prod_{i \in I} A_{i}$. A subdirect representation $A \leq B$ is said to be regular if for any kernels $\theta_{i}$ and $\theta_{j}$ of two distinct projections we have $\left.\theta_{i}\right|_{A} \neq\left.\theta_{j}\right|_{A}$.

Theorem 6.32 Let $A$ be a nontrivial member of $\mathcal{N}$. The following are equivalent:

(i) $A \in \operatorname{Amal}(\mathcal{N})$.

(ii) If $A \leq B \in \mathcal{N}$, then every 2-congruence on $A$ can be extended to a 2-congruence on $B$, and $\mathbf{3}$ is not an image of $A$.

(iii) $A$ is a subdirect power of $N$, and for any regular subdirect representation $f: A \hookrightarrow N^{I}$ and any homomorphism $g: A \rightarrow N$ there is a homomorphism $h: N^{I} \rightarrow N$ such that $g=h f$.

(iv) $A$ is a subdirect power of $N$, $\mathbf{3}$ is not an image of $A$, and if $A \leq N^{I}$ is a regular subdirect representation, then every 2-congruence on $A$ can be extended to a 2-congruence of $N^{I}$.

Proposition 6.33 (i) Let $B \in \mathcal{N}$, and assume that for any distinct 2-congruences $\theta_{0}$ and $\theta_{1}$ on $B$ there is $A \in \operatorname{Amal}(\mathcal{N})$ and embeddings $f_{0}, f_{1}: A \hookrightarrow B$ such that $\left.\theta_{0}\right|_{f_{0}(A)}$ and $\left.\theta_{1}\right|_{f_{1}(A)}$ are two distinct congruences on $A$. Then $B \in \operatorname{Amal}(\mathcal{N})$.

(ii) Let $B$ be an image of $A \in \operatorname{Amal}(\mathcal{N})$ and assume that $B$ is a subdirect power of $N$. If $B \leq N^{I}$ is a regular subdirect representation, then every 2-congruence on $B$ can be extended to a 2-congruence on $N^{I}$.

Outline of the proof of Theorem 6.30. We first consider the last statement of the theorem. Let $B$ be an image of $A \in \operatorname{Amal}(\mathcal{N})$. Since $\mathbf{3}$ is not an image of $A$ we have that $\mathbf{3}$ is not an image of $B$. Thus $B \in \operatorname{Amal}(\mathcal{N})$ by Proposition 6.33 (ii) and Theorem 6.32 (iv).

Next we consider direct products. Let $A=\prod_{\gamma \in \alpha} A_{\gamma}$ be a direct product of members of $\operatorname{Amal}(\mathcal{N})$. Without loss of generality we may assume that each $A_{\gamma}$ is nontrivial. We use Proposition 6.33 (i) to prove that $A \in \operatorname{Amal}(\mathcal{N})$. Thus we have to show that for any distinct 2-congruences $\theta_{0}, \theta_{1}$ on $A$ there is a $\gamma \in \alpha$ and embeddings $f_{0}, f_{1}: A_{\gamma} \hookrightarrow A$ such that $\left.\theta_{0}\right|_{f_{0}(A_{\gamma})}$ and $\left.\theta_{1}\right|_{f_{1}(A_{\gamma})}$ are two distinct congruences on $A$. Now if $\theta_{0}$ and $\theta_{1}$ are distinct 2-congruences on $A$, then $A /(\theta_{0} \cap \theta_{1})$ is isomorphic to either $\mathbf{3}$ or $\mathbf{2} \times \mathbf{2}$.
In either case we have $u, v, z \in A$ with $u>v>z$ such that
$$
(u, v) \in \theta_{0}, \quad(v, z) \notin \theta_{0} \quad \text{and} \quad(u, v) \notin \theta_{1}, \quad(v, z) \in \theta_{1}.
$$
By Jónsson's Lemma there are congruences $\phi_{0}, \phi_{1}$ on $A$ induced by ultrafilters $\mathcal{D}_{0}$ and $\mathcal{D}_{1}$ on $\alpha$ such that $\phi_{0} \subseteq \theta_{0}$ and $\phi_{1} \subseteq \theta_{1}$. Defining
$$
R=\{\beta \in \alpha: u_{\beta}>v_{\beta}\}, \quad S=\{\beta \in \alpha: v_{\beta}>z_{\beta}\},
$$
we have $R \in \mathcal{D}_{0}$ and $S \in \mathcal{D}_{1}$. There are three possible cases:

(i) For some $\gamma \in \alpha$ the set $\{\gamma\}$ belongs to both $\mathcal{D}_{0}$ and $\mathcal{D}_{1}$.

(ii) For each $\gamma \in \alpha$ the set $\{\gamma\}$ belongs to neither $\mathcal{D}_{0}$ nor $\mathcal{D}_{1}$.

(iii) There exists a $\gamma \in \alpha$ such that $\{\gamma\}$ belongs to one ultrafilter and not the other.

If (i) holds then we can choose $u, v, z$ so that $R=S=\{\gamma\}$, for some $\gamma \in \alpha$. For $\beta \in \alpha$ with $\beta \neq \gamma$ let $a_{\beta}$ be an arbitrary but fixed element of $A_{\beta}$. In this case we can have $f_{0}=f_{1}$ so that for $i \in\{0,1\}$ the embedding $f_{i}: A_{\gamma} \hookrightarrow A$ is defined as follows: For $x \in A_{\gamma}$ the $\gamma$th coordinate of $f_{i}(x) \in A$ is $x$, and if $\beta \in \alpha$ with $\beta \neq \gamma$ then the $\beta$th coordinate of $f_{i}(x)$ is $a_{\beta}$.

Suppose now that (ii) holds. Pick any $\gamma \in \alpha$. We have $(R-\{\gamma\}) \in \mathcal{D}_{0}$ and $(S-\{\gamma\}) \in \mathcal{D}_{1}$. First observe that since $A_{\gamma}$ is nontrivial it is a subdirect power of $N$ (Theorem 6.32 (iii)). Thus there are at least two distinct epimorphisms $r, s: A_{\gamma} \rightarrow \mathbf{2}=\{0,1\}$. The embedding $f_{0}: A_{\gamma} \hookrightarrow A$ is defined as follows: For $x \in A_{\gamma}$ the $\gamma$th coordinate of $f_{0}(x) \in A$ is $x$. If $\beta \in(R-\{\gamma\})$ then the $\beta$th coordinate of $f_{0}(x)$ is $u_{\beta}$ if $r(x)=1$ and $v_{\beta}$ if $r(x)=0$. For $\beta \in \alpha$ with $\beta \notin(R \cup\{\gamma\})$ the $\beta$th coordinate is an arbitrary but fixed element of $A_{\beta}$. The embedding $f_{1}: A_{\gamma} \hookrightarrow A$ is defined as follows: For $x \in A_{\gamma}$ the $\gamma$th coordinate of $f_{1}(x) \in A$ is $x$. If $\beta \in(S-\{\gamma\})$ then the $\beta$th coordinate of $f_{1}(x)$ is $v_{\beta}$ if $s(x)=1$ and $z_{\beta}$ if $s(x)=0$. For $\beta \notin(S \cup\{\gamma\})$ the $\beta$th coordinate is an arbitrary but fixed element of $A_{\beta}$.

The case (iii) is a combination of (i) and (ii). For instance if $\{\gamma\} \in \mathcal{D}_{0}$ and $\{\gamma\} \notin \mathcal{D}_{1}$, then $(S-\{\gamma\}) \in \mathcal{D}_{1}$, hence $f_{0}$ is defined as in case (i) and $f_{1}$ is as in case (ii). Thus we have shown that every direct product of members of $\operatorname{Amal}(\mathcal{N})$ belongs to $\operatorname{Amal}(\mathcal{N})$.

Now if $B$ is a reduced product of members of $\operatorname{Amal}(\mathcal{N})$ then it must be a subdirect power of $N$ (see Bruyns, Naturman and Rose [a] Lemma 0.1.9). On the other hand $B$ is an image of a product $A$ of members of $\operatorname{Amal}(\mathcal{N})$. Since $A \in \operatorname{Amal}(\mathcal{N})$ it follows that $B \in \operatorname{Amal}(\mathcal{N})$.
In particular every ultraproduct of members of $\operatorname{Amal}(\mathcal{N})$ belongs to $\operatorname{Amal}(\mathcal{N})$, so that $\operatorname{Amal}(\mathcal{N})$ is elementary (see Yasuhara [74]). It is determined by Horn sentences since it is closed under reduced products (Chang and Keisler [73]). ## Bibliography H. Albert and S. Burris [88] Bounded obstructions, model companions and amalgamation bases, Z. Math. Logik. Grundlag. Math. 34 (1988), 109-115. R. BAER [52] "Linear Algebra and Projective Geometry," Academic Press, New York (1952). K. BAKER [69] Equational classes of modular lattices, Pacific J. Math. 28 (1969), 9-15. [74] Primitive satisfaction and equational problems for lattices and other algebras, Trans. Amer. Math. Soc. 190 (1974), 125-150. [77] Finite equational bases for finite algebras in congruence distributive equational classes, Advances in Mathematics 24 (1977), 207-243. [77'] Some non-finitely based varieties of lattices, Colloq. Math. 29 (1977), 53-59. C. Bergman [85] Amalgamation classes of some distributive varieties, Algebra Universalis 20 (1985), $143-166$. [89] Non-axiomatizability of the amalgamation class of modular lattice varieties, Order (1989), 49-58. ## J. BERMAN [81] Interval lattices and the amalgamation property, Algebra Universalis 12 (1981), 360375, MR 82k: 06007. G. BIRKHOFF [35] On the structure of abstract algebras, Proc. Camb. Phil. Soc. 31 (1935), 433-454. [35'] Abstract linear dependence and lattices, Amer. J. Math. 57 (1935), 800-804. [44] Subdirect unions in universal algebra, Bull. Amer. Math. Soc. 50 (1944), 764-768. P. Bruyns, C. Naturman and H. Rose [a] Amalgamation in the pentagon varieties, Algebra Universalis (to appear). S. Burris and H. P. Sankappanavar [81] "A Course in Universal Algebra," Springer-Verlag, New York (1981). C. Chang and H. J. Keisler [73] "Model Theory," North-Holland Publ. Co., Amsterdam (1973). P. Crawley and R. P. Dilworth [ATL] "Algebraic theory of lattices," Prentice-Hall, Englewood Cliffs, N.J. (1973). B. A. Davey, W. Poguntke and I. Rival [75] A characterization of semi-distributivity, Algebra Universalis 5 (1975), 72-75. B. A. Davey and B. Sands [77] An application of Whitman's condition to lattices with no infinite chains, Algebra Universalis 7 (1977), 171-178. A. DAY [70] A simple solution to the word problem for lattices, Canad. Math. Bull. 13 (1970), 253-254. [72] Injectivity in equational classes of algebras, Canad. J. Math. 24 (1972), 209-220. [75] Splitting algebras and a weak notion of projectivity, Algebra Universalis 5 (1975), 153-162. [77] Splitting lattices generate all lattices, Algebra Universalis 7 (1977), 163-169. [79] Characterizations of lattices that are bounded homomorphic images of sublattices of free lattices, Canad. J. Math. 31 (1979), 69-78. A. DAY AND J. JEŽEK [84] The amalgamation property for varieties of lattices, Trans. Amer. Math. Soc., Vol 2861 (1984), 251-256. A. DAY AND B. JónSSON [89] Non-Arguesian configurations and glueings of modular lattices, Algebra Universalis 26 (1989), 208-215. A. Day and D. Pickering [83] Coordinatization of Arguesian lattices, Trans. Amer. Math. Soc., Vol 2782 (1983), $507-522$. R. A. DEAN [56] Component subsets of the free lattice on $n$ generators, Proc. Amer. Math. Soc. 7 (1956), 220-226. R. Dedekind [00] Über die von drei Moduln erzeugte Dualgruppe, Math. Ann. 53 (1900), 371-403. R. P. DiLworth [50] The structure of relatively complemented lattices, Ann. of Math. (2) 51 (1950), 348-359. R. 
Frä̈sSÉ [54] Sur l'extension aux relations de quelques properiétés des ordres, Ann. Sci. École Norm. Sup. (3) 71 (1954), 363-388, MR 16-1006. T. Frayne, A. C. Morel and D. S. Scott [62] Reduced direct products, Fund. Math. 51 (1962)/63, 195-228. R. FreEse [76] Planar sublattices of FM(4), Algebra Universalis 6 (1976), 69-72. [77] The structure of modular lattices of width four, with applications to varieties of lattices, Memoirs of Amer. Math. Soc. No 1819 (1977). [79] The variety of modular lattices is not generated by its finite members, Trans. Amer. Math. Soc. 255 (1979), 277-300, MR 81g: 06003. R. Freese and J. B. Nation [78] Projective lattices, Pacific J. of Math. 75 (1978), 93-106. [85] Covers in free lattices, Trans. Amer. Math. Soc. (1) 288 (1985), 1-42. O. Frink [46] Complemented modular lattices and projective spaces of infinite dimension, Trans. Amer. Math. Soc. 60 (1946), 452-467. N. Funayama and T. NaKayama [42] On the distributivity of a lattice of lattice-congruences, Proc. Imp. Acad. Tokyo 18 (1942), 553-554. ## G. GRÄTZER [66] Equational classes of lattices, Duke Math. J. 33 (1966), 613-622. [GLT] "General lattice theory," Academic Press, New York (1978). [UA] "Universal Algebra," Second Expanded Edition, Springer Verlag, New York, Berlin, Heidelberg (1979). G. Grätzer and H. LAKSER [71] The structure of pseudocomplemented distributive lattices, II: Congruence extensions and amalgamations, Trans. Amer. Math. Soc. 156 (1971), 343-358. G. Grätzer, H. LAKSER AND B. Jónsson [73] The amalgamation property in equational classes of lattices, Pacific J. Math. 45 (1973), 507-524, MR 51-3014. M. D. Haiman [86] Arguesian lattices which are not linear, Massachusetts Inst. of Tech. preprint (1986). M. Hall and R. P. Dilworth [44] The imbedding problem for modular lattices, Ann. of Math. 45 (1944), 450-456. C. Herrmann [73] Weak (projective) radius and finite equational bases for classes of lattices, Algebra Universalis 3 (1973), 51-58. [84] On the arithmetic of projective coordinate systems, Trans. Amer. Math. Soc. (2) $\mathbf{2 8 4}$ (1984), 759-785. C. Herrmann and A. Huhn [75] Zum Wortproblem für freie Untermodulverbände, Arch. Math. 26 (1975), 449-453. D. X. Hong [70] Covering relations among lattice varieties, Thesis, Vanderbilt U. (1970). [72] Covering relations among lattice varieties, Pacific J. Math. 40 (1972), 575-603. J. M. HowIE [62] Embedding theorems with amalgamation for semigroups, Proc. London Math. Soc. (3) 12 (1962), 511-534, MR 25-2139. A. Huhn [72] Schwach distributive Verbände. I, Acta Sci. Math. (Szeged) 33 (1972), 297-305. H. HulE [76] Über die Eindeutigkeit der Lösungen algebraischer Gleichungssysteme, Journal für Reine Angew. Math. 282 (1976), 157-161, MR 53-13080. [78] Relations between the amalgamation property and algebraic equations, J. Austral. Math. Soc. Ser. A 22 (1978), 257-263, MR 58-5454. [79] Solutionally complete varieties, J. Austral. Math. Soc. Ser. A 28 (1979), 82-86, MR 81i: 08007. E. INABA [48] On primary lattices, J. Fac. Sci. Hokkaido Univ. 11 (1948), 39-107. P. JiPsen AND H. Rose [89] Absolute retracts and amalgamation in certain congruence distributive varieties, Canadian Math. Bull. 32 (1989), 309-313. B. JónsSON [53] On the representation of lattices, Math. Scand. 1 (1953), 193-206. [54] Modular lattices and Desargues theorem, Math. Scand. 2 (1954), 295-314. [56] Universal relational systems, Math. Scand. 4 (1956), 193-208, MR 20-3091. [59] Arguesian lattices of dimension $n \leq 4$, Math. Scand. 7 (1959), 133-145. 
[60] Homogeneous universal relational systems, Math. Scand. 8 (1960), 137-142, MR 23-A2328. [60'] Representations of complemented modular lattices, Trans. Amer. Math. Soc. 97 (1960), 64-94. [61] Sublattices of a free lattice, Canad. J. Math. 13 (1961), 256-264. [62] Algebraic extensions of relational systems, Math. Scand. 11 (1962), 179-205, MR 27-4777. [67] Algebras whose congruence lattices are distributive, Math. Scand. 21 (1967), 110 121, MR 38-5689. [68] Equational classes of lattices, Math. Scand. 22 (1968), 187-196. [70] Relatively free lattices, Coll. Math. 21 (1970), 191-196. [72] The class of Arguesian lattices is self-dual, Algebra Universalis 2 (1972), 396. [74] Sums of finitely based lattice varieties, Advances in Mathematics (4) 14 (1974), 454468. [77] The variety covering the variety of all modular lattices, Math. Scand. 41 (1977), $5-14$. [79] On finitely based varieties of algebras, Colloq. Math. 42 (1979), 255-261. [80] Varieties of lattices: Some open problems, Algebra Universalis 10 (1980), 335-394. [90] Amalgamation in small varieties of lattices, Jour. of Pure and Applied Algebra 68 (1990), 195-208. B. Jónsson AND J. Kiefer [62] Finite sublattices of a free lattice, Canad. J. Math. 14 (1962), 487-497. B. Jónsson and G. S. Monk [69] Representation of primary Arguesian lattices, Pac. Jour. Math. 30 (1969), 95-139. B. Jónsson and J. B. Nation [77] "A report on sublattices of a free lattice," Colloq. Math. Soc. János Bolyai, Contributions to Universal Algebra (Szeged), Vol. 17, North-Holland, Amsterdam (1977), 223-257. B. Jónsson and I. Rival [79] Lattice varieties covering the smallest non-modular lattice variety, Pacific J. Math. 82 (1979), 463-478. Y. Y. KANG [87] Joins of finitely based lattice varieties, Thesis, Vanderbilt U. (1987). N. KIMURA [57] On semigroups, Thesis, Tulane University, New Orleans (1957). E. W. Kiss, L. Márki, P. Pröhle and W. Tholen [83] Categorical, algebraic properties. Compendium on amalgamation, congruence extension, endomorphisms, residual smallness and injectivity, Studia Scientiarum Mathematicarum Hungarica 18 (1983), 19-141. S. B. KOCHEN [61] Ultraproducts in the theory of models, Ann. Math. Ser. 274 (1961), 221-261. S. R. Kogalovskĭ [65] On a theorem of Birkhoff, (Russian), Uspehi Mat. Nauk 20 (1965), 206-207. A. Kostinsky [72] Projective lattices and bounded homomorphisms, Pacific J. Math. 40 (1972), 111-119. R. L. KRUSE [73] Identities satisfied by a finite ring, J. Algebra 26 (1973), 298-318. J. G. LEE [85] Almost distributive lattice varieties, Algebra Universalis 21 (1985), 280-304. [85'] Joins of finitely based lattice varieties, Korean Math. Soc. 22 (1985), 125-133. I. V. Lvov [73] Varieties of associative rings I, II, Algebra and Logic 12 (1973), 150-167, 381-393. R. LYNDON [51] Identities in two-valued calculi, Trans. Amer. Math. Soc. 71 (1951), 457-465. [54] Identities in finite algebras, Proc. Amer. Math. Soc. 5 (1954), 8-9. F. Maeda [51] Lattice theoretic characterization of abstract geometries, J. Sci. Hiroshima Univ. Ser. A. 15 (1951), 87-96. A. I. MaL'CeV [54] On the general theory of algebraic systems, (Russian), Mat. Sb. (N.S.) (77) 35 (1954), 3-20, MR 17 . M. MAKKAI [73] A proof of Baker's finite basis theorem on equational classes generated by finite elements of congruence distributive varieties, Algebra Universalis 3 (1973), 174-181. R. MCKenzie [70] Equational bases for lattice theories, Math. Scand. 27 (1970), 24-38. [72] Equational bases and non-modular lattice varieties, Trans. Amer. Math. Soc. 174 (1972), 1-43. 
V. L. MurskiI [65] The existence in the three-valued logic of a closed class with a finite basis having no finite complete system of identities, (Russian), Dokl. Akad. Nauk. SSSR 163 (1965), 815-818, MR 32-3998. J. B. Nation [82] Finite sublattices of a free lattice, Trans. Amer. Math. Soc. 269 (1982), 311-337. [85] Some varieties of semidistributive lattices, "Universal algebra and lattice theory," (Charlston, S. C., 1984) Lecture Notes in Math. 1149, Springer Verlag, Berlin - New York (1985), 198-223. [86] Lattice varieties covering $V\left(L_{1}\right)$, Algebra Universalis 23 (1986), 132-166. O. T. NELSON [68] Subdirect decompositions of lattices of width two, Pacific J. Math. 24 (1968), 519-523. B. H. Neumann [54] An essay on free products of groups with amalgamation, Philos. Trans. Roy. Soc. London Ser. A 246 (1954), 503-554, MR 16-10. H. Neumann [67] "Varieties of groups," Springer Verlag, Berlin, Heidelberg (1967). S. Oates and M. B. Powell [64] Identical relational in finite groups, Quarterly J. of Math. 15 (1964), 131-148. D. Pickering [84] On minimal non-Arguesian lattice varieties, Ph.D. Thesis, U. of Hawaii (1984). [a] Minimal non-Arguesian Lattices, preprint. R. S. Pierce [68] "Introduction to the theory of abstract algebras," Holt, Rinehart and Winston, New York-Montreal, Que.-London (1968), MR 37-2655. P. Pudlák and J. Tuma [80] Every finite lattice can be embedded in the lattice of all equivalences over a finite set, Algebra Universalis 10 (1980), 74-95. I. RIVAL [76] Varieties of nonmodular lattices, Notices Amer. Math. Soc. 23 (1976), A-420. H. Rose [84] Nonmodular lattice varieties, Memoirs of Amer. Math. Soc. 292 (1984). W. Ruckelshausen [78] Obere Nachbarn von $\mathcal{M}_{3}+\mathcal{N}$, Contributions of general algebra, proceedings of the Klagenfurt Conference (1978), 291-329. O. Schreier [27] Die Untergruppen der freien Gruppen, Abh. Math. Sem. Univ. Hamburg 5 (1927), 161-183. M. SChütZEnberger [45] Sur certains axiomes de la théorie des structures, C. R. Acad. Sci. Paris 221 (1945), 218-220. A. TARSKI [46] A remark on functionally free algebras, Ann. of Math. (2) 47 (1946), 163-165. W. TAYLOR [72] Residually small varieties, Algebra Universalis 2 (1972), 33-53. [73] Products of absolute retracts, Algebra Universalis 3 (1973), 400-401. [78] Baker's finite basis theorem, Algebra Universalis 8 (1978), 191--196. O. Veblen and W. H. Young [10] Projective Geometry, 2 volumes, Ginn and Co., Bosten (1910). V. V. VIŠIN [63] Identity transformations in a four-valued logic, (Russian), Dokl. Akad. Nauk. SSSR 150 (1963), 719-721, MR 33-1266. J. VON NEUMANN [60] "Continuous Geometry," Princeton Univ. Press, Princeton, N. J. (1960). P. Whitman [41] Free lattices, Ann. of Math. (2) 42 (1941), 325-330. [42] Free lattices. II, Ann. of Math. (2) 43 (1942), 104-115. [43] Splittings of a lattice, Amer. J. Math. 65 (1943), 179-196. [46] Lattices, equivalence relations, and subgroups, Bull. Amer. Math. Soc. 52 (1946), 507-522. R. WILLE [69] Primitive Länge und primitive Weite bei modularen Verbänden, Math. Z. 108 (1969), 129-136. [72] Primitive subsets of lattices, Algebra Universalis 2 (1972), 95-98. M. Yasuhara [74] The amalgamation property, the universal-homogeneous models and the generic models, Math. Scand. 34 (1974), 5-36, MR 51-7860.
## Heat equation

In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region. As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related to spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem.[1]

The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of "artificial viscosity" methods, solutions of heat equations have been useful in the mathematical formulation of hydrodynamical shocks. Solutions of the heat equation have also been given much attention in the numerical analysis literature, beginning in the 1950s with work of Jim Douglas, D.W. Peaceman, and Henry Rachford Jr.

### Statement of the equation

In mathematics, if given an open subset $U$ of $\mathbb {R} ^{n}$ and a subinterval $I$ of $\mathbb {R} $, one says that a function $u:U\times I\to \mathbb {R} $ is a solution of the heat equation if ${\frac {\partial u}{\partial t}}={\frac {\partial ^{2}u}{\partial x_{1}^{2}}}+\cdots +{\frac {\partial ^{2}u}{\partial x_{n}^{2}}},$ where $(x_{1},\ldots ,x_{n},t)$ denotes a general point of the domain. It is typical to refer to $t$ as "time" and $x_{1},\ldots ,x_{n}$ as "spatial variables," even in abstract contexts where these phrases fail to have their intuitive meaning. The collection of spatial variables is often referred to simply as $x$. For any given value of $t$, the right-hand side of the equation is the Laplacian of the function $u(\cdot ,t):U\to \mathbb {R} $. As such, the heat equation is often written more compactly as ${\frac {\partial u}{\partial t}}=\Delta u$.

In physics and engineering contexts, especially in the context of diffusion through a medium, it is more common to fix a Cartesian coordinate system and then to consider the specific case of a function $u(x,y,z,t)$ of three spatial variables $(x,y,z)$ and time variable $t$.
One then says that $u$ is a solution of the heat equation if
${\frac {\partial u}{\partial t}}=\alpha \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)$
in which $\alpha $ is a positive coefficient called the thermal diffusivity of the medium. In addition to other physical phenomena, this equation describes the flow of heat in a homogeneous and isotropic medium, with $u(x,y,z,t)$ being the temperature at the point $(x,y,z)$ and time $t$. If the medium is not homogeneous and isotropic, then $\alpha $ would not be a fixed coefficient, and would instead depend on $(x,y,z)$; the equation would also have a slightly different form. In the physics and engineering literature, it is common to use $\nabla ^{2}$ to denote the Laplacian, rather than $\Delta $.

In mathematics as well as in physics and engineering, it is common to use Newton's notation for time derivatives, so that ${\dot {u}}$ is used to denote $\partial u/\partial t$, and the equation can be written ${\dot {u}}=\Delta u$. Note also that the ability to use either $\Delta $ or $\nabla ^{2}$ to denote the Laplacian, without explicit reference to the spatial variables, is a reflection of the fact that the Laplacian is independent of the choice of coordinate system. In mathematical terms, one would say that the Laplacian is "translationally and rotationally invariant." In fact, it is (loosely speaking) the simplest differential operator which has these symmetries. This can be taken as a significant (and purely mathematical) justification of the use of the Laplacian and of the heat equation in modeling any physical phenomena which are homogeneous and isotropic, of which heat diffusion is a principal example.

The "diffusivity constant" $\alpha $ is often not present in mathematical studies of the heat equation, while its value can be very important in engineering. This is not a major difference, for the following reason. Let $u$ be a function with
${\frac {\partial u}{\partial t}}=\alpha \Delta u.$
Define a new function $v(t,x)=u(t/\alpha ,x)$. Then, according to the chain rule, one has
${\frac {\partial }{\partial t}}v(t,x)={\frac {\partial }{\partial t}}u(t/\alpha ,x)=\alpha ^{-1}{\frac {\partial u}{\partial t}}(t/\alpha ,x)=\Delta u(t/\alpha ,x)=\Delta v(t,x)$ (⁎)
Thus, there is a straightforward way of translating between solutions of the heat equation with a general value of $\alpha $ and solutions of the heat equation with $\alpha =1$. As such, for the sake of mathematical analysis, it is often sufficient to only consider the case $\alpha =1$.

Since $\alpha >0$ there is another option to define a $v$ satisfying ${\frac {\partial }{\partial t}}v=\Delta v$ as in (⁎) above by setting $v(t,x)=u(t,\alpha ^{1/2}x)$. Note that the two possible means of defining the new function $v$ discussed here amount, in physical terms, to changing the unit of measure of time or the unit of measure of length.

### Interpretation

**Physical interpretation of the equation.** Informally, the Laplacian operator $\Delta $ gives the difference between the average value of a function in the neighborhood of a point, and its value at that point. Thus, if $u$ is the temperature, $\Delta u$ tells whether (and by how much) the material surrounding each point is hotter or colder, on the average, than the material at that point.

By the second law of thermodynamics, heat will flow from hotter bodies to adjacent colder bodies, in proportion to the difference of temperature and of the thermal conductivity of the material between them.
When heat flows into (respectively, out of) a material, its temperature increases (respectively, decreases), in proportion to the amount of heat divided by the amount (mass) of material, with a proportionality factor called the specific heat capacity of the material. By the combination of these observations, the heat equation says that the rate ${\dot {u}}$ at which the material at a point will heat up (or cool down) is proportional to how much hotter (or cooler) the surrounding material is. The coefficient $\alpha $ in the equation takes into account the thermal conductivity, specific heat, and density of the material.

**Mathematical interpretation of the equation.** The first half of the above physical thinking can be put into a mathematical form. The key is that, for any fixed $x$, one has
${\begin{aligned}u_{(x)}(0)&=u(x)\\u_{(x)}'(0)&=0\\u_{(x)}''(0)&={\frac {1}{n}}\Delta u(x)\end{aligned}}$
where $u_{(x)}(r)$ is the single-variable function denoting the average value of $u$ over the surface of the sphere of radius $r$ centered at $x$; it can be defined by
$u_{(x)}(r)={\frac {1}{\omega _{n-1}r^{n-1}}}\int _{\{y:|x-y|=r\}}u\,d{\mathcal {H}}^{n-1},$
in which $\omega _{n-1}$ denotes the surface area of the unit ball in $n$-dimensional Euclidean space. This formalizes the above statement that the value of $\Delta u$ at a point $x$ measures the difference between the value of $u(x)$ and the value of $u$ at points nearby to $x$, in the sense that the latter is encoded by the values of $u_{(x)}(r)$ for small positive values of $r$.

Following this observation, one may interpret the heat equation as imposing an infinitesimal averaging of a function. Given a solution of the heat equation, the value of $u(x,t+\tau )$ for a small positive value of $\tau $ may be approximated as 1/2n times the average value of the function $u(\cdot ,t)$ over a sphere of very small radius centered at $x$.

### Character of the solutions

The heat equation implies that peaks (local maxima) of $u$ will be gradually eroded down, while depressions (local minima) will be filled in. The value at some point will remain stable only as long as it is equal to the average value in its immediate surroundings. In particular, if the values in a neighborhood are very close to a linear function $Ax+By+Cz+D$, then the value at the center of that neighborhood will not be changing at that time (that is, the derivative ${\dot {u}}$ will be zero).

A more subtle consequence is the maximum principle, which says that the maximum value of $u$ in any region $R$ of the medium will not exceed the maximum value that previously occurred in $R$, unless it is on the boundary of $R$. That is, the maximum temperature in a region $R$ can increase only if heat comes in from outside $R$. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).

Another interesting property is that even if $u$ initially has a sharp jump (discontinuity) of value across some surface inside the medium, the jump is immediately smoothed out by a momentary, infinitesimally short but infinitely large rate of flow of heat through that surface. For example, if two isolated bodies, initially at uniform but different temperatures $u_{0}$ and $u_{1}$, are made to touch each other, the temperature at the point of contact will immediately assume some intermediate value, and a zone will develop around that point where $u$ will gradually vary between $u_{0}$ and $u_{1}$.
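To make the smoothing and maximum-principle behaviour described above concrete, here is a minimal numerical sketch (our addition, not part of the article): an explicit finite-difference scheme for $u_{t}=u_{xx}$ (the case $\alpha =1$), started from a jump initial profile. The grid size, time step and boundary treatment are arbitrary illustrative choices.

```python
import numpy as np

# Explicit finite-difference sketch for u_t = u_xx on [0, 1] with insulated
# (zero-flux) ends, illustrating how a jump in the initial data is smoothed
# and how the discrete maximum never grows.
nx = 101
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2               # respects the explicit-scheme stability bound dt <= dx^2 / 2
x = np.linspace(0.0, 1.0, nx)

u = np.where(x < 0.5, 1.0, 0.0)   # two "bodies" at temperatures 1 and 0 touching at x = 0.5

for step in range(2000):
    lap = np.empty_like(u)
    lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2        # reflecting ends approximate du/dx = 0
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    u = u + dt * lap
    # discrete analogue of the maximum principle: values stay within the initial range
    assert u.max() <= 1.0 + 1e-12 and u.min() >= -1e-12

print("value at the contact point:", u[nx // 2])   # quickly settles at an intermediate value
print("max after diffusion:", u.max(), " min:", u.min())
```

With $r=\Delta t/\Delta x^{2}\leq 1/2$ each update replaces $u_{i}$ by the convex combination $r\,u_{i-1}+(1-2r)\,u_{i}+r\,u_{i+1}$, which is exactly why the discrete maxima are eroded and never exceed their initial values, mirroring the maximum principle quoted above.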
If a certain amount of heat is suddenly applied to a point in the medium, it will spread out in all directions in the form of a diffusion wave. Unlike the elastic and electromagnetic waves, the speed of a diffusion wave drops with time: as it spreads over a larger region, the temperature gradient decreases, and therefore the heat flow decreases too. Specific examples Heat flow in a uniform rod For heat flow, the heat equation follows from the physical laws of conduction of heat and conservation of energy (Cannon 1984). By Fourier's law for an isotropic medium, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across it: $\mathbf {q} =-k\,\nabla u$ where $k$ is the thermal conductivity of the material, $u=u(\mathbf {x} ,t)$ is the temperature, and $\mathbf {q} =\mathbf {q} (\mathbf {x} ,t)$ is a vector field that represents the magnitude and direction of the heat flow at the point $\mathbf {x} $ of space and time $t$. If the medium is a thin rod of uniform section and material, the position is a single coordinate $x$, the heat flow towards increasing $x$ is a scalar field $q=q(t,x)$, and the gradient is an ordinary derivative with respect to the $x$. The equation becomes $q=-k\,{\frac {\partial u}{\partial x}}$ Let $Q=Q(x,t)$ be the internal heat energy per unit volume of the bar at each point and time. In the absence of heat energy generation, from external or internal sources, the rate of change in internal heat energy per unit volume in the material, $\partial Q/\partial t$, is proportional to the rate of change of its temperature, $\partial u/\partial t$. That is, ${\frac {\partial Q}{\partial t}}=c\,\rho \,{\frac {\partial u}{\partial t}}$ where $c$ is the specific heat capacity (at constant pressure, in case of a gas) and $\rho $ is the density (mass per unit volume) of the material. This derivation assumes that the material has constant mass density and heat capacity through space as well as time. Applying the law of conservation of energy to a small element of the medium centered at $x$, one concludes that the rate at which heat accumulates at a given point $x$ is equal to the derivative of the heat flow at that point, negated. That is, ${\frac {\partial Q}{\partial t}}=-{\frac {\partial q}{\partial x}}$ From the above equations it follows that ${\frac {\partial u}{\partial t}}\;=\;-{\frac {1}{c\rho }}{\frac {\partial q}{\partial x}}\;=\;-{\frac {1}{c\rho }}{\frac {\partial }{\partial x}}\left(-k\,{\frac {\partial u}{\partial x}}\right)\;=\;{\frac {k}{c\rho }}{\frac {\partial ^{2}u}{\partial x^{2}}}$ which is the heat equation in one dimension, with diffusivity coefficient $\alpha ={\frac {k}{c\rho }}$ This quantity is called the thermal diffusivity of the medium. Accounting for radiative loss An additional term may be introduced into the equation to account for radiative loss of heat. According to the Stefan–Boltzmann law, this term is $\mu \left(u^{4}-v^{4}\right)$, where $v=v(x,t)$ is the temperature of the surroundings, and $\mu $ is a coefficient that depends on the Stefan-Boltzmann constant and the emissivity of the material. 
The rate of change in internal energy becomes ${\frac {\partial Q}{\partial t}}=-{\frac {\partial q}{\partial x}}-\mu \left(u^{4}-v^{4}\right)$ and the equation for the evolution of $u$ becomes ${\frac {\partial u}{\partial t}}={\frac {k}{c\rho }}{\frac {\partial ^{2}u}{\partial x^{2}}}-{\frac {\mu }{c\rho }}\left(u^{4}-v^{4}\right).$ Non-uniform isotropic medium Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or radiation). This form is more general and particularly useful to recognize which property (e.g. cp or $\rho $) influences which term. $\rho c_{p}{\frac {\partial T}{\partial t}}-\nabla \cdot \left(k\nabla T\right)={\dot {q}}_{V}$ where ${\dot {q}}_{V}$ is the volumetric heat source. Three-dimensional problem In the special cases of propagation of heat in an isotropic and homogeneous medium in a 3-dimensional space, this equation is ${\frac {\partial u}{\partial t}}=\alpha \nabla ^{2}u=\alpha \left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)$   $=\alpha \left(u_{xx}+u_{yy}+u_{zz}\right)$ where: • $u=u(x,y,z,t)$ is temperature as a function of space and time; • ${\tfrac {\partial u}{\partial t}}$ is the rate of change of temperature at a point over time; • $u_{xx}$, $u_{yy}$, and $u_{zz}$ are the second spatial derivatives (thermal conductions) of temperature in the $x$, $y$, and $z$ directions, respectively; • $\alpha \equiv {\tfrac {k}{c_{p}\rho }}$ is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity $k$, the specific heat capacity $c_{p}$, and the mass density $\rho $. The heat equation is a consequence of Fourier's law of conduction (see heat conduction). If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume additional conditions, for example an exponential bound on the growth of solutions[2] or a sign condition (nonnegative solutions are unique by a result of David Widder).[3] Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods. The heat equation is the prototypical example of a parabolic partial differential equation. Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of arbitrary number of dimensions, as $u_{t}=\alpha \nabla ^{2}u=\alpha \Delta u,$ where the Laplace operator, Δ or ∇2, the divergence of the gradient, is taken in the spatial variables. The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein-Uhlenbeck processes. 
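As a concrete, if rough, illustration of the one-dimensional equation u_t = αu_xx with zero boundary values, a minimal explicit finite-difference sketch is given below. The grid sizes, time step, and initial profile are assumptions chosen for illustration only, and this simple explicit scheme is distinct from the Crank–Nicolson method mentioned later in the article.

import numpy as np

# Explicit (FTCS) finite-difference sketch for u_t = alpha * u_xx on [0, 1]
# with u(0, t) = u(1, t) = 0; illustrative only, not a production solver.
alpha, L = 1.0, 1.0
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha            # below the stability limit dt <= dx**2 / (2 * alpha)

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)               # assumed initial temperature profile

for _ in range(500):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0              # Dirichlet boundary conditions

# For this initial profile the exact solution is sin(pi*x) * exp(-pi**2 * alpha * t).
t = 500 * dt
print(np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * t))))   # small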
The equation, and various non-linear analogues, has also been used in image analysis. The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead – like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed.[4][5] Internal heat generation The function u above represents temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units. Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts/litre - W/L) at a rate given by a known function q varying in space and time.[6] Then the heat per unit volume u satisfies an equation ${\frac {1}{\alpha }}{\frac {\partial u}{\partial t}}=\left({\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}+{\frac {\partial ^{2}u}{\partial z^{2}}}\right)+{\frac {1}{k}}q.$ For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero. Solving the heat equation using Fourier series The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is $\displaystyle u_{t}=\alpha u_{xx}$ (1) where u = u(x, t) is a function of two variables x and t. Here • x is the space variable, so x ∈ [0, L], where L is the length of the rod. • t is the time variable, so t ≥ 0. We assume the initial condition $u(x,0)=f(x)\quad \forall x\in [0,L]$ (2) where the function f is given, and the boundary conditions $u(0,t)=0=u(L,t)\quad \forall t>0$. (3) Let us attempt to find a solution of (1) that is not identically zero satisfying the boundary conditions (3) but with the following property: u is a product in which the dependence of u on x, t is separated, that is: $u(x,t)=X(x)T(t).$ (4) This solution technique is called separation of variables. Substituting u back into equation (1), ${\frac {T'(t)}{\alpha T(t)}}={\frac {X''(x)}{X(x)}}.$ Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus: $T'(t)=-\lambda \alpha T(t)$ (5) and $X''(x)=-\lambda X(x).$ (6) We will now show that nontrivial solutions for (6) for values of λ ≤ 0 cannot occur: 1. Suppose that λ < 0. Then there exist real numbers B, C such that $X(x)=Be^{{\sqrt {-\lambda }}\,x}+Ce^{-{\sqrt {-\lambda }}\,x}.$ From (3) we get X(0) = 0 = X(L) and therefore B = 0 = C which implies u is identically 0. 2. Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From equation (3) we conclude in the same manner as in 1 that u is identically 0. 3. Therefore, it must be the case that λ > 0. 
Then there exist real numbers A, B, C such that $T(t)=Ae^{-\lambda \alpha t}$ and $X(x)=B\sin \left({\sqrt {\lambda }}\,x\right)+C\cos \left({\sqrt {\lambda }}\,x\right).$ From (3) we get C = 0 and that for some positive integer n, ${\sqrt {\lambda }}=n{\frac {\pi }{L}}.$ This solves the heat equation in the special case that the dependence of u has the special form (4). In general, the sum of solutions to (1) that satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by $u(x,t)=\sum _{n=1}^{\infty }D_{n}\sin \left({\frac {n\pi x}{L}}\right)e^{-{\frac {n^{2}\pi ^{2}\alpha t}{L^{2}}}}$ where $D_{n}={\frac {2}{L}}\int _{0}^{L}f(x)\sin \left({\frac {n\pi x}{L}}\right)\,dx.$ Generalizing the solution technique The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenfunctions. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators. Consider the linear operator Δu = uxx. The infinite sequence of functions $e_{n}(x)={\sqrt {\frac {2}{L}}}\sin \left({\frac {n\pi x}{L}}\right)$ for n ≥ 1 are eigenfunctions of Δ. Indeed, $\Delta e_{n}=-{\frac {n^{2}\pi ^{2}}{L^{2}}}e_{n}.$ Moreover, any eigenfunction f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means $\langle e_{n},e_{m}\rangle =\int _{0}^{L}e_{n}(x)e_{m}^{*}(x)dx=\delta _{mn}$ Finally, the sequence {en}n ∈ N spans a dense linear subspace of L2((0, L)). This shows that in effect we have diagonalized the operator Δ. Heat conduction in non-homogeneous anisotropic media In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space. • The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q, so that $q_{t}(V)=\int _{V}Q(x,t)\,dx\quad $ • Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is $\mathbf {H} (x)\cdot \mathbf {n} (x)\,dS.$ Thus the rate of heat flow into V is also given by the surface integral $q_{t}(V)=-\int _{\partial V}\mathbf {H} (x)\cdot \mathbf {n} (x)\,dS$ where n(x) is the outward pointing normal vector at x. • The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient $\mathbf {H} (x)=-\mathbf {A} (x)\cdot \nabla u(x)$ where A(x) is a 3 × 3 real matrix that is symmetric and positive definite. 
• By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral ${\begin{aligned}q_{t}(V)&=-\int _{\partial V}\mathbf {H} (x)\cdot \mathbf {n} (x)\,dS\\&=\int _{\partial V}\mathbf {A} (x)\cdot \nabla u(x)\cdot \mathbf {n} (x)\,dS\\&=\int _{V}\sum _{i,j}\partial _{x_{i}}{\bigl (}a_{ij}(x)\partial _{x_{j}}u(x,t){\bigr )}\,dx\end{aligned}}$ • The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ $\partial _{t}u(x,t)=\kappa (x)Q(x,t)$ Putting these equations together gives the general equation of heat flow: $\partial _{t}u(x,t)=\kappa (x)\sum _{i,j}\partial _{x_{i}}{\bigl (}a_{ij}(x)\partial _{x_{j}}u(x,t){\bigr )}$ Remarks. • The coefficient κ(x) is the inverse of specific heat of the substance at x × density of the substance at x: $\kappa =1/(\rho c_{p})$. • In the case of an isotropic medium, the matrix A is a scalar matrix equal to thermal conductivity k. • In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, then an explicit formula for the solution of the heat equation can seldom be written down, though it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroups theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by $Au(x):=\sum _{i,j}\partial _{x_{i}}a_{ij}(x)\partial _{x_{j}}u(x)$ is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup. Fundamental solutions A fundamental solution, also called a heat kernel, is a solution of the heat equation corresponding to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains; see, for instance, (Evans 2010) for an introductory treatment. In one variable, the Green's function is a solution of the initial value problem (by Duhamel's principle, equivalent to the definition of Green's function as one with a delta function as solution to the first equation) ${\begin{cases}u_{t}(x,t)-ku_{xx}(x,t)=0&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=\delta (x)&\end{cases}}$ where $\delta $ is the Dirac delta function. 
The solution to this problem is the fundamental solution (heat kernel) $\Phi (x,t)={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right).$ One can obtain the general solution of the one variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution: $u(x,t)=\int \Phi (x-y,t)g(y)dy.$ In several spatial variables, the fundamental solution solves the analogous problem ${\begin{cases}u_{t}(\mathbf {x} ,t)-k\sum _{i=1}^{n}u_{x_{i}x_{i}}(\mathbf {x} ,t)=0&(\mathbf {x} ,t)\in \mathbb {R} ^{n}\times (0,\infty )\\u(\mathbf {x} ,0)=\delta (\mathbf {x} )\end{cases}}$ The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e., $\Phi (\mathbf {x} ,t)=\Phi (x_{1},t)\Phi (x_{2},t)\cdots \Phi (x_{n},t)={\frac {1}{\sqrt {(4\pi kt)^{n}}}}\exp \left(-{\frac {\mathbf {x} \cdot \mathbf {x} }{4kt}}\right).$ The general solution of the heat equation on Rn is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has $u(\mathbf {x} ,t)=\int _{\mathbb {R} ^{n}}\Phi (\mathbf {x} -\mathbf {y} ,t)g(\mathbf {y} )d\mathbf {y} .$ The general problem on a domain Ω in Rn is ${\begin{cases}u_{t}(\mathbf {x} ,t)-k\sum _{i=1}^{n}u_{x_{i}x_{i}}(\mathbf {x} ,t)=0&(\mathbf {x} ,t)\in \Omega \times (0,\infty )\\u(\mathbf {x} ,0)=g(\mathbf {x} )&\mathbf {x} \in \Omega \end{cases}}$ with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011). See also: Weierstrass transform Some Green's function solutions in 1D A variety of elementary Green's function solutions in one-dimension are recorded here; many others are available elsewhere.[7] In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation $u_{t}=ku_{xx}+f.$ where f is some given function of x and t. Homogeneous heat equation Initial value problem on (−∞,∞) ${\begin{cases}u_{t}=ku_{xx}&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=g(x)&{\text{Initial condition}}\end{cases}}$ $u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{-\infty }^{\infty }\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)g(y)\,dy$ Comment. This solution is the convolution with respect to the variable x of the fundamental solution $\Phi (x,t):={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right),$ and the function g(x). (The Green's function number of the fundamental solution is X00.) Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for $\left(\partial _{t}-k\partial _{x}^{2}\right)(\Phi *g)=\left[\left(\partial _{t}-k\partial _{x}^{2}\right)\Phi \right]*g=0.$ Moreover, $\Phi (x,t)={\frac {1}{\sqrt {t}}}\,\Phi \left({\frac {x}{\sqrt {t}}},1\right)$ $\int _{-\infty }^{\infty }\Phi (x,t)\,dx=1,$ so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. 
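The convolution formula u = Φ ∗ g and the approximation-to-the-identity behaviour just described can be illustrated numerically; in the sketch below the box-shaped initial data g and all grid parameters are assumptions chosen for illustration.

import numpy as np

k = 1.0

def Phi(x, t):
    # Fundamental solution (heat kernel) of u_t = k * u_xx on the line.
    return np.exp(-x**2 / (4.0 * k * t)) / np.sqrt(4.0 * np.pi * k * t)

g = lambda y: np.where(np.abs(y) < 1.0, 1.0, 0.0)   # assumed box-shaped initial data

y = np.linspace(-10.0, 10.0, 4001)                  # quadrature grid
dy = y[1] - y[0]

def u(x, t):
    # u(x, t) = integral of Phi(x - y, t) * g(y) dy, approximated by a Riemann sum.
    return (Phi(x[:, None] - y[None, :], t) * g(y)[None, :]).sum(axis=1) * dy

x = np.linspace(-3.0, 3.0, 7)
print(Phi(y, 1.0).sum() * dy)   # total mass of the kernel, approximately 1
print(u(x, t=0.01))             # still close to the box data: Phi(., t) * g -> g as t -> 0
print(u(x, t=1.0))              # noticeably smoothed out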
For instance, if g is assumed bounded and continuous on R then Φ(⋅, t) ∗ g converges uniformly to g as t → 0, meaning that u(x, t) is continuous on R × [0, ∞) with u(x, 0) = g(x). Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions ${\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u(0,t)=0&{\text{BC}}\end{cases}}$ $u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{0}^{\infty }\left[\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)-\exp \left(-{\frac {(x+y)^{2}}{4kt}}\right)\right]g(y)\,dy$ Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R, so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. The Green's function number of this solution is X10. Initial value problem on (0,∞) with homogeneous Neumann boundary conditions ${\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u_{x}(0,t)=0&{\text{BC}}\end{cases}}$ $u(x,t)={\frac {1}{\sqrt {4\pi kt}}}\int _{0}^{\infty }\left[\exp \left(-{\frac {(x-y)^{2}}{4kt}}\right)+\exp \left(-{\frac {(x+y)^{2}}{4kt}}\right)\right]g(y)\,dy$ Comment. This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20. Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions ${\begin{cases}u_{t}=ku_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u(0,t)=h(t)&{\text{BC}}\end{cases}}$ $u(x,t)=\int _{0}^{t}{\frac {x}{\sqrt {4\pi k(t-s)^{3}}}}\exp \left(-{\frac {x^{2}}{4k(t-s)}}\right)h(s)\,ds,\qquad \forall x>0$ Comment. This solution is the convolution with respect to the variable t of $\psi (x,t):=-2k\partial _{x}\Phi (x,t)={\frac {x}{\sqrt {4\pi kt^{3}}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)$ and the function h(t). Since Φ(x, t) is the fundamental solution of $\partial _{t}-k\partial _{x}^{2},$ the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover, $\psi (x,t)={\frac {1}{x^{2}}}\,\psi \left(1,{\frac {t}{x^{2}}}\right)$ $\int _{0}^{\infty }\psi (x,t)\,dt=1,$ so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞) then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on [0, ∞) × [0, ∞) with u(0, t) = h(t). Inhomogeneous heat equation Problem on (-∞,∞) homogeneous initial conditions Comment. This solution is the convolution in R2, that is with respect to both the variables x and t, of the fundamental solution $\Phi (x,t):={\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)$ and the function f(x, t), both meant as defined on the whole R2 and identically 0 for all t → 0. 
One verifies that $\left(\partial _{t}-k\partial _{x}^{2}\right)(\Phi *f)=f,$ which expressed in the language of distributions becomes $\left(\partial _{t}-k\partial _{x}^{2}\right)\Phi =\delta ,$ where the distribution δ is the Dirac's delta function, that is the evaluation at 0. Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions ${\begin{cases}u_{t}=ku_{xx}+f(x,t)&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u(0,t)=0&{\text{BC}}\end{cases}}$ $u(x,t)=\int _{0}^{t}\int _{0}^{\infty }{\frac {1}{\sqrt {4\pi k(t-s)}}}\left(\exp \left(-{\frac {(x-y)^{2}}{4k(t-s)}}\right)-\exp \left(-{\frac {(x+y)^{2}}{4k(t-s)}}\right)\right)f(y,s)\,dy\,ds$ Comment. This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions ${\begin{cases}u_{t}=ku_{xx}+f(x,t)&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=0&{\text{IC}}\\u_{x}(0,t)=0&{\text{BC}}\end{cases}}$ $u(x,t)=\int _{0}^{t}\int _{0}^{\infty }{\frac {1}{\sqrt {4\pi k(t-s)}}}\left(\exp \left(-{\frac {(x-y)^{2}}{4k(t-s)}}\right)+\exp \left(-{\frac {(x+y)^{2}}{4k(t-s)}}\right)\right)f(y,s)\,dy\,ds$ Comment. This solution is obtained from the first formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. Examples Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions. For example, to solve ${\begin{cases}u_{t}=ku_{xx}+f&(x,t)\in \mathbb {R} \times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\end{cases}}$ let u = w + v where w and v solve the problems ${\begin{cases}v_{t}=kv_{xx}+f,\,w_{t}=kw_{xx}\,&(x,t)\in \mathbb {R} \times (0,\infty )\\v(x,0)=0,\,w(x,0)=g(x)\,&{\text{IC}}\end{cases}}$ Similarly, to solve ${\begin{cases}u_{t}=ku_{xx}+f&(x,t)\in [0,\infty )\times (0,\infty )\\u(x,0)=g(x)&{\text{IC}}\\u(0,t)=h(t)&{\text{BC}}\end{cases}}$ let u = w + v + r where w, v, and r solve the problems ${\begin{cases}v_{t}=kv_{xx}+f,\,w_{t}=kw_{xx},\,r_{t}=kr_{xx}&(x,t)\in [0,\infty )\times (0,\infty )\\v(x,0)=0,\;w(x,0)=g(x),\;r(x,0)=0&{\text{IC}}\\v(0,t)=0,\;w(0,t)=0,\;r(0,t)=h(t)&{\text{BC}}\end{cases}}$ Mean-value property for the heat equation Solutions of the heat equations $(\partial _{t}-\Delta )u=0$ satisfy a mean-value property analogous to the mean-value properties of harmonic functions, solutions of $\Delta u=0,$ though a bit more complicated. 
Precisely, if u solves $(\partial _{t}-\Delta )u=0$ and $(x,t)+E_{\lambda }\subset \mathrm {dom} (u)$ then $u(x,t)={\frac {\lambda }{4}}\int _{E_{\lambda }}u(x-y,t-s){\frac {|y|^{2}}{s^{2}}}ds\,dy,$ where Eλ is a "heat-ball", that is a super-level set of the fundamental solution of the heat equation: $E_{\lambda }:=\{(y,s):\Phi (y,s)>\lambda \},$ $\Phi (x,t):=(4t\pi )^{-{\frac {n}{2}}}\exp \left(-{\frac {|x|^{2}}{4t}}\right).$ Notice that $\mathrm {diam} (E_{\lambda })=o(1)$ as λ → ∞ so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough.[8] This can be shown by an argument similar to the analogous one for harmonic functions. Steady-state heat equation The steady-state heat equation is by definition not dependent on time. In other words, it is assumed conditions exist such that: ${\frac {\partial u}{\partial t}}=0$ This condition depends on the time constant and the amount of time passed since boundary conditions have been imposed. Thus, the condition is fulfilled in situations in which the time equilibrium constant is fast enough that the more complex time-dependent heat equation can be approximated by the steady-state case. Equivalently, the steady-state condition exists for all cases in which enough time has passed that the thermal field u no longer evolves in time. In the steady-state case, a spatial thermal gradient may (or may not) exist, but if it does, it does not change in time. This equation therefore describes the end result in all thermal problems in which a source is switched on (for example, an engine started in an automobile), and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as again, with an automobile in which the engine has been running for long enough). The other (trivial) solution is for all spatial temperature gradients to disappear as well, in which case the temperature become uniform in space, as well. The equation is much simpler and can help to understand better the physics of the materials without focusing on the dynamic of the heat transport process. It is widely used for simple engineering problems assuming there is equilibrium of the temperature fields and heat transport, with time. Steady-state condition: ${\frac {\partial u}{\partial t}}=0$ The steady-state heat equation for a volume that contains a heat source (the inhomogeneous case), is the Poisson's equation: $-k\nabla ^{2}u=q$ where u is the temperature, k is the thermal conductivity and q is the rate of heat generation per unit volume. In electrostatics, this is equivalent to the case where the space under consideration contains an electrical charge. The steady-state heat equation without a heat source within the volume (the homogeneous case) is the equation in electrostatics for a volume of free space that does not contain a charge. It is described by Laplace's equation: $\nabla ^{2}u=0$ Applications Particle diffusion Main article: Diffusion equation One can model particle diffusion by an equation involving either: • the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or • the probability density function associated with the position of a single particle, denoted P. In either case, one uses the heat equation $c_{t}=D\Delta c,$ or $P_{t}=D\Delta P.$ Both c and P are functions of position and time. 
D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in meters squared over second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation. Brownian motion Let the stochastic process $X$ be the solution to the stochastic differential equation ${\begin{cases}\mathrm {d} X_{t}={\sqrt {2k}}\;\mathrm {d} B_{t}\\X_{0}=0\end{cases}}$ where $B$ is the Wiener process (standard Brownian motion). The probability density function of $X$ is given at any time $t$ by ${\frac {1}{\sqrt {4\pi kt}}}\exp \left(-{\frac {x^{2}}{4kt}}\right)$ which is the solution to the initial value problem ${\begin{cases}u_{t}(x,t)-ku_{xx}(x,t)=0,&(x,t)\in \mathbb {R} \times (0,+\infty )\\u(x,0)=\delta (x)\end{cases}}$ where $\delta $ is the Dirac delta function. Schrödinger equation for a free particle Main article: Schrödinger equation With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way: $\psi _{t}={\frac {i\hbar }{2m}}\Delta \psi $, where i is the imaginary unit, ħ is the reduced Planck's constant, and ψ is the wave function of the particle. This equation is formally similar to the particle diffusion equation, which one obtains through the following transformation: ${\begin{aligned}c(\mathbf {R} ,t)&\to \psi (\mathbf {R} ,t)\\D&\to {\frac {i\hbar }{2m}}\end{aligned}}$ Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0: $\psi (\mathbf {R} ,t)=\int \psi \left(\mathbf {R} ^{0},t=0\right)G\left(\mathbf {R} -\mathbf {R} ^{0},t\right)dR_{x}^{0}\,dR_{y}^{0}\,dR_{z}^{0},$ with $G(\mathbf {R} ,t)=\left({\frac {m}{2\pi i\hbar t}}\right)^{3/2}e^{-{\frac {\mathbf {R} ^{2}m}{2i\hbar t}}}.$ Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying Schrödinger's equation might have an origin other than diffusion. Thermal diffusivity in polymers A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere TC ${\frac {T_{C}-T_{S}}{T_{0}-T_{S}}}=2\sum _{n=1}^{\infty }(-1)^{n+1}\exp \left({-{\frac {n^{2}\pi ^{2}\alpha t}{L^{2}}}}\right)$ where T0 is the initial temperature of the sphere and TS the temperature at the surface of the sphere, of radius L. This equation has also found applications in protein energy transfer and thermal modeling in biophysics. Further applications The heat equation arises in the modeling of a number of phenomena and is often used in financial mathematics in the modeling of options. The Black–Scholes option pricing model's differential equation can be transformed into the heat equation allowing relatively easy solutions from a familiar body of mathematics. 
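Returning to the Brownian-motion connection above, the stated density of X_t = √(2k) B_t can be checked with a short Monte Carlo simulation; the parameter values and sample size below are assumptions chosen for illustration.

import numpy as np

rng = np.random.default_rng(0)
k, t, n = 0.5, 2.0, 200_000

# X_t = sqrt(2k) * B_t with B_t ~ N(0, t), so X_t ~ N(0, 2kt).
X = np.sqrt(2.0 * k * t) * rng.standard_normal(n)

# Compare the empirical histogram with the heat kernel 1/sqrt(4*pi*k*t) * exp(-x^2/(4kt)).
hist, edges = np.histogram(X, bins=200, range=(-8.0, 8.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
density = np.exp(-centers**2 / (4.0 * k * t)) / np.sqrt(4.0 * np.pi * k * t)
print(np.max(np.abs(hist - density)))   # small for a large sample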
Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form with the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed form analytic solutions (Thambynayagam 2011). The heat equation is also widely used in image analysis (Perona & Malik 1990) and in machine-learning as the driving theory behind scale-space or graph Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method of (Crank & Nicolson 1947). This method can be extended to many of the models with no closed form solution, see for instance (Wilmott, Howison & Dewynne 1995). An abstract form of heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry. See also • Caloric polynomial • Curve-shortening flow • Diffusion equation • Relativistic heat conduction • Schrödinger equation • Weierstrass transform Notes 1. Berline, Nicole; Getzler, Ezra; Vergne, Michèle. Heat kernels and Dirac operators. Grundlehren der Mathematischen Wissenschaften, 298. Springer-Verlag, Berlin, 1992. viii+369 pp. ISBN 3-540-53340-0 2. Stojanovic, Srdjan (2003), "3.3.1.3 Uniqueness for heat PDE with exponential growth at infinity", Computational Financial Mathematics using MATHEMATICA®: Optimal Trading in Stocks and Options, Springer, pp. 112–114, ISBN 9780817641979 3. John, Fritz (1991-11-20). Partial Differential Equations. Springer Science & Business Media. p. 222. ISBN 978-0-387-90609-6. 4. The Mathworld: Porous Medium Equation and the other related models have solutions with finite wave propagation speed. 5. Juan Luis Vazquez (2006-12-28), The Porous Medium Equation: Mathematical Theory, Oxford University Press, USA, ISBN 978-0-19-856903-9 6. Note that the units of u must be selected in a manner compatible with those of q. Thus instead of being for thermodynamic temperature (Kelvin - K), units of u should be J/L. 7. The Green's Function Library contains a variety of fundamental solutions to the heat equation. 8. Conversely, any function u satisfying the above mean-value property on an open domain of Rn × R is a solution of the heat equation References • Cannon, John Rozier (1984), The one–dimensional heat equation, Encyclopedia of Mathematics and its Applications, vol. 23, Reading, MA: Addison-Wesley Publishing Company, Advanced Book Program, ISBN 0-201-13522-1, MR 0747979, Zbl 0567.35001 • Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type", Proceedings of the Cambridge Philosophical Society, 43 (1): 50–67, Bibcode:1947PCPS...43...50C, doi:10.1017/S0305004100023197, S2CID 16676040 • Evans, Lawrence C. (2010), Partial Differential Equations, Graduate Studies in Mathematics, vol. 19 (2nd ed.), Providence, RI: American Mathematical Society, ISBN 978-0-8218-4974-3 • Perona, P; Malik, J. (1990), "Scale-Space and Edge Detection Using Anisotropic Diffusion" (PDF), IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (7): 629–639, doi:10.1109/34.56205, S2CID 14502908 • Thambynayagam, R. K. M. 
(2011), The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill Professional, ISBN 978-0-07-175184-1 • Wilmott, Paul; Howison, Sam; Dewynne, Jeff (1995), The mathematics of financial derivatives. A student introduction, Cambridge: Cambridge University Press, ISBN 0-521-49699-3 Further reading • Carslaw, H.S.; Jaeger, J.C. (1988), Conduction of heat in solids, Oxford Science Publications (2nd ed.), New York: The Clarendon Press, Oxford University Press, ISBN 978-0-19-853368-9 • Cole, Kevin D.; Beck, James V.; Haji-Sheikh, A.; Litkouhi, Bahan (2011), Heat conduction using Green's functions, Series in Computational and Physical Processes in Mechanics and Thermal Sciences (2nd ed.), Boca Raton, FL: CRC Press, ISBN 978-1-43-981354-6 • Einstein, Albert (1905), "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen", Annalen der Physik, 322 (8): 549–560, Bibcode:1905AnP...322..549E, doi:10.1002/andp.19053220806 • Friedman, Avner (1964), Partial differential equations of parabolic type, Englewood Cliffs, N.J.: Prentice-Hall • Unsworth, J.; Duarte, F. J. (1979), "Heat diffusion in a solid sphere and Fourier Theory", Am. J. Phys., 47 (11): 891–893, Bibcode:1979AmJPh..47..981U, doi:10.1119/1.11601 • Widder, D.V. (1975), The heat equation, Pure and Applied Mathematics, vol. 67, New York-London: Academic Press [Harcourt Brace Jovanovich, Publishers] External links Wikiversity has learning resources about Heat equation Wikimedia Commons has media related to Heat equation. • Derivation of the heat equation • Linear heat equations: Particular solutions and boundary value problems - from EqWorld • "The Heat Equation". PBS Infinite Series. November 17, 2017. Archived from the original on 2021-12-11 – via YouTube.
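As a companion to the remark above that the heat equation can be solved numerically with the implicit Crank–Nicolson scheme (Crank & Nicolson 1947), the following is a minimal one-dimensional sketch with homogeneous Dirichlet boundary conditions. The grid, time step, and initial condition are assumptions chosen for illustration, and a dense linear solve is used for simplicity where a tridiagonal solver would normally be preferred.

import numpy as np

alpha, L = 1.0, 1.0
nx, nt, dt = 51, 50, 0.01
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                        # assumed initial condition, zero at both ends

r = alpha * dt / (2.0 * dx**2)
m = nx - 2                                   # interior points
T = (np.diag(-2.0 * np.ones(m)) +
     np.diag(np.ones(m - 1), 1) +
     np.diag(np.ones(m - 1), -1))            # second-difference matrix
A = np.eye(m) - r * T                        # implicit half-step
B = np.eye(m) + r * T                        # explicit half-step

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])    # boundary values stay at zero

# Compare with the exact solution sin(pi*x) * exp(-pi**2 * alpha * t).
t = nt * dt
print(np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * t))))   # small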
Čech cohomology In mathematics, specifically algebraic topology, Čech cohomology is a cohomology theory based on the intersection properties of open covers of a topological space. It is named for the mathematician Eduard Čech. Motivation Let X be a topological space, and let ${\mathcal {U}}$ be an open cover of X. Let $N({\mathcal {U}})$ denote the nerve of the covering. The idea of Čech cohomology is that, for an open cover ${\mathcal {U}}$ consisting of sufficiently small open sets, the resulting simplicial complex $N({\mathcal {U}})$ should be a good combinatorial model for the space X. For such a cover, the Čech cohomology of X is defined to be the simplicial cohomology of the nerve. This idea can be formalized by the notion of a good cover. However, a more general approach is to take the direct limit of the cohomology groups of the nerve over the system of all possible open covers of X, ordered by refinement. This is the approach adopted below. Construction Let X be a topological space, and let ${\mathcal {F}}$ be a presheaf of abelian groups on X. Let ${\mathcal {U}}$ be an open cover of X. Simplex A q-simplex σ of ${\mathcal {U}}$ is an ordered collection of q+1 sets chosen from ${\mathcal {U}}$, such that the intersection of all these sets is non-empty. This intersection is called the support of σ and is denoted |σ|. Now let $\sigma =(U_{i})_{i\in \{0,\ldots ,q\}}$ be such a q-simplex. The j-th partial boundary of σ is defined to be the (q−1)-simplex obtained by removing the j-th set from σ, that is: $\partial _{j}\sigma :=(U_{i})_{i\in \{0,\ldots ,q\}\setminus \{j\}}.$ The boundary of σ is defined as the alternating sum of the partial boundaries: $\partial \sigma :=\sum _{j=0}^{q}(-1)^{j+1}\partial _{j}\sigma $ viewed as an element of the free abelian group spanned by the simplices of ${\mathcal {U}}$. Cochain A q-cochain of ${\mathcal {U}}$ with coefficients in ${\mathcal {F}}$ is a map which associates with each q-simplex σ an element of ${\mathcal {F}}(|\sigma |)$, and we denote the set of all q-cochains of ${\mathcal {U}}$ with coefficients in ${\mathcal {F}}$ by $C^{q}({\mathcal {U}},{\mathcal {F}})$. $C^{q}({\mathcal {U}},{\mathcal {F}})$ is an abelian group by pointwise addition. Differential The cochain groups can be made into a cochain complex $(C^{\bullet }({\mathcal {U}},{\mathcal {F}}),\delta )$ by defining the coboundary operator $\delta _{q}:C^{q}({\mathcal {U}},{\mathcal {F}})\to C^{q+1}({\mathcal {U}},{\mathcal {F}})$ by: $\quad (\delta _{q}f)(\sigma ):=\sum _{j=0}^{q+1}(-1)^{j}\mathrm {res} _{|\sigma |}^{|\partial _{j}\sigma |}f(\partial _{j}\sigma ),$ where $\mathrm {res} _{|\sigma |}^{|\partial _{j}\sigma |}$ is the restriction morphism from ${\mathcal {F}}(|\partial _{j}\sigma |)$ to ${\mathcal {F}}(|\sigma |).$ (Notice that ∂jσ ⊆ σ, but |σ| ⊆ |∂jσ|.) A calculation shows that $\delta _{q+1}\circ \delta _{q}=0.$ The coboundary operator is analogous to the exterior derivative of de Rham cohomology, so it is sometimes called the differential of the cochain complex. Cocycle A q-cochain is called a q-cocycle if it is in the kernel of $\delta $, hence $Z^{q}({\mathcal {U}},{\mathcal {F}}):=\ker(\delta _{q})\subseteq C^{q}({\mathcal {U}},{\mathcal {F}})$ is the set of all q-cocycles. 
Thus a (q−1)-cochain $f$ is a cocycle if for all q-simplices $\sigma $ the cocycle condition $\sum _{j=0}^{q}(-1)^{j}\mathrm {res} _{|\sigma |}^{|\partial _{j}\sigma |}f(\partial _{j}\sigma )=0$ holds. A 0-cocycle $f$ is a collection of local sections of ${\mathcal {F}}$ satisfying a compatibility relation on every intersecting $A,B\in {\mathcal {U}}$ $f(A)|_{A\cap B}=f(B)|_{A\cap B}$ A 1-cocycle $f$ satisfies for every non-empty $U=A\cap B\cap C$ with $A,B,C\in {\mathcal {U}}$ $f(B\cap C)|_{U}-f(A\cap C)|_{U}+f(A\cap B)|_{U}=0$ Coboundary A q-cochain is called a q-coboundary if it is in the image of $\delta $ and $B^{q}({\mathcal {U}},{\mathcal {F}}):=\mathrm {Im} (\delta _{q-1})\subseteq C^{q}({\mathcal {U}},{\mathcal {F}})$ is the set of all q-coboundaries. For example, a 1-cochain $f$ is a 1-coboundary if there exists a 0-cochain $h$ such that for every intersecting $A,B\in {\mathcal {U}}$ $f(A\cap B)=h(A)|_{A\cap B}-h(B)|_{A\cap B}$ Cohomology The Čech cohomology of ${\mathcal {U}}$ with values in ${\mathcal {F}}$ is defined to be the cohomology of the cochain complex $(C^{\bullet }({\mathcal {U}},{\mathcal {F}}),\delta )$. Thus the qth Čech cohomology is given by ${\check {H}}^{q}({\mathcal {U}},{\mathcal {F}}):=H^{q}((C^{\bullet }({\mathcal {U}},{\mathcal {F}}),\delta ))=Z^{q}({\mathcal {U}},{\mathcal {F}})/B^{q}({\mathcal {U}},{\mathcal {F}})$. The Čech cohomology of X is defined by considering refinements of open covers. If ${\mathcal {V}}$ is a refinement of ${\mathcal {U}}$ then there is a map in cohomology ${\check {H}}^{*}({\mathcal {U}},{\mathcal {F}})\to {\check {H}}^{*}({\mathcal {V}},{\mathcal {F}}).$ The open covers of X form a directed set under refinement, so the above map leads to a direct system of abelian groups. The Čech cohomology of X with values in ${\mathcal {F}}$ is defined as the direct limit ${\check {H}}(X,{\mathcal {F}}):=\varinjlim _{\mathcal {U}}{\check {H}}({\mathcal {U}},{\mathcal {F}})$ of this system. The Čech cohomology of X with coefficients in a fixed abelian group A, denoted ${\check {H}}(X;A)$, is defined as ${\check {H}}(X,{\mathcal {F}}_{A})$ where ${\mathcal {F}}_{A}$ is the constant sheaf on X determined by A. A variant of Čech cohomology, called numerable Čech cohomology, is defined as above, except that all open covers considered are required to be numerable: that is, there is a partition of unity {ρi} such that each support $\{x\mid \rho _{i}(x)>0\}$ is contained in some element of the cover. If X is paracompact and Hausdorff, then numerable Čech cohomology agrees with the usual Čech cohomology. Relation to other cohomology theories If X is homotopy equivalent to a CW complex, then the Čech cohomology ${\check {H}}^{*}(X;A)$ is naturally isomorphic to the singular cohomology $H^{*}(X;A)\,$. If X is a differentiable manifold, then ${\check {H}}^{*}(X;\mathbb {R} )$ is also naturally isomorphic to the de Rham cohomology; the article on de Rham cohomology provides a brief review of this isomorphism. For less well-behaved spaces, Čech cohomology differs from singular cohomology. For example if X is the closed topologist's sine curve, then ${\check {H}}^{1}(X;\mathbb {Z} )=\mathbb {Z} ,$ whereas $H^{1}(X;\mathbb {Z} )=0.$ If X is a differentiable manifold and the cover ${\mathcal {U}}$ of X is a "good cover" (i.e. 
all the sets Uα are contractible to a point, and all finite intersections of sets in ${\mathcal {U}}$ are either empty or contractible to a point), then ${\check {H}}^{*}({\mathcal {U}};\mathbb {R} )$ is isomorphic to the de Rham cohomology. If X is compact Hausdorff, then Čech cohomology (with coefficients in a discrete group) is isomorphic to Alexander-Spanier cohomology. For a presheaf ${\mathcal {F}}$ on X, let ${\mathcal {F}}^{+}$ denote its sheafification. Then we have a natural comparison map $\chi :{\check {H}}^{*}(X,{\mathcal {F}})\to H^{*}(X,{\mathcal {F}}^{+})$ from Čech cohomology to sheaf cohomology. If X is paracompact Hausdorff, then $ \chi $ is an isomorphism. More generally, $ \chi $ is an isomorphism whenever the Čech cohomology of all presheaves on X with zero sheafification vanishes.[2] In algebraic geometry Čech cohomology can be defined more generally for objects in a site C endowed with a topology. This applies, for example, to the Zariski site or the étale site of a scheme X. The Čech cohomology with values in some sheaf ${\mathcal {F}}$ is defined as ${\check {H}}^{n}(X,{\mathcal {F}}):=\varinjlim _{\mathcal {U}}{\check {H}}^{n}({\mathcal {U}},{\mathcal {F}}).$ where the colimit runs over all coverings (with respect to the chosen topology) of X. Here ${\check {H}}^{n}({\mathcal {U}},{\mathcal {F}})$ is defined as above, except that the r-fold intersections of open subsets inside the ambient topological space are replaced by the r-fold fiber product ${\mathcal {U}}^{\times _{X}^{r}}:={\mathcal {U}}\times _{X}\dots \times _{X}{\mathcal {U}}.$ As in the classical situation of topological spaces, there is always a map ${\check {H}}^{n}(X,{\mathcal {F}})\rightarrow H^{n}(X,{\mathcal {F}})$ from Čech cohomology to sheaf cohomology. It is always an isomorphism in degrees n = 0 and 1, but may fail to be so in general. For the Zariski topology on a Noetherian separated scheme, Čech and sheaf cohomology agree for any quasi-coherent sheaf. For the étale topology, the two cohomologies agree for any étale sheaf on X, provided that any finite set of points of X are contained in some open affine subscheme. This is satisfied, for example, if X is quasi-projective over an affine scheme.[3] The possible difference between Čech cohomology and sheaf cohomology is a motivation for the use of hypercoverings: these are more general objects than the Čech nerve $N_{X}{\mathcal {U}}:\dots \to {\mathcal {U}}\times _{X}{\mathcal {U}}\times _{X}{\mathcal {U}}\to {\mathcal {U}}\times _{X}{\mathcal {U}}\to {\mathcal {U}}.$ A hypercovering K∗ of X is a certain simplicial object in C, i.e., a collection of objects Kn together with boundary and degeneracy maps. Applying a sheaf ${\mathcal {F}}$ to K∗ yields a simplicial abelian group $ {\mathcal {F}}(K_{\ast })$ whose n-th cohomology group is denoted $ H^{n}({\mathcal {F}}(K_{\ast }))$. (This group is the same as ${\check {H}}^{n}({\mathcal {U}},{\mathcal {F}})$ in case K∗ equals $N_{X}{\mathcal {U}}$.) Then, it can be shown that there is a canonical isomorphism $H^{n}(X,{\mathcal {F}})\cong \varinjlim _{K_{*}}H^{n}({\mathcal {F}}(K_{*})),$ where the colimit now runs over all hypercoverings.[4] Examples For example, we can compute the coherent sheaf cohomology of $\Omega ^{1}$ on the projective line $\mathbb {P} _{\mathbb {C} }^{1}$ using the Čech complex. 
Using the cover ${\mathcal {U}}=\{U_{1}={\text{Spec}}(\mathbb {C} [y]),U_{2}={\text{Spec}}(\mathbb {C} [y^{-1}])\}$ we have the following modules from the cotangent sheaf ${\begin{aligned}&\Omega ^{1}(U_{1})=\mathbb {C} [y]dy\\&\Omega ^{1}(U_{2})=\mathbb {C} \left[y^{-1}\right]dy^{-1}\end{aligned}}$ If we take the conventions that $dy^{-1}=-(1/y^{2})dy$ then we get the Čech complex $0\to \mathbb {C} [y]dy\oplus \mathbb {C} \left[y^{-1}\right]dy^{-1}{\xrightarrow {d^{0}}}\mathbb {C} \left[y,y^{-1}\right]dy\to 0$ Since $d^{0}$ is injective and the only element not in the image of $d^{0}$ is $y^{-1}dy$ we get that ${\begin{aligned}&H^{1}(\mathbb {P} _{\mathbb {C} }^{1},\Omega ^{1})\cong \mathbb {C} \\&H^{k}(\mathbb {P} _{\mathbb {C} }^{1},\Omega ^{1})\cong 0{\text{ for }}k\neq 1\end{aligned}}$ References Citation footnotes 1. Penrose, Roger (1992), "On the Cohomology of Impossible Figures", Leonardo, 25 (3/4): 245–247, doi:10.2307/1575844, JSTOR 1575844, S2CID 125905129. Reprinted from Penrose, Roger (1991), "On the Cohomology of Impossible Figures / La Cohomologie des Figures Impossibles", Structural Topology, 17: 11–16, retrieved January 16, 2014 2. Brady, Zarathustra. "Notes on sheaf cohomology" (PDF). p. 11. Archived (PDF) from the original on 2022-06-17. 3. Milne, James S. (1980), "Section III.2, Theorem 2.17", Étale cohomology, Princeton Mathematical Series, vol. 33, Princeton University Press, ISBN 978-0-691-08238-7, MR 0559531 4. Artin, Michael; Mazur, Barry (1969), "Lemma 8.6", Etale homotopy, Lecture Notes in Mathematics, vol. 100, Springer, p. 98, ISBN 978-3-540-36142-8 General references • Bott, Raoul; Loring Tu (1982). Differential Forms in Algebraic Topology. Springer. ISBN 0-387-90613-4. • Hatcher, Allen (2002). Algebraic Topology (PDF). Cambridge University Press. ISBN 0-521-79540-0. • Wells, Raymond (1980). "2. Sheaf Theory: Appendix A. Cech Cohomology with Coefficients in a Sheaf". Differential Analysis on Complex Manifolds. Springer. pp. 63–64. doi:10.1007/978-1-4757-3946-6_2. ISBN 978-3-540-90419-9.
Research | Open | Published: 12 March 2019 Delta-radiomics signature predicts treatment outcomes after preoperative chemoradiotherapy and surgery in rectal cancer Seung Hyuck Jeon1, Changhoon Song2, Eui Kyu Chie1, Bohyoung Kim3, Young Hoon Kim4, Won Chang4, Yoon Jin Lee4, Joo-Hyun Chung1, Jin Beom Chung2, Keun-Wook Lee5, Sung-Bum Kang6 & Jae-Sung Kim2 Radiation Oncology, volume 14, Article number: 43 (2019) To develop and compare delta-radiomics signatures from 2- (2D) and 3-dimensional (3D) features that predict treatment outcomes following preoperative chemoradiotherapy (CCRT) and surgery for locally advanced rectal cancer. In total, 101 patients (training cohort, n = 67; validation cohort, n = 34) with locally advanced rectal adenocarcinoma treated between 2008 and 2015 were included. We extracted 55 features from T2-weighted magnetic resonance imaging (MRI) scans. The delta-radiomics feature was defined as the difference in each radiomics feature before and after CCRT. Signatures were developed to predict local recurrence (LR), distant metastasis (DM), and disease-free survival (DFS) from 2D and 3D features. The least absolute shrinkage and selection operator regression was used to select features and build signatures. The delta-radiomics signatures and clinical factors were integrated into Cox regression analysis to determine if the signatures were independent prognostic factors. The radiomics signatures for LR, DM, and DFS were developed and validated using both 2D and 3D features. Outcomes were significantly different in the low- and high-risk patients dichotomized by optimal cutoff in both the training and validation cohorts. In multivariate analysis, the signatures were independent prognostic factors even when considering the clinical parameters. There were no significant differences in C-index between the 2D and 3D signatures. This is the first study to develop delta-radiomics signatures for rectal cancer. The signatures successfully predicted the outcomes and were independent prognostic factors. External validation is warranted to ensure their performance. After the landmark randomized trial [1], preoperative chemoradiotherapy (CCRT) followed by total mesorectal excision (TME) has been a standard treatment strategy for locoregionally advanced rectal cancer. However, efforts have been continuously made to promote risk-adaptive therapy. One such approach is local excision [2, 3] or even observation [4, 5], rather than TME, in good responders following CCRT to reduce the risk of impaired quality of life. Alternatively, adding chemotherapeutic agents or intensifying radiation doses may be attempted in patients with poor response or prognosis [6,7,8]. These strategies can be implemented with the help of treatment outcome predictors; however, there are still no tools explicitly available for this purpose. Radiomics provides image features, extracted from medical images, that are associated with clinical characteristics or outcomes. Numerous studies have been conducted on various cancer types, including lung cancer [9], glioma [10], and head and neck cancer [11], and have proposed radiomics-based predictors with good performance. Radiomics models for rectal cancer have recently been developed using computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) [12,13,14,15,16,17,18]. Some researchers have built radiomics models to predict pathologic response to CCRT [12, 13, 17, 18]. Meng et al. 
[14] reported a radiomics signature to predict disease-free survival (DFS) using pretreatment MRI in which the models could predict the treatment response or prognosis with acceptable predictability. Nonetheless, clinical response to treatment is another important indicator and may improve the performance of the models. Although treatment response is an important prognostic factor, it cannot describe the entire details of response. In this context, some researchers have examined delta-radiomic features, which are the differences in radiomic features before and after treatment. Delta-radiomics deals with serial changes in images, which is one of the major parts of radiologic studies. Delta-radiomics features have been reported to be associated with treatment response or outcome [19, 20]; however, reports on delta-radiomics in rectal cancer are less [21]. In this paper, we have focused on applying delta-radiomics features extracted from T2-weighted MRI to build prediction signatures for treatment outcomes and compared the performances of 2- (2D) and 3-dimensional (3D) features. This retrospective study was approved by the institutional review board of our hospital; the requirement of informed consent was waived. The protocol was compliant with the Health Insurance Portability and Accountability Act. We retrospectively enrolled patients with locally advanced (cT3–4 and/or cN1–2) biopsy-proven rectal adenocarcinoma treated at our institution with preoperative CCRT and TME between 2008 and 2015. Patients with the following were excluded: distant metastasis (DM) at the time of diagnosis, MRI with poor quality (e.g., artifact), or slice spacing of MRI not 4 mm (to minimize the influence of different voxel sizes). Included patients were randomly allocated to the training or validation cohort in a 2:1 ratio. We examined local recurrence (LR), DM, and DFS. LR and DM were defined as recurrences inside and outside the true pelvis, respectively. DFS was calculated as time from beginning of preoperative CCRT to death from any cause or recurrence. Image protocol An MRI scan was obtained for each patient before preoperative CCRT (MRI-before) and before TME after completion of CCRT (MRI-after). MRI-after was acquired at 72 days (median; interquartile range, 70–78) after the start of CCRT. MRI was performed using 1 .5T Gyroscan Intera, 3 T Achieva, or 3 T Ingenia MR scanners (Philips Medical Systems, Best, Netherlands). The protocol included T2-weighted sequences using the following parameters: repetition time, 2424–8296 ms; echo time, 92–120 ms; flip angle, 90°; slice thickness, 3 mm; slice spacing, 4 mm; matrix, 512 × 512–576 × 576. Each region of interest (ROI) was segmented on all T2-weighted axial with reference to diffusion-weighted imaging (DWI) sequences. On MRI-before, the ROI was delineated on the tumor with an area of low to intermediate signal intensity on T2-weighted images, excluding the intestinal lumen. The ROI on MRI-after was defined as residual tumor and/or rectal tissues with abnormal signal intensity on T2-weighted images where tumor preexisted [13]. Bladder urine of approximately 1-cm3 sphere volume was drawn to obtain average pixel value of bladder urine which was used for normalization. Segmentation of all patients was performed manually using the Eclipse system (Varian Medical Systems, Palo Alto, CA, USA) by a radiation oncologist with 12-year experience in gastro-intestinal tumor. Representative examples of tumor segmentations are demonstrated on Fig. 1. 
Examples of tumor segmentation on MRI acquired (a) before and (b) after preoperative CCRT Image preprocessing and feature extraction Image preprocessing and feature extraction were performed using in-house MATLAB R2017b software (MathWorks, Natick, MA, USA). For preprocessing, Collewet normalization algorithm [22] was used to reduce the differences between image acquisition protocols. All pixel values were normalized to average intensity of bladder urine [23] to improve image reproducibility. Pixel values were quantized into 64 levels with bladder urine signal intensity corresponding to the highest level. The 3-dimensional ROIs were isotropically resampled to 1 × 1 × 1-mm3 voxels. For 2-dimensional analysis, the ROI on the axial slice with the largest area was selected and resampled to 1 × 1-mm2 pixels. Within each ROI, (a) volume (or area in 2-dimensional analysis), (b) 8 first-order features, (c) 15 texture features from gray level co-occurrence matrix, (d) 13 texture features from gray level run length matrix, (e) 13 texture features from gray level size zone matrix, and (f) 5 texture features from neighbor gray tone difference matrix were extracted. The details and list of the extracted features are described in Additional file 1: Appendix A and Table B1, respectively. The delta-radiomics feature was defined as the difference between features on MRI-before and MRI-after and calculated as follows: $$ \mathrm{Delta}-\mathrm{radiomic}\ \mathrm{Feature}={\mathrm{Feature}}_{\mathrm{MRI}-\mathrm{after}}-{\mathrm{Feature}}_{\mathrm{MRI}-\mathrm{before}} $$ Feature selection and statistical analysis Robustness of each feature was evaluated by generating translated ROIs and calculating their features. The method was modified from the stability test introduced by Bologna et al. [24] Eight translated ROIs representing inter-observer variability were generated by translating ROI by ±1 mm in lateral and/or ± 1 mm in vertical directions; 0 mm in both directions yields the original ROI and is thus excluded from the robustness test. After extracting the features from the original ROI and 8 translated ROIs, intraclass correlation coefficient (ICC) values were calculated for each feature. Features with ICC > 0.9 in both 3D and 2D extraction in MRI-before and MRI-after were considered robust and selected. This process substituted the comparison of features derived from multiple observers. The least absolute shrinkage and selection operator (LASSO) method was used to select core features and to develop score-based signatures in the training cohort. The final value of λ, a tuning parameter, was determined by 10-fold cross-validation, which gave minimum cross-validation error. A radiomics score (Rad score) was generated by linearly combining the selected core features and their respective coefficients. Consequently, the optimal cutoff of Rad score, making the greatest difference in outcome between the two groups divided by the cutoff, was determined. The differences in clinical and treatment parameters between the training and validation cohort were evaluated using the Student's t-test or chi-squared test, as appropriate. Survival outcomes were compared between these cohorts using the log-rank test. Univariate and multivariate analyses of clinical factors and radiomics scores were performed using the Cox proportional hazards model. 
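To make the two pipeline-specific steps above concrete (the delta-radiomics subtraction and the translation-based robustness screen), the following minimal Python sketch works through both on simulated values. It is not the authors' MATLAB implementation: the patient-level feature values are invented, only a single feature is screened, and a simple one-way random-effects ICC is assumed because the exact ICC formulation is not specified above.

```python
import numpy as np

def icc_one_way(ratings):
    """One-way random-effects ICC for one feature.
    ratings: array of shape (n_patients, n_versions), where the versions are the
    original ROI (column 0) and its translated copies (a proxy for inter-observer variation)."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical data: one radiomic feature per patient, extracted from the original
# ROI and its 8 translated copies, on MRI-before and MRI-after.
rng = np.random.default_rng(0)
feat_before = rng.uniform(4.0, 6.0, size=(30, 1)) + 0.05 * rng.standard_normal((30, 9))
feat_after = rng.uniform(2.0, 4.0, size=(30, 1)) + 0.05 * rng.standard_normal((30, 9))

# Robustness screen: keep the feature only if ICC > 0.9 on both scans.
robust = icc_one_way(feat_before) > 0.9 and icc_one_way(feat_after) > 0.9

# Delta-radiomics feature = Feature_MRI-after - Feature_MRI-before (original ROI only).
delta = feat_after[:, 0] - feat_before[:, 0]
print("robust:", bool(robust), "first delta values:", np.round(delta[:3], 3))
```

In the study itself a feature was retained only if its ICC exceeded 0.9 for both the 2D and 3D extractions on both MRI-before and MRI-after; that screening produced the feature set used for the signature building described next.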
The performance of the models was evaluated with the area under the ROC curve (AUC) and the Hosmer-Lemeshow goodness-of-fit test, and the relationship between the radiomics scores was quantified using the Pearson correlation coefficient and the variance inflation factor (VIF). Variables with a significant association were integrated into multivariate analysis; the association was considered significant when p < 0.05. R software version 3.5.0 was used to perform all statistical analyses (http://www.r-project.org). Patients and treatment characteristics A total of 101 patients were included in the analysis, with 67 in the training and 34 in the validation cohort. The median follow-up duration was 49.7 months (range, 9.3–99.4). Clinical characteristics of the two cohorts are summarized in Table 1. There was no significant difference between the two cohorts. Table 1 Patient characteristics of training and validation cohort All patients received radiotherapy doses of 50.4 Gy in 28 fractions to the primary tumor and regional lymphatics with the 2D (n = 18) or 3D (n = 83) technique. Additionally, 5-fluorouracil (n = 19) or capecitabine (n = 82) was administered concurrently with radiotherapy. TME-based surgery was performed at a median of 48 days (range, 28–90) after the end of CCRT. Adjuvant chemotherapy was administered in 91 patients (90.1%): fluorouracil and leucovorin in 20, capecitabine in 36, uracil and tegafur in 7, and FOLFOX in 28 patients. Development and validation of the delta-radiomics signature The delta-radiomics signatures were developed using the 22 features remaining after the robustness test. The 22 robust features with ICC > 0.9 are listed in Additional file 1: Appendix Table B2. LASSO Cox regression analysis was conducted in the training cohort to select radiomics features with non-zero coefficients.
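For readers who wish to reproduce the selection step just described, the sketch below shows one way to fit an L1-penalized (LASSO) Cox model and read off the non-zero coefficients. It assumes the Python package scikit-survival and fully simulated data; the original analysis was performed in R (see above), and the sketch simply takes one arbitrary point on the penalty path rather than tuning the penalty by the 10-fold cross-validation used in the study.

```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.util import Surv

rng = np.random.default_rng(1)
n_patients, n_features = 67, 22                  # training cohort, robust delta-features
X = rng.standard_normal((n_patients, n_features))
time = rng.uniform(6.0, 96.0, n_patients)        # months to event or censoring (simulated)
event = rng.random(n_patients) < 0.4             # True where recurrence/death was observed
y = Surv.from_arrays(event=event, time=time)

# l1_ratio = 1.0 gives a pure LASSO penalty; the estimator fits a whole penalty path.
lasso_cox = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
lasso_cox.fit(X, y)

# Coefficients at one penalty on the path (illustration only; the study chose the
# penalty that minimized the 10-fold cross-validation error).
coef = lasso_cox.coef_[:, -1]
selected = np.flatnonzero(coef)
rad_score = X @ coef                             # Rad score = linear combination of features
print("selected feature indices:", selected)
print("example Rad scores:", np.round(rad_score[:3], 3))
```

The retained coefficients, combined linearly with their features, are exactly what the published Rad scores listed next express.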
Rad scores predicting LR, DM, and DFS are as follows: $$ \mathrm{LR}\ 3\mathrm{D}\ \mathrm{Radscore}=-5.9627417\times {10}^{-5}\times \mathrm{Volume}+4.0761146\times {\mathrm{Int}}_{\mathrm{Energy}}-135.5705805\times {\mathrm{GLCM}}_{\mathrm{Energy}}+286.7201809\times {\mathrm{GLCM}}_{\mathrm{SumAverage}}+2.7222298\times {10}^{-3}\times {\mathrm{GLCM}}_{\mathrm{Autocorrelation}}+0.1212618\times {\mathrm{GLSZM}}_{\mathrm{LZLGLE}}-263.8275908\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ $$ \mathrm{LR}\ 2\mathrm{D}\ \mathrm{Radscore}=-3.9078995\times {10}^{-4}\times \mathrm{Volume}+2.4888091\times {\mathrm{Int}}_{\mathrm{Energy}}-22.2879655\times {\mathrm{GLCM}}_{\mathrm{Energy}}+432.3870771\times {\mathrm{GLCM}}_{\mathrm{SumAverage}}-8.7912561\times {\mathrm{GLRLM}}_{\mathrm{SRLGLE}}-37.6853716\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ $$ \mathrm{DM}\ 3\mathrm{D}\ \mathrm{Radscore}=2.0001257\times {\mathrm{Int}}_{\mathrm{Energy}}+9.0766595\times {10}^{-5}\times {\mathrm{GLCM}}_{\mathrm{SumVariance}}-36.4133193\times {\mathrm{GLRLM}}_{\mathrm{LRLGLE}}+1.3710706\times {10}^{-4}\times {\mathrm{GLRLM}}_{\mathrm{LRHGLE}}+2.0565009\times {10}^{-8}\times {\mathrm{GLSZM}}_{\mathrm{LZHGLE}}-108.1326981\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ $$ \mathrm{DM}\ 2\mathrm{D}\ \mathrm{Radscore}=0.4580866\times {\mathrm{Int}}_{\mathrm{Energy}}-45.2277485\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ $$ \mathrm{DFS}\ 3\mathrm{D}\ \mathrm{Radscore}=1.6123539\times {\mathrm{Int}}_{\mathrm{Energy}}+7.9686750\times {10}^{-5}\times {\mathrm{GLCM}}_{\mathrm{SumVariance}}-32.6172005\times {\mathrm{GLRLM}}_{\mathrm{LRLGLE}}+1.1112033\times {10}^{-4}\times {\mathrm{GLRLM}}_{\mathrm{LRHGLE}}-101.5991090\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ $$ \mathrm{DFS}\ 2\mathrm{D}\ \mathrm{Radscore}=0.3188674\times {\mathrm{Int}}_{\mathrm{Energy}}-44.9766973\times {\mathrm{NGTDM}}_{\mathrm{Coarseness}} $$ Optimal cutoff values of the Rad scores were determined and used to divide the cohort into high- and low-risk groups, with a higher score corresponding to a higher risk. In the training cohort, all Rad scores were significantly associated with the respective outcomes (all p < 0.05, log-rank test). The prognostic performance of all Rad scores was validated in a randomly selected cohort. All Rad scores significantly stratified the risk in the 2 groups (all p < 0.05, log-rank test). The Kaplan-Meier survival curves for LR, DM, and DFS according to the 3D and 2D Rad scores are shown in Fig. 2 and Fig. 3, respectively. The AUC and Hosmer-Lemeshow chi-square values suggest that the predictability of the signatures is acceptable and are detailed in Additional file 1: Appendix Table C. Kaplan–Meier curves of (a) local recurrence, (b) distant metastasis, and (c) disease-free survival according to optimal cutoffs of 3D Rad scores. P-values from the log-rank test are shown. Integration with clinical features Univariate analysis followed by multivariate analysis was performed to verify the radiomics scores as independent prognostic factors for the respective endpoints in the combined cohort. Detailed results of the analyses are presented in Additional file 1: Appendix Table D1–3. All radiomics scores, regardless of 2D or 3D signature, were significantly associated with the corresponding outcomes on multivariate analysis. Comparisons of 3D and 2D signatures Correlations between the 3D and 2D Rad scores were investigated in the 101 patients.
Pearson correlation coefficients of 3D and 2D Rad scores for LR, DM, and DFS were 0.840 (95% CI = 0.771–0.889, p < 0.0001, VIF = 3.39), 0.641 (95% CI = 0.510–0.743, p < 0.0001, VIF = 1.70), and 0.665 (95% CI = 0.540–0.761, p < 0.0001, VIF = 1.79), respectively. The scatterplot of Rad scores and their correlations are shown in Fig. 4. The scatterplots between 3D and 2D Rad scores of the entire cohort (n = 101) predicting (a) local recurrence, (b) distant metastasis, and (c) disease-free survival. Linear fit lines and 95% confidence intervals were drawn, and presented coefficients and p-values were calculated using Pearson correlation coefficient The value of C-index was calculated and compared for each Rad score. In terms of all outcomes, the C-indices between the 2D and 3D Rad scores, both as continuous and binary variables divided by cutoff, were not significant (Table 2). Table 2 Comparison of C-indices of 3D and 2D Rad-scores. The 95% confidence interval for each C-index is presented In the present study, we developed and validated the prognostic role of delta-radiomics signatures in locally advanced rectal cancer treated with preoperative CCRT and surgery. Furthermore, we compared the performance of 2D and 3D radiomics features. To our knowledge, this is the first publication to incorporate delta-radiomics features to predict recurrences in rectal cancer. A good response to preoperative CCRT is consistently associated with improved treatment outcomes of rectal cancer [25, 26]. Although pretreatment radiomics features predict pathologic tumor response, they do not contain all of the information regarding response. Images following preoperative CCRT can reveal indicators of tumor response. Tumor regression grade according to post-treatment T2-weighted MRI is correlated with pathologic tumor regression grade [27, 28]. Thus, we hypothesized that delta-radiomics features on T2-weighted MRI have prognostic power. A recent study showed correlation between delta-radiomics features and clinical response in rectal cancer [21]. Analyzing 16 patients, the study provided an evidence of clinical significance of delta-radiomics features. Since there are no studies with large patients concerning delta-radiomics, however, we strictly limited the number of features included in the investigation. We only included the features that are widely used in radiomics studies, and the image preprocessing step was onefold. Furthermore, we included the features with ICC > 0.9 in all 3D and 2D analyses, leaving 22 features for LASSO regression. The developed Rad scores were successfully validated in a randomly selected cohort. Some radiomics features may be closely related to clinical factors, thus the signatures should be independent prognostic factors in multivariate analysis to be valuable. Rad scores along with clinical factors consistently reported to be prognostic were independently associated with treatment outcomes. Remarkably, post-treatment pathologic characteristics such as pathologic stage and tumor regression grade are incorporated in the analysis. The results suggest that delta-radiomics features may contain more information than microscopic findings, e.g., tumor genotype or microenvironment. The developed signatures are believed to be useful in daily practice. There is no consensus regarding the use of adjuvant chemotherapy after preoperative CCRT and surgery. Subgroups that may benefit from adjuvant chemotherapy have been reported [29,30,31]. 
It is generally hypothesized that patients at high risk of recurrence, usually those with distant metastasis, benefit from adjuvant chemotherapy. Most of the patients (90.1%) in our study received adjuvant chemotherapy, suggesting that, for patients with low Rad scores, adjuvant chemotherapy can be omitted. We compared 2D and 3D delta-radiomics signatures in predicting outcomes. One major advantage of the radiomics approach is that it can represent properties of the whole tumor. However, in the case of 2D features, only part of the tumor is segmented. Therefore, there are concerns regarding the power of 2D radiomics features. The main advantage of 2D radiomics features is the convenience in investigation and application; investigators or users only need to delineate the tumor on 1 representative slice. In addition, 2D features, particularly slice thickness and spacing, may be less dependent on the image-acquiring protocol. Several authors have utilized 2D radiomics features of rectal cancer and reported their predictive power [32,33,34]. As rectal cancer usually grows along the wall and has an irregular shape, the segmented whole tumor may not represent its actual shape [35]; hence, 2D radiomics features need to be further studied. Regarding all outcomes, both 2D and 3D Rad scores were independent prognostic factors on multivariate analysis. By comparing C-index, 2D and 3D Rad scores were not statistically different in prognostic power, possibly because of the high correlation between scores. Hence, our data suggest that 2D delta-radiomics features can be investigated as a good surrogate for 3D features of rectal cancer. One of the drawbacks of our study is the exclusion of functional images such as DWI or other modalities such as CT and PET. Several studies have reported the correlation between DWI parameters and response or outcomes after CCRT [36,37,38]. Recent work by Giannini and colleagues demonstrated the role of PET-derived radiomics features in predicting treatment response [18]. We believe that the performance of delta-radiomics signature would improve with the incorporation of other sequences or modalities. We hope that radiomics features from various images can be used in subsequent delta-radiomics investigations. Another limitation of the study is the different parameters of the analyzed T2-weighted images. We normalized the pixel intensity using Collewet's method and urine intensity and resampled the voxels or pixels into isometric cubes or squares. Nonetheless, the preprocessing steps cannot fully compensate for the differences. However, we believe that the radiomics signature should be applicable to various image protocols for widespread clinical use. In that context, the wide applicability of our signatures needs to be tested in MRIs from other institutions. In conclusion, we developed radiomics scores to predict treatment outcomes after preoperative CCRT and surgery. The results support further investigation of delta-radiomics features in rectal cancer. The 2D and 3D delta-radiomics features were similarly informative. External validation of our signatures is necessary to ensure their performance. CCRT: Chemoradiotherapy DFS: Disease-free survival Distant metastasis DWI: Diffusion-weighted image LASSO: Least absolute shrinkage and selection operator LR: Local recurrence TME: Total mesorectal excision Sauer R, Liersch T, Merkel S, et al. 
Preoperative versus postoperative chemoradiotherapy for locally advanced rectal cancer: results of the German CAO/ARO/AIO-94 randomized phase III trial after a median follow-up of 11 years. J Clin Oncol. 2012;30:1926–33. Stipa F, Picchio M, Burza A, et al. Long-term outcome of local excision after preoperative chemoradiation for ypT0 rectal cancer. Dis Colon Rectum. 2014;57:1245–52. Yu CS, Yun HR, Shin EJ, et al. Local excision after neoadjuvant chemoradiation therapy in advanced rectal cancer: a national multicenter analysis. Am J Surg. 2013;206:482–7. Habr-Gama A, Perez RO, Nadalin W, et al. Operative versus nonoperative treatment for stage 0 distal rectal cancer following chemoradiation therapy: long-term results. Ann Surg. 2004;240:711–7. Renehan AG, Malcomson L, Emsley R, et al. Watch-and-wait approach versus surgical resection after chemoradiotherapy for patients with rectal cancer (the OnCoRe project): a propensity-score matched cohort analysis. Lancet Oncol. 2016;17:174–83. Gérard JP, Azria D, Gourgou-Bourgade S, et al. Comparison of two neoadjuvant chemoradiotherapy regimens for locally advanced rectal cancer: results of the phase III trial ACCORD 12/0405-Prodige 2. J Clin Oncol. 2010;28:1638–44. Habr-Gama A, Perez RO, São Julião GP, et al. Consolidation chemotherapy during neoadjuvant chemoradiation (CRT) for distal rectal cancer leads to sustained decrease in tumor metabolism when compared to standard CRT regimen. Radiat Oncol. 2016;11:24. Breugom AJ, Swets M, Bosset JF, et al. Adjuvant chemotherapy after preoperative (chemo) radiotherapy and surgery for patients with rectal cancer: a systematic review and meta-analysis of individual patient data. Lancet Oncol. 2015;16:200–7. Sun W, Jiang M, Dang J, et al. Effect of machine learning methods on predicting NSCLC overall survival time based on Radiomics analysis. Radiat Oncol. 2018;13(1):197. Li Q, Bai H, Chen Y, et al. A fully-automatic multiparametric Radiomics model: towards reproducible and prognostic imaging signature for prediction of overall survival in glioblastoma Multiforme. Sci Rep. 2017;7:14331. Zhang B, Tian J, Dong D, et al. Radiomics features of multiparametric MRI as novel prognostic factors in advanced nasopharyngeal carcinoma. Clin Cancer Res. 2017;23:4259–69. Nie K, Shi L, Chen Q, et al. Rectal Cancer: assessment of neoadjuvant Chemoradiation outcome based on Radiomics of multiparametric MRI. Clin Cancer Res. 2016;22:5256–64. Liu Z, Zhang XY, Shi YJ, et al. Radiomics analysis for evaluation of pathological complete response to neoadjuvant Chemoradiotherapy in locally advanced rectal Cancer. Clin Cancer Res. 2017;23:7253–62. Meng Y, Zhang Y, Dong D, et al. Novel radiomic signature as a prognostic biomarker for locally advanced rectal cancer. J Magn Reson Imaging. 2018. https://doi.org/10.1002/jmri.25968 [Epub ahead of print]. Sun Y, Hu P, Wang J, et al. Radiomic features of pretreatment MRI could identify T stage in patients with rectal cancer: preliminary findings. J Magn Reson Imaging. 2018. https://doi.org/10.1002/jmri.25969 [Epub ahead of print]. Cusumano D, Dinapoli N, Boldrini L, et al. Fractal-based radiomic approach to predict complete pathological response after chemo-radiotherapy in rectal cancer. Radiol Med. 2018;123:286–95. Dinapoli N, Barbaro B, Gatta R, et al. Magnetic resonance, vendor-independent, intensity histogram analysis predicting pathologic complete response after Radiochemotherapy of rectal Cancer. Int J Radiat Oncol Biol Phys. 2018;102:765–74. Giannini V, Mazzetti S, Bertotto I, et al. 
Predicting locally advanced rectal cancer response to neoadjuvant therapy with 18F-FDG PET and MRI radiomics features [published online ahead of print January 13, 2019]. Eur J Nucl Med Mol Imaging. 2019. https://doi.org/10.1007/s00259-018-4250-6 [Epub ahead of print]. Fave X, Zhang L, Yang J, et al. Delta-radiomics features for the prediction of patient outcomes in non-small cell lung cancer. Sci Rep. 2017;7:588. Goh V, Ganeshan B, Nathan P, et al. Assessment of response to tyrosine kinase inhibitors in metastatic renal cell cancer: CT texture as a predictive biomarker. Radiology. 2011;261:165–71. Boldrini L, Cusumano D, Chiloiro G, et al. Delta radiomics for rectal cancer response prediction with hybrid 0.35 T magnetic resonance-guided radiotherapy (MRgRT): a hypothesis-generating study for an innovative personalized medicine approach. Radiol Med. 2019;124:145–53. Collewet G, Strzelecki M, Mariette F. Influence of MRI acquisition protocols and image intensity normalization methods on texture classification. Magn Reson Imaging. 2004;22:81–91. Johnston E, Punwani S. Can we improve the reproducibility of quantitative multiparametric prostate MR imaging metrics? Radiology. 2016;281:652–3. Bologna M, Corino VDA, Montin E, et al. Assessment of stability and discrimination capacity of Radiomic features on apparent diffusion coefficient images. J Digit Imaging. 2018. https://doi.org/10.1007/s10278-018-0092-9 [Epub ahead of print]. Maas M, Nelemans PJ, Valentini V, et al. Long-term outcome in patients with a pathological complete response after chemoradiation for rectal cancer: a pooled analysis of individual patient data. Lancet Oncol. 2010;11:835–44. Agarwal A, Chang GJ, Hu CY, et al. Quantified pathologic response assessed as residual tumor burden is a predictor of recurrence-free survival in patients with rectal cancer who undergo resection after neoadjuvant chemoradiotherapy. Cancer. 2013;119:4231–41. Bhoday J, Smith F, Siddiqui MR, et al. Magnetic resonance tumor regression grade and residual mucosal abnormality as predictors for pathological complete response in rectal Cancer Postneoadjuvant Chemoradiotherapy. Dis Colon Rectum. 2016;59:925–33. Patel UB, Brown G, Rutten H, et al. Comparison of magnetic resonance imaging and histopathological response to chemoradiotherapy in locally advanced rectal cancer. Ann Surg Oncol. 2012;19:2842–52. Song C, Chung JH, Kang SB, et al. Impact of tumor regression grade as a major prognostic factor in locally advanced rectal cancer after neoadjuvant chemoradiotherapy: a proposal for a modified staging system. Cancers (Basel). 2018;10:e319. Maas M, Nelemans PJ, Valentini V, et al. Adjuvant chemotherapy in rectal cancer: defining subgroups who may benefit after neoadjuvant chemoradiation and resection: a pooled analysis of 3,313 patients. Int J Cancer. 2015;137:212–20. van Erning FN, Rutten HJ, van den Berg HA, et al. Effect of adjuvant chemotherapy on recurrence-free survival varies by neo-adjuvant treatment in patients with stage III rectal cancer. Eur J Surg Oncol. 2015;41:1630–5. Chee CG, Kim YH, Lee KH, et al. CT texture analysis in patients with locally advanced rectal cancer treated with neoadjuvant chemoradiotherapy: a potential imaging biomarker for treatment response and prognosis. PLoS One. 2017;12:e0182883. Jalil O, Afaq A, Ganeshan B, et al. Magnetic resonance based texture parameters as potential imaging biomarkers for predicting long-term survival in locally advanced rectal cancer treated by chemoradiotherapy. Color Dis. 2017;19:349–62. 
De Cecco CN, Ciolina M, Caruso D, et al. Performance of diffusion-weighted imaging, perfusion imaging, and texture analysis in predicting tumoral response to neoadjuvant chemoradiotherapy in rectal cancer patients studied with 3T MR: initial experience. Abdom Radiol (NY). 2016;41:1728–35. Liu L, Liu Y, Xu L, et al. Application of texture analysis based on apparent diffusion coefficient maps in discriminating different stages of rectal cancer. J Magn Reson Imaging. 2017;45:1798–808. Birlik B, Obuz F, Elibol FD, et al. Diffusion-weighted MRI and MR-volumetry in the evaluation of tumor response after preoperative chemoradiotherapy in patients with locally advanced rectal cancer. Magn Reson Imaging. 2015;33:201–12. Sun Y, Tong T, Cai S, et al. Apparent diffusion coefficient (ADC) value: a potential imaging biomarker that reflects the biological features of rectal cancer. PLoS One. 2014;9:e109371. Lambregts DM, Rao SX, Sassen S, et al. MRI and diffusion-weighted MRI volumetry for identification of complete tumor responders after preoperative chemoradiotherapy in patients with rectal cancer: a bi-institutional validation study. Ann Surg. 2015;262:1034–9. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant number: NRF-2017R1D1A1B03033892). The datasets generated and analyzed during the current study are not publicly available due to the Personal Information Protection Act but are available from the corresponding author on reasonable request. Seung Hyuck Jeon and Changhoon Song contributed equally to this work. Department of Radiation Oncology, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea (Seung Hyuck Jeon, Eui Kyu Chie & Joo-Hyun Chung). Department of Radiation Oncology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82 Gumi-ro 173beon-gil, Bundang-gu, Seongnam, 13620, Republic of Korea (Changhoon Song, Jin Beom Chung & Jae-Sung Kim). Division of Biomedical Engineering, Hankuk University of Foreign Studies, 81 Oedae-ro, Mohyeon-eup, Cheoin-gu, Yongin, 17035, Republic of Korea (Bohyoung Kim). Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82 Gumi-ro 173beon-gil, Bundang-gu, Seongnam, 13620, Republic of Korea (Young Hoon Kim, Won Chang & Yoon Jin Lee). Department of Internal Medicine, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82 Gumi-ro 173beon-gil, Bundang-gu, Seongnam, 13620, Republic of Korea (Keun-Wook Lee). Department of Surgery, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82 Gumi-ro 173beon-gil, Bundang-gu, Seongnam, 13620, Republic of Korea (Sung-Bum Kang). Conception and design of the study: JSK and CS. Acquisition of data: SHJ and CS. Analysis and interpretation of the data: SHJ, CS and JSK. All authors participated in clinical data acquisition. Writing and revision of the manuscript: SHJ, CS, and JSK. All authors read and approved the final manuscript. Correspondence to Jae-Sung Kim.
This retrospective patient study was approved by the Institutional Review Board of the Seoul National University Bundang Hospital, and the requirement for written informed consent was waived. This study complies with the standards of the Declaration of Helsinki and current ethics guidelines. Additional file 1: Appendix A. Extracted radiomics features. (DOCX 37 kb)
Y. Cheng, S. Friedman, and J. D. Hamkins, "Large cardinals need not be large in HOD," Annals of Pure and Applied Logic, vol. 166, iss. 11, pp. 1186-1198, 2015. Abstract. We prove that large cardinals need not generally exhibit their large cardinal nature in HOD. For example, a supercompact cardinal $\kappa$ need not be weakly compact in HOD, and there can be a proper class of supercompact cardinals in $V$, none of them weakly compact in HOD, with no supercompact cardinals in HOD. Similar results hold for many other types of large cardinals, such as measurable and strong cardinals. 1. To what extent must a large cardinal in $V$ exhibit its large cardinal properties in HOD? 2. To what extent does the existence of large cardinals in $V$ imply the existence of large cardinals in HOD? For large cardinal concepts beyond the weakest notions, we prove, the answers are generally negative. In Theorem 4, for example, we construct a model with a supercompact cardinal that is not weakly compact in HOD, and Theorem 9 extends this to a proper class of supercompact cardinals, none of which is weakly compact in HOD, thereby providing some strongly negative instances of (1). The same model has a proper class of supercompact cardinals, but no supercompact cardinals in HOD, providing a negative instance of (2). The natural common strengthening of these situations would be a model with a proper class of supercompact cardinals, but no weakly compact cardinals in HOD. We were not able to arrange that situation, however, and furthermore it would be ruled out by Conjecture 13, an intriguing positive instance of (2) recently proposed by W. Hugh Woodin, namely, that if there is a supercompact cardinal, then there is a measurable cardinal in HOD. Many other natural possibilities, such as a proper class of measurable cardinals with no weakly compact cardinals in HOD, remain as open questions. This entry was posted in Publications and tagged definability, forcing, HOD, homogeneous forcing, indestructibility, large cardinals, measurable, supercompact, weakly compact by Joel David Hamkins. V [G], if there are no measurable cardinals in V ? final extension. Now find some intermediate model, by noting for inaccessible $\alpha$, $Add(\alpha, 1)$ can be written as two step iteration, first adding an $\alpha$-Souslin tree, and then forcing with that tree. The intermediate model is an iteration with the Souslin parts. But this idea does not work for trivial reasons. But what if instead of iteration, we have a kind of product. Then we would be able to do the job. Now, there is a paper by Mack Stanley "Notes on a theorem of Silver" (see http://www.math.sjsu.edu/~stanley/gch.pdf) which gives a proof of Silver's theorem by a kind of forcing which is not an iteration and can be considered as product. I don't see if his forcing adds Cohen subsets to weakly compacts below the supercompact. But maybe a modification of his method can be used to answer the above question and similar ones.
Observer-based event-triggered finite-time consensus for general linear leader-follower multi-agent systems. Yiping Luo (ORCID: orcid.org/0000-0002-2963-7270) & Jialong Pang. In this study, the event-triggered finite-time consensus problem of a class of general linear leader-follower multi-agent systems with unmeasurable states is investigated. First, an observer-based distributed event-triggered strategy is proposed by introducing an external dynamic threshold that is independent of the state variables. Second, the Lyapunov method and the proposed event-triggered strategy are combined in the control scheme to ensure that the tracking error converges to the origin within a finite time under the given conditions. Analytical findings indicate that the Zeno behavior can be avoided by selecting appropriate parameters. Finally, a numerical simulation is implemented, and the results verify the effectiveness of the proposed method. In recent decades, the cooperative control of multi-agent systems (MAS) has attracted great attention because of its wide application in smart grids [1], sensor networks [2], unmanned aerial vehicle (UAV) formation [3], and multi-robot systems [4]. MAS have the advantage of solving complex problems that cannot be completed by a single agent, and they offer higher flexibility and stronger adaptability [5]. As an important research branch of MAS cooperation, the consensus problem has received extensive attention [6–8]. Consensus design involves constructing a reasonable and effective control protocol based on local information exchange between agents, so that all agents in the system can be driven to a common final state. According to whether a leader is present in the system, consensus algorithms are divided into leaderless consensus algorithms and leader-following consensus algorithms. Olfati-Saber et al. [9] and Wen et al. [10] studied the problem of consensus without leaders, where the consensus state of the system is determined by the information exchange between agents and their initial states. In [11] and [12], the problem of leader-following consensus was also studied. Interestingly, the abovementioned studies [9–12] have a common feature: each agent needs to communicate continuously with its neighbors to reach a consensus. However, in practical applications, continuous communication between agents is not always necessary, and it increases the communication burden and energy consumption, especially when the resources of a single agent are limited. These limitations can be alleviated, and communication network resources can be used more effectively, by introducing an event-triggered strategy into the multi-agent consensus problem. In [13], the real-time performance of event-triggered systems was found to be better than that of time-triggered systems. In [14], the average consensus problem of MAS with single-integrator dynamics was considered under event-trigger conditions. In [15], a decentralized event-triggered consensus algorithm was considered for MAS with single-integrator and double-integrator dynamics. In [16], a distributed adaptive event-triggered protocol was designed on the basis of locally sampled state or output information, and the leaderless and leader-following consensus problems were considered simultaneously. Zhang et al. [17] considered the event-triggered tracking control problem of nonlinear MAS with unknown disturbances and proposed a new adaptive event-triggered control method for this system.
In [18], a distributed event-triggered consensus controller for each agent was proposed to achieve consensus without the need for continuous communication between agents. The abovementioned studies entailed in-depth research of event-trigger conditions to effectively reduce the communication burden and energy consumption of systems. However, the key performance of the control system, namely, convergence performance, should also be considered. The finite-time convergence of closed-loop systems is the time-optimal control method from the perspective of time optimization of control systems. However, the event-triggered consensus in the literature represents an asymptotic consensus. As the fractional power term is present in the finite-time controller, the finite-time consensus has better robustness and anti-interference than the asymptotic consensus [19–22]. In [23], the finite-time consensus problem of leaderless and leader-follower MAS was investigated, and two new nonlinear consensus protocols were proposed to substantially reduce communication cost and controller update frequency. In [24], the tracking control problem of MAS with bounded disturbances was studied with respect to the sliding mode controller. In [25], the finite-time consensus of second-order MAS with internal nonlinear dynamics and external bounded disturbances was explored. However, the abovementioned results mainly concentrated on integrator-type dynamics, and they could not deal with general linear dynamics. In the research about general linear dynamics in [26], the problem of finite-time consensus with distributed event-trigger conditions was analyzed, and a dynamic threshold that could converge to zero within a finite time in the trigger function was designed. Cao et al. [27] proposed a distributed event-triggered control strategy to achieve the finite-time consensus of general linear MAS. In the abovementioned studies, a common assumption is that the state of the system is measurable. However, in many practical engineering applications, not all system state variables can be directly detected. In such cases, the control based on the state feedback cannot be used, and the one based on the output feedback should be utilized [28]. An unstable operating system with an unknown state is generally reconstructed using an observer, and then this reconstructed state is used to replace the real state of the system as a means of achieving the required state feedback. In [29–31], the event-triggered control of the first-order MAS based on the output feedback is studied. In [32], two observer-based control protocols (centralized and distributed protocols) were considered in relation to the relative output information. In [33], a fully distributed event-triggered control strategy was proposed for general linear MAS, and the schemes for leaderless and leader-follower consensus were simultaneously considered. In [34], an observer-based consensus tracking control based on the distributed velocity estimation method was designed for second-order leader-follower MAS. In [35], for general linear MAS with different topologies and unknown states, observer-based distributed controllers are proposed for different situations, and finite-time coordinated tracking is achieved. To the best of our knowledge, the studies on the event-triggered finite-time consensus control of MAS with unmeasured states are few, hence the motivation of the present research. 
On the basis of the above discussion and existing results, this study investigates the observer-based control of event-triggered finite-time consensus for general linear leader-follower MAS. Some of the contributions of this research are as follows. First, construct a universal observer based on the output feedback information of the system to estimate the real state of the system and use the estimated state information of the observer to define the measurable error. Second, on the basis of the measurable error, a dynamic threshold that converges to zero within a finite time is introduced in the trigger function, and a new model-based event-triggered controller is proposed. Finally, we prove that the finite-time leader-follower consensus can be achieved under the observer-based and event-triggered scheme, which ensures the finite-time convergence of the system under consideration and substantially reduces the controller update. In addition, the Zeno behavior is analyzed comprehensively, and the appropriate parameters can be selected to avoid the Zeno behavior. The rest of this paper is organized as follows. Section 2 briefly discusses the preparations and problem description. The main results are given in Sect. 3. An example is given in Sect. 4. Section 5 concludes the study. In this study, R, \({R^{n},}\) and \({R^{n \times m}}\) denote the set of real numbers, an n-dimensional real vector, and the set of \(n \times m\) real matrices, respectively. \({I_{n}}\) denote n-dimensional identity matrix. Given a matrix \(x = [{x_{1}},{x_{2}}, \ldots ,{x_{n}}]\) and \(\alpha > 0\), Define \(\operatorname{sig}{ ( x )^{\alpha}} = [\operatorname{sig} ( {x_{1}^{\alpha}} ), \ldots , \operatorname{sig} ( {x_{n}^{\alpha}} )]^{T}\), \(\operatorname{sig} ( {x_{n}^{\alpha}} ) = \operatorname{sign} ( {{x_{n}}} ){ \vert {{x_{n}}} \vert ^{\alpha}}\), where \(\operatorname{sign}( \cdot )\) means signum function. \({\lambda _{\max }} ( x )\) and \({\lambda _{\min }} ( x )\) indicate the maximum and minimum eigenvalues of x, respectively. \(\Vert x \Vert \) represents the Euclidean norm of vector x or the induced matrix 2-norm. ⊗ is the Kronecker product. Preliminaries and problem formulation Graph theory and lemmas The topology of MAS is usually described using a directed graph or an undirected graph. In this study, the leader-follower MAS consists of a leader (marked as 0) and N followers. The weighted undirected topological graphs between N agents are described by graph \(\mathcal{G} = ( \mathcal{V},\mathcal{E},\mathcal{A} )\), where \(\mathcal{V} = \{ {1,2,\ldots,N} \}\) represents the set of nodes, \(\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}\) denotes the set of edges, and \(\mathcal{A} = { [ {{a_{ij}}} ]_{N \times N}}\) means the related weighted adjacency matrix, where \({a_{ii}} = 0\). If \(( {j,i} ) \in \mathcal{E}\), then \({a_{ij}} > 0\); otherwise \({a_{ij}} = 0\). \({N_{i}}\) represents the set of neighbors of the follower i (\({i = 1,\ldots,N} \)). \(D = \operatorname{diag} \{ {{d_{1}},\ldots,{d_{N}}} \} \in {R^{N \times N}}\) represents the connection matrix between the follower and the leader. If the follower i can obtain information from the leader, then \({d_{i}} > 0\); otherwise, \({d_{i}} = 0\). The Laplacian matrix \(L = [ {{l_{ij}}} ] \in {R^{N \times N}}\) of \(\mathcal{G}\) related to the adjacency matrix \(\mathcal{A}\) is defined as \({l_{ij}} = - {a_{ij}}\), \(i \ne j\) and \({l_{ii}} = \sum_{j = 1,j \ne i}^{N} {{a_{ij}}}\). 
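As a quick numerical companion to the notation and graph quantities just defined, the short Python sketch below implements the sig(·)^α map and builds the adjacency, leader-connection, and Laplacian matrices for a hypothetical four-follower path graph; it also forms L + D, anticipating the matrix H introduced next. None of these numbers come from the paper's simulation example.

```python
import numpy as np

def sig(x, alpha):
    """Entrywise sig(x)^alpha = sign(x) * |x|^alpha, as defined in the notation above."""
    return np.sign(x) * np.abs(x) ** alpha

# Hypothetical weighted undirected follower graph on N = 4 agents (a path 1-2-3-4);
# only follower 1 is assumed to receive the leader's information (d_1 > 0).
Adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
D = np.diag([1., 0., 0., 0.])               # leader-connection matrix
L = np.diag(Adj.sum(axis=1)) - Adj          # l_ii = sum_j a_ij, l_ij = -a_ij (i != j)
H = L + D                                   # combined matrix used in the sequel

print(sig(np.array([-4.0, 0.25]), 0.5))     # sig([-4, 0.25], 0.5) = [-2, 0.5]
print("Laplacian row sums:", L.sum(axis=1)) # zero rows, as required of a Laplacian
print("eigenvalues of L + D:", np.round(np.linalg.eigvalsh(H), 3))
```

Because this follower graph is connected and at least one d_i is positive, the printed eigenvalues of L + D are all strictly positive.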
If \({a_{ij}} = {a_{ji}}\), \(\forall i, j = 1,\ldots,N\), then the graph \(\mathcal{G}\) is undirected; otherwise, it is directed. \(H = L + D\) is defined to describe the topological information of the leader-follower MAS. The path from \({v_{i}}\) to \({v_{j}}\) in the graph is a sequence of adjacent nodes starting from \({v_{i}}\) and ending in \({v_{j}}\). If a path is present between any two agents, then the graph is connected. The matrix H has positive eigenvalues \({\lambda _{1}},{\lambda _{2}},\ldots,\lambda _{N}\), and the H is positive definite if and only if the undirected graph \(\mathcal{G}\) is connected. For any \({x_{i}} \in R\), \(i = 1,\ldots,N\) and \(0 < p \le 1\), then $$ { \Biggl( {\sum_{i = 1}^{N} { \vert {{x_{i}}} \vert } } \Biggr)^{p}} \le \sum _{i = 1}^{N} {{{ \vert {{x_{i}}} \vert }^{p}}} \le {N^{1 - p}} { \Biggl( {\sum _{i = 1}^{N} { \vert {{x_{i}}} \vert } } \Biggr)^{p}}.$$ Consider the system \(\dot{x} = f ( x )\), \(x \in {R^{n}}\) and \(f ( 0 ) = 0\). Suppose that there exists a positive definite continuously differentiable function V: \({R^{n}} \to R\) and the real number \(c > 0\), \(0 < \alpha < 1\), making the following inequality $$ \dot{V} \bigl( {x ( t )} \bigr) \le - c{V^{\alpha}} \bigl( {x ( t )} \bigr)$$ holds, then the origin of the system is global finite-time stable. The settling time T is bounded as $$ T \le \frac{{{V^{1 - \alpha }} ( {x ( 0 )} )}}{{c ( {1 - \alpha } )}}.$$ For the Laplacian matrix L, we have $$ {x^{T}} ( t )Lx ( t ) = \frac{1}{2}\sum _{i = 1}^{N} {\sum_{j = 1}^{N} {{a_{ij}}} } { \bigl( {{x_{i}} ( t ) - {x_{j}} ( t )} \bigr)^{T}} \bigl( {{x_{i}} ( t ) - {x_{j}} ( t )} \bigr),$$ where L is positive semidefinite. In addition, when \({1^{T}}x ( t ) = 0\), \({\lambda _{2}} ( L ){x^{T}} ( t )x ( t ) \le {x^{T}} ( t )Lx ( t ) \le { \lambda _{\max }} ( L ){x^{T}} ( t )x ( t )\). Young's inequality: Let \(a,b > 0\) and \(p,q > 1\) be real numbers with \(\frac{1}{p} + \frac{1}{q} = 1\), then inequality \(ab \le \frac{1}{p}{a^{p}} + \frac{1}{q}{b^{q}}\) holds. Consider a MAS with N followers and a leader. The dynamic equation of follower i is $$\begin{aligned} \textstyle\begin{cases} {{\dot{x}}_{i}} ( t ) = A{x_{i}} ( t ) + B{u_{i}} ( t ), \\ {y_{i}} ( t ) = C{x_{i}} ( t ), \end{cases}\displaystyle \quad i = 1,2,\ldots,N \end{aligned}$$ where \({x_{i}} ( t ) \in {R^{n}}\), \({u_{i}} ( t ) \in {R^{m}}\), and \({y_{i}} ( t ) \in {R^{p}}\) represent the state, control input, and output of follower i, respectively. A, B, and C denote the constant matrix with appropriate dimensions. For system (1), the following assumptions are introduced: The matrix pairs \((A, B)\) and \((C, A)\) are stabilizable and detectable, respectively. Graph \(\mathcal{G}\) is composed of N followers and a leader, and it contains a directed spanning tree, and leader 0 is its root node. The dynamics of leader 0 is given by $$\begin{aligned} \textstyle\begin{cases} {{\dot{x}}_{0}} ( t ) = A{x_{0}} ( t ), \\ {y_{0}} ( t ) = C{x_{0}} ( t ), \end{cases}\displaystyle \end{aligned}$$ where \({x_{0}} ( t ) \in {R^{n}}\) and \({y_{0}} ( t ) \in {R^{p}}\) denote the state and output of the leader, respectively. Consider a general linear leader-follower MAS with a fixed undirected graph \(\mathcal{G}\). For any initial conditions, if the system state satisfies \(\mathop {\lim } _{t \to \infty } \Vert {{x_{i}} ( t ) - {x_{0}} ( t )} \Vert = 0\), \(i = 1,2, \ldots ,N\), then the system has achieved a consensus. 
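Before moving on to the observer and controller design, the settling-time estimate in the finite-time stability lemma stated above can be checked numerically on the scalar comparison system \(\dot{V} = - c{V^{\alpha}}\). The constants in the sketch below are arbitrary and serve only to illustrate the bound.

```python
import numpy as np

# Numerical check of the settling-time bound from the finite-time stability lemma,
# applied to the scalar comparison system dV/dt = -c * V**alpha (hypothetical constants).
c, alpha, V0, dt = 2.0, 0.5, 4.0, 1e-4

V, t = V0, 0.0
while V > 0.0:
    V = max(V - dt * c * V ** alpha, 0.0)   # forward-Euler step, clipped at zero
    t += dt

T_bound = V0 ** (1.0 - alpha) / (c * (1.0 - alpha))
print(f"numerical settling time: {t:.3f}  vs  bound V(0)^(1-alpha)/(c(1-alpha)) = {T_bound:.3f}")
```

For this scalar system the differential inequality holds with equality, so the numerically observed settling time matches the bound \({V^{1 - \alpha }} ( {x ( 0 )} )/({c ( {1 - \alpha } )})\) up to the integration step size.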
The local degree matrix is defined as \(D = \operatorname{diag}\{ {d_{i}}\} \in {R^{N \times N}}\), \(i = 1,2, \ldots ,N\) to represent the communication mode between the follower and the leader. If the follower can receive the information from the leader, then \({d_{i}} = 1\); otherwise, \({d_{i}} = 0\). Given the difficulty of obtaining or detecting the complete state in many systems, the following observers are considered: $$\begin{aligned} \textstyle\begin{cases} {{\dot {\hat{x}}_{i}}} ( t ) = A{{\hat{x}}_{i}} ( t ) + B{u_{i}} ( t ) + G ( {{{\hat{y}}_{i}} ( t ) - {y_{i}} ( t )} ), \\ {{\hat{y}}_{i}} ( t ) = C{{\hat{x}}_{i}} ( t ), \end{cases}\displaystyle \quad i = 1,2,\ldots,N, \end{aligned}$$ where \({\hat{x}_{i}} ( t ) \in {R^{n}}\) and G represent the state and gain matrix of the observer, respectively, and \({\hat{y}_{i}} ( t ) \in {R^{p}}\) is the output information of the observer-based system. On the basis of the above observer (3), the event-triggered and leader-following consensus protocols are designed as follows: $$\begin{aligned} {u_{i}} ( t ) = {}& - K \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{ \hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr) \\ &{}- K\operatorname{sig} { \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr)^{\alpha}}, \end{aligned}$$ where \(K = {B^{T}}P \in {R^{m \times n}}\) is the control gain matrix, P is a positive definite matrix, \(0 < \alpha \le 0.5\), \({a_{ij}}\) represents the ijth item of the adjacency matrix \(\mathcal{A}\), \(t_{k}^{i}\) is the latest trigger moment of agent i, and \(\hat{x} ( {t_{k}^{i}} )\) represents the latest broadcast state of agent i. Controller (4) is designed in two parts. The first part aims to reduce the state error to near zero, and the second part ensures that the state error converges to zero within a finite time. The state tracking error and observation error are defined as follows: $$\begin{aligned} {\tilde{x}_{i}} ( t ) = {\hat{x}_{i}} ( t ) - {x_{0}} ( t ), \end{aligned}$$ $$\begin{aligned} {h_{i}} ( t ) = {\hat{x}_{i}} ( t ) - {x_{i}} ( t ). \end{aligned}$$ Therefore, the following result is obtained: $$\begin{aligned} {{\dot {\tilde{x}}_{i}}} ( t ) ={}& A{\tilde{x}_{i}} ( t ) - B{B^{T}}P \biggl( {\sum _{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{ \hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr) \\ & {}- B{B^{T}}Psig{ \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {{\hat{x}}_{j}} \bigl( {t_{k'}^{j}} \bigr)} \bigr) + {d_{i}} \bigl( {{{\hat{x}}_{i}} \bigl( {t_{k}^{i}} \bigr) - {x_{0}} \bigl( {t_{k}^{i}} \bigr)} \bigr)} } \biggr)^{\alpha}} \\ &{}+ GC \bigl( {{{\hat{x}}_{i}} ( t ) - {x_{i}} ( t )} \bigr). 
\end{aligned}$$ The measurement error of the current state and the trigger state is defined as follows: $$\begin{aligned} {e_{i}} ( t ) = {\tilde{x}_{i}} \bigl( {t_{k}^{i}} \bigr) - {\tilde{x}_{i}} ( t ). \end{aligned}$$ Substituting (8) into (7) obtains $$\begin{aligned} {{\dot {\tilde{x}}_{i}}} ( t ) ={}& A{{\tilde{x}}_{i}} ( t ) - B{B^{T}}P \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( { \bigl( {{{\tilde{x}}_{i}} ( t ) - {{\tilde{x}}_{j}} ( t )} \bigr) + \bigl( {{e_{i}} ( t ) - {e_{j}} ( t )} \bigr)} \bigr)} + {d_{i}} \bigl( {{{\tilde{x}}_{i}} ( t ) + {e_{i}} ( t )} \bigr)} \biggr) \\ &{} - B{B^{T}}P\operatorname{sig} { \biggl( {\sum_{j \in {N_{i}}} {{a_{ij}} \bigl( { \bigl( {{{\tilde{x}}_{i}} ( t ) - {{\tilde{x}}_{j}} ( t )} \bigr) + \bigl( {{e_{i}} ( t ) - {e_{j}} ( t )} \bigr)} \bigr)} + {d_{i}} \bigl( {{{\tilde{x}}_{i}} ( t ) + {e_{i}} ( t )} \bigr)} \biggr)^{\alpha}} + GC{h_{i}} ( t ).
If there exists a positive definite matrix P and an appropriate positive scalar μ and β, such that the following Riccati inequality $$\begin{aligned} PA + {A^{T}}P - 2\mu PB{B^{T}}P + \beta {I_{n}} < 0 \end{aligned}$$ holds, and the trigger function is given by (11), then the finite-time leader-following consensus can be achieved for all initial conditions. With the Kronecker product, (9) can be written in compact form as follows: $$\begin{aligned} \dot {\tilde{x}} ( t ) ={}& \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} ( t ) - \bigl( {H \otimes B{B^{T}}P} \bigr)e ( t ) \\ &{}- \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) \bigl( {\tilde{x} ( t ) + e ( t )} \bigr)} \bigr)^{\alpha}} \\ &{} + ( {{I_{N}} \otimes GC} )h ( t ), \end{aligned}$$ where \(\tilde{x} ( t ) = { [ {\tilde{x}_{1}^{T} ( t ),\ldots,\tilde{x}_{N}^{T} ( t )} ]^{T}}\), \(e ( t ) = { [ {e_{1}^{T} ( t ),\ldots,e_{N}^{T} ( t )} ]^{T}}\), \(h ( t ) = { [ {h_{1}^{T} ( t ),\ldots, h_{N}^{T} ( t )} ]^{T}}\). When Assumption 2 is satisfied, matrix H has N eigenvalues, and the real part of each eigenvalue is positive. According to the observation error, \({\dot{h}_{i}} ( t ) = ( {A + GC} ){h_{i}} ( t )\). Thus, $$\begin{aligned} \dot{h} ( t ) = \bigl( {{I_{N}} \otimes ( {A + GC} )} \bigr)h ( t ). \end{aligned}$$ If the observer feedback matrix G is designed such that \(A + GC\) is a Hurwitz matrix, then \({h_{i}} ( t )\) will asymptotically approach zero. According to (13) and (14), the estimation error \(h ( t )\) is decoupled from the dynamics \(\tilde{x} ( t )\), and the stability of (13) is equivalent to the stability of the following system: $$\begin{aligned} \dot {\tilde{x}} ( t ) ={}& \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} ( t ) - \bigl( {H \otimes B{B^{T}}P} \bigr)e ( t ) \\ &{}- \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) \bigl( {\tilde{x} ( t ) + e ( t )} \bigr)} \bigr)^{\alpha}}. \end{aligned}$$ For system (15), the Lyapunov function is constructed as follows: $$\begin{aligned} V = {\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \tilde{x} + \sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}} { \vert {{ \vartheta _{i}}} \vert ^{1 + \gamma }}. \end{aligned}$$ Let \({V_{1}} = {\tilde{x}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}\), \({V_{2}} = \sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}} { \vert {{\vartheta _{i}}} \vert ^{1 + \gamma }}\). Take the derivative of \({V_{1}}\) along the trajectory of system (15) $$\begin{aligned} {\dot{V}_{1}} ={}& 2{\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \dot {\tilde{x}} \\ ={}& 2{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} ) \bigl[ { \bigl( {{I_{N}} \otimes A - H \otimes B{B^{T}}P} \bigr)\tilde{x} - \bigl( {H \otimes B{B^{T}}P} \bigr)e} \\ &{} - \bigl( {{I_{N}} \otimes B{B^{T}}P} \bigr) \operatorname{sig} {{ \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)}^{\alpha}} \bigr]. \\ ={}& 2{{\tilde{x}}^{T}} \bigl( {{I_{N}} \otimes P - H \otimes PB{B^{T}}P} \bigr)\tilde{x} - 2{{\tilde{x}}^{T}} \bigl( {H \otimes PB{B^{T}}P} \bigr)e. \\ &{} - 2{{\tilde{x}}^{T}} \bigl( {{I_{N}} \otimes PB{B^{T}}P} \bigr)\operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)^{\alpha}}. 
\end{aligned}$$ After analysis, the first item of (17) obtains the following results: $$\begin{aligned} &2{\tilde{x}^{T}} \bigl( {{I_{N}} \otimes P - H \otimes PB{B^{T}}P} \bigr)\tilde{x} \\ &\quad = {\tilde{x}^{T}} \bigl( {{I_{N}} \otimes \bigl( {PA + {A^{T}}P} \bigr) - 2H \otimes PB{B^{T}}P} \bigr)\tilde{x} \\ &\quad \le \sum_{i = 1}^{N} {\xi _{i}^{T}} \bigl( { \bigl( {PA + {A^{T}}P} \bigr) - 2{\lambda _{1}}PB{B^{T}}P} \bigr){\xi _{i}} \\ &\quad \le \sum_{i = 1}^{N} {\xi _{i}^{T}} \bigl( { \bigl( {PA + {A^{T}}P} \bigr) - 2\mu PB{B^{T}}P} \bigr){\xi _{i}} \\ &\quad \le - \beta \sum_{i = 1}^{N} {\xi _{i}^{T}} {\xi _{i }} \\ &\quad \le - \beta { \Vert {\tilde{x}} \Vert ^{2}}. \end{aligned}$$ According to Lemma 5, the second term in (17) is bounded as follows: $$\begin{aligned} - 2{\tilde{x}^{T}} \bigl( {H \otimes PB{B^{T}}P} \bigr)e ={}& - 2{ \bigl( { \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\tilde{x}} \bigr)^{T}} \bigl( {H \otimes {B^{T}}P} \bigr)e \\ \le{}& \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}}. \end{aligned}$$ According to Lemma 5, the last item in (17) is bounded as follows: $$\begin{aligned} &{}- 2{\tilde{x}^{T}} \bigl( {{I_{N}} \otimes PB{B^{T}}P} \bigr)\operatorname{sig} { \bigl( { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr)^{\alpha}} \\ &\quad = - 2{ \bigl( { \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\tilde{x}} \bigr)^{T}} \bigl( {{I_{N}} \otimes {B^{T}}P} \bigr)\operatorname{sig} { ( q )^{\alpha}} \\ &\quad \le \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}}, \end{aligned}$$ where \({a_{1}} > 0\), \({a_{2}} > 0\), and \(q = ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )\). Substituting (18), (19), and (20) into (17) yields $$\begin{aligned} {\dot{V}_{1}} ={}& 2{\tilde{x}^{T}} ( {{I_{N}} \otimes P} ) \dot {\tilde{x}} \\ \le{}& - \beta { \Vert {\tilde{x}} \Vert ^{2}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}}. \end{aligned}$$ On the basis of equation (10), the time derivative of \({V_{2}}\) satisfies $$\begin{aligned} {\dot{V}_{2}} = \sum_{i = 1}^{N} \delta \operatorname{sig} { ( {{ \vartheta _{i}}} )^{\gamma}} { \dot{\vartheta}_{i}} = - \sum_{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. 
\end{aligned}$$ Consequently, combining (21) and (22), we obtain $$\begin{aligned} \dot{V} ={}& {\dot{V}_{1}} + {\dot{V}_{2}} \\ \le{}& - \beta { \Vert {\tilde{x}} \Vert ^{2}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ &{} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}{ \Vert {\tilde{x}} \Vert ^{2}} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} \\ & {} - \sum_{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{ \vartheta _{i}}} \vert }^{2\gamma }}} \\ ={}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} - \sum _{i = 1}^{N} {\delta {\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$ According to Lemma 2, for \({a_{2}}{\lambda _{\max }} ( {PB{B^{T}}P} ){ ( {\operatorname{sig}{{ ( q )}^{\alpha}}} )^{T}}\operatorname{sig}{ ( q )^{\alpha}}\), the following results can be attained: $$\begin{aligned} &{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ \bigl( {\operatorname{sig} {{ ( q )}^{\alpha}}} \bigr)^{T}}\operatorname{sig} { ( q )^{\alpha}} \\ &\quad = {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr) \bigl\Vert { ( {H \otimes {I_{N}}} ) ( {\tilde{x} + e} )} \bigr\Vert _{2\alpha }^{2\alpha } \\ &\quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} { \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x} + ( {H \otimes {I_{N}}} )e} \bigr\Vert ^{2\alpha }} \\ & \quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} { \bigl( {2{{ \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert }^{2}} + 2{{ \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert }^{2}}} \bigr)^{\alpha}} \\ &\quad \le {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} \bigl( {{{ \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x}} \bigr\Vert }^{2\alpha }} + {{ \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert }^{2\alpha }}} \bigr) \\ &\quad = \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} ) \tilde{x}} \bigr\Vert ^{2\alpha }} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} \\ & \qquad {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )e} \bigr\Vert ^{2\alpha }} \\ &\quad \le \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2 \alpha }} { \Vert {\tilde{x}} \Vert ^{2\alpha }} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} \\ & \qquad {} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2\alpha }} { \Vert e \Vert ^{2 \alpha }}. 
\end{aligned}$$ Substituting (24) into (23) obtains the following: $$\begin{aligned} \dot{V}\le{}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} + {a_{1}} {\lambda _{\max }} \bigl( {{H^{T}}H \otimes PB{B^{T}}P} \bigr){ \Vert e \Vert ^{2}} \\ & {} + \bigl( {{a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){{ ( {{N_{n}}} )}^{1 - \alpha }} {2^{\alpha}}+ 1} \bigr){ \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2 \alpha }} { \Vert {\tilde{x}} \Vert ^{2\alpha }} \\ &{} + {a_{2}} {\lambda _{\max }} \bigl( {PB{B^{T}}P} \bigr){ ( {{N_{n}}} )^{1 - \alpha }} {2^{\alpha}} { \bigl\Vert { ( {H \otimes {I_{N}}} )} \bigr\Vert ^{2\alpha }} { \Vert e \Vert ^{2\alpha }} \\ & {} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - \sum_{i = 1}^{N} {\delta { \varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$ Using the triggering functions (11) and (25), V̇ can be rewritten as $$\begin{aligned} \dot{V} \le{}& \biggl( { - \beta + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}} \biggr){ \Vert {\tilde{x}} \Vert ^{2}} \\ & {} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - ( {1 - \rho } )\delta \sum_{i = 1}^{N} {{\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$ Let \(\beta > \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{1}}}} + \frac{{{\lambda _{\max }} ( {PB{B^{T}}P} )}}{{{a_{2}}}}\). Then, $$\begin{aligned} \dot{V} \le - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2\alpha }} - ( {1 - \rho } )\delta \sum _{i = 1}^{N} {{\varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}}. \end{aligned}$$ After analysis, the first part of (27) obtains the following results: $$\begin{aligned} - { \bigl\Vert { ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{2 \alpha }} = {}& - { \bigl\Vert {{{\tilde{x}}^{T}} \bigl( {{H^{T}} \otimes {I_{N}}} \bigr) ( {H \otimes {I_{N}}} )\tilde{x}} \bigr\Vert ^{\alpha}} \\ = {}& - { \bigl\Vert {{{\tilde{x}}^{T}} \bigl( {{H^{T}}H \otimes {I_{N}}} \bigr)\tilde{x}} \bigr\Vert ^{\alpha}} \\ = {}& - { \biggl( { \frac{{{{\tilde{x}}^{T}} ( {{H^{T}}H \otimes {I_{N}}} )\tilde{x}}}{{{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}}}{{ \tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}} \biggr)^{\alpha}} \\ ={}& - { \biggl( { \frac{{{{\tilde{x}}^{T}} ( {{H^{T}}H \otimes {I_{N}}} )\tilde{x}}}{{{{\tilde{x}}^{T}} ( {{I_{N}} \otimes P} )\tilde{x}}}} \biggr)^{\alpha}}V_{1}^{\alpha} \\ \le{} & - { \biggl( { \frac{{{\lambda _{\min }} ( {{H^{T}}H} )}}{{{\lambda _{\max }} ( P )}}} \biggr)^{\alpha}}V_{1}^{\alpha} \\ = {}& - {c_{1}}V_{1}^{\alpha}, \end{aligned}$$ where \({c_{1}} = { ( { \frac{{{\lambda _{\min }} ( {{H^{T}}H} )}}{{{\lambda _{\max }} ( P )}}} )^{\alpha}} > 0\). 
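The constant \(c_{1}\) comes from a Rayleigh-quotient estimate, and the inequality behind it can be sanity-checked numerically. The following Python sketch is purely illustrative and not part of the paper: H, P and the states are randomly generated placeholders, and the identity factors in the Kronecker products are sized so that the products are conformable.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, alpha = 5, 2, 0.5

# Placeholder matrices: any full-rank H and any symmetric positive definite P
# will do for checking the bound.
H = rng.normal(size=(N, N)) + 3 * np.eye(N)
Q = rng.normal(size=(n, n))
P = Q @ Q.T + np.eye(n)

lam_min_HTH = np.linalg.eigvalsh(H.T @ H).min()
lam_max_P = np.linalg.eigvalsh(P).max()
c1 = (lam_min_HTH / lam_max_P) ** alpha

A1 = np.kron(H.T @ H, np.eye(n))   # quadratic form in the numerator
A2 = np.kron(np.eye(N), P)         # quadratic form in the denominator (V1)
ratios = []
for _ in range(10_000):
    x = rng.normal(size=N * n)
    ratios.append((x @ A1 @ x) / (x @ A2 @ x))

# Every sampled ratio should be at least lam_min(H^T H)/lam_max(P), so raising
# it to the power alpha reproduces the bound with constant c1.
print(min(ratios) >= lam_min_HTH / lam_max_P, round(c1, 4))
```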
Meanwhile, the second part of (27) indicates that the term can be bounded as follows: $$\begin{aligned} - ( {1 - \rho } )\delta \sum_{i = 1}^{N} {{ \varepsilon _{i}} {{ \vert {{\vartheta _{i}}} \vert }^{2\gamma }}} ={} &- ( {1 - \rho } ){\varepsilon _{\min }}\sum _{i = 1}^{N} { ( {1 + \gamma } ) \frac{\delta }{{1 + \gamma }}{{ \vert {{\vartheta _{i}}} \vert }^{ ( {1 + \gamma } )\frac{{2\gamma }}{{1 + \gamma }}}}} \\ \le {}& - ( {1 - \rho } ){\varepsilon _{\min }} \frac{{{{ ( {1 + \gamma } )}^{\frac{{2\gamma }}{{1 + \gamma }}}}}}{{{\delta ^{\frac{{\gamma - 1}}{{1 + \gamma }}}}}}{ \Biggl( {\sum_{i = 1}^{N} {\frac{\delta }{{1 + \gamma }}{{ \vert {{\vartheta _{i}}} \vert }^{ ( {1 + \gamma } )}}} } \Biggr)^{\frac{{2\gamma }}{{1 + \gamma }}}} \\ = {}& - {c_{2}}V_{2}^{\frac{{2\gamma }}{{1 + \gamma }}}, \end{aligned}$$ where \({c_{2}} = ( {1 - \rho } ){\varepsilon _{\min }} \frac{{{{ ( {1 + \gamma } )}^{\frac{{2\gamma }}{{1 + \gamma }}}}}}{{{\delta ^{\frac{{\gamma - 1}}{{1 + \gamma }}}}}} > 0\), and \({\varepsilon _{\min }}=\min \{ {{\varepsilon _{1}},\ldots,{ \varepsilon _{N}}} \}\). According to (28) and (29), (27) can further obtain the following results: $$\begin{aligned} \dot{V} \le - {c_{1}}V_{1}^{\alpha}- {c_{2}}V_{2}^{ \frac{{2\gamma }}{{1 + \gamma }}}. \end{aligned}$$ According to (30), V will converge to \(V \le 1\) within a finite time. This scenario indicates that \({V_{1}} \le 1\) and \({V_{2}} \le 1\) hold within a finite time. Furthermore, with \(\gamma \in ( {0,1} )\) in (10), one has \(\alpha < \frac{{2\alpha }}{{1 + \alpha }}\) and \(\frac{{2\gamma }}{{1 + \gamma }} < \frac{{2\alpha }}{{1 + \alpha }}\), then \(V_{1}^{\alpha}> V_{1}^{\frac{{2\alpha }}{{1 + \alpha }}}\) and \(V_{2}^{\frac{{2\gamma }}{{1 + \gamma }}} > V_{2}^{ \frac{{2\alpha }}{{1 + \alpha }}}\). Then, we have $$\begin{aligned} \dot{V} &\le - {c_{1}}V_{1}^{\frac{{2\alpha }}{{1 + \alpha }}} - {c_{2}}V_{2}^{ \frac{{2\alpha }}{{1 + \alpha }}} \\ &< - \min ( {{c_{1}},{c_{2}}} ) \bigl( {V_{1}^{ \frac{{2\alpha }}{{1+ \alpha }}}+ V_{2}^{ \frac{{2\alpha }}{{1+ \alpha }}}} \bigr) \\ &< - \min ( {{c_{1}},{c_{2}}} ){ ( {{V_{1}}+ {V_{2}}} )^{\frac{{2\alpha }}{{1+ \alpha }}}} \\ &< - c{V^{\eta}}, \end{aligned}$$ where \(c = \min ( {{c_{1}},{c_{2}}} )\) and \(\eta = \frac{{2\alpha }}{{1+ \alpha }}\). According to Lemma 3, we can derive \(V ( t ) \to 0\) within a finite time T. At this phase, the proof is completed. □ Consider the event-triggered mechanism and finite-time consensus [19, 20, 22, 25, 27] and the event-triggered mechanism and the problem of unmeasurable state [28, 29, 31, 32, 34] in the literature. The problems of unmeasurable state and convergence within finite time are also considered in [35]. However, the aforementioned studies have not considered simultaneously the problems of unmeasurable state, event-triggered mechanism, and finite-time consensus. These problems are jointly investigated in the current research. An observer-based event-triggered strategy is proposed, and the event-triggered condition (11) is distributed, and the trigger time of each agent is independent. Under the finite-time event-triggered consensus protocol, when the state-based measurement error of agent i exceeds a given threshold, an event will be triggered for it, the controller will be updated with the current state, and its current state will be broadcast to external neighbors. At the same time, the state-based agent i measurement error is reset to zero. 
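As an illustration of this trigger, update and reset cycle, the following Python fragment sketches the bookkeeping for one agent. It is not from the paper: the actual trigger function (11) is not reproduced in this excerpt, so a generic comparison of a state-based measurement error against a decaying dynamic threshold stands in for it, and all signal shapes and numbers are placeholders.

```python
import numpy as np

def run_agent_trigger(x_hat, threshold, dt=0.001, t_end=5.0):
    """Event-trigger bookkeeping for a single agent (illustrative only).

    x_hat(t)     -- callable returning the agent's observer state estimate
    threshold(t) -- callable returning the dynamic threshold at time t
                    (a stand-in for the trigger function (11))."""
    trigger_times = []
    last_broadcast = x_hat(0.0)          # state currently used by the controller
    for k in range(int(t_end / dt)):
        t = k * dt
        error = np.linalg.norm(x_hat(t) - last_broadcast)  # state-based measurement error
        if error >= threshold(t):        # event: error exceeds the threshold
            last_broadcast = x_hat(t)    # controller updated with the current state,
            trigger_times.append(t)      # which is broadcast to the neighbors; the
                                         # measurement error is thereby reset to zero
        # otherwise: no trigger and no communication until the next event
    return trigger_times

# Placeholder signals just to exercise the loop.
times = run_agent_trigger(
    x_hat=lambda t: np.array([np.exp(-t) * np.cos(3 * t), np.exp(-t) * np.sin(3 * t)]),
    threshold=lambda t: 0.05 + 0.2 * np.exp(-0.5 * t))
print(len(times), "events, first few at", [round(t, 3) for t in times[:5]])
```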
If the state-based measurement error is less than the given threshold, no event is triggered and no communication is required until the next trigger instant.

Consider the leader-follower MAS (1) and (2). If the event-trigger condition (11) is satisfied, then Zeno behavior can be avoided under the consensus control protocol (4). Assuming that the current trigger time is \(t_{k}^{i}\), the next trigger time \(t_{k + 1}^{i}\) is determined by event-trigger condition (11). Consider the time interval \(t \in [ {t_{k}^{i},t_{k + 1}^{i}} )\), and let the event interval time be \(\tau = t_{k + 1}^{i} - t_{k}^{i}\). From the previous analysis, we know that \({\tilde{x}_{i}}\) and \({e_{i}}\) are convergent, and \(\Vert {{{\tilde{x}}_{i}}} \Vert \) and \(\Vert {{e_{i}}} \Vert \) are bounded. Let the upper bounds of \(\Vert {{e_{i}}} \Vert \) and \(\Vert {{{\tilde{x}}_{i}}} \Vert \) be \({b_{1}}\tau \) and \({b_{2}}\tau \), respectively, where \({b_{1}}\) and \({b_{2}}\) are positive constants. Then, we can derive the following results: $$\begin{aligned} {\eta _{1}} { \Vert {{e_{i}}} \Vert ^{2}} + {\eta _{2}} { \Vert {{e_{i}}} \Vert ^{2\alpha }} + {\eta _{3}} { \Vert {\tilde{x}} \Vert ^{2 \alpha }} \le {\eta _{1}} { ( {{b_{1}}\tau } )^{2}} + { \eta _{2}} { ( {{b_{1}}\tau } )^{2\alpha }}+ { \eta _{3}} { ( {{b_{2}}\tau } )^{2\alpha }} \stackrel{ \Delta}{=} {\mathrm{{q}}} ( \tau ). \end{aligned}$$ The lower bound of the inter-event time can be determined from the solution \({\tau _{1}}\) of the following equation: $$\begin{aligned} \rho {\varepsilon _{i}}\delta { \bigl\vert {{\vartheta _{i}} ( t )} \bigr\vert ^{2\gamma }}={}&{\eta _{1}} { \Vert {{e_{i}}} \Vert ^{2}} + {\eta _{2}} { \Vert {{e_{i}}} \Vert ^{2\alpha }} + {\eta _{3}} { \Vert {\tilde{x}} \Vert ^{2\alpha }} \\ = {}&{\eta _{1}} { ( {{b_{1}} {\tau _{1}}} )^{2}} + {\eta _{2}} { ( {{b_{1}} {\tau _{1}}} )^{2\alpha }}+ {\eta _{3}} { ( {{b_{2}} {\tau _{1}}} )^{2\alpha }}. \end{aligned}$$ According to equation (33), if \({\vartheta _{i}} ( t ) \ne 0\), then \(\tau \ge {\tau _{1}} > 0\). Because the proposed dynamic threshold converges to 0 within a finite time, the system cannot guarantee that the lower bound of the inter-event time remains strictly greater than zero indefinitely. However, if the parameters are selected such that the system reaches consensus before \({\vartheta _{i}} ( t )\) converges to 0 (i.e., the finite-time consensus is achieved before the dynamic threshold of each agent converges to 0), then Zeno behavior will not occur. □

The results of this study provide an approach to the problem of finite-time output consensus in general linear MAS using event-triggered mechanisms. In particular, this study can help to ensure that Zeno behavior will not occur when appropriate parameters are selected. Our future research will focus on event-triggered mechanisms that exclude Zeno behavior altogether.

A numerical example is given to verify the theoretical results. Consider an undirected topology consisting of five followers and a leader, as shown in Fig. 1 (communication topology). The connection weight between agents is 1, and the Laplacian matrix of the network topology is given by $$L = \begin{bmatrix} 2 & -1 & 0 & -1 & 0 \\ -1 & 2 & -1 & 0 & 0 \\ 0 & -1 & 2 & 0 & -1 \\ -1 & 0 & 0 & 2 & -1 \\ 0 & 0 & -1 & -1 & 2 \end{bmatrix}.$$ The leader adjacency matrix is \(D = \operatorname{diag} \{ {1,1,1,0,0} \}\). The constant matrices of the system dynamics are as follows: \(A = [ {0 \ 5; - 2 \ 2} ]\), \(B = [ {1;1} ]\), \(G = [ {1;1} ]\), \(C = [ {1,0} ]\).
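Before moving to the parameter choices, here is a brief Python check of this setup; it is not from the paper. It assumes the usual leader-follower construction \(H = L + D\) (not restated in this excerpt) and simply reports the eigenvalues relevant to Assumption 2 and to the observer condition on \(A + GC\), without asserting the outcome, since the sign convention for the observer gain may differ by source.

```python
import numpy as np

# Laplacian of the follower communication graph and the leader adjacency
# matrix D from the numerical example.
L = np.array([[ 2, -1,  0, -1,  0],
              [-1,  2, -1,  0,  0],
              [ 0, -1,  2,  0, -1],
              [-1,  0,  0,  2, -1],
              [ 0,  0, -1, -1,  2]], dtype=float)
D = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])

# Assumed construction H = L + D; Assumption 2 requires every eigenvalue of H
# to have a positive real part.
H = L + D
print("eigenvalues of H:", np.round(np.linalg.eigvals(H), 4))

# System matrices of the numerical example.
A = np.array([[0.0, 5.0], [-2.0, 2.0]])
B = np.array([[1.0], [1.0]])
G = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# The observer design requires A + GC to be Hurwitz (all eigenvalues with
# negative real parts); printing them lets the reader verify the condition.
print("eigenvalues of A + GC:", np.round(np.linalg.eigvals(A + G @ C), 4))
```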
Then, we select the relevant parameters, namely, \(\alpha = 0.5\), \(\gamma = 0.45\), \({a_{1}} = 3\), \({a_{2}} = 2\), \(\delta = 60\), \(\rho = 0.8\), and \(\mu = 0.4\). Let the initial state of system (1) be \({x_{1}} ( 0 ) = { [ {2, - 1.3} ]^{T}}\), \({x_{2}} ( 0 ) = { [ {0.5, - 1.8} ]^{T}}\), \({x_{3}} ( 0 ) = { [ {1.5, - 0.8} ]^{T}}\), \({x_{4}} ( 0 ) = { [ {0.8, - 1} ]^{T}}\), and \({x_{5}} ( 0 ) = { [ {2.6, - 1.2} ]^{T}}\). The initial state of the leader is \({x_{0}} ( 0 ) = { [ {0,0} ]^{T}}\).

Figures 2 and 3 show the state tracking error of each agent reaching zero within a finite time. Figure 4 shows the event-triggered update state \({x_{ij}} ( {{t_{k}}} )\) (\(i = 1, \ldots ,5\), \(j = 1,2\)). Figures 5 and 6 show the observer error reaching zero within a finite time. Figure 7 shows the event-triggered interval of each agent under strategy (11). The error and threshold in the trigger function are shown in Fig. 8. The state tracking error \({x_{i}} - {x_{0}}\), the event-triggered update state \(x ( {{t_{k}}} )\), and the observer error \({h_{i}} ( t )\) all approach zero within a finite time, which means that the system can achieve consensus within a finite time. The results of the numerical simulation verify the feasibility and effectiveness of the control method and event-triggered strategy.

Figure captions: State tracking error of each agent (first component); State tracking error of each agent (second component); Event-triggered update state; Observer error \({h_{i1}} ( t )\); The trigger interval of each agent under the control strategy; The errors and thresholds for each agent.

The finite-time leader-following consensus of observer-based MAS is studied. As the state of some systems cannot be measured directly, an observer is used to estimate the state of the system. An observer-based distributed control protocol is proposed, in which the dynamic event-triggered mechanism depends on an external dynamic threshold to achieve consensus within a finite time. Then, the finite-time consensus is obtained using matrix theory, the Lyapunov control method, and algebraic graph theory. The analysis indicates that Zeno behavior can be avoided by selecting appropriate parameters. Finally, a numerical example is given to verify the effectiveness of the method. Although the current work only considers general linear dynamics, in future work we will consider how to extend the results to other practical nonlinear systems.

Khazaei, J., Nguyen, D.H.: Multi-agent consensus design for heterogeneous energy storage devices with droop control in smart grids. IEEE Trans. Smart Grid 10(2), 1395–1404 (2019) Ge, X.H., Han, Q.L., Zhang, X.M., et al.: Distributed event-triggered estimation over sensor networks: a survey. IEEE Trans. Cybern. 50(3), 1306–1320 (2020) Liao, F., Teo, R., Wang, J.L., et al.: Distributed formation and reconfiguration control of VTOL UAVs. IEEE Trans. Control Syst. Technol. 25(1), 270–277 (2017) Botelho, W.T., Marietto, M.D., Mendes, E.D., et al.: Toward an interdisciplinary integration between multi-agents systems and multi-robots systems: a case study. Knowl. Eng. Rev. 35, e35 (2020) He, W., Xu, C., Han, Q.L., et al.: L2 leader-follower consensus of networked Euler-Lagrange systems with external disturbances. IEEE Trans. Syst. Man Cybern. Syst. 48(11), 1920–1928 (2018) Nowzari, C., Garcia, E., Cortes, J.: Event-triggered communication and control of networked systems for multi-agent consensus.
Automatica 105(5), 1–27 (2019) Article MathSciNet Google Scholar Copp, D.A., Vamvoudakis, K.G., Hespanha, J.P.: Distributed output-feedback model predictive control for multi-agent consensus. Syst. Control Lett. 127(5), 52–59 (2019) Sun, F., Turkoglu, K.: Distributed real-time non-linear receding horizon control methodology for multi-agent consensus problems. Aerosp. Sci. Technol. 63(5), 82–90 (2017) Olfati-Saber, R., Murray, R.M.: Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Autom. Control 49(9), 1520–1533 (2004) Wen, G., Duan, Z., Yu, W., et al.: Consensus of multi-agent systems with nonlinear dynamics and sampled-data information: a delayed-input approach. Int. J. Robust Nonlinear Control 23(6), 602–619 (2013) Song, Q., Liu, F., Cao, J., et al.: M-matrix strategies for pinning-controlled leader-following consensus in multiagent systems with nonlinear dynamics. IEEE Trans. Cybern. 43(6), 1688–1697 (2013) Xiao, F., Chen, T.: Adaptive consensus in leader-following networks of heterogeneous linear systems. IEEE Trans. Control Netw. Syst. 5(3), 1169–1176 (2018) Albert, A.: Comparison of event-triggered and time-triggered concepts with regard to distributed control systems. In: Proc. Embedded World, vol. 17, pp. 235–252 (2004) Fan, Y., Feng, G., Wang, Y., et al.: Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica 49(2), 671–675 (2013) Seyboth, G.S., Dimarogonas, D.V., Johansson, K.H.: Event-based broadcasting for multi-agent average consensus. Automatica 49(1), 245–252 (2013) Cheng, B., Li, Z.: Fully distributed event-triggered protocols for linear multiagent networks. IEEE Trans. Autom. Control 64(4), 1655–1662 (2019) Zhang, Y., Sun, J., Liang, H., et al.: Event-triggered adaptive tracking control for multiagent systems with unknown disturbances. IEEE Trans. Cybern. 50(3), 890–901 (2020) Yang, D.P., Ren, W., Liu, X.D., et al.: Decentralized event-triggered consensus for linear multi-agent systems under general directed graphs. Automatica 69, 242–249 (2016) Ren, J.C., Sun, J., Fu, J.: Finite-time event-triggered sliding mode control for one-sided Lipschitz nonlinear systems with uncertainties. Nonlinear Dyn. 4, 865–882 (2021) Zhang, L., Zhang, Z.X., Lawrance, N., et al.: Decentralised finite-time consensus for second-order multi-agent system under event-triggered strategy. IET Control Theory Appl. 14(4), 664–673 (2020) Zhou, X.Y., Chen, Y., Wang, Q., et al.: Event-triggered finite-time \(H_{\infty} \) control of networked state-saturated switched systems. Int. J. Syst. Sci. 51(10), 1–15 (2020) Fan, H.J., Zheng, K.H., Liu, L., et al.: Event-Triggered Finite-Time Consensus of Second-Order Leader-Follower Multiagent Systems with Uncertain Disturbances. IEEE Transactions on Cybernetics (2020) Zhu, Y., Guan, X., Luo, X., et al.: Finite-time consensus of multi-agent system via nonlinear event-triggered control strategy. IET Control Theory Appl. 9(17), 2548–2552 (2015) Nair, R.R., Behera, L., Kumar, S.: Event-triggered finite-time integral sliding mode controller for consensus-based formation of multirobot systems with disturbances. IEEE Trans. Control Syst. Technol. 27(1), 39–47 (2019) Zhang, A., Zhou, D., Yang, P., et al.: Event-triggered finite-time consensus with fully continuous communication free for second-order multi-agent systems. Int. J. Control. Autom. Syst. 
17(4), 836–846 (2019) Du, C.K., Liu, X.D., Ren, W.: Finite-time consensus for linear multiagent systems via event-triggered strategy without continuous communication. IEEE Trans. Control Netw. Syst. 7(1), 19–29 (2020) Cao, Z., Li, C., Wang, X., et al.: Finite-time consensus of linear multi-agent system via distributed event-triggered strategy. J. Franklin Inst. 355(3), 1338–1350 (2018) Zhang, H., Feng, G., Yan, H., et al.: Observer-based output feedback event-triggered control for consensus of multi-agent systems. IEEE Trans. Ind. Electron. 61(9), 4885–4894 (2014) Donkers, M.C.F., Heemels, W.P.H.H.: Output-based event-triggered control with guaranteed \(L_{\infty} \) gain and improved and decentralized event-triggering. IEEE Trans. Autom. Control 57(6), 1362–1376 (2012) Yu, J.H., Antsaklis, P.J.: Event-triggered output feedback control for networked control systems using passivity: achieving L2 stability in the presence of communication delays and signal quantization. Automatica 49(1), 30–38 (2013) Dolk, V.S., Borgers, D.P., Heemels, W.P.M.H.: Output-based and decentralized dynamic event-triggered control with guaranteed Lp- gain performance and zeno-freeness. IEEE Trans. Autom. Control 62(1), 34–49 (2016) Cheng, B., Li, Z.K.: Fully distributed event-triggered protocols for linear multi-agent networks. IEEE Trans. Autom. Control 64(4), 1655–1662 (2019) Hu, J., Geng, J., Zhu, H.: An observer-based consensus tracking control and application to event-triggered tracking. Commun. Nonlinear Sci. Numer. Simul. 20(2), 559–570 (2015) Fu, J.J., Wang, J.Z.: Observer-based finite-time coordinated tracking for general linear multi-agent systems. Automatica 66, 231–237 (2016) Hardy, G.H., Littlewood, J.E., Polya, G.: Inequalities. Cambridge University Press, Cambridge (1952) The authors are thankful to Nature Research Editing Service for providing professional language editing services in the improvement of the present work. This work was jointly supported by the National Natural Science Foundation of China(Grant No. 11972156) and Special Fundation for Innovative Province Constraction (Grant No. 2020GK2030). Hunan Institute of Engineering, Fuxing Road 88, Xiangtan, China Yiping Luo & Jialong Pang Yiping Luo Jialong Pang Both authors contributed equally to the writing of this paper. Furthermore, both authors also read carefully and approved the final manuscript. Correspondence to Yiping Luo. Luo, Y., Pang, J. Observer-based event-triggered finite-time consensus for general linear leader-follower multi-agent systems. Adv Cont Discr Mod 2022, 40 (2022). https://doi.org/10.1186/s13662-022-03711-x Event-triggered Finite-time consensus Leader-follower
Markov partition A Markov partition in mathematics is a tool used in dynamical systems theory, allowing the methods of symbolic dynamics to be applied to the study of hyperbolic dynamics. By using a Markov partition, the system can be made to resemble a discrete-time Markov process, with the long-term dynamical characteristics of the system represented as a Markov shift. The appellation 'Markov' is appropriate because the resulting dynamics of the system obeys the Markov property. The Markov partition thus allows standard techniques from symbolic dynamics to be applied, including the computation of expectation values, correlations, topological entropy, topological zeta functions, Fredholm determinants and the like. Motivation Let $(M,\varphi )$ be a discrete dynamical system. A basic method of studying its dynamics is to find a symbolic representation: a faithful encoding of the points of $M$ by sequences of symbols such that the map $\varphi $ becomes the shift map. Suppose that $M$ has been divided into a number of pieces $E_{1},E_{2},\ldots ,E_{r}$ which are thought to be as small and localized, with virtually no overlaps. The behavior of a point $x$ under the iterates of $\varphi $ can be tracked by recording, for each $n$, the part $E_{i}$ which contains $\varphi ^{n}(x)$. This results in an infinite sequence on the alphabet $\{1,2,\ldots ,r\}$ which encodes the point. In general, this encoding may be imprecise (the same sequence may represent many different points) and the set of sequences which arise in this way may be difficult to describe. Under certain conditions, which are made explicit in the rigorous definition of a Markov partition, the assignment of the sequence to a point of $M$ becomes an almost one-to-one map whose image is a symbolic dynamical system of a special kind called a shift of finite type. In this case, the symbolic representation is a powerful tool for investigating the properties of the dynamical system $(M,\varphi )$. Formal definition A Markov partition[1] is a finite cover of the invariant set of the manifold by a set of curvilinear rectangles $\{E_{1},E_{2},\ldots ,E_{r}\}$ such that • For any pair of points $x,y\in E_{i}$, that $W_{s}(x)\cap W_{u}(y)\in E_{i}$ • $\operatorname {Int} E_{i}\cap \operatorname {Int} E_{j}=\emptyset $ for $i\neq j$ • If $x\in \operatorname {Int} E_{i}$ and $\varphi (x)\in \operatorname {Int} E_{j}$, then $\varphi \left[W_{u}(x)\cap E_{i}\right]\supset W_{u}(\varphi x)\cap E_{j}$ $\varphi \left[W_{s}(x)\cap E_{i}\right]\subset W_{s}(\varphi x)\cap E_{j}$ Here, $W_{u}(x)$ and $W_{s}(x)$ are the unstable and stable manifolds of x, respectively, and $\operatorname {Int} E_{i}$ simply denotes the interior of $E_{i}$. These last two conditions can be understood as a statement of the Markov property for the symbolic dynamics; that is, the movement of a trajectory from one open cover to the next is determined only by the most recent cover, and not the history of the system. It is this property of the covering that merits the 'Markov' appellation. The resulting dynamics is that of a Markov shift; that this is indeed the case is due to theorems by Yakov Sinai (1968)[2] and Rufus Bowen (1975),[3] thus putting symbolic dynamics on a firm footing. Variants of the definition are found, corresponding to conditions on the geometry of the pieces $E_{i}$.[4] Examples Markov partitions have been constructed in several situations. • Anosov diffeomorphisms of the torus. • Dynamical billiards, in which case the covering is countable. 
Markov partitions make homoclinic and heteroclinic orbits particularly easy to describe. The system $([0,1),x\mapsto 2x\ mod\ 1)$ has the Markov partition $E_{0}=(0,1/2),E_{1}=(1/2,1)$, and in this case the symbolic representation of a real number in $[0,1)$ is its binary expansion. For example: $x\in E_{0},Tx\in E_{1},T^{2}x\in E_{1},T^{3}x\in E_{1},T^{4}x\in E_{0}\Rightarrow x=(0.01110...)_{2}$. The assignment of points of $[0,1)$ to their sequences in the Markov partition is well defined except on the dyadic rationals - morally speaking, this is because $(0.01111\dots )_{2}=(0.10000\dots )_{2}$, in the same way as $1=0.999\dots $ in decimal expansions. References 1. Gaspard, Pierre (1998). Chaos, scattering and statistical mechanics. Cambridge Nonlinear Science Series. Vol. 9. Cambridge: Cambridge University Press. ISBN 978-0-521-39511-3. Zbl 0915.00011. 2. Sinaĭ, Ja. G. (1968), "Markov partitions and U-diffeomorphisms", Akademija Nauk SSSR, 2 (1): 64–89, MR 0233038. Sinaĭ, Ja. G. (1968), "Construction of Markov partitionings", Akademija Nauk SSSR, 2 (3): 70–80, MR 0250352. 3. Pytheas Fogg (2002), p. 208. 4. Pytheas Fogg (2002), p. 206. • Lind, Douglas; Marcus, Brian (1995). An introduction to symbolic dynamics and coding. Cambridge University Press. ISBN 978-0-521-55124-3. Zbl 1106.37301. • Pytheas Fogg, N. (2002). Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, Anne (eds.). Substitutions in dynamics, arithmetics and combinatorics. Lecture Notes in Mathematics. Vol. 1794. Berlin: Springer-Verlag. ISBN 978-3-540-44141-0. Zbl 1014.11015.
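Returning to the doubling-map example above, the following Python sketch (not part of the article) implements the symbolic coding with respect to the Markov partition $E_{0}=(0,1/2)$, $E_{1}=(1/2,1)$; as noted, the resulting sequence is just the binary expansion, and the coding is well defined away from the dyadic rationals.

```python
def doubling_map_code(x, n_symbols=8):
    """Symbolic coding of x in [0, 1) under T(x) = 2x mod 1 with respect to
    the Markov partition E0 = (0, 1/2), E1 = (1/2, 1).

    The k-th symbol records which element contains T^k(x); the sequence is
    the binary expansion of x. At dyadic rationals the orbit eventually hits
    a partition boundary, where this sketch breaks the tie in favor of E1
    (matching the expansion with trailing zeros used in the text)."""
    symbols = []
    for _ in range(n_symbols):
        symbols.append(0 if x < 0.5 else 1)
        x = (2 * x) % 1.0
    return symbols

# The example from the text: x = (0.01110...)_2 = 0.4375
print(doubling_map_code(0.4375))   # -> [0, 1, 1, 1, 0, 0, 0, 0]
```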
A randomized controlled trial of the efficacy of orally administered fluralaner (Bravecto™) against induced Ixodes holocyclus (Australian paralysis tick) infestations on dogs Petr Fisara ORCID: orcid.org/0000-0001-6211-639X1 & Maurice Webster2 Parasites & Vectors volume 8, Article number: 257 (2015) Cite this article Ixodes holocyclus ticks are a frequently fatal threat to dogs in eastern Australia. These ticks secrete a neurotoxin that can produce an ascending paralysis after 72 h attachment that can lead to death in affected animals. Fluralaner is a potent systemic acaricide with immediate and persistent efficacy for tick control including evidence of 100% efficacy against Ixodes ricinus ticks within 72 h. This study investigated the potential for oral fluralaner administration to control I. holocyclus infestation and the subsequent risk of host paralysis. Healthy Foxhound and Foxhound cross dogs immunized against holocyclotoxin were randomly allocated to receive either a single fluralaner (at least 25 mg/kg) dose or no treatment. All dogs were penned individually and infested with 30 adult unfed female I. holocyclus 1 day before treatment and 14, 28, 42, 56, 70, 84, 112 and 140 days following treatment. Ticks were counted and assessed at 24, 48 and 72 h after the initial fluralaner treatment and after each subsequent infestation. Ticks were not removed at the 24 and 48 h assessments, but were removed after the 72 h assessments. On 112 and 140 days post treatment a new group of untreated control dogs was used. Fluralaner treatment efficacy against I. holocyclus was 100% at 72 h post treatment. Following re-infestations the efficacy remained at 100% at the 72 h assessments for 115 days and reached 95.7% at 143 days. The differences between mean live tick counts on treatment and control groups were significant (P < 0.00l) at all assessment time points for 143 days following treatment. Oral fluralaner treatment can prevent Australian paralysis tick infestations for at least 115 days. The female of the Ixodes holocyclus tick secretes a potent holocyclotoxin following attachment that causes severe neurotoxicosis in man and domestic animals, including: dogs, cats, cattle, sheep, horses and goats. The neurotoxicosis is manifested as a rapidly ascending flaccid paralysis that can result in paralysis of respiratory muscles and death if not treated [1]. An estimated 10,000 domestic dogs are affected by I. holocyclus annually in Australia [2]; however, a recent study found that toy breeds were most at risk of tick paralysis associated death [3]. Most tick paralysis cases in dogs are reported in spring and early summer, although ticks and cases of paralysis may be found all year [3], and this seasonality corresponds with the presence of adult female ticks [4]. The I. holocyclus geographic range in eastern Australia extends from Lake Entrance in Victoria in the South, along the New South Wales and Queensland Coast to Cairns in the North. Tick distribution is based on several factors, and high humidity with low vegetation provide the ideal habitat for ticks [5]. Another factor affecting I. holocyclus prevalence is the bandicoot population as these animals are the most common natural hosts. Bandicoots are native marsupial species found on the eastern board of Australia. 
The most common species are the long-nosed bandicoot (Perameles nasuta) and the northern brown bandicoot (Isoodon macroorus); These small mammals often inhabit areas of coastal bushland and their presence is linked to the incidence of I. holocycus ticks [1]. Bandicoots can survive heavy tick infestations apparently because of an acquired immunity to the toxin rather than an intrinsic resistance [1]. Dogs at risk of tick attachment need protection against I. holocyclus, particularly when adult female ticks are present; however, pet owner compliance with tick control treatment application – even in the face of the risk of death from paralysis - remains a challenge. A prospective survey conducted at 42 veterinary clinics along the eastern coast of Australia showed that only 14% of all dogs presented at veterinary clinics with tick paralysis were correctly treated with a prophylactic tick control agent. Mortality in the surveyed dogs reached 5% despite the animals being treated in veterinary hospitals [6]. Therefore, currently registered preventative products are not achieving the goal of providing protection. Understanding the relationship between feeding activity of I. holocyclus and the associated paralysis is important for understanding how this neurotoxicity can be prevented. I. holocyclus engorge in two phases with initial slow feeding over the first 72 h after attachment and then a more rapid phase after 120 h. The swelling from the initial slow feeding can be obvious from as early as 10 minutes after attachment, when the tick changes from a flattened dorso-ventral appearance to having a slightly turgid appearance. At 24 h post attachment the ticks are not noticeably wider, but appear slightly swollen. Ticks continue swelling over the following 48 h, and by 72 h post attachment are approximately 30% of the volume of a fully engorged tick and show a rectangular block shape. The end of the engorgement process is marked with a dramatic engorgement just prior to detachment after approximately 7 days [7]. Toxin secretion has a specific timing during the engorgement process. Studies on factors affecting salivary gland extract toxicity in a mouse bioassay found that toxin quantity increased rapidly from the third day after feeding, apparently associated with major physiological changes in tick salivary glands occurring on the third day. Once these changes are stimulated, they continue independently of further tick feeding [8]. Onset of paralysis is never seen before the end of the fourth, or beginning of the fifth day after attachment [9]. Laboratory dogs infested with 3 to 4 ticks had an onset of clinical signs of toxicity between 5.5 and 7 days (mean = 6.2) after attachment with death occurring in a mean of 23.3 h after the onset of signs [10]. Therefore, to prevent the occurrence of fatal paralysis in infested dogs, it is critical to kill the ticks within 72 h post infestation and prior to the onset of clinical signs. Paralysis in dogs can be avoided if ticks are found and removed 1-2 days after attachment, however, once clinical signs begin to manifest from the fourth day onwards the paralysis may be fatal if left untreated even if the tick(s) are removed. Fluralaner (Bravecto, MSD Animal Health) has proven to have immediate and persistent efficacy against multiple genera and species of ticks that infest dogs in Europe and in the USA [11,12]. 
Fluralaner is orally administered and is systemically effective against ticks and therefore the tick must attach and initiate feeding to be exposed to the active ingredient. However, the onset of activity of fluralaner against ticks results in greater than 90% kill of I. ricinus ticks within 12 h throughout the 12 week post-treatment interval [13] and tick mortality was found to reach 100% within 12 h following the initial treatment [14]. This speed of kill suggests that fluralaner treatment has the potential to effectively prevent tick paralysis by killing I. holocyclus ticks before they begin to secrete increased amounts of holocyclotoxin which occurs after the third day of feeding [8]. This study was designed to measure the efficacy of fluralaner treatment against I. holocyclus using dogs hyper-immunized against holocyclotoxin. The study was conducted in compliance with the VICH GL9 Good Clinical Practices [15], the World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) Guidelines for Evaluating the Efficacy of Parasiticides for the Treatment, Prevention and Control of Flea and Tick Infestation on Dogs and Cats [16] and the APVMA Guidelines for small animal ectoparasiticide efficacy submission [17]. Animals were handled in compliance with Animal Research Authority no. 12/544(23) issued by the AEC convened by the Director General of NSW DPI on 6 February 2012, and any applicable local regulations. Twenty four healthy male and female dogs between 1.0 and 10.8 years old that had not been treated with an ectoparasiticide in the previous 60 days were immunized against holocyclotoxin, the tick paralysis toxin. All dogs were infested with I. holocyclus prior to random allocation to ensure that they were capable of carrying adequate numbers of ticks. The 20 dogs found to have the highest tick carrying capacity were randomly assigned to either the treatment or control groups using a randomized block design based on pre-treatment live tick counts. Each dog was weighed once prior to treatment and then dogs in the treated group received orally administered fluralaner (Bravecto, MSD Animal Health) at a minimum dose of 25 mg/kg in a flavoured chewable tablet. The dose for each dog was based on the chewable tablet size recommended for the weight band that the dog fell into. The individual doses ranged between 25.8 and 35.0 mg/kg fluralaner. This is lower than the maximum possible dose (56 mg/kg) administered to a dog treated according to the package insert. Following fluralaner treatment administration the animals were offered food and were closely observed for adverse reactions. The study protocol was designed to end at 87 days following the initial treatment; however, efficacy remained high and the study was extended for a further 8 weeks. Some dogs in the original control group could not continue in the extended study because of previous unrelated commitments. Therefore, a new control group was created by combining 4 of the original 10 untreated control dogs with 4 new untreated immunized dogs. There were no changes made to the treated group. All dogs in the extended study period were tick challenged 2 more times on days 112 and 140. Adult unfed female I. holocyclus were collected from the Northern Rivers region of New South Wales. All dogs in the experiment, including the second control group, were immunized against holocyclotoxin using a modification of the methods described by Stone, Neish & Wright [18]. 
At the end of this process, dogs were able to tolerate a challenge of 30 ticks with no evidence of intoxication. For each tick infestation during the experiment, 30 ticks were manually attached to each dog predominantly on the head, shoulders and dorsal midline. Infestations were applied 1 day before the treatment date and 14, 28, 42, 56, 70, 84, 112 and 140 days following treatment. During the tick challenge dogs were housed individually, while at all other times they were housed in socially compatible groups of 2 or 3 dogs from the same experimental groups. All dogs survived the study and were returned to their original colony on study completion. The same personnel conducted all tick counts to ensure a standardized technique during assessments. Tick counting personnel wore disposable overalls to avoid skin contact with treated dogs and thoroughly washed their hands with non-acaricidal soap after assessing each dog and between counts on groups of dogs to ensure no possible transfer of active ingredient. Tick counts were conducted 24, 48 and 72 h after each infestation by searching the whole body of the dog visually and by palpation until no more ticks could be found. The 24 and 48 h assessments were conducted without removing ticks, while at the 72 h assessments any remaining ticks were removed, assessed and discarded. All dogs were searched again prior to each infestation to ensure that no ticks had been missed during the previous tick assessment. Each tick observed during the count was described according to selected parameters (Table 1). Some ticks were classified as 'dead' when examined in situ at 24 and 48 h but displayed uncoordinated agonal leg movement after removal at 72 h and were then reclassified as "moribund". These ticks typically exhibited reduced engorgement and were smaller than live ticks of the same age with evidence of mild crenation (slightly shriveled appearance). These ticks were included in the total dead tick count as they were not feeding and were not capable of causing paralysis in the dogs. Table 1 Parameters used to describe observed ticks at each assessment The recorded description of each tick was then used to place it in one of seven categories (Table 2). At each counting period the total number of ticks counted on each dog that were assigned to categories 1, 2, 3 & 7 were used in the calculation of the results. Table 2 Tick category assignments used to assess the acaricidal effect of treatment Treatment efficacy was calculated using arithmetic means as: $$ \mathrm{Treatment}\ \mathrm{efficacy} = 100*\left(\mathrm{Control}\ \mathrm{Group}\ \mathrm{Mean}\hbox{--}\ \mathrm{TreatedGroup}\ \mathrm{Mean}\right)/\left(\mathrm{Control}\ \mathrm{Group}\mathrm{Mean}\right) $$ The experimental unit was the individual animal; very few ticks were counted on treated dogs for most of the study and block differences were negligible, therefore, they were not included in the analyses. Generalized linear models for Poisson data using the logarithmic link function (log-linear modelling or regression, or Poisson regression) [19] was used to compare mean total live tick counts for dogs in control and treated groups at each post-treatment sampling occasion. This method produces an analysis of deviance, analogous to the analysis of variance for normally distributed data. In the generalized linear models method, the observed counts are analyzed and the mean-variance relation is used to link mean counts to the linear model on the logarithmic scale [20]. 
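To make the two calculations just described concrete, here is a brief Python sketch; it is not from the study, the tick counts are invented placeholder numbers, and the Poisson regression uses the statsmodels library as one possible implementation of the log-linear model described above.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical live tick counts at a single assessment (10 dogs per group).
control_counts = np.array([21, 18, 25, 19, 23, 20, 17, 24, 22, 19])
treated_counts = np.array([0, 1, 0, 0, 2, 0, 1, 0, 0, 0])

# Treatment efficacy from arithmetic group means, as defined in the text:
# efficacy = 100 * (control mean - treated mean) / control mean.
control_mean = control_counts.mean()
treated_mean = treated_counts.mean()
efficacy = 100.0 * (control_mean - treated_mean) / control_mean
print(f"Efficacy: {efficacy:.1f}%")

# Log-linear (Poisson) model comparing group mean counts: the treatment
# indicator is the only explanatory variable, with a logarithmic link.
counts = np.concatenate([control_counts, treated_counts])
group = np.concatenate([np.zeros(10), np.ones(10)])   # 0 = control, 1 = treated
X = sm.add_constant(group)
result = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(result.summary())
```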
Mean tick counts at each sampling point (Table 3) show that the tick infestation level on control dogs was sufficient and that trial results can be used to confirm treatment efficacy. Mean tick counts on treated dogs were significantly lower than the mean tick counts on control group dogs at each time point post-infestation and post-treatment (Table 3). Mean tick counts on treated dogs at 24 h post-infestation were below 1.0 until 113 days post-treatment (Table 3). The 72 h post-infestation mean tick counts were 0 (and therefore an efficacy of 100% was achieved ) at any measurement point prior to the final count at 143 days post treatment. Table 3 Arithmetic mean I. holocyclus tick counts in fluralaner treated and untreated dogs Efficacy results (Figure 1) show that fluralaner treatment provided 100% efficacy at 72 h following induced I. holocyclus infestation of dogs for 115 days post treatment. Onset of paralysis following I. holocyclus attachment is never seen before the end of the fourth, or beginning of the fifth day after attachment [9]; therefore, the 100% efficacy at 72 h found in this study means that all attached adult female ticks are killed before the onset of clinical signs. It should be noted that due to the systemic nature of action of fluralaner, without any apparent repellency effect, dead attached ticks can be still seen on animals following treatment. These ticks will often appear partially engorged and can be easily mistaken for live ticks by an untrained person. This could be perceived as a lack of efficacy by pet owners, however, fluralaner relies on the parasite to ingest the medicated blood during feeding to become affected and killed. Dead ticks can be easily removed as opposed to live ticks which take some force to dislodge from the skin of the animals. Despite the fact that attached ticks were seen on the dogs at post treatment counts in this trial, all of these ticks were killed well within the critical period of 72 hours after attachment before paralysis begins to set in. Consequently, pet owners will need to be educated about the way fluralaner works and protects their dogs from tick paralysis so that they are not unduly alarmed by a dead attached tick. Acaricidal efficacy of fluralaner treatment of dogs against adult Ixodes holocyclus ticks. It is assumed that fluralaner oral treatment can, under the conditions of this study, effectively prevent the occurrence of fatal paralysis for 115 days. Additionally, fluralaner treatment continued to significantly reduce the number of ticks at 72 h post infestation for 143 days, reaching 95.7% at the last measurement time point in this study. This is a uniquely persistent efficacy against this dangerous parasite following a single administration of a systemic treatment. Lack of pet owner compliance with recommendations for topical treatment [6] has limited the ability of veterinarians to prevent tick paralysis in Australian dogs. Therefore, the formulation of fluralaner into a palatable and easily administered tick control treatment provides an important new option for veterinarians and owners in preventing the neurotoxicosis associated with I. holocyclus. Palatability, defined as voluntary acceptance of the chewable tablet following administration to the dog by the owner at home, was reported to be 92.5% [12]. No treated dog vomited the administered medication during the post treatment period in this study, which is consistent with results observed in an extensive field trial in Europe [11]. 
However, there were 3 adverse events reported in untreated dogs and 2 in treated dogs during this study. Adverse events in treated dogs included one dog that developed otitis externa 79 days after treatment and one dog that developed forelimb lameness 95 days after treatment. The timing and nature of these 2 events was such that neither adverse event was considered to be related to fluralaner administration. Oral fluralaner treatment of dogs kills 100% of attached I. holocyclus adults within 72 h post infestation for at least 115 days and can be used to prevent the onset of clinical signs of tick paralysis. The tick killing effect of 95.7% is still present at 143 days post administration. Masina S, Broady KW. Tick paralysis: development of a vaccine. Int J Parasitol. 1999;29:535–41. Stone BF, Aylward JH. Tick toxicosis and the causal toxins: tick paralysis. In: Gopalakhrisnakone P, Tan CK, editors. Progress in venom and toxin research. Singapore: National University of Singapore Press; 1987. p. 594–682. Epplestone KR, Kelman M, Ward MP. Distribution, seasonality and risk factors for tick paralysis in Australian dogs and cats. Vet Parasitol. 2013;196:460–8. Doube BM. Seasonal patterns of abundance and host relationships of the Australian paralysis tick, Ixodes holocyclus Neumann ( Ixodidae) in southeastern Queensland. Aust J Ecol. 1974;4(4):345–60. Taylor MA, Coop RL, Wall MA. Veterinary parasitology. 3rd ed. Oxford, UK: Blackwell Publishing; 2007. p. 691. Atwell RB, Campbell FE, Evans EA. Prospective survey of tick paralysis in dogs. Aust Vet J. 2001;79:412–8. Webster MC, Fisara P, Sargent RM. Long term efficacy of a deltamethrin-impregnated collar for the control of the Australian paralysis tick Ixodes holocyclus on dogs. Aust Vet J. 2011;89:439–43. Goodrich BS, Murray MD. Factors influencing the toxicity of salivary gland extracts of Ixodes holocyclus Neumann. lnt J Parasitol. 1977;8:313–20. Clunies R. Tick paralysis: a fatal disease of dogs and other animals in Eastern Australia. J Coun Sci Ind Res Aust. 1935;8:8–13. Ilkiw JE, Turner DM, Howlett CR. Infestation in the dog of the paralysis tick Ixodes holocyclus. 1. Clinical and histological findings. Aust Vet J. 1987;64:137–9. Rohdich N, Roepke RKA, Zschiesche E. A randomized, blinded, controlled and multi-centered field study comparing the efficacy and safety of Bravecto™ (fluralaner) against Frontline™ (fipronil) in flea- and tick-infested dogs. Parasit Vectors. 2014;7:83. Article PubMed Central PubMed Google Scholar Meadows C, Guerino F, Sun F. A randomized, blinded, controlled USA field study to assess the use of fluralaner tablets in controlling canine flea infestations. Parasit Vectors. 2014;7:375. European Commission, Community register of veterinary medicinal products, Product information Bravecto: Annex 1 Summary of product characteristics. Bruxelles; 2014. http://ec.europa.eu/health/documents/community-register/html/v158.htm. Wengenmayer C, Williams H, Zschiesche E, Moritz A, Langenstein J, Roepke RKA, et al. The speed of kill of fluralaner (Bravecto™) against Ixodes ricinus ticks on dogs. Parasit Vectors. 2014;7:525. International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products (VICH), June 2000, VICH Guideline 9: Good Clinical Practice (GCP). Marchiondo AA, Holdsworth PA, Green P, Blagburn BL, Jacobs DE. World Association for the Advancement of Veterinary Parasitology (W.A.A.V.P.) 
guidelines for evaluating the efficacy of parasiticides for the treatment, prevention and control of flea and tick infestation on dogs and cats. Vet Parasitol. 2007;145:332–44. The Australian Pesticides and Veterinary Medicines Authority (APVMA). 1996, Guidelines for small animal ectoparasiticide efficacy submission, http://www.apvma.gov.au/guidelines/vetguidelines.shtml#6 Stone BF, Neish AL, Wright IG. Tick (Ixodes holocyclus) paralysis in the dog – quantitative studies on immunity following artificial infestation with the tick. Aust Vet J. 1983;60:65–8. GenStat Release 7.2. 2004 VSN International, Wilkinson House, Jordan Hill Road, Oxford. McCullagh P, Neider JA. Generalized linear models. London: Chapman and Hall; 1983. p. 193–200. The authors acknowledge the assistance of Paul Nicholls with the statistical analysis and of Rob Armstrong and Karen Lipworth in the preparation of the manuscript. The study was funded by MSD Animal Health. MSD Animal Health, 26 Artisan road, Seven Hills, 2147, NSW, Australia Petr Fisara Vetx Research, PO Box 23, Casino, NSW, 2470, Australia Maurice Webster Correspondence to Petr Fisara. PF is an employee of MSD Animal Health and MW worked for MSD Animal Health under contract for the duration of this study. PF reviewed the protocol and wrote the manuscript. MW prepared the protocol for the study, conducted the trial and reviewed the manuscript. Both authors read and approved the final version of the manuscript. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Fisara, P., Webster, M. A randomized controlled trial of the efficacy of orally administered fluralaner (Bravecto™) against induced Ixodes holocyclus (Australian paralysis tick) infestations on dogs. Parasites Vectors 8, 257 (2015). https://doi.org/10.1186/s13071-015-0864-8 Accepted: 16 April 2015 Fluralaner Ixodes holocyclus Bravecto TM
Covariance matrix A bivariate Gaussian probability density function centered at (0, 0), with covariance matrix [ 1.00, 0.50 ; 0.50, 1.00 ]. Sample points from a multivariate Gaussian distribution with a standard deviation of 3 in roughly the lower left-upper right direction and of 1 in the orthogonal direction. Because the x and y components co-vary, the variances of x and y do not fully describe the distribution. A 2×2 covariance matrix is needed; the directions of the arrows correspond to the eigenvectors of this covariance matrix and their lengths to the square roots of the eigenvalues. In probability theory and statistics, a covariance matrix (also known as dispersion matrix or variance–covariance matrix) is a matrix whose element in the i, j position is the covariance between the i th and j th elements of a random vector (that is, of a vector of random variables). Each element of the vector is a scalar random variable, either with a finite number of observed empirical values or with a finite or infinite number of potential values specified by a theoretical joint probability distribution of all the random variables. Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the x and y directions contain all of the necessary information; a 2×2 matrix would be necessary to fully characterize the two-dimensional variation. Because the covariance of the i th random variable with itself is simply that random variable's variance, each element on the principal diagonal of the covariance matrix is the variance of one of the random variables. Because the covariance of the i th random variable with the j th one is the same thing as the covariance of the j th random variable with the i th one, every covariance matrix is symmetric. In addition, every covariance matrix is positive semi-definite. 1.1 Generalization of the variance 1.2 Correlation matrix 2 Conflicting nomenclatures and notations 3.1 Block matrices 4 As a linear operator 5 Which matrices are covariance matrices? 6 How to find a valid correlation matrix 7 Complex random vectors 8 Estimation 9 As a parameter of a distribution 10 Applications 10.1 In financial economics Throughout this article, boldfaced unsubscripted X and Y are used to refer to random vectors, and unboldfaced subscripted Xi and Yi are used to refer to random scalars. If the entries in the column vector X=[X1⋮Xn]{\displaystyle \mathbf {X} ={\begin{bmatrix}X_{1}\\\vdots \\X_{n}\end{bmatrix}}} are random variables, each with finite variance, then the covariance matrix Σ is the matrix whose (i, j) entry is the covariance Σi⁢j=c⁢o⁢v⁢(Xi,Xj)=E⁢[(Xi−μi)⁢(Xj−μj)]{\displaystyle \Sigma _{ij}=\mathrm {cov} (X_{i},X_{j})=\mathrm {E} {\begin{bmatrix}(X_{i}-\mu _{i})(X_{j}-\mu _{j})\end{bmatrix}}} μi=E⁡(Xi){\displaystyle \mu _{i}=\mathrm {E} (X_{i})\,} is the expected value of the ith entry in the vector X. 
In other words, Σ=[E⁡[(X1−μ1)⁢(X1−μ1)]E⁡[(X1−μ1)⁢(X2−μ2)]⋯E⁡[(X1−μ1)⁢(Xn−μn)]E⁡[(X2−μ2)⁢(X1−μ1)]E⁡[(X2−μ2)⁢(X2−μ2)]⋯E⁡[(X2−μ2)⁢(Xn−μn)]⋮⋮⋱⋮E⁡[(Xn−μn)⁢(X1−μ1)]E⁡[(Xn−μn)⁢(X2−μ2)]⋯E⁡[(Xn−μn)⁢(Xn−μn)]].{\displaystyle \Sigma ={\begin{bmatrix}\mathrm {E} [(X_{1}-\mu _{1})(X_{1}-\mu _{1})]&\mathrm {E} [(X_{1}-\mu _{1})(X_{2}-\mu _{2})]&\cdots &\mathrm {E} [(X_{1}-\mu _{1})(X_{n}-\mu _{n})]\\\\\mathrm {E} [(X_{2}-\mu _{2})(X_{1}-\mu _{1})]&\mathrm {E} [(X_{2}-\mu _{2})(X_{2}-\mu _{2})]&\cdots &\mathrm {E} [(X_{2}-\mu _{2})(X_{n}-\mu _{n})]\\\\\vdots &\vdots &\ddots &\vdots \\\\\mathrm {E} [(X_{n}-\mu _{n})(X_{1}-\mu _{1})]&\mathrm {E} [(X_{n}-\mu _{n})(X_{2}-\mu _{2})]&\cdots &\mathrm {E} [(X_{n}-\mu _{n})(X_{n}-\mu _{n})]\end{bmatrix}}.} The inverse of this matrix, Σ−1{\displaystyle \Sigma ^{-1}} is the inverse covariance matrix, also known as the concentration matrix or precision matrix;[1] see precision (statistics). The elements of the precision matrix have an interpretation in terms of partial correlations and partial variances.{{ safesubst:#invoke:Unsubst||date=__DATE__ |$B= {{#invoke:Category handler|main}}{{#invoke:Category handler|main}}[citation needed] }} Generalization of the variance The definition above is equivalent to the matrix equality Σ=E⁡[(X−E⁡[X])⁢(X−E⁡[X])T]{\displaystyle \Sigma =\mathrm {E} \left[\left(\mathbf {X} -\mathrm {E} [\mathbf {X} ]\right)\left(\mathbf {X} -\mathrm {E} [\mathbf {X} ]\right)^{\rm {T}}\right]} This form can be seen as a generalization of the scalar-valued variance to higher dimensions. Recall that for a scalar-valued random variable X σ2=v⁢a⁢r⁢(X)=E⁡[(X−E⁡(X))2]=E⁢[(X−E⁡(X))⋅(X−E⁡(X))].{\displaystyle \sigma ^{2}=\mathrm {var} (X)=\mathrm {E} [(X-\mathrm {E} (X))^{2}]=\mathrm {E} [(X-\mathrm {E} (X))\cdot (X-\mathrm {E} (X))].\,} Indeed, the entries on the diagonal of the covariance matrix Σ{\displaystyle \Sigma } are the variances of each element of the vector X{\displaystyle \mathbf {X} } . Correlation matrix A quantity closely related to the covariance matrix is the correlation matrix, the matrix of Pearson product-moment correlation coefficients between each of the random variables in the random vector X{\displaystyle \mathbf {X} } , which can be written corr(X)=(diag(Σ))−12⁢Σ⁢(diag(Σ))−12{\displaystyle {\text{corr}}(\mathbf {X} )=\left({\text{diag}}(\Sigma )\right)^{-{\frac {1}{2}}}\,\Sigma \,\left({\text{diag}}(\Sigma )\right)^{-{\frac {1}{2}}}} where diag(Σ){\displaystyle {\text{diag}}(\Sigma )} is the matrix of the diagonal elements of Σ{\displaystyle \Sigma } (i.e., a diagonal matrix of the variances of Xi{\displaystyle X_{i}} for i=1,…,n{\displaystyle i=1,\dots ,n} ). Equivalently, the correlation matrix can be seen as the covariance matrix of the standardized random variables Xi/σ⁡(Xi){\displaystyle X_{i}/\sigma (X_{i})} for i=1,…,n{\displaystyle i=1,\dots ,n} . Each element on the principal diagonal of a correlation matrix is the correlation of a random variable with itself, which always equals 1. Each off-diagonal element is between 1 and –1 inclusive. Conflicting nomenclatures and notations Nomenclatures differ. Some statisticians, following the probabilist William Feller, call the matrix Σ{\displaystyle \Sigma } the variance of the random vector X{\displaystyle X} , because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector X{\displaystyle X} . 
Thus

$\operatorname{var}(\mathbf{X}) = \operatorname{cov}(\mathbf{X}) = \mathrm{E}\left[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{X} - \mathrm{E}[\mathbf{X}])^{\mathrm{T}}\right].$

However, the notation for the cross-covariance between two vectors is standard:

$\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \mathrm{E}\left[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{Y} - \mathrm{E}[\mathbf{Y}])^{\mathrm{T}}\right].$

The var notation is found in William Feller's two-volume book An Introduction to Probability Theory and Its Applications,[2] but both forms are quite standard and there is no ambiguity between them. The matrix $\Sigma$ is also often called the variance-covariance matrix, since the diagonal terms are in fact variances.

For $\Sigma = \mathrm{E}\left[(\mathbf{X} - \mathrm{E}[\mathbf{X}])(\mathbf{X} - \mathrm{E}[\mathbf{X}])^{\mathrm{T}}\right]$ and $\boldsymbol{\mu} = \mathrm{E}(\mathbf{X})$, where $\mathbf{X}$ is a random p-dimensional variable and $\mathbf{Y}$ a random q-dimensional variable, the following basic properties apply:[3]

• $\Sigma = \mathrm{E}(\mathbf{X}\mathbf{X}^{\mathrm{T}}) - \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}}$
• $\Sigma$ is positive-semidefinite and symmetric.
• $\operatorname{cov}(\mathbf{A}\mathbf{X} + \mathbf{a}) = \mathbf{A}\,\operatorname{cov}(\mathbf{X})\,\mathbf{A}^{\mathrm{T}}$
• $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{cov}(\mathbf{Y}, \mathbf{X})^{\mathrm{T}}$
• $\operatorname{cov}(\mathbf{X}_1 + \mathbf{X}_2, \mathbf{Y}) = \operatorname{cov}(\mathbf{X}_1, \mathbf{Y}) + \operatorname{cov}(\mathbf{X}_2, \mathbf{Y})$
• If p = q, then $\operatorname{var}(\mathbf{X} + \mathbf{Y}) = \operatorname{var}(\mathbf{X}) + \operatorname{cov}(\mathbf{X}, \mathbf{Y}) + \operatorname{cov}(\mathbf{Y}, \mathbf{X}) + \operatorname{var}(\mathbf{Y})$
• $\operatorname{cov}(\mathbf{A}\mathbf{X} + \mathbf{a}, \mathbf{B}^{\mathrm{T}}\mathbf{Y} + \mathbf{b}) = \mathbf{A}\,\operatorname{cov}(\mathbf{X}, \mathbf{Y})\,\mathbf{B}$
• If $\mathbf{X}$ and $\mathbf{Y}$ are independent or uncorrelated, then $\operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \mathbf{0}$,

where $\mathbf{X}, \mathbf{X}_1$ and $\mathbf{X}_2$ are random p×1 vectors, $\mathbf{Y}$ is a random q×1 vector, $\mathbf{a}$ is a q×1 vector, $\mathbf{b}$ is a p×1 vector, and $\mathbf{A}$ and $\mathbf{B}$ are q×p matrices.

This covariance matrix is a useful tool in many different areas.
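As a brief numerical aside (again not from the original article), the affine-transformation property $\operatorname{cov}(\mathbf{A}\mathbf{X}+\mathbf{a}) = \mathbf{A}\,\operatorname{cov}(\mathbf{X})\,\mathbf{A}^{\mathrm{T}}$ and the correlation-matrix formula from the previous section can be checked on simulated data. The sketch below assumes NumPy; the particular matrices A and a and the 3×3 covariance are made up purely for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: a 3-dimensional random vector X, 50,000 samples (rows).
X = rng.multivariate_normal(mean=np.zeros(3),
                            cov=[[2.0, 0.6, 0.2],
                                 [0.6, 1.0, 0.3],
                                 [0.2, 0.3, 1.5]],
                            size=50_000)
Sigma = np.cov(X.T)                      # sample covariance of X

# Property cov(AX + a) = A cov(X) A^T, checked at the level of sample covariances,
# where it holds exactly because the constant shift a drops out after centring.
A = np.array([[1.0, 2.0, 0.0],
              [0.5, -1.0, 3.0]])         # 2x3 matrix
a = np.array([4.0, -7.0])
Y = X @ A.T + a
print(np.allclose(np.cov(Y.T), A @ Sigma @ A.T))

# Correlation matrix: corr(X) = diag(Sigma)^(-1/2) * Sigma * diag(Sigma)^(-1/2).
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
corr = D_inv_sqrt @ Sigma @ D_inv_sqrt
print(np.round(corr, 2))                 # ones on the diagonal, values in [-1, 1] elsewhere
```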
From the covariance matrix a transformation matrix can be derived, called a whitening transformation, that allows one to completely decorrelate the data or, from a different point of view, to find an optimal basis for representing the data in a compact way (see Rayleigh quotient for a formal proof and additional properties of covariance matrices). This is called principal components analysis (PCA) or the Karhunen–Loève transform (KL-transform).

Block matrices

The joint mean $\boldsymbol{\mu}_{X,Y}$ and joint covariance matrix $\boldsymbol{\Sigma}_{X,Y}$ of $\mathbf{X}$ and $\mathbf{Y}$ can be written in block form

$\boldsymbol{\mu}_{X,Y} = \begin{bmatrix} \boldsymbol{\mu}_X \\ \boldsymbol{\mu}_Y \end{bmatrix}, \qquad \boldsymbol{\Sigma}_{X,Y} = \begin{bmatrix} \boldsymbol{\Sigma}_{XX} & \boldsymbol{\Sigma}_{XY} \\ \boldsymbol{\Sigma}_{YX} & \boldsymbol{\Sigma}_{YY} \end{bmatrix},$

where $\boldsymbol{\Sigma}_{XX} = \operatorname{var}(\mathbf{X})$, $\boldsymbol{\Sigma}_{YY} = \operatorname{var}(\mathbf{Y})$, and $\boldsymbol{\Sigma}_{XY} = \boldsymbol{\Sigma}_{YX}^{\mathrm{T}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y})$.

$\boldsymbol{\Sigma}_{XX}$ and $\boldsymbol{\Sigma}_{YY}$ can be identified as the variance matrices of the marginal distributions for $\mathbf{X}$ and $\mathbf{Y}$ respectively. If $\mathbf{X}$ and $\mathbf{Y}$ are jointly normally distributed,

$\mathbf{x}, \mathbf{y} \sim \mathcal{N}(\boldsymbol{\mu}_{X,Y}, \boldsymbol{\Sigma}_{X,Y}),$

then the conditional distribution for $\mathbf{Y}$ given $\mathbf{X}$ is given by

$\mathbf{y} \mid \mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}_{Y|X}, \boldsymbol{\Sigma}_{Y|X}),$[4]

defined by the conditional mean

$\boldsymbol{\mu}_{Y|X} = \boldsymbol{\mu}_Y + \boldsymbol{\Sigma}_{YX} \boldsymbol{\Sigma}_{XX}^{-1} (\mathbf{x} - \boldsymbol{\mu}_X)$

and the conditional variance

$\boldsymbol{\Sigma}_{Y|X} = \boldsymbol{\Sigma}_{YY} - \boldsymbol{\Sigma}_{YX} \boldsymbol{\Sigma}_{XX}^{-1} \boldsymbol{\Sigma}_{XY}.$

The matrix $\boldsymbol{\Sigma}_{YX} \boldsymbol{\Sigma}_{XX}^{-1}$ is known as the matrix of regression coefficients, while in linear algebra $\boldsymbol{\Sigma}_{Y|X}$ is the Schur complement of $\boldsymbol{\Sigma}_{XX}$ in $\boldsymbol{\Sigma}_{X,Y}$.

The matrix of regression coefficients may often be given in transpose form, $\boldsymbol{\Sigma}_{XX}^{-1} \boldsymbol{\Sigma}_{XY}$, suitable for post-multiplying a row vector of explanatory variables $\mathbf{x}^{\mathrm{T}}$ rather than pre-multiplying a column vector $\mathbf{x}$. In this form they correspond to the coefficients obtained by inverting the matrix of the normal equations of ordinary least squares (OLS).
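To make the block formulas concrete, here is a small Python sketch (an addition, not part of the article) that computes the conditional mean and conditional variance of a Gaussian $\mathbf{Y}$ given an observed value of $\mathbf{X}$. All block matrices and the observed value x are hypothetical numbers chosen only for illustration; NumPy is assumed.

```python
import numpy as np

# Hypothetical joint covariance of (X, Y), with X 2-dimensional and Y 1-dimensional,
# partitioned into blocks as in the formulas above.
Sigma_XX = np.array([[1.0, 0.3],
                     [0.3, 2.0]])
Sigma_XY = np.array([[0.5],
                     [0.4]])
Sigma_YX = Sigma_XY.T
Sigma_YY = np.array([[1.5]])

mu_X = np.array([0.0, 1.0])
mu_Y = np.array([2.0])

# Conditional distribution of Y given an observed value x of X.
x = np.array([0.5, -0.2])
reg_coeffs = Sigma_YX @ np.linalg.inv(Sigma_XX)        # matrix of regression coefficients
mu_Y_given_X = mu_Y + reg_coeffs @ (x - mu_X)          # conditional mean
Sigma_Y_given_X = Sigma_YY - reg_coeffs @ Sigma_XY     # conditional variance (Schur complement)

print(mu_Y_given_X, Sigma_Y_given_X)
```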
As a linear operator

Applied to one vector, the covariance matrix maps a linear combination c of the random variables X onto a vector of covariances with those variables: $\mathbf{c}^{\mathrm{T}} \Sigma = \operatorname{cov}(\mathbf{c}^{\mathrm{T}} \mathbf{X}, \mathbf{X})$. Treated as a bilinear form, it yields the covariance between the two linear combinations: $\mathbf{d}^{\mathrm{T}} \Sigma \mathbf{c} = \operatorname{cov}(\mathbf{d}^{\mathrm{T}} \mathbf{X}, \mathbf{c}^{\mathrm{T}} \mathbf{X})$. The variance of a linear combination is then $\mathbf{c}^{\mathrm{T}} \Sigma \mathbf{c}$, its covariance with itself.

Similarly, the (pseudo-)inverse covariance matrix provides an inner product $\langle c - \mu \mid \Sigma^{+} \mid c - \mu \rangle$, which induces the Mahalanobis distance, a measure of the "unlikelihood" of c.

Which matrices are covariance matrices?

From the identity just above, let $\mathbf{b}$ be a $(p \times 1)$ real-valued vector; then

$\operatorname{var}(\mathbf{b}^{\mathrm{T}} \mathbf{X}) = \mathbf{b}^{\mathrm{T}} \operatorname{var}(\mathbf{X}) \mathbf{b},$

which must always be nonnegative, since it is the variance of a real-valued random variable. From the symmetry of the covariance matrix's definition it follows that only a positive-semidefinite matrix can be a covariance matrix. Conversely, every symmetric positive semi-definite matrix is a covariance matrix. To see this, suppose M is a p×p positive-semidefinite matrix. From the finite-dimensional case of the spectral theorem, it follows that M has a nonnegative symmetric square root, which can be denoted by $\mathbf{M}^{1/2}$. Let $\mathbf{X}$ be any p×1 column vector-valued random variable whose covariance matrix is the p×p identity matrix. Then

$\operatorname{var}(\mathbf{M}^{1/2} \mathbf{X}) = \mathbf{M}^{1/2} (\operatorname{var}(\mathbf{X})) \mathbf{M}^{1/2} = \mathbf{M}.$

How to find a valid correlation matrix

In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to a given symmetric matrix (e.g., of observed covariances). In 2002, Higham[5] formalized the notion of nearness using a weighted Frobenius norm and provided a method for computing the nearest correlation matrix.

Complex random vectors

The variance of a complex scalar-valued random variable with expected value μ is conventionally defined using complex conjugation:

$\operatorname{var}(z) = \operatorname{E}\left[(z - \mu)(z - \mu)^{*}\right],$

where the complex conjugate of a complex number $z$ is denoted $z^{*}$; thus the variance of a complex number is a real number. If $Z$ is a column vector of complex-valued random variables, then the conjugate transpose is formed by both transposing and conjugating.
In the following expression, the product of a vector with its conjugate transpose results in a square matrix, as does its expectation:

$\operatorname{E}\left[(Z - \mu)(Z - \mu)^{\dagger}\right],$

where $Z^{\dagger}$ denotes the conjugate transpose, which is applicable to the scalar case since the transpose of a scalar is still a scalar. The matrix so obtained will be Hermitian positive-semidefinite,[6] with real numbers on the main diagonal and complex numbers off-diagonal.

Estimation

If $\mathbf{M}_{\mathbf{X}}$ and $\mathbf{M}_{\mathbf{Y}}$ are centred data matrices of dimension n-by-p and n-by-q respectively, i.e. with n rows of observations of p and q columns of variables, from which the column means have been subtracted, then, if the column means were estimated from the data, sample covariance matrices $\mathbf{Q}_{\mathbf{X}}$ and $\mathbf{Q}_{\mathbf{XY}}$ can be defined to be

$\mathbf{Q}_{\mathbf{X}} = \frac{1}{n-1} \mathbf{M}_{\mathbf{X}}^{\mathrm{T}} \mathbf{M}_{\mathbf{X}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n-1} \mathbf{M}_{\mathbf{X}}^{\mathrm{T}} \mathbf{M}_{\mathbf{Y}},$

or, if the column means were known a priori,

$\mathbf{Q}_{\mathbf{X}} = \frac{1}{n} \mathbf{M}_{\mathbf{X}}^{\mathrm{T}} \mathbf{M}_{\mathbf{X}}, \qquad \mathbf{Q}_{\mathbf{XY}} = \frac{1}{n} \mathbf{M}_{\mathbf{X}}^{\mathrm{T}} \mathbf{M}_{\mathbf{Y}}.$

These empirical sample matrices are the most straightforward and most often used estimators for the covariance matrices, but other estimators also exist, including regularised or shrinkage estimators, which may have better properties.

As a parameter of a distribution

If a vector of n possibly correlated random variables is jointly normally distributed, or more generally elliptically distributed, then its probability density function can be expressed in terms of the covariance matrix.

In financial economics

The covariance matrix plays a key role in financial economics, especially in portfolio theory and its mutual fund separation theorem and in the capital asset pricing model. The matrix of covariances among various assets' returns is used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to (in a positive analysis) choose to hold in a context of diversification.

See also

• Covariance mapping
• Gramian matrix
• Eigenvalue decomposition
• Quadratic form (statistics)
• Weisstein, Eric W., "Covariance Matrix", MathWorld.
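As a final aside for this article (not part of the original text), the estimation formula $\mathbf{Q}_{\mathbf{X}} = \frac{1}{n-1}\mathbf{M}_{\mathbf{X}}^{\mathrm{T}}\mathbf{M}_{\mathbf{X}}$ can be reproduced directly from a centred data matrix. The Python sketch below assumes NumPy and uses made-up data; it simply confirms that the formula matches NumPy's built-in unbiased estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up data: n = 1,000 observations of p = 3 variables (rows are observations).
data = rng.normal(size=(1_000, 3)) @ np.array([[1.0, 0.2, 0.0],
                                               [0.0, 1.0, 0.5],
                                               [0.0, 0.0, 1.0]])

# Centre the data matrix by subtracting the column means, then apply
# Q_X = (1 / (n - 1)) * M_X^T M_X from the estimation formulas above.
n = data.shape[0]
M = data - data.mean(axis=0)
Q = (M.T @ M) / (n - 1)

print(np.allclose(Q, np.cov(data.T)))   # matches NumPy's unbiased estimator
```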
Net Debt-to-EBITDA Ratio Definition

What Is the Net Debt-to-EBITDA Ratio?

The net debt-to-EBITDA (earnings before interest, taxes, depreciation, and amortization) ratio is a measurement of leverage, calculated as a company's interest-bearing liabilities minus cash or cash equivalents, divided by its EBITDA. The net debt-to-EBITDA ratio is a debt ratio that shows how many years it would take for a company to pay back its debt if net debt and EBITDA are held constant. If a company has more cash than debt, the ratio can be negative. It is similar to the debt/EBITDA ratio, but net debt subtracts cash and cash equivalents while the standard ratio does not. When analysts look at the net debt-to-EBITDA ratio, they want to know how well a company can cover its debts.

The Formula for Net Debt-to-EBITDA Is

$\text{Net Debt to EBITDA} = \dfrac{\text{Total Debt} - \text{Cash and Equivalents}}{\text{EBITDA}}$

What Does the Net Debt-to-EBITDA Ratio Tell You?

The net debt-to-EBITDA ratio is popular with analysts because it takes into account a company's ability to decrease its debt. Ratios higher than 4 or 5 typically set off alarm bells because this indicates that a company is less likely to be able to handle its debt burden, and thus is less likely to be able to take on the additional debt required to grow the business.

The net debt-to-EBITDA ratio should be compared with that of a benchmark or the industry average to determine the creditworthiness of a company. Additionally, a horizontal analysis could be conducted to determine whether a company has increased or decreased its debt burden over a specified period. For horizontal analysis, ratios or items in the financial statement are compared with those of previous periods to determine how the company has grown over the specified time frame.

Example of Net Debt-to-EBITDA

Suppose an investor wishes to conduct horizontal analysis on Company ABC to determine its ability to pay off its debt. For its previous fiscal year, Company ABC's short-term debt was $6.31 billion, long-term debt was $28.99 billion, and cash holdings were $13.84 billion. Therefore, Company ABC reported net debt of $21.46 billion, or $6.31 billion plus $28.99 billion less $13.84 billion, and an EBITDA of $60.60 billion during the fiscal period. Consequently, Company ABC had a net debt-to-EBITDA ratio of 0.35, or $21.46 billion divided by $60.60 billion.

For its most recent fiscal year, Company ABC had short-term debt of $8.50 billion, long-term debt of $53.46 billion, and $21.12 billion in cash. The company increased its net debt by 90.31%, to $40.84 billion, year-over-year. Company ABC reported an EBITDA of $77.89 billion, a 28.53% increase from its EBITDA the previous year. Therefore, Company ABC had a net debt-to-EBITDA ratio of 0.52, or $40.84 billion divided by $77.89 billion. Company ABC's net debt-to-EBITDA ratio increased by 0.17, or roughly 48%, year-over-year.

Limitations of Net Debt-to-EBITDA

Analysts like the net debt/EBITDA ratio because it is easy to calculate.
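As an illustrative aside (not part of the original article), the calculation really is straightforward; the short Python sketch below reproduces the Company ABC figures from the example above. The function name and the rounding are my own choices for the illustration.

```python
def net_debt_to_ebitda(short_term_debt, long_term_debt, cash, ebitda):
    """Net debt-to-EBITDA = (total debt - cash and equivalents) / EBITDA."""
    net_debt = short_term_debt + long_term_debt - cash
    return net_debt / ebitda

# Company ABC, previous fiscal year (figures in billions, from the example above).
prior = net_debt_to_ebitda(6.31, 28.99, 13.84, 60.60)

# Company ABC, most recent fiscal year.
recent = net_debt_to_ebitda(8.50, 53.46, 21.12, 77.89)

print(round(prior, 2), round(recent, 2))   # 0.35 and 0.52
print(round(recent - prior, 2))            # year-over-year increase of about 0.17
```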
Debt figures can be found on the balance sheet and EBITDA can be calculated from the income statement. The issue, however, is that it may not provide the most accurate measure of earnings. More than earnings, analysts want to gauge the amount of cash available for debt repayment.

Depreciation and amortization are non-cash expenses that do not really impact cash flows, but interest can be a significant expense for some companies. Banks and investors looking at the current debt/EBITDA ratio to gain insight on how well the company can pay for its debt may want to consider the impact of interest on the debt, even if that debt will be included in a new issuance. In this way, net income minus capital expenditures, plus depreciation and amortization, may be the better measure of cash available for debt repayment.

Related Terms

What the Debt/EBITDA Ratio Tells You: Debt/EBITDA is a ratio measuring the amount of income generation available to pay down debt before deducting interest, taxes, depreciation, and amortization.

Enterprise Value (EV): Enterprise value is a measure of a company's total value, often used as a comprehensive alternative to equity market capitalization. EV includes in its calculation the market capitalization of a company but also short-term and long-term debt as well as any cash on the company's balance sheet.

What Does EBITDA Margin Mean?: EBITDA margin measures a company's profit as a percentage of revenue. EBITDA stands for earnings before interest, taxes, depreciation, and amortization.

How EBITA Helps Investors Evaluate Company Performance: Earnings before interest, taxes, and amortization (EBITA) is a measure of a company's real performance. It can be more informative than bottom-line earnings.

How to Best Use the Debt-to-Capital Ratio: The debt-to-capital ratio is calculated by dividing a company's total debt by its total capital, which is total debt plus total shareholders' equity.

What the EBITDA-to-Sales Ratio Tells Us: EBITDA-to-sales is used to assess profitability by comparing revenue with operating income before interest, taxes, depreciation, and amortization.
Fσ set

In mathematics, an Fσ set (said F-sigma set) is a countable union of closed sets. The notation originated in French with F for fermé (French: closed) and σ for somme (French: sum, union).[1] The complement of an Fσ set is a Gδ set.[1] Fσ is the same as $\mathbf{\Sigma}_{2}^{0}$ in the Borel hierarchy.

Examples

• Each closed set is an Fσ set.
• The set $\mathbb{Q}$ of rationals is an Fσ set in $\mathbb{R}$. More generally, any countable set in a T1 space is an Fσ set, because every singleton $\{x\}$ is closed.
• The set $\mathbb{R} \setminus \mathbb{Q}$ of irrationals is not an Fσ set.
• In metrizable spaces, every open set is an Fσ set.[2]
• The union of countably many Fσ sets is an Fσ set, and the intersection of finitely many Fσ sets is an Fσ set.
• The set $A$ of all points $(x,y)$ in the Cartesian plane such that $x/y$ is rational is an Fσ set because it can be expressed as the union of all the lines passing through the origin with rational slope: $A=\bigcup_{r\in \mathbb{Q}} \{(ry,y)\mid y\in \mathbb{R}\}$, where $\mathbb{Q}$ is the set of rational numbers, which is a countable set.

See also

• Gδ set — the dual notion.
• Borel hierarchy
• P-space, any space having the property that every Fσ set is closed

References

1. Stein, Elias M.; Shakarchi, Rami (2009), Real Analysis: Measure Theory, Integration, and Hilbert Spaces, Princeton University Press, p. 23, ISBN 9781400835560.
2. Aliprantis, Charalambos D.; Border, Kim (2006), Infinite Dimensional Analysis: A Hitchhiker's Guide, Springer, p. 138, ISBN 9783540295877.
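One way to see the metrizable-space claim above (this short sketch is an addition, not part of the original article; the sets $C_n$ are introduced here only for the argument): if $U$ is a proper nonempty open subset of a metric space $(X, d)$, write

\[ U = \bigcup_{n=1}^{\infty} C_n, \qquad C_n = \{\, x \in X : d(x, X \setminus U) \ge \tfrac{1}{n} \,\}. \]

Each $C_n$ is closed because the map $x \mapsto d(x, X \setminus U)$ is continuous, and every $x \in U$ has positive distance to the closed complement $X \setminus U$, so it belongs to some $C_n$; hence $U$ is a countable union of closed sets. The cases $U = \emptyset$ and $U = X$ are immediate, since both are themselves closed.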
How do we prove that something is unprovable?

I have read somewhere that there are some theorems that are shown to be "unprovable". It was a while ago and I don't remember the details, and I suspect that this question might be the result of a total misunderstanding. By the way, I assume that unprovable theorems do exist. Please correct me if I am wrong and skip reading the rest.

As far as I know, mathematical statements are categorized into: undefined concepts, definitions, axioms, conjectures, lemmas and theorems. There might be some other types that I am not aware of as an amateur math learner. In this categorization, an axiom is something that cannot be built upon other things and it is too obvious to be proved (is it?). So axioms are unprovable. A theorem or lemma is actually a conjecture that has been proved. So "a theorem that cannot be proved" sounds like a paradox. I know that there are some statements that cannot be proved simply because they are wrong. I am not addressing them because they are not theorems.

So what does it mean that a theorem is unprovable? Does it mean that it cannot be proved by current mathematical tools and it may be proved in the future by more advanced tools that are not discovered yet? So why don't we call it a conjecture? If it cannot be proved at all, then it is better to call it an axiom. Another question is, how can we be sure that a theorem cannot be proved? I am assuming the description might be some high-level logic that is way above my understanding, so I would appreciate it if you put it into simple words.

Edit: Thanks to a comment by @user21820, I just read two other interesting posts, this and this, that are relevant to this question. I recommend everyone to take a look at them as well.

soft-question (asked by polfosol)

Comments:

– I was just thinking of preparing a talk on this very subject... – Stefan Mesken

– "In this categorization, an axiom is something that cannot be built upon other things and it is too obvious to be proved (is it?). So axioms are unprovable." Not at all. Often axioms are found to be redundant: Hilbert's axiom "$a \implies a$" was found to be redundant (in other words, it was provable from other axioms). Happens all the time. – DanielV

– While logic is a rich subject, it tends to be very shallow in the parts where laypeople think there is depth (e.g. the notion of "axiom"). Most of the ideas you have about logic are not actually about logic, but instead ideas about philosophy or pedagogy.

– Stefan's answer has identified that the main issue is your understanding of "unprovable", which is always relative. See math.stackexchange.com/a/1643073 for a brief outline of some points I wish to highlight, and math.stackexchange.com/a/1808558 for a little more on foundational aspects.

– "Theorem" has 2 different meanings. In usual language, it means "a proposition (of some importance) for which someone has given an acceptable proof"; it is a result obtained by people. In logic, it is a well-formed formula which happens to be true in some theory. When talking about "provability", you are in the second context; so theorems exist even without a written proof, without the existence of a proof, and even without anybody stating them. – Michel Billaud
Answer (accepted):

When we say that a statement is 'unprovable', we mean that it is unprovable from the axioms of a particular theory.

Here's a nice concrete example. Euclid's Elements, the prototypical example of axiomatic mathematics, begins by stating the following five axioms:

1. Any two points can be joined by a straight line.
2. Any finite straight line segment can be extended to form an infinite straight line.
3. For any point $P$ and choice of radius $r$ we can form a circle centred at $P$ of radius $r$.
4. All right angles are equal to one another.
5. [The parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there is at most one line $L'$ that passes through $P$ and is parallel to $L$.

Euclid proceeds to derive much of classical plane geometry from these five axioms. This is an important point. After these axioms have been stated, Euclid makes no further appeal to our natural intuition for the concepts of 'line', 'point' and 'angle', but only gives proofs that can be deduced from the five axioms alone. It is conceivable that you could come up with your own theory with 'points' and 'lines' that do not resemble points and lines at all. But if you could show that your 'points' and 'lines' obey the five axioms of Euclid, then you could interpret all of his theorems in your new theory.

In the two thousand years following the publication of the Elements, one major question that arose was: do we need the fifth axiom? The fifth axiom - known as the parallel postulate - seems less intuitively obvious than the other four: if we could find a way of deducing the fifth axiom from the first four then it would become superfluous and we could leave it out.

Mathematicians tried for millennia to find a way of deducing the parallel postulate from the first four axioms (and I'm sure there are cranks who are still trying to do so now), but were unable to. Gradually, they started to get the feeling that it might be impossible to prove the parallel postulate from the first four axioms. But how do you prove that something is unprovable?

The right approach was found independently by Lobachevsky and Bolyai (and possibly Gauss) in the nineteenth century. They took the first four axioms and replaced the fifth with the following:

[Hyperbolic parallel postulate:] If $L$ is a straight line and $P$ is a point not on the line $L$ then there are at least two lines that pass through $P$ and are parallel to $L$.

This axiom is clearly incompatible with the original parallel postulate. The remarkable thing is that there is a geometrical theory in which the first four axioms and the modified parallel postulate are true. The theory is called hyperbolic geometry and it deals with points and lines inscribed on the surface of a hyperboloid. [Figure of the hyperboloid model omitted; its bottom right showed a pair of hyperbolic parallel lines.]

Notice that hyperbolic parallel lines diverge from one another. The first four axioms hold (and you can check this), but now if $L$ is a line and $P$ is a point not on $L$ then there are infinitely many lines parallel to $L$ passing through $P$. So the original parallel postulate does not hold.

This now allows us to prove very quickly that it is impossible to prove the parallel postulate from the other four axioms: indeed, suppose there were such a proof. Since the first four axioms are true in hyperbolic geometry, our proof would induce a proof of the parallel postulate in the setting of hyperbolic geometry. But the parallel postulate is not true in hyperbolic geometry, so this is absurd.
This is a major method for showing that statements are unprovable in various theories. Indeed, a theorem of Gödel (Gödel's completeness theorem) tells us that if a statement $s$ in the language of some axiomatic theory $\mathbb T$ is unprovable then there is always some structure that satisfies the axioms of $\mathbb T$ in which $s$ is false. So showing that $s$ is unprovable often amounts to finding such a structure.

It is also possible to show that things are unprovable using a direct combinatorial argument on the axioms and deduction rules you are allowed in your logic. I won't go into that here.

You're probably interested in things like Gödel's incompleteness theorem, which says that there are statements that are unprovable in a particular theory called ZFC set theory, which is often used as the foundation of all mathematics (note: there is in fact plenty of mathematics that cannot be expressed in ZFC, so all isn't really correct here). This situation is not at all different from the geometrical example I gave above: if a particular statement is neither provable nor disprovable from the axioms of all mathematics, it means that there are two structures out there, both of which interpret the axioms of all mathematics, in one of which the statement is true and in the other of which the statement is false.

Sometimes we have explicit examples: one important problem at the turn of the century was the Continuum Hypothesis. The problem was solved in two steps: Gödel gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was true. Later, Cohen gave a structure satisfying the axioms of ZFC set theory in which the Continuum Hypothesis was false. Between them, these results show that the Continuum Hypothesis is in fact neither provable nor disprovable in ZFC set theory.

(answered by John Gowers, edited May 28, 2019)

Comments:

– Nice answer. Do you want to add a few words about the third version of the parallel postulate, in which there are no parallel lines? – Scott - Слава Україні

– I learned a lot from all answers and I can't resist emphasizing that marking this answer as accepted doesn't imply undervaluing the other ones. – polfosol

– It isn't quite true that Euclid proved his theorems from those 5 axioms alone. Over the years, mathematicians have identified several other assumptions he made without realizing it. These additional assumptions are also axioms of Euclidean geometry. One such assumption is that a line separates a plane into two halves, such that any line connecting points in one half with points in the other must intersect the first line. – Paul Sinclair

– @HenningMakholm SGA/EGA make heavy use of Grothendieck universes, which can't be shown to exist in ZFC alone. Now, much of the material in SGA/EGA can be developed in ZFC, but the convenience of the Grothendieck universes approach makes me uneasy of claiming that ZFC is a foundation for all mathematics. – John Gowers

– @JohnGowers: Grothendieck universes don't need to be viewed as outside or beyond ZFC: they can reasonably be seen as just an extra existence assumption, the basic framework still remains ZFC. I work in type theory and constructive maths, so I'm a big proponent of alternatives, but I don't think there's any problem in saying that ZFC is a satisfactory foundation for the vast majority of current mathematics (including EGA, SGA etc). – Peter LeFanu Lumsdaine
Answer:

First of all, in the following answer I allowed myself (contrary to my general nature) to focus my efforts on simplicity rather than formal correctness.

In general, I think that the way we teach the concept of axioms is rather unfortunate. While traditionally axioms were thought of as statements that are - in some philosophical way - obviously true and don't need further justification, this view has shifted a lot in the last century or so. Rather than thinking of axioms as obvious truths, think of them as statements that we declare to be true. Let $\mathcal A$ be a set of axioms. We can now ask a bunch of questions about $\mathcal A$.

1. Is $\mathcal A$ self-contradictory? I.e. does there exist a proof (<- this needs to be formalized, but for the sake of simplicity just think of your informal notion of proofs) - starting from formulas in $\mathcal A$ - that leads to a contradiction? If that's the case, then $\mathcal A$ was poorly chosen. If all the statements in $\mathcal A$ should be true (in a philosophical sense), then they cannot lead to a contradiction. So our first requirement is that $\mathcal A$ - should it represent a collection of true statements - is not self-contradictory.

2. Does $\mathcal A$ prove interesting statements? Take for example $\mathcal A$ as the axioms of set theory (e.g. $\mathcal A = \operatorname{ZFC}$). In this case we can prove all sorts of interesting mathematical statements. In fact, it seems reasonable that every mathematical theorem that can be proved by the usual style of informal proofs can be formally proved from $\mathcal A$. This is one of the reasons the axioms of set theory have been so successful.

3. Is $\mathcal A$ a natural set of axioms? ...

4. Is there a statement $\phi$ which $\mathcal A$ does not decide? I.e. is there a statement $\phi$ such that there is no proof of $\phi$ or $\neg \phi$ starting from $\mathcal A$?

The last point is what we mean when we say that $\phi$ is unprovable from $\mathcal A$. And if $\mathcal A$ is our background theory, say $\mathcal A = \operatorname{ZFC}$, we just say that $\phi$ is unprovable.

By a very general theorem of Kurt Gödel, any natural set of axioms $\mathcal A$ has statements that are unprovable from it. In fact, the statement "$\mathcal A$ is not self-contradictory" is not provable from $\mathcal A$. So, while natural sets of axioms $\mathcal A$ are not self-contradictory, they themselves cannot prove this fact. This is rather unfortunate and demonstrates that David Hilbert's program on the foundation of mathematics - in its original form - is impossible.

The natural workaround is something contrary to the general nature of mathematics - a leap of faith: If $\mathcal A$ is a sufficiently natural set of axioms (or otherwise certified), we believe that it is consistent (or - if you're more like me - you assume it is consistent until you see a reason not to). This is - for example - the case for $\mathcal A = \operatorname{ZFC}$, and for the remainder of my answer I will restrict myself to this scenario.

Now that we know that $\mathcal A$ does not decide all statements (and arguably does not prove some true statements - like its consistency), a new question arises: Does $\operatorname{ZFC}$ decide all mathematical statements? In other words: Is there a question about typical mathematical objects that $\operatorname{ZFC}$ does not answer?
The - to some people unfortunate - answer is yes and the most famous example is $\operatorname{ZFC}$ does not decide how many real numbers there are. Actually proving this fact, took mathematicians (logicians) many decades. At the end of this effort, however, we not only had a way to prove this single statement, but we actually obtained a very general method to prove the independence of many statements (the so-called method of forcing, introduced by Paul Cohen in 1963). The idea - roughly speaking - is as follows: Let $\phi$ be a statement, say $\phi \equiv$ "there is no infinity strictly between the infinity of $\mathbb N$ and of $\mathbb R$" Let $\mathcal M$ be a model of $\operatorname{ZFC}$. Starting from $\mathcal M$ we would like to construct new models $\mathcal M_{\phi}$ and $\mathcal M_{\neg \phi}$ of $\operatorname{ZFC}$ such that $\mathcal M_{\phi} \models \phi$ and $\mathcal M_{\neg \phi} \models \neg \phi$ (i.e. $\phi$ is true in $\mathcal M_{\phi}$ and $\phi$ is false in $\mathcal M_{\neg \phi}$). If this is possible, then this proves that $\phi$ is not decided by $\operatorname{ZFC}$. Why is that? Well, if it were decided by $\operatorname{ZFC}$, then there would be a proof of $\phi$ or a proof of $\neg \phi$. Let us say that $\phi$ has a proof (the other case is the same). Then, by soundness of our proofs, any model that satisfies $\operatorname{ZFC}$ must satisfy $\phi$, so there cannot be a model $\mathcal M_{\neg \phi}$ as above. Stefan MeskenStefan Mesken $\begingroup$ @polfosol Well, because the truth of a given statement - unless implied by your axioms - is debatable. You are free to add this true statement to your axioms (and we regularly do that), but to other people the negation may seem to be true and they might decide to accept that as an axiom. $\endgroup$ $\begingroup$ @polfosol I myself don't consider any statement to be true (in an absolute sense) - at least I think I don't. Rather I work under the reasonable assumption that they are not self-contradictory and try to figure out what their consequences are. $\endgroup$ $\begingroup$ @polfosol Because - thus far - set theory has been most successful (or unfortunate - depending on your point of view) in determining natural, unprovable statements. However, there are other examples. See for example Goodstein's Theorem. $\endgroup$ $\begingroup$ When you consider whether A is a "natural" set of axioms, what do you mean by "natural"? $\endgroup$ – Matt Gutting $\begingroup$ @Stefan Strictly speaking, other theories have been much more successful than set theory for determining natural unprovable statements. For example, the statement '$G$ is abelian' is undecidable in the theory of groups. The point is that when we say undecidable, we usually mean undecidable in set theory , so the natural examples will also come from set theory. $\endgroup$ Your question is partially based on an error of terminology. We don't speak of unprovable theorems - as you say, being a theorem implies having a proof. The correct thing to speak of is unprovable statements or unprovable assertions. That said, it is possible for a statement to be a theorem in one context but not in another, in which case you could say that, in the second context, it is an unprovable theorem. 
Example: All proofs of the Pythagorean Theorem rely essentially on the Parallel Postulate; so if you try to prove what you can in geometry with the Parallel Postulate omitted (this is called 'absolute geometry'), then the Pythagorean Theorem becomes an unprovable theorem within that context. (this is about the only situation where I think the phrase 'unprovable theorem' is the natural choice) draks ... PMarPMar $\begingroup$ +1 and thanks for an easy to follow example of unprovable theorems. Is it worked out somewhere? $\endgroup$ – draks ... You have already received some excellent responses, but I just wanted to clarify a few things that I feel may have not received the attention that they require. For instance, you mentioned [...] definitions, axioms, conjectures, lemmas and theorems. It is important to realize that when setting up a mathematical theory, there are really only two things that matter: Statements we define to be true (or, equivalently, false). All other statements, that can either be true or false, or neither; these require a proof. Definitions and axioms are both a form of type 1. They are quite closely related as axioms usually define some important concept and definitions usually specify that we call $Z$ a zork if and only if $Z$ has certain properties. For example, consider the statments Every natural number $n$ has a successor $S(n)$. An even number is of the form $2n$, for some $n \in \mathbb N$. Are thesee definitions of the successor function and even numbers, respectively? Or are they axioms in a theory about natural numbers? Similarly the distinction between theorems, conjectures, lemmas etc. is mostly an emotional distinction: they help the reader of a book or article distinguish between the "important" results and the statements that are more like tools and need to be proven only to get an interesting result. Now on to (un)provable. There are formal definitions of a proof in mathematical logic, but let's not go into them. The main point is that a proof is a logical sequence of steps, starting from the assumptions of a theorem (or lemma, or conjecture, ...), ending with the conclusion of the theorem, and where every step is justified either by an axiom or by a previously proven theorem: Theorem: (If conditions then) conclusion Note that the conditions part can be omitted if the conclusion is always true (a tautology). The "proof" for an axiom is then quite trivial: there are no preconditions and the single step of the proof consists of invoking the axiom. For a lot of statements, we can usually show that they are true or false, given some set of axioms (and we usually include a set of proven theorems as well that we agree on, for example, you don't invoke Peano's axioms every time you prove something about a natural number). Showing that the theorem is false usually involves finding a counter example, or - more formally - showing that the "anti"theorem Theorem: (If conditions then) not (conclusion) is true, again by a proof in which each step is nicely justified. So for example, we can prove the theorem "6 is even" to be true as follows: $6 = 2 \cdot 3$ So $6 = 2 \cdot n$ for $n = 3$. This satisfies the definition of an even number (or, if you have set up things that way: This is an even number by the axiom "Even number"). Similarly we can prove that 7 is not even by showing that there is no number $n$ such that $7 = 2 \cdot n$ (the only solution is $n = 3.5$ and that is not a natural number). However, suppose that I want to prove "4 is a nortial number". 
Is that theorem true or false in our simple theory with one axiom about even numbers? No matter what way you invoke the axiom, or any theorem you can prove from it, you cannot show that 4 is a nortial number. However, similarly you cannot prove that "4 is not a nortial number" so the theorem can also not be false. Clearly, your theory does not have enough axioms to be able to reach either conclusion. This is what we mean by "an unprovable statement". CompuChipCompuChip $\begingroup$ What is a nortial number? $\endgroup$ – Ole Petersen $\begingroup$ I have no idea :) $\endgroup$ – CompuChip $\begingroup$ Your answer is pretty good, but like many others, you overlook the importance of "undefined concepts" in definitions. In set theory, what is an element? Does an element exist? What is "membership"? In geometry, we have a choice of a new undefined concept, "sense" referring to clockwise or counterclockwise, or creating a new axiom related to a line partitioning the real projective plane. When you try to define an undefined concept, you generally do so in other terms that lead back to the thing you are trying to define. $\endgroup$ – richard1941 "Too obvious" doesn't exist in mathematics, we choose to agree about some axioms to build a theory. Axioms mustn't imply each other and mustn't be contradictory. If a theorem cannot be proven nor denied by the axioms that means it goes beyond boundaries of that theory. So you can join that theorem as another axiom to build more complex theory, or you can join its negation as an axiom to build another theory In addition, how can be proven that a theorem isn't implied nor denied by axioms?? That would be something like if you don't get contradiction by joining the theorem nor by joining its negation to the set of axioms. Just pass trough whole set of objects you use in the theory... :P Djura MarinkovDjura Marinkov $\begingroup$ "we choose to agree about some axioms to build a theory." To be historically accurate, we usually know the theorems we want to prove long before anyone proposes a set of axioms to support them, for almost any theory you will ever find. "Axioms mustn't imply each other and mustn't be contradictory." Axioms often are found to imply each other, but fortunately are not often found to be contradictory. $\endgroup$ $\begingroup$ @DanielV You're right, this was said for an imagined perfectly defined theory. And of course we choose axioms so to satisfy a wanted purpose $\endgroup$ – Djura Marinkov $\begingroup$ Note that In set theory we prove that Zermelo's and axiom of choice are equivenent; each implies the other. $\endgroup$ $\begingroup$ +1 how large will these sets of theorems grow when you keep on joining theorems that are currently beyond the boundaries? $\endgroup$ $\begingroup$ @draks... I guess as large as your imagination reaches $\endgroup$ A theory is created from building blocks called axioms which are accepted to be true. The axioms need not and cannot be proven (not because they are "too obvious", but because they are independent of the other axioms). Anyway, you have to prove that the set of axioms forming your theory is not contradictory. Since Gödel, we know that a (rich) theory cannot prove all the logical propositions expressible in its frame, so that some propositions will require extra axioms, hence a stronger theory, to become theorems. Simplistically speaking, propositions can be shown "unprovable" when their meaning is equivalent to "I am not provable in the frame of the theory". 
Then if they were provable, they would not be provable, and conversely. $\begingroup$ There are rich and complete theories - take for example the theory of $H_{\omega_{1}}$. The point of Gödel's theorem is, that those cannot be recursively enumerable. $\endgroup$ $\begingroup$ @Stefan: I don't think that this is on the level that the OP expects. $\endgroup$ $\begingroup$ Sure. The reason for my comment is that people often get Gödel's Incompleteness Theorem wrong - which trained me to be very sensitive about its statement (-; $\endgroup$ $\begingroup$ Off topic a bit, but is there a minimal axiomatic structure that contains unprovable statements? is there a maximally complex axiomatic structure that contains NO unprovable statements? $\endgroup$ $\begingroup$ @richard1941 I think you would want to examine philosophy.stackexchange.com/questions/15525/… carefully. And then try to prove maximality or minimality. Since all of the systems I know of are amenable to various axiomatic formulations; you probably need to specify minimal and maximal. $\endgroup$ – rrogers It can make sense to call something an "unprovable theorem" if what is meant is that the theorem is unprovable in a given formal system or from a given set of axioms. For instance Gödel's formal statement which informally said "I cannot be proven" is unprovable in the formal system he used, but he also proved it using more powerful, informal tools - thus making it a theorem. CasperCasper The most common way (AFAIK) to establish $A \not \vdash B$ is to find some statement $X$ such that: $A,~X \vdash \lnot B$ $A \land X$ is consistent Then you can conclude $A \not \vdash B$ because constructively if $A \vdash B$ then $A ,~ X \vdash B$ then $A ,~ X \vdash B \land \lnot B$ which contradicts the assumption that $A \land X$ are consistent. For example, if you wanted to show that from "$n$ is divisible by $3$" you cannot prove "$n$ is divisible by $6$" you do the obvious thing of pointing out the $X$ of $n = 9$. Since $3|n , ~ n = 9 \vdash \lnot 6|n$ $(3|n) \land (n = 9)$ is consistent So $3|n \not \vdash 6|n$. This simple approach can get very complex, mainly because establishing $A \land X$ as being consistent can be extremely demanding, usually an informal model theoretic approach is used for this. It is also a well known way to establish undecidability, if you can find $X_1$ and $X_2$ such that: $A, ~ X_1 \vdash B$ $A, ~ X_2 \vdash \lnot B$ $A \land X_1$ is consistent Then for the same reason $A \not \vdash B$ and $A \not \vdash \lnot B$, so $B$ is independent of $A$. answered Sep 15, 2017 at 18:16 DanielVDanielV $\begingroup$ There is some circularity in this answer. The claim "$A \land X$ is consistent" is equivalent to claiming the unprovability of inconsistencies. This is unavoidable. Unprovability, like many other things in logic, is never created, only propagated. $\endgroup$ How to prove that a axiom in a certain system is independent of the other ones? Proving the existence of a proof without actually giving a proof Are sets and symbols the building blocks of mathematics? Prove Gödel's incompleteness theorem using halting problem Proving sentences to be unprovable with "combinatorial argument"? Prove the Parallel Postulate! Demonstration of proving a statement is unprovable Proving the impossibility of a proof What if a conjecture were provably unprovable? What is a simple example of an unprovable statement? What's the difference between "unprovable" and "undecidable"? 
\begin{definition}[Definition:Natural Language] A '''natural language''' is one of the conventional, everyday languages in which people usually communicate. Although there are many natural languages in the world, we are not generally going to distinguish between them, merely lumping them all into the one concept. When '''natural language''' is referred to on {{ProofWiki}}, it will usually mean English. \end{definition}
Algebra and Trigonometry

Systems of Linear Equations: Three Variables

In this section, you will:

• Solve systems of three equations in three variables.
• Identify inconsistent systems of equations containing three variables.
• Express the solution of a system of dependent equations containing three variables.

Figure 1. (credit: "Elembis," Wikimedia Commons)

John received an inheritance of $12,000 that he divided into three parts and invested in three ways: in a money-market fund paying 3% annual interest; in municipal bonds paying 4% annual interest; and in mutual funds paying 7% annual interest. John invested $4,000 more in mutual funds than in municipal bonds. He earned $670 in interest the first year. How much did John invest in each type of fund?

Understanding the correct approach to setting up problems such as this one makes finding a solution a matter of following a pattern.
We will solve this and similar problems involving three equations and three variables in this section. Doing so uses similar techniques as those used to solve systems of two equations in two variables. However, finding solutions to systems of three equations requires a bit more organization and a touch of visual gymnastics.

Solving Systems of Three Equations in Three Variables

In order to solve systems of equations in three variables, known as three-by-three systems, the primary tool we will be using is called Gaussian elimination, named after the prolific German mathematician Karl Friedrich Gauss. While there is no definitive order in which operations are to be performed, there are specific guidelines as to what type of moves can be made. We may number the equations to keep track of the steps we apply. The goal is to eliminate one variable at a time to achieve upper triangular form, the ideal form for a three-by-three system because it allows for straightforward back-substitution to find a solution $(x, y, z)$, which we call an ordered triple. A system in upper triangular form looks like the following:

$\begin{aligned} Ax + By + Cz &= D \\ Ey + Fz &= G \\ Hz &= K. \end{aligned}$

The third equation can be solved for $z$, and then we back-substitute to find $y$ and $x$. To write the system in upper triangular form, we can perform the following operations:

1. Interchange the order of any two equations.
2. Multiply both sides of an equation by a nonzero constant.
3. Add a nonzero multiple of one equation to another equation.

The solution set to a three-by-three system is an ordered triple $\{(x, y, z)\}$. Graphically, the ordered triple defines the point that is the intersection of three planes in space. You can visualize such an intersection by imagining any corner in a rectangular room. A corner is defined by three planes: two adjoining walls and the floor (or ceiling). Any point where two walls and the floor meet represents the intersection of three planes.

Number of Possible Solutions

(Figure) and (Figure) illustrate possible solution scenarios for three-by-three systems. Systems that have a single solution are those which, after elimination, result in a solution set consisting of an ordered triple. Graphically, the ordered triple defines a point that is the intersection of three planes in space. Systems that have an infinite number of solutions are those which, after elimination, result in an expression that is always true, such as $0 = 0$. Graphically, an infinite number of solutions represents a line or coincident plane that serves as the intersection of three planes in space. Systems that have no solution are those that, after elimination, result in a statement that is a contradiction, such as $0 = 3$. Graphically, a system with no solution is represented by three planes with no point in common.

Figure 2. (a) Three planes intersect at a single point, representing a three-by-three system with a single solution. (b) Three planes intersect in a line, representing a three-by-three system with infinite solutions.

Figure 3. All three figures represent three-by-three systems with no solution. (a) The three planes intersect with each other, but not at a common point. (b) Two of the planes are parallel and intersect with the third plane, but not with each other. (c) All three planes are parallel, so there is no point of intersection.

Determining Whether an Ordered Triple Is a Solution to a System

Determine whether the ordered triple is a solution to the system.
Show Solution: We will check each equation by substituting in the values of the ordered triple for $x$, $y$, and $z$. The ordered triple is indeed a solution to the system.

Given a linear system of three equations, solve for three unknowns.

1. Pick any pair of equations and solve for one variable.
2. Pick another pair of equations and solve for the same variable.
3. You have created a system of two equations in two unknowns. Solve the resulting two-by-two system.
4. Back-substitute known variables into any one of the original equations and solve for the missing variable.

Solving a System of Three Equations in Three Variables by Elimination

Find a solution to the following system:

There will always be several choices as to where to begin, but the most obvious first step here is to eliminate by adding equations (1) and (2). The second step is multiplying equation (1) by and adding the result to equation (3). These two steps will eliminate the variable. In equations (4) and (5), we have created a new two-by-two system. We can solve for by adding the two equations. Choosing one equation from each new system, we obtain the upper triangular form. Next, we back-substitute into equation (4) and solve. Finally, we can back-substitute into equation (1). This will yield the remaining value, and the solution is the ordered triple. See (Figure).

Solving a Real-World Problem Using a System of Three Equations in Three Variables

In the problem posed at the beginning of the section, John invested his inheritance of $12,000 in three different funds: part in a money-market fund paying 3% interest annually; part in municipal bonds paying 4% annually; and the rest in mutual funds paying 7% annually. John invested $4,000 more in mutual funds than he invested in municipal bonds. The total interest earned in one year was $670. How much did he invest in each type of fund?

To solve this problem, we use all of the information given and set up three equations. First, we assign a variable to each of the three investment amounts: let $x$ be the amount invested in the money-market fund, $y$ the amount invested in municipal bonds, and $z$ the amount invested in mutual funds. The first equation indicates that the sum of the three principal amounts is $12,000:

$x + y + z = 12{,}000.$

We form the second equation according to the information that John invested $4,000 more in mutual funds than he invested in municipal bonds:

$z = y + 4{,}000.$

The third equation shows that the total amount of interest earned from each fund equals $670:

$0.03x + 0.04y + 0.07z = 670.$

Then, we write the three equations as a system. To make the calculations simpler, we can multiply the third equation by 100. Thus,

$3x + 4y + 7z = 67{,}000.$

Step 1. Interchange equation (2) and equation (3) so that the two equations with three variables will line up.

Step 2. Multiply equation (1) by $-3$ and add to equation (2). Write the result as row 2.

Step 3. Add equation (2) to equation (3) and write the result as equation (3).

Step 4. Solve for $z$ in equation (3). Back-substitute that value in equation (2) and solve for $y$. Then, back-substitute the values for $z$ and $y$ into equation (1) and solve for $x$.

John invested $2,000 in a money-market fund, $3,000 in municipal bonds, and $7,000 in mutual funds.

Solve the system of equations in three variables.
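As an illustrative aside (not part of the original text), the investment example can also be checked numerically. The sketch below assumes NumPy is available and writes the three equations in matrix form, with $x$, $y$ and $z$ denoting the money-market, municipal-bond and mutual-fund amounts, following the assignment used above.

```python
import numpy as np

# Coefficient matrix and right-hand side for
#   x +     y +     z = 12000        (total inheritance)
#       -   y +     z =  4000        (mutual funds exceed bonds by $4,000)
#  0.03x + 0.04y + 0.07z = 670       (total interest earned)
A = np.array([[1.0,  1.0,  1.0],
              [0.0, -1.0,  1.0],
              [0.03, 0.04, 0.07]])
b = np.array([12_000.0, 4_000.0, 670.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # expected: 2000.0 3000.0 7000.0
```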
Identifying Inconsistent Systems of Equations Containing Three Variables

Just as with systems of equations in two variables, we may come across an inconsistent system of equations in three variables, which means that it does not have a solution that satisfies all three equations. The equations could represent three parallel planes, two parallel planes and one intersecting plane, or three planes that intersect the other two but not at the same location. The process of elimination will result in a false statement, such as 3 = 0 or some other contradiction.

Solving an Inconsistent System of Three Equations in Three Variables

Solve the following system.

Looking at the coefficients, we can see that we can eliminate one of the variables by adding equation (1) to equation (2). Next, we multiply equation (1) by a suitable constant and add it to equation (3). Then, we multiply equation (4) by 2 and add it to equation (5). The final equation is a contradiction, so we conclude that the system of equations is inconsistent and, therefore, has no solution. In this system, each plane intersects the other two, but not at the same location. Therefore, the system is inconsistent.

Solve the system of three equations in three variables. No solution.

Expressing the Solution of a System of Dependent Equations Containing Three Variables

We know from working with systems of equations in two variables that a dependent system of equations has an infinite number of solutions. The same is true for dependent systems of equations in three variables. An infinite number of solutions can result from several situations. The three planes could be the same, so that a solution to one equation will be the solution to the other two equations. All three equations could be different but they intersect on a line, which has infinite solutions. Or two of the equations could be the same and intersect the third on a line.

Finding the Solution to a Dependent System of Equations

Find the solution to the given system of three equations in three variables.

First, we can multiply equation (1) by a suitable constant and add it to equation (2). We do not need to proceed any further. The result we get is an identity, which tells us that this system has an infinite number of solutions. There are other ways to begin to solve this system, such as multiplying equation (3) by a constant and adding it to equation (1). We then perform the same steps as above and find the same result, an identity. When a system is dependent, we can find general expressions for the solutions. Adding equations (1) and (3), we obtain an equation in two of the variables. We then solve the resulting equation for one of them, back-substitute that expression into one of the original equations, and solve for another variable. The general solution expresses two of the variables in terms of the remaining one, which can be any real number; the values of the other two variables depend on the value selected for it. As shown in (Figure), two of the planes are the same and they intersect the third plane on a line. The solution set is infinite, as all points along the intersection line will satisfy all three equations.

Does the generic solution to a dependent system always have to be written in terms of x? No, you can write the generic solution in terms of any of the variables, but it is common to write it in terms of x and, if needed, x and y.

Infinite number of solutions, which can be written in the parametric form described above.
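Both failure modes above show up mechanically during elimination: a row that reduces to a false statement such as 0 = 3 signals an inconsistent system, while a row that reduces to the identity 0 = 0 signals a dependent one. The Python sketch below makes that classification explicit; the two sample systems at the bottom are illustrative stand-ins (the systems from the examples above are not reproduced here), and the function name is ours.

from fractions import Fraction

def classify(aug):
    # Row-reduce an augmented 3x4 matrix and report the type of solution set.
    A = [[Fraction(v) for v in row] for row in aug]
    rank = 0
    for col in range(3):
        pivot = next((r for r in range(rank, 3) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(3):
            if r != rank and A[r][col] != 0:
                factor = A[r][col] / A[rank][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[rank])]
        rank += 1
    if any(all(a == 0 for a in row[:3]) and row[3] != 0 for row in A):
        return "inconsistent: no solution (a row reduces to a contradiction)"
    if rank < 3:
        return "dependent: infinitely many solutions (a row reduces to an identity)"
    return "independent: exactly one solution"

# Two planes that are parallel (inconsistent) vs. two equations that coincide (dependent).
print(classify([[1, 1, 1, 2], [1, 1, 1, 3], [1, -1, 2, 0]]))
print(classify([[1, 1, 1, 2], [2, 2, 2, 4], [1, -1, 0, 1]]))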
Access these online resources for additional instruction and practice with systems of equations in three variables.

Ex 1: System of Three Equations with Three Unknowns Using Elimination
Ex 2: System of Three Equations with Three Unknowns Using Elimination

A solution set is an ordered triple that represents the intersection of three planes in space. See (Figure).
A system of three equations in three variables can be solved by using a series of steps that forces a variable to be eliminated. The steps include interchanging the order of equations, multiplying both sides of an equation by a nonzero constant, and adding a nonzero multiple of one equation to another equation. See (Figure).
Systems of three equations in three variables are useful for solving many different types of real-world problems. See (Figure).
A system of equations in three variables is inconsistent if no solution exists. After performing elimination operations, the result is a contradiction. See (Figure). Systems of equations in three variables that are inconsistent could result from three parallel planes, two parallel planes and one intersecting plane, or three planes that intersect the other two but not at the same location.
A system of equations in three variables is dependent if it has an infinite number of solutions. After performing elimination operations, the result is an identity. See (Figure). Systems of equations in three variables that are dependent could result from three identical planes, three planes intersecting at a line, or two identical planes that intersect the third on a line.

Section Exercises

Can a linear system of three equations have exactly two solutions? Explain why or why not. No, there can be only one, zero, or infinitely many solutions.

If a given ordered triple solves the system of equations, is that solution unique? If so, explain why. If not, give an example where it is not unique.

If a given ordered triple does not solve the system of equations, is there no solution? If so, explain why. If not, give an example. Not necessarily. There could be zero, one, or infinitely many solutions. For example, a particular ordered triple may fail to solve a given system, but that does not mean that the system has no solution.

Using the method of addition, is there only one way to solve the system?

Can you explain whether there can be only one method to solve a linear system of equations? If yes, give an example of such a system of equations. If not, explain why not. Every system of equations can be solved graphically, by substitution, and by addition. However, systems of three equations become very complex to solve graphically, so other methods are usually preferable.

Algebraic

For the following exercises, determine whether the ordered triple given is the solution to the system of equations.

For the following exercises, solve each system by substitution.

For the following exercises, solve each system by Gaussian elimination.

No solutions exist.

For the following exercises, solve the system for x, y, and z.

Real-World Applications

Three even numbers sum up to 108. The smaller is half the larger and the middle number is the larger. What are the three numbers?

Three numbers sum up to 147. The smallest number is half the middle number, which is half the largest number. What are the three numbers?

At a family reunion, there were only blood relatives, consisting of children, parents, and grandparents, in attendance. There were 400 people total. There were twice as many parents as grandparents, and 50 more children than parents. How many children, parents, and grandparents were in attendance? 70 grandparents, 140 parents, 190 children.

An animal shelter has a total of 350 animals comprised of cats, dogs, and rabbits. If the number of rabbits is 5 less than one-half the number of cats, and there are 20 more cats than dogs, how many of each animal are at the shelter?
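As one way to check the family-reunion answer quoted above, the short sketch below substitutes the two stated relationships into the head-count equation; the variable names are ours.

# g = grandparents, p = parents, c = children, with p = 2g, c = p + 50, g + p + c = 400.
g = (400 - 50) // 5      # g + 2g + (2g + 50) = 400, so 5g = 350
p = 2 * g
c = p + 50
print(g, p, c)           # 70 140 190, matching the stated answer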
Your roommate, Sarah, offered to buy groceries for you and your other roommate. The total bill was $82. She forgot to save the individual receipts but remembered that your groceries were $0.05 cheaper than half of her groceries, and that your other roommate's groceries were $2.10 more than your groceries. How much was each person's share of the groceries? Your share was $19.95, Sarah's share was $40, and your other roommate's share was $22.05.

Your roommate, John, offered to buy household supplies for you and your other roommate. You live near the border of three states, each of which has a different sales tax. The total amount of money spent was $100.75. Your supplies were bought with 5% tax, John's with 8% tax, and your third roommate's with 9% sales tax. The total amount of money spent without taxes is $93.50. If your supplies before tax were $1 more than half of what your third roommate's supplies were before tax, how much did each of you spend? Give your answer both with and without taxes.

Three coworkers work for the same employer. Their jobs are warehouse manager, office manager, and truck driver. The sum of the annual salaries of the warehouse manager and office manager is $82,000. The office manager makes $4,000 more than the truck driver annually. The annual salaries of the warehouse manager and the truck driver total $78,000. What is the annual salary of each of the co-workers? There are infinitely many solutions; we need more information.

At a carnival, $2,914.25 in receipts were taken at the end of the day. The cost of a child's ticket was $20.50, an adult ticket was $29.75, and a senior citizen ticket was $15.25. There were twice as many senior citizens as adults in attendance, and 20 more children than senior citizens. How many child, adult, and senior citizen tickets were sold?

A local band sells out for their concert. They sell all 1,175 tickets for a total purse of $28,112.50. The tickets were priced at $20 for student tickets, $22.50 for children, and $29 for adult tickets. If the band sold twice as many adult tickets as children's tickets, how many of each type was sold? 500 students, 225 children, and 450 adults.

In a bag, a child has 325 coins worth $19.50. There were three types of coins: pennies, nickels, and dimes. If the bag contained the same number of nickels as dimes, how many of each type of coin was in the bag?

Last year, at Haven's Pond Car Dealership, for a particular model of BMW, Jeep, and Toyota, one could purchase all three cars for a total of $140,000. This year, due to inflation, the same cars would cost $151,830. The cost of the BMW increased by 8%, the Jeep by 5%, and the Toyota by 12%. If the price of last year's Jeep was $7,000 less than the price of last year's BMW, what was the price of each of the three cars last year? The BMW was $49,636, the Jeep was $42,636, and the Toyota was $47,727.

A recent college graduate took advantage of his business education and invested in three investments immediately after graduating. He invested $80,500 into three accounts: one that paid 4% simple interest, one that paid 3 1/8% simple interest, and one that paid simple interest at a third rate. He earned $2,670 interest at the end of one year. If the amount of the money invested in the second account was four times the amount invested in the third account, how much was invested in each account?
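As a check on the concert-ticket answer quoted above (500 students, 225 children, 450 adults), the snippet below simply plugs those numbers back into the three conditions of the problem.

students, children, adults = 500, 225, 450
print(students + children + adults == 1175)                        # all tickets sold
print(20 * students + 22.50 * children + 29 * adults == 28112.50)  # total purse
print(adults == 2 * children)                                      # twice as many adult as children tickets
# Each line prints True.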
You inherit one million dollars. You invest it all in three accounts for one year. The first account pays 3% compounded annually, the second account pays 4% compounded annually, and the third account pays 2% compounded annually. After one year, you earn $34,000 in interest. If you invest four times as much money in the account that pays 3% as in the account that pays 2%, how much did you invest in each account? $400,000 in the account that pays 3% interest, $500,000 in the account that pays 4% interest, and $100,000 in the account that pays 2% interest.

You inherit one hundred thousand dollars. You invest it all in three accounts for one year. The first account pays 4% compounded annually, the second account pays 3% compounded annually, and the third account pays 2% compounded annually. After one year, you earn $3,650 in interest. If you invest five times as much money in the account that pays 4% as in the account that pays 3%, how much did you invest in each account?

The top three countries in oil consumption in a certain year are as follows: the United States, Japan, and China. In millions of barrels per day, the three top countries consumed 39.8% of the world's consumed oil. The United States consumed 0.7% more than four times China's consumption. The United States consumed 5% more than triple Japan's consumption. What percent of the world oil consumption did the United States, Japan, and China consume?[1] The United States consumed 26.3%, Japan 7.1%, and China 6.4% of the world's oil.

The top three countries in oil production in the same year are Saudi Arabia, the United States, and Russia. In millions of barrels per day, the top three countries produced 31.4% of the world's produced oil. Saudi Arabia and the United States combined for 22.1% of the world's production, and Saudi Arabia produced 2% more oil than Russia. What percent of the world oil production did Saudi Arabia, the United States, and Russia produce?[2]

The top three sources of oil imports for the United States in the same year were Saudi Arabia, Mexico, and Canada. The three top countries accounted for 47% of oil imports. The United States imported 1.8% more from Saudi Arabia than they did from Mexico, and 1.7% more from Saudi Arabia than they did from Canada. What percent of the United States oil imports were from these three countries?[3] Imports from Saudi Arabia were 16.8%, from Canada 15.1%, and from Mexico 15.0%.

The top three oil producers in the United States in a certain year are the Gulf of Mexico, Texas, and Alaska. The three regions were responsible for 64% of the United States oil production. The Gulf of Mexico and Texas combined for 47% of oil production. Texas produced 3% more than Alaska. What percent of United States oil production came from these regions?[4]

At one time, in the United States, 398 species of animals were on the endangered species list. The top groups were mammals, birds, and fish, which comprised 55% of the endangered species. Birds accounted for 0.7% more than fish, and fish accounted for 1.5% more than mammals. What percent of the endangered species came from mammals, birds, and fish? Birds were 19.3%, fish were 18.6%, and mammals were 17.1% of endangered species.
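As a final worked check, the snippet below re-derives the million-dollar inheritance answer quoted above by substituting the constraint x = 4z into the other two equations, where x, y, and z are the amounts invested at 3%, 4%, and 2%.

# x + y + z = 1_000_000, 3x + 4y + 2z = 3_400_000 (interest equation times 100), x = 4z.
# With x = 4z and y = 1_000_000 - 5z:  14z + 4*(1_000_000 - 5z) = 3_400_000.
z = (4_000_000 - 3_400_000) // 6
x = 4 * z
y = 1_000_000 - 5 * z
print(x, y, z)   # 400000 500000 100000, matching the stated answer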
Meat consumption in the United States can be broken into three categories: red meat, poultry, and fish. If fish makes up 4% less than one-quarter of poultry consumption, and red meat consumption is 18.2% higher than poultry consumption, what are the percentages of meat consumption?[5]

solution set: the set of all ordered pairs or triples that satisfy all equations in a system of equations.

"Oil reserves, production and consumption in 2001," accessed April 6, 2014, http://scaruffi.com/politics/oil.html.
"USA: The coming global oil crisis," accessed April 6, 2014, http://www.oilcrisis.com/us/.
"The United States Meat Industry at a Glance," accessed April 6, 2014, http://www.meatami.com/ht/d/sp/i/47465/pid/47465.

Systems of Linear Equations: Three Variables by OpenStax is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
The sum of two numbers $x$ and $y$ is 399, and the value of the fraction $\frac{x}{y}$ is 0.9. What is the value of $y - x$? We have the system of equations: \begin{align*} x + y &= 399 \\ \frac{x}{y} &= 0.9 \\ \end{align*} From the second equation, multiplying both sides by $y$ gives $x=0.9y$. Next, substituting this expression into the first equation to eliminate $x$ gives $0.9y+y=399$, that is, $1.9y=399$, so $y=210$. Plugging this value into the first equation of the original system gives $x+210=399$, or $x=189$. Thus, $y-x=210-189=\boxed{21}$.
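A quick numeric sanity check of this answer (illustrative only):

x, y = 189, 210
print(x + y == 399, x / y == 0.9, y - x)   # True True 21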
\begin{document} \author{Bena Tshishiku and Genevieve Walsh} \date{\today} \title{On groups with $S^2$ Bowditch boundary} \begin{abstract} We prove that a relatively hyperbolic pair $(G,\mathcal P)$ has Bowditch boundary a 2-sphere if and only if it is a 3-dimensional Poincar\'e duality pair. We prove this by studying the relationship between the Bowditch and Dahmani boundaries of relatively hyperbolic groups. \end{abstract} \section{Duality for groups with Bowditch boundary $S^2$} The goal of this paper is to study the duality properties of relatively hyperbolic pairs $(G,\mathcal P)$. This builds on work of Bestvina--Mess \cite{bestvina-mess}, who show that the duality properties of a hyperbolic group $G$ are encoded in its Gromov boundary $\pa G$; for example, a hyperbolic group $G$ with Gromov boundary $\pa G\simeq S^{n-1}$ is a $\PD(n)$ group. By analogy, one might hope for a similar result for relatively hyperbolic pairs $(G,\mathcal P)$ with the Gromov boundary replaced by the Bowditch boundary $\pa_B(G,\mathcal P)$. This would follow immediately from \cite{bestvina-mess} if the Bowditch boundary gave a Z-set compactification of $G$, but unfortunately this is not the case, and \cite{bestvina-mess} does not imply that $(G, \mathcal{P})$ is a duality pair whenever $\pa_B(G, \mathcal{P})\simeq S^{n-1}$. Instead we work with the \emph{Dahmani boundary} $\pa_D(G, \mathcal{P})$ (see \S\ref{s:Dahmani}), which does give a Z-set compactification. Our main theorem determines the Dahmani boundary when $\pa_B(G,\mathcal P)\simeq S^2$. \begin{theorem}\label{thm:sierpinski} A relatively hyperbolic group $(G, \mathcal{P})$ with Bowditch boundary $\pa_B(G, \mathcal{P})\simeq S^2$ has Dahmani boundary $\pa_D(G, \mathcal{P})\simeq \mathscr S$ a Sierpinski carpet. \end{theorem} As a corollary, we find that if $\pa_B(G,\mathcal P)$ is a 2-sphere then the same is true for the Dahmani boundary of the \emph{double of $G$ along $\mathcal P$} (see \S\ref{sec:double} for the definition). \begin{corollary}\label{cor:sphere} Let $(G, \mathcal{P})$ be a relatively hyperbolic pair, and let $G_\delta$ denote the double of $G$ along $\mathcal P$. If $\pa_B(G, \mathcal{P})\simeq S^2$, then $\pa_D(G_\delta,\mathcal{P})\simeq S^2$. \end{corollary} From Corollary \ref{cor:sphere} we obtain the following corollary, which is our main application. A finitely presented group $G$ is an \emph{oriented Poincar\'e duality group of dimension $n$} (a $\PD(n)$ group) if for each $G$-module $A$ there are isomorphisms $H^i(G;A)\rightarrow H_{n-i}(G;A)$ for each $i$, induced by cap product with a generator of $H_n(G;\Z)$. A relative version of this definition was introduced by Bieri--Eckmann \cite{bieri-eckmann-pdpairs}. We will only need a special case: a group pair $(G,\mathcal P)$ is a \emph{$\PD(3)$ pair} if each $P\in\mathcal P$ is the fundamental group of a closed surface and the double of $G$ along $\mathcal P$ is a $\PD(3)$ group; c.f.\ \cite[Cor.\ 8.5]{bieri-eckmann-pdpairs}. \begin{corollary}\label{cor:duality} Let $(G, \mathcal{P})$ be a torsion-free relatively hyperbolic pair with Bowditch boundary $\pa_B(G, \mathcal{P})\simeq S^2$. Then $(G, \mathcal{P})$ is a $\PD(3)$ pair. \end{corollary} The converse is also true. \begin{theorem} \label{th:converse} Let $(G, \mathcal{P})$ be a relatively hyperbolic pair. If $(G, \mathcal{P})$ is a $\PD(3)$ pair, then $\pa_B(G, \mathcal{P})\simeq S^2$.
\end{theorem} {\it Remark.} As a motivating example of Corollary \ref{cor:duality}, suppose $G$ is the fundamental group of a hyperbolic 3-manifold $M$ with $k$ cusps and $\ell$ totally geodesic boundary components. Then $(G, \mathcal{P})$ is relatively hyperbolic, where $\mathcal{P}$ consists of conjugates of the boundary and cusp subgroups $\lbrace P_1,\ldots,P_{k+\ell} \rbrace$. On the one hand, $(G,\mathcal P)$ is a $\PD(3)$ pair because $M$ is a $K(G,1)$ and manifolds satisfy Poincar\'e duality. (Alternatively, remove neighborhoods of the cusps and take the double.) On the other hand, Corollary \ref{cor:duality} gives a geometric-group-theoretic proof since $\pa_B(G,\mathcal P)\simeq S^2$ (see e.g.\ \cite{ruane,tran}). A different, homological proof of Corollary \ref{cor:sphere} and Theorem \ref{th:converse} is given in Manning-Wang \cite[Corollary 4.3]{manning-wang}. {\it Relation to the Wall and Cannon conjectures.} The Wall conjecture \cite{wall_problems} posits (in dimension 3) that any $\PD(3)$ group is the fundamental group of a closed aspherical 3-manifold. Similarly, one would conjecture that if $(G,\mathcal P)$ is a $\PD(3)$ pair, then $G$ is the fundamental group of an aspherical 3-manifold with boundary, where $\mathcal P$ is the collection of conjugacy classes of the boundary subgroups. \begin{conjecture} [The relative Cannon conjecture] Let $(G, \mathcal{P})$ be a relatively hyperbolic group pair with $G$ torsion-free. If $\pa_B(G, \mathcal{P})\simeq S^2$, then $G$ is the fundamental group of a finite volume hyperbolic $3$-manifold $M$. Furthermore, the peripheral groups are the fundamental groups of the cusps and totally geodesic boundary components of $M$. \end{conjecture} \begin{theorem} \label{thm:wall} If the Wall conjecture is true, then the relative Cannon conjecture is true. \end{theorem} Compare with \cite{kk}, which is similar. A slightly different theorem, that the Cannon conjecture implies the relative Cannon conjecture when the peripheral subgroups are $\Z^2$, is given in \cite{GMS} with a completely different proof. Our version follows from Theorem \ref{thm:sierpinski}, Corollary \ref{cor:sphere}, and a result of Kapovich and Kleiner on the uniqueness of peripheral structures \cite[Theorem 1.5]{kk_duality}. Martin and Skora \cite{martin-skora} conjecture that convergence groups can be realized as Kleinian groups, which encompasses the Cannon and relative Cannon conjectures. We now explain the rough outline for Theorem \ref{thm:sierpinski}. \begin{enumerate} \item In general there is a continuous surjection $c:\pa_D(G,\mathcal P)\rightarrow\pa_B(G,\mathcal P)$. We collect some facts about the topology on $\pa_D(G,\mathcal P)$ and this map in Proposition \ref{prop:quotient}. In the case of Theorem \ref{thm:sierpinski}, we have a map $c:\pa_D(G,\mathcal P)\rightarrow S^2$ such that $c^{-1}(z)$ is either a single point or a circle for each $z\in S^2$. \item In Lemma \ref{lem:sierpinski} we identify conditions on a map $X\rightarrow S^2$ that are sufficient to conclude that $X$ is a Sierpinski carpet. This gives a characterization of the Sierpinski carpet, which may be of independent interest. \item We verify that the conditions of Lemma \ref{lem:sierpinski} are satisfied for the map $c:\pa_D(G,\mathcal P)\rightarrow S^2$.
One of the difficult parts is to show that if $\pa_B(G,\mathcal P)\simeq S^2$, then if $\pa_D(G,\mathcal P)_P$ is the quotient of $\pa_D(G,\mathcal P)$ obtained by collapsing all but one of the peripheral circles to points, then $\pa_D(G,\mathcal P)_P$ is homeomorphic to the closed disk. \end{enumerate} {\it Remark.} It would be interesting to know a version of Theorem \ref{thm:sierpinski} when $\pa_B(G,\mathcal P)\simeq S^{n-1}$ for $n>3$, i.e.\ that in this case $\pa_D(G,\mathcal P)$ is an $(n-2)$-dimensional Sierpinski carpet. The methods of this paper show that this is true if one knows that each $P\in\mathcal P$ admits a $\mathcal Z$-boundary $\pa P\simeq S^{n-2}$. When $n=3$ this is automatic because the peripheral subgroups are always surface groups. {\it Section outline.} In \S\ref{s:Dahmani} we collect some facts about the Bowditch and Dahmani boundaries of a relatively hyperbolic group and their relation. In \S\ref{S:proofof2} we prove Theorem \ref{thm:sierpinski}, and in \S\ref{sec:cor} we prove Corollaries \ref{cor:sphere} and \ref{cor:duality} and Theorems \ref{th:converse} and \ref{thm:wall}. {\it Acknowledgements.} The authors thank Francois Dahmani for helpful conversations about his work, and thank Craig Guilbault for help with $\mathcal Z$-structures. The authors are grateful to the referee for carefully reading the paper, catching errors, and offering helpful observations that significantly improved the paper. The authors also acknowledge support from NSF grants DMS 1502794 and DMS 1709964, respectively. \section{Relatively hyperbolic groups and their boundaries} \label{s:Dahmani} We will assume throughout that $G$ is finitely generated and thus so are the peripheral subgroups \cite{osin}. There are different notions of boundary for a relatively hyperbolic group. The most general definition is due to Bowditch \cite{bowditch}. Another boundary was defined by Dahmani \cite{dahmani-boundary} in the case when each peripheral subgroup \emph{admits a boundary}, i.e.\ for each $P\in\mathcal P$ there is a space $\pa P$ so that $P\cup\pa P$ is compact, metrizable, and $P\subset P\cup\pa P$ is dense. In this section we describe the Bowditch and Dahmani boundaries. Our description of Dahmani's boundary differs slightly from that in \cite{dahmani-boundary} because we use the coned-off Cayley graph instead of the collapsed Cayley graph. This is required in order to allow $\mathcal P$ to contain more than one conjugacy class, as discussed in \cite[\S6]{dahmani-boundary}. Everything in this section will be done for general relatively hyperbolic groups, although the case with one conjugacy class of peripheral subgroups is the most rigorous case in \cite{dahmani-boundary}. In the next section we will specialize to the case $\pa_B(G,\mathcal P)\simeq S^2$. \subsection{Relatively hyperbolic groups and the Bowditch boundary}\label{sec:bowditch-boundary} Below $G$ is a group and $\mathcal P$ is a collection of subgroups of $G$ that consists of a finite number of conjugacy classes of $G$. Some authors use $\mathcal P$ to refer to a collection of conjugacy representatives, but we do not use this convention. This causes a minor notational conflict since the notion of PD($n$) pair (as discussed in the previous section) is reserved for a group with respect to a \emph{finite} collection of subgroups. However, this should not cause any confusion as we can pick any set $\{P_1,\ldots,P_d\}$ of conjugacy class representatives to be this finite collection. 
For a subgroup $P<G$ and $a\in G$, we denote ${}^aP:=aPa^{-1}$ for the (left) action of $G$ by conjugation. {\it The coned--off Cayley graph.} Fix a relatively hyperbolic group $(G,\mathcal P)$, and let $P_1,\ldots, P_d$ be representatives for the conjugacy classes in $\mathcal{P}$. Let $S$ be a generating set for $G$ that contains generating sets $S_i$ for each $P_i$. Then the Cayley graph $\Gamma(G)=\Gamma(G,S)$ naturally contains the Cayley graphs $\Gamma(P_i,S_i)$ for each $i=1,\ldots,d$. If $P\in\mathcal P$ and $P=aP_ia^{-1}$, then we denote by $\Gamma(P)\subset\Gamma(G)$ the subgraph $a\>\Gamma(P_i,S_i)$; note that $\Gamma(P)$ is isomorphic to a Cayley graph for $P$ since $\Gamma({}^aP_i,{}^aS_i)\simeq\Gamma(P_i,S_i)\simeq a\>\Gamma(P_i,S_i)$. We form the {\it coned off Cayley graph} $\hat\Gamma=\hat \Gamma(G, \mathcal{P}, S)$ by adding a vertex $*_{P}$ for each $P\in\mathcal P$ and adding edges of length $1/2$ from $*_{P}$ to each vertex of $\Gamma(P)\subset\Gamma(G)$. An oriented path $\gamma$ in $\hat\Gamma$ is said to {\it penetrate} $P\in\mathcal P$ if it passes through the cone point $*_{P}$; its \emph{entering} and \emph{exiting} vertices are the vertices immediately before and after $*_{P}$ on $\gamma$. The path is {\it without backtracking} if once it penetrates $P\in\mathcal P$, it does not penetrate $P$ again. \begin{definition} The triple $(G, \mathcal{P} ,S)$ is said to have {\it bounded coset penetration} if for each $\lambda\ge1$, there is a constant $a=a(\lambda)$ such that if $\gamma$ and $\gamma'$ are $(\lambda,0)$ quasi-geodesics without backtracking in $\hat \Gamma$ and with the same endpoints, then \begin{enumerate} \item[(i)] if $\gamma$ penetrates some $P\in\mathcal P$, but $\gamma'$ does not, then the distance between the entering and exiting vertices of $\gamma$ in $\Gamma(P)$ is at most $a$; and \item[(ii)] if $\gamma$ and $\gamma'$ both penetrate $P$, then the distance between the entering vertices of $\gamma$ and $\gamma'$ in $\Gamma(P)$ is at most $a$, and similarly for the exiting vertices. \end{enumerate} \end{definition} {\it Relative hyperbolicity and Bowditch boundary.} The pair $(G, \mathcal{P})$ is called \emph{relatively hyperbolic} when $\hat \Gamma(G, \mathcal{P}, S)$ is hyperbolic and satisfies bounded coset penetration \cite{farb-rel-hyp}. To equip $(G,\mathcal P)$ with a boundary, Bowditch \cite{bowditch} used an equivalent definition: $(G,\mathcal P)$ is relatively hyperbolic if there exists a \emph{fine} $\delta$-hyperbolic graph $K$ with a $G$-action so that there are finitely many orbits of edges and $\mathcal P$ is the set of infinite vertex stabilizers. A graph is {\it fine} if each edge is in finitely many cycles of length $n$, for each $n$. Then the Bowditch boundary is defined as $\pa_B(G,\mathcal P):=\pa K\cup V_\infty(K)$, where $V_\infty(K)\subset V(K)$ is the set of vertices of $K$ with infinite valence. If the $G$-action on $K$ is geometric, then $\mathcal P=\vn$ and this recovers the Gromov boundary of a hyperbolic group $\pa G=\pa K$. An alternate definition of relatively hyperbolic is that $G$ acts geometrically finitely on a proper geodesic metric space \cite[Definition 1]{bowditch}. In particular, this implies that for each $P\in\mathcal P$, the action of $P$ on $\pa_B(G,\mathcal P)\setminus\{*_P\}$ is properly discontinuous and cocompact. For the many equivalent notions of relative hyperbolicity, see \cite{hruskadef}. 
If $(G,\mathcal P)$ is relatively hyperbolic, then the coned-off Cayley graph $\hat\Gamma$ is a fine hyperbolic graph \cite{Dahmanithesis}. In this case $V_\infty(\hat\Gamma)\simeq\mathcal P$, so we can describe $\pa_B(G,\mathcal P)$ as \begin{equation}\label{eqn:bowditch-boundary}\pa_B(G,\mathcal P)=\pa\hat\Gamma\cup\big(\cup_{P\in\mathcal P} \{*_{P}\}\big).\end{equation} {\it Topology on the Bowditch boundary.} For a finite subset $A\subset V(\hat\Gamma)$ and $v\in \pa\hat\Gamma\cup V(\hat\Gamma)$, let $M(v,A)$ denote the collection of points $w$ in $\pa_B(G,\mathcal P)$ so that there exists a geodesic from $v$ to $w$ that avoids $A$. This forms a basis for the topology on $\partial_B(G, \mathcal{P})$, see \cite[Section 8]{bowditch}. In particular, a subset $U\subset\pa_B(G,\mathcal P)$ is open if for each $v\in U$, there exists a finite set $A\subset V(\hat\Gamma)$ so that $M(v,A)\subset U$. A different basis for the topology is the collection of the sets $M_{(\lambda,c)}(v,A)$ of points connected to $v$ by a $(\lambda,c)$ quasi-geodesic that avoids $A$ (see \cite{bowditch} and \cite[\S3]{tran}). \subsection{$\mathcal{Z}$-structures on groups} Before we discuss the Dahmani boundary it will be useful to have the notion of a \emph{$\mathcal Z$-structure} on a group \cite{bestvina_localhomology}. This concept generalizes both (i) a $\CAT(0)$-metric space with its visual boundary, and (ii) the Rips complex of a hyperbolic group with its Gromov boundary \cite{bestvina-mess}. See \cite{ancel-guilbault} for more about $\mathcal Z$-structures. \begin{definition}[\cite{bestvina_localhomology}] A \emph{$\mathcal{Z}$-structure} on a torsion-free group $\Gamma$ is a pair of spaces $(\bar X, Z)$ such that \begin{enumerate} \item The space $\bar X$ is a Euclidean retract, i.e.\ $\bar X$ is compact, metrizable, finite dimensional, contractible, and locally contractible. \item The subspace $Z\subset\bar X$ is a $\mathcal Z$-set, i.e.\ for all $\epsilon$, there exists a map $f_\epsilon: \bar X \rightarrow \bar X \setminus Z$ that is $\epsilon$ close to the identity. \item The space $\bar X \setminus Z$ admits a proper, cocompact $\Gamma$ action. \item For any compact $K$ in $\bar X \setminus Z$, and any open cover $\mathcal{U}$ of $\bar{X}$, each translate $gK$ is contained in some $U_g \in \mathcal{U}$ for all but finitely many $g \in \Gamma$. \end{enumerate} \end{definition} If $(\bar X,Z)$ is a $\mathcal Z$-structure on $\Gamma$, then the space $Z$ is called a \emph{$\mathcal Z$-boundary} of $\Gamma$. In general, a $\mathcal Z$-boundary is not unique; however, the following theorem gives a uniqueness result for the $\mathcal Z$-boundary of a $\PD(n)$ group when $n\le3$. \begin{theorem} [\cite{bestvina-mess}]\label{thm:bestvina} Let $G$ be a torsion-free group that admits a $\mathcal{Z}$-structure $(\bar X, Z)$. Then $G$ is a $\PD(2)$ or a $\PD(3)$ group, respectively, exactly when $Z \simeq S^1$, or $Z \simeq S^2$, respectively. \end{theorem} Theorem \ref{thm:bestvina} follows directly from the proof of \cite[Cor.\ 1.3]{bestvina-mess}, together with the fact that a homology manifold that is a homology $k$-sphere is homeomorphic to $S^k$ when $k\le 2$ \cite[Rmk.\ 2.9]{bestvina_localhomology}. See also \cite[Thm 2.8]{bestvina_localhomology} for a generalization. \subsection{The Dahmani boundary and its topology} \label{sec:dahmani-boundary} Fix a relatively hyperbolic group $(G,\mathcal P)$. Assume that each $P\in\mathcal P$ admits a $\mathcal Z$-boundary $\pa P$. 
As a set, the Dahmani boundary is \begin{equation}\label{eqn:dahmani-boundary}\pa_D(G,\mathcal P)=\pa\hat\Gamma \cup\big(\cup_{P\in\mathcal P}\pa P\big).\end{equation} If $P$ acts on $\pa P$ for each $P\in\mathcal P$ and if $\pa P=\pa P'$ whenever $P$ and $P'$ are conjugate, then $G$ naturally acts on $\cup_{P\in\mathcal P}\pa P$, and so $G$ acts on $\pa_D(G,\mathcal P)$. There is a natural map $c:\pa_D(G,\mathcal P)\rightarrow\pa_B(G,\mathcal P)$ that is the identity on $\pa\hat\Gamma$ and sends $\pa P$ to $*_{P}$. This map is studied more in \S\ref{sec:compare} and will be important in \S\ref{S:proofof2}. {\it Topology on the Dahmani boundary.} The topology on $\pa_D(G,\mathcal P)$ has a basis consisting of two types of open sets (\ref{eqn:open1}) and (\ref{eqn:open2}) below. The first type is a neighborhood basis $\{U_x'\}$ of points $x$ in $\pa \hat\Gamma $. For $x\in\pa \hat\Gamma$, and for an open set $U_x\subset\pa_B(G,\mathcal P)$ containing $x$, define $U_x'\subset\pa_D(G,\mathcal P)$ by \begin{equation} \label{eqn:open1} U'_x = (U_x \cap \pa \hat\Gamma) \cup\big(\cup_{*_P\in U_x}\pa P\big). \end{equation} The second type is a neighborhood basis about points $x \in \pa P$. To describe it we first introduce some terminology. \begin{definition} For $P\in\mathcal P$ and a vertex $v\in\Gamma(P)\subset\Gamma(G)$, the \emph{shadow of $v$ with respect to $P$}, denoted $\Sh(v,P)$, is the set of endpoints in $\pa\hat\Gamma\cup\hat\Gamma$ of (non-backtracking) geodesic arcs and rays beginning at $v$ that immediately leave $\Gamma(P)$ (and do not pass through $*_{P}$). Furthermore, we define $\Sh_B(v,P)$ as the intersection of $\Sh(v,P)$ with $\pa_B(G,\mathcal P)\subset\pa\hat\Gamma\cup\hat\Gamma$, and we define $\Sh_D(v,P)\subset\pa_D(G,\mathcal P)$ as the preimage of $\Sh_B(v,P)$ under $c$. Note that by definition, $\Sh_B(v,P)\subset\pa_B(G,\mathcal P)\setminus\{*_{P}\}$. \end{definition} \begin{observation} \label{obs:covers} For each $P\in\mathcal P$, \[\bigcup_{v \in \Gamma(P)} Sh_B(v,P) = \pa_B(G,\mathcal P)\setminus\{*_{P}\}\>\>\text{ and so }\>\>\bigcup_{v \in \Gamma(P)} Sh_D(v,P) = \pa_D(G) \setminus \pa P.\] \end{observation} We now define a neighborhood basis $\{U_x'\}$ for $x\in \pa P$. For $x\in \pa P$ and a neighborhood $U_x$ of $x$ in $P\cup\pa P$, define $U_x'\subset\pa_D(G,\mathcal P)$ by \begin{equation} \label{eqn:open2} U_x'= (U_x \cap \partial P) \cup\big(\cup_{v\in U_x}\Sh_D(v,P)\big) \end{equation} We recap the above discussion. \begin{definition} \cite[Defn 3.3]{dahmani-boundary} \label{def:Dahmani} Let $(G, \mathcal P) $ be a relatively hyperbolic group. Assuming each $P\in\mathcal P$ admits a boundary the {\it Dahmani boundary}, $\pa_D(G, \mathcal P)$ is the set (\ref{eqn:dahmani-boundary}) with topology generated by open sets of the form (\ref{eqn:open1}) and (\ref{eqn:open2}). \end{definition} Dahmani \cite[Thm 3.1]{dahmani-boundary} proves that $\pa_D(G,\mathcal P)$ is compact and metrizable. {\it Remark.} There is a slight difference between our definition of the topology on $\pa_D(G,\mathcal P)$ and the definition in \cite{dahmani-boundary}. Instead of using endpoints of \emph{geodesics} (as in our definition of $\Sh(v,P)$), Dahmani uses endpoints of quasi-geodesics that are geodesics outside of a compact set. However, these give the same topology. One way to see this is to note that $\Sh_B(v,P)$ has the form $M(v,A)$ (c.f.\ \S\ref{sec:bowditch-boundary}) where $A$ is the finite set of vertices in $\Gamma(P)\cup\{*_P\}$ that are adjacent to $v$. 
(Note that the distance between any two vertices in $P$ is 1 in $\hat\Gamma$.) Bowditch \cite[\S8]{bowditch} proves that this gives a basis for the topology on $\pa_B(G, \mathcal{P})$. Furthermore, Bowditch shows that this is equivalent to the topology on $\partial_B(G, \mathcal{P})$ defined using $M_{(\lambda, c)}(v, A)$, defined above. It follows that the topology we defined is equivalent to Dahmani's definition. \subsection{Comparing the Bowditch and Dahmani boundaries}\label{sec:compare} Consider the {\it collapsing map} \begin{equation}\label{eqn:collapse-map}c:\pa_D(G,\mathcal P)\rightarrow\pa_B(G,\mathcal P)\end{equation} that sends each peripheral boundary $\pa P$ to the corresponding point $*_P$ and is the identity on $\pa\hat \Gamma$. \begin{proposition}\label{prop:quotient} Let $(G, \mathcal{P})$ be a relatively hyperbolic group. Assume that each $P\in\mathcal P$ admits a boundary $\pa P$. \begin{enumerate} \item[(i)] For $P\in\mathcal P$, the inclusion $\pa P\hookrightarrow\pa_D(G,\mathcal P)$ is an embedding. \item[(ii)] The subset $\bigcup_{P\in\mathcal P}\pa P\subset\pa_D(G,\mathcal P)$ is dense, and $\{\partial P: P\in\mathcal P\}$ is a null family (i.e.\ for each $r>0$ there are finitely many $P\in\mathcal P$ with diameter greater than $r$). \item[(iii)] The collapsing map $c$ is continuous and $c\rest{}{\pa\hat\Gamma}$ is an embedding (i.e.\ a homeomorphism onto its image). \end{enumerate} \end{proposition} \begin{proof} Both (i) and (ii) follow from the definition of the topology on $\pa_D(G,\mathcal P)$. The subspace topology on $\pa P\subset\pa_D(G,\mathcal P)$ agrees with the standard topology on $\pa P$ by definition of the open sets (\ref{eqn:open2}). Also, $\bigcup_{P\in\mathcal P}\pa P$ is dense because each of the open sets (\ref{eqn:open1}) and (\ref{eqn:open2}) generating the topology on $\pa_D(G,\mathcal P)$ contain points of some peripheral boundary. Finally, $\{\pa P: P\in\mathcal P\}$ is a null family. This is because for $r>0$, we can cover $\pa_D(G,\mathcal P)$ by open sets $V_1,\ldots,V_k$ of the form (\ref{eqn:open1}) or (\ref{eqn:open2}), each with diameter at most $r$, by compactness. Note that by definition for each $V_i$ there is at most one peripheral circle that intersects $V_i$ nontrivially but is not contained in $V_i$. It follows that there are at most $k$ peripheral circles with diameter $ \geq r$. Next we prove (iii). To show that $c$ is continuous, we fix an open set $U\subset\pa_B(G,\mathcal P)$ and show $c^{-1}(U)$ is open. By definition of the topology, we can write $U$ as $U=\cup_{x\in U}M(x,A_x)$. Since each $M(x,A_x)$ is an open set containing $x$, the preimage $c^{-1}\big(M(x,A_x)\big)$ is of the form (\ref{eqn:open1}) and hence is open. Thus $c^{-1}(U)=\cup_{x\in U} \>c^{-1}\big(M(x,A_x)\big)$ is open, which implies that $c$ is continuous. To see that $c\rest{}{\pa\hat\Gamma }$ is an embedding, it suffices to show that $c\rest{}{\pa\hat\Gamma}$ is a closed map. This follows from the fact that $c$ is a closed map, which is true for any continuous map between compact metric spaces. \end{proof} \section{$\pa_D(G,\mathcal P)$ when $\pa_B(G,\mathcal P)=S^2$ (Proof of Theorem \ref{thm:sierpinski})} \label{S:proofof2} Throughout this section we assume $\pa_B(G,\mathcal P)\simeq S^2$. Our goal is to show that this implies that $\pa_D(G,\mathcal P)$ is a Sierpinski carpet. Recall the outline of the proof of Theorem \ref{thm:sierpinski} given in the introduction. 
In the previous section we completed Step 1; in \S\ref{sec:identify} and \S\ref{sec:finish} we complete Steps 2 and 3, respectively. Before these steps, we explain why the Dahmani boundary is always defined when $\pa_B(G,\mathcal P)\simeq S^2$, i.e.\ why the peripheral subgroups admit boundaries. \subsection{Boundaries for peripheral subgroups} \label{sec:peripheral-boundary} Fix $P\in\mathcal P$. To define a boundary $\pa P$ on $P$, consider the action of $P$ on \[\Omega:=\pa_B(G,\mathcal P)\setminus\{*_{P}\},\] which is cocompact and properly discontinuous. Bowditch \cite[Section 2]{bowditchboundary} defines a metric $d_{\Omega}$ on $\Omega$ that makes the action of $P$ on $\Omega$ geometric. Then $K=\ker\big[P\rightarrow\Isom(\Omega)\big]$ is finite, and $P/K$ contains a finite-index subgroup $P'$ that is a closed surface group (in particular $P'$ is torsion free); this was observed in \cite[Theorem 0.3]{dahmani-parabolic}\footnote{According to the MathSciNet review of \cite{dahmani-parabolic}, the print version of this paper has an error. This has been fixed in an updated version (arxiv:0401059). The fact we are using is independent of this issue.}. It follows that $P$ acts geometrically on a model space $X$, either $\bb E^2$ or $\bb H^2$. Define $\pa P:=\pa X$ as the CAT(0) boundary. Next we topologize $\overline\Omega:=\Omega\cup\pa P$. By the classification of surfaces, there is a $P'$-equivariant homeomorphism $\Omega\rightarrow X$. This extends to a map $\overline\Omega\rightarrow\overline X$, and we topologize $\overline\Omega$ so that this map is a homeomorphism. The pair $(\overline X,\pa X)$ is the standard $\mathcal Z$-structure on $P$. It turns out that $(\overline\Omega,\pa P)$ is an alternate description of this $\mathcal Z$-structure. Axioms 1--3 of a $\mathcal Z$-structure are immediate. Axiom 4 follows from Proposition \ref{prop:shadow} and Observation \ref{obs:covers} that the shadows cover $\Omega$. See also the proof of Theorem \ref{th:homeo}. Alternatively, one can use a very general ``boundary swapping'' argument \cite[Theorem 1.3]{guilbault-moran} to conclude that $\overline\Omega$ can be topologized so that $(\overline\Omega,\pa P)$ is a $\mathcal Z$-structure on $P$. We will use our concrete description of the topology on $\overline\Omega$ in what follows. For later use, we choose a quasi-isometry $P\rightarrow\Omega$ by taking the orbit of a point. Specifically, choose a geodesic ray $\gamma_0$ in $\hat\Gamma$ that starts at $*_{P}$, goes through the identity vertex $e\in \Gamma(P)$, and ends at some point $0\in\pa\hat\Gamma\subset\Omega$. (Recall that the boundary of a hyperbolic space consists of equivalence classes of geodesic rays, so here $\gamma_0$ is a representative for $0\in\pa\hat\Gamma$.) Then we identify $P$ with the orbit $P.0$. For $g \in P$, $g.0$ is the endpoint of the geodesic $g\gamma_0$ in $\hat\Gamma$ starting at $*_{P}$ and going through the vertex of $g$ in $\Gamma(P)$. \subsection{Identifying a Sierpinski carpet} \label{sec:identify} The following lemma gives a criterion that will allow us to identify $\pa_D(G,\mathcal P)$ as a Sierpinski carpet. \begin{lemma}\label{lem:sierpinski} Let $X$ be a compact metric space.
Assume that there exists a continuous surjection $\pi:X\rightarrow S^2$ such that \begin{enumerate} \item[(i)] there exists a countable dense subset $Z=\{z_1,z_2,\ldots\}\subset S^2$ so that the restriction of $\pi$ to $\pi^{-1}\big(S^2\setminus Z\big)$ is injective, and \item[(ii)] for each $k$, the space $X_k$ obtained from $X$ by collapsing each $C_i$ to a point for $i\neq k$ is homeomorphic to a closed disk $\mathbb D^2$. \end{enumerate} Then $X$ is homeomorphic to a Sierpinski carpet. \end{lemma} We remark that assumption (i) implies that $\pi\rest{}{}:\pi^{-1}(S^2\setminus Z)\rightarrow S^2\setminus Z$ is a homeomorphism. Indeed $\pi\rest{}{}$ is a continuous bijection, and since $\pi$ is a continuous map between compact metric spaces, $\pi$ (and hence $\pi\rest{}{}$) is a closed map. Note also that (ii) implies that \begin{enumerate} \item[(iii)] for each $k$, the preimage $C_k:=\pi^{-1}(z_k)$ is an embedded circle. \end{enumerate} \begin{example}\label{rmk:example} We illustrate the theorem with a non-example. Consider $X=[0,1]^2\setminus \bigcup_{i=1}^\infty D_i$, where $D_i$ is a dense countable collection of open disks, with pairwise disjoint closures, that includes the collection of disks pictured in Figure \ref{fig:converging-disks}. The space $X$ is not homeomorphic to the Sierpinski carpet because the disks pictured in Figure \ref{fig:converging-disks} have diameter bounded from below, so $\{D_i\}$ is not a null family. Nevertheless, $X$ satisfies conditions (i) and (iii) above: the set $X\setminus\bigcup_{i=1}^\infty \overline{D_i}$ is homeomorphic to $S^2\setminus Z$, where $Z\subset S^2$ is countable and dense, and by collapsing each $\pa D_i$ to a point we obtain a continuous surjection $X\rightarrow S^2$ satisfying (i) and (iii). Note however that condition (ii) from Lemma \ref{lem:sierpinski} is not satisfied. Indeed the space $X_1$ obtained from collapsing all the $D_i$ except the outer disk is not Hausdorff, and hence not homeomorphic to the closed disk. \end{example} \begin{figure} \caption{A collection of disjoint disks with diameter bounded from below. } \label{fig:converging-disks} \end{figure} \begin{proof}[Proof of Lemma \ref{lem:sierpinski}] First observe that the Sierpinski carpet $X=\mathscr S$ satisfies the assumptions (i)--(iii) with $\pi:\mathscr S\rightarrow S^2$ the map the collapses each peripheral circle to a point. Condition (ii) follows from Moore's theorem about upper semicontinuous decompositions of the plane \cite[pg 3]{daverman}. To prove the lemma, it suffices to show that any two compact metric spaces $X,X'$ with surjections to $S^2$ that satisfy (i)--(iii) are homeomorphic. For $k\ge0$, let $X(k)$ be the space obtained by collapsing each circle $C_i$ to a point for $i>k$ (i.e.\ we collapse all but the first $k$ circles). There are maps $p_k:X(k)\rightarrow X(k-1)$, and $X=\lim X(k)$ is the inverse limit. Similarly, we express $X'=\lim X'(k)$. To show $X$ is homeomorphic to $X'$, we'll show that the inverse systems $\{X(k),p_k\}$ and $\{X'(k),p_k'\}$ are isomorphic. First we describe the topology of $X(k)$. By (ii), each $X_{k}$ is homeomorphic to $\mathbb D^2$, or equivalently $S^2\setminus D$, where $D$ is an open disk. From this, it's not hard to see that $X(k)\simeq S^2\setminus\bigcup_1^kD_i$, where $D_i$ are open embedded disks with disjoint closures. For example, in the case $k=2$, consider the following diagrams. 
\[\begin{xy} (0,0)*+{X(2)}="A"; (-10,-10)*+{X_{1}}="B"; (10,-10)*+{X_{2}}="C"; (0,-20)*+{X(0)}="D"; {\ar"A";"B"}?*!/_3mm/{}; {\ar "A";"C"}?*!/_3mm/{}; {\ar "B";"D"}?*!/^3mm/{}; {\ar "C";"D"}?*!/_3mm/{}; (50,0)*+{X(2)}="E"; (40,-10)*+{\bb D^2}="F"; (60,-10)*+{\bb D^2}="G"; (50,-20)*+{S^2}="H"; (25,-10)*+{\simeq}="Z"; {\ar"E";"F"}?*!/_3mm/{}; {\ar "E";"G"}?*!/_3mm/{}; {\ar "F";"H"}?*!/^3mm/{}; {\ar "G";"H"}?*!/_3mm/{}; \end{xy} \] Assumption (i) implies that $X(2)\rightarrow X(0)$ is a homeomorphism away from $C_1\cup C_2$, and so $X(2)\setminus(C_1\cup C_2)$ is homeomorphic to an open annulus $S^1\times(0,1)$. Furthermore, by assumption (ii), $X(2)\rightarrow X_{i}$ is a homeomorphism in a neighborhood of $C_i$, so it follows that $X(2)$ is homeomorphic to an annulus. Note also that the restriction $p_{k+1}\rest{}{}:X(k+1)\setminus C_{k+1}\rightarrow X(k)\setminus\{z_{k+1}\}$ is a homeomorphism. This follows from the definitions and assumption (i). We construct compatible homeomorphisms $\phi_k:X(k)\rightarrow X'(k)$ inductively. For $k=0$, let $Z=\{z_i\}$ and $Z'=\{z_i'\}\subset S^2$ be the countable dense subsets associated to $X,X'$. By \cite[Thm 3]{bennett} there exists a homeomorphism $\phi_0:X(0)\simeq S^2\rightarrow S^2\simeq X'(0)$ so that $\phi_0(Z)=Z'$. Without loss of generality, we assume that $\phi_0(z_i)=z_i'$ for each $i$. For the induction step, suppose we're given a homeomorphism $\phi_{k}:X(k)\rightarrow X'(k)$ and a commutative diagram \[\begin{xy} (0,0)*+{X(k)}="A"; (20,0)*+{X'(k)}="B"; (0,-15)*+{X(0)}="C"; (20,-15)*+{X'(0)}="D"; {\ar"A";"B"}?*!/_3mm/{\phi_{k}}; {\ar "A";"C"}?*!/^3mm/{}; {\ar "B";"D"}?*!/_3mm/{}; {\ar "C";"D"}?*!/_3mm/{\phi_0}; \end{xy}\] By the choice of $\phi_0$, it follows that $\phi_{k}(z_{k+1})=z_{k+1}'$. Then $\phi_k$ restricts to a homeomorphism $\phi_k\rest{}{}:X(k+1)\setminus C_{k+1}\rightarrow X'(k+1)\setminus C_{k+1}'$. Since $X(k)$ is compact, $\phi_k$ is uniformly continuous, so $\phi_k\rest{}{}$ extends uniquely to a homeomorphism \[\phi_{k+1}:X(k+1)\rightarrow X'(k+1)\] such that $\phi_{k+1}\circ p_{k+1}=p_{k+1}'\circ \phi_{k+1}$. This shows that the inverse systems $\{X(k),p_k\}$ and $\{X'(k),p_k'\}$ are isomorphic, so then the inverse limits $X,X'$ are homeomorphic. \end{proof} \subsection{Collapsing the Dahmani boundary to a disk}\label{sec:finish} In this section we show that $\pa_D(G,\mathcal P)$ and the collapse map $c:\pa_D(G,\mathcal P)\rightarrow\pa_B(G,\mathcal P)\simeq S^2$ satisfy the assumptions of Lemma \ref{lem:sierpinski}, which allows us to conclude that $\pa_D(G,\mathcal P)$ is a Sierpinski carpet. The main result is as follows. \begin{theorem} \label{th:homeo} Let $(G,\mathcal P)$ be relatively hyperbolic with $\pa_B(G,\mathcal P) \simeq S^2$. Fix $P\in\mathcal P$, and let $\pa_D(G,\mathcal P)_P$ be the quotient of $\pa_D(G,\mathcal P)$ obtained by collapsing $\pa Q$ to a point for each $Q\in\mathcal P\setminus\{P\}$. Then $\pa_D(G,\mathcal P)_P$ is $P$-equivariantly homeomorphic to the disk $\overline{\Omega}$ (c.f.\ \S\ref{sec:peripheral-boundary}). \end{theorem} {\it Remark.} An analogous theorem to Theorem \ref{th:homeo} holds more generally for relatively hyperbolic groups with $\pa_B(G,\mathcal P)\simeq S^n$ whose peripheral subgroups have $\mathcal Z$-boundaries (so $\Omega$ has a natural $\mathcal Z$-set compactification) with a similar proof. The proof of Theorem \ref{th:homeo} will rely on the Proposition \ref{prop:shadow} below, which is a general fact about the shadow of points in the Bowditch boundary. 
The proof of Proposition \ref{prop:shadow} is technical, so we postpone it to the end of the section. \begin{proposition}\label{prop:shadow} Let $(G, \mathcal{P})$ be a relatively hyperbolic group such that each $P\in\mathcal P$ admits a boundary $\pa P$. For each $P\in\mathcal P$, and $v\in\Gamma(P)\subset\hat\Gamma$, the shadow $\Sh_B(v,P)\subset\Omega$ is bounded in the Bowditch metric on $\Omega$. \end{proposition} We note that since $P$ acts isometrically on $\Omega$ and $\Sh_B(g\cdot v,P)=g\cdot\Sh_B(v,P)$, in fact $\Sh_B(v,P)$ is bounded uniformly for $v\in\Gamma(P)$. \begin{proof}[Proof of Theorem \ref{th:homeo}] There is a homeomorphism \[H:\pa_D(G,\mathcal P)_P\setminus(\pa P)\rightarrow\pa_B(G,\mathcal P)\setminus\{*_P\},\] since, by definition, the domain and codomain are equal as sets, and the identity map is a homeomorphism by Proposition \ref{prop:quotient}. Set $\Omega:=\pa_B(G,\mathcal P)\setminus\{*_{P}\}$ and $\overline{\Omega}:=\Omega\cup\pa P\simeq\bb D^2$ as in \S\ref{sec:peripheral-boundary}. Then $H$ extends (via the identity map $\pa P\rightarrow \pa P$) to a bijection $\overline H:\pa_D(G,\mathcal P)_P\rightarrow \overline{\Omega}$, which is equivariant. To prove the theorem, we need only show that $\overline H$ is a homeomorphism. Since $\pa_D(G,\mathcal P)_P$ is compact and $\overline\Omega$ is Hausdorff, it suffices to show that $H$ is continuous; furthermore, since $H$ is a homeomorphism, we only need to show continuity of $\overline H$ at each $\xi\in\pa P$. Fixing $\xi\in\pa P$, it suffices to show that for every neighborhood $U$ of $\xi$ in $\overline\Omega$, there exists a neighborhood $W$ of $\xi\in\pa_D(G,\mathcal P)$ so that $\overline H(W)\subset U$. Since $Sh_B(v,P)\subset\Omega$ is bounded for each $v\in\Gamma(P)$ (by Proposition \ref{prop:shadow}), there is a neighborhood $\xi\in V \subset P\cup\pa P$ such that if $v \in V$, then $\Sh_B(v,P)\subset U$. Indeed let $\hat V$ consist of the vertices $v_g$ in $P$ such that the endpoint of $g.\gamma_0$ is in $U$, and far enough from the frontier of $U$ such that the shadow $\Sh_B(v_g,P)$ fits in $U$. Then $V$ is the interior of the closure of $\hat V$ in $P \cup \pa P$. Now the set $W_0=(V\cap \pa P)\cup \big(\cup_{v\in V}\Sh_D(v,P)\big)$ is open in $\pa_D(G,\mathcal P)$ (c.f.\ (\ref{eqn:open2})), and it is saturated with respect to $f_P$, so $W:=f_P(W_0)$ is open in $\pa_D(G,\mathcal P)_P$ and $\overline H(W)\subset U$. This completes the proof. \end{proof} In summary, we have proved Theorem \ref{thm:sierpinski}, modulo a proof of Proposition \ref{prop:shadow}. To see this, suppose that $\partial_B(G,\mathcal P)\simeq S^2$. Then there is a surjection $\partial_D(G,\mathcal P)\rightarrow S^2$ which satisfies the assumptions of Lemma \ref{lem:sierpinski} by Proposition \ref{prop:quotient} and Theorem \ref{th:homeo}. Then by Lemma \ref{lem:sierpinski}, $\partial_D(G,\mathcal P)$ is homeomorphic to the Sierpinski carpet $\mathscr S$. We remark that our understanding of the shadows allows us to prove that the complement of a point in the Bowdtich boundary admits an equivariant $Z$-set compactification (when the peripheral groups admit an equivariant $Z$-set compactification.) \subsection{Shadows in the Bowditch boundary (Proof of Proposition \ref{prop:shadow}) } Before we begin the proof we need some additional notions and notations from \cite{bowditch}. When $(G,\mathcal P)$ is relatively hyperbolic group pair, there exists a proper hyperbolic metric space $X$ on which $G$ acts geometrically finitely. 
There are many models for such a space $X$, e.g.\ \cite[\S3]{bowditch} or \cite{groves-manning}, and the existence of such an $X$ is one definition of a relatively hyperbolic group pair. The main fact we will need is that the nerve of a system of horoballs in $X$ is quasi-isometric to $\hat\Gamma$. From $X$ one can obtain a fine hyperbolic graph $K=K(X)$ by considering the nerve of an appropriate collection of horoballs $\{H(P)\}_{P\in\mathcal P}$ in $X$ \cite[\S7]{bowditch}. The graph $K$ has vertex set $V(K)\simeq \mathcal P$. \begin{lemma} The graph $K$ is quasi-isometric to $\hat\Gamma$. \end{lemma} \begin{proof} First we claim that $\hat\Gamma$ is quasi-isometric to the graph $\Lambda$ that has vertex set $\{*_P:P\in \mathcal P\}$ and an edge between $*_P$ and $*_{P'}$ if there exists an arc (i.e.\ a path with distinct vertices) between them in $\hat\Gamma$ of length at most 2 such that the intermediate vertices are in $\Gamma(G)\subset\hat\Gamma$. The definition of $\Lambda$ is a special instance of the ``$K(A,n)$" construction in \cite[\S2]{bowditch}. To define a quasi-isometry $\Lambda\rightarrow\hat\Gamma$, note that both $\Lambda$ and $\hat \Gamma$ are quasi-isometric to the subset $\{*_P: P\in\mathcal P\}$ in each with the associated metrics, since every vertex of $\hat \Gamma$ is within distance $1/2$ of some $*_P$. Then by composing, there is a map \begin{equation}\label{eqn:quasi-isom}\phi:\Lambda\rightarrow\hat\Gamma\end{equation} that is the identity on $\{*_P: P\in\mathcal P\}$. This is a quasi-isometry because $d_{\Lambda}(*_{P_1},*_{P_2})\le d_{\hat\Gamma}(*_{P_1},*_{P_2})\le 2 d_\Lambda(*_{P_1},*_{P_2})$. Notice that for any edge in $\hat \Gamma$, it either meets an element of $V_\infty$, goes between two vertices at distance 1/2 from the same element of $V_\infty$, or goes between two vertices which are at distance $1/2$ from two different elements of $V_\infty$. For any $X$ on which $(G, \mathcal{P})$ acts geometrically finitely, $\Lambda$ and $K=K(X)$ are quasi-isometric because both are connected graphs with vertex set $\mathcal P$ and with a cocompact $G$ action; c.f.\ \cite[Lem.\ 4.2]{bowditch}. \end{proof} It will be useful to choose a quasi-isometry \begin{equation}\label{eqn:qi}\pi: \hat\Gamma\rightarrow K.\end{equation} For this, it suffices to choose a coarse inverse $\psi:\hat\Gamma\rightarrow\Lambda$ to the map $\phi$ in (\ref{eqn:quasi-isom}) (then we can compose with any quasi-isometry $\Lambda\rightarrow K$ that is the identity on vertices). To define $\psi$, we choose for each $v\in\Gamma(G)$ an element $P_v\in\mathcal P$ so that $v$ is adjacent to $*_{P_v}$. If we fix $P\in\mathcal P$, then we can define $P_v$ as the unique subgroup (with $*_{P_v}$ adjacent to $v$) that's conjugate to $P$. Then $\psi$ is equivariant. There is a homeomorphism $\pa_B(G,\mathcal P)\rightarrow \pa X$ \cite[\S 9]{bowditch}. Furthermore, if we label the parabolic fixed points $\Pi$ in $\pa X$ by the peripheral group $P \in \mathcal P $ which fixes it, then the homeomorphism from $\pa_B(G,\mathcal P) =\pa\hat\Gamma\cup V_\infty(\hat \Gamma)$ to $\partial X$ is the identity on $V_\infty(\hat\Gamma)$. Since the fixed points of the conjugates of any peripheral subgroup are dense in $\partial X$, it follows that once we fix the image of some $*_P$ (that is, label one of the peripheral fixed points of $\partial X$) there is exactly one equivariant homeomorphism between $\pa_B(G,\mathcal P)$ and $\partial X$. 
This allows us to canonically identify $\Omega = \partial X \setminus \{*_P\}$ with $\pa_B(G,\mathcal P) \setminus \{*_P\}$. Bowditch \cite[Section 2]{bowditchboundary} puts a metric $d_\Omega$ on $\Omega$ that makes the $P$ action geometric. If two points $x,y \in \Omega$ are close in this metric, the center $z\in X$ of the ideal triangle in $X$ with vertices $x,y$ and $ *_P$ is ``close to" $\Omega$, which means that there is a horofunction $h:X\rightarrow\R$ about $*_P$ with $h(z)\ll0$. \begin{proof}[Proof of Proposition \ref{prop:shadow}] Recall from \S\ref{sec:peripheral-boundary} that we've chosen $P\hookrightarrow\Omega$ as the $P$-orbit of the endpoint $0\in\Omega$ of a given geodesic ray $\gamma_0$. We take the space $X$ with horoballs/horospheres $H(P), S(P)$, and the fine hyperbolic graph $K=K(X)$ as discussed in the preceding paragraphs. {\bf Step 1} (From geodesics in $\hat\Gamma$ to geodesics in $X$). Suppose, for a contradiction, that the shadow of $e\in\Gamma(P)$ is unbounded. Then there exist geodesics $\gamma_n$ in $\hat\Gamma$ from $*_P$ through $e\in\Gamma(P)$ with endpoints $\xi_n\in\Omega\subset\pa_B(G,\mathcal P)$ such that $d_\Omega(0,\xi_n)\rightarrow\infty$. The image $\pi(\gamma_n)$ under the quasi-isometry $\pi:\hat\Gamma\rightarrow K$ in (\ref{eqn:qi}) is a quasi-geodesic. Each $\pi(\gamma_n)$ can be described as a sequence of horoballs $H(P_{n,1}), H(P_{n,2}),\ldots$ in $X$, where $P_{n,1}=P$ and adjacent horoballs in this sequence are distinct. {\it Claim.} After passing to a subsequence we can assume $H(P_{n,2})=H(P_2)$ is constant. {\it Proof of Claim.} We show there are only finitely many possibilities for the first vertex of $\pi(\gamma_n)$ that differs from $*_P$. Recall that $\pi$ sends a vertex $v\in\Gamma(G)$ to one of the adjacent cone vertices $*_{P_v}\in\hat \Gamma$ (such $P_v$ is conjugate to $P$), and is the identity on the cone vertices. Enumerate the vertices along the path $\gamma_n$ as $(v_{n,1},v_{n,2},\ldots)$. By assumption $v_{n,1}=*_P$ and $v_{n,2}=e$. By definition $\pi(v_{n,1})=\pi(v_{n,2})=*_P$. The first vertex of $\pi(\gamma_n)$ that differs from $*_P$ will be $\pi(v_{n,3})$. There are two possibilities: either (a) $v_{n,3}$ is a cone point $*_{P_2}$ or (b) $v_{n,3}$ is a vertex of the Cayley graph $\Gamma(G)$. In case (a), $\pi(v_{n,3})=*_{P_2}$, and since $*_{P_2}$ is adjacent to $e$, there are finitely many such choices. In case (b), $\pi(v_{n,3})$ is the cone point adjacent to $v_{n,3}$ whose stabilizer is conjugate to $P$. Since $v_{n,3}$ is adjacent to $e$ and there are finitely many such vertices in $\Gamma(G)$ (as $G$ is finitely generated), this shows there are finitely many possibilities for $\pi(v_{n,3})$. \qed From $\pi(\gamma_n)$, we can construct a quasi-geodesic in $X$ as follows. Let $H(P_{n,1}), H(P_{n,2}),\ldots$ be the sequence of horoballs along $\pi(\gamma_n)$ as defined above. For $i\ge1$, choose a geodesic arc $\alpha_{n,i}$ between $H(P_{n,i})$ and $H(P_{n,i+1})$ that has endpoints on the horospheres $S(P_{n,i})$ and $S(P_{n,i+1})$. Then choose a geodesic arc $\beta_{n,i}$ between the endpoint of $\alpha_{n,i}$ and the starting point of $\alpha_{n,i+1}$. The concatenation $\alpha_{n,1}*\beta_{n,1}*\alpha_{n,2}*\beta_{n,2}*\cdots$ is a quasi-geodesic with constants depending only on the quasi-geodesic constants for $\gamma_n$ \cite[Lem.\ 7.3,7.6]{bowditch}. 
Since the $\gamma_n$ have uniform constants, the quasi-geodesic $\alpha_{n,1}*\beta_{n,1}*\cdots$ is a bounded distance (with bound uniform in $n$) from a geodesic $\gamma_n'$ in $X$. If $\gamma_n$ represents a point $\xi_n\in\pa\hat\Gamma$, then $\gamma_n'$ represents the same point on $\pa X$, with respect to the natural homeomorphism $\pa_B(G,\mathcal P)\rightarrow \pa X$ that takes $*_P$ to itself. Since the quasi-geodesics $\pi(\gamma_n)$ all have the same first three vertices, there is a bounded subset of the horosphere $S(P)$ that contains $\gamma_n'\cap S(P)$ for each $n$. This is because the quasi-geodesic in $X$ corresponding to $\pi(\gamma_n)$, described above, contains a geodesic segment connecting the horoballs $H(P)$ and $H(P_2)$, and any two geodesics between a pair of horoballs lie within a bounded distance from one another, c.f.\ \cite[\S9]{bowditch}. {\bf Step 2} (Centers of ideal triangles). Let $(\xi_n)$ be the sequence of endpoints of the $\gamma_n'$ in $\Omega$. Since $d_\Omega(0,\xi_n)\rightarrow\infty$ by assumption, and the $P$-action on $\Omega$ is cocompact, we can choose $p_n\in P$ (with distance from $e$ in $\Gamma(P)$ going to infinity) so that $d_\Omega(0,p_n(\xi_n))$ is bounded. Then by passing to a subsequence we can assume that $p_n(\xi_n)$ converge in $\Omega$, and in particular form a Cauchy sequence. By choosing $N$ sufficiently large, we can ensure that if $n,m>N$, then $d_\Omega\big(p_n(\xi_n),p_m(\xi_m)\big)$ is small enough that if $z_{n,m}\in X$ is the center of the ideal triangle formed by the triple $*_P$, $p_n(\xi_n)$, $p_m(\xi_m)$, then $z_{n,m}$ is disjoint from $H(P)$. For each $n,m>N$, we define two quasi-geodesics $\eta_{n,m}^n$ and $\eta_{n,m}^m$ between $*_P$ and $z_{n,m}$. Each is a union of two geodesic segments: for $i=n,m$, the quasi-geodesic $\eta_{n,m}^i$ follows $p_i\gamma_i'$ until it nears $z_{n,m}$ and then follows a geodesic to $z_{n,m}$. Note that $\eta_{n,m}^i$ is a $(1,2c^i_{n,m})$-quasi-geodesic, where $c_{n,m}^i$ is the distance from $p_i\gamma_i'$ to $z_{n,m}$. The constant $c_{n,m}^i$ is bounded in terms of the hyperbolicity constant, so for some $c$ the quasi-geodesics $\eta_{n,m}^n$ and $\eta_{n,m}^m$ are $(1,c)$-quasi-geodesics for all $n,m>N$. Since $z_{n,m}\notin H(P)$, the quasi-geodesics $\eta_{n,m}^n$ and $\eta_{n,m}^m$ exit $H(P)$. Furthermore, the distance between the sets $\eta_{n,m}^n\cap S(P)$ and $\eta_{n,m}^m\cap S(P)$ is roughly comparable to the distance between $p_n$ and $p_m$ in $\Gamma(P)$. This is because $\gamma_n'$ and $\gamma_m'$ intersect $S(P)$ in a bounded region, the intersection of $\eta_{n,m}^i$ with $S(P)$ is within the $p_i$ translate of this bounded region, and the $P$ action on the horosphere defines a quasi-isometry between the word metric on $\Gamma(P)$ and the horospherical metric on $S(P)$, c.f.\ \cite[\S1]{bowditchboundary}. {\bf Step 3} (Intrinsic and extrinsic distance in the horosphere $S(P)$). In this step we'll fix $k,\ell>N$ and consider the quasi-geodesics $\eta_{k,\ell}^k$ and $\eta_{k,\ell}^\ell$. On the one hand, $\eta_{k,\ell}^k$ and $\eta_{k,\ell}^\ell$ are a bounded distance from one another, so must exit the horoball at a bounded distance. On the other hand, the distance between $\eta_{k,\ell}^k\cap S(P)$ and $\eta_{k,\ell}^\ell\cap S(P)$ is comparable to the distance between $p_k$ and $p_\ell$ in $\Gamma(P)$, which we can make as large as we want by choosing $\ell\gg k$.
This tension leads to a contradiction, as we now make precise. There is a constant $R$ so that if $\gamma$ is a geodesic and $\gamma'$ is a $(1,c)$-quasi-geodesic with the same endpoints, then the Hausdorff distance between $\gamma,\gamma'$ is less than $R$. Similarly, any two $(1,c)$-quasi-geodesics $\gamma',\gamma''$ with the same endpoints as $\gamma$ are contained in a $2R$ neighborhood of one another. It follows that at each time $t$ the distance between $\gamma'(t)$ and $\gamma''(t)$ is less than $R':=4R+c$. According to \cite[\S1]{bowditchboundary}, the distance in $(X,\rho)$ between two points in $S(P)$ is comparable to the intrinsic metric $\si$ on $S(P)$: there are constants $K,C,\omega$ so that $\si(x,y) \le K\omega^{\rho(x,y)}+C$. Since $(S(P),\si)$ and $\Gamma(P)$ are quasi-isometric, it follows that we can find $D>0$ so that if $p,q$ have distance at least $\omega^D$ in $\Gamma(P)$, and $x\in S(P)$, then $\rho(px,qx)> R'$. Choose $k>N$ and $\ell\gg k$ so that the distance between $p_k$ and $p_\ell$ in $\Gamma(P)$ is greater than $\omega^{D}$ (this is possible because the sequence $p_n$ is unbounded in $\Gamma(P)$). Consider the $(1,c)$-quasi-geodesics $\eta_{k,\ell}^k$ and $\eta_{k,\ell}^\ell$ between $*_P$ and $z_{k,\ell}$. On the one hand, the distance between $\eta_{k,\ell}^k\cap S(P)$ and $\eta_{k,\ell}^\ell\cap S(P)$ is less than $R'$ because $\eta_{k,\ell}^k$ and $\eta_{k,\ell}^\ell$ are $(1,c)$-quasi-geodesics with the same endpoints. On the other hand, the distance between $\eta_{k,\ell}^k\cap S(P)$ and $\eta_{k,\ell}^\ell\cap S(P)$ is greater than $R'$ because $p_k,p_\ell$ have distance greater than $\omega^D$ in $\Gamma(P)$. This contradiction implies that the shadow of a point is bounded. \end{proof} \section{Corollaries to Theorem \ref{thm:sierpinski}} \label{sec:cor} \subsection{Dahmani boundary of the double (Proof of Corollary \ref{cor:sphere})}\label{sec:double} First we recall the definition of the double $G_\delta$ of $G$ along its peripheral subgroups. We use notation similar to \cite{kk}. \begin{definition} Let $(G, \mathcal{P})$ be a relatively hyperbolic pair, and let $P_1,\ldots,P_d$ be representatives for the conjugacy classes in $\mathcal P$. Define a graph of groups $D(G,\mathcal{P})$ as follows: the underlying graph has two vertices with $d$ edges connecting them. The vertices are labeled by $G$, the $i$-th edge is labeled by $P_i$, and the edge homomorphisms are the inclusions $P_i \hookrightarrow G$. The fundamental group of the graph of groups $D(G,\mathcal{P})$ is called the \emph{double of $G$ along $\mathcal{P}$}, denoted $G_\delta$. \end{definition} Note that if $G$ is torsion-free, so is $G_\delta$. \begin{proof}[Proof of Corollary \ref{cor:sphere}] Assume that $(G, \mathcal{P})$ is a torsion-free relatively hyperbolic group pair with $\partial_B(G, \mathcal{P}) \simeq S^2$. First we remark that $(G_\delta,\mathcal P)$ is relatively hyperbolic by work of Dahmani \cite[Thm 0.1]{dahmani-combination}. Furthermore, \cite[\S2]{dahmani-combination} describes the Bowditch boundary for graphs of groups: the result is a tree of metric spaces where the edge spaces are the limit sets of the amalgamating subgroups. (Dahmani doesn't use this terminology -- see instead Swiatkowski \cite[Defn 1.B.1]{swiatkowski}.) In the case of $G_\delta$ with $\pa_B(G, \mathcal{P})=S^2$, $\pa_B(G_\delta,\mathcal P)$ is a ``tree of 2-spheres", where each 2-sphere has a countable dense collection of points along which other 2-spheres are glued as in the figure below.
\begin{figure} \caption{The Bowditch boundary $\pa_B(G_\delta,\mathcal P)$ is a ``tree of 2-spheres". } \end{figure} The Dahmani boundary inherits the structure of a tree of metric spaces from the tree structure on $\pa_B(G_\delta,\mathcal P)$ via the collapsing map (\ref{eqn:collapse-map}) applied to $G_\delta$. Each vertex space is a copy of $\pa_D(G,\mathcal P)$, which is a Sierpinski carpet by Theorem \ref{thm:sierpinski}. The edge spaces that meet a given vertex space are the peripheral circles $\pa P$ for $P\in\mathcal P$. An important part of the definition of a tree of metric spaces is that the edge spaces that meet a given vertex space must form a null family. This holds generally for the peripheral boundaries of a Dahmani boundary (Proposition \ref{prop:quotient}); it also holds in our specific case because the peripheral circles of a Sierpinski carpet are a null family \cite{cannon_sierpinski,whyburn}. It follows from \cite[Lem 1.D.2.1]{swiatkowski} that $\pa_D(G_\delta,\mathcal P)\simeq S^2$. This completes the proof of Corollary \ref{cor:sphere}. \end{proof} \subsection{Duality and the Bowditch boundary (Corollary \ref{cor:duality} and its Converse)} \begin{proof}[Proof of Corollary \ref{cor:duality}] By a criterion of Bieri--Eckmann \cite[Cor 8.5]{bieri-eckmann-pdpairs}, to show that $(G,\mathcal P)$ is a $\PD(3)$ pair, it is enough to show that the double $G_\delta$ is a $\PD(3)$ group and that the peripheral subgroups $P\in\mathcal P$ are $\PD(2)$ groups. The latter is true because the peripheral subgroups act properly and cocompactly on $\pa_B(G,\mathcal P)\setminus\{*_P\}\simeq\R^2$, c.f.\ \cite[Thm 0.3]{dahmani-parabolic} and the assumption that our group is torsion-free. To see $G_\delta$ is a $\PD(3)$ group, we use Corollary \ref{cor:sphere} to conclude $\pa_D(G_\delta,\mathcal P)\simeq S^2$. Since $\pa_D(G_\delta,\mathcal P)$ is a $\mathcal Z$-boundary for $G_\delta$ \cite[Thm 0.2]{dahmani-boundary}, and $G_\delta$ is torsion-free, it follows that $G_\delta$ is a $\PD(3)$ group by the argument of Bestvina--Mess \cite[Corollary 1.3 (b,c)]{bestvina-mess}. (See Theorem \ref{thm:bestvina} above.) \end{proof} \begin{proof}[Proof of Theorem \ref{th:converse}] Let $(G, \mathcal{P})$ be a relatively hyperbolic group pair which is also a $\PD(3)$ pair. It follows that $G$ is torsion-free and again by \cite[Cor 8.5]{bieri-eckmann-pdpairs}, the subgroups in $\mathcal{P}$ are surface groups, and the double of $G$ along $\mathcal{P}$ is a $\PD(3)$ group. By \cite[Thm 0.1]{dahmani-combination} $(G_\delta, \mathcal{P})$ is relatively hyperbolic, so $G_\delta$ admits a $\mathcal Z$-structure with $\mathcal Z$-boundary $\pa_D(G_\delta,\mathcal P)$ by Dahmani \cite{dahmani-boundary}. It follows that $\partial_D(G_\delta, \mathcal{P})\simeq S^2$, c.f.\ Theorem \ref{thm:bestvina}. By Proposition \ref{prop:quotient}, there is a dense collection of embedded circles in $\partial_D(G_\delta, \mathcal{P})$ such that when we form the quotient by collapsing these circles, we obtain $\pa_B(G_\delta,\mathcal P)$. As each embedded circle in $S^2$ bounds a disk on either side, the result is a tree of 2-spheres glued along points. By \cite[Theorem 0.1]{bowditchboundary} and \cite[Theorem 9.2]{bowditchperipheral}, each of these cut points corresponds to a peripheral splitting.
Furthermore, by the description of the boundary of an amalgamated product given in \cite[section 2]{dahmani-combination}, this tree of 2-spheres is formed by gluing the Bowditch boundaries of the vertex groups along the limit sets of the amalgamating groups, which are the fixed points of the peripheral subgroups in this case. Thus, the Bowditch boundary of each vertex group (relative to $\mathcal{P}$) is $S^2$, hence $\pa_B(G, \mathcal{P})\simeq S^2$. \end{proof} \subsection{The Wall and relative Cannon conjectures (Proof of Theorem \ref{thm:wall})} \label{s:wall} Let $(G, \mathcal{P})$ be a relatively hyperbolic group pair with $G$ torsion-free and $\pa_B(G, \mathcal{P})\simeq S^2$. We may assume $\mathcal{P}$ is non-empty. Choose representatives of the conjugacy classes of the peripheral subgroups $P_1,\ldots, P_d$ and denote our group pair by $(G, \lb P_i \rb)$. Corollary \ref{cor:duality} implies that the double $G_\delta$ is a $\PD(3)$ group. Assuming the Wall conjecture, we conclude that $G_\delta=\pi_1(M)$ for some closed aspherical 3-manifold $M$. Let $M'\rightarrow M$ be the cover corresponding to $G<G_\delta$. Since $G$ is finitely generated, by Scott's compact core theorem \cite{scott-compact-core}, there is a compact submanifold $N\subset M'$ such that the inclusion induces an isomorphism $\pi_1(N)\simeq\pi_1(M')\simeq G$. Let $N_0$ be $N$ without its torus boundary components. To prove the theorem, we explain why $N_0$ admits a complete hyperbolic metric with totally geodesic boundary, and that the boundary subgroups and cusp subgroups are exactly the peripheral subgroups of $(G, \mathcal{P})$. {\it Claim.} (i) Any $\Z\times\Z$ subgroup of $\pi_1(N)$ is conjugate into one of the boundary subgroups. (ii) The boundary subgroups are malnormal, i.e., if $P_i \cap {}^gP_j \neq \{1\}$ for any two boundary subgroups $P_i$ and $P_j$, then $P_i = P_j$ and $g \in P_i $. To prove the claim, first note that any $\Z\times\Z$ subgroup of a relatively hyperbolic group is contained in one of the peripheral subgroups. To see this, consider a geometrically finite action of $G$ on a hyperbolic space, and use the classification of isometries \cite[Prop.\ 4.1]{benakli-kapovich}. Now the claim follows once we explain that the boundary subgroups of $N$ and the peripheral subgroups $P_1,\ldots,P_d$ are the same, up to conjugacy. (This justifies our notation in (ii).) This follows from the uniqueness of the $\PD(3)$-pair structure for pairs $(G,\{P_1,\ldots,P_d\})$, where the subgroups $P_1,\ldots,P_d$ do not \emph{coarsely separate} $G$ \cite[Thm 1.5]{kk_duality}. In our case $P_i<G$ does not coarsely separate because $\pa P_i\subset\pa_D(G,\mathcal P)$ does not separate, as the peripheral circles of a Sierpinski carpet do not separate; they are exactly the non-separating circles. Malnormality of the peripheral subgroups in torsion-free relatively hyperbolic groups is exactly \cite[Prop.\ 2.37]{osin}. This finishes the proof of the claim. Since every $\Z \times \Z$ subgroup is peripheral, $N_0$ admits a complete hyperbolic metric. To see this, observe that if $N_0$ has no higher genus boundary components, this is Thurston's hyperbolization \cite[Theorem B]{morgan-uniformization}. Suppose $N_0$ has higher genus boundary components. Then there are no essential annuli: such an annulus would yield a free homotopy between two curves on the boundary, implying that the corresponding group elements are conjugate, and malnormality implies that this conjugation can be done in the surface group, so the annulus is not essential.
Thus the double of $N_0$ along the higher genus boundary components is hyperbolic \cite[Theorem B]{morgan-uniformization}, and the involution of the double fixes the boundary components of $N_0$. Since this is realized by an isometry \cite[Theorem 1.44]{Kapovichbook}, $N_0$ admits a hyperbolic metric with totally geodesic boundary components. Department of Mathematics, Harvard University\\ \texttt{[email protected]} Department of Mathematics, Tufts University\\ \texttt{[email protected]} \includepdf[pages=-]{relativePD3_erratum.pdf} \end{document}
arXiv
\begin{definition}[Definition:Omega Constant] The '''omega constant''' $\Omega$ is the constant defined as the value of the principal branch of the Lambert W function at $1$: :$\Omega \, e^\Omega = 1$ That is, it is the unique real root of the equation: :$x e^x = 1$ where $e$ denotes Euler's number. \end{definition}
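As a quick numerical illustration (not part of the ProofWiki definition), $\Omega$ can be evaluated with SciPy's implementation of the principal branch $W_0$, or checked independently with a few Newton iterations on $f(x) = x e^x - 1$; the script below is a minimal sketch.

```python
import math
from scipy.special import lambertw  # principal branch W_0 by default (k=0)

# Omega = W_0(1), i.e. the unique real solution of x * exp(x) = 1.
omega = lambertw(1).real
print(omega)                      # ~0.5671432904097838
print(omega * math.exp(omega))    # ~1.0

# Independent check via Newton's method on f(x) = x*exp(x) - 1.
x = 0.5
for _ in range(20):
    f = x * math.exp(x) - 1.0
    fprime = math.exp(x) * (x + 1.0)
    x -= f / fprime
print(abs(x - omega) < 1e-12)     # True
```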
ProofWiki
Increased gait variability during robot-assisted walking is accompanied by increased sensorimotor brain activity in healthy people

Alisa Berger (ORCID: orcid.org/0000-0003-2888-7064), Fabian Horst, Fabian Steinberg, Fabian Thomas, Claudia Müller-Eising, Wolfgang I. Schöllhorn & Michael Doppelmayr

Journal of NeuroEngineering and Rehabilitation volume 16, Article number: 161 (2019)

Gait disorders are major symptoms of neurological diseases affecting the quality of life. Interventions that restore walking and allow patients to maintain safe and independent mobility are essential. Robot-assisted gait training (RAGT) proved to be a promising treatment for restoring and improving the ability to walk. Due to heterogeneous study designs and fragmentary knowledge about the neural correlates associated with RAGT and the relation to motor recovery, guidelines for an individually optimized therapy can hardly be derived. To optimize robotic rehabilitation, it is crucial to understand how robotic assistance affects locomotor control and its underlying brain activity. Thus, this study aimed to investigate the effects of robotic assistance (RA) during treadmill walking (TW) on cortical activity and the relationship between RA-related changes of cortical activity and biomechanical gait characteristics.

Twelve healthy, right-handed volunteers (9 females; M = 25 ± 4 years) performed unassisted walking (UAW) and robot-assisted walking (RAW) trials on a treadmill, at 2.8 km/h, in a randomized, within-subject design. Ground reaction forces (GRFs) provided information regarding the individual gait patterns, while brain activity was examined by measuring cerebral hemodynamic changes in brain regions associated with the cortical locomotor network, including the sensorimotor cortex (SMC), premotor cortex (PMC) and supplementary motor area (SMA), using functional near-infrared spectroscopy (fNIRS).

A statistically significant increase in brain activity was observed in the SMC compared with the PMC and SMA (p < 0.05), and a classical double bump in the vertical GRF was observed during both UAW and RAW throughout the stance phase. However, intraindividual gait variability increased significantly with RA and was correlated with increased brain activity in the SMC (p = 0.05; r = 0.57).

On the one hand, robotic guidance could generate sensory feedback that promotes active participation, leading to increased gait variability and somatosensory brain activity. On the other hand, changes in brain activity and biomechanical gait characteristics may also be due to the sensory feedback of the robot, which disrupts the cortical network of automated walking in healthy individuals. More comprehensive neurophysiological studies both in laboratory and in clinical settings are necessary to investigate the entire brain network associated with RAW.

Safe and independent locomotion represents a fundamental motor function for humans that is essential for self-contained living and good quality of life [1,2,3,4,5]. Locomotion requires the ability to coordinate a number of different muscles acting on different joints [6,7,8], which are guided by cortical and subcortical brain structures within the locomotor network [9]. Structural and functional changes within the locomotor network are often accompanied by gait and balance impairments which are frequently considered to be the most significant concerns in individuals suffering from brain injuries or neurological diseases [5, 10, 11].
Reduced walking speeds and step lengths [12] as well as a non-optimal amount of gait variability [13,14,15] are common symptoms associated with gait impairments that increase the risk of falling [16]. In addition to manual-assisted therapy, robotic neurorehabilitation has often been applied in recent years [17, 18] because it provides early, intensive, task-specific and multi-sensory training which is thought to be effective for balance and gait recovery [17, 19, 20]. Depending on the severity of the disease, movements can be completely guided or assisted, tailored to individual needs [17], using either stationary robotic systems or wearable powered exoskeletons. Previous studies investigated the effectiveness of robot-assisted gait training (RAGT) in patients suffering from stroke [21, 22], multiple sclerosis [23,24,25,26], Parkinson's disease [27, 28], traumatic brain injury [29] or spinal cord injury [30,31,32]. Positive effects of RAGT on walking speed [33, 34], leg muscle force [23], step length, and gait symmetry [29, 35] were reported. However, the results of different studies are difficult to summarize due to the lack of consistency in protocols and settings of robotic-assisted treatments (e.g., amount and frequency of training sessions, amount and type of provided robotic support) as well as fragmentary knowledge of the effects on functional brain reorganization, motor recovery and their relation [36, 37]. Therefore, it is currently a major challenge to draw up guidelines for robotic rehabilitation protocols [22, 36,37,38]. To design prolonged personalized training protocols in robotic rehabilitation that maximize individual treatment effects [37], it is crucial to increase the understanding of changes in locomotor patterns [39] and brain signals [40] underlying RAGT and how they are related [36, 41]. A series of studies investigated the effects of robotic assistance (RA) on biomechanical gait patterns in healthy people [39, 42,43,44]. On the one hand, altered gait patterns were reported during robot-assisted walking (RAW) compared to unassisted walking (UAW), in particular, substantially higher muscle activity in the quadriceps, gluteus and adductor longus leg muscles and lower muscle activity in the gastrocnemius and tibialis anterior ankle muscles [39, 42] as well as reduced lower-body joint angles due to the small medial-lateral hip movements [45,46,47]. On the other hand, similar muscle activation patterns were observed during RAW compared to UAW [44, 48, 49], indicating that robotic devices allow physiological muscle activation patterns during gait [48]. However, it is hypothesized that the ability to execute a physiological gait pattern depends on how the training parameters such as body weight support (BWS), guidance force (GF) or kinematic restrictions in the robotic devices are set [44, 48, 50]. For example, Aurich-Schuler et al. [48] reported that the movements of the trunk and pelvis are more similar to UAW on a treadmill when the pelvis is not fixed during RAW, indicating that differences in muscle activity and kinematic gait characteristics between RAW and UAW are due to the reduction in degrees of freedom that users experience while walking in the robotic device [45]. In line with this, a clinical concern that is often raised with respect to RAW is the lack of gait variability [45, 48, 50].
It is assumed that since the robotic systems are often operated with 100% GF, which means that the devices attempt to force a particular gait pattern regardless of the user's intentions, the user lacks the ability to vary and adapt their gait patterns [45]. Contrary to this, Hidler et al. [45] observed differences in kinematic gait patterns between subsequent steps during RAW, as demonstrated by variability in relative knee and hip movements. Nevertheless, Gizzi et al. [49] showed that the muscular activity during RAW was clearly more stereotyped and similar among individuals compared to UAW. They concluded that RAW provides a therapeutic approach to restore and improve walking that is more repeatable and standardized than approaches based on exercising during UAW [49]. In addition to biomechanical gait changes, insights into brain activity and intervention-related changes in brain activity that relate to gait responses will contribute to the optimization of therapy interventions [41, 51]. Whereas the application of functional magnetic resonance imaging (fMRI), considered as the gold standard for the assessment of activity in cortical and subcortical structures, is restricted due to the vulnerability to movement artifacts and the limited range of motion in the scanner [52], functional near-infrared spectroscopy (fNIRS) is affordable and easily implementable in a portable system and less susceptible to motion artifacts, thus facilitating a wider range of applications with special cohorts (e.g., children, patients) and in everyday environments (e.g., during a therapeutic session of RAW or UAW) [53, 54]. Although with lower resolution compared to fMRI [55], fNIRS also relies on the principle of neurovascular coupling and allows the indirect evaluation of cortical activation [56, 57] based on hemodynamic changes which are analogous to the blood-oxygenation-level-dependent responses measured by fMRI [56]. Despite limited depth sensitivity, which restricts the measurement of brain activity to cortical layers, it is a promising tool to investigate the contribution of cortical areas to the neuromotor control of gross motor skills, such as walking [53]. Regarding the cortical correlates of walking, numerous studies identified either increased oxygenated hemoglobin (Hboxy) concentration changes in the sensorimotor cortex (SMC) by using fNIRS [53, 57,58,59] or suppressed alpha and beta power in sensorimotor areas by using electroencephalography (EEG) [60,61,62], demonstrating that the motor cortex and corticospinal tract contribute directly to the muscle activity of locomotion [63]. However, brain activity during RAW [36, 61, 64,65,66,67,68], especially in patients [69, 70] or by using fNIRS [68, 69], has rarely been studied [71]. Analyzing the effects of RA on brain activity in healthy volunteers, Knaepen et al. [36] reported significantly suppressed alpha and beta rhythms in the right sensory cortex during UAW compared to RAW with 100% GF and 0% BWS. Thus, a significantly larger involvement of the SMC during UAW compared to RAW was concluded [36]. In contrast, increases of Hboxy were observed in motor areas during RAW compared to UAW, leading to the conclusion that RA facilitated increased cortical activation within locomotor control systems [68]. Furthermore, Simis et al. [69] demonstrated the feasibility of fNIRS to evaluate the real-time activation of the primary motor cortex (M1) in both hemispheres during RAW in patients suffering from spinal cord injury.
Two out of three patients exhibited enhanced M1 activation during RAW compared with standing, which indicates the enhanced involvement of motor cortical areas in walking with RA [69]. To summarize, previous studies mostly focused on the effects of RA on either gait characteristics or brain activity. Combined measurements investigating the effects of RA on both biomechanical and hemodynamic patterns might contribute to a better understanding of the neurophysiological mechanisms underlying gait and gait disorders, as well as of the effectiveness of robotic rehabilitation for motor recovery [37, 71]. Up to now, no consensus exists regarding how robotic devices should be designed, controlled or adjusted (i.e., device settings, such as the level of support) for synergistic interactions with the human body to achieve optimal neurorehabilitation [37, 72]. Therefore, further research concerning the behavioral and neurophysiological mechanisms underlying RAW, as well as the modulatory effect of RAGT on neuroplasticity and gait recovery, is required, given that such knowledge is of clinical relevance for the development of gait rehabilitation strategies. Consequently, the central purpose of this study was to investigate both gait characteristics and hemodynamic activity during RAW to identify RAW-related changes in brain activity and their relationship to gait responses. Assuming that sensorimotor areas play a pivotal role within the cortical network of automatic gait [9, 53] and that RA affects gait and brain patterns in young, healthy volunteers [39, 42, 45, 68], we hypothesized that RA results in both altered gait and brain activity patterns. Based on previous studies, more stereotypical gait characteristics with less inter- and intraindividual variability are expected during RAW due to 100% GF and the fixed pelvis compared to UAW [45, 48], whereas brain activity in the SMC can be either decreased [36] or increased [68].

This study was performed in accordance with the Declaration of Helsinki. Experimental procedures were performed in accordance with the recommendations of the Deutsche Gesellschaft für Psychologie and were approved by the ethical committee of the Medical Association Hessen in Frankfurt (Germany). The participants were informed about all relevant study-related contents and gave their written consent prior to the initiation of the experiment. Twelve healthy subjects (9 female, 3 male; aged 25 ± 4 years), without any gait pathologies and free of extremity injuries, were recruited to participate in this study. All participants were right-handed, according to the Edinburgh handedness scale [73], without any neurological or psychological disorders and with normal or corrected-to-normal vision. All participants were requested to disclose pre-existing neurological and psychological conditions, medical conditions, drug intake, and alcohol or caffeine intake during the preceding week. The Lokomat (Hocoma AG, Volketswil, Switzerland) is a robotic gait-orthosis, consisting of a motorized treadmill and a BWS system. Two robotic actuators can guide the knee and hip joints of participants to match pre-programmed gait patterns, which were derived from average joint trajectories of healthy walkers, using a GF ranging from 0 to 100% [74, 75] (Fig. 1a). Kinematic trajectories can be adjusted to each individual's size and step preferences [45]. The BWS was adjusted to 30% body weight for each participant, and the control mode was set to provide 100% guidance [64].

Montage and Setup.
a Participant during robot-assisted walking (RAW), with functional near-infrared spectroscopy (fNIRS) montage. b fNIRS montage; S = Sources; D = Detectors. c Classification of regions of interest (ROI): supplementary motor area/premotor cortex (SMA/PMC) and sensorimotor cortex (SMC)

Functional activation of the human cerebral cortex was recorded using a near-infrared optical tomographic imaging device (NIRSport, NIRx, Germany; wavelengths: 760 nm, 850 nm; sampling rate: 7.81 Hz). The methodology and the underlying physiology are explained in detail elsewhere [76]. A total of 16 optodes (8 emitters, 8 detectors) were placed with an interoptode distance of 3 cm [53, 54] above the motor cortex, based on the landmarks from the international 10–5 EEG system [77], resulting in 24 channels (source-detector pairs) of measurement (Fig. 1b). The spatial resolution was up to 1 cm. Head dimensions were individually measured and corresponding cap sizes assigned. Channel positions covered identical regions of both hemispheres, including the SMC (Brodmann Area [BA] 1–4) and the supplementary motor area/premotor cortex (SMA/PMC; BA6) (Fig. 1c). Participants were equipped with standardized running shoes (Saucony Ride 9, Saucony, USA). Pressure insoles (Pedar mobile system, Novel GmbH, Germany) were inserted into the shoes for the synchronized measurement of plantar foot pressure, at a frequency of 100 Hz. Each insole consists of 99 capacitive sensors and covers the entire plantar area. The data recording process was managed by the software Novel Pedar-X Recorder 25.6.3 (Novel GmbH, Germany), and the vertical ground reaction force (GRF) was estimated for the analysis of kinetic and temporal gait variables. Participants performed two blocks, (1) UAW and (2) RAW, in a randomized order. Each block consisted of five walking trials (60 s) and intertrial standing intervals of 60 s [41, 53, 68, 78] (Fig. 2). While walking, the participants were instructed to actively follow the orthosis's guidance while watching a neutral symbol (black cross) on a screen at eye level to ensure the most natural walking possible in an upright posture. During standing (rest), participants were instructed to stand with their feet shoulder-width apart while watching the same black cross. Furthermore, the participants were requested to avoid head movements and talking during the entire experiment, to reduce motion and physiological artifacts [78]. Prior to the experiment, individual adjustments of the Lokomat were undertaken, according to common practices in clinical therapy. The safety procedures of the rehabilitation center required that all subjects wore straps around the front foot to assist with ankle dorsiflexion. To familiarize themselves with the robotic device and treadmill walking (TW), participants walked with and without the Lokomat for 4 min before the experiment started.

Study design and schematic illustration of unassisted walking (UAW) and robot-assisted walking (RAW)

Data processing and analysis

fNIRS raw data were preprocessed and analyzed using the time series analysis routine available in the MATLAB-based NIRSlab analysis package (v2017.05, Nirx Medical Technologies, Glen Head, NY, ["Biomedical Optics"]) [79], following current recommendations when possible [53, 78]. In each channel of each individual participant, the fNIRS signal was visually inspected with respect to transient spikes and abrupt discontinuities, which represent the two most common forms of movement artifacts in fNIRS data.
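Although the artifact screening described above was performed visually, the same two artifact classes can also be flagged automatically. The following is a minimal, illustrative Python/NumPy sketch under our own assumptions (it is not the NIRSlab routine; the function name, threshold logic and synthetic example are invented for illustration): large sample-to-sample differences mark both spikes and discontinuities.

```python
import numpy as np

def flag_artifacts(signal, sd_thresh=5.0):
    """Flag samples whose sample-to-sample change is an outlier.

    Illustrative only: short spikes and abrupt jumps both appear as unusually
    large values in the first difference, so a robust (median/MAD-based)
    threshold on the differences marks candidates for review or correction.
    """
    signal = np.asarray(signal, dtype=float)
    diff = np.diff(signal)
    med = np.median(diff)
    robust_sd = 1.4826 * np.median(np.abs(diff - med))   # MAD-based SD estimate
    bad = np.abs(diff - med) > sd_thresh * robust_sd
    flags = np.zeros(signal.shape, dtype=bool)
    flags[:-1] |= bad        # sample before the jump
    flags[1:] |= bad         # sample after the jump
    return flags

# Synthetic example: a slow oscillation sampled at ~7.81 Hz for 60 s,
# with one transient spike and one baseline jump added.
fs = 7.81
t = np.arange(0, 60, 1 / fs)
x = 0.1 * np.sin(2 * np.pi * 0.05 * t)
x[150] += 2.0      # spike
x[300:] += 1.5     # discontinuity
print(np.flatnonzero(flag_artifacts(x)))   # indices around 150 and 300
```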
First, sections containing discontinuities (or "jumps") as well as long-term drifts were detected and corrected (standard deviation threshold = 5) [79]. Second, spikes were smoothed by a procedure that replaces contaminated data with the nearest signal [79]. Third, a band-pass filter (0.01 to 0.2 Hz) was applied to attenuate slow drifts and high-frequency noise, in order to reduce unknown global trends due to breathing, respiratory or cardiac rhythms, vasomotion, or other movement artifacts [59]. Then, time series of the hemodynamic states of Hboxy and deoxygenated hemoglobin (Hbdeoxy) were computed using the modified Beer-Lambert law [80, 81]. The following parameters were specified: wavelengths (WL1 = 760 nm; WL2 = 850 nm), differential pathlength factors (7.25 for WL1; 6.38 for WL2), interoptode distances (3 cm), background tissue values (totHb: 75 uM; MVO2Sat: 70%). Preprocessed Hboxy concentration changes (∆Hboxy) were exported and processed as follows: 50 s per walking trial were used to analyze the hemodynamic responses during (1) UAW and (2) RAW due to the time needed for the acceleration and deceleration of the treadmill. The averaged baseline concentration values of rest before each walking trial were subtracted from the task-evoked concentration measurements, to account for time-dependent changes in cerebral oxygenation [78]. ∆Hboxy was calculated for regions of interest (ROI) (see Fig. 1c) during both UAW and RAW and used as a marker of regional cortical activation, since it is more sensitive to locomotion-related activities than Hbdeoxy [82] and represents an accurate indicator of hemodynamic activity [83]. GRFs were preprocessed and analyzed using Matlab 2017b (MathWorks, USA). GRFs were filtered using a second-order Butterworth bidirectional low-pass filter, at a cut-off frequency of 30 Hz. Offline processing included kinetic and temporal variables that were calculated based on stance-phase detection, using a GRF threshold of 50 N. The first and last ten stance phases (steps) from each of the five walking trials were excluded from the analysis because they corresponded with the acceleration and deceleration phases of the treadmill. The swing and stance phase times were measured. The stance phase was also subdivided into initial double-limb, single-limb and terminal double-limb support times. Furthermore, the number of steps and the cadence were calculated. Kinetic variables were analyzed during the stance phase of walking. The GRF values were normalized against body mass and were time-normalized against 101 data points corresponding with the stance phase of walking. Gait variability was estimated for the time-continuous GRF during the stance phase, using the coefficient of variation (CV) [84]. According to Eq. (1), the intraindividual CV was calculated based on the mean (\( \overline{GRF_{s,b,i}} \)) and standard deviation (\( \sigma_{s,b,i} \)) of the normalized GRF at the i-th interval of a concatenated vector of the right and left leg stance phases. The intraindividual CV was calculated for each subject s and both blocks b (RAW and UAW). $$ IntraindividualCV\left(s,b\right)=\frac{\sqrt{\frac{1}{202}\ast {\sum}_{i=1}^{202}{\sigma_{s,b,i}}^2}}{\frac{1}{202}\ast {\sum}_{i=1}^{202}\mid \overline{GR{F}_{s,b,i}}\mid}\ast 100\left[\%\right] $$ Similarly, interindividual variability was estimated across the subjects' mean GRFs, each calculated across the time-continuous GRF from all stance phases of one subject.
According to Eq. (2), the interindividual CV was calculated based on the mean (\( \overline{GRF_{\overline{s},b,i}} \)) and standard deviation (\( {\sigma}_{\overline{s},b,i} \)) of the normalized subjects' mean GRF at the i-th interval of the concatenated vector of the right and left leg stance phases. The interindividual CV was calculated for both blocks b (RAW and UAW). $$ InterindividualCV(b)=\frac{\sqrt{\frac{1}{202}\ast {\sum}_{i=1}^{202}{\sigma_{\overline{s},b,i}}^2}}{\frac{1}{202}\ast {\sum}_{i=1}^{202}\mid \overline{GR{F}_{\overline{s},b,i}}\mid}\ast 100\left[\%\right] $$ The absolute magnitude of the symmetry index, according to Herzog et al. [85], was adapted for i time-intervals of the time-continuous GRF. The symmetry index (SI) is a method of assessing the differences between the variables associated with both lower limbs during walking. According to Eq. (3), the SI was calculated based on the absolute difference of the mean normalized GRFs (\( \overline{{GRF_{right}}_i} \) and \( \overline{{GRF_{left}}_i} \)) at the i-th interval for each subject s and both blocks b (RAW and UAW). An SI value of 0% indicates full symmetry, while an SI value > 0% indicates the degree of asymmetry [85]. (An illustrative code sketch of these gait measures is provided below.) $$ SI\left(s,b\right)=\frac{1}{101}\ast \left(\sum \limits_{i=1}^{101}\frac{\mid \overline{GR{F_{right}}_{s,b,i}}-\overline{GR{F_{left}}_{s,b,i}}\mid }{\frac{1}{2}\ast \mid \overline{GR{F_{right}}_{s,b,i}}+\overline{GR{F_{left}}_{s,b,i}}\mid}\ast 100\right)\left[\%\right] $$ Based on the time-continuous vertical GRF waveforms, three time-discrete variables were derived within the stance phase: the magnitude of the first peak (weight acceptance), the valley (mid-stance) and the magnitude of the second peak (push-off), as well as their temporal appearances during the stance phase. The statistical analysis was conducted using SPSS 23 (IBM, Armonk, New York, USA). Normal distribution was examined for both hemodynamic and kinetic/temporal variables using the Shapiro-Wilk test (p ≥ 0.05). Averaged Hboxy values were computed for each subject and ROI (SMA/PMC, SMC) during both UAW and RAW [53, 78] and were normalized (normHboxy) by dividing them by the corresponding signal amplitude for the whole experiment [41, 59]. A two-way analysis of variance (ANOVA), with the factors condition (UAW or RAW) and ROI (SMA/PMC, SMC), was used to analyze differences in cortical hemodynamic patterns. In cases of significant main effects, Bonferroni-adjusted post hoc analyses provided statistical information regarding the differences among the ROIs by condition. Temporal and kinetic gait variables were statistically tested for differences between the experimental conditions (UAW and RAW) using paired t-tests. The overall level of significance was set to p ≤ 0.05. Mauchly's test was used to check for any violations of sphericity. If a violation of sphericity was detected (p < 0.05) and a Greenhouse-Geisser epsilon ε > 0.75 existed, the Huynh-Feldt corrected p-values were reported. Otherwise (epsilon ε < 0.75), a Greenhouse-Geisser correction was applied. Effect sizes were given in partial eta-squared (ηp2) or interpreted according to Cohen. The association between cortical activation and gait characteristics was explored using Pearson's correlation coefficient.

Cortical activity (Hboxy)

The effect of RAW on ∆Hboxy in locomotor cortical areas was analyzed using a two-way repeated measures ANOVA with the factors ROI (SMA/PMC, SMC) and CONDITION (UAW, RAW). ∆Hboxy served as the dependent variable.
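As an illustrative aside between the analysis description and the results (not the authors' MATLAB/SPSS code), the GRF processing steps and the variability and symmetry measures of Eqs. (1) and (3) can be sketched in Python/NumPy as follows. Only the 30 Hz cut-off, the 50 N stance threshold, the 101-point time normalization and the body-mass normalization are taken from the Methods above; all function names and the commented usage example are our own.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def stance_segments(grf, fs=100.0, thresh=50.0, cutoff=30.0):
    """Zero-phase 2nd-order Butterworth low-pass filter, then return one
    array of samples per detected stance phase (filtered GRF > threshold)."""
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    f = filtfilt(b, a, np.asarray(grf, dtype=float))
    mask = np.r_[False, f > thresh, False].astype(int)
    starts = np.flatnonzero(np.diff(mask) == 1)
    stops = np.flatnonzero(np.diff(mask) == -1)
    return [f[s:e] for s, e in zip(starts, stops)]

def normalize_stances(segments, body_mass, n_points=101):
    """Scale by body mass and time-normalize each stance phase to 101 points."""
    grid = np.linspace(0.0, 1.0, n_points)
    rows = [np.interp(grid, np.linspace(0.0, 1.0, len(s)), s) / body_mass
            for s in segments]
    return np.vstack(rows)                  # shape: (n_steps, n_points)

def intraindividual_cv(right, left):
    """Eq. (1): CV over the concatenated right+left mean stance curves (202 points)."""
    mean = np.r_[right.mean(axis=0), left.mean(axis=0)]
    sd = np.r_[right.std(axis=0), left.std(axis=0)]
    return np.sqrt(np.mean(sd ** 2)) / np.mean(np.abs(mean)) * 100.0

def symmetry_index(right, left):
    """Eq. (3): mean absolute right-left difference relative to their mean, in %."""
    r, l = right.mean(axis=0), left.mean(axis=0)
    return np.mean(np.abs(r - l) / (0.5 * np.abs(r + l))) * 100.0

# Hypothetical usage with two insole GRF recordings (100 Hz, in newtons):
# right = normalize_stances(stance_segments(grf_right), body_mass=70.0)
# left  = normalize_stances(stance_segments(grf_left),  body_mass=70.0)
# print(intraindividual_cv(right, left), symmetry_index(right, left))
```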
A significant main effect for ROI [F(1,11) = 11.610, p = 0.006, ηp2 = 0.513] was found, indicating significantly greater ∆Hboxy values in the 17 channels (4–12 and 17–24) covering regions of the SMC [BA1–4] compared to the 7 channels (1–3, 13–16) covering regions of the SMA/PMC [BA6] (p = 0.052), independent of the condition. Neither CONDITION [F(1,11) = 1.204, p = 0.296, ηp2 = 0.099] nor the interaction ROI x CONDITION [F(1,11) = 0.092, p = 0.767, ηp2 = 0.008] was significant (Fig. 3).

Normalized oxygenated hemoglobin (Hboxy; mean ± SME) for unassisted walking (UAW) and robot-assisted walking (RAW). SMA/PMC, supplementary motor area/premotor cortex; SMC, sensorimotor cortex; SME = standard mean error

Gait characteristics

Descriptive analyses of the mean vertical GRFs show a "classical" double bump (M-shape) during the stance phase [84] for both UAW and RAW (Fig. 4). However, various differences in the gait characteristics were observed between the two conditions. First, the mean vertical GRFs were lower during RAW than during UAW. Second, the relative appearance of the peak values occurred earlier for the first peak and later for the second peak during RAW compared with UAW. Third, the vertical GRFs had higher standard deviations during RAW than during UAW. Statistical analyses of the time-discrete kinetic gait variables confirmed significantly lower GRFs and earlier and later appearances for the first and second vertical GRF peaks, respectively, during RAW than during UAW (Table 1).

Normalized vertical ground reaction force (GRF; mean ± SD) during the stance phase of unassisted walking (UAW) and robot-assisted walking (RAW). In Additional file 1, normalized vertical GRF during the stance phase of UAW (Figure S1) and RAW (Figure S2) are presented for each individual participant

Table 1 Comparison of vertical ground reaction force variables (GRF; mean ± SD) during the stance phase of unassisted walking (UAW) and robot-assisted walking (RAW), SD = standard deviation

Fourth, significantly increased inter- and intraindividual variability and asymmetry between the time-continuous GRFs of the right and left feet (SI values) and significantly longer stance and swing phases emerged during RAW compared with UAW, despite the guidance of the robotic device and the same treadmill velocity (Table 2). Accordingly, lower numbers of steps and lower cadence values were observed during RAW than during UAW.

Table 2 Comparison of temporal gait variables (mean ± SD) during unassisted walking (UAW) and robot-assisted walking (RAW)

Association between changes in cortical activity and gait characteristics

Correlation analyses showed that changes in gait characteristics due to RA were also associated with changes in cortical activity. During RAW, a positive association between gait variability and Hboxy was observed only in the SMC (p = 0.052, r = 0.570). No further correlations were found during UAW or for other brain regions (SMA/PMC p = 0.951, r = 0.020). Thus, increased gait variability during RAW was associated with increased brain activity in the SMC (Fig. 5b).

Correlations between relative oxygenated hemoglobin (Hboxy) and gait variability calculated by the intraindividual coefficient of variation (CV) during unassisted walking (UAW) and robot-assisted walking (RAW).
a SMA/PMC, supplementary motor area/premotor cortex; b SMC, sensorimotor cortex; the shaded area represents the 95% confidence interval

In this study, the effects of RA on cortical activity during TW and the relationship to changes in gait characteristics were investigated. We identified a classical double bump in the GRF, throughout the stance phase during both UAW and RAW, which was accompanied by significantly increased brain activity in the SMC compared to premotor/supplementary motor areas. However, individual analyses showed significantly higher inter- and intraindividual gait variability due to RA that correlated with increased hemodynamic activity in the SMC (p = 0.052; r = 0.570). In both conditions, the typical shape characteristics of the mean GRF curves during the stance phase were observed. This is not in line with the results of Neckel et al. [46], who did not report a classical double bump during the stance phase during RAW, which could be due to the age differences of our samples. Furthermore, significantly altered kinetic patterns (lower GRF values and earlier and later appearances for the first and second vertical GRF peak values, respectively) as well as large inter- and intraindividual gait variability were observed during RAW compared to UAW. These results are consistent with other biomechanical studies showing altered muscle activity [39, 42] or kinematic patterns [45,46,47] due to RA. The results of greater inter- and intraindividual gait variability during RAW do not agree with the more stereotypical and similar patterns of Gizzi et al. [49], nor with the assumption that the user lacks the ability to vary and adapt gait patterns during RAW [45, 48, 50]. Regarding brain activity during UAW, Hboxy concentration changes were significantly increased in sensorimotor areas compared to areas of the SMA/PMC, which is in line with other neurophysiological studies that showed increased Hboxy concentrations during walking [57, 58]. This is further confirmed by EEG studies reporting suppressed alpha and beta oscillations within the SMC [60,61,62] during active walking. This also demonstrates that the SMC and the corticospinal tract contribute directly to muscle activity in locomotion [9, 53, 63], representing a general marker of an active movement-related neuronal state [61]. Analyzing the effects of RA on cortical patterns, significantly increased Hboxy concentration changes were also observed in the SMC compared to frontal areas. Whereas Kim et al. [68] observed more global network activation during RAW compared to UAW, Knaepen et al. [36] reported significantly suppressed alpha and beta power during UAW compared to RAW, with the conclusion that walking with 100% GF leads to less active participation and little activation of the SMC, which should be avoided during RAGT. However, during RAW, we observed a positive correlation between ΔHboxy concentrations in the SMC and intraindividual gait variability. Thus, individuals with larger gait variability showed higher sensorimotor brain activity, which is similar to the results reported by Vitorio et al. [41]. In that study, positive correlations between gait variability and ΔHboxy in the PMC and M1 were found in young healthy adults when walking with rhythmic auditory cueing [41]. The following two possible explanations are suggested. On the one hand, robotic guidance might induce additional and new sensory feedback that promotes active participation, resulting in high gait variability and increased brain activity.
This possibility is supported by previous observations that muscles exhibited marked and structurally phased activity, even under full guidance conditions [39, 42, 86,87,88]. Van Kammen et al. [88] found muscle activity in the vastus lateralis, suggesting that the leg muscles are still activated during RAW as opposed to the muscles related to stability and propulsion, in which activity is reduced under guidance conditions. This finding is remarkable because, in this state, the exoskeleton is responsible for walking control, and theoretically, no voluntary activity from the performer is required [87, 89]. However, the instructions used in the present study (i.e., 'actively move along with the device') may have affected activity, as previous studies have shown that encouraging active involvement increases muscle activity [86, 87] as well as brain activity significantly during RAW [64]. More specifically, Wagner et al. [64] showed significantly suppressed alpha and beta power during active compared to passive RAW. Dobkin (1994) also showed that passive stepping can lead to task-specific sensory information that induces and modulates step-like electromyography activity [90]. Thus, high guidance might also promote active contribution. Particularly in patients who are not able to walk unassisted, successful stepping induces task-specific sensory information that may trigger plastic changes in the central nervous system [88, 91]. Since active participation and the production of variable movement patterns are prerequisites for activity-dependent neuroplasticity [7, 20, 89, 92,93,94], it is important to determine whether the activation of the SMC can be triggered by changes in the levels of GF, BWS and kinematic freedom, in order to specifically provoke gait variability due to active participation of the patient [45, 48, 50]. High gait variability may indicate that people use multiple combinations of gait variables to walk more effectively [45, 95], resulting in better and faster improvements during robotic rehabilitation. On the other hand, the sensory feedback from robot guidance could also disturb the brain network underlying automatic walking, leading to increased gait variability and sensorimotor activity. According to Vitorio et al. [41], the requirement to adapt to external stimuli leads to disturbances in automatic walking in young healthy people, resulting in higher gait variability and higher cortical costs. As previous studies have shown, the ability to execute a physiological gait pattern depends on how the training parameters such as BWS, GF or kinematic freedom in the robotic devices are set. During RAW with a fixed pelvis, significantly altered muscle activity [39, 42, 45] and kinematic patterns [48, 50] were found. In addition to GF, BWS and kinematic freedom, the presence of foot support may also contribute to altered patterns. The safety procedures of the therapy institution required that all subjects wear straps around the front foot to assist with ankle dorsiflexion, which is known to reduce activity in the ankle dorsiflexors [39, 42]. In summary, increased gait variability and sensorimotor activity during RAW could be the result of active participation or disrupted automatic locomotor control. However, the generalization of these results to other populations is not intended or recommended.
Healthy elderly individuals [41] and patients with stroke [22], multiple sclerosis [23, 25, 26], Parkinson's disease [27, 28], brain injuries [29] or spinal cord injuries [30, 31] who suffer from gait and balance disorders react differently to robotic support than healthy young people, which may lead to different gait and brain activation patterns [44]. In addition to high inter- and intraindividual variability within one sample, the heterogeneity of methodological procedures between studies appears to pose another challenge [71]. Therefore, one future goal should be to understand the mechanisms underlying RAGT and which parameters determine the effectiveness of a single treatment in the heterogeneous population of patients suffering from neurological diseases [37]. For this purpose, objective biomarkers for motor recovery and neuroplastic changes have to be identified [37]. Then, specific training protocols and further interventions, such as augmented feedback with virtual reality, brain-machine interfaces or non-invasive brain stimulation, can be developed to deliver sustainable therapies for individualized rehabilitation that optimizes the outcome and efficacy of gait recovery, which together can foster independent living and improve the quality of life for neurological patients [37, 71].

Methodological limitations

Two methodological limitations that emerged from the present approach should be mentioned. First, the ability to walk is guided by an optimal interaction between cortical and subcortical brain structures within the locomotor network [53]. Using our NIRSport system, we were only able to report brain activity patterns in motor cortical areas and were unable to monitor the activities of subcortical areas or other cortical regions. Various studies have reported that patients with gait disorders recruit additional cortical regions to manage the demands of UAW and RAW, due to structural and/or functional changes in the brain. Measuring the entire cortical network underlying locomotion may be necessary to investigate neuronal compensations and cognitive resources used for neuroplastic processes during gait rehabilitation. Therefore, we must be careful when discussing brain activity associated with other regions involved in locomotor control [9]. Secondly, we must take into account the small sample size and young age (mean: 25 ± 4 years) of our healthy volunteers, who also had no gait pathologies. Thus, RA guidance of gait movement might have different effects in elderly subjects or patients who are not able to walk without restrictions [96]. Therefore, the findings from our study are difficult to apply to other age or patient groups, as neurological patients often suffer from movement disorders and therefore use different control strategies during RAW. Although the available results provide relevant insights into the mobile applications of neurophysiological measurements during RAW, with approaches for further therapeutic interventions during robotic rehabilitation, the effects of RAW must also be investigated in other groups and in patients with gait disorders in the future. The purpose of the present study was to investigate brain activity during UAW and RAW and how this activity was associated with gait characteristics. The results confirmed the involvement of the SMC during TW and significantly increased gait variability due to RA, which correlated positively with brain activity.
Furthermore, this study highlights the interaction between cortical activity and gait variability, stressing the need to use holistic, multisystem approaches when investigating TW in elderly individuals or patients suffering from gait disorders. Assessing the effects of RA on brain activity and gait characteristics is essential to develop a better understanding of how robotic devices affect human locomotion. This knowledge is crucial for interventional studies examining the rehabilitation of motor disorders. Basic research regarding robotic rehabilitation is necessary to gain a deeper understanding of the brain and gait patterns associated with RAW, which is needed for further investigations of gait recovery and neuroplastic changes. In addition, clinical longitudinal studies are required to identify individual gait improvements and the underlying neurophysiological changes, in order to develop therapies with respect to interindividual differences. RAGT devices should be designed to provide an amount of force that adapts to the patient's capacity, to achieve an optimal balance between forced motor activity and the promotion of the patient's voluntary activity [36, 92,93,94]. Further combined studies are necessary to determine the relationship between brain activity and functional motor improvements and to evaluate the effects of therapeutic interventions. Neurophysiological investigations can contribute to the development of robotic rehabilitation and to individual, closed-loop treatments for future neurorehabilitation therapies.

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

ANOVA: Analysis of variance
BA: Brodmann area
BWS: Body weight support
fNIRS: Functional near-infrared spectroscopy
GF: Guidance force
GRF: Ground reaction forces
Hbdeoxy: Deoxygenated hemoglobin
Hboxy: Oxygenated hemoglobin
M1: Primary motor cortex
RA: Robotic assistance
RAGT: Robot-assisted gait training
RAW: Robot-assisted walking
SME: Standard mean error
SI: Symmetry index
SMA: Supplementary motor area
SMC: Sensorimotor cortex
TW: Treadmill walking
UAW: Unassisted walking
ΔHboxy: Relative changes of oxygenated hemoglobin

Verghese J, LeValley A, Hall CB, Katz MJ, Ambrose AF, Lipton RB. Epidemiology of gait disorders in community-residing older adults. J Am Geriatr Soc. 2006;54:255–61. https://doi.org/10.1111/j.1532-5415.2005.00580.x. Forte R, Boreham CAG, de Vito G, Pesce C. Health and quality of life perception in older adults: the joint role of cognitive efficiency and functional mobility. Int J Environ Res Public Health. 2015;12:11328–44. https://doi.org/10.3390/ijerph120911328. Fagerström C, Borglin G. Mobility, functional ability and health-related quality of life among people of 60 years or older. Aging Clin Exp Res. 2010;22:387–94. Hirsch CH, Buzková P, Robbins JA, Patel KV, Newman AB. Predicting late-life disability and death by the rate of decline in physical performance measures. Age Ageing. 2012;41:155–61. https://doi.org/10.1093/ageing/afr151. Soh S-E, Morris ME, McGinley JL. Determinants of health-related quality of life in Parkinson's disease: a systematic review. Parkinsonism Relat Disord. 2011;17:1–9. https://doi.org/10.1016/j.parkreldis.2010.08.012. Nielsen JB. How we walk: central control of muscle activity during human walking. Neuroscientist. 2003;9:195–204. https://doi.org/10.1177/1073858403009003012. Bernstein N. The co-ordination and regulation of movements. 1st ed. Oxford: Pergamon Press; 1967. Hatze H. Motion variability--its definition, quantification, and origin. J Mot Behav.
1986;18:5–16. La Fougère C, Zwergal A, Rominger A, Förster S, Fesl G, Dieterich M, et al. Real versus imagined locomotion: a 18F-FDG PET-fMRI comparison. Neuroimage. 2010;50:1589–98. https://doi.org/10.1016/j.neuroimage.2009.12.060. Ellis T, Cavanaugh JT, Earhart GM, Ford MP, Foreman KB, Dibble LE. Which measures of physical function and motor impairment best predict quality of life in Parkinson's disease? Parkinsonism Relat Disord. 2011;17:693–7. https://doi.org/10.1016/j.parkreldis.2011.07.004. Schmid A, Duncan PW, Studenski S, Lai SM, Richards L, Perera S, Wu SS. Improvements in speed-based gait classifications are meaningful. Stroke. 2007;38:2096–100. https://doi.org/10.1161/STROKEAHA.106.475921. von Schroeder HP, Coutts RD, Lyden PD, Billings E, Nickel VL. Gait parameters following stroke: a practical assessment. J Rehabil Res Dev. 1995;32:25–31. Stergiou N, Harbourne R, Cavanaugh J. Optimal movement variability: a new theoretical perspective for neurologic physical therapy. J Neurol Phys Ther. 2006;30:120–9. https://doi.org/10.1097/01.npt.0000281949.48193.d9. Hausdorff JM. Gait dynamics, fractals and falls: finding meaning in the stride-to-stride fluctuations of human walking. Hum Mov Sci. 2007;26:555–89. https://doi.org/10.1016/j.humov.2007.05.003. Chen G, Patten C, Kothari DH, Zajac FE. Gait differences between individuals with post-stroke hemiparesis and non-disabled controls at matched speeds. Gait Posture. 2005;22:51–6. https://doi.org/10.1016/j.gaitpost.2004.06.009. Titianova EB, Tarkka IM. Asymmetry in walking performance and postural sway in patients with chronic unilateral cerebral infarction. J Rehabil Res Dev. 1995;32:236–44. Turner DL, Ramos-Murguialday A, Birbaumer N, Hoffmann U, Luft A. Neurophysiology of robot-mediated training and therapy: a perspective for future use in clinical populations. Front Neurol. 2013;4:184. https://doi.org/10.3389/fneur.2013.00184. Calabrò RS, Cacciola A, Bertè F, Manuli A, Leo A, Bramanti A, et al. Robotic gait rehabilitation and substitution devices in neurological disorders: where are we now? Neurol Sci. 2016;37:503–14. https://doi.org/10.1007/s10072-016-2474-4. Galen SS, Clarke CJ, Allan DB, Conway BA. A portable gait assessment tool to record temporal gait parameters in SCI. Med Eng Phys. 2011;33:626–32. https://doi.org/10.1016/j.medengphy.2011.01.003. Schmidt RA, Lee TD. Motor control and learning: A behavioral emphasis. 5th ed. Champaign: Human Kinetics; 2011. Bruni MF, Melegari C, de Cola MC, Bramanti A, Bramanti P, Calabrò RS. What does best evidence tell us about robotic gait rehabilitation in stroke patients: a systematic review and meta-analysis. J Clin Neurosci. 2018;48:11–7. https://doi.org/10.1016/j.jocn.2017.10.048. Mehrholz J, Thomas S, Werner C, Kugler J, Pohl M, Elsner B. Electromechanical-assisted training for walking after stroke. Cochrane Database Syst Rev. 2017;5:CD006185. https://doi.org/10.1002/14651858.CD006185.pub4. Beer S, Aschbacher B, Manoglou D, Gamper E, Kool J, Kesselring J. Robot-assisted gait training in multiple sclerosis: a pilot randomized trial. Mult Scler. 2008;14:231–6. https://doi.org/10.1177/1352458507082358. Lo AC, Triche EW. Improving gait in multiple sclerosis using robot-assisted, body weight supported treadmill training. Neurorehabil Neural Repair. 2008;22:661–71. https://doi.org/10.1177/1545968308318473. Schwartz I, Sajin A, Moreh E, Fisher I, Neeb M, Forest A, et al. Robot-assisted gait training in multiple sclerosis patients: a randomized trial. Mult Scler. 2012;18:881–90. 
https://doi.org/10.1177/1352458511431075. Straudi S, Fanciullacci C, Martinuzzi C, Pavarelli C, Rossi B, Chisari C, Basaglia N. The effects of robot-assisted gait training in progressive multiple sclerosis: a randomized controlled trial. Mult Scler. 2016;22:373–84. https://doi.org/10.1177/1352458515620933. Lo AC, Chang VC, Gianfrancesco MA, Friedman JH, Patterson TS, Benedicto DF. Reduction of freezing of gait in Parkinson's disease by repetitive robot-assisted treadmill training: a pilot study. J Neuroeng Rehabil. 2010;7:51. https://doi.org/10.1186/1743-0003-7-51. Picelli A, Melotti C, Origano F, Waldner A, Fiaschi A, Santilli V, Smania N. Robot-assisted gait training in patients with Parkinson disease: a randomized controlled trial. Neurorehabil Neural Repair. 2012;26:353–61. https://doi.org/10.1177/1545968311424417. Esquenazi A, Lee S, Packel AT, Braitman L. A randomized comparative study of manually assisted versus robotic-assisted body weight supported treadmill training in persons with a traumatic brain injury. PM R. 2013;5:280–90. https://doi.org/10.1016/j.pmrj.2012.10.009. Nam KY, Kim HJ, Kwon BS, Park J-W, Lee HJ, Yoo A. Robot-assisted gait training (Lokomat) improves walking function and activity in people with spinal cord injury: a systematic review. J Neuroeng Rehabil. 2017;14:24. https://doi.org/10.1186/s12984-017-0232-3. Schwartz I, Sajina A, Neeb M, Fisher I, Katz-Luerer M, Meiner Z. Locomotor training using a robotic device in patients with subacute spinal cord injury. Spinal Cord. 2011;49:1062–7. https://doi.org/10.1038/sc.2011.59. Wirz M, Zemon DH, Rupp R, Scheel A, Colombo G, Dietz V, Hornby TG. Effectiveness of automated locomotor training in patients with chronic incomplete spinal cord injury: a multicenter trial. Arch Phys Med Rehabil. 2005;86:672–80. https://doi.org/10.1016/j.apmr.2004.08.004. Benito-Penalva J, Edwards DJ, Opisso E, Cortes M, Lopez-Blazquez R, Murillo N, et al. Gait training in human spinal cord injury using electromechanical systems: effect of device type and patient characteristics. Arch Phys Med Rehabil. 2012;93:404–12. https://doi.org/10.1016/j.apmr.2011.08.028. Uçar DE, Paker N, Buğdaycı D. Lokomat: a therapeutic chance for patients with chronic hemiplegia. NeuroRehabil. 2014;34:447–53. https://doi.org/10.3233/NRE-141054. Husemann B, Müller F, Krewer C, Heller S, Koenig E. Effects of locomotion training with assistance of a robot-driven gait orthosis in hemiparetic patients after stroke: a randomized controlled pilot study. Stroke. 2007;38:349–54. https://doi.org/10.1161/01.STR.0000254607.48765.cb. Knaepen K, Mierau A, Swinnen E, Fernandez Tellez H, Michielsen M, Kerckhofs E, et al. Human-robot interaction: does robotic guidance force affect gait-related brain dynamics during robot-assisted treadmill walking? PLoS One. 2015;10:e0140626. https://doi.org/10.1371/journal.pone.0140626. Coscia M, Wessel MJ, Chaudary U, Millán JDR, Micera S, Guggisberg A, et al. Neurotechnology-aided interventions for upper limb motor rehabilitation in severe chronic stroke. Brain. 2019;142:2182–97. https://doi.org/10.1093/brain/awz181. Mehrholz J, Pohl M, Platz T, Kugler J, Elsner B. Electromechanical and robot-assisted arm training for improving activities of daily living, arm function, and arm muscle strength after stroke. Cochrane Database Syst Rev. 2018;9:CD006876. https://doi.org/10.1002/14651858.CD006876.pub5. Moreno JC, Barroso F, Farina D, Gizzi L, Santos C, Molinari M, Pons JL. Effects of robotic guidance on the coordination of locomotion. 
J Neuroeng Rehabil. 2013;10:79. https://doi.org/10.1186/1743-0003-10-79. Youssofzadeh V, Zanotto D, Stegall P, Naeem M, Wong-Lin K, Agrawal SK, Prasad G. Directed neural connectivity changes in robot-assisted gait training: a partial granger causality analysis. Conf Proc IEEE Eng Med Biol Soc. 2014;2014:6361–4. https://doi.org/10.1109/EMBC.2014.6945083. Vitorio R, Stuart S, Gobbi LTB, Rochester L, Alcock L, Pantall A. Reduced gait variability and enhanced brain activity in older adults with auditory cues: a functional near-infrared spectroscopy study. Neurorehabil Neural Repair. 2018;32:976–87. https://doi.org/10.1177/1545968318805159. Hidler JM, Wall AE. Alterations in muscle activation patterns during robotic-assisted walking. Clin Biomech (Bristol, Avon). 2005;20:184–93. https://doi.org/10.1016/j.clinbiomech.2004.09.016. Hidler J, Nichols D, Pelliccio M, Brady K, Campbell DD, Kahn JH, Hornby TG. Multicenter randomized clinical trial evaluating the effectiveness of the Lokomat in subacute stroke. Neurorehabil Neural Repair. 2009;23:5–13. https://doi.org/10.1177/1545968308326632. van Kammen K, Boonstra AM, van der Woude LHV, Reinders-Messelink HA, den Otter R. The combined effects of guidance force, bodyweight support and gait speed on muscle activity during able-bodied walking in the Lokomat. Clin Biomech (Bristol, Avon). 2016;36:65–73. https://doi.org/10.1016/j.clinbiomech.2016.04.013. Hidler J, Wisman W, Neckel N. Kinematic trajectories while walking within the Lokomat robotic gait-orthosis. Clin Biomech (Bristol, Avon). 2008;23:1251–9. https://doi.org/10.1016/j.clinbiomech.2008.08.004. Neckel ND, Blonien N, Nichols D, Hidler J. Abnormal joint torque patterns exhibited by chronic stroke subjects while walking with a prescribed physiological gait pattern. J Neuroeng Rehabil. 2008;5:19. https://doi.org/10.1186/1743-0003-5-19. Neckel N, Wisman W, Hidler J. Limb alignment and kinematics inside a Lokomat robotic orthosis. Conf Proc IEEE Eng Med Biol Soc. 2006;1:2698–701. https://doi.org/10.1109/IEMBS.2006.259970. Aurich-Schuler T, Gut A, Labruyère R. The FreeD module for the Lokomat facilitates a physiological movement pattern in healthy people - a proof of concept study. J Neuroeng Rehabil. 2019;16:26. https://doi.org/10.1186/s12984-019-0496-x. Gizzi L, Nielsen JF, Felici F, Moreno JC, Pons JL, Farina D. Motor modules in robot-aided walking. J Neuroeng Rehabil. 2012;9:76. https://doi.org/10.1186/1743-0003-9-76. Aurich-Schuler T, Labruyère R. An increase in kinematic freedom in the Lokomat is related to the ability to elicit a physiological muscle activity pattern: a secondary data analysis investigating differences between guidance force, path control, and FreeD. Front Robot AI. 2019;6:387. https://doi.org/10.3389/frobt.2019.00109. Chen I-H, Yang Y-R, Lu C-F, Wang R-Y. Novel gait training alters functional brain connectivity during walking in chronic stroke patients: a randomized controlled pilot trial. J Neuroeng Rehabil. 2019;16:33. https://doi.org/10.1186/s12984-019-0503-2. Cutini S, Brigadoi S. Unleashing the future potential of functional near-infrared spectroscopy in brain sciences. J Neurosci Methods. 2014;232:152–6. https://doi.org/10.1016/j.jneumeth.2014.05.024. Herold F, Wiegel P, Scholkmann F, Thiers A, Hamacher D, Schega L. Functional near-infrared spectroscopy in movement science: a systematic review on cortical activity in postural and walking tasks. Neurophotonics. 2017;4:41403. https://doi.org/10.1117/1.NPh.4.4.041403. Quaresima V, Ferrari M. 
Functional near-infrared spectroscopy (fNIRS) for assessing cerebral cortex function during human behavior in natural/social situations: a concise review. Organ Res Methods. 2019;22:46–68. https://doi.org/10.1177/1094428116658959. Koch SP, Koendgen S, Bourayou R, Steinbrink J, Obrig H. Individual alpha-frequency correlates with amplitude of visual evoked potential and hemodynamic response. Neuroimage. 2008;41:233–42. https://doi.org/10.1016/j.neuroimage.2008.02.018. Ferrari M, Mottola L, Quaresima V. Principles, techniques, and limitations of near infrared spectroscopy. Can J Appl Physiol. 2004;29:463–87. Miyai I, Tanabe HC, Sase I, Eda H, Oda I, Konishi I, et al. Cortical mapping of gait in humans: a near-infrared spectroscopic topography study. Neuroimage. 2001;14:1186–92. https://doi.org/10.1006/nimg.2001.0905. Hamacher D, Herold F, Wiegel P, Hamacher D, Schega L. Brain activity during walking: a systematic review. Neurosci Biobehav Rev. 2015;57:310–27. https://doi.org/10.1016/j.neubiorev.2015.08.002. Koenraadt KLM, Roelofsen EGJ, Duysens J, Keijsers NLW. Cortical control of normal gait and precision stepping: an fNIRS study. Neuroimage. 2014;85(Pt 1):415–22. https://doi.org/10.1016/j.neuroimage.2013.04.070. Severens M, Nienhuis B, Desain P, Duysens J. Feasibility of measuring event related desynchronization with electroencephalography during walking. Conf Proc IEEE Eng Med Biol Soc. 2012;2012:2764–7. https://doi.org/10.1109/EMBC.2012.6346537. Seeber M, Scherer R, Wagner J, Solis-Escalante T, Müller-Putz GR. EEG beta suppression and low gamma modulation are different elements of human upright walking. Front Hum Neurosci. 2014;8:485. https://doi.org/10.3389/fnhum.2014.00485. Bulea TC, Kim J, Damiano DL, Stanley CJ, Park H-S. Prefrontal, posterior parietal and sensorimotor network activity underlying speed control during walking. Front Hum Neurosci. 2015;9:247. https://doi.org/10.3389/fnhum.2015.00247. Petersen TH, Willerslev-Olsen M, Conway BA, Nielsen JB. The motor cortex drives the muscles during walking in human subjects. J Physiol Lond. 2012;590:2443–52. https://doi.org/10.1113/jphysiol.2012.227397. Wagner J, Solis-Escalante T, Grieshofer P, Neuper C, Müller-Putz G, Scherer R. Level of participation in robotic-assisted treadmill walking modulates midline sensorimotor EEG rhythms in able-bodied subjects. Neuroimage. 2012;63:1203–11. https://doi.org/10.1016/j.neuroimage.2012.08.019. Wagner J, Solis-Escalante T, Scherer R, Neuper C, Müller-Putz G. It's how you get there: walking down a virtual alley activates premotor and parietal areas. Front Hum Neurosci. 2014;8:93. https://doi.org/10.3389/fnhum.2014.00093. Wagner J, Solis-Escalante T, Neuper C, Scherer R, Müller-Putz G. Robot assisted walking affects the synchrony between premotor and somatosensory areas. Biomed Tech (Berl). 2013. https://doi.org/10.1515/bmt-2013-4434. Seeber M, Scherer R, Wagner J, Solis-Escalante T, Müller-Putz GR. High and low gamma EEG oscillations in central sensorimotor areas are conversely modulated during the human gait cycle. Neuroimage. 2015;112:318–26. https://doi.org/10.1016/j.neuroimage.2015.03.045. Kim HY, Yang SP, Park GL, Kim EJ, You JSH. Best facilitated cortical activation during different stepping, treadmill, and robot-assisted walking training paradigms and speeds: a functional near-infrared spectroscopy neuroimaging study. NeuroRehabilitation. 2016;38:171–8. https://doi.org/10.3233/NRE-161307. Simis M, Santos K, Sato J, Fregni F, Battistella L. T107. 
Using Functional Near Infrared Spectroscopy (fNIRS) to assess brain activity of spinal cord injury patient, during robot-assisted gait. Clin Neurophysiol. 2018;129:e43–4. https://doi.org/10.1016/j.clinph.2018.04.108. Calabrò RS, Naro A, Russo M, Bramanti P, Carioti L, Balletta T, et al. Shaping neuroplasticity by using powered exoskeletons in patients with stroke: a randomized clinical trial. J Neuroeng Rehabil. 2018;15:35. https://doi.org/10.1186/s12984-018-0377-8. Berger A, Horst F, Müller S, Steinberg F, Doppelmayr M. Current state and future prospects of EEG and fNIRS in robot-assisted gait rehabilitation: a brief review. Front Hum Neurosci. 2019;13:172. https://doi.org/10.3389/fnhum.2019.00172. Tucker MR, Olivier J, Pagel A, Bleuler H, Bouri M, Lambercy O, et al. Control strategies for active lower extremity prosthetics and orthotics: a review. J Neuroeng Rehabil. 2015;12:1. https://doi.org/10.1186/1743-0003-12-1. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. https://doi.org/10.1016/0028-3932(71)90067-4. Colombo G, Joerg M, Schreier R, Dietz V. Treadmill training of paraplegic patients using a robotic orthosis. J Rehabil Res Dev. 2000;37:693–700. Duschau-Wicke A, Caprez A, Riener R. Patient-cooperative control increases active participation of individuals with SCI during robot-aided gait training. J Neuroeng Rehabil. 2010;7:43. https://doi.org/10.1186/1743-0003-7-43. Obrig H, Villringer A. Beyond the visible--imaging the human brain with light. J Cereb Blood Flow Metab. 2003;23:1–18. https://doi.org/10.1097/01.WCB.0000043472.45775.29. Jurcak V, Tsuzuki D, Dan I. 10/20, 10/10, and 10/5 systems revisited: their validity as relative head-surface-based positioning systems. Neuroimage. 2007;34:1600–11. https://doi.org/10.1016/j.neuroimage.2006.09.024. Vitorio R, Stuart S, Rochester L, Alcock L, Pantall A. fNIRS response during walking - Artefact or cortical activity? A systematic review. Neurosci Biobehav Rev. 2017;83:160–72. https://doi.org/10.1016/j.neubiorev.2017.10.002. Xu Y, Graber HL, Barbour RL. nirsLAB: A Computing Environment for fNIRS Neuroimaging Data Analysis. Miami: Optical Society of America; 2014. p. 1. https://doi.org/10.1364/BIOMED.2014.BM3A.1. Sassaroli A, Fantini S. Comment on the modified Beer-Lambert law for scattering media. Phys Med Biol. 2004;49:N255–7. https://doi.org/10.1088/0031-9155/49/14/N07. Cope M, Delpy DT, Reynolds EO, Wray S, Wyatt J, van der Zee P. Methods of quantitating cerebral near infrared spectroscopy data. Adv Exp Med Biol. 1988;222:183–9. https://doi.org/10.1007/978-1-4615-9510-6_21. Suzuki M, Miyai I, Ono T, Oda I, Konishi I, Kochiyama T, Kubota K. Prefrontal and premotor cortices are involved in adapting walking and running speed on the treadmill: an optical imaging study. Neuroimage. 2004;23:1020–6. https://doi.org/10.1016/j.neuroimage.2004.07.002. Strangman G, Culver JP, Thompson JH, Boas DA. A quantitative comparison of simultaneous BOLD fMRI and NIRS recordings during functional brain activation. Neuroimage. 2002;17:719–31. Winter DA. Kinematic and kinetic patterns in human gait: variability and compensating effects. Hum Mov Sci. 1984;3:51–76. https://doi.org/10.1016/0167-9457(84)90005-8. Herzog W, Nigg BM, Read LJ, Olsson E. Asymmetries in ground reaction force patterns in normal human gait. Med Sci Sports Exerc. 1989;21:110–4. Aurich Schuler T, Müller R, van Hedel HJA. 
Leg surface electromyography patterns in children with neuro-orthopedic disorders walking on a treadmill unassisted and assisted by a robot with and without encouragement. J Neuroeng Rehabil. 2013;10:78. https://doi.org/10.1186/1743-0003-10-78. Israel JF, Campbell DD, Kahn JH, Hornby TG. Metabolic costs and muscle activity patterns during robotic- and therapist-assisted treadmill walking in individuals with incomplete spinal cord injury. Phys Ther. 2006;86:1466–78. https://doi.org/10.2522/ptj.20050266. van Kammen K, Boonstra AM, van der Woude LHV, Reinders-Messelink HA, den Otter R. Differences in muscle activity and temporal step parameters between Lokomat guided walking and treadmill walking in post-stroke hemiparetic patients and healthy walkers. J Neuroeng Rehabil. 2017;14:32. https://doi.org/10.1186/s12984-017-0244-z. Duschau-Wicke A, von Zitzewitz J, Caprez A, Lunenburger L, Riener R. Path control: a method for patient-cooperative robot-aided gait rehabilitation. IEEE Trans Neural Syst Rehabil Eng. 2010;18:38–48. https://doi.org/10.1109/TNSRE.2009.2033061. Dobkin BH, Harkema S, Requejo P, Edgerton VR. Modulation of locomotor-like EMG activity in subjects with complete and incomplete spinal cord injury. Neurorehabil Neural Repair. 1995;9:183–90. Riener R, Lünenburger L, Jezernik S, Anderschitz M, Colombo G, Dietz V. Patient-cooperative strategies for robot-aided treadmill training: first experimental results. IEEE Trans Neural Syst Rehabil Eng. 2005;13:380–94. https://doi.org/10.1109/TNSRE.2005.848628. Riener R, Lünenburger L, Maier I, Colombo G, Dietz V. Locomotor training in subjects with Sensori-motor deficits: an overview of the robotic gait Orthosis Lokomat. J Healthc Eng. 2010;1:197–216. https://doi.org/10.1260/2040-2295.1.2.197. Lewek MD, Cruz TH, Moore JL, Roth HR, Dhaher YY, Hornby TG. Allowing intralimb kinematic variability during locomotor training poststroke improves kinematic consistency: a subgroup analysis from a randomized clinical trial. Phys Ther. 2009;89:829–39. https://doi.org/10.2522/ptj.20080180. Krishnan C, Kotsapouikis D, Dhaher YY, Rymer WZ. Reducing robotic guidance during robot-assisted gait training improves gait function: a case report on a stroke survivor. Arch Phys Med Rehabil. 2013;94:1202–6. https://doi.org/10.1016/j.apmr.2012.11.016. Bohnsack-McLagan NK, Cusumano JP, Dingwell JB. Adaptability of stride-to-stride control of stepping movements in human walking. J Biomech. 2016;49:229–37. https://doi.org/10.1016/j.jbiomech.2015.12.010. Yang JK, Ahn NE, Kim DH, Kim DY. Plantar pressure distribution during robotic-assisted gait in post-stroke hemiplegic patients. Ann Rehabil Med. 2014;38:145–52. https://doi.org/10.5535/arm.2014.38.2.145. We thank Alina Hammer and Svenja Klink for assisting with the data collection, and Elmo Neuberger and Alexander Stahl for preparing the graphics included in this manuscript. Department of Sport Psychology, Institute of Sport Science, Johannes Gutenberg-University Mainz, Albert Schweitzer Straße 22, 55128, Mainz, Germany Alisa Berger , Fabian Steinberg , Fabian Thomas & Michael Doppelmayr Department of Training and Movement Science, Institute of Sport Science, Johannes Gutenberg-University Mainz, Mainz, Germany Fabian Horst & Wolfgang I. 
Schöllhorn School of Kinesiology, Louisiana State University, Baton Rouge, USA Fabian Steinberg Center of Neurorehabilitation neuroneum, Bad Homburg, Germany Claudia Müller-Eising Centre for Cognitive Neuroscience, Paris Lodron University of Salzburg, Salzburg, Austria Michael Doppelmayr Search for Alisa Berger in: Search for Fabian Horst in: Search for Fabian Steinberg in: Search for Fabian Thomas in: Search for Claudia Müller-Eising in: Search for Wolfgang I. Schöllhorn in: Search for Michael Doppelmayr in: AB: Research project conception and execution, data acquisition, statistical analysis, interpretation and the manuscript writing. AB acts as the corresponding author. FH: Research project conception and execution, data acquisition, statistical analysis, and interpretation. CM: Research project conception. FS, FT, WS, and MD: Research project conception, manuscript review, and critique. All authors read and approved the final version of the manuscript. Correspondence to Alisa Berger. The study was performed in accordance with the Declaration of Helsinki. Experimental procedures were performed in accordance with the recommendations of the Deutsche Gesellschaft für Psychologie and were approved by the ethical committee of the Medical Association Hessen in Frankfurt (Germany). Participants were informed of all relevant issues regarding the study and provided their written informed consent prior to the initiation of the experiment. Consent from all authors has been acquired prior to submission of this article. Additional file 1: Figure S1. Normalized vertical ground reaction force (GRF; mean) during the stance phase of unassisted walking (UAW) for each individual participant. Figure S2. Normalized vertical ground reaction force (GRF; mean) during the stance phase of robot-assisted walking (RAW) for each individual participant. Berger, A., Horst, F., Steinberg, F. et al. Increased gait variability during robot-assisted walking is accompanied by increased sensorimotor brain activity in healthy people. J NeuroEngineering Rehabil 16, 161 (2019) doi:10.1186/s12984-019-0636-3 Gait variability Brain activity Functional near-infrared spectroscopy fNIRS Robotic rehabilitation RAGT
\begin{definition}[Definition:Norm/Bounded Linear Functional] Let $H$ be a Hilbert space, and let $L$ be a bounded linear functional on $H$. The \emph{norm} of $L$ is defined as $\Vert L \Vert := \sup \left\{ \lvert L h \rvert : h \in H,\ \Vert h \Vert_H \le 1 \right\}$. \end{definition}
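For a concrete instance, fix $k \in H$ and consider the functional $L h = \langle h, k \rangle$; a short Cauchy--Schwarz computation identifies its norm:
\[
\lvert L h \rvert = \lvert \langle h, k \rangle \rvert \le \Vert h \Vert_H \, \Vert k \Vert_H ,
\qquad
L\!\left( \frac{k}{\Vert k \Vert_H} \right) = \Vert k \Vert_H \quad (k \ne 0),
\]
so that $\Vert L \Vert = \Vert k \Vert_H$.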
\begin{document} \title{Laguerre-Gaussian modes: entangled state representation and generalized Wigner transform in quantum optics} \author{{\small Li-yun Hu}$^{1}${\small \thanks{Corresponding author. \emph{E-mail addresses}: [email protected], [email protected])} and Hong-yi Fan}$^{2}$\\$^{1}${\small College of Physics \& Communication Electronics, Jiangxi Normal University, Nanchang 330022, China}\\$^{2}${\small Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China}} \date{} \maketitle \begin{abstract} {\small By introducing a new entangled state representation, we show that the Laguerre-Gaussian (LG) mode is just the wave function of the common eigenvector of the orbital angular momentum and the total photon number operators of 2-d oscillator, which can be generated by 50:50 beam splitter with the phase difference }$\phi=\pi/2$ {\small between the reflected and transmitted fields. Based on this and using the Weyl ordering invariance under similar transforms, the Wigner representation of LG is directly obtained, which can be considered as the generalized Wigner transform of Hermite Gaussian modes.} {\small PACS: 03.65.-w-Quantum mechanics} {\small PACS: 42.50.-p-Quantum optics }\ \ \ \ \ \ \end{abstract} \section{Introduction} It has been known that a Laguerre-Gaussian (LG) beam of paraxial light has a well-defined orbital angular momentum \cite{1,2,3,4,5}, which is useful in studying quantum entanglement \cite{6}. In Ref. \cite{3} Nienhuis and Allen employed operator algebra to describe the Laguerre-Gaussian beam, and noticed that Laguerre-Gaussian modes are laser mode analog of the angular momentum eigenstates of the isotropic 2-d harmonic oscillator. In Ref. \cite{7} Simon and Agarwal presented a phase-space description (the Wigner function) of the LG mode by exploiting the underlying phase-space symmetry. In this Letter we shall go a step further to show that LG mode is just the wave function of the common eigenvector $\left\vert n,l\right\rangle $ of the orbital angular momentum operator and the total photon number operator of 2-d oscillator in the{\small \ }entangled state representation (ESR). The ESR was constructed \cite{8,9} based on the Einstein-Podolsky-Rosen quantum entanglement \cite{10}. It is shown that $\left\vert n,l\right\rangle $ can be generated by 50:50 beam splitter with the phase difference $\phi=\pi/2$ between the reflected and transmitted fields. Then we use the Weyl ordering form of the Wigner operator and the Weyl ordering's covariance under similar transformations to directly derive the Wigner representation of LG beams, which seems economical. The marginal distributions of Wigner function (WF) are also obtained by the entangled state representation. It is found that the amplitude of marginal distribution is just the eigenfunction of the fractional Fourier transform (FrFT). In addition, LG mode can also be considered as the generalized Wigner transform of Hermite Gaussian modes by using the Schmidt decomposition of the ESR. \section{Eigenvector corresponding to Laguerre-Guassian mode} In Ref. 
\cite{3} the Bosonic operator algebra of the quantum harmonic oscillator is applied to the description of Gaussian modes of a laser beam, i.e., a paraxial beam of light is described by operators' eigenvector equations \begin{align} N\left\vert n,l\right\rangle & =n\left\vert n,l\right\rangle ,\text{\ \ } N\equiv\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) ,\nonumber\\ L\left\vert n,l\right\rangle & =l\left\vert n,l\right\rangle ,\text{ \ }L\equiv X_{1}P_{2}-X_{2}P_{1}, \label{7} \end{align} since $\left[ N,L\right] =0$, where $a_{i}^{\dagger}$ and $a_{i}$ ($i=1,2$) are Bose creation operator and annihilation operator; $L$ and $N$ are the orbital angular momentum operator and the total photon number operator of a paraxial beam of light, respectively. Using $X_{i}=\left( a_{i} +a_{i}^{\dagger}\right) /\sqrt{2}$ and $P_{i}=\left( a_{i}-a_{i}^{\dagger }\right) /(\mathtt{i}\sqrt{2})$, and $[a_{i},a_{j}^{\dagger}]=\delta_{ij}$, then \begin{equation} L=\mathtt{i}(a_{2}^{\dagger}a_{1}-a_{1}^{\dagger}a_{2}). \label{10} \end{equation} Here we search for the common eigenvector of $\left( N,L\right) $ in the entangled state representation. By introducing \begin{equation} A_{+}=\frac{1}{\sqrt{2}}(a_{1}-ia_{2}),\text{ }A_{-}=\frac{1}{\sqrt{2}i} (a_{1}+ia_{2}), \label{12} \end{equation} which obey the commutative relation \begin{align} \lbrack A_{+},A_{+}^{\dagger}] & =1,\text{\ }[A_{-},A_{-}^{\dagger }]=1,\label{13}\\ \lbrack A_{+},A_{-}^{\dagger}] & =0,\text{ }[A_{-},A_{+}^{\dagger }]=0,\nonumber \end{align} one can see \begin{equation} N=A_{+}^{\dagger}A_{+}+A_{-}^{\dagger}A_{-},\text{ }L=A_{+}^{\dagger} A_{+}-A_{-}^{\dagger}A_{-}. \label{14} \end{equation} Now we introduce the entangled state representation in Fock space, \begin{equation} \left\vert \eta\right\rangle =\exp\left\{ -\frac{1}{2}\left\vert \eta\right\vert ^{2}+\eta A_{+}^{\dagger}-\eta^{\ast}A_{-}^{\dagger} +A_{+}^{\dagger}A_{-}^{\dagger}\right\} \left\vert 00\right\rangle , \label{15} \end{equation} here $\eta=\left\vert \eta\right\vert e^{i\varphi}=\eta_{1}+i\eta_{2},$ $\left\vert 00\right\rangle $ is annihilated by $A_{+}$ and $A_{-}.$ It is not difficult to see that $\left\vert \eta\right\rangle $ is the common eigenvector of operators $\left( X_{1}-X_{2}-P_{1}+P_{2},P_{1}+P_{2} -X_{1}-X_{2}\right) $ \begin{align} \left( X_{1}-X_{2}-P_{1}+P_{2}\right) \left\vert \eta\right\rangle & =2\eta_{1}\left\vert \eta\right\rangle ,\nonumber\\ \left( P_{1}+P_{2}-X_{1}-X_{2}\right) \left\vert \eta\right\rangle & =2\eta_{2}\left\vert \eta\right\rangle . \label{15a} \end{align} Using the normal ordering form of vacuum projector \begin{equation} \left\vert 00\right\rangle \left\langle 00\right\vert =\colon\exp\left( -A_{+}^{\dagger}A_{+}-A_{-}^{\dagger}A_{-}\right) \colon, \label{14a} \end{equation} (where : : denotes normal ordering) and the technique of integral within an ordered product (IWOP) of operators \cite{11,12} we can prove the completeness relation and the orthonormal property of $\left\vert \eta\right\rangle ,$ \begin{equation} \int\frac{d^{2}\eta}{\pi}\left\vert \eta\right\rangle \left\langle \eta\right\vert =1,\text{ }\left\langle \eta\right. \left\vert \eta^{\prime }\right\rangle =\pi\delta\left( \eta-\eta^{\prime\ast}\right) \delta\left( \eta^{\ast}-\eta^{\prime}\right) . \label{16} \end{equation} \ Thus $\left\vert \eta\right\rangle $ is qualified to make up a new representation. 
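Note, for completeness, that Eq. (\ref{14}) follows from Eq. (\ref{12}) by direct expansion:
\begin{align*}
A_{+}^{\dagger}A_{+} & =\tfrac{1}{2}\left( a_{1}^{\dagger}+ia_{2}^{\dagger}\right) \left( a_{1}-ia_{2}\right) =\tfrac{1}{2}\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) +\tfrac{i}{2}\left( a_{2}^{\dagger}a_{1}-a_{1}^{\dagger}a_{2}\right) ,\\
A_{-}^{\dagger}A_{-} & =\tfrac{1}{2}\left( a_{1}^{\dagger}-ia_{2}^{\dagger}\right) \left( a_{1}+ia_{2}\right) =\tfrac{1}{2}\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) -\tfrac{i}{2}\left( a_{2}^{\dagger}a_{1}-a_{1}^{\dagger}a_{2}\right) ,
\end{align*}
so that the sum of the two lines gives $N$ and their difference gives $L$ of Eq. (\ref{10}).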
It follows from (\ref{15}) and (\ref{13}) that \begin{equation} A_{+}\left\vert \eta\right\rangle =(\eta+A_{-}^{\dagger})\left\vert \eta\right\rangle ,\text{\ }A_{+}^{\dagger}\left\vert \eta\right\rangle =\left( \frac{\partial}{\partial\eta}+\frac{\eta^{\ast}}{2}\right) \left\vert \eta\right\rangle , \label{16b} \end{equation} \begin{equation} A_{-}\left\vert \eta\right\rangle =(A_{+}^{\dagger}-\eta^{\ast})\left\vert \eta\right\rangle ,A_{-}^{\dagger}\left\vert \eta\right\rangle =\left( -\frac{\partial}{\partial\eta^{\ast}}-\frac{\eta}{2}\right) \left\vert \eta\right\rangle . \label{17} \end{equation} which lead to (denote $r=\left\vert \eta\right\vert $ for simplicity) \begin{align} (A_{+}^{\dagger}A_{+}+A_{-}^{\dagger}A_{-})\left\vert \eta\right\rangle & =\left( \frac{1}{2}r^{2}-1-2\frac{\partial^{2}}{\partial\eta\partial \eta^{\ast}}\right) \left\vert \eta\right\rangle \nonumber\\ & =\left[ \frac{r^{2}}{2}-1-\frac{1}{2}\left( \frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{^{r^{2}}}\frac {\partial^{2}}{\partial\varphi^{2}}\right) \right] \left\vert \eta \right\rangle \nonumber\\ (A_{+}^{\dagger}A_{+}-A_{-}^{\dagger}A_{-})\left\vert \eta\right\rangle & =\left( \eta\frac{\partial}{\partial\eta}-\eta^{\ast}\frac{\partial} {\partial\eta^{\ast}}\right) \left\vert \eta\right\rangle =-\mathtt{i} \frac{\partial}{\partial\varphi}\left\vert \eta\right\rangle . \label{18} \end{align} Projecting Eqs.(\ref{7}) onto the $\left\langle \eta\right\vert $ representation and using (\ref{14}), (\ref{16b})-(\cite{17}), one can obtain the following equations \begin{equation} l\left\langle \eta\right\vert \left. n,l\right\rangle =i\frac{\partial }{\partial\varphi}\left\langle \eta\right\vert \left. n,l\right\rangle , \label{21} \end{equation} and \begin{equation} n\left\langle \eta\right\vert \left. n,l\right\rangle =\left[ \frac{r^{2} }{2}-1-\frac{1}{2}\left( \frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r} \frac{\partial}{\partial r}+\frac{1}{^{r^{2}}}\frac{\partial^{2}} {\partial\varphi^{2}}\right) \right] \left\langle \eta\right\vert \left. n,l\right\rangle . \label{20} \end{equation} Eq.(\ref{21}) indicates that $\left\langle \eta\right\vert \left. n,l\right\rangle \propto e^{-il\varphi}.$\ From the uniqueness of wave function, $e^{-il\varphi}|_{\varphi=0}=e^{-il\varphi}|_{\varphi=2\pi},$ we know $l=0,$ $\pm1,$ $\pm2\cdots.$ So letting $\left\langle \eta\right\vert \left. n,l\right\rangle =R(r)e^{-il\varphi}$ and substituting it into (\ref{20}) yields \begin{equation} \frac{d^{2}R}{dr^{2}}+\frac{1}{r}\frac{dR}{dr}+\left( -r^{2}+2\left( n+1\right) -\frac{l^{2}}{^{r^{2}}}\right) R=0. \label{22} \end{equation} Introducing $\xi=r^{2}$ such that \begin{equation} \frac{dR}{dr}=2\sqrt{\xi}\frac{dR}{d\xi},\text{ }\frac{d^{2}R}{dr^{2}} =2\frac{dR}{d\xi}+4\xi\frac{d^{2}R}{d\xi^{2}}, \label{23} \end{equation} Eq. (\ref{22}) becomes \begin{equation} \frac{d^{2}R}{d\xi^{2}}+\frac{1}{\xi}\frac{dR}{d\xi}+\left( -\frac{1} {4}+\frac{n+1}{2\xi}-\frac{l^{2}}{4\xi^{2}}\right) R=0. \label{24} \end{equation} Then make the variable transform in (\ref{24}) \begin{equation} R(\xi)=e^{-\xi/2}\xi^{\left\vert l\right\vert /2}u(\xi), \label{26} \end{equation} one can obtain the equation for $u(\xi),$ \begin{equation} \xi\frac{d^{2}u}{d\xi^{2}}+(\left\vert l\right\vert +1-\xi)\frac{du}{d\xi }+\frac{n-\left\vert l\right\vert }{2}u=0. 
\label{27} \end{equation} Eq.(\ref{27}) is just a confluent hypergeometric equation whose solution is the associate Laguerre polynomials, $L_{n_{\rho}}^{\left\vert l\right\vert }(\xi)$, where $n_{\rho}=\frac{n-\left\vert l\right\vert }{2},$ $(n_{\rho}=0,$ $1,$ $2,\cdots)$\cite{13}$.$ Thus the wave function of $\left\vert n,l\right\rangle $ in $\left\langle \eta\right\vert $ representation is given by \begin{equation} \left\langle \eta\right\vert \left. n,l\right\rangle =C_{1}e^{-il\varphi }e^{-\frac{1}{2}r^{2}}r^{\left\vert l\right\vert }L_{n_{\rho}}^{\left\vert l\right\vert }(r^{2}), \label{29} \end{equation} where $C_{1}$ is an integral constant. The right-hand side of Eq.(\ref{29}) is just the LG mode, so we reach the conclusion that the wave function of $\left\vert n,l\right\rangle $ in the entangled state representation is just the LG mode, i.e., the LG mode gets its new physical meaning in quantum optics. Next, we further derive the explicit expression of $\left\vert n,l\right\rangle .$ Using the completeness relation of $\left\langle \eta\right\vert $ (\ref{16}) and (\ref{29}), we have \begin{align} \left\vert n,l\right\rangle & =\int\frac{d^{2}\eta}{\pi}\left\vert \eta\right\rangle \left\langle \eta\right\vert n,l\rangle\nonumber\\ & =C_{1}\int\frac{d^{2}\eta}{\pi}e^{-\frac{1}{2}r^{2}}\left\vert \eta\right\rangle e^{-il\varphi}r^{\left\vert l\right\vert }L_{n_{\rho} }^{\left\vert l\right\vert }(r^{2}). \label{30} \end{align} Then noticing the relation between two-variable Hermite polynomial \cite{14,15} and Laguerre polynomial, \begin{equation} H_{m,n}\left( \eta,\eta^{\ast}\right) =m!\left( -1\right) ^{m}\eta^{\ast }{}^{n-m}L_{m}^{n-m}\left( \eta\eta^{\ast}\right) , \label{31} \end{equation} where $m<n,\ $and the generating function of $H_{m,n}\left( \eta,\eta^{\ast }\right) $ is \begin{equation} H_{m,n}\left( x,y\right) =\left. \frac{\partial^{m+n}}{\partial t^{m}\partial t^{\prime n}}\exp\left[ -tt^{\prime}+tx+t^{\prime}y\right] \right\vert _{t=t^{\prime}=0}, \label{32} \end{equation} as well as using the integral formula \cite{16} \begin{equation} \int\frac{d^{2}z}{\pi}\exp\left( \zeta\left\vert z\right\vert ^{2}+\xi z+\eta z^{\ast}\right) =-\frac{1}{\zeta}e^{-\frac{\xi\eta}{\zeta}},\text{ Re}\left( \xi\right) <0, \label{33} \end{equation} we can reform Eq.(\ref{30}) as (without loss of the generality, setting $l>0$ and $m_{\rho}=[n+\left\vert l\right\vert ]/2$) \begin{align} \left\vert n,l\right\rangle & =\frac{\left( -1\right) ^{n_{\rho}}C_{1} }{n_{\rho}!}\int\frac{d^{2}\eta}{\pi}H_{n_{\rho},m_{\rho}}\left( \eta ,\eta^{\ast}\right) e^{-\frac{1}{2}\left\vert \eta\right\vert ^{2}}\left\vert \eta\right\rangle \nonumber\\ & =\frac{\left( -1\right) ^{n_{\rho}}C_{1}}{n_{\rho}!}\frac{\partial ^{n_{\rho}+m_{\rho}}}{\partial t^{n_{\rho}}\partial t^{\prime m_{\rho}}} \exp\left[ -tt^{\prime}+A_{+}^{\dagger}A_{-}^{\dagger}\right] \nonumber\\ & \times\int\frac{d^{2}\eta}{\pi}\exp\left[ -\left\vert \eta\right\vert ^{2}+\left( A_{+}^{\dagger}+t\right) \eta+\left( t^{\prime}-A_{-}^{\dagger }\right) \eta^{\ast}\right] _{t=t^{\prime}=0}\left\vert 00\right\rangle \nonumber\\ & =\frac{\left( -1\right) ^{n_{\rho}}C_{1}}{n_{\rho}!}\frac{\partial ^{n_{\rho}+m_{\rho}}}{\partial t^{n_{\rho}}\partial t^{\prime m_{\rho}}} \exp\left[ A_{+}^{\dagger}t^{\prime}-tA_{-}^{\dagger}\right] _{t=t^{\prime }=0}\left\vert 00\right\rangle \nonumber\\ & =\frac{C_{1}}{n_{\rho}!}\left( A_{+}^{\dagger}\right) ^{m_{\rho}}\left( A_{-}^{\dagger}\right) ^{n_{\rho}}\left\vert 00\right\rangle . 
\label{34} \end{align} \section{Generation of $\left\vert n,l\right\rangle $ by Beam Splitter} Note Eq.(\ref{12}) and \begin{align} A_{+}^{\dag} & =e^{i\frac{\pi}{2}J_{x}}a_{1}^{\dag}e^{-i\frac{\pi}{2}J_{x} },\text{ \ }A_{-}^{\dag}=e^{i\frac{\pi}{2}J_{x}}a_{2}^{\dag}e^{-i\frac{\pi} {2}J_{x}},\text{ }\nonumber\\ J_{x} & =\frac{1}{2}\left( a_{1}^{\dag}a_{2}+a_{2}^{\dag}a_{1}\right) ,\text{ }\left\vert 00\right\rangle =e^{i\frac{\pi}{2}J_{x}}\left\vert 00\right\rangle , \label{35} \end{align} thus Eq.(\ref{34}) can be further put into the following form \begin{align} \left\vert n,l\right\rangle & =\frac{C_{1}}{n_{\rho}!}e^{i\frac{\pi}{2} J_{x}}\left( a_{1}^{\dag}\right) ^{m_{\rho}}\left( a_{2}^{\dag}\right) ^{n_{\rho}}\left\vert 00\right\rangle \nonumber\\ & =C_{1}\sqrt{\frac{m_{\rho}!}{n_{\rho}!}}e^{i\frac{\pi}{2}J_{x}}\left\vert m_{\rho},n_{\rho}\right\rangle . \label{36} \end{align} It is easy to see that the normalized constant can be chosen as $C_{1} =\sqrt{n_{\rho}!/m_{\rho}!},$ which further leads to \begin{equation} \left\vert n,l\right\rangle =e^{i\frac{\pi}{2}J_{x}}\left\vert m_{\rho },n_{\rho}\right\rangle , \label{e37} \end{equation} where $J_{x}$ can be expressed by angular momentum operators $J_{+} =a_{1}^{\dag}a_{2}$ and $J_{-}=a_{1}a_{2}^{\dag}$, $J_{x}=\frac{1}{2}\left( J_{+}+J_{-}\right) .$ $J_{+},$ $J_{-}$ and $J_{z}=\frac{1}{2}\left( a_{1}^{\dag}a_{1}-a_{2}^{\dag}a_{2}\right) $ make up a close SU(2) Lie algebra. On the other hand, the beam splitter is one of the few experimentally accessible devices that may act as an entangler. In fact, the role of a beam splitter operator \cite{j1,j2} is expressed by \begin{equation} B\left( \theta,\phi\right) =\exp\left[ \frac{\theta}{2}\left( a_{1}^{\dag }a_{2}e^{i\phi}-a_{1}a_{2}^{\dag}e^{-i\phi}\right) \right] , \label{e38} \end{equation} with the amplitude reflection and transmission coefficients $T=\cos \frac{\theta}{2},$ $R=\sin\frac{\theta}{2}.$ The beam splitter gives the phase difference $\phi$ between the reflected and transmitted fields. Comparing Eq.(\ref{e38}) with $e^{i\frac{\pi}{2}J_{x}}$ leads us to choose $\theta =\pi/2$ (corresponding to 50:50 beam splitter) and $\phi=\pi/2,$ thus $B\left( \pi/2,\pi/2\right) $ is just equivalent to $e^{i\frac{\pi}{2}J_{x} }$ in form. This indicates that $\left\vert n,l\right\rangle $ can be generated by acting a symmetric beam splitter with $\phi=\pi/2$ on two independent input Fock states $\left\vert m_{\rho},n_{\rho}\right\rangle =\left\vert m_{\rho}\right\rangle _{1}\left\vert n_{\rho}\right\rangle _{2}.$ In addition, note that $n=m_{\rho}+n_{\rho},$ i.e., when the total number of input photons is $n,$ so the output state becomes an ($n+1$)-dimensional entangled state \cite{j3}. \section{The Wigner representation} As is well-known, the Wigner quasidistribution provides with a definite phase space distribution of quantum states and is very useful in quantum statistics and quantum optics. 
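As a quick numerical sanity check of Eq. (\ref{e37}), one may represent the mode operators in a truncated Fock basis and verify the joint eigenvalue property directly. The sketch below uses NumPy/SciPy; the truncation dimension and the input occupation numbers $(m_{\rho},n_{\rho})$ are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

d = 6                                      # single-mode Fock truncation
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # annihilation operator, a|k> = sqrt(k)|k-1>
I = np.eye(d)
a1, a2 = np.kron(a, I), np.kron(I, a)      # two-mode operators a_1, a_2

Jx = 0.5 * (a1.conj().T @ a2 + a2.conj().T @ a1)   # J_x = (a1'a2 + a2'a1)/2
N  = a1.conj().T @ a1 + a2.conj().T @ a2           # total photon number
L  = 1j * (a2.conj().T @ a1 - a1.conj().T @ a2)    # orbital angular momentum

m_rho, n_rho = 2, 1                        # input Fock state |m_rho, n_rho>
fock = np.zeros(d * d); fock[m_rho * d + n_rho] = 1.0

psi = expm(1j * np.pi / 2 * Jx) @ fock     # beam-splitter output, candidate |n, l>

# joint eigenvector with n = m_rho + n_rho and l = m_rho - n_rho
print(np.allclose(N @ psi, (m_rho + n_rho) * psi))   # True
print(np.allclose(L @ psi, (m_rho - n_rho) * psi))   # True
\end{verbatim}
Since $J_{x}$ conserves the total photon number, the check is free of truncation artifacts as long as $d>m_{\rho}+n_{\rho}$.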
In this section, we evaluate the Wigner representation of $\left\vert n,l\right\rangle .$ According Ref.\cite{17}, the Wigner representation of $\left\vert n,l\right\rangle $ is given by \begin{align} W_{\left\vert n,l\right\rangle } & =\left\langle n,l\right\vert \Delta _{1}\left( x_{1},p_{1}\right) \Delta_{2}\left( x_{2},p_{2}\right) \left\vert n,l\right\rangle \nonumber\\ & =\left\langle m_{\rho},n_{\rho}\right\vert e^{-i\frac{\pi}{2}J_{x}} \Delta_{1}\left( x_{1},p_{1}\right) \nonumber\\ & \times\Delta_{2}\left( x_{2},p_{2}\right) e^{i\frac{\pi}{2}J_{x} }\left\vert m_{\rho},n_{\rho}\right\rangle , \label{37} \end{align} where $\Delta_{1}\left( x_{1},p_{1}\right) $ is the single-mode Wigner operator, whose Weyl ordering form \cite{18} is \begin{equation} \Delta_{1}\left( x_{1},p_{1}\right) = \genfrac{}{}{0pt}{}{:}{:} \delta\left( p_{1}-P_{1}\right) \delta\left( x_{1}-X_{1}\right) \genfrac{}{}{0pt}{}{:}{:} , \label{38} \end{equation} where the symbol$ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $denotes Weyl ordering \cite{19}. Note that the order of Bose operators $a$ and $a^{\dag}$ within $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $ can be permitted. That is to say, even though $[a$,$a^{\dag}]$ $=1$, we can have $ \genfrac{}{}{0pt}{}{:}{:} aa^{\dag} \genfrac{}{}{0pt}{}{:}{:} = \genfrac{}{}{0pt}{}{:}{:} a^{\dag}a \genfrac{}{}{0pt}{}{:}{:} $. According to the covariance property of Weyl ordering under similar transformations \cite{18} and \begin{align} e^{-i\frac{\pi}{2}J_{x}}X_{1}e^{i\frac{\pi}{2}J_{x}} & =\frac{1}{\sqrt{2} }\left( X_{1}-P_{2}\right) ,\text{ }\nonumber\\ e^{-i\frac{\pi}{2}J_{x}}P_{1}e^{i\frac{\pi}{2}J_{x}} & =\frac{1}{\sqrt{2} }\left( P_{1}+X_{2}\right) ,\nonumber\\ e^{-i\frac{\pi}{2}J_{x}}X_{2}e^{i\frac{\pi}{2}J_{x}} & =\frac{1}{\sqrt{2} }\left( X_{2}-P_{1}\right) ,\text{ }\nonumber\\ e^{-i\frac{\pi}{2}J_{x}}P_{2}e^{i\frac{\pi}{2}J_{x}} & =\frac{1}{\sqrt{2} }\left( P_{2}+X_{1}\right) , \label{39} \end{align} we have \begin{align} & e^{-i\frac{\pi}{2}J_{x}}\Delta_{1}\left( x_{1},p_{1}\right) \Delta _{2}\left( x_{2},p_{2}\right) e^{i\frac{\pi}{2}J_{x}}\nonumber\\ & = \genfrac{}{}{0pt}{}{:}{:} \delta\left( p_{1}-\frac{P_{1}+X_{2}}{\sqrt{2}}\right) \delta\left( x_{1}-\frac{X_{1}-P_{2}}{\sqrt{2}}\right) \nonumber\\ & \times\delta\left( p_{2}-\frac{P_{2}+X_{1}}{\sqrt{2}}\right) \delta\left( x_{2}-\frac{X_{2}-P_{1}}{\sqrt{2}}\right) \genfrac{}{}{0pt}{}{:}{:} \nonumber\\ & =\Delta_{1}\left( \frac{x_{1}+p_{2}}{\sqrt{2}},\frac{p_{1}-x_{2}}{\sqrt {2}}\right) \Delta_{2}\left( \frac{x_{2}+p_{1}}{\sqrt{2}},\frac{p_{2}-x_{1} }{\sqrt{2}}\right) . 
\label{40} \end{align} Since the Wigner representation of number state $\left\vert m\right\rangle $ is well known \cite{20}, \begin{align} W_{\left\vert m\right\rangle } & =\left\langle m\right\vert \Delta _{1}\left( x_{1},p_{1}\right) \left\vert m\right\rangle \nonumber\\ & =\frac{\left( -1\right) ^{m}}{\pi}e^{-\left( x_{1}^{2}+p_{1}^{2}\right) }L_{m}\left[ 2\left( x_{1}^{2}+p_{1}^{2}\right) \right] , \label{41} \end{align} so we directly obtain the Wigner representation of L-G mode, \begin{align} W_{\left\vert n,l\right\rangle } & =\left\langle m_{\rho}\right\vert \Delta_{1}\left( \frac{x_{1}+p_{2}}{\sqrt{2}},\frac{p_{1}-x_{2}}{\sqrt{2} }\right) \left\vert m_{\rho}\right\rangle \nonumber\\ & \times\left\langle n_{\rho}\right\vert \Delta_{2}\left( \frac{x_{2}+p_{1} }{\sqrt{2}},\frac{p_{2}-x_{1}}{\sqrt{2}}\right) \left\vert n_{\rho }\right\rangle \nonumber\\ & =\frac{\left( -1\right) ^{m_{\rho}+n_{\rho}}}{\pi^{2}}e^{-Q_{0} }L_{m_{\rho}}\left( Q_{0}+Q_{2}\right) L_{n_{\rho}}\left( Q_{0} -Q_{2}\right) , \label{42} \end{align} where $Q_{0}=\allowbreak p_{1}^{2}+p_{2}^{2}+x_{1}^{2}+x_{2}^{2}$ and $Q_{2}=2p_{2}x_{1}-2p_{1}x_{2}$. Eq.(\ref{42}) is in agreement with the result of Ref. \cite{5,7}. Our derivation seems economical. \section{The marginal distributions and fractional Fourier transform of $\left\vert n,l\right\rangle $} The fractional Fourier transform (FrFT) has been paid more and more attention within different contexts of both mathematics and physics. It is also very useful tool in Fourier optics and information optics. In this section, we examine the relation between the FrFT and the marginal distributions of $W_{\left\vert n,l\right\rangle }$. For this purpose, we recall that the two-mode Wigner operator $\Delta _{1}\left( x_{1},p_{1}\right) \Delta_{2}\left( x_{2},p_{2}\right) \equiv\Delta_{1}\left( \alpha\right) \Delta_{2}\left( \beta\right) $ ($\alpha=(x_{1}+ip_{1})/\sqrt{2}$, $\beta=(x_{1}+ip_{1})/\sqrt{2}$) in entangled state representation $\left\langle \tau\right\vert .$ Using the IWOP technique we have shown in \cite{19} that $\Delta_{1,2}\left( \sigma ,\gamma\right) $ is just the product of two independent single-mode Wigner operators $\Delta_{1}\left( \alpha\right) \Delta_{2}\left( \beta\right) =\Delta_{1,2}\left( \sigma,\gamma\right) $ i.e., \begin{equation} \Delta_{1,2}\left( \sigma,\gamma\right) =\int\frac{d^{2}\tau}{\pi^{3} }\left\vert \sigma-\tau\right\rangle \left\langle \sigma+\tau\right\vert e^{\tau\gamma^{\ast}-\tau^{\ast}\gamma},\label{43} \end{equation} where $\sigma=\alpha-\beta^{\ast},$ $\gamma=\alpha+\beta^{\ast}$ and $\left\vert \tau=\tau_{1}+i\tau_{2}\right\rangle $ can be expressed in two-mode Fock space as \cite{12,13} \begin{equation} \left\vert \tau\right\rangle =\exp\left\{ -\frac{1}{2}\left\vert \tau\right\vert ^{2}+\tau a_{1}^{\dagger}-\tau^{\ast}a_{2}^{\dagger} +a_{1}^{\dagger}a_{2}^{\dagger}\right\} \left\vert 00\right\rangle ,\label{44} \end{equation} which is the common eigenvector of $X_{1}-X_{2}$\ \ and \ $P_{1}+P_{2}$, which obeys the eigenvector equations $\left( X_{1}-X_{2}\right) \left\vert \tau\right\rangle =\sqrt{2}\tau_{1}\left\vert \tau\right\rangle ,$\ $\left( P_{1}+P_{2}\right) \left\vert \tau\right\rangle =\sqrt{2}\tau_{2}\left\vert \tau\right\rangle .$ Performing the integration of $\Delta_{1,2}\left( \sigma,\gamma\right) $ over $d^{2}\gamma$ ($d^{2}\sigma$) leads to the projection operator of the entangled state $\left\vert \tau\right\rangle $ ($\left\vert \xi\right\rangle $) \begin{align} \int 
d^{2}\gamma\Delta_{1,2}(\sigma,\gamma) & =\frac{1}{\pi}\left\vert \tau\right\rangle \left\langle \tau\right\vert |_{\tau=\sigma},\nonumber\\ \int d^{2}\sigma\Delta_{1,2}(\sigma,\gamma) & =\frac{1}{\pi}\left\vert \xi\right\rangle \left\langle \xi\right\vert |_{\xi=\gamma}, \label{45} \end{align} where $\left\vert \xi\right\rangle $ is the conjugate state of $\left\vert \tau\right\rangle $. Thus the marginal distributions for quantum states $\rho$ in ($\tau_{1},\tau_{2}$) and ($\xi_{1},\xi_{2}$) phase space are given by \begin{align} \int d^{2}\sigma W\left( \sigma,\gamma\right) & =\frac{1}{\pi}\left\langle \xi\right\vert \rho\left\vert \xi\right\rangle |_{\xi=\gamma},\text{ }\nonumber\\ \int d^{2}\gamma W\left( \sigma,\gamma\right) & =\frac{1}{\pi}\left\langle \tau\right\vert \rho\left\vert \tau\right\rangle |_{\tau=\sigma}, \label{47} \end{align} respectively. Eq.(\ref{47}) shows that, for bipartite system, the marginal distributions can be calculated by evaluating the quantum average of $\rho$ in $\left\langle \xi\right\vert $, $\left\langle \tau\right\vert $ representations. Now we calculate the inner-product $\left\langle \tau\right. \left\vert n,l\right\rangle .$ Note that $a_{1}^{\dag}a_{2}$, $a_{1}a_{2}^{\dag}$, and $J_{z}=\frac{1}{2}\left( a_{1}^{\dag}a_{1}-a_{2}^{\dag}a_{2}\right) $ make up a close SU(2) Lie algebra, thus $e^{i\frac{\pi}{2}J_{x}}$ can be decomposed as \begin{equation} e^{i\frac{\pi}{2}J_{x}}=e^{ia_{1}^{\dag}a_{2}}\exp\left[ \frac{1}{2}\left( a_{1}^{\dag}a_{1}-a_{2}^{\dag}a_{2}\right) \ln2\right] e^{ia_{2}^{\dag} a_{1}}, \label{50} \end{equation} then we have \begin{align} \left\langle \tau\right. \left\vert n,l\right\rangle & =\sqrt{\frac {m_{\rho}!}{n_{\rho}!}}\sum_{k=0}^{m_{\rho}}\left( \sqrt{2}\right) ^{m_{\rho}-n_{\rho}-2k}\nonumber\\ & \times\frac{\left( n_{\rho}+k\right) !}{k!\left( m_{\rho}-k\right) !}\sum_{j=0}^{n_{\rho}+k}\frac{i^{k+j}}{j!}\sqrt{\frac{\left( m_{\rho }-k+j\right) !}{\left( n_{\rho}+k-j\right) !}}\nonumber\\ & \times\left\langle \tau\right. \left\vert m_{\rho}-k+j,n_{\rho }+k-j\right\rangle . \label{51} \end{align} Using the generating function of $H_{m,n},$ we have \begin{equation} \left\langle \tau^{\prime}\right. \left\vert m,n\right\rangle =\frac{\left( -1\right) ^{n}}{\sqrt{m!n!}}H_{m,n}\left( \tau^{\prime\ast},\tau^{\prime }\right) e^{-\left\vert \tau^{\prime}\right\vert ^{2}/2}. \label{52} \end{equation} Substituting Eq.(\ref{52}) into Eq.(\ref{51}) leads to \begin{align} \left\langle \tau\right. \left\vert n,l\right\rangle & =\left( -\right) ^{n_{\rho}}2^{\left( m_{\rho}-n_{\rho}\right) /2}e^{-\left\vert \tau\right\vert ^{2}/2}\nonumber\\ & \sqrt{\frac{m_{\rho}!}{n_{\rho}!}}\sum_{k=0}^{m_{\rho}}\frac{\left( n_{\rho}+k\right) !}{2^{k}k!\left( m_{\rho}-k\right) !}\nonumber\\ & \times\sum_{j=0}^{n_{\rho}+k}\frac{\left( -i\right) ^{k+j}}{j!\left( n_{\rho}+k-j\right) !}H_{m_{\rho}-k+j,n_{\rho}+k-j}\left( \tau^{\ast} ,\tau\right) . \label{53} \end{align} Thus the marginal distribution is \begin{align} & \int d^{2}\gamma W\left( \sigma,\gamma\right) \nonumber\\ & =\frac{e^{-\left\vert \sigma\right\vert ^{2}}}{\pi}2^{m_{\rho}-n_{\rho} }\frac{m_{\rho}!}{n_{\rho}!}\left\vert \sum_{k=0}^{m_{\rho}}\frac{\left( -i\right) ^{k}\left( n_{\rho}+k\right) !}{2^{k}k!\left( m_{\rho}-k\right) !}\right. \nonumber\\ & \times\left. \sum_{j=0}^{n_{\rho}+k}\left( -i\right) ^{j}\frac {H_{m_{\rho}-k+j,n_{\rho}+k-j}\left( \sigma^{\ast},\sigma\right) }{j!\left( n_{\rho}+k-j\right) !}\right\vert ^{2}. 
\label{54} \end{align} Due to the presence of sum polynomial, the marginal distribution is not Gaussian. In a similar way, one can obtain the other marginal distribution in $\gamma$ direction. Before the end of this section, we mention the relation between the FrFT and marginal distribution. In Ref.\cite{j5} we have proved that, in the context of quantum optics, the FrFT can be described as the matrix element of fractional operator $\exp[-i\alpha(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2})]$ between $\left\langle \tau^{\prime}\right\vert $ and $\left\vert f\right\rangle $, i.e., \[ \mathcal{F}_{\alpha}\left[ f\left( \tau^{\prime}\right) \right] =\frac{e^{i(\alpha-\frac{\pi}{2})}}{2\sin\alpha}\int\frac{d^{2}\tau^{\prime} }{\pi}\exp\left[ \frac{i(\left\vert \tau^{\prime}\right\vert ^{2}+\left\vert \tau\right\vert ^{2})}{2\tan\alpha}-\frac{i\left( \tau^{\ast}\tau^{\prime }+\tau^{\prime\ast}\tau\right) }{2\sin\alpha}\right] f\left( \tau^{\prime }\right) =\left\langle \tau\right\vert \exp\left[ -i\alpha\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] \left\vert f\right\rangle , \] where $f\left( \tau^{\prime}\right) =\left\langle \tau^{\prime}\right\vert \left. f\right\rangle .$ When $\left\vert f\right\rangle =e^{i\frac{\pi} {2}J_{x}}\left\vert m_{\rho},n_{\rho}\right\rangle ,$ the corresponding FrFT is \begin{align} \mathcal{F}_{\alpha}\left[ \left\langle \tau^{\prime}\right. \left\vert n,l\right\rangle \right] & =\left\langle \tau\right\vert \exp\left[ -i\alpha\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] e^{i\frac{\pi}{2}J_{x}}\left\vert m_{\rho},n_{\rho}\right\rangle \nonumber\\ & =e^{-i\alpha\left( m_{\rho}+n_{\rho}\right) }\left\langle \tau\right. \left\vert n,l\right\rangle , \label{49} \end{align} where we have used $e^{-i\alpha a_{1}^{\dagger}a_{1}}a_{1}$ $e^{i\alpha a_{1}^{\dagger}a_{1}}=a_{1}e^{i\alpha}.$ Eq.(\ref{49}) implies that the eigenequations of FrFT can also be $\left\langle \tau\right. \left\vert n,l\right\rangle $ with the eigenvalue being $e^{-i\alpha\left( m_{\rho }+n_{\rho}\right) }$, which is the superposition (\ref{51}) of two-variable Hermite polynomials. In Ref.\cite{j6}, we have proved that the two-variable Hermite polynomials (TVHP) is just the eigenfunction of the FrFT in complex form by using the IWOP technique and the bipartite entangled state representations. Here, we should emphasize that for any unitary two-mode operators $U$ obeying the relation $\exp[-i\alpha(a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2})]U\exp[i\alpha(a_{1}^{\dagger}a_{1}+a_{2}^{\dagger} a_{2})]=U,$ the wave function $\left\langle \tau\right\vert U\left\vert m_{\rho},n_{\rho}\right\rangle $ is the eigenfunction of FrFT with the eigenvalue being $e^{-i\alpha\left( m_{\rho}+n_{\rho}\right) }$ \cite{2}. Combing Eqs.(\ref{49}) and (\ref{47}), one can obtain a simple formula connecting the FrFT and the marginal distribution of $W_{\left\vert n,l\right\rangle },$ \begin{equation} \int d^{2}\gamma W\left( \sigma,\gamma\right) =\frac{1}{\pi}\left\vert \mathcal{F}_{\alpha}\left[ \left\langle \tau=\sigma\right. \left\vert n,l\right\rangle \right] \right\vert ^{2}, \label{55} \end{equation} which is $\alpha-$independent. Thus we can also obtain the marginal distribution by the FrFT. \section{L-G mode as generalized Wigner transform of Hermit-Gaussian modes} In this section we shall reveal the relation between the L-G mode and the single variable Hermit-Gaussian(H-G) modes. 
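Before doing so, it is worth noting that Eq. (\ref{42}) is simply the product of two single-mode number-state Wigner functions (\ref{41}) evaluated at the rotated quadratures of Eq. (\ref{40}); the short SciPy sketch below (indices and sample points chosen arbitrarily) confirms this numerically.
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre

def W_fock(m, x, p):
    """Single-mode Wigner function of the number state |m>."""
    s = x**2 + p**2
    return (-1)**m / np.pi * np.exp(-s) * eval_laguerre(m, 2 * s)

def W_LG(m_rho, n_rho, x1, p1, x2, p2):
    """Two-mode Wigner function of |n,l> as given in the text."""
    Q0 = x1**2 + p1**2 + x2**2 + p2**2
    Q2 = 2 * (p2 * x1 - p1 * x2)
    return ((-1)**(m_rho + n_rho) / np.pi**2 * np.exp(-Q0)
            * eval_laguerre(m_rho, Q0 + Q2) * eval_laguerre(n_rho, Q0 - Q2))

rng = np.random.default_rng(0)
x1, p1, x2, p2 = rng.normal(size=(4, 1000))
m_rho, n_rho = 3, 1

lhs = W_LG(m_rho, n_rho, x1, p1, x2, p2)
rhs = (W_fock(m_rho, (x1 + p2) / np.sqrt(2), (p1 - x2) / np.sqrt(2))
       * W_fock(n_rho, (x2 + p1) / np.sqrt(2), (p2 - x1) / np.sqrt(2)))
print(np.allclose(lhs, rhs))   # True
\end{verbatim}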
Note that by taking the Fourier transformation of $\left\vert \tau\right\rangle $ with regard to $\tau_{2}$ followed by the inverse Fourier transformation, we can recover the entangled state $\left\vert \tau\right\rangle .$ In another word, $\left\vert \tau =\tau_{1}+i\tau_{2}\right\rangle $ can be decomposed into \begin{align} \left\vert \tau\right\rangle & =\int_{-\infty}^{\infty}dxe^{ix\sqrt{2} \tau_{2}}\left\vert x+\frac{\tau_{1}}{\sqrt{2}}\right\rangle _{1} \otimes\left\vert x-\frac{\tau_{1}}{\sqrt{2}}\right\rangle _{2}\nonumber\\ & =e^{-i\tau_{1}\tau_{2}}\int_{-\infty}^{\infty}dxe^{ix\sqrt{2}\tau_{2} }\left\vert x\right\rangle _{1}\otimes\left\vert x-\sqrt{2}\tau_{1} \right\rangle _{2}, \label{56} \end{align} in which $\left\vert x\right\rangle _{i}$ ($i=1,2$) are the coordinate eigenvectors. Eq.(\ref{56}) is called the Schmidt decomposition of $\left\vert \tau\right\rangle $ and indicates $\left\vert \tau\right\rangle $ is an entangled state \cite{21}. Eq.(\ref{56}) leads to \begin{align} \left\langle m,n\right. \left\vert \tau\right\rangle & =\int_{-\infty }^{\infty}dxe^{ix\sqrt{2}\tau_{2}}\left\langle m\left\vert x+\frac{\tau_{1} }{\sqrt{2}}\right. \right\rangle \left\langle n\left\vert x-\frac{\tau_{1} }{\sqrt{2}}\right. \right\rangle \nonumber\\ & =\left\langle m\right\vert \left[ \int_{-\infty}^{\infty}dxe^{ix\sqrt {2}\tau_{2}}\left\vert x+\frac{\tau_{1}}{\sqrt{2}}\right\rangle \left\langle x-\frac{\tau_{1}}{\sqrt{2}}\right\vert \right] \left\vert n\right\rangle . \label{57} \end{align} It is interesting to notice that the integration [...] in Eq.(\ref{57}) is similar to the single-mode Wigner operator, \begin{equation} \Delta\left( x,p\right) =\frac{1}{2\pi}\int_{-\infty}^{\infty} due^{-iup}\left\vert x-\frac{u}{2}\right\rangle \left\langle x+\frac{u} {2}\right\vert , \label{58} \end{equation} and the left hand side of Eq.(\ref{57}) corresponds to TVHP mode (L-G mode). It might be expected that the L-G can be expressed in terms of the generalized Wigner transform (GWT). Actually, after making variable replacement, Eq.(\ref{57}) can be rewritten as \begin{equation} \left\langle m,n\right. \left\vert \tau\right\rangle =\pi\left\langle m\right\vert \Delta\left( \frac{\tau_{1}}{\sqrt{2}},\frac{\tau_{2}}{\sqrt{2} }\right) \left( -1\right) ^{N}\left\vert n\right\rangle . \label{59} \end{equation} If we introduce the following GWT, \begin{equation} W_{g}\left[ f,v\right] \left( x,p\right) =\left\langle f\right\vert \Delta\left( x,p\right) \left\vert v\right\rangle , \label{60} \end{equation} which reduces to the usual Wigner transform under the condition $\left\vert v\right\rangle =\left\vert f\right\rangle $, while for $\left\vert v\right\rangle =\left( -1\right) ^{n}\left\vert n\right\rangle $ and $\left\vert f\right\rangle =\left\vert m\right\rangle $ Eq.(\ref{60}) becomes the right hand side of Eq.(\ref{59}), which corresponds to the GWT. On the other hand, note that Eqs. (\ref{31}) and (\ref{52}), the left hand side of Eq.(\ref{59}) can be put into (without loss of the generality, letting $m<n$) \begin{align} \left\langle m,n\right. 
\left\vert \tau\right\rangle & =\frac{\left( -1\right) ^{n}}{\sqrt{m!n!}}H_{m,n}\left( \tau,\tau^{\ast}\right) e^{-\left\vert \tau\right\vert ^{2}/2}\nonumber\\ & =\left( -1\right) ^{n+m}\sqrt{\frac{m!}{n!}}\tau^{\ast}{}^{n-m} L_{m}^{n-m}\left( \tau\tau^{\ast}\right) e^{-\left\vert \tau\right\vert ^{2}/2}, \label{61} \end{align} which indicates that the left hand side of Eq.(\ref{59}) is just corresponding to the L-G mode, as well as \begin{equation} \left\langle m\right. \left\vert x\right\rangle =\frac{e^{-x^{2}/2}} {\sqrt{2^{m}m!\sqrt{\pi}}}H_{m}\left( x\right) \equiv h_{m}\left( x\right) , \label{62} \end{equation} where $H_{m}\left( x\right) $ is single variable Hermite polynomial, and $h_{m}\left( x\right) $ just corresponds to the H-G mode, we have \begin{align} & \sqrt{\frac{m!}{n!}}\tau^{\ast}{}^{n-m}L_{m}^{n-m}\left( \tau\tau^{\ast }\right) e^{-\left\vert \tau\right\vert ^{2}/2}\nonumber\\ & =\frac{\left( -1\right) ^{m}}{2}\int_{-\infty}^{\infty}due^{-iu\frac {\tau_{2}}{\sqrt{2}}}h_{m}\left( \frac{\tau_{1}}{\sqrt{2}}-\frac{u} {2}\right) h_{n}\left( \frac{\tau_{1}}{\sqrt{2}}+\frac{u}{2}\right) \nonumber\\ & =\left( -1\right) ^{m}\pi W_{g}\left[ h_{m},h_{n}\right] \left( \tau_{1}/\sqrt{2},\tau_{2}/\sqrt{2}\right) . \label{63} \end{align} Thus, we can conclude that the L-G mode can be obtained by the GWT of two single-variable H-G modes. In addition, we should point out that the L-G mode can also be generated by windowed Fourier transform (which is often used in signal process) of two single-variable H-G modes by noticing the second line of Eq.(\ref{56}). In summary, we have endowed the Laguerre-Gaussian (LG) mode with new physical meaning in quantum optics, i.e., we find that it is just the wave function of the common eigenvector of the orbital angular momentum and the total photon number operators of 2-d oscillator in the entangled state representation. The common eigenvector can be obtained by using beam splitter with the phase difference $\phi=\pi/2$ between the reflected and transmitted fields. With the aid of the Weyl ordering invariance under similar transforms, the Wigner representation of LG is directly obtained. It is shown that its marginal distributions can be calculated by the FrFT. In addition, L-G mode can also be considered as the generalized Wigner transform of Hermite Gaussian modes by using the Schmidt decomposition of the entangled state representation. \textbf{ACKNOWLEDGEMENT: }Work supported by a grant from the Key Programs Foundation of Ministry of Education of China (No. 210115) and the Research Foundation of the Education Department of Jiangxi Province of China (No. GJJ10097). L.-Y. Hu's email address is [email protected]. \end{document}
Effectiveness of binary combinations of Plectranthus glandulosus leaf powder and Hymenocardia acida wood ash against Sitophilus zeamais (Coleoptera: Curculionidae) Jean Wini Goudoungou1, Elias Nchiwan Nukenine2, Christopher Suh3, Tiburce Gangué1 & Dieudonné Ndjonka2 Botanicals are generally assumed to be more biodegradable, leading to less environmental problems. Combination of botanicals could enhance biological activity against insect pests. Hence, the amount of botanical used for the control of stored grain pests may be minimised. In this study, the bioassay was carried out on Sitophilus zeamais to assess the effectiveness of binary combinations of Hymenocardia acida wood ash and Plectranthus glandulosus leaf powder. The quantities of mixed products were added to maize grains to constitute the contents of 5, 10, 20 and 40 g/kg. Then, the bioassays on toxicity within 1, 3, 7 and 14 days exposure, progeny production, population increase, grain damage and germination ability of protected grains were carried out. The major compounds (pinene, α-pinene, α-terpineol, thymol, β-myrcene and 3-carene) of P. glandulosus leaf powder were monoterpenes. The major non-monoterpenic constituent was an oxygenated sesquiterpene, β caryophyllene oxide. The chemical analysis of H. acida ash showed that calcium (5800 mg/kg) and phosphorus (2782 mg/kg) recorded higher content than the other minerals. Plectranthus glandulosus leaf powder, H. acida wood ash and their binary combinations significantly induced mortality of S. zeamais adult (P < 0.0001). The higher mortality rate was achieved by the highest content within 14 days of exposure. The combinations of P. glandulosus leaf powder with H. acida at different proportions produced different interactions. The mixture of 75% P. glandulosus and 25% H. acida produced synergistic effect, whereas the mixture of 50:50 had antagonistic effect in weevil mortality. The three combinations of H. acida and P. glandulosus significantly reduced the production of the progeny compared to the control. From the application of 5 g/kg (lowest content), the number of emerging adults was highly reduced. The combination 25PG75HA revealed to be more effective than the two other against F1 production. The grain damage and population growth were significantly reduced. In general, the non-infested maize grain had a good germination rate than the infested ones. The treatment did not have negative effect on seed germination. From our results, the two powders and their binary combinations could be used to reduce grain infestation by insect while taking into account the proportions of insecticidal powders implied in the combination. Cereals constitute the group of most consumed grain in sub-Saharan African, especially in Sahelian zones. In these zones, cereals are very interesting according to its conservation ability. Cereals are easily conserved compared to the other food products and also less demanding in terms of storage technology, which can be self-made. The conserved and protected seeds permit availability of seeds (food) throughout the year, thereby contributing to food security. Cereals such as sorghum, millets, wheat, maize and rice are major staple foods of most population. Botanically, cereals are grasses and belong to the monocot family Poaceae [1]. Maize remains the most cultivated and most consumed cereal in Africa. 
This staple food crop is grown in diverse agro-ecological zones and farming systems, and consumed by people with varying food preferences and socio-economic backgrounds in sub-Saharan Africa. Maize also has many uses in the food industry (biscuits, pastry making, brewing, distilling, sweeteners, etc.), the textile and pharmaceutical industries, and in the production of biodegradable plastics, biofuel, alcohol and cosmetics [2]. Maize is produced during a short period of the year, whereas its commercialisation and consumption take place all year round. This makes the storage and protection of this grain imperative. The shortcomings of the various storage methods used in developing countries continue to cause grain losses, and in unacceptable proportions [3]. During storage, maize grain is heavily devastated by several pests, especially insect pests. Insects are responsible for the majority of the damage occurring in stored food reserves. The temperature and high humidity of the tropical climate favour the proliferation of insects and micro-organisms which, in order to survive, devour the food products, causing enormous damage [4]. Maize grain does not escape insect attack during storage. Amongst the insect pests of maize grain, Sitophilus zeamais is the most detrimental. This pest causes quantitative and qualitative damage to stored maize. Under these conditions, the protection of this grain, given its multiple uses, becomes a major necessity for food security. Food security could be enhanced by reducing stock losses. Damage caused by S. zeamais on maize could be reduced through chemical, biological and physical control and host plant resistance, which are important components of integrated pest management strategies. However, the use of synthetic residual chemicals dominates in Cameroon and other African countries. These chemicals, although effective, cause many environmental problems such as pollution, diseases and resistance in pests [5, 6]. Furthermore, the majority of farmers in Africa are resource-poor and have neither the means nor the skills to obtain and handle pesticides appropriately. Therefore, an environmentally safe and economically feasible pest control practice needs to be available. Botanicals are products based on parts, powders, extracts or purified substances of plant origin. They are generally assumed to be more biodegradable, leading to fewer environmental problems. Plectranthus glandulosus Hook leaf [7,8,9] and wood ash [10,11,12,13] could stand out as good candidates for environmentally friendly control of storage beetle pests under Cameroonian conditions. P. glandulosus is an annual, glandular and strongly aromatic herb, used in folk medicine for the treatment of colds and sore throat in the Adamawa region of Cameroon [14]. Products from P. glandulosus have shown good insecticidal properties against stored maize grain pests [8, 9, 15,16,17]. The leaf powder of the plant, which is more accessible to rural farmers than synthetic chemicals, could be an alternative for controlling the different stages of insect pests. Many authors have reported the effectiveness of wood ash as a grain protectant [18,19,20,21,22]. The insecticidal efficacy of Hymenocardia acida wood ash needs to be determined since it is one of the plants whose wood is most used as firewood and charcoal in traditional kitchens in the northern part of Cameroon. Combinations of wood ash with P.
glandulosus leaf powder could enhance biological activity against insects. This in turn would reduce the amounts of both botanical and wood ash used in storage protection. Data concerning the effectiveness of binary combinations of H. acida wood ash and P. glandulosus leaf powder are not available, although farmers mix dusts like wood ash with plant materials in their stocks. As the grain stored in traditional facilities is also used as seed, determining the influence of a grain protectant on seed germination is imperative. Therefore, the objective of this study was to assess the effectiveness of binary combinations of P. glandulosus leaf powder and H. acida wood ash regarding adult toxicity, progeny production, population growth, grain damage and germination. Source of maize grains The maize variety used throughout the experimentation was Shaba, provided by IRAD, Wakwa station, in the Adamawa region of Cameroon. Before experimentation, broken grains, pieces of stone, sand and other foreign materials were removed from the stock. Then, the maize was kept in a freezer at − 20 °C for 14 days to allow its disinfestation. After disinfestation from all types of living organisms, the maize was kept under ambient laboratory conditions for 14 days to allow its acclimatisation. After all these steps, the maize was ready for use as a substrate for insect rearing and bioassays. Insect rearing Adults of S. zeamais were obtained from a colony reared since 2005 in the Applied Chemistry Laboratory of the University of Ngaoundere. The insect culture was then transferred to and kept in the Crop Protection Laboratory of IRAD Bambui, North-West region of Cameroon. The weevils were reared on disinfested maize in 900-mL glass jars and kept under laboratory conditions (23.08 ± 2.05 °C and 74.67 ± 14.36% r.h.). The culture was maintained and used as the source of S. zeamais for the bioassays. Preparation of Hymenocardia acida wood ash Stems and branches of H. acida were collected in Ngaoundéré, Adamawa region of Cameroon (latitude 7°25ʹ North and longitude 13°35ʹ East, altitude of 1151 m above sea level). The identity of the plants was confirmed at the Cameroon National Herbarium in Yaounde, where voucher samples were deposited. H. acida is registered under number 50114/HNC. The wood was air-dried until moisture was completely lost and burnt separately in a traditional kitchen normally used in the region. The obtained ash was sieved, packaged in glass jars, labelled and kept in a freezer (at − 4 °C) until subsequent use in the bioassays. Preparation of Plectranthus glandulosus leaf powder Leaves of P. glandulosus were collected in July 2012 in Ngaoundere, headquarters of the Adamawa region of Cameroon (latitude 7°25ʹ North and longitude 13°35ʹ East, altitude of 1151 m above sea level). The identity of the plant was confirmed at the Cameroon National Herbarium in Yaounde under number 7656/SRF. The leaves were dried at room temperature for 7 days and then crushed. The crushed leaves were ground until the powder passed through a 0.20-mm sieve. Part of the powder was then stored in a freezer at − 20 °C until needed for the bioassays, and the other part was used for essential oil extraction. Preparation of binary combinations The products were mixed in the following proportions to constitute the different binary combinations: 25% P. glandulosus leaf powder and 75% H. acida wood ash: 25PG75HA; 50% P. glandulosus leaf powder and 50% H. acida wood ash: 50PG50HA; 75% P. glandulosus leaf powder and 25% H. acida wood ash: 75PG25HA.
Analysis of volatile compounds of Plectranthus glandulosus leaf powder The essential oil was extracted by hydrodistillation for 4 h using a Clevenger-type apparatus. The extracted oil was kept in a brown bottle at 4 °C, to avoid light-induced degradation of its chemical compounds, until needed for gas chromatography–mass spectrometry (GC–MS) analysis. The GC–MS analysis was carried out with an Agilent 7890A GC chromatograph equipped with an automatic injector and an HP-1MS column (15 m × 0.25 mm i.d.; 0.25 µm film thickness), coupled to an Agilent 7890A MSD mass detector. The molecules were bombarded by an electron beam of 70 eV. The carrier gas was helium (1 mL/min) at a pressure of 25 psi at the head of the column. The injector temperature was 250 °C. The temperature programme consisted of a rise from 60 to 230 °C at a rate of 2 °C/min and then 35 °C/min to reach 230 °C. The injection was done in split mode with a split ratio of 1/180. The injected quantity of P. glandulosus essential oil was 0.2 µL. Detection was performed with a quadrupole analyser consisting of an assembly of four parallel cylindrical electrodes. Bombardment of the essential oil by the 70 eV electron beam induced its ionisation and fragmentation. The positive ionic fragments then formed the characteristic mass spectra of the compounds. The obtained spectra were compared with computerised databases using the NIST/EPA/NIH Mass Spectral Library, the Wiley Registry of Mass Spectral Data [23] and König et al. [24]. Determination of Hymenocardia acida wood ash mineral contents The ash sample was calcined at 450 °C for 24 h in an incinerator for complete mineralisation [25]. The calcined ash was dissolved in 1 M nitric acid (HNO3) for digestion and then boiled. The solution was filtered after cooling. The filtrate obtained was used to quantify the following minerals: P, K, Ca, Mg, Na, Fe, Mn, Zn and Pb. Ca, K and Na were determined by flame photometry, while Mg, Fe, Mn, Zn and Pb were determined by atomic absorption spectrometry. The phosphate content was measured by molecular absorption spectrophotometry. Toxicity bioassay The toxicity bioassay was carried out under ambient laboratory conditions. During the experimentation, the temperature and relative humidity were recorded using a data logger (Model EL-USB-2, LASCAR, China). Four contents of each combination were considered. Masses of 0.25, 0.5, 1 and 2 g of P. glandulosus leaf powder, H. acida wood ash and their binary combinations were separately added to 50 g of maize in glass jars to constitute contents of 5, 10, 20 and 40 g/kg, respectively. The insecticidal materials and the grain were thoroughly mixed by manual shaking. The controls consisted of substrate without insecticidal products. A group of 20 insects of mixed sexes, 7 to 14 days old, was added to each jar containing the treated or untreated grains. All treatments were replicated four times, and the experiment was arranged in a completely randomised design. Mortality was recorded 1, 3, 7 and 14 days post-infestation. The co-toxicity coefficient of each P. glandulosus leaf powder–H. acida wood ash mixture was calculated. A co-toxicity coefficient of less than 80 is considered antagonistic, between 80 and 120 additive, and higher than 120 synergistic [26].
When a mixture (M) is composed of two parts (A and B) and both components have an LC50, the following formulae are used (A serves as the standard and is represented in this study by P. glandulosus leaf powder; B represents H. acida wood ash): $$\text{Toxicity index (TI) of } A = 100,$$ $$\text{Toxicity index (TI) of } B = \frac{\text{LC}_{50}\ \text{of}\ A}{\text{LC}_{50}\ \text{of}\ B} \times 100,$$ $$\text{Actual TI of } M = \frac{\text{LC}_{50}\ \text{of}\ A}{\text{LC}_{50}\ \text{of}\ M} \times 100,$$ $$\text{Theoretical TI of } M = \text{TI of } A \times \%\ \text{of}\ A\ \text{in}\ M + \text{TI of } B \times \%\ \text{of}\ B\ \text{in}\ M,$$ $$\text{Co-toxicity coefficient} = \frac{\text{Actual TI of } M}{\text{Theoretical TI of } M} \times 100.$$ If one component of the mixture alone (for example, a wood ash) causes low mortality at all doses (< 20%), then the co-toxicity coefficient of the mixture should be calculated by the formula: $$\text{Co-toxicity coefficient} = \frac{\text{LC}_{50}\ \text{of}\ A\ \text{alone}}{\text{LC}_{50}\ \text{of}\ A\ \text{in the mixture}} \times 100.$$ F1 progeny bioassay After the 14-day mortality recordings, all insects and products were discarded. The grains were left inside the bottles, and the counting of F1 adults was carried out once a week for 5 weeks. Emergence started only from the 5th week after infestation. After each counting session, the insects were removed from the jars [8]. Damage bioassay Four rates of the binary combinations (5, 10, 20 and 40 g/kg) were mixed with 150 g of maize grain as described above. Fifty weevils of mixed sex (7–14 days old) were introduced into each jar. Each treatment had four replications. After 3 months, the live and dead weevils were counted. Damage assessment was performed by counting and weighing the damaged and undamaged grains using the method of Adams and Schulten [27]: $$\text{Weight loss}\ (\%) = \frac{(W_{\text{u}} \times N_{\text{d}}) - (W_{\text{d}} \times N_{\text{u}})}{W_{\text{u}}\,(N_{\text{d}} + N_{\text{u}})} \times 100,$$ where Wu is the weight of undamaged grain, Nd the number of damaged grains, Wd the weight of damaged grain, and Nu the number of undamaged grains. Test of germination Seed germination was tested using 30 grains randomly picked from the non-perforated grains, after separating the perforated from the non-perforated grains in each jar. In order to assess the effect of the binary mixtures on germination ability, seeds were also treated at the different contents and stored as previously described, but without insects. The seeds from the two lots (infested and non-infested) of stored maize grains were placed on moistened paper in 9-cm glass Petri dishes. The number of germinated seeds was recorded after 10 days [28]. Abbott's formula [29] was used to correct for control mortality before analysis of variance (ANOVA) and probit analysis. Data on cumulative corrected mortality, reduction in F1 progeny, damage, weight loss and germination percentage were arcsine-transformed [arcsine √(x/100)], and the number of F1 progeny was log(x + 1)-transformed.
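To make these calculations concrete, the Python sketch below gathers the three formulae just given (Abbott's control-mortality correction [29], the Sun and Johnson co-toxicity coefficient [26] and the Adams–Schulten weight loss [27]) into small helper functions. This is only an illustrative implementation written from the equations as printed here, not the authors' analysis scripts; the function and variable names are our own, and the worked example at the end simply plugs in the 14-day LC50 values quoted later in the text for P. glandulosus leaf powder (28.65 g/kg), H. acida wood ash (1.94 g/kg) and the 75PG25HA mixture (3.04 g/kg).

```python
# Illustrative helpers for the formulae given above (not the authors' scripts).

def abbott_corrected_mortality(treated_pct, control_pct):
    """Abbott's formula [29]: corrected mortality (%) from treated and control mortality (%)."""
    return (treated_pct - control_pct) / (100.0 - control_pct) * 100.0

def co_toxicity_coefficient(lc50_A, lc50_B, lc50_M, pct_A, pct_B):
    """Co-toxicity coefficient of a mixture M of A (standard) and B [26]:
    < 80 antagonistic, 80-120 additive, > 120 synergistic."""
    ti_A = 100.0
    ti_B = lc50_A / lc50_B * 100.0
    actual_ti_M = lc50_A / lc50_M * 100.0
    theoretical_ti_M = ti_A * pct_A / 100.0 + ti_B * pct_B / 100.0
    return actual_ti_M / theoretical_ti_M * 100.0

def weight_loss_pct(W_u, N_u, W_d, N_d):
    """Adams & Schulten [27] weight loss (%) from weights/counts of
    undamaged (W_u, N_u) and damaged (W_d, N_d) grains."""
    return (W_u * N_d - W_d * N_u) / (W_u * (N_d + N_u)) * 100.0

# Example using the 14-day LC50 values quoted in the text (75PG25HA mixture):
# the result is roughly 212, i.e. > 120 (synergistic), in line with the
# interaction reported for this combination. The paper's own Table 5 values may differ.
print(co_toxicity_coefficient(lc50_A=28.65, lc50_B=1.94, lc50_M=3.04, pct_A=75, pct_B=25))
```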
The transformed data were subjected to the ANOVA procedure using the Statistical Analysis System (SAS) [30, 31]. Tukey's test (P = 0.05) was applied for mean separation. Probit analysis [31, 32] was conducted to determine the lethal dosages causing 50% (LC50) and 95% (LC95) mortality of S. zeamais at 1, 3, 7 and 14 days after treatment application. Probit analysis was also used to determine the effective contents causing 50% (EC50) and 95% (EC95) reduction in F1 progeny. Plectranthus glandulosus leaf powder The volatile constituents of the essential oil from P. glandulosus leaf powder were identified by their retention indices and mass spectra in comparison with those of standard synthetic compounds. The results of the chemical analysis are presented in Table 1. The dominant chemical constituents were terpenic compounds. The major compounds were β-pinene (11.5%), α-pinene (11.2%), α-terpineol (10.8%), thymol (10.1%), β-myrcene (9.7%) and 3-carene (8.7%), which are monoterpenes. The major non-monoterpenic compound was an oxygenated sesquiterpene, β-caryophyllene oxide (9.4%). Table 1 Chemical constituents of essential oil of Plectranthus glandulosus leaf Hymenocardia acida wood ash Different ion contents were recorded in the mineral determination (Table 2). Calcium (5800 mg/kg) and phosphorus (2782 mg/kg) recorded higher contents than the other minerals. Iron, zinc and magnesium had almost the same content. The lowest contents were recorded for manganese (0.011 mg/kg) and lead (0.0019 mg/kg). Table 2 Chemical composition of wood ash Adult mortality Plectranthus glandulosus leaf powder, H. acida wood ash and their binary combinations induced significant mortality of S. zeamais adults (Table 3). This mortality increased with content and exposure time for each product (Table 4). The insecticidal efficacy varied slightly amongst the tested products. The highest mortality rates were achieved at the highest content (40 g/kg) of H. acida wood ash (94.66%) and 25PG75HA (94.59%) within 14 days of exposure. Plectranthus glandulosus leaf powder induced low mortality (only 57.24% after 14 days at 40 g/kg) compared to the other products at all exposure periods. Low mortality rates were recorded at 5 g/kg for the different powders. However, this lowest content (5 g/kg) induced significant mortality with increasing exposure time. The combinations 25PG75HA, 50PG50HA and 75PG25HA at 5 g/kg induced 8.75, 11.25 and 21.25% mortality, respectively, within 1 day of exposure. However, the same combinations, in the same order and at the same content level, provoked 68.13, 63.16 and 56.07% mortality of S. zeamais within 14 days of exposure. According to the mortality rate, the different products can be ranked as follows: H. acida wood ash > 25PG75HA > 50PG50HA > 75PG25HA > Plectranthus glandulosus leaf powder. Table 3 Test of between-subjects effects concerning mortality Table 4 Cumulative mortality of Sitophilus zeamais adults induced by Plectranthus glandulosus leaf powder, Hymenocardia acida wood ash and their binary combinations under fluctuating laboratory conditions (temp. = 22.76 ± 2.02 °C; r.h. = 69.87 ± 9.93%) Toxicity and effect of the binary combinations The lethal contents of the different powders and their combinations decreased as the exposure period increased (Table 5). The LC50 of P. glandulosus leaf powder was 213.64 g/kg within 3 days, but decreased to 28.65 g/kg within 14 days. The LC50 of H. acida was 135.23 and 1.94 g/kg within 3 days and 14 days, respectively. The same tendency was observed with the LC95: H.
acida wood ash recorded 190.82 and 38.34 g/kg LC95 values, respectively, within 7-day and 14-day exposure period. Between the 7th and 14th day after exposure, LC50 and LC95 values of the different combinations reduced with increasing exposure periods, respectively, from 18.17 to 3.04 and 458.17 to 416.35 g/kg for 75PG25HA, 6.80 to 1.97 and 372.55 to 86.34 g/kg for 50PG50HA and 10.82 to 2.18 and 928.12 to 33.77 g/kg for 25PG75HA. Table 5 Lethal contents and co-toxicity coefficients of binary combinations of Plectranthus glandulosus leaf powder with Hymenocardia acida wood ash on Sitophilus zeamais under fluctuating laboratory conditions (temp. = 22.76 ± 2.02 °C; r.h. = 69.87 ± 9.93%) The combinations of P. glandulosus leaf powder with H. acida at different proportions produced different interactions. The combination made up by 75% of P. glandulosus leaf powder with 25% of H. acida wood ash produced synergistic effect, whereas that made up by 50% each of two powders had antagonistic effect. The additive effect was observed at 14th day of exposure with 25PG75HA mixture. Reduction of progeny production The three combinations of H. acida and P. glandulosus significantly reduced the production of progeny compared to the control (Table 6). From the application of 5 g/kg (lowest content), the number of emerging adults was highly reduced in treated samples (14.25 emerged adults) than in untreated ones (42.50 emerged adults). The progeny production inhibition ability might depend on the proportion of each botanical involved in the mixture, and thus, 2, 7 and 8 adults emerged, respectively, in grain treated with 25PG75HA, 50PG50HA and 75PG25HA at the content of 40 g/kg. At 5 g/kg, 25PG75HA inhibited the F1 progeny production by more than 60%, whereas 50PG50HA and 75PG25HA caused a reduction of less than 50%. The higher inhibition rate of emerging insects was recorded at the highest content (40 g/kg) of the three combinations, with 95.49, 83.39 and 80.92 reduction, recorded, respectively, for 25PG75HA, 50PG50HA and 75PG25HA. Overall, the combination 25PG75HA was revealed more effective than the two other (50PG50HA and 75PG25HA). This combination recorded the lowest EC50 (2.55 g/kg), whereas the highest EC50 (10.69 g/kg) was recorded for 75PG25HA (Tables 7 and 8). Table 6 Progeny production of Sitophilus zeamais in maize treated with binary mixtures from Plectranthus glandulosus leaf powder with Hymenocardia acida wood ash under fluctuating laboratory conditions (temp. = 22.76 ± 2.02 °C; r.h. = 69.87 ± 9.93%) Table 7 Population increase in Sitophilus zeamais and grain damage recorded in stored maize treated with the binary combinations of wood ash of Hymenocardia acida with leaf powder of Plectranthus glandulosus under fluctuating laboratory conditions (temp. = 22.76 ± 2.02 °C; r.h. = 69.87 ± 9.93%) Table 8 Germination of stored grains treated with binary combinations of Hymenocardia acida wood ash with Plectranthus glandulosus leaf powder and infested, and non-infested by Sitophilus zeamais under laboratory conditions (temp.: 22.76 ± 2.02 °C; r.h. = 69.87 ± 9.93%) Suppression of population increase and reduction in grain damage All the treatments significantly reduced grain damage and population increase, compared to the control. The number of insects, grain damage and weight loss decreased when the concentration of powders increased. Concerning the different parameters, the difference was observed in terms of effectiveness according to the combination. 
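The LC50, LC95, EC50 and EC95 values discussed above were obtained by probit analysis [31, 32]. As a rough illustration of how such lethal contents are estimated, the sketch below fits a probit model of corrected mortality against log10(content) with SciPy and reads off LC50 and LC95. It is a generic sketch using invented dose–mortality numbers, not the study's actual data or its SAS probit procedure.

```python
# Generic probit / LC50 sketch (illustrative only; the study used SAS probit analysis).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit_model(log_dose, a, b):
    # Expected mortality proportion under the probit model.
    return norm.cdf(a + b * log_dose)

# Hypothetical dose-mortality data (contents in g/kg, corrected mortality proportions).
doses = np.array([5.0, 10.0, 20.0, 40.0])
mortality = np.array([0.55, 0.70, 0.85, 0.95])

(a, b), _ = curve_fit(probit_model, np.log10(doses), mortality, p0=[0.0, 1.0])

# LC_p is the content at which the fitted probit line predicts a proportion p of mortality.
lc50 = 10 ** ((norm.ppf(0.50) - a) / b)
lc95 = 10 ** ((norm.ppf(0.95) - a) / b)
print(f"LC50 = {lc50:.2f} g/kg, LC95 = {lc95:.2f} g/kg")
```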
The number of insects was also considerably suppressed. Even at the lowest content (5 g/kg), the three combinations proved to be very effective; the grain treated with 25PG75HA recorded 21.92 damaged grains and 3.22% weight loss, whereas the non-treated grain recorded 49.61 damaged grains and 12.81% weight loss. At their highest content level (40 g/kg), the damage was almost completely suppressed. 25PG75HA proved to be more effective than the other combinations; the grain treated with 25PG75HA recorded fewer insects than that treated with the two other combinations (50PG50HA, 75PG25HA). At 40 g/kg, 26.50 live insects and 58.50 dead insects were counted from the 25PG75HA treatment, whereas from 50PG50HA and 75PG25HA, 68 and 44.75 live insects and 71.50 and 53.75 dead insects were recorded, respectively. Germination rate after 3 months of storage The germination rate varied with treatment. In general, the non-infested maize grains had a better germination rate than the infested ones. The germination rates of the non-infested treated grains did not vary amongst the treatments. However, these rates increased with ascending dosage for the treated infested grains. At 40 g/kg, germination rates of 97.50, 95 and 90% were recorded for the treatments 25PG75HA, 50PG50HA and 75PG25HA, respectively. Without insects, germination was significantly higher even without insecticidal powder (94.33%), whereas with insects the germination rate was very low (21.67%). The binary mixtures of P. glandulosus leaf powder and H. acida wood ash provoked significant mortality of S. zeamais. The different binary combinations of these two substances produced different effects such as synergism, antagonism and additivity. Combining insecticidal materials has the advantage of increasing efficacy by complementing the bio-efficacy of the individual products while simultaneously lowering their doses on the one hand, and of broadening the spectrum of activity and reducing the chance of resistance development on the other [33]. However, with mixtures, negative effects such as reduced efficacy, phytotoxicity and incompatibility between materials can also occur [34]. The combination of 75% P. glandulosus leaf powder and 25% H. acida wood ash produced a significant synergistic effect on S. zeamais mortality, whereas the combination made up of 50% P. glandulosus leaf powder and 50% H. acida wood ash induced an antagonistic effect within 14 days. In general, combining the individual insecticidal materials improved efficacy. An additive effect, in which the effect of the two materials equals the sum of the effects of each component given alone (1 + 3 = 4), was also observed in the present study for the combination 25PG75HA (25% P. glandulosus leaf powder and 75% H. acida wood ash) within 14 days of exposure. Synergism was demonstrated in this experiment by the decrease in LC50 values compared to those of the single materials. The two products may perform differently depending on the proportions used in the combination. The combinations 75PG25HA and 50PG50HA produced synergistic and antagonistic effects, respectively. The same tendency concerning the variations in efficacy for different proportions of products was observed by Ntonifor et al. [35]. They found that the combinations of Syzygium aromaticum (L.)
(Myrtaceae) and Cyperus aequalis (Vahl) (Cyperaceae) at the proportions of 0:2, 0.5:1.5, 1:1, 2:0 (g:g) induced 36.3, 93.8, 98.8 and 100% mortality of C. maculatus, respectively, within 3 days of exposure. Shaalan et al. [36] found that mixtures of Khaya senegalensis A. Juss (Meliaceae) and Daucus carota Linnaeus (Apiaceae) seed extracts were more effective and economical than the phytochemicals alone in controlling Aedes aegypti Linnaeus (Diptera: Culicidae) and Culex annulirostris Skuse (Diptera: Culicidae) mosquitoes. The three binary combinations of P. glandulosus leaf powder and H. acida wood ash considerably inhibited the production of S. zeamais progeny. In addition to increasing mortality, the combinations of these products have an effect on the development of S. zeamais. The presence of P. glandulosus leaf powder in the combinations may potentiate the effect of the ash. There are both physical and chemical actions: desiccation by the ash and poisoning by the chemical compounds contained in P. glandulosus leaves. Mixtures can disturb or delay the development of larvae into adults. Karso and Al Mallah [37] found that a mixture of soya oil and the pesticide Acetamiprid gave the highest average mortality of Trogoderma granarium larvae, which varied according to the proportions. Combinations of insecticidal materials improve the protection of stored grain by reducing qualitative and quantitative losses. The reduction in damage and the suppression of S. zeamais population growth were positively correlated. Combinations of H. acida wood ash and P. glandulosus leaf powder at different proportions considerably reduced damage, by lowering the number of perforated grains and the weight loss, and at the same time inhibited the population increase. Hill [38] reported that wood ash was useful as a physical barrier on the grain. However, it can also possess various chemical properties according to its botanical source. Plectranthus glandulosus leaf, thanks to its chemical compounds, controlled the proliferation of insects, which explains the efficacy of the combinations over a short storage period. When the storage period increased, the efficacy decreased through the loss of the volatile compounds that confer toxicity against insects. Similar findings were reported by Mwangangi and Mutsiya [39], who showed that the efficacy of Ocimum basilicum Linnaeus (Lamiaceae) powder deteriorated the fastest, leading to 80, 77, 44, 20 and 15% mortality over 0, 7, 14, 21 and 28 days of storage. Thus, to improve the effectiveness of ash–plant powder combinations and allow the components to ensure good preservation of the grains, it is important to avoid an increase in moisture. In many African countries, stored grains provide not only grain for food, but also seeds for planting. Thus, the conservation of seed viability after the application of a protectant is necessary. During the experiment, in the presence of weevils, the germination rate increased with increasing product content. The untreated maize in the presence of insects recorded the weakest germination rate. In this case, the seeds lose their germination ability due to the high S. zeamais infestation, as the insect lays its eggs on the grain. The larvae develop and feed inside the grain by consuming the germ of the seed, thus diminishing the viability of the seeds. Usha Rani and Devanand [40] found that seed germination was significantly reduced when untreated maize seeds were exposed to S. oryzae and T. castaneum. But when the maize seeds were stored with combinations of P.
glandulosus leaf powder and wood ash without weevils, no difference was observed amongst treatments and content levels and they conserved significant germination rate. Higher levels of the products improved their ability to protect grain, leading to a greater germination capacity. The different powders did not present any adverse effect on maize seed germination. It has been shown that the storage of maize seeds for prolonged periods after treatment with various botanical concentrations does not have any adverse effect on the seed viability. In the present study, no adverse effect was observed on germination ability. But, some findings reported the inhibiting effect of some plant extracts on seed germination [41, 42]. The application of lower concentrations of Murraya koenigii Linnaeus (Rutaceae) and Capsicum annuum Linnaeus (Solanaceae) extracts caused a normal germination, but the same plants at higher concentrations caused 30–35% inhibition of seed germination. Bustos-Figueroa et al. [42] observed that the leaf powder of P. boldus used alone or mixed with lime did not affect the percentage of maize seed germination. This corroborates our findings about the germination rate recorded with the treatments. Higher germination rate recorded by the combinations of 25% of P. glandulosus leaf powder and 75% of wood ash could be due to the higher content of ash in the combination. According to Philogène [43], the ash does not affect germination but could enhance growth because of the cations that it contains. H. acida wood ash contains high quantity of Ca, K, P, Na and Fe, which are important for plant growth. Parimelazhagan and Francis [44] found that leaf extracts of Cerastium viscosum Linnaeus (Caryophyllaceae) increased seed germination and improved seedling development of rice seeds. In general, grains in storage facilities lost their viability and germination chances as the post-harvest storage period increases [45]. That could explain the loss of viability partly even when the seeds do not have damage. The combinations protected the maize grains against the destruction of their germination capacity by weevils, and they did not influence negatively seed germination. The binary combinations of P. glandulosus leaf powder and H. acida wood ash at different proportions effectively protect maize grain against infestation by S. zeamais in storage. The binary combinations did not alter the viability of maize grains, and thus, the germination rate. The beneficial effect of the combinations could be enhanced by using the appropriated proportions. Other proportions which may be involved in the mixture of these botanicals may be tested in order to the find out the most appropriate efficient combination. Further studies need to be carried out concerning mammalian toxicity that could be attributed to the use of these products in grain storage. Also, the investigations need to be undertaken in order to assess the effect of these powders on the organoleptic and technological properties of treated grains. 25PG75HA: 25% Plectranthus glandulosus leaf powder + 75% Hymenocardia acida wood ash temp.: r.h.: LC: lethal content EC: efficacy content df : degree of freedom FL: fudicial limit Koehler P, Wieser H. Chemistry of cereal grains. In: Gobbetti M, Gänzle M, editors. Handbook on sourdough biotechnology. New York: Springer; 2013. https://doi.org/10.1007/978-1-4614-5425-0_2. FAO, Maize: International Market Profile. FAO/O.N.U. Rome. 2006. Gwinner J, Harnish R, Muck O. 
Manual on the prevention of postharvest grain loss. Eschborn: GTZ; 1996. Ngamo TSL, Hance T. Diversité des ravageurs des denrées et méthodes alternatives de lutte en milieu tropical. Tropicultura. 2007;25:215–20. Subramanyam Bh, Hagstrum D. Resistance measurement and management. In: Subramanyam Bh, Hagstrum D, editors. Integrated management of insects in stored products. New York: Marcel Dekker; 1995. p. 231–398. Park IK, Lee SG, Choi DH. Insecticidal activities of constituents identified in the essential oil from leaves of Chamaecyparis obtusa against Callosobruchus chinensis (L.) and Sitophilus oryzae (L.). J Stored Prod Res. 2003;39:375–84. Ngamo TSL, Ngantanko I, Ngassoum MB, Mapongmetsem PM, Hance T. Insecticidal efficiency of essential oils from 5 aromatic plants tested both, alone and in combination towards Sitophilus oryzae (L.) (Coleoptera: Curculionidae). Res J Biol Sci. 2007;2:75–80. Nukenine EN, Adler C, Reichmuth Ch. Efficacy evaluation of plant powders from Cameroon as post-harvest grain protectants against the infestation of Sitophilus zeamais Motchulsky (Coleoptera: Curculionidae). J Plant Dis Prot. 2007;114:30–6. Nukenine EN, Adler C, Reichmuth Ch. Bioactivity of fenchone and Plectranthus glandulosus oil against Protephanus truncatus and two strains of Sitophilus zeamais. J Appl Entomol. 2010;134:132–41. Mulungu LS, Kubala MT, Mhamphi GG, Misangu R, Mwatawala MW. Efficacy of protectants against maize weevils (Sitophilus zeamais Motschulsky) and the larger grain borer (Prostephanus truncatus Horn) for stored maize. Int Res J Plant Sci. 2010;1:150–4. Moyin-Jesu EI. Comparative evaluation of modified neem leaf, neem leaf and wood ash extracts as pest control in maize (Zea mays L). Emir J Food Agric. 2010;22:34–44. Singh SR. Bioecological studied and control of pulse bettle Callasobruchus chinensis (Coleoptera: Bruchidae) on cowpea seed. Adv Appl Sci Res. 2011;2:295–302. Ntonifor NN, Forbanka DN, Mbuh JV. Potency of Chenopodium ambrosioides powders and its combinations with wood ash on Sitophilus zeamais in stored maize. J Entomol. 2011;8:375–83. Ngassoum MB, Jirovetz L, Buchbauer G, Fleischlacker W. Investigation of essential oils of Plectranthusglandulosus Hook. (Lamiacée) from Cameroon. J Essent Oil Res. 2001;13:73–5. Ngamo TSL, Ngassoum MB, Mapongmestsem PM, Noudjou WF. Use of essential oils of plants as protectant of grains during storage. Agric J. 2007;2:204–9. Goudoum A, Ngamo TLS, Ngassoum MB, Tatsadjieu LN, Mbofung CM. Tribolium castaneum (Coleoptera: Curculionidae) sensitivity to repetitive applications of lethal doses of imidacloprid and extracts of Clausena anisata (Rutaceae) and Plectranthus glandulosus (Lamiaceae). Int J Biol Chem Sci. 2010;4:1242–50. Nukenine EN, Adler C, Reichmuth Ch. Efficacy of Clausena anisata and Plectranthus glandulosus leaf powder against Prostephanus truncatus (Coleoptera: Bostrichidae) and two strains of Sitophilus zeamais (Coleoptera: Curculionidae) on maize. J Pest Sci. 2010;83:81–90. Golob P, Mwambula J, Mhango V, Ngulube F. The use of locally available materials as protectants of maize grain against insect infestation during storage. J Stored Prod Res. 1982;18:67–74. Firdissa E, Abraham T. Effect of some botanicals and other materials against the maize weevil Sitophilus zeamais, Motschulsky on stored maize. In: Bent T, editor, Maize production technology for the future: challenge and opportunities. Proceedings of the sixth eastern and southern Africa regional maize conference, 21–25 September 1998, Addis Ababa, Ethiopia. 1999; p. 
101–4. Akob AC, Ewete KF. The efficacy of ashes of four locally used plant materials against Sitophilus zeamais (Coleoptera: Curculionidae) in Cameroon. Int J Trop Insect Sci. 2007;27:21–6. Oguntade TO, Adekunle AA. Preservation of seeds against fungi using wood-ash of some tropical forest trees in Nigeria. Afr J Microbiol Res. 2010;4:279–88. Gemu M, Getu E, Yosuf A, Tadess T. Management of Sitophilus zeamais Motshulsky (Coleoptera: Ciurculionidae) and Sitotroga cerealella (Olivier) (Lepidoptera: Gelechidae) using locally available inert materials in Southern Ethiopia. Greener J Agric Sci. 2013;3(6):508–15. Mc Lafferty FW, Stauffer DB. Wiley registry of mass spectral data, 6th ed. Mass spectrometry library search system BenchTop/PBM, version 3.10d. Newfield: Palisade Co.; 1994. König WA, Hochmuth DH, Joulain D. Terpenoids and related constituents of essential oils. Hamburg: Institute of Organic Chemistry, University of Hamburg; 2001. Pauwels JM, Van Ranst AE, Verloo M, Mvondo Ze A. Manuel de laboratoire de pédologie: Méthodes d'analyses de sols et de plantes, équipement, gestion de stocks de verrerie et de produits chimiques AGCD et Centre Universitaire de Dschang. Bruxelles: Royaume de Belgique; 1982. Sun Y-P, Johnson ER. Synergistic and antagonistic actions of insecticide synergist combinations and their mode of action. J Agric Food Chem. 1960;8:261–6. Adams JM, Schulten GGM. Loss caused by insects, mites and micro-organisms. In: Harris KL, Lindblad CJ, editors. Post-harvest grain loss assessment methods. St. Paul: American Association of Cereal Chemists; 1978. p. 83–95. Rao NK, Hanson J, Dulloo ME, Ghosh K, Nowell D, Larinde M. Manuel de manipulation des semences dans les banques de gènes. Manuels pour les banques de gènes No. 8. Rome: Bioversity International; 2006. Zar JH. Biostatistical analysis. 4th ed. Upper Saddle River: Prentice-Hall; 1999. SAS Institute. The SAS Sysrem version 9.1 for windows. SAS Institute, Cary. NC; 2003. Finney DJ. Probit analysis. London: Cambridge University Press; 1971. Abbot WA. Method of computing the effectiveness of an insecticide. J Econ Entomol. 1925;18:265–7. Das SK. Scope and relevance of using pesticide mixtures in crop protection: a critical review. Int J Environ Sci Toxicol Res. 2014;2:119–23. Regupathy A, Ramasubramanian T, Ayyasamy R. Rational behind use of pesticide mixtures for management of resistant pest in India. J Food Agric Environ. 2004;2:278–84. Ntonifor NN, Oben EO, Konje CB. Use of selected plant-derived powders and their combinations to protect stored cowpea grains against damage by Callosobruchus maculatus. ARPN J Agric Biol Sci. 2010;5(5):13–21. Shaalan EAS, Canyon DV, Younes MWF, Abdel-Wahab H, Mansour A-H. Synergistic efficacy of botanical blends with and without synthetic insecticides against Aedes aegypti and Culex annulirostris mosquitoes. J Vector Ecol. 2005;30:284–8. Karso BA, Al Mallah NM. Effect of mixing ratio and oil kind on toxicity activation of Acetamprid against Trogoderma granarium larvae. IOSR J Pharm. 2014;4:35–40. Hill DS. Pests of stored products and their control. London: Belhaven Press; 1990. p. 206–61. Mwangangi BM, Mutisya DL. Performance of basil powder as insecticide against maize weevil, Sitopillus zeamais (Coleoptera: Curculionidae). Discourse J Agric Food Sci. 2013;1(11):196–201. Usha Rani P, Devanand P. Efficiency of different plant foliar extracts on grain protection and seed germination in maize. Res J Seed Sci. 2011;4(1):1–14. Chung IM, Miller DA. 
Natural herbicide potential of alfalfa residue on selected weed species. Agron J. 1995;87:920–5. Bustos-Figueroa G, Osses-Ruiz F, Silva-Aguayo G, Tapia-Vargas M, Hepp-Gallo R, Rodriguez-Maciel JC. Insecticidal properties of Peumus boldus Molina powder used alone and mixed with lime against Sitophilus zeamais Motschulsky (Coleoptera: Curculionidae). Chil J Agric Res. 2009;3:350–5. Philogène BJ. Volcanic ash for insect control. Can Entomol. 1972;104:1487. Parimelazhagan T, Francis K. Antifungal activity of Clerodendrum viscosum against Curvularia lunata in rice seeds. J Mycol Plant Pathol. 1999;29:139–41. Hedimbi M, Ananias NK, Kandawa-Schulz M. Effect of storage conditions on viability, germination and sugar content of pearl millet (Pennisetum glaucum) grains. J Res Agric. 2012;1:088–92. JWG, ENN and DN conceived the idea, designed the experiments and analysed the data. JWG, CS and TG carried out the experiments. JWG wrote the manuscript. All authors read and approved the final manuscript. The authors are thankful to the laboratory of soil analysis and environmental chemistry of the faculty of Agronomic Sciences of the University of Dschang (Cameroon) for the analysis of wood ash mineral content and the Research Institute on Medicinal Plants (IMPM) of Yaoundé for the extraction of essential oils and the determination of its volatile constituents using GC–MS. They also express their gratitude to IRAD (Institute of Agricultural Research for Development) Bambui (Cameroon) for providing facilities to carry out this research work. The data sets used and/or analysed during this study are available from the corresponding author on reasonable request. Department of Biological Sciences, Faculty of Science, University of Bamenda, P.O. Box 39, Bambili, Cameroon Jean Wini Goudoungou & Tiburce Gangué Department of Biological Sciences, Faculty of Science, University of Ngaoundere, P.O. Box 454, Ngaoundere, Cameroon Elias Nchiwan Nukenine & Dieudonné Ndjonka Coordination of Annual Crops, IRAD Nkolbisson, P.O. Box 2123, Yaoundé, Cameroon Christopher Suh Search for Jean Wini Goudoungou in: Search for Elias Nchiwan Nukenine in: Search for Christopher Suh in: Search for Tiburce Gangué in: Search for Dieudonné Ndjonka in: Correspondence to Jean Wini Goudoungou. Wini Goudoungou, J., Nchiwan Nukenine, E., Suh, C. et al. Effectiveness of binary combinations of Plectranthus glandulosus leaf powder and Hymenocardia acida wood ash against Sitophilus zeamais (Coleoptera: Curculionidae). Agric & Food Secur 7, 26 (2018) doi:10.1186/s40066-018-0179-z Sitophilus zeamais Hymenocardia acida Plectranthus glandulosus Leaf powder Binary combinations
CommonCrawl
Truncated 5-cell

In geometry, a truncated 5-cell is a uniform 4-polytope (4-dimensional uniform polytope) formed as the truncation of the regular 5-cell. There are two degrees of truncation, including a bitruncation.
[Figure: Schlegel diagrams of the 5-cell, truncated 5-cell and bitruncated 5-cell, centered on [3,3] (cells opposite at [3,3]).]

Truncated 5-cell (Schlegel diagram, tetrahedron cells visible)
Type: Uniform 4-polytope
Schläfli symbol: t0,1{3,3,3} = t{3,3,3}
Cells: 10 — 5 (3.3.3), 5 (3.6.6)
Faces: 30 — 20 {3}, 10 {6}
Edges: 40
Vertices: 20
Vertex figure: Equilateral-triangular pyramid
Symmetry group: A4, [3,3,3], order 120
Properties: convex, isogonal
Uniform index: 2 3 4

The truncated 5-cell, truncated pentachoron or truncated 4-simplex is bounded by 10 cells: 5 tetrahedra and 5 truncated tetrahedra. Each vertex is surrounded by 3 truncated tetrahedra and one tetrahedron; the vertex figure is an elongated tetrahedron.

Construction
The truncated 5-cell may be constructed from the 5-cell by truncating its vertices at 1/3 of its edge length. This transforms the 5 tetrahedral cells into truncated tetrahedra, and introduces 5 new tetrahedral cells positioned near the original vertices.

Structure
The truncated tetrahedra are joined to each other at their hexagonal faces, and to the tetrahedra at their triangular faces. Seen in a configuration matrix, all incidence counts between elements are shown. The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order by a subgroup order obtained by removing one mirror at a time.[1]

A4      k-face    fk   f0   f1        f2        f3       k-figure   Notes
A2      ( )       f0   20   1   3     3   3     3   1    {3}v( )    A4/A2 = 5!/3! = 20
A2A1    { }       f1    2   10  *     3   0     3   0    {3}        A4/A2A1 = 5!/3!/2 = 10
A1A1    { }       f1    2   *   30    1   2     2   1    { }v( )    A4/A1A1 = 5!/2/2 = 30
A2A1    t{3}      f2    6   3   3     10  *     2   0    { }        A4/A2A1 = 5!/3!/2 = 10
A2      {3}       f2    3   0   3     *   20    1   1               A4/A2 = 5!/3! = 20
A3      t{3,3}    f3    12  6   12    4   4     5   *    ( )        A4/A3 = 5!/4! = 5
A3      {3,3}     f3    4   0   6     0   4     *   5               A4/A3 = 5!/4! = 5

Projections
The truncated tetrahedron-first Schlegel diagram projection of the truncated 5-cell into 3-dimensional space has the following structure:
• The projection envelope is a truncated tetrahedron.
• One of the truncated tetrahedral cells projects onto the entire envelope.
• One of the tetrahedral cells projects onto a tetrahedron lying at the center of the envelope.
• Four flattened tetrahedra are joined to the triangular faces of the envelope, and connected to the central tetrahedron via 4 radial edges. These are the images of the remaining 4 tetrahedral cells.
• Between the central tetrahedron and the 4 hexagonal faces of the envelope are 4 irregular truncated tetrahedral volumes, which are the images of the 4 remaining truncated tetrahedral cells.
This layout of cells in projection is analogous to the layout of faces in the face-first projection of the truncated tetrahedron into 2-dimensional space. The truncated 5-cell is the 4-dimensional analogue of the truncated tetrahedron.
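The incidence counts above can be checked directly from coordinates. Using the permutation construction mentioned in the Coordinates section below (vertices given by the 20 permutations of (0,0,0,1,2) on a hyperplane in 5-space), the following Python sketch recounts 20 vertices, 40 edges and 20 triangular faces, matching the f0, f1 and f2({3}) entries of the configuration matrix. It is an illustrative check added here, not part of the original article.

```python
# Illustrative f-vector check for the truncated 5-cell from the permutation
# construction (vertices = permutations of (0,0,0,1,2) in 5-space).
from itertools import permutations, combinations

verts = sorted(set(permutations((0, 0, 0, 1, 2))))            # 20 vertices

def d2(p, q):
    # Squared Euclidean distance between two vertices.
    return sum((a - b) ** 2 for a, b in zip(p, q))

# All edges have squared length 2 in this construction.
edges = [(p, q) for p, q in combinations(verts, 2) if d2(p, q) == 2]

adj = {v: set() for v in verts}
for p, q in edges:
    adj[p].add(q)
    adj[q].add(p)

# Triangular faces appear as 3-cliques of the edge graph (the hexagonal faces do not).
triangles = [t for t in combinations(verts, 3)
             if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]

print(len(verts), len(edges), len(triangles))   # expected: 20 40 20
```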
Images orthographic projections Ak Coxeter plane A4 A3 A2 Graph Dihedral symmetry [5] [4] [3] • net • stereographic projection (centered on truncated tetrahedron) Alternate names • Truncated pentatope • Truncated 4-simplex • Truncated pentachoron (Acronym: tip) (Jonathan Bowers) Coordinates The Cartesian coordinates for the vertices of an origin-centered truncated 5-cell having edge length 2 are: $\left({\frac {3}{\sqrt {10}}},\ {\sqrt {3 \over 2}},\ \pm {\sqrt {3}},\ \pm 1\right)$ $\left({\frac {3}{\sqrt {10}}},\ {\sqrt {3 \over 2}},\ 0,\ \pm 2\right)$ $\left({\frac {3}{\sqrt {10}}},\ {\frac {-1}{\sqrt {6}}},\ {\frac {2}{\sqrt {3}}},\ \pm 2\right)$ $\left({\frac {3}{\sqrt {10}}},\ {\frac {-1}{\sqrt {6}}},\ {\frac {-4}{\sqrt {3}}},\ 0\right)$ $\left({\frac {3}{\sqrt {10}}},\ {\frac {-5}{\sqrt {6}}},\ {\frac {1}{\sqrt {3}}},\ \pm 1\right)$ $\left({\frac {3}{\sqrt {10}}},\ {\frac {-5}{\sqrt {6}}},\ {\frac {-2}{\sqrt {3}}},\ 0\right)$ $\left(-{\sqrt {2 \over 5}},\ {\sqrt {2 \over 3}},\ {\frac {2}{\sqrt {3}}},\ \pm 2\right)$ $\left(-{\sqrt {2 \over 5}},\ {\sqrt {2 \over 3}},\ {\frac {-4}{\sqrt {3}}},\ 0\right)$ $\left(-{\sqrt {2 \over 5}},\ -{\sqrt {6}},\ 0,\ 0\right)$ $\left({\frac {-7}{\sqrt {10}}},\ {\frac {1}{\sqrt {6}}},\ {\frac {1}{\sqrt {3}}},\ \pm 1\right)$ $\left({\frac {-7}{\sqrt {10}}},\ {\frac {1}{\sqrt {6}}},\ {\frac {-2}{\sqrt {3}}},\ 0\right)$ $\left({\frac {-7}{\sqrt {10}}},\ -{\sqrt {3 \over 2}},\ 0,\ 0\right)$ More simply, the vertices of the truncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,0,1,2) or of (0,1,2,2,2). These coordinates come from positive orthant facets of the truncated pentacross and bitruncated penteract respectively. Related polytopes The convex hull of the truncated 5-cell and its dual (assuming that they are congruent) is a nonuniform polychoron composed of 60 cells: 10 tetrahedra, 20 octahedra (as triangular antiprisms), 30 tetrahedra (as tetragonal disphenoids), and 40 vertices. Its vertex figure is a hexakis triangular cupola. Vertex figure Bitruncated 5-cell Bitruncated 5-cell Schlegel diagram with alternate cells hidden. Type Uniform 4-polytope Schläfli symbol t1,2{3,3,3} 2t{3,3,3} Coxeter diagram or or Cells 10 (3.6.6) Faces 40 20 {3} 20 {6} Edges 60 Vertices 30 Vertex figure ({ }v{ }) dual polytope Disphenoidal 30-cell Symmetry group Aut(A4), [[3,3,3]], order 240 Properties convex, isogonal, isotoxal, isochoric Uniform index 5 6 7 The bitruncated 5-cell (also called a bitruncated pentachoron, decachoron and 10-cell) is a 4-dimensional polytope, or 4-polytope, composed of 10 cells in the shape of truncated tetrahedra. Topologically, under its highest symmetry, [[3,3,3]], there is only one geometrical form, containing 10 uniform truncated tetrahedra. The hexagons are always regular because of the polychoron's inversion symmetry, of which the regular hexagon is the only such case among ditrigons (an isogonal hexagon with 3-fold symmetry). E. L. Elte identified it in 1912 as a semiregular polytope. Each hexagonal face of the truncated tetrahedra is joined in complementary orientation to the neighboring truncated tetrahedron. Each edge is shared by two hexagons and one triangle. Each vertex is surrounded by 4 truncated tetrahedral cells in a tetragonal disphenoid vertex figure. The bitruncated 5-cell is the intersection of two pentachora in dual configuration. As such, it is also the intersection of a penteract with the hyperplane that bisects the penteract's long diagonal orthogonally. 
In this sense it is a 4-dimensional analog of the regular octahedron (intersection of regular tetrahedra in dual configuration / tesseract bisection on long diagonal) and the regular hexagon (equilateral triangles / cube). The 5-dimensional analog is the birectified 5-simplex, and the $n$-dimensional analog is the polytope whose Coxeter–Dynkin diagram is linear with rings on the middle one or two nodes. The bitruncated 5-cell is one of the two non-regular convex uniform 4-polytopes which are cell-transitive. The other is the bitruncated 24-cell, which is composed of 48 truncated cubes. Symmetry This 4-polytope has a higher extended pentachoric symmetry (2×A4, [[3,3,3]]), doubled to order 240, because the element corresponding to any element of the underlying 5-cell can be exchanged with one of those corresponding to an element of its dual. Alternative names • Bitruncated 5-cell (Norman W. Johnson) • 10-cell as a cell-transitive 4-polytope • Bitruncated pentachoron • Bitruncated pentatope • Bitruncated 4-simplex • Decachoron (Acronym: deca) (Jonathan Bowers) Images orthographic projections Ak Coxeter plane A4 A3 A2 Graph Dihedral symmetry [[5]] = [10] [4] [[3]] = [6] stereographic projection of spherical 4-polytope (centred on a hexagon face) Net (polytope) Coordinates The Cartesian coordinates of an origin-centered bitruncated 5-cell having edge length 2 are: Coordinates $\pm \left({\sqrt {\frac {5}{2}}},\ {\frac {5}{\sqrt {6}}},\ {\frac {2}{\sqrt {3}}},\ 0\right)$ $\pm \left({\sqrt {\frac {5}{2}}},\ {\frac {5}{\sqrt {6}}},\ {\frac {-1}{\sqrt {3}}},\ \pm 1\right)$ $\pm \left({\sqrt {\frac {5}{2}}},\ {\frac {1}{\sqrt {6}}},\ {\frac {4}{\sqrt {3}}},\ 0\right)$ $\pm \left({\sqrt {\frac {5}{2}}},\ {\frac {1}{\sqrt {6}}},\ {\frac {-2}{\sqrt {3}}},\ \pm 2\right)$ $\pm \left({\sqrt {\frac {5}{2}}},\ -{\sqrt {\frac {3}{2}}},\ \pm {\sqrt {3}},\ \pm 1\right)$ $\pm \left({\sqrt {\frac {5}{2}}},\ -{\sqrt {\frac {3}{2}}},\ 0,\ \pm 2\right)$ $\pm \left(0,\ 2{\sqrt {\frac {2}{3}}},\ {\frac {4}{\sqrt {3}}},\ 0\right)$ $\pm \left(0,\ 2{\sqrt {\frac {2}{3}}},\ {\frac {-2}{\sqrt {3}}},\ \pm 2\right)$ More simply, the vertices of the bitruncated 5-cell can be constructed on a hyperplane in 5-space as permutations of (0,0,1,2,2). These represent positive orthant facets of the bitruncated pentacross. Another 5-space construction, centered on the origin are all 20 permutations of (-1,-1,0,1,1). Related polytopes The bitruncated 5-cell can be seen as the intersection of two regular 5-cells in dual positions. = ∩ . Isotopic uniform truncated simplices Dim. 2 3 4 5 6 7 8 Name Coxeter Hexagon = t{3} = {6} Octahedron = r{3,3} = {31,1} = {3,4} $\left\{{\begin{array}{l}3\\3\end{array}}\right\}$ Decachoron 2t{33} Dodecateron 2r{34} = {32,2} $\left\{{\begin{array}{l}3,3\\3,3\end{array}}\right\}$ Tetradecapeton 3t{35} Hexadecaexon 3r{36} = {33,3} $\left\{{\begin{array}{l}3,3,3\\3,3,3\end{array}}\right\}$ Octadecazetton 4t{37} Images Vertex figure ( )∨( ) { }×{ } { }∨{ } {3}×{3} {3}∨{3} {3,3}×{3,3} {3,3}∨{3,3} Facets {3} t{3,3} r{3,3,3} 2t{3,3,3,3} 2r{3,3,3,3,3} 3t{3,3,3,3,3,3} As intersecting dual simplexes ∩ ∩ ∩ ∩ ∩ ∩ ∩ Configuration Seen in a configuration matrix, all incidence counts between elements are shown. 
The diagonal f-vector numbers are derived through the Wythoff construction, dividing the full group order of a subgroup order by removing one mirror at a time.[2] Elementfk f0 f1 f2 f3 f0 30 2 2 1 4 1 2 2 f1 2 30 * 1 2 0 2 1 2 * 30 0 2 1 1 2 f2 3 3 0 10 * * 2 0 6 3 3 * 20 * 1 1 3 0 3 * * 10 0 2 f3 12 12 6 4 4 0 5 * 12 6 12 0 4 4 * 5 Related regular skew polyhedron The regular skew polyhedron, {6,4|3}, exists in 4-space with 4 hexagonal around each vertex, in a zig-zagging nonplanar vertex figure. These hexagonal faces can be seen on the bitruncated 5-cell, using all 60 edges and 30 vertices. The 20 triangular faces of the bitruncated 5-cell can be seen as removed. The dual regular skew polyhedron, {4,6|3}, is similarly related to the square faces of the runcinated 5-cell. Disphenoidal 30-cell Disphenoidal 30-cell Type perfect[3] polychoron Symbol f1,2A4[3] Coxeter Cells 30 congruent tetragonal disphenoids Faces 60 congruent isosceles triangles   (2 short edges) Edges 40 20 of length $\scriptstyle 1$ 20 of length $\scriptstyle {\sqrt {3/5}}$ Vertices 10 Vertex figure (Triakis tetrahedron) Dual Bitruncated 5-cell Coxeter group Aut(A4), [[3,3,3]], order 240 Orbit vector (1, 2, 1, 1) Properties convex, isochoric The disphenoidal 30-cell is the dual of the bitruncated 5-cell. It is a 4-dimensional polytope (or polychoron) derived from the 5-cell. It is the convex hull of two 5-cells in opposite orientations. Being the dual of a uniform polychoron, it is cell-transitive, consisting of 30 congruent tetragonal disphenoids. In addition, it is vertex-transitive under the group Aut(A4). Related polytopes These polytope are from a set of 9 uniform 4-polytope constructed from the [3,3,3] Coxeter group. Name 5-cell truncated 5-cell rectified 5-cell cantellated 5-cell bitruncated 5-cell cantitruncated 5-cell runcinated 5-cell runcitruncated 5-cell omnitruncated 5-cell Schläfli symbol {3,3,3} 3r{3,3,3} t{3,3,3} 2t{3,3,3} r{3,3,3} 2r{3,3,3} rr{3,3,3} r2r{3,3,3} 2t{3,3,3} tr{3,3,3} t2r{3,3,3} t0,3{3,3,3} t0,1,3{3,3,3} t0,2,3{3,3,3} t0,1,2,3{3,3,3} Coxeter diagram Schlegel diagram A4 Coxeter plane Graph A3 Coxeter plane Graph A2 Coxeter plane Graph References • H.S.M. Coxeter: • H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 • Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6 • (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] • (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] • (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] • Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, ISBN 0-486-40919-8 p. 88 (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues, Proceedings of the London Mathematics Society, Ser. 2, Vol 43, 1937.) • Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33-62, 1937. • Norman Johnson Uniform Polytopes, Manuscript (1991) • N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966) • 1. Convex uniform polychora based on the pentachoron - Model 3, George Olshevsky. • Klitzing, Richard. "4D uniform polytopes (polychora)". x3x3o3o - tip, o3x3x3o - deca Specific 1. Klitzing, Richard. "x3x4o3o - tip". 2. Klitzing, Richard. "x3o4x3o - srip". 3. 
On Perfect 4-Polytopes, Gabor Gévay, Contributions to Algebra and Geometry, Volume 43 (2002), No. 1, 243-259, Table 2, page 252.
Wikipedia
A gut feeling review of 2012. In 2012, we saw some trends: an acceleration of work around AMP solvers from sparse recovery into matrix factorization. the move towards more complex signal structures (we are moving beyond simply cardinality and power laws and it's a about time) personally a somewhat better understanding of the analysis based approach a continuing slew of good solvers (BSBL, any of the AMP solvers) more implementation in C++/Python for reconstruction solvers The actual use of several places where people exchange views (LinkedIn also here, Google+ and Reddit). a few low level interesting hardware development, ghost imaging being one of them but also the potential replacement of the Anger logic, FREAK a few new results that render TV-minimization and NMF a more stable theoretical grounding a few promising approach such the QTT tensor factorization, manifold signal processing, Protein-DNA interaction. a surge in phase retrieval, blind deconvolution approaches a surge of code implementations made available by the researchers themselves (36 in the last three months), two or three are on Github. Our field is becoming a trailblazer when it comes to reproducible research. the birth of the RandNLA movement some continuing thoughts on post publication peer review process a series of compelling videos from GenomeTV a series of themes: Predicting the future, accidental cameras, Sunday Morning Insights, some Monthly Reviews Curiosity landing on Mars Finally, with Cable we had fun with the CAI series and expect to continue it into 2013. Like the pirates sailing from one island of knowledge to the other, forward we go. Happy New Year 2013 to y'all. Image Credit: NASA/JPL/Space Science Institute N00199979.jpg was taken on December 30, 2012 and received on Earth December 31, 2012. The camera was pointing toward SATURN-ERING at approximately 1,048,438 miles (1,687,298 kilometers) away, and the image was taken using the CL1 and CL2 filters. By Igor at 12/31/2012 02:28:00 PM 2 comments: labels: GenomeTV, python Nuit Blanche in Review (December 2012 edition) Since the last Nuit Blanche in Review, we had a few implementations made available (several in Python, is this a trend ?) A hand-waving introduction to sparsity for compressed tomography reconstruction - python, Scattering Representations for Recognition 24AM: 24 efficient parallel SPCA implementations Estimating Natural Illumination from a Single Outdoor Image Robust image reconstruction from multi-view measurements #Python implementation of BSBL code family #Python NMF/NTF with beta divergence Robustness Analysis of HottTopixx, a Linear Programming Model for Factoring Nonnegative Matrices Hott Topixx: Factoring nonnegative matrices with linear programs Scaled gradients on grassmann manifolds for matrix Completion PDRank: Penalty Decomposition Methods for Rank Minimization (and more) NuMax: A Convex Approach for Learning Near-Isometric Linear Embeddings Blind Deconvolution using Convex Programming Random Projections for Support Vector Machines We also had a Q&A: A Q&A with Ben Adcock and Anders Hansen: Infinite Dimensional Compressive Sensing, Generalized Sampling, Wavelet Crimes, Safe Zones and the Incoherence Barrier. Additional insights on the Q&A with Ben Adcock and Anders Hansen several questions: The Genetics of Parkinson's Disease: A question and a meta-question Randomized Bits: Education a Low Rank Problem ?, The JASON Report on Compressive Sensing and more. 
- Multiple regularizers, Robust NMF and a simple question
several themes:
- Ghost and Entanglement Imaging
- Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography
- Highly Technical Reference Pages
- Plenoptic Function Sensing Hacks
- Reconstruction of Integers from Pairwise Distances
- M-MUX / FM-MUX: Compressive Multiplexers for Correlated Signals
- These Technologies Do Not Exist: Closed-Loop Inertial Confinement Fusion
- Quantized Embeddings of Scale-Invariant Image Features for Mobile Augmented Reality
- Parameter estimation methods based on binary observations - Application to Micro-Electromechanical Systems (MEMS)
some items from conferences:
- MMDS2012 Videos are out ! Workshop on Algorithms for Modern Massive Data Sets
- #NIPS2012 Workshop presentations
a list of noteworthy papers and blogs in:
- Compressive Sensing and Matrix Factorization This Month
- Around the blogs in 80 summer hours (NIPS and more)
- Around the blogs in 80 summer hours
Two Sunday Morning Insights:
- Sunday Morning Insight: The extreme paucity of tools for blind deconvolution of biochemical networks
- Sunday Morning Insight: Can L1 help Inpainting ?
some job announcements:
- CSJobs: Research Scientist Job Openings and Student Internships on Sparsity and Compressed Sensing
- Postdoc opening on developing advanced sparse machine learning and bioinformatics strategies
and finally the miscellaneous stuff:
- The Nuit Blanche Chronicles 2012
- The Nuit Blanche Effect, three years later
- Don't tell my mom I blog, she thinks I'm a used car salesman
- New +Google changes
- Google+ and Reddit Experiments
Credit: NASA/ESA. labels: CS, MF, NuitBlancheReview, phaseretrieval, python, QuantCS

When watching the presentation on the Genetics of Parkinson's Disease by Ellen Sidransky featured yesterday, I could not but notice a few other things on top of yesterday's question and meta-question. For one, it is abundantly clear that the disruption of a metabolic circuit, thanks to the lack of a certain gene, does not necessarily imply a large physiological change (also see [1]). Let us note that the robustness in random processes observed in Nature parallels some of the information-preserving capacity in compressive sensing with random matrices. When people find that a specific gene is responsible for one specific disease, it really means that the culpable gene is a single point of failure. But what about those, probably more important, cases when no specific gene is a single point of failure? How do we investigate those cases in some automated fashion (instead of just relying on chance as we have done so far)? Or let me put it in a different perspective that is familiar to readers of Nuit Blanche: what measurements are required for the blind deconvolution of these metabolic circuits? I note the paucity of studies in that regard; see for instance Reverse Engineering Biochemical Networks and Compressive Sensing, It's quite simply, the stuff of Life... and Instances of Null Spaces: Can Compressive Sensing Help Study Non Steady State Metabolic Networks ?. In the post-screening analysis, a natural extension of compressive sensing could include a better sensor and better image reconstruction for confocal imaging, but I wonder if group testing and other known devices could help the other processes like they did for High Throughput Testing? [1] In Properties of Metabolic Networks: Structure versus Function by R. Mahadevan and B. O.
Palsson one can read: It is observed that, functionally speaking, the essentiality of reactions in a node is not correlated with node connectivity as structural analyses of other biological networks have suggested. labels: BlindDeconvolution, SundayMorningInsight

Caught this very interesting presentation on the Genetics of Parkinson's Disease by Ellen Sidransky and couldn't help but think of several issues, including the most important one in my mind: how come GWAS studies on Parkinson's did not pick up on the GBA gene? Also, the mosaic at the very end reminded me of another one we mentioned about multiplexing happening with accidental cameras. Could Compressive Sensing, which is concerned with multiplexing signals and finding the sparsest one, be helpful in the recognition and even classification of diseases? labels: BlindDeconvolution, synbio

We have a slew of pre-prints and papers on both compressive sensing and advanced matrix factorization, enjoy the week-end:

Overview of compressive sensing techniques applied in holography by Yair Rivenson, Adrian Stern, and Bahram Javidi. In recent years compressive sensing (CS) has been successfully introduced in digital holography (DH). Depending on the ability to sparsely represent an object, the CS paradigm provides an accurate object reconstruction framework from a relatively small number of encoded signal samples. DH has proven to be an efficient and physically realizable sensing modality that can exploit the benefits of CS. In this paper, we provide an overview of the theoretical guidelines for application of CS in DH and demonstrate the benefits of compressive digital holographic sensing.

A Fréchet Mean Approach for Compressive Sensing Data Acquisition and Reconstruction in Wireless Sensor Networks by Wei Chen, Miguel R. D. Rodrigues, and Ian J. Wassell. The abstract reads: Compressive sensing leverages the compressibility of natural signals to trade off the convenience of data acquisition against the computational complexity of data reconstruction. Thus, CS appears to be an excellent technique for data acquisition and reconstruction in a wireless sensor network (WSN), which typically employs a smart fusion center (FC) with a high computational capability and several dumb front-end sensors having limited energy storage. This paper presents a novel signal reconstruction method based on CS principles for applications in WSNs. The proposed method exploits both the intra-sensor and inter-sensor correlation to reduce the number of samples required for reconstruction of the original signals. The novelty of the method relates to the use of the Fréchet mean of the signals as an estimate of their sparse representations in some basis. This crude estimate of the sparse representation is then utilized in an enhanced data recovering convex algorithm, i.e., the penalized ℓ1 minimization, and an enhanced data recovering greedy algorithm, i.e., the precognition matching pursuit (PMP). The superior reconstruction quality of the proposed method is demonstrated by using data gathered by a WSN located in the Intel Berkeley Research lab.

On the Performance Bound of Sparse Estimation with Sensing Matrix Perturbation by Yujie Tang, Laming Chen, Yuantao Gu. The abstract reads: This paper focuses on sparse estimation in the situation where both the sensing matrix and the measurement vector are corrupted by additive Gaussian noises. The performance bound of sparse estimation is analyzed and discussed in depth.
Two types of lower bounds, the constrained Cramér-Rao bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It is shown that the situation with sensing matrix perturbation is more complex than the one with only measurement noise. For the CCRB, its closed-form expression is deduced. It demonstrates a gap between the maximal and nonmaximal support cases. It is also revealed that a gap lies between the CCRB and the MSE of the oracle pseudoinverse estimator, but it approaches zero asymptotically when the problem dimensions tend to infinity. For a tighter bound, the HCRB, despite the difficulty of obtaining a simple expression for a general sensing matrix, a closed-form expression in the unit sensing matrix case is derived for a qualitative study of the performance bound. It is shown that the gap between the maximal and nonmaximal cases is eliminated for the HCRB. Numerical simulations are performed to verify the theoretical results in this paper.

Generalized Distributed Compressive Sensing by Jeonghun Park, Seunggye Hwang, Janghoon Yang, Dongku Kim. The abstract reads: Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited ensemble of signals that has a single type of common information \cite{Baron:2009vd}. In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance given arbitrary types of common information, which are classified into not just full common information but also a variety of partial common information. The theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require much a priori knowledge on the various types of inter-signal common information in the ensemble of signals to enhance the performance over the existing DCS. To deal with this problem, we propose a novel algorithm that can search for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for the case of arbitrarily correlated multi-signal ensembles.

Cooperative Sparsity Pattern Recovery in Distributed Networks Via Distributed-OMP by Thakshila Wimalajeewa, Pramod K. Varshney. The abstract reads: In this paper, we consider the problem of collaboratively estimating the sparsity pattern of a sparse signal with multiple measurement data in distributed networks. We assume that each node makes Compressive Sensing (CS) based measurements via random projections regarding the same sparse signal. We propose a distributed greedy algorithm based on Orthogonal Matching Pursuit (OMP), in which the sparse support is estimated iteratively while fusing indices estimated at distributed nodes. In the proposed distributed framework, each node has to perform a number of OMP iterations smaller than the sparsity index of the sparse signal. Thus, with each node having a very small number of compressive measurements, a significant performance gain in support recovery is achieved via the proposed collaborative scheme compared to the case where each node estimates the sparsity pattern independently and then fusion is performed to get a global estimate.
We further extend the algorithm to estimate the sparsity pattern in a binary hypothesis testing framework, where the algorithm first detects the presence of a sparse signal by collaborating among nodes with a smaller number of OMP iterations, and then increases the number of iterations to estimate the sparsity pattern only if a signal is detected.

Approximate Projected Generalized Gradient Methods with Sparsity-inducing Penalties by Laming Chen, Yuantao Gu. The abstract reads: Projected gradient (or subgradient) methods are simple and typical approaches to minimize a convex function with constraints. In the area of sparse recovery, however, plenty of research results reveal that non-convex penalties induce better sparsity than convex ones. In this paper, the idea of projected subgradient methods is extended to minimize a class of sparsity-inducing penalties with linear constraints, and these penalties are not necessarily convex. To make the algorithm computationally tractable for large scale problems, a uniform approximate projection is applied in the projection step. The theoretical convergence analysis of the proposed method, the approximate projected generalized gradient (APGG) method, is provided in the noisy scenario. The result reveals that if the initial solution satisfies certain requirements, the bound of the recovery error is linear in both the noise term and the step size of APGG. In addition, the parameter selection rules and the initial criteria are analyzed. If the approximate least squares solution is adopted as the initial one, the result reveals how non-convex the penalty could be to guarantee convergence to the global optimal solution. Numerical simulations are performed to test the performance of the proposed method and verify the theoretical analysis. Contributions of this paper are compared with some existing results in the end.

Coherence-based Partial Exact Recovery Condition for OMP/OLS by Cedric Herzet, Charles Soussen, Jerome Idier, Remi Gribonval. We address the exact recovery of the support of a k-sparse vector with Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a noiseless setting. We consider the scenario where OMP/OLS have selected good atoms during the first l iterations (l < k) and derive a new sufficient and worst-case necessary condition for their success in k steps. Our result is based on the coherence µ of the dictionary and relaxes Tropp's well-known condition µ < 1/(2k − 1) to the case where OMP/OLS have a partial knowledge of the support.

Consistency of l1 recovery from noisy deterministic measurements by Charles Dossal, Rémi Tesson. In this paper a new result on the recovery of sparse vectors from deterministic and noisy measurements by l1 minimization is given. The sparse vector is randomly chosen and follows a generic p-sparse model introduced by Candès et al. The main theorem ensures consistency of l1 minimization with high probability. This first result is then extended to compressible vectors.

Compressive Schlieren Deflectometry by Prasad Sudhakar, Laurent Jacques, Xavier Dubois, Philippe Antoine, Luc Joannes. Schlieren deflectometry aims at characterizing the deflections undergone by refracted incident light rays at any surface point of a transparent object. For smooth surfaces, each surface location is actually associated with a sparse deflection map (or spectrum). This paper presents a novel method to compressively acquire and reconstruct such spectra.
This is achieved by altering the way deflection information is captured in a common Schlieren Deflectometer, i.e., the deflection spectra are indirectly observed by the principle of spread spectrum compressed sensing. These observations are realized optically using a 2-D Spatial Light Modulator (SLM) adjusted to the corresponding sensing basis and whose modulations encode the light deviation subsequently recorded by a CCD camera. The efficiency of this approach is demonstrated experimentally on the observation of few test objects. Further, using a simple parametrization of the deflection spectra we show that relevant key parameters can be directly computed using the measurements, avoiding full reconstruction. Low-rank Matrix Completion using Alternating Minimization by Prateek Jain, Praneeth Netrapalli, Sujay Sanghavi Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis. Sparse and Optimal Acquisition Design for Diffusion MRI and Beyond by Cheng Guan Koay, Evren Özarslan, Kevin M Johnson, M. Elizabeth Meyerand The focus of this paper is on the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. A novel optimality criterion is proposed for sparse multiple-shell acquisition and quasi multiple-shell designs in diffusion MRI and a novel and effective semi-stochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general---e.g., in the order of 10^{232} for a set of 144 diffusion gradient directions, the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to that of 3D radial MRI and of diffusion spectrum imaging (DSI). 
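Since the alternating minimization abstract a little above spells out the recipe (write the low-rank target as X = U V^T and alternate between solving for U and for V on the observed entries), here is a minimal NumPy sketch of that idea. The function name, the ridge term and the iteration count are my own illustrative choices, not the authors' code:

import numpy as np

def als_matrix_completion(M, mask, r, n_iters=50, reg=1e-3):
    # M    : matrix with observed values (entries outside `mask` are ignored)
    # mask : boolean array, True where an entry of M is observed
    # r    : target rank of the factorization X = U @ V.T
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(n_iters):
        # fix V, update each row of U with a small regularized least-squares solve
        for i in range(m):
            idx = mask[i, :]
            Vi = V[idx, :]
            U[i, :] = np.linalg.solve(Vi.T @ Vi + reg * np.eye(r), Vi.T @ M[i, idx])
        # fix U, update each row of V the same way
        for j in range(n):
            idx = mask[:, j]
            Uj = U[idx, :]
            V[j, :] = np.linalg.solve(Uj.T @ Uj + reg * np.eye(r), Uj.T @ M[idx, j])
    return U @ V.T

Each alternating step is an ordinary least-squares problem, which is exactly why the approach is so popular in practice; the paper's contribution is the analysis of when this non-convex alternation actually converges to the true matrix.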
Few-view single photon emission computed tomography (SPECT) reconstruction based on a blurred piecewise constant object model by Paul A Wolf, Jakob H Jørgensen, Taly G Schmidt, Emil Y Sidky A sparsity-exploiting algorithm intended for few-view Single Photon Emission Computed Tomography (SPECT) reconstruction is proposed and characterized. The algorithm models the object as piecewise constant subject to a blurring operation. To validate that the algorithm closely approximates the true object in the noiseless case, projection data were generated from an object assuming this model and using the system matrix. Monte Carlo simulations were performed to provide more realistic data of a phantom with varying smoothness across the field of view. Reconstructions were performed across a sweep of two primary design parameters. The results demonstrate that the algorithm recovers the object in a noiseless simulation case. While the algorithm assumes a specific blurring model, the results suggest that the algorithm may provide high reconstruction accuracy even when the object does not match the assumed blurring model. Generally, increased values of the blurring parameter and TV weighting parameters reduced noise and streaking artifacts, while decreasing spatial resolution. As the number of views decreased from 60 to 9 the accuracy of images reconstructed using the proposed algorithm varied by less than 3%. Overall, the results demonstrate preliminary feasibility of a sparsity-exploiting reconstruction algorithm which may be beneficial for few-view SPECT. Toeplitz Matrix Based Sparse Error Correction in System Identification: Outliers and Random Noises by Weiyu Xu, Er-Wei Bai, Myung Cho In this paper, we consider robust system identification under sparse outliers and random noises. In our problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correcting problem using a Toeplitz structured real-numbered coding matrix. We prove the performance guarantee of Toeplitz structured matrix in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz structured matrices are also established. When both outliers and observation noise are present, we have shown that the estimation error goes to 0 asymptotically as long as the probability density function for observation noise is not "vanishing" around 0. Sparse seismic imaging using variable projection by Aleksandr Y. Aravkin, Tristan van Leeuwen, Ning Tu We consider an important class of signal processing problems where the signal of interest is known to be sparse, and can be recovered from data given auxiliary information about how the data was generated. For example, a sparse Green's function may be recovered from seismic experimental data using sparsity optimization when the source signature is known. Unfortunately, in practice this information is often missing, and must be recovered from data along with the signal using deconvolution techniques. In this paper, we present a novel methodology to simultaneously solve for the sparse signal and auxiliary parameters using a recently proposed variable projection technique. Our main contribution is to combine variable projection with sparsity promoting optimization, obtaining an efficient algorithm for large-scale sparse deconvolution problems. We demonstrate the algorithm on a seismic imaging example. 
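Several of the entries above (the seismic sparse deconvolution in particular) ultimately hinge on an ℓ1-regularized least-squares sub-problem once the forward operator is fixed. The standard baseline for that sub-problem is plain iterative soft thresholding (ISTA); the sketch below is a generic illustration with made-up names and parameters, not the authors' variable-projection code:

import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iters=200):
    # minimize 0.5*||A x - y||_2^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x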
Reducing Reconciliation Communication Cost with Compressed Sensing by H. T. Kung, Chia-Mu Yu We consider a reconciliation problem, where two hosts wish to synchronize their respective sets. Efficient solutions for minimizing the communication cost between the two hosts have been previously proposed in the literature. However, they rely on prior knowledge about the size of the set differences between the two sets to be reconciled. In this paper, we propose a method which can achieve comparable efficiency without assuming this prior knowledge. Our method uses compressive sensing techniques which can leverage the expected sparsity in set differences. We study the performance of the method via theoretical analysis and numerical simulations. Compressed Sensing Recoverability In Imaging Modalities by Mahdi S. Hosseini, Konstantinos N. Plataniotis The paper introduces a framework for the recoverability analysis in compressive sensing for imaging applications such as CI cameras, rapid MRI and coded apertures. This is done using the fact that the Spherical Section Property (SSP) of a sensing matrix provides a lower bound for unique sparse recovery condition. The lower bound is evaluated for different sampling paradigms adopted from the aforementioned imaging modalities. In particular, a platform is provided to analyze the well-posedness of sub-sampling patterns commonly used in practical scenarios. The effectiveness of the various designed patterns for sparse image recovery is studied through numerical experiments. Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding by Ramji Venkataramanan, Tuhin Sarkar, Sekhar Tatikonda We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. The algorithm is shown to achieve the optimal distortion-rate function for i.i.d Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off is: computational resource (space or time) per source sample of O((n/log n)^2) and probability of excess distortion decaying exponentially in n/log n, where n is the block length. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d Gaussian source with the same variance. Simulations show that the encoder has very good empirical performance, especially at low and moderate rates. Low Rank Mechanism for Optimizing Batch Queries under Differential Privacy by Ganzhao Yuan, Zhenjie Zhang, Marianne Winslett, Xiaokui Xiao, Yin Yang, Zhifeng Hao Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results, while satisfying the privacy guarantees. 
Previous work, notably \cite{LHR+10}, has suggested that with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution to find such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations for even moderately large domains. Motivated by this, we propose the \emph{Low-Rank Mechanism} (LRM), the first practical differentially private technique for answering batch queries with high accuracy, based on a \emph{low rank approximation} of the workload matrix. We prove that the accuracy provided by LRM is close to the theoretical lower bound for any mechanism to answer a batch of queries under differential privacy. Extensive experiments using real data demonstrate that LRM consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins. Tensor Principal Component Analysis via Convex Optimization by Bo Jiang, Shiqian Ma, Shuzhong Zhang This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by Semidefinite Programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which reduces significantly the computational efforts. Various extensions of the model are considered as well. Chaotic Analog-to-Information Conversion: Principle and Reconstructability with Parameter Identifiability by Feng Xi, Sheng Yao Chen, Zhong Liu This paper proposes a chaos-based analog-to-information conversion system for the acquisition and reconstruction of sparse analog signals. The sparse signal acts as an excitation term of a continuous-time chaotic system and the compressive measurements are performed by sampling chaotic system outputs. The reconstruction is realized through the estimation of the sparse coefficients with principle of chaotic parameter estimation. With the deterministic formulation, the reconstructability analysis is conducted via the sensitivity matrix from the parameter identifiability of chaotic systems. For the sparsity-regularized nonlinear least squares estimation, it is shown that the sparse signal is locally reconstructable if the columns of the sparsity-regularized sensitivity matrix are linearly independent. A Lorenz system excited by the sparse multitone signal is taken as an example to illustrate the principle and the performance. 
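Both the low-rank mechanism and the tensor PCA abstracts above lean on a nuclear-norm penalty to promote low rank; numerically, that penalty is almost always handled through its proximal operator, which simply soft-thresholds the singular values. A small illustration of that single step (mine, not taken from either paper):

import numpy as np

def nuclear_prox(X, tau):
    # proximal operator of tau*||.||_* : shrink the singular values of X by tau
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

Plugging this step into a proximal-gradient or ADMM loop is, in essence, what most nuclear-norm solvers amount to in practice.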
Smoothed analysis of symmetric random matrices with continuous distributions by Brendan Farrell, Roman Vershynin We study invertibility of matrices of the form $D+R$ where $D$ is an arbitrary symmetric deterministic matrix, and $R$ is a symmetric random matrix whose independent entries have continuous distributions with bounded densities. We show that $|(D+R)^{-1}| = O(n^2)$ with high probability. The bound is completely independent of $D$. No moment assumptions are placed on $R$; in particular the entries of $R$ can be arbitrarily heavy-tailed. Compressed Sensing Based on Random Symmetric Bernoulli Matrix by Yi-Zheng Fan, Tao Huang, Ming Zhu The task of compressed sensing is to recover a sparse vector from a small number of linear and non-adaptive measurements, and the problem of finding a suitable measurement matrix is very important in this field. While most recent works focused on random matrices with entries drawn independently from certain probability distributions, in this paper we show that a partial random symmetric Bernoulli matrix whose entries are not independent, can be used to recover signal from observations successfully with high probability. The experimental results also show that the proposed matrix is a suitable measurement matrix. New and Improved Conditions for Uniqueness of Sparsest Solutions of Underdetermined Linear Systems by Yun-Bin Zhao The uniqueness of sparsest solutions of underdetermined linear systems plays a fundamental role in the newly developed compressed sensing theory. Several new algebraic concepts, including the sub-mutual coherence, scaled mutual coherence, coherence rank, and sub-coherence rank, are introduced in this paper in order to develop new and improved sufficient conditions for the uniqueness of sparsest solutions. The coherence rank of a matrix with normalized columns is the maximum number of absolute entries in a row of its Gram matrix that are equal to the mutual coherence. The main result of this paper claims that when the coherence rank of a matrix is low, the mutual-coherence-based uniqueness conditions for the sparsest solution of a linear system can be improved. Furthermore, we prove that the Babel-function-based uniqueness can be also improved by the so-called sub-Babel function. Moreover, we show that the scaled-coherence-based uniqueness conditions can be developed, and that the right-hand-side vector $b$ of a linear system, the support overlap of solutions, the orthogonal matrix out of the singular value decomposition of a matrix, and the range property of a transposed matrix can be also integrated into the criteria for the uniqueness of the sparsest solution of an underdetermined linear system. A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics by Yunchao Gong, Qifa Ke, Michael Isard, Svetlana Lazebnik This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. 
We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets. Sodium Magnetic Resonance Imaging: Biomedical Applications by Guillaume Madelin In this article, we present an up-to-date overview of the potential biomedical applications of sodium MRI in vivo. Sodium MRI is a subject of increasing interest in translational research as it can give some direct and quantitative biochemical information on the tissue viability, cell integrity and function, and therefore not only help the diagnosis but also the prognosis of diseases and treatment outcomes. It has already been applied in vivo in most of human tissues, such as brain for stroke or tumor detection and therapeutic response, in breast cancer, in articular cartilage, in muscle and in kidney, and it was shown in some studies that it could provide very useful new information not available through standard proton MRI. However, this technique is still very challenging due to the low detectable sodium signal in biological tissue with MRI and hardware/software limitations of the clinical scanners. The article is divided in three parts: (1) the role of sodium in biological tissues, (2) a short review on sodium magnetic resonance, and (3) a review of some studies on sodium MRI on different organs/diseases to date. Sparse Dynamics for Partial Differential Equations by Hayden Schaeffer, Stanley Osher, Russel Caflisch, Cory Hauck We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high frequency source terms. From compression to compressed sensing by Shirin Jalali, Arian Maleki Can compression algorithms be employed for recovering signals from their underdetermined set of linear measurements? Addressing this question is the first step towards applying compression algorithms to compressed sensing (CS). In this paper, we consider a family of compression algorithms $\mathcal{C}_R$, parametrized by rate $R$, for a compact class of signals $\mathcal{Q} \subset \mathds{R}^n$. The set of natural images and JPEG2000 at different rates are examples of $\mathcal{Q}$ and $\mathcal{C}_R$, respectively. We establish a connection between the rate-distortion performance of $\mathcal{C}_R$, and the number of linear measurement required for successful recovery in CS. 
We then propose compressible signal pursuit (CSP) algorithm and prove that, with high probability, it accurately and robustly recovers signals from an underdetermined set of linear measurements. We also explore the performance of CSP in the recovery of infinite dimensional signals. Exploring approximations or simplifications of CSP, which is computationally demanding, is left for the future research. Applying full polarization A-Projection to very wide field of view instruments: An imager for LOFAR by C. Tasse, B. van der Tol, J. van Zwieten, Ger van Diepen, S. Bhatnagar The aimed high sensitivities and large fields of view of the new generation of interferometers impose to reach high dynamic range of order $\sim$1:$10^6$ to 1:$10^8$ in the case of the Square Kilometer Array. The main problem is the calibration and correction of the Direction Dependent Effects (DDE) that can affect the electro-magnetic field (antenna beams, ionosphere, Faraday rotation, etc.). As shown earlier the A-Projection is a fast and accurate algorithm that can potentially correct for any given DDE in the imaging step. With its very wide field of view, low operating frequency ($\sim30-250$ MHz), long baselines, and complex station-dependent beam patterns, the Low Frequency Array (LOFAR) is certainly the most complex SKA precursor. In this paper we present a few implementations of A-Projection applied to LOFAR that can deal with non-unitary station beams and non-diagonal Mueller matrices. The algorithm is designed to correct for all the DDE, including individual antenna, projection of the dipoles on the sky, beam forming and ionospheric effects. We describe a few important algorithmic optimizations related to LOFAR's architecture allowing us to build a fast imager. Based on simulated datasets we show that A-Projection can give dramatic dynamic range improvement for both phased array beams and ionospheric effects. We will use this algorithm for the construction of the deepest extragalactic surveys, comprising hundreds of days of integration. The Scale of the Problem : Recovering Images of Reionization with GMCA by Emma Chapman, Filipe B. Abdalla, J. Bobin, J.-L. Starck, Geraint Harker, Vibor Jelic, Panagiotis Labropoulos, Saleem Zaroubi, Michiel A. Brentjens, A. G. de Bruyn, L. V. E. Koopmans The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis or GMCA, to simulated LOFAR-EoR data and show that it has the ability to clean the foregrounds with high accuracy. We recover the 21-cm 1D, 2D and 3D power spectra with high accuracy across an impressive range of frequencies and scales. We show that GMCA preserves the 21-cm phase information, especially when the smallest spatial scale data is discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the GMCA output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smaller wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. 
Considering only the central 50% of the maps, these coefficients improve to 0.905 and 0.605 respectively and we conclude that wavelet decomposition is a significantly more powerful method to denoise reconstructed 21-cm maps than smoothing. Sparse Hopfield network reconstruction with $\ell_{1}$ regularization by Haiping Huang We propose an efficient strategy to infer sparse Hopfield network based on the magnetizations and pairwise correlations measured through Glauber samplings. This strategy incorporates the $\ell_{1}$ regularization into the Bethe approximation, and is able to further reduce the inference error of the Bethe approximation without the regularization. The optimal regularization parameter is observed to be of the order of $M^{-1/2}$ where $M$ is the number of independent samples. Distributed Sparse Signal Recovery For Sensor Networks by Stacy Patterson, Yonina C. Eldar, Idit Keidar We propose a distributed algorithm for sparse signal recovery in sensor networks based on Iterative Hard Thresholding (IHT). Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements at a minimal communication cost and with low computational complexity. A naive distributed implementation of IHT would require global communication of every agent's full state in each iteration. We find that we can dramatically reduce this communication cost by leveraging solutions to the distributed top-K problem in the database literature. Evaluations show that our algorithm requires up to three orders of magnitude less total bandwidth than the best-known distributed basis pursuit method. Fourier Domain Beamforming for Medical Ultrasound by T. Chernyakova, Y. C. Eldar, R. Amit Sonography techniques use multiple transducer elements for tissue visualization. Signals detected at each element are sampled prior to digital beamforming. The required sampling rates are up to 4 times the Nyquist rate of the signal and result in considerable amount of data, that needs to be stored and processed. A developed technique, based on the finite rate of innovation model, compressed sensing (CS) and Xampling ideas, allows to reduce the number of samples needed to reconstruct an image comprised of strong reflectors. A significant drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here we build on previous work and show explicitly how to perform beamforming in the Fourier domain. Beamforming in frequency exploits the low bandwidth of the beamformed signal and allows to bypass the oversampling dictated by digital implementation of beamforming in time. We show that this allows to obtain the same beamformed image as in standard beamforming but from far fewer samples. Finally, we present an analysis based CS-technique that allows for further reduction in sampling rate, using only a portion of the beamformed signal's bandwidth, namely, sampling the signal at sub-Nyquist rates. We demonstrate our methods on in vivo cardiac ultrasound data and show that reductions up to 1/25 over standard beamforming rates are possible. REGULARIZED BAYESIAN COMPRESSED SENSING IN ULTRASOUND IMAGING by Nicolas Dobigeon, Adrian Basarab, Denis Kouame´ and Jean-Yves Tourneret. Compressed sensing has recently shown much interest for ultrasound imaging. 
In particular, exploiting the sparsity of ultrasound images in the frequency domain, a specific random sampling of ultrasound images can be used advantageously for designing efficient Bayesian image reconstruction methods. We showed in a previous work that assigning independent Bernoulli Gaussian priors to the ultrasound image in the frequency domain provides Bayesian reconstruction errors similar to a classical ℓ1 minimization technique. However, the advantage of Bayesian methods is to estimate the sparsity level of the image by using a hierarchical algorithm. This paper goes a step further by exploiting the spatial correlations between the image pixels in the frequency domain. A new Bayesian model based on a correlated Bernoulli Gaussian model is proposed for that purpose. The parameters of this model can be estimated by sampling the corresponding posterior distribution using an MCMC method. The resulting algorithm provides very low reconstruction errors even when significantly reducing the number of measurements via random sampling.

The degrees of freedom of the Group Lasso for a General Design by Samuel Vaiter, Charles Deledalle, Gabriel Peyré, Jalal Fadili, Charles Dossal. In this paper, we are concerned with regression problems where covariates can be grouped in nonoverlapping blocks, and where only a few of them are assumed to be active. In such a situation, the group Lasso is an attractive method for variable selection since it promotes sparsity of the groups. We study the sensitivity of any group Lasso solution to the observations and provide its precise local parameterization. When the noise is Gaussian, this allows us to derive an unbiased estimator of the degrees of freedom of the group Lasso. This result holds true for any fixed design, no matter whether it is under- or overdetermined. With these results at hand, various model selection criteria, such as the Stein Unbiased Risk Estimator (SURE), are readily available which can provide an objectively guided choice of the optimal group Lasso fit.

Credit: NASA/JPL-Caltech/MSSS, Layered Martian Outcrop 'Shaler' in 'Glenelg' Area. By Igor at 12/28/2012 12:31:00 PM. labels: CS, CSHardware, MatrixFactorization, MF, phaseretrieval

About a year ago we had a few discussions on multiple regularizers (Multiple Regularizers For the Reconstruction of Natural Objects ? and Optimization with multiple non-standard regularizers) and it looks like the subject is taking off; see the next few papers:

Learning efficient sparse and low rank models by Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro. The abstract reads: Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization.
We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms. Simultaneously Structured Models with Application to Sparse and Low-rank Matrices by Samet Oymak, Amin Jalali, Maryam Fazel, Yonina C. Eldar, Babak Hassibi. The abstract reads: The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in \emph{several} ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. An important application is the sparse phase retrieval problem, where the goal is to recover a sparse signal from phaseless measurements. In machine learning, the problem comes up when combining several regularizers that each promote a certain desired structure. Often penalties (norms) that promote each individual structure are known and yield an order-wise optimal number of measurements (e.g., $\ell_1$ norm for sparsity, nuclear norm for matrix rank), so it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with the individual norms, then we can do no better, order-wise, than an algorithm that exploits only one of the several structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case. After the use of an iPhone for signal reconstruction (not just display!), here is an iPad involved in Robust PCA decomposition. One item of further interest is the robust-NMF approach. Alas, like the other preprints, no codes are available:. Learning Robust Low-Rank Representations by Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro. The abstract reads: In this paper we present a comprehensive framework for learning robust low-rank representations by combining and extending recent ideas for learning fast sparse coding regressors with structured non-convex optimization techniques. This approach connects robust principal component analysis (RPCA) with dictionary learning techniques and allows its approximation via trainable encoders. 
We propose an efficient feed-forward architecture derived from an optimization algorithm designed to exactly solve robust low dimensional projections. This architecture, in combination with different training objective functions, allows the regressors to be used as online approximants of the exact offline RPCA problem or as RPCA-based neural networks. Simple modifications of these encoders can handle challenging extensions, such as the inclusion of geometric data transformations. We present several examples with real data from image, audio, and video processing. When used to approximate RPCA, our basic implementation shows several orders of magnitude speedup compared to the exact solvers with almost no performance degradation. We show the strength of including learning in the RPCA approach on a music source separation application, where the encoders outperform the exact RPCA algorithms, which are already reported to produce state-of-the-art results on a benchmark database. Our preliminary implementation on an iPad shows faster-than-real-time performance with minimal latency.

RobustNMF showed up earlier here: Robust Nonnegative Matrix Factorization via $L_1$ Norm Regularization by Bin Shen, Luo Si, Rongrong Ji, Baodi Liu, and here: Robust Non Negative Matrix Factorization for Multispectral Data with Sparsity Prior by Jeremy Rapin, Jerome Bobin, Anthony Larue, and Jean-Luc Starck. When one simply consults PubMed with Matrix Factorization as a keyword, one realizes that NMF is being equated to Matrix Factorization (as if there were no other matrix factorization!) and that different fields are using it extensively. Why no code release for Robust-NMF is beyond me. If there is no code release, the algorithm cannot be listed on the Advanced Matrix Factorization Jungle page. If you implement some of these algorithms please do let me know and you will be featured on the blog and in the Jungle page. Image Credit: NASA/JPL-Caltech/Space Science Institute, PIA14934: A Splendor Seldom Seen. labels: CS, MatrixFactorization, MF, NMF, phaseretrieval

Wow, here is a goodie that is now on the Advanced Matrix Factorization Jungle page: 24 parallel Sparse PCA implementations. The supporting paper is: Alternating Maximization: Unifying Framework for 8 Sparse PCA Formulations and Efficient Parallel Codes by Peter Richtárik, Martin Takáč, Selin Damla Ahipaşaoğlu. The abstract reads: Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al., JMLR 11:517-553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster) for each of the 8 problems.
Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance and iii) dealing with big data problems (our cluster code is able to solve a 357 GB problem in about a minute). The 24AM Parallel Sparse PCA Codes are here. labels: implementation, MatrixFactorization, MF
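To give a feel for what the simplest of these formulations looks like: with an L2 measure of variance and an L0 (cardinality) constraint, a single sparse loading vector can be chased with a power iteration that hard-truncates to k nonzeros at every step. The toy sketch below is only meant to illustrate that idea and is not one of the 24AM codes:

import numpy as np

def sparse_loading(Sigma, k, n_iters=100):
    # Sigma : sample covariance matrix, k : number of nonzero loadings to keep
    p = Sigma.shape[0]
    v = np.ones(p) / np.sqrt(p)
    for _ in range(n_iters):
        w = Sigma @ v
        # keep only the k largest-magnitude coordinates, zero out the rest
        w[np.argsort(np.abs(w))[:-k]] = 0.0
        nrm = np.linalg.norm(w)
        if nrm == 0.0:
            break
        v = w / nrm
    return v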
CommonCrawl
Asian Pacific Organization for Cancer Prevention
Cancer Preventive Effects of Whole Cell Type Immunization against Mice Ehrlich Tumors
Aysan, Erhan (Department of General Surgery, Bezmialem Vakif University); Bayrak, Omer Faruk (Department of Genetics and Bioengineering, Yeditepe University); Aydemir, Esra (Department of Genetics and Bioengineering, Yeditepe University); Telci, Dilek (Department of Genetics and Bioengineering, Yeditepe University); Sahin, Fikrettin (Department of Genetics and Bioengineering, Yeditepe University); Yardimci, Cem (Department of Microbiology, Istanbul Educational and Research Hospital); Muslumanoglu, Mahmut (Department of General Surgery, Bezmialem Vakif University)
Background: The effects of whole cell type immunization on mice Ehrlich tumours were evaluated.
Materials and Methods: After a preliminary study, mice were divided into two major groups, receiving transfers of 1×1000 or 100×1000 live Ehrlich cells, and each major group was divided into four subgroups (n: 10). Study groups were immunized with Ehrlich cell lysates on days 0, 3, 7 and 14, and 30 days after the last immunization live Ehrlich cells were transferred. Mice were observed for six months and evaluated for total and cancer-free days.
Results: Apart from the 100×1000 cell transferred solid type study group, all study group mean survival and tumour-free periods were statistically longer than those of the control groups. All 1×1000 Ehrlich cell transferred study groups survived significantly longer than the 100×1000 Ehrlich cell transferred groups.
Conclusions: Ehrlich mice tumours were prevented and survival was prolonged with whole cell type immunization. The effects are related to the number of transferred tumour cells.
Keywords: Cancer; prevention; whole cell immunization; Ehrlich cell tumours
CommonCrawl
Aniekan Ebiefung
Professor Aniekan Ebiefung
UC Foundation Professor of Mathematics
Fulbright Scholar
Fellow, Institute of Operational Research and Management Science
Dr. Ebiefung is a University of Chattanooga Foundation Professor of Mathematics and graduate faculty. He received a B.Sc. (First Class Honors) from the University of Calabar, Nigeria, and a Ph.D. in Mathematical Sciences from Clemson University, USA. Dr. Ebiefung has received over 5 research and teaching awards. He has also received over 40 grants, published over 39 papers, and made over 48 presentations. He regularly conducts workshops for teachers and is the author of the book "Responsible Use of the Internet in Education." As an advanced Toastmaster, he has served as club president and area governor of the Toastmasters organization, an international organization for communication and leadership training. Professor Ebiefung is listed in Marquis Who's Who in American Education, Who's Who in Science and Engineering, and Who's Who in the World. He is married with 3 children. $$ \int_{-\infty}^{\infty} e^{-x^{2}} \, dx = \sqrt{\pi}$$ $$ \oint_{C} f (z) \, dz $$
CommonCrawl
Quasi-regular mapping
From Encyclopedia of Mathematics
mapping with bounded distortion

Let $G$ be an open connected subset of the Euclidean space $\mathbf{R}^n$, $n \geq 2$, and let $f : G \rightarrow \mathbf{R}^n$ be a continuous mapping of Sobolev class $W^{1,n}_{\text{loc}}(G)$ (cf. also Sobolev space). Then $f$ is said to be quasi-regular if there is a number $K \in [1, \infty)$ such that the inequality \begin{equation*} |f^{\prime}(x)|^n \leq K J_f(x) \end{equation*} is satisfied almost everywhere (a.e.) in $G$. Here, $f^{\prime}(x)$ denotes the formal derivative of $f$ at $x$, i.e. the $(n \times n)$-matrix of partial derivatives of the coordinate functions $f_j$ of $f = (f_1, \dots, f_n)$. Further, \begin{equation*} |f^{\prime}(x)| = \max\{|f^{\prime}(x)h| : |h| = 1\} \end{equation*} and $J_f(x)$ is the Jacobian determinant (cf. also Jacobian) of $f$ at $x$. The smallest $K \geq 1$ in the above inequality is called the outer dilatation of $f$ and is denoted by $K_{\text{O}}(f)$. If $f$ is quasi-regular, then the inner dilatation $K_{\text{I}}(f)$ is the smallest constant $K \geq 1$ in the inequality \begin{equation*} J_f(x) \leq K\, l(f^{\prime}(x))^n, \end{equation*} where \begin{equation*} l(f^{\prime}(x)) = \min\{|f^{\prime}(x)h| : |h| = 1\}. \end{equation*} The maximal dilatation is the number $K(f) = \max\{K_{\text{O}}(f), K_{\text{I}}(f)\}$, and one says that $f$ is $K$-quasi-regular if $K(f) \leq K$.

Particular cases. For the dimension $n = 2$ and $K = 1$, the class of $K$-quasi-regular mappings agrees with that of the complex-analytic functions (cf. also Analytic function). Injective quasi-regular mappings in dimensions $n \geq 2$ are called quasi-conformal (see Quasi-conformal mapping; [a4], [a8], [a10]). For $n = 2$ and $K \geq 1$, a $K$-quasi-regular mapping $f : G \rightarrow \mathbf{R}^2$ can be represented in the form $\varphi \circ w$, where $w : G \rightarrow G^{\prime}$ is a $K$-quasi-conformal homeomorphism and $\varphi : G^{\prime} \rightarrow \mathbf{R}^2$ is an analytic function (Stoilow's theorem). There is no such representation in dimensions $n \geq 3$. Note that for all dimensions $n \geq 2$ and $K > 1$ the set of those points where a $K$-quasi-conformal mapping is not differentiable may be non-empty and the behaviour of the mapping may be very curious at the points of this set, which also has Lebesgue measure zero. Thus, there is a substantial difference between the two cases $K > 1$ and $K = 1$.

For higher dimensions $n \geq 3$ the theory of quasi-regular mappings is essentially different from the plane case $n = 2$. There are several reasons for this: a) there are neither general existence theorems nor counterparts of power series expansions in higher dimensions; b) the usual methods of function theory are not applicable in the higher-dimensional setup; c) in the plane case the class of conformal mappings is very rich (cf. also Conformal mapping), while in higher dimensions it is very small (J. Liouville proved that for $n \geq 3$ and $K = 1$, sufficiently smooth $K$-quasi-conformal mappings are restrictions of Möbius transformations, see Quasi-conformal mapping); d) for dimensions $n \geq 3$ the branch set (i.e.
the set of those points at which the mapping fails to be a local homeomorphism) is more complicated than in the two-dimensional case; for instance, it does not contain isolated points (see also Zorich theorem). In spite of these difficulties, many basic properties of analytic functions have their counterparts for quasi-regular mappings. In a pioneering series of papers, Yu.G. Reshetnyak proved in 1966–1969 that these mappings share the fundamental topological properties of complex-analytic functions: non-constant quasi-regular mappings are discrete, open and sense-preserving. He also proved important convergence theorems for these mappings and several analytic properties: they preserve sets of zero Lebesgue $n$-measure, are differentiable almost everywhere and are Hölder continuous (cf. also Hölder condition). The Reshetnyak theory (which uses the phrase mapping with bounded distortion for "quasi-regular mapping" ) makes use of Sobolev spaces, potential theory, partial differential equations, calculus of variations, and differential geometry. These results led to the discovery of the connection between partial differential equations and quasi-regular mappings. One of the main results says that if $f$ is quasi-regular and non-zero, then $\operatorname{log}| f ( x ) |$ is a solution of a certain non-linear elliptic partial differential equation. For $n = 2$ and $K = 1$ these equations reduce to the familiar Laplace equation and the corresponding potential theory is linear (see [a5]). The word "quasi-regular" was introduced in this meaning in 1969 by O. Martio, S. Rickman and J. Väisälä, who found another approach to quasi-regular mappings. Their approach uses techniques and tools from the theory of spatial quasi-conformal mappings and is based, in part, on Reshetnyak's fundamental results. The tools they use in their 1969–1972 work involve the modulus of a family of curves (the extremal length) and capacities of condensers. Their work showed the power of the method of the modulus of a family of curves, made these mappings more widely known, and led to a series of important results (cf. below). Some of the topics considered for quasi-regular mappings include: i) stability theory, which refers to the study of $K$-quasi-regular mappings with dilatation $K > 1$ close to one, see [a6]; ii) value-distribution theory, such as the analogue of the Picard theorem and its refinements, see [a7]; iii) non-linear potential theory (cf. also Potential theory) and connection of quasi-regular mappings to partial differential equations, see [a2]; iv) geometric analysis, differential forms, and applications to elasticity theory, see [a3]; v) conformal invariants and quasi-regular mappings, see [a9], [a1]. [a1] G.D. Anderson, M.K. Vamanamurthy, M. Vuorinen, "Conformal invariants, inequalities and quasiconformal mappings" , Wiley (1997) (book with a software diskette) [a2] J. Heinonen, T. Kilpeläinen, O. Martio, "Nonlinear potential theory of degenerate elliptic equations" , Oxford Math. Monographs , Oxford Univ. Press (1993) [a3] T. Iwaniec, "$p$-Harmonic tensors and quasiregular mappings" Ann. of Math. , 136 (1992) pp. 39–64 [a4] O. Lehto, K.I. Virtanen, "Quasiconformal mappings in the plane" , Grundl. Math. Wissenschaft. , 126 , Springer (1973) (Edition: Second) [a5] Yu.G. Reshetnyak, "Space mappings with bounded distortion" , Transl. Math. Monogr. , 73 , Amer. Math. Soc. (1989) (In Russian) [a6] Yu.G. Reshetnyak, "Stability theorems in geometry and analysis" , Kluwer Acad. Publ. (1994) [a7] S. 
Rickman, "Quasiregular mappings", Ergeb. Math. Grenzgeb., 26, Springer (1993) [a8] J. Väisälä, "Lectures on $n$-dimensional quasiconformal mappings", Lecture Notes in Mathematics, 229, Springer (1971) [a9] M. Vuorinen, "Conformal geometry and quasiregular mappings", Lecture Notes in Mathematics, 1319, Springer (1988) [a10] "Quasiconformal space mappings: A collection of surveys, 1960–1990" M. Vuorinen (ed.), Lecture Notes in Mathematics, 1508, Springer (1992)
How to Cite This Entry: Quasi-regular mapping. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Quasi-regular_mapping&oldid=50458 This article was adapted from an original article by M. Vuorinen (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
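A quick worked illustration of the dilatations defined above (the example map is chosen here for concreteness and is not drawn from the entry itself): for a fixed $a \geq 1$ consider the linear stretch $f(x_1, x_2) = (a x_1, x_2)$ of $\mathbf{R}^2$. Its derivative is constant, $f^{\prime}(x) = \operatorname{diag}(a, 1)$, so \begin{equation*} |f^{\prime}(x)| = a, \qquad l(f^{\prime}(x)) = 1, \qquad J_f(x) = a. \end{equation*} The defining inequalities become $a^2 \leq K a$ and $a \leq K \cdot 1$, hence $K_{\text{O}}(f) = K_{\text{I}}(f) = K(f) = a$ and $f$ is $a$-quasi-regular (being injective, it is in fact quasi-conformal). For $a = 1$ the map is conformal, consistent with the case $n = 2$, $K = 1$ described above.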
CommonCrawl
Robion Kirby

Robion Cromwell Kirby (born February 25, 1938) is a Professor of Mathematics at the University of California, Berkeley who specializes in low-dimensional topology. Together with Laurent C. Siebenmann he developed the Kirby–Siebenmann invariant for classifying the piecewise linear structures on a topological manifold. He also proved the fundamental result on the Kirby calculus, a method for describing 3-manifolds and smooth 4-manifolds by surgery on framed links. Along with his significant mathematical contributions, he has over 50 doctoral students and is known for his problem list in low-dimensional topology.

Robion Kirby (Kirby in 2009)
Born: February 25, 1938, Chicago, Illinois, US
Alma mater: University of Chicago
Known for: Kirby–Siebenmann class, Kirby calculus
Awards: Oswald Veblen Prize in Geometry (1971), NAS Award for Scientific Reviewing (1995)
Fields: Mathematics
Institutions: University of California, Berkeley
Doctoral advisor: Eldon Dyer[1]
Doctoral students: Selman Akbulut, Stephen Bigelow, Tim Cochran, David Gauld, Robert Gompf, Elisenda Grigsby, Tomasz Mrowka, Yongbin Ruan, Martin Scharlemann

He received his Ph.D. from the University of Chicago in 1965. He soon became an assistant professor at UCLA. While there he developed his "torus trick" which enabled him to solve, in dimensions greater than four (with additional joint work with Siebenmann), four of John Milnor's seven most important problems in geometric topology.[2] In 1971, he was awarded the Oswald Veblen Prize in Geometry by the American Mathematical Society. In 1995 he became the first mathematician to receive the NAS Award for Scientific Reviewing from the National Academy of Sciences for his problem list in low-dimensional topology.[3] He was elected to the National Academy of Sciences in 2001. In 2012 he became a fellow of the American Mathematical Society.[4] Kirby is also the President of Mathematical Sciences Publishers, a small non-profit academic publishing house that focuses on mathematics and engineering journals.

Books
• Kirby, Robion C.; Siebenmann, Laurence C. (1977). Foundational Essays on Topological Manifolds, Smoothings, and Triangulations (PDF). Annals of Mathematics Studies. Vol. 88. Princeton, NJ: Princeton University Press. ISBN 0-691-08191-3. MR 0645390.
• Kirby, Robion C. (1989). The topology of 4-manifolds. Lecture Notes in Mathematics. Vol. 1374. Berlin, New York: Springer-Verlag. doi:10.1007/BFb0089031. ISBN 978-3-540-51148-9. MR 1001966.[5]

References
1. Robion Kirby at the Mathematics Genealogy Project
2. S. Ferry. Lecture notes in geometric topology (PDF).
3. "NAS Award for Scientific Reviewing". National Academy of Sciences. Archived from the original on 18 March 2011. Retrieved 27 February 2011.
4. List of Fellows of the American Mathematical Society, retrieved 2013-01-27.
5. Taylor, Lawrence R. (1991). "Review: Robion C. Kirby, The topology of 4 manifolds". Bull. Amer. Math. Soc. (N.S.). 24 (2): 466–471. doi:10.1090/s0273-0979-1991-16068-4.

External links
• Kirby's home page.
• Kirby's list of problems in low dimensional topology. (This is a large 380 page gzipped ps file.)
• Biographical notes from the Proceedings of the Kirbyfest in honour of his 60th birthday in 1998.
• Robion Kirby at the Mathematics Genealogy Project
• Video Lectures by Kirby at Edinburgh
Wikipedia
Meta-analysis of the association between second-hand smoke exposure and ischaemic heart diseases, COPD and stroke Florian Fischer1 & Alexander Kraemer1 Second-hand smoke (SHS) is the most important contaminant of indoor air in first world countries. The risks associated with SHS exposure are highly relevant, because many people are regularly, and usually involuntarily, exposed to SHS. This study aims to quantify the effects of SHS exposure. Therefore, its impact on ischaemic heart diseases (IHD), chronic obstructive pulmonary diseases (COPD) and stroke will be considered. A systematic literature review was conducted to identify articles dealing with the association between SHS and the three outcomes IHD, COPD and stroke. Overall, 24 articles were included in a meta-analysis using a random effects model. Effect sizes stratified for sex and for both sexes combined were calculated. The synthesis of primary studies revealed significant effect sizes for the association between SHS exposure and all three outcomes. The highest RR for both sexes combined was found for COPD (RR = 1.66, 95 % CI: 1.38–2.00). The RR for both sexes combined was 1.35 (95 % CI: 1.22–1.50) for stroke and 1.27 (95 % CI: 1.10–1.48) for IHD. The risks were higher in women than in men for all three outcomes. This is the first study to calculate effect sizes for the association between SHS exposure and the disease outcomes IHD, COPD, and stroke at once. Overall, the effect sizes are comparable with previous findings in meta-analyses and therefore assumed to be reliable. The results indicate the high relevance of public health campaigns and legislation to protect non-smokers from the adverse health effects attributable to SHS exposure. Second-hand smoke (SHS) still remains the most important contaminant of indoor air in first world countries [1]. Despite significant reductions within the past decades, a considerable part of the global population is regularly, and usually involuntarily, exposed to SHS. Therefore, it is a highly important risk factor for the total population. SHS exposure may lead to several chronic conditions, which are highly relevant in terms of morbidity and mortality for a population's health [2]. There is a broad scientific consensus that SHS exposure is linked to carcinogenesis, in particular lung cancer. Furthermore, SHS has been linked to most diseases which are caused by active smoking [3–7]. This association is comprehensible due to the more than 50 carcinogens that have been identified in SHS [8]. Several mechanisms may lead to an increased likelihood of adverse effects in the cardiovascular and respiratory system. These mechanisms may cause a reduction in vascular flow and therefore the development of atherosclerosis [8, 9]. The mechanisms by which SHS exposure increases the risk of heart disease are multiple and interact with each other [10]. In comparison with lung cancer, there is one important difference in the association between SHS exposure and ischaemic heart diseases (IHD): for lung cancer, adverse health effects result from long-term exposure, whereas for other diseases, such as IHD, these effects are not merely long-term and chronic but also acute [11–15]. The effects of even brief passive smoking are often nearly as great as (chronic) active smoking [10, 16, 17]. Evidence of adverse health effects attributable to SHS exposure Research focused on the associations between SHS exposure and lung cancer first [18]. 
But subsequently other outcomes, such as IHD [19–21], respiratory diseases [22, 23] and stroke [24–26] were also included in the research. Beginning in 1984, observational studies started to point out the association between SHS exposure and IHD. This seems to be the most important outcome attributable to SHS exposure, because the effects on cardiovascular diseases are obvious even at low doses of SHS exposure [19, 27] and because IHDs are much more frequent than lung disease. Because IHD is so prevalent, even a small increase in risk associated with SHS exposure will have a substantial public health impact [28]. Extensive epidemiological research spanning a period of 25 years has indicated that SHS exposure increases the risk of IHD by 25-30 % [2, 10, 17, 19–21, 29], and this was also concluded by the Institute of Medicine [30]. The effects still remain if other factors such as dietary intake, socio-economic status, and health-care use are included in the analysis [31]. Furthermore, a dose–response relationship between the level of SHS exposure and the occurrence of IHD was observed [32]. The reported RR of 1.3 (indicating a 30 % excess risk) for the association between SHS exposure and IHD that has been described in several meta-analyses [12, 19, 20, 33, 34], is quite large compared to active smoking. The excess risk for regular SHS exposure is about one third of that smoking 20 cigarettes per day, although the total exposure to tobacco smoke is only 1 % of that from 20 cigarettes per day [4, 32]. Assuming a linear dose–response relationship would lead to an expected excess risk associated with SHS exposure of only 0.8 % (1 % of the 80 % excess risk from smoking 20 cigarettes per day) [35]. Active smoking is the most important risk factor for chronic obstructive pulmonary diseases (COPD). Almost 85-90 % of COPD related mortality is attributable to active cigarette smoking. However, it is also suggested that 10-15 % of COPD cases are attributable to other risk factors such as SHS exposure, occupational exposures, and genetic factors [22, 36]. Since environmental tobacco smoke contains potent airway irritants, SHS exposure could lead to chronic airway irritation, inflammation, and obstruction [37, 38]. Nevertheless, up to now the causal association between SHS exposure and COPD has received limited attention in epidemiological studies. The first studies focusing on the association between SHS exposure and COPD faced several limitations. First of all, most studies are based on self-reports and secondly, different methods for defining COPD were used. Therefore, the reported effects of passive smoking on lung function are small and partially inconsistent [22, 39–41]. Comparable to COPD, the relationship between SHS exposure and stroke was not verified for a long time [8, 42, 43]. In 2014, stroke was included as a condition that is causally linked to SHS exposure in the Surgeon General's Report [44]. After several studies provided overall inconsistent results regarding the association between SHS exposure and stroke [25, 26, 43, 45–48], a meta-analysis of 20 studies indicated a strong dose-dependent association between SHS exposure and stroke [49]. Study objective and research question Tobacco use is one of the most important modifiable risk factors for several adverse health effects. Nevertheless, the effects of SHS exposure on health have not yet been fully recognized in public health policies [31, 50]. 
Although several studies have accounted for the (causal) associations between SHS exposure and disease conditions, some results are still inconsistent. In order to implement demand-actuated and successful strategies to protect the public from adverse health effects attributable to SHS exposure, it is necessary to provide evidence-based information about the magnitude and reliability of associations between SHS exposure and health outcomes. Therefore, this study aims to quantify the effect sizes of SHS exposure for three major outcomes: IHD, COPD, and stroke. Based on the results of a systematic review, a meta-analysis was performed to summarize the results of single studies in one effect size for each of the three outcomes. The main goals of the meta-analysis were: 1) to test whether the study results are homogeneous and, if so, 2) to obtain a combined estimator of the effect magnitude for the association between SHS exposure and the outcomes IHD, COPD and stroke. Although some meta-analyses have dealt with the association between SHS exposure and IHD as well as stroke, this is the first meta-analysis on the association between SHS exposure and COPD. Furthermore, it is the first study that allows a comparison of the effects for the selected outcomes, because the same methodology was used for the systematic literature review and meta-analysis. Systematic literature review As a first step, a systematic literature review was performed in PubMed according to the procedure and requirements described in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement [51]. The aim of the systematic review was to identify articles dealing with the association between SHS and the three outcomes (IHD, COPD, and stroke). All relevant literature in English or German language was included without any restrictions regarding the year of publication. The search was restricted to studies on the effects of SHS exposure in humans. The search in PubMed was completed in July 2015. Therefore, the systematic literature review contained articles published between 1984 and 2014. The following search algorithm was performed: (second hand smok* [Title/Abstract] OR second-hand smok* [Title/Abstract] OR passive smok* [Title/Abstract] OR "tobacco smoke pollution" [Title/Abstract] OR environmental tobacco smok* [Title/Abstract]) AND (heart disease* [Title/Abstract] OR COPD [Title/Abstract] OR chronic obstructive pulmonary disease* [Title/Abstract] OR obstructive pulmonary disease* [Title/Abstract] OR chronic obstructive airways disease* [Title/Abstract] OR COAD [Title/Abstract] OR chronic obstructive lung disease* [Title/Abstract] OR COLD [Title/Abstract] OR stroke*[Title/Abstract] OR apople*[Title/Abstract]) Using the search algorithm under the above-mentioned filters led to the identification of 403 records. Among them, 221 were attributable to a combination of the search terms regarding exposure and the outcome IHD, 178 further articles were attributable to the search terms on COPD and 47 on stroke.Footnote 1 After the screening of title and abstract, 307 of these articles were excluded, because they did not fit the study's objective. Therefore, 96 full-texts were assessed for eligibility. 
According to this assessment, 71 articles were excluded for the following reasonsFootnote 2: survey/cross-sectional study (9) (systematic) review (28) meta-analysis (5) no effect sizes provided (24) other outcomes observed (5) other exposures considered (4) A manual search was conducted through the reference lists of all full-texts, which led to the inclusion of eight further articles. Finally, 33 articles were included in the qualitative analysis of the systematic review. Before including the studies in the quantitative synthesis in the form of a meta-analysis, a quality assessment was conducted. This quality assessment, which is described in more detail in the following section, led to the exclusion of further 9 studies. The process of the systematic review is presented in a flow chart (Fig. 1). Flow chart for study selection Quality assessment A checklist for the quality assessment was compiled on the basis of already existing and well-established instruments, such as the PRISMA guidelines [51] and instruments developed for observational studies [52–54]. The quality score developed for this study consists of three categories, with four items each. The first category was introduced to identify a selection bias. Therefore, the selection of cases and response rate are focused here. Since both case–control and cohort studies were included in the systematic review, two quality scales were developed which differed slightly in the aspects regarding recruitment of the study population. The second category deals with the assessment of misclassification bias. It is asked 1) whether the exposure evaluation was made in relation to the time of diagnosis, 2) whether the exposure was validated by a biomarker, 3) whether specific disease criteria were provided, and 4) whether the disease was validated by histology or another gold standard. The third category focuses on aspects of data analysis. One item was integrated to detect whether or not an adjustment of variables was performed. Additionally, studies with power calculations and sufficient sample size scored higher. A sample size was defined a priori as sufficient if at least 100 subjects were included in the analysis and a minimum of 20 cases occurred, in order to exclude studies with low precision. The last criterion was about the provision of exact p-values and confidence intervals (CI). Each item of the quality score answered with "yes" received one point, and all items with the labels "uncertain/not reported" or "no" received no points. All points were summed which allows a maximum score of 12 points. A priori, it was decided that all studies with an overall score of 7 points or lower (n = 9) would be excluded from the meta-analysis. Calculation of relative risks To allow for comparability between the results of the single studies, those results in which regular SHS exposure was investigated were focused upon. The definition of regular exposure varied between studies. Most commonly, spousal smoking or being exposed to about 20 cigarettes or more per day was interpreted as regular SHS exposure. In case studies divided between SHS exposure at home or at work, only the results for exposure at home were chosen. Nevertheless, several studies only provided information for SHS exposure at home and work combined. The RR from the cohort studies were directly transferred to the summary of studies presented in Table 1. For case–control studies RR had to be derived from the provided odds ratios (OR). 
This was done for reasons of comparability of the results and because a single measurement unit was needed for the meta-analysis. For the calculation of RR based on OR an approach introduced by Barendregt [55] was selected. This approach describes the OR as a function of the RR, the average risk of disease in the population (s), and the prevalence of the risk factor (p). The equation uses the assumptions of the common definitions of RR and OR, and the observation that the average risk of a disease in any population is a linear combination of the risk in the exposed and non-exposed sub-populations: $$ \mathrm{OR}=\frac{\mathrm{RR}\cdot \left(1-\frac{s}{p\cdot \mathrm{RR}+1-p}\right)}{1-\frac{\mathrm{RR}\cdot s}{p\cdot \mathrm{RR}+1-p}} $$
Table 1 Systematic literature review – Overview of all studies
The reciprocal conversion from OR to RR requires a numerical optimization procedure. The detailed derivation of the equation and the Excel add-in for the calculation of RR are provided by Barendregt [55]. The provided or calculated RRs from the primary studies with high methodological quality were used for the meta-analysis. The meta-analysis was conducted in MIX 2.0 Pro, which is a statistical add-in to perform meta-analysis with Microsoft Excel [56]. As a first step, the RRs and CIs from all the studies were converted into the logarithm of the RR (log (rr)) and standard errors (se). This information, including the sample size, was used to calculate effect sizes for each of the three outcomes, stratified by sex. The precision was set to an alpha-level of 0.05 and a z-distribution as the standard distribution was chosen. For the analysis, a generic inverse-variance method random effects model was chosen, to provide estimates for the association between SHS exposure and the outcomes IHD, COPD and stroke. In this model, weight is given to each study according to the inverse variance of the effect, to minimize uncertainty about the summarized effect estimates, according to the widely used approach developed by DerSimonian and Laird [57]. The random effects model was chosen, because the data were expected to be heterogeneous across studies. The advantage of a random effects model is that it incorporates variation in the underlying effect sizes between studies. It is assumed that each single study has its own (true) effect and that there is a random distribution of these effects around a central effect [58]. In contrast, using a fixed effect model under conditions of heterogeneity, the CI for the overall effects reflects the random variation within each study, but not the potential heterogeneity across studies, which would lead to artificially narrow CIs [59]. Furthermore, random effects models are more sensitive to publication bias, due to the larger relative weight given to smaller studies. This implies that a random effects model may still be worth considering as it cannot be assumed that true homogeneity exists across the studies [60]. In order to consider the sensitivity of results, potential publication and study bias were assessed visually using a heterogeneity funnel plot (see Additional file 1). Additionally, heterogeneity was quantified using two statistical measures: the Q- and I2-statistics each capture an aspect of the extent of heterogeneity between the studies.
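For readers without access to the Excel add-in, the inversion of Barendregt's equation can be reproduced with a few lines of code. The sketch below is an illustrative reimplementation, not the add-in itself; the example values of OR, s and p are made up, and it assumes parameter combinations for which the implied exposed risk stays below 1.

```python
from scipy.optimize import brentq

def or_from_rr(rr, s, p):
    """Barendregt's equation: OR as a function of RR, the average disease
    risk s in the population, and the prevalence p of the risk factor."""
    r0 = s / (p * rr + 1 - p)          # risk in the non-exposed sub-population
    return (rr * (1 - r0)) / (1 - rr * r0)

def rr_from_or(odds_ratio, s, p, lo=1e-6, hi=100.0):
    """Numerically invert the equation: find the RR whose implied OR matches
    the reported OR (the 'reciprocal conversion' described in the text)."""
    return brentq(lambda rr: or_from_rr(rr, s, p) - odds_ratio, lo, hi)

# hypothetical example: reported OR = 1.50, disease risk s = 0.05, exposure prevalence p = 0.30
print(round(rr_from_or(1.50, s=0.05, p=0.30), 3))
```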
The Q-statistic is the sum of the weighted squared differences between each individual study's estimate and the overall (inverse variance) summary estimates. This statistic follows a χ2-distribution with k–1 degrees of freedom, under the null-hypothesis of homogeneity. The Q-test is defined by Hedges and Olkin [61] as: $$ \mathrm{Q}={\displaystyle \sum }{\mathrm{w}}_{\mathrm{i}}\cdot {\left({\mathrm{T}}_{\mathrm{i}}-\overline{\mathrm{T}}\right)}^2 $$ In this equation, w i is the weighting factor for the ith study, T i is the ith effect estimate in a collection of k studies and \( \overline{T} \) is the estimate of the mean effect size, which consists of weighting every effect estimate Ti by its inverse variance. A p-value < 0.1 for the Q-statistic indicates heterogeneity [61]. Afterwards, the I2 is derived from the Q-statistic. The I2-index measures the extent of true heterogeneity by dividing the difference between the results of the Q test and its degrees of freedom by the Q-value itself, and multiplying by 100: $$ {\mathrm{I}}^2=\frac{\mathrm{Q}-\left(\mathrm{k}-1\right)}{\mathrm{Q}}\cdot 100 $$ The I2-index quantifies the proportion of inconsistency among the study results. It is commonly expressed as a percentage and is therefore interpreted as the percentage of the total variability in a set of effect sizes due to between-study variation that is not attributable to random sampling from a fixed parameter [62]. Higgins and Thompson [62] proposed a tentative classification of I2-values to help in the interpretation of the heterogeneity's magnitude: according to this classification, percentages of around 25 %, 50 % and 75 % would mean low, medium, and high heterogeneity, respectively. Studies of SHS exposure and selected outcomes Overall, 33 studies were included in the systematic review. The first article was published in 1988, and the most recent in 2013. Several of the articles provided information on more than one outcome. Most articles described the effect of SHS exposure on IHD (n = 20). In 12 articles stroke was investigated as an outcome and eight articles focused on COPD (Table 1). The spatial distribution of the study locations of all studies identified by the systematic review is quite equal: nine studies were performed in Asia (mainly in China and Hong Kong), Europe (mainly in Great Britain and northern European countries), and the USA. A further five studies were located in Australia and/or New Zealand and one in South America. Half of the articles described the results of a case–control study (n = 17) and the other half used a cohort design (n = 16). In almost all studies, information on SHS exposure was based on self-reporting (n = 30), while two studies performed a cotinine assessment for measuring SHS exposure and one study used a combination of self-reporting and cotinine assessment (Table 1). Usually, never-smokers or non-smokers were studied. However, some studies did not provide any information on the smoking status of subjects or included active smokers as well as non-smokers. In these cases, smoking status was controlled for in the analyses. All but two studies [63, 64] controlled for several factors. The study samples varied between 309 599 never-smokers in a cohort study in the USA, dealing with the association between SHS exposure and IHD [65] and a case–control study with 56 female IHD patients and 136 female controls in China [66]. 
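Before turning to the outcome-specific effect sizes, the synthesis steps described in the Methods (inverse-variance weighting, the DerSimonian–Laird random effects model, and the Q- and I2-statistics) can be made concrete with a short sketch. It is an illustrative stand-in for the MIX 2.0 Pro add-in rather than the software actually used, and the three input studies are hypothetical.

```python
import numpy as np
from scipy.stats import chi2, norm

def random_effects_pool(rr, ci_low, ci_high, alpha=0.05):
    """DerSimonian-Laird random-effects pooling of relative risks.

    rr, ci_low, ci_high : per-study RR and 95 % confidence limits.
    Returns the pooled RR with CI, plus Cochran's Q (with p-value) and I^2.
    """
    log_rr = np.log(rr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * norm.ppf(0.975))  # SE recovered from the 95 % CI
    w = 1.0 / se**2                                  # fixed-effect (inverse-variance) weights

    # Cochran's Q: weighted squared deviations from the inverse-variance mean
    t_bar = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - t_bar)**2)
    k = len(rr)
    p_q = chi2.sf(q, k - 1)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance tau^2 and random-effects weights
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    pooled_se = np.sqrt(1.0 / np.sum(w_star))
    z = norm.ppf(1 - alpha / 2)
    return {
        "RR": np.exp(pooled),
        "CI": (np.exp(pooled - z * pooled_se), np.exp(pooled + z * pooled_se)),
        "Q": q, "p_Q": p_q, "I2": i2, "tau2": tau2,
    }

# hypothetical input: three studies with RR and 95 % CI
print(random_effects_pool(rr=[1.30, 1.55, 1.10],
                          ci_low=[1.05, 1.10, 0.85],
                          ci_high=[1.61, 2.18, 1.42]))
```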
Effect sizes for SHS exposure and selected outcomes SHS and ischaemic heart disease The RR for the single studies dealing with the association between SHS and IHD are presented in Table 2. From the 20 studies on IHD in the systematic review, five were excluded because of low methodological quality according to the quality assessment. Additionally, the Greek study from Panagiotakos et al. [67] was excluded in the meta-analysis, because the same data was used in the study by Pitsavos et al. [50], in which the analysis was stratified by place of exposure. This led to 14 studies on the effects of SHS exposure on IHD. In 6 of these studies, information summarized for both sexes were provided (n = 24 903). The RR for the association between SHS and IHD was either stratified by sex or only observed for one sex in six studies for men (n = 8208) and nine for women (n = 111 533). Table 2 Effect sizes–SHS and ischaemic heart disease The synthesis of all the studies included in the meta-analysis results in a RR of 1.27 (95 % CI: 1.10 – 1.48) for both sexes together. The RR was much higher for women (RR = 1.50, 95 % CI: 1.31 – 1.72) than for men (RR = 1.06, 95 % CI: 0.96 – 1.19). None of the studies showed significant results for men regarding the association between SHS exposure and IHD. The studies from McGhee et al. [24], Pitsavos et al. [50] and Rosenlund et al. [68] had the highest impact on the synthesis, because these three studies were weighted with 88 % overall. The results of Ciruzzi et al. [69], with a very broad confidence interval (RR = 2.04, 95 % CI: 0.99–12.52), contributed only to a small extent to the overall RR due to the weighting factor of 1.39 %. For men, the study by McElduff et al. [70] contributed most to the synthesis result (46.59 %). For women, several studies contributed to more or less the same extent to the synthesis (Table 2, Fig. 2). Forest plot–SHS and ischaemic heart disease Cochran's Q-test revealed no heterogeneity, because the p-value was larger than 0.1 for all three subgroup syntheses. This is confirmed by the I2-statistic, which quantifies the assumption between the three different subgroup syntheses. According to the results of these tests, no heterogeneity was observed for men (I2 = 0 %), and only a small but negligible heterogeneity for the studies focusing on women (I2 = 16.00 %). I2 was highest for studies including both sexes (I2 = 30.78 %), because the RR obviously differed for men and women (Table 2). SHS and chronic obstructive pulmonary disease Only five studies investigating the association between SHS exposure and COPD were included in the meta-analysis, after three further studies were excluded because of low quality. Overall, 28 965 participants were included in these studies, with more than half of them (n = 15 379) being investigated in one Chinese cohort study [71]. In three studies the RRs for the association between SHS and COPD were calculated for both sexes combined (n = 21,558). Only McGhee et al. [24] provided information stratified for men (n = 3,098) and women (n = 2,503) and two further studies investigated the association between SHS and COPD in a female-only study population (Table 3). Table 3 Effect sizes–SHS and COPD The large study by Yin et al. [71] accounted for almost half (49.49 %) of the weighting factor for both sexes. Two further studies, by Chan-Yeung et al. [72] and McGhee et al. [24], accounted for 25 % each for the weighting factor in the subgroup of both sexes. 
For the female subgroup, the weighting factors were distributed in a similar way for the three studies included, although based on different studies. The synthesis for both sexes is based on three studies with consistent and significant results. A RR of 1.66 with a comparatively small confidence interval (95 % CI: 1.38–2.00) was calculated. Since the synthesis for men is based on only one study, the RR of 1.50 (95 % CI: 0.96–2.28) was inherited. For women, a higher RR was identified (RR = 2.17, 95 % CI: 1.48–3.18) than for men (Table 3, Fig. 3). Forest plot–SHS and COPD The heterogeneity between studies was assessed for the subgroups of both sexes and for women. The Q-statistic and its p-value suggested no heterogeneity between study results. The I2 for both sexes was 0 % and for women it was 22.95 %, which indicates no or only small heterogeneity (Table 3). SHS and stroke The results for stroke are based on seven studies, after five studies were excluded due to the quality assessment. Five studies provided information combined for both sexes (n = 52,263). In four studies the analysis was stratified for sex. This leads overall to 22 905 male study participants. Two large additional studies focused only on women, leading overall to 162 197 female study participants, which allows for investigating the association between SHS exposure and stroke. For the synthesis of all three subgroups, the study performed by McGhee et al. [24] is of particular importance due to its high weighting factor (Table 4). Table 4 Effect sizes–SHS and stroke The synthesis for the three stroke subgroups differs from the two outcomes for IHD and COPD described above. In this case, the RR for the association between SHS and stroke is 1.35 (95 % CI: 1.22–1.50) for both sexes combined. The analysis separated for sex led to a slightly higher RR for men (RR = 1.40, 95 % CI: 1.09–1.81) as well as for women (RR = 1.43, 95 % CI: 1.28–1.61) compared to the synthesis for both sexes (Table 4, Fig. 4). This is due to the fact that the studies included in the meta-analysis in which both sexes are considered in a combined effect size are not exclusively the same as those which show results for men or women separately. One study only gives results for both sexes combined [25] and two studies only give results for women [73, 74]. Forest plot–SHS and stroke The Q-statistic indicated no heterogeneity, although the p-value for the Q-statistic for men was 0.184 and therefore close to the border indicating heterogeneity. According to the I2, the studies for women (I2 = 0 %) as well as for both sexes (I2 = 2.08 %) are homogeneous. For men, a low to medium heterogeneity was observed (I2 = 37.95 %) (Table 4). In this study, the effect sizes for IHD, COPD and stroke attributable to SHS exposure were estimated. For all three outcomes, the effect sizes were larger for women than for men. In men, statistically significant results were revealed only for the association between SHS exposure and stroke. According to the calculated effect sizes for all three disease entities, the risk factor of SHS exposure seems to be particularly important for COPD. A 66 % excess risk of COPD was calculated for people exposed to SHS for both sexes combined. For stroke (RR = 1.35, 95 % CI: 1.22–1.50) and IHD (RR = 1.27, 95 % CI: 1.10–1.48), the RR was considerably lower.
The calculated association between SHS exposure and IHD is consistent with several meta-analyses calculating the overall RR of coronary heart diseases associated with SHS exposure among non-smokers. In a meta-analysis including 18 studies (10 prospective cohort studies and eight case–control studies), the estimated RR was 1.25 (95 % CI: 1.17–1.32) [20]. A meta-analysis by Wells [12] focused on the association between IHD mortality and SHS exposure. According to this study, a RR of 1.23 (95 % CI: 1.12–1.35) was calculated for both sexes combined (men: RR = 1.25, 95 % CI: 1.03–1.51; women: RR = 1.23, 95 % CI: 1.11–1.36) [12]. These estimations are comparable to the calculation of the effect size for both sexes combined. Nevertheless, the study by Wells [12] provided effect sizes which are almost equal for both sexes. In our study the results for the association between SHS exposure and IHD indicated much higher effect sizes for women. Wells [12] also calculated the effect size associating IHD morbidity with SHS exposure. Here, the RR for women was 1.51 (95 % CI: 1.16–1.97), which is comparable to the estimation of the results of our study. Therefore, it seems that the associations for IHD morbidity and mortality differ substantially, and this leads to differences in the effect sizes estimated in this study compared to previous ones. The estimation of the effect size for the association between SHS exposure and COPD cannot be compared to other meta-analyses, because this is the first attempt to calculate a synthesis for the primary studies dealing with this association. Up to now, the number of studies on SHS exposure as a risk factor for adult onset COPD is small compared with the number on the adverse health effects of SHS exposure on childhood respiratory symptoms and diseases [22]. The estimation for both sexes combined led to a RR of 1.66 (95 % CI: 1.38–2.00), which is higher than the estimation for the association between SHS exposure and IHD. This also applies to the gender stratified estimations: in women a RR of 2.17 was calculated with a fairly broad confidence interval (95 % CI: 1.48–3.18). This can be explained by the fact that three of the total of five studies dealt with the association in women. The studies by Wu et al. [75] (RR = 3.12, 95 % CI: 1.56–6.50) and McGhee et al. [24] (RR = 2.59, 95 % CI: 1.30–5.27) in particular contributed to the broad confidence interval. Therefore, the few existing studies on SHS exposure and COPD differ considerably, although the results indicate a positive association. No judgement on the consistency of the results of primary studies on the association between SHS exposure and COPD for men is possible, because only the study by McGhee et al. [24] provided results for the male subgroup (RR = 1.50, 95 % CI: 0.96–2.28). The estimations for the association between SHS exposure and stroke (RR = 1.35, 95 % CI: 1.22–1.50) are comparable with previous meta-analyses. In our study, the effect sizes showed a significantly increased risk for people exposed to SHS in both sexes, with RRs that are almost equal between men (RR = 1.40, 95 % CI: 1.09–1.81) and women (RR–1.43, 95 % CI: 1.28–1.61). Lee and Forey [76] provided a comprehensive review of epidemiological evidence relating stroke to SHS exposure in lifelong non-smokers. 
Overall, including 16 studies (seven prospective cohort studies, six case–control studies and three cross-sectional studies) which used current spousal smoking (or nearest equivalent) as the exposure index led to an overall estimate of 1.25 (95 % CI: 1.16–1.36), which is slightly lower than our calculations. The study results also indicated no significant heterogeneity and no differences between men and women [76], which is consistent with our study results. Eight studies in the meta-analysis provided information regarding a possible dose–response relationship between SHS exposure and stroke. According to this, the synthesis for the highest level of exposure led to a RR of 1.56 (95 % CI: 1.34–1.82). Another meta-analysis [49], included 20 studies (10 cohort studies, six case–control studies and four cross-sectional studies) published between 1984 and 2010. All of these reported results for non-smokers, who were mainly defined as never-smokers, although some studies also included ex-smokers or infrequent current smokers. Eleven studies in the meta-analysis by Oono et al. [49] measured the dose of SHS exposure, which was either defined as the number of smokers, cigarettes per day, hours per week, pack years, or cotinine concentration and score. Our calculations for the effect size of the increased risk of stroke attributable to SHS exposure (RR = 1.35, 95 % CI: 1.22–1.50) are in line with the results of SHS exposure of either 10 cigarettes per day (RR = 1.31, 95 % CI: 1.12–1.54) or 15 cigarettes per day (RR = 1.45, 95 % CI: 1.19–1.78) [49]. Dose–response relationship The results of the primary studies that were included in the meta-analysis on the associations between SHS exposure and IHD as well as stroke indicate a distinct dose–response relationship. Even low levels of SHS exposure increase the risk of adverse health effects, indicating that there is no safe level of exposure [42, 49]. The effects of SHS exposure are lower than those of active smoking, but it has been consistently shown that the effects of SHS exposure on the cardiovascular system are much larger than might be expected from a comparison of the doses of toxins delivered to active and passive smokers. Therefore the effects of SHS are estimated to be on average 80-90 % as harmful as those of active smoking [10]. The effects of a dose–response relationship between SHS exposure and adverse health outcomes were not depicted in this study, because it focused on regular exposure to SHS. Although the dose–response function might supply important additional information, Sauerbrei et al. [77] argued that aggregated data are too limited to perform a meta-analysis including a dose–response analysis. Nevertheless, regular SHS exposure, irrespective of the dose is still an important risk factor, because it may lead to both acute and chronic diseases. The stratification for sex performed in this study is highly relevant, because the effect sizes as well as the prevalence of diseases and the prevalence of SHS exposure differ between the sexes. Until now, it has been largely men who have been considered in many studies dealing with IHD, because of their higher prevalence of coronary diseases. In most parts of the world women are at least 50 % more likely to be exposed to SHS than men [78]. Until now, only a few studies have investigated possible mechanisms underlying sex differences in adverse health outcomes such as IHD related to SHS exposure. 
It is assumed that the anti-oestrogenic effect of cigarette smoking–and therefore also the exposure to SHS–may be at least partly related to the increased risk of IHD in young females smokers [79]. Furthermore, a study by Geisler et al. [80] indicated that in smoking women undergoing oestrogen replacement therapy, plasma levels of oestrogen were 40-70 % lower than in non-smoking women. Additionally, a decrease in both oestradiol and testosterone concentrations in smoking men has been reported [81]. Therefore, hormonal factors seem to considerably influence vulnerability due to SHS exposure. This might be one explanation for gender differences in the effects of SHS exposure [82]. There are methodological restrictions in data quality of primary studies, which have to be considered when interpreting the results. Among these, particularly the differences in study designs and misclassification bias due to different definitions and measurements of SHS exposure have to be mentioned. Another limitation of major importance in the context of a systematic literature review is a possible publication bias, although a review of published and unpublished studies on the health effects of SHS exposure showed no evidence of publication bias against statistically non-significant results in the peer-reviewed literature [83]. Another limitation in the identification of primary studies on the association between SHS exposure and the three selected diseases leads back to the decision to perform the systematic literature search only in one literature database, PubMed. Therefore, some studies might have been missed, although an additional manual search in the reference lists of publications was performed, which led to only eight further articles. A broader search strategy with another search algorithm may have led to further articles eligible for the meta-analysis. The quality assessment led to the exclusion of nine studies. Although the development of criteria for the quality assessment was based on established instruments, different criteria may have led to the exclusion of more or fewer articles, depending on their strictness. The quality checklist was used as a scale, although the criticism has been made that these scales do not provide a transparent estimation of the degree of bias [84]. Furthermore, quality scores neglect information about individual items and no empirical basis for the different weights that are implicitly given to each item exists [85]. Nevertheless, this approach was chosen, to allow for the exclusion of studies with low methodological quality. Since only cohort studies and case–control studies were selected, a large number of studies had to be excluded either during the screening of titles and abstracts or during the assessment of full-texts. Also, comparatively small studies with low effect sizes or rather broad confidence intervals were included in the meta-analysis. These studies carried a smaller weight in the synthesis of results. To make the results of the primary studies comparable, all the OR provided in case–control studies were re-calculated into RR using quite a conservative approach, which is more likely to underestimate the true association. Therefore, overall, the effect sizes calculated in the meta-analysis represent a conservative estimate. Besides the identification and data quality of primary studies, the combination of research results from multiple studies performed in the meta-analysis faces several limitations and uncertainties. 
Although this meta-analysis indicates only low heterogeneity, diversity between studies, for example due to different populations (e.g., countries, age groups), inclusion and exclusion criteria (e.g., more severe patients), study designs (e.g., inadequate follow-up of lost patients), statistical methods used, and various sources of bias, is still an important issue. Formal heterogeneity tests have low statistical power. In this study, the Q-statistic and the I² statistic were used. A shortcoming of the Q-statistic is that it has low power to detect true heterogeneity among studies when the meta-analysis includes only a small number of studies [86]. The Q-statistic is useful to test for the existence of heterogeneity, but not to assess the magnitude of heterogeneity; for that, we used I². For the calculation of the effect sizes for COPD, it has to be kept in mind that the estimation for men is based on only one study. Particularly for COPD, the synthesis is based on very few studies, which limits its reliability. The combination of studies will often result in small confidence intervals, suggesting a false precision [58]. In this context, it is relevant to point out that random effects models, as used in this study, are not sufficient to explain the heterogeneity between studies, since the random effect merely quantifies an unexplained variation by estimating it [59].
Conclusion and implications
Up to now, the effects of SHS exposure on population health are still controversial, although several studies and meta-analyses have revealed comparable results on the association between regular SHS exposure and adverse health outcomes. However, further studies with sound methodological approaches, such as large prospective epidemiological studies using biomarkers for exposure assessment, are still required to determine the risks associated with SHS exposure [16, 76]. Furthermore, there is only little evidence for the effects of SHS on health-related quality of life, which is a very important parameter for well-being besides objective parameters such as morbidity and mortality. To address this research need, this study was conducted. It is the first study to have calculated effect sizes for the association between SHS exposure and the disease outcomes IHD, COPD, and stroke, stratified by sex. The effect sizes calculated in the meta-analysis are overall comparable with previous findings in meta-analyses for IHD and stroke. This suggests that the results are reliable. Although no previous meta-analysis for the association between SHS exposure and COPD is available, the results are assumed to be reliable as well, because the methodological approach in this study was the same for all three disease entities. Nevertheless, further research is needed to provide more adequate primary studies which account for confounding and other biases.
Because some articles dealt with multiple outcomes, the sum of all articles did not add up to 403. The number of articles excluded for each criterion is mentioned in brackets. The sum did not add up to 71, because some articles were excluded for multiple reasons.
Öberg M, Jaakkola MS, Woodward A, Peruga A, Prüss-Üstun A. Worldwide burden of disease from exposure to second-hand smoke: a retrospective analysis of data from 192 countries. Lancet. 2011;377(9760):139–46.
Heidrich J, Wellmann J, Heuschmann PU, Kraywinkel K, Keil U. Mortality and morbidity from coronary heart disease attributable to passive smoking. Eur Heart J. 2007;28(20):2498–502.
Vineis P, Airoldi L, Veglia F, Olgiati L, Pastorelli R, Autrup H, et al. Environmental tobacco smoke and risk of respiratory cancer and chronic obstructive pulmonary disease in former smokers and never smokers in the EPIC prospective study. BMJ. 2005;330(7486):277. PubMed Central CAS PubMed Article Google Scholar Hackshaw AK, Law MR, Wald NJ. The accumulated evidence on lung cancer and environmental tobacco smoke. BMJ. 1997;315(7114):980–8. Wells AJ. Lung cancer from passive smoking at work. Am J Public Health. 1998;88(7):1025–9. Wald NJ, Nanchahal K, Thompson SG, Cuckle HS. Does breathing other people's tobacco smoke cause lung cancer? Br Med J. 1986;293(6556):1217–22. Lee PN, Chamberlain J, Alderson MR. Relationship of passive smoking to risk of lung cancer and other smoking-associated diseases. Br J Cancer. 1986;54(1):97–105. U.S. Department of Health and Human Services. The Health Consequences of Involuntary Exposure to Tobacco Smoke. In: A Report of the Surgeon General. Atlanta, GA: U.S. Department of Health Human Services; 2006. Powell JT. Vascular damage from smoking: disease mechanisms at the arterial wall. Vasc Med. 1998;3(1):21–8. Barnoya J, Glantz SA. Cardiovascular effects of secondhand smoke: nearly as large as smoking. Circulation. 2005;111(20):2684–98. Davis JW, Shelton L, Watanabe IS, Arnold J. Passive smoking affects endothelium and platelets. Arch Intern Med. 1989;149(2):386–9. Wells AJ. Passive smoking as a cause of heart disease. J Am Coll Cardiol. 1994;24(2):546–54. Callinan JE, Clarke A, Doherty K, Kelleher C. Legislative smoking bans for reducing secondhand smoke exposure, smoking prevalence and tobacco consumption. Cochrane. 2010;4:Cd005992. Dinas PC, Metsios GS, Jamurtas AZ, Tzatzarakis MN, Wallace Hayes A, Koutedakis Y, et al. Acute effects of second-hand smoke on complete blood count. Int J Env Health Res. 2014;24(1):56–62. Dacunto PJ, Cheng K-C, Acevedo-Bolton V, Jiang R-T, Klepeis NE, Repace JL, et al. Identifying and quantifying secondhand smoke in source and receptor rooms: logistic regression and chemical mass balance approaches. Indoor Air. 2014;24(1):59–70. Ding D, Wing-Hong Fung J, Zhang Q, Wai-Kwok Yip G, Chan CK, Yu CM. Effect of household passive smoking exposure on the risk of ischaemic heart disease in never-smoke female patients in Hong Kong. Tob Control. 2009;18(5):354–7. Lippert WC, Gustat J. Clean Indoor Air Acts reduce the burden of adverse cardiovascular outcomes. Public health. 2012;126(4):279–85. Ahijevych K, Wewers ME. Passive smoking and vascular disease. J Cardiovasc Nurs. 2003;18(1):69–74. Law MR, Morris JK, Wald NJ. Environmental tobacco smoke exposure and ischaemic heart disease: an evaluation of the evidence. BMJ. 1997;315(7114):973–80. He J, Vupputuri S, Allen K, Prerost MR, Hughes J, Whelton PK. Passive smoking and the risk of coronary heart disease-a meta-analysis of epidemiologic studies. N Engl J Med. 1999;340(12):920–6. Thun M, Henley J, Apicella L. Epidemiologic studies of fatal and nonfatal cardiovascular disease and ETS exposure from spousal smoking. Environ Health Perspect. 1999;107 Suppl 6:841–6. PubMed Central PubMed Article Google Scholar Coultas DB. Passive smoking and risk of adult asthma and COPD: an update. Thorax. 1998;53(5):381–7. Jindal SK, Gupta D. The relationship between tobacco smoke & bronchial asthma. Indian J Med Res. 2004;120(5):443–53. McGhee SM, Ho SY, Schooling M, Ho LM, Thomas GN, Hedley AJ, et al. Mortality associated with passive smoking in Hong Kong. BMJ. 2005;330(7486):287–8. 
You RX, Thrift AG, McNeil JJ, Davis SM, Donnan GA. Ischemic stroke risk and passive exposure to spouses' cigarette smoking. Melbourne Stroke Risk Factor Study (MERFS) Group. Am J Public Health. 1999;89(4):572–5. Bonita R, Duncan J, Truelsen T, Jackson RT, Beaglehole R. Passive smoking as well as active smoking increases the risk of acute stroke. Tob Control. 1999;8(2):156–60. Juster HR, Loomis BR, Hinman TM, Farrelly MC, Hyland A, Bauer UE, et al. Declines in hospital admissions for acute myocardial infarction in New York state after implementation of a comprehensive smoking ban. Am J Public Health. 2007;97(11):2035–9. Helsing KJ, Sandler DP, Comstock GW, Chee E. Heart disease mortality in nonsmokers living with smokers. Am J Epidemiol. 1988;127(5):915–22. Dunbar A, Gotsis W, Frishman W. Second-hand tobacco smoke and cardiovascular disease risk: an epidemiological review. Cardiol Rev. 2013;21(2):94–100. Institute of Medicine. Secondhand smoke exposure and cardiovascular effects: making sense of the evidence. Washington, D.C.: National Academic Press; 2010. Kawachi I, Colditz GA, Speizer FE, Manson JE, Stampfer MJ, Willett WC, et al. A prospective study of passive smoking and coronary heart disease. Circulation. 1997;95(10):2374–9. Celermajer DS, Adams MR, Clarkson P, Robinson J, McCredie R, Donald A, et al. Passive smoking and impaired endothelium-dependent arterial dilatation in healthy young adults. N Engl J Med. 1996;334(3):150–4. Glantz SA, Parmley WW. Passive smoking and heart disease. Mechanisms and risk. JAMA. 1995;273(13):1047–53. Kritz H, Schmid P, Sinzinger H. Passive smoking and cardiovascular risk. Arch Intern Med. 1995;155(18):1942–8. Law MR, Wald NJ. Environmental tobacco smoke and ischemic heart disease. Prog Cardiovasc Dis. 2003;46(1):31–8. Antó JM, Vermeire P, Vestbo J, Sunyer J. Epidemiology of chronic obstructive pulmonary disease. Eur Respir J. 2001;17(5):982–94. Nikula KJ, Green FH. Animal models of chronic bronchitis and their relevance to studies of particle-induced disease. Inhal Toxicol. 2000;12 Suppl 4:123–53. California Environmental Protection Agency. Health Effects of Exposure to Environmental Tobacco Smoke. In. Office of Environmental Health Hazard Assessment: Sacramento; 1997. Eisner MD, Balmes J, Yelin EH, Katz PP, Hammond SK, Benowitz N, et al. Directly measured secondhand smoke exposure and COPD health outcomes. BMC Pulm Med. 2006;6:12. Menezes AM, Hallal PC. Role of passive smoking on COPD risk in non-smokers. Lancet. 2007;370(9589):716–7. Zhou Y, Wang C, Yao W, Chen P, Kang J, Huang S, et al. COPD in Chinese nonsmokers. Eur Respir J. 2009;33(3):509–18. ASH. The health effects of exposure to secondhand smoke. In: Action on Smoking and Health. 2014. Iribarren C, Darbinian J, Klatsky AL, Friedman GD. Cohort study of exposure to environmental tobacco smoke and risk of first ischemic stroke and transient ischemic attack. Neuroepidemiology. 2004;23(1–2):38–44. U.S. Department of Health and Human Services. The Health Consequences of Smoking-50 Years of Progress. A Report of the Surgeon General. In: Office on Smoking and Health. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion; 2014. Qureshi AI, Suri MF, Kirmani JF, Divani AA. Cigarette smoking among spouses: another risk factor for stroke in women. Stroke; a journal of cerebral circulation. 2005;36(9):e74–6. Donnan GA, McNeil JJ, Adena MA, Doyle AE, O'Malley HM, Neill GC. 
Smoking as a risk factor for cerebral ischaemia. Lancet. 1989;2(8664):643–7. Howard G, Wagenknecht LE, Cai J, Cooper L, Kraut MA, Toole JF. Cigarette smoking and other risk factors for silent cerebral infarction in the general population. Stroke; a journal of cerebral circulation. 1998;29(5):913–7. Glymour MM, Defries TB, Kawachi I, Avendano M. Spousal smoking and incidence of first stroke: the Health and Retirement Study. Am J Prev Med. 2008;35(3):245–8. Oono IP, Mackay DF, Pell JP. Meta-analysis of the association between secondhand smoke exposure and stroke. J Public Health. 2011;33(4):496–502. Pitsavos C, Panagiotakos DB, Chrysohoou C, Tzioumis K, Papaioannou I, Stefanadis C, et al. Association between passive cigarette smoking and the risk of developing acute coronary syndromes: the CARDIO2000 study. Heart Vessels. 2002;16(4):127–30. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–12. La Torre G, Chiaradia G, Gianfagna F, De Laurentis A, Boccia S, Ricciardi W. Quality assessment in meta-analysis. Italian Journal of Public Health. 2006;3(2):44–50. Spitzer WO, Lawrence V, Dales R, Hill G, Archer MC, Clark P, et al. Links between passive smoking and disease: a best-evidence synthesis. A report of the Working Group on Passive Smoking. Clin Invest Med. 1990;13(1):17–42. discussion 43–16. From relative risks to odds ratios and back [http://www.epigear.com/index_files/or2rr.html]. Accessed 22 Nov 2015. Meta-analysis with MIX 2.0 [http://www.meta-analysis-made-easy.com/index.html]. Accessed 22 Nov 2015. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials. 1986;7(3):177–88. Blettner M, Krahn U, Schlattmann P. Meta-analysis in epidemiology. In: Ahrens W, Pigeot I, editors. Handbook of epidemiology. New York: Springer; 2014. p. 1377–411. Blettner M, Sauerbrei W, Schlehofer B, Scheuchenpflug T, Friedenreich C. Traditional reviews, meta-analyses and pooled analyses in epidemiology. Int J Epidemiol. 1999;28(1):1–9. Sutton AJ, Abrams KR, Jones DR, Sheldon TA, Song F. Methods for meta-analysis in medical research. John Wiley & Sons, Ltd: Chichester; 2000. Hedges LV, Olkin I. Statistical methods for meta-analysis. Orlando, FL: Academic; 1985. Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21(11):1539–58. Kalandidi A, Trichopoulos D, Hatzakis A, Tzannes S, Saracci R. The effect of involuntary smoking on the occurrence of chronic obstructive pulmonary disease. Soz Praventivmed. 1990;35(1):12–6. Johannessen A, Bakke PS, Hardie JA, Eagan TM. Association of exposure to environmental tobacco smoke in childhood with chronic obstructive pulmonary disease and respiratory symptoms in adults. Respirology (Carlton, Vic. 2012;17(3):499–505. Steenland K, Thun M, Lally C, Heath Jr C. Environmental tobacco smoke and coronary heart disease in the American Cancer Society CPS-II cohort. Circulation. 1996;94(4):622–8. He Y, Lam TH, Li LS, Li LS, Du RY, Jia GL, et al. Passive smoking at work as a risk factor for coronary heart disease in Chinese women who have never smoked. BMJ. 1994;308(6925):380–4. Panagiotakos DB, Chrysohoou C, Pitsavos C, Papaioannou I, Skoumas J, Stefanadis C, et al. 
The association between secondhand smoke and the risk of developing acute coronary syndromes, among non-smokers, under the presence of several cardiovascular risk factors: The CARDIO2000 case–control study. BMC Public Health. 2002;2:9. Rosenlund M, Berglind N, Gustavsson A, Reuterwall C, Hallqvist J, Nyberg F, et al. Environmental tobacco smoke and myocardial infarction among never-smokers in the Stockholm Heart Epidemiology Program (SHEEP). Epidemiology (Cambridge, Mass). 2001;12(5):558–64. Ciruzzi M, Pramparo P, Esteban O, Rozlosnik J, Tartaglione J, Abecasis B, et al. Case–control study of passive smoking at home and risk of acute myocardial infarction. Argentine FRICAS Investigators. Factores de Riesgo Coronario en America del Sur. J Am Coll Cardiol. 1998;31(4):797–803. McElduff P, Dobson AJ, Jackson R, Beaglehole R, Heller RF, Lay-Yee R. Coronary events and exposure to environmental tobacco smoke: a case–control study from Australia and New Zealand. Tob Control. 1998;7(1):41–6. Yin P, Jiang CQ, Cheng KK, Lam TH, Lam KH, Miller MR, et al. Passive smoking exposure and risk of COPD among adults in China: the Guangzhou Biobank Cohort Study. Lancet. 2007;370(9589):751–7. Chan-Yeung M, Ho AS, Cheung AH, Liu RW, Yee WK, Sin KM, et al. Determinants of chronic obstructive pulmonary disease in Chinese patients in Hong Kong. Int J Tuberc Lung Dis. 2007;11(5):502–7. Wen W, Shu XO, Gao YT, Yang G, Li Q, Li H, et al. Environmental tobacco smoke and mortality in Chinese women who have never smoked: prospective cohort study. BMJ. 2006;333(7564):376. Zhang X, Shu XO, Yang G, Li HL, Xiang YB, Gao YT, et al. Association of passive smoking by husbands with prevalence of stroke among Chinese women nonsmokers. Am J Epidemiol. 2005;161(3):213–8. Wu CF, Feng NH, Chong IW, Wu KY, Lee CH, Hwang JJ, et al. Second-hand smoke and chronic bronchitis in Taiwanese women: a health-care based study. BMC Public Health. 2010;10:44. Lee PN, Forey BA. Environmental tobacco smoke exposure and risk of stroke in nonsmokers: a review with meta-analysis. J Stroke Cerebrovasc Dis. 2006;15(5):190–201. Sauerbrei W, Blettner M, Royston P. On alcohol consumption and all-case mortality. J Clin Epidemiol. 2001;54(5):537–40. Singh RJ, Lal PG. Second-hand smoke: A neglected public health challenge. Indian J Public Health. 2011;55(3):192–8. Baron JA, La Vecchia C, Levi F. The antiestrogenic effect of cigarette smoking in women. Am J Obstet Gynecol. 1990;162(2):502–14. Geisler J, Omsjo IH, Helle SI, Ekse D, Silsand T, Lonning PE. Plasma oestrogen fractions in postmenopausal women receiving hormone replacement therapy: influence of route of administration and cigarette smoking. J Endocrinol. 1999;162(2):265–70. Hsieh CC, Signorello LB, Lipworth L, Lagiou P, Mantzoros CS, Trichopoulos D. Predictors of sex hormone levels among the elderly: a study in Greece. J Clin Epidemiol. 1998;51(10):837–41. Bolego C, Poli A, Paoletti R. Smoking and gender. Cardiovasc Res. 2002;53(3):568–76. Bero LA, Glantz SA, Rennie D. Publication bias and public health policy on environmental tobacco smoke. JAMA. 1994;272(2):133–6. Shamliyan T, Kane RL, Dickinson S. A systematic review of tools used to assess the quality of observational studies that examine incidence or prevalence and risk factors for diseases. J Clin Epidemiol. 2010;63(10):1061–70. Dreier M, Borutta B, Stahmeyer J, Krauth C, Walter U. Comparison of tools for assessing the methodological quality of primary and secondary studies in health technology assessment reports in Germany. 
GMS Health Technol Assess. 2010;6:Doc07. PubMed Central PubMed Google Scholar Hardy RJ, Thompson SG. Detecting and describing heterogeneity in meta-analysis. Stat Med. 1998;17(8):841–56. Dobson AJ, Alexander HM, Heller RF, Lloyd DM. Passive smoking and the risk of heart attack or coronary death. Med J Aust. 1991;154(12):793–7. Gallo V, Neasham D, Airoldi L, Ferrari P, Jenab M, Boffetta P, et al. Second-hand smoke, cotinine levels, and risk of circulatory mortality in a large cohort study of never-smokers. Epidemiology (Cambridge, Mass). 2010;21(2):207–14. He Y, Jiang B, Li LS, Li LS, Ko L, Wu L, et al. Secondhand smoke exposure predicted COPD and other tobacco-related mortality in a 17-year cohort study in China. Chest. 2012;142(4):909–18. Hill SE, Blakely T, Kawachi I, Woodward A. Mortality among lifelong nonsmokers exposed to secondhand smoke at home: cohort data and sensitivity analyses. Am J Epidemiol. 2007;165(5):530–40. Hole DJ, Gillis CR, Chopra C, Hawthorne VM. Passive smoking and cardiorespiratory health in a general population in the west of Scotland. BMJ. 1989;299(6696):423–7. Jefferis BJ, Lawlor DA, Ebrahim S, Wannamethee SG, Feyerabend C, Doig M, et al. Cotinine-assessed second-hand smoke exposure and risk of cardiovascular disease in older adults. Heart. 2010;96(11):854–9. Muscat JE, Wynder EL. Exposure to environmental tobacco smoke and the risk of heart attack. Int J Epidemiol. 1995;24(4):715–9. Rostron B. Mortality risks associated with environmental tobacco smoke exposure in the United States. Nicotine Tob Res. 2013;15(10):1722–8. Schwartz AG, Cote ML, Wenzlaff AS, Van Dyke A, Chen W, Ruckdeschel JC, et al. Chronic obstructive lung diseases and risk of non-small cell lung cancer in women. J Thorac Oncol. 2009;4(3):291–9. Whincup PH, Gilg JA, Emberson JR, Jarvis MJ, Feyerabend C, Bryant A, et al. Passive smoking and risk of coronary heart disease and stroke: prospective study with cotinine measurement. BMJ. 2004;329(7459):200–5. We wish to thank Elizabeth Sourbut for English language editing. This analysis received no funding. We acknowledge support of the publication fee by the Deutsche Forschungsgemeinschaft and the Open Access Publication Funds of Bielefeld University. Department of Public Health Medicine, School of Public Health, University of Bielefeld, P.O. Box 100 131, 33501, Bielefeld, Germany Florian Fischer & Alexander Kraemer Florian Fischer Alexander Kraemer Correspondence to Florian Fischer. FF and AK conceptualized the study. FF analysed and interpreted the data, AK supervised the process. FF drafted the manuscript and AK revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript. Caption: Heterogeneity funnel plots. (DOCX 20 kb) Fischer, F., Kraemer, A. Meta-analysis of the association between second-hand smoke exposure and ischaemic heart diseases, COPD and stroke. BMC Public Health 15, 1202 (2015). https://doi.org/10.1186/s12889-015-2489-4 Second-hand smoke
CommonCrawl
\begin{definition}[Definition:Boolean Lattice/Definition 2] An ordered structure $\left({S, \vee, \wedge, \preceq}\right)$ is a '''Boolean lattice''' {{iff}}: $(1): \quad \left({S, \vee, \wedge}\right)$ is a Boolean algebra $(2): \quad$ For all $a, b \in S$: $a \wedge b \preceq a \vee b$ \end{definition}
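A standard concrete instance, added for illustration and not part of the ProofWiki definition itself: for any set $X$, the ordered structure $\left({\mathcal P \left({X}\right), \cup, \cap, \subseteq}\right)$ is a Boolean lattice, since $\left({\mathcal P \left({X}\right), \cup, \cap}\right)$ is a Boolean algebra and $A \cap B \subseteq A \cup B$ for all $A, B \subseteq X$.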
ProofWiki
\begin{definition}[Definition:Sign of Permutation] Let $n \in \N$ be a natural number. Let $\N_n$ denote the set of natural numbers $\set {1, 2, \ldots, n}$. Let $\tuple {x_1, x_2, \ldots, x_n}$ be an ordered $n$-tuple of real numbers. Let $\pi$ be a permutation of $\N_n$. Let $\map {\Delta_n} {x_1, x_2, \ldots, x_n}$ be the product of differences of $\tuple {x_1, x_2, \ldots, x_n}$. Let $\pi \cdot \map {\Delta_n} {x_1, x_2, \ldots, x_n}$ be defined as: :$\pi \cdot \map {\Delta_n} {x_1, x_2, \ldots, x_n} := \map {\Delta_n} {x_{\map \pi 1}, x_{\map \pi 2}, \ldots, x_{\map \pi n} }$ The '''sign of $\pi \in S_n$''' is defined as: :$\map \sgn \pi = \begin{cases} \dfrac {\Delta_n} {\pi \cdot \Delta_n} & : \Delta_n \ne 0 \\ 0 & : \Delta_n = 0 \end{cases}$ \end{definition}
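A short worked example, added for illustration and not part of the ProofWiki entry: let $n = 2$, let $\pi$ be the transposition exchanging $1$ and $2$, and take $x_1 \ne x_2$. Then:
:$\map {\Delta_2} {x_1, x_2} = x_1 - x_2$
:$\pi \cdot \map {\Delta_2} {x_1, x_2} = \map {\Delta_2} {x_{\map \pi 1}, x_{\map \pi 2} } = x_2 - x_1 = -\map {\Delta_2} {x_1, x_2}$
so that $\map \sgn \pi = \dfrac {\Delta_2} {\pi \cdot \Delta_2} = -1$, as expected for a transposition.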
ProofWiki
In another paper (Butterfield 2011), one of us argued that emergence and reduction are compatible, and presented four examples illustrating both. The main purpose of this paper is to develop this position for the example of phase transitions. We take it that emergence involves behaviour that is novel compared with what is expected: often, what is expected from a theory of the system's microscopic constituents. We take reduction as deduction, aided by appropriate definitions. Then the main idea of our reconciliation of emergence and reduction is that one makes the deduction after taking a limit of an appropriate parameter $N$. Thus our first main claim will be that in some situations, one can deduce a novel behaviour, by taking a limit $N\to\infty$. Our main illustration of this will be Lee-Yang theory. But on the other hand, this does not show that the $N=\infty$ limit is physically real. For our second main claim will be that in such situations, there is a logically weaker, yet still vivid, novel behaviour that occurs before the limit, i.e. for finite $N$. And it is this weaker behaviour which is physically real. Our main illustration of this will be the renormalization group description of cross-over phenomena.
CommonCrawl
Stem Cell Research & Therapy, December 2019, 10:279
Autograft microskin combined with adipose-derived stem cell enhances wound healing in a full-thickness skin defect mouse model
Yuansen Luo, Xiaoyou Yi, Tangzhao Liang, Shihai Jiang, Ronghan He, Ying Hu, Chunmei Wang, Kun Wang
Autograft microskin transplantation has been widely used as a skin graft therapy for full-thickness skin defects. However, skin grafting failure can lead to pathologically delayed wound healing due to a poorly vascularized wound bed. Considering the active role of adipose-derived stem cells (ADSCs) in promoting angiogenesis, we intended to investigate the efficacy of autograft microskin combined with ADSC transplantation for facilitating wound healing in a full-thickness skin defect mouse model. An in vivo full-thickness skin defect mouse model was used to evaluate the contribution of transplanted microskin and ADSCs to wound healing. Angiogenesis was detected by immunohistochemistry staining. The in vitro paracrine signaling pathway was evaluated by protein array together with Gene Ontology, Kyoto Encyclopedia of Genes and Genomes pathway, and protein-protein interaction network analysis. Co-transplantation of microskin and ADSCs potentiated wound healing with better epithelization, smaller scar thickness, and higher angiogenesis (CD31) in the subcutaneous layer. We found that both EGF and VEGF cytokines were secreted by microskin in vitro. Additionally, secretome proteomic analysis in a co-culture system of microskin and ADSCs revealed that ADSCs could secrete a wide range of important molecules that form a reacting network with microskin, including VEGF, IL-6, EGF, uPAR, MCP-3, G-CSF, and Tie-2, which most likely supported the angiogenesis effect observed. Overall, we concluded that the use of ADSCs partially modulates microskin function and enhances wound healing by promoting angiogenesis in a full-thickness skin defect mouse model.
Keywords: Adipose-derived stem cell; Microskin; Secretome; Full-thickness skin defect; Wound healing
Abbreviations: ADSC, adipose-derived stem cell; DEP, differentially expressed protein; FC, fold change; G-CSF, granulocyte colony-stimulating factor; HGF, hepatocyte growth factor; HMECs, human microvascular endothelial cells; KDR, kinase insert domain receptor; KEGG, Kyoto Encyclopedia of Genes and Genomes; IHC, histochemistry and immunohistochemistry; IL-6, interleukin 6; MCP-3, monocyte chemotactic protein 3; PPI, protein-protein interactions; Tie-2, tyrosine kinase with immunoglobulin-like and EGF-like domains 2; uPAR, urokinase plasminogen activator receptor; α-SMA, alpha smooth muscle actin
Yuansen Luo, Xiaoyou Yi and Tangzhao Liang contributed equally to this work.
Wound healing is a remarkably complex and continuous process consisting of hemostasis and coagulation, inflammation, proliferation, and wound repair with scar tissue formation [1]. Inappropriate wound care can compromise the healing process and cause complications such as delayed or non-healing wounds. Microskin grafting is a method of laying small sheets of skin graft on the cutaneous wound to enhance wound healing, and it has been widely used as a skin graft therapy in developing countries [2]. The procedure is simple and economical and has proven successful in full-thickness skin defects. However, microskin grafting has limitations, such as lack of neovascularization, keloid scar formation, and failure of transplantation due to a poor wound bed and ischemia-reperfusion (IR) [3].
Therefore, burn surgeons face a considerable challenge as to how to enhance the effectiveness of microskin grafting. In recent years, adipose-derived stem cell (ADSC) application, as a stem cell-based therapy, has been proven to promote tissue regeneration in chronic and non-healing wounds owing to their differentiation and paracrine effects [4]. ADSCs have attracted focus widely because they can be harvested with minimal invasiveness. Early in vitro studies reported that ADSCs could secrete various bioactive factors such as vascular endothelial growth factor (VEGF), epidermal growth factor (EGF), and hepatocyte growth factor (HGF), which were beneficial to enhance endothelial cell (EC) function and promote angiogenesis [5]. Moreover, extracellular vesicles (EVs) released from ADSCs can stimulate proliferation of human microvascular endothelial cells (HMECs) and enhance pro-angiogenic function [6]. These data show the ability of ADSCs to enhance neovascularization, and paracrine function may play a critical role in angiogenesis. In the model of extended inferior epigastric artery skin flap in rats, treatment with ADSCs increased flap survival by enhancing angiogenic response and improving blood perfusion [7]. Besides, significantly accelerate neovascularization has been found in venous congested skin graft of rabbit model treated with ADSCs [8]. Owing to its paracrine function, ADSCs have been widely applied as a new therapy to skin wound healing and skin graft in recent years. Although microskin grafting is the main method of massive skin defects, there are still several problems we need to solve as previously discussed, such as insufficient angiogenesis. We hypothesized that the paracrine function of ADSCs might enhance angiogenesis promotion in microskin grafting. Thus, we aim to explore if microskin in a combination of ADSCs could promote the wound healing of full-thickness skin defects and conquer the limitation of microskin grafting. In our study, a number of cytokines secreted by the co-culture system of microskin and ADSCs, including vascular endothelial growth factor (VEGF), interleukin 6 (IL-6), epidermal growth factor (EGF), urokinase plasminogen activator receptor (uPAR), monocyte chemotactic protein-3 (MCP-3), granulocyte colony-stimulating factor (G-CSF), and tyrosine kinase with immunoglobulin-like and EGF-like domains 2 (Tie-2), were identified by high-throughput protein array. These cytokines contribute to angiogenesis and promote wound healing. Therefore, our in vivo and in vitro study suggests that the combination of microskin and ADSCs could be a promising therapy to promote wound healing of full-thickness skin defect. The animal protocol was approved by the Institutional Animal Research Committee Approval of Sun Yat-sen University. All the animals were purchased from the Animal Center for Medical Experiment of Guangdong. This study has been conducted under the guideline of the Guide for the Care and Use of Laboratory Animals. Isolation, culture, and characterization of adipose-derived stem cell ADSC extraction was performed as described by Zuk et al. [9]. The inguinal subcutaneous fat was isolated from the Balb/c mice (male, 12 weeks old). The adipose was washed with phosphate-buffered saline (PBS, Gibco, USA) consists of 1% penicillin-streptomycin Solution (Keygen, Jiangsu, China) three times, minced with scissors into pieces less than 1-mm diameter, washed with PBS, and centrifuge for 5 min at 400g three times. 
For the floating tissue, a threefold volume of 1% type I collagenase (Gibco, USA) was added, and the admixture was then digested in 37 °C water bath and shaken gently every 5 min for 45 min. The deposit was suspended with culture medium and pass through a 70-μm filter to removed undigested tissue. The pellets were resuspended with culture medium to a final concentration of 5 × 106cells/ml; then, the cells were placed in a 37 °C incubator supplied with 5% CO2 and 95% humidity. The medium was changed every 2 days. The third or fourth passage cells were used for various experiments. After the third or fourth passage, cells were harvested and applied to characterize the CD markers of mesenchymal stem cells. The protocols were adopted and followed by other previously published studies [10]. Briefly, 50 μl of cell suspension was incubated with a fluorochrome-conjugated monoclonal antibody for 1 h in the dark at room temperature, washed three times with PBS, and analyzed using a FACS Calibur flow cytometer (Becton Dickinson, San Jose, CA). The antibodies used in the experiments were HLA-DR, CD11b, CD19, CD34, CD45, CD73, CD90, CD105 (Abcam, USA). Mouse fluorochrome-conjugated isotype control IgG antibodies (Abcam, USA) were used in the experiments as a negative labeling control. To analyze cell differentiation abilities of adipogenic, osteogenesis, and chondrogenic, ADSCs were cultured in adipogenic differentiation medium for 2 weeks, osteogenesis differentiation medium for 3 weeks, and chondrogenic differentiation medium for 4 weeks (ScienceCell, USA). Cells were fixed with 4% paraformaldehyde in PBS for 1 h at room temperature and stained with Oil Red O, Alizarin Red, and Alcian blue (Sigma) solution. The results were observed under a phase contrast microscope (Nikon, Japan). Fabrication of microskin Preparation of microskin was performed as described by Zhang et al. [11], with a bit of modification. Balb/c mice (male, 12 weeks old) weighing approximately 22 g were used in this experiment. Their backs were shaved and wiped with 75% ethyl alcohol after intraperitoneal injection of 50 mg/kg sodium pentobarbital for anesthesia. Then, a piece of full-thickness skin (2 cm × 3 cm) was removed from the subjected mouse and cut into 5 mm × 5 mm with a scissor. The debris of the skin was then immersed in PBS and washed three times. For the debris of the skin, a threefold volume of 0.25% dispase was added and incubated at 4 °C overnight to detach the dermis and the epidermis. The epidermis was cut into less than 1-mm diameter (microskin) and washed with PBS. In vivo mouse wound healing model Thirty Balb/c mice (male, 8 weeks old), weighing 22 ± 4 g, were used in this experiment. Mice were housed in the environment without specific pathogen and free to access standard food and water with 12-h photoperiods. Before the surgical procedure, all the mice accepted anesthesia by 50 mg/kg sodium pentobarbital by intraperitoneal injection. Their dorsal surface, including the surgical area, was shaved exhaustively and wiped with 75% ethyl alcohol twice. Mice were randomized into two parts (n = 15 each) depending on the processing treatment after surgery. Two round shape of the full-thickness wound (1.2-cm diameter) were created in the middle of the back of each mouse. The wounds were photographed by digital camera immediately. 
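The cell density quoted in the culture protocol above (resuspension at 5 × 10^6 cells/ml) implies a simple volume calculation after counting; the sketch below is illustrative only, with a placeholder cell count, since the counting method itself is not described here.

def resuspension_volume_ml(total_cells, target_density_per_ml=5e6):
    """Volume of culture medium needed to resuspend a counted cell pellet at the target density."""
    return total_cells / target_density_per_ml

# Placeholder haemocytometer count: 2.0 x 10^7 cells recovered after digestion and filtering.
print(resuspension_volume_ml(2.0e7))  # -> 4.0 ml of medium gives 5 x 10^6 cells/ml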
According to the local treatment assigned to each mouse, one wound was evenly transplanted with autologous microskin covering 1/4 of the wound area (as previously described) plus 30 μl PBS (MS group), and the other was treated with 30 μl PBS as a negative control (control group). A paired mouse was manipulated similarly: one wound was transplanted with 1/4-area autologous microskin and 30 μl of ADSC suspension containing 1 × 10^5 cells (MS+ADSC group), and the other was treated with 30 μl of ADSC suspension containing 1 × 10^5 cells as control (ADSC group). The ADSCs were prepared before the surgery and injected onto the surface of the wound area. A polyethylene collar was stitched to the adjacent skin to retain the margin of the wound. The mice were administered sodium salicylate (150 mg/kg) for pain control and antibiotics for the following 2 days. At 7 days and 14 days after surgery, the wounded skin was photographed. To evaluate the therapeutic effect of each treatment, we photographed the wounded skin with a digital camera at 0, 7, and 14 days post-treatment and analyzed the data with ImageJ (NIH, Bethesda, MD) software. All measurements of wound area and wound contraction followed previous studies [12]. For the wound area studies, the actual wound area was defined as the open wound area and was calculated using the following formula: $$ \%\mathrm{wound}\ \mathrm{area}=\left({\mathrm{W}}_0-{\mathrm{W}}_1-{\mathrm{W}}_2\right)/{\mathrm{W}}_0\times 100\% $$ For the wound contraction studies, it was calculated using the following formula: $$ \%\mathrm{wound}\ \mathrm{contraction}=\left({\mathrm{W}}_0-{\mathrm{W}}_1\right)/{\mathrm{W}}_0\times 100\% $$ where W0 was the original wound area on day 0, W1 was the open wound area at day 7 or day 14, and W2 was the area of microskin adhering to the skin or the re-epithelialization area. The sum of the contraction area, the re-epithelialization or microskin area, and the open wound area equals 100% of the original wound size. Meanwhile, half of the mice were sacrificed by CO2 asphyxiation and the wounded skin was carefully harvested for histological analysis at each time point. For histochemistry and immunohistochemistry analysis, the wounded skins taken at each time point were excised and fixed in 4% paraformaldehyde, embedded in paraffin, and sectioned vertically into 4-μm-thick sections. For histological observations, representative sections were stained with hematoxylin and eosin (H&E) following conventional protocols. Alpha smooth muscle actin (α-SMA), CD31, and vascular endothelial growth factor (VEGF) were chosen for immunohistochemistry to evaluate fibrosis and neovascularization following routine protocols [13]. Briefly, the tissue sections underwent citrate-based antigen retrieval for 15 min and were blocked with normal goat serum for 30 min. Then, the sections were incubated with anti-CD31 (1:100; Abcam, UK), anti-VEGF (1:100; Abcam, UK), and anti-alpha smooth muscle actin (anti-α-SMA, 1:150; Abcam, UK) antibodies at 4 °C overnight, separately. After washing with PBS, the sections were developed with DAB and counterstained with hematoxylin. The sections were analyzed and images acquired with an upright optical microscope (Nikon, Japan). Neovascularization at the wound sites was detected by CD31 staining. Microphotographs were captured, and quantification of CD31-positive (+) blood vessels was performed in ten random fields per section. Only blood vessels with a diameter of 2–10 μm were counted [14].
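The wound area and wound contraction formulas above translate directly into a small helper; the areas in the example call are placeholder values in mm², not measurements from the study (a 1.2-cm diameter wound corresponds to roughly 113 mm² at day 0).

def wound_metrics(w0, w1, w2):
    """Percentage open wound area and wound contraction, following the formulas above.

    w0: original wound area at day 0
    w1: open wound area at day 7 or day 14
    w2: area covered by adherent microskin or re-epithelialization
    """
    percent_wound_area = (w0 - w1 - w2) / w0 * 100
    percent_contraction = (w0 - w1) / w0 * 100
    return percent_wound_area, percent_contraction

# Placeholder areas in mm^2 for one wound at day 14.
print(wound_metrics(113.1, 40.0, 20.0))  # -> approximately (46.9, 64.6)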
For the measurement of scar thickness (the distance from the epidermal-dermal junction down to the panniculus carnosus, Additional file 4: Figure S3A), five random distances with equal gap (1000 μm) of scar thickness were measured in the wound area which are determined on three H&E staining sections each group using ImageJ software [15]. Interaction detection by co-culture of microskin and ADSC, ADSC, and fibroblasts It is reported that ADSCs could differentiate into keratinocyte and endotheliocyte cells in vitro [16, 17]. To investigate whether ADSCs differentiated into keratinocyte or endotheliocyte cells while co-culturing with microskin, we performed a co-culture system, which was adopted and followed by previously published studies [18]. An 8-μm micropore, 6-well Transwell plate (Millipore, USA) was used to co-culture microskin and ADSC. 1 × 105cells/ml ADSC was seeded in the lower chamber, and microskin about 1-cm2 epidermis area was put in the upper chamber. The Transwell system was supplied by DMEM medium with 2% FBS. After incubated for 7 days and 14 days, total RNA and protein of ADSC were extracted and preserved in − 80 °C for further research. Cultured ADSC supplied by DMEM medium with 2% FBS was taken as the control group. qRT-PCR and Western blot were applied to investigate the protein and mRNA expression of keratin 5 (CK5), keratin 19 (CK19), kinase insert domain receptor (KDR), and von Willebrand factor (VWF) (Additional file 1: Table S1) in the co-culture system of MS and ADSC. Secretion function of microskin To investigate the secretion ability of microskin, microskin about 1-cm2 epidermis area was put in a 6-well plate uniformly and incubated for 3 days, 7 days, and 14 days. The supernatant was taken for EGF and VEGF secretion detection (Elabscience, China) by enzyme-linked immunosorbent assay (ELISA). The concentration of cytokines demonstrated the secretion function of microskin. Paracrine analyze by protein array The cultured microskin suspends and the co-cultured microskin with ADSC suspend were taken as protein array samples. A total of 60 proteins were selected, including growth factors, chemotactic factors, and inflammation factors. All the samples were analyzed using an array (RayBiotech, Norcross, GA, USA, GSH-ANG-1000). All experiments were conducted according to the manufacturer's instructions. Briefly, after 60 min of incubation with blocking buffer, 60 μl of 100-fold concentrated samples was added to each well. After overnight incubation at 4 °C and extensive washing, the biotin-labeled detection antibody was added for 2 h and then washed away. AlexaFluor 555-conjugated streptavidin was then added and incubated for 1 h at room temperature. The signals (532-nm excitation, 635-nm emission) were scanned and extracted using an InnoScan 300 scanner (Innopsys, Carbonne, France). Raw data from the array scanner were provided as images (.tif files) and spot intensities (tab-delimited.txt file) through Mapix 7.3.1 Software. All experiments were conducted according to the manufacturer's instructions. Individual array spots were background-subtracted locally and normalized through two positive controls. Calculate the mean signal-BKG for each set of duplicate standards and samples. Then, plot the standard curve on log-log graph paper, with standard concentration on the x-axis and signal-BKG on the y-axis. At last, draw the best-fit straight line through the standard points. Concentrations of all serum proteins detected were determined according to its standard curve. 
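The final step described for the protein array, converting background-subtracted spot signals to concentrations through a best-fit straight line on log-log axes, can be sketched as follows; the standard concentrations, signals, and the sample reading are invented placeholders, and in the study the concentrations were read off the kit's own standard curve.

import numpy as np

# Placeholder standard curve: known concentrations (pg/ml) and background-subtracted signals.
std_conc = np.array([10, 50, 250, 1250, 6250], dtype=float)
std_signal = np.array([120, 540, 2300, 9800, 41000], dtype=float)

# Best-fit straight line on log-log axes: log(signal) = slope*log(conc) + intercept.
slope, intercept = np.polyfit(np.log10(std_conc), np.log10(std_signal), 1)

def signal_to_conc(signal):
    """Invert the log-log standard curve for one background-subtracted, normalised sample signal."""
    return 10 ** ((np.log10(signal) - intercept) / slope)

# A sample spot reading of 3000 after background subtraction and normalisation.
print(round(signal_to_conc(3000.0), 1))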
It was considered as differentially expressed protein (DEP) by comparison of the signal values between the groups based on p < .05 by t test, signal value > 150, and fold change (FC) ≥ 1.2 or ≤ 0.83. Gene Ontology, KEGG pathway analysis, and integration of protein-protein interaction network analysis We performed Gene Ontology (GO) analysis and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway to analyze the DEPs by using Online String Tools. GO analysis was used to annotate genes and gene products including cellular component, biological process, and molecular function. KEGG was performed for systematic analysis of the pathways in which genes in DEPs were involved in our study by R Bioconductor package clusterProfiler [19]. STRING version 11.0 is a database of protein-protein interactions (PPIs) which covers 5090 organisms. The STRING database is performed to access the protein-protein interactions including direct (physical) and indirect (functional) associations [20]. To evaluate the interrelation among DPEs detected in this study, STRING was utilized and obtained a PPI network through the function and pathway enrichment analysis. It was considered statistically significant with p < .05. Results from the quantitative studies of wound healing analysis in vivo, Western blot, and qRT-PCR were expressed as the mean ± standard deviation (SD). Results from the quantitative studies of the blood vessels in IHC staining were expressed as the mean ± standard error of the mean (SEM). Three independent experiments were performed for validity, and at least three samples per each test were taken for statistical analysis. Statistical comparisons between the two groups were performed by two-tailed Student's t test. Differences among multiple groups were statistically analyzed using one-way analysis of variance (ANOVA). Differences were considered significant when p < 0.05. Characterization of ADSCs According to the Mesenchymal and Tissue Stem Cell Committee of the International Society for Cellular Therapy, we investigated the expression levels of cell surface markers [10]. Flow cytometry analysis indicated that more than 98% of cultured cells expressed CD73 (99.5%), CD90 (98.7%), and CD105 (98.8%), whereas a small fraction of them expressed HLA-DR (1.7%), CD45 (1.8%), CD34 (1.9%), CD19 (0.1%), and CD11b (0.2%) (Additional file 2: Figure S1A). This expression of cell surface markers is a characteristic protein expression of ADSCs. Differentiation abilities of the isolated cell were further examined by adipogenic, osteogenesis, and chondrogenic. The isolated cells were cultured in special supplemented medium of adipogenic, osteogenesis, and chondrogenic. Staining of Oil Red O (Additional file 2: Figure S1B), Alizarin Red (Additional file 2: Figure S1C), and Alcian blue (Additional file 2: Figure S1D) was carried out to verify the differentiation capacity. The mesenchymal phenotype was supported by their multipotency. All these results demonstrated that the isolated cells were ADSCs. Autograft microskin combined with adipose-derived stem cell enhanced wound healing in full-thickness skin defect mouse model We originally designed to figure out the effect of microskin combined with ADSCs on cutaneous wound healing in a full-thickness skin defect mouse model. Full-thickness skin wounds were made on the mouse's back and photographed the open wound area immediately. After surgery, different treatments were performed in the wound area. 
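Before turning to the in vivo results, the DEP filter defined above (two-tailed t test p < .05, signal value > 150, FC ≥ 1.2 or ≤ 0.83) can be expressed compactly; the replicate signal values below are placeholders, and applying the signal floor to the larger group mean is an assumption, since the text does not spell out how that threshold was applied.

from scipy import stats

def is_dep(group_a, group_b, signal_floor=150.0, fc_up=1.2, fc_down=0.83, alpha=0.05):
    """Apply the DEP criteria described above to one cytokine (replicate signal values per group)."""
    mean_a = sum(group_a) / len(group_a)
    mean_b = sum(group_b) / len(group_b)
    fold_change = mean_a / mean_b
    _, p_value = stats.ttest_ind(group_a, group_b)
    signal_ok = max(mean_a, mean_b) > signal_floor
    return signal_ok and p_value < alpha and (fold_change >= fc_up or fold_change <= fc_down)

# Placeholder replicate signals for one cytokine in two groups.
print(is_dep([860.0, 910.0, 885.0], [520.0, 560.0, 540.0]))  # -> True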
In addition, the wound area was assessed from serial photographs over the course of transplantation and analyzed with image analysis software (Fig. 1a). There were no apparent signs of infection in any group throughout the experiment, including no exudate or purulent drainage. On day 7 post-treatment, the wound area of the MS+ADSC group (34.16% ± 3.113%) was smaller than that of all other groups (ADSC group, 58.67% ± 3.900%; MS group, 49.55% ± 5.170%; and control group, 68.05% ± 2.687%; all p < .0001). At 14 days after treatment, the wound of the MS+ADSC group (3.514% ± 1.261%) was almost closed, while the other groups (ADSC group, 8.792% ± 0.743%; MS group, 7.039% ± 1.177%; all p < .001) were still not completely closed, with an obviously non-healed area, especially in the control group (11.09% ± 1.324%, p < .0001) (Fig. 1b, c). It should be noted that the MS+ADSC group (56.27% ± 3.033%) showed suppressed wound contraction on day 14 post-treatment, compared to the other groups (ADSC group, 75.29% ± 3.679%; MS group, 73.93% ± 3.224%; and control group, 81.92% ± 2.380%; all p < .0001) (Fig. 1d). Moreover, the majority of the microskin grafts survived in the MS+ADSC group, and the newly formed skin coverage was close to normal skin. The newly formed skin was flat, without infection or open wound, and there were no signs of hypertrophic scarring, eschar, or hyperpigmentation in the MS+ADSC group. On the contrary, we observed obvious wound contraction and eschar in the ADSC group (Additional file 3: Figure S2B), MS group (Additional file 3: Figure S2C), and control group (Additional file 3: Figure S2D). This demonstrated that the combined treatment with microskin and ADSCs could enhance full-thickness skin defect wound healing in the mouse model and suppress wound contraction.
Fig. 1 Treatment of ADSC+MS promoted wound healing in a full-thickness skin defect mouse model. a Mice were randomized into two parts (n = 15 each part). One group received microskin or PBS, and the other received microskin plus ADSCs or ADSCs alone after surgery. The wound area was photographed at days 0, 7, and 14, and mice were sacrificed for IHC at days 7 and 14. b Representative full-thickness wounds of each group at day 0; wound closure can be observed at days 7 and 14, with the MS+ADSC group presenting the most remarkable wound healing. c Quantitative evaluation of the wound area on days 0, 7, and 14 post-treatment. ###p < .001, the MS+ADSC group compared to all other groups; ****p < .0001, the MS+ADSC group compared to the ADSC group and control group; ***p < .001, the MS+ADSC group compared to the MS group. d Quantitative evaluation of wound contraction at day 14 post-treatment. ****p < .0001; **p < .01; *p < .05, compared to the control group
In addition, we found a difference between the MS+ADSC group and the MS group in hematoxylin and eosin staining. In wound beds at day 14 post-surgery, a newly formed epithelium with a stratum corneum could be found in the MS+ADSC group. The newly formed epithelium of the MS+ADSC group was also thicker than that of the other groups, including the MS group. In particular, in the MS+ADSC group the microskin had probably become appendages of the freshly formed skin, a phenomenon we seldom observed in the MS group (Fig. 2a).
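Group comparisons such as the day-7 wound-area percentages above were made with one-way ANOVA and two-tailed t tests, as described in the statistical analysis section; the per-mouse values below are invented placeholders used only to show the calculation, not the raw data of the study.

from scipy import stats

# Placeholder per-mouse open-wound percentages at day 7 (n = 5 per group for illustration).
ms_adsc = [31.2, 34.8, 36.5, 33.9, 34.4]
adsc = [55.0, 60.1, 59.2, 61.3, 57.8]
ms_only = [44.7, 51.2, 49.8, 52.3, 49.7]
control = [65.1, 69.3, 70.2, 66.8, 68.9]

# One-way ANOVA across the four treatment groups, then one pairwise two-tailed t test.
f_stat, p_anova = stats.f_oneway(ms_adsc, adsc, ms_only, control)
t_stat, p_pair = stats.ttest_ind(ms_adsc, control)
print("ANOVA p = %.2e; MS+ADSC vs control t-test p = %.2e" % (p_anova, p_pair))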
In day 14 post-surgery, also, the MS+ADSC group (0.8143 ± 0.10 mm) presented with a significantly smaller scar thickness than the other groups (ADSC group, 1.00 ± 0.16 mm, p < .05; MS group, 1.05 ± 0.13 mm, p < .01; control group, 1.24 ± 0.15 mm, p < .0001; Additional file 4: Figure S3). Microscopic appearance of wound beds post-surgery. a Hematoxylin and eosin staining of wound beds at day 14 post-surgery. Wounds treated with MS+ADSC showed a newly formed, hyperplastic epithelium that covered the wound area. The black arrows indicate the microskin grafts had become appendages of the newly formed skin. The asterisks indicated normal adnexal structures. "N" is represented for normal skin, and "w" is represented for wound area. b α-SMA staining of wound beds at day 14 post-treatment. At day 14, the expression of α-SMA in wound tissue was decreased in the MS+ADSC group compared to others. Scale bars are 100 and 500 μm. c CD31 staining of the wound area at days 7 and 14 after treatment. Black arrows indicate CD31-positive vessels. d the number of CD31-positive (+) blood vessels per high-power fields (HPFs) (× 20) were quantified to a particular time point. The data expressed are the average means ± SEM, n = 5. ****p < .0001, compared to the ADSC group, MS group, and control group; **p < .01; *p < .05; ns, not significant, compared to the control group Combination of ADSCs and microskin suppressed the expression of α-SMA and increased the expression of CD31 IHC staining suggested that the expression of α-SMA in the MS+ADSC group was significantly reduced compared to the MS group and control group, which means the fibrosis in hypodermis was inhibited (Fig. 2b). Neovascularization at the area of the wound was detected by CD31 staining, which was the marker protein of endothelial cells. The representative images of CD31 staining were presented (× 20), and black arrows indicated CD31-positive vessels (Fig. 2c). At day 7, as expectedly, it was shown that the number of new blood vessels in the wound area treated with a combination of microskin and ADSCs (16.27% ± 0.493%) was much higher than the other groups (ADSC group, 9.267% ± 0.431%; MS group, 10.11% ± 0.588%; and control group 7.889% ± 0.247%; Fig. 2d), while there was no significance between the ADSC group and MS group. Although the number of new blood vessels in the ADSC group and MS group was much more than the control group, however, the neovascularization was uneven with a lower density, compared to the MS+ADSC group. At day 14 after treatment, we could observe a large number of mature blood vessel formation in the MS+ADSC group (17.03% ± 0.606%) compared to the other groups (ADSC group, 10.30% ± 0.276%; MS group, 11.40% ± 0.423%; and control group, 9.667% ± 0.27%; Fig. 2d). Notably, there was no significance in both the ADSC group and MS group compared to the control group. These finding indicated that vascular regeneration was better in the MS+ADSC group during full-thickness skin defect regeneration. Therefore, the combination of microskin and ADSCs might accelerate skin wound healing through suppressing fibrosis and promoting vascularization. ADSCs did not differentiate into keratinocyte or endotheliocyte cells while co-culturing with microskin To investigate whether ADSCs differentiated into keratinocyte or endotheliocyte cells while co-cultured with microskin, we designed a co-culture system as previously described. 
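The read-out of this co-culture experiment is relative mRNA expression by qRT-PCR; since the quantification model is not stated in the text, the sketch below assumes the common 2^-ΔΔCt approach with GAPDH as a hypothetical reference gene and placeholder Ct values, purely to show how a down-regulated marker would be computed.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """2^-ddCt relative expression of a target gene versus an untreated control sample."""
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Placeholder Ct values: CK5 in co-cultured ADSCs vs control ADSCs, normalised to GAPDH.
print(round(relative_expression(27.8, 17.1, 26.2, 17.0), 2))  # < 1 indicates down-regulation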
Western blot and qRT-PCR investigated the expression of CK5, CK19, KDR, and VWF of ADSC in the co-culture system and control ADSC. The protein and mRNA expression of CK5 and CK19 were both downregulated in 7 days and 14 days, along with the downregulation of the protein and mRNA expression of KDR and VWF (Fig. 3a, b). This data demonstrated that the function of ADSC in the co-culture system was not differentiation. MS+ADSC may enhance wound healing through paracrine function rather than the differentiation of ADSCs. a mRNA expression of CK5, CK19, KDR, and VWF are shown in each group. b Representative Western blot bands for CK5, CK19, VWF, and KDR expression are shown in each group including at days 7 and 14 post-treatment. In co-cultured system, ADSCs were all downregulated compared to the control group. c The concentration of EGF and VEGF secreted by microskin kept stabilizing at 7 days and decreased at 14 days. Both at 7 and 14 days, EGF and VEGF secreted by microskin were higher than the control group (FBS) which demonstrated the microskin could release biological factor. *p < .05, compared with the control group To further identify the secretion ability of microskin, we performed a tissue culture experiment [21]. In our experiment, EGF and VEGF were selected as a reference of secreted cytokines by microskin. At 3 days, 7 days, and 14 days, the secretion of EGF and VEGF was higher than the control group (culture medium) (Fig. 3c). The secretion of EGF and VEGF began declining since the seventh day of culture. These data indicated microskin was able to secrete cytokines in vitro. Protein array showed several pro-angiogenic cytokines secreted by a co-culture system of ADSC and microskin A total of 60 angiogenic cytokines were detected in the culture supernatants of microskin (MS+A group), ADSCs (ADSC group), and microskin (MS group) on day 7. As expected, all three groups highly expressed several growth factors, including angiopoietin-1 (ANG-1), placental growth factor (PIGF), Tie-2, hepatocyte growth factor (HGF), EGF, and VEGF. Specifically, there were 31 common differential cytokines detected in the MS+A group and were upregulated compared to the MS group ADSC group (Fig. 4a). Also, the heat map of differentially expressed proteins and the detailed cluster analysis were presented (Fig. 4b, c). From the heat map, we could find that co-culture of microskin and ADSCs could upregulate the expression of most cytokines detected in vitro compared to culturing microskin and ADSCs only. Generally, there were 29 and 32 upregulated DEPs detected in the MS+ADSC group compared to the ADSC group and MS group, respectively. Also, the top upregulation cytokines detected in the MS+ADSC group were I-309, EGF, G-CSF, Tie-2, and VEGF-R2 in comparison with the ADSC group; while it was Tie-2, MCP-3, HGF, ANG-1, and GSF in comparison with the ADSC group. We could confirm that there was a regulation of biological factors while co-culture of microskin and ADSCs in vitro. The differentially expressed proteins (DEPs) among the MS+ADSC group, the ADSC group, and the MS group. a Venn diagram showed there were 31 common DEPs detected in the MS+ADSC group compared to the ADSC group and MS group. 
R Bioconductor packages and gplots were used to obtain heat maps of DEPs in the MS+ADSC group compared with the ADSC group (b) and the MS group (c), together with the cluster analysis results.
GO/KEGG enrichment analysis of differentially expressed proteins
For GO enrichment analysis, the comparison of the MS+A group versus the MS group and of the MS+A group versus the ADSC group showed similar results (Fig. 5a, b). There were three notable cellular component (CC) enrichment terms, including vesicle lumen, cytoplasmic vesicle lumen, and secretory granule lumen. The biological process (BP) analysis indicated that co-culture of microskin and ADSCs mainly had two effects. One was positive regulation of cell migration and cellular component movement, and the other was peptidyl-tyrosine phosphorylation. The molecular function (MF) results showed that the differentially expressed proteins in the MS+A group played critical roles in enhancing the activity of cytokines, growth factors, receptor regulators, and receptor ligands. They also contributed to molecular binding, including cytokine receptor binding, growth factor receptor binding, and G protein-coupled receptor binding.
Fig. 5 GO/KEGG enrichment analysis of DEPs in the ADSC+MS group compared to the ADSC group and MS group by the DAVID online tool and the R package clusterProfiler. a GO enrichment results of the ADSC+MS group versus the ADSC group. b GO enrichment results of the ADSC+MS group versus the MS group. c KEGG pathway enrichment results of the ADSC+MS group versus the ADSC group. d KEGG pathway enrichment results of the ADSC+MS group versus the MS group. GO, Gene Ontology; KEGG, Kyoto Encyclopedia of Genes and Genomes
The results of the KEGG analysis of the MS+ADSC group versus the ADSC group and of the MS+ADSC group versus the MS group were highly similar (Fig. 5c, d). According to the differentially expressed proteins, the KEGG analysis indicated that the PI3K-Akt [22] and Jak-STAT [23] signaling pathways were involved in the co-culture of microskin and ADSCs in vitro, which may induce migration and proliferation of fibroblasts and keratinocytes. Besides, the PI3K-Akt [24] and Ras [25] signaling pathways were also highly involved in the co-culture of microskin and ADSCs in vitro. As a result, the activity of these signaling pathways may promote angiogenesis. The KEGG analysis demonstrated that co-culture of microskin and ADSCs could activate several signaling pathways that are highly related to enhanced skin wound healing through angiogenesis and cell migration and proliferation.
PPI network analysis of the significant associated DEPs
To better analyze the protein-protein interactions in our study, a set of common significant associated differentially expressed proteins (FC > 1.5, p < .05) was chosen (Table 1). The PPI network included 17 nodes and 57 edges (Fig. 6). In the PPI network, IL-6 (14), G-CSF (13), EGF (11), IP-10 (11), and ENA-78 (10) possessed a large number of interactions (as displayed in the parentheses). Moreover, these proteins might be the core proteins in this PPI network of a total of 17 nodes. Therefore, they were considered to be potential target proteins for further studies of the treatment of full-thickness skin defects with microskin and ADSCs.
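The hub ranking quoted for the STRING network (IL-6 with 14 interactions, G-CSF with 13, and so on) is simply the node degree of each protein in the interaction graph; the edge list below is a small invented fragment rather than the actual 57-edge network of Fig. 6, and is only meant to show how such degrees are computed.

import networkx as nx

# Invented fragment of a STRING-style interaction list (protein pairs).
edges = [("IL-6", "G-CSF"), ("IL-6", "EGF"), ("IL-6", "VEGF"),
         ("G-CSF", "EGF"), ("EGF", "VEGF"), ("IP-10", "IL-6")]

g = nx.Graph()
g.add_edges_from(edges)

# Degree = number of interaction partners, used above to rank the hub proteins.
for protein, degree in sorted(g.degree, key=lambda x: -x[1]):
    print(protein, degree)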
Table 1 The significantly associated differentially expressed proteins in the MS+ADSC group (FC > 1.5, p < .05). Columns: protein, Entrez ID, FC (MS+A versus MS), FC (MS+A versus ADSC). Proteins: Activin A, CXCL16, ENA-78, PIGF, RANTES, TGFb3, VEGF R2. Entrez, Entrez Gene (http://www.ncbi.nlm.nih.gov/gene); FC, fold change; MS+A, combination of microskin and ADSCs group; MS, microskin group; ADSC, ADSCs group.

Fig. 6 Intermolecular interactions of vital associated differentially expressed proteins (DEPs). IL-6, G-CSF, EGF, IP-10, and ENA-78 held large numbers of interactions. The interaction relationships of the DEPs are shown as lines. Network edges: line color represents the type of interaction evidence from the interaction sources, line shape represents the predicted mode of action, and line thickness represents the strength of data support. Colored nodes: query proteins and first shell of interactors; empty nodes: proteins of unknown 3D structure; filled nodes: proteins of some known or predicted 3D structure.

For the reconstruction of a large area of wounded skin, several problems remain to be solved, including lack of microskin, wound contraction, delayed vascularization, and scar formation [26]. ADSCs are easy to obtain and have been applied in many clinical trials with good results, including in skin regeneration, owing to their ability to regenerate wounded tissue, enhance vascularization, and inhibit fibrosis [18]. To the best of our knowledge, this is the first study of microskin combined with ADSCs in a full-thickness skin defect mouse model. In the present study, the mouse model showed that this combination therapy could promote the regeneration of full-thickness skin defects with faster wound closure and more neovascularization. Our results indicated that co-transplantation of microskin and ADSCs promotes skin tissue regeneration through the secretion of multiple pro-angiogenic cytokines. Clinically, autograft microskin transplantation is one of the accepted standard therapies for massive-area skin damage [2]. Microskin transplantation has been employed to treat extensive skin damage for years [27]. However, there are still some limitations, such as a limited abundance of donor skin, insufficient revascularization, and failure of transplantation [28]. Evidence supports that ADSCs can prolong the survival of skin grafts through an angiogenic effect and ameliorate microcirculatory alterations [29, 30]. In our work, the combination of microskin and ADSCs enhanced the repair of skin defects, with faster wound healing and less wound contraction, compared to transplantation of ADSCs or microskin alone. Interestingly, the wound area at 14 days showed no statistically significant difference between the MS group (7.039% ± 1.177%) and the ADSC group (8.792% ± 0.743%). It should be noted that the survival rate of microskin was increased in the MS+ADSC group compared to the MS group (Additional file 5: Figure S4B). This may reflect that the combination of microskin and ADSCs can better accelerate skin wound healing. To properly evaluate the therapeutic effect of the combination of microskin and ADSCs, we performed immunohistochemistry staining of the wounded skin of each group. ADSCs, including their derived extracellular microvesicles, have been used for skin tissue regeneration [31], and it has been confirmed that ADSCs can facilitate angiogenesis in tissue repair [32]. In our study, we observed that the MS+ADSC group showed more neovascularization by CD31 staining and less fibrosis by α-SMA staining than the other treatments.
Interestingly, several skin appendages were found in the MS+ADSC group, while few were found in the other groups. All these data indicated that better regeneration of skin tissue occurred in the MS+ADSC group. However, at day 14, the extent of neovascularization did not differ significantly among the MS group (11.40% ± 0.423%), the ADSC group (10.30% ± 0.276%), and the control group (9.667% ± 0.27%) in this study. Treatment with ADSCs or microskin may play a particular role at the early stage of angiogenesis during skin wound healing, but the angiogenic effect of ADSCs or microskin showed no remarkable superiority over the control group at the later stage of skin wound healing. The reason might be that the paracrine effect of microskin and ADSCs weakened as the culture time was prolonged (Fig. 3c). We also noticed that there was still a signal of neovascularization at 14 days post-injury. VEGF staining of wounded skin at day 14 post-injury showed strong positive staining in the MS+ADSC group and the ADSC group (Additional file 6: Figure S5). In our in vitro study, the expression of VEGF in microskin culture declined after day 7, and this might be the reason for the weaker VEGF staining of the wound sections at day 14 post-injury in the MS group. Normally, the number of vessels normalizes and returns to a level close to that of normal skin at the late stage of wound healing [33]. In our study, the larger amount of neovascularization in the MS+ADSC group may be due to the mutual promotion of cytokine secretion, which remains unclear. The combined impact of microskin and ADSCs showed a distinct advantage in angiogenesis, which accelerated closure of the skin wound. We first wondered whether ADSCs could differentiate into epidermal cells or endothelial cells to enhance wound healing in combination with microskin [16, 17]. The co-culture system of microskin and ADSCs showed that keratin expression in ADSCs was downregulated, along with the angiogenesis markers VWF and KDR. In other words, microskin might not accelerate the differentiation of ADSCs in our study. However, a few studies have reported a secretory function of skin tissue in culture [34]. Therefore, we investigated the paracrine effect of microskin. We observed that microskin could secrete several cytokines when cultured in the Transwell system, with persistent secretion of EGF and VEGF; however, the secretion declined after day 7 of culture. The results of our in vivo experiment also suggested that treatment with microskin or ADSCs alone might not achieve a satisfactory angiogenic outcome. Paracrine activity is another critical function of ADSCs in wound healing. Therefore, we further investigated the two-way paracrine effects between microskin and ADSCs, and the data indicated that the combination of microskin and ADSCs secreted various cytokines with upregulated expression, such as VEGF, IL-6, HGF, and EGF (Fig. 4b, c). These cytokines have been confirmed to play an essential role in promoting angiogenesis [35, 36]. Through GO enrichment analysis, we found that the biological processes of the DEPs in the MS+ADSC group involved cell migration, cellular component movement, and peptidyl-tyrosine phosphorylation. The molecular function (MF) results indicated involvement in enhancing the activity of cytokines, growth factors, receptor regulators, and receptor ligands.
Moreover, three important cellular component (CC) enrichment terms were detected, including vesicle lumen, cytoplasmic vesicle lumen, and secretory granule lumen. The most significantly enriched pathways in the KEGG analysis were cytokine-cytokine receptor interaction and the PI3K-Akt and MAPK signaling pathways; these signaling pathways are essential to wound repair [22]. The DEPs were thus enriched in biological processes and signaling pathways related to growth factors and inflammation. The wound healing process after treatment with microskin and ADSCs might involve various cellular metabolic processes mediated by the DEPs. By binding to other molecules (e.g., cytokines, chemokines, and receptors) and through the activity of growth factors, receptors, and ligands, they affect cell and cellular component movement, the inflammatory response, and phosphorylation. This combination of growth factors and inflammatory factors stimulated wound healing-related cell migration and angiogenesis for tissue regeneration. Wound healing is a highly complex process that involves a variety of complicated interactions among different resident cells, the extracellular matrix, soluble cytokines, and infiltrating leukocyte subtypes. It has been confirmed that an appropriate inflammatory response is necessary for healthy wound healing [37]. However, an excessive inflammatory response may lead to delayed healing or non-healing wounds [38]. In this study, inflammatory factors (e.g., IL-6) and growth factors (e.g., EGF) were upregulated in the heat map and held a large number of interacting proteins in the PPI network. IL-6 is considered an inflammatory factor secreted from cells that induces the angiogenesis process in murine skin isografts [39]. IL-6 is closely correlated with VEGF, and treatment of epithelial cell lines with IL-6 can induce the expression of VEGF mRNA to promote angiogenesis [40]. It has been reported that IL-6 secreted from ADSCs can stimulate angiogenesis, accelerate cutaneous wound healing, and enhance recovery after ischemia/reperfusion (I/R) injury to increase flap survival [41, 42, 43]. In the present study, we found increased neovascularization in the MS+ADSC group in the mouse model and an upregulation of IL-6 in the protein array, which is consistent with these previous studies. Our protein array detected a series of upregulated angiogenic factors, such as IL-6, Tie-2, uPAR, and G-CSF. In addition, IL-6, G-CSF, EGF, IP-10, and ENA-78 might be the core proteins in our study, given their large numbers of interacting proteins. It is probable that the combination of inflammatory factors and growth factors plays a considerable part in angiogenesis. Meanwhile, we noticed that over a dozen highly relevant cytokines (FC > 1.5, p < .05) were detected in the MS+ADSC group in the protein array compared to both the ADSC group and the MS group (Table 1). It was quite remarkable that a number of angiogenic cytokines were upregulated, including VEGFR2, G-CSF, Tie-2, and MCP-3. These cytokines have been confirmed to be highly related to angiogenesis, regulating the migration, proliferation, and survival of vascular endothelial cells and upregulating angiogenic cytokines [44, 45, 46, 47]. We speculate that the combination treatment of microskin and ADSCs promoted angiogenesis by upregulating angiogenic cytokines.
Although the underlying mechanisms remain unclear, our results suggest that the cytokines upregulated in the supernatants of co-cultured microskin and ADSCs, including VEGF, IL-6, HGF, and EGF, may play an essential role during the process of wound healing. Furthermore, our data revealed two-way cytokine regulation between microskin and ADSCs. Most of the cytokines we detected were upregulated when microskin and ADSCs were co-cultured, suggesting a positive feedback loop that upregulates these cytokines to promote wound healing. However, future studies are needed to identify the critical cytokines and the underlying signaling pathways, as well as the target cells. We therefore propose a possible mode of clinical translation as follows: a combined transplantation of microskin and ADSCs is applied to the wound site, and the cytokines derived from microskin and ADSCs enhance wound healing with better vascularization (Additional file 7: Figure S6). There are still limitations to our study. Our investigation focused on tissue-to-cell and cell-to-tissue interactions, whereas wound healing involves tissue-to-cell, cell-to-tissue, and cell-to-cell interactions. Inflammatory cells, the extracellular matrix, and the blood supply also contribute to wound healing, not only adipose-derived stem cells, fibroblasts, and microskin. Further investigation is needed to determine which cytokines and signaling pathways are critical in vivo. Our present study demonstrated that autograft microskin combined with adipose-derived stem cells could enhance the healing of a large-area wound in a mouse model with better angiogenesis. The treatment with microskin and ADSCs improved angiogenesis and reduced fibrosis through the secretion of multiple cytokines. The interaction network of the various upregulated cytokines secreted by microskin and ADSCs, such as IL-6, G-CSF, EGF, IP-10, and ENA-78, may play an essential role in promoting wound healing. The combination of microskin and ADSCs may be a promising therapy to enhance tissue regeneration of full-thickness skin defects.

LYS, YXY, and LTZ contributed to the conception and design and performed the experiments, collection of data, data analysis and interpretation, and manuscript writing. JSH performed the experiments and collection of data. HRH contributed to the data analysis and interpretation. HY and WCM contributed to the collection and/or assembly of the data. WK contributed to the conception and design and data analysis and interpretation. ZL contributed to the conception and design and administrative support. All authors read and approved the final manuscript.

This work was supported by the Natural Science Foundation of Guangdong Province, China (2016A030313207 and 2017A030313889). This study was partly supported by the Guangzhou Science, Technology and Innovation Commission (201604020008) and the National Natural Science Foundation of China (81772368).

The animal protocol was approved by the Medical Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University. All animal procedures were approved by the Medical Ethics Committee of the Third Affiliated Hospital of Sun Yat-sen University under the approved protocol code 02-180-01. This study was conducted under the guidelines of the Guide for the Care and Use of Laboratory Animals.

Additional file 1: Table S1. Primers of quantitative reverse transcription–polymerase chain reaction (qRT-PCR).
Additional file 2: Figure S1. Characterization and differentiation capacity of ADSCs. (A) Flow cytometry analysis indicated that more than 98% of cultured cells expressed CD73 (99.5%), CD90 (98.7%), and CD105 (98.8%), whereas only a small fraction expressed HLA-DR (1.7%), CD45 (1.8%), CD34 (1.9%), CD19 (0.1%), and CD11b (0.2%). ISO-Alexa Fluor, ISO-PE, and ISO-APC were used as controls (PE, phycoerythrin; APC, allophycocyanin). (B) Oil Red O staining of ADSCs cultured in adipogenic media. Scale bar = 50 μm. (C) Alizarin red staining of ADSCs cultured in osteogenic media. Scale bar = 50 μm. (D) Alcian blue staining of ADSCs cultured in chondrogenic media. Scale bar = 100 μm.
Additional file 3: Figure S2. Treatment with MS+ADSC gave a cosmetically appealing result at day 14 post-injury compared to the other groups. (A) MS+ADSC group. (B) ADSC group. (C) MS group. (D) Control group.
Additional file 4: Figure S3. Treatment with MS+ADSC reduced scar thickness at day 14 post-injury. (A) Representative photomicrographs of the scar tissue, determined on H&E-stained sections of wounded skin at day 14 post-injury (dashed lined area); black lines indicate the scar thickness. (B) Scar thickness in tissue sections of day 14 post-injury skin. Data are expressed as mean ± SEM, n = 5. ****p < .0001; **p < .01; *p < .05, compared to the control group.
Additional file 5: Figure S4. ADSCs play a role in improving the survival rate of microskin grafts. (A) The original area of microskin grafts (outlined in yellow). (B) The area of surviving microskin grafts (outlined in yellow). (C) Representative survival rates in the MS+ADSC group and the MS group. Data are expressed as mean ± SEM, n = 5. ****p < .0001.
Additional file 6: Figure S5. VEGF staining of wounded skin at days 7 and 14 post-injury. There was stronger positive staining of VEGF in wounded skin in the MS+ADSC group. At both day 7 and day 14 post-injury, the positive staining was stronger in the MS+ADSC group compared to the other groups.
Additional file 7: Figure S6. A possible mode of clinical translation for the combined transplantation of microskin and ADSCs. After debridement of the massive injury and preparation of autologous ADSCs, on the day of surgery, a combined transplantation of microskin and ADSCs is applied to the wound site. The cytokines derived from microskin and ADSCs could enhance wound healing with better vascularization.

References

1. Richardson R, Slanchev K, Kraus C, et al. Adult zebrafish as a model system for cutaneous wound-healing research. J Invest Dermatol. 2013;133(6):1655–65.
2. Gabarro P. A new method of grafting. Br Med J. 1943;1(4301):723–4.
3. Wang WZ, Baynosa RC, Zamboni WA. Update on ischemia-reperfusion injury for the plastic surgeon: 2011. Plast Reconstr Surg. 2011;128(6):685e–92e.
4. Li P, Guo X. A review: therapeutic potential of adipose-derived stem cells in cutaneous wound healing and regeneration. Stem Cell Res Ther. 2018;9(1):302.
5. Lee DE, Ayoub N, Agrawal DK. Mesenchymal stem cells and cutaneous wound healing: novel methods to increase cell delivery and therapeutic efficacy. Stem Cell Res Ther. 2016;7:37.
6. Gangadaran P, Rajendran RL, Lee HW, et al. Extracellular vesicles from mesenchymal stem cells activates VEGF receptors and accelerates recovery of hindlimb ischemia. J Control Release. 2017;264:112–26.
7. Reichenberger MA, Heimer S, Schaefer A, et al. Adipose derived stem cells protect skin flaps against ischemia-reperfusion injury. Stem Cell Rev. 2012;8(3):854–62.
8. Kim CM, Oh JH, Jeon YR, et al. Effects of human adipose-derived stem cells on the survival of rabbit ear composite grafts. Arch Plast Surg. 2017;44(5):370–7.
9. Zuk PA, Zhu M, Ashjian P, et al. Human adipose tissue is a source of multipotent stem cells. Mol Biol Cell. 2002;13(12):4279–95.
10. Bourin P, Bunnell BA, Casteilla L, et al. Stromal cells from the adipose tissue-derived stromal vascular fraction and culture expanded adipose tissue-derived stromal/stem cells: a joint statement of the International Federation for Adipose Therapeutics and Science (IFATS) and the International Society for Cellular Therapy (ISCT). Cytotherapy. 2013;15(6):641–8.
11. Zhang MLCZ, Han X, Zhu M. Microskin grafting. I. Animal experiments. Burns Incl Therm Inj. 1986;12(8):554.
12. Kao HK, Chen B, Murphy GF, et al. Peripheral blood fibrocytes: enhancement of wound healing by cell proliferation, re-epithelialization, contraction, and angiogenesis. Ann Surg. 2011;254(6):1066–74.
13. Fang S, Xu C, Zhang Y, et al. Umbilical cord-derived mesenchymal stem cell-derived exosomal microRNAs suppress myofibroblast differentiation by inhibiting the transforming growth factor-beta/SMAD2 pathway during wound healing. Stem Cells Transl Med. 2016;5(10):1425–39.
14. Kang T, Jones TM, Naddell C, et al. Adipose-derived stem cells induce angiogenesis via microvesicle transport of miRNA-31. Stem Cells Transl Med. 2016;5(4):440–50.
15. Canesso MC, Vieira AT, Castro TB, et al. Skin wound healing is accelerated and scarless in the absence of commensal microbiota. J Immunol. 2014;193(10):5171–80.
16. Manavella DD, Cacciottola L, Payen VL, et al. Adipose tissue-derived stem cells boost vascularization in grafted ovarian tissue by growth factor secretion and differentiation into endothelial cell lineages. Mol Hum Reprod. 2019;25(4):184–93.
17. Petry L, Kippenberger S, Meissner M, et al. Directing adipose-derived stem cells into keratinocyte-like cells: impact of medium composition and culture condition. J Eur Acad Dermatol Venereol. 2018;32(11):2010–9.
18. Kim WS, Park BS, Sung JH, et al. Wound healing effect of adipose-derived stem cells: a critical role of secretory factors on human dermal fibroblasts. J Dermatol Sci. 2007;48(1):15–24.
19. Kanehisa M, Furumichi M, Tanabe M, et al. KEGG: new perspectives on genomes, pathways, diseases and drugs. Nucleic Acids Res. 2017;45(D1):D353–D361.
20. Szklarczyk D, Gable AL, Lyon D, et al. STRING v11: protein-protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic Acids Res. 2019;47(D1):D607–D613.
21. Orci L, Pepper MS. Studying actin-dependent processes in tissue culture. Nat Rev Mol Cell Biol. 2002;3(2):133–7.
22. Sun L, Dong Y, Zhao J, et al. The CLC-2 chloride channel modulates ECM synthesis, differentiation, and migration of human conjunctival fibroblasts via the PI3K/Akt signaling pathway. Int J Mol Sci. 2016;17(6). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4926444/
23. Zhao L, Man Y, Liu S. Long non-coding RNA HULC promotes UVB-induced injury by up-regulation of BNIP3 in keratinocytes. Biomed Pharmacother. 2018;104:672–8. https://www.ncbi.nlm.nih.gov/pubmed/29803927
24. Chen Y, Li C, Xie H, et al. Infiltrating mast cells promote renal cell carcinoma angiogenesis by modulating PI3K-->AKT-->GSK3beta-->AM signaling. Oncogene. 2017;36(20):2879–88.
25. Sarkar C, Ganju RK, Pompili VJ, et al. Enhanced peripheral dopamine impairs post-ischemic healing by suppressing angiotensin receptor type 1 expression in endothelial cells and inhibiting angiogenesis. Angiogenesis. 2017;20(1):97–107.
26. Demidova-Rice TN, Hamblin MR, Herman IM. Acute and impaired wound healing: pathophysiology and current methods for drug delivery, part 1: normal and chronic wounds: biology, causes, and approaches to care. Adv Skin Wound Care. 2012;25(7):304–14.
27. Chen XL, Liang X, Sun L, et al. Microskin autografting in the treatment of burns over 70% of total body surface area: 14 years of clinical experience. Burns. 2011;37(6):973–80.
28. Xiao H, Li C, Zhou X, et al. A new method of microskin autografting with a Vaseline-based moisture dressing on granulation tissue. Burns. 2014;40(2):337–46.
29. Plock JA, Schnider JT, Zhang W, et al. Adipose- and bone marrow-derived mesenchymal stem cells prolong graft survival in vascularized composite allotransplantation. Transplantation. 2015;99(9):1765–73.
30. Wang WZ, Fang XH, Williams SJ, et al. Elimination of reperfusion-induced microcirculatory alterations in vivo by adipose-derived stem cell supernatant without adipose-derived stem cells. Plast Reconstr Surg. 2015;135(4):1056–64.
31. Keshtkar S, Azarpira N, Ghahremani MH. Mesenchymal stem cell-derived extracellular vesicles: novel frontiers in regenerative medicine. Stem Cell Res Ther. 2018;9(1):63.
32. Rehman J, Traktuev D, Li J, et al. Secretion of angiogenic and antiapoptotic factors by human adipose stromal cells. Circulation. 2004;109(10):1292–8.
33. Okonkwo UA, DiPietro LA. Diabetes and wound angiogenesis. Int J Mol Sci. 2017;18(7).
34. Sidgwick GP, McGeorge D, Bayat A. Functional testing of topical skin formulations using an optimised ex vivo skin organ culture model. Arch Dermatol Res. 2016;308(5):297–308.
35. Wang X, Wang H, Cao J, et al. Exosomes from adipose-derived stem cells promotes VEGF-C-dependent lymphangiogenesis by regulating miRNA-132/TGF-beta pathway. Cell Physiol Biochem. 2018;49(1):160–71.
36. Alexander RA, Prager GW, Mihaly-Bison J, et al. VEGF-induced endothelial cell migration requires urokinase receptor (uPAR)-dependent integrin redistribution. Cardiovasc Res. 2012;94(1):125–35.
37. Singer AJ, Clark RA. Cutaneous wound healing. N Engl J Med. 1999;341(10):738–46.
38. Eming SA, Martin P, Tomic-Canic M. Wound repair and regeneration: mechanisms, signaling, and translation. Sci Transl Med. 2014;6(265):265sr266.
39. Stojadinovic A, Elster EA, Anam K, et al. Angiogenic response to extracorporeal shock wave treatment in murine skin isografts. Angiogenesis. 2008;11(4):369–80.
40. Cohen T, Nahari D, Cerem LW, et al. Interleukin 6 induces the expression of vascular endothelial growth factor. J Biol Chem. 1996;271(2):736–41.
41. Pu CM, Liu CW, Liang CJ, et al. Adipose-derived stem cells protect skin flaps against ischemia/reperfusion injury via IL-6 expression. J Invest Dermatol. 2017;137(6):1353–62.
42. Heo SC, Jeon ES, Lee IH, et al. Tumor necrosis factor-alpha-activated human adipose tissue-derived mesenchymal stem cells accelerate cutaneous wound healing through paracrine mechanisms. J Invest Dermatol. 2011;131(7):1559–67.
43. Lu JH, Wei HJ, Peng BY, et al. Adipose-derived stem cells enhance cancer stem cell property and tumor formation capacity in Lewis lung carcinoma cells through an interleukin-6 paracrine circuit. Stem Cells Dev. 2016;25(23):1833–42.
44. Shibuya M. VEGF-VEGFR signals in health and disease. Biomol Ther (Seoul). 2014;22(1):1–9.
45. Shibuya M, Claesson-Welsh L. Signal transduction by VEGF receptors in regulation of angiogenesis and lymphangiogenesis. Exp Cell Res. 2006;312(5):549–60.
46. Ohki Y, Heissig B, Sato Y, et al. Granulocyte colony-stimulating factor promotes neovascularization by releasing vascular endothelial growth factor from neutrophils. FASEB J. 2005;19(14):2005–7.
47. Dimova I, Hlushchuk R, Makanya A, et al. Inhibition of Notch signaling induces extensive intussusceptive neo-angiogenesis by recruitment of mononuclear cells. Angiogenesis. 2013;16(4):921–37.

Author affiliations: 1. Department of Plastic and Aesthetic Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China. 2. Department of Orthopedics Surgery, Tungwah Hospital of Sun Yat-sen University, Dongguan, China. 3. Department of Joint and Trauma Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China. 4. Department of Plastic and Aesthetic Surgery, Dermatology Hospital of Southern Medical University, Guangzhou, China.

Luo Y, Yi X, Liang T, et al. Stem Cell Res Ther. 2019;10:279. https://doi.org/10.1186/s13287-019-1389-4 (revised 25 July 2019; accepted 16 August 2019).
\begin{document} \title{A Fixed-Depth Size-Hierarchy Theorem for $\AC^0[\oplus]$ via the Coin Problem} \thispagestyle{empty} \begin{abstract} In this work we prove the first \emph{Fixed-depth Size-Hierarchy Theorem} for uniform $\mathrm{AC}^0[\oplus]$. In particular, we show that for any fixed $d$, the classes $\mc{C}_{d,k}$ of functions that have uniform $\mathrm{AC}^0[\oplus]$ formulas of depth $d$ and size $n^k$ form an infinite hierarchy. We show this by exhibiting the first class of \emph{explicit} functions where we have nearly (up to a polynomial factor) matching upper and lower bounds for the class of $\mathrm{AC}^0[\oplus]$ formulas. The explicit functions are derived from the \emph{$\delta$-Coin Problem}, which is the computational problem of distinguishing between coins that are heads with probability $(1+\delta)/2$ or $(1-\delta)/2,$ where $\delta$ is a parameter that is going to $0$. We study the complexity of this problem and make progress on both upper bound and lower bound fronts. \begin{itemize} \item \textbf{Upper bounds.} For any constant $d\geq 2$, we show that there are \emph{explicit} monotone $\mathrm{AC}^0$ formulas (i.e. made up of AND and OR gates only) solving the $\delta$-coin problem that have depth $d$, size $\exp(O(d(1/\delta)^{1/(d-1)}))$, and sample complexity (i.e. number of inputs) $\mathop{\mathrm{poly}}(1/\delta).$ This matches previous upper bounds of O'Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size (which is optimal) and improves the sample complexity from $\exp(O(d(1/\delta)^{1/(d-1)}))$ to $\mathop{\mathrm{poly}}(1/\delta)$. \item \textbf{Lower bounds.} We show that the above upper bounds are nearly tight (in terms of size) even for the significantly stronger model of $\mathrm{AC}^0[\oplus]$ formulas (which are also allowed NOT and Parity gates): formally, we show that any $\mathrm{AC}^0[\oplus]$ formula solving the $\delta$-coin problem must have size $\exp(\Omega(d(1/\delta)^{1/(d-1)})).$ This strengthens a result of Shaltiel and Viola (SICOMP 2010), who prove an $\exp(\Omega((1/\delta)^{1/(d+2)}))$ lower bound for $\mathrm{AC}^0[\oplus]$, and a result of Cohen, Ganor and Raz (APPROX-RANDOM 2014), who show an $\exp(\Omega((1/\delta)^{1/(d-1)}))$ lower bound for the smaller class $\mathrm{AC}^0$. \end{itemize} The upper bound is a derandomization involving a use of Janson's inequality (from probabilistic combinatorics) and classical combinatorial designs; as far as we know, this is the first such use of Janson's inequality. For the lower bound, we prove an optimal (up to a constant factor) degree lower bound for multivariate polynomials over $\mathbb{F}_2$ solving the $\delta$-coin problem, which may be of independent interest. \end{abstract} \section{Size-Hierarchy theorems for $\mathrm{AC}^0[\oplus]$} \label{sec:sht} \setcounter{page}{1} Given any natural computational resource, an intuitive conjecture one might make is that access to more of that resource results in more computational power. \emph{Hierarchy theorems} make this intuition precise in various settings. Classical theorems in Computational complexity theory such as the time and space hierarchy theorems~\cite{Time-hie,space-hie,NTime-hie,BPP-hie} show that Turing Machine-based computational models do become strictly more powerful with more access to time or space respectively. The analogous questions in the setting of Boolean circuit complexity deal with the complexity measures of depth and size of Boolean circuits.
Both of these have been intensively studied in the case of $\mathrm{AC}^0$ circuits and by now, we have near-optimal \emph{Depth} and \emph{Size-hierarchy} theorems for $\mathrm{AC}^0$ circuits~\cite{Hastad,Rossman,Amano-SHT,HRST}. Our focus in this paper is on size-hierarchy theorems for $\mathrm{AC}^0[\oplus]$ circuits. Essentially, a size-hierarchy theorem for a class of circuits says that there are Boolean functions $f:\{0,1\}^n\rightarrow \{0,1\}$ that can be computed by circuits of some size $s = s(n)$ but not by circuits of size significantly smaller than $s$, e.g. $\sqrt{s}$. However, stated in this way, such a statement is trivial to prove, since we can easily show by counting arguments that there are more functions computed by circuits of size $s$ than by circuits of size $\sqrt{s}$ and hence there must be a function that witnesses this separation. As is standard in the setting of circuits, what is interesting is an \emph{explicit} separation. (Equivalently, we could consider the question of separating uniform versions of these circuit classes.) Strong results in this direction are known in the setting of $\mathrm{AC}^0$ circuits (i.e. constant-depth Boolean circuits made up of AND, OR and NOT gates). \paragraph{Size hierarchy theorem for $\mathrm{AC}^0$.} In order to prove a size-hierarchy theorem for $\mathrm{AC}^0,$ we need an explicit function that has circuits of size $s$ but no circuits of size less than $\sqrt{s}.$ If we fix the depth of the circuits under consideration, a result of this form follows immediately from the \emph{tight} exponential $\mathrm{AC}^0$ circuit lower bound of H\r{a}stad~\cite{Hastad} from the 80s. H\r{a}stad shows that any depth-$d$ $\mathrm{AC}^0$ circuit for the Parity function on $n$ variables must have size $\exp(\Omega(n^{1/(d-1)}));$ further, this result is tight, as demonstrated by a folklore depth-$d$ $\mathrm{AC}^0$ upper bound of $\exp(O(n^{1/(d-1)}))$. Using both the lower and upper bounds for Parity, we get a separation between circuits of size $s_0 = \exp(O(n^{1/(d-1)}))$ and $s_0^{\varepsilon}$ for some fixed $\varepsilon > 0.$ The same separation also holds between $s$ and $s^\varepsilon$ for any $s$ such that $s\leq s_0,$ since we can always take the Parity function on some $m\leq n$ variables so that the above argument goes through. We thus get a \emph{Fixed-depth Size-Hierarchy theorem} for $\mathrm{AC}^0$ for any $s(n)\leq \exp(n^{o(1)}).$ Even stronger results are known for $\mathrm{AC}^0.$ Note that the above results do not separate, for example, size $s$ circuits of depth $2$ from size $s^{\varepsilon}$ circuits of depth $3$. However, recent results of Rossman~\cite{Rossman-clique} and Amano~\cite{Amano-SHT} imply the existence of explicit functions\footnote{The explicit functions are the $k$-clique problem and variants.} that have $\mathrm{AC}^0$ circuits of depth $2$ and size $n^k$ (for any constant $k$) but not $\mathrm{AC}^0$ circuits of \emph{any} constant depth and size $n^{k-\varepsilon}.$ \paragraph{$\mathbf{\mathrm{AC}^0[\oplus]}$ setting.} Our aim is to prove size-hierarchy theorems for $\mathrm{AC}^0[\oplus]$ circuits (i.e. 
constant-depth Boolean circuits made up of AND, OR, NOT and $\oplus$ gates).\footnote{Our results also extend straightforwardly to $\mathrm{AC}^0[\mathrm{MOD}_p]$ gates for any constant prime $p$ (here, a $\mathrm{MOD}_p$ gate accepts if the sum of its input bits is non-zero modulo $p$).} As for $\mathrm{AC}^0,$ we have known exponential lower bounds for this circuit class from the 80s, this time using the results of Razborov~\cite{Razborov} and Smolensky~\cite{Smolensky}. Unfortunately, however, most of these circuit lower bounds are \emph{not} tight. For instance, we know that any $\mathrm{AC}^0[\oplus]$ circuit of depth $d$ computing the Majority function on $n$ variables must have size $\exp(\Omega(n^{1/2(d-1)})),$ but the best upper bounds are worse than $\exp(O(n^{1/(d-1)}))$ (in fact, the best known upper bound~\cite{Maj-ubd} is an $\mathrm{AC}^0$ circuit of size $\exp(O(n^{1/(d-1)}(\log n)^{1-1/(d-1)}))$).\footnote{A similar fact is also true for the $\mathrm{MOD}_p$ functions, for $p$ an odd prime.} As a direct consequence of this fact, we do not even have \emph{fixed-depth} size-hierarchy theorems of the above form for $\mathrm{AC}^0[\oplus]$: the known results only yield a separation between circuits of size $s$ and circuits of size $\exp(O(\sqrt{\log s}))$, which is a considerably worse result. In this work we present the first fixed-depth size-hierarchy theorem for $\mathrm{AC}^0[\oplus]$ circuits. Formally, we prove the following. \begin{theorem}[Fixed-depth size-hierarchy theorem] \label{thm:size-hie} There is an absolute constant $\varepsilon_0 \in (0,1)$ such that the following holds. For any fixed depth $d\geq 2$, and for infinitely many $n$ and any $s = s(n) = \exp(n^{o(1)}),$ there is an explicit monotone depth-$d$ $\mathrm{AC}^0$ formula $F_n$ on $n$ variables of size at most $s$ such that any $\mathrm{AC}^0[\oplus]$ formula computing the same function has size at least $s^{\varepsilon_0}.$ In particular, if $\mc{C}_{d,k}$ denotes the family of languages that have uniform $\mathrm{AC}^0[\oplus]$ formulas of depth $d$ and size $n^k$, then the hierarchy $\mc{C}_{d,1}\subseteq \mc{C}_{d,2}\subseteq \cdots$ is infinite. \end{theorem} We can also get a similar result for $\mathrm{AC}^0[\oplus]$ \emph{circuits} of fixed depth $d$ by using the fact that circuits of depth $d$ and size $s_1$ can be converted to formulas of depth $d$ and size $s_1^{d}.$ Using this idea, we can get a separation between circuits (in fact formulas) of depth $d$ and size $s$ and circuits of depth $d$ and size $s^{\varepsilon_0/d}.$ To get this (almost) optimal fixed-depth size-hierarchy theorem we design an explicit function $f$ and obtain tight upper and lower bounds for it for each fixed depth $d$. The explicit function is based on the \emph{Coin Problem}, which we define below. \section{The Coin Problem} The Coin Problem is the following natural computational problem. Given a two-sided coin that is heads with probability either $(1+\delta)/2$ or $(1-\delta)/2$, decide which of these is the case. The algorithm is allowed many independent tosses of the coin and has to accept in the former case with probability at least $0.9$ (say) and accept in the latter case with probability at most $0.1.$ The formal statement of the problem is given below.
\begin{definition}[The $\delta$-Coin Problem] \label{def:coinproblem} For any $\alpha\in [0,1]$ and integer $N\geq 1$, let $D_{\alpha}^N$ be the product distribution over $\{0,1\}^N$ obtained by setting each bit to $1$ independently with probability $\alpha.$ Let $\delta \in (0,1)$ be a parameter. Given an $N\in \mathbb{N},$ we define the probability distributions $\mu_{\delta,0}^N$ and $\mu_{\delta,1}^N$ to be the distributions $D^N_{(1-\delta)/2}$ and $D^N_{(1+\delta)/2}$ respectively. We omit the $\delta$ in the subscript when it is clear from context. Given a function $g: \{0,1\}^N \rightarrow \{0,1\}$, we say that \emph{$g$ solves the $\delta$-coin problem with error $\varepsilon$} if \begin{equation} \label{eq:cpdefn} \prob{\bm{x}\sim \mu_0^N}{g(\bm{x}) = 1} \leq \varepsilon \text{ and } \prob{\bm{x}\sim \mu_1^N}{g(\bm{x}) = 1} \geq 1-\varepsilon. \end{equation} In the case that $g$ solves the coin problem with error $0.1,$ we simply say that $g$ solves the $\delta$-coin problem (and omit mention of the error). We say that the \emph{sample complexity} of $g$ is $N$. \end{definition} We think of $\delta$ as a parameter that is going to $0$. We are interested in both the sample complexity and \emph{computational complexity} of functions solving the coin problem. Both these complexities are measured as a function of the parameter $\delta.$ The problem is folklore, and has also been studied (implicitly and explicitly) in many papers in the Computational complexity literature~\cite{ABO,OW,Amano,SV,BV,Viola,Steinberger,CGR,LV,RossmanS}. It was formally introduced in the work of Brody and Verbin~\cite{BV}, who studied it with a view to devising pseudorandom generators for Read-once Branching Programs (ROBPs). It is a standard fact that $\Omega(1/\delta^2)$ samples are necessary to solve the $\delta$-coin problem (irrespective of the computational complexity of the underlying function). Further, the algorithm that takes $O(1/\delta^2)$ many independent samples and accepts if and only if the majority of the coin tosses are heads, does indeed solve the $\delta$-coin problem. We call this the ``trivial solution'' to the coin problem. It is not clear, however, if this is the most computationally ``simple'' method of solving the coin problem. Specifically, one can ask if the $\delta$-coin problem can be solved in computational models that cannot compute the Boolean Majority function on $O(1/\delta^2)$ many input bits. (Recall that the Majority function on $n$ bits accepts inputs of Hamming weight greater than $n/2$ and rejects other inputs.) Such questions have received quite a bit of attention in the computational complexity literature. Our focus in this paper is on the complexity of this problem in the setting of $\mathrm{AC}^0$ and $\mathrm{AC}^0[\oplus]$ circuits. Perhaps surprisingly, the Boolean circuit complexity of the coin problem in the above models is \emph{not} the same as the circuit complexity of the Boolean Majority function. We describe below some of the interesting upper as well as lower bounds known for the coin problem in the setting of constant-depth Boolean circuits. \paragraph{Known upper bounds.} It is implicit in the results of O'Donnell and Wimmer~\cite{OW} and Amano~\cite{Amano} (and explicitly noted in the paper of Cohen, Ganor and Raz~\cite{CGR}) that the complexity of the coin problem is closely related to the complexity of \emph{Promise} and \emph{Approximate} variants of the Majority function. 
Here, a \emph{promise majority} is a function that accepts inputs of relative Hamming weight at least $(1+\delta)/2$ and rejects inputs of relative Hamming weight at most $(1-\delta)/2;$ and an \emph{approximate majority} is a function that agrees with the Majority function on $90\%$ of its inputs.\footnote{Unfortunately, both these variants of the Majority function go by the name of ``approximate majority'' in the literature.} O'Donnell and Wimmer~\cite{OW} and Amano~\cite{Amano} show that the $\mathrm{AC}^0$ circuit complexity of some approximate majorities is superpolynomially \emph{smaller} than the complexity of the Majority function. More specifically, the results in~\cite{OW,Amano} imply that there are approximate majorities that are computed by monotone $\mathrm{AC}^0$ \emph{formulas} of depth $d$ and size $\exp(O(dn^{1/2(d-1)}))$, while a well-known result of H\r{a}stad~\cite{Hastad} implies that any $\mathrm{AC}^0$ circuit of depth $d$ for computing the Majority function must have size $\exp(\Omega(n^{1/(d-1)})).$ For example, when $d=2$, there are approximate majorities that have formulas of size $\exp(O(\sqrt{n}))$ while any circuit for the Majority function must have size $\exp(\Omega(n)).$ These upper bounds were slightly improved to $\mathrm{AC}^0$ \emph{circuits} of size $\exp(O(n^{1/2(d-1)}))$ in a recent result of Rossman and Srinivasan~\cite{RossmanS}. The key step in the results of~\cite{OW,Amano} is to show that the $\delta$-coin problem can be solved by explicit read-once\footnote{i.e. each variable in the formula appears exactly once} monotone $\mathrm{AC}^0$ formulas of depth $d$ and size $\exp(O(d(1/\delta)^{1/(d-1)})).$ (This is improved to circuits of size $\exp(O((1/\delta)^{1/(d-1)}))$ in~\cite{RossmanS}. However, these circuits are \emph{not} explicit.) Compare this to the trivial solution (i.e. computing Majority on $\Theta(1/\delta^2)$ inputs) that requires $\mathrm{AC}^0$ circuit size $\exp(\Omega((1/\delta)^{2/(d-1)})),$ which is superpolynomially worse. \paragraph{Known lower bounds.} Lower bounds for the coin problem have also been subject to a good deal of investigation. Shaltiel and Viola~\cite{SV} show that if the $\delta$-coin problem can be solved by a circuit $C$ of size $s$ and depth $d$, then there is a circuit $C'$ of size $\mathop{\mathrm{poly}}(s)$ and depth $d+3$ that computes the Majority function on $n=\Omega(1/\delta)$ inputs. Using H\r{a}stad's lower bound for Majority~\cite{Hastad}, this implies that any depth-$d$ $\mathrm{AC}^0$ circuit $C$ solving the $\delta$-coin problem must have size $\exp(\Omega((1/\delta)^{1/(d+2)})).$ Using Razborov and Smolensky's~\cite{Razborov,Smolensky93} lower bounds, this also yields the same lower bound for the more powerful circuit class $\mathrm{AC}^0[\oplus]$.\footnote{Using~\cite{SV} as a black box will yield a weaker lower bound of $\exp(\Omega((1/\delta)^{1/2(d+2)})).$ However, slightly modifying the proof of Shaltiel and Viola, we can obtain circuits of depth $d+3$ that \emph{approximate} the Majority function on $1/\delta^2$ inputs instead of computing the Majority function on $(1/\delta)$ inputs \emph{exactly}. This yields the stronger lower bound given here.} In a later result, Aaronson~\cite{Aaronson} observed that a stronger lower bound can be deduced for $\mathrm{AC}^0$ by constructing a circuit $C''$ of depth $d+2$ that only distinguishes inputs of Hamming weight $n/2$ from inputs of Hamming weight $(n/2)-1$.
By H\r{a}stad's results, this suffices to recover a lower bound of $\exp(\Omega((1/\delta)^{1/(d+1)}))$ for $\mathrm{AC}^0$, but does not imply anything for $\mathrm{AC}^0[\oplus]$ (since the parity function can distinguish between inputs of weight $n/2$ and $(n/2)-1$). Note that these lower bounds for the $\delta$-coin problem, while exponential, do not meet the upper bounds described above. In fact, they are quasipolynomially weaker. Lower bounds for the closely related promise and approximate majorities were proved by Viola~\cite{Viola} and O'Donnell and Wimmer~\cite{OW} respectively. Viola~\cite{Viola} shows that any $\mathop{\mathrm{poly}}(n)$-sized depth-$d$ $\mathrm{AC}^0$ circuit cannot compute a promise majority for $\delta = o(1/(\log n)^{d-2})$. O'Donnell and Wimmer~\cite{OW} show that any depth-$d$ $\mathrm{AC}^0$ circuit that approximates the Majority function on 90\% of its inputs must have size $\exp(\Omega(n^{1/2(d-1)}))$. Using the connection between the coin problem and approximate majority, it follows that any \emph{monotone} depth-$d$ $\mathrm{AC}^0$ circuit solving the $\delta$-coin problem must have size $\exp(\Omega((1/\delta)^{1/(d-1)}))$, matching the upper bounds above. The lower bound of~\cite{OW} is based on the Fourier-analytic notion of the \emph{Total Influence} of a Boolean function (see~\cite[Chapter 2]{ODbook}) and standard upper bounds on the total influence of a Boolean function computed by a small $\mathrm{AC}^0$ circuit~\cite{LMN,Boppana97}. Using more Fourier analytic ideas~\cite{LMN}, Cohen, Ganor and Raz~\cite{CGR} proved near-optimal $\mathrm{AC}^0$ circuit lower bounds for the $\delta$-coin problem (with no assumptions on monotonicity). They show that any depth-$d$ $\mathrm{AC}^0$ circuit for the $\delta$-coin problem must have size $\exp(\Omega((1/\delta)^{1/(d-1)}))$, nearly matching the upper bound constructions above. \paragraph{The Coin Problem and Size Hierarchy Theorems for $\mathrm{AC}^0[\oplus]$.} Recall that to prove size-hierarchy theorems for $\mathrm{AC}^0[\oplus],$ we need to come up with explicit functions for which we can prove near-tight lower bounds. One class of functions for which the Razborov-Smolensky proof technique does yield such a lower bound is the class of approximate majorities defined above. Unfortunately, however, this does not yield an explicit separation, since the functions constructed in~\cite{OW,Amano,RossmanS} are randomized and not explicit. These circuits are obtained by starting with explicit large monotone read-once formulas for the coin problem from~\cite{OW,Amano} and replacing each variable with one of the $n$ variables of the approximate majority; one can show that this probabilistic procedure produces an approximate majority with high probability. However, explicitness is then lost. Our starting point is that instead of working with approximate majorities, we can directly work with the explicit formulas solving the coin problem. As shown in~\cite{OW,Amano}, there are explicit formulas of size $\exp(O(d(1/\delta)^{1/(d-1)}))$ solving the $\delta$-coin problem. Since these yield optimal-sized circuits for computing approximate majorities, it follows that the functions computed by these formulas cannot be computed by much smaller circuits. While this is true, nevertheless the explicit formulas of~\cite{OW,Amano} do not yield anything non-trivial by way of size-hierarchy theorems. This is because, as noted above, these formulas are \emph{read-once}.
Hence, showing that the underlying functions cannot be computed in smaller size would prove a separation between size $s = O(n)$ circuits and circuits of size much smaller than $n$, which is trivial. The way we circumvent this obstacle is to construct explicit circuits solving the $\delta$-coin problem with optimal size and much smaller sample complexity. In fact, we are able to bring down the sample complexity from exponential to polynomial in $(1/\delta)$. This allows us to prove a size hierarchy theorem for all $s$ up to $\exp(n^{o(1)})$. \subsection{Our results for the Coin Problem} We make progress on the complexity of the coin problem on both the upper bound and lower bound fronts. \paragraph{Upper bounds.} Note that the upper bound results known so far only yield circuit size and depth upper bounds for the coin problem, and do not say anything about the sample complexity of the solution. In fact, the explicit $\mathrm{AC}^0$ formulas of O'Donnell and Wimmer~\cite{OW} and Amano~\cite{Amano} are \emph{read-once} in the sense that each input corresponds to a distinct input variable. Hence, these results imply explicit formulas of size $s = \exp(O(d(1/\delta)^{1/(d-1)}))$ and sample size $\Theta(s)$ for the $\delta$-coin problem. (Recall that, in contrast, the trivial solution has sample complexity only $O(1/\delta^2).$) The sample complexity of these formulas can be reduced to $O(1/\delta^2)$ by a probabilistic argument (as essentially shown by~\cite{OW,Amano}; more on this below), but then we no longer have explicit formulas. The circuit construction of Rossman and Srinivasan~\cite{RossmanS} can be seen to use $O(1/\delta^2)$ samples, but is again not explicit. We show that the number of samples can be reduced to $\mathop{\mathrm{poly}}(1/\delta)$ (where the degree of the polynomial depends on the depth $d$ of the circuit), which is in the same ballpark as the trivial solution, while retaining both the size and the explicitness of the formulas. The result is as follows. \begin{theorem}[Explicit formulas for the coin problem with small sample complexity] \label{thm:main-intro} Let $\delta \in (0,1)$ be a parameter and $d\geq 2$ any fixed constant. There is an explicit depth-$d$ monotone $\mathrm{AC}^0$ formula $\Gamma_d$ that solves the $\delta$-coin problem, where $\Gamma_d$ has size $\exp(O(d(1/\delta)^{1/(d-1)}))$ and sample complexity $(1/\delta)^{2^{O(d)}}.$ (All the constants implicit in the $O(\cdot)$ notation are absolute constants.) \end{theorem} \noindent {\bf Approximate majority and the coin problem.} This result may be interpreted as a ``partial derandomization'' of the approximate majority construction of~\cite{OW,Amano} in the following sense. It is implicit in~\cite{OW,Amano} that an approximate majority on $n$ variables can be obtained by starting with a \emph{monotone} circuit $C$ solving the $\delta$-coin problem for $\delta = \Theta(1/\sqrt{n})$, and replacing each input of $C$ with a random input among the $n$ input bits on which we want an approximate majority. While, as noted above, the coin-problem-solving circuits of~\cite{OW,Amano} have exponential sample-complexity, our circuits only have polynomial sample-complexity, leading to a much more randomness-efficient way of constructing such an approximate majority. Indeed, this feature of our construction is crucial for proving the Size-Hierarchy theorem for $\mathrm{AC}^0[\oplus]$ circuits (Theorem~\ref{thm:size-hie}).
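To make this reduction concrete, here is a back-of-the-envelope version of the substitution argument, with illustrative constants (the precise parameters are those chosen in~\cite{OW,Amano}). Suppose the monotone circuit $C$ solves the $\delta$-coin problem for $\delta = c/\sqrt{n}$, and substitute each of its $N$ inputs independently with a uniformly random one of the $n$ new variables. For any fixed $x\in \{0,1\}^n$ of relative Hamming weight at least $(1+\delta)/2$, the substituted inputs of $C$ are i.i.d. coins of bias at least $(1+\delta)/2$, so by monotonicity of $C$,
\[
\prob{\bm{x}'}{C(\bm{x}') = 1} \;\geq\; \prob{\bm{y}\sim \mu_1^N}{C(\bm{y})=1} \;\geq\; 0.9,
\]
where $\bm{x}'$ denotes the random input to $C$ obtained from $x$ via the substitution; symmetrically, $C$ rejects any $x$ of relative weight at most $(1-\delta)/2$ with probability at least $0.9$. For a uniformly random $x$, the probability that its Hamming weight lies in the middle band of width $\delta n = c\sqrt{n}$ around $n/2$ is $O(c)$, since the weight has standard deviation $\Theta(\sqrt{n})$. Hence the expected fraction of inputs on which the substituted circuit disagrees with Majority is at most $0.1 + O(c)$, and averaging over the random substitution yields a fixed substitution that agrees with Majority on a $0.9 - O(c)$ fraction of inputs; adjusting the error parameters appropriately gives an approximate majority.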
\paragraph{Lower bounds.} As noted above, Shaltiel and Viola~\cite{SV} prove that any $\mathrm{AC}^0[\oplus]$ circuit of depth $d$ solving the $\delta$-coin problem must have size at least $\exp(\Omega((1/\delta)^{1/(d+2)})).$ For the weaker class of $\mathrm{AC}^0$ circuits, Cohen, Ganor and Raz~\cite{CGR} proved an optimal bound of $\exp(\Omega((1/\delta)^{1/(d-1)}))$. We are also able to strengthen these incomparable results by proving an optimal lower bound for $\mathrm{AC}^0[\oplus]$ circuits. More formally, we prove the following. \begin{theorem}[Lower bounds for the coin problem] \label{thm:lb-intro} If $g$ is a Boolean function solving the $\delta$-coin problem, then any $\mathrm{AC}^0[\oplus]$ formula of depth $d$ for $g$ must have size $\exp(\Omega(d(1/\delta)^{1/(d-1)}))$. (The $\Omega(\cdot )$ hides an absolute constant.) \end{theorem} While the above result is stated for $\mathrm{AC}^0[\oplus]$ \emph{formulas}, it easily implies an $\exp(\Omega((1/\delta)^{1/(d-1)}))$ lower bound for depth-$d$ \emph{circuits}, since any $\mathrm{AC}^0[\oplus]$ circuit of size $s$ and depth $d$ can be converted to an $\mathrm{AC}^0[\oplus]$ formula of size $s^d$ and depth $d$. We thus get a direct extension of the results of Shaltiel and Viola~\cite{SV} and Cohen, Ganor and Raz~\cite{CGR}. The proof of this result is closely related to the results of Razborov~\cite{Razborov} and Smolensky~\cite{Smolensky} (also see~\cite{szegedy,Smolensky93}) that prove lower bounds for $\mathrm{AC}^0[\oplus]$ circuits computing the Majority function. For \emph{monotone} functions\footnote{Recall that a function $g:\{0,1\}^m\rightarrow \{0,1\}$ is monotone if it is non-decreasing w.r.t. the standard partial order on the hypercube.}, the lower bound immediately follows from the standard lower bounds of~\cite{Razborov} and~\cite{Smolensky} for approximate majorities\footnote{The standard lower bounds of Razborov and Smolensky are usually stated for computing the hard function (e.g. Majority) \emph{exactly}. However, it is easily seen that the proofs only use the fact that the circuit computes the function on most (say 90\%) of its inputs (see, e.g.~\cite{RossmanS}). In particular, this yields lower bounds even for approximate majorities, which, moreover, turn out to be tight. This can be seen as an alternate proof of the (later) lower bound of O'Donnell and Wimmer~\cite[Theorem 4]{OW} for a stronger class of circuits. (The lower bound of~\cite{OW} only holds for $\mathrm{AC}^0$.)} (actually, we need a slightly stronger lower bound for $\mathrm{AC}^0[\oplus]$ \emph{formulas}) and the reduction~\cite{OW,Amano} from computing approximate majorities to the coin problem outlined above. This special case is already enough for the size-hierarchy theorem stated in Section~\ref{sec:sht}. However, to prove the result in the non-monotone setting, it is not clear how to use the lower bounds of~\cite{Razborov,Smolensky} directly. Instead, we use the ideas behind these results, specifically the connections between $\mathrm{AC}^0[\oplus]$ circuits and low-degree polynomials. We show that if a function $g(x_1,\ldots,x_N)$ solves the $\delta$-coin problem, then its degree, as a polynomial from $\mathbb{F}_2[x_1,\ldots,x_N]$, must be at least $\Omega(1/\delta)$ (independent of its sample complexity). From this statement and Razborov's~\cite{Razborov} low-degree polynomial approximations for $\mathrm{AC}^0[\oplus],$ it is easy to infer the lower bound.
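To indicate the shape of this inference, suppose (purely for illustration; the precise approximation statement used in the paper may differ in its exact parameters) that every $\mathrm{AC}^0[\oplus]$ formula of size $s$ and depth $d$ is approximated, with small constant error under the relevant distributions, by a polynomial over $\mathbb{F}_2$ of degree $O(((\log s)/d)^{d-1})$. Such an approximating polynomial solves the $\delta$-coin problem with slightly worse (but still constant) error, for which the same degree lower bound holds up to constants, so
\[
\left(\frac{O(\log s)}{d}\right)^{d-1} \;\geq\; \Omega\!\left(\frac{1}{\delta}\right)
\quad\Longrightarrow\quad
\log s \;\geq\; \Omega\!\left(d\cdot \left(\frac{1}{\delta}\right)^{1/(d-1)}\right),
\]
which is the bound of Theorem~\ref{thm:lb-intro}. The point here is only how a degree lower bound of $\Omega(1/\delta)$ translates into the stated size lower bound.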
Further, we think that the statement about polynomials is interesting in its own right. Note that Theorems~\ref{thm:main-intro} and~\ref{thm:lb-intro} immediately imply the Fixed-depth Size Hierarchy Theorem for $\mathrm{AC}^0[\oplus]$ (Theorem~\ref{thm:size-hie}). \paragraph{Independent work of Chattopadhyay, Hatami, Lovett and Tal~\cite{CHLT}.} A beautiful recent paper of Chattopadhyay et al.~\cite{CHLT} proves a result on the Fourier spectrum of low-degree polynomials over $\mathbb{F}_2$ (Theorem 3.1 in~\cite{CHLT}) which is equivalent\footnote{We thank Avishay Tal (personal communication) for pointing out to us that the results of~\cite{CHLT} imply the degree lower bounds for the coin problem using an observation of~\cite{CHHL}. This direction in fact works for any class of functions closed under \emph{restrictions} (i.e. setting inputs to constants from $\{0,1\}$).} to the degree lower bound on the coin problem mentioned above. Indeed, the main observation, which is an extension of the Smolensky~\cite{Smolensky,Smolensky93} lower bound for the Majority function, is common to our proof as well as that of~\cite{CHLT}. \subsection{Proof Outline} \paragraph{Upper bounds.} We start with a description of the read-once formula construction of~\cite{OW, Amano}. In these papers, it is shown that for every $d\geq 2$, there is an explicit read-once formula $F_d$ that solves the $\delta$-coin problem. This formula $F_d$ is defined as follows. We fix a $d\geq 2$ and let $m = \Theta((1/\delta)^{1/(d-1)})$, a large positive integer. We define positive integer parameters $f_1,\ldots,f_d \approx \exp(m),$\footnote{These numbers have to be chosen carefully for the proof, but we do not need to know them exactly here.} and define the formula $F_d$ to be a read-once formula with alternating AND and OR gates where the gates at height $i$ in the formula all have fan-in $f_i$. (It does not matter if we start with AND gates or OR gates, but for definiteness, let us assume that the bottom layer of gates in the formula is made up of AND gates.) Each leaf of the formula is labelled by a distinct variable (or equivalently, the formula is read-once). The formula $F_d$ is shown to solve the coin problem (for suitable values of $f_1,\ldots,f_d$). Note that the size of the formula, as well as its sample complexity, is $\Theta(f_1\cdots f_d)$, which turns out to be $\exp(\Omega(md)) = \exp(\Omega(d(1/\delta)^{1/(d-1)})).$ To show that the formula $F_d$ solves the coin problem, we proceed as follows. For each $i\in \{1,\ldots,d\}$, let us define $\mathrm{Acc}_i^{(0)}$ (resp. $\mathrm{Rej}_i^{(0)}$) to be the probability that some subformula $F_i$ of height $i$ accepts (resp. rejects) an input from the distribution $\mu_0^{N_i}$ (where $N_i$ is the sample complexity of $F_i$). Similarly, also define $\mathrm{Acc}_i^{(1)}$ and $\mathrm{Rej}_i^{(1)}$ w.r.t. the distribution $\mu_1^{N_i}.$ Define \[ p_i^{(b)} = \min\{\mathrm{Acc}_i^{(b)},\mathrm{Rej}_i^{(b)}\} \] for each $b\in \{0,1\}$. Note that these definitions are independent of the exact subformula $F_i$ of height $i$ that we choose. It can be shown via a careful analysis that for each odd $i < d$ and each $b\in \{0,1\}$, $p_i^{(b)} = \mathrm{Acc}_i^{(b)} = \Theta(1/2^m)$ (i.e.
the acceptance probability is smaller than the rejection probability and is roughly $1/2^m$) and we have \begin{equation} \label{eq:ratio} \frac{p_i^{(1)}}{p_i^{(0)}} = (1+\Theta(m^i\delta)). \end{equation} (Note that when $i < d-1,$ the quantity $m^i\delta = o(1)$ and so $p_i^{(0)}$ and $p_i^{(1)}$ are actually quite close to each other.) An analogous fact holds for even $i$ and rejection probabilities, where we now measure $p_i^{(0)}/p_i^{(1)}$ instead. At $i=d-1$, we get that the ratio is in fact a large constant. From here, it is easy to argue that $F_d$ accepts an input from $\mu_1^{N_d}$ w.h.p., and rejects an input from $\mu_0^{N_d}$ w.h.p. This concludes the proof of the fact that $F_d$ solves the $\delta$-coin problem. We now describe the ideas behind our derandomization. The precise calculations that are required for the analysis of $F_d$ use crucially the fact that the formulas are read-once. In particular, since the read-once property ensures that each gate computes the AND or OR of variable-disjoint subformulas, it is easy to compute (using independence) the probability that these formulas accept an input from the distributions $\mu_0^{N_d}$ or $\mu_1^{N_d}.$ In the derandomized formulas that we construct, we can no longer afford read-once formulas, since the size of our formulas is (necessarily) exponential in $(1/\delta),$ but the number of distinct variables (i.e. the sample complexity) is required to be $\mathop{\mathrm{poly}}(1/\delta).$ Thus, we need to be able to carry out the same kinds of precise computations for ANDs or ORs of formulas that share many variables. For this, we use a tool from probabilistic combinatorics named \emph{Janson's inequality}~\cite{janson,alon-spencer}. Roughly speaking, this inequality says the following. Say we have a \emph{monotone} formula $F$ over $n$ Boolean variables that is the OR of $M$ subformulas $F_1,\ldots,F_M$, and we want to analyze the probability that $F$ rejects a random input $\bf{x}$ from some product distribution over $\{0,1\}^n.$ Let $p_i$ denote the probability that $F_i$ rejects a random input. If the $F_i$s are variable disjoint, we immediately have that $F$ rejects $\bf{x}$ with probability $\prod_i p_i.$ However, when the $F_i$s are not variable disjoint but \emph{most pairs} of these subformulas are variable disjoint, then Janson's inequality allows us to infer that this probability is \emph{upper bounded} by $\left(\prod_i p_i\right)\cdot (1+\alpha)$ where $\alpha$ is quite small. Furthermore, by the monotonicity of $F$ and the resulting positive correlation between the distinct $F_i$, we immediately see that the probability that $F$ rejects is always \emph{lower bounded} by $\prod_i p_i$ and hence we get \[ \prod_i p_i \leq \prob{\bf{x}}{\text{$F$ rejects $\bf{x}$}} \leq \left(\prod_i p_i\right)\cdot (1+\alpha). \] In other words, the estimate $\prod_i p_i$, which is an exact estimate of the rejection probability of $F$ in the disjoint case, is a good \emph{multiplicative} approximation to the same quantity in the correlated case. Note that this is \emph{exactly} the kind of approximation that would allow us to recover an inequality of the form in (\ref{eq:ratio}) and allow an analysis similar to that of~\cite{OW,Amano} to go through even in the correlated setting.
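As a quick numerical illustration (ours, and purely illustrative; it plays no role in any proof), the following self-contained Python snippet compares, for a tiny OR of AND-terms in which only a few pairs of terms share variables, the exact rejection probability with the product estimate $\prod_i p_i$ and with the upper bound $\left(\prod_i p_i\right)\cdot\exp(2\Delta)$ from Janson's inequality (stated formally in Section~\ref{sec:janson}); the instance and all parameters below are arbitrary choices made for the example.
\begin{verbatim}
import itertools, math

# Toy instance: F = C_1 OR ... OR C_4, each C_i an AND of the listed
# variables; only two of the six pairs of terms share a variable.
N = 10
terms = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (2, 9, 4)]
p = 0.3   # each x_j is 1 independently with probability p

def weight(x):
    # probability of the assignment x under the product distribution
    return p ** sum(x) * (1 - p) ** (N - sum(x))

# exact probability that F rejects (no AND-term fully satisfied)
exact = sum(weight(x) for x in itertools.product((0, 1), repeat=N)
            if not any(all(x[j] for j in t) for t in terms))

# product estimate; it would be exact for variable-disjoint terms
prod_est = math.prod(1 - p ** len(t) for t in terms)

# Delta: over intersecting pairs, Pr[both AND-terms are satisfied]
Delta = sum(p ** len(set(s) | set(t))
            for s, t in itertools.combinations(terms, 2)
            if set(s) & set(t))

# Janson: prod_est <= exact <= prod_est * exp(2 * Delta)
print(prod_est, exact, prod_est * math.exp(2 * Delta))
\end{verbatim}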
\begin{remark} \label{rem:janson-intro} While Janson's inequality has been used earlier in the context of Boolean circuit complexity (for example in the work of Rossman~\cite{Rossman-clique,Rossmanmonclique}), as far as we know, this is the first application in the area of the fact that the inequality actually yields a \emph{multiplicative approximation} to the probability being analyzed. \end{remark} This observation motivates the construction of our derandomized formulas (with only $\mathop{\mathrm{poly}}(1/\delta)$ variables). For each depth $d$, we construct the derandomized formula $\Gamma_d$ as follows. The structure (i.e. fan-ins) of the formula $\Gamma_d$ is exactly the same as that of $F_d$. However, the subformulas of $\Gamma_d$ are not variable disjoint. Instead, we use the $n_d$ variables of $\Gamma_d$ to obtain a family $\mc{F}$ of $f_d$ many sets of size $n_{d-1}$ (one for each subformula of depth $d-1$) in a way that ensures that Janson's inequality can be used to analyze the acceptance or rejection probability of $\Gamma_d$. As mentioned above, to apply Janson's inequality, this family $\mc{F}$ must be chosen in a way that ensures that most pairs of sets in $\mc{F}$ are disjoint. It turns out that we also need other properties of this family to ensure that the multiplicative approximation $(1+\alpha)$ is suitably close to $1$. However, we show that standard designs due to Nisan and Wigderson~\cite{Nisan, NW} used in the construction of pseudorandom generators already have these properties (though these properties were not needed in these earlier applications, as far as we know). With these properties in place, we can analyze the derandomized formula $\Gamma_d.$ For each subformula $\Gamma$ of depth $i\leq d$, we can define $p_{\Gamma}^{(b)}$ analogously to above. Using a careful analysis, we show that $p_{\Gamma}^{(b)} \in [p_i^{(b)}(1-\alpha_i),p_i^{(b)}(1+\alpha_i)]$ for a suitably small $\alpha_i.$ This allows us to infer an analogue of (\ref{eq:ratio}) for $p_\Gamma^{(1)}$ and $p_\Gamma^{(0)}$, which in turn can be used to show (as in the case of $F_d$) that $\Gamma_d$ solves the $\delta$-coin problem. \paragraph{Lower bounds.} We now describe the ideas behind the proof of Theorem~\ref{thm:lb-intro}. It follows from the result of O'Donnell and Wimmer~\cite{OW} that there is a close connection between the $\delta$-coin problem and computing an approximate majority on $n$ Boolean inputs. In particular, it follows from this connection that if there is an $\mathrm{AC}^0[\oplus]$ formula $F$ of size $s$ and depth $d$ solving the $\delta$-coin problem for $\delta = \Theta(1/\sqrt{n})$ that \emph{additionally computes a monotone function},\footnote{Note that we are not restricting the formula $F$ itself to be monotone. We only require that it computes a monotone function.} then we also have a formula $F'$ of size $s$ and depth $d$ computing an approximate majority on $n$ inputs. (The formula $F'$ is obtained by substituting each input of $F$ with a uniformly random input among the $n$ inputs to the approximate majority.)
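To make this substitution concrete, here is a small simulation (ours, not taken from~\cite{OW}; the trivial majority-of-samples solver stands in for $F$, and the constants are chosen loosely, purely for illustration) of how a solver for the $\delta$-coin problem is turned into a candidate approximate majority by identifying each of its inputs with a uniformly random coordinate of an $n$-bit string.
\begin{verbatim}
import random

random.seed(0)
delta = 0.1
n = 100                    # inputs of the approximate-majority candidate F'
N = int(10 / delta ** 2)   # sample complexity of the stand-in solver g

def g(samples):            # stand-in coin-problem solver: a majority vote
    return int(2 * sum(samples) > len(samples))

proj = [random.randrange(n) for _ in range(N)]   # the random substitution

def F_prime(x):            # F'(x) = g(x_{i_1}, ..., x_{i_N})
    return g([x[i] for i in proj])

def agreement_with_majority(weight, trials=200):
    hits = 0
    for _ in range(trials):
        x = [1] * weight + [0] * (n - weight)
        random.shuffle(x)
        hits += int(F_prime(x) == int(2 * weight > n))
    return hits / trials

# F' should agree with MAJ_n w.h.p. once |x| is delta-far from n/2
print(agreement_with_majority(int(n * (1 + delta) / 2)))  # close to 1
print(agreement_with_majority(int(n * (1 - delta) / 2)))  # close to 1
\end{verbatim}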
Since standard lower bounds for $\mathrm{AC}^0[\oplus]$ formulas~\cite{Razborov,Smolensky,RossmanS} imply lower bounds for computing approximate majorities, we immediately get a lower bound of $\exp(\Omega(d(1/\delta)^{1/(d-1)}))$ for $\mathrm{AC}^0[\oplus]$ formulas $F$ that solve the $\delta$-coin problem \emph{by computing a monotone function.} For the general case, the above reduction from approximate majorities to the coin problem no longer works and we have to do something different. Our strategy is to look inside the proof of the $\mathrm{AC}^0[\oplus]$ formula lower bounds and use these ideas to prove the general lower bound for the coin problem. In particular, by the polynomial-approximation method due to Razborov~\cite{Razborov} (and a quantitative improvement from~\cite{RossmanS}), it suffices to prove degree lower bounds on polynomials from $\mathbb{F}_2[x_1,\ldots,x_N]$ that solve the $\delta$-coin problem. We are able to prove the following theorem in this direction, which we believe is independently interesting. \begin{theorem} \label{thm:lbd-polys-intro} Let $g\in \mathbb{F}_2[x_1,\ldots,x_N]$ solve the $\delta$-coin problem. Then, $\deg(g) = \Omega(1/\delta).$ \end{theorem} \begin{remark} \label{rem:lbd-polys} \begin{enumerate} \item Note that the degree lower bound in Theorem~\ref{thm:lbd-polys-intro} is independent of the sample complexity $N$ of the underlying function $g$. \item The lower bound obtained is tight up to a constant factor. This can be seen by using the fact that this yields tight lower bounds for the coin problem (which we show), or by directly approximating the Majority function on $1/\delta^2$ bits suitably~\cite{Gopalanetalapprox} to obtain a degree $O(1/\delta)$ polynomial that solves the $\delta$-coin problem. \item A weaker degree lower bound of $\Omega(1/(\delta\cdot (\log^2 (1/\delta))))$ can be obtained by using an idea of Shaltiel and Viola~\cite{SV}, who show how to use any solution to the coin problem and some additional Boolean circuitry to approximate the Majority function on $1/\delta^2$ inputs. Unfortunately, this weaker degree lower bound only implies a formula lower bound that is superpolynomially weaker than the upper bound. \item As mentioned above, an independent recent paper of Chattopadhyay et al.~\cite{CHLT} proves a result on the Fourier spectrum of low-degree polynomials which can be used to recover the degree bound in Theorem~\ref{thm:lbd-polys-intro} (Avishay Tal (personal communication)). Conversely, Theorem~\ref{thm:lbd-polys-intro} can be used to recover the corresponding result of Chattopadhyay et al.~\cite{CHLT}. \end{enumerate} \end{remark} The proof of Theorem~\ref{thm:lbd-polys-intro} is inspired by a standard result in circuit complexity that says that any polynomial $P\in \mathbb{F}_2[x_1,\ldots,x_n]$ that computes an approximate majority must have degree $\Omega(\sqrt{n}).$ The basic ideas of this proof go back to Smolensky~\cite{Smolensky},\footnote{Though Razborov~\cite{Razborov} was the first to prove an exponential $\mathrm{AC}^0[\oplus]$ circuit lower bound for the Majority function, he did not explicitly prove a lower bound on the degree of approximating polynomials for the Majority function. Instead, he worked with a different symmetric function for the polynomial question.} though the result itself was proved in Szegedy's PhD thesis~\cite{szegedy} and a later paper of Smolensky~\cite{Smolensky93}. 
Here, we modify a slightly different ``dual'' proof of this result which appears in the work of Kopparty and Srinivasan~\cite{KS}, which itself builds on ideas of Aspnes, Beigel, Furst and Rudich~\cite{ABFR} and Green~\cite{Green}. (The proof idea of Smolensky~\cite{Smolensky} can also be made to work.) The first idea is to note that the proof in~\cite{KS} can be modified to prove a lower bound of $\Omega(\sqrt{n})$ on the degree of any $P\in \mathbb{F}_2[x_1,\ldots,x_n]$ that satisfies the following condition: there exist constants $a > b$ such that $P$ agrees with the Majority function on $n$ bits on all but an $\varepsilon$ fraction of inputs of Hamming weight in $[(n/2)-a\sqrt{n},(n/2)-b\sqrt{n}]\cup [(n/2)+b\sqrt{n},(n/2)+a\sqrt{n}]$ (where $\varepsilon$ is suitably small depending on $a,b$). Using the sampling argument of O'Donnell and Wimmer~\cite{OW} and the above degree lower bound, it follows that if $g$ satisfies the property that it accepts w.h.p. inputs from any product distribution $D_{\alpha}^N$ for $\alpha\in [(1/2)-a\delta,(1/2)-b\delta]$ and rejects w.h.p. inputs from any product distribution $D_{\beta}^N$ for $\beta\in [(1/2)+b\delta,(1/2)+a\delta],$ then the degree of $g$ must be $\Omega(1/\delta).$ But $g$ might not satisfy this hypothesis. Informally, solving the $\delta$-coin problem only means that the acceptance probability of $g$ is small on inputs from $D_{(1-\delta)/2}^N$ and large on inputs from $D_{(1+\delta)/2}^N$. It is not clear that these probabilities will remain small for $\alpha,\beta$ in some intervals of length $\Omega(\delta)$. For example, it may be that the acceptance probability of the polynomial $g$ on distribution $D_{\alpha}^N$ oscillates rapidly for $\alpha\in [(1/2)-a\delta,(1/2)-b\delta]$ even for $a,b$ that are quite close to each other. In this case, however, we observe that $g$ can be used to distinguish $D_{\alpha'}^{N'}$ and $D_{\alpha''}^{N'}$ for $\alpha',\alpha''$ quite close to each other. In other words, we are solving a `harder' coin problem (since $|\alpha'-\alpha''|$ is small). Further, we can show that this new distinguisher, say $g'$, has not much larger degree and sample complexity than the old one. We can thus try to prove the degree lower bound for $g'$ instead. We repeat this argument until we can prove a degree lower bound on the new distinguisher $g'$ (which implies a degree lower bound on $g$). We can show that since the sample complexities of successive distinguishers are not increasing too quickly, but the coin problems that they solve are getting much harder, this iteration cannot continue for more than finitely many steps. Hence, after finitely many steps, we will be able to obtain a degree lower bound. \subsection{Other related work} The coin problem has also been investigated in other computational models. Brody and Verbin~\cite{BV}, who formally defined the coin problem, studied its complexity in read-once branching programs. Their lower bounds were strengthened by Steinberger~\cite{Steinberger} and Cohen, Ganor and Raz~\cite{CGR}. Lee and Viola~\cite{LV} studied the coin problem in the model of ``product tests.'' Both these models are incomparable in strength to the constant-depth circuits we study here. \section{Preliminaries} \label{sec:prelim} Throughout this section, let $d\geq 2$ be a fixed constant and $\delta \in (0,1)$ be a parameter.
For any $N\geq 1$, let $\mu_0^N$ and $\mu_1^N$ denote the product distributions over $\{0,1\}^N$ where each bit is set to $1$ with probability $(1-\delta)/2$ and $(1+\delta)/2$ respectively. \subsection{Some technical preliminaries} \label{sec:tech-prelim} Throughout, we use $\log (\cdot)$ to denote logarithm to the base $2$ and $\ln (\cdot)$ for the natural logarithm. We use $\exp(x)$ to denote $e^x.$ \begin{fact} \label{fac:exp} Assume that $x\in [-1/2,1/2].$ Then we have the following chain of inequalities. \begin{equation} \label{eq:exp-ineq} \exp(x - (|x|/2))\mathop{\leq}_{\mathrm{(a)}} \exp(x-x^2) \mathop{\leq}_{\mathrm{(b)}} 1+x\mathop{\leq}_{\mathrm{(c)}} \exp(x)\mathop{\leq}_{\mathrm{(d)}} 1+x+x^2 \mathop{\leq}_{\mathrm{(e)}} 1+x+(|x|/2) \end{equation} \end{fact} The following is an easy consequence of the Chernoff bound. \begin{fact}[Error reduction] \label{fac:err_redn} Suppose $g$ solves the coin problem with error $(1/2)-\eta$ for some $\eta > 0$ and let $N$ denote the sample complexity of $g$. Let $G_t:\{0,1\}^{N\cdot t}\rightarrow \{0,1\}$ be defined as follows. On input $x \in \{0,1\}^{N\cdot t},$ \[ G_t(x) = \mathrm{Maj}_t(g(x_1,\ldots,x_N),g(x_{N+1},\ldots,x_{2N}),\ldots,g(x_{(t-1)N+1},\ldots,x_{t\cdot N})). \] Then, for $t = O(\log(1/\varepsilon)/\eta^2),$ $G_t$ solves the $\delta$-coin problem with error at most $\varepsilon.$ \end{fact} \subsection{Boolean formulas} \label{sec:formulas} We assume standard definitions regarding Boolean circuits. The size of a circuit always refers to the total number of gates (including input gates) in the circuit. We abuse notation and use \emph{$\mathrm{AC}^0$ formulas of size $s$ and depth $d$} (even for superpolynomial $s$) to denote depth-$d$ formulas of size $s$ made up of AND, OR and NOT gates. Similar notation is also used for $\mathrm{AC}^0[\oplus]$ formulas. Given a Boolean formula $F$, we use $\mathrm{Vars}(F)$ to denote the set of variables that appear as labels of input gates of $F$. We say that a Boolean formula family $\{F_n\}_{n\geq 1}$ is explicit if there is a deterministic polynomial-time algorithm which, when given as input $n$ (in binary) and the description of two gates $g_1,g_2$ of $F_n$, is able to compute whether there is a wire from $g_1$ to $g_2$ or not. Such a notion of explicitness has been described as \emph{uniformity} in~\cite{vollmer} (see Chapter 2 and Definition 2.24). \subsection{Amano's formula construction} \label{sec:amano} In this section we present the construction of a depth-$d$ $\mathrm{AC}^0$ formula that solves the $\delta$-coin problem. The construction presented here is due to Amano~\cite{Amano} and works for $d\geq 3$. For $d=2$, a construction was presented by O'Donnell and Wimmer~\cite{OW}. We describe their construction in Section~\ref{sec:ubd-d=2}. Define $m = \lceil (1/\delta)^{1/(d-1)}\cdot(1/\ln 2)\rceil.$ For $i\in [d-2]$, define $\delta_i$ inductively by $\delta_1 = m\delta$ and $\delta_i = \delta_{i-1}\cdot (m\ln 2).$ Define fan-in parameters $f_1 = m, f_2 = f_3 = \cdots = f_{d-2} = \lceil m\cdot 2^m\cdot \ln 2\rceil, f_{d-1} = C_1\cdot m2^m$ and $f_d = \lceil \exp(C_1\cdot m)\rceil,$ where $C_1= 50.$ Define the formula $F_d$ to be an alternating formula with AND and OR gates such that \begin{itemize} \item Each gate at level $i$ above the variables has fan-in $f_i$. \item The gates at level $1$ (just above the variables) are AND gates. \item Each leaf is labelled by a distinct variable.
\end{itemize} Note that $F_d$ is a formula on $N = \prod_{i\in [d]}f_i \leq \exp(O(dm))$ variables of size $O(N).$ Amano~\cite{Amano} showed that $F_d$ solves the $\delta$-coin problem. We state a more detailed version of his result below. Since this statement does not exactly match the statement in his paper, we give a proof in the appendix. For each $i\leq d$, let $F_i$ denote any subformula of $F_d$ of depth $i.$ Let $N_i$ denote $|\mathrm{Vars}(F_i)|$ and let $p_i^{(b)}$ denote the probability \begin{equation} \label{eq:p-i-def} p_i^{(b)} = \min\{\prob{\bm{x}\sim \mu_b^{N_i}}{F_i(\bm{x}) = 0}, \prob{\bm{x}\sim \mu_b^{N_i}}{F_i(\bm{x}) = 1}\}. \end{equation} Note that the definition of $p_i^{(b)}$ is independent of the exact subformula $F_i$ chosen: any subformula of depth $i$ yields the same value. \begin{theorem} \label{thm:amano} Assume $d\geq 3$ and $F_d$ is defined as above. Then, for small enough $\delta$, we have the following. \begin{enumerate} \item For $b,\beta\in \{0,1\}$ and each $i\in [d-1]$ such that $i\equiv \beta \pmod{2}$, we have \[ p_i^{(b)} = \prob{\bm{x}\sim \mu_b^{N_i}}{F_i(\bm{x}) = \beta}. \] In particular, for any $i\in \{2,\ldots,d-2\}$ and any $b\in \{0,1\}$ \begin{equation} \label{eq:pivspi-1} p_i^{(b)} = (1-p_{i-1}^{(b)})^{f_i}. \end{equation} \item For $\beta \in \{0,1\}$ and $i\in [d-2]$ such that $i\equiv \beta\pmod{2},$ we have \begin{align*} \frac{1}{2^m}(1+\delta_i\exp(-3\delta_i))&\leq p_i^{(\beta)} \leq \frac{1}{2^m}(1+\delta_i\exp(3\delta_i))\\ \frac{1}{2^m}(1-\delta_i\exp(3\delta_i))&\leq p_i^{(1-\beta)} \leq \frac{1}{2^m}(1-\delta_i\exp(-3\delta_i)) \end{align*} \item Say $d-1\equiv \beta \pmod{2}.$ Then \[ p_{d-1}^{(\beta)} \geq \exp(-C_1m + C_2) \text{ and } p_{d-1}^{(1-\beta)} \leq \exp(-C_1m - C_2) \] where $C_2 = C_1/10.$ \item For each $b\in \{0,1\}$, $\prob{\bm{x}\sim \mu^N_b}{F_d(\bm{x}) = 1-b}\leq 0.05.$ In particular, $F_d$ solves the $\delta$-coin problem. \end{enumerate} \end{theorem} \begin{observation} \label{obs:pifi} For any $i \in \{2, \ldots, d\}$ and $b \in \{0,1\}$, $p_{i-1}^{(b)} \cdot f_i \leq 50m$. \end{observation} \begin{remark} \label{rem:RST} A similar construction to Amano's formula above was used by Rossman, Servedio and Tan~\cite{RST} to prove an average-case \emph{Depth-hierarchy} theorem for $\mathrm{AC}^0$ circuits. Their construction was motivated by the \emph{Sipser functions} used in the work of Sipser~\cite{SipserDH} and H\r{a}stad~\cite{Hastad} to prove worst-case Depth-hierarchy theorems. \end{remark} \subsection{Janson's inequality} \label{sec:janson} We state Janson's inequality~\cite{janson} in the language of Boolean circuits. The standard proof due to Boppana and Spencer (see, e.g.~\cite[Chapter 8]{alon-spencer}) easily yields this statement. Since Janson's inequality is not normally presented in this language, we include a proof in the appendix for completeness. \begin{theorem}[Janson's inequality] \label{thm:janson} Let $C_1,\ldots,C_M$ be any monotone Boolean circuits over inputs $x_1,\ldots,x_N,$ and let $C$ denote $\bigvee_{i\in [M]}C_i.$ For each distinct $i,j\in [M]$, we use $i \sim j$ to denote the fact that $\mathrm{Vars}(C_i)\cap \mathrm{Vars}(C_j)\neq \emptyset$. 
Assume each $\bm{x}_j$ ($j\in [N]$) is chosen independently to be $1$ with probability $p_j\in [0,1]$, and that under this distribution, we have $\max_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 1}\leq 1/2.$ Then, we have \begin{equation} \label{eq:janson} \prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 0} \leq \prob{\bm{x}}{C(\bm{x}) = 0} \leq \left(\prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 0}\right) \cdot \exp(2\Delta) \end{equation} where $\Delta := \sum_{i < j: i\sim j} \prob{\bm{x}}{(C_i(\bm{x})=1) \wedge (C_j(\bm{x}) = 1)}.$ \end{theorem} \begin{remark} \label{rem:janson} By using DeMorgan's law, a similar statement also holds for the probability that the conjunction $C' = \bigwedge_{i\in [M]}C_i$ takes the value $1$. More precisely, if $\max_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 0}\leq 1/2$, we have \begin{equation} \label{eq:janson-and} \prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 1} \leq \prob{\bm{x}}{C'(\bm{x}) = 1} \leq \left(\prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 1}\right) \cdot \exp(2\Delta) \end{equation} where $\Delta := \sum_{i < j: i\sim j} \prob{\bm{x}}{(C_i(\bm{x})=0) \wedge (C_j(\bm{x}) = 0)}.$ \end{remark} \section{Design construction} \label{sec:design} In order to define a derandomized version of the formulas in Section~\ref{sec:amano}, we will need a suitable notion of a combinatorial design. The following definition of a combinatorial design refines the well-known notion of a Nisan-Wigderson design from the work of~\cite{Nisan,NW}. We give a construction of our combinatorial design by using a construction of Nisan-Wigderson designs from~\cite{NW} and showing that this construction in fact satisfies the additional properties we need. \begin{definition}[Combinatorial Designs] \label{def:design} For positive integers $N_1,N_2,M,\ell$ and $\gamma,\eta\in (0,1),$ an \emph{$(N_1,M,N_2,\ell,\gamma,\eta)$-Combinatorial Design} is a family $\mc{F}$ of subsets of $[N_1]$ such that \begin{enumerate} \item $|\mc{F}| \geq M,$ \item $\mc{F}\subseteq \binom{[N_1]}{N_2}$ (i.e. every set in $\mc{F}$ has size $N_2$), \item Given any distinct $S,T\in \mc{F}$ we have $|S\cap T|\leq \ell,$ \item For any $a\in [N_1]$, we have $|\{S\in \mc{F}\ |\ S\ni a\}|\leq \gamma\cdot M,$ \item For any $i\in [\ell]$, we have $|\{\{S,T\}\subseteq \mc{F}\ |\ S\neq T, |S\cap T| = i\}| \leq \eta^i\cdot M^2.$ \end{enumerate} \end{definition} The main result of this section is the following. \begin{lemma}[Construction of Combinatorial design] \label{lem:design} Given positive integers $N_2$ and $M$ and real parameters $\gamma,\eta\in (0,1)$ satisfying $N_2\geq (\log M)/10$, $M\geq 10\cdot N_2/\eta,$ and $\gamma\geq \eta/N_2,$ there exist positive integers $\ell= \Theta \left( \log M/\log(N_2/\eta)\right)$ and $Q = O((N_2/\eta)^{1+1/\ell})$ and an $(N_1 = Q\cdot N_2,M,N_2,\ell,\gamma,\eta)$-combinatorial design. Further, the design is explicit in the following sense. Identify $[N_1]$ with $[N_2]\times [Q]$ via the bijection $\rho:[N_1]\rightarrow [N_2]\times [Q]$ such that $\rho(i) = (j,k)$ where $i = (k-1)N_2 + j$. Then, each set in $\mc{F}$ is of the form $\{(1,k_1),\ldots,(N_2,k_{N_2})\}$ for some $k_1,\ldots,k_{N_2}\in [Q].$ Finally, there is a deterministic algorithm $\mc{A}$, which when given as input an $i\in [|\mc{F}|]$ and a $j\in [N_2]$, produces $k_j\in [Q]$ in $\mathop{\mathrm{poly}}(\log M)$ time.
\end{lemma} \begin{proof} Define $\ell$ to be the largest integer such that $M^{1/\ell} \geq 10\cdot N_2/\eta$: note that $\ell \geq 1$ by our assumption that $M\geq 10\cdot N_2/\eta.$ Thus, we have \begin{equation} \label{eq:des-ell} \ell \leq \frac{\log M}{\log(10\cdot N_2/\eta)}\leq \frac{\log M}{\log \log M} \end{equation} and also \begin{equation} \label{eq:des-ell+1} M^{1/(\ell+1)} < \frac{10\cdot N_2}{\eta}. \end{equation} Define the parameter $Q_1 = \lceil M^{1/\ell}\rceil.$ We have \begin{equation} \label{eq:des-Q-1} \frac{10\cdot N_2}{\eta}\leq M^{1/\ell} \leq Q_1 \leq 2M^{1/\ell}\leq O\left(\left(\frac{N_2}{\eta}\right)^{1+\frac{1}{\ell}}\right) \end{equation} where we used (\ref{eq:des-ell+1}) for the last inequality. Let $Q$ be the smallest power of $2$ greater than or equal to $Q_1$ and let $\mathbb{F}_Q$ be a finite field of size $Q.$ By a result of Shoup~\cite{shoup}, we can construct in time $\mathop{\mathrm{poly}}(\log Q) = \mathop{\mathrm{poly}}(\log M)$ an implicit representation of $\mathbb{F}_Q$ where each element of $\mathbb{F}_Q$ is identified with an element of $\{0,1\}^{\log Q}$ and arithmetic can be performed in time $\mathop{\mathrm{poly}}(\log Q)$. Fix such a representation of $\mathbb{F}_Q.$ Let $A\subseteq \mathbb{F}_Q$ be any subset of size $N_2$ (note that by (\ref{eq:des-Q-1}) we have $N_2\leq Q_1$ which is at most $Q$) and let $B\subseteq \mathbb{F}_Q$ be any fixed subset of size $Q_1$. Let $A_1\subseteq A$ be a set of size $\ell$ (note that by (\ref{eq:des-ell}) $\ell\leq (\log M)/10$ which is at most $N_2$ by assumption). Fix $N_1 = Q\cdot N_2$ and identify $[N_1]$ with the set $A\times \mathbb{F}_Q$ in an arbitrary way. Assume that $A = \{a_1,\ldots,a_{N_2}\}$ and $A_1 = \{a_1,\ldots,a_{\ell}\}.$ We define $\mc{P}$ to be the set of all polynomials $P\in \mathbb{F}_Q[x]$ of degree at most $\ell-1$ such that $P(a)\in B$ for each $a\in A_1.$ We are now ready to define the family $\mc{F}.$ For each $\mathbf{b} = (b_1,\ldots,b_\ell)\in B^\ell$, we define the polynomial $P_\mathbf{b}(x)$ to be the unique polynomial in $\mc{P}$ such that $P_\mathbf{b}(a_i) = b_i$ for each $i\in [\ell]$ (note that $P_\mathbf{b}$ is uniquely defined since any polynomial of degree at most $\ell-1$ can be specified by its evaluations at any $\ell$ distinct points). We add the set $S_{\mathbf{b}}\subseteq A\times \mathbb{F}_Q$ to $\mc{F},$ where $S_{\mathbf{b}}$ is defined by \begin{equation} \label{eq:des-S-b} S_\mathbf{b} = \{(a_i,P_\mathbf{b}(a_i))\ |\ i\in [N_2]\}. \end{equation} In words, $S_{\mathbf{b}}$ is the graph of the polynomial $P_\mathbf{b}$ restricted to the domain $A.$ We now show that $\mc{F}$ is indeed an $(N_1,M,N_2,\ell,\gamma,\eta)$-combinatorial design. \begin{enumerate} \item For distinct $\mathbf{b},\mathbf{b}'\in B^\ell$, the sets $S_\mathbf{b}$ and $S_{\mathbf{b}'}$ satisfy $|S_\mathbf{b}\cap S_{\mathbf{b}'}|\leq \ell$ since the graphs of the distinct polynomials $P_\mathbf{b}$ and $P_{\mathbf{b}'}$ can intersect in at most $\ell-1$ points. In particular, we have $|\mc{F}| = |B|^\ell = Q_1^\ell \geq M$ (by (\ref{eq:des-Q-1})). Further, we also have that any pair of distinct sets in $\mc{F}$ have an intersection of size at most $\ell.$ This proves properties 1 and 3 in Definition~\ref{def:design} above. \item Each set in $\mc{F}$ has size $N_2$, since it is of the form $\{(a_i,b_i)\ |\ i\in [N_2]\}$ for some choice of $b_1,\ldots,b_{N_2}\in \mathbb{F}_Q.$ This proves property 2. \item We now consider property 4.
Fix any $(a,b) \in A\times \mathbb{F}_Q.$ If $(a,b)\in S\in \mc{F},$ then $S$ is the graph of a polynomial $P\in \mc{P}$ such that $P(a) = b$. To uniquely specify such a polynomial, it suffices to provide its evaluations at any $\ell-1$ other points. We choose the evaluation points to be a fixed set $A_1'\subseteq A_1\setminus\{a\}$ of size $\ell-1.$ Since $P(a')\in B$ for each $a'\in A_1',$ there are at most $|B|^{\ell-1} = Q_1^{\ell-1}$ many choices for these evaluations, which yields the same bound for the number of sets $S\in \mc{F}$ such that $(a,b)\in S$. Hence, we have \[ |\{S\in \mc{F}\ |\ S\ni (a,b)\}| \leq \frac{Q_1^{\ell}}{Q_1} = \frac{\left(\lceil M^{1/\ell}\rceil\right)^{\ell}}{Q_1}\leq \frac{\eta}{N_2}\cdot M \leq \gamma\cdot M \] where the final inequality follows from our assumption that $\gamma\geq \eta/N_2$, and the second last inequality uses the fact that $Q_1\geq 10\cdot N_2/\eta$ and \begin{equation} \label{eq:des-ceil} \left(\lceil M^{1/\ell}\rceil\right)^{\ell} \leq \left( M^{1/\ell} + 1\right)^{\ell} = M\cdot \left(1+\frac{1}{M^{1/\ell}}\right)^\ell \leq M\cdot \left(1+\frac{1}{\ell}\right)^\ell \leq 3M \end{equation} (using $\ell^\ell\leq M$, which follows from (\ref{eq:des-ell}), for the second-last inequality). \item For property 5, we use a similar argument to property 4. Fix distinct sets $S,T\in \mc{F}$ such that $|S\cap T| = i.$ The sets $S$ and $T$ are graphs of distinct polynomials $P_1,P_2\in \mc{P}$ respectively that agree in $i$ places. We bound the number of such pairs of polynomials. The number of choices for $S$, and hence $P_1$, is exactly $|\mc{F}| = Q_1^\ell.$ Given $P_1$, we can specify $P_2$ as follows. \begin{itemize} \item Specify a set $A'\subseteq A$ of size $i$ such that $P_1$ and $P_2$ agree on $A'.$ This gives the evaluation of $P_2$ at $i$ points. Further, the number of such $A'$ is $\binom{N_2}{i}\leq N_2^i.$ \item Specify the evaluation of $P_2$ at the first $\ell-i$ points from $A_1\setminus A'.$ This gives the evaluation of $P_2$ at $\ell-i$ points outside $A'$ and hence specifies $P_2$ exactly. The number of possible evaluations is $|B|^{\ell-i} = Q_1^{\ell-i.}$ \end{itemize} Hence, the number of pairs of polynomials $(P_1,P_2)$ whose graphs agree at $i$ points is at most \[ Q_1^\ell\cdot N_2^i \cdot Q_1^{\ell-i} = Q_1^{2\ell}\cdot \left(\frac{N_2}{Q_1}\right)^i = \left(\lceil M^{1/\ell}\rceil\right)^{2\ell}\cdot \left(\frac{N_2}{Q_1}\right)^i\leq 9M^2\cdot \left(\frac{N_2}{Q_1}\right)^i \leq 9M^2\cdot \left(\frac{\eta}{10}\right)^i\leq \eta^i\cdot M^2 \] where for the first inequality we have used (\ref{eq:des-ceil}) and the second inequality follows from the fact that $Q_1\geq 10\cdot N_2/\eta.$ \end{enumerate} We have thus shown that $\mc{F}$ is indeed a $(N_1,M,N_2,\ell,\gamma,\eta)$-combinatorial design as required. The explicitness of the design follows easily from its definition. \end{proof} \section{Proof of Theorem~\ref{thm:main-intro} for $d=2$} \label{sec:ubd-d=2} In this section we will present the proof of Theorem~\ref{thm:main-intro} for the special case when $d=2$. The proof is quite similar to the case for general $d$, but is somewhat simpler (as the construction of the $\mathrm{AC}^0$ formulas is simpler) and illustrates many of the ideas of the general proof. Throughout this section, let $\delta$ be a parameter going to $0$. We start by stating a result of O'Donnell and Wimmer~\cite{OW}, who gave a depth-$2$ $\mathrm{AC}^0$ formula for solving the $\delta$-coin problem. 
Formally, they defined a depth-$2$ circuit as follows. Let $C_0 \geq 10$ be a constant, and define $m = C_0/\delta$, $f_1=m$, and $f_2=2^m$; we also set $C=2^{C_0}$. The formula $F_2$ is defined as follows: \begin{itemize} \item At layer $1$ we have AND gates and the fan-in of each AND gate is $f_1$. \item At layer $2$ we have a single OR gate with fan-in $f_2$. \item Each leaf is labelled with a distinct variable. \end{itemize} For $F_2$ defined as above, the following theorem was proved in~\cite{OW}. \begin{theorem}[\cite{OW}] \label{thm:ow} Let $N =f_1\cdot f_2$. For each $b \in \{0,1\}$, $$\prob{\bm{x} \sim \mu_b^N}{F_2(\bm{x}) = 1-b} \leq 0.05,$$ i.e., $F_2$ solves the $\delta$-coin problem. \end{theorem} Here, the number of inputs is $N$ and the size of $F_2$ is also $O(N)$. We now give a construction of an explicit depth-$2$ formula of the same size as in the theorem above which solves the $\delta$-coin problem, but using far fewer inputs. We achieve this by an application of Janson's inequality coupled with our combinatorial design. We now describe the construction of such a depth-$2$ formula $\Gamma_2$. Fix $m,f_1,f_2$ as above. Define parameters $\gamma=1$, $\eta= 1/(16\cdot (\frac{1+\delta}{2})^{m} \cdot f_2) = {1}/({16\cdot (1+\delta)^m})$. Let $\mc{F}$ be an $(n, f_2, f_1, \ell, \gamma, \eta)$-design obtained using Lemma~\ref{lem:design}. We are now ready to define $\Gamma_2$. \begin{itemize} \item Let $S_1, S_2, \ldots, S_{f_2} \in \binom{[n]}{f_1}$ be the first $f_2$ sets in the $(n, f_2, f_1, \ell, \gamma, \eta)$-design $\mc{F}$. At layer $1$ we have $f_2$ many AND gates, say $\Gamma_1^1, \ldots, \Gamma_1^{f_2}$, with fan-in $f_1$ each. For each $i \in [f_2]$, the inputs of the gate $\Gamma_1^i$ are the variables indexed by the set $S_i$. \item At layer $2$ we have a single gate, which is an OR of $\Gamma_1^1, \ldots, \Gamma_1^{f_2}$. \end{itemize} With this definition of $\Gamma_2$, we now prove Theorem~\ref{thm:main-intro}. From the definition of the parameters, it can be checked that $\eta=\Theta(1)$. Therefore, we get $\ell = \Theta(m/\log m)$ and $Q=O((f_1/\eta)^{1+1/\ell}) = O(1/\delta)$. Therefore, the number of inputs in the formula is $n = Q \cdot f_1 = O(1/\delta^2)$ and the size of the formula is $O(f_1\cdot f_2) = \exp(O(1/\delta))$. The only thing we need to prove now is that for any $b \in \{0,1\}$, $\prob{\bm{x}\sim \mu_b^{n}}{\Gamma_2(\bm{x}) = 1-b}\leq 0.1$. Let $q^{(0)} = (1-\delta)/2$ and $q^{(1)} = (1+\delta)/2$. Let $p_1^{(0)} = \left(\frac{1-\delta}{2}\right)^m$ and $p_1^{(1)} = \left(\frac{1+\delta}{2}\right)^m$. Note that $p_1^{(b)}$ ($b\in \{0,1\}$) is the probability that each subformula $\Gamma_1^i$ accepts on a random input $\bm{x}$ chosen from the distribution $\mu_b^n.$ Let $b=0$. In this case \begin{align*} \prob{\bm{x}\sim \mu_0^n}{\Gamma_2(\bm{x}) = 1} & = \prob{\bm{x}\sim \mu_0^n}{\exists i \in [f_2]: \Gamma_1^i(\bm{x}) = 1} \\ & \leq f_2 \cdot p_1^{(0)} = (1-\delta)^m \leq \exp(-C_0) \leq 0.1. \end{align*} Here the first inequality is due to a union bound. The other inequalities are obtained by simple substitutions of the parameters and using (\ref{eq:exp-ineq}). Now consider the $b=1$ case. Here $\prob{\bm{x}\sim \mu_1^n}{\Gamma_2(\bm{x}) = 0} = \prob{\bm{x}\sim \mu_1^n}{\forall i \in [f_2]: \Gamma_1^i(\bm{x}) = 0}$. Now we would like to bound this using Janson's inequality (Theorem~\ref{thm:janson}).
Applying Janson's inequality, we get \begin{align} \prob{\bm{x}\sim \mu_1^n}{\Gamma_2(\bm{x}) = 0} & = \prob{\bm{x}\sim \mu_1^n}{\forall i \in [f_2]: \Gamma_1^i(\bm{x}) = 0}\nonumber \\ & \leq \prod_{i \in [f_2]} \prob{\bm{x}\sim \mu_1^n}{\Gamma_1^i(\bm{x}) = 0} \cdot \exp(2\Delta)\nonumber\\ & \leq (1-p_1^{(1)})^{f_2} \cdot \exp(2\Delta) \nonumber \\ & \leq \exp(-p_1^{(1)}f_2 + 2\Delta), \label{eq:janson-d=2} \end{align} where \[\Delta = \sum_{\substack{j < k:\\ \mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)\neq \emptyset}} \prob{\bm{x}\sim \mu_1^n}{(\Gamma_1^j(\bm{x}) =1) \wedge (\Gamma_1^k(\bm{x}) =1)}.\] We will now obtain a bound on $\Delta$. \begin{align*} \Delta & = \sum_{\substack{j < k:\\ \mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)\neq \emptyset}} \prob{\bm{x}\sim \mu_1^n}{(\Gamma_1^j(\bm{x}) =1) \wedge (\Gamma_1^k(\bm{x}) =1)}. \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r}} \prob{\bm{x}\sim \mu_1^n}{(\Gamma_1^j(\bm{x}) =1) \wedge (\Gamma_1^k(\bm{x}) =1)} \end{align*} As $\Gamma_1^j$ and $\Gamma_1^k$ are both ANDs of size $m$, $\Gamma_1^j \wedge \Gamma_1^k$ is an AND of size $(2m-|\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)|)$. Therefore, we get \begin{align*} \Delta& = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r}} \prob{\bm{x}\sim \mu_1^n}{((\Gamma_1^j \wedge \Gamma_1^k)(\bm{x}) =1)} \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r}} \left(\frac{1+\delta}{2}\right)^{2m-r} \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r}} \frac{(p_1^{(1)})^2}{((1+\delta)/2)^r} \\ & = \sum_{r=1}^\ell \frac{(p_1^{(1)})^2}{(q^{(1)})^r} \cdot |\{(j,k) \mid j < k \text{ and } |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r\}| \\ \end{align*} From the construction of the formula and the combinatorial design $\mc{F}$, we know that $|\{(j,k) \mid j < k \text{ and } |\mathrm{Vars}(\Gamma_1^j)\cap \mathrm{Vars}(\Gamma_1^k)| = r\}| \leq \eta^r f_2^2$. We can also bound $1/q^{(1)}$ by a small constant, say $3$. Therefore, we can simplify the above equation as follows: \begin{align} \Delta & \leq \sum_{r=1}^\ell {(p_1^{(1)})^2} \cdot 3^r \cdot \eta^r f_2^2 \nonumber \\ & = (p_1^{(1)})^2 \cdot f_2^2 \sum_{r=1}^\ell 3^r \cdot \eta^r \nonumber \\ & \leq (p_1^{(1)})^2 \cdot f_2^2 \cdot 4 \cdot \eta \label{eq:delta-bound} \end{align} using the fact that $3\eta \leq 1/4$ as $\eta \leq 1/16.$ Now, by using our setting of $\eta = 1/(16 \cdot p_1^{(1)}\cdot f_2)$ in (\ref{eq:delta-bound}), we get $\Delta \leq p_1^{(1)}f_2/4$. Using this value of $\Delta$ in (\ref{eq:janson-d=2}), we get $\prob{\bm{x}\sim \mu_1^n}{\Gamma_2(\bm{x}) = 0} \leq \exp(-\frac{p_1^{(1)}\cdot f_2}{2}) \leq 0.1$, by our choice of parameters. This completes the proof of Theorem~\ref{thm:main-intro} for $d=2$. \section{Proof of Theorem~\ref{thm:main-intro} for $d\geq 3$} \label{sec:ubd} Throughout this section, fix a constant depth $d\geq 3$ and a parameter $\delta\in (0,1).$ The parameter $\delta$ is assumed to be asymptotically converging to $0$. We also assume the notation from Section~\ref{sec:amano}. \subsection{Definition of the formula $\Gamma_d$} \label{sec:gamma-d} The formula $\Gamma_d$ is an alternating monotone depth-$d$ formula made up of AND and OR gates. 
The structure of the formula and the labels of the gates are the same as in the formula $F_d$ defined in Section~\ref{sec:amano}. However, the leaves are labelled with only $\mathop{\mathrm{poly}}(m)$ distinct variables. We now proceed to the formal definition. We iteratively define a sequence of formulas $\Gamma_1,\ldots,\Gamma_d$ (where $\Gamma_i$ has depth $i$) as follows. Define the parameters $\gamma$ and $\eta$ by \begin{equation} \label{eq:def-gam-eta} \gamma = \frac{1}{m^3} \text{ and } \eta = \frac{1}{m^{10d}}. \end{equation} \begin{itemize} \item $\Gamma_1$ is just an AND of $n_1=m$ distinct variables. \item Recall that for $i\geq 2$, any gate at level $i$ in the formula $F_d$ has fan-in $f_i$ for $f_i = \exp(\Theta(m)).$ For each $i\in \{2,\ldots,d\}$, define $n_i$ so that by Lemma~\ref{lem:design}, we have an explicit $(n_i,f_i,n_{i-1},\ell,\gamma,\eta)$-combinatorial design $\mc{F}_i$ where $\ell = \Theta(\log f_i/\log(n_{i-1}/\eta)).$ Note that $n_i = O((n_{i-1})^{2+1/\ell}/\eta^{1+1/\ell}) \leq n_{i-1}^3/\eta^2$. The formula $\Gamma_i$ is defined on a set $X$ of $n_i$ variables by taking the OR/AND (depending on whether $i$ is even or odd respectively) of $f_i$ copies of $\Gamma_{i-1}$, each defined on a distinct subset $Y\subseteq X$ of $n_{i-1}$ variables obtained from the combinatorial design $\mc{F}_i.$ Formally, let $S_1,\ldots,S_{f_i}\in \binom{[n_i]}{n_{i-1}}$ be the first $f_i$ many sets in the design $\mc{F}_i$ (in lexicographic order, say). Identifying $[n_i]$ with the variable set $X$ of $\Gamma_i,$ we obtain corresponding subsets $Y_1,\ldots,Y_{f_i}$ of $X.$ The formula $\Gamma_i$ is an OR/AND of $f_i$ many subformulas $\Gamma_i^1,\ldots,\Gamma_i^{f_i}$ where the $j$th subformula $\Gamma_i^j$ is a copy of $\Gamma_{i-1}$ with variable set $Y_j.$ \end{itemize} \begin{observation} \label{obs:gamma-d} The size of $\Gamma_d$ is $\exp(O(dm)).$ The number of variables appearing in $\Gamma_d$ is $n_d = m^{2^{O(d)}}.$ \end{observation} \paragraph{Explicitness of the formula $\Gamma_d.$} The structure of the formula is determined completely by the parameter $\delta.$ Thus to argue that the formula $\Gamma_d$ is explicit, it suffices to show that the labels of the input gates can be computed efficiently. Note that the inputs are in $1$-$1$ correspondence with the set $[f_d]\times [f_{d-1}]\times \cdots [f_2]\times [f_1].$ Let $\Gamma_i$ be any subformula of $\Gamma_d$ of depth $i.$ If $i=1$, then $\Gamma_i$ is simply an AND of $m=f_1$ variables and we identify its variable set with $[f_1]$. When $i > 1$, by the properties of the design constructed in Lemma~\ref{lem:design}, we see that the set $\mathrm{Vars}(\Gamma_i)$ is in a natural $1$-$1$ correspondence with the set $\mathrm{Vars}(\Gamma_{i-1})\times [Q_i]$ where $\Gamma_{i-1}$ is any subformula of depth $i-1$ and $Q_i = n_i/n_{i-1}$. 
Each subformula $\Gamma_i^j$ ($j\in [f_i]$) of depth $i-1$ in $\Gamma_i$ has as its variable set a set of the form $\{(x,k_x)\ |\ x\in \mathrm{Vars}(\Gamma_{i-1}), k_x\in [Q_i]\}.$ Further, by the explicitness properties of the design constructed in Lemma~\ref{lem:design}, we see that given any $x\in \mathrm{Vars}(\Gamma_{i-1})$ and $j\in [f_i]$, we can find in $\mathop{\mathrm{poly}}(\log(f_i)) \leq \mathop{\mathrm{poly}}(m)$ time the variable $(x,k)\in \mathrm{Vars}(\Gamma_i)$ that belongs to $\mathrm{Vars}(\Gamma_i^j).$ Equivalently, given a leaf $\ell = (j_i,\ldots,j_1)\in [f_i]\times \cdots\times [f_1]$ of $\Gamma_i$ and the variable $x\in \mathrm{Vars}(\Gamma_{i-1})$ corresponding to the leaf $(j_{i-1},\ldots,j_1)$ in $\Gamma_{i-1},$ we can find the variable labelling $\ell$ in $\mathop{\mathrm{poly}}(m)$ time. Using this algorithm and a recursive procedure to find the variable $x$, we see that the variable labelling the leaf $\ell$ can be found in $\mathop{\mathrm{poly}}(m)$ time. In particular, given a leaf of $\Gamma_d,$ the variable labelling it can be found in $\mathop{\mathrm{poly}}(m)$ time. Thus, the formula $\Gamma_d$ is explicit. \subsection{Analysis of $\Gamma_d$} \label{sec:analysis-gamma-d} In this section, we will show that $\Gamma_d$ distinguishes between the distributions $\mu_0^{n_d}$ and $\mu_1^{n_d}$ as defined in Definition~\ref{def:coinproblem}. For brevity, we use $n$ to denote $n_d$. Fix any subformula $\Gamma$ of $\Gamma_d$ and $b\in \{0,1\}$. Assume $\Gamma$ has depth $i\in [d]$ and $\beta \in \{0,1\}$ is such that $i\equiv \beta\pmod{2}.$ We define $p_{\Gamma}^{(b)} = \prob{\bm{x}\sim \mu_b^n}{\Gamma(\bm{x}) = \beta}.$ Assume that $\Gamma$ is an OR/AND of depth-$(i-1)$ subformulas $\Gamma^1,\ldots,\Gamma^f.$ We define \begin{equation} \label{eq:delta-gamma} \Delta_{\Gamma}^{(b)} = \sum_{\substack{j < k:\\ \mathrm{Vars}(\Gamma^j)\cap\\ \mathrm{Vars}(\Gamma^k)\neq \emptyset}} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)}. \end{equation} The following lemma is the main technical lemma of this section. Along with Theorem~\ref{thm:amano}, it easily implies Theorem~\ref{thm:main-intro} (as we show below). \begin{lemma} \label{lem:main} Let $\Gamma_d$ be as constructed above. Then for each $i\in \{2,\ldots,d\}$, each $b\in \{0,1\}$, and any subformula $\Gamma$ of depth $i$, we have the following. \begin{enumerate} \item $p_{\Gamma}^{(b)} \in [p_i^{(b)}(1-\eta\cdot (C_3m)^i), p_i^{(b)}(1+\eta\cdot (C_3m)^i)]$ where $C_3=1000.$ \item $\Delta_{\Gamma}^{(b)} \leq (C_4m)^2\cdot \eta$ where $C_4=100.$ \end{enumerate} \end{lemma} Assuming the above lemma, we first prove Theorem~\ref{thm:main-intro}. \begin{proof}[Proof of Theorem~\ref{thm:main-intro}] We use the explicit formula $\Gamma_d$ described above. By Lemma~\ref{lem:main} applied in the case that $i=d$, it follows that for each $b\in \{0,1\}$ \[ |\prob{\bm{x}\sim \mu_b^n}{\Gamma_d(\bm{x}) = 1-b} - \prob{\bm{x}\sim \mu_b^n}{F_d(\bm{x}) = 1-b}| = |p_{\Gamma_d}^{(b)}-p_d^{(b)}| \leq p^{(b)}_d \cdot \eta (C_3 m)^d = o(1). \] In particular, using Theorem~\ref{thm:amano}, it follows that $\prob{\bm{x}\sim \mu_b^n}{\Gamma_d(\bm{x}) = 1-b}\leq 0.1$ and hence $\Gamma_d$ solves the $\delta$-coin problem. The sample complexity of $\Gamma_d$ is $m^{2^{O(d)}} = (1/\delta)^{2^{O(d)}}$ by construction. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:main}] We prove the lemma by induction on $i$. The base case is when $i=2$. 
This proof is quite similar to the proof of the $d=2$ case from Section~\ref{sec:ubd-d=2}. \paragraph{Base case, i.e. $i=2$:} Recall that for $i=2$, $\Gamma$ is an OR of $f_2$-many subformulas $\Gamma^1, \Gamma^2, \ldots, \Gamma^{f_2}$, where each $\Gamma^j$ is an AND of distinct set of variables. Therefore, we have that $p_{\Gamma^j}^{(b)}$ is the same as in the case of Amano's proof, i.e. $p_{\Gamma^j}^{(b)} = p_1^{(b)}$. Recall that $p_1^{(b)}$ is equal to $(\frac{1-\delta}{2})^m$ if $b=0$ and it is equal to $(\frac{1+\delta}{2})^m$ if $b=1$. Let $q^{(0)}$ ($q^{(1)}$) denote $\frac{1-\delta}{2}$ (respectively, $\frac{1+\delta}{2}$). \begin{align*} \Delta_{\Gamma}^{(b)} & = \sum_{\substack{j < k:\\ \mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)\neq \emptyset}} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1) \wedge (\Gamma^k(\bm{x}) =1)}. \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r}} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1) \wedge (\Gamma^k(\bm{x}) =1)} \end{align*} As $\Gamma^j$ and $\Gamma^k$ are both ANDs of size $m$, $\Gamma^j \wedge \Gamma^k$ is an AND of size $(2m-|\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)|)$. Therefore, we get \begin{align*} \Delta_{\Gamma}^{(b)} & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r}} \prob{\bm{x}\sim \mu_b^n}{((\Gamma^j \wedge \Gamma^k)(\bm{x}) =1)} \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r}} (q^{(b)})^{2m-r} \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r}} \frac{(p_1^{(b)})^2}{(q^{(b)})^r} \\ & = \sum_{r=1}^\ell \frac{(p_1^{(b)})^2}{(q^{(b)})^r} \cdot |\{(j,k) \mid j < k \text{ and } |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r\}| \\ \end{align*} From the construction of the formula, we know that $|\{(j,k) \mid j < k \text{ and } |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r\}| \leq \eta^r f_2^2$. We can also bound $1/q^{(b)}$ by a small constant, say $3$. Therefore, we can simplify the above equation as follows: \begin{align*} \Delta_{\Gamma}^{(b)} & \leq \sum_{r=1}^\ell {(p_1^{(b)})^2} \cdot 3^r \cdot \eta^r f_2^2\\ & = (p_1^{(b)})^2 \cdot f_2^2 \sum_{r=1}^\ell 3^r \cdot \eta^r \\ & \leq (p_1^{(b)})^2 \cdot f_2^2 \cdot 4 \cdot \eta \\ \end{align*} The last inequality comes from summing up a geometric series. Now using Observation~\ref{obs:pifi} we get that $p_1^{(b)}\cdot f_2 \leq 50m$. Hence, we get $\Delta_{\Gamma}^{(b)} \leq (p_1^{(b)})^2 \cdot f_2^2 \cdot 4 \cdot \eta \leq (50m)^2 \cdot 4 \eta = (100m)^2 \cdot \eta$. This proves the bound on $\Delta_{\Gamma}^{(b)}$ in the base case. We now prove the bounds claimed for $p_\Gamma^{(b)}$ in the base case. When $i=2$, $\beta=0$, hence $p_{\Gamma}^{(b)} = \prob{\bm{x}\sim \mu_b^n}{\Gamma(\bm{x}) = 0}.$ By Janson's inequality (Theorem~\ref{thm:janson}), we get the following bounds on the value of $p_\Gamma^{(b)}$. $$\prod_{j=1}^{f_2} (1-p_{\Gamma^j}^{(b)}) \leq p_\Gamma^{(b)} \leq \prod_{j=1}^{f_2} (1-p_{\Gamma^j}^{(b)}) \cdot \exp(2\cdot \Delta_{\Gamma}^{(b)}).$$ Recall that $p_{\Gamma^j}^{(b)} = p_1^{(b)}$ as we are in the base case. Also, from Equation~(\ref{eq:pivspi-1}) we have that $(1-p_1^{(b)})^{f_2} = p_2^{(b)}$. 
Therefore, we get \begin{align*} p_2^{(b)} \leq p_\Gamma^{(b)} & \leq p_2^{(b)} \cdot \exp(2\Delta_\Gamma^{(b)}) & \\ & \leq p_2^{(b)} \cdot (1+4 \cdot \Delta_\Gamma^{(b)}) & \text{Using (\ref{eq:exp-ineq}) (d)}\\ & \leq p_2^{(b)} \cdot (1+4 \cdot(C_4m)^2 \cdot \eta ) & \\ & \leq p_2^{(b)} \cdot (1+ (C_3m)^2 \cdot \eta ) & \\ \end{align*} This finishes the proof of the base case. \paragraph{Inductive case, i.e. $i \geq 3$:} We now proceed to proving the inductive case. Assume that the statement holds for $(i-1)$. Let $\Gamma$ be a subformula at depth $i$ which is OR/AND of subformulas $\Gamma^1, \Gamma^2, \ldots, \Gamma^{f_i}$ each of depth $(i-1)$. From the definition of $\Delta_\Gamma^{(b)}$, we get the following: \begin{align*} \Delta_{\Gamma}^{(b)} & = \sum_{\substack{j < k:\\ \mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)\neq \emptyset}} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)}. \\ & = \sum_{r=1}^\ell \sum_{\substack{j < k:\\ |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r}} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)} \end{align*} Let $t_r$ denote the maximum value of $\prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)}$, where the maximum is taken over $j < k$ such that $|\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r$. Then we get \begin{align} \Delta_{\Gamma}^{(b)} & \leq \sum_{r=1}^\ell t_r \cdot |\{(j,k) \mid j < k \text{ and } |\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r\}| \nonumber \\ & \leq \sum_{r=1}^\ell t_r \cdot \eta^r \cdot f_i^2 \label{eq:tr} \end{align} Let us now bound $t_r$, which we will do by using the construction parameters and the inductive hypothesis. Fix any $j < k$. We have \begin{equation} \label{eq:GjANDGk} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)} = \prob{\bm{x}\sim \mu_b^n}{\Gamma^j(\bm{x}) =1-\beta} \cdot \prob{\bm{x}\sim \mu_b^n}{(\Gamma^k(\bm{x}) =1-\beta) | (\Gamma^j(\bm{x}) =1-\beta)}. \end{equation} As $\Gamma^j$ is a formula of depth $i-1$ and $i-1\equiv (1-\beta)\pmod{2}$, using the induction hypothesis, we can upper bound the quantity $\prob{\bm{x}\sim \mu_b^n}{\Gamma^j(\bm{x}) =1-\beta} $. We get \begin{equation} \label{eq:Gj} \prob{\bm{x}\sim \mu_b^n}{\Gamma^j(\bm{x}) =1-\beta} = p_{\Gamma^j}^{(b)} \leq p_{i-1}^{(b)} \cdot (1+\eta \cdot (C_3m)^{i-1}) = p_{i-1}^{(b)} (1+o(1)). \end{equation} We now analyse the second term on the right hand side of Equation~(\ref{eq:GjANDGk}). From the construction of the formula, we know that for any $y \in \mathrm{Vars}(\Gamma^j)$, the variable $y$ appears in at most $\gamma \cdot f_{i-1}$ many depth-$(i-2)$ subformulas of $\Gamma^k$. Since $|\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r$, the number of depth-$(i-2)$ subformulas $T$ of $\Gamma^k$ that contain some variable from $\Gamma^j$ is at most $\gamma \cdot f_{i-1} \cdot r$ which is at most $\gamma \cdot f_{i-1} \cdot \ell$, as $r \leq \ell$. Let us construct a formula $\Phi^k$ from $\Gamma^k$ by deleting all the depth-$(i-2)$ subformulas containing some variable from $\Gamma^j$. 
Then we get \begin{align} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^k(\bm{x}) =1-\beta) | (\Gamma^j(\bm{x}) =1-\beta)} & \leq \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta) | (\Gamma^j(\bm{x}) =1-\beta)} \nonumber \\ & = \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta)} \label{eq:phi} \end{align} The first inequality follows from the fact that $\Phi^k$ was constructed by removing some depth-$(i-2)$ subformulas from $\Gamma^k$, and this can only increase the probability of taking the value $1-\beta$. The equality follows from the fact that $\Phi^k$ and $\Gamma^j$ share no variables and hence the events $(\Phi^k(\bm{x}) =1-\beta)$ and $(\Gamma^j(\bm{x}) =1-\beta)$ are independent. Let $\Gamma^{k,1}, \Gamma^{k,2}, \ldots, \Gamma^{k,f_{i-1}}$ be the depth-$(i-2)$ subformulas of $\Gamma^k$. By ordering the variables if necessary, let $\Gamma^{k,1}, \Gamma^{k,2}, \ldots, \Gamma^{k,f_{i-1}-T}$ be the depth-$(i-2)$ subformulas of $\Phi^k$. We will show below that \begin{equation} \label{eq:phi-final} \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta)} \leq \prob{\bm{x}\sim \mu_b^n}{(\Gamma^k(\bm{x}) =1-\beta)} \cdot (1+o(1)). \end{equation} Assuming this for the moment, we proceed as follows. \begin{equation} \label{eq:phi2} \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta)} \leq {\prob{\bm{x}\sim \mu_b^n}{\Gamma^k(\bm{x})=1-\beta}} \cdot (1+o(1)) \leq p_{i-1}^{(b)} \cdot (1+o(1)) \end{equation} Here the last inequality is obtained by using the induction hypothesis for $\Gamma^k$. Now using (\ref{eq:Gj}), (\ref{eq:phi}), and (\ref{eq:phi2}) in (\ref{eq:GjANDGk}) we get \begin{align*} \prob{\bm{x}\sim \mu_b^n}{(\Gamma^j(\bm{x}) =1-\beta) \wedge (\Gamma^k(\bm{x}) =1-\beta)}& \leq (p_{i-1}^{(b)} (1+o(1))) \cdot (p_{i-1}^{(b)} (1+o(1))) \\ & \leq (p_{i-1}^{(b)})^2 (1+o(1)) \end{align*} Since the above holds for all $j < k$ such that $|\mathrm{Vars}(\Gamma^j)\cap \mathrm{Vars}(\Gamma^k)| = r,$ this gives us a bound on $t_r$. Using this in (\ref{eq:tr}), we get \begin{align*} \Delta_{\Gamma}^{(b)} & \leq \sum_{r=1}^\ell (p_{i-1}^{(b)})^2 \cdot (1+o(1)) \cdot \eta^r \cdot f_i^2 \\ & = (p_{i-1}^{(b)})^2 \cdot f_i^2 \cdot (1+o(1)) \cdot \sum_{r=1}^\ell \eta^r \\ & \leq (50m)^2 \cdot 2 \eta \end{align*} Here the last inequality follows by applying Observation~\ref{obs:pifi} and summing a geometric series. This therefore proves the inductive bound on $\Delta_{\Gamma}^{(b)}$ assuming (\ref{eq:phi-final}). In order to prove (\ref{eq:phi-final}), we note that by using Janson's inequality (Theorem~\ref{thm:janson}) for $\Phi^k,$ we get that \begin{align*} \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta)} \leq \prod_{u \leq f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)}) \cdot \exp(2\Delta_{\Phi^k}^{(b)}) \end{align*} Also observe (Theorem~\ref{thm:janson}) that $\prob{\bm{x}\sim \mu_b^n}{(\Gamma^k(\bm{x}) =1-\beta)}$ is lower bounded by $\prod_{u \leq f_{i-1}} (1-p_{\Gamma^{k,u}}^{(b)})$. Therefore, we get $$\prod_{u \leq f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)}) \leq \frac{\prob{\bm{x}\sim \mu_b^n}{\Gamma^k(\bm{x})=1-\beta}}{\prod_{u > f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)})}.$$ Now, we have $\Delta_{\Phi^k}^{(b)} \leq \Delta_{\Gamma^k}^{(b)}$ by the definitions of these quantities and the fact that $\Phi^k$ is obtained from $\Gamma^k$ by removing some depth-$(i-2)$ subformulas. Also, by the induction hypothesis, we have $\Delta_{\Gamma^k}^{(b)} \leq (C_4m)^2 \eta$. As $\eta = 1/m^{10d}$, we get that $\Delta_{\Phi^k}^{(b)} = o(1)$. Hence, $\exp(2\Delta_{\Phi^k}^{(b)}) = \exp(o(1)) \leq (1+o(1))$.
Putting these together, we obtain the following inequality. \begin{equation} \label{eq:phi1} \prob{\bm{x}\sim \mu_b^n}{(\Phi^k(\bm{x}) =1-\beta)} \leq \frac{\prob{\bm{x}\sim \mu_b^n}{\Gamma^k(\bm{x})=1-\beta}}{\prod_{u > f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)})} \cdot (1+o(1)) \end{equation} Now using the induction hypothesis for $p_{\Gamma^{k,u}}^{(b)}$, we get $p_{\Gamma^{k,u}}^{(b)} \leq (1+o(1))\cdot p_{i-2}^{(b)} \leq 2 \cdot p_{i-2}^{(b)}$. Using this bound on the value of $p_{\Gamma^{k,u}}^{(b)}$, we get the following lower bound on $\prod_{u> f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)})$. \begin{align*} \prod_{u> f_{i-1}-T} (1-p_{\Gamma^{k,u}}^{(b)}) & \geq (1-2p_{i-2}^{(b)})^T \\ & \geq 1-2 \cdot T \cdot p_{i-2}^{(b)} \\ & \geq 1-2 \cdot \gamma \cdot f_{i-1} \cdot \ell \cdot p_{i-2}^{(b)} \\ & \geq (1-o(1)) \end{align*} The third inequality comes from the upper bound on the value of $T$ argued above. Using Observation~\ref{obs:pifi} we get that $f_{i-1}\cdot p_{i-2}^{(b)} \leq 50m$. From our choice of parameters, $\gamma = 1/m^3$ and $\ell \leq m$. Therefore, we get $\gamma \cdot \ell \cdot f_{i-1}p_{i-2}^{(b)} \leq (1/m^3) \cdot m \cdot 50m = o(1)$. This gives the last inequality above. Putting it together, this gives (\ref{eq:phi-final}). This finishes the proof of part 2 in Lemma~\ref{lem:main}. We now proceed to proving the inductive step for part 1 of Lemma~\ref{lem:main}. The proof is very similar to the proof of the analogous statement in the base case. We give the details for the sake of completeness. Using Janson's inequality, we get \begin{equation} \label{eq:pGamma} \prod_{j=1}^{f_i} (1-p_{\Gamma^j}^{(b)}) \leq p_\Gamma^{(b)} \leq \prod_{j=1}^{f_i} (1-p_{\Gamma^j}^{(b)}) \cdot \exp(2\cdot \Delta_{\Gamma}^{(b)}) \end{equation} Using (\ref{eq:exp-ineq}) we get $p_\Gamma^{(b)} \geq \prod_{j=1}^{f_i} (1-p_{\Gamma^j}^{(b)}) \geq \exp(-\sum_{j\leq f_i}p_{\Gamma^j}^{(b)} - \sum_{j\leq f_i} (p_{\Gamma^j}^{(b)})^2)$. To lower bound this quantity, we will first upper bound $p_{\Gamma^j}^{(b)}$. By using the induction hypothesis, we get $p_{\Gamma^j}^{(b)} \leq p_{i-1}^{(b)}(1+\eta \cdot (C_3m)^{i-1})$. Using this, we get $\sum_{j\leq f_i}p_{\Gamma^j}^{(b)} \leq f_i \cdot p_{i-1}^{(b)}(1+\eta \cdot (C_3m)^{i-1})$. We will also show that $\sum_{j\leq f_i} (p_{\Gamma^j}^{(b)})^2$ is negligible. For that observe the following: \begin{align*} \sum_{j\leq f_i} (p_{\Gamma^j}^{(b)})^2& \leq f_i \cdot (p_{i-1}^{(b)}(1+\eta \cdot (C_3m)^{i-1}))^2 \\ & \leq 4 \frac{(f_i \cdot p_{i-1}^{(b)})^2}{f_i} \leq \frac{O(m^2)}{\lceil m\cdot 2^m\cdot \ln 2\rceil} \leq \eta \cdot (C_3m)^{i-1} \end{align*} Here the second inequality comes from the fact that $(1+\eta \cdot (C_3m)^{i-1}) \leq 2$. The other inequalities easily follow from our choice of parameters and Observation~\ref{obs:pifi}. \begin{align} p_\Gamma^{(b)} & \geq \exp(-\sum_{j\leq f_i}p_{\Gamma^j}^{(b)} - \sum_{j\leq f_i} (p_{\Gamma^j}^{(b)})^2) \nonumber \\ & \geq \exp(-f_i \cdot p_{i-1}^{(b)}(1+\eta \cdot (C_3m)^{i-1}) - \eta \cdot (C_3m)^{i-1} ) \nonumber \\ & = \exp(-f_i \cdot p_{i-1}^{(b)} - \eta \cdot (C_3m)^{i-1}\cdot (f_ip_{i-1}^{(b)}+1)) \nonumber \\ & \geq (1-p_{i-1}^{(b)})^{f_i} \left(1- \eta \cdot (C_3m)^{i-1} \cdot (f_ip_{i-1}^{(b)}+1)\right) \label{eq:exp}\\ & \geq p_i^{(b)} \left(1-\eta \cdot (C_3m)^{i-1} \cdot (50m+1)\right) \label{eq:ih}\\ & \geq p_i^{(b)} \left(1-\eta \cdot (C_3m)^{i-1} \cdot C_3m\right) \nonumber\\ & = p_i^{(b)} \left(1-\eta \cdot (C_3m)^{i}\right)\nonumber \end{align} Here, the above inequalities can be obtained primarily by simple rearrangement of terms.
The inequality (\ref{eq:exp}) uses (\ref{eq:exp-ineq}), while inequality (\ref{eq:ih}) uses the induction hypothesis and Observation~\ref{obs:pifi}. This proves the desired lower bound on $p_\Gamma^{(b)}$. Now we prove the upper bound. \begin{align} p_\Gamma^{(b)} & \leq \prod_{j \leq f_i} (1-p_{\Gamma^j}^{(b)}) \exp(\Delta_\Gamma^{(b)}) \nonumber \\ & \leq \exp(-\sum_{j \leq f_i} p_{\Gamma^j}^{(b)} + 2 \Delta_\Gamma^{(b)}) \nonumber \\ & \leq \exp(-p_{i-1}^{(b)} (1-\eta\cdot (C_3 m)^{i-1})\cdot f_i + 2 \Delta_{\Gamma}^{(b)}) \nonumber \\ & \leq \exp(-p_{i-1}^{(b)}f_i) \exp\left(p_{i-1}^{(b)}\cdot \eta\cdot (C_3 m)^{i-1}\cdot f_i + 2 \Delta_{\Gamma}^{(b)}\right) \nonumber \\ & = \exp(-p_{i-1}^{(b)})^{f_i} \exp\left(p_{i-1}^{(b)}\cdot \eta\cdot (C_3 m)^{i-1}\cdot f_i + 2 \Delta_{\Gamma}^{(b)}\right) \nonumber \\ & \leq \left((1-p_{i-1}^{(b)}) \cdot \exp((p_{i-1}^{(b)})^2)\right)^{f_i} \cdot \exp\left(p_{i-1}^{(b)}\cdot \eta\cdot (C_3 m)^{i-1}\cdot f_i + 2 \Delta_{\Gamma}^{(b)}\right) \label{eq:expb}\\ & = (1-p_{i-1}^{(b)})^{f_i} \cdot \exp\left((p_{i-1}^{(b)})^2\cdot f_i + p_{i-1}^{(b)}\cdot \eta\cdot (C_3 m)^{i-1}\cdot f_i + 2 \Delta_{\Gamma}^{(b)}\right) \nonumber \\ & \leq p_i^{(b)} \cdot \exp\left(\eta\cdot (C_3 m)^{i-1} + 50 m \cdot \eta\cdot (C_3 m)^{i-1} + 2 \eta \cdot (C_3 m)^{i-1}\right) \label{eq:ihu}\\ & \leq p_i^{(b)} \cdot \exp\left( \eta \cdot (C_3m)^{i-1} \cdot (50m+3)\right) \nonumber \\ & \leq p_i^{(b)} \cdot (1+ 2 \cdot \eta \cdot (C_3m)^{i-1} \cdot (50m+3)) \label{eq:1+2x}\\ & \leq p_i^{(b)} \cdot (1+\eta \cdot (C_3m)^i) \nonumber \end{align} Most inequalities above are obtained by simple rearrangement of terms. Inequality (\ref{eq:expb}) is obtained by applying the inequality (b) from (\ref{eq:exp-ineq}). Inequality (\ref{eq:ihu}) is obtained by applying (\ref{eq:pivspi-1}), by using Observation~\ref{obs:pifi}, and by using the fact that $f_i \cdot (p_{i-1}^{(b)})^2 \leq \eta \cdot (C_3m)^{i-1}$. Finally, (\ref{eq:1+2x}) is obtained by using inequalities (d) and (e) of (\ref{eq:exp-ineq}). This completes the proof of part 1 of Lemma~\ref{lem:main}. \end{proof} \section{Lower bounds for the Coin Problem} \label{sec:lbds} In this section, we prove Theorem~\ref{thm:lb-intro}. We start with a special case of the theorem (that we call the monotone case) the proof of which is shorter and which suffices for the application to the Fixed-Depth Size-Hierarchy theorem (Theorem~\ref{thm:size-hie}). We then move on to the general case. The special case is implicit in the results of O'Donnell and Wimmer~\cite{OW} and Amano~\cite{Amano}, but we prove it below for completeness. \subsection{The monotone case} \label{sec:monotone} In this section, we prove a near-optimal size lower bound (i.e. matching the upper bound construction from Theorem~\ref{thm:main-intro}) on the size of any $\mathrm{AC}^0[\oplus]$ formula computing any \emph{monotone} Boolean function solving the $\delta$-coin problem. Observe that this already implies Theorem~\ref{thm:size-hie}, since the formula $F_n$ from the statement of Theorem~\ref{thm:size-hie} computes a monotone function. Let $g:\{0,1\}^N\rightarrow \{0,1\}$ be any \emph{monotone} Boolean function solving the $\delta$-coin problem. Note that the monotonicity of $g$ implies that for all $\alpha\in [0,(1-\delta)/2]$ and $\beta \in [(1+\delta)/2,1],$ we have \begin{equation} \label{eq:monotonicity} \prob{\bm{x}\sim D_{\alpha}^N}{g(\bm{x}) =1}\leq 0.1 \text{ and } \prob{\bm{x}\sim D_{\beta}^N}{g(\bm{x}) =1}\geq 0.9. 
\end{equation} Let $F$ be any $\mathrm{AC}^0[\oplus]$ formula of size $s$ and depth $d$ computing $g$. We will show that $s\geq \exp(d\cdot\Omega(1/\delta)^{1/(d-1)}).$ Our main tool is the following implication of the results of Razborov~\cite{Razborov}, Smolensky~\cite{Smolensky93}, and Rossman and Srinivasan~\cite{RossmanS}. \begin{theorem} \label{thm:RS} Let $F'$ be any $\mathrm{AC}^0[\oplus]$ formula of size $s'$ and depth $d$ with $n$ input bits that agrees with the $n$-bit Majority function in at least a $0.75$ fraction of its inputs. Then, $s' \geq \exp(d\cdot\Omega(n)^{1/2(d-1)}).$ \end{theorem} We will use the above theorem to lower bound $s$ (the size of $F$) by using $F$ to construct a formula $F'$ of size at most $s$ that agrees with the Majority function on $n = \Theta(1/\delta^2)$ bits at a $0.8$ fraction of its inputs. Theorem~\ref{thm:RS} then implies the result. We now describe the construction of $F'$. Let $n = \lfloor (1/100\delta^2)\rfloor$. We start by defining a \emph{random} formula $\bm{F}''$ on $n$ inputs as follows. On input $x = (x_1,\ldots, x_n)\in \{0,1\}^n,$ define $\bm{F}''(x)$ to be $F(x_{\bm{i}_1},\ldots,x_{\bm{i}_N})$ where $\bm{i}_1,\ldots,\bm{i}_N$ are chosen i.u.a.r. from $[n].$ We make the following easy observation. For any $x\in \{0,1\}^n$ and for $\alpha = |x|/n,$ \begin{equation} \label{eq:F''acc} \prob{\bm{F}''}{\bm{F}''(x) = 1} = \prob{\bm{y}\sim D_{\alpha}^N}{F(\bm{y}) = 1} = \prob{\bm{y}\sim D_{\alpha}^N}{g(\bm{y}) = 1}. \end{equation} In particular, from (\ref{eq:monotonicity}), we see that if $\alpha \leq (1-\delta)/2$ or $\alpha \geq (1+\delta)/2,$ we have $\prob{\bm{F}''}{\bm{F}''(x) \neq \mathrm{Maj}_n(x)}\leq 0.1.$ As a result we get \begin{align*} \prob{\bm{x}\sim \{0,1\}^n,\bm{F}''}{\bm{F}''(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})} &= \avg{\bm{x}}{\prob{\bm{F}''}{\bm{F}''(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})}}\\ &\leq \prob{\bm{x}}{|\bm{x}|/n \in ((1-\delta)/2,(1+\delta)/2)}\\ &\ \ + \max_{\alpha\not\in [(1-\delta)/2,(1+\delta)/2)]} \prob{\bm{F}''}{\bm{F}''(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})\ |\ |\bm{x}| = \alpha n}\\ &\leq \prob{\bm{x}}{|\bm{x}|/n \in ((1-\delta)/2,(1+\delta)/2)} + 0.1. \end{align*} By Stirling's approximation, it follows that for any $i\in [n]$, $\prob{\bm{x}}{|\bm{x}|=i}\leq \binom{n}{\lfloor n/2\rfloor}/2^n \leq 1/\sqrt{n}.$ Hence, by a union bound, we have $\prob{\bm{x}}{|\bm{x}|/n\in ((1-\delta)/2,(1+\delta)/2)}\leq (\delta n)\cdot 1/\sqrt{n}\leq \delta\sqrt{n} \leq 0.1.$ Plugging this in above, we obtain \[ \prob{\bm{x}\sim \{0,1\}^n, \bm{F}''}{\bm{F}''(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})} \leq 0.2. \] By an averaging argument, there is a fixed choice of $\bm{F}''$, which we denote by $F'$, that agrees with the Majority function $\mathrm{Maj}_n$ on a $0.8$ fraction of all inputs. Note that $F' = F(x_{i_1},\ldots,x_{i_N})$ for some choices of $i_1,\ldots,i_N\in [n]$. Hence, $F'$ is a circuit of depth $d$ and size at most $s$. Theorem~\ref{thm:RS} now implies the lower bound on $s$. \subsection{The general case} \label{sec:lbd-gen} In this section, we prove a general lower bound on the size of any $\mathrm{AC}^0[\oplus]$ formula that solves the coin problem (not necessarily by computing a monotone function). The main technical result is the following theorem about polynomials that solve the coin problem. \begin{theorem} \label{thm:lbd-polys} Let $g\in \mathbb{F}_2[x_1,\ldots,x_N]$ solve the $\delta$-coin problem. 
Then, $\deg(g) = \Omega(1/\delta).$ \end{theorem} Given the above result, it is easy to prove Theorem~\ref{thm:lb-intro} in its general form. \begin{proof}[Proof of Theorem~\ref{thm:lb-intro}] Assume that $F$ is an $\mathrm{AC}^0[\oplus]$ formula of size $s$ and depth $d$ on $N$ inputs that solves the $\delta$-coin problem. Building on Razborov~\cite{Razborov}, Rossman and Srinivasan~\cite{RossmanS} show that for any such $\mathrm{AC}^0[\oplus]$ formula $F$ of size $s$ and depth $d$ and any probability distribution $\mu$ on $\{0,1\}^N$, there exists a polynomial $P\in \mathbb{F}_2[x_1,\ldots,x_N]$ of degree $O((\log s)/d)^{d-1}$ such that \[ \prob{\bm{x}\sim \mu}{P(\bm{x})\neq F(\bm{x})}\leq 0.05. \] Taking $\mu = (\mu_0^N + \mu_1^N)/2$, we have that for each $b\in \{0,1\},$ the above polynomial $P$ satisfies \begin{equation} \label{eq:PvsF} \prob{\bm{x}\sim \mu_b^N}{P(\bm{x})\neq F(\bm{x})}\leq 2\prob{\bm{x}\sim \mu}{P(\bm{x})\neq F(\bm{x})}\leq 0.1. \end{equation} In particular, if $F$ solves the $\delta$-coin problem, then $P$ solves the $\delta$-coin problem with error at most $0.2$. By Fact~\ref{fac:err_redn} applied with $t$ being a suitably large constant, it follows that there is a polynomial $Q\in \mathbb{F}_2[x_1,\ldots,x_N]$ that solves the $\delta$-coin problem (with error at most $0.1$) and satisfies $\deg(Q) \leq t\cdot\deg(P) = O(\deg(P)).$ By Theorem~\ref{thm:lbd-polys}, it follows that $\deg(Q) = \Omega(1/\delta)$ and hence we have $\deg(P) = \Omega(1/\delta)$ as well. Since $\deg(P) = O((\log s)/d)^{(d-1)}$, we get $s \geq \exp(\Omega(d\cdot (1/\delta)^{1/(d-1)})).$ \end{proof} We now turn to the proof of Theorem~\ref{thm:lbd-polys}. \subsubsection{Proof of Theorem~\ref{thm:lbd-polys}} \label{sec:lbd-polys} We define a \emph{probabilistic function} to be a random function $\bm{g}:\{0,1\}^N\rightarrow \{0,1\}$, chosen according to some distribution. We say that $\deg(\bm{g}) \leq D$ if this distribution is supported over polynomials (from $\mathbb{F}_2[x_1,\ldots,x_N]$) of degree at most $D$. A probabilistic function solves the $\delta$-coin problem with error at most $\varepsilon$ if it satisfies (\ref{eq:cpdefn}), where the probability is additionally taken over the randomness used to sample $\bm{g}.$ If no mention is made of the error, we assume that it is $0.1.$ Note that a standard (i.e. non-random) function is also a probabilistic function of the same degree. We will prove the stronger statement that any probabilistic function $\bm{g}$ solving the $\delta$-coin problem must have degree $\Omega(1/\delta).$\footnote{While formally stronger, this statement is more or less equivalent, since given such a probabilistic function $\bm{g}$, one can always extract a deterministic function of the same degree that solves the coin problem with error at most $0.21$ by an averaging argument. Then using error reduction (Fact~\ref{fac:err_redn}), we can obtain a deterministic function with a slightly larger degree that solves the coin problem with error $0.1$.} Given a probabilistic function $\bm{g}:\{0,1\}^N\rightarrow \{0,1\},$ we define the \emph{profile of $\bm{g}$}, denoted $\pi_{\bm{g}}$, to be a function $\pi_{\bm{g}}:[0,1]\rightarrow [0,1]$ where \[ \pi_{\bm{g}}(\alpha) = \prob{\substack{\bm{g},\\ \bm{x}\sim D_\alpha^N}}{\bm{g}(\bm{x}) = 1}. \] Note that since $\bm{g}$ solves the $\delta$-coin problem, we have \begin{equation} \label{eq:pig} \pi_{\bm{g}}((1-\delta)/2) \leq 0.1 \text{ and } \pi_{\bm{g}}((1+\delta)/2) \geq 0.9. 
\end{equation} The proof of the lower bound on $\deg(\bm{g})$ proceeds in two phases. In the first phase, we use $\bm{g}$ to obtain a probabilistic function $\bm{h}$ (of related degree) which satisfies a stronger criterion than (\ref{eq:pig}): namely that the profile of $\bm{h}$ is small in an \emph{interval} close to $(1-\delta')/2$ and large in an interval close to $(1+\delta')/2$ (for some $\delta'\leq \delta$). In the second phase, we use algebraic arguments~\cite{Smolensky} to lower bound $\deg(\bm{h}),$ which leads to a lower bound on $\deg(\bm{g}).$ Let $r,t\in\mathbb{N}$ and $\zeta\in (0,1)$ denote absolute constants that we will fix later on in the proof. We start the first phase of the proof as outlined above. We iteratively define a sequence of probabilistic functions $(\bm{g}_k)_{k\geq 0}$ where $\bm{g}_k:\{0,1\}^{N_k}\rightarrow \{0,1\}$ solves the $\delta_k$-coin problem; here $N_k,\delta_k$ are parameters that are defined below. \begin{itemize} \item The function $\bm{g}_0$ is simply the function $\bm{g}.$ Hence, $N_0 = N$ and we can take $\delta_0 = \delta.$ \item Having defined $\bm{g}_k,$ we consider which of the following 3 cases occurs. \begin{itemize} \item Case 1: There is an $\alpha\in ((1-\delta_k)/2, (1-\delta_k)/2 + \delta_k/r]$ such that $\pi_{\bm{g}_k}(\alpha) \geq 0.4.$ \item Case 2: There is a $\beta\in [(1+\delta_k)/2 - \delta_k/r, (1+\delta_k)/2)$ such that $\pi_{\bm{g}_k}(\beta) \leq 0.6.$ \item Case 3: Neither Case 1 nor Case 2 occurs. In this case, the sequence of probabilistic functions ends with $\bm{g}_k.$ \end{itemize} \item If Case 1 or Case 2 occurs, we extend the sequence by defining $\bm{g}_{k+1}$. For simplicity, we assume Case 1 occurs (Case 2 is handled similarly). Note that in this case we have \begin{equation} \label{eq:alpha-pigk} \pi_{\bm{g}_k}((1-\delta_k)/2) \leq 0.1 \text{ and } \pi_{\bm{g}_k}(\alpha) \geq 0.4 \end{equation} for some $\alpha \in ((1-\delta_k)/2, (1-\delta_k)/2 + \delta_k/r]$. We will need the following technical claim. \begin{claim} \label{clm:convex} Let $\delta',\delta''\in (0,1)$ be such that $\delta''\geq 4\delta'.$ Assume we have $\alpha_1,\alpha_2,\beta_1,\beta_2\in (0,1)$ such that $(1/4)\leq \alpha_1\leq(1/2), \alpha_2 = \alpha_1 +\delta', \beta_1 = (1-\delta'')/2, \beta_2 = (1+\delta'')/2$. Then, there exist $\gamma,\eta\in [0,1]$ such that for each $i\in [2]$, $\alpha_i = \gamma\cdot \beta_i + (1-\gamma)\cdot \eta.$ \end{claim} \begin{proof} Subtracting the two required identities $\alpha_i = \gamma\cdot\beta_i + (1-\gamma)\cdot\eta$ forces \[ (\alpha_2-\alpha_1)=\gamma(\beta_2-\beta_1)\quad\implies\quad\gamma=\frac{\alpha_2-\alpha_1}{\beta_2-\beta_1}=\frac{\delta'}{\delta''}\in(0,1/4]. \] With this $\gamma$, the identity for $i=1$ gives (using $\gamma\le 1/4$ and $\alpha_1\ge 1/4$) \[ \eta=\frac{\alpha_1-\beta_1\gamma}{1-\gamma}>\alpha_1-\beta_1\gamma\ge\frac{1-\beta_1}{4}>0. \] Further, since $1-\gamma\ge 3/4$ and $\alpha_1\le 1/2$, \[ \eta=\frac{\alpha_1-\beta_1\gamma}{1-\gamma}\le\frac{4\alpha_1}{3}\le\frac{2}{3}. \] So $\eta\in(0,2/3]$, and in particular $\gamma,\eta\in[0,1]$ as required. \end{proof} Applying the above claim to $\alpha_1 = (1-\delta_k)/2, \alpha_2 = \alpha, \delta' = \alpha_2-\alpha_1$ and $\delta'' = 4\delta_k/r,$ we see that there exist $\gamma,\eta\in [0,1]$ such that \begin{equation} \label{eq:gammaetak} (1-\delta_k)/2 = \gamma\cdot (1-\delta'')/2 + (1-\gamma)\cdot \eta \text{ and } \alpha = \gamma\cdot (1+\delta'')/2 + (1-\gamma)\cdot \eta \end{equation} To define the function $\bm{g}_{k+1},$ we start with an intermediate probabilistic function $\bm{h}_k$ on $N_k$ inputs. On any input $x\in\{0,1\}^{N_k},$ the function $\bm{h}_k$ is defined as follows. 
\noindent $\bm{h}_k(x)$: \begin{itemize} \item Sample a random $\bm{b}$ from the distribution $D_{\gamma}^{N_k}$ and $\bm{y}$ from the distribution $D_{\eta}^{N_k}.$ \item Define $\bm{z}\in \{0,1\}^{N_k}$ by $\bm{z}_i = \bm{b}_i\cdot x_i + (1-\bm{b}_i)\cdot \bm{y}_i.$ \item $\bm{h}_k(x)$ is defined to be $\bm{g}_k(\bm{z}).$ \end{itemize} Note that as each $\bm{z}_i$ is a (random) degree-$1$ polynomial in $x$, the probabilistic function $\bm{h}_k(x)$ satisfies $\deg(\bm{h}_k)\leq \deg(\bm{g}_k).$ Also, note that when $\bm{x}$ is sampled from the $D_{(1-\delta'')/2}^{N_k}$ or $D_{(1+\delta'')/2}^{N_k}$, then by (\ref{eq:gammaetak}), $\bm{z}$ has the distribution $D_{(1-\delta_k)/2}^{N_k}$ or $D_\alpha^{N_k}$ respectively. Hence, we get \begin{align*} \pi_{\bm{h}_k}((1-\delta'')/2) = \pi_{\bm{g}_k}((1-\delta_k)/2) \leq 0.1 \text{ and } \pi_{\bm{h}_k}((1+\delta'')/2) = \pi_{\bm{g}_k}(\alpha) \geq 0.4. \end{align*} We are now ready to define $\bm{g}_{k+1}.$ Let $\mathrm{Thr}_{t/4}^t:\{0,1\}^t\rightarrow \{0,1\}$ be the Boolean function that accepts inputs of Hamming weight at least $t/4$. We set $N_{k+1} = N_k\cdot t$ and define $\bm{g}_{k+1}:\{0,1\}^{N_{k+1}}\rightarrow \{0,1\}$ by \[ \bm{g}_{k+1}(x) = \mathrm{Thr}_{t/4}^t(\bm{h}_k^{(1)}(x^{(1)}),\ldots,\bm{h}_k^{(t)}(x^{(t)})) \] where $\bm{h}_k^{(1)},\ldots,\bm{h}_k^{(t)}$ are \emph{independent} copies of the probabilistic function $\bm{h}_k$ and $x^{(i)}\in \{0,1\}^{N_k}$ is defined by $x^{(i)}_j = x_{(i-1)N_k + j}$ for each $j\in [N_k]$. Clearly, $\deg(\bm{g}_{k+1})\leq \deg(\mathrm{Thr}_{t/4}^t)\cdot \deg(\bm{h}_k) \leq \deg(\bm{g}_k)\cdot t.$ Note that if $\bm{x}$ is chosen according to the distribution $D_{(1-\delta'')/2}^{N_{k+1}},$ each $\bm{h}_k^{(i)}(\bm{x}^{(i)})$ is a Boolean random variable that is $1$ with probability at most $0.1$. Similarly, if $\bm{x}$ is chosen according to the distribution $D_{(1+\delta'')/2}^{N_{k+1}},$ each $\bm{h}_k^{(i)}(\bm{x}^{(i)})$ is a Boolean random variable that is $1$ with probability at least $0.4$. By a standard Chernoff bound (see e.g. \cite[Theorem 1.1]{DP}), we see that for a large enough absolute constant $t$, \begin{equation} \label{eq:gk+1} \pi_{\bm{g}_{k+1}}((1-\delta'')/2) = \exp(-\Omega(t)) \leq 0.1 \text{ and } \pi_{\bm{g}_{k+1}}((1+\delta'')/2) = 1-\exp(-\Omega(t)) \geq 0.9. \end{equation} We now fix the value of $t$ so that the above inequalities hold. Note that we have shown the following. \begin{observation} \label{obs:gk+1} $\bm{g}_{k+1}:\{0,1\}^{N_{k+1}}\rightarrow \{0,1\}$ is a probabilistic function that solves the $\delta_k$-coin problem where $N_{k+1} = N_k \cdot t,\deg(\bm{g}_{k+1})\leq \deg(\bm{g}_k)\cdot t$ and $\delta_{k+1} \leq 4\delta_k/r.$ \end{observation} \end{itemize} We now argue that, for $r = 10t$, the above process cannot produce an infinite sequence of probabilistic functions. In other words, there is a fixed $k$ such that $\bm{g}_k$ is in neither Case 1 nor Case 2 mentioned above. Assume to the contrary that the above process produces an infinite sequence of probabilistic functions. By Observation~\ref{obs:gk+1} and induction, we see that $\bm{g}_k$ is a probabilistic function on at most $N\cdot t^k$ variables solving the $\delta_k$-coin problem for $\delta_k\leq \delta\cdot (4/r)^k.$ We now appeal to the following standard fact. \begin{fact}[Folklore] \label{fac:stat-dist} Let $\delta'\in (0,1)$ and $N'\in \mathbb{N}$ be arbitrary. 
Then, the statistical distance between $D_{(1-\delta')/2}^{N'}$ and $D_{(1+\delta')/2}^{N'}$ is at most $O(\sqrt{N'}\cdot \delta').$ \end{fact} Thus, for $\bm{g}_k$ to be able to solve the $\delta_k$-coin problem with $N_k$ samples, we must have $\sqrt{N_k}\cdot \delta_k\geq \alpha_0$ for some absolute positive constant $\alpha_0.$ On the other hand, this cannot be true for large enough $k$, since $\sqrt{N_k}\delta_k \leq N_k\delta_k \leq N\delta \cdot (4t/r)^k$ and $r\geq 10t.$ This yields a contradiction. Thus, we have shown that for some finite $k$, the function $\bm{g}_k$ is in neither Case 1 nor Case 2. Equivalently, for any $\alpha\in [1/2-\delta_k,1/2-\delta_k+\delta_k/r]$ and any $\beta \in [1/2+\delta_k-\delta_k/r, 1/2 + \delta_k],$ we have \[ \pi_{\bm{g}_k}(\alpha) \leq 0.4 \text{ and } \pi_{\bm{g}_k}(\beta) \geq 0.6. \] Using error reduction as above, we can obtain a probabilistic function that satisfies the above inequalities with parameters $\zeta := \exp(-10r^2)$ and $1-\zeta$ respectively. Set $\ell = 10\lceil \log(1/\zeta)\rceil$ and define $\bm{h}:\{0,1\}^{N_k\cdot \ell}\rightarrow \{0,1\}$ by \[ \bm{h}(x) = \mathrm{Maj}_\ell(\bm{g}_k^{(1)}(x^{(1)}),\ldots,\bm{g}_k^{(\ell)}(x^{(\ell)})) \] where $\bm{g}_k^{(1)},\ldots,\bm{g}_k^{(\ell)}$ are \emph{independent} copies of the probabilistic function $\bm{g}_k$ and $x^{(i)}\in \{0,1\}^{N_k}$ is defined by $x^{(i)}_j = x_{(i-1)N_k + j}$ for each $j\in [N_k]$. Clearly, $\deg(\bm{h}) \leq \ell\cdot \deg(\bm{g}_k) = O(\deg(\bm{g}_k))$ as $\ell$ is an absolute constant. Further, by the Chernoff bound, $\bm{h}$ satisfies \begin{equation} \label{eq:h} \pi_{\bm{h}}(\alpha) \leq \zeta \text{ and } \pi_{\bm{h}}(\beta) \geq 1-\zeta. \end{equation} This concludes the first phase of the proof. In the second phase, we will show the following lower bound on $\deg(\bm{h})$. \begin{claim} \label{clm:h} $\deg(\bm{h})\geq \Omega(1/\delta_k).$ \end{claim} Note that this immediately implies the result since we have \[ \deg(\bm{g}) = \deg(\bm{g}_0) \geq \frac{\deg(\bm{g}_k)}{t^k} = \Omega\left(\frac{\deg(\bm{h})}{t^k}\right) = \Omega\left(\frac{1}{\delta_kt^k}\right) \geq \Omega\left(\frac{1}{\delta\cdot (4t/r)^k}\right) \geq \Omega(1/\delta) \] where we have used Observation~\ref{obs:gk+1} and the fact that $r \geq 10t.$ It therefore suffices to prove Claim~\ref{clm:h}. We prove this in two steps. We start with an extension of a lower bound of Smolensky~\cite{Smolensky} (see also the earlier results of Razborov~\cite{Razborov} and Szegedy~\cite{szegedy}) on the degrees of polynomials approximating the Majority function.\footnote{In~\cite{Smolensky}, Smolensky proves a lower bound for $\mathrm{MOD}_p$ functions. However, the same idea can also be used to prove lower bounds for the Majority function, as observed by Szegedy~\cite{szegedy}.} Our method is a slightly different phrasing of this bound following the results of Aspnes, Beigel, Furst and Rudich~\cite{ABFR}, Green~\cite{Green} and Kopparty and Srinivasan~\cite{KS}.\footnote{It can be viewed as a `dual' version of Smolensky's proof. Smolensky's standard proof can also be modified to yield this.} \begin{lemma}[A slight extension of Smolensky's bound] \label{lem:smol-ext} Let $h:\{0,1\}^n\rightarrow \{0,1\}$ be a (deterministic) function satisfying the following. 
There exist integers $D < R < n/2$ such that $E_h^R$ defined by \begin{equation} \label{eq:EhRdef} E_h^R = \{ x\in \{0,1\}^n\ |\ h(x) \neq \mathrm{Maj}_{n}(x), |x|\not\in (R,n - R)\} \end{equation} satisfies $|E_h^R| < \binom{n}{\leq (R - D)}.$\footnote{Recall that $\binom{n}{\leq i}$ denotes $\sum_{j=0}^i \binom{n}{j}.$} Then, $\deg(h) > D.$ \end{lemma} \begin{proof} Consider the vector space $V_{R-D}$ of all multilinear polynomials of degree $\leq (R - D)$. A generic polynomial $g\in V_{R-D}$ is given by \[ g(x_1,\ldots,x_n) = \sum_{|S|\leq R-D}a_S \cdot \prod_{i\in S} x_i \] where $a_S\in \mathbb{F}_2$ for each $S$. We claim that there is a \emph{non-zero} $g$ as above that vanishes at \emph{all points} in $E_h^R.$ To see this, note that finding such a $g$ is equivalent to finding a non-zero assignment to the $a_S$ so that the resulting $g$ vanishes at $E_h^R$. Vanishing at any point of $\{0,1\}^n$ gives a linear constraint on the coefficients $a_S$. Since we have $|E_h^R| < \binom{n}{\leq (R - D)}$, we have a homogeneous linear system with more variables than constraints and hence, there exists a non-zero multilinear polynomial $g$ of degree $\leq (R-D)$ which vanishes on $E_h^R$. Thus, there is a non-zero $g$ as claimed. Let $B_1 = \{ x\in \{0,1\}^n\ |\ |x| \leq R\}$ and $B_2 = \{ x\in \{0,1\}^n\ |\ |x| \geq n-R\}$. Note that $B_1$ and $B_2$ are both Hamming balls of radius $R$. Let $f$ be the pointwise product of the functions $g$ and $h$. Note that $f$ can be represented as a \emph{multilinear} polynomial of degree at most $\deg(g) + \deg(h)$ (by replacing $x_i^2$ by $x_i$ as necessary in the polynomial $g\cdot h$). We will need the following standard fact (see e.g.~\cite{KS} for a proof). \begin{fact} \label{fac:hamming balls} Let $P$ be a non-zero multilinear polynomial of degree $\leq d$ in $n$ variables. Then $P$ cannot vanish on a Hamming ball of radius $d$. \end{fact} On $B_1$, $g$ vanishes wherever $h$ does not (by definition of $E_h^R$) and therefore $f$ vanishes everywhere. On $B_2$, $h$ is non-vanishing wherever $g$ is non-vanishing. But since $B_2$ is a Hamming ball of radius $R > R-D \geq \text{deg}(g)$, Fact~\ref{fac:hamming balls} implies that $g(x_0) \neq 0$ for some point $x_0$ in $B_2$. Therefore $h(x_0) \neq 0$ and so $f(x_0) \neq 0$. In particular, $f$ is a non-zero multilinear polynomial of degree at most $\deg(g) + \deg(h).$ Since $f$ is non-zero and vanishes on $B_1$ which is a Hamming ball of radius $R$, Fact~\ref{fac:hamming balls} implies that $\text{deg}(f) > R$. Therefore $(R-D)+\text{deg}(h) \geq \text{deg}(g)+\text{deg}(h) \geq \text{deg}(f) > R \Rightarrow \text{deg}(h) > D$. \end{proof} We now prove Claim~\ref{clm:h}. \begin{proof}[Proof of Claim~\ref{clm:h}] The idea is to use $\bm{h}$ to produce a deterministic function $h$ of the same degree to which Lemma~\ref{lem:smol-ext} is applicable. Let $M$ denote $N_k\cdot \ell$, the sample complexity (i.e. number of inputs) of $\bm{h}.$ Let $n = \lceil r^2/\delta_k^2\rceil $. Define a probabilistic function $\tilde{\bm{h}}:\{0,1\}^n\rightarrow \{0,1\}$ as follows. On any input $x\in \{0,1\}^n$, we choose $\bm{i}_1,\ldots,\bm{i}_M\in [n]$ i.u.a.r. and set \[ \tilde{\bm{h}}(x) = \bm{h}(x_{\bm{i}_1},\ldots,x_{\bm{i}_M}). 
\] Clearly, we have $\deg(\tilde{\bm{h}})\leq \deg(\bm{h}).$ Also note that for any $x\in \{0,1\}^n$, we have \[ \prob{\tilde{\bm{h}}}{\tilde{\bm{h}}(x) = 1} = \pi_{\bm{h}}(|x|/n) \] since in this case $(x_{\bm{i}_1},\ldots,x_{\bm{i}_M})$ is drawn from the distribution $D_{|x|/n}^M.$ By (\ref{eq:h}), we have for any $x$ such that $|x|/n\in [1/2-\delta_k,1/2-\delta_k+\delta_k/r]\cup [1/2+\delta_k-\delta_k/r,1/2+\delta_k],$ \[ \prob{\tilde{\bm{h}}}{\tilde{\bm{h}}(x) \neq \mathrm{Maj}_n(x)} \leq \zeta. \] In particular, for $\bm{x}$ chosen uniformly at random from $\{0,1\}^n$, we have \[ \prob{\bm{x},\tilde{\bm{h}}}{\tilde{\bm{h}}(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})\ |\ |\bm{x}|/n\in [1/2-\delta_k,1/2-\delta_k+\delta_k/r]\cup [1/2+\delta_k-\delta_k/r,1/2+\delta_k] } \leq \zeta. \] We will apply Lemma~\ref{lem:smol-ext} below with $n$ unchanged, $R = \lfloor (1/2-\delta_k + \delta_k/r)\cdot n\rfloor,$ and $D = \lfloor \delta_kn/(2r)\rfloor.$ For these parameters, we have \begin{align*} \avg{\tilde{\bm{h}}}{\frac{|E^R_{\tilde{\bm{h}}}|}{2\binom{n}{\leq R}}} &= \avg{\tilde{\bm{h}}}{\prob{\bm{x}}{\tilde{\bm{h}}(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})\ |\ |\bm{x}|\not\in (R,n-R)}}\\ &=\prob{\bm{x},\tilde{\bm{h}}}{\tilde{\bm{h}}(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})\ |\ |\bm{x}|\not\in (R,n-R)}\\ &\leq \prob{\bm{x},\tilde{\bm{h}}}{\tilde{\bm{h}}(\bm{x}) \neq \mathrm{Maj}_n(\bm{x})\ |\ |\bm{x}|/n\in [1/2-\delta_k,1/2-\delta_k+\delta_k/r]\cup [1/2+\delta_k-\delta_k/r,1/2+\delta_k] }\\ &+ \prob{\bm{x},\tilde{\bm{h}}}{ |\bm{x}|/n\not\in [1/2-\delta_k,1/2+\delta_k]\ |\ |\bm{x}|\not\in (R,n-R)}\\ &\leq \zeta + \prob{\bm{x}}{ |\bm{x}|/n\not\in [1/2-\delta_k,1/2+\delta_k]\ |\ |\bm{x}|\not\in (R,n-R)} \leq \zeta + \frac{2\binom{n}{\leq R'}}{2\binom{n}{\leq R}} \end{align*} where $R' = \lfloor (1/2-\delta_k)n\rfloor.$ Hence, we have \[ \avg{\tilde{\bm{h}}}{|E^R_{\tilde{\bm{h}}}|} \leq 2\zeta\binom{n}{\leq R} + 2\binom{n}{\leq R'}. \] By averaging, we can fix a deterministic $h:\{0,1\}^n\rightarrow \{0,1\}$ so that $\deg(h)\leq \deg(\tilde{\bm{h}})$ and we have the same bound for $|E^R_h|.$ By a computation, we show below that the above bound is strictly smaller than $\binom{n}{\leq R - D}.$ Lemma~\ref{lem:smol-ext} then implies that $\deg(h)\geq D = \Omega(1/\delta_k),$ completing the proof of Claim~\ref{clm:h}. It only remains to prove that \begin{equation} \label{eq:cltineq} 2\zeta\binom{n}{\leq R} + 2\binom{n}{\leq R'} \leq \binom{n}{\leq R-D}. \end{equation} Recall that $\zeta=e^{-10r^2}$, where $r=10t$ and $t\geq 1$. Now let $V_n(\alpha)=\frac{1}{2^n}{n\choose\le n/2-\alpha\sqrt{n}}$ and $\Phi(x)=\int_x^\infty\frac{e^{-t^2/2}}{\sqrt{2\pi}}\,dt$. Then by the Central Limit Theorem (see, e.g.,~\cite[Chapter VIII, vol II]{Feller}), for any fixed $\alpha$, we have \begin{equation} \label{eq:clt} \lim_{n\to\infty}(V_n(\alpha)-\Phi(2\alpha))=0. \end{equation} Also, we have the estimates~\cite[Lemma 2, Chapter VII, vol I]{Feller} \begin{equation} \label{eq:gaussian} \frac{1}{\sqrt{2\pi}}\cdot\left(\frac{1}{x}-\frac{1}{x^3}\right)e^{-x^2/2}\le\Phi(x)\le \frac{1}{\sqrt{2\pi}}\cdot \frac{1}{x}e^{-x^2/2},\quad x>0. 
\end{equation} Note that we have $\delta_k n \geq \delta_k\cdot (r/\delta_k) \cdot \sqrt{n} = r\sqrt{n}.$ So we get \begin{align} \frac{1}{2^n}\cdot 2\zeta{n\choose\le R}+\frac{1}{2^n}{n\choose\le R'}&\le \frac{1}{2^n}\cdot 2\zeta{n\choose \le n/2-(1-1/r)\delta_kn}+\frac{1}{2^n}\cdot 2{n\choose\le n/2-\delta_kn}\notag\\ &\le \frac{1}{2^n}\cdot 2\zeta{n\choose \le n/2-(1-1/r)r\sqrt{n}}+\frac{1}{2^n}\cdot 2{n\choose\le n/2-r\sqrt{n}}\notag\\ &= 2\zeta\cdot V_n(r-1) + V_n(r)\notag\\ &\leq 2\zeta + V_n(r)\notag\\ &\leq 2e^{-10r^2} + \frac{1}{\sqrt{2\pi}r}\cdot e^{-r^2/2} + o(1)\label{eq:clt1} \end{align} where the second-last inequality uses the fact $V_n(\alpha)\leq 1$ for any $\alpha,$ and the final inequality follows from the definition of $\zeta$ and (\ref{eq:clt}) and (\ref{eq:gaussian}) above. Note that the $o(1)$ above goes to $0$ as $n\rightarrow \infty$ (which happens as $\delta \rightarrow 0$). Further, we have $\delta_k n \leq \delta_k \cdot ((r/\delta_k) + 1)\cdot \sqrt{n} = (r+o(1))\sqrt{n},$ which yields \begin{align} \frac{1}{2^n}{n\choose\le R-D}&=\frac{1}{2^n}{n\choose\le \lfloor n/2-(1-1/r)\delta_kn\rfloor-\lfloor(1/2r)\delta_kn\rfloor}\notag\\ &\ge\frac{1}{2^n}{n\choose \le n/2-(1-1/2r)\delta_kn-1}\notag\\ &= \frac{1}{2^n}{n\choose \le n/2-(1-1/2r+o(1))\cdot \delta_kn}\notag\\ &\geq\frac{1}{2^n}{n\choose \le n/2-(1-1/2r+o(1))\cdot (r+o(1))\sqrt{n}} \notag\\ &= \frac{1}{2^n}{n\choose \le (n/2)- (r-(1/2) + o(1))\cdot\sqrt{n}} \geq V_n(r-(1/4))\qquad \text{(for large enough $n$)}\notag\\ &\geq \frac{1}{\sqrt{2\pi}} e^{-(r-(1/4))^2/2} \cdot \left(\frac{1}{r-(1/4)} - \frac{1}{(r-1/4)^3}\right) - o(1)\notag\\ &\geq \frac{1}{\sqrt{2\pi}} e^{-(r^2/2) + 2}\cdot \frac{1}{2r} - o(1) \label{eq:clt2} \end{align} where the second-last inequality uses (\ref{eq:clt}) and (\ref{eq:gaussian}) and the final inequality uses the fact that $r\geq 10t \geq 10.$ From (\ref{eq:clt1}) and (\ref{eq:clt2}), the inequality follows for all large $n$. \end{proof} \section{Open Problems} \label{sec:discussion} We close with some open problems. \begin{itemize} \item We get almost optimal upper and lower bounds on the complexity of the $\delta$-coin problem. In particular, as mentioned in Theorem~\ref{thm:main-intro} and Theorem~\ref{thm:lb-intro}, the upper and lower bounds on the size of $\mathrm{AC}^0[\oplus]$ formulas computing the $\delta$-coin problem are $\exp(O(d(1/\delta)^{1/(d-1)}))$ and $\exp(\Omega(d(1/\delta)^{1/(d-1)}))$, respectively. It may be possible to get even tighter bounds by exactly matching the constants in the exponents in these bounds. The strongest result in this direction would be to give explicit separations between $\mathrm{AC}^0[\oplus]$ formulas of size $s$ and size $s^{1+\varepsilon}$ for any fixed $\varepsilon > 0$ (for $s = n^{O(1)}$, for example). For circuits, we have a \emph{non-explicit} upper bound of $\exp(O((1/\delta)^{1/(d-1)}))$ due to Rossman and Srinivasan~\cite{RossmanS}. It would be interesting to achieve this upper bound with an explicit family of circuits. \item In Theorem~\ref{thm:main-intro} we get a $(1/\delta)^{2^{O(d)}}$ upper bound on the sample complexity of the $\delta$-coin problem. We believe that this can be improved to $O((1/\delta)^2)$. (We get this for depth-$2$ formulas, but not for larger depths.) 
\item Finally, can we match the $\mathrm{AC}^0$ size-hierarchy theorem of Rossman~\cite{Rossman-clique} by separating $\mathrm{AC}^0[\oplus]$ circuits of size $s$ and some fixed depth (say $2$) and $\mathrm{AC}^0[\oplus]$ circuits of size $s^{\varepsilon}$ (for some absolute constant $\varepsilon > 0$) and \emph{any} constant depth? \end{itemize} \paragraph{Acknowledgements.} The authors thank Paul Beame, Prahladh Harsha, Ryan O'Donnell, Ben Rossman, Rahul Santhanam, Ramprasad Saptharishi, Madhu Sudan, Avishay Tal, Emanuele Viola, and Avi Wigderson for helpful discussions and comments. The authors also thank the anonymous referees of STOC 2018 for their comments. \appendix \section{Omitted Proofs from Section \ref{sec:amano}} \begin{reptheorem}{thm:amano} Assume $d\geq 3$ and $F_d$ is defined as in section \ref{sec:amano}. Then, for small enough $\delta$, we have the following. \begin{enumerate} \item For $b,\beta\in \{0,1\}$ and each $i\in [d-1]$ such that $i\equiv \beta \pmod{2}$, we have \[ p_i^{(b)} = \prob{\bm{x}\sim \mu_b^{N_i}}{F_i(\bm{x}) = \beta}. \] In particular, for any $i\in \{2,\ldots,d-2\}$ and any $b\in \{0,1\}$ \begin{equation} p_i^{(b)} = (1-p_{i-1}^{(b)})^{f_i}. \end{equation} \item For $\beta \in \{0,1\}$ and $i\in [d-2]$ such that $i\equiv \beta\pmod{2},$ we have \begin{align*} \frac{1}{2^m}(1+\delta_i\exp(-3\delta_i))&\leq p_i^{(\beta)} \leq \frac{1}{2^m}(1+\delta_i\exp(3\delta_i))\\ \frac{1}{2^m}(1-\delta_i\exp(3\delta_i))&\leq p_i^{(1-\beta)} \leq \frac{1}{2^m}(1-\delta_i\exp(-3\delta_i)) \end{align*} \item Say $d-1\equiv \beta \pmod{2}.$ Then \[ p_{d-1}^{(\beta)} \geq \exp(-C_1m + C_2) \text{ and } p_{d-1}^{(1-\beta)} \leq \exp(-C_1m - C_2) \] where $C_2 = C_1/10.$ \item For each $b\in \{0,1\}$, $\prob{\bm{x}\sim \mu^N_b}{F_d(\bm{x}) = 1-b}\leq 0.05.$ In particular, $F_d$ solves the $\delta$-coin problem. \end{enumerate} \end{reptheorem} \begin{proof} \textbf{Proof of (1) (for $i \in [d-2]$) and (2):} We show these by induction on $i$. We start with the base case $i=1$. Each formula at level $1$ computes an AND of $N_1=f_1=m$ many variables. Hence we have: \begin{flalign*} &&\prob{D_0^{N_1}}{F_1(\bm{x})=1}&=\left(\frac{1-\delta}{2}\right)^{f_1}= \left(\frac{1-\delta}{2}\right)^{m}\le \frac{1}{2^m}\le 0.5&\\ &&\prob{D_1^{N_1}}{F_1(\bm{x})=1}&=\left(\frac{1+\delta}{2}\right)^{f_1}= \left(\frac{1+\delta}{2}\right)^{m}\le 0.5 &\text{(for small enough $\delta$)}\\ \end{flalign*} This implies $p_1^{(b)}=\prob{D_{(b)}^{N_1}}{F(\bm{x})=1}$. 
For part (2): \begin{flalign*} &&p_1^{(1)}&=\prob{D_1^m}{F_1(\bm{x})=1}=\left(\frac{1+\delta}{2}\right)^m&\\ && &= \frac{1}{2^m}(1+\delta)^m\le \frac{1}{2^m} \exp(\delta m)& \text{By Fact~\ref{fac:exp} (c) and $\delta m = o(1)$}\\ && &\le \frac{1}{2^m}(1+\delta m+\delta^2 m^2) & \text{By Fact~\ref{fac:exp} (d)}\\ && &= \frac{1}{2^m}(1+\delta m(1+\delta m))& \\ && &\leq \frac{1}{2^m}(1+\delta m \exp(\delta m))& \text{By Fact~\ref{fac:exp} (c)}\\ && &= \frac{1}{2^m}(1+\delta_1 \exp(\delta_1))&\\ && &\le \frac{1}{2^m}(1+\delta_1 \exp(3\delta_1))&\\ && p_1^{(1)}&= \frac{1}{2^m}(1+\delta)^m\ge \frac{1}{2^m}\exp(m(\delta-\delta^2)) & \text{By Fact~\ref{fac:exp} (b) with $x = \delta$}\\ && &\ge \frac{1}{2^m}(1+(m(\delta-\delta^2)))& \text{By Fact~\ref{fac:exp} (c)}\\ && &= \frac{1}{2^m}(1+(m\delta(1-\delta)))\ge \frac{1}{2^m}(1+(m\delta\exp(-2\delta))) & \text{By Fact~\ref{fac:exp} (a),(b)}\\ && &\ge \frac{1}{2^m}(1+(\delta_1\exp(-3\delta_1)))&\\ \end{flalign*} Similarly, we bound $p_1^{(0)}$: \begin{flalign*} && p_1^{(0)} &= \frac{1}{2^m}(1-\delta)^m \le \frac{1}{2^m}\exp(-m\delta)& \text{By Fact~\ref{fac:exp} (c)}\\ && &\le \frac{1}{2^m}(1-m\delta(1-m\delta))& \text{By Fact~\ref{fac:exp} (d)}\\ && &\le \frac{1}{2^m}(1-m\delta\exp(-3m\delta))& \text{By Fact~\ref{fac:exp} (a),(b)} \\ && &= \frac{1}{2^m}(1-\delta_1\exp(-3\delta_1))&\\ && p_1^{(0)} &= \frac{1}{2^m}(1-\delta)^m &\\ && &\ge \frac{1}{2^m}\exp(m(-\delta-\delta^2))& \text{By Fact~\ref{fac:exp} (b)}\\ && &\ge \frac{1}{2^m}(1-m\delta(1+\delta))&\text{By Fact~\ref{fac:exp} (c)}\\ && &\ge \frac{1}{2^m}(1-m\delta(1+3m\delta))&\\ && &\ge \frac{1}{2^m}(1-\delta_1\exp(3\delta_1))&\text{By Fact~\ref{fac:exp} (c)} \end{flalign*} We now show the inductive step of parts (1) and (2) for $p_{i+1}^{(b)}$. Since the circuit consists of alternating layers of OR gates and AND gates, we obtain $\forall i\in [d-2]$: \begin{align*} \prob{D_{b}^{N_i}}{F_i(\bm{x})=\beta}=\left(1-\prob{D_{b}^{N_{i-1}}}{F_{i-1}(\bm{x})=(1-\beta)}\right)^{f_i} \end{align*} Without loss of generality, assume $i\equiv 0\pmod 2$. Then we have: \begin{flalign*} &&\prob{\bm{x}\sim\mu_0^{N_{i+1}}}{F_{i+1}(\bm{x})=1}&= \left(1-\prob{\bm{x}\sim\mu_0^{N_{i}}}{F_{i}(\bm{x})=0}\right)^{f_{i+1}}&\\ && &=\left(1-p_i^{(0)} \right)^{f_{i+1}}&\text{From induction part (1)}\\ && &\le \exp(-p_i^{(0)} \cdot f_{i+1})&\text{By Fact~\ref{fac:exp} (c)}\\ && &= \exp(-p_i^{(0)}\ceil{2^mm\ln 2})&\\ && &\le \exp(-p_i^{(0)}(2^mm\ln 2))&\\ && &= \exp\left(-\left(\frac{1}{2^m}(1+\delta_i\exp(-3\delta_i))\right)2^mm\ln 2\right)&\text{From induction part(2)}\\ && &\le \exp\left(-(m\ln 2)(1+\delta_i\exp(-3\delta_i))\right)&\\ && &\le \exp(-m\ln 2) =\frac{1}{2^{m}}\le 0.5&\\ &&\text{Similarly, }\prob{\bm{x}\sim\mu_1^{N_{i+1}}}{F_{i+1}(\bm{x})=1}&=(1-p_i^{(1)})^{f_{i+1}}&\\ && &\le \exp(-p_i^{(1)}\cdot f_{i+1}) = \exp(-p_i^{(1)}\ceil{2^mm\ln 2})&\text{By Fact~\ref{fac:exp} (c)}\\ && &\le \exp\left(-\left(\frac{1}{2^m}(1-\delta_i\exp(3\delta_i))\right)(2^mm\ln 2)\right)&\\ && &\le \exp((-m\ln 2)(1-\delta_i(1+3\delta_i+9\delta_i^2)))&\text{By Fact~\ref{fac:exp} (d)}\\ && &= 2^{-m(1-\delta_i(1+3\delta_i+9\delta_i^2))}\le 0.5&\text{Since $\delta_i = o(1)$} \end{flalign*} This completes the induction step for part (1). 
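Before turning to the bounds for part (2), we include a small numerical sanity check of the bias-amplification recursion used above. The following Python sketch is not part of the original argument; its parameter choices follow the construction as described in the text (an AND of $m$ inputs at level $1$, fan-in $\lceil 2^m m\ln 2\rceil$ at the middle levels, $\delta_i = \delta m(m\ln 2)^{i-1}$), and all names are ours. It iterates $p_{i+1}^{(b)} = (1-p_i^{(b)})^{f_{i+1}}$ and compares the resulting bias with $\delta_i$.
\begin{verbatim}
import math

def bias_amplification_check(delta=1e-3, d=4):
    # Illustrative check: m = ceil((1/delta)^{1/(d-1)} / ln 2), f_1 = m,
    # f_i = ceil(2^m * m * ln 2) for middle levels, delta_i = delta*m*(m ln 2)^{i-1}.
    m = math.ceil((1.0 / delta) ** (1.0 / (d - 1)) / math.log(2))
    f_mid = math.ceil(2 ** m * m * math.log(2))
    # level 1: an AND of m inputs, each equal to 1 with probability (1 + (2b-1)*delta)/2
    p = {b: ((1 + (2 * b - 1) * delta) / 2) ** m for b in (0, 1)}
    for i in range(1, d - 1):
        delta_i = delta * m * (m * math.log(2)) ** (i - 1)
        for b in (0, 1):
            print(f"level {i}, b={b}: 2^m*p - 1 = {2 ** m * p[b] - 1:+.4f}"
                  f"  (delta_i = {delta_i:.4f})")
        if i < d - 2:
            # alternating OR/AND layers give p_{i+1}^{(b)} = (1 - p_i^{(b)})^{f_{i+1}};
            # exp(f * log1p(-p)) avoids loss of precision for small p
            p = {b: math.exp(f_mid * math.log1p(-p[b])) for b in (0, 1)}

bias_amplification_check()
\end{verbatim}
One observes $|2^m p_i^{(b)}-1|$ staying within a small factor of $\delta_i$, with the sign of the bias alternating between consecutive levels, in line with the bounds of part (2).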
Now we show the bounds for part (2): \begin{flalign*} && p_{i+1}^{(1)} &=(1-p_i^{(1)})^{\ceil{2^mm\ln 2}}\le (1-p_i^{(1)})^{m2^m\ln 2}& \\ && &\le \left( 1-\frac{1}{2^m}(1-\delta_i\exp(3\delta_i))\right)^{m2^m\ln 2}& \text{From induction hypothesis}\\ && &\le \exp\left(-\frac{1}{2^m}(1-\delta_i\exp(3\delta_i))m2^m\ln 2\right) & \text{By Fact~\ref{fac:exp} (c)}\\ && &= \exp\left((-m\ln 2) (1-\delta_i\exp(3\delta_i))\right)&\\ && &= \exp\left(-m\ln 2+\delta_i\exp(3\delta_i)m\ln 2\right)&\\ && &=\frac{1}{2^m}\exp(\delta_i\exp(3\delta_i)m\ln 2)&\\ && &=\frac{1}{2^m}\exp(\delta_{i+1}\exp(3\delta_i))&\\ && &\le \frac{1}{2^m}\left(1+\delta_{i+1}\exp(3\delta_i)(1+\delta_{i+1}\exp(3\delta_i))\right)& \text{By Fact~\ref{fac:exp} (d)}\\ && &\le \frac{1}{2^m}(1+\delta_{i+1}\exp(3\delta_i)(1+2\delta_{i+1}))& \text{Since $\delta_{i}= o(1)$}\\ && &\le \frac{1}{2^m}(1+\delta_{i+1}\exp(3\delta_{i}+\delta_{i+1}))& \text{By Fact~\ref{fac:exp} (c)}\\ && &\le \frac{1}{2^m}(1+\delta_{i+1}\exp(3\delta_{i+1}))&\\ && p_{i+1}^{(1)}&=(1-p_i^{(1)})^{\ceil{2^mm\ln 2}}&\\ && &\ge (1-p_i^{(1)})^{2^mm\ln 2+1} &\\ && &\ge \left(1-\frac{1}{2^m}(1-\delta_i\exp(-3\delta_i))\right)^{m2^m\ln 2+1}&\text{From induction}\\ && &\ge \exp\left(\left(\frac{-1}{2^m}(1-\delta_i\exp(-3\delta_i))-\frac{1}{2^{2m}}(1-\delta_i\exp(-3\delta_i))^2\right)\left(m2^m\ln 2+1)\right)\right)&\text{By Fact~\ref{fac:exp} (b)}\\ && &\ge \exp\left(\left(\frac{-1}{2^m}(1-\delta_i\exp(-3\delta_i))-\frac{1}{2^{2m}}\right)(m2^m\ln 2+1)\right)&\\ && &\ge \frac{1}{2^m}\exp\left( \delta_i\exp(-3\delta_i)m\ln 2 -\frac{m}{2^m}\right)&\\ && &= \frac{1}{2^m}\exp\left(\delta_{i+1}\exp(-3\delta_{i})-\frac{m}{2^m}\right)&\\ && &\ge \frac{1}{2^m}\exp\left(\delta_{i+1}\exp(-3\delta_{i})-\delta_{i+1}^2\right)&\text{Using $\delta_{i+1}\geq \frac{1}{m}$}\\ && &\ge \frac{1}{2^m}\exp\left(\delta_{i+1}(1-3\delta_{i})-\delta_{i+1}^2\right)&\text{By Fact~\ref{fac:exp} (c)}\\ && &\ge \frac{1}{2^m}\exp\left(\delta_{i+1}(1-2\delta_{i+1})\right)&\\ && &\ge \frac{1}{2^m}\exp\left(\delta_{i+1}\exp(-3\delta_{i+1})\right)&\text{By Fact~\ref{fac:exp} (a)(b)}\\ && &\ge \frac{1}{2^m}\left(1+\delta_{i+1}\exp(-3\delta_{i+1})\right)&\text{By Fact~\ref{fac:exp} (c)}\\ \end{flalign*} The case of $p_{i+1}^{(0)}$ is similar and hence omitted.\\ \textbf{Proof of (3):} Assume, without loss of generality, $d-1\equiv 1\pmod 2$. Let $i=d-1$. Then we have: \begin{flalign*} \delta_{i-1}&=\delta_{d-2}=\delta m(m(\ln 2))^{d-3}&\\ &=\delta m^{d-2}(\ln 2)^{d-3}&\\ \implies \delta_{i-1}m&=\delta m^{d-1}(\ln 2)^{d-3}&\\ &=\delta\left(\ceil{\left(\frac{1}{\delta}\right)^{1/(d-1)}\frac{1}{\ln 2}}\right)^{d-1}(\ln 2)^{d-3}&\\ &=\frac{1}{\ln 2^{d-1}}(\ln 2)^{d-3}\epsilon&\text{for some $\epsilon\in[1,2]$}\\ &=\frac{1}{(\ln 2)^2}\epsilon \end{flalign*} With the above estimate for $\delta_{i-1}m$, we show the required bounds as follows. It follows exactly as in the proof of Part (1) for $i\in [d-2]$ that \[ p^{(b)}_i = \prob{\bm{x}\in \mu^{N_i}_b}{F_i(\bm{x}) = 1}. 
\] Hence, we have \begin{flalign*} && p_i^{(1)} &= (1-p_{i-1}^{(1)})^{f_{d-1}} &\\ && &= (1-p_{i-1}^{(1)})^{C_1\cdot m2^m} &\\ && &\ge \left(1-\frac{1}{2^m}(1-\delta_{i-1}\exp(-3\delta_{i-1}))\right)^{C_1m2^m} &\text{From part (2)}\\ && &\ge \exp\left(\left(\frac{-1}{2^m}(1-\delta_{i-1}\exp(-3\delta_{i-1}))\right)\left(1+\frac{1}{2^m}(1-\delta_{i-1}\exp(-3\delta_{i-1}))\right)C_1m2^m\right)&\text{By Fact~\ref{fac:exp} (b)}\\ && &\ge \exp\left(\frac{-1}{2^m}\left(1-\delta_{i-1}\exp(-3\delta_{i-1})+\frac{1}{2^m}\right)C_1m2^m\right)&\\ && &= \exp(-C_1m)\exp\left(C_1\left(\delta_{i-1}m\exp(-3\delta_{i-1})-\frac{m}{2^m}\right) \right) &\\ && &\ge \exp(-C_1m) \exp\left(\frac{C_1}{4(\ln 2)^2}\right)&\delta_{i-1}m\geq \frac{1}{(\ln 2)^2}\\ && &\ge \exp(-C_1m) \exp\left(C_2\right)&C_2 = C_1/10\\ && \text{The }&\text{upper bound for $p_i^{(0)}$ is as follows:}&\\ && p_i^{(0)}&= (1-p_{i-1}^{(0)})^{C_1m2^m}&\\ && &\le \left(1-\frac{1}{2^m}(1+\delta_{i-1}\exp(-3\delta_{i-1}))\right)^{C_1m2^m}&\text{From part (2)}\\ && &\le \exp\left( \frac{-1}{2^m}(1+\delta_{i-1}\exp(-3\delta_{i-1}))C_1m2^m\right)&\text{From Fact~\ref{fac:exp} (c)}\\ && &\le \exp(-C_1m)\exp(-C_1\delta_{i-1}m\exp(-3\delta_{i-1}))&\\ && &\le \exp(-C_1m)\exp\left(-\frac{C_1}{4(\ln 2)^2}\right)&\\ && &\le \exp(-C_1m)\exp(-C_2)&\\ \end{flalign*} \textbf{Proof of (4):} Without loss of generality, assume $d\equiv 0\pmod 2$. Then, the output gate of the circuit is an OR gate. Thus: \begin{flalign*} && \prob{\mu_1^{N_d}}{F_d(\bm{x})=0}&=(1-p_{d-1}^{(1)})^{f_d}&\\ && &\le \exp(-p_{d-1}^{(1)}\cdot f_d)&\text{From Fact~\ref{fac:exp} (c)} \\ && &\le \exp(-\exp(-C_1m+C_2)\cdot \exp(C_1m))&\text{From part (3)}\\ && &= \exp(-\exp(C_2))&\\ && &= \frac{1}{e^{e^5}}\le 0.05&\\ &&\text{Similarly, }\prob{\mu_0^{N_d}}{F_d(\bm{x})=1}&\le f_d\cdot \prob{\mu_0^{N_{d-1}}}{F_{d-1}(\bm{x})=1} &\text{by union bound}\\ && &\le 2\exp(C_1m)\exp(-C_1m-C_2)&\\ && &\le 2\exp(-C_2) = \frac{2}{e^5}&\\ && &\le 0.05& \end{flalign*} \end{proof} \section{Omitted Proofs from Section \ref{sec:janson}} \begin{reptheorem}{thm:janson}[Janson's inequality] Let $C_1,\ldots,C_M$ be any monotone Boolean circuits over inputs $x_1,\ldots,x_N,$ and let $C$ denote $\bigvee_{i\in [M]}C_i.$ For each distinct $i,j\in [M]$, we use $i \sim j$ to denote the fact that $\mathrm{Vars}(C_i)\cap \mathrm{Vars}(C_j)\neq \emptyset$. Assume each $\bm{x}_j$ ($j\in [N]$) is chosen independently to be $1$ with probability $p_j\in [0,1]$, and that under this distribution, we have $\max_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 1}\leq 1/2.$ Then, we have \begin{equation} \label{eq:janson} \prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 0} \leq \prob{\bm{x}}{C(\bm{x}) = 0} \leq \left(\prod_{i\in [M]}\prob{\bm{x}}{C_i(\bm{x}) = 0}\right) \cdot \exp(2\Delta) \end{equation} where $\Delta := \sum_{i < j: i\sim j} \prob{\bm{x}}{(C_i(\bm{x})=1) \wedge (C_j(\bm{x}) = 1)}.$ \end{reptheorem} We will use the following inequality in the proof of the above theorem. \begin{lemma}[Kleitman's inequality] \label{lem:kleitman} Let $F,G: \{0,1\}^n \rightarrow \{0,1\}$ be two monotonically increasing Boolean functions or monotonically decreasing Boolean functions. Then, \[\prob{\bm{x}}{F(\bm{x})=1| G(\bm{x}) =0} \mathop{\leq}_{\mathrm{(i)}} \prob{\bm{x}}{F(\bm{x})=1} \mathop{\leq}_{\mathrm{(ii)}} \prob{\bm{x}}{F(\bm{x})=1|G(\bm{x})=1} \] \end{lemma} \begin{proof}[Proof of Theorem~\ref{thm:janson}] As $C(\bm{x})$ is an OR over $C_i(\bm{x})$ for $i \in [M]$, $\prob{\bm{x}}{C(\bm{x}) = 0} = \prob{\bm{x}}{\forall {i \in [M]}, (C_i(\bm{x})=0)}$. 
The lower bound on $\prob{\bm{x}}{C(\bm{x}) = 0}$ follows easily from Kleitman's inequality (Lemma~\ref{lem:kleitman}) and induction on $M$. Now we prove the upper bound on $\prob{\bm{x}}{C(\bm{x}) = 0}$. In order to prove the intended upper bound, we use the following intermediate lemma. \begin{lemma} \label{lem:inter} For all $i \in [M]$, $$\prob{\bm{x}}{(C_i(\bm{x})=1) \mid \forall j < i, (C_j(\bm{x})=0)} \geq \prob{\bm{x}}{C_i(\bm{x})=1} - \sum_{j:j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) ~\mathrm{AND}~ (C_j(\bm{x})=1)}$$ \end{lemma} Assuming the above lemma, we will complete the proof of Theorem~\ref{thm:janson}. \begin{align} \prob{\bm{x}}{C(\bm{x}) = 0} & = \prob{\bm{x}}{\forall {i \in [M]}, (C_i(\bm{x})=0)} \nonumber \\ & = \prod_{i \in [M]} \prob{\bm{x}}{(C_i(\bm{x})=0)\mid \forall j < i, (C_j(\bm{x})=0)} \notag \\ & = \prod_{i \in [M]} (1-\prob{\bm{x}}{(C_i(\bm{x})=1)\mid \forall j < i, (C_j(\bm{x})=0)}) \nonumber \\ & \leq \prod_{i \in [M]} (1-\prob{\bm{x}}{C_i(\bm{x})=1} + \sum_{j: j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND } (C_j(\bm{x})=1)}) \label{eq:inter} \\ & = \prod_{i \in [M]} (\prob{\bm{x}}{C_i(\bm{x})=0} + \sum_{j: j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND } (C_j(\bm{x})=1)}) \nonumber \\ & = \prod_{i \in [M]} \left(\prob{\bm{x}}{C_i(\bm{x})=0} \cdot \Big(1+ \frac{1}{\prob{\bm{x}}{C_i(\bm{x})=0}}\sum_{j: j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND } (C_j(\bm{x})=1)}\Big)\right) \nonumber \\ & \leq \prod_{i \in [M]} \left(\prob{\bm{x}}{C_i(\bm{x})=0} \cdot \Big(1+ 2\sum_{j: j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND } (C_j(\bm{x})=1)}\Big)\right) \label{eq:half} \\ & \leq \prod_{i \in [M]} \left(\prob{\bm{x}}{C_i(\bm{x})=0} \cdot \exp(2 \sum_{j: j < i, j \sim i} \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND } (C_j(\bm{x})=1)})\right) \label{eq:expn}\\ & = \left(\prod_{i \in [M]} \prob{\bm{x}}{C_i(\bm{x})=0}\right) \cdot \exp(2\Delta) \nonumber \end{align} Inequality (\ref{eq:inter}) follows from Lemma~\ref{lem:inter}. Inequality (\ref{eq:half}) follows from the fact that $\prob{\bm{x}}{(C_i(\bm{x})=0)} \geq 1/2$ for each $i \in [M]$. Finally, (\ref{eq:expn}) follows from (\ref{eq:exp-ineq}). Therefore, assuming Lemma~\ref{lem:inter}, we are done. We now prove this lemma. \begin{proof}[Proof of Lemma~\ref{lem:inter}] By reordering the indices if required, assume that $d$ is an index such that $d<i$ and for $1\leq j \leq d$, $i \sim j$ and for $d < j < i$, $i \not\sim j$. Let $\mathcal{E}$ be the event that $\left[\forall j \leq d, (C_j(\bm{x})=0)\right]$ and $\mathcal{F}$ be the event that $\left[\forall d < j < i, (C_j(\bm{x})=0)\right]$. 
\begin{align} &\prob{\bm{x}}{(C_i(\bm{x})=1) \mid \forall j < i, (C_j(\bm{x})=0)} \nonumber \\ &= \prob{\bm{x}}{(C_i(\bm{x})=1) \mid \mathcal{E} \text{ AND } \mathcal{F}} \nonumber\\ & \geq \prob{\bm{x}}{(C_i(\bm{x})=1) \text{ AND }\mathcal{E} \mid \mathcal{F}} & \text{ (Bayes' Rule) } \notag \\ & = \prob{\bm{x}}{(C_i(\bm{x})=1) \mid \mathcal{F}} - \prob{\bm{x}}{(C_i(\bm{x})=1) \text { AND } \overline{\mathcal{E}} \mid \mathcal{F}} \notag \\ & = \prob{\bm{x}}{(C_i(\bm{x})=1)} - \prob{\bm{x}}{(C_i(\bm{x})=1) \text { AND } \exists j \leq d, (C_j(\bm{x})=1) \mid \mathcal{F}} & (C_i(\bm{x}) \text{ and } \mathcal{F} \text{ are independent}) \notag \\ & = \prob{\bm{x}}{(C_i(\bm{x})=1)} - \prob{\bm{x}}{\exists j \leq d, [(C_i(\bm{x})=1) \text { AND } (C_j(\bm{x})=1)] \mid \mathcal{F}} \nonumber \\ & \geq \prob{\bm{x}}{(C_i(\bm{x})=1)} - \sum_{j \leq d} \prob{\bm{x}}{[(C_i(\bm{x})=1) \text { AND } (C_j(\bm{x})=1)] \mid \mathcal{F}} & \text{(Union bound)} \notag\\ & \geq \prob{\bm{x}}{(C_i(\bm{x})=1)} - \sum_{j \leq d} \prob{\bm{x}}{[(C_i(\bm{x})=1) \text { AND } (C_j(\bm{x})=1)]} & \text{(Kleitman's inequality)} \notag \end{align} \end{proof} \end{proof} \end{document}
\begin{definition}[Definition:Galois Connection] Let $\struct {S, \preceq}$ and $\struct {T, \precsim}$ be ordered sets. Let $g: S \to T$, $d: T \to S$ be mappings. Then $\struct {g, d}$ is a '''Galois connection''' {{iff}}: :$g$ and $d$ are increasing mappings and ::$\forall s \in S, t \in T: t \precsim \map g s \iff \map d t \preceq s$ $g$ is '''upper adjoint''' and $d$ is '''lower adjoint''' of a Galois connection. {{definition wanted|monotone, antitone, closure operator, closed}} {{NamedforDef|Évariste Galois|cat = Galois}} \end{definition}
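The following standard example is added for illustration and is not part of the original page; it can be checked directly against the definition above.
Let $\struct {S, \preceq} = \struct {\R, \le}$ and $\struct {T, \precsim} = \struct {\Z, \le}$. Let $g: \R \to \Z$ be the floor mapping $\map g s = \floor s$, and let $d: \Z \to \R$ be the inclusion mapping $\map d t = t$. Both mappings are increasing, and for all $s \in \R$ and $t \in \Z$:
:$t \le \floor s \iff t \le s$
Hence $\struct {g, d}$ is a Galois connection, with the floor mapping as upper adjoint and the inclusion as lower adjoint.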
\begin{document} \title[Criticality of a Randomly-Driven Front] {Criticality of a Randomly-Driven Front} \author[A.\ Dembo]{Amir Dembo} \address{A.\ Dembo, Departments of Statistics and Mathematics, Stanford University, \newline\hphantom{\quad\ \ A Dembo} Stanford, California, CA 94305} \email{[email protected]} \author[L.-C.\ Tsai]{Li-Cheng Tsai} \address{L.-C.\ Tsai, Departments of Mathematics, Columbia University, \newline\hphantom{\quad\quad L-C Tsai} 2990 Broadway, New York, NY 10027} \email{[email protected]} \subjclass[2010]{ Primary 60K35, Secondary 35B30, 80A22} \keywords{Supercooled Stefan problem, explosion of PDEs, interacting particles system, multiparticle diffusion limited aggregation} \begin{abstract} Consider an advancing `front' $R(t) \in \mathbb{Z}_{\geq 0}$ and particles performing independent continuous time random walks on $ (R(t),\infty)\cap\mathbb{Z}$. Starting at $R(0)=0$, whenever a particle attempts to jump into $R(t)$ the latter instantaneously moves $k \ge 1$ steps to the right, \emph{absorbing all} particles along its path. We take $ k $ to be the minimal random integer such that exactly $ k $ particles are absorbed by the move of $ R $, and view the particle system as a discrete version of the Stefan problem \begin{align*} &\partial_t u_*(t,\xi) = \tfrac12 \partial^2_{\xi} u_*(t,\xi), \quad \xi >r(t), \\ &u_*(t,r(t))=0, \\ &\tfrac{d~}{dt}r(t) = \tfrac12 \partial_\xi u_*(t,r(t)), \\ &t\mapsto r(t) \text{ non-decreasing }, \quad r(0):=0. \end{align*} For a constant initial particles density $u_*(0,\xi)=\rho {\bf 1}_{\{\xi >0\}}$, at $\rho<1$ the particle system and the PDE exhibit the same diffusive behavior at large time, whereas at $\rho \ge 1$ the PDE explodes instantaneously. Focusing on the critical density $ \rho=1 $, we analyze the large time behavior of the front $ R (t)$ for the particle system, and obtain both the scaling exponent of $R(t)$ and an explicit description of its random scaling limit. Our result unveils a rarely seen phenomenon where the macroscopic scaling exponent is \emph{sensitive} to the amount of initial local fluctuations. Further, the scaling limit demonstrates an interesting oscillation between instantaneous super- and sub-critical phases. Our method is based on a novel monotonicity as well as PDE-type estimates. \end{abstract} \maketitle \section{Introduction} Consider the following Stefan \ac{PDE} problem: \begin{subequations} \label{eq:Stefan} \begin{align} \label{eq:HE} &\partial_t u_*(t,\xi) = \tfrac12 \partial^2_{\xi} u_*(t,\xi), \ \xi >r(t), \\ \label{eq:DiriBC} &u_*(t,r(t))=0, \\ \label{eq:fluxBC} &\tfrac{d~}{dt}r(t) = \tfrac12 \partial_\xi u_*(t,r(t)), \\ &t\mapsto r(t) \text{ non-decreasing }, \quad r(0):=0, \end{align} \end{subequations} with a given, nonnegative initial condition $ u_*(0,\xi) \geq 0 $, $ \forall \xi \geq 0 $. Upon a sign change $ v_* := -u_* $, the Stefan problem~\eqref{eq:Stefan} describes a solid-liquid system, where the solid is kept at its freezing temperature $ 0 $, and the liquid is super-cooled, with temperature distribution $ v_*(t,\xi) $. Here, instead of the super-cooled, solid-liquid system, we consider a different type of physical phenomenon that is also described by~\eqref{eq:Stefan}. That is, $ u_* $ represents the density of particles that diffuse in the ambient region $ (r(t),\infty) $. A sticky aggregate occupies the region $ [0,r(t)] $ to the left of the particles, and we refer to $ r(t) $ as the `front' of the aggregate. 
Whenever a particle hits $ r(t) $, the particle adheres to the aggregate and the front advances according to the particle mass thus accumulated. The zero-value boundary condition~\eqref{eq:DiriBC} arises due to absorption of particles (into the aggregate), while the condition~\eqref{eq:fluxBC} ensures that the front advances by the total mass of particles being absorbed. Indeed, given sufficient smoothness of $ u_* $ and $ r $, the condition \eqref{eq:fluxBC} is written (using~\eqref{eq:HE}--\eqref{eq:DiriBC}) as \begin{align*} \frac{d~}{dt}r(t) = \frac12 \partial_\xi u_*(t,r(t)) = \frac{d~}{dt} \int_{0}^\infty \big( u_*(0,\xi)-u_*(t,\xi)\mathbf{1}_\set{\xi>r(t)} \big) d\xi. \end{align*} Integrating in $ t $ gives \begin{align} \tag{\ref{eq:fluxBC}'} \label{eq:fluxBC:} r(t) = \int_{0}^\infty \big( u_*(0,\xi)-u_*(t,\xi)\mathbf{1}_\set{\xi>r(t)} \big) d\xi =(\text{total absorbed mass up to time }t). \end{align} We refer to~\eqref{eq:fluxBC}--\eqref{eq:fluxBC:} as the \textbf{flux condition}. In this article we focus on the case of a constant initial density $ u_*(0,\xi) = \rho \mathbf{1}_\set{\xi >0} $. For $ \rho\in(0,1) $, the system is solved explicitly as \begin{align} \label{eq:explictu} u_*(t,\xi) &:= \frac{\rho}{\Phi_*(1,\kappa_\rho)} \big( \Phi_*(1,\kappa_\rho)-\Phi_*(t,\xi) \big) \mathbf{1}_\set{\xi\geq r(t)}, \\ \label{eq:explictr} r(t) &:= \kappa_\rho \sqrt{t}. \end{align} Here $ p_*(t,\xi) := \frac{1}{\sqrt{2\pi t}} \exp(-\frac{\xi^2}{2t}) $ denotes the standard heat kernel, with the corresponding tail distribution function $ \Phi_*(t,\xi) := \int_{\xi}^{\infty} p_*(t,\zeta) d\zeta $. The value $ \kappa_\rho\in(0,\infty) $ is the unique positive solution to the following equation \begin{align} \label{eq:kappaEq} \rho = g(\kappa):=\frac{\kappa\Phi_*(1,\kappa)}{p_*(1,\kappa)}. \end{align} Indeed, \eqref{eq:kappaEq} has a unique positive solution since $ g(\Cdot) $ is strictly increasing from $ g(0)=0 $ to $ g(\infty)=1 $. The explicit solution~\eqref{eq:explictr} shows that $ r(t) $ travels diffusively, i.e., $ r(t) = O(t^{\frac12}) $. On the other hand, for $ \rho \geq 1 $, the Stefan problem~\eqref{eq:Stefan} admits no solution. To see this, note that if $ u_* $ solves heat equation~\eqref{eq:HE} with zero boundary condition~\eqref{eq:DiriBC}, by the strong maximal principle we have $ u_*(t,\xi) < \rho=u_*(0,\xi) $, for all $ t >0 $ and $ \xi>r(t) $. Using this in~\eqref{eq:fluxBC:} gives \begin{align*} r(t) > \int_0^{r(t)} u_*(0,\xi) d\xi = \rho\, r(t), \end{align*} which cannot hold for any $ r(t) \in [0,\infty) $ if $ \rho \geq 1 $. Put it in physics term, if the particles density is $ \geq 1 $ everywhere initially, the flux condition~\eqref{eq:fluxBC:} forces the front $ r(t) $ to explode instantaneously. This is also seen by taking $ \rho\uparrow 1 $ in~\eqref{eq:kappaEq}, whence $ \kappa_\rho \uparrow \infty $. In addition to the one-phase Stefan problem~\eqref{eq:Stefan}, explosion of similar \acp{PDE} appears in a wide range of contexts, such as systemic risk modeling~\cite{nadtochiy17} and neural networks~\cite{carrillo13, delarue15}. Explosions of the type of Stefan problem~\eqref{eq:Stefan}, as well as possible regularizations beyond explosions, have been intensively investigated. We refer to \cite{fasano81, fasano89, fasano90, herrero96} and the references therein. Commonly considered in the literature is the case where $ u_*(0,r(0))=0 $ (and $ u_*(0,\xi) $ is bounded and continuous). 
In this case the corresponding Stefan problem admits a unique solution for a short time \cite{friedman76, fasano83}. For the case $ u_*(0,\xi) = \rho \mathbf{1}_\set{\xi>0} $, $ \rho \geq 1 $, considered here, explosion occurs \emph{instantaneously}, as discussed previously (and also \cite[Theorem~2.2]{fasano81}). Further, our system being semi-infinite $ (r(t),\infty) $, the explosion cannot be cured by the conventional approach of perturbing the other end point of a finite system. Among all possible explosions, of particular interest is the case $ \rho=1 $, where the explosion is \emph{marginal}. To study the behavior of the underlying phenomenon at this critical density $ \rho=1 $, we propose a different approach: we introduce a \emph{discrete}, stochastic particle system that models the same type of phenomenon as the Stefan problem~\eqref{eq:Stefan}. Indeed, for $ \rho<1 $ the particle system exhibits the same diffusive behavior as~\eqref{eq:explictr} at large time (Proposition~\ref{prop:subp}\ref{enu:sub}); while for $ \rho>1 $, the particle system explodes in finite time (Proposition~\ref{prop:subp}\ref{enu:sup}). For the case $ \rho=1 $ of interest, we show that the particle system exhibits an intriguing scaling exponent $ r(t) \asymp t^{\alpha} $, which is super-diffusive $ \alpha>\frac12 $ and sub-linear $ \alpha<1 $. Even though here the front does not explode, $ \rho=1 $ has the effect of making the exponent $ \alpha $ \emph{sensitive} to the amount of initial local fluctuations (Theorem~\ref{thm:main:}). We now define the particle system that is studied in this article. A non-decreasing, $ \mathbb{Z}_{\geq 0} $-valued process $ t\mapsto R(t) $ is fueled by a crowd of random walkers occupying the region to its right $ (R(t),\infty)\cap\mathbb{Z} $. We regard $ R(t) $ as the front of an aggregate $ \Omega(t)=[0,R(t)]\cap\mathbb{Z} $, and refer to $ R $ as the `front' throughout the article. To define the model, we start the front at the origin, i.e., $ R(0)=0 $, and consider particles $ \{X_i(t)\}_{i=1}^\infty $ performing independent simple random walks on $ \mathbb{Z}_{>0} $ in continuous time. That is, at $ t=0 $, we initiate the particles $ \{X_i(0)\}_{i=1}^\infty $ on $ \mathbb{Z}_{>0} $ according to a given distribution, and for $ t>0 $, each $X_i(t)$ waits an independent Exponential$(1)$ time, then independently chooses to jump one step to the left or to the right with probability $1/2$ each. The front $ R $ remains stationary except when a particle $ X_i $ attempts to jump into the front, i.e., \begin{align*} \mathcal{J}_i(t) := \{ X_i(t^-)=R(t^-)+1 \text{ attempts to jump to } R(t^-) \}. \end{align*} When such an attempt occurs, the front immediately moves $k \ge 1$ steps to the right, i.e.\ \begin{align} \label{eq:R:move} R(t) - R(t^-) = k \mathbf{1}_{\cup_{i=1}^\infty\mathcal{J}_i(t)}, \end{align} and \emph{absorbs} all the particles on the sites $ (R(t^-),R(t)]\cap\mathbb{Z} $. Here we choose $k$ to be the smallest integer such that $ R(t) $ satisfies the flux condition: \begin{align} \label{eq:fluxCond} \#\{ \text{particles absorbed by } R \text{ up to time } t \} =: N^R(t) = R(t). \end{align} More explicitly, \begin{align} \label{eq:fluxCond:} k :=& \inf \Big\{ j\in\mathbb{Z}_{>0} : \#\Big( (R(t^-),R(t^-)+j] \cap \big\{ X_i(t^-) \big\}_i\Big) = j \Big\}. \end{align} See Figure~\ref{fig:fgm} for an illustration. We adopt the convention $ \inf\emptyset :=\infty $, allowing finite-time explosion: $ R(t)=\infty $. 
We refer to this model as the \textbf{frictionless growth model}, where the term `frictionless' refers to the fact that the front travels in a fashion satisfying the flux condition~\eqref{eq:fluxCond}. \begin{figure} \caption{The case of single absorption} \label{fig:fgm1} \caption{The case of multiple absorption} \label{fig:fgm2} \caption{Motion of $ R $. Red crosses represent absorptions.} \label{fig:fgm} \end{figure} Similar models have been studied in the literature in different contexts. Among them is the \ac{1d-MDLA} \cite{kesten08a, kesten08b}, which is defined in the same way as above, except that $ k:=1 $ in \eqref{eq:R:move}. That is, the front moves exactly one step to the right whenever a particle attempts to jump onto the front. Setting $ k:=1 $ introduces possible \emph{friction} to the motion of the front, in the sense that the front may consume more particles than the number of steps it moves. For comparison, we let $ \widetilde{R}(t) $ denote the front of the \ac{1d-MDLA}. The interest in the \ac{1d-MDLA} originates from its relation to reaction diffusion-type particle systems. The precise definition of such particle systems differs across the literature; roughly speaking, they consist of two species of particles $ A $ and $ B $ performing independent random walks on $ \mathbb{Z}^d $ in continuous time, with jumps occurring at rates $ D_A $ and $ D_B $, respectively, such that an $ A $-particle is converted into a $ B $-particle whenever the $ A $-particle is in the vicinity of a $ B $-particle. Particle systems of this type serve as a prototype of various phenomena, such as stochastic combustion and infection spread, depending on the values of the jumping rates $ D_A $ and $ D_B $ \cite{kesten12}. Despite their seemingly simple setup, the reaction-diffusion particle systems pose significant challenges for rigorous mathematical study, and have attracted much attention. We refer to \cite{alves02a,alves02,berard10,comets07,comets09,kesten08,kesten12,ramirez04,richardson73} and the references therein. Of relevance to our current discussion is the special case $ D_A=1, D_B=0 $. Under this specification, reaction diffusion-type particle systems can be formulated as problems of a randomly growing aggregate. That is, we view the cluster of the stationary $ B $-particles as an aggregate $ \Omega(t)\subset\mathbb{Z}^d $, which grows in the bath of $ A $-particles. For $ d>1 $, numerical simulations show that the cluster exhibits intriguing geometry, from which speculations and conjectures arise. Here we mention a recent result \cite{sidoravicius16} on the linear growth of (the longest arms of) the cluster under certain assumptions, and refer to the references therein for developments in $ d>1 $. As for $ d=1 $, the aforementioned \ac{1d-MDLA} is a specific realization of such models \cite[Section~4]{kesten12}. For the \ac{1d-MDLA}, the longtime behavior of $ \widetilde{R} $ exhibits a transition from diffusive scaling $ \widetilde{R}(t) \asymp t^{\frac12} $ for $ \rho<1 $ to linear motion $ \widetilde{R}(t) \asymp t $ for $ \rho>1 $, as shown in \cite{kesten08b} and \cite{sly16}, respectively. The behavior of the \ac{1d-MDLA} at $ \rho=1 $ remains open, and there have been attempts \cite{sidoravicius17} to derive the scaling limit of $ \widetilde{R} $ non-rigorously. The frictionless model considered in this article is more tractable than the \ac{1d-MDLA}. In particular, the flux condition~\eqref{eq:fluxCond} allows us to derive a certain monotonicity to bypass refined estimates on the process $ R $.
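Before returning to the particle system, we record in passing that the constant $ \kappa_\rho $ of \eqref{eq:kappaEq}, which reappears in Proposition~\ref{prop:subp} below as the limiting diffusive speed, is easy to evaluate numerically. The following bisection sketch (in Python) is only illustrative and uses nothing beyond the monotonicity of $ g $ noted after \eqref{eq:kappaEq}; the routine names, the tolerance and the quoted approximate value are ours.
\begin{verbatim}
import math

def kappa(rho, tol=1e-10):
    """Solve rho = g(kappa) = kappa * Phi_*(1, kappa) / p_*(1, kappa)
    (cf. eq:kappaEq) by bisection; g increases strictly from 0 to 1,
    so a unique root exists for 0 < rho < 1."""
    def g(k):
        tail = 0.5 * math.erfc(k / math.sqrt(2.0))                # Phi_*(1, k)
        dens = math.exp(-0.5 * k * k) / math.sqrt(2.0 * math.pi)  # p_*(1, k)
        return k * tail / dens
    lo, hi = 0.0, 1.0
    while g(hi) < rho:           # bracket the root; g(hi) -> 1 as hi grows
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < rho else (lo, mid)
    return 0.5 * (lo + hi)

# For instance kappa(0.5) is roughly 0.61, and kappa(rho) blows up as
# rho -> 1, in line with the instantaneous explosion at rho >= 1.
\end{verbatim}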
We now return to our discussion about the frictionless growth model. Adopt the standard notation \begin{align*} \eta^\text{ic}(x) := \# \{ X_i(0)=x \} \end{align*} for occupation variables (i.e., number of particles at site $ x $) at $ t=0 $. A natural setup for a constant-density initial condition is to let $ \{\eta^\text{ic}(x)\}_{x\in\mathbb{Z}_{>0}} $ be i.i.d.\ with $ \mathbf{E} (\eta^\text{ic}(1))=\rho $. Our first result verifies the following: if $ \rho>1 $, the front $ R $ explodes in finite time; and if $ \rho<1 $, the front converges to the same expression~\eqref{eq:explictr} as the Stefan problem. Recall that $ \kappa_\rho $ is the unique solution to~\eqref{eq:kappaEq} for a given $ \rho\in(0,1) $. \begin{proposition} \label{prop:subp} Start the system from the following i.i.d.\ initial condition: \begin{align} \label{eq:iidIC} \{\eta^\text{ic}(x)\}_{x\in\mathbb{Z}_{>0}} \text{ i.i.d.}, \quad \text{ with } \mathbf{E} (\eta^\text{ic}(1))=\rho, \ \mathbf{E} (e^{\lambda_0 \eta^\text{ic}(1)}) <\infty, \ \text{ for some }\lambda_0>0. \end{align} \begin{enumerate}[label=(\alph*)] \item \label{enu:sup} If $ \rho>1 $, the front $ R $ explodes in finite time: \begin{align} \label{eq:supcrt} \mathbf{P} \Big( R(t) =\infty, \text{ for some }t<\infty \Big) =1. \end{align} \item \label{enu:sub} If $ 0<\rho<1 $, the front scales diffusively to the deterministic trajectory $ \kappa_\rho\sqrt{t} $: \begin{align} \label{eq:subcrt} \sup_{t\in[0,t_0]} |\varepsilon R(\varepsilon^{-2}t) - \kappa_\rho\sqrt{t}| \longrightarrow_\text{P} 0, \quad \text{as } \varepsilon \to 0, \end{align} for any fixed $ t_0 <\infty $. \end{enumerate} \end{proposition} Proposition~\ref{prop:subp} is settled in the Appendix. We now turn to the case $ \rho=1 $ of interest. To set up notation, consider the space \begin{align} \label{eq:rcll} \mathbb{D}^{\uparrow} &:= \big\{ f: \mathbb{R}_{\geq 0} \xrightarrow[\text{nondecr.}]{\text{RCLL}} [0,\infty] \big\} \end{align} of non-decreasing, $ [0,\infty] $-valued, \ac{RCLL} functions. On this space $ \mathbb{D}^{\uparrow} $, we define the map \begin{align} \label{eq:inversion} \iota: \mathbb{D}^{\uparrow} \to \mathbb{D}^{\uparrow}, \quad \iota\big(f\big)(t) := \sup \big( \{ \xi\in[0,\infty) : f(\xi) < t \} \cup \{0\} \big). \end{align} It is straightforward to verify that $ \iota $ is an involution, i.e.\ $ \iota^2(f)=f $. Alternatively, defining the \textbf{Complete Graph} of $ f\in\mathbb{D}^{\uparrow} $ as \begin{align*} \text{CG}(f) &:= \bigcup_{t\in[0,\infty)} \{ (t,\xi): f(t^-)\leq \xi \leq f(t) \} \subset [0,\infty)^2, \\ &\text{ where } f(t^-) := \lim_{s\uparrow t}f(s) \text{ for } t>0 \text{ and } f(0^-) := 0, \text{ see Figure~\ref{fig:CG},} \end{align*} one equivalently defines $ \iota(f)=:g $ as the unique $ \mathbb{D}^{\uparrow} $-valued function such that CG$ (g) $ equals the `transpose' of CG$ (f) $, i.e., $ \text{CG}(g) = (\text{CG}(f))^{\text{t}} := \{ (\xi,t) : (t,\xi)\in\text{CG}(f) \}. $ In view of this, hereafter we refer to $ \iota(f) $ as the \textbf{inverse} of $ f $. \begin{figure} \caption{The complete graph of $ t\mapsto f(t) $.} \label{fig:CG} \end{figure} \noindent Next, consider the space \begin{align} \label{eq:Rcllt0} \mathbb{D}[0,t_0] := \{ f:[0,t_0]\xrightarrow{\text{RCLL}} \mathbb{R} \} \end{align} of RCLL functions on $ [0,t_0] $.
\begin{definition} \label{def:M1} For $ f\in\mathbb{D}[0,t_0] $, we say $ g \in C([0,1];\mathbb{R}^2) $ is a parametrization of $ \text{CG}(f) $ if $ g $ maps $ [0,1] $ onto $ \text{CG}(f) $, with $ g(0)=(0,f(0)) $ and $ g(1)=(t_0,f(t_0)) $. Recall from~\cite[Chapter~12.3]{whitt02} that Skorokhod's $ M_1 $-topology on $ \mathbb{D}[0,t_0] $ is characterized by the metric \begin{align*} d_{M_1}(f_1,f_2) := \inf \big\{ \Vert g_1-g_2 \Vert_{C[0,1]} \big\}, \end{align*} where the infimum goes over all continuous parameterizations $ g_i $ of $ \text{CG}(f_i) $, and $ \Vert g_1-g_2 \Vert_{C[0,1]} $ denotes the supremum norm measured in the Euclidean distance of $ \mathbb{R}^2 $. Let $ \mathscr{M}_1[0,t_0] $ denote Skorokhod's $ M_1 $-topology on $ \mathbb{D}[0,t_0] $. \end{definition} \noindent To avoid technical sophistication regarding topology, we do not define Skorokhod's $ M_1 $-topology for functions on $ [0,\infty) $, and restrict our discussion to functions defined on finite intervals $ \mathbb{D}[0,t_0] $, $ t_0<\infty $. We use $ \Rightarrow $ to denote the weak convergence of the laws of stochastic processes. For i.i.d.\ initial conditions we have \begin{theorem} \label{thm:iid} Let $ \{\eta^\text{ic}(x)\}_{x\in\mathbb{Z}_{\geq 0}} $ be i.i.d., with \begin{align} \label{eq:iid} & \mathbf{E} (\eta^\text{ic}(1)) = 1, \ \text{Var}(\eta^\text{ic}(1)) := \sigma^2>0, \quad \quad \mathbf{E} (e^{\lambda_0\eta^\text{ic}(1)})<\infty, \ \text{ for some } \lambda_0>0. \end{align} Let $ \mathcal{T}_*(\xi) := 2\sigma \int_0^\xi [ B(\zeta)]_+ d\zeta $, where $ B(\Cdot) $ denotes a standard Brownian motion, and let $ \mathcal{R}_* := \iota(\mathcal{T}_*) $. For any fixed $ t_0<\infty $, we have that \begin{align*} T^{-\frac{2}{3}} R\big(T\Cdot\big) \Rightarrow \mathcal{R}_*(\Cdot) \text{ under } \mathscr{M}_1[0,t_0], \ \text{ as } T \to \infty. \end{align*} \end{theorem} Theorem~\ref{thm:iid} completely characterizes the scaling behavior of $ R $ at the critical density $ \rho=1 $ under the scope stated therein, giving a scaling exponent $ \frac{2}{3} $, and a non-Gaussian limiting process $ \mathcal{R}_* $. In contrast, as shown in \cite{berard16}, for $ D_A=D_B=1 $ and $ d=1 $ the front admits Brownian fluctuations at scaling exponent $\frac{1}{2}$ for generic initial conditions. Another interesting property is that the limiting process $ \mathcal{R}_* $ exhibits \emph{jumps}. Indeed, the process $ \mathcal{T}_*(\xi) := 2\sigma \int_0^\xi [B(\zeta)]_+ d\zeta $ remains constant during negative Brownian excursions $ O_*:=\{\xi: B(\xi)<0\} \subset \mathbb{R} $, which results in jumps of $ \mathcal{R}_*:=\iota(\mathcal{T}_*) $. From a microscopic point of view, such jumps originate from the \emph{oscillation between two phases}. Indeed, given the i.i.d.\ initial condition as in Theorem~\ref{thm:iid}, the number of particles in $ [0,L] $ oscillates around $ L $ similarly to a Brownian motion as $ L $ varies. The Brownian motion $ B $ in Theorem~\ref{thm:iid} being negative corresponds to a region with an \emph{excess} of particles. In this case, the front $ R $ travels effectively at \emph{infinite} velocity under the scaling of consideration, resulting in jumps of $ \mathcal{R}_* $. On the other hand, $ B(\xi)>0 $ corresponds to a region with a \emph{deficiency} of particles. In this case, the front is limited by the scarcity of particles, and travels at the specified scale $ t^{\frac{2}{3}} $, resulting in a $ C^1 $-smooth region of $ \mathcal{R}_* $. 
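The mechanism just described (flat stretches of $ \mathcal{T}_* $ turning into jumps of $ \mathcal{R}_*=\iota(\mathcal{T}_*) $) is also easy to visualize numerically. The following small sketch (in Python) is purely illustrative and plays no role in the proofs; the Euler discretization of $ B $, the truncation to a finite $ \xi $-range and all names are our own choices. It samples a discretized Brownian path, forms $ \mathcal{T}_* $ by a Riemann sum, and computes the inverse directly from the definition~\eqref{eq:inversion}.
\begin{verbatim}
import bisect
import math
import random

def limiting_front(sigma=1.0, xi_max=20.0, n=200000, seed=1):
    """Discretized sample of T_*(xi) = 2*sigma*int_0^xi [B(zeta)]_+ dzeta
    and of its generalized inverse R_* = iota(T_*); flat pieces of T_*
    (negative excursions of B) become jumps of R_*."""
    rng = random.Random(seed)
    dxi = xi_max / n
    B, T = 0.0, 0.0
    xi_grid, T_vals = [0.0], [0.0]
    for _ in range(n):
        B += math.sqrt(dxi) * rng.gauss(0.0, 1.0)  # Brownian increment
        T += 2.0 * sigma * max(B, 0.0) * dxi       # Riemann sum for T_*
        xi_grid.append(xi_grid[-1] + dxi)
        T_vals.append(T)                           # non-decreasing list

    def R_star(t):
        # iota(T_*)(t) = sup({xi : T_*(xi) < t} U {0}), on the grid
        idx = bisect.bisect_left(T_vals, t)        # first index with T >= t
        return xi_grid[max(idx - 1, 0)]

    return R_star

# Sampling R_star on a grid of t-values and plotting it exhibits the
# alternation of C^1 pieces and jumps described above (values of t beyond
# the sampled range of T_* are affected by the truncation at xi_max).
\end{verbatim}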
While our approach to proving Theorem~\ref{thm:iid} relies on the flux condition~\eqref{eq:fluxCond}, through coupling it is clear that $ R $ stochastically dominates $ \widetilde{R} $ (the front of the \ac{1d-MDLA}). This immediately yields \begin{corollary}\label{cor} Let $\widetilde{R} $ denote the front of the \ac{1d-MDLA}. Under the same initial condition as in Theorem~\ref{thm:iid}, $ \{ T^{-\frac23} \widetilde{R}(T) \}_{T>0} $ is tight, and the limit points are stochastically dominated by $ \mathcal{R}_*(1) $. \end{corollary} \begin{remark} Under the prescribed scaling, Corollary~\ref{cor} does \emph{not} exclude the possibility that the limit points $ \widetilde{\mathcal{R}}_* $ of the \ac{1d-MDLA} degenerate, i.e., $ \widetilde{\mathcal{R}}_*=0 $. \end{remark} \noindent Even though, for $ \rho>1 $, the front $ R $ explodes in finite time, it is possible to avoid such finite time explosion while keeping the flux condition~\eqref{eq:fluxCond}. For example, let $ \widehat{R} $ denote the front of the system where, in the case of potential multiple absorptions, the front absorbs exactly one particle, advances one step, and \emph{pushes} all the excess particles one step to the right. See Figure~\ref{fig:fpgm}, and compare that with Figure~\ref{fig:fgm2}. It is straightforward to show that, under i.i.d.\ initial conditions, $ \widehat{R} $ stays finite for all time even when $ \rho>1 $. While we do not pursue this direction here, we believe that our approach is applicable to analyzing $ \widehat{R} $ at $ \rho=1 $, and conjecture that \begin{conjecture} Theorem~\ref{thm:iid} holds for $ \widehat{R} $ in place of $ R $. \end{conjecture} \begin{figure} \caption{Motion of $ \widehat{R} $} \label{fig:fpgm} \end{figure} To explain the origin of the $ \frac23 $ scaling exponent as well as to demonstrate the robustness of our method, consider the following class of initial conditions. Let $ \{\eta^\text{ic}_\varepsilon = (\eta^\text{ic}_\varepsilon(x))_{x\in\mathbb{Z}_{>0}}\}_{\varepsilon\in(0,1]} $ be a sequence of (possibly random) initial conditions, parameterized by a scaling parameter $ \varepsilon\in(0,1] $. To each $ \eta^\text{ic}_\varepsilon $ we attach the centered, integrated function: \begin{align} \label{eq:ict} F_\varepsilon(\xi) &:= \sum_{0<y \leq \lfloor \xi \rfloor} (1-\eta^\text{ic}_\varepsilon(y))\,. \end{align} Let $ \mathscr{U} $ denote the uniform topology over compact sets, defined on the space $ \mathbb{D} := \{ f: [0,\infty) \xrightarrow{\text{RCLL}}\mathbb{R} \} $ of \ac{RCLL} functions. \begin{definition} \label{def:ic} Let $ \mathcal{F} $ be a $ C[0,\infty) $-valued process. We say that a possibly random collection of initial conditions $ \{\eta^\text{ic}_\varepsilon\}_{\varepsilon\in(0,1]} $ is at density $ 1 $, with \textbf{shape exponent} $ \gamma\in[0,1) $ and \textbf{limiting fluctuation} $ \mathcal{F} $ if \newline (a). There exist constants $ C_*<\infty $ and $ a_*>0 $ such that for all $ \varepsilon\in(0,1] $, $ r>0 $ and $ x_1,x_2\in\mathbb{Z}_{\geq 0} $, \begin{align} \label{eq:ic:den1} \mathbf{P} \bigg( |F_\varepsilon(x_2)-F_\varepsilon(x_1)| \geq r|x_2-x_1|^\gamma \bigg) \leq C_* e^{-r^{a_*}}, \end{align} which, for non-random initial conditions, amounts to the condition $|F_\varepsilon(x_2)-F_\varepsilon(x_1)| < r|x_2-x_1|^\gamma$, for some $r < \infty$. \newline (b). As $ \varepsilon\to 0 $, \begin{align} \label{eq:ic:lim} \varepsilon^{\gamma}F_\varepsilon(\varepsilon^{-1}\Cdot) \Rightarrow \mathcal{F}(\Cdot)\,, \quad \text{ under } \quad \mathscr{U}.
\end{align} \end{definition} We have the following for any initial condition satisfying Definition~\ref{def:ic}: \begin{theorem} \label{thm:main:} Fixing $ \mathcal{F} \in C[0,\infty) $, we define \begin{align} \label{eq:Hit} \mathcal{T}_\mathcal{F}(\xi) := 2 \int_0^\xi [\mathcal{F}(\zeta)]_+ d\zeta. \end{align} Assuming further \begin{align} \label{eq:Hit:cnd} \lim_{\xi\to\infty}\mathcal{T}_\mathcal{F}(\xi) = \infty, \end{align} we let $ \mathcal{R} := \iota(\mathcal{T}_\mathcal{F})\in\mathbb{D}^{\uparrow}\cap\mathbb{D} $. Fixing $ \gamma\in(\frac13,1) $, and starting the system from initial conditions $ \{\eta^\text{ic}_\varepsilon\}_{\varepsilon\in(0,1]} $ as in Definition~\ref{def:ic}, with density $ 1 $, shape exponent $ \gamma $ and limiting fluctuation $ \mathcal{F} $, for any fixed $ t_0<\infty $, we have \begin{align*} \widetilde{\varepsilon}^{\frac{1}{1+\gamma}} R(\widetilde{\varepsilon}^{\,-1}\Cdot) \Rightarrow \mathcal{R}(\Cdot) \quad \text{ under } \; \mathscr{M}_1[0,t_0], \quad \text{ as } \;\; \widetilde{\varepsilon} \to 0, \end{align*} where $ \widetilde{\varepsilon} := \varepsilon^{1+\gamma} $. \end{theorem} \begin{remark} The assumption $ \gamma>\frac13 $ in Theorem~\ref{thm:main:} ensures that the fluctuation of the initial condition (characterized by $ F_\varepsilon $ in \eqref{eq:ict}) overwhelms the random fluctuation due to the motions of the particles; see Remark~\ref{rmk:mgt<<}. When $ \gamma\leq\frac13 $, we conjecture that both the scaling exponents and the scaling limit change. \end{remark} \begin{remark} \label{exm:iid} For i.i.d.\ $ (\eta^\text{ic}(x))_{x\in\mathbb{Z}_{>0}} $ satisfying~\eqref{eq:iid} as in Theorem~\ref{thm:iid}, it is standard to verify that the conditions of Definition~\ref{def:ic} hold with $ \gamma=\frac12 $, some $ 0<a_*<1 $ and $ \mathcal{F}(\xi)=\sigma B(\xi) $. Hence, Theorem~\ref{thm:iid} is a direct consequence of Theorem~\ref{thm:main:}. \end{remark} \begin{example} \label{exm:ic} To construct initial conditions with a \emph{generic} (other than $ \frac12 $) shape exponent $ \gamma\in(0,1) $, consider the \emph{deterministic} initial condition: \begin{align} \label{eq:sineIC} \eta^\text{ic}_\varepsilon(x) := 1 - \lfloor \varepsilon^{-\gamma} \sin(\varepsilon x) \rfloor + \lfloor \varepsilon^{-\gamma} \sin(\varepsilon (x-1)) \rfloor, \quad x\in\mathbb{Z}_{>0}. \end{align} Since $ |\varepsilon^{-\gamma} \sin(\varepsilon (x-1))-\varepsilon^{-\gamma} \sin(\varepsilon x)| \leq \varepsilon^{1-\gamma} \leq 1 $, such $ \eta^\text{ic}_\varepsilon(x) $ is indeed non-negative, and hence defines an occupation variable. For such an $ \eta^\text{ic}_\varepsilon(x) $, we have $ F_\varepsilon(x) = \lfloor \varepsilon^{-\gamma} \sin(\varepsilon x) \rfloor $. From this it is straightforward to verify that \begin{align*} & |F_\varepsilon(x_1) - F_\varepsilon(x_2)| \leq (\varepsilon^{1-\gamma}|x_1-x_2|) \wedge \varepsilon^{-\gamma} \leq |x_1-x_2|^{\gamma}, \quad \forall x_1,x_2\in\mathbb{Z}_{\geq 0}, \\ &\sup_{\xi\in\mathbb{R}_{\geq 0}} |\varepsilon^{\gamma}F_\varepsilon(\varepsilon^{-1}\xi) - \sin(\xi)| \rightarrow 0, \quad \text{ as } \varepsilon \to 0, \end{align*} so the initial condition~\eqref{eq:sineIC} satisfies Definition~\ref{def:ic} with shape exponent $ \gamma $ and limiting fluctuation $ \mathcal{F}(\xi) = \sin(\xi) $.
\end{example} \subsection{A PDE heuristic for Theorem~\ref{thm:main:}} \label{sect:heu} In this subsection we give a heuristic derivation of Theorem~\ref{thm:main:} via a combination of PDE-type calculations and consequences of the flux condition. We begin with a discussion of the case $ \rho<1 $. Express the flux condition~\eqref{eq:fluxBC:} as \begin{align} \label{eq:graphical} (\xi- \rho \xi)|_{\xi=r(t)} = \int_{r(t)}^\infty (\rho-u_*(t,\xi)) d\xi. \end{align} Indeed, the flux condition~\eqref{eq:fluxBC:} demands that the front absorbs exactly mass $ \xi $ when $ r(t)=\xi $, but initially there is only an amount of mass $ \xi\rho $ allocated within $ [0,\xi] $. The l.h.s.\ of~\eqref{eq:graphical} represents this deficiency. Such a deficiency is compensated by the `boundary layer' $ \rho-u_*(t,\xi) $ caused by the motion of the front. That is, \eqref{eq:graphical} offers an alternative description of the motion of $ r(t) $, by matching the deficiency to the mass of the boundary layer. We now attempt to generalize the preceding matching argument to $ \rho=1 $. When $ \rho=1 $, however, the l.h.s.\ of~\eqref{eq:graphical} becomes zero. This suggests that we should look for the next order, namely the fluctuation of the initial condition $ F(\xi) $ (defined in~\eqref{eq:ict}). Under this setting the matching condition~\eqref{eq:graphical} generalizes to \begin{align} \label{eq:N:dcmp:heu} F(R(t)) = G^{R}(t) + M(t,R(t)). \end{align} This identity follows as a special case of the general decomposition~\eqref{eq:N:dcmp} we derive in Section~\ref{sect:elem}, by setting $ Q=R $ therein and using $ N^Q(t)=Q(t) $. Here $ G^{R}(t) $ (defined in~\eqref{eq:blt}) acts as the discrete analog of the boundary layer mass $ \int_{r(t)}^\infty (\rho-u_*(t,\xi)) d\xi $, and $ M(t,R(t)) $ (defined in~\eqref{eq:mgt}) encodes the random fluctuations of the motions of the particles. As we show in Proposition~\ref{prop:mgtbd} (see also Remark~\ref{rmk:mgt<<}), the term $ M(t,R(t)) $ is of smaller order, and we hence rewrite~\eqref{eq:N:dcmp:heu} as \begin{align} \label{eq:N:dcmp:heu:} F(R(t)) \approx G^{R}(t). \end{align} To obtain the position $ R (t)$ of the front, we need to approximate the boundary layer term $ G^{R}(t) $. The boundary layer is in general coupled to the entire trajectory of $ R(\Cdot) $. However, as $ \rho=1 $ puts us in a \emph{super-diffusive} scenario, we expect the boundary layer to depend on $ R $ \emph{locally} in time, only through its derivative $ \frac{d~}{dt}R(t) $. This being the case, we look for stationary solutions $ u_v $ to the Stefan problem~\eqref{eq:Stefan} with a constant velocity $ \frac{d~}{dt}r(t) = v $: \begin{align*} u_v(t,\xi) = (1 - e^{-2v(\xi-r(t))})\mathbf{1}_\set{\xi>r(t)}, \quad r(t) = vt + \mathrm{const}. \end{align*} Such a solution enjoys the relation $ \int_{r(t)}^\infty (1-u_v(t,\xi)) d\xi = \frac{1}{2v} $ between the mass of the boundary layer and the front velocity. This suggests the ansatz $G^{R}(t) \approx 1/(2 dR(t)/dt).$ Combining this with~\eqref{eq:N:dcmp:heu:} gives $F(R(t)) \approx 1/\big(2 \frac{d R (t)}{dt}\big)$. So far our discussion has been for $F(R(t))>0$, i.e., when the front experiences an instantaneous deficiency of particles (see~\eqref{eq:ict}). In contrast, when $ F(R(t))<0 $, we expect, as discussed earlier, that $\frac{dR(t)}{dt} \approx \infty$ under the relevant scaling. This being the case, the general form of our ansatz reads $ [F(R(t)) ]_+ \approx 1/(2d R (t)/dt) $, or equivalently \begin{align} \label{eq:ansatz} 2 [ F(R(t))]_+ d R(t) \approx dt.
\end{align} We now perform the scaling $ t \mapsto \widetilde{\varepsilon}^{-1} t $ in~\eqref{eq:ansatz}, and postulate that $ \widetilde{\varepsilon}^{\alpha}R(\widetilde{\varepsilon}^{-1}t) $ converges to a non-degenerate limiting process $ \mathcal{R}(t) $, for some $ \alpha\in\mathbb{R} $ as $ \widetilde{\varepsilon}\to 0 $. This together with our assumptions on $ F $ in Definition~\ref{def:ic} suggests $ F(R(\widetilde{\varepsilon}^{-1} t)) \approx \widetilde{\varepsilon}^{-\gamma\alpha} \mathcal{F}(\mathcal{R}(t)) $. Writing also $ d R(\widetilde{\varepsilon}^{-1} t) = \widetilde{\varepsilon}^{-\alpha} d\mathcal{R}(t) $, we obtain \begin{align} \label{eq:ansatz:} 2 \widetilde{\varepsilon}^{\,-\gamma\alpha} [ \mathcal{F}(\mathcal{R}(t))]_+ \, \widetilde{\varepsilon}^{\,-\alpha} d \mathcal{R}(t) \approx \widetilde{\varepsilon}^{\,-1} dt. \end{align} Balancing the powers of $ \widetilde{\varepsilon} $ on both sides of~\eqref{eq:ansatz:} requires $ \alpha=1/(1+\gamma) $, which is indeed the scaling in Theorem~\ref{thm:main:}. Further, for $ \alpha=1/(1+\gamma) $, passing~\eqref{eq:ansatz:} to the limit $ \widetilde{\varepsilon}\to 0 $ gives $ 2 [\mathcal{F}(\mathcal{R}(t))]_+ d \mathcal{R}(t) = dt. $ Upon integrating in $ t $, we obtain $ \mathcal{T}_\mathcal{F}(\mathcal{R}(t)) = t $. After applying the inversion $\iota$, we obtain the claimed limiting process $ \mathcal{R} $. Our proof of Theorem~\ref{thm:main:} amounts to rigorously executing the prescribed heuristics. The challenge lies in controlling the regularity of the front $ R $. Indeed, the limiting process $ \mathcal{R}:=\iota(\mathcal{T}_\mathcal{F}) $ is $ C^1 $ with derivative $ \frac{d~}{dt}\mathcal{R}(t) = \frac{1}{2\mathcal{F}(\mathcal{R}(t))} $ away from the points of discontinuity. On the other hand, the microscopic front $ R $ is a \emph{pure jump process}. A direct proof of the convergence of $ R $ hence requires establishing certain mesoscopic averaging to match the regularity of the limiting process $ \mathcal{R} $. This poses a significant challenge due to the lack of invariant measures (as a result of absorption). The problem is further exacerbated by \textit{a}) criticality, which requires more refined estimates; and \textit{b}) the aforementioned oscillation between two phases, which requires us to incorporate in the argument two distinct scaling ansatzes. In this article we largely \emph{circumvent} these problems by utilizing a novel monotonicity. This monotonicity, established in Proposition~\ref{prop:mono}, is a direct consequence of the flux condition~\eqref{eq:fluxCond}--\eqref{eq:fluxCond:}, and it allows us to construct certain upper and lower bounds which \emph{by design} have the desired microscopic regularity. \section{Overview of the Proof} \label{sect:elem} To simplify notations, hereafter we often omit dependence on $ \varepsilon $, and write $ F(x), \eta^\text{ic}(x) $ in place of $ F_\varepsilon(x), \eta^\text{ic}_\varepsilon(x) $. Throughout this article, we adopt the convention that $ x,y $, etc., denote points on the integer lattice $ \mathbb{Z} $, while $ \xi,\zeta $, etc., denote points on the real line $ \mathbb{R} $, and we use $ t,s\in[0,\infty) $ for the time variable. We begin with a reduction of Theorem~\ref{thm:main:}. Consider the hitting time process corresponding to $ R $: \begin{align*} T := \iota(R), \text{ i.e., } T(\xi) := \inf\{ t\geq 0: R(t) > \xi \}. \end{align*} Recall that $ \mathscr{U} $ denotes the uniform topology over compact intervals.
Instead of proving Theorem~\ref{thm:main:} directly, we aim at proving the analogous statement regarding the hitting time process $ T $. \begin{customthm}{\ref*{thm:main:}} \label{thm:main} Fixing $ \mathcal{F} \in C[0,\infty) $, we let $ \mathcal{T}_\mathcal{F} $ be as in \eqref{eq:Hit}. Fixing $ \gamma\in(\frac13,1) $, and starting the system from initial conditions $ \{\eta^\text{ic}_\varepsilon\}_{\varepsilon\in(0,1]} $ as in Definition~\ref{def:ic}, with density $ 1 $, shape exponent $ \gamma $ and limiting fluctuation $ \mathcal{F} $, we have \begin{align} \label{eq:main} \varepsilon^{1+\gamma} T(\varepsilon^{-1}\Cdot) \Rightarrow \mathcal{T}_\mathcal{F}(\Cdot) \text{ under } \mathscr{U}, \text{ as } \varepsilon \to 0. \end{align} \end{customthm} \noindent We now explain how Theorem~\ref{thm:main} implies Theorem~\ref{thm:main:}. Recall the space $ \mathbb{D}^{\uparrow} $ from~\eqref{eq:rcll}, and consider the subspace \begin{align*} \mathbb{D}^{\uparrow}_* := \Big\{ f\in\mathbb{D}^{\uparrow} : \, f(\xi)<\infty, \forall \xi\in[0,\infty), \ \lim_{\xi\to\infty} f(\xi)=\infty \Big\}. \end{align*} It is readily checked that $ \iota $ maps $ \mathbb{D}^{\uparrow}_* $ into itself, i.e., $ \iota(\mathbb{D}^{\uparrow}_*) \subset \mathbb{D}^{\uparrow}_* $. Recall the definition of $ \mathbb{D}[0,t_0] $ from~\eqref{eq:Rcllt0}. For any fixed $ t_0<\infty $, consider the restriction maps \begin{align*} \mathfrak{r}_{t_0} : \mathbb{D}^{\uparrow}_* \to \mathbb{D}[0,t_0], \quad \mathfrak{r}_{t_0}(f) := f|_{[0,t_0]}. \end{align*} Equipping $ \mathbb{D}^{\uparrow}_* $ with the uniform topology $ \mathscr{U} $ and equipping $ \mathbb{D}[0,t_0] $ with the $ \mathscr{M}_1[0,t_0] $ topology, we have that \begin{align} \label{eq:conti} \mathfrak{r}_{t_0}\circ \iota: (\mathbb{D}^{\uparrow}_*, \mathscr{U}) \longrightarrow (\mathbb{D}[0,t_0],\mathscr{M}_1[0,t_0]) \text{ continuously.} \end{align} To see this, fix $ f\in\mathbb{D}^{\uparrow}_* $ and consider a sequence $ \{f_n\}_n \subset \mathbb{D}^{\uparrow}_* $ such that $ f_n \to f $ in $ \mathscr{U} $. This gives a convergence at the level of parametrization of $ \iota(f)|_{[0,t_0]} $, and hence, by Definition~\ref{def:M1}, gives convergence of $ \iota(f_n)|_{[0,t_0]} $ to $ \iota(f)|_{[0,t_0]} $ under $ \mathscr{M}_1[0,t_0] $. The assumption~\eqref{eq:Hit:cnd} ensures $ \mathcal{T}_\mathcal{F} \in \mathbb{D}^{\uparrow}_* $. Hence, by~\eqref{eq:conti}, Theorem~\ref{thm:main} immediately implies Theorem~\ref{thm:main:}. We focus on Theorem~\ref{thm:main} hereafter. To give an overview of the proof, we begin by preparing some notations. Define the following function space \begin{align} \label{eq:rclll} \mathbb{D}^{\uparrow}_{\mathbb{Z}} := \big\{ f:\mathbb{R}_{\geq 0} \xrightarrow[\text{nondecr.}]{\text{RCLL}} \mathbb{Z}\cup\{\infty\} \big\}. \end{align} Note that, unlike the space $ \mathbb{D}^{\uparrow} $, here we allow trajectories to take negative values in $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $. We consider the `free' particle system, which is simply the system of particles performing independent random walks \emph{without} absorption, starting from $ \eta^\text{ic} $. We adopt the standard notation \begin{align*} \eta(t,x) := \#\{ \text{free particles at time } t \text{ and site }x\} \end{align*} for the occupation variable, and, by abuse of notations, use $ \eta $ also to refer to the free particle system itself.
Next, for any $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued process $ Q $, letting \begin{align} \label{eq:shade} A_Q(t) := \{ (s,x) : s\in[0,t], \ x\in(-\infty,Q(s)] \cap\mathbb{Z} \} \subset [0,\infty)\times\mathbb{Z} \end{align} denote the `shaded region' of $ Q $ up to time $ t $, we construct the absorbed particle system $ \eta^Q $ from $ \eta $ by deleting all $ \eta $-particles that have visited $ A_Q(t) $: \begin{align*} \eta^Q(t,x) := \# \{ \eta\text{-particles at site } x \text{ that have never visited } A_Q(t) \text{ up to time }t \}. \end{align*} Under these notations, $ \eta^R(t,x) $ denotes the occupation variable of $ \{X_i(t)\}_i $. Recall from~\eqref{eq:fluxCond} that $ N^R(t) $ denotes the number of $ \eta $-particles absorbed into $ R $ up to time $ t $. We likewise let $ N^Q(t) $ denote the analogous quantity for any $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued process $ Q $, i.e., \begin{align} N^Q(t) \label{eq:NQ} := \sum_{ x\in\mathbb{Z} } (\eta(t,x) - \eta^Q(t,x)). \end{align} Indeed, even though both $\sum_{x\in\mathbb{Z}} \eta(t,x)$ and $\sum_{x\in\mathbb{Z}}\eta^Q(t,x)$ are infinite, \eqref{eq:NQ} is well-defined since \begin{align*} \lim_{x\to-\infty} \eta(t,x) =0, \quad \eta^Q(t,x) =0, \ \forall x\leq Q(t), \quad \lim_{x\to\infty} (\eta(t,x)-\eta^Q(t,x))=0. \end{align*} The starting point of the proof of Theorem~\ref{thm:main} is the following monotonicity property (which is proven in Section~\ref{sect:basic}). \begin{proposition} \label{prop:mono} Let $ \tau\in[0,\infty) $ be (possibly) random, and let $ Q $ be a $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued process. If $ N^Q(t) \leq Q(t) $, $ \forall t\leq \tau $ and $ Q(0)\geq 0 $, we have that $ Q(t) \geq R(t) $, $ \forall t \leq \tau $. Similarly, if $ Q(0)=0 $ and $ N^Q(t) \geq Q(t) $, $ \forall t\leq \tau $, we have that $ Q(t) \leq R(t) $, $ \forall t \leq \tau $. \end{proposition} Given Proposition~\ref{prop:mono}, our strategy is to construct suitable processes $ \overline{R}_\lambda,\underline{R}_\lambda\in\mathbb{D}^{\uparrow}_{\mathbb{Z}} $ such that $ N^{\overline{R}_\lambda}(t) \leq \overline{R}_\lambda(t) $ and that $ N^{\underline{R}_\lambda}(t) \geq \underline{R}_\lambda(t) $. Here $ \lambda>0 $ is an auxiliary parameter, such that, for any fixed $ \lambda>0 $, $ \overline{R}_\lambda, \underline{R}_\lambda $ are suitable deformations of $ R $ that allow some room to accommodate various error terms in our analysis, but as $ \lambda\to 0 $, both $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $ approximate $ R $. The precise constructions of $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $ are given in Section~\ref{sect:pfmain}. Essential to the constructions is the following identity~\eqref{eq:N:dcmp} that relates $ N^Q(t) $ to the motion of the $ \eta $- and $ \eta^Q $-particles. To derive such an identity, define \begin{align} \label{eq:mgt} M(t,x) &:= \sum_{y \leq x} (\eta(t,y)-\eta^\text{ic}(y)), \\ \label{eq:blt} G^Q(t) &:= \sum_{ y > Q(t) } (\eta(t,y)-\eta^Q(t,y)). \end{align} Hereafter, for consistency of notations, we set $ \eta^\text{ic}(y):= 0 $ for $ y\leq 0 $. Recall the definition of $ F(x) $ from \eqref{eq:ict} and recall $ N^Q(t) $ from~\eqref{eq:NQ}. Under these notations, it is now straightforward to verify \begin{align} \label{eq:N:dcmp} N^Q(t) = Q(t) - F(Q(t)) + M(t,Q(t)) + G^Q(t).
\end{align} The first two terms on the r.h.s.\ of \eqref{eq:N:dcmp} collectively contribute $ N'(t) := \sum_{0<y\leq Q(t)} \eta^\text{ic}(y) $, which is the value of $ N^Q(t) $ had all particles been \emph{frozen} at their $ t=0 $ locations. Indeed, as the density equals $ 1 $ under current consideration, $ Q(t) $ represents the first order approximation of $ N'(t) $, and the \textbf{initial fluctuation term} $ F(Q(t)) $ describes the random fluctuation of the initial condition. Subsequently, the \textbf{noise term} $ M(t,Q(t)) $ accounts for the \emph{time evolution} of the $ \eta $-particle; and the \textbf{boundary layer term} $ G^Q(t) $ encodes the loss of $ \eta^Q $-particles due to absorption seen at time $ t $ to the right of $ Q(t)$. For $Q(t)=R(t)$ we have by the flux condition~\eqref{eq:fluxCond} that $ N^R(t) = R(t)$, hence the last three terms in \eqref{eq:N:dcmp} add up to zero. Focusing hereafter on these terms, recall from \eqref{eq:main} that, under our scaling convention, the time and space variables are of order $ \varepsilon^{-1-\gamma} $ and $ \varepsilon^{-1} $, respectively. With this and \eqref{eq:ic:lim}, we expect the term $ -F(Q(t)) $ to scale as $ (\varepsilon^{-1})^{\gamma}=\varepsilon^{-\gamma}=:\Theta_F $. As for the noise term $ M(t,x) $, we establish the following bound in Section~\ref{sect:mgt}. \begin{proposition} \label{prop:mgtbd} Let \begin{align} \label{eq:Xi} \Xi_\varepsilon(a) := \{ (t,x) : t\in[0,\varepsilon^{-1-\gamma-a}], \ x\in[0,\varepsilon^{-1-a}]\cap\mathbb{Z}, \ x/\sqrt{t} \geq \varepsilon^{-a} \}. \end{align} Starting from an initial condition $ \eta^\text{ic} $ satisfying \eqref{eq:ic:den1}, for any fixed $ a\in(0,1] $, we have \begin{align} \label{eq:mgt:bd} \lim_{\varepsilon\to 0} \mathbf{P} \big( |M(t,x)| \leq 6\varepsilon^{-a}(1+t)^{\frac{1}{4}\vee\frac{\gamma}{2}}, \ \forall (t,x)\in\Xi_\varepsilon(a) \big) = 1. \end{align} \end{proposition} \noindent Roughly speaking, the conditions $ t\leq \varepsilon^{-1-\gamma-a} $ and $ x\leq \varepsilon^{-1-a} $ in \eqref{eq:Xi} correspond to the scaling $ (\varepsilon^{-1-\gamma},\varepsilon^{-1}) $ for $ (t,x) $. The extra factor $ a $ is a small parameter devised for absorbing various error terms in the subsequent analysis. \begin{remark} \label{rmk:mgt<<} Under the scaling $ (\varepsilon^{-1-\gamma},\varepsilon^{-1}) $ of time and space, Proposition~\ref{prop:mgtbd} asserts that $ |M(t,x)| $ is at most of order $ \Theta_M:=\varepsilon^{-(\frac14\vee\frac{\gamma}{2})(1+\gamma)-a} $ for all relevant $ (t,x) $. The condition~$ \frac13<\gamma<1 $ implies $ (\tfrac14\vee\tfrac{\gamma}{2})(1+\gamma) < \gamma $. In particular, by choosing $ a $ small enough, we have $ \Theta_M \ll \Theta_F = \varepsilon^{-\gamma} $, i.e., $ M(t,x) $ is negligible compared to $ F(Q(t)) $. \emph{This is where the assumption~$ \gamma>\frac13 $ enters}. If $ \gamma\leq \frac13 $, the preceding scaling argument is invalid, and we expect $ M(t,x) $ to be non-negligible, and the scaling should change. \end{remark} As explained in Remark~\ref{rmk:mgt<<}, we expect $ |M(t,x)| $ to be of smaller order than $ F(Q(t)) $, so the latter must be effectively balanced by the boundary layer term~$ G^Q(t) $ and our next step is thus to derive an approximate expression for $ G^Q(t) $. To this end, instead of a generic trajectory $ Q $, we consider first \emph{linear} trajectories and truncated linear trajectories as follows. 
Adopting the notations $ \lfloor \xi \rfloor := \sup((-\infty,\xi]\cap\mathbb{Z}) $ and $ \lceil \xi \rceil := \inf([\xi,\infty)\cap\mathbb{Z}) $, we let $ L_{t_0,x_0,v}: \mathbb{R}_{\geq 0} \to \mathbb{Z} $ denote the $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued linear trajectory that passes through $ (t_0,x_0) $ with velocity $ v\in(0,\infty) $: \begin{align} \label{eq:L} L_{t_0,x_0,v}(t) &:= x_0 - \lceil v(t_0-t) \rceil. \end{align} Fixing $ \gamma'\in(\frac{\gamma+1}{2},1) $, we consider also the $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued truncated linear trajectories \begin{align} \label{eq:Lup} \overline{L}_{t_0,x_0,v}(t) &:= \left\{\begin{array}{l@{,}l} L_{t_0,x_0,v}(t) & \text{ if } L_{t_0,x_0,v}(t) \geq x_0-\lfloor\varepsilon^{-\gamma'}\rfloor, \\ x_0-\lfloor\varepsilon^{-\gamma'}\rfloor & \text{ otherwise}, \end{array}\right. \\ \label{eq:Llw} \underline{L}_{t_0,x_0,v}(t) &:= \left\{\begin{array}{l@{,}l} \big[ L_{t_0,x_0,v}(t) \big]_+ & \text{ if } L_{t_0,x_0,v}(t) \geq x_0-\lfloor\varepsilon^{-\gamma'}\rfloor, \\ 0 & \text{ otherwise}, \end{array}\right. \end{align} where $ [\xi]_+ $ denotes the positive part of $ \xi $. The following proposition, proved in Section~\ref{sect:blt}, provides the necessary estimates of $ G^{\overline{L}_{t_0,x_0,v}}(t) $, $ G^{\underline{L}_{t_0,x_0,v}}(t) $ and $ G^{L_{t_0,x_0,v}}(t_0) $. To state this proposition, we first define the admissible set of parameters: \begin{align} \label{eq:tv:cnd} \Sigma_{\varepsilon}(a) := \big\{ (t_0,x_0,v): \ t_0 \in [1,\varepsilon^{-1-\gamma-a}], \ &x_0 \in [\varepsilon^{-\gamma'-a},\varepsilon^{-1-a}]\cap\mathbb{Z}, \ v\in[\varepsilon^{\gamma+a},\varepsilon^{\gamma-a}], \\ \label{eq:supdiff} &\ \text{such that} \ v \sqrt{t_0} \geq \varepsilon^{-a} \big\}. \end{align} Similarly to \eqref{eq:Xi}, the conditions in \eqref{eq:tv:cnd} correspond to the scaling $ (\varepsilon^{-\gamma-1},\varepsilon^{-1},\varepsilon^{\gamma}) $ for $ (t_0,x_0,v) $, where the scaling $ \varepsilon^{\gamma} $ of $ v $ is understood under the informal matching $ v\mapsto \frac{x_0}{t_0} $. On the other hand, the condition \eqref{eq:supdiff} quantifies super-diffusivity, and excludes the short-time regime $ t_0 \leq \varepsilon^{-2a}v^{-2} $. To bridge the gap, we consider also \begin{align} \label{eq:tv:cnd:} \widetilde{\Sigma}_{\varepsilon}(a) := \big\{ (t_0,x_0,v): \ t_0 \in [0,\varepsilon^{-1-\gamma-a}], \ x_0\in[0,\varepsilon^{-1-a}]\cap\mathbb{Z}, \ v \in [\varepsilon^{\gamma+a},\varepsilon^{a}] \big\}. \end{align} \begin{proposition}\label{prop:blt} Start the system from an initial condition $ \eta^\text{ic} $ satisfying \eqref{eq:ic:den1}, with the corresponding constants $ a_*,C_*,\gamma $. For any fixed $ 0<a <(\gamma'-\frac{1+\gamma}{2})\wedge(1-\gamma)\wedge\frac{\gamma}{2} $, there exists a constant $ C<\infty $, depending only on $ a,a_*,C_*,\gamma $, such that \begin{align} \label{eq:blt:cnt:up} & \lim_{\varepsilon\to 0} \mathbf{P} \Big( \big| G^{\overline{L}_{t_0,x_0,v}}(t_0) - \tfrac{1}{2v} \big| \leq v^{\frac1C-1}, \ \forall (t_0,x_0,v)\in\Sigma_\varepsilon(a) \Big) = 1, \\ \label{eq:blt:cnt:lw} & \lim_{\varepsilon\to 0} \mathbf{P} \Big( \big| G^{\underline{L}_{t_0,x_0,v}}(t_0) - \tfrac{1}{2v} \big| \leq v^{\frac1C-1}, \ \forall (t_0,x_0,v)\in\Sigma_\varepsilon(a) \Big) = 1, \\ \label{eq:blt:<} & \lim_{\varepsilon\to 0} \mathbf{P} \Big( G^{L_{t_0,x_0,v}}(t_0) \leq 4\varepsilon^{-a}v^{-1}, \ \forall (t_0,x_0,v) \in \widetilde{\Sigma}_\varepsilon(a) \Big) = 1. 
\end{align} \end{proposition} \noindent The first two estimates \eqref{eq:blt:cnt:up}--\eqref{eq:blt:cnt:lw} state that $ G^{\overline{L}_{t_0,x_0,v}}(t_0) $ and $ G^{\underline{L}_{t_0,x_0,v}}(t_0) $ are well approximated by $ (2v)^{-1} $ for $ (t_0,x_0,v)\in\Sigma_\varepsilon(a) $. As for the short time regime $ (t_0,x_0,v)\in\widetilde{\Sigma}_\varepsilon(a) $, we establish a weaker estimate~\eqref{eq:blt:<} that suffices for our purpose. In Section~\ref{sect:pfmain}, we employ Proposition~\ref{prop:blt} to estimate $ G^{\overline{R}_\lambda}(t) $ and $ G^{\underline{R}_\lambda}(t) $, by approximating $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $ with suitable truncated linear trajectories. Such linear approximations suffice due to the \emph{super-diffusive} nature of $ R $. In general, $ G^Q(t) $ depends on the entire trajectory of $ Q $ from $ 0 $ to $ t $, but for the super-diffusive trajectories $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $, a linear approximation is accurate enough to capture the leading order of $ G^{\overline{R}_\lambda}(t) $ and $ G^{\underline{R}_\lambda}(t) $. Based on these estimates of $ G^{\overline{R}_\lambda}(t) $ and $ G^{\underline{R}_\lambda}(t) $, we then show that, with sufficiently high probability, $ N^{\overline{R}_\lambda}(t) \leq \overline{R}_\lambda(t) $ and $ N^{\underline{R}_\lambda}(t) \geq \underline{R}_{\lambda}(t) $ over the relevant time regime. This together with Proposition~\ref{prop:mono} shows that $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $ indeed sandwich $ R $ in the middle. The last step of the proof is then to show that this sandwiching becomes sharp under the iterated limit $ \lim_{\lambda\to0}\lim_{\varepsilon\to 0} $. More precisely, we show that the hitting time processes $ \overline{T}_\lambda $ and $ \underline{T}_\lambda $ corresponding to $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $, respectively, weakly converge to $ \mathcal{T}_\mathcal{F} $ under the prescribed iterated limit. \subsection*{Outline of the rest of this article} To prepare for the proof, we establish a few basic tools in Section~\ref{sect:basic}. Subsequently, in Section~\ref{sect:mgt}, we settle Proposition~\ref{prop:mgtbd} regarding bounding the noise term, and in Section~\ref{sect:blt}, we show Proposition~\ref{prop:blt} regarding estimates of the boundary layer term. In Section~\ref{sect:pfmain}, we put together results from Sections~\ref{sect:mgt}--\ref{sect:blt} to give a proof of the main result, Theorem~\ref{thm:main}. In Appendix~\ref{sect:subp}, to complement our study of the critical behaviors at $ \rho=1 $ throughout this article, we discuss the $ \rho<1 $ and $ \rho>1 $ behaviors of the front $ R $. \section{Basic Tools} \label{sect:basic} \begin{proof}[Proof of Proposition~\ref{prop:mono}] Fixing $ \tau<\infty $, we consider only the case $ N^Q(t)\leq Q(t) $, $ \forall t \leq \tau $, as the other case is proven by the same argument. By assumption, $ Q(0) \geq 0 $ and $ R(0)=0 $, so $ Q(t) \geq R(t) $ holds for $ t=0 $. Our goal is to prove that this dominance continues to hold for all $ t\leq \tau $. To this end, we let $ \sigma := \inf\{ t: R(t) > Q(t) \} $ be the first time when such a dominance fails. At time $ \sigma $, exactly one $ \eta^R $-particle attempts to jump, triggering the front $ R $ to move from $ R(\sigma^-) $ to $ R(\sigma) $. Index all the $ \eta^R $-particles at time $ \sigma^- $ as $ Y_i(\sigma^-) $, $ i=1,2,\ldots $.
Let us now imagine decomposing the motion of $ R $ at time $ \sigma $ into two steps: \textit{i}) from $ R(\sigma^-) $ to $ Q(\sigma) $; and \textit{ii}) from $ Q(\sigma) $ to $ R(\sigma) $. During step~(\textit{i}), the front absorbs \begin{align*} \widetilde{N} := \# \big( (R(\sigma^-),Q(\sigma)]\cap\{Y_i(\sigma^{-})\}_{i=1}^\infty \big) \end{align*} particles. Due to the condition~\eqref{eq:fluxCond:}, we must have $ \widetilde{N} > Q(\sigma)-R(\sigma^-) $, otherwise $ R $ would have stopped at or before reaching $ Q(\sigma) $ and not performed step~(\textit{ii}). Combining $ \widetilde{N} > Q(\sigma)-R(\sigma^-) $ with $ R(\sigma^-)=N^R(\sigma^-) $ (by the flux condition~\eqref{eq:fluxCond}) yields \begin{align} \label{eq:N':mono} N^R(\sigma^-)+\widetilde{N} > Q(\sigma). \end{align} Further, since $ R $ is dominated by $ Q $ up to time $ \sigma^- $, i.e.\ $ R(t) \leq Q(t) $, $ \forall t<\sigma $, the total number of particles absorbed by $ R $ \emph{up to step~(\textit{i})} cannot exceed the number of particles absorbed by $ Q $ up to time $ \sigma $, i.e.\ $ N^R(\sigma^-)+\widetilde{N} \leq N^Q(\sigma) $. Combining this with \eqref{eq:N':mono} yields $ N^Q(\sigma) > Q(\sigma) $, which holds only if $ \sigma > \tau $ by our assumption. \end{proof} We devote the rest of this section to establishing a few technical lemmas, in order to facilitate the proof in subsequent sections. The proofs of these lemmas are standard. \begin{lemma}\label{lem:cnd:cnctrtn} Letting $ \{B_{x,j}\}_{(x,j)\in\mathbb{Z}_{>0}^2} $ be mutually independent Bernoulli variables, independent of $ \eta^\text{ic} $, we consider a random variable $ X $ of the form \begin{align} \label{eq:XBer} X = \sum_{x\in\mathbb{Z}_{>0}} \sum_{j=1}^{\eta^\text{ic}(x)} B_{x,j}. \end{align} We have that for all $ r\in(0,\infty) $ and $ \zeta \geq 0$, \begin{align} \label{eq:Chernov00} & \mathbf{P} \big( |X-\zeta \big| > 2r \big) \leq 2 e^{-\frac{r^2}{3r+2\zeta}} + \mathbf{P} \big( | \mathbf{E} (X|\eta^\text{ic}) - \zeta| > r \big). \end{align} \end{lemma} \begin{proof} To simplify notations, we write $ \mathbf{E} '(\Cdot) := \mathbf{E} (\Cdot|\eta^\text{ic}) $ and $ \mathbf{P} '(\mathcal{A}) := \mathbf{E} (\mathbf{1}_\mathcal{A}|\eta^\text{ic}) $ for the conditional expectation and the conditional probability. Since $\log \mathbf{E} (e^{s B_{x,j}}) \le (e^s-1) \mathbf{E} (B_{x,j})$ for any $s,x,j$, it follows that $\log \mathbf{E} ' (e^{s X}) \le (e^s-1) \mathbf{E} ' (X)$. The inequality $(1+\delta)\log(1+\delta) -\delta \ge \delta^2/(|\delta|+2)$ for $\delta \ge -1$ then yields that \begin{align} \label{eq:Chernovv} \mathbf{P} '(|X- \mathbf{E} '(X)| > r) \leq 2\exp\Big( -\frac{r^2}{r+2 \mathbf{E} '(X)} \Big),\quad\forall r\geq 0 \end{align} (e.g.\ \cite[Theorem~4]{goemans15}, where $r = |\delta| \mathbf{E} '(X)$). Since \begin{align*} \{ |X-\zeta| > 2r\} \subseteq \{ |X- \mathbf{E} '(X)| > r, | \mathbf{E} '(X) - \zeta| \le r \} \cup \{| \mathbf{E} '(X)-\zeta| > r \} \end{align*} and \eqref{eq:Chernovv} implies that \begin{align} \label{eq:chernov:union:} \mathbf{P} '\big( |X- \mathbf{E} '(X)| > r, | \mathbf{E} '(X)-\zeta|\leq r \big) \leq 2 e^{-\frac{r^2}{3r+2\zeta}}\,, \end{align} we get \eqref{eq:Chernov00} by taking $ \mathbf{E} (\Cdot) $ on both sides of \eqref{eq:chernov:union:} followed by the union bound. \end{proof} Let $ \Delta f(x) := f(x+1) + f(x-1) - 2f(x) $ denote the discrete Laplacian. \begin{lemma}[discrete maximal principle, bounded fixed domain] \label{lem:max:} Fix $ x_1<x_2\in\mathbb{Z} $ and $ \overline{t} <\infty $.
We consider $ u(\Cdot,\Cdot): [0,\overline{t}]\times([x_1,x_2]\cap\mathbb{Z}) \to \mathbb{R} $, such that $ u(\Cdot,x) \in C([0,\overline{t}])\cap C^1((0,\overline{t})) $, for each fixed $ x\in((x_1,x_2)\cap\mathbb{Z}) $. If $ u $ solves the discrete heat equation \begin{align} \label{eq:ui:HE:} \partial_t u(t,x) = \tfrac12 \Delta u(t,x), \quad \forall t\in(0,\overline{t}), \ x_1<x<x_2, \end{align} and satisfies \begin{align} \label{eq:ui:domin:bdy:} u(0,x) \geq 0, \ \forall x \in (x_1,x_2)\cap\mathbb{Z}, \quad u(t,x_1) \geq 0, \ u(t,x_2) \geq 0, \ \forall t \in [0,\overline{t}], \end{align} then \begin{align} \label{eq:max:} u(t,x) \geq 0, \quad \forall (t,x) \in[0,\overline{t}]\times([x_1,x_2]\cap\mathbb{Z}). \end{align} \end{lemma} \begin{proof} Assume the contrary. Namely, for some fixed $ \varepsilon>0 $, letting \begin{align} \label{eq:t0} t_0 := \inf\{ t\in[0,\overline{t}] : u(t,x) \leq -\varepsilon, \text{ for some } x_1<x<x_2 \}, \end{align} we have $ t_0 \in (0,\overline{t}] $. Since $ t\mapsto u(t,x) $ is continuous, we must have $ u(t_0,x)=-\varepsilon $, for some $ x\in (x_1,x_2)\cap\mathbb{Z} $. Such $ x $ may not be unique, and we let $ x_0 := \min\{ x\in(x_1,x_2)\cap\mathbb{Z} : u(t_0,x)=-\varepsilon \} $. That is, $ t_0 $ is the first time where the function $ u(t,\Cdot) $ hits level $ -\varepsilon $, and $ x_0 $ is the left-most point where this hitting occurs. We have $ u(t_0,x_0-1) > -\varepsilon $ and $ u(t_0,x_0+1) \geq -\varepsilon $, so in particular $ \Delta u(t_0,x_0) > 0 $, and thereby $ \partial_t u(t_0,x_0) >0 $. This implies that $ u(t,x_0)< u(t_0,x_0) $, for all $ t<t_0 $ sufficiently close to $ t_0 $, which contradicts with the definition~\eqref{eq:t0} of $ t_0 $. This proves that, for any given $ \varepsilon>0 $, such $ t_0 $ does not exist, so \eqref{eq:max:} must hold. \end{proof} \begin{lemma}[discrete maximal principle, with a moving boundary] \label{lem:max} Fixing a $ \mathbb{D}^{\uparrow}_{\mathbb{Z}} $-valued function $ Q $ and $ \overline{t} <\infty $, we consider $ u_i(t,x) $, $ i=1,2 $, defined on $ \mathcal{D} := \{(t,x): t\in[0,\overline{t}], x\geq Q(t)\} $, such that \begin{enumerate}[label=\roman*)] \item \label{enu:ui:1} $ u_i(t,x) $ is continuous in $ t $ on $ \mathcal{D} $; \item $ u_i(t,x) $ is $ C^1 $ in $ t $ on $ \mathcal{D}^\circ := \{ (t,x) : t\in(0,\overline{t}), x > Q(t) \} $; \item \label{enu:ui:tailbd} \begin{align} \label{eq:u12:expbd} \limsup_{x\to\infty} \sup_{t\in[0,\overline{t}]} \log |u_i(t,x)| <\infty , \quad \forall (t,x) \in \mathcal{D}, \ i=1,2. \end{align} \end{enumerate} If $ u_1, u_2 $ solve the discrete heat equation \begin{align} \label{eq:ui:HE} \partial_t u_i(t,x) = \tfrac12 \Delta u_i(t,x), \quad \forall (t,x)\in\mathcal{D}^\circ, \end{align} and satisfy the dominance condition at $ t=0 $ and on the boundary: \begin{align} \label{eq:ui:domin:bdy} u_1(0,x) \geq u_2(0,x), \ \forall x > Q(0); \quad u_1(t,Q(t)) \geq u_2(t,Q(t)), \ \forall t \in [0,\overline{t}], \end{align} then such a dominance extends to the entire $ \mathcal{D} $: \begin{align} u_1(t,x) \geq u_2(t,x), \quad \forall (t,x) \in \mathcal{D}. \end{align} \end{lemma} \begin{proof} First, we claim that it suffices to settle the case of a \emph{fixed} boundary $ Q(t)=c $, $ \forall t \leq \overline{t} $. To see this, index all the discontinuous points of $ Q $ as $ 0<t_1<t_2<\ldots < t_n \leq \overline{t} $. 
Since the domain $ [Q(t_i),\infty) $ shrinks as $ i $ increases, once we settle this lemma for the case of a fixed boundary, applying that result within each time interval $ [t_i,t_{i+1}) $ yields the general case by induction in $ i $. Now, let us assume without loss of generality $ Q(t)=0 $, $ \forall t \leq \overline{t} $. Let $ u:= u_1-u_2 $. By \eqref{eq:u12:expbd}, there exists $ c_0<\infty $ such that $ u(t,x) \geq -c_0 e^{c_0x} $, $ \forall t\leq \overline{t} $ and $ x>0 $. With this, fixing $ x' > 0 $, we let $ c_1 := \cosh(2c_0)-1 $ and $ \widehat{u}(t,x) := c_0 \exp(2c_0x+c_1t-c_0x') $, and consider the function $ \widetilde{u}(t,x) := u(t,x) + \widehat{u}(t,x) $. It is straightforward to verify that $ \widehat{u} $ solves the discrete heat equation on $ (t,x)\in[0,\infty)\times\mathbb{Z} $, so $ \widetilde{u} $ also solves the discrete heat equation for $ t,x>0 $. Further, with $ u(0,x) \geq 0 $, $ u(t,0) \geq 0 $ and $ u(t,x') \geq -c_0 e^{c_0x'} $, $ \forall t\in[0,\overline{t}] $, $ x\in\mathbb{Z}_{>0} $, we indeed have $ \widetilde{u}(0,x), \widetilde{u}(t,0), \widetilde{u}(t,x') \geq 0 $, $ \forall t\in[0,\overline{t}], x\in (0,x')\cap\mathbb{Z} $. With these properties of $ \widetilde{u} $, we apply Lemma~\ref{lem:max:} with $ (x_1,x_2)=(0,x') $ to conclude that $ \widetilde{u}(t,x) \geq 0 $, $ \forall t\in[0,\overline{t}] $, $ x \in (0,x')\cap\mathbb{Z}_{>0} $. Consequently, \begin{align*} u(t,x) \geq -c_0 e^{2c_0x+c_1t-c_0x'} \geq - c_0 e^{2c_0x+c_1\overline{t}} e^{-c_0x'}, \end{align*} $\forall t\in[0,\overline{t}] $, $ x<x'\in \mathbb{Z}_{>0} $. Now, for fixed $ x\in\mathbb{Z}_{>0} $, sending $ x'\to\infty $, we arrive at the desired result: $ u(t,x) \geq 0 $, $ \forall t\in[0,\overline{t}] $. \end{proof} \section{Bounding the Noise Term: Proof of Proposition~\ref{prop:mgtbd}} \label{sect:mgt} Throughout this section, we fix an initial condition $ \eta^\text{ic} $ satisfying~\eqref{eq:ic:den1}, with the corresponding constants $ \gamma,a_*,C_* $. Fix $ a\in(0,1] $; throughout this section we use $ C<\infty $ to denote a generic finite constant that may change from line to line, but depends only on $ a,a_*,\gamma,C_* $. For any fixed $ (t,x) $, we let $ M^+(t,x) $ denote the number of $ \eta $-particles starting in $ (0,x] $ and ending up in $ (x,\infty) $ at $ t $. Similarly we let $ M^-(t,x) $ denote the number of $ \eta $-particles starting in $ (x,\infty) $ and ending up in $ (-\infty,x] $ at $ t $. More explicitly, labeling all the $ \eta $-particles as $ Z_1(t), Z_2(t),\ldots $, we write \begin{align} \label{eq:mgt+-} M^+(t,x) := \sum_{i=1}^\infty \mathbf{1}_\set{Z_i(t)>x} \mathbf{1}_\set{Z_i(0) \leq x}, \quad M^-(t,x) := \sum_{i=1}^\infty \mathbf{1}_\set{Z_i(t)\leq x} \mathbf{1}_\set{Z_i(0) > x}. \end{align} From the definition~\eqref{eq:mgt} of $ M(t,x) $, it is straightforward to verify that \begin{align} \label{eq:mgt:pm} M(t,x) = M^{-}(t,x) - M^{+}(t,x). \end{align} Given the decomposition~\eqref{eq:mgt:pm}, our aim is to establish a certain concentration result for $ M^\pm(t,x) $. Let $ \mathbf{P}_\text{RW} $ denote the law of a random walk $ W $ on $ \mathbb{Z} $ starting from $ 0 $, so that $ p(t,x) := \mathbf{P}_\text{RW} (W(t)=x) $ is the standard discrete heat kernel, and let \begin{align} \label{eq:erf} \Phi(t,x) := \mathbf{P}_\text{RW} (W(t)\geq x) = \sum_{y\geq x} p(t,y) \end{align} denote the corresponding tail distribution function. We expect $ M^\pm(t,x) $ to concentrate around \begin{align} \label{eq:V} V(t) := \sum_{y>0} \Phi(t,y).
\end{align} To see why, recall the definition of $ M^\pm(t,x) $ from~\eqref{eq:mgt+-}. Taking $ \mathbf{E} (\Cdot|\eta^\text{ic}) $ gives \begin{subequations} \label{eq:mgt:ex} \begin{align} \label{eq:mgt:ex-} \mathbf{E} (M^-(t,x)|\eta^\text{ic}) &= \sum_{y> x} \mathbf{P}_\text{RW} (W(t)+y\leq x) \eta^\text{ic}(y) = \sum_{y> x} \Phi(t,y-x) \eta^\text{ic}(y), \\ \label{eq:mgt:ex+} \mathbf{E} (M^+(t,x)|\eta^\text{ic}) &= \sum_{0<y\leq x} \mathbf{P}_\text{RW} ( W(t)+y> x) \eta^\text{ic}(y) = \sum_{0<y\leq x} \Phi(t,y-x) \eta^\text{ic}(y). \end{align} \end{subequations} Since the density is roughly $ 1 $ under current consideration, we approximate $ \eta^\text{ic}(y) $ with $ 1 $ in~\eqref{eq:mgt:ex-}--\eqref{eq:mgt:ex+}. Doing so in~\eqref{eq:mgt:ex-} gives exactly $ V(t) $, and doing so in~\eqref{eq:mgt:ex+} gives approximately $ V(t) $ for $ x $ that are suitably large. To prove this concentration of $ M(t,x) $, we begin by quantifying how well $ \eta^\text{ic} $ is approximated by unit density. Recall the definition of $ F $ from~\eqref{eq:ict}, and consider \begin{align} \label{eq:Gamma} \Gamma(x,y) := F(y) - F(x) = \left\{\begin{array}{l@{,}l} \sum_{z\in(x,y]} (1-\eta^\text{ic}(z)) & \text{ for } x\leq y, \\ \sum_{z\in(y,x]} (1-\eta^\text{ic}(z)) & \text{ for } x>y. \end{array}\right. \end{align} which measures the deviations of $ \eta^\text{ic} $ from unit density. We show \begin{lemma} \label{lem:Gamma} Let $ a, b,b'\in(0,1] $. There exists $ C_1=C_1(a,a_*,C_*,b,b')<\infty $, such that \begin{align*} \mathbf{P} \Big( |\Gamma(x,y)| \leq \varepsilon^{-b'} |x-y|^{\gamma+b}, \ \forall y\in\mathbb{Z}_{\geq 0}, \ x\in [0,\varepsilon^{-1-a}] \Big) \geq 1 - C_1 \exp(-\varepsilon^{-b'a_*/C_1}). \end{align*} \end{lemma} \begin{proof} To simplify notations, throughout this proof we write $ C=C(a,a_*,C_*, b,b') $, whose value may change from line to line. As $ \eta^\text{ic} $ satisfies the condition~\eqref{eq:ic:den1}, setting $ (x_1,x_2)=(x,y) $ and $ r= \varepsilon^{-b'}|x-y|^{b} $ in \eqref{eq:ic:den1}, we have \begin{align} \label{eq:Gamma:bd:1} \mathbf{P} ( |\Gamma(x,y)| > \varepsilon^{-b'}|x-y|^{\gamma+b} ) \leq C \exp(-\varepsilon^{-a_*b'}|y-x|^{a_*b}), \end{align} for all $ y \in \mathbb{Z}_{\geq 0} $. Using the elementary inequality $ \xi_1\xi_2 \geq \frac12(\xi_1+\xi_2) $, $ \forall \xi_1,\xi_2 \geq 1 $, for $ \xi_1= \varepsilon^{-a_*b'} $ and $ \xi_2=|y-x|^{a_*b} $, we obtain $ \varepsilon^{-a_*b'}|y-x|^{a_*b} \geq \frac{1}{2}(\varepsilon^{-a_*b'}+|y-x|^{a_*b}) $, for all $ y\neq x $. Using this to bound the last expression in \eqref{eq:Gamma:bd:1}, and taking the union bound of the result over $ y\in\mathbb{Z}_{\geq 0}\setminus\{x\} $, we conclude that \begin{align} \notag \mathbf{P} \Big( |\Gamma(x,y)| \leq \varepsilon^{-b'}&|x-y|^{\gamma+b}, \ \forall y\in\mathbb{Z}_{\geq 0}\setminus\{x\} \Big) \\ \label{eq:Gamma:bd:2} &\geq 1 - C \exp(-\tfrac12\varepsilon^{-b'a_*}) \sum_{i=1}^\infty \exp(-\tfrac12i^{a_*b}) \geq 1 - C \exp(-\tfrac12\varepsilon^{-b'a_*}). \end{align} Since $ \Gamma(x,x)=0 $, the event in \eqref{eq:Gamma:bd:2} automatically extends to all $ y\in\mathbb{Z}_{\geq 0} $. With this, taking the union bound of \eqref{eq:Gamma:bd:2} over $ x\in[0,\varepsilon^{-1-a}] $, we obtain \begin{align*} \mathbf{P} \Big( |\Gamma(x,y)| \leq \varepsilon^{-b'}&|x-y|^{\gamma+b}, \ \forall y\in\mathbb{Z}_{\geq 0}, \ \forall x\in[0,\varepsilon^{-1-a}] \Big) \\ &\geq 1 - C \varepsilon^{-2} \exp(-\tfrac12\varepsilon^{-b'a_*}) \geq 1 - C \exp(-\varepsilon^{-b'a_*/C}).
\end{align*} This concludes the desired result. \end{proof} Lemma~\ref{lem:Gamma} gives the relevant estimate on how $ \eta^\text{ic} $ is approximated by unit density. Based on this, we proceed to show the concentration of $ \mathbf{E} (M^\pm(t,x)|\eta^\text{ic}) $. For the kernel $ p(t,x) $, we have the following standard estimate (see \cite[Eq.(A.13)]{dembo16}) \begin{align} \label{eq:hk:bd} p(t,x) \leq C (t+1)^{-\frac12} e^{-\frac{|x|}{\sqrt{t+1}}}, \end{align} and hence \begin{align} \label{eq:erf:bd} \Phi(t,x) \leq C e^{-\frac{[x]_+}{\sqrt{t+1}}}. \end{align} Recall the definition of $ \Xi_\varepsilon(a) $ from \eqref{eq:Xi}. \begin{lemma} \label{lem:mgtEx'} Let $ a\in(0,1] $. There exists $ C=C(a)<\infty $ such that \begin{align} \label{eq:mgtEx':bd} \mathbf{P} \Big( \big| \mathbf{E} (M^\pm(t,x)|\eta^\text{ic})-V(t)\big| \leq \varepsilon^{-a}(t+1)^{\frac{\gamma}{2}} \Big) \geq 1 - C\exp(-\varepsilon^{-a/C}), \end{align} for any $ (t,x)\in \Xi_\varepsilon(a) $ and for all $ \varepsilon\in(0,1] $. \end{lemma} \begin{remark} We will prove~\eqref{eq:mgtEx':bd} only for $\varepsilon \in (0, 1/C]$. This suffices because, once~\eqref{eq:mgtEx':bd} holds for all $ \varepsilon $ small enough, by increasing the constant $C$ in \eqref{eq:mgtEx':bd}, that statement trivially extends to all $\varepsilon\in(0,1]$. The same convention will be used in the sequel, when statements of such form are made for all $ \varepsilon\in(0,1] $ but only proven for small enough $ \varepsilon $. \end{remark} \begin{conv} \label{conv:omitsp} To simplify the presentation, in the course of proving Lemma~\ref{lem:mgtEx'}, we omit finitely many events $ \mathcal{E}_i $, $ i=1,2,\ldots $, of probability $ \leq C\exp(-\varepsilon^{-a/C}) $, sometimes without explicitly stating it. Similar conventions are adopted in proving other statements in the following, where we omit events of small probability, of the form permitted in the corresponding statement. \end{conv} \begin{proof} Fixing $ (t,x)\in\Xi_\varepsilon(a) $, we consider first $ \mathbf{E} (M^-(t,x)|\eta^\text{ic}) $. On the r.h.s.\ of~\eqref{eq:mgt:ex-}, write $ \eta^\text{ic}(y)=1-(1-\eta^\text{ic}(y)) $ to separate the contributions of the average density $ 1 $ and the fluctuation. For the former we have $ \sum_{y> x} \Phi(t,y-x) = V(t) $ (as defined in~\eqref{eq:V}). For the latter, writing $ 1-\eta^\text{ic}(y) = \Gamma(x,y)-\Gamma(x,y-1)$ gives \begin{align} \label{eq:mgt:Ex':} \mathbf{E} (M^-(t,x)|\eta^\text{ic}) - V(t) = -\sum_{y> x} \Phi(t,y-x) (\Gamma(x,y)-\Gamma(x,y-1)). \end{align} To bound the r.h.s.\ of \eqref{eq:mgt:Ex':}, we apply Lemma~\ref{lem:Gamma} with $ b=\frac{a}{2} $ and $ b'=\frac{a}{4} $ to conclude that \begin{align} \label{eq:mgt:Gammabd} |\Gamma(x,y)| \leq \varepsilon^{-\frac{a}{4}}|x-y|^{\gamma+\frac{a}{2}}, \quad y\in\mathbb{Z}_{\geq 0}, \ x\in [0,\varepsilon^{-1-a}]. \end{align} Here~\eqref{eq:mgt:Gammabd} holds outside an event of probability at most $ C\exp(-\varepsilon^{-a/C}) $. As declared in Convention~\ref{conv:omitsp}, we will often omit events of probability $ \leq C\exp(-\varepsilon^{-a/C}) $ \emph{without} explicitly stating it.
With $ \Gamma(x,x)=0 $, we have the following summation by parts formulas, \begin{align} \label{eq:sby1} \sum_{0< y\leq x} f(y) (\Gamma(x,y)-\Gamma(x,y-1)) &= \sum_{0< y\leq x} (f(y)-f(y+1)) \Gamma(x,y) - f(1) \Gamma(x,0), \\ \label{eq:sby2} \sum_{y> x} f(y) (\Gamma(x,y)-\Gamma(x,y-1)) &= \sum_{y> x} (f(y)-f(y+1)) \Gamma(x,y), \end{align} for all $ f $ such that \begin{align} \label{eq:sby:cdn} \sum_{y\in\mathbb{Z}} |f(y)||\Gamma(x,y)|<\infty, \quad \sum_{y\in\mathbb{Z}} |f(y+1)||\Gamma(x,y)| < \infty. \end{align} Apply the formula \eqref{eq:sby2} with $ f(y) = \Phi(t,y-x) $, where the summability condition~\eqref{eq:sby:cdn} holds by \eqref{eq:erf:bd} and \eqref{eq:mgt:Gammabd}. With $ \Phi(t,y-x)-\Phi(t,y-x+1) = p(t,y-x) $ we obtain \begin{align} \label{eq:mgt:Ex'::} \mathbf{E} (M^-(t,x)|\eta^\text{ic}) - V(t) = -\sum_{y> x} p(t,y-x) \Gamma(x,y). \end{align} On the r.h.s., using \eqref{eq:hk:bd} to bound the discrete heat kernel, and using \eqref{eq:mgt:Gammabd} to bound $ |\Gamma(x,y)| $, we obtain \begin{align} \label{eq:mgt:Ex':::} | \mathbf{E} (M^-(t,x)|\eta^\text{ic}) - V(t) | \leq C \varepsilon^{-\frac{a}{4}} \sum_{y\in\mathbb{Z}} \frac{|x-y|^{\gamma+a/2}}{\sqrt{t+1}} e^{-\frac{|x-y|}{\sqrt{t+1}}} \leq C \varepsilon^{-\frac{a}{4}}(t+1)^{\frac{a}{4}+\frac{\gamma}{2}}. \end{align} Further, with $ (t,x)\in\Xi_\varepsilon(a) $, we have $ t \leq \varepsilon^{-1-\gamma-a} $. Using this to bound $ (t+1)^{\frac{a}{4}} $ in \eqref{eq:mgt:Ex':::}, we obtain \begin{align*} | \mathbf{E} (M^-(t,x)|\eta^\text{ic}) - V(t) | \leq C \varepsilon^{-\frac{a}{4}} \varepsilon^{-\frac{a}{4}(1+\gamma+a)} (t+1)^{\frac{\gamma}{2}} \leq \varepsilon^{-a} (t+1)^{\frac{\gamma}{2}}, \end{align*} for all $ \varepsilon $ small enough. This concludes the desired result~\eqref{eq:mgtEx':bd} for $ \mathbf{E} (M^-(t,x)|\eta^\text{ic}) $. As for $ \mathbf{E} (M^+(t,x)|\eta^\text{ic}) $, similarly to \eqref{eq:mgt:Ex':} (now with $ 1-\eta^\text{ic}(y) = \Gamma(x,y-1)-\Gamma(x,y) $ for $ 0<y\leq x $) we have \begin{align} \label{eq:mgt+:Ex':} \mathbf{E} (M^+(t,x)|\eta^\text{ic}) = V_1(t,x) + \sum_{0<y\leq x} \Phi(t,x+1-y) (\Gamma(x,y)-\Gamma(x,y-1)), \end{align} where $ V_1(t,x) := \sum_{0<y\leq x} \Phi(t,x+1-y) = \sum_{0<z\leq x} \Phi(t,z) $. Let \begin{align} \label{eq:V2} V_2(t,x) := \sum_{z>x} \Phi(t,z). \end{align} In \eqref{eq:mgt+:Ex':}, we write $ V_1(t,x) = V(t)-V_2(t,x) $ and apply the summation by parts formula~\eqref{eq:sby1} with $ f(y)=\Phi(t,x+1-y) $ to get \begin{align} \label{eq:mgt+V} \mathbf{E} (M^+(t,x)|\eta^\text{ic}) - V(t) = -V_2(t,x) - \Phi(t,x)\Gamma(x,0) - \sum_{0<y\leq x} p(t,x-y) \Gamma(x,y). \end{align} The last term in \eqref{eq:mgt+V} is of the same form as the r.h.s.\ of \eqref{eq:mgt:Ex'::}, so, applying the same argument following~\eqref{eq:mgt:Ex'::}, here we have \begin{align} \label{eq:mgt:last} \sum_{0<y\leq x} \big|p(t,x-y) \Gamma(x,y) \big| \leq C \varepsilon^{-\frac{a}{4}} (t+1)^{\frac{\gamma}{2}+\frac{a}{4}} \leq \tfrac12 \varepsilon^{-a} (t+1)^{\frac{\gamma}{2}}, \end{align} for all $ \varepsilon $ small enough. Next, with $ V_2(t,x) $ defined in~\eqref{eq:V2}, by \eqref{eq:erf:bd} we have $ |V_2(t,x)| \leq C \sqrt{t+1} \exp(-\frac{x}{\sqrt{t+1}}) $. Further, since $ (t,x)\in\Xi_\varepsilon(a) $ we have $ t\leq \varepsilon^{-1-\gamma-a} $ and $ x/\sqrt{t} \geq \varepsilon^{-a} $, so \begin{align} \label{eq:mgt:V2} |V_2(t,x)| \leq C \varepsilon^{-\frac32} e^{-\varepsilon^{-a}} \leq C. 
\end{align} To bound the term $ \Phi(t,x)\Gamma(x,0) $, combining \eqref{eq:erf:bd} and \eqref{eq:mgt:Gammabd}, followed by using $ x\leq \varepsilon^{-1-a} $ and $ x/\sqrt{t} \geq \varepsilon^{-a} $, we obtain \begin{align} \label{eq:mgt:erfGamma} |\Phi(t,x)\Gamma(x,0)| \leq C e^{-\frac{x}{\sqrt{t+1}}} \varepsilon^{-\frac{a}{4}}x^{\gamma+\frac{a}{2}} \leq C e^{-\varepsilon^{-a}} \varepsilon^{-3} \leq C. \end{align} Inserting \eqref{eq:mgt:last}--\eqref{eq:mgt:erfGamma} into \eqref{eq:mgt+V}, we conclude the desired result~\eqref{eq:mgtEx':bd} for $ \mathbf{E} (M^+(t,x)|\eta^\text{ic}) $, for all $ \varepsilon $ small enough. \end{proof} Having established concentration of $ \mathbf{E} (M^\pm(t,x)|\eta^\text{ic}) $ in Lemma~\ref{lem:mgtEx'}, we proceed to show the concentration of $ M^\pm(t,x) $. \begin{lemma} \label{lem:mgt:cntr} Let $ a\in(0,1] $ be fixed as in the preceding. There exists $ C<\infty $, such that \begin{align} \label{eq:mgt:cntr} \mathbf{P} \Big( \big|M^\pm(t,x)-V(t)\big| \leq 2\varepsilon^{-a} (t+1)^{\frac{\gamma}{2}\vee\frac{1}{4}} \Big) \geq 1 - C\exp(-\varepsilon^{-a/C}), \end{align} for any fixed $ (t,x)\in\Xi_\varepsilon(a) $ and for all $ \varepsilon\in(0,1] $. \end{lemma} \begin{proof} Since $M^\pm(t,x) $ is of the form \eqref{eq:XBer}, setting $r_{\varepsilon,t} := \varepsilon^{-a}(t+1)^{\frac{1}{4}\vee\frac{\gamma}{2}}$, we obtain upon applying \eqref{eq:Chernov00} for $X= M^\pm(t,x)$, $r = r_{\varepsilon,t}$ and $\zeta=V(t)$ that \begin{align} \label{eq:mg:chernov001} \mathbf{P} ( |M^\pm(t,x)-V(t)| > & 2 r_{\varepsilon,t} ) \leq 2 e^{ -\frac{ r_{\varepsilon,t}^2 } {3 r_{\varepsilon,t} +2 V(t)}} + \mathbf{P} \Big( | \mathbf{E} (M^\pm(t,x)|\eta^\text{ic})-V(t)| > r_{\varepsilon,t} \Big). \end{align} Lemma~\ref{lem:mgtEx'} bounds the right-most term in \eqref{eq:mg:chernov001} by $ C\exp(-\varepsilon^{-a/C}) $. Further, summing \eqref{eq:erf:bd} over $ x>0 $ yields $ V(t) \leq C\sqrt{t+1} $. Hence, \begin{align*} \exp\Big( -\frac{ \varepsilon^{-2a}(t+1)^{\frac{1}{2}\vee\gamma} } {3 \varepsilon^{-a}(t+1)^{\frac14\vee\frac{\gamma}{2}}+2V(t)} \Big) \leq C \exp(-\tfrac1C\varepsilon^{-a}) \leq C \exp(-\varepsilon^{-a/C})\,, \end{align*} which thereby bounds the other term on the r.h.s.\ of \eqref{eq:mg:chernov001} and consequently establishes \eqref{eq:mgt:cntr}. \end{proof} Given Lemma~\ref{lem:mgt:cntr}, proving Proposition~\ref{prop:mgtbd} amounts to extending the pointwise bound \eqref{eq:mgt:cntr} to a bound that holds simultaneously for all relevant $ (t,x) $. This requires a technical lemma: \begin{lemma} \label{lem:eta:J} Let $ J(i,x) $ denote the total number of jumps of the $ \eta $-particles across the bond $ (x,x+1) $ within the time interval $ [i,i+1] $, let $ a\in(0,1] $ be fixed as in the preceding, and let $ b\in(0,1] $. We have \begin{align} & \label{eq:eta:bd} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \eta(t,x) \leq \varepsilon^{-b}, \ \forall t\in[0,\varepsilon^{-1-\gamma-a}], \ x\in[0,\varepsilon^{-1-a}] \Big) =1, \\ \label{eq:J:bd} & \lim_{\varepsilon\to 0} \mathbf{P} \Big( J(i,x) \leq \varepsilon^{-b}, \ \forall i\in [0,\varepsilon^{-1-a}], \ x\in[-\varepsilon^{-1-a},\varepsilon^{-1-a}] \Big) =1. \end{align} \end{lemma} \noindent{}We postpone the proof of Lemma~\ref{lem:eta:J} until the end of this section, and continue to finish the proof of Proposition~\ref{prop:mgtbd}. 
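Before doing so, we record for the reader's convenience an elementary bound that was used in \eqref{eq:mgt:Ex':::} and will be used again in \eqref{eq:u1:bd:1} below: for any fixed $ \alpha\geq 0 $ there exists $ C(\alpha)<\infty $ such that
\begin{align*}
\sum_{y\in\mathbb{Z}} |x-y|^{\alpha}\, e^{-\frac{|x-y|}{\sqrt{t+1}}} \leq C(\alpha)\, (t+1)^{\frac{\alpha+1}{2}}, \qquad \forall x\in\mathbb{Z}, \ t\geq 0,
\end{align*}
which follows from comparing the sum with the integral $ \int_{\mathbb{R}} |z|^{\alpha} e^{-|z|/\sqrt{t+1}}\,dz $ (together with the maximal term of the summand).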
\begin{proof}[Proof of Proposition~\ref{prop:mgtbd}] Given the decomposition~\eqref{eq:mgt:pm} of $ M(t,x) $, by Lemma~\ref{lem:mgt:cntr} we have that \begin{align*} \mathbf{P} ( |M(t,x)| \leq 4\varepsilon^{-a}(t+1)^{\frac14\vee\frac{\gamma}{2}} ) \geq 1 - C e^{-\varepsilon^{-a/C}}, \end{align*} for any fixed $ (t,x)\in\Xi_\varepsilon(a) $. Take the union bound of this over all $ (i,x) \in \Xi_\varepsilon(a) $, where $ i\in\mathbb{Z}_{\geq 0} $. As this set is only polynomially large in $ \varepsilon^{-1} $, we obtain \begin{align} \label{eq:mgtbd:i} \lim_{\varepsilon\to 0} \mathbf{P} \big( |M(i,x)| \leq 4\varepsilon^{-a}(i+1)^{\frac14\vee\frac{\gamma}{2}}, \ \forall (i,x)\in\Xi_\varepsilon(a) \big) = 1. \end{align} Given \eqref{eq:mgtbd:i}, the next step is to derive a continuity estimate of $ t\mapsto M(t,x) $. Recall the definition of $ J(i,x) $ from Lemma~\ref{lem:eta:J}. With $ M^\pm(t,x) $ defined in \eqref{eq:mgt+-}, we have that \begin{align*} \sup_{t\in[i,i+1]} |M^\pm(t,x)-M^\pm(i,x)| \leq J(i,x). \end{align*} This together with \eqref{eq:mgt:pm} yields \begin{align*} \sup_{t\in[i,i+1]} |M(t,x)-M(i,x)| \leq 2J(i,x). \end{align*} Combining this with \eqref{eq:J:bd}, we obtain the following continuity estimate \begin{align*} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \sup_{t\in[i,i+1]} |M(t,x)-M(i,x)| \leq 2\varepsilon^{-a}, \ \forall (i,x)\in\Xi_\varepsilon(a) \Big) = 1. \end{align*} Using this continuity estimate in \eqref{eq:mgtbd:i}, we conclude the desired result~\eqref{eq:mgt:bd}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:eta:J}] Instead of showing~\eqref{eq:eta:bd} directly, we first establish a weaker statement \begin{align} \label{eq:eta:bd:} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \eta(i,x) \leq 4 \varepsilon^{-\frac{b}{2}}, \ \forall i\in[0,\varepsilon^{-1-\gamma-a}], \ x\in[0,\varepsilon^{-1-a}] \Big) =1, \end{align} where time takes integer values $ i $. Since $ \eta $-particles perform independent random walks (starting from $ \eta^\text{ic} $), for each fixed $ (i,x) $, the random variable $ \eta(i,x) $ is of the form \eqref{eq:XBer}. This being the case, applying \eqref{eq:Chernov00} with $ X=\eta(i,x) $, $ \zeta=0 $ and $ r=2\varepsilon^{-\frac{b}{2}} $, we obtain \begin{align} \label{eq:Jeta:Chernov} \mathbf{P} \big( \eta(i,x) > 4\varepsilon^{-\frac{b}{2}} \big) \leq 2 \exp(-\tfrac23\varepsilon^{-\frac{b}{2}}) + \mathbf{P} ( \mathbf{E} (\eta(i,x)|\eta^\text{ic}) > 2\varepsilon^{-\frac{b}{2}} ). \end{align} We next bound the last term in~\eqref{eq:Jeta:Chernov} that involves $ \mathbf{E} (\eta(i,x)|\eta^\text{ic}) $. As $ \eta $-particles perform independent random walks, we have the following expression for the conditional expectation: \begin{align} \label{eq:etaEx'} \mathbf{E} (\eta(i,x)|\eta^\text{ic}) = \sum_{y\in\mathbb{Z}} p(i,x-y)\eta^\text{ic}(y). \end{align} Write $ \eta^\text{ic}(y) \leq 1+|F(y)-F(y-1)| \leq \varepsilon^{-\frac{b}{2}}+|F(y)-F(y-1)|, $ and apply~\eqref{eq:ic:den1} for $ (x_1,x_2)=(y-1,y) $ and $ r = \varepsilon^{-\frac{b}{2}} $. We obtain \begin{align} \label{eq:Jeta:etaic:} \mathbf{P} ( \eta^\text{ic}(y) \leq 2\varepsilon^{-\frac{b}{2}} ) \geq 1 - C\exp(-\varepsilon^{-b/C}). \end{align} Taking union bound of~\eqref{eq:Jeta:etaic:} over $ y\in (0,\varepsilon^{-1-a}] $ yields \begin{align} \label{eq:Jeta:etaic} \mathbf{P} \big( \eta^\text{ic}(y) \leq 2\varepsilon^{-\frac{b}{2}}, \ \forall y \in (0,\varepsilon^{-1-a}] \big) \geq 1 - C\varepsilon^{-2}e^{-\varepsilon^{-b/C}} \geq 1 - Ce^{-\varepsilon^{-b/C}}. 
\end{align} Use \eqref{eq:Jeta:etaic} to bound $ \eta^\text{ic}(y) $ on the r.h.s.\ of \eqref{eq:etaEx'}, followed by using $ \sum_{y\in\mathbb{Z}} p(i,x-y) =1 $. We obtain \begin{align} \label{eq:eta':bd} \mathbf{P} \big( \mathbf{E} (\eta(i,x)|\eta^\text{ic}) > 2\varepsilon^{-\frac{b}{2}} \big) \leq Ce^{-\varepsilon^{-b/C}}. \end{align} Inserting~\eqref{eq:eta':bd} into~\eqref{eq:Jeta:Chernov}, and taking the union bound of the result over $i\in[0,\varepsilon^{-1-\gamma-a}] $, $ x\in[0,\varepsilon^{-1-a}] $ (a set of polynomial size $ \varepsilon^{-C} $), we conclude~\eqref{eq:eta:bd:}. Passing from~\eqref{eq:eta:bd:} to~\eqref{eq:eta:bd} amounts to bounding the change $ \eta(t,x)-\eta(i,x) $ in number of particles within $ t\in[i,i+1] $. Indeed, such a change is encoded in the flux across the bonds $ (x-1,x) $ and $ (x,x+1) $, and \begin{align} \label{eq:changinflux} \sup_{t\in[i,i+1]} \big( \eta(t,x) -\eta(i,x) \big) \leq J(i,x-1) + J(i,x). \end{align} That is, the change in number of particles is controlled by $ J $. Given~\eqref{eq:changinflux}, let us first establish the bound~\eqref{eq:J:bd} on $ J $. Fixing $ i $, for each $ y\in\mathbb{Z} $, we order the $ \eta $-particles at time $ t=i $ at site $ y $ as $ X_{y,1}(i),X_{y,2}(i),\ldots, X_{y,\eta(i,y)}(i) $, and consider the event $ \mathcal{A}(y,j;i,x) $ that the $ X_{y,j} $ particle ever jumps across the bond $ (x,x+1) $ within the time interval $ [i,i+1] $, i.e. \begin{align} \mathcal{A}(y,j;i,x) := \left\{\begin{array}{l@{,}l} \{ \sup_{t\in[i,i+1]} X_{y,j}(t) \geq x+1 \} & \text{ for } y\leq x, \\ \{ \inf_{t\in[i,i+1]} X_{y,j}(t) \leq x \} & \text{ for } y\geq x+1. \end{array} \right. \end{align} Under these notations, we have \begin{align} \label{eq:J:Aexpress} J(i,x) = \sum_{y\in\mathbb{Z}} \sum_{j=1}^{\eta(i,y)} \mathbf{1}_{\mathcal{A}(y,j;i,x)}. \end{align} This is a random variable of the form~\eqref{eq:XBer}. Applying \eqref{eq:Chernov00} with $ X=J(i,x) $, $ \zeta=0 $ and $ r=\varepsilon^{-\frac{2b}{3}} $, we obtain \begin{align} \label{eq:Jeta:Chernov:J} \mathbf{P} \big( J(i,x) > 2\varepsilon^{-\frac{2b}{3}} \big) \leq C\exp(-\varepsilon^{-\frac{b}{C}}) + \mathbf{P} ( \mathbf{E} (J(i,x)|\eta^\text{ic}) > \varepsilon^{-\frac{2b}{3}} ). \end{align} We next bound the last term in~\eqref{eq:Jeta:Chernov:J} that involves $ \mathbf{E} (J(i,x)|\eta^\text{ic}) $. To this end, fix $ i\in[0,\varepsilon^{-1-\gamma-a}] $, and view $ \eta(i+t,\Cdot) := \widetilde{\eta}(t,\Cdot) $, $ t\geq 0 $, as a free particle system starting from $ \widetilde{\eta}^\text{ic}(\Cdot)=\eta(i,\Cdot) $. Since $ \{\mathcal{A}(y,j;i,x)\}_{y,j} $ and $ \widetilde{\eta}^\text{ic} $ are independent, taking the conditional expectation $ \mathbf{E} (\Cdot|\widetilde{\eta}^\text{ic}) $ in~\eqref{eq:J:Aexpress} yields \begin{align} \label{eq:J:bd:} \mathbf{E} ( J(i,x)| \widetilde{\eta}^\text{ic} ) = \sum_{y\in\mathbb{Z}} \widetilde{\eta}^\text{ic}(y) \mathbf{P} (\mathcal{A}(y,1;i,x)). \end{align} Let $ \Psi(t,x) $ denote the probability that a random walk $ W $ starting from $ 0 $ ever reaches $ x $ within the time interval $ [0,t] $: \begin{align} \label{eq:barErf} \Psi(t,x) := \mathbf{P}_\text{RW} \big( W(s)=x, \ \text{for some } s\leq t \big). \end{align} We have $ \mathbf{P} (\mathcal{A}(y,1;i,x)) = \Psi(1,|y-x|+1) $. Further, by the reflection principle, $ \Psi(t,x) \leq 2 \Phi(t,x) $, $ \forall x \geq 0 $. This together with \eqref{eq:erf:bd} yields the bound \begin{align} \label{eq:erff:bd} \Psi(t,x) \leq C e^{-\frac{[x]_+}{\sqrt{t+1}}}. 
\end{align} Inserting this bound into \eqref{eq:J:bd:}, we obtain \begin{align*} \mathbf{E} ( J(i,x)| \widetilde{\eta}^\text{ic} ) = \sum_{y\in\mathbb{Z}} \widetilde{\eta}^\text{ic}(y) \Psi(1,|y-x|+1) \leq C\sum_{y\in\mathbb{Z}} \widetilde{\eta}^\text{ic}(y) e^{-\frac12 |y-x|}. \end{align*} Combining this with~\eqref{eq:eta':bd} yields $ \mathbf{P} ( \mathbf{E} ( J(i,x)| \widetilde{\eta}^\text{ic} ) \leq C\varepsilon^{-\frac{b}{2}} ) \geq 1 - C\exp(-\varepsilon^{-b/C}). $ Use this to bound the last term in~\eqref{eq:Jeta:Chernov:J} (note that $ C \varepsilon^{-\frac{b}{2}} < \varepsilon^{-\frac{2b}{3}} $, for all $ \varepsilon $ small enough), and take the union bound of the result over $i\in[0,\varepsilon^{-1-\gamma-a}] $, $ x\in[0,\varepsilon^{-1-a}] $. We obtain \begin{align} \label{eq:J:bd:::} \mathbf{P} \big( J(i,x) \leq 2\varepsilon^{-\frac{2b}{3}}, \ \forall \, i\in[0,\varepsilon^{-1-\gamma-a}], \, x\in[0,\varepsilon^{-1-a}] \big) \geq 1 - C\exp(-\varepsilon^{-b/C}). \end{align} This in particular concludes~\eqref{eq:J:bd}. Returning to showing~\eqref{eq:eta:bd}, we combine~\eqref{eq:J:bd:::} with~\eqref{eq:changinflux} to obtain \begin{align*} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \sup_{t\in[i,i+1]} \eta(t,x) \leq \eta(i,x) + 4\varepsilon^{-\frac{2b}{3}}, \ \forall i \in [0,\varepsilon^{-1-\gamma-a}], \, x\in [0,\varepsilon^{-1-a}] \Big) = 1. \end{align*} Combine this with~\eqref{eq:eta:bd:}, and use $ 4 \varepsilon^{-\frac{b}{2}}+ 4\varepsilon^{-\frac{2b}{3}} \leq \varepsilon^{-b} $, for all $ \varepsilon $ small enough. We obtain~\eqref{eq:eta:bd}. \end{proof} \section{Boundary Layer Estimate: Proof of Proposition~\ref{prop:blt}} \label{sect:blt} As in Section~\ref{sect:mgt}, we fix an initial condition $ \eta^\text{ic} $ satisfying~\eqref{eq:ic:den1}, with the corresponding constants $ \gamma,a_*,C_* $. Recall that $ \gamma'\in(\frac{\gamma+1}{2},1) $ is a fixed parameter in the definitions~\eqref{eq:Lup}--\eqref{eq:Llw} of $ \overline{L}_{t_0,x_0,v} $ and $ \underline{L}_{t_0,x_0,v} $. Fixing further \begin{align} \label{eq:a:blt} 0<a <(\gamma'-\tfrac{1+\gamma}{2})\wedge(1+\gamma)\wedge\tfrac{\gamma}{2}, \end{align} throughout this section we use $ C<\infty $ to denote a generic finite constant, that may change from line to line, but depends only on $ a,a_*,\gamma,\gamma',C_* $. Recall the definitions of $ \Sigma_\varepsilon(a) $ and $ \widetilde{\Sigma}_\varepsilon(a) $ from \eqref{eq:tv:cnd}--\eqref{eq:supdiff} and \eqref{eq:tv:cnd:}. The first step is to establish the concentration of the conditional expectations $ \mathbf{E} (G^{\overline{L}_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) $, $ \mathbf{E} (G^{\underline{L}_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) $ and $ \mathbf{E} (G^{L_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) $. \begin{lemma}\label{lem:blt} \begin{enumerate}[label=(\alph*)] \item[] \item \label{enu:blt:<:} There exists $ C<\infty $ such that \begin{align} \label{eq:blt:<:} \mathbf{P} \Big( \mathbf{E} (G^{L_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) \leq \varepsilon^{-a}v^{-1} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \end{align} for all $ (t_0,x_0,v) \in \widetilde{\Sigma}_\varepsilon(a) $. 
\item \label{enu:blt:} There exists $ C<\infty $, such that \begin{align} \label{eq:blt:cnt:up:} \mathbf{P} \Big( \big| \mathbf{E} (G^{\overline{L}_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) - \tfrac{1}{2v} \big| \leq v^{\frac1C-1} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \\ \label{eq:blt:cnt:lw:} \mathbf{P} \Big( \big| \mathbf{E} (G^{\underline{L}_{t_0,x_0,v}}(t_0)|\eta^\text{ic}) - \tfrac{1}{2v} \big| \leq v^{\frac1C-1} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \end{align} for all $ (t_0,x_0,v) \in \Sigma_\varepsilon(a) $. \end{enumerate} \end{lemma} \begin{proof}[Proof of \ref{enu:blt:<:}] Fixing $ (t_0,x_0,v) \in \widetilde{\Sigma}_\varepsilon(a) $, throughout this proof we omit the dependence on these variables, writing $ L:= L_{t_0,x_0,v} $. The proof is carried out in steps. \noindent\textbf{Step~1: setting up a discrete PDE.} Consider $ u(t,x) := \mathbf{E} (\eta(t,x)-\eta^L(t,x)|\eta^\text{ic}) $, which we view as a function of $ (t,x) $. Taking $ \mathbf{E} (\Cdot|\eta^\text{ic}) $ on both sides of \eqref{eq:blt}, we express $ \mathbf{E} (G^L(t_0)|\eta^\text{ic}) $ as the mass of the function $ u $ over the region $ x>L(t_0) $ at time $ t_0 $, i.e., \begin{align} \label{eq:blt:ex} \mathbf{E} (G^L(t_0)|\eta^\text{ic}) = \sum_{x > L(t_0)} u(t_0,x). \end{align} Given \eqref{eq:blt:ex}, proving~\eqref{eq:blt:<:} amounts to analyzing the function $ u $. We do this by studying the underlying discrete \ac{PDE} of $ u $. To set up this PDE, we decompose $ u $ into the difference of $ u_1(t,x) := \mathbf{E} (\eta(t,x)|\eta^\text{ic}) $ and $ u_2(t,x) := \mathbf{E} (\eta^{L}(t,x)|\eta^\text{ic}) $. Recall that $ \Delta f(x) := f(x+1)+f(x-1) - 2f(x) $ denotes the discrete Laplacian. Since $ \eta $ and $ \eta^L $ are particle systems consisting of independent random walks with possible absorption, and since the boundary $ L $ is deterministic, $ u_1 $ and $ u_2 $ satisfy the discrete heat equation with the relevant boundary condition as follows: \begin{align} \label{eq:u1PDE} & \left\{ \begin{array}{l@{,}l} \partial_t u_1(t,x) = \tfrac12 \Delta u_1(t,x) &\quad\forall x \in \mathbb{Z}, \\ u_1(0,x) = \eta^\text{ic}(x) &\quad \forall x \in \mathbb{Z}, \end{array} \right. \\ \label{eq:u2PDE} & \left\{ \begin{array}{l@{,}l} \partial_t u_2(t,x) = \tfrac12 \Delta u_2(t,x) &\quad \forall x > L(t), \\ u_2(t,x) = 0 &\quad\forall x \leq L(t), \\ u_2(0,x) = \eta^\text{ic}(x) &\quad\forall x \in \mathbb{Z}. \end{array} \right. \end{align} As $ u(t,x)=u_1(t,x)-u_2(t,x) $, taking the difference of \eqref{eq:u1PDE}--\eqref{eq:u2PDE}, and focusing on the relevant region $ x \geq L(t) $, we obtain the following discrete \ac{PDE} for $ u $: \begin{align}\label{eq:uPDE} \left\{ \begin{array}{l@{,\quad}l} \partial_t u(t,x) = \tfrac12 \Delta u(t,x) &\forall x > L(t), \\ u(t,L(t)) = u_1(t,L(t))& \\ u(0,x) = 0& \forall x \geq L(0). \end{array} \right. \end{align} \noindent\textbf{Step~2: estimating the boundary condition.} In order to analyze the \ac{PDE} \eqref{eq:uPDE}, here we estimate the boundary condition $ u_1(t,L(t)) $. Recall that $ p(t,x) $ denotes the standard discrete heat kernel. Since $ \eta $-particles perform independent random walks on $ \mathbb{Z} $, we have \begin{align} \label{eq:u1} u_1(t,x) := \mathbf{E} (\eta(t,x)|\eta^\text{ic}) = \sum_{y>0} p(t,x-y) \eta^\text{ic}(y). \end{align} For each term in the sum of \eqref{eq:u1}, write $ \eta^\text{ic}(y) = 1+ (\eta^\text{ic}(y)-1) $, and split the sum into $ \sum_{y >0 } p(t,x-y) $ and $ \sum_{y >0 } p(t,x-y)(\eta^\text{ic}(y)-1) $ accordingly. 
Rewriting the first sum as \begin{align*} \sum_{y >0 } p(t,x-y) = \sum_{y\in\mathbb{Z}} p(t,x-y) - \Phi(t,x) = 1-\Phi(t,x), \end{align*} we obtain \begin{align} \label{eq:u1:dcmp} u_1(t,x) &= 1 - \Phi(t,x) + \widetilde{u}_1(t,x), \end{align} where $ \widetilde{u}_1(t,x) := \sum_{y>0} p(t,x-y)(\eta^\text{ic}(y)-1) $. Given the expression~\eqref{eq:u1:dcmp}, we proceed to bound the term $ \widetilde{u}_1(t,x) $. Recalling the definition of $ \Gamma(x,y) $ from \eqref{eq:Gamma}, we write $ 1-\eta^\text{ic}(y) = \Gamma(x,y) - \Gamma(x,y-1) $ and express $ \widetilde{u}_1(t,x) $ as \begin{align} \label{eq:tilu1:} \widetilde{u}_1(t,x) = -\sum_{y>0} p(t,x-y) (\Gamma(x,y)-\Gamma(x,y-1)). \end{align} Applying Lemma~\ref{lem:Gamma} with $ b=\frac{1-\gamma}{2} $ and $ b'=\tfrac19 a(1-\gamma) $, after ignoring events of small probability $ \leq C\exp(-\varepsilon^{-a/C}) $ (following Convention~\ref{conv:omitsp}), we have \begin{align} \label{eq:blt:Gammabd} |\Gamma(x,y)| \leq \varepsilon^{-\frac19 a(1-\gamma)} |x-y|^{\frac12(1+\gamma)}, \ \forall y\in\mathbb{Z}_{\geq 0}, \ x\in [0,\varepsilon^{-1-a}]. \end{align} By \eqref{eq:blt:Gammabd} and \eqref{eq:hk:bd}, the summability condition \eqref{eq:sby:cdn} holds for $ f(y)=p(t,x-y) $. We now apply the summation by parts formulas~\eqref{eq:sby1}--\eqref{eq:sby2} with $ f(y) = p(t,x-y) $ in \eqref{eq:tilu1:} to express the sum as \begin{align} \label{eq:blt:u1} \widetilde{u}_1(t,x) = -\sum_{y>0} (p(t,x-y)-p(t,x-y-1)) \Gamma(x,y) - p(t,x-1) \Gamma(x,0). \end{align} For the discrete heat kernel, we have the following standard estimate on its discrete derivative (see, e.g., \cite[Eq.(A.13)]{dembo16}) \begin{align} \label{eq:hk:d} |p(t,x)-p(t,x-1)| \leq \tfrac{C}{t+1} e^{-\frac{|x|}{\sqrt{t+1}}}. \end{align} On the r.h.s.\ of \eqref{eq:blt:u1}, using the bounds \eqref{eq:blt:Gammabd}, \eqref{eq:hk:bd} and \eqref{eq:hk:d} to bound the relevant terms, we arrive at \begin{align} \notag |\widetilde{u}_1(t,x)| &\leq C \varepsilon^{-\frac19a(1-\gamma)} \sum_{y>0} \frac{|x-y|^{\frac{1}{2}(1+\gamma)}}{t+1} e^{-\frac{|x-y|}{\sqrt{t+1}}} + \varepsilon^{-\frac19a(1-\gamma)} |x|^{\frac{1}{2}(1+\gamma)} \frac{C}{\sqrt{t+1}}e^{-\frac{|x|}{\sqrt{t+1}}} \\ \notag &\leq C \varepsilon^{-\frac19a(1-\gamma)} (t+1)^{-\frac14(1-\gamma)}, \\ \label{eq:u1:bd:1} &\leq \varepsilon^{-\frac{a}8(1-\gamma)} (t+1)^{-\frac14(1-\gamma)}, \quad \forall x \in [0,\varepsilon^{-1-a}], \end{align} for all $ \varepsilon $ small enough. Inserting \eqref{eq:u1:bd:1} into \eqref{eq:u1:dcmp}, with $ -\Phi(t,x) \leq 0 $, we obtain \begin{align} \label{eq:u1:bd:<} u(t,L(t)) = u_1(t,L(t)) \leq 2\varepsilon^{-\frac{a}{8}(1-\gamma)}, \quad \forall t\leq t_0, \end{align} for all $ \varepsilon $ small enough. \noindent\textbf{Step~3: comparison through maximal principle.} The inequality \eqref{eq:u1:bd:<} gives an upper bound on $ u $ along the \emph{boundary} $ L $. Our next step is to leverage such an upper bound into an upper bound on the \emph{entire profile} of $ u $. We achieve this by utilizing the maximal principle, Lemma~\ref{lem:max}. Consider the traveling wave solution $ u_\text{tw} $ of the discrete heat equation: \begin{align} \label{eq:baru} u_\text{tw}(t,x) := e^{v'(v(t-t_0)-(x-x_0))}. \end{align} Here $ v' >0 $ is the unique positive solution to the equation $ v=\frac{1}{v'} (\cosh(v')-1) $, so that $ u_\text{tw} $ solves the discrete heat equation $ \partial_t u_\text{tw} = \frac{1}{2} \Delta u_\text{tw} $. 
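For completeness, let us record the elementary computation behind this choice of $ v' $: from \eqref{eq:baru},
\begin{align*}
\partial_t u_\text{tw}(t,x) = v'v\, u_\text{tw}(t,x), 
\qquad 
\tfrac12 \Delta u_\text{tw}(t,x) = \tfrac12\big(e^{v'}+e^{-v'}-2\big)\, u_\text{tw}(t,x) = \big(\cosh(v')-1\big)\, u_\text{tw}(t,x),
\end{align*}
so $ \partial_t u_\text{tw} = \frac12\Delta u_\text{tw} $ holds precisely when $ v'v = \cosh(v')-1 $, i.e., when $ v=\frac{1}{v'}(\cosh(v')-1) $.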
Equivalently, $ v':= f^{-1}(v) $, where \begin{align*} f: [0,\infty) \to [0,\infty), \quad f(v') := \left\{\begin{array}{l@{,\quad}l} \frac{1}{v'} (\cosh(v')-1) & v'>0, \\ 0 & v'=0. \end{array}\right. \end{align*} Further, as $ f\in C^\infty[0,\infty) $, with $ \frac{df}{dv'}>0 $, $ f(0)=0 $ and $ \frac{df}{dv'}(0)=\frac12 $, we have that $ f^{-1} \in C^\infty[0,\infty) $, $ f^{-1}(0)=0 $ and $ \frac{d f^{-1}}{dv}(0) = 2 $. Combining these properties with $ v \leq \varepsilon^{a} $ (from~\eqref{eq:tv:cnd:}), we obtain \begin{align} \label{eq:vv'} |2v-v'| \leq C v^2 \leq C\varepsilon^{2a}, \end{align} and therefore \begin{align} \label{eq:baru:bdy} |u_\text{tw}(t,L(t)) -1| =|e^{v'(v(t_0-t)-\lceil v(t_0-t)\rceil)} -1| \leq 1-e^{-v'} \leq C\varepsilon^{a}. \end{align} Let $ \overline{u} := 3\varepsilon^{-\frac{a}{8}(1-\gamma)} u_\text{tw} $. Combining~\eqref{eq:baru:bdy} and \eqref{eq:u1:bd:<}, we have that \begin{align*} u(t,L(t)) \leq 2\varepsilon^{-\frac{a}{8}(1-\gamma)} \tfrac{1}{1-C\varepsilon^{a}}u_\text{tw}(t,L(t)) \leq \overline{u}(t,L(t)), \quad \forall t\in[0,t_0], \end{align*} for all $ \varepsilon $ small enough. That is, $ \overline{u} $ dominates $ u $ along the boundary $ L $. Also, we have $ u(0,x)= 0 $ and $ \overline{u}(0,x) \geq 0 $, $ \forall x\in\mathbb{Z} $, so $ \overline{u} $ dominates $ u $ at $ t=0 $. Further, by \eqref{eq:hk:bd} and \eqref{eq:blt:Gammabd}, it is straightforward to verify that $ u $ satisfies \eqref{eq:u12:expbd} almost surely (for any $ \overline{t}<\infty $), and from the definition \eqref{eq:baru} it is clear that $ \overline{u} $ satisfies \eqref{eq:u12:expbd}. Given these properties on $ \overline{u} $ and $ u $, we now apply Lemma~\ref{lem:max} for $ (u_1,u_2)=(\overline{u},u) $, $ Q=L $ and $ \overline{t}=t_0 $ to obtain \begin{align} \label{eq:u:bd<} u(t,x) \leq 3\varepsilon^{-\frac{a}{8}(1-\gamma)} u_\text{tw}(t,x), \quad \forall x > L(t), \ t\leq t_0. \end{align} Setting $ t=t_0 $ in~\eqref{eq:u:bd<} and inserting the result into \eqref{eq:blt:ex}, as $ L(t_0)=x_0 $, we arrive at \begin{align} \label{eq:blt:<pre} \mathbf{E} ( G^{L}(t_0) | \eta^\text{ic}) \leq 3\varepsilon^{-\frac{a}{8}(1-\gamma)} \sum_{x>x_0} e^{-v'(x-x_0)} = 3\varepsilon^{-\frac{a}{8}(1-\gamma)} \frac{e^{-v'}}{1-e^{-v'}}. \end{align} Using \eqref{eq:vv'} and $ v\leq \varepsilon^{a} $, we have $ v'\geq v $ for all $ \varepsilon $ small enough, and hence $ 1-e^{-v'} \geq 1-e^{-v} \geq \frac{1}{C}v $. Using this in \eqref{eq:blt:<pre}, together with $ \frac{a}{8}(1-\gamma) <a $, we conclude the desired result~\eqref{eq:blt:<:}. \end{proof} \begin{proof}[Proof of \ref{enu:blt:}] Fixing $ (t_0,x_0,v) $ satisfying \eqref{eq:tv:cnd}--\eqref{eq:supdiff}, throughout this proof we omit the dependence on these variables, writing $ L:= L_{t_0,x_0,v} $, $ \overline{L}:= \overline{L}_{t_0,x_0,v} $, $ \underline{L}:= \underline{L}_{t_0,x_0,v} $, etc. \noindent\textbf{Step~1: reduction to $ L $.} We claim that \begin{align} \label{eq:bltL':clam} \mathbf{P} \big( G^L(t_0)=G^{\overline{L}}(t_0)=G^{\underline{L}}(t_0) \big) \geq 1 - Ce^{-\varepsilon^{-a/C}}. \end{align} Labeling all the $ \eta $-particles starting in $ (0,L'_0] $ as $ Z_1(t),Z_2(t),\ldots, Z_n(t) $, we have \begin{align} \label{eq:bltL':::} \big\{ G^L(t_0)=G^{\overline{L}}(t_0)=G^{\underline{L}}(t_0) \big\} \supset \Big\{ \sup_{t\in[0,t_0]} Z_i(t) < L(t_0), \ \forall i=1,\ldots,n \Big\} := \mathcal{A}. 
\end{align} To see why, let $ L'_0 := L(t_0)-\lfloor \varepsilon^{-\gamma'} \rfloor $, and recall from \eqref{eq:blt} that the boundary layer term $ G^Q(t) $ records the loss of $ \eta $-particles caused by absorption by $ Q $. Since the trajectories $ \overline{L} $, $ \underline{L} $ and $ L $ differ only when $ L(t) < L'_0 $ (see \eqref{eq:Lup}--\eqref{eq:Llw}), the event $ \{G^L(t_0)=G^{\overline{L}}(t_0)=G^{\underline{L}}(t_0)\} $ holds if no $ \eta $-particles starting in $ (0,L'_0] $ ever reach $ L(t_0) $ within $ [0,t_0] $. This gives~\eqref{eq:bltL':::}. Recall the notation $ \Psi(t,x) $ from \eqref{eq:barErf}. From~\eqref{eq:bltL':::} we have \begin{align*} \mathbf{P} (\mathcal{A}^c|\eta^\text{ic}) \leq \sum_{x\in(0,L'_0]} \eta^\text{ic}(x) \Psi(t_0,L(t_0)-x). \end{align*} Applying the bound \eqref{eq:erff:bd} to the expression $ \Psi(t_0,L(t_0)-x) $ on the r.h.s., we obtain \begin{align*} \mathbf{P} (\mathcal{A}^c|\eta^\text{ic}) \leq C\exp(-\tfrac{L(t_0)-L'_0}{\sqrt{t_0+1}}) \sum_{x\in(0,L'_0]} \eta^\text{ic}(x) \leq \exp(-\tfrac{L(t_0)-L'_0}{\sqrt{t_0+1}}) (F(L'_0)+L'_0). \end{align*} Further using $ L(t_0)-L_0' = \lfloor \varepsilon^{-\gamma'} \rfloor $, $ t_0\leq\varepsilon^{-1-\gamma-a} $ (by \eqref{eq:tv:cnd}) and $ a < \gamma'-\frac{\gamma+1}{2} $, we obtain $ \frac{L(t_0)-L'_0}{\sqrt{t_0+1}} \geq \frac1C \varepsilon^{-\gamma'+\frac{1+\gamma+a}{2}} \geq \frac1C \varepsilon^{-a/2} $, thereby \begin{align} \label{eq:bltL':} \mathbf{P} (\mathcal{A}^c|\eta^\text{ic}) \leq C \exp(-\tfrac1C \varepsilon^{-a/2}) (L'_0+F(L'_0)) \leq C \exp(-\varepsilon^{-a/C}) (L'_0+F(L'_0)). \end{align} To bound the term $ F(L'_0) $ in \eqref{eq:bltL':}, we apply \eqref{eq:ic:den1} for $ (x_1,x_2)=(0,L'_0) $ and $ r={L'_0}^{1-\gamma} $, to obtain that $ \mathbf{P} (F(L'_0) > L'_0) \leq C \exp(-(L'_0)^{(1-\gamma)a_*}) $. Inserting this into~\eqref{eq:bltL':} yields \begin{align} \label{eq:bltL'::} \mathbf{P} (\mathcal{A}^c) \leq C L'_0 \exp(-\varepsilon^{-a/C}) + C \exp(-(L'_0)^{(1-\gamma)a_*}). \end{align} Next, as $ x_0\in[\varepsilon^{-\gamma'-a},\varepsilon^{-1-a}] $ (by \eqref{eq:tv:cnd}), we have $ L'_0 =x_0 - \lfloor \varepsilon^{-\gamma'} \rfloor \leq x_0 \leq \varepsilon^{-2} $ and $ L'_0 = x_0 - \lfloor \varepsilon^{-\gamma'} \rfloor \geq \frac{1}{2} \varepsilon^{-\gamma'} $, for all $ \varepsilon $ small enough. Using these bounds on $ L'_0 $ in \eqref{eq:bltL'::}, with $ a<\gamma<\gamma' $, we further obtain \begin{align} \label{eq:bltL'-} \mathbf{P} (\mathcal{A}^c) \leq C \varepsilon^{-2}\exp(-\varepsilon^{-a/C})+C\exp(-\varepsilon^{-\gamma'/C}) \leq C \exp(-\varepsilon^{-a/C}). \end{align} Combining \eqref{eq:bltL'-} and \eqref{eq:bltL':::}, we see that the claim~\eqref{eq:bltL':clam} holds. Given \eqref{eq:bltL':clam}, to prove \eqref{eq:blt:cnt:up:}--\eqref{eq:blt:cnt:lw:}, it suffices to prove the analogous statement where $ \overline{L} $ and $ \underline{L} $ are replaced by $ L $, i.e.\ \begin{align} \label{eq:blt:cnt:} \mathbf{P} \Big( \big| \mathbf{E} (G^{L}(t_0)|\eta^\text{ic}) - \tfrac{1}{2v} \big| \leq v^{-1+\frac1C} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}. \end{align} \noindent\textbf{Step~2: Setting up the PDE.} To prove~\eqref{eq:blt:cnt:}, we adopt the same strategy as in Part~\ref{enu:blt:<:}, by expressing $ \mathbf{E} (G^{L}(t_0)|\eta^\text{ic}) $ in terms of the function $ u $ as in \eqref{eq:blt:ex}, and then analyzing the r.h.s.\ through the discrete \ac{PDE} \eqref{eq:uPDE}. 
As $ \Sigma_\varepsilon(a) \subset \widetilde{\Sigma}_\varepsilon(a) $, all the bounds established in Part~\ref{enu:blt:<:} continue to hold here. In particular, combining \eqref{eq:u1:dcmp} and \eqref{eq:u1:bd:1} we obtain \begin{align} \label{eq:u1:upbd} &u(t,L(t)) \leq 1+ \varepsilon^{-\frac{a}{8}(1-\gamma)} (1+t)^{-\frac14(1-\gamma)}, \\ \label{eq:u1:lwbd} &u(t,L(t)) \geq 1-\Phi(t,L(t)) - \varepsilon^{-\frac{a}{8}(1-\gamma)} (1+t)^{-\frac14(1-\gamma)}. \end{align} Recall from \eqref{eq:baru} that $ u_\text{tw} $ denotes the traveling wave solution of the discrete heat equation. The bounds \eqref{eq:u1:upbd}--\eqref{eq:u1:lwbd} and \eqref{eq:baru:bdy} give quantitative estimates on how closely $ u $ and $ u_\text{tw} $ approximate $ 1 $ along the \emph{boundary} $ L(t) $. Our strategy is to leverage these estimates into showing that $ u $ and $ u_\text{tw} $ approximate each other within the \emph{interior} $ (L(t),\infty) $. We achieve this via the maximal principle, Lemma~\ref{lem:max}, which requires constructing the solutions $ \overline{u}_\text{tw} $ and $ \underline{u}_\text{tw} $ to the discrete heat equation such that \begin{align} \label{eq:baru:up} \overline{u}_\text{tw}(0,x) \geq 0=u(0,x), \ \forall x > L(0), &\quad \overline{u}_\text{tw}(t,L(t)) \geq u(t,L(t)), \ \forall t \leq t_0, \\ \label{eq:baru:lw} \underline{u}_\text{tw}(0,x) \leq 0, \ \forall x > L(0), &\quad \underline{u}_\text{tw}(t,L(t)) \leq u(t,L(t)), \ \forall t \leq t_0. \end{align} \noindent\textbf{Step~3: Constructing $ \overline{u}_\text{tw} $ and $ \underline{u}_\text{tw} $.} Recall the definition of $ \Phi(t,x) $ from \eqref{eq:erf}. We define \begin{align} \label{eq:baru:up:} \overline{u}_\text{tw}(t,x) := u_\text{tw}(t,x) + 2\varepsilon^{\frac{a}{8}(1-\gamma)} u_\text{tw}(t,x) + 2\varepsilon^{-\frac{a}{8}(1-\gamma)} \Phi(t,x-L(\tfrac{1}{v})). \end{align} Indeed, $ \overline{u}_\text{tw}(0,x) \geq 0 $. Since $ \Phi(t,x-L(\frac{1}{v})) $ and $ u_\text{tw}(t,x) $ solve the discrete heat equation, so does $ \overline{u}_\text{tw} $. To verify the last condition in \eqref{eq:baru:up}, we set $ x=L(t) $ in \eqref{eq:baru:up:}, and write \begin{align} \label{eq:baru:up:Lt<} \overline{u}_\text{tw}(t,L(t)) \geq (1+2\varepsilon^{\frac{a}{8}(1-\gamma)})u_\text{tw}(t,L(t)) + 2\varepsilon^{-\frac{a}{8}(1-\gamma)} \Phi(t,L(t)-L(\tfrac{1}{v})). \end{align} By~\eqref{eq:baru:bdy}, $ (1+2\varepsilon^{\frac{a}{8}(1-\gamma)})u_\text{tw}(t,L(t)) \geq (1+2\varepsilon^{\frac{a}{8}(1-\gamma)})(1-C\varepsilon^{a}). $ With $ \frac{a}{8}(1-\gamma) < a $, the last expression is greater than $ (1 + \varepsilon^{\frac{a}8(1-\gamma)}) $ for all small enough $ \varepsilon $, so in particular \begin{align} \label{eq:baru:up::} (1+2\varepsilon^{\frac{a}{8}(1-\gamma)})u_\text{tw}(t,L(t)) \geq 1+\varepsilon^{\frac{a}8(1-\gamma)}, \end{align} for all $ \varepsilon $ small enough. Next, to bound the last term in~\eqref{eq:baru:up:Lt<}, we consider the cases $ t\leq \frac{1}{v} $ and $ t>\frac{1}{v} $ separately. For the case $ t\leq\frac{1}{v} $, we have $ L(t) \leq L(\frac{1}{v}) $, so $ \Phi(t,L(t)-L(\frac{1}{v})) \geq \Phi(t,0) $. Further, since the discrete kernel satisfies $ p(t,x)=p(t,-x) $ and $ \sum_{x\in\mathbb{Z}} p(t,x) =1 $, we have $ \Phi(t,0) \geq \frac{1}{2} $, so \begin{align} \label{eq:erf:LLv} \Phi(t,L(t)-L(\tfrac{1}{v})) \geq \tfrac12, \quad \forall t \leq \tfrac{1}{v}. 
\end{align} Using~\eqref{eq:baru:up::} and \eqref{eq:erf:LLv} to lower bound the expressions in \eqref{eq:baru:up:Lt<}, and comparing the result with \eqref{eq:u1:upbd}, we conclude $ \overline{u}_\text{tw}(t,L(t)) \geq u(t,L(t)) $, for $ t \leq \frac{1}{v} $. As for the case $ \frac{1}{v} < t $, we drop the last term in~\eqref{eq:baru:up:Lt<} and write \begin{align} \label{eq:baru:up:Lt<:} \overline{u}_\text{tw}(t,L(t)) \geq (1+2\varepsilon^{\frac{a}{8}(1-\gamma)})u_\text{tw}(t,L(t)). \end{align} Under the assumption $ t>\frac1v $, the bound \eqref{eq:u1:upbd} gives $ u(t,L(t)) \leq 1 + \varepsilon^{-\frac{a}8(1-\gamma)} v^{\frac14(1-\gamma)}. $ With $ v \leq \varepsilon^{\gamma-a} \leq \varepsilon^{a} $ (see~\eqref{eq:a:blt}), we have $ u(t,L(t)) \leq 1 + \varepsilon^{\frac{a}{8}(1-\gamma)} $. Comparing this with \eqref{eq:baru:up::} and \eqref{eq:baru:up:Lt<:}, we conclude $ \overline{u}_\text{tw}(t,L(t)) \geq u(t,L(t)) $. Turning to constructing $ \underline{u}_\text{tw} $, we let \begin{align} \label{eq:baru'} u_\text{tw}'(t,x) := \sum_{y> L(0)} p(t,x-y) u_\text{tw}(0,y), \end{align} which solves the discrete heat equation on $ \mathbb{Z} $ with the initial condition $ u_\text{tw}'(0,x) = u_\text{tw}(0,x)\mathbf{1}_\set{x>L(0)} $. We then define $ \underline{u}_\text{tw} $ as \begin{align} \label{eq:baru:lw:1} \underline{u}_\text{tw}(t,x) := u_\text{tw}(t,x) &- 2\varepsilon^{\frac{a}{8}(1-\gamma)}u_\text{tw}(t,x) -2\varepsilon^{-\frac{a}{8}(1-\gamma)}\Phi(t,x-L(\tfrac{1}{v})) \\ \label{eq:baru:lw:2} &-\Phi(t,x) -( 1 - 2\varepsilon^{\frac{a}{8}(1-\gamma)} )u_\text{tw}'(t,x). \end{align} Clearly, $ \underline{u}_\text{tw} $ solves the discrete heat equation, and \begin{align*} \underline{u}_\text{tw}(0,x) \leq (1- 2\varepsilon^{\frac{a}{8}(1-\gamma)})u_\text{tw}(0,x) - ( 1 - 2\varepsilon^{\frac{a}{8}(1-\gamma)} )u_\text{tw}'(0,x) =0, \quad \forall x > L(0). \end{align*} To verify the last condition~\eqref{eq:baru:lw}, we consider separately the case $ t\leq \frac{1}{v} $ and $ t>\frac{1}{v} $. For the case $ t\leq\frac{1}{v} $, we set $ x=L(t) $ in~\eqref{eq:baru:lw:1}--\eqref{eq:baru:lw:2} and write \begin{align} \label{eq:baru:lw:Lt<} \underline{u}_\text{tw}(t,L(t)) \leq u_\text{tw}(t,L(t)) -2\varepsilon^{-\frac{a}{8}(1-\gamma)}\Phi(t,L(t)-L(\tfrac{1}{v})). \end{align} Applying \eqref{eq:baru:bdy} and \eqref{eq:erf:LLv} to the r.h.s.\ of \eqref{eq:baru:lw:Lt<}, we obtain $ \underline{u}_\text{tw}(t,L(t)) \leq 1+C\varepsilon^{a}-\varepsilon^{-\frac{a}{8}(1-\gamma)} <0 $, for all $ \varepsilon $ small enough. This together with $ 0\leq u(t,L(t)) $ concludes $ \underline{u}_\text{tw}(t,L(t)) \leq u(t,L(t)) $ for the case $ t\leq\frac{1}{v} $. As for the case $ \frac{1}{v} < t $, we set $ x=L(t) $ in~\eqref{eq:baru:lw:1}--\eqref{eq:baru:lw:2} and write \begin{align*} \underline{u}_\text{tw}(t,L(t)) \leq (1 - 2\varepsilon^{\frac{a}{8}(1-\gamma)}) u_\text{tw}(t,L(t)) - \Phi(t,L(t)). \end{align*} Similarly to~\eqref{eq:baru:up::}, here we have $ (1 - 2\varepsilon^{\frac{a}{8}(1-\gamma)})(1+C\varepsilon^{a}) \leq 1 - \varepsilon^{\frac{a}{8}(1-\gamma)}, $ for all $ \varepsilon $ small enough, so in particular \begin{align} \label{eq:baru:lw:Lt>} \underline{u}_\text{tw}(t,L(t)) \leq ( 1-2\varepsilon^{\frac{a}8(1-\gamma)} )(1+C\varepsilon^{a})- \Phi(t,L(t)) \leq 1 - \varepsilon^{\frac{a}8(1-\gamma)} - \Phi(t,L(t)). \end{align} On the other hand, since here $ t > v^{-1} $, the bound~\eqref{eq:u1:lwbd} gives $ u(t,L(t)) \geq 1 - \varepsilon^{-\frac{a}8(1-\gamma)} v^{\frac14(1-\gamma)}- \Phi(t,L(t)). 
$ Further using $ v \leq \varepsilon^{\gamma-a} \leq \varepsilon^{a} $ gives $ u(t,L(t)) \geq 1 - \varepsilon^{\frac{a}{8}(1-\gamma)}- \Phi(t,L(t)) $. Comparing this with the bound~\eqref{eq:baru:lw:Lt>}, we conclude $ \underline{u}_\text{tw}(t,L(t)) \leq u(t,L(t)) $ for the case $ \frac{1}{v} < t $. With $ \overline{u}_\text{tw} $ and $ \underline{u}_\text{tw} $ satisfying the respective conditions~\eqref{eq:baru:up}--\eqref{eq:baru:lw}, we now apply Lemma~\ref{lem:max} with $ (u_1,u_2) = (\overline{u}_\text{tw},u) $ and with $ (u_1,u_2) = (u,\underline{u}_\text{tw}) $ to conclude that $ \underline{u}_\text{tw}(t_0,x) \leq u(t_0,x) \leq \overline{u}_\text{tw}(t_0,x) $, $ \forall x > L(t_0) $. Combining this with \eqref{eq:blt:ex}, with the notation $ \mathscr{S}(f):= \sum_{x>L(t_0)} f(t_0,x) $, we arrive at the following sandwiching bound: \begin{align} \label{eq:blt':bd} \mathscr{S}(\underline{u}_\text{tw}) \leq \mathbf{E} (G^L(t_0)|\eta^\text{ic}) \leq \mathscr{S}(\overline{u}_\text{tw}). \end{align} \noindent\textbf{Step~4: Sandwiching.} Our last step is to show that, the upper and lower bounds in~\eqref{eq:blt':bd} are well-approximated by $ \frac{1}{2v} $. With $ \overline{u}_\text{tw} $ and $ \underline{u}_\text{tw} $ defined in \eqref{eq:baru:up:} and \eqref{eq:baru:lw:1}--\eqref{eq:baru:lw:2}, we indeed have \begin{subequations} \label{eq:blt':bd:} \begin{align} \big| \mathscr{S}(\overline{u}_\text{tw})-\tfrac1{2v} \big|, \ \big| \mathscr{S}(\underline{u}_\text{tw})-\tfrac1{2v} \big| \leq & \label{eq:blt':bd:utr} \big|\mathscr{S}(u_\text{tw}) - \tfrac{1}{2v} \big| + 2\varepsilon^{\frac{a}{8}(1-\gamma)}\mathscr{S}(u_\text{tw}) \\ \label{eq:blt':bd:erf} &+ 2 \varepsilon^{-\frac{a}{8}(1-\gamma)} \mathscr{S}(\widetilde\Phi) + \mathscr{S}(\Phi) \\ \label{eq:blt':bd:utr'} &+ \mathscr{S}(u_\text{tw}'), \end{align} \end{subequations} where $ \widetilde\Phi(t_0,x) :=\Phi(t_0,x-L(\tfrac1v)) $. To complete the proof, it remains to bound each of the terms in \eqref{eq:blt':bd:utr}--\eqref{eq:blt':bd:utr'}. To bound the terms in \eqref{eq:blt':bd:utr}, set $ t=t_0 $ in \eqref{eq:baru} and sum the result over $ x>L(t_0) $: \begin{align} \label{eq:utrsum} \mathscr{S}(u_\text{tw}) = \sum_{x>L(t_0)} u_\text{tw}(t_0,x) = \sum_{x>L(t_0)} e^{-v'(x-L(t_0))} = \frac{ e^{-v'} }{ 1-e^{-v'} }. \end{align} Within the last expression of \eqref{eq:utrsum}, using \eqref{eq:vv'} to approximate $ e^{-v'} $ with $ e^{-2v} $, we obtain \begin{align} \label{eq:utrsum:est} |\mathscr{S}(u_\text{tw}) - \tfrac{1}{2v}| \leq C. \end{align} Apply~\eqref{eq:utrsum:est} to the terms in \eqref{eq:blt':bd:utr}. Together with $ v\leq\varepsilon^{\gamma-a}\leq\varepsilon^{a} $ and $ v\geq \varepsilon^{\gamma+a} \geq \varepsilon^{2\gamma} $, we conclude \begin{align} \label{eq:blt':bd:utr:} \big|\mathscr{S}(u_\text{tw}) - \tfrac{1}{2v} \big| + 2\varepsilon^{\frac{a}{8}(1-\gamma)}\mathscr{S}(u_\text{tw}) \leq C + C\varepsilon^{\frac1C} v^{-1} \leq v^{\frac1C-1}, \end{align} for all $ \varepsilon $ small enough. Turning to \eqref{eq:blt':bd:erf}, As $ x\mapsto \Phi(t,x) $ is decreasing, we have $ \mathscr{S}(\widetilde\Phi) \leq \mathscr{S}(\Phi) $, so without loss of generality we replace $ \widetilde\Phi $ with $ \Phi $ in \eqref{eq:blt':bd:erf}. 
Next, applying the bound \eqref{eq:erf:bd} to $ \Phi(t_0,x-L(\tfrac{1}{v})) $, and summing the result over $ x> L(t_0) $, we obtain \begin{align*} \mathscr{S}(\Phi) = \sum_{x > L(t_0)} \Phi(t_0,x-L(\tfrac{1}{v})) \leq C \sqrt{t_0+1} \exp\Big( -\frac{L(t_0)-L(\frac{1}{v})}{\sqrt{t_0+1}} \Big). \end{align*} Using $ L(t_0)-L(\frac{1}{v}) = \lceil vt_0-1 \rceil \geq vt_0-1 $, we further obtain \begin{align} \label{eq:erf:sum} \mathscr{S}(\Phi) \leq C \sqrt{t_0+1} \exp \Big( -\frac{vt_0}{\sqrt{t_0+1}} \Big). \end{align} Recall that $ t_0,v $ satisfy the conditions~\eqref{eq:tv:cnd}--\eqref{eq:supdiff}. On the r.h.s.\ of \eqref{eq:erf:sum}, using $ t_0 \leq \varepsilon^{-(1+\gamma+a)} $ to bound $ \sqrt{t_0+1} \leq 2\varepsilon^{-2} $, and using $ t_0\geq 1 $ and $ v\sqrt{t_0} \geq \varepsilon^{-a} $ to bound $ \exp(-\frac{vt_0}{\sqrt{t_0+1}}) \leq \exp(-\frac{vt_0}{\sqrt{2t_0}}) \leq \exp(-\frac{1}{\sqrt{2}}\varepsilon^{-a}), $ we obtain $ \mathscr{S}(\Phi) \leq C \varepsilon^{-2} e^{-\frac1{C} \varepsilon^{-a}}. $ Using this bound in \eqref{eq:blt':bd:erf} gives \begin{align} \label{eq:blt':bd:erf:} 2 \varepsilon^{-\frac{a}{8}(1-\gamma)} \mathscr{S}(\widetilde\Phi) + \mathscr{S}(\Phi) \leq \big(2 \varepsilon^{-\frac{a}{8}(1-\gamma)}+1\big) \mathscr{S}(\Phi) \leq \varepsilon^{-C} e^{-\frac1C \varepsilon^{-a}} \leq C \leq v^{\frac1C-1}, \end{align} for all $ \varepsilon $ small enough. Turning to \eqref{eq:blt':bd:utr'}, we first recall that $ u_\text{tw}' $ is defined in terms of $ u_\text{tw}(0,y) $ as in \eqref{eq:baru'}. With $ u_\text{tw}(t,x) $ defined in \eqref{eq:baru}, we have \begin{align*} u_\text{tw}(0,y) = e^{v'(-y+L(t_0)-vt_0)} = e^{-v'(y-L(0))} e^{v'(\lceil vt_0\rceil-vt_0)}. \end{align*} Using $ e^{v'(\lceil vt_0\rceil-vt_0)} \leq e^{v'} \leq C $ to bound the last exponential factor on the r.h.s., inserting the resulting inequality into \eqref{eq:baru'}, and summing the result over $ x > L(t_0) $, we obtain \begin{align} \label{eq:baru':bd:} \sum_{x>L(t_0)} u_\text{tw}'(t_0,x) \leq C \sum_{x>L(t_0)} \sum_{y > L(0)} p(t_0,x-y) e^{-v'(y-L(0))}. \end{align} By \eqref{eq:erf:bd} we have \begin{align} \label{eq:baru':bd:1} \sum_{x>L(t_0)} p(t_0,x-y) = \Phi(t_0,L(t_0)-y+1) \leq Ce^{-\frac{[L(t_0)-y+1]_+}{\sqrt{t_0+1}}}. \end{align} Exchanging the two sums in \eqref{eq:baru':bd:} and applying \eqref{eq:baru':bd:1} to the result, we arrive at \begin{align} \label{eq:baru':bd:2} \mathscr{S}(u_\text{tw}') = \sum_{x>L(t_0)} u_\text{tw}'(t_0,x) \leq C \sum_{y > L(0)} e^{-\frac{[L(t_0)-y+1]_+}{\sqrt{t_0+1}}} e^{-v'(y-L(0))}. \end{align} On the r.h.s.\ of \eqref{eq:baru':bd:2}, the two exponential functions concentrate at well-separated locations $ L(t_0) $ and $ L(0) $. To utilize this property, we divide the r.h.s.\ of \eqref{eq:baru':bd:2} into sums over $ L(0)<y\leq \frac{L(0)+L(t_0)}{2} $ and over $ \frac{L(0)+L(t_0)}{2}<y $, and let $ u_\text{tw}^{1,\prime} $ and $ u_\text{tw}^{2,\prime} $ denote the resulting sums, respectively. For $ u_\text{tw}^{1,\prime} $, using $ L(t_0)-y+1\geq\frac{1}{2}(L(t_0)-L(0)) $ to bound the first exponential function in \eqref{eq:baru':bd:2}, we have \begin{align*} u_\text{tw}^{1,\prime} \leq C \exp\Big( -\frac{\frac{1}{2}(L(t_0)-L(0))}{\sqrt{t_0+1}} \Big) \Big( \sum_{y > L(0)} e^{-v'(y-L(0))} \Big). \end{align*} The sum over $ y>L(0) $ is equal to $ \mathscr{S}(u_\text{tw}) $ (see \eqref{eq:utrsum}), and is in particular bounded by $ Cv^{-1} $ (by \eqref{eq:utrsum:est}). 
Therefore, \begin{align} \label{eq:baru'1:bd} u_\text{tw}^{1,\prime} \leq Cv^{-1} \exp\Big( -\frac{(L(t_0)-L(0))}{2\sqrt{t_0+1}} \Big). \end{align} As for $ u_\text{tw}^{2,\prime} $, we simply replace $ \exp(-\frac{[L(t_0)-y+1]_+}{\sqrt{t_0+1}}) $ with $ 1 $ and write \begin{align} \label{eq:baru'2:bd} u_\text{tw}^{2,\prime} \leq C \hspace{-10pt} \sum_{y > \frac{1}{2}(L(0)+L(t_0))} \hspace{-15pt} e^{-v'(y-L(0))} \leq C{v'}^{-1} \exp\big(-\tfrac{v'}{2}(L(t_0)-L(0))\big). \end{align} Now, add \eqref{eq:baru'1:bd} and \eqref{eq:baru'2:bd} to obtain \begin{align} \label{eq:baru'12:bd} \mathscr{S}(u_\text{tw}') \leq Cv^{-1} \exp\Big( -\frac{(L(t_0)-L(0))}{2\sqrt{t_0+1}} \Big) + C{v'}^{-1} \exp\big(-\tfrac{v'}{2}(L(t_0)-L(0))\big). \end{align} On the r.h.s.\ of \eqref{eq:baru'12:bd}, using $ L(t_0)-L(0) \geq vt_0 $, $ t_0 \geq 1 $, and using \eqref{eq:vv'} to approximate $ v' $ by $ 2v $, with $ v\sqrt{t_0} \geq \varepsilon^{-a} $, we obtain \begin{align} \label{eq:blt':bd:utr':} \mathscr{S}(u_\text{tw}') \leq Cv^{-1} \exp\big( { -\tfrac14 v\sqrt{t_0} } \big) + C{v}^{-1} \exp\big(-\tfrac{1}{C}v^2t_0\big) \leq C v^{-1} \exp(-\varepsilon^{-a}) \leq v^{-1+\frac1C}, \end{align} for all $ \varepsilon $ small enough. Inserting the bounds \eqref{eq:blt':bd:utr:}, \eqref{eq:blt':bd:erf:} and \eqref{eq:blt':bd:utr':} into \eqref{eq:blt':bd:} gives the desired result~\eqref{eq:blt:cnt:}. \end{proof} Equipped with Lemma~\ref{lem:blt}, we next establish the pointwise version of Proposition~\ref{prop:blt}. \begin{lemma}\label{lem:blt-} Let $ a $ be fixed as in \eqref{eq:a:blt}. \begin{enumerate}[label=(\alph*)] \item \label{enu:blt:<-} There exists $ C<\infty $ such that \begin{align} \label{eq:blt:<-} \mathbf{P} \Big( G^{L_{t_0,x_0,v}}(t_0) \leq 2\varepsilon^{-a}v^{-1} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \end{align} for all $ (t_0,x_0,v) \in \widetilde{\Sigma}_\varepsilon(a) $. \item \label{enu:blt-} There exists $ C<\infty $ such that \begin{align} & \label{eq:blt:cnt-:up} \mathbf{P} \Big( \big| G^{\overline{L}_{t_0,x_0,v}}(t_0) - \tfrac{1}{2v} \big| \leq v^{-1+\frac1C} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \\ & \label{eq:blt:cnt-:lw} \mathbf{P} \Big( \big| G^{\underline{L}_{t_0,x_0,v}}(t_0) - \tfrac{1}{2v} \big| \leq v^{-1+\frac1C} \Big) \geq 1 - Ce^{-\varepsilon^{-a/C}}, \end{align} for all $ (t_0,x_0,v) \in \Sigma_\varepsilon(a) $. \end{enumerate} \end{lemma} \begin{proof} To simplify notation, we write $ \mathbf{E} '(\Cdot):= \mathbf{E} (\Cdot|\eta^\text{ic}) $ and $ \mathbf{P} '(\Cdot) := \mathbf{P} (\Cdot|\eta^\text{ic}) $ for the conditional expectation and conditional probability. We first establish Part~\ref{enu:blt-}. Indeed, from the definition~\eqref{eq:blt} of $ G^Q(t) $, for any fixed, \emph{deterministic} $ t\mapsto Q(t) $, the random variable $ G^Q(t_0) $ is of the form \eqref{eq:XBer}. More precisely, labeling all the $ \eta $-particles starting at site $ x $ at $ t=0 $ as $ X_{x,j}(0) $, $ j=1,\ldots,\eta^\text{ic}(x) $, we have $ G^Q(t_{0})=\sum_{x>0} \sum_{j=1}^{\eta^\text{ic}(x)} \mathbf{1}_{\mathcal{B}_{x,j}} $, where \begin{align*} \mathcal{B}_{x,j} := \big\{ X_{x,j}(t_0) > Q(t_0), \ X_{x,j}(t) \leq Q(t), \text{ for some } t<t_0 \big\}. \end{align*} We set $ X_1 := G^{\overline{L}_{t_0,x_0,v}}(t_0) $ and $ X_2:=G^{\underline{L}_{t_0,x_0,v}}(t_0) $ to simplify notation. From Lemma~\ref{lem:blt}\ref{enu:blt:}, \begin{align} \label{eq:let:b} \mathbf{P} ( | \mathbf{E} '(X_i)-\tfrac{1}{2v}| > v^{\frac1C-1} ) \leq C\exp(-\varepsilon^{-a/C}). 
\end{align} Without loss of generality, we assume $ C\geq 4 $. We now apply \eqref{eq:Chernov00} with $ X=X_1, X_2 $, $ r=v^{\frac1C-1} $ and $ \zeta=\frac{1}{2v} $ to obtain \begin{align*} \mathbf{P} ( |X_i-\tfrac{1}{2v}| > 2v^{\frac1C-1} ) &\leq C\exp(-\tfrac{2}{3} v^{\frac2C-1} ) + \mathbf{P} ( | \mathbf{E} '(X_i)-\tfrac{1}{2v}| > v^{\frac1C-1} ), \quad i=1,2. \end{align*} Using \eqref{eq:let:b} to bound the last term, with $ v^{\frac2C-1} \geq (\varepsilon^{a})^{-\frac12} $ (since $ v\leq\varepsilon^{a} $ and $ C\geq 4 $), we see that the desired result \eqref{eq:blt:cnt-:up}--\eqref{eq:blt:cnt-:lw} follows. Turning to Part~\ref{enu:blt:<-}, we let $ X_0 := G^{L_{t_0,x_0,v}}(t_0) $. Similarly to the preceding, we apply \eqref{eq:Chernov00} with $ X=X_0 $, $ \zeta=0 $ and $ r=\varepsilon^{-a}v^{-1} $ to obtain \begin{align*} \mathbf{P} ( X_0> 2\varepsilon^{-a}v^{-1} ) \leq 2 e^{-\frac13\varepsilon^{-a}v^{-1}} + \mathbf{P} ( \mathbf{E} '(X_0) > \varepsilon^{-a}v^{-1} ). \end{align*} Using $ v\leq \varepsilon^{\gamma-a} \leq 1 $ and Lemma~\ref{lem:blt}\ref{enu:blt:<:}, we see that the r.h.s.\ is bounded by $ Ce^{-\varepsilon^{-a/C}} $. Hence the desired result~\eqref{eq:blt:<-} follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:blt}] Given Lemma~\ref{lem:blt-}, the proofs of \eqref{eq:blt:cnt:up}--\eqref{eq:blt:<} are similar, so here we prove only \eqref{eq:blt:cnt:up} and omit the rest. Our goal is to extend the probability bound \eqref{eq:blt:cnt-:up}, so that the corresponding event holds \emph{simultaneously} for all $ (t,x,v) \in \Sigma_\varepsilon(a) $. To this end, fixing $ \widetilde{a}\in(0,a) $, and letting $ n:=\lceil \varepsilon^{-2\gamma-3a} \rceil $, we consider the following discretization of $ \Sigma_\varepsilon(\widetilde{a}) $: \begin{align*} \Sigma_{n,\varepsilon}(\widetilde{a}) := \Sigma_\varepsilon(\widetilde{a}) \cap \big( \mathbb{Z} \times \mathbb{Z} \times (\tfrac{1}{n}\mathbb{Z}_{\geq 0}) \big). \end{align*} That is, we consider all the points $ (t,x,v)\in\Sigma_\varepsilon(\widetilde{a}) $ such that $ t\in\mathbb{Z} $ and $ v\in\frac{1}{n}\mathbb{Z} $. From \eqref{eq:tv:cnd}--\eqref{eq:supdiff}, it is clear that $ \Sigma_{n,\varepsilon}(\widetilde{a}) $ is at most polynomially large in $ \varepsilon^{-1} $. This being the case, taking union bounds of \eqref{eq:blt:cnt-:up} over $ (t_0,x_0,v)\in\Sigma_{n,\varepsilon}(\widetilde{a}) $, we have that \begin{align} \label{eq:blt:union} |G^{\overline{L}_{t,x,v}}(t)-\tfrac{1}{2v}| \leq v^{\frac1C-1}, \ \forall (t,x,v)\in\Sigma_{n,\varepsilon}(\widetilde{a}), \end{align} with probability $ \to 1 $ as $ \varepsilon\to 0 $. Our next step is to extend \eqref{eq:blt:union} to those values of $ (t,v) $ not included in the discrete set $ \Sigma_{n,\varepsilon}(\widetilde{a}) $. To this end, we consider the set $ \Lambda:= \frac1n\mathbb{Z}\cap[\varepsilon^{\widetilde{a}+\gamma},\varepsilon^{\gamma-\widetilde{a}}] $ that represents the widest possible range of $ \Sigma_{n,\varepsilon}(\widetilde{a}) $ in the $ v $ variable, and order the points in $ \Lambda $ as \begin{align*} \tfrac1n\mathbb{Z}\cap[\varepsilon^{\widetilde{a}+\gamma},\varepsilon^{\gamma-\widetilde{a}}] = \{ v_1<v_2<\ldots<v_{m} \}. \end{align*} We now consider a generic `cell' of the form $ E= [i,i+1]\times\{x\}\times[v_j,v_{j+1}] $, such that $ E \subset \Sigma_{n,\varepsilon}(\widetilde{a}) $, and establish a continuity (in $ (t,v) $) estimate on $ G^{\overline{L}_{t,x,v}}(t) $ for $ (t,x,v)\in E $. 
More precisely, we aim at showing \begin{align} \label{eq:blt:sand'} G^{\overline{L}_{i,x,v_{j+1}}}(i)-J(i,x-1) \leq G^{\overline{L}_{t,x,v}}(t) \leq &G^{\overline{L}_{i,x,v_{j}}}(i)+J(i,x-1), \\ \notag &\forall (t,v,x)\in E, \ E \subset \Sigma_{n,\varepsilon}(\widetilde{a}). \end{align} To prove~\eqref{eq:blt:sand'}, we begin by noting a simple but useful inequality~\eqref{eq:blt:mono:cmp}. Recall from~\eqref{eq:shade} that $ A_Q(t) $ denotes the region shaded by a given trajectory $ Q $ up to time $ t $. Since $ \eta^Q $ denotes the particle system obtained from absorbing $ \eta $-particles into $ Q $, it follows that \begin{align*} \eta^{Q}(t,y) \geq \eta^{Q'}(t,y), \ \forall y\in\mathbb{Z}, \quad \text{if } A_Q(t) \subset A_{Q'}(t). \end{align*} Combining this with the expression~\eqref{eq:blt} of $ G^Q(t) $ gives the following inequality \begin{align} \label{eq:blt:mono:cmp} G^{Q}(t) \leq G^{Q'}(t), \quad \text{if } A_Q(t) \subset A_{Q'}(t), \ Q(t)=Q'(t). \end{align} Now, fix $ v\in[v_j,v_{j+1}] $. From the definition~\eqref{eq:L} of $ \overline{L}_{t,x,v}(s) $, we see that \begin{align*} A_{\overline{L}_{t,x,v_{j+1}}}(t) \subset A_{\overline{L}_{t,x,v}}(t) \subset A_{\overline{L}_{t,x,v_j}}(t), \quad \overline{L}_{t,x,v_j}(t)=\overline{L}_{t,x,v}(t)=\overline{L}_{t,x,v_{j+1}}(t)=x. \end{align*} Given these properties, applying~\eqref{eq:blt:mono:cmp} for $ (Q,Q')=(\overline{L}_{t,x,v_{j+1}},\overline{L}_{t,x,v}) $ and for $ (Q,Q')=(\overline{L}_{t,x,v},\overline{L}_{t,x,v_j}) $, we conclude \begin{align} \label{eq:blt:mono} G^{\overline{L}_{t,x,v_{j+1}}}(t) \leq G^{\overline{L}_{t,x,v}}(t) \leq G^{\overline{L}_{t,x,v_{j}}}(t), \quad \forall v\in[v_j,v_{j+1}]. \end{align} Given~\eqref{eq:blt:mono}, our next step is to control the difference between $ G^{\overline{L}_{t,x,v_{j+1}}}(t) $ and $ G^{\overline{L}_{i,x,v_{j+1}}}(i) $, and between $ G^{\overline{L}_{t,x,v_{j}}}(t) $ and $ G^{\overline{L}_{i,x,v_{j}}}(i) $. Fix $ t\in[i,i+1] $. Since $ t\geq i $, we clearly have that $ A_{\overline{L}_{t,x,v_{j+1}}}(t) \subset A_{\overline{L}_{i,x,v_{j+1}}}(t) $. Referring back to~\eqref{eq:Lup}, with $ v_{j+1}\leq \varepsilon^{\gamma-a}< 1 $ and $ 0\leq t-i \leq 1 $, we have that $ \overline{L}_{i,x,v_{j+1}}(i) = \overline{L}_{i,x,v_{j+1}}(t)=x $, i.e., the function $ \overline{L}_{i,x,v_{j+1}}(s) $ remains constant for $ s\in[i,t] $. Given these properties, applying~\eqref{eq:blt:mono:cmp} for $ (Q,Q')=(\overline{L}_{t,x,v_{j+1}},\overline{L}_{i,x,v_{j+1}}) $ we obtain \begin{align} \label{eq:blt:sand} G^{\overline{L}_{i,x,v_{j+1}}}(t) \leq G^{\overline{L}_{t,x,v_{j+1}}}(t). \end{align} Recall the definition of $ J(i,x) $ from Lemma~\ref{lem:eta:J}. From the definition~\eqref{eq:blt} of $ G^Q(t) $, we see that the change in $ G^{\overline{L}_{i,x,v_{j+1}}}(s) $ over $ s\in[i,t] $ is dominated by the total jump across the bond $ (x-1,x) $, and in particular $ G^{\overline{L}_{i,x,v_{j+1}}}(t) \geq G^{\overline{L}_{i,x,v_{j+1}}}(i) - J(i,x-1). $ Combining this with~\eqref{eq:blt:sand} gives \begin{align} \label{eq:blt:sand::} G^{\overline{L}_{i,x,v_{j+1}}}(i) - J(i,x-1)\leq G^{\overline{L}_{t,x,v_{j+1}}}(t). \end{align} A similar argument gives \begin{align} \label{eq:blt:sand::::} G^{\overline{L}_{t,x,v_{j}}}(t) \leq G^{\overline{L}_{i+1,x,v_{j}}}(i) + J(i,x-1). \end{align} Combining \eqref{eq:blt:mono} and \eqref{eq:blt:sand::}--\eqref{eq:blt:sand::::} yields~\eqref{eq:blt:sand'}. 
Now, using \eqref{eq:blt:union} and \eqref{eq:J:bd} (for $ b=a $), after ignoring events of probability $ \to 0 $, we have \begin{subequations} \label{eq:thesebds} \begin{align} G^{\overline{L}_{i,x,v_{j+1}}}(i) &\geq \tfrac{1}{2v_{j+1}} - (v_{j+1})^{\frac1C-1}, \\ G^{\overline{L}_{i,x,v_{j}}}(i) &\leq \tfrac{1}{2v_{j}} + (v_{j})^{\frac1C-1}, \\ J(i,x-1) &\leq \varepsilon^{-a} \leq \tfrac12 v^{\frac1C-1}, \quad\quad\quad \forall (i,x,v_{j+1}), (i,x,v_j)\in\Sigma_{n,\varepsilon}(\widetilde{a}), \end{align} \end{subequations} for all $ \varepsilon $ small enough. Using \eqref{eq:thesebds} in \eqref{eq:blt:sand'}, we obtain \begin{align} \label{eq:blt:sand:} |G^{\overline{L}_{t,x,v}}(t) - \tfrac{1}{2v}| \leq (|\tfrac{1}{v}-\tfrac{1}{v_{j+1}}|\vee|\tfrac{1}{v}-\tfrac{1}{v_{j}}|) + \tfrac12 v^{\frac1C-1}, \quad \forall (t,v,x)\in E, \ E \subset \Sigma_{n,\varepsilon}(\widetilde{a}). \end{align} Further, since $ v\in[v_{j},v_{j+1}] $, we have \begin{align*} (|\tfrac{1}{v}-\tfrac{1}{v_{j+1}}|\vee|\tfrac{1}{v}-\tfrac{1}{v_{j}}|) \leq |\tfrac{1}{v_j}-\tfrac{1}{v_{j+1}}| \leq \tfrac{1/n}{v_j^2}. \end{align*} In the last expression, further using the conditions $ v_{j}\geq\varepsilon^{\gamma+a} $ and $ n\geq \varepsilon^{-2\gamma-3a} $, we obtain $ (|\tfrac{1}{v}-\tfrac{1}{v_{j+1}}|\vee|\tfrac{1}{v}-\tfrac{1}{v_{j}}|) \leq \varepsilon^{-a} \leq \frac12 v^{\frac1C-1} $, for all $ \varepsilon $ small enough. Inserting this bound into the r.h.s.\ of \eqref{eq:blt:sand:}, we arrive at \begin{align} \label{eq:blt:sand:::} |G^{\overline{L}_{t,x,v}}(t)-\tfrac{1}{2v}| \leq v^{\frac1C-1}, \ \forall (t,x,v) \in \Sigma'_\varepsilon(\widetilde{a}), \end{align} where $ \Sigma'_\varepsilon(\widetilde{a}) := \bigcup_{E \subset \Sigma_{n,\varepsilon}(\widetilde{a})} E $. Even though the set $ \Sigma'_\varepsilon(\widetilde{a}) $ leaves out some points near the boundary of $ \Sigma_\varepsilon(\widetilde{a}) $, with $ \widetilde{a}<a $, $ \Sigma'_\varepsilon(\widetilde{a})\supset\Sigma_\varepsilon(a) $ eventually holds for all $ \varepsilon $ small enough. Hence \eqref{eq:blt:sand:::} concludes the desired result~\eqref{eq:blt:cnt:up}. \end{proof} \section{Proof of Theorem~\ref{thm:main}} \label{sect:pfmain} We fix an initial condition $ \eta^\text{ic} $ as in Definition~\ref{def:ic}, with the corresponding constants $ \gamma,a_*,C_* $ and limiting distribution $ \mathcal{F} \in C[0,\infty) $. We first note that under the conditions in Definition~\ref{def:ic}, we necessarily have \begin{align} \label{eq:ict0=0} \mathbf{P} ( \mathcal{F}(0) = 0 ) = 1. \end{align} To see this, set $ (x_1,x_2)=(0,\lfloor \varepsilon^{-1}\xi\rfloor) $ and $ r= \xi^{-\frac{\gamma}{2}} $ in \eqref{eq:ic:den1} to obtain $ \mathbf{P} \big( |F(\lfloor \varepsilon^{-1}\xi\rfloor)| > \varepsilon^{-\gamma}|\xi|^{\frac{\gamma}{2}} \big) \leq C_* \exp\big(-|\xi|^{-\frac{\gamma a_*}{2}}\big). $ Since $ \varepsilon^{\gamma} F(\varepsilon^{-1}\Cdot) \Rightarrow \mathcal{F}(\Cdot) $ under $ \mathscr{U} $, letting $ \varepsilon\to 0 $ yields \begin{align} \label{eq:ict0:} \mathbf{P} ( |\mathcal{F}(\xi)| > |\xi|^{\frac{\gamma}{2}} ) \leq C_* \exp( -|\xi|^{-\frac{\gamma a_*}{2}}), \end{align} for any fixed $ \xi\in(0,\infty) $. Now, set $ \xi=\xi_n := \frac{1}{n} $ in \eqref{eq:ict0:}. With $ \sum_{n=1}^\infty C_* \exp(-n^{\frac{\gamma a_*}{2}})<\infty $, by the Borel--Cantelli lemma, almost surely $ |\mathcal{F}(\xi_n)| \leq \xi_n^{\frac{\gamma}{2}} = n^{-\frac{\gamma}{2}} $ for all but finitely many $ n $, so we have $ \mathbf{P} ( \lim_{n\to \infty} \mathcal{F}(\xi_n) =0 ) = 1. $ As $ \xi\mapsto\mathcal{F}(\xi) $ is continuous by assumption, this concludes~\eqref{eq:ict0=0}. 
Recall that $ \gamma'\in(\frac{\gamma+1}{2},1) $ is a fixed parameter in the definitions~\eqref{eq:Lup}--\eqref{eq:Llw} of $ \overline{L}_{t_0,x_0,v} $ and $ \underline{L}_{t_0,x_0,v} $. Fixing further \begin{align} \label{eq:a:main} 0<a <(\gamma'-\tfrac{1+\gamma}{2})\wedge\tfrac{1-\gamma}{5}\wedge\tfrac{\gamma}{2}\wedge\tfrac{3\gamma-1}{4}\wedge(\gamma(1-\gamma')), \end{align} throughout this section we use $ C<\infty $ to denote a generic finite constant, that may change from line to line, but depends only on $ a,a_*,\gamma,\gamma',C_* $. Recall from Section~\ref{sect:elem} that our strategy is to construct processes $ \overline{R}_\lambda(t) $ and $ \underline{R}_\lambda(t) $ that serve as upper and lower bounds of $ R(t) $. We begin with the upper bound. \subsection{The upper bound} \label{sect:pfmain:up} The process $ \overline{R}_\lambda(t) $ is constructed via the corresponding hitting time process $ \overline{T}_\lambda(\xi) $, defined in the following. Fixing $ \lambda>0 $, we let $ v_* := \varepsilon^{\gamma-a} $ and \begin{align} \label{eq:x*} x_* &:= \inf\{ x \geq \lambda\varepsilon^{-1} : F(x) \geq \lambda\varepsilon^{-\gamma} \}, \\ \label{eq:x**} x_{**} &:= \inf\{ x \geq x_* : F(x) \leq \tfrac12\lambda\varepsilon^{-\gamma} \} \wedge (x_*+\lambda\varepsilon^{-1}). \end{align} For $ x\in\mathbb{Z}_{\geq 0} $ we define the hitting time process $ \overline{T}_\lambda(x) $ as \begin{align} \label{eq:Tup} \overline{T}_\lambda(x) := \left\{\begin{array}{l@{\quad}l} \displaystyle v_*^{-1} [(x-x_*)]_+ ,& x \leq x_{**}, \\ ~ \\ \displaystyle \overline{T}_\lambda(x_{**}) + 2 \sum_{x_{**}< y \leq x} \big( F(y) - \tfrac12 \lambda\varepsilon^{-\gamma} \big) \mathbf{1}_\set{ F(y) > \lambda\varepsilon^{-\gamma} }, & x > x_{**}, \end{array}\right. \end{align} and extend $ \overline{T}_\lambda(\Cdot) $ to $ [0,\infty) $ by letting $ \overline{T}_\lambda(\xi) := \overline{T}_\lambda(\lfloor\xi\rfloor) $. With this, recalling the definition of the involution $ \iota(\Cdot) $ from \eqref{eq:inversion}, we then define $ \overline{R}_\lambda := \iota(\overline{T}_\lambda) $. Note that, even though the processes $ \overline{T}_\lambda $ and $ \overline{R}_\lambda $ do depend on $ \varepsilon $, we omit the dependence to simplify notations. Let us explain the motivation for the construction of $ \overline{T}_\lambda $. First, in~\eqref{eq:Tup}, the regimes for $ \xi \leq x_{**} $ and for $ \xi > x_{**} $ correspond respectively to $ \widetilde{\Sigma}_\varepsilon(a) $ (defined in \eqref{eq:tv:cnd:}) and to $ \Sigma_\varepsilon(a) $ (defined in \eqref{eq:tv:cnd}--\eqref{eq:supdiff}). As mentioned previously, the process $ \overline{R}_\lambda $ will serve as an upper bound of $ R $. For this to be the case, we need $ \overline{T}_\lambda $ to be a \emph{lower} bound of $ T $. In the first regime $ \xi \leq x_{**} $, we freeze the process $ \overline{T}_\lambda(\xi) $ at zero until $ \xi=x_* $, in order to accommodate potential atypical behaviors of the actually hitting process $ T $ upon initiation. Then, we let $ \overline{T}_\lambda $ grow linearly, with inverse speed $ (v_*)^{-1} =\varepsilon^{-\gamma+a} \ll \varepsilon^{-\gamma} $, much slower than the expected inverse speed $ \asymp \varepsilon^{-\gamma} $. Doing so ensures $ \overline{T}_{\lambda} $ being a lower bound of $ T $. 
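Although it plays no role in the proofs, the construction \eqref{eq:Tup} is explicit and can be evaluated directly. The following sketch (in Python) computes $ \overline{T}_\lambda $ from a sampled path of $ F $ and recovers $ \overline{R}_\lambda $ as a generalized inverse; the array \texttt{F}, the parameters \texttt{eps}, \texttt{lam}, \texttt{gamma}, \texttt{a}, and the helper \texttt{R\_up} are illustrative assumptions made for this sketch (in particular, \texttt{R\_up} only approximates the involution $ \iota(\Cdot) $ of \eqref{eq:inversion}), not part of the construction above.
\begin{verbatim}
import numpy as np

def T_up(F, eps, lam, gamma, a):
    """Hitting-time process of (eq:Tup) on x = 0, ..., len(F)-1.

    F[x] plays the role of F(x); eps, lam, gamma, a stand for the
    parameters epsilon, lambda, gamma, a (illustrative inputs only).
    """
    v_star = eps ** (gamma - a)              # v_* = eps^(gamma - a)
    level = lam * eps ** (-gamma)            # lambda * eps^(-gamma)
    # x_*  : first x >= lam * eps^(-1) with F(x) >= lambda * eps^(-gamma)
    x_lo = int(np.ceil(lam / eps))
    x_star = next((x for x in range(x_lo, len(F)) if F[x] >= level), len(F) - 1)
    # x_** : first x >= x_* with F(x) <= lambda * eps^(-gamma) / 2,
    #        capped at x_* + lam * eps^(-1)
    cap = x_star + int(lam / eps)
    x_2star = min(next((x for x in range(x_star, len(F)) if F[x] <= level / 2),
                       cap), cap)
    T = np.zeros(len(F))
    for x in range(1, len(F)):
        if x <= x_2star:                     # first regime: linear, slope 1/v_*
            T[x] = max(x - x_star, 0) / v_star
        else:                                # second regime: slope 2(F - level/2)
            inc = 2.0 * (F[x] - level / 2) if F[x] > level else 0.0
            T[x] = T[x - 1] + inc
    return T, x_star, x_2star

def R_up(T, t):
    """Generalized inverse of the nondecreasing array T: sup{x : T[x] <= t}."""
    return int(np.searchsorted(T, t, side="right")) - 1
\end{verbatim}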
This linear motion of $ \overline{T}_\lambda $ translates into the motion of $ \overline{R}_\lambda $ as \begin{align} \label{eq:Qlin} &\overline{R}_\lambda(t) = \lfloor v_*t \rfloor + \overline{R}_\lambda(0), \ \forall t \leq \overline{T}_\lambda(x_{**}). \end{align} Next, recall from Section~\ref{sect:heu} that we expect $ R $ to growth at speed $ 1/[2F(R)]_+ $ and hence $ T $ to grow at inverse speed $ [2F(x)]_+ $. With this in mind, in the second regime $ \xi \geq x_{**} $, we let $ \overline{T}_{\lambda} $ grow at inverse speed $ 2( F(y) - \frac12 \lambda\varepsilon^{-\gamma} ) \mathbf{1}_\set{ F(y) > \lambda\varepsilon^{-\gamma} } $. The offset by $ - \frac12 \lambda\varepsilon^{-\gamma} $ slightly slows down $ \overline{T}_\lambda $ so that it will be a lower bound of $ T $, introducing the indicator $ \set{F(y) > \lambda\varepsilon^{-\gamma}}$ for technical reasons (to avoid having to deal with a non-zero but too small growth of $ \overline{T}_{\lambda}$). We next establish a simple comparison criterion. \begin{lemma} \label{lem:cmpCrit} Fixing $ (t_0,v)\in[0,\infty)\times(0,\infty) $, we let $ x_0:=\overline{R}_\lambda(t_0) $. If \begin{align} \label{eq:shcmp:criterion} \overline{T}_\lambda(x_0)-\overline{T}_\lambda(y) \leq v^{-1} (x_0-y), \quad \forall \ x_0-\lfloor\varepsilon^{-\gamma'}\rfloor \leq y \leq x_0, \end{align} then \begin{align} \label{eq:shcmp} & \overline{R}_\lambda(t) \leq \overline{L}_{t_0,x_0+1,v}(t), \quad \forall t\leq t_0. \\ \label{eq:shcmp:} &G^{\overline{R}_\lambda}(t_0) \leq G^{\overline{L}_{t_0,x_0+1,v}}(t_0)+\eta(t_0,x_0+1). \end{align} \end{lemma} \begin{proof} The proof of~\eqref{eq:shcmp} is geometric, so we include a schematic figure to facilitate it. \begin{figure} \caption{Proof of~\eqref{eq:shcmp}.} \label{fig:lem1} \end{figure} In Figure~\ref{fig:lem1}, we show the complete graphs CG$ (\overline{T}_\lambda) $ and CG$ (\overline{L}_{t_0,x_0+1,v}) $ of $ \overline{T}_\lambda $ and $ \overline{L}_{t_0,x_0+1,v} $. The gray crosses in the figure label points of the form $ (x,\overline{T}_\lambda(x)) $, $ x\in\mathbb{Z}_{\geq 0} $. The dash lines both have slope $ v^{-1} $: the black one passes through $ (x_0,\overline{T}_{\lambda}(x_0)) $ while the blue one passes through $ (x_0+1,t_0) $. The given assumption~\eqref{eq:shcmp:criterion} translates into \begin{center} the gray crosses lie above the back dash line, for all $ x\in [x_0- \lfloor\varepsilon^{-\gamma'}\rfloor,x_0]$. \end{center} From this one readily verifies that CG$ (\overline{L}_{t_0,x_0+1,v}) $ lies below CG$ (\overline{T}_\lambda) $ within $ \xi\in[x_0 - \lfloor\varepsilon^{-\gamma'}\rfloor,x_0] $. Further, by definition (see~\eqref{eq:Lup}), CG($ \overline{L}_{t_0,x_0+1,v} $) sits at level $ t=0 $ for $ \xi<x_0 - \lfloor\varepsilon^{-\gamma'}\rfloor $, as shown in Figure~\ref{fig:lem1}. Hence, CG$ (\overline{L}_{t_0,x_0+1,v}) $ lies below CG$ (\overline{T}_\lambda) $ for all $ \xi\in[0,x_0] $, which gives \eqref{eq:shcmp}. Having established \eqref{eq:shcmp}, we next turn to showing \eqref{eq:shcmp:}. Recall from~\eqref{eq:shade} that $ A_Q(t) $ denotes the shaded region of a given process $ Q $, and that $ \eta^Q $ denotes the particle system constructing from $ \eta $ by deleting all the $ \eta $-particles which has visited $ A_Q(t) $ up to a given time $ t $. 
By \eqref{eq:shcmp}, we have $ A_{\overline{R}_\lambda}(t_0) \subset A_{\overline{L}_{t_0,x_0+1,v}}(t_0) $, so in particular \begin{align} \label{eq:etaRup:cmp} \eta^{\overline{R}_{\lambda}}(t,x) \geq \eta^{\overline{L}_{t_0,x_0+1,v}}(t,x), \quad \forall x\in\mathbb{Z}, \ t\leq t_0. \end{align} Now, recall the definition of $ G^{Q}(t) $ from \eqref{eq:blt}. Combining \eqref{eq:etaRup:cmp} and $ \overline{L}_{t_0,x_0+1,v}(t_0)=\overline{R}_\lambda(t_0)+1 $, we see that the second claim~\eqref{eq:shcmp:} holds. \end{proof} The next step is to prove an upper bound on $ G^{\overline{R}_\lambda}(t) $. To prepare for this, we first establish a few elementary bounds on the range of various variables related to the processes $ \overline{R}_\lambda $, $ \overline{T}_\lambda $ and $ F $. \begin{lemma} \label{lem:techbd} Let $ \lambda>0 $. The following holds with probability $ \to 1 $ as $ \varepsilon\to 0 $: \begin{align} \label{eq:ict:bd} & F(x) < \varepsilon^{-\gamma-a}, \quad \forall x\leq\varepsilon^{-1-a}, \\ \label{eq:ict:cnti} &|F(x)-F(y)| < \tfrac{\lambda}{4}\varepsilon^{-\gamma}, \quad \forall x\in[0,\varepsilon^{-1-a}]\cap\mathbb{Z}, \ y\in [x-\varepsilon^{-\gamma'},x]\cap\mathbb{Z}_{\geq 0}, \\ \label{eq:Tupe-1} &\overline{T}_\lambda(\varepsilon^{-1-a}) < \varepsilon^{-1-\gamma-3a}, \\ \label{eq:etaQ} &|\eta(t,\overline{R}_\lambda(t)+1)| < \tfrac{\lambda}{16} \varepsilon^{-\gamma}, \ \forall t < \overline{T}_\lambda(\varepsilon^{-1-a}), \\ \label{eq:x*-x**} & x_{**}-x_{*} > \varepsilon^{-1+a}, \\ \label{eq:Tupx**} &\overline{T}_\lambda(x_{**}) > \varepsilon^{-\gamma-1+2a}, \\ \label{eq:Rup0} &\overline{R}_\lambda(0) \geq \lambda\varepsilon^{-1}. \end{align} \end{lemma} \begin{proof} The proof of each inequality is given sequentially as follows. \begin{itemize}[leftmargin=3ex] \item Since, by definition, $ F(0)=0 $, using~\eqref{eq:ic:den1} for $ (x_1,x_2)=(0,x) $ and $ r=\varepsilon^{-\gamma-a}/|x|^{\gamma} $ gives \begin{align*} \mathbf{P} \big( F(x) > \varepsilon^{-\gamma-a} \big) \leq C_* \exp\big(-(\tfrac{\varepsilon^{-\gamma-a}}{x^{\gamma}})^{a_*}\big). \end{align*} Taking the union bound of this over $ x\in[1,\varepsilon^{-1-a}]\cap\mathbb{Z} $, using $ \varepsilon^{-\gamma-a}{x^{-\gamma}}\geq \varepsilon^{-(1-\gamma)a} $, we have that \begin{align*} \mathbf{P} \big( F(x) > \varepsilon^{-\gamma-a} \text{ for some } x\in[0,\varepsilon^{-1-a}]\cap\mathbb{Z} \big) \leq C_* \varepsilon^{-1-a}\exp\big(-\varepsilon^{-(1-\gamma)aa_*}\big) \longrightarrow 0. \end{align*} \item Let $ \gamma'':=\gamma(1-\gamma')-a>0 $ (by~\eqref{eq:a:main}). Using~\eqref{eq:ic:den1} for $ (x_1,x_2)=(y,x) $ and $ r=\varepsilon^{-\gamma''} $, we have that \begin{align*} \mathbf{P} \big( |F(x)-F(y)| \leq \varepsilon^{-\gamma''}|x-y|^\gamma \big) \geq 1 - C_* e^{-\varepsilon^{-a_*\gamma''}}. \end{align*} Take the union bound of this over $ x\in[0,\varepsilon^{-1-a}]\cap\mathbb{Z} $ and $ y\in[x-\varepsilon^{-\gamma'},x]\cap\mathbb{Z}_{\geq 0} $. Further, by $ \varepsilon^{-\gamma''}|x-y|^\gamma \leq \varepsilon^{-\gamma''-\gamma\gamma'} = \varepsilon^{-\gamma+a} = \varepsilon^{a}\varepsilon^{-\gamma} $ and $ a>0 $ (by~\eqref{eq:a:main}), we have that $ \varepsilon^{-\gamma''}|x-y|^\gamma < \frac{\lambda}{4}\varepsilon^{-\gamma} $, for all $ \varepsilon $ small enough, and hence \eqref{eq:ict:cnti} holds.
\item Using \eqref{eq:ict:bd} in \eqref{eq:Tup}, we obtain \begin{align*} \overline{T}_\lambda(\varepsilon^{-1-a}) &\leq v^{-1}_*(x_{**}-x_*) + 2\varepsilon^{-1-a}\varepsilon^{-\gamma-a} \\ &\leq \varepsilon^{-\gamma+a} \lambda \varepsilon^{-1} + 2\varepsilon^{-1-\gamma-2a} \leq (2+\lambda) \varepsilon^{-1-\gamma-2a}, \end{align*} with probability $ \to 1 $ as $ \varepsilon\to 0 $. Further using $ (2+\lambda) \varepsilon^{-1-\gamma-2a} < \varepsilon^{-1-\gamma-3a} $, for all $ \varepsilon $ small enough, we conclude \eqref{eq:Tupe-1}. \item The condition~$ t<\overline{T}_\lambda(\varepsilon^{-1-a}) $ implies $ \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $ and (by \eqref{eq:Tupe-1}) $ t<\varepsilon^{-1-\gamma-3a} $. With these bounds on the range of $ (t,\overline{R}_\lambda(t)) $, we see that \eqref{eq:etaQ} follows from \eqref{eq:eta:bd}. \item Since $ \varepsilon^{\gamma} F(\varepsilon^{-1}\Cdot) \Rightarrow \mathcal{F}(\Cdot)\in C[0,\infty) $ under $ \mathscr{U} $, from the definition~\eqref{eq:x**} of $ x_{**} $, we see that the bound \eqref{eq:x*-x**} holds with probability $ \to 1 $. \item Combining \eqref{eq:x*-x**} with \eqref{eq:Tup} yields \eqref{eq:Tupx**}. \item By \eqref{eq:x*}, $ x_* \geq \lambda\varepsilon^{-1} $. Consequently, $ \overline{T}_\lambda(\xi)=0 $, $ \forall \xi\leq \lambda \varepsilon^{-1} $, and hence the inequality~\eqref{eq:Rup0} holds. \end{itemize} Having proven all claimed inequalities, we complete the proof. \end{proof} \begin{lemma} \label{lem:blt:Rup} Let $ \lambda>0 $. We have \begin{align} \label{eq:blt:Rup} & \lim_{\varepsilon\to 0} \mathbf{P} \big( G^{\overline{R}_\lambda}(t) < F(\overline{R}_\lambda(t)) - \tfrac{\lambda}{16}\varepsilon^{-\gamma} \text{ whenever } \overline{R}_\lambda(t)\in (x_{**},\varepsilon^{-1-a}] \big) = 1, \\ & \label{eq:blt:Rup<} \lim_{\varepsilon\to 0} \mathbf{P} \big( G^{\overline{R}_\lambda}(t) < F(\overline{R}_\lambda(t)) - \tfrac{\lambda}{16}\varepsilon^{-\gamma} \text{ whenever } \overline{R}_\lambda(t)\in [0,x_{**}\wedge\varepsilon^{-1-a}] \big) = 1. \end{align} \end{lemma} \begin{proof} To simplify notations, we let $ x:= \overline{R}_\lambda(t) $. We consider first the case $ x \in (x_{**},\varepsilon^{-1-a}] $ and prove~\eqref{eq:blt:Rup}. In Figure~\ref{fig:lem2}, we show schematic figures of the graphs of $ \overline{R}_\lambda $ and $ \overline{T}_{\lambda} $, together with their complete graph CG($ \overline{R}_\lambda $)$ = $CG($ \overline{T}_\lambda $). As shown in Figure~\ref{fig:lem2-1}, the graph of $ \overline{R}_\lambda $ consists of vertical line segments, so $ (x,t) $ necessarily sits on a vertical segment. Referring to Figure~\ref{fig:lem2-2}, we see that $ \overline{T}_\lambda(x)>\overline{T}_\lambda(x-1) $. This is possible only if (see~\eqref{eq:Tup}) \begin{align} \label{eq:ict>e-1} F(x) \geq \lambda\varepsilon^{-\gamma}. \end{align} \begin{figure} \caption{Graph of $ \overline{R}_\lambda $} \label{fig:lem2-1} \caption{Graph of $ \overline{T}_{\lambda} $} \label{fig:lem2-2} \caption{Graphs of $ \overline{R}_\lambda $ and $ \overline{T}_{\lambda} $, together with their complete graph (dashed line).} \label{fig:lem2} \end{figure} Given this lower bound on $ F(x) $, we now define $ v := 1/(2(F(x)-\frac{1}{4}\lambda\varepsilon^{-\gamma})) $, and consider the truncated linear trajectory $ \overline{L}(\Cdot):= \overline{L}_{t,x+1,v}(\Cdot) $ passing through $ (t,x+1) $ with velocity $ v $.
The first step of proving \eqref{eq:blt:Rup} is to compare $ G^{\overline{R}_\lambda}(t) $ with $ G^{\overline{L}}(t) $, by using Lemma~\ref{lem:cmpCrit}. For Lemma~\ref{lem:cmpCrit} to apply, we first verify the relevant condition~\eqref{eq:shcmp:criterion}. To this end, use \eqref{eq:ict:cnti} to obtain that \begin{align} \notag 2 [ F(y')-\tfrac{\lambda}{2}\varepsilon^{-\gamma} ]_+ &\leq 2 [ F(x)-\tfrac{\lambda}{2}\varepsilon^{-\gamma}+\tfrac{\lambda}{4}\varepsilon^{-\gamma} ]_+ \\ \label{eq:ict:cnti:bd} &= 2 (F(x)-\tfrac{\lambda}{4}\varepsilon^{-\gamma}), \quad \hbox{whenever} \quad x-\lfloor\varepsilon^{-\gamma'}\rfloor\leq y' \leq x, \end{align} where the equality follows by \eqref{eq:ict>e-1}. The expression in \eqref{eq:ict:cnti:bd} equals $ v^{-1} $, so summing \eqref{eq:ict:cnti:bd} over $ y'\in(y,x] $, for any fixed $ y\in(x-\varepsilon^{-\gamma'},x] $, yields \begin{align*} \overline{T}_\lambda(x)-\overline{T}_\lambda(y) \leq v^{-1}(x-y), \quad \forall x-\lfloor\varepsilon^{-\gamma'}\rfloor\leq y' \leq x. \end{align*} This verifies the condition \eqref{eq:shcmp:criterion}. We now apply Lemma~\ref{lem:cmpCrit} to conclude that \begin{align*} G^{\overline{R}_\lambda}(t) \leq G^{\overline{L}}(t) + \eta(t,\overline{R}_\lambda(t)+1). \end{align*} Further using \eqref{eq:etaQ}, after ignoring events of probability $ \to 0 $, we obtain that \begin{align} \label{eq:bltbd:bltLup} G^{\overline{R}_\lambda}(t) \leq G^{\overline{L}}(t) + \tfrac{\lambda}{16}\varepsilon^{-\gamma}. \end{align} for all $ \varepsilon $ small enough. The next step is to apply the estimates~\eqref{eq:blt:cnt:up} to the term $ G^{\overline{L}}(t) $ in \eqref{eq:bltbd:bltLup}. For \eqref{eq:blt:cnt:up} to apply, we first establish bounds on the range of the variables $ (t,x,v) $. Under the current consideration $ x_{**}< \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $, we have $ \overline{T}_\lambda(x_{**}) \leq t \leq \overline{T}_\lambda(\varepsilon^{-1-a}) $. Combining this with \eqref{eq:Tupe-1} and \eqref{eq:Tupx**} yields \begin{align} \label{eq:Rup:trg} \varepsilon^{-1-\gamma+2a} \leq t \leq \varepsilon^{-1-\gamma-3a}. \end{align} Next, With $ x:=\overline{R}_\lambda(t) $, we have $ x\leq\varepsilon^{-1-a} $ and $ x>x_{**} \geq x_{**}-x_{*} $. Combining the last equality with \eqref{eq:x*-x**} yields $ x \geq \varepsilon^{-1+a} $, so \begin{align} \label{eq:Rup:xrg} \varepsilon^{-\gamma'-a}\leq\varepsilon^{-1+a} \leq x \leq \varepsilon^{-1-a}. \end{align} As for $ v := 1/(2(F(x)-\frac{1}{4}\lambda\varepsilon^{-\gamma})) $, by \eqref{eq:ict:bd} and \eqref{eq:ict>e-1} we have \begin{align} \label{eq:Rup:vrg} \varepsilon^{\gamma+a} \leq v \leq \tfrac{2}{3\lambda}\varepsilon^{\gamma}, \end{align} for all $ \varepsilon $ small enough. Combining \eqref{eq:Rup:trg} and \eqref{eq:Rup:vrg}, followed by using $ \frac{1-\gamma-3a}{2}>a $ (since $ a<\frac{1-\gamma}{5} $, by~\eqref{eq:a:main}), we have \begin{align} \label{eq:Rup:vtrg} v\sqrt{t} \geq \tfrac{2}{3\lambda} \varepsilon^{\frac{-1+\gamma+3a}{2}} \geq \varepsilon^{-a}, \end{align} for all $ \varepsilon $ small enough. Recalling the definition of $ \Sigma_\varepsilon(a) $ from \eqref{eq:tv:cnd}--\eqref{eq:supdiff}, equipped with the bounds \eqref{eq:Rup:trg}--\eqref{eq:Rup:vtrg} on the range of $ (t,x,v) $, one readily verifies that $ (t,x,v) \in \Sigma_\varepsilon(\frac{a}{3}) $. 
With this, we now apply \eqref{eq:blt:cnt:up} to obtain \begin{align} \label{eq:bltbd:v} G^{\overline{L}}(t) \leq \tfrac{1}{2v} + 4v^{-1+1/C} \leq F(x) - \tfrac{1}{4}\lambda\varepsilon^{-\gamma} + 4 (\tfrac{3\lambda}{2}\varepsilon^{\gamma})^{-1+1/C} \leq F(x) - \tfrac{1}{8}\lambda\varepsilon^{-\gamma}, \end{align} for all $ \varepsilon $ small enough. Inserting \eqref{eq:bltbd:v} into \eqref{eq:bltbd:bltLup} yields the desired result~\eqref{eq:blt:Rup}. We next consider the case $ x=\overline{R}_\lambda(t)\leq x_{**}\wedge\varepsilon^{-1-a} $, and prove \eqref{eq:blt:Rup<}. Under the current consideration $ x\leq x_{**} $, from the definition~\eqref{eq:Tup} of $ \overline{T}_\lambda $ we have $ \overline{T}_\lambda(x)-\overline{T}_\lambda(y) \leq v_*^{-1}(x-y) $, $ \forall y\in[0,x] $. With this, letting $ \overline{L}' := \overline{L}_{t,x+1,v_*} $, using the same argument for deriving \eqref{eq:bltbd:bltLup} based on Lemma~\ref{lem:cmpCrit}, here we have \begin{align} \label{eq:bltbd:bltLup:<} G^{\overline{R}_\lambda}(t) \leq G^{\overline{L}'}(t) + \tfrac{\lambda}{16}\varepsilon^{-\gamma}. \end{align} Similarly to the preceding, the next step is to apply \eqref{eq:blt:<} for bounding $ G^{\overline{L}'}(t) $. Recall the definition of $ \widetilde{\Sigma}_\varepsilon(a) $ from \eqref{eq:tv:cnd:}. Since $ v_*:=\varepsilon^{\gamma-a} $ and $ a<\frac{\gamma}{2} $ (by~\eqref{eq:a:main}), we have $ v_*\in[\varepsilon^{\gamma+a},\varepsilon^{a}] $. From this and \eqref{eq:Tupe-1}, we see that $ (t,x,v_*)\in\widetilde{\Sigma}_\varepsilon(\frac{a}{3}) $. With this, we apply \eqref{eq:blt:<} to conclude that $ G^{\overline{L}'}(t) \leq \varepsilon^{-\frac{a}{3}}v^{-1}_* = \varepsilon^{-\gamma+\frac{2a}{3}} $. Inserting this bound into \eqref{eq:bltbd:bltLup:<} yields \begin{align} \label{eq:bltRup>gamma4} G^{\overline{R}_\lambda}(t) \leq \varepsilon^{-\gamma+\frac{2a}{3}} +\tfrac{\lambda}{16}\varepsilon^{-\gamma} \leq \tfrac{\lambda}{4}\varepsilon^{-\gamma}, \end{align} for all $ \varepsilon $ small enough. On the other hand, with $ x_{**} $ defined in~\eqref{eq:x**}, we have $ F(x) \geq \frac{\lambda}{2}\varepsilon^{-\gamma} $. Combining this with \eqref{eq:bltRup>gamma4} yields the desired result~\eqref{eq:blt:Rup<}. \end{proof} We are now ready to prove that $ \overline{R}_\lambda $ serves as an upper bound of $ R $. \begin{proposition} \label{prop:Rup>R} Let $ \lambda>0 $. We have that \begin{align} \label{eq:NRup<Rup} \lim_{\varepsilon\to 0} \mathbf{P} \big( N^{\overline{R}_\lambda}(t) \leq \overline{R}_\lambda(t) \text{ whenever } \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} \big) = 1. \end{align} In particular, by Proposition~\ref{prop:mono}, \begin{align*} \lim_{\varepsilon\to 0} \mathbf{P} \big( R(t) \leq \overline{R}_\lambda(t) \text{ whenever } \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} \big) =1. \end{align*} \end{proposition} \begin{proof} Recall the decomposition of $ N^Q(t) $ from \eqref{eq:N:dcmp}. Applying Lemma~\ref{lem:blt:Rup} within this decomposition, after ignoring events of small probability, we have \begin{align} \label{eq:Rup:goal} N^{\overline{R}_\lambda}(t) \leq \overline{R}_\lambda(t) -\tfrac{\lambda}{16}\varepsilon^{-\gamma} + M(t,\overline{R}_\lambda(t)), \end{align} for all $ \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $. The next step is to apply Proposition~\ref{prop:mgtbd} and bound the noise term $ M(t,\overline{R}_\lambda(t)) $.
The condition $ \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $ implies $ t<\overline{R}_\lambda(\varepsilon^{-1-a}) $, so by \eqref{eq:Tupe-1} we have $ t \leq \varepsilon^{-1-\gamma-3a} $. Combining this with \eqref{eq:Rup0}, and using $ a<\frac{1-\gamma}{5} $, we obtain \begin{align*} \frac{\overline{R}_\lambda(t)}{\sqrt{t}} \geq \frac{\overline{R}(0)}{\sqrt{t}} \geq \lambda \varepsilon^{-\frac12(1-\gamma-3a)} \geq \varepsilon^{-a}, \end{align*} for all $ \varepsilon $ small enough. With these bounds on the range of $ (t,\overline{R}_\lambda(t),\overline{R}_\lambda(t)/\sqrt{t}) $, we apply Proposition~\ref{prop:mgtbd} to the noise term $ M(t,\overline{R}_\lambda(t)) $ to obtain that $ |M(t,\overline{R}_\lambda(t))| \leq 6\varepsilon^{-a}(1+t)^{\frac{\gamma}{2}\vee\frac{1}{4}} $, $ \forall \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $. Further, with $ t \leq \varepsilon^{-1-\gamma-3a} $ and $ a<\frac{1-\gamma}{4}\wedge \frac{3\gamma-1}{4} $, we have that $ 6\varepsilon^{-a}(1+t)^{\frac{\gamma}{2}\vee\frac{1}{4}} \leq \frac{1}{32}\varepsilon^{-\gamma} $, for all $ \varepsilon $ small enough, and therefore $ |M(t,\overline{R}_\lambda(t))| \leq \frac{1}{32}\varepsilon^{-\gamma} $, $ \forall \overline{R}_\lambda(t) \leq \varepsilon^{-1-a} $. Inserting this bound into \eqref{eq:Rup:goal} yields the desired result~\eqref{eq:NRup<Rup}. \end{proof} \subsection{The lower bound} \label{sect:pfmain:lw} Similarly to the construction of $ \overline{R}_\lambda $, here the process $ \underline{R}_\lambda(t) $ is constructed via the corresponding hitting time process $ \underline{T}_\lambda(\xi) $. Let $ y_* := \lceil\varepsilon^{-1}\rceil $. For each $ x\in\mathbb{Z}_{\geq 0} $, we define \begin{align} \label{eq:Tlw} \underline{T}_\lambda(x) := y_*\lambda\varepsilon^{-\gamma} + \sum_{0< y \leq x} \big( \lambda\varepsilon^{-\gamma}+ 2[F(y)]_+ \big), \end{align} and extend $ \underline{T}_\lambda(\Cdot) $ to $ [0,\infty) $ by letting $ \underline{T}_\lambda(\xi) := \underline{T}_\lambda(\lfloor\xi\rfloor) $. We then define $ \underline{R}_\lambda := \iota(\underline{T}_\lambda) $. The general strategy is the same as in Section~\ref{sect:pfmain:up}: we aim at showing $ N^{\underline{R}_\lambda}(t) \geq \underline{R}_\lambda(t) $, by using a comparison with a truncated linear trajectory $ \underline{L}_{t,x,v} $ and applying \eqref{eq:blt:cnt:lw}. The major difference here is the relevant regime of $ (t,\underline{R}_\lambda(t)) $. Unlike in Section~\ref{sect:pfmain:up}, where we treat separately the longer-time regime (corresponding to $ \Sigma_\varepsilon(a) $ defined in \eqref{eq:tv:cnd}--\eqref{eq:supdiff}) and short-time regime (corresponding to $ \widetilde{\Sigma}_\varepsilon(a) $ defined in \eqref{eq:tv:cnd:}), here we simply \emph{avoid} the short time regime. Indeed, since $ \underline{T}_\lambda(\xi) \geq y_*\lambda\varepsilon^{-\gamma} $ (by \eqref{eq:Tlw}), we have $ \underline{R}_\lambda(t)=0 $, $ \forall t <y_*\lambda\varepsilon^{-\gamma} $, and therefore \begin{align} \label{eq:NRlw:trivial} N^{\underline{R}_\lambda}(t) \geq \underline{R}_\lambda(t)=0, \quad \forall t\in [0, y_*\lambda\varepsilon^{-\gamma}). \end{align} Given \eqref{eq:NRlw:trivial}, it thus suffices to consider $ t \geq y_*\lambda\varepsilon^{-\gamma} $, whereby the condition $ t \geq 1 $ in the longer-time regime \eqref{eq:tv:cnd} holds. On the other hand, within the longer-time regime, we need also the condition $ \underline{R}_\lambda(t) =:x\geq \varepsilon^{-\gamma'+a} $ in~\eqref{eq:tv:cnd} to hold. 
This, however, fails when $ t $ is close to $ y_*\lambda \varepsilon^{-\gamma} $, as can be seen from \eqref{eq:Tlw}. We circumvent the problem by considering a `shifted' and `linearly extrapolated' system $ (\widetilde{\eta},\widetilde{\underline{R}}_\lambda,\widetilde{\underline{T}}_\lambda) $, described as follows. First, we shift the entire $ \eta $-particle system, as well as $ \underline{R}_\lambda $ and $ \underline{T}_\lambda $, by $ y_* $ in space, i.e., \begin{align*} \eta^\to(t,x):=\eta(t,x-y_*), \quad \underline{R}^\to_\lambda(t) := \underline{R}_\lambda(t)+y_*, \quad \underline{T}^\to_\lambda(\xi) := \underline{T}_\lambda(\xi-y_*). \end{align*} Subsequently, we consider the modified initial condition \begin{align} \label{eq:Rlw:mdyic} \widetilde\eta^\text{ic}(x) := \mathbf{1}_{(0,y_*]}(x) + \eta^{\text{ic}}(x-y_*), \end{align} where we place one particle at each site of $ (0,y_*]\cap\mathbb{Z} $. Let \begin{align} \label{eq:tilict} \widetilde{F}(x) := \sum_{y\in(0,x]} (1-\widetilde{\eta}^\text{ic}(y)) \end{align} denote the corresponding centered distribution function. From such $ \widetilde{F} $, we construct the following hitting time process $ \widetilde{\underline{T}}_\lambda(\Cdot) $: \begin{align} \label{eq:tilTlw} \widetilde{\underline{T}}_\lambda(x) &:= \sum_{0< y \leq x} \Big( \lambda\varepsilon^{-\gamma}+2[\widetilde{F}(y)]_+ \Big), \\ \notag \widetilde{\underline{T}}_\lambda(\xi) &:= \widetilde{\underline{T}}_\lambda(\lfloor\xi\rfloor), \end{align} and let $ \widetilde{\underline{R}}_\lambda := \iota(\widetilde{\underline{T}}_\lambda) $. To see how $ \widetilde{\underline{R}}_\lambda $ and $ \widetilde{\underline{T}}_\lambda $ are related to $ \underline{R}_\lambda $ and $ \underline{T}_\lambda $, with $ \widetilde{\eta} $ and $ \widetilde{F} $ defined as in \eqref{eq:Rlw:mdyic}--\eqref{eq:tilict}, we note that $ \widetilde{F}(x) = 0 $, $ \forall x\in(0,y_*] $ and that $ \widetilde{F}(x) = F(x-y_*) $, $ \forall x>y_* $. From this we deduce \begin{align} \label{eq:tilTlw::} \widetilde{\underline{T}}_\lambda(\xi) &= \left\{\begin{array}{l@{,}l} \lambda\varepsilon^{-\gamma}\lfloor \xi \rfloor &\text{ for } \xi\in [0,y_*), \\ \underline{T}_\lambda(\xi-y_*) &\text{ for } \xi \geq y_*, \end{array}\right. \\ \label{eq:tilRlw} \widetilde{\underline{R}}_\lambda(t) &= \left\{\begin{array}{l@{,}l} \displaystyle \lfloor t/(\lambda\varepsilon^{-\gamma})\rfloor +1 &\text{ for } t\in [0,y_*\lambda\varepsilon^{-\gamma}), \\ \underline{R}_\lambda(t)+y_* &\text{ for } t \geq y_*\lambda\varepsilon^{-\gamma}. \end{array}\right. \end{align} From this, we see that $ \widetilde{\underline{R}}_\lambda $ and $ \widetilde{\underline{T}}_\lambda $ are indeed shifted and linearly extrapolated versions of $ \underline{R}_\lambda $ and $ \underline{T}_\lambda $, respectively. We consider further the free particle system $ \widetilde{\eta} $ starting from the modified initial condition $ \widetilde{\eta}^\text{ic} $ \eqref{eq:Rlw:mdyic}. The systems $ \widetilde{\eta} $ and $ \eta^\to $ are coupled in the natural way such that all particles starting from $ \mathbb{Z}\cap(y_*,\infty) $ evolve exactly the same in both systems, and those $ \widetilde{\eta} $-particles starting in $ (0,y_*] $ run independently of the $ \eta $-particles. Having constructed the modified system $ (\widetilde{\eta},\widetilde{\underline{R}}_\lambda, \widetilde{\underline{T}}_\lambda) $, we proceed to explain how analyzing this modified system helps to circumvent the previously described problem.
To this end, we let $ \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) $ denote the analogous quantity for the modified system, namely the total number of $ \widetilde{\eta} $-particles absorbed into $ \widetilde{\underline{R}}_\lambda $ up to time $ t $. By considering the extreme case where all $ \widetilde{\eta} $-particles starting in $ (0,y_*] $ have been absorbed by a given time $ t $, we have that \begin{align} \label{eq:N:Rlw:tilR} N^{\underline{R}_\lambda}(t) \geq \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) - \sum_{0<x\leq y_*} \widetilde{\eta}^\text{ic}(x) = \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) - y_*. \end{align} By \eqref{eq:tilRlw}, we have $ \underline{R}_\lambda(t)+y_* =\widetilde{\underline{R}}_\lambda(t) $, $ \forall t \geq y_*\lambda\varepsilon^{-\gamma} $. With this, subtracting $ \underline{R}_\lambda(t) $ from both sides of \eqref{eq:N:Rlw:tilR}, we arrive at \begin{align} \label{eq:NRlw:cmp} N^{\underline{R}_\lambda}(t) - \underline{R}_\lambda(t) \geq \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) - \widetilde{\underline{R}}_\lambda(t), \quad \forall t \geq y_*\lambda\varepsilon^{-\gamma}. \end{align} Recall that our goal is to show $ N^{\underline{R}_\lambda}(t) \geq \underline{R}_\lambda(t) $. For $ t<y_*\lambda\varepsilon^{-\gamma} $, we already have \eqref{eq:NRlw:trivial}. For $ t \geq y_*\lambda\varepsilon^{-\gamma} $, by \eqref{eq:NRlw:cmp}, it suffices to show the analogous property $ \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) - \widetilde{\underline{R}}_\lambda(t) \geq 0 $ for the modified system. For the case $ t \geq y_*\lambda\varepsilon^{-\gamma} $, unlike $ \underline{R}_\lambda(t) $, the modified process satisfies $ \widetilde{\underline{R}}_\lambda(t) = \underline{R}_\lambda(t)+y_* \geq y_* $, so the aforementioned condition $ x \geq \varepsilon^{-\gamma'+a} $ (in \eqref{eq:tv:cnd}) does hold for $ x=\widetilde{\underline{R}}_\lambda(t) $. That is, under the shifting by $ y_* $, the modified process $ \widetilde{\underline{R}}_\lambda $ bypasses the aforementioned problem regarding the range of $ \underline{R}_\lambda(t) $, and with a linear extrapolation, the modified system $ (\widetilde{\eta},\widetilde{\underline{R}}_\lambda,\widetilde{\underline{T}}_\lambda) $ links to the original system $ (\eta,\underline{R}_\lambda,\underline{T}_\lambda) $ via the inequality~\eqref{eq:NRlw:cmp} to provide the desired lower bound. In view of the preceding discussion, we hereafter focus on the modified system $ (\widetilde{\eta},\widetilde{\underline{R}}_\lambda,\widetilde{\underline{T}}_\lambda) $ and establish the relevant inequalities. Recall that $ \underline{L}_{t_0,x_0,v} $ is the truncated linear trajectory defined as in \eqref{eq:Llw}, and that $ \gamma'\in(\frac{\gamma+1}{2},1) $ is a fixed parameter. Similarly to Lemma~\ref{lem:cmpCrit}, here we have: \begin{lemma} \label{lem:cmpCrit:} Fixing $ (t_0,v)\in [y_*\lambda\varepsilon^{-\gamma},\infty)\times(0,\infty) $, we let $ x_0:=\widetilde{\underline{R}}_\lambda(t_0) $. If \begin{align} \label{eq:shcmp:criterion:lw} \widetilde{\underline{T}}_\lambda(x_0-1)-\widetilde{\underline{T}}_\lambda(y) \geq v^{-1} (x_0-1-y), \quad \forall y\in [x_0-1-\varepsilon^{-\gamma'},x_0-1], \end{align} then we have that \begin{align} \label{eq:shcmp:lw} \widetilde{\underline{R}}_\lambda(t) &\geq \underline{L}_{t_0,x_0-1,v}(t),\quad \forall t\leq t_0, \\ \label{eq:shcmp:lw:} G^{\widetilde{\underline{R}}_\lambda}(t_0) &\geq G^{\underline{L}_{t_0,x_0-1,v}}(t_0) -\eta(t_0,x_0). \end{align} \end{lemma} \begin{proof} Similarly to Lemma~\ref{lem:cmpCrit}, we include a schematic figure to facilitate the proof.
In Figure~\ref{fig:lem3}, we show the complete graphs CG$ (\widetilde{\underline{T}}_\lambda) $ and CG$ (\underline{L}_{t_0,x_0-1,v}) $ of $ \widetilde{\underline{T}}_\lambda $ and $ \underline{L}_{t_0,x_0-1,v} $. The gray crosses in the figure label points of the form $ (x,\widetilde{\underline{T}}_\lambda(x)) $, $ x\in\mathbb{Z}_{\geq 0} $. The dashed lines both have slope $ v^{-1} $: the black one passes through $ (x_0-1,\widetilde{\underline{T}}_\lambda(x_0-1)) $, while the blue one passes through $ (x_0-1,t_0) $. The given assumption~\eqref{eq:shcmp:criterion:lw} translates into \begin{center} the gray crosses lie below the black dashed line, for all $ x\in [x_0- \lfloor\varepsilon^{-\gamma'}\rfloor-1,x_0-1]$. \end{center} From this it is now readily verified that CG$ (\underline{L}_{t_0,x_0-1,v}) $ lies above CG$ (\widetilde{\underline{T}}_\lambda) $ within $ \xi\in[x_0 -1- \lfloor\varepsilon^{-\gamma'}\rfloor,x_0] $. Further, by definition (see~\eqref{eq:Llw}), CG($ \underline{L}_{t_0,x_0-1,v} $) sits at level $ t_0-v^{-1}\lfloor\varepsilon^{-\gamma'}\rfloor $ for $ \xi\in(0,x_0 - \lfloor\varepsilon^{-\gamma'}\rfloor) $, as shown in Figure~\ref{fig:lem3}. Hence, CG$ (\underline{L}_{t_0,x_0-1,v}) $ lies above CG$ (\widetilde{\underline{T}}_\lambda) $ for all $ \xi\in[0,x_0] $, which gives \eqref{eq:shcmp:lw}. \begin{figure} \caption{Proof of~\eqref{eq:shcmp:lw}} \label{fig:lem3} \end{figure} As for~\eqref{eq:shcmp:lw:}, recall from~\eqref{eq:shade} that $ A_Q(t) $ denotes the shaded region of a given process $ Q $, and that $ \eta^Q $ denotes the particle system constructed from $ \eta $ by deleting all the $ \eta $-particles which have visited $ A_Q(t) $ up to a given time $ t $. By \eqref{eq:shcmp:lw}, we have $ A_{\widetilde{\underline{R}}_\lambda}(t_0) \supset A_{\underline{L}_{t_0,x_0-1,v}}(t_0) $, which gives $ \eta^{\widetilde{\underline{R}}_{\lambda}}(t,x) \leq \eta^{\underline{L}_{t_0,x_0-1,v}}(t,x), $ $ \forall x\in\mathbb{Z} $, $ t\leq t_0 $. Given this, referring to the definition~\eqref{eq:blt} of $ G^{Q}(t) $, we see that~\eqref{eq:shcmp:lw:} follows. \end{proof} Let $ \widetilde{G}^Q(t) $ and $ \widetilde{M}(t,x) $ denote the analogous boundary layer term and martingale term of the modified particle system $ \widetilde{\eta} $. Indeed, the initial condition~\eqref{eq:Rlw:mdyic} satisfies all the conditions in Definition~\ref{def:ic}, with the same constants $ \gamma,a_*,C_* $ and with the limiting distribution $ \widetilde{\mathcal{F}}(\xi) := \mathcal{F}(\xi-1)\mathbf{1}_\set{\xi\geq 1} $. Consequently, the bounds established in Proposition~\ref{prop:mgtbd} and Lemma~\ref{lem:techbd} apply equally well to the $ \widetilde{\eta} $-system, giving \begin{lemma} \label{lem:techbd:} Recall the definition of $ \Xi_\varepsilon(a) $ from \eqref{eq:Xi}. For any fixed $ \lambda>0 $, the following holds with probability $ \to 1 $ as $ \varepsilon\to 0 $: \begin{align} & \label{eq:mgt:bd:hat} |\widetilde{M}(t,\widetilde{\underline{R}}_\lambda(t))| < \tfrac{\lambda}{8}\varepsilon^{-\gamma}, \ \forall t\leq \varepsilon^{-1-\gamma-3a}, \\ \label{eq:ict:bd:hat} & \widetilde{F}(x) < \varepsilon^{-\gamma-a}, \quad \forall x\leq 2\varepsilon^{-1-a}, \\ \label{eq:ict:cnti:hat} &|\widetilde{F}(x)-\widetilde{F}(y)| < \tfrac{\lambda}{4}\varepsilon^{-\gamma}, \quad \forall x\in(0,2\varepsilon^{-1-a}]\cap\mathbb{Z}, \ y\in [x-\varepsilon^{-\gamma'},x]\cap\mathbb{Z}, \\ \label{eq:Tupe-1:hat} &\widetilde{\underline{T}}_\lambda(2\varepsilon^{-1-a}) < \varepsilon^{-1-\gamma-3a}.
\end{align} \end{lemma} Equipped with Lemmas~\ref{lem:cmpCrit:} and \ref{lem:techbd:}, we proceed to establish an lower bound on $ \widetilde{G}^{\widetilde{\underline{R}}_\lambda}(t) $. \begin{lemma} \label{lem:blt:Rlw} Let $ \lambda>0 $. We have that \begin{align} \label{eq:blt:Rlw} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \widetilde{G}^{\widetilde{\underline{R}}_\lambda}(t) \geq F(\widetilde{\underline{R}}_\lambda(t)) + \tfrac{\lambda}{8}\varepsilon^{-\gamma} \text{ whenever } \widetilde{\underline{R}}_\lambda(t)\in [y_*,y_*+\varepsilon^{-1-a}] \Big) = 1. \end{align} \end{lemma} \begin{proof} We write $ x:= \widetilde\underline{R}_\lambda(t) $ to simplify notations. Letting \begin{align} \label{eq:v:blt:Rlw} v := \frac{1}{2([F(x)]_++\frac{1}{4}\lambda\varepsilon^{-\gamma})}, \end{align} we consider the truncated linear trajectory $ \underline{L}(\Cdot):= \underline{L}_{t',x,v}(\Cdot) $ passing through $ (t,x) $ with velocity~$ v $. Our aim is to compare $ \widetilde{G}^{\widetilde{\underline{R}}_\lambda}(t) $ with $ \widetilde{G}^{\widetilde{\underline{L}}}(t) $, by using Lemma~\ref{lem:cmpCrit:}. To this end, we use \eqref{eq:ict:cnti:hat} to write \begin{align} \notag 2 ([F(y')]_++\tfrac{\lambda}{2}\varepsilon^{-\gamma}) &\geq 2 ([F(x) -\tfrac{\lambda}{4}\varepsilon^{-\gamma}]_+ +\tfrac{\lambda}{2}\varepsilon^{-\gamma}) \\ \label{eq:ict:cnti:bd:hat} & \geq 2 ( [F(x)]_+ +\tfrac{\lambda}{4}\varepsilon^{-\gamma}) = v^{-1}, \quad \forall y'\in[x-1-\varepsilon^{-\gamma'},x-1]\cap\mathbb{Z}. \end{align} With $ \widetilde{\underline{T}}_\lambda $ defined in \eqref{eq:tilTlw}, summing the result over $ y'\in(y,x-1] $, for any fixed $ y\in[x-1-\varepsilon^{-\gamma'},x-1] $, we arrive at \begin{align*} \widetilde{\underline{T}}_\lambda(x-1) - \widetilde{\underline{T}}_\lambda(y) \geq v^{-1}(x-1-y), \quad \forall y\in[x-1-\varepsilon^{-\gamma'},x-1]\cap\mathbb{Z}. \end{align*} Given this, applying Lemma~\ref{lem:cmpCrit:}, we obtain \begin{align} \label{eq:bltbd:bltLup:hat} \widetilde{G}^{\widetilde{\underline{R}}_\lambda}(t) \geq \widetilde{G}^{\underline{L}}(t) - \eta(t,x). \end{align} The next step is to bound the r.h.s.\ of~\eqref{eq:bltbd:bltLup:hat}. Let us first establish some bounds on the ranges of $ t,x,v $. Recall that $ y_* := \lceil \varepsilon^{-1} \rceil $, so $ y_*\leq x \leq y_*+\varepsilon^{-1-a} $ implies \begin{align} \label{eq:Qhat:xrg} \varepsilon^{-1} \leq x \leq 2\varepsilon^{-1-a} \leq \varepsilon^{-1-2a}, \quad t \geq y_*\lambda\varepsilon^{-\gamma}, \end{align} for all $ \varepsilon $ small enough. Next, combining $ x \leq \varepsilon^{-1-2a} $ with \eqref{eq:Tupe-1:hat} yields \begin{align} \label{eq:Qhat:trg} t \leq {\varepsilon}^{-1-\gamma-3a}. \end{align} With $ v $ defined as in the preceding, by \eqref{eq:ict:bd:hat} we have \begin{align} \label{eq:Qhat:vrg} {\varepsilon}^{\gamma+a} < v \leq \tfrac{4}{\lambda} \varepsilon^{\gamma}, \end{align} for all $ \varepsilon $ small enough. From these bounds \eqref{eq:Qhat:xrg}--\eqref{eq:Qhat:vrg} on the rang of $ t,x,v $, we see that $ (t,x,v)\in\Sigma_\varepsilon(\frac{a}{3}) $. Applying \eqref{eq:blt:cnt:lw} and \eqref{eq:J:bd} for $ b=a $ gives \begin{align} \label{eq:bltbd:v:hat} G^{\underline{L}}(t) \geq 2v^{-1} -v^{-1+1/C} - \varepsilon^{-a} \geq [F(x)]_+ + \tfrac{1}{4}\lambda \varepsilon^{-\gamma}-(\tfrac{4}{\lambda}\varepsilon^{-\gamma})^{-1+1/C}- \varepsilon^{-a}. 
\end{align} The r.h.s.\ of \eqref{eq:bltbd:v:hat} is bounded below by $ \tfrac{1}{8}\lambda \varepsilon^{-\gamma} $, for all $ \varepsilon $ small enough. This concludes the desired result~\eqref{eq:blt:Rlw}. \end{proof} \begin{lemma} \label{lem:NhatQ>hatQ} Let $ \lambda>0 $. We have that \begin{align} \label{eq:NhatQ>hatQ} \lim_{\varepsilon\to 0} \mathbf{P} \big( \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) - \widetilde{\underline{R}}_\lambda(t) \geq 0 \text{ whenever } \widetilde{\underline{R}}_\lambda(t) \in [y_*, y_*+\varepsilon^{-1-a}] \big) = 1. \end{align} \end{lemma} \begin{proof} Similarly to \eqref{eq:N:dcmp}, for $ \widetilde{N}^Q(t) $, we have the following decomposition \begin{align} \label{eq:tilN:dcmp} \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) = \widetilde{\underline{R}}_\lambda(t) - \widetilde{F}(\widetilde{\underline{R}}_\lambda(t)) + \widetilde{M}(t,\widetilde{\underline{R}}_\lambda(t)) + \widetilde{G}^{\widetilde{\underline{R}}_\lambda}(t). \end{align} Applying \eqref{eq:mgt:bd:hat} and Lemma~\ref{lem:blt:Rlw} to bound the last two terms in \eqref{eq:tilN:dcmp}, after ignoring events of small probability, we obtain \begin{align} \label{eq:Rlw:goal} \widetilde{N}^{\widetilde{\underline{R}}_\lambda}(t) \geq \widetilde{\underline{R}}_\lambda(t) -\tfrac{\lambda}{8}\varepsilon^{-\gamma} +\tfrac{\lambda}{8}\varepsilon^{-\gamma} = \widetilde{\underline{R}}_\lambda(t), \quad \forall \widetilde{\underline{R}}_\lambda(t) \in [y_*, y_*+\varepsilon^{-1-a}]. \end{align} This concludes the desired result~\eqref{eq:NhatQ>hatQ}. \end{proof} Now, combining \eqref{eq:NRlw:trivial}, \eqref{eq:NRlw:cmp} and Lemma~\ref{lem:NhatQ>hatQ} we immediately obtain \begin{proposition} \label{prop:Rlw<R} Let $ \lambda>0 $. We have that \begin{align} \label{eq:NRlw>Rlw} \lim_{\varepsilon\to 0} \mathbf{P} \big( N^{\underline{R}_\lambda}(t) \geq \underline{R}_\lambda(t) \text{ whenever } \underline{R}_\lambda(t) \in [0, \varepsilon^{-1-a}] \big) = 1. \end{align} In particular, by Proposition~\ref{prop:mono}, \begin{align*} \lim_{\varepsilon\to 0} \mathbf{P} \big( \underline{R}_\lambda(t) \leq R(t) \text{ whenever } R(t) \in [0, \varepsilon^{-1-a}] \big) =1. \end{align*} \end{proposition} \subsection{Sandwiching} For any fixed $ \lambda>0 $, by Propositions~\ref{prop:Rup>R} and \ref{prop:Rlw<R}, we have the sandwiching inequality \begin{align} \label{eq:sand} \underline{R}_\lambda(t) \leq R(t) \leq \overline{R}_\lambda(t) \text{ whenever } \overline{R}_\lambda(t) \in [0,\varepsilon^{-1-a}]. \end{align} with probability $ \to 1 $ as $ \varepsilon \to 0 $. Further, since $ \underline{T}_\lambda $, $ T $ and $ \overline{T}_\lambda $ are the inverse of $ \underline{R}_\lambda $, $ R $ and $ \overline{R}_\lambda $, respectively, applying the involution $ \iota(\Cdot) $ to \eqref{eq:sand} yields \begin{align} \label{eq:sand:} \overline{T}_\lambda(\xi) \leq T(\xi) \leq \underline{T}_\lambda(\xi), \quad \forall \xi \in [0,\varepsilon^{-1-a}]. \end{align} Hereafter, we use superscript in $ \varepsilon $ such as $ \overline{T}_\lambda^\varepsilon $ to denote \emph{scaled} processes. \emph{Not to be confused} with the subscript notation such as \eqref{eq:ict}, which highlights the $ \varepsilon $ dependence on the corresponding processes. Consider the scaling $ \overline{T}_\lambda^\varepsilon(\xi) := \varepsilon^{1+\gamma} \overline{T}_\lambda(\varepsilon^{-1}\xi) $ and $ \underline{T}_\lambda^\varepsilon(\xi) := \varepsilon^{1+\gamma} \underline{T}_\lambda(\varepsilon^{-1}\xi) $. 
Recall the definition of the limiting process $ \mathcal{T} $ from \eqref{eq:Hit}. Given \eqref{eq:sand:}, to prove Theorem~\ref{thm:main}, it suffices to prove the following convergence in distribution, under the \emph{iterated} limit $ (\lim_{\lambda\to 0}\lim_{\varepsilon\to 0}) $: \begin{align} \label{eq:Tlw:cnvg} \lim_{\lambda\to 0}\lim_{\varepsilon\to 0} \underline{T}^\varepsilon_\lambda(\Cdot) \stackrel{\text{d}}{=} \mathcal{T}(\Cdot), \quad \text{ under } \mathscr{U}, \\ \label{eq:Tup:cnvg} \lim_{\lambda\to 0}\lim_{\varepsilon\to 0} \overline{T}^\varepsilon_\lambda(\Cdot) \stackrel{\text{d}}{=} \mathcal{T}(\Cdot), \quad \text{ under } \mathscr{U}. \end{align} Let $ F^\varepsilon(\xi) := \varepsilon^{\gamma}F(\varepsilon^{-1}\xi) $. To show \eqref{eq:Tlw:cnvg}--\eqref{eq:Tup:cnvg}, with $ \overline{T}_\lambda $ and $ \underline{T}_\lambda $ defined in \eqref{eq:Tup} and \eqref{eq:Tlw} respectively, here we write the scaled processes $ \overline{T}^\varepsilon_\lambda $ and $ \underline{T}^\varepsilon_\lambda $ explicitly as \begin{align} \label{eq:Tupe} \overline{T}^\varepsilon_\lambda(\xi) &:= \left\{\begin{array}{l@{\quad}l} \displaystyle \varepsilon^{\gamma+1} v_*^{-1} \big[ \lfloor\varepsilon^{-1}\xi\rfloor- x_*\big]_+ ,& \xi \leq \varepsilon x_{**}, \\ ~ \\ \displaystyle \varepsilon^{\gamma+1} v_*^{-1} (x_{**}-x_*) + 2\varepsilon \sum_{\varepsilon x_{**}< \varepsilon y \leq \xi } \hspace{-8pt} \big( F^\varepsilon(\varepsilon y) - \tfrac12 \lambda \big) \mathbf{1}_\set{ F^\varepsilon(\varepsilon y) > \lambda }, & \xi > \varepsilon x_{**}, \end{array}\right. \\ \label{eq:Tlwe} \underline{T}^\varepsilon_\lambda(\xi) &:= \varepsilon y_*\lambda + \varepsilon \sum_{0< \varepsilon y \leq \xi} \big( \lambda+ 2[F^\varepsilon(\varepsilon y)]_+ \big). \end{align} Further, letting $ \mathscr{I} $ denote the following integral operator \begin{align*} \mathscr{I}: \mathbb{D}^{\uparrow} \to \mathbb{D}^{\uparrow}, \quad \mathscr{I}(f)(\xi) := 2 \int_0^\xi [ f(\zeta) ]_+ d\zeta, \end{align*} we also consider the process \begin{align} \label{eq:hatHit} \widehat{T}^\varepsilon(\xi) := \mathscr{I}(F^\varepsilon)(\xi) = 2\varepsilon \sum_{0<\varepsilon y\leq\xi} [ F^\varepsilon(\varepsilon y)]_+ + 2\varepsilon(\varepsilon^{-1}\xi-\lfloor\varepsilon^{-1}\xi\rfloor) [ F^\varepsilon(\xi)]_+. \end{align} Indeed, the integral operator $ \mathscr{I}: (\mathbb{D}^{\uparrow},\mathscr{U})\to(\mathbb{D}^{\uparrow},\mathscr{U}) $ is continuous. This together with the assumption~\eqref{eq:ic:lim} implies that \begin{align} \label{eq:hatHit:cnvg} \widehat{T}^\varepsilon \Rightarrow \mathcal{T}(\Cdot), \quad \text{ under } \mathscr{U}. \end{align} On the other hand, comparing \eqref{eq:hatHit} with \eqref{eq:Tlwe}, we have \begin{align} \label{eq:hatHit-Tlwe1} |\widehat{T}^\varepsilon(\xi)-\underline{T}^\varepsilon_\lambda(\xi)| \leq 2\varepsilon [ F^\varepsilon(\xi)]_+ + \xi\lambda + \varepsilon y_*\lambda. \end{align} Fix arbitrary $ \xi_0<\infty $. Since $ y_* := \lceil\varepsilon^{-1} \rceil $, taking the supremum of \eqref{eq:hatHit-Tlwe1} over $ \xi\in[0,\xi_0] $, and letting $ \varepsilon\to 0 $ and $ \lambda\to 0 $ in order, we conclude that \begin{align} \lim_{\lambda\to 0}\lim_{\varepsilon\to 0} \sup_{\xi\in[0,\xi_0]}|\widehat{T}^\varepsilon(\xi)-\underline{T}^\varepsilon_\lambda(\xi)| \stackrel{\text{d}}{=} \lim_{\lambda\to 0} (0+\xi_0\lambda+\lambda) = 0. \end{align} This together with \eqref{eq:hatHit:cnvg} concludes the desired convergence \eqref{eq:Tlw:cnvg} of $ \underline{T}^\varepsilon_\lambda $.
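As a side remark, the objects entering this argument are straightforward to evaluate on samples. The following sketch (in Python) computes a discretization of $ \mathscr{I}(F^\varepsilon) $ from \eqref{eq:hatHit} and the scaled process $ \underline{T}^\varepsilon_\lambda $ from \eqref{eq:Tlwe}, so that the bound \eqref{eq:hatHit-Tlwe1} can be checked numerically; the array \texttt{F} (standing for $ F $ on the integers), the grid, and all function and parameter names are assumptions made only for this sketch.
\begin{verbatim}
import numpy as np

def I_of_F_eps(F, eps, gamma, xi_grid):
    """Discretization of I(F^eps)(xi) = 2 * int_0^xi [F^eps(zeta)]_+ dzeta."""
    F_eps_pos = eps ** gamma * np.maximum(F, 0.0)   # [F^eps(eps*y)]_+ = eps^gamma [F(y)]_+
    out = []
    for xi in xi_grid:
        y_max = int(np.floor(xi / eps))             # indices with 0 < eps*y <= xi
        frac = xi / eps - y_max                     # fractional last cell
        out.append(2.0 * eps * np.sum(F_eps_pos[1:y_max + 1])
                   + 2.0 * eps * frac * F_eps_pos[min(y_max, len(F) - 1)])
    return np.array(out)

def T_low_scaled(F, eps, lam, gamma, xi_grid):
    """Scaled process underline{T}^eps_lambda(xi) of (eq:Tlwe)."""
    y_star = int(np.ceil(1.0 / eps))
    F_eps_pos = eps ** gamma * np.maximum(F, 0.0)
    out = []
    for xi in xi_grid:
        y_max = int(np.floor(xi / eps))
        out.append(eps * y_star * lam
                   + eps * np.sum(lam + 2.0 * F_eps_pos[1:y_max + 1]))
    return np.array(out)
\end{verbatim}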
Similarly, for $ \overline{T}^\varepsilon_\lambda $, by comparing \eqref{eq:hatHit} with \eqref{eq:Tupe}, it is straightforward to verify that \begin{align} \label{eq:hatHit-Tupe1} |\widehat{T}^\varepsilon(\xi)-\overline{T}^\varepsilon_\lambda(\xi)| \leq& 2\varepsilon [ F^\varepsilon(\xi) ]_+ + \varepsilon^{1+\gamma} v_*^{-1}(x_{**}-x_{*}) \\ \label{eq:hatHit-Tupe2} &+ 2\varepsilon \sum_{0< y\leq \xi} F^\varepsilon(\varepsilon y) \mathbf{1}_\set{0< F^\varepsilon(\varepsilon y) \leq \lambda } + 2\varepsilon \sum_{x_{**}\leq y\leq \xi} \frac{\lambda}{2} \mathbf{1}_\set{F^\varepsilon(\varepsilon y) > \lambda } \\ \label{eq:hatHit-Tupe3} &+ 2\varepsilon \sum_{0<y<x_{**}} \hspace{-5pt} [ F^\varepsilon(\varepsilon y) ]_+. \end{align} Using $ v_*:=\varepsilon^{\gamma-a} $ and $ x_{**}-x_* \leq \varepsilon^{-1}\lambda $ (by \eqref{eq:x**}) in \eqref{eq:hatHit-Tupe1} and \eqref{eq:hatHit-Tupe3}, and replacing $ F^\varepsilon(\varepsilon y) \mathbf{1}_\set{0< F^\varepsilon(\varepsilon y) \leq \lambda } $ with $ \lambda $ in \eqref{eq:hatHit-Tupe2}, we further obtain \begin{align} \label{eq:hatHit-Tupe4} |\widehat{T}^\varepsilon(\xi)-\overline{T}^\varepsilon_\lambda(\xi)| \leq 2\varepsilon [ F^\varepsilon(\xi)]_+ + \lambda\varepsilon^{a} + 2\lambda \xi (1+\tfrac{1}{2}) + 2\varepsilon \hspace{-10pt} \sum_{0<y<x_{*}+\lambda\varepsilon^{-1}} \hspace{-10pt} [ F^\varepsilon(\varepsilon y) ]_+. \end{align} Fix arbitrary $ \xi_0<\infty $. Since $ F^\varepsilon(\Cdot) \Rightarrow \mathcal{F}(\Cdot) $, given any $ n\in\mathbb{Z}_{>0} $ there exists $ L(n)<\infty $ such that \begin{align} \label{eq:ictx**} \mathbf{P} \Big( \sup_{\xi\in[0,\xi_0]} |F^\varepsilon(\xi)| \leq L(n) \Big) \geq 1 - \frac{1}{n}. \end{align} Using \eqref{eq:ictx**} to bound the last term in \eqref{eq:hatHit-Tupe4}, and then letting $ \varepsilon\to 0 $, we obtain \begin{align} \label{eq:hatHit-Tupe5} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \sup_{\xi\in[0,\xi_0]} |\widehat{T}^\varepsilon(\xi)-\overline{T}^\varepsilon_\lambda(\xi)| \leq 3\lambda \xi_0 + (\lambda + \varepsilon x_{*})L(n) \Big) \geq 1 - \frac{1}{n}. \end{align} From the definition~\eqref{eq:x*} of $ x_* $, we have that \begin{align*} \lim_{\varepsilon \to 0} (\varepsilon x_*) \stackrel{\text{d}}{=} \inf\{ \xi\geq 0: \mathcal{F}(\xi) \geq \lambda \} =: \xi_{*,\lambda}. \end{align*} Since $ \mathcal{F}\in C[0,\infty) $ and $ \mathcal{F}(0)=0 $ (by \eqref{eq:ict0=0}), further letting $ \lambda\to 0 $ we obtain $ \lim_{\lambda \to 0}\xi_{*,\lambda} \stackrel{\text{d}}{=} 0 $. Using this in \eqref{eq:hatHit-Tupe5} to bound the term $ \varepsilon x_* $, after sending $ \lambda\to 0 $ with $ \xi_0,n $ being fixed, we conclude \begin{align*} \lim_{\lambda\to 0} \lim_{\varepsilon\to 0} \mathbf{P} \Big( \sup_{\xi\in[0,\xi_0]} |\widehat{T}^\varepsilon(\xi)-\overline{T}^\varepsilon_\lambda(\xi)| > \delta \Big) \leq \frac{1}{n}, \end{align*} for any $ \delta>0 $. Since $ \delta $ and $ n $ are arbitrary, it follows that \begin{align*} \lim_{\lambda\to 0} \lim_{\varepsilon\to 0} \sup_{\xi\in[0,\xi_0]} |\widehat{T}^\varepsilon(\xi)-\overline{T}^\varepsilon_\lambda(\xi)| \stackrel{\text{d}}{=} 0. \end{align*} This together with \eqref{eq:hatHit:cnvg} concludes the desired convergence \eqref{eq:Tup:cnvg} of $ \overline{T}^\varepsilon_\lambda $. \appendix \section{Proof of Proposition~\ref{prop:subp}} \label{sect:subp} To complement the study at $ \rho=1 $ in this article, here we discuss the behavior for density $ \rho\neq 1 $.
Recall that $ p_*(t,\xi) := \frac{1}{\sqrt{2\pi t}} \exp(-\frac{\xi^2}{2t}) $ denotes the standard heat kernel (in the continuum), with the corresponding tail distribution function $ \Phi_*(t,\xi) := \int_{\xi}^{\infty} p_*(t,\zeta) d\zeta $. Compared to the rest of the article, the proof of Proposition~\ref{prop:subp} is simpler and more standard. Instead of working out the complete proof of Proposition~\ref{prop:subp}, here we only give a \emph{sketch}, focusing on the ideas and avoiding repeating technical details. \begin{proof}[Sketch of proof, Part\ref{enu:sup}] Let $ \widehat{F}(t,x) := \sum_{y\leq x} \eta(t,y) $ denote the number of $ \eta $-particles in $ (-\infty,x] $ at time $ t $. Indeed, $ N^{R}(t) \geq \hatF(t,R(t)) $, $ \forall t \in [0,\infty) $. Consequently, when the event $ \{\hatF(t,x) > x, \forall x\in\mathbb{Z}_{\geq 0}\} $ holds true, we must have $ R(t)=\infty $. It hence suffices to show \begin{align} \label{eq:supcrt:goal} \lim_{t\to\infty} \mathbf{P} ( \hatF(t,x) > x, \forall x\in\mathbb{Z}_{\geq 0} ) = 1. \end{align} Recall that $ \mathbf{P}_\text{RW} $ and $ \mathbf{E}_\text{RW} $ denote the law and expectation of a continuous time random walk $ W(t) $. As the $ \eta $-particles perform independent random walks, we have that \begin{align*} \mathbf{E} (\eta(t,y)) = \sum_{z>0} \mathbf{P}_\text{RW} (W(t)+z=y) \mathbf{E} (\eta^\text{ic}(z)) = \rho \mathbf{P}_\text{RW} (W(t)<y). \end{align*} Summing this over $ y\leq x $ yields \begin{align*} \mathbf{E} (\widehat{F}(t,x)) = \sum_{y\leq x} \mathbf{E} (\eta(t,y)) = \rho \sum_{y \leq x} \mathbf{P} ( W(t)<y ). \end{align*} In the last expression, divide the sum into two sums over $ y\leq 0 $ and over $ 0<y\leq x $, and let $ A_1 $ and $ A_2 $ denote the respective sums. For $ A_1 $, using $ W(t) \stackrel{\text{d}}{=} -W(t) $ to rewrite \begin{align} \label{eq:sup:S1} A_1 = \rho \sum_{y \leq 0} \mathbf{P} ( W(t)<y ) = \rho \sum_{y \geq 0} \mathbf{P} ( W(t)>y ). \end{align} For $ A_2 $, using $ \mathbf{P} ( W(t)<y ) = 1 - \mathbf{P} (W(t)\geq y) $ to rewrite \begin{align} \label{eq:sup:S2} A_2 = \rho \sum_{0< y \leq x } \mathbf{P} ( W(t)<y ) = \rho x -\rho \sum_{0<y \leq x} \mathbf{P} ( W(t)\geq y ) = \rho x -\rho \sum_{0\leq y < x} \mathbf{P} ( W(t)> x ). \end{align} Adding \eqref{eq:sup:S1}--\eqref{eq:sup:S2} yields \begin{align} \label{eq:Exhatict} \mathbf{E} (\widehat{F}(t,x)) = \rho x + \rho \sum_{ y\geq x} \mathbf{P}_\text{RW} (W(t)>y) = x+ (\rho-1) x + \rho \mathbf{E}_\text{RW} ( W(t) \mathbf{1}_\set{W(t)>x} ). \end{align} With $ \rho>1 $, the r.h.s.\ of \eqref{eq:Exhatict} is clearly greater than $ x $, $ \forall x\in\mathbb{Z}_{\geq 0} $. This demonstrates why \eqref{eq:supcrt:goal} should hold true. To \emph{prove} \eqref{eq:supcrt:goal}, following similar arguments as in Section~\ref{sect:mgt}--\ref{sect:blt}, it is possible to refine these calculations of expected values to produce a bound on $ \widehat{F}(t,x) $ that holds with high probability. In the course of establishing such a lower bound, the last two terms $ (\rho-1) x $ and $ \rho \mathbf{E}_\text{RW} ( W(t) \mathbf{1}_\set{W(t)>x} ) $ in \eqref{eq:Exhatict} make enough room for absorbing various error terms. \end{proof} Next, for Part\ref{enu:sub}, we first recall the flux condition~\eqref{eq:fluxBC:}, which in the current setting reads \begin{align} \label{eq:fluxCond:hydro} \int_{0}^\infty (\rho - u_*(t,\xi)\mathbf{1}_\set{\xi>r(t)})d\xi = \kappa_\rho\sqrt{t} = r(t). 
\end{align} \begin{proof}[Sketch of proof, Part\ref{enu:sub}] The strategy is to utilize Proposition~\ref{prop:mono}. This requires constructing the suitable upper and lower bound functions $ \overline{R}_\lambda(t) $ and $ \underline{R}_\lambda(t) $, where $ \lambda>0 $ is an auxiliary parameter that we send to zero \emph{after} sending $ \varepsilon\to 0 $. To construct such functions $ \overline{R}_\lambda(t) $ and $ \underline{R}_\lambda(t) $, recall that $ p(t,x) $ denote the standard \emph{discrete} heat kernel with tail distribution function $ \Phi(t,x) $. Fix $ 0<a<2 $. For each fixed $ t\in[0,\infty) $, we let $ \tilR(t)\in\mathbb{Z}_{\geq 0} $ be the unique solution to the following equation \begin{align} \label{eq:up:tilR} \Phi(\varepsilon^{-a}, \lfloor \varepsilon^{-a/2}\kappa_\rho \rfloor ) = \Phi(t, \tilR(t) ), \end{align} and define \begin{align} \label{eq:Ruplw} \overline{R}_\lambda(t) := \tilR(t) + \lfloor \lambda\varepsilon^{-1} \rfloor, \quad \underline{R}_\lambda(t) := \tilR(t) - \lfloor \lambda\varepsilon^{-1} \rfloor . \end{align} Under the diffusive scaling, it is standard to show that the discrete tail distribution function $ \Phi $ converges to its continuum counterpart. That is, \begin{align} \label{eq:erfcnvg} \Phi(\varepsilon^{-b}t,\lfloor\varepsilon^{-b/2}\xi\rfloor) \to \Phi_*(t,\xi), \quad \text{ uniformly over } t\in[0,t_0], \ \xi\in[0,\xi_0], \end{align} for any fixed $ t_0, \xi_0<\infty $ and $ b>0 $. Fix arbitrary $ t_0<\infty $ hereafter. On the r.h.s.\ of~\eqref{eq:up:tilR}, substitute in $ t\mapsto \varepsilon^{-2}t $, following by using~\eqref{eq:erfcnvg} for $ b=a $ on the l.h.s.\ and for $ b=2 $ on the r.h.s. We have \begin{align*} \varepsilon \widetilde{R}(\varepsilon^{-2}t) \longrightarrow \kappa_\rho\sqrt{t}, \text{ uniformly over } [0,t_0] \text{ as } \varepsilon \to 0. \end{align*} From this it follows that \begin{align} & \label{eq:sub:Rupcnvg} \varepsilon\overline{R}_\lambda(\varepsilon^{-2} t) \longrightarrow \kappa_\rho\sqrt{t} + \lambda, \text{ uniformly over } [0,t_0] \text{ as } \varepsilon \to 0. \\ & \notag \varepsilon\underline{R}_\lambda(\varepsilon^{-2} t) \longrightarrow \kappa_\rho\sqrt{t} - \lambda, \text{ uniformly over } [0,t_0] \text{ as } \varepsilon \to 0. \end{align} In particular, $ \varepsilon\overline{R}_\lambda(\varepsilon^{-2}\Cdot) $ and $ \varepsilon\overline{R}_\lambda(\varepsilon^{-2}\Cdot) $ converge to $ r(t)=\kappa_\rho\sqrt{t} $ under the iterated limit $ \lim_{\lambda\to 0}\lim_{\varepsilon\to 0} $. It now suffices to show that $ \overline{R}_\lambda $ and $ \underline{R}_\lambda $ sandwich $ R $ in between with high probability. This, by Proposition~\ref{prop:mono}, amounts to showing the following property: \begin{align} \label{eq:Ruplw:flxup} &N^{\overline{R}_\lambda}(\varepsilon^{-2}t) \leq \overline{R}_\lambda(\varepsilon^{-2}t), \quad \forall t \leq t_0, \\ \label{eq:Ruplw:flxlw} &N^{\underline{R}_\lambda}(\varepsilon^{-2}t) \geq \underline{R}_\lambda(\varepsilon^{-2}t), \quad \forall t \leq t_0, \end{align} with probability $ \to 1 $ as $ \varepsilon \to 0 $. Similarly to Part~\ref{enu:sup}, instead of giving the complete proof of \eqref{eq:Ruplw:flxup}--\eqref{eq:Ruplw:flxlw}, we demonstrate how they should hold true by calculating the corresponding expected values. As the calculations of $ \mathbf{E} (N^{\overline{R}_\lambda}(t)) $ and $ \mathbf{E} (N^{\underline{R}_\lambda}(t)) $ are similar, we carry out only the former in the following. 
Set $ Q=\overline{R}_\lambda $ in~\eqref{eq:NQ}, and take expectation of the resulting expression to get \begin{align} \label{eq:sub:Nex::} \mathbf{E} ( N^{\overline{R}_\lambda}(t) ) = \sum_{x\in\mathbb{Z}} \big( \mathbf{E} (\eta(t,x)) - \mathbf{E} (\eta^{\overline{R}_\lambda}(t,x)) \big). \end{align} Taking $ \mathbf{E} (\Cdot) $ on both sides of~\eqref{eq:etaEx'} and using $ \mathbf{E} (\eta^\text{ic}(y))=\rho\mathbf{1}_\set{y>0} $, we have $ \mathbf{E} (\eta(t,x)) = \rho \sum_{y>0} p(t,x-y). $ Inserting this into~\eqref{eq:sub:Nex::} yields \begin{align} \label{eq:sub:Nex1} \mathbf{E} ( N^{\overline{R}_\lambda}(t) ) = \sum_{x\in\mathbb{Z}} \Big( \sum_{y>0} p(t,x-y) \rho - \mathbf{E} (\eta^{\overline{R}_\lambda}(t,x)) \Big). \end{align} On the r.h.s.\ of~\eqref{eq:sub:Nex1}, divide the sum over $ x\in\mathbb{Z} $ into sums over $ x\leq 0 $ and $ x>0 $. Given that $ \eta^{\overline{R}_\lambda}(t,x)\mathbf{1}_\set{x\leq 0}=0 $, the former sum is simply $ \sum_{x\leq 0} \sum_{y>0} p(t,x-y) \rho = \sum_{x> 0} \sum_{y\leq 0} p(t,x-y) \rho. $ Consequently, \begin{align} \notag \mathbf{E} ( N^{\overline{R}_\lambda}(t) ) &= \sum_{x>0} \sum_{y\leq 0} p(t,x-y)\rho + \sum_{x>0} \Big( \sum_{y>0} p(t,x-y) \rho - \mathbf{E} (\eta^{\overline{R}_\lambda}(t,x)) \Big) \\ \label{eq:sub:Nex} &= \sum_{x>0} \Big( \rho - \mathbf{E} (\eta^{\overline{R}_\lambda}(t,x)) \Big). \end{align} Since $ \overline{R}_\lambda $ is deterministic, letting $ \overline{u}_\lambda(t,x) := \mathbf{E} (\eta^{\overline{R}_\lambda}(t,x)) $, similarly to \eqref{eq:u2PDE}, here we have \begin{align} \left\{ \begin{array}{l@{,}l} \partial_t \overline{u}_\lambda(t,x) = \tfrac12 \Delta \overline{u}_\lambda(t,x) &\quad \forall x > \overline{R}_\lambda(t), \\ \overline{u}_\lambda(t,\overline{R}_\lambda(t)) = 0 &\quad\forall t \geq 0, \\ \overline{u}_\lambda(0,x) = \mathbf{E} (\eta^\text{ic}(x))=\rho &\quad\forall x > \overline{R}_\lambda(0). \end{array} \right. \end{align} Such a $ \overline{u}_\lambda $ is solved explicitly as \begin{align} \label{eq:barulambda} \overline{u}_\lambda(t,x) &= \frac{\rho}{\Phi(\varepsilon^{-a},\lfloor\varepsilon^{-a/2}\kappa_\rho\rfloor)} \big( \Phi(\varepsilon^{-a},\lfloor\varepsilon^{-a/2}\kappa_\rho\rfloor) - \Phi(t,x-\lfloor\lambda\varepsilon^{-1}\rfloor) \big) \mathbf{1}_\set{x > \overline{R}_\lambda(t)}. \end{align} Combining \eqref{eq:barulambda} and \eqref{eq:sub:Nex}, and passing to the diffusive scaling $ \varepsilon \mathbf{E} ( N^{\overline{R}_\lambda}(\varepsilon^{-2}t) ) $, yields \begin{align} \label{eq:sub:Nex:} \begin{split} \varepsilon \mathbf{E} ( N^{\overline{R}_\lambda}&(\varepsilon^{-2}t) ) = \varepsilon\sum_{x>0} \Big( \rho - \frac{\rho}{\Phi(\varepsilon^{-a},\lfloor\varepsilon^{-a/2}\kappa_\rho\rfloor)} \\ &\big(\Phi(\varepsilon^{-a},\lfloor\varepsilon^{-a/2}\kappa_\rho\rfloor) - \Phi(\varepsilon^{-2}t,x-\lfloor\lambda\varepsilon^{-1}\rfloor) \big)\mathbf{1}_\set{x>\overline{R}_\lambda(\varepsilon^{-2}t)} \Big). \end{split} \end{align} In \eqref{eq:sub:Nex:}, using \eqref{eq:erfcnvg} for $ b=a $ and $ b=2 $, and using the tail bound \eqref{eq:erf} on $ \Phi(t,x) $ for large $ x $, it is straightforward to show \begin{align} \label{eq:cnvg:uuplambda} \varepsilon \mathbf{E} ( N^{\overline{R}_\lambda}(\varepsilon^{-2}t) ) \longrightarrow \int_{0}^\infty \Big( \rho - \frac{\rho}{\Phi_*(1,\kappa_\rho)} (\Phi_*(1,\kappa_\rho) - \Phi_*(t,\xi-\lambda)\mathbf{1}_\set{\xi>\kappa_\rho\sqrt{t}+\lambda}) \Big ) d\xi, \end{align} uniformly over $ t\leq t_0 $, as $ \varepsilon\to 0 $.
On the r.h.s.\ of \eqref{eq:cnvg:uuplambda}, use \eqref{eq:kappaEq} to replace $ \frac{\rho}{\Phi_*(1,\kappa_\rho)} $. Referring back to~\eqref{eq:explictu}, we now have \begin{align} \label{eq:cnvg:uuplambda:} \varepsilon \mathbf{E} ( N^{\overline{R}_\lambda}(\varepsilon^{-2}t) ) \longrightarrow \int_{0}^\infty \Big( \rho - u_*(t,\xi-\lambda) \mathbf{1}_\set{\xi>\kappa_\rho\sqrt{t}+\lambda} \Big) d\xi, \end{align} uniformly over $ t\leq t_0 $, as $ \varepsilon\to 0 $. Given~\eqref{eq:fluxCond:hydro}, after a change of variable $ \xi-\lambda\mapsto \xi $, the r.h.s.\ of~\eqref{eq:cnvg:uuplambda:} matches $ \lambda+\kappa_\rho\sqrt{t} $. Combining this with \eqref{eq:sub:Rupcnvg}, we see that \eqref{eq:Ruplw:flxup} holds in expectation. \end{proof} \section*{Ethical Statement} \textit{Funding}: Dembo's research was partially supported by the NSF grant DMS-1613091, whereas Tsai's research was partially supported by a Graduate Fellowship from the \ac{KITP} and by a Junior Fellow award from the Simons Foundation. Some of this work was done during the KITP program ``New approaches to non-equilibrium and random systems: KPZ integrability, universality, applications and experiments'' supported in part by the NSF grant PHY-1125915. \textit{Conflict of Interest}: The authors declare that they have no conflict of interest. \end{document}
\begin{definition}[Definition:Pentatope Number] '''Pentatope numbers''' are those denumerating a collection of objects which can be arranged in $4$ dimensions in the form of a regular pentatope. The $n$th '''pentatope number''' $P_n$ is defined as: :$\ds P_n = \sum_{k \mathop = 1}^n T_k$ where $T_k$ is the $k$th tetrahedral number. \end{definition}
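As a short illustrative aside (not part of the ProofWiki entry), the defining sum evaluates in closed form as $P_n = \binom{n+3}{4}$, since $T_k = \binom{k+2}{3}$; the following minimal Python sketch checks this for small $n$.
\begin{verbatim}
# Pentatope numbers from the defining sum of tetrahedral numbers,
# checked against the closed form P_n = C(n+3, 4).
from math import comb

def tetrahedral(k):
    return k * (k + 1) * (k + 2) // 6          # T_k = C(k+2, 3)

def pentatope(n):
    return sum(tetrahedral(k) for k in range(1, n + 1))

for n in range(1, 8):
    assert pentatope(n) == comb(n + 3, 4)
print([pentatope(n) for n in range(1, 8)])     # [1, 5, 15, 35, 70, 126, 210]
\end{verbatim}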
\begin{document} \title[Abelian sandpile and Biggs-Merino polynomial for digraphs]{Abelian sandpile model and Biggs-Merino polynomial for directed graphs} \author{Swee Hong Chan} \address{Department of Mathematics, Cornell University, Ithaca, NY 14853.} \email{\url{[email protected]}} \urladdr{https://www.math.cornell.edu/~sc2637/} \begin{abstract} We prove several results concerning a polynomial that arises from the sandpile model on directed graphs; these results were previously known only for undirected graphs. Implicit in the sandpile model is the choice of a sink vertex, and it is conjectured by Perrot and Pham that the polynomial $c_0+c_1y+\ldots+c_n y^n$, where $c_i$ is the number of recurrent classes of the sandpile model with level $i$, is independent of the choice of the sink. We prove their conjecture by expressing the polynomial as an invariant of the sinkless sandpile model. We then present a bijection between arborescences of directed graphs and reverse $G$-parking functions that preserves external activity by generalizing Cori-Le Borgne bijection for undirected graphs. As an application of this bijection, we extend Merino's Theorem by showing that for Eulerian directed graphs the polynomial $c_0+c_1y+\ldots+c_n y^n$ is equal to the greedoid polynomial of the graph. \end{abstract} \keywords{abelian sandpile model, chip firing game, Tutte polynomial, greedoid, G-parking function} \subjclass[2010]{05C30, 05C31} \maketitle \section{Introduction} To what extent do the known results for undirected graphs extend to directed graphs? Driven by this question, we consider a remarkable theorem of Merino L{\'o}pez~\cite{Mer97} that expresses a one variable specialization of the Tutte polynomial of an undirected graph in terms of the abelian sandpile model on the graph. In this paper, we show that this theorem can be extended to all Eulerian directed graphs, and a weaker version of the theorem can be extended to all directed graphs. The \emph{abelian sandpile model} is a dynamical system on a finite directed graph that starts with a number of chips at each vertex of the graph. If a vertex has at least as many chips as its outgoing edges, then we are allowed to \emph{fire} the vertex by sending one chip along each edge leaving the vertex to the neighbors of the vertex. This model was introduced by Dhar \cite{Dhar90} as a model to study the concept of self-organized criticality introduced in \cite{BTW88}. Since then it has been studied in several different fields of mathematics. In graph theory it was studied under the name of {chip-firing game}~\cite{Tar88, BLS91}; it appears in arithmetic geometry in the study of the Jacobian of algebraic curves \cite{Lor89, BN07}; and in algebraic graph theory it relates to the study of potential theory on graphs \cite{Biggs97pt, BS13}. It is common to study the sandpile model by specifying a vertex as the sink vertex, and all chips that end up at the sink vertex are removed from the process. For a strongly connected directed graph, this guarantees that the sandpile model terminates (i.e. when none of the vertices have enough chips to be fired) in finite time. After fixing a sink, one can study a special type of chip configurations with the following property. A chip configuration is \emph{recurrent} if, for any arbitrary chip configuration as the initial state, one can add a finite number of chips to each vertex so that the recurrent configuration is the state of the sandpile model when the process terminates. 
It was conjectured by Biggs~\cite{Biggs97} and was proved by Merino L{\'o}pez \cite{Mer97} that, for any undirected graph $G$ and any choice of the sink vertex $s$, \begin{equation*} \label{equation: Merino's theorem for undirected graphs} c_0 + c_1y+\ldots+ c_ny^n= y^{|E(G)|} \mathcal T (G;1,y) \qquad \text{(Merino's Theorem)}, \end{equation*} where $\mathcal T(G;x,y)$ is the Tutte polynomial of the graph and $c_i$ is the number of recurrent configurations with $i-\deg(s)$ chips. (We remark that the extra factor $y^{|E(G)|}$ does not appear in the right side of \cite[Theorem~3.6]{Mer97} as their left side differs from ours by the same factor.) As the Tutte polynomial is defined without any involvement of the vertex $s$, this implies that $c_0 + c_1y+\ldots+ c_ny^n$ does not depend on the choice of $s$. The sink independence of the polynomial $c_0 + c_1y+\ldots+ c_ny^n$ was then extended to all Eulerian directed graphs by Perrot and Pham~\cite{Perrot-Pham}, and they observed that the same statement does not hold for non-Eulerian directed graphs. However, they conjectured that a variant of this polynomial has the sink independence property for all directed graphs. Perrot and Pham defined an equivalence relation on the recurrent configurations~(Definition~\ref{definition: sink equivalence relation}), and they defined the total number of chips of an equivalence class to be the maximum of the total number of chips of configurations contained in the class. The conjecture of Perrot and Pham is that the sink independence property is true for the polynomial, \begin{equation*} \mathcal B(G,s;y):= c_0' + c_1'y+\ldots+ c_n'y^n, \end{equation*} where $c_i'$ is the number of equivalence classes with $i-\outdeg(s)$ chips. We prove their conjecture by expressing $\mathcal B(G,s;y)$ as an invariant of the sinkless sandpile model (which, as its name implies, does not involve any choice of sink vertex). \begin{theorem}[Weak version of Merino's theorem {\cite[Conjecture~1]{Perrot-Pham}}]\label{c. Perrot-Pham} Let $G$ be a strongly connected digraph. Then the polynomial $\mathcal B(G,s;y)$ is independent of the choice of the vertex $s$. \end{theorem} We call $\mathcal B(G,s;y)$ the \emph{Biggs-Merino polynomial} to honor the contributions of Biggs and Merino L{\'o}pez to this subject. One consequence of Merino's Theorem is that, for any undirected graph and for any $i$, the number of recurrent configurations with $i-\deg(s)$ chips is equal to the number of spanning trees with external activity $i$. A bijective proof of this statement was given by Cori and Le Borgne~\cite{CL03}, and we generalize the bijection of Cori and Le Borgne to all Eulerian directed graphs. Let $G$ be a strongly connected digraph, and let $s$ be a vertex of $G$. A \emph{reverse $G$-parking function}~\cite{PS04} with respect to $s$ is a function $f:V(G)\setminus \{s\} \to \mathbb{N}_0$ such that, for any non-empty subset $A \subseteq V(G) \setminus \{s\}$, there exists $v \in A$ for which $f(v)$ is strictly smaller than the number of edges from $V(G) \setminus A$ to $v$. An \emph{arborescence} of $G$ {rooted} at $s$ is a subgraph of $G$ that contains $|V(G)|-1$ edges and such that for any vertex $v$ of $G$ there exists a unique directed path from $s$ to $v$ in the subgraph. We show that Cori-Le Borgne bijection generalizes to a bijection between reverse $G$-parking functions and arborescences of $G$ for all directed graphs. For the full description of the bijection, see Algorithm~\ref{algorithm: parking functions to arborescences}. 
\begin{theorem}\label{t. generalized Cori-Le Borgne} Let $G$ be a strongly connected digraph, and let $s \in V(G)$. Then Cori-Le Borgne bijection generalizes to a bijection that sends reverse $G$-parking functions with respect to $s$ to arborescences of $G$ rooted at $s$. Furthermore, the external activity of the output arborescence is the level of the input reverse $G$-parking function. \end{theorem} The external activity of an arborescence and the level of a $G$-parking function are defined in Definition~\ref{definition: external activity} and Definition~\ref{definition: level of a function}, respectively. As a consequence of Theorem~\ref{t. generalized Cori-Le Borgne} and the duality between reverse $G$-parking functions and recurrent configurations for Eulerian directed graphs~\cite[Theorem~4.4]{HLM08}, we get the extension of Merino's Theorem for Eulerian directed graphs. \begin{theorem}[Merino's Theorem for Eulerian directed graphs] \label{t. Merino's theorem} Let $G$ be a connected Eulerian digraph. Then for any $s \in V(G)$, \[c_0 + c_1y+\ldots+ c_ny^n= t_0+t_1y+\ldots + t_n y^n, \] where $c_i$ is the number of recurrent configurations with $i-\outdeg(s)$ chips and $t_i$ is the number of arborescences of $G$ rooted at $s$ with external activity $i$. \end{theorem} The right side of Theorem~\ref{t. Merino's theorem} is known in the literature as the \emph{greedoid polynomial}~\cite{BKL85}, and it can be considered as a single variable generalization of the Tutte polynomial for directed graphs~\cite{GM89,GT90,GM91}. This paper is arranged as follows: In Section \ref{s. preliminaries} we give a review of the sinkless sandpile model and the sandpile model with a sink. In Section \ref{s. connections} we prove Theorem~\ref{c. Perrot-Pham}. In Section~\ref{s. examples} we prove a recurrence relation for the Biggs-Merino polynomial. In Section \ref{s. greedoid polynomial} we prove Theorem~\ref{t. generalized Cori-Le Borgne} and Theorem~\ref{t. Merino's theorem}. Finally in Section \ref{s. conjecture} we include a list of questions for future research. \section{Review of abelian sandpile models}\label{s. preliminaries} In this section we review basic results concerning the sinkless sandpile model and the sandpile model with a sink. We refer to \cite{BL92,HLM08} for a more detailed introduction of these two models, to \cite{PPW13} for an algebraic treatment of this model, and to \cite{BL13,BL142, BL14} for a generalization of these models. We use $G:=(V(G),E(G))$ to denote a \emph{directed graph} (\emph{digraph} for short), possibly with loops and multiple edges. We use $V$ and $E$ as a shorthand for $V(G)$ and $E(G)$ when the digraph $G$ is evident from the context. Each edge $e \in E$ is directed from its source vertex to its target vertex. The \emph{outdegree} of a vertex $v \in V$, denoted by $\outdeg(v)$, is the number of edges with $v$ as source vertex, while the \emph{indegree} of a vertex $v \in V$, denoted by $\indeg(v)$, is the number of edges with $v$ as target vertex. In this paper, we identify an {undirected graph} $G$ with the directed graph obtained by replacing each undirected edge $e:=\{i,j\}$ of $G$ with two directed edges $(i,j)$ and $(j,i)$. A digraph obtained in this way is called \emph{bidirected}. A digraph $G$ is \emph{Eulerian} if $\outdeg(v)=\indeg(v)$ for all $v \in V$. In particular, all bidirected graphs are Eulerian. A digraph is \emph{strongly connected} if for any two vertices $v,w \in V$ there exists a directed path from $v$ to $w$. 
Note that a connected Eulerian digraph is always strongly connected. Throughout this paper, we always assume that our digraph $G$ is strongly connected. The \emph{Laplacian matrix} $\Delta$ of a digraph $G$ is the square matrix $(\Delta_{i,j})_{V\times V}$ given by: \begin{align*} \Delta_{i,j}:=\begin{cases} \outdeg(i)- \# \text{ of loops at vertex } i &\text{ if }i=j;\\ - \text{ the number of edges from vertex }j \text{ to vertex } i &\text{ if }i\neq j. \end{cases} \end{align*} Note that the Laplacian matrix is a symmetric matrix if and only if $G$ is bidirected. \begin{definition}[Primitive period vector] A vector $\mathbf{r} \in \mathbb{R}^V$ is a \emph{period vector} of $G$ if it is non-negative, integral, and $\mathbf{r} \in \ker(\Delta)$. A period vector is \emph{primitive} if its entries have no non-trivial common divisor. \end{definition} If $G$ is a strongly connected digraph, then a primitive period vector exists, is unique, and is strictly positive~\cite[Proposition~4.1(i)]{BL92}. Throughout this paper, we will use $\mathbf{r}$ to denote the primitive period vector of $G$. Note that any nonzero period vector of $G$ is a positive multiple of the primitive period vector~\cite[Proposition~4.1(iii)]{BL92}. Also note that the primitive period vector is equal to $(1,\ldots, 1)$ if and only if $G$ is an Eulerian digraph~\cite[Proposition~4.1(ii)]{BL92}. A \emph{reverse arborescence} of $G$ rooted at $v \in V$ is a subgraph of $G$ that contains $|V|-1$ edges and such that for any $w\in V$ there exists a unique directed path from $w$ to $v$ in the subgraph. Let $t_v$ denote the number of reverse arborescences of $G$ rooted at $v$. Note that $t_v>0$ for all $v \in V$ if $G$ is strongly connected. \begin{definition}[Period constant]\label{definition: period constant} The \emph{period constant} $\alpha$ of a strongly connected digraph $G$ is \[ \alpha:= \underset{v \in V}{\gcd} \, \{ t_v \}. \qedhere \] \end{definition} If $G$ is strongly connected, then by the Markov chain tree theorem~\cite{AT89} the primitive period vector $\mathbf{r}$ is given by \[ \mathbf{r}(v)=\frac{t_v}{\alpha} \quad (v \in V). \] \subsection{Sinkless abelian sandpile model}\label{ss. abelian sandpile model} The \emph{sinkless (abelian) sandpile model} on a strongly connected digraph $G$, denoted by $\textnormal{Sand}(G)$, starts with a number of chips at each vertex of $G$. A \emph{sinkless (chip) configuration} $\mathbf{c}$ is a vector in $\mathbb{N}_0^{V}$, with $\mathbf{c}(v)$ representing the number of chips at the vertex $v \in V$. A \emph{sinkless firing move} on $\mathbf{c}$ consists of removing $\outdeg(v)$ chips from a vertex $v$ and sending one chip along each edge leaving $v$ to the corresponding neighbor of $v$. Denote by $\textbf{{1}}_v$ the vector in $\mathbb{N}_0^V$ given by $\textbf{{1}}_v(v):=1$ and $\textbf{{1}}_v(w):=0$ for all $w \in V \setminus \{v\}$. Note that a sinkless firing move changes a sinkless configuration $\mathbf{c}$ to $\mathbf{c}-\Delta \textbf{{1}}_v$. Note that the result of a sinkless firing move is not necessarily a sinkless configuration, as the $v$-th entry of $\mathbf{c}-\Delta \textbf{{1}}_v$ may be negative when $\mathbf{c}(v)< \outdeg(v)$. A sinkless firing move is \emph{legal} if the fired vertex has at least as many chips as its outgoing edges, or equivalently if the result of the firing move is another sinkless configuration. 
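To make the definitions above concrete, the following minimal Python sketch (an illustration only, on a hypothetical loopless three-vertex digraph) builds the Laplacian matrix and performs a legal sinkless firing move $\mathbf{c}\mapsto\mathbf{c}-\Delta\textbf{{1}}_v$.
\begin{verbatim}
# Minimal sketch (hypothetical loopless 3-vertex example): build the Laplacian
# matrix as defined above and perform a legal sinkless firing move.
import numpy as np

# A[i][j] = number of edges from vertex i to vertex j.
A = np.array([[0, 2, 0],
              [1, 0, 1],
              [1, 0, 0]])
outdeg = A.sum(axis=1)

# Delta[i][j] = outdeg(i) if i == j, and -(number of edges from j to i) otherwise.
Delta = np.diag(outdeg) - A.T

def fire(c, v):
    """Sinkless firing move at v: c -> c - Delta 1_v (legal iff c[v] >= outdeg[v])."""
    assert c[v] >= outdeg[v], "illegal firing move"
    e = np.zeros(len(c), dtype=int)
    e[v] = 1
    return c - Delta @ e

c = np.array([2, 1, 0])      # a sinkless configuration
print(fire(c, 0))            # firing vertex 0 yields the configuration (0, 3, 0)
\end{verbatim}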
A sinkless configuration $\mathbf{c}$ is \emph{stable} if $\mathbf{c}(v)<\outdeg(v)$ for all $v \in V$, or equivalently if there are no legal sinkless firing moves for $\mathbf{c}$. Each finite (possibly empty) sequence of sinkless firing moves is associated with an \emph{odometer} $\mathbf{q} \in \mathbb{N}_0^{V}$, where $\mathbf{q}(v)$ is equal to the number of times the vertex $v$ is fired in the sequence. Note that applying a finite sequence of firing moves with odometer $\mathbf{q}$ to a sinkless configuration $\mathbf{c}$ gives us the sinkless configuration $\mathbf{c}-\Delta\mathbf{q}$. For any two sinkless configurations $\mathbf{c}, \mathbf{d}$ of $G$, we write $\mathbf{c} \longrightarrow \mathbf{d}$ if there exists a finite (possibly empty) sequence of legal sinkless firing moves that sends $\mathbf{c}$ to $\mathbf{d}$. If the odometer $\mathbf{q}$ of the sequence is known, we will write $\mathbf{c} \xlongrightarrow[\mathbf{q}]{} \mathbf{d}$ instead. It follows from the definition that $\longrightarrow$ is a transitive relation. \begin{definition}[Recurrent sinkless configurations] \label{definition: recurrent sinkless} Let $G$ be a strongly connected digraph. A sinkless configuration $\mathbf{c}$ of $G$ is \emph{(sinkless) recurrent} if it satisfies these two conditions: \begin{itemize} \item The configuration $\mathbf{c}$ is not stable; and \item If $\mathbf{d}$ is a sinkless configuration that satisfies $\mathbf{c} \longrightarrow \mathbf{d}$, then $\mathbf{d} \longrightarrow \mathbf{c}$. \qedhere \end{itemize} \end{definition} We use $\textnormal{Rec}(G)$ to denote the set of all recurrent sinkless configurations of $G$. Recall that $\mathbf{r}$ denotes the primitive period vector of $G$. \begin{lemma}[{\cite[Lemma~1.3, Lemma~4.3]{BL92}}]\label{lemma: BL92} Let $G$ be a strongly connected digraph. \begin{enumerate} \item \label{item: BL92 1} If $\mathbf{q}_1, \mathbf{q}_2 \in \mathbb{N}_0^V$ satisfy $\mathbf{q}_1 \leq \mathbf{q}_2$ and $\mathbf{c},\mathbf{d}_1,\mathbf{d}_2$ are sinkless configurations that satisfy $\mathbf{c} \xlongrightarrow[\mathbf{q}_1]{} \mathbf{d}_1$ and $\mathbf{c} \xlongrightarrow[\mathbf{q}_2]{} \mathbf{d}_2$, then $\mathbf{d}_1 \xlongrightarrow[\mathbf{q}_2-\mathbf{q}_1]{} \mathbf{d}_2$. \item \label{item: BL92 2} If $\mathbf{c}$ is a sinkless configuration that satisfies $\mathbf{c} \xlongrightarrow[k\mathbf{r}]{} \mathbf{c}$ for some positive integer $k$, then $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$. \qed \end{enumerate} \end{lemma} In the next proposition we present a test called the \emph{sinkless burning test} that checks whether a given sinkless configuration is recurrent. \begin{proposition}[Sinkless burning test] \label{l. burning test for sandg} Let $G$ be a strongly connected digraph. A sinkless configuration $\mathbf{c}$ is recurrent if and only if there exists a finite sequence of legal firing moves from $\mathbf{c}$ back to $\mathbf{c}$ such that each vertex $v \in V$ is fired exactly $\mathbf{r}(v)$ times. \end{proposition} \begin{proof} Proof for the $\Rightarrow$ direction: Let $\mathbf{c}$ be an arbitrary recurrent sinkless configuration. Since $\mathbf{c}$ is not stable by definition of recurrence, there exists a sinkless configuration $\mathbf{d}$ and a vertex $v \in V$ such that $\mathbf{c} \xlongrightarrow[\textbf{{1}}_v]{} \mathbf{d}$. Since $\mathbf{c}$ is recurrent, there exists $\mathbf{q} \in \mathbb{N}_0^V$ such that $\mathbf{d} \xlongrightarrow[\mathbf{q}]{} \mathbf{c}$. 
Write $\mathbf{q}':=\textbf{{1}}_v+\mathbf{q}$. It follows that $\mathbf{c} \xlongrightarrow[\mathbf{q}']{} \mathbf{c}$, and in particular we have that $\mathbf{q}'$ is a nonzero period vector of $G$. This implies that $\mathbf{q}'$ is a positive multiple of $\mathbf{r}$, and Lemma~\ref{lemma: BL92}\eqref{item: BL92 2} then implies that $\mathbf{c} \xlongrightarrow[\mathbf{r}]{}\mathbf{c}$, as desired. Proof for the $\Leftarrow$ direction: Since $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$ by assumption and $\mathbf{r}$ is a strictly positive vector, we conclude that $\mathbf{c}$ is not a stable sinkless configuration. Let $\mathbf{d}$ be an arbitrary sinkless configuration that satisfies $\mathbf{c} \longrightarrow \mathbf{d}$. It suffices to show that $\mathbf{d} \longrightarrow \mathbf{c}$. Let $\mathbf{q}$ be the odometer of this sequence of firing moves that sends $\mathbf{c}$ to $\mathbf{d}$. Since $\mathbf{r}$ is a strictly positive vector, there exists a positive $k$ such that $\mathbf{q} \leq k \mathbf{r}$. Since $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$ by assumption, it follows that $\mathbf{c} \xlongrightarrow[k\mathbf{r}]{} \mathbf{c}$. On the other hand, we also have $\mathbf{c} \xlongrightarrow[\mathbf{q}]{} \mathbf{d}$ by assumption. Lemma~\ref{lemma: BL92}\eqref{item: BL92 1} then implies that $\mathbf{d} \xlongrightarrow[k\mathbf{r} -\mathbf{q}]{} \mathbf{c}$, as desired. \end{proof} The next lemma gives two sufficient conditions for a sinkless configuration to be recurrent. \begin{lemma}\label{p. recurrence can be inherited} Let $G$ be a strongly connected digraph. \begin{enumerate} \item \label{item: recurrence inheritance 1} If $\mathbf{c}$ is a recurrent sinkless configuration and $\mathbf{d}$ is a sinkless configuration that satisfies $\mathbf{c} \longrightarrow \mathbf{d}$, then $\mathbf{d}$ is a recurrent sinkless configuration. \item \label{item: recurrence inheritance 2} If $\mathbf{c}$ is a recurrent sinkless configuration, then for any $k \in \mathbb{N}_0$ and $v \in V$ the sinkless configuration $\mathbf{c}+k\textbf{{1}}_v$ is also recurrent. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item We first show that $\mathbf{d}$ is not a stable sinkless configuration. Since $\mathbf{c} \longrightarrow \mathbf{d}$ by assumption and $\mathbf{c}$ is recurrent, we conclude that $\mathbf{d} \longrightarrow \mathbf{c}$. If the odometer of the sequence of legal firing moves from $\mathbf{d}$ to $\mathbf{c}$ is nonzero, then $\mathbf{d}$ is not stable by definition. If the odometer is the zero vector, then $\mathbf{d}=\mathbf{c}$ is recurrent and hence is not stable. We now show that if $\mathbf{d}'$ is a sinkless configuration that satisfies $\mathbf{d} \longrightarrow \mathbf{d}'$, then $\mathbf{d}' \longrightarrow \mathbf{d}$. Since $\mathbf{c} \longrightarrow \mathbf{d}$ and $\mathbf{d} \longrightarrow \mathbf{d}'$, the transitivity of $\longrightarrow$ implies that $\mathbf{c} \longrightarrow \mathbf{d}'$. Since $\mathbf{c}$ is recurrent, we then have $\mathbf{d}' \longrightarrow \mathbf{c}$. Since we also have $\mathbf{c} \longrightarrow \mathbf{d}$, the transitivity of $\longrightarrow$ then implies that $\mathbf{d}' \longrightarrow \mathbf{d}$. The proof is complete. \item Since $\mathbf{c}$ is recurrent, we have $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$ by Proposition~\ref{l. burning test for sandg}. 
It then follows from the definition of legal firing moves that $\mathbf{c}+k\textbf{{1}}_v \xlongrightarrow[\mathbf{r}]{} \mathbf{c}+k\textbf{{1}}_v$. Proposition~\ref{l. burning test for sandg} then implies that $\mathbf{c}+k\textbf{{1}}_v$ is recurrent, as desired. \qedhere \end{enumerate} \end{proof} \subsection{Abelian sandpile model with a sink}\label{ss. sink} Let $s \in V$ be a fixed vertex which we refer to as the \emph{sink}. The \emph{(abelian) sandpile model with a sink} at $s$, denoted by $\textnormal{Sand}(G,s)$, is a variant of the sinkless sandpile model for which the sink vertex $s$ never fires and all chips sent to $s$ are removed from the game. A \emph{sink (chip) configuration} $\widehat{\mathbf{c}}$ is a vector in $\mathbb{N}_0^{V}$ such that $\widehat{\mathbf{c}}(s)=0$. A \emph{sink firing move} consists of reducing the number of chips of $\widehat{\mathbf{c}}$ at a vertex $v \in V \setminus \{s\}$ by $\outdeg_G(v)$, and then sending one chip along each outgoing edge of $v$ to its neighbouring vertex that is not $s$. A sink firing move is \emph{legal} if the fired vertex $v$ has at least as many chips as its outdegree before the firing. It is convenient for us to be able to fire the sink vertex $s$ as a legal sink firing move, so we adopt the convention that firing $s$ is a legal sink firing move that sends a sink configuration $\widehat{\mathbf{c}}$ back to $\widehat{\mathbf{c}}$. (Note that a legal sink firing move sends a sink configuration to another sink configuration.) The \emph{odometer} of a sequence of sink firing moves is the vector $\mathbf{q} \in \mathbb{N}_0^V$ that records the number of times a vertex is fired in the sequence. Let $\Delta_s$ denote the $V \times V$ matrix obtained by changing the row of the Laplacian matrix $\Delta$ that corresponds to $s$ with the zero vector. Note that applying a finite sequence of sink firing moves with odometer $\mathbf{q}$ to a sink configuration $\widehat{\mathbf{c}}$ gives us the sink configuration $\widehat{\mathbf{c}}-\Delta_s\mathbf{q}$, provided that $\mathbf{q}(s)=0$. For two sink configurations $\widehat{\mathbf{c}}$ and $\widehat{\mathbf{d}}$, we write $\widehat{\mathbf{c}} \xlongrightarrow{s} \widehat{\mathbf{d}}$ if there exists a sequence of legal sink firing moves from $\widehat{\mathbf{c}}$ to $\widehat{\mathbf{d}}$. A sink configuration $\widehat{\mathbf{c}}$ is \emph{stable} if $\widehat{\mathbf{c}}(v)< \outdeg(v)$ for all $v \in V$. \begin{definition} [Stabilization] For any sink configuration $\widehat{\mathbf{c}}$, the \emph{stabilization} $\widehat{\mathbf{c}}^\circ$ of $\widehat{\mathbf{c}}$ is a sink configuration such that $\widehat{\mathbf{c}} \xlongrightarrow{s} \widehat{\mathbf{c}}^\circ$ and $\widehat{\mathbf{c}}^\circ$ is a stable sink configuration. \qedhere \end{definition} For a strongly connected digraph $G$, any sink configuration $\widehat{\mathbf{c}}$ has a unique stabilization~\cite[Lemma~2.4]{HLM08}. \begin{lemma}\textnormal{(\cite[Lemma~2.2]{HLM08}).}\label{l. e and f related get same stabilizer} Let $G$ be a strongly connected digraph, let $s\in V$, and let $\widehat{\mathbf{c}}$ and $\widehat{\mathbf{d}}$ be sink configurations of $G$. If $\widehat{\mathbf{c}} \xlongrightarrow{s} \widehat{\mathbf{d}}$, then $\widehat{\mathbf{c}}^\circ=\widehat{\mathbf{d}}^\circ$. \qed \end{lemma} \begin{definition}[Recurrent sink configurations] \label{definition: recurrent sink} Let $G$ be a strongly connected digraph, and let $s\in V$. 
A sink configuration $\widehat{\mathbf{c}}$ of $G$ is \emph{(sink) recurrent} if for any sink configuration $\widehat{\mathbf{c}}_1$ there exists another sink configuration $\widehat{\mathbf{c}}_2$ such that $(\widehat{\mathbf{c}}_1+\widehat{\mathbf{c}}_2)^\circ=\widehat{\mathbf{c}}$. \end{definition} Note that a recurrent sink configuration is always a stable configuration. We use $\textnormal{Rec}(G,s)$ to denote the set of sink recurrent configurations of $\textnormal{Sand}(G,s)$. When there is a possible ambiguity between the two notions of recurrence, \emph{sinkless recurrence} will refer to Definition~\ref{definition: recurrent sinkless}, and \emph{sink recurrence} will refer to Definition~\ref{definition: recurrent sink}. \begin{lemma}[Abelian property \textnormal{\cite[Corollary~2.6]{HLM08}}]\label{l. abelian property} Let $G$ be a strongly connected digraph, let $s \in V$, and let $\widehat{\mathbf{c}}_1, \widehat{\mathbf{c}}_2,\widehat{\mathbf{c}}_3$ be sink configurations. Then: \[ \pushQED{\qed} ((\widehat{\mathbf{c}}_1+\widehat{\mathbf{c}}_2)^\circ+\widehat{\mathbf{c}}_3)^\circ=((\widehat{\mathbf{c}}_1+\widehat{\mathbf{c}}_3)^\circ+\widehat{\mathbf{c}}_2)^\circ =(\widehat{\mathbf{c}}_1+\widehat{\mathbf{c}}_2+\widehat{\mathbf{c}}_3)^\circ. \popQED \] \end{lemma} In the next proposition we present a burning test to check for sink recurrence. It was first discovered by Dhar~\cite{Dhar90} for undirected graphs, and then by Speer~\cite{Speer93} and by Asadi and Backman~\cite{AB11} for directed graphs. The \emph{sink Laplacian vector} $\widehat{\mathbf u} \in \mathbb{N}_0^V$ is \[ \widehat{\mathbf u}(v):=\begin{cases} \text{number of edges from $s$ to $v$ } &\text{ if }v\neq s;\\ 0 &\text{ if }v=s. \end{cases} \] Recall that $\mathbf{r}(s)$ is the entry of the primitive period vector $\mathbf{r}$ that corresponds to $s$. \begin{proposition}[Sink burning test \textnormal{\cite[Theorem~3]{Speer93}}, \textnormal{\cite[Theorem~3.11]{AB11}}]\label{l. burning test thief} Let $G$ be a strongly connected digraph, let $s \in V$, and let $\widehat{\mathbf{c}}$ be a sink configuration of $G$. Then $\widehat{\mathbf{c}}$ is a sink recurrent configuration if and only if $(\widehat{\mathbf{c}}+ \mathbf{r}(s) \widehat{\mathbf u})^\circ=\widehat{\mathbf{c}}$. \qed \end{proposition} The next lemma gives a sufficient condition for a sink configuration to be recurrent. \begin{lemma}\textnormal{(\cite[Lemma~2.17]{HLM08}).}\label{p. s-recurrence can be inherited} Let $G$ be a strongly connected digraph, let $s \in V$, and let $\widehat{\mathbf{c}}$ be a sink recurrent configuration. If $\widehat{\mathbf{d}}$ is a sink configuration such that there exists a sink configuration $\widehat{\mathbf{c}}'$ satisfying $\widehat{\mathbf{d}}=(\widehat{\mathbf{c}}+\widehat{\mathbf{c}}')^\circ$, then $\widehat{\mathbf{d}}$ is a sink recurrent configuration. \qed \end{lemma} Let $Z_s \subseteq \mathbb{Z}^V$ denote the set \[ Z_s:=\{ \mathbf{z} \in \mathbb{Z}^V \mid \mathbf{z}(s)=0 \}.\] Note that $\textnormal{Rec}(G,s)$ is a subset of $Z_s$. Also note that $\Delta_s Z_s \subseteq Z_s$. \begin{lemma}[{\cite[Corollary~2.16, Corollary~2.18]{HLM08}}]\label{lemma: sandpile group bijection} Let $G$ be a strongly connected digraph, and let $s \in V$. Then \begin{enumerate} \item \label{item: sandpile group 1} The inclusion map $\textnormal{Rec}(G,s) \to Z_s/\Delta_s Z_s$ is a bijection. 
\item \label{item: sandpile group 2} The cardinality of $\textnormal{Rec}(G,s)$ is equal to the number of reverse arborescences of $G$ rooted at $s$. \qed \end{enumerate} \end{lemma} \subsection{A connection between the sinkless sandpile model and the sandpile model with a sink}\label{ss. comparion two sandpile models} Let $\mathbf{c}$ be a sinkless configuration of $G$. To lighten the notation, we denote by $\widehat{\mathbf{c}}$ the sink configuration of $G$ with $\widehat{\mathbf{c}}(v):=\mathbf{c}(v)$ if $v \neq s$ and $\widehat{\mathbf{c}}(v):=0$ if $v=s$. Let $v_1,\ldots, v_k$ be vertices of $G$. For two sinkless configurations $\mathbf{c}$ and $\mathbf{d}$, we write $\mathbf{c} \xlongrightarrow[v_1\cdots v_k]{}\mathbf{d}$ if the sequence of sinkless firing moves that fires $v_1,\ldots, v_k$ (in that order) is legal and sends $\mathbf{c}$ to $\mathbf{d}$. For two sink configurations $\widehat{\mathbf{c}}$ and $\widehat{\mathbf{d}}$, we write $\widehat{\mathbf{c}} \xlongrightarrow[v_1\cdots v_k]{s}\widehat{\mathbf{d}}$ if the sequence of sink firing moves that fires $v_1,\ldots, v_k$ (in that order) is legal and sends $\widehat{\mathbf{c}}$ to $\widehat{\mathbf{d}}$. In the next lemma we highlight a connection between sinkless configurations and sink configurations. \begin{lemma}\label{p. sequence of firing moves is transferrable} Let $\mathbf{c}$ and $\mathbf{d}$ be sinkless configurations. \begin{enumerate} \item \label{item: tranferrable 1} Let $v_1,\ldots,v_k \in V \setminus \{s\}$, and let $n$ be the number of chips removed from the game by the sequence of sink firing moves that fires $v_1,\ldots, v_k$. If $\widehat{\mathbf{c}} \xlongrightarrow[v_1\cdots v_k]{s}\widehat{\mathbf{d}}$ and $\mathbf{d}(s)-\mathbf{c}(s)=n$, then $\mathbf{c} \xlongrightarrow[v_1\cdots v_k]{}\mathbf{d}$. \item \label{item: transferrable 2} Let $v_1,\ldots, v_k \in V$, and let $m$ be the number of instances of $s$ in the sequence $v_1,\ldots, v_k$. If $\mathbf{c} \xlongrightarrow[v_1\cdots v_k]{}\mathbf{d}$, then $\widehat{\mathbf{c}}+m \widehat{\mathbf u} \xlongrightarrow[v_1\cdots v_k]{s}\widehat{\mathbf{d}}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item By induction on $k$, it suffices to prove the claim for $k=1$. Since firing $v_1$ is a legal sink firing move on $\widehat{\mathbf{c}}$ and $v_1\neq s$, we have that firing $v_1$ is also a legal sinkless firing move on $\mathbf{c}$. Now note that \begin{align*} \mathbf{c}- \Delta \textbf{{1}}_{v_1}=&\mathbf{c} +n\textbf{{1}}_s -\Delta_s \textbf{{1}}_{v_1} \quad \text{(since }v_1 \neq s)\\ =&\mathbf{d}+ (\mathbf{c}- \mathbf{d}) +n\textbf{{1}}_s -\Delta_s \textbf{{1}}_{v_1} = \mathbf{d} + (\widehat{\mathbf{c}}- \widehat{\mathbf{d}}) -\Delta_s \textbf{{1}}_{v_1}=\mathbf{d}. \end{align*} Hence we conclude that $\mathbf{c} \xlongrightarrow[v_1]{}\mathbf{d}$, as desired. \item By induction on $k$, it suffices to prove the claim for $k=1$. First consider the case when $v_1 =s$. Note that by definition the legal sink firing move that fires $s$ sends $\widehat{\mathbf{c}}+\widehat{\mathbf u}$ back to $\widehat{\mathbf{c}}+\widehat{\mathbf u}$. On the other hand, we have \begin{align*} \mathbf{d}= \mathbf{c}- \Delta \textbf{{1}}_s= \mathbf{c}+ \widehat{\mathbf u}- \outdeg(s)\textbf{{1}}_s. \end{align*} This then implies that $\widehat{\mathbf{d}}=\widehat{\mathbf{c}}+\widehat{\mathbf u}$. Hence we have $\widehat{\mathbf{c}}+m \widehat{\mathbf u} \xlongrightarrow[v_1]{s}\widehat{\mathbf{d}}$. 
Now consider the case when $v_1\neq s$. Since firing $v_1$ is a legal sinkless firing move on $\mathbf{c}$ and $v_1\neq s$, we have that firing $v_1$ is also a legal sink firing move on $\widehat{\mathbf{c}}$. Now note that \begin{align*} \mathbf{d}=\mathbf{c}-\Delta \textbf{{1}}_{v_1}=&\mathbf{c} +n\textbf{{1}}_s -\Delta_s \textbf{{1}}_{v_1} \quad \text{(since }v_1 \neq s), \end{align*} where $n$ denotes the number of edges from $v_1$ to $s$. This implies that $\widehat{\mathbf{d}}=\widehat{\mathbf{c}}-\Delta_s \textbf{{1}}_{v_1}$. Since $m=0$ when $v_1\neq s$, we conclude that $\widehat{\mathbf{c}}+m \widehat{\mathbf u} \xlongrightarrow[v_1]{s}\widehat{\mathbf{d}}$. The proof is complete. \qedhere \end{enumerate} \end{proof} \section{Proof of Theorem~\ref{c. Perrot-Pham}}\label{s. connections} In this section we prove the conjecture of Perrot and Pham~\cite[Conjecture~1]{Perrot-Pham}. Recall that $\Delta$ is the Laplacian matrix of $G$. \begin{definition}[Sinkless equivalence relation] For any recurrent sinkless configurations $\mathbf{c}$ and $\mathbf{d}$, we write $\mathbf{c} \sim \mathbf{d}$ if there exists $\mathbf{z} \in \mathbb{Z}^V$ such that $\mathbf{c}-\mathbf{d}=\Delta \mathbf{z}$. \end{definition} Note that $\sim$ defines an equivalence relation on the set of recurrent sinkless configurations. We call an equivalence class for the relation $\sim$ a \emph{recurrent (sinkless) class}. For any recurrent sinkless configuration $\mathbf{c}$, we denote by $[\mathbf{c}]$ the recurrent sinkless class that contains $\mathbf{c}$. \begin{definition} [Sinkless level] For any sinkless configuration $\mathbf{c}$, the \emph{level} of $\mathbf{c}$, denoted by $\textnormal{lvl}(\mathbf{c})$, is the total number of chips in the configuration $\mathbf{c}$. For any recurrent sinkless class $[\mathbf{c}]$, the \emph{level} of $[\mathbf{c}]$, denoted by $\textnormal{lvl}([\mathbf{c}])$, is the level of a configuration contained in $[\mathbf{c}]$. \end{definition} It is straightforward to check that two sinkless configurations have the same total number of chips if they are related by $\sim$, and hence the level of recurrent sinkless classes is well defined. Recall that $\Delta_s$ is the $V \times V$ matrix obtained by changing the row of the Laplacian matrix of $G$ that corresponds to $s$ with the zero vector. \begin{definition}[Sink equivalence relation]\label{definition: sink equivalence relation} For any recurrent sink configurations $\widehat{\mathbf{c}}$ and $\widehat{\mathbf{d}}$, we write $\widehat{\mathbf{c}} \es \widehat{\mathbf{d}}$ if there exists $\mathbf{z} \in \mathbb{Z}^V$ such that $\widehat{\mathbf{c}}-\widehat{\mathbf{d}}=\Delta_s \mathbf{z}$. \end{definition} Note that $\es$ defines an equivalence relation on the set of recurrent sink configurations. We call an equivalence class for the relation $\es$ a \emph{recurrent (sink) class}. For any recurrent sink configuration $\widehat{\mathbf{c}}$, we denote by $[\widehat{\mathbf{c}}]_s$ the recurrent sink class that contains $\widehat{\mathbf{c}}$. \begin{definition} [Sink level] For any sink configuration $\widehat{\mathbf{c}}$, the \emph{level} $\textnormal{lvl}(\widehat{\mathbf{c}})$ of $\widehat{\mathbf{c}}$ is the total number of chips in $\widehat{\mathbf{c}}$. For any recurrent sink class $[\widehat{\mathbf{c}}]_s$, the \emph{level} of $[\widehat{\mathbf{c}}]_s$ is \[\textnormal{lvl}([\widehat{\mathbf{c}}]_s):=\max \{ \textnormal{lvl}(\widehat{\mathbf{d}}) \mid \widehat{\mathbf{d}} \in [\widehat{\mathbf{c}}]_s \}. 
\qedhere \] \end{definition} For any nonnegative integer $m$, we denote by $\textnormal{Rec}_m(G,\sim)$ the set of recurrent sinkless classes with level $m$, and by $\textnormal{Rec}_m(G,\es)$ the set of recurrent sink classes with level $m$. \begin{proposition}\label{p. size Rec(G,es)} Let $G$ be a strongly connected digraph, and let $s \in V$. Then the cardinality of $\textnormal{Rec}(G,\es)$ is equal to the period constant $\alpha$ of $G$. \end{proposition} \begin{proof} It follows from Lemma~\ref{lemma: sandpile group bijection}\eqref{item: sandpile group 1} and the definition of $\es$ that: \begin{align*} |\textnormal{Rec}(G,\es)|=\left| \frac{Z_s}{\Delta_s \mathbb{Z}^{V}} \right|, \end{align*} where $Z_s$ and $\Delta_s$ are as defined in Section~\ref{s. preliminaries}. Now note that \begin{align*} \left| \frac{Z_s}{\Delta_s \mathbb{Z}^{V}} \right|&={\left| \frac{Z_s}{\Delta_s Z_s} \right|} \, \bigg/ \, {\left| \frac{\Delta_s \mathbb{Z}^V }{\Delta_s Z_s} \right|} \quad \text{(by the third isomorphism theorem for groups)}\\ &= {t_s} \, \bigg/\, { \left| \frac{\Delta_s \mathbb{Z}^V}{\Delta_s Z_s} \right|} \quad \text{(by Lemma~\ref{lemma: sandpile group bijection}\eqref{item: sandpile group 2})}. \end{align*} By a direct computation, we have $|{\Delta_s \mathbb{Z}^V} \, / \, {\Delta_s Z_s} |$ is equal to $\mathbf{r}(s)$, where $\mathbf{r}$ is the primitive period vector of $G$. Now recall that $\mathbf{r}(s)$ is equal to $t_s/\alpha$ by the Markov chain tree theorem~\cite{AT89}. Hence we conclude that \begin{align*} |\textnormal{Rec}(G,\es)|={t_s} \, \bigg/\, { \left| \frac{\Delta_s \mathbb{Z}^{V}}{\Delta_s Z_s} \right|} =\frac{t_s}{\mathbf{r}(s)}=\alpha, \end{align*} as desired. \end{proof} As a corollary of Proposition~\ref{p. size Rec(G,es)}, we have that the cardinality of $\textnormal{Rec}(G,\es)$ is independent of the choice of $s$. We will prove a stronger sink independence result in the next theorem. \begin{definition}[Biggs-Merino polynomial] Let $G$ be a strongly connected digraph, and let $s \in V$. The \emph{Biggs-Merino polynomial} $\mathcal B(G,s;y)$ is \[\mathcal B(G,s;y):= \sum_{m\geq 0} |\textnormal{Rec}_m(G,\es)|\cdot y^{m+\outdeg(s)}. \qedhere \] \end{definition} Since $\textnormal{Rec}(G,\es)$ is a finite set by Proposition~\ref{p. size Rec(G,es)}, we have that $\textnormal{Rec}_m(G,\es)$ is empty for sufficiently large $m$. This then implies that $\mathcal B(G,s;y)$ is a polynomial. We denote by $\mathcal R(G;y)$ the formal power series \[\mathcal R(G;y):= \sum_{m\geq 0} |\textnormal{Rec}_m(G,{\sim})|\, y^{m}.\] \begin{theorem}\label{t. main theorem} Let $G$ be a strongly connected digraph, and let $s \in V$. We have the following equality of formal power series: \begin{equation*} \mathcal R(G;y)=\frac{\mathcal B(G,s;y)}{(1-y)}. \end{equation*} \end{theorem} The following conjecture of Perrot and Pham~\cite{Perrot-Pham} is a direct corollary of Theorem~\ref{t. main theorem}. \begin{reptheorem}{c. Perrot-Pham} \textnormal{ (\cite[Conjecture~1]{Perrot-Pham}).} Let $G$ be a strongly connected digraph. Then $\mathcal B(G,s;y)$ is independent of the choice of the vertex $s$. \qed \end{reptheorem} The rest of this section is focused on the proof of Theorem~\ref{t. main theorem}. 
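To make the definition of $\mathcal B(G,s;y)$ concrete, the following brute-force Python sketch (an illustration only, on a hypothetical example) enumerates the recurrent sink configurations of the bidirected triangle via the sink burning test of Proposition~\ref{l. burning test thief} and tabulates them by level. Since the graph is Eulerian, the primitive period vector is $(1,\ldots,1)$, so $\mathbf{r}(s)=1$; moreover, for Eulerian digraphs $t_v=\alpha$ for every $v$, so by Proposition~\ref{p. size Rec(G,es)} and Lemma~\ref{lemma: sandpile group bijection} every recurrent sink class is a singleton and the tabulation below gives the coefficients of $\mathcal B(G,s;y)$ directly.
\begin{verbatim}
# Brute-force illustration on the bidirected triangle (Eulerian, so r = (1,1,1)):
# enumerate recurrent sink configurations via the sink burning test and
# tabulate them by level.
from itertools import product
from collections import Counter

A = [[0, 1, 1],               # A[i][j] = number of edges from i to j
     [1, 0, 1],
     [1, 1, 0]]
n = len(A)
outdeg = [sum(row) for row in A]
s = 0                         # sink vertex
u_hat = [A[s][v] if v != s else 0 for v in range(n)]   # sink Laplacian vector

def stabilize(c):
    c = list(c)
    while True:
        v = next((v for v in range(n) if v != s and c[v] >= outdeg[v]), None)
        if v is None:
            return tuple(c)
        c[v] -= outdeg[v]
        for j in range(n):
            if j != s:
                c[j] += A[v][j]            # chips sent to the sink are removed

def is_recurrent(c):                       # sink burning test with r(s) = 1
    return stabilize([c[v] + u_hat[v] for v in range(n)]) == tuple(c)

levels = Counter()
for c in product(*[[0] if v == s else range(outdeg[v]) for v in range(n)]):
    if is_recurrent(c):
        levels[sum(c) + outdeg[s]] += 1    # exponent of y in B(G,s;y)
print(dict(levels))                        # {3: 2, 4: 1}: B(G,s;y) = 2y^3 + y^4
\end{verbatim}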
For any nonnegative $m$, we define the map $\varphi$ by \begin{align*} \varphi: \textnormal{Rec}_m(G, \sim) &\to \bigsqcup_{n \leq m-\outdeg(s)}\textnormal{Rec}_{n}(G, \es)\\ {[\mathbf{c}]} &\mapsto [\widehat{\mathbf{c}}^\circ]_s \end{align*} The following lemma shows that $\varphi$ is well defined and is injective for all positive $m$. \begin{lemma}\label{lemma: varphi is well defined and injective} Let $G$ be a strongly connected digraph, let $s \in V$, and let $\mathbf{c},\mathbf{d}$ be sinkless recurrent configurations of $G$ with the same level. Then \begin{enumerate} \item The sink configuration $\widehat{\mathbf{c}}^\circ$ is sink recurrent, and $\textnormal{lvl}(\widehat{\mathbf{c}}^\circ) \leq \textnormal{lvl}(\mathbf{c})-\outdeg(s)$. \item $\mathbf{c} \sim \mathbf{d}$ if and only if $ \widehat{\mathbf{c}}^\circ \es \widehat{\mathbf{d}}^\circ$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Since $\mathbf{c}$ is sinkless recurrent, we have $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$ by Proposition~\ref{l. burning test for sandg} (recall that $\mathbf{r}$ is the primitive period vector of $G$). By Lemma \ref{p. sequence of firing moves is transferrable}\eqref{item: transferrable 2}, we then have $\widehat{\mathbf{c}}+\mathbf{r}(s)\widehat{\mathbf u} \xlongrightarrow[\mathbf{r}]{s}\widehat{\mathbf{c}}$. By Lemma \ref{l. e and f related get same stabilizer}, this implies that $(\widehat{\mathbf{c}}+\mathbf{r}(s)\widehat{\mathbf u})^\circ = \widehat{\mathbf{c}}^\circ$. By Lemma \ref{l. abelian property}, we then conclude that $(\widehat{\mathbf{c}}^\circ+\mathbf{r}(s)\widehat{\mathbf u})^\circ = (\widehat{\mathbf{c}}+\mathbf{r}(s)\widehat{\mathbf u})^\circ = \widehat{\mathbf{c}}^\circ$. Hence $\widehat{\mathbf{c}}^\circ$ passes the sink burning test in Proposition \ref{l. burning test thief}, and we have that $\widehat{\mathbf{c}}^\circ$ is a recurrent sink configuration. Let $n$ be the number of chips removed during the stabilization of $\widehat{\mathbf{c}}$, and let $\mathbf{c}':=\widehat{\mathbf{c}}^\circ+(n+\mathbf{c}(s))\textbf{{1}}_s$. By Lemma~\ref{p. sequence of firing moves is transferrable}\eqref{item: tranferrable 1}, we conclude that $\mathbf{c} \xlongrightarrow{} \mathbf{c}'$. This implies that $\textnormal{lvl}(\mathbf{c})=\textnormal{lvl}(\mathbf{c}')$ as legal firing moves do not change the total number of chips. By Lemma~\ref{p. recurrence can be inherited}\eqref{item: recurrence inheritance 1}, this also implies that $\mathbf{c}'$ is a recurrent sinkless configuration. In particular, $\mathbf{c}'$ is not a stable sinkless configuration. Since $\widehat{\mathbf{c}}^\circ$ is a recurrent sink configuration (and hence stable), we have $\widehat{\mathbf{c}}^\circ(v)<\outdeg(v)$ for all $v \in V$. This implies that $\mathbf{c}'(v)=\widehat{\mathbf{c}}^\circ(v) < \outdeg(v)$ for all $v \in V \setminus\{s\}$. Since $\mathbf{c}'$ is not a stable sinkless configuration, we then conclude that $\mathbf{c}'(s)\geq \outdeg(s)$. Now note that \[ \textnormal{lvl}(\mathbf{c})=\textnormal{lvl}(\mathbf{c}')=\textnormal{lvl}(\widehat{\mathbf{c}}^\circ)+\mathbf{c}'(s)\geq \textnormal{lvl}(\widehat{\mathbf{c}}^\circ)+\outdeg(s),\] and the proof is complete. \item Let $\mathbf{q}_1$ be the odometer of a sequence of sink firing moves that stabilizes $\widehat{\mathbf{c}}$, and let $\mathbf{q}_2$ be the odometer of a sequence of sink firing moves that stabilizes $\widehat{\mathbf{d}}$. 
We have \begin{equation}\label{equation: varphi} \mathbf{c}-\mathbf{d} = \widehat{\mathbf{c}}-\widehat{\mathbf{d}} +(\mathbf{c}(s)-\mathbf{d}(s))\textbf{{1}}_s =\widehat{\mathbf{c}}^\circ-\widehat{\mathbf{d}}^\circ + \Delta_s(\mathbf{q}_1-\mathbf{q}_2)+(\mathbf{c}(s)-\mathbf{d}(s))\textbf{{1}}_s. \end{equation} If $\mathbf{c}-\mathbf{d}=\Delta \mathbf{z}$ for some $\mathbf{z} \in \mathbb{Z}^V$, then \begin{align*} \widehat{\mathbf{c}}^\circ-\widehat{\mathbf{d}}^\circ =& \Delta\mathbf{z}- \Delta_s (\mathbf{q}_1-\mathbf{q}_2)-(\mathbf{c}(s)-\mathbf{d}(s))\textbf{{1}}_s \quad \text{(by equation~\eqref{equation: varphi})}\\ =&\Delta_s( \mathbf{z}-\mathbf{q}_1+\mathbf{q}_2) +t\textbf{{1}}_s, \end{align*} for some $t \in \mathbb{Z}$. Since $\widehat{\mathbf{c}}^\circ(s)=\widehat{\mathbf{d}}^\circ(s)=0$ and $\textbf{{1}}_s^\top \Delta_s=(0,\ldots,0)$, \[0 =\textbf{{1}}_s^\top(\widehat{\mathbf{c}}^\circ -\widehat{\mathbf{d}}^\circ)= \textbf{{1}}_s^\top(\Delta_s( \mathbf{z}-\mathbf{q}_1+\mathbf{q}_2) +t\textbf{{1}}_s)=t. \] Hence we have $\widehat{\mathbf{c}}^\circ-\widehat{\mathbf{d}}^\circ=\Delta_s( \mathbf{z}-\mathbf{q}_1+\mathbf{q}_2)$, which implies that $\widehat{\mathbf{c}}^\circ \es \widehat{\mathbf{d}}^\circ$. If $\widehat{\mathbf{c}}^\circ-\widehat{\mathbf{d}}^\circ=\Delta_s \mathbf{z}$ for some $\mathbf{z} \in \mathbb{Z}^V$, then \begin{align*} \mathbf{c}-\mathbf{d} =& \Delta_s \mathbf{z}+ \Delta_s (\mathbf{q}_1-\mathbf{q}_2)+(\mathbf{c}(s)-\mathbf{d}(s))\textbf{{1}}_s \quad \text{(by equation~\eqref{equation: varphi})}\\ =&\Delta( \mathbf{z}+\mathbf{q}_1-\mathbf{q}_2) +t\textbf{{1}}_s, \end{align*} for some $t \in \mathbb{Z}$. Since $\textnormal{lvl}(\mathbf{c})=\textnormal{lvl}(\mathbf{d})$ by assumption and $(1,\ldots,1)^\top \Delta=(0,\ldots,0)$, \begin{align*} 0=&(1,\ldots, 1)^\top(\mathbf{c}-\mathbf{d}) = (1,\ldots,1)^\top (\Delta( \mathbf{z}+\mathbf{q}_1-\mathbf{q}_2) +t\textbf{{1}}_s) =t. \end{align*} Hence we have $\mathbf{c}-\mathbf{d} =\Delta( \mathbf{z}+\mathbf{q}_1-\mathbf{q}_2)$, which implies that $\mathbf{c} \sim \mathbf{d}$. The proof is complete. \qedhere \end{enumerate} \end{proof} We now proceed by showing that the map $\varphi$ is surjective, and we need the following technical lemma. \begin{lemma}\label{lemma: varphi is surjective} Let $G$ be a strongly connected digraph, let $s \in V$, and let $\widehat{\mathbf{c}}$ be a recurrent sink configuration such that $\textnormal{lvl}(\widehat{\mathbf{c}})\geq \textnormal{lvl}(\widehat{\mathbf{d}})$ for all $\widehat{\mathbf{d}} \in [\widehat{\mathbf{c}}]_s$. Let $\mathbf{c}:=\widehat{\mathbf{c}}+\outdeg(s) \textbf{{1}}_s$. Then $\mathbf{c}$ is a recurrent sinkless configuration of $G$. \end{lemma} \begin{proof} Let $v_1,\ldots, v_k$ be a sequence of legal sinkless firing moves on $\mathbf{c}$ with odometer $\mathbf{q}$ satisfying $\mathbf{q}\leq \mathbf{r}$ coordinatewise, where $\mathbf{r}$ is the primitive period vector. Without loss of generality, assume that $v_1,\ldots, v_k$ is of maximum length. Note that $k\geq 1$ as firing $s$ is a legal sinkless firing move on $\mathbf{c}$, and in particular $\mathbf{q}$ is a nonzero vector. Write $\mathbf{c}':=\mathbf{c}-\Delta \mathbf{q}$. We claim that $\mathbf{c}'(v)<\outdeg(v)$ for all $v \in V \setminus \{s\}$. Suppose to the contrary that $\mathbf{c}'(v)\geq \outdeg(v)$ for some $v \in V \setminus \{s\}$. By the maximality of the odometer $\mathbf{q}$, it follows that $\mathbf{q}(v) =\mathbf{r}(v)$. 
Now note that \begin{align*} \mathbf{c}'(v)=&\mathbf{c}(v)-(\Delta \mathbf{q})(v)= \mathbf{c}(v)- \Delta_{v,v} \mathbf{q}(v) -\sum_{w \in V \setminus \{v\}} \Delta_{v,w} \mathbf{q}(w)\\ \leq &\mathbf{c}(v)- \Delta_{v,v} \mathbf{q}(v) -\sum_{w \in V \setminus \{v\}} \Delta_{v,w} \mathbf{r}(w)\\ = &\mathbf{c}(v)- \Delta_{v,v} \mathbf{r}(v) -\sum_{w \in V \setminus \{v\}} \Delta_{v,w} \mathbf{r}(w)=\mathbf{c}(v)-(\Delta\mathbf{r})(v)=\mathbf{c}(v)=\widehat{\mathbf{c}}(v). \end{align*} Since $\widehat{\mathbf{c}}$ is a recurrent sink configuration (and hence stable) and $v \in V\setminus\{s\}$, we have $\widehat{\mathbf{c}}(v)<\outdeg(v)$. This means that $\mathbf{c}'(v)\leq \widehat{\mathbf{c}}(v)<\outdeg(v)$, which contradicts $\mathbf{c}'(v)\geq \outdeg(v)$. This proves the claim. Since $\mathbf{c} \xlongrightarrow[\mathbf{q}]{}\mathbf{c}'$, we have $\widehat{\mathbf{c}}+\mathbf{q}(s) \widehat{\mathbf u} \xlongrightarrow[\mathbf{q}]{s} \widehat{\mathbf{c}}'$ by Lemma~\ref{p. sequence of firing moves is transferrable}\eqref{item: transferrable 2}. Since $\widehat{\mathbf{c}}'$ is a stable sink configuration, this implies that $(\widehat{\mathbf{c}}+\mathbf{q}(s) \widehat{\mathbf u})^\circ=\widehat{\mathbf{c}}'$. Since $\widehat{\mathbf{c}}$ is a recurrent sink configuration, we have that $\widehat{\mathbf{c}}'$ is a recurrent sink configuration by Lemma~\ref{p. s-recurrence can be inherited}. It then follows that $\widehat{\mathbf{c}}'$ is contained in $[\widehat{\mathbf{c}}]_s$, and hence we have $\textnormal{lvl}(\widehat{\mathbf{c}})\geq \textnormal{lvl}(\widehat{\mathbf{c}}')$ by assumption. On the other hand, we have $\textnormal{lvl}(\mathbf{c})=\textnormal{lvl}(\mathbf{c}')$ since $\mathbf{c} \xlongrightarrow[\mathbf{q}]{} \mathbf{c}'$. Now note that: \begin{align}\label{equation: surjective 1} \begin{split} 0=&\textnormal{lvl}(\mathbf{c})-\textnormal{lvl}(\mathbf{c}')=\textnormal{lvl}(\widehat{\mathbf{c}})-\textnormal{lvl}(\widehat{\mathbf{c}}')+\mathbf{c}(s)-\mathbf{c}'(s)\\ \geq& \mathbf{c}(s)-\mathbf{c}'(s)=\outdeg(s) -\mathbf{c}'(s). \end{split} \end{align} Hence we have $\mathbf{c}'(s)\geq \outdeg(s)$. By the maximality of the odometer $\mathbf{q}$, this then implies that $\mathbf{q}(s)=\mathbf{r}(s)$. Now note that \[\widehat{\mathbf{c}}'=(\widehat{\mathbf{c}}+\mathbf{q}(s) \widehat{\mathbf u})^\circ=(\widehat{\mathbf{c}}+\mathbf{r}(s) \widehat{\mathbf u})^\circ=\widehat{\mathbf{c}},\] where the last equality is due to Proposition~\ref{l. burning test thief}. This implies that we have equality in equation~\eqref{equation: surjective 1}, which then implies that $\mathbf{c}'(s)=\mathbf{c}(s)$. Hence we conclude that $\mathbf{c}'=\widehat{\mathbf{c}}'+\mathbf{c}'(s)\textbf{{1}}_s=\widehat{\mathbf{c}}+\mathbf{c}(s)\textbf{{1}}_s=\mathbf{c}$. Now note that $\mathbf{q} \in \ker(\Delta)$ since $\Delta \mathbf{q}= \mathbf{c}-\mathbf{c}'=(0,\ldots,0)^\top$. Since $\ker(\Delta)$ has dimension 1 (as $G$ is strongly connected) and $\mathbf{q}$ is nonnegative, we conclude that $\mathbf{q}=k\mathbf{r}$ for some nonnegative $k$. Since we have previously shown that $\mathbf{q}$ is a nonzero vector, we have that $k$ is positive. By Lemma~\ref{lemma: BL92}\eqref{item: BL92 2}, we conclude that $\mathbf{c} \xlongrightarrow[\mathbf{r}]{} \mathbf{c}$. It then follows from Proposition~\ref{l. burning test for sandg} that $\mathbf{c}$ is a recurrent sinkless configuration. \end{proof} \begin{lemma}\label{lemma: varphi is a bijection} Let $G$ be a strongly connected digraph, let $s \in V$, and let $m$ be a nonnegative integer. 
The map $\varphi: \textnormal{Rec}_m(G, \sim) \to \bigsqcup_{n \leq m-\outdeg(s)}\textnormal{Rec}_{n } (G, \es)$ is a bijection. \end{lemma} \begin{proof} All properties except the surjectivity of $\varphi$ have been checked in Lemma~\ref{lemma: varphi is well defined and injective}. Let $[\widehat{\mathbf{c}}]_s$ be a recurrent sink class of $G$ with level at most $m-\outdeg(s)$, and without loss of generality let $\widehat{\mathbf{c}}$ be a recurrent sink configuration in $[\widehat{\mathbf{c}}]_s$ such that $\textnormal{lvl}(\widehat{\mathbf{c}})\geq \textnormal{lvl}(\widehat{\mathbf{d}})$ for all $\widehat{\mathbf{d}} \in [\widehat{\mathbf{c}}]_s$. Let $n:=\textnormal{lvl}(\widehat{\mathbf{c}})=\textnormal{lvl}([\widehat{\mathbf{c}}]_s)$, and let $\mathbf{c}:=\widehat{\mathbf{c}}+(m-n)\textbf{{1}}_s$. Note that $\textnormal{lvl}(\mathbf{c})=\textnormal{lvl}(\widehat{\mathbf{c}})+m-n=m$, and therefore the surjectivity of $\varphi$ follows if we can show that $\mathbf{c}$ is a recurrent sinkless configuration. Let $\mathbf{c}':=\widehat{\mathbf{c}}+\outdeg(s)\textbf{{1}}_s$. By Lemma~\ref{lemma: varphi is surjective}, we have that $\mathbf{c}'$ is a recurrent sinkless configuration. Now note that \[\mathbf{c}(s)=m-n=m-\textnormal{lvl}([\widehat{\mathbf{c}}]_s)\geq \outdeg(s),\] and hence $\mathbf{c}=\mathbf{c}'+k\textbf{{1}}_s$ for some nonnegative $k$. It then follows from Lemma~\ref{p. recurrence can be inherited}\eqref{item: recurrence inheritance 2} that $\mathbf{c}$ is a recurrent sinkless configuration. The proof is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{t. main theorem}] By Lemma \ref{lemma: varphi is a bijection}, we have for any nonnegative $m$ \begin{equation}\label{equation: main theorem 1} |\textnormal{Rec}_{m}(G,{\sim})|= \sum_{n=0}^{m-\outdeg(s)} |\textnormal{Rec}_{n}(G,\es)|. \end{equation} Now note that \begin{align*} \frac{\mathcal B(G,s;y)}{(1-y)}&=\frac{\sum_{n\geq 0} |\textnormal{Rec}_n(G,\es)| \cdot y^{n+\outdeg(s)}}{(1-y)} \\ &=\left( \sum_{n\geq 0} |\textnormal{Rec}_n(G,\es)| \cdot y^{n+\outdeg(s)}\right) \cdot \left( \sum_{k\geq 0} y^k\right)\\ &=\sum_{m\geq 0} \left(\sum_{n=0}^{m-\outdeg(s)} |\textnormal{Rec}_{n}(G,\es)| \right)y^m=\sum_{m \geq 0} |\textnormal{Rec}_m(G,{\sim})| y^{m}\\ &=\mathcal R(G;y). \qedhere \end{align*} \end{proof} \section{A recurrence relation for the Biggs-Merino polynomial}\label{s. examples} In this section we present a recurrence relation for the Biggs-Merino polynomial, and we apply it to compute the Biggs-Merino polynomial for an infinite family of non-Eulerian digraphs. \begin{proposition}\label{l. computation} Let $G$ be a strongly connected digraph, let $s \in V$, and let $k$ be a positive integer. Let $G^k$ be the digraph obtained from $G$ by replacing each edge in $G$ with $k$ copies of the same edge. Then for any $s \in V$, \[\mathcal B(G^k,s;y)= \mathcal B(G,s;y^k) \, \left( \frac{1-y^k}{1-y} \right)^{|V|-1} .\] \end{proposition} \begin{proof} For any sinkless configuration $\mathbf{c}$ in $G^k$, let $\pi(\mathbf{c})$ be the sinkless configuration in $\textnormal{Sand}(G)$ given by \[ \pi(\mathbf{c})(v):=\lfloor {\mathbf{c}(v)}/{k}\rfloor \quad (v \in V). \] Let $\mathbf{c}$ and $\mathbf{c}'$ be configurations in $\textnormal{Sand}(G^k)$. 
It is straightforward to check that $\mathbf{c}'$ is accessible from $\mathbf{c}$ by a sequence of (legal) sinkless firing moves in $G^k$ if and only if $\pi(\mathbf{c}')$ is accessible from $\pi(\mathbf{c})$ by the same sequence of (legal) firing moves in $G$ and for all $v \in V$ we have $\mathbf{c}(v)\equiv \mathbf{c}'(v)$ (mod $k$). Let $[\mathbf{d}]$ be a sinkless recurrent class in $G$. For any $\mathbf{h} \in \{0,\ldots, k-1\}^V$, let $\mathbf{d}_\mathbf{h}$ be the configuration in $\textnormal{Sand}(G^k)$ given by \[ \mathbf{d}_{\mathbf{h}}(v):= k \, \mathbf{d}(v)+\mathbf{h}(v) \quad (v \in V). \] It then follows from the conclusion in the previous paragraph that: \[ \pi^{-1}([\mathbf{d}])= \bigsqcup_{ \mathbf{h} \in \{0,\ldots, k-1 \}^{V} } [\mathbf{d}_\mathbf{h}]. \] Hence we have \begin{align*} \mathcal R(G^k;y)=\sum_{[\mathbf{c}] \in \textnormal{Rec}(G^k,{\sim})} y^{\textnormal{lvl}([\mathbf{c}])}&= \sum_{[\mathbf{d}] \in \textnormal{Rec}(G,{\sim})} \, \sum_{\mathbf{h} \in \{0,\ldots, k-1 \}^{V}} y^{\textnormal{lvl}([\mathbf{d}_\mathbf{h}])}\\ &= \sum_{[\mathbf{d}] \in \textnormal{Rec}(G,{\sim})} y^{k \, \textnormal{lvl}([\mathbf{d}])} \, (1+y+\ldots+y^{k-1})^{|V|} \\ &= \mathcal R(G;y^k) \, \left( \frac{1-y^k}{1-y} \right)^{|V|}. \end{align*} Together with Theorem~\ref{t. main theorem}, this implies that \begin{align*} \mathcal B(G^k,s;y)=&\mathcal R(G^k;y)\, (1-y)={\mathcal R(G;y^k)} \, \frac{(1-y^k)^{|V|}}{(1-y)^{|V|-1}}\\ =&{\mathcal B(G,s;y^k)} \, \left( \frac{1-y^k}{1-y} \right)^{|V|-1}. \qedhere \end{align*} \end{proof} Let $n,a,b$ be positive integers. We denote by $G(n;a,b)$ the digraph with vertex set $\{v_1,v_2, \ldots, v_n\}$, and with $a$ edges from $v_i$ to $v_{i+1}$ and $b$ edges from $v_{i+1}$ to $v_{i}$ for $1 \leq i \leq n-1$. Note that $G(n;a,b)$ is Eulerian if and only if $a= b$. \begin{figure} \caption{The digraph $G(n;a,b)$ with vertex set $\{v_1,\ldots, v_n\}$, and with $a$ edges from $v_i$ to $v_{i+1}$ and $b$ edges from $v_{i+1}$ to $v_{i}$ ($i \in \{1,\ldots, n-1\}$).} \end{figure} \begin{lemma}\label{lemma: Gnab} Let $n,a,b$ be positive integers, and let $k:=\gcd(a,b)$. Then \begin{align*} \mathcal B(G(n;a,b),v_1;y) &= y^{(n-1)(a+b-k)} \left( \frac{1-y^k}{1-y}\right)^{n-1}. \end{align*} \end{lemma} \begin{proof} We start with the case when $\gcd(a,b)=1$. Note that, for any $i\in \{1,\ldots, n\}$, the number of reverse arborescences of $G(n;a,b)$ rooted at $v_i$ is equal to $b^{n-i}a^{i-1}$. Hence the period constant $\alpha$ of $G(n;a,b)$ is equal to \[ \alpha= \gcd_{1 \leq i \leq n} b^{n-i}a^{i-1} =\gcd(a,b)^{n-1}=1. \] By Proposition~\ref{p. size Rec(G,es)}, this implies that $\textnormal{Rec}(G(n;a,b),\stackrel{v_1}{\sim})$ contains only one element. Let $\widehat{\mathbf{c}}$ be the stable sink configuration of $G(n;a,b)$ with maximum level, i.e. \begin{align*} \widehat{\mathbf{c}}(v):=\begin{cases} \outdeg(v)-1 & \text{if } v \neq v_1;\\ 0 & \text{if } v = v_1. \end{cases} \end{align*} Since $\textnormal{Rec}(G(n;a,b),\stackrel{v_1}{\sim})$ contains only one element, we conclude that $[\widehat{\mathbf{c}}]_{v_1}$ is the unique element in $\textnormal{Rec}(G(n;a,b),\stackrel{v_1}{\sim})$. Hence we have: \begin{align} \label{equation: Gnab 1} \begin{split} \mathcal B(G(n;a,b), v_1;y)=& y^{\textnormal{lvl}([\widehat{\mathbf{c}}]_{v_1}) +\outdeg(v_1)}\\ =&y^{{\textnormal{lvl}(\widehat{\mathbf{c}})}+\outdeg(v_1)} \quad \text{(by the maximality of }\widehat{\mathbf{c}})\\ =&y^{(n-1)(a+b-1)}. 
\end{split} \end{align} We now proceed with the case when $k=\gcd(a,b)$ is arbitrary. Note that \begin{align*} \mathcal B(G(n;a,b),v_1;y)&=\mathcal B({G(n;a/k,b/k)},v_1;y^k) \left(\frac{1-y^k}{1-y} \right)^{n-1} \quad \text{(by Proposition~\ref{l. computation})}\\ &= y^{(n-1)(a+b-k)} \left( \frac{1-y^k}{1-y}\right)^{n-1} \quad \text{(by equation~\eqref{equation: Gnab 1})}. \qedhere \end{align*} \end{proof} By an argument similar to that in Lemma~\ref{lemma: Gnab}, for any $k \geq 1$ and any strongly connected digraph $G$ with the period constant equal to 1, \[ \mathcal B(G^k,s;y)=y^{k(|E(G)|-|V|+1)} \left( \frac{1-y^k}{1-y}\right)^{|V|-1}.\] \section{Connections to the greedoid polynomial}\label{s. greedoid polynomial} In this section we relate the Biggs-Merino polynomial to another invariant of digraphs called the greedoid polynomial. \subsection{Greedoid polynomial and reverse G-parking functions}\label{subsection: definition of greedoid polynomial and reverse G-parking function} Let $G$ be a directed graph. A \emph{directed path} $P$ of $G$ of length $k$ is a sequence $e_1\ldots e_{k}$ of edges such that for $i \in \{1,\ldots, k-1\}$ the target vertex of $e_{i}$ is the source vertex of $e_{i+1}$. \begin{definition}[Arborescences] Let $G$ be a strongly connected digraph. An \emph{arborescence} $T$ of $G$ {rooted} at $s \in V$ is a subgraph of $G$ that contains $|V|-1$ edges and such that for any $v \in V$ there exists a unique directed path from $s$ to $v$ in the subgraph. \end{definition} Fix a total order $<$ on the directed edges of $G$. For any two distinct edge-disjoint directed paths $P_1$ and $P_2$, we write $P_1 < P_2$ if the smallest edge in $E(P_1) \sqcup E(P_2)$ (with respect to $<$) is contained in $P_1$. \begin{definition}[External activity]\label{definition: external activity} Let $T$ be an arborescence of a strongly connected digraph $G$ rooted at $s \in V$. For any edge $e \in E(G) \setminus E(T)$, there are exactly two edge-disjoint directed paths $P_1$ and $P_2$ in $T \cup \{e\}$ that share the same starting vertex and ending vertex. Let $P_1$ be the path that contains $e$. We say that $e$ is \emph{externally active} with respect to $T$ if $P_1< P_2$. The \emph{external activity} $\textnormal{ext}(T)$ of $T$ is the number of edges in $G$ that are externally active with respect to $T$. \end{definition} See Figure~\ref{figure: external activity} for an illustration describing the process in Definition~\ref{definition: external activity}. \begin{figure} \caption{(a) An arborescence $T$. (b) An arborescence $T$ with an extra edge $e$. (c) The (undashed) path $P_1$ that contains $e$ and the (dashed) path $P_2$ that does not contain $e$. } \label{figure: external activity} \end{figure} \begin{definition}[Greedoid polynomial]\label{definition: greedoid polynomial} Let $G$ be a strongly connected digraph, and let $s \in V$. The \emph{(single variable) greedoid polynomial} is \begin{equation*} T(G,s;y):=\sum_{T} y^{\textnormal{ext}(T)}, \end{equation*} with the sum taken over all arborescences of $G$ rooted at $s$. \end{definition} This definition of the greedoid polynomial is due to Bj{\"o}rner, Korte, and Lov{\'a}sz~\cite{BKL85}. Their definition applies to a more general family of combinatorial objects called \emph{greedoids}, of which the polynomial in Definition~\ref{definition: greedoid polynomial} is a special case. The polynomial $T(G,s;y)$ does not depend on the choice of total order $<$ on the edges~\cite[Theorem~6.1]{BKL85}.
If $G$ is a loopless undirected graph, then the greedoid polynomial $T(G,s;y)$ of $G$ (considered as a bidirected digraph) is equal to $y^{|E(G)|}\mathcal T( G;1,y)$, where $\mathcal T( G;x,y)$ is the Tutte polynomial of $G$ (considered as an undirected graph)~\cite{BKL85}. We remark that the extra factor $y^{|E(G)|}$ is due to undirected edges of $G$ being considered as two separate directed edges. We refer the reader to \cite{BZ92} for an introduction to greedoids and related topics, and \cite{GM89,GT90,GM91} for a more detailed study of the greedoid polynomial. \begin{definition}[Reverse $G$-parking functions]\label{definition: reverse G-parking functions} Let $G$ be a strongly connected digraph, and let $s \in V$. A \emph{reverse $G$-parking function} with respect to $s$ is a function $f:V\setminus \{s\} \to \mathbb{N}_0$ such that, for any non-empty subset $A \subseteq V \setminus \{s\}$, there exists $v \in A$ for which $f(v)$ is strictly smaller than the number of edges from $V \setminus A$ to $v$. \end{definition} We use $\textnormal{Park}(G,s)$ to denote the set of reverse $G$-parking functions with respect to $s$. $G$-parking functions were originally defined by Konheim and Weiss \cite{KW66} for complete graphs, and were then extended to arbitrary digraphs by Postnikov and Shapiro \cite{PS04}. Reverse $G$-parking functions are known under several different names, including {reduced divisors}~\cite{BS13}, {superstable configurations}~\cite{HLM08, AB11}, and $\mathbf{\chi}${-superstable configurations}~\cite{GK15}. We remark that the choice of working with reverse $G$-parking functions (instead of $G$-parking functions) is not a mere matter of convention, but is motivated by the duality property in the next lemma. For any sink configuration $\widehat{\mathbf{c}}$ of $G$, its \emph{dual function} $f:V \setminus \{s\} \to \mathbb{N}_0$ is given by \[f(v):= \outdeg(v) -1 -\widehat{\mathbf{c}}(v) \quad (v \in V\setminus\{s\}). \] \begin{lemma}[{\cite[Theorem~4.4]{HLM08}}]\label{l. superstable} Let $G$ be a connected Eulerian digraph, and let $s \in V$. Then a sink configuration of $G$ is sink recurrent if and only if its dual function is a reverse $G$-parking function. \qed \end{lemma} \begin{remark} We would like to warn the reader that Lemma~\ref{l. superstable} is false if $G$ is not an Eulerian digraph. For an arbitrary (strongly connected) digraph, the functions dual to recurrent sink configurations are called {z-superstable configurations} \cite{AB11, GK15}. We refer the reader to \cite[Section~4]{GK15} (specifically, Example~4.17) for the subtle distinction between these two functions. \end{remark} \begin{definition}[Level of a function]\label{definition: level of a function} The \emph{level} of a function $f:V\setminus \{s\} \to \mathbb{N}_0$ is \[\textnormal{lvl}(f):= |E|-|V|+1 -\sum_{v \in V \setminus \{s\}} f(v). \qedhere\] \end{definition} Note that the level of a function is equal to the level of its dual sink configuration plus the outdegree of $s$. \subsection{Cori-Le Borgne bijection for directed graphs} In this subsection we give a bijection between reverse $G$-parking functions and arborescences of a directed graph $G$. This bijection is a directed graph version of the Cori-Le Borgne bijection~\cite{CL03,BS13} for undirected graphs. For the description of this bijection, see Algorithm~\ref{algorithm: parking functions to arborescences}.
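As an illustration of Definition~\ref{definition: reverse G-parking functions} and Definition~\ref{definition: level of a function}, the following minimal Python sketch checks whether a given function is a reverse $G$-parking function with respect to $s$ and computes its level; the encoding of the multidigraph as a plain list of directed edges, and the worked example, are our own illustrative choices rather than anything prescribed by the text.
\begin{verbatim}
from itertools import combinations

def is_reverse_parking(edges, vertices, s, f):
    """Definition (reverse G-parking functions): for every non-empty subset
    A of V minus {s} there is v in A with f(v) < #(edges from V - A to v)."""
    others = [v for v in vertices if v != s]
    for r in range(1, len(others) + 1):
        for A in map(set, combinations(others, r)):
            if not any(f[v] < sum(1 for (u, w) in edges
                                  if w == v and u not in A)
                       for v in A):
                return False
    return True

def level(edges, vertices, s, f):
    """Definition (level of a function): |E| - |V| + 1 - sum of f."""
    return len(edges) - len(vertices) + 1 \
        - sum(f[v] for v in vertices if v != s)

# Example: the bidirected triangle on {s, a, b}
# (each undirected edge contributes two directed edges).
V = ['s', 'a', 'b']
E = [('s', 'a'), ('a', 's'), ('a', 'b'),
     ('b', 'a'), ('b', 's'), ('s', 'b')]
print(is_reverse_parking(E, V, 's', {'a': 0, 'b': 1}))  # True
print(level(E, V, 's', {'a': 0, 'b': 1}))               # 3
print(is_reverse_parking(E, V, 's', {'a': 2, 'b': 2}))  # False
\end{verbatim}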
\begin{algorithm} \label{algorithm: parking functions to arborescences} \caption{Cori-Le Borgne bijection from $G$-parking functions to arborescences of $G$.} \KwIn{\\$G$-parking function $f$ with respect to $s$, \\Total order on the edges of $G$.} \KwOut{\\Arborescence $T_f$ of $G$ rooted at $s$.} \BlankLine {\bf Initialization:} \\$\textnormal{BV}:=\{s\}$ (burnt vertices), \\$\textnormal{BE}:=\emptyset$ (burnt edges), \\$T:=\emptyset$ (directed tree). \BlankLine \While{$\textnormal{BV} \neq V(G)$ \label{item: algorithm while start}}{ $e:=\max\{ (v,w) \in E(G) \, | \, (v,w) \not\in \textnormal{BE}, \, v \in \textnormal{BV}, w \not\in \textnormal{BV} \}$,\\ $w:=$ the target vertex of $e$, \\ \eIf{$f(w) ==$ {the number of edges in $\textnormal{BE}$ with} $w$ as the target vertex \label{item: algorithm parking function comparison} }{ $\textnormal{BV} \leftarrow \textnormal{BV} \cup \{w\}$, \\ $T \leftarrow T\cup \{e\}$ }{ $\textnormal{BE} \leftarrow \textnormal{BE} \cup \{e\}$ \label{item: algorithm if end} } } \label{item: algorithm while end} Output $T_f:=T$. \end{algorithm} \begin{reptheorem}{t. generalized Cori-Le Borgne} Let $G$ be a strongly connected digraph, and let $s \in V$. Then Algorithm~\ref{algorithm: parking functions to arborescences} is a bijection that sends reverse $G$-parking functions with respect to $s$ to arborescences of $G$ rooted at $s$. Furthermore, the external activity of the output arborescence is the level of the input reverse $G$-parking function. \end{reptheorem} \begin{remark} There are several bijections in the existing literature between $G$-parking functions and spanning trees of undirected graphs (for example \cite{Biggs99t, KY08, Back12, YBKS14,PYY17}). For directed graphs, there is a bijection between reverse $G$-parking functions and arborescences of $G$ by Chebikin and Pylyavskyy \cite{CP05}. Note that this bijection is different from the bijection in Algorithm~\ref{algorithm: parking functions to arborescences} as the former does not preserve the notion of activities (see \cite[Section~5]{CP05}). \end{remark} The following theorem is a direct consequence of Theorem~\ref{t. generalized Cori-Le Borgne} and Lemma~\ref{l. superstable}. \begin{reptheorem}{t. Merino's theorem}[Merino's Theorem for Eulerian digraphs] Let $G$ be a connected Eulerian digraph. Then for any $s \in V$, \[ \pushQED{\qed} T(G,s;y)=\sum_{\widehat{\mathbf{c}} \in \textnormal{Rec}(G,s)} y^{\textnormal{lvl}(\widehat{\mathbf{c}})+\outdeg(s)}. \qedhere \popQED \] \end{reptheorem} If the graph in Theorem~\ref{t. Merino's theorem} is bidirected, then we recover the original theorem of Merino L{\'o}pez~\cite{Mer97}. \begin{remark} We would like to warn the reader that there are non-Eulerian digraphs for which Theorem~\ref{t. Merino's theorem} is false. This is because $T(G,s;1)$ is the number of arborescences of $G$, while $|\textnormal{Rec}(G,s)|$ is the number of reverse arborescences of $G$ (by Lemma~\ref{lemma: sandpile group bijection}\eqref{item: sandpile group 2}). Those two numbers are in general not equal for non-Eulerian digraphs. \end{remark} The following corollary is a consequence of Theorem~\ref{t. Merino's theorem}. \begin{corollary}\label{corollary: greedoid polynomial and Biggs-Merino polynomial} Let $G$ be a connected Eulerian digraph. Then for any $s \in V$, \[ T(G,s;y)=\mathcal B(G,s;y). \] \end{corollary} \begin{proof} Since $G$ is a connected Eulerian digraph, the primitive period vector $\mathbf{r}$ of $G$ is equal to $(1,\ldots,1)$.
By the Markov chain tree theorem~\cite{AT89}, this implies that the period constant $\alpha$ is equal to the number of reverse arborescences rooted at $s$. By Proposition~\ref{p. size Rec(G,es)} and Lemma~\ref{lemma: sandpile group bijection}\eqref{item: sandpile group 2}, this implies that there are as many recurrent sink classes as recurrent sink configurations. Hence we conclude that each recurrent sink class of $G$ contains a unique recurrent sink configuration. Together with Theorem~\ref{t. Merino's theorem}, this implies that \begin{equation*} T(G,s;y)=\sum_{\widehat{\mathbf{c}} \in \textnormal{Rec}(G,s)} y^{\textnormal{lvl}(\widehat{\mathbf{c}})+\outdeg(s)}= \sum_{[\widehat{\mathbf{c}}] \in \textnormal{Rec}(G,\es)} y^{\textnormal{lvl}([\widehat{\mathbf{c}}])+\outdeg(s)}=\mathcal B(G,s;y). \qedhere \end{equation*} \end{proof} This relates the Biggs-Merino polynomial to the greedoid polynomial, as promised at the beginning of this section. Corollary~\ref{corollary: greedoid polynomial and Biggs-Merino polynomial} gives two interesting consequences for a connected Eulerian digraph $G$. The first consequence is that the greedoid polynomial $T(G,s;y)$ does not depend on the choice of $s$ (by Theorem~\ref{t. main theorem}). The second consequence is that $\mathcal B(G,s;2)$ counts the number of subgraphs of $G$ such that, for any $v \in V$, there exists a directed path from $s$ to $v$ in the subgraph (since $T(G,s;2)$ counts the same thing by \cite[Lemma~2.1]{GM89}). \begin{remark} We would like to warn the reader that Corollary~\ref{corollary: greedoid polynomial and Biggs-Merino polynomial} is false when $G$ is a non-Eulerian digraph. This is because the number of reverse arborescences of $G$ depends on the choice of $s$ if $G$ is non-Eulerian, while $\mathcal B(G,s;y)$ does not depend on the choice of $s$ (by Theorem~\ref{t. main theorem}). \end{remark} The rest of this section is focused on the proof of Theorem~\ref{t. generalized Cori-Le Borgne}. For any $G$-parking function $f$, denote by $T_f$ the output of Algorithm~\ref{algorithm: parking functions to arborescences}, denote by $\textnormal{BV}(f)$ the set of vertices that are burnt in Algorithm~\ref{algorithm: parking functions to arborescences}, and by $\textnormal{BE}(f)$ the set of edges that are burnt in Algorithm~\ref{algorithm: parking functions to arborescences}. \begin{lemma}\label{lemma: properties of algorithm 1} Let $G$ be a strongly connected digraph, let $s \in V$, and let $f$ be a $G$-parking function with respect to $s$. Then: \begin{enumerate} \item \label{item: all vertices are burnt} $\textnormal{BV}(f)=V(G)$; \item \label{item: burnt edges determine the parking function} $f(v) =|\{e \in \textnormal{BE}(f) \mid \textnormal{trgt}(e)=v \}|$ for all $v \in V \setminus \{s\}$; and \item \label{item: output of algorithm 1 is an arborescence} $T_f$ is an arborescence of $G$ rooted at $s$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Suppose to the contrary that Algorithm~\ref{algorithm: parking functions to arborescences} terminates when $\textnormal{BV}(f) \subsetneq V(G)$. Line \ref{item: algorithm while start}-\ref{item: algorithm while end} of the algorithm imply that all edges with source vertex in $\textnormal{BV}(f)$ and target vertex in $V(G) \setminus \textnormal{BV}(f)$ are burnt. Write $A:=V(G) \setminus \textnormal{BV}(f)$.
Line~\ref{item: algorithm parking function comparison} of the algorithm then implies that for all $v \in A$, the function $f(v)$ is greater than or equal to the number of edges from $V(G) \setminus A$ to $v$. This contradicts the assumption that $f$ is a $G$-parking function, as desired. \item Since $\textnormal{BV}(f)=V(G)$ by Lemma~\ref{lemma: properties of algorithm 1}\eqref{item: all vertices are burnt}, Line~\ref{item: algorithm parking function comparison} of Algorithm~\ref{algorithm: parking functions to arborescences} implies that $f(v)$ is equal to the number of burnt edges with target vertex $v$ for all $v \in V \setminus \{s\}$, as desired. \item It follows from Line \ref{item: algorithm while start}-\ref{item: algorithm while end} of Algorithm~\ref{algorithm: parking functions to arborescences} that $T_f$ is a directed tree with $|\textnormal{BV}(f)|-1$ edges and with $s$ as the unique source vertex. Since $\textnormal{BV}(f)=V(G)$ by Lemma~\ref{lemma: properties of algorithm 1}\eqref{item: all vertices are burnt}, it then follows that $T_f$ is an arborescence of $G$ rooted at $s$. \qedhere \end{enumerate} \end{proof} \begin{lemma}\label{lemma: external activity becomes level by Cori-Le Borgne bijection} Let $G$ be a strongly connected digraph, let $s \in V$, and let $f$ be a $G$-parking function with respect to $s$. Then an edge $e \in E(G) \setminus E(T_f)$ is externally active with respect to $T_f$ if and only if $e$ is not contained in $\textnormal{BE}(f)$. \end{lemma} \begin{proof} Let $P_1$ and $P_2$ be two edge-disjoint directed paths as in Definition~\ref{definition: external activity}. Note that $e$ is contained in $P_1$ by definition. Let $e'$ be the minimum edge in $E(P_1) \sqcup E(P_2)$. We need to show that $e'$ is contained in $P_2$ if and only if $e$ is contained in $\textnormal{BE}(f)$. Suppose that $e'$ is contained in $P_2$. By the minimality of $e'$, it then follows that the source vertex of $e$ is burnt before $e'$ in the while loop of Algorithm~\ref{algorithm: parking functions to arborescences}. Again by the minimality of $e'$, it then follows that $e$ is evaluated before $e'$ in the while loop of the algorithm. Since $e$ is not contained in $T_f$, it then follows that $e$ is burnt when it is evaluated. This proves one direction of the claim. Suppose that $e'$ is contained in $P_1$. By the minimality of $e'$, it then follows that all edges in $P_2$ are evaluated before $e'$ in the while loop of Algorithm~\ref{algorithm: parking functions to arborescences}. This implies that all vertices in $P_2$ are burnt before $e'$ is evaluated by the while loop. Since $P_1$ and $P_2$ share the same target vertex and $e$ is the last edge in $P_1$, it then follows that $e$ is either not evaluated or evaluated after its target vertex is burnt in the while loop. In either case $e$ is not burnt in the while loop. This proves the other direction of the claim. \end{proof} We now give an algorithm that will provide the inverse map to Algorithm~\ref{algorithm: parking functions to arborescences} (note that at this point we have not yet shown that Algorithm~\ref{algorithm: parking functions to arborescences} is a bijection). See Algorithm~\ref{algorithm: arborescences to parking functions} for the description of the algorithm.
\begin{algorithm} \label{algorithm: arborescences to parking functions} \caption{Cori-Le Borgne bijection from arborescences of $G$ to $G$-parking functions.} \KwIn{\\Arborescence $T$ of $G$ rooted at $s$, \\Total order on the edges of $G$.} \KwOut{\\$G$-parking function $f_T$ with respect to $s$.} \BlankLine {\bf Initialization:} \\$\textnormal{BV}:=\{s\}$ (burnt vertices), \\$\textnormal{BE}:=\emptyset$ (burnt edges). \BlankLine \While{$\textnormal{BV} \neq V(G)$ }{ $e:=\max\{ (v,w) \in E(G) \, | \, (v,w) \not\in \textnormal{BE}, \, v \in \textnormal{BV}, w \not\in \textnormal{BV} \}$,\\ $w:=$ the target vertex of $e$, \\ \eIf{$e\in E(T)$ }{ $\textnormal{BV} \leftarrow \textnormal{BV} \cup \{w\}$ }{ $\textnormal{BE} \leftarrow \textnormal{BE} \cup \{e\}$ } } Output $f_T$, with $f_T(v):=$ the number of edges in $\textnormal{BE}$ with $v$ as target vertex (for $v \in V \setminus \{s\}$). \end{algorithm} For any arborescence $T$ of $G$, denote by $f_T$ the output of Algorithm~\ref{algorithm: arborescences to parking functions}, denote by $\textnormal{BV}(T)$ the set of vertices that are burnt in Algorithm~\ref{algorithm: arborescences to parking functions}, and by $\textnormal{BE}(T)$ the set of edges that are burnt in Algorithm~\ref{algorithm: arborescences to parking functions}. \begin{lemma}\label{lemma: inverse function to Cori-Le Borgne map} Let $G$ be a strongly connected digraph, let $s \in V$, and let $T$ be an arborescence of $G$ rooted at $s$. Then: \begin{enumerate} \item \label{item: output of arborescences are G-parking functions} $f_T$ is a $G$-parking function with respect to $s$; and \item \label{item: inverse function Cori-Le Borgne map} For any $G$-parking function $f$ with respect to $s$, we have $f_{T_f}=f$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Let $A$ be an arbitrary non-empty subset of $V \setminus \{s\}$. Since $T$ is an arborescence of $G$, it follows that Algorithm~\ref{algorithm: arborescences to parking functions} terminates only when all vertices are burnt. Let $v$ be the first vertex in $A$ that is burnt by Algorithm~\ref{algorithm: arborescences to parking functions}. By the minimality assumption on $v$, it then follows that the source vertex of every edge in $\{e \in \textnormal{BE}(T) \mid \textnormal{trgt}(e)=v \}$ is contained in $V \setminus A$. Also note that no edge in $T$ is burnt in Algorithm~\ref{algorithm: arborescences to parking functions}. Hence: \begin{align*} f_T(v)=&| \{e \in \textnormal{BE}(T) \mid \textnormal{trgt}(e)=v \}| \\ \leq& | \{e \in E(G) \mid \textnormal{src}(e) \in V\setminus A, \textnormal{trgt}(e)=v, \text{ and } e \notin E(T)\} |\\ =&\, \text{the number of edges from $V \setminus A$ to $v$} \ -1, \end{align*} where the last equality holds because the unique edge of $T$ with target vertex $v$ also has its source vertex in $V \setminus A$, again by the minimality of $v$. Since the choice of $A$ is arbitrary, this shows that $f_T$ is a $G$-parking function. \item It follows from the description of Algorithm~\ref{algorithm: parking functions to arborescences} and Algorithm~\ref{algorithm: arborescences to parking functions} that $\textnormal{BE}(f)=\textnormal{BE}(T_f)$. It then follows from Lemma~\ref{lemma: properties of algorithm 1}\eqref{item: burnt edges determine the parking function} that $f=f_{T_f}$. \qedhere \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem~\ref{t. generalized Cori-Le Borgne}] Note that Algorithm~\ref{algorithm: parking functions to arborescences} maps $G$-parking functions to arborescences of $G$ (by Lemma~\ref{lemma: properties of algorithm 1}\eqref{item: output of algorithm 1 is an arborescence}).
Also note that Algorithm~\ref{algorithm: arborescences to parking functions} maps arborescences of $G$ to $G$-parking functions (by Lemma~\ref{lemma: inverse function to Cori-Le Borgne map}\eqref{item: output of arborescences are G-parking functions}). Finally, note that applying Algorithm~\ref{algorithm: arborescences to parking functions} after Algorithm~\ref{algorithm: parking functions to arborescences} sends a $G$-parking function back to itself. These three statements imply that Algorithm~\ref{algorithm: parking functions to arborescences} is a bijection from $G$-parking functions to arborescences of $G$, and Algorithm~\ref{algorithm: arborescences to parking functions} is its inverse. It follows from Lemma~\ref{lemma: properties of algorithm 1}\eqref{item: burnt edges determine the parking function} and Lemma~\ref{lemma: external activity becomes level by Cori-Le Borgne bijection} that $\textnormal{ext}(T_f)=\textnormal{lvl}(f)$ for any $G$-parking function $f$. The proof is complete. \end{proof} \section{Concluding remarks}\label{s. conjecture} In this section we present a few unanswered questions that might warrant further research. As remarked in Section \ref{s. greedoid polynomial}, there are non-Eulerian digraphs for which the conclusions of Theorem~\ref{t. Merino's theorem} and Corollary~\ref{corollary: greedoid polynomial and Biggs-Merino polynomial} are false. Further research can be done on extending these two results to general digraphs, and here we list two questions of that flavor. \begin{question}\label{question: greedoid for strongly connected digraphs} Let $G$ be a non-Eulerian strongly connected digraph. \begin{enumerate} \item \label{item: question greedoid} Does there exist a greedoid whose greedoid polynomial satisfies the conclusion of Theorem~\ref{t. Merino's theorem} or Corollary~\ref{corollary: greedoid polynomial and Biggs-Merino polynomial}? \item \label{item: question general} Does there exist an expression for the polynomial on the right-hand side of Theorem~\ref{t. Merino's theorem} or Corollary~\ref{corollary: greedoid polynomial and Biggs-Merino polynomial} that is not related to the sandpile model? \end{enumerate} \end{question} Note that Question~\ref{question: greedoid for strongly connected digraphs}\eqref{item: question greedoid} is a special case of Question~\ref{question: greedoid for strongly connected digraphs}\eqref{item: question general}. One consequence of Merino's Theorem for undirected graphs is that it implies Stanley's pure $O$-sequence conjecture~\cite{Stan96} for cographic matroids. It is therefore natural to ask for a relationship between our work and $O$-sequences. Let $X$ be a finite, nonempty set of (monic) monomials in the indeterminates $x_1,\ldots, x_k$. We call $X$ a \emph{(monomial) order ideal} if, for any monomial $m_1 \in X$ and any monomial $m_2$ dividing $m_1$, we have $m_2 \in X$. An order ideal $X$ is \emph{pure} if all the maximal monomials in $X$ (i.e., monomials in $X$ that do not divide any other element of $X$) have the same degree. Let $h_i$ $(i\geq 0)$ denote the number of monomials in $X$ with degree $i$. The \emph{$h$-vector} of $X$ is the vector $(h_0,\ldots, h_n)$, where $n$ is the maximum degree of monomials in $X$. An \emph{$O$-sequence} is the $h$-vector of an order ideal, and a \emph{pure $O$-sequence} is the $h$-vector of a pure order ideal. It follows from Theorem~\ref{t.
generalized Cori-Le Borgne} that, for any strongly connected digraph $G$, the nonzero coefficients of its greedoid polynomial, ordered from the highest degree to the lowest degree, form an $O$-sequence. Further research can be done on extending this observation to other classes of greedoids. \begin{question} Do the nonzero coefficients of the greedoid polynomial of a greedoid form an $O$-sequence? If not, what is the class of greedoids for which this property holds? \end{question} We remark that there are Eulerian digraphs for which the corresponding $O$-sequence is not pure, for example~\cite[Figure~10]{PP15}. Another possible research direction concerns methods for computing the Biggs-Merino polynomial efficiently. There is a variant of the deletion-contraction recursion~\cite{BKL85} for the greedoid polynomial and a M\"{o}bius inversion formula~\cite{Perrot-Pham} for the Biggs-Merino polynomial of an Eulerian digraph. However, we are not aware of any formulas of the same type for the Biggs-Merino polynomial of non-Eulerian digraphs. \begin{question} Does there exist any kind of deletion-contraction recurrence for the Biggs-Merino polynomial of non-Eulerian digraphs? \end{question} \end{document}
\begin{document} \title{Evolution of tripartite entangled states in a decohering environment and their experimental protection using dynamical decoupling} \author{Harpreet Singh} \email{[email protected]} \affiliation{Department of Physical Sciences, Indian Institute of Science Education \& Research (IISER) Mohali, Sector 81 SAS Nagar, Punjab 140306 India.} \author{Arvind} \email{[email protected]} \affiliation{Department of Physical Sciences, Indian Institute of Science Education \& Research (IISER) Mohali, Sector 81 SAS Nagar, Punjab 140306 India.} \author{Kavita Dorai} \email{[email protected]} \affiliation{Department of Physical Sciences, Indian Institute of Science Education \& Research (IISER) Mohali, Sector 81 SAS Nagar, Punjab 140306 India.} \begin{abstract} We embarked upon the task of experimental protection of different classes of tripartite entangled states, namely the maximally entangled GHZ and W states and the ${\rm W \bar{W}}$ state, using dynamical decoupling. The states were created on a three-qubit NMR quantum information processor and allowed to evolve in the naturally noisy NMR environment. Tripartite entanglement was monitored at each time instant during state evolution, using negativity as an entanglement measure. It was found that the W state is the most robust and the GHZ-type states are the most fragile against the natural decoherence present in the NMR system. The ${\rm W \bar{W}}$ state, which is in the GHZ-class yet stores entanglement in a manner akin to the W state, surprisingly turned out to be more robust than the GHZ state. The experimental data were best modeled by considering the main noise channels to be an uncorrelated phase damping channel acting independently on each qubit, along with a generalized amplitude damping channel. Using dynamical decoupling, we were able to achieve a significant protection of entanglement for GHZ states. There was a marginal improvement in the state fidelity for the W state (which is already robust against natural system decoherence), while the ${\rm W \bar{W}}$ state showed a significant improvement in fidelity and protection against decoherence. \end{abstract} \pacs{03.67.Lx, 03.67.Bg} \maketitle \section{Introduction} \label{intro} Quantum entanglement is considered to lie at the crux of quantum information processing (QIP)~\cite{nielsen-book}, and while two-qubit entanglement can be completely characterized, multipartite entanglement is more difficult to quantify and is the subject of much recent research~\cite{horodecki-rmp-09}. Entanglement can be rather fragile under decoherence, and various multiparty entangled states behave very differently under the same decohering channel~\cite{dur-prl-04}. It is hence of paramount importance to understand and control the dynamics of multipartite entangled states in diverse noisy environments~\cite{mintert-pr-05,aolita-prl-08,aolita-rpp-15}. A three-qubit system is a good model system to study the diverse response of multipartite entangled states to decoherence, and the entanglement dynamics of three-qubit GHZ and W states have been studied theoretically~\cite{borras-pra-09,weinstein-pra-10}. Under an arbitrary (Markovian) decohering environment, it was shown that W states are more robust than GHZ states for certain kinds of channels while the reverse is true for other kinds of channels~\cite{carvalho-prl-04,siomau-eurphysd-10,siomau-pra-10,ali-jpb-14}.
On the experimental front, tripartite entanglement was generated using photonic qubits and the robustness of W state entanglement was studied in optical systems~\cite{lanyon-njp-09,zang-scirep-15,he-qip-15,zang-optics-16}. The dynamics of multi-qubit entanglement under the influence of decoherence was experimentally characterized using a string of trapped ions~\cite{barreiro-nature} and in superconducting qubits~\cite{wu-qip-16}. In the context of NMR quantum information processing, three-qubit entangled states were experimentally prepared~\cite{suter-3qubit,shruti-generic,manu-pra-14}, and their decay rates compared with those of bipartite entangled states~\cite{kawamura-ijqc-06}. With a view to protecting entanglement, dynamical decoupling (DD) schemes have been successfully applied to decouple a multiqubit system from both transverse dephasing and longitudinal relaxation baths~\cite{viola-review,uhrig-njp-08,kuo-jmp-12,zhen-pra-16}. Uhrig DD (UDD) schemes have been used in the context of entanglement preservation~\cite{song-ijqi-13,franco-prb-14}, and it was shown theoretically that UDD schemes are able to preserve the entanglement of two-qubit Bell states and three-qubit GHZ states for quite long times~\cite{agarwal-scripta}. In this work, we experimentally explored the robustness against decoherence of three different tripartite entangled states, namely the GHZ, W and $\rm W{\bar W}$ states. The ${\rm W{\bar W}}$ state is a novel tripartite entangled state which belongs to the GHZ entanglement class in the sense that it is SLOCC equivalent to the GHZ state, but stores its entanglement in ways very similar to those of the W state~\cite{Devi2012,shruti-wwbar}. We created these states with a very high fidelity, via GRAPE-optimized rf pulses~\cite{tosner-jmr-09} on a system of three NMR qubits, using three fluorine spins individually addressable in frequency space. We allowed these entangled states to decohere and measured their entanglement content at different instances in time. To estimate the fidelity of state preparation and entanglement content, we performed complete state tomography~\cite{leskowitz-pra-04} using maximum likelihood estimation~\cite{singh-pla-16}. As a measure for tripartite entanglement, we used a well-known extension of the bipartite Peres-Horodecki separability criterion~\cite{peres-prl-96} called negativity~\cite{vidal-pra-02}. Our results showed that the W state was most robust against the environmental noise, followed by the ${\rm W {\bar W}}$ state, while the GHZ state was rather fragile. We analytically solved the Lindblad master equation for decohering open quantum systems and showed that the best-fit to our experimental data was provided by a model which considered two predominant noise channels acting on the three qubits: a homogeneous phase damping channel acting independently on each qubit, and a generalized amplitude damping channel. Next, we protected the entanglement of these states using two different DD sequences: the symmetrized XY-16(s) and the Knill dynamical decoupling (KDD) sequences, and evaluated their efficacy of protection. Both DD schemes were able to achieve a good degree of entanglement protection. The GHZ state was dramatically protected, with its entanglement persisting for nearly double the time. The W state showed a marginal improvement, which was to be expected since these DD schemes are designed to protect mainly against dephasing noise, and our results indicated that the W state is already robust against this type of decohering channel.
Interestingly, although the $\rm W {\bar W}$ state belongs to the GHZ entanglement-class, our experiments revealed that its entanglement persists for a longer time than that of the GHZ state, while the DD schemes are able to preserve its entanglement to a reasonable extent. The decoherence characteristics of the ${\rm W \bar{W}}$ state hence suggest a way of protecting fragile GHZ-type states against noise by transforming the type of entanglement (since a GHZ-class state can be transformed via local operations to a ${\rm W\bar{W}}$ state). These aspects of the entanglement dynamics of the ${\rm W {\bar W}}$ state require more detailed studies for a better understanding. There has been a longstanding debate about the existence of entanglement in spin ensembles at high temperature as encountered in NMR experiments. There are two ways to look at the situation. Entangled states in such ensembles are obtained via unitary transformations on pseudopure states. If we consider the entire spin ensemble, given that the number of spins that are involved in the pseudopure state is very small compared to the total number of spins, it has been shown that the overall ensemble is not entangled~\cite{braunstein,chuang}. However, one can take a different point of view and only consider the sub-ensemble of spins that have been prepared in the pseudopure state, and as far as these spins are concerned, entanglement genuinely exists~\cite{brazil1,brazil2,long}. The states that we have created are entangled in this sense, but may not be considered entangled if one works with the entire ensemble. Therefore, one has to be aware and cautious about this aspect while dealing with these states. These states are sometimes referred to as being pseudo-entangled. Moreover, these states have interesting properties in terms of the presence of multiple-quantum coherences and their evolution and dynamics under decoherence. This paper is organized as follows:~In Section~\ref{entang-decoh} we describe the experimental decoherence behaviour of tripartite entangled states, with section~\ref{system} containing details of the NMR system and section~\ref{construct} delineating the experimental schemes to prepare tripartite-entangled GHZ, W and $\rm W{\bar W}$ states. The experimental entanglement dynamics of these states decohering in a noisy environment is contained in section~\ref{decay}. Section~\ref{ddprotect} describes the results of protecting these tripartite entangled states using robust dynamical decoupling sequences, while Section~\ref{concl} presents some conclusions. The theoretical model of noise damping used to fit the experimental data is described in the Appendix. \section{Dynamics of tripartite entangled states} \label{entang-decoh} \subsection{Three-qubit NMR system} \label{system} \begin{figure} \caption{(a) Molecular structure of the trifluoroiodoethylene molecule and tabulated system parameters with chemical shifts $\nu_i$ and scalar couplings J$_{ij}$ (in Hz), and spin-lattice relaxation times $T_{1}$ and spin-spin relaxation times T$_{2}$ (in seconds). (b) NMR spectrum obtained after a $\pi/2$ readout pulse on the thermal equilibrium state, and (c) NMR spectrum of the pseudopure $\vert 000 \rangle$ state. The resonance lines of each qubit are labeled by the corresponding logical states of the other qubits. } \label{molecule} \end{figure} We use the three ${}^{19}$F nuclear spins of the trifluoroiodoethylene (C$_2$F$_3$I) molecule to encode the three qubits.
On an NMR spectrometer operating at 600 MHz, the fluorine spins resonate at Larmor frequencies of $\approx 564$ MHz. The molecular structure of the three-qubit system with tabulated system parameters and the NMR spectra of the qubits at thermal equilibrium and prepared in the pseudopure state $\vert 000 \rangle$ are shown in Figs.~\ref{molecule}(a), (b), and (c), respectively. The Hamiltonian of a weakly-coupled three-spin system in a frame rotating at $\omega_{{\rm rf}}$ (the frequency of the electromagnetic field $B_1(t)$ applied to manipulate spins in a static magnetic field $B_0$) is given by~\cite{ernst-book-87}: \begin{equation} {\cal H} = -\sum_{i=1}^3 (\omega_i - \omega_{{\rm rf}}) I_{iz} + \sum_{i<j,j=1}^3 2 \pi J_{ij} I_{iz} I_{jz} \end{equation} where $I_{iz}$ is the spin angular momentum operator in the $z$ direction for the $i$th ${}^{19}$F spin; the first term in the Hamiltonian denotes the Zeeman interaction between the fluorine spins and the static magnetic field $B_0$ with $\omega_i = 2 \pi \nu_i$ being the Larmor frequencies; the second term represents the spin-spin interaction with $J_{ij}$ being the scalar coupling constants. The three-qubit equilibrium density matrix (in the high temperature and high field approximations) is a highly mixed state given by: \begin{eqnarray} \rho_{eq}&=&\tfrac{1} {8}(I+\epsilon \ \Delta \rho_{eq}) \nonumber \\ \Delta\rho_{{\rm eq}} &\propto& \sum_{i=1}^{3} I_{iz} \end{eqnarray} with a thermal polarization $\epsilon \sim 10^{-5}$, $I$ being the $8 \times 8$ identity operator and $\Delta \rho_{{\rm eq}}$ being the deviation part of the density matrix. The system was first initialized into the $\vert 000\rangle$ pseudopure state using the spatial averaging technique~\cite{cory-physicad}, with the density operator given by \begin{equation} \rho_{000}=\frac{1-\epsilon}{8}I + \epsilon \vert 000\rangle\langle000 \vert \end{equation} \begin{figure} \caption{NMR pulse sequence used to prepare pseudopure state $\rho_{000}$ starting from thermal equilibrium. The pulses represented by black filled rectangles are of angle $\pi$. The other rf flip angles are set to $\theta_1=\frac{5\pi}{12}$, $\theta_2=\frac{\pi}{6}$ and $\delta=\frac{\pi}{4}$. The phase of each rf pulse is written below each pulse bar. The evolution interval $\tau_{ij}$ is set to a multiple of the scalar coupling strength ($J_{ij}$). } \label{ppure-fig} \end{figure} The specific sequence of rf pulses, $z$ gradient pulses and time evolution periods we used to prepare the pseudopure state $\rho_{000}$ starting from thermal equilibrium is shown in Figure~\ref{ppure-fig}. All the rf pulses used in the pseudopure state preparation scheme were constructed using the Gradient Ascent Pulse Engineering (GRAPE) technique~\cite{tosner-jmr-09} and were designed to be robust against rf inhomogeneity, with an average fidelity of $\ge 0.99$. Wherever possible, two independent spin-selective rf pulses were combined using a specially crafted single GRAPE pulse; for instance, the first two rf pulses, to be applied before the first field gradient pulse, were combined into a single specially crafted pulse ($U_{p_{1}}$ in Figure~\ref{ppure-fig}) of duration $600 \mu$s. The combined pulses $U_{p_{2}}$, $U_{p_{3}}$ and $U_{p_{4}}$ applied later in the sequence were of a total duration $\approx 20$ ms.
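As a numerical illustration of the weak-coupling Hamiltonian given at the beginning of this subsection, the following Python sketch assembles the corresponding $8\times 8$ matrix in the rotating frame; the rotating-frame offsets and scalar couplings used here are round placeholder values chosen purely for illustration (the measured values for our molecule are tabulated in Fig.~\ref{molecule}(a)).
\begin{verbatim}
import numpy as np

Iz = np.diag([0.5, -0.5])      # single-spin I_z (hbar = 1)
E2 = np.eye(2)

def embed(op, site, n=3):
    """Place a single-spin operator at position `site` of an n-spin register."""
    mats = [op if k == site else E2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Placeholder rotating-frame offsets (nu_i - nu_rf) and couplings J_ij, in Hz
offsets = [2000.0, -1000.0, 5000.0]
J = {(0, 1): 70.0, (0, 2): -120.0, (1, 2): 50.0}

Izs = [embed(Iz, i) for i in range(3)]

# H = -sum_i 2*pi*(nu_i - nu_rf) I_iz + sum_{i<j} 2*pi*J_ij I_iz I_jz
H = sum(-2 * np.pi * offsets[i] * Izs[i] for i in range(3))
H = H + sum(2 * np.pi * Jij * Izs[i] @ Izs[j] for (i, j), Jij in J.items())

# H is diagonal in the computational basis; print its diagonal in Hz
print(np.round(np.diag(H) / (2 * np.pi), 1))
\end{verbatim}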
All experimental density matrices were reconstructed using a reduced tomographic protocol and maximum likelihood estimation~\cite{leskowitz-pra-04,singh-pla-16} with the set of operations $\{ III, IIY, IYY, YII, XYX, XXY, XXX\}$; $I$ is the identity (do-nothing operation) and $X (Y)$ denotes a single spin operator implemented by a spin-selective $\pi/2$ pulse. We constructed these spin-selective pulses for tomography using GRAPE, with the length of each pulse $\approx 600 \mu$s. The fidelity of an experimental density matrix was estimated by measuring the projection between the theoretically expected and experimentally measured states using the Uhlmann${\rm -}$Jozsa fidelity measure~\cite{uhlmann-fidelity,jozsa-fidelity}: \begin{equation} F = \left(Tr \left( \sqrt{ \sqrt{\rho_{\rm theory}} \rho_{\rm expt} \sqrt{\rho_{\rm theory}} } \right)\right)^2 \label{fidelity} \end{equation} where $\rho_{\rm theory}$ and $\rho_{\rm expt}$ denote the theoretical and experimental density matrices respectively. The experimental density matrices were reconstructed by repeating each experiment ten times (keeping the temperature fixed at 288 K). The mean of the ten experimentally reconstructed density matrices was used to compute the statistical error in the state fidelity. The experimentally created pseudopure state $\vert 000\rangle$ was tomographed with a fidelity of $0.985 \pm 0.015$ and the total time taken to prepare the state was $\approx 60$ ms. \begin{figure}\label{ckt} \end{figure} \subsection{NMR implementation of tripartite entangled states} \label{construct} Tripartite entanglement has been well characterized and it is known that the two different classes of tripartite entanglement, namely GHZ-class and W-class, are inequivalent. While both classes are maximally entangled, there are differences in their type of entanglement: the W-class entanglement is more robust against particle loss than the GHZ-class (which becomes separable if one particle is lost) and it is also known that the W state has the maximum possible bipartite entanglement in its reduced two-qubit states~\cite{guhne-review}. The entanglement in the ${\rm W \bar{W}}$ state (which belongs to the GHZ-class of entanglement) shows a surprising feature: it is reconstructible from its reduced two-qubit states (similar to the W-class of states). We now turn to the construction of tripartite entangled states on the three-qubit NMR system. The quantum circuits to prepare the three qubits in a GHZ-type state, a W state and a ${\rm W \bar{W}}$ state are shown in Figs.~\ref{ckt} (a), (b) and (c), respectively. Several of the quantum gates in these circuits were optimized using the GRAPE algorithm and we were able to achieve a high gate fidelity and shorter pulse lengths. The GHZ-type $\frac{1}{\sqrt{2}}(\vert000\rangle-\vert111\rangle)$ state was prepared from the $\vert000\rangle$ pseudopure state by a sequence of three quantum gates (labeled as $U_{G_1}, U_{G_2}, U_{G_3}$ in Fig.~\ref{ckt}(a)): first a selective rotation of $\left [\frac{\pi}{2}\right]_{-y}$ on the first qubit, followed by a CNOT$_{12}$ gate, and finally a CNOT$_{13}$ gate.
The step-by-step sequential gate operation leads to: \begin{eqnarray} \vert 0 0 0 \rangle &\stackrel{{R^1{\left (\frac{\pi}{2}\right)_{-y}}}}{\longrightarrow}& \frac{1}{\sqrt{2}}\left(\vert 0 0 0 \rangle - \vert 1 0 0 \rangle \right) \nonumber \\ &\stackrel{\rm CNOT_{12}}{\longrightarrow}& \frac{1}{\sqrt{2}}\left(\vert 0 0 0 \rangle - \vert 1 1 0 \rangle\right) \nonumber \\ &\stackrel{\rm CNOT_{13}}{\longrightarrow}& \frac{1}{\sqrt{2}}\left(\vert 0 0 0 \rangle - \vert 1 1 1 \rangle \right) \end{eqnarray} All the pulses for the three gates used for GHZ state construction were designed using the GRAPE algorithm and had a fidelity $\ge$ 0.995. The GRAPE pulse duration corresponding to the gate $U_{G_1}$ is $600 \mu $s, while the $U_{G_2}$ and $U_{G_3}$ gates had pulse durations of $24$ms. The GHZ-type state was prepared with a fidelity of $0.969 \pm 0.013$. \begin{figure} \caption{The real (left) and imaginary (right) parts of the experimentally tomographed (a) GHZ-type state, with a fidelity of $0.969 \pm 0.013$. (b) W state, with a fidelity of $0.964 \pm 0.012$ and (c) $W\bar{W}$ state with a fidelity of $0.937 \pm 0.005$. The rows and columns encode the computational basis in binary order from $\vert 000 \rangle$ to $\vert 111 \rangle$.} \label{nodd} \end{figure} The W state was prepared from the initial $\vert 000 \rangle$ by a sequence of four unitary operations (labeled as $U_{W_1}, U_{W_2}, U_{W_3}, U_{W_4}$ in Fig.~\ref{ckt}(b)) and the sequential gate operation leads to: \begin{eqnarray} \vert 000 \rangle & \stackrel{R^{1}\left({\pi}\right)_y}{\longrightarrow} & \vert 100 \rangle \nonumber \\ & \stackrel{\rm R^{2}\left({0.39\pi}\right)_y} {\longrightarrow} & \sqrt{\frac{2}{3}} \vert 100 \rangle + \frac{1}{\sqrt{3}} \vert 110 \rangle \nonumber \\ & \stackrel{\rm CNOT_{21}} {\longrightarrow} & \sqrt{\frac{2}{3}} \vert 100 \rangle + \frac{1}{\sqrt{3}} \vert 010 \rangle \nonumber \\ & \stackrel{\rm CR_{13}{\left(\frac{\pi}{2}\right)_y}}{\longrightarrow} & \frac{1}{\sqrt{3}} [\vert 100 \rangle + \vert 101 \rangle + \vert 010 \rangle] \nonumber \\ & \stackrel{\rm CNOT_{31}} {\longrightarrow} & \frac{1}{\sqrt{3}}[ \vert 100 \rangle + \vert 001 \rangle + \vert 010 \rangle] \label{weqn} \end{eqnarray} The different unitaries were individually optimized using GRAPE and the pulse duration for $U_{W_1}$, $U_{W_2}$, $U_{W_3}$, and $U_{W_4}$ turned out to be $600 \mu $s, $24 $ms, $16 $ms, and $20 $ms, respectively and the fidelity of the final state was estimated to be $0.937 \pm 0.012$. 
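The action of the ideal gate sequences above can be verified numerically. The short Python sketch below applies a $\left[\frac{\pi}{2}\right]_{-y}$ rotation on the first qubit followed by CNOT$_{12}$ and CNOT$_{13}$ to $\vert 000\rangle$ and recovers the GHZ-type amplitudes; it is an idealized, noise-free check of the circuit algebra (with our own choice of qubit-ordering and rotation conventions), not a description of the GRAPE pulses used in the experiment.
\begin{verbatim}
import numpy as np

def ry(theta):
    """Single-qubit rotation exp(-i*theta*sigma_y/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron(*ops):
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def cnot(control, target, n=3):
    """CNOT on an n-qubit register; qubits numbered 1..n, qubit 1 leftmost."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control - 1] == 1:
            bits[target - 1] ^= 1
        U[int("".join(map(str, bits)), 2), b] = 1.0
    return U

I2 = np.eye(2)
psi0 = np.zeros(8)
psi0[0] = 1.0                                     # |000>
U = cnot(1, 3) @ cnot(1, 2) @ kron(ry(-np.pi / 2), I2, I2)
psi = U @ psi0
print(np.round(psi, 3))   # 0.707 on |000>, -0.707 on |111>
\end{verbatim}
The W- and ${\rm W\bar{W}}$-state sequences can be checked in the same way by adding the corresponding controlled-rotation unitaries.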
The ${\rm W\bar{W}}$ state was constructed by applying the following sequence of gate operations on the $\vert000\rangle$ state: \begin{eqnarray} \vert 000 \rangle & \stackrel{R^{1}\left(\frac{\pi}{3}\right)_{-y}}{\longrightarrow} & \frac{\sqrt{3}}{2}\vert 000 \rangle-\frac{1}{2}\vert 100 \rangle \nonumber \\ & \stackrel{\scriptstyle {\rm CR}_{12}\left(0.61\pi\right)_y} {\longrightarrow} & \frac{\sqrt{3}}{2}\vert 0 0 0 \rangle - \frac{1}{2\sqrt{3}} \vert 1 0 0 \rangle - \sqrt{\frac{1}{6}} \vert 1 1 0 \rangle \nonumber \\ & \stackrel{\scriptstyle {\rm CR}_{21}\left(\frac{\pi}{2}\right)_{-y}} {\longrightarrow} & \frac{1}{2}(\sqrt{3}\vert 0 0 0 \rangle - \frac{1}{\sqrt{3}} (\vert 1 0 0 \rangle + \vert 1 1 0 \rangle + \nonumber \\ & & \vert 0 1 0 \rangle)) \nonumber \\ & \stackrel{\rm CNOT_{13}}{\longrightarrow} & \frac{1}{2}( \sqrt{3}\vert 0 0 0 \rangle - \frac{1}{\sqrt{3}} (\vert 1 0 1 \rangle + \vert 1 1 1 \rangle + \nonumber \\ & &\vert 0 1 0 \rangle)) \nonumber \\ & \stackrel{\rm CNOT_{23}}{\longrightarrow} & \frac{1}{2}( \sqrt{3}\vert 0 0 0 \rangle - \frac{1}{\sqrt{3}} (\vert 1 0 1 \rangle + \vert 1 1 0 \rangle + \nonumber \\ & & \vert 0 1 1 \rangle))\nonumber \\ & \stackrel{\rm R^{123}\left(\frac{\pi}{2}\right)_y}{\longrightarrow} & \frac{1}{\sqrt{6}}( \vert 0 0 1 \rangle + \vert 0 1 0 \rangle + \vert 0 1 1 \rangle + \nonumber \\ & & \vert 1 0 0 \rangle + \vert 1 0 1 \rangle + \vert 1 1 0 \rangle) \label{wwbareqn} \end{eqnarray} The unitary operator for the entire preparation sequence (labeled $U_{W\bar{W}}$ in Fig.~\ref{ckt}(c)), comprising a spin-selective rotation operator, two controlled-rotation gates, two controlled-NOT gates and one non-selective rotation by $\frac{\pi}{2}$ on all the three qubits, was created by a specially crafted single GRAPE pulse (of pulse length $48$ ms) and applied to the initial state $\vert000\rangle$. The final state had a computed fidelity of $0.937 \pm 0.005$. \subsection{Decay of tripartite entanglement} \label{decay} We next turn to the dynamics of tripartite entanglement under decoherence channels acting on the system. For two qubits, all entangled states are negative under partial transpose (NPT) and for such NPT states, the minimum eigenvalue of the partially transposed density operator is a measure of entanglement~\cite{peres-prl-96}. This idea has been extended to three qubits, and entanglement can be quantified for our three-qubit system using the well-known tripartite negativity ${\cal N}^{(3)}_{123}$ measure~\cite{vidal-pra-02,weinstein-pra-10}: \begin{equation} {\cal{N}}^{(3)}_{123}= [{\cal N}_{1}{\cal N}_{2}{\cal N}_{3}]^{1/3} \end{equation} where the negativity ${\cal N}_i$ refers to the absolute value of the most negative eigenvalue of the partial transpose of the density matrix with respect to qubit $i$. We studied the time evolution of the tripartite negativity ${\cal N}^{(3)}_{123}$ for the tripartite entangled states, as computed from the experimentally reconstructed density matrices at each time instant. The experimental results are depicted in Figs.~\ref{3qdecay}(a), (b) and (c) for the GHZ state, the ${\rm W \bar{W}}$ state, and the W state, respectively. Of the three entangled states considered in this study, the GHZ and W states are maximally entangled and hence contain the largest amount of tripartite negativity, while the ${\rm W \bar{W}}$ state is not maximally entangled and hence has a lower tripartite negativity value.
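The tripartite negativity defined above can be computed directly from a density matrix; a minimal Python sketch is given below, in which each single-qubit negativity is taken as the magnitude of the most negative eigenvalue of the corresponding partial transpose. Since normalization conventions for the negativity differ in the literature, the absolute scale of this quantity is convention-dependent and only its relative decay should be compared between states.
\begin{verbatim}
import numpy as np

def partial_transpose(rho, qubit, n=3):
    """Partial transpose of an n-qubit density matrix over one qubit (0-indexed)."""
    t = rho.reshape([2] * (2 * n))         # indices (i1,...,in, j1,...,jn)
    t = np.swapaxes(t, qubit, n + qubit)   # transpose the chosen subsystem
    return t.reshape(2 ** n, 2 ** n)

def tripartite_negativity(rho):
    """Geometric mean of the three single-qubit negativities, as in the text."""
    negs = []
    for q in range(3):
        evals = np.linalg.eigvalsh(partial_transpose(rho, q))
        negs.append(abs(min(evals.min(), 0.0)))
    return float(np.prod(negs)) ** (1.0 / 3.0)

# Example: GHZ-type state (|000> - |111>)/sqrt(2)
psi = np.zeros(8)
psi[0], psi[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(psi, psi)
print(tripartite_negativity(rho))   # 0.5 with this (unnormalized) convention
\end{verbatim}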
The experimentally prepared GHZ state initially has a ${\cal N}^{(3)}_{123}$ of 0.96 (quite close to its theoretically expected value of 1.0). The GHZ state decays rapidly, with its negativity approaching zero in 0.55 s. The experimentally prepared ${\rm W \bar{W}}$ state initially has a ${\cal N}^{(3)}_{123}$ of 0.68 (close to its theoretically expected value of 0.74), with its negativity approaching zero at 0.67 s. The experimentally prepared W state initially has a ${\cal N}^{(3)}_{123}$ of 0.90 (quite close to its theoretically expected value of 0.94). The W state is quite long-lived, with its entanglement persisting up to 0.9 s. The tomographs of the experimentally reconstructed density matrices of the GHZ, W and ${\rm W \bar{W}}$ states at the time instances when the tripartite negativity parameter ${\cal N}^{(3)}_{123}$ approaches zero for each state are displayed in Fig.~\ref{tomodecay}. We explored the noise channels acting on our three-qubit NMR entangled states which best fit our experimental data, by analytically solving a master equation in the Lindblad form, along the lines suggested in Reference~\cite{jung-pra-08}. The master equation is given by~\cite{lindblad}: \begin{equation} \frac{\partial \rho}{\partial t} = -i[H_s,\rho] + \sum_{i,\alpha} \left[ L_{i,\alpha} \rho L_{i,\alpha}^{\dagger} - \frac{1}{2} \{ L^{\dagger}_{i,\alpha} L_{i,\alpha},\rho \} \right] \label{mastereqn} \end{equation} where $H_s$ is the system Hamiltonian, $L_{i,\alpha} \equiv \sqrt{\kappa_{i,\alpha}} \sigma^{(i)}_{\alpha}$ is the Lindblad operator acting on the $i$th qubit and $\sigma^{(i)}_{\alpha}$ is the Pauli operator on the $i$th qubit, $\alpha=x,y,z$; the constant $\kappa_{i,\alpha}$ turns out to be the inverse of the decoherence time. \begin{figure}\label{3qdecay} \end{figure} \begin{figure}\label{tomodecay} \end{figure} We consider a decoherence model wherein a nuclear spin is acted on by two noise channels, namely a phase damping channel (described by the T$_2$ relaxation in NMR) and a generalized amplitude damping channel (described by the T$_1$ relaxation in NMR)~\cite{childs-pra-03}. As the fluorine spins in our three-qubit system have widely differing chemical shifts, we assume that each qubit interacts independently with its own environment. The experimentally determined T$_1$ NMR relaxation times are T$_1^{1F}=5.42\pm 0.07$ s, T$_1^{2F}=5.65\pm 0.05$ s and T$_1^{3F}=4.36\pm 0.05$ s, respectively. The T$_2$ relaxation times were experimentally measured by first rotating the spin magnetization into the transverse plane by a $90^{\circ}$ rf pulse followed by a delay and fitting the resulting magnetization decay. The experimentally determined T$_2$ NMR relaxation times are T$_2^{1F}=0.53\pm 0.02$ s, T$_2^{2F}=0.55\pm 0.02$ s, and T$_2^{3F}=0.52\pm 0.02$ s, respectively. We solved the master equation (Eqn.~(\ref{mastereqn})) for the GHZ, W and ${\rm W\bar{W}}$ states with the Lindblad operators $L_{i,x} \equiv \sqrt{\frac{\kappa_{i,x}}{2}}\sigma^{(i)}_{x}$ and $L_{i,z} \equiv \sqrt{\frac{\kappa_{i,z}}{2}} \sigma^{(i)}_{z}$, where $\kappa_{i,x}=\frac{1}{T_1^i}$ and $\kappa_{i,z}=\frac{1}{T_2^i}$. With this model, the GHZ state decays at the rate $\gamma^{al}_{GHZ}=6.33\pm 0.06~{\rm s}^{-1}$, and its entanglement approaches zero in 0.53 s. The $W\bar{W}$ state decays at the rate $\gamma^{al}_{W\bar{W}}=5.90 \pm 0.10~{\rm s}^{-1}$, and its entanglement approaches zero in 0.50 s. The $W$ state decays at the rate $\gamma^{al}_{W}=4.84\pm 0.07~{\rm s}^{-1}$, and its entanglement approaches zero in 0.62 s.
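The decoherence model described above can also be checked numerically; the following Python sketch integrates Eqn.~(\ref{mastereqn}) with the dephasing and amplitude-type generators $L_{i,z}$ and $L_{i,x}$ and the measured relaxation times, using a simple Euler step and setting $H_s=0$ for simplicity. It is a schematic illustration of the model rather than the analytical solution used for the fits.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, i, n=3):
    mats = [op if k == i else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

T1 = [5.42, 5.65, 4.36]   # measured spin-lattice relaxation times (s)
T2 = [0.53, 0.55, 0.52]   # measured spin-spin relaxation times (s)

# L_{i,x} = sqrt(kappa_x/2) sigma_x^(i),  L_{i,z} = sqrt(kappa_z/2) sigma_z^(i)
Ls = [np.sqrt(1 / (2 * T1[i])) * embed(sx, i) for i in range(3)]
Ls += [np.sqrt(1 / (2 * T2[i])) * embed(sz, i) for i in range(3)]

def dissipator(rho):
    """Right-hand side of the master equation with H_s = 0."""
    out = np.zeros_like(rho)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

# Evolve the GHZ-type state and monitor its characteristic coherence element
psi = np.zeros(8, dtype=complex)
psi[0], psi[7] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())
dt = 1e-4
for _ in range(int(0.5 / dt)):        # evolve for 0.5 s
    rho = rho + dt * dissipator(rho)  # simple Euler step
print(abs(rho[0, 7]))                 # decayed |000><111| coherence at t = 0.5 s
\end{verbatim}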
We used the high-temperature approximation (T $\approx \infty$) to model the noise (the experiments were performed at 288 K), and the results of the analytical calculation and the experimental data match well, as shown in Figure~\ref{3qdecay}. \section{Protecting three-qubit entanglement via dynamical decoupling} \label{ddprotect} As the tripartite entangled states under investigation are robust against noise to varying extents, we wanted to discover if either the amount of entanglement in these states could be protected or their entanglement could be preserved for longer times, using dynamical decoupling (DD) protection schemes. While DD sequences are effective in decoupling system-environment interactions, often errors in their implementation arise either due to errors in the pulses or errors due to off-resonant driving~\cite{suter-review}. Two approaches have been used to design robust DD sequences which are impervious to pulse imperfections: the first approach replaces the $\pi$ rotation pulses with composite pulses inside the DD sequence, while the second approach focuses on optimizing phases of the pulses in the DD sequence. In this work, we use DD sequences that use pulses with phases applied along different rotation axes: the XY-16(s) and the Knill Dynamical Decoupling (KDD) schemes~\cite{souza-pra-12}. \begin{figure} \caption{NMR pulse sequence corresponding to (a) XY-16(s) and (b) KDD$_{xy}$ DD schemes (the superscript 2 implies that the set of pulses inside the bracket is applied twice, to form one cycle of the DD scheme). The pulses represented by black filled rectangles (in both schemes) are of angle $\pi$, and are applied simultaneously on all three qubits (denoted by $F^{i}, i=1,2,3$). The angle below each pulse denotes the phase with which it is applied. Each DD cycle is repeated $N$ times, with $N$ large to achieve good system-bath decoupling.} \label{dd-fig} \end{figure} In conventional DD schemes the $\pi$ pulses are applied along one axis (typically $x$) and as a consequence, only the coherence along that axis is well protected. The XY family of DD schemes applies pulses along two perpendicular ($x,y$) axes, which protects coherence equally along both these axes~\cite{souza-phil}. The XY-16(s) sequence is constructed by combining an XY-8(s) cycle with its phase-shifted copy, where the (s) denotes the ``symmetric'' version i.e. the cycle is time-symmetric with respect to its center. The XY-8 cycle is itself created by combining a basic XY-4 cycle with its time-reversed copy. One full unit cycle of the XY-16(s) sequence comprises sixteen $\pi$ pulses interspersed with free evolution time periods, and each cycle is repeated $N$ times for better decoupling. The KDD sequence has additional phases which further symmetrize pulses in the $x-y$ plane and compensate for pulse errors; each $\pi$ pulse in a basic XY DD sequence is replaced by five $\pi$ pulses, each of a different phase~\cite{ryan-prl-10,souza-prl-11}: \begin{equation} {\rm KDD}_{\phi} \equiv (\pi)_{\frac{\pi}{6}+\phi}- (\pi)_{\phi} -(\pi)_{\frac{\pi}{2}+\phi}-(\pi)_{\phi} -(\pi)_{\frac{\pi}{6}+\phi} \label{basickdd} \end{equation} where $\phi$ denotes the phase of the pulse; we set $\phi=0$ in our experiments. The KDD$_{\phi}$ sequence of five pulses given in Eqn.~\ref{basickdd} protects coherence along only one axis. 
To protect coherences along both the $(x,y)$ axes, we use the KDD$_{xy}$ sequence, which combines two basic five-pulse blocks shifted in phase by $\pi/2$ i.e $[{\rm KDD}_{\phi} - {\rm KDD}_{\phi+\pi/2}]$. One unit cycle of the KDD$_{xy}$ sequence contains two of these pulse-blocks shifted in phase, for a total of twenty $\pi$ pulses. The XY-16(s) and KDD$_{xy}$ DD sequences are given in Figs.~\ref{dd-fig}(a) and (b) respectively, where the black filled rectangles represent $\pi$ pulses on all three qubits and $\tau$ ($\tau_k$) indicates a free evolution time period. We note here that the chemical shifts of the three fluorine qubits in our particular molecule cover a very large frequency bandwidth, making it difficult to implement an accurate non-selective pulse simultaneously on all the qubits. To circumvent this problem, we crafted a special excitation pulse of duration $\approx 400 \mu$s consisting of a set of three Gaussian shaped pulses that are applied at different spin frequency offsets and are frequency modulated to achieve simultaneous excitation~\cite{shruti-wwbar}. \begin{figure} \caption{Plot of the tripartite negativity (${\cal N}^{(3)}_{123}$) with time, computed for the (a) GHZ-type state, (b) W state and (c) $W\bar{W}$ state. The negativity was computed for each state without applying any protection and after applying the XY-16(s) and KDD$_{xy}$ dynamical decoupling sequences.Note that the time scale for part (a) is different from (b) and (c)} \label{entang-dd} \end{figure} Figs.\ref{entang-dd}(a),(b) and (c) show the results of protecting the GHZ, W and ${\rm W\bar{W}}$ states respectively, using the XY-16(s) and the KDD$_{xy}$ DD sequences. \noindent{\bf GHZ state protection:} The XY-16(s) protection scheme was implemented on the GHZ state with an inter-pulse delay of $\tau = 0.25$ ms and one run of the sequence took $10.40$ ms (including the length of the sixteen $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.80 and 0.52 for up to $80$ms and $240$ ms respectively when XY-16 protection was applied, while for the unprotected state the state fidelity is quite low and ${\cal N}^3_{123}$ decayed to a low value of 0.58 and 0.09 at $80$ms and $240$ ms, respectively (Fig.~\ref{entang-dd}(a)). The KDD$_{xy}$ protection scheme on this state was implemented with an inter-pulse delay $\tau_k =0.20$ ms and one run of the sequence took $12$ ms (including the length of the twenty $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.80 and 0.72 for up to $140$ms and $240$ ms when KDD$_{xy}$ protection was applied (Fig.~\ref{entang-dd}(a)). \noindent{\bf W state protection:} The XY-16(s) protection scheme was implemented on the W state with an inter-pulse delay $\tau = 3.12$ ms and one run of the sequence took $56.40$ ms (including the length of the sixteen $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.30 for up to $0.68$ s when XY-16 protection was applied, whereas ${\cal N}^3_{123}$ reduced to 0.1 at $0.68$ s when no state protection is applied (Fig.~\ref{entang-dd}(b)). The KDD$_{xy}$ protection scheme was implemented on the W state with an inter-pulse delay $\tau_k = 2.5$ ms and one run of the sequence took $58$ ms (including the length of the twenty $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.21 for upto $0.70$ s when KDD$_{xy}$ protection was applied (Fig.~\ref{entang-dd}(b)). 
\noindent{\bf ${\rm W \bar{W}}$ state protection:} The XY-16(s) protection sequence was implemented on the ${\rm W \bar{W}}$ state with an inter-pulse delay of $\tau = 3.12$ ms and one run of the sequence took $56.40$ ms (including the length of the sixteen $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.5 for up to $0.45$ s when XY-16(s) protection was applied, whereas ${\cal N}^3_{123}$ reduced almost to zero ($\approx 0.02$) at $0.45$ s when no protection was applied (Fig.~\ref{entang-dd}(c)). The KDD$_{xy}$ protection sequence was applied with an inter-pulse delay of $\tau_k = 2.5$ ms and one run of the sequence took $58$ ms (including the length of the twenty $\pi$ pulses). The value of the negativity ${\cal N}^3_{123}$ remained close to 0.52 for up to $0.46$ s when KDD$_{xy}$ protection was applied (Fig.~\ref{entang-dd}(c)).
The results of the DD protection schemes summarized above demonstrate that state protection worked to varying degrees, preserving the entanglement of the tripartite entangled states to different extents depending on the type of state being protected. The GHZ state showed maximum protection and the ${\rm W\bar{W}}$ state also showed a significant amount of protection, while the W state showed only a marginal improvement under protection. We note here that the lifetime of the GHZ state was not significantly enhanced by DD state protection; what is noteworthy is that the state fidelity remained high (close to 0.8) under DD protection, whereas the state quickly became disentangled (the fidelity dropped to 0.4) when no protection was applied. This implies that under DD protection there is no leakage from the state to other states in the Hilbert space of the three qubits.
\section{Conclusions} \label{concl}
We undertook an experimental study of the dynamics of tripartite entangled states in a three-qubit NMR system. Our results are relevant in the context of other studies, which showed that different entangled states exhibit varying degrees of robustness against diverse noise channels. We found that the W state was the most robust against the decoherence channel acting on the three NMR qubits, the GHZ state was the most fragile and decayed very quickly, while the ${\rm W \bar{W}}$ state was more robust than the GHZ state but less robust than the W state. We also implemented entanglement protection on these states using dynamical decoupling sequences. The protection worked remarkably well in preserving entanglement in the GHZ and ${\rm W \bar{W}}$ states, while the W state showed a better fidelity under protection but no appreciable increase in the lifetime of its entanglement. The entangled states that we deal with in this study are obtained by unitary transformations on pseudopure states, in which only a small subset of spins participates, and are thus pseudo-entangled. Our results have important implications for entanglement storage and preservation in realistic quantum information processing protocols.
\begin{acknowledgments} All experiments were performed on a Bruker Avance-III 600 MHz FT-NMR spectrometer at the NMR Research Facility at IISER Mohali. KD acknowledges funding from DST India under Grant No. EMR/2015/000556. Arvind acknowledges funding from DST India under Grant No. EMR/2014/000297. HS acknowledges CSIR India for financial support.
\end{acknowledgments} \raggedbottom \appendix* \section{Analytical solution of the Lindblad master equation} \label{anal}
We analytically solved the master equation of the Lindblad form given in Eqn.~\ref{mastereqn}~\cite{lindblad,jung-pra-08}, by putting in explicit values for the Lindblad operators according to the two main NMR noise channels (generalized amplitude damping and phase damping), and computed the decay behavior of the GHZ, W and ${\rm W \bar{W}}$ states. Under the simultaneous action of all the NMR noise channels, the GHZ state decoheres as:
\begin{equation} \rho_{GHZ}=\left( \begin{array}{cccccccc} \alpha_1 & 0 & 0 & 0 & 0 & 0 & 0 & \beta_1 \\ 0 & \alpha_2 & 0 & 0 & 0 & 0 & \beta_2 & 0 \\ 0 & 0 & \alpha_3 & 0 & 0 & \beta_3 & 0 & 0 \\ 0 & 0 & 0 & \alpha_4 & \beta_4 & 0 & 0 & 0 \\ 0 & 0 & 0 & \beta_4 & \alpha_4 & 0 & 0 & 0 \\ 0 & 0 & \beta_3 & 0 & 0 & \alpha_3 & 0 & 0 \\ 0 & \beta_2 & 0 & 0 & 0 & 0 & \alpha_2 & 0 \\ \beta_1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_1 \\ \end{array} \right) \end{equation}
where
\begin{eqnarray} \alpha_1 &=& \frac{1}{8}(1+e^{-(\kappa_{x,1}+\kappa_{x,2})t}+e^{-(\kappa_{x,1}+\kappa_{x,3})t} +e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_2 &=& \frac{1}{8}(1+e^{-(\kappa_{x,1}+\kappa_{x,2})t}-e^{-(\kappa_{x,1}+\kappa_{x,3})t} -e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_3 &=& \frac{1}{8}(1-e^{-(\kappa_{x,1}+\kappa_{x,2})t}+e^{-(\kappa_{x,1}+\kappa_{x,3})t} -e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_4 &=& \frac{1}{8}(1-e^{-(\kappa_{x,1}+\kappa_{x,2})t}-e^{-(\kappa_{x,1}+\kappa_{x,3})t} +e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_1 &=& \frac{1}{8}e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && \quad (e^{\kappa_{x,1}t}+ e^{\kappa_{x,2}t}+e^{\kappa_{x,3}t} +e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_2 &=& \frac{1}{8}e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && \quad (-e^{\kappa_{x,1}t}-e^{\kappa_{x,2}t}+e^{\kappa_{x,3}t} +e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_3 &=& \frac{1}{8}e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && \quad (-e^{\kappa_{x,1}t}+ e^{\kappa_{x,2}t}-e^{\kappa_{x,3}t} +e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_4 &=& \frac{1}{8}e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && \quad (e^{\kappa_{x,1}t}- e^{\kappa_{x,2}t}-e^{\kappa_{x,3}t} +e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \end{eqnarray}
Under the simultaneous action of all the NMR noise channels, the W state decoheres as:
\begin{equation} \rho_{W} = \left( \begin{array}{cccccccc} \alpha_{1} & 0 & 0 & \beta_{1} & 0 & \beta_{5} & \beta_{1} & 0 \\ 0 & \alpha_{2} & \beta_{2} & 0 & \beta_{6} & 0 & 0 & \beta_{10} \\ 0 & \beta_{2} & \alpha_{3} & 0 & \beta_{11} & 0 & 0 & \beta_{7} \\ \beta_{1} & 0 & 0 & \alpha_{4} & 0 & \beta_{12} & \beta_{8} & 0 \\ 0 & \beta_{6} & \beta_{11} & 0 & \alpha_{5} & 0 & 0 & \beta_{3} \\ \beta_{5} & 0 & 0 & \beta_{12} & 0 & \alpha_{6} & \beta_{4} & 0 \\ \beta_{1} & 0 & 0 & \beta_{8} & 0 & \beta_{4} & \alpha_{7} & 0 \\ 0 & \beta_{10} & \beta_{7} & 0 & \beta_{3} & 0 & 0 & \alpha_{8} \\ \end{array} \right) \end{equation}
where
\begin{eqnarray} \alpha_{1}&=&\frac{1}{8}- \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (3+e^{\kappa_{x,1}t} +e^{\kappa_{x,2}t}- \nonumber \\ &&
e^{(\kappa_{x,1}+\kappa_{x,2})t}+e^{\kappa_{x,3}t} - e^{(\kappa_{x,1}+\kappa_{x,3})t} - e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{2}&=&\frac{1}{8}+ \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (3+e^{\kappa_{x,1}t} +e^{\kappa_{x,2}t}- \nonumber \\ && e^{(\kappa_{x,1}+\kappa_{x,2})t}-e^{\kappa_{x,3}t} + e^{(\kappa_{x,1}+\kappa_{x,3})t} + e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{3}&=&\frac{1}{8}+ \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (3+e^{\kappa_{x,1}t} -e^{\kappa_{x,2}t} \nonumber \\ && + e^{(\kappa_{x,1}+\kappa_{x,2})t}+e^{\kappa_{x,3}t} - e^{(\kappa_{x,1}+\kappa_{x,3})t} + e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{4}&=&\frac{1}{8}- \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (3+e^{\kappa_{x,1}t} -e^{\kappa_{x,2}t} \nonumber \\ && + e^{(\kappa_{x,1}+\kappa_{x,2})t}-e^{\kappa_{x,3}t} + e^{(\kappa_{x,1}+\kappa_{x,3})t} - e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{5}&=&\frac{1}{8}+ \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (3-e^{\kappa_{x,1}t} +e^{\kappa_{x,2}t} \nonumber \\ && + e^{(\kappa_{x,1}+\kappa_{x,2})t}+e^{\kappa_{x,3}t} + e^{(\kappa_{x,1}+\kappa_{x,3})t} - e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{6}&=&\frac{1}{8}+ \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (-3+e^{\kappa_{x,1}t} -e^{\kappa_{x,2}t} \nonumber \\ && - e^{(\kappa_{x,1}+\kappa_{x,2})t}+e^{\kappa_{x,3}t} + e^{(\kappa_{x,1}+\kappa_{x,3})t} - e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{7}&=&\frac{1}{8}+ \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (-3+e^{\kappa_{x,1}t}+ e^{\kappa_{x,2}t} \nonumber \\ && + e^{(\kappa_{x,1}+\kappa_{x,2})t}-e^{\kappa_{x,3}t} - e^{(\kappa_{x,1}+\kappa_{x,3})t} - e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{8}&=&\frac{1}{8}- \frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t} (-3+e^{\kappa_{x,1}t}+ e^{\kappa_{x,2}t} \nonumber \\ && + e^{(\kappa_{x,1}+\kappa_{x,2})t}+e^{\kappa_{x,3}t} + e^{(\kappa_{x,1}+\kappa_{x,3})t} + e^{(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \end{eqnarray} \begin{eqnarray} \beta_{1}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}\nonumber \\ &&(1+e^{(\kappa_{x,1})t})(-1+e^{(\kappa_{x,2}+\kappa_{x,3})t}))\nonumber \\ \beta_{2}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}\nonumber \\ &&(1+e^{(\kappa_{x,1})t})(1+e^{(\kappa_{x,2}+\kappa_{x,3})t}))\nonumber \\ \beta_{3}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}\nonumber \\ &&(-1+e^{(\kappa_{x,1})t})(-1+e^{(\kappa_{x,2}+\kappa_{x,3})t}))\nonumber \\ \beta_{4}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}\nonumber \\ &&(-1+e^{(\kappa_{x,1})t})(1+e^{(\kappa_{x,2}+\kappa_{x,3})t}))\nonumber \end{eqnarray} \begin{eqnarray} \beta_{5}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}\nonumber \\ &&(1+e^{(\kappa_{x,2})t})(-1+e^{(\kappa_{x,1}+\kappa_{x,3})t}))\nonumber \\ \beta_{6}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}\nonumber \\ &&(1+e^{(\kappa_{x,2})t})(1+e^{(\kappa_{x,1}+\kappa_{x,3})t}))\nonumber \\ \beta_{7}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}\nonumber \\ &&(-1+e^{(\kappa_{x,2})t})(-1+e^{(\kappa_{x,1}+\kappa_{x,3})t}))\nonumber \\ \beta_{8}&=&\frac{1}{12} 
(e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}\nonumber \\ &&(-1+e^{(\kappa_{x,2})t})(1+e^{(\kappa_{x,1}+\kappa_{x,3})t}))\nonumber \\ \beta_{9}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2})t}\nonumber \\ &&(-1+e^{(\kappa_{x,1}+\kappa_{x,2})t})(1+e^{(\kappa_{x,3})t}))\nonumber \\ \beta_{10}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2})t}\nonumber \\ &&(-1+e^{(\kappa_{x,1}+\kappa_{x,2})t})(-1+e^{(\kappa_{x,3})t}))\nonumber \\ \beta_{11}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2})t}\nonumber \\ &&(1+e^{(\kappa_{x,1}+\kappa_{x,2})t})(1+e^{(\kappa_{x,3})t}))\nonumber \\ \beta_{12}&=&\frac{1}{12} (e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2})t}\nonumber \\ &&(1+e^{(\kappa_{x,1}+\kappa_{x,2})t})(-1+e^{(\kappa_{x,3})t})) \end{eqnarray} Under the simultaneous action of all the NMR noise channels, the ${\rm W \bar{W}}$ state decoheres as: \begin{equation} \rho_{{\rm W \bar{W}}} =\left( \begin{array}{cccccccc} \alpha_{1} & \beta_{1} & \beta_{2} & \beta_{3} &\beta_{4} & \beta_{5} & \beta_{6} &\beta_{7} \\ \beta_{1} & \alpha_{2} & \beta_{8} &\beta_{9} & \beta_{10} & \beta_{11} & \beta_{12} & \beta_{13} \\ \beta_{2} & \beta_{8} & \alpha_{3} & \beta_{14} & \beta_{15}& \beta_{16} & \beta_{11} & \beta_{5} \\ \beta_{3} & \beta_{9} & \beta_{14} & \alpha_{4} & \beta_{17} & \beta_{15} & \beta_{10} & \beta_{4} \\ \beta_{4} & \beta_{10} & \beta_{15} & \beta_{17} & \alpha_{4} & \beta_{15} & \beta_{9} & \beta_{18} \\ \beta_{5} & \beta_{11} & \beta_{16} & \beta_{15} & \beta_{15} & \alpha_{3} & \beta_{8} & \beta_{2} \\ \beta_{6} & \beta_{12} & \beta_{11} &\beta_{10} & \beta_{9} & \beta_{8} & \alpha_{2} & \beta_{1} \\ \beta_{7} & \beta_{13}&\beta_{5} & \beta_{4} & \beta_{18} & \beta_{2} & \beta_{1} & \alpha_{1} \\ \end{array} \right) \end{equation} where \begin{eqnarray} \alpha_{1}&=&\frac{1}{24}(3-e^{-(\kappa_{x,1}+\kappa_{x,2})t}-e^{-(\kappa_{x,1}+\kappa_{x,3})t} \nonumber \\ &&- e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{2}&=&\frac{1}{24}(3-e^{-(\kappa_{x,1}+\kappa_{x,2})t}+e^{-(\kappa_{x,1}+\kappa_{x,3})t} \nonumber \\ && + e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{3}&=&\frac{1}{24}(3+e^{-(\kappa_{x,1}+\kappa_{x,2})t}-e^{-(\kappa_{x,1}+\kappa_{x,3})t} \nonumber \\ && + e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \alpha_{4}&=&\frac{1}{24}(3+e^{-(\kappa_{x,1}+\kappa_{x,2})t}+e^{-(\kappa_{x,1}+\kappa_{x,3})t} \nonumber \\ && - e^{-(\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_{1}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,2}+2\kappa_{z,3})t} \nonumber \\ &&(e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{z,3})t}-e^{ \kappa_{z,3} t}) \nonumber \\ \beta_{2}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,3}+2\kappa_{z,2})t} \nonumber \\ &&(e^{(\kappa_{x,1}+\kappa_{x,3}+\kappa_{z,2})t}-e^{ \kappa_{z,2} t}) \nonumber \\ \beta_{3}&=&\frac{1}{12} e^{-(\kappa_{x,2}+\kappa_{x,3}+2(\kappa_{z,2}+\kappa_{z,3}))t} \nonumber \\ &&(e^{(\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}-e^{ (\kappa_{z,2}+\kappa_{z,3}) t}) \nonumber \\ \beta_{4}&=&\frac{1}{12} e^{-(\kappa_{x,2}+\kappa_{x,3}+2\kappa_{z,1})t} \nonumber \\ &&(-e^{ \kappa_{z,1} t}+e^{(\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1})t}) \nonumber \\ \beta_{5}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,3}+2(\kappa_{z,1}+\kappa_{z,3}))t} \nonumber \\ &&(-e^{ (\kappa_{z,1}+\kappa_{z,3}) t}+e^{(\kappa_{x,1}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}) \nonumber \\ 
\beta_{6}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,2}+2(\kappa_{z,1}+\kappa_{z,2}))t} \nonumber \\ &&(-e^{ (\kappa_{z,1}+\kappa_{z,2}) t}+e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{z,1}+\kappa_{z,2})t}) \nonumber \\ \beta_{7}&=&-\frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && (e^{\kappa_{x,1} t}+e^{\kappa_{x,2} t}+e^{\kappa_{x,3} t}-3e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_{8}&=&\frac{1}{12} e^{-(\kappa_{x,2}+\kappa_{x,3}+2(\kappa_{z,2}+\kappa_{z,3}))t} \nonumber \\ &&(e^{ (\kappa_{z,2}+\kappa_{z,3}) t}+e^{(\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}) \nonumber \\ \beta_{9}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,3}+2\kappa_{z,2})t} \nonumber \\ &&(e^{ \kappa_{z,2} t}+e^{(\kappa_{x,1}+\kappa_{x,3}+\kappa_{z,2})t}) \nonumber \\ \beta_{10}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,3}+2(\kappa_{z,1}+\kappa_{z,3}))t} \nonumber \\ &&(e^{ (\kappa_{z,1}+\kappa_{z,3}) t}+e^{(\kappa_{x,1}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,3})t}) \nonumber \end{eqnarray} \begin{eqnarray} \beta_{11}&=&\frac{1}{12} e^{-(\kappa_{x,2}+\kappa_{x,3}+2\kappa_{z,1})t}(e^{ \kappa_{z,1} t}+e^{(\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1})t}) \nonumber \\ \beta_{12}&=&\frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && (e^{\kappa_{x,1} t}+e^{\kappa_{x,2} t}-e^{\kappa_{x,3} t}+3e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_{13}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,2}+2(\kappa_{z,1}+\kappa_{z,2}))t} \nonumber \\ &&(-e^{ (\kappa_{z,1}+\kappa_{z,2}) t}+e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{z,1}+\kappa_{z,2})t}) \nonumber \\ \beta_{14}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,2}+2\kappa_{z,3})t} \nonumber \\ &&(e^{ \kappa_{z,3} t}+e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{z,3})t}) \nonumber \\ \beta_{15}&=&\frac{1}{12} e^{-(\kappa_{x,1}+\kappa_{x,2}+2(\kappa_{z,1}+\kappa_{z,2}))t} \nonumber \\ &&(e^{ (\kappa_{z,1}+\kappa_{z,2}) t}+e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{z,1}+\kappa_{z,2})t}) \nonumber \\ \beta_{16}&=&\frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && (e^{\kappa_{x,1} t}-e^{\kappa_{x,2} t}+e^{\kappa_{x,3} t}+3e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_{17}&=&\frac{1}{24} e^{-(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,1}+\kappa_{z,2}+\kappa_{z,3})t} \nonumber \\ && (-e^{\kappa_{x,1} t}+e^{\kappa_{x,2} t}+e^{\kappa_{x,3} t}+3e^{(\kappa_{x,1}+\kappa_{x,2}+\kappa_{x,3})t}) \nonumber \\ \beta_{18}&=&\frac{1}{12} e^{-(\kappa_{x,2}+\kappa_{x,3}+2(\kappa_{z,2}+\kappa_{z,3}))t} \nonumber \\ &&(e^{ (\kappa_{z,2}+\kappa_{z,3}) t}+e^{(\kappa_{x,2}+\kappa_{x,3}+\kappa_{z,2}+\kappa_{z,3})t}) \end{eqnarray}
Solving the master equation (Eqn.~\ref{mastereqn}) yields a set of coupled equations for the elements of the corresponding $\rho$ matrices, from which the explicit values of the $\alpha$s and $\beta$s given above are computed. The equations are solved in the high-temperature limit: for an ensemble of NMR spins at room temperature the relevant energies satisfy $E \ll k_{B} T$, where $k_B$ is the Boltzmann constant and $T$ is the temperature, ensuring a Boltzmann distribution of spin populations at thermal equilibrium.
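As a numerical cross-check of these closed-form expressions, the Lindblad equation can also be integrated directly. The Python sketch below does this for the GHZ state, assuming (as one plausible reading of the rates appearing above, and not necessarily the exact operator normalization used in our calculation) that the high-temperature noise reduces to independent dissipators $\sqrt{\kappa_{x,i}/2}\,\sigma_x^{(i)}$ and $\sqrt{\kappa_{z,i}/2}\,\sigma_z^{(i)}$ on each qubit; the rate values are illustrative and are not the experimentally fitted parameters.
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
Z  = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, qubit, n=3):
    # Single-qubit operator acting on the given qubit of an n-qubit register.
    ops = [I2] * n
    ops[qubit] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def lindblad_rhs(rho, collapse_ops):
    # Dissipative part of the Lindblad equation (H = 0 in the rotating frame).
    drho = np.zeros_like(rho)
    for L in collapse_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def evolve(rho, collapse_ops, t, steps=2000):
    # Fixed-step fourth-order Runge-Kutta integration of drho/dt.
    dt = t / steps
    for _ in range(steps):
        k1 = lindblad_rhs(rho, collapse_ops)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, collapse_ops)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, collapse_ops)
        k4 = lindblad_rhs(rho + dt * k3, collapse_ops)
        rho = rho + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

kx = [2.0, 1.5, 1.0]   # illustrative rates (s^-1), not fitted values
kz = [4.0, 3.0, 2.0]
Ls = [np.sqrt(kx[q] / 2.0) * embed(X, q) for q in range(3)] + \
     [np.sqrt(kz[q] / 2.0) * embed(Z, q) for q in range(3)]

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)
t = 0.1
rho_t = evolve(np.outer(ghz, ghz.conj()), Ls, t)

pairs = [(0, 1), (0, 2), (1, 2)]
alpha1 = (1 + sum(np.exp(-(kx[i] + kx[j]) * t) for i, j in pairs)) / 8.0
beta1 = np.exp(-sum(kz) * t) * \
        (1 + sum(np.exp(-(kx[i] + kx[j]) * t) for i, j in pairs)) / 8.0
print(rho_t[0, 0].real, alpha1)   # numerical vs analytical population alpha_1
print(rho_t[0, 7].real, beta1)    # numerical vs analytical coherence  beta_1
\end{verbatim}
In this simplified model the numerically propagated elements $\langle 000|\rho(t)|000\rangle$ and $\langle 000|\rho(t)|111\rangle$ reproduce the functional forms of $\alpha_1$ and $\beta_1$ quoted above.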
\begin{thebibliography}{54} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont {Chuang}(2000)}]{nielsen-book} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum Computation and Quantum Information}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {address} {Cambridge UK},\ \bibinfo {year} {2000})\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2009)\citenamefont {Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\ \citenamefont {Horodecki}}]{horodecki-rmp-09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Horodecki}},\ }\href {\doibase 10.1103/RevModPhys.81.865} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {865} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dur}\ and\ \citenamefont {Briegel}(2004)}]{dur-prl-04} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Dur}}\ and\ \bibinfo {author} {\bibfnamefont {H.-J.}\ \bibnamefont {Briegel}},\ }\href {\doibase 10.1103/PhysRevLett.92.180403} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {180403} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mintert}\ \emph {et~al.}(2005)\citenamefont {Mintert}, \citenamefont {Carvalho}, \citenamefont {Kuś},\ and\ \citenamefont {Buchleitner}}]{mintert-pr-05} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Mintert}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Carvalho}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kuś}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Buchleitner}},\ }\href {\doibase http://dx.doi.org/10.1016/j.physrep.2005.04.006} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep.}\ }\textbf {\bibinfo {volume} {415}},\ \bibinfo {pages} {207 } (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aolita}\ \emph {et~al.}(2008)\citenamefont {Aolita}, \citenamefont {Chaves}, \citenamefont {Cavalcanti}, \citenamefont {Ac\'{\i}n},\ and\ \citenamefont {Davidovich}}]{aolita-prl-08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Aolita}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Chaves}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cavalcanti}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ac\'{\i}n}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Davidovich}},\ }\href {\doibase 10.1103/PhysRevLett.100.080501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {080501} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aolita}\ \emph {et~al.}(2015)\citenamefont {Aolita}, \citenamefont {d.~Melo},\ and\ \citenamefont {Davidovich}}]{aolita-rpp-15} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Aolita}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {d.~Melo}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Davidovich}},\ }\href {\doibase 10.1088/0034-4885/78/4/042001} {\bibfield {journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {042001} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Borras}\ \emph {et~al.}(2009)\citenamefont {Borras}, \citenamefont {Majtey}, \citenamefont {Plastino}, \citenamefont {Casas},\ and\ \citenamefont {Plastino}}]{borras-pra-09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Borras}}, \bibinfo {author} {\bibfnamefont {A.~P.}\ \bibnamefont {Majtey}}, \bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Plastino}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Casas}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Plastino}},\ }\href {\doibase 10.1103/PhysRevA.79.022108} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {022108} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Weinstein}(2010)}]{weinstein-pra-10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.~S.}\ \bibnamefont {Weinstein}},\ }\href {\doibase 10.1103/PhysRevA.82.032326} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {032326} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Carvalho}\ \emph {et~al.}(2004)\citenamefont {Carvalho}, \citenamefont {Mintert},\ and\ \citenamefont {Buchleitner}}]{carvalho-prl-04} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~R.~R.}\ \bibnamefont {Carvalho}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Mintert}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Buchleitner}},\ }\href {\doibase 10.1103/PhysRevLett.93.230501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {230501} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siomau}\ and\ \citenamefont {Fritzsche}(2010{\natexlab{a}})}]{siomau-eurphysd-10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Siomau}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Fritzsche}},\ }\href {\doibase 10.1140/epjd/e2010-00189-1} {\bibfield {journal} {\bibinfo {journal} {Eur. Phys. J. D.}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {397} (\bibinfo {year} {2010}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Siomau}\ and\ \citenamefont {Fritzsche}(2010{\natexlab{b}})}]{siomau-pra-10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Siomau}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Fritzsche}},\ }\href {\doibase 10.1103/PhysRevA.82.062327} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {062327} (\bibinfo {year} {2010}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ali}\ and\ \citenamefont {Guhne}(2014)}]{ali-jpb-14} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ali}}\ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Guhne}},\ }\href {\doibase 10.1088/0953-4075/47/5/055503} {\bibfield {journal} {\bibinfo {journal} {J. Phys. B}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {055503} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lanyon}\ and\ \citenamefont {Langford}(2009)}]{lanyon-njp-09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont {Lanyon}}\ and\ \bibinfo {author} {\bibfnamefont {N.~K.}\ \bibnamefont {Langford}},\ }\href {\doibase 10.1088/1367-2630/11/1/013008} {\bibfield {journal} {\bibinfo {journal} {New. J. 
Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {013008} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zang}\ \emph {et~al.}(2015)\citenamefont {Zang}, \citenamefont {Yang}, \citenamefont {Ozaydin}, \citenamefont {Song},\ and\ \citenamefont {Cao}}]{zang-scirep-15} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-P.}\ \bibnamefont {Zang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Ozaydin}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Song}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-L.}\ \bibnamefont {Cao}},\ }\href {\doibase 10.1038/srep16245} {\bibfield {journal} {\bibinfo {journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {16245} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {He}\ and\ \citenamefont {Yang}(2015)}]{he-qip-15} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {He}}\ and\ \bibinfo {author} {\bibfnamefont {C.-P.}\ \bibnamefont {Yang}},\ }\href {\doibase 10.1007/s11128-015-1131-9} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Process.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {4461} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zang}\ \emph {et~al.}(2016)\citenamefont {Zang}, \citenamefont {Yang}, \citenamefont {Ozaydin}, \citenamefont {Song},\ and\ \citenamefont {Cao}}]{zang-optics-16} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-P.}\ \bibnamefont {Zang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Ozaydin}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Song}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.-L.}\ \bibnamefont {Cao}},\ }\href {\doibase 10.1038/srep16245} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {12293} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barreiro}\ \emph {et~al.}(2010)\citenamefont {Barreiro}, \citenamefont {Schindler}, \citenamefont {Guhne}, \citenamefont {Monz}, \citenamefont {C.}, \citenamefont {Roos}, \citenamefont {Hennrich},\ and\ \citenamefont {Blatt}}]{barreiro-nature} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont {Barreiro}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Schindler}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Guhne}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Monz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {C.}}, \bibinfo {author} {\bibfnamefont {C.~F.}\ \bibnamefont {Roos}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Hennrich}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href {http://dx.doi.org/10.1038/nphys1781} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {943} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ \emph {et~al.}(2016)\citenamefont {Wu}, \citenamefont {Song}, \citenamefont {Xu}, \citenamefont {Yu}, \citenamefont {Ji},\ and\ \citenamefont {Zhang}}]{wu-qip-16} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Yu}}, \bibinfo {author} 
{\bibfnamefont {X.}~\bibnamefont {Ji}}, \ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}},\ }\href {\doibase 10.1007/s11128-016-1366-0} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Process.}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {3663} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peng}\ \emph {et~al.}(2010)\citenamefont {Peng}, \citenamefont {Zhang}, \citenamefont {Du},\ and\ \citenamefont {Suter}}]{suter-3qubit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Du}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}},\ }\href {\doibase 10.1103/PhysRevA.81.042327} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {042327} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dogra}\ \emph {et~al.}(2015)\citenamefont {Dogra}, \citenamefont {Dorai},\ and\ \citenamefont {Arvind}}]{shruti-generic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Dogra}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Dorai}}, \ and\ \bibinfo {author} {\bibnamefont {Arvind}},\ }\href {\doibase 10.1103/PhysRevA.91.022312} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {022312} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Manu}\ and\ \citenamefont {Kumar}(2014)}]{manu-pra-14} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~S.}\ \bibnamefont {Manu}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kumar}},\ }\href {\doibase 10.1103/PhysRevA.89.052331} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo {pages} {052331} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kawamura}\ \emph {et~al.}(2006)\citenamefont {Kawamura}, \citenamefont {Morimoto}, \citenamefont {Mori}, \citenamefont {Sawae}, \citenamefont {Takarabe},\ and\ \citenamefont {Manmoto}}]{kawamura-ijqc-06} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kawamura}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Morimoto}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Mori}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sawae}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Takarabe}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Manmoto}},\ }\href {\doibase 10.1002/qua.21172} {\bibfield {journal} {\bibinfo {journal} {Intl. J. Quant. Chem.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {3108} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viola}(2004)}]{viola-review} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Viola}},\ }\href {\doibase 10.1080/09500340408231795} {\bibfield {journal} {\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {2357} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Uhrig}(2008)}]{uhrig-njp-08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Uhrig}},\ }\href {\doibase 10.1088/1367-2630/13/5/059504} {\bibfield {journal} {\bibinfo {journal} {New. J. 
Phys.}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {083024} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kuo}\ \emph {et~al.}(2012)\citenamefont {Kuo}, \citenamefont {Quiroz}, \citenamefont {Paz-Silva},\ and\ \citenamefont {Lidar}}]{kuo-jmp-12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Kuo}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Quiroz}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {Paz-Silva}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}},\ }\href {\doibase 10.1063/1.4769382} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys.}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {122207} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhen}\ \emph {et~al.}(2016)\citenamefont {Zhen}, \citenamefont {Zhang}, \citenamefont {Feng}, \citenamefont {Li},\ and\ \citenamefont {Long}}]{zhen-pra-16} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-L.}\ \bibnamefont {Zhen}}, \bibinfo {author} {\bibfnamefont {F.-H.}\ \bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Feng}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Li}}, \ and\ \bibinfo {author} {\bibfnamefont {G.-L.}\ \bibnamefont {Long}},\ }\href {\doibase 10.1103/PhysRevA.93.022304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {022304} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Song}\ \emph {et~al.}(2013)\citenamefont {Song}, \citenamefont {Pan},\ and\ \citenamefont {Xi}}]{song-ijqi-13} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Song}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Pan}}, \ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Xi}},\ }\href {\doibase 10.1142/S0219749913500123} {\bibfield {journal} {\bibinfo {journal} {Intl. J. Quant. Infor.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {1350012} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Franco}\ \emph {et~al.}(2014)\citenamefont {Franco}, \citenamefont {D'Arrigo}, \citenamefont {Falci}, \citenamefont {Compagno},\ and\ \citenamefont {Paladino}}]{franco-prb-14} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {Franco}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {D'Arrigo}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Falci}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Compagno}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Paladino}},\ }\href {\doibase 10.1103/PhysRevB.90.054304} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {054304} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Agarwal}(2010)}]{agarwal-scripta} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Agarwal}},\ }\href {\doibase 10.1088/0031-8949/82/03/038103} {\bibfield {journal} {\bibinfo {journal} {Physica Scripta}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo {pages} {038103} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Devi}\ \emph {et~al.}(2012)\citenamefont {Devi}, \citenamefont {Sudha},\ and\ \citenamefont {Rajagopal}}]{Devi2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~R.~U.}\ \bibnamefont {Devi}}, \bibinfo {author} {\bibnamefont {Sudha}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont {Rajagopal}},\ }\href {\doibase 10.1007/s11128-011-0280-8} {\bibfield {journal} {\bibinfo {journal} {Quantum Inf. Process.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {685} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Das}\ \emph {et~al.}(2015)\citenamefont {Das}, \citenamefont {Dogra}, \citenamefont {Dorai},\ and\ \citenamefont {Arvind}}]{shruti-wwbar} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Das}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Dogra}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Dorai}}, \ and\ \bibinfo {author} {\bibnamefont {Arvind}},\ }\href {\doibase 10.1103/PhysRevA.92.022307} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {022307} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tosner}\ \emph {et~al.}(2009)\citenamefont {Tosner}, \citenamefont {Vosegaard}, \citenamefont {Kehlet}, \citenamefont {Khaneja}, \citenamefont {Glaser},\ and\ \citenamefont {Nielsen}}]{tosner-jmr-09} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Tosner}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Vosegaard}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kehlet}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Khaneja}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Glaser}}, \ and\ \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Nielsen}},\ }\href {\doibase 10.1016/j.jmr.2008.11.020} {\bibfield {journal} {\bibinfo {journal} {J. Magn. Reson.}\ }\textbf {\bibinfo {volume} {197}},\ \bibinfo {pages} {120} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Leskowitz}\ and\ \citenamefont {Mueller}(2004)}]{leskowitz-pra-04} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {Leskowitz}}\ and\ \bibinfo {author} {\bibfnamefont {L.~J.}\ \bibnamefont {Mueller}},\ }\href {\doibase 10.1103/PhysRevA.69.052302} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {052302} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Singh}\ \emph {et~al.}(2016)\citenamefont {Singh}, \citenamefont {Arvind},\ and\ \citenamefont {Dorai}}]{singh-pla-16} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Singh}}, \bibinfo {author} {\bibnamefont {Arvind}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Dorai}},\ }\href {\doibase 10.1016/j.physleta.2016.07.046} {\bibfield {journal} {\bibinfo {journal} {Phys. Lett. 
A}\ }\textbf {\bibinfo {volume} {380}},\ \bibinfo {pages} {3051 } (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peres}(1996)}]{peres-prl-96} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peres}},\ }\href {\doibase 10.1103/PhysRevLett.77.1413} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {1413} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vidal}\ and\ \citenamefont {Werner}(2002)}]{vidal-pra-02} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Vidal}}\ and\ \bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont {Werner}},\ }\href {\doibase 10.1103/PhysRevA.65.032314} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {032314} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Braunstein}\ \emph {et~al.}(1999)\citenamefont {Braunstein}, \citenamefont {Caves}, \citenamefont {Jozsa}, \citenamefont {Linden}, \citenamefont {Popescu},\ and\ \citenamefont {Schack}}]{braunstein} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Braunstein}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Caves}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jozsa}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Schack}},\ }\href {\doibase 10.1103/PhysRevLett.83.1054} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {83}},\ \bibinfo {pages} {1054} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2005)\citenamefont {Yu}, \citenamefont {Brown},\ and\ \citenamefont {Chuang}}]{chuang} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont {Brown}}, \ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href {\doibase 10.1103/PhysRevA.71.032341} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {032341} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Soares-Pinto}\ \emph {et~al.}(2012)\citenamefont {Soares-Pinto}, \citenamefont {Auccaise}, \citenamefont {Maziero}, \citenamefont {Gavini-Viana}, \citenamefont {Serra},\ and\ \citenamefont {C{\'e}leri}}]{brazil1} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~O.}\ \bibnamefont {Soares-Pinto}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Auccaise}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Maziero}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Gavini-Viana}}, \bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Serra}}, \ and\ \bibinfo {author} {\bibfnamefont {L.~C.}\ \bibnamefont {C{\'e}leri}},\ }\href {\doibase 10.1098/rsta.2011.0364} {\bibfield {journal} {\bibinfo {journal} {Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {4821} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Serra}\ and\ \citenamefont {Oliveira}(2012)}]{brazil2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Serra}}\ and\ \bibinfo {author} {\bibfnamefont {I.~S.}\ \bibnamefont {Oliveira}},\ }\href {\doibase 10.1098/rsta.2012.0332} {\bibfield {journal} {\bibinfo {journal} {Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {4615} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gui-Lu}\ \emph {et~al.}(2002)\citenamefont {Gui-Lu}, \citenamefont {Hai-Yang}, \citenamefont {Yan-Song}, \citenamefont {Chang-Cun}, \citenamefont {Sheng-Jiang}, \citenamefont {Dong}, \citenamefont {Yang}, \citenamefont {Jia-Xun},\ and\ \citenamefont {Hao-Ming}}]{long} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gui-Lu}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hai-Yang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Yan-Song}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Chang-Cun}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Sheng-Jiang}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Dong}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jia-Xun}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hao-Ming}},\ }\href {http://stacks.iop.org/0253-6102/38/i=3/a=305} {\bibfield {journal} {\bibinfo {journal} {Communications in Theoretical Physics}\ }\textbf {\bibinfo {volume} {38}},\ \bibinfo {pages} {305} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ernst}\ \emph {et~al.}(1987)\citenamefont {Ernst}, \citenamefont {Bodenhausen},\ and\ \citenamefont {Wokaun}}]{ernst-book-87} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~R.}\ \bibnamefont {Ernst}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Bodenhausen}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wokaun}},\ }\href@noop {} {\emph {\bibinfo {title} {Principles of nuclear magnetic resonance in one and two dimensions}}}\ (\bibinfo {publisher} {Oxford University Press},\ \bibinfo {address} {New York},\ \bibinfo {year} {1987})\BibitemShut {NoStop} \bibitem [{\citenamefont {Cory}\ \emph {et~al.}(1998)\citenamefont {Cory}, \citenamefont {Price},\ and\ \citenamefont 
{Havel}}]{cory-physicad} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cory}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Price}}, \ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Havel}},\ }\href {\doibase 10.1016/S0167-2789(98)00046-3} {\bibfield {journal} {\bibinfo {journal} {Physica D}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {82} (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Uhlmann}(1976)}]{uhlmann-fidelity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Uhlmann}},\ }\href {\doibase 10.1016/0034-4877(76)90060-4} {\bibfield {journal} {\bibinfo {journal} {Rep. Math. Phys.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {273} (\bibinfo {year} {1976})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jozsa}(1994)}]{jozsa-fidelity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jozsa}},\ }\href {\doibase 10.1080/09500349414552171} {\bibfield {journal} {\bibinfo {journal} {J. Mod. Opt.}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {2315} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Guhne}\ and\ \citenamefont {Toth}(2009)}]{guhne-review} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Guhne}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Toth}},\ }\href {\doibase http://dx.doi.org/10.1016/j.physrep.2009.02.004} {\bibfield {journal} {\bibinfo {journal} {Physics Reports}\ }\textbf {\bibinfo {volume} {474}},\ \bibinfo {pages} {1} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jung}\ \emph {et~al.}(2008)\citenamefont {Jung}, \citenamefont {Hwang}, \citenamefont {Ju}, \citenamefont {Kim}, \citenamefont {Yoo}, \citenamefont {Kim}, \citenamefont {Park}, \citenamefont {Son}, \citenamefont {Tamaryan},\ and\ \citenamefont {Cha}}]{jung-pra-08} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jung}}, \bibinfo {author} {\bibfnamefont {M.-R.}\ \bibnamefont {Hwang}}, \bibinfo {author} {\bibfnamefont {Y.~H.}\ \bibnamefont {Ju}}, \bibinfo {author} {\bibfnamefont {M.-S.}\ \bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Yoo}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Park}}, \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont {Son}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Tamaryan}}, \ and\ \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Cha}},\ }\href {\doibase 10.1103/PhysRevA.78.012312} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {012312} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lindblad}(1976)}]{lindblad} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Lindblad}},\ }\href {\doibase doi:10.1007/BF01608499} {\bibfield {journal} {\bibinfo {journal} {Commun. Math. 
Phys.}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {119} (\bibinfo {year} {1976})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Childs}\ \emph {et~al.}(2001)\citenamefont {Childs}, \citenamefont {L.Chuang},\ and\ \citenamefont {Leung}}]{childs-pra-03} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Childs}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {L.Chuang}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Leung}},\ }\href {\doibase 10.1103/PhysRevA.64.012314} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {012314} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Suter}\ and\ \citenamefont {\'Alvarez}(2016)}]{suter-review} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}}\ and\ \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {\'Alvarez}},\ }\href {\doibase 10.1103/RevModPhys.88.041001} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {88}},\ \bibinfo {pages} {041001} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2012{\natexlab{a}})\citenamefont {Souza}, \citenamefont {Alvarez},\ and\ \citenamefont {Suter}}]{souza-pra-12} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {Alvarez}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}},\ }\href {\doibase 10.1103/PhysRevA.85.032306} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {85}},\ \bibinfo {pages} {032306} (\bibinfo {year} {2012}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2012{\natexlab{b}})\citenamefont {Souza}, \citenamefont {Alvarez},\ and\ \citenamefont {Suter}}]{souza-phil} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {Alvarez}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}},\ }\href {\doibase 0.1098/rsta.2011.0355} {\bibfield {journal} {\bibinfo {journal} {Phil. T. Roy. Soc. A}\ }\textbf {\bibinfo {volume} {370}},\ \bibinfo {pages} {4748} (\bibinfo {year} {2012}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ryan}\ \emph {et~al.}(2010)\citenamefont {Ryan}, \citenamefont {Hodges},\ and\ \citenamefont {Cory}}]{ryan-prl-10} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Ryan}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Hodges}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Cory}},\ }\href {\doibase 10.1103/PhysRevLett.105.200402} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages} {200402} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Souza}\ \emph {et~al.}(2011)\citenamefont {Souza}, \citenamefont {Alvarez},\ and\ \citenamefont {Suter}}]{souza-prl-11} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Souza}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {Alvarez}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Suter}},\ }\href {\doibase 10.1103/PhysRevLett.106.240501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {240501} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \end{thebibliography} \end{document}
arXiv
Clemens, Herbert; Kollár, János; Mori, Shigefumi Higher dimensional complex geometry. A summer seminar at the University of Utah, Salt Lake City, 1987. (English) Zbl 0689.14016 Centre National de la Recherche Scientifique. Astérisque, 166. Paris: Société Mathématique de France. 144 p. FF 100.00; $ 17.00 (1988). The 24 chapter of this book consist of notes of a seminar held in 1987 at Salt Lake City. An introduction to Mori's minimal model program is presented in the 16 first chapters, which although of interest to specialists is understandable with a good background in algebraic geometry. This program aims at the classification of projective n-folds X according to the numerical positivity of their canonical bundle \(K_ X\). This (non- birational) invariant can be considered as a refinement of Kodaira dimension. Say that X is minimal if \(K_ X\cdot\) is nef (= numerically effective), i.e.: \(K_ X\cdot C\geq 0\) for every effective curve C of X. When X is not minimal, S. Mori showed the existence of extremal rational curves C on X, for which: \(0<-K_ X\cdot C\leq n+1\), and which generate the extremal rays of the closed cone of effective curves of X. He showed when \(n=2, 3\) that each such extremal ray R determines an extremal contraction \(f: X\to Y\) with Y projective, which maps to points exactly the curves the numerical class of which is in R. This is the content of chapters 1 to 4, where proofs are given, which are very close to the geometric intuition. Chapters 5 to 7: Let \(f: X\to Y\) be an extremal contraction of the n-fold X. If \(n\geq 3\) (respectively \(n\geq 4)\), then Y (respectively \(K_ Y)\) may not be smooth (respectively \({\mathbb{Q}}\)-Cartier). The process of contraction had then to stop. Chapter 5 explains how one is led to admit terminal singularities and "flips" in order to construct a minimal model program, that is a systematic procedure to produce minimal models (i.e.: with K nef) of projective terminal varieties by a succession of extremal contractions and flips. Chapter 6 discusses terminal singularities, in particular in dimension 3. Chapter 7 deals with extension of this program to some other usual situations. Chapters 8 to 13: The first step of Mori's program, that is the existence of extremal contractions of canonical projective varieties the canonical bundle of which is not nef, is proved. The proof consists of several fundamental results: Kawamata-Viehweg vanishing, Shokurov non-vanishing, base-point freeness and rationality theorems. The proofs given are the simplest known. Chapters 14 to 16: They deal with the existence of flips, which is known only in dimension 3. They provide an accessible introduction to the difficult paper of S. Mori where this result is proved. Chapters 17 to 20: They treat some applications of harmonic maps to Kähler geometry, due to Carlson and Toledo: a continuous map \(f: M\to N\) from a compact Kähler manifold M to a compact Riemannian locally symmetric space is homotopic to a non-surjective harmonic map unless N is locally hermitian symmetric. - Chapter 20 gives an upper bound for the rank of variations of Hodge structures of weight 2. Chapters 21 to 22 are devoted to the study of possibly singular curves of low genus g on generic hypersurfaces of degree d in \({\mathbb{P}}^ n:\) Chapter 21 deals with the cases \(g=0\), \(d\geq 2n-1\) (they don't exist), and \((n+1)\leq d\leq (2n-2)\). 
- Chapter 22 deals with the case \(g=0, 1, 2\) and \(d=5\), \(n=4.\) Chapters 23 to 24: The submanifolds Z of complete intersections X in Grassmannians \(Gr(r,n)\) are studied. An application is the following: Z is of general type if X is of type \((m_ 1,...,m_ k)\) with: \(m\geq \dim(X)+n+1\), and \(m:=m_ 1+...+m_ k.\) In the following commented table of contents the authors' names are listed only in case they are not the editors of this seminar notes. Chapter 1: Finding rational curves when \(K_ X\) is negative (p. 9-15). Chapter 2: Finding rational curves when \(K_ X\) is non-semi-positive (p. 16-18). Chapter 3: Surface classification (p. 19-21). Chapter 4: The cone of curves, smooth case (p. 22-27). These chapters are devoted to the proof of the existence of extremal rational curves C (i.e.: such that \(0<K_ X\cdot C\leq n+1)\) which generate the extremal rays of the closed cone of (numerical equivalence of) effective curves on a projective n-fold X whose canonical bundle \(K_ X\) is not nef (chapters 1, 2, 4). - When X is a surface, such an extremal ray R determines an extremal contraction \(f: X\to Y\) mapping to points the curves the class of which lies in R. Such an extremal contraction is either: the constant map (iff \(X={\mathbb{P}}^ 2)\), or a ruling of X, or the contraction of a \((-1)\)-curve on X. - The proofs are very close to geometric intuition; the core of the passage through characteristic \(p>0\) is reduced to an elementary lemma in elimination theory. These lecture are thus a very accessible introduction to the techniques of S. Mori. (Notice that lemma 1.5 is false, although only its true part is used.) Chapter 5: Introduction to Mori's program (p. 28-37). Let \(f: X\to Y\) be an extremal contraction of the n-fold X. If \(n\geq 3\) (resp. \(n\geq 4)\), then Y (resp. \(K_ Y)\) may not be smooth (resp. \({\mathbb{Q}}\)-Cartier). The process of contraction has then to stop. Chapter 5 explains how one is led to admit terminal singularities and "flips" in order to construct a minimal model program, that is a systematic procedure to produce minimal models (i.e.: with K nef) of projective terminal varieties by a succession of extremal contractions and flips. Chapter 6: Singularities in the minimal model program (p. 38-46). This chapter deals with terminal singularities. The notion of discrepancy for \({\mathbb{Q}}\)-Cartier singularities is introduced, as well as the notion of canonical and terminal singularities. Terminal singularities are shown to be rational, at least in dimension 3. Canonical singularities are discussed in dimensions 2 and 3. In partigular: a 3-fold singularity is terminal Gorenstein iff it is a hypersurface double point. Chapter 7: Extensions of the minimal model program (p. 47-49). This chapter is concerned with extensions of Mori's program to the relative case and to the case of varieties on which a finite group acts. Chapter 8: Vanishing theorems (p. 50-56). A simple proof of the Kawamata-Viehweg vanishing theorem and of its corollaries is given. The proof rests on Hodge theory and the fact that complex n-dimensional affine varieties have the homotopy type of a real n-dimensional CW-complex. Chapter 9: Introduction to the proof of the cone theorem (p. 57-59). Chapter 10: Basepoint-free theorem (p. 60-62). Chapter 11: K.Matsuki: The cone theorem (p. 63-66). Chapter 12: Rationality theorem (p. 67-73). Chapter 13: K. Matsuki: Non-vanishing theorem (p. 74-76). The cone theorem for canonical projective varieties is proven. 
The proof consists of several fundamental intermediate results: Shokurov non-vanishing, base-point freeness and rationality theorems. The proofs given here are the simplest known. The first part of Mori's program is thus established. The second part, namely the existence of flips, is discussed in chapter 14 to chapter 16 in the 3-dimensional case.

Chapter 14: Introduction to flips (p. 77-82). The termination of sequences of flips is established, as a consequence of the decreasing character of Shokurov's notion of difficulty under flips. General properties of "extremal neighborhoods" X (of an irreducible curve C contracted to a point under a small extremal contraction in dimension 3) are proved: C is rational and smooth, and does not contain more than two points at which X is not Gorenstein. Moreover: (1) \((K_ X\cdot C)\in [-1,0);\) (2) \(((\omega_ X/I\cdot \omega_ X)/Tors)\cong {\mathcal O}_ C(-1);\) (3) \(((I/I^ 2)/Tors)\cong {\mathcal O}(a)\oplus {\mathcal O}(b)\) with \(a,b\geq -1\), where I is the ideal of C in X. It is also shown that the construction of flips can be reduced to that of (easier) flops, provided \(| -2K_ X|\) contains an element E such that the associated double cover has only canonical singularities. This double cover exists if \(| -K_ X|\) contains an element D which has only Du Val singularities. Such a D exists when X has only cyclic quotient terminal singularities.

Chapter 15: Singularities on an extremal neighborhood (p. 83-91). This chapter introduces the paper of S. Mori in which the existence of terminal flips in dimension 3 is proved. It is thus intended to classify the triples \((X,C,p)\) where X is an extremal neighborhood of the flipped curve C, and \(p\in C\). Mori's invariants \(i_ p\) and \(w_ p\) are introduced, and it is shown that the relations (2) and (3) of chapter 14 translate into the following inequalities, which are global near \(C\): \(\sum w_ p <1\) and \(\sum i_ p \leq 3\), the sum being taken over all points of C. From this it is deduced that, after passing to the index one cover of X, either C becomes planar, or \(mult_ p(C)\leq 3.\)

Chapter 16: Small resolutions of terminal singularities (p. 92-96). Chapter 16 discusses small resolutions \(f: X\to Y\) of 3-fold terminal singularities Y, and explains how to construct the flop of f from a local equation \([x^ 2+q(y,z,t)]=0\) of the index one cover of Y using the involution which maps x to \((-x).\)

Chapter 17: D. Toledo: Kähler structures on locally symmetric spaces (p. 97-100). Chapter 18: D. Toledo: Proof of Sampson's theorem (p. 101-104). Chapter 19: D. Toledo: Abelian subalgebras of Lie algebras (p. 105-108). It is shown that any continuous map \(f: M\to N\) from a compact Kähler manifold M to a compact Riemannian locally symmetric space is homotopic to a non-surjective harmonic map, unless N is locally hermitian symmetric. One can assume f to be harmonic, by a theorem of Eells-Sampson (chapter 17). Another theorem of Sampson, proved in chapter 18, shows that for any \(x\in M\), \(df: W:=TM_ x^{1,0}\to (TN_{f(x)})^{{\mathbb{C}}}:=V\) maps W to an abelian subspace of V, i.e.: \([df,df]=0\). The last step, proved in chapter 19, shows that if W is any abelian subspace of V, then \(\dim_{{\mathbb{C}}}(W)\leq \dim_{{\mathbb{R}}}(V)\), with equality iff N is locally hermitian symmetric.

Chapter 20: J. Carlson: Maximal variations of Hodge structures (p. 109-116).
By arguments analogous to those of chapter 19, upper bounds, sometimes sharp, are obtained for the rank r of local variations of Hodge structures of weight 2. For example: \(r\leq (h^{2,0}\cdot h^{1,1})\) when \(h^{1,1}\) is even and \(h^{2,0}\geq 3.\)

Chapter 21: Subvarieties of generic hypersurfaces (p. 117-122). A typical example of the results proved here is the following: Let \(f: C\to V\subseteq {\mathbb{P}}^ n\) be a finite map from a smooth rational curve C to a generic hypersurface V of degree \(m\) in \({\mathbb{P}}^ n\). Let \(N_{f,V}:=(f^*T_ V/T_ C)\) be the normal sheaf to f. Then: \[ \text{rank}[N_{f,V}/\text{image}(H^ 0(C,N_{f,V})\otimes {\mathcal O}_ C)]>(m-(n+1)). \] In particular: V does not contain any rational curve if \(m\geq (2n-1).\)

Chapter 22: Conjectures about curves on generic quintic threefolds (p. 123-128). The following two conjectures are stated, where V is a generic quintic hypersurface of \({\mathbb{P}}^ 4:\) (1) For every degree \(d\geq 1\), V contains only finitely many rational curves of degree d. (2) V is not covered by elliptic curves. It is further shown that (1) implies (2), and that: (3) V is covered by curves of genus 2 (more precisely plane quintics with 4 nodes).

Chapter 23: L. Ein: Submanifolds of generic complete intersections in Grassmannians (p. 129-133). Let X be a generic complete intersection of type \((m_ 1,...,m_ k)\) in the Grassmannian \(G=Grass(r,n)\). Let \(m:=(m_ 1+...+m_ k)\). Let Z be a submanifold of X, and let \(m_ 0:=\inf \{\mu: h^ 0(Z,K_ Z\otimes {\mathcal O}_ Z(\mu))>0\}\). Let c be the codimension in X of the subvariety covered by deformations (in X) of Z. It is shown here, using the Koszul resolution of the ideal of the graph in \((Z\times G)\) of the natural inclusion, that \(c\geq (m+m_ 0-(n+1))\). Thus: Z is of general type if: \(m\geq (\dim(X)+n+1)\). - The Hilbert scheme of X is also shown to be smooth at Z if \(\dim(Z)=1\) and \(h^ 1(Z,N_{Z| G})=0\). This last result uses the theorem of Gruson-Lazarsfeld-Peskine shown in chapter 24.

Chapter 24: L. Ein: A theorem of Gruson-Lazarsfeld-Peskine and a lemma of Lazarsfeld (p. 134-139). Let \(C\subseteq {\mathbb{P}}^ n\) be a smooth curve of degree d, which spans \({\mathbb{P}}^ n\). Then: \(H^ 0({\mathbb{P}}^ n,{\mathcal O}(a))\to H^ 0(C,{\mathcal O}(a))\) is surjective if \(a\geq (d-n+1)\). This result, due to Gruson-Lazarsfeld-Peskine, is shown here.

Reviewer: F. Campana

14J30 \(3\)-folds 14C20 Divisors, linear systems, invertible sheaves 00Bxx Conference proceedings and collections of articles 14-06 Proceedings, conferences, collections, etc. pertaining to algebraic geometry 14-02 Research exposition (monographs, survey articles) pertaining to algebraic geometry 14J15 Moduli, classification: analytic theory; relations with modular forms 14J10 Families, moduli, classification: algebraic theory

Keywords: Mori's minimal model program; classification of projective n-folds; numerical positivity; canonical bundle; nef; extremal ray; terminal singularities; flips; vanishing; base-point freeness; rationality; harmonic maps; variations of Hodge structures; complete intersections; general type; terminal varieties; canonical singularities; extremal neighborhoods

\textit{H. Clemens} et al., Higher dimensional complex geometry. A summer seminar at the University of Utah, Salt Lake City, 1987. Paris: Société Mathématique de France (1988; Zbl 0689.14016)
CommonCrawl
Written by Colin+ in arithmetic.

That @solvemymaths is an excellent source of puzzles and whathaveyou: Meanwhile, back in 1940 when everything was basically shit... pic.twitter.com/A5eKXOunFC — Ed Southall (@solvemymaths) October 7, 2017

How would you find $\sqrt[3]{\frac{1-x^2}{x}}$ when $x=0.962$, using log tables or otherwise?

I would start by trying to make the numbers nicer: I note that $x=(1-0.038)$, which means we can rewrite the expression (using difference of two squares) as: $\sqrt[3]{\frac{(1.962)(0.038)}{1-0.038}}$

Does that help? Well, maybe. It's easy enough to estimate $\ln(1.962)$ - 1.962 is a small amount - 1.9% - short of 2, so $\ln(1.962)\approx \ln(2) - 0.019$. Similarly, on the bottom, $\ln(0.962) \approx - 0.038$. But how about $\ln(0.038)$? We could work out $\ln(38) - \ln(1000)$, which is simple enough. Thirty-eight is a little more than 36, and $\ln(38) \approx 2\ln(6) + \frac{1}{18}$. Meanwhile, $\ln(1000) = 3\ln(10)$.

So, the logarithm of everything under the cube root is $\ln(1.962)+\ln(0.038)-\ln(0.962)$, which we estimate as $\ln(2) - 0.019 + 2\ln(6) + \frac{1}{18} - 3\ln(10) + 0.038$. Ugly as decimals are, we can write $\frac{1}{18}$ as $0.0\dot{5}$ and combine everything: we get $\ln(72) - \ln(1000) + 0.07$ or $+0.08$, give or take.

Now, we need the cube root of that, so we're going to divide everything by 3. The final two terms are easy enough: $\frac{1}{3} \ln(1000) = \ln(10)$ and we'll call the decimal bit 0.025. How about $\ln(72)$? Well, $\ln(72) = \ln(8) + \ln(9)$, and a third of that is $\ln(2)$ plus a third of $2.196$, from memory, which is $0.732$.

OK: so we now have $\ln(2) + 0.732 - \ln(10) + 0.025$, which is $0.757 - \ln(5)$. $\ln(5) \approx 1.609$, so we end up with $-0.852$ as the logarithm of the answer.

What about $e^{-0.852}$? It's $e^{-0.693} \times e^{-0.159}$, so a fair guess would be 16% smaller than a half - 0.42 or so. A brief play with the calculator says that the answer is 0.426 - which, for a by-hand estimate, isn't bad at all.
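If you'd rather check the estimate without digging out the log tables, a couple of lines of Python will do it (purely a sanity check, nothing to do with the by-hand method):

    import math
    x = 0.962
    print(((1 - x**2) / x) ** (1/3))   # direct answer: about 0.4264
    print(math.exp(-0.852))            # the log-table estimate: about 0.4266

Both land on 0.426-and-a-bit, which is reassuring.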
CommonCrawl
\begin{document} \title{Numerical investigation of the logarithmic Schr\"{o}dinger model of quantum decoherence} \author{Rory van Geleuken} \author{Andrew V. Martin} \email[Correspondence email address: ]{[email protected]} \affiliation{School of Science, RMIT University, Melbourne, Victoria 3000, Australia.} \date{\today} \begin{abstract} A logarithmic Schr\"{o}dinger equation with time-dependent coupling to the non-linearity is presented as a model of collisional decoherence of the wavefunction of a quantum particle in position-space. The particular mathematical form of the logarithmic Schr\"{o}dinger equation has been shown to follow from conditional wave theory, but the validity of the logarithmic Schr\"{o}dinger equation has not yet been investigated numerically for general initial conditions. Using an operator-splitting approach, we solve the non-linear equation of motion for the wavefunction numerically and compare it to the solution of the standard Joos-Zeh master equation for the density matrix. We find good agreement for the time-dependent behaviour of the ensemble widths between the two approaches, but note curious `zero-pinning' behaviour of the logarithmic Schr\"{o}dinger equation, whereby the zeros of the wavefunction are not erased by continued propagation. By examining the derivation of the logarithmic Schr\"{o}dinger equation from conditional wave theory, we indicate possible avenues of resolution to this zero-pinning problem. \end{abstract} \keywords{quantum decoherence, master equations, logarithmic Schr{\"o}dinger equation, quantum localisation, conditional wave theory} \maketitle \section{Introduction} Understanding the behaviour of a quantum system under the continuous influence of its environment is of prime importance to a wide variety of research areas, ranging from fundamental questions about the quantum-to-classical transition \cite{Schlosshauer2004} \cite{Schlosshauer2007} \cite{Joos1985}, to the creation and operation of emerging quantum technologies \cite{Duan2001}. In particular, modelling the effects of decoherence by the environment is of vital importance in the design of these new technologies, where the loss of coherence between states is often a limiting factor \cite{Tyryshkin2012}. However, the macroscopic number of degrees of freedom of any physically reasonable environmental model results in analytically intractable models for the interactions, and so these must be treated approximately. Typically, a master equation is used, where the effects of a large class of environmental models, such as those obeying the Born and Markov approximations, can be incorporated on general principles by the addition of terms of Lindblad form \cite{Lindblad1976}. If we consider a quantum particle interacting with an environment of scattering particles, the Gallis-Fleming master equation \cite{Gallis1990} can be used to describe the environment's decohering effects in position-space. It is the loss of coherence between position space basis states that is responsible for the localisation of quantum particles, a feature that is completely absent in the bare Schr\"{o}dinger equation. The Gallis-Fleming master equation provides a very general description of this process and was re-derived and corrected by Hornberger and Sipe \cite{Hornberger2003}, after first being considered in approximate form by Joos and Zeh \cite{Joos1985}. 
This has been used to model the effects of environmental engineering in matter-wave interferometry \cite{Arndt2005}, to study the quantum-to-classical transition \cite{Schlosshauer2004} \cite{Zurek2003}, and in modelling the behaviour of quantum states of matter, such as Bose-Einstein Condensates \cite{Colombe2007}. Treating the environment as a bath of harmonic oscillators leads to the Quantum Brownian Motion (QBM) model, originally derived by Caldeira and Leggett \cite{calderia1983} using path integral techniques introduced by Feynman and Vernon \cite{vernon1963}, and solved analytically by Fleming et al. \cite{fleming2011}. This model, specialised appropriately to the physical situation, has been used to study a variety of emerging quantum technologies, such as the decoherence of Josephson junctions \cite{Makhlin2001}, QED cavities \cite{Pellizzari1995}, and photonic devices \cite{Obrien2009}, among others \cite{Tian2002}. A significant practical consideration in the implementation of these models is the computational complexity entailed by working with density matrices. The resources required to propagate such equations scale at least with the square of the size of the state space under consideration. In a similar vein, the phase-space formalism, initially developed by Groenewold \cite{Groenewold1946} and independently by Moyal \cite{Moyal1949}, building on ideas from Wigner \cite{Wigner1932}, among others \cite{Curtright2012}, involves converting the master equation formalism into a description of the evolution of a quasiprobability distribution on phase space. This has been used very successfully to describe a wide variety of applications, such as the effects of decoherence in matter-wave interferometry \cite{Bateman2014}, modelling the behaviour of quantum walks as a basis for novel quantum algorithms \cite{Lopez2003}, modelling fundamental tests of quantum mechanics in cavity QED \cite{Milman05}, and even furnishing descriptions of coherent dynamics in the early universe \cite{Matacz94}. The flexibility of phase-space approaches is hindered only, as in the case of the density matrix formalism, by the growth of the computational complexity of numerical propagation in higher dimensional settings. This is because the Wigner-Weyl transformation between the density matrix description and the Wigner function description is invertible, and so both necessarily contain the same number of degrees of freedom. The use of stochastic methods as a way of circumventing this computational scaling was pioneered by the quantum state diffusion methods of Gisin and Percival \cite{Gisin92}. They showed that the solution to a large class of master equations could be modelled by the evolution of a stochastic Schr{\"o}dinger equation with appropriately chosen stochastic forcing terms. Such calculations would result in a wavefunction that evolved in a non-deterministic manner, in response to the constant and random influence of the system's environment. The total history of a wavefunction calculated in such a way is known as a stochastic unravelling. In essence, these amount to samples in a Monte-Carlo numerical integration scheme of the path integral formulation, with each path corresponding to an unravelling, and can offer significant computational advantages over master equation methods \cite{Gisin93}. 
Additionally, conceptual insight into the interpretation of quantum mechanics with open systems offered by stochastic methods has been leveraged in proposals of modified theories of quantum mechanics such as the spontaneous collapse model of Ghirardi, Rimini and Weber \cite{Ghirardi86} and the subsequent continuous spontaneous localisation model of Ghirardi, Pearle, and Rimini \cite{Ghirardi90}. Extensions to relate these models to gravitation have also been proposed by Diosi \cite{Diosi87} and Penrose \cite{Penrose96}. Even with these conceptual and numerical advantages in many applications, stochastic methods cannot guarantee an increase in performance in general. This is because for complex systems, the benefit of only having to propagate wavefunctions rather than density matrices is outweighed by the cost of requiring large sample sizes to ensure `weak' convergence (which is to say, convergence at the ensemble level, rather than convergence at the level of a single unravelling, or so-called `strong' convergence) \cite{Platen99}. Although various powerful schemes for reducing necessary sample sizes exist in specific applications of stochastic evolutions \cite{Shapiro2003}\cite{Kroese11}, there is no known general method for reducing the inherent statistical error below that guaranteed by the central limit theorem. For many applications of stochastic quantum propagation, this is not a problem, as the value in such methods often lies in (but is not limited to) the reproduction of realistic evolutions of a system of interest, for example. However, for the description of effects such as decoherence, a well-converged density matrix or equivalent is necessary, and in this respect, stochastic methods may not be able to out-perform the master equation formalism in general. Conditional wave theory (CWT) addresses this by working in a deterministic fashion at the wavefunction level, introducing the notion of marginal and conditional wavefunctions, which were first introduced by Hunter \cite{Hunter1975} and applied in the exact factorisation approach to molecular physics \cite{Abedi2012}. By exploiting the favourable computational scaling offered by this approach, as no sample averaging is necessary, it may be possible to produce more computationally efficient models of decoherence. The utility of this speed up is most evident when considering higher-dimensional systems. As noted above, when the physical situation can be reduced to a lower-dimensional problem, there exist powerful and efficient methods that CWT does not necessarily out-perform. However, as the number of spatial or abstract dimensions increases, working directly at the wavefunction level can significantly reduce computational overhead. This has noteworthy practical consequences for the simulation of matter-wave interferometry, for example, as working with the density matrix leads to a significant increase in the computational burden with increasing dimension. 
To take a concrete case, simulating a three-dimensional system using the master equation approach would require solution of a six-dimensional partial differential equation, whereas CWT reduces this to three dimensions. Other applications of our approach may exist in related fields of quantum matter, such as modelling the motion of solitons in BECs with dissipation or drag, or of the motion of quantum (quasi-)particles in other exotic media, which can prove challenging with increasing dimensionality. The price of the reduction in dimensionality offered by CWT is non-linearity of the resulting equations of motion. Nevertheless, this would still render previously impractical calculations tractable, allowing for experimental investigations of decoherence to be compared with theory. It has been noted that both QBM and the scattering model yield the same dynamics in the limit of an environment dominated by long-wavelength (low momentum) environmental particles and small central particle displacements. Their respective master equations reduce to the form originally derived by Joos and Zeh, which is itself in Lindblad form \cite{Schlosshauer2007}. This provides the motivation to construct the conditional wave theory corresponding to this important model. Consequently, a nonlinear equation of motion for the marginal wavefunction has been derived \cite{vang2020}, possessing a logarithmic non-linearity which predicts the same evolution as the Joos-Zeh master equation (JZME) for Gaussian states. The Schr{\"o}dinger equation for a wavefunction $\psi$ augmented with a logarithmic non-linear term proportional to $-\psi\ln|\psi|$, known as the Logarithmic Schr{\"o}dinger Equation (LogSE), has been investigated as a possible model for a variety of non-linear phenomena. An early application was found in the description of the quantum-to-classical transition due to the separability property and existence of solitonic `Gausson' solutions \cite{Bialynicki76}\cite{Bialynicki1979}. Although this was not borne out by experiments in atomic physics \cite{Shull80}, it found applications in nuclear physics \cite{Hefter1985} and in modelling solitonic behaviour of optically non-linear media \cite{Shen05}. It has also been proposed as a model of quantum information exchange \cite{Zlosh11} and Bose-Einstein condensation \cite{Avdeenkov11}, due to its obvious formal similarity to the expression for entropy. In contrast to these proposals, the logarithmic non-linearity arises from the CWT description of collisional decoherence, rather than as a speculative model. The aim of the current work is to investigate the LogSE as a model of decoherence for non-Gaussian wave-packets, using numerical techniques. In Secs. \ref{sec:theoryA} and \ref{sec:TimeDepNonLin} we briefly review the Joos-Zeh master equation and conditional wave theory, as well as the mathematical form of the corresponding LogSE predicted by the latter. In Sec. \ref{sec:reg} the regularisation of the logarithmic non-linearity is discussed and in Sec. 
\ref{sec:err}, important properties of error propagation in PDEs with time-dependent non-linear terms, such as the LogSE given by CWT, are discussed. Section \ref{sec:num} contains the results of our numerical investigation, with Gaussian (Sec \ref{sec:numgauss}) and non-Gaussian, double-peak (Sec \ref{sec:numlynch}) initial conditions presented. Section \ref{sec:numwidtherr} contains an analysis of the behaviour of the numerical error and the nature of the agreement between the JZME and LogSE. Section \ref{sec:disc} contains a discussion of the advantages and disadvantages of the methods presented, as well as in depth analysis of the curious `zero-pinning' behaviour exhibited by the LogSE in Sec \ref{sec:zeropin}. \section{Theory} \label{sec:theory} \subsection{The Joos-Zeh Master Equation and Conditional Wave Theory} \label{sec:theoryA} A seminal result in the theory of quantum decoherence and localisation was the JZME. Joos and Zeh considered a massive particle with position $x$ that scatters massless (or approximately massless) environmental particles and found that the net effect of a macroscopic number of these events could be approximated by a simple quadratic decohering term. The regime in which the JZME's approximations hold is that of low-momentum environmental particles and small displacements of the central particle. With these conditions satisfied, the JZME governs the behaviour of the reduced density matrix $\rho_S(x,x')$ of a particle moving along the $x$-axis, \begin{widetext} \begin{equation} \frac{\dd{}}{\dd{t}} \rho_S(x,x',t) = \frac{i\hbar}{2m} \left(\frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial x'^2}\right)\rho_S(x,x',t) - \frac{\Lambda}{\hbar}(x-x')^2\rho_S(x,x',t) \end{equation} \end{widetext} where $\Lambda$ is the decoherence parameter, which controls the strength of the decohering effect of the environment. The reduced density matrix $\rho_S(x,x')$ is related to the total system-environment density matrix $\hat{\rho}$ through the partial trace, \begin{equation} \rho_S(x,x') = \langle x ' |\text{tr}_{E} \hat{\rho}| x \rangle = \sum_{|e\rangle \in \mathcal{B}_E} \langle x ' | \otimes \langle e | \hat{\rho} | e \rangle \otimes | x \rangle, \end{equation} where $\mathcal{B}_E$ is an orthonormal basis of $\mathcal{H}_E$, the Hilbert space of environmental states, and $\otimes$ denotes the tensor product. Using the tools of conditional wave theory we have previously shown in a prior work \cite{vang2020} that the master equation formalism can be captured by a pair of coupled equations of motion for the conditional wavefunction, $\phi$, and marginal wavefunction, $a$. These are defined by a factorisation of the total system-plus-environment wavefunction $\psi$, through \begin{equation} \psi(x,q,t) = \phi(x,q,t)a(x,t), \end{equation} where $x$ is the coordinate of the central particle of interest (the `system') and $q$ is an abstract coordinate that represents the configuration of the environment. The factorisation is not unique, and has an associated gauge symmetry realised by multiplying one factor by an $x$-dependent phase factor and the other factor by the conjugate of the same. Nonetheless, by choosing an appropriate gauge and constructing the marginal density matrix -- given by $\rho_m(x,x',t)=a^*(x',t)a(x,t)$ -- the resulting equation of motion can be compared to the JZME. 
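For the numerical comparisons presented later, the JZME itself is propagated directly: the decoherence term is diagonal in position and the kinetic term is diagonal in Fourier space, so a split-operator step is straightforward. The sketch below is schematic only (grid sizes and names are illustrative, not those of the code used for our calculations):
\begin{verbatim}
import numpy as np

N, L = 256, 30.0                               # illustrative grid only
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
X, Xp = np.meshgrid(x, x, indexing="ij")       # rho[i, j] ~ rho_S(x_i, x'_j)
K, Kp = np.meshgrid(k, k, indexing="ij")

def jzme_step(rho, dt, hbar=1.0, m=1.0, Lam=1.0):
    """One Strang-splitting step of the JZME: decoherence / kinetic / decoherence."""
    half_dec = np.exp(-Lam * (X - Xp) ** 2 * dt / (2 * hbar))     # position-diagonal part
    kin = np.exp(-1j * hbar * dt * (K ** 2 - Kp ** 2) / (2 * m))  # free evolution in k-space
    rho = half_dec * rho
    rho = np.fft.ifft2(kin * np.fft.fft2(rho))
    return half_dec * rho
\end{verbatim}
Because the state is a two-dimensional array in $(x,x')$, both the memory footprint and the cost per step scale roughly as the square of those needed for the marginal wavefunction, which is the practical motivation for working with the CWT equations instead.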
We find that a Logarithmic Schr\"{o}dinger Equation of the form \begin{equation} \label{eq:lse} i\dv{a(x,t)}{t} = -\frac{\hbar}{2m}\nabla^2a(x,t)+\frac{\hbar\gamma(t)}{m}a(x,t)\ln{|a(x,t)|^2} \end{equation} where $\gamma(t)$ is a real-valued function of time, reproduces the behaviour of solutions to the JZME in coordinate-space, provided that the environmental state is Gaussian in its coordinate. The details of the derivation can be found in the appendix. The time-dependence of the coupling parameter $\gamma(t)$ can be derived exactly for Gaussian initial conditions, and approximately generalised to non-Gaussian initial conditions, where it possesses asymptotically linear forms in both the long- and short-time regimes. As our aim is to solve Eq. \eqref{eq:lse} for non-Gaussian initial conditions, the asymptotic behaviour exhibited by $\gamma(t)$ results in a straightforward and computationally efficient numerical implementation. However, due to the nature of error propagation in non-linear PDEs, this linearly increasing coupling leads to numerical breakdown in finite time, as discussed in section \ref{sec:err}. \subsection{Time Dependence of the Coupling to the Non-Linearity} \label{sec:TimeDepNonLin} As shown in the paper by Joos and Zeh \cite{Joos1985} where it first appeared, the JZME not only preserves the Gaussian functional form of states (so that initially Gaussian states remain Gaussian for all time) but also causes $O(t^3)$ spreading of the ensemble width, $w(t)$, of these states (which corresponds to the standard deviation of the associated probability distributions). For a free particle, the Schr\"{o}dinger equation predicts a linear growth rate for $w(t)$. Note that $w(t)$ is often referred to as the ensemble width, as it represents the standard deviation of a probability distribution and to distinguish it from the coherence length, which represents the distance over which a given state retains quantum phase information and hence can support superpositions. These two length scales can be more directly interpreted in terms of the density matrix. The coherence length measures the width of the distribution of off-diagonal entries, while the ensemble width measures the spread of the on-diagonal entries. For a free particle, these have identical time-dependence. Decoherence causes a reduction in the coherence length over time which is accompanied by an increase in the ensemble width. In reality, the coherence length will only fall to some minimum value related to the energy and length scale of the interaction between the system and environment. However, this minimum is closely related to thermalization, and will not appear in simplified models such as the JZME. Substituting a Gaussian initial state into Eq. (\ref{eq:lse}), we find that the coupling parameter $\gamma(t)$ can be written as \begin{equation} \label{eq:gammaintegral} \gamma(t) = \frac{2 \Lambda}{\hbar} \frac{1}{w(t)}\int_0^t \dd{t'}w(t'). \end{equation} Working through the equations of motion, we find that $w(t) = O(t^3)$ and indeed the resulting coupled differential equations for the parameters describing the Gaussians exactly agree \cite{vang2020}. Setting $w(0)=b$, the characteristic decoherence time is \begin{equation} t_b = \frac{\hbar}{\Lambda b^2}, \end{equation} which is the time taken for the coherence length to fall by a factor of $1/e$ (unless otherwise stated, we will be working in units where $b=\hbar=\Lambda=1$ for the remainder of this paper). 
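In practice Eq. \eqref{eq:gammaintegral} can also be evaluated directly from a sampled width $w(t_i)$ by cumulative quadrature; the short NumPy sketch below is illustrative only (the function name and the sampling are ours, not a prescription of the production code):
\begin{verbatim}
import numpy as np

def gamma_from_width(t, w, Lam=1.0, hbar=1.0):
    """gamma(t) = (2*Lam/hbar) * (1/w(t)) * integral_0^t w(t') dt',
    evaluated by a cumulative trapezoidal rule on sampled widths w(t_i)."""
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(t))))
    return (2.0 * Lam / hbar) * integral / w
\end{verbatim}
For a polynomially growing width this construction reproduces the asymptotically linear behaviour of $\gamma(t)$ discussed below.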
The behaviour of $\gamma(t)$ can be split into two regimes, short- and long-times, like so, \begin{align} \gamma(t) &= 2\Lambda t + O(t^2), \qquad\qquad t \ll t_b, \\ \gamma(t) &= c_0 + \frac{\Lambda t}{2} + O(t^{-1}), \,\,\, \quad t \gg t_b, \end{align} where $c_0$ is a constant determined by the initial conditions. The transition between these two regimes occurs around the first decoherence time. In the numerical calculations, we used an interpolation between these two regimes rather than calculating the integral explicitly, even in the Gaussian case. The short time expansion clearly holds (by Taylor expanding the integral in Eq. \eqref{eq:gammaintegral} around zero, for instance) for an arbitrary time-dependent width (i.e. the second moment) of a general state, where that is well-defined. Although the long-time dependence is only exact in the case of Gaussian initial conditions, numerical investigation shows that it gives good agreement with the JZME in general. Furthermore, if we assume that the long time behaviour of the width is dominated by polynomial growth, so that $w(t)=O(t^n)$ for $t\gg t_b$, then it's easily verified that \begin{equation} \gamma(t)=\frac{2\Lambda}{\hbar} \frac{\int_0^t \dd{t'} w(t')}{w(t)} \propto \frac{t^{n+1}}{t^n} =O(t), \end{equation} which further motivates the general use of a linear time dependence in the limiting (long and short) regimes for a general state. \begin{figure} \caption{Relative $L^2(0,1]$ distance between the natural logarithm and a variety of regularised logarithms as discussed in the text. `Rational' refers to the form given in Eq. \ref{ratlog}, and `root average' refers to the form given in Eq. \ref{rootavg}.} \label{fig:regerr} \end{figure} \subsection{Regularisation of the Logarithm} \label{sec:reg} Although the limit, \begin{equation} \lim_{z\rightarrow0} z \log |z| = 0 \end{equation} for $z\in \mathbb{C}$, is straightforward to prove, it is necessary to regularise the logarithm for numerical stability. In particular we used the regularised log function, $\ln_\sigma$, defined as, \begin{equation} \ln_\sigma(x) = \Re \ln(x+10^{-\sigma}i) \end{equation} for some $\sigma\geq0$ so that for $x\gg0$, \begin{equation} \ln_\sigma(x) \approx \ln(x) \end{equation} but as $x\rightarrow 0$, $\ln_\sigma(x)\rightarrow-\sigma\ln(10)$ rather than diverging to $-\infty$. In the results presented in this paper, $\sigma = 16$, as this was well below the level of desired numerical accuracy and did not noticeably affect the computational resources required. Other choices had little to no appreciable effect on the results, provided $\sigma \gtrsim 2$. Several other regularisation schemes were also trialled, including \begin{equation} \label{rootavg} \ln_{(\sigma,N)}(x) = \frac{1}{N}\sum_{k=0}^{N-1} \ln\left(x+10^{-\sigma}e^{\frac{2\pi i k}{N}}\right), \end{equation} which also showed no appreciable sensitivity to choices of $\sigma$ or $N$ provided $\sigma\gtrsim3$. A slightly different approach took the form, \begin{equation} \label{ratlog} \text{lnr}_{(\sigma,p)}(x) = \frac{x^p}{x^p+10^{-\sigma}}\ln(x+10^{-\sigma}) \end{equation} which also did not display any appreciable effect on the evolution for $\sigma \gtrsim 3$ and all $p\geq 1$. Similar conclusions can be drawn from the graph in Fig. \ref{fig:regerr}, which shows the relative error in terms of the functional distance between the natural logarithm and the various regularisation schemes discussed. 
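In NumPy each of these regularisations is a one- or two-line function; the sketch below (function names are illustrative, not taken from our code) implements the three forms compared in Fig. \ref{fig:regerr}:
\begin{verbatim}
import numpy as np

def ln_sigma(x, sigma=16):
    # default form: Re ln(x + i*10**-sigma), finite at x = 0 and ~ln(x) away from it
    return np.real(np.log(x + 1j * 10.0 ** (-sigma)))

def ln_root_average(x, sigma=16, N=4):
    # 'root average' form: average ln over N shifted points; keep the real part
    roots = 10.0 ** (-sigma) * np.exp(2j * np.pi * np.arange(N) / N)
    return np.real(np.mean([np.log(x + r) for r in roots], axis=0))

def ln_rational(x, sigma=16, p=1):
    # 'rational' form: damps the logarithm smoothly to zero as x -> 0
    return x ** p / (x ** p + 10.0 ** (-sigma)) * np.log(x + 10.0 ** (-sigma))
\end{verbatim}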
The functional distance was calculated using the formula \begin{equation} \text{Err}(f,g) = \frac{||f-g||_2}{\sqrt{||f||_2 ||g||_2}} \end{equation} for two functions $f$ and $g$, where $||f||_2$ denotes the $L^2(0,1]$ norm, given by, \begin{equation} ||f||_2 = \left( \int_0^1 \dd{x} |f(x)|^2 \right)^{1/2}. \end{equation} The fact that the singularity of $z \ln |z|$ at the origin is removable appears to be of much greater importance than the particular method of calculating the logarithm during the intermediate steps. This is consistent with the results of Bao et al. \cite{Bao2019A}, which also showed that the evolution of the standard LogSE displayed insensitivity to the regularisation scheme used. As discussed in more detail below, particularly in section \ref{sec:zeropin}, the behaviour of solutions to the LogSE near zeros of the wavefunction disagrees significantly with standard theory. However, the results in this section strongly suggest that these issues do not appear to be caused by the numerical details of the logarithmic regularisation, but by analytic properties of the form of the LogSE itself and its derivation from CWT. We used a split-operator approach with the second order Strang-splitting scheme to propagate the LogSE (see Bao et al. \cite{Bao2019A} for a thorough analysis of operator splitting applied to the standard LogSE). This has been shown to have better performance both in computational time and reduced numerical error when compared to a (modified) Crank-Nicolson finite difference scheme which is standard for linear PDEs \cite{Bao2019B}. Operator splitting also has the advantage of being relatively straightforward to implement, as most of the computation is handled by Fourier transforms. In our case, the algorithm was implemented in Python, using the Numpy library \cite{numpy} for array manipulation and Fourier transforms. \subsection{Error Analysis} \label{sec:err} Although the Strang-splitting method has relatively low numerical error and is stable for the standard LogSE \cite{Bao2019A}, CWT requires the non-linearity to be time-dependent. This leads to an intrinsic source of numerical error for any discrete numerical scheme. To see this, consider a general non-linear Schr\"{o}dinger-like equation, \begin{equation} i\partial_t{u(x,t)} + \frac{1}{2} \nabla^2 u(x,t) = F[u(x,t)] \end{equation} where $F[u]$ is some arbitrary differentiable function with the square brackets denoting that $F$ will typically take a function as an argument, although it can also depend on $x$ and $t$ independently. Recall also that in our units $\hbar=m=1$. Let us denote some approximate solution to this equation, given by some numerical scheme for example, by $\tilde{u}$, so that \begin{equation} \tilde{u}(x,t) = u(x,t) + \eta(x,t), \end{equation} where $u(x,t)$ is the exact solution and $\eta(x,t)$ is the error. If we try to evolve $\tilde{u}$ using the equation of motion, we find, \begin{align} \label{eq:nlineom} i\partial_t u + \frac{1}{2} \nabla^2 u + i \partial_t \eta + \frac{1}{2} \nabla^2 \eta = F[u+\eta]. \end{align} Expanding out the right hand side expression as a Taylor series in $\eta$, \begin{equation} F[u+\eta] = F[u(x,t)] + \sum_{k=1}^{\infty} \frac{\partial^k F[u]}{\partial u^k} \frac{\eta(x,t)^k}{k!}, \end{equation} and substituting this back into the right hand side of Eq. 
(\ref{eq:nlineom}), we can cancel out the exact solution's terms so we are left with, \begin{equation} \label{eqn:errprop} i \partial_t \eta + \frac{1}{2} \nabla^2 \eta = \sum_{k=1}^{\infty} \frac{\partial^k F[u]}{\partial u^k} \frac{\eta(x,t)^k}{k!}, \end{equation} which means that the error of any approximate solution to a non-linear equation propagates according to a non-linear equation of its own. For equations with a non-linearity that is polynomial in the intensity, such as the Gross-Pitaevskii Equation (GPE), the right hand side of (\ref{eqn:errprop}) will be of finite order. However, in general the sum will extend over infinitely many terms. Nonetheless, if $\tilde{u}$ is a good approximation, then $\eta$ can be taken to be small everywhere and we can truncate the series at $k=1$. That is, \begin{equation} \label{eq:firstborn} \left(i\partial_t + \frac{1}{2}\nabla^2\right)\eta(x,t) = F'[u] \eta(x,t) + O(\eta^2), \end{equation} where $F'[u]=\partial F/\partial u$. The differential operator on the left hand side of (\ref{eqn:errprop}) shares a Green's function with the homogeneous time-dependent Schr\"{o}dinger equation over $\mathbb{R}^d$, \begin{equation} \label{eq:greens} G(x,t) = i\Theta(t)\left(\frac{i}{2\pi t}\right)^{d/2} e^{-i \frac{x^2}{2t}}, \end{equation} where $\Theta(t)$ is the Heaviside step function, such that, \begin{equation} \left(i\partial_t + \frac{1}{2}\nabla^2\right)G(x,t) = \delta^d(x)\delta(t). \end{equation} Although equation (\ref{eq:firstborn}) cannot be solved exactly either, we can extract some information about the behaviour of the errors propagated by (\ref{eqn:errprop}) by expanding the solution in a series, $\eta(x,t)=\sum_n\eta_n(x,t)$, where, \begin{equation} \label{eqn:errborn} \eta_{n+1} = G\ast(F'[u]\eta_n) \end{equation} and $\ast$ denotes convolution. The first term, $\eta_0$, can be thought of as the error intrinsic to the original approximation, and $\eta_n$ for $n\geq1$ represents error propagated due to the non-linear nature of the equation of motion. Since the Green's function contains a factor of $t^{-d/2}$, we expect that the higher order propagated errors will die out quickly. This is indeed the case for the GPE, where $F'[u]$ is not explicitly time dependent. However, in the case of CWT, we have a time-dependent coupling to the non-linearity, and so $F'[u]$ has an approximately linear time dependence. Thus, each iteration of (\ref{eqn:errborn}) yields a factor of $t^{1-d/2}$. For $d=1$, this means that the propagated errors will eventually diverge. For $d=3$, we would expect these errors to die out, whereas for $d=2$, a more subtle analysis is required. These latter two cases are outside the scope of this paper, though the methods we present can be generalised to higher dimensions and are intended to form the basis of future work. While we have shown that any numerical scheme for evolving our equation of interest is inherently unstable, we found that the Strang-splitting approach was stable over the time-scale required. \, \begin{widetext} \begin{figure} \caption{The behaviour of $\gamma(t)$ for a Gaussian wave packet over the timescale of interest, along with its interpolated linear approximation. 
Note that this is a log-log plot so straight lines here correspond to straight lines on the equivalent rectangular axes, but offset lines have different gradients, even if they appear parallel.} \label{fig:linapprox} \end{figure} \section{Numerical Results} \label{sec:num} \subsection{Gaussian Initial Conditions} \label{sec:numgauss} As demonstrated in \cite{vang2020}, CWT can exactly reproduce the evolution predicted by the JZME for Gaussian initial conditions. Moreover, CWT allows us to make a so-called `linear-time' approximation, which simplifies calculations at the cost of accuracy. Even with this simplified approach, we find that some qualitative features are captured by the LogSE. Recalling the discussion in section \ref{sec:TimeDepNonLin}, we note that since the coupling to the non-linearity transitions between two distinct linear regimes, we try an interpolation between them to approximate $\gamma(t)$. That is, our $\gamma(t)$ is given by \begin{equation} \gamma(t) = 2\Lambda t (1-\sigma(t)) + \left(c_0 + \frac{\Lambda t}{2}\right) \sigma(t), \end{equation} where $\sigma(t)$ is the function \begin{equation} \sigma(t) = \frac{1+\tanh(t-t_{b})}{2}. \end{equation} Any sigmoid function can be used for $\sigma(t)$ and the hyperbolic tangent was selected as it offered a good combination of both smoothness of transition and simplicity of implementation. Other widths of the transition (that is, scaling of the time coordinate) can be used, but since we are working in units natural to the problem, the simplest form provided the best agreement. This linear approximation for $\gamma(t)$ has been plotted in Fig. \ref{fig:linapprox}. In this manner, we smoothly interpolate between the short- and long-time regimes with a transition over the characteristic decoherence time, $t_b$. Naturally, this does introduce some error, even in the Gaussian case, which can be seen as a slight disagreement between the exact JZME solution and the LogSE solution, which is visible in Fig. \ref{fig:gauss2x2}. On the other hand, this approach is reasonably general and simple, as well as being quite flexible, as different interpolation methods could produce better agreement in regions of interest (much shorter than the decoherence timescale or much longer, for example). \begin{figure} \caption{Gaussian initial conditions propagated both by the JZME and the LogSE with the linear-interpolation form $\gamma(t)$ function at multiples of the characteristic decoherence times. This demonstrates the good agreement between the two models, until the wave-packet encounters the boundary. The origin of the interference fringes is discussed in the main text.} \label{fig:gauss2x2} \end{figure} Figure \ref{fig:gauss2x2} was created using a time-step of 0.05 times the characteristic timescale (which in our units is 1), a domain of width 30 comprising 2048 points (or a spatial step size of approximately 0.015). The Gaussian initially had a unit standard deviation and a mean of zero. The apparent interference effects evident after about 4 decoherence times in Fig. \ref{fig:gauss2x2} are due to the periodic boundary conditions implicit in the Fourier transforms used to perform the split-operator steps. This leads to one side of the Gaussian wrapping around the domain and interfering with the other half once it has grown sufficiently wide. This can be postponed by simply using a wider domain, although at the expense of a larger number of spatial steps to maintain the same density of points. 
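For concreteness, a condensed sketch of the propagation loop behind Fig. \ref{fig:gauss2x2} is given below. It is schematic rather than a verbatim extract of our code (all names are illustrative, and $c_0$ is simply set to zero here), but it shows the second-order Strang splitting with the interpolated $\gamma(t)$ and the regularised logarithm:
\begin{verbatim}
import numpy as np

hbar = m = Lam = b = 1.0                      # natural units used in the text
N, L, dt = 2048, 30.0, 0.05
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
a = (2 * np.pi * b ** 2) ** -0.25 * np.exp(-x ** 2 / (4 * b ** 2))  # unit-width Gaussian

def gamma_interp(t, c0=0.0, t_b=1.0):
    # interpolation between the short- and long-time linear regimes
    s = 0.5 * (1.0 + np.tanh(t - t_b))
    return 2 * Lam * t * (1 - s) + (c0 + 0.5 * Lam * t) * s

def ln_sigma(r, sigma=16):
    return np.real(np.log(r + 1j * 10.0 ** (-sigma)))

kin_half = np.exp(-1j * hbar * k ** 2 * dt / (4 * m))   # kinetic half-step propagator

def strang_step(a, t):
    a = np.fft.ifft(kin_half * np.fft.fft(a))            # kinetic half step
    # the logarithmic term preserves |a|, so it acts as an exact phase rotation
    phase = (hbar * gamma_interp(t + dt / 2) / m) * ln_sigma(np.abs(a) ** 2)
    a = a * np.exp(-1j * phase * dt)
    return np.fft.ifft(kin_half * np.fft.fft(a))          # kinetic half step

t = 0.0
for _ in range(int(10 / dt)):                             # roughly 10 decoherence times
    a, t = strang_step(a, t), t + dt
\end{verbatim}
The kinetic half steps are exact in Fourier space, and because the logarithmic term only rotates the phase of $a$ it can be applied exactly as well; this is what makes the splitting so cheap compared with propagating the full density matrix.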
\subsection{Non-Gaussian, Zero-Free Initial Conditions} Two non-Gaussian functions that do not possess zeros were also trialled as initial conditions. These were the Lorentzian, given by, \begin{equation} a(x,0) = \frac{1}{\sqrt{\pi}} \frac{1}{1+\left(\frac{x}{b}\right)^2} \end{equation} for a `width' parameter $b$, and the hyperbolic secant, given by, \begin{equation} a(x,0) = \frac{1}{b}\mathrm{sech}\left(\frac{x}{b}\right) = \frac{2}{b} \frac{1}{\exp\left(\frac{x}{b}\right)+\exp\left(-\frac{x}{b}\right)} \end{equation} with analogous width parameter $b$. Note that the width parameter for the Lorentzian represents half the full width at half maximum (FWHM) and not the standard deviation, since the latter is ill-defined for a Lorentzian distribution. \begin{figure} \caption{An initially Lorentzian wavefunction evolving under both the JZME and LogSE with the linearly-interpolated form of $\gamma(t)$ at multiples of the decoherence time. Although zero-pinning effects, discussed in detail in Sec. \ref{sec:zeropin}, are visible, the ensemble width shows good agreement between the two methods.} \caption{A wavefunction initially described by the hyperbolic secant evolving under both the JZME and LogSE with the linearly-interpolated form of $\gamma(t)$ at multiples of the decoherence time. Since zeros do not develop during evolution, the agreement between the two methods is very good until finite domain effects cause spurious interference effects, similar to the Gaussian case. The origin of these effects is discussed in the main text.} \label{fig:lrntz} \label{fig:sech} \end{figure} \subsection{Double Peak Initial Conditions} \label{sec:numlynch} For non-Gaussian initial conditions, we observe a phenomenon we have termed `zero-pinning' whereby points of zero intensity in the wavefunction are preserved by the evolution under the LogSE. This is particularly evident in Fig. \ref{fig:lynch2x2}, where the initial marginal wavefunction is a coherent superposition of two Gaussians, of the form, \begin{equation} a(x,0) = N(b,s)^{-1/2}\left[\exp\left(-\frac{(x-s)^2}{4b^2}\right)+\exp\left(-\frac{(x+s)^2}{4b^2}\right)\right] \end{equation} where $N(b,s)$ is a normalisation factor. We chose $s=b=1$ in our units, so the peaks are initially of unit width and separated by two units. Again, we used a time-step of 0.05 and a spatial step size of approximately 0.015 (a 30 unit wide domain with 2048 points). As the two Gaussian peaks spread out, they begin to interfere, creating zeros in the probability distribution. The zeros spread out at the correct rate, but their visibility is not diminished, unlike solutions to the JZME. This represents a significant drawback to using the LogSE as a model of decoherence, as decreased fringe visibility is a central feature of decoherence, and is used extensively in experimental work to characterise the decohering effect of the environment \cite{Arndt2005}. The analytic nature of this problem is discussed further in section \ref{sec:zeropin}. \begin{figure} \caption{ Twin Gaussian initial conditions at various fractions of the characteristic decoherence time. As in Fig. \ref{fig:gauss2x2}, the LogSE is implemented with the linear-interpolation form of $\gamma(t)$. 
This demonstrates the good agreement in the ensemble width behaviour of the LogSE solution, while the `zero-pinning' effects, discussed further in Sec. \ref{sec:zeropin}, are evident.} \label{fig:lynch2x2} \end{figure} In all test cases where zeros were present, zero-pinning was observed. There is no sensitivity to the origin of the zero, whether it is a feature of the initial conditions, a result of interference effects (such as the twin-Gaussian case shown above), or a result of dynamics (for example, an isolated Cauchy-Lorentz distribution will spontaneously form zeros as it spreads out). As discussed below, this is likely a result of the analytic properties of the LogSE and not a numerical artifact. Moreover, even with the zero-pinning, all test cases showed good agreement with the predictions of the JZME regarding the time-dependence of the ensemble widths of the distributions. Note that since the same numerical integrator was used for both the JZME and LogSE, the JZME solutions simulated here also have implicitly periodic boundary conditions. Their interference effects are not visible here, however, since the JZME does not exhibit zero-pinning and appears to be somewhat more efficient at suppressing the off-diagonal terms in the density matrix. On the other hand, the use of the same integrator for both formalisms resulted in a good demonstration of the significant computational speed-up offered by the LogSE model. Performing the calculations (on a commercial PC CPU) to produce the figures in this paper took on the order of hours for the highest spatial resolution calculations using the JZME but took only a few minutes using the LogSE. However, operator splitting is not necessarily the most efficient method for propagating the JZME, as it is linear in the density matrix. Generally, finite-difference methods (such as the well-established implicit Crank-Nicolson method) may exhibit better performance for these types of partial differential equations. Nonetheless, the observed reduction in computation time still serves as a good comparison to demonstrate an important advantage of the LogSE model. \end{widetext} \subsection{Widths and Error Growth} \label{sec:numwidtherr} Although solutions to the LogSE display different behaviour to the corresponding JZME solutions, there is good agreement in the behaviour of the ensemble-width spreading, as shown in Fig. \ref{fig:widths}. Note that the `kink' present around $0.1t_b$ corresponds to the initial moment the interference fringes form, which affects the growth rate of the width. This is because the fringes contain less probability mass far away from the origin, and instead concentrate it in the central band. As discussed in section \ref{sec:err}, the numerical error for a 1-dimensional simulation of the LogSE grows over time regardless of numerical details. At the same time as this error is being propagated, the typical (absolute) value of the wavefunction is shrinking due to the rapid ensemble width growth (rapid compared to the free Schr\"{o}dinger equation solution with the same initial conditions, that is). Since the logarithm varies rapidly for real arguments in $(0,1]$, diverging as its argument approaches zero, the function is quite steep in this region. Consequently, small changes in the absolute value of the wavefunction can have a considerable effect on the value of the logarithm, which may contribute to numerical instability. 
Thirdly, the self-interference caused by the periodic boundary conditions can occur quite early in the simulation due to the rapid spreading out of the wavefunction. Combined with the zero-pinning, these effects can be difficult to tease apart by varying the parameters of the simulation (temporal and spatial resolutions, domain size, etc.). Incorporation of absorptive boundary conditions was attempted in order to mitigate the effects of self-interference, but was ultimately disregarded. An imaginary potential that was non-zero only at the boundary was trialled; however, unless the width of this non-zero region was carefully tuned, reflections from the boundary were produced (\cite{he2007} contains a good discussion of this problem). This tuning seemed to be sensitive to the initial conditions, unlike in the linear case, and simply introduced a new problem while not completely eliminating the wrapping of the wavefunction around the domain. While more mathematically rigorous and physically realistic methods exist for implementing absorptive boundary conditions, these can introduce significant computational complexity because they require an additional integral to be performed at each time-step, as well as being non-local in time (see \cite{Lubich2003} for an example algorithm and general analysis). Ultimately, the computational scaling of integrating the LogSE with Strang-splitting means that it is, in general, much more tractable to simply work with a very large domain to mitigate the spurious effects of periodicity. Regardless, the good agreement between the two formalisms (master equation versus LogSE) as far as the ensemble width is concerned suggests that while the form of the LogSE that has been derived captures some aspects of the environmental influence, the mechanism by which fringe visibility is diminished is lost. Interestingly, this also suggests that these are, in some sense, separate phenomena, which is not obvious \textit{a priori}, since both appear simultaneously with the addition of the single decohering term in the JZME. \, \begin{widetext} \begin{figure} \caption{(a) The kinetic energy of a Gaussian state plotted against time, in units of the initial kinetic energy. Initially, the kinetic energy grows linearly (note the logarithmic axis) and suddenly jumps when the wavepacket encounters the edge and wraps around the domain. (b) The point at which this jump first occurs, plotted with increasing domain width. Note that for domains wider than about $10^3b$, the numerical breakdown is no longer due to self-interference, but is intrinsic to the LogSE, as implemented here.} \label{fig:DomsEnergy} \end{figure} \begin{figure} \caption{Plots of the ensemble widths of the probability distributions corresponding to the Gaussian initial state (left) and the twin-Gaussian initial state (right), measured relative to the initial width of the distribution ($w(0)=b$). Note that the `kink' that occurs in the twin-Gaussian plot at around $10^{-1}t_{b}$ corresponds to the initial formation of the interference fringes. The divergence of the widths just before $10 t_{b}$ corresponds to the numerical breakdown discussed in section \ref{sec:err}.} \label{fig:widths} \end{figure} \begin{figure} \caption{The relative $L^2$ error (defined in the text) of the JZME and LogSE evolutions of a Gaussian initial state. 
The rapid increase in error around 9 characteristic times is insensitive to domain width and spatial resolution, corresponding to the inevitable numerical breakdown discussed in section \ref{sec:err}.} \label{fig:errs} \end{figure} \section{Discussion} \label{sec:disc} As shown in Fig. \ref{fig:errs} (a), the LogSE performs very well as a model of decoherence for a Gaussian initial state. In all other test cases investigated, the LogSE yielded ensemble width growth that closely matched the JZME solution, before numerical instability set in. Since the LogSE's computational demand scales much more favourably than the master equation approach, it is not unreasonable to decrease the step size somewhat in order to improve stability. However, as shown in Fig. \ref{fig:DomsEnergy}, this only delays the breakdown to a point. As discussed in Sec. \ref{sec:err}, the time dependence of the non-linearity all but guarantees a finite-time numerical breakdown. Furthermore, our analysis indicates that in more than one dimension, the accumulation of error intrinsic to the non-linearity of the problem should not prove as destructive. As discussed above, this is a consequence of the form of the Green's function in Eq. \eqref{eq:greens}. This is of particular interest as the computational speedup offered by conditional wave theory is also most evident in higher dimensional problems. This suggests that the application of conditional wave theory to quantum mechanical problems involving continuous degrees of freedom, such as matter-wave interferometry or decoherence of quantum states of matter, could be a promising area for further research. Besides ensemble width growth, the other major prediction of decoherence theory is the loss of fringe visibility in interference patterns. As discussed in the results section, the LogSE in this form cannot reproduce this behaviour. This is a significant hurdle in the way of using this model in practical situations such as predicting matter-wave interference patterns. The cause of this problem is discussed in the next section, and possible areas of investigation which may offer a solution are proposed. \subsection{Zero-pinning} \label{sec:zeropin} We now discuss in depth the mathematical origins of the zero-pinning phenomenon, as this is the most significant drawback of the LogSE model. Furthermore, the way that the LogSE treats zeros so differently to other values of the wave-function suggests that some important mathematical structure is missing or incorrect in the derivation of the LogSE from CWT. Although several approximations have been made, it's not immediately apparent how these would result in the observed effects. For example, most of the approximations make no reference to the on-diagonal values of the density matrix, or are concerned with simplifying the time dependence of the non-linear coupling. Although the zero-pinning is time-independent (in that it occurs regardless of how far along in time the simulation is), we will show that the assumption that the evolution is dependent on purely local information contained in the wavefunction is closely related to the appearance of zero-pinning effects. Assume that at some point on the $z$-axis, say $2x_0$, the intensity $\rho(x_0,x_0)$ goes to zero. We now consider why the addition of a term proportional to $y^2\rho(y,z)$ in the master equation should fill in the zero when this term clearly vanishes at this point. 
To begin, we note that along the $z$-axis $\rho$ must be real and a short distance along this axis - say $\delta x$ - from the point $x_0$, $\rho$ must be small, positive, and approximately parabolic. This follows if the zero is of first order in the wavefunction. If we Taylor expand in time around the zero (assuming $\rho$ vanishes at some time $t$), we find, \begin{equation} \label{eqn:rhoexpand} \rho(x_0,x_0,t+\dd{t}) = \dd{t}\left(\frac{i}{\mu}\partial^2_{yz}\rho - \lambda y^2 \rho\right) + \frac{\dd{t}^2}{2} \left( \frac{i}{\mu}\partial^2_{yz}\dot{\rho} - \lambda y^2 \dot{\rho} \right) + \frac{\dd{t}^3}{3!} \left( \frac{i}{\mu}\partial^2_{yz}\ddot{\rho} - \lambda y^2 \ddot{\rho} \right) + O(\dd{t}^4), \end{equation} \end{widetext} where $\mu^{-1} = 2\hbar/m$, $\lambda=\Lambda/\hbar$, the dots denote differentiation with respect to time, and everything on the right hand side is understood as being evaluated at the point $(x_0,x_0)$ at time $t$. We can find the expressions for the higher time derivatives of $\rho$ by repeated application of the JZME (Eq. (\ref{eqn:JZME})). A significant amount of algebraic work can be saved by noting that most of the terms generated in this manner will vanish. Trivially, anything with a surviving factor of $y$ must be zero, since this clearly vanishes at $(x_0,x_0)$. Furthermore, the derivative operator can be rewritten \begin{equation} \partial^2_{yz}\rho = \frac{1}{4} \left(\partial^2_x\rho-\partial^2_{x'}\rho\right) \end{equation} but since $\rho$ vanishes along both of the axes $(s,x_0)$ and $(x_0,s)$ (where $s\in\mathbb{R}$) the second derivatives with respect to both $x$ and $x'$ must both vanish at $(x_0,x_0)$. From all this, we can conclude that both the first- and second-order-in-time terms vanish, but the third order term will contain the following, \begin{equation} \dddot{\rho} \sim \left(\frac{i}{\mu}\frac{\partial^2}{\partial y \partial z}\right)^2\left(-\lambda y^2 \rho\right) \sim \frac{2\lambda}{\mu^2} \frac{\partial^2\rho}{\partial z^2} \approx \frac{16\hbar\Lambda}{m} \frac{\rho_\delta}{\delta x^2}, \end{equation} where $\sim$ denotes equality up to cancellation of terms that vanish according to our observations above and $\rho_\delta$ is the value of $\rho(x,x)$ a short distance in either direction along the $z$-axis. As we noted, $\rho$ is approximately parabolic along this axis, and so the second derivative along it is positive (since $\rho$ must be non-negative along $z$). Thus for short times, the growth of absolute value of the wavefunction will be cubic in time. Note this argument depends crucially on the fact that the JZME is manifestly non-local. Information about the state at the point $x'$ is propagated to $x$ through the decoherence term encountering the kinetic term. In the LSEs that we have developed so far, all information remains local, and so cannot be propagated to fill in the zeros. A similar analysis can be performed for the LogSE. To begin, the marginal wavefunction can be expanded over some small time-step $\dd{t}$ at some first-order zero $x_0$ and eliminating terms that vanish at the zero, \begin{equation} a(x_0,t+\dd{t}) = \dd{t}^2\frac{1}{m} \nabla \varepsilon \cdot \nabla a + O\left(\dd{t}^3\right), \end{equation} recalling that $\nabla^2a = 0$ at a first order zero. 
Furthermore, from the derivation of the LogSE (detailed in the appendix), the gradient of $\varepsilon(x)$ is given by, \begin{equation} \pdv{\varepsilon(x)}{x} = -\frac{\hbar^2}{m} \frac{1}{|a(x)|}\frac{\partial}{\partial x}\{|a(x)|\gamma(x)\} \end{equation} so that, \begin{equation} a(x_0,t+\dd{t}) = -\frac{\dd{t}^2}{2}\frac{\hbar^2}{m^2} \frac{1}{|a|}\frac{\partial}{\partial x}\{|a(x)|\gamma(x)\} \frac{\partial a}{\partial x} \end{equation} Setting $\hbar=m=1$, the coefficient of $\dd{t}^2$, $c_2$, can be written, \begin{equation} \label{eq:c2def} c_2 = \pdv{\gamma}{x}\pdv{a}{x} + \frac{\gamma}{|a|}\pdv{|a|}{x}\pdv{a}{x}, \end{equation} which appears to be singular at the zero, since $|a|\rightarrow0$. However, care must be taken, since that is also precisely the point at which $|a|$ is not differentiable. Expanding over some small step $\dd{x}$ using a central finite difference and setting $|a(x)|=r(x)$, \begin{align} &\lim_{x\rightarrow x_0} \frac{1}{|a(x)|} \pdv{|a(x)|}{x} \\ &= \lim_{x\rightarrow x_0}\frac{1}{r(x)}\lim_{\dd{x}\rightarrow 0} \frac{r(x+\dd{x}/2)-r(x-\dd{x}/2)}{\dd{x}}\\ &= \lim_{x\rightarrow x_0}\frac{r'(x_0^+) + r'(x_0^-)}{r(x)}. \end{align} Note that close to a first-order zero, $r'(x_0^+)=-r'(x_0^-)$, since $r=|a|$ behaves like a constant multiple of $|x-x_0|$ there. So it must hold that, \begin{equation} \lim_{x\rightarrow x_0} \frac{1}{|a(x)|} \pdv{|a(x)|}{x} = \lim_{x\rightarrow x_0}\frac{0}{r(x)} = 0. \end{equation} The second term of $c_2$ in Eq. \eqref{eq:c2def} can therefore be dropped, leaving, \begin{equation} a(x_0, t+\dd{t}) = -\dd{t}^2\frac{\hbar^2}{m^2} \gamma'(x_0, t) a'(x_0, t), \end{equation} which is generically non-zero provided $a$ is not everywhere zero and $\gamma$ is a function of space. In the simplest version of the CWT LogSE that we've used so far, $\gamma$ is not a function of space, implying that $\gamma'(x)=0$, which in turn means the zeros cannot be filled in. Following the derivation of the LogSE given in the appendix, we can arrive at a slightly more general solution for the non-linear term than is given in Eq. (\ref{eq:lse}), \begin{equation} \label{eq:eps1} \varepsilon(x,t) = C(t) - \int_0^{x} \dd{x'} \frac{1}{|a(x',t)|} \frac{\partial(\gamma(x',t)|a(x',t)|)}{\partial x'}, \end{equation} where $C(t)$ is an arbitrary function of time that does not affect the dynamics. If we take $\gamma$ to be independent of $x'$ then we recover our expression for the logarithmic non-linearity. In general, however, $\varepsilon$ can be written, \begin{equation} \label{eq:eps2} \varepsilon(x,t) = -\gamma(x,t) - \int_0^x \dd{x'} \gamma(x',t) \frac{\partial}{\partial x'} \ln |a(x',t)| \end{equation} which follows from applying the product rule to the integrand of Eq. (\ref{eq:eps1}); the constant of integration has been set to zero. This form suggests that the influence of the coupling to the environment manifests in two ways: the first term represents a local potential which arises directly from the interaction with the environment, while the second, non-local term is more difficult to interpret physically. Sergi and Zloschastiev \cite{Sergi2013} have discussed the logarithmic Schr\"{o}dinger equation in the context of quantum information transfer, and the above form of the non-linear term suggests a kind of weighted sum over the differential information content of the probability distribution generated by the marginal wavefunction. Thus the integral term may be interpreted as the amount of entanglement with the environment, as measured by the delocalised quantum information content of the marginal state.
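As an aside, the cancellation of the one-sided derivatives of $|a|$ used above is easily illustrated numerically; a minimal sketch with the illustrative choice $a(x)=x e^{-x^2}$ (simple zero at $x_0=0$) is,
\begin{verbatim}
# The central-difference estimate of d|a|/dx vanishes at a simple zero
# (the one-sided slopes cancel), while a one-sided estimate tends to |a'(x0)|.
import numpy as np

a = lambda x: x*np.exp(-x**2)   # illustrative marginal amplitude, zero at x0 = 0
x0 = 0.0
for dx in (1e-1, 1e-2, 1e-3, 1e-4):
    central = (abs(a(x0 + dx/2)) - abs(a(x0 - dx/2)))/dx   # -> 0
    one_sided = (abs(a(x0 + dx)) - abs(a(x0)))/dx          # -> |a'(x0)| = 1
    print(dx, central, one_sided)
\end{verbatim}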
The principal challenge in applying Eq. \eqref{eq:eps2} unsurprisingly lies in evaluating the integral. Firstly, at points where the magnitude of the wavefunction vanishes, the logarithm is ill-defined. However, this does not mean that the integral itself is undefined. If we regard $x$ as the real part of a complex coordinate, it is possible to evaluate the integral assuming the marginal magnitude is meromorphic on an appropriate domain. This can be done using the generalised Cauchy argument principle, which states that for a meromorphic function $f$ and a holomorphic function $g$ of a complex variable $w$, defined on an open subset of the complex plane $D$, \begin{equation} \frac{1}{2\pi i} \oint_C \dd{w} \frac{f'(w)}{f(w)} g(w) = \sum_z g(z) n_C(z) - \sum_p g(p) n_C(p), \end{equation} where $C$ is a closed curve in $D$ that does not intersect the zeros ($z$) or poles ($p$) of $f$, the sums run over zeros and poles counted with multiplicity, and $n_C(x)$ is the winding number of the point $x$ with respect to the curve $C$ (note that if $x$ does not lie in the interior of $C$, $n_C(x)=0$). Currently, we know of no sufficiently general solution for $\gamma(x,t)$ with which to apply this formula, nor do we know how to find such a solution numerically. Nonetheless, the importance of zeros in the generalised Cauchy argument principle, along with the seemingly reasonable assumption that there should not be any poles in $|a(x,t)|$ on the real line, is suggestive of a possible resolution to the zero-pinning problem. \section{Conclusion} Conditional wave theory offers new mathematical tools for the analysis of quantum decoherence, a central and important problem in the era of emerging quantum technologies. We have shown that, as applied to the localising effects of decoherence, as measured by the behaviour of the ensemble width of a state, the logarithmic equation of motion predicted by CWT demonstrates excellent agreement with standard theory. However, the mathematically simplest form of this equation of motion leads to significant accumulation of error and zero-pinning behaviour. We have shown that the former is accompanied by an intrinsic reduction in the necessary computational resources, achieved by working at the wavefunction level, and that it becomes less problematic with increasing dimension. The zero-pinning effect prevents the destruction of interference effects that is well known to occur as a result of decoherence. The resolution of this shortcoming of CWT may lie in careful analysis of the subtle non-local behaviour of the coupling to the environment, which appears to have mathematical links to previous work on information theory and the logarithmic Schr\"{o}dinger equation. \appendix* \section{Derivation of the LogSE} \label{sec:devapx} The one-dimensional JZME for the reduced density matrix $\rho_S(x,x')$ can be written in the standard form, \begin{widetext} \begin{equation} \frac{\dd{\rho_S(x,x',t)}}{\dd{t}} = \frac{i\hbar}{2m} \left(\frac{\partial^2}{\partial x^2} - \frac{\partial^2}{\partial x'^2}\right)\rho_S(x,x') - \frac{\Lambda}{\hbar}(x-x')^2\rho_S(x,x'). \end{equation} It is more convenient to work in the rotated coordinate system given by $y=x-x'$ and $z=x+x'$. The JZME thus becomes, \begin{equation} \label{eqn:JZME} \frac{\dd{\rho_S(y,z,t)}}{\dd{t}} = \frac{2i\hbar}{m} \frac{\partial^2}{\partial y \partial z} \rho_S(y,z,t) - \frac{\Lambda}{\hbar} y^2 \rho_S(y,z,t). \end{equation} The evolution of solutions to the JZME is characterised by the narrowing of the distribution of non-zero density matrix entries around $y=0$, i.e. the diagonal.
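The coordinate change used here can be spot-checked symbolically on a concrete test function (an arbitrary smooth choice, not drawn from the model):
\begin{verbatim}
# Check that (d2/dx2 - d2/dx'2) f = 4 d2 f/(dy dz) with y = x - x', z = x + x',
# for a concrete smooth test function written in both coordinate systems.
import sympy as sp

x, xp, u, v = sp.symbols('x xp u v')
y, z = x - xp, x + xp
f = sp.exp(-y**2)*sp.sin(z) + y**3*z**2      # test function of (y, z)
g = sp.exp(-u**2)*sp.sin(v) + u**3*v**2      # same function in (u, v) = (y, z)
lhs = sp.diff(f, x, 2) - sp.diff(f, xp, 2)
rhs = 4*sp.diff(g, u, v).subs({u: y, v: z})
print(sp.simplify(lhs - rhs))                 # 0
\end{verbatim}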
The key insight of CWT is that this single master equation can be written as two equations for the conditional and marginal wavefunctions, $\phi(x,q,t)$ and $a(x,t)$, respectively, where the total system-environment wavefunction, $\psi(x,q,t)$ is given by, \begin{equation} \label{eqn:CWTfact} \psi(x,q,t)=\phi(x,q,t)a(x,t), \end{equation} where the generalised coordinate $q$ represents the degree(s) of freedom of the environment. The reduced density matrix is then given by, \begin{align} \begin{split} \rho_S(x,x',t) &= \text{tr}_E ~ \psi^*(x',q',t)\psi(x,q,t)\\ &= \int \dd{q} ~ \psi^*(x',q,t) \psi(x,q,t). \end{split} \end{align} Substituting in the CWT factorisation, we find, \begin{equation} \rho_S(x,x',t) = a^*(x',t)a(x,t) \int \dd{q} ~ \phi^*(x',q,t)\phi(x,q,t) = \rho_M(x,x',t) K(x,x',t) \end{equation} where we've defined the \textit{marginal density matrix}, $\rho_M(x,x',t)=a^*(x',t)a(x,t)$ and the \textit{coherence integral} $K(x,x',t)=\int \dd{q} ~ \phi^*(x',q,t)\phi(x,q,t)$. Substituting this back into the JZME (Eq. \ref{eqn:JZME}), we find the marginal density matrix and the coherence integral must satisfy, \begin{equation}\label{eqn:factoredME} \frac{\dd{\rho_M}}{\dd{t}} = \frac{2i\hbar}{m}\left( \partial_{yz}\rho_M + (\partial_{y}\rho_M)(\partial_{z} \ln K) + (\partial_z\rho_M)(\partial_{y} \ln K) + \rho_M\frac{\partial^2_{yz}K}{K} \right) - \frac{\Lambda}{\hbar}y^2 \rho_M - \frac{\dd{}}{\dd{t}} \ln K. \end{equation} However, CWT also requires the marginal density matrix obey an equation of motion of its own \cite{vang2020}. This arises from the equation of motion for the marginal wavefunction, which itself arises from the Schr\"{o}dinger equation, the CWT factorisation, Eq. \eqref{eqn:CWTfact}, and the requirement, \begin{equation} \int \dd{q} \, \phi^*(x,q,t) \phi(x,q,t) = 1. \end{equation} With an appropriate choice of gauge, the equation of motion for the marginal wavefunction can be written as, \begin{equation} \label{eqn:marginaleom} i\frac{\dd{a}(x,t)}{\dd{t}} = -\frac{\hbar}{2m} \frac{\partial^2 a(x,t)}{\partial x^2} + \frac{1}{\hbar}\varepsilon(x,t)a(x,t) \end{equation} where $\varepsilon(x,t)$ is a real-valued function. This leads to an equation of motion for the marginal density matrix through the relation, $\rho_M(x,x',t) = a^*(x',t)a(x,t)$, \begin{equation} \label{eqn:marginalME} \frac{\dd{\rho_M}}{\dd{t}} = \frac{i\hbar}{2m} \left(\partial^2_x - \partial^2_{x'}\right) \rho_M - \frac{i}{\hbar}\left[\varepsilon(x,t)-\varepsilon(x',t)\right]\rho_M. \end{equation} Our aim then, is to find a form of $\varepsilon(x,t)$ and $K(x,x',t)$ such that Eq. \eqref{eqn:marginalME} and Eq. \eqref{eqn:factoredME} are equivalent. One such form can be found by noting the fact that the original JZME has analytic Gaussian solutions, suggesting a Gaussian form for $K$ may satisfy this requirement. In particular, if we take, \begin{equation} K(y,t) = \exp\left(-\frac{\gamma(t) y^2}{2}\right) \end{equation} where we're working in the $(y,z)$ basis for convenience, then it can be verified by substitution that Eq. \eqref{eqn:factoredME} and Eq. \eqref{eqn:marginalME} coincide as $y\rightarrow 0$ provided, \begin{equation} \varepsilon(x,t) = \frac{\hbar^2}{m} \gamma(t) \ln|a(x,t)|^2. \end{equation} For $y \neq 0$, the two equations coincide only if, \begin{equation} \label{eqn:lingamma} \dv{\gamma(t)}{t} = \frac{\Lambda}{\hbar} \implies \gamma(t) = \gamma(0) + \Lambda t/\hbar. 
\end{equation} However, since all observables depend only on the diagonal values of $\rho_M(x,x')$, that is, when $x=x'$ or equivalently $y=0$, the requirement that the two equations coincide for the off-diagonal terms is not strictly necessary. In the Gaussian case, for example, Eq. \eqref{eqn:lingamma} does not hold for all time, but the resulting model still shows good agreement with the JZME, as discussed in the main text. A more general solution can be found by dropping the Gaussian assumption for $K(x,x',t)$. The expression for $\varepsilon(x,t)$ in that case has the form, \begin{equation} \label{eqn:generaleps} \varepsilon(x,t) = C(t) + \int_0^{x} \dd{x'} \gamma_2(x',t) \pdv{}{x'} \ln[\gamma_2(x',t)r(x',t)] \end{equation} where $r(x,t)=|a(x,t)|$ and $C(t)$ is an arbitrary function of time which does not affect the dynamics, and where $\gamma_2(x,t)$ is defined as, \begin{equation} \gamma_2(x,t) = \frac{\partial^2}{\partial y^2} \ln K(y,z,t)\Big|_{y=0}. \end{equation} Eq. \eqref{eqn:generaleps} can be verified by substitution of \begin{equation} \gamma(t)y^2/2 \rightarrow \sum_n\gamma_n(z,t)y^n/n! \end{equation} in Eq. \eqref{eqn:marginalME}. This agrees with the Gaussian case exactly when $\pdv{\gamma_2}{x}=0$, which is to say that $\gamma_n=0$ for $n>2$. \end{widetext} \end{document}
Mathematical model of TGF-β signalling: feedback coupling is consistent with signal switching

Shabnam Khatibi, Hong-Jian Zhu, John Wagner, Chin Wee Tan, Jonathan H. Manton & Antony W. Burgess. BMC Systems Biology, volume 11, Article number 48 (2017).

Transforming growth factor β (TGF-β) signalling regulates the development of embryos and tissue homeostasis in adults. In conjunction with other oncogenic changes, long-term perturbation of TGF-β signalling is associated with cancer metastasis. Although TGF-β signalling can be complex, many of the signalling components are well defined, so it is possible to develop mathematical models of TGF-β signalling using reduction and scaling methods. The parameterization of our TGF-β signalling model is consistent with experimental data.

We developed our mathematical model for the TGF-β signalling pathway, i.e. the RF- model of TGF-β signalling, using the "rapid equilibrium assumption" to reduce the network of TGF-β signalling reactions based on the time scales of the individual reactions. By adding time-delayed positive feedback to the inherent time-delayed negative feedback for TGF-β signalling, we were able to simulate the sigmoidal, switch-like behaviour observed for the concentration dependence of long-term (>3 hours) TGF-β stimulation. Computer simulations revealed the vital role of the coupling of the positive and negative feedback loops in the regulation of the TGF-β signalling system. The incorporation of time-delays for the negative feedback loop improved the accuracy, stability and robustness of the model. This model reproduces both the short-term and long-term switching responses for the intracellular signalling pathways at different TGF-β concentrations. We have tested the model against experimental data from MEF (mouse embryonic fibroblast) WT, SV40-immortalized MEFs and Gp130 F/F MEFs. The predictions from the RF- model are consistent with the experimental data.

Signalling feedback loops are required to model TGF-β signal transduction and its effects on normal and cancer cells. We focus on the effects of time-delayed feedback loops and their coupling to ligand stimulation in this system. The model was simplified and reduced to its key components using standard methods and the rapid equilibrium assumption. We detected differences in short-term and long-term signal switching. The results from the RF- model compare well with experimental data and predict the dynamics of TGF-β signalling in cancer cells with different mutations.

TGF-β is a member of the transforming growth factor superfamily, which also includes other growth factors such as bone morphogenetic proteins, Müllerian inhibitory substance, activin, inhibin and Nodal [1–3]. Each family member controls a broad range of cellular processes, such as differentiation, proliferation, migration, life span and apoptosis [1, 4]. TGF-β is secreted in an inactive form and sequestered in the extracellular matrix, but once activated by serine proteases and metalloproteinases [5], TGF-β is released and binds to the cell surface to form TGF-β receptor complexes. The active ligand:receptor complex then initiates the intracellular signalling that leads to SMAD activation (phosphorylation) and nucleocytoplasmic shuttling and, eventually, to gene responses in the nucleus [6, 7].
Recent studies indicate that TGF-β concentration, stimulation time, cell type and even the percentage of active signalling components within cells can influence the gene responses, giving a multi-functional aspect to TGF-β signaling [2, 8]. This is of particular interest in colon cancer, where SMAD signalling is a critical pathway controlling the transition of normal epithelial cells to cancerous cells [3, 8–11]. In spite of the myriad studies on the TGF-β signalling pathway, there are still many unanswered questions concerning the impact of TGF-β signalling at different stages of cancer cell progression [12]. In particular, there are two opposing reactions of cancer cells to TGF-β: the proliferation of cancer cells at an early-stage is inhibited by TGF-β [13], yet at more advanced stages of malignancy, proliferation of cancer cells is stimulated by this cytokine [14]. Although many of the TGF-β signalling components were discovered decades ago [15], the quantitation, dynamics and locations of the signalling components that occur within hours of TGF-β stimulation [16–22] have been more difficult to interpret. Consequently, mathematical models of TGF-β signalling have been developed [2, 16–21, 23, 24]. In a comprehensive model of TGF-β signalling, Zi et al. [22] aim to explain the high cooperativity and discontinuous cellular responses to TGF-β in terms of switch-like behavior arising from ligand depletion. However, these models did not include the feedback mechanisms known to regulate the TGF-β system, in particular feedback through SMAD7, a key inhibitor in TGF-β signal transduction [25]. Furthermore, SMAD7 is an important component for mediating the crosstalk between TGF-β signal transduction and other cytokine signalling pathways such as IL-6 or IL-11 [10]. The Zi model [22] also lacks the more recently discovered positive feedback loop in TGF-β signalling that acts by suppressing Azin1 via the microRNA miR-433 [26]. Azin1 promotes polyamine synthesis [26, 27], which suppresses TGF-β signalling [26, 28–30]. Azin1 inhibits antizyme, thus preventing the degradation of ornithine decarboxylase (ODC) [26, 27]. ODC is essential for the biosynthesis of polyamines [26, 27] (see Fig. 1). Interestingly, over-expression of Azin1 suppresses the expression of TGF-β and its Type 1 receptor [26]. The miR-433:Azin1:Antizyme:ODC reactions appears to induce a positive feedback on TGF-β signalling [26]. The full TGF-β signalling biological model. Potential phosphorylation sites of the receptors are specified with empty circles attached to R1 and R2 components. Arrows pointing to 6 blue dots represent degradation process. The stars indicate the production processes for specific proteins on the membrane and in the cytoplasm. The red solid arrows originating from SMAD7/Smurf apply negative and/or positive feedback on the receptor components of the membrane. Oval-shaped components written in small letters represent micro-RNAs. In this figure, S represents the SMAD proteins. Note that the arrow from ODC to polyamine shows an stimulatory reaction rather than conversion It is likely that these feedback loops will produce both cooperativity and switch-like behavior, even in the absence of ligand depletion [31–35]. The modelling of feedback loops requires the introduction of time-delays due to the extended time scales of the reactions. 
This is typically found in cellular signalling systems which involve gene regulation, protein synthesis and the shuttling of signalling components between subcellular compartments [31–33]. As a prelude to improving our understanding of the TGF-β signalling system we have developed a new mathematical model which incorporates negative feedback control via SMAD signalling, positive feedback via Azin1 and appropriate time-delays for specific reactions [25, 36]. We started the modelling process by incorporating all of the reactions involved in TGF-β and SMAD signalling, including the feedback loops and time-delays. We then used the rapid equilibrium assumption to produce a simpler system that is more amenable to robust mathematical analysis and numerical simulation (section "Mathematical Model for TGF-β Signalling" in "Additional file 1") [37]. The reduction methods were applied to the TGF-β signalling system in two steps, resulting in a semi-reduced model and the RF- model. The RF- model allows us to characterise the system both at the steady-state and during the transient dynamics in response to TGF-β signals. It should be noted that the activation of TGF-β receptors also stimulates the MAPK (mitogen-activated protein kinase) [38–41] and P38 [40–42] systems, which will influence the responses of late-stage cancer cells. The predictions from the proposed model are compared with published experimental data [22] and new experimental data from our laboratory.

Development of the TGF-β signalling model

The TGF-β receptor complex is a tetramer comprised of Type 1 and Type 2 receptors that, upon TGF-β binding, becomes activated via autophosphorylation [1, 43, 44] (Fig. 1). The activated TGF-β receptor complex is then internalized [45, 46], where it phosphorylates and activates SMAD2/3 [44]. Activated SMAD2/3 then forms homotrimers, which bind to SMAD4 homotrimers. The heterotrimers (hexamers) are imported into the nucleus [47]. The phosphorylated SMAD2/3:SMAD4 complex functions as a transcription factor that upregulates a number of target genes, including Jun, Fos, SNAIL1 and SMAD7; the last of these target genes, SMAD7, is a known inhibitor of TGF-β Type 1 receptors and TGF-β receptor signalling [25, 47, 48]. The detailed reactions for this signalling system are summarized in Fig. 1. Signalling systems like the TGF-β pathway can be modelled using ordinary differential equations which describe the concentration changes of the various cellular components (e.g. TGF-β receptors, SMAD4) as a function of time [49]. TGF-β receptor activation starts with the dimerization of both receptor components (TGF-β receptor types 1 and 2, called R1 and R2, respectively); these dimers are vital for the signalling processes [50, 51]. The R2 dimer binds to the R1 dimer, resulting in the receptor complex RC. The RC complex binds TGF-β dimers present in the medium around the cell. The TGF-β:RC complex (LC) contains all the components essential for signalling; however, the R1s are not yet activated (phosphorylated), i.e. LC is not the membrane transducer of the exogenous TGF-β signal. Signalling requires ligand-stimulated phosphorylation of R1 by R2 to produce a phosphorylated ligand-receptor complex (PC in Fig. 1). PC is an intermediate component caused by the binding of the ligand (TGF-β) to the R1 monomers. A degradation reaction for LC is not necessary, as LC is converted to PC and degraded through PC. After phosphorylation of SMAD2/3, the SMADs oligomerize to form the (PSMAD2/3)3:(SMAD4)3 complex [52].
(PSMAD2/3)3:(SMAD4)3 translocates to the nucleus, stimulating the SMAD7 gene and the expression of the miR-433 microRNA [26, 53]. The SMAD7 mRNA is translated and eventually the SMAD7/SMURF complex accelerates the degradation of the R1-associated membrane components [54, 55]. Although receptor dimerization of type1 and 2 receptors on the membrane are reported to occur in different orders [56–59], the short time scale of the receptor dimerization reactions means that the dimerization order does not change the steady-state receptor output for the TGF-β: TGF-βR signalling system. In considering the development of a model for a signalling pathway, it is important to consider all of the processes associated with the dynamics, activation, transfer, maintenance or damping of the signal. Some signalling processes are triggered rapidly and reach a new steady-state within minutes. Other processes require hours or even days to reach new steady-states. In our modelling process we defined as many processes as is practical (to produce a detailed model) and then studied the contributions of the different processes (reactions) to the regulation of specific components between 5 minutes ("short"-term) and three hours ("long"-term). Where particular reactions reach equilibrium rapidly, we introduced several "fast" reactions where only the final concentration of the "fast" reaction products appear in the "slow" equations as functions of the substances (rapid equilibrium assumption). N.B. the rapid equilibrium assumption is a special form of quasi steady-state approximation (QSSA) which is often used in the context of time scale separation (see [60] for a review). In order to compensate for the elimination of the "fast" reactions, time-delays are used in the RF- model. The time delays are explained in more detail in the next section. We tested the effectiveness of the model with a reduced number of equations (reduced model) for simulating the expected concentration of SMAD2 and Phospho-SMAD2 at both short times (<3 hour) and long times (>6 hour). SMAD3 plays a crucial role in regulating SMAD7 [61, 62] and miR-433 [26] and stimulating the negative and positive feedback loops. However, due to similar dynamics for SMAD2 and SMAD3 inside the cell, it is reasonable to use measurements of Phospho-SMAD2 as the output of the TGF-β signalling system. Semi-reduced model of TGF-β signalling In order to reduce the number of intracellular reactions involving in TGF-β signaling, we have focused on the receptor components and then the direct interactions of the critical receptor components with the SMADs at the membrane. We considered the reduction process of the TGF-β signalling system in two steps: first by developing a semi-reduced model and second reducing it further to the RF- model. The semi-reduced model of TGF-β signalling is shown in Fig. 2. The semi-reduced TGF-β signal transduction reactions. The red dashed lines which originate from phosphorylated SMAD trimer indirectly regulate the receptor levels. All the reactions from trimerization of phospho-SMAD2/3 to SMAD7 transcription and translation are reduced to the red dashed lines (see Fig. 1 for clarification). The dotted ends of red dashed lines show that included reactions could lead to both inhibition and stimulation of their targeting reactions (demonstrating negative and positive feedback effects). In this figure S is specifically used for SMAD2/3 We reduced the SMAD signalling interactions (e.g. 
nucleocytoplasmic shuttling of activated SMAD complexes and transcription and translation of feedback-associated proteins, such as SMAD7 and miR-433) to a single ligand dependent feedback loop that is regulated by the levels of the PSMAD trimer, (S)3. For SMAD activation of transcription, an intermediate step, Sn, was added to mimic the nuclear accumulation of phosphorylated SMAD. These steps simplify the initial modelling equations and include negative and positive feedback loops. The two feedback loops for TGF-β signalling are both the result of sequences of back-to-back, coupled reactions (see Fig. 1). Each of the intracellular processes happens at specific locations, within a specific time interval and at defined kinetic rates. In order to simulate all the cytoplasmic and nuclear reactions associated with the feedback loops significant time-delays need to be incorporated into the model for TGF-β signalling. In programming from the full set of reactions (Fig. 1) to the semi-reduced model (Fig. 2) several assumptions were necessary. Primarily, the component S is used to represent the initial states of the SMAD proteins. Since both SMAD2 and 3 follow similar dynamics, we assigned the single component S to represent both proteins. \({\hat {\text {S}}}\) replaces all the phoshorylated SMAD2/3 in the cytoplasm, while the nuclear PSMAD3 is represented by Sn. SMAD4 is the common-mediator SMAD that participates in the TGF-β signalling by interacting with PSMAD2/3. Therefore, it is possible to incorporate the role of SMAD4 in \({\hat {\text {S}}}\). The total (PSMAD2/3) 3.(SMAD4) 3 concentration is represented by (S)3 in Fig. 2. The negative feedback cascade via SMAD7 (S 7) is initiated from the transcriptional SMAD complex (Sn) and is represented by the (S)3 component. However, (S)3 is represented as a dimer in the negative feedback equations in order to simulate the SMAD7:SMURF interaction. The positive feedback loop is caused by a chain of biochemical reactions which are triggered by nuclear (PSMAD2/3) 3.(SMAD4) 3 [52]. These Azin1:Antizyme:ODC:Polyamine associated reactions are represented via a single intermediate inhibitor P. In Fig. 3 both the positive and negative feedback loops are indicated with a dot-terminated solid line emerging from miR-433 and S 7. TGF-β receptor signalling system. a The schematic semi-reduced model, TGF-β signal transduction. TGF and \({\hat {\text {S}}}\) + 3 (S)3 represent the input and the output of the model. b A Simplified Model of TGF-β signal transduction. TGF-β and \(\hat {\text {S}}+ \text {S}_{\text {n}} + 3 \text {(S)}_{\text {3}}\) represent the input and the output of the model. Both positive and negative feedback loops are indicated by dot-ended solid lines emerging from (S)3. τ P and τ N represent time-delays incorporated in the positive and negative feedback loops, respectively According to the semi-reduced model shown in Fig. 2, the receptor associated reactions can be represented by: where ∗ represents the production process of specific proteins and ::: represents the proteosomal degradation processes. Corresponding delay differential equations describing all of the reactions associated with semi-reduced TGF-β signal transduction are (Fig. 
2): $$ {{\begin{aligned} \frac{\text{d[R1]}}{\text{d}t} &= {v_{1}} - {k_{1}} [\text{R1}] - {k_{1}^{\text{f-}}} [\text{N}]^{2} \frac{[\text{R1}]}{[\text{R1}]+{K}}- \\ &\quad 2{k_{1}^{+}} [\text{R1}] [\text{R1}] + 2{k_{1}^{-}}[(\text{R1})_{2}] - {k_{1}^{\text{f+}}} [\text{P}] \frac{[\text{R1}]}{[\text{R1}]+{K}}\\ \frac{\text{d}[\text{R2}]}{\text{dt}} &= {v_{2}} - {k_{2}} [\text{R2}] - 2{k_{2}^{+}} [\text{R2}] [\text{R2}] + 2 {k_{2}^{-}}[(\text{R2})_{2}]\\ \frac{\text{d}[(\text{R1})_{2}]}{\text{dt}} &= {k_{1}^{+}} [\text{R1}] [\text{R1}] - {k_{1}^{-}} [(\text{R1})_{2}] - {k_{\text{RC}}^{+}} [(\text{R1})_{2}] [(\text{R2})_{2}]\\ &\quad+ {k_{\text{RC}}^{-}} [\text{RC}] \\ \frac{\text{d}[(\text{R2})_{2}]}{\text{dt}} &= {k_{2}^{+}} [\text{R2}] [\text{R2}] - {k_{2}^{-}} [(\text{R2})_{2}] - {k_{\text{RC}}^{+}} [(\text{R1})_{2}] [(\text{R2})_{2}]\\ &\quad+ {k_{\text{RC}}^{-}} [\text{RC}] \\ \frac{\text{d}[\text{RC}]}{\text{dt}} &= {k_{\text{RC}}^{+}} [(\text{R1})_{2}] [(\text{R2})_{2}] - {k_{\text{RC}}^{-}} [\text{RC}] \\ &\quad -{k_{\text{LC}}^{+}} [\text{RC}] [(\text{TGF}-\beta)_{2}] + {k_{\text{LC}}^{-}} [\text{LC}] - {k_{\text{RC}}}[\text{RC}]\\ &\quad- {k_{\text{RC}}^{\text{f-}}} [\text{N}]^{2} \frac{[\text{RC}]}{[\text{RC}]+{K}} \\ \frac{\text{d}[\text{LC}]}{\text{dt}} &= {k_{\text{LC}}^{+}} [\text{RC}] [(\text{TGF}-\beta)_{2}] - {k_{\text{LC}}^{-}} [\text{LC}] - {k_{\text{PC}}^{+}} [\text{LC}]\\ &\quad+ {k_{\text{PC}}^{-}} [\text{PC}] \\ \frac{\text{d}[\text{PC}]}{\text{dt}} &= {k_{\text{PC}}^{+}} [\text{LC}] - {k_{\text{PC}}^{-}} [\text{PC}] - {k_{\text{PC}}} [\text{PC}]\\ &\quad - {k_{\text{PC}}^{\text{f-}}} [\text{N}]^{2} \frac{[\text{PC}]}{[\text{PC}]+{K}} \\ \frac{\text{d}[\text{S}]}{\text{dt}} &= {v_{\text{S}}} - {k_{\text{S}}} [\text{S}] - {k_{\text{S}}^{+}} [\text{PC}] \frac{[\text{S}]}{[\text{S}]+{K_{\text{S}}}}+{k_{\text{S}}^{-}} [{\hat{\text{S}}}] \\ \frac{\text{d}[{\hat{\text{S}}}]}{\text{dt}} &= {k_{\text{S}}^{+}} [\text{PC}] \frac{[\text{S}]}{[\text{S}]+{K_{\text{S}}}}-{k_{\text{S}}^{-}} [{\hat{\text{S}}}] - {k_{\hat{\text{S}}}}[{\hat{\text{S}}}]\\ &\quad - {k_{n}^{+}} [{\hat{\text{S}}}] + {k_{n}^{-}} [\text{S}_{\text{n}}] \\ \frac{\text{d}[\text{S}_{\text{n}}]}{\text{dt}} &= {k_{n}^{+}} [{\hat{\text{S}}}] - {k_{n}^{-}} [\text{S}_{\text{n}}] - 3 {k_{3}^{+}} [\text{S}_{\text{n}}]^{3} + 3 {k_{3}^{-}} [(\text{S})_{3}] \\ \frac{\text{d}[(\text{S})_{3}]}{\text{dt}} &= {k_{3}^{+}} [\text{S}_{\text{n}}]^{3} - {k_{3}^{-}} [(\text{S})_{3}] - {k_{3}} [(\text{S})_{3}] \end{aligned}}} $$ where \([\text{P}]=K_{I}^{2}/(K_{I}^{2}+[(\text{S})_{3}(t-\tau_{P})]^{2})\) and \([\text{N}]=[(\text{S})_{3}](t-\tau_{N})\) are the positive and negative feedback intermediate components, respectively (see Fig. 1 for definitions of the components).

RF- model of TGF-β signalling

The reduced model approximates TGF-β signalling with 6 differential equations. It is assumed that the R1 and R2 dynamics are similar, hence the individual components were replaced by a receptor block, R. R then becomes dimerized to form RC. LC and PC are combined in one parameter, i.e. PC, since they approximately follow the same kinetics.
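The rapid equilibrium assumption used in these reductions can be illustrated with a short symbolic sketch (generic rate symbols, not the study's code): a fast reversible dimerisation R + R <-> RC is taken to be at equilibrium on the slow time scale, so the dimer concentration is eliminated algebraically and substituted into the remaining slow equations.

# Sketch of the rapid equilibrium assumption for a fast dimerisation step.
# Symbols are generic placeholders, not the parameter names of the RF- model.
import sympy as sp

R, RC, k_plus, k_minus = sp.symbols('R RC k_plus k_minus', positive=True)
equilibrium = sp.Eq(k_plus*R**2, k_minus*RC)   # forward rate = backward rate
RC_fast = sp.solve(equilibrium, RC)[0]         # RC = (k_plus/k_minus)*R**2
print(RC_fast)
# RC_fast would then replace [RC] wherever it appears in the slow ODEs, so the
# dimer no longer needs its own differential equation.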
The reactions describing the receptor interactions and the initial SMAD changes are: Although some cooperativity within the system originates from the several dimer and trimer reactions on the membrane, in the cytosol and in the nucleus, the most critical cooperativity associated with the TGF-β induced signalling reactions comes from the trimerization of the Phosphorylated SMAD3, the binding of these oligomers to the SMAD4 trimer and the consequential stimulation of miR-433 and SMAD7 transcription. It should be noted that the trimerization of Phospho-SMADs influences both the positive and negative feedback loops (see Fig. 1). Figure 3 describes the reaction framework we used to produce the RF- model for simulating TGF-β signalling and how it is derived from the semi-reduced model. The key components in the RF- model are specified in Fig. 3. This reduction/simplification method retains all of the critical components of the signalling pathways. The set of delayed differential equations which describe the RF- model is introduced in Eq. 4. We have named this model RF- model of TGF-β signalling since "R" indicates that the model is "reduced" and "F" emphasizes that the positive and negative "feedback" loops are considered in the RF- model. Initially, the time-delays and amplitudes of the positive and negative feedback loops (τ P and τ N ) are assumed to be identical, however as shown in the supplementary results, it is feasible to adjust these parameters when appropriate experimental data is available. $$ \begin{aligned} \frac{\mathrm{d}[\mathrm{R}]}{\mathrm{d}t} &= {v_{1}} -{k_{1}} [\mathrm{R}] - 2{k_{\text{RC}}^{+}}[\mathrm{R}]^{2} + 2{k_{\text{RC}}^{-}}[\text{RC}]\\ &\quad - {k_{1}^{\text{f+}}}[\mathrm{P}] \frac{[\mathrm{R}]}{[\mathrm{R}]+{K}} - {k_{1}^{\text{f}-}}[\mathrm{N}]^{2} \frac{[\mathrm{R}]}{[\mathrm{R}] + {K}} \\ \frac{\mathrm{d}[\text{RC}]}{\mathrm{d}t} &= {k_{\text{RC}}^{+}}[\mathrm{R}]^{2} - {k_{\text{RC}}^{-}}[\text{RC}] - {k_{\text{RC}}} [\text{RC}]\\ &\quad - {k_{\text{PC}}^{+}}[(\text{TGF}-\beta)_{2}][\text{RC}] \\ &\quad + {k_{\text{PC}}^{-}} [\text{PC}] - {k_{\text{RC}}^{\text{f}-}}[\mathrm{N}]^{2} \frac{[\text{RC}]}{[\text{RC}] + {K}} \\ \frac{\mathrm{d}[\text{PC}]}{\mathrm{d}t} &= {k_{\text{PC}}^{+}} [(\text{TGF}-\beta)_{2}][\text{RC}] - {k_{\text{PC}}^{-}} [\text{PC}]\\ &\quad - {k_{\text{PC}}} [\text{PC}] - {k_{\text{PC}}^{\text{f}-}}[\mathrm{N}]^{2} \frac{[\text{PC}]}{[\text{PC}] + {K}} \\ \frac{\mathrm{d}[\mathrm{S}]}{\mathrm{d}t} &= {v_{\mathrm{S}}} - {k_{\mathrm{S}}}[\mathrm{S}] - {k_{\mathrm{S}}^{+}} [\text{PC}]\frac{[\mathrm{S}]}{[\mathrm{S}] + {K_{\mathrm{S}}}} + {k_{\mathrm{S}}^{-}}[{\hat{\mathrm{S}}}]\\ \frac{\mathrm{d}[{\hat{\mathrm{S}}}]}{\mathrm{d}t} &= {k_{\mathrm{S}}^{+}} [\text{PC}]\frac{[\mathrm{S}]}{[\mathrm{S}] + {K_{\mathrm{S}}}} - {k_{\mathrm{S}}^{-}} [{\hat{\mathrm{S}}}] - {k_{n}^{+}}[{\hat{\mathrm{S}}}]\\ &\quad + {k_{n}^{-}}[\mathrm{S}_{\mathrm{n}}] - k_{\hat{\mathrm{S}}}[{\hat{\mathrm{S}}}]\\ \frac{\mathrm{d}[\mathrm{S}_{\mathrm{n}}]}{\mathrm{d}t} &= {k_{n}^{+}}[{\hat{\mathrm{S}}}] - {k_{n}^{-}}[\mathrm{S}_{\mathrm{n}}] - {k_{\mathrm{S}_{\mathrm{n}}}} [\mathrm{S}_{\mathrm{n}}] \end{aligned} $$ where again, [(S)3]=[Sn]3/K 3, [N]=[(S)3](t−τ N ) and [P]=K I 2/(K I 2+[(S)3(t−τ P )]2). τ P and τ N represent the time-delays incorporated in the positive and negative feedback loops respectively. 
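A minimal sketch of how the delayed terms in Eq. 4 can be treated numerically is given below (toy dynamics and illustrative parameter values only, not the authors' code or the full six-equation system); the essential point is that the history of [(S)3] is stored so that the delayed arguments t−τN and t−τP can be looked up at every integration step.

# Toy delay-differential integration with stored history for the delayed feedback
# terms N(t) = (S)3(t - tau_N) and P(t) = K_I^2/(K_I^2 + (S)3(t - tau_P)^2).
# One toy state variable stands in for the full RF- model; values are illustrative.
import numpy as np

tau_N = tau_P = 45.0                 # minutes, as assumed in the simulations
K_I, dt, T = 0.5, 0.01, 500.0
n = int(T/dt)
lag_N, lag_P = int(tau_N/dt), int(tau_P/dt)

S3 = np.zeros(n + 1)                 # stored history of the trimer (S)3
s = 0.0                              # toy stand-in for the PSMAD-like signal

for i in range(n):
    N = S3[i - lag_N] if i >= lag_N else 0.0
    P = K_I**2/(K_I**2 + (S3[i - lag_P] if i >= lag_P else 0.0)**2)
    # toy dynamics: production boosted by P (positive feedback), a Michaelis-type
    # term inhibited by N^2 (negative feedback), and first-order decay
    ds = 1.0*P - 0.5*N**2*s/(s + 0.1) - 0.1*s
    s += dt*ds
    S3[i + 1] = s**3/(s**3 + 1.0)    # toy stand-in for the trimerisation readout

print("toy steady state:", s)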
Total PSMAD concentration \([\hat {\mathrm {S}}]\) is defined as: $$[{\hat{\mathrm{S}}}] + \frac{Vn}{Vc} \left([\mathrm{S}_{\mathrm{n}}] + [(\mathrm{S})_{3}]\right) $$ where Vn and Vc are defined as the volume of the nucleus and the cytoplasm compartment, respectively. [(S)3]=[Sn]3/K 3 and [Sn], is calculated from the final equation of 4. The parameters \({k_{1}^{\text {f-}}}\), \({k_{\text {RC}}^{\text {f-}}}\) and \({k_{\text {PC}}^{\text {f-}}}\) represent, respectively, the strength of the negative feedback on R, RC and PC, the R1-associated membrane complexes. Although we have applied the negative feedback on R, RC and PC simultaneously and with identical strengths and binding constants, the feedback on PC is what produces the switching behaviour (see "Site of negative feedback for TGF-β signalling" section). The positive feedback is applied only to R, where the polyamines act [26, 29]. The cooperativity of the RF - TGF-β signalling system originates from the coupling of the self-regulatory positive and negative feedback rather than from extracellular effects such as ligand dimerization or depletion. The component P in Eq. 4 represents the Azin1: Antizyme:ODC:Polyamine associated reactions through which the positive feedback acts on the receptors (Fig. 1). The positive feedback is indirect, being affected by two coupled, inhibitory processes [26]. To achieve the most biologically compatible and robust model of TGF-β signalling, the sites of action of the feedback reactions needs to be determined. Sensitivity analysis identified PC as the negative feedback action point (see "Site of negative feedback for TGF-β signalling" section). SMAD7 binds to receptors and participates in the induction of E3 ubiquitin (Ub) ligase-mediated receptor ubiquitination [63, 64]. Henri-Michaelis-Menten kinetics is used to model the negative feedback inhibitory function. It is been reported that polyamine depletion increases the TGF-β type 1 receptor mRNA and increases the sensitivity of cells to TGF-β- mediated growth inhibition [26, 28, 29]. Consequently, we have modelled successive reactions of the positive feedback loop using two inhibitory reactions: first, the inhibition the intermediate inhibitor P via miR-433 and second, the inhibition of R via P. Time delays are required in the reactions initiated by (S)3. Hence, time-delays have been applied to both the positive and negative feedback loops. The time-delays compensate for the SMAD nucleocytoplasmic shuttling and the other reactions that have been consolidated in the reduced models (e.g. SMAD7 transcription and translation for the negative feedback loop and the miR-433/Azin1/Antizyme/ODC reaction chain for the positive feedback loop). Simulations described in the results were performed with the equations described in the RF- model. Concentrations are dimensionless and scaled such that v 1=1. More simulation and experiment results are shown in section "Supplementary Figures" of "Additional file 1". Numerical simulations of TGF-β signalling Analyses of the reduced equations and scaling make it possible to study the characteristics of the model with less complexity. Our model uses six coupled differential equations to represent all the reactions occurring on the membrane, within the SMAD signalling cascade and during the feedback loops. In all the computer simulations we have assumed τ P =τ N =45 minutes. 
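For reference, the total PSMAD readout defined above can be computed from the model state with a small helper of the following form (illustrative argument values; Vn/Vc is the nuclear-to-cytoplasmic volume ratio):

# Total PSMAD = S_hat + (Vn/Vc)*(S_n + (S)3), with (S)3 = S_n**3/K3.
def total_psmad(S_hat, S_n, K3, Vn_over_Vc):
    S3 = S_n**3/K3
    return S_hat + Vn_over_Vc*(S_n + S3)

print(total_psmad(S_hat=0.2, S_n=0.1, K3=1.0, Vn_over_Vc=0.3))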
In order to ensure the existence and uniqueness of the solution (or the steady-state), the system must satisfy the global/local Lipschitz condition [65]. All the equations defined by Eq. 4 can be considered in the form of state equations, \(\dot {x} = f(x,t)\), and are globally continuous in x and t. Also their partial derivatives \(\left (\frac {\partial f_{i}}{\partial x_{j}}\right)\) are continuous for all x ∈R n, n =6. Since the partial derivatives \(\left (\frac {\partial f_{i}}{\partial x_{j}}\right)\) are locally bounded, it can be inferred that all f i (x,t) are locally Lipschitz for all x. Therefore, the state equations in Eq. 4 ensure the existence of a unique solution in the domain of interest. Note that the domain of interest D, where x ∈D, is a subset of R 6. Several biological constraints are applied to the model parameters and the initial values of the variables. For instance, none of the components of the model can be negative nor infinite since they are concentrations, kinetic rates or binding constants. Many of the cytoplasmic and nuclear variables are zero at the beginning of the stimulation. Consequently, D does not cover entire R 6 space. To test our hypothesis that the positive feedback is responsible for the change of the behaviour of the system for both short-term (0-3 hours) and long-term (6-8 hours) cellular responses, we ran the simulations for the same TGF-β concentration and for stimulation times up to 8 hours. The parameter values used to populate the RF- TGF-β model equations are shown in "Additional file 1: Table S3". Figure 4 shows the predicted changes in the PSMAD concentration time-course for different TGF-β concentrations. Despite noticeable changes in the transient response of the model to different (non-zero) ligand concentrations, the steady-state remains unchanged. Zero TGF-β input initiates no signalling, as we expected. In order to reproduce the results of the total PSMAD time-course in the literature (e.g. [22, 66]), we have parametrized the RF- model with TGF-β=5 arbitrary unit, where the steady-state level of PSMAD is 40% less than its short-term peak value and peaks one hour after the ligand stimulation. Total PSMAD time-course for different TGF-β concentrations. The TGF-β signalling and hence the PSMAD time-course is proportional to the TGF-β concentration. As the TGF-β input signal increases, the peak of the total PSMAD concentration is shifted to the left, is stronger and lasts longer. In the case of approximately zero TGF-β input (TGF-β=0.001), the signalling does not occur. Despite the short-term changes in the total PSMAD concentration (< 3 hours) with respect to TGF-β, its steady-state level remains the same (0.3). We have parametrize our RF- model based on its consistency with the experimental data, i.e. TGF-β=5, where the peak in the total PSMAD concentration 50-60 min after the stimulation corresponds to the short-term (transient) response and the constant level at 0.3 represents the long-term (steady-state) response of the system. Note that all concentrations are represented with arbitrary units The RF- model of TGF-β signalling can show oscillations under certain conditions (Additional file 2: Figure S5). Oscillation occurs because of the coupling between the positive and negative feedback loops. More specifically, increasing the receptor production rate (v 1) and SMAD production rate (v S) at the same time increases the potential components which are necessary for signalling when the ligand is abundant. 
Therefore, the system can oscillate without decaying of the PSMAD levels. While the model can produce oscillatory responses, no oscillation has been reported in TGF-β signalling pathway experimentally. As a result, we adjusted the RF- model parameters and kinetic rates such that PSMAD experiences a single peak after the stimulation and decays smoothly to the steady-state level. The Zi et al. model [22] produced a sigmoidal TGF-β concentration dependence for the cellular responses to long-term stimulation. The total concentration of PSMAD was used as an interpretation of the final cellular response. According to their results [22], the Hill coefficient of the fitted curve to the cell responses to long-time TGF-β stimulation was approximately 4.5. The Zi et al. model's short-term (transient) responses to TGF-β followed the Hill equation with an approximate coefficient of 0.8 [22]. Zi et al. proposed that the reason for such a dramatic change in the behaviour of the system was due to a significant time-dependent ligand depletion caused ligand-receptor interaction and consequential degradation of the ligand [22]. We examined the short- (0-3 hours) and long- (6-8 hours) term responses for PSMAD in our model as a function of TGF-β concentration (Fig. 5). The Hill coefficients are 0.85 for the short-term and 3.87 for the long-term stimulation, i.e. similar to the values determined by Zi et al. (see Zi's Figure 5.A and 5.B [22]). The parameter values are fitted to a single term (Hill coefficient) in Fig. 5 and Additional file 2: Figure S3. Note that the dots are the results of the RF- model simulation and the curve show the fitted Hill equation. These results support our hypothesis that the coupling of time-delayed positive and negative feedbacks in the TGF-β signal transduction system can account for ultra-sensitive responses to the ligand concentrations. Transient and steady-state responses of the simplified TGF-β signalling model. Short-term responses of PSMAD levels to different concentrations of TGF-β is referred as transient response. The simulation time for each point in this figure is 50 min (the time of overshoot in Fig. 4). Long-term responses of PSMAD levels to different concentrations of TGF-β is referred as steady-state response. The simulation time for each point in this figure is 500 min (the time of steady-state in Fig. 4). The only parameter of the model which is being changed in producing both curves is the TGF-β concentration. Note that the unit of concentration on both axes are arbitrary Site of negative feedback for TGF-β signalling In an initial calculation we allowed the negative feedback to operate on all of the R1-associated complexes on the membrane, however, sensitivity analysis indicated that it is the negative feedback through PC which regulates the system. PC is the only TGF-β-associated complex in the simplified model for TGF-β signalling. The total TGF-β ligand concentration (extracellular TGF-β, which is kept constant in our simulations, and that which is bound within the PC complex) decreases because of the degradation of PC via the basal degradation of, and negative feedback on, PC. The saturation of the system with TGF-β flattens the TGF-β concentration response curves at high concentrations of ligand (Fig. 5). In order to examine our hypothesis, we conducted a set of simulations with the feedback on R and RC removed (Figure S3). To accomplish this, \(k_{1}^{\text {f}-}\) and \(k_{\text {RC}}^{\text {f}-}\) were set to zero. 
The results of these simulations corroborated our initial hypothesis that the negative feedback acts almost entirely through PC. The dynamics and the effect of the feedback loops depend on other parameters i.e. N and K. However, other parameters cannot be set to zero, as these concentrations, e.g. N, depend on other concentrations in the system, such as (S)3, which is non-zero after the initial time point. Consequently, N is not zero after time 0. K is the binding constant of the reaction and is in the denominator together with another concentration, e.g. R in Eq. 4. Setting K large enough does not guarantee that the negative feedback loop will be turned off. Setting coefficients to zero is the only way of removing the effect of a negative feedback loop from components R and RC. The negative feedback loop is only acting on R, RC and PC. If we remove its effect on R and RC, PC is the only component that is affected and regulated by negative feedback loop. Please note that turning off the negative feedback loop for one component does not alter the effectiveness of this loop on the other components: N is considered as an enzyme in the equations (Michaelis -Menten kinetics) and is not consumed during the reactions, so its concentration and hence its effectiveness does not change. Cancer cells: changes in response to TGF-β We propose that the time-course of the PSMAD concentration in response to TGF-β stimulation is modified in cancer cells due to the possible mutations in SMADs, mutations to TGF-β receptors and/or different receptor levels [67–70]. Consequently, we simulated the biochemical conditions of the early-stage tumors by reducing the TGF-β receptor levels and the SMAD concentrations [71]. More precisely for modulating the receptor levels, we decreased the effect of the positive feedback loop on the receptors (\(k_{1}^{\text {f}+}\) in Additional file 1: Table S3 is decreased from 1 to 0.1) and SMADs (v s in Additional file 1: Table S3 is decreased from 1 to 0.5). The simulation response of the total PSMAD time-course in cells with lower receptor and SMAD concentrations is plotted in Fig. 6. A comparison of Fig. 6 with Fig. 4 reveals that PSMAD concentration peaks to a higher level (0.67 rather than 0.5) but reduces to a lower level at the steady-state (0.13 v.s. 0.3). Clearly at lower receptor levels (< 0.5 normal), e.g. found in early cancer, the responses to TGF-β are reduced significantly. This result confirms the suitability of our simplified receptor model of TGF-β signalling for simulating the responses in both normal cells and the early colon cancer cells. Total PSMAD time-course for a certain TGF-β concentration. Total PSMAD time-course for a certain TGF-β concentration. Simulation results for low membrane receptor concentration condition (or so called early-stage tumors) are compared with the simulation results for high membrane receptor concentration condition (or so called late-stage tumors). These conditions were simulated via altering the receptor production rate on the membrane. Note that the units of PSMAD concentration levels are arbitrary In contrast, late-stage tumors are more responsive to TGF-β signalling [72]. This could be due to the effects of TGF-β on the micro-environment and consequential indirect stimulation of the tumor [73–77]. However, where the TGF-β receptor is intact and SMADs are mutated, active receptors and signalling via the MAPK and P38 pathways can stimulate migration and invasion [41, 73, 78]. 
In order to simulate late tumor environment, the receptors and SMADs levels are increased, by increasing the relative kinetic rates. v 1 in Additional file 1: Table S3 is increased from 1 to 1.2 and v S is increased from 1 to 1.5. The predicted responses of late-stage tumors to TGF-β stimulation are shown in Fig. 6. Although total PSMAD concentration peaks at a higher level of TGF-β receptor in late tumors, the steady-state levels of PSMAD are not significantly different from the peak (i.e. normal levels of TGF-β receptor). To investigate the role of receptor level in the signalling, we have simulated the behaviour of PSMAD concentration while the receptor concentration increases monotonically. Receptor production rate was increased to achieve an increase in receptor concentration. TGF-β concentration was maintained at a constant level during the experiment. This simulation was conducted for two distinct concentrations of TGF-β: 5 and 2 (arbitrary units). The second TGF-β concentration is located approximately where the switch in the long-term steady-state PSMAD concentration occurs (see steady-state responses in Fig. 5 and Additional file 2: Figure S3). There was no distinguishable change in the PSMAD steady-state concentration when TGF-β concentration was reduced (Fig. 7). Low receptor concentrations simulate cancer cells (see Fig. 7). The non-responsiveness at the start in both panels of Fig. 7 show that the cells are insensitive to TGF-β signalling when the receptor copy numbers are very low, i.e. the situation in cancer cells. The saturation level determines the receptor concentration in which the highest level of signal occurs. When receptor concentration is approximately 0.75, the PSMAD level reaches a saturation level and stays there as the receptor concentration increases. The saturation levels in Fig. 7 correspond to the steady-state of PSMAD in Fig. 5, steady-state response. As expected, when the TGF-β concentration is increased the curve of PSMAD shift to the left and all the changes happen at lower receptor levels. The effects of receptor concentration on the long-term response of PSMAD (500 min). The PSMAD steady-state levels are calculated for two distinct ligand concentrations TGF-β = 2 and TGF-β = 5 (arbitrary units). Approximately, no difference is observed between the two curves of this figure. Note that the units of PSMAD and receptor concentration levels are arbitrary According to the RF- model formulation, the negative feedback term is directly proportional to −((S)3)2, while the positive feedback term changes in proportion to \(-\frac {1}{(\text {(S)}_{\text {3}})^{2}}\). As a result, negative feedback dominates the positive feedback at high (S)3 concentrations (e.g. at the peak value of PSMAD) and decreases the PSMAD level until it reaches a stable state (see Fig. 7 and section "Feedback Loops and Time-Delays in the RF- Model" in "Additional file 1"). The results of the simulations with different initial SMAD concentrations are shown in Fig. 8. The PSMAD levels in these simulations are sensitive to the TGF-β concentration. As expected the PSMAD levels increase with the SMAD levels until they saturate. Decreasing the TGF-β value suppressed the signal at all SMAD concentrations. At the higher concentration of TGF-β, PSMAD levels reach the saturation level at lower SMAD concentration i.e. 0.1 (Fig. 8, TGF-β=5) compared to 0.4 in Fig. 8 when TGF-β=2. The difference in the saturation levels of the two curves in Fig. 
8 is due to the different steady-state levels of PSMAD time-course, stimulated by different TGF-β concentrations. Furthermore, as the initial concentration of SMAD increases, the RF- model reaches its steady-state later (due to damped oscillation of PSMAD level, Additional file 2: Figure S5). The effects of SMAD concentration on the long-term response of PSMAD (500-2500 min). The PSMAD steady-state levels are calculated for two distinct ligand concentrations TGF-β = 2 and TGF-β = 5 (arbitrary units). The steady-state level of total PSMAD rises higher when TGF-β=5 than when TGF-β=2 due to the increase in the ligand concentration. Note that the units of PSMAD and SMAD concentration levels are arbitrary Comparison of simulation results with experimental data Our simplified TGF-β signalling RF- model was tested experimentally using PSMAD data from mouse embryonic fibroblasts. The predicted results from the model are compared to two different experimental data sets in Fig. 9. The difference between the experimental data and the simulation curves can be explained by the errors associated with the experiments and lack of experimental data to parameterize the model. The simulation results are in good agreement with the experimental results from the response to TGF-β signalling in normal cells (Fig. 9 a and b). PSMAD2 time-course validation with experimental data sets from a wild type and b Gp13 0F/F MEFs. Different colors of dots specify different experiments. The curves represent the model prediction of PSMAD2 dynamics. The model parameters are changed in the curve of Fig. 9 b so that the steady-state level of PSMAD2 concentration is lower and its peak is higher The experimental data from wild type MEFs and the model prediction curve for total PSMAD2 concentration level are plotted in Fig. 9 a. Similarly, in Fig. 9 b the simplified model is plotted with the experimental data set from Gp13 0F/F MEFs [53]. In order to achieve the best fit in Fig. 9 b the parameters of the RF- model had to be adjusted. It has been reported that the level of the SMAD7 concentration is higher in Gp13 0F/F MEFs due to their gene modification [53]. As is shown in Fig. 9 b, the steady-state level of PSMAD2 is lower than in Fig. 9 a. Note that the error bars are smaller in Fig. 9 b for the longer time points. The importance of TGF-β signalling in the progression of cancer heralded in a new era of cancer cell biology research [73, 79–81]. Several models for TGF-β signalling have now been proposed [16–22]. In each case these models attempted to study the responses of the intracellular signalling reactions to different concentrations of TGF-β. In one of the most comprehensive mathematical models Zi et al. [22] predicted that ligand depletion contributed to the long-term response levels of PSMAD. Zi et al. suggested that at higher concentrations of TGF-β, there was no depletion from the medium and as a result there was a transfer from a transient to a switch-like response to the TGF-β concentration. However, they also noted the possibility that negative feedback mechanisms might also contribute to the switch-like response [22]. Our TGF-β model uses fewer reactions than Zi et al. [22], however our model represents the behaviour of the critical components that control the responses to TGF-β stimulation over both 80 min and 8 hr time-frames. It is known that time-delayed positive and negative coupled feedbacks can create robust stable signalling [32, 33, 82, 83]. 
In order to explore the critical role of feedback loops in the TGF-β signalling networks we introduced a model where the steady-state was dependent on positive and negative feedback loops. One of the objectives of our study was to design a mathematical model that is applicable to both normal cells and cancer cells. In many early cancer cells the number of TGF-β receptors decreases significantly [68–70], thus TGF-β signalling is down-regulated. The time-dependent ligand depletion model of Zi et al. [22] does not simulate this decrease in the receptor levels. Our simulation results show that the PSMAD response of the cells is less sensitive to TGF-β stimulation at low receptor concentration. This is consistent with TGF-β signal suppression in early cancer cell lines. Our simulations also indicate that reduction in SMAD levels will also cause a global suppression of signalling in response to TGF-β. Due to mutations of SMADs, many early cancers are likely to have reduced levels of TGF-β signalling [67]. These results are consistent with the picture of early-stage tumors being associated with the loss of TGF-β sensitivity and the decrease of TGF-β receptor expression [84]. TGF-β signal transduction can be stimulated in late-stage tumors ("The TGF-β Paradox" [85, 86] i.e. early-stage cancers are less sensitive to TGF-β inhibitor, whereas many late-stage cancers are stimulated by TGF-β, either directly through increased receptor levels or indirectly by effects on the micro-environment of the cells). Our model with feedback loops produces results consistent with both roles of TGF-β in tumorigenesis. In the late-stage tumors the increased responsiveness to TGF-β could occur via increased production rates for receptors or SMADs. According to our model predictions, the overshoot peak of PSMAD in response to TGF-β is higher in early tumors and the steady-state levels of PSMAD are lower generally, while in late tumors both steady-state and peak levels are higher than normal cells. Additionally, the difference between the PSMAD peak and steady-state levels is less in late-stage tumors, so the TGF-β signalling would be on for longer times. This work can be used as a guide for future experimental research on TGF-β effects on tumor progression. It must be emphasized that the late-stage tumour responses must be influenced by other genetic changes which change the response to TGF-β from inhibition to stimulation. In future studies it will be important to add other pathways which can link the TGF-β signalling to anti-mitotic processes, migration processes or even increases proliferation. This model provides the basis for predicting the effects of TGF-β on the signalling processes in cells with different levels of TGF-β receptors or SMADs. By considering of a model where coupled, positive-negative feedback loops modulate TGF-β signalling switching responses can be observed without depletion of TGF-β [22]. TGF-β signal transduction can be studied more precisely using control theory analysis including system identification methods [87, 88]. The experimental data set and the kinetic rates used to set the initial parameters for the model were taken from the literature [16–22]. For the initial conditions, estimation of parameter values and the interpretation of some experimental data, we have benefitted from the model proposed by Zi et al. [22]. The values for all parameters are documented in the Additional file 1: Tables S1, S2 and S3. 
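As an indication of how Hill coefficients such as those quoted for Fig. 5 can be extracted from simulated dose-response points, a short curve-fitting sketch is given below (hypothetical data points and a Python/scipy fit; the study itself used the MATLAB curve fitting toolbox).

# Fit a Hill equation to (hypothetical) dose-response points: total PSMAD at a
# fixed readout time versus TGF-beta concentration (arbitrary units).
import numpy as np
from scipy.optimize import curve_fit

def hill(L, top, K, n):
    return top*L**n/(K**n + L**n)

tgfb  = np.array([0.1, 0.3, 1.0, 2.0, 3.0, 5.0, 10.0])
psmad = np.array([0.01, 0.02, 0.05, 0.15, 0.27, 0.30, 0.30])

popt, _ = curve_fit(hill, tgfb, psmad, p0=[0.3, 2.0, 2.0], bounds=(0, np.inf))
print("top = %.3f, K = %.3f, Hill coefficient n = %.2f" % tuple(popt))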
Computer modelling and simulations
The programs used for these simulations were PYTHON 2.7 and MATLAB 7.10. The curve fitting toolbox of MATLAB was used for fitting the Hill equation in Fig. 5 and Additional file 2: Figure S3, and for deriving the Hill coefficients.

Mathematical and biochemical analysis
The biochemical kinetics, equilibrium analysis, feedback analysis, reduction analysis using the rapid equilibrium assumption, time-delayed analysis, asymptotic expansions and sensitivity analysis (refer to "Additional file 1") have been performed on the model [49]. We have used Western blot analysis for our quantitation. Western blot is only a semi-quantitative method; absolute values are not measured. As a result, throughout the manuscript and Figures, the units for protein concentrations are arbitrary. N.B. each species in Eq. 4 is calculated in its compartment volume. The volume corrections for all species of Eq. 4 are hidden in the coefficients of the corresponding terms and are not explicitly shown in the equations, e.g. \({k_{n}^{-}}\) and \({k_{n}^{+}}\) include the V n /V c and V c /V n volume correction terms, respectively.

Cell culture and cell lysis
Mouse embryonic fibroblasts (MEFs) were isolated from day 13 to 15 embryos. Wild type MEFs, SV40-immortalized MEFs (Simian vacuolating virus 40) and Gp130 F/F MEFs [53] were cultured in DMEM containing 15% FCS. The cells were trypsinized and washed with DMEM + 15% FCS before plating. Passage 3 cells were seeded in 60 mm plates at 1 × 10^6 MEFs/well for the 0-4 hour, 0.5 × 10^6 MEFs/well for the 24 hour and 0.25 × 10^6 MEFs/well for the 48 hour treatments with 5 ng/ml TGF-β, respectively. After washing twice with cold PBS, cells were lysed in ice-cold 200 μl RIPA lysis buffer, containing 1 M Tris/HCl, 0.5 M EDTA, 5 M NaCl, 10% Na Doc (sodium deoxycholate), 10% TX-100, 10% SDS, proteinase inhibitor 100 × and H2O. The cell lysates were passed through a 27 G needle 5 times, then incubated on ice for 20 min. After incubation the samples were spun at 13,000 rpm for 30 min at 4 °C. The supernatant was transferred to new tubes: 20 μl of sample was used for the BCA protein assay (Sigma kit B9643); 20 μl of 5 × sample buffer was added to 80 μl of sample, the samples were heated at 95 °C for 10 min and analysed by SDS-PAGE. Novex NuPAGE® 4-12% Bis-Tris (Life Technologies NP0335 Box) gels were used to analyse the sample lysates from each time point. The SMAD7 antibody was obtained from Santa Cruz Biotechnology and was used at 1:1000 in 3% BSA-TBS-T. The PSMAD2 antibody (rabbit polyclonal anti-phospho-Smad2 antibody, 1:1000 for Western blot) was a gift from Prof. Peter ten Dijke (Leiden University Medical Center, Netherlands). β-tubulin, actin, Lamin B1 or transferrin receptor were used as loading controls depending on the protein being analysed. For antibody detection, the proteins were transferred onto a nitrocellulose membrane using the iBlot 2 gel transfer device (Life Technologies) and the membranes were scanned using the Odyssey infrared scanner (LI-COR).

Protein quantitation
The Western blot images were quantitated using ImageJ 1.49p. The signals from each protein were normalised using the signal from each loading control.

Massagué J. The transforming growth factor- β family. Annu Rev Cell Biol. 1990; 6:597–641. Clarke DC, Liu X. Decoding the quantitative nature of TGF- β/Smad signaling. Trends Cell Biol. 2008; 18(9):430–42. Shi Y, Massagué J. Mechanisms of TGF- β signaling from cell membrane to the nucleus. Cell. 2003; 113(6):685–700. Feng XH, Derynck R.
Specificity and versatility in TGF- β signaling through Smads. Annu Rev Cell Dev Biol. 2005; 21:659–93. Jenkins G. The role of proteases in transforming growth factor- β activation. Int J Biochem Cell Biol. 2008; 40(6):1068–78. Massagué J, Seoane J, Wotton D. Smad transcription factors. Genes Dev. 2005; 19(23):2783–810. ten Dijke P, Miyazono K, Heldin CH. Signaling inputs converge on nuclear effectors in TGF- β signaling. Trends Biochem Sci. 2000; 25(2):64–70. Massagué J. TGF- β signal transduction. Ann Rev Biochem. 1998; 67(1):753–91. Nicolás FJ, Hill CS. Attenuation of the TGF- β-Smad signaling pathway in pancreatic tumor cells confers resistance to TGF- β-induced growth arrest. Oncogene. 2003; 22(24):3698–711. Jenkins BJ, Grail D, Nheu T, Najdovska M, Wang B, Waring P, Inglese M, McLoughlin RM, Jones SA, Topley N, Baumann H, Judd LM, Giraud AS, Boussioutas A, Zhu HJ, Ernst M. Hyperactivation of Stat3 in gp130 mutant mice promotes gastric hyperproliferation and desensitizes TGF- β signaling. Nat Med. 2005; 11(8):845–52. Massagué J, Blain SW, Lo RS. TGF- β signaling in growth control, cancer, and heritable disorders. Cell. 2000; 103(2):295–309. Bachman KE, Park BH. Duel nature of TGF- β signaling: tumor suppressor vs. tumor promoter. Curr Opin Oncol. 2005; 17(1):49–54. Leight JL, Wozniak MA, Chen S, Lynch ML, Chen CS. Matrix rigidity regulates a switch between TGF- β1-induced apoptosis and epithelial-mesenchymal transition. Mol Biol Cell. 2012; 23(5):781–91. Ikushima H, Miyazono K. Biology of transforming growth factor- β signaling. Curr Pharm Biotechnol. 2011; 12(12):2099–107. Attisano L, Wrana JL. Signal transduction by the TGF- β superfamily. Sci Signal. 2002; 296(5573):1646. Melke P, Jönsson H, Pardali E, ten Dijke P, Peterson C. A rate equation approach to elucidate the kinetics and robustness of the TGF- β pathway. Biophys J. 2006; 91(12):4368–80. Vilar JMG, Jansen R, Sander C. Signal Processing in the TGF- β Superfamily Ligand-Receptor Network. PLoS Comput Biol. 2006; 2(1):3. Clarke DC, Brown ML, Erickson RA, Shi Y, Liu X. Transforming growth factor beta depletion is the primary determinant of Smad signaling kinetics. Mol Cell Biol. 2009; 29(9):2443–55. Zi Z, Klipp E. Constraint-Based Modeling and Kinetic Analysis of the Smad Dependent TGF- β Signaling Pathway. PLoS ONE. 2007; 2(9):936. Schmierer B, Tournier AL, Bates PA, Hill CS. Mathematical modeling identifies Smad nucleocytoplasmic shuttling as a dynamic signal-interpreting system. Proc Natl Acad Sci. 2008; 105(18):6608–613. Chung SW, Miles FL, Sikes RA, Cooper CR, Farach-Carson MC, Ogunnaike BA. Quantitative modeling and analysis of the transforming growth factor beta signaling pathway. Biophys J. 2009; 96(5):1733–50. Zi Z, Feng Z, Chapnick DA, Dahl M, Deng D, Klipp E, Moustakas A, Liu X. Quantitative analysis of transient and sustained transforming growth factor- β signaling dynamics. Mol Syst Biol. 2011; 7:492. Zi Z, Klipp E. SBML-PET: a Systems Biology Markup Language-based parameter estimation tool. Bioinformatics. 2006; 22(21):2704–5. Bachmann J, Raue A, Schilling M, Becker V, Timmer J, Klingmuller U. Predictive mathematical models of cancer signalling pathways. J Intern Med. 2012; 271(2):155–65. Nakao A, Imamura T, Souchelnytskyi S, Kawabata M, Ishisaki A, Oeda E, Tamaki K, Hanai J, Heldin CH, Miyazono K, ten Dijke P. TGF- β receptor-mediated signalling through Smad2, Smad3 and Smad4. EMBO J. 1997; 16(17):5353–62. Li R, Chung AC, Dong Y, Yang W, Zhong X, Lan HY. 
The microRNA miR-433 promotes renal fibrosis by amplifying the TGF- β/Smad3-Azin1 pathway. Kidney Int. 2013; 84(6):1129–44. Kahana C. Regulation of cellular polyamine levels and cellular proliferation by antizyme and antizyme inhibitor. Essays Biochem. 2009; 46:47–62. Liu L, Santora R, Rao JN, Guo X, Zou T, Zhang HM, Turner DJ, Wang JY. Activation of TGF- β-Smad signaling pathway following polyamine depletion in intestinal epithelial cells. Am J Physiology-Gastrointestinal Liver Physiol. 2003; 285(5):1056–67. Rao JN, Li L, Bass BL, Wang JY. Expression of the TGF- β receptor gene and sensitivity to growth inhibition following polyamine depletion. Am J Physiology-Cell Physiol. 2000; 279(4):1034–44. Patel AR, Li J, Bass BL, Wang JY. Expression of the transforming growth factor- β gene during growth inhibition following polyamine depletion. Am J Physiology-Cell Physiol. 1998; 275(2):590–8. Shi F, Zhou P, Wang R. Coupled positive feedback loops regulate the biological behavior. IEEE 2012;169–73. Ferrell JE, Ha SH, et al. Ultrasensitivity part II: multisite phosphorylation, stoichiometric inhibitors, and positive feedback. Trends Biochem Sci. 2014; 39(11):556–69. Ferrell JE, Ha SH. Ultrasensitivity part I: Michaelian responses and zero-order ultrasensitivity. Trends Biochem Sci. 2014; 39(10):496–503. Mitrophanov AY, Groisman EA. Positive feedback in cellular control systems. Bioessays. 2008; 30(6):542–55. Chang DE, Leung S, Atkinson MR, Reifler A, Forger D, Ninfa AJ. Building biological memory by linking positive feedback loops. Proc Natl Acad Sci. 2010; 107(1):175–80. Kleeff J, Ishiwata T, Maruyama H, Friess H, Truong P, Büchler M, Falb D, Korc M. The TGF- β signaling inhibitor Smad7 enhances tumorigenicity in pancreatic cancer. Oncogene. 1999; 18(39):5363–372. Wagner J, Keizer J. Effects of rapid buffers on Ca2+ diffusion and Ca2+ oscillations. Biophys J. 1994; 67(1):447. Massagué J, Gomis RR. The logic of tgf β signaling. FEBS Lett. 2006; 580(12):2811–20. Lee MK, Pardoux C, Hall MC, Lee PS, Warburton D, Qing J, Smith SM, Derynck R. Tgf- β activates erk map kinase signalling through direct phosphorylation of shca. EMBO J. 2007; 26(17):3957–67. Mu Y, Gudey SK, Landström M. Non-smad signaling pathways. Cell Tissue Res. 2012; 347(1):11–20. Wang X, Li X, Ye L, Chen W, Yu X. Smad7 inhibits tgf- β1-induced mcp-1 upregulation through a mapk/p38 pathway in rat peritoneal mesothelial cells. Int Urol Nephrol. 2013; 45(3):899–907. Yu L, Hébert MC, Zhang YE. Tgf- β receptor-activated p38 map kinase mediates smad-independent tgf- β responses. EMBO J. 2002; 21(14):3749–59. Wieser R, Wrana J, Massagué J. GS domain mutations that constitutively activate T beta RI, the downstream signaling component in the TGF- β receptor complex. EMBO J. 1995; 14(10):2199. Heldin CH, Miyazono K, Ten Dijke P. TGF- β signalling from cell membrane to nucleus through SMAD proteins. Nature. 1997; 390(6659):465–71. Hayes S, Chawla A, Corvera S. TGF β receptor internalization into EEA1-enriched early endosomes role in signaling to Smad2. J Cell Biol. 2002; 158(7):1239–49. Massagué J, Kelly B. Internalization of transforming growth factor- β and its receptor in BALB/c 3T3 fibroblasts. J Cell Physiol. 1986; 128(2):216–22. Zhang Y, Feng XH, Derynck R. Smad3 and Smad4 cooperate with c-Jun/c-Fos to mediate TGF- β-induced transcription. Nature. 1998; 394(6696):909–13. Vincent T, Neve EP, Johnson JR, Kukalev A, Rojo F, Albanell J, Pietras K, Virtanen I, Philipson L, Leopold PL, et al. 
A SNAIL1–SMAD3/4 transcriptional repressor complex promotes TGF- β mediated epithelial–mesenchymal transition. Nat Cell Biol. 2009; 11(8):943–50. Fall CP. Computational Cell Biology Interdisciplinary Applied Mathematics; V. 20. New York: Springer-Verlag; 2002. Luo K, Lodish H. Signaling by chimeric erythropoietin-TGF- β receptors: homodimerization of the cytoplasmic domain of the type I TGF- β receptor and heterodimerization with the type II receptor are both required for intracellular signal transduction. EMBO J. 1996; 15(17):4485. Ebner R, Chen RH, Shum L, Lawler S, Zioncheck TF, Lee A, Lopez AR, Derynck R. Cloning of a type I TGF- β receptor and its effect on TGF- β binding to the type II receptor. Science. 1993; 260(5112):1344–8. Wu JW, Hu M, Chai J, Seoane J, Huse M, Li C, Rigotti DJ, Kyin S, Muir TW, Fairman R, et al. Crystal structure of a phosphorylated Smad2: Recognition of phosphoserine by the MH2 domain and insights on Smad function in TGF- β signaling. Mol Cell. 2001; 8(6):1277–89. Jenkins BJ, Grail D, Nheu T, Najdovska M, Wang B, Waring P, Inglese M, McLoughlin RM, Jones SA, Topley N, et al. Hyperactivation of Stat3 in gp130 mutant mice promotes gastric hyperproliferation and desensitizes TGF- β signaling. Nat Med. 2005; 11(8):845–52. Budi EH, Xu J, Derynck R. Regulation of TGF- β Receptors. Methods in molecular biology (Clifton, NJ). 2016; 1344:1. Asano Y, Ihn H, Yamane K, Kubo M, Tamaki K. Impaired smad7-smurf–mediated negative regulation of tgf- β signaling in scleroderma fibroblasts. J Clin Investig. 2004; 113(2):253. Kang JS, Liu C, Derynck R. New regulatory mechanisms of TGF- β receptor function. Trends Cell Biol. 2009; 19(8):385–94. Massagué J, Attisano L, Wrana JL. The TGF- β family and its composite receptors. Trends Cell Biol. 1994; 4(5):172–8. Groppe J, Hinck CS, Samavarchi-Tehrani P, Zubieta C, Schuermann JP, Taylor AB, Schwarz PM, Wrana JL, Hinck AP. Cooperative assembly of TGF- β superfamily signaling complexes is mediated by two disparate mechanisms and distinct modes of receptor binding. Mol Cell. 2008; 29(2):157–68. Kingsley DM. The TGF- β superfamily: new members, new receptors, and new genetic tests of function in different organisms. Genes Dev. 1994; 8(2):133–46. Gunawardena J. Time-scale separation–michaelis and menten's old idea, still bearing fruit. FEBS J. 2014; 281(2):473–88. von Gersdorff G, Susztak K, Rezvani F, Bitzer M, Liang D, Böttinger EP. Smad3 and smad4 mediate transcriptional activation of the human smad7 promoter by transforming growth factor β. J Biol Chem. 2000; 275(15):11320–6. Yan X, Liao H, Cheng M, Shi X, Lin X, Feng XH, Chen YG. Smad7 protein interacts with receptor-regulated smads (r-smads) to inhibit transforming growth factor- β (tgf- β)/smad signaling. J Biol Chem. 2016; 291(1):382–92. Ebisawa T, Fukuchi M, Murakami G, Chiba T, Tanaka K, Imamura T, Miyazono K. Smurf1 interacts with transforming growth factor- β type I receptor through Smad7 and induces receptor degradation. J Biol Chem. 2001; 276(16):12477–80. Kavsak P, Rasmussen RK, Causing CG, Bonni S, Zhu H, Thomsen GH, Wrana JL. Smad7 binds to Smurf2 to form an E3 ubiquitin ligase that targets the TGF β receptor for degradation. Mol Cell. 2000; 6(6):1365–75. Khalil HK, Vol. 3. Nonlinear Systems. New Jersey: Prentice-Hall; 1996. Inman GJ, Nicolás FJ, Hill CS. Nucleocytoplasmic shuttling of Smads 2, 3, and 4 permits sensing of TGF- β receptor activity. Mol Cell. 2002; 10(2):283–94. 
Fleming NI, Jorissen RN, Mouradov D, Christie M, Sakthianandeswaren A, Palmieri M, Day F, Li S, Tsui C, Lipton L, et al. SMAD2, SMAD3 and SMAD4 mutations in colorectal cancer. Cancer Res. 2013; 73(2):725–35. Wakefield LM, Smith DM, Masui T, Harris CC, Sporn MB. Distribution and modulation of the cellular receptor for transforming growth factor- β. J Cell Biol. 1987; 105(2):965–75. Laiho M, Weis M, Massagué J. Concomitant loss of transforming growth factor (TGF)- β receptor types I and II in TGF- β-resistant cell mutants implicates both receptor types in signal transduction. J Biol Chem. 1990; 265(30):18518–24. Kimchi A, Wang XF, Weinberg RA, Cheifetz S, Massagué J. Absence of TGF- β receptors and growth inhibitory responses in retinoblastoma cells. Science. 1988; 240(4849):196–9. Yu M, Trobridge P, Wang Y, Kanngurn S, Morris S, Knoblaugh S, Grady W. Inactivation of TGF- β signaling and loss of PTEN cooperate to induce colon cancer in vivo. Oncogene. 2014; 33(12):1538–47. Liu RY, Zeng Y, Lei Z, Wang L, Yang H, Liu Z, Zhao J, Zhang HT. Jak/stat3 signaling is required for tgf- β-induced epithelial-mesenchymal transition in lung cancer cells. Int J Oncol. 2014; 44(5):1643–51. Pickup M, Novitskiy S, Moses HL. The roles of TGF [beta] in the tumour microenvironment. Nat Rev Cancer. 2013; 13(11):788–99. Giampieri S, Manning C, Hooper S, Jones L, Hill CS, Sahai E. Localized and reversible tgf β signalling switches breast cancer cells from cohesive to single cell motility. Nat Cell Biol. 2009; 11(11):1287–96. Langenskiöld M, Holmdahl L, Falk P, Angenete E, Ivarsson ML. Increased tgf-beta1 protein expression in patients with advanced colorectal cancer. J Surg Oncol. 2008; 97(5):409–15. Shariat SF, Shalev M, Menesses-Diaz A, Kim IY, Kattan MW, Wheeler TM, Slawin KM. Preoperative plasma levels of transforming growth factor beta1 (tgf- β1) strongly predict progression in patients undergoing radical prostatectomy. J Clin Oncol. 2001; 19(11):2856–64. Xiong B, Gong LL, Zhang F, Hu MB, Yuan HY. Tgf beta˜ 1 expression and angiogenesis in colorectal cancer tissue. World J Gastroenterol. 2002; 8(3):496–8. Xu J, Acharya S, Sahin O, Zhang L, Lowery FJ, Sahin AA, Zhang XH-F, Hung MC, Yu D. Abstract lb-202: 14-3-3 ζ turns tgf- β's function from tumor suppressor to metastasis promoter in breast cancer by contextual changes of smad partners from p53 to gli2. Cancer Res. 2015; 75(15 Supplement):202. Santibanez JF, Quintanilla M, Bernabeu C. TGF- β/TGF- β receptor system and its role in physiological and pathological conditions. Clin Sci. 2011; 121(6):233–51. Anzano M, Roberts A, Smith J, Sporn M, De Larco J. Sarcoma growth factor from conditioned medium is composed of both type α and type β transforming growth factors. Proc Natl Acad Sci U S A. 1983; 80:6264–8. De Larco JE, Todaro GJ. Growth factors from murine sarcoma virus-transformed cells. Proc Natl Acad Sci. 1978; 75(8):4001–5. Wagner J, Ma L, Rice J, Hu W, Levine A, Stolovitzky G. p53–Mdm2 loop controlled by a balance of its feedback strength and effective dampening using ATM and delayed feedback. IEE Proc Syst Biol. 2005; 152(3):109–18. Wagner J, Stolovitzky G. Stability and time-delay modeling of negative feedback loops. Proc IEEE. 2008; 96(8):1398–410. Duffy I, Varacallo P, Klerk H, Hawker J. Endothelial and cancer cells have differing amounts of tgf beta receptors involved in angiogenesis. FASEB J. 2015; 29(1 Supplement):554–4. Hansson GK, Libby P. The immune response in atherosclerosis: a double-edged sword. Nat Rev Immunol. 2006; 6(7):508–19. 
Akhurst RJ, Derynck R. TGF- β signaling in cancer–a double-edged sword. Trends Cell Biol. 2001; 11(11):44–51. Chen BS, Wu CC. On the calculation of signal transduction ability of signaling transduction pathways in intracellular communication: systematic approach. Bioinformatics. 2012; 28(12):1604–11. Choi S. Systems Biology Approaches: Solving New Puzzles in a Symphonic Manner. Systems Biology for Signaling Networks. New York: Springer; 2010, pp. 3–11. Kavsak P, Rasmussen RK, Causing CG, Bonni S, Zhu H, Thomsen GH, Wrana JL. Smad7 Binds to Smurf2 to Form an E3 Ubiquitin Ligase that Targets the TGFbeta Receptor for Degradation. Mol Cell. 2000; 6(6):1365–75. Di Guglielmo GM, Le Roy C, Goodfellow AF, Wrana JL. Distinct endocytic pathways regulate TGF- β receptor signalling and turnover. Nat Cell Biol. 2003; 5(5):410–21. SK was supported by a Melbourne International Research Scholarship and a Melbourne International Fee Remission Scholarship from the University of Melbourne. AWB was supported by the Ludwig Institute for Cancer Research and NHMRC Program grant Number 487922. All the data supporting your findings is contained within the manuscript and Additional file 1. All authors contributed to the analysis of the literature and development of the conceptual model. SK and JW developed and analysed the mathematical models. SK, AWB designed and SK conducted computational experiments. SK and AWB wrote the paper. SK, AWB, CWT, JHM and HJZ designed the experiments. SK performed the experiments. All authors were involved in drafting the manuscript or revising it critically for important intellectual content. All authors read and approved the final manuscript. Electrical and Electronic Engineering Department, The University of Melbourne, Parkville, Victoria, 3010, Australia Shabnam Khatibi & Jonathan H. Manton Department of Surgery (RMH), The University of Melbourne, Parkville, Victoria, 3050, Australia Hong-Jian Zhu & Antony W. Burgess IBM Research Collaboratory for Life Sciences–Melbourne, Victorian Life Sciences Computation Initiative, 87 Grattan Street, Victoria, 3010, Australia IBM Research–Australia, 204 Lygon Street Level 5, Carlton, Victoria, 3053, Australia The Walter and Eliza Hall Institute of Medical Research (WEHI), 1G Royal Parade, Parkville, Victoria, 3052, Australia Shabnam Khatibi, Chin Wee Tan & Antony W. Burgess Department of Medical Biology, The University of Melbourne, 1G Royal Parade, Parkville, Victoria, 3052, Australia Chin Wee Tan & Antony W. Burgess Shabnam Khatibi Hong-Jian Zhu Chin Wee Tan Jonathan H. Manton Antony W. Burgess Correspondence to Antony W. Burgess. Mathematical Model for TGF-β Signalling, which provides more information about the model design and model reduction steps. Feedback Loops and Time-Delays in the RF- Model, which provides more information about the delayed positive and negative feedback loops and their effects on the signalling system. Parameters for TGF-β Signalling Model, which provides tables of parameters involved in the signalling model [89, 90]. (PDF 152 kb) Which provides information about extra figures (Figures S1-S7) of model simulation and experimental data that help understanding the signalling system. Figure S1 Dynamics of feedback loops in RF- model. A) The negative feedback loop time-course for a time-delay τ N =20 minutes B) The positive feedback loop time-course for a time-delay τ P =120 minutes. N changes proportionally with (S)3 as ((S)3) 2 while P is inversely proportional to (S)3 as 1/(1+((S)3)2). 
Figure S2 The effects of feedback loops on the TGF-βreceptor concentration dynamics. The time-delays are either for both τ N and τ P =45 minutes or τ P =120 and τ N =20 minutes. The effects of individual feedback loops on the receptor levels are studied. The peaks and the valleys are due to positive and negative feedback loops respectively. The time-delays shift the peaks and valleys in time. The strength of the feedback loops change the amplitude of the peaks and valleys. Figure S3 The predicted effects of different concentrations of TGF-β on PSMAD levels when the negative feedback loop influences PC only. These effects are shown for the short-term (50 min) and long-term (500 min) responses of the TGF-βsignalling system. Note that the dots are derived from model simulation and the curves show the Hill equations which are fitted by MATLAB. Figure S4 PSMAD responses to changes in TGF-β concentration at different simulation times. The Hill coefficient increases with the increase in the simulation time. The PSMAD response does not switch before 200 min, where the PSMAD level starts saturating (Fig. 4). The curves of 50 min and 500 min simulation times correspond to the curves in Fig. 5. Figure S5 PSMAD time-course for different production rates of SMAD The SMAD production rate (v S) determines the steady-state of the PSMAD response of RF- model. Higher v S SMAD concentration during the signalling. The PSMAD time-course experiences damped oscillation for high v S. The oscillations appear to delay the system reaching its steady-state. Figure S6 The representative Western blots for PSMAD2 analysed in Fig. 9. A)wild type MEFs B)Gp130 F/F MEFs. In each panel, the top bands show the PSMAD2 signals of the double-stimulation experiment. The bottom bands show the actin signals for loading control to which the PSMAD2 levels are normalised to produce the results shown in Fig. 9. Figure S7 The validation of the simplified model with experimental data. The dots show the level of PSMAD2 concentration obtained from experiment and the curves specify the model predictions. A) PSMAD2 time-course for 0-1h on SV40-immortalised MEFs stimulated with TGF-β and its corresponding blot B) PSMAD2 time-course for 0-4h on SV40-immortalised MEFs stimulated with TGF-β and its corresponding blot C) PSMAD2 time-course for 0-4h on wild type MEFs stimulated with TGF-β and its corresponding blot. (PDF 1750 kb) Khatibi, S., Zhu, HJ., Wagner, J. et al. Mathematical model of TGF-βsignalling: feedback coupling is consistent with signal switching. BMC Syst Biol 11, 48 (2017). https://doi.org/10.1186/s12918-017-0421-5 TGF-β signalling Feedback coupling Time-delay Rapid equilibrium assumption Signal switching Methods, software and technology
Low-temperature carrier dynamics in MBE-grown InAs/GaAs single- and multi-layered quantum dots investigated via photoluminescence and terahertz time-domain spectroscopy

Alexander E. De Los Reyes,1,* John Daniel Vasquez,1,2 Hannah R. Bardolaza,1,2 Lorenzo P. Lopez, Jr.,1,2 Che-Yung Chang,3 Armando S. Somintac,1 Arnel A. Salvador,1 Der-Jun Jang,3 and Elmer S. Estacio1,2
1National Institute of Physics, University of the Philippines Diliman, Quezon City 1101, Philippines
2Materials Science and Engineering Program, University of the Philippines Diliman, Quezon City 1101, Philippines
3Department of Physics, National Sun-Yat-Sen University, Kaohsiung 80424, Taiwan, ROC
*Corresponding author: [email protected]

Opt. Mater. Express 10(1), 178-186 (2020), https://doi.org/10.1364/OME.380909
Original Manuscript: October 17, 2019; Revised Manuscript: December 7, 2019; Manuscript Accepted: December 9, 2019

Abstract
The photocarrier dynamics in molecular beam epitaxy (MBE)-grown single- (SLQD) and multi-layered (MLQD) InAs/GaAs quantum dots were studied. Photoluminescence (PL) spectroscopy has shown that the MLQD has a more uniform QD size distribution as compared to the bimodal SLQD. Correlation between PL and THz-TDS has shown that photocarrier transport is more favored in the MLQD owing to this uniform QD size distribution, resulting in higher THz emission. The THz emission from the QD samples was found to be proportional to temperature. A drift-related photocarrier transport mechanism is proposed, wherein photocarriers generated in the QDs are accelerated by an interface electric field.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction
InAs/GaAs quantum dots (QDs) are zero-dimensional artificial heterostructures that have found applications as far-infrared emitters and detectors [1,2]. These QDs have been attractive due to their relative ease of integration with the well-established GaAs device fabrication technology. In order to design and optimize the optoelectronic device, the optical and luminescence properties of the QD structure are usually studied. Among various optical characterization techniques, photoluminescence (PL) spectroscopy is very popular since it is contactless and non-destructive [3].
In InAs/GaAs QDs, the origin of these PL emissions can be due to ground state (GS), excited state (ES), or defect-related transitions [4–6]. Similarly, terahertz time-domain spectroscopy (THz-TDS) is becoming a valuable optical characterization technique for investigating carrier dynamics and transport in semiconductor heterostructures [7]. The ultrafast photoexcitation of semiconductors via illumination of femtosecond laser pulses creates a transient photocurrent, resulting in the generation of THz radiation [8]. The measured THz radiation can provide information on different semiconductor material properties such as doping type, surface electric field, band offset, and dipole orientation [9–12]. Moreover, semiconductor heterostructures such as quantum wells (QWs) and QDs have the potential to serve as viable active components of THz-TDS systems [13,14]. The development of QD photoconductive antenna (PCA) devices would also result in cheaper and compact THz-TDS systems since it is compatible with the telecommunication wavelength technology [14]. The incorporation of nanoparticles and nanostructures have also been previously reported to enhance THz emission in semiconductor devices [15]. These reports primarily attribute the enhancement to plasmonics effects. In this work, the QD structures, themselves, are the THz emitters owing to transient photocurrent generation . PL spectroscopy was performed to investigate the photocarrier recombination and optical properties of MBE-grown single - (SLQD) and multi-layered (MLQD) InAs/GaAs QD samples. Temperature- and excitation power-dependent PL spectroscopy were used to distinguish between the possible origins of the PL transitions. THz-TDS was done to study photocarrier dynamics and transport in the QD samples in comparison with bare semiconductor surfaces such as p-InAs and SI-GaAs. Using temperature-dependent THz-TDS spectroscopy, we elucidate the THz radiation mechanism from the QD samples. We also studied the correlation between PL and THz to assess recombination versus photocarrier transport. These results could provide insights in the design and optimization of future QD-based THz optoelectronic devices. 2. Experimental details The InAs/GaAs SLQD and MLQD samples were grown via Stranski-Krastanov (SK) Growth Technique using a Riber 32P molecular beam epitaxy (MBE) facility. A semi-insulating gallium arsenide (SI-GaAs) wafer was used as the substrate. Initially, a 1 µm undoped GaAs buffer was deposited on the substrate at $T_s =$ 640 °C. It was then followed by the deposition of three pairs of GaAs/AlAs superlattice and a 300 Å undoped GaAs layer. For the SLQD sample, the growth of the InAs/GaAs QD layer was initiated by the deposition of a single InAs wetting layer. After a certain critical thickness, island formation resulted to the formation of pyramidal QD structures. Then, a 300 Å undoped GaAs layer was grown on top of the QD layer. For the MLQD, similar steps were taken using eight InAs/GaAs QD layers, with each QD layer separated by a 330 Å undoped GaAs layer. Lastly, a 300 Å n-GaAs cap was deposited on top. Based from previous AFM measurements, the dimensions of our MBE-grown QD samples have a typical height of 6 nm-9 nm, diameter of 20 nm-40 nm, and density of $1-3\times 10^{10}\mu m^{-2}$ [16–18]. Photoluminescence spectroscopy was performed using an 808 nm Mai Tai laser as the excitation source. The samples were loaded on the cold finger of a closed-cycle helium (He) cryostat. 
The PL signal was collected and focused into the entrance slits of a Triax 550 spectrometer equipped with a silicon (Si) photodiode. Temperature-dependent PL spectroscopy was performed from 14 K to 300 K at a fixed laser power of 105 mW. The excitation dependent PL spectroscopy was performed from 0.1 mW to 175 mW at a fixed temperature of 14 K. The THz emission from the samples was measured using a standard THz-TDS setup in conjunction with a cryostat. A mode-locked Ti:Sapphire femtosecond laser with a central wavelength of 870 nm, pulse width of 100 fs, and repetition rate of 80 MHz was used as the excitation source. The central wavelength at $\lambda =$ 870 nm with full width at half maximum $\Delta \lambda =$ 12 nm was chosen to avoid photo-excitation of the GaAs layers. The laser beam was split into pump and probe arms. The pump beam was mechanically chopped using an optical chopper and hits the samples at an excitation power of 70 mW. The estimated beam spot size at the focal point is 55 µm and the corresponding excitation fluence is 20 µJ/cm2. The THz emission from the samples was collected using a pair of off-axis paraboloids (f = 75 mm) and redirected into the dipole gap of an LT-GaAs PCA and the probe beam ($\lambda =$ 870 nm) was used to optically gate the PCA. To avoid photoexcitation of the GaAs layers, the temperature-dependence measurements of the THz emission were performed at a range having a maximum of 200 K; wherein the bandgap of GaAs would still be higher than the 870 nm wavelength of the fs laser excitation. Figure 1(a) and 1(b) show the temperature-dependent PL spectra for the SLQD and MLQD, respectively. Two prominent energy peaks can be observed for both spectra. At $T =$ 14 K a peak at 1.240 eV (Peak A) and 1.290 eV (Peak B) can be observed in the SLQD PL. Similarly, a peak at 1.305 eV (Peak C) and 1.390 eV (Peak D) can be observed in the MLQD PL. The PL signal from the MLQD is relatively weaker than the SLQD sample. A third peak near 1.424 (Peak G) can also be observed in the MLQD especially at 300 K, corresponding to bulk GaAs. No signal was observed from the wetting layer. Fig. 1. (a) Temperature-dependent PL spectra of (a) SLQD and (b) MLQD. The PL spectra undergoes redshift as T increases. The PL peak intensity also decreases and the FWHM broadens at elevated T. Two peaks can be observed for both SLQD and MLQD PL. A 3rd peak corresponding to bulk GaAs only becomes evident in the MLQD sample at 300 K In general, the PL intensity of the samples decreases as T increases. However, an anomalous behavior in this trend can be observed from $T=14~K$ to $30~K$. These results have been previously reported as due to (i) carrier redistribution between different QD sizes or (ii) competition between GS and ES transitions [4,19,20]. Additionally, multilayering is expected to result in a more uniform QD size distribution for the MLQD. This should yield narrower full width half maximum (FWHM) of the MLQD PL spectra, as compared to the SLQD. However, we observed comparable FWHM between the SLQD and MLQD samples. Using the absorption coefficient of $1.3 \times 10^{4}$ $cm^{-1}$ at 808 nm for GaAs, we have calculated the penetration depth to be $\approx$ 766 nm [21]. This suggest that all QD layers are possibly photoexcited, resulting in a broad FWHM of the MLQD PL. The PL spectra undergo a redshift as T is increased and thermal effects induce broadening. Peak B also becomes less prominent at higher T. 
The PL spectra were deconvolved and Gaussian curve fitting was performed in order to properly extract the energy values. The temperature-dependence of the bandgap $E_g (T)$ was then modelled using the Varshni Eq. [22]: (1)$$E_{g}(T) = E_0 - \frac{\alpha T^2}{T+\beta}$$ where $E_0$ is the bandgap at $T=0~K$, $\alpha$ and $\beta$ are constants, and $T$ is the temperature. Band-to-band transitions such as GS and ES transitions are known to obey Eq. (1) Meanwhile, defect-related transitions will not follow Eq. (1) and will thermalize at higher T [23]. Figure 2 shows the plot of the PL energy peak versus T. As it can be seen, all observed peaks follow the behavior dictated by Eq. (1). None of the peaks are possibly defect-related. Table 1 summarizes the results of Varshni fitting. Fig. 2. Temperature-dependence of the PL energy peaks for (a) SLQD and (b) MLQD. All peaks followed the behavior dictated by the Varshni equation, suggesting these are band-to-band transitions. None of the PL peak energies is possibly defect-related Table 1. Varshni Fitting Parameter Values Due to the non-uniform growth of the InAs/GaAs QD, it is possible to obtain uneven size distributions. Depending on their relative size, smaller QDs will emit at higher energies while larger QDs will emit at lower energy. Hence, in the case of SLQD, the presence of two observed PL peaks do not necessarily suggest GS and ES transitions. It is also possible to observe PL emission from the MLQD sample due to GS transitions from different InAs/GaAs QD layers due to the deep penetration of the excitation laser. In order to further investigate the origin of the PL transitions, excitation power-dependent PL spectroscopy was performed. Figures 3(a) and 3(b) show the PL spectra of the samples for the lowest four excitation powers (0.1 to 20 mW). For the SLQD sample, Peak A and Peak B continuously increase and no observable change in the PL lineshape can be observed as the excitation power is increased. Meanwhile, for the MLQD sample, Peak D becomes higher than Peak C starting at $\approx$ 1 mW. Peak C also increases at a slower rate as compared to peak D. For a more qualitative description, plots of the PL intensity vs excitation power are shown in Fig. 3(c) and 3(d). In the SLQD sample, Peak A and Peak B have equal slopes and a linear increase in PL intensity can be observed. We conclude that Peaks A and B are GS due to two possible QD size distribution and the SLQD sample is bimodal [24]. However, in the case of the MLQD sample, a cross-over can be observed $\approx$ 1 mW due to the change in the relative PL intensities of Peaks C and D. The observations in Fig. 3(b) and 3(d) for the MLQD are consistent with state filling [17,25]. We can surmise that peak C is the GS and peak D is the ES. It also shows that the incorporation of multilayers in the MLQD sample has resulted to a more uniform QD size distribution [26,27]. The effect of multilayering growth in the QD size distribution is as follows:in S-K growth, the growth starts as a 2D planar growth until eventually the strain in the growth surface would be high enough to cause dislocations resulting to QD island formation. The lowest QD layer will have a random size distribution. Upon deposition of the GaAs spacer layer, there will be undulations that will be preferable nucleation sites for the next QD layers. The succeeding QD layers will have wider base and greater height as compared the the QDs below. 
As the number of QD layers is increased, the topmost QDs will have a more uniform QD size distribution [16]. Fig. 3. Normalized excitation power-dependent PL spectra of the (a) SLQD ad (b) MLQD. The plot of PL intensity vs excitation power was fitted with a linear trendline for the (c) SLQD and (d) MLQD. Peak A and B continuously increase as the excitation power is increased. Meanwhile, Peak C becomes saturated at 1 mW and Peak D eventually becomes higher as excitation power is increased, consistent with state-filling The photocarrier dynamics and transport in the InAs/GaAs QD samples were also studied using THz-TDS. Figure 4 shows the THz-TDS waveforms and corresponding FFT spectra for p-InAs, SLQD, MLQD, and SI-GaAs at $T =$ 200 K. Using Eq. (1), the energy gap of GaAs was estimated to be 1.465 eV at this temperature. The excitation laser wavelength was set to 870 nm, corresponding to a photon energy of 1.425 eV, in order to avoid photoexcitation of the GaAs layers. In addition, the FWHM, $\Delta \lambda$ of the excitation laser was only 12 nm, thus the possible photon energy ranges only from 1.415 - 1.435 eV. This further assures that no GaAs layer is being photoexcited. The strongest THz emission was obtained from p-InAs, followed by the MLQD and the SLQD. No significant THz emission was measured from SI-GaAs. The THz emission from the MLQD and SLQD samples are about $30\%$ and $6\%$ of p-InAs, respectively. The higher THz emission from the MLQD sample has been previously attributed to the higher number of QD emitters participating in the THz generation process as compared to the SLQD. Possible effects of multilayering resulting to less scattering and smoother interfaces, as in the PL result might have also contributed to the THz emission enhancement [28]. Fig. 4. (a) THz-TDS waveforms for p-InAs, SLQD, MLQD, and SI-GaAs (b) Corresponding FFT spectra of the THz-TDS waveforms. The highest THz signal was obtained from p-InAs. The THz emission from the MLQD sample is about $30\%$ and the SLQD is $6\%$ as compared to p-InAs. Recombination and transport are competing processes affecting both luminescence and THz emission [29,30]. The relative PL intensity of the MLQD sample is lower than the SLQD sample, suggesting less efficient recombination dynamics. Moreover, temperature- and excitation power-dependent PL spectroscopy have also shown that the MLQD sample has a more uniform QD size distribution. A uniform QD size distribution would result to less scattering and smoother interface [28]. If other non-competing processes are excluded, it can be inferred that photocarriers in the MLQD will undergo transport as compared to the SLQD where recombination process is more likely to proceed. This scenario should explain why the THz emission is higher in the MLQD as compared to the SLQD. Figure 5 shows the THz amplitudes at different temperatures for p-InAs, SLQD, and MLQD. For p-InAs, the THz emission decreases as temperatures increases. However, the THz emission for the QDs was found to increase as temperature increases. The observed temperature-dependence from the samples can be explained by investigating the origin of the THz generation mechanism. The generated THz electromagnetic radiation $E_{THz}$ is given by the expression [31]: (2)$$E_{THz}=-\frac{S}{c^2 R}\int^{\infty}_0 \left( \frac{\partial J}{\partial t}+\frac{\partial^2P}{\partial t^2}\right) dz$$ Fig. 5. Temperature-dependence of the THz emission from p-InAs, SLQD and MLQD. 
The THz amplitudes were normalized with respect to peak-to-peak amplitude of p-InAs at $T =$ 14 K. The THz emission from p-InAs decreases while the THz emission from the SLQD and MLQD increases as T is increased. where $S$ is the illuminated area of the semiconductor, $c$ is the speed of light, $R$ is the distance from the emitter and observation point, $J$ is the transient photocurrent and $P$ is the nonlinear (NL) polarization. Various works have investigated the NL effect in semiconductor surfaces [32,33]. For the excitation fluence values used in the experiment, the NL effect is expected to be negligible [34]. Morever, the 100 orientation and excitation geometry of the layers further suppress THz emission from NL effects [9,35]. The linear process of THz radiation can be primarily attributed to the generation of a transient photocurrent that is described by the Drift-Diffusion Eq. [36]: (3)$$\frac{\partial N_i \left(\vec{r}, t \right)}{\partial t} = G \left( \vec{r}, t \right)+\vec{\nabla} \cdot \{D_i(\vec{r}, t)\vec{\nabla}N_i(\vec{r}, t) \} \pm \vec{\nabla} \cdot \{ \mu_i (\vec{r}, t) \vec{E}(\vec{r}, t)N_i(\vec{r}, t)\}$$ where $N_i(\vec {r}, t)$ is the carrier concentration, $G \left ( \vec {r}, t \right )$ is the laser generation rate, $D_i(\vec {r}, t)$ is the diffusion coefficient equal to $\mu _{i} \left (\vec {r}, t \right )kT_i/q$, $\mu _i (\vec {r}, t)$ is the mobility, $T_i$ is the carrier temperature related to the excess energy $h\nu -E_g$ and $\vec {E}(\vec {r}, t)$ is the electric field. The subscript $i$ denotes $e$ for electrons and $h$ for the holes. Since electrons are lighter than holes, it can be assumed that the THz generation can be primarily due to electron transport. In the right-hand side of Eq. (3), the 2nd term can be interpreted as the contribution due to carrier diffusion while the 3rd term is due to carrier drift. In both diffusion- and drift-related photocarrier transport, the THz electric field $E_{THz}$ is proportional to the time-derivative of the transient photocurrent density $\partial J / \partial t$. At low excitation fluence, the dominant THz radiation mechanism for low $E_g$ semiconductors such as InAs is carrier diffusion [36,37]. For intermediate $E_g$ semiconductors such as GaAs, the dominant THz radiation mechanism is carrier drift [38,39]. In the case of InAs/GaAs QDs, we also propose a drift-related carrier transport mechanism. Based on theoretical works, the photo-carriers in InAs/GaAs QDs are not confined in the center of the pyramidal QD [40]. In particular, electrons were found to be closer in the apex while holes are closer to the base, resulting to the creation of a permanent dipole. B-field dependent THz-TDS measurements have also shown that carrier drift is the origin of the THz emission from InAs/GaAs QDs [18]. For p-InAs, the diffusion current $\vec {J}_{diff}$ is given by: (4)$$\vec{J}_{diff}= \mu_e (\vec{r}, t)(kT_e/q) \vec{\nabla}N_e(\vec{r}, t)$$ and for a drift-type THz emitter such as QDs, the drift current $\vec {J}_{drift}$ is given by: (5)$$\vec{J}_{drift}= e \mu_e (\vec{r}, t) \vec{E}(\vec{r}, t)N_e(\vec{r}, t)$$ Note that the electron mobility $\mu _e$, electric field $\vec {E}$, and carrier concentration $N_e$ have implicit temperature-dependence. We discuss the effects of T on these parameters since the resulting behavior of the THz emission will be a complex convolution of these effects. 
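Before examining each parameter, a rough numerical sketch (arbitrary units) of how Eqs. (2) and (5) tie the radiated field to the drift photocurrent may be helpful; the pulse shape, carrier relaxation time, and parameter values below are assumptions chosen for illustration, not fits to the measured waveforms.

```python
# Illustrative sketch (arbitrary units): THz field ~ -dJ/dt for a drift-type
# emitter, Eqs. (2) and (5). The carrier-density transient N_e(t) is a toy
# convolution of a ~100 fs excitation pulse with an assumed 1 ps decay.
import numpy as np

t = np.linspace(-1.0, 4.0, 5000)                    # time (ps)
dt = t[1] - t[0]
pulse = np.exp(-(t / 0.1)**2)                       # ~100 fs optical excitation
decay = np.where(t >= 0, np.exp(-t / 1.0), 0.0)     # assumed 1 ps carrier relaxation
N_e = np.convolve(pulse, decay, mode="same") * dt   # toy photocarrier density

mu_e, E_int = 1.0, 1.0                              # mobility and interface field (a.u.)
J_drift = mu_e * E_int * N_e                        # Eq. (5): electron drift current
E_THz = -np.gradient(J_drift, dt)                   # Eq. (2): field follows -dJ/dt

print("peak |E_THz| = %.3e (a.u.)" % np.abs(E_THz).max())
```

Because this sketch is linear in both $\mu_e$ and $\vec{E}$, it simply illustrates that the peak THz amplitude scales with their product; the competing temperature dependences of these factors are what is weighed in the discussion that follows.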
First, the change in excess energy $h\nu -E_g$ from $T=14K-200K$ is not significant to induce a temperature-dependence in $N_e$ or result to carrier repopulation between the $\Gamma -L$ valleys [41]. The change in $N_e$ due to the growth of thermal carriers is also expected to occur at a much higher T ($T\;>\;750$ K) [42]. Next, in general, scattering effects decreases $\mu _e$ at higher T. In particular, $\mu _e$ is related to scattering time $\tau _e$ via the relation $\mu _e = e \tau _e /m_e^*$ where $m_e^*$ is the electron effective mass [43]. Due to polar impurity scattering, the behavior of $\tau _e$ is known to increase from 0-100 K, then decreases as T is further increased [44]. However, correction using the carrier temperature $T_e$ would instead show a monotonic decrease of $\tau _e$ as T is increased resulting to decrease in $\mu _e$ [45]. The decrease in the THz emission in p-InAs as T increases can be primarily attributed to the decrease in $\mu _e$. Meanwhile, the THz emission from the QD samples was found to be proportional to T. Since $\mu \propto 1/T$, we surmise that the role of $\mu _e$ is not as significant as $\vec {E}$. Previous works have investigated the temperature-dependence of the THz emission from intermediate $E_g$ semiconductors such as InP and SI-GaAs at 800 nm [42,45]. It has been shown that the surface electric field $E_{surf}$ increases as T increases due to shifting of the Fermi level. In the case of QDs, the photocarriers are under the influence of an interface electric field $\vec {E}_{int}$ brought about at the InAs and GaAs interfaces instead of $E_{surf}$. For modulation-doped heterostructures (MDHs) wherein $E_{int}$ is very high, temperature-dependent photoreflectance measurements have confirmed that $\vec {E}_{int}$ increases as T increases [46]. The increase in the THz emission from the QD samples is possibly due to the increase in the $E_{int}$, thus validating the conjecture that the THz emission from the SLQD and the MLQD samples are drift-related. In summary, we have investigated carrier dynamics and transport in MBE-grown InAs/GaAs QD structures. Temperature- and excitation power-dependent PL spectroscopy have shown that the SLQD sample has a bimodal size distribution. Multi-layering in the MLQD samples has also resulted to a more uniform QD size distribution. The uniform QD size distribution in the MLQD sample has resulted to more intense THz emission than the SLQD sample possibly due to less scattering and smoother interface quality. The temperature-dependence of the THz emission from QDs was found to increase as temperature increases, in contrast to a diffusion-type THz emitter such as p-InAs. A drift-related THz radiation mechanism is proposed in the QD samples wherein photocarriers generated in the QD structures are accelerated by the interface electric field. CHED-PCARI (IIID-2015-13); DOST-PCIEERD-GIA (Project No. 04001); UP OVPAA (OVPAA-BPhD-2012-03). 1. N. Kirstaedter, O. Schmidt, N. Ledentsov, D. Bimberg, V. Ustinov, A. Y. Egorov, A. Zhukov, M. Maximov, P. Kop'ev, and Z. I. Alferov, "Gain and differential gain of single layer inas/gaas quantum dot injection lasers," Appl. Phys. Lett. 69(9), 1226–1228 (1996). [CrossRef] 2. H. Liu, M. Gao, J. McCaffrey, Z. Wasilewski, and S. Fafard, "Quantum dot infrared photodetectors," Appl. Phys. Lett. 78(1), 79–81 (2001). [CrossRef] 3. D. K. Schroder, Semiconductor Material and Device Characterization (John Wiley & Sons, 2006). 4. S. Jung, H. Yeo, I. Yun, J. Leem, I. Han, J. Kim, and J. 
Lee, "Size distribution effects on self-assembled inas quantum dots," J. Mater. Sci.: Mater. Electron. 18(S1), 191–194 (2007). [CrossRef] 5. K. S. Lee, G. Oh, E. K. Kim, and J. D. Song, "Temperature dependent photoluminescence from inas/gaas quantum dots grown by molecular beam epitaxy," Appl. Sci. Convergence Technol. 26(4), 86–90 (2017). [CrossRef] 6. M. Kaniewska, O. Engström, and M. Kaczmarczyk, "Classification of energy levels in quantum dot structures by depleted layer spectroscopy," J. Electron. Mater. 39(6), 766–772 (2010). [CrossRef] 7. A. Leitenstorfer, S. Hunsche, J. Shah, M. Nuss, and W. Knox, "Femtosecond charge transport in polar semiconductors," Phys. Rev. Lett. 82(25), 5140–5143 (1999). [CrossRef] 8. K.-T. Tsen, Ultrafast Dynamical Processes in Semiconductors, vol. 92 (Springer Science & Business Media, 2004). 9. P. Gu, M. Tani, S. Kono, K. Sakai, and X.-C. Zhang, "Study of terahertz radiation from inas and insb," J. Appl. Phys. 91(9), 5533–5537 (2002). [CrossRef] 10. P. Han, X. Huang, and X.-C. Zhang, "Direct characterization of terahertz radiation from the dynamics of the semiconductor surface field," Appl. Phys. Lett. 77(18), 2864–2866 (2000). [CrossRef] 11. V. Karpus, R. Norkus, B. Čechavičius, and A. Krotkus, "Thz-excitation spectroscopy technique for band-offset determination," Opt. Express 26(26), 33807–33817 (2018). [CrossRef] 12. I. Beleckaitė and R. Adomavičius, "Determination of the terahertz pulse emitting dipole orientation by terahertz emission measurements," J. Appl. Phys. 125(22), 225706 (2019). [CrossRef] 13. I. Kostakis, D. Saeedkia, and M. Missous, "Terahertz generation and detection using low temperature grown ingaas-inalas photoconductive antennas at 1.55 um pulse excitation," IEEE Trans. Terahertz Sci. Technol. 2(6), 617–622 (2012). [CrossRef] 14. N. Daghestani, M. Cataluna, G. Berry, G. Ross, and M. Rose, "Terahertz emission from inas/gaas quantum dot based photoconductive devices," Appl. Phys. Lett. 98(18), 181107 (2011). [CrossRef] 15. O. Abdulmunem, K. Hassoon, M. Gaafar, A. Rahimi-Iman, and J. C. Balzer, "Tin nanoparticles for enhanced thz generation in tds systems," J. Infrared, Millimeter, Terahertz Waves 38(10), 1206–1214 (2017). [CrossRef] 16. A. Somintac, E. Estacio, and A. Salvador, "Observation of blue-shifted photoluminescence in stacked inas/gaas quantum dots," J. Cryst. Growth 251(1-4), 196–200 (2003). [CrossRef] 17. K. Omambac, J. Porquez, J. Afalla, D. Vasquez, M. Balgos, R. Jaculbia, A. Somintac, and A. Salvador, "Application of external tensile and compressive strain on a single layer in a s/g a a s quantum dot via epitaxial lift-off," Phys. Status Solidi B 250(8), 1632–1635 (2013). [CrossRef] 18. J. M. M. Presto, E. A. P. Prieto, K. M. Omambac, J. P. C. Afalla, D. A. O. Lumantas, A. A. Salvador, A. S. Somintac, E. S. Estacio, K. Yamamoto, and M. Tani, "Confined photocarrier transport in inas pyramidal quantum dots via terahertz time-domain spectroscopy," Opt. Express 23(11), 14532–14540 (2015). [CrossRef] 19. A. Polimeni, A. Patane, M. Henini, L. Eaves, and P. Main, "Temperature dependence of the optical properties of i n a s/a l y ga 1- y as self-organized quantum dots," Phys. Rev. B 59(7), 5064–5068 (1999). [CrossRef] 20. H. Lee, W. Yang, and P. C. Sercel, "Temperature and excitation dependence of photoluminescence line shapein inas/gaas quantum-dot structures," Phys. Rev. B 55(15), 9757–9762 (1997). [CrossRef] 21. D. Aspnes, S. Kelso, R. Logan, and R. Bhat, "Optical properties of al x ga1- x as," J. Appl. Phys. 
60(2), 754–767 (1986). [CrossRef] 22. Y. P. Varshni, "Temperature dependence of the energy gap in semiconductors," Physica 34(1), 149–154 (1967). [CrossRef] 23. M. H. Balgos, J. P. Afalla, S. Vizcara, D. Lumantas, E. Estacio, A. Salvador, and A. Somintac, "Temperature behavior of unstrained (gaas/algaas) and strained (ingaas/gaas) quantum well bandgaps," Opt. Quantum Electron. 47(8), 3053–3063 (2015). [CrossRef] 24. J. Kim, B. Ko, J.-I. Yu, and I.-H. Bae, "Temperature-and excitation-power-dependent optical properties ofinas/gaas quantum dots by comparison of photoluminescence andphotoreflectance spectroscopy," J. Korean Phys. Soc. 55(2), 640–645 (2009). [CrossRef] 25. D. Hessman, P. Castrillo, M.-E. Pistol, C. Pryor, and L. Samuelson, "Excited states of individual quantum dots studied by photoluminescence spectroscopy," Appl. Phys. Lett. 69(6), 749–751 (1996). [CrossRef] 26. J. Tersoff, C. Teichert, and M. Lagally, "Self-organization in growth of quantum dot superlattices," Phys. Rev. Lett. 76(10), 1675–1678 (1996). [CrossRef] 27. G. Solomon, S. Komarov, J. Harris Jr, and Y. Yamamoto, "Increased size uniformity through vertical quantum dot columns," J. Cryst. Growth 175-176, 707–712 (1997). [CrossRef] 28. E. Estacio, M. H. Pham, S. Takatori, M. Cadatal-Raduban, T. Nakazato, T. Shimizu, N. Sarukura, A. Somintac, M. Defensor, F. C. Awitan, R. Jaculbia, A. Salvador, and A. Garcia, "Strong enhancement of terahertz emission from gaas in inas/gaas quantum dot structures," Appl. Phys. Lett. 94(23), 232104 (2009). [CrossRef] 29. J. Muldera, N. I. Cabello, J. C. Ragasa, A. Mabilangan, M. H. Balgos, R. Jaculbia, A. Somintac, E. Estacio, and A. Salvador, "Photocarrier transport and carrier recombination efficiency in vertically aligned si nanowire arrays synthesized via metal-assisted chemical etching," Appl. Phys. Express 6(8), 082101 (2013). [CrossRef] 30. N. I. Cabello, P. Tingzon, K. Cervantes, A. Cafe, J. Lopez, A. Mabilangan, A. De Los Reyes, L. Lopez Jr, J. Muldera, D. C. Nguyen, X. T. Nguyen, H. M. Pham, T. B. Nguyen, A. Salvador, A. Somintac, and E. Estacio, "Luminescence and carrier dynamics in nanostructured silicon," J. Lumin. 186, 312–317 (2017). [CrossRef] 31. V. L. Malevich, R. Adomavičius, and A. Krotkus, "Thz emission from semiconductor surfaces," C. R. Phys. 9(2), 130–141 (2008). [CrossRef] 32. M. Reid and R. Fedosejevs, "Terahertz emission from (100) inas surfaces at high excitation fluences," Appl. Phys. Lett. 86(1), 011906 (2005). [CrossRef] 33. L. Peters, J. Tunesi, A. Pasquazi, and M. Peccianti, "High-energy terahertz surface optical rectification," Nano Energy 46, 128–132 (2018). [CrossRef] 34. E. Estacio, H. Sumikura, H. Murakami, M. Tani, N. Sarukura, M. Hangyo, C. Ponseca Jr, R. Pobre, R. Quiroga, and S. Ono, "Magnetic-field-induced fourfold azimuthal angle dependence in the terahertz radiation power of (100) inas," Appl. Phys. Lett. 90(15), 151915 (2007). [CrossRef] 35. H. Takahashi, A. Quema, R. Yoshioka, S. Ono, and N. Sarukura, "Excitation fluence dependence of terahertz radiation mechanism from femtosecond-laser-irradiated inas under magnetic field," Appl. Phys. Lett. 83(6), 1068–1070 (2003). [CrossRef] 36. K. Liu, J. Xu, T. Yuan, and X.-C. Zhang, "Terahertz radiation from inas induced by carrier diffusion and drift," Phys. Rev. B 73(15), 155330 (2006). [CrossRef] 37. S. Kono, P. Gu, M. Tani, and K. Sakai, "Temperature dependence of terahertz radiation from n-type insb and n-type inas surfaces," Appl. Phys. B 71(6), 901–904 (2000). [CrossRef] 38. X.-C. Zhang, B. Hu, J. 
Darrow, and D. Auston, "Generation of femtosecond electromagnetic pulses from semiconductor surfaces," Appl. Phys. Lett. 56(11), 1011–1013 (1990). [CrossRef] 39. J. Heyman, N. Coates, A. Reinhardt, and G. Strasser, "Diffusion and drift in terahertz emission at gaas surfaces," Appl. Phys. Lett. 83(26), 5476–5478 (2003). [CrossRef] 40. M. Grundmann, N. Ledentsov, O. Stier, D. Bimberg, V. Ustinov, P. Kop'ev, and Z. I. Alferov, "Excited states in self-organized inas/gaas quantum dots: theory and experiment," Appl. Phys. Lett. 68(7), 979–981 (1996). [CrossRef] 41. A. Markelz and E. J. Heilweil, "Temperature-dependent terahertz output from semi-insulating gaas photoconductive switches," Appl. Phys. Lett. 72(18), 2229–2231 (1998). [CrossRef] 42. M. Nakajima, M. Takahashi, and M. Hangyo, "Strong enhancement of thz radiation intensity from semi-insulating gaas surfaces at high temperatures," Appl. Phys. Lett. 81(8), 1462–1464 (2002). [CrossRef] 43. N. W. Ashcroft and N. D. Mermin, Solid State Physics (Saunders College, Philadelphia, 1976). 44. K. Fletcher and P. Butcher, "An exact solution of the linearized boltzmann equation with applications to the hall mobility and hall factor of n-gaas," J. Phys. C: Solid State Phys. 5(2), 212–224 (1972). [CrossRef] 45. M. Nakajima, M. Hangyo, M. Ohta, and H. Miyazaki, "Polarity reversal of terahertz waves radiated from semi-insulating inp surfaces induced by temperature," Phys. Rev. B 67(19), 195308 (2003). [CrossRef] 46. H. R. Bardolaza, J. D. E. Vasquez, M. Y. Bacaoco, E. Alexander, L. P. Lopez, A. S. Somintac, A. A. Salvador, E. S. Estacio, and R. V. Sarmago, "Temperature dependence of thz emission and junction electric field of gaas–algaas modulation-doped heterostructures with different i-algaas spacer layer thicknesses," J. Mater. Sci.: Mater. Electron. 29(10), 8760–8766 (2018). [CrossRef]
Čechavičius, and A. Krotkus, "Thz-excitation spectroscopy technique for band-offset determination," Opt. Express 26(26), 33807–33817 (2018). I. Beleckaitė and R. Adomavičius, "Determination of the terahertz pulse emitting dipole orientation by terahertz emission measurements," J. Appl. Phys. 125(22), 225706 (2019). I. Kostakis, D. Saeedkia, and M. Missous, "Terahertz generation and detection using low temperature grown ingaas-inalas photoconductive antennas at 1.55 um pulse excitation," IEEE Trans. Terahertz Sci. Technol. 2(6), 617–622 (2012). N. Daghestani, M. Cataluna, G. Berry, G. Ross, and M. Rose, "Terahertz emission from inas/gaas quantum dot based photoconductive devices," Appl. Phys. Lett. 98(18), 181107 (2011). O. Abdulmunem, K. Hassoon, M. Gaafar, A. Rahimi-Iman, and J. C. Balzer, "Tin nanoparticles for enhanced thz generation in tds systems," J. Infrared, Millimeter, Terahertz Waves 38(10), 1206–1214 (2017). A. Somintac, E. Estacio, and A. Salvador, "Observation of blue-shifted photoluminescence in stacked inas/gaas quantum dots," J. Cryst. Growth 251(1-4), 196–200 (2003). K. Omambac, J. Porquez, J. Afalla, D. Vasquez, M. Balgos, R. Jaculbia, A. Somintac, and A. Salvador, "Application of external tensile and compressive strain on a single layer in a s/g a a s quantum dot via epitaxial lift-off," Phys. Status Solidi B 250(8), 1632–1635 (2013). J. M. M. Presto, E. A. P. Prieto, K. M. Omambac, J. P. C. Afalla, D. A. O. Lumantas, A. A. Salvador, A. S. Somintac, E. S. Estacio, K. Yamamoto, and M. Tani, "Confined photocarrier transport in inas pyramidal quantum dots via terahertz time-domain spectroscopy," Opt. Express 23(11), 14532–14540 (2015). A. Polimeni, A. Patane, M. Henini, L. Eaves, and P. Main, "Temperature dependence of the optical properties of i n a s/a l y ga 1- y as self-organized quantum dots," Phys. Rev. B 59(7), 5064–5068 (1999). H. Lee, W. Yang, and P. C. Sercel, "Temperature and excitation dependence of photoluminescence line shapein inas/gaas quantum-dot structures," Phys. Rev. B 55(15), 9757–9762 (1997). D. Aspnes, S. Kelso, R. Logan, and R. Bhat, "Optical properties of al x ga1- x as," J. Appl. Phys. 60(2), 754–767 (1986). Y. P. Varshni, "Temperature dependence of the energy gap in semiconductors," Physica 34(1), 149–154 (1967). M. H. Balgos, J. P. Afalla, S. Vizcara, D. Lumantas, E. Estacio, A. Salvador, and A. Somintac, "Temperature behavior of unstrained (gaas/algaas) and strained (ingaas/gaas) quantum well bandgaps," Opt. Quantum Electron. 47(8), 3053–3063 (2015). J. Kim, B. Ko, J.-I. Yu, and I.-H. Bae, "Temperature-and excitation-power-dependent optical properties ofinas/gaas quantum dots by comparison of photoluminescence andphotoreflectance spectroscopy," J. Korean Phys. Soc. 55(2), 640–645 (2009). D. Hessman, P. Castrillo, M.-E. Pistol, C. Pryor, and L. Samuelson, "Excited states of individual quantum dots studied by photoluminescence spectroscopy," Appl. Phys. Lett. 69(6), 749–751 (1996). J. Tersoff, C. Teichert, and M. Lagally, "Self-organization in growth of quantum dot superlattices," Phys. Rev. Lett. 76(10), 1675–1678 (1996). G. Solomon, S. Komarov, J. Harris Jr, and Y. Yamamoto, "Increased size uniformity through vertical quantum dot columns," J. Cryst. Growth 175-176, 707–712 (1997). E. Estacio, M. H. Pham, S. Takatori, M. Cadatal-Raduban, T. Nakazato, T. Shimizu, N. Sarukura, A. Somintac, M. Defensor, F. C. Awitan, R. Jaculbia, A. Salvador, and A. 
Garcia, "Strong enhancement of terahertz emission from gaas in inas/gaas quantum dot structures," Appl. Phys. Lett. 94(23), 232104 (2009). J. Muldera, N. I. Cabello, J. C. Ragasa, A. Mabilangan, M. H. Balgos, R. Jaculbia, A. Somintac, E. Estacio, and A. Salvador, "Photocarrier transport and carrier recombination efficiency in vertically aligned si nanowire arrays synthesized via metal-assisted chemical etching," Appl. Phys. Express 6(8), 082101 (2013). N. I. Cabello, P. Tingzon, K. Cervantes, A. Cafe, J. Lopez, A. Mabilangan, A. De Los Reyes, L. Lopez Jr, J. Muldera, D. C. Nguyen, X. T. Nguyen, H. M. Pham, T. B. Nguyen, A. Salvador, A. Somintac, and E. Estacio, "Luminescence and carrier dynamics in nanostructured silicon," J. Lumin. 186, 312–317 (2017). V. L. Malevich, R. Adomavičius, and A. Krotkus, "Thz emission from semiconductor surfaces," C. R. Phys. 9(2), 130–141 (2008). M. Reid and R. Fedosejevs, "Terahertz emission from (100) inas surfaces at high excitation fluences," Appl. Phys. Lett. 86(1), 011906 (2005). L. Peters, J. Tunesi, A. Pasquazi, and M. Peccianti, "High-energy terahertz surface optical rectification," Nano Energy 46, 128–132 (2018). E. Estacio, H. Sumikura, H. Murakami, M. Tani, N. Sarukura, M. Hangyo, C. Ponseca Jr, R. Pobre, R. Quiroga, and S. Ono, "Magnetic-field-induced fourfold azimuthal angle dependence in the terahertz radiation power of (100) inas," Appl. Phys. Lett. 90(15), 151915 (2007). H. Takahashi, A. Quema, R. Yoshioka, S. Ono, and N. Sarukura, "Excitation fluence dependence of terahertz radiation mechanism from femtosecond-laser-irradiated inas under magnetic field," Appl. Phys. Lett. 83(6), 1068–1070 (2003). K. Liu, J. Xu, T. Yuan, and X.-C. Zhang, "Terahertz radiation from inas induced by carrier diffusion and drift," Phys. Rev. B 73(15), 155330 (2006). S. Kono, P. Gu, M. Tani, and K. Sakai, "Temperature dependence of terahertz radiation from n-type insb and n-type inas surfaces," Appl. Phys. B 71(6), 901–904 (2000). X.-C. Zhang, B. Hu, J. Darrow, and D. Auston, "Generation of femtosecond electromagnetic pulses from semiconductor surfaces," Appl. Phys. Lett. 56(11), 1011–1013 (1990). J. Heyman, N. Coates, A. Reinhardt, and G. Strasser, "Diffusion and drift in terahertz emission at gaas surfaces," Appl. Phys. Lett. 83(26), 5476–5478 (2003). M. Grundmann, N. Ledentsov, O. Stier, D. Bimberg, V. Ustinov, P. Kop'ev, and Z. I. Alferov, "Excited states in self-organized inas/gaas quantum dots: theory and experiment," Appl. Phys. Lett. 68(7), 979–981 (1996). A. Markelz and E. J. Heilweil, "Temperature-dependent terahertz output from semi-insulating gaas photoconductive switches," Appl. Phys. Lett. 72(18), 2229–2231 (1998). M. Nakajima, M. Takahashi, and M. Hangyo, "Strong enhancement of thz radiation intensity from semi-insulating gaas surfaces at high temperatures," Appl. Phys. Lett. 81(8), 1462–1464 (2002). N. W. Ashcroft and N. D. Mermin, Solid State Physics (Saunders College, Philadelphia, 1976). K. Fletcher and P. Butcher, "An exact solution of the linearized boltzmann equation with applications to the hall mobility and hall factor of n-gaas," J. Phys. C: Solid State Phys. 5(2), 212–224 (1972). M. Nakajima, M. Hangyo, M. Ohta, and H. Miyazaki, "Polarity reversal of terahertz waves radiated from semi-insulating inp surfaces induced by temperature," Phys. Rev. B 67(19), 195308 (2003). H. R. Bardolaza, J. D. E. Vasquez, M. Y. Bacaoco, E. Alexander, L. P. Lopez, A. S. Somintac, A. A. Salvador, E. S. Estacio, and R. V. 
Sarmago, "Temperature dependence of thz emission and junction electric field of gaas–algaas modulation-doped heterostructures with different i-algaas spacer layer thicknesses," J. Mater. Sci.: Mater. Electron. 29(10), 8760–8766 (2018). Abdulmunem, O. Adomavicius, R. Afalla, J. Afalla, J. P. Afalla, J. P. C. Alexander, E. Alferov, Z. I. Ashcroft, N. W. Aspnes, D. Auston, D. Awitan, F. C. Bacaoco, M. Y. Bae, I.-H. Balgos, M. Balgos, M. H. Balzer, J. C. Bardolaza, H. R. Beleckaite, I. Berry, G. Bhat, R. Bimberg, D. Butcher, P. Cabello, N. I. Cadatal-Raduban, M. Cafe, A. Castrillo, P. Cataluna, M. Cechavicius, B. Cervantes, K. Coates, N. Daghestani, N. Darrow, J. De Los Reyes, A. Defensor, M. Eaves, L. Egorov, A. Y. Engström, O. Estacio, E. Estacio, E. S. Fafard, S. Fedosejevs, R. Fletcher, K. Gaafar, M. Gao, M. Garcia, A. Grundmann, M. Gu, P. Han, I. Han, P. Hangyo, M. Harris Jr, J. Hassoon, K. Heilweil, E. J. Henini, M. Hessman, D. Heyman, J. Hu, B. Huang, X. Hunsche, S. Jaculbia, R. Jung, S. Kaczmarczyk, M. Kaniewska, M. Karpus, V. Kelso, S. Kim, E. K. Kim, J. Kirstaedter, N. Knox, W. Ko, B. Komarov, S. Kono, S. Kop'ev, P. Kostakis, I. Krotkus, A. Lagally, M. Ledentsov, N. Lee, H. Lee, J. Lee, K. S. Leem, J. Leitenstorfer, A. Liu, H. Liu, K. Logan, R. Lopez, J. Lopez, L. P. Lopez Jr, L. Lumantas, D. Lumantas, D. A. O. Mabilangan, A. Main, P. Malevich, V. L. Markelz, A. Maximov, M. McCaffrey, J. Mermin, N. D. Missous, M. Miyazaki, H. Muldera, J. Murakami, H. Nakajima, M. Nakazato, T. Nguyen, D. C. Nguyen, T. B. Nguyen, X. T. Norkus, R. Nuss, M. Oh, G. Ohta, M. Omambac, K. Omambac, K. M. Ono, S. Pasquazi, A. Patane, A. Peccianti, M. Peters, L. Pham, H. M. Pham, M. H. Pistol, M.-E. Pobre, R. Polimeni, A. Ponseca Jr, C. Porquez, J. Presto, J. M. M. Prieto, E. A. P. Pryor, C. Quema, A. Quiroga, R. Ragasa, J. C. Rahimi-Iman, A. Reid, M. Reinhardt, A. Rose, M. Ross, G. Saeedkia, D. Sakai, K. Salvador, A. Salvador, A. A. Samuelson, L. Sarmago, R. V. Sarukura, N. Schmidt, O. Schroder, D. K. Sercel, P. C. Shah, J. Shimizu, T. Solomon, G. Somintac, A. Somintac, A. S. Song, J. D. Stier, O. Strasser, G. Sumikura, H. Takahashi, H. Takahashi, M. Takatori, S. Tani, M. Teichert, C. Tersoff, J. Tingzon, P. Tsen, K.-T. Tunesi, J. Ustinov, V. Varshni, Y. P. Vasquez, D. Vasquez, J. D. E. Vizcara, S. Wasilewski, Z. Xu, J. Yamamoto, K. Yamamoto, Y. Yang, W. Yeo, H. Yoshioka, R. Yu, J.-I. Yuan, T. Yun, I. Zhang, X.-C. Zhukov, A. Appl. Phys. B (1) Appl. Phys. Express (1) Appl. Phys. Lett. (14) Appl. Sci. Convergence Technol. (1) C. R. Phys. (1) IEEE Trans. Terahertz Sci. Technol. (1) J. Appl. Phys. (3) J. Cryst. Growth (2) J. Electron. Mater. (1) J. Infrared, Millimeter, Terahertz Waves (1) J. Korean Phys. Soc. (1) J. Lumin. (1) J. Mater. Sci.: Mater. Electron. (2) J. Phys. C: Solid State Phys. (1) Nano Energy (1) Opt. Quantum Electron. (1) Phys. Rev. Lett. (2) Phys. Status Solidi B (1) Physica (1) Equations on this page are rendered with MathJax. Learn more. (1) E g ( T ) = E 0 − α T 2 T + β (2) E T H z = − S c 2 R ∫ 0 ∞ ( ∂ J ∂ t + ∂ 2 P ∂ t 2 ) d z (3) ∂ N i ( r → , t ) ∂ t = G ( r → , t ) + ∇ → ⋅ { D i ( r → , t ) ∇ → N i ( r → , t ) } ± ∇ → ⋅ { μ i ( r → , t ) E → ( r → , t ) N i ( r → , t ) } (4) J → d i f f = μ e ( r → , t ) ( k T e / q ) ∇ → N e ( r → , t ) (5) J → d r i f t = e μ e ( r → , t ) E → ( r → , t ) N e ( r → , t ) Varshni Fitting Parameter Values Peak A Peak B Peak C Peak D E 0 1.24 1.29 1.30 1.39 α ( e V / K ) 3.24 × 10 − 4 3.02 × 10 − 4 4.02 × 10 − 4 3.24 × 10 − 4 β ( K ) 165 150 150 200
CommonCrawl
\begin{document} \title{Leakage Benchmarking for Universal Gate Sets} \author[1,2]{Bujiao Wu} \author[3,1]{Xiaoyang Wang} \author[1,2]{Xiao Yuan} \author[4]{Cupjin Huang \thanks{Email:\href{mailto:[email protected]}{\texttt{[email protected]}}}} \author[4]{Jianxin Chen} \affil[1]{Center on Frontiers of Computing Studies, Peking University, Beijing 100871, China} \affil[2]{School of Computer Science, Peking University, Beijing 100871, China} \affil[3]{School of Physics, Peking University, Beijing 100871, China} \affil[4]{Alibaba Quantum Laboratory, Alibaba Group USA, Bellevue, Washington 98004, USA} \date{} \maketitle \begin{abstract} Errors are common issues in quantum computing platforms, among which leakage is one of the most challenging to address. This is because leakage, i.e., the loss of information stored in the computational subspace to undesired subspaces in a larger Hilbert space, is more difficult to detect and correct than errors that preserve the computational subspace. As a result, leakage presents a significant obstacle to the development of fault-tolerant quantum computation. In this paper, we propose an efficient and accurate benchmarking framework called \emph{leakage randomized benchmarking} (LRB) for measuring leakage rates on multi-qubit quantum systems. Our approach is more insensitive to state preparation and measurement (SPAM) noise than existing leakage benchmarking protocols, requires fewer assumptions about the gate set itself, and can be used to benchmark multi-qubit leakages, which was not done previously. We also extend the LRB protocol to an interleaved variant called interleaved LRB (iLRB), which can benchmark the average leakage rate of generic $n$-site quantum gates with reasonable noise assumptions. We demonstrate the iLRB protocol on benchmarking generic two-qubit gates realized using flux tuning, and analyze the behavior of iLRB under corresponding leakage models. Our numerical experiments show good agreement with theoretical estimations, indicating the feasibility of both the LRB and iLRB protocols. \end{abstract} \section{Introduction} Quantum computation maps information processing into the manipulation of (typically microscopic) physical systems governed by quantum mechanics. Although quantum computation holds the promise to solve problems that are believed to be classically intractable, practical quantum computation suffers from various noise sources, ranging from fabrication defects and control inaccuracies to fluctuations in external physical environments. Such noise greatly hinders the practicability of quantum computation on unprotected, bare physical qubits beyond proof-of-concept demonstrations. While any kind of error is unwanted and would possibly affect the quality of the computation processes, there is a significant difference between the harmfulness of different types of errors. The most ``benign'' error happens locally and independently on single qubits; such errors can, in principle, be compressed arbitrarily with quantum error correction under reasonable assumptions on the error rates~\cite{Harper_2020,Sun21Mitigating}. More malicious errors might introduce time correlations (e.g., \ non-Markovian errors) or space correlations (e.g., \ crosstalk) and are more challenging to mitigate. 
Of particular interest is the \emph{leakage error}, where a piece of quantum information escapes from a confined, finite-dimensional Hilbert space used for computation, called the \emph{computational subspace}, to a \emph{leaked subspace} of a larger Hilbert space. Such escaped information might undergo arbitrary and uncontrolled processes and is harder to detect, let alone correct. More seriously, typical frameworks of quantum error correction only deal with errors happening within the computational subspace and are either inapplicable to leakage errors or scale poorly in their presence. It is thus of great importance to be able to detect, correct, or even suppress leakage errors in order to conduct large-scale quantum computation. This paper focuses on estimating the leakage error rate associated with a given quantum processor, preferably efficiently and accurately. This task is part of a process usually referred to as \emph{benchmarking}, which provides an estimate of certain characteristics of the quantum device before proceeding with subsequent actions. In the context of leakage benchmarking, this information can be used as a criterion to accept or abort a newly-fabricated quantum processor or as feedback on leakage-suppressing gate schemes. Given the diverse nature of errors occurring in quantum computation, many different benchmarking schemes have been proposed over the years. A large class of benchmarking schemes, collectively called \emph{randomized benchmarking} (RB), extracts error information from the fitted results of multiple experiments with different sequence lengths~\cite{knill2008randomized,carignan2015characterizing,PhysRevA.97.062323,francca2018approximate,helsen2019new,claes2021character,erhard2019characterizing,magesan2012efficient,chasseur15Complete,wallman2015robust,Heinrich22General,Merkel2021Randomized}. Compared to tomography-based methods or direct fidelity estimation~\cite{flammia_direct_2011,da_silva_practical_2011}, RB schemes are typically more gate-efficient, and the fitting results are typically insensitive to state preparation and measurement (SPAM) errors, making them ideal candidates for benchmarking gate errors. These protocols have been successfully implemented in many quantum experiments~\cite{PhysRevA.103.042604,PhysRevLett.129.010502,sung2021realization,Morvan21Qutrit,Xue19Benchmarking}. The first theoretical framework for RB-based leakage benchmarking was given by Wallman et al.~\cite{wallman2016robust}. Without any prior assumption on the SPAM noise, this protocol was able to provide an estimate for the sum of the leakage rate and the \emph{seepage rate}, i.e., the rate at which information in the leaked subspace returns to the computational subspace. Refs.~\cite{wood2018quantification,claes2021character} later gave a detailed analysis of the protocol and illustrated this framework with several examples relevant to superconducting devices. The authors were also able to differentiate the leakage from the seepage with reasonable assumptions on the SPAM noise. Based on these protocols, several experimental characterizations of single-qubit leakage noise have been reported in superconducting quantum devices~\cite{PhysRevLett.116.020501,PhysRevA.96.022330}, quantum dots~\cite{andrews2019quantifying}, and trapped ions~\cite{haffner2008quantum}. There are two major limitations to the existing protocols~\cite{claes2021character,chasseur15Complete,wallman2016robust,wood2018quantification}. 
First, all protocols require that the quantum gates act nontrivially on the leakage subspaces, in order to eliminate non-Markovian behavior originating from residual information stored in the leakage subspace. As most practical gate schemes only focus on their actions on the computational subspace rather than the leakage subspaces, leakage benchmarking schemes built upon them typically do not work in general, multi-qubit quantum systems. Second, most existing protocols {can only estimate the sum of the leakage rate and the seepage rate without prior knowledge of SPAM noise, and SPAM information is required to obtain the leakage and seepage rates separately.} As there is typically only one set of state preparation and measurement within one run of benchmarking, the SPAM errors do not get amplified and cannot be measured accurately~\cite{nielsen2021gate}. Such inaccuracy would further affect the accuracy of gate leakage rate estimation. A natural question arises: \emph{How can one characterize the leakage rate of a multi-qubit system without operating on the leakage subspace while maintaining robustness to SPAM noise?} In this paper, we propose a leakage benchmarking scheme based on RB, dedicated to benchmarking leakage rates on multi-qubit systems. Compared to existing protocols requiring the leakage subspace to be fully twirled, our scheme only requires access to the Pauli group with gate-independent, time-independent, and Markovian noise. Assuming each qubit has only a one-dimensional leakage space, such a gate set does not twirl the leakage subspace as a whole, but instead twirls each invariant subspace of the Pauli group individually. This allows us to formulate the LRB process as a classical Markovian process between different invariant subspaces, which can be described by a Markovian $Q$-matrix~\cite{LevinPeresWilmer2006}. The leakage and seepage rates of the system can then be estimated by leveraging the spectral properties of the Markovian process, which can, in turn, be estimated similarly to RB protocols on the computational subspace. In general, the $Q$-matrix has a dimension exponential in the number of qubits, and thus its spectral properties are hard to measure using LRB experiments. To further simplify the problem, we study the spectrum of the $Q$-matrix in two physically-motivated scenarios: the first model, named \emph{leakage damping noise}, assumes that leakage happens on at most one qubit and that leakage does not ``hop'' from one qubit to another, which generalizes the amplitude damping noise~\cite{nielsen2002quantum} of the computational subspace; the second model assumes that each qubit undergoes an independent leakage process. In both cases, the spectral structure of the $Q$-matrix is significantly simplified, making data analysis easier. We also show how to calculate the corresponding average leakage rates of the proposed LRB protocol under the above two noise scenarios. As an illustration of the leakage damping noise model, we find that the noise models of commonly used two-qubit gates such as the iSWAP, SQiSW, and CZ gates all take this form. Building upon the foundation of leakage randomized benchmarking (LRB) protocols, we delve deeper into the study of leakage benchmarking for specific multi-qubit gates, which is a crucial aspect of quantum hardware development. To this end, we propose an interleaved variant of the LRB (iLRB) protocol that allows for the benchmarking of individual gates, rather than a set of gates. 
We show that the leakage rate can be extracted in general for arbitrary target gates given access to noiseless Pauli gates, and we perform a more careful analysis when the Pauli gates are implemented noisily. In addition, we show that the leakage rate of the target gate can still be extracted under certain physically-motivated assumptions that inherently apply to flux-tuning gates in superconducting quantum computation. To demonstrate the applicability of the iLRB protocol, we apply it to the case of flux-tunable superconducting quantum devices~\cite{Krantz_2019}, construct the corresponding noise model, and benchmark the leakage rate of the $\mathrm{iSWAP}$ gate. {This paper targets both theorists and experimentalists, as it seeks to establish an experiment-friendly leakage benchmarking scheme. We offer a thorough theoretical analysis for multi-qubit scenarios, as well as numerical verification of the average leakage rate for the iSWAP gate. This is achieved by extracting the noise model of the iSWAP gate from its Hamiltonian evolution. In Section \ref{sec:notation}, we introduce the fundamental concepts and notations. Section \ref{sec:leak_rateGen} presents our LRB protocol and analyzes the calculation of the average leakage rate using this method. In Section \ref{sec:lrb_specific}, we provide a detailed examination of the average leakage rate under two leakage models: single-site leakage and no cross-talk. Section \ref{sec:iLRB_protocol} proposes the iLRB protocol for any target gate that commutes with the noise channel, focusing on a special leakage damping noise. In Section \ref{sec:numerical_res}, we numerically validate the LRB and iLRB protocols. Additionally, we introduce the leakage damping noise model for $\mathrm{iSWAP}/\mathrm{SQiSW}/\mathrm{CZ}$ gates in flux-tunable superconducting quantum devices, based on their Hamiltonian evolution. We also test the iLRB protocols numerically using the noise model of the $\mathrm{iSWAP}$ gate. Finally, Section \ref{sec:discuss} concludes the paper with a discussion of our work and suggestions for future research directions. } \section{Notations} \label{sec:notation} In order to characterize leakage, we assume that the quantum states lie in a Hilbert space $\mathcal{H}$ with finite dimension $d$ that decomposes into a \emph{computational} and a \emph{leakage} subspace, denoted as $\mathcal{H}_c$ and $\mathcal{H}_l$ respectively. Let $d_c:=\dim\pbra{\mathcal{H}_c}$ and $d_l:=\dim\pbra{\mathcal{H}_l}=d-d_c$ be the dimensions of $\mathcal{H}_c$ and $\mathcal{H}_l$. Unless explicitly specified, we assume throughout the paper that a single qubit (site) lies in a three-dimensional Hilbert space with basis $\{|0\rangle,|1\rangle,|2\rangle\}$, where the computational subspace is spanned by $\{|0\rangle,|1\rangle\}$ and the leakage subspace by $\{|2\rangle\}$. In other words, higher-level excitations of a qubit can be ignored. We call such a system \emph{a single qubit with leakage}. A composite system of $n$ qubits with leakage lies in a Hilbert space $\mathcal{H}=\bigotimes_{k=1}^n (\mathcal{H}_{c_k}\oplus \mathcal{H}_{l_k})$, where $\mathcal{H}_{c_k}$ ($\mathcal{H}_{l_k}$) represents the computational (leakage) subspace of qubit $k$. We define the computational subspace of $\mathcal{H}$ to be the subspace where no qubit leaks, that is, $\mathcal{H}_c= \bigotimes_{k=1}^n\mathcal{H}_{c_k}$. Hence $d = 3^n$ and $d_c = 2^n$. 
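For concreteness, the following short numerical sketch (our own illustration, not part of the formal development; it uses \texttt{numpy} and assumes the qutrit model above) realizes this decomposition for a few sites and checks the dimension counting $d_c = 2^n$ and $d - d_c = 3^n - 2^n$; the projectors it constructs anticipate the operators ${\Pi}_c$ and ${\Pi}_l$ introduced next.
\begin{verbatim}
import numpy as np
from functools import reduce

# Single site with leakage: computational subspace span{|0>,|1>}, leakage level |2>.
Pi_c1 = np.diag([1.0, 1.0, 0.0])   # single-site projector onto the computational subspace
Pi_l1 = np.diag([0.0, 0.0, 1.0])   # single-site projector onto the leakage level

n = 3
Pi_c = reduce(np.kron, [Pi_c1] * n)     # projector onto H_c = (span{|0>,|1>})^{(x) n}
Pi_l = np.eye(3**n) - Pi_c              # projector onto the leakage subspace H_l

print(np.trace(Pi_c))   # 2**n = 8          (= d_c)
print(np.trace(Pi_l))   # 3**n - 2**n = 19  (= d - d_c)
\end{verbatim}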
The projector on the computational subspace ${\Pi}_c=\otimes_{k=1}^n{\Pi}_{c_k}$ is a tensor product where ${\Pi}_{c_k}$ is the projector onto the computational subspace on the $k$-th qubit. Note that the projector onto the leakage subspace on the $k$-th qubit is ${\Pi}_{l_k}=|2\rangle\langle 2|$ and the projector onto the leakage subspace ${\Pi}_{l}:=\mathbb{I}-{\Pi}_c\ne\otimes_{k=1}^n{\Pi}_{l_k}$, where $\mathbb{I}$ is the identity operator on $\mathcal{H}$. For each $\bm i=(i_1,i_2,\cdots,i_n)\in\cbra{c,l}^n$, we define $\mathcal{H}_{\bm i}:=\bigotimes_{k=1}^n \mathcal{H}_{i_k}$ to be the subspace where qubit $k$ is leaked if and only if $i_k=l$. The corresponding projector onto $\mathcal{H}_{\bm i}$ is ${\Pi}_{\bm i}:=\otimes_{k=1}^n{\Pi}_{(i_k)_k}$. Note that $\mathcal{H}=\bigoplus_{\bm i\in\cbra{c,l}^n}\mathcal{H}_{\bm i}, \mathcal{H}_c=\mathcal{H}_{c^n}$, and $\mathcal{H}_l = \bigoplus_{\bm i\neq c^n}\mathcal{H}_{\bm i}$. For each Hilbert space $\mathcal{H}_{\bm i}$, denote $\widetilde{{\Pi}}_{\bm i}:={\Pi}_{\bm i}/\dim(\mathcal{H}_{\bm i})$ the trace-normalized projector associated to the projector ${\Pi}_{\bm i}$. We assume the noise of interest to be Markovian and time-independent throughout this paper. Given an ideal unitary $U\in \bm{\mathrm U}(d_c)$, we denote $\mathcal{U}(\cdot) :=({\Pi}_l\oplus U)\cdot ({\Pi}_l\oplus U^\dagger)$ as the corresponding ideal unitary channel acting on the whole space. Given a completely-positive trace-preserving (CPTP) channel $\hat{\mathcal{U}}$ characterizing the noisy implementation of $\mathcal{U}$, we further denote $\Lambda:= \mathcal{U}^\dagger \circ\hat{\mathcal{U}}$ as the noise information of $\mathcal{U}$ accounting leakage. Note that $\hat{\mathcal{U}}=\mathcal{U}\circ\Lambda$ as $\mathcal{U}$ is a unitary channel. The average leakage and seepage rates of a channel $\Lambda$ are defined as~\cite{wood2018quantification} \begin{align} L_{\mathrm{ave}}\left(\Lambda\right)=\tr{{\Pi}_l\Lambda(\widetilde{{\Pi}}_c)};\\ S_{\mathrm{ave}}\left(\Lambda\right)=\tr{{\Pi}_c \Lambda(\widetilde{{\Pi}}_l)}. \label{eq:leakage_seepage_rate} \end{align} We often write $L_{\mathrm{ave}}$ and $S_{\mathrm{ave}}$ when the noise channel $\Lambda$ being referred to is unambiguous. Unless explicitly specified, we use the term ``leakage noise'' to represent both leakage and seepage errors. The Pauli group with phase $\mathbb{P}< \bm{\mathrm U}(2)$ is defined as $\pm\cbra{1,\text{i}}\times \cbra{I, X, Y, Z}$, where $X,Y,Z$ are $2\times2$ Pauli-X/Y/Z matrices respectively. Let $\mathbb{P}_n :=\mathbb{P}^{\times n}< \bm{\mathrm U}(2)^{\times n}$. For an element $P=\bigotimes_i P_i\in \mathbb{P}_n$, its corresponding ideal unitary channel in the full space is defined as $\mathcal{P}:=\bigotimes_i\mathcal{P}_i$. For sake of simplicity, we identify the element $P$ with its corresponding ideal channel $\mathcal{P}$, and use $\hat{\mathcal{P}}$ as a shorthand for the corresponding noisy implementation $\mathcal{P}\circ\Lambda$. Inspired by the Pauli-transfer matrix (PTM) representation~\cite{chow2012universal}, here we define the \emph{condensed-operator representation} $\supket{\cdot}$ of linear operators as the Liouville representation~\cite{wood2011tensor} with respect to the orthonormal operator basis $\mathcal{I}=\{{\Pi}_{\bm i}/\sqrt{\dim(\mathcal{H}_{\bm i})}\}_{i\in \{c,l\}^n}$. 
The basis is not complete in the sense that it does not span $\mathcal{L}(\mathcal{H})$; for a linear operator $\rho$ not lying in the span of $\mathcal{I}$, $\supket{\rho}$ is understood as the projection of $\rho$ onto the span of $\mathcal{I}$ followed by the vectorization, that is, $$\supket{\rho}:=\supket{\bar{\mathcal{P}}(\rho)},$$where $\bar{\mathcal{P}}:\mathcal{L}(\mathcal{H})\rightarrow \mathrm{span}(\mathcal{I}); \bar{\mathcal{P}}(\rho):=\sum_{\bm i}\tr{{\Pi}_{\bm i}\rho}\widetilde{{\Pi}}_{\bm i}$ is the \emph{twirling projector} from $\mathcal{L}(\mathcal{H})$ to $\mathrm{span}(\mathcal{I})$. For sake of clarity, in the following, we represent the condensed operator representations under the basis $\{\supket{\widetilde{{\Pi}}_{\bm i}}\}_{\bm i}$, and the adjoints under the basis $\{\supbra{{\Pi}_{\bm i}}\}_{\bm i}$. Note that $\supbraket{{\Pi}_{\bm i}|\widetilde{{\Pi}}_{\bm j}} =\delta_{\bm{i}\bm{j}}$. Under such basis choice, for a generic linear operator $A\in\mathcal{L}(\mathcal{H})$, we have $$\supket{A}=\sum_{\bm i}\tr{{\Pi}_{\bm i}A}\supket{\widetilde{{\Pi}}_{\bm i}} \text{ and } \supbra{A}=\sum_{\bm i}\tr{\widetilde{{\Pi}}_{\bm i}A^\dag}\supbra{{\Pi}_{\bm i}}.$$ For a superoperator $\Lambda$, the corresponding condensed operator representation is then \begin{equation} Q_\Lambda :=\sum_{\bm i, \bm j} \tr{{\Pi}_{\bm i}\Lambda(\widetilde{{\Pi}}_{\bm j})}\supket{\widetilde{{\Pi}}_{\bm i}}\supbra{{\Pi}_{\bm j}}. \label{eq:channel_COR} \end{equation} Since $\mathcal{I}$ does not form a complete basis, compositions of condensed operator representations do not directly translate to compositions of the corresponding linear operators; rather they translate to compositions of the twirled versions of the corresponding linear operators through the twirling projector $\bar{\mathcal{P}}$. More specifically, we have \begin{align} Q_{\Lambda_1}Q_{\Lambda_2}&=Q_{\Lambda_1\circ\bar{\mathcal{P}}\circ\Lambda_2},\label{eqn:channel_comp}\\ Q_{\Lambda}\supket{\rho}&=\supket{(\Lambda\circ \bar{\mathcal{P}})(\rho)},\label{eqn:channelket_comp}\\ \supbra{M}Q_\Lambda\supket{\rho}&=\tr{M\cdot (\bar{\mathcal{P}}\circ \Lambda\circ \bar{\mathcal{P}})(\rho)}.\label{eqn:braket_comp} \end{align} We denote $[n]:=\{1,\ldots, n\}$ throughout the paper. \section{Leakage randomized benchmarking protocol} \label{sec:leak_rateGen} Here we present a leakage randomized benchmarking protocol that does not require actions on the leakage subspace or assumptions about SPAM errors. Our protocol is based on the assumption that the noise, represented by the operator $\Lambda$, is Markovian, time-independent, and gate independent. We further assume we have access to a noisy measurement operator $\hat{{\Pi}}_c$ close to the projector to the computational subspace ${\Pi}_c$. \begin{itemize} \item [(1)] Given a sequence length $m$, sample a sequence of $m$ Paulis $\mathcal{P}_1,\ldots, \mathcal{P}_m$ from $\mathbb{P}_n$ uniformly i.i.d., and perform them sequentially to a fixed (noisy) initial state $\hat{\rho}_0$, obtaining $\hat\mathcal{P}_{m} \circ\cdots \circ\hat\mathcal{P}_{1}(\hat{\rho}_0)$. Measure the output state under $\hat{{\Pi}}_c$ and estimate the probability $p_{{\Pi}_c} (\mathcal{P}_1,\ldots, \mathcal{P}_m)= \tr{\hat{{\Pi}}_c\hat\mathcal{P}_{m}\circ \cdots \circ \hat\mathcal{P}_{1}(\hat{\rho}_0) }$ through repeated experiments. 
\item[(2)] {Repeat Step (1) multiple times to estimate $p_{{\Pi}_c}(m)$, the expectation of $p_{{\Pi}_c} (\mathcal{P}_1,\ldots, \mathcal{P}_m)$ under random choices of $\mathcal{P}_1,\ldots,\mathcal{P}_m$ from $\mathbb{P}_n$.} \item[(3)] Repeat Step (2) for different $m$, and fit $\{(m, p_{{\Pi}_c } (m))\}$ to a multi-exponential decay curve ${p}_{{\Pi}_c}(m)=\sum_i A_i\cdot \lambda_i^m$. \end{itemize} The average leakage rate $L_{\mathrm{ave}}$ and seepage rate $S_{\mathrm{ave}}$ are estimated with the fitted exponents $\lambda_i$. The number of exponents for $p_{{\Pi}_c}(m)$ depends on the specific noise model of $\Lambda$. In the following, we will show the explicit representation of $\mathbb{E}\sbra{p_{{\Pi}_c}(m)}$ and $\lambda_i$. The Pauli group $\mathbb{P}_n$ can twirl any quantum state in the computational subspace to the maximally mixed state~\cite{dankert2009exact,ambainis2000private}, i.e., $\frac{1}{\abs{\mathbb{P}_n}}\sum_{\mathcal{P}_c\in\mathbb{P}_n} \mathcal{P}_c(\rho_c)=\tr{\rho_c}\widetilde{{\Pi}}_c$, where $\rho_c\in\mathcal{L}(\mathcal{H}_c)$ is a quantum state in the computational subspace. Here we extend the twirling property of the Pauli group from the computational subspace to the entire Hilbert space, as shown in Lemma \ref{lem:PerfectPauli}. \begin{lemma} {Let $\bar{\mathcal{P}}$ be the twirling projector such that $\bar\mathcal{P}(\rho) = \sum_{\bm i} \tr{\Pi_{\bm i} \rho} \widetilde{\Pi}_{\bm i}$ for any quantum state $\rho$. Then it can be equivalently represented as the expectation over all Pauli channels,} \begin{equation} \bar{\mathcal{P}} = \frac{1}{\abs{\mathbb{P}_n}}\sum_{\mathcal{P} \in \mathbb{P}_n} \mathcal{P}. \end{equation} \label{lem:PerfectPauli} \end{lemma} Lemma \ref{lem:PerfectPauli} can be obtained from the twirling properties of the Pauli group $\mathbb{P}_n$ in the computational subspace. We postpone the proof of Lemma \ref{lem:PerfectPauli} to Appendix \ref{app:proof_lem_PerfectPauli}. With Lemma \ref{lem:PerfectPauli}, we can establish the connection between $L_{\mathrm{ave}}, S_{\mathrm{ave}}$ and the multi-exponential decay curve $p_{{\Pi}_c}(m)$, as shown in the following theorem. \begin{theorem} Given the Pauli group $\mathbb{P}_n$ with a gate-independent leakage error channel $\Lambda$, the average output probability in the LRB protocol is ${p_{{\Pi}_c}(m)}=\supbra{\hat{{\Pi}}_c} Q^{m-1}\supket{\tilde{\rho}_0}$, where $Q:=Q_\Lambda$ is the condensed-operator representation of $\Lambda$ and $\tilde\rho_0$ is some noisy state determined by the input state $\hat{\rho}_0$. The average leakage rate equals $L_\mathrm{ave} = 1 - Q_{c^n,c^n}$ and the average seepage rate equals $S_\mathrm{ave} = \frac{1}{3^n - 2^n}\sum_{\bm i\ne c^n}\dim\pbra{\mathcal{H}_{\bm i}} Q_{c^n,\bm i}$. \label{thm:lrb_global} \end{theorem} \begin{proof} Let $\mathcal{P}_{1},\ldots, \mathcal{P}_{m}$ be the ideal gate elements sampled from $\mathbb{P}_n$. 
Then the expectation of the probability for measuring computational basis equals \begin{align} p_{{\Pi}_c}(m) &=\frac{1}{\abs{\mathbb{P}_n}^m}\sum_{\mathcal{P}_{1},\ldots,\mathcal{P}_{m}\in \mathbb{P}_n} \tr{\hat {\Pi}_c\hat{\mathcal{P}}_m\circ\cdots\circ \hat{\mathcal{P}}_1 \pbra{\hat{\rho}_0}} \\ &= \tr{\hat {\Pi}_c\pbra{\frac{1}{|\mathbb{P}_n|}\sum_{\mathcal{P}\in \mathbb{P}_n}\mathcal{P}\circ \Lambda}^{m}\pbra{\hat{\rho}_0}} \\ &= \tr{\hat {\Pi}_c\pbra{\bar\mathcal{P}\circ \Lambda}^{m}\pbra{\hat{\rho}_0}} \label{eq:pauli_to_twirl}\\ &= \tr{\hat {\Pi}_c\pbra{\bar\mathcal{P}\circ \Lambda}^{m-1}\bar{\mathcal{P}}\circ\Lambda(\hat{\rho}_0})\\ &= \tr{\hat {\Pi}_c \bar{\mathcal{P}} \circ(\Lambda\circ \pbra{\bar\mathcal{P}\circ \Lambda}^{m-2})\circ \bar{\mathcal{P}}\circ{\Lambda(\hat{\rho}_0})} \\ &= \supbra{\hat {\Pi}_c} Q_{\Lambda\circ (\bar{\mathcal{P}} \circ\Lambda)^{m-2}}\supket{\tilde{\rho}_0} \label{eq:braket}\\ &= \supbra{\hat {\Pi}_c} Q_{\Lambda}^{m-1}\supket{\tilde{\rho}_0} . \label{eq:gen_pauli_sec} \end{align} where $\tilde{\rho}_0 := \Lambda \pbra{\hat{\rho}_0}, \hat{\rho}_0$ is the input state with state preparation noise. Eq. \eqref{eq:pauli_to_twirl} holds by Lemma \ref{lem:PerfectPauli}; Eqs. \eqref{eq:braket} and \eqref{eq:gen_pauli_sec} follows from Eqs. \eqref{eqn:braket_comp} and \eqref{eqn:channel_comp} respectively. By the definition of $Q$, we have $Q_{\bm i,\bm j}=\tr{{\Pi}_{\bm i}\Lambda(\widetilde{{\Pi}}_{\bm j})}.$ Moreover, for every ${\bm j}$ it holds that $$\sum_{\bm i}Q_{\bm i, \bm j}=\tr{\Lambda(\widetilde{{\Pi}}_{\bm j})}=\tr{\widetilde{{\Pi}}_{\bm j}}=1$$ since $\Lambda$ preserves the trace. This indicates that $Q$ is a Markov chain transition matrix. By the definitions of $L_\mathrm{ave} $ and $S_\mathrm{ave} $ in Eq. \eqref{eq:leakage_seepage_rate}, we have \begin{align} L_\mathrm{ave} = \tr{{\Pi}_{l}\Lambda\pbra{\widetilde{{\Pi}}_{c}}} =\supbra{{\Pi}_{l}} Q\supket{\widetilde{{\Pi}}_{c}}=\sum_{\bm i\neq c^n} \supbra{{\Pi}_{\bm i}} Q\supket{\widetilde{{\Pi}}_{c}}= 1 - Q_{c^n,c^n} \end{align} and \begin{align} S_\mathrm{ave} &= \tr{{\Pi}_{c}\Lambda\pbra{\widetilde{{\Pi}}_{l}}}\\ &= \sum_{\bm i\ne c^n} \tr{{\Pi}_{c}\Lambda\pbra{\frac{\dim(\mathcal{H}_{\bm i})}{d_l}\widetilde{{\Pi}}_{\bm i}}}\\ &=\frac{1}{3^n - 2^n} \sum_{\bm i\ne c^n}\dim\pbra{\mathcal{H}_{\bm i}} Q_{c^n,\bm i}. \end{align} \end{proof} Theorem \ref{thm:lrb_global} demonstrates that Pauli-twirled quantum channels with leakage can be represented as Markov chains operating on distinct leakage subspaces, including the computational subspace itself. The leakage properties can be inferred from the spectral characteristics of the transition matrix, akin to analyses of RB protocols in the computational space \cite{helsen2022general}. However, this framework does not directly provide an easily applicable LRB scheme, as the transition matrix $Q$ typically has a dimension of $2^n$, resulting in complex matrix exponential decay behavior as the number of qubits increases. Nonetheless, estimating the leakage rate can be significantly simplified in scenarios where the number of qubits is small enough to allow manageable matrix exponential decay or when additional assumptions can be made about the leakage behavior. In the subsequent sections, we propose several physically relevant leakage noise models with straightforward theoretical exponential decay curves suitable for experimental implementation. 
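To make the Markov-chain picture concrete, the following sketch (with invented entries that do not correspond to any physical device) builds a small condensed-operator matrix $Q$ for $n=2$, reads off $L_{\mathrm{ave}}$ and $S_{\mathrm{ave}}$ as in Theorem \ref{thm:lrb_global}, and verifies that the survival probability $p_{{\Pi}_c}(m)$ follows a multi-exponential decay whose exponents are the eigenvalues of $Q$.
\begin{verbatim}
import numpy as np

# Toy condensed-operator matrix Q for n = 2, sectors ordered as (cc, cl, lc, ll).
# The entries are invented for illustration; columns sum to 1 (trace preservation).
Q = np.array([
    [0.990, 0.020, 0.030, 0.000],
    [0.006, 0.975, 0.000, 0.010],
    [0.004, 0.000, 0.965, 0.010],
    [0.000, 0.005, 0.005, 0.980],
])
assert np.allclose(Q.sum(axis=0), 1.0)

L_ave = 1.0 - Q[0, 0]                       # L_ave = 1 - Q_{c^n, c^n}
dims  = np.array([2, 2, 1])                 # dim(H_cl), dim(H_lc), dim(H_ll)
S_ave = dims @ Q[0, 1:] / (3**2 - 2**2)     # sum_i dim(H_i) Q_{c^n, i} / (3^n - 2^n)

rho0 = np.array([1.0, 0.0, 0.0, 0.0])       # ideal preparation inside H_c
meas = np.array([1.0, 0.0, 0.0, 0.0])       # ideal measurement of Pi_c

evals, evecs = np.linalg.eig(Q)
A = (meas @ evecs) * (np.linalg.inv(evecs) @ rho0)    # coefficients A_i
for m in (1, 10, 50, 200):
    p_markov = meas @ np.linalg.matrix_power(Q, m - 1) @ rho0
    p_multi  = np.real(A @ evals ** (m - 1))          # sum_i A_i lambda_i^(m-1)
    print(m, p_markov, p_multi)                       # the two columns agree
\end{verbatim}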
\section{Average leakage rate for specific noise} \label{sec:lrb_specific} In this section, we present two specific leakage noise models: single-site leakage damping noise and local (cross-talk-free) leakage damping noise. We also provide the respective average leakage rates for each model. In the following, we investigate the average leakage rate for specific leakage noise where leakage only happens on a single site (qubit). For any $1\leq i\leq n$, we define $$B_i=\{{\bm a}\in\{0,1,2\}^n|a_i=2; a_j\in\{0,1\} ,\forall j\neq i\}$$ such that $\{|k\rangle\}_{k\in B_i}$ forms a basis of the specific leakage subspace $\mathcal{H}_{c^{i-1}lc^{n-i}}$ where only the qubit $i$ is leaking. Let $\mathcal{H}_{l,(1)}:=\bigoplus_{i} \mathcal{H}_{c^{i-1}lc^{n-i}}$ be the leakage subspace in which exactly one qubit is leaking, with the corresponding basis set $B:=\bigcup_i B_i$. We propose a {\emph{single-site leakage damping noise model}} as a generalization of the amplitude damping noise~\cite{nielsen2002quantum}: \begin{definition} {Let the set $W:=(\cbra{0,1}^n, B)\cup (B, \cbra{0,1}^n)\cup \cbra{(B_i,B_i)}_{i=1}^n\cup(\cbra{0,1}^n,\cbra{0,1}^n)$. Define the Kraus operators \begin{align} \begin{aligned} &E_{kk'} := \sqrt{p_{kk'}}\ket{k'}\bra{k}, \forall (k,k')\in W,\\ &E_0 = \sqrt{{\mathbb{I}} - \sum_{(k,k')\in W}E_{kk'}^{\dagger}E_{kk'}} \end{aligned} \label{eq:specific_noise_2} \end{align} where the probabilities satisfy $p_{kk'},\sum_{k'}p_{kk'}, \sum_{k'}p_{k'k}\in[0,1]$ for any $(k,k')\in W$ for which the probabilities $p_{kk'}$ and $p_{k'k}$ are defined. The single-site leakage damping noise model is defined as a CPTP map $\Lambda$ such that \begin{align} \Lambda(\rho) = E_0\rho E_0^\dagger + \sum_{(k,k')\in W} E_{kk'} \rho E_{kk'}^\dagger \label{eq:specific_noise} \end{align} for any input state $\rho$. Denote the average leak and seep probabilities associated with the $i$-th site as \begin{align} p_i := \frac{1}{2^n}\sum_{k\in\{0,1\}^n, k'\in B_i} p_{kk'}, \quad q_{i} := \frac{1}{2^n}\sum_{k\in\{0,1\}^n, k'\in B_i} p_{k'k} \label{eq:p_iq_i} \end{align} \label{def:Single-site-leakage-noise} respectively.} \end{definition} In the above definition, the parameters $p_{kk'}$ can be understood as the probability of the state $\ket{k}$ being flipped to $\ket{k'}$ under the leakage damping noise, and $E_{0}$ acts as the identity on the subspace where leakage happens on more than one site, so the noise model has no effect there. It is easy to check that $E_0^{\dagger}E_0 + \sum_{(k,k')\in W}E_{kk'}^{\dagger}E_{kk'}= {\mathbb{I}}$, hence $\Lambda$ is a CPTP map~\cite{nielsen2002quantum} on the Hilbert space $\mathcal{H}$. Additionally, we introduce Eq. \eqref{eq:p_iq_i} to simplify the representation, and we will find that the average leakage and seepage rates are only related to $p_i$ and $q_i$ for all $i\in [n]$. The prefactor $1/2^n$ is added to fit the definition of ``average'' leakage and seepage rates in Eq. \eqref{eq:leakage_seepage_rate}. \subsection{Single-site leakage noise} \label{sec:single-site-leakage-noise} For the particular noise model described in Definition \ref{def:Single-site-leakage-noise}, we can simplify the average leakage rate from Theorem \ref{thm:lrb_global} as stated in the following theorem. \begin{theorem} Let $\Lambda$ be a single-site leakage damping channel as described in Definition \ref{def:Single-site-leakage-noise}. Let $p_i$ and $q_{i}$ be as defined in Eq. \eqref{eq:p_iq_i}, and assume that $p_i> 0$ for all $i$ and $q_1\geq\cdots \geq q_n$. 
Then after performing $n$-site LRB protocol, the expectation of the probability for measuring computational basis $p_{{\Pi}_{c}}(m)=\sum_{i=0}^n A_i\lambda_i^{m}$, where $A_i$ are real numbers, $\lambda_0\leq \lambda_1\leq\ldots\leq \lambda_n =1$, and $1-2q_{i}\leq \lambda_i\leq 1-2q_{i+1}$ for $1\leq i\leq n-1$, $1-2q_1-\sum_i p_i\leq \lambda_0\leq \min\pbra{1-2q_1,1- 2q_{n}-\sum_i p_i}$. The average leakage and seepage rates of $\Lambda$ are $L_{\mathrm{ave}} = \sum_i p_i$ and $S_{\mathrm{ave}} = \frac{2^n }{3^n - 2^n}\sum_i q_i$ respectively. \label{thm:lrb_specific} \end{theorem} \begin{proof} If the noise model is described by Definition \ref{def:Single-site-leakage-noise}, the corresponding condensed-operator representation only acts non-trivially on the $n+1$-dimensional subspace spanned by $\left\{\supket{\widetilde{{\Pi}}_{\bm_i}}\mid |\{k|i_k=l\}|\leq 1\right\}$, as follows \begin{equation} Q = \begin{pmatrix} 1-\sum_i p_i & 2q_1& \ldots & 2q_n\\ {p_1} & 1 - 2{q_1} & \ldots & 0\\ \vdots & \vdots & \vdots & \vdots \\ {p_n} & 0 & \ldots & 1 - 2{q_n} \end{pmatrix}, \label{eq:Q_specificNoise} \end{equation} where $p_i,q_i$ are defined in Eq. \eqref{eq:p_iq_i}. This transition matrix can be illustrated in Fig.~\ref{fig:model}. Eq. \eqref{eq:Q_specificNoise} holds since $Q_{c^{i-1}lc^{n-i},c^{n}} = \tr{{\Pi}_{c^{i-1}lc^{n-i} }\Lambda(\widetilde{\Pi}_{c^n})} = {p_i}$, and similarly we can get other elements of $Q$. \begin{figure}\label{fig:model} \end{figure} Although the spectrum of the transition matrix $Q$ cannot be explicitly solved in the general case, it is possible to derive bounds on all its eigenvalues by examining its characteristic polynomial. For simplicity, we prove the theorem under a generic scenario where $n\geq 2$ and $q_1>q_2>\cdots>q_n$. In this case, it can be demonstrated that all eigenvalues of $Q$ are distinct, making $Q$ inherently diagonalizable. A detailed analysis of situations where algebraic multiplicities arise can be found in Appendix \ref{app:lrb_specific}. Denote $x_i:=1-2q_i$. Consider \begin{align} \det( Q - \lambda {\mathbb{I}})&= (1 - \sum_{i=1}^n p_i - \lambda)\prod_{j = 1}^n (1 - \lambda - 2q_j) - \sum_{i = 1}^n 2p_iq_i\prod_{j\in[n]\backslash \cbra{i}}(1 - \lambda - 2q_j)\\ &=(1-\lambda) \left(\prod_{j = 1}^n (x_j - \lambda) - \sum_{i = 1}^n p_i\prod_{j\in[n]\backslash \cbra{i}}(x_j-\lambda) \right). \label{eq:characteristic-polynomial} \end{align} where $[n]:=\{1,\ldots, n\}$. Hence $\lambda = 1$ is an eigenvalue of $Q$. Let \begin{equation} f(x) := \prod_{i=1}^n(x_i-x) - \sum_{i=1}^n p_i\prod_{j\in [n]\backslash \cbra{i}}(x_j-x), \label{eq:f_definition} \end{equation} then the roots of function $f(x)$ are meanwhile the eigenvalues of $Q$. Note that \begin{equation} f(x_k) = -p_k\prod_{j\in [n]\backslash \cbra{k}}( x_j-x_k). \label{eq:f_root} \end{equation} As $q_1>q_2>\cdots>q_n\geq0$ and $p_i>0$, we have $x_1<x_2<\cdots<x_n\leq 1$. It can be seen that $f(x_i)$ and $f(x_{i+1})$ always have different signs, indicating a zero in $(x_i, x_{i+1})$ for all $i\in[n-1]$. As $\deg(f)=n$, there is only one zero left to be determined, which is guaranteed to be real since all the other zeros are real. Let \begin{equation} h(x)=\frac{f(x) }{\prod_{i\in [n]}(x_i -x)} = 1-\sum_{i\in [n]} \frac{p_i}{(x_i-x)}. 
\label{eq:g-lambda} \end{equation} When $x<x_1$, $h(x)$ and $f(x)$ have the same sign, and $$1-\frac{\sum_{i\in[n]}p_i}{(x_1-x)}> h(x)> 1-\frac{\sum_{i\in[n]}p_i}{(x_n-x)}.$$ Therefore we have \begin{itemize} \item $f(x_1)=-p_1\prod_{j\in[n]\setminus\{1\}}(x_j-x_1)<0$, \item $h(x_1-\sum_{i\in[n]}p_i)< 0$, \item $h(x_n-\sum_{i\in[n]}p_i)> 0$, \end{itemize} indicating that $f$ has a zero in $(x_1-\sum_{i\in[n]}p_i, \min(x_1, x_n-\sum_{i\in[n]}p_i))$. To summarize, we have a complete characterization of all eigenvalues $\lambda_0<\lambda_1<\cdots<\lambda_n$ of $Q$, namely \begin{itemize} \item $\lambda_0\in (x_1-\sum_{i\in[n]}p_i, \min(x_1, x_n-\sum_{i\in[n]}p_i))$, \item $\lambda_i\in (x_i, x_{i+1}), \forall i \in[n-1]$; \item $\lambda_n=1$. \end{itemize} By Theorem \ref{thm:lrb_global}, the average leakage and seepage rates for the Pauli group with this specific noise equal $\sum_i p_i$ and $\frac{2^{n}}{3^n - 2^n}\sum_i q_i$ respectively. \end{proof} We assume in Theorem \ref{thm:lrb_specific} that $p_i>0$ for all $i$. When $p_i=0$ for some $i$, the matrix $Q$ might not be fully diagonalizable, requiring more complex data processing schemes. From a physical perspective, such complications can be mitigated by preparing the initial state such that the initial leakage on qubit $i$ is negligible. Theorem \ref{thm:lrb_specific} shows that when the seepage probabilities of all qubits are close to each other and close to the leakage probabilities, i.e., $p_i\approx p_j\approx \bar{p}_{\mathbb{P}}$ and $q_i\approx q_j\approx \bar{q}_{\mathbb{P}}$ for all $i,j\in[n]$, then the multi-exponential decay approximately collapses to a two-exponential decay with $\lambda_1\approx 1-2\bar{q}_{\mathbb{P}}, \lambda_0 = 1 - 2q_n - \sum_{i}p_i\approx 1 - 2\bar{q}_{\mathbb{P}} - n\bar{p}_{\mathbb{P}}$. Using the properties of the eigenvectors in the eigendecomposition of the transition matrix $Q$, we can further simplify the exponential curve to a single decay, since the coefficient of $\lambda_1$ equals zero when the state preparation noise is negligible. The leakage and seepage of the $n$-qubit system can consequently be derived according to $L_{\mathrm{ave}} = \sum_{i}p_i\approx n\bar{p}_{\mathbb{P}}$ and $S_{\mathrm{ave}} = \frac{2^{n}}{3^n-2^n}\sum_{j}q_j\approx \frac{n2^{n}}{3^n-2^n} \bar{q}_{\mathbb{P}}$, as shown in the following corollary. \begin{corollary} Let the leakage noise $\Lambda$ be as described in Definition \ref{def:Single-site-leakage-noise} such that $p_i = p_j = \bar{p}_{\mathbb{P}}>0$ and $q_i = q_j = \bar{q}_{\mathbb{P}}$ for all $i,j\in [n]$, and assume state preparation is noiseless. Then, after performing the $n$-site LRB protocol, the expected probability of measuring the computational subspace is $p_{{\Pi}_c}(m) = A + B \pbra{1 - 2\bar{q}_{\mathbb{P}}-n\bar{p}_{\mathbb{P}}}^m$, where $A,B$ are some real constants. The average leakage and seepage rates of $\Lambda$ are $L_{\mathrm{ave}} = \sum_{i}p_i = n\bar{p}_{\mathbb{P}}$ and $S_{\mathrm{ave}} = \frac{2^{n}}{3^n-2^n}\sum_{j}q_j = \frac{n2^{n}}{3^n-2^n} \bar{q}_{\mathbb{P}}$ respectively. \label{cor:lrb_arp} \end{corollary} The decay rate $1 - 2\bar{q}_{\mathbb{P}}-n\bar{p}_{\mathbb{P}}$ obtained from the LRB experiment does not provide sufficient information to fully determine $L_\mathrm{ave}$ and $S_\mathrm{ave}$. Rather, additional prior knowledge is required, such as the ratio of the leakage and the seepage rates. We postpone the proof of this corollary to Appendix \ref{app:lrb_barp}. 
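The $(n+1)$-state chain of Eq. \eqref{eq:Q_specificNoise} is small enough to be explored directly. The sketch below (with parameter values chosen arbitrarily for illustration) assembles $Q$ for $n=3$, checks numerically that its spectrum respects the bounds of Theorem \ref{thm:lrb_specific}, and evaluates $L_{\mathrm{ave}}=\sum_i p_i$ and $S_{\mathrm{ave}}=\frac{2^n}{3^n-2^n}\sum_i q_i$.
\begin{verbatim}
import numpy as np

# State 0 is the computational sector c^n; state i > 0 means "only qubit i has leaked".
# The p_i, q_i below are arbitrary illustration values with q_1 >= q_2 >= q_3 and p_i > 0.
n = 3
p = np.array([3.0e-4, 2.0e-4, 2.5e-4])   # leak probabilities p_i
q = np.array([4.0e-4, 3.0e-4, 2.0e-4])   # seep probabilities q_i

Q = np.zeros((n + 1, n + 1))
Q[0, 0]  = 1.0 - p.sum()
Q[1:, 0] = p                              # leak:  c^n -> (qubit i leaked)
Q[0, 1:] = 2.0 * q                        # seep:  (qubit i leaked) -> c^n
Q[np.arange(1, n + 1), np.arange(1, n + 1)] = 1.0 - 2.0 * q

lam = np.sort(np.linalg.eigvals(Q).real)
print(lam)                                # largest eigenvalue is 1; the others interlace
                                          # with 1 - 2 q_i as stated in the theorem above
print("L_ave =", p.sum())
print("S_ave =", 2**n / (3**n - 2**n) * q.sum())
\end{verbatim}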
\subsection{Cross-talk-free leakage noise} Previous studies have indicated that cross-talk in real devices can be significantly minimized~\cite{sung2021realization}. In this subsection, we demonstrate that the exponential decay can be simplified under the condition that leakage noise occurs independently and locally across different qubits. We make the assumption that the local noise adheres to Definition \ref{def:Single-site-leakage-noise} for each individual gate. It is important to note that in this context, the noise is inherently single-site, as each qubit possesses only one leakage site. \begin{corollary} By performing the LRB circuit for the Pauli group in an $n$-site cross-talk-free system, the expectation of the output probability for the computational subspace of the $k$-th qubit is equal to $p_{{\Pi}_{c_k}}(m) = A +B\lambda^m_{k} $, and the average leakage and seepage rates are \begin{align} L_{\mathrm{ave}} &= 1 - \prod_{k = 1}^n \pbra{1 - {p_k}},\\ S_{\mathrm{ave}} &= \frac{2^n}{3^n - 2^n} \prod_{k = 1}^n \pbra{1 - p_k + q_k} -\frac{2^n}{3^n - 2^n} \prod_{k = 1}^n (1 - p_k) \end{align} where $m$ is the number of Pauli gates, $\lambda_k = 1 - p_k - 2q_k$, $A,B$ are some real numbers, and $p_k,q_k$ are the leakage and seepage rates of the $k$-th qubit associated with Eq. \eqref{eq:p_iq_i}. \label{coro:no_crosstalkPauli} \end{corollary} We postpone the proof of the corollary to Appendix \ref{app:no_crosstalkPauli}. This corollary can be obtained by restricting the noise in Theorem \ref{thm:lrb_specific} to be a tensor product of local noise channels on the individual qubits. Then, if $p_k\approx q_k$ or the relationship between $p_k$ and $q_k$ is known from an analysis of the system, we can estimate $L_{\mathrm{ave}}, S_{\mathrm{ave}}$ by fitting $p_k$ from $p_{{\Pi}_{c_k}}$ for each $k\in [n]$ independently. By Corollary \ref{coro:no_crosstalkPauli}, the fitted curve associated with $p_{{\Pi}_c}$ will not follow a single exponential decay, since \begin{equation} p_{{\Pi}_c} (m)= p_{{\Pi}_{c_1}}(m)\cdots p_{{\Pi}_{c_n}}(m)= \pbra{A_1 + B_1 \lambda_1^m} \cdots \pbra{A_n + B_n \lambda_n^m}. \end{equation} We can check that when $n$ equals $2$, there will be three exponents, $\lambda_1 = 1 - p_1 - 2q_1, \lambda_2 = 1 - p_2 - 2q_2$ and $\lambda_3 = (1 - p_1 - 2q_1)(1 - p_2 - 2q_2)$, with average leakage rate $L_{\mathrm{ave}} = 1 - (1 - p_1)(1 - p_2)$ and seepage rate $S_{\mathrm{ave}} = \frac{4}{5}(1 - p_1 + q_1)(1 - p_2 + q_2) -\frac{4}{5}(1 - p_1)(1 - p_2)$. \section{Interleaved LRB protocol for specific target gates} \label{sec:iLRB_protocol} In this section, we focus on benchmarking specific target gates. Benchmarking the leakage rate of an arbitrary target gate $T$ differs from benchmarking the leakage rate of the Pauli group, as the target gate does not readily form the Pauli group. We propose an interleaved variant of the LRB protocol, in analogy with the interleaved randomized benchmarking protocol~\cite{magesan2012efficient}, named iLRB (interleaved leakage randomized benchmarking). The target $T$ can be any gate scheme, provided that the associated leakage noise model conforms to the form discussed in this section. The iLRB protocol is outlined as follows: \begin{itemize} \item [(1)] Sample a sequence of $m$ Paulis $\mathcal{P}_1,\ldots, \mathcal{P}_m$ from $\mathbb{P}_n$ and apply them sequentially to the noisy initial state $\hat{\rho}_0$, interleaved with the target gate $\mathcal{T}$, to get $\hat\mathcal{P}_{m}\circ\hat{\mathcal{T}} \circ\cdots \circ\hat\mathcal{P}_{1}\circ\hat{\mathcal{T}}(\hat{\rho}_0)$. 
Measure the output states and estimate $p_{{\Pi}_c }\pbra{\mathcal{P}_1, \ldots, \mathcal{P}_m} = \tr{\hat{{\Pi}}_c\hat{\mathcal{P}}_m\circ\hat{\mathcal{T}}\circ\cdots \circ\hat{\mathcal{P}}_1\circ\hat{\mathcal{T}}\pbra{\hat{\rho}_0}} $ through repeated experiments. \item [(2)] Repeat Step (1) multiple times to estimate $p_{{\Pi}_c}(m)$, the expectation of $p_{{\Pi}_c }\pbra{\mathcal{P}_1, \ldots, \mathcal{P}_m}$ under random choices of $\mathcal{P}_1,\ldots, \mathcal{P}_m$. \item [(3)] Sample a sequence of $m$ Paulis $\mathcal{P}_1,\ldots, \mathcal{P}_m$ in $\mathbb{P}_n$, and perform them sequentially to the prepared noisy initial state $\hat{\rho}_0$, i.e., $\hat\mathcal{P}_{m} \circ\cdots \circ \hat\mathcal{P}_{1}(\hat{\rho}_0)$. Measure the output states and estimate $p_{{\Pi}_c,\mathbb{P} }\pbra{\mathcal{P}_1, \ldots, \mathcal{P}_m} = \tr{\hat{{\Pi}}_c\hat{\mathcal{P}}_m\circ\cdots \circ\hat{\mathcal{P}}_1\pbra{\hat{\rho}_0}} $ through repeated experiments. \item [(4)] Repeat Step (3) multiple times to estimate $p_{{\Pi}_c,\mathbb{P}}(m)$, the expectation of $p_{{\Pi}_c,\mathbb{P} }\pbra{\mathcal{P}_1, \ldots, \mathcal{P}_m}$ under random choices of $\mathcal{P}_1,\ldots, \mathcal{P}_m$. \item[(5)] Repeat Steps (2), (4) for different $m$, and fit the exponential decay curves of $p_{{\Pi}_c }(m), p_{{\Pi}_c,\mathbb{P}} (m)$ with respect of $m$. \end{itemize} When the leakage noise of the Pauli gates is negligible compared to that of the target gate $\mathcal{T}$, we can benchmark any target gate $\hat{\mathcal{T}}=\mathcal{T}\circ \Lambda_{\mathcal{T}}$ where $\Lambda_{\mathcal{T}}$ has the same leakage noise as in Definition \ref{def:Single-site-leakage-noise}, by only performing the first two steps of the above iLRB protocol. In this case, we can directly leverage Theorem \ref{thm:lrb_specific} to get the average leakage rate of $\Lambda_T$. When the leakage noise of the Pauli gates is not negligible, however, steps (3) and (4) are needed to separate the target gate leakage from the Pauli gate leakage, and more assumptions on the target gate leakage noise are needed. We assume a specific case of the noise model in Definition \ref{def:Single-site-leakage-noise}, where the target gate $\mathcal{T}$ has the noisy implementation $\hat{\mathcal{T}} =\mathcal{T}\circ\Lambda_T=\Lambda_T\circ\mathcal{T}$ and the noise $\Lambda_T$ is defined in Definition \ref{def:simplifiedSingle_iteNoise_iLRB} with the same value for all of $p_i,q_j$ in Eq. \eqref{eq:p_iq_i}. Similarly, we assume that $\hat{\mathcal{P}}=\mathcal{P}\circ\Lambda_{\mathbb{P}}$ with noise channel $\Lambda_{\mathbb{P}}$ as defined in Definition \ref{def:simplifiedSingle_iteNoise_iLRB} also having same value for all of $p_i,q_j$ in Eq. \eqref{eq:p_iq_i}. \begin{definition} We define the \emph{simplified single-site leakage damping noise} model as a CPTP map $\Lambda$ such that \begin{equation} \Lambda(\rho) = E_0 \rho E_0^\dagger + \sum_{i = 1}^{n} E_{0i} \rho E_{0i}^\dagger + \sum_{i=1}^n E_{i0} \rho E_{i0}^\dagger \end{equation} for any input sate $\rho$, where \begin{align} E_0 &= \sqrt{ 1 - np} \ket{u_0}\bra{u_0} + \sum_{i = 1}^n \sqrt{1 - p}\ket{u_i}\bra{u_i} + \sum_{i\in S} \ket{i}\bra{i}, \\ E_{0i} &= \sqrt{p}\ket{u_i}\bra{u_0},\quad E_{i0} = \sqrt{p}\ket{u_0}\bra{u_i}\quad \forall i\in [n], \end{align} where $u_i\in B_i, u_0\in \cbra{0,1}^n, 0\leq np\leq 1, S = \cbra{0,1,2}^n\backslash \cbra{u_i|0\leq i\leq n}$ for any $i$, with $\bar{p} = \frac{p}{2^n}$. 
\label{def:simplifiedSingle_iteNoise_iLRB} \end{definition} Definition \ref{def:simplifiedSingle_iteNoise_iLRB} is to be regarded as a particular case of Definition \ref{def:Single-site-leakage-noise}, with at most a single leak happening between each $\mathcal{H}_{c^{i-1}lc^{n-i}}\subseteq \mathcal{H}_{l,(1)}$ and $\mathcal{H}_c$. Such a simplified noise model has important applications such as measuring leakage for two-qubit gates on superconducting quantum chips. See more details in the next section. {The requirement of the noise model for iLRB protocol can be further relaxed to more than a single leak between each $\mathcal{H}_{c^{i-1}lc^{n-i}}$ and $\mathcal{H}_c$ with the same leak probability. Due to the restriction of the iLRB protocol, the noise that happened in computational subspace is required to exclude qubit $u_0$.} With the assumption of the above noise model, the average leakage rate of target gate $\mathcal{T}$ can be estimated with exponential decay curves of $p_{{\Pi}_c }(m), p_{{\Pi}_c,\mathbb{P}} (m)$ obtained from iLRB protocol, as shown in the following theorem. Usually, the state preparation noise is negligible compared with gate and measurement noise, we also show that assuming state preparation is noiseless, we can further simplify the iLRB protocol to single-exponent decay curves in the following theorem. \begin{theorem} For any $n$-site target gate $\mathcal{T}$ where its noisy implementation $\hat{\mathcal{T}} = \mathcal{T}\circ\Lambda_T = \Lambda_T\circ \mathcal{T}$, $\Lambda_T$ and the noise of Pauli group $\mathbb{P}_n$ both have the formations as in Definition \ref{def:simplifiedSingle_iteNoise_iLRB} with noise parameter $\bar{p}$ be $\epsilon_T, \bar{p}_{\mathbb{P}}$ respectively, after performing the {iLRB} protocol, the expectation of the output probabilities \begin{align} &p_{{\Pi}_c} = A_0 + A_1 \lambda_1^m + A_2 \lambda_2^m \label{eq:iLRB_decay_1}\\ &p_{{\Pi}_c,\mathbb{P}} = B_0 + B_1 \lambda_{\Pbb1}^m + B_2\lambda_{\Pbb2}^m \label{eq:iLRB_decay_2} \end{align} where $\lambda_1 = 1- 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, \lambda_2 = 1 - (n + 2)(\bar{p}_{\mathbb{P}}+\epsilon_T) + (n+1)(n+2)2^n\bar{p}_{\mathbb{P}}\epsilon_T,\lambda_{\Pbb1} = 1 - 2{\bar{p}_{\mathbb{P}}}, \lambda_{\Pbb2} = 1 - (n+2)\bar{p}_{\mathbb{P}}$, and the average leakage and seepage rates for target gate $T$ equal $ L_{T} = {n\epsilon_T}, S_{T} = \frac{2^{n}n\epsilon_T}{3^n - 2^n}$ respectively. Assuming state preparation is noiseless, we can further simplify Eqs. \eqref{eq:iLRB_decay_1}-\eqref{eq:iLRB_decay_2} to \begin{align} &p_{{\Pi}_c} = A_0 + A_2 \lambda_2^m \label{eq:iLRB_decay_3}\\ &p_{{\Pi}_c,\mathbb{P}} = B_0 + B_2\lambda_{\Pbb2}^m. \label{eq:iLRB_decay_4} \end{align} \label{thm:ilrb_multiq} \end{theorem} \begin{proof} By Theorem \ref{thm:lrb_specific}, we have $\lambda_{\Pbb1} = 1- 2\bar{p}_{\mathbb{P}}$, and $\lambda_{\Pbb2} = 1- (n+2)\bar{p}_{\mathbb{P}}$. 
{Since $\mathcal{T} \circ \Lambda_T = \Lambda_T\circ \mathcal{T}$}, and $\Lambda_{\mathbb{P}}, \Lambda_{T}$ both have the form of Definition \ref{def:simplifiedSingle_iteNoise_iLRB}, we have \begin{align} p_{{\Pi}_{c}}(m) &= \frac{1}{\abs{\mathbb{P}_n}^m} \sum_{\mathcal{P}_1,\ldots , \mathcal{P}_m \in \mathbb{P}_n} \tr{\hat{{\Pi}}_c\hat{\mathcal{P}}_m \circ\hat{\mathcal{T}}\circ\cdots \circ\hat{\mathcal{P}}_1\circ \hat{\mathcal{T}}(\hat{\rho}_0)}\\ &= \tr{\hat{{\Pi}}_c \pbra{\frac{1}{\abs{\mathbb{P}_n}}\sum_{\mathcal{P}\in \mathbb{P}_n} \mathcal{P} \circ \Lambda_{\mathbb{P}}\circ \mathcal{T} \circ \Lambda_{T}}^m (\hat{\rho}_0)}\\ &= \tr{\hat{{\Pi}}_c \pbra{\bar{\mathcal{P}}\circ \Lambda_{\mathbb{P}} \circ \Lambda_{T}}^m (\hat{\rho}_0)}\\ &= \tr{\hat{{\Pi}}_c \pbra{\bar{\mathcal{P}} \circ\Lambda_{\mathbb{P}} \circ \Lambda_{T}}^{m-1} \bar{\mathcal{P}} \circ (\Lambda_{\mathbb{P}} \circ \Lambda_{T}(\hat{\rho}_0))}\\ &= \supbra{\hat{{\Pi}}_c }Q_{\Lambda_{\mathbb{P}}\circ \Lambda_{T}}^{m-1}\supket{\tilde{\rho}_0} \label{eq:p_I_m_deviation} \end{align} where $\supket{\tilde{\rho}_0} = \Lambda_{\mathbb{P}} \circ \Lambda_{T}(\hat{\rho}_0)$. Let $\Lambda := \Lambda_{\mathbb{P}}\circ \Lambda_T$ with condensed-operator representation $Q$. Since the $(i,j)$-th element of $Q$ is $Q_{ij}=\tr{{\Pi}_{c^{i-1}lc^{n-i}}\Lambda_{\mathbb{P}}\circ \Lambda_{T}\pbra{\widetilde{\Pi}_{c^{j-1}lc^{n-j}}}}$ for $i,j\in \cbra{0,1,\ldots, n}$, we have \begin{align} Q_{ij} &= 2^{n+1}\bar{p}_{\mathbb{P}} \epsilon_T, \forall i\ne j\in [n]\\ Q_{0i} &= 2Q_{i0} = 2(\epsilon_T + \bar{p}_{\mathbb{P}}) -(n+1)2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, \forall i\in [n]\\ Q_{ii} &= 1-\sum_{j\ne i}Q_{ji}, \forall i\in \cbra{0,1,...,n}. \label{eq:Qmat_explanation} \end{align} We also provide the details for the representation of $Q$ in Appendix \ref{app:condensend_continuousTwo}. Let $\lambda$ be an eigenvalue of $Q$; with this representation of $Q$ we have \begin{equation} \det(Q-\lambda {\mathbb{I}}) =(1 - \lambda) \pbra{ 1- 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T- \lambda}^{n-1} \pbra{1 - (n + 2)(\bar{p}_{\mathbb{P}}+\epsilon_T) + (n+1)(n+2)2^n\bar{p}_{\mathbb{P}}\epsilon_T-\lambda} = 0, \label{eq:ilrb_eigenvalues_cal} \end{equation} which implies that the eigenvalues are \begin{align} \lambda_1= 1- 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, \end{align} and \begin{align} \lambda_2 = 1 - (n + 2)(\bar{p}_{\mathbb{P}}+\epsilon_T) + (n+1)(n+2)\bar{p}_{\mathbb{P}}2^n\epsilon_T, \end{align} and $\lambda_3 = 1$. Specifically, the multiplicity of $\lambda_1$ equals $n-1$. We postpone the proof of Eq. \eqref{eq:ilrb_eigenvalues_cal} to Appendix \ref{app:ilrb_eigenvalues}. The average leakage rate for the target gate $\mathcal{T}$ can then be determined as $L_{T} = \tr{{\Pi}_l \Lambda_{T}({\Pi}_{c}/2^n)} = n\epsilon_T$, and the seepage rate as $S_{T} = \tr{{\Pi}_c \Lambda_{T}({\Pi}_{l}/(3^n - 2^n))} = \frac{2^n n\epsilon_T}{3^n - 2^n}$. The single-exponential-decay result of this theorem for noiseless state preparation can be obtained from the eigendecomposition of the transition matrix $Q$ and the properties of its eigenstates. We postpone the proof to Appendix \ref{app:iLRB_sp_free}. \end{proof} By leveraging Theorem \ref{thm:ilrb_multiq}, we can estimate $L_T$ and $S_T$ from the fitted $\lambda_{i}$ obtained with the iLRB protocol.
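Concretely, under the assumptions of Theorem \ref{thm:ilrb_multiq} (this is merely an illustration of the last step, not an additional result), the exponents $\lambda_2$ and $\lambda_{\Pbb2}$ of Eqs. \eqref{eq:iLRB_decay_3}-\eqref{eq:iLRB_decay_4} determine the noise parameters through
\begin{align*}
\bar{p}_{\mathbb{P}} = \frac{1-\lambda_{\Pbb2}}{n+2},\qquad
\epsilon_T = \frac{\lambda_{\Pbb2}-\lambda_2}{(n+2)\pbra{1-(n+1)2^n\bar{p}_{\mathbb{P}}}},
\end{align*}
so that $L_T = n\epsilon_T$ and $S_T = \frac{2^n n\epsilon_T}{3^n-2^n}$. For $n=2$ this reduces to $L_T = \frac{\lambda_{\Pbb2}-\lambda_2}{2(3\lambda_{\Pbb2}-2)}$, which is the form used for the two-qubit examples in Section~\ref{subsec:two_qubit_num}.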
\section{Numerical results} \label{sec:numerical_res} In this section, we carry out numerical experiments for the average leakage rate of the multi-qubit Pauli group with the LRB protocol proposed in Section~\ref{sec:leak_rateGen}. Our iLRB protocol proposed in Section \ref{sec:iLRB_protocol} can be applied to few-qubit cases, which is experimentally important for testing the leakage and seepage of quantum gates. To support this, we show average leakage rates for iSWAP/SQiSW and CZ gates with noise models derived from the Hamiltonian of superconducting quantum devices in Appendix \ref{app:iswap-noise-model}. \begin{figure}\label{fig:bitDamp} \end{figure} \subsection{Average leakage rate for multi-qubit Pauli group} \label{subsec:exp_multi_leakage} In this subsection, we numerically implement the LRB protocol introduced in Section~\ref{sec:leak_rateGen} to estimate the average leakage rate of the Pauli group. We list two examples to show the robustness of our protocol. Example 1 presents a simple noise model in which amplitude damping only happens for a single pair of basis states, one from the set $B_i$ and one from $\cbra{0,1}^n$, for any $i\in [n]$, within the noise model of Definition \ref{def:Single-site-leakage-noise}. To show the robustness of our protocol, in Example 2 we give a more complex noise model that contains all of the amplitude dampings between basis-state pairs drawn from $B_i$ and $\cbra{0,1}^n$ for any $i\in [n]$, and we additionally add amplitude damping for basis-state pairs lying both in the same $B_i$ or both in $\cbra{0,1}^n$. \textbf{Example 1.} For an $n$-qubit circuit, we select a specific form of the noise $\Lambda$ from the noise model in Definition \ref{def:Single-site-leakage-noise}. Let $f_i,g_i$ be $n$-trit strings denoting basis states from the computational subspace and the leakage subspace respectively, with $f_i := 0\ldots01_i 1_{i-1}0\ldots 0, g_i := 0\ldots02_i 0_{i-1}\ldots 0$ when $2\leq i\leq n$, and {$f_1:= 0...011=f_2, g_1 := 0...02$}. We define the noise model as \begin{align} \Lambda(\rho) = E_0 \rho E_0^\dagger + \sum_{1\leq i\leq n} F_{i} \rho F_i^\dagger + \sum_{1\leq i\leq n} G_{i} \rho G_i^\dagger \end{align} where \begin{align} E_0 = \sqrt{1 -p_{1} - p_{2}} \ket{f_1}\bra{f_1} + \sum_{3\leq i\leq n} \sqrt{1 - p_{i}} \ket{f_i}\bra{f_i} + \sum_{1\leq i\leq n} \sqrt{1 - q_{i}} \ket{g_i}\bra{g_i} + \sum_{i\in S}\ket{i}\bra{i} \end{align} where $S = \cbra{0,1,2}^n\backslash \cbra{f_i,g_i|i\in[n]}$, and \begin{align} F_i = \sqrt{p_i}\ket{g_i}\bra{f_i}, \quad G_i = \sqrt{q_i}\ket{f_i}\bra{g_i}, \forall i\in [n], \end{align} where $p_{i},q_{i}$ are uniformly randomly picked from $[2.5\times 10^{-5},3.75\times 10^{-5}]$ for $i\in[n]$. We take the number of qubits $n = 4$ in the numerical experiment. To demonstrate the SPAM robustness of the LRB protocol, we choose a specific form of noise in the state preparation and measurement processes. Assume the state preparation process has depolarizing noise in $\mathcal{H}_c$ and $\mathcal{H}_l$. The resulting initial state can be denoted as \begin{equation} \hat{\rho}_0 = (1-p_c-p_l)\rho_0 + p_c \frac{{\Pi}_c}{d_c} + p_l \frac{{\Pi}_l}{d_l} \label{eq:depolarize_pre} \end{equation} where $\rho_0 = \ket{0}\bra{0}$, and $p_c,p_l$ are the depolarizing probabilities with $p_c + p_l\leq 1$. The measurement noise is modeled as a perfect computational basis measurement followed by independent classical probabilistic transitions on each individual site.
The probability transition matrix associated with site $j$ is denoted as \begin{equation} \Lambda_{M,j} = \begin{pmatrix} 1 - \eta_{j0}-\eta_{l_{j0}} & \eta_{j1} & \eta_{s_{j0}}\\ \eta_{j0} & 1 - \eta_{j1} - \eta_{l_{j1}} & \eta_{s_{j1}}\\ \eta_{l_{j0}} & \eta_{l_{j1}} & 1- \eta_{s_{j0}} - \eta_{s_{j1}} \end{pmatrix}, \label{eq:meas_noise_model} \end{equation} where $\eta_{j0}, \eta_{j1}$ are the 0-flip-to-1 and 1-flip-to-0 probabilities respectively, and $\eta_{l_{ji}},\eta_{s_{ji}}$ are the $i$-flip-to-2 and 2-flip-to-$i$ probabilities respectively, with $i\in \cbra{0,1}$, for the $j$-th qubit. We set parameters $p_c = p_l = 0.0001$, and $\eta_{j0} = 0.05, \eta_{j1} = 0.1, \eta_{l_{j0}}=\eta_{s_{j0}} = 0.0001, \eta_{l_{j1}} = \eta_{s_{j1}} = 0.0005$ for any $j\in [n]$. The number of qubits is $n = 4$. We depict the probabilities of measuring outcomes in the computational subspace as a function of the circuit size (the number of Pauli gates) in Fig. \ref{fig:bitDamp}. Note that here we regard an $n$-qubit Pauli gate as a layer of single-qubit Pauli X/Y/Z gates without interaction. The {theoretical} leakage rate is $L_{\mathrm{ave}}=\sum_{i}p_i = 3.4\times 10^{-5}$, and the seepage rate is $S_{\mathrm{ave}}=\sum_{i}q_i=8\times 10^{-6}$. We fit the exponential decay curve with the LRB protocol proposed in Section \ref{sec:lrb_specific}. By Theorem \ref{thm:lrb_specific} and Corollary \ref{cor:lrb_arp} we see that if $p_i$ and $q_j$ are close to each other for all of $i,j\in [n]$, then the probability $p_{{\Pi}_c}$ will approximately collapse to a two-exponential decay with $\lambda_1\approx 1 - 2\bar{p}_{\mathbb{P}}$ and $\lambda_0 \approx 1 - (n+2)\bar{p}_{\mathbb{P}}$. When the state preparation noise is small, we can estimate $\bar{p}_{\mathbb{P}}$ via a single exponential decay $p_{{\Pi}_c}(m)=A+B\lambda_0^m$ for some constants $A,B$. We fit the experimental data to a single exponential decay curve to obtain $\hat\lambda_0 = 0.999957\pm 1.2\times 10^{-5}$. Then the average error is $\bar{p} = (1-\hat\lambda_0)/(n+2) = (7.14\pm 2.00)\times 10^{-6}$, and thus the estimated average leakage rate is $\hat{L}_{\mathrm{ave}} = n\bar{p}=(2.86\pm 0.80)\times 10^{-5}$ and the seepage rate is $\hat{S}_{\mathrm{ave}} = \frac{n2^{n}\bar{p}}{3^n-2^{n}} =(7.03\pm 1.97)\times 10^{-6}$. The estimated results are consistent with the theoretical ones within the statistical errors, which verifies the validity of the LRB protocol. \textbf{Example 2.} The SPAM noise is set the same way as in Example 1. To show the robustness of the LRB protocol, we choose a noise channel $\Lambda$ which contains all of the flips (1) between the subspace $\mathcal{H}_{l,(1)}$ and the computational subspace $\mathcal{H}_c$, and (2) inside each subspace $\mathcal{H}_{\bm k}$ for all $\bm k\in \cbra{c,l}^n$. We choose the number of qubits $n=3$. The noise strength $p_{ij}$ is picked uniformly and randomly from the interval $10^{-3}[1,1 + 10^{-5}]$ for $(i,j)$ in $(\mathcal{H}_c,\mathcal{H}_{l,(1)})\cup (\mathcal{H}_{l,(1)}, \mathcal{H}_{c})$, and $p_{ij}$ is picked uniformly and randomly from the interval $10^{-6}[1,1 + 10^{-5}]$ for $i\ne j$ both in $\mathcal{H}_{\bm k}$, $\bm k\in \cbra{c,l}^n$. By Theorem \ref{thm:lrb_specific}, the theoretical average leakage and seepage rates are $L_{\mathrm{ave}} = \sum_{i = 1}^n p_i = 1.51\times 10^{-5}$ and $S_{\mathrm{ave}} = \frac{2^n}{3^n-2^n}\sum_{i = 1}^n q_i = 6.41\times 10^{-6}$ respectively. By Corollary \ref{cor:lrb_arp}, we fit the experimental data using a single exponential decay curve and obtain $ \hat{\lambda} = 0.999974\pm 1.046\times 10^{-5}$.
Then the average error is $\bar{p} = (1 - \hat{\lambda})/(n+2) = (5.19\pm 2.09)\times 10^{-6}$, the estimated average leakage rate is $\hat{L}_{\mathrm{ave}} = (1.56\pm 0.63)\times 10^{-5}$, and the average seepage rate is $\hat{S}_{\mathrm{ave}} = (6.56\pm 2.64)\times 10^{-6}$. The numerical results validate the LRB protocol and demonstrate that the noise in the computational subspace does not affect the average leakage rate. We depict the probabilities of measuring outcomes in the computational subspace as a function of the circuit size (the number of Pauli gates) in Fig. \ref{fig:lrb_randp_decay}. \begin{figure}\label{fig:lrb_randp_decay} \end{figure} \subsection{Average leakage rate for specific gates} \label{subsec:two_qubit_num} One important application of the iLRB protocol in Section~\ref{sec:iLRB_protocol} is measuring leakage of experimentally realized two-qubit quantum gates. Noise in real quantum gates can be very hard to characterize due to the complexity of gate schemes. For example, in flux-tunable superconducting quantum devices, to implement a two-qubit $\mathrm{iSWAP}$ gate, the two qubits are brought to resonance adiabatically, left alone to evolve for some time duration, and finally detuned adiabatically back to their normal working frequencies~\cite{Krantz_2019}. Both the adiabatic evolution and the resonant evolution might lead to leakage and seepage. If one carries out the iLRB protocol of Section~\ref{sec:iLRB_protocol} for such gates, one would theoretically obtain a decay curve that consists of multiple exponents. A general multi-exponential decay curve is hard to fit due to statistical errors in real quantum experiments. To simplify the problem, we focus on the leakage damping noise model given in Definition \ref{def:simplifiedSingle_iteNoise_iLRB} (the explicit form of the two-qubit case is given below), which makes the data fitting and processing more manageable. These simplified noise models are supported by the Hamiltonian evolution of the target two-qubit gates. \subsubsection{Average leakage rate analysis} The leakage damping noise model for the $\mathrm{iSWAP}/\mathrm{SQiSW}$ gates is shown below. This noise model is supported by the qubits' Hamiltonian evolution; see more details in Appendix~\ref{app:iswap-noise-model}. \begin{align} \begin{aligned} \Lambda_{\mathrm{iSWAP}}(\rho) &= E_0 \rho E_0^\dagger + \sum_{(k,k')\in \mathcal{S}}E_{kk'}\rho E_{kk'}^\dagger \label{eq:noise_iswap_total} \end{aligned} \end{align} where $\mathcal{S} = \cbra{(02,11),(11,02),(20,11),(11,20)}$, and \begin{equation} \begin{aligned} E_{kk'} &= \sqrt{\epsilon}\ket{k'}\bra{k} ,\forall (k,k')\in \mathcal{S},\\ E_0 &= \sqrt{\mathbb{I} - \sum_{(k,k')\in \mathcal{S}}E_{kk'}^\dagger E_{kk'}}, \label{eq:iswap_noise_cptp} \end{aligned} \end{equation} where $\epsilon \in [0,1/2]$. This noise model contains one parameter $\epsilon$ that remains to be fitted by the iLRB experiment. Since this noise model is a special case of the noise model in Definition \ref{def:simplifiedSingle_iteNoise_iLRB}, the average leakage rate of these gates can be obtained from Theorem \ref{thm:ilrb_multiq}.
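To make this concrete, the following short numerical sketch (an illustration only, not part of the protocol; the value of $\epsilon$ is hypothetical) builds the Kraus operators of Eq.~\eqref{eq:noise_iswap_total}, verifies the Kraus completeness condition, and evaluates the average leakage and seepage rates directly from their definitions, recovering $L=\epsilon/2$ and $S=2\epsilon/5$, in agreement with the values used in the numerical experiments below.
\begin{verbatim}
import numpy as np

eps = 2e-4                                  # hypothetical value of epsilon
d = 9                                       # two qutrits: |ab>, a,b in {0,1,2}
idx = {f"{a}{b}": 3 * a + b for a in range(3) for b in range(3)}
ket = lambda s: np.eye(d)[:, [idx[s]]]      # column vector |s>

comp = ["00", "01", "10", "11"]             # computational basis states
Pc = sum(ket(s) @ ket(s).T for s in comp)   # projector onto H_c
Pl = np.eye(d) - Pc                         # projector onto H_l

pairs = [("02", "11"), ("11", "02"), ("20", "11"), ("11", "20")]
E = [np.sqrt(eps) * ket(kp) @ ket(k).T for (k, kp) in pairs]   # E_{kk'} = sqrt(eps)|k'><k|
M = np.eye(d) - sum(e.T @ e for e in E)     # I - sum E^dag E, diagonal for this model
E0 = np.diag(np.sqrt(np.diag(M)))           # E0 = sqrt(I - sum E^dag E)
K = [E0] + E                                # all operators are real, so .T is the adjoint

Lam = lambda rho: sum(e @ rho @ e.T for e in K)
assert np.allclose(sum(e.T @ e for e in K), np.eye(d))   # CPTP (completeness) check

L = np.trace(Pl @ Lam(Pc / 4))              # average leakage rate  ~ eps/2
S = np.trace(Pc @ Lam(Pl / 5))              # average seepage rate  ~ 2*eps/5
print(L, S)
\end{verbatim}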
Another commonly realized two-qubit gate in flux-tunable superconducting quantum devices is the CZ gate, whose leakage damping noise model reads (see Appendix~\ref{app:iswap-noise-model} for more details) \begin{align} \begin{aligned} \Lambda_{G}(\rho) &= E_0 \rho E_0^\dagger + \sum_{(k,k')\in \mathcal{S}}E_{kk'}\rho E_{kk'}^\dagger \label{eq:noise_cz_total} \end{aligned} \end{align} where $\mathcal{S} = \cbra{(02,11),(11,02),(20,11),(11,20)}$ and \begin{equation} \begin{aligned} E_0 &= \sqrt{ 1- \epsilon_1 }\ket{02}\bra{02} + \sqrt{1 - \epsilon_1 - \epsilon_2 }\ket{11}\bra{11} + \sqrt{1 - \epsilon_2}\ket{20}\bra{20} + {\Pi}_{\mathcal{H}\backslash\cbra{02,11,20}},\\ E_{02,11}&=\sqrt{\epsilon_1}\ket{11}\bra{02},E_{11,02}=\sqrt{\epsilon_1}\ket{02}\bra{11},E_{20,11}=\sqrt{\epsilon_2}\ket{11}\bra{20},E_{11,20}=\sqrt{\epsilon_2}\ket{20}\bra{11}. \label{eq:cphase_noise_cptp} \end{aligned} \end{equation} Similar to the iSWAP/SQiSW gates, the noise model of the CZ gate learned from the Hamiltonian evolution can be represented by the noise model in Eq.~\eqref{eq:noise_cz_total}. Since the noise of single-qubit gates is usually much weaker than that of two-qubit gates, we assume that the Pauli gates are noiseless. Comparing Eq.~(\ref{eq:iswap_noise_cptp}) with Eq.~(\ref{eq:cphase_noise_cptp}), one finds that the leakage damping noise model for the $\mathrm{iSWAP}/\mathrm{SQiSW}$ gate can be treated as a special case with $\epsilon_1=\epsilon_2=\epsilon$. Thus, for the more general leakage damping noise model in Eq.~(\ref{eq:cphase_noise_cptp}), we provide the following corollary for the data analysis after carrying out the iLRB protocol. \begin{corollary} Consider a two-qubit target gate $\mathcal{T}$ with noisy implementation $\hat{\mathcal{T}} = \mathcal{T} \circ\Lambda_{\text{G}}$, where $\Lambda_{\text{G}}$ has the form defined in Eq. \eqref{eq:noise_cz_total}, and assume the Pauli gates are noiseless. Then, by performing the iLRB protocol, the expectation of the output probability is $\mathbb{E}\sbra{p_{{\Pi}_c}} = A + B_1 \lambda_1^m + B_2 \lambda_2^m$, where $\lambda_i\in\cbra{1-\frac{3}{8}\epsilon_1 - \frac{3}{8}\epsilon_2\pm \frac{1}{8}\sqrt{9\epsilon_1^2 -14\epsilon_1\epsilon_2 + 9\epsilon_2^2}}$, and the average leakage and seepage rates are {$L =\frac{\epsilon_1 + \epsilon_2}{4}$ and $ S= \frac{\epsilon_1 + \epsilon_2}{5}$.} \label{coro:leak_two_sec} \end{corollary} We postpone the proof of this corollary to Appendix \ref{app:gen_noise_three_para}. {Here we only need the assumption that the noise of the target gate acts on the right-hand side of $\mathcal{T}$, i.e., $\hat{\mathcal{T}}=\mathcal{T}\circ\Lambda_T$, since $\mathbb{E}\sbra{\mathcal{P}_j\circ\mathcal{T} \circ \Lambda_T} = \mathbb{E}\sbra{\mathcal{P}_j\circ\Lambda_T}$.} By Corollary \ref{coro:leak_two_sec}, we can obtain the average leakage and seepage rates by fitting $\lambda_1,\lambda_2$ from the exponential decay curve of $p_{{\Pi}_c}$. \subsubsection{Numerical results for iSWAP leakage rate estimation} Here we numerically analyze the average leakage rate for two-qubit gates with the leakage noise model $\Lambda_{\mathrm{iSWAP}}$. To demonstrate the SPAM robustness of the iLRB protocol, we implement measurement noise with the same setting as in Subsection \ref{subsec:exp_multi_leakage}. Here we choose a smaller preparation noise with $p_c = p_l = 10^{-6}$ in Eq. \eqref{eq:depolarize_pre}. The leakage noise of the Pauli gates is chosen as $\bar{p}_{\mathbb{P}} = 5\times 10^{-6}$. The noise rate of the target gate is chosen as $\bar{\epsilon}_{\mathrm{iSWAP}} = \epsilon_{\mathrm{iSWAP}}/4 = 5\times 10^{-5}$.
Hence $L_{\mathrm{iSWAP}} = 2\bar\epsilon_{\mathrm{iSWAP}} = 10^{-4}, S_{\mathrm{iSWAP}} =\frac{8}{5}\bar\epsilon_{\mathrm{iSWAP}}= 8\times 10^{-5}$. By Theorem \ref{thm:ilrb_multiq}, the theoretical average leakage and seepage rates equal \begin{align*} L_{\mathrm{iSWAP}} = \frac{\lambda_{\mathbb{P}}-\lambda}{2(3\lambda_{\mathbb{P}}-2)}, S_{\mathrm{iSWAP}} = \frac{2(\lambda_{\mathbb{P}}-\lambda)}{5(3\lambda_{\mathbb{P}}-2)}. \end{align*} \begin{figure}\label{fig:iswap_fit} \end{figure} Figure \ref{fig:iswap_fit} gives the fitted curves from the simulated experimental results. From the figure, we fit the exponent $\lambda = 0.999782(2)$ and the Pauli noise exponent $\lambda_{\mathbb{P}}\approx 0.999980(1)$. Hence the estimated average leakage and seepage rates are $\hat{L}_{\mathrm{iSWAP}} = 9.9(2)\times 10^{-5}$ and $ \hat{S}_{\mathrm{iSWAP}}=7.9(2)\times 10^{-5}$ respectively, which verifies the theoretical values. \section{Discussion} \label{sec:discuss} In this paper, we proposed a framework of \emph{leakage randomized benchmarking} that addresses the limitations of previous proposals and is more versatile in its applicability to a wider range of gates. The LRB protocol is particularly suitable for multi-qubit scenarios in the presence of SPAM noise. We presented an interleaved variant of the LRB protocol (iLRB) and conducted a thorough analysis of the leakage and seepage rates under various noise models, with a focus on the leakage-damping noise model and two-qubit gates in superconducting quantum devices. We carried out numerical experiments and observed good agreement between the theoretical leakage/seepage rates and the numerical ones for multiple gates. As the iLRB protocol is sensitive only to leakage, rather than to the specific logic gate in the computational subspace, it can be easily extended to other two-qubit gates realized in experiments. We leave the experimental demonstration of the iLRB protocol for future work. One major difference between LRB and RB protocols is that single-exponential decays are guaranteed under general assumptions for RB protocols if the computational space is sufficiently twirled. However, leakage subspaces are hardly affected by gate schemes purposely designed for the computational subspace, causing LRB to exhibit much more complicated decay behavior. Alternatively, gates that can twirl the leakage subspace might lead to cleaner decay behavior, but would pose somewhat unrealistic assumptions on the gate implementation that might not be experiment-friendly. In our work, we choose not to pose assumptions about the gates themselves, but instead require prior knowledge of the leakage noise models. Such prior knowledge facilitates data processing and interpretation, but its validity needs to be established either experimentally or through first-principle error analysis. Although we have proposed two simplifications under which the LRB behaviors are better understood, a more case-by-case study might be needed for other physically-oriented noise models. As a complement, in Appendix \ref{app:gen_noise_three_para} we also analyze the leakage information we can gain in a more complex noise model. \begin{thebibliography}{10} \bibitem{Harper_2020} Robin Harper, Steven~T. Flammia, and Joel~J. Wallman. \newblock Efficient learning of quantum noise. \newblock {\em Nature Physics}, 16(12):1184–1188, Aug 2020. \bibitem{Sun21Mitigating} Jinzhao Sun, Xiao Yuan, Takahiro Tsunoda, Vlatko Vedral, Simon~C. Benjamin, and Suguru Endo.
\newblock Mitigating realistic noise in practical noisy intermediate-scale quantum devices. \newblock {\em Phys. Rev. Appl.}, 15:034026, Mar 2021. \bibitem{knill2008randomized} Emanuel Knill, Dietrich Leibfried, Rolf Reichle, Joe Britton, R~Brad Blakestad, John~D Jost, Chris Langer, Roee Ozeri, Signe Seidelin, and David~J Wineland. \newblock Randomized benchmarking of quantum gates. \newblock {\em Physical Review A}, 77(1):012307, 2008. \bibitem{carignan2015characterizing} Arnaud Carignan-Dugas, Joel~J Wallman, and Joseph Emerson. \newblock Characterizing universal gate sets via dihedral benchmarking. \newblock {\em Physical Review A}, 92(6):060302, 2015. \bibitem{PhysRevA.97.062323} Winton~G. Brown and Bryan Eastin. \newblock Randomized benchmarking with restricted gate sets. \newblock {\em Physical Review A}, 97:062323, Jun 2018. \bibitem{francca2018approximate} Daniel~Stilck Fran{\c{c}}a and AK~Hashagen. \newblock Approximate randomized benchmarking for finite groups. \newblock {\em Journal of Physics A: Mathematical and Theoretical}, 51(39):395302, 2018. \bibitem{helsen2019new} Jonas Helsen, Xiao Xue, Lieven~MK Vandersypen, and Stephanie Wehner. \newblock A new class of efficient randomized benchmarking protocols. \newblock {\em npj Quantum Information}, 5(1):1--9, 2019. \bibitem{claes2021character} Jahan Claes, Eleanor Rieffel, and Zhihui Wang. \newblock Character randomized benchmarking for non-multiplicity-free groups with applications to subspace, leakage, and matchgate randomized benchmarking. \newblock {\em PRX Quantum}, 2(1):010351, 2021. \bibitem{erhard2019characterizing} Alexander Erhard, Joel~J Wallman, Lukas Postler, Michael Meth, Roman Stricker, Esteban~A Martinez, Philipp Schindler, Thomas Monz, Joseph Emerson, and Rainer Blatt. \newblock Characterizing large-scale quantum computers via cycle benchmarking. \newblock {\em Nature communications}, 10(1):1--7, 2019. \bibitem{magesan2012efficient} Easwar Magesan, Jay~M Gambetta, Blake~R Johnson, Colm~A Ryan, Jerry~M Chow, Seth~T Merkel, Marcus~P Da~Silva, George~A Keefe, Mary~B Rothwell, Thomas~A Ohki, et~al. \newblock Efficient measurement of quantum gate error by interleaved randomized benchmarking. \newblock {\em Physical Review Letters}, 109(8):080505, 2012. \bibitem{chasseur15Complete} T.~Chasseur and F.~K. Wilhelm. \newblock Complete randomized benchmarking protocol accounting for leakage errors. \newblock {\em Physical Review A}, 92:042333, Oct 2015. \bibitem{wallman2015robust} Joel~J. Wallman, Marie Barnhill, and Joseph Emerson. \newblock Robust characterization of loss rates. \newblock {\em Physical Review Letters}, 115:060501, Aug 2015. \bibitem{Heinrich22General} Markus Heinrich, Martin Kliesch, and Ingo Roth. \newblock General guarantees for randomized benchmarking with random quantum circuits. \newblock {\em arXiv preprint arXiv:2212.06181}, 2022. \bibitem{Merkel2021Randomized} Seth~T. Merkel, Emily~J. Pritchett, and Bryan~H. Fong. \newblock Randomized benchmarking as convolution: Fourier analysis of gate dependent errors. \newblock {\em Quantum}, 5:581, nov 2021. \bibitem{flammia_direct_2011} Steven~T. Flammia and Yi-Kai Liu. \newblock Direct {Fidelity} {Estimation} from {Few} {Pauli} {Measurements}. \newblock {\em Physical Review Letters}, 106(23):230501, June 2011. \newblock Publisher: American Physical Society. \bibitem{da_silva_practical_2011} Marcus~P. da~Silva, Olivier Landon-Cardinal, and David Poulin. \newblock Practical {Characterization} of {Quantum} {Devices} without {Tomography}. 
\newblock {\em Physical Review Letters}, 107:210404, November 2011. \newblock Publisher: American Physical Society. \bibitem{PhysRevA.103.042604} Matthew Ware, Guilhem Ribeill, Diego Rist\`e, Colm~A. Ryan, Blake Johnson, and Marcus~P. da~Silva. \newblock Experimental pauli-frame randomization on a superconducting qubit. \newblock {\em Physical Review A}, 103:042604, Apr 2021. \bibitem{PhysRevLett.129.010502} Feng Bao, Hao Deng, Dawei Ding, Ran Gao, Xun Gao, Cupjin Huang, Xun Jiang, Hsiang-Sheng Ku, Zhisheng Li, Xizheng Ma, Xiaotong Ni, Jin Qin, Zhijun Song, Hantao Sun, Chengchun Tang, Tenghui Wang, Feng Wu, Tian Xia, Wenlong Yu, Fang Zhang, Gengyan Zhang, Xiaohang Zhang, Jingwei Zhou, Xing Zhu, Yaoyun Shi, Jianxin Chen, Hui-Hai Zhao, and Chunqing Deng. \newblock Fluxonium: An alternative qubit platform for high-fidelity operations. \newblock {\em Physical Review Letters}, 129:010502, Jun 2022. \bibitem{sung2021realization} Youngkyu Sung, Leon Ding, Jochen Braum\"uller, Antti Veps\"al\"ainen, Bharath Kannan, Morten Kjaergaard, Ami Greene, Gabriel~O. Samach, Chris McNally, David Kim, Alexander Melville, Bethany~M. Niedzielski, Mollie~E. Schwartz, Jonilyn~L. Yoder, Terry~P. Orlando, Simon Gustavsson, and William~D. Oliver. \newblock Realization of high-fidelity cz and $zz$-free iswap gates with a tunable coupler. \newblock {\em Physical Review X}, 11:021058, Jun 2021. \bibitem{Morvan21Qutrit} A.~Morvan, V.~V. Ramasesh, M.~S. Blok, J.~M. Kreikebaum, K.~O'Brien, L.~Chen, B.~K. Mitchell, R.~K. Naik, D.~I. Santiago, and I.~Siddiqi. \newblock Qutrit randomized benchmarking. \newblock {\em Physical Review Letters}, 126:210504, May 2021. \bibitem{Xue19Benchmarking} X.~Xue, T.~F. Watson, J.~Helsen, D.~R. Ward, D.~E. Savage, M.~G. Lagally, S.~N. Coppersmith, M.~A. Eriksson, S.~Wehner, and L.~M.~K. Vandersypen. \newblock Benchmarking gate fidelities in a $\mathrm{Si}/\mathrm{SiGe}$ two-qubit device. \newblock {\em Physical Review X}, 9:021011, Apr 2019. \bibitem{wallman2016robust} Joel~J Wallman, Marie Barnhill, and Joseph Emerson. \newblock Robust characterization of leakage errors. \newblock {\em New Journal of Physics}, 18(4):043021, 2016. \bibitem{wood2018quantification} Christopher~J Wood and Jay~M Gambetta. \newblock Quantification and characterization of leakage errors. \newblock {\em Physical Review A}, 97(3):032306, 2018. \bibitem{PhysRevLett.116.020501} Zijun Chen, Julian Kelly, Chris Quintana, R.~Barends, B.~Campbell, Yu~Chen, B.~Chiaro, A.~Dunsworth, A.~G. Fowler, E.~Lucero, E.~Jeffrey, A.~Megrant, J.~Mutus, M.~Neeley, C.~Neill, P.~J.~J. O'Malley, P.~Roushan, D.~Sank, A.~Vainsencher, J.~Wenner, T.~C. White, A.~N. Korotkov, and John~M. Martinis. \newblock Measuring and suppressing quantum state leakage in a superconducting qubit. \newblock {\em Physical Review Letters}, 116:020501, Jan 2016. \bibitem{PhysRevA.96.022330} David~C. McKay, Christopher~J. Wood, Sarah Sheldon, Jerry~M. Chow, and Jay~M. Gambetta. \newblock Efficient $z$ gates for quantum computing. \newblock {\em Physical Review A}, 96:022330, Aug 2017. \bibitem{andrews2019quantifying} Reed~W Andrews, Cody Jones, Matthew~D Reed, Aaron~M Jones, Sieu~D Ha, Michael~P Jura, Joseph Kerckhoff, Mark Levendorf, Se{\'a}n Meenehan, Seth~T Merkel, et~al. \newblock Quantifying error and leakage in an encoded si/sige triple-dot qubit. \newblock {\em Nature nanotechnology}, 14(8):747--750, 2019. \bibitem{haffner2008quantum} Hartmut H{\"a}ffner, Christian~F Roos, and Rainer Blatt. \newblock Quantum computing with trapped ions. 
\newblock {\em Physics reports}, 469(4):155--203, 2008. \bibitem{nielsen2021gate} Erik Nielsen, John~King Gamble, Kenneth Rudinger, Travis Scholten, Kevin Young, and Robin Blume-Kohout. \newblock Gate set tomography. \newblock {\em Quantum}, 5:557, oct 2021. \bibitem{LevinPeresWilmer2006} David~A. Levin, Yuval Peres, and Elizabeth~L. Wilmer. \newblock {\em {Markov chains and mixing times}}. \newblock American Mathematical Society, 2006. \bibitem{nielsen2002quantum} M.~A. Nielsen and I.~L. Chuang. \newblock {\em Quantum Computation and Quantum Information: 10th Anniversary Edition}. \newblock Cambridge University Press, New York, NY, USA, 10th edition, 2011. \bibitem{Krantz_2019} P.~Krantz, M.~Kjaergaard, F.~Yan, T.~P. Orlando, S.~Gustavsson, and W.~D. Oliver. \newblock A quantum engineer{\textquotesingle}s guide to superconducting qubits. \newblock {\em Applied Physics Reviews}, 6(2):021318, jun 2019. \bibitem{chow2012universal} Jerry~M Chow, Jay~M Gambetta, Antonio~D Corcoles, Seth~T Merkel, John~A Smolin, Chad Rigetti, S~Poletto, George~A Keefe, Mary~B Rothwell, John~R Rozen, et~al. \newblock Universal quantum gate set approaching fault-tolerant thresholds with superconducting qubits. \newblock {\em Physical Review Letters}, 109(6):060501, 2012. \bibitem{wood2011tensor} Christopher~J Wood, Jacob~D Biamonte, and David~G Cory. \newblock Tensor networks and graphical calculus for open quantum systems. \newblock {\em arXiv preprint arXiv:1111.6950}, 2011. \bibitem{dankert2009exact} Christoph Dankert, Richard Cleve, Joseph Emerson, and Etera Livine. \newblock Exact and approximate unitary 2-designs and their application to fidelity estimation. \newblock {\em Physical Review A}, 80(1):012304, 2009. \bibitem{ambainis2000private} Andris Ambainis, Michele Mosca, Alain Tapp, and Ronald De~Wolf. \newblock Private quantum channels. \newblock In {\em Proceedings 41st Annual Symposium on Foundations of Computer Science}, pages 547--553. IEEE, 2000. \bibitem{helsen2022general} J.~Helsen, I.~Roth, E.~Onorati, A.H. Werner, and J.~Eisert. \newblock General framework for randomized benchmarking. \newblock {\em PRX Quantum}, 3:020357, Jun 2022. \bibitem{Martinis_2014} John~M. Martinis and Michael~R. Geller. \newblock Fast adiabatic qubit gates using only ${\ensuremath{\sigma}}_{z}$ control. \newblock {\em Physical Review A}, 90:022307, Aug 2014. \bibitem{zener1932non} Clarence Zener. \newblock Non-adiabatic crossing of energy levels. \newblock {\em Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character}, 137(833):696--702, 1932. \end{thebibliography} \appendix \section{The twirling of Pauli group in the Hilbert space $\mathcal{H}$} \label{app:proof_lem_PerfectPauli} In this section, we prove Lemma \ref{lem:PerfectPauli}, i.e. $\frac{1}{|\mathbb{P}_n|}\sum_{\mathcal{P}\in \mathbb{P}_n}\mathcal{P}=\bar{\mathcal{P}}$. This is an extension to the result that Pauli twirl turns any state in the computational subspace to a maximally mixed state. \begin{proof}[Proof of Lemma \ref{lem:PerfectPauli}.] We first prove the case $n=1$. 
For any single qubit state $\rho$ with leakage, we have \begin{align} &\frac{1}{|\mathbb{P}_n|}\sum_{\mathcal{P}\in \mathbb{P}_n}\mathcal{P}(\rho)\\ =&\frac{1}{|\mathbb{P}_n|}\sum_{U\in \pm\{1,i\}\times\{I,X,Y,Z\}} (U\oplus {\Pi}_l)\rho(U^\dag\oplus {\Pi}_l)\\ =&\frac{1}{|\mathbb{P}_n|}\sum_{U\in \pm\{1,i\}\times\{I,X,Y,Z\}}\left( U\rho U^\dag + {\Pi}_l\rho U^\dag+ U\rho {\Pi}_l+ {\Pi}_l\rho {\Pi}_l\right)\\ =&\tr{\rho {\Pi}_c}\widetilde{{\Pi}}_c + 0 + 0 + \tr{\rho {\Pi}_l}\widetilde{{\Pi}}_l\\ =&\bar{\mathcal{P}}(\rho). \end{align} The middle two terms vanish since $\sum_U U =\sum_U U^\dag = 0 $; the first term follows the twirling property of the Pauli group in the computational subspace and the last one from that $\mathcal{H}_l$ is one-dimensional. For general $n$, we then have \begin{align} &\frac{1}{|\mathbb{P}_n|}\sum_{\mathcal{P}\in \mathbb{P}_n}\mathcal{P} (\cdot)\\ &=\bigotimes_k \left(\frac{1}{|\mathbb{P}|}\sum_{\mathcal{P}_k\in \mathbb{P}_n} \mathcal{P}_i (\cdot)\right)_k\\ &=\bigotimes_k \left( \sum_{i_k\in\{c,l\}}\tr{{\Pi}_{i_k} \cdot}\widetilde{{\Pi}}_{i_k}\right)_k\\ &=\sum_{\bm i\in\{c,l\}^n}\bigotimes_k (\tr{{\Pi}_{i_k} \cdot}\widetilde{{\Pi}}_{i_k})_k\\ &=\sum_{\bm i\in\{c,l\}^n} \left(\tr{\left(\bigotimes_k{\Pi}_{(i_k)_k}\right) \cdot}\left(\bigotimes_k\widetilde{{\Pi}}_{(i_k)_k}\right)\right)\\ &=\sum_{\bm i\in\{c,l\}^n}\tr{{\Pi}_{\bm i}\cdot}\widetilde{{\Pi}}_{\bm i}=\bar{\mathcal{P}}(\cdot), \end{align} for any $n$-qubit quantum state $\rho$. We here denote by $\mathcal{C}_k$ a quantum operation $\mathcal{C}$ acting on the $k$th qubit. \end{proof} \section{Complete Proof of Theorem \ref{thm:lrb_specific}} \label{app:lrb_specific} \begin{proof} We prove Theorem ~\ref{thm:lrb_specific} under more general cases where $n=1$ or $q_i=q_{i+1}$ for some $i$. While the eigenvalues of such matrices can be derived from the continuity of roots of polynomials with respect to the coefficients, we prove here that $Q$ is always diagonalizable, even if algebraic multiplicities occur. The matrix $Q$ is defined by the two vectors $\Vec{p}$ and $\Vec{q}$. In the following, we use the notation $Q(\Vec{p},\Vec{q})$ in case $\Vec{p}$ and $\Vec{q}$ are to be explicitly specified. \begin{itemize} \item $n=1$: In this case we have $Q=\begin{bmatrix} 1-p_1 & 1-x_1\\p_1&x_1 \end{bmatrix}$ where $x_i=1 - 2q_i$; the two eigenvalues are $1$ and $x_1-p_1$. Note that $x_1-p_1=1$ iff $Q=I$, and therefore $Q$ is always diagonalizable. \item There exists $i$ such that $q_i=q_{i+1}$. We prove that $Q$ is similar to $Q'=Q(\Vec{p'},\Vec{q'})$, where $p'_i=p_i+p_{i+1}, p'_{i+1}=0, q'_i=q_i, q'_{i+1} = 0$, and $p'_j=p_j, q'_j=q_j$ for all $j\not\in\cbra{i,i+1}$. Such similarity is given by the following transformation $Q'=AQA^{-1}$, where $$A:=\begin{bmatrix} 1&&&&&\\&1&&&&\\&&\ddots&&&\\&&&1&1&\\&&&p_{i+1}&-p_i&\\&&&&&\ddots\\ \end{bmatrix}.$$ \end{itemize} By repeatedly applying such similar transformations and rearranging the rows and columns, we can reduce $Q$ to a canonical form $\begin{bmatrix} Q^*&0\\0&\Sigma \end{bmatrix},$ where $\Sigma$ is diagonal, and $Q^*=Q(\Vec{p}^*,\Vec{q}^*)$ where no pairs of entries in $\Vec{q}^*$ collide. $Q^*$ is diagonalizable and hence so is $Q$. \end{proof} \section{Proof of Corollary \ref{cor:lrb_arp}} \label{app:lrb_barp} By Theorem \ref{thm:lrb_global}, after performing $n$-site LRB protocol, the expectation of the probability for measuring computational basis equals \begin{align} p_{{\Pi}_c} = \supbraket{\hat{{\Pi}}_c|Q_{\Lambda}^{m-1}|\tilde{\rho}_0}. 
\end{align} By eigen-decomposing matrix $Q_{\Lambda}$, \begin{align} Q_{\Lambda} = V \Sigma V^{-1}, \end{align} where $\Sigma$ is the diagonal matrix contains all of the eigenvalues of $Q_{\Lambda}$, and $V$ is the matrix contains all of the associated eigenstates. By Theorem \ref{thm:lrb_specific} we see that there are three different eigenvalues, \begin{itemize} \item [(1)] $\lambda_{0} = 1 - 2\bar{q} - n\bar{p}$ with multiplicity one. Let the associated eigenstate be $\vec{v}=(v_0,v_1,\ldots, v_n)$; \item [(2)] $\lambda_{1} = 1 - 2\bar{q}$ with multiplicity $n-1$. Let the associated eigenstates be $u^{(s)} = (u_0^{(s)},u_1^{(s)},\ldots, u_n^{(s)})^T$, where $s\in [n-1]$; \item [(3)] $\lambda_{2} = 1$ with multiplicity one. Let the associated engenstates be $\vec{w}=(w_0,w_1,\ldots, w_n)$. \end{itemize} Hence we have $\Sigma = \diag(\lambda_0,\ldots, \lambda_0, \lambda_1, 1)$. Let $Q'$ be the matrix obtained by adding all of the rows $\cbra{1,\ldots, n+1}$ into the $0$-th row of $Q-\lambda \mathbb{I}$, and adding the $k+1$-th row into the $k$-th row for any $k\in \cbra{0,\ldots, n}$ (We assume the index of elements is in $(\cbra{0,\ldots, n},\cbra{0,\ldots, n})$). By the definition of $Q$, for any $\lambda \in \cbra{\lambda_0,\lambda_1, \lambda_2}$, \begin{align} \pbra{Q-\lambda{\mathbb{I}}}\vec{q} &=Q'\vec{q} \\ &=\begin{pmatrix} 1-\lambda & 1-\lambda & 1-\lambda & \ldots & 1-\lambda & 1-\lambda\\ 0 & 1 - 2\bar{q}-\lambda & -(1 - 2\bar{q}-\lambda) & \ldots &0 & 0\\ 0 & 0 & 1 - 2\bar{q}-\lambda & \ldots &0 & 0\\ 0 & 0 & 0 & \ldots &1 - 2\bar{q}-\lambda & -(1 - 2\bar{q}-\lambda)\\ \bar{p} & 0 & 0 & \ldots & 0& 1 - 2\bar{q}-\lambda \end{pmatrix}\vec{q}\\ &=0, \end{align} where $\vec{q}:=(q_0,\ldots, q_n)\in \mathbb{R}^{n+1}$ is the eigenstate associated with eigenvalue $\lambda$. Then \begin{align} &(1-\lambda)(q_0 + \ldots + q_n) = 0,\\ &(1-2\bar{q}-\lambda) (q_i-q_{i+1}) = 0, \forall i\in [n-1],\\ &\bar{p}q_0 + (1-2\bar{q}-\lambda) q_n = 0. \end{align} By substituting $\lambda$ into the above equations, we have \begin{align} &v_0 + nv_1 = 0, v_i=v_j, \forall j\in[n]\\ & u_0^{(s)} = 0, u_1^{(s)} + \ldots + u_n^{(s)} = 0\\ &w_0 = 2w_1, w_i=w_j, \forall j\in[n] \end{align} Therefore, all of $u^{(s)}$ are orthogonal to $\vec{w}$ and $\vec{v}$. Let $V = \sbra{\vec u^{(1)},\vec u^{(2)}, \ldots, \vec u^{(n-1)}, \vec v,\vec w}$. Then $V_{0k} =V_{k0}^{-1}= 0$ for $k\in \cbra{0,1,\ldots, n-2}$. We also give the matrix representation of $V$ and $V^{-1}$ as follows, \begin{align} V = \begin{pmatrix} -n & 0 & 0 & \cdots & 0 & 0 & 2\\ 1 & 1 & 1 & \cdots & 1 &1 & 1\\ 1 & -1 & 0 & \cdots & 0&0 & 1\\ 1 & 0 & -1 & \cdots & 0&0 & 1\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 1 & 0 & 0 & \cdots & -1& 0 & 1\\ 1 & 0 & 0 & \cdots & 0& -1 & 1 \end{pmatrix},\quad V^{-1} = \begin{pmatrix} -\frac{1}{n+2} & \frac{2}{n(n+2)} & \frac{2}{n(n+2)} & \frac{2}{n(n+2)} & \cdots & \frac{2}{n(n+2)} & \frac{2}{n(n+2)}\\ 0 & \frac{1}{n} & \frac{1}{n}-1 & \frac{1}{n}& \cdots & \frac{1}{n} & \frac{1}{n}\\ 0 & \frac{1}{n} & \frac{1}{n}& \frac{1}{n}-1 & \cdots & \frac{1}{n} & \frac{1}{n}\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ 0 & \frac{1}{n}& \frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n} & \frac{1}{n}-1\\ \frac{1}{n+2}& \frac{1}{n+2}&\frac{1}{n+2} & \frac{1}{n+2} & \cdots & \frac{1}{n+2} & \frac{1}{n+2} \end{pmatrix}. \end{align} With the state preparation noise-free assumption, we let the vector representation for $\supket{\rho_0}$ be $(1,0,\ldots, 0)$. 
Since there exist some coefficients $\cbra{\alpha_k}_{k = 1}^n$ such that $\supbra{\hat{{\Pi}}_c} = \sum_{k = 0}^n \alpha_k \supbra{{\Pi}_{c^{k-1}lc^{n-k}}}$, then \begin{align} p_{{\Pi}_c}(m)&= \supbraket{\hat{{\Pi}}_c|Q_{\Lambda}^{m-1}|\tilde{\rho}_0}\\ & = \sum_{j=0}^n \alpha_j Q_{\Lambda}^{m-1}(j,0)\\ &= \sum_{j=0}^n \alpha_j \sum_{k=0}^n V_{jk} \Sigma_k^{m-1} V_{jk}^{-1}\\ &= \sum_{j=0}^n \alpha_j\pbra{ \lambda_0^{m-1}\sum_{k=0}^{n-2} V_{jk}V_{k0}^{-1} + \lambda_1^{m-1} V_{j(n-1)} V_{(n-1)0}^{-1} + V_{jn}V_{n0}^{-1} }\\ &= A_0 + A_1 \lambda_0^{m}, \end{align} for some real constants $A_0,A_1$. \section{Proof of Corollary \ref{coro:no_crosstalkPauli}} \label{app:no_crosstalkPauli} \begin{proof}[Proof of Corollary \ref{coro:no_crosstalkPauli}.] Let $p_j,q_j$ be the average leakage rate and seepage rate in the $j$-th qubit defined as in Eq. \eqref{eq:p_iq_i}. Since the noise has no cross-talk, the average leakage rate can be calculated as \begin{align} L_{\mathrm{ave}} &= \tr{{\Pi}_l \Lambda\pbra{\widetilde{\Pi}_c}}\\ &=\tr{{\Pi}_l \bigotimes_j \Lambda_j\pbra{\widetilde{\Pi}_{c_j}}}\\ &=1 - \prod_{j=1}^n\tr{{\Pi}_{c_j}\Lambda\pbra{\widetilde{\Pi}_{c_j}}}\\ &= 1 - \prod_{j = 1}^n \pbra{1 - p_{j}}. \end{align} Similarly, the average seepage rate \begin{align} S_{\mathrm{ave}} &= \tr{{\Pi}_c \Lambda\pbra{\frac{{\Pi}_l}{d_l}}}\\ &=\tr{{\Pi}_c \Lambda\pbra{\frac{\mathbb{I}- {\Pi}_c}{d_l}}}\\ &= \frac{1}{d_l}\bigotimes_{j = 1}^n \tr{{\Pi}_{c_j}\Lambda_j\pbra{{\Pi}_{c_j} + {\Pi}_{l_j}}} - \frac{d_c}{d_l} \prod_{j = 1}^n (1 - p_j)\\ &= \frac{1}{d_l} \prod_{j = 1}^n \pbra{2(1 - p_j) + 2q_j} -\frac{d_c}{d_l} \prod_{j = 1}^n (1 - p_j)\\ &= \frac{2^n}{3^n - 2^n} \prod_{j = 1}^n \pbra{1 - p_j + q_j} -\frac{2^n}{3^n - 2^n} \prod_{j = 1}^n (1 - p_j). \end{align} where ${\Pi}_{c_j}$ and ${\Pi}_{l_j}$ denote the projector for computational and leakage subspaces in the $j$-th site respectively. \end{proof} \section{Condensed representation for two continuous noise channels} \label{app:condensend_continuousTwo} This section will give the condensed representation for the two continuous noise channels with the same formation as in Definition \ref{def:simplifiedSingle_iteNoise_iLRB}. Let $\Lambda_P$ and $ \Lambda_T$ be two noise channel as defined in Definition \ref{def:simplifiedSingle_iteNoise_iLRB}, then \begin{align} \Lambda_{s} \pbra{{\Pi}_{c^{i-1}lc^{n-i}}} &= p_{i0}^s\ket{u_0}\bra{u_0} - p_{i0}^s\ket{u_i}\bra{u_i} + {\Pi}_{c^{i-1}lc^{n-i}}, \forall i\in [n],\\ \Lambda_{s}\pbra{{\Pi}_c} &=-\sum_{j=1}^n p_{0j}^s\ket{u_0}\bra{u_0}+\sum_{j}p_{0j}^s\ket{u_j}\bra{u_j} + {\Pi}_c, \end{align} where $u_i \in B_i$, and $p_{i0}^s,p_{0i}^s$ in $[0,1]$ are probabilities for $s\in \cbra{P,T}$. Hence, we have \begin{align} &\tr{{\Pi}_{c^{j-1}lc^{n-j}}\Lambda_P \circ \Lambda_T \pbra{\widetilde{\Pi}_{c^{i-1}lc^{n-i}}}} = p_{0j}^{P}p_{i0}^{T}/\dim\pbra{{\Pi}_i}=2^{n+1}p_j^Pq_{i}^T, \forall i\ne j\in [n]\\ &\tr{{\Pi}_0 \Lambda_{P}\circ \Lambda_{T}(\widetilde{\Pi}_{c^{i-1}lc^{n-i}})} = \pbra{(1-p_{i0}^T)p_{i0}^P + p_{i0}^T(1-\sum_{j=1}^n p_{0j}^P)}2^{1-n}= 2(1-2^n q_i^T)q_i^P + 2q_i^T (1-2^n \sum_{j=1}^n p_j^P),\forall i\in [n]\\ &\tr{{\Pi}_{c^{i-1}lc^{n-i}} \Lambda_{P}\circ \Lambda_{T}(\widetilde{\Pi}_0)} = \pbra{(1-\sum_{j=1}^n p_{0j}^T)p_{0i}^P + p_{0i}^T(1-p_{i0}^P)}2^{-n} = (1-2^n\sum_{j=1}^n p_j^T)p_i^P + p_i^T(1-2^nq_{i}^P),\forall i\in [n] \label{eq:tr_two_noise} \end{align} where $i\ne j$ and $i,j\in \cbra{1,...,n}$. Let $Q$ be the condensed representation of $\Lambda_P \circ \Lambda_T$. 
By the definition of $Q$ in Eq. \eqref{eq:channel_COR}, $Q_{ij}=\tr{{\Pi}_{c^{i-1}lc^{n-i}}\Lambda_P \circ \Lambda_T \pbra{\widetilde{\Pi}_{c^{j-1}lc^{n-j}}}}$ for $i,j\in \cbra{0,1,\ldots, n}$. When $p_{0j}^s=p_{j0}^s=\frac{\bar{p}_s}{2^n}$ for any $j\in [n]$, the elements of $Q$ have the following formations \begin{align} Q_{ij} &= 2^{n+1}\bar{p}_P \bar{p}_T, \forall i\ne j\in [n]\\ Q_{0i} &= 2Q_{i0} = 2(\bar{p}_T + \bar{p}_P) -(n+1)2^{n+1}\bar{p}_P\bar{p}_T, \forall i\in [n]\\ Q_{ii} &= 1-\sum_{j\ne i}Q_{ji}. \end{align} \section{Eigenvalues for iLRB protocol} \label{app:ilrb_eigenvalues} Let $Q$ be defined as \begin{align} Q_{ij} &= 2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, \forall i\ne j\in [n]\\ Q_{0i} &= 2Q_{i0} = 2(\epsilon_T + \bar{p}_{\mathbb{P}}) -(n+1)2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, \forall i\in [n]\\ Q_{ii} &= 1-\sum_{j\ne i}Q_{ji}, \forall i\in \cbra{0,1,\ldots, n}. \end{align} In the following, we will prove that \begin{equation} \det(Q-\lambda {\mathbb{I}}) = (1 - \lambda) \pbra{(1 - 2^n\bar{p}_{\mathbb{P}})(1-2\epsilon_{\mathbb{P}}) - \lambda}^{n-1} \pbra{\pbra{1 - (n + 1)2^n\bar{p}_{\mathbb{P}}}\pbra{1 - (n + 2)\epsilon_T}-\lambda} \label{eq:ilrb_eigenvalues} \end{equation} Since the summation of any columns of $Q$ equals one, i.e., $\sum_{k=0}^n Q_{kj} = 1$ for any $j$, where $Q_{kj}$ is the $(k,j)$th element of $Q$, where $k,j\in \cbra{0,\ldots, n}$. Let $Q'$ be the matrix obtained by adding all of the rows in set $\cbra{1,\ldots, n+1}$ into the $0$-th row of $Q-\lambda \mathbb{I}$, and then adding the $k+1$-th row into the $k$-th row for any $k\in \cbra{0,\ldots, n}$. we can simplify $\det(Q-\lambda {\mathbb{I}})$ to \begin{align} \det(Q-\lambda {\mathbb{I}}) &= \det(Q')\\ &=\det\begin{pmatrix} 1-\lambda & 1-\lambda & 1-\lambda & \ldots &1-\lambda &1-\lambda\\ 0 & A-\lambda & -(A-\lambda) &\ldots & 0 & 0\\ 0 & 0 & A-\lambda &\ldots & 0 &0\\ \vdots & \vdots & \vdots & \vdots & \vdots& \vdots\\ 0 & 0 & 0 & \ldots & A-\lambda & -(A-\lambda)\\ B & C & C & \ldots & C & D -\lambda \end{pmatrix} \label{eq:ilrb_condensed_simplify}\\ &= (1-\lambda)(A-\lambda)^{n-1} \det\begin{pmatrix} 1 & 1 & 1 &\ldots & 1 & 1 \\ 0 & 1 & -1&\ldots & 0 & 0 \\ 0 & 0& 1&\ldots & 0 & 0 \\ \vdots & \vdots& \vdots& \vdots& \vdots& \vdots \\ 0 & 0 & 0 & \ldots & 1 & -1\\ 0 & 0 & 0 & \ldots & 0 & D-\lambda - B + (n-1)(C-B) \end{pmatrix}, \end{align} where $A = D-C, B = \epsilon_T + \bar{p}_{\mathbb{P}} -(n+1)2^{n}\bar{p}_{\mathbb{P}}\epsilon_T, C=2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T, D = 1- 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+2}\bar{p}_{\mathbb{P}}\epsilon_T$. It is easy to check \begin{align} \det(Q-\lambda {\mathbb{I}}) &= (1-\lambda)\pbra{(n-1)C(A-\lambda)^{n-1}+ (D-\lambda)(A-\lambda)^{n-1} - nB (A-\lambda)^{n-1}}\\ &= (1-\lambda)(A-\lambda)^{n-1}\pbra{(n-1)C + D -nB -\lambda}\\ &=(1 - \lambda) \pbra{ 1- 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T- \lambda}^{n-1} \pbra{\pbra{1 - (n + 2)(\bar{p}_{\mathbb{P}}+\epsilon_T) + (n+1)(n+2)2^n\bar{p}_{\mathbb{P}}\epsilon_T}-\lambda}. 
\end{align} \section{iLRB protocol with free-preparation noise} \label{app:iLRB_sp_free} \begin{proof}[Proof of the single decay for Theorem \ref{thm:ilrb_multiq}] Let the eigenstate for $\lambda_0=1$ be $\vec{w}=(w_1,w_2,\ldots, w_n)$, the eigenstates for eigenvalue $\lambda_1 = 1 - 2(\epsilon_T + \bar{p}_{\mathbb{P}}) + 2^{n+1}\bar{p}_{\mathbb{P}}$ be $\vec{u}^{(s)} = (u_0,u_1\ldots, u_n)^T$ for $s\in [n-1]$, and the eigenstate for $$\lambda_{2} = \pbra{1 - (n + 2)(\bar{p}_{\mathbb{P}}+\epsilon_T) + (n+1)(n+2)2^n\bar{p}_{\mathbb{P}}\epsilon_T}$$ be $\vec{v} = (v_0,v_1, \ldots, v_n)$. By Eq. \eqref{eq:ilrb_condensed_simplify}, we see that $\vec{u}^{(s)}$ have the properties \begin{align} &\sum_{k = 0}^n u_k = 0,\\ &B u_0+ C\sum_{k = 1}^n u_k = 0, \end{align} where $B = \epsilon_T + \bar{p}_{\mathbb{P}} -(n+1)2^{n}\bar{p}_{\mathbb{P}}\epsilon_T, C=2^{n+1}\bar{p}_{\mathbb{P}}\epsilon_T$. Hence we have $u_0=0$ and $\sum_{k = 1}^n u_k = 0$. Similarly, we have \begin{align} w_0 &= 2w_n, w_k = w_n \forall k \in [n],\\ v_0 &= -nv_n, v_k = v_n \forall k\in [n]. \end{align} Therefore, all of the $n-1$ vectors $\vec{u}^{(s)}$ are orthogonal to $\vec{w}$ and $\vec{v}$. Let \begin{align} V = \sbra{\vec{w}, \vec{u}^{(1)}, \ldots, \vec{u}^{(n-1)},\vec{v}}, \end{align} then $V^{-1}_{j0} = 0$ for $j\in [n-1]$. Let the vector representation for $\supket{\rho_0} $ be $\pbra{1,0,\ldots, 0}$. Let the $(n+1)\times (n+1)$ diagonal matrix $\Sigma = \diag(1,\lambda_1,\lambda_1,\ldots, \lambda_1, \lambda_2)$. Since there exist some coefficients $\cbra{\alpha_k}_{k = 0}^n$ such that $\supbra{\hat{{\Pi}}_c} = \sum_{k = 0}^n \alpha_k \supbra{{\Pi}_{c^{k-1}lc^{n-k}}}$, then we have \begin{align} p_{{\Pi}_c}(m)&=\supbra{\hat{{\Pi}}_c} Q_{\Lambda_\mathbb{P} \circ \Lambda_{T}}^{m-1} \supket{\rho_0} \\ & = \sum_{j=0}^n \alpha_j Q_{\Lambda_\mathbb{P} \circ \Lambda_{T}}^{m-1}(j,0)\\ &= \sum_{j=0}^n \alpha_j \sum_{k=0}^n V_{jk} \Sigma_k^{m-1} V_{jk}^{-1}\\ &= \sum_{j=0}^n \alpha_j\pbra{V_{j0}V_{00}^{-1} + \lambda_1^{m-1}\sum_{k=1}^{n-1} V_{jk}V_{k0}^{-1} + \lambda_2^{m-1} V_{jn} V_{n0}^{-1}}\\ &= A_0 + A_1 \lambda_2^{m}, \end{align} where $\Sigma_k$ be the $k$-th diagonal element of the diagonal matrix $\Sigma$, and $A_0,A_1$ are some real numbers. Similarly, we have $p_{{\Pi}_c,\mathbb{P}}(m) = B_0 + B_1 \lambda_{\mathbb{P}}^m$, where $\lambda_\mathbb{P} = 1 - (n+2)\bar{p}_{\mathbb{P}}$. \end{proof} \section{Gate representations } \label{app:gate_reps} Here we provide the matrices representation of two-qubit gates iSWAP, SQiSW, and CZ operating on the entire Hilbert space $\mathcal{H}$ as follows. \begin{align} &\mathrm{iSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix},\quad \mathrm{SQiSW} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & \frac{\sqrt{1}}{2} & 0 & \frac{i}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & \frac{i}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.\\ &\text{CZ} = \diag\pbra{1,1,1,1,-1,1,1,1,1}. 
\end{align} \section{Leakage damping noise model of $\mathrm{iSWAP}/\mathrm{SQiSW}$ and CZ gate}\label{app:iswap-noise-model} $\mathrm{iSWAP}$ and CZ gate are the most commonly realized two-qubit gates in the modern flux-tunable superconducting quantum devices~\cite{Krantz_2019}, In this appendix, we will introduce how to extract the leakage damping noise model of these two gates according to the effective Hamiltonian of the superconducting quantum system. The Hamiltonian of a two-qubit quantum system in flux-tunable superconducting quantum devices reads~\cite{Krantz_2019} \begin{align} H = \sum_{j} \omega^A_j\ket{j}_A\bra{j}_A+\omega^B_j\ket{j}_B\bra{j}_B+g(a_A^{\dagger}a_B+a_B^{\dagger}a_A) \label{eq:effective-hamiltonian} \end{align} where $\omega^A_j,\omega^B_j$ are the energies (with $\hbar=1$) of the $j$-th excited states of qubit A and qubit B, $a_{A/B}^{(\dagger)}$ are the annihilation(creation) operator to the quantum harmonic oscillator eigenstates of the two qubits, and $g$ is the coupling strength of the two qubits. In the idle case, the two qubits are detuned so that they have different energy spectra and the coupling term can be omitted, i.e., $g=0$. When implementing some two-qubit gates, the energy spectra of the two qubits can be tuned by the external magnetic flux and we have $g\neq 0$. In the above Hamiltonian Eq.~\eqref{eq:effective-hamiltonian}, we have used the rotating wave approximation(RWA)~\cite{Krantz_2019}, which drops fast rotating terms, so that $H$ can be decomposed into several orthogonal subspaces \begin{displaymath} H= \left( \begin{array}{cccc} H_0 & & &\\ & H_1 & &\\ & & H_2 &\\ & & & \ddots \end{array} \right), \end{displaymath} and $H_0, H_1, H_2$ are the ground state Hamiltonian, single excitation Hamiltonian and double excitation Hamiltonian respectively with \begin{align} H_{0}=\left( \begin{array}{c} \omega_{00} \end{array} \right), H_1=\left( \begin{array}{cc} \omega_{01} & g \\ g & \omega_{10} \\ \end{array} \right), H_2=\left( \begin{array}{ccc} \omega_{11} & \sqrt{2}g & \sqrt{2}g \\ \sqrt{2}g & \omega_{02} & 0 \\ \sqrt{2}g & 0 & \omega_{20} \\ \end{array} \right). \end{align} Here we denote $\omega_{ij}:= \omega_i^A+\omega_j^B$. Notice that with RWA, we do not need to consider the interaction between, e.g., state $\ket{00}$ and $\ket{11}$, and only the double excitation Hamiltonian $H_2$ leads to leakage and seepage when some two-qubit quantum gates are implemented. In other words, the leakage amplitude damping noise model of the two-qubit quantum gate would only involve the states $\ket{11},\ket{20},\ket{02}$. When implementing the iSWAP gate, the two qubits are working at the same frequency, which means the energy spectra of the two qubits are the same. In that case, by assuming the ground state energy $\omega_0^{A}=\omega_0^{B}=0$, we can denote $\omega_1^{A}=\omega_1^{B}=\omega$, and $\omega_2^{A}=\omega_2^{B}=2\omega-\eta$, where $\eta$ is conventionally called \textit{anharmonicity} which quantifies the difference between energy gap $\omega_1^{(A/B)}-\omega_0^{(A/B)}$ and energy gap $\omega_2^{(A/B)}-\omega_1^{(A/B)}$. Then, the double excitation Hamiltonian can be rewritten as \begin{align} H_2=\left( \begin{array}{ccc} 2\omega & \sqrt{2}g & \sqrt{2}g \\ \sqrt{2}g & 2\omega-\eta & 0 \\ \sqrt{2}g & 0 & 2\omega-\eta \\ \end{array} \right) \label{eq:double-excitation-Hamiltonian} \end{align} With this effective Hamiltonian, we can see explicitly the symmetry between states $\ket{20}$ and $\ket{02}$. 
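This symmetry can also be checked numerically. The short sketch below (an illustration only; the parameter values are arbitrary and not taken from any experiment) propagates the state $\ket{11}$ under $H_2$ and confirms that the populations of $\ket{02}$ and $\ket{20}$ coincide at all times.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Arbitrary illustrative parameters (hbar = 1).
omega, eta, g = 5.0, 0.3, 0.02
s2g = np.sqrt(2) * g
H2 = np.array([[2 * omega, s2g,             s2g],
               [s2g,       2 * omega - eta, 0.0],
               [s2g,       0.0,             2 * omega - eta]])  # basis |11>, |02>, |20>

psi0 = np.array([1.0, 0.0, 0.0])            # start in |11>
for t in np.linspace(0.0, 50.0, 6):
    psi = expm(-1j * H2 * t) @ psi0
    p02, p20 = abs(psi[1]) ** 2, abs(psi[2]) ** 2
    assert np.isclose(p02, p20)             # |02> and |20> stay equally populated
    print(f"t = {t:5.1f}   leaked population = {p02 + p20:.3e}")
\end{verbatim}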
With these intuitions, we can write the leakage damping noise model of $\mathrm{iSWAP}$ gate as \begin{align} \begin{aligned} \Lambda_{\mathrm{iSWAP}}(\rho) &= E_0 \rho E_0^\dagger + \sum_{(k,k')\in \mathcal{S}}E_{kk'}\rho E_{kk'}^\dagger \label{eq:noise_iswap_total_app} \end{aligned} \end{align} where $\mathcal{S} = \cbra{(02,11),(11,02),(20,11),(11,20)}$, \begin{equation} E_0 = \sqrt{ 1- \epsilon_{02,11}}\ket{02}\bra{02} + \sqrt{1 - \epsilon_{11,20}-\epsilon_{11,02} }\ket{11}\bra{11} + \sqrt{1 - \epsilon_{20,11}}\ket{20}\bra{20} + {\Pi}_{\mathcal{H}\backslash\cbra{02,11,20}}, \label{eq:iswap_noise_cptp_app} \end{equation} and $E_{kk'} = \sqrt{\epsilon_{kk'}}\ket{k'}\bra{k}$ for $(k,k')\in \mathcal{S}$. The symmetry between $\ket{02}$ and $\ket{20}$ implies $\epsilon_{20,11}=\epsilon_{02,11}$ and $\epsilon_{11,20}=\epsilon_{11,02}$. To see how the leakage damping noise model can be obtained from the Hamiltonian Eq.~\eqref{eq:double-excitation-Hamiltonian}, within the double excitation subspace, we assume the initial density matrix can be parameterized as \begin{align} \rho_0\equiv \rho_{11}\ket{11}\bra{11}+\rho'\ket{02}\bra{02}+\rho'\ket{20}\bra{20}= \left( \begin{array}{ccc} \rho_{11} & & \\ & \rho' & \\ & & \rho' \\ \end{array} \right). \end{align} Here we assume the coefficients for $\ket{02}$ and $\ket{20}$ are the same, due to the symmetric structure in the Hamiltonian (\ref{eq:double-excitation-Hamiltonian}). Additionally, we assume that the evolution of the initial state follows the Schr\"odinger equation. Thus we have \begin{align} e^{-iH_2t}\rho_0 e^{iH_2t} &= \left( \begin{array}{ccc} (1-2\hat{\epsilon})\rho_{11}+2\hat{\epsilon}\rho' & * & * \\ * & \rho' (1-{\hat{\epsilon}})+{\hat{\epsilon}}\rho_{11} & * \\ * & * & \rho'(1-{\hat{\epsilon}})+{\hat{\epsilon}}\rho_{11} \\ \end{array} \right)\label{eq:raw-leakage-model}\\ \hat{\epsilon}:=\hat{\epsilon}(t)&=\frac{16g^2}{16g^2+\eta^2}(1-\cos(\sqrt{16g^2+\eta^2}t))\label{eq:expression-of-epsilon} \end{align} Here we use $(*)$ to denote some irrelevant off-diagonal entries in the density matrix. If we focus on the diagonal entries of the resulting density matrix, compared with $\Lambda_{\mathrm{iSWAP}}(\rho_0)$, it can be realized that we can identify $\hat{\epsilon}=\epsilon_{kk'}, \forall(k,k')\in\mathcal{S}$. Considering the definition of the average leakage and seepage rate in Eq.~\eqref{eq:leakage_seepage_rate}, if all the above approximations and assumptions hold, the leakage damping noise model and the real quantum gate would have the same average leakage and seepage rate, which verifies the reasonability of the leakage damping noise model Eq. \eqref{eq:noise_iswap_total} for $\mathrm{iSWAP}$ gate in the main text. The $\mathrm{SQiSW}$ gate is similar to the $\mathrm{iSWAP}$ gate, with an only difference at the evolution time $t$ in Eq. \eqref{eq:expression-of-epsilon}. Thus the leakage damping noise model of $\mathrm{SQiSW}$ gate can also be described in Eq. \eqref{eq:noise_iswap_total}, where the free parameter $\epsilon$ is different from the one for $\mathrm{iSWAP}$ gate. To quantify the magnitude of $\hat{\epsilon}$, notice that for $\mathrm{iSWAP}$ gate, the evolution time $t=2\pi/g$~\cite{Krantz_2019}, so $\hat{\epsilon}$ in Eq. \eqref{eq:expression-of-epsilon} is determined by the anharmonicity $\eta$ and coupling strength $g$ of the two-qubit system. 
Taking experimental data from, e.g., Ref.~\cite{PhysRevLett.129.010502}, where $\eta=-2\pi\times 1.87\mathrm{GHz}$ and $g=2\pi\times 11.2\mathrm{MHz}$, we have $\hat{\epsilon}_{\mathrm{iSWAP}}\sim 2.8\times 10^{-4}$. We will use this magnitude of $\epsilon$ in our iLRB numerical experiments for the $\mathrm{iSWAP}$ gate. \begin{figure} \caption{Illustration of the spectrum of the double excitation Hamiltonian $H_2$, as a function of the local magnetic flux of qubit $A$. The trajectory of implementing the CZ gate is denoted as the black curve. At the CZ resonance point, the energy of $\ket{11}$ is close to that of $\ket{20}$. Thus state $\ket{11}$ has larger probability of leaking to $\ket{20}$, compared with leaking to state $\ket{02}$. More details about this figure can be found in~\cite{Krantz_2019}.} \label{fig:H2_spectrum} \end{figure} The leakage damping noise model of the CZ gate can be obtained by generalizing that of the $\mathrm{iSWAP}$ gate. When implementing the CZ gate in superconducting quantum devices, the effective Hamiltonian is still in Eq. \eqref{eq:effective-hamiltonian} under RWA, thus the corresponding leakage damping noise model involves states $\ket{11},\ket{20},\ket{02}$. Different from the $\mathrm{iSWAP}$ gate, here, we do not take the two qubits to resonance. To realize a CZ gate, by tuning the eigenenergy of one of the qubits, one would bring the state $\ket{11}$ to resonate with, e.g. $\ket{20}$, to accumulate phase on $\ket{11}$ at the CZ resonance point (See figure~\ref{fig:H2_spectrum}), which means the energies of $\ket{11}$ and $\ket{20}$ are very close during the tuning and at the resonance point. Thus state $\ket{11}$ has larger probability of leaking to $\ket{20}$, compared with leaking to state $\ket{02}$~\cite{Martinis_2014}. Thus, different from $\mathrm{iSWAP}$ gate, if we still write the leakage damping noise model of CZ gate as the form in Eq.~\eqref{eq:noise_iswap_total_app}, we have $\epsilon_{20,11}\neq \epsilon_{02,11}$ and $\epsilon_{11,20}\neq \epsilon_{11,02}$. Further, the explicit Hamiltonian evolution for the CZ gate can be described by the Landau-Zener transition~\cite{Martinis_2014,zener1932non}. It tells us that we can identify $\epsilon_{11,02}=\epsilon_{02,11}$ and $\epsilon_{11,20}=\epsilon_{20,11}$. Thus the operators in the Eq.~\eqref{eq:noise_iswap_total_app} can be parameterised as \begin{equation} \begin{aligned} E_0 &= \sqrt{ 1- \epsilon_1 }\ket{02}\bra{02} + \sqrt{1 - \epsilon_1 - \epsilon_2 }\ket{11}\bra{11} + \sqrt{1 - \epsilon_2}\ket{20}\bra{20} + {\Pi}_{\mathcal{H}\backslash\cbra{02,11,20}},\\ E_{02,11}&=\sqrt{\epsilon_1}\ket{11}\bra{02},E_{11,02}=\sqrt{\epsilon_1}\ket{02}\bra{11},E_{20,11}=\sqrt{\epsilon_2}\ket{11}\bra{20},E_{11,20}=\sqrt{\epsilon_2}\ket{20}\bra{11}. \label{eq:cphase_noise_cptp_app} \end{aligned} \end{equation} This noise model contains two parameters $\epsilon_1:=\epsilon_{11,02}=\epsilon_{02,11}$ and $\epsilon_2:=\epsilon_{11,20}=\epsilon_{20,11}$, which remain to be fitted by the iLRB experiments. 
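As a practical remark (assuming, as in Corollary~\ref{coro:leak_two_sec}, that the Pauli gates are noiseless), the average leakage and seepage rates only depend on $\epsilon_1+\epsilon_2$, which can be read off directly from the sum of the two fitted exponents:
\begin{equation*}
\lambda_1+\lambda_2 = 2-\frac{3}{4}\pbra{\epsilon_1+\epsilon_2}
\quad\Longrightarrow\quad
L=\frac{\epsilon_1+\epsilon_2}{4}=\frac{2-\lambda_1-\lambda_2}{3},
\qquad
S=\frac{\epsilon_1+\epsilon_2}{5}=\frac{4\pbra{2-\lambda_1-\lambda_2}}{15},
\end{equation*}
so $\epsilon_1$ and $\epsilon_2$ need not be resolved individually for the purpose of estimating $L$ and $S$.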
\section{Generalized noise model for 2-qubit gate} \label{app:gen_noise_three_para} Here we consider a noise model that also contains flips between the sites of the leakage subspace $\mathcal{H}_l$: \begin{align} \begin{aligned} \Lambda (\rho) &= E_0 \rho E_0^\dagger + \sum_{(k,k')\in \mathcal{S}}E_{kk'}\rho E_{kk'}^\dagger \label{eq:noise_more_gen_total} \end{aligned} \end{align} where $\mathcal{S} = \cbra{(02,11),(11,02),(20,11),(11,20),(12,21),(21,12)}$, \begin{equation} E_0 = \sqrt{ 1- \epsilon_1 }\ket{02}\bra{02} + \sqrt{1 - \epsilon_1 - \epsilon_2 }\ket{11}\bra{11} + \sqrt{1 - \epsilon_2}\ket{20}\bra{20} + \sqrt{1 - \epsilon_3}\ket{12}\bra{12} + \sqrt{1 - \epsilon_3}\ket{21}\bra{21} + {\Pi}_{\mathcal{H}\backslash\cbra{02,11,12,20,21}}, \label{eq:more_gen_noise_cptp} \end{equation} and $E_{kk'} = \sqrt{\epsilon_{kk'}}\ket{k'}\bra{k}$ for $(k,k')\in \mathcal{S}$, with $\epsilon_{02,11}=\epsilon_{11,02}=\epsilon_1$, $\epsilon_{20,11}=\epsilon_{11,20}=\epsilon_2$, and $\epsilon_{12,21}=\epsilon_{21,12}=\epsilon_3$; the pair $(21,12)$ is included in $\mathcal{S}$ so that $E_0^\dagger E_0+\sum_{(k,k')\in\mathcal{S}}E_{kk'}^\dagger E_{kk'}=I$ and the map is trace preserving. We give the average leakage and seepage rates under the iLRB protocol with the noise model $\Lambda$ defined in Eq. \eqref{eq:noise_more_gen_total}. \begin{corollary} For any two-qubit target gate $T$ with noise model $\Lambda$ in Eq. \eqref{eq:noise_more_gen_total}, under the assumption that the Pauli group is noiseless, after performing the iLRB protocol the expectation of the output probability is $p_{{\Pi}_c}(m) = A + B_1 \lambda_1^m + B_2 \lambda_2^m$, where \begin{equation} \lambda_i\in\cbra{1-\frac{3}{8}\epsilon_1 - \frac{3}{8}\epsilon_2- \frac{1}{2}\epsilon_3 \pm \frac{1}{8}\sqrt{9\epsilon_1^2 + 9\epsilon_2^2 + 16\epsilon_3^2 -14\epsilon_1\epsilon_2 -8\epsilon_1\epsilon_3 - 8\epsilon_2\epsilon_3}}, \label{eq:three_para_lam} \end{equation} and the average leakage and seepage rates $L =\frac{\epsilon_1 + \epsilon_2 + \epsilon_3}{4}$ and $S= \frac{\epsilon_1 + \epsilon_2 + \epsilon_3}{5}$. \label{coro:leak_three_para} \end{corollary} \begin{proof} By the definition of $\Lambda$ in Eq. \eqref{eq:noise_more_gen_total}, we have \begin{equation} Q = \begin{pmatrix} 1 - \epsilon_1/4 - \epsilon_2/4 & \epsilon_1/2 & \epsilon_2/2 \\ \epsilon_1/4 & 1 - \epsilon_1/2 - \epsilon_3/2 & \epsilon_3/2 \\ \epsilon_2/4 & \epsilon_3/2 & 1 - \epsilon_2/2 - \epsilon_3/2 \end{pmatrix} \end{equation} with eigenvalues as shown in Eq. \eqref{eq:three_para_lam}. $L_{\mathrm{ave}} = 1 -Q_{cc,cc} = \frac{\epsilon_1 + \epsilon_2}{4}$ and $S_{\mathrm{ave}}= \frac{2}{5} (Q_{cc,cl} + Q_{cc,lc}) = \frac{\epsilon_1 + \epsilon_2}{5}$. \end{proof} Note that all three parameters can hardly be determined simultaneously from the two decay rates $\lambda_1,\lambda_2$ alone. Nevertheless, we can determine the values of $L$ and $S$ if we have prior knowledge about $\epsilon_1,\epsilon_2,\epsilon_3$. For instance, if we know that the noise leaks similarly into the leakage states $\ket{02}$ and $\ket{20}$, we have $\bar{\epsilon}\approx \epsilon_1\approx \epsilon_2$ and $L\approx 5/8 + \lambda_1/8 - 3\lambda_2/4$ if $\bar{\epsilon}\geq 2\epsilon_3$, and $L\approx 5/8 + \lambda_2/8 - 3\lambda_1/4$ otherwise. Corollary \ref{coro:leak_two_sec} can be obtained by letting $\epsilon_3 = 0$ in Corollary \ref{coro:leak_three_para}.
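The closed form in Eq.~\eqref{eq:three_para_lam} can also be checked numerically. The short Python/NumPy sketch below (with arbitrarily chosen test values of $\epsilon_1,\epsilon_2,\epsilon_3$) verifies that the columns of $Q$ sum to one, so that $\lambda=1$ is always an eigenvalue, that the two remaining eigenvalues agree with Eq.~\eqref{eq:three_para_lam}, and it evaluates the averages used in the proof.
\begin{verbatim}
# Python/NumPy sketch: numerical check of Eq. (three_para_lam)
import numpy as np

e1, e2, e3 = 4e-4, 7e-4, 2e-4      # arbitrary test values
Q = np.array([[1 - e1/4 - e2/4, e1/2,            e2/2           ],
              [e1/4,            1 - e1/2 - e3/2, e3/2           ],
              [e2/4,            e3/2,            1 - e2/2 - e3/2]])
assert np.allclose(Q.sum(axis=0), 1.0)          # column sums give eigenvalue 1

disc = np.sqrt(9*e1**2 + 9*e2**2 + 16*e3**2 - 14*e1*e2 - 8*e1*e3 - 8*e2*e3)
lam_closed = 1 - 3*e1/8 - 3*e2/8 - e3/2 + np.array([1.0, -1.0]) * disc / 8

lam_num = np.sort(np.linalg.eigvals(Q).real)[::-1]   # descending: 1, lam_1, lam_2
assert np.allclose(lam_num[0], 1.0)
assert np.allclose(np.sort(lam_num[1:]), np.sort(lam_closed))

L_ave = 1 - Q[0, 0]                  # (e1 + e2) / 4, as in the proof
S_ave = 2 / 5 * (Q[0, 1] + Q[0, 2])  # (e1 + e2) / 5, as in the proof
print(L_ave, S_ave)
\end{verbatim}
\end{document}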
\begin{definition}[Definition:Non-Symmetric Relation] Let $\RR \subseteq S \times S$ be a relation in $S$. $\RR$ is '''non-symmetric''' {{iff}} it is neither symmetric nor asymmetric. \end{definition}
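For example, let $S = \{1, 2, 3\}$ and let $\RR = \{ (1, 2), (2, 1), (1, 3) \}$ be a relation in $S$. Then $\RR$ is not symmetric, since $(1, 3) \in \RR$ but $(3, 1) \notin \RR$; and $\RR$ is not asymmetric, since both $(1, 2) \in \RR$ and $(2, 1) \in \RR$. Hence $\RR$ is '''non-symmetric'''.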
The uniform-score gene set analysis for identifying common pathways associated with different diabetes traits Hao Mei1,2, Lianna Li3, Shijian Liu2, Fan Jiang2, Michael Griswold1 & Thomas Mosley4 Genetic heritability and expression study have shown that different diabetes traits have common genetic components and pathways. A computationally efficient pathway analysis of GWAS results will benefit post-GWAS study of SNP associations and identification of common genetic pathways from diabetes GWAS can help to improve understanding of the disease pathogenesis. We proposed a uniform-score gene-set analysis (USGSA) with implemented package to unify different gene measures by a uniform score for identifying pathways from GWAS data, and use a pre-generated permutation distribution table to quickly obtain multiple-testing adjusted p-value. Simulation studies of uniform score for four gene measures (minP, 2ndP, simP and fishP) have shown that USGSA has strictly controlled family-wise error rate. The power depends on types of gene measure. USGSA with a two-stage study strategy was applied to identify common pathways associated with diabetes traits based on public dbGaP GWAS results. The study identified 7 gene sets that contain binding motifs at promoter region of component genes for 5 transcription factors (TFs) of FOXO4, TCF3, NFAT, VSX1 and POU2F1, and 1 microRNA of mir-218. These gene sets include 25 common genes that are among top 5% of the gene associations over genome for all GWAS. Previous evidences showed that nearly all of these genes are mainly expressed in the brain. USGSA is a computationally efficient approach for pathway analysis of GWAS data with promoted interpretability and comparability. The pathway analysis suggested that different diabetes traits share common pathways and component genes are potentially regulated by common TFs and microRNA. The result also indicated that the central nervous system has a critical role in diabetes pathogenesis. The findings will be important in formulating novel hypotheses for guiding follow-up studies. Genome-wide association studies (GWAS) have been successful in identifying risk genetic variants for various human complex traits, and many GWAS have been deposited into dbGaP with genome results publicly available [1]. Effective analyses of these existing GWAS results will benefit post-GWAS study of SNP associations and improve genetic analyses of complex diseases. The majority of existing GWAS are based on single-SNP association tests; however, single-SNP GWAS have some critical limitations. A major concern is that most identified variants are out of the gene boundary and present only modest effect individually [2]. A large number of SNP tests require the use of stringent significance criteria (e.g. p-value ≤ 5 × 10 −8), which will lead to misidentification of SNPs with weak effects. Genetic heterogeneity can result in the presence of different risk variants in a gene at different GWAS, which further decreases study power and reduces replicability. Besides, common diseases like diabetes are essentially due to the effects of multiple genes, and it is difficult to extrapolate biological processes from single-SNP findings. Gene-set analysis (GSA), in contrast, hypothesizes that a common disease is influenced by polygenic factors, and GSA aims to test for associations between curated pathways and a phenotype through SNP associations [3]. 
In contrast to single-SNP GWAS, GSA examines SNPs inside the gene boundary and focuses on component gene associations of a curated gene set. Pathway study alleviates GWAS limitations and can contribute to the discovery of systematic genetic regulation underlying complex diseases. The p-value-based GSA is a common type of pathway analysis that does not require access to individual SNP genotypes; the analysis is based only on association p-values of SNPs over the genome, making the analysis broadly applicable for any GWAS with few limitations. GSA typically requires measuring gene associations from all SNPs mapped to genes. A straightforward gene measurement approach is to use the minimum p-value of SNP associations [3-6]. The second best p-value of SNP associations is also used as a gene measure to evade some spurious associations [7]. However, these measures are generally incomparable and hard to interpret. For example, the best p-value as a gene measure may be smaller than 0.05, but it is difficult to interpret the gene association without comparing to other genes; besides, the best p-value is always smaller than the second best p-value, but it does not indicate a stronger gene association than the other. For a curated gene set, an enrichment score is often calculated from its component genes as a statistic to measure pathway association. A common score is a Kolmogorov-Smirnov-like statistic [4,8] calculated over gene measures. Other effective measures include a count of significant genes [6,9], the ratio of nominally significant (P < 0.05) to non-significant SNPs [10], max mean and re-standardization of gene measures [7]. These measures typically require a large number of permutations to obtain an association p-value, and the high computational load may impede the GSA application. Furthermore, the enrichment score is gene-measure dependent, and the permutation is study- and sample-specific; this makes results difficult to compare among different studies. The Z-statistic method is a parametric measure of pathway association [7] without requiring permutation; however, the test requires an assumption of gene-set independence. To supplement existing methods and address their potential limitations, we proposed a uniform-score GSA (USGSA) that aims to improve specificity of pathway and promote interpretability and comparability of pathway results with high computational efficiency. Diabetes is a chronic metabolic disease of hyperglycemia resulting from defects in insulin secretion, action, or both. Its prevalence continues to increase, and is anticipated to rise to 366 million worldwide in 2030 [11]. Diabetes has two major types accompanied with varied symptoms and complications. Type I diabetes (T1D) is known as insulin-dependent diabetes and it is believed to be caused by destruction of beta cells with subsequently absolute lack of insulin. Type II diabetes (T2D) is non-insulin-dependent diabetes and it is mainly characterized by insulin resistance with subsequently relative lack of insulin and hyperglycemia, for which beta cells in contrast can still produce and secrete insulin. A pathway study can help researchers understand the genetic basis of diabetes pathogenesis and design effective strategies for alleviating the public heath burden of diabetes. Diagnostic criteria recommended by the American Diabetes Association include lab testing of hemoglobin A1c (HbA1c), fasting plasma glucose (FPG), glucose tolerance and hyperglycemia [12]. 
The long-term effects of diabetes also cause different complications, including diabetic nephropathy [12]. Identification of common genetic pathways underlying these symptoms and complications can provide clues to better understand the etiology, pathophysiology changes and progress of diabetes. Genetic heritability of T1D is as high as 88% [13]. The concordance rate of type 2 diabetes (T2D) is 50–92% for monozygotic (MZ) twins, consistently greater than the rate for dizygotic (DZ) twins [14]. The complication of diabetic nephropathy presented familial clustering [15]. Twin bivariate genetic study of the Atherosclerosis Risk in Communities (ARIC) population showed that genetic heritability is 30% for fasting glucose and 39% for fasting insulin, and genetic correlation between them is 22% ~ 39% [16]. Gene expression study evidenced that T1D and T2D share common pathways which are likely related to hyperglycemia and beta-cell dysfunctions [17]. These evidences suggest strong genetic susceptibility to diabetes traits and indicates shared genetic components and pathways among diabetes traits. The current study aims to identify common pathways associated with diabetes traits by analyzing dbGaP GWAS data, and we expect that the high specificity of USGSA using an independent distribution table of pathway associations will make the findings replicable and comparable among studies. Simulation study of family-wise error rate and power Dataset I, II and III of null pathway association were examined by USGSA with gene measures of minP, 2ndP, simP and fishP. The false group positive rate (GPR) of identifying significant pathways was calculated to estimate the family-wise error rate (FWER), and results are shown in Figure 1. The false GPR based on pathway empirical p-value (p e ) was 15.7%, 15.4% and 15.3% for minP, 20.1%, 19.8% and 19.7% for 2ndP, 10%, 11.9% and 10.1% for simP, and 21.7%, 22.1% and 21.6% for fishP at the three datasets. The results showed that the p e based on hypergeometric distribution function has inflated FWER due to multiple testing. In contrast, the false GPR based on pathway adjusted p-value (p adj ) was 0.3%, 0.2% and 2.3% for minP, 1.2%, 1.2% and 1.0% for 2ndP, 0.1%, 0.1% and 0.1% for simP, and 1.9%, 2.1% and 1.9% for fishP at the three datasets. The results showed that the p adj based on pre-generated permutation table has well-controlled FWER. For comparison, a computationally efficient approach, GSA-SNP with corrected p-values for multiple testing, was also examined and the GRP is 11.8% at dataset I, 2% at dataset II, and 1.1% at dataset III. The results demonstrate that the corrected p-values of GSA-SNP may also have increased Type I error. Estimate of USGSA family-wise error rate by group positive rate. Dataset IV and V were examined to estimate specificity and power of USGSA and results are presented in Figure 2. Since only component genes of KEGG_T2D were simulated to contain SNP associations, majority of MSigDB gene sets had null pathway associations at both data sets and a small GPR of MSigDB gene sets indicated high specificity of USGSA. Pathway analysis by USGSA showed that the GPR based on adjusted p-value of p adj is 0.3% (minP), 1.3% (2ndP), 0.1% (simP) and 2.2% (fishP) at dataset IV, and the value is 0.2% (minP), 1.1% (2ndP), 0.02% (simP) and 1.8% (fishP) at dataset V. The GPR of GSA-SNP approach based on corrected p-value is 1.5% in dataset IV and 2.2% in dataset V. 
The simulation studies showed that both USGSA and GSA-SNP have high specificity of pathway association test. Estimate of USGSA power by group positive rate. The power of identifying KEGG_T2D in Datasets IV and V were estimated based on the pathway P adj of USGSA and the corrected p-values of GSA-SNP. Results are presented in Figure 2. The power is 24% (minP), 95% (2ndP), 9% (simP) and 100% (fishP) at dataset IV, and the value is 84% (minP), 1% (2ndP), 100% (simP) and 58% (fishP) at dataset V. Similar to 2ndP measure of USGSA, the GSA-SNP has power of 98% in dataset IV and 2% in dataset V. The results demonstrate that analysis power depends on correct selection of a gene measure and the same gene measure may have contrary conclusion due to different characteristics of gene effects at two datasets. For example, the power of USGSA is 24% at dataset IV but 84% at dataset V for minP, and the power is 100% at dataset IV but 58% at dataset V for fishP. The GSA-SNP analysis uses the second best SNP p-value as gene measure, so it presents similar power as USGSA with 2ndP measure. Identification of common pathways for diabetes traits Top 500 gene sets were selected from USGSA pathway study of GWAS at stage I and validation analysis at stage II identified 7 common gene sets significantly associated with all studied diabetes traits. Characteristics of the gene sets were summarized in Table 1. Of the identifications, six gene sets share a transcription factor (TF)-binding motif at gene promoter region of [-2 kb, 2 kb] respectively: 1) pathway "1461" contains the motif of AACTTT, but the binding factor is not known; 2) pathway "2247" contains the binding motif of TTGTTT for FOXO4, which regulates the insulin signaling pathway through binding to insulin-response elements [18,19]; 3) pathway "2268" contains the binding motif of TGGAAA for NFAT, which is an activator in response to elevation of intracellular Ca2+, regulating insulin gene transcription by a Ca(2+)-responsive pathway [20]; 4) pathway "2240" contains the motif of CAGGTG for TCF3, which is up-regulated specifically in islets of T2D patients and is associated with Wnt signaling in diabetes pathogenesis [21]; 5) pathway "2239" contains the binding motif of TAATTA for VSX1; and 6) pathway "1551" contains the binding motif of NNGAATATKCANNNN for POU2F1. The 7th pathway (pid: 2076) contains the target motif of AAGCACA for microRNA, mir-218. Table 1 Common gene sets associated with different diabetes traits At the stage I, the identified gene sets of '1461' (AACTTT-motif), '2247' (FOXO4), '2268' (NFAT), '2240' (TCF3), '2076' (MIR-218), '2239' (VSX1) and '1551' (POU2F1) have 11.1% ~ 13.2%, 9.7% ~ 10.8%, 9.4% ~ 11.9%, 8.8% ~ 10.0%, 13.3% ~ 18.8%, 11.1% ~ 14.2% and 17.7% ~ 22.3% of component genes with uniform score ≤ 5% respectively (Table 2). This value is significantly higher than the assumed 5% of genes in the genome associated with diabetes traits, which has pathway association p-values (p e ) of 3.71*10 −7 ~ 3.44*10 −28 (Table 2). The measures of gene set associations over GWAS of stage I showed that gene sets of '1461', '2247', '2268', '2240', '2076', '2239' and '1551' have chi2 (rank) of 1008.84 (13), 626.50 (14), 596.97 (15), 415.33 (21), 474.90 (17), 462.18 (18) and 499.03 (16) respectively. Pathway analysis of the 7 gene sets at stage II gave consistent results as stage I. 
The gene sets of '1461', '2247', '2268', '2240', '2076', '2239' and '1551' have 11.4% ~ 14.7%, 10.0% ~ 11.8%, 10.4% ~ 11.3%, 9.4% ~ 11.0%, 14.3% ~ 16.0%, 10.7% ~ 14.1% and 15.5% ~ 22.2% of component genes with uniform score ≤ 5%, which corresponds to p e value of 6.70*10 −20 ~ 6.45*10 −42, 1.41*10 −13 ~ 9.05*10 −23 , 2.72*10 −13 ~ 1.43*10 −18, 2.14*10 −12 ~ 4.60*10 −22, 1.72*10 −9 ~ 5.85*10 −13, 8.35*10 −8 ~ 1.73*10 −16 and 6.41*10 −7 ~ 1.10*10 −14, respectively (Table 2). These gene sets are significant over all GWAS after controlling for multiple testing and most of their adjusted p-value (p adj ) are <10 −4. The identified pathways with pids of "1461", "2247", "2268", "1551", "2239", "2076", and "2240" respectively contained 18, 10, 8, 4, 4, 7, and 8 significant common genes for all GWAS in stage I and II, resulting in 25 unique common genes. These genes and their corresponding pathways are summarized in Table 3. Mean uniform scores of these genes for stage I and II are presented in Figure 3. The results demonstrate that these genes had ranges of 0.07% ~ 2.29% at the stage I analysis and 0.13% ~ 2.74% at the stage II analysis, indicating that these genes are among the top 3% of the gene associations with diabetes traits over genome. Detailed uniform scores are noted in Additional file 1: Table S2. Our literature and gene annotation review (Additional file 2: Table S3) showed that almost all of these genes are mainly expressed in the brain, and most are related to neurodevelopment and brain function, including schizophrenia, autism, Alzheimer's disease, impaired learning, and intellectual disability. The identified common pathways and their consistently significant genes suggest that the pathogenesis of diabetes may be attributable to microRNA and TF-mediated regulation and the central nervous system (CNS) plays important role in the regulation. Table 3 Significant genes from common gene sets associated with different diabetes traits Average uniform score of significant genes for stage I and II GWAS. We proposed the USGSA method with implemented R package of snpGeneSets to provide a convenient and fast tool for study of pathway association from GWAS data. The USGSA applies the uniform score to unify four different gene measures of minP, 2ndP, simP and fishP, and measures pathway association by hypergeometric test. The pathway analysis by USGSA is based on test of MSigDB gene sets. The MSigDB annotates 10,722 genes sets with 32,364 genes, and the number is much smaller than the number of GWAS SNPs. Therefore, the pathway analysis will alleviate the burden of multiple-test adjusting and improve testing power. Application of USGSA successfully identified 7 significant gene sets associated with all studied diabetes traits, indicating common genetic regulations shared among different traits. Four gene measures of USGSA are proposed to summarize gene effects with different characteristics: the minP and the 2ndP measure gene effects based on a single-SNP association; while the simP and the fishP assess gene effects based on multiple SNP associations with accounting for the number of GWAS SNPs in a gene. USGSA applies a uniform score to unify these gene measures for comparability with the same interpretability. The score ranges from 0 to 1, and it is explained as top percentage of the gene associations over genome. 
The USGSA calculates empirical p e of pathway association from gene measures based on hypergeometric distribution, which accounts for both the number of significant genes and the size of the pathway. The pathway adjusted p-value (P adj ) can be calculated by permutation test to account for pathway dependence and multiple testing, and an independent pre-generated permutation table is directly used to facilitate the calculation. USGSA gene measures can better facilitate replication studies of a gene effect that may have inconsistent SNP associations from different GWAS due to genetic heterogeneity. USGSA can also help to identify a significant pathway shared by different traits which however may be activated through different mechanisms. For example, expression study showed that T1D and T2D likely share a common pathway which however has different regulated genes (e.g. MYC) [17]. For this study, the gene set of FOXO4 (pid: '2247') has total ~2,000 genes, and contained 131 ~ 149 and 167 ~ 214 genes with uniform scores ≤ 0.05 in stage I and stage II, respectively, which are about 9.7% ~ 11.8% of component genes among top 5% of the gene associations over genome; the gene set had only 10 significant genes over all studies. These results indicate that multiple diabetes traits may be influenced by a common pathway activated through different genes. The USGSA does not require access to individual-level SNP genotypes and the analysis is based on GWAS p-value only. For existing pathway analysis of GWAS p-values, ALIGATOR preselects a p-value criterion to define a list of significantly associated SNPs [9]; the i-GSEA4GWAS [4], GeSBAP [5] and gamGWAS [6] all select the best p-value of SNP associations to measure gene effects; and the GSA-SNP enables selection of the kth (k = 1, 2, 3, 4, or 5) best p-values as the gene measure [7]. Compared to these analyses, the USGSA provides broader measures to summarize gene effects with different characteristics as described above. The existing pathway analysis, e.g. ALIGATOR [9], generally requires selection of a p-value cut-off to identify significant SNPs and genes for pathway test. The selection may be arbitrary and study-dependent, and the results may not be comparable between different studies. In contrast, USGSA selects a uniform-score cut-point (0 ≤ α G ≤ 1) for all four gene measures, which is hypothesized as α G proportion of genes associated with the study trait. Significant genes identified from different GWAS data by different USGSA gene measure will have the same interpretation, explained as top 100*α G % of the gene associations over genome. Comparing with many existing pathway analyses that rely on a time-consuming permutation to adjust for complex genetic structures, the USGSA implements the adjustment by corresponding gene measures, hypergeometric test and permutation test. In addition, benefiting from the random uniform distribution of uniform score, the USGSA provides a pre-generated permutation distribution table, which facilitates a computationally efficient calculation of pathway adjusted p-value (P adj ) for different gene measures. The USGSA calculates both empirical (p e ) and adjusted (P adj ) p-values to measure pathway association. Due to multiple testing issue, the p e can result in high FWER (Figure 1). The P adj was shown to have well-controlled FWER and high specificity for all four gene measures (GPR ≤ 2.3%), especially for the simP measure (Figures 1 and 2). 
GSA-SNP analysis implements a computationally efficient measure of pathway association from GWAS SNP p-values by Z statistic. However, the measure assumes independence of gene sets and the test may result in inflated FWER which is evidenced in simulation study of dataset I (GRP = 11.8%). By comparison, the power is 0.1% ~ 1.9% for 4 gene measures of USGSA with P adj as significance indicator (Figure 1). Therefore, the permutation-based P adj not only adjusts for multiple testing but also alleviates potential issues due to complex genetic structure. The power of USGSA based on P adj depends on selected gene measures, and an inappropriate measure can result in a contrary conclusion. For dataset IV, component genes of KEGG_T2D pathway are simulated to have multiple SNP associations each, and the gene measures of 2ndP and fishP have extremely high power of 98% and 100%, whereas the gene measures of minP and simP have low power of 24% and 9% respectively. For dataset V, 11 component genes of KEGG_T2D pathway are simulated to have an extremely strong SNP association each, and the gene measures of minP and simP presented high power of 84% and 100%, whereas 2ndP and fishP showed low power of 1% and 58% respectively. Therefore, different gene measures have their adaptive tests of pathway associations: 2ndP and fishP are more fitting for a gene containing multiple SNP effects, while minP and simP are more suitable for a gene having extreme SNP effects. The results also indicate that the fishP is tolerant to gene effects characterized with different types of SNP associations. USGSA was successfully applied to identify common pathways associated with different diabetes traits. GWAS SNP associations of diabetes traits were identified from the dbGaP and classified using 11 FHS GWAS in stage I and 5 non-FHS GWAS in stage II. GWAS analyses of the FHS samples are dependent, leading to the possibility that SNP associations are potentially correlated among different GWAS and the identified pathways in stage I may be false-positive. Therefore, we analyzed the non-FHS GWAS, based on independent samples, to validate the candidate gene sets through stage II. Although the FHS and non-FHS GWAS are respectively based on lower- and higher-resolution genotyping from different platforms, SNP-Gene mapping of USGSA (Figure 4) applies the recent genome build and makes a consistent map of SNPs and genes to perform comparable pathway studies across heterogeneous GWAS. Implementation of USGSA for pathway association test. USGSA with the fishP measure successfully identified 7 common gene sets associated with diabetes traits. Component genes of these gene sets have significantly higher probability of association with diabetes traits among the top 5% of genes than a random gene. These component genes have common binding motifs in their promoter regions of [-2kb, 2kb] around transcription start sites. The motifs include targets for 5 TFs of FOXO4, NFAT, TCF3, VSX1 and POU2F1, 1 microRNA of MIR-218 and one unknown binding factor. There are 25 common component genes with uniform score ≤ 0.05 over all GWAS (Table 3). These genes and their binding factors suggest potential regulatory genetic mechanisms underlying diabetes pathogenesis. For example, CNTN4 belongs to the gene sets of VSX1 (pid: '2239') and TCF3 (pid: '2240'). Association tests and mouse experiments have indicated that CNTN4 is an obesity–insulin targeted gene [22]. However, the gene function related to diabetes pathogenesis remains unclear. 
TCF3 is highly expressed in islets of T2D patients and is associated with Wnt signaling in diabetes pathogenesis [21]; and VSX1, expressed in ocular tissues, is associated with eye diseases [23]. Therefore, our findings and existing published evidences can lead to the hypothesis that mutations in the CNTN4 gene modify its binding with TCF3 in the pancreas and VSX1 in the brain and activate related genetic regulations for diabetes and its complications. NPAS3 is the consistent significant gene of all GWAS with target motifs for TFs of FOXO4 (pid: '2247') and POU2F1 (pid: '1551'). FOXO4 [19] and POU2F1 [24] have been shown to regulate insulin signaling and glucocorticoid expression, respectively. Therefore, the component gene of NPAS3 and TFs of FOXO4 and POU2F1 can form another hypothesis of potential genetic pathways related to diabetes pathogenesis. Our literature and gene annotation review showed that nearly all of the 25 common genes are highly expressed in the brain, and most of them are known as susceptibility genes for neural development and neurological disorders (Additional file 2: Table S3). These findings suggest that the CNS may have a critical role in diabetes pathogenesis. This conclusion is supported by previous studies of NPAS3, where gene mutations are related to the aerobiology of psychiatric illness [25,26] and the gene can induce susceptibility to diabetes for psychiatric patients [27]. Therefore, although pathway analysis at this study cannot provide direct evidences of regulatory mechanisms, the findings can help to form new hypotheses and initiate follow-up studies to ascertain pathogenetic changes of diabetes progress. SNP associations for the analysis were retrieved from public GWAS results stored in the dbGaP. However, the number of available GWAS for diabetes traits is limited and the genotyping has low resolution (≤500K SNPs), especially for the FHS GWAS, which may cause missed identifications of some important gene and pathway associations. Therefore, more GWAS of diabetes traits are required to improve the power and replicate the findings in future studies. In addition, the gene set analysis depends on the selection of uniform-score cut-point α G , which is 5% for this study. The MSigDB database includes 32,364 genes from 10,722 genes sets and this selection assumes ~1,620 genes of them associated with every diabetes trait. Pathway analysis of diabetes traits showed that these genes are significantly enriched in the seven identified gene sets. A smaller/larger α G will cause a decreased/increased number of target genes for enrichment test, which consequently affects identification of significant pathways. The current study aims to identify common pathways significantly associated with different diabetes traits. The findings will help to discover shared genes and genetic pathways related to different diabetes symptoms and complications. A significant common gene set is observed only if it has P adj ≤ 0.05 over all GWAS in stage II, and the probability of making type I error is far less than 0.05. The 7 significant identifications by USGSA account for 0.07% of total MSigDB gene sets. These gene sets contain 25 common genes, which are among the top 5% of gene association over genome for all GWAS and account for <1.5% of total hypothesized susceptibility genes in enrichment test. 
However, it may not be accurate for the hypothesis that 5% of genes in the genome (~1,620 susceptibility genes) are associated with diabetes traits, and a 1% deviation from the hypothesis will result in a change of ~320 susceptibility genes for diabetes traits. Therefore, although there is strong evidence that component genes from the 7 gene sets are associated with diabetes traits, these 25 common genes are only candidate susceptibility genes for diabetes pathogenesis, which require further replication and experimental validation in future studies.

In summary, we proposed the USGSA method with the implemented R package snpGeneSets to facilitate computationally efficient analysis of pathway association based only on association p-values of GWAS SNPs. USGSA applies a uniform score that unifies 4 gene measures for summarizing different types of gene effects and uses a pre-generated distribution to directly obtain a pathway adjusted p-value. Simulation studies showed that USGSA has strictly controlled FWER and high specificity for all gene measures, but the power depends on the selected type of gene measure. USGSA makes pathway identification from different gene measures comparable and improves interpretability and replicability. The pathway analysis of public dbGaP GWAS results identified 7 common gene sets significantly associated with all studied diabetes traits. Component genes of these gene sets are significantly enriched in the top 5% of gene associations with diabetes traits compared to random genes. The component genes have common promoter motifs for target TFs and microRNA, and 25 significant common genes were identified with high expression in the brain. The findings will help to discover pleiotropic genetic effects and formulate novel hypotheses of common genetic regulations underlying different diabetes symptoms and complications, which are important in guiding follow-up studies.

The USGSA method and implementation

The USGSA method for pathway analysis takes five consecutive steps to test associations of curated gene sets with a phenotype based on association p-values of GWAS SNPs (Figure 4). This GSA method is implemented in an R package, snpGeneSets, and can be freely accessed at http://www.umc.edu/biostats_software/. Step 1 is the SNP-Gene mapping. The gene boundary is defined from the upstream region of the transcription start site (TSS) to the downstream region of the transcription end site (TES) in order to include a potential promoter region. Positions of GWAS SNPs and genes are identified based on the NCBI dbSNP [28] and Gene [29] databases, respectively, of reference genome build 37. The mapping process assigns a SNP to a gene if the SNP falls inside the defined gene boundary. The second step is 'Gene measures' for summarizing the gene effect from SNP associations. There are four gene measures: the best p-value (minP), the second best p-value (2ndP), the Simes' p-value (simP) and the Fisher's p-value (fishP). For K SNPs mapped to a gene with GWAS p-values $(p_1, p_2, \ldots, p_K)$, the ordered p-values are defined as $p_{(1)} \le p_{(2)} \le \ldots \le p_{(K)}$, where $p_{(1)} = \min\{p_1, p_2, \ldots, p_K\}$ and $p_{(K)} = \max\{p_1, p_2, \ldots, p_K\}$. The four measures are calculated as: $$ \begin{aligned} minP &= p_{(1)}, \\ 2ndP &= p_{(2)}, \\ simP &= \min_i\left\{K\,p_{(i)}/i\right\}, \\ fishP &= \Pr\left(X\ge x=-2\sum_{i=1}^{K}\log(p_i)\right)=\varPsi(x), \end{aligned} $$ where $\varPsi(x)$ is the upper-tail probability of the chi-square distribution with df = 2K.
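As a concrete illustration of Step 2, the four gene measures can be computed for a single gene from the p-values of its mapped SNPs as in the short Python sketch below. The implementation is only meant to mirror the formulas above and is not the snpGeneSets code; fishP is obtained from the upper tail of the chi-square distribution with 2K degrees of freedom, and the fallback for 2ndP when a gene maps to a single SNP is our own choice.

# Python sketch of the four gene measures for one gene
import numpy as np
from scipy.stats import chi2

def gene_measures(p):
    p = np.sort(np.asarray(p, dtype=float))      # p_(1) <= ... <= p_(K)
    K = p.size
    minP = p[0]
    sndP = p[1] if K > 1 else p[0]               # second best p-value
    simP = np.min(K * p / np.arange(1, K + 1))   # Simes' combined p-value
    x = -2.0 * np.sum(np.log(p))                 # Fisher's statistic
    fishP = chi2.sf(x, df=2 * K)                 # Pr(X >= x), X ~ chi-square(2K)
    return dict(minP=minP, sndP=sndP, simP=simP, fishP=fishP)

print(gene_measures([0.002, 0.04, 0.31, 0.77]))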
All measures take values between 0 and 1 and a smaller value indicates a stronger gene association.

Every type of gene measure is converted to a uniform score in Step 3. The uniform score of the i-th gene is calculated as $U_i = \left(\sum_j I(M_j < M_i) + 0.5\sum_j I(M_j = M_i)\right)/L$, where $M_i$ is the gene measure of the i-th gene and L is the total number of genes. The $U_i$ estimates the proportion of genes with stronger associations than the i-th gene in the genome. The cumulative distribution of the uniform score approximately converges to $\Pr(U \le u) \approx u$ for a large number of genes L. When all genes are randomly associated with a phenotype, their uniform scores will have an approximately uniform distribution, i.e. $U \sim U(0,1)$.

Curated gene sets are obtained from the MSigDB [30] database, which integrates heterogeneous annotations from the Kyoto Encyclopedia of Genes and Genomes (KEGG) [31], the Reactome [32], the Gene Ontology [33] and the Biocarta [34]. Based on the sources and characteristics, USGSA classifies all gene sets into 20 types (Additional file 3: Table S1) and assigns a unique pathway id (pid) to every gene set.

Pathway associations are measured by a hypergeometric test for every curated gene set during Step 4. A pathway empirical p-value (p_e) of a gene set Ω is obtained from the hypergeometric distribution and calculated as: $$ p_e = 1 - \sum_{i=0}^{K}\binom{S}{i}\binom{L-S}{l-i}\Big/\binom{L}{l}, $$ where L is the number of GWAS SNP-mapped genes $(G_i: 1 \le i \le L)$; $l=\sum_{i=1}^{L} I(U_i\le \alpha_G)$ defines the number of significant genes with uniform score ≤ α_G; $S=\sum_{i=1}^{L} I(G_i\in \Omega)$ and $K=\sum_{i=1}^{L} I(G_i\in \Omega)\,I(U_i\le \alpha_G)$. The test depends on the selection of the parameter α_G, which is the hypothesized percentage of GWAS SNP-mapped genes associated with the phenotype over the genome. Because different gene sets can have overlapping genes, the pathways being tested may not be mutually independent, and the pathway p_e will not follow a uniform distribution. A permutation test is therefore required to generate a distribution of p_e and identify a pathway adjusted p-value P_adj with control for multiple testing and pathway dependence. Since the calculation of the pathway p_e depends only on the uniform scores of genes, a sample- and study-independent permutation distribution table of p_e is generated from 10,000 randomly simulated datasets of uniform scores for all GWAS SNP-mapped genes drawn from a uniform distribution. For a particular gene set, its pathway P_adj is then calculated as $$ P_{adj} = \sum_{i=1}^{10{,}000} I\left(\min_j\left\{p_{ij}\right\}\le p_e\right)\Big/10{,}000, $$ where $p_{ij}$ is the permuted pathway p_e of the j-th curated gene set at the i-th permutation dataset.
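Steps 3 and 4 can be sketched in a few lines of Python. The function and variable names below are illustrative rather than those of snpGeneSets, the hypergeometric tail follows the p_e formula exactly as printed above, and perm_table stands for the pre-generated permutation table (one row per simulated dataset, one column per curated gene set).

# Python sketch of Steps 3-4: uniform score, empirical p_e, adjusted P_adj
import numpy as np
from scipy.stats import hypergeom

def uniform_scores(M):
    # U_i = (#{M_j < M_i} + 0.5 * #{M_j = M_i}) / L for gene measures M
    M = np.asarray(M, dtype=float)
    L = M.size
    less = np.array([(M < m).sum() for m in M])
    ties = np.array([(M == m).sum() for m in M])
    return (less + 0.5 * ties) / L

def pathway_pe(U, in_set, alpha_G=0.05):
    # hypergeometric empirical p-value of one gene set Omega
    L = U.size
    sig = U <= alpha_G
    l, S = sig.sum(), in_set.sum()
    K = (sig & in_set).sum()
    return 1.0 - hypergeom.cdf(K, L, S, l)     # 1 - sum_{i=0}^{K} P(X = i)

def pathway_padj(pe, perm_table):
    # fraction of permutations whose smallest permuted p_e is <= the observed p_e
    return np.mean(perm_table.min(axis=1) <= pe)

Simulation study

We simulated three datasets to evaluate the type I error and two datasets to evaluate the power of USGSA for different gene measures. To mimic a real pathway analysis of GWAS SNP associations, we extracted SNPs from the T2D GWAS [35] for the simulation (n = 306,417). The gene boundary for the analysis was defined to include 20 kb regions both upstream and downstream of the transcription zone.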
Dataset I was generated by simulating random SNP p-values in a uniform distribution between 0 and 1, i.e. ~U(0,1). The SNP p-value in Dataset II was simulated to follow a beta distribution with shape α 1 = α 2 = 0.5, i.e., ~beta(0.5,0.5). The SNP p-value in Dataset III was randomly generated by permuting T2D-GWAS p-values [35]. Datasets I, II and III were analyzed to evaluate the Type I error of USGSA. To evaluate power, we simulated a pathway association of gene set, KEGG_T2D [36], which consists of 47 genes. For Dataset IV, all SNPs mapped to KEGG_T2D genes had GWAS p-values in a beta distribution beta(1,3) with a mean SNP p-value of 0.25, and all other SNPs had GWAS p-values in a uniform distribution U(0,1) with a mean SNP p-value of 0.50. For Dataset V, two steps were taken to simulate 11 genes of KEGG_T2D associated with phenotype. For Step 1, SNP p-values were randomly simulated in a uniform distribution ~ U(0,1). For Step 2, 11 genes were randomly selected from KEGG_T2D, and the minimum SNP p-value of every gene was randomly switched with the 11 minimum p-values of all GWAS SNPs. The number of 11 was determined through the inverse hypergeometric distribution function for pathway empirical p e that corresponds to the pathway adjusted P adj = 0.05 from the pre-generated permutation table. Every dataset consisted of 100 GWAS, and the USGSA was applied to test pathway associations of all MSigDB gene sets at every GWAS. GSA-SNP analysis was also applied and results were compared with USGSA. The group positive rate (GPR) was estimated as the proportion of MSigDB gene sets with P adj ≤ 0.05, and power was estimated as the probability of KEGG_T2D with P adj ≤ 0.05. Pathway analysis of diabetes traits USGSA was applied to identify common pathways associated with different diabetes traits, which took fishP as gene measure and assumed 5% of genes over genome associated with diabetes traits (i.e. α G = 0.05). GWAS results of diabetes traits were obtained from dbGaP [1]; GWAS study and analysis IDs are summarized in Table 4. The pathway study was designed in two stages. The stage I contains 11 Framingham Heart Study (FHS) GWAS [37], and FHS traits include diabetes incidence, fasting plasma glucose (FPG), fasting Insulin (FI), insulin sensitivity, HbA1c and HOMA-IR. At stage I, pathway empirical p-value (p e ) was calculated for every MSigDB gene set and summary of gene set's association was computed as Chi2 = ∑ i − 2 ∗ log((p e ) i ) over all GWAS. Gene sets were ranked in decreasing order of Chi2 and top 500 gene sets (~5%) were selected for validation in the stage II of 5 independent GWAS (Table 4). Table 4 dbGaP GWAS of diabetes traits for common pathway study Compared to low-resolution genotyping of FHS GWAS in stage I, GWAS of stage II contained approximately 300,000-500,000 SNPs with association p-values for pathway analysis, and the traits include T2D, T1D, diabetic nephropathy and serum insulin. Pathway adjusted p-value (p adj ) was calculated for every validated gene set and a significant common pathway is defined as having P adj ≤ 0.05 over all GWAS. Component genes of significant gene sets were examined and a significant common gene is identified if it has uniform score ≤ 0.05 over all GWAS. 
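The two-stage selection described above reduces to the following schematic Python sketch. Only the -2*log(p_e) ranking, the top-500 cut and the P_adj ≤ 0.05 criterion over all stage-II GWAS come from the text; the array names and shapes are our own convention (rows index GWAS, columns index gene sets).

# Python sketch of the two-stage selection of common gene sets
import numpy as np

def stage1_rank(pe_stage1, top=500):
    # pe_stage1[g, s]: empirical p_e of gene set s in stage-I GWAS g
    chi2_score = (-2.0 * np.log(pe_stage1)).sum(axis=0)   # Chi2 = sum_g -2*log(p_e)
    return np.argsort(chi2_score)[::-1][:top]             # indices of top gene sets

def stage2_common(padj_stage2, candidates, alpha=0.05):
    # padj_stage2[g, s]: adjusted p-value of gene set s in stage-II GWAS g
    keep = np.all(padj_stage2[:, candidates] <= alpha, axis=0)
    return np.asarray(candidates)[keep]                   # significant common gene sets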
USGSA: Uniform-score gene-set analysis TFs: GWAS: Genome-wide association studies GSA: Gene-set analysis HbA1c: FPG: Fasting plasma glucose T1D: MZ: Monozygotic DZ: Dizygotic ARIC: Atherosclerosis Risk in Communities TSS: Transcription start site TES: Transcription end site pathway id FHS: Framingham Heart Study Fasting Insulin FWER: Family-wise error rate CNS: GPR: Group positive rate Tryka KA, Hao L, Sturcke A, Jin Y, Wang ZY, Ziyabari L, et al. NCBI's Database of Genotypes and Phenotypes: dbGaP. Nucleic Acids Res. 2014;42(1):D975–9. Maher B. Personal genomes: The case of the missing heritability. Nature. 2008;456(7218):18–21. Mooney MA, Nigg JT, McWeeney SK, Wilmot B. Functional and genomic context in pathway analysis of GWAS data. Trends Genet. 2014;30(9):390–400. Zhang K, Cui S, Chang S, Zhang L, Wang J. i-GSEA4GWAS: a web server for identification of pathways/gene sets associated with traits by applying an improved gene set enrichment analysis to genome-wide association study. Nucleic Acids Res. 2010;38(Web Server issue):W90–5. Medina I, Montaner D, Bonifaci N, Pujana MA, Carbonell J, Tarraga J, et al. Gene set-based analysis of polymorphisms: finding pathways or biological processes associated to traits in genome-wide association studies. Nucleic Acids Res. 2009;37(Web Server issue):W340–4. Jia P, Wang L, Fanous AH, Chen X, Kendler KS, International Schizophrenia C, et al. A bias-reducing pathway enrichment analysis of genome-wide association data confirmed association of the MHC region with schizophrenia. J Med Genet. 2012;49(2):96–103. Nam D, Kim J, Kim SY, Kim S. GSA-SNP: a general approach for gene set analysis of polymorphisms. Nucleic Acids Res. 2010;38(Web Server issue):W749–54. Wang K, Li M, Bucan M. Pathway-based approaches for analysis of genomewide association studies. Am J Hum Genet. 2007;81(6):1278–83. Holmans P, Green EK, Pahwa JS, Ferreira MA, Purcell SM, Sklar P, et al. Gene ontology analysis of GWA study data sets provides insights into the biology of bipolar disorder. Am J Hum Genet. 2009;85(1):13–24. O'Dushlaine C, Kenny E, Heron E, Donohoe G, Gill M, Morris D, et al. Molecular pathways involved in neuronal cell adhesion and membrane scaffolding contribute to schizophrenia and bipolar disorder susceptibility. Mol Psychiatry. 2011;16(3):286–92. Wild S, Roglic G, Green A, Sicree R, King H. Global Prevalence of Diabetes: Estimates for the year 2000 and projections for 2030. Diabetes Care. 2004;27(5):1047–53. American Diabetes A. Diagnosis and classification of diabetes mellitus. Diabetes Care. 2014;37 Suppl 1:S81–90. Hyttinen V, Kaprio J, Kinnunen L, Koskenvuo M, Tuomilehto J. Genetic liability of type 1 diabetes and the onset age among 22,650 young Finnish twin pairs: a nationwide follow-up study. Diabetes. 2003;52(4):1052–5. Permutt MA, Wasson J, Cox N. Genetic epidemiology of diabetes. J Clin Invest. 2005;115(6):1431–9. Seaquist ER, Goetz FC, Rich S, Barbosa J. Familial clustering of diabetic kidney disease. Evidence for genetic susceptibility to diabetic nephropathy. N Engl J Med. 1989;320(18):1161–5. Vattikuti S, Guo J, Chow CC. Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits. PLoS Genet. 2012;8(3), e1002637. Kaizer EC, Glaser CL, Chaussabel D, Banchereau J, Pascual V, White PC. Gene expression in peripheral blood mononuclear cells from children with diabetes. J Clin Endocrinol Metab. 2007;92(9):3705–11. FOXO4_HUMAN, P98177. http://www.uniprot.org/uniprot/P98177. 
Kops GJ, de Ruiter ND, De Vries-Smits AM, Powell DR, Bos JL, Burgering BM. Direct control of the Forkhead transcription factor AFX by protein kinase B. Nature. 1999;398(6728):630–4. Lawrence MC, Bhatt HS, Watterson JM, Easom RA. Regulation of insulin gene transcription by a Ca(2+)-responsive pathway involving calcineurin and nuclear factor of activated T cells. Mol Endocrinol. 2001;15(10):1758–67. Lee SH, Demeterco C, Geron I, Abrahamsson A, Levine F, Itkin-Ansari P. Islet specific Wnt activation in human type II diabetes. Exp Diabetes Res. 2008;2008:728763. Kraja AT, Lawson HA, Arnett DK, Borecki IB, Broeckel U, de las Fuentes L, et al. Obesity-insulin targeted genes in the 3p26-25 region in human studies and LG/J and SM/J mice. Metab Clin Exp. 2012;61(8):1129–41. Semina EV, Mintz-Hittner HA, Murray JC. Isolation and characterization of a novel human paired-like homeodomain-containing transcription factor gene, VSX1, expressed in ocular tissues. Genomics. 2000;63(2):289–93. Wang JM, Prefontaine GG, Lemieux ME, Pope L, Akimenko MA, Hache RJ. Developmental effects of ectopic expression of the glucocorticoid receptor DNA binding domain are alleviated by an amino acid substitution that interferes with homeodomain binding. Mol Cell Biol. 1999;19(10):7106–22. Kamnasaran D, Muir WJ, Ferguson-Smith MA, Cox DW. Disruption of the neuronal PAS3 gene in a family affected with schizophrenia. J Med Genet. 2003;40(5):325–32. Pickard BS, Pieper AA, Porteous DJ, Blackwood DH, Muir WJ. The NPAS3 gene–emerging evidence for a role in psychiatric illness. Ann Med. 2006;38(6):439–48. Sha L, MacIntyre L, Machell JA, Kelly MP, Porteous DJ, Brandon NJ, et al. Transcriptional regulation of neurodevelopmental and metabolic pathways by NPAS3. Mol Psychiatry. 2012;17(3):267–79. Sherry ST, Ward MH, Kholodov M, Baker J, Phan L, Smigielski EM, et al. dbSNP: the NCBI database of genetic variation. Nucleic Acids Res. 2001;29(1):308–11. Maglott D, Ostell J, Pruitt KD, Tatusova T. Entrez Gene: gene-centered information at NCBI. Nucleic Acids Res. 2005;33(Database issue):D54–8. Liberzon A. A description of the Molecular Signatures Database (MSigDB) Web site. Methods Mol Biol. 2014;1150:153–60. Kanehisa M, Goto S, Sato Y, Furumichi M, Tanabe M. KEGG for integration and interpretation of large-scale molecular data sets. Nucleic Acids Res. 2012;40(Database issue):D109–14. Croft D, Mundo AF, Haw R, Milacic M, Weiser J, Wu G, et al. The Reactome pathway knowledgebase. Nucleic Acids Res. 2014;42(1):D472–7. Harris MA, Clark J, Ireland A, Lomax J, Ashburner M, Foulger R, et al. The Gene Ontology (GO) database and informatics resource. Nucleic Acids Res. 2004;32(Database issue):D258–61. Biocarta Pathway Collection. http://www.biocarta.com/genes/allpathways.asp. Scott LJ, Mohlke KL, Bonnycastle LL, Willer CJ, Li Y, Duren WL, et al. A genome-wide association study of type 2 diabetes in Finns detects multiple susceptibility variants. Science. 2007;316(5829):1341–5. KEGG_TYPE_II_DIABETES_MELLITUS. http://www.broadinstitute.org/gsea/msigdb/cards/KEGG_TYPE_II_DIABETES_MELLITUS. Meigs JB, Manning AK, Fox CS, Florez JC, Liu C, Cupples LA, et al. Genome-wide association with diabetes-related traits in the Framingham Heart Study. BMC Med Genet. 2007;8 Suppl 1:S16. This study was supported by grants N01-HC55021 and U01-HL096917 from National Institutes of Health (NIH) / NIH Heart, Lung and Blood Institute, http://www.nhlbi.nih.gov/. 
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript Center of Biostatistics & Bioinformatics, University of Mississippi Medical Center, Jackson, MS, USA Hao Mei & Michael Griswold Shanghai Children's Medical Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China Hao Mei, Shijian Liu & Fan Jiang Department of Biology, Tougaloo College, Jackson, MS, USA Lianna Li Department of Neurology, University of Mississippi Medical Center, Jackson, MS, USA Thomas Mosley Hao Mei Shijian Liu Fan Jiang Michael Griswold Correspondence to Hao Mei. HM designed the study strategy and write the manuscript, LL involved in development of software package and analyses, SL performed analyses of simulated data by GSA-SNP method, FJ, MG and TM contributed in study design, data interpretation and writing the manuscript. All authors read and approved the final manuscript. Uniform scores of significant genes from identified common gene sets over all GWAS. Function characteristics of significant genes from identified common gene sets. MSigDB gene-set types. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Mei, H., Li, L., Liu, S. et al. The uniform-score gene set analysis for identifying common pathways associated with different diabetes traits. BMC Genomics 16, 336 (2015). https://doi.org/10.1186/s12864-015-1515-3
Elden Ring: Deluxe Edition SKiDROW CODEX [v 1.02 + DLC]+... Elden Ring Install Crack [+ DLC]+ Keygen For (LifeTime) Download... Elden Ring Crack Keygen SKiDROW [v 1.02 + DLC]Full Product Key Download (April-2022) Download Now >>> https://blltly.com/2spQTh ※ ZOOMEROOM ※ FANTASY MOBILE PLATFORM: for mobile devices, including smartphones and tablets (iOS and Android) ※ Elden Ring Crack For Windows GAME ※ ELDEN RING SKILL: Play as an Elden Lord who was once a hero, with family and friends ※ ELDEN LAND ※ FULLY FEATURED GRAPHICS, INCLUDING DYNAMIC PORTAL MODE ※ NO HIDDEN CHARACTER LEVELUP STRATEGIES www.ogame.co.jp/eg www.ez-land.jp/ERGO www.newergo.jp/ERGO About ZOOMEROOM™ ZOOMEROOM™ is a new way for fantasy action RPG games to be played. The world is enhanced by ZOOMOOM – an innovative cloud-based technology for 3D gaming. Players can enjoy the game at any time and anywhere on devices such as smartphones and tablets. Newergo, Inc. is a Japanese company based in Tokyo. Since 2006, Newergo has provided a platform for developers and has developed over 300,000 games in collaboration with some of the largest game companies and publishers in the world. Newergo is committed to bring more adventure into your life, and is constantly seeking for new ways to share a good story with you. Newergo is also a leading company in the field of cloud computing software and hardware, building on the benefits of advanced online technologies, such as cloud service and hardware, which will bring a new generation of gaming experiences to gamers around the globe. Newergo is privately held and currently has offices in Japan, the United States, and Europe, with a worldwide network of sales and support offices. Newergo is the owner of the Newergo Game Network, the largest gaming community on the internet. We strive to be an open platform with strong community values, great content and a strong commitment to customer care. Newergo Games (www.newergo.jp), is a subsidiary of Newergo and specialises in PC, console and mobile games. For more information, please visit www.newergo.jp, or follow us on Twitter. © 2012, STILLRO Procedurally generated fantasy adventure gameplay in a vast world full of excitement Three-dimensional character models that you can design Customized weapons, armor, and magic that you combine according to your own play style The moment when you think about the intricate details of the story and drama, that relief, that comfort inside your head Feel free to choose your own play style. 《Demonbane》 tells a story of pure action gameplay: strong warriors wielding axes in hand. Experience the world with a different Character Expansion Pack Live Wire Support! 《LotUS》 Character Expansion Pack is now available! Feel the presence of others! Fight together and use a variety of tactics to take the lead! A journey to achieve power and beauty, followed by pain and loneliness. ——OLD HORNS SEASON 4 Endeavor ENG ED 《The unique fantasy action RPG:"Rise, Tarnished"(R&DxRITE)》 Platforms: PC(Windows) Language: Japanese, Simplified Chinese • Unique Online Play that Loosely Elden Ring Crack + Keygen For (LifeTime) (Latest) No spam, bogus, or hate mail will be accepted. Comments should remain free to the assumption they are in the spirit of fair use. The results of an appropriately disposed thirteenth-grade ban of any future comments will be filed and sent to a detainment facility where I'll be held until the next locking of the gates. All content on this site is provided by and fan of the writer. 
Backstepping Direct Power Control for Power Quality Enhancement of Grid-connected Photovoltaic System Implemented with PIL Co-simulation Technique

Brahim Elkhalil Youcefa* | Ahmed Massoum | Said Barkat | Patrice Wira

Département d'Electrotechnique, Faculté des Sciences de l'Ingénieur, Université Djillali Liabes de Sidi Bel Abbes, Sidi Bel Abbés 22000, Algérie
Laboratoire de Génie Electrique, Université de M'sila, M'sila 28000, Algérie
Laboratoire IRIMAS, Université de Haute Alsace, Mulhouse 68093, France
[email protected]
https://doi.org/10.18280/ama_c.740101

This paper proposes a nonlinear backstepping approach combined with the direct power control technique for improving the power quality of a three-phase grid-connected solar energy conversion system. The presented system extracts maximum power from a solar photovoltaic array, converts it into AC power via a voltage source converter, and supplies it to the grid and the connected loads. The proposed system not only performs the function of a grid-connected PV system but also acts as a shunt active power filter (PV-SAPF). It is intended to eliminate poor power quality issues and to provide current conditioning while operating coherently under nonlinear load variations. In order to validate the proposed dual-function system, processor-in-the-loop (PIL) tests are carried out for steady-state and dynamic regimes under a nonlinear load operating condition.

Keywords: grid-connected PV system, shunt active filter, backstepping control, direct power control, power quality, processor-in-the-loop

1. Introduction

Photovoltaic (PV) technology is becoming one of the foremost alternative solutions for producing electrical energy owing to the wide availability of solar irradiance. PV technology offers many advantages: it is clean, renewable, noiseless, and easy to implement. However, since a single PV source generates a relatively small amount of electrical energy, its direct applications are limited to low-power systems such as watches, LED lighting, and some standalone electrical systems. For applications at medium and high power levels, such as grid-connected systems, the PV energy must be adapted using appropriate power converters. Thanks to these interfacing electronic systems, the PV source can be fully exploited by forcing it to deliver its maximum energy to the utility grid [1]. It is well established that the increasing penetration of nonlinear loads and renewable energy resources degrades power network performance from the power quality point of view [2]. Consequently, the integration of a grid-connected photovoltaic system into a distribution system supplying nonlinear loads is not only unable to alleviate the power fluctuation, but may make the situation worse depending on the PV inverter technology [3]. The most meaningful solution to these issues is to resort to power filters. However, the use of a passive filter or a separate active filter in a grid-connected system increases its size, weight, and cost, which are the main shortfalls of this solution.
To surmount these defects, the PV system itself acting as a shunt active power filter has been proposed in numerous studies [4-7]. Indeed, by injecting the appropriate compensating current into the grid, the filtering function of the PV system can improve the current harmonic distortion, the power conversion efficiency, and the reliability [8-9]. In order to improve the performances of the PV system acting as a shunt active power filter, several nonlinear control methods that go beyond the limitations of linear controllers have been reported in the literature [10]. A feedback linearization technique used to improve the control of the APF is presented in [11]. Another control strategy based on Lyapunov stability theory is studied in [12] for single-phase shunt active power filters. In [13], a nonlinear control technique for a three-phase SAPF is proposed and tested on a laboratory prototype. A robust control of the shunt active filter using a fuzzy logic controller was studied in [14]. A nonlinear control based on a feedback linearization technique for a single-phase SAPF is presented in [15]. On the other hand, the adaptive sliding mode control studied in [16] is designed for a boost converter with an unknown resistive load and external input voltage. The feedback linearization strategy is also widely used for the cascade nonlinear control of the DC-DC boost converter to provide satisfactory performances over a wide range of operating points [17]. Over the last few years, direct power control (DPC) has become more widely used than other methods [18-19] because of its advantages such as fast dynamic performance and simple implementation. This method derives from the popular direct torque control (DTC) [20] applied in electrical machine control. In the DPC method, neither internal current control loops nor a PWM modulator block is needed, for the simple reason that the inverter switching states are selected from a switching table based on the instantaneous power errors and the voltage vector position. However, the main disadvantage of DPC is its variable switching frequency [18], which generates an unwanted broadband harmonic spectrum and makes it hard to design a line filter [21]. By using a space vector modulation (SVM) algorithm that replaces the conventional switching table, these disadvantages can be efficiently overcome. The combination of DPC and SVM forms the so-called space vector modulation direct power control (SVM-DPC) [21]. In this paper, a nonlinear backstepping control method combined with direct power control is proposed to control a photovoltaic system acting as a shunt active power filter. The main tasks of the filtering system are harmonic current reduction and reactive power compensation. Ideally, the presented system needs to generate enough reactive power and harmonic currents to compensate for the harmful effect of nonlinear loads on the grid. Moreover, to extract the maximum amount of power from the photovoltaic generator, a suitable backstepping current control method for the DC-DC boost converter is also developed. The rapid development of computer systems and interactive software has made many simulation tools available for technical applications. In this paper, all system control methods are built using simulation block sets and transformed into embedded C code. The generated code is launched on an embedded processor; this test is called processor-in-the-loop (PIL).
In addition, this approach provides the designer with a standardized program, reduces the engineering effort, and at the same time makes it possible to test the control methods of a system on a real DSP board. MATLAB/Simulink is used as the simulation platform, and an STM32F429i-Discovery board is used as the target on which the program is launched. This paper is organized as follows: in Section 2, the description and modeling of the SAPF side of the system are given. In Section 3, the synthesis and design of the proposed controllers for the SAPF are developed. In Section 4, the control synthesis of the DC-DC boost converter is presented. In Section 5, the processor-in-the-loop technique is described, and co-simulation results are given and discussed. Finally, the paper is concluded in its last section.

2. System Description and Modeling

2.1 System description

The utility grid is assumed to be a sinusoidal voltage source with series short-circuit impedances. The grid is modeled by three-phase electromotive forces in series with impedances, as shown in Figure 1. On the right side of the system, a nonlinear load is connected to the utility grid through intermediate line impedances (Ll, Rl). This load is composed of an uncontrolled three-phase rectifier supplying a load (Rd, Ld) on its DC side. In the same figure, a DC-DC boost converter is utilized to interface the photovoltaic generator with the grid through a voltage source inverter (VSI). The inverter, connected in parallel at the point of common coupling (PCC), acts at the same time as a PV inverter and as a SAPF, where it is controlled as a current generator. In order to make the grid current purely sinusoidal, the active filter injects unbalanced currents equal to, and in phase opposition with, those absorbed by the nonlinear load. Briefly, the active filter function of the proposed system prevents the disturbance currents generated by the nonlinear load from circulating through the grid impedances. Thus, the resulting total currents drawn from the AC mains are purely sinusoidal and balanced.

Figure 1. Power circuit of grid-connected photovoltaic system acting as a shunt active power filter

Figure 2. Block diagram of the proposed backstepping direct power control method for the PV-SAPF

2.2 Mathematical model of the SAPF

The system equations defining the SAPF in the three-phase reference frame are given by:

$L_f \frac{di_{fa}}{dt} = -R_f i_{fa} + v_{fa} - v_a$

$L_f \frac{di_{fb}}{dt} = -R_f i_{fb} + v_{fb} - v_b$

$L_f \frac{di_{fc}}{dt} = -R_f i_{fc} + v_{fc} - v_c$

$C_{dc} \frac{dV_{dc}}{dt} = -S_a i_{fa} + S_b i_{fb} + S_c i_{fc}$ (1)

where $i_{fi}$, $v_{fi}$ with i = a, b, c represent the AC-side currents and voltages of the SAPF, respectively; $v_i$, i = a, b, c are the point of common coupling voltages; $L_f$, $R_f$ are the output inductance and resistance of the shunt active power filter, respectively; $S_i$, i = a, b, c are the control signals of the VSI; and $V_{dc}$ is the voltage across the DC capacitor $C_{dc}$.
The mathematical model of the SAPF in the stationary reference frame is given as follows:

$L_f \frac{di_{f\alpha}}{dt} = -R_f i_{f\alpha} + v_{f\alpha} - v_\alpha$

$L_f \frac{di_{f\beta}}{dt} = -R_f i_{f\beta} + v_{f\beta} - v_\beta$

$C_{dc} \frac{dV_{dc}}{dt} = \frac{P_{dc}}{V_{dc}}$ (2)

where $i_{fi}$, $v_{fi}$ with i = α, β represent the AC-side current and voltage components of the SAPF in the stationary reference frame, respectively; $v_\alpha$, $v_\beta$ are the point of common coupling voltages in the stationary reference frame; and $P_{dc}$ is the DC active power across the capacitor $C_{dc}$.

2.3 Mathematical model of the SAPF for powers control

The powers at the output of the SAPF are given as:

$\left[ \begin{array}{c} P_F \\ Q_F \end{array} \right] = \left[ \begin{array}{cc} v_\alpha & v_\beta \\ -v_\beta & v_\alpha \end{array} \right] \left[ \begin{array}{c} i_{f\alpha} \\ i_{f\beta} \end{array} \right]$ (3)

In order to calculate the power derivatives, the Lie derivative method is used [22]; the active and reactive powers are chosen as outputs. The first two equations of the system (2) can be written as follows:

$\frac{dx}{dt} = f(x) + g(x)u$

$y = h(x)$ (4)

with

$x = \left[ \begin{array}{c} i_{f\alpha} \\ i_{f\beta} \end{array} \right], \quad f(x) = \left[ \begin{array}{c} f_1 \\ f_2 \end{array} \right] = \frac{1}{L_f} \left[ \begin{array}{c} -R_f i_{f\alpha} - v_\alpha \\ -R_f i_{f\beta} - v_\beta \end{array} \right]$

$g(x) = \frac{1}{L_f} \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right], \quad u = \left[ \begin{array}{c} v_{f\alpha} \\ v_{f\beta} \end{array} \right], \quad y = \left[ \begin{array}{c} h_1 \\ h_2 \end{array} \right] = \left[ \begin{array}{c} P_F \\ Q_F \end{array} \right]$

where f(x) and h(x) are second-order smooth vector fields, g(x) is a 2x2 matrix of smooth vector field columns, and u and y are the 2x1 input and output vectors, respectively. The derivative of each output can be expressed as follows [23]:

$\frac{dy_i}{dt} = L_f h_i + L_g h_i u$ (5)

where $L_f h_i$, $L_g h_i$ are the Lie derivatives of $h_i$ with respect to f and g, respectively.
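For illustration, the sketch below computes the stationary-frame components used in models (2)-(3) and the instantaneous powers of equation (3). The amplitude-invariant Clarke convention is an assumption here; the paper does not state which scaling it uses, and a power-invariant scaling would change the power values by a constant factor.

```python
import numpy as np

def clarke(x_a, x_b, x_c):
    """abc -> alpha-beta (Clarke) transform, amplitude-invariant convention (assumed)."""
    x_alpha = (2.0 / 3.0) * (x_a - 0.5 * x_b - 0.5 * x_c)
    x_beta = (x_b - x_c) / np.sqrt(3.0)
    return x_alpha, x_beta

def instantaneous_powers(v_alpha, v_beta, i_alpha, i_beta):
    """Active and reactive powers of the filter as in equation (3)."""
    p = v_alpha * i_alpha + v_beta * i_beta
    q = -v_beta * i_alpha + v_alpha * i_beta
    return p, q
```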
The derivatives of the powers are given as:

$\frac{dP_F}{dt} = v_\alpha f_1 + v_\beta f_2 + \frac{v_\alpha}{L_f} v_{f\alpha} + \frac{v_\beta}{L_f} v_{f\beta}$

$\frac{dQ_F}{dt} = -v_\beta f_1 + v_\alpha f_2 - \frac{v_\beta}{L_f} v_{f\alpha} + \frac{v_\alpha}{L_f} v_{f\beta}$ (6)

After further simplification, the final model of the SAPF is given as follows:

$\frac{dP_F}{dt} = \frac{1}{L_f} \left( -R_f P_F + \delta_{f\alpha} \right)$

$\frac{dQ_F}{dt} = \frac{1}{L_f} \left( -R_f Q_F + \delta_{f\beta} \right)$

$\frac{dV_{dc}}{dt} = \frac{P_{dc}}{V_{dc} C_{dc}}$ (7)

with

$\delta_{f\alpha} = v_\alpha v_{f\alpha} + v_\beta v_{f\beta} - \left( v_\alpha^2 + v_\beta^2 \right)$

$\delta_{f\beta} = -v_\beta v_{f\alpha} + v_\alpha v_{f\beta}$

3. Control Strategy for the SAPF Side

In Figure 2, the actual capacitor voltage squared, $V_{dc}^2$, is compared with its squared reference value $V_{dc}^{*2}$; the error between the capacitor voltage and its reference is fed to a nonlinear controller. The output of the nonlinear voltage controller provides the active power reference $P_{dc}^*$ across the capacitor $C_{dc}$. Based on the instantaneous p–q theory, the compensating powers are calculated, and the average powers are extracted using a 4th-order low-pass filter (LPF). The oscillating powers are then obtained through a simple subtraction of the average powers from the active and reactive powers.
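As a small sketch of the average/oscillating power separation just described, the following uses a 4th-order Butterworth low-pass filter; the cutoff frequency and sampling rate are assumed values, and a causal filter introduces some delay that a practical implementation would account for.

```python
import numpy as np
from scipy.signal import butter, lfilter

def split_power(p_load, fs, cutoff_hz=25.0):
    """Split an instantaneous power signal into average and oscillating parts."""
    b, a = butter(4, cutoff_hz, btype="low", fs=fs)   # 4th-order low-pass filter
    p_avg = lfilter(b, a, p_load)                     # average (low-frequency) part
    p_osc = np.asarray(p_load) - p_avg                # oscillating part, by subtraction
    return p_avg, p_osc
```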
3.1 p–q theory based control strategy The instantaneous active and reactive powers of the nonlinear load are calculated as follows: $\left[ \begin{array} { c } { \mathrm { P } _ { \mathrm { L } } } \\ { \mathrm { Q } _ { \mathrm { L } } } \end{array} \right] = \left[ \begin{array} { c c } { \mathrm { v } _ { \alpha } } & { \mathrm { V } _ { \beta } } \\ { - \mathrm { V } _ { \beta } } & { \mathrm { V } _ { \alpha } } \end{array} \right] \left[ \begin{array} { l } { \mathrm { i } _ { \mathrm { L } \alpha } } \\ { \mathrm { i } _ { \mathrm { L } \beta } } \end{array} \right]$ (8) The instantaneous active and reactive powers, including average and oscillating values, are expressed as follows: $\begin{aligned} \mathrm { P } _ { \mathrm { L } } & = \overline { \mathrm { P } } _ { \mathrm { L } } + \tilde { \mathrm { P } } _ { \mathrm { L } } \\ \mathrm { Q } _ { \mathrm { L } } & = \overline { \mathrm { Q } } _ { \mathrm { L } } + \tilde { \mathrm { Q } } _ { \mathrm { L } } \end{aligned}$ (9) The average values ($\overline{P}_L,\overline{Q}_L$) of ${P}_L$ and ${Q}_L$ are the average active and reactive powers originating from the positive-sequence component of the nonlinear load current. Oscillating values ($\tilde{P}_L,\tilde{Q}_L$) of ${P}_L$ and ${Q}_L$ are the ripple active and reactive powers [24]. The filter active and reactive power references are calculated as follows: $\mathrm { P } _ { \mathrm { F } } ^ { * } = \widetilde { \mathrm { P } } _ { \mathrm { L } } - \mathrm { P } _ { \mathrm { dc } } ^ { * } + \mathrm { P } _ { \mathrm { PV } }$ $\mathrm { Q } _ { \mathrm { F } } ^ { * } = \mathrm { Q } _ { \mathrm { L } }$ (10) The power reference of the DC-link capacitor ${P}^*_{dc}$ is used as an average real power and is obtained from the nonlinear backstepping DC voltage controller, while the ${P}_{PV}$ is the active power delivered from the photovoltaic generator used as a compensating power, (${P}_F$, ${Q}_F$) are the active and reactive filter powers, respectively. 3.2 SAPF backstepping controllers design The backstepping algorithm is based on the idea that specific variables can be utilized as virtual controls to make the original high order system simple. So, the final control outputs can be established step by step through suitable Lyapunov functions that guarantee the global stability [25]. On the contrary of other methods, the backstepping control method does not have constraints on the type of non-linearities, all control objectives are in fact achieved by using tools from the Lyapunov stability. This control method was successfully applied on a growing collection of plants [25-27]. In the following, the backstepping control strategy will be used to design SAPF controllers. The relied backstepping controllers are determined based on a decomposition of the global SAPF model. In order to accomplish this task, the system (7) is subdivided into three subsystems, as follows: Subsystem 1: $\frac { \mathrm { d } \mathrm { V } _ { \mathrm { dc } } } { \mathrm { dt } } = \frac { \mathrm { P } _ { \mathrm { dc } } } { \mathrm { V } _ { \mathrm { dc } } \mathrm { C } _ { \mathrm { dc } } }$ (11) In the first subsystem described by the equation (11), the instantaneous active power $P^*_{dc}$ is considered as a control variable and the voltage Vdc is considered as an output variable. In the second subsystem given by the equation (12), the voltage ${\delta}^*_{f\alpha}$ is chosen as a variable control, while the power PF as an output variable. 
$\frac { \mathrm { d } \mathrm { P } _ { \mathrm { F } } } { \mathrm { dt } } = \frac { 1 } { \mathrm { L } _ { \mathrm { f } } } \left( - \mathrm { R } _ { \mathrm { f } } \mathrm { P } _ { \mathrm { F } } + \delta _ { \mathrm { fa } } ^ { * } \right)$ (12) In this subsystem, ${\delta}^*_{f\beta}$ and QF are the control variable and the output variable, respectively. $\frac { \mathrm { d } \mathrm { Q } _ { \mathrm { F } } } { \mathrm { dt } } = \frac { 1 } { \mathrm { L } _ { \mathrm { f } } } \left( - \mathrm { R } _ { \mathrm { f } } \mathrm { Q } _ { \mathrm { F } } + \delta _ { \mathrm { f } \beta } ^ { * } \right)$ (13) 3.2.1 DC-link voltage backstepping controller synthesis To preserve the DC-link voltage across the capacitor Cdc at a constant desired value, a backstepping controller for DC-link voltage is used to maintain the DC-link voltage at its reference value covering the inverter losses. Since the purpose of this control is to force the DC-link voltage to follow its reference, the tracking variable error z1 is defined by: $\mathrm { z } _ { 1 } = \mathrm { V } _ { \mathrm { dc } } ^ { * } - \mathrm { V } _ { \mathrm { dc } }$ (14) Using the first subsystem, the dynamics of the error z1 is given by: $\frac { \mathrm { d } \mathrm { z } _ { 1 } } { \mathrm { dt } } = \frac { \mathrm { d } \mathrm { V } _ { \mathrm { dc } } ^ { * } } { \mathrm { dt } } - \frac { \mathrm { P } _ { \mathrm { dc } } ^ { * } } { \mathrm { V } _ { \mathrm { dc } } \mathrm { C } _ { \mathrm { dc } } }$ (15) The Lyapunov candidate function is chosen as: $\mathrm { v } _ { 1 } = \frac { 1 } { 2 } \mathrm { z } _ { 1 } ^ { 2 }$ (16) The derivative of the function (16) is expressed as: $\frac { \mathrm { d } \mathrm { V } _ { 1 } } { \mathrm { dt } } = \mathrm { z } _ { 1 } \left( \frac { \mathrm { d } \mathrm { V } _ { \mathrm { dc } } ^ { * } } { \mathrm { dt } } - \frac { \mathrm { P } _ { \mathrm { dc } } ^ { * } } { \mathrm { V } _ { \mathrm { dc } } \mathrm { C } _ { \mathrm { dc } } } \right)$ (17) To ensure the stability of the system, the derivative of the Lyapunov function must be negative. This can be established by choosing the derivative of z1 as: $\frac { \mathrm { d } \mathrm { z } _ { 1 } } { \mathrm { dt } } = - \mathrm { k } _ { 1 } \mathrm { z } _ { 1 }$ (18) where k1 is a positive gain. Hence, the control law can be given by the equation (19) below and its controller block diagram is presented in Figure. 3. $\mathrm { P } _ { \mathrm { dc } } ^ { * } = \mathrm { V } _ { \mathrm { dc } } \mathrm { C } _ { \mathrm { dc } } \left( \frac { \mathrm { d } \mathrm { V } _ { \mathrm { dc } } ^ { * } } { \mathrm { dt } } + \mathrm { k } _ { 1 } \mathrm { z } _ { 1 } \right)$ (19) Figure 3. 
Controller block diagram of the DC-link voltage backstepping controller 3.2.2 Active power backstepping controller synthesis The synthesis of a desired backstepping regulator for the active power PF using the second subsystem defined by the equation (12) is analyzed as follows: The variable error z2 is defined by: $z_2=P^*_F-P_F$ (20) The dynamic of the error z2 is given by: $\frac { \mathrm { d } \mathrm { z } _ { 2 } } { \mathrm { dt } } = \frac { \mathrm { d } \mathrm { P } _ { \mathrm { F } } ^ { * } } { \mathrm { dt } } - \frac { 1 } { \mathrm { L } _ { \mathrm { f } } } \left( - \mathrm { R } _ { \mathrm { f } } \mathrm { P } _ { \mathrm { F } } + \delta _ { \mathrm { fa } } ^ { * } \right)$ (21) $V_2=\frac{1}{2}z^2_2$ (22) The derivative of the function (22) is: $\frac { \mathrm { d } \mathrm { V } _ { 2 } } { \mathrm { dt } } = \mathrm { z } _ { 2 } \left( \frac { \mathrm { d } \mathrm { P } _ { \mathrm { F } } ^ { * } } { \mathrm { dt } } - \frac { 1 } { \mathrm { L } _ { \mathrm { f } } } \left( - \mathrm { R } _ { \mathrm { f } } \mathrm { P } _ { \mathrm { F } } + \delta _ { \mathrm { f } \alpha } ^ { * } \right) \right)$ (23) To ensure the stability of the system, the derivative of the Lyapunov function must be negative; this can be accomplished by choosing the derivative of z2 as: $\frac { \mathrm { d } \mathrm { z } _ { 2 } } { \mathrm { dt } } =-k_2z_2$ (24) with k2 is a positive gain. So, the obtained control law is given by the equation (25) while the Figure. 4 presents its controller block diagram. $\delta _ { \mathrm { fa } } ^ { * } = \mathrm { R } _ { \mathrm { f } } \mathrm { P } _ { \mathrm { F } } + \mathrm { L } _ { \mathrm { f } } \mathrm { k } _ { 2 } \mathrm { z } _ { 2 } + \mathrm { L } _ { \mathrm { f } } \frac { \mathrm { d } \mathrm { P } _ { \mathrm { F } } ^ { * } } { \mathrm { dt } }$ (25) Figure 4. 
Backstepping controller block diagram of the active power

Once the intermediate voltage $\delta_{f\alpha}^*$ is obtained, the reference voltage $v_{f\alpha}^*$ can be calculated using (7) as follows:

$v_{f\alpha}^* = \frac{v_\alpha}{v_\alpha^2 + v_\beta^2} \delta_{f\alpha}^* - \frac{v_\beta}{v_\alpha^2 + v_\beta^2} \delta_{f\beta}^* + v_\alpha$ (26)

3.2.3 Reactive power backstepping controller synthesis

Using the third subsystem, defined by the equation (13), the synthesis of the reactive power backstepping regulator is analyzed as follows:

$z_3 = Q_F^* - Q_F$ (27)

$\frac{dz_3}{dt} = \frac{dQ_F^*}{dt} - \frac{1}{L_f} \left( -R_f Q_F + \delta_{f\beta}^* \right)$ (28)

$V_3 = \frac{1}{2} z_3^2$ (29)

$\frac{dV_3}{dt} = z_3 \left( \frac{dQ_F^*}{dt} - \frac{1}{L_f} \left( -R_f Q_F + \delta_{f\beta}^* \right) \right)$ (30)

To ensure the stability of the system, the derivative of the Lyapunov function must be negative; this can be realized by choosing the derivative of $z_3$ as:

$\frac{dz_3}{dt} = -k_3 z_3$ (31)

So, the obtained control law is given by the equation (32), and Figure 5 presents its controller block diagram.

$\delta_{f\beta}^* = R_f Q_F + L_f k_3 z_3 + L_f \frac{dQ_F^*}{dt}$ (32)

Figure 5. Backstepping controller block diagram of the reactive power

Once the intermediate voltage $\delta_{f\beta}^*$ is obtained, the reference voltage $v_{f\beta}^*$ can be calculated using (7) as follows:

$v_{f\beta}^* = \frac{v_\beta}{v_\alpha^2 + v_\beta^2} \delta_{f\alpha}^* + \frac{v_\alpha}{v_\alpha^2 + v_\beta^2} \delta_{f\beta}^* + v_\beta$ (33)

4. DC-DC Boost Converter Modeling and Control

4.1 DC-DC boost converter modeling

Figure 6 represents the scheme of the DC-DC boost converter. The state-space model of this converter is given by the dynamic equations below:

$\frac{dV_{PV}}{dt} = \frac{1}{C_{PV}} I_{PV} - \frac{1}{C_{PV}} I_{LPV}$

$\frac{dI_{LPV}}{dt} = \frac{1}{L_{PV}} V_{PV} - \frac{1}{L_{PV}} (1 - D) V_{dc}$ (34)

4.2 Backstepping control of DC-DC boost converter

The backstepping control approach is again used for the DC-DC boost converter to extract the maximum power from the PV array. As shown in Figure 6, two backstepping controllers are needed to control the photovoltaic generator output voltage and current.
The voltage control of the boost converter is achieved by regulating the PV generator voltage $V_{PV}$ to its reference $V_{PV}^*$ provided by the perturb and observe (P&O) maximum power point tracking (MPPT) algorithm. The output of the voltage loop controller, together with the PV current compensation, provides the current reference $I_{LPV}^*$ of the inner current control loop. The duty cycle of the converter is then provided by the current control loop together with the $V_{PV}$ and $V_{dc}$ voltage compensations.

Figure 6. Backstepping control of DC-DC boost converter

The needed backstepping controllers are designed based on a decomposition of the global model given by (34) into two subsystems, as follows:

$\frac{dV_{PV}}{dt} = \frac{1}{C_{PV}} I_{PV} - \frac{1}{C_{PV}} I_{LPV}$ (35)

In the first subsystem, the current $I_{LPV}$ is considered as the control variable while the PV output voltage $V_{PV}$ is considered as the output variable. In the second subsystem, described by the equation (36), the duty cycle is considered as the control variable and the PV current $I_{LPV}$ as the output variable.

$\frac{dI_{LPV}}{dt} = \frac{1}{L_{PV}} V_{PV} - \frac{1}{L_{PV}} (1 - D) V_{dc}$ (36)

4.2.1 PV voltage backstepping controller synthesis

The synthesis of the desired backstepping regulator of the voltage $V_{PV}$, based on the first subsystem defined by the equation (35), is analyzed as follows:

The variable error $z_{Vpv}$ is defined by:

$z_{Vpv} = V_{PV}^* - V_{PV}$ (37)

The dynamics of the error $z_{Vpv}$ is given by:

$\frac{dz_{Vpv}}{dt} = \frac{dV_{PV}^*}{dt} - \left( \frac{1}{C_{PV}} I_{PV} - \frac{1}{C_{PV}} I_{LPV}^* \right)$ (38)

$V_{Vpv} = \frac{1}{2} z_{Vpv}^2$ (39)

$\frac{dV_{Vpv}}{dt} = z_{Vpv} \left( \frac{dV_{PV}^*}{dt} - \left( \frac{1}{C_{PV}} I_{PV} - \frac{1}{C_{PV}} I_{LPV}^* \right) \right)$ (40)

The stability of the system is guaranteed when the derivative of the Lyapunov function is negative; this can be realized by choosing the derivative of $z_{Vpv}$ as:

$\frac{dz_{Vpv}}{dt} = -k_{Vpv} z_{Vpv}$ (41)

where $k_{Vpv}$ is a positive constant. Hence, the reference current $I_{LPV}^*$ can be calculated as given in equation (42), and its controller block diagram is shown in Figure 7.

$I_{LPV}^* = I_{PV} - k_{Vpv} C_{PV} \left( V_{PV}^* - V_{PV} \right) - C_{PV} \frac{dV_{PV}^*}{dt}$ (42)

Figure 7. Backstepping controller block diagram of the PV output voltage
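The perturb and observe routine that supplies $V_{PV}^*$ to the voltage loop is not detailed in the paper; a basic textbook version is sketched below, with the perturbation step size chosen arbitrarily.

```python
def perturb_and_observe(v_pv, i_pv, v_ref, v_prev, p_prev, step=0.5):
    """One P&O iteration: returns the updated voltage reference and the new history."""
    p = v_pv * i_pv
    if (p - p_prev) * (v_pv - v_prev) > 0:
        v_ref += step        # last move increased power: keep perturbing the same way
    else:
        v_ref -= step        # power dropped: reverse the perturbation direction
    return v_ref, v_pv, p    # v_pv and p become v_prev and p_prev next iteration
```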
4.2.2 PV current backstepping controller synthesis

The synthesis of the desired backstepping controller of the current $I_{LPV}$, using the second subsystem defined by the equation (36), is analyzed as follows:

The variable error $z_{ILpv}$ is defined by:

$z_{ILpv} = I_{LPV}^* - I_{LPV}$ (43)

The dynamics of the error $z_{ILpv}$ is given by:

$\frac{dz_{ILpv}}{dt} = \frac{dI_{LPV}^*}{dt} - \left( \frac{1}{L_{PV}} V_{PV} - \frac{1}{L_{PV}} \left( 1 - D^* \right) V_{dc} \right)$ (44)

$V_{ILpv} = \frac{1}{2} z_{ILpv}^2$ (45)

$\frac{dV_{ILpv}}{dt} = z_{ILpv} \left( \frac{dI_{LPV}^*}{dt} - \left( \frac{1}{L_{PV}} V_{PV} - \frac{1}{L_{PV}} \left( 1 - D^* \right) V_{dc} \right) \right)$ (46)

The stability of the system is ensured by choosing the derivative of $z_{ILpv}$ as:

$\frac{dz_{ILpv}}{dt} = -k_{ILpv} z_{ILpv}$ (47)

where $k_{ILpv}$ is a positive constant. Consequently, the reference duty cycle $D^*$ can be calculated as given by the equation (48), and its controller block diagram is presented in Figure 8:

$D^* = \frac{1}{V_{dc}} \left( L_{PV} \frac{dI_{LPV}^*}{dt} - V_{PV} + V_{dc} + L_{PV} k_{ILpv} \left( I_{LPV}^* - I_{LPV} \right) \right)$ (48)

Figure 8. Backstepping controller block diagram of the PV current

5. Principle of Prototyping Processor-In-The-Loop

In a PIL simulation, an embedded platform executing the control algorithm is connected to a host computer on which the physical system model is run. An assessment of the execution conditions of the developed algorithm can then be carried out, aiming to optimize important factors such as memory footprint, code size, and required algorithm execution time. The principle of PIL-based development is depicted in Figure 9.

Figure 9. Block diagram of embedded system connected to a PIL simulator

PIL prototyping permits the verification and validation of the digital implementation of the control algorithms on a real DSP board before embedding it into a real power environment. The interest of PIL prototyping is to validate the digital implementation of the control part while simulating the power part on the computer. Therefore, it is possible to validate the control algorithms in a virtual environment, where they can be corrected and modified without costly hardware iterations. This decreases the cost of a project as well as its development time. Furthermore, the performance of the system control algorithms can be evaluated and their weak points detected in this virtual environment, while eliminating the risk of damaging all or part of the electrical system.
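To make the PIL exchange concrete, the following minimal host-side sketch illustrates the pattern described above: the PC advances a (here deliberately trivial) plant model one step at a time and exchanges one measurement/command frame per step with the target board over a serial link. The port name, baud rate, frame layout, and the toy plant are illustrative assumptions, not the actual MATLAB/Simulink-to-STM32F429 configuration used in the paper.

```python
import struct
import serial  # pyserial

# Host-side PIL loop sketch: plant simulated on the PC, controller on the target.
link = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1.0)  # assumed port/baud

dt, v_dc = 1e-6, 800.0            # step size and a toy one-state "plant"
for _ in range(1000):
    link.write(struct.pack("<f", v_dc))        # send the measurement to the target
    frame = link.read(4)                       # wait for the 4-byte control reply
    if len(frame) < 4:
        break                                  # timeout: target did not respond
    u = struct.unpack("<f", frame)[0]          # control effort computed on the target
    v_dc += dt * u                             # advance the toy plant model
link.close()
```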
In order to perform the PIL co-simulation, a communication link has to be set up between the host and the target. In this study, UART (Universal Asynchronous Receiver Transmitter) communication is used with the standard communication protocol defined by the manufacturer. The port parameter settings are fixed from the PC as defined by the manufacturer and the application programming interface (API). As the STM DSP board offers several serial communication options, a specific UART communication port can be set, which enables the embedded system to send and receive information from external devices. In addition, the Simulink model time domain and the real time in the DSP hardware domain should be synchronized, which is fixed in this study to $T_s = 1\,\mu s$. Figure 10(a) shows the connection between the Transmit Data (Tx) pin of the transmitter and the Received Data (Rx) pin of the receiver. Note that the transmitter and the receiver should share a common ground. In this study, a USB to RS232 TTL UART Prolific FTDI232 (serial communication) converter is used to communicate between the host PC and the target board. The host, the target device/board, and the communication link are shown in Figure 10.

Figure 10. Processor-in-the-loop technique: (a) host device/board, and communication link, (b) PIL co-simulation platform

5.1 Co-simulation results

In order to validate the proposed control method and assess its performances, the system has been modeled using embedded Matlab functions and co-simulated through the processor-in-the-loop technique using an STM32F429i-Discovery DSP board, with a simulation time step of $T_s = 1\,\mu s$ for both presented control techniques. The co-simulation parameters of the proposed PV-SAPF and its control, as well as those of the PV generator, are gathered in Tables 1 and 2, respectively. The aim is also to exhibit the capability of the system to guarantee active and reactive power sharing between the shunt active power filter and the distribution utility grid at the PCC under unexpected load variations: at t = 0.15 s another linear load is connected to the PCC in addition to the previous load, and it is disconnected again at t = 0.3 s. The system has been examined at the standard test conditions: constant irradiation of 1000 W/m² and constant temperature of 25 °C. The behavior of the system for both the backstepping and the traditional PI controller based direct power control is given in Figures 12 to 16. The source current before compensation and its harmonic spectrum are shown in Figure 11; Figure 11(a) shows a substantial amount of harmonics in the source current, with a total harmonic distortion (THD) equal to 28.96 %, as can be seen in Figure 11(b).

Table 1. PV-SAPF system parameters

SAPF side:
RMS value of phase voltage:
DC-link capacitor Cdc: 5 mF
Source impedance Rs, Ls: 1.6 mΩ, 100 mΩ
Filter impedance Rf, Lf: 1 mΩ, 350 μH
Line impedance Rl, Ll: 2.7 mΩ, 25 μH
Diode rectifier load Rd, Ld: 5 mΩ, 2.6 μH
Fundamental frequency fs:
DC-link voltage reference V*dc:
Control constants k1, k2 = k3: 170, 5×10^9

PV side:
Inductance LPV:
Capacitance CPV: 55 mF
Control constants kVpv, kILpv: 13×10^6, 5×10^3
Table 2. Parameters of the BP Solar SX 150 solar array at the standard conditions

Maximum power (PMax):
Voltage at PMax (Vmp): 34.5 V
Current at PMax (Imp):
Warranted minimum PMax:
Short-circuit current (Isc):
Open-circuit voltage (Voc):
Maximum system voltage:
Temperature coefficient of Isc: (0.065 ± 0.015) %/°C
Temperature coefficient of Voc: -(160 ± 20) mV/°C
Temperature coefficient of power: -(0.5 ± 0.05) %/°C
47 ± 2 °C

The dynamic responses of the PV-SAPF system controlled by the backstepping approach during a sudden load variation are illustrated in Figure 12. The waveforms include the injected filter current, the three-phase source currents, the source voltage and current, the source current harmonic spectrum, and the DC-link capacitor voltage. In this test, another linear load with the same value as the first one is connected to the PCC at t = 0.15 s and disconnected again after 0.15 s. From Figure 12(b), the source current is sinusoidal and in phase with its corresponding voltage, even during the step change caused by the added extra load, as shown in the same figure. The THD of the source current is reduced from 28.96 % before compensation to 1.55 % after compensation, as shown in Figure 12(e). Consequently, the source current is almost free of harmonic and reactive current components, which leads to a unity power factor operation. As indicated in Figure 12(d), the DC-link voltage across the capacitor is maintained constant during the load change, with a voltage drop lower than 30 V; the recovery time is about 0.025 s. Thus, this result confirms the efficiency of the DC-link voltage backstepping control. Using the traditional PI controller, the abovementioned PV-SAPF system waveforms under sudden load changes are shown in Figure 14. A unity power factor operation is achieved with an approximately sinusoidal source current in phase with its corresponding voltage. Using this controller, the THD is reduced from 28.96 % to 2.28 %, as shown in Figure 14(e). The DC-link voltage is maintained constant during the connection and disconnection of the extra load, as illustrated in Figure 14(d). Even though the THD is low with an almost sinusoidal source current, the compensation effectiveness of the PI controller remains inferior to that of the proposed controller in terms of power quality improvement. The waveforms of the corresponding active and reactive powers exchanged between the load, the grid, and the shunt active power filter are shown in Figures 13 and 15 for the proposed and PI control methods, respectively. Figures 13(a) and 15(a) show that the active power of the shunt active power filter is added to the grid power in order to supply the load active power demand. Furthermore, the reactive power waveforms, shown in Figures 13(b) and 15(b), show that the shunt active power filter fulfills the load reactive power demand; this is demonstrated by the zero value of the grid reactive power. Figures 13(c) and 15(c) show the active power produced by the PV system, which is clearly equal to the maximum power of the PV generator at the standard conditions. The time response of the generated PV active power is about 0.03 s for the PI control and about 0.005 s for the proposed backstepping control. The steady-state error between the generated PV power and its reference is about 200 W for the PI control and about 50 W for the proposed backstepping control. Accordingly, these results confirm again the efficiency of the DC-DC boost converter backstepping controller compared to the traditional PI controller.
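The THD figures quoted above can be reproduced from a sampled current waveform with a straightforward FFT-based estimate, sketched below. The fundamental frequency, the number of harmonics considered, and the assumption of an integer number of fundamental cycles in the record are illustrative choices, not specifics taken from the paper.

```python
import numpy as np

def thd_percent(i_samples, fs, f1=50.0, n_harmonics=40):
    """THD in percent: RMS of harmonics 2..n_harmonics over the fundamental magnitude."""
    spectrum = np.abs(np.fft.rfft(i_samples))
    freqs = np.fft.rfftfreq(len(i_samples), d=1.0 / fs)

    def magnitude(f):
        return spectrum[np.argmin(np.abs(freqs - f))]   # bin closest to frequency f

    fundamental = magnitude(f1)
    harmonics = np.array([magnitude(k * f1) for k in range(2, n_harmonics + 1)])
    return 100.0 * np.sqrt(np.sum(harmonics ** 2)) / fundamental
```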
Under continuous operating conditions, the values of the SAPF inductors may vary. A small variation of these inductances causes not only an incorrect amount of instantaneous active and reactive power flow generated by the shunt active power filter but also a higher harmonic distortion. The effect of this parameter variation on the source current harmonic distortion is analyzed for different inductance values, as indicated in Figure 16. It is easy to see that the THD decreases as the filter inductance increases. It is worth noticing that the proposed backstepping direct power control gives the best results for all the considered inductance variations. The simulation time step Ts directly affects the THD obtained with the proposed controller: if the time step Ts is decreased, the source current becomes closer to a pure sinusoid, which reduces the THD value; conversely, if the time step Ts is increased, the source current is no longer purely sinusoidal and its THD value increases. Hence, in this paper, a very small value of Ts equal to 1 µs has been chosen to obtain the best performance of the backstepping controller.

Figure 11. PV-SAPF system responses without filtering function: (a) supply current before harmonics compensation in steady state, (b) harmonic spectrum of supply current

Figure 12. Dynamic responses of PV-SAPF system controlled by backstepping control: (a) filter current, (b) source voltage and source current of a-phase, (c) three-phase source currents, (d) DC-link voltage, (e) harmonic spectrum of line current

Figure 13. Dynamic responses of PV-SAPF system controlled by backstepping control: (a) grid, load, and voltage source inverter active powers, (b) grid, load, and voltage source inverter reactive powers, (c) power generated by the PV generator (GPV), maximum power point of PVG (MPP)

Figure 14. Dynamic responses of PV-SAPF system controlled by PI control: (a) filter current, (b) source voltage and source current of a-phase, (c) three-phase source currents, (d) DC-link voltage, (e) harmonic spectrum of line current

Figure 15. Dynamic responses of PV-SAPF system controlled by PI control: (a) grid, load, and voltage source inverter active powers, (b) grid, load, and voltage source inverter reactive powers, (c) power generated by the PVG, MPP of PVG

Figure 16. Comparative study between PI and backstepping direct power controls: line current THD versus SAPF inductor value

6. Conclusion

This paper has presented a direct power control based on a nonlinear backstepping approach for a grid-connected PV system acting as a shunt active power filter. The proposed control scheme is designed to address DC-link voltage regulation, harmonic elimination, reactive power compensation, and power flow sharing between the PV generator and the utility grid. Co-simulation results based on the processor-in-the-loop technique demonstrate that the backstepping based direct power control strategy not only achieves the regulation of the power factor observed at the PCC between the nonlinear load and the power distribution system but also exhibits excellent transient responses during load variations. Furthermore, the DC-DC boost side backstepping control exhibits a good power flow sharing between the PV generator and the utility grid. When compared to the conventional PI control, the backstepping control displays substantial improvements in terms of supply current harmonic content and PV power sharing.
These results assert that the backstepping control strategy offers higher performances than the conventional control. [1] Lekouaghet, B., Boukabou, A., Lourci, N., Bedrine, K. (2018). Control of PV grid connected systems using MPC technique and different inverter configureuration models. Electric Power Systems Research, 154: 287-298. http://dx.doi.org/10.1016/j.epsr.2017.08.027 [2] Tareen, W.U., Mekhilef, S., Seyedmahmoudian, M., Horan, B. (2017). Active power filter (APF) for mitigation of power quality issues in grid integration of wind and photovoltaic energy conversion system. Renewable and Sustainable Energy Reviews, 70: 635-655. http://dx.doi.org/10.1016/j.rser.2016.11.091 [3] Abdul Kadir, A.F., Khatib, T., Elmenreich, W. (2014). Integrating photovoltaic systems in power system: Power quality impacts and optimal planning challenges. International Journal of Photoenergy, 2014. http://dx.doi.org/10.1155/2014/321826 [4] Ouchen, S., Abdeddaim, S., Betka, A., Menadi, A. (2016). Experimental validation of sliding mode-predictive direct power control of a grid connected photovoltaic system, feeding a nonlinear load. Solar Energy, 137: 328-336. http://dx.doi.org/10.1016/j.solener.2016.08.031 [5] Agarwal, R.K., Hussain, I., Singh, B., Chandra, A., Al-Haddad, K. (2016). Improved power quality of three-phase grid connected Solar Energy Conversion System under grid voltages distortion and imbalances. Industry Applications Society Annual Meeting, 2016, pp. 1-8. http://dx.doi.org/10.1109/IAS.2016.7731823 [6] Tuyen, N.D., Fujita, G. (2015). PV-active power filter combination supplies power to nonlinear load and compensates utility current. IEEE Power and Energy Technology Systems Journal, 2: 32-42. http://dx.doi.org/10.1109/JPETS.2015.2404355 [7] Srinath, S., Poongothai, M.S., Aruna, T. (2017). PV integrated shunt active filter for harmonic compensation. Energy Procedia, 117: 1134-1144. http://dx.doi.org/10.1016/j.egypro.2017.05.238 [8] Romero-Cadaval, E., Spagnuolo, G., Franquelo, L.G., Ramos-Paja, C.A., Suntio, T., Xiao, W.M. (2013). Grid-connected photovoltaic generation plants: Components and operation. IEEE Industrial Electronics Magazine, 7: 6-20. http://dx.doi.org/10.1109/MIE.2013.2264540 [9] Calleja, H., Jimenez, H. (2004). Performance of a grid connected PV system used as active filter. Energy Conversion and Management, 45: 2417-2428. http://dx.doi.org/10.1016/j.enconman.2003.11.017 [10] Afghoul, H., Chikouche, D., Krim, F., Babes, B., Beddar, A. (2016). Implementation of fractional-order integral-plus-proportional controller to enhance the power quality of an electrical grid. Electric Power Components and Systems, 44: 1018-1028. http://dx.doi.org/10.1080/15325008.2016.1147509 [11] Braiek, M.B., Fnaiech, F., Al-Haddad, K. (2005). Adaptive controller based on a feedback linearization technique applied to a three-phase shunt active power filter. Industrial Electronics Society, 2005. IECON 2005. 31st Annual Conference of IEEE, 6. http://dx.doi.org/10.1109/IECON.2005.1569037 [12] Komurcugil, H., Kukrer, O. (2006). A new control strategy for single-phase shunt active power filters using a Lyapunov function. IEEE Transactions on Industrial Electronics, 53: 305-312. http://dx.doi.org/10.1109/TIE.2005.862218 [13] Rahmani, S., Mendalek, N., Al-Haddad, K. (2010). Experimental design of a nonlinear control technique for three-phase shunt active power filter. IEEE Transactions on Industrial Electronics, 57: 3364-3375. http://dx.doi.org/10.1109/TIE.2009.2038945 [14] Habiba, B., Abdelhalim, T. (2017). 
Harmonic Compensation in Five Level NPC Active Filtering: Analysis, Dimensioning and Robust Control Using IT2 FLC. AMSE JOURNALS-AMSE IIETA publication-2017-Series: Advances C, 72(4): 227-247. http://dx.doi.org/10.18280/ama_c.720403 [15] Matas, J., De Vicuna, L.G., Miret, J., Guerrero, J.M., Castilla, M. (2008). Feedback linearization of a single-phase active power filter via sliding mode control. IEEE Transactions on Power Electronics, 23: 116-125. http://dx.doi.org/10.1109/TPEL.2007.911790 [16] Oucheriah, S., Guo, L. (2013). PWM-based adaptive sliding-mode control for boost DC–DC converters. IEEE Transactions on Industrial Electronics, 60: 3291-3294. http://dx.doi.org/10.1109/TIE.2012.2203769 [17] Salimi, M., Siami, S. (2015). Cascade nonlinear control of DC-DC buck/boost converter using exact feedback linearization. Electric Power and Energy Conversion Systems (EPECS), 2015 4th International Conference on, pp. 1-5. http://dx.doi.org/10.1109/EPECS.2015.7368525 [18] Chaoui, A., Gaubert, J.P., Krim, F. (2010). Power quality improvement using DPC controlled three-phase shunt active filter. Electric Power Systems Research, 80: 657-666. http://dx.doi.org/10.1016/j.epsr.2009.10.020 [19] Zhang, M., Lv, R., Hu, E., Xu, C., Yang, X. (2011). Research on direct power control strategy. Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), 2011 2nd International Conference on, pp. 7195-7197. http://dx.doi.org/10.1109/AIMSEC.2011.6010838 [20] Fouad, B., Ali, C., Samir, Z., Salah, S. (2017). Direct Torque Control of Induction Motor Fed by Three-level Inverter Using Fuzzy Logic. AMSE JOURNALS-AMSE IIETA publication-2017-Series: Advances C, 72(4): 248-265. http://dx.doi.org/10.18280/ama_c.720404 [21] Cichowlas, M., Malinowski, M., Kazmierkowski, M.P., Sobczuk, D.L., Rodríguez, P., Pou, J. (2005). Active filtering function of three-phase PWM boost rectifier under different line voltage conditions. IEEE transactions on industrial electronics, 52: 410-419. http://dx.doi.org/10.1109/TIE.2005.843915 [22] Guo, R., Huang, D., Zhang, L. (2005). Chaotic synchronization based on Lie derivative method. Chaos, Solitons & Fractals, 25: 1255-1259. http://dx.doi.org/10.1016/j.chaos.2004.11.067 [23] Byrnes, C., Isidori, A. (2000). Output regulation for nonlinear systems: an overview. International Journal of Robust and Nonlinear Control, 10: 323-337. http://dx.doi.org/10.1002/(SICI)1099-1239(20000430)10:5%3C323::AID-RNC483%3E3.0.CO;2-G [24] Watanabe, E.H., Akagi, H., Aredes, M. (2008). Instantaneous pq power theory for compensating nonsinusoidal systems, in Nonsinusoidal Currents and Compensation, 2008. ISNCC 2008. International School on, pp. 1-10. http://dx.doi.org/10.1109/ISNCC.2008.4627480 [25] Bouzidi, M., Benaissa, A., Barkat, S., Bouafia, S., Bouzidi, A. (2017). Virtual flux direct power control of the three-level NPC shunt active power filter based on backstepping control. International Journal of System Assurance Engineering and Management, 8: 287-300. http://dx.doi.org/10.1007/s13198-016-0433-3 [26] Zhang, Y., Fidan, B., Ioannou, P.A. (2003). Backstepping control of linear time-varying systems with known and unknown parameters. IEEE Transactions on Automatic Control, 48: 1908-1925. http://dx.doi.org/10.1109/TAC.2003.819074 [27] Layadi, N., Zeghlache, S., Benslimane, T., Berrabah, F. (2017). Comparative analysis between the rotor flux oriented control and backstepping control of a double star induction machine (DSIM) under open-phase fault. 
AMSE JOURNALS-AMSE IIETA publication-2017-Series: Advances C, 72(4): 292-311. http://dx.doi.org/10.18280/ama_c.720407
Distributed Hash Table for the Dependency Graph

Jason Cairns

The use of an intermediary message broker for RPC's within the system has proved exceptionally useful. Currently, rather than sending messages directly from node to node, all messages are sent to queues named after chunks on a central Redis server, which routes the messages to nodes holding those chunks and performing blocking pops on the queues [1]. The layer of indirection provided by this setup has effectively enabled information hiding in support of system modularity, as well as a fair degree of dynamism in chunk location. The principal issue with this approach is the high level of centralisation it entails, alongside the dependency on, and associated lack of behavioural control over, external software [2]. Such centralisation casts the Redis server as a single point of failure, and requires connecting nodes to have inherent knowledge of the location of the Redis server. Extensions to the behaviour of the Redis server are possible, though they require programming to a specific API bundled with Redis, with some limitations. These issues are largely offset by the high efficiency and low complexity afforded by the central message-broker approach, but it is worth considering a more dynamic alternative in the implementation of a Distributed Hash Table (DHT) as a supporting transport layer for the system. This document seeks to outline the motivation, implementation, and some variations of a DHT as part of the largeScaleR system.

A key motivator for the usage of a DHT transport layer, beyond the independence and decentralisation engendered, is in enabling a distributed store of the dependency graph, as outlined in Acyclic Dependency Digraphs for Immutable Chunks. The advantages of a DHT in this respect include the ability to scale with the system, through CPU and memory load being spread through the system, as well as greater live fault-tolerance via in-memory replication [3]. Perhaps even more pressingly, a DHT algorithm as implemented in the system affords more control over components, such as the potential for callbacks and hooks, and possibly running these in R for greater system connectivity. This means that there is greater direct control over the graph by the nodes hosting partitions of the graph, which is a shortcoming of Python's Dask; Dask has an explicit dependency graph, contained entirely within the master node, which leaves it out of reach of the nodes that it affects directly [4].

2 Overview of DHT's

A DHT is a key-value store with the capacity to be distributed over many disparate nodes [5]. All well-known modern DHT's involve no central control, with all participating nodes being effectively equivalent and symmetrical. Distributed Hash Table algorithms generally share a few features in common, beyond those inherent in the DHT name:

- Dynamic introduction and exit of nodes is a principal distinguishing feature of DHT's with respect to regular hash tables using nodes as buckets. Resizing of a simple hash table often involves a complete rehashing and remapping of values to the buckets, whereas DHT's expect frequent resizing and can't afford the remapping. DHT's therefore have some hashing function that doesn't require system-wide remapping when nodes/buckets are added or removed [6].
- Nodes usually keep some routing table that is used to optimise lookups [7].
- Nodes typically have some node ID of the same length as keys (or some variation on keys, such as their SHA-1 hash), in order to allow for some direct comparison between them.
- Keys are usually hashed to particular nodes based on some measure of distance, choosing the node with the lowest distance to the key to store the value on.
- Minimisation of route length and minimisation of degree are principal objectives in the analysis of DHT's, with most DHT's contacting $\mathcal{O}(\log N)$ nodes during lookups.

All DHT's have some degree of susceptibility to Byzantine faults such as the Sybil attack, where a flood of new nodes can throw off the system; given that the largeScaleR system is expected to run in a non-adversarial environment, this consideration doesn't factor too heavily into the choice of DHT algorithm [8] [9]. The primary algorithms for DHT's include Chord and Kademlia, alongside others such as Pastry, Tapestry, and Koorde. Chord is among the more simple of DHT's, with Kademlia possessing some advantages in payment for additional complexity. The following subsections describe the Chord and Kademlia algorithms, including discussion on suitability as a DHT message transport layer in largeScaleR, as well as some potential drawbacks, and suggestions for variations to ensure correctness.

2.1 Chord

Chord builds on the consistent hashing algorithm, adding linear lookup and finger tables [10]. Consistent hashing is a hashing scheme with a high probability of load balancing and minimisation of key remapping upon node exit and joining of the network, with only $\mathcal{O}(\frac{1}{N})$ expected key movement [11]. Consistent hashing relies upon an identifier circle, as a form of modulo ring. The identifier ring exists conceptually at the size $2^m$, where m is chosen as large enough to make the probability of hash collision negligible, typically the same as the number of bits in the node/key ID hash. Each node is hashed to some point on the identifier ring based upon its node ID, typically with the SHA-1 hash. Keys are then assigned to nodes by determining their point on the ring, using the same hash function as used for nodes, and specifying that their node assignment is to be the first node clockwise on the ring following that key, with that node termed the key's successor. The original Chord paper has excellent diagrams showing the ring and its relation to the Chord algorithm. Based on the description of key-node assignments given by consistent hashing, the Chord algorithm allows decentralised coordination over the ring through the provision of a lookup routine, which can be used to return either a node or a key. The central requirement is that each node knows its own successor. With this in place, finding a node or a key involves the initiating node querying its successor, with successors forwarding the query to their own successors in a recursive manner, until the node or value is found. As this is $\mathcal{O}(N)$ in terms of route length, an additional stipulation is given that nodes carry a finger table, wherein addresses of nodes in exponentially increasing intervals on the ring are stored, and finger tables are consulted for routing instead of bare successors. Specifically, the entries in a node's finger table are the successors to the points relative to the node at increasing values of $2^{k-1} \bmod 2^m$, $1 \leq k \leq m$.
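As an illustration of the above, the following sketch computes the ring points whose successors make up a node's finger table, together with a naive successor lookup over a global, sorted view of the ring (something no real Chord node has; it stands in for the recursive lookup here). The node IDs in the example are assumed values on a small $2^6$ ring.

```python
import bisect

def finger_starts(node_id, m):
    """Ring points whose successors form node_id's finger table: (n + 2**(k-1)) mod 2**m."""
    return [(node_id + 2 ** (k - 1)) % 2 ** m for k in range(1, m + 1)]

def successor(point, ring):
    """First node clockwise from `point`; `ring` is a sorted list of node IDs."""
    i = bisect.bisect_left(ring, point)
    return ring[i % len(ring)]       # wrap around the ring past the largest ID

ring = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])   # assumed node IDs, m = 6
finger_table = [successor(p, ring) for p in finger_starts(8, m=6)]
# node 8's fingers: [14, 14, 14, 21, 32, 42]
```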
As such, the first element of a node's finger table, at $2^0$ points along from the node, will be that node's successor. Successors to points are found by querying the maximal node less than the point in the existing finger table, and recursively passing the query, until a queried node finds that the queried point lies between itself and its own successor, at which point it returns its own successor. Nodes join the network by finding their successor through querying any arbitrary node in the network. The arrival of new nodes has the potential to throw off existing finger tables, and as such a stabilisation routine must be run periodically to maintain the system, by checking the consistency of successors and predecessors. The need for a regular stabilisation and finger-table fixing routine is not amenable to arbitrary churn rates. If stabilisation occurs less frequently than churn, then node lookups have a higher potential for failure, as nodes may have incorrect successor pointers, or keys may not have been migrated to new nodes. If stabilisation occurs more frequently than churn, then most stabilisation cycles are idempotent and unnecessary. If Chord is used in a variety of heterogeneous environments, it is almost certain not to match churn in all of them. Given that this is the case, a variation on Chord is essential for reliability. My suggestion for removing all background periodic procedures is the following: Node joins occur sequentially and force stabilisation procedures on both the successor and predecessor of the joining node. Migration of keys occurs prior to predecessor stabilisation, and requires checks from successor to predecessor that the pointer is correct and that no queries are pending, before the successor deletes any table elements that it hosts. Joining nodes broadcast their existence and ID recursively along finger tables, with the space of each recursive call bounded by successors, forcing finger-table fixes along all existing nodes. This is additional work, but for datacentre-like applications, as largeScaleR fits, rather than transient IM chat applications, the non-internet scale minimises the additional work, and is sufficient to justify the stability. A secondary drawback of Chord is the fact that the distance measure is asymmetric. This means that new node information is not transmitted through the network without the finger-fixing routines. 2.2 Kademlia Kademlia is a more complex algorithm than Chord, though it possesses certain features that make it more amenable to a large dynamic distributed system [12]. Kademlia sees use by the Ethereum cryptocurrency, the Tox messaging protocol, and the gnunet file sharing system [13]–[15]. XOR serves as the Kademlia distance measure, which, though non-Euclidean, satisfies several useful properties, including symmetry and the triangle inequality, thereby preserving relative distances and allowing propagation of knowledge of new nodes via the very act of lookup. The routing table, known as k-buckets in the Kademlia literature, is a list of m elements, matching the number of bits in node and key IDs as in Chord. Each element in the k-buckets is itself a list of references to k nodes, where k is some pre-decided number shared amongst all nodes, with a recommended value of 20. To determine the correct bucket to store some node's information in the k-buckets, the ID of the node of interest is XOR'ed with the ID of the node performing the check, and the location of the first differing bit indicates the bucket to which the node is sent.
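As a small illustration of the two operations just described (again only a sketch; the function names are mine, not Kademlia's or largeScaleR's, and 160-bit IDs are assumed as with SHA-1):

    def xor_distance(id_a, id_b):
        """Kademlia's distance between two IDs: their bitwise XOR."""
        return id_a ^ id_b

    def bucket_index(own_id, other_id, m=160):
        """Index of the k-bucket that `other_id` falls into from `own_id`'s point of
        view: the position of the first differing bit, equivalently the length of
        the shared ID prefix (0 .. m-1)."""
        d = xor_distance(own_id, other_id)
        if d == 0:
            raise ValueError("a node does not place itself in a bucket")
        return m - d.bit_length()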
In this way, nodes retain more knowledge of nodes closer to them than of nodes further away, as the number of nodes per bucket halves with each unit of magnitude increase in XOR distance. Keys are stored by performing a node lookup of the closest nodes within the corresponding k-bucket, and querying the top α nodes of that list in parallel. The query is run recursively on the results from those nodes, until the results have converged, which is guaranteed to be correct in a static system. Nodes join the network by contacting one existing node in the network, and performing a request for their own ID, which propagates their address through the network. Further features include taking advantage of the fact that node lifetime follows a power law, with longer-lived nodes more likely to live longer; longer-running nodes are kept at the top of the k-bucket lists. As nodes are encountered, they are added dynamically to k-bucket lists. Kademlia has the potential for lookup failure if newly added nodes are closer to the key than the current keyholder, and the keyholder is not updated accordingly. The authors recommend periodic key republishing as a means of combating this, but the periodicity suffers many of the same problems that Chord has. Therefore I suggest that it is better to force a copy of all closest keys to a joining node upon making contact with the network - this is not too difficult, as the distance space is single-dimensional, so a node only needs to contact its two nearest (or equivalent) neighbours in order to determine close keys, and copy from them. It may have to go through a similar strictly ordered join process, as was suggested for Chord, before declaring a complete migration. Nodes departing the network may cause lookup failures, as in Chord; however, I suggest that this can be mitigated by stipulating that all nodes must write their keys and values to disk, so they can be pulled back online and restored if they depart the network. This is one major point of relevance to largeScaleR; these DHTs were originally written with semi-anonymous and potentially adversarial file-sharing in mind, while largeScaleR is intended to be run in a reasonably controlled environment where failing nodes can always be revived. 3 Alternatives to Standard DHT Approaches The two DHT approaches outlined above are surprisingly simple for the amount of power they provide as a decentralised basis for messaging within the system. However, it is important to keep in mind that they are intended for extremely high-scale internet-based file-sharing applications, and largeScaleR can probably get away with an even simpler setup. For instance, the routing table could instead be a list of all nodes, permitting joins and departures, and providing $\mathcal{O}(1)$ lookup cost at the expense of an $\mathcal{O}(N)$ routing table. This mesh algorithm would scale to a reasonable number of nodes, though it is likely to flounder past several hundred. DHTs also aren't the only means of implementing a distributed associative array, which is the base data type that is sought after for our purposes; the skip graph is a distributed version of the skiplist probabilistic data structure, with simple operations and impressive access costs [16]. The skiplist algorithm which underpins it is used by Redis in its implementation of ordered sets. 4 Value and Key Descriptions Aside from determining the form of the base associative array, the structure of the keys and values to be stored in it requires some consideration.
The information to be kept is the dependency graph, including chunk locations. The values are to be mutable, at the very least in order to allow marking as part of the checkpointing and deletion process. Upon chunk creation, a chunk is assigned a random 128-bit ID, which is sufficient to uniquely identify it. References to other chunks in the dependency graph describe the chunk ID of the prerequisites/targets, and these can be looked up directly. 5 Relation to System The dependency graph and the DHT hosting it are separate from, though depended upon by, the working largeScaleR system. For all intents and purposes, the DHT may be accessed through a simple read() and execute() interface at the top level, with perhaps some middle layer communicating storage details and garbage collection information to the DHT. The question remains of how much responsibility should be held by the DHT relative to the operating largeScaleR system. For instance, should the DHT return the address of a chunk when queried, or go beyond that and return the very value of the chunk? The initial inclination is toward the DHT returning the address, with some thin adaptor returning the value, and the system having no further knowledge than the bare minimum of the adaptor's interface. This information hiding aids in modularity and will hopefully result in fewer code changes being necessary when changing out components in the future [17]. 6 Extensions and Variations In-memory replication, redundancy, or caching is the standard means by which DHTs prevent data loss. A challenge this brings is to the consistency of mutable values if replicated across nodes; attaining distributed consensus on changes to existing data is exceptionally difficult, though not impossible [18], [19]. Given that all nodes in the system are trusted, it is better to mirror all data to disk, at least as part of the current implementation. That way, when failures occur, the system is merely a reboot and restore away from functionality – an advantage of a non-adversarial network. Another important variation is that while all worker nodes in the system sit above DHT nodes, the master node must not be a full participant in the DHT network, as the processing burden may be too much, given that the master machine must be the most responsive in the network. The master must have some mechanism of adding a non-participant flag to its RPCs in order not to be taken in by the network. Furthermore, multiple master nodes may be allowed in the system, potentially operating on the same chunks. If this is to be the case, some means of communication between masters must be devised, though this should ideally be delayed until following the implementation of the network itself. The flexibility for multiple masters leads to decreased reliance on the single master not failing, with references to chunks stored in the DHT, rather than sunk to the master's disk.
S. Sanfilippo and P. Noordhuis, "Redis." 2009. V. John and X. Liu, "A survey of distributed message broker queues." 2017. Available: https://arxiv.org/abs/1704.00411 E. K. Lua, J. Crowcroft, M. Pias, R. Sharma, and S. Lim, "A survey and comparison of peer-to-peer overlay network schemes," IEEE Communications Surveys & Tutorials, vol. 7, no. 2, pp. 72–93, 2005, doi: 10.1109/comst.2005.1610546. M. Rocklin, "Dask: Parallel computation with blocked algorithms and task scheduling," 2015. H. Balakrishnan, M. F. Kaashoek, D. Karger, R. Morris, and I.
Stoica, "Looking up data in P2P systems," Communications of the ACM, vol. 46, no. 2, pp. 43–48, Feb. 2003, doi: 10.1145/606272.606299. D. Thaler and C. V. Ravishankar, "A name-based mapping scheme for rendezvous," in Technical report CSE-TR-316-96, university of michigan, University of Michigan, 1996. W. Galuba and S. Girdzijauskas, "Peer to peer overlay networks: Structure, routing and maintenance," in Encyclopedia of database systems, Springer US, 2009, pp. 2056–2061. doi: 10.1007/978-0-387-39940-9_1215. J. R. Douceur, "The sybil attack," in Peer-to-peer systems, Springer Berlin Heidelberg, 2002, pp. 251–260. doi: 10.1007/3-540-45748-8_24. G. Urdaneta, G. Pierre, and M. V. Steen, "A survey of DHT security techniques," ACM Computing Surveys, vol. 43, no. 2, pp. 1–49, Jan. 2011, doi: 10.1145/1883612.1883615. I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan, "Chord: A scalable peer-to-peer lookup service for internet applications," ACM SIGCOMM Computer Communication Review, vol. 31, no. 4, pp. 149–160, Oct. 2001, doi: 10.1145/964723.383071. D. Karger, E. Lehman, T. Leighton, R. Panigrahy, M. Levine, and D. Lewin, "Consistent hashing and random trees," 1997. doi: 10.1145/258533.258660. P. Maymounkov and D. Mazières, "Kademlia: A peer-to-peer information system based on the XOR metric," in Peer-to-peer systems, Springer Berlin Heidelberg, 2002, pp. 53–65. doi: 10.1007/3-540-45748-8_5. V. Buterin et al., "A next-generation smart contract and decentralized application platform," Bitcoin Magazine, vol. 3, no. 37, 2014. R. Alkhulaiwi, A. Sabur, K. Aldughayem, and O. Almanna, "Survey of secure anonymous peer to peer instant messaging protocols," Dec. 2016. doi: 10.1109/pst.2016.7906977. M. Wachs, M. Schanzenbach, and C. Grothoff, "A censorship-resistant, privacy-enhancing and fully decentralized name system," in Cryptology and network security, Springer International Publishing, 2014, pp. 127–142. doi: 10.1007/978-3-319-12280-9_9. J. Aspnes and G. Shah, "Skip graphs," ACM Transactions on Algorithms, vol. 3, no. 4, p. 37, Nov. 2007, doi: 10.1145/1290672.1290674. E. Gamma, Design patterns : Elements of reusable object-oriented software. Reading, Mass: Addison-Wesley, 1995. A. Fox and E. A. Brewer, "Harvest, yield, and scalable tolerant systems," Mar. 1999. doi: 10.1109/hotos.1999.798396. S. Gilbert and N. Lynch, "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services," ACM SIGACT News, vol. 33, no. 2, pp. 51–59, Jun. 2002, doi: 10.1145/564585.564601.
CommonCrawl
Compact closed category In category theory, a branch of mathematics, compact closed categories are a general context for treating dual objects. The idea of a dual object generalizes the more familiar concept of the dual of a finite-dimensional vector space. So, the motivating example of a compact closed category is FdVect, the category having finite-dimensional vector spaces as objects and linear maps as morphisms, with tensor product as the monoidal structure. Another example is Rel, the category having sets as objects and relations as morphisms, with Cartesian monoidal structure. Symmetric compact closed category A symmetric monoidal category $(\mathbf {C} ,\otimes ,I)$ is compact closed if every object $A\in \mathbf {C} $ has a dual object. If this holds, the dual object is unique up to canonical isomorphism, and is denoted $A^{*}$. In a bit more detail, an object $A^{*}$ is called the dual of $A$ if it is equipped with two morphisms called the unit $\eta _{A}:I\to A^{*}\otimes A$ and the counit $\varepsilon _{A}:A\otimes A^{*}\to I$, satisfying the equations $\lambda _{A}\circ (\varepsilon _{A}\otimes A)\circ \alpha _{A,A^{*},A}^{-1}\circ (A\otimes \eta _{A})\circ \rho _{A}^{-1}=\mathrm {id} _{A}$ and $\rho _{A^{*}}\circ (A^{*}\otimes \varepsilon _{A})\circ \alpha _{A^{*},A,A^{*}}\circ (\eta _{A}\otimes A^{*})\circ \lambda _{A^{*}}^{-1}=\mathrm {id} _{A^{*}},$ where $\lambda ,\rho $ are the introduction of the unit on the left and right, respectively, and $\alpha $ is the associator. For clarity, we rewrite the above compositions diagrammatically. In order for $(\mathbf {C} ,\otimes ,I)$ to be compact closed, we need the following composites to equal $\mathrm {id} _{A}$: $A{\xrightarrow {\cong }}A\otimes I{\xrightarrow {A\otimes \eta }}A\otimes (A^{*}\otimes A){\xrightarrow {\cong }}(A\otimes A^{*})\otimes A{\xrightarrow {\epsilon \otimes A}}I\otimes A{\xrightarrow {\cong }}A$ and $\mathrm {id} _{A^{*}}$: $A^{*}{\xrightarrow {\cong }}I\otimes A^{*}{\xrightarrow {\eta \otimes A^{*}}}(A^{*}\otimes A)\otimes A^{*}{\xrightarrow {\cong }}A^{*}\otimes (A\otimes A^{*}){\xrightarrow {A^{*}\otimes \epsilon }}A^{*}\otimes I{\xrightarrow {\cong }}A^{*}$ Definition More generally, suppose $(\mathbf {C} ,\otimes ,I)$ is a monoidal category, not necessarily symmetric, such as in the case of a pregroup grammar. The above notion of having a dual $A^{*}$ for each object A is replaced by that of having both a left and a right adjoint, $A^{l}$ and $A^{r}$, with a corresponding left unit $\eta _{A}^{l}:I\to A\otimes A^{l}$, right unit $\eta _{A}^{r}:I\to A^{r}\otimes A$, left counit $\varepsilon _{A}^{l}:A^{l}\otimes A\to I$, and right counit $\varepsilon _{A}^{r}:A\otimes A^{r}\to I$. These must satisfy the four yanking conditions, each of which are identities: $A\to A\otimes I{\xrightarrow {\eta ^{r}}}A\otimes (A^{r}\otimes A)\to (A\otimes A^{r})\otimes A{\xrightarrow {\epsilon ^{r}}}I\otimes A\to A$ $A\to I\otimes A{\xrightarrow {\eta ^{l}}}(A\otimes A^{l})\otimes A\to A\otimes (A^{l}\otimes A){\xrightarrow {\epsilon ^{l}}}A\otimes I\to A$ and $A^{r}\to I\otimes A^{r}{\xrightarrow {\eta ^{r}}}(A^{r}\otimes A)\otimes A^{r}\to A^{r}\otimes (A\otimes A^{r}){\xrightarrow {\epsilon ^{r}}}A^{r}\otimes I\to A^{r}$ $A^{l}\to A^{l}\otimes I{\xrightarrow {\eta ^{l}}}A^{l}\otimes (A\otimes A^{l})\to (A^{l}\otimes A)\otimes A^{l}{\xrightarrow {\epsilon ^{l}}}I\otimes A^{l}\to A^{l}$ That is, in the general case, a compact closed category is both left and right-rigid, and biclosed. 
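For orientation, it may help to spell out the symmetric case in the motivating example FdVect (a standard computation, included here only as an illustration): for a finite-dimensional space $A$ with basis $e_{1},\dots ,e_{n}$ and dual basis $e^{1},\dots ,e^{n}$, one can take
$\eta _{A}:I\to A^{*}\otimes A,\quad \eta _{A}(1)=\sum _{i=1}^{n}e^{i}\otimes e_{i},\qquad \varepsilon _{A}:A\otimes A^{*}\to I,\quad \varepsilon _{A}(v\otimes \varphi )=\varphi (v).$
The element $\eta _{A}(1)$ does not depend on the choice of basis, and the first composite displayed above sends $v\mapsto \sum _{i}e^{i}(v)\,e_{i}=v$, which is exactly the required identity $\mathrm {id} _{A}$.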
Non-symmetric compact closed categories find applications in linguistics, in the area of categorial grammars and specifically in pregroup grammars, where the distinct left and right adjoints are required to capture word-order in sentences. In this context, compact closed monoidal categories are called (Lambek) pregroups. Properties Compact closed categories are a special case of monoidal closed categories, which in turn are a special case of closed categories. Compact closed categories are precisely the symmetric autonomous categories. They are also *-autonomous. Every compact closed category C admits a trace. Namely, for every morphism $f:A\otimes C\to B\otimes C$, one can define $\mathrm {Tr_{A,B}^{C}} (f)=\rho _{B}\circ (id_{B}\otimes \varepsilon _{C})\circ \alpha _{B,C,C^{*}}\circ (f\otimes C^{*})\circ \alpha _{A,C,C^{*}}^{-1}\circ (id_{A}\otimes \eta _{C^{*}})\circ \rho _{A}^{-1}:A\to B$ which can be shown to be a proper trace. It helps to draw this diagrammatically: $A{\xrightarrow {\cong }}A\otimes I{\xrightarrow {A\otimes \eta _{C^{*}}}}A\otimes (C\otimes C^{*}){\xrightarrow {\cong }}(A\otimes C)\otimes C^{*}{\xrightarrow {\;\;f\otimes C^{*}\;\;}}(B\otimes C)\otimes C^{*}{\xrightarrow {\cong }}B\otimes (C\otimes C^{*}){\xrightarrow {B\otimes \varepsilon _{C}}}B\otimes I{\xrightarrow {\cong }}B.$ Examples The canonical example is the category FdVect with finite-dimensional vector spaces as objects and linear maps as morphisms. Here $A^{*}$ is the usual dual of the vector space $A$. The category of finite-dimensional representations of any group is also compact closed. The category Vect, with all vector spaces as objects and linear maps as morphisms, is not compact closed; it is symmetric monoidal closed. Simplex category The simplex category can be used to construct an example of non-symmetric compact closed category. The simplex category is the category of non-zero finite ordinals (viewed as totally ordered sets); its morphisms are order-preserving (monotone) maps. We make it into a monoidal category by moving to the arrow category, so the objects are morphisms of the original category, and the morphisms are commuting squares. Then the tensor product of the arrow category is the original composition operator. The left and right adjoints are the min and max operators; specifically, for a monotone map f one has the right adjoint $f^{r}(n)=\sup\{m\in \mathbb {N} \mid f(m)\leq n\}$ and the left adjoint $f^{l}(n)=\inf\{m\in \mathbb {N} \mid n\leq f(m)\}$ The left and right units and counits are: ${\mbox{id}}\leq f\circ f^{l}\qquad {\mbox{(left unit)}}$ $\,{\mbox{id}}\leq f^{r}\circ f\quad \ \ \ {\mbox{(right unit)}}$ $f^{l}\circ f\leq {\mbox{id}}\qquad {\mbox{(left counit)}}$ $f\circ f^{r}\leq {\mbox{id}}\qquad {\mbox{(right counit)}}$ One of the yanking conditions is then $f=f\circ {\mbox{id}}\leq f\circ (f^{r}\circ f)=(f\circ f^{r})\circ f\leq {\mbox{id}}\circ f=f.$ The others follow similarly. The correspondence can be made clearer by writing the arrow $\to $ instead of $\leq $, and using $\otimes $ for function composition $\circ $. Dagger compact category A dagger symmetric monoidal category which is compact closed is a dagger compact category. Rigid category A monoidal category that is not symmetric, but otherwise obeys the duality axioms above, is known as a rigid category. A monoidal category where every object has a left (resp. right) dual is also sometimes called a left (resp. right) autonomous category. 
A monoidal category where every object has both a left and a right dual is sometimes called an autonomous category. An autonomous category that is also symmetric is then a compact closed category.
Wikipedia
Exercise 3.14 Killing vectors on two-sphere Consider the three Killing vectors of the two-sphere, (3.188). Show that their commutators satisfy the following algebra:\begin{align}\left[R,S\right]=T&\phantom {10000}(1)\nonumber\\ \left[S,T\right]=R&\phantom {10000}(2)\nonumber\\ \left[T,R\right]=S&\phantom {10000}(3)\nonumber\end{align}The three Killing vectors at 3.188 were\begin{align}R&=\partial_\phi&\phantom {10000}(4)\nonumber\\ S&=\cos{\phi}\partial_\theta-\cot{\theta}\sin{\phi}\partial_\phi&\phantom {10000}(5)\nonumber\\ T&=-\sin{\phi}\partial_\theta-\cot{\theta}\cos{\phi}\partial_\phi&\phantom {10000}(6)\nonumber \end{align}It is easy to prove the algebra as long as we remember that the ##\partial_\theta,\partial_\phi## are just the basis vectors and Carroll's 2.23 that ##\left[X,Y\right]^\mu=X^\lambda\partial_\lambda Y^\mu-Y^\lambda\partial_\lambda X^\mu## which we proved back in exercise 2.04. So first we could write (4),(5),(6) as ##R=\left(0,1\right),\ S=\left(\cos{\phi},-\cot{\theta}\sin{\phi}\right),\ T=\left(-\sin{\phi},-\cot{\theta}\cos{\phi}\right)##. What is more difficult to understand is what this neat algebra means. A common way to picture a commutator (which Carroll uses when introducing the Riemann tensor) is with a diagram: the commutator ##\left[R,S\right]## is the difference between doing ##R## then ##S## and ##S## then ##R##. So the algebra that the Killing vectors satisfy corresponds to this geometry: ##R## has always been shown as a unit vector in increasing ##\phi## direction, ##T,S## are more flexible. I'm not sure how this helps. The proof is given at Ex 3.14 Killing vectors on two-sphere.pdf (2 pages) Labels: Exercises Review chapter 3 Before we start, I have again found a very important paragraph: it is the last in section 3.2 and describes how we have got from the beginning to here. The beginning was the concept of a set which became a manifold and we ended up with metrics and covariant derivatives! After nearly getting to the end of chapter 3 I realised that my ideas about covariant derivatives needed refinement and that I did not really understand parallel transport. With the former it would seem that metric compatibility, ##\nabla_\mu g_{\lambda\nu}=0##, arises out of the Leibniz rule and the demand that the covariant derivative is a tensor and is not an 'additional property' - with the caveat that the manifold in question has a metric with the usual properties. On the latter I was still hazy until I explored further in this review. I still don't think I have mastered every detail of this chapter to section 8 on Killing vectors but one thing is clear: If you have a tensor field ##T\left(x^\mu\right)## which you can express in terms of coordinates ##x^\mu## and we consider two points ##x^\alpha,x^\beta## then if you parallel transport the tensor from ##x^\alpha## along a geodesic to ##x^\beta## and call it ##T^\prime\left(x^\beta\right)## there, then in all likelihood ##T^\prime\left(x^\beta\right)\neq T\left(x^\beta\right)## and in some sense at least ##T^\prime\left(x^\beta\right)=T\left(x^\alpha\right)##. I think. Equation (20) in the document says exactly that for a vector. The rest of chapter 3 was straightforward and finally we met the geodesic deviation equation $$ A^\mu=\frac{D^2}{dt^2}S^\mu=R_{\ \ \nu\sigma\rho}^\mu T^\nu T^\rho S^\sigma $$##A^\mu## is the "relative acceleration of (neighbouring) geodesics", ##S^\mu## is a vector orthogonal to a geodesic (pointing towards its neighbour) and ##T^\mu## is a vector tangent to the geodesic.
The equation expresses the idea that the acceleration between two neighbouring geodesics is proportional to the curvature and, physically, it is the manifestation of gravitational tidal forces. The six page document repeats the above and reviews covariant derivatives, parallel transport and geodesics as shown in the video, Riemann and Killing. It's at Commentary 3 Review chapter 3.pdf. Labels: Commentaries, Cool Graphics Exercise 3.08 Vital statistics of a 3-sphere The metric for the 3-sphere in coordinates ##x^\mu=## ##\left(\psi,\theta,\phi\right)## is $$ {ds}^2={d\psi}^2+\sin^2{\psi}\left({d\theta}^2+\sin^2{\theta}{d\phi}^2\right) $$(a) Calculate the Christoffel connection coefficients. Use whatever method you like, but it is good practice to get the connection coefficients by varying the integral (3.49) (b) Calculate the Riemann tensor ##R_{\ \ \ \sigma\mu\nu}^\rho## , Ricci tensor ##R_{\mu\nu}## and Ricci scalar ## R##. (c) Show that (3.191) is obeyed by this metric, confirming that the 3-sphere is a maximally symmetric space (as you might expect.) 3.191 was$$ R_{\rho\sigma\mu\nu}=\frac{R}{n\left(n-1\right)}\left(g_{\rho\mu}g_{\sigma\nu}-g_{\rho\nu}g_{\sigma\mu}\right) $$Calculating the Christoffel coefficients was easy because I could use the code I had written before. I did not take the advice "get good practice by varying the integral"! I forgot to use the result of exercise 3.3! Grrr. The Riemann tensor was gruelling. It has 81 components but in this case there are really only three which are independent and each of them generate only three more by index symmetries (as shown in the picture). However, every one of the 81 needs checking. We also notice that each component is ± a metric component and wonder if there is some relationship like the relation between these components similar to exercise 3.3. Perhaps a future challenge! The Ricci bits are almost trivial. For (c) I was able to write a simple bit of code to compare all 81 equations. It did the job. A 1-sphere is a circle - the set of points in ##\mathbf{R}^2## at an equal distance from the origin. A 2-sphere is the surface of a sphere - the set of points in ##\mathbf{R}^3## at an equal distance from the origin. A 3-sphere is the set of points in ##\mathbf{R}^4## at an equal distance from the origin. 
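(Not the VBA code referred to in these posts, but as an independent check of the results quoted below, here is a short SymPy sketch, entirely my own, that computes the non-zero Christoffel symbols of this 3-sphere metric from the standard formula ##\Gamma^\rho_{\mu\nu}=\frac12 g^{\rho\sigma}(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\sigma\mu}-\partial_\sigma g_{\mu\nu})##.)

    import sympy as sp

    psi, theta, phi = sp.symbols('psi theta phi')
    x = [psi, theta, phi]
    # metric of the 3-sphere in (psi, theta, phi) coordinates
    g = sp.diag(1, sp.sin(psi)**2, sp.sin(psi)**2 * sp.sin(theta)**2)
    ginv = g.inv()

    def christoffel(rho, mu, nu):
        # Gamma^rho_{mu nu} = (1/2) g^{rho s} (d_mu g_{s nu} + d_nu g_{s mu} - d_s g_{mu nu})
        return sp.simplify(sum(
            ginv[rho, s] * (sp.diff(g[s, nu], x[mu]) + sp.diff(g[s, mu], x[nu]) - sp.diff(g[mu, nu], x[s]))
            for s in range(3)) / 2)

    for r in range(3):
        for m in range(3):
            for n in range(m, 3):
                gamma = christoffel(r, m, n)
                if gamma != 0:
                    print(f'Gamma^{x[r]}_{x[m]}{x[n]} =', gamma)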
(a) Christoffel connection coefficients \begin{align} \Gamma_{\theta\theta}^\psi&=-\sin{\psi}\cos{\psi}&\phantom {10000}(5)\nonumber\\ \Gamma_{\phi\phi}^\psi&=-\sin{\psi}\cos{\psi}\sin^2{\theta}&\phantom {10000}(6)\nonumber\\ \Gamma_{\psi\theta}^\theta=\Gamma_{\theta\psi}^\theta&=\cot{\psi}&\phantom {10000}(7)\nonumber\\ \Gamma_{\phi\phi}^\theta&=-\sin{\theta}\cos{\theta}&\phantom {10000}(8)\nonumber\\ \Gamma_{\psi\phi}^\phi=\Gamma_{\phi\psi}^\phi&=\cot{\psi}&\phantom {10000}(9)\nonumber\\ \Gamma_{\theta\phi}^\phi=\Gamma_{\phi\theta}^\phi&=\cot{\theta}&\phantom {10000}(10)\nonumber \end{align}(b) Riemann tensor components are \begin{align} R_{\ \ \ \theta\psi\theta}^\psi&=\sin^2{\psi}&\phantom {10000}(13)\nonumber\\ R_{\ \ \ \theta\theta\psi}^\psi&=-\sin^2{\psi}&\phantom {10000}(14)\nonumber\\ {R}_{\ \ \ {\phi\psi\phi}}^{\psi}&=\sin^2{\psi}\sin^2{\theta}&\phantom {10000}(15)\nonumber\\ R_{\ \ \ \phi\phi\psi}^\psi&=-\sin^2{\psi}\sin^2{\theta}&\phantom {10000}(16)\nonumber\\ R_{\ \ \ \psi\psi\theta}^\theta&=-1&\phantom {10000}(17)\nonumber\\ R_{\ \ \ \psi\theta\psi}^\theta&=1&\phantom {10000}(18)\nonumber\\ R_{\ \ \ \phi\theta\phi}^\theta&=\sin^2{\psi}\sin^2{\theta}&\phantom {10000}(19)\nonumber\\ R_{\ \ \ \phi\phi\theta}^\theta&=-\sin^2{\psi}\sin^2{\theta}&\phantom {10000}(20)\nonumber\\ R_{\ \ \ \psi\psi\phi}^\phi&=-1&\phantom {10000}(21)\nonumber\\ R_{\ \ \ \psi\phi\psi}^\phi&=1&\phantom {10000}(22)\nonumber\\ R_{\ \ \ \theta\theta\phi}^\phi&=-\sin^2{\psi}&\phantom {10000}(23)\nonumber\\ R_{\ \ \ \theta\phi\theta}^\phi&=\sin^2{\psi}&\phantom {10000}(24)\nonumber \end{align}The Ricci tensor is twice the metric! \begin{align} R_{\mu\nu}=2g_{\mu\nu}&\phantom {10000}(35)\nonumber \end{align}The Ricci scalar is 6. (c) was solved by VBA. Complete answer with more links in Ex 3.08 Vital statistics of a 3-sphere.pdf Zilch - How to play and probabilities Zilch is a parlour game introduced to my family by Camilla who learnt it on her honeymoon with David from a man called Bali Bill. There are variations on the rules which we will not further discuss. The game is played with six ordinary dice by two to six people. Six is quite a lot and it usually gets out of hand with more. One of the players must be chosen as the scorer. Each player takes it in turn to play and it is decided who starts by each player throwing just one dice and seeing who gets the highest number. The highest number starts. If two or more players get the same highest number, those players throw again until a starter is found. The game proceeds with the starter making a play. Once they have finished the next player on the left plays and so it goes round and round. It is important to note that every player gets the same number of plays. In order to start scoring points a player must make 500 points in their first scoring play. Middle: Plays, scoring → When a player plays they start by throwing all six dice. ⇒ Whenever a player throws (any number of dice) there are two possible outcomes: 1) The thrown dice get no points. That is Zilch. Their play ends, Z for Zilch is written against their score. They lose any score they made that play. The next player plays. 2) The dice thrown score some points. The player may then stop and their accumulated total for that play is added to their score; the next player plays. Or the player keeps some of the scoring dice and throws the remainder again. Often there is no choice of how many dice to keep. The total scored so far is accumulated for that play. Now go back to ⇒ (with fewer dice to throw). On step 2 above there may be no dice left.
This is excellent. The player may start again with all six dice and continue to accumulate points for that play. (Go back to →). Since a player must get a score of 500 or more in one play to get into the game, they must throw relentlessly until that happens. This can cause numerous zilches at the start of the game. If a player gets four zilches in a row, 500 is deducted from their score. Subsequent consecutive zilches incur the same penalty of -500. With bad luck, it is quite possible to go seriously negative at the start of the game. I have seen -5000, and the player involved ended the whole game on a record-breaking zero. You might be wondering how you do score points in this game. Here's the answer (thrown dice include → points scored):
· One 1: 100 (and two 1s scores 200 points)
· Three of a kind: 100×N where N is the number on the dice. So three 4s scores 400 points. But …
· Three 1s: 1000 (not 100, which would be very silly)
· Three pairs: 1000 (see the 131113 example below)
· A run (123456)
Clearly throws including the last three are very desirable, as are three 6s, three 5s and two sets of three of a kind. You may want to cover up the answers to test yourself! They are written quite faintly on the right.
Thrown dice: One 1 and two 5s Three 1s Three 4s =400 + one 5 Three pairs 0=Z=zilch Nothing scores One 1 and one 5 A run Three 4s + three 3s Three 1s +one 1
Some of the examples are deliberately tricky, but it is easy to make an error in the excitement of the game. For example a throw like 255552 might easily be scored as 550 (for three 5s and the singleton 5). It is also quite easy to not see three pairs or a run. The last example, 131113, is interesting because one could also take 1000 points for the three pairs and then throw all six dice again. This is normally the better strategy. When fewer than six dice are thrown, a run or three pairs are impossible. When only one or two dice are thrown, three of a kind or three 1s are also impossible. The dice that were previously thrown, whose score is 'in the bank', have no effect on the score of the thrown dice. The scores should be laid out as shown to the right. There are four players: Alice, Bob, Doris and Chris. Alice was the starter, chosen as described above, so her score is in the first column. The players were not sitting in alphabetical order round the table. She is not doing well; she has scored zilch five times in a row. Bob was next. He scored 700 in his first play, zilch in his second, then 200, 800. Doris 1000, 200, Z, 500. Scores for each play are not recorded. It is easy to tell who must play last because the scores are laid out so neatly. Here is an example of Alice's first play: She threw the six dice and got 146523. She kept the 1 and the 5 (worth 150) and threw the other four dice again. With those she got four 2s. She kept three of those, bringing her total on that play to 350. She has one dice left and needs 150 points to get into the game with 500. She throws it and it's a 1! She now has 450 and can throw all six again. But then she threw 364632, which is no points, so she lost all her score that play and got Zilch. On Alice's fifth play she got zilch again so another 500 was deducted. Alice must throw more than 500 in one play to get going in the right direction. If she got exactly 1000 in her sixth play, 0 not Z would be put in her score. Z would be wrong and confusing. Doris threw three 1s as her first throw and wisely stopped. 1000 went in her score and she was in the game. Next time round (after all the others got zilch) she threw 153562 and kept the score of 200.
She probably would have been better off keeping the 1 and throwing five dice again. As stated above, every player gets the same number of plays. The game ends when a player's score gets to 10,000 or more. We'll call that person Bob. When that happens, any other players who have not had as many plays as Bob get one last chance to equal or overtake Bob. So that's Doris and Chris in our example. If Doris succeeded she would be declared the winner (unless Chris overtook her on the last play of the game). Tactics and Etiquette If in a throw any dice fall off the table or are cocked (leaning at an angle due to other dice or some other obstacle), all the thrown dice must be thrown again. It's important not to get zilch in a play but also important to get a decent score. Therein lies the tension in each play and the judgement required. After any throw is made, nobody should touch the dice until a few players (particularly the scorer) have seen them all. Moving them around may be considered cheating. Peter ✠ used to wrap his arms around his thrown dice so only he could see them. This was banned. It may be best not to comment after a throw is made. The player may fail to spot a good score. There is no need to tell them (until it's too late). On the other hand you may innocently call a run a 100 and see if the player falls for your trap. Camilla says this is unsporting. Consider the scorer. They not only have to play but they have to add up everybody else's score. In particular, do not start a new play until the scorer has written down the score from the last play. If you're in Bali and Chris wins, for the next game someone will make him go and get the next round of drinks. While he's away from the table they'll move to sit in his place because they believe the seat is affecting his luck – and they want it! Cheating: It is surprisingly easy to cheat if all the players are discussing the latest gossip or otherwise entertained. A cheat may slyly turn over a dice as they are 'rearranging' a throw. If all the other players are really being so inattentive it serves them right. Be warned, be attentive and devise a punishment if a cheater persists. The Zilch Odds Table We all know that the odds of getting a 1 with one dice is 1 in 6 = 1/6 = 17%. What are the odds (the probability) of getting a 1 with six dice? They aren't six times the odds of getting a 1 with one dice. That would be 100% - a dead cert. Here is a little table with some useful probabilities.
Zilch Odds Table (columns: Throwing dice; Probability of …)
Notes: 1) Does not help much with calculating the odds of getting over 2000 in a play. 2) Not guaranteed correct. 3) These are all calculated in zilch, except for the first three zilch odds which were done with a simulator.
By George, November 2019 with thanks to David and Camilla who checked this for me (and also told me that the rules I had been using were wrong). Labels: Games
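A quick aside on the arithmetic above: the chance of at least one 1 when throwing six dice is 1 − (5/6)^6, about 66.5%, rather than 6 × 1/6. The simulator mentioned in note 3 is not shown in the post; the following is only a rough sketch of one in Python, under the scoring rules described above (and assuming, as the worked examples imply, that lone 5s score as well as lone 1s):

    import random
    from collections import Counter

    def is_zilch(dice):
        """True if the throw scores nothing: no 1s or 5s, no three (or more)
        of a kind, and (with six dice) neither three pairs nor a run."""
        counts = Counter(dice)
        if counts[1] or counts[5]:
            return False
        if any(c >= 3 for c in counts.values()):
            return False
        if len(dice) == 6:
            if sorted(counts.values()) == [2, 2, 2]:   # three pairs
                return False
            if len(counts) == 6:                        # a run, 123456
                return False
        return True

    def zilch_probability(n_dice, trials=200_000):
        hits = sum(is_zilch([random.randint(1, 6) for _ in range(n_dice)])
                   for _ in range(trials))
        return hits / trials

    for n in range(1, 7):
        print(n, 'dice: P(zilch) ~', round(zilch_probability(n), 3))

For a single die this gives the 4/6 chance of scoring nothing, and the estimate shrinks as more dice are thrown, which is the shape of table the post describes.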
CommonCrawl
September 2015, 35(9): 4019-4039. doi: 10.3934/dcds.2015.35.4019
Computation of Lyapunov functions for systems with multiple local attractors
Jóhann Björnsson 1, Peter Giesl 2, Sigurdur F. Hafstein 1 and Christopher M. Kellett 3
1. School of Science and Engineering, Reykjavik University, Menntavegi 1, Reykjavik, IS-101, Iceland
2. Department of Mathematics, University of Sussex, Falmer BN1 9QH
3. School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, New South Wales 2308, Australia
Received June 2014; Revised October 2014; Published April 2015
We present a novel method to compute Lyapunov functions for continuous-time systems with multiple local attractors. In the proposed method one first computes an outer approximation of the local attractors using a graph-theoretic approach. Then a candidate Lyapunov function is computed using a Massera-like construction adapted to multiple local attractors. In the final step this candidate Lyapunov function is interpolated over the simplices of a simplicial complex and, by checking certain inequalities at the vertices of the complex, we can identify the region in which the Lyapunov function is decreasing along system trajectories. The resulting Lyapunov function gives information on the qualitative behavior of the dynamics, including lower bounds on the basins of attraction of the individual local attractors. We develop the theory in detail and present numerical examples demonstrating the applicability of our method.
Keywords: numerical method, asymptotic stability, multiple local attractors, Lyapunov function, dynamical system.
Mathematics Subject Classification: Primary: 37B25, 37M99; Secondary: 37C1.
Citation: Jóhann Björnsson, Peter Giesl, Sigurdur F. Hafstein, Christopher M. Kellett. Computation of Lyapunov functions for systems with multiple local attractors. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 4019-4039. doi: 10.3934/dcds.2015.35.4019
CommonCrawl
Dilation (operator theory) In operator theory, a dilation of an operator T on a Hilbert space H is an operator on a larger Hilbert space K, whose restriction to H composed with the orthogonal projection onto H is T. More formally, let T be a bounded operator on some Hilbert space H, and H be a subspace of a larger Hilbert space H' . A bounded operator V on H' is a dilation of T if $P_{H}\;V|_{H}=T$ where $P_{H}$ is an orthogonal projection on H. V is said to be a unitary dilation (respectively, normal, isometric, etc.) if V is unitary (respectively, normal, isometric, etc.). T is said to be a compression of V. If an operator T has a spectral set $X$, we say that V is a normal boundary dilation or a normal $\partial X$ dilation if V is a normal dilation of T and $\sigma (V)\subseteq \partial X$. Some texts impose an additional condition. Namely, that a dilation satisfy the following (calculus) property: $P_{H}\;f(V)|_{H}=f(T)$ where f(T) is some specified functional calculus (for example, the polynomial or H∞ calculus). The utility of a dilation is that it allows the "lifting" of objects associated to T to the level of V, where the lifted objects may have nicer properties. See, for example, the commutant lifting theorem. Applications We can show that every contraction on Hilbert spaces has a unitary dilation. A possible construction of this dilation is as follows. For a contraction T, the operator $D_{T}=(I-T^{*}T)^{\frac {1}{2}}$ is positive, where the continuous functional calculus is used to define the square root. The operator DT is called the defect operator of T. Let V be the operator on $H\oplus H$ defined by the matrix $V={\begin{bmatrix}T&D_{T^{*}}\\\ D_{T}&-T^{*}\end{bmatrix}}.$ V is clearly a dilation of T. Also, T(I - T*T) = (I - TT*)T and a limit argument[1] imply $TD_{T}=D_{T^{*}}T.$ Using this one can show, by calculating directly, that V is unitary, therefore a unitary dilation of T. This operator V is sometimes called the Julia operator of T. Notice that when T is a real scalar, say $T=\cos \theta $, we have $V={\begin{bmatrix}\cos \theta &\sin \theta \\\ \sin \theta &-\cos \theta \end{bmatrix}}.$ which is just the unitary matrix describing rotation by θ. For this reason, the Julia operator V(T) is sometimes called the elementary rotation of T. We note here that in the above discussion we have not required the calculus property for a dilation. Indeed, direct calculation shows the Julia operator fails to be a "degree-2" dilation in general, i.e. it need not be true that $T^{2}=P_{H}\;V^{2}|_{H}$. However, it can also be shown that any contraction has a unitary dilation which does have the calculus property above. This is Sz.-Nagy's dilation theorem. More generally, if ${\mathcal {R}}(X)$ is a Dirichlet algebra, any operator T with $X$ as a spectral set will have a normal $\partial X$ dilation with this property. This generalises Sz.-Nagy's dilation theorem as all contractions have the unit disc as a spectral set. Notes 1. Sz.-Nagy & Foiaş 1970, 3.1. References • Constantinescu, T. (1996), Schur Parameters, Dilation and Factorization Problems, vol. 82, Birkhauser Verlag, ISBN 3-7643-5285-X. • Paulsen, V. (2002), Completely Bounded Maps and Operator Algebras, Cambridge University Press, ISBN 0-521-81669-6. • Sz.-Nagy, B.; Foiaş, C. (1970), Harmonic analysis of operators on Hilbert space, North-Holland Publishing Company, ISBN 9780720420357.
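To supplement the "direct calculation" alluded to in the Applications section above, here is a sketch of one half of the unitarity check for the Julia operator (the computation of $V^{*}V=I$ is analogous):
$VV^{*}={\begin{bmatrix}T&D_{T^{*}}\\D_{T}&-T^{*}\end{bmatrix}}{\begin{bmatrix}T^{*}&D_{T}\\D_{T^{*}}&-T\end{bmatrix}}={\begin{bmatrix}TT^{*}+D_{T^{*}}^{2}&TD_{T}-D_{T^{*}}T\\D_{T}T^{*}-T^{*}D_{T^{*}}&D_{T}^{2}+T^{*}T\end{bmatrix}}={\begin{bmatrix}I&0\\0&I\end{bmatrix}},$
using $D_{T}^{2}=I-T^{*}T$, $D_{T^{*}}^{2}=I-TT^{*}$, the intertwining relation $TD_{T}=D_{T^{*}}T$ noted above, and its adjoint $D_{T}T^{*}=T^{*}D_{T^{*}}$ (the defect operators are self-adjoint).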
Wikipedia
\begin{document} \begin{abstract} We show that after forcing with a countable support iteration or a finite product of Sacks or splitting forcing over $L$, every analytic hypergraph on a Polish space admits a $\mathbf{\Delta}^1_2$ maximal independent set. This extends an earlier result by Schrittesser (see \cite{Schrittesser2016}). As a main application we get the consistency of $\mathfrak{r} = \mathfrak{u} = \mathfrak{i} = \omega_2$ together with the existence of a $\Delta^1_2$ ultrafilter, a $\Pi^1_1$ maximal independent family and a $\Delta^1_2$ Hamel basis. This solves open problems of Brendle, Fischer and Khomskii \cite{Brendle2019} and the author \cite{Schilhan2019}. We also show in ZFC that $\mathfrak{d} \leq \mathfrak{i}_{cl}$, addressing another question from \cite{Brendle2019}. \end{abstract} \maketitle \section{Introduction} Throughout mathematics, the existence of various kinds of maximal sets can typically only be obtained by an appeal to the \textit{Axiom of Choice} or one of its popular forms, such as \emph{Zorn's Lemma}. Under certain circumstances, though, it is possible to explicitly define such objects. The earliest result in this direction is probably due to Gödel who noted in \cite[p. 67]{Goedel1940} that in the constructible universe $L$, there is a $\Delta^1_2$ well-order of the reals (see \cite[25]{Jech2013} for a modern treatment). Using similar ideas, many other special sets of reals, such as \emph{Vitali sets}, \emph{Hamel bases} or \emph{mad families}, just to name a few, can be constructed in $L$ in a $\Delta^1_2$ way. This has become by now a standard set theoretic technique. In many cases, these results also give an optimal bound for the complexity of such a set. For example, a Vitali set cannot be Lebesgue measurable and in particular cannot have a $\Sigma^1_1$ or $\Pi^1_1$ definition. In other cases, one can get stronger results by constructing $\Pi^1_1$ witnesses. This is typically done using a coding technique, originally developed by Erdős, Kunen and Mauldin in \cite{Erdoes1981}, later streamlined by Miller (see \cite{Miller1989}) and further generalized by Vidnyánszky (see \cite{Vidnyanszky2014}). For example, Miller showed that there are $\Pi^1_1$ Hamel bases and mad families in $L$. Other results of this type can be found e.g. in \cite{Fischer2010}, \cite{Fischer2013} or \cite{Fischer2019}. Since the assumption $V=L$ is quite restrictive, it is interesting to know in what forcing extensions of $L$ definable witnesses for the above mentioned kinds of sets still exist. Various such results exist in the literature, e.g. in \cite{Brendle2013}, \cite{Fischer2011}, \cite{Fischer2017}, \cite{Schrittesser2018} or \cite{Fischer2020}. The starting observation for this paper is that almost all of these examples can be treated in the same framework, as \emph{maximal independent sets in hypergraphs}. \begin{definition} A \emph{hypergraph} $E$ on a set $X$ is a collection of finite non-empty subsets of $X$, i.e. $E \subseteq [X]^{<\omega} \setminus \{ \emptyset\}$. Whenever $Y \subseteq X$, we say that $Y$ is \emph{$E$-independent} if $[Y]^{<\omega} \cap E = \emptyset$. Moreover, we say that $Y$ is \emph{maximal $E$-independent} if $Y$ is maximal under inclusion in the collection of $E$-independent subsets of $X$. \end{definition} Whenever $X$ is a topological space, $[X]^{<\omega}$ is the disjoint sum of the spaces $[X]^n$ for $n \in \omega$.
Here, as usual, $[X]^n$, the set of subsets of $X$ of size $n$ becomes a topological space by identification with the quotient of $X^n$ under the equivalence relation $(x_0, \dots, x_{n-1}) \sim (y_0, \dots, y_{n-1})$ iff $\{x_0, \dots, x_{n-1}\} = \{y_0, \dots, y_{n-1} \}$. Whenever $X$ is Polish, $[X]^{<\omega}$ is Polish as well and we can study its definable subsets. In particular, we can study definable hypergraphs on Polish spaces. The main result of this paper is the following theorem. \begin{thm}\label{thm:maintheorem} After forcing with the $\omega_2$-length countable support iteration (csi) of Sacks or splitting forcing over $L$, every analytic hypergraph on a Polish space has a $\mathbf{\Delta}^1_2$ maximal independent set. \end{thm} This extends a result by Schrittesser \cite{Schrittesser2016}, who proved the above for Sacks forcing, which we denote by $\mathbb{S}$, and ordinary $2$-dimensional graphs (see also \cite{Schrittesser2018}). For equivalence relations this was already known by Budinas \cite{Budinas84}. We will also prove the case of finite products but our main focus will be on the countable support iteration. \emph{Splitting forcing} $\mathbb{SP}$ (Definition~\ref{def:splittingforcing}) is a less-known forcing notion that was originally introduced by Shelah in \cite{Shelah1992} and has been studied in more detail recently (\cite{Spinas2004}, \cite{Spinas2007}, \cite{HeinSpinas2019} and \cite{LaguzziMildenbergerStuber-Rousselle2020}). Although it is very natural and gives a minimal way to add a \emph{splitting real} (see more below), it has not been exploited a lot and to our knowledge, there is no major set theoretic text treating it in more detail. Our three guiding examples for Theorem~\ref{thm:maintheorem} will be \emph{ultrafilters}, \emph{maximal independent families} and \emph{Hamel bases}. Recall that an \emph{ultrafilter} on $\omega$ is a maximal subset $\mathcal{U}$ of $\mathcal{P}(\omega)$ with the \emph{strong finite intersection property}, i.e. the property that for any $\mathcal{A} \in [\mathcal{U}]^{<\omega}$, $\vert \bigcap \mathcal{A} \vert = \omega$.\footnote{In this article, all ultrafilters are considered non-principal.} Thus, letting $E_u := \{ \mathcal{A} \in [\mathcal{P}(\omega)]^{<\omega} : \vert \bigcap \mathcal{A} \vert < \omega \}$, an ultrafilter is a maximal $E_u$-independent set. In \cite{Schilhan2019}, we studied the projective definability of ultrafilters and introduced the cardinal invariant $\mathfrak{u}_B$, which is the smallest size of a collection of Borel subsets of $\mathcal{P}(\omega)$ whose union is an ultrafilter. If there is a $\mathbf{\Sigma}^1_2$ ultrafilter, then $\mathfrak{u}_B = \omega_1$, since every $\mathbf{\Sigma}^1_2$ set is the union of $\omega_1$ many Borel sets. Recall that the classical ultrafilter number $\mathfrak{u}$ is the smallest size of an ultrafilter base. We showed in \cite{Schilhan2019}, that $\mathfrak{u}_B \leq \mathfrak{u}$ and asked whether it is consistent that $\mathfrak{u}_B < \mathfrak{u}$ or even whether a $\Delta^1_2$ ultrafilter can exist while $\omega_1 <\mathfrak{u}$. The difficulty is that we have to preserve a definition for an ultrafilter, while its interpretation in $L$ must be destroyed. This has been achieved before for mad families (see \cite{Brendle2013}). 
An \emph{independent family} is a subset $\mathcal{I}$ of $\mathcal{P}(\omega)$ so that for any disjoint $\mathcal{A}_0, \mathcal{A}_1 \in [\mathcal{I}]^{<\omega}$, $\vert \bigcap_{x \in \mathcal{A}_0} x \cap \bigcap_{x \in \mathcal{A}_1} \omega \setminus x \vert = \omega$. It is called a \emph{maximal independent family} if it is additionally maximal under inclusion. Thus, letting $E_i = \{ \mathcal{A}_0 \dot\cup \mathcal{A}_1 \in [\mathcal{P}(\omega)]^{<\omega} :\vert \bigcap_{x \in \mathcal{A}_0} x \cap \bigcap_{x \in \mathcal{A}_1} \omega \setminus x \vert < \omega \}$, a maximal independent family is a maximal $E_i$-independent set. The definability of maximal independent families was studied by Miller in \cite{Miller1989}, who showed that they cannot be analytic, and recently by Brendle, Fischer and Khomskii in \cite{Brendle2019}, where they introduced the invariant $\mathfrak{i}_B$, the least size of a collection of Borel sets whose union is a maximal independent family. The classical independence number $\mathfrak{i}$ is simply the smallest size of a maximal independent family. In \cite{Brendle2019}, it was asked whether $\mathfrak{i}_B < \mathfrak{i}$ is consistent and whether there can be a $\Pi^1_1$ maximal independent family while $\omega_1 < \mathfrak{i}$. In the same article, it was shown that the existence of a $\Delta^1_2$ maximal independent family is equivalent to the existence of a $\Pi^1_1$ one. The difficulty in the problem is similar to the one before. A \emph{Hamel basis} is a vector-space basis of $\mathbb{R}$ over the field of rationals $\mathbb{Q}$. Thus, letting $E_h := \{ \mathcal{A} \in [\mathbb{R}]^{<\omega} : \mathcal{A} \text{ is linearly dependent over } \mathbb{Q} \}$, a Hamel basis is a maximal $E_h$-independent set. A Hamel basis must be as large as the continuum itself. This is reflected in the fact that, when adding a real, every ground-model Hamel basis is destroyed. But still it makes sense to ask how many Borel sets are needed to get one. Miller, also in \cite{Miller1989}, showed that a Hamel basis can never be analytic. As before, we may ask whether there can be a $\Delta^1_2$ Hamel basis while CH fails. Again, destroying ground-model Hamel bases seems to pose a major obstruction. The most natural way to increase $\mathfrak{u}$ and $\mathfrak{i}$ is by iteratively adding \emph{splitting reals}. Recall that for $x,y \in \mathcal{P}(\omega)$, we say that $x$ \emph{splits} $y$ iff $ \vert x \cap y \vert = \omega$ and $\vert y \setminus x \vert = \omega$. A real $x$ is called \emph{splitting over $V$} iff for every infinite $y \in \mathcal{P}(\omega) \cap V$, $x$ splits $y$. The classical forcing notions adding splitting reals are \emph{Cohen}, \emph{Random} and \emph{Silver forcing} and all forcings that add so-called \emph{dominating reals}. It was shown in \cite{Schilhan2019}, though, that after forcing with any of these, a $\mathbf\Sigma^1_2$ definition with ground model parameters will not define an ultrafilter, and the same argument can be applied to independent families. For this reason, we are going to use the forcing notion $\mathbb{SP}$ that we mentioned above. As an immediate corollary of Theorem~\ref{thm:maintheorem}, we get the following. \begin{thm}\label{thm:mainthm2} It is consistent that $\mathfrak{r} = \mathfrak{u} = \mathfrak{i} = \omega_2$ while there is a $\Delta^1_2$ ultrafilter, a $\Pi^1_1$ maximal independent family and a $\Delta^1_2$ Hamel basis.
In particular, we get the consistency of $\mathfrak{i}_B, \mathfrak{u}_B < \mathfrak{r}, \mathfrak{i}, \mathfrak{u}$. \end{thm} Here, $\mathfrak{r}$ is the reaping number, the least size of a set $\mathcal{S} \subseteq \mathcal{P}(\omega)$ so that there is no splitting real over $\mathcal{S}$. This solves the above-mentioned questions from \cite{Schilhan2019} and \cite{Brendle2019}. Moreover, Theorem~\ref{thm:maintheorem} gives a ``black-box'' way to get many results saying that certain definable families exist in the Sacks model. In \cite{Brendle2019}, another cardinal invariant $\mathfrak{i}_{cl}$ is introduced, which is the smallest size of a collection of closed sets whose union is a maximal independent family. Similarly, one can define a closed version of the ultrafilter number, $\mathfrak{u}_{cl}$. Here, it is irrelevant whether we consider closed subsets of $[\omega]^\omega$ or $\mathcal{P}(\omega)$, since every closed subset of $[\omega]^\omega$ with the strong finite intersection property is $\sigma$-compact (see Lemma~\ref{lem:sigmacompind}). In the model of Theorem~\ref{thm:mainthm2}, we have that $\mathfrak{i}_{cl} = \mathfrak{i}_B$ and $\mathfrak{u}_{cl} = \mathfrak{u}_B$, further answering the questions of Brendle, Fischer and Khomskii. On the other hand we show that $\mathfrak{d} \leq \mathfrak{i}_{cl}$, mirroring Shelah's result that $\mathfrak{d} \leq \mathfrak{i}$ (see \cite{Vaughan1990}). Here, $\mathfrak{d}$ is the dominating number, the least size of a dominating family in $(\omega^\omega, <^*)$. \begin{thm}\label{thm:icleqd} (ZFC) $\mathfrak{d} \leq \mathfrak{i}_{cl}$. \end{thm} The paper is organized as follows. In Section 2, we will consider basic results concerning iterations of tree forcings. This section is interesting in its own right and can be read independently from the rest. More specifically, we prove a version of continuous reading of names for countable support iterations that is widely applicable (Lemma~\ref{lem:nicemaster}). In Section 3, we prove our main combinatorial lemma (Main Lemma~\ref{thm:mainlemma} and~\ref{lem:mainlemmainf}), which is at the heart of Theorem~\ref{thm:maintheorem}. As with Section 2, Section 3 can be read independently of the rest, since our result is purely descriptive set-theoretic. In Section 4, we introduce splitting and Sacks forcing and place them in a bigger class of forcings to which we can apply the main lemma. This combines the results from Sections 2 and 3. We then bring everything together and prove Theorems~\ref{thm:maintheorem}, \ref{thm:mainthm2} and \ref{thm:icleqd}. We end with concluding remarks concerning the further outlook of our technique and pose some questions. \section{Tree forcing} Let $A$ be a fixed countable set, usually $\omega$ or $2$. \begin{enumerate}[label=(\alph*)] \item A \textit{tree} $T$ on $A$ is a subset of $A^{<\omega}$ so that for every $t \in T$ and $n < \vert t \vert$, $t \restriction n \in T$, where $\vert t \vert$ denotes the length of $t$. For $s_0, s_1 \in A^{<\omega}$, we write $s_0 \perp s_1$ whenever $s_0 \not\subseteq s_1$ and $s_1 \not\subseteq s_0$. \item $T$ is \textit{perfect} if for every $t \in T$ there are $s_0, s_1 \in T$ so that $s_0, s_1\supseteq t$ and $s_0 \perp s_1$. \item A node $t \in T$ is called a \textit{splitting node} if there are $i \neq j \in A$ so that $t^\frown i, t^\frown j \in T$. The set of splitting nodes in $T$ is denoted $\splt(T)$.
We define $\splt_n(T)$ to be the set of $t \in \splt(T)$ such that there are exactly $n$ splitting nodes below $t$ in $T$. The finite subtree of $T$ generated by $\splt_n(T)$ is denoted $\splt_{\leq n}(T)$. \item For any $t \in T$ we define the restriction of $T$ to $t$ as $T_t = \{ s \in T : s \not\perp t \}$. \item The set of branches through $T$ is denoted by $[T] = \{ x \in A^\omega : \forall n \in \omega (x \restriction n \in T) \}$. \item $A^{\omega}$ carries a natural Polish topology generated by the clopen sets $[t] = \{ x \in A^\omega : t \subseteq x \}$ for $t \in A^{<\omega}$. Then $[T]$ is closed in $A^\omega$. \item Whenever $X \subseteq A^{\omega}$ is closed, there is a continuous \textit{retraction} $\varphi \colon A^\omega \to X$, i.e. $\varphi''A^\omega = X$ and $\varphi \restriction X$ is the identity. \item A \textit{tree forcing} is a collection $\mathbb{P}$ of perfect trees ordered by inclusion. \item By convention, all tree forcings are closed under restrictions, i.e. if $T \in \mathbb{P}$ and $t \in T$, then $T_t \in \mathbb{P}$, and the trivial condition is $A^{<\omega}$. \item The set $\mathcal{T}$ of perfect subtrees of $A^{<\omega}$ is a $G_\delta$ subset of $\mathcal{P}(A^{<\omega}) \cong \mathcal{P}(\omega)$, where we identify $A^{<\omega}$ with $\omega$, and thus carries a natural Polish topology. It is not hard to see that it is homeomorphic to $\omega^\omega$, when $\vert A \vert \geq 2$. \item Often times, we will use a bar above a variable, as in ``$\bar x$", to indicate that it denotes a sequence. In that case, we either write $x(\alpha)$ or $x_\alpha$ to denote the $\alpha$'th element of that sequence, depending on the context. \item Let $\langle T_i : i < \alpha \rangle$ be a sequence of trees where $\alpha$ is an arbitrary ordinal. Then we write $\bigotimes_{i < \alpha} T_i$ for the set of finite partial sequences $\bar s$ where $\dom \bar s \in [\alpha]^{<\omega}$ and for every $i \in \dom \bar s$, $s(i) \in T_i$. \item $(A^\omega)^\alpha$ carries a topology generated by the sets $[\bar s] = \{ \bar x \in (A^\omega)^\alpha : \forall i \in \dom \bar s (x(i) \in [s(i)]) \}$ for $\bar s \in \bigotimes_{i < \alpha} A^{<\omega}$. \item Whenever $X \subseteq (A^\omega)^\alpha$ and $C \subseteq \alpha$, we define the \textit{projection of $X$ to $C$} as $X \restriction C = \{ \bar x \restriction C : \bar x \in X \}$. \end{enumerate} \begin{fact} Let $\mathbb{P}$ be a tree forcing and $G$ a $\mathbb{P}$-generic filter over $V$. Then $\mathbb{P}$ adds a real $ x_G := \bigcup \{ s \in A^{<\omega} : \forall T \in G (s \in T) \} \in A^\omega$. Moreover, $V[G] = V[x_G]$. \end{fact} \begin{definition}\label{def:axiomA} We say that $(\mathbb{P}, \leq)$ is \textit{Axiom A} if there is a decreasing sequence of partial orders $\langle \leq_n : n \in \omega \rangle$ refining $\leq$ on $\mathbb{P}$ so that \begin{enumerate} \item for any $n \in \omega$ and $T, S \in \mathbb{P}$, if $S \leq_n T$, then $S \cap A^{<n} = T \cap A^{<n}$, \item for any fusion sequence, i.e. a sequence $\langle p_n : n \in \omega \rangle$ where $p_{n+1} \leq_n p_n$ for every $n$, $p = \bigcap_{n \in \omega} p_n \in \mathbb{P}$ and $p \leq_n p_n$ for every $n$, \item and for any maximal antichain $D \subseteq \mathbb{P}$, $p \in \mathbb{P}$, $n \in \omega$, there is $q \leq_n p$ so that $\{ r \in D : r \not\perp q \}$ is countable. 
\end{enumerate} Moreover, we say that $(\mathbb{P}, \leq)$ is \textit{Axiom A with continuous reading of names} (\textit{crn}) if there is such a sequence of partial orders so that additionally, \begin{enumerate} \setcounter{enumi}{3} \item for every $p \in \mathbb{P}$, $n \in \omega$ and $\dot y$ a $\mathbb{P}$-name for an element of a Polish space\footnote{In the generic extension $V[G]$ we reinterpret $X$ as the completion of $(X)^V$. Similarly, we reinterpret spaces $(A^\omega)^\alpha$, continuous functions, open and closed sets on these spaces. This should be standard.} $X$, there is $q \leq_n p$ and a continuous function $f \colon [q] \to X$ so that $$ q \Vdash \dot y [G] = f(x_G).$$ \end{enumerate} \end{definition} Although (1) is typically not part of the definition of Axiom A, we include it for technical reasons. The only classical example that we are aware of, in which it is not clear whether (1)--(4) can be realized simultaneously, is Mathias forcing. Let $\langle \mathbb{P}_\beta, \dot{\mathbb{Q}}_\beta : \beta < \alpha \rangle$ be a countable support iteration of tree forcings that are Axiom A with crn, where for each $\beta < \alpha$, $$\Vdash_{\mathbb{P}_\beta} \text{``}\langle \dot{\leq}_{\beta,n} : n \in \omega \rangle \text{ witnesses that } \dot{\mathbb{Q}}_{\beta} \text{ is Axiom A with crn''}.$$ \begin{enumerate}[label=(\alph*)] \setcounter{enumi}{13} \item For each $n \in \omega, a \subseteq \alpha$, we define $\leq_{n,a}$ on $\mathbb{P}_\alpha$, where $$\bar q \leq_{n,a} \bar p \leftrightarrow \left(\bar q \leq \bar p \wedge \forall \beta \in a ( \bar q \restriction \beta \Vdash_{\mathbb{P}_\beta} \dot q(\beta) \dot{\leq}_{\beta,n} \dot p(\beta))\right).$$ \item The support of $\bar p \in \mathbb{P}_\alpha$ is the set $\supp (\bar p) = \{ \beta < \alpha : \bar p \Vdash \dot p(\beta) \neq \mathbbm{1} \}$. \end{enumerate} Recall that a condition $q$ is called a master condition over a model $M$ if for any maximal antichain $D \in M$, $\{ p \in D : q \not\perp p \} \subseteq M$. Equivalently, it means that for every generic filter $G$ over $V$ containing $q$, $G$ is generic over $M$ as well. Throughout this paper, when we say that \emph{$M$ is elementary}, we mean that it is elementary in a large enough model of the form $H(\theta)$. Sometimes, we will say that \emph{$M$ is a model of set theory} or just that \emph{$M$ is a model}. In most generality, this just means that $(M,\in)$ satisfies a strong enough fragment of ZFC. But this is a way too general notion for our purposes. For instance, such $M$ may not even be correct about what $\omega$ is. Thus, let us clarify that in all our instances this will mean that $M$ is either elementary or an extension of an elementary model by a countable (in $M$) forcing. In particular, some basic absoluteness (e.g. for $\Sigma^1_1$ or $\Pi^1_1$ formulas) holds true between $M$ and $V$, $M$ is transitive below $\omega_1$ and $\omega_1$ is computed correctly. \begin{fact}[{Fusion Lemma, see e.g.
\cite[Lemma 1.2, 2.3]{Baumgartner1979}}] If $\langle a_n : n \in \omega \rangle$ is $\subseteq$-increasing, $\langle \bar p_n : n \in \omega \rangle$ is such that $\forall n \in \omega (\bar p_{n+1} \leq_{n,a_n} \bar p_n)$ and $\bigcup_{n \in \omega} \supp(\bar p_n) \subseteq \bigcup_{n \in \omega} a_n \subseteq \alpha$, then there is a condition $\bar p \in \mathbb{P}_\alpha$ so that for every $n \in \omega$, $\bar p \leq_{n,a_n} \bar p_n$; in fact, for every $\beta < \alpha$, $\bar p \restriction \beta \Vdash \dot p(\beta) = \bigcap_{n \in \omega} \dot p_n(\beta)$. Moreover, let $M$ be a countable elementary model, $\bar p \in M \cap \mathbb{P}_\alpha$, $n \in \omega$, $a \subseteq M\cap\alpha$ finite and $\langle \alpha_i : i \in \omega \rangle$ a cofinal increasing sequence in $M\cap \alpha$. Then there is $\bar q \leq_{n,a} \bar p$ a master condition over $M$ so that for every name $\dot y \in M$ for an element of $\omega^\omega$ and $j \in \omega$, there is $i \in \omega$ so that below $\bar q$, the value of $\dot y \restriction j$ only depends on the $\mathbb{P}_{\alpha_i}$-generic. \end{fact} \begin{enumerate}[label=(\alph*)] \setcounter{enumi}{15} \item For $G$ a $\mathbb{P}_\alpha$-generic, we write $\bar x_G$ for the generic element of $\prod_{\beta <\alpha}A^\omega$ added by $\mathbb{P}_\alpha$. \end{enumerate} Let us from now on assume that for each $\beta < \alpha$ and $n \in \omega$, $\mathbb{Q}_\beta$ and $\leq_{\beta,n}$ are fixed analytic subsets of $\mathcal{T}$ and $\mathcal{T}^2$, respectively, coded in $V$. Although the theory that we develop below can be extended to a large extent to non-definable iterands, we will only focus on this case, since we need stronger results later on. \begin{lemma} \label{lem:nicemaster} For any $\bar p \in \mathbb{P}_\alpha$, $M$ a countable elementary model so that ${\mathbb{P}_\alpha, \bar p \in M}$ and $n \in \omega, a \subseteq M\cap \alpha$ finite, there is $\bar q \leq_{n,a} \bar p$ a master condition over $M$ and a closed set $[\bar q] \subseteq (A^\omega)^\alpha$ so that \begin{enumerate} \item $\bar q \Vdash \bar x_G \in [\bar q]$, \end{enumerate} for every $\beta < \alpha$, \begin{enumerate} \setcounter{enumi}{1} \item $\bar q \Vdash \dot q(\beta) = \{s \in A^{<\omega} : \exists \bar z \in [\bar q] (\bar z \restriction \beta = \bar x_{G} \restriction \beta \wedge s \subseteq z(\beta)) \} $, \item the map sending $\bar x \in [\bar q] \restriction \beta$ to $\{s \in A^{<\omega} : \exists \bar z \in [\bar q] (\bar z \restriction \beta = \bar x \wedge s \subseteq z(\beta)) \}$ is continuous and maps to $\mathbb{Q}_\beta$, \item $[\bar q] \restriction \beta \subseteq (A^\omega)^\beta$ is closed, \end{enumerate} and for every name $\dot y \in M$ for an element of a Polish space $X$, \begin{enumerate} \setcounter{enumi}{4} \item there is a continuous function $f \colon [\bar q] \to X$ so that $\bar q \Vdash \dot y = f(\bar x_G).$ \end{enumerate} \end{lemma} \begin{enumerate}[label=(\alph*)] \setcounter{enumi}{16} \item We call such $\bar q$ as in Lemma~\ref{lem:nicemaster} a \textit{good master condition over $M$}. \end{enumerate} Before we prove Lemma~\ref{lem:nicemaster}, let us draw some consequences from the definition of a good master condition. \begin{lemma}\label{lem:intermediategoodanalytic}\label{lem:goodmaster2} Let $\bar q \in \mathbb{P}_\alpha$ be a good master condition over a model $M$ and $\dot y \in M$ a name for an element of a Polish space $X$.
\begin{enumerate}[label=(\roman*)] \item Then $[\bar q]$ is unique, in fact it is the closure of $\{ \bar x_G : G \ni \bar q \text{ is generic over } V\}$ in any forcing extension $W$ of $V$ where $\left(\beth_{\omega}(\vert \mathbb{P}_\alpha \vert)\right)^V$ is countable. \item The continuous map $f \colon [\bar q] \to X$ given by (5) is unique and \item whenever $Y \in M$ is an analytic subset of $X$ and $\bar q \Vdash \dot y \in Y$, then $f''[\bar q] \subseteq Y$. \end{enumerate} Moreover, there is a countable set $C \subseteq \alpha$, not depending on $\dot y$, so that \begin{enumerate}[label=(\roman*)] \setcounter{enumi}{3} \item $[\bar q] \restriction C$ is a closed subset of the Polish space $(A^{\omega})^C$ and $[\bar q] = ([\bar q] \restriction C) \times (A^\omega)^{\alpha \setminus C}$, \item for every $\beta \in C$, there is a continuous function $g \colon [\bar q] \restriction (C \cap \beta)\to \mathbb{Q}_\beta$, so that for every $\bar x \in [\bar q]$, $$g(\bar x \restriction (C \cap \beta)) = \{s \in A^{<\omega} : \exists \bar z \in [\bar q] (\bar z \restriction \beta = \bar x \restriction \beta \wedge s \subseteq z(\beta)) \} ,$$ \item there is a continuous function $f \colon [\bar q] \restriction C \to X$, so that $$\bar q \Vdash \dot y = f(\bar x_G \restriction C).$$ \end{enumerate}\end{lemma} \begin{proof} Let us write, for every $\beta < \alpha$ and $\bar x \in [\bar q] \restriction \beta$, $$T_{\bar x} := \{s \in A^{<\omega} : \exists \bar z \in [\bar q] (\bar z \restriction \beta = \bar x \wedge s \subseteq z(\beta)) \}.$$ For (i), let $W$ be an extension in which $\left(\beth_{\omega}(\vert \mathbb{P}_\alpha \vert)\right)^V$ is countable and let $\bar s \in \bigotimes_{i < \alpha} A^{<\omega}$ be arbitrary so that $[\bar s] \cap [\bar q]$ is non-empty. We claim that there is a generic $G$ over $V$ containing $\bar q$ so that $\bar x_G \in [\bar s]$. This is shown by induction on $\max(\dom (\bar s))$. For $\bar s = \emptyset$ the claim is obvious. Now assume $\max(\dom (\bar s)) = \beta$, for $\beta < \alpha$. Then, by (3), $O := \{ \bar x \in [\bar q] : s(\beta) \in T_{\bar x \restriction \beta}\}$ is open and it is non-empty since $[\bar s] \cap [\bar q] \neq \emptyset$. Applying the inductive hypothesis, there is a generic $G \ni \bar q$ so that $\bar x_G \in O$. In $V[G \restriction \beta]$ we have, by (2), that $T_{\bar x_G \restriction \beta} = \dot q(\beta)[G]$. Moreover, since $\bar x_G \in O$, we have that $s(\beta) \in \dot q(\beta)[G]$. Then it is easy to force over $V[G \restriction \beta]$, to get a full $\mathbb{P}_\alpha$ generic $H \supseteq G \restriction \beta$ containing $\bar q$ so that $\bar x_{H} \restriction \beta = \bar x_G \restriction \beta $ and $s(\beta) \subseteq \bar x_H(\beta)$. By (1), for every generic $G$ over $V$ containing $\bar q$, $\bar x_G \in [\bar q]$. Thus we have shown that the set of such $\bar x_G$ is dense in $[\bar q]$. Uniqueness follows from $[\bar q]$ being closed and the fact that if two closed sets coded in $V$ agree in $W$, then they agree in $V$. This follows easily from $\mathbf\Pi^1_1$ absoluteness. Now (ii) follows easily since any two continuous functions given by (5) have to agree on a dense set in an extension $W$ and thus they agree in $V$. Again this is an easy consequence of $\mathbf\Pi^1_1$ absoluteness. For (iii), let us consider the analytic space $Z = \{0\} \times X \cup \{1\} \times Y$, which is the disjoint union of the spaces $X$ and $Y$. 
Then there is a continuous surjection $F \colon \omega^\omega \to Z$ and by elementarity we can assume it is in $M$. Let us find in $M$ a name $\dot z$ for an element of $\omega^\omega$ so that in $V[G]$, if $\dot y[G] \in Y$, then $F(\dot z[G]) = (1,\dot y[G])$, and if $\dot y[G] \notin Y$, then $F(\dot z[G]) = (0,\dot y[G])$. By (5), there is a continuous function $g \colon [\bar q] \to \omega^\omega$ so that $\bar q \Vdash \dot z = g(\bar x_G)$. Since $\bar q \Vdash \dot y \in Y$, we have that for any generic $G$ containing $\bar q$, $F(g(\bar x_G)) = (1,f(\bar x_G))$. By density, for every $\bar x \in [\bar q]$, $F(g(\bar x)) = (1,f(\bar x))$ and in particular $f(\bar x) \in Y$. Now let us say that the support of a function $g \colon [\bar q] \to X$ is the smallest set $C_g \subseteq \alpha$ so that the value of $g(\bar x)$ only depends on $\bar x \restriction C_g$. The results of \cite{Bockstein1948} imply that if $g$ is continuous, then $g$ has countable support. Note that for all $\beta \notin \supp(\bar q)$, the map in (3) is constant on the set of generics and by continuity it is constant everywhere. Thus it has empty support. Let $C$ be the union of $\supp(\bar q)$ with all the countable supports given by instances of (3) and (5). Then $C$ is a countable set. For (iv), (v) and (vi), note that $[\bar q] \restriction C = \{ \bar y \in (A^\omega)^C : \bar y^\frown (\bar x \restriction \alpha \setminus C) \in [\bar q] \}$ for $\bar x \in [\bar q]$ arbitrary, and recall that in a product, sections of closed sets are closed and continuous functions are coordinate-wise continuous. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:nicemaster}] Let us fix for each $\beta < \alpha$ a continuous surjection $F_\beta \colon \omega^\omega \to \mathbb{Q}_\beta$. The proof is by induction on $\alpha$. If $\alpha = \beta +1$, then $\mathbb{P}_\alpha = \mathbb{P}_\beta * \dot{\mathbb{Q}}_{\beta}$. Let $\bar q_0 \leq_{n,a} \bar p \restriction \beta$ be a master condition over $M$ and $H \ni \bar q_0$ a $\mathbb{P}_\beta$ generic over $V$. Then, applying a standard fusion argument using Axiom A with continuous reading of names in $V[H]$ to $\mathbb{Q}_\beta$, we find $q(\beta) \leq_{\beta,n} p(\beta)$ a master condition over $M[H]$ (note that $H$ is also $M$ generic since $\bar q_0$ is a master condition over $M$) so that for each name $\dot y \in M[H]$ for an element of a Polish space $X$ there is a continuous function $f \colon [q(\beta)] \to X$ so that $q(\beta) \Vdash \dot y = f(\dot{x}_G)$. Thus we find in $V$, a $\mathbb{P}_\beta$-name $\dot q(\beta)$ so that $\bar q_0$ forces that it is such a condition. Let $M^+ \ni M$ be a countable elementary model containing $\dot q(\beta)$ and $\bar q_0$, and let $\bar q_{1/2} \leq_{n,a} \bar q_{0}$ be a master condition over $M^+$. Again let $M^{++}\ni M^+$ be a countable elementary model containing $\bar q_{1/2}$. By the induction hypothesis we find $\bar q_1 \leq_{n,a} \bar q_{1/2}$ a good master condition over $M^{++}$. Finally, let $\bar q = \bar q_1 ^\frown \dot{q}(\beta)$. Then $\bar q \leq_{n,a} \bar p$ and $\bar q$ is a master condition over $M$. Since $\dot q(\beta) \in M^+\subseteq M^{++}$, there is a continuous function $f \colon [\bar q_1] \to \omega^\omega$, so that $\bar q_1 \Vdash_\beta F_\beta(f(\bar x_{H})) = \dot q(\beta)$. Here note that $F_\beta$ is in $M$ by elementarity and we indeed find a name $\dot z$ in $M^+$ so that $\bar q_0 \Vdash F_\beta(\dot z) = \dot q(\beta)$. 
Let $[ \bar q] = \{ \bar x \in (A^{\omega})^\alpha : \bar x \restriction \beta \in [\bar q_1] \wedge x(\beta) \in [F_\beta(f(\bar x \restriction \beta))] \}$. Then $[\bar q]$ is closed and (1), (2), (3), (4) hold true. To see that $[\bar q]$ is closed, note that the graph of a continuous function is always closed, when the codomain is a Hausdorff space. For (5), let $\dot y \in M$ be a $\mathbb{P}_\alpha$-name for an element of a Polish space $X$. If $H \ni \bar q_1$ is $V$-generic, then there is a continuous function $g \colon [q(\beta)] \to X$ in $V[H]$ so that $V[H] \models q(\beta) \Vdash g(\dot x_G) = \dot y$, where we view $\dot y$ as a $\mathbb{Q}_\beta$-name in $M[H]$. Moreover there is a continuous retraction $\varphi \colon A^\omega \to [q(\beta)]$ in $V[H]$. Since $M^+$ was chosen elementary enough, we find names $\dot g$ and $\dot \varphi$ for $g$ and $\varphi$ in $M^+$. The function $g \circ \varphi$ is an element of the space\footnote{The topology is such that for any continuous $h$ mapping to $C(A^\omega, X)$, $(x,y) \mapsto h(x)(y)$ is continuous.} $C(A^\omega, X)$, but this is not a Polish space when $A$ is infinite, i.e. when $A^\omega$ is not compact. It is though, always a coanalytic space (consult e.g. \cite[12, 2.6]{Kechris1995} to see how $C(A^\omega, X)$ is a coanalytic subspace of a suitable Polish space). Thus there is an increasing sequence $\langle Y_\xi : \xi < \omega_1 \rangle$ of analytic subspaces such that $\bigcup_{\xi < \omega_1} Y_\xi = C(A^\omega, X)$ and the same equality holds in any $\omega_1$-preserving extension. Since $\bar q_{1/2}$ is a master condition over $M^+$, we have that $\bar q_{1/2} \Vdash \dot g \circ \dot \varphi \in Y_\xi$, where $\xi = M^+ \cap \omega_1$. Since $\bar q_1$ is a good master condition over $M^{++}$ and $Y_\xi \in M^{++}$, by Lemma~\ref{lem:intermediategoodanalytic}, there is a continuous function $g' \in V$, $g' \colon [\bar q_1] \to Y_\xi$, so that $\bar q_1 \Vdash g'(\bar x_H) = \dot g \circ \dot \varphi$. Altogether we have that $\bar q \Vdash \dot y = g'(\bar x_G \restriction \beta)(x_G(\beta))$. For $\alpha$ limit, let $\langle \alpha_i : i \in \omega \rangle$ be a strictly increasing sequence cofinal in $M \cap \alpha$ and let $\bar q_0 \leq_{n,a} \bar p$ be a master condition over $M$ so that for every name $\dot y \in M$ for an element of $\omega^\omega$, $j \in \omega$, the value of $\dot y \restriction j$ only depends on the generic restricted to $\mathbb{P}_{\alpha_i}$ for some $i \in \omega$. Let us fix a ``big" countable elementary model $N$, with $\bar q_0, M \in N$. Let $\langle a_i : i \in \omega\rangle$ be an increasing sequence of finite subsets of $N \cap \alpha$ so that $a_0 = a$ and $\bigcup_{i \in \omega} a_i = N \cap \alpha$. Now inductively define sequences $\langle M_i : i \in \omega\rangle$, $\langle \bar r_i : i \in \omega \rangle$, initial segments lying in $N$, so that for every $i \in \omega$, \begin{enumerate}[label=-] \item $M_0 = M$, $\bar r_0 = \bar q_0 \restriction \alpha_0$, \item $M_{i+1} \ni \bar q_0$ is a countable model, \item $M_{i}, \bar r_i, a_i \in M_{i+1}$ \item $\bar r_i$ is a good $\mathbb{P}_{\alpha_i}$ master condition over $M_i$, \item $r_{i+1} \leq_{n + i,a_i\cap \alpha_i} r_i ^\frown \bar q_0 \restriction [\alpha_i, \alpha_{i+1})$. \end{enumerate} Define for each $i \in \omega$, $\bar q_i = \bar r_i^\frown \bar q_0 \restriction [\alpha_i, \alpha) $. 
Then $\langle \bar q_i : i \in \omega \rangle$ is a fusion sequence in $\mathbb{P}_\alpha$ and we can find a condition $\bar q \leq_{n,a} \bar q_0 \leq_{n,a} \bar p$, where for each $\beta < \alpha$, $\bar q \restriction \beta \Vdash \dot q(\beta) = \bigcap_{i \in \omega} \dot q_i(\beta)$. Finally let $[\bar q] := \bigcap_{i \in \omega} ([\bar r_i] \times (A^\omega)^{[\alpha_i, \alpha)})$. Then (1) is easy to check. For (5), we can assume without loss of generality that $\dot y$ is a name for an element of $\omega^\omega$ since for any Polish space $X$, there is a continuous surjection from $\omega^\omega$ to $X$. Now let $(i_j)_{j \in \omega}$ be increasing so that $\dot y \restriction j$ is determined on $\mathbb{P}_{\alpha_{i_j}}$ for every $j \in \omega$. Since $\bar r_{i_j}$ is a good master condition over $M$, there is a continuous function $f_j \colon [\bar r_{i_j}] \to \omega^j$ so that $\bar r_{i_j} \Vdash \dot y \restriction j = f_j(\bar x_{G_{\alpha_{i_j}}})$ for every $j \in \omega$. It is easy to put these functions together into a continuous function $f \colon [\bar q] \to \omega^\omega$, so that $f(\bar x) \restriction j = f_j(\bar x \restriction \alpha_{i_j})$. Then we obviously have that $\bar q \Vdash \dot y = f(\bar x_G)$. Now let us fix for each $i \in \omega$, $C_i \subseteq \alpha_i$ a countable set as given by Lemma~\ref{lem:intermediategoodanalytic} applied to $\bar r_i$, $M_i$, which by elementarity exists in $N$. Let $C = \bigcup_{i \in \omega} C_i$. Then $[\bar q] = [\bar q] \restriction C \times (A^\omega)^{\alpha \setminus C}$ and $[\bar q] \restriction C$ is closed. For every $\beta \in \alpha \setminus C$, the map given in (3) is constant and maps to $\mathbb{Q}_\beta$, as $A^{<\omega}$ is the trivial condition. Thus we may restrict our attention to $\beta \in C$. Let us write $X_i = ([\bar r_i] \times (A^\omega)^{[\alpha_i, \alpha)}) \restriction C$ for every $i \in \omega$ and note that $\bigcap_{i \in \omega} X_i = [\bar q] \restriction C$. For every $\beta \in C$, $\bar x \in [\bar q] \restriction (C \cap \beta)$ and $i \in \omega$, we write $$T_{\bar x} := \{s \in A^{<\omega} : \exists \bar z \in [\bar q] \restriction C (\bar z \restriction \beta = \bar x \wedge s \subseteq z(\beta)) \}$$ and $$T_{\bar x}^i = \{s \in A^{<\omega} : \exists \bar z \in X_i (\bar z \restriction \beta = \bar x \wedge s \subseteq z(\beta))\}.$$ \begin{claim} For every $i \in \omega$ with $\beta \in a_i$, $T^{i+1}_{\bar x} \leq_{\beta, i} T^i_{\bar x}$. In particular, $\bigcap_{i \in \omega} T^i_{\bar x} \in \mathbb{Q}_\beta$. \end{claim} \begin{proof} If $\alpha_{i+1} \leq \beta$, then $T^{i+1}_{\bar x} = T^i_{\bar x} = A^{<\omega}$. Else consider a $\mathbb{P}_{\alpha_{i+2}}$-name for $(T^{i+1}_{\bar y}, T^i_{\bar y}) \in \mathcal{T}^2$, where $\bar y = \bar x_G \restriction (C\cap \beta)$. Such a name exists in $M_{i+2}$ and $\beta \in a_i \subseteq M_{i+2}$. Thus $\leq_{\beta, i}\in M_{i+2}$ and by Lemma~\ref{lem:intermediategoodanalytic}, we have that for every $\bar y \in [\bar r_{i+2}] \restriction (C\cap \beta)$, $(T^{i+1}_{\bar y}, T^i_{\bar y}) \in \leq_{\beta,i}$, thus also for $\bar y = \bar x$. The rest follows from the fact that the statement that the intersection of any fusion sequence in $\mathbb{Q}_\beta$ is again in $\mathbb{Q}_\beta$ is $\mathbf{\Pi}^1_2$ and thus absolute. \end{proof} \begin{claim}\label{claim:closedset} For every $\gamma$, $\bigcap_{i \in \omega} (X_i \restriction \gamma) = (\bigcap_{i \in \omega} X_i) \restriction \gamma$.
\end{claim} \begin{proof} That $\bigcap_{i \in \omega} (X_i \restriction \gamma) \supseteq (\bigcap_{i \in \omega} X_i) \restriction \gamma$ is obvious. Let us show by induction on $\delta \in C$ that for any $\delta' \in C \cap \delta$, $$\bigcap_{i \in \omega} (X_i \restriction \delta') \subseteq \left(\bigcap_{i \in \omega} (X_i \restriction \delta)\right) \restriction \delta'.$$ The base case $\delta = \min C$ is clear. For the limit case, let $\delta' \in C \cap \delta$ be given and let $(\delta_n)_{n \in \omega} $ be increasing cofinal in $(C \cap \delta) \setminus \delta'$. Whenever $\bar y \in \bigcap_{i \in \omega} (X_i \restriction \delta')$, by the inductive hypothesis, there is $\bar y_0 \in \bigcap_{i \in \omega} (X_i \restriction \delta_0)$ extending $\bar y$. In particular, there is $\bar z_0 \in X_0 \restriction \delta$ extending $\bar y_0$. Next, there is $\bar y_1 \in \bigcap_{i \in \omega} (X_i \restriction \delta_1)$ extending $\bar y_0$ and $\bar z_1 \in X_1 \restriction \delta$ extending $\bar y_1$. Continuing like this, we find a sequence $\langle\bar z_n : n \in \omega \rangle$ that converges to $\bar z \in (A^\omega)^{C \cap \delta}$. Since $\langle\bar z_n : n \geq m \rangle$ is contained within the closed set $X_m \restriction \delta$ for each $m \in \omega$, $\bar z \in \bigcap_{i \in \omega} (X_i \restriction \delta)$. Since $\bar z \restriction \delta' = \bar y$, this proves the limit case. Now assume $\delta = \xi +1$. Let $\delta' < \delta$ be given and let $\bar y \in \bigcap_{i \in \omega} (X_i \restriction \delta')$. Then there is $\bar z \in \bigcap_{i \in \omega} (X_i \restriction \xi)$ extending $\bar y$ by the inductive hypothesis. Since $\bigcap_{i \in \omega} T^i_{\bar z} \in \mathbb{Q}_\xi$, there is $u \in [\bigcap_{i \in \omega}T^i_{\bar z}]$ with $\bar z {}^\frown u \in \bigcap_{i \in \omega} (X_i \restriction \delta)$. To finish the proof, apply the induction step one more time to $\delta = \sup \{ \xi +1 : \xi \in C\}$ and $\delta' = \gamma$. \end{proof} Claim~\ref{claim:closedset} shows that (4) holds as $[\bar q] \restriction \beta = (\bigcap_{i \in \omega} X_i) \restriction \beta \times (A^\omega)^{\beta \setminus C} = \bigcap_{i \in \omega} (X_i \restriction \beta) \times (A^\omega)^{\beta \setminus C}$ and $\bigcap_{i \in \omega} (X_i \restriction \beta)$ is closed, being an intersection of closed sets. \begin{claim} $T_{\bar x} = \bigcap_{i \in \omega} T^i_{\bar x}$. \end{claim} \begin{proof} That $T_{\bar x} \subseteq \bigcap_{i \in \omega} T^i_{\bar x}$ is clear from the definitions. Thus let $s \in \bigcap_{i \in \omega} T^i_{\bar x}$. As $\bigcap_{i \in \omega} T^i_{\bar x} \in \mathbb{Q}_\beta$, there is $y \in [\bigcap_{i \in \omega} T^i_{\bar x}]$ with $s \subseteq y$. In particular, $\bar x {}^\frown y \in \bigcap_{i \in \omega} (X_i \restriction (\beta +1)) = (\bigcap_{i \in \omega} X_i )\restriction (\beta +1)$. So there is $\bar z \in \bigcap_{i \in \omega} X_i$ with $\bar z \restriction \beta = \bar x$ and $z(\beta) = y \supseteq s$. Thus $s \in T_{\bar x}$.\end{proof} Now (2) follows easily. For the continuity of $\bar x \mapsto T_{\bar x}$, let $t \in A^{<\omega}$ be arbitrary and $j$ large enough so that $\vert t \vert \leq j$ and $\beta \in a_j$.
Then $\{ \bar x \in [\bar q] \restriction \beta : t \notin T_{\bar x} \} = \{ \bar x \in [\bar q] \restriction \beta : t \notin T^j_{\bar x} \}$ and $\{ \bar x \in [\bar q] \restriction \beta : t \in T_{\bar x} \} = \{ \bar x \in [\bar q] \restriction \beta : t \in T^j_{\bar x} \}$, which are both open.\footnote{Here we use clause (1) in the definition of Axiom A.} Thus we have shown (3).\end{proof} \begin{lemma} \label{lem:gettingacondition} Let $C \subseteq \alpha$ be countable and $X \subseteq (A^{\omega})^C$ be a closed set so that for every $\beta \in C$ and $\bar x \in X \restriction \beta$, $$\{s \in A^{<\omega} : \exists \bar z \in X (\bar z \restriction \beta = \bar x \wedge s \subseteq z(\beta)) \} \in \mathbb{Q}_\beta.$$ Let $M \ni X$ be countable elementary. Then there is a good master condition $\bar r$ over $M$ so that $[\bar r] \restriction C \subseteq X$. \end{lemma} \begin{proof} It is easy to construct $\bar q \in M$ recursively so that $\bar q \Vdash \bar x_G \restriction C \in X$. By Lemma~\ref{lem:nicemaster}, we can extend $\bar q$ to a good master condition $\bar r$ over $M$. The unique continuous function $f \colon [\bar r] \to (A^{\omega})^C$ so that for generic $G$, $f(\bar x_G) = \bar x_G \restriction C$, is so that $f(\bar x) = \bar x \restriction C$ for every $\bar x \in [\bar r]$. Since $f$ maps to $X$, $[\bar r] \restriction C \subseteq X$. \end{proof} \section{The Main Lemma} \subsection{Mutual Cohen Genericity} Let $X$ be a Polish space and $M$ a model of set theory with $X \in M$. Recall that $x \in X$ is \emph{Cohen generic in} $X$ over $M$ if for any open dense $O \subseteq X$ such that $O \in M$, $x \in O$. Let $x_0, \dots, x_{n-1} \in X$. Then we say that $x_0, \dots, x_{n-1}$ are ($X$-)\emph{mutually Cohen generic (mCg) over $M$} if $(y_0, \dots, y_{K-1})$ is a Cohen generic real over $M$ in the Polish space $X^K$, where $\langle y_i : i < K \rangle$ is some, equivalently any, enumeration of $\{ x_0, \dots, x_{n-1} \}$. In particular, we allow for repetition in the definition of mutual genericity. \begin{definition}\label{def:mcgfinite} Let $\langle X_l : l < k \rangle \in M$ be Polish spaces. Then we say that $\bar x_0, \dots, \bar x_{n-1} \in \prod_{l<k} X_l$ are $\langle X_l : l < k \rangle$-\emph{mutually Cohen generic (mCg) over $M$} if their components are mutually added Cohen generics, i.e. $$(y_0^0, \dots, y_0^{K_0-1}, \dots,y_{k-1}^0, \dots, y_{k-1}^{K_{k-1}-1}) \text{ is Cohen generic in } \prod_{l<k }X_l^{K_l} \text{ over } M,$$ where $\langle y_l^{i} : i < K_l \rangle$ is some, equivalently any, enumeration of $\{ x_i(l) : i<n \}$ for each $l< k$. \end{definition} \begin{definition} Let $X$ be a Polish space with a fixed countable basis $\mathcal{B}$. Then we define the forcing poset $\mathbb{C}(2^\omega, X)$ consisting of functions $h \colon 2^{\leq n} \to \mathcal{B} \setminus \{\emptyset\}$ for some $n \in \omega$ such that $\forall \sigma \subseteq \tau \in 2^{\leq n} (h(\sigma) \supseteq h(\tau))$. The poset is ordered by function extension. \end{definition} The poset $\mathbb{C}(2^\omega, X)$ adds generically a continuous function $\chi \colon 2^\omega \to X$, given by $\chi(x) = y$ where $\bigcap_{n \in \omega} h(x \restriction n ) = \{ y\}$ and $h = \bigcup G$ for $G$ the generic filter. This forcing will be used in this section several times to obtain ZFC results.
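Let us briefly indicate one routine way to see that the generic object is indeed a well-defined continuous function; this is only a sketch, carried out under the assumption that a complete metric compatible with the topology of $X$ has been fixed. For every $k \in \omega$, the conditions $h$ with domain $2^{\leq n}$ for some $n \geq 1$, such that every $h(\sigma)$ with $\vert \sigma \vert = n$ has diameter less than $2^{-k}$ and satisfies $\overline{h(\sigma)} \subseteq h(\sigma \restriction (n-1))$, form a dense subset of $\mathbb{C}(2^\omega, X)$. Hence, for $h = \bigcup G$ with $G$ generic and any $x \in 2^\omega$, the sequence $\langle h(x \restriction n) : n \in \omega \rangle$ contains a decreasing sequence of sets whose closures are nested and whose diameters tend to $0$, so that
$$\bigcap_{n \in \omega} h(x \restriction n) = \{ \chi(x) \}$$
is indeed a singleton by completeness. Moreover, $\chi''[\sigma] \subseteq h(\sigma)$ for every $\sigma$ in the domain of $h$, so the same density argument also yields the continuity of $\chi$.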
Note for instance that if $G$ is generic over $M$, then for any $x \in 2^\omega$, $\chi(x)$ is Cohen generic in $X$ over $M$, and moreover, for any $x_0, \dots, x_{n-1} \in 2^\omega$, $\chi(x_0), \dots, \chi(x_{n-1})$ are $X$-mutually Cohen generic over $M$. Sometimes we will use $\mathbb{C}(2^\omega, X)$ to force over a countable model a continuous function from a space homeomorphic to $2^\omega$, such as $(2^\omega)^\alpha$ for $\alpha < \omega_1$. \begin{lemma} \label{lem:forcingcont} Let $M$ be a model of set theory, $K, n \in \omega$, $X_j \in M$ a Polish space for every $j < n$ and $G$ a $\prod_{j < n} \mathbb{C}(2^\omega, X_j)$-generic over $M$ yielding $\chi_j \colon 2^\omega \to X_j$ for every $j <n$. Then, whenever $\bar x$ is Cohen generic in $(2^\omega)^K$ over $M[G]$ and $u_0, \dots, u_{n-1} \in 2^\omega \cap M[\bar x]$ are pairwise distinct, $$\bar x^\frown \langle \chi_j(u_i) : i < n, j < n \rangle $$ is Cohen generic in $$(2^\omega)^K \times \prod_{j < n} (X_j)^n $$ over $M$. \end{lemma} \begin{proof} Since $\bar x$ is generic over $M$ it suffices to show that $\langle \chi_j(u_i) : i < n, j < n \rangle$ is generic over $M[\bar x]$. Let $\dot O \in M$ be a $(2^{<\omega})^K$-name for a dense open subset of $\prod_{j < n} (X_j)^n$ and $\dot u_i$ a $(2^{<\omega})^K$-name for $u_i$, $i < n$, such that the trivial condition forces that the $\dot u_i$ are pairwise distinct. Then consider the set \begin{multline*} D := \{ (\bar h,\bar s) \in \prod_{i<n} \mathbb{C}(2^\omega, X_i) \times (2^{<\omega})^K : \exists t_0, \dots, t_{n-1}\in 2^{<\omega} \\ ( \forall i < n(\bar s \Vdash t_i \subseteq \dot u_i) \wedge \bar s \Vdash \prod_{i,j<n} h_{j}(t_i) \subseteq \dot O) \}. \end{multline*} We claim that this set is dense in $\prod_{i<n} \mathbb{C}(2^\omega, X_i) \times (2^{<\omega})^K$, which finishes the proof. Namely, let $(\bar h, \bar s)$ be arbitrary, wlog $\dom h_{j} = 2^{\leq n_0}$ for every $j < n$. Then we can extend $\bar s$ to $\bar s'$ so that there are incompatible $t_i$, with $\vert t_i \vert \geq n_0$, so that $\bar s' \Vdash t_i \subseteq \dot u_i$ and there are $U_{i,j} \subseteq h_j(t_i \restriction n_0)$ basic open subsets of $X_j$ in $M$ for every $i < n$ and $j<n$, so that $\bar s' \Vdash \prod_{i,j < n} U_{i,j} \subseteq \dot O$. Then we can extend $\bar h$ to $\bar h'$ so that $h'_j(t_i) = U_{i,j}$ for every $i,j < n$. We see that $(\bar h', \bar s') \in D$. \end{proof} \subsection{Finite products} This subsection can be skipped entirely if one is only interested in the results for the countable support iteration. The lemma that we will prove below is relevant to finite products instead (see Theorem~\ref{thm:finiteprod}). It is very similar to Main Lemma~\ref{lem:mainlemmainf} in the next subsection, but the proofs are completely different. Main Lemma~\ref{thm:mainlemma} is based on a forcing-theoretic proof of the Halpern--Läuchli theorem that is commonly attributed to L. Harrington (see e.g. \cite[Lemma~4.2.4]{Golshani2015} as a reference). On the other hand, Main Lemma~\ref{lem:mainlemmainf} uses an inductive argument. \begin{mainlemma} \label{thm:mainlemma} Let $k \in \omega$ and $E \subseteq [(2^\omega)^k]^{<\omega}$ be an analytic hypergraph on $(2^\omega)^k$.
Then there is a countable model $M$ so that either \begin{enumerate} \item for any $n \in \omega$ and $\bar x_0,\dots, \bar x_{n-1} \in (2^\omega)^k$ that are $\langle 2^\omega : l < k\rangle$-mCg over $M$, $$\{\bar x_0,\dots, \bar x_{n-1}\} \text{ is } E \text{-independent}$$ \end{enumerate} or for some $N \in \omega$, \begin{enumerate} \setcounter{enumi}{1} \item there are $\phi_0, \dots, \phi_{N-1} \colon (2^\omega)^k \to (2^\omega)^k$ continuous, $\bar s \in \bigotimes_{l<k} 2^{<\omega}$ so that for any $n \in \omega$ and $\bar x_0,\dots, \bar x_{n-1} \in (2^\omega)^k \cap [\bar s]$, that are $\langle 2^\omega : l < k\rangle$-mCg over $M$, $$\{\phi_j(\bar x_i) : j < N, i<n\} \text{ is } E \text{-independent but } \{ \bar x_0\} \cup \{ \phi_j(\bar x_0) : j < N \} \in E.$$ \end{enumerate} \end{mainlemma} \begin{remark} Note that $N = 0$ is possible in the second option. For example whenever $[(2^\omega)^k]^1 \subseteq E$, then $\emptyset$ is the only $E$-independent set. In this case the last line simplifies to ``$\{ \bar x_0 \} \in E$". \end{remark} \begin{proof} Let $\kappa = \beth_{2k -1}(\aleph_0)^+$. Recall that by Erdős-Rado (see \cite[Thm 9.6]{Jech2013}), for any ${c \colon [\kappa]^{2k} \to H(\omega)}$, there is $B \in [\kappa]^{\aleph_1}$ which is monochromatic for $c$, i.e. $c \restriction [B]^{2k}$ is constant. Let $\mathbb{Q}$ be the forcing adding $\kappa$ many Cohen reals $$\langle z_{(l,\alpha)} : \alpha < \kappa\rangle \text{ in } 2^{\omega} \text{ for each } l<k$$ with finite conditions, i.e. $\mathbb{Q} = \prod_\kappa^{<\omega} (2^{<\omega})^k$. We will use the notational convention that elements of $[\kappa]^d$, for $d \in \omega$, are sequences $\bar \alpha = (\alpha_0, \dots, \alpha_{d-1})$ ordered increasingly. For any $\bar \alpha \in [\kappa]^k$ we define $\bar z_{\bar \alpha} := (z_{(0,\alpha_0)}, \dots, z_{(k-1,\alpha_{k-1})}) \in (2^\omega)^k$. Let $\dot{\mathcal{A}}$ be a $\mathbb{Q}$-name for a maximal $E$-independent subset of $\{ \bar z_{\bar \alpha} : \bar \alpha \in [\kappa]^k \}$, reinterpreting $E$ in the extension by $\mathbb{Q}$. For any $\bar \alpha \in [\kappa]^k$, we fix $p_{\bar\alpha} \in \mathbb{Q}$ so that either \begin{equation} p_{\bar \alpha} = \mathbbm{1} \wedge p_{\bar \alpha} \Vdash \bar z_{\bar \alpha} \in \dot{\mathcal{A}} \tag{1} \end{equation} or \begin{equation} p_{\bar\alpha} \Vdash \bar z_{\bar \alpha} \not\in \dot{\mathcal{A}}.\tag{2} \end{equation} In case (2) we additionally fix $N_{\bar\alpha} < \omega$ and $(\bar \beta^i)_{i < N_{\bar\alpha}} = (\bar \beta^i(\bar \alpha))_{i < N_{\bar\alpha}}$, and we assume that $$p_{\bar\alpha} \Vdash \{ \bar z_{\bar \beta^i} : i < N_{\bar\alpha} \} \subseteq \dot{\mathcal{A}} \wedge \{ \bar z_{\bar \alpha} \} \cup \{ \bar z_{\bar \beta^i} : i < N_{\bar\alpha} \} \in E .$$ We also define $H_l(\bar \alpha) = \{ \beta^i_l : i < N_{\bar\alpha} \} \cup \{ \alpha_l \} \in [\kappa]^{<\omega}$ for each $l < k$. Now for $\bar \alpha \in [\kappa]^{2k}$ we collect the following information: \begin{enumerate}[label=(\roman*)] \item whether $p_{\bar \alpha \restriction k } = p_{\alpha_0,\dots, \alpha_{k-1}}\Vdash \bar z_{\bar \alpha \restriction k} \in \dot{\mathcal{A}}$ or not, \item $\bar s = (p_{\bar \alpha \restriction k }(0,\alpha_0), \dots, p_{\bar \alpha \restriction k }(k-1,\alpha_{k-1})) \in (2^{<\omega})^k$, \item the relative position of the $p_{\bar \gamma}$ for $\bar \gamma \in \Gamma := \prod_{l<k} \{\alpha_{2l}, \alpha_{2l+1} \}$ to each other. 
More precisely, consider $ \bigcup_{\bar \gamma \in\Gamma } \dom p_{\bar \gamma} = \{0\} \times d_0 \cup \dots \cup \{k-1\} \times d_{k-1}$ where $d_0,\dots, d_{k-1} \subseteq \kappa$. Let $M_l = \vert d_l \vert$ for $l<k$ and for each $\bar j \in \prod_{l<k} \{2l, 2l+1 \}$, collect a function $r_{\bar j}$ with $\dom r_{\bar j} \subseteq \{0\} \times M_0 \cup \dots \cup \{k-1\} \times M_{k-1}$ that is a copy of $p_{\bar \gamma}$, where $\bar \gamma = (\alpha_{j_0}, \dots, \alpha_{j_{k-1}})$, $\bar j = (j_0, \dots, j_{k-1})$. Namely, $r_{\bar j}(l,m) = p_{\bar \gamma}(l,\beta)$, whenever $\beta$ is the $m$'th element of $d_l$. \end{enumerate} In case $p_{\bar \alpha \restriction k }\Vdash \bar z_{\bar \alpha \restriction k} \notin \dot{\mathcal{A}}$ we additionally remember \begin{enumerate}[label=(\roman*)] \setcounter{enumi}{3} \item $N = N_{\bar \alpha \restriction k}$, \item $N_l = \vert H_l(\bar \alpha \restriction k) \vert$, for each $l < k$, \item $\bar b^i \in \prod_{l <k} N_l$ so that $\beta^i_l$ is the $b^i_l$'th element of $H_l(\bar \alpha \restriction k)$, for each $i<N$, \item $\bar a \in \prod_{l <k} N_l$ so that $\alpha_l$ is the $a_l$'th member of $H_l(\bar \alpha \restriction k)$, \item the partial function $r$ with domain a subset of $\bigcup_{l <k} \{l\} \times N_l$, so that $r(l,m)= t \in 2^{<\omega}$ iff $p_{\bar \alpha \restriction k}(l,\beta) = t$ where $\beta$ is the $m$'th element of $H_l(\bar \alpha \restriction k)$. \end{enumerate} And finally we also remember \begin{enumerate}[label=(\roman*)] \setcounter{enumi}{8} \item for each pair $\bar\gamma, \bar\delta \in \prod_{l<k} \{\alpha_{2l}, \alpha_{2l +1} \}$, where $\bar\gamma = (\alpha_{j_l})_{l < k}$ and $\bar\delta = (\alpha_{j'_l})_{l < k}$, finite partial injections $e_{l, \bar j, \bar j'} \colon N_l \to N_l$ so that $e_{l, \bar j, \bar j'}(m) = m'$ iff the $m$'th element of $H_l(\bar\gamma)$ equals the $m'$'th element of $H_l(\bar\delta)$. \end{enumerate} This information is finite and defines a coloring $c \colon [\kappa]^{2k} \to H(\omega)$. Let $B \in [\kappa]^{\omega_1}$ be monochromatic for $c$. Let $M \preccurlyeq H(\theta)$ be countable for $\theta$ large enough so that $\kappa, c, B, \langle p_{\bar \alpha} : \bar \alpha \in [\kappa]^k \rangle, E, \dot{\mathcal{A}} \in M$. \begin{claim} If for every $\bar \alpha \in [B]^k$, $p_{\bar{\alpha}} \Vdash \bar z_{\bar \alpha} \in \dot{\mathcal{A}}$, then (1) of the main lemma holds true. \end{claim} \begin{proof} Let $\bar x_0, \dots, \bar x_{n-1}$ be arbitrary mCg over $M$. Say $\{x_i(l) : i < n\}$ is enumerated by $\langle y^i_l : i < K_l \rangle$ for every $l<k$. Now find $$\alpha^0_0 < \dots < \alpha^{K_0 -1}_0 < \dots < \alpha^{0}_{k-1} < \dots < \alpha^{K_{k-1} -1}_{k-1}$$ in $M \cap B$. Then there is a $\mathbb{Q}$-generic $G$ over $M$ so that for any $\bar j \in \prod_{l < k} K_l$, $$\bar z_{\bar \beta}[G] = (y^{j_0}_0, \dots, y^{j_{k-1}}_{k-1}),$$ where $\bar \beta = (\alpha^{j_0}_0, \dots, \alpha^{j_{k-1}}_{k-1})$. In particular, for each $i<n$, there is $\bar \beta_i \in [B \cap M]^k$ so that $\bar z_{\bar \beta_i}[G]= \bar x_{i}$. Since $p_{\bar \beta_i} =\mathbbm{1} \in G$ for every $\bar \beta_i$ we have that $$M[G] \models \bar x_i \in \dot{\mathcal{A}}[G]$$ for every $i< n$ and in particular $$M[G] \models \{ \bar x_i : i < n \} \text{ is } E \text{-independent}.$$ By absoluteness $\{ \bar x_i : i < n \}$ is indeed $E$-independent. 
\end{proof} Assume from now on that $p_{\bar \alpha}\Vdash \bar z_{\bar \alpha} \notin \dot{\mathcal{A}}$ for every $\bar \alpha \in [B]^k$. Then we may fix $\bar s$, $N$, $(N_l)_{l<k}$, $\bar b^i$ for $i<N$, $\bar a$, $r$ and $e_{l,\bar j, \bar j'}$ for all $l<k$ and $\bar j,\bar j' \in \prod_{l'<k} \{2l', 2l'+1 \}$ corresponding to the coloring on $[B]^{2k}$. \begin{claim} For any $\bar \alpha \in [B]^{2k}$ and $\bar\gamma, \bar\delta \in \prod_{l<k} \{\alpha_{2l}, \alpha_{2l +1} \}$, $$p_{\bar\gamma} \restriction (\dom p_{\bar\gamma} \cap \dom p_{\bar\delta}) = p_{\bar\delta} \restriction (\dom p_{\bar\gamma} \cap \dom p_{\bar\delta}).$$ \end{claim} \begin{proof} Suppose not. By homogeneity we find a counterexample $\bar \alpha$, $\bar\gamma$, $\bar\delta$ where $B \cap (\alpha_{2l'},\alpha_{2l' +1})$ is non-empty for every $l' <k$. So let $(l,\beta) \in \dom p_{\bar\gamma} \cap \dom p_{\bar\delta}$ be such that $p_{\bar\gamma}(l,\beta) = u \neq v = p_{\bar\delta}(l,\beta)$. Let $\bar \rho \in [B]^k$ be such that for every $l' < k$, $$\begin{cases} \rho_{l'} \in (\gamma_{l'}, \delta_{l'} ) & \text{if } \gamma_{l'} < \delta_{l'} \\ \rho_{l'} \in (\delta_{l'}, \gamma_{l'} ) & \text{if } \delta_{l'} < \gamma_{l'} \\ \rho_{l'} = \gamma_{l'} & \text{if } \gamma_{l'} = \delta_{l'}. \end{cases}$$ Now note that $\bar\rho$'s relative position to $\bar\gamma$ is the same as that of $\bar\delta$ to $\bar\gamma$. More precisely, let $\bar j, \bar j' \in \prod_{l'<k} \{2l', 2l'+1 \}$ so that $\bar \gamma = (\alpha_{j_0}, \dots, \alpha_{j_{k-1}})$, $\bar \delta = (\alpha_{j'_0}, \dots, \alpha_{j'_{k-1}})$. Then there is $\bar \beta \in [B]^{2k}$ so that $\bar \gamma = (\beta_{j_0}, \dots, \beta_{j_{k-1}})$ and $\bar \rho = (\beta_{j'_0}, \dots, \beta_{j'_{k-1}})$. Thus by homogeneity of $[B]^{2k}$ via $c$, $p_{\bar\rho}(l,\beta) = v$. Similarly $\bar\delta$ is in the same position relative to $\bar\rho$ as to $\bar\gamma$. Thus also $p_{\bar\rho}(l,\beta) = u$ and we find that $v = u$, a contradiction. \end{proof} \begin{claim} For any $l<k$ and $\bar j,\bar j' \in \prod_{l'<k} \{2l', 2l'+1 \}$, $e_{l,\bar j, \bar j'}(m) = m$ for every $m \in \dom e_{l,\bar j, \bar j'} $. \end{claim} \begin{proof} Let $\alpha_0 < \dots < \alpha_{2k} \in B$ so that $(\alpha_{2l'},\alpha_{2l'+1}) \cap B \neq \emptyset$ for every $l' <k$. Consider $\bar \gamma = (\alpha_{j_{l'}})_{l' < k}$, $\bar \delta = (\alpha_{j'_{l'}})_{l' < k}$ and again we find $\bar \rho \in [B]^k$ so that $\rho_{l'}$ is between (possibly equal to) $\alpha_{j_{l'}}$ and $\alpha_{j'_{l'}}$. If $e_{l,\bar j, \bar j'}(m) = m'$ and $\beta$ is the $m$'th element of $H_l(\bar \gamma)$, then $\beta$ is the $m'$'th element of $H_l(\bar \delta)$ as well as of $H_l(\bar\rho)$. But also $\beta$ is the $m$'th element of $H_l(\bar\rho)$, thus $m =m'$. \end{proof} Note that by the above claim $e_{l,\bar j, \bar j'} = (e_{l,\bar j', \bar j})^{-1} = e_{l,\bar j', \bar j}$ and the essential information given by $e_{l,\bar j, \bar j'}$ is its domain. Next let us introduce some notation. Whenever $x,y \in 2^\omega$, we write $x < y$ to say that $x$ is lexicographically below $y$, i.e. $x(n) < y(n)$, where $n = \min\{m \in \omega : x(m) \neq y(m) \}$.
For any $g \in \{-1,0,1 \}^{k}$ we naturally define a relation $\tilde R_g$ between $k$-length sequences $\bar \nu$ and $\bar \mu$, either of elements of $2^\omega$, or of ordinals $<\kappa$, as follows: $$\bar \nu \tilde R_g \bar \mu \leftrightarrow \forall l<k \begin{cases} \nu_l < \mu_l &\text{ if } g(l)=-1\\ \nu_l = \mu_l &\text{ if } g(l)=0\\ \nu_l > \mu_l &\text{ if } g(l)=1. \end{cases} $$ \noindent Further we write $\bar \nu R_g \bar \mu$ iff $\bar \nu \tilde R_g \bar \mu$ or $\bar \mu \tilde R_g \bar \nu$. Enumerate $\{R_g : g \in \{-1,0,1 \}^{k} \}$ without repetition as $\langle R_i : i < K \rangle$ (it is easy to see that $K = \frac{3^k +1}{2}$). Note that for any $\bar \nu, \bar \mu$ there is a unique $i <K$ so that $\bar \nu R_i \bar \mu$. Now for each $l < k$ and $i < K$, we let $$I_{l,i} := \dom e_{l,\bar j, \bar j'} \subseteq N_l,$$ where $\bar j R_i \bar j'$. By homogeneity of $[B]^{2k}$ and the observation that $e_{l,\bar j, \bar j'} = e_{l,\bar j', \bar j}$, we see that $I_{l,i}$ does not depend on the particular choice of $\bar j, \bar j'$ such that $\bar j R_i \bar j'$. For each $l< k$ and $m < N_l$, we define a relation $E_{l,m}$ on $(2^\omega)^k$ as follows: $$ \bar x E_{l,m} \bar y \leftrightarrow m \in I_{l,i} \text{ where } i \text{ is such that } \bar x R_i \bar y.$$ \begin{claim} $E_{l,m}$ is an equivalence relation. \end{claim} \begin{proof} The reflexivity and symmetry of $E_{l,m}$ are obvious. Assume that $\bar x_0 E_{l,m} \bar x_1$ and $\bar x_1 E_{l,m} \bar x_2$, and say $\bar x_0 R_{i_0} \bar x_1$, $\bar x_1 R_{i_1} \bar x_2$ and $\bar x_0 R_{i_2} \bar x_2$. Find $\bar \gamma^0, \bar \gamma^1, \bar \gamma^2 \in [B]^k$ so that $$\{\gamma^i_0 : i <3 \} < \dots < \{\gamma^i_{k-1} : i <3 \} $$ and $$\bar \gamma^0 R_{i_0} \bar \gamma^1, \bar \gamma^1 R_{i_1} \bar \gamma^2, \bar \gamma^0 R_{i_2} \bar \gamma^2.$$ If $\beta$ is the $m$'th element of $H_l(\bar \gamma^0)$, then $\beta$ is also the $m$'th element of $H_l(\bar \gamma^1)$, since we can find an appropriate $\bar \alpha \in [B]^{2k}$ and $\bar j$, $\bar j'$ so that $\bar \gamma^0 = (\alpha_{j_l})_{l<k}$ and $\bar \gamma^1 = (\alpha_{j'_l})_{l<k}$, $\bar j R_{i_0} \bar j'$ and we have that $m \in I_{l,i_0}$. Similarly $\beta$ is the $m$'th element of $H_l(\bar \gamma^2)$. But now we find again $\bar \alpha \in [B]^{2k}$ and $\bar j$, $\bar j'$ so that $\bar \gamma^0 = (\alpha_{j_l})_{l<k}$ and $\bar \gamma^2 = (\alpha_{j'_l})_{l<k}$. Thus $m \in I_{l,i_2}$, as $e_{l,\bar j, \bar j'}(m) = m$ and $\bar x_0 E_{l,m} \bar x_2$. \end{proof} \begin{claim} $E_{l,m}$ is smooth as witnessed by a continuous function, i.e. there is a continuous map $\varphi_{l,m} \colon (2^{\omega})^k \to 2^{\omega}$ so that $\bar x E_{l,m} \bar y$ iff $\varphi_{l,m}(\bar x) = \varphi_{l,m}(\bar y)$. \end{claim} \begin{proof} We will check the following: \begin{enumerate}[label=(\alph*)] \item For every open $O \subseteq (2^\omega)^k$, the $E_{l,m}$ saturation of $O$ is Borel, \item every $E_{l,m}$ equivalence class is $G_\delta$. \end{enumerate} By a theorem of Srivastava (\cite[Thm 4.1.]{Srivastava1979}), (a) and (b) imply that $E_{l,m}$ is smooth, i.e. we can find $\varphi_{l,m}$ Borel. \begin{enumerate}[label=(\alph*)] \item The $E_{l,m}$ saturation of $O$ is the set $\{\bar x : \exists \bar y \in O (\bar x E_{l,m} \bar y) \}$. It suffices to check for each $g \in \{-1,0,1\}^k$ that the set $X = \{\bar x : \exists \bar y \in O (\bar x \tilde R_g \bar y) \}$ is Borel.
Let $\mathcal{S} = \{\bar \sigma \in (2^{<\omega})^k : [\sigma_0] \times \dots \times [\sigma_{k-1}] \subseteq O \}$. Consider $$\varphi(\bar x) :\leftrightarrow \exists \bar \sigma \in \mathcal{S} \forall l' < k \begin{cases} x_{l'} <_{\lex} {\sigma_{l'}}^\frown 0^\omega &\text{ if } g(l') = -1 \\ x_{l'} \in [\sigma_{l'}] &\text{ if } g(l') = 0 \\ {\sigma_{l'}}^\frown 1^\omega <_{\lex} x_{l'} &\text{ if } g(l') = 1\end{cases}.$$ If $\varphi(\bar x)$ holds true then let $\bar \sigma$ witness this. We then see that there is $\bar y \in [\sigma_0] \times \dots \times [\sigma_{k-1}]$ with $\bar x \tilde R_g \bar y$. On the other hand, if $\bar y \in O$ is such that $\bar x \tilde R_g \bar y$, then we find $\bar \sigma \in \mathcal{S}$ defining a neighborhood of $\bar y$ witnessing $\varphi(\bar x)$. Thus $X$ is defined by $\varphi$ and is thus Borel. \item Since finite unions of $G_\delta$'s are $G_\delta$, it suffices to check that $\{ \bar x : \bar x \tilde R_g \bar y \}$ is $G_\delta$ for every $\bar y$ and $g \in \{-1,0,1\}^k$. But this is obvious from the definition. \end{enumerate} Now note that given $\varphi_{l,m}$ Borel, we can find perfect $X_0,\dots, X_{k-1} \subseteq 2^{\omega}$ so that $\varphi_{l,m}$ is continuous on $X_0 \times \dots \times X_{k-1}$ ($\varphi_{l,m}$ is continuous on a dense $G_\delta$). But there is a $<_{\lex}$-preserving homeomorphism from $X_l$ to $2^{\omega}$ for each $l<k$ so we may simply assume $X_l = 2^\omega$. \end{proof} Fix such $\varphi_{l,m}$ for every $l<k$, $m<N_l$, so that $\varphi_{l,a_l}(\bar x) = x_l$ (note that $\bar x E_{l,a_l} \bar y$ iff $x_l = y_l$). Now let $M_0$ be countable elementary, containing all relevant information and such that $\varphi_{l,m} \in M_0$ for every $l < k$, $m < N_l$. Let $\chi_{l,m} \colon 2^\omega \to [r(l,m)]$ for $l < k$ and $m \neq a_l$ be generic continuous functions over $M_0$, i.e. the sequence $(\chi_{l,m})_{l < k, m \in N_l \setminus \{a_l\} }$ is $\prod_{l < k, m \in N_l \setminus \{a_l\}} \mathbb{C}(2^\omega, [r(l,m)])$ generic over $M_0$. Let us denote by $M$ the generic extension of $M_0$. Also let $\chi_{l,m}$ for $m = a_l$ be the identity. Finally we set $$\phi_i(\bar x) = ((\chi_{l,b^i_l}\circ \varphi_{l,b^i_l})(\bar x))_{l < k}$$ for each $i < N$. \begin{claim} (2) of the main lemma holds true with $M$, $\bar s$ and $\phi_i$, $i <N$, that we just defined. \end{claim} \begin{proof} Let $\bar x_0, \dots, \bar x_{n-1} \in [\bar s]$ be $\langle 2^\omega : l < k\rangle$-mCg over $M$. Let us write $\{ \bar x_i(l) : i <n \} = \{ y_l^i : i < K_l \}$ for every $l < k$, where $y_l^0 <_{\lex} \dots <_{\lex} y_l^{K_l -1} $. Now find $$\alpha^0_0 < \dots < \alpha_0^{K_0 -1} < \dots < \alpha_{k-1}^0 < \dots < \alpha_{k-1}^{K_{k-1}-1}$$ in $B \cap M$. For every $\bar j \in \prod_{l < k} K_l$, define $\bar y_{\bar j} := (y_0^{j(0)}, \dots, y_{k-1}^{j(k-1)})$ and $\bar \alpha_{\bar j} := (\alpha_0^{j(0)}, \dots, \alpha_{k-1}^{j(k-1)})$. Then, for each $i < n$, we have $\bar j_i \in \prod_{l < k} K_l$ so that $\bar x_i = \bar y_{\bar j_i}$. For each $i < n$ define the function $g_i \colon \bigcup_{l < k} \{ l\} \times H_l(\bar \alpha_{\bar j_i}) \to 2^\omega$, setting $$g_i(l,\beta) = \chi_{l,m}(\varphi_{l,m}(\bar x_{i})), $$ whenever $\beta$ is the $m$'th element of $H_l(\bar \alpha_{\bar j_i})$. Now we have that the $g_i$ agree on their common domain. Namely, let $i_0, i_1 < n$ and $(l,\beta) \in \dom g_{i_0} \cap \dom g_{i_1}$.
Then if we set $i$ to be so that $\bar x_{i_0} R_i \bar x_{i_1}$, we have that $m \in I_{l,i}$, where $\beta$ is the $m$'th element of $H_{l}(\bar \alpha_{\bar j_{i_0}})$ and of $H_{l}(\bar \alpha_{\bar j_{i_1}})$. In particular $\bar x_{i_0} E_{l,m} \bar x_{i_1}$ and $\varphi_{l,m}(\bar x_{i_0}) = \varphi_{l,m}(\bar x_{i_1})$ and thus $$g_{i_0}(l,\beta) = \chi_{l,m}(\varphi_{l,m}(\bar x_{i_0})) = \chi_{l,m}(\varphi_{l,m}(\bar x_{i_1})) = g_{i_1}(l,\beta).$$ Let $g := \bigcup_{i < n} g_i$. Then we see by Lemma~\ref{lem:forcingcont} that $g$ is Cohen generic in $\prod_{(l,\beta) \in \dom g} 2^\omega$ over $M_0$. Namely, consider $K = \sum_{l < k} K_l$ and $(y^0_0, \dots, y_{k-1}^{K_{k-1}-1})$ as a $(2^{<\omega})^K$-generic over $M$. Then, if $\langle u_i : i < n'\rangle$ enumerates $\{ \varphi_{l,m}(\bar x_i) : i < n, l<k, m < N_l \}$, we have that every value of $g$ is contained in $\{ \chi_{l,m}(u_i) : i < n', l <k, m<N_l \}$. Also note that by construction for every $i < n$, $p_{\bar \alpha_{\bar j_i}} \restriction \dom g$ is in the generic filter defined by $g$. Since $\{ p_{\bar \alpha_{\bar j_i}} : i < n \}$ is centered we can extend the generic filter of $g$ to a $\mathbb{Q}$-generic $G$ over $M_0$ so that $p_{\bar \alpha_{\bar j_i}} \in G$ for every $i < n$. Now we have that $$ \bar z_{\bar \alpha_{\bar j_i}}[G] = \bar x_i \text{ and } \bar z_{\bar \beta^{j}(\bar \alpha_{\bar j_i})}[G] = \phi_j(\bar x_{i})$$ for every $i < n$ and $j < N$. Thus we get that $$M_0[G] \models \bigcup_{i < n} \{ \phi_j(\bar x_i) : j < N \} \subseteq \dot{\mathcal{A}}[G] \wedge \{ \bar x_0 \} \cup \{\phi_j(\bar x_0) : j < N \} \in E.$$ Again, by absoluteness, we get the required result. \end{proof} \end{proof} \subsection{Infinite products} \begin{definition}\label{def:mcginfinite} Let $\langle X_i : i < \alpha \rangle \in M$ be Polish spaces indexed by a countable ordinal $\alpha$. Then we say that $\bar x_0, \dots, \bar x_{n-1} \in \prod_{i<\alpha} X_i$ are $\langle X_i : i < \alpha \rangle$-\emph{mutually Cohen generic (mCg) over} $M$ if there are $\xi_0 = 0 < \dots< \xi_k = \alpha$ for some $k \in \omega$ so that $$\bar x_0, \dots, \bar x_{n-1} \text{ are } \langle Y_l : l < k\rangle\text{-mutually Cohen generic over } M,$$ where $Y_l = \prod_{i \in [\xi_l, \xi_{l+1})} X_i$ for every $l <k$ and we identify $\bar x_i$ with $$(\bar x_i \restriction [\xi_0, \xi_1), \dots, \bar x_i \restriction [\xi_{k-1}, \xi_k)) \in \prod_{l <k} Y_l ,$$ for every $i < n$. \end{definition} The identification of a sequence $\bar x$ with $(\bar x \restriction [\xi_0, \xi_1), \dots, \bar x \restriction [\xi_{k-1}, \xi_k))$ for a given $\langle \xi_l : l < k\rangle$ will be implicitly made throughout the rest of the paper in order to reduce the notational load. Note that whenever $\bar x_0, \dots, \bar x_{n-1}$ are $\langle X_i : i < \alpha\rangle$-mCg over $M$ and $\beta \leq \alpha$, then ${\bar x_0 \restriction \beta}, \dots, {\bar x_{n-1} \restriction \beta}$ are $\langle X_i : i < \beta\rangle$-mCg over $M$. Also note that Definition~\ref{def:mcginfinite} agrees with the notion of mCg for finite $\alpha$. \begin{definition} We say that $\bar x_0, \dots, \bar x_{n-1} \in \prod_{i<\alpha} X_i$ are \emph{strongly $\langle X_i : i < \alpha \rangle$-mCg} over $M$ if they are $\langle X_i : i < \alpha \rangle$-mCg over $M$ and for any $i,j < n$, if $\xi = \min\{ \beta < \alpha : x_i(\beta) \neq x_j(\beta) \}$ exists, then $x_i(\beta) \neq x_j(\beta)$ for all $\beta \geq \xi$.
\end{definition} \begin{mainlemma} \label{lem:mainlemmainf} Let $\alpha < \omega_1$ and $E \subseteq [(2^\omega)^\alpha]^{<\omega}$ be an analytic hypergraph. Then there is a countable model $M$, $\alpha +1 \subseteq M$, so that either \begin{enumerate} \item for any $n \in \omega$ and $\bar x_0,\dots, \bar x_{n-1} \in (2^\omega)^\alpha$ that are strongly $\langle 2^\omega : i < \alpha\rangle$-mCg over $M$, $$\{\bar x_0,\dots, \bar x_{n-1}\} \text{ is } E \text{-independent}$$ \end{enumerate} or for some $N\in \omega$, \begin{enumerate} \setcounter{enumi}{1} \item there are $\phi_0, \dots, \phi_{N-1} \colon (2^\omega)^\alpha \to (2^\omega)^\alpha$ continuous, $\bar s \in \bigotimes_{i<\alpha} 2^{<\omega}$ so that for any $n \in \omega$ and $\bar x_0,\dots, \bar x_{n-1} \in (2^\omega)^\alpha \cap [\bar s]$ that are strongly mCg over $M$, $$\{\phi_j(\bar x_i) : j < N, i<n\} \text{ is } E \text{-independent but } \{ \bar x_0\} \cup \{ \phi_j(\bar x_0) : j < N \} \in E.$$ \end{enumerate} \end{mainlemma} \begin{proof} We are going to show something slightly stronger. Let $R$ be an analytic hypergraph on $(2^\omega)^\alpha \times \omega$, $M$ a countable model with $R \in M, \alpha +1 \subseteq M$ and $k \in \omega$. Then consider the following two statements. \begin{enumerate}[label={$(\arabic*)_{R,M,k}$:},align=left] \item For any pairwise distinct $\bar x_0, \dots, \bar x_{n-1}$ that are strongly $\langle 2^\omega : i < \alpha\rangle$-mCg over $M$, and any $k_0, \dots, k_{n-1} < k$, $$\{(\bar x_0{},k_0),\dots, (\bar x_{n-1}, k_{n-1})\} \text{ is } R \text{-independent}.$$ \item There are $N \in \omega$ and continuous $\phi_0, \dots, \phi_{N-1} \colon (2^\omega)^\alpha \to (2^\omega)^\alpha$ such that for every $\bar x \in (2^\omega)^\alpha$ and $j_0 < j_1 < N$, $\phi_{j_0}(\bar x) \neq \phi_{j_1}(\bar x)$ and $\phi_{j_0}(\bar x) \neq \bar x$, and there are $k_0, \dots, k_{N-1} \leq k$ and $\bar s \in \bigotimes_{i<\alpha} 2^{<\omega}$, so that for any pairwise distinct $\bar x_0,\dots, \bar x_{n-1} \in (2^\omega)^\alpha \cap [\bar s]$ that are strongly $\langle 2^\omega : i < \alpha\rangle$-mCg over $M$, $$\{(\phi_j(\bar x_i),k_{j}) : j < N, i<n\} \text{ is } R \text{-independent, but}$$ $$\{ (\bar x_0, k)\} \cup \{(\phi_j(\bar x_0), k_{j}) : j < N \} \in R.$$ In fact, if $k > 0$, $$\{ (\bar x_i, {k-1}) : i < n \} \cup \{(\phi_j(\bar x_i), k_{j}) : j < N, i<n\} \text{ is } R \text{-independent.}$$ \end{enumerate} \noindent We are going to show by induction on $\alpha$ that for any $R, M, k$, $(1)_{R,M,k}$ implies that either $(1)_{R,M,k+1}$ or there is a countable model $M^+ \supseteq M$ so that $(2)_{R,M^+,k}$. From this the statement of the main lemma follows easily. Namely, whenever $E$ is a hypergraph on $(2^\omega)^\alpha$, consider the hypergraph $R$ on $(2^\omega)^\alpha \times \omega$ where $\{ (\bar x_0, k_0), \dots, (\bar x_{n-1}, k_{n-1}) \} \in R$ iff $\{ \bar x_0, \dots, \bar x_{n-1} \} \in E$. Then, if $M$ is an arbitrary countable elementary model with $R,\alpha \in M$ and if $k=0$, $(1)_{R,M,k}$ holds vacuously. Applying the claim we find $M^+$ so that either $(1)_{R,M,1}$ or $(2)_{R,M^+,0}$. The two options easily translate to the conclusion of the main lemma. Let us first consider the successor step. Assume that $\alpha = \beta +1$, $R$ is an analytic hypergraph on $(2^\omega)^\alpha \times \omega$ and $M$ a countable model with $R \in M, \alpha +1 \subseteq M$ so that $(1)_{R,M,k}$ holds true for some given $k \in \omega$.
Let $\mathbb{Q}$ be the forcing adding mutual Cohen reals $\langle z_{0,i,j}, z_{1,i,j} : i, j \in \omega \rangle$ in $2^\omega$. Then we define the hypergraph $\tilde R$ on $(2^\omega)^\beta \times \omega$ where $\{(\bar y_0, m_0), \dots, (\bar y_{n-1}, m_{n-1}) \} \in \tilde R \cap [(2^\omega)^\beta \times \omega]^n$ iff there is $p \in \mathbb{Q}$ and there are $K_i \in \omega$, $k_{i,0}, \dots, k_{i,K_i -1} < k$ for every $i < n$, so that $$p \Vdash_{\mathbb{Q}} \bigcup_{i<n} \{ (\bar y_i{}^\frown \dot z_{0,i,j}, k_{i,j}) : j < K_i \} \cup \{ (\bar y_i{}^\frown \dot z_{1,i,j}, k) : j < m_i \} \in R.$$ Then $\tilde R$ is analytic (see e.g. \cite[29.22]{Kechris1995}). \begin{claim} \label{claim:tildeR1} $(1)_{\tilde R, M, 1}$ is satisfied. \end{claim} \begin{proof} Suppose $\bar y_0, \dots, \bar y_{n-1}$ are pairwise distinct and strongly mCg over $M$, but $\{(\bar y_0{}, 0),$ $\dots,(\bar y_{n-1}, 0) \} \in \tilde R$ as witnessed by $p \in \mathbb{Q}$, $\langle K_i : i < n \rangle$ and $\langle k_{i,j} : i < n, j < K_i \rangle$, each $k_{i,j} < k$. More precisely, \begin{equation}\tag{$*_0$} p \Vdash_{\mathbb{Q}} \bigcup_{i<n} \{ (\bar y_i{}^\frown \dot z_{0,i,j}, k_{i,j}) : j < K_i \} \in R. \end{equation} By absoluteness, $(*_0)$ is satisfied in $M[\bar y_0, \dots, \bar y_{n-1}]$. Thus, let $\langle z_{0,i,j}, z_{1,i,j} : i, j \in \omega \rangle$ be generic over $M[\bar y_0, \dots, \bar y_{n-1}]$ with $p$ in the associated generic filter. Then the $\bar y_i{}^\frown z_{0,i,j}$ for $i < n, j < K_i$ are pairwise distinct and strongly mCg over $M$, but $$ \bigcup_{i<n} \{ (\bar y_i{}^\frown z_{0,i,j}, k_{i,j}) : j < K_i \} \in R.$$ This contradicts $(1)_{R,M,k}$. \end{proof} \begin{claim} If $(1)_{\tilde R, M, m}$ is satisfied for every $m \in \omega$, then also $(1)_{R,M,k+1}$. \end{claim} \begin{proof} Let $\bar x_0, \dots, \bar x_{n-1} \in (2^\omega)^\alpha$ be pairwise distinct, strongly mCg over $M$ and let $k_0, \dots, k_{n-1} \leq k$. Then we may write $\{ (\bar x_0, k_0), \dots, (\bar x_{n-1}, k_{n-1}) \}$ as \begin{equation}\tag{$*_1$}\bigcup_{i < n'} \{ (\bar y_i{}^\frown z_{0,i,j}, k_{i,j}) : j < K_i \} \cup \{ (\bar y_i{}^\frown z_{1,i,j}, k) : j < m_i \},\end{equation} for some pairwise distinct $\bar y_0, \dots,\bar y_{n'-1}$, $\langle K_i : i < n' \rangle$, $\langle k_{i,j} : i < n', j<K_i \rangle$, $\langle m_i : i < n' \rangle$ and $\langle z_{0,i,j} : i, j \in \omega \rangle$, $\langle z_{1,i,j} : i, j \in \omega \rangle$ $2^\omega$-mutually Cohen generic over $M[\bar y_0, \dots, \bar y_{n'-1}]$. Letting $m = \max_{i < n'} m_i +1$, the $R$-independence of the set in $(*_1)$ follows from $(1)_{\tilde R,M,m}$. \end{proof} \begin{claim} If there is $m \in \omega$ so that $(1)_{\tilde R, M, m}$ fails, then there is a countable model $M^+ \supseteq M$ so that $(2)_{R,M^+,k}$. \end{claim} \begin{proof} Let $m\geq 1$ be least so that $(2)_{\tilde R, M_0, m}$ for some countable model $M_0 \supseteq M$. We know that such $m$ exists, since, by the induction hypothesis applied to $\tilde R$, from $(1)_{\tilde R, M, 1}$ we get that either $(1)_{\tilde R, M, 2}$ or $(2)_{\tilde R, M_0,1}$ for some $M_0$, then, if $(1)_{\tilde R, M, 2}$, either $(1)_{\tilde R, M, 3}$ or $(2)_{\tilde R, M_0, 2}$ for some $M_0$, and so on. Let $\phi_0, \dots, \phi_{N-1}$, $m_0, \dots, m_{N-1} \leq m$ and $\bar s \in \bigotimes_{i < \beta}(2^{<\omega})$ witness $(2)_{\tilde R, M_0, m}$. Let $M_1$ be a countable elementary model such that $\phi_0, \dots, \phi_{N-1}, M_0 \in M_1$.
Then we have for any $\bar y$ that is Cohen generic in $(2^\omega)^\beta \cap [\bar s]$ over $M_1$, in particular over $M_0$, that $$\{(\bar y, m) \} \cup \{ (\phi_j(\bar y), m_j) : j <N \} \in \tilde R,$$ i.e. there is $p \in \mathbb{Q}$, there are $K_i \in \omega$, $k_{i,0}, \dots, k_{i,K_i -1} < k$ for every $i \leq N$, so that \begin{multline} \tag{$*_2$} p \Vdash_{\mathbb{Q}} \bigcup_{i<N} \{ (\phi_i(\bar y){}^\frown \dot z_{0,i,j}, k_{i,j}) : j < K_i \} \cup \{ (\phi_i(\bar y){}^\frown \dot z_{1,i,j}, k) : j < m_i \} \\ \cup \{(\bar y{}^\frown \dot z_{0,N,j}, k_{N,j}) : j < K_N \} \cup \{(\bar y{}^\frown \dot z_{1,N,j}, k) : j < m \} \in R. \end{multline} By extending $\bar s$, we can assume wlog that $p$, $\langle K_i : i \leq N \rangle$, $\langle k_{i,j} : i \leq N, j < K_i \rangle$ are the same for each $\bar y \in [\bar s]$ generic over $M_1$, since $(*_2)$ can be forced over $M_1$. Also, from the fact that $\phi_j$ is continuous for every $j < N$, that $\phi_j(\bar y) \neq \bar y$ for every $j < N$, and that $\phi_{j_0}(\bar y) \neq \phi_{j_1}(\bar y)$ for every $j_0 < j_1 < N$, we can assume wlog that for any $\bar y_0, \bar y_1 \in [\bar s]$ and $j_0 < j_1 < N$, \begin{equation}\tag{$*_3$} \phi_{j_0}(\bar y_0) \neq \bar y_1 \text{ and } \phi_{j_0}(\bar y_0) \neq \phi_{j_1}(\bar y_1). \end{equation} Let us force in a finite support product over $M_1$ continuous functions $\chi_{0,i,j} \colon (2^\omega)^\beta \to [p(0,i,j)]$ and $\chi_{1,i,j} \colon (2^\omega)^\beta \to [p(1,i,j)]$ for $i,j \in \omega$ and write $M^+ = M_1[\langle \chi_{0,i,j}, \chi_{1,i,j} : i,j \in \omega \rangle]$. For every $i < N$ and $j < K_i$ and $\bar x \in (2^\omega)^\alpha$, define $$\phi_{0,i,j}(\bar x ) := \phi_i(\bar x \restriction \beta){}^\frown \chi_{0,i,j}(\phi_i(\bar x \restriction \beta)) \text{ and } k_{0,i,j} = k_{i,j}.$$ For every $i< N$ and $j < m_i$ and $\bar x \in (2^\omega)^\alpha$, define $$\phi_{1,i,j}(\bar x ) := \phi_i(\bar x \restriction \beta){}^\frown \chi_{1,i,j}(\phi_i(\bar x \restriction \beta)) \text{ and } k_{1,i,j} = k.$$ For every $j < K_{N}$ and $\bar x \in (2^\omega)^\alpha$, define $$\phi_{0,N,j}(\bar x ) := \bar x \restriction \beta{}^\frown \chi_{0,N,j}(\bar x \restriction \beta) \text{ and } k_{0,N,j} = k_{N,j}.$$ At last, define for every $j < m-1$ and $\bar x \in (2^\omega)^\alpha$, $$\phi_{1,N,j}(\bar x ) := \bar x \restriction \beta{}^\frown \chi_{1,N,j}(\bar x \restriction \beta) \text{ and } k_{1,N,j} = k.$$ Let $\bar t \in \bigotimes_{i < \alpha} 2^{<\omega}$ be $\bar s$ with $p(1,N,m-1)$ added in coordinate $\beta$. Now we have that for any $\bar x \in [\bar t]$ that is Cohen generic in $(2^\omega)^\alpha$ over $M^+$, \begin{multline*} \{ (\bar x, k) \} \cup \{ (\phi_{0,i,j}(\bar x), k_{0,i,j}) : i \leq N, j < K_i \} \cup \{ (\phi_{1,i,j}(\bar x), k_{1,i,j}) : i < N, j < m_i\} \\ \cup \{(\phi_{1,N,j}(\bar x), k_{1,N,j}) : j < m -1 \} \in R. \end{multline*} This follows from $(*_2)$ and applying Lemma~\ref{lem:forcingcont} to see that the $\chi_{0,i,j}(\phi_i(\bar x \restriction \beta))$, $\chi_{1,i,j}(\phi_i(\bar x \restriction \beta))$, $\chi_{0,N,j}(\bar x \restriction \beta)$, $\chi_{1,N,j}(\bar x \restriction \beta)$ and $x(\beta)$ are mutually Cohen generic over $M_1[\bar x \restriction \beta]$. Moreover they correspond to the reals $z_{0,i,j}$, $z_{1,i,j}$ added by a $\mathbb{Q}$-generic over $M_1[\bar x \restriction \beta]$, containing $p$ in its generic filter.
Also, remember that $(*_2)$ is absolute between models containing the relevant parameters, which $M_1[\bar y]$ is, with $\bar y = {\bar x \restriction \beta}$. On the other hand, whenever $\bar x_0, \dots, \bar x_{n-1} \in (2^\omega)^\alpha \cap [\bar t]$ are pairwise distinct and strongly mCg over $M^+$, letting $\bar y_0, \dots, \bar y_{n'-1}$ enumerate $\{\bar x_i \restriction \beta : i < n \}$, we have that \begin{equation}\tag{$*_4$} \{ (\bar y_i{},m-1) : i < n' \} \cup \{ (\phi_j(\bar y_i), m_j) : i < n', j < N \} \text{ is } \tilde R \text{-independent.} \end{equation} According to the definition of $\tilde R$, $(*_4)$ is saying e.g. that whenever $A \cup B \subseteq (2^\omega)^\alpha$ is an arbitrary set of strongly mCg reals over $M_1$, where $A \restriction \beta, B \restriction \beta \subseteq \{ \bar y_i, \phi_j(\bar y_i) : i < n', j < N \}$ and in $B$, $\bar y_i$ is extended at most $m-1$ many times and $\phi_j(\bar y_i)$ at most $m_j$ many times for every $i < n'$, $j < N$, and, assuming for now that $k > 0$, if $f \colon A \to k$, then $$\{ (\bar x{}, f(\bar x)) : \bar x \in A \} \cup (B \times \{k\}) \text{ is } R\text{-independent.}$$ As an example of such sets $A$ and $B$ we have $$A = \{\phi_{0,i,j}(\bar x_l) : l < n, i \leq N, j < K_i \} \cup \{ \bar x_l : l < n' \}, \text{ and}$$ $$B = \{ \phi_{1,i,j}(\bar x_l) : l < n, i < N, j < m_i \} \cup \{ \phi_{1,N,j}(\bar x_l) : l < n', j < m-1 \}.$$ Again, to see this we apply Lemma~\ref{lem:forcingcont} to show that the relevant reals are mutually generic over the model $M_1[\bar y_0, \dots, \bar y_{n'-1}]$. Also, remember from the definition of $\phi_{1,i,j}$ for $i < N$ and $j < m_i$ that, if $\phi_i(\bar x_{l_0} \restriction \beta) = \phi_i(\bar x_{l_1} \restriction \beta)$, then also $\phi_{1,i,j}(\bar x_{l_0}) = \phi_{1,i,j}(\bar x_{l_1})$, for all $l_0, l_1 < n$. Equally, if $\bar x_{l_0} \restriction \beta = \bar x_{l_1} \restriction \beta$, then $\phi_{1,N,j}(\bar x_{l_0}) = \phi_{1,N,j}(\bar x_{l_1})$ for every $j < m-1$. Use $(*_3)$ to note that $\{ \bar y_i : i < n'\}$, $\{\phi_0(\bar y_i) : i < n' \}, \dots$, ${\{ \phi_{N-1}(\bar y_i) : i < n' \}}$ are pairwise disjoint. From this we see that indeed each $\bar y_i$ is extended at most $m-1$ many times in $B$ and $\phi_j(\bar y_i)$ at most $m_j$ many times. In total, we get that \begin{multline*} \{(\phi_{0,i,j}(\bar x_l), k_{0,i,j}) : l < n, i \leq N, j < K_i \} \cup \{ (\bar x_l,k-1) : l < n' \} \cup \\ \{ (\phi_{1,i,j}(\bar x_l), k) : l < n, i < N, j < m_i \} \cup \{ (\phi_{1,N,j}(\bar x_l), k) : l < n', j < m-1 \} \\ \text{ is } R\text{-independent}. \end{multline*} It is now easy to check that we have the witnesses required in the statement of $(2)_{R,M^+,k}$. For example, $\phi_{0,i,j}(\bar x) \neq \bar x$ for $i < N$ follows from $\phi_i(\bar x \restriction \beta) \neq \bar x \restriction \beta$. For the values $\phi_{0,N,j}(\bar x)$ we simply have that $\chi_{0,N,j}(\bar x \restriction \beta) \neq x(\beta)$, as the two values are mutually generic. Everything else is similar and consists only of a few case distinctions. Also, the continuity of the functions is clear. If $k = 0$, then we can simply forget the set $A$ above, since $K_i$ must be $0$ for every $i \leq N$. In this case we just get that \begin{multline*} \{ (\phi_{1,i,j}(\bar x_l), k) : l < n, i < N, j < m_i \} \cup \{ (\phi_{1,N,j}(\bar x_l), k) : l < n', j < m-1 \} \\ \text{ is } R\text{-independent}, \end{multline*} which then yields $(2)_{R,M^+,k}$. \end{proof} This finishes the successor step.
Now assume that $\alpha$ is a limit ordinal. We fix some arbitrary tree $T \subseteq \omega^{<\omega}$ such that for every $t \in T$, $\vert \{ n \in \omega : t^\frown n \in T\} \vert = \omega$ and for any branches $x \neq y \in [T]$, if $d = \min \{ i \in \omega : x(i)\neq y(i) \}$ then $x(j) \neq y(j)$ for every $j \geq d$. We will use $T$ only for notational purposes. For every sequence $\xi_0 < \dots < \xi_{k'} = \alpha$, we let $\mathbb{Q}_{\xi_0, \dots, \xi_{k'}} = \left(\prod_{l < k'} (\bigotimes_{i \in [\xi_l, \xi_{l+1})} 2^{<\omega})^{<\omega}\right) \times ( \bigotimes_{i \in [\xi_0, \alpha)} 2^{<\omega})^{<\omega}$. $\mathbb{Q}_{\xi_0, \dots, \xi_{k'}}$ adds, in the natural way, reals $\langle \bar z^0_{l, i} : l < k', i \in \omega \rangle$ and $\langle \bar z^1_i : i \in \omega \rangle$, where $\bar z^0_{l,i} \in (2^\omega)^{[\xi_l, \xi_{l+1})}$ and $\bar z^1_i \in (2^{\omega})^{[\xi_0, \alpha)}$ for every $l < k'$, $i \in \omega$. Whenever $t \in T \cap \omega^{k'}$, we write $\bar z^0_{t} = \bar z^0_{0,t(0)}{}^\frown \dots {}^\frown \bar z^0_{k'-1, t(k'-1)}$.\iffalse Here we formally distinguish between something of the form $\bar z^0_{l,i}$ and $\bar z^0_{t}$, where $t = (l,i)$, else there would be a slight abuse of notation, but this should always be clear.\fi~ Note that for generic $\langle \bar z^0_{l,i} : i \in \omega, l < k' \rangle$, the reals $\langle \bar z^0_t : t \in T \cap \omega^{k'}\rangle$ are strongly $\langle 2^\omega : i \in [\xi_0, \alpha)\rangle$-mCg. Now, let us define for each $\xi < \alpha$ an analytic hypergraph $R_\xi$ on $(2^\omega)^\xi \times 2$ so that $\{(\bar y^0_i, 0) : i < n_0 \} \cup \{ (\bar y^1_i, 1) : i < n_1 \} \in R_\xi \cap [(2^\omega)^\xi \times 2]^{n_0+n_1}$, where $\vert \{(\bar y^0_i, 0) : i < n_0 \}\vert = n_0$ and $\vert \{ (\bar y^1_i, 1) : i < n_1 \} \vert = n_1$, iff there are $\xi_0 = \xi < \dots < \xi_{k'} = \alpha$, $(p,q) \in \mathbb{Q}_{\xi_0, \dots, \xi_{k'}}$, $K_i \in \omega$, $k_{i,0}, \dots, k_{i,K_i -1}<k$ and distinct $t_{i,0}, \dots, t_{i,K_i -1} \in T \cap \omega^{k'}$ for every $i < n_0$, so that $t_{i_0,j_0}(0) \neq t_{i_1,j_1}(0)$ for all $i_0 < i_1 < n_0$ and $j_0 < K_{i_0}, j_1 < K_{i_1}$, and $$(p,q) \Vdash_{\mathbb{Q}_{\bar \xi}} \bigcup_{i < n_0} \{ (\bar y^0_i{}^\frown \bar z^0_{t_{i,j}}, k_{i,j}): j < K_i \} \cup \{ (\bar y^1_i{}^\frown \bar z^1_{i}, k): i < n_1 \} \in R.$$ Note that each $R_\xi$ can be defined within $M$. It should be clear, similarly to the proof of Claim~\ref{claim:tildeR1}, that from $(1)_{R,M,k}$, we can show the following. \begin{claim} For every $\xi < \alpha$, $(1)_{R_\xi,M,1}$. \end{claim} \begin{claim} Assume that for every $\xi < \alpha$, $(1)_{R_\xi,M,2}$. Then also $(1)_{R,M,k+1}$. \end{claim} \begin{proof} Let $\bar x^0_0, \dots, \bar x^0_{n_0-1}, \bar x^1_0, \dots, \bar x^1_{n_1-1}$ be pairwise distinct and strongly mCg over $M$ and $k_0, \dots, k_{n_0 -1} < k$. Then there is $\xi < \alpha$ large enough so that $\bar x^0_0 \restriction \xi, \dots, \bar x^0_{n_0-1}\restriction \xi , \bar x^1_0\restriction \xi, \dots, \bar x^1_{n_1-1}\restriction \xi$ are pairwise distinct and in particular, $\bar x^0_0 \restriction [\xi,\alpha), \dots, \bar x^0_{n_0-1}\restriction [\xi, \alpha) , \bar x^1_0\restriction [\xi, \alpha), \dots, \bar x^1_{n_1-1}\restriction [\xi, \alpha)$ are pairwise different in every coordinate. Let $\xi_0 = \xi$, $\xi_1 = \alpha$, $K_i = 1$ for every $i < n_0$ and $t_{0,0}, \dots, t_{n_0 -1,0} \in T \cap \omega^1$ pairwise distinct.
Also, write $k_{0,0} = k_0$, ..., $k_{n_0-1,0} = k_{n_0 -1}$. Then, from $(1)_{R_\xi,M,2}$, we have that $$\mathbbm{1} \Vdash_{\mathbb{Q}_{\xi_0,\xi_1}} \{ ((\bar x^0_i \restriction \xi ){}^\frown \bar z^0_{t_{i,0}}, k_{i,0}): i < n_0 \} \cup \{((\bar x^1_i\restriction \xi){}^\frown \bar z^1_{i}, k): i < n_1 \} \text{ is } R\text{-independent}.$$ By absoluteness, this holds true in $M[\langle \bar x^0_i \restriction \xi,\bar x^1_j\restriction \xi: i < n_0, j < n_1\rangle]$ and we find that $$\{(\bar x^0_i{}, k_i) : i < n_0 \} \cup \{ (\bar x^1_i, k) : i < n_1 \} \text{ is } R\text{-independent},$$ as required. \end{proof} \begin{claim} If there is $\xi < \alpha$ so that $(1)_{R_\xi,M,2}$ fails, then there is a countable model $M^+ \supseteq M$ so that $(2)_{R,M^+,k}$. \end{claim} \begin{proof} If $(1)_{R_\xi, M, 2}$ fails, then there is a countable model $M_0 \supseteq M$ so that $(2)_{R_\xi, M_0, 1}$ holds true as witnessed by $\bar s \in \bigotimes_{i < \xi} 2^{<\omega}$, $\phi^0_{0}, \dots, \phi^0_{N_0 -1}, \phi^1_{0}, \dots, \phi^1_{N_1 -1} \colon (2^\omega)^\xi \to (2^\omega)^\xi$ such that for any pairwise distinct $\bar y_0, \dots, \bar y_{n-1} \in (2^\omega)^\xi \cap [\bar s]$ that are strongly mCg over $M_0$, \begin{equation}\tag{$*_5$} \{ (\bar y_i, 0) : i < n\} \cup \{(\phi^0_j(\bar y_i), 0) : i < n, j < N_0\} \cup \{ (\phi^1_j(\bar y_i), 1) : i < n, j < N_1\} \end{equation} is $R_\xi$-independent, but \begin{equation}\tag{$*_6$} \{ (\bar y_0, 1) \} \cup \{ (\phi^0_j(\bar y_0), 0) : j < N_0\} \cup \{ (\phi^1_j(\bar y_0), 1) : j < N_1\} \in R_\xi. \end{equation} As before, we may pick $M_1 \ni M_0$ elementary containing all relevant information, assume that $(*_6)$ is witnessed by fixed $\xi_0 = \xi < \dots< \xi_{k'} = \alpha$, $(p,q) \in \mathbb{Q}_{\xi_0, \dots, \xi_{k'}}$, $K_0, \dots, K_{N_0-1}$, $k_{i,0}, \dots, k_{i, K_i -1}$ and $t_{i,0}, \dots, t_{i, K_i-1} \in T \cap \omega^{k'}$ for every $i < N_0$, so that for every generic $\bar y_0 \in (2^\omega)^\xi \cap [\bar s]$ over $M_1$, \begin{multline}\tag{$*_7$} (p,q) \Vdash_{\mathbb{Q}_{\bar \xi}} \{(\bar y_0{}^\frown \bar z^1_{N_1}, k) \}\cup \bigcup_{i < N_0}\{ (\phi^0_i(\bar y_0){}^\frown \bar z^0_{t_{i,j}}, k_{i,j}): j < K_i\} \cup \\ \{ (\phi^1_j(\bar y_0){}^\frown \bar z^1_{j}, k): j < N_1 \} \in R. \end{multline} As before, we may also assume that $\bar y_0 \neq \phi^{j_0}_{i_0}(\bar y_0) \neq \phi^{j_1}_{i_1}(\bar y_1)$ for every $\bar y_0, \bar y_1 \in [\bar s]$ and $(j_0,i_0) \neq (j_1,i_1)$. We let $\bar s' = \bar s{}^\frown q(N_1)$. Now we force continuous functions $\chi^0_{l,i} \colon (2^\omega)^\xi \to (2^{\omega})^{[\xi_l, \xi_{l+1})} \cap [p(l,i)]$ and $\chi^1_{i} \colon (2^\omega)^\xi \to (2^{\omega})^{[\xi, \alpha)} \cap [q(i)]$ over $M_1$ for every $i \in \omega$, $l < k'$ and we let $M^+ = M_1[\langle \chi^0_{l,i}, \chi^1_{i} : i \in \omega, l < k'\rangle]$. Finally we let $$ \phi_{0,i,j}(\bar x) := \phi^0_i(\bar x \restriction \xi){}^\frown \chi^0_{0,t_{i,j}(0)}(\phi^0_i(\bar x \restriction \xi)){}^\frown \dots{}^\frown \chi^0_{k'-1,t_{i,j}(k'-1)}(\phi^0_i(\bar x \restriction \xi))$$ for every $i < N_0$ and $j < K_i$, $\bar x \in (2^\omega)^\alpha$, and $$\phi_{1,i}(\bar x) := \phi^1_i(\bar x \restriction \xi){}^\frown \chi^1_{i}(\phi^1_i(\bar x \restriction \xi))$$ for every $i < N_1$, $\bar x \in (2^\omega)^\alpha$.
We get from $(*_7)$, and, as usual, applying Lemma~\ref{lem:forcingcont}, that for any $\bar x \in (2^\omega)^\alpha \cap [\bar s']$ which is generic over $M^+$, $$ \{(\bar x, k)\} \cup \bigcup_{i < N_0} \{(\phi_{0,i,j}(\bar x), k_{i,j}) : j < K_i \} \cup \{(\phi_{1,i}(\bar x), k) : i < N_1 \} \in R.$$ On the other hand, whenever $\bar x_0,\dots, \bar x_{n'-1}\in (2^{\omega})^\alpha \cap [\bar s']$ are strongly mCg over $M^+$, and letting $\bar y_0, \dots, \bar y_{n-1}$ enumerate $\{ \bar x_i \restriction \xi : i < n'\}$, knowing that the set in $(*_5)$ is $R_\xi$-independent, we get that \begin{multline*} \{ (\bar x_l,k-1) : l < n' \} \cup \bigcup_{i < N_0} \{(\phi_{0,i,j}(\bar x_l),k_{i,j}) : j < K_i, l < n' \} \cup \\\{(\phi_{1,i}(\bar x_l), k) : i < N_1, l < n' \} \text{ is } R\text{-independent}, \end{multline*} in case $k>0$. To see this, we let $\eta_0< \dots< \eta_{k''}$ be a partition refining $\xi_0 < \dots < \xi_{k'}$ witnessing the mCg of $\bar x_0 \restriction [\xi, \alpha), \dots, \bar x_{n'-1}\restriction [\xi, \alpha)$ and we find appropriate $u_{0,0}, \dots, u_{0,L_0 -1}, \dots, u_{n-1,0}, \dots, u_{n-1,L_{n-1}-1} \in T \cap \omega^{k''}$ and $v_{i,j} \in T \cap \omega^{k''}$ for $i < N_0, j < K_i$ to interpret the above set in the form \begin{multline*} \{ (\bar y_l{}^\frown \bar z^0_{u_{l,i}}, k-1) : l < n, i < L_l \} \cup \bigcup_{i < N_0} \{(\phi^0_i(\bar y_l){}^\frown \bar z^0_{v_{i,j}}, k_{i,j}) : j < K_i, l < n \} \\\cup \{(\phi^1_{i}(\bar y_l){}^\frown \bar z^1_i , k) : i < N_1, l < n \}, \end{multline*} for $\mathbb{Q}_{\eta_0, \dots, \eta_{k''}}$-generic $\langle \bar z ^0_{l,i}, \bar z^1_{i} : l < k'', i \in \omega \rangle$ over $M_1[\bar y_0, \dots, \bar y_{n-1}]$. We leave the details to the reader. In case $k= 0$, all $K_i$ are $0$ and we get that $$\{(\phi_{1,i}(\bar x_l), k) : i < N_1, l < n' \} \text{ is } R\text{-independent}.$$ Everything that remains, namely showing e.g. that $\bar x \neq \phi_{1,i}(\bar x)$, is clear. \end{proof} As a final note, let us observe that the case $\alpha = 0$ is trivial, since $(2^\omega)^\alpha$ has only one element. \end{proof} \begin{remark}\label{rem:E1} If we replace ``strong mCg" with ``mCg" in the above Lemma, then it already becomes false for $\alpha = \omega$. Namely consider the equivalence relation $E$ on $(2^\omega)^\omega$, where $\bar x E \bar y$ if they eventually agree, i.e. if $\exists n \in \omega \forall m \geq n (x(m) = y(m))$.\footnote{This equivalence relation is usually called $E_1$.} Then we can never be in case (1) since we can always find two distinct $\bar x$ and $\bar y$ that are mCg and $\bar x E \bar y$. On the other hand, in case (2) we get a continuous selector $\phi_0$ for $E$ (note that $N = 0$ is not possible). More precisely we have that for any $\bar x$, $\bar y$ that are mCg, $\bar x E \phi_0(\bar x)$ and $\phi_0(\bar x) = \phi_0(\bar y)$ iff $\bar x E \bar y$. But for arbitrary mCg $\bar x$ and $\bar y$ so that $\bar x \neg E \bar y$, we easily find a sequence $\langle \bar x_n : n \in \omega \rangle$ so that $\bar x$ and $\bar x_n$ are mCg and $\bar x E \bar x_n$, but $\bar x_n \restriction n = \bar y \restriction n$ for all $n$. In particular $\lim_{n \in \omega} \bar x_n = \bar y$. Then $\phi_0(\bar y) = \lim_{n\in \omega} \phi_0(\bar x_n) = \lim_{n\in \omega} \phi_0(\bar x) = \phi_0(\bar x)$, contradicting that $\phi_0(\bar x) = \phi_0(\bar y)$ implies $\bar x E \bar y$.
\end{remark} The proofs of Main Lemmas~\ref{thm:mainlemma} and \ref{lem:mainlemmainf} can be generalized to hypergraphs $E$ that are $\omega$-universally Baire; in particular, they also hold for coanalytic hypergraphs. The only assumptions on analytic sets that we used in the proofs are summarized below. \begin{prop} Let $\Gamma$ be a pointclass closed under countable unions, countable intersections and continuous preimages and assume that for every $A \in \Gamma \cap \mathcal{P}(\omega^\omega)$, there are formulas $\varphi$, $\psi$ (with parameters) in the language of set theory, such that for every countable elementary model $M$ (with the relevant parameters) and $G$ a generic over $M$ for a finite support product of Cohen forcing, \begin{enumerate} \item for $x \in M[G] \cap \omega^\omega$, $M[G] \models \varphi(x) \text{ iff } x \in A \text{ and } M[G] \models \psi(x) \text{ iff } x \notin A,$ \item for $\dot x \in M[G]$ a $\mathbb{C}$-name for a real, $p \in \mathbb{C}$, $M[G] \models ``p \Vdash \varphi(\dot x)"$ iff $p \Vdash \varphi(\dot x)$, \item for $\dot y$ a $\mathbb{C}$-name for a real, $p \in \mathbb{C}$ and a continuous function $f \colon \omega^\omega \times \omega^\omega \to \omega^\omega$, $\{ x \in \omega^\omega : p \Vdash \varphi(f(x,\dot y)) \} \in \Gamma.$ \end{enumerate} Then Main Lemmas~\ref{thm:mainlemma} and \ref{lem:mainlemmainf} hold, where ``analytic" is replaced by $\Gamma$. \end{prop} \begin{definition}\label{def:Delta} For $\bar x_0, \dots, \bar x_{n-1} \in \prod_{i<\alpha} X_i$, we define $$\Delta(\bar x_0, \dots, \bar x_{n-1}) := \{ \Delta_{\bar x_i, \bar x_j} : i \neq j < n \} \cup \{ 0, \alpha\},$$ where $\Delta_{\bar x_i, \bar x_j} := \min \{ \xi < \alpha: x_i(\xi) \neq x_j(\xi) \}$ if this exists and $\Delta_{\bar x_i, \bar x_j} = \alpha$ if $\bar x_i = \bar x_j$. \end{definition} \begin{remark} Whenever $\bar x_0, \dots, \bar x_{n-1}$ are strongly mCg, then they are mCg as witnessed by the partition $\xi_0 < \dots < \xi_k$, where $\{\xi_0, \dots, \xi_k \} = \Delta(\bar x_0, \dots, \bar x_{n-1})$. \end{remark} \section{Sacks and splitting forcing} \subsection{Splitting Forcing} \begin{definition}\label{def:splittingforcing} We say that $S \subseteq 2^{<\omega}$ is \textit{fat} if there is $m \in \omega$ so that for all $n \geq m$, there are $s,t \in S$ so that $s(n) =0$ and $t(n) =1$. A tree $T$ on $2$ is called a \textit{splitting tree} if for every $s \in T$, $T_s$ is fat. We call \textit{splitting forcing} the tree forcing $\mathbb{SP}$ consisting of splitting trees. \end{definition} Note that for $T \in \mathbb{SP}$ and $s \in T$, $T_s$ is again a splitting tree. Recall that $x \in 2^\omega$ is called splitting over $V$ if for every $y \in 2^{\omega} \cap V$ such that $\{ n \in \omega : y(n) = 1 \}$ is infinite, the sets $\{ n \in \omega: y(n) = x(n) = 1 \}$ and $\{n \in \omega: y(n) = 1 \wedge x(n) =0 \}$ are infinite. The following is easy to see. \begin{fact} Let $G$ be $\mathbb{SP}$-generic over $V$. Then $x_G$, the generic real added by $\mathbb{SP}$, is splitting over $V$. \end{fact} Whenever $S$ is fat let us write $m(S)$ for the minimal $m \in \omega$ witnessing this. \begin{definition} Let $S,T$ be splitting trees and $n \in \omega$. Then we write $S \leq_n T$ iff $S \leq T$, $\splt_{\leq n}(S) = \splt_{\leq n}(T)$ and $\forall s \in \splt_{\leq n}(S) (m(S_s) = m(T_s))$. \end{definition} \begin{prop} The sequence $\langle \leq_n : n \in \omega \rangle$ witnesses that $\mathbb{SP}$ satisfies Axiom A with continuous reading of names.
\end{prop} \begin{proof} It is clear that $\leq_{n}$ is a partial order refining $\leq$ and that $\leq_{n+1} \subseteq \leq_n$ for every $n \in \omega$. Let $\langle T_n : n \in \omega \rangle$ be a fusion sequence in $\mathbb{SP}$, i.e. for every $n$, $T_{n+1} \leq_n T_n$. Then we claim that $T := \bigcap_{n \in \omega} T_n$ is a splitting tree. More precisely, for $s \in T$, we claim that $m := m((T_{\vert s\vert})_s)$ witnesses that $T_s$ is fat. To see this, let $n \geq m$ be arbitrary and note that $n \geq m \geq \vert s \vert$ must be the case. Then, since $\splt_{\leq n+1}(T_{n+1}) \subseteq T$ we have that $s \in \splt_{\leq n+1}( T_{n+1})$ and $m((T_{n+1})_s) = m$. So find $t_0, t_1 \in T_{n+1}$ so that $t_0(n) = 0$, $t_1(n) = 1$ and $\vert t_0 \vert = \vert t_1 \vert = n+1$. But then $t_0,t_1 \in T$, because $t_0,t_1 \in \splt_{\leq n+1}(T_{n+1}) \subseteq T$. Now let $D \subseteq \mathbb{SP}$ be open dense, $T \in \mathbb{SP}$ and $n \in \omega$. We will show that there is $S \leq_n T$ so that for every $x \in [S]$, there is $t \subseteq x$, with $S_t \in D$. This implies condition (3) in Definition~\ref{def:axiomA}. \begin{claim} Let $S$ be a splitting tree. Then there is $A \subseteq S$ an antichain (seen as a subset of $2^{<\omega}$) so that for every $k \in \omega, j \in 2$, if $\exists s \in S (s(k) = j)$, then $\exists t \in A (t(k) = j)$. \end{claim} \begin{proof} Start with $\{ s_i : i \in \omega \} \subseteq S$ an arbitrary infinite antichain and let $m_i := m(S_{s_i})$ for every $i \in \omega$. Then find for each $i \in \omega$, a finite set $H_i \subseteq S_{s_i}$ so that for all $k \in [m_i, m_{i+1})$, there are $t_0,t_1 \in H_i$, so that $t_0(k) = 0$ and $t_1(k) = 1$. Moreover let $H \subseteq S$ be finite so that for all $k \in [0,m_0)$ and $j \in 2$, if $\exists s \in S (s(k) =j)$, then $\exists t \in H (t(k) = j)$. Then define $F_i = H_i \cup (H \cap S_{s_i})$ for each $i \in \omega$ and let $F_{-1} := H \setminus \bigcup_{i \in \omega} F_i$. Since $F_i$ is finite for every $i \in \omega$, it is easy to extend each of its elements to get a set $F_i'$ that is additionally an antichain in $S_{s_i}$. Also extend the elements of $F_{-1}$ to get an antichain $F_{-1}'$ in $S$. It is easy to see that $A := \bigcup_{i \in [-1,\omega)} F_i'$ works. \end{proof} Now enumerate $\splt_n(T)$ as $\langle \sigma_i : i < N \rangle$, $N := 2^n$. For each $i < N$, let $A_i \subseteq T_{\sigma_i}$ be an antichain as in the claim applied to $S = T_{\sigma_i}$. For every $i < N$ and $t \in A_i$, let $S^{t} \in D$ be so that $S^t \leq T_t$. For every $i <N$ pick $t_i \in A_i$ arbitrarily and $F_i \subseteq A_i$ a finite set so that for every $k \in [0, m(S^{t_i}))$ and $j \in 2$, if $\exists s \in A_i (s(k)= j)$, then $ \exists t \in F_i (t(k)=j)$. Then we see that $S := \bigcup_{i < N} (\bigcup_{t \in F_i} S^t \cup S^{t_i} )$ works. We constructed $S$ so that $S \leq_n T$. Moreover, whenever $x \in [S]$, then there is $i < N$ so that $\sigma_i \subseteq x$. Then $x \in [\bigcup_{t \in F_i} S^t \cup S^{t_i}]$ and since $F_i$ is finite, there is $t \in F_i \cup \{t_i \}$ so that $t \subseteq x$. But then $S_{t} \leq S^t \in D$. Finally, in order to show the continuous reading of names, let $\dot y$ be a name for an element of $\omega^\omega$, $n \in \omega$ and $T \in \mathbb{SP}$. It suffices to consider such names, since for every Polish space $X$, there is a continuous surjection $F \colon \omega^\omega \to X$.
Then we have that for each $i \in \omega$, $D_i := \{ S \in \mathbb{SP} : \exists s \in \omega^{i}( S \Vdash \dot y \restriction i = s ) \}$ is dense open. Let $\langle T_i : i \in \omega \rangle$ be so that $T_0 \leq_n T$, $T_{i+1} \leq_{n+i} T_i$ and for every $x \in [T_i]$, there is $t \subseteq x$ so that $(T_i)_t \in D_i$. Then $S = \bigcap_{i \in \omega} T_i \leq_n T$. For every $x \in [S]$, define $f(x) = \bigcup \{ s \in \omega^{<\omega} : \exists t \subseteq x (S_t \Vdash s \subseteq \dot y) \}$. Then $f \colon [S] \to \omega^\omega$ is continuous and $S \Vdash \dot y = f(x_G)$. \end{proof} \begin{cor} $\mathbb{SP}$ is proper and $\omega^\omega$-bounding. \end{cor} \subsection{Weighted tree forcing} In this subsection we define a class of forcings that we call \emph{weighted tree forcing}. The definition is slightly ad-hoc, but simple enough to formulate and to check for various forcing notions. In earlier versions of this paper we proved many of the results only for splitting forcing, but we noted that similar combinatorial arguments apply more generally. The notion of a \emph{weight} resulted directly from analysing the proof of Proposition~\ref{prop:weightedmain} for splitting forcing. It also turned out to be particularly helpful in the next subsection. \begin{definition} Let $T$ be a perfect tree. A \emph{weight} on $T$ is a map $\rho \colon T \times T \to [T]^{<\omega}$ so that $\rho(s,t) \subseteq T_s \setminus T_t$ for all $s,t \in T$. Whenever $\rho_0, \rho_1$ are weights on $T$ we write $\rho_0 \subseteq \rho_1$ to say that for all $s,t \in T$, $\rho_0(s,t) \subseteq \rho_1(s,t)$. \end{definition} Note that if $t \subseteq s$ then $\rho(s,t) = \emptyset$ must be the case. \begin{definition} Let $T$ be a perfect tree, $\rho$ a weight on $T$ and $S$ a tree. Then we write $S \leq_\rho T$ if $S \subseteq T$ and there is a dense set of $s_0 \in S$ with an injective sequence $(s_n)_{n\in \omega}$ in $S_{s_0}$ such that $\forall n \in \omega (\rho(s_n, s_{n+1}) \subseteq S)$. \end{definition} \begin{remark} Whenever $\rho_0 \subseteq \rho_1$, we have that $S \leq_{\rho_1} T$ implies $S \leq_{\rho_0} T$. \end{remark} \begin{definition} \label{def:weightedtreeforcing} Let $\mathbb{P}$ be a tree forcing. Then we say that $\mathbb{P}$ is \emph{weighted} if for any $T \in \mathbb{P}$ there is a weight $\rho$ on $T$ so that for any tree $S$, if $S \leq_\rho T$ then $S \in \mathbb{P}$. \end{definition} \begin{lemma} \label{lem:spweighted} $\mathbb{SP}$ is weighted. \end{lemma} \begin{proof} Let $T \in \mathbb{SP}$. For any $s,t \in T$ let $\rho(s,t) \subseteq T_s \setminus T_t$ be finite so that for any $k \in \omega$ and $i \in 2$, if there is $r \in T_s$ so that $r(k) = i$ and there is no such $r \in T_t$, then there is such $r$ in $\rho(s,t)$. This is possible since $T_t$ is fat. Let us show that $\rho$ works. Assume that $S \leq_\rho T$ and let $s \in S$ be arbitrary. Then there is $s_0 \supseteq s$ in $S$ with a sequence $(s_n)_{n\in \omega}$ as in the definition of $\leq_\rho$. Let $k \geq m(T_{s_0})$ and $i \in 2$ and suppose there is no $r \in S_{s_0}$ with $r(k) = i$. In particular this means that no such $r$ is in $\rho(s_n,s_{n+1})$ for any $n \in \omega$, since $\rho(s_n, s_{n+1}) \subseteq S_{s_0}$. But then, using the definition of $\rho$ and $m(T_{s_0})$, we see inductively that for each $n \in \omega$ such $r$ must be found in $T_{s_n}$. Letting $n$ large enough so that $k < \vert s_n \vert$, $s_n(k) = i$ must be the case. 
But $s_n \in S_{s_0}$, which is a contradiction. \end{proof} \begin{definition} \textit{Sacks forcing} is the tree forcing $\mathbb{S}$ consisting of all perfect subtrees of $2^{<\omega}$. It is well known that it satisfies Axiom A with continuous reading of names. \end{definition} \begin{lemma} \label{lem:sweighted} $\mathbb{S}$ is weighted. \end{lemma} \begin{proof} Let $T \in \mathbb{S}$. For $s,t \in T$, we let $\rho(s,t)$ contain all $r^\frown i \in T_s \setminus T_t$ such that $r^\frown (1-i) \in T$ and where $\vert r \vert$ is minimal with this property. \end{proof} Recall that for finite trees $T_0$, $T_1$ we say that $T_1$ is an end-extension of $T_0$, written as $T_0 \sqsubset T_1$, if $T_0 \subsetneq T_1$ and for every $t \in T_1 \setminus T_0$ there is a terminal node $\sigma \in \term (T_0)$ so that $\sigma \subseteq t$. A node $\sigma \in T_0$ is called terminal if it has no proper extension in $T_0$. \begin{definition} Let $T$ be a perfect tree, $\rho$ a weight on $T$ and $T_0, T_1$ finite subtrees of $T$. Then we write $T_0 \lhd_\rho T_1$ iff $T_0 \sqsubset T_1$ and \begin{multline*} \forall \sigma \in \term(T_0) \exists N \geq 2 \exists \langle s_i\rangle_{i < N} \in ((T_{1})_\sigma)^N \text{ injective } \\ \bigl( s_0 = \sigma \wedge s_{N-1} \in \term(T_{1}) \wedge \forall i < N-1 (\rho(s_i, s_{i+1}) \subseteq T_{1}) \bigr).\tag{$*_0$} \end{multline*} \end{definition} \begin{lemma} \label{lem:basicweight} Let $T$ be a perfect tree, $\rho$ a weight on $T$ and $\langle T_n : n \in \omega \rangle$ be a sequence of finite subtrees of $T$ so that $T_n \lhd_\rho T_{n+1}$ for every $n \in \omega$. Then $\bigcup_{n \in \omega} T_n \leq_\rho T$. \end{lemma} \begin{proof} Let $S := \bigcup_{n \in \omega} T_n$. To see that $S \leq_\rho T$ note that every $s \in S$ lies in some $T_n$ and extends to an element of $\term(T_n)$, so that $\bigcup_{n\in \omega} \term(T_n)$ is dense in $S$. Let $\sigma \in \term(T_n)$ for some $n \in \omega$, then let $s_0, \dots, s_{N_0-1}$ be as in $(*_0)$ for $T_n,T_{n+1}$. Since $s_{N_0-1} \in \term(T_{n+1})$ we again find $s_{N_0}, \dots, s_{N_1-1}$ so that $s_{N_0-1}, \dots, s_{N_1-1}$ is as in $(*_0)$ for $T_{n+1}, T_{n+2}$. Continuing like this, we find a sequence $\langle s_i : i \in \omega \rangle$ in $S$ starting with $s_0 = \sigma$ so that $\rho(s_i, s_{i+1}) \subseteq S$ for all $i \in \omega$, as required. \end{proof} \begin{lemma} \label{lem:wghtdenseset} Let $T$ be a perfect tree, $\rho$ a weight on $T$ and $T_0$ a finite subtree of $T$. Moreover, let $k \in \omega$ and $D \subseteq (T)^k$ be dense open. Then there is $T_1 \rhd_\rho T_0$ so that \begin{multline*} \forall \{\sigma_0, \dots, \sigma_{k -1}\} \in [\term(T_{0})]^{k} \forall \sigma_0', \dots, \sigma_{k -1}' \in \term(T_{1}) \\ \left(\forall l < k (\sigma_l \subseteq \sigma_l') \rightarrow (\sigma_0', \dots, \sigma_{k -1}') \in D\right).\tag{$*_1$} \end{multline*} \end{lemma} \begin{proof} First let us enumerate $\term(T_0)$ by $\sigma_0, \dots, \sigma_{K-1}$. We put $s_0^l = \sigma_{l}$ for each $l < K$. Next find for each $l < K$, $s_1^l \in T, s_0^l \subsetneq s_1^l$ above a splitting node in $T_{s_0^l}$. Moreover we find $s_2^l \in T_{s_0^l}$ so that $s_2^l \perp s_1^l$ and $s_2^l$ is longer than any node appearing in $\rho(s_0^l, s_1^l)$. This is possible since we chose $s_1^l$ to be above a splitting node in $T_{s_0^l}$. For each $l<K$ we let $\tilde{T}_{2}^l$ be the tree generated by (i.e. the downwards closure of) $\{s_1^l, s_2^l\} \cup \rho(s_0^l,s_1^l) \cup \rho(s_1^l,s_2^l)$.
Note that $s_2^l \in \term(\tilde{T}_{2}^l)$ as $\rho(s_1^l,s_2^l) \perp s_2^l$. Let us enumerate by $(f_j)_{2\leq j < N}$ all functions $f \colon K \to \{1,2\}$ starting with $f_2$ the constant function mapping to $1$. We are going to construct recursively a sequence $\langle \tilde{T}_j^l : 2 \leq j \leq N \rangle$ where $\tilde{T}_j^l \sqsubseteq \tilde{T}_{j+1}^l$, and $\langle s_j^l : 2 \leq j \leq N \rangle$ without repetitions, for each $l< K$ such that at any step $j <N$: \begin{enumerate} \item for every $l < K$, $s_j^l \in \term(\tilde{T}_{j}^l)$ and $\begin{cases} s_2^l \subseteq s_j^l & \text{if } f_j(l) = 1 \\ s_1^l \subseteq s_j^l &\text{if } f_j(l) = 2. \end{cases}$ \item for any $\{l_i : i <k \} \in [K]^k$ and $(t_i)_{i<k}$ where $t_i \in \term(\tilde{T}_{j+1}^{l_i})$ and $\begin{cases} s_1^{l_i} \subseteq t_{i} & \text{if } f_j(l_i) = 1 \\ s_1^{l_i} \perp t_{i} &\text{if } f_j(l_i) = 2 \end{cases}$ for every $i< k$, $(t_0, \dots, t_{k-1}) \in D$ \item for every $l < K$, $\rho(s_j^l, s_{j+1}^l ) \subseteq \tilde{T}_{j+1}^{l}$. \end{enumerate} Note that $(1)$ holds true at the initial step $j=2$ since $f_2(l) = 1$, $s_2^l \subseteq s_2^l$ and $s_2^l \in \term(\tilde{T}_2^l)$ for each $l < K$. Now suppose that for some $j < N$ we have constructed $\tilde{T}_j^l$ and $s_j^l$ for each $l$ with $(1)$ holding true. Then we proceed as follows. Let $\{ t^l_i : i < N_l\}$ enumerate $\{ t : t\in \term(\tilde{T}_j^l) \wedge s_1^l \subseteq t \text{ if } f_j(l) = 1 \wedge s_1^l \perp t \text{ if } f_j(l) = 2\} $ for each $l <K$. Now it is simple to find $r^l_i \in T$, $t^l_i \subseteq r^l_i$ for each $i <N_l, l< K$ so that $[\{r^l_i : i < N_l, l< K \}]^k \subseteq D$. Let $R_l$ be the tree generated by $\tilde{T}_{j}^l$ and $\{r^l_i : i < N_l\}$ for each $l <K$. It is easy to see that $\tilde{T}_{j}^l \sqsubseteq R_l$ since we only extended elements from $\term(\tilde{T}_j^l)$ (namely the $t_i^l$'s). Note that it is still the case that $s_j^l \in \term (R_l)$ since $s_j^l \perp t_i^l$ for all $i < N_l$. Next we choose $s_{j+1}^l$ extending an element of $\term (R_l)$, distinct from all previous choices and so that $s_2^l \subseteq s_{j+1}^l \text{ if } f_{j+1}(l) = 1$ and $s_1^l \subseteq s_{j+1}^l \text{ if } f_{j+1}(l) = 2.$ Taking $\tilde{T}_{j+1}^l$ to be the tree generated by $R_l \cup \{ s_{j+1}^l \} \cup \rho(s_j^l, s_{j+1}^{l})$ gives the next step of the construction. Again $R_l \sqsubseteq \tilde{T}_{j+1}^l$, as we only extended terminal nodes of $R_l$. Then $(3)$ obviously holds true and $s^l_{j +1} \in \term(\tilde{T}_{j+1}^l)$ since $\rho(s_j^l, s_{j+1}^{l}) \perp s^l_{j +1}$. It follows from the construction that $(2)$ holds true with each $\tilde{T}_{j+1}^l$ replaced by $R_l$. Since $R_l \sqsubseteq \tilde{T}_{j+1}^l$ we easily see that $(2)$ is satisfied. Finally we put $T_{1} = \bigcup_{l<K} \tilde T_{N}^l$. It is clear that $(*_0)$ is true, in particular that $T_0 \lhd_\rho T_1$. For $(*_1)$ let $\{l_i : i<k \} \in [K]^k$ be arbitrary and assume that $t_i \in \term (\tilde T_{N}^{l_i})$ for each $i < k$. Let $f \colon K \to \{1,2\}$ be so that for each $i < k$ if $s_1^{l_i} \subseteq t_i$ then $f(l_i) = 1$, and if $s_1^{l_i} \perp t_i$ then $f(l_i) = 2$. Then there is $j \in [2,N)$ so that $f_j = f$. Clause $(2)$ ensured that for initial segments $t_i' \subseteq t_i$ where $t_i' \in \term(\tilde{T}_{j+1}^{l_i})$, $(t_0',\dots, t_{k-1}' ) \in D$. In particular $(t_0,\dots, t_{k-1}) \in D$ which proves $(*_1)$.
\end{proof} \begin{prop} \label{prop:weightedmain} Let $M$ be a countable model of set theory, $R_l \in M$ a perfect tree and $\rho_l$ a weight on $R_l$ for every $l < k \in \omega$. Then there is $S_l \leq_{\rho_l} R_l$ for every $l < k$ so that any $\bar x_0, \dots, \bar x_{n-1} \in \prod_{l < k}[S_l]$ are $\langle [R_l] : l < k\rangle$-mutually Cohen generic over $M$. \end{prop} \begin{proof} Let $T := \{ \emptyset\} \cup \{ \langle l \rangle^\frown s : s \in R_l, l<k \}$ be the disjoint sum of the trees $R_l$ for $l < k$. Also let $\rho$ be a weight on $T$ extending arbitrarily the weights $\rho_l$ defined on the copy of $R_l$ in $T$. As $M$ is countable, let $(D_n,k_n)_{n\in \omega}$ enumerate all pairs $(D,m) \in M$ such that $D$ is a dense open subset of $T^m$ and $m \in \omega \setminus \{0\}$, with each such pair appearing infinitely often. Let us find a sequence $(T_n)_{n\in \omega}$ of finite subtrees of $T$, such that for each $n\in \omega$, $T_n \lhd_\rho T_{n+1}$ and \begin{multline*} \forall \{\sigma_0, \dots, \sigma_{k_n -1}\} \in [\term(T_{n})]^{k_n} \forall \sigma_0', \dots, \sigma_{k_n -1}' \in \term(T_{n+1}) \\ [\forall l < k_n (\sigma_l \subseteq \sigma_l') \rightarrow (\sigma_0', \dots, \sigma_{k_n -1}') \in D_n].\tag{$*_1$} \end{multline*} We start with $T_0 = k^{<2} = \{ \emptyset \} \cup \{ \langle l \rangle : l<k \}$ and then apply Lemma~\ref{lem:wghtdenseset} recursively. Let $S := \bigcup_{n \in \omega} T_n$. Then we have that $S \leq_\rho T$. \begin{claim} For any $m \in \omega$ and distinct $x_0, \dots, x_{m-1} \in [S]$, $(x_0, \dots, x_{m-1})$ is $T^m$-generic over $M$. \end{claim} \begin{proof} Let $D \subseteq T^m$ be open dense with $D \in M$. Then there is a large enough $n \in \omega$ with $(D_n, k_n) = (D,m)$ and $\sigma_0, \dots, \sigma_{m-1} \in \term(T_n)$ distinct such that $\sigma_0 \subseteq x_0, \dots, \sigma_{m-1} \subseteq x_{m-1}$. Then there are unique $\sigma'_0, \dots, \sigma'_{m-1} \in \term(T_{n+1})$ such that $\sigma'_0 \subseteq x_0, \dots, \sigma'_{m-1} \subseteq x_{m-1}$. By $(*_1)$, $(\sigma'_0, \dots, \sigma'_{m-1}) \in D$. \end{proof} Finally let $S_l = \{ s : \langle l \rangle^\frown s \in S \}$ and note that $S_l \leq_{\rho_l} R_l$ for every $l < k$. The above claim clearly implies the statement of the proposition. \end{proof} \begin{remark}\label{rem:mgpspl} Proposition~\ref{prop:weightedmain} implies directly the main result of \cite{Spinas2007}. A modification of the above construction for splitting forcing can be used to show that for $T \in M$, we can in fact find a master condition $S \leq T$ so that for any distinct $x_0,\dots, x_{n-1} \in [S]$, $(x_0, \dots, x_{n-1})$ is $\mathbb{SP}^n$-generic over $M$. In that case $(S,\dots, S) \in \mathbb{SP}^n$ is a $\mathbb{SP}^n$-master condition over $M$. We won't provide a proof of this since our only application is Corollary~\ref{cor:minimality} below, which seems to be implicit in \cite{Spinas2007}. The analogous statement for Sacks forcing is a standard fusion argument. \end{remark} \iffalse \begin{cor} \label{cor:genreal} Let $\mathbb{P}$ be a weighted tree forcing and let $G$ be $\mathbb{P}$-generic over $V$. Then $G = \{ S \in \mathbb{P} \cap V : x_G \in [S] \}$. Thus we may write $V[x_G]$ instead of $V[G]$. \end{cor} \begin{proof} Obviously $G \subseteq H := \{ S \in \mathbb{P} \cap V : x_G \in [S] \}$. Suppose that $S \in H \setminus G$ and $T \in G$ is such that $T \Vdash S \in \dot H \setminus \dot G$. Let $M \preccurlyeq H(\theta)$ be so that $T,S, \mathbb{P} \in M$, for $\theta$ large enough.
By Proposition~\ref{prop:weightedmain}, there is $T' \leq T$ so that any $x \in [T']$ is Cohen generic in $[T]$ over $M$. If there is some $x \in [T'] \cap [S]$, then there is $t \subseteq x$ so that $M \models t \Vdash_{T} \dot c \in [S]$, where $\dot c$ is a name for the generic branch added by $T$. But then $(T')_t \subseteq S$ contradicting that $(T')_t \Vdash S \notin G$. Thus $[T'] \cap [S] = \emptyset$, implying that $T' \Vdash S \notin H$. Again this is a contradiction. \end{proof} \fi \begin{cor} \label{cor:minreal} Let $\mathbb{P}$ be a weighted tree forcing with continuous reading of names. Then $\mathbb{P}$ adds a minimal real. In fact for any $\mathbb{P}$-generic $G$, if $y \in 2^\omega \cap V[G] \setminus V$, then there is a continuous map $f \colon 2^\omega \to A^\omega$ in $V$ so that $x_G = f(y)$. \end{cor} \begin{proof} Using the continuous reading of names let $T \in G$ be so that there is a continuous map $g \colon [T] \to 2^\omega$ with $T \Vdash \dot y = g(x_G)$. It is easy to see from the definition that in any weighted tree forcing, the set of finitely branching trees is dense.\footnote{In particular, weighted tree forcings with continuous reading of names are $\omega^\omega$-bounding.} Thus, let us assume that $[T]$ is compact. Moreover let $M$ be countable elementary with $g, T \in M$. Now let $S \leq T$ be so that any $x_0,x_1 \in [S]$ are $[T]$-mCg over $M$. Suppose that there are $x_0 \neq x_1 \in [S]$, with $g(x_0) = g(x_1)$. Then there must be $s \subseteq x_0$ and $t \subseteq x_1$, so that $M \models (s,t) \Vdash_{T^2} g(\dot c_0) = g(\dot c_1)$, where $\dot c_0$, $\dot c_1$ are names for the generic branches added by $T^2$. But then note that for any $x \in [S_{t}]$, since $x$ and $x_0$ are mCg and $s \subseteq x_0$, $t \subseteq x$, we have that $g(x) = g(x_0)$. In particular $g$ is constant on $[S_t]$ and $S_t \Vdash g(x_G) = g(\check x_0) \in V$. On the other hand, if $g$ is injective on $[S]$, then $g^{-1}$ is continuous as $[S]$ is compact and it is easy to extend $g^{-1}$ to a continuous function $f \colon 2^\omega \to A^\omega$. \end{proof} \begin{cor}\label{cor:minimality} $V^{\mathbb{SP}}$ is a minimal extension of $V$, i.e. whenever $W$ is a model of ZFC so that $V \subseteq W \subseteq V^{\mathbb{SP}}$, then $W = V$ or $W = V^{\mathbb{SP}}$. \end{cor} \begin{proof} Let $G$ be an $\mathbb{SP}$-generic filter over $V$. It suffices to show that if $\langle \alpha_\xi : \xi < \delta \rangle \in W \setminus V$ is an increasing sequence of ordinals, then $x_G \in W$ (see also \cite[Theorem 13.28]{Jech2013}). So let $ \langle \dot \alpha_\xi : \xi < \delta \rangle$ be a name for such a sequence of ordinals and $T \in \mathbb{SP}$ be such that $T \Vdash \langle \dot \alpha_\xi : \xi < \delta \rangle \notin V$. Note that this is in fact equivalent to saying that $(T,T) \Vdash_{\mathbb{SP}^2} \langle \dot \alpha_\xi[\dot x_0] : \xi < \delta \rangle \neq \langle \dot \alpha_\xi[\dot x_1] : \xi < \delta \rangle$, where $\dot x_0, \dot x_1$ are names for the generic reals added by $\mathbb{SP}^2$. Let $M$ be a countable elementary model so that $T, \langle \dot \alpha_\xi : \xi < \delta \rangle \in M$ and let $T' \leq T$ be a master condition over $M$ as in Remark~\ref{rem:mgpspl}. Then also $T' \Vdash \langle \dot \alpha_\xi : \xi \in \delta \cap M \rangle \notin V$.
Namely, suppose towards a contradiction that there are $x_0, x_1 \in [T']$ generic over $V$ so that $\langle \dot \alpha_\xi[x_0] : \xi \in \delta \cap M \rangle = \langle \dot \alpha_\xi[x_1] : \xi \in \delta \cap M \rangle$, then $(x_0,x_1)$ is $\mathbb{SP}^2$-generic over $M$ and $M[x_0][x_1] \models \langle \dot \alpha_\xi[x_0] : \xi < \delta \rangle = \langle \dot \alpha_\xi[x_1] : \xi < \delta \rangle$ which yields a contradiction to the sufficient elementarity of $M$. Since $T' \Vdash \langle \dot \alpha_\xi : \xi \in \delta \cap M \rangle \subseteq M$ we can view $\langle \dot \alpha_\xi : \xi \in \delta \cap M \rangle$ as a name for a real, for $M$ is countable. Back in $W$, we can define $\langle \alpha_\xi : \xi \in \delta \cap M \rangle$ since $M \in V \subseteq W$. But then, applying Corollary~\ref{cor:minreal}, we find that $x_G \in W$. \end{proof} \subsection{The countable support iteration} Recall that for any perfect subtree $T$ of $2^{<\omega}$, $\splt(T)$ is order-isomorphic to $2^{<\omega}$ in a canonical way, via a map $\eta_T \colon \splt(T) \to 2^{<\omega}$. This map induces a homeomorphism $\tilde \eta_T \colon [T] \to 2^\omega$ and note that the value of $\tilde \eta_T(x)$ depends continuously on $T$ and $x$. Whenever $\rho$ is a weight on $T$, $\eta_T$ also induces a weight $\tilde \rho$ on $2^{<\omega}$, so that whenever $S \leq_{\tilde \rho} 2^{<\omega}$, then $\eta_T^{-1}(S)$ generates a tree $S'$ with $S' \leq_{\rho} T$. Let $\langle \mathbb{P}_\beta, \dot{\mathbb{Q}}_\beta : \beta < \lambda \rangle$ be a countable support iteration where for each $\beta < \lambda$, $\Vdash_{\mathbb{P}_\beta} \dot{\mathbb{Q}}_\beta \in \{ \mathbb{SP},\mathbb{S}\}$. We fix in this section a $\mathbb{P}_\lambda$ name $\dot y$ for an element of a Polish space $X$, a good master condition $\bar p \in \mathbb{P_\lambda}$ over a countable model $M_0$, where $\dot y, X \in M_0$, and let $C\subseteq \lambda$ be a countable set as in Lemma~\ref{lem:intermediategoodanalytic}. For every $\beta \in C$ and $\bar y \in [\bar p]\restriction (C \cap\beta)$, let us write $$T_{\bar y} = \{s \in 2^{<\omega} : \exists \bar x \in [\bar p] \left[ \bar x \restriction (C \cap\beta) = \bar y \wedge s \subseteq x(\beta)\right] \}.$$ According to Lemma~\ref{lem:intermediategoodanalytic}, the map $\bar y \mapsto T_{\bar y}$ is a continuous function from $[\bar p]\restriction (C \cap\beta)$ to $\mathcal{T}$. Let $\alpha := \otp(C) < \omega_1$ as witnessed by an order-isomorphism $\iota \colon \alpha \to C$. Then we define the homeomorphism $\Phi\colon [\bar p] \restriction C \to (2^\omega)^\alpha$ so that for every $\bar y \in [\bar p] \restriction C$ and every $\delta < \alpha$, $$\Phi(\bar y) \restriction (\delta + 1) = \Phi(\bar y) \restriction \delta^\frown \tilde \eta_{T_{\bar y \restriction \iota(\delta)}}(y(\iota(\delta))).$$ Note that for $\mathbb{P} \in \{ \mathbb{SP}, \mathbb{S}\}$, the map sending $T \in \mathbb{P}$ to the weight $\rho_T$ defined in Lemma~\ref{lem:spweighted} or Lemma~\ref{lem:sweighted} is a Borel function from $\mathbb{P}$ to the Polish space of partial functions from $(2^{<\omega})^2$ to $[2^{<\omega}]^{<\omega}$. Thus for $\beta \in C$ and $\bar x \in [\bar p] \restriction (C \cap\beta)$, letting $\rho_{\bar x} := \rho_{T_{\bar x}}$, we get that $\bar x \mapsto \rho_{\bar x}$ is a Borel function on $[\bar p] \restriction (C \cap \beta)$. 
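For orientation, let us note how $\Phi$ acts in the first coordinate; this is merely an unwinding of the above definitions and introduces no additional assumptions. Writing $T_{\emptyset}$ for $T_{\bar y \restriction \iota(0)}$, which does not depend on $\bar y$, we have for every $\bar y \in [\bar p] \restriction C$ that $$\Phi(\bar y)(0) = \tilde \eta_{T_{\emptyset}}\bigl(y(\iota(0))\bigr), \qquad \text{where } T_{\emptyset} = \{ s \in 2^{<\omega} : \exists \bar x \in [\bar p]\, (s \subseteq x(\iota(0))) \}.$$ In general, the $\delta$-th coordinate of $\Phi(\bar y)$ is obtained in the same way from $y(\iota(\delta))$, with $T_{\emptyset}$ replaced by $T_{\bar y \restriction \iota(\delta)}$, which now depends on $\bar y \restriction \iota(\delta)$.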
For each $\delta < \alpha$ and $\bar y \in (2^\omega)^\delta$, we may then define $\tilde \rho_{\bar y}$ a weight on $2^{<\omega}$, induced by $\rho_{\bar x}$ and $\eta_{T_{\bar x}}$, where $\bar x = \Phi^{-1}(\bar y^\frown \bar z) \restriction \beta$ for arbitrary, equivalently for every, $\bar z \in (2^\omega)^{\alpha \setminus \delta}$. The map sending $\bar y \in (2^\omega)^\delta$ to $\tilde \rho_{\bar y}$ is then Borel as well. \begin{lemma} \label{lem:mCgforiteration} Let $M_1$ be a countable elementary model with $M_0, \bar p, \mathbb{P}_\lambda \in M_1$ and let $\bar s \in \bigotimes_{i < \alpha} 2^{<\omega}$. Then there is $\bar q \leq \bar p$, a good master condition over $M_0$, so that \begin{multline*} \forall \bar x_0, \dots, \bar x_{n-1} \in [\bar q] \big(\Phi(\bar x_0 \restriction C), \dots, \Phi(\bar x_{n-1} \restriction C) \in (2^\omega)^\alpha \cap [\bar s] \\ \text{ are strongly }\langle 2^\omega : i < \alpha \rangle \text{-mCg } \text{ over } M_1\big). \end{multline*} Moreover $[\bar q] \restriction C$ is a closed subset of $[\bar p] \restriction C$ and $[\bar q] = ([\bar q] \restriction C) \times (2^\omega)^{\lambda \setminus C}$ (cf. Lemma~\ref{lem:goodmaster2}). \end{lemma} \begin{proof} We can assume without loss of generality that $\bar s = \emptyset$, i.e. $[\bar s] = (2^\omega)^\alpha$. It will be obvious that this assumption is inessential. Next, let us introduce some notation. For any $\delta \leq \alpha$ and $\bar y_0, \dots, \bar y_{n-1} \in (2^\omega)^\delta$, recall from Definition~\ref{def:Delta} that $$ \Delta(\bar y_0, \dots, \bar y_{n-1}) := \{ \Delta_{\bar y_i, \bar y_j} : i \neq j < n \} \cup \{ 0, \delta\}.$$ Let us write $$\tp(\bar y_0, \dots, \bar y_{n-1}) := (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle),$$ where $\{ \xi_0 < \dots < \xi_k\} = \Delta(\bar y_0, \dots, \bar y_{n-1})$, $K_l = \vert \{ \bar y_i \restriction [\xi_l, \xi_{l+1}) : i < n \} \vert$ for every $l < k$ and $\langle U_i : i < n \rangle$ are the clopen subsets of $(2^\omega)^\delta$ of the form $U_i = [\bar s_i]$ for $\bar s_i \in \bigotimes_{\xi < \delta} 2^{<\omega}$ with $\dom(\bar s_i) = \{ \Delta_{\bar y_i, \bar y_j} : j < n, \bar y_j \neq \bar y_i \}$ and $\bar s_i$ minimal in the order of $\bigotimes_{\xi < \delta} 2^{<\omega}$ so that $$\bar y_i \in [\bar s_i] \text{ and } \forall j < n (\bar y_j \neq \bar y_i \rightarrow \bar y_j \notin [\bar s_i]),$$ for every $i < n$. Note that for any $\delta_0 \leq \delta$, if $$\tp(\bar y_0 \restriction \delta_0, \dots, \bar y_{n-1} \restriction \delta_0) := (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k' \rangle, \langle V_i : i < n \rangle),$$ then $V_i = U_i \restriction \delta_0$ for every $i < n$. 
Moreover, for any $\bar y'_0, \dots, \bar y'_{n-1} \in (2^\omega)^\delta$ with $$\tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle),$$ we have that $$\tp(\bar y'_0 \restriction \delta_0, \dots, \bar y'_{n-1} \restriction \delta_0) = (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k' \rangle, \langle V_i : i < n \rangle).$$ Any $\bar y_0, \dots, \bar y_{n-1}$, with $\tp(\bar y_0, \dots, \bar y_{n-1}) := (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle)$, that are $\langle 2^\omega : i < \delta \rangle$-mutually Cohen generic over $M_1$ as witnessed by $\xi_0 < \dots < \xi_k$, induce a $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-generic and vice-versa. Thus whenever $\tau$ is a $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-name, we may write $\tau[\bar y_0, \dots, \bar y_{n-1}]$ for the evaluation of $\tau$ via the induced generic. It will not matter in what particular way we define the $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-generic from given $\bar y_0, \dots, \bar y_{n-1}$. We may stipulate for instance, that the generic induced by $\bar y_0, \dots, \bar y_{n-1}$ is $\langle \bar z_{l,j} : l < k, j < K_l \rangle$, where for each fixed $l < k$, $\langle \bar z_{l,j} : l < k, j < K_l \rangle$ enumerates $\{ \bar y_i \restriction [\xi_l, \xi_{l+1}) : i < n \}$ in lexicographic order. Let us get to the bulk of the proof. We will define a finite support iteration $\langle \mathbb{R}_{\delta}, \dot{\mathbb{S}}_\delta : \delta \leq \alpha \rangle$ in $M_1$, together with, for each $\delta \leq \alpha$, an $\mathbb{R}_{\delta}$-name $\dot X_\delta$ for a closed subspace of $(2^{\omega})^\delta$, where $\Vdash_{\mathbb{R}_{\delta_1}} \dot X_{\delta_0} = \dot X_{\delta_1} \restriction \delta_0$ for every $\delta_0 < \delta_1 \leq \alpha$. This uniquely determines the limit steps of the construction. Additionally we will make the following inductive assumptions $(1)_\delta$ and $(2)_\delta$ for all $\delta \leq \alpha$ and any $\mathbb{R}_\delta$-generic $G$. Let $\bar y_0, \dots, \bar y_{n-1} \in \dot X_{\delta}[G]$ be arbitrary and $\tp(\bar y_0, \dots, \bar y_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle)$. Then \begin{enumerate}[$(1)_\delta$] \item $\bar y_0, \dots, \bar y_{n-1}$ are strongly $\langle 2^\omega : i < \delta \rangle$-mCg over $M_1$, \end{enumerate} \begin{enumerate}[$(1)_\delta$] \setcounter{enumi}{1} \item and for any $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-name $\dot D \in M_1$ for an open dense subset of a countable poset $\mathbb{Q} \in M_1$, \begin{multline*} \bigcap \Bigl\{ \dot D[\bar y'_0, \dots, \bar y'_{n-1}] : \bar y'_0, \dots, \bar y'_{n-1} \in X_\delta, \\ \tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle) \Bigr\} \end{multline*} is open dense in $\mathbb{Q}$. \end{enumerate} Having defined $\mathbb{R}_\delta$ and $\dot{X}_\delta$, for $\delta < \alpha$, we proceed as follows. Fix for now $G$ an $\mathbb{R}_\delta$-generic over $M_1$ and $X_\delta := \dot{X}_\delta[G]$. 
Then we define a forcing $\mathbb{S}_\delta \in M_1[G]$ which generically adds a continuous map $F \colon X_\delta \to \mathcal{T}$, so that for each $\bar y \in X_\delta$, $S_{\bar y} := F(\bar y) \leq_{\tilde \rho_{\bar y}} 2^{<\omega}$. In $M_1[G][F]$, we then define $X_{\delta+1} \subseteq (2^\omega)^{\delta +1}$ to be $\{ \bar y^\frown z : \bar y \in X_\delta, z \in [S_{\bar y}] \}$. The definition of $\mathbb{S}_\delta$ is as follows. Work in $M_1[G]$. Since the map $\bar y \in (2^\omega)^\delta \mapsto \tilde \rho_{\bar y}$ is Borel and an element of $M_1$, and since by $(1)_\delta$ every $\bar y \in X_\delta$ is Cohen generic over $M_1$, this map is continuous on $X_\delta$. Since $X_\delta$ is compact we find a single weight $\tilde \rho$ on $2^{<\omega}$, so that $\tilde \rho_{\bar y} \subseteq \tilde \rho$ for every $\bar y \in X_\delta$. Let $\{ O_s : s \in 2^{<\omega} \}$ be a basis of $X_\delta$ so that $O_s \subseteq O_t$ for $t \subseteq s$ and $O_s \cap O_t = \emptyset$ for $s \perp t$. This is possible since $X_\delta$ is homeomorphic to $2^\omega$. Let $\mathcal{FT}$ be the set of finite subtrees of $2^{<\omega}$. Then $\mathbb{S}_\delta$ consists of functions $h \colon 2^{\leq n} \to \mathcal{FT}$, for some $n \in \omega$, so that for every $s \subseteq t \in 2^{\leq n}$, $h(s) \unlhd_{\tilde \rho} h(t)$. The extension relation is defined by function extension. Note that $\mathbb{S}_\delta$ is indeed a forcing poset with trivial condition $\emptyset$. Given $H$, an $\mathbb{S}_\delta$-generic over $M_1[G]$, we let $F \colon X_\delta \to \mathcal{T}$ be defined as $$F(\bar y) := \bigcup_{\substack{s \in 2^{<\omega}, \bar y \in O_s \\ h \in H}} h(s).$$ \begin{claim} For every $\bar y \in X_\delta$, $F(\bar y) = S_{\bar y} \leq_{\tilde \rho} 2^{<\omega}$, in particular $S_{\bar y} \leq_{\tilde \rho_{\bar y}} 2^{<\omega}$. For any $\bar y_0 \neq \bar y_1 \in X_\delta$, $[S_{\bar y_0}] \cap [S_{\bar y_1}] = \emptyset$. Any $z_0, \dots, z_{n-1} \in \bigcup_{\bar y \in X_\delta}[S_{\bar y}]$ are $2^\omega$-mutually Cohen generic over $M_1[G]$. And for any countable poset $\mathbb{Q} \in M_1$, any $m \in \omega$ and any dense open $E \subseteq (2^{<\omega})^n \times \mathbb{Q}$ in $M_1[G]$, there are $r \in \mathbb{Q}$ and $m_0 \geq m$ so that for any $z_0, \dots, z_{n-1} \in \bigcup_{\bar y \in X_\delta}[S_{\bar y}]$ where $z_0 \restriction m, \dots, z_{n-1} \restriction m$ are pairwise distinct, $( (z_0 \restriction m_0, \dots, z_{n-1} \restriction m_0), r) \in E$. \end{claim} \begin{proof} We will make a genericity argument over $M_1[G]$. Let $h \in \mathbb{S}_\delta$ be arbitrary. Then it is easy to find $h' \leq h$, say with $\dom(h') = 2^{\leq a_0}$, so that for every $s \in 2^{a_0}$ and every $t \in \term(h'(s))$, $\vert t \vert \geq m$. For the first claim, it suffices by Lemma~\ref{lem:basicweight} to find $h'' \leq h'$, say with $\dom(h'') = 2^{\leq a_1}$, $a_0 < a_1$, so that for every $s \in 2^{a_0}$ and $t \in 2^{a_1}$, with $s \subseteq t$, $h''(s) \lhd_{\tilde \rho} h''(t)$. Finding $h''$ so that additionally $\term(h''(t_0)) \cap \term(h''(t_1)) = \emptyset$ for every $t_0 \neq t_1 \in 2^{a_1}$ proves the second claim.
For the last two claims, given a fixed dense open subset $E \subseteq (2^{<\omega})^n \times \mathbb{Q}$ in $M_1[G]$, it suffices to find $r \in \mathbb{Q}$ and to ensure that for any pairwise distinct $s_0, \dots, s_{n-1} \in \bigcup_{s \in 2^{a_0}} \term(h''(s))$ and $t_0 \supseteq s_0, \dots, t_{n-1} \supseteq s_{n-1}$ with $t_0, \dots, t_{n-1} \in \bigcup_{t \in 2^{a_1}} \term(h''(t))$, $((t_0, \dots, t_{n-1}), r) \in E$. Then we may put $m_0 = \max \{ \vert t \vert : t \in \bigcup_{s \in 2^{a_1}} \term(h''(s)) \}$. We may also assume wlog that $\mathbb{Q} = 2^{<\omega}$. To find such $h''$ we apply Lemma~\ref{lem:wghtdenseset} as in the proof of Proposition~\ref{prop:weightedmain}. More precisely, for every $s \in 2^{a_0}$, we find $T^0_s, T^1_s \rhd_{\tilde \rho} h'(s)$, and we find $T \subseteq 2^{<\omega}$ finite, so that for any pairwise distinct $s_0, \dots, s_{n-1} \in \bigcup_{s \in 2^{a_0}} \term(h'(s))$, any $t_0 \supseteq s_0, \dots, t_{n-1} \supseteq s_{n-1}$ with $t_0, \dots, t_{n-1} \in \bigcup_{s \in 2^{a_0}, i \in 2} \term(T^i_s)$ and any $\sigma \in \term(T)$, $((t_0, \dots, t_{n-1}), \sigma ) \in E$ and $\term (T_s^i) \cap \term (T_{t}^j) = \emptyset$ for every $i, j \in 2$, $s, t \in 2^{a_0}$. Then simply define $h'' \leq h'$ with $\dom(h'') = 2^{a_0 +1}$, where $h''(s^\frown i) = T_s^i$ for $s \in 2^{a_0}$, $i \in 2$. \end{proof} The function $F$ is obviously continuous and $X_{\delta +1 }$ is a closed subset of $(2^\omega)^{\delta +1}$, with $X_{\delta +1} \restriction \delta_0 = (X_{\delta +1} \restriction \delta) \restriction \delta_0 = X_\delta \restriction \delta_0 = X_{\delta_0}$ for every $\delta_0 < \delta + 1$. \begin{proof}[Proof of $(1)_{\delta +1}$, $(2)_{\delta +1}$] Let $G$ be $\mathbb{R}_{\delta +1}$ generic over $M_1$ and $\bar y_0, \dots, \bar y_{n-1} \in \dot X_{\delta +1}[G] = X_{\delta +1}$ be arbitrary. By the inductive assumption we have that $\bar y_0 \restriction \delta, \dots, \bar y_{n-1} \restriction \delta$ are strongly $\langle 2^\omega : i < \delta \rangle$-mCg over $M_1$. By the above claim, whenever $\bar y_i \restriction \delta \neq \bar y_j \restriction \delta$, then $\bar y_i(\delta) \neq \bar y_j(\delta)$. Thus, for $(1)_{\delta +1}$, we only need to show that $\bar y_0, \dots, \bar y_{n-1}$ are mCg. Let $\tp(\bar y_0, \dots, \bar y_{n-1}) = (\langle \xi_{l} : l < k \rangle,\langle K_{l} : l < k \rangle, \langle U_i : i < n \rangle )$, $\tp(\bar y_0 \restriction \delta , \dots, \bar y_{n-1} \restriction \delta) = (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k \rangle, \langle U_i\restriction \delta : i < n \rangle)$ and $n' = \vert \{ y_i(\delta) : i < n \} \vert = K_{k-1}$. Then we may view a dense open subset of $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$ as a $\prod_{l < k'} (\bigotimes_{\xi \in [\eta_l,\eta_{l+1})} 2^{<\omega})^{M_l}$-name for a dense open subset of $(2^{<\omega})^{n'}$. To this end, let $\dot D \in M_1$ be a $\prod_{l < k'} (\bigotimes_{\xi \in [\eta_l,\eta_{l+1})} 2^{<\omega})^{M_l}$ name for a dense open subset of $(2^{<\omega})^{n'}$. 
Then we have, by $(2)_\delta$, that \begin{multline*} \tilde D = \bigcap \Bigl\{ \dot D[\bar y'_0, \dots, \bar y'_{n-1}] : \bar y'_0, \dots, \bar y'_{n-1} \in X_\delta, \\ \tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k' \rangle, \langle U_i \restriction \delta : i < n \rangle) \Bigr\} \end{multline*} is a dense open subset of $(2^{<\omega})^{n'}$ and $\tilde D \in M_1[G\restriction \delta]$. By the above claim, $y_0(\delta), \dots, y_{n-1}(\delta)$ are mCg over $M_1[G\restriction \delta]$ in $2^\omega$. Altogether, this shows that $\bar y_0, \dots, \bar y_{n-1}$ are $\langle 2^\omega : i < \delta +1 \rangle$-mCg over $M_1$. For $(2)_{\delta+1}$, let $\dot D \in M_1$ now be a $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-name for a dense open subset of $\mathbb{Q}$. Consider a name $\dot E$ in $M_1$ for the dense open subset of $(2^{<\omega})^{n'} \times \mathbb{Q}$, where for any $\bar y'_0, \dots, \bar y'_{n-1} \in X_{\delta}$, with $\tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k \rangle, \langle U_i\restriction \delta : i < n \rangle)$, \begin{multline*} \dot E[\bar y'_0 , \dots, \bar y'_{n-1}] = \{ (\bar t,r) : M_1[\bar y'_0, \dots, \bar y'_{n-1}] \models \\\bar t \Vdash r \in \dot D[\bar y'_0, \dots, \bar y'_{n-1}][\dot z_0, \dots, \dot z_{n'-1}]\}, \end{multline*} where $(\dot z_0, \dots, \dot z_{n'-1})$ is a name for the $(2^{<\omega})^{n'}$-generic. By $(2)_\delta$, we have that \begin{multline*} \tilde E = \bigcap \Bigl\{ \dot E[\bar y'_0, \dots, \bar y'_{n-1}] : \bar y'_0, \dots, \bar y'_{n-1} \in X_{\delta}, \\ \tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \eta_l : l \leq k' \rangle, \langle M_l : l < k \rangle, \langle U_i\restriction \delta : i < n \rangle) \Bigr\} \end{multline*} is a dense open subset of $(2^{<\omega})^{n'} \times \mathbb{Q}$ and $\tilde E \in M_1[G\restriction \delta]$. Let $m \in \omega$ be large enough so that for any $i,j < n$, if $U_i \neq U_j$, then $\forall \bar y'_i \in U_i \cap X_{\delta+1}, \bar y'_j \in U_j \cap X_{\delta +1} (y'_i(\delta) \restriction m \neq y'_j(\delta)\restriction m)$. To see that such $m$ exists, note that if $U_i \neq U_j$, then $U_i \cap X_{\delta+1}$ and $U_j \cap X_{\delta +1}$ are disjoint compact subsets of $X_{\delta +1}$. By the claim, there is $r \in \mathbb{Q}$ and $m_0 \geq m$ so that for any $z_0, \dots, z_{n'-1} \in \bigcup_{\bar y \in X_\delta}[S_{\bar y}]$, if $z_0 \restriction m, \dots, z_{n'-1} \restriction m$ are pairwise different, then $((z_0 \restriction m_0, \dots, z_{n'-1} \restriction m_0), r ) \in \tilde E$. Altogether we find that \begin{multline*} r \in \bigcap \Bigl\{ \dot D[\bar y'_0, \dots, \bar y'_{n-1}] : \bar y'_0, \dots, \bar y'_{n-1} \in X_{\delta+1}, \\ \tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle) \Bigr\}. \end{multline*} Of course the same argument can be carried out below any condition in $\mathbb{Q}$, showing that this set is dense. That it is open is also clear since it is the intersection of open subsets of a partial order. \end{proof} Now let $\delta \leq \alpha$ be a limit ordinal. \begin{proof}[Proof of $(1)_\delta$ and $(2)_{\delta}$.] 
Let $G$ be $\mathbb{R}_{\delta}$-generic over $M_1$, $\bar y_0, \dots, \bar y_{n-1} \in \dot X_\delta[G] = X_\delta$, this time wlog pairwise distinct, and $\tp(\bar y_0, \dots, \bar y_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle)$. We will make a genericity argument over $M_1$ to show $(1)_\delta$ and $(2)_\delta$. To this end, let $D_0 \subseteq \prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$ be dense open, $D_0 \in M_1$, and let $\dot D_1 \in M_1$ be a $\prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l}$-name for a dense open subset of $\mathbb{Q}$. Then consider the dense open subset $D_2 \subseteq \prod_{l < k} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l} \times \mathbb{Q}$ in $M_1$, where $$D_2 = \{ (r_0, r_1) : r_0 \in D_0 \wedge r_0 \Vdash r_1 \in \dot D_1 \}.$$ Also let $\bar h_0 \in G$ be an arbitrary condition so that $$\bar h_0 \Vdash \forall i < n (U_i \cap \dot X_\delta \neq \emptyset).$$ Then there is $\delta_0 < \delta$ so that $\supp(\bar h_0), \xi_{k-1}+1 \subseteq \delta_0$. We may equally well view $D_2$ as a $\prod_{l < k-1} (\bigotimes_{\xi \in [\xi_l,\xi_{l+1})} 2^{<\omega})^{K_l} \times (\bigotimes_{\xi \in [\xi_{k-1},\delta_0)} 2^{<\omega})^{K_{k-1}}$-name $\dot E \in M_1$ for a dense open subset $$E \subseteq (\bigotimes_{\xi \in [\delta_0,\xi_k)} 2^{<\omega})^{K_{k-1}} \times \mathbb{Q}= (\bigotimes_{\xi \in [\delta_0,\delta)} 2^{<\omega})^{n} \times \mathbb{Q}.$$ We follow again from $(2)_{\delta_0}$, that the set $\tilde E \in M_1[G \cap \mathbb{R}_{\delta_0}]$, where \begin{multline*} \tilde E = \bigcap \Bigl\{ \dot E[\bar y'_0, \dots, \bar y'_{n-1}] : \bar y'_0, \dots, \bar y'_{n-1} \in X_{\delta_0}, \\ \tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_0 < \dots < \xi_{k-1} < \delta_0 \rangle, \langle K_l : l < k \rangle, \langle U_i \restriction \delta_0 : i < n \rangle) \Bigr\}, \end{multline*} is dense open. Let $((\bar t_0, \dots, \bar t_{n-1}), r) \in \tilde E$ be arbitrary and $\bar h_1 \in G \cap \mathbb{R}_{\delta_0}$, $\bar h_1 \leq \bar h_0$, so that $\bar h_1 \Vdash ((\bar t_0, \dots, \bar t_{n-1}), r) \in \tilde E$. Let us show by induction on $\xi \in [\delta_0, \delta)$, $\xi > \sup \left( \bigcup_{i < n} \dom(\bar t_i) \right)$, that there is a condition $\bar h_2 \in \mathbb{R}_\xi,$ $\bar h_2 \leq \bar h_1$, so that \begin{multline*} \bar h_2 \Vdash \forall \bar y'_0, \dots, \bar y'_{n-1} \in \dot X_\delta \big(\tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle) \\ \rightarrow \bar y'_0 \in [\bar t_0] \wedge \dots \wedge \bar y'_{n-1} \in [\bar t_{n-1}] \big) \end{multline*} and in particular, if $\bar h_2 \in G$, then for all $\bar y'_0, \dots, \bar y'_{n-1} \in X_\delta$ with $\tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle)$, the generic corresponding to $\bar y'_0, \dots, \bar y'_{n-1}$ hits $D_0$, and $r \in \dot D_1[\bar y'_0, \dots, \bar y'_{n-1}]$. Since $\bar h_0 \in G$ was arbitrary, genericity finishes the argument. The limit step of the induction follows directly from the earlier steps since if $\dom (\bar t_i) \subseteq \xi$, with $\xi$ limit, then there is $\eta < \xi$ so that $\dom (\bar t_i) \subseteq \eta$. So let us consider step $\xi +1$. 
Then there is, by the inductive assumption, $\bar h'_2 \in \mathbb{R}_\xi$, $\bar h'_2 \leq \bar h_1$, so that \begin{multline*} \bar h'_2 \Vdash \forall \bar y'_0, \dots, \bar y'_{n-1} \in \dot X_\delta \big(\tp(\bar y'_0, \dots, \bar y'_{n-1}) = (\langle \xi_l : l \leq k \rangle, \langle K_l : l < k \rangle, \langle U_i : i < n \rangle) \\ \rightarrow ( \bar y'_0 \in [\bar t_0 \restriction \xi] \wedge \dots \wedge \bar y'_{n-1} \in [\bar t_{n-1} \restriction \xi] \big). \end{multline*} Now extend $\bar h'_2$ to $\bar h''_2$ in $\mathbb{R}_\xi$, so that there is $m \in \omega$ such that for every $s \in 2^{m}$ and every $i < n$, either $\bar h''_2 \Vdash \dot O_{s} \subseteq U_i \restriction \xi$ or $\bar h''_2 \Vdash \dot O_{s} \cap (U_i \restriction \xi) = \emptyset$, where $\langle \dot O_s : s \in 2^{<\omega} \rangle$ is a name for the base of $\dot X_\xi$ used to define $\dot{\mathbb{S}}_\xi$. The reason why this is possible, is that in any extension by $\mathbb{R}_\xi$ and for every $i < n$, by compactness of $X_\xi \cap (U_i \restriction \xi)$, there is a finite set $a \subseteq 2^{<\omega}$ so that $X_\xi \cap (U_i \restriction \xi) = \bigcup_{s \in a} O_s$. Let us define $h \colon 2^{\leq m} \to \mathcal{FT}$, where $$h(s) =\begin{cases} \emptyset & \text{if } \forall i < n ( \bar h''_2 \Vdash \dot O_{s} \cap U_i \restriction \xi = \emptyset) \\ \{ t \in 2^{<\omega} : t \subseteq t_i(\xi) \} & \text{if } \bar h''_2 \Vdash \dot O_{s} \subseteq U_i \restriction \xi \text{ and } i < n. \end{cases} $$ Note that $h$ is well-defined as $(U_i \restriction \xi) \cap (U_j \restriction \xi) = \emptyset$ for every $i \neq j < n$. Since $\emptyset \unlhd_{\rho} T$ and $T \unlhd_{\rho} T$ for any weight $\rho$ and any finite tree $T$, we have that $\bar h''_2 \Vdash h \in \dot{\mathbb{S}}_\xi$ and $\bar h_2 = \bar h''_2{}^\frown h \in \mathbb{R}_{\xi+1}$ is as required. \end{proof} This finishes the definition of $\mathbb{R}_\alpha$ and $\dot X_\alpha$. Finally let $G$ be $\mathbb{R}_\alpha$-generic over $M_1$ and $X_\alpha = \dot X_\alpha[G]$. Now let us define $\bar q \leq \bar p$ recursively so that for every $\delta \leq \alpha$, $$\forall \bar x \in [\bar q] (\Phi(\bar x \restriction C) \restriction \delta \in X_\alpha \restriction \delta).$$ If $\beta \notin C$ we let $\dot q(\beta)$ be a name for the trivial condition $2^{<\omega}$, say e.g. $\dot q(\beta) = \dot p(\beta)$. If $\beta \in C$, say $\beta = \iota(\delta)$, we define $\dot q(\beta)$ to be a name for the tree generated by $$ \eta^{-1}_{T_{\bar x_{G} \restriction (C \cap \beta) }} (S_{\bar y}),$$ where $\bar x_{G}$ is the generic sequence added by $\mathbb{P}_\lambda$ and $\bar y = \Phi(\bar x_G \restriction C) \restriction \delta$. This ensures that $\bar q \restriction \beta \Vdash \dot q(\beta) \in \mathbb{Q}_\beta \wedge \dot q(\beta) \leq \dot p(\beta)$. Inductively we see that $\bar q \restriction \beta^\frown \bar p \restriction (\lambda \setminus \beta) \Vdash \Phi(\bar x_G \restriction C) \restriction \delta \in X_\alpha \restriction \delta$. Having defined $\bar q$, it is also easy to check that it is a good master condition over $M_0$, with $[\bar q] = \Phi^{-1}(X_\alpha) \times (2^{\omega})^{\lambda\setminus C}$. Since for every $\bar x \in [\bar q]$, $\Phi(\bar x \restriction C) \in X_\alpha$ and by $(1)_\alpha$, $\bar q$ is as required. 
\end{proof} \begin{prop} \label{prop:propheart} Let $E \subseteq [X]^{<\omega} $ be an analytic hypergraph on $X$, say $E$ is the projection of a closed set $F \subseteq [X]^{<\omega} \times \omega^\omega$, and let $f \colon [\bar p] \restriction C \to X$ be continuous so that $\bar p \Vdash \dot y = f(\bar x_G \restriction C)$ (cf. Lemma~\ref{lem:goodmaster2}). Then there is a good master condition $\bar q \leq \bar p$, with $[\bar q] \restriction C$ a closed subset of $[\bar p] \restriction C$ and $[\bar q] = ([\bar q] \restriction C) \times (2^\omega)^{\lambda \setminus C}$, a compact $E$-independent set $Y \subseteq X$, $N \in \omega$ and continuous functions $\phi \colon [\bar q] \restriction C \to [Y]^{<N}$, $w \colon [\bar q] \restriction C \to \omega^\omega $, so that \begin{enumerate}[(i)] \item either $f''([\bar q] \restriction C) \subseteq Y$, thus $\bar q \Vdash \dot y \in Y$, \item or $\forall \bar x \in [\bar q]\restriction C ( (\phi(\bar x) \cup \{ f(\bar x) \}, w(\bar x)) \in F)$, thus $\bar q \Vdash {\{\dot y\} \cup Y}$ is not $E\text{-independent}$. \end{enumerate} \end{prop} \begin{proof} On $(2^\omega)^\alpha$ let us define the analytic hypergraph $\tilde E$, where $$\{ \bar y_0, \dots, \bar y_{n-1} \} \in \tilde E \leftrightarrow \{ f(\Phi^{-1}(\bar y_0), \dots,f(\Phi^{-1}(\bar y_{n-1})) \} \in E.$$ By Main Lemma~\ref{lem:mainlemmainf}, there is a countable model $M$ and $\bar s \in \bigotimes_{i<\alpha} 2^{<\omega}$ so that either \begin{enumerate} \item for any $\bar y_0,\dots, \bar y_{n-1} \in (2^\omega)^\alpha \cap [\bar s]$ that are strongly $\langle 2^\omega : i < \alpha \rangle$-mCg over $M$, $\{\bar y_0,\dots, \bar y_{n-1}\} \text{ is } E\text{-independent}$, \end{enumerate} or for some $N\in \omega$, \begin{enumerate} \setcounter{enumi}{1} \item there are $\phi_0, \dots, \phi_{N-1} \colon (2^\omega)^\alpha \to (2^\omega)^\alpha$ continuous so that for any $\bar y_0,\dots, \bar y_{n-1} \in (2^\omega)^\alpha \cap [\bar s]$ that are strongly mCg over $M$, $\{\phi_j(\bar y_i) : j < N, i<n\}$ is $E$-independent but $\{ \bar y_0\} \cup \{ \phi_j(\bar y_0) : j < N \} \in E.$ \end{enumerate} Let $M_1$ be a countable elementary model with $M_0, M, \bar p, \mathbb{P}_\lambda \in M_{1}$ and apply Lemma~\ref{lem:mCgforiteration} to get the condition $\bar q \leq \bar p$. In case (1), let $Y := f''([\bar q] \restriction C)$. Then (i) is satisfied. To see that $Y$ is $E$-independent let $\bar x_0, \dots, \bar x_{n-1} \in [\bar q]$ be arbitrary and suppose that $\{{f(\bar x_0 \restriction C)}, \dots,f(\bar x_{n-1} \restriction C) \} \in E$. By definition of $\tilde E$ this implies that $\{ {\Phi(\bar x_0 \restriction C)}, \dots, \Phi(\bar x_{n-1} \restriction C) \} \in \tilde E$ but this is a contradiction to (1) and the conclusion of Lemma~\ref{lem:mCgforiteration}. In case (2), by elementarity, the $\phi_j$ are in $M_1$ and there is a continuous function $\tilde w \in M_1$, with domain some dense $G_\delta$ subset of $(2^\omega)^\alpha$, so that $\bar s \Vdash (\{f(\bar z), \phi_{j}(\bar z ) : j < N \}, \tilde w(\bar z)) \in F$, where $\bar z$ is a name for the Cohen generic. Let $\phi(\bar x) = \{ f(\Phi^{-1}(\phi_j(\Phi(\bar x)))) : j < N \}$, $w(\bar x) = \tilde w(\Phi(\bar x))$ for $\bar x \in [\bar q] \restriction C$ and $Y := \bigcup_{\bar x \in [\bar q] \restriction C} \phi(\bar x)$. Since $\Phi(\bar x)$ is generic over $M_1$, we indeed have that $(\phi(\bar x), w(\bar x)) \in F$ for every $\bar x \in [\bar q] \restriction C$. 
Seeing that $Y$ is $E$-independent is as before. \end{proof} \section{Main results and applications} \begin{thm}\label{thm:mainmaintheorem} (V=L) Let $\mathbb{P}$ be a countable support iteration of Sacks or splitting forcing of arbitrary length. Let $X$ be a Polish space and $E \subseteq [X]^{<\omega}$ be an analytic hypergraph. Then there is a $\mathbf{\Delta}^1_2$ maximal $E$-independent set in $V^\mathbb{P}$. If $X = 2^\omega$ or $X = \omega^\omega$, $r \in 2^\omega$ and $E$ is $\Sigma^1_1(r)$, then we can find a $\Delta^1_2(r)$ such set. \end{thm} \begin{proof} We will concentrate only on the case $X = 2^\omega$ since the rest follows easily from the fact that there is a Borel isomorphism from $2^\omega$ to any uncountable Polish space $X$, and if $X = \omega^\omega$ that isomorphism is (lightface) $\Delta^1_1$. If $X$ is countable, then the statement is trivial. Also, let us only consider splitting forcing. The proof for Sacks forcing is the same. First let us mention some well-known facts and introduce some notation. Recall that a set $Y \subseteq 2^\omega$ is $\Sigma^1_2(x)$-definable if and only if it is $\Sigma_1(x)$-definable over $H(\omega_1)$ (see e.g. \cite[Lemma 25.25]{Jech2013}). Also recall that there is a $\Sigma^1_1$ set $A \subseteq 2^{\omega} \times 2^{\omega}$ that is universal for analytic sets, i.e. for every analytic $B \subseteq 2^{\omega}$, there is some $x\in 2^\omega$ so that $B = A_x$, where $A_x = \{ y \in 2^\omega: (x,y) \in A \}$. In the same way, there is a universal $\Pi^0_1$ set $F \subseteq 2^\omega \times [2^\omega]^{<\omega} \times \omega^\omega$ (\cite[22.3, 26.1]{Kechris1995}). For any $x \in 2^\omega$, let $E_x$ be the analytic hypergraph on $2^\omega$ consisting of $a \in [2^{\omega}]^{<\omega} \setminus \{\emptyset\}$ so that there is $b \in [A_x]^{<\omega}$ with $a \cup b \in E$. Then there is $y \in 2^\omega$ so that $E_x$ is the projection of $F_y$. Moreover, it is standard to check, from the way $A$ and $F$ are defined, that $y$ can be taken to be $e(x,r)$ for some fixed recursive function $e$. Whenever $\alpha < \omega_1$ and $Z \subseteq (2^\omega)^\alpha$ is closed, it can be coded naturally by the set $S \subseteq \bigotimes_{i < \alpha} 2^{<\omega}$, where $$S = \{ (\bar x \restriction a) \restriction n : \bar x \in Z, a \in [\alpha]^{<\omega}, n \in \omega\}$$ and we write $Z = Z_S$. Similarly, any continuous function $f \colon Z \to \omega^\omega$ can be coded by a function $\zeta \colon S \to \omega^{<\omega}$, where $$f(\bar x) = \bigcup_{\bar s \in S, \bar x \in [\bar s]} \zeta(\bar s)$$ and we write $f = f_\zeta$. For any $\beta < \alpha$ and $\bar x \in Z \restriction \beta$, let us write $T_{\bar x, Z} = \{ s \in 2^{<\omega} : \exists \bar z \in Z (\bar z \restriction \beta = \bar x \wedge s\subseteq z(\beta))\}$. The set $\Psi_0$ of pairs $(\alpha,S)$, where $S$ codes a closed set $Z \subseteq (2^\omega)^\alpha$ so that for every $\beta < \alpha$ and $\bar x \in Z \restriction \beta$, $T_{\bar x,Z} \in \mathbb{SP}$, is then $\Delta_1$ over $H(\omega_1)$. This follows since the set of such $S$ is $\Pi^1_1$, seen as a subset of $\mathcal{P}(\bigotimes_{i < \alpha} 2^{<\omega})$, uniformly in $\alpha$. Similarly, the set $\Psi_1$ of triples $(\alpha, S, \zeta)$, where $(\alpha, S) \in \Psi_0$ and $\zeta$ codes a continuous function $f \colon Z_S \to \omega^\omega$, is $\Delta_1$.
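To illustrate the coding in the simplest case $\alpha = 1$ (a hedged reading, taking $\bigotimes_{i < \alpha} 2^{<\omega}$ to consist of finite partial functions into $2^{<\omega}$ and $(\bar x \restriction a) \restriction n$ to be the coordinatewise restriction to the first $n$ bits), a closed set $Z \subseteq 2^\omega$ is coded, up to the obvious identification, by its tree of initial segments, and one possible choice of $\zeta$ for a continuous $f \colon Z \to \omega^\omega$ is the canonical one: $$S = \{ x \restriction n : x \in Z,\ n \in \omega\}, \qquad \zeta(x \restriction n) = \text{the longest } t \in \omega^{<\omega} \text{ with } t \subseteq f(x') \text{ for all } x' \in Z \cap [x \restriction n],$$ for which $f = f_\zeta$ follows from the continuity of $f$.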
Now let $\langle \alpha_\xi, S_\xi, \zeta_\xi : \xi < \omega_1 \rangle$ be a $\Delta_1$-definable enumeration of all triples $(\alpha, S, \zeta)\in \Psi_1$. This is possible since we assume $V=L$ (cf. \cite[Theorem 25.26]{Jech2013}). Let us recursively construct a sequence $\langle x_\xi, y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi : \xi < \omega_1 \rangle$, where for each $\xi < \omega_1$, \begin{enumerate} \item $\bigcup_{\xi' < \xi } A_{x_{\xi'}} = A_{y_\xi}$ and $A_{y_\xi} \cup A_{x_\xi}$ is $E$-independent, \item $\bar \eta_\xi = \langle \eta_{\xi,j} : j <N \rangle$ for some $N \in \omega$, \item $T_\xi \subseteq S_\xi$, $(\alpha_\xi, T_\xi, \eta_{\xi,j}) \in \Psi_1$ for every $j < N$ and $(\alpha_\xi, T_\xi, \theta_{\xi}) \in \Psi_1$, \item either $\forall \bar x \in Z_{T_\xi} (f_{\zeta_\xi}(\bar x) \in A_{x_\xi})$ or $\forall \bar x \in Z_{T_\xi} \big(\forall n < N (f_{\eta_{\xi,n}}(\bar x) \in A_{x_\xi}) \wedge (\{ f_{\eta_{\xi,n}}(\bar x), f_{\zeta_\xi}(\bar x): n < N\},f_{\theta_\xi}(\bar x) ) \in F_{e(y_\xi,r)}\big)$, \end{enumerate} and $(x_\xi, y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi)$ is $<_L$-least such that (1)-(4), where $<_L$ is the $\Delta_1$-good global well-order of $L$. That $<_L$ is $\Delta_1$-good means that for every $z \in L$, the set $\{ z' : z' <_L z \}$ is $\Delta_1(z)$ uniformly on the parameter $z$. In particular, quantifying over this set does not increase the complexity of a $\Sigma_n$-formula. Note that (1)-(4) are all $\Delta_1(r)$ in the given variables. E.g. the second part of (1) is uniformly $\Pi^1_1(r)$ in the variables $x_\xi, y_\xi$, similarly for (4). \begin{claim} For every $\xi < \omega_1$, $(x_\xi, y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi)$ exists. \end{claim} \begin{proof} Assume we succeeded in constructing the sequence up to $\xi$. Then there is $y_\xi$ so that $\bigcup_{\xi' < \xi } A_{x_{\xi'}} = A_{y_\xi}$. By Lemma~\ref{lem:gettingacondition}, there is a good master condition $\bar r \in \mathbb{P}_{\alpha_\xi}$ so that $[\bar r] \subseteq Z_{S_\xi}$, where $\mathbb{P}_{\alpha_\xi}$ is the $\alpha_\xi$-long csi of splitting forcing. Then $f_{\zeta_\xi}$ corresponds to a $\mathbb{P}_{\alpha_\xi}$-name $\dot y$ so that $\bar r \Vdash \dot y = f_{\zeta_\xi}(\bar x_G)$. Let $M_0$ be a countable elementary model with $\dot y,\mathbb{P}_{\alpha_\xi}, \bar r \in M_0$ and $\bar p \leq \bar r$ a good master condition over $M_0$. Let $C := \alpha_\xi$ and consider the results of the last section. By Proposition~\ref{prop:propheart} applied to $E_{y_\xi}$ and $X = 2^\omega$, there is $\bar q \leq \bar p$, a compact $E_{y_\xi}$ independent set $Y_\xi$, $N \in \omega$ and continuous functions $\phi \colon [\bar q] \to [Y_\xi]^{<N}$, $w \colon [\bar q] \to \omega^\omega$ such that \begin{enumerate}[(i)] \item either $f_{\zeta_\xi}''([\bar q]) \subseteq Y_\xi$, \item or $\forall \bar x \in [\bar q] ( (\phi(\bar x) \cup \{ f_{\zeta_\xi}(\bar x) \}, w(\bar x)) \in F)$. \end{enumerate} Let $x_\xi$, $T_\xi$, $\bar \eta_\xi = \langle \eta_{\xi,j} : j < N\rangle$ and $\theta_\xi$ be such that $A_{x_\xi} = Y_\xi$, $Z_{T_\xi} = [\bar q]$, $\{ f_{\eta_{\xi,j}}(\bar x) : j < N \} = \phi(\bar x)$ for every $\bar x \in [\bar q]$, and $f_{\theta_\xi} = w$. Then $(x_\xi, y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi)$ is as required. \end{proof} Let $Y = \bigcup_{\xi< \omega_1} A_{x_\xi}$. 
Then $Y$ is $\Sigma_1(r)$-definable over $H(\omega_1)$, namely $x \in Y$ iff there is a sequence $\langle x_\xi, y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi : \xi \leq \alpha < \omega_1 \rangle$ so that for every $\xi \leq \alpha$, (1)-(4), for every $(x,y, T, \bar \eta, \theta) <_L (x_\xi,y_\xi, T_\xi, \bar \eta_\xi, \theta_\xi)$, not (1)-(4), and $x \in A_{x_\alpha}$. \begin{claim} In $V^{\mathbb{P}}$, the reinterpretation of $Y$ is maximal $E$-independent. \end{claim} \begin{proof} Let $\bar p \in \mathbb{P}$ and $\dot y \in M_0$ be a $\mathbb{P}$-name for an element of $2^\omega$, $M_0 \ni \mathbb{P}, \bar p$ a countable elementary model. Then let $\bar q \leq \bar p$ be a good master condition over $M_0$ and $C$ countable, $f \colon [\bar q] \restriction C \to 2^\omega$ continuous according to Lemma~\ref{lem:intermediategoodanalytic}. Now $(2^\omega)^{C}$ is canonically homeomorphic to $(2^\omega)^\alpha$, $\alpha = \otp(C)$, via the map $\Phi \colon (2^\omega)^{C} \to (2^\omega)^\alpha$. Then we find some $\xi < \omega_1$ so that $\alpha_\xi = \alpha$, $\Phi''([\bar q] \restriction C) = Z_{S_\xi}$ and $f_{\zeta_\xi} \circ \Phi = f$. On the other hand, $\Phi^{-1} (Z_{T_\xi})$ is a subset of $[\bar q] \restriction C$ conforming to the assumptions of Lemma~\ref{lem:gettingacondition}. Thus we get $\bar r \leq \bar q$ so that $[\bar r] \restriction C \subseteq \Phi^{-1} (Z_{T_\xi})$. According to (4), either $\bar r \Vdash \dot y \in A_{x_{\xi}}$ or $\bar r \Vdash \{ \dot y\} \cup A_{x_\xi} \cup A_{y_\xi}$ is not $E$-independent. Thus we can not have that $\bar p \Vdash \dot y \notin Y \wedge \{\dot y\} \cup Y$ is $E$-independent. This finishes the proof of the claim, as $\bar p$ and $\dot y$ were arbitrary. \end{proof} To see that $Y$ is $\Delta^1_2(r)$ in $V^{\mathbb{P}}$ it suffices to observe that any $\Sigma^1_2(r)$ set that is maximal $E$-independent is already $\Pi^1_2(r)$. \end{proof} A priori, Theorem~\ref{thm:mainmaintheorem} only works for hypergraphs that are defined in the ground model. But note that there is a universal analytic hypergraph on $2^\omega \times 2^\omega$, whereby we can follow the more general statement of Theorem~\ref{thm:maintheorem}. \begin{thm} After forcing with the $\omega_2$-length countable support iteration of $\mathbb{SP}$ over $L$, there is a $\Delta^1_2$ ultrafilter, a $\Pi^1_1$ maximal independent family and a $\Delta^1_2$ Hamel basis, and in particular, $\mathfrak{i}_{B} = \mathfrak{i}_{cl} = \mathfrak{u}_B = \mathfrak{u}_{cl} = \omega_1 < \mathfrak{r} = \mathfrak{i} = \mathfrak{u} = \omega_2$. \end{thm} \begin{proof} Apply Theorem~\ref{thm:mainmaintheorem} to $E_u$, $E_i$ and $E_h$ from the introduction. To see that $\mathfrak{i}_{cl} = \mathfrak{u}_{cl} = \omega_1$ note that every analytic set is the union of $\mathfrak{d}$ many compact sets and that $\mathfrak{d} = \omega_1$, since $\mathbb{SP}$ is $\omega^\omega$-bounding. \end{proof} \begin{thm}\label{thm:finiteprod} (V=L) Let $\mathbb{P}$ be either Sacks or splitting forcing and $k \in \omega$. Let $X$ be a Polish space and $E \subseteq [X]^{<\omega}$ be an analytic hypergraph. Then there is a $\mathbf{\Delta}^1_2$ maximal $E$-independent set in $V^{\mathbb{P}^k}$. \end{thm} \begin{proof} This is similar to the proof of Theorem~\ref{thm:mainmaintheorem}, using Main Lemma~\ref{thm:mainlemma} and Proposition~\ref{prop:weightedmain} to get an analogue of Proposition~\ref{prop:propheart}. 
\end{proof} Lastly, we are going to prove Theorem~\ref{thm:icleqd}. \begin{lemma}\label{lem:sigmacompind} Let $X \subseteq [\omega]^\omega$ be closed so that $\forall x,y \in X ( \vert x \cap y \vert = \omega)$. Then $X$ is $\sigma$-compact. \end{lemma} \begin{proof} If not, then by Hurewicz's Theorem (see \cite[7.10]{Kechris1995}), there is a superperfect tree $T \subseteq \omega^{<\omega}$ so that $[T] \subseteq X$, identifying elements of $[\omega]^\omega$ with their increasing enumeration, as usual. But then it is easy to recursively construct increasing sequences $\langle s_n : n \in \omega \rangle$, $\langle t_n : n \in \omega \rangle$ in $T$ so that $s_0 = t_0 = \operatorname{stem}(T)$, for every $n \in \omega$, $t_n$ and $s_n$ are infinite-splitting nodes in $T$ and $s_{2n+1}(\vert s_{2n} \vert ) > t_{2n+1}(\vert t_{2n+1} \vert -1)$, $t_{2n+2}(\vert t_{2n} \vert ) > s_{2n+1}(\vert s_{2n+1} \vert -1)$. Then, letting $x = \bigcup_{n \in \omega} s_n$ and $y = \bigcup_{n \in \omega} t_n$, $x \cap y \subseteq \vert s_0 \vert$, viewing $x,y$ as elements of $[\omega]^\omega$. This contradicts that $x,y \in X$. \end{proof} The proof of Theorem~\ref{thm:icleqd} is a modification of Shelah's proof that $\mathfrak{d} \leq \mathfrak{i}$. \begin{proof}[Proof of Theorem~\ref{thm:icleqd}] Towards a contradiction, let $\seq{C_\alpha: \alpha < \kappa}$ be a sequence of compact independent families so that $\mathcal{I} = \bigcup_{\alpha < \kappa} C_\alpha$ is maximal independent and $\kappa < \mathfrak d$, and assume without loss of generality that $\{C_\alpha: \alpha < \kappa\}$ is closed under finite unions. Here, we will identify elements of $[\omega]^\omega$ with their characteristic functions in $2^\omega$ at several places and it should always be clear from context which representation we consider at the moment. \begin{claim} There are $\seq{x_n : n \in \omega}$ pairwise distinct in $\mathcal{I}$ so that $\{x_n : n \in \omega \} \cap C_\alpha$ is finite for every $\alpha < \kappa$. \end{claim} \begin{proof} The closure of $\mathcal{I}$ is not independent. Thus there is $x \in \bar{\mathcal{I}} \setminus \mathcal{I}$. Now we pick $\seq{x_n : n \in \omega} \subseteq \mathcal{I}$ converging to $x$. Since $C_\alpha$ is closed, if $x_n \in C_\alpha$ for infinitely many $n$, then also $x \in C_\alpha$, which is impossible. \end{proof} Fix a sequence $\seq{x_n : n \in \omega}$ as above and let $a_\alpha = \{ n \in \omega : x_n \in C_\alpha \} \in [\omega]^{<\omega}$. We will say that $x$ is a Boolean combination of a set $X \subseteq [\omega]^\omega$, if there are finite disjoint $Y,Z \subseteq X$ so that $x = (\bigcap_{y \in Y} y) \cap (\bigcap_{z \in Z} \omega \setminus z)$. \begin{claim} For any $\alpha < \kappa$ there is $f_{\alpha} \colon \omega \to \omega$ so that for any $K \in [C_\alpha \setminus \{x_n : n \in a_\alpha \} ]^{<\omega}$, for all but finitely many $k \in \omega$ and any Boolean combination $x$ of $K \cup \{x_0, \dots, x_k \}$, $x \cap [k,f_{\alpha}(k)) \neq \emptyset$. \end{claim} \begin{proof} We define $f_{\alpha}(k)$ as follows.
For every $l \leq k$, we define a collection of basic open subsets of $(2^\omega)^{l}$, $ \mathcal{O}_{0,l} := \{[\bar s] : \bar s \in (2^{<\omega})^l \wedge \forall i < l (\vert s_i\vert> k) \wedge (\exists i < l, n \in a_\alpha (s_i \subseteq x_n) \vee \exists i<j<l (s_i \not\perp s_j))\}$. Further we call any $[\bar s] \notin \mathcal{O}_{0,l}$ good if for any $F,G \subseteq l$ with $F \cap G = \emptyset$ and for any Boolean combination $x$ of $\{ x_0,\dots x_k \}$, there is $ k'> k$ so that for every $i \in F$, $s_i(k') = 1$, for every $ i \in G$, $s_i(k')=0$ and $x(k')=1$. Let $\mathcal{O}_{1,l}$ be the collection of all good $[\bar s]$. We see that $\bigcup_{l \leq k}(\mathcal{O}_{0,l} \cup \mathcal{O}_{1,l})$ is an open cover of $C_\alpha \cup (C_\alpha)^2 \cup \dots \cup(C_\alpha)^k$. Thus it has a finite subcover $\mathcal{O}'$. Now let $f_{\alpha}(k) := \max \{ \vert t \vert : \exists [\bar s] \in \mathcal{O}' \exists i < k (t = s_i) \}$. Now we want to show that $f_{\alpha}$ is as required. Let $(y_0,\dots,y_{l-1}) \in (C_\alpha \setminus \{ x_n : n \in a_\alpha \} )^{l}$ be arbitrary, $y_0, \dots, y_{l-1}$ pairwise distinct and $k \geq l$ so that $y_i \restriction k \neq x_n \restriction k$ for all $i < l$, $n \in a_\alpha$ and $y_i \restriction k \neq y_j \restriction k$ for all $i < j < l$. In the definition of $f_{\alpha}(k)$, we have the finite cover $\mathcal{O}'$ of $(C_\alpha)^l$ and thus $(y_0,\dots,y_{l-1}) \in [\bar s]$ for some $[\bar s] \in \mathcal{O}'$. We see that $[\bar s] \in \mathcal{O}_{0,l}$ is impossible as we chose $k$ large enough so that for no $i < l, n \in a_\alpha$, $s_i \subseteq x_n$ and for every $i < j<l$, $s_i \perp s_j$. Thus $[\bar s] \in \mathcal{O}_{1,l}$. But then, by the definition of $\mathcal{O}_{1,l}$, $f_{\alpha}(k)$ is as required. \end{proof} As $\kappa < \mathfrak d$ we find $f \in \omega^\omega$ so that $f$ is unbounded over $\{f_{\alpha} : \alpha < \kappa\}$. Let $x_n^0 := x_n$ and $x_n^1 := \omega \setminus x_n$ for every $n \in \omega$. For any $g \in 2^\omega$ and $n \in \omega$ we define $y_{n,g} := \bigcap_{m\leq n} x_m^{g(m)}$. Further define $y_g = \bigcup_{n \in \omega} y_{n,g} \cap f(n)$. Note $y_{n,g} \subseteq y_{m,g}$ for $m\leq n$ and that $y_g \subseteq^* y_{n,g}$ for all $n \in \omega$. \begin{claim} For any $g \in 2^\omega$, $y_g$ has infinite intersection with any Boolean combination of $\bigcup_{\alpha < \kappa} C_\alpha \setminus \{x_n : n \in \omega\}$. \end{claim} \begin{proof} Let $\{y_0,\dots,y_{l-1}\} \in [C_\alpha\setminus \{x_n : n \in a_\alpha\}]^l$ for some $l \in \omega$, $\alpha < \kappa$ be arbitrary. Here, recall that $\{C_\alpha: \alpha < \kappa\}$ is closed under finite unions. We have that there is some $k_0 \in \omega$ so that for every $k \geq k_0$, any Boolean combination $y$ of $\{y_0,\dots,y_{l-1}\}$ and $x$ of $\{x_n : n \leq k \}$, $x\cap y \cap [k,f_{\alpha}(k)) \neq \emptyset$. Let $y$ be an arbitrary Boolean combination of $\{y_0,\dots,y_{l-1}\}$ and $m \in \omega$. Then there is $k > m,k_0$ so that $f(k) > f_{\alpha}(k)$. But then we have that $y_{k,g}$ is a Boolean combination of $\{x_0, \dots, x_{k} \}$ and thus $y_{k,g} \cap y \cap [k,f(k)) \neq \emptyset$. In particular, this shows that $y \cap y_g \not\subseteq m$ and unfixing $m$, $\vert y \cap y_g \vert = \omega$. \end{proof} Now let $Q_0,Q_1$ be disjoint countable dense subsets of $2^{\omega}$. We see that $\card{y_g \cap y_h} < \omega$ for $h \neq g \in 2^\omega$. 
Thus the family $\{ y_g : g \in Q_0 \cup Q_1 \}$ is countable almost disjoint and we can find $y'_g =^* y_g$, for every $g \in Q_0 \cup Q_1$, so that $\{y'_g : g \in Q_0 \cup Q_1 \}$ is pairwise disjoint. Let $y = \bigcup_{g \in Q_0} y'_g$. We claim that any Boolean combination $x$ of sets in $\mathcal{I}$ has infinite intersection with $y$ and $\omega \setminus y$. To see this, assume without loss of generality that $x$ is of the form $\tilde x \cap x_0^{g(0)} \cap \dots \cap x_k^{g(k)}$, where $\tilde x$ is a Boolean combination of sets in $\mathcal{I} \setminus \{ x_n : n \in \omega\}$ and $g \in 2^\omega$. As $Q_0$ is dense there is some $h \in Q_0$ such that $h \restriction (k+1) = g \restriction (k+1)$. Thus we have that $y'_h \subseteq^* x_0^{g(0)} \cap \dots \cap x_k^{g(k)}$ but also $y'_h \cap \tilde x$ is infinite by the claim above. In particular we have that $y \cap x$ is infinite. The complement of $y$ is handled by replacing $Q_0$ with $Q_1$. We now have a contradiction to $\mathcal{I}$ being maximal. \end{proof} \section{Concluding remarks} Our focus in this paper was on Sacks and splitting forcing but it is clear that the method presented is more general. We mostly used that our forcing has Axiom A with continuous reading of names and that it is a weighted tree forcing (Definition~\ref{def:weightedtreeforcing}), both in a definable way. For instance, the more general versions of splitting forcing given by Shelah in \cite{Shelah1992} fall into this class. It would be interesting to know for what other tree forcings Theorem~\ref{thm:mainmaintheorem} holds true. In \cite{Schrittesser2018}, the authors showed that after adding a single Miller real over $L$, every (2-dimensional) graph on a Polish space has a $\mathbf \Delta^1_2$ maximal independent set. It is very plausible that this can be extended to the countable support iteration. One line of attack might be to use a similar method to ours, where Cohen genericity is replaced by other kinds of genericity. For instance, the following was shown by Spinas in \cite{Spinas2001} (compare this with Proposition~\ref{prop:weightedmain}): \begin{fact} Let $M$ be a countable model. Then there is a superperfect tree $T$ so that for any $x \neq y \in [T]$, $(x,y)$ is $\mathbb{M}^2$-generic over $M$, where $\mathbb{M}$ denotes Miller forcing. \end{fact} On the other hand, it is impossible to have that any three $x,y,z \in [T]$ are mutually generic. This follows from a fact due to Velickovic and Woodin (see \cite[Theorem 1]{Velickovic1998}) that there is a Borel function $h \colon (\omega^\omega)^3 \to 2^\omega$ such that for any superperfect $T$, $h''([T]^3) = 2^\omega$. Also, $\mathbb{M}^3$ always adds a Cohen real (see e.g. the last paragraph in \cite{Brendle1999}). This means that Theorem~\ref{thm:finiteprod} can't hold for Miller forcing and $k \geq 3$, even for just equivalence relations. Namely, after adding a Cohen real, there can't be any $E_0$-transversal that is definable with parameters from the ground model. This does not rule out, though, that the iteration might work. Let us ask the following question. \begin{quest} Does Theorem~\ref{thm:mainmaintheorem} hold true for Miller forcing? \end{quest} A positive result would yield a model in which $\mathfrak{i}_{B} < \mathfrak{i}_{cl}$, since $\mathfrak{d} \leq \mathfrak{i}_{cl}$ by Theorem~\ref{thm:icleqd}. No result of this kind has been obtained so far. Another common way to iterate Sacks or splitting forcing is to use the countable support product.
The argument in Remark~\ref{rem:E1} can be used to see that no definable $E_1$-transversals can exist in an extension by an (uncountably long) countable support product of Sacks or splitting forcing. This raises the question for which hypergraphs Theorem~\ref{thm:mainmaintheorem} applies to countable support products. \begin{quest} Is there a nice characterization of hypergraphs for which Theorem~\ref{thm:mainmaintheorem} holds when using countable support products? \end{quest} An interesting application of our method that appears in the author's thesis is that $P$-points exist after iterating splitting forcing over a model of CH. To our knowledge this is different from any other method for obtaining $P$-points in the literature. Other applications are related to questions about families of reals and the existence of a well-order of the continuum. For example, it is known that after adding $\omega_1$ many Sacks reals, there is no well-order of the reals in $L(\mathbb{R})$. On the other hand, if we start with $V = L$, a $\Delta^1_2$ Hamel basis exists in the extension. In particular, a Hamel basis will exist in $L(\mathbb{R})$. This gives a new solution to a question by Pincus and Prikry \cite{Pincus1975}, which asks whether a Hamel basis can exist without a well-order of the reals. This has only been solved recently (see \cite{Schindler2018}). Our results solve this problem not just for Hamel bases but for a large class of families of reals. In an upcoming paper \cite{Schilhan2022} we will consider further applications to questions related to the Axiom of Choice. \end{document}
\begin{document} \title{Quantum Teleportation between Remote Qubit Memories with Only a Single Photon as a Resource} \author{Stefan~Langenfeld$^{1}$} \thanks{S.L.\ and S.W.\ contributed equally to this work.} \author{Stephan~Welte$^{1}$} \email[To whom correspondence should be addressed. Email: ]{[email protected]} \author{\hspace{-.4em}\textcolor{link}{\normalfont\textsuperscript{$*$}}\hspace{.4em}Lukas~Hartung$^{1}$} \author{Severin~Daiss$^{1}$} \author{Philip~Thomas$^{1}$} \author{Olivier~Morin$^{1}$} \author{Emanuele~Distante$^{1}$} \author{Gerhard~Rempe$^{1}$} \affiliation{$^{1}$Max-Planck-Institut f{\"u}r Quantenoptik, Hans-Kopfermann-Strasse 1, 85748 Garching, Germany} \begin{abstract} \noindent Quantum teleportation enables the deterministic exchange of qubits via lossy channels. While it is commonly believed that unconditional teleportation requires a preshared entangled qubit pair, here we demonstrate a protocol that is in principle unconditional and requires only a single photon as an ex-ante prepared resource. The photon successively interacts, first, with the receiver and then with the sender qubit memory. Its detection, followed by classical communication, heralds a successful teleportation. We teleport six mutually unbiased qubit states with average fidelity $\overline{\text{F}}=(88.3\pm1.3)\%$ at a rate of $6\,\mathrm{Hz}$ over $60\,\mathrm{m}$.\\ \end{abstract} \maketitle \noindent The direct transfer of a qubit over a long distance constitutes a fundamental problem due to unavoidable transportation losses in combination with the no-cloning theorem \cite{wootters1982}. A solution was provided by Bennett et al. who brought forward the idea of quantum teleportation in a seminal paper in 1993 \cite{Bennett1993}. Here, a sender `Alice' owns a precious and unknown input qubit that she wants to communicate to a receiver `Bob'. The two are connected via a lossy quantum channel and a deterministic classical channel. In a first step, Alice and Bob use the probabilistic quantum channel to share an entangled pair of particles with a repeat-until-success strategy. Once a classical signal heralds the availability of this entanglement resource, Alice performs a joint Bell-state measurement (BSM) on the input qubit and her half of the entangled pair and communicates the outcome classically to Bob. By applying a unitary rotation conditioned on this result to his half of the resource pair, he can eventually recover the state of the input qubit. Soon after the theoretical proposal of this protocol, first proof of principle experiments were performed with photons \cite{bouwmeester1997,boschi1998, furusawa1998} and later extended to other platforms such as atoms \cite{nolleke2013}, ions \cite{riebe2004, barrett2004, olmschenk2009}, nitrogen vacancy centers \cite{pfaff2014}, atomic ensembles \cite{bao2012, krauter2013}, and hybrid systems \cite{takeda2013, bussieres2014}. The complete teleportation protocol poses however some demanding experimental challenges \cite{pirandola2015}. First, Alice must keep the input qubit alive while the entanglement is generated over the quantum channel. This requires the qubit to be stored in a long-lived quantum memory. Second, the entanglement distribution over the lossy channel must be heralded. This allows Alice to perform the BSM only when the link is ready, avoiding the waste of her precious qubit. For the same reason, a deterministic BSM that can faithfully distinguish all four Bell states must be implemented. 
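For reference, the interplay of the BSM and the conditional correction in this standard protocol can be read off a textbook identity, written here in generic qubit notation rather than the notation of this Letter: qubit 1 is the input, qubits 2 and 3 are Alice's and Bob's halves of the resource pair, and $\ket{\Phi^\pm} = (\ket{00} \pm \ket{11})/\sqrt{2}$, $\ket{\Psi^\pm} = (\ket{01} \pm \ket{10})/\sqrt{2}$ denote the Bell states:
\[
\ket{\psi}_1 \ket{\Phi^+}_{23} = \frac{1}{2} \Big[ \ket{\Phi^+}_{12} \ket{\psi}_3 + \ket{\Phi^-}_{12}\, Z\ket{\psi}_3 + \ket{\Psi^+}_{12}\, X\ket{\psi}_3 + \ket{\Psi^-}_{12}\, XZ\ket{\psi}_3 \Big],
\]
so each of the four BSM outcomes tells Bob which Pauli correction (the identity, $Z$, $X$, or $ZX$, up to a global phase) recovers $\ket{\psi}$.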
In earlier attempts of teleportation between distant material qubits \cite{olmschenk2009, bao2012, nolleke2013} the BSM was intrinsically probabilistic and the entanglement distribution was not heralded. Practically unconditional teleportation where Alice's input qubit always reappears on Bob's side was later reported in Ref. \cite{pfaff2014}. In this experiment, an additional ancillary matter qubit was employed to independently herald the availability of the entanglement resource before a deterministic BSM was performed, very much in the spirit of Ref. \cite{Bennett1993}. Here we offer an alternative solution and demonstrate a novel teleportation protocol that allows for, in principle, unconditional teleportation without the necessity of the commonly employed pre-shared entanglement resource. Instead, the only resource needed prior to the start of the teleportation procedure is a single photon traveling from Bob to Alice. If the photon is lost on the way, Alice's qubit is not affected and the protocol can simply be repeated until a successful photon transmission is heralded with downstream photodetectors. Instead of presharing the entanglement resource, the entanglement is generated on the fly between Bob's qubit and the photon when the latter interacts with his node. Notably, this entanglement generation process is in principle deterministic. Furthermore, our scheme implements a BSM with no fundamental efficiency limitation. The successful detection of the photon at Alice's node excludes events of photon loss and acts as a herald both for the entanglement generation on Bob's side and for the BSM on Alice's side. Our teleportation protocol practically achieves high teleportation rates and fidelities, and is ideally suited for future quantum networks. Figure \ref{fig:setup} (a) shows a sketch of our experimental setup. We employ two \isotope[87]{Rb} atoms trapped in two high-finesse optical cavities that are physically separated by 21m. The two atom-cavity systems are connected with a 60m long single mode optical fiber. Each of the two atoms carries one qubit of information encoded in the states $\ket{5\isotope[2]{S}_{1/2},F=2, m_F=2}:=\ket{\uparrow_z}$ and $\ket{5\isotope[2]{S}_{1/2},F=1, m_F=1}:=\ket{\downarrow_z}$. Both cavities are actively tuned to the atomic resonance $\ket{\uparrow_z}\leftrightarrow\ket{\text{e}}:=\ket{5\isotope[2]{P}_{3/2},F'=3, m_F=3}$. On this particular transition, the atom-cavity systems operate in the strong-coupling regime. To teleport a qubit state from Alice to Bob, a single photon is successively reflected from the two quantum nodes, starting on Bob's side. \begin{figure} \caption{ Teleportation of a qubit from Alice to Bob. (a) Setup: Two cavity QED setups are connected with a 60m long optical fiber. The respective atomic qubits are controlled using a pair of Raman lasers (gray arrows) and read out using a different laser beam (blue arrows). A photon (red wiggly arrow) impinges onto Bob's setup, is reflected and then propagates to Alice's setup. A fiber circulator (circular arrow) directs the photon onto Alice's cavity and, after its reflection, to a polarization-resolving detection setup. Feedback (double arrow) acts on Bob's atomic qubit. (b) Quantum circuit diagram. Single-qubit rotations are labeled with an R. The superscript describes the respective pulse area while the subscript describes the rotation axis x ($\ket{\uparrow_z}+\ket{\downarrow_z}$) or y ($\ket{\uparrow_z}+i\ket{\downarrow_z}$). 
After state preparation of the atoms, the photon performs two controlled-NOT gates. To make the protocol unconditional, the $\pi/2$ rotation and measurement of Alice's qubit (dashed box) could be applied only in case of a successful photon detection event independent of the polarization. At the end of the protocol, a state detection of Alice's atom and the photon is performed (blue squares). These measurement outcomes determine the two classical feedback signals.} \label{fig:setup} \end{figure} After the second reflection, a circulator is used to direct the light to a polarization resolving setup of superconducting nanowire photon detectors. Given the photon click, an atomic state detection is performed on Alice's qubit employing a laser beam resonant with the $\ket{\uparrow_z}\leftrightarrow\ket{e}$ transition. The respective state-detection light is directed into the same detectors as the reflected photon. Depending on the outcome of both the atomic and the photonic measurements, feedback pulses are applied to Bob's qubit. To benchmark this teleportation protocol, we perform a full state tomography of Bob's qubit by measuring the atomic state in three mutually orthogonal bases. The respective scattered state-detection light is directed to an additional tomography setup of single-photon detectors (not shown in Fig. \ref{fig:setup}(a)). \label{sec:protocol}\\ We remark that the fundamental building block of our protocol is an atom-photon controlled-NOT gate that is executed by reflecting a photon from the atom-cavity system \cite{duan2004, reiserer2014}. It is based on a polarization flip of the photon from antidiagonal (diagonal) polarization to the orthogonal diagonal polarization whenever the atom is in the state $\ket{\uparrow_z}$. In the case of a noncoupling atom in $\ket{\downarrow_z}$, the polarization of the photon does not change. The full teleportation protocol then consists of a photon performing a controlled NOT gate at each node in succession, followed by different conditional feedback pulses. The quantum circuit diagram of our full scheme is shown in Fig. \ref{fig:setup}(b). We start by optically pumping both atoms to the initial state $\ket{\uparrow_z}$. Then, Alice's atom is initialized to a general state of the form $\alpha\ket{\uparrow_z}+\beta\ket{\downarrow_z}$. To this end, we employ a two-photon Raman process that allows us to initialize any desired qubit state by an appropriate choice of amplitude, phase, and timing. Bob's atom is prepared in the equal-superposition state $(\ket{\uparrow_z}+\ket{\downarrow_z})/\sqrt{2}$. We use a very weak coherent pulse (average photon number $\braket{n}=0.07(1)$) in combination with single-photon detectors as an approximation of a heralded single-photon source. The light is initially prepared in the antidiagonal polarization $\ket{A}$. This pulse is first reflected from Bob's cavity, creating a maximally entangled state between the photon and his atom. By generating the entanglement on the fly during the reflection, far-away Alice is not yet involved in the teleportation protocol. Subsequently, we reflect the light from Alice's cavity. 
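As a sketch of the first of these two reflections (assuming the ideal phase convention in which the reflection simply swaps $\ket{\text{A}} \leftrightarrow \ket{\text{D}}$ for a coupling atom in $\ket{\uparrow_z}$ and acts trivially for $\ket{\downarrow_z}$), the reflection from Bob's cavity maps
\[
\frac{1}{\sqrt{2}} \big( \ket{\uparrow_z} + \ket{\downarrow_z} \big) \ket{\text{A}} \;\longmapsto\; \frac{1}{\sqrt{2}} \big( \ket{\uparrow_z} \ket{\text{D}} + \ket{\downarrow_z} \ket{\text{A}} \big),
\]
a maximally entangled atom-photon state; the subsequent reflection of the same form at Alice's node then produces the three-particle state given below.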
The resulting atom-atom-photon state can be expressed as (see Supplemental Material \cite{supplement}) \begin{equation}\frac{1}{\sqrt{2}}\Big[\Big(\alpha\ket{\uparrow_z\uparrow_z}+\beta\ket{\downarrow_z\downarrow_z}\Big)\ket{\text{A}}+\Big(\beta{\ket{\uparrow_z\downarrow_z}}+\alpha\ket{\downarrow_z\uparrow_z}\Big)\ket{\text{D}}\Big].\label{eq:threetangle}\end{equation} Notably, the light polarization has changed from $\ket{\text{A}}$ to $\ket{\text{D}}$ whenever an odd number of coupling atoms is present in the two nodes, resulting in a three-particle entangled state. The presence of a photon in the weak coherent pulse is then heralded with single-photon detectors that also register the polarization of the light and thus project the combined state in Eq. (\ref{eq:threetangle}). Now a $\pi/2$ pulse is applied on Alice's atom and then its state is measured. The outcome of the atomic state detection is analyzed and a phase gate is executed on Bob's qubit whenever the result of the state detection is $\ket{\uparrow_z}$. A further feedback is applied depending on the light's measured polarization state. For a detection in $\ket{\text{D}}$, a $\pi$ pulse around the $x-$axis is executed. It inverts the role of $\ket{\uparrow_z}$ and $\ket{\downarrow_z}$ without introducing a relative phase between them. This second feedback completes the teleportation protocol and the state of Bob's atom is $\alpha\ket{\uparrow_z}+\beta\ket{\downarrow_z}$. Excluding optical pumping ($200\,\mathrm{\mu s}$), the entire protocol takes $25.5\,\mathrm{\mu s}$. This is currently limited by the duration of the Raman pulses ($4\,\mathrm{\mu s}$ for a $\pi/2$ pulse). Afterwards we apply a cooling sequence to the atom. We set the repetition rate of the experiment to $1\,\mathrm{kHz}$. The probability for a successful transmission of a single photon through the entire setup and an eventual detection amounts to $8.4\%$ \cite{supplement}. Therefore, employing a single-photon source would yield a teleportation rate of $84\,\mathrm{Hz}$. However, due to the additional use of weak coherent pulses ($\braket{n}=0.07$), it is reduced to $6\,\mathrm{Hz}$ in our implementation.\\ \begin{figure*} \caption{ Teleportation results. (a) Density matrices of the single-qubit states that were teleported from Alice to Bob. (b) Teleportation fidelities of the six teleported states. The orange bars show the experimentally (exp) measured fidelities. The error bars are standard deviations of the mean. In gray, we show the simulated (sim) fidelities. The dashed line represents the classically achievable threshold of $2/3$ which can be reached with a measure-prepare strategy \cite{massar1995}.} \label{fig:results} \end{figure*} To characterize the performance of our teleportation protocol, we prepare Alice's qubit in six mutually unbiased states and teleport these states to Bob. The six states are $\ket{\uparrow_z}$, $\ket{\downarrow_z}$, $\ket{\uparrow_x}:=1/\sqrt{2}(\ket{\uparrow_z}+\ket{\downarrow_z})$, $\ket{\downarrow_x}:=1/\sqrt{2}(\ket{\uparrow_z}-\ket{\downarrow_z})$, $\ket{\uparrow_y}:=1/\sqrt{2}(\ket{\uparrow_z}+i\ket{\downarrow_z})$, and $\ket{\downarrow_y}:=1/\sqrt{2}(\ket{\uparrow_z}-i\ket{\downarrow_z})$. After the teleportation, a quantum state tomography of Bob's atom is performed to extract its complete density matrix. The results for the six prepared input states are shown in Fig. \ref{fig:results}(a). All teleported states have a high overlap with the ideally expected result. 
On average, we achieve a teleportation fidelity of $(88.3\pm1.3)\%$ which is significantly higher than the classically achievable threshold of $2/3$ \cite{massar1995}. To understand the limitations in the achieved fidelities, we simulate our experiment with the quantum optics toolbox QuTiP \cite{johansson2012}. The simulated fidelities are depicted as the gray bars in Fig. \ref{fig:results}(b) and show an excellent agreement with the experiment. From this we conclude that the fidelity of the teleported states is mainly influenced by three sources of error, namely, qubit decoherence (only when the atoms are in superposition states), two-photon contributions in the coherent laser pulses, and imperfect atomic state preparation. These three effects reduce the fidelity by $6.0\%$, $3.9\%$, and $1.4\%$, respectively. Additional imperfections like mechanical vibrations of the cavity mirrors, imperfect fiber birefringence compensation, and polarization dependent losses are minor errors that sum up to the rest of the accumulated infidelity. For a description of the simulation model, see the Supplemental Material \cite{supplement}. By protocol, our teleportation scheme is designed for employing a single photon to be reflected from the two nodes. Nevertheless, the teleportation can also be executed with weak coherent pulses which, in practice, are much easier to produce than single photons. Although the use of a coherent pulse is not ideal as residual higher photon number contributions in the coherent pulse deteriorate the atom-photon entanglement \cite{reiserer2014} and thus the teleportation fidelity, the teleportation rate can be increased simply by increasing $\braket{n}$. Conversely, in the limit of vanishing $\braket{n}$, the teleportation fidelity approaches the scenario where a single-photon source is employed. To characterize this, we performed an additional measurement where the dependence of the teleportation fidelity on $\braket{n}$ was investigated. For this measurement, Alice's atom is prepared in the state $\ket{\uparrow_x}$, which is most sensitive to experimental imperfections. Afterwards, the teleportation protocol is executed. Figure \ref{fig:alpha_scan} shows the obtained data and an expected curve based on the simulation outlined in the Supplemental Material \cite{supplement}. \begin{figure} \caption{ Scan of mean photon number. Teleportation fidelity as a function of the mean photon number in the employed coherent pulse. The blue line is based on a theoretical model (see Supplemental Material \cite{supplement}). The dashed line shows the classical teleportation threshold of 2/3. The error bars represent standard deviations from the mean.} \label{fig:alpha_scan} \end{figure} As expected, the teleportation fidelity yields its highest values for a vanishing mean photon number $\braket{n}$ and decreases due to the higher photon-number contributions when it is increased. For the smallest employed value of $\braket{n}=0.02$, the fidelity reaches $89\%$. Our measurements show that the classical threshold of $2/3$ is beaten up to $\braket{n}\approx1.0$. At this photon number we achieve a teleportation rate of $84\,\mathrm{Hz}$, more than 1 order of magnitude higher than in the case of our default employed photon number of $\braket{n}=0.07$ (see Supplemental Material \cite{supplement} for more information about the teleportation rate). 
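The interplay between rate and multi-photon contamination can be illustrated with a simple Poissonian model of the weak coherent pulse. The following back-of-the-envelope sketch uses our own simplifying assumptions and is not the QuTiP simulation behind the gray bars or the blue line: it combines the quoted $8.4\%$ single-photon detection probability with the $1\,\mathrm{kHz}$ repetition rate and also estimates the conditional probability that a non-empty pulse carried more than one photon, the mechanism identified above as a main source of infidelity.
\begin{verbatim}
import numpy as np

# Back-of-the-envelope estimate (our own assumptions, not the QuTiP simulation):
# heralding rate and multi-photon fraction for a Poissonian weak coherent pulse.
rep_rate = 1e3       # repetition rate of the experiment (Hz)
p_det = 0.084        # quoted detection probability for a single sent photon

for n_mean in (0.07, 1.0):
    # probability that at least one photon of the pulse is detected (herald)
    p_herald = 1.0 - np.exp(-p_det * n_mean)
    # probability that the pulse carried more than one photon,
    # given that it was not empty
    p_multi = (1.0 - np.exp(-n_mean) * (1.0 + n_mean)) / (1.0 - np.exp(-n_mean))
    print(f"<n> = {n_mean}: rate ~ {rep_rate * p_herald:.0f} Hz, "
          f"multi-photon fraction ~ {100 * p_multi:.1f} %")

# Approximate output: ~6 Hz and ~3.5 % for <n> = 0.07, ~81 Hz and ~42 % for
# <n> = 1.0, close to the 6 Hz and 84 Hz rates quoted above.
\end{verbatim}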
To mimic a variable distance between the two network nodes, we introduce a variable temporal delay $\tau$ at different times in our teleportation protocol. The modified version of the protocol is shown in Fig. \ref{fig:delay_scan}(a). First, we introduce a delay $\tau$ between the two state-preparation pulses to simulate a communication time after Alice's qubit is prepared. Afterwards, an additional $\tau$ mimics a longer propagation time of the photon in the fiber. Eventually a third delay takes into account the propagation time of the two feedback signals. In Fig. \ref{fig:delay_scan}(b), the teleportation fidelity of Alice's state $\ket{\uparrow_x}$ is plotted against the respective delay $\tau$. The qubits in the two nodes are both in superposition states. \begin{figure} \caption{ Scan of the delay. (a) Quantum circuit diagram with the added three delay intervals $\tau$. (b) Teleportation fidelity versus the variable delay. The dashed line represents the classical threshold of 2/3. The blue line is based on our theoretical model (see Supplemental Material \cite{supplement}). The upper horizontal axis represents the length equivalent corresponding to $\tau\times \mathrm{c}/1.5$, where c is the speed of light and 1.5 the refractive index of the fiber. In this experiment, the mean photon number was chosen a factor of 2 higher compared to the data shown in Fig. \ref{fig:results}.} \label{fig:delay_scan} \end{figure} Thus, the increase of $\tau$ increases the effect of atomic decoherence and thereby reduces the achieved teleportation fidelities. Our measurements show that the classical threshold fidelity is beaten up to a delay of $\tau\approx40\,\mathrm{\mu s}$. This delay corresponds to a fiber length of $8\,\mathrm{km}$, a range comparable to an urban quantum network. It should be noted, however, that our measurements neither simulate the additional fiber fluctuations in an urban environment nor the additional losses in a fiber that is physically longer. To conclude, we have devised and implemented a novel and, in principle, unconditional scheme to perform quantum teleportation in a network. An advantage of our protocol is that the underlying photon-reflection mechanism is robust with respect to the temporal shape of the employed photon as was demonstrated in Ref. \cite{daiss2019}. This makes our scheme easy to implement in comparison to the conventionally employed overlapping of two identical photons on a beam splitter \cite{olmschenk2009, bao2012, nolleke2013, meraner2020}. In combination with the scheme demonstrated in Ref. \cite{kalb2015}, and in contrast to Ref. \cite{pfaff2014}, our protocol would allow for teleportation between any combination of unknown matter and light qubits. It would even be possible to convert \cite{huang1992} the wavelength of the ancilla photon during its passage from Bob to Alice in case the two communication partners employ different kinds of qubits. Furthermore, in an improved setup with cavities having a reflectivity close to unity and negligible photon loss between Alice's cavity and the downstream detectors \cite{bhaskar2020}, our implementation of teleportation would be unconditional. In the current realization, this is not yet the case since the loss of a photon could happen after the interaction with Alice's qubit and therefore damage it. 
Lastly, our protocol is platform independent and could be implemented with different carriers of quantum information coupled to resonators such as vacancy centers in diamond \cite{bhaskar2020}, rare-earth ions \cite{chen2020}, superconducting qubits \cite{besse2018, kono2018}, or quantum dots \cite{fushman2008, desantis2017, sun2018}. \begin{acknowledgments} This work was supported by the Bundesministerium f\"{u}r Bildung und Forschung via the Verbund Q.Link.X (16KIS0870), by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy – EXC-2111 – 390814868, and by the European Union’s Horizon 2020 research and innovation programme via the project Quantum Internet Alliance (QIA, GA No. 820445). E.D. acknowledges early support by the Cellex-ICFO-MPQ postdoctoral fellowship program. \end{acknowledgments} \begin{thebibliography}{99} \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem{wootters1982} \bibinfo{author}{W.~K. Wootters} and \bibinfo{author}{W.~H. Zurek}. \newblock \emph{\bibinfo{title}{A single quantum cannot be cloned}}. \newblock \href{https://www.nature.com/articles/299802a0}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{299}}, \bibinfo{pages}{802--803} (\bibinfo{year}{1982})}. \bibitem{Bennett1993} \bibinfo{author}{C.~H. Bennett}, \bibinfo{author}{G. Brassard}, \bibinfo{author}{C. Cr\'{e}peau}, \bibinfo{author}{R. Jozsa}, \bibinfo{author}{A. Perez}, and \bibinfo{author}{W.~K. Wootters}. \newblock \emph{\bibinfo{title}{Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.70.1895}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{1895--1899} (\bibinfo{year}{1993})}. \bibitem{bouwmeester1997} \bibinfo{author}{D. Bouwmeester}, \bibinfo{author}{J.-~W. Pan}, \bibinfo{author}{K. Mattle}, \bibinfo{author}{M. Eibl}, \bibinfo{author}{H. Weinfurter}, and \bibinfo{author}{A. Zeilinger}. \newblock \emph{\bibinfo{title}{Experimental quantum teleportation}}. \newblock \href{https://www.nature.com/articles/37539}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{390}}, \bibinfo{pages}{575--579} (\bibinfo{year}{1997})}. \bibitem{boschi1998} \bibinfo{author}{D. Boschi}, \bibinfo{author}{S. Branca}, \bibinfo{author}{F. De~Martini}, \bibinfo{author}{L. Hardy}, and \bibinfo{author}{S. Popescu}. \newblock \emph{\bibinfo{title}{Experimental Realization of Teleporting an Unknown Pure Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.1121}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{1121--1125} (\bibinfo{year}{1998})}. \bibitem{furusawa1998} \bibinfo{author}{A. Furusawa}, \bibinfo{author}{J.~L. S{\o}rensen}, \bibinfo{author}{S.~L. Braunstein}, \bibinfo{author}{C.~A. Fuchs}, \bibinfo{author}{H.~J. Kimble}, and \bibinfo{author}{E.~S. Polzik}. \newblock \emph{\bibinfo{title}{Unconditional Quantum Teleportation}}. \newblock \href{https://science.sciencemag.org/content/282/5389/706}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{282}}, \bibinfo{pages}{706--709} (\bibinfo{year}{1998})}. \bibitem{nolleke2013} \bibinfo{author}{C. N{\"o}lleke}, \bibinfo{author}{A. Neuzner}, \bibinfo{author}{A. Reiserer}, \bibinfo{author}{C. 
Hahn}, \bibinfo{author}{G. Rempe}, and \bibinfo{author}{S. Ritter}. \newblock \emph{\bibinfo{title}{Efficient Teleportation Between Remote Single-Atom Quantum Memories}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.110.140403}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{110}}, \bibinfo{pages}{140403} (\bibinfo{year}{2013})}. \bibitem{riebe2004} \bibinfo{author}{M. Riebe}, \bibinfo{author}{H. H{\"a}ffner}, \bibinfo{author}{C.~F. Roos}, \bibinfo{author}{W. H{\"a}nsel}, \bibinfo{author}{J. Benhelm}, \bibinfo{author}{G.~P.~T Lancaster}, \bibinfo{author}{T.~W. K{\"o}rber}, \bibinfo{author}{C. Becher}, \bibinfo{author}{F. Schmidt-Kaler}, \bibinfo{author}{D.~F.~V James}, and \bibinfo{author}{R. Blatt}. \newblock \emph{\bibinfo{title}{Deterministic quantum teleportation with atoms}}. \newblock \href{https://www.nature.com/articles/nature02570}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{429}}, \bibinfo{pages}{734--737} (\bibinfo{year}{2004})}. \bibitem{barrett2004} \bibinfo{author}{M.~D. Barrett}, \bibinfo{author}{J. Chiaverini}, \bibinfo{author}{T. Schaetz}, \bibinfo{author}{J. Britton}, \bibinfo{author}{W.~M. Itano}, \bibinfo{author}{J.~D. Jost}, \bibinfo{author}{E. Knill}, \bibinfo{author}{C. Langer}, \bibinfo{author}{D. Leibfried}, \bibinfo{author}{R. Ozeri}, and \bibinfo{author}{D.~J. Wineland}. \newblock \emph{\bibinfo{title}{Deterministic quantum teleportation of atomic qubits}}. \newblock \href{https://www.nature.com/articles/nature02608}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{429}}, \bibinfo{pages}{737--739} (\bibinfo{year}{2004})}. \bibitem{olmschenk2009} \bibinfo{author}{S. Olmschenk}, \bibinfo{author}{D.~N. Matsukevich}, \bibinfo{author}{P. Maunz}, \bibinfo{author}{D. Hayes}, \bibinfo{author}{L.-M. Duan}, and \bibinfo{author}{C. Monroe}. \newblock \emph{\bibinfo{title}{Quantum Teleportation Between Distant Matter Qubits}}. \newblock \href{https://science.sciencemag.org/content/323/5913/486}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{323}}, \bibinfo{pages}{486--489} (\bibinfo{year}{2009})}. \bibitem{pfaff2014} \bibinfo{author}{W. Pfaff}, \bibinfo{author}{B.~J. Hensen}, \bibinfo{author}{H. Bernien}, \bibinfo{author}{S.~B. van Dam}, \bibinfo{author}{M.~S. Blok}, \bibinfo{author}{T.~H. Taminiau}, \bibinfo{author}{M.~J. Tiggelman}, \bibinfo{author}{R.~N. Schouten}, \bibinfo{author}{M. Markham}, \bibinfo{author}{D.~J. Twitchen}, and \bibinfo{author}{R. Hanson}. \newblock \emph{\bibinfo{title}{Unconditional quantum teleportation between distant solid-state quantum bits}}. \newblock \href{https://science.sciencemag.org/content/345/6196/532.full}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{345}}, \bibinfo{pages}{532--535} (\bibinfo{year}{2014})}. \bibitem{bao2012} \bibinfo{author}{X-H. Bao}, \bibinfo{author}{X-F. Xu}, \bibinfo{author}{X-M. Li}, \bibinfo{author}{Z-S. Yuan}, \bibinfo{author}{C-Y. Lu}, and \bibinfo{author}{J-W. Pan}. \newblock \emph{\bibinfo{title}{Quantum teleportation between remote atomic-ensemble quantum memories}}. \newblock \href{https://www.pnas.org/content/109/50/20347}{\bibinfo{journal}{Proc. Natl. Acad. Sci. U.S.A.} \textbf{\bibinfo{volume}{109}}, \bibinfo{pages}{20347--20351} (\bibinfo{year}{2012})}. \bibitem{krauter2013} \bibinfo{author}{H. Krauter}, \bibinfo{author}{D. Salart}, \bibinfo{author}{C.~A. Muschik}, \bibinfo{author}{J.~M. Petersen}, \bibinfo{author}{H. Shen}, \bibinfo{author}{T. Fernholz}, and \bibinfo{author}{E.~S. Polzik}. 
\newblock \emph{\bibinfo{title}{Deterministic quantum teleportation between distant atomic objects}}. \newblock \href{https://www.nature.com/articles/nphys2631}{\bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{400--404} (\bibinfo{year}{2013})}. \bibitem{takeda2013} \bibinfo{author}{S. Takeda}, \bibinfo{author}{T. Mizuta}, \bibinfo{author}{M. Fuwa}, \bibinfo{author}{P. van~Look}, and \bibinfo{author}{A. Furusawa}. \newblock \emph{\bibinfo{title}{Deterministic quantum teleportation of photonic quantum bits by a hybrid technique}}. \newblock \href{https://www.nature.com/articles/nature12366}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{500}}, \bibinfo{pages}{315--318} (\bibinfo{year}{2013})}. \bibitem{bussieres2014} \bibinfo{author}{F. Bussi\`{e}res}, \bibinfo{author}{C. Clausen}, \bibinfo{author}{A. Tiranov}, \bibinfo{author}{B. Korzh}, \bibinfo{author}{V.~B. Verma}, \bibinfo{author}{S.~W. Nam}, \bibinfo{author}{F. Marsili}, \bibinfo{author}{A. Ferrier}, \bibinfo{author}{P. Goldner}, \bibinfo{author}{H. Herrmann}, \bibinfo{author}{C. Silberhorn}, \bibinfo{author}{W. Sohler}, \bibinfo{author}{M. Afzelius}, and \bibinfo{author}{N. Gisin}. \newblock \emph{\bibinfo{title}{Quantum teleportation from a telecom-wavelength photon to a solid-state quantum memory}}. \newblock \href{https://www.nature.com/articles/nphoton.2014.215}{\bibinfo{journal}{Nat. Photonics} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{775--778} (\bibinfo{year}{2014})}. \bibitem{pirandola2015} \bibinfo{author}{S. Pirandola}, \bibinfo{author}{J. Eisert}, \bibinfo{author}{C. Weedbrook}, \bibinfo{author}{A. Furusawa}, and \bibinfo{author}{S.~L. Braunstein}. \newblock \emph{\bibinfo{title}{Advances in quantum teleportation}}. \newblock \href{https://www.nature.com/articles/nphoton.2015.154}{\bibinfo{journal}{Nat. Photonics} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{641--652} (\bibinfo{year}{2015})}. \bibitem{duan2004} \bibinfo{author}{L.-M. Duan} and \bibinfo{author}{H.~J. Kimble}. \newblock \emph{\bibinfo{title}{Scalable Photonic Quantum Computation through Cavity-Assisted Interactions}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.92.127902}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{92}}, \bibinfo{pages}{127902} (\bibinfo{year}{2004})}. \bibitem{reiserer2014} \bibinfo{author}{A. Reiserer}, \bibinfo{author}{N. Kalb}, \bibinfo{author}{G. Rempe}, and \bibinfo{author}{S. Ritter}. \newblock \emph{\bibinfo{title}{A quantum gate between a flying optical photon and a single trapped atom}}. \newblock \href{https://www.nature.com/articles/nature13177}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{508}}, \bibinfo{pages}{237--240} (\bibinfo{year}{2014})}. \bibitem{supplement} \bibinfo{note}{{See \hyperref[supplement]{Supplemental Material} for details on the theoretical model, a summary of the experimental parameters and a detailed description of the experimental protocol}}. \bibitem{massar1995} \bibinfo{author}{S. Massar} and \bibinfo{author}{S. Popescu}. \newblock \emph{\bibinfo{title}{Optimal Extraction of Information from Finite Quantum Ensembles}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.74.1259}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{74}}, \bibinfo{pages}{1259} (\bibinfo{year}{1995})}. \bibitem{johansson2012} \bibinfo{author}{J.~R. Johansson}, \bibinfo{author}{P.~D. Nation}, and \bibinfo{author}{F. Nori}. 
\newblock \emph{\bibinfo{title}{QuTiP: An open-source Python framework for the dynamics of open quantum systems}}. \newblock \href{https://www.sciencedirect.com/science/article/pii/S0010465512000835?via{\%}3Dihub}{\bibinfo{journal}{Comput. Phys. Commun.} \textbf{\bibinfo{volume}{183}}, \bibinfo{pages}{1760--1772} (\bibinfo{year}{2012})}. \bibitem{daiss2019} \bibinfo{author}{S. Daiss}, \bibinfo{author}{S. Welte}, \bibinfo{author}{B. Hacker}, \bibinfo{author}{L. Li}, and \bibinfo{author}{G. Rempe}. \newblock \emph{\bibinfo{title}{Single-Photon Distillation via a Photonic Parity Measurement Using Cavity QED}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.133603}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{122}}, \bibinfo{pages}{133603} (\bibinfo{year}{2019})}. \bibitem{meraner2020} \bibinfo{author}{M. Meraner}, \bibinfo{author}{A. Mazloom}, \bibinfo{author}{V. Krutyanskiy}, \bibinfo{author}{V. Krcmarsky}, \bibinfo{author}{J. Schupp}, \bibinfo{author}{D.~A. Fioretto}, \bibinfo{author}{P. Sekatski}, \bibinfo{author}{T.~E. Northup}, \bibinfo{author}{N. Sangouard}, and \bibinfo{author}{B.~P. Lanyon}. \newblock \emph{\bibinfo{title}{Indistinguishable photons from a trapped-ion quantum network node}}. \newblock \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.102.052614}{\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{102}}, \bibinfo{pages}{052614} (\bibinfo{year}{2020})}. \bibitem{kalb2015} \bibinfo{author}{N. Kalb}, \bibinfo{author}{A. Reiserer}, \bibinfo{author}{S. Ritter}, and \bibinfo{author}{G. Rempe}. \newblock \emph{\bibinfo{title}{Heralded Storage of a Photonic Quantum Bit in a Single Atom}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.114.220501}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{114}}, \bibinfo{pages}{220501} (\bibinfo{year}{2015})}. \bibitem{huang1992} \bibinfo{author}{J. Huang}, and \bibinfo{author}{P. Kumar}. \newblock \emph{\bibinfo{title}{Observation of quantum frequency conversion}}. \newblock \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.68.2153}{\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{68}}, \bibinfo{pages}{2153} (\bibinfo{year}{1992})}. \bibitem{bhaskar2020} \bibinfo{author}{M.~K. Bhaskar}, \bibinfo{author}{R. Riedinger}, \bibinfo{author}{B. Machielse}, \bibinfo{author}{D.~S. Levonian}, \bibinfo{author}{C.~T. Nguyen}, \bibinfo{author}{E.~N. Knall}, \bibinfo{author}{H. Park}, \bibinfo{author}{D. Englund}, \bibinfo{author}{M. Lon\v{c}ar}, \bibinfo{author}{D. Sukachev}, and \bibinfo{author}{M.~D. Lukin}. \newblock \emph{\bibinfo{title}{Experimental demonstration of memory-enhanced quantum communication}}. \newblock \href{https://www.nature.com/articles/s41586-020-2103-5}{\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{580}}, \bibinfo{pages}{60--64} (\bibinfo{year}{2020})}. \bibitem{chen2020} \bibinfo{author}{S. Chen}, \bibinfo{author}{M. Raha}, \bibinfo{author}{C.~M. Phenicie}, \bibinfo{author}{S. Ourari}, and \bibinfo{author}{J.~D. Thompson}. \newblock \emph{\bibinfo{title}{Parallel single-shot measurement and coherent control of solid-state spins below the diffraction limit}}. \newblock \href{https://science.sciencemag.org/content/370/6516/592}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{370}}, \bibinfo{pages}{592--595} (\bibinfo{year}{2020})}. \bibitem{besse2018} \bibinfo{author}{J.-C. Besse}, \bibinfo{author}{S. Gasparinetti}, \bibinfo{author}{M.~C. Collodo}, \bibinfo{author}{T. 
Walter}, \bibinfo{author}{P. Kurpiers}, \bibinfo{author}{M. Pechal}, \bibinfo{author}{C. Eichler}, and \bibinfo{author}{A. Wallraff}. \newblock \emph{\bibinfo{title}{Single-Shot Quantum Nondemolition Detection of Individual Itinerant Microwave Photons}}. \newblock \href{https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.021003}{\bibinfo{journal}{Phys. Rev. X} \textbf{\bibinfo{volume}{8}}, \bibinfo{pages}{021003} (\bibinfo{year}{2018})}. \bibitem{kono2018} \bibinfo{author}{S. Kono}, \bibinfo{author}{K. Koshino}, \bibinfo{author}{Y. Tabuchi}, \bibinfo{author}{A. Noguchi}, and \bibinfo{author}{Y. Nakamura}. \newblock \emph{\bibinfo{title}{Quantum non-demolition detection of an itinerant microwave photon}}. \newblock \href{https://www.nature.com/articles/s41567-018-0066-3}{\bibinfo{journal}{Nat. Phys.} \textbf{\bibinfo{volume}{14}}, \bibinfo{pages}{546--549} (\bibinfo{year}{2018})}. \bibitem{fushman2008} \bibinfo{author}{I. Fushman}, \bibinfo{author}{D. Englund}, \bibinfo{author}{A. Faraon}, \bibinfo{author}{N. Stoltz}, \bibinfo{author}{P. Petroff}, and \bibinfo{author}{J. Vu\v{c}kovi\'{c}}. \newblock \emph{\bibinfo{title}{Controlled Phase Shifts with a Single Quantum Dot}}. \newblock \href{https://science.sciencemag.org/content/320/5877/769.abstract}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{320}}, \bibinfo{pages}{769--772} (\bibinfo{year}{2008})}. \bibitem{desantis2017} \bibinfo{author}{L. De Santis}, \bibinfo{author}{C. Ant\'{o}n}, \bibinfo{author}{B. Reznychenko}, \bibinfo{author}{N. Somaschi}, \bibinfo{author}{G. Coppola}, \bibinfo{author}{J. Senellart}, \bibinfo{author}{C. G\'{o}mez}, \bibinfo{author}{A. Lema\^{i}tre}, \bibinfo{author}{I. Sagnes}, \bibinfo{author}{A.~G. White}, \bibinfo{author}{L. Lanco}, \bibinfo{author}{A. Auff\`{e}ves}, and \bibinfo{author}{P. Senellart}. \newblock \emph{\bibinfo{title}{A solid-state single-photon filter}}. \newblock \href{https://www.nature.com/articles/nnano.2017.85}{\bibinfo{journal}{Nat. Nanotechnol.} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{663--667} (\bibinfo{year}{2017})}. \bibitem{sun2018} \bibinfo{author}{S. Sun}, \bibinfo{author}{H. Kim}, \bibinfo{author}{Z. Luo}, \bibinfo{author}{G.~S. Solomon}, and \bibinfo{author}{E. Waks}. \newblock \emph{\bibinfo{title}{A single-photon switch and transistor enabled by a solid-state quantum memory}}. \newblock \href{https://science.sciencemag.org/content/361/6397/57}{\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{361}}, \bibinfo{pages}{57--60} (\bibinfo{year}{2018})}. \end{thebibliography} \section{SUPPLEMENTAL MATERIAL} \label{supplement} \section{Simulation Model Implemented with QuTip} We use the python toolkit QuTip \cite{johansson2012supplement} to simulate our teleportation experiment. The underlying theoretical modelling is based on a cavity input-output theory adapted from \cite{kuhn2015, hacker2019}. In total, we consider four possible scattering modes of a photon after the interaction with the network node. These modes are the cavity reflection, the cavity transmission, scattering on the mirrors and the scattering via the atom. The physical system is modelled with a concatenation of four beam splitters each corresponding to one of the four possible output modes of the system. The reflection of the cavity is additionally equipped with a phase shift operator to take into account the phase shift occurring in the reflection process \cite{duan2004supplement}. 
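As a generic illustration of one such building block (not the actual simulation code; the Fock-space truncation, the mode labels, and the beam-splitter convention are arbitrary choices for this sketch), a beam splitter acting on two truncated optical modes can be written in QuTiP as follows. The model described next concatenates four such couplings, one per scattering mode, together with the phase-shift operator.
\begin{verbatim}
import numpy as np
from qutip import destroy, qeye, tensor

N = 3                               # Fock-space truncation per mode (sketch)
a = tensor(destroy(N), qeye(N))     # mode that is kept (e.g. cavity reflection)
b = tensor(qeye(N), destroy(N))     # mode that is later traced out (e.g. loss)

def beam_splitter(transmission):
    """Unitary mixing the two modes with the given intensity transmission."""
    theta = np.arccos(np.sqrt(transmission))
    return (theta * (a.dag() * b - a * b.dag())).expm()

U = beam_splitter(0.51)             # e.g. the 51% fiber-link transmission below
\end{verbatim}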
The evolution of the initial atom-light state is governed by the operator built from the four beam splitters and the phase-shift operation. In the end, the atom scattering, the mirror losses and the cavity transmission are traced out to obtain the result for the desired reflection mode of the cavity.\\ In the experiment, we have two cavity systems connected with an optical fiber that acts as a quantum channel. This channel is modelled in the theory as a retarder and a beam splitter that describe a lossy depolarizing channel. \\ Eventually, the photons are measured with single-photon detectors. We simulate their non-unity efficiency ($\eta\approx90\%$ at $\lambda=780\,\mathrm{nm}$) and non-zero dark counts ($9\,\mathrm{Hz}$) with a beam splitter where one port is supplied with thermal noise. \\ The simulation evolves the atom-atom-photon state through the beam splitters describing the two reflections, the lossy depolarizing fiber and the imperfect photon detectors. Afterwards, the projective measurement of the light polarization is included. The simulation then outputs the resulting final density matrix. The theory uses experimental parameters such as the mode matching of the photons to the cavities, the state preparation fidelities, the atomic coherence times and the dark counts of the detectors. These parameters were determined in independent characterization measurements. \label{subsec:supplement} \section{Summary of the Experimental Parameters} Here, we give a summary of the most important experimental parameters. The protocol starts by optically pumping the two atoms to the state $\ket{\uparrow_z}$. This is achieved with a fidelity of $99\%$ in either of the setups. $\pi$ pulses are performed within $8\,\mathrm{\mu s}$ and yield residual populations of $3\%$ in $\ket{\uparrow_z}$. The atomic coherence time in each of the two systems is approximately $400\,\mathrm{\mu s}$.\\ The cavities are actively tuned to the $\ket{\uparrow_z}\leftrightarrow\ket{e}$ transition, where the cavity QED parameters assume the values summarized in Table \ref{tab:parametersPQ}. Both systems operate in the strong coupling regime. \begin{table}[htbp] \centering \begin{tabular}{|c|c|c|}\hline parameter & node 1 (Bob) & node 2 (Alice) \\ \hline\hline $\kappa (\text{MHz})$ & 2.5 & 2.8 \\ $\gamma (\text{MHz})$ & 3.0 & 3.0 \\ $g (\text{MHz})$ & 7.6 & 7.6 \\ \hline $C$ & 3.9 & 3.4\\ \hline \end{tabular} \caption[Table]{Cavity QED parameters of the two network nodes. The given parameters are the cavity linewidth $\kappa$, the atomic polarization decay rate $\gamma$ and the atom-cavity coupling strength $g$. We define the cooperativity of the atom-cavity system as $C=g^2/(2\kappa\gamma)$.} \label{tab:parametersPQ} \end{table} Between the two setups, the light traverses various optical elements which introduce losses. The 60-m fiber with the fiber circulator, color filters, mirror and lens surfaces lead to a total transmission of $51\%$ between the two resonators. Additionally, the reflectivities of the two cavity systems are $60\%$ and $55\%$ on resonance, respectively. The detection efficiency for light after the second cavity system amounts to $50\%$ and includes a second passage through the fiber circulator, incoupling and absorption losses of the respective fiber and the detection efficiency of our single-photon detectors. The total probability to detect a single photon after propagation of a weak coherent pulse (average photon number $\braket{n}=0.07$) through the entire system amounts to $6\times 10^{-3}$. 
This is also the overall efficiency of our teleportation protocol for this mean photon number. We run the experiment at a repetition rate of $1\,\mathrm{kHz}$, and thus achieve a teleportation rate of $6\,\mathrm{Hz}$. \section{Detailed Description of the Teleportation Protocol} Here we describe the experimental protocol in more detail and calculate the individual states created at the different steps of the teleportation protocol.\\ Given that there is an atom available at each node, we first initialize them in $\ket{\uparrow_z}$. The optical pumping of Bob's atom takes $200\,\mathrm{\mu s}$ while it takes $240\,\mathrm{\mu s}$ for Alice's atom. In both network nodes, we employ a pair of Raman lasers to initialize the desired superposition states. We follow the convention of \cite{nielsen2000} and define a $\theta=\pi/2$ rotation around the $y-$axis with the rotation matrix \begin{equation}R_y(\theta)=\begin{pmatrix}\cos\theta/2 & -\sin\theta/2 \\\sin\theta/2 & \cos\theta/2\\\end{pmatrix}\overset{\theta=\pi/2}{=}\frac{1}{\sqrt{2}}\begin{pmatrix}1 & -1 \\1 & 1\\\end{pmatrix}.\end{equation} On Bob's side, this transformation is used to generate the state $\frac{1}{\sqrt{2}}(\ket{\uparrow_z}+\ket{\downarrow_z})$ while Alice prepares a general state of the form $\alpha\ket{\uparrow_z}+\beta\ket{\downarrow_z}$. Therefore the starting state of the teleportation protocol can be written as a separable state of the form \begin{equation} \begin{split} &\frac{1}{\sqrt{2}}(\ket{\uparrow_z}+\ket{\downarrow_z})\otimes(\alpha\ket{\uparrow_z}+\beta\ket{\downarrow_z})\\ &=\frac{1}{\sqrt{2}}(\alpha\ket{\uparrow_z\uparrow_z}+\beta\ket{\uparrow_z\downarrow_z}+\alpha\ket{\downarrow_z\uparrow_z}+\beta\ket{\downarrow_z\downarrow_z}). \end{split} \end{equation} The photon is reflected from the two cavity QED systems successively. It has a Gaussian envelope with a full width at half maximum of $1\,\mathrm{\mu s}$ and an antidiagonal polarization $\ket{\text{A}}$. Due to the atom-photon gate mechanism, a reflection yields \begin{equation} \begin{split} \ket{\downarrow_z}\ket{\text{A}}&\rightarrow\ket{\downarrow_z}\ket{\text{A}}\\ \ket{\uparrow_z}\ket{\text{A}}&\rightarrow\ket{\uparrow_z}\ket{\text{D}}. \end{split} \end{equation} Applying this to the reflection from Bob's cavity, the resulting atom-photon state is $\frac{1}{\sqrt{2}}(\ket{\uparrow_z}\ket{\text{D}}+\ket{\downarrow_z}\ket{\text{A}})$, a maximally entangled Bell state. This state is the required resource for our teleportation protocol. After the reflection from Alice's cavity, the combined atom-atom-photon state is then \begin{equation} \frac{1}{\sqrt{2}}\Big(\alpha\ket{\uparrow_z\uparrow_z}\ket{\text{A}}+\beta{\ket{\uparrow_z\downarrow_z}}\ket{\text{D}}+\alpha\ket{\downarrow_z\uparrow_z}\ket{\text{D}}+\beta\ket{\downarrow_z\downarrow_z}\ket{\text{A}}\Big). \end{equation} Now a $\pi/2$ rotation around the $y-$axis is applied to Alice's atom. This results in the state \begin{align} \frac{1}{2} &\Big(\alpha\ket{\uparrow_z\uparrow_z}\ket{\text{A}}+\alpha\ket{\uparrow_z\downarrow_z}\ket{\text{A}}+\alpha\ket{\downarrow_z\uparrow_z}\ket{\text{D}}+\alpha\ket{\downarrow_z\downarrow_z}\ket{\text{D}} \nonumber\\ &-\beta\ket{\uparrow_z\uparrow_z}\ket{\text{D}}+\beta\ket{\uparrow_z\downarrow_z}\ket{\text{D}}-\beta\ket{\downarrow_z\uparrow_z}\ket{\text{A}}+\beta\ket{\downarrow_z\downarrow_z}\ket{\text{A}}\Big). 
\end{align} The measurement of the polarization state of the photon in the A/D basis projects the two-atom state to \begin{align} \frac{1}{\sqrt{2}}\Big(\alpha\ket{\uparrow_z\uparrow_z}+\alpha\ket{\uparrow_z\downarrow_z}-\beta\ket{\downarrow_z\uparrow_z}+\beta\ket{\downarrow_z\downarrow_z}\Big) \end{align} for the $\ket{\text{A}}$ polarization and to \begin{align} \frac{1}{\sqrt{2}}\Big(\alpha\ket{\downarrow_z\uparrow_z}+\alpha\ket{\downarrow_z\downarrow_z}-\beta\ket{\uparrow_z\uparrow_z}+\beta\ket{\uparrow_z\downarrow_z}\Big) \end{align} for the $\ket{\text{D}}$ polarization. As a last step of the protocol, the two feedback pulses are applied. For a photon detection in $\ket{\text{D}}$, a $\pi$ rotation around the x-axis is applied to Bob's atom. In matrix form it reads \begin{equation} R_x(\theta)=\begin{pmatrix}\cos\theta/2 & -i\sin\theta/2 \\-i\sin\theta/2 & \cos\theta/2\\\end{pmatrix}\overset{\theta=\pi}{=}\begin{pmatrix}0 & -i \\-i & 0\\\end{pmatrix}. \end{equation} Up to a global phase, this transformation inverts the roles of $\ket{\uparrow_z}$ and $\ket{\downarrow_z}$. As a result of this feedback, the two-atom state becomes \begin{equation} \frac{1}{\sqrt{2}}\Big(\alpha\ket{\uparrow_z\uparrow_z}+\alpha\ket{\uparrow_z\downarrow_z}-\beta\ket{\downarrow_z\uparrow_z}+\beta\ket{\downarrow_z\downarrow_z}\Big). \end{equation} For the second feedback, we measure the state of Alice's atom. If the result is $\ket{\uparrow_z}$, a Z-gate (phase gate) is applied to Bob's atom. For $\ket{\downarrow_z}$, no feedback is applied. In either case, the resulting state on Bob's side is $\alpha\ket{\uparrow_z}+\beta\ket{\downarrow_z}$ and the teleportation is complete. Table \ref{tab:feedback} shows an overview of the different measurement outcomes and the different feedback signals. \\ \\ \begin{table}[htb] \centering \begin{tabular}{|c|c|} \hline Measured state of photon / Alice's atom & Feedback on Bob's atom \\ [0.5ex] \hline\hline $\ket{\text{A}}$& none\\ \hline $\ket{\text{D}}$ &$R_x(\pi)$ \\ \hline $\ket{\uparrow_z}$& Z-gate\\ \hline $\ket{\downarrow_z}$& none\\ \hline \end{tabular} \caption[Table]{Feedback used in the teleportation protocol. The table lists the feedback pulses that are applied to Bob's atom for certain measured states of the photon and Alice's atom.} \label{tab:feedback} \end{table} \end{document}
Spatial interplay of tissue hypoxia and T-cell regulation in ductal carcinoma in situ
Faranak Sobhani, Sathya Muralidhar, Azam Hamidinekoo, Allison H. Hall, Lorraine M. King, Jeffrey R. Marks, Carlo Maley, Hugo M. Horlings, E. Shelley Hwang & Yinyin Yuan
npj Breast Cancer volume 8, Article number: 105 (2022)
Hypoxia promotes aggressive tumor phenotypes and mediates the recruitment of suppressive T cells in invasive breast carcinomas. We investigated the role of hypoxia in relation to T-cell regulation in ductal carcinoma in situ (DCIS). We designed a deep learning system tailored for the tissue architecture complexity of DCIS, and compared pure DCIS cases with the synchronous DCIS and invasive components within invasive ductal carcinoma cases. Single-cell classification was applied in tandem with a new method for DCIS ductal segmentation in dual-stained CA9 and FOXP3, whole-tumor section digital pathology images. Pure DCIS typically has an intermediate level of colocalization of FOXP3+ and CA9+ cells, but in invasive carcinoma cases, the FOXP3+ (T-regulatory) cells may have relocated from the DCIS and into the invasive parts of the tumor, leading to high levels of colocalization in the invasive parts but low levels in the synchronous DCIS component. This may be due to invasive, hypoxic tumors evolving to recruit T-regulatory cells in order to evade immune predation. Our data support the notion that hypoxia promotes immune tolerance through recruitment of T-regulatory cells, and furthermore indicate a spatial pattern of relocalization of T-regulatory cells from DCIS to hypoxic tumor cells. Spatial colocalization of hypoxic and T-regulatory cells may be a key event and useful marker of DCIS progression. 
Ductal carcinoma in situ (DCIS) of the breast is the most common mammographically detected breast cancer. This "pre-invasive" lesion may progress to invasive ductal carcinoma, but does so at a relatively low frequency (14–53% over 10–15 years)1. Nonetheless, it is commonly treated with extensive surgery, radiation, and hormonal therapy even though most of these lesions would never progress to invasive ductal carcinoma. Thus, there is a pressing clinical need to stratify the risk of DCIS tumors into those in need of intervention and those that can be safely monitored without intervention. Characterizing the evolvability of DCIS into invasive ductal carcinoma could address this need, by predicting those that have a high likelihood of evolving to malignancy versus those that are likely to remain indolent. A recent study based on evolutionary genomic models has categorized DCIS evolution to invasive ductal carcinoma into four models, highlighting its heterogeneity2. Since genomics is not the sole driver of tumor behavior, phenotypic characterization of DCIS and its tumor microenvironment, using markers of hypoxia, immune response, and matrix organization among others, will help unveil the influences of the ecological forces driving the evolution of DCIS. Recent studies highlighted the importance of tumor-infiltrating lymphocytes in the progression from DCIS to invasive ductal carcinoma3 and the risk of local and metastatic recurrences. High-grade DCIS contains a higher percentage of FOXP3+ cells compared to the non-high-grade DCIS4,5. Consistent with observation in invasive ductal carcinoma that high numbers of FOXP3+ regulatory T cells (Tregs) relative to CD8+ T cells predicts decreased progression-free survival and overall survival3,6, an increase in the numbers of FOXP3+ Tregs or a decrease in CD8/FOXP3 ratio in DCIS are associated with increased recurrence risk3,6,7. While these studies underscore the role of FOXP3+ T cells in the evolution of DCIS, the low frequency of these cells in DCIS (<10% of all T cells) and their highly variable topological distribution make quantitative assessment and immune scoring a challenging task8. Furthermore, factors promoting Treg cell recruitment in DCIS remain elusive. Hypoxia, a condition defined as lacking or low in oxygen, has been shown to be increased in IDC (invasive regions adjacent to ducts in IDC/DCIS samples) compared with DCIS9. It has also been shown to modulate the differentiation and function of T lymphocytes and mediate recruitment of suppressive and proangiogenic T-cell subsets10. In invasive ductal carcinoma, hypoxia measured by CA9 positivity has been shown to promote the recruitment of Tregs defined using FOXP3-positive cells in both basal and non-basal subtypes11. In DCIS, CA9 was found to be expressed more frequently in high-grade DCIS associated with central necrosis, compared with low-grade DCIS and normal epithelium9. However, very little is known about the interplay between hypoxia and T-cell regulation in DCIS and how this influences the progression from DCIS to invasive ductal carcinoma. With advancing computing techniques, remarkable progress in machine learning has been made on the objective assessment of cellular context in digitized histological sections. Histological samples can provide the spatial context of diverse cell types coexisting within the microenvironment. Advanced computer-vision techniques have been developed for spatial mapping of cells in histological samples. 
This has enabled the applications of experimental and analytical tools from ecology to cancer research, generating system-level knowledge of microenvironmental spatial heterogeneity. To enable spatial mapping of hypoxia and T-cell regulation in DCIS, we designed an end-to-end deep learning framework for histology image analysis. We hypothesize, and provide preliminary data, that Treg recruitment is spatially dependent on the hypoxic state. Our primary aims were: (1) to develop a fully automated and versatile pipeline for digital pathology that can accurately classify cells based on hypoxic status and T-regulatory cell marker in immunohistochemistry (IHC) samples; (2) to tailor a powerful deep learning approach for DCIS, thereby dissecting the complex tissue architecture of DCIS. This enabled us to incorporate information from single-cell protein expression, identified in Aim 1, with global DCIS ductal architecture in whole-section tumor sections; (3) to compare the spatial dependency of T-cell regulation on the hypoxic microenvironment in pure DCIS cases and DCIS components identified concurrently with invasive cells in invasive ductal carcinoma. Mapping DCIS intra-tumor heterogeneity of CA9 and FOXP3 expression To investigate the intra-tumor heterogeneity of tumor hypoxia and T-cell regulation in breast tumors, 99 whole-tumor sections stained with dual marker CA9/FOXP3 from cases clinically classified as pure DCIS (n = 30) or invasive carcinomas with synchronous DCIS (IDC/DCIS, n = 34) were included in this study (Table 1). Consistent with the previous publications3,9 and upon preliminary visual inspection, CA9 expression was rare in normal epithelium and benign lesions, but was present focally in DCIS for both pure DCIS and IDC/DCIS samples. High spatial heterogeneity of CA9 staining in DCIS was evident in some tumors (Fig. 1a). Table 1 Demographics of patients in the dataset comprising pure DCIS and IDC/DCIS samples. Fig. 1: Studying intra-tumor heterogeneity of hypoxia in DCIS using deep learning and digital pathology. a An illustrative example of a DCIS tumor with high spatial intra-tumor heterogeneity of hypoxia. Shown are images of IHC dual staining with CA9 and FOXP3, cells were classified into five types based on their expression of CA9 and FOXP3 and morphological features. b The deep learning pipeline using convolutional neural networks (CNNs) for single-cell analysis. c Generative adversarial networks (GANs) for semantic segmentation of individual DCIS ducts. d An example of DICS tumor where individual DCIS ducts have been segmented using GANs. Two high-resolution examples show ground truth obtained from annotations by pathologists and output from GANs. Scale bar represents 100 µm. To analyze cell expression in the context of DCIS ductal structure, we designed an end-to-end framework for both single-cell classification and semantic segmentation of DCIS ducts (Fig. 1a–d). For the objective and accurate identification and classification of single cells based on their CA9 and FOXP3 expression, we developed a deep learning approach using convolutional neural networks (CNNs, Fig. 1a, b), using a total of 35,883 single-cell annotations (Methods). Two independent cell identification and three cell classification models were evaluated (Tables 2 and 3, respectively). For single-cell detection, spatially constrained convolutional neural network (SCCNN) outperformed ConCORDe-Net (F1 score = 0.80 for SCCNN and 0.67 for ConCORDe-Net, Table 2). 
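As an aside on how such a detection F1 score can be computed, the short sketch below matches predicted and annotated nuclear centres within a fixed pixel radius; the greedy matching strategy and the 12-pixel radius are illustrative assumptions and not necessarily the criterion used for Table 2.

import numpy as np

# Illustrative detection F1 for predicted vs annotated cell centres.
# The matching radius and greedy strategy are assumptions of this sketch.
def detection_f1(pred, truth, radius=12.0):
    """pred, truth: (N, 2) arrays of (x, y) centres in pixels."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    if len(pred) == 0 or len(truth) == 0:
        return 0.0
    d = np.linalg.norm(pred[:, None, :] - truth[None, :, :], axis=-1)
    matched, tp = set(), 0
    for i in np.argsort(d.min(axis=1)):        # best-matching predictions first
        for j in np.argsort(d[i]):             # nearest unmatched annotation
            if d[i, j] > radius:
                break
            if j not in matched:
                matched.add(j)
                tp += 1
                break
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# Toy example: three annotated nuclei, one missed, one spurious detection.
truth = [(10, 10), (50, 52), (90, 15)]
pred = [(12, 9), (49, 55), (200, 200)]
print(round(detection_f1(pred, truth), 2))     # 0.67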
For cell classification, SCCNN achieved the highest test accuracy of 88.6% (accuracy across all cell classes, visual representation in Fig. 2) compared to Inception_v2 and Inception_v3 in a testing set of 10 randomly selected sections and hence was selected as the final model. Confusion matrix of predicted results versus true class (pathologist's annotations) for five cell classes (CA9+/− epithelial, FOXP3+/− lymphocyte, and stromal cell) are depicted in Table 3. Subsequently, single-cell detection and classification models were applied to all 99 IHC images used in this study, generating 22,121,761 single-cell identities and spatial locations. Table 2 Performance evaluation of deep learning methods for single-cell detection. Table 3 Confusion matrix of the prediction results along with the row and column summaries displaying the percentages of correctly and incorrectly classified observations for each true/predicted class. Fig. 2: Single-cell classification for CA9/FOXP3-stained immunohistochemistry samples of breast tumors. a Examples of five cell classes. b Examples of cell annotations by pathologists, collected with white boxes as shown, alongside deep learning output. c High-resolution images showing FOXP3– and FOXP3+ lymphocyte examples. Deep learning-enabled automated segmentation of DCIS ducts For the detection and segmentation of DCIS ducts in CA9/FOXP3 IHC images, we developed a new model based on generative adversarial network (GAN), specifically accounting for their complex tissue architecture and highly variable shapes and sizes. (Fig. 1c, d). Given the need to capture large ductal regions as well as the architectural details, we used an extended version of GANs12 for analyzing high-resolution histology images and generating semantic label maps corresponding to the target regions (DCIS ducts in our case). This model enabled us to analyze images at high resolutions and predict ducts of variable size and shape (Fig. 3). The network was trained on 18 whole slide images and the performance of the model was tested on annotations from eight unseen slides, using a total of 1500 hand-drawn annotations of individual ducts. To investigate the generalizability of the model, we performed two separate experiments based on different training sets and holdout cross-validation sets (we named them Fold 1 and Fold 2). In each experiment, 18 randomly selected whole slide images were used to make the training set and perform model training. The model performance was then evaluated on the holdout validation set (8 unseen slides) in each experiment. This model achieved the average Dice score of 0.85 and 0.95 for the segmentation performance in the first and the second experiments, respectively. The reported average Dice score is calculated using all the image tiles in the holdout validation set in each experiment (Table 4). The proposed model performs well on the cribriform, solid, and comedo categories with recognizable morphometrics. For example, cribriforms show patterns of gaps between cancer cells. In solid pattern ducts, cancer cells completely fill the affected breast ducts and comedo-type ducts are usually filled by large, markedly cancer cells. The common characteristic of these three DCIS subtypes is that they show distinct boundaries which can be detected better compared to the papillary type ducts. In addition, the limited amount of training data on papillary ducts due to their lower frequency also impacted the performance. Fig. 
3: Examples of DCIS duct segmentation in CA9/FOXP3-stained immunohistochemistry samples of breast tumors. a Deep learning output according to DCIS histology types. b An example of pure DCIS samples, with high-resolution images showing a comparison between pathologists' DCIS duct annotation (red) and deep learning DCIS duct segmentation (blue). Table 4 Evaluation metrics calculated for the DCIS segmentation. The DCIS segmentation model was applied to IDC/DCIS whole-tumor section IHC images (n = 56), identifying and segmenting on average 100 DCIS ducts per section. Notably, the automated DCIS duct segmentation was observed to reliably segment ducts in DCIS histology types such as cribriform, papillary, solid and comedo (Fig. 3). This enabled us to automatically differentiate synchronous DCIS components from IDC components in IDC/DCIS samples, facilitating spatial analysis for individual components in the next step. Validating the single-cell classification by comparison with pathological scores of CA9 positivity As an additional validation, we compared pathologist's scoring of ductal CA9 positivity (Methods) with abundance of CA9+ cells predicted by deep learning. To this effect, whole-tumor sections with low abundance of deep learning-predicted CA9+ epithelial cells (0–1%, n = 80) belonged predominantly (n = 74, 92%) to the Pathologist_Score_Negative (samples graded by pathologist as negative for epithelial CA9 expression) with few samples (n = 6, 8%) belonging to Pathologist_Score_positive (samples with varying degree of CA9 positivity). The proportion of samples with a higher abundance of deep learning-predicted CA9+ epithelial cells (1–5% and >5%), did not vary between the two pathologist score groups (Table 5). Factors contributing to the discrepancy between deep learning and pathological scoring include cell misclassification by deep learning, the lack of ability of deep learning to adapt to some of the artifacts presented in the slides, such as air bubbles, folds, blurring, etc. In addition, discrepancies when comparing a slide-level assessment with a single-cell-level automated quantification could be due to weak cytoplasmic CA9 expression in a subset of samples. This could explain some of the false-positive deep learning predictions in samples graded negative for CA9 expression by pathologists. Table 5 Comparing the pathological scores of CA9 in DCIS cells with deep learning. Spatial colocalization of CA9 and FOXP3-positive cells in pure DCIS and IDC/DCIS samples To quantify and compare hypoxia and Treg cell spatial colocalization in the microenvironments of pure DCIS (n = 44 IHC images) and IDC/DCIS (n = 56) samples, the Morisita–Horn index, an ecological measure of community structure to quantify the extent of spatial colocalization or overlap between two spatial variables13 was used. Morisita indices range from 0 to 1, with 0 indicating spatial segregation and 1 indicating maximal colocalization between two spatial variables. For pure DCIS samples, a single value of Morisita colocalization per whole-tumor section was computed. For IDC/DCIS samples, Morisita colocalization for synchronous DCIS (ductal regions in IDC/DCIS samples) and IDC (invasive regions adjacent to ducts in IDCIS samples) were computed per component (Methods). Colocalization between FOXP3+ and FOXP3– lymphocytes with CA9+ and CA9– epithelial cells was compared across the aforementioned sample groups (Table 6 and Fig. 4b). 
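For reference, one common form of the Morisita-Horn index for two cell populations is computed by gridding the tissue into bins, counting each population per bin, and combining the counts as in the sketch below; the regular grid and the bin size are illustrative choices of ours rather than the exact implementation used for this study, which computes the index per whole-tumor section or per component.

import numpy as np

# Sketch of the Morisita-Horn colocalization index for two cell populations
# (e.g. FOXP3+ lymphocytes and CA9+ epithelial cells). The regular grid and
# the bin size are assumed parameters of this sketch.
def morisita_horn(coords_a, coords_b, bin_size=250.0):
    """coords_*: (N, 2) arrays of cell positions (same units as bin_size)."""
    coords_a = np.asarray(coords_a, float)
    coords_b = np.asarray(coords_b, float)
    pts = np.vstack([coords_a, coords_b])
    lo = pts.min(axis=0)
    n_bins = np.ceil((pts.max(axis=0) - lo) / bin_size).astype(int) + 1
    def counts(c):
        idx = np.floor((c - lo) / bin_size).astype(int)
        flat = idx[:, 0] * n_bins[1] + idx[:, 1]
        return np.bincount(flat, minlength=n_bins[0] * n_bins[1])
    x, y = counts(coords_a), counts(coords_b)
    X, Y = x.sum(), y.sum()
    if X == 0 or Y == 0:
        return np.nan
    return 2.0 * np.sum(x * y) / (
        (np.sum(x**2) / X**2 + np.sum(y**2) / Y**2) * X * Y)

# Co-distributed populations give values near 1, segregated ones near 0.
rng = np.random.default_rng(0)
mixed = rng.uniform(0, 2000, size=(500, 2))
print(round(morisita_horn(mixed, mixed + rng.normal(0, 20, (500, 2))), 2))  # close to 1
print(round(morisita_horn(mixed, mixed + np.array([5000.0, 0.0])), 2))      # 0.0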
Among the pairwise comparisons (tested using pairwise Wilcoxon rank-sum tests), IDC regions had significantly higher FOXP3+ and CA9+ colocalization, compared to synchronous DCIS regions (p = 0.0004) and pure DCIS samples (p = 0.0007) (Table 6). Similarly, and consistent with previous reports5,14, we report significantly higher FOXP3+ lymphocytes abundance (number of FOXP3+ lymphocytes/total number of lymphocytes) in IDC than in synchronous DCIS regions of the IDC/DCIS samples (p = 3.8e–6), indicating a preferential localization of FOXP3+ lymphocytes to the invasive components in IDC/DCIS (Fig. 4c). To ensure that cellular abundance and clinical covariates do not confound colocalization, we report that the difference in FOXP3+ CA9+ colocalization between IDC regions and pure DCIS samples is independent of abundance of FOXP3+ lymphocytes and CA9+ epithelial cells (which does not vary significantly between these groups: p = 0.44 and p = 0.48 respectively, Fig. 4c), ER status ER status (multivariate p value adjusted for ER status = 0.004) and grade (multivariate p value adjusted for DCIS grade = 0.01). Consistent with this observation, there was no significant difference between CA9%, FOXP3%, or FOXP3-CA9 colocalization between ER– and ER+ subsets (Supplementary Fig. 1). Table 6 Comparison of colocalization between FOXP3+/− lymphocytes with CA9+/− epithelial cells across IDC, synchronous DCIS, or pure DCIS sample groups. Fig. 4: Applying Morisita index to measuring spatial colocalization of cells in DCIS. a Voronoi tessellation of tumor region in representative pure DCIS samples (left pane) and IDC/DCIS samples (representative image-right pane), denoting normal DCIS ducts (blue polygons), IDC component (green polygons), and synchronous DCIS (red polygons). Gray polygons (in both DCIS and IDC/DCIS samples) represent other cell types (fibroblasts, normal epithelium, adipose tissue, and artifacts) excluded for analysis. Yellow-shaded regions represent ducts segmented by automated duct segmentation b Boxplots comparing colocalization between FOXP3+ and FOXP3– lymphocytes with CA9+ epithelial cells in pure DCIS samples (n = 44), IDC (n = 56), and synchronous DCIS (n = 27) regions. c Boxplots comparing abundance of FOXP3+ lymphocytes, CA9+ epithelial cells in pure DCIS samples (n = 44), IDC (n = 56), and synchronous DCIS (n = 27) regions. FOXP3+/− lymphocyte abundance = number of FOXP3+/− lymphocytes/total number of lymphocytes. CA9+/− epithelial cell abundance = number of CA9+/− epithelial cells/total number of epithelial cells. d Proposed model for hypoxic epithelial cells promoting Tregs recruitment selected during DCIS progression. ***p < 0.001. P values from pairwise Wilcoxon rank-sum test, adjusted using Holm method for multiple testing. We explored the differential ductal microenvironments between pure DCIS and synchronous DCIS regions, i.e., ducts adjacent to invasive cancer cells. To this effect, pure DCIS samples had significantly higher FOXP3+ CA9+ colocalization (p = 0.0007, Fig. 4a) as well as abundance of CA9+ (p = 2.3e–9) and FOXP3+ cells (p = 4.4e–6, Fig. 4c), compared to synchronous DCIS regions. The difference in this spatial pattern between pure DCIS and synchronous DCIS remained significant after adjusting for FOXP3+ lymphocyte (adjusted p = 0.0005) and CA9+ epithelial cell abundance (adjusted p = 0.0012). Within the ER+ subsets of these groups, the differences between groups remained significant (Supplementary Fig. 2). 
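The comparison scheme used throughout this section can be reproduced with standard tooling. The sketch below uses synthetic per-sample colocalization values purely to illustrate the pairwise Wilcoxon rank-sum tests and the Holm adjustment named in the figure legend; scipy and statsmodels provide both.

import numpy as np
from itertools import combinations
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

# Illustrative data only: `coloc` stands in for the per-sample FOXP3+/CA9+
# Morisita-Horn values of the three groups (group sizes as in the text).
rng = np.random.default_rng(1)
coloc = {
    "pure DCIS":        rng.beta(2, 5, 44),
    "IDC":              rng.beta(4, 3, 56),
    "synchronous DCIS": rng.beta(1.5, 6, 27),
}

pairs = list(combinations(coloc, 2))
raw_p = [ranksums(coloc[a], coloc[b]).pvalue for a, b in pairs]
adj_p = multipletests(raw_p, method="holm")[1]   # Holm-adjusted p values

for (a, b), p, q in zip(pairs, raw_p, adj_p):
    print(f"{a} vs {b}: p = {p:.2e}, Holm-adjusted p = {q:.2e}")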
In comparison, there was no significant difference in colocalization of FOXP3– lymphocytes with CA9+ epithelial cells between the three groups (Table 6). Taken together, we report that the spatial microenvironmental phenotype of IDC regions differs from pure DCIS samples, with increased colocalization of FOXP3+ lymphocytes and CA9+ epithelial cells in IDC regions, independent of cellular abundance and ER status. Notably, our study revealed that the ductal microenvironment of pure DCIS and synchronous DCIS vary, with significant differences in spatial organization of hypoxic CA9+ epithelial cells and FOXP3+ lymphocytes. This study provides evidence that Treg recruitment is spatially dependent on the hypoxic microenvironment in DCIS. Hypoxia is thought to promote the recruitment of T-regulatory cells for increased immune tolerance and immune evasion10,11, but there is a lack of data on this in DCIS. Our analysis integrating deep learning, computational pathology, and spatial statistics on a customized IHC panel revealed a spatial pattern of preferential colocalization between FOXP3+ lymphocytes and CA9+ epithelial cells in DCIS. Compared with pure DCIS samples, the degree of CA9+ epithelial cell and FOXP3+ lymphocyte colocalization was significantly higher in the invasive compartment of invasive breast cancer (IDC), but significantly lower in the synchronous DCIS compartment. These differences were independent of the abundance of these cell types. Therefore, our study reiterates differential microenvironments between pure DCIS and IDC compartments14,15,16,17. However, we also present evidence that the ductal microenvironment of pure DCIS and synchronous DCIS vary, with significant differences in spatial organization of hypoxic CA9+ epithelial cells and FOXP3+ lymphocytes14,15,16,17. Based on these data, our proposed model is that hypoxic epithelial cells promoting Tregs recruitment are selected during DCIS progression, resulting in stronger, preferential colocalization of these cells within the invasive compartment (Fig. 4d). This study adds crucial data to the increasing body of evidence that adaptation to hypoxia as a result of evolutionary selection is key to transition from in situ to invasive cancer9,18,19. In addition, it suggests that the inflammatory program that hypoxia promotes through the recruitment of Tregs in DCIS are spatially focal events. We speculate that these focal events may influence the invasive potential of individual DCIS ducts and potentiate the invasion of the basement membrane. We are currently testing this hypothesis comparing cohorts of patients with invasive breast cancer recurrence after a diagnosis of pure DCIS, to patients with pure DCIS who have not had a diagnosis of invasive breast cancer. Other limitations include the lack of data on other immune cell subsets such as myeloid cells, B lymphocytes, and NK cells, which could add further insights to the immune landscape of DCIS; the limitation on sample size, which prevented further analysis with respect to HR status. Studies on more immune subsets, biological processes, and conditions including hypoxia that may drive the evolution of DCIS to invasive cancers are also ongoing in our laboratories. Nevertheless, our study defined a new hypoxic and immunosuppressive phenotype, which is the increased spatial colocalization pattern of hypoxic and Tregs identified using machine learning, which differs not only between IDC and synchronous DCIS regions but also pure DCIS samples. 
With additional validation studies, this new phenotype may be a useful biomarker to predict DCIS progression, and further guide patient selection for new therapeutic approaches to target hypoxia20. Our study was enabled by a new deep learning system design. Deep learning-based methodologies have facilitated a variety of applications in pathology, but the results are often limited to low-resolution images and small images such as tissue microarrays. Precise segmentation of DCIS ducts with highly variable shapes and sizes was not possible in these images. In this work, we generated high-resolution results using a deep learning approach with a robust adversarial loss and multi-scale architectures for the generator and the discriminator. This method generates consistent results given the same inputs, making the pipeline robust and reproducible. Recently, a U-Net-based deep learning method for the automated detection and simultaneous segmentation of DCIS ducts in H&E samples has been published by our group21. Interestingly, we found that this pipeline did not transfer well to the IHC domain, potentially due to the confounding color contrast of the staining and the less definitive features of DCIS, in a way that resonates with pathologists' experience of reading these stains. Our proposed pipeline, specifically designed for IHC and based on generative adversarial models, can capture architectural details of DCIS amidst color variations at high resolution. Besides facilitating detailed microenvironmental studies of these ducts, it paves the way for new studies of ductal morphology, adding a new dimension to genotype–phenotype analysis. In summary, this study highlights the importance of immune spatial heterogeneity in hypoxic tumor microenvironments. It warrants further investigation into detailed molecular activities modulating this phenotype, such as the secretion of chemokines such as CCL28 that induce T-regulatory cell recruitment22. Patient cohort The dataset consists of patient samples composed of pure DCIS disease ("pure DCIS" henceforth) or IDC/DCIS cases containing synchronous DCIS and invasive components (IDC). A total of 99 whole-tumor sections were obtained from formalin-fixed paraffin-embedded blocks from 64 patients. Patient demographics and baseline characteristics of the dataset are summarized in Table 1. Tissue sections of samples with pure DCIS (n = 43: 17 patients with 1 section and 13 patients with 2 sections) and IDC/DCIS samples (n = 56: 12 patients with 1 section and 22 patients with 2 sections) were stained and digitized (automated Aperio scanner; resolution = 0.5 µm/pixel; magnification = ×20). The study was approved by the institutional review board of Duke with a waiver of the requirement to obtain informed consent. ER and PR status were obtained from an IHC assay performed at the time of diagnosis (Dako ER pharmDx kit and Dako Link autostainer). Grade of pure DCIS (Table 1, column 2) and IDC/DCIS samples (Table 1, column 3) was based on pathology grading of the whole slide and of the invasive compartment of the IDC/DCIS samples23, respectively. All 99 whole-tumor sections used in this study were dual-stained for CA9 and FOXP3. Formalin-fixed paraffin-embedded tissues were dewaxed and 5 µm sections were cut. Antigen retrieval was performed by steaming in a 1X Citrate buffer (Sigma C9999). Dual staining was performed using the ImmPRESS Duet Double Staining Polymer kit (HRP Anti-Mouse IgG/AP Anti-Rabbit IgG, Vector labs, MP-7724) as per the manufacturer's instructions.
Sections were stained for cytoplasmic CA9 expression (ImmPACT Vector Red, magenta; primary antibody: rabbit anti-CA9, Novusbio #NB100-417) and nuclear FOXP3 expression (ImmPACT DAB, brown; primary antibody: mouse anti-FOXP3, ABCAM #ab20034), followed by hematoxylin counterstain. Pathologist's score of CA9 positivity in DCIS cells We used a pathologist-generated score (A.H.H.) of CA9 positivity in DCIS cells as a metric to validate the abundance of CA9+ epithelial cells predicted by our deep learning pipeline. The pathologist's scoring was performed independently, blinded to the deep learning prediction of CA9+ epithelial cells. The pathologist's score was based on the intensity of CA9 staining of DCIS cells, with 4 categories ranging from 0 (CA9 negative, no stain) to 3 (high CA9 staining intensity). The percentage of cells (across a slide) pertaining to each category was scored. For ease of comparison with our deep learning pipeline (which predicts CA9+/− and does not factor in intensity), we categorized the pathologist's score into the following groups: Pathologist_Score_Negative: samples in which 100% of DCIS epithelial cells were graded negative for CA9 expression (category 0). Pathologist_Score_Positive: samples with varying degrees of CA9 positivity in DCIS epithelial cells (0% in category 0 but >1% in categories 1–3). Deep learning pipelines for DCIS IHC histology The deep learning framework used to analyze pure DCIS and IDC/DCIS samples in this study consists of four parts. Tissue segmentation Fully automated tissue segmentation was performed to remove background and reduce noise and artifacts, allowing for computational efficiency and reduced processing time in subsequent image analysis steps. Tissue segmentation was performed using a pre-trained Micro-Net-51224,25. Each whole slide image was reduced to ×1.25 resolution, which was subsequently processed at multiple resolutions within the Micro-Net-512 architecture to capture contextual information and maintain salient features. Single-cell identification A deep learning model was used to identify all individual cells within a given IHC tissue section. The main objective of this step was to detect all nuclei in a whole slide image by locating nuclei center positions, regardless of their class labels. Briefly, an SCCNN25,26 was trained to predict the probability of a pixel being the center of a nucleus. The single-cell detection model was trained in a supervised manner based on pathologist-derived single-cell annotations. To this end, 26,345 single-cell annotations from 10 whole slide images (double stained for CA9 and FOXP3) were collected from a pathologist (H.M.H.). Two network architectures, SCCNN26 and ConCORDe-Net27, were tested and the architecture that produced single-cell detection with the highest accuracy was adopted. Single-cell classification A deep learning model was used to classify single cells (detected in the previous step) into one of five classes: stroma, FOXP3+ lymphocytes, FOXP3– lymphocytes, CA9+ epithelial cells, and CA9– epithelial cells. A softmax SCCNN26 network was used to train the single-cell classification model. Briefly, nuclear morphology features such as shape, size, color, and texture were considered as the main parameters for the model to distinguish between different cell classes. The single-cell classification model was trained in a supervised manner based on pathologist-derived (H.M.H.) annotations performed on 12 whole slide images.
A total of 35,883 single-cell annotations (2580 stromal cells, 1462 FOXP3+ lymphocytes, 15,413 FOXP3– lymphocytes, 4229 CA9+ epithelial cells, and 12,199 CA9– epithelial cells) were used for model training. Three network architectures (SCCNN, Inception_v2, and Inception_v3) were tested, and the architecture that produced single-cell classification with the highest accuracy was adopted. Whole slide images were split into tiles of size 2000 × 2000 pixels (with a pixel size of 0.5 µm/pixel). Quality control of annotations was performed at the tile level and expert consensus was obtained before using the respective tiles for training. Performance of the deep learning models for single-cell detection and classification was evaluated in independent samples (not used for training) by comparison with ground truth, i.e., the pathologist's annotations. In the case of the single-cell classification model, a five-fold cross-validation was performed to account for class imbalance and sampling effects. Annotations from ten whole-section tumor images (26,345 cells) were randomly divided into five equal groups. Class imbalance was taken into account while creating these five groups. For each cross-validation, four groups were chosen for training and one group for testing. From the four groups of training data, 20% of the annotations (5269 cells) were randomly selected for validation purposes. Therefore, we trained and tested each classifier five times separately on each of these groups. Three single-cell classification networks were trained on the created training data and their performances on the testing set were compared. Based on our results, the SCCNN obtained an average accuracy of 89%, Inception_v3 an average accuracy of 86%, and Inception_v2 an average accuracy of 82%. The SCCNN (see Fig. 2) therefore achieved the highest accuracy (89%) for cell classification in the validation set. DCIS duct detection and segmentation We describe a deep learning model for the simultaneous detection and segmentation of DCIS ducts from IHC images. An improved GAN architecture12 was used to train a deep learning model capable of delineating DCIS duct regions from surrounding tissue. DCIS duct segmentation using GANs We propose an improved GAN incorporating a coarse-to-fine generator and multi-scale discriminator architectures suitable for conditional image generation at a much higher resolution. This method is based on conditional GANs that use a robust adversarial learning objective together with new multi-scale generator and discriminator architectures. Using this network, the goal is to translate an input histology image from the IHC domain to the binary mask domain, given input-output (histology-mask) image pairs as training data. This model is composed of two main parts: (i) the generator and (ii) the discriminator. The objective of the generator (G) is to generate semantic label maps (in binary) from histology images (in RGB), while the discriminator (D) learns to distinguish ground truth (GT) images from the created masks (semantic label maps). This framework operates in a supervised setting and the training dataset is prepared as sets of pairs of corresponding images {x_i,y_i}, where x_i is a histology image and y_i is the corresponding ground truth label map, annotated by expert pathologists. The overall aim of the network is to model the distribution of the semantic label maps via Eq. (1), given the input histology tiles.
$$\min_G\,\max_{D_1,D_2,D_3}\sum_{k=1}^{3}\left\{E_{(x,y)}\left[\log D_k(x,y)\right]+E_x\left[\log\left(1-D_k\left(x,G(x)\right)\right)\right]\right\}$$ In Eq. (1), G denotes the generator, which contains two sub-networks: (i) G1, the global generator, which generates the initial prediction (semantic map), and (ii) G2, the local generator, which enhances the quality of the output from G1 and generates the output image at a higher resolution (i.e., ×2). First, a histology image of resolution [1024 × 1024 × 3] is passed through the sequential components of the global generator to output a binary image of resolution [1024 × 1024]. Subsequently, the output of the convolutional front-end of the local generator plus the last feature map of the global generator are fed to the residual block of the local generator, effectively integrating the global information captured by G1. In Eq. (1), D denotes the discriminator part, which employs multiple discriminators with identical network structures that operate at different scales of an image pyramid. These discriminators are trained to differentiate the GT and predicted masks at different scales, which encourages coarse-to-fine learning of the generator. We have used three discriminators (k = 3) as suggested in ref. 12, all of which have identical architectures but operate at different scales (coarse to fine). This specific architecture combination helps the generator to learn a consistent global view of the image while improving the details of the generated image. The model was trained on hand-drawn annotations of more than 3600 individual ducts (H.M.H.) pertaining to 18 whole slide images. The performance of the model was evaluated on 1500 hand-drawn individual duct annotations (H.M.H.) pertaining to an independent set of 8 whole slide images, i.e., images not used for model training. Segmentation efficiency was evaluated using the Dice score, which estimates the degree of overlap between the ground truth (human-defined duct boundaries) and the model's prediction of duct boundaries28,29. Applying trained deep learning models to whole slide IHC samples The deep learning framework described above was applied to the pure DCIS and IDC/DCIS tissue sections used in this study. However, DCIS segmentation was not applied to pure DCIS samples, since these samples effectively contain only duct regions. The final outcome of the deep learning pipeline was the spatial quantification of five cell classes (stroma, FOXP3+ lymphocytes, FOXP3– lymphocytes, CA9+ epithelial cells, and CA9– epithelial cells) in pure DCIS samples, synchronous DCIS (ducts within IDC/DCIS samples), and IDC. Quantifying cellular abundance and colocalization in pure DCIS and IDC/DCIS samples The deep learning-predicted cell classes and their locations were used to compute the abundance of cell types and the degree of colocalization between pairs of cell types in pure DCIS and IDC/DCIS samples.
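For reference, the Dice score used above to evaluate duct segmentation is simple to compute from a predicted and a ground-truth binary mask. The sketch below is illustrative only (the masks are synthetic stand-ins, not the study's annotations):

import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)

# Synthetic masks standing in for a predicted duct mask and a hand-drawn one.
rng = np.random.default_rng(1)
pred_mask = rng.random((1024, 1024)) > 0.5
gt_mask = rng.random((1024, 1024)) > 0.5
print(f"Dice score = {dice_score(pred_mask, gt_mask):.3f}")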
Cellular abundance: FOXP3+ and CA9+ abundance was computed as follows and compared across groups: $$\text{FOXP3+ lymphocyte abundance} = \frac{\text{number of FOXP3+ lymphocytes}}{\text{total number of lymphocytes}}$$ $$\text{CA9+ epithelial cell abundance} = \frac{\text{number of CA9+ epithelial cells}}{\text{total number of epithelial cells}}$$ Cellular colocalization: the Morisita–Horn index30 was used to quantify colocalization of FOXP3+ and FOXP3– lymphocytes with CA9+ and CA9– epithelial cells. Briefly, the Morisita–Horn index quantifies the degree of spatial co-occurrence of a pair of cell types within spatial regions defined by Voronoi tessellation. The number of seeds needed to perform the Voronoi tessellation (prior to computing colocalization) was set as the cube root of the number of all cells within the region of interest. For the pure DCIS samples, which are composed exclusively of DCIS duct regions, cellular abundance and colocalization were computed across whole slide images. This produced a single abundance and colocalization index (per pair of cell types) per whole-tumor section. IDC/DCIS samples are composed of DCIS ducts (segmented by deep learning DCIS segmentation) as well as invasive components. Given our interest in analyzing each component individually, we adopted the following approach: the entire tissue area was divided into non-overlapping Voronoi polygons which were classified into three types (Fig. 4a): (1) synchronous DCIS polygons composed of only DCIS ducts, (2) IDC polygons composed predominantly of invasive cancer cells, and (3) mixed polygons composed of both ducts and invasive regions, denoting the interface between DCIS and invasive regions. Note: synchronous DCIS regions were named as such to distinguish them from pure DCIS regions in subsequent comparative analyses. Cellular abundance and colocalization were computed separately for synchronous DCIS and IDC regions, producing two distinct abundance/colocalization indices per whole-tumor section. The discrepancy in the number of synchronous DCIS regions (n = 27, rather than 56 to match the IDC regions) is due to a proportion of IDC/DCIS samples (n = 29) that did not contain sufficient synchronous DCIS regions for spatial analysis. These samples were not used for the comparisons described above, since they produce a null synchronous DCIS colocalization index (owing to the lack of synchronous DCIS regions). Statistical significance of differences in colocalization between pairs of cell types (CA9+/−, FOXP3+/−) across any two groups (IDC vs synchronous DCIS, IDC vs pure DCIS, and synchronous DCIS vs pure DCIS) was assessed using the Wilcoxon rank-sum test (multiple testing adjustment method: Holm). Multivariate regression tests were used as tests of independence to account for covariates such as cellular abundance and ER status, where relevant. All training data, including the fully anonymized raw image tiles and pathological annotations, and binary masks, are available in the corresponding author's GitHub repository https://github.com/sobhani/DCIS-CA9. Requests for data access for the Duke samples can be submitted to E.S.H. ([email protected]) and Y.Y. ([email protected]). Data underlying Fig. 4b, c are in the R package in the same GitHub repository https://github.com/sobhani/DCIS-CA9.
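As a concrete illustration of the colocalization measure described above (a minimal sketch, not the authors' R implementation; the per-region counts are synthetic), one common form of the Morisita–Horn index can be computed from the counts of two cell types across spatial regions as follows:

import numpy as np

def morisita_horn(x, y):
    """Morisita-Horn overlap of two cell types given their counts per region
    (e.g. per Voronoi polygon). Returns a value between 0 (spatially
    segregated) and 1 (fully colocalized)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    X, Y = x.sum(), y.sum()
    d_x = (x ** 2).sum() / X ** 2
    d_y = (y ** 2).sum() / Y ** 2
    return 2.0 * (x * y).sum() / ((d_x + d_y) * X * Y)

# Synthetic counts of FOXP3+ lymphocytes and CA9+ epithelial cells
# in a handful of Voronoi regions.
foxp3_counts = [12, 3, 0, 25, 7]
ca9_counts = [10, 1, 2, 30, 5]
print(f"Morisita-Horn index = {morisita_horn(foxp3_counts, ca9_counts):.3f}")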
The deep learning pipeline for digital pathology image analysis is available in the GitHub repository https://github.com/sobhani/DCIS-CA9 for reproducibility. All codes used for statistical analyses of image data were developed in R (v.3.5.1) and are available at https://github.com/sobhani/DCIS-CA9. Erbas, B., Provenzano, E., Armes, J. & Gertig, D. The natural history of ductal carcinoma in situ of the breast: a review. Breast Cancer Res. Treat. 97, 135–144 (2006). Cowell, C. F. et al. Progression from ductal carcinoma in situ to invasive breast cancer: revisited. Mol. Oncol. 7, 859–869 (2013). Thompson, E. et al. The immune microenvironment of breast ductal carcinoma in situ. Mod. Pathol. 29, 249–258 (2016). Campbell, M. J. et al. Characterizing the immune microenvironment in high-risk ductal carcinoma in situ of the breast. Breast Cancer Res. Treat. 161, 17–28 (2017). Lal, A. et al. FOXP3-positive regulatory T lymphocytes and epithelial FOXP3 expression in synchronous normal, ductal carcinoma in situ, and invasive cancer of the breast. Breast Cancer Res. Treat. 139, 381–390 (2013). Bates, G. J. et al. Quantification of regulatory T cells enables the identification of high-risk breast cancer patients and those at risk of late relapse. J. Clin. Oncol. 24, 5373–5380 (2006). Semeraro, M. et al. The ratio of CD8+/FOXP3 T lymphocytes infiltrating breast tissues predicts the relapse of ductal carcinoma in situ. OncoImmunology 5, e1218106 (2016). Gil Del Alcazar, C. R. et al. Immune escape in breast cancer during in situ to invasive carcinoma transition. Cancer Discov. 7, 1098–1115 (2017). Wykoff, C. C. et al. Expression of the hypoxia-inducible and tumor-associated carbonic anhydrases in ductal carcinoma in situ of the breast. Am. J. Pathol. 158, 1011–1019 (2001). Palazón, A., Aragonés, J., Morales-Kastresana, A., de Landázuri, M. O. & Melero, I. Molecular pathways: hypoxia response in immune cells fighting or promoting cancer. Clin. Cancer Res. 18, 1207–1213 (2012). Yan, M. et al. Recruitment of regulatory T cells is correlated with hypoxia-induced CXCR4 expression, and is associated with poor prognosis in basal-like breast cancers. Breast Cancer Res. 13, R47 (2011). Wang, T.-C. et al. High-resolution image synthesis and semantic manipulation with conditional GANs. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/cvpr.2018.00917 (2018). Maley, C. C., Koelble, K., Natrajan, R., Aktipis, A. & Yuan, Y. An ecological measure of immune-cancer colocalization as a prognostic factor for breast cancer. Breast Cancer Res. 17, 131 (2015). Kim, M. et al. Immune microenvironment in ductal carcinoma in situ: a comparison with invasive carcinoma of the breast. Breast Cancer Res. 22, 32 (2020). Tower, H., Ruppert, M. & Britt, K. The immune microenvironment of breast cancer progression. Cancers 11, 1375 (2019). Toss, M. S. et al. The prognostic significance of immune microenvironment in breast ductal carcinoma in situ. Br. J. Cancer 122, 1496–1506 (2020). Damaghi, M. et al. The harsh microenvironment in early breast cancer selects for a Warburg phenotype. Proc. Natl Acad. Sci. USA 118, e2011342118 (2021). Helczynska, K. et al. Hypoxia promotes a dedifferentiated phenotype in ductal breast carcinoma in situ. Cancer Res. 63, 1441–1444 (2003). Gatenby, R. A. & Gillies, R. J. Why do cancers have high aerobic glycolysis? Nat. Rev. Cancer 4, 891–899 (2004). Wigerup, C., Påhlman, S. & Bexell, D. Therapeutic targeting of hypoxia and hypoxia-inducible factors in cancer.
Pharmacol. Ther. 164, 152–169 (2016). Narayanan, P. L., Raza, S. E. A., Hall, A. H., Marks, J. R. & King, L. Unmasking the tissue microecology of ductal carcinoma in situ with deep learning. Preprint at https://www.biorxiv.org/content/10.1101/812735v1 (2019). Facciabene, A. et al. Tumour hypoxia promotes tolerance and angiogenesis via CCL28 and Treg cells. Nature 475, 226–230 (2011). The Consensus Conference Committee. Consensus conference on the classification of ductal carcinoma in situ. Cancer 80, 1798–1802 (1997). Raza, S. E. A. et al. Micro-Net: a unified model for segmentation of various objects in microscopy images. Med. Image Anal. 52, 160–173 (2019). AbdulJabbar, K. et al. Geospatial immune variability illuminates differential evolution of lung adenocarcinoma. Nat. Med. 26, 1054–1062 (2020). Sirinukunwattana, K. et al. Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images. IEEE Trans. Med. Imaging 35, 1196–1206 (2016). Hagos, Y. B., Narayanan, P. L., Akarca, A. U., Marafioti, T. & Yuan, Y. ConCORDe-Net: cell count regularized convolutional neural network for cell detection in multiplex immunohistochemistry images. Lecture Notes in Computer Science 667–675. https://doi.org/10.1007/978-3-030-32239-7_74 (2019). Dice, L. R. Measures of the amount of ecologic association between species. Ecology 26, 297–302 (1945). Sørensen, T. A Method of Establishing Groups of Equal Amplitude in Plant Sociology Based on Similarity of Species Content and Its Application to Analyses of the Vegetation on Danish Commons (Munksgaard, 1948). Horn, H. S. Measurement of 'overlap' in comparative ecological studies. Am. Nat. 100, 419–424 (1966). This project was funded by NIH R01 CA185138 and CDMRP Breast Cancer Research Program Award BC132057 and Breast Cancer Now (2015NovPR638). Y.Y. also acknowledges funding from Cancer Research UK Career Establishment Award (CRUK C45982/A21808), CRUK Early Detection Program Award (C9203/A28770), CRUK Sarcoma Accelerator (C56167/A29363), CRUK Brain Tumour Award (C25858/A28592), Rosetrees Trust (A2714), Children's Cancer and Leukaemia Group (CCLGA201906), NIH U54 CA217376, European Commission ITN (H2020-MSCA-ITN-2019), and The Royal Marsden/ICR National Institute of Health Research Biomedical Research Centre. C.M. was supported in part by NIH grants U54 CA217376, U2C CA233254, P01 CA91955, R01 CA170595, R01 CA185138, and R01 CA140657 as well as CDMRP Breast Cancer Research Program Award BC132057 and the Arizona Biomedical Research Commission grant ADHS18-198847. E.S.H. was supported in part by NIH grants U2C CA17035, R01 CA170595, R01 CA185138 as well as CDMRP Breast Cancer Research Program Award BC132057 and the CRUK Grand Challenge Award. J.R.M. was supported in part by NIH grant UO1 CA214183. Centre for Evolution and Cancer, Institute of Cancer Research, London, UK Faranak Sobhani, Sathya Muralidhar, Azam Hamidinekoo & Yinyin Yuan Division of Molecular Pathology, Institute of Cancer Research, London, UK Department of Pathology, Duke University School of Medicine, Durham, NC, USA Allison H. Hall Department of Surgery, Duke University School of Medicine, Durham, NC, USA Lorraine M. King, Jeffrey R. Marks & E. Shelley Hwang Arizona Cancer Evolution Center, Biodesign Institute and School of Life Sciences, Arizona State University, Tempe, AZ, USA Carlo Maley Department of Pathology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands Hugo M.
Horlings Conceptualization: F.S. and Y.Y.; data curation: F.S., H.M.G., and A.H.; data preparation, methodology, and framework development: F.S., S.M., and A.H.; data acquisition and analysis: F.S., S.M., E.S.H., and Y.Y.; project administration: L.K., J.R.M., C.M., E.S.H., and Y.Y.; supervision: Y.Y. and E.S.H.; validation: F.S. and Y.Y.; writing, review, and editing: F.S., S.M., A.H.H., Y.Y., and E.S.H. Correspondence to Faranak Sobhani or Yinyin Yuan. The funders had no role in the design of the study; the collection, analysis, or interpretation of the data; the writing of the manuscript; or the decision to submit the manuscript for publication. Y.Y. has received speaker's bureau honoraria from Roche and consulted for Merck and Co Inc. Sobhani, F., Muralidhar, S., Hamidinekoo, A. et al. Spatial interplay of tissue hypoxia and T-cell regulation in ductal carcinoma in situ. npj Breast Cancer 8, 105 (2022). https://doi.org/10.1038/s41523-022-00419-9
\begin{document} \title{Leggett-Garg violations for continuous variable systems with gaussian states} \author{C. Mawby} \email{[email protected]} \author{J. J. Halliwell} \email{[email protected]} \numberwithin{equation}{section} \renewcommand\thesection{\arabic{section}} \affiliation{Blackett Laboratory \\ Imperial College \\ London SW7 2BZ \\ UK } \begin{abstract} Macrorealism (MR) is the worldview that certain quantities may take definite values at all times irrespective of past or future measurements and may be experimentally falsified via the Leggett-Garg (LG) inequalities. We put this world view to the test for systems described by a continuous variable $x$ by seeking LG violations for measurements of a dichotomic variable $Q = \textrm{sgn}(x)$, in the case of gaussian initial states in a quantum harmonic oscillator. Extending our earlier analysis [C. Mawby and J. J. Halliwell, Phys. Rev. A \textbf{105}, 022221 (2022)], we find analytic expressions for the temporal correlators. An exploration of parameter space reveals significant regimes in which the two-time LG inequalities are violated, and likewise at three and four times. To obtain a physical picture of the LG violations, we exploit the continuous nature of the underlying position variable and analyze the relevant quantum-mechanical currents, Bohm trajectories, and Wigner function. We also show that larger violations are possible using the Wigner LG inequalities. Further, we extend the analysis to LG tests using coherent state projectors, thermal coherent states, and squeezed states. \end{abstract} \maketitle \section{Introduction} The motion of a pendulum has been used by clockmakers and hypnotists alike for centuries, with its regular left, right, left motion. Scaled down enough, however, such a system requires a quantum mechanical description, which hints that non-classical states without a definite left-right property underlie the motion. The existence of these types of states forms one of the pillars of many quantum technologies, and hence verification of their existence, and an understanding of their persistence at macroscopic scales, is of great interest. The Leggett-Garg inequalities~\cite{leggett1985, leggett1988,leggett2008,emary2013} were introduced to provide a quantitative test capable of demonstrating the failure of the precise world view known as macrorealism (MR). MR is defined as the conjunction of three realist tenets -- that a system resides in one observable state only, at all instants of time, that this state may be measured without influencing the future dynamics of the system, and that measurements respect causality. The violation of these inequalities indicates a failure of MR, and hence the presence of non-classical behaviour. The LG inequalities are typically established for a dichotomic observable $Q$, which may take values $s_i=\pm 1$, measured in a series of experiments at single times and at pairs of times. This yields a data set consisting of single time averages $\ev{Q_i}$, where $Q_i = Q(t_i)$, and the temporal correlators $C_{ij}$ defined by \begin{equation} C_{ij}=\ev{Q_i Q_j} = \sum_{s_i,s_j}s_i s_j p(s_i, s_j), \end{equation} where $p(s_i, s_j)$ is the two-time measurement probability, giving the likelihood of measuring $s_i$ and $s_j$ at times $t_i$, $t_j$.
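For later reference, a standard observation (requiring nothing beyond the fact that each $s_i=\pm1$) explains where the bounds on such data sets come from under MR: \begin{equation} 1 + s_1 s_2 + s_2 s_3 + s_1 s_3 = \tfrac{1}{2}\left[(s_1+s_2+s_3)^2 - 1\right] \ge 0, \end{equation} so averaging this identity over any joint probability distribution $p(s_1,s_2,s_3)$ immediately yields the first of the three-time inequalities listed below.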
The temporal correlators must be measured in a non-invasive manner, in keeping with the definition of MR, which is typically done using ideal negative measurements~\cite{knee2012,robens2015,katiyar2017} (but other approaches exist~\cite{halliwell2016a,majidy2019a,majidy2019b,zaw2022a,tsirelson2006}). For the commonly-studied three-time case, the data set consists of three correlators $C_{12}, C_{23}, C_{13}$ and three single time averages $\ev{Q_i}$, $ i=1,2,3$. These quantities then form six puzzle-pieces that, should the system obey the assumptions of MR, must be the moments of an underlying joint probability distribution $p(s_1, s_2, s_3)$. In the case where this endeavour is possible, the correlators and averages will satisfy the three-time LG inequalities (LG3): \bea L_1=1 + C_{12} + C_{23} + C_{13} & \ge & 0, \label{LG1} \\ L_2=1 - C_{12} - C_{23} + C_{13} & \ge & 0, \label{LG2} \\ L_3=1 + C_{12} - C_{23} - C_{13} & \ge & 0, \label{LG3} \\ L_4=1 - C_{12} + C_{23} - C_{13} & \ge & 0. \label{LG4} \eea They will also satisfy a set of twelve two-time LG inequalities (LG2), four of which are of the form \beq 1+s_1\ev{Q_1}+s_2 \ev{Q_2}+s_1 s_2 C_{12}\geq 0 \label{LG22} \eeq where $ s_1, s_2 = \pm 1 $, with two more sets of four for the other two time pairs. These sixteen inequalities are necessary and sufficient conditions for MR \cite{halliwell2016b,halliwell2017,halliwell2019b,halliwell2019, halliwell2020,araujo2013,majidy2021b}. If any one of them is violated, MR fails. From an experimental point of view, the LG2 inequalities constitute the simplest place to look first, since only one correlator needs to be measured. For measurements at four times, there are six correlators, but the natural data set is a cycle of four, $C_{12}, C_{23}, C_{34}, C_{14}$, along with the four single time averages. The $\LG{4}$ inequalities take the form \begin{equation} \label{eq:LG4} -2\leq C_{12}+C_{23}+C_{34}-C_{14}\leq 2, \end{equation} together with the six further inequalities obtained by permuting the location of the minus sign. The necessary and sufficient conditions for MR at four times consist of these eight LG4 inequalities, together with the set of sixteen LG2s for the four time pairs \cite{halliwell2016b, halliwell2019, halliwell2017,halliwell2019b}. This general framework has been put to the test experimentally in many types of systems. See for example Refs. \cite{majidy2019a,goggin2011,formaggio2016, joarder2022,knee2012, robens2015,katiyar2017} and also the useful review \cite{emary2013}. Most of these experiments are on systems that are essentially microscopic but some come close to macroscopicity \cite{knee2016}. Note also the useful critique of the LG approach in Ref.~\cite{maroney2014a}. In the present paper we investigate LG tests for the quantum harmonic oscillator (QHO). The LG framework is readily adapted to this physical situation using a single dichotomic variable $Q = {\rm sgn} (x)$, where $x$ is the particle position, measured at different times. We build on our earlier work, Ref.~\cite{mawby2022}, which explored LG violations in the QHO for initial states consisting of harmonic oscillator eigenstates and superpositions thereof, finding close to maximal violations in some cases. Here we focus on the case of an initial coherent state and closely related states. Such an initial state is intriguing in this context since it is often regarded as essentially classical, being a phase space localized state evolving along a classical trajectory.
Hence any LG violations arising from this state constitute particularly striking examples of non-classical behaviour. LG tests for the QHO with an initial coherent state were first explored by Bose et al. \cite{bose2018}, who proposed an experiment to measure the temporal correlators and carried out calculations of LG4 violations. A subsequent paper \cite{das2022a} explored both LG2 violations and violations of the no-signalling in time conditions \cite{kofler2013,clemente2015,clemente2016}. Our work in part parallels Ref. \cite{das2022a}. The first main aim of this paper is to undertake a thorough analysis of LG2, LG3 and LG4 inequalities for an initial coherent state. This is carried out in Section \ref{sec:corecalc}, where we set out the formalism and, drawing on our earlier work \cite{mawby2022}, calculate the temporal correlators. We carry out a detailed parameter search and find the largest LG violations possible for a coherent state. A second aim is to explore the physical origins of the LG violations. So, in Section \ref{sec:physmec} we examine the difference between quantum-mechanical currents and their classical counterparts for initial coherent states projected onto the positive or negative $x$-axis. In this section we also provide a second approach to calculating correlators in the small-time limit, which is in fact valid for general states. We also calculate the Bohmian trajectories, to give a further physical portrait of what underlies the observed LG violations. Finally, we examine the measurement process in the Wigner representation, noting that the initially positive Wigner function of the coherent state acquires negativity as a result of the projective measurement process. Modifications to the above framework are considered in Section \ref{sec:mf}. We investigate what violations are possible using the Wigner LG inequalities \cite{saha2015, naikoo2020}. We briefly discuss other types of measurements beyond the simple projective position measurements used so far and also consider LG2 violations with projections onto coherent states. We also determine how the LG violations may be modified for squeezed states or thermal states. We summarize in Section \ref{sec:sum}. We relegate to a series of appendices the grisly details of the calculations involved in this analysis. \section{Calculation of Correlators and LG Violations} \label{sec:corecalc} \subsection{Conventions and Strategy} For most of this paper, we will work with coherent states of the harmonic potential, which can of course be thought of as the ground state of the QHO, shifted in phase-space. The intricacy of calculating temporal correlators within QM stems from the complexity in the time evolution of a post-measurement state. By considering a co-moving frame for the post-measurement state, we develop a time evolution result which explicitly separates the quantum behaviour from the classical trajectories. We will work with systems defined exactly (or approximately) by the harmonic oscillator Hamiltonian, \begin{equation} \hat H = \frac{\hat p_{\text{phys}}^2}{2m}+\frac12 m\omega^2 \hat x_{\text{phys}}^2, \end{equation} with physical position and momentum $x_{\text{phys}}$ and $p_{\text{phys}}$. In calculations we use the standard dimensionless variables $x\sqrt{\hbar/(m\omega)}=x_{\text{phys}}$ and $p\sqrt{\hbar m\omega}=p_{\text{phys}}$. We denote energy eigenstates $\ket n$, writing $\psi_n(x)$ in the position basis, with corresponding energies $E_n=\hbar \omega(n+\frac12)=\varepsilon_n \hbar \omega$.
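In these dimensionless variables the Hamiltonian takes the simple form (an elementary consequence of the rescalings just given) \begin{equation} \hat H = \frac{\hbar\omega}{2}\left(\hat p^2 + \hat x^2\right), \end{equation} so that the dynamics depends on time only through the phase $\omega t$.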
We write coherent states as $\ket \alpha$, where the eigenvalue $\alpha$ relates to the rescaled variables as \begin{align} \label{eq:alphatox} \ev{\hat x(t)}&=\sqrt2 \Re \alpha(t),\\ \ev{\hat p(t)}&=\sqrt2 \Im \alpha(t). \end{align} These are the classical paths underlying the motion of coherent states, and we adopt the short-hand $x_1=\ev{\hat x(t_1)}$ and likewise $p_1=\ev{\hat p(t_1)}$. A coherent state may be represented in terms of $\alpha$ and an initial phase; however, in this work we will largely represent them in terms of the initial averages $x_0$ and $p_0$, for clarity of physical understanding. We construct coherent states with the unitary displacement operator $D(\alpha)=\exp(\alpha \hat a^\dagger-\alpha^* \hat a)$ operating on the ground state, \begin{equation} \ket \alpha = D(\alpha)\ket 0, \end{equation} which results in the wave-function \begin{equation} \psi^{\alpha}(x,t)=\frac{1}{\pi^\frac14}\exp(-\frac12(x-x_t)^2 + i p_t x+i\gamma(t)). \end{equation} We will calculate the quantities $\ev{Q_i}$, $C_{ij}$ appearing in the LG inequalities. A convenient way to proceed is to first note that the combination appearing in the LG2 inequalities is proportional to the quantity \begin{equation} \label{eq:momexp} q(s_1, s_2)=\frac{1}{4}\left(1+s_1 \langle\hat Q_1\rangle+s_2 \langle\hat Q_2\rangle+s_1 s_2 C_{12}\right). \end{equation} Classically this quantity is non-negative and is the probability distribution matching the data set with moments $\ev{Q_1}$, $\ev{Q_2}$, $C_{12}$. In the quantum-mechanical case, this quantity may be written \begin{equation} \label{eq:qp} q(s_1, s_2)=\text{Re} {\rm Tr} (P_{s_2}(t_2)P_{s_1}(t_1)\rho), \end{equation} where $s=\pm1$, and $P_{s}$ are projection operators corresponding to the measurement made, with \begin{equation} \label{eq:ptoq} P_s=\frac12(1+s\hat Q). \end{equation} Since Eq.~(\ref{eq:qp}) can be negative (reaching at most $-\tfrac18$, the L{\"u}ders bound), it is referred to as a quasi-probability (QP)~\cite{halliwell2016b,halliwell2019c}. Purely from a calculational point of view it is a convenient object to work with, as we found in Ref.~\cite{mawby2022}, since it is proportional to the LG2 inequalities in the quantum case, and since the correlators are easily read off from its moment expansion, so we make use of it here. The QP may be rewritten in the form given in Ref.~\cite{halliwell2019c} \begin{equation} q(s_1, s_2)=\frac18\ev*{(1+s_1 \hat Q_1 +s_2 \hat Q_2)^2 -1} \end{equation} which makes explicit that the maximally violating state satisfies the eigenvalue equation \begin{equation} \label{eq:maxQP} (s_1\hat Q_1 + s_2 \hat Q_2)\ket\psi=-\ket\psi. \end{equation} We have not been able to find the maximally violating state but Eq. (\ref{eq:maxQP}) suggests it is probably discontinuous at $x=0$. This in turn suggests that we will not be able to get close to maximal violations with the simple gaussian states explored here. \subsection{Calculation of the Correlators} We now calculate the temporal correlators for the case $\hat Q = {\rm sgn}(\hat x)$. The QP Eq.~(\ref{eq:qp}) is given by \begin{equation} q(s_1,s_2)=\Re\ev{e^{\frac{i H t_2}{\hbar}}\theta(s_2\hat x)e^{-\frac{i H \tau}{\hbar}}\theta(s_1\hat x)e^{-\frac{i H t_1}{\hbar}}}{\alpha}.
\end{equation} By considering the displacement operator as acting on the measurements instead of the state, the quasi-probability is shown in Appendix \ref{app:timeev} to be \begin{equation} \label{eqn:quasgen} q(s_1,s_2)=\Re e^{\frac{i\omega \tau}{2}} \ev{\theta(s_2(\hat x + x_2))e^{-iH\tau}\theta(s_1(\hat x + x_1))}{0}, \end{equation} which reveals that the quasi-probability for coherent states can be understood as the quasi-probability for the pure ground state, with measurement profiles translated according to the classical paths. This shows that any LG test on any coherent state may be directly mapped to an LG test on the ground state of the QHO, with translated measurements. Using the surprising result that $\int_a^b \psi_n(x)\psi_m(x) dx$ admits an exact and general closed-form expression, where $\psi_n(x)$ are energy eigenstates \cite{moriconi2007a, mawby2022}, we are able to calculate the temporal correlators as an infinite sum \begin{equation} \label{eqn:corr} C_{12}=\erf(x_1)\erf(x_2)+ 4\sum_{n=1}^\infty \cos (n\omega \tau) J_{0n}(x_1,\infty)J_{0n}(x_2,\infty), \end{equation} where \begin{equation} J_{0n}(x, \infty)=\mel{0}{\theta(\hat x - x)}{n}. \end{equation} The $J_{0n}$ terms are given in terms of $\psi_n(x)$ and its derivative. The details of this calculation are given in Appendix~\ref{app:timeev}. The infinite sum may be evaluated approximately using numerical methods, by truncating the sum at a finite $n$. This calculation matches the analytically calculated special case of $x_0=0$ given in Ref. \cite{halliwell2021}. \subsection{LG Violations} \label{sec:LGresults} \begin{figure} \caption{Plot (a) is a parameter space exploration, showing the largest LG2 violation for a given coherent state. Plot (b) shows the temporal behaviour of the LG2s for a state leading to the largest violation of $-0.113$.} \label{fig:lg2} \end{figure} The freely choosable parameters are the initial parameters of the coherent state, $x_0$ and $p_0$, and the time interval between measurements, $\tau$. Where there are more than two measurements, we use equal time-spacing. In Fig.~\ref{fig:lg2}(a), we plot a parameter space exploration of violations for the LG2 inequalities, where for a given $x_0$, $p_0$, we have numerically searched for the largest violation for that state. The LG3 and LG4 inequalities have a similar distribution, but with progressive broadening. Figures for the LG3 and LG4 inequalities are included in Appendix~\ref{app:LGresults}. The parameters leading to the largest violations, and the magnitudes of those violations, are reported in Table~\ref{tab:1}. In Fig.~\ref{fig:lg2}(b) and Fig.~\ref{fig:lg34}, we plot the temporal behaviour of the LG2, LG3 and LG4 violations for the states in Table~\ref{tab:1} leading to the largest violations. \begin{table} \centering \setlength{\tabcolsep}{0.5em} \begin{tabular}{|c|c|c|c|} \hline Inequality & Largest Violation & Percent of L{\"u}ders Bound & Location ($|x_0|, |p_0|$) \\ \hline LG2 & -0.113 & 22\% & (0.550, 1.925) \\ LG3 & -0.141 & 28\% & (0.859, 3.317) \\ LG4 & 2.216 & 26\% & (0.929, 3.666) \\ \hline \end{tabular} \caption{\label{tab:1}Tabulation of parameter space results.} \end{table} \begin{figure} \caption{Plot (a) shows the temporal behaviour of the LG3s for a coherent state with the largest violation $-0.141$.
Plot (b) shows the same for the LG4s with a coherent state leading to the largest violation of $2.216$.} \label{fig:lg34} \end{figure} \section{Physical Mechanisms of Violation} \label{sec:physmec} Given the LG violations exhibited in the previous section, a natural question concerns the underlying physical effects producing the non-classical behaviour responsible for the violations. Since $ Q(t) = {\rm sgn}(x(t) )$, a classical picture of the system would involve a set of trajectories $x(t)$ and probabilities for those trajectories. It is then natural to look at the parallel structures in quantum theory and compare with the classical analogues. We therefore look at the quantum-mechanical currents associated with the LG inequalities, which correspond to the time evolution of certain probabilities, and also to the Bohm trajectories associated with those currents, in terms of which the probability flow in space-time is easily seen. What we will see is that the departures from classicality are essentially the ``diffraction in time'' effect first investigated by Moshinsky, who considered the time evolution of an initial plane wave in one dimension restricted to $x<0$ \cite{moshinsky1952, moshinsky1976}. The key mathematical object is the Moshinsky function \begin{equation} \label{eq:moshfunc} M(x,p,t)=\langle x\lvert e^{-iHt}\theta(\hat x)\rvert p \rangle \end{equation} for an initial momentum state $\lvert p\rangle$, which we will see below appears in the calculation of the quasi-probability. \subsection{Analysis with Currents} As we saw in Section~\ref{sec:LGresults}, the quasi-probability component $q(-,+)$ exhibits a healthy degree of negativity. We start by writing it as \begin{equation} \label{eq:dqdt} q(-,+)=\int_{0}^{\tau}\mathop{dt}\frac{dq}{dt}. \end{equation} It is then simple to relate $\frac{dq}{dt}$ to a set of quantum mechanical currents, which can be calculated analytically. Overall negativity of the QP can then be traced to the non-classicality or negativity of certain combinations of currents. With details in Appendix \ref{app:curr}, we are able to write the quasi-probability as the following combination of currents at the origin, \begin{equation} \label{eq:qpdiffs} q(-,+)=\int_{t_1}^{t_2}\mathop{dt}J_-(t)+\frac12 \int_{t_1}^{t_2}\mathop{dt}\left(\mathbb{J}_-(t)-J_-(t) +\mathbb{J}_+(t)-J_+(t)\right), \end{equation} where $J_\pm(t)$ is the current following a measurement yielding $s_1 = \pm 1$, and $\mathbb{J}_\pm(t)$ are the classical analogues of this. The chopped currents contain the complexity of the influence of the earlier measurement, and are hence quite complicated; they are given in Appendix~\ref{app:chopcurr}. Note that the first time integral is simply the sequential measurement probability $p_{12}(-,+)$, which is non-negative. The negativity of the quasi-probability therefore arises as a result of the difference between the classical and quantum chopped currents. The classical and quantum chopped currents and the current combination appearing in Eq.~(\ref{eq:qpJ}) are all plotted in Figure~\ref{fig:currs} for the initial state giving the LG2 violation described in Section \ref{sec:LGresults}. The departures from classicality are clearly seen and are consistent with a broadening of the momentum distribution produced by the measurement. Note also that the quantum chopped currents diverge initially due to the sharpness of the measurement.
Most importantly, we see that the combination of currents appearing in the quasi-probability Eq.~(\ref{eq:qpJ}) will clearly produce an overall negativity when integrated over time, thereby confirming the LG2 violation shown in Fig.~\ref{fig:lg2}. Furthermore, we have integrated Eq.~(\ref{eq:qpJ}) numerically and find an exact agreement with the calculation of Section \ref{sec:corecalc}, Eq.~(\ref{eqn:cohquas}), thereby providing an independent check of this result. \begin{figure} \caption{Plot (a) shows the post-measurement quantum currents $J_\pm(t)$, and their classical analogues $\mathbb{J}_\pm(t)$, normalised to $\mathcal{J}=\tfrac{J}{\omega}$, for the coherent state with $x_0=0.55$, $p_0=-1.925$. Plot (b) shows their combination appearing in the time derivative of $q(-,+)$, Eq.~(\ref{eq:qpdiffs}).} \label{fig:currs} \end{figure} It is also convenient to explore the currents in the small time limit. This is done in Appendix~\ref{app:tJ} and gives a clear analytic picture of the departures from classicality. Since these expressions are valid for any initial state, they could provide a useful starting point in the search for other initial states giving LG2 violations larger than the somewhat modest violations found here. \subsection{Bohm trajectories} \begin{figure} \caption{The Bohm trajectories associated with the Moshinsky function, $\langle x|e^{-iHt}\theta(\hat x)|p\rangle$ with $p = -1$. The equivalent classical paths are shown dotted.} \label{fig:bohmMosh} \end{figure} To give a visual demonstration of how the measurements influence the motion in a way that leads to LG2 violations, we now calculate and plot the de Broglie-Bohm trajectories~\cite{bohm1952,bohm1952a,debroglie1927}. As noted earlier, the Moshinsky function underlies the behaviour of the quasi-probability for these measurements, so we initially examine the Bohm trajectories for this scenario. Using Moshinsky's calculation (free particle dynamics), we calculate the quantum-mechanical current $J_M(x,t)$, which we then use in the guidance equation for Bohm trajectories, \begin{equation} \dot{x}(t)=\frac{J_M(x,t)}{\lvert M(x,p,t)\rvert^2}, \end{equation} which we proceed to solve numerically. In Fig.~\ref{fig:bohmMosh}, we plot the trajectories for a state initially constrained to the right-hand side of the axis, with a leftward momentum, with the classical trajectories shown dotted. From this we see two distinct phases of deviation from the classical result. Initially the trajectories rapidly exit the right-hand side, with a negative momentum larger than in the classical case, an anti-Zeno effect \cite{kaulakys1997}. After a short while, a Zeno effect \cite{turing2001, misra1977} occurs, and the trajectories bend back relative to the classical trajectories, staying in the right-hand side longer than in the classical case. We will see both of these behaviours at play in the case studied in this paper. Using the expressions for the chopped current Eq.~(\ref{eq:chopJ}) and chopped wave-function Eq.~(\ref{eq:chopwav}), we can write the guidance equation for the harmonic oscillator case, \begin{equation} \dot x (t) = \frac{J_{\pm}(x,t)}{\lvert \phi_\alpha^{\pm}(x,t)\rvert^2}, \end{equation} which we again solve numerically. In Fig.~\ref{fig:bts}, we show the Bohm trajectories for the state with $x_0=0.55$, $p_0=-1.925$, initially found on the right-hand side of the well. This corresponds to the behaviour of the current $J_+(x,t)$ from the previous section.
Looking at the zoom of the trajectories in Fig.~\ref{fig:bts}(b), we can observe the same behaviour that is seen in the Moshinsky case -- initially an anti-Zeno effect, during which the trajectories exit faster than they would classically, followed by a Zeno effect a short while later, where trajectories exit more slowly than in the classical case. This lines up with the behaviour of $\mathbb{J}_+(t)-J_+(t)$ displayed in Fig.~\ref{fig:currs}(b), and is hence a trajectory-level representation of the source of the LG2 violations. \begin{figure} \caption{The Bohm trajectories for the case of the particle being initially found on the right-hand side of the axis. In (a), trajectories are separated such that two adjacent lines bound the evolution of $6.67\%$ of the probability density, and in (b) they bound $10\%$ of the probability density.} \label{fig:bts} \end{figure} \subsection{Wigner Function Approach} Another way to understand the LG2 violations is within the Wigner representation \cite{wigner1932a, hillery1984a, tatarskii1983a, case2008, halliwell1993a}. Since coherent states have non-negative Wigner functions, the source of MR violation lies in the non-gaussianity of the Wigner transform of the operators describing the measurement procedure. We calculate these transformations in Appendix~\ref{app:wig}, which allows us to write the quasi-probability as a phase-space integral, \begin{equation} q(s_1,s_2)=\int_{-\infty}^{\infty}\mathop{dX}\int_{-\infty}^{\infty}\mathop{dp}f_{s_1, s_2}(X,p) \end{equation} with the phase-space density $f_{s_1, s_2}(X,p)$ given by \begin{equation} \label{eq:wigfin} f_{s_1, s_2}(X,p)=\frac12 W_{\rho}(X,p)\left(1+\Re\erf\left(i(p-p_0)+s_1 X\right)\right)\theta(s_2 X_{-\tau}), \end{equation} where $W_\rho(X,p)$ is the Wigner function of the initial state. In Fig.~\ref{fig:wig}, we plot this phase-space density $f_{-,+}(X,p)$ for the state $x_0=0.55$, $p_0=-1.925$ and $\omega\tau=0.55$. To make it clear that this integrates to a negative number, we numerically determine the marginals $f_{-,+}(p) = \int_{-\infty}^{\infty}\mathop{dX}f_{-,+}(X,p)$ and likewise $f_{-,+}(X)=\int_{-\infty}^{\infty}\mathop{dp}f_{-,+}(X,p)$, plotting them as insets in Fig.~\ref{fig:wig}. It is clear from a simple inspection of these marginals that they will integrate to a negative number. In Appendix~\ref{app:wig}, we plot the intermediate result Eq.~(\ref{eq:wigint}), where it is apparent how the choice of second measurement homes in on the negativity introduced by the initial measurement, ultimately leading to the LG2 violations in Section~\ref{sec:LGresults}. \begin{figure} \caption{We plot the phase-space density $f_{-,+}(X,p)$, in the case corresponding to an LG2 violation of $-0.113$. Insets show the $X$ and $p$ marginals, shaded over regions of negativity.} \label{fig:wig} \end{figure} \section{Modified Frameworks} \label{sec:mf} In this section, we broadly generalise our analysis, finding larger violations using the Wigner variant of the LG2s, analysing different types of measurements, and extending the analysis to squeezed and thermal coherent states. \subsection{Achieving larger violations using the Wigner LG2 inequalities} \label{subsec:wiglg} Given the modest size of the LG violations obtained for a single gaussian, compared to the L{\"u}ders bound, it is natural to ask if there are modified situations in which larger violations can be obtained.
One way of doing this is to examine slightly different types of inequalities known as the Wigner-Leggett-Garg (Wigner LG) inequalities \cite{saha2015, naikoo2020}. For the two-time case these arise as follows. The quasi-probability is readily rewritten as \bea q(s_1,s_2) &=& {\rm Re} \langle ( 1 - P_{-s_2} (t_2) ) ( 1 - P_{-s_1} (t_1) ) \rangle \nonumber \\ &=& 1 - \langle P_{-s_1} (t_1) \rangle - \langle P_{-s_2} (t_2) \rangle + q(-s_1, -s_2). \eea However, from a macrorealistic perspective, there is nothing against considering the similar quasi-probability \beq \label{eqn:qw} q^W (s_1,s_2) = 1 - \langle P_{-s_1} (t_1) \rangle - \langle P_{-s_2} (t_2) \rangle + p_{12} (-s_1,-s_2), \eeq (where recall $p_{12}$ is the sequential measurement probability Eq.~(\ref{eq:seqprob})) since the two are the same classically. The relation $q^W (s_1,s_2) \ge 0 $ is a set of Wigner LG2 inequalities (recalling the factor of $\tfrac{1}{4}$ difference between an LG2 and a QP). It differs from the usual LG2 inequalities by the presence of interference terms, which can be positive or negative, which indicates that violations larger than the usual L\"uders bound (on the QP) of $- \frac{1}{8}$ might be obtained. The difference between them from an experimental point of view is that the original quasi-probability is measured from three different experiments (determining $\langle Q_1 \rangle$, $ \langle Q_2 \rangle $ and $C_{12}$) but the sequential measurement formula appearing in the Wigner version is measured in a single experiment. To get a sense of how much larger the maximum violation might be, we take the simple case of one-dimensional projectors $P_{-s_1} (t_1) = | A \rangle \langle A |$ and $P_{-s_2}(t_2) = | B \rangle \langle B | $ and we find \beq q^W (A,B) = 1 - | \langle \psi | A \rangle |^2 - | \langle \psi | B \rangle |^2 + | \langle \psi | A \rangle |^2 | \langle A | B \rangle |^2. \label{qWAB} \eeq Simple algebra reveals the lower bound as $ - \frac{1}{3}$, which is achieved with $ \langle A | B \rangle = 1 / \sqrt{3} $ and $ | \psi \rangle = (1 / \sqrt{6} ) ( | A \rangle + \sqrt{3} |B \rangle )$. (It seems likely that this is the most negative lower bound for all possible choices of projection, but we have not proved this.) This bound is significantly larger than the usual L\"uders bound on the QP of $ - \tfrac{1}{8}$. In Section 3 we found that the quasi-probability $q(-,+)$ gives the greatest negativity, so we compare with the corresponding Wigner expression, \bea q^W (-,+) &=&1 - \langle P_{+} (t_1) \rangle - \langle P_{-} (t_2) \rangle + p_{12} (+,-) \nonumber \\ &=& q(-,+) + \left[ p_{12}(+,-) - q(+,-) \right], \eea where we have made use of Eq.~(\ref{eqn:qw}) for $q(-,+)$. We first note that Eq.~(\ref{eq:qpdiffs}) may be written $q(-,+) = p_{12} (-,+) + I$, where $I$ denotes the interference terms (i.e. the difference between the classical and quantum currents, the second term in Eq.~(\ref{eq:qpdiffs})). The analogous relation for $q(+,-)$ is readily derived and we find $ q(+,-) = p_{12}(+,-) - I $. (The difference in sign is expected on general grounds \cite{halliwell2016b}). We thus find \beq q^W (-,+) = p_{12} (-,+) + 2 I, \eeq so the interference term producing the violations is twice as large as the one in $q(-,+)$. The computation of $p_{12} (-,+)$ can be carried out by integrating the chopped current $J_-(t)$, Eq.~(\ref{eq:chopcurr}) in Section 4.
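As a check on the single-projector bound quoted earlier in this subsection (a routine verification included for completeness), note that with $\langle A | B \rangle = 1/\sqrt{3}$ and $|\psi\rangle = (1/\sqrt{6})(|A\rangle + \sqrt{3}|B\rangle)$ one has $|\langle \psi | A \rangle|^2 = \tfrac{2}{3}$, $|\langle \psi | B \rangle|^2 = \tfrac{8}{9}$ and $|\langle A | B \rangle|^2 = \tfrac{1}{3}$, so that \beq q^W(A,B) = 1 - \tfrac{2}{3} - \tfrac{8}{9} + \tfrac{2}{3}\cdot\tfrac{1}{3} = -\tfrac{1}{3}. \eeq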
Using the maximally violating state found in Section~\ref{sec:LGresults}, we find a largest violation of $-0.0881$, approximately three times larger than the standard LG2 violation, as well as a larger fraction of the conjectured Wigner LG2 L{\"u}ders bound of $-\frac{1}{3}$. This is plotted alongside the standard LG2 in Fig.~\ref{fig:wigqp}, where the violation is both larger in magnitude and present for a larger range of measurement intervals. As an aside, we note an interesting aspect of Eq.~(\ref{qWAB}), which is that the last term, corresponding to the sequential measurement probability, factors into two parts. This factoring will also hold for more general projections at $t_2$ as long as the projection at $t_1$ is one-dimensional. This may have some advantages in terms of meeting the non-invasiveness requirement on the measurements. It seems plausible that one could find macrorealistic arguments implying that the sequential measurement probability factors. Then the first factor is the probability of finding $ |A \rangle$ in an initial state $| \psi \rangle $ and the second factor is the probability of finding $|B \rangle$ when the system is prepared in state $|A\rangle$. These quantities could therefore be obtained in two different experiments with two different preparations with just a single measurement in each, for which there is no issue with invasiveness. The other obvious way of getting larger violations is to consider von Neumann measurements, which involve making finer-grained measurements than the simple dichotomic ones used here and then coarse-graining the probabilities to compute the correlators \cite{pan2018,dakic2014, budroni2014,wang2017,kumari2018}. For example, one could make measurements onto three regions of the $x$-axis, $ x<0$, $ 0 \le x \le L$ and $ x> L$ at the first time and then coarse-grain the two-time probabilities into probabilities for the usual coarse graining $x<0$ and $x>0$. This produces extra interference terms which can enhance the violations. For the LG3 inequalities, von Neumann measurements can produce violations up to the algebraic maximum of $-2$. For the LG2 inequalities, the enhancement is smaller since there is only one correlator and the LG2 violations can be no more than $-1$. This corresponds to $-\frac{1}{4}$ in the quasi-probability, which we see is not as big as the violation of $ - \frac{1}{3}$ that can be produced by the Wigner LG2. \begin{figure} \caption{Plot of $q^W(-,+)$ for the coherent state $x_0=-0.55$, $p_0=-1.925$, as well as the standard quasi-probability (dashed), i.e. the LG2 in Fig.~\ref{fig:lg2}(b) times a factor of $\frac14$.} \label{fig:wigqp} \end{figure} \subsection{Coherent state projectors} \label{subsec:cohproj} It is useful to know what else may be possible beyond using $\theta(\hat x)$ projectors. We note investigations \cite{bose2018} into smoothed $\theta(\hat x)$ measurements, showing that LG violations persist under smoothing of measurements up to the characteristic length-scale of the oscillator \cite{mawby2022}. Modular variables such as $\cos(k\hat x)$ have also been investigated, and readily produce significant LG violations \cite{asadian2014}. These examples show that the LG violations are not due to the sharpness of projective measurements with $\theta(\hat x)$.
In this section we will look at tests of macrorealism using coherent state projectors \cite{yuen1978,yuen1983, dariano2003}, which are interesting since they leave the post-measurement state gaussian, and are easily experimentally realised. We now take $P_+=\ketbra{\beta}$ and $P_-=\mathds{1} -\ketbra{\beta}$, defining a dichotomic variable in the usual way through Eq.~(\ref{eq:ptoq}). The quasi-probability is given by \begin{equation} q(+,+)= \Re \ev{e^{iHt_2}\ketbra{\beta_2}e^{-iH\tau}\ketbra{\beta_1}e^{-iHt_1}}{\alpha}, \end{equation} where we make two simplifying observations. Firstly, all the time evolution may be absorbed into the measurement projectors. Secondly, without loss of generality, we work with $\alpha=0$, since the change in phase-space location may be absorbed into $\beta_1$ and $\beta_2$. It is hence entirely equivalent to analyze \begin{equation} q(+,+)=\Re \braket{0}{\gamma_1}\!\!\braket{\gamma_1}{\gamma_2}\!\!\braket{\gamma_2}{0}, \end{equation} with the relation $\gamma_i=e^{-i\omega t_i}\beta_i-\alpha$. The overlap between two coherent states is given by \begin{equation} \braket{\beta}{\alpha}=e^{-\frac12(\abs{\alpha}^2+\abs{\beta}^2-2\alpha \beta^*)}, \end{equation} and we readily find \begin{align} q(+,+)&=\exp(-\lvert\gamma_1\rvert^2 -\lvert \gamma_2\rvert^2)\Re \exp(\gamma_1 \gamma_2^*),\\ q(+,-)&=\exp(-\lvert\gamma_1\rvert^2)\left(1-\Re\exp(\gamma_1 \gamma_2^* -\lvert\gamma_2\rvert^2)\right), \end{align} where $q(-,+)$ is found by a relabelling, and $q(-,-)$ does not lead to any violations. To determine the largest violations, it is useful to note that these quasi-probabilities depend only on the magnitudes of $\gamma_1$ and $\gamma_2$, and the phase difference between them. In $q(+,+)$, $\lvert \gamma_1\rvert$ and $\lvert \gamma_2 \rvert$ appear in the same way, so we set them to be equal, and find a largest violation of $-0.0133$ at $\gamma_1=1.55$, $\gamma_2=1.55 e^{-1.047 i}$, which is about $10\%$ of the maximal violation. For $q(-,+)$, since the violation is aided by the negative sign on $\Re \exp(\gamma_1 \gamma_2^* -\lvert\gamma_2\rvert^2)$, it is easy to see the largest violation will occur when both $\gamma_1$ and $\gamma_2$ are purely real. We readily find that the largest violation is approximately $-0.1054$ with $\gamma_2=\frac12\gamma_1=0.536$, which is about $84\%$ of the maximum. Violations meeting the L{\"u}ders bound may be achieved if a superposition state is chosen which satisfies Eq.~(\ref{eq:maxQP}). The superposition state $\ket\psi = -\ket{\beta_1} -\ket{\beta_2}$ is properly normalized and gives a maximal violation for $q(+,+)$ if the coherent states are chosen so that $\braket{\beta_1}{\beta_2} = -\frac12$. Similarly for $q(+,-)$, the state $\ket\psi = \ket{\beta_1} - \sqrt{3}\ket{\beta_2}$ leads to a maximal violation if we choose $\braket{\beta_1}{\beta_2}=\frac{\sqrt{3}}{2}$. \subsection{Squeezed States} The squeezed coherent state may be written~\cite{schleich2001}, \begin{equation} \ket{\alpha,\zeta}=D(\alpha)S(\zeta)\ket0. \end{equation} While the squeezing operator $S(\zeta)$ does not commute with the displacement operator $D(\alpha)$, there is a simple braiding relation, allowing us to write \begin{equation} \ket{\alpha,\zeta}=S(\zeta)D(\beta)\ket0=S(\zeta)\ket\beta, \end{equation} with $\beta$ depending on both $\alpha$ and $\zeta$.
The quasi-probability is given by \begin{equation} q(+,+)=\Re \ev{\theta(\hat x)\theta(\hat x(t))}{\psi}, \end{equation} for $\ket\psi$ given by a squeezed coherent state. We can consider moving the $S(\zeta)$ in $\ket\psi$ onto each $\theta(\hat x)$ function, resulting in $S^\dagger(\zeta)\theta(\hat x)S(\zeta)$ twice. Since the squeezing operator has the action of a canonical transformation, taking $\hat x$ and $\hat p$ into a linear combination of themselves, we have that \begin{equation} S^\dagger(\zeta)\theta(\hat x)\theta(\hat x (t))S(\zeta)=\theta(a\hat x + b\hat p)\theta(c\hat x + d\hat p), \end{equation} for some $a,b,c,d$ that may depend on $t$. We now note that $a\hat x + b\hat p$ may be written as $\lambda(\hat x \cos(t')+\hat p \sin(t'))$ for some $\lambda> 0$ and some $t'$, and since the theta-function is invariant under scaling, we see that \begin{equation} S^\dagger(\zeta)\theta(\hat x)\theta(\hat x (t))S(\zeta)=\theta(\hat x(t_1'))\theta(\hat x(t_2')). \end{equation} This means the QP for a squeezed coherent state is equal to the QP for some other coherent state $\beta$, with different measurement times $t_1'$, $t_2'$. Hence the operation of squeezing will not increase the largest possible violation reported in Section~\ref{sec:LGresults}, although for certain states with sub-optimal violation, squeezing can increase the amount of violation. \subsection{Thermal States} The thermal coherent state at a temperature $T$ is given by \begin{equation} \rho_{\text{th}}(\alpha, T)=\frac{1}{Z}\sum_{n=0}^{\infty}e^{-\frac{n\hbar\omega}{k_B T}} \ketbra{n,\alpha}, \end{equation} where $k_B$ is the Boltzmann constant and $\ket{n,\alpha}$ are energy eigenstates displaced by $\alpha$ in phase-space~\cite{oz-vogt1991}. The partition function $Z$ is given by \begin{equation} Z=\frac{1}{1-e^{-\frac{\hbar \omega}{k_B T}}}. \end{equation} Since this state is a mixture, it is simple to update the calculation of Eq.~(\ref{eqn:quassum}) to use this state, leading to \begin{equation} q(+,-)=-\frac{1}{Z}\Re\sum_{\ell=0}^{\infty}e^{-\frac{\ell\hbar\omega}{k_B T}} \sum_{n=0}^\infty e^{-i(n-\ell)\omega\tau}J_{\ell n}(x_1,\infty)J_{\ell n}(x_2,\infty), \end{equation} where the $J_{\ell n}$ matrices are given by Eq.~(\ref{eq:Jmn}), except for the cases $n=\ell$, where they must be calculated explicitly. A similar result may be calculated for the correlators using Eq.~(\ref{eqn:corr}), allowing the analysis of LG3 and LG4 inequalities. In Fig.~\ref{fig:thermLG}, using the states found in Section~\ref{sec:LGresults}, we plot the behaviour of the largest violation as temperature is increased. We see the violation persists up to temperatures $k_B T\approx \hbar \omega$, with some preliminary evidence that LG3 and LG4 violations may be more robust against thermal fluctuations in the initial state. \begin{figure}\label{fig:thermLG} \end{figure} \section{Summary} \label{sec:sum} We have undertaken a study of LG violations in the quantum harmonic oscillator for a dichotomic variable $Q = \text{sgn}(x)$ and for an initial state given by a coherent state and closely related states. In Section \ref{sec:corecalc}, building on our earlier work with energy eigenstates of the QHO \cite{mawby2022}, we showed how the quasi-probability, and hence the temporal correlators, may be expressed as a discrete infinite sum which is amenable to numerical analysis. We applied this analysis to the LG2, LG3 and LG4 inequalities and carried out parameter space searches.
We found LG violations of magnitude 22\%, 28\% and 26\% of the maximum possible for the LG2, LG3 and LG4 inequalities, respectively, and gave the specific parameters for which these violations are achieved. These violations appear to be robust under small parameter adjustments. The LG2 violation in the case $x_0=0$ agrees with that reported in Ref.~\cite{halliwell2021}. The LG4 violation is significantly smaller than that reported in Ref.~\cite{bose2018}, which used a coherent state with large momentum, and in fact we found no violations in that regime, although the authors note it involves a very narrow parameter range. In Section \ref{sec:physmec} we sought a physical understanding of the mechanism producing the violations. We showed how to relate the quasi-probability (LG2) to a set of currents for projected initial states. We calculated and plotted these currents and also plotted their associated Bohm trajectories along with their classical counterparts. The plots showed the clear departures from classicality and gave both a visual understanding and an independent check of the LG2 violations described in Section \ref{sec:corecalc}. We also provided a small-time expansion for the LG2s, which is valid for general states. We noted that the quantum effect producing the violations is essentially the diffraction in time effect first noted by Moshinsky~\cite{moshinsky1952,moshinsky1976}. We explored the same issues from a different angle using the Wigner representation. The Wigner function of the initial coherent state is everywhere non-negative. We determined and plotted the Wigner function of the chopped initial state appearing in the quasi-probability. It has significant regions of negativity which are clearly the source of the LG2 violation. In Section \ref{subsec:wiglg}, we extended our results to the slightly different Wigner LG inequalities, which are phrased in terms of the sequential measurement probability and allow for larger violations; we found a two-time violation three times greater in magnitude than the standard LG2s. We also noted it is likely possible to increase the LG violations through the use of von Neumann measurements. In addition we noted a possible advantage of the Wigner LG2 inequalities, in some cases, in terms of meeting the non-invasiveness requirement. We briefly noted in Section \ref{subsec:cohproj} that our work is readily generalized from pure projective measurements to smoothed step function projectors, gaussian projectors and modular variables such as $\cos ( \hat x) $. We also examined the LG2 inequality for the case in which both projectors are taken to be projections onto coherent states. We showed that decent violations are possible for an initial coherent state and that a maximal violation arises when the initial state is a superposition of two coherent states. Finally, we finished Section \ref{sec:mf} by briefly discussing how the LG violations may be modified using families of states similar to a coherent state. We showed that the QP for any squeezed state is equal to the QP for some other coherent state; hence squeezing will not increase the largest violation found, although for a state with sub-optimal violation it may improve the violation. We also considered a thermal initial state and estimated the extent to which thermal fluctuations may affect the degree of violation.
\section{Calculation of correlators} \label{app:timeev} The two-time quasi-probability for a coherent state with a generic position basis measurement $m(\hat x)$ is defined as \begin{equation} q(+,+)=\Re\ev{e^{\frac{i H t_2}{\hbar}}m(\hat x)e^{-\frac{i H \tau}{\hbar}}m(\hat x)e^{-\frac{i H t_1}{\hbar}}}{\alpha}. \end{equation} We are primarily interested in the case $m(\hat x) = \theta(\hat x)$, but what follows holds for more general $m(\hat x)$, e.g.\ gaussian measurements. Writing this in terms of the displacement operator, we have \begin{equation} q(+,+)=\Re\ev{D^{\dagger}(\alpha)e^{\frac{i H t_2}{\hbar}}m(\hat x)e^{-\frac{i H \tau}{\hbar}}m(\hat x)e^{-\frac{i H t_1}{\hbar}}D(\alpha)}{0}. \end{equation} Since the displacement operator is unitary, we have $D^\dagger(\alpha) D(\alpha)=\mathds{1}$. Hence if we can commute the two displacement operators so that they are adjacent, the calculation simplifies considerably. To make the exposition clearer, we consider splitting this expression into two states \begin{align} \ket{M(\alpha, t_1, \tau)}&=e^{-i\hat H \tau}m(\hat x)e^{-\frac{iHt_1}{\hbar}}D(\alpha) \ket0,\\ \bra{M(\alpha, t_2, 0)}&=\bra0 D^\dagger(\alpha)e^{\frac{i H t_2}{\hbar}}m(\hat x), \end{align} so that $q(+,+)=\Re \braket{M(\alpha, t_2, 0)}{M(\alpha, t_1, \tau)}$. Here we have introduced the notation $\ket{M(\alpha, t, s)}$ to represent the coherent state measured with $m(\hat x)$ at time $t$, then evolved by time $s$. Considering now the displacement operator acting to the left, we write $m(\hat x)D(\alpha)=D(\alpha)D^\dagger(\alpha)m(\hat x)D(\alpha)=D(\alpha)m(\hat x + x_\alpha)$, with $x_\alpha=\sqrt2 \Re \alpha$. We then have \begin{equation} \ket{M(\alpha, t_1, \tau)} = e^{-\frac{i\omega t_1}{2}}e^{-i \hat H \tau}D(\alpha(t_1))m(\hat x + x_1)\ket 0. \end{equation} Using the standard result that $e^{-i H t}D(\alpha)e^{i H t}=D(\alpha(t))$, we can rewrite this as \begin{equation} \ket{M(\alpha, t_1, \tau)} =e^{-\frac{i\omega t_1}{2}}D(\alpha(t_2))e^{-i H\tau}m(\hat x + x_1)\ket0. \end{equation} This says that, up to a phase, the post-measurement state is the time evolution of the ordinary ground state subjected to a translated measurement, displaced along the classical trajectory by the displacement operator. Proceeding similarly with the other term, we find \begin{equation} \bra{M(\alpha, t_2, 0)}=e^{\frac{i\omega t_2}{2}}\bra{0} m(\hat x + x_2)D^\dagger(\alpha(t_2)). \end{equation} Finally, contracting the two terms we are able to exploit the unitarity of $D(\alpha)$ to find \begin{equation} q(+,+)=\Re e^{\frac{i\omega \tau}{2}} \ev{m(\hat x + x_2)e^{-iH\tau}m(\hat x + x_1)}{0}. \end{equation} \noindent A calculation similar to that in our earlier paper Ref.~\cite{mawby2022} shows that the quasi-probability is \begin{equation} q(+,+)=\Re e^{\frac{i\omega \tau}{2}}\sum_{n=0}^{\infty}e^{-i\omega \tau(n+\frac12)}\mel{0}{\theta(\hat x + x_2)}{n}\!\!\mel{n}{\theta(\hat x + x_1)}{0}, \end{equation} and similarly for the other three components. The matrix elements here are given by the $J_{mn}$ matrices from our earlier paper \cite{mawby2022,moriconi2007a}, \begin{equation} J_{mn}(x_1,x_2)=\int_{x_1}^{x_2}\mathop{dx}\braket{m}{x}\braket{x}{n}.
\end{equation} The quasi-probability is then \begin{equation} \label{eqn:quassum} q(+,+)=\Re \sum_{n=0}^\infty e^{-in\omega\tau}J_{0n}(x_1,\infty)J_{0n}(x_2,\infty) \end{equation} For $m\neq n$, the $J$ matrices take the value \begin{equation} \label{eq:Jmn} J_{mn}(x_1, x_2)=\frac{1}{2(\varepsilon_n- \varepsilon_m)}\left[\psi_m'(x_2)\psi_n(x_2)-\psi_n'(x_2)\psi_m(x_2)-\psi_m'(x_1)\psi_n(x_1)+\psi_n'(x_1)\psi_m(x_1)\right], \end{equation} where $\psi_n(x)=\braket{x}{n}$. For the $n=m=0$ case, the integration is completed manually yielding \begin{equation} J_{00}(x, \infty)=\frac12(1-\erf(x)). \end{equation} Hence writing out the quasi-probability with $n=0$ case of the sum handled, we have \begin{multline} \label{eqn:cohquas} q(s_1, s_2)= \frac{1}{4}\Bigg[1+ s_1 \erf(x_1) +s_2 \erf(x_2)+\\ s_1 s_2\Bigg( \erf(x_1)\erf(x_2)+ 4\sum_{n=1}^\infty\cos (n\omega \tau)J_{0n}(x_1,\infty)J_{0n}(x_2,\infty)\Bigg) \Bigg]. \end{multline} Comparing to the moment expansion of the quasi-probability, we obtain the correlators \begin{equation} \label{eqn:corr} C_{12}=\erf(x_1)\erf(x_2)+ 4\sum_{n=1}^\infty \cos (n\omega \tau) J_{0n}(x_1,\infty)J_{0n}(x_2,\infty). \end{equation} The infinite sum may be evaluated approximately using numerical methods, by summing up to a finite $n$. This calculation matches the analytically calculated special case of $x_0=0$ given in Ref. \cite{halliwell2021}, and while it is possible to make an analytic calculation for the more general case, it turned out not to be as useful as the numerical evaluation. The exact result is found in terms of Owen-T functions, but for complex arguments, which rendered the behaviour chaotic when computed \cite{owen1965}. The only source of non-classicality here lies in the infinite sum, and with the $J_{0n}$ matrices expressed in terms of the oscillator eigenstates, this means there is a double exponential suppression $e^{-x_1^2 -x_2^2}$ of this non-classical term. This corresponds to the requirement that at least two measurements must make a significant chop of the state, which fits the intuition that without significant chopping, there is no mystery attached to which side of the axis the particle may be found on. \section{Determination of LG violations} \label{app:LGresults} \begin{figure} \caption{Plot (a) shows the greatest possible violations for the LG3 inequalities, plot (b) shows the same for the LG4 inequalities.} \label{fig:param23} \end{figure} In this appendix we fill in the details of the LG violations reported in Section \ref{sec:LGresults}. Recall the variable parameters of the problem are $x_0$ and $p_0$, and the equal time spacing parameter $\tau$. We also note, that it is sufficient to explore a single quadrant of the $x_0, p_0$ parameter space, which we take to be the positive quadrant. For states with $x_0<0$, the quasi-probability may be recovered by inverting the sign of $s_1$. Likewise for states with $p_0<0$, by allowing the interval between measurements to take values $0<\tau\leq 2\pi$, their behaviour is included in the positive quadrant. This same argument applies to the LG inequalities in general, where their different permutations correspond to flips of measurement signs. To represent the three-dimensional parameter space, for each $x_0$, $p_0$, we use numerical minimisation over $0<\tau\leq 2\pi$ to find the largest possible violation for that coherent state. In this numerical procedure, we take the largest possible violation from all of the inequalities involved. 
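For concreteness, the following Python sketch, which is our own illustration rather than code from the original analysis (the function names and the truncation level \texttt{N\_MAX} are ours), evaluates the truncated sum in Eq.~(\ref{eqn:corr}) and the resulting quasi-probability of Eq.~(\ref{eqn:cohquas}), taking the arguments $x_1$, $x_2$ and $\omega\tau$ as inputs. A numerical scan of this kind over the measurement interval, repeated over the state parameters, is what underlies the minimisation procedure described above.
\begin{verbatim}
# Sketch: truncated evaluation of the correlator C_12, Eq. (eqn:corr), and of
# the quasi-probability q(s1,s2), Eq. (eqn:cohquas).  The arguments x1, x2 and
# wt = omega*tau are taken as inputs; function names and N_MAX are ours.
import numpy as np
from scipy.special import erf, eval_hermite, factorial

N_MAX = 60  # truncation of the infinite sum

def psi(n, x):
    # harmonic oscillator eigenfunction in dimensionless units
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2)

def dpsi(n, x):
    # psi_n'(x) = sqrt(2n) psi_{n-1}(x) - x psi_n(x)
    lower = np.sqrt(2.0 * n) * psi(n - 1, x) if n > 0 else 0.0
    return lower - x * psi(n, x)

def J0n(n, x):
    # J_{0n}(x, infinity) from Eq. (eq:Jmn); the boundary terms at infinity
    # vanish, and eps_n - eps_0 = n for the oscillator
    if n == 0:
        return 0.5 * (1.0 - erf(x))
    return (dpsi(n, x) * psi(0, x) - dpsi(0, x) * psi(n, x)) / (2.0 * n)

def C12(x1, x2, wt):
    s = sum(np.cos(n * wt) * J0n(n, x1) * J0n(n, x2) for n in range(1, N_MAX + 1))
    return erf(x1) * erf(x2) + 4.0 * s

def q(s1, s2, x1, x2, wt):
    return 0.25 * (1.0 + s1 * erf(x1) + s2 * erf(x2) + s1 * s2 * C12(x1, x2, wt))

# example: scan q(-,+) over the measurement interval for fixed x1, x2
wts = np.linspace(0.01, 2 * np.pi, 400)
print(min(q(-1, +1, 0.5, 0.5, wt) for wt in wts))
\end{verbatim}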
The results of this parameter space search for the LG3 and LG4 inequalities are shown in Fig.~\ref{fig:param23}, which shows behaviour similar to that of the LG2 inequality parameter space in Fig.~\ref{fig:lg2}. As more measurement intervals are included in the LG tests, a broader range of states leads to violation. LG tests on QHO coherent states are mathematically equivalent to LG tests on the pure ground state $\ket0$, which we found to have violations only when at least one of the $\theta(\hat x)$ measurements involved is displaced by a distance of order $1$. Hence at the centre of each of these parameter space plots is the region where the coherent state is too similar to the ground state to have any LG violation. \begin{figure} \caption{Plot (a) shows the four LG2 inequalities for the state with $x_0=0.55$, $p_0=-1.925$, which reaches a largest violation of $-0.113$ at $\omega \tau = 0.555$. Plot (b) shows the four LG3 inequalities for the state with $x_0=0.859$, $p_0=-3.317$, reaching a largest violation of $-0.141$ at $\omega\tau=0.254$. } \label{fig:qpt} \end{figure} \begin{figure} \caption{Plot of the four LG4 inequalities, for state $x_0=0.929$, $p_0=-3.666$, reaching a largest violation of $2.216$ at $\omega\tau=0.166$.} \label{fig:LG4} \end{figure} All the violations we have found are in states with initial positions and momenta approximately on the length-scale of the width of the coherent state, $\sqrt{\hbar/(m\omega)}$. However, by appealing to Eq.~(\ref{eqn:corr}), we note that if one were to consider translating the measurement to $\theta(\hat x - x_i)$, the classical motion could be subtracted and, at least theoretically, the same magnitude of violations would exist for arbitrarily large $x_0$ and $p_0$. In Figures \ref{fig:qpt} and \ref{fig:LG4}, we plot the temporal behaviour of the LG2s, LG3s and LG4s respectively, for the case in which the parameters are chosen to give the largest violation. \section{Currents Analysis} \subsection{Classical analogues} \label{app:classan} To understand the connection between the negativity of the quasi-probability Eq.~(\ref{eq:qpJ}) and the behaviour of the currents, it is very convenient to consider the analogous classical currents, which are in general defined by \begin{equation} \mathbb{J}(t)=\int_{-\infty}^{\infty}\mathop{dp}\int_{-\infty}^{\infty}\mathop{dx} p(t) \delta(x(t))w(x,p), \end{equation} for a suitably chosen initial phase-space distribution $w(x,p)$. For the un-chopped current $\mathbb{J}(t)$ this is taken to be the Wigner function of the coherent state, $W(x,p,x_0, p_0)$, Eq.~(\ref{eq:classWig}), which, conveniently, is non-negative. For the chopped currents it is taken to be $\theta(\pm x) W(x,p,x_0,p_0)$. We then easily see that \begin{equation} \label{eq:class} J(t)=\mathbb{J}_-(t)+\mathbb{J}_{+}(t), \end{equation} where $\mathbb{J}_{\pm}(t)$ are the classical analogues of the post-measurement currents; since we have used the coherent state Wigner function, we have $\mathbb{J}(t)=J(t)$. We begin by writing the classical phase-space density for the Gaussian state \begin{equation} \label{eq:classWig} \mathbb{W}(X,p, x_0, p_0)=\frac{1}{\pi}\exp(-(X -x_0)^2 -(p-p_0)^2), \end{equation} where harmonic time-evolution leads to rigid rotation in phase-space, \begin{equation} \mathbb{W}(X,p,x_0,p_0,t)=\mathbb{W}(X \cos\omega t -p \sin\omega t,p \cos \omega t +X \sin \omega t, x_0, p_0).
\end{equation} We have a similar result for the measured classical state, with \begin{equation} \mathbb{W}_{\pm}(X,p, x_0, p_0)=\frac{1}{\pi}\theta(\pm X)\exp(-(X -x_0)^2 -(p-p_0)^2), \end{equation} and \begin{equation} \mathbb{W}_{\pm}(X,p,x_0,p_0,t)=\mathbb{W}_{\pm}(X \cos\omega t -p \sin\omega t,p \cos \omega t +X \sin \omega t, x_0, p_0). \end{equation} The chopped classical current is given by \begin{equation} \mathbb{J}_{\pm} (x,t)=\int_{-\infty}^{\infty}\mathop{dp}\int_{-\infty}^{\infty}\mathop{dX}p\delta(X-x)\mathbb{W}_{\pm}(X,p,x_0, p_0,t) \end{equation} Completing the $X$ integral trivially, we have \begin{equation} \mathbb{J}_{\pm}(x,t)=\int_{-\infty}^{\infty}\mathop{dp} p\mathbb{W}_{\pm}(x,p,x_0, p_0,t). \end{equation} We are interested in the case of $x=0$, which we shorthand $\mathbb{J}_\pm(0,t)=\mathbb{J}_\pm(t)$, and is given by \begin{equation} \mathbb{J}_\pm(t)=\frac{1}{\pi}\int_{-\infty}^{\infty}\mathop{dp} p\,\theta (\mp p \sin\omega t) \exp \left(-(p \cos\omega t-p_0)^2-(-p \sin \omega t-x_0)^2\right). \end{equation} The step-function here just flips the integral between the positive or negative half-plane, dependent on ${\rm sgn}(\mp \sin \omega t)=1$ and ${\rm sgn}(\mp \sin \omega t)=-1$ respectively. Computing the integral, this yields the result \begin{equation} \mathbb{J}_{\pm}(t)= \frac{1}{2\pi}e^{-p_0^2-x_0^2} \left(\mp\text{sgn}(\sin\omega t)+\sqrt{\pi } e^{g(x_0, p_0, t)^2} g(x_0, p_0, t) (1\mp\text{sgn}(\sin\omega t)\erf (g(x_0, p_0, t)))\right),\end{equation} with $g(x_0, p_0, t)=p_0 \cos\omega t -x_0\sin\omega t$. \subsection{Time derivative of the quasi-probability} \label{app:curr} \noindent To calculate the time-derivative of the QP as appears in Eq.~(\ref{eq:dqdt}), we begin by writing the simple projector identity \begin{equation} P\rho + \rho P = P\rho P - \bar P \rho \bar P +\rho, \end{equation} where $\bar P = 1-P$. Hence the quasi-probability, Eq.~(\ref{eq:qp}) is given by \begin{equation} q(-,+)=\frac12{\rm Tr} \left(P_{+}(t_2)\left(P_-(t_1) \rho P_-(t_1) - P_+(t_1) \rho P_+(t_1) +\rho\right) \right). \end{equation} This may be written in terms of the non-negative sequential measurement probabilities \begin{equation} \label{eq:seqprob} p_{12}(s_1, s_2)={\rm Tr}\left(P_{s_2}(t_2)P_{s_1}(t_1)\rho P_{s_1}(t_1)\right), \end{equation} in the form \begin{equation} q(-,+)=\frac12\left( p_{12}(-,+)-p_{12}(+,+)+\ev{P_+(t_2)}\right). \end{equation} It is now simple to take the derivative with respect to $t_2$, noting that \begin{equation} \frac{d}{dt}\theta(\hat x(t))=\frac{1}{2m}\left(\hat p(t) \delta(\hat x(t)) + \delta(\hat x(t))\hat p(t)\right)=\hat J(t), \end{equation} yielding \begin{equation} \frac{d q(-,+)}{dt}=\frac12{\rm Tr}\left(\hat J(t)\left(P_-(t_1) \rho P_-(t_1) - P_+(t_1) \rho P_+(t_1) +\rho\right) \right). \end{equation} We can hence rewrite $q(-,+)$ as \begin{equation} \label{eq:qpJ} q(-,+)=\frac12 \int_{t_1}^{t_2}\mathop{dt} \left(J_-(t)-J_+(t)+J(t)\right), \end{equation} where we have introduced the `chopped current' \begin{equation} \label{eq:chopcurr} J_{\pm}(t)=\ev{\theta(\pm \hat x) \hat J(t) \theta(\pm \hat x)}{\psi}, \end{equation} which corresponds to the current at the origin, after the initial measurement. The chopped currents are therefore the currents of the wave-functions $\langle x\lvert e^{-iHt}\theta(\pm\hat x)\rvert \psi\rangle$, and note the connection to the Moshinsky function when $|\psi\rangle$ is expanded in the momentum basis. 
Using Eq.~(\ref{eq:class}), we may rewrite Eq.~(\ref{eq:qpJ}) in terms of the difference between quantum and classical post-measurement currents, as \begin{equation} q(-,+)=\int_{t_1}^{t_2}\mathop{dt}J_-(t)+\frac12 \int_{t_1}^{t_2}\mathop{dt}\left(\mathbb{J}_-(t)-J_-(t) +\mathbb{J}_+(t)-J_+(t)\right). \end{equation} \subsection{Chopped currents calculation} \label{app:chopcurr} We calculate the `chopped current' first, that is \begin{equation} J_{\pm}(x,t)=\frac{1}{2m}\ev{\delta(\hat x - x)\hat p + \hat p \delta(\hat x - x)}{\phi^{\pm}_\alpha(t)}, \end{equation} where $\phi^{\pm}_\alpha(t)$ is the time evolution of a coherent state, initially projected on $\theta(\pm \hat x)$ at $t_0$, \begin{equation} \ket{\phi^{\pm}_\alpha(t)} = e^{-iHt}\theta(\pm \hat x)\ket\alpha. \end{equation} Calculating the current in the position basis, we have \begin{equation} J_\pm(x,t)=-\frac{i\hbar}{2m}\left(\phi_\alpha^{\pm*}(x,t)\frac{\partial \phi_\alpha^{\pm}(x,t)}{\partial x}-\frac{\partial \phi_\alpha^{\pm*}(x,t)}{\partial x}\phi_\alpha^{\pm}(x,t)\right), \end{equation} equivalent to \begin{equation} \label{eqn:Jx} J_{\pm}(x,t)=\frac{\hbar}{m}\Im\left(\phi_\alpha^{\pm*}(x,t)\frac{\partial \phi_\alpha^{\pm}(x,t)}{\partial x}\right). \end{equation} We calculate the evolved chopped state by \begin{equation} \label{eq:phipm} \phi_\alpha^\pm(x,t)=\int_{\Delta(\pm)}\mathop{dy}K(x,y,t)\psi_\alpha(y,t_0), \end{equation} where $\Delta(+)=[0,\infty)$, $\Delta(-)=(-\infty,0]$, \begin{equation} \psi_{\alpha}(x,t_0)=\frac{1}{\pi^\frac14}\exp(-\frac12(x-x_0)^2 + i p_0 x) \end{equation} is the non-dimensionalized coherent state wave-function, with time evolution $\alpha(t) = e^{-i\omega t}\alpha(0)$, and \begin{equation} K(x,y,t)=\left(\frac{1}{2\pi i\sin \omega t}\right)^\frac12 \exp\left(-\frac{1}{2i\sin\omega t}\left((x^2+y^2)\cos\omega t -2 x y\right)\right) \end{equation} is the propagator for the harmonic potential. Inserting the relevant expressions within Eq.~(\ref{eq:phipm}) yields \begin{multline} \label{eq:propchop} \phi_\alpha^\pm(x,t)=\left(\frac{1}{2\pi i \sin \omega t}\right)^\frac12 \left(\frac{1}{\pi}\right)^\frac14 \int_{\Delta(\pm)}\mathop{dy}\exp\left(-\frac12(y-x_0)^2+ i y p_0\right)\times\\\exp\left(-\frac{x^2 + y^2}{2i \tan \omega t} +\frac{x y}{i \sin \omega t}\right). \end{multline} We proceed by writing the integral as \begin{equation} I_\pm(a,b,c)=\int_{\Delta(\pm)}\mathop{dr}\exp\left(-\frac12(r-a)^2 + i b r+ i c r^2\right), \end{equation} where \begin{align} a&=x_0,\\ b&=p_0 - \frac{x}{\sin\omega t},\\ c&=\frac{1}{2\tan\omega t}. \end{align} Completing the integration, we have \begin{equation} \label{eq:Ipm} I_\pm(a,b,c)=\frac{\sqrt{\pi } e^{-\frac{2 a^2 c+2 a b+i b^2}{4 c+2 i}} \left(1\pm \erf\left(\frac{a+i b}{\sqrt{2-4 i c}}\right)\right)}{\sqrt{2-4 i c}}. \end{equation} We can hence write the chopped wave-function as \begin{equation} \label{eq:chopwav} \phi_\alpha^\pm(x,t)=\left(\frac{1}{2\pi i \sin \omega t}\right)^\frac12 \left(\frac{1}{\pi}\right)^\frac14 e^{-\frac{x^2}{2i \tan \omega t}}I_\pm\left(x_0, p_0-\frac{x}{\sin\omega t},\frac{1}{2\tan\omega t}\right). \end{equation} Putting Eq.~(\ref{eqn:Jx}) into rescaled units as well, we have \begin{equation} \label{eq:Jpmap} J_{\pm}(x,t)=\omega\Im\left(\phi_\alpha^{\pm*}(x,t)\frac{\partial \phi_\alpha^{\pm}(x,t)}{\partial x}\right).
\end{equation} To take the derivative we note that $I_\pm(a,b,c)$ depends on $x$ only through its second argument, and so we define \begin{equation} K_{\pm}(a,b,c)=\frac{\partial b}{\partial x}\frac{\partial}{\partial b}I_\pm(a,b,c), \end{equation} which explicitly yields \begin{equation} \label{eq:kpm} K_{\pm}(a,b,c)=\frac{-1}{\sin\omega t}\frac{e^{-\frac{a^2}{2}} \left(-\sqrt{2 \pi } (a+i b) e^{\frac{(a+i b)^2}{2-4 i c}} \left(1\pm \text{erf}\left(\frac{a+i b}{\sqrt{2-4 i c}}\right)\right)\mp 2 \sqrt{1-2 i c}\right)}{2 \sqrt{1-2 i c} (2 c+i)}. \end{equation} Altogether, this yields the current \begin{multline} \label{eq:chopJ} J_{\pm}(x,t)=\frac{\omega}{2\pi^{\frac32}}\Im\Bigg(\frac{1}{\lvert\sin\omega t\rvert}I_\pm\bigg(x_0,-p_0+\frac{x}{\sin \omega t},-\frac{1}{2\tan \omega t}\bigg)\times\\ \bigg(\frac{ix}{\tan \omega t}I_\pm\bigg(x_0,p_0-\frac{x}{\sin \omega t},\frac{1}{2\tan \omega t}\bigg) +K_\pm\bigg(x_0,p_0-\frac{x}{\sin \omega t},\frac{1}{2\tan \omega t}\bigg)\bigg) \Bigg). \end{multline} We also calculate the current of the original unperturbed coherent state, in these same rescaled units. Since coherent states are eigenfunctions of the annihilation operator, with eigenvalue $\alpha(t)$, it follows that \begin{equation} \frac{\partial}{\partial x} \psi_\alpha(x,t)=\psi_\alpha(x,t)\left(\sqrt2 \alpha(t) - x\right). \end{equation} Hence the current is given by \begin{equation} J(x,t)=\frac{\omega}{\sqrt\pi}\Im\left(e^{-(x-\sqrt{2}\Re \alpha(t))^2}(\sqrt2 \alpha(t) -x)\right), \end{equation} where, taking the imaginary part and using Eq.~(\ref{eq:alphatox}), we have \begin{equation} J(x,t)=\frac{\omega}{\sqrt\pi}p_t e^{-(x-x_t)^2}. \end{equation} \section{Small Time Current Expansions} \label{app:tJ} We are interested in a small time expansion of the chopped current $J_{\pm}(t)$, for any general state $\ket\psi$. We start by defining the chopped-evolved state, \begin{equation} \ket{\psi^{\pm}(t)}=e^{-iHt}\theta(\pm \hat x)\ket{\psi}. \end{equation} We now follow Appendix~\ref{app:chopcurr} up to Eq.~(\ref{eq:Jpmap}) to calculate the current, this time using the general state $\ket\psi$, leading to \begin{equation} J_{\pm}(t)=\omega\Im\left(\psi^{\pm*}(0,t)\frac{\partial\psi^\pm(0,t)}{\partial x}\right), \end{equation} with $\ket\psi$ represented in the non-dimensional position basis, i.e. $\braket{x}{\psi}=\left(\frac{m\omega}{\hbar}\right)^{\frac{1}{4}}\psi(x)$, with a normalised $\psi(x)$ which is purely a function of $x$. Using the QHO propagator as in App.~\ref{app:chopcurr} up to Eq.~(\ref{eq:propchop}), now with the $\ket\psi$ state, we have \begin{equation} \label{eq:genprop} \psi^{\pm}(x,t)=\left(\frac{1}{2\pi i \sin\omega t}\right)^\frac12 \int_{\Delta(\pm)}\mathop{dy}\psi(y,0)\exp\left(-\frac{x^2 + y^2}{2i \tan \omega t} +\frac{x y}{i \sin \omega t}\right). \end{equation} We are interested in the current at $x=0$, which means we only need $\psi^{\pm}(x,t)$ and its first derivative evaluated at $x=0$, leaving \begin{equation} \label{eq:psi0} \psi^{\pm}(0,t)=\left(\frac{1}{2\pi i \sin\omega t}\right)^\frac12 \int_{\Delta(\pm)}\mathop{dy}\psi(y,0)\exp\left(-\frac{y^2}{2i \tan \omega t}\right). \end{equation} We now argue that for small $t$ the main contribution to the integral will come from near the boundary of the chop, and hence Taylor expand $\psi(y,0)$ around $y=0$. Using the parametrisation $y=z(\tan\omega t)^\frac12$, which simplifies the exponential part of the integrand, the expansion reads \begin{equation} \psi(y,0)=\sum_{n=0}^{\infty}\frac{\psi^{(n)}(0,0)}{n!}z^n(\tan\omega t)^\frac{n}{2}.
\end{equation} Using this in Eq.~(\ref{eq:psi0}), and using the substitution within the integral, we have \begin{equation} \psi^{\pm}(0,t)=\left(\frac{\tan\omega t}{2\pi i \sin\omega t}\right)^\frac12 \int_{\Delta(\pm)}\mathop{dz}\exp\left(-\frac{z^2}{2i}\right)\sum_{n=0}^{\infty}\frac{\psi^{(n)}(0,0)}{n!}z^n(\tan\omega t)^\frac{n}{2}. \end{equation} Interchanging the order of summation and integration, we have \begin{equation} \psi^\pm(0,t)=\left(\frac{1}{2\pi i \cos\omega t}\right)^\frac12\sum_{n=0}^\infty \frac{\psi^{(n)}(0,0)}{n!}(\tan \omega t)^\frac{n}{2}\int_{\Delta(\pm)}\mathop{dz}e^{-\frac{z^2}{2i}}z^n. \end{equation} We now define \begin{equation} K_\pm(n)=\int_{\Delta(\pm)}\mathop{dz}e^{-\frac{z^2}{2i}}z^n, \end{equation} which can be calculated by taking the Fourier transform of $e^{iz^2/2}\theta(\pm z)$, taking $n$ derivatives in Fourier space, and then evaluating with the conjugate variable set to $0$, to give the result \begin{equation} K_{\pm}(n)=(\pm1)^n 2^{\frac{n-1}{2}} e^{\frac{1}{4} i \pi (n+1)} \Gamma \left(\frac{n+1}{2}\right), \end{equation} where $\Gamma$ is the Gamma function. This gives a final result of \begin{equation} \label{eq:psiexp0} \psi^\pm(0,t)=\left(\frac{1}{2\pi i \cos \omega t}\right)^\frac12 \sum_{n=0}^\infty K_\pm(n)\frac{\psi^{(n)}(0,0)}{n!}(\tan\omega t)^\frac{n}{2}. \end{equation} We now calculate the derivative of the chopped wavefunction by taking the derivative of Eq.~(\ref{eq:genprop}), evaluated at $x=0$, to find \begin{equation} \frac{\partial \psi^\pm(x,t)}{\partial x}\eval_{x=0} =\frac{-i}{\sin \omega t}\left(\frac{1}{2\pi i \sin\omega t}\right)^\frac12 \int_{\Delta(\pm)}\mathop{dy}\psi(y,0)y \exp\left(-\frac{y^2}{2i \tan \omega t}\right). \end{equation} We now note that, with the exception of the pre-factor $\frac{-i}{\sin\omega t}$, this is the same result as before, only with $\psi(y,0)$ swapped for $y\psi(y,0)$, leading to the result \begin{equation} \frac{\partial \psi^\pm(0,t)}{\partial x}=-\left(\frac{-1}{2\pi i\sin^2\omega t \cos\omega t}\right)^\frac12\sum_{n=0}^\infty K_\pm(n)\frac{\frac{\partial^n}{\partial y^n}(y\psi(y,0))\eval_{y=0}}{n!}(\tan\omega t)^\frac{n}{2}. \end{equation} Then since \begin{equation} \frac{\partial^n}{\partial y^n}y\psi(y,0)\eval_{y=0}=n \psi^{(n-1)}(0,0), \end{equation} we have as a final result \begin{equation} \label{eq:psipexp0} \frac{\partial \psi^\pm(0,t)}{\partial x}=-\left(\frac{-1}{2\pi i\sin^2\omega t \cos\omega t}\right)^\frac12\sum_{n=1}^\infty K_\pm(n)\frac{n\psi^{(n-1)}(0,0)}{n!}(\tan\omega t)^\frac{n}{2}, \end{equation} noting the change in the sum's lower limit. We now combine Eq.~(\ref{eq:psiexp0}) and Eq.~(\ref{eq:psipexp0}) to yield the small-time expansion for the chopped current \begin{equation} J_\pm(t)=\frac{-\omega}{2\pi\sin\omega t \cos\omega t}\Im \left(i\sum_{\ell=0}^{\infty}\sum_{n=1}^{\infty}K_{\pm}(n)K^*_{\pm}(\ell) \frac{n\psi^{(n-1)}(0,0)\psi^{*(\ell)}(0,0)}{n!\ell!}(\tan\omega t)^\frac{n+\ell}{2}\right). \end{equation} We note that this result is in fact trivial to integrate over time by noting the derivative of $\tan(\omega t)$, yielding \begin{equation} \int_{0}^{\tau}\mathop{dt}\frac{(\tan\omega t)^\frac{k}{2}}{\sin\omega t\cos\omega t}=\frac{2}{k\omega}(\tan\omega \tau)^{\frac{k}{2}}.
\end{equation} The time-integral of the chopped current is thus \begin{equation} \int_{0}^{\tau}\mathop{dt}J_\pm(t)=-\frac{1}{\pi}\Im\left(i\sum_{\ell=0}^{\infty}\sum_{n=1}^{\infty}K_{\pm}(n)K^*_{\pm}(\ell) \frac{n\psi^{(n-1)}(0,0)\psi^{*(\ell)}(0,0)}{(n+\ell)n!\ell!}(\tan\omega \tau)^\frac{n+\ell}{2}\right). \end{equation} We also note that by defining \begin{equation} L(n)=K_+(n)+K_-(n), \end{equation} we may adapt the result to the unchopped current as \begin{equation} J(t)=-\frac{\omega}{2\pi\sin\omega t \cos\omega t}\Im \left(i\sum_{\ell=0}^{\infty}\sum_{n=1}^{\infty}L(n)L^*(\ell) \frac{n\psi^{(n-1)}(0,0)\psi^{*(\ell)}(0,0)}{n!\ell!}(\tan\omega t)^\frac{n+\ell}{2}\right), \end{equation} with time-integral \begin{equation} \int_{0}^{\tau}\mathop{dt}J(t)=-\frac{1}{\pi}\Im\left(i\sum_{\ell=0}^{\infty}\sum_{n=1}^{\infty}L(n)L^*(\ell) \frac{n\psi^{(n-1)}(0,0)\psi^{*(\ell)}(0,0)}{(n+\ell)n!\ell!}(\tan\omega \tau)^\frac{n+\ell}{2}\right). \end{equation} Using Eq.~(\ref{eq:qpJ}) to express the quasi-probability as the time integral of currents, and noting the similarity of the summands, we can write the quasi-probability as \begin{equation} q(-,+)=-\frac{1}{2\pi}\Im\left(i\sum_{\ell=0}^{\infty}\sum_{n=1}^{\infty}\mathcal{Q}(n,\ell) \frac{n\psi^{(n-1)}(0,0)\psi^{*(\ell)}(0,0)}{(n+\ell)n!\ell!}(\tan\omega \tau)^\frac{n+\ell}{2}\right), \end{equation} where \begin{equation} \mathcal{Q}(n,\ell)=K_-(n)K_-^*(\ell)-K_+(n)K_+^*(\ell)+L(n) L^*(\ell). \end{equation} By approximating the infinite sums to finite order, we are able to approximate the quasi-probability. To get the first three terms of the approximation, we limit both sums to $\ell_{max}=2$ and $n_{max}=3$, yielding \begin{multline} \label{eq:tayl3} q(-,+)= \frac{1}{2\sqrt{\pi}}\lvert\psi(0,0)\rvert^2 \tan^\frac12\omega\tau + \frac{J(0)}{2}\tan\omega\tau+\\\frac{1}{6\sqrt\pi}\left(\lvert \psi'(0,0)\rvert^2-\left[\frac{1}{4}+\frac{3 i}{4}\right]\psi''^{*}(0,0)\psi(0,0)- \left[\frac{1}{4}-\frac{3 i}{4}\right]\psi''(0,0)\psi^*(0,0)\right)\tan^\frac32\omega\tau \\+\mathcal{O}(\tan ^2 \omega \tau). \end{multline} Taking the $\omega\rightarrow 0$ expansions of the trigonometric terms recovers the result for the free particle, \begin{multline} q(-,+)=\frac{1}{2\sqrt{\pi}}\lvert\psi(0,0)\rvert^2 \tau^\frac12+ \frac{J(0)}{2}\tau+\\\frac{1}{6\sqrt\pi}\left(\lvert \psi'(0,0)\rvert^2-\left[\frac{1}{4}+\frac{3 i}{4}\right]\psi''^{*}(0,0)\psi(0,0)- \left[\frac{1}{4}-\frac{3 i}{4}\right]\psi''(0,0)\psi^*(0,0)\right)\tau^\frac32, \end{multline} which we note has a term in $\tau^\frac12$ which was missing from an earlier calculation of this expansion in Ref.~\cite{halliwell2019c}, as well as a different coefficient on the $\tau^\frac32$ term. The initial divergence of the quantum chopped current is clearly seen. These results also agree with the small time expansion of chopped currents given by Sokolovski \cite{sokolovski2012}, giving another useful check on our calculations. For our gaussian initial state we find agreement with the results above; we plot this expansion alongside our original calculation in Fig.~\ref{fig:tayl}.
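As a minimal sketch of how these expressions can be evaluated in practice (our own illustration, not part of the original calculation; the parameter values and truncation order are arbitrary), the truncated double sum for $q(-,+)$ above can be computed for the coherent-state wave-function of Appendix~\ref{app:chopcurr} by generating the derivatives $\psi^{(n)}(0,0)$ symbolically:
\begin{verbatim}
# Sketch: evaluate the truncated double-sum expression for q(-,+) above, using
# the coherent-state wave-function psi(x) = pi^(-1/4) exp(-(x-x0)^2/2 + i p0 x).
# Parameter values and the truncation order NMAX are illustrative only.
import numpy as np
import sympy as sp
from math import gamma, factorial, pi

x0, p0, wt = 0.5, -1.9, 0.3   # initial state parameters and omega*tau
NMAX = 6                      # truncation of both sums

# derivatives psi^(n)(0,0), computed symbolically for reliability
x = sp.symbols('x')
psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-sp.Rational(1, 2)*(x - x0)**2 + sp.I*p0*x)
dpsi = [complex(sp.N(sp.diff(psi, x, n).subs(x, 0))) for n in range(NMAX + 1)]

def K(pm, n):
    # K_pm(n) = (pm)^n 2^((n-1)/2) e^(i pi (n+1)/4) Gamma((n+1)/2)
    return pm**n * 2**((n - 1)/2) * np.exp(1j*np.pi*(n + 1)/4) * gamma((n + 1)/2)

def L(n):
    return K(+1, n) + K(-1, n)

def Q(n, l):
    return K(-1, n)*np.conj(K(-1, l)) - K(+1, n)*np.conj(K(+1, l)) + L(n)*np.conj(L(l))

total = 0j
for l in range(0, NMAX):
    for n in range(1, NMAX + 1):
        total += (Q(n, l) * n * dpsi[n - 1] * np.conj(dpsi[l])
                  / ((n + l) * factorial(n) * factorial(l))
                  * np.tan(wt)**((n + l)/2))

print(-np.imag(1j*total) / (2*pi))   # small-time approximation to q(-,+)
\end{verbatim}
Comparing the output of such a truncation against the full calculation of Appendix~\ref{app:timeev} at small $\omega\tau$ provides a check analogous to Fig.~\ref{fig:tayl}.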
\begin{figure} \caption{The small-time expansion of $q(-,+)$ plotted alongside the previous calculation, at varying degrees of truncation, where $N=2$ corresponds to Eq.~(\ref{eq:tayl3}).} \label{fig:tayl} \end{figure} \section{Wigner Function calculational details} \label{app:wig} The Wigner-Weyl transform, which maps Hermitian operators to real phase space functions~\cite{hillery1984a, tatarskii1983a, case2008, halliwell1993a}, is defined by \begin{equation} \label{eq:wigtx} W_A(X,p) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathop{d\xi}e^{-ip\xi}\mel{X+\tfrac{\xi}{2}}{A}{X-\tfrac{\xi}{2}}. \end{equation} Traces of pairs of operators may be expressed in the Wigner representation as \begin{equation} \label{eq:wigexp} {\rm Tr}(\hat A\hat B) = 2\pi \int_{-\infty}^{\infty}\mathop{dX}\int_{-\infty}^{\infty}\mathop{dp}W_{A}(X,p)W_{B}(X,p). \end{equation} To apply this formula to the quasi-probability there are two natural ways to proceed. First, in Ref.~\cite{halliwell2019c}, the free particle quasi-probability was explored in Wigner-Weyl form using $\hat A = \tfrac12\left(P_{s_1}P_{s_2}+P_{s_2}P_{s_1}\right)$ and $\hat B=\rho$. However this was not found to be very useful since $W_A(X,p)$ in this case is highly oscillatory and it was not possible to clearly identify the regions of negativity; hence we proceed with a different approach. We first write the QP in the form $q(s_1, s_2)={\rm Tr}(\bar{\rho}_{s_1}P_{s_2}(\tau))$, where $\bar{\rho}_{s_1}=\frac12(P_{s_1}\rho+\rho P_{s_1})$, and $t_1=0$ without loss of generality. Hence by Eq.~(\ref{eq:wigexp}) we have \begin{equation} \label{eq:wigqp} q(s_1,s_2)=2\pi \int_{-\infty}^{\infty}\mathop{dX}\int_{-\infty}^{\infty}\mathop{dp}W_{\bar\rho_{s_1}}(X,p)W_{P_{s_2}(\tau)}(X,p), \end{equation} where $W_{P_{s_2}(\tau)}(X,p)=\theta(s_2(X \cos \omega \tau + p \sin\omega \tau))$. Using Eq.~(\ref{eq:wigtx}), the transform of $\bar\rho_{s_1}$ is given by \begin{equation} \label{eq:wigrhob} W_{\bar\rho_{s_1}}(X,p)=\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathop{d\xi}e^{-ip\xi}\mel{X+\tfrac{\xi}{2}}{P_{s_1}\rho+\rho P_{s_1}}{X-\tfrac{\xi}{2}}. \end{equation} Since $t_1=0$, we have $P_{s_1}=\theta(s_1 \hat x)$, leading to \begin{equation} W_{\bar\rho_{s_1}}(X,p)=\frac{1}{4\pi}\int_{-\infty}^{\infty}\mathop{d\xi}e^{-ip\xi}\left(\theta(s_1(X+\tfrac{\xi}{2}))+\theta(s_1(X-\tfrac{\xi}{2}))\right)\mel{X+\tfrac{\xi}{2}}{\rho}{X-\tfrac{\xi}{2}}. \end{equation} Since coherent states are pure states, we have simply that $\rho(x,y)=\psi(x)\psi^*(y)$, where we will use natural units with $\psi(x)=\tfrac{1}{\pi^\frac14}\exp(-\tfrac12(x-x_0)^2+ip_0x)$. This yields the integrand as $I(X,\xi)=e^{-ip\xi}\exp\left(i p_0 \xi-X^2+2 X x_0-x_0^2-\tfrac{\xi^2}{4}\right)$. Demonstrating with the $s_1=+1$ case, the theta functions are handled by splitting the integral into two integrals over the regions $[-2X, \infty)$ and $(-\infty, 2X]$, and so we have \begin{equation} W_{\bar\rho_{s_1}}(X,p)=\frac{1}{4\pi\sqrt{\pi}}\left(\int_{-2X}^{\infty}\mathop{d\xi}I(X,\xi)+\int_{-\infty}^{2X}\mathop{d\xi}I(X,\xi)\right). \end{equation} Computing the integral involved, we reach the result \begin{equation} W_{\bar\rho_{s_1}}(X,p)=\frac{1}{2\pi}e^{-(p-p_0)^2- (X-x_0)^2}\left(1+\Re\erf\left(i(p-p_0)+s_1 X\right)\right), \end{equation} which may be written as \begin{equation} \label{eq:wigint} W_{\bar\rho_{s_1}}(X,p)=\frac12 W_{\rho}(X,p)\left(1+\Re\erf\left(i(p-p_0)+s_1 X\right)\right)
\end{equation} in terms of $W_{\rho}(X,p)$, the Wigner function of the pure coherent state, given by \begin{equation} W_{\rho}(X,p)=\frac{1}{\pi}\exp(-(p-p_0)^2-(X-x_0)^2). \end{equation} The classical equivalent for Eq.~(\ref{eq:wigint}) is $\frac12 W_\rho(X,p)(1+\text{sgn}(X))$, which is approached for $p$ close to $p_0$ and for large $\lvert X \rvert$. The time evolution of the Wigner function in the case of the QHO is given by rigid rotation in accordance with classical paths $X_{\tau}=x_0\cos \omega \tau - p_0 \sin\omega\tau$. Hence in Eq.~(\ref{eq:wigqp}) $W_{P_{s_2}(\tau)}(X,p)=\theta(s_2 X_{-\tau})$, and the final expression for the quasi-probability is \begin{equation} q(s_1,s_2)=\int_{-\infty}^{\infty}\mathop{dX}\int_{-\infty}^{\infty}\mathop{dp}f_{s_1, s_2}(X,p) \end{equation} with the phase-space density $f_{s_1, s_2}(X,p)$ given by \begin{equation} \label{eq:wigfin} f_{s_1, s_2}(X,p)=\frac12 W_{\rho}(X,p)\left(1+\Re\erf\left(i(p-p_0)+s_1 X\right)\right)\theta(s_2 X_{-\tau}). \end{equation} \begin{figure}\label{fig:wig2} \end{figure} Eq. (\ref{eq:wigint}) and Eq. (\ref{eq:wigfin}) are plotted in Fig.~\ref{fig:wig2}. \end{document}
\begin{document} \title{Matroids arising from electrical networks} \author{Bob Lutz} \address{Comcast Technology Center, 1800 Arch St., Philadelphia, PA 19103} \email{bob\[email protected]} \thanks{Work of the author was partially supported by NSF grants DMS-1401224 and DMS-1701576; and by NSF grant DMS-1440140 while the author was in residence at the Mathematical Sciences Research Institute in Berkeley, California during the fall 2019 semester.} \date{\today} \subjclass[2020]{05B35, 05C22 (Primary) 05C15, 05C40, 34B45 (Secondary)} \begin{abstract} This paper introduces \emph{Dirichlet matroids}, a generalization of graphic matroids arising from electrical networks. We present four main theorems. First, we exhibit a matroid quotient involving geometric duals of networks embedded in surfaces with boundary. Second, we characterize the Bergman fans of Dirichlet matroids as subfans of graphic Bergman fans. Third, we prove an interlacing result on the real zeros and poles of the trace of the response matrix. And fourth, we bound the coefficients of the precoloring polynomial of a network by the coefficients of the associated chromatic polynomial. \end{abstract} \maketitle \section{Introduction} An \emph{electrical network} (or simply a \emph{network}) $D=(G,\B)$ consists of a finite connected graph $G=(V,E)$ and a nonempty set $\B\subseteq V$ called the \emph{boundary}. This paper associates a matroid $M(D)$ to any network $D$ as follows. Let $\eo=E\cup\eh$, where $\eh$ is an extra element not in $E$. A \emph{grove} of $D$ is a nonempty acyclic set $F\subseteq E$ such that $F$ meets every interior vertex, and every component of $F$ meets the boundary. Let $M(D)$ denote the matroid on $\eo$ whose bases are all sets $B\subseteq \eo$ of the following types: \begin{enumerate}[(i)] \item $B=F$ for some grove $F$ of $D$ containing exactly one path between boundary vertices \item $B=F\cup\eh$ for some grove $F$ of $D$ containing no path between boundary vertices. \end{enumerate} A \emph{Dirichlet matroid} is any matroid of the form $M(D)$. For example, the \emph{star network} $D=S_d$ is the network on $d$ vertices illustrated on the left side of Figure \ref{fig:introstar}. Here we have a single interior vertex $i$ and an edge $ij$ for every $j\in \B$. The bases of $M(D)$ are the 2-element subsets of $\eo$. Hence the Dirichlet matroid $M(D)$ is isomorphic to the uniform matroid $U_{2,d}$, i.e. the $d$-point line. \begin{figure} \caption{A star network with boundary vertices marked in white, left; an affine diagram of its Dirichlet matroid, right.} \label{fig:introstar} \end{figure} Dirichlet matroids arise in several ways. Classically, they are the complete principal truncations of graphic matroids along edge sets of cliques (Appendix \ref{sec:appa}). In the language of biased graphs, they are the complete lift matroids of almost-balanced biased graphs (Appendix \ref{sec:printrun}). Our interest is motivated by \emph{Dirichlet arrangements}, a class of hyperplane arrangements introduced in \cite{lutz2019hyp}. The simple Dirichlet matroids are precisely the matroids of Dirichlet arrangements (Example \ref{eg:dirarr}). Our first theorem generalizes a result on geometric duals of cellularly embedded graphs in surfaces. Let $\Sigma$ be a surface with boundary. An \emph{embedding} of $D$ in $\Sigma$ is an embedding of $\g$ in $\Sigma$ that satisfies certain conditions on the faces and boundary vertices (Definition \ref{def:embedding}). 
For example, a network is \emph{circular} or \emph{cylindrical} if it is embedded in a closed disk or cylinder, respectively. If $D$ is embedded in $\Sigma$, then one can construct a geometric dual network $D^*$ also embedded in $\Sigma$ (Definition \ref{def:dual}). Let $M^*(D)$ denote the dual of $M(D)$. If $|\B|\geq 2$, then the ranks of $M^*(D)$ and $M(D^*)$ differ by $|\B|-\chi(\Sigma)-1$, so the two matroids are not isomorphic in general. However, they are closely related. Given matroids $M$ and $N$ on the same ground set, we say that $N$ is a \emph{quotient} of $M$ if every flat of $N$ is a flat of $M$. Matroid quotients are combinatorial abstractions of quotient spaces in linear algebra. \begin{thm} Let $D$ be a network embedded in a surface with boundary. If $|\B|\geq 2$, then $M(D^*)$ is a quotient of $M^*(D)$. \label{thm:dualintro} \end{thm} When $|\B|=2$, we recover the special case mentioned above: if $\g$ is cellularly embedded in a surface, then the graphic matroid $M(\g^*)$ is a quotient of $M^*(\g)$ \cite[Lemma 1]{richter1984}. While circular and cylindrical networks have been studied previously \cite{kenyon2017, lam2012}, the author is unaware of any prior results on networks embedded in general surfaces with boundary. Our second theorem is a tropical characterization of Dirichlet matroids. The \emph{Bergman fan} of a loopless matroid $M$, denoted by $\berg{M}$, is a polyhedral fan supported on the intersection of the tropical hyperplanes defined by the circuits of $M$ (Definition \ref{def:berg}). Just as $M$ is defined by its bases or circuits, it is also defined by $\berg{M}$. For our purposes, an \emph{isomorphism} of polyhedral fans is a piecewise-linear homeomorphism between their supports that maps cones onto cones. \begin{thm} Let $\mg$ denote the graph obtained from $\g$ by adding an edge between each pair of boundary vertices. If $\g$ is loopless, then there is an isomorphism of polyhedral fans \begin{equation} \berg{M(D)}\to \berg{M(\mg)}_{\B}, \end{equation} where $\berg{M(\mg)}_{\B}$ is the subfan of $\berg{M(\mg)}$ consisting of all cones whose points are constant on the set of edges between boundary vertices. \label{thm:bergintro} \end{thm} For a complete graph $K_d$, the Bergman fan $\berg{M(K_d)}$ can be identified with the space of phylogenetic $d$-trees \cite{ardila2006}. We generalize this result to a family of ``complete'' networks and phylogenetic trees with prescribed sets of equidistant leaves (Proposition \ref{prop:newphylo}). We also compute a minimal tropical basis of $M(D)$ (Proposition \ref{prop:mintrop}). Our third result concerns the \emph{response matrix} of $D$, which describes the electrical properties of the network. For this, we think of the edges of $\g$ as linear resistors and the boundary vertices as nodes to which voltages are applied, e.g. by batteries. Given an edge $e\in E$, the \emph{conductance} $\con_e\in \R$ measures how easily current may flow through $e$. There is a linear map $\R^{\B}\to \R^{\B}$ that takes the boundary voltages to the resulting currents. The \emph{response matrix} is the matrix $\Lambda=\Lambda(D,\gamma)$ of this map. The entries of $\Lambda$ are homogeneous rational functions in the conductances that can be written in terms of groves. We are interested in the trace of $\Lambda$. A line in $\R^k$ has \emph{positive direction vector} if it can be parametrized as $t\mapsto x+ty$ for some $x\in \R^k$ and $y\in (0,\infty)^k$.
\begin{thm} The zeros and poles of the trace of the response matrix $\Lambda$ interlace as the conductances of the network travel along any line in $\R^E$ with positive direction vector. \label{thm:realintro} \end{thm} For networks with $|\B|=2$, the quantity $\tr\Lambda$ has an electrical interpretation: it is twice the \emph{effective conductance} between the boundary vertices \cite[Section 2]{wagner2005}. We show that Theorem \ref{thm:realintro} follows from the \emph{half-plane property} of Dirichlet matroids (Definition \ref{def:hpp}). Our fourth result concerns the polynomial at the heart of the following coloring problem: \begin{quote} How many ways can a given injective $k$-coloring of $\B$ be extended to a proper $k$-coloring of $\g$? \end{quote} The answer is $\cp{D}(k)$, where $\cp{D}$ is the \emph{precoloring polynomial} of $D$ \cite[Definition 3.5]{lutz2019hyp}. Precoloring polynomials have been studied in connection with Sudoku puzzles, marked order polytopes and list colorings, but little is known about their behavior in general \cite{herzberg2007, jochemko2014, stanley2015}. Our result compares the polynomial $\cp{D}$ directly to the chromatic polynomial $\cp{\g}$. \begin{thm} Assume that $\g$ is loopless, and write the precoloring polynomial of $D$ and the chromatic polynomial of $\g$ as \begin{equation} \begin{aligned} \cp{\g}(\l) &= \l^d - a_1\l^{d-1} + \cdots + (-1)^d a_d\\ \cp{D}(\l) &= \l^n - b_1\l^{n-1} + \cdots + (-1)^n b_n, \end{aligned} \end{equation} so that all $a_i$ and $b_i$ are nonnegative. We have $a_i\geq b_i$ for $i=1,\ldots,n$, with $a_i = b_i$ if $i$ is less than the minimum number of edges in a path between boundary vertices. \label{thm:cpintro} \end{thm} \section{Basics of Dirichlet matroids} \label{sec:bias} In this section, we give cryptomorphic definitions of Dirichlet matroids and provide several examples. We also discuss the connection to hyperplane arrangements and characterize the fields over which a given Dirichlet matroid is representable. We assume some familiarity with matroid theory, as presented in \cite{oxley2006}. \subsection{Cryptomorphic definitions and key examples} \label{sec:crypt} Throughout, let $D=(\g,\B)$, where $\g=(V,E)$ is a finite connected graph and $\B\subseteq V$ is a nonempty set. The elements of $\B$ are called the \emph{boundary nodes} of $D$. A \emph{cycle} of $\g$ is a set $C\subseteq E$ such that each vertex of $\g$ is an endpoint of exactly 0 or 2 edges in $C$. Let $d=|V|$, $m=|\B|$, and $n=d-m$. Recall that $M(D)$ is a matroid on $\eo=E\cup \eh$, where $\eh$ is an element not in $E$, and $\eh$ is shorthand for $\{\eh\}$. For basic descriptions of Dirichlet matroids, we appeal to the theory of \emph{biased graphs} developed by Zaslavsky \cite{zaslavsky1989, zaslavsky1991, zaslavsky2003}. Biased graphs encode a large class of matroids, including Dirichlet matroids. We review this connection in detail in Appendix \ref{sec:appa}. \begin{mydef} A \emph{crossing} of $D$ is a minimal (with respect to inclusion) set $X\subseteq E$ containing a path between distinct boundary nodes. \end{mydef} \begin{prop}[Circuits] A set $C\subseteq \eo$ is a circuit of $M(D)$ if and only if one of the following holds: \begin{enumerate}[(A)] \item $C = X\cup \eh$ for some crossing $X$ \item $C\subseteq E$ is a cycle or loop of $\g$ meeting at most 1 boundary node \item $C\subseteq E$ is a minimal set containing 2 distinct crossings and no circuits of type (B). \end{enumerate} \begin{proof} This is proven in Appendix \ref{sec:bias}. 
\end{proof} \label{prop:circuits} \end{prop} As we saw in the introduction, the bases of $M(D)$ depend on \emph{groves}, a generalization of spanning trees from the literature on electrical networks \cite{kenyon2011}. \begin{mydef} A nonempty acyclic set $F\subseteq E$ is a \emph{grove} of $D$ if $F$ meets every interior vertex, and every connected component of $F$ meets the boundary. \end{mydef} \begin{mydef} Let $\sunc$ be the set of all groves $F$ of $D$ that contain no crossing of $D$. Let $\scro$ be the set of all groves $F$ of $D$ that contain exactly 1 crossing of $D$. \label{def:suncscro} \end{mydef} \begin{prop}[Bases] A set $B\subseteq \eo$ is a basis of $M(D)$ if and only if one of the following holds: \begin{enumerate}[(A)] \item $B\in \scro$ \item $B=F\cup \eh$ for some $F\in \sunc$. \end{enumerate} \begin{proof} This is proven in Appendix \ref{sec:bias}. \end{proof} \label{prop:bases} \end{prop} Sometimes we will prefer to deal only with the simple Dirichlet matroids. We say that a network $D$ is \emph{simple} if $\g$ is simple and $\B$ induces an edgeless subgraph. \begin{prop}[Simplicity] A network $D$ is simple if and only if $M(D)$ is simple. \begin{proof} Let $D$ be a network. The loops of $M(D)$ are the loops of $\g$. The parallel classes of $M(D)$ are the sets of parallel edges of $\g$ and the set $S\cup \eh$, where $S$ is the set of edges between boundary nodes. The result follows. \end{proof} \label{prop:simp} \end{prop} \begin{eg}[Graphic matroids] If $|\B|=2$, then $M(D)$ is graphic. In particular, we have $M(D)\cong M(\mg)$, where $\mg$ is the graph obtained from $\g$ by adding an edge between the two boundary nodes. This example is generalized in Proposition \ref{prop:grapheq}. \label{eg:graphic} \end{eg} \begin{eg}[Lines] Recall the star network $D=S_d$ from the introduction. The bases of $M(D)$ are the 2-element subsets of $\eo$. Hence $M(D)$ is isomorphic to the $d$-point line $U_{2,d}$. \label{eg:star} \end{eg} \begin{eg}[Discriminantal arrangements] Given a point $z\in \C^m$ with all $z_i$ distinct, let $\AA_{m,n}(z)$ denote the arrangement of hyperplanes in $\C^n$ of the following forms: \begin{equation*} \begin{cases} x_i=x_j & \mbox{for all } 1 \leq i<j \leq n\\ x_i=z_j & \mbox{for all } 1 \leq i \leq n \mbox{ and }1\leq j\leq m, \end{cases} \end{equation*} where the $x_i$ are the coordinates of $\C^n$. The arrangements $\AA_{m,n}(z)$ are called \emph{discriminantal arrangements}. Discriminantal arrangements have been studied extensively for their connections to mathematical physics \cite{cohen2000, manin1989, varchenko2011}. We associate to $\AA_{m,n}(z)$ the matroid defined by its cone (see \cite{stanley2007}). Let $D_{m,n}$ denote the network in which $\g$ is obtained from a complete graph $K_{m+n}$ by deleting all edges between the $m$ boundary nodes. The networks $D_{m,n}$ play the role of complete graphs, in the sense that every simple network is obtained from one of the form $D_{m,n}$ by deletion. The matroid $M(D_{m,n})$ is isomorphic to the matroid of $\AA_{m,n}(z)$. \label{eg:dmn} \end{eg} \begin{eg}[Dirichlet arrangements] Generalizing the previous example, let $D$ be any simple network, and let $u\in \C^{\B}$ with all $u_i$ distinct. The vector $u$ is called the \emph{Dirichlet boundary data}; we think of $u$ as voltages applied to the boundary nodes of $D$.
Let $\AA_D(u)$ denote the arrangement of hyperplanes in $\C^{V\setminus \B}$ given by \begin{equation*} \begin{cases} x_i=x_j & \mbox{for all } ij \in E \mbox{ not meeting } \B\\ x_i=u_j & \mbox{for all } ij \in E \mbox{ with } j \in \B. \end{cases} \end{equation*} The arrangements $\AA_D(u)$ are called \emph{Dirichlet arrangements}. Dirichlet arrangements model a certain electrical problem as a geometric problem on their complements \cite{lutz2019hyp}. They are also called \emph{$\psi$-graphical arrangements} in connection with order polytopes of finite posets, and \emph{graphic discriminantal arrangements} in connection with the topology of graphic arrangements \cite{stanley2015, cohen2021}. The matroids defined by the cones of these arrangements are the simple Dirichlet matroids. \label{eg:dirarr} \end{eg} \subsection{Matrix representations} \label{sec:matrep} We characterize the fields over which $M(D)$ is representable and determine which Dirichlet matroids are graphic. \begin{mydef} Let $i\in V\setminus \B$. The \emph{block} of $D$ containing $i$ is the set $U\subseteq V$ of all vertices $j$ such that there exists a path $i_1\cdots i_k$ in $\g$ with $i_1 = i$, $i_k = j$, and $i_1,\ldots,i_{k-1}\in V\setminus \B$. \label{def:block} \end{mydef} In other words, the block containing $i\in V\setminus \B$ is the set of all vertices reachable from $i$ by paths that do not breach the boundary. \begin{eg} Consider the network $D$ illustrated in Figure \ref{fig:blocks} with boundary nodes marked in white. For each $i\in V\setminus \B$, let $U(i)$ denote the block of $D$ containing $i$. There are exactly 2 blocks of $D$: $U(i_2)=\{i_1,i_2,i_3\}$ and $U(i_4)=U(i_5)=\{i_3,i_4,i_5,i_6\}$. \begin{figure} \caption{A network with two blocks.} \label{fig:blocks} \end{figure} \end{eg} \begin{mydef} Let $\omega(D)$ be the positive integer given by \begin{equation*} \omega(D)=\max|U\cap \B|, \end{equation*} where the maximum runs over all blocks $U$ of $D$. \end{mydef} \begin{prop} The matroid $M(D)$ is representable over $\KK$ if and only if $|\KK|\geq \omega(D)$. \label{prop:representable} \begin{proof} See Appendix \ref{sec:reppf}. \end{proof} \end{prop} \begin{prop} The following are equivalent: \begin{enumerate}[(a)] \item $\omega(D)=2$ \item $M(D)$ is binary \item $M(D)$ is regular \item $M(D)$ is graphic. \end{enumerate} \begin{proof} Equivalence of (a), (b) and (c) follows from Proposition \ref{prop:representable}. We prove equivalence of (a) and (d). Graphic matroids are regular. Hence if $\omega(D)\geq 3$, then Proposition \ref{prop:representable} implies that $M(D)$ is not graphic. Suppose instead that $\omega(D)=2$. We construct a graph $H$ such that $M(D)\cong M(H)$. Start with $H=K_2$ on the vertex set $\{i,j\}$. Given $S\subseteq V$, let $\g(S)$ denote the subgraph of $\g$ induced by $S$. For every block $U$ of $D$ containing only 1 boundary node, take a copy of $\g(U)$ and attach it to $H$ by gluing the boundary node to either $i$ or $j$. For every block $U$ containing 2 boundary nodes, take a copy of $\g(U)$ and attach it to $H$ by gluing one boundary node to $i$ and the other to $j$. Finally, for every edge of $\g$ between boundary nodes, add an edge to $H$ parallel to $ij$. The resulting graph $H$ satisfies $M(D)\cong M(H)$. An explicit isomorphism is given by swapping $\eh$ and $ij$. \end{proof} \label{prop:grapheq} \end{prop} \section{Matroid quotients and dual networks} \label{sec:dual} Let $M$ and $N$ be matroids on $E$. 
If every flat of $N$ is a flat of $M$, then we say that $N$ is a \emph{quotient} of $M$, or alternatively that there is a \emph{matroid quotient} $M\to N$. This terminology is explained by the following observation. Given a vector space $V$ and a function $\varphi: E\to V$, consider the matroid $M(\varphi)$ on $E$ in which a set is independent if and only if the images of its elements under $\varphi$ are linearly independent. If $U$ is a linear subspace of $V$ and $f:V\to V/U$ is the quotient map, then $M(f\circ\varphi)$ is a quotient of $M(\varphi)$ \cite[Proposition 7.4.8.2]{brylawski1986}. \begin{prop}[{\cite[Proposition 8.1.6]{kung1986}}] Let $M$ and $N$ be matroids on $E$. The following are equivalent: \begin{enumerate}[(i)] \item $N$ is a quotient of $M$ \item Every circuit of $M$ is a union of circuits of $N$ \item For any sets $A\subseteq B\subseteq E$ we have \begin{equation*} \rk_M(B)-\rk_M(A)\geq \rk_N(B)-\rk_N(A). \end{equation*} \end{enumerate} \label{defthm:quot} \end{prop} There are many other cryptomorphic definitions of matroid quotients, but we will not need them here. We turn our attention to embeddings of networks in surfaces with boundary. \begin{mydef}[Network embedding] Let $D = (\g,\B)$ be a network and $\Sigma$ a surface with boundary $\partial\Sigma$. An \emph{embedding} of $D$ in $\Sigma$ is an embedding of $\g$ in $\Sigma$ that satisfies the following conditions: \begin{enumerate} \item $\B\subseteq \partial\Sigma$ \item The closure of each face of $\g$ is homeomorphic to a closed disk \item No face of $\g$ meets more than one component of $\partial\Sigma$. \end{enumerate} \label{def:embedding} \end{mydef} The conditions of Definition \ref{def:embedding} allow us to define a geometric dual of $D$ as follows. An example is illustrated in Figure \ref{fig:dual}. \begin{mydef}[Geometric dual] Suppose that $D$ is embedded in $\Sigma$. We construct another network $D^*$ also embedded in $\Sigma$. Draw one vertex of $D^*$ in each face of $\g$; if a face meets $\partial \Sigma$, draw that vertex in $\partial \Sigma$. The boundary nodes of $D^*$ are the vertices in $\partial \Sigma$. Two vertices of $D^*$ share an edge if the corresponding faces of $\g$ share an edge of $\g$. This defines a unique network $D^*$, called the \emph{dual} of $D$. \label{def:dual} \end{mydef} \begin{figure} \caption{A circular network and its geometric dual, with one vertex set marked in white and the other in black.} \label{fig:dual} \end{figure} \begin{mydef} A \emph{bond} of $D$ is a minimal set $B\subseteq E$ such that $E\setminus B$ contains no crossings. \label{def:insul} \end{mydef} \begin{prop}[Cocircuits] Let $\og$ be the graph obtained from $\g$ by contracting $\B$ to a single vertex. A set $X\subseteq \oe$ is a cocircuit of $M(D)$ if and only if one of the following holds: \begin{enumerate}[(i)] \item $X$ is a cocircuit of $M(\og)$ \item $X=B\cup \eh$ for some bond $B$ of $D$. \end{enumerate} \begin{proof} See Appendix \ref{sec:bias}. \end{proof} \label{prop:cocircuits} \end{prop} \subsection{Proof of Theorem \ref{thm:dualintro}} Consider the quotient space $\overline{\Sigma}$ obtained from $\Sigma$ by identifying all points in $\partial\Sigma$ as a single point. We adapt the proof of \cite[Theorem 4.3]{em2015} to argue the following lemma. \begin{lem} Let $\Sigma$ be a surface with boundary. If $\g$ is embedded in $\overline{\Sigma}$ with a vertex at the singular point, then $M(\g^*)$ is a quotient of $M^*(\g)$. 
\begin{proof} By Proposition \ref{defthm:quot} it suffices to show that if $A\subseteq B\subseteq E$, then \begin{equation} \rk_{M^*(\g)}(B) - \rk_{M^*(\g)}(A) \geq \rk_{M(\g^*)}(B)-\rk_{M(\g^*)}(A). \label{eq:rk0} \end{equation} Let $c_G(A)$ denote the number of components of the spanning subgraph $(V,A)$ of $G$. In this notation we have $\rk_{M(G)}(A)=|V|-c_G(A)$. Hence $\rk_{M^*(G)}(A)=|A|-c_G(E\setminus A) + 1$, so for any $e\in E$ we have \begin{equation} \rk_{M^*(G)}(A\cup \{e\}) - \rk_{M^*(G)}(A) = 1-c_G(E\setminus (A\cup e)) + c_G(E\setminus A). \label{eq:rk1} \end{equation} We also have \begin{equation*} \rk_{M(G^*)}(A)=|V(G^*)|-c_{G^*}(A). \end{equation*} Consider $\g$ as a 1-complex embedded in $\overline{\Sigma}$. Let $\mathcal{A}$ (resp., $\mathcal{E}$) denote the union of all 1-cells in $A$ (resp., $E$), as a subset of $\overline{\Sigma}$. If $X$ is a subspace of $\overline{\Sigma}$, then write $\rho(X)$ for the number of components of $\overline{\Sigma}\setminus (X\cup v)$, where $v$ is the singular point. We have $c_{G^*}(A)=\rho(\mathcal{E}\setminus \mathcal{A})$. Thus \begin{equation} \rk_{M(G^*)}(A\cup \{e\}) - \rk_{M(G^*)}(A) =\rho(\mathcal{E}\setminus \mathcal{A}) -\rho(\mathcal{E}\setminus (\mathcal{A}\cup e)). \label{eq:rk2} \end{equation} We claim that \eqref{eq:rk1} is greater than or equal to \eqref{eq:rk2}. Writing $S=E\setminus A$ and letting $\mathcal{S}$ denote the union of elements of $S$, the claim is equivalent to \begin{equation} 1+c_G(S)-c_G(S\setminus \{e\}) \geq \rho(\mathcal{S})-\rho(\mathcal{S}\setminus e). \label{eq:rk3} \end{equation} The left side must equal 0 or 1. If it equals 0, then $e$ meets at least one vertex not met by $S$. Thus removing $e$ from $(\overline{\Sigma}\setminus (\mathcal{S}\cup v)) \cup e$ does not increase the number of components, so the right side of \eqref{eq:rk3} is 0, as desired. If instead the left side is 1, then $e$ meets two vertices already met by $S$. Thus removing $e$ from $(\overline{\Sigma}\setminus (\mathcal{S}\cup v)) \cup e$ increases the number of components by at most 1, so the right side of \eqref{eq:rk3} is at most 1, as desired. The claim follows. Now let $A=S_1\subseteq \cdots \subseteq S_d =B$ with $|S_{i+1}\setminus S_i|=1$ for all $i$. The claim gives \begin{equation*} \rk_{M^*(\g)}(S_{i+1}) - \rk_{M^*(\g)}(S_i) \geq \rk_{M(\g^*)}(S_{i+1})-\rk_{M(\g^*)}(S_i). \end{equation*} Taking the sum over all $i$ gives \eqref{eq:rk0}. \end{proof} \label{lem:quot} \end{lem} We will freely refer to the types of circuits and cocircuits of $M(D)$ described in Propositions \ref{prop:circuits} and \ref{prop:cocircuits}, respectively. \begin{proof}[Proof of Theorem \ref{thm:dualintro}] Suppose that $D$ is embedded in $\Sigma$. Let $\og$ be the graph obtained from $\g$ by identifying all vertices in $\B$ as a single vertex $v_0$. The embedding of $D$ in $\Sigma$ gives an embedding of $\og$ in $\overline{\Sigma}$ with $v_0$ at the singular point of $\overline{\Sigma}$. Lemma \ref{lem:quot} gives a matroid quotient $M^*(\og)\to M(\og^*)$. It follows that any circuit of $M^*(\og)$, i.e. any cocircuit of $M(D)$ of type (i), is a union of circuits of $M(\og^*)$. Let $C$ be a circuit of $M(\og^*)$. We claim that $C$ is a union of circuits of $M(D^*)$. Let $H$ be the graph underlying $D^*$. There is an obvious isomorphism $H\to \og^*$ taking the boundary nodes of $D^*$ to the vertices of $\og^*$ in faces of $\og$ whose closures contain $v_0$. Let $k$ be the number of such vertices of $\og^*$ met by $C$.
If $k\leq 1$, then $C$ is a circuit of $M(D^*)$ of type (B). If $k\geq 2$, then $C$ is a union of $\lceil k/2\rceil$ circuits of $M(D^*)$ of type (C). The claim follows. Hence every cocircuit of $M(D)$ of type (i) is a union of circuits of $M(D^*)$. Now suppose that $B\cup \eh$ is a cocircuit of $M(D)$ of type (ii). Note that $B$ is a minimal edge set containing paths between every pair of boundary nodes of $D^*$. Thus $B$ is a union of crossings of $D^*$. Each of these crossings corresponds to a circuit of $M(D^*)$ of type (A), the union of which is $B\cup \eh$. \end{proof} \subsection{Bounds for circular networks} We give tight upper and lower bounds for the number of circuits of $M(D^*)$ needed to form a cocircuit of $M(D)$ when $D$ is circular. Asymptotically, this number is $\Theta(m)$. \begin{thm} If $D$ is circular, then every cocircuit of $M(D)$ can be written as a union of $k$ circuits of $M(D^*)$, where $\frac{1}{4}m + \frac{1}{2} \leq k <\frac{1}{2}m+1$, and these bounds are tight. \begin{proof} We prove only the upper bound; similar techniques yield the lower bound. Examples \ref{eg:sun} and \ref{eg:bisun} will show that the bounds are tight. Suppose that $D$ is circular, so that $\Sigma$ is a closed disk and $\overline{\Sigma}$ is a 2-sphere. Let $\og$ be the graph defined in the proof of Theorem \ref{thm:dualintro}. Lemma \ref{lem:quot} gives a matroid quotient $M^*(\og)\to M(\og^*)$. Since the ranks of the two matroids are equal, this matroid quotient is an isomorphism. Thus if $C$ is a cocircuit of $M(D)$ of type (i), i.e. a circuit of $M^*(\og)$, then $C$ is a circuit of $M(\og^*)$. In the proof of Theorem \ref{thm:dualintro}, it was shown that $C$ is a union of $\lceil k/2\rceil$ circuits of $M(D^*)$, where $k$ is the cardinality of a subset of $\B$. Hence $C$ is a union of at most $\lceil m/2\rceil$ circuits of $M(D^*)$, as desired. Now suppose that $C=B\cup \eh$ is a cocircuit of $M(D)$ of type (ii). To prove the bound, we can choose crossings $X_1,\ldots,X_{m-1}$ contained in $C$ such that for each $i=2,\ldots,m-1$ there is exactly one boundary node met by both $X_i$ and $\bigcup_{j < i} X_j$. Let $X= \bigcup_{i=1}^{m-1} X_i$. Note that $X$ contains a path between each pair of boundary nodes. Hence $X=B$ by minimality. If $i\neq j$, then $X_i\cup X_j$ is a circuit of $M(D^*)$ of type (C). If $m$ is odd, then \begin{equation*} B=(X_1\cup X_2) \cup \cdots \cup (X_{m-2}\cup X_{m-1}) \end{equation*} is a union of $(m-1)/2$ circuits of $M(D^*)$. Since $X_1\cup \eh$ is a circuit of $M(D^*)$ of type (A), we can write $C=B\cup (X_1\cup \eh)$ as a union of $(m+1)/2$ circuits of $M(D^*)$. If $m$ is even, then $B\setminus X_1$ is a union of $(m-2)/2$ circuits of $M(D^*)$. Thus $C=(B\setminus X_1) \cup (X_1\cup \eh)$ is a union of $m/2$ circuits of $M(D^*)$. In either case, the number of circuits needed is less than $(m+2)/2$, as desired. \end{proof} \label{thm:duallater} \end{thm} \begin{eg} The networks in Figure \ref{fig:sun} are the \emph{sun networks} on 4, 5 and 6 boundary nodes. We obtain such a network on any number of boundary nodes. \begin{figure} \caption{Three sun networks.} \label{fig:sun} \end{figure} The left side of Figure \ref{fig:sundual} illustrates the sun network $D$ with $m=5$ and its dual $D^*$. On the right side, a bond $B$ of $D$ is highlighted in green. The minimum number of circuits of $M(D^*)$ whose union is $B\cup\eh$ is 3. In a similar fashion we can construct a bond of the sun network on any number $m$ of boundary nodes.
The corresponding minimum number of circuits is $\lceil m/2\rceil$. \begin{figure} \caption{A sun network $D$ and its geometric dual, left; a bond of $D$ in green, right.} \label{fig:sundual} \end{figure} \label{eg:sun} \end{eg} \begin{eg} The networks in Figure \ref{fig:bisun} are the \emph{bisected sun networks} on 4, 6 and 10 boundary nodes. We obtain such a network on any even number of boundary nodes. \begin{figure} \caption{Three bisected sun networks.} \label{fig:bisun} \end{figure} The left side of Figure \ref{fig:bisundual} illustrates the bisected sun network $D$ with $m=6$ and its dual $D^*$. On the right side, a bond $B$ of $D$ is highlighted in green. The minimum number of circuits of $M(D^*)$ whose union is $B\cup\eh$ is 2. Similarly we can construct a bond of the bisected sun network on any number $m\equiv 2\pmod{4}$ of boundary nodes. The corresponding minimum number of circuits is $\frac{1}{4}m+\frac{1}{2}$, achieving the lower bound in Theorem \ref{thm:duallater}. \begin{figure} \caption{A bisected sun network $D$ and its geometric dual, left; a bond of $D$ in green, right.} \label{fig:bisundual} \end{figure} \label{eg:bisun} \end{eg} \section{Bergman fans} \label{sec:berg} Let $M$ be a matroid on $E$. The \emph{tropical linear space} of $M$ is a geometric object that can be used to prove purely combinatorial results about $M$. For references, see \cite{maclagan2015}. \begin{mydef} The \emph{tropical linear space} of $M$ is the set $\trop{M}$ consisting of all $x\in \R^E$ such that the minimum of $\{x_e : e\in C\}$ is achieved at least twice for each circuit $C$ of $M$. \label{def:trop} \end{mydef} A \emph{polyhedral fan} is a polyhedral complex whose elements are cones. The set $\trop{M}$ is the support of a polyhedral fan called the \emph{Bergman fan} of $M$. \begin{mydef} The \emph{Bergman fan} $\berg{M}$ of $M$ is the coarsest polyhedral fan supported on $\trop{M}$. \label{def:berg} \end{mydef} For each $x\in \R^E$, let $M_x$ be the set of \emph{$x$-maximal bases} of $M$, i.e. the bases $B$ that maximize the linear form $\sum_{e\in B} x_e$. \begin{prop} Two points $x,y\in \trop{M}$ belong to the same cone of $\berg{M}$ if and only if $M_x=M_y$. \label{prop:xmax} \end{prop} \subsection{Proof of Theorem \ref{thm:bergintro}} Assume that $G$ is loopless. We write $\trop{D} = \trop{M(D)}$, $\berg{D}=\berg{M(D)}$ and similarly for graphs. Let $C\subseteq V$ be a clique of $G$. We write $K_C$ for the subgraph induced by $C$, and $E(C)$ for its edge set. Given $x\in \R^E$ and $S\subseteq E$, let $x_S$ denote the restriction of $x$ to $S$. \begin{lem} Let $C$ be a clique of $G$. For any $x\in \trop{\g}$ and any $x_{E(C)}$-maximal spanning tree $T$ of $K_C$, there is an $x$-maximal spanning tree of $\g$ containing $T$. \begin{proof} Suppose without loss of generality that $\g$ is simple. We construct the desired tree with a greedy algorithm. Suppose that some number, possibly zero, of the edges of $G$ are colored red. Let $Z$ be a cycle of $\g$. If $Z$ contains a red edge, then do nothing. If $Z$ contains no red edges, then color an $x$-minimal edge of $Z$ red. This procedure is called the \emph{red rule}. Starting with all edges of $\g$ uncolored and applying the red rule to all cycles of $\g$ in any order yields an $x$-maximal spanning tree of $\g$ consisting of the uncolored edges \cite[Theorem 6.1]{tarjan1983}. Suppose that no edge in $Z$ is red, and that $Z$ contains exactly one edge in $E(C)$.
Since $x\in \trop{\g}$, there must be an $x$-minimal edge of $Z$ in $E\setminus E(C)$. When applying the red rule to such a cycle, we require that an edge in $E\setminus E(C)$ must be colored red. We call this the \emph{modified red rule}. Start with $\g$ uncolored. First apply the red rule to all cycles of $\g$ contained in $E\setminus E(C)$. Next, apply the modified red rule to all cycles of $\g$ containing exactly one edge in $E(C)$. Every cycle contained in $E\setminus E(C)$ now contains a red edge. Moreover there are no red edges in $E(C)$. Let $S$ be the set of uncolored edges in $E\setminus E(C)$. If $T$ is any $x$-maximal spanning tree of $K_C$, then it follows that $S\cup T$ is an $x$-maximal spanning tree of $\g$. \end{proof} \label{new:lem:treecomp} \end{lem} Let $U_C\subseteq \R^E$ denote the set of points that are constant on $E(C)$. Let \begin{equation*} \berg{G}_C = \{P\cap U_C : P\in \berg{G} \}. \end{equation*} In other words, $\berg{G}_C$ is the restriction of $\berg{G}$ to $U_C$. \begin{lem} Let $C$ be a clique of $G$. Every cone of $\berg{G}$ is either contained in $U_C$ or disjoint from $U_C$. In other words, $\berg{G}_C$ is a subfan of $\berg{G}$. \begin{proof} Let $x\in \trop{G}$. We have $x_{E(C)}\in\trop{K_C}$, since every cycle of $K_C$ is a cycle of $G$. Propositions \ref{prop:xmax} and \ref{prop:ardila} give correspondences between the cones of $\berg{K_C}$, combinatorial types of phylogenetic trees, and sets of spanning trees of $K_C$. Combining these with \eqref{eq:fform} yields the following equivalent statements, given a point $x\in \trop{G}$: \begin{enumerate}[(i)] \item $x\in U_C$ \item Every spanning tree of $K_C$ is $x_{E(C)}$-maximal \item The phylogenetic tree determined by $x_{E(C)}$ has no internal edges. \end{enumerate} Suppose that $x\in \trop{G}\cap U_C$ and $y\in \trop{G}\setminus U_C$. Statements (i) and (ii) above give a spanning tree $T$ of $K_C$ that is not $y_{E(C)}$-maximal. There is no $y$-maximal spanning tree of $G$ containing $T$. However, $T$ is $x_{E(C)}$-maximal, so Lemma \ref{new:lem:treecomp} gives an $x$-maximal spanning tree of $G$ containing $T$. It follows that $M(G)_x\neq M(G)_y$ in the notation of Proposition \ref{prop:xmax}, so $x$ and $y$ belong to different cones of $\berg{G}$. The result follows. \end{proof} \label{lem:subfan} \end{lem} Recall that $M(D)$ is simple if and only if $D$ is simple, i.e. if $G$ is simple and $\B$ induces an edgeless subgraph (Proposition \ref{prop:simp}). We reduce the proof of Theorem \ref{thm:bergintro} to the case where $D$ is simple. \begin{lem} It suffices to prove Theorem \ref{thm:bergintro} when $D$ is simple. \begin{proof} Let $P$ be a parallel class of a loopless matroid $M$. Each pair of distinct elements of $P$ forms a circuit. Any element of $\trop{M}$ is constant on such a pair, hence constant on $P$. Thus if $M'$ is the simplification of $M$, then omitting all but one coordinate from each parallel class gives an isomorphism \begin{equation} \berg{M}\to\berg{M'}. \label{eq:bergiso} \end{equation} Let $D$ be a network and $D'$ its simplification, obtained by deleting all duplicate edges and edges between boundary nodes. Let $H$ be the graph obtained from $D'$ by adding an edge between each pair of boundary nodes. Note that $H$ is also the simplification of $\mg$. From \eqref{eq:bergiso} we obtain isomorphisms \begin{equation} \berg{D}\to \berg{D'} \label{eq:iso1} \end{equation} and $\berg{\mg}\to \berg{H}$.
Applying Lemma \ref{lem:subfan}, the latter restricts to an isomorphism \begin{equation} \berg{\mg}_\B \to \berg{H}_\B. \label{eq:iso2} \end{equation} Theorem \ref{thm:bergintro} for simple networks gives an isomorphism \begin{equation} \berg{D'}\to\berg{H}_{\B}. \label{eq:iso3} \end{equation} Combining \eqref{eq:iso1}, \eqref{eq:iso2} and \eqref{eq:iso3} gives the desired isomorphism $\berg{D}\to\berg{\mg}_{\B}$. \end{proof} \label{lem:bergsimp} \end{lem} \begin{proof}[Proof of Theorem \ref{thm:bergintro}] Lemma \ref{lem:bergsimp} lets us assume that $D$ is simple. Recall that $\eo = E\cup \eh$ is the ground set of $M(D)$. We write the edge set of $\mg$ as $E(\mg)= E\cup E(\B)$. Let $f:\R^{\eo}\to \R^{E(\mg)}$ be given by \begin{equation*} f(x)_e = \begin{cases} x_e & \mbox{if } e\in E\\ x_{\eh} &\mbox{if } e\in E(\B). \end{cases} \end{equation*} Let $\trop{\mg}_{\B}\subseteq \trop{\mg}$ denote the set of points that are constant on $E(\B)$. Let $x\in \trop{D}$, and let $Z$ be a cycle of $\mg$. We claim that the minimum coordinate of $f(x)_Z$ is achieved at least twice, i.e. that $f(x)\in \trop{\mg}_{\B}$. We argue three cases: \begin{enumerate}[(i)] \item $Z\subseteq E$ \item $Z\subseteq E(\B)$ \item $Z$ is not contained in $E$ or $E(\B)$. \end{enumerate} In case (i), $Z$ is a circuit of $M(D)$, so the claim holds. In case (ii), $f(x)_Z$ is constant, so the claim holds. In case (iii), $Z\cap E$ is a union of crossings of $D$. In particular, there is a crossing $C\subseteq Z\cap E$ such that the minimum coordinate of $f(x)_Z$ occurs on $C$ or on $E(\B)$. Since $C\cup\eh$ is a circuit of $M(D)$, the minimum coordinate of $x_{C\cup \eh}$ occurs at least twice. This equals the minimum coordinate of $f(x)_Z$, so the claim follows in the final case. Clearly $f$ is linear and injective; thus with the claim proven, we have \begin{equation*} f(\trop{D}) = \trop{\mg}_{\B}. \end{equation*} Since $\berg{D}$ is the coarsest polyhedral fan supported on $\trop{D}$, the set $\{f(P) : P\in \berg{D}\}$ is the coarsest polyhedral fan supported on $\trop{\mg}_{\B}$. Lemma \ref{lem:subfan} says that $\berg{\mg}_\B$ is a subfan of $\berg{\mg}$, and it must be the coarsest polyhedral fan supported on $\trop{\mg}_{\B}$. It follows that $\berg{\mg}_{\B} = \{f(P) : P\in \berg{D}\}$, so $f$ is an isomorphism $\berg{D}\to \berg{\mg}_{\B}$ as desired. \end{proof} \subsection{Phylogenetic trees and discriminantal arrangements} \label{new:sec:phylo} Let $T$ be a rooted tree with labeled leaves and a real-valued function $\omega$ on its edges. Suppose that the root is not a leaf, and that no non-root vertex has degree 2. The \emph{distance} between distinct vertices $i$ and $j$ of $T$, denoted by $d_T(i,j)$, is the (possibly negative) sum of $\omega(e)$ over the edges $e$ in the unique path between $i$ and $j$. An edge of $T$ is \emph{internal} if it is not incident to a leaf. \begin{mydef} The pair $(T,\omega)$ is called a \emph{phylogenetic tree} if the following hold: \begin{enumerate}[(i)] \item The distance between the root and any leaf is the same \item $\omega(e)>0$ for every internal edge $e$. \end{enumerate} A phylogenetic tree with $n$ leaves is called a \emph{phylogenetic $n$-tree}. \end{mydef} The vertices of a phylogenetic tree form a poset in which the root is the unique minimal element and the leaves are the maximal elements. If two vertices $i$ and $j$ are adjacent with $i\leq j$, then $j$ is the \emph{child} of $i$. A phylogenetic tree is \emph{binary} if every non-leaf vertex has exactly two children.
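To fix ideas, here is a small computational sketch of these definitions. The tree, edge weights and vertex names below are our own toy choices rather than an example from this paper; we only assume that the weighted tree is stored as a NetworkX graph. The script checks conditions (i) and (ii) and tabulates the leaf distances $d_T(i,j)$, which reappear in \eqref{eq:fform} below.
\begin{verbatim}
import networkx as nx

# Toy phylogenetic tree: root r, one internal vertex a, leaves 1, 2, 3.
T = nx.Graph()
T.add_weighted_edges_from([('r', 'a', 1.0), ('a', '1', 2.0),
                           ('a', '2', 2.0), ('r', '3', 3.0)])
root, leaves = 'r', ['1', '2', '3']

# In a tree the path between two vertices is unique, so (with positive
# weights) shortest-path lengths are exactly the distances d_T(i, j).
dist = dict(nx.all_pairs_dijkstra_path_length(T))

# Condition (i): every leaf is at the same distance from the root.
print({dist[root][l] for l in leaves})                  # {3.0}

# Condition (ii): positive weight on every internal edge (here only r--a).
internal = [(u, v) for u, v in T.edges
            if u not in leaves and v not in leaves]
print(all(T[u][v]['weight'] > 0 for u, v in internal))  # True

# Leaf-to-leaf distances d_T(i, j).
print({(i, j): dist[i][j] for i in leaves for j in leaves if i < j})
\end{verbatim}
All weights in this toy tree are positive; in general the pendant weights may be negative, in which case the distances should be computed by summing the weights along the unique paths directly.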
The \emph{most recent common ancestor} of two vertices $i$ and $j$ is their infimum. The \emph{combinatorial type} of a phylogenetic tree $(T,\omega)$ is simply the tree $T$ along with its root and leaf labeling. There is a topological space $\mathcal{T}_n$, introduced in \cite{billera2001}, that realizes the space of phylogenetic $n$-trees and supports a polyhedral fan. We let $\mathcal{T}_n$ denote both the polyhedral fan and the underlying topological space. To construct $\mathcal{T}_n$, first take one $(n-2)$-dimensional orthant for each combinatorial type of binary phylogenetic $n$-trees. These orthants are the maximal cones of $\mathcal{T}_n$. The facets of each orthant correspond to the internal edges of the binary tree; a point in a facet represents a tree in which the corresponding edge has been contracted. Glue the facets of any 2 orthants together when they represent the same combinatorial type of tree. The lower-dimensional faces represent further contractions; glue them together whenever they represent the same combinatorial type of tree. The common vertex of the orthants represents the tree with no internal edges. See Figure \ref{fig:t3space} for an illustration of $\mathcal{T}_3$. \begin{figure} \caption{The polyhedral fan $\mathcal{T}_3$.} \label{fig:t3space} \end{figure} The cones of $\mathcal{T}_n$ correspond to the combinatorial types of phylogenetic $n$-trees. The product $\mathcal{T}_n\times \R$ supports a polyhedral fan inherited from $\mathcal{T}_n$; the factor $\R$ keeps track of the distance between the root and leaves, which is not fixed in the construction of $\mathcal{T}_n$. \begin{prop}[{\cite[Proposition 3]{ardila2006}}] There is an isomorphism of polyhedral fans \begin{equation} f:\mathcal{T}_n\times \R\to \berg{K_n}. \label{eq:f} \end{equation} \label{prop:ardila} \end{prop} If we identify the leaves of a phylogenetic $n$-tree $(T,\omega)$ with the vertices of $K_n$, then the coordinates of $\trop{K_n}$ are indexed by pairs of leaves, and the piecewise-linear homeomorphism in \eqref{eq:f} is easy to describe: \begin{equation} f(T,\omega)_{ij}=d_T(i,j) \label{eq:fform} \end{equation} for all leaves $i$ and $j$ of $T$. We will use this to obtain a version of Proposition \ref{prop:ardila} for ``complete'' networks. \begin{mydef} A set $S$ of leaves of $T$ is \emph{equidistant} if every pair of leaves in $S$ has the same most recent common ancestor. In other words, $S$ is equidistant if $d_T(i,j)=d_T(i,k)$ for all $i,j,k\in S$. \end{mydef} \begin{eg} Consider the phylogenetic trees illustrated in Figure \ref{fig:equiphy}. Both trees have equidistant 3-sets (i.e., equidistant sets $S$ with $|S|=3$) marked in white. \end{eg} \begin{figure} \caption{Two phylogenetic trees with equidistant 3-sets marked in white.} \label{fig:equiphy} \end{figure} We consider phylogenetic trees with a prescribed equidistant $m$-set. Up to permutation of coordinates, this space depends only on the size of the equidistant set. \begin{mydef} Let $\mathcal{T}_{m,n}\times\R$ denote the subfan of $\mathcal{T}_{m+n}\times \R$ of phylogenetic $(m+n)$-trees with a prescribed equidistant $m$-set. \end{mydef} Let $D=D_{m,n}$ as in Example \ref{eg:dmn}, so that $\mg=K_{m+n}$. Restricting the map \begin{equation*} f:\mathcal{T}_{m+n}\times \R\to \trop{\mg} \end{equation*} from \eqref{eq:f} to the set of phylogenetic $(m+n)$-trees with $\B$ equidistant, we obtain: \begin{prop} There is an isomorphism of polyhedral fans \begin{equation} \mathcal{T}_{m,n}\times\R\to \berg{D_{m,n}}. 
\end{equation} \label{prop:newphylo} \end{prop} \subsection{A minimal tropical basis} For any set $S\subseteq E$, let $V(S)$ be the set of points $x\in \R^E$ such that the minimum coordinate of $x_S$ is achieved at least twice. This set is the \emph{tropical hyperplane} defined by $S$. For any set $\mathcal{S}$ of subsets of $E$, let $V(\mathcal{S})=\bigcap_{S\in \mathcal{S}} V(S)$. For example, we have $\trop{M} = V(\mathcal{C})$, where $\mathcal{C}$ is the set of circuits of $M$. \begin{mydef} A set $\mathcal{S}$ of subsets of $E$ is a \emph{tropical basis} of $M$ if $V(\mathcal{S})=\trop{M}$. \end{mydef} In \cite{yu2007} the problem of computing a minimal tropical basis of a given matroid was posed, and it was shown that graphic matroids admit a unique minimal tropical basis. We compute a minimal tropical basis of $M(D)$. To \emph{paste} two sets is to take their symmetric difference. Since the symmetric difference operation is commutative and associative, we can paste any finite number of sets. \begin{lem} Let $T\subset E$, and let $\mathcal{S}$ be a set of subsets of $E$. If $T$ is obtained by pasting elements of $\mathcal{S}$, then $V(\mathcal{S})\subseteq V(T)$. \begin{proof} Suppose that $T$ is obtained by pasting elements $S_1$ and $S_2$ of $\mathcal{S}$. Let $x\in V(\mathcal{S})$. Suppose without loss of generality that $\min\{x_e :e\in S_1\}\leq \min \{x_e : e\in S_2\}$. If $\min\{x_e :e\in S_1\}$ is achieved on $S_1\cap S_2$, then it equals $\min\{x_e :e\in S_2\}$, so $\min\{x_e : e\in T\}$ is achieved at least once on each of $S_1$ and $S_2$. If not, then $\min\{x_e : e\in T\}$ is achieved at least twice on $S_1$. In either case we have $x\in V(T)$. \end{proof} \label{lem:pasting} \end{lem} A \emph{chord} of a circuit $C$ is any element $i$ such that there exist circuits $C_1$ and $C_2$ with $C_1\cap C_2 = i$ and $C_1\triangle C_2 = C$. Recall the types of circuits of $M(D)$ from Proposition \ref{prop:circuits}. A chord of a circuit $X\cup \eh$ of type (A) is any edge in $E\setminus X$ joining two vertices met by $X$. A chord of a circuit $C$ of type (B) is any edge in $E\setminus C$ joining two vertices met by $C$. \begin{prop} If $M(D)$ is simple, then there is a minimal tropical basis of $M(D)$ consisting of all chordless circuits of types (A) and (B) in Proposition \ref{prop:circuits}. \begin{proof} Suppose that a circuit $C\subseteq E$ of $M(D)$ of type (B) admits a chord $j$. Then there is a set $F\subseteq C$ such that $F\cup j$ and $(C\setminus F)\cup j$ are circuits of type (B). Pasting these two circuits yields $C$. Iterating this argument gives $C$ as a pasting of chordless circuits of type (B). Let $\overline{X}=X\cup \eh$ be a circuit of type (A) for some $X\subseteq E$. If $i$ is a chord of $\overline{X}$, then $X\cup i$ contains a single circuit $Z$ of type (B), and the set $(X\setminus Z)\cup i$ is a crossing, so $((X\setminus Z)\cup i)\cup \eh$ is a circuit of type (A). Pasting $Z$ and $((X\setminus Z)\cup i)\cup \eh$ yields $\overline{X}$. Iterating this argument and the argument from the first paragraph gives $\overline{X}$ as a pasting of chordless circuits of types (A) and (B). Let $\mathcal{B}$ be the set of all chordless circuits of types (A) and (B). Let $Y$ be a circuit of type (C). We claim that $V(\mathcal{B}) \subseteq V(Y)$. Let $C_1$ and $C_2$ be distinct crossings contained in $Y$, so that $Y=C_1\cup C_2$. Let $x \in V(\mathcal{B})$. Suppose without loss of generality that $\min\{x_e : e \in Y \cup \eh\}$ occurs on $C_1\cup\eh$.
If $C_1$ and $C_2$ are disjoint and $x_{\eh}=\min\{x_e : e \in Y\}$, then this minimum is achieved at least once on each of $C_1$ and $C_2$. If $C_1$ and $C_2$ are disjoint and $x_{\eh} \neq \min\{x_e : e \in Y\}$, then this minimum is achieved at least twice on $C_1$. Therefore $x \in V(Y)$ in this case. Suppose now that $C_1$ and $C_2$ are not disjoint, so that $Y$ contains a third crossing $C_3$. If $\min\{x_e : e \in C_2\} > \min\{x_e : e \in C_1 \cup \eh\}$, then the latter must occur twice on $C_1$; otherwise, $\min\{x_e : e\in C_2\cup\eh\}$ occurs only once, on $\eh$, contradicting $x\in V(\mathcal{B})$. Hence $\min\{x_e : e\in Y\}$ occurs twice, on $C_1$. If $\min\{x_e : e \in C_2\}=\min\{x_e : e \in C_1 \cup \eh\}$ and these are both equal to $x_{\eh}$, then $\min\{x_e:e\in C_3\cup\eh\}$ occurs only once, on $\eh$, a contradiction. Hence if $\min\{x_e : e\in C_2\}=\min\{x_e:e\in C_1\cup\eh\}$, then these minima are less than $x_{\eh}$, and the minimum $\min\{x_e : e\in Y\}$ occurs at least 3 times. Therefore $x\in V(Y)$ again, proving the claim. It follows that $\mathcal{B}$ is a tropical basis of $M(D)$. We show that $\mathcal{B}$ is minimal. Suppose that the circuit $C$ from above is chordless. For some $e\in C$, let $y \in \R^{\eo}$ be 1 on $C\setminus e$ and 0 on the rest of $\eo$. Any circuit in $\mathcal{B} \setminus C$ must contain at least two elements of $\eo$ not in $C$. The point $y$ achieves its minimum at least twice on such a circuit. Hence $y \in V(\mathcal{B}\setminus C)\setminus V(\mathcal{B})$, proving that $\mathcal{B}\setminus C$ is not a tropical basis. Suppose now that the circuit $\overline{X}$ from above is chordless. Let $z\in \R^{\eo}$ be 1 on $X$ and 0 on $\eo\setminus X$. Any circuit of type (A) in $\mathcal{B}\setminus \overline{X}$ must contain $\eh$ and at least one edge in $E\setminus X$. Any circuit of type (B) in $\mathcal{B}\setminus \overline{X}$ must contain at least two elements of $E\setminus X$. The point $z$ achieves its minimum at least twice on any such circuit. Hence $\mathcal{B}\setminus \overline{X}$ is not a tropical basis, proving that $\mathcal{B}$ is minimal. \end{proof} \label{prop:mintrop} \end{prop} \section{Response matrices and the half-plane property} \label{sec:hpp} In this section, we show that every Dirichlet matroid has the half-plane property and use this to prove Theorem \ref{thm:realintro}. Let $\mathcal{S}$ be a finite set of finite sets. The \emph{generating polynomial} of $\mathcal{S}$ is the sum of the monomials $\prod_{e\in S} x_e$ over all $S\in \mathcal{S}$. The \emph{basis generating polynomial} of a matroid $M$ is the generating polynomial of the set of bases of $M$. Let $\Re$ and $\Im$ denote the real and imaginary part operators, respectively. Write \begin{equation*} \R_+^n=(0,\infty)^n. \end{equation*} A polynomial $f\in \C[x_1,\ldots,x_n]$ is \emph{stable} if $f$ has no zeros $\xx$ with $\Im(\xx)\in\R^n_+$. \begin{mydef} A matroid $M$ is \emph{HPP} (short for \emph{half-plane property}) if the basis generating polynomial of $M$ is stable. \label{def:hpp} \end{mydef} For a list of known HPP and non-HPP matroids, see \cite{dleon2009}. The next fact, part of the ``folklore'' of electrical engineering, describes a fundamental family of examples \cite[p. 4]{wagner2005}. \begin{prop} Every graphic matroid is HPP. \label{prop:graphstable} \end{prop} \subsection{Laplacian and response matrices} The \emph{admittance} of an edge $e\in E$ is a complex number that measures how easily current may flow through $e$.
In this section, we fix admittances $\xx\in \C^E$. We assume that $D$ is simple, since parallel edges can be treated as a single edge whose admittance is the sum of the individual admittances, and since current does not flow through loops or edges between boundary nodes. Let $L=L(\xx)$ denote the $V\times V$ \emph{weighted Laplacian matrix}, given by \begin{equation*}L_{ij}=\begin{cases}\sum_{k\sim i} x_{ik}&\mbox{if }i=j\\ -x_{ij}&\mbox{if }i\sim j\\ 0&\mbox{else}.\end{cases}\end{equation*} Write $L$ in block form as \begin{equation} L= \begin{bmatrix} A & B\\ B^T & C \end{bmatrix}, \label{eq:lform} \end{equation} where the rows and columns of $A$ are indexed by $\B$. If $C$ is invertible, then the \emph{response matrix} of $D$ is the Schur complement $L/C$, given by \begin{equation} \rs=A-BC^{-1}B^T. \end{equation} If voltages $u\in \R^\B$ are applied to the boundary of $D$, e.g. by attaching batteries, then $\rs u$ is the vector of resulting currents at the boundary nodes. \begin{lem} Suppose that $C$ is invertible. Let $u\in \R^\B$, $v=-C^{-1}B^Tu$ and $\phi=\rs u$. We have \begin{equation} \begin{bmatrix} A & B\\ B^T & C \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \phi \\ 0 \end{bmatrix}. \label{eq:block} \end{equation} \label{lem:harm} \begin{proof} This is a direct computation. \end{proof} \end{lem} \begin{lem} If $\Re(\xx)\in \R^E_+$, then $\Re(\rs)$ is positive semidefinite. \begin{proof} Suppose that $\Re(\xx)\in \R^E_+$, and let $u\in \R^\B$. Let $f\in \C^V$ be the column vector on the left side of \eqref{eq:block}. Order the boundary nodes $1,\ldots,m$ and the interior vertices $m+1,\ldots,d$. We have \begin{align*} u^T \Re(\rs)u &= \sum_{i,j=1}^m u_i \Re(\rs)_{ij} u_j \\ &= \sum_{i,j=1}^m \Re(u_i \overline{\rs_{ij}u_j})\\ &= \sum_{i=1}^m \Re(u_i \overline{[\rs u]_i}). \label{eq:innerprod} \end{align*} Lemma \ref{lem:harm} implies that $Lf|_{\B}=\rs u$ and $Lf|_{V\setminus \B}=0$, so \begin{equation*} \sum_{i=1}^m \Re(u_i \overline{[\rs u]_i}) = \sum_{i=1}^d \Re(f_i \overline{[Lf]_i}). \end{equation*} Write $x_{ij} = 0$ for all non-adjacent $i,j\in V$. Direct computation gives \begin{equation*} [Lf]_i = \sum_{j=1}^d x_{ij} (f_i-f_j), \end{equation*} so \begin{align*} \sum_{i=1}^d \Re(f_i \overline{[Lf]_i}) &= \sum_{i,j=1}^d \Re(f_i \overline{x_{ij}(f_i-f_j)})\\ &=\sum_{1\leq i< j\leq d} \Re(f_i \overline{x_{ij}(f_i-f_j)}) +\sum_{1\leq j<i\leq d} \Re(f_i \overline{x_{ij}(f_i-f_j)})\\ &=\sum_{1\leq i< j\leq d} \Re(f_i \overline{x_{ij}(f_i-f_j)}) -\sum_{1\leq i<j\leq d} \Re(f_j \overline{x_{ij}(f_i-f_j)})\\ &=\sum_{1\leq i< j\leq d} \Re((f_i-f_j) \overline{x_{ij}(f_i-f_j)})\\ &=\sum_{1\leq i<j \leq d} \Re(x_{ij})|f_i - f_j|^2 \end{align*} is nonnegative. The result follows. \end{proof} \label{lem:psd} \end{lem} \subsection{Basis generating polynomials} We establish formulas for the basis generating polynomial of $M(D)$ and use them to prove that every Dirichlet matroid has the half-plane property. This can also be proven using the results of Appendix \ref{sec:printrun}, but the connection to the response matrix is lost. Let $\pb$ denote the basis generating polynomial of $M(D)$. For $i=0,1$, let $P_i$ denote the generating polynomial of the set $\Sigma_i$ from Definition \ref{def:suncscro}. Proposition \ref{prop:bases} implies that \begin{equation} \pb(\xx,\xh )= \pcro(\xx)+\xh \punc(\xx) \label{eq:pb1} \end{equation} for all $(\xx,\xh)\in \C^E\times \C$, where $\xh$ is the variable corresponding to $\eh$.
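Before relating $\pb$ to the response matrix, here is a small numerical sketch of the constructions above. The network (a star with three boundary nodes) and the admittance values are our own illustrative choices; the script builds the weighted Laplacian, extracts the block form \eqref{eq:lform}, forms the Schur complement $\rs=A-BC^{-1}B^T$, and checks that $\rs$ is symmetric with zero row sums, two facts used below.
\begin{verbatim}
import numpy as np

# Star network: boundary nodes 1, 2, 3 attached to one interior vertex,
# with real edge admittances x1, x2, x3 (illustrative values).
x = np.array([2.0, 3.0, 5.0])

# Weighted Laplacian, rows/columns ordered as (boundary 1, 2, 3 | interior).
L = np.zeros((4, 4))
for i, xi in enumerate(x):
    L[i, i] += xi          # diagonal: sum of admittances at each vertex
    L[3, 3] += xi
    L[i, 3] -= xi          # off-diagonal: -x_ij for adjacent vertices
    L[3, i] -= xi

A, B, C = L[:3, :3], L[:3, 3:], L[3:, 3:]     # block form
Lam = A - B @ np.linalg.inv(C) @ B.T          # response matrix (Schur complement)

print(np.allclose(Lam, Lam.T))                # symmetric
print(np.allclose(Lam.sum(axis=1), 0.0))      # zero row sums
# Trace of the response matrix; for a star network this equals
# (sum_{i != j} x_i x_j) / (sum_i x_i), cf. the example at the end of this section.
print(np.isclose(np.trace(Lam),
                 (x.sum() ** 2 - (x ** 2).sum()) / x.sum()))
\end{verbatim}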
Let $\tr\rs$ denote the trace of the response matrix $\rs$. \begin{lem} For all $(\xx,\xh )\in \C^E\times \C$ with $\Im(\xx)\in \R^E_+$, the basis generating polynomial of $M(D)$ is given by \begin{equation} \pb(\xx,\xh )=\punc(\xx)\left(\xh +\frac{1}{2}\tr\rs\right). \end{equation} \begin{proof} For all distinct boundary nodes $i$ and $j$ let \begin{equation*} \sij = \{ F \in \scro : F \mbox{ contains a path from } i \mbox{ to } j \}. \end{equation*} The matrix-tree theorem for principal minors (see e.g. \cite{chaiken1982}) gives $\det C = \punc$, where $C$ is defined in \eqref{eq:lform}. Note that $\punc$ is the basis generating polynomial of $M(\og)$, where $\og$ is the graph obtained by identifying all boundary nodes as a single vertex. Proposition \ref{prop:graphstable} implies that $\rs$ is well defined whenever $\Im(\xx)\in \R^E_+$. Thus if $\Im(\xx)\in \R^E_+$, then for all $i\neq j$ we have \begin{equation} -\rs_{ij}=\frac{P_{ij}}{\punc}, \label{eq:rs1} \end{equation} where $P_{ij}$ is the generating polynomial of $\sij$ (see e.g. \cite[Proposition 2.8]{kenyon2011}). It is not hard to see that $\rs$ is symmetric, and that every row sum of $\rs$ is zero \cite[p. 3]{curtis2000}. Since $\pcro = \frac{1}{2}\sum_{i\neq j} P_{ij}$, we deduce that $\sum_{i\neq j}\rs_{ij}=-\tr\rs$. The result now follows from \eqref{eq:pb1} and \eqref{eq:rs1}. \end{proof} \label{lem:pbfor} \end{lem} \begin{prop} Every Dirichlet matroid has the half-plane property. \begin{proof} Let $(\xx,\xh)\in \C^E\times \C$ with $\Re(\xx)\in \R^E_+$ and $\Re(\xh)>0$. Since $\pb$ is homogeneous, it suffices to show that $\pb(\xx,\xh)\neq 0$. Since $\punc$ is the basis generating polynomial of $M(\og)$, Proposition \ref{prop:graphstable} implies that $\punc(\xx)\neq 0$. Thus by Lemma \ref{lem:pbfor} it suffices to show that $\Re(\tr\rs(\xx))\geq 0$ whenever $\Re(\xx)\in \R_+^n$. This follows from Lemma \ref{lem:psd}. \end{proof} \label{prop:hpp} \end{prop} \begin{remark} Given an edge $e\in E$, the quantities $\Re(\xx_e)$ and $\Im(\xx_e)$ are called the \emph{conductance} and \emph{susceptance} of $e$, respectively. Each edge represents an ideal electrical component whose type can determine the signs of its conductance and susceptance. For example, a \emph{resistor} has positive conductance and zero susceptance. A \emph{capacitor} (resp., \emph{inductor}) has zero conductance and positive (resp., negative) susceptance. \end{remark} \subsection{Interlacing zeros and proof of Theorem \ref{thm:realintro}} For $1\leq i\leq n$, the \emph{Wronskian} with respect to $x_i$ is the bilinear map $W_{x_i}$ on $\R[x_1,\ldots,x_n]$ given by \begin{equation} W_{x_i}(f,g) = f \cdot \partial_i g - \partial_i f\cdot g. \end{equation} If $\xx,y\in \R^n$, then $W_t(f(\xx+t\yy),g(\xx+t\yy))$ is a univariate polynomial in $t$. Two polynomials $f,g\in \R[x_1,\ldots,x_n]$ are in \emph{proper position}, written $f\ll g$, if for all $(\xx,\yy)\in \R^n\times\R^n_+$ we have \begin{equation} W_t(f(\xx+t\yy),g(\xx+t\yy))(t) \geq 0 \end{equation} for all $t\in \R$. For technical reasons we also declare that $0\ll f$ and $f\ll 0$ for all $f$. If $f\ll g$, then for any $(\xx,\yy)\in \R^n\times\R^n_+$ the real zeros of $f(\xx+t\yy)$ and $g(\xx+t\yy)$ \emph{interlace} in the following sense (see \cite{branden2007}). 
\begin{mydef} Two finite sets $S,T\subseteq \R$ whose cardinalities differ by at most 1 are said to \emph{interlace} if the elements $s_i$ of $S$ and $t_i$ of $T$ can be indexed so that either $s_1\leq t_1\leq s_2\leq t_2\leq \cdots$ or $t_1\leq s_1\leq t_2\leq s_2\leq \cdots$. More generally, two finite subsets of a line $L\subseteq \R^n$ \emph{interlace} if their images interlace under any homeomorphism $L\to \R$. \end{mydef} \begin{prop}[{\cite[Corollary 5.5]{branden2007}}] Let $f,g\in \R[x_1,\ldots,x_n]$. We have $g\ll f$ if and only if $f+\xh g\in \R[x_0,\ldots,x_n]$ is stable. \label{prop:branpp} \end{prop} \begin{proof}[Proof of Theorem \ref{thm:realintro}] This follows from \eqref{eq:pb1}, Proposition \ref{prop:hpp}, and Proposition \ref{prop:branpp}. \end{proof} \begin{eg} Let $D$ be a star network, and write $m=|\B|$. Writing the conductances as $\mathbf{x}=(x_1,\ldots,x_m)$, we have \begin{equation*} \tr\Lambda = \frac{\sum_{i\neq j} x_ix_j}{\sum_i x_i}. \end{equation*} Every line in $\R^m$ with positive direction vector meets the zero set of $\tr\Lambda$ in exactly 2 points (counted with multiplicity), with one pole in between. For example, if $\mathbf{x}=(-1,\ldots,-1,m-1)$ and $\mathbf{y}=(1,\ldots,1)$, then the zeros occur at $t=\pm 1$, and the pole occurs at $t=0$. When $m=2$, the set of zeros is the union $x_1x_2=0$ of the coordinate axes, and the set of poles is the line $x_1+x_2=0$. \end{eg} \section{Characteristic polynomials and graph colorings} \label{sec:char} We prove Theorem \ref{thm:cpintro} after discussing the precoloring polynomial. This polynomial has been studied in connection with various combinatorial objects and games \cite{herzberg2007, lutz2019hyp, stanley2015}. It is also the fundamental object of the precoloring extension problem \cite{biro1992}. We assume that $D$ is simple; the proofs in the general case are essentially unchanged. \subsection{Results from hyperplane arrangements} Given a matroid or hyperplane arrangement $M$, write $\cp{M}$ for the characteristic polynomial of $M$. If $M$ is a matroid, then write $\rcp{M}$ for the \emph{reduced characteristic polynomial} of $M$, given by \begin{equation*} \rcp{M}(\l) = (\l-1)^{-1}\cp{M}(\l). \end{equation*} Assume that $\g$ is loopless, and write $\cp{\g}$ for the chromatic polynomial of $\g$. We have \begin{equation} \cp{\g}(\l)= \l \cp{M(\g)}(\l). \label{eq:chromdef} \end{equation} \begin{mydef} The \emph{precoloring polynomial} of $D$ is the reduced characteristic polynomial of $M(D)$: \begin{equation} \cp{D} = \rcp{M(D)} \end{equation} \end{mydef} If $k\geq |\B|$ is an integer, then $\cp{D}(k)$ is the number of ways to extend a given injective $k$-coloring of $\B$ to a proper $k$-coloring of $\g$. \begin{prop}[{\cite[Proposition 3.9]{lutz2019hyp}}] We have \begin{equation} \cp{D}(\l)=(\l)_m^{-1}\cp{\mg}(\l), \label{eq:cpformula} \end{equation} where $(\l)_m=\l(\l-1)\cdots(\l-m+1)$ is a falling factorial. \end{prop} \begin{eg} Suppose that $D=D_{m,n}$ as in Example \ref{eg:dmn}. Here $\mg = K_{m+n}$ is complete, so that $\cp{\mg}(\l)=(\l)_{m+n}$. The formula \eqref{eq:cpformula} gives \begin{equation*} \cp{D}(\l) = (\l-m)_n. \end{equation*} \end{eg} \begin{eg} Let $D$ be the network whose interior vertices form a cycle and whose boundary nodes are pendants, with each interior vertex adjacent to exactly 1 boundary node. The case $m=6$ is illustrated in Figure \ref{fig:hexnet}. We propose the following closed form and recurrence relation for $\cp{D}$, which we have verified for $m\leq 11$. Write $\cp{m} = \cp{D}$.
\begin{conj} For $m\geq 3$ we have \begin{equation} \cp{m}(\l+1) = \frac{\omega_+^m + \omega_-^m}{2^m} + (-1)^m(\l-m-1), \end{equation} where $\omega_\pm = \l-2\pm\sqrt{\l^2+4}$. In particular, $\cp{m}$ satisfies the recurrence \begin{equation} \begin{aligned} \cp{3}(\l)&=\l^3 - 6\l^2 + 14\l - 13\\ \cp{4}(\l)&=\l^4 - 8\l^3 + 28\l^2 - 51\l + 41\\ \cp{m+2}(\l)&= (\l-1)\cp{m+1}(\l)+(\l+1)\cp{m}(\l)+(-1)^m(-2\l+m-1). \end{aligned} \end{equation} \end{conj} \begin{figure} \caption{A network with boundary nodes marked in white.} \label{fig:hexnet} \end{figure} \label{eg:hexwheel} \end{eg} \subsection{Proof of Theorem \ref{thm:cpintro}} \label{sec:bcc} Let $M$ be a matroid on $E$. With respect to a given ordering of $E$, a \emph{broken circuit} of $M$ is a set $C\setminus \min (C)$, where $C$ is a circuit of $M$. The \emph{broken circuit complex} of $M$ is the abstract simplicial complex \begin{equation} \bc(M)=\{S\subseteq E : S\mbox{ contains no broken circuit of }M\}. \end{equation} The $f$-vector of $\bc(M)$ does not depend on the ordering of $E$: \begin{prop}[{\cite[Theorem 4.12]{stanley2007}}] With respect to any ordering of $E$, the number of $i$-element sets in $\bc(M)$ is $(-1)^i$ times the coefficient of $\l^{\rk(M)-i}$ in $\cp{M}(\l)$. \label{prop:broken} \end{prop} We also consider the \emph{reduced broken circuit complex} $\rbc(M)$, obtained from $\bc(M)$ by deleting the minimal element of $E$ and all faces containing it. Every facet of $\bc(M)$ contains the minimal element of $E$, so $\bc(M)$ is easily recovered from $\rbc(M)$. \begin{cor} With respect to any ordering of $E$, the number of $i$-element sets in $\rbc(M)$ is $(-1)^i$ times the coefficient of $\l^{\rk(M)-i-1}$ in $\rcp{M}(\l)$. \label{cor:rbroken} \end{cor} Recall the description of circuits of $M(D)$ from Proposition \ref{prop:circuits}. Note that the circuits of type (C) come in two flavors: one contains 3 distinct crossings, while the other contains only 2. These are illustrated in Figure \ref{fig:circuits}. Circuits of type (C) containing only 2 distinct crossings are either disconnected, as pictured, or connected with both crossings meeting at a single boundary node. \begin{figure} \caption{Two circuits of type (C) in Proposition \ref{prop:circuits}.} \label{fig:circuits} \end{figure} \begin{lem} Fix an ordering of $E$, and extend this ordering to $\eo$ by taking $\eh$ to be minimal. The reduced broken circuit complex $\rbc(M(D))$ is a subcomplex of $\bc(M(\g))$: \begin{equation} \rbc(M(D)) \subseteq \bc(M(\g)). \end{equation} \begin{proof} Let $S\in \rbc(M(D))$, so that $S\subseteq E$ contains no broken circuit of $M(D)$. A cycle of $\g$ is a circuit of $M(D)$ of type (B) if it meets at most 1 boundary node, or type (C) if it meets exactly 2 boundary nodes. If a cycle $C$ of $\g$ meets 3 or more boundary nodes, then every element of $C$ is contained in a circuit $Y\subseteq C$ of type (C). Any broken circuit of $M(\g)$ is a cycle of $\g$ minus its minimal element. Thus $S$ contains no broken circuit of $M(\g)$. A broken circuit of $M(D)$ arising from a type (A) circuit is a crossing. Hence $S$ contains no crossing. Now suppose instead that $S\in \bc(M(\g))$ contains no crossing. Since $S$ contains no broken circuit of $M(\g)$, it contains no broken circuit of $M(D)$ arising from a type (B) circuit. Since $S$ contains no crossing, it contains no broken circuit of $M(D)$ arising from a circuit of type (A) or (C). Hence $S$ contains no broken circuit of $M(D)$.
\end{proof} \label{lem:rbc} \end{lem} \begin{proof}[{Proof of Theorem \ref{thm:cpintro}}] This follows from Proposition \ref{prop:broken}, Corollary \ref{cor:rbroken} and Lemma \ref{lem:rbc}. \end{proof} \begin{eg} If $\g$ is a path graph and $\B$ consists of the terminal vertices, then the only path between boundary vertices has $n+1$ edges. The last part of Theorem \ref{thm:cpintro} says that we can simply read off the coefficients $a_i$ of $\cp{\g}$ as the coefficients $b_i$ of $\cp{D}$. \end{eg} \appendix \section{Dirichlet matroids and biased graphs} \label{sec:appa} A biased graph is a graph with a distinguished set of cycles satisfying a certain linearity condition. There are several ways to form a matroid from the distinguished cycles. We will focus on one of these (the \emph{complete lift matroid}) and apply it to a certain class of biased graphs (the \emph{almost balanced} ones). We assume that $D$ is simple to reduce headache. \subsection{Background} A \emph{theta graph} is a graph consisting of 2 ``terminal'' vertices and 3 internally vertex-disjoint paths between the terminals. In other words, a theta graph resembles the symbol $\theta$ (see Figure \ref{fig:thetagraph}). A set $\BB$ of cycles of $\g$ is a \emph{linear subclass} of $\g$ if, for any 2 distinct cycles in $\BB$ belonging to a theta subgraph $H$ of $\g$, the third cycle of $H$ also belongs to $\BB$. \begin{figure} \caption{A theta graph.} \label{fig:thetagraph} \end{figure} A \emph{biased graph} is a pair $\bg=(\g,\BB)$ where $\BB$ is a linear subclass of $\g$. The cycles in $\BB$ are called \emph{balanced}; all others are \emph{unbalanced}. An edge set or subgraph $X$ is \emph{balanced} if every cycle of $\g$ contained in $X$ is balanced; otherwise $X$ is \emph{unbalanced}. There are three matroids typically associated to a biased graph $\bg$ \cite{zaslavsky1991}. We are interested in the \emph{complete lift matroid} $L_0(\bg)$ of $\bg$. This is the matroid on $\eo = E\cup \eh$ with rank function given by \begin{equation} \rk_{L_0(\bg)}(X) = \begin{cases} |V| - c(X) & \mbox{if } X \subseteq E \mbox{ is balanced}\\ |V| - c(X) + 1 & \mbox{if } X \subseteq E \mbox{ is unbalanced or } \eh \in X, \end{cases} \end{equation} where $c(X)$ is the number of components of the graph $(V,X)$. \subsection{Bias representation} \label{sec:bias} For any $S\subseteq E$, let $\bg/S=(\g/S, \BB/S)$, where $\g/S$ is obtained from $\g$ by contracting $S$ and \begin{multline} \BB/S = \{C \in \BB : C \subseteq E \setminus S \mbox{ and } C \mbox{ is a cycle of } \g/S\}\\ \cup \{C \setminus S : C\in \BB \mbox{ and } C \cap S \mbox{ is a simple path} \}. \end{multline} Let $\mg$ be the graph obtained from $\g$ by adding an edge between each pair of boundary nodes, and let $\me$ be the set of added edges. We associate to $D$ the biased graph \begin{equation*} \bg(D) = \bg(\mg)/\me \end{equation*} and denote the underlying graph by $\og$. A cycle $C$ of $\bg(D)$ is unbalanced if and only if $C$ is a crossing of $D$. We write \begin{equation*} L_0(D)=L_0(\bg(D)). \end{equation*} \begin{eg} Recall the network $D$ from Example \ref{eg:hexwheel}. The case $|\B|=6$ is illustrated on the left-hand side of Figure \ref{fig:hexwheel}. In this example there is only one balanced cycle of $\bg(D)$, and it is the unique cycle of $\g$. 
\begin{figure} \caption{Left to right: a network $D$ with boundary nodes marked in white, the graph $\mg$, and the graph $\og$.} \label{fig:hexwheel} \end{figure} \end{eg} There is a characterization of the biased graphs $\bg(D)$ due to \cite{zaslavsky1987}. For $i\in V$ let $\bg\setminus i$ be the biased graph obtained by deleting $i$ and all edges incident to $i$. If $\bg$ is unbalanced but $\bg\setminus i$ is balanced, then $i$ is called a \emph{balancing vertex} of $\bg$. A biased graph with a unique balancing vertex is called \emph{almost balanced}. We now have all the terms needed to characterize Dirichlet matroids in terms of biased graphs. \begin{thm} The classes of simple Dirichlet matroids and simple complete lift matroids of connected almost balanced biased graphs are equal. In particular, we have $M(D)=L_0(D)$. \begin{proof} This follows from \cite[Proposition 1]{zaslavsky1987} and Proposition \ref{prop:samemat} below. \end{proof} \end{thm} Propositions \ref{prop:circuits}, \ref{prop:bases} and \ref{prop:cocircuits} now follow from parts (e), (g) and (h) of \cite[Theorem 3.1]{zaslavsky1991}, respectively. \subsection{Gain representation} A \emph{gain graph} $\Phi=(\g,\gr,\phi)$ consists of \begin{enumerate}[(i)] \item A graph $\g$ \item A group $\gr$ \item A function $\phi:V\times V\to \gr$ such that $\phi(i,j)=\phi(j,i)^{-1}$ for all $(i,j)$. \end{enumerate} If $ij\in E$, then we think of $(i,j)$ as the edge $ij$ oriented from $i$ to $j$. For any cycle $C$ of $\g$, order the vertices of $C$ in a cycle as $i_1,\ldots,i_\ell=i_1$, and write \begin{equation}\phi(C)=\phi(i_1,i_2)\phi(i_2,i_3)\cdots\phi(i_{\ell-1},i_\ell).\end{equation} In general, the element $\phi(C)$ depends on the choice of starting vertex and direction, unless $\phi(C)$ is the identity. Let \begin{equation} \BB= \{ C \subseteq E: C \mbox{ is a cycle of } \g \mbox{ with } \phi(C) \mbox{ the identity of } \gr \}. \end{equation} The set $\BB$ is a linear subclass of $\g$. Thus every gain graph defines a biased graph whose set of balanced cycles is $\BB$. Let $\KK$ be the additive group of a field. For any $u\in \KK^{\B}$, let $\Phi(\g,u)=(\og,\KK,\phi)$, where $\og$ is defined as above and $\phi:V\times V\to \KK$ is given by \begin{equation} \phi(i,j) = \begin{cases} u_j & \mbox{if } ij \in \be \mbox{ with } j \in \B\\ -u_i & \mbox{if } ij \in \be \mbox{ with } i\in \B\\ 0 & \mbox{else}. \end{cases} \end{equation} Clearly $\Phi(\g,u)$ is a gain graph. \begin{eg} Consider the graph $\g$ on the left side of Figure \ref{fig:gg} with boundary nodes marked in white and values of $u$ labeled. The associated gain graph $\Phi(\g,u)$ is illustrated on the right side of Figure \ref{fig:gg}. An edge oriented from $i$ to $j$ with label $k$ means that $\phi(i,j)=k$. \begin{figure} \caption{A network and the associated gain graph.} \label{fig:gg} \end{figure} \end{eg} \subsection{Block injectivity and equivalence of representations} Recall the notion of a \emph{block} of $D$ from Definition \ref{def:block}. \begin{mydef} We say that $u\in \KK^{\B}$ is \emph{block injective} if for every block $U$ of $D$ the values $u_i$ are distinct for all $i\in U\cap \B$. \end{mydef} A cycle $C$ of the gain graph $\Phi(\g,u)$ is unbalanced if and only if $C$ is a crossing of $D$ between boundary nodes on which $u$ takes distinct values, so $\Phi(\g,u)$ is independent of $u$, as long as $u$ is block injective. Write \begin{equation*} \Phi(D)=\Phi(\g,u), \end{equation*} where $u$ is block injective. 
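As a computational aside, block injectivity is easy to test directly from Definition \ref{def:block}. The sketch below uses a toy network of our own (boundary nodes a, b, c and interior vertices x, y; it is not an example from this paper); it enumerates the blocks, computes $\omega(D)$, and checks whether a boundary vector $u$ is block injective. Note that a block-injective $u$ need not be injective: here $u$ takes the same value on the boundary nodes a and c, which lie in different blocks.
\begin{verbatim}
import networkx as nx

def blocks(G, B):
    # Block of an interior vertex i: all vertices reachable from i by paths
    # whose intermediate vertices avoid the boundary.
    interior = set(G) - set(B)
    found = set()
    for i in interior:
        seen, stack = {i}, [i]
        while stack:
            v = stack.pop()              # v is always an interior vertex
            for w in G[v]:
                if w not in seen:
                    seen.add(w)
                    if w not in B:       # continue only through interior vertices
                        stack.append(w)
        found.add(frozenset(seen))
    return found

def omega(G, B):
    return max(len(U & set(B)) for U in blocks(G, B))

def is_block_injective(u, G, B):
    return all(len({u[j] for j in U & set(B)}) == len(U & set(B))
               for U in blocks(G, B))

G = nx.Graph([('a', 'x'), ('x', 'b'), ('b', 'y'), ('y', 'c')])
B = {'a', 'b', 'c'}
print(sorted(sorted(U) for U in blocks(G, B)))  # [['a','b','x'], ['b','c','y']]
print(omega(G, B))                              # 2
print(is_block_injective({'a': 0, 'b': 1, 'c': 0}, G, B))   # True
\end{verbatim}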
The following proposition justifies our claim in Example \ref{eg:dirarr} that the matroids underlying Dirichlet arrangements are the simple Dirichlet matroids. \begin{prop} The following matroids on $\eo$ are equal: \begin{enumerate}[(a)] \item $M(D)$ \item $L_0(\bg(D))$ \item $L_0(\Phi(D))$ \end{enumerate} If $D$ is simple and $u\in \KK^{\B}$ is block injective, then we can add: \begin{enumerate}[(a)] \setcounter{enumi}{3} \item The matroid defined by the Dirichlet arrangement $\AA_D(u)$. \end{enumerate} \begin{proof} A cycle of $\bg(D)$ or $\Phi(D)$ is unbalanced if and only if it is a crossing of $D$. Hence $\Phi(D)=\bg(D)$ as biased graphs, proving the equivalence of (b) and (c). The equivalence of (c) and (d) follows from \cite[Theorem 4.1(a)]{zaslavsky2003}. In addition it follows that $\AA_D(u)$ defines the same matroid for any field $\KK$ with block-injective boundary data $u$, proving the equivalence of (a) and (d). \end{proof} \label{prop:samemat} \end{prop} \subsection{Proof of Proposition \ref{prop:representable}} \label{sec:reppf} We need a description of the single-element contractions of $M(D)$. \begin{lem} If $e\in E$, then $M(D)/e = L_0(\bg(D)/e)$, where $\bg(D)/e$ is the biased graph with underlying graph $\og/e$ and in which a cycle $C\subseteq E\setminus e$ of $\og/e$ is balanced if and only if $C\cup e$ is a balanced cycle of $\bg(D)$. \begin{proof} The result follows from the discussion in \cite[p. 38]{zaslavsky1989}. \end{proof} \label{lem:contract} \end{lem} \begin{proof}[Proof of Proposition \ref{prop:representable}] Suppose that $|\KK|\geq \omega(D)$, so that there exists a block-injective $u\in \KK^{\B}$. Proposition \ref{prop:samemat} implies that $M(D)$ is representable over $\KK$, since any hyperplane representation over $\KK$ gives a matrix representation over $\KK$. Now suppose that $|\KK|<\omega(D)$, and let $U$ be a block with $\omega(D) = |U\cap \B|$. Let $X\subseteq E$ be the set of all edges with both endpoints in $U$. Let $\partial X\subseteq X$ be the set of all edges with one endpoint in $U\cap \B$. Deleting all edges in $E\setminus X$ and contracting all edges in $X\setminus\partial X$ yields the star network $S_{\omega(D)+1}$ (see Example \ref{eg:star}). Since $M(S_d)\cong U_{2,d}$, we obtain $U_{2,\omega(D)+1}$ as a minor of $M(D)$ by Lemma \ref{lem:contract}. But $U_{2,\omega(D)+1}$ is not a minor of any matroid representable over $\KK$ \cite[Corollary 6.5.3]{oxley2006}. \end{proof} \section{Complete principal truncations} \label{sec:printrun} Let $M$ be a matroid on $E$ and $F$ a flat of rank $r$. Let $M_F$ denote the \emph{complete principal truncation} of $M$ along $F$, obtained by freely adding an $(r-1)$-element independent set $I$ to $F$ and contracting $I$ (see \cite[p. 266]{oxley2006} for details). Note that $F$ is a parallel class of $M_F$; we choose a representative $e\in F$ and treat $M_F$ as a matroid on $(E\setminus F)\cup e$ by deleting $F\setminus e$. Let $\mg$ denote the graph obtained from $\g$ by adding an edge between each pair of boundary vertices. Let $E(\B)$ denote the set of added edges. \begin{thm} The Dirichlet matroid $M(D)$ is isomorphic to the complete principal truncation of $M(\mg)$ along $E(\B)$. \begin{proof} This follows from the discussion in \cite[Section 6]{lutz2019hyp}. \end{proof} \end{thm} The following result gives an alternative proof of Proposition \ref{prop:hpp}, although it bypasses the connection to the response matrix.
\begin{prop} If a matroid $M$ has the half-plane property, then so does the complete principal truncation of $M$ along any flat. \begin{proof} This follows from Propositions 4.11 and 4.12 of \cite{choe2004}. \end{proof} \end{prop} \section*{Acknowledgments} The author thanks H\'{e}l\`{e}ne Barcelo, Trevor Hyde, Jeffrey C. Lagarias and Thomas Zaslavsky for comments on drafts of this article; Anastasia Chavez, Christopher Eur and James G. Oxley for discussions and references on matroid quotients; and the anonymous referee for suggestions and improvements. \end{document}
\begin{document} \sloppy \title{ Contrastive Learning under Heterophily } \author{Wenhan Yang} \email{[email protected]} \affiliation{Department of Computer Science,\\ \institution{University of California, Los Angeles} } \author{Baharan Mirzasoleiman} \email{[email protected]} \affiliation{Department of Computer Science,\\ \institution{University of California, Los Angeles} } \begin{abstract} Graph Neural Networks are powerful tools for learning node representations when task-specific node labels are available. However, obtaining labels for graphs is expensive in many applications. This is particularly the case for large graphs. To address this, there has been a body of work on learning node representations in a self-supervised manner, without labels. Contrastive learning (CL) has been a particularly popular approach for learning such representations. In general, CL methods work by maximizing the similarity between representations of augmented views of the same example, and minimizing the similarity between augmented views of different examples. However, existing graph CL methods cannot learn high-quality representations under heterophily, where connected nodes tend to belong to different classes. This is because under heterophily, augmentations of the same example may not be similar to each other. In this work, we address the above problem by proposing the first graph CL method, {\textsc{HLCL}}\xspace, for learning node representations under heterophily. {\textsc{HLCL}}\xspace uses a high-pass and a low-pass graph filter to generate different views of the same node. Then, it contrasts the two filtered views to learn the final node representations. Effectively, the high-pass filter captures the dissimilarity between nodes in a neighborhood and the low-pass filter captures the similarity between neighboring nodes. Contrasting the two filtered views allows {\textsc{HLCL}}\xspace to learn rich node representations for graphs under heterophily and homophily. Empirically, {\textsc{HLCL}}\xspace outperforms state-of-the-art graph CL methods on benchmark heterophily datasets and large-scale real-world datasets by up to 10\%. \end{abstract} \ccsdesc[500]{Computing methodologies~Neural networks} \ccsdesc[100]{Mathematics of computing ~Graph Algorithms} \keywords{Graph Neural Networks, Contrastive Learning, Graph Filters} \maketitle \section{Introduction} Graph neural networks (GNNs) are powerful tools for learning graph-structured data in various domains, including social networks, biological compound structures, and citation networks \citep{kipf2016semi,hamilton2017inductive,velivckovic2017graph}.
In general, GNNs leverage the graph's adjacency matrix to update the node representations by aggregating information from their neighbors. This can be seen as a low-pass filter that smooths the graph signals and produces similar node representations \citep{nt2019revisiting}. GNNs have achieved great success in supervised and semi-supervised learning, where task-specific labels are available. However, obtaining high-quality labels is very expensive in many domains, especially as graphs grow larger in size. This has motivated a recent body of work on self-supervised learning on graphs that learns the representations in an unsupervised manner \citep{velickovic2019deep,peng2020graph,qiu2020gcc,hassani2020contrastive,zhu2020beyond}. Among the self-supervised methods, Contrastive Learning (CL) has shown great success \cite{chen2020simple,zhu2020deep,velickovic2019deep,peng2020graph}. Contrastive learning obtains representations by maximizing the agreement between different augmented views of the same example, and minimizing agreement between differently augmented views of different examples. CL has been recently applied to graphs to learn node and graph representations \citep{velickovic2019deep,peng2020graph,qiu2020gcc,hassani2020contrastive,zhu2020beyond}. Graph CL methods work by first explicitly augmenting the input graph by altering the node features or the graph topology. Then, they learn representations by contrasting the augmented views of the graph encoded with a GNN-based encoder. Despite being successful on graphs with homophily, where neighboring nodes tend to share the same label, existing graph CL methods cannot learn high-quality representations for graphs with heterophily, where connected nodes often belong to different classes and have dissimilar features \cite{zhu2020beyond}. Learning self-supervised node representations under heterophily is indeed very challenging. When labels are available, they can be leveraged to learn how the neighborhood information can be aggregated to achieve a superior classification performance \cite{bo2021beyond,luan2020complete,pei2020geom, wang2019demystifying,zhu2020graph}. However, without the label information it is not clear how the neighborhood information should be aggregated. Besides, explicit graph augmentation methods, such as topology and feature augmentation, help to better capture the node \textit{similarities} under homophily. In doing so, they can effectively boost the performance of graph CL under homophily. However, under heterophily, it is crucial to also leverage \textit{dissimilarities} of nodes with their neighborhood to learn rich representations. Explicit graph augmentations cannot help capture such dissimilarities. \begin{figure*}\label{fig:method} \end{figure*} In this work, we address the above challenges by proposing the first graph CL method under heterophily. The key idea of our proposed method, {\textsc{HLCL}}\xspace{}, is to leverage a high-pass graph filter in a typical GNN encoder to capture dissimilarities of nodes with their neighborhood and generate non-smooth node representations. The non-smooth representations are then contrasted with their smooth counterparts generated by the same encoder. This results in learning rich representations based on a combination of smooth and non-smooth components. More specifically, we use the normalized Laplacian matrix as the high-pass filter in the GNN encoder to aggregate the node representations.
The Laplacian matrix magnifies the differences in the node features in a neighborhood and makes the representations distinct. In contrast, a typical GNN encoder uses the normalized adjacency matrix as a low-pass filter to aggregate the node features with those of their neighbors, and enforces similar representations for the nodes in the same neighborhood. Maximizing the agreement between the high-pass and low-pass filtered representations enables incorporating both smooth and non-smooth components while learning the final representations. {\textsc{HLCL}}\xspace obtains state-of-the-art performance under heterophily without the need for explicit augmentation. At the same time, it can leverage explicit topology and feature augmentations to also obtain a superior performance on graphs with homophily. We show the effectiveness of our {\textsc{HLCL}}\xspace{} framework through extensive experiments on graphs with heterophily and homophily for unsupervised representation learning, under the linear evaluation protocol. Our results demonstrate that, on five popular public benchmark datasets, our framework can outperform existing graph CL methods by up to 10\% for representation learning on graphs with heterophily, while ensuring a comparable performance to state-of-the-art graph CL methods on graphs with homophily. We also show that {\textsc{HLCL}}\xspace can easily scale to large real-world graphs, including Penn94, twitch-gamers, pokec, and genius, and outperform existing graph CL methods by up to 6\%.
\section{Preliminaries} \textbf{Notations.} We denote by $\mathcal{G} = (\mathcal{V},\mathcal{E})$ an undirected graph, where $\mathcal{V}=\{v_1, v_2, \dots, v_N\}$ represents the node set, and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the edge set. We denote by $\pmb{A}\in\{0,1\}^{N\times N}$ the symmetric adjacency matrix of graph $\mathcal{G}$. That is, $\pmb{A}_{ij} = 1$ if and only if $(v_i,v_j) \in \mathcal{E}$, and $\pmb{A}_{ij} = 0$ otherwise. We also denote the feature matrix by $\pmb{X}$, where $\pmb{X}_{i.} \in \mathbb{R}^m$ is the feature vector of the $i^{th}$ node, and $\pmb{x}\in\mathbb{R}^N$ is a column of the matrix and represents a graph signal. $\pmb{D}$ is the degree matrix of the graph, with $\pmb{D}_{ii} = \sum_j \pmb{A}_{ij}$, and $\mathcal{N}_i=\{j: \pmb{A}_{ij}=1\}$ is the neighborhood of node $i$. $\pmb{L}$ is the Laplacian matrix of the graph, defined as $\pmb{L} = \pmb{D} - \pmb{A}$. The normalized Laplacian matrix is denoted by $\pmb{L}_{sym} = \pmb{D}^{-\frac{1}{2}}\pmb{LD}^{-\frac{1}{2}}$, and the normalized adjacency matrix is defined in a similar fashion: $\pmb{A}_{sym} = \pmb{D}^{-\frac{1}{2}}\pmb{AD}^{-\frac{1}{2}}$. Here, we use the renormalized version of the adjacency matrix $\pmb{\hat{A}}_{sym}= \pmb{\tilde{D}}^{-\frac{1}{2}}\pmb{\tilde{A}} \pmb{\tilde{D}}^{-\frac{1}{2}}$ as introduced in \citep{kipf2016semi}, where $\pmb{\tilde{A}} = \pmb{A} + \pmb{I}$ and $\pmb{\tilde{D}} = \pmb{D} + \pmb{I}$. Similarly, the renormalized Laplacian matrix is defined as $\pmb{\hat{L}}_{sym} = \pmb{I} - \pmb{\hat{A}}_{sym}$. $\pmb{\hat{L}}_{sym}$ is a real symmetric matrix, with orthonormal eigenvectors $\{\pmb{u}_i\}^N_{i=1} \subset \mathbb{R}^N$ and corresponding eigenvalues $\lambda_i \in [0, 2)$ \citep{chung1997spectral}. For $\pmb{\hat{A}}_{sym}$ we have $\lambda_i(\pmb{\hat{A}}_{sym})\in(-1,1]$. \subsection{Graph CL under Homophily} State-of-the-art graph CL methods work by explicitly augmenting the input graph using feature or topology augmentations, encoding the augmented graphs using a GNN-based encoder, and contrasting the encoded node representations \citep{zhu2020deep, zhu2021empirical,velickovic2019deep,thakoor2021large}. Below, we discuss each of these steps in more detail. \noindent\textbf{Graph Augmentation.} First, the input graph is explicitly augmented by altering its topology or node features. Topology augmentation methods remove or add nodes or edges, and feature augmentation methods alter the node features by masking particular columns, dropping features at random, or randomly shuffling the node features \citep{zhu2020deep, zhu2021empirical,velickovic2019deep,thakoor2021large}. Explicit graph augmentation methods help to better capture the \textit{similarity} between the nodes. Effectively, if two nodes have similar augmentations, their representations are pulled together by the contrastive loss. \noindent\textbf{GNN Encoder.} The augmented graphs are then passed through a GNN-based encoder to obtain the augmented node views.
The GNN encoder produces node representations by aggregating the node features in a neighborhood as follows:\looseness=-1 \begin{equation} \pmb{H}^l = \sigma(\pmb{\hat{A}}_{sym} \pmb{H}^{l-1} \pmb{W}^{l-1}),\quad \pmb{H}^0=\pmb{X}, \end{equation} where $\pmb{H}^l$ denotes the node representations at layer $l$ of the encoder, $\pmb{W}^{l-1}\in\mathbb{R}^{d_{l-1} \times d_{l}}$ is the weight matrix of layer $l$ of the encoder, and $\sigma$ is the activation function. Crucially, the adjacency matrix $\pmb{\hat{A}}_{sym}$ aggregates every node's features with the features of nodes in its immediate neighborhood. For a multi-layer graph encoder, it iteratively aggregates features in a multi-hop neighborhood of every node to learn its representation. Hence, it smooths out the node representations and produces similar representations for the nodes within the same multi-hop neighborhood. \noindent\textbf{Contrastive Loss.} Finally, the contrastive loss distinguishes the representations of the same node in two different augmented views from other node representations. For example, the commonly used InfoNCE loss \cite{oord2018representation} is: \begin{equation} \begin{split} l(\pmb{u}_i, \pmb{v}_{i}) \!=\! -\log\! \frac{e^{\text{sim} (\pmb{u}_{i}, \pmb{v}_{i}) / \tau}}{e^{\text{sim} (\pmb{u}_{i}, \pmb{v}_{i}) / \tau} \!+\! \sum_{\substack{k \neq i}} e^{\text{sim} (\pmb{u}_{i}, \pmb{u}_{k}) / \tau} \!+\! \sum_{\substack{k \neq i}} e^{\text{sim} (\pmb{v}_{i}, \pmb{v}_{k}) / \tau}}, \end{split} \end{equation} where $\pmb{u}_i, \pmb{v}_i$ are representations of two different augmented views of node $i$, $\text{sim}(\pmb{u}, \pmb{v})=s(g(\pmb{u}),g(\pmb{v}))$, where $s(\cdot,\cdot)$ is the cosine similarity, $g(\cdot)$ is a nonlinear projection that enhances the expressive power, and $\tau$ is a temperature parameter. Intuitively, existing graph CL methods capture the similarity between nodes in a graph. Under homophily, nodes in a neighborhood have similar labels and features. Such nodes have similar augmented views. Therefore, the contrastive loss pulls the representations of the nodes in the same class together and distinguishes them from other classes. \subsection{Graphs under Heterophily} Under heterophily, connected nodes may have different class labels and dissimilar features. For example, in a dating network, most people tend to connect with people of the opposite gender, while in protein structures, different amino acid types are more likely to connect to each other \cite{zhu2020beyond}. \noindent\textbf{Homophily Ratio.} To quantify how likely nodes with similar labels are to be connected in the graph, the homophily ratio $\beta$ is defined as follows \citep{pei2020geom}: $$\beta = \cfrac{1}{|V|} \sum\limits_{v \in V} \cfrac{\text{Number of } v\text{'s neighbors who have the same label as } v}{\text{Number of } v\text{'s neighbors}}$$ For larger values of $\beta$, it is more likely that nodes with the same label are connected together. Existing graph CL methods have very poor performance under heterophily, i.e., low homophily ratio, and cannot learn high-quality representations, as we will discuss next. \section{Graph CL under Heterophily} In this section, we first discuss the challenges of graph CL under heterophily or low homophily. Then, we present our approach to overcome these challenges and learn high-quality representations. \noindent\textbf{Challenges.} Learning high-quality representations under heterophily, without having access to labels, is very challenging.
First, under heterophily, nodes in a neighborhood have different class labels and dissimilar features. In this setting, aggregating features of nodes in a neighborhood using a GNN encoder fades out the dissimilarity between representations of nodes in different classes and makes them indistinguishable. Indeed, under heterophily, it is crucial to leverage both similarities and dissimilarities of a node to its neighborhood to obtain high-quality representations. However, without having access to the labels, it is not clear how the node features should be aggregated. In fact, even the homophily ratio cannot be calculated without having access to the labels. Second, under heterophily, explicit graph augmentations leveraged by existing graph CL methods may not be helpful in learning high-quality representations. In particular, topology augmentations such as edge dropping, which are most effective under homophily \citep{zhu2021empirical}, may drastically harm the performance under heterophily (see Table \ref{tab:grace_aug}). This is because nodes in a neighborhood have similar labels under homophily, but may have different labels under heterophily. Hence, while two augmented versions of the same node with dropped edges are similar under homophily, augmented versions of the same node may not be similar under heterophily, as the dropped edges may connect nodes with different labels. In this setting, maximizing the similarity between representations of the dissimilar augmentations results in poor performance. Due to the above reasons, existing graph CL methods that contrast different explicitly augmented views of a node obtained using a GNN encoder cannot yield separable representations for different classes under heterophily. This leads to poor generalization performance for the downstream classifier, as we confirm by our experiments. Next, we discuss how we overcome the above challenges. \subsection{Graph CL via Graph Filters} As discussed, under heterophily, it is crucial to leverage both similarities and dissimilarities between the node features in a neighborhood to be able to learn high-quality representations. However, without having access to the labels, it is not clear how the node features in a neighborhood should be aggregated. To address the above challenges, our key idea is to generate two filtered representations for every node using the same encoder. In particular, the first (non-smooth) representation is generated using a high-pass filter, which captures the dissimilarity of the node's features to its neighborhood. The second (smooth) representation is generated using a low-pass filter, which captures the similarity of the node's features to its neighborhood. We note that the smooth representation is similar to the representation learned by a typical GNN-based encoder. Then, we learn the final node representations by contrasting the smooth and non-smooth representations. In doing so, we can learn rich node representations based on a combination of the smooth and non-smooth components. Next, we introduce high-pass and low-pass graph filters, and discuss how they can be leveraged to learn smooth and non-smooth node representations. \subsection{High-pass and Low-pass Graph Filters} The adjacency and Laplacian matrices can be leveraged to filter the smooth and non-smooth graph components, and capture similarity and dissimilarity of node features to their neighborhoods.
Specifically, multiplication of the Laplacian with a graph signal, $\pmb{\hat{L}}_{sym}\pmb{x}=\sum_i \lambda_i \pmb{u}_i\pmb{u}_i^T \pmb{x}$, acts as a filtering operation over $\pmb{x}$, adjusting the scale of the components of $\pmb{x}$ in the frequency domain. The entries of every eigenvector $\pmb{u}_i$ align with a cluster of connected nodes in the graph. For the Laplacian matrix, a smaller eigenvalue $\lambda_i$ corresponds to a smoother eigenvector $\pmb{u}_i$, associated with a larger cluster of connected nodes. On the other hand, a larger $\lambda_i$ corresponds to a non-smooth eigenvector $\pmb{u}_i$, which identifies smaller clusters of closely connected nodes in the graph. A Laplacian filter magnifies the part of the signal that aligns well with basis functions corresponding to large eigenvalues $\lambda_i\in(1,2)$ and suppresses the part of the signal that aligns with basis functions corresponding to small eigenvalues $\lambda_i\in[0,1]$. That means, for the small clusters of nodes that have a large alignment with $\pmb{u}_i$ corresponding to $\lambda_i>1$, the projection {$\lambda_i \pmb{u}_i\pmb{u}_i^T \pmb{x}$} amplifies $\pmb{x}$ within the cluster and consequently magnifies the difference in $\pmb{x}$ among the nodes within that cluster. On the other hand, for the larger clusters that align well with $\pmb{u}_i$ corresponding to $\lambda_i<1$, the projection \cl{$\lambda_i \pmb{u}_i\pmb{u}_i^T \pmb{x}$} suppresses $\pmb{x}$ within the cluster and reduces the differences in $\pmb{x}$ among the nodes within that cluster. Hence, Laplacian matrices can generally be regarded as high-pass filters \citep{ekambaram2014graph} that enlarge the differences in node features over small clusters and smooth out the differences over larger clusters in the graph. On the other hand, affinity matrices, such as the normalized adjacency matrix, can be treated as low-pass filters \citep{nt2019revisiting}, which suppress and filter out non-smooth components of the signals. This is because all of the eigenvalues of the affinity matrices have magnitude at most 1, i.e., $\lambda_i\in(-1,1]$. On the node level, left-multiplying $\pmb{x}$ by the $\pmb{\hat{L}}_{sym}$ and $\pmb{\hat{A}}_{sym}$ filters can be understood as diversification and aggregation operations, respectively \citep{luan2020complete}. In particular, a typical GNN filters smooth graph frequencies by aggregating the node representations with those of their neighbors, using the adjacency matrix, i.e., \begin{align}\label{eq:operations_A} (\pmb{\hat{A}}_{sym}\pmb{x})_i=\sum_{j\in\mathcal{N}_i}\frac{1}{\pmb{D}_{ii}}\pmb{x}_j. \end{align} Hence, it results in similar representations for the nodes in a neighborhood. In contrast, the high-pass filter only preserves the high frequencies, using the Laplacian matrix, i.e., \begin{align}\label{eq:operations_L} (\pmb{\hat{L}}_{sym}\pmb{x})_i=\sum_{j\in\mathcal{N}_i}\frac{1}{\pmb{D}_{ii}}(\pmb{x}_i-\pmb{x}_j). \end{align} In doing so, it magnifies the dissimilarities between the nodes and makes the representations of nodes in a neighborhood distinguishable. Since $\pmb{\hat{L}}_{sym}+\pmb{\hat{A}}_{sym} = \pmb{I}$, both filters capture complementary information and their combination using a contrastive loss allows learning richer representations, as we discuss next.
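To make the filtering behavior in Eqs. (\ref{eq:operations_A}) and (\ref{eq:operations_L}) concrete, the following NumPy sketch builds the renormalized matrices for a small path graph, applies both filters to a piecewise-constant signal, and also computes the homophily ratio $\beta$ for a toy labeling; the graph, signal, and labels are illustrative assumptions rather than data used in the paper.
\begin{verbatim}
import numpy as np

# Path graph 0-1-2-3; nodes {0,1} carry value +1, nodes {2,3} carry -1.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_tilde = A + np.eye(n)                                  # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt                # low-pass filter \hat{A}_sym
L_hat = np.eye(n) - A_hat                                # high-pass filter \hat{L}_sym

x = np.array([1.0, 1.0, -1.0, -1.0])                     # piecewise-constant signal
print(A_hat @ x)  # smoothed: the entries at nodes 1 and 2 move toward each other
print(L_hat @ x)  # sharpened: largest magnitude exactly at the 1-2 boundary

# Homophily ratio: average fraction of same-label neighbors per node.
labels = np.array([0, 0, 1, 1])
nbrs = {v: [u for (a, b) in edges for u in (a, b) if v in (a, b) and u != v]
        for v in range(n)}
beta = np.mean([np.mean(labels[nbrs[v]] == labels[v]) for v in range(n)])
print(beta)       # 0.75 for this labeling of the path graph
\end{verbatim}
The low-pass output is smoothest within each constant block of the signal, while the high-pass output is largest exactly where the signal changes, matching the aggregation and diversification interpretation above.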
\subsection{Generating Graph Views via Graph Filters} As discussed, the key idea of our {\textsc{HLCL}}\xspace{} framework is to leverage a high-pass filter to generate diverse and distinguishable node representations, and contrast them with smooth and similar node representations generated by a low-pass filter. To do so, we leverage the normalized Laplacian matrix $\pmb{\hat{L}}_{sym}$ and the adjacency matrix $\pmb{\hat{A}}_{sym}$ as the diversification and aggregation operations in Eqs. (\ref{eq:operations_L}) and (\ref{eq:operations_A}), respectively. We note that other types of high-pass and low-pass filters can be used in a similar way in our framework. More specifically, we input the graph with feature matrix $\pmb{X}$ into a graph encoder, and apply a high-pass filter $\pmb{F}_{HP}=\pmb{\hat{L}}_{sym}$ and a low-pass filter $\pmb{F}_{LP}=\pmb{\hat{A}}_{sym}$ in the encoder to generate high-pass node representations $\pmb{H}_H$ and low-pass node representations $\pmb{H}_L$ as follows: \begin{align} \pmb{H}_{H}^l &= \sigma(\pmb{F}_{HP} \pmb{H}^{l-1}_{H} \pmb{W}^{l-1}), \\ \pmb{H}_{L}^l &= \sigma(\pmb{F}_{LP} \pmb{H}^{l-1}_L \pmb{W}^{l-1}), \end{align} where $\pmb{H}^l_{L}$ and $\pmb{H}^l_{H}$ are the low-pass and high-pass filtered representations at layer $l$ of the encoder, $\pmb{W}^{l-1}\in\mathbb{R}^{d_{l-1} \times d_{l}}$ is the weight matrix of layer $l$ of the encoder, shared between the two channels, $\sigma$ is the activation function, and we have $\pmb{H}^0_L=\pmb{H}^0_H=\pmb{X}$. The high-pass filter $\pmb{F}_{HP}$ filters out the low-frequency signals and preserves the high-frequency signals. In doing so, it captures the difference between the features of every node and those of the nodes in its neighborhood. Using the high-pass filter within a multi-layer encoder iteratively captures the difference between features of the nodes in a multi-hop neighborhood of a node. Hence, it makes the representations of nodes that have different features from their neighbors distinct in their multi-hop neighborhood. On the other hand, the low-pass filter, $\pmb{F}_{LP}$, resembles a typical GNN, which only preserves the low-frequency signals by aggregating every node's features with the features of nodes in its immediate neighborhood. Using the low-pass filter within a multi-layer graph encoder iteratively aggregates features in a multi-hop neighborhood of every node to learn its representation. Hence, it smooths out the node representations and produces similar representations for the nodes within the same multi-hop neighborhood. While the low-pass filter is essential for learning good representations in smooth graphs, it cannot produce distinguishable representations for graphs that are non-smooth, especially under heterophily. For such graphs, the high-pass filter is crucial to provide distinct representations. The combination of both filters provides complementary information and allows learning both smooth and non-smooth components of the graph simultaneously. Hence, it enables learning richer representations, particularly under heterophily. \noindent\textbf{Scalability via Message Passing.} The high-pass and low-pass filtered representations can be obtained through message passing in an inductive manner, according to Eqs. (\ref{eq:operations_A}) and (\ref{eq:operations_L}), without the need to explicitly calculate the Laplacian matrix.
In particular, the high-pass filtered representations can be obtained by iteratively taking differences between the representation of a node and those of its neighbors, and the low-pass filtered representations can be obtained by aggregating the node's representation with those of its neighbors. Formally: \begin{align} \pmb{\hat{h}}_i^l&=\sigma(\pmb{W}^{l-1}\pmb{h}_i^{l-1}),\\ (\pmb{h}_i^l)_L&=\sum_{j\in\mathcal{N}_i\cup \{i\}} (\pmb{\hat{h}}_i^l+\pmb{\hat{h}}_j^l),\\ (\pmb{h}_i^l)_H&=\sum_{j\in\mathcal{N}_i\cup \{i\}} (\pmb{\hat{h}}_i^l-\pmb{\hat{h}}_j^l). \end{align} This is the same approach used to train common GNNs on large graphs. Hence, our method has the same complexity as conducting normal GNN message passing, with one additional message being passed. This makes our method scalable to large graphs, as we also confirm in our experiments. \noindent\textbf{Explicit graph augmentation.} We note that the input graph can optionally be explicitly augmented before entering the encoder. While explicit augmentations do not provide a considerable benefit and can even be harmful under heterophily, they are crucial for graphs with homophily. We study the effect of explicit augmentation under homophily and heterophily in detail in our experiments. Our {\textsc{HLCL}}\xspace{} framework is illustrated in Fig. \ref{fig:method}. \begin{algorithm}[t] \caption{Contrastive Learning with Graph Filters ({\textsc{HLCL}}\xspace{})}\label{alg:alg} \begin{algorithmic}[1] \For{epoch=1,2,3,...} \State Input graph $\mathcal{G}$ into the shared graph encoder $f(\cdot)$ \State Generate high-frequency-signal node embeddings: $\pmb{H}_{H}^l = \sigma(\pmb{F}_{HP} \pmb{H}^{l-1}_{H} \pmb{W}^{l-1})$ \State Generate low-frequency-signal node embeddings: $\pmb{H}_{L}^l = \sigma(\pmb{F}_{LP} \pmb{H}^{l-1}_L \pmb{W}^{l-1})$ \State Compute the contrastive objective $\mathcal{L}$ with Eq. (\ref{eq:loss}) \State Update parameters by applying stochastic gradient descent to minimize $\mathcal{L}$ \EndFor \end{algorithmic} \end{algorithm} \subsection{The Contrastive Learning Framework} After generating the low-pass and high-pass filtered graph views, we pass them through a shared non-linear projection head and employ a contrastive objective that enforces the filtered representations of each node in the two views to agree with each other. \begin{figure*}\label{fig:chameleon_grace} \label{fig:cora_grace} \label{fig:chameleon_hlcl} \label{fig:cora_hlcl} \label{fig:gracevshlcl} \end{figure*} \noindent\textbf{Non-linear projection head.} Before contrasting the high-pass and low-pass filtered representations, we pass them to a non-linear projection head $g(\cdot)$, which maps the filtered views to another latent space where the contrastive loss is calculated. This has been shown to improve the performance \citep{chen2020simple,you2020graph,zhu2020deep,zhu2020graph}, and we also confirm its effect in our experiments. In our setting, we use a two-layer perceptron network to obtain $\pmb{z}_{h}\! =\! g(\pmb{H}_{H})$ and $\pmb{z}_{l} \!=\! g(\pmb{H}_{L})$. We use the projection head only during the contrastive training, and use the low-pass filtered representations $\pmb{H}_{L}$ in the downstream task. \looseness=-1 \noindent\textbf{Contrastive loss.} For every node $i$, its projected representation generated by the low-pass filter, $\pmb{z}_l^i$, is treated as the anchor, and the one generated by the high-pass filter, $\pmb{z}_h^i$, is treated as the corresponding positive sample.
Formally, we have: \begin{equation} l(\pmb{z}_l^i, \pmb{z}_h^i) = {\text{sim} (\pmb{z}_l^i, \pmb{z}_h^i) / \tau}, \end{equation} \noindent where $\text{sim}(\pmb{z}_l^i, \pmb{z}_h^i)$ is the cosine similarity between $\pmb{z}_l^i$ and $\pmb{z}_h^i$, and $\tau$ is a temperature parameter. Notably, the high-pass filter alleviates the need for using negative pairs in the contrastive loss, by automatically producing dissimilar representations for different nodes. Empirically, we also observed no improvement in the performance by incorporating negative pairs in the presence of the high-pass filter. Since the two views are symmetric, the agreement for the other view, $l(\pmb{z}_h^i, \pmb{z}_l^i)$, is defined in a similar fashion. The overall objective is then the average agreement over all positive pairs, which we maximize by minimizing: \begin{equation} \mathcal{L} = -\frac{1}{2N} \sum_{i = 1}^{N} [l(\pmb{z}_l^i, \pmb{z}_h^i) + l(\pmb{z}_h^i, \pmb{z}_l^i)].\label{eq:loss} \end{equation} Effectively, maximizing the agreement between the low-pass and high-pass views pushes the representations of nodes whose features differ from their neighborhood away from those of their neighbors, allowing such nodes to be distinguished. \noindent\textbf{Final representations.} After minimizing the contrastive loss in Eq. \eqref{eq:loss}, we use the output of the low-pass filter (the typical GNN) as the final representation. Note that the parameters of the encoder are learned by contrasting the low-pass representations with the high-pass representations. Hence, the output of the low-pass encoder is different from that of existing graph CL methods that only use GNN-based (low-pass) encoders during contrastive learning. Indeed, we do not need to rely on the high-pass filter during inference. The pseudocode is illustrated in Alg. \ref{alg:alg}. \subsection{Effect of HP and LP Filters under Different Homophily Ratios} In general, contrasting the high-pass filtered view with its low-pass counterpart pushes away the representations of nodes that have dissimilar features to the other nodes in their neighborhood, and allows them to be distinguished better. In particular, the high-pass filter is essential for separating the classes under heterophily, and the low-pass filter is essential for learning high-quality representations under homophily. Nevertheless, real-world networks often have neighborhoods with medium homophily ratio (see Tables \ref{tab:benchmark_baselines} and \ref{tab:large_data}). In such cases, both filters are essential for learning rich representations. Here, we discuss the effect of high-pass and low-pass filters in neighborhoods with different homophily ratios, and explain why contrasting the filtered representations does not result in significant harm in high- or low-homophily neighborhoods. \noindent\textbf{Heterophily.} Under a medium or low homophily ratio, a fraction of nodes in the neighborhood have different labels and dissimilar features compared to the rest of the nodes in the same neighborhood. As long as the homophily ratio is not nearly zero, both filters are essential to obtain rich and separable representations. In particular, the low-pass filter is required to aggregate features of similar nodes, and the high-pass filter is required to distinguish features of dissimilar nodes in the same neighborhood. By contrasting the high-pass and low-pass filtered features, {\textsc{HLCL}}\xspace is able to learn rich and distinguishable class representations under heterophily.
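To make the training procedure of Alg. \ref{alg:alg} concrete, the following PyTorch-style sketch runs a shared two-layer encoder once with the low-pass filter and once with the high-pass filter, projects both outputs, and minimizes the symmetric agreement objective of Eq. (\ref{eq:loss}). The dense filter matrices, the random toy graph, the layer sizes, and the omission of explicit augmentation are simplifying assumptions for illustration, not the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, f = 100, 32                                   # toy graph: 100 nodes, 32 features
X = torch.randn(n, f)
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.t()) > 0).float()                    # symmetrize
A.fill_diagonal_(0)
A_tilde = A + torch.eye(n)
d_inv_sqrt = torch.diag(A_tilde.sum(1).rsqrt())
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt        # low-pass filter  F_LP
L_hat = torch.eye(n) - A_hat                     # high-pass filter F_HP

class HLCLSketch(nn.Module):
    """Shared-weight encoder run with two different graph filters."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim, bias=False)   # shared layer 1
        self.W2 = nn.Linear(hid_dim, out_dim, bias=False)  # shared layer 2
        self.proj = nn.Sequential(nn.Linear(out_dim, out_dim), nn.ReLU(),
                                  nn.Linear(out_dim, out_dim))  # projection head g

    def encode(self, filt, X):
        H = torch.relu(filt @ self.W1(X))        # layer 1 with the given filter
        return filt @ self.W2(H)                 # layer 2 with the same filter

    def forward(self, A_hat, L_hat, X):
        return self.encode(A_hat, X), self.encode(L_hat, X)   # H_L, H_H

def hlcl_loss(z_l, z_h, tau=0.5):
    """Negative mean cosine agreement between the two filtered views."""
    z_l, z_h = F.normalize(z_l, dim=1), F.normalize(z_h, dim=1)
    return -((z_l * z_h).sum(dim=1) / tau).mean()

model = HLCLSketch(in_dim=f, hid_dim=256, out_dim=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
H_L, H_H = model(A_hat, L_hat, X)                # the two filtered views
loss = hlcl_loss(model.proj(H_L), model.proj(H_H))
opt.zero_grad(); loss.backward(); opt.step()     # one training step
# After training, the low-pass output H_L is used as the final representation.
\end{verbatim}
Since the cosine agreement is symmetric in its two arguments, the two terms of Eq. (\ref{eq:loss}) coincide and the sketch minimizes their common value; in practice the filters are applied via sparse message passing rather than dense matrix products.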
Note that structural augmentation can drastically harm the performance under heterophily, as it reduces the effectiveness of the graph filters (see Table \ref{tab:hl-ablation}). Additional feature augmentation, in particular on the low-pass filter, can slightly boost the performance. \noindent\textbf{Homophily.} Under a high homophily ratio, the low-pass filter aggregates the node features in a neighborhood and makes their representations similar to each other. On the other hand, the high-pass filter magnifies the dissimilarity between node features in a neighborhood. In this case, explicit structural graph augmentations, in particular dropping edges or nodes, effectively alleviate the adverse effect of the high-pass filter and considerably boost the performance, as can be seen in Table \ref{tab:hl-ablation}. Additional feature and structural augmentations on the low-pass filter can further improve the quality of the learned representations, as also confirmed by prior work \cite{zhu2021empirical}. Fig. \ref{fig:gracevshlcl} compares t-SNE visualizations of the representations learned by {\textsc{HLCL}}\xspace and by GRACE, a state-of-the-art graph CL method. We see that under heterophily (Chameleon), {\textsc{HLCL}}\xspace representations of every class form smaller compact clusters, which allows the downstream linear model to separate the classes. In contrast, GRACE representations are spread out and classes are not easily separable. In addition, under homophily (Cora), the high-pass filter combined with explicit graph augmentations allows {\textsc{HLCL}}\xspace to obtain separable high-quality class representations. Effectively, {\textsc{HLCL}}\xspace achieves nearly the same downstream performance as GRACE under homophily, as we confirm by our experiments. \section{Experiments} In this section, we evaluate the node representations learned with {\textsc{HLCL}}\xspace under the linear probe. We compare {\textsc{HLCL}}\xspace with existing graph CL methods, and conduct an extensive ablation study to evaluate the effect of each of {\textsc{HLCL}}\xspace's components. \noindent\textbf{Datasets.} We conduct our experiments on five widely-used public benchmark datasets with different homophily ratios $\beta$. We show the statistics of the datasets, including $\beta$ and the number of nodes, edges, and classes, in Table \ref{tab:benchmark_baselines}. For graphs with homophily, we use the citation networks Cora and Citeseer \citep{yang2016revisiting}. For graphs with heterophily, we use the Wikipedia networks Chameleon and Squirrel and the web page network Texas \citep{rozemberczki2021multi,pei2020geom}. {In addition, to illustrate the scalability of {\textsc{HLCL}}\xspace, we apply it to large-scale real-world datasets recently provided by \cite{lim2021large}. The statistics of these datasets are shown in Table \ref{tab:large_data}.} We repeat the experiments 10 times for the smaller benchmark datasets and 3 times for the large real-world datasets, using early stopping, and report the average accuracy as the final result. For each run, following CPGNN \cite{zhu2020graph} and GRACE \cite{zhu2020deep}, we randomly select 10\% of nodes for training, 10\% for validation, and 80\% for testing. \begin{table}[b] \centering {\small \caption{Existing graph augmentation methods are highly ineffective for graph CL under heterophily. {\textsc{HLCL}}\xspace{} achieves up to 10\% improvement over the best graph augmentation.
\label{tab:grace_aug} } \begin{tabular} {p{4mm} c |c c c} \toprule && Chameleon & Squirrel & Texas\\\midrule \multirow{1}{3em}{ } &\multicolumn{1}{c|}{\textbf{{\textsc{HLCL}}\xspace{} (no-aug)}} & \textbf{48.03} $\pm$ 3.1 & \textbf{33.44} $\pm$ 1.7 & \textbf{64.21} $\pm$ 11.2\\\midrule \multirow{5}{3em}{Topo}& \multicolumn{1}{c|}{ER} & 34.23 $\pm$ 3.8 & 25.14 $\pm$ 2.3 & 52.63 $\pm$ 10.7\\ &\multicolumn{1}{c|}{ND} &36.02 $\pm$ 1.9 & 26.70 $\pm$ 1.8 & 50.53 $\pm$ 12.2\\ &\multicolumn{1}{c|}{EA} & 38.20 $\pm$ 3.1 & 26.92 $\pm$ 1.6 & 54.73 $\pm$ 9.7\\ &\multicolumn{1}{c|}{RWS} &35.63 $\pm$ 3.3 & 24.59 $\pm$ 13.5 &56.84 $\pm$ 11.7\\ &\multicolumn{1}{c|}{PPR} &34.14 $\pm$ 3.5 & 24.43 $\pm$ 2.9 & 50.00 $\pm$ 6.7 \\\midrule \multirow{2}{3em}{Feat} &\multicolumn{1}{c|}{FM} & 37.55 $\pm$ 3.6 & 25.58 $\pm$ 1.1 & 58.42 $\pm$ 14.21\\ &\multicolumn{1}{c|}{FD} & 36.81 $\pm$ 3.9 & 25.75 $\pm$ 2.7 & 50.52 $\pm$ 6.7\\ \bottomrule \end{tabular} } \end{table} \begin{table*}[t] \centering { \small \caption{ {\textsc{HLCL}}\xspace{} achieves state-of-the-art under heterophily, and a comparable performance under homophily when combined with explicit graph augmentation.}\label{tab:benchmark_baselines} \begin{tabular} {c c c| c c c} \toprule &\multicolumn{2}{c}{Homophily}&\multicolumn{3}{|c}{Heterophily}\\\cmidrule{2-4}\cmidrule{5-6} &Cora & CiteSeer & Chameleon & Squirrel & Texas \\\midrule \multicolumn{1}{c|}{Hom.($\beta$)} & $0.83 \pm .086$ & $0.71 \pm .16$ & $0.25 \pm .052$ & $.22 \pm .038$ & $.06 \pm .032$ \\ \multicolumn{1}{c|}{Nodes} & 2708 & 3327 & 2277 & 5201 & 183 \\ \multicolumn{1}{c|}{Edges} & 5278 & 4676 & 31421 & 198493 & 295 \\ \multicolumn{1}{c|}{Classes} & 6 & 7 & 5 & 5 & 5 \\ \midrule\midrule \multicolumn{1}{c|}{\textbf{{\textsc{HLCL}}\xspace}} & \textbf{82.34} $\pm$ 2.7 & \textbf{71.35} $\pm$ 1.4 & \textbf{49.78} $\pm$ 1.8 & \textbf{34.11} $\pm$ 1.9 & \textbf{65.26} $\pm$ 7.8 \\ \multicolumn{1}{c|}{\textbf{{\textsc{HLCL}}\xspace(no-aug)}} & 77.09 $\pm$ 1.7 & 64.34 $\pm$ 1.5 & \textbf{48.04} $\pm$ 3.1 & \textbf{33.44} $\pm$ 1.7 & \textbf{64.21} $\pm$ 11.2\\ \midrule \multicolumn{1}{c|}{Raw feature} & 64.8 & 64.6 & 31.61 & 24.43 & 52.63 \\ \multicolumn{1}{c|}{DeepWalk} & 75.7 $\pm$ 1.7 & 50.5 $\pm$ 1.6& 27.51 $\pm$ 3.1 & 25.47 $\pm$ 2.4 & 40.00 $\pm$ 15.9 \\ \multicolumn{1}{c|}{DeepWalk + features} & 73.1 $\pm$ 2.3 & 47.6 $\pm$ 1.5 & 36.21 $\pm$ 4.6 & 26.23 $\pm$ 1.8 & 53.68 $\pm$ 10.2\\ \multicolumn{1}{c|}{DGI} & 82.60 $\pm$ 0.4 & 68.80 $\pm$ 0.7 & 40.48 $\pm$ 1.9 & 27.95 $\pm$ 0.9 & 60.00 $\pm${10.8}\\ \multicolumn{1}{c|}{BGRL} & 74.67 $\pm$ 0.6 & 64.25 $\pm$ 2.4 & 38.20 $\pm$ 1.4 & 26.03 $\pm$ 0.9 & 61.05 $\pm$ 11.1 \\ \multicolumn{1}{c|}{GRACE} & \textbf{83.30} $\pm$ 0.4 & \textbf{72.10} $\pm$ 0.5 & 37.37 $\pm$ 0.6 & 28.67 $\pm$ 0.9 & 62.6 $\pm$ 7.2 \\\midrule \multicolumn{1}{c|}{MixHop} & \textbf{85.34} $\pm$ 0.4 & \textbf{73.23} $\pm$ 0.5 & {46.84} $\pm$ 3.5 & \textbf{36.42} $\pm$ 3.4 & 62.15 $\pm$ 2.5\\ \bottomrule \end{tabular} } \end{table*} \noindent \textbf{{\textsc{HLCL}}\xspace{} setup.} We employ a two-layer GNN \citep{kipf2016semi} as our low-pass encoder, and consider an additional high-pass channel with $\pmb{\hat{L}}_{sym}$ as the message passing filter to capture the high-frequency signals from the graphs. The high-pass channel generates augmented node representations to be contrasted with those encoded by the GNN. The high-pass channel shares the same weight parameters with the GNN, but generate representations with different filters. 
The final representations are generated by the GNN encoder. \noindent \textbf{Linear Probe Evaluation.} We follow the evaluation protocol used in \citep{zhu2020deep}. Models are first trained in a self-supervised manner without labels. Then, we feed the final node embeddings into an $\ell_2$-regularized logistic regression classifier to fit the labeled data. \subsection{Graph CL under Heterophily} \noindent\textbf{{\textsc{HLCL}}\xspace vs Explicit Augmentation.} First, we compare {\textsc{HLCL}}\xspace with a state-of-the-art graph CL method, namely GRACE \cite{zhu2020deep}, with different explicit graph augmentations under heterophily. We consider topology augmentation methods including Edge Removing (ER), Node Dropping (ND), Edge Adding (EA), Subgraph with Random Walk (RWS), and Personalized PageRank (PPR), as well as feature augmentation methods including Feature Masking (FM) and Feature Dropping (FD) for generating different graph augmentations. We compare these methods with {\textsc{HLCL}}\xspace{}. Except for the proposed high-pass filter, we do not use any other explicit graph augmentation. Table \ref{tab:grace_aug} shows that our method achieves a remarkable boost of up to 10\% compared to the best graph augmentation method. This confirms the necessity of utilizing the non-smooth graph components under heterophily. Note that even though topology and feature augmentation methods benefit graphs with homophily, they are highly ineffective for graph CL under heterophily. \begin{table}[b] \centering {\small \caption{ {\textsc{HLCL}}\xspace{} achieves a noticeably higher accuracy on realistic large-scale datasets compared to other GCL methods. {\textsc{HLCL}}\xspace{} outperforms SOTA graph CL methods by up to 6\%.}\label{tab:large_data} \begin{tabular} {c| c c c c} \toprule & Penn94 & Twitch-Gamers & Pokec & Genius\\\midrule Hom.($\beta$)& $0.48 \pm .042$ & $0.55 \pm .040$ & $0.42 \pm .10$ & $0.48 \pm .21$ \\ Nodes & 41,554 & 168,114 & 1,632,803 & 421,961 \\ Edges & 1,362,229 & 6,797,557 & 1,632,803 & 984,979\\ Classes & 2 & 2 & 2 & 2 \\\midrule\midrule \multicolumn{1}{c|}{\textbf{{\textsc{HLCL}}\xspace}} & \textbf{68.12} $\pm$ 3.5 & \textbf{66.96} $\pm$ 0.9 & \textbf{56.50} $\pm$ 1.7 & \textbf{85.39} $\pm$ 2.4 \\ \midrule \multicolumn{1}{c|}{DGI} & 62.86 $\pm$ 0.37 & 62.47 $\pm$ 2.8 & 54.56 $\pm$ 0.6 & 83.45 $\pm$ 1.8 \\ \multicolumn{1}{c|}{BGRL} & 58.82 $\pm$ 0.6 & 55.09 $\pm$ 2.3 & 53.53 $\pm$ 1.2 & 76.38 $\pm$ 2.6\\ \multicolumn{1}{c|}{GRACE} & 62.52 $\pm$ 0.15 & 57.08 $\pm$ 0.11 & 53.61 $\pm$ 0.59 & 79.58 $\pm$ 2.9\\ \bottomrule \end{tabular} } \end{table} \noindent\textbf{{\textsc{HLCL}}\xspace vs Self-supervised Baselines.} Next, we compare {\textsc{HLCL}}\xspace with existing baselines for self-supervised representation learning. We consider the traditional representation learning method DeepWalk \citep{perozzi2014deepwalk}, deep learning methods including DGI \citep{velickovic2019deep}, BGRL \citep{thakoor2021large}, and GRACE \citep{zhu2020deep}, and other popular baselines, namely Raw Features and DeepWalk + Raw Features. We also include MixHop \citep{abu2019mixhop} in our experiments as the supervised learning baseline. Table \ref{tab:benchmark_baselines} compares the performance of {\textsc{HLCL}}\xspace{} when used without explicit augmentation ({\textsc{HLCL}}\xspace{} (no-aug)) and when it is applied to graphs that are explicitly augmented with Edge Removing (ER) on the graphs with homophily and Feature Masking (FM) on the graphs with heterophily.
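For completeness, the linear-probe protocol used for all reported numbers amounts to fitting an $\ell_2$-regularized logistic regression on the frozen embeddings; a minimal scikit-learn sketch is given below, where the embeddings, labels, and regularization strength are placeholders rather than the actual experimental values.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 128))   # placeholder for the frozen node embeddings
y = rng.integers(0, 5, size=1000)  # placeholder for the node labels

# Random 10% / 10% / 80% split into train / validation / test, as in the protocol.
idx = rng.permutation(len(y))
n_tr = n_va = len(y) // 10
tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # l2-regularized probe
clf.fit(Z[tr], y[tr])
print("val  acc:", accuracy_score(y[va], clf.predict(Z[va])))
print("test acc:", accuracy_score(y[te], clf.predict(Z[te])))
\end{verbatim}
With this protocol in mind, we now return to the comparison in Table \ref{tab:benchmark_baselines}.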
We see that {\textsc{HLCL}}\xspace{} shows a significant boost on graphs with heterophily and a comparable performance on the graphs with homophily. Even without explicit augmentation, {\textsc{HLCL}}\xspace{} surpasses other baselines on graphs with heterophily, which confirms the effectiveness of the high-pass channel. Importantly, applying {\textsc{HLCL}}\xspace{} to graphs that are explicitly augmented achieves a comparable performance under homophily. Thus, the high-pass filter combined with explicit augmentations does not harm the performance on such graphs. {Compared to supervised learning model such as MixHop trained in an end-to-end manner, HLCL achieves a comparable or superior performances. This, demonstrates the effectiveness of our design.}\looseness=-1 \noindent\textbf{Large-scale Experiments} Next, to confirm effectiveness and scalability of {\textsc{HLCL}}\xspace, we apply it to large-scale real-world datasets. Table \ref{tab:large_data} shows that {\textsc{HLCL}}\xspace achieves up to 6\% higher accuracy on large-scale datasets compared to baselines, and easily scales to large graphs. \subsection{Ablation Studies} \begin{table*}[t] \small \centering { \caption{The result on combining {\textsc{HLCL}}\xspace with different graph view augmentations. HP represents augmentation on the high-pass channel only, LP represents augmentation on the low-pass channel only, HL represents augmentation on both channels. Both topology and feature augmentations are included in the experiments. The worst results are highlighted in gray.\label{tab:hl-ablation} } \begin{tabular} {c c c c |c c c} \toprule & &\multicolumn{2}{c}{Homophily}&\multicolumn{3}{|c}{Heterophily}\\\cmidrule{3-4}\cmidrule{5-7} &&Cora & CiteSeer & Chameleon & Squirrel & Texas\\\midrule &\multicolumn{1}{c|}{{\textsc{HLCL}}\xspace{} (no-aug)} & 77.09 $\pm$ 1.7 & 64.34 $\pm$ 1.5 & 48.03 $\pm$ 3.1 & 33.44 $\pm$ 1.7 & 64.21 $\pm$ 11.2\\\midrule \multirow{2}{4em}{HP Topo}& \multicolumn{1}{c|}{ER} & 80.80 $\pm$ 3.0 & 71.20 $\pm$ 1.3 & \colorbox{Mygray}{36.11} $\pm$ 4.1 & \colorbox{Mygray}{26.26} $\pm$ 2.3 & \colorbox{Mygray}{57.36} $\pm$ 8.90\\ &\multicolumn{1}{c|}{ND} & 78.42 $\pm$ 2.0 & 70.24 $\pm$ 2.0 & \colorbox{Mygray}{35.80} $\pm$ 2.7 & \colorbox{Mygray}{26.79} $\pm$ 1.6 & \colorbox{Mygray}{61.05} $\pm$ 12.0 \\ &\multicolumn{1}{c|}{EA} & 74.04 $\pm$ 2.9 & 65.75 $\pm$ 3.2 & \colorbox{Mygray}{35.81} $\pm$ 3.3 & \colorbox{Mygray}{26.79} $\pm$ 2.0 & \colorbox{Mygray}{60.53} $\pm$ 7.90 \\\cmidrule{2-7} \multirow{2}{4em}{HP Feat} &\multicolumn{1}{c|}{FM}& 77.13 $\pm$ 2.9 & 64.67 $\pm$ 1.3 & 49.08 $\pm$ 3.8 & 33.80 $\pm$ 1.7 & 62.63 $\pm$ 9.8\\ &\multicolumn{1}{c|}{FD} & 77.94 $\pm$ 2.6 & 66.05 $\pm$ 1.8 & 47.07 $\pm$ 4.0 & 32.71 $\pm$ 2.0 & 61.57 $\pm$ 6.2\\\midrule \multirow{2}{4em}{LP Topo}& \multicolumn{1}{c|}{ER} & 77.94 $\pm$ 3.1 & 65.18 $\pm$ 1.9 & \colorbox{Mygray}{37.07} $\pm$ 3.4 & \colorbox{Mygray}{25.36} $\pm$ 2.0 & \colorbox{Mygray}{57.37} $\pm$ 7.60\\ &\multicolumn{1}{c|}{ND} & 75.66 $\pm$ 1.6 & 61.28 $\pm$ 2.6 & \colorbox{Mygray}{36.15} $\pm$ 2.0 & \colorbox{Mygray}{26.53} $\pm$ 1.6 & \colorbox{Mygray}{59.47} $\pm$ 8.50 \\ &\multicolumn{1}{c|}{EA} & 80.73 $\pm$ 2.2 & 68.35 $\pm$ 2.0 & \colorbox{Mygray}{39.08} $\pm$ 2.0 & \colorbox{Mygray}{26.75} $\pm$ 1.5 & \colorbox{Mygray}{61.05} $\pm$ 11.8 \\\cmidrule{2-7} \multirow{2}{4em}{LP Feat} &\multicolumn{1}{c|}{FM}& 78.46 $\pm$ 2.4 & 66.68 $\pm$ 1.7 & \textbf{49.78} $\pm$ 1.8 & \textbf{34.11} $\pm${1.9} & \textbf{65.26} $\pm$ 7.8\\ &\multicolumn{1}{c|}{FD} & 78.42 $\pm$ 
2.7 & 67.75 $\pm$ 1.7 & 47.46 $\pm$ 3.2 & 33.65 $\pm$ 1.7& 63.15 $\pm$ 7.4\\\midrule \multirow{2}{4em}{HL Topo}& \multicolumn{1}{c|}{ER} & \textbf{82.34} $\pm$ 2.7 & \textbf{71.35} $\pm$ 1.4 & \colorbox{Mygray}{37.29} $\pm$ 2.6 & \colorbox{Mygray}{25.60} $\pm$ 1.6 & \colorbox{Mygray}{57.37} $\pm$ 8.3\\ &\multicolumn{1}{c|}{ND} & 76.10 $\pm$ 2.6 & 65.90 $\pm$ 2.6 & \colorbox{Mygray}{35.81} $\pm$ 1.8 & \colorbox{Mygray}{25.39} $\pm$ 1.5 & \colorbox{Mygray}{61.58} $\pm$ 8.8 \\ &\multicolumn{1}{c|}{EA} & 81.02 $\pm$ 2.5 & 70.02 $\pm$ 3.6 & \colorbox{Mygray}{37.46} $\pm$ 1.8 & \colorbox{Mygray}{26.12} $\pm$ 1.7 & \colorbox{Mygray}{60.53} $\pm$ 7.9 \\\cmidrule{2-7} \multirow{2}{4em}{HL Feat} &\multicolumn{1}{c|}{FM}& 78.60 $\pm$ 2.3 & 66.08 $\pm$ 2.0 & 49.52 $\pm$ 3.6 & 33.90 $\pm$ 1.5 & 63.68 $\pm$ 8.9\\ &\multicolumn{1}{c|}{FD} & 77.50 $\pm$ 3.0 & 66.59 $\pm$ 1.7 & 48.42 $\pm$ 4.0 & 33.49 $\pm$ 1.8 & 63.68 $\pm$ 6.8\\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering {\small \caption{ {\textsc{HLCL}}\xspace{} with different graph filters in the final encoder. HP Concatenate LP represents using both high-pass filters and concatenating the output representations; HP Aggregate LP represents using both high-pass filters and aggregating the output representations.}\label{tab:filter} \begin{tabular} {c c c| c c c} \toprule &\multicolumn{2}{c}{Homophily}&\multicolumn{3}{|c}{Heterophily}\\\cmidrule{2-6} &Cora & CiteSeer & Chameleon & Squirrel & Texas\\\midrule \multicolumn{1}{c|}{LP} & \textbf{77.09} $\pm$ 1.7 & \textbf{64.34} $\pm$ 1.5 & \textbf{48.04} $\pm$ 3.1 & \textbf{33.44} $\pm$ 1.7 & 64.21 $\pm$ 11.2\\ \multicolumn{1}{c|}{HP} & 51.94 $\pm$ 2.9 & 36.22 $\pm$ 2.5 & 35.58 $\pm$ 3.1 & 29.83 $\pm$ 1.6 & \textbf{65.26} $\pm$ 10.3\\ \multicolumn{1}{c|}{HP Concatenate LP} & 66.47 $\pm$ 3.6 & 55.96 $\pm$ 2.6 & 41.09 $\pm$ 3.6 & 30.86 $\pm$ 1.9 & 65.21 $\pm$ 9.4 \\ \multicolumn{1}{c|}{HP Aggregate LP} & 65.33 $\pm$ 2.8 & 53.47 $\pm$ 3.1 & 38.77 $\pm$ 3.4 & 28.00 $\pm$ 1.5 & 63.15 $\pm$ 11.7 \\ \bottomrule \end{tabular} } \end{table*} \begin{table*}[t] \centering {\small \caption{ Using high-pass filter in existing Graph CL methods. 
Significant performance drops are highlighted in gray.}\label{tab:swap} \begin{tabular} {c c c| c c c} \toprule &\multicolumn{2}{c}{Homophily}&\multicolumn{3}{|c}{Heterophily}\\\cmidrule{2-3}\cmidrule{4-6} &Cora & CiteSeer & Chameleon & Squirrel & Texas\\\midrule \multicolumn{1}{c|}{\textbf{{\textsc{HLCL}}\xspace}} & 82.34 $\pm$ 2.7 & 71.35 $\pm$ 1.4 & \textbf{49.78} $\pm$ 1.8 & \textbf{34.11} $\pm$ 1.9 & 65.26 $\pm$ 7.8\\\midrule \multicolumn{1}{c|}{DGI:low} & 82.60 $\pm$ 0.4 & 68.80 $\pm$ 0.7 & 40.48 $\pm$ 1.9 & 27.95 $\pm$ 0.9 & 60.00 $\pm$ 10.8 \\ \multicolumn{1}{c|}{DGI:high} & \colorbox{Mygray}{31.95} $\pm$ 2.8 & \colorbox{Mygray}{30.54} $\pm$ 1.7 & 38.38 $\pm$ 3.5 & 29.57 $\pm$ 1.2 & 62.11 $\pm$ 10.2\\\midrule \multicolumn{1}{c|}{BGRL:low} & 74.67 $\pm$ 0.6 & 64.25 $\pm$ 2.4 & 38.20 $\pm$ 1.4 & 26.03 $\pm$ 0.9 & 61.05 $\pm$ 11.1\\ \multicolumn{1}{c|}{BGRL:high} & \colorbox{Mygray}{29.63} $\pm$ 2.8 & \colorbox{Mygray}{24.99} $\pm$ 3.1 & 35.41 $\pm$ 3.4 & 29.57 $\pm$ 1.1 & 65.26 $\pm$ 8.2 \\\midrule \multicolumn{1}{c|}{GRACE:low} & \textbf{83.30} $\pm$ 0.4 & \textbf{72.10} $\pm$ 0.5 & 37.37 $\pm$ 0.6 & 28.67 $\pm$ 0.9 & 62.6 $\pm$ 7.2\\ \multicolumn{1}{c|}{GRACE:high} & \colorbox{Mygray}{32.46} $\pm$ 2.0 & \colorbox{Mygray}{26.55} $\pm$ 3.1 & 39.65 $\pm$ 2.9 & 33.05 $\pm$ 2.1 & \textbf{72.63} $\pm$ 9.3 \\ \bottomrule \end{tabular} } \end{table*} \noindent\textbf{{\textsc{HLCL}}\xspace{} with Explicit Augmentation.} Next, we study the effect of explicit graph augmentation methods applied to both the high-pass and the typical low-pass GNN channels of {\textsc{HLCL}}\xspace, under different homophily ratios. We include Edge Removing (ER), Node Dropping (ND), Edge Adding (EA), Feature Masking (FM), and Feature Dropout (FD) as graph augmentation methods. Table \ref{tab:hl-ablation} shows that combined with {\textsc{HLCL}}\xspace{}, feature and topology augmentations considerably benefit the performance on graphs with homophily. Topology augmentation, however, may not be helpful under heterophily. \noindent\textbf{Output Encoder with Different Graph Filters.} In our experiments, we use the low-pass graph filter in our GNN encoder as the final representation encoder. To illustrate the effectiveness of this structural design, we present an ablation study on using different filters or combinations of filters in our final encoder. The results are shown in Table \ref{tab:filter}. We see that on most datasets the low-pass encoder achieves the highest accuracy by a large margin, and variants involving the high-pass encoder perform worse across all datasets except Texas, which has a nearly zero homophily ratio. This demonstrates the ineffectiveness of using the high-pass filter directly in the representation encoder. \noindent\textbf{Existing Graph CL Methods with High-pass Encoders.} Finally, we illustrate the importance of our contrastive structure. We show that the main accuracy gains on heterophily datasets come from contrasting the low- and high-pass filtered graph views, rather than from the introduction of the high-pass filter alone. Indeed, the high-pass filter alone cannot provide high-quality representations under heterophily, as different combinations of low-pass and high-pass filters are required to provide good representations for different graphs. Table \ref{tab:swap} shows that by replacing the low-pass filter with the high-pass filter in other graph CL methods, such methods achieve better accuracy on some heterophily datasets like Squirrel and Texas.
However, their performance deteriorates significantly under homophily, where aggregation within neighborhoods is desired. Indeed, when using either the high-pass or the low-pass filter alone, the model cannot achieve good performance.\looseness=-1 \section{Related Work} \textbf{(Semi-)supervised learning on graphs.} In recent years, GNNs have become one of the most prominent tools for processing graph-structured data. In general, GNNs utilize the adjacency matrix to learn the node representations, by aggregating information within every node's neighborhood \citep{defferrard2016convolutional,kipf2016semi}. Existing variants, including GraphSAGE \citep{hamilton2017inductive}, Graph Attention (GAT) \citep{velivckovic2017graph}, MixHop \citep{abu2019mixhop}, SGC \citep{nt2019revisiting}, and GIN \citep{xu2018powerful}, learn a more general class of neighborhood mixing relationships, by aggregating weighted information within a multi-hop neighborhood of every node. GNNs can generally be seen as applying a fixed, or a parametric and learnable (e.g., GAT), low-pass graph filter to graph signals. Those with trainable parameters can adapt to a wider range of frequency levels on different graphs. However, they still have a higher emphasis on lower-frequency signals and discard the high-frequency signals in a graph. While the aggregation operation makes GNNs powerful tools for semi-supervised learning, it can make the learned node representations indistinguishable in a neighborhood \citep{nt2019revisiting}. As a result, typical GNNs and their variants have long been criticized for their poor generalization performance under heterophily \citep{balcilar2020analyzing}. \noindent\textbf{(Semi-)supervised learning under heterophily.} To address the over-smoothing issue of GNNs, recent methods propose to use other types of aggregation that better fit graphs with heterophily. Geom-GCN uses geometric aggregation in place of the typical aggregation \citep{pei2020geom}, H$_2$GCN \citep{zhu2020beyond} uses several special model designs, including separate aggregation and higher-neighborhood aggregation, to handle graphs with heterophily, and CPGNN trains a compatibility matrix to model the heterophily level \citep{zhu2020graph}. More recently, \cite{wang2019demystifying} proposed to learn an aggregation filter for every graph from a set of base filters designed based on different ways of normalizing the adjacency matrix. Most recently, \cl{GGCN introduced degree corrections and signed message passing on GCN to address both the over-smoothing problem and the model's poor performance on heterophily graphs \citep{yan2021two}. \cite{zhu2021interpreting} analyzed GNN propagation in a unified framework and proposed GNN-LF and GNN-HF, which preserve information of different frequencies separately by using different filtering kernels with learnable weights.} FAGCN \citep{bo2021beyond} and FBGNN \citep{luan2020complete} train two \textit{separate} encoders to capture the high-pass and low-pass graph signals separately. Then they rely on labels to learn relatively complex mechanisms to combine the outputs of the encoders. However, learning how to combine the encoder outputs is highly sensitive to having high-quality labels. This makes such methods highly impractical for unsupervised contrastive learning, where the label information is not available.
Unlike the above supervised methods, we leverage the high-pass filter in the \textit{same encoder} to generate augmented graph views that are \textit{contrasted} with their low-pass counterparts to learn unsupervised representations. This is in contrast to learning the best combination of filtered signals of different encoders based on labels.
\noindent\textbf{Contrastive learning on graphs.} For contrastive learning on graphs, global graph-level and local node-level data are augmented and contrasted in different ways. DGI \citep{velickovic2019deep} and GMI \citep{peng2020graph} contrast graph and node representations within one augmented view of the original graph. More recent methods contrast global and local representations in two augmented views. \textsc{GraphCL} generates graph augmentations by subgraph sampling, node dropping, and edge perturbation and contrasts the augmented graph representations. GCC samples and contrasts subgraphs of the original graph \citep{qiu2020gcc}. MVGRL leverages node diffusion to augment the graph and contrasts the node representations \citep{hassani2020contrastive}. More recently, contrasting the local node representations has been shown to achieve state-of-the-art performance. \textsc{GRACE} contrasts the node representations in two graph views augmented with feature masking and edge removal \citep{zhu2020deep}. GCA extends this by dropping the less important edges and features, based on node centrality and feature importance metrics \citep{zhu2021graph}. A thorough empirical study on the combinatorial effect of different augmentations has been conducted by \cite{zhu2021empirical}. Due to the complexity of collecting negative samples in graph data, negative-samples-free contrastive objectives have also been studied. Among existing methods, BGRL, which uses a bootstrapping latent loss \citep{thakoor2021large}, and GBT, which uses the Barlow Twins loss \citep{bielak2021graph}, are the most successful. Existing graph CL methods explicitly augment the input graph and contrast the augmented graph representations obtained with {low-pass} GNN-based encoders. In doing so, they only capture the similarity of nodes in a neighborhood. Hence, they perform poorly on graphs with heterophily. In contrast, we leverage low-pass and high-pass graph filters in the same GNN-based encoder to capture and contrast the similarity and dissimilarity of nodes with their neighborhoods. This allows us to achieve state-of-the-art performance under heterophily.
\section{Conclusion} We proposed {\textsc{HLCL}}\xspace{}, a contrastive learning framework that leverages a high-pass graph filter to generate non-smooth augmented representations to be contrasted with their smooth counterparts, generated using the same encoder. This is particularly beneficial for graphs with heterophily, where existing graph CL methods are highly ineffective. Through extensive experiments, we demonstrated that our proposed framework achieves up to a 10\% boost in performance for representation learning under a linear probe, for various graphs under heterophily. At the same time, it provides comparable performance on graphs with homophily. We believe our work provides an important direction for future work on contrastive learning under heterophily.
To explain the effect of high-pass and low-pass filters and to motivate using both filters in a contrastive way, we construct a simple graph with seven nodes. Nodes are separated into two classes, labeled as red and green as shown in the figure.
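The construction and the filtering step described in the next paragraph can be sketched as follows. This is a minimal NumPy illustration only: the seven-node adjacency, the number of filtering rounds, and the use of the symmetrically normalized adjacency and Laplacian as the low- and high-pass filters are assumptions made here for concreteness, and may differ in detail from Eq.~(\ref{eq:operations_A}) and (\ref{eq:operations_L}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 7-node toy graph: nodes 0-3 form the "red" class, 4-6 the "green" class.
A = np.zeros((7, 7))
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (4, 5), (5, 6), (3, 4)]  # mostly homophilous
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Class-conditional Gaussian features (200 dims), means -5 and +5, std 1.
labels = np.array([0, 0, 0, 0, 1, 1, 1])
X = rng.normal(loc=np.where(labels[:, None] == 0, -5.0, 5.0),
               scale=1.0, size=(7, 200))

# Symmetrically normalized adjacency with self-loops (low-pass) and its Laplacian (high-pass).
A_hat = A + np.eye(7)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]   # low-pass filter
L_norm = np.eye(7) - A_norm                                  # high-pass filter

X_low, X_high = X.copy(), X.copy()
for _ in range(3):            # a few rounds of aggregation / diversification
    X_low = A_norm @ X_low
    X_high = L_norm @ X_high

# Under high homophily the low-pass output stays class-separable; the
# high-pass output instead keeps only the differences across neighborhoods.
print(X_low[:, 0].round(2), X_high[:, 0].round(2))
\end{verbatim}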
First, we generate 200-dimensional feature vectors for nodes from the red and green classes based on two Gaussian distributions, centered at $-5$ and $5$, respectively, with a standard deviation of 1. Then, we perform graph aggregation or diversification with the low- and high-pass filters separately, following Eqs.~(\ref{eq:operations_A}) and (\ref{eq:operations_L}). For a clearer illustration, we do not project the feature vectors into other dimensions.\\ We consider three different homophily ratios and run the node features through the high-pass and low-pass filters for a few iterations. \begin{figure} \caption{Case 1: High homophily ratio graph} \label{fig:toy_2} \end{figure} \begin{figure} \caption{Case 2: A mixture of homophily and heterophily components} \label{fig:toy_3} \end{figure} \begin{figure} \caption{Case 3: Very low homophily ratio} \label{fig:toy_graph} \end{figure} \end{document}
\begin{document} \ifSTOC \conferenceinfo{STOC'13,} {June 14, 2013, Palo Alto, California, USA.} \CopyrightYear{2013} \crdata{978-1-4503-2029-0/13/06} \clubpenalty=10000 \widowpenalty = 10000 \fi \title{Low Rank Approximation and Regression in \\ Input Sparsity Time} \ifSTOC \numberofauthors{1} \author{\alignauthor Kenneth L. Clarkson and David P. Woodruff \\ \affaddr{IBM Research - Almaden} \\ \affaddr{ San Jose, CA} \\ \email{[email protected], [email protected]} } \else \author{Kenneth L. Clarkson\\IBM Almaden \and David P. Woodruff\\IBM Almaden} \fi \maketitle \begin{abstract} We design a new distribution over ${\mathrm{poly}}(r \varepsilon^{-1}) \times n$ matrices $S$ so that for any fixed $n \times d$ matrix $A$ of rank $r$, with probability at least $9/10$, $\norm{SAx}_2 = (1 \pm \varepsilon)\norm{Ax}_2$ simultaneously for all $x \in \mathbb{R}^d$. Such a matrix $S$ is called a \emph{subspace embedding}. Furthermore, $SA$ can be computed in $O(\nnz(A))$ time, where $\nnz(A)$ is the number of non-zero entries of $A$. This improves over all previous subspace embeddings, which required at least $\Omega(nd \log d)$ time to achieve this property. We call our matrices $S$ \emph{sparse embedding matrices}. Using our sparse embedding matrices, we obtain the fastest known algorithms for overconstrained least-squares regression, low-rank approximation, approximating all leverage scores, and $\ell_p$-regression: \begin{itemize} \item to output an $x'$ for which \[ \norm{Ax'-b}_2 \leq (1+\varepsilon)\min_x \norm{Ax-b}_2 \] for an $n \times d$ matrix $A$ and an $n \times 1$ column vector $b$, we obtain an algorithm running in $O(\nnz(A)) + \tO(d^3\varepsilon^{-2})$ time, and another in $O(\nnz(A)\log(1/\varepsilon)) + \tO(d^3\log(1/\varepsilon))$ time. (Here $\tO(f) = f \cdot \log^{O(1)}(f)$.) \item to obtain a decomposition of an $n \times n$ matrix $A$ into a product of an $n \times k$ matrix $L$, a $k \times k$ diagonal matrix $D$, and an $n \times k$ matrix $W$, for which $$\normF{A - L D W^\top} \leq (1+\varepsilon)\normF{A-A_k},$$ where $A_k$ is the best rank-$k$ approximation, our algorithm runs in \[ O(\nnz(A)) + \tilde O(nk^2\varepsilon^{-4} + k^3\varepsilon^{-5}) \] time. \item to output an approximation to all leverage scores of an $n \times d$ input matrix $A$ simultaneously, with constant relative error, our algorithms run in $O(\nnz(A) \log n) + \tO(r^3)$ time. \item to output an $x'$ for which \[ \norm{Ax'-b}_p \leq (1+\varepsilon)\min_x \norm{Ax-b}_p \] for an $n \times d$ matrix $A$ and an $n \times 1$ column vector $b$, we obtain an algorithm running in $O(\nnz(A) \log n) + {\mathrm{poly}}(r \varepsilon^{-1})$ time, for any constant $1 \leq p < \infty$. \end{itemize} We optimize the polynomial factors in the above stated running times, and show various tradeoffs. Finally, we provide preliminary experimental results which suggest that our algorithms are of interest in practice. \end{abstract} \ifSTOC \category{F.2.1}{Numerical Algorithms and Problems}{Computations on matrices} \terms{Algorithms, Theory } \fi \section{Introduction} A large body of work has been devoted to the study of fast randomized approximation algorithms for problems in numerical linear algebra. Several well-studied problems in this area include least squares regression, low rank approximation, and approximate computation of leverage scores. 
These problems have many applications in data mining \cite{afkms01}, recommendation systems \cite{dkr02}, information retrieval \cite{prtv00}, web search \cite{afkm01,k99}, clustering \cite{dfkvv04,m01}, and learning mixtures of distributions \cite{ksv08,am05}. The use of randomization and approximation allows one to solve these problems much faster than with deterministic methods. For example, in the overconstrained least-squares regression problem, we are given an $n \times d$ matrix $A$ of rank $r$ as input, $n \gg d$, together with an $n \times 1$ column vector $b$. The goal is to output a vector $x'$ so that with high probability, $\|Ax'-b\|_2 \leq (1+\varepsilon)\min_x \|Ax-b\|_2$. The minimizing vector $x^*$ can be expressed in terms of the Moore-Penrose pseudoinverse $A^-$ of $A$, namely, $x^* = A^-b$. If $A$ has full column rank, this simplifies to $x^* = (A^\top A)^{-1}A^\top b$. This minimizer can be computed deterministically in $O(nd^2)$ time, but with randomization and approximation, this problem can be solved in $O(nd \log d) + {\mathrm{poly}}(d \varepsilon^{-1})$ time \cite{s06,dmms11}, which is much faster for $d \ll n$ and $\epsilon$ not too small. The generalization of this problem to $\ell_p$-regression is to output a vector $x'$ so that with high probability $\|Ax'-b\|_p \leq (1+\varepsilon)\min_x \|Ax-b\|_p$. This can be solved exactly using convex programming, though with randomization and approximation it is possible to achieve $O(nd \log n) + {\mathrm{poly}}(d \varepsilon^{-1})$ time \cite{CDMMMW} for any constant $p$, $1 \leq p < \infty$. Another example is low rank approximation. Here we are given an $n \times n$ matrix (which can be generalized to $n \times d$) and an input parameter $k$, and the goal is to find an $n \times n$ matrix $A'$ of rank at most $k$ for which $\|A'-A\|_F \leq (1+\varepsilon)\|A-A_k\|_F$, where for an $n \times n$ matrix $B$, $\|B\|^2_F \equiv \sum_{i=1}^n \sum_{j=1}^n B_{i,j}^2$ is the squared Frobenius norm, and $A_k\equiv \argmin_{\rank B \le k }\|A-B\|_F$. Here $A_k$ can be computed deterministically using the singular value decomposition in $O(n^3)$ time. However, using randomization and approximation, this problem can be solved in $O(\nnz(A) \cdot (k/\varepsilon + k \log k) + n \cdot {\mathrm{poly}}(k/\varepsilon))$ time\cite{s06, cw09}, where $\nnz(A)$ denotes the number of non-zero entries of $A$. The problem can also be solved using randomization and approximation in $O(n^2 \log n) + n \cdot {\mathrm{poly}}(k/\varepsilon)$ time \cite{s06}, which may be faster than the former for dense matrices and large $k$. Another problem we consider is approximating the {\it leverage scores}. Given an $n \times d$ matrix $A$ with $n \gg d$, one can write $A = U \Sigma V^\top $ in its singular value decomposition, where the columns of $U$ are the left singular vectors, $\Sigma$ is a diagonal matrix, and the columns of $V$ are the right singular vectors. Although $U$ has orthonormal columns, not much can be immediately said about the squared lengths $\|U_i\|_2^2$ of its rows. These values are known as the leverage scores, and measure the extent to which the singular vectors of $A$ are correlated with the standard basis. The leverage scores are basis-independent, since they are equal to the diagonal elements of the projection matrix onto the span of the columns of $A$; see \cite{DMMW12} for background on leverage scores as well as a list of applications. The leverage scores will also play a crucial role in our work, as we shall see. 
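For concreteness, the exact leverage scores can be read off from any orthonormal basis of the column space of $A$, e.g.\ via a thin SVD. The following short NumPy sketch is only an illustration of this definition (it is not the fast approximation algorithm developed in this paper); it also checks, on a random input, that the row norms of $U$ agree with the diagonal of the projection onto the column space of $A$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 20
A = rng.standard_normal((n, d)) * rng.exponential(size=n)[:, None]  # skewed rows

# Thin SVD: A = U diag(s) Vt, with U having orthonormal columns.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
leverage = np.sum(U**2, axis=1)        # i-th score = squared norm of i-th row of U

# Basis-independence: same values as the diagonal of the projection onto C(A).
P = A @ np.linalg.pinv(A)
assert np.allclose(leverage, np.diag(P))

print(leverage.max(), leverage.sum())  # each score is at most 1; they sum to rank(A)
\end{verbatim}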
The goal of approximating the leverage scores is to output, simultaneously for each $i \in [n]$, a constant-factor approximation to $\|U_i\|_2^2$. Using randomization, this can be solved in $O(nd \log n + d^3 \log d \log n)$ time \cite{DMMW12}. There are also solutions for these problems based on sampling. They either get a weaker additive error \cite{fkv04,prtv00,am07,dkm06,dkm06a,dkm06b,dm05,rv07,drvw06}, or they get bounded relative error but are slow \cite{dv06,dmm06,dmm06b,dmm06c}. Many of the latter algorithms were improved independently by Deshpande and Vempala \cite{dv06} and Sarl\'os \cite{s06}, and in followup work \cite{dmms11,ndt09,mz11}. There are also solutions based on iterative and conjugate-gradient methods; see, e.g., \cite{tb_nla}, or \cite{zf12} for recent examples. These methods repeatedly compute matrix-vector products $Ax$ for various vectors $x$; in the most common setting, such products require $\Theta(\nnz(A))$ time. Thus the work per iteration of these methods is $\Theta(\nnz(A))$, and the number of iterations $N$ that are performed depends on the desired accuracy, spectral properties of $A$, numerical stability issues, and other concerns, and can be large. A recent survey suggests that $N$ is typically $\Theta(k)$ for Krylov methods (such as Arnoldi and Lanczos iterations) to approximate the $k$ leading singular vectors \cite{HMT}. One can also use some of these techniques together, for example by first obtaining a preconditioner using the Johnson-Lindenstrauss (JL) transform, and then running an iterative method. While these results illustrate the power of randomization and approximation, their main drawback is that they are not optimal. For example, for regression, ideally we could hope for $O(\nnz(A)) + {\mathrm{poly}}(d/\varepsilon)$ time. While the $O(nd \log d) + {\mathrm{poly}}(d/\varepsilon)$ time algorithm for least squares regression is almost optimal for {\it dense} matrices, if $\nnz(A)\ll nd$, say $\nnz(A)=O(n)$, as commonly occurs, this could be much worse than an $O(\nnz(A)) + {\mathrm{poly}}(d/\varepsilon)$ time algorithm. Similarly, for low rank approximation, the best known algorithms that are condition-independent run in $O(\nnz(A) (k/\varepsilon + k \log k) + n \cdot {\mathrm{poly}}(k/\varepsilon))$ time, while we could hope for $O(\nnz(A)) + {\mathrm{poly}}(k/\varepsilon)$ time. \subsection{Results} We resolve the above gaps by achieving algorithms for least squares regression, low rank approximation, and approximate leverage scores, whose time complexities have a leading order term that is $O(\nnz(A))$, sometimes up to a log factor, with constant factors that are independent of any numerical properties of $A$. Our results are as follows: \begin{itemize} \item {\bf Least Squares Regression:} We present several algorithms for an $n \times d$ matrix $A$ with rank $r$ and given $\varepsilon > 0$. One has a running time bound of $O(\nnz(A)\log (n/\varepsilon) + r^3 \log^2 r + r^2\log(1/\varepsilon))$, stated in Theorem~\ref{thm:it reg}. (Note the logarithmic dependence on $\varepsilon$; a variation of this algorithm has $O(\nnz(A)\log(1/\varepsilon)+ d^3 \log^2 d + d^2\log(1/\varepsilon))$ running time.) Another has running time $O(\nnz(A)) + \tO(d^3 \varepsilon^{-2})$, stated in Theorem~\ref{thm:lin reg}; note that the dependence on $\nnz(A)$ is linear.
We also give an algorithm for generalized (multiple-response) regression, where $\min_X \norm{AX-B}$ is found for $B\in{\mathbb R}^{n\times d'}$, in time \[ O(\nnz(A)\log n + r^2((r+d')\varepsilon^{-1} + rd' + r\log^2 r + \log n)); \] see Theorem~\ref{thm:renRegAlg}. We also note improved results for constrained regression, \S\ref{subsec:constrained}. \item {\bf Low Rank Approximation:} We achieve running time $O(\nnz(A)) + n \cdot {\mathrm{poly}}(k(\log n)/\varepsilon)$ to find an orthonormal $L,W\in{\mathbb R}^{n\times k}$ and diagonal $D\in{\mathbb R}^{k\times k}$ matrix with $\norm{A-LDW^\top}_F$ within $1+\varepsilon$ of the error of the best rank-$k$ approximation. More specifically, Theorem~\ref{thm:SVD} gives a time bound of \[ O(\nnz(A)) + \tilde O(nk^2\varepsilon^{-4} + k^3\varepsilon^{-5}). \] \item {\bf Approximate Leverage Scores:} For any fixed constant $\varepsilon > 0$, we simultaneously $(1+\varepsilon)$-approximate all $n$ leverage scores in $O(\nnz(A) \log n + r^3 \log^2 r + r^2 \log n)$ time. This can be generalized to sub-constant $\varepsilon$ to achieve $O(\nnz(A) \log n) + {\mathrm{poly}}(r/\varepsilon)$ time, though in the applications we are aware of, such as coresets for regression \cite{ddhkm09}, $\varepsilon$ is typically constant (in the applications of this, a general $\varepsilon > 0$ can be achieved by over-sampling \cite{dmm06,ddhkm09}). \item {\bf $\ell_p$-Regression:} For $p \in [1,\infty)$ we achieve running time $O(\nnz(A) \log n) + {\mathrm{poly}}(r \varepsilon^{-1})$ in Theorem \ref{thm:lp-running} as an immediate corollary of our results and a recent connection between $\ell_2$ and $\ell_p$-regression given in \cite{CDMMMW} (for $p = 2$, the $\nnz(A) \log n$ term can be improved to $\nnz(A)$ as stated above). \end{itemize} \subsection{Techniques} All of our results are achieved by improving the time complexity of computing what is known as a {\it subspace embedding}. For a given $n\times d$ matrix $A$, call $S:{\mathbb R}^n\mapsto{\mathbb R}^t$ a \emph{subspace embedding matrix} for $A$ if, for all $x\in{\mathbb R}^d$, $\|SAx\|_2 = (1 \pm \varepsilon) \|Ax\|_2$. That is, $S$ embeds the column space $C(A)\equiv \{Ax \mid x\in {\mathbb R}^d\}$ into ${\mathbb R}^t$ while approximately preserving the norms of all vectors in that subspace. The \emph{subspace embedding problem} is to find such an embedding matrix obliviously, that is, to design a distribution $\pi$ over linear maps $S:{\mathbb R}^n\mapsto{\mathbb R}^t$ such that for any fixed $n \times d$ matrix $A$, if we choose $S \sim \pi$ then with large probability, $S$ is an embedding matrix for $A$. The goal is to minimize $t$ as a function of $n, d,$ and $\varepsilon$, while also allowing the matrix-matrix product $S \cdot A$ to be computed quickly. (A closely related construction, easily derived from a subspace embedding, is an \emph{affine embedding}, involving an additional matrix $B\in{\mathbb R}^{n\times d'}$, such that \ifSTOC $\norm{AX-B}_F\approx \norm{S(AX-B)}_F$, \else \[\norm{AX-B}_F\approx \norm{S(AX-B)}_F,\] \fi for all $X\in{\mathbb R}^{d\times d'}$; see \S\ref{subsec:genAff}. These affine embeddings are used for our low-rank approximation results, and immediately imply approximation algorithms for constrained regression.) By taking $S$ to be a Fast Johnson Lindenstrauss transform, one can set $t = O(d/\varepsilon^2)$ and achieve $O(nd \log t)$ time for $d < n^{1/2-\gamma}$ for any constant $\gamma > 0$. 
One can also take $S$ to be a subsampled randomized Hadamard transform, or SRHT (see, e.g., Lemma 6 of \cite{BG}) and set $t = O(\varepsilon^{-2} (\log d)(\sqrt{d}+\sqrt{\log n})^2)$, to achieve $O(nd \log t)$ time. These were the fastest known subspace embeddings achieving any value of $t$ not depending polynomially on $n$. Our main result improves this to achieve $t= {\mathrm{poly}}(d/\varepsilon)$ for matrices $S$ for which $SA$ can be computed in $\nnz(A)$ time! Given our new subspace embedding, we plug it into known methods of solving the above linear algebra problems given a subspace embedding as a black box. In fact, our subspace embedding is nothing other than the {\sf CountSketch} matrix in the data stream literature \cite{ccf04}, see also \cite{tz04}. This matrix was also studied by Dasgupta, Kumar, and Sarl\'os \cite{dks10}. Formally, $S$ has a single randomly chosen non-zero entry $S_{h(j), j}$ in each column $j$, for a random mapping $h : [n] \mapsto [t]$. With probability $1/2$, $S_{h(j), j} = 1$, and with probability $1/2$, $S_{h(j), j} = -1$. While such matrices $S$ have been studied before, the surprising fact is that they actually provide subspace embeddings. Indeed, the usual way of proving that a random $S \sim \pi$ is a subspace embedding is to show that for any fixed vector $y \in \mathbb{R}^n$, $\Pr[\|Sy\|_2 = (1 \pm \varepsilon) \|y\|_2] \geq 1-\exp(-d)$. One then puts a net (see, e.g., \cite{ahk06}) on the unit vectors in the column space $C(A)$, and argues by a union bound that $\|Sy\|_2 = (1 \pm \varepsilon)\|y\|_2$ for all net points $y$. This then implies, for a net that is sufficiently fine, and using the linearity of the mapping, that $\|Sy\|_2 = (1 \pm \varepsilon)\norm{y}_2$ for all vectors $y\in C(A)$. We stress that our choice of matrices $S$ does not preserve the norms of an arbitrary set of $\exp(d)$ vectors with high probability, and so the above approach cannot work for our choice of matrices $S$. We instead critically use that these $\exp(d)$ vectors all come from a $d$-dimensional subspace (namely, $C(A)$), and therefore have a very special structure. The structural fact we use is that there is a fixed set $H$ of size $d/\alpha$ which depends only on the subspace, such that for any unit vector $y\in C(A)$, $H$ contains the indices of all coordinates of $y$ larger than $\sqrt{\alpha}$ in magnitude. The key property here is that the set $H$ is independent of $y$, or in other words, only a small set of coordinates could ever be large as we range over all unit vectors in the subspace. The set $H$ selects exactly the set of large leverage scores of the column space $C(A)$! Given this observation, by setting $t \geq K |H|^2$ for a large enough constant $K$, we have that with probability $1-1/K$, there are no two distinct $j \neq j'$ with $j, j' \in H$ for which $h(j) = h(j')$. That is, we avoid the birthday paradox, and the coordinates in $H$ are ``perfectly hashed'' with large probability. Call this event $\mathcal{E}$, which we condition on. Given a unit vector $y$ in the subspace, we can write it as $y^H + y^L$, where $y^H$ consists of $y$ with the coordinates in $[n] \setminus H$ replaced with $0$, while $y^L$ consists of $y$ with the coordinates in $H$ replaced with $0$. We seek to bound $$\|Sy\|_2^2 = \|Sy^H\|_2^2 + \|Sy^L\|_2^2 + 2\langle Sy^H, Sy^L \rangle.$$ Since $\mathcal{E}$ occurs, we have the isometry $\|Sy^H\|_2^2 = \|y^H\|_2^2$.
Now, $\|y^L\|^2_{\infty} < \alpha $, and so we can apply Theorem 2 of \cite{dks10} which shows that for mappings of our form, if the input vector has small infinity norm, then $S$ preserves the norm of the vector up to an additive $O(\varepsilon)$ factor with high probability. Here, it suffices to set $\alpha = 1/{\mathrm{poly}}(d/\varepsilon)$. Finally, we can bound $\langle Sy^H, Sy^L \rangle$ as follows. Define $G \subseteq [n] \setminus H$ to be the set of coordinates $j$ for which $h(j) = h(j')$ for a coordinate $j' \in H$, that is, those coordinates in $[n] \setminus H$ which ``collide'' with an element of~$H$. Then, $\langle Sy^H, Sy^L \rangle = \langle Sy^H, Sy^{L'} \rangle$, where $y^{L'}$ is a vector which agrees with $y^L$ on coordinates $j \in G$, and is $0$ on the remaining coordinates. By Cauchy-Schwarz, this is at most $\|Sy^H\|_2 \cdot \|Sy^{L'}\|_2$. We have already argued that $\|Sy^H\|_2 = \|y^H\|_2 \leq 1$ for unit vectors $y$. Moreover, we can again apply Theorem 2 of \cite{dks10} to bound $\|Sy^{L'}\|_2$, since, conditioned on the coordinates of $y^{L'}$ hashing to the set of items that the coordinates of $y^H$ hash to, they are otherwise random, and so we again have a mapping of our form (with a smaller $t$ and applied to a smaller $n$) applied to a vector with small infinity-norm. Therefore, $\|Sy^{L'}\|_2 \leq O(\varepsilon) + \|y^{L'}\|_2$ with high probability. Finally, by Bernstein bounds, since the coordinates of $y^L$ are small and $t$ is sufficiently large, $\|y^{L'}\|_2 \leq \varepsilon$ with high probability. Hence, conditioned on event $\mathcal{E}$, $\|Sy\|_2 = (1 \pm \varepsilon)\|y\|_2$ with probability $1-\exp(-d)$, and we can complete the argument by union-bounding over a sufficiently fine net. We note that an inspiration for this work comes from work on estimating norms in a data stream with efficient update time by designing separate data structures for the heavy and the light components of a vector \cite{nw10,knpw11}. A key concept here is to characterize the heaviness of coordinates in a vector space in terms of its leverage scores. \\\\ {\bf Optimizing the additive term:} The above approach already illustrates the main idea behind our subspace embedding, providing the first known subspace embedding that can be implemented in $\nnz(A)$ time. This is sufficient to achieve our numerical linear algebra results in time $O(\nnz(A)) + {\mathrm{poly}}(d/\varepsilon)$ for regression and $O(\nnz(A)) + n \cdot {\mathrm{poly}}(k\log(n)/\varepsilon)$ for low rank approximation. However, for some applications $d, k,$ or $1/\varepsilon$ may also be large, and so it is important to achieve a small degree in the additive ${\mathrm{poly}}(d/\varepsilon)$ and $n \cdot {\mathrm{poly}}(k\log(n)/\varepsilon)$ factors. The number of rows of the matrix $S$ is $t = {\mathrm{poly}}(d/\varepsilon)$, and the simplest analysis described above would give roughly $t = (d/\varepsilon)^8$. We now show how to optimize this. The first idea for bringing this down is that the analysis of \cite{dks10} can itself be tightened by using that we are applying it on vectors coming from a subspace instead of on a set of arbitrary vectors. This involves observing that in the analysis of \cite{dks10}, if on input vector $y$ and for every $i \in [t]$, $\sum_{j \mid h(j) = i} y_j^2$ is small then the remainder of the analysis of \cite{dks10} does not require that $\|y\|_{\infty}$ be small. 
Since our vectors come from a subspace, it suffices to show that for every $i \in [t]$, $\sum_{j \mid h(j) = i} \|U_j\|_2^2$ is small, where $\|U_j\|_2^2$ is the $j$-th leverage score of $A$. Therefore we do not need to perform this analysis for each $y$, but can condition on a single event, and this effectively allows us to increase $\alpha$ in the outline above, thereby reducing the size of $H$, and also the size of $t$ since we have $t = \Omega(|H|^2)$. In fact, we instead follow a simpler and slightly tighter analysis of \cite{KN12} based on the Hanson-Wright inequality. Another idea is that the estimation of $\|y^H\|_2$, the contribution from the ``heavy coordinates'', is inefficient since it requires a perfect hashing of the coordinates, which can be optimized to reduce the additive term to $d^2 \varepsilon^{-2} {\mathrm{polylog}}(d/\varepsilon)$. In the worst case, there are $d$ leverage scores of value about $1$, $2d$ of value about $1/2$, $4d$ of value about $1/4$, etc. While the top $d$ leverage scores need to be perfectly hashed (e.g., if $A$ contains the $d \times d$ identity matrix as a submatrix), it is not necessary that the leverage scores of smaller value, yet still larger than $1/d$, be perfectly hashed. Allowing a small number of collisions is okay provided all vectors in the subspace have small norm on these collisions, which just corresponds to the spectral norm of a submatrix of $A$. This gives an additive term of $d^2 \varepsilon^{-2} {\mathrm{polylog}}(d/\varepsilon)$ instead of $O(d^4 \varepsilon^{-4})$. This refinement is discussed in Section \S\ref{sec:partition}. There is yet another way to optimize the additive term to roughly $d^2 (\log n)/\varepsilon^4$, which is useful in its own right since the error probability of the mapping can now be made very low, namely, $1/{\mathrm{poly}}(n)$. This low error probability bound is needed for our application to $\ell_p$-regression, see Section \ref{sec: ell_p}. By standard balls-and-bins analyses, if we have $O(d^2/\log n)$ bins and $d^2$ balls, then with high probability each bin will contain $O(\log n)$ balls. We thus make $t$ roughly $O(d^2/ \log n)$ and think of having $O(d^2/\log n)$ bins. In each bin $i$, $O(\log n)$ heavy coordinates $j$ will satisfy $h(j) = i$. Then, we apply a separate JL transform on the coordinates that hash to each bin $i$. This JL transform maps a vector $z \in \mathbb{R}^n$ to an $O((\log n) / \varepsilon^2)$-dimensional vector $z'$ for which $\|z'\|_2 = (1 \pm \varepsilon) \|z\|_2$ with probability at least $1-1/{\mathrm{poly}}(n)$. Since there are only $O(\log n)$ heavy coordinates mapping to a given bin, we can put a net on all vectors on such coordinates of size only ${\mathrm{poly}}(n)$. We can do this for each of the $O(d^2 / \log n)$ bins and take a union bound. It follows that the $2$-norm of the vector of coordinates that hash to each bin is preserved, and so the entire vector $y^H$ of heavy coordinates has its $2$-norm preserved. By a result of \cite{KN12}, the JL transform can be implemented in $O((\log n) / \varepsilon)$ time, giving total time $O(\nnz(A) (\log n) / \varepsilon)$, and this reduces $t$ to roughly $O(d^2 \log n)/\varepsilon^4$. 
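To make the basic construction concrete, the following Python/SciPy sketch builds a sparse embedding matrix of the form described above (one random $\pm 1$ entry per column) and applies it to a sparse $A$ with cost proportional to $\nnz(A)$. The sketching dimension $t$ and the empirical check at the end are illustrative choices only, not the tuned parameters of our theorems, and the check examines a few fixed vectors rather than certifying the embedding property for the whole subspace.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def sparse_embedding(n, t, rng):
    """CountSketch-style S in R^{t x n}: column j has a single +/-1 in row h(j)."""
    rows = rng.integers(0, t, size=n)          # h : [n] -> [t]
    signs = rng.choice([-1.0, 1.0], size=n)    # diagonal of D
    return sp.csr_matrix((signs, (rows, np.arange(n))), shape=(t, n))

rng = np.random.default_rng(2)
n, d = 100_000, 10
A = sp.random(n, d, density=1e-3, format="csr", random_state=3)

t = 10 * d**2                                  # poly(d) rows, independent of n
S = sparse_embedding(n, t, rng)
SA = S @ A                                     # O(nnz(A)) arithmetic operations

# Empirically compare ||S A x|| to ||A x|| for a few random x in R^d.
for _ in range(5):
    x = rng.standard_normal(d)
    print(np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))
\end{verbatim}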
We also note that for applications such as least squares regression, it suffices to set $\varepsilon$ to be a constant in the subspace embedding, since we can use an approach in \cite{dmm06,ddhkm09} which, given constant-factor approximations to all of the leverage scores, can then achieve a $(1+\varepsilon)$-approximation to least squares regression by slightly over-sampling rows of the adjoined matrix $A \circ b$ proportional to its leverage scores, and solving the induced subproblem. This results in a better dependence on $\varepsilon$. We can also compose our subspace embedding with a fast JL transform to further reduce $t$ to the optimal value of about $d/\varepsilon^2$. Since $S \cdot A$ already has small dimensions, applying a fast JL transform is now efficient. Finally, we can use a recent result of \cite{ckl12} to replace most dependencies on $d$ in our running times for regression with a dependence on the rank $r$ of $A$, which may be smaller. Note that when a matrix $A$ is input that has leverage scores that are roughly equal to each other, then the set $H$ of heavy coordinates is empty. Such a leverage score condition is assumed, for example, in the analysis of matrix completion algorithms. For such matrices, the sketching dimension can be made $d^2 \varepsilon^{-2} \log(d/\varepsilon)$, slightly improving our $d^2 \varepsilon^{-2} {\mathrm{polylog}}(d/\varepsilon)$ dimension above. \subsection{Recent Related Work} In the first version of our technical report on these ideas (July, 2012), the additive ${\mathrm{poly}}(k,d,1/\varepsilon)$ terms were not optimized, while in the second version, the additive terms were more refined, and results on $\ell_p$-regression for general $p$ were given, but the analysis of sparse embeddings in \S\ref{sec:partition} was absent. In the third version, we refined the dependence still further, with the partitioning in \S\ref{sec:partition}. Recently, a number of authors have told us of followup work, all building upon our initial technical report. Miller and Peng showed that $\ell_2$-regression can be done with the additive term sharpened to sub-cubic dependence on $d$, and with linear dependence on $\nnz(A)$ \cite{MP}. More fundamentally, they showed that a subspace embedding can be found in $O(\nnz(A) + d^{\omega + \alpha}\varepsilon^{-2})$ time, to dimension \[ O((d^{1+\alpha}\log d + \nnz(A)d^{-3})\varepsilon^{-2}); \] here $\omega$ is the exponent for asymptotically fast matrix multiplication, and $\alpha>0$ is an arbitrary constant. (Some constant factors here are increasing in $\alpha$.) Nelson and Nguyen obtained similar results for regression, and showed that sparse embeddings can embed into dimension $O(d^2/\varepsilon^2)$ in $O(\nnz(A))$ time; this considerably improved on our dimension bound for that running time, at that point (our second version), although our current bound is within ${\mathrm{polylog}}(d/\varepsilon)$ of their result. They also showed a dimension bound of $O(d^{1+\alpha})$ for $\alpha>0$, with work $O(f(\alpha)\nnz(A)\varepsilon^{-1})$ for a particular function of $\alpha$. Their analysis techniques are quite different from ours \cite{NN}. Both of these papers use fast matrix multiplication to achieve sub-cubic dependence on $d$ in applications, where our cubic term involves a JL transform, which may have favorable properties in practice. 
Regarding subspace embeddings to dimensions near-linear in $d$, note that by computing leverage scores and then sampling based on those scores, we can obtain subspace embeddings to $O(d\varepsilon^{-2}\log d)$ dimensions in $O(\nnz(A)\log n) + \tO(r^3)$ time; this may be incomparable to the results just mentioned, for which the running times increase as $\alpha\rightarrow 0$, possibly significantly. Paul, Boutsidis, Magdon-Ismail, and Drineas \cite{sbmd} implemented our subspace embeddings and found that in the TechTC-300 matrices, a collection of 300 sparse matrices of document-term data, with an average of 150 to 200 rows and 15,000 columns, our subspace embeddings as used for the projection step in their SVM classifier are about 20 times faster than the Fast JL Transform, while maintaining the same classification accuracy. Despite this large improvement in the time for projecting the data, further research is needed for SVM classification, as the JL Transform empirically possesses additional properties important for SVM which make it faster to classify the projected data, even though the time to project the data using our method is faster. Finally, Meng and Mahoney improved on the first version of our additive terms for subspace embeddings, and showed that these ideas can also be applied to $\ell_p$-regression, for $1 \leq p < 2$ \cite{MengMahoney}; our work on this in \S\ref{sec: ell_p} achieves $1 \leq p < \infty$ and was done independently. We note that our algorithms for $\ell_p$-regression require constructions of embeddings that are successful with high probability, as we obtain for generalized embeddings, and so some of the constructions in \cite{MP,NN} (as well as our non-generalized embeddings) will not yield such $\ell_p$ results. \subsection{Outline} We introduce basic notation and definitions in \S\ref{sec:sparse embed}, and then the basic analysis in \S\ref{sec:analysis}. A more refined analysis is given in \S\ref{sec:partition}, and then generalized embeddings, with high probability guarantees, in \S\ref{sec:generalized}. In these sections, we generally follow the framework discussed above, splitting coordinates of columnspace vectors into sets of ``large'' and ``small'' ones, analyzing each such set separately, and then bringing these analyses together. Shifting to applications, we discuss leverage score approximation in \S\ref{sec:leverage}, and regression in \S\ref{sec:regression}, including the use of leverage scores and the algorithmic machinery used to estimate them, and considering affine embeddings in \S\ref{subsec:genAff}, constrained regression in \S\ref{subsec:constrained}, and iterative methods in \S\ref{subsec:iterative}. Our low-rank approximation algorithms are given in \S\ref{sec:low rank}, where we use constructions and analysis based on leverage scores and regression. We next apply generalized sparse embeddings to $\ell_p$-regression, in \S\ref{sec: ell_p}. \ifSTOC \else Finally, in \S\ref{sec:exper}, we give some preliminary experimental results. \fi \section{Sparse Embedding Matrices}\label{sec:sparse embed} We let $\norm{A}_F$ or $\norm{A}$ denote the Frobenius norm of matrix $A$, and $\norm{A}_2$ denote the spectral norm of $A$. Let $A \in \mathbb{R}^{n \times d}$. We assume $n > d$. Let $\nnz(A)$ denote the number of non-zero entries of $A$. We can assume $\nnz(A) \geq n$ and that there are no all-zero rows or columns in $A$. 
For a parameter $t$, we define a random linear map $\Phi D: \mathbb{R}^n \rightarrow \mathbb{R}^t$ as follows: \begin{itemize} \item $h : [n] \mapsto [t]$ is a random map so that for each $i\in [n]$, $h(i)=t'$ for $t'\in [t]$ with probability $1/t$. \item $\Phi \in \{0,1\}^{t \times n}$ is a $t \times n$ binary matrix with $\Phi_{h(i), i} = 1$, and all remaining entries $0$. \item $D$ is an $n \times n$ random diagonal matrix, with each diagonal entry independently chosen to be $+1$ or $-1$ with equal probability. \end{itemize} We will refer to a matrix of the form $\Phi D$ as a {\it sparse embedding matrix}. \section{Analysis}\label{sec:analysis} Let $U \in \mathbb{R}^{n \times r}$ have columns that form an orthonormal basis for the column space $C(A)$. Let $U_{1, *}, \ldots, U_{n, *}$ be the rows of $U$, and let $u_i\equiv \norm{U_{i,*}}^2$. It will be convenient to regard the rows of $A$ and $U$ as re-arranged so that the $u_i$ are in non-increasing order, so $u_1$ is largest; of course this order is unknown and unused by our algorithms. For $u\in{\mathbb R}^n$ and $1\le a\le b\le n$, let $u_{a:b}$ denote the vector with $i$'th coordinate equal to $u_i$ when $i\in [a,b]$, and zero otherwise. Let $T > 0$ be a parameter. Throughout, we let $s\equiv \min\{i | u_i \le T\}$, and $s'\equiv \max\{i | \sum_{s \le j\le i} u_j \le 1\}$. We will use the notation $\Ibr{P}$ for the indicator of event $P$, which is 1 when $P$ holds, and 0 otherwise. The following variation of Bernstein's inequality\footnote{See Wikipedia entry on Bernstein's inequalities (probability theory).} will be helpful. \begin{lemma}\label{lem:Bern} For $L,T\ge 0$ and independent random variables $X_i\in [0,T]$ with $V \equiv \sum_i \Var[X_i]$, if $V\le LT^2/6$, then \[ \Pr\left[\sum_i X_i \ge \sum_i \E[X_i] + LT\right] \le \exp(-L). \] \end{lemma} \STOComitedproof{ \begin{proof} Here Bernstein's inequality says that for $Y_i\equiv X_i - \E[X_i]$, so that $\E[Y^2_i] = \Var[X_i]$ and $|Y_i| \le T$, \[ \log\Pr\left[\sum_i Y_i \ge z\right] \le \frac{-z^2/2}{V + zT/3}. \] By the quadratic formula, the latter is no more than $-L$ when \[ z \ge \frac{LT}{3}(1 + \sqrt{1 + 18V/LT^2}), \] which holds for $z\ge LT$ and $V\le LT^2/6$. \end{proof} } \subsection{Handling vectors with small entries}\label{sec:small} We begin the analysis by considering $y\scn$ for fixed unit vectors $y\in C(A)$. Since $\norm{y}=1$, there must be a unit vector $x$ so that $y=Ux$, and so by Cauchy-Schwarz, $y_i^2\le \norm{U_{i,*}}^2\norm{x}^2 = u_i$. This implies that $\|y\scn\|_{\infty}^2 \leq u_s$. We extend this to all unit vectors in subsequent sections. The following is similar to Lemma~6 of \cite{dks10}, and is a standard balls-and-bins analysis. \begin{lemma}\label{lem:even hash} For $\delta_h, T, t>0$, and $s\equiv \min\{i \mid u_i\le T\}$, let $\mathcal{E}_h$ be the event that \[ W \ge \max_{j\in [t]} \sum_{\substack{i\in h^{-1}(j)\\ i\ge s}} u_i, \] where $W\equiv T\log(t/\delta_h) + r/t$. If \[ t\ge \frac{6\norm{u\scn}^2}{T^2\log(t/\delta_h)}, \] then $\Pr[\mathcal{E}_h] \ge 1-\delta_h$. \end{lemma} \STOComitedproof{ \begin{proof} We will apply Lemma~\ref{lem:Bern} to prove that the bound holds for fixed $j\in [t]$ with failure probability $\delta_h/t$, and then apply a union bound. Let $X_i$ denote the random variable $u_i \Ibr{h(i)=j, i\ge s}$. We have $0\le X_i \le T$, $\sum_i \E[X_i] = \sum_{i\ge s} u_i/t \le r/t$, and $V \le \sum_{i\ge s} \E[X_i^2] = \sum_{i\ge s} u_i^2/t = \norm{u\scn}^2/t$.
Applying Lemma~\ref{lem:Bern} with $L=\log(t/\delta_h)$ gives \[ \Pr[\sum_i X_i \ge T\log(t/\delta_h) + r/t] \le \exp(-\log(t/\delta_h)) = \delta_h/t, \] when $\norm{u\scn}^2/t \le LT^2/6$, or $t\ge 6\norm{u\scn}^2/LT^2$. \end{proof} } \begin{lemma}\label{lem:HW use} For $W$ as in Lemma~\ref{lem:even hash}, suppose the event $\mathcal{E}_h$ holds. Then for unit vector $y\in C(A)$, and any $2 \leq \ell \leq 1/W$, with failure probability $\delta_L = e^{-\ell}$, $ | \norm{\Phi D y\scn}_2^2 - \norm{y\scn}^2 | \le K_L\sqrt{W \log(1/\delta_L)}$, where $K_L$ is an absolute constant. \end{lemma} \begin{proof} We will use the following theorem, due to Hanson and Wright. \begin{theorem}\cite{HW}\label{thm:hw} Let $z\in{\mathbb R}^n$ be a vector of i.i.d. $\pm 1$ random values. For any symmetric $B\in {\mathbb R}^{n\times n}$ and $2 \le \ell$, \ifSTOC $\E\left[ | z^\top Bz - \tr(B)| ^\ell \right] \le (CQ)^\ell$, where $Q\equiv \max \{ \sqrt{\ell}\norm{B}_F, \ell \cdot\norm{B}_2\}$, \else \[ \E\left[ | z^\top Bz - \tr(B)| ^\ell \right] \le (CQ)^\ell, \] where \[ Q\equiv \max \{ \sqrt{\ell}\norm{B}_F, \ell \cdot\norm{B}_2\}, \] \fi and $C>0$ is a universal constant. \end{theorem} We will use Theorem~\ref{thm:hw} to prove a bound on the $\ell$'th moment of $\norm{\Phi D y}_2^2$ for large $\ell$. Note that $\norm{\Phi D y\scn}^2$ can be written as $z^\top B z$, where $z$ has entries from the diagonal of $D$, and $B\in{\mathbb R}^{n\times n}$ has $B_{ii'} \equiv y_i y_{i'} \Ibr{h(i)=h(i')}$ for $i, i'\ge s$, and $B_{ii'}\equiv 0$ otherwise. Here $\tr(B)=\norm{y\scn}^2$. Our analysis uses some ideas from the proofs for Lemmas 7 and 8 of \cite{KN12}. Since by assumption event $\mathcal{E}_h$ of Lemma \ref{lem:even hash} occurs, and for unit $y\in C(A)$, $y_{i'}^2 \leq u_{i'}$ for all $i'$, we have for $j\in [t]$ that $\sum_{i' \in h^{-1}(j), i'\ge s} y^2_{i'} \le W$. Hence \begin{align}\label{eq:B F} \norm{B}_F^2 & = \sum_{i, i'\ge s} (y_i y_{i'} )^2 \Ibr{h(i')=h(i)}\nonumber \\ & = \sum_{i\ge s} y^2_i \sum_{\substack{i' \in h^{-1}(h(i))\\ i'\ge s}} y^2_{i'}\nonumber \\ & \le \sum_{i\in [n]} y^2_i W\nonumber \\ & \le W. \end{align} For $\norm{B}_2$, observe that for given $j\in [t]$, $z(j) \in{\mathbb R}^n$ with $z(j)_i = y_i \Ibr{h(i)=j, i\ge s}$ is an eigenvector of $B$ with eigenvalue $\norm{z(j)}^2$, and the set of such eigenvectors spans the column space of $B$. It follows that \[ \norm{B}_2 = \max_{j}\norm{z(j)}^2 = \max_{j}\sum_{\substack{i' \in h^{-1}(j)\\ i'\ge s}} y^2_{i'} \le W. \] Putting this and \eqref{eq:B F} into the $Q$ of Theorem~\ref{thm:hw}, we have \[ Q \le \max \{ \sqrt{\ell}\norm{B}_F, \ell \cdot\norm{B}_2\} \le \max\{ \sqrt{\ell}\sqrt{W}, \ell W \} =\sqrt{\ell W}, \] where we used $\ell W \leq 1$. By a Markov bound applied to $| z^\top Bz - \tr(B)| ^\ell$ with $\ell = \log(1/\delta_L)$, \begin{eqnarray*} \Pr[ | \norm{\Phi D y\scn}_2^2 - \norm{y\scn}^2 | \ge e C \sqrt{\ell W}] & \le & e^{-\ell}\\ & = & \delta_L. \end{eqnarray*} \end{proof} \subsection{Handling vectors with large entries}\label{sec:large} A small number of entries can be handled directly. \begin{lemma}\label{lem:birthday} For given $s$, let $\mathcal{E}_B$ denote the event that $h(i)\ne h(i')$ for all distinct $i, i' < s$. Then $\delta_B \equiv 1 -\Pr[\mathcal{E}_B] \le s^2/t$. Given event $\mathcal{E}_B$, we have that for any $y$, \[ \|y_{1:(s-1)}\|_2^2 = \|\Phi Dy_{1:(s-1)}\|_2^2. \] \end{lemma} \begin{proof} Since $\Pr[h(i)=h(i')] = 1/t$, the probability that some such $i\ne i'$ has $h(i) = h(i')$ is at most $s^2/t$. The last claim follows by a union bound.
\end{proof} \subsection{Handling all vectors}\label{sec:all} We have seen that $\Phi D$ preserves the norms for vectors with small entries (Lemma~\ref{lem:HW use}) and large entries (Lemma~\ref{lem:birthday}). Before proving a general bound, we need to prove a bound on the ``cross terms''. \begin{lemma}\label{lem:cross terms} For $W$ as in Lemma~\ref{lem:even hash}, suppose the event $\mathcal{E}_h$ and $\mathcal{E}_B$ hold. Then for unit vector $y\in C(A)$, with failure probability at most $\delta_C$, \[ |y_{1:(s-1)}^\top D\Phi^\top \Phi D y_{s:n}|\le K_C \sqrt{W \log(1/\delta_C)}, \] for an absolute constant $K_C$. \end{lemma} \ifSTOC \begin{proof} The proof applies Khintchine's inequality to the sum making up the dot product of the two sketched vectors, obtaining a moment bound that implies the tail estimate. Please see the full paper for details. \end{proof} \else \begin{proof} With the event $\mathcal{E}_B$, for each $i\ge s$ there is at most one $i' < s$ with $h(i)=h(i')$; let $z_i \equiv y_{i'} D_{i'i'}$, and $z_i \equiv 0$ otherwise. We have for integer $p\ge 1$ using Khintchine's inequality \begin{align*} \E\bigg[\bigg( y_{1:(s-1)}^\top D\Phi^\top \Phi D y_{s:n}\bigg)^{2p}\bigg]^{1/p} & = \E\bigg[\bigg( \sum_{i\ge s} y_i D_{ii} z_i\bigg)^{2p}\bigg]^{1/p} \\ & \le C_p \sum_{i\ge s} y_i^2 z_i^2 \\ & = C_p \sum_{i'<s} y_{i'}^2 \sum_{\substack{i \in h^{-1}(i')\\ i\ge s}} y_i^2 \\ & \le C_p W, \end{align*} where $C_p\le \Gamma(p+1/2)^{1/p}=O(p)$, and the last inequality uses the assumption that $\mathcal{E}_h$ holds, and $\sum_{i' < s} y_{i'}^2 \le 1$. Putting $p=\log(1/\delta_C)$ and applying the Markov inequality, we have \[ \Pr[(y_{1:(s-1)}^\top D\Phi^\top \Phi D y_{s:n})^2 \ge e C_p W] \ge 1 - \exp(-p) = 1 - \delta_C. \] Therefore, with failure probability at most $\delta_C$, we have \[ |y_{1:(s-1)}^\top D\Phi^\top \Phi D y_{s:n}|\le K_C \sqrt{W \log(1/\delta_C)}, \] for an absolute constant $K_C$. \end{proof} \fi \begin{lemma}\label{lem:concentrate y} Suppose the events $\mathcal{E}_h$ and $\mathcal{E}_B$ hold, and $W$ is as in Lemma~\ref{lem:even hash}. Then for $\delta_y>0$ there is an absolute constant $K_y$ such that, if $W\le K_y \epsilon^2/\log(1/\delta_y)$, then for unit vector $y\in C(A)$, with failure probability $\delta_y$, $\norm{\Phi Dy}_2 = (1 \pm \varepsilon)\norm{y}_2$, when $\delta_y\le 1/2$. \end{lemma} \ifSTOC \begin{proof} The proof pulls together the bounds for the large and small cases. Please see the full paper for details. \end{proof} \else \begin{proof} Assuming $\mathcal{E}_h$ and $\mathcal{E}_B$, we apply Lemmas~\ref{lem:birthday}, \ref{lem:HW use}, and \ref{lem:cross terms} , and have with failure probability at most $\delta_L + \delta_C$, \begin{align*} | & \norm{\Phi D y}_2^2 - \norm{y}^2 | \\ & = | \norm{\Phi D y_{1:(s-1)}}_2^2 - \norm{y_{1:(s-1)}}^2 \\ &\qquad + \norm{\Phi D y\scn}_2^2 - \norm{y\scn}^2 + 2 y_{1:(s-1)} D\Phi^\top \Phi D y\scn | \\ & \le | \norm{\Phi D y\scn}_2^2 - \norm{y\scn}^2| + 0 + |2 y_{1:(s-1)} D\Phi^\top \Phi D y\scn | \\ & \le K_L\sqrt{W \log(1/\delta_L)}+ 2 K_C \sqrt{W \log(1/\delta_C)} \\ & \le 3\epsilon \sqrt{K_y}(K_L + K_C) \end{align*} for the given $W$, putting $\delta_L = \delta_C = \delta_y/2$ and assuming $\delta_y\le 1/2$. Thus $K_y \le 1/9(K_L+K_C)^2$ suffices. \end{proof} \fi \begin{lemma}\label{lem:subspaceBound} Suppose $\delta_{sub}>0$, $L$ is an $r$-dimensional subspace of $\mathbb{R}^n$, and $B:\mathbb{R}^n \rightarrow \mathbb{R}^k$ is a linear map. 
If for any fixed $x \in L$, $\|Bx\|_2^2 = (1 \pm \varepsilon/6)\|x\|_2^2$ with probability at least $1-\delta_{sub}$, then there is a constant $K_{sub} > 0$ for which with probability at least $1-\delta_{sub} K_{sub}^{r}$, for all $x \in L$, $\|Bx\|_2^2 = (1 \pm \varepsilon)\|x\|_2^2$. \end{lemma} \ifSTOC \begin{proof} The proof is a standard $\epsilon$-net argument. Please see the full paper for details. \end{proof} \else \begin{proof} We will need the following standard lemmas for making a net argument. Let $S^{r-1}$ be the unit sphere in ${\mathbb R}^r$ and let $E$ be the set of points in $S^{r-1}$ defined by $$E =\left\{w : w \in \frac{\gamma}{\sqrt{r}} \mathbb{Z}^r, \ \|w\|_2 \leq 1 \right\},$$ where $\mathbb{Z}^r$ is the $r$-dimensional integer lattice and $\gamma$ is a parameter. \begin{fact}[Lemma 4 of \cite{ahk06}]\label{lem:netsize} $|E|\le e^{cr}$ for $c = (\frac{1}{\gamma} + 2)$. \end{fact} \begin{fact}[Lemma 4 of \cite{ahk06}]\label{lem:netprod} For any $r \times r$ matrix $J$, if for every $u, v \in E$ we have $|u^\top Jv| \leq \varepsilon$, then for every unit vector $w$, we have $|w^\top Jw| \leq \frac{\varepsilon}{(1-\gamma)^2}$. \end{fact} Let $U \in \mathbb{R}^{n \times r}$ be such that the columns are orthonormal and the column space equals $L$. Let $I_r$ be the $r \times r$ identity matrix. Define $J = U^TB^TBU - I_r$. Consider the set $E$ in Fact \ref{lem:netsize} and Fact \ref{lem:netprod}. Then, for any $x, y \in E$, we have by the statement of the lemma that with probability at least $1-3\delta_{sub}$, $\|BUx\|_2^2 = (1 \pm \varepsilon/6)\|Ux\|_2^2$, $\|BUy\|_2^2 = (1 \pm \varepsilon/6)\|Uy\|_2^2$, and $\|BU(x+y)\|_2^2 = (1 \pm \varepsilon/6)\|U(x+y)\|_2^2 = (1 \pm \varepsilon/6)(\|Ux\|_2^2 + \|Uy\|_2^2 + 2\langle Ux, Uy \rangle)$. Since $\|Ux\|_2 \leq 1$ and $\|Uy\|_2 \leq 1$, it follows that $|xJy| \leq \varepsilon/2$. By Fact \ref{lem:netsize}, for $\gamma = 1-1/\sqrt{2}$ and sufficiently large $K_{sub}$, we have by a union bound that with probability at least $1-\delta_{sub} K_{sub}^r$ that $|xJy| \leq \varepsilon/2$ for every $x,y \in E$. Hence, with this probability, by Fact \ref{lem:netprod}, $|w^TJw| \leq \varepsilon$ for every unit vector $w$, which by definition of $J$ means that for all $y \in L$, $\|By\|_2^2 = (1 \pm \varepsilon)\|y\|_2^2$. \end{proof} \fi The following is our main theorem in this section. \begin{theorem}\label{thm:main} There is $t=O((r/\epsilon)^4\log^2(r/\epsilon))$ such that with probability at least $9/10$, $\Phi D$ is a subspace embedding matrix for $A$; that is, for all $y \in C(A)$, $\norm{\Phi Dy}_2 = (1 \pm \varepsilon)\norm{y}_2$. The embedding $\Phi D$ can be applied in $O(\nnz(A))$ time. For $s=\min\{i' \mid u_{i'}\le T\}$, where $T$ is a parameter in $\Omega(\epsilon^2/r\log(r/\epsilon))$, it suffices if $t \ge \max\{s^2/30, r/T\}$. \end{theorem} \begin{proof} For suitable $t$, $T$, and $s$, with failure probability at most $\delta_h + \delta_B$, events $\mathcal{E}_h$ and $\mathcal{E}_B$ both hold. Conditioned on this, and assuming $W$ is sufficiently small as in Lemma~\ref{lem:concentrate y}, we have with failure probability $\delta_y$ for any fixed $y\in C(A)$ that $\|\Phi D y\|_2 = (1 \pm \varepsilon)\|y\|_2$. Hence by Lemma \ref{lem:subspaceBound}, with failure probability $\delta_h + \delta_B + \delta_y K_{sub}^r$, $\|\Phi Dy\|_2 = (1 \pm 6\varepsilon) \|y\|_2$ for all $y \in C(A)$. 
We need $\delta_h + \delta_B + \delta_y K_{sub}^r\le 1/10$, and the parameter conditions of Lemmas~\ref{lem:even hash}, Lemma \ref{lem:HW use}, and Lemma \ref{lem:concentrate y} holding. Listing these conditions: \begin{enumerate} \item $\delta_h + \delta_B + \delta_y K_{sub}^r \le 1/10$, where $\delta_B$ can be set to be $s^2/t$; \item $u_s \le T$; \item \label{it:s} $t \ge 6\norm{u\scn}^2/\log(t/\delta_h)T^2$; \item $\ln(2/\delta_y) \cdot W \leq 1$ (corresponding to the condition $\ell \leq 1/W$ of Lemma \ref{lem:HW use} since we set $\delta_y/2 = \delta_L = e^{-\ell}$) \item $W = T\log(t/\delta_h) + r/t \le K_y \epsilon^2/\log(1/\delta_y)$. \end{enumerate} We put $\delta_y = K_{sub}^{-r}/30$, $\delta_h = 1/30$, and require $t\ge s^2/30$. For the last condition it suffices that $T = O(\epsilon^2/r\log(t))$, and $t = \Omega(r^2/\epsilon^2)$. The last condition implies the fourth condition for small enough constant $\varepsilon$. Also, since $\norm{u\scn}^2 = \sum_{i\ge s} u_i^2 \le \sum_{i\ge s} u_i T\le rT$, the bound for $T$ implies that $t=O((r/\epsilon)^2\log(t))$ suffices for Condition \ref{it:s}. Thus when the leverage scores are such that $s$ is small, $t$ can be $O((r/\epsilon)^2\log(r/\epsilon))$. Since $\sum_i u_i = r$, $s\le r/T$ suffices, and so $t= O((r/T)^2) = O((r/\epsilon)^4\log^2(r/\epsilon))$ suffices for the conditions of the theorem. \end{proof} \ifSTOC \section{Partitioning Leverage\\ Scores}\label{sec:partition} \else \section{Partitioning Leverage Scores}\label{sec:partition} \fi We can further optimize our low order additive ${\mathrm{poly}}(r)$ term by refining the analysis for large leverage scores (those larger than $T$). We partition the scores into groups that are equal up to a constant factor, and analyze the error resulting from the relatively small number of collisions that may occur, using also the leverage scores to bound the error. \ifSTOC We obtain the following theorem; the proof is omitted in this version. \else In what follows we have not optimized the ${\mathrm{poly}}(\log(r/\varepsilon))$ factors. Let $q \equiv \log_2 1/T = O(\log(r/\varepsilon))$. We partition the leverage scores $u_i$ with $u_i \geq T$ into groups $G_j$, $j \in [q]$, where $$G_j = \{i \mid 1/2^j < u_i \leq 1/2^{j-1}\}.$$ Let $\beta_j \equiv 2^{-j}$, and $n_j\equiv |G_j|$. Since $\sum_{i=1}^n u_i = r$, we have for all $j$ that $n_j \leq r/\beta_j$. We may also use $G_j$ to refer to the collection of rows of $U$ with leverage scores in $G_j$. For given hash function $h$ and corresponding $\Phi$, let $G'_j\subset G_j$ denote the collision indices of $G_j$, those $i\in G_j$ such that $h(i)=h(i')$ for some $i'\in G_j$. Let $k_j\equiv |G'_j|$. First, we bound the spectral norm of a submatrix of the orthogonal basis $U$ of $C(A)$, where the submatrix comprises rows of $G'_j$. \subsection{The Spectral Norm}\label{subsec:spectral} We have a matrix $B\in {\mathbb R}^{n_j\times r}$, with $\norm{B}_2\le 1$, and each row of $B$ has squared Euclidean norm at least $\beta_j$ and at most $2\beta_j$, for some $j\in [q]$. We want to bound the spectral norm of the matrix $\hat B$ whose rows comprise those rows of $U$ in the collision set $G'_j$. We let $t = \Theta(r^2 q^6/\epsilon^2)$ be the number of hash buckets. The expected number of collisions in the $t$ buckets is $\E[|G'_j|] = \frac{{n_j \choose 2}}{t} \leq \frac{n_j^2}{2t}.$ Let $\mathcal{D}_j$ be the event that the number $k_j \equiv |G'_j| $ of such collisions in the $t$ buckets is at most $n_j^2 q^2/t$. 
Let $\mathcal{D} = \cap_{j=1}^q \mathcal{D}_j$. By a Markov and a union bound, $\Pr[\mathcal{D}] \ge 1-1/(2q)$. We will assume that $\mathcal{D}$ occurs. While each row in $B$ has some independent probability of participating in a collision, we first analyze a sampling scheme with replacement. We generate independent random matrices $\hat{H}_m$ for $m\in [\ell_j]$, for a parameter $\ell_j > k_j$, by picking $i\in [n_j]$ uniformly at random, and letting $\hat{H}_m \equiv B_{i:}^\top B_{i:}$. Note that $\E[\hat{H}_m] = \frac{1}{n_j} B^\top B$. Our analysis will use a special case of the version of matrix Bernstein inequalities described by Recht. \begin{fact}[paraphrase of Theorem 3.2 \cite{Recht}] Let $\ell$ be an integer parameter. For $m\in [\ell]$, let $H_m\in{\mathbb R}^{r\times r}$ be independent symmetric zero-mean random matrices. Suppose $\rho_m^2 \equiv \norm{\E[H_m H_m]}_2$ and $M\equiv \max_{m\in [\ell]} \norm{H_m}_2$. Then for any $\tau > 0$, \[ \log \Pr\left[\left\|\sum_{m\in [\ell]} H_m\right\|_2 > \tau\right] \le \log 2r - \frac{\tau^2/2}{\sum_{m \in [\ell]}\rho_m^2 + M\tau/3}. \] \end{fact} We apply this fact with $\ell = \ell_j = (4e^2)k_j + \Theta(q)$ and $H_m \equiv \hat{H}_m - \E[\hat{H}_m]$, so that \begin{align*} \rho_m^2 & \equiv \norm{\E[H_m H_m]}_2 \\ & \le \left\| \frac{1}{n_j} \sum_{i\in [n_j]} \norm{B_{i:}}^2 B_{i:}^\top B_{i:} - \frac{1}{n_j^2} B^\top B B^\top B \right\|_2 \\ & \le \frac{2\beta_j}{n_j} + \frac{1}{n_j^2}. \end{align*} Also $M\equiv \norm{H_m}_2 \le 2\beta_j + \frac{1}{n_j}$. Applying the above fact with these bounds for $\rho_m^2$ and $M$, we have \begin{align*} \log \Pr\left[\left\|\sum_{m\in [\ell_j]} H_m\right\|_2 > \tau\right] & \le \log 2r - \frac{\tau^2/2}{\sum_{m\in [\ell_j]}\rho_m^2 + M\tau/3} \\ & \le \log 2r - \frac{\tau^2/2}{(2\beta_j + 1/n_j)\left(\frac{\ell_j}{n_j} + \frac{\tau}{3}\right)}. \end{align*} We will assume that $n_j \ge \sqrt{t}/q^2$, as discussed in lemma \ref{lem:collide norm} below (otherwise we have perfect hashing). With this assumption, setting $\tau = \Theta(q(\beta_j + 1/n_j + \sqrt{r/t}))$ gives a probability bound of $1/r$. (Here we use that $\beta_j+1/n_j\le 2$, $\ell_j/n_j = O(q n_j/t)$, and $\frac{n_j}{t}(\beta_j + 1/n_j) \le (r+1)/t$.) We therefore have that with probability at least $1-1/r$, \begin{align*} \norm{\sum_{m\in [\ell_j]} \hat{H}_m}_2 & = O(q(\beta_j + 1/n_j + \sqrt{r/t}) + \frac{\ell_j}{n_j}\norm{B^\top B}_2 \\ & = O(q(\beta_j + 1/n_j + \sqrt{r/t}+ \frac{n_j}{t})), \end{align*} where we use that $\|B\|_2 \leq 1$, and use again $\ell_j/n_j = O(qn_j/t)$. We can now prove the following lemma. \begin{lemma}\label{lem:collide norm} With probability $1-o(1)$, for all leverage score groups $G_j$, and for $U$ an orthonormal basis of $C(A)$, the submatrix $\hat B_j$ of $U$ consisting of rows in $G'_j$, that is, those in $G_j$ that collide in a hash bucket with another row in $G_j$ under $\Phi$, has squared spectral norm $O(q(\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t))$. \end{lemma} \begin{proof} Fix a $j \in [q]$. If $n_j \equiv |G_j| \leq \sqrt{t}/q^2$, then with probability $1-o(1/q)$, the items in $G_j$ are perfectly hashed into the $t$ bins. So with probability $1-o(1)$, for all $j \in [q]$, if $n_j \leq \sqrt{t}/q^2$, then there are no collisions. Condition on this event. Now consider a $j \in [q]$ for which $n_j \geq \sqrt{t}/q$. 
Then $$\ell_j = (4e^2)k_j + \Theta(q) \leq n_j^2/t + O(q) \leq n_j + O(q) \leq 2n_j.$$ When sampling with replacement, the expected number of distinct items is $$n_j \cdot {\ell_j \choose 1} \frac{1}{n_j} \left (1- \frac{1}{n_j} \right )^{\ell_j - 1} \geq \ell_j(1-o(1))/e^2.$$ By a standard application of Azuma's inequality, using that $\ell_j = \Omega(q)$ is sufficiently large, we have that the number of distinct items is at least $\ell_j/(4e^2)$ with probability at least $1-1/r$. By a union bound, with probability $1-o(1)$, for all $j \in [q]$, if $n_j \geq r$, then at least $\ell_j/(4e^2)$ distinct items are sampled when sampling $\ell_j$ items with replacement from $G_j$. Since $\ell_j = 4e^2k_j + O(q)$, it follows that at least $k_j$ distinct items are sampled from each $G_j$. By the analysis above, for a fixed $j \in [q]$ we have that the submatrix of $U$ consisting of the $\ell_j$ sampled rows in $G_j$ has squared spectral norm $O(q(\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t))$ with probability at least $1-1/r$ (notice that $\|\sum_{m \in [\ell_j]} \hat{H}_m \|_2$ is the square of the spectral norm of the submatrix of $U$ consisting of the $\ell_j$ sampled rows from $G_j$). Since the probability of this event is at least $1-1/r$ for a fixed $j \in [q]$, we can conclude that it holds for all $j \in [q]$ simultaneously with probability $1-o(1)$. Finally, using that the spectral norm of a submatrix of a matrix is at most that of the matrix, we have that for each $j$, the squared spectral norm of a submatrix of $k_j$ random distinct rows among the $\ell_j$ sampled rows of $G_j$ from $U$ is at most $O(q(\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t))$. \end{proof} \subsection{Within-Group Errors} Let $L_j\subset {\mathbb R}^n$ denote the set of vectors $y$ so that $y_i=0$ for $i$ not in the collision set $G'_j$, and there is some unit $y'\in C(A)$ such that $y_i = y'_i$ for $i\in G'_j$. (Note that the error for such vectors is the same as that for the corresponding set of vectors with zeros outside of $G_j$.) In this subsection, we show that for all $y\in L_j$, the error in estimating $\norm{y}^2$ using $y^\top D\Phi^\top \Phi D y$ is at most $O(\varepsilon)$. For $y\in L_j$, the error in estimating $\norm{y}^2$ by using $y^\top D\Phi^\top \Phi D y$ contributed by collisions among coordinates $y_i$ for $i\in G_j$ is \begin{equation}\label{eq:big terms} \kappa_j \equiv \sum_{t'\in [t]} \sum_{i,i'\in h^{-1}(t')\cap G_j} y_i y_{i'} D_{ii} D_{i'i'}, \end{equation} and we need a bound on this quantity that holds with high probability. By a standard balls-and-bins analysis, every bucket has $O(\log t)=O(q)$ collisions, with high probability, since $n_j \le r/T \le O(r^2/\varepsilon^2) = O(t)$; we assume this event. The squared Euclidean norm of the vector of all $y_i$ that appear in the summands, that is, with $i\in G'_j$, is at most $O(q(\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t))$ by Lemma~\ref{lem:collide norm}.
Thus the squared Euclidean norm of the vector comprising all summands in \eqref{eq:big terms} is at most \begin{align} \gamma_j & \equiv \sum_{t'\in [t]} \sum_{i,i'\in h^{-1}(t')\cap G_j} y_i^2 y_{i'}^2 \\ & \le \sum_{t'\in [t]} \sum_{i\in h^{-1}(t')\cap G_j} y_i^2 O(q) 2 \beta_j\nonumber \\ & \le O(q^2\beta_j (\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t)).\label{eq:collision norm} \end{align} By Khintchine's inequality, for $p\ge 1$, \begin{align*} \E[\kappa_j^{2p}]^{1/p} & \le O(p) \gamma_j \le O(p) (q^2\beta_j (\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t)), \end{align*} and therefore $|\kappa_j|^2$ is less than the last quantity, with failure probability at most $4^{-p}$. Putting $p=k'_j \equiv \min\{r, k_j\}$, with failure probability at most $4^{-k'_j}$, for any fixed vector $y\in L_j$, the squared error in estimating $\norm{y}^2$ using the sketch of $y$ is at most $O(k'_j(q^2\beta_j (\beta_j + 1/n_j + \sqrt{r/t}+ n_j/t))$. Assuming the event $\cal D$ from the section above, we have $k'_j\le \min\{r, q \cdot n_j^2q / t \}$. We have, using $\beta_j n_j\le r$, \[ \frac{n_j^2 q^2}{t}(q^2\beta_j (\beta_j + 1/n_j)) \le \frac{q^4r(r+1)}{t}, \] and $r\cdot q^2 \beta_j n_j/t \le q^2 r^2/t$, and finally \[ q^2\beta_j\sqrt{r/t}\min\{r, q^2 n_j^2/t\} \le q^3\sqrt{r/t}\min\{\beta_j r, \frac{q r^2}{\beta_j t}\} \le q^4 r^2/t, \] using $\beta_j n_j\le r$. Putting these bounds on the terms together, the squared error is $O(q^4r^2/t)$, or $\epsilon^2/q^2$, for $t=\Omega(q^6r^2/\epsilon^2)$, so that the error is $O(\epsilon/q)$. Since the dimension of $L_j$ is bounded by $k'_j$, it follows from the net argument of Lemma~\ref{lem:subspaceBound} that for all $y\in L_j$, $\norm{Sy}^2 = \norm{y}^2 \pm O(\epsilon/q)$, and so the total error for unit $y\in C(A)$ is $O(\epsilon)$. We thus have the following theorem. \begin{theorem}\label{thm:partition within} There is an absolute constant $C' > 0$ for which for any parameters $\delta_1 \in (0,1)$, $P \geq 1$, and for sparse embedding dimension $t = O(P (r/\varepsilon)^2 \log^6(r/\varepsilon))$, for all unit $y\in C(A)$, $\sum_{j\in [q]} \norm{Sy^j} = 1 \pm C'\epsilon/P\delta_1$, with failure probability at most $\delta_1 + O(1/\log r)$, where $y^j$ denotes the member of $L_j$ derived from $y$. \end{theorem} \subsection{Handling the Cross Terms} To complete the optimization, we must also handle the error due to ``cross terms". Let $\delta_1 \in (0,1)$ be an arbitrary parameter. For $j \neq j' \in \{1, \ldots, q\}$, let the event $\mathcal{E}_{j,j'}$ be that the number of bins containing both an item in $G_j$ and in $G_{j'}$ is at most $\frac{n_j n_{j'} q^2}{t \delta_1}.$ Let $\mathcal{E} = \cap_{j, j'} \mathcal{E}_{j,j'}$, the event that no pair of groups has too many inter-group collisions. \begin{lemma}\label{lem:E} $\Pr[\mathcal{E}] \geq 1 - \delta_1.$ \end{lemma} \begin{proof} Fix a $j \neq j' \in \{1, \ldots, q\}$. Then the expected number of bins containing an item in both $G_j$ and in $G_{j'}$ is at most $t \cdot \frac{n_j}{t} \cdot \frac{n_{j'}}{t} = \frac{n_j n_{j'}}{t},$ and so by a Markov bound the number of bins containing an item in both $G_j$ and $G_{j'}$ is at most $\frac{n_j n_{j'} q^2}{t \delta_1}$ with probability at least $1-\delta_1/q^2$. The lemma follows by a union bound over the ${q \choose 2}$ choices of $j, j'$. \end{proof} In the remainder of the analysis, we set $t = P (r/\varepsilon)^2 q^6$ for a parameter $P \geq 1$. 
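As a quick sanity check on the collision counts that drive both the within-group and cross-term bounds, the following small NumPy simulation (an illustration only, with arbitrary toy sizes; it is not part of the construction or of any proof) hashes two groups into $t$ buckets and compares the number of buckets receiving items from both groups against the expectation bound $n_j n_{j'}/t$ used in the Markov argument of Lemma~\ref{lem:E}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (assumptions for illustration only): two group sizes and
# the number of hash buckets t of the sparse embedding.
n_j, n_jp, t = 500, 300, 20000
trials = 200

cross_bins = []
for _ in range(trials):
    # Hash every item of each group independently and uniformly into [t],
    # mimicking the fully random hash function h used by Phi.
    buckets_j  = rng.integers(0, t, size=n_j)
    buckets_jp = rng.integers(0, t, size=n_jp)
    # Count buckets hit by at least one item from *both* groups.
    cross_bins.append(len(np.intersect1d(buckets_j, buckets_jp)))

print("average number of buckets hit by both groups:", np.mean(cross_bins))
print("expectation bound n_j * n_j' / t             :", n_j * n_jp / t)
\end{verbatim}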
\\\\ Let $\mathcal{F}$ be the event that no bin contains more than $Cq$ elements of $\cup_{j=1}^q G_j$, where $C > 0$ is an absolute constant. \begin{lemma}\label{lem:F} $\Pr[\mathcal{F}] \geq 1-1/r$. \end{lemma} \begin{proof} Observe that $|\cup_{j=1}^q G_j| = \sum_{j=1}^q n_j \leq r \sum_{j=1}^q 2^j \leq 2r^2/\varepsilon^2.$ By standard balls and bins analysis with the given $t$, with $P \geq 1$, with probability at least $1-1/r$ no bin contains more than $C q$ elements, for a constant $C > 0$. \end{proof} \begin{lemma}\label{lem:individual} Condition on events $\mathcal{E}$ and $\mathcal{F}$ occurring. Consider any unit vector $y = Ax$ in the column space of $A$. Consider any $j \neq j' \in [q]$. Define the vector $y^j$: $y^j_i = y_i$ for $i \in G_j$, and $y^j_i = 0$ otherwise. Then, $$|\langle Sy^j, Sy^{j'} \rangle | = O \left (\frac{\varepsilon^2}{P \delta_1 q^2} \right ).$$ \end{lemma} \begin{proof} Since $\mathcal{E}$ occurs, the number of bins containing both an item in $G_j$ and $G_{j'}$ is at most $n_j n_{j'} q^2/(t \delta_1)$. Call this set of bins $\mathcal{B}$. Moreover, since $\mathcal{F}$ occurs, for each bin $i \in \mathcal{B}$, there are at most $Cq$ elements from $G_j$ in the bin and at most $Cq$ elements from $G_{j'}$ in the bin. Hence, for $S = \Phi \cdot D$, we have, using $n_j\beta_j\le r$ for all $j$, \begin{align*} |\langle Sy^j, Sy^{j'} \rangle| & \le \frac{n_j n_{j'} q^2}{t \delta_1} \cdot (Cq)^2 \beta_j \beta_{j'} \le \frac{(C q)^2 r^2 q^2}{t \delta_1} = \frac{C^2 \varepsilon^2}{P \delta_1 q^2}. \end{align*} \end{proof} The following is our main theorem concerning cross-terms in this section. \begin{theorem}\label{thm:partition cross} There is an absolute constant $C' > 0$ for which for any parameters $\delta_1 \in (0,1)$, $P \geq 1$, and for sparse embedding dimension $t = O(P (r/\varepsilon)^2 \log^6 (r/\varepsilon))$, the event \[\forall y = Ax \textrm{ with } \|y\|_2 = 1, \ \sum_{j, j' \in [q]} |\langle Sy^j, Sy^{j'} \rangle | \leq \frac{C'\varepsilon^2}{P \delta_1} \] occurs with failure probability at most $\delta_1 + \frac{1}{r}$, where $y^j, y^{j'}$ are as defined in Lemma \ref{lem:individual}. \end{theorem} \begin{proof} The theorem follows at once by combining Lemma \ref{lem:E}, Lemma \ref{lem:F}, and Lemma \ref{lem:individual}. \end{proof} \subsection{Putting it together} Putting the bounds for within-group and cross-term errors together, and replacing the use of Lemma~\ref{lem:birthday} in the proof of Theorem~\ref{thm:main}, we have the following theorem. \fi \begin{theorem}\label{thm:partition main} There is an absolute constant $C' > 0$ for which for any parameters $\delta_1 \in (0,1)$, $P \geq 1$, and for sparse embedding dimension $t = O(P (r/\varepsilon)^2 \log^6 (r/\varepsilon))$, for all unit $y\in C(A)$, $\norm{Sy} = 1 \pm C'\epsilon/P\delta_1$, with failure probability at most $\delta_1 + O(1/\log r)$. \end{theorem} \section{Generalized Sparse Embedding Matrices}\label{sec:generalized} \ifSTOC As discussed in the introduction, we can use small JL transforms within each hash bucket, to obtain the following theorem, where the term in the running time dependent on $\nnz(A)$ is more expensive, but the quality bounds hold with high probability. \else \subsection{Johnson-Lindenstrauss transforms} We start with a theorem of Kane and Nelson \cite{KN12}, restated here in our notation. We also present a simple corollary that we need concerning very low dimensional subspaces. Let $\varepsilon > 0, \kv = \Theta(\varepsilon^{-1} \log (r/\varepsilon))$, and $v = \Theta(\varepsilon^{-1})$. 
Let $B:\mathbb{R}^n \rightarrow \mathbb{R}^{v\kv}$ be defined as follows. We view $B$ as the concatenation (meaning, we stack the rows on top of each other) of matrices $\sqrt{\vk} \cdot \Phi_1 \cdot D_1, \ldots, \sqrt{\vk} \cdot \Phi_{\kv} \cdot D_{\kv}$, each $\Phi_i \cdot D_i$ being a linear map from $\mathbb{R}^n$ to $\mathbb{R}^v$, which is an independently chosen sparse embedding matrix of Section \ref{sec:analysis} with associated hash function $h_i:[n] \rightarrow [v]$. \begin{theorem}(\cite{KN12})\label{thm:general} For any $\delta_{KN}, \varepsilon > 0$, there are $\kv = \Theta(\varepsilon^{-1} \log (1/\delta_{KN}))$ and $v = \Theta(\varepsilon^{-1})$ for which for any fixed $x \in \mathbb{R}^n$, a randomly chosen $B$ of the form above satisfies $\|Bx\|_2^2 = (1 \pm \varepsilon) \|x\|_2^2$ with probability at least $1-\delta_{KN}$. \end{theorem} \begin{corollary}\label{cor:subspace} Let $\gdelta\in (0,1)$. Suppose $L$ is an $O(\log (r/\varepsilon\gdelta))$-dimensional subspace of $\mathbb{R}^n$. Let $C_{subKN} > 0$ be any constant. Then for any $\varepsilon\in (0,1)$, there are $\kv = \Theta(\varepsilon^{-1} \log (r/\varepsilon\gdelta))$ and $v = \Theta(\varepsilon^{-1})$ such that with failure probability at most $(\varepsilon/r\gdelta)^{C_{subKN}}$, $\|By\|_2^2 = (1 \pm \varepsilon) \|y\|_2^2$ for all $y \in L$. \end{corollary} \begin{proof} We use Theorem \ref{thm:general} together with Lemma \ref{lem:subspaceBound}; for the latter, we need that for any fixed $y \in L$, $\|By\|_2^2 = (1 \pm \varepsilon/6)\|y\|_2^2$ with probability at least $1-\delta_{sub}$. By Theorem \ref{thm:general}, we have this for $\delta_{sub} = (\gdelta\varepsilon/r)^{C_{KN}}$ for an arbitrarily large constant $C_{KN} > 0$. Hence, by Lemma \ref{lem:subspaceBound}, there is a constant $K_{sub} > 0$ so that with probability at least $1-(K_{sub})^{O(\log (r/\varepsilon\gdelta))} (\gdelta\varepsilon/r)^{C_{KN}}= 1 - (\gdelta\varepsilon/r)^{C_{subKN}}$, for all $y \in L$, $\|By\|_2^2 = (1 \pm \varepsilon) \|y\|_2^2$. Here we use that $C_{KN} > 0$ can be made arbitrarily large, independent of $K_{sub}$. \end{proof} \subsection{The construction} We now define a {\it generalized sparse embedding matrix} $S$. Let $A \in \mathbb{R}^{n \times d}$ with rank $r$. Let $\kv = \Theta(\varepsilon^{-1} \log (r/\varepsilon\gdelta))$ and $v = \Theta(\varepsilon^{-1})$, be such that Theorem \ref{thm:general} and Corollary \ref{cor:subspace} apply with parameters $\kv$ and $v$, for a sufficiently large constant $C_{subKN} > 0$. Further, let $$q \equiv C_t r \varepsilon^{-2}(r+\log(1/\gdelta\varepsilon)),$$ where $C_t > 0$ is a sufficiently large absolute constant, and let $t \equiv avq$. Let $h:[n] \rightarrow [q]$ be a random hash function. For $i = 1, 2, \ldots, q$, define $a_i = |h^{-1}(i)|$. Note that $\sum_{i=1}^q a_i = n$. We choose independent matrices $B^{(1)}, \ldots, B^{(q)}$, with each $B^{(i)}$ as in Theorem \ref{thm:general} with parameters $\kv$ and $v$. Here $B^{(i)}$ is a $v\kv \times a_i$ matrix. Finally, let $P$ be an $n \times n$ permutation matrix which, when applied to a matrix $A$, maps the rows of $A$ in the set $h^{-1}(1)$ to the set of rows $\{1, 2, \ldots, a_1\}$, maps the rows of $A$ in the set $h^{-1}(2)$ to the set of rows $\{a_1+1, \ldots, a_1+a_2\}$, and for a general $i \in [q]$, maps the set of rows of $A$ in the set $h^{-1}(i)$ to the set of rows $\{a_1+a_2 + \cdots + a_{i-1}+1, \ldots, a_1 + a_2 + \cdots + a_i\}$. 
The map $S$ is defined to be the product of a block-diagonal matrix and the matrix $P$: \[ S \equiv \left[ \begin{matrix} B^{(1)} & &\\ & B^{(2)} &&\\ && \ddots & \\ &&& B^{(q)} \\ \end{matrix} \right] \cdot P \] \begin{lemma}\label{lem:time} $S \cdot A$ can be computed in $O(\nnz(A) (\log (r/\varepsilon\gdelta)) / \varepsilon)$ time. \end{lemma} \begin{proof} As $P$ is a permutation matrix, $P \cdot A$ can be computed in $O(\nnz(A))$ time and has the same number of non-zero entries of $A$. For each non-zero entry of $P \cdot A$, we multiply it by $B^{(i)}$ for some $i$, which takes $O(\kv) = O(\log(r/\varepsilon\gdelta)/\varepsilon)$ time. Hence, the total time to compute $S \cdot A$ is $O(\nnz(A) (\log (r/\varepsilon\gdelta)) / \varepsilon)$. \end{proof} \subsection{Analysis} We adapt the analysis given for sparse embedding matrices to generalized sparse embedding matrices. Again let $U \in \mathbb{R}^{n \times r}$ have columns that form an orthonormal basis for the column space $C(A)$. Let $U_{1, *}, \ldots, U_{n, *}$ be the rows of $U$, and let $u_i\equiv \norm{U_{i,*}}^2$. For $\gdelta\in (0,1)$, we set the parameter: \begin{equation}\label{eq:T JL} T \equiv \frac{r}{C_Tq\log(t/\gdelta)} = \frac{O(\varepsilon^2)}{\log(r/\varepsilon\gdelta)(r + \log(1/\varepsilon\gdelta))}, \end{equation} where $C_T$ is a sufficiently large absolute constant. \subsubsection{Vectors with small entries} Let $s\equiv \min\{i'\mid u_i \le T\}$, and for $y'\in C(A)$ of at most unit norm, let $y \equiv y'\scn$. Since $y_i^2 \leq u_i$, this implies that $\|y\|_{\infty}^2 \leq T$. Since $P$ is a permutation matrix, we have $\|Py\|_{\infty}^2 \leq T$. In this case, we can reduce the analysis to that of a sparse embedding matrix. Indeed, observe that the matrix $B^{(i)}\in{\mathbb R}^{va\times a_i}$ is the concatenation of matrices $\Phi^{(i)}_1 D^{(i)}_1, \ldots, \Phi^{(i)}_{\kv} D^{(i)}_{\kv}$, where each $\Phi^{(i)}_j D^{(i)}_j \in {\mathbb R}^{v\times a_i}$ is a sparse embedding matrix. Now fix a value $j \in [\kv]$ and consider the block-diagonal matrix $N_j\in {\mathbb R}^{qv\times a_i}$: \[ N_j \equiv \left[ \begin{matrix} \Phi^{(1)}_j D^{(1)}_j & &\\ & \Phi^{(2)}_j D^{(2)}_j &&\\ && \ddots & \\ &&& \Phi^{(q)}_j D^{(q)}_j \\ \end{matrix} \right] \cdot P \] \begin{lemma}\label{lem:reduce} $N_j$ is a random sparse embedding matrix with $qv = t/\kv$ rows and $n$ columns. \end{lemma} \begin{proof} $N_j$ has a single non-zero entry in each column, and the value of this non-zero entry is random in $\{+1, -1\}$. Hence, it remains to show that the distribution of locations of the non-zero entries of $N_j$ is the same as that in a sparse embedding matrix. This follows from the distribution of the values $a_1, \ldots, a_{q}$, and the definition of $P$. \end{proof} \begin{lemma}\label{lem:HW JL} Let $\gdelta\in (0,1)$. For $j = 1, \ldots, \kv$, let $\mathcal{E}_h^j$ be the event $\mathcal{E}_h$ of Lemma \ref{lem:even hash}, applied to matrix $N_j$, with $\delta_h\equiv \gdelta/\kv$, and $W\equiv T\log(qv/\delta_h) + r/qv \le 2r/C_Tq$. Suppose $\cap_{j\in [a]} \mathcal{E}_h^j$ holds. This event has probability at least $1-\gdelta$. Then there is an absolute constant $K_L$ such that with failure probability at most $\delta_L$, \[ |\norm{Sy\scn}^2 - \norm{y\scn}^2 | \le K_L\sqrt{W\log(a/\delta_L)}. 
\] \end{lemma} \begin{proof} We apply Lemma~\ref{lem:HW use} with $N_j$ the sparse embedding matrix $\Phi D$, and $qv$, the number of rows of $N_j$, taking on the role of $t$ in Lemma~\ref{lem:even hash}, so that the parameter $W=T\log(qv/\delta_h) + r/qv$ as in the lemma statement. (And since $t=avq$, $qv/\delta_h = t/\gdelta$, so $W = r/C_Tq + r/qv \le 2r/C_Tq$.) Since $\norm{u_{s:n}}^2\le rT$, it suffices for Lemma~\ref{lem:even hash} if $qv$ is at least $2rT/T^2\log(t/\delta_h) = 2C_Tq$, or $v\ge 2C_T$. With $\delta_h=\gdelta/\kv$, by a union bound $\cap_{j\in[\kv]} E_h^j$ occurs with failure probability $\gdelta$, as claimed. We have, for given $N_j$, that with failure probability $\delta_L/a$, $| \norm{N_j y\scn}^2 - \norm{y\scn}^2| \le K_L\sqrt{W\log(a/\delta_L)}$. Applying a union bound, and using $$\|Sy\scn\|_2^2 = \fvk \sum_{j=1}^{\kv} \|N_j y\scn\|_2^2,$$ the result follows. \end{proof} \subsubsection{Vectors with large entries} Again, let $s\equiv \min\{i'\mid u_{i'}\le T\}$. Since $\sum_i u_i = r$, we have \[ s\le r/T = C_T q \log(t/\gdelta). \] The following is a standard non-weighted balls-and-bins analysis. \begin{lemma}\label{lem:non-weighted} Suppose the previously defined constant $C_t > 0$ is sufficiently large. Let $\mathcal{E}_{nw}$ be the event that $|h^{-1}(i) \cap [s] | \leq C_t \log (r/\varepsilon\gdelta)$, for all $i \in [q]$. Then $\Pr[\mathcal{E}_{nw}] \ge 1-\gdelta/r$. \end{lemma} \begin{proof} For any given $i \in [q]$, \[ \E[|h^{-1}(i) \cap [s]|] = s/q \le C_T \log(t/\gdelta) = O(\log(r/\epsilon\gdelta)). \] Hence, by a Chernoff bound, for a constant $C_t > 0$, $$\Pr[|h^{-1}(i) \cap [s]| > C_t \log (r/\varepsilon\gdelta)] \leq e^{-\Theta(\log(r/\varepsilon\gdelta))} = \frac{\gdelta}{rq},$$ The lemma now follows by a union bound over all $i \in [q]$. \end{proof} \begin{lemma}\label{lem:large JL} Assume that $\mathcal{E}_{nw}$ holds. Let $\mathcal{E}_s$ be the event that for all $y\in C(A)$, $\|Sy_{1:(s-1)}\|^2 = (1 \pm \varepsilon/2)\|y_{1:(s-1)}\|^2$. Then $\Pr[\mathcal{E}_s] \ge 1-\gdelta/r$. \end{lemma} \begin{proof} For $i = 1, 2, \ldots, q$, let $L^i$ be the at most $C_t\log (r/\varepsilon\gdelta)$-dimensional subspace which is the restriction of the column space $C(A)$ to coordinates $j$ with $h(j) = i$ and $j<s$. By Corollary \ref{cor:subspace}, for any fixed $i$, with probability at least $1-(\gdelta\varepsilon/r)^{C_{subKN}}$, for all $y \in L^i$, $\|Sy\|^2 = (1 \pm \varepsilon)\|y\|^2$. By a union bound and sufficiently large $C_{subKN} > 0$, this holds for all $i \in [q]$ with probability at least $1-q(\gdelta\varepsilon/r)^{C_{subKN}} > 1-\gdelta/r$. This condition implies $\mathcal{E}_s$, since $y_{1:(s-1)}$ can be expressed as $\sum_{i\in [q]} y^{(i)}$, where each $y^{(i)}\in L^i$, and letting $\hat B^{(i)}$ denote the $va$ rows of $S$ corresponding to entries from $B^{(i)}$, \begin{align*} \|Sy_{1:(s-1)}\|^2 & = \sum_{i\in [q]} \norm{\hat B^{(i)}y^{(i)}}^2 \\ & = \sum_{i\in [q]}(1 \pm \varepsilon)\|y^{(i)}\|^2 \\ & = (1\pm\varepsilon) \|y_{1:(s-1)}\|^2. \end{align*} A re-scaling to $\varepsilon/2$ completes the proof. \end{proof} \subsection{Putting it all together} Now consider any unit vector $y$ in $C(A)$, and write it as $y_{1:(s-1)} + y\scn$. We seek to bound $\langle Sy_{1:(s-1)} , Sy\scn \rangle$. 
For notational convenience, define the block-diagonal matrix $\tilde{N}_j$ to be the matrix \[ \tilde{N}_j \equiv \left[ \begin{matrix} 0 & &\\ \ldots & &\\ 0 & &\\ \Phi^{(1)}_j D^{(1)}_j & &\\ 0 & &\\ \ldots & &\\ 0 & &\\ & 0 &\\ & \ldots & \\ & 0 &\\ & \Phi^{(2)}_j D^{(2)}_j &&\\ & 0 &\\ & \ldots &\\ & 0 &\\ && \ddots & \\ &&& 0\\ &&& \ldots \\ &&& 0\\ &&& \Phi^{(q)}_j D^{(q)}_j \\ &&& 0\\ &&& \ldots\\ &&& 0 \end{matrix} \right] \cdot P \] Then $S = \sqrt{\vk} \cdot \sum_{j=1}^{\kv} \tilde{N}_j$. Notice that since the set of non-zero rows of $\tilde{N}_j$ and $\tilde{N}_{j'}$ are disjoint for $j \neq j'$, \begin{align}\label{eq:blockTZ} \langle Sy_{1:(s-1)}, Sy\scn \rangle & = \fvk \sum_{j=1}^{\kv} \langle \tilde{N}_j y_{1:(s-1)}, \tilde{N}_j y\scn \rangle\nonumber \\ & = \fvk \sum_{j=1}^{\kv} \langle N_j y_{1:(s-1)}, N_j y\scn \rangle, \end{align} where by Lemma \ref{lem:reduce}, each $N_j$ is a sparse embedding matrix with $qv = t/\kv$ rows and $n$ columns. \begin{lemma}\label{lem:cross terms JL} For $W$ as in Lemma~\ref{lem:HW JL}, and assuming events $\cap_{j=1}^{\kv} \mathcal{E}_h^j$, $\mathcal{E}_{nw}$, and $\mathcal{E}_s$, there is absolute constant $K_C$ such that with failure probability $\delta_C$, \[ | \langle Sy_{1:(s-1)}, Sy\scn \rangle | \le K_C \sqrt{W \log(a/\delta_C)}. \] \end{lemma} \begin{proof} We generalize Lemma~\ref{lem:cross terms} slightly to bound each summand $\langle N_j y_{1:(s-1)}, N_j y\scn \rangle$. For a given $j$, and for each $i\ge s$, let \[ z_m \equiv \sum_{i'\in h_j^{-1}(m), i'<s} y_{i'} D^{(j)}_{i'i'}, \] where $h_j$ is the hash function for $\Phi^{(j)}P$. We have for integer $p\ge 1$ using Khintchine's inequality, \begin{align*} & \E\big[\langle N_j y_{1:(s-1)}, N_j y\scn \rangle^{2p}\big]^{1/p} \\ & = \E\bigg[\bigg( \sum_{i\ge s} y_i D^{(j)}_{ii} z_{h_j(i)}\bigg)^{2p}\bigg]^{1/p} \\ & \le C_p \sum_{i\ge s} y_i^2 z_{h_j(i)}^2 = C_p \sum_{m\in h_j([s-1])} z_m^2 \sum_{\substack{i\in h_j^{-1}(m)\\ i\ge s}} y_i^2 \\ & \le C_p W V_j, \end{align*} where $V_j\equiv \sum_{m\in h_j^{-1}([s-1])} z_m^2$, and $C_p\le \Gamma(p+1/2)^{1/p}=O(p)$, and the last inequality uses the assumption that $\mathcal{E}^j_h$ holds. Putting $p=\log(a/\delta_C)$ and applying the Markov inequality, we have for all $j\in [a]$ that \[ \Pr[\langle N_j y_{1:(s-1)}, N_j y\scn \rangle^2 \ge e C_p W V_j] \ge 1 - a\exp(-p) = 1 - \delta_C. \] Moreover, $\fvk \sum_{j\in [a]} V_j = \norm{S y_{1:(s-1)}}^2$, which under $\mathcal{E}_s$ is at most $(1+\varepsilon/2) \norm{y_{1:(s-1)}}^2\le 1+\varepsilon/2$. Therefore, with failure probability at most $\delta_C$, we have \[ | \langle Sy_{1:(s-1)}, Sy\scn \rangle | \le K_C \sqrt{W \log(a/\delta_C)}, \] for an absolute constant $K_C$. \end{proof} \fi The following is our main theorem in this section. \begin{theorem}\label{thm:jlmain} For given $\delta>0$, with probability at least $1-\gdelta$, for $t= O( r \varepsilon^{-4} \log(r/\varepsilon\gdelta)(r + \log(1/\varepsilon\gdelta)))$, $S$ is an embedding matrix for $A$; that is, for all $y \in C(A)$, $\|Sy\|_2 = (1 \pm \varepsilon)\|y\|_2$. $S$ can be applied to $A$ in $O(\nnz(A)\epsilon^{-1} \log (r/\gdelta))$ time. \end{theorem} \STOComitedproof{ \begin{proof} Note that \[ t=\kv v q = O([\varepsilon^{-1}\log(r/\varepsilon\gdelta)][\varepsilon^{-1}][C_t r\varepsilon^{-2}(r+\log(1/\varepsilon\gdelta))]), \] yielding the bound claimed. From Lemma~\ref{lem:HW JL}, event $\cap_{j\in[\kv]} \mathcal{E}^j_h$ occurs with failure probability at most $\gdelta$. 
From Lemma~\ref{lem:non-weighted} and \ref{lem:large JL} the joint occurrence of $\mathcal{E}_{nw}$ and $\mathcal{E}_s$ holds with failure probability at most $2\gdelta/r\le \gdelta$. Given these events, from Lemmas~\ref{lem:cross terms JL} and \ref{lem:HW JL}, we have with failure probability at most $\delta_L + \delta_C$ that \begin{align*} & | \norm{Sy}^2 - \norm{y}^2| \\ & = | \norm{Sy_{1:(s-1)}}^2 - \norm{y_{1:(s-1)}}^2 + \norm{Sy\scn}^2 - \norm{y\scn}^2 \\ & \qquad\qquad + 2 \langle Sy_{1:(s-1)}, Sy\scn \rangle | \\ & \le (\varepsilon/2) \norm{y_{1:(s-1)}}^2 + K_L\sqrt{W\log(a/\delta_L)} + 2 K_C \sqrt{W \log(a/\delta_C)}, \end{align*} where $W\le 2r/C_Tq$. Setting $\delta_C = \delta_L = \gdelta K_{sub}^{-r}$, where $K_{sub}$ is from Lemma~\ref{lem:subspaceBound}, and recalling that $\kv=O(\varepsilon^{-1}\log(r/\varepsilon\gdelta))$, we have \[ W \log(a/\delta_L) \leq \frac{2r\log(a/\delta_L)}{C_T q} = \frac{2\varepsilon^2 O(r + \log(1/\varepsilon\gdelta))}{C_T (r + \log(1/\varepsilon\gdelta))} \le \varepsilon^2/C'_T, \] for absolute constant $C'_T$. Using Lemma~\ref{lem:subspaceBound}, we have that with failure probability at most $\gdelta + \gdelta +K_{sub}^r(2\gdelta K_{sub}^{-r})\le 4\gdelta$, that \[ | \norm{Sy}^2 - \norm{y}^2| \le \varepsilon/2 +\sqrt{\varepsilon^2/C'_T} (K_L + 2K_C) \le \varepsilon \] for suitable choice of $C'_T$. Adjusting $\gdelta$ by a constant factor gives the result. \end{proof} } \ifSTOC \section{Approximating Leverage \\ Scores}\label{sec:leverage} \else \section{Approximating Leverage Scores}\label{sec:leverage} \fi Let $A \in \mathbb{R}^{n \times d}$ with rank $r$. Let $U \in \mathbb{R}^{n \times r}$ be an orthonormal basis for $C(A)$. In \cite{dmmw11} it was shown how to obtain a $(1 \pm \varepsilon)$-approximation $u_i'$ to the leverage score $u_i$ for all $i \in [n]$, for a constant $\varepsilon > 0$, in time $O(nd \log n) + O(d^3 \log n \log d)$. Here we improve the running time of this task as follows. We state the running time for constant $\varepsilon$, though for general $\varepsilon$ the running time would be $O(\nnz(A)\log n) + {\mathrm{poly}}(r\varepsilon^{-1} \log n)$. \begin{theorem}\label{thm:icml} For any constant $\varepsilon > 0$, there is an algorithm which with probability at least $2/3$, outputs a vector $(u_1', \ldots, u_n')$ so that for all $i \in [n]$, $u_i' = (1 \pm \varepsilon)u_i$. The running time is \ifSTOC $O(\nnz(A) \log n + r^3 \log^2 r + r^2 \log n)$. \else $$O(\nnz(A) \log n + r^3 \log^2 r + r^2 \log n).$$ \fi The success probability can be amplified by independent repetition and taking the coordinate-wise median of the vectors $u'$ across the repetitions. \end{theorem} \STOComitedproof{ \begin{proof} We first run the algorithm of Theorem 2.6 and Theorem 2.7 of \cite{ckl12}. The first theorem gives an algorithm which outputs the rank $r$ of $A$, while the second theorem gives an algorithm which also outputs the indices $i_1, \ldots, i_r$ of linearly independent columns of $A$. The algorithm takes $O(\nnz(A)\log d) + O(r^3)$ time and succeeds with probability at least $1-O(\log d)/d^{1/3}$. Hence, in what follows, we can assume that $A$ has full rank. We follow the same procedure as Algorithm 1 in \cite{dmmw11}, using our improved subspace embedding. The proof of \cite{dmmw11} proceeds by choosing a subspace embedding $\Pi_1$, computing $\Pi_1 A$, then computing a change of basis matrix $R$ so that $\Pi_1 A R$ has orthonormal columns. 
The analysis there then shows that the row norms $\|(AR)_{i, *}\|_2^2$ are equal to $u_i(1 \pm \varepsilon)$. To obtain these row norms quickly, an $r \times O(\log n)$ Johnson-Lindenstrauss matrix $\Pi_2$ is sampled, and one first computes $R\Pi_2$, followed by $A(R\Pi_2)$. Using a fast Johnson-Lindenstrauss transform $\Pi_1$, one can compute $\Pi_1 A$ in $O(nr \log n)$ time. $\Pi_1$ has $O(r \log n \log r)$ rows, and one can compute the $r \times r$ matrix $R$ in $O(r^3 \log n \log r)$ time by computing a QR-factorization. Computing $R\Pi_2$ can be done in $O(r^2 \log n)$ time, and computing $A (R \Pi_2)$ can be done in $O(\nnz(A) \log n)$ time. Our only change to this procedure is to use a different matrix $\Pi_1$, which is the composition of our subspace embedding matrix $S$ of Theorem \ref{thm:jlmain} with parameter $t = O(r^2 \log r)$, together with a fast Johnson Lindenstrauss transform $F$. That is, we set $\Pi_1 = F \cdot S$. Here, $F$ is an $O(r \log^2 r) \times t$ matrix, see Section 2.3 of \cite{dmmw11} for an instantiation of $F$. Then, $S \cdot A$ can be computed in $O(\nnz(A) \log r)$ time by Lemma \ref{lem:time}. Moreover, $F \cdot (SA)$ can be computed in $O(t \cdot r \log r) = O(r^3 \log^2 r)$ time. One can then compute the matrix $R$ above in $O(r^3 \log^2 r)$ time by computing a QR-factorization of $FSA$. Then one can compute $R \Pi_2$ in $O(r^2 \log n)$ time, and computing $A (R \Pi_2)$ can be done in $O(\nnz(A) \log n)$ time. Hence, the total time is $O(\nnz(A)\log n + r^3 \log^2 r + r^2 \log n)$ time. Notice that by Theorem \ref{thm:jlmain}, with probability at least $4/5$, $\|Sy\|_2 = (1 \pm \varepsilon)\|y\|_2$ for all $y \in C(A)$, and by Lemma 3 of \cite{dmmw11}, with probability at least $9/10$, $\|FSy\|_2 = (1\pm \varepsilon)\|Sy\|_2$ for all $y \in C(A)$. Hence, $\|FSAx\|_2 = (1 \pm \varepsilon)^2 \|Ax\|_2$ for all $x \in \mathbb{R}^d$ with probability at least $7/10$. There is also a small $1/n$ probability of failure that $\|(AR\Pi_2)_{i, *}\|_2 \neq (1 \pm \varepsilon)\|(AR)_{i, *}\|_2$ for some value of $i$. Hence, the overall success probability is at least $2/3$. The rest of the correctness proof is identical to the analysis in \cite{dmmw11}. \end{proof} } \section{Least Squares Regression}\label{sec:regression} Let $A \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$ be a matrix and vector for the regression problem: $\min_x \norm{Ax-b}_2$. We assume $n > d$. Again, let $r$ be the rank of $A$. We show that with probability at least $2/3$, we can find an $x'$ for which $$\norm{Ax'-b}_2 \leq (1+\varepsilon)\min_x \norm{Ax-b}_2.$$ We will give several different algorithms. First, we give an algorithm showing that the dependence on $\nnz(A)$ can be linear. Next we shift to the generalized case, with multiple right-hand-sides, and after some analytical preliminaries, give an algorithm based on sampling using leverage scores. Finally, we discuss affine embeddings, constrained regression, and iterative methods. \begin{theorem}\label{thm:lin reg} The $\ell_2$-regression problem can be solved up to a $(1+\varepsilon)$-factor with probability at least $2/3$ in $O(\nnz(A) + O(d^3 \varepsilon^{-2} \log^7(d/\varepsilon))$ time. \end{theorem} \begin{proof} By Theorem \ref{thm:main} applied to the column space $C(A\circ b)$, where $A\circ b$ is $A$ adjoined with the vector $b$, it suffices to compute $\Phi D A$ and $\Phi D b$ and output argmin$_x \norm{\Phi D Ax- \Phi D b}_2$. 
We use the fact that $d \geq r$, and apply Theorem \ref{thm:partition main} with $t = O(d^2 \varepsilon^{-2} \log^6(d/\varepsilon))$. The theorem implies that with probability at least $9/10$, all vectors $y$ in the space spanned by the columns of $A$ and $b$ have their norms preserved up to a $(1+\varepsilon)$-factor. Notice that $\Phi D A$ and $\Phi D b$ can be computed in $O(\nnz(A))$ time. Now we have a regression problem with $d' = O(d^2 \varepsilon^{-2} \log^6(d/\varepsilon))$ rows and $d$ columns. Using the Fast Johnson-Lindenstrauss transform, this can be solved in $O(d' d \log (d/\varepsilon) + d^3 \varepsilon^{-1} \log d)$ time, see, Theorem 12 of \cite{s06}. The success probability is at least $9/10$. This is $O(d^3 \varepsilon^{-2} \log^7(d/\varepsilon))$ time. \end{proof} Our remaining algorithms will be stated for generalized regression. \subsection{Generalized Regression and Affine Embeddings} The regression problem can be slightly generalized to \[ \min_X \normF{AX-B}, \] where $X$ and $B$ are matrices rather than vectors. This problem, also called \emph{multiple-response} regression, is important in the analysis of our low-rank approximation algorithms, and also of independent interest. Moreover, while an analysis involving the embedding of $A\circ b$ is not significantly different than for an embedding involving $A$ alone, this is not true for $A\circ B$: different techniques must be considered. This subsection gives the needed theorems needed for analyzing algorithms for generalized regression, and also gives a general result for \emph{affine embeddings}. Another form of sketching matrix relies on sampling based on leverage scores; it will be convenient to define it using sampling with replacement: for given sketching dimension $t$, for $m\in [t]$ let $S\in{\mathbb R}^{t\times n}$ have $S_{m, z_m} \gets 1/\sqrt{tp_{z_m}}$, where $p_i \ge u_i/2r$, and $z_m = i$ with probability $p_i$. The following fact is due to Rudelson\cite{Rudelson}, but has since seen many proofs, and follows readily from Noncommutative Bernstein inequalities \cite{Recht}, which are very similar to matrix Bernstein inequalities \cite{Zouzias}. \begin{fact}\label{fact:lev embed} For rank-$r$ $A\in{\mathbb R}^{n\times d}$ with row leverage scores $u_i$, there is $t=O(r\varepsilon^{-2}\log r)$ such that leverage-score sketching matrix $S\in {\mathbb R}^{t\times n}$ is an $\epsilon$-embedding matrix for $A$. \end{fact} \subsection{Preliminaries} We collect a few standard lemmas and facts in this \ifSTOC subsection, where the main lemma needed is the following, which gives well-known bounds for approximate matrix multiplication using sketches. \else subsection. \fi \begin{lemma}\label{lem:tail}{\bf (Approximate Matrix Multiplication}) For $A$ and $B$ matrices with $n$ rows, where $A$ has $n$ columns, and given $\epsilon>0$, there is $t=\Theta(\epsilon^{-2})$, so that for a $t \times n$ generalized sparse embedding matrix $S $, or $t\times n$ fast JL matrix, or $t\log(nd)\times n$ subsampled randomized Hadamard matrix, or leverage-score sketching matrix for $A$ under the condition that $A$ has orthonormal columns, \[ \Pr[\normF{A^\top S ^\top S B - A^\top B}^2 < \epsilon^2\normF{A}^2\normF{B}^2] \ge 1-\delta, \] for any fixed $\delta >0$. \end{lemma} \ifSTOC\else \begin{proof} For a generalized sparse embedding matrix with parameters $k$ and $v$, first suppose $v=1$, so that $S $ is the embedding matrix of \S\ref{sec:sparse embed}. Let $X = A^\top S ^\top S B - AB$. 
Then $X_{i,j} = A_i^\top S ^\top S B_j - A_i^\top B_j$, where $A_i$ is the $i$-th column of $A$ and $B_j$ is the $j$-th column of $B$. Thorup and Zhang \cite{tz04} have shown that ${\bf E}[X_{i,j}] = 0$ and ${\bf Var}[X_{i,j}] = O(1/t) \norm{A_i}_2^2 \norm{B_j}_2^2.$ Consequently, ${\bf E}[X_{i,j}^2] = {\bf Var}[X_{i,j}] = O(1/t) \cdot \norm{A_i}_2^2 \norm{B_j}_2^2,$ from which for an appropriate $t = \Theta(\epsilon^{-2})$, the lemma follows by Chebyshev's inequality. For $v>1$, $X_{i,j} = \frac{v}{t}\sum_{i\in [t/v]} \hat X_{i,j}$, see \eqref{eq:blockTZ}, so that` \[ \Var[X_{i,j}] = \frac{v^2}{t^2}\sum_i \Var[\hat X_{i,j}] \le \frac{v}{t^2}\norm{A_i}_2^2 \norm{B_j}_2^2 \le \frac{1}{t}\norm{A_i}_2^2 \norm{B_j}_2^2, \] and similarly the lemma follows for the sparse embedding matrices. The result for fast JL matrices was shown by Sarl{\'o}s\cite{s06}, and for subsampled Hadamard by Drineas et al.\cite{dmms11}, proof of Lemma~5. (The claim also follows from norm-preserving properties of these transforms, see \cite{kn10}.) For leverage-score sampling, first note that \[ A^\top S ^\top S B - A^\top B = \frac{1}{t} \sum_{\substack{i\in [n]\\m\in[t]}} A_{i,*}^\top B_{i,*} \left[\frac{\Ibr{z_m=i}}{p_i} - 1\right] \] we have $\E[A^\top S ^\top S B - A^\top B] = 0$, and using the independence of the $z_m$, the second moment of $\normF{A^\top S ^\top S B - A^\top B}$ is the expectation of \begin{align*} & \tr[ ( A^\top S ^\top S B - A^\top B)^\top (A^\top S ^\top S B - A^\top B)] \\ & = \frac{1}{t^2}\tr \sum_{\substack{i,i'\in [n]\\m\in[t]}} B_{i',*}^\top A_{i',*}A_{i,*}^\top B_{i,*} \left[\frac{\Ibr{z_m=i}}{p_i} - 1\right]\left[\frac{\Ibr{z_m=i'}}{p_{i'}} - 1\right], \end{align*} which is \[ \frac{1}{t^2} \sum_{m\in [t]} \tr\left[ \left[\sum_{i\in [n]} B_{i,*}^\top A_{i,*}A_{i,*}^\top B_{i,*}\frac{1}{p_i}\right] - B^\top A A^\top B\right], \] or using the cyclic property of the trace, the fact that $p_i \ge \norm{A_{i,*}}^2/2\norm{A}^2$, and the fact that $\tr[B^\top A A^\top B] = \norm{A^\top B}^2 \le \norm{A}^2\norm{B}^2$, \[ \frac{1}{t} \left[\sum_{i\in [n]} \norm{A_{i,*}}^2\norm{B_{i,*}}^2\frac{1}{p_i} - \tr[B^\top A A^\top B]\right] \le \frac{2}{t} \norm{A}^2\norm{B}^2, \] and so the lemma follows for large enough $t$ in $O(\varepsilon^{-2})$, by Chebyshev's inequality. \end{proof} \begin{fact}\label{fact:subspace jl} Given $n\times d$ matrix $A$ of rank $k\le n^{1/2-\gamma}$ for $\gamma>0$, and $\epsilon > 0$, an $m\times n$ fast JL matrix $\Pi$ with $m=\Theta(k/\epsilon^2)$ is a subspace embedding for $A$ with failure probability at most $\delta$, for any fixed $\delta>0$, and requires $O(nd\log n)$ time to apply to $A$. \end{fact} A similar fact holds for subsampled Hadamard transforms. \begin{fact}\label{fact:pyth}{\bf (Pythagorean Theorem)} If $C$ and $D$ matrices with the same number of rows and columns, then $C^\top D=0$ implies $\normF{C+D}^2 = \normF{C}^2 + \normF{D}^2$. \end{fact} \begin{fact}\label{fact:normal}{\bf (Normal Equations)} Given $n\times d$ matrix $C$, and $n\times d'$ matrix $D$ consider the problem \[ \min_{X\in \mathbb{R}^{d\times d'}} \normF{CX-D}^2. \] The solution to this problem is $X^* = C^- D$, where $C^-$ is the Moore-Penrose inverse of $C$. Moreover, $C^\top (CX^*-D)=0$, and so if $c$ is any vector in the column space of $C$, then $c^\top (CX^*-D)=0$. Using Fact~\ref{fact:pyth}, for any $X$, \[ \normF{CX-D}^2 = \normF{C(X-X^*)}^2 + \normF{CX^*-D}^2. 
\] \end{fact} \fi \subsection{Generalized Regression: Conditions} The main theorem in this subsection is the following. It could be regarded as a generalization of Lemma 1 of \cite{dmms11}. \begin{theorem}\label{thm:genReg} Suppose $A$ and $B$ are matrices with $n$ rows, and $A$ has rank at most $r$. Suppose $S $ is a $t \times n$ matrix, and the event occurs that $S $ satisfies Lemma~\ref{lem:tail} with error parameter $\sqrt{\epsilon/r}$, and also that $S $ is a subspace embedding for $A$ with error parameter $\epsilon_0 \le 1/\sqrt{2}$. Then if $\tilde{Y}$ is the solution to \begin{equation}\label{eqn:approxls} \min_{Y} \normF{S (A Y - B)}^2, \end{equation} and $Y^*$ is the solution to \begin{equation}\label{eqn:optls} \min_{Y} \normF{A Y - B}^2, \end{equation} then $$\normF{A\tilde{Y} - B} \le (1+\epsilon) \normF{A Y^* - B}.$$ \end{theorem} \ifSTOC\else Before proving Theorem \ref{thm:genReg}, we will need the following lemma. \begin{lemma}\label{lem:betabound} For $S , A, B, Y^*$ and $\tilde{Y}$ as in Theorem \ref{thm:genReg}, assume that $A$ has orthonormal columns. Then $$\normF{A(\tilde{Y}-Y^*)} \leq 2\sqrt{\varepsilon}\normF{B-AY^*}.$$ \end{lemma} \begin{proof} The proof is in the appendix.\end{proof} \fi \begin{proofof}{of Theorem \ref{thm:genReg}} \ifSTOC Omitted in this version. \else Let $A$ have the thin SVD $A=U\Sigma V^\top$. Since $U$ is a basis for $C(A)$, there are $X^*$ and $\tilde X$ so that $S(U\tilde X - B)= S(A\tilde Y-B)$ and $UX^* - B = A Y^* - B$, and therefore $\norm{U \tilde X - B} \le (1+\varepsilon) \norm{U X^* - B}$ implies the theorem: we can assume without loss of generality that $A$ has orthonormal columns. With this assumption, and using the Pythagorean Theorem (Fact~\ref{fact:pyth}) with the normal equations (Fact~\ref{fact:normal}), and then Lemma~\ref{lem:betabound}, \begin{align*} \normF{A \tilde{Y} - B}^2 & = \normF{A Y^* - B}^2 + \normF{A(\tilde{Y} - Y^*)}^2 \\ & \le \normF{A Y^* - B}^2 + 4\varepsilon\normF{AY^* - B}^2 \\ & \le (1+4\varepsilon)\normF{AY^* - B}^2, \end{align*} and taking square roots and adjusting $\varepsilon$ by a constant factor completes the proof. \fi \end{proofof} \subsection{Generalized Regression: Algorithm} Our main algorithm for regression is given in the proof of the following theorem. \begin{theorem}\label{thm:renRegAlg} Given $A\in {\mathbb R}^{n\times d}$ of rank $r$, and $B\in{\mathbb R}^{n\times d'}$, the regression problem $\min_Y \normF{AY-B}$ can be solved up to $\varepsilon$ relative error with probability at least $2/3$, in time \[ O(\nnz(A)\log n + r^2(r\varepsilon^{-1} + rd' + r\log^2 r + d'\varepsilon^{-1} + \log n)), \] and obtaining a coreset of size $O(r(\varepsilon^{-1} + \log r))$. \end{theorem} \begin{proof} We estimate the leverage scores of $A$ to relative error $1/2$, using the algorithm of Theorem~\ref{thm:icml}, which has the side effect of finding $r$ independent columns of $A$, so that we can assume that $d=r$. If $U$ is a basis for $C(A)$, then for any $X$ there is a $Y$ so that $UX=AY$, and vice versa, so that conditions satisfied by $UX$ are satisfied by $AY$. That is, we can (and will hereafter) assume that $A$ has $r$ orthonormal columns, when considering products $AY$. We construct a leverage-score sketching matrix $S$ for $A$ with $t=O(r/\varepsilon + r\log r)$, so that Lemma~\ref{lem:tail} is satisfied for error parameter at most $\sqrt{\varepsilon/r}$. With this $t$, $S$ will also be an $\varepsilon$-embedding matrix with $\varepsilon<1/\sqrt{2}$, using Lemma~\ref{fact:lev embed}. 
These conditions and Theorem~\ref{thm:genReg} imply that the solution $\tilde Y$ to $\min_Y \norm{S(AY-B)}$ has \[ \norm{A\tilde Y - B}\le (1+\varepsilon)\min_Y \norm{AY-B}. \] The running time is that for computing the leverage scores, plus the time needed for finding $\tilde Y$, which can be done by computing a $QR$ factorization of $SA$ and then computing $R^{-1}Q^\top SB$, which requires $O(r^3(\varepsilon^{-1} + \log r) + r^2(\varepsilon^{-1}+\log r) d' + r^3d')$ time, and the cost bound follows. \end{proof} \subsection{Affine Embeddings}\label{subsec:genAff} We also use \emph{affine embeddings} for which a stronger condition than Theorem~\ref{thm:genReg} is satisfied. \begin{theorem}\label{thm:genAff} Suppose $A$ and $B$ are matrices with $n$ rows, and $A$ has rank at most $r$. Suppose $S $ is a $t \times n$ matrix, and the event occurs that $S $ satisfies Lemma~\ref{lem:tail} with error parameter $\varepsilon/\sqrt{r}$, and also that $S $ is a subspace embedding for $A$ with error parameter $\varepsilon$. Let $X^*$ be the solution of $\min_X\norm{AX-B}$, and $\tilde B \equiv AX^*-B$. For all $X$ of appropriate shape, \[ \norm{S(AX-B)}^2 - \norm{S\tilde B}^2 = (1\pm 2\varepsilon) \norm{AX-B}^2 - \norm{\tilde B}^2, \] for $\varepsilon\le 1/2$. So $S$ is an affine embedding with~$2\varepsilon$ relative error up to an additive constant. (That is, a \emph{weak} embedding.) If also $\norm{S\tilde B}^2 = (1\pm\varepsilon)\norm{\tilde B}^2$, then \begin{equation}\label{eq:aff embed} \norm{S(AX-B)}^2 = (1\pm 3\varepsilon) \norm{AX-B}^2, \end{equation} and $S$ is a $3\varepsilon$-affine embedding. \end{theorem} Note that even when only the weaker first statement holds, the sketch still can be used for optimization, since adding a constant to the objective function of an optimization does not change the solution. \STOComitedproof{ \begin{proof} If $U$ is a basis for $C(A)$, then for any $X$ there is a $Y$ so that $UX=AY$, and vice versa, so that conditions satisfied by $UX$ are satisfied by $AY$. That is, we can (and will hereafter) assume that $A$ has $r$ orthonormal columns. Using the fact that $\norm{W}^2 = \tr W^\top W$ for any $W$, the embedding property, the fact that $\norm{A}\le\sqrt{r}$, and the matrix product approximation condition of Lemma~\ref{lem:tail}, \begin{align*} & \norm{S(AX-B)}^2 - \norm{S\tilde B}^2 \\ & = \norm{SA(X-X^*)+ S(AX^* - B)}^2 - \norm{S\tilde B}^2 \\ & = \norm{SA(X-X^*)}^2 - 2\tr [(X-X^*)^\top A^\top S^\top S\tilde B] \\ & = \norm{A(X-X^*)}^2 \\ & \qquad \pm \varepsilon(\norm{A(X-X^*)}^2 + 2 \norm{X-X^*} \norm{\tilde B}). \end{align*} The normal equations (Fact~\ref{fact:normal}) imply that $\norm{AX-B}^2 = \norm{A(X-X^*)}^2 + \norm{\tilde B}^2$, and using the observation that $(a+b)^2\le 2(a^2+b^2)$ for $a,b\in{\mathbb R}$, \begin{align*} \norm{S(AX-B)}^2 - & \norm{S\tilde B}^2 - (\norm{AX-B}^2 - \norm{\tilde B}^2) \\ & = \pm \varepsilon(\norm{A(X-X^*)}^2 + 2 \norm{X-X^*} \norm{\tilde B}) \\ & \le \pm \varepsilon(\norm{A(X-X^*)} + \norm{\tilde B} )^2 \\ & \le \pm 2\varepsilon(\norm{A(X-X^*)}^2 + \norm{\tilde B}^2) \\ & = \pm 2\varepsilon \norm{AX-B}^2, \end{align*} and the first statement of the theorem follows. When $\norm{S\tilde B}^2 = (1\pm\varepsilon)\norm{\tilde B}^2$, the second statement follows, since then \[ \norm{S(AX-B)}^2 = (1\pm 2\varepsilon)\norm{AX-B}^2 \pm\varepsilon \norm{\tilde B}^2 = (1\pm 3\varepsilon)\norm{AX-B}^2, \] using $\norm{\tilde B}\le \norm{AX-B}$ for all $X$. 
\end{proof} } To apply this theorem to sparse embeddings, we will need the following lemma. \begin{lemma}\label{lem:len fixed sparse} Let $A$ be an $n \times d$ matrix. Let $S \in \mathbb{R}^{t \times n}$ be a randomly chosen sparse embedding matrix for an appropriate $t = \Omega(\varepsilon^{-2})$. Then with probability at least $9/10$, $$\|SA\|_F^2 = (1 \pm \varepsilon)\|A\|_F^2.$$ \end{lemma} \begin{proof} Please see the appendix. \end{proof} \begin{lemma}\label{lem:len fixed srht} Let $A$ be an $n \times d$ matrix. Let $S \in \mathbb{R}^{t \times n}$ be an SRHT matrix for an appropriate $t = \Omega(\varepsilon^{-2} (\log n)^2)$. Then with probability at least $9/10$, $$\|SA\|_F^2 = (1 \pm \varepsilon)\|A\|_F^2.$$ \end{lemma} \begin{proof} Please see the appendix. \end{proof} \begin{theorem}\label{thm:genAff res} Let $A$ and $B$ be matrices with $n$ rows, and $A$ has rank at most $r$. The following conditions hold with fixed nonzero probability. If $S$ is a $t\times n$ sampled randomized Hadamard transform (SRHT) matrix, there is $t = O(\varepsilon^{-2} [\log^2 n + (\log r)(\sqrt{r}+\sqrt{\log n})^2])$ such that $S$ is an $\varepsilon$-affine embedding for $A$ and $B$. If $S$ is a $t\times n$ sparse embedding, there is $t=O(\varepsilon^{-2}r^2\log^6(r/\varepsilon))$ such that $S$ is an $\varepsilon$-affine embedding. If $S$ is a $t\times n$ leverage-score sampling matrix, there is $t=O(\varepsilon^{-2} r\log r)$ such that $S$ is a weak $\varepsilon$-affine embedding. If the row norms of $\tilde B$ are available, a modified leverage-score sampler is an $\varepsilon$-embedding. (Here $\tilde B$ is as in Theorem~\ref{thm:genAff}.) \end{theorem} Note that none of the dimensions $t$ depend on the number of columns of $B$. \begin{proof} To apply Theorem~\ref{thm:genAff}, we need each given sketching matrix to satisfy conditions on multiplicative error, subspace embedding, and preservation of $\norm{\tilde B}$. As in that theorem, we can assume without loss of generality that $A$ has $r$ orthonormal columns. Regarding the multiplicative error bound of $\epsilon/\sqrt{r}$, Lemma~\ref{lem:tail} tells us that SRHT achieves this bound for $t=O(\log(n)^2 \varepsilon^{-2}r)$, and the other two need $t=O(\varepsilon^{-2}r)$. Regarding subspace embedding, as noted in the introduction, an SRHT matrix achieves this for $t = O(\varepsilon^{-2} (\log r)(\sqrt{r}+\sqrt{\log n})^2)$. A sparse embedding requires $t=O(\varepsilon^{-2}r^2\log^6(r/\varepsilon))$, as in Theorem~\ref{thm:partition main}, and leverage score samplers need $t=O(\varepsilon^{-2} r\log r)$, as mentioned in Fact~\ref{fact:lev embed}. Regarding preservation of the norm of $\tilde B$, Lemma~\ref{lem:len fixed srht} gives the claim for SRHT matrices, and Lemma~\ref{lem:len fixed sparse} gives the claim for sparse embeddings, where the ``$A$'' of those lemmas is $\tilde B$. Thus the conditions are satisfied for Theorem~\ref{thm:genAff} to yield the the claims for SRHT and for sparse embeddings, and for the weak condition for leverage score samplers. We give only a terse version of the argument for the last statement of the theorem. When the squared row norms $b_i \equiv \norm{\tilde B_{i,*}}^2$ of $\tilde B$ are available, a sampler which picks row $i$ with probability $p_i = \min\{1, t b_i/\norm{\tilde B}^2\}$, and scales that row with $1/\sqrt{tp_i}$, will yield a matrix whose Frobenius norm will be $(1\pm 1/\sqrt{t})\norm{\tilde B}$ with high probability. 
If the leverage score sampler picks rows with probability $q_i$, create a new sampler that picks rows with probability $p'_i \equiv (p_i + q_i)/2$, and scales by $1/\sqrt{t p'_i}$. The resulting sampler will satisfy the norm preserving property for $\tilde B$, and also satisfy the same properties as the leverage score sampler, up to a constant factor. The resulting sampler is thus an $O(\varepsilon)$-affine embedding. \end{proof} \subsection{Affine Embeddings and Constrained Regression}\label{subsec:constrained} From the condition \eqref{eq:aff embed}, an affine embedding can be used to reduce the work needed to achieve small error in regression problems, even when there are constraints on $X$. We consider the constraint $X\ge 0$, that the entries of $X$ are nonnegative. The problem $\min_{X\ge 0} \norm{AX - B}^2$, for $B\in{\mathbb R}^{n\times n}$ and $A\in {\mathbb R}^{n\times d}$, arises among other places as a subroutine in finding a nonnegative approximate factorization of $B$. For an affine embedding $S$, \[ \min_{X\ge 0} \norm{S(AX - B)}^2 = (1\pm \varepsilon)\min_{X\ge 0} \norm{AX - B}^2, \] yielding an immediate reduction yielding a solution with relative error $\varepsilon$: just solve the sketched version of the problem. From Theorem~\ref{thm:genAff res}, suitable sketching matrices for constrained regression include a sparse embedding, an SRHT matrix, or a leverage score sampler. (The latter may not need the condition of preserving the norm of $\tilde B$ if a high-accuracy solver is used for the sketched solution, or if otherwise the additive constant is not an obstacle for that solver.) Since it's immediate that affine embeddings can be composed to obtain an affine embedding (with a constant factor loss), the most efficient approach might be use a sketch that first applies a sparse embedding, and then applies an SRHT matrix, resulting in a sketched problem with $O(\varepsilon^{-2}r\log(r/\varepsilon)^2)$ rows, and where computing the sketch takes $O(\nnz(A) + \nnz(B)) + \tO(\varepsilon^{-2}r^2 (d+d'))$ time, for $B\in {\mathbb R}^{n\times d'}$. When $r$ is unknown, the upper bound $r\le d$ can of course be used. For low-rank approximation, discussed in \S\ref{sec:low rank}, we require $X$ to satisfy a rank condition; the same techniques apply. \iffalse Another way to apply our techniques, which is more expensive but yields higher quality answers, is as follows: first factor $A$ as $A=QW$, with $Q$ having orthormal columns and $W\in{\mathbb R}^{r\times r}$ upper triangular. From Fact~\ref{fact:normal}, the normal equations, \[ \norm{QWX - B}^2 = \norm{QWX - QY^*}^2 + \norm{QY^* - B}^2 = \norm{WX - Q^\top B}^2 + \norm{QQ^\top B - B}^2 \] where $Y^*=Q^\top B = \argmin_Y \norm{QY-B}^2$, using here the fact that $Q$ has orthonormal columns. Thus after solving a generalized regression problem, it remains to solve $\min_{X\ge 0} \norm{WX-Q^\top B}^2$, that is, to find for each column of $Q^\top B$ the nearest vector in the cone generated by the columns of~$W$. For the given column, this problem can be solved in $r^4$ time, so that the cost for all columns is $O(r^4 d')$. \fi \subsection{Iterative Methods for Regression}\label{subsec:iterative} \ifSTOC We use the matrix $R$ obtained for leverage score approximation in \S\ref{sec:leverage} as a pre-conditioner for standard iterative, conjugate-gradient methods; such iterative methods have a running time dependent on the condition number of the input matrix. 
Our pre-conditioner reduces that condition number to a constant, and the resulting algorithm implies the following theorem, whose proof is given in the full paper. \else A classical approach to finding $\min_{X} \norm{AX-B}$ is to solve the normal equations (Fact~\ref{fact:normal}) $A^\top A X = A^\top B$ via Gaussian elimination; for $A\in{\mathbb R}^{n\times r}$ and $B\in{\mathbb R}^{n\times d'}$, this requires $O(\min\{r\nnz(B), d'\nnz(A)\})$ time to form $A^\top B$, $O(r\nnz(A))$ to form $A^\top A$, and $O(r^3 + r^2d')$ to solve the resulting linear systems. (Another method is to factor $A=QW$, where $Q$ has orthonormal columns and $W$ is upper triangular; this typically trades a slowdown for a higher-quality solution.) Another approach to regression is to apply an iterative method (from the general class of Krylov, CG-like methods) to a pre-conditioned version of the problem. In such methods, an estimate $x^{(m)}$ of a solution is maintained, for iterations $m=0,1,\ldots$, using data obtained from previous iterations. The convergence of these methods depends on the \emph{condition number} $\kappa(A^\top A)= \frac{\sup_{x,\norm{x}=1} \norm{Ax}^2}{\inf_{x,\norm{x}=1} \norm{Ax}^2}$ of the input matrix. A classical result (\cite{Luen} via \cite{MSM}, or Theorem 10.2.6 of \cite{GvL}) is that \begin{equation}\label{eq:CG accuracy} \frac{\norm{A(x^{(m)} - x^*)}^2}{\norm{A(x^{(0)} - x^*)}^2} \le 2\left(\frac{\sqrt{\kappa(A^\top A)} - 1}{\sqrt{\kappa(A^\top A)} + 1}\right)^m. \end{equation} Thus the running time of CG-like methods, such as {\tt CGNR} \cite{GvL}, depends on the (unknown) condition number. The running time per iteration is the time needed to compute matrix vector products $Ax$ and $A^\top v$, plus $O(n+d)$ for vector arithmetic, or $O(\nnz(A))$. Pre-conditioning reduces the number of iterations needed for a given accuracy: suppose for non-singular matrix $R$, the condition number $\kappa(R^\top A^\top AR)$ is small. Then a CG-like method applied to $AR$ would converge quickly, and moreover for iterate $y^{(m)}$ that has error $\alpha^{(m)} \equiv \norm{ARy^{(m)} - b}$ small, the corresponding $x\gets Ry^{(m)}$ would have $\norm{Ax-b} = \alpha^{(m)}$. The running time per iteration would have an additional $O(d^2)$ for computing products involving $R$. Consider the matrix $R$ obtained for leverage score approximation in \S\ref{sec:leverage}, where a subspace embedding matrix $\Pi_1$ is applied to $A$, and $R$ is computed so that $\Pi_1 A R$ has orthonormal columns. Since $\Pi_1$ is a subspace embedding matrix to constant accuracy $\varepsilon_0$, for all unit $x\in{\mathbb R}^d$, $\norm{ARx}^2=(1\pm\varepsilon_0)\norm{\Pi_1 ARx}^2= (1\pm \varepsilon_0)^2$. It follows that the condition number \[ \kappa(R^\top A^\top A R) \le \frac{(1+\varepsilon_0)^2}{(1-\varepsilon_0)^2}. \] That is, $AR$ is very well-conditioned. Plugging this bound into \eqref{eq:CG accuracy}, after $m$ iterations $\norm{AR(x^{(m)} - x^*)}^2$ is at most $2\varepsilon_0^m$ times its starting value. Thus starting with a solution $x^{(0)}$ with relative error at most 1, and applying $1+\log(1/\varepsilon)$ iterations of a CG-like method with $\varepsilon_0 = 1/e$, the relative error is reduced to $\varepsilon$ and the work is $O((\nnz(A)+ r^2)\log(1/\varepsilon))$ (where we assume $d$ has been reduced to $r$, as in the leverage computation), plus the work to find $R$. 
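To make the preceding discussion concrete, here is a minimal NumPy/SciPy sketch of the preconditioning pipeline, under simplifying assumptions: a CountSketch-style sparse embedding stands in for $\Pi_1$, the preconditioned matrix $AR$ is formed explicitly (in practice $R$ would be applied through triangular solves), and SciPy's LSQR plays the role of the CG-like method. All sizes are arbitrary toy values.

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n, d = 60000, 30
# A poorly scaled matrix, so that the unpreconditioned problem is ill-conditioned.
A = rng.standard_normal((n, d)) * rng.uniform(0.01, 10.0, size=d)
b = A @ rng.standard_normal(d) + rng.standard_normal(n)

# Sparse (CountSketch-style) embedding Pi_1 with t = O(d^2) rows, applied in
# O(nnz(A)) time: one random bucket and random sign per row of A.
t = 40 * d * d
h = rng.integers(0, t, size=n)
sgn = rng.choice([-1.0, 1.0], size=n)
SA = np.zeros((t, d))
np.add.at(SA, h, sgn[:, None] * A)

# R is the inverse of the triangular factor of Pi_1 A, so that (Pi_1 A) R has
# orthonormal columns; AR is formed explicitly here only for the demo.
_, Rfac = np.linalg.qr(SA)
AR = A @ np.linalg.inv(Rfac)

print("cond(A)  =", np.linalg.cond(A))
print("cond(AR) =", np.linalg.cond(AR))

# CG-like solve on the well-conditioned system, then map back via x = R y.
y = lsqr(AR, b, iter_lim=50)[0]
x = np.linalg.solve(Rfac, y)

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative excess residual:",
      (np.linalg.norm(A @ x - b) - np.linalg.norm(A @ x_ls - b))
      / np.linalg.norm(A @ x_ls - b))
\end{verbatim}

The printout illustrates the point of the analysis: $\kappa(AR)$ is close to $1$ even when $\kappa(A)$ is large, so the iterative solver needs only a few iterations to reach a small relative error.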
We have \fi \begin{theorem}\label{thm:it reg} The $\ell_2$-regression problem can be solved up to a $(1+\varepsilon)$-factor with probability at least $2/3$ in \[ O(\nnz(A)\log (n/\varepsilon) + r^3 \log^2 r + r^2\log(1/\varepsilon)) \] time. \end{theorem} \ifSTOC\else Note that only the matrix $R$ from the leverage score computation is needed, not the leverage scores, so the $\nnz(A)$ term in the running time need not have a $\log(n)$ factor; however, since reducing $A$ to $r$ columns requires that factor, the resulting running time without that factor is $O(\nnz(A)\log(1/\varepsilon) + d^3\log^2 d + d^2\log(1/\varepsilon))$, depends on $d$. The matrix $AR$ is so well-conditioned that a simple iterative improvement scheme has the same running time up to a constant factor. Again start with a solution $x^{(0)}$ with relative error at most 1, and for $m\ge 0$, let $x^{(m+1)} \gets x^{(m)} + R^\top A^\top (b - ARx^{(m)})$. Then using the normal equations, \begin{align*} AR(x^{(m+1)} - x^*) & = AR(x^{(m)} + R^\top A^\top (b - AR x^{(m)}) - x^*) \\ & = (AR - ARR^\top A^\top AR ) (x^{(m)} - x^*) \\ & = U(\Sigma - \Sigma^3)V^\top (x^{(m)} - x^*), \end{align*} where $AR=U \Sigma V^\top$ is the SVD of $AR$. For all unit $x\in{\mathbb R}^d$, $\norm{ARx}^2 = (1\pm \varepsilon_0)^2$, and so we have that all singular values $\sigma_i$ of $AR$ are $1\pm\varepsilon_0$, and the diagonal entries of $\Sigma - \Sigma^3$ are all at most $\sigma_i(1- (1 - \epsilon_0)^2) \le \sigma_i 3\epsilon_0$ for $\epsilon_0\le 1$. Hence \begin{align*} \norm{AR(x^{(m+1)} - x^*)} & \le 3\varepsilon_0 \norm{AR(x^{(m)} - x^*)}, \end{align*} and by choosing $\epsilon_0= 1/2$, say, $O(\log(1/\varepsilon))$ iterations suffice for this scheme also to attain $\varepsilon$ relative error. This scheme can be readily extended to generalized (multiple-response) regression, using the iteration $X^{(m+1)} \gets X^{(m)} + R^\top A^\top (B - ARX^{(m)})$. The initialization cost then includes that of computing $A^\top B$, which is $O(\min\{ r\nnz(B), d'\nnz(A)\})$, where again $B\in {\mathbb R}^{n\times d'}$. The product $A^\top A$, used implicitly per iteration, could be computed in $O(r\nnz(A))$, and then applied per iteration in time $d'r^2$, or applied each iteration in time $d'\nnz(A)$. That is, this method is never much worse than CG-like methods, but comparable in running time when $d'< r$; when $d'>r$, it is a little worse in asymptotic running time than solving the normal equations. \fi \section{Low Rank Approximation}\label{sec:low rank} This section gives algorithms for low-rank approximation, understood using generalized regression analysis, as in earlier work such as \cite{s06,cw09}. Let $\Delta_k \equiv \normF{A-[A]_k}$, where $[A]_k$ denotes the best rank-$k$ approximation to $A$. We seek low-rank matrices whose distance to $A$ is within $1+\varepsilon$ of $\Delta_k$. While Theorem \ref{thm:main} and Theorem \ref{thm:jlmain} are stated in terms of specific constant probability of success, they can be re-stated and proven so that the failure probabilities are arbitrarily small, but still constant. In the following we'll assume that adjustments have been done, so that the sum of a fixed number of such failure probabilities is at most $1/5$. We will apply embedding matrices composed of products of such matrices, so we need to check that this operation preserves the properties we need. 
\begin{fact}\label{fact:prod compose} If $S \in {\mathbb R}^{t\times n}$ approximates matrix products and is a subspace embedding with error $\epsilon$ and failure probability $\delta_S $, and $\Pi\in {\mathbb R}^{\hat t\times t}$ approximates matrix products with error $\epsilon$ and failure probability $\delta_\Pi$, then $\Pi S $ approximates matrix products with error $O(\epsilon)$ and failure probability at most $\delta_S + \delta_\Pi$. \end{fact} \begin{proof} This follows from two applications of Lemma~\ref{lem:tail}, together with the observation that $\norm{S Ax} = (1\pm\epsilon)\norm{Ax}$ for basis vectors $x$ implies that $\norm{S A} = (1\pm\epsilon)\norm{A}$. \end{proof} \begin{fact}\label{fact:embed compose} If $S \in {\mathbb R}^{t\times n}$ is a subspace embedding with error $\epsilon$ and failure probability $\delta_S $, and $\Pi\in {\mathbb R}^{\hat t\times t}$ is a subspace embedding with error $\epsilon$ and failure probability $\delta_\Pi$, then $\Pi S $ is a subspace embedding with error $O(\epsilon)$ and failure probability at most $\delta_S + \delta_\Pi$. \end{fact} The following lemma implies a regression algorithm that is linear in $\nnz(A)$, but has a worse dependence in its additive term. \begin{lemma}\label{lem:genReg sparse} Let $A\in {\mathbb R}^{n\times d}$ of rank $r$, $B\in {\mathbb R}^{n\times d'}$, and $c\equiv d+d'$. For $\hat R\in {\mathbb R}^{t\times n}$ a sparse embedding matrix, $\Pi\in {\mathbb R}^{t'\times t}$ a sampled randomized Hadamard matrix, there is $t=O(r^2\log^6(r/\epsilon) + r\varepsilon^{-1})$ and $t'=O(r\varepsilon^{-1}\log(r/\varepsilon))$ such that for $R\equiv \Pi \hat R$, $\tilde X \equiv \argmin_X\norm{R(AX-B)}$ has $\norm{A\tilde X - B}\le (1+\varepsilon)\min_X\norm{AX-B}$. The operator $R$ can be applied in $O(\nnz(A)+\nnz(B) + tc\log t)$ time. \end{lemma} \begin{theorem}\label{thm:SVD} For $A\in{\mathbb R}^{n\times n}$, there is an algorithm that with failure probability $1/10$ finds matrices $L,W\in {\mathbb R}^{n\times k}$ with orthonormal columns, and diagonal $D \in{\mathbb R}^{k\times k}$, so that $\norm{A-LDW^\top} \le (1+\varepsilon)\Delta_k$. The algorithm runs in time \[ O(\nnz(A)) + \tilde O(nk^2\varepsilon^{-4} + k^3\varepsilon^{-5}). \] \end{theorem} \begin{proof} The algorithm is as follows: \begin{enumerate} \item Compute $AR^\top$ and an orthonormal basis $U$ for $C(AR^\top)$, where $R$ is as in Lemma~\ref{lem:genReg sparse} with $r=k$; \item Compute $SU$ and $SA$ for $S$ the product of a $v'\times v$ SRHT matrix with a $v\times n$ sparse embedding, where $v = \Theta(\varepsilon^{-4}k^2\log^6(k/\varepsilon))$ and $v' = \Theta(\varepsilon^{-3}k\log^2(k/\varepsilon))$. (Instead of this affine embedding construction, an alternative might use leverage score sampling, where even the weaker claim of Theorem~\ref{thm:genAff res} would be enough.) \item Compute the SVD of $SU = \tilde U\tilde \Sigma \tilde V^\top$; \item Compute the SVD $\hat U D W^\top$ of $\tilde V\tilde \Sigma^- [\tilde U^\top SA]_k$, where again $[Z]_k$ denotes the best rank-$k$ approximation to matrix $Z$; \item Return $L=U\hat U$, $D$, and $W$. \end{enumerate} {\bf Running time.} Computing $AR^\top$ in the first step takes $O(\nnz(A) + \tO(nk(k+\varepsilon^{-1}))$ time, and then $\tilde O(n(k/\varepsilon)^2)$ to compute the $n\times O(\varepsilon^{-1}k\log(k/\varepsilon))$ matrix $U$. Computing $SU$ and $SA$ requires $O(\nnz(A)) + \tO(n k^2\varepsilon^{-4})$ time. 
Computing the SVD of the $\tilde O(k\varepsilon^{-3}) \times \tilde O(k\varepsilon^{-1})$ matrix $SU$ requires $\tilde O(k^3\varepsilon^{-5})$ time. Computing $\tilde U^\top SA$ requires $\tilde O(nk^2\varepsilon^{-4})$ time. Computing the SVD of the $\tilde O(k\varepsilon^{-1})\times n$ matrix of the next step requires $\tilde O(nk^2\varepsilon^{-2})$ time, as does computing $U\hat U$. \paragraph{Correctness} Apply Lemma~\ref{lem:genReg sparse} with $A$ of that lemma mapping to $A_k^\top$ and $B$ mapping to $A^\top$. Taking transposes, this implies that with small fixed failure probability, $\tilde Y \equiv AR^\top (A_kR^\top)^-$ has \[ \norm{\tilde Y A_k - A} \le (1+\epsilon) \min_Y\norm{ YA_k - A} = (1+\epsilon)\Delta_k, \] and so \begin{align}\label{eq:AR good} \min_{X, \rank X=k} \norm{AR^\top X - A} & \le \norm{AR^\top (A_kR^\top)^-A_k - A} \nonumber \\ & \le (1+\epsilon)\Delta_k. \end{align} Since $U$ is a basis for $C(AR^\top)$, \[ (1+\varepsilon)\min_{X, \rank X=k} \norm{UX - A} \le (1+\varepsilon) \min_{X, \rank X=k} \norm{AR^\top X - A}. \] With the given construction of $S$, Theorem~\ref{thm:genAff res} applies (twice), with $AR^\top$ taking the role of $A$, and $A$ taking the role of $B$, so that $S$ is an $\varepsilon$-affine embedding, after adjusting constants. It follows that for $\tilde X\equiv\argmin_{X, \rank X=k} \norm{S(UX - A)}$, \begin{align*} \norm{U\tilde X - A} & \le (1+\varepsilon)\min_{X, \rank X=k} \norm{UX - A} \\ & \le (1+\varepsilon) \min_{X, \rank X=k} \norm{AR^\top X - A} \\ & \le (1+\varepsilon)^2\Delta_k, \end{align*} using \eqref{eq:AR good}. From Lemma 4.3 of \cite{cw09}, the solution to \[ \min_{X,\rank X=k}\norm{\tilde U X - SA} \] is $\hat X = [\tilde U^\top SA]_k$, where this denotes the best rank-$k$ approximation to $\tilde U^\top SA$. It follows that $\tilde X = \tilde V\tilde \Sigma^- \hat X$ is a solution to $\min_{X, \rank X=k} \norm{S(UX - A)}$. Moreover, the rank-$k$ matrix $U\tilde X = LDW^\top$ has $\norm{LDW^\top- A}\le (1+\varepsilon)^2\Delta_k$, and $L$, $D$, and $W$ have the properties promised. \end{proof} \section{$\ell_p$-Regression for any $1 \leq p < \infty$} \label{sec: ell_p} Let $A \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$ be a matrix and vector for the regression problem: $\min_x \norm{Ax-b}_p$. We assume $n > d$. Let $r$ be the rank of $A$. We show that with probability at least $2/3$, we can quickly find an $x'$ for which $$\norm{Ax'-b}_p \leq (1+\varepsilon)\min_x \norm{Ax-b}_p.$$ Here $p$ is any constant in $[1, \infty)$. This theorem is an immediate corollary of Theorem \ref{thm:jlmain} and the construction given in Section 3.2 of \cite{CDMMMW}, which shows how to solve $\ell_p$-regression given a subspace embedding (for $\ell_2$) as a black box. \ifSTOC The details are omitted in this version. \else We review the construction of \cite{CDMMMW} below for completeness. As in the proof of Theorem \ref{thm:icml}, in $O(\nnz(A) \log d) + O(r^3)$ time we can replace the input matrix $A$ with a new matrix with the same column space of $A$ and full column rank, where $r$ is the rank of $A$. We therefore assume $A$ has full rank in what follows. Let $w = \Theta(r^6 \log n (r + \log n))$ and assume $w \mid n$. Split $A$ into $n/w$ matrices $A_1, \ldots, A_{n/w}$, each $w \times r$, so that $A_i$ is the submatrix of $A$ indexed by the $i$-th block of $w$ rows.
We invoke Theorem \ref{thm:jlmain} with the parameters $n = w$, $r$, $\varepsilon = 1/2$, and $\delta = 1/(100n)$, choosing a generalized sparse embedding matrix $S$ with $t = O(r \log n(r + \log n))$ rows. Theorem \ref{thm:jlmain} has the guarantee that for each fixed $i$, $SA_i$ is a subspace embedding with probability at least $1-\delta$. It follows by a union bound that with probability at least $1- 1/(100w)$, for all $i \in [n/w]$, $SA_i$ is a subspace embedding. We condition on this event occurring. Consider the matrix $F \in{\mathbb R}^{nt/w \times n}$, which is a block-diagonal matrix comprising $n/w$ blocks along the diagonal. Each block is the $t \times w$ matrix $S$ given above. \[ F \equiv \left[ \begin{matrix} S & &\\ & S &&\\ && \ddots & \\ &&& S \\ \end{matrix} \right] \] We will need the following theorem. \begin{theorem}[Theorem 5 of \cite{ddhkm09}, restated] \label{thm:basisOld} Let $A$ be an $n \times r$ matrix, and let $p \in [1, \infty)$. Then there exists an $(\alpha, \beta, p)$-well-conditioned basis for the column space of $A$ such that if $p < 2$, then $\alpha = r^{1/2 + 1/p}$ and $\beta = 1$; if $p = 2$, then $\alpha = r^{1/2}$ and $\beta = 1$, and if $p > 2$ then $\alpha = r^{1/2 + 1/p}$ and $\beta = r^{1/2 - 1/p}$. An $r \times r$ change of basis matrix $U$ for which $A \cdot U$ is a well-conditioned basis can be computed in $O(nr^5 \log n)$ time. \end{theorem} \noindent The specific conditions satisfied by a well-conditioned basis are given (and used) in the proof of the theorem below. We use the following algorithm {\sf Condition}($A$) given a matrix $A \in \mathbb{R}^{n \times r}$: \begin{enumerate} \item Compute $FA$; \item Apply Theorem \ref{thm:basisOld} to $FA$ to obtain an $r \times r$ change of basis matrix $U$ so that $FAU$ is an $(\alpha, \beta, p)$-well-conditioned basis of the column space of matrix $FA$; \item Output $AU/(r\gamma_p)$, where $\gamma_p \equiv \sqrt{2} t^{1/p-1/2}$ for $p\le 2$, and $\gamma_p \equiv \sqrt{2} w^{1/2-1/p}$ for $p\ge 2$. \end{enumerate} The following lemma is the analogue of that in \cite{CDMMMW} proved for the Fast Johnson Lindenstrauss Transform. However, the proof in \cite{CDMMMW} only used that the Fast Johnson Lindenstrauss Transform is a subspace embedding. We state it here with our new parameters, \ifSTOC omitting the proof in this version. \else and give the analogous proof in the Appendix for completeness. \fi \begin{lemma}\label{lem:lpl2} With probability at least $1-1/(100w)$, the output $A U/(r\gamma_p)$ of {\sf Condition}$(A)$ is guaranteed to be a basis that is $(\alpha, \beta \sqrt{3} r (tw)^{|1/p-1/2|} , p)$-well-conditioned, that is, an $(\alpha, \beta \cdot {\mathrm{poly}}(\max(r, \log n)) , p)$-well-conditioned basis. The time to compute $U$ is $O(\nnz(A) \log n) + {\mathrm{poly}}(r \varepsilon^{-1})$. \end{lemma} The following text is from \cite{CDMMMW}; we state it here for completeness. A well-conditioned basis can be used to solve $\ell_p$ regression problems, via an algorithm based on sampling the rows of $A$ with probabilities proportional to the norms of the rows of the corresponding well-conditioned basis. For speed, this entails using a second projection $\Pi_2$, applied to $AU$ on the right, to estimate the row norms, where $\Pi_2$ can be an $O(r) \times O(\log n)$ matrix of i.i.d. normal random variables, which is the same as is done in \cite{dmmw11}.
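In code, this row-norm estimation step amounts to the following (a minimal NumPy sketch with arbitrary sizes, where the array called AU below stands for the already computed product $AU$; it is not the implementation of the algorithm):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, r, g = 5000, 30, 64              # g plays the role of O(log n)

AU = rng.standard_normal((n, r))    # placeholder for the n x r matrix A U

# Right multiplication by an r x g matrix with i.i.d. N(0, 1/g) entries gives,
# for each row of AU, an unbiased estimate of its squared Euclidean norm.
Pi2 = rng.standard_normal((r, g)) / np.sqrt(g)
est = np.linalg.norm(AU @ Pi2, axis=1)       # estimated l2 norms of the rows
exact = np.linalg.norm(AU, axis=1)
print(np.max(np.abs(est - exact) / exact))   # relative error roughly 1/sqrt(g)
\end{verbatim}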
This allows fast estimation of the $\ell_2$ norms of the rows of $AU$; however, we need the $\ell_p$ norms of those rows, which we thus know up to a factor of $r^{|1/2 - 1/p|}$. We use these norm estimates in the sampling algorithm of \cite{ddhkm09}; as discussed for the running time bound of that paper, Theorem 7, this algorithm samples a number of rows proportional to $r(\alpha\beta)^p$, when an $(\alpha, \beta, p)$-well-conditioned basis is available. This factor, together with a sample complexity increase of $r^{p|1/2-1/p|} = r^{|p/2-1|}$ needed to compensate for error due to using $\Pi_2$, gives a sample complexity increase for our algorithm over that of \cite{ddhkm09} of a factor of \[ [r^{|p/2 - 1|}]r^{p+1}(tw)^{|p/2 - 1|} = \max(r, \log n)^{O(p)}, \] while the leading term in the complexity (for $n\gg r$) is reduced from $O(nr^5\log n)$ to $O(\nnz(A)\log n)$. Observe that if $r < \log n$, then ${\mathrm{poly}}(r \varepsilon^{-1} \log n)$ is less than $n \log n$, which is $O(\nnz(A) \log n)$. Hence, the overall time complexity is $O(\nnz(A) \log n) + {\mathrm{poly}}(r \varepsilon^{-1})$. We adjust Theorem 4.1 of \cite{ddhkm09} and obtain the following. \fi \begin{theorem}\label{thm:lp-running} Given $\epsilon\in(0,1)$, a constant $p \in [1, \infty)$, $A\in{\mathbb R}^{n\times d}$ and $b\in{\mathbb R}^{n}$, there is a sampling algorithm for $\ell_p$ regression that constructs a coreset specified by a diagonal sampling matrix $D$, and a solution vector $\hat{x} \in {\mathbb R}^d$ that minimizes the weighted regression objective $\|D(Ax-b)\|_p$. The solution $\hat{x}$ satisfies, with probability at least $1/2$, the relative error bound that $\|A \hat{x}-b\|_p\le (1+\epsilon)\|Ax-b\|_p$ for all $x\in{\mathbb R}^d$. Further, with probability $1-o(1)$, the entire algorithm to construct $\hat{x}$ runs in time $$O\left(\nnz(A)\log n \right ) + {\mathrm{poly}}(r \varepsilon^{-1}).$$ \end{theorem} \ifSTOC\else \section{Preliminary Experiments}\label{sec:exper} Some preliminary experiments show that a low-rank approximation technique that is a simplified version of these algorithms is promising, and in practice may perform much better than the general bounds of our results. Here we apply the algorithm of Theorem~\ref{thm:SVD}, except that we skip the randomized Hadamard and simply use a sparse embedding $\hat R$ and leverage score sampling. We compare the Frobenius error of the resulting $LDW^\top$ with that of the best rank-$k$ approximation. In our experiments, the matrices tested are $n\times d$. The resulting low-rank approximation was tested for $t_R$ (the number of columns of $\hat R$) taking values of the form$\lfloor 1.6^z - 0.5\rfloor$, for integer $z\ge 1$, while $t_R\le d/5$. The number $t_S$ of rows of $S$ was chosen such that the condition number of $SU$ was at most $1.2$. (Since $U$ has orthogonal columns, its condition number is 1, so a large enough leverage score sample will have this property.) For such $t_R$ and $t_S$, we took the ratio $R_e$ of the Frobenius norm of the error to the Frobenius norm of the error of the best rank-$k$ approximation. The resulting points $(k/t_R, R_e-1)$ were generated, for all test matrices, for three independent trials, resulting in a set of points $P$. The test matrices are from the University of Florida Sparse Matrix Collection, essentially most of those with at most $10^5$ nonzero entries, and with $n$ up to about 7000. 
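Before giving aggregate results, the simplified pipeline just described can be sketched as follows (an illustrative NumPy version with exact SVDs, sampling with replacement, and arbitrary sizes; it is not the code used for the tests):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, d, k, tR, tS = 2000, 300, 10, 60, 400

# A test matrix that is approximately of rank k.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, d)) \
    + 0.01 * rng.standard_normal((n, d))

# Sparse embedding \hat R with tR rows; form A \hat R^T without forming \hat R.
h = rng.integers(0, tR, size=d)
sgn = rng.choice([-1.0, 1.0], size=d)
RAt = np.zeros((tR, n))
np.add.at(RAt, h, sgn[:, None] * A.T)        # = \hat R A^T
U, _ = np.linalg.qr(RAt.T)                   # orthonormal basis of C(A \hat R^T)

# Leverage score sampling: the scores of U are its squared row norms.
lev = (U ** 2).sum(axis=1)
p = lev / lev.sum()
idx = rng.choice(n, size=tS, p=p)
w = 1.0 / np.sqrt(tS * p[idx])
SU, SA = w[:, None] * U[idx], w[:, None] * A[idx]

# Rank-k solution X of min_X ||S(U X - A)||_F; U @ X is the approximation.
Ut, St, Vt = np.linalg.svd(SU, full_matrices=False)
B = Ut.T @ SA
Ub, Sb, Vb = np.linalg.svd(B, full_matrices=False)
Bk = (Ub[:, :k] * Sb[:k]) @ Vb[:k]           # best rank-k of \tilde U^T S A
LDWt = U @ (Vt.T @ (Bk / St[:, None]))

# Error ratio R_e against the best rank-k approximation of A.
U0, S0, V0 = np.linalg.svd(A, full_matrices=False)
best = (U0[:, :k] * S0[:k]) @ V0[:k]
print(np.linalg.norm(A - LDWt) / np.linalg.norm(A - best))
\end{verbatim}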
There were 1155 matrices tested, from 70 sub-collections of matrices, each such sub-collection representing a particular application area. The curve in Figure 1 represents the results of these tests, where for a particular point $(x,y)$ on the curve, at most one percent of points $(k/t_R, R_e-1)\in P$ gave a result where $k/t_R < x$ but $R_e-1 > y$. Figure~2 shows a similar curve for the points $(t_R/t_S, \cond(SU)-1)$; thus the necessary ratio $t_R/t_S$, so that $\cond(SU)\le 1.2$, as for the results in Figure~1, need be no smaller than about $1/110$. \begin{figure*} \caption{A ``1\%-Pareto'' curve of error as a function of the size of $\hat R$} \end{figure*} \begin{figure*} \caption{A ``1\%-Pareto'' curve of $\cond(SU)-1$ as a function of the size of $\hat S$ relative to $\hat R$} \end{figure*} \fi \ifSTOC\else \appendix \section{Deferred proofs} \begin{proofof}{of Lemma \ref{lem:betabound}} Since by assumption $A$ has orthonormal columns, $\normF{A(\tilde X - X^*)} = \normF{A^\top A(\tilde X - X^*)}$, so it suffices to bound the latter, or $\normF{\beta}$ where $\beta\equiv A^\top A(\tilde X - X^*)$. By Fact \ref{fact:normal}, we have \begin{equation}\label{eq:approxlsnormal} A^\top S ^\top S (A \tilde{X} - B) = 0. \end{equation} To bound $\normF{\beta}$, we bound $\normF{A^\top S ^\top S A \beta}$, and then show that this implies that $\normF{\beta}$ is small. Using that $A A^\top A = A$ and (\ref{eq:approxlsnormal}), we have \begin{align*} A^\top S ^\top S A \beta & = A^\top S ^\top S A A^\top A (\tilde{X} - X^*) \\ & = A^\top S ^\top S A (\tilde{X} - X^*) \\ & = A^\top S ^\top S A(\tilde{X} - X^*) + A^\top S ^\top S (B-A\tilde{X}) \\ & = A^\top S ^\top S (B - AX^*). \end{align*} Using the hypothesis of the theorem, \begin{align*} \normF{A^\top S ^\top S A \beta} = \normF{A^\top S ^\top S (B - A X^*)} \le \sqrt{\epsilon/r}\normF{A}\normF{B-A X^*} \le \sqrt{\epsilon}\normF{B - A X^*}. \end{align*} To show that this bound implies that $\normF{\beta}$ is small, we use the subadditivity of $\normF{}$ and the property of any conforming matrices $C$ and $D$, that $\normF{CD}\le \norm{C}_2\normF{D}$, to obtain \begin{align*} \normF{\beta} \le \normF{A^\top S ^\top S A \beta} + \normF{A^\top S ^\top S A \beta- \beta} \le \sqrt{\epsilon}\normF{B - A X^*} + \norm{A^\top S ^\top S A - I }_2 \normF{\beta}. \end{align*} By hypothesis, $\norm{S A x}^2 = (1\pm\epsilon_0)\norm{x}^2$ for all $x$, so that $A^\top S ^\top S A - I$ has eigenvalues bounded in magnitude by $\epsilon_0^2$, which implies singular values with the same bound, so that $\norm{A^\top S ^\top S A - I }_2\le \epsilon^2_0$. Thus $\normF{\beta}\le \sqrt{\epsilon}\normF{B - AX^*} + \epsilon^2_0\normF{\beta}$, or \begin{equation*} \normF{\beta} \le \sqrt{\epsilon}\normF{B - A X^*} / (1-\epsilon^2_0) \le 2\sqrt{\epsilon}\normF{B - A X^*}, \end{equation*} since $\epsilon^2_0\le 1/2$. This bounds $\normF{\beta}$, and so proves the lemma. \end{proofof} \begin{proofof}{of Lemma~\ref{lem:len fixed sparse}} Let $S = \Phi D$ with associated hash function $h:[n] \rightarrow [t]$. For $A_i$ denoting the $i$-th column of $A$, let $A_i(b)$ denote the column vector whose $\ell$-th coordinate is $0$ if $h(\ell) \neq b$, and whose $\ell$-th coordinate is $A_{\ell,i}$ if $h(\ell) = b$. We use the second moment method to bound $\|SA\|_F^2$.
For the expectation, \begin{eqnarray}\label{eqn:exp} {\bf E}_{D,h}[\|SA\|_F^2] = \sum_{i \in [d]} {\bf E}_{D,h}[\|SA_i\|_2^2] = \sum_{i \in [d]} \sum_{b \in [t]} {\bf E}_{D,h}[(\sum_{\ell \mid h(\ell) = b} A_{\ell, i} D_{\ell, \ell} )^2] = {\bf E}_h \left [\sum_{i \in [d]} \sum_{b \in [t]} \|A_i(b)\|_2^2 \right ] = \|A\|_F^2. \end{eqnarray} For the second moment, \begin{eqnarray}\label{eqn:var} {\bf E}_{D,h}[\|SA\|_F^4] = \sum_{i \in [d]} {\bf E}_{D,h}[\|SA_i\|_2^4] + \sum_{i \neq j \in [d]} {\bf E}_{D,h}[\|SA_i\|_2^2 \cdot \|SA_j\|_2^2]. \end{eqnarray} We handle the first term in (\ref{eqn:var}) as follows: \begin{eqnarray*} {\bf E}_{D,h} [\|SA_i\|_2^4] & = & {\bf E}_h \left [\sum_{b,b' \in [t]} {\bf E}_D [(SA_i)_b^2 \cdot (SA_i)_{b'}^2] \right ]\\ & = & {\bf E}_h \left [\sum_{b \in [t]} {\bf E}_D [(SA_i)_b^4] + \sum_{b \neq b' \in [t]} {\bf E}_D [(SA_i)_b^2] \cdot {\bf E}_D [(SA_i)_{b'}^2] \right ]\\ & = & {\bf E}_h \left [\sum_{b \in [t]} {\bf E}_D [(\sum_{\ell \mid h(\ell) = b} A_{\ell, i} D_{\ell, \ell})^4] + \sum_{b \neq b' \in [t]} {\bf E}_D [(\sum_{\ell \mid h(\ell) = b} A_{\ell, i} D_{\ell, \ell})^2] \cdot {\bf E}_D [(\sum_{\ell \mid h(\ell) = b'} A_{\ell, i} D_{\ell, \ell})^2]\right ]\\ & \leq & {\bf E}_h \left [\sum_{b \in [t]} \left (\sum_{\ell \mid h(\ell) = b} A^4_{\ell, i} + {4 \choose 2} \sum_{\ell < \ell' \mid h(\ell) = h(\ell') = b} A^2_{\ell, i} A^2_{\ell',i} \right ) + \sum_{b \neq b' \in [t]} \|A_i(b)\|_2^2 \cdot \|A_i(b')\|_2^2 \right ]\\ & \leq & {\bf E}_h \left [\|A_i\|_4^4 \right ]+ \frac{6}{t} \|A_i\|_2^4 + {\bf E}_h \left [\sum_{b \neq b' \in [t]} \|A_i(b)\|_2^2 \cdot \|A_i(b')\|_2^2 \right ]\\ & \leq & {\bf E}_h \left [\sum_{b \in [t]} \|A_i(b)\|_2^4 \right ]+ \frac{6}{t} \|A_i\|_2^4 + {\bf E}_h \left [\sum_{b \neq b' \in [t]} \|A_i(b)\|_2^2 \cdot \|A_i(b')\|_2^2 \right ]\\ & \leq & \frac{6}{t} \|A_i\|_2^4 + \|A_i\|_2^4. 
\end{eqnarray*} For the second term in (\ref{eqn:var}), for $i \neq j \in [d]$, \begin{eqnarray*} {\bf E}_{D,h}[\|SA_i\|_2^2 \cdot \|SA_j\|_2^2] & = & {\bf E}_{D,h} \left [\sum_{b \in [t]} \left (\sum_{\ell \mid h(\ell) = b} A_{\ell, i} D_{\ell, \ell} \right )^2 \left (\sum_{\ell' \mid h(\ell') = b} A_{\ell', j} D_{\ell', \ell'} \right )^2 \right ]\\ && + {\bf E}_{D,h} \left [\sum_{b \neq b' \in [t]} \left (\sum_{\ell \mid h(\ell) = b} A_{\ell, i} D_{\ell, \ell} \right )^2 \left (\sum_{\ell' \mid h(\ell') = b'} A_{\ell', j} D_{\ell', \ell'} \right )^2 \right ]\\ & = & {\bf E}_h \left [\sum_{b \in [t]} \left (\sum_{\ell, \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, i} A_{\ell',i} D_{\ell, \ell} D_{\ell', \ell'} \right ) \left (\sum_{\ell, \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, j} A_{\ell',j} D_{\ell, \ell} D_{\ell', \ell'} \right ) \right ]\\ && + {\bf E}_h \left [\sum_{b \in [t]} \|A_i(b)\|_2^2 \cdot \|A_j(b)\|_2^2 + \sum_{b \neq b' \in [t]} \|A_i(b)\|_2^2 \cdot \|A_j(b')\|_2^2 \right ]\\ & = & \|A_i\|_2^2 \cdot \|A_j\|_2^2 + {\bf E}_h \left [\sum_{b \in [t]} 4 \sum_{\ell < \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, i} A_{\ell', i} A_{\ell, j} A_{\ell', j} \right ], \end{eqnarray*} where the constant $4$ arises because if we choose indices $\ell < \ell'$ from $\left (\sum_{\ell, \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, i} A_{\ell',i} D_{\ell, \ell} D_{\ell', \ell'} \right )$ we need to choose the same $\ell$ and $\ell'$ from $\left (\sum_{\ell, \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, j} A_{\ell',j} D_{\ell, \ell} D_{\ell', \ell'} \right )$ in order to have a non-zero expectation, and there are $4$ ways of doing this for distinct $\ell, \ell'$. Continuing, \begin{eqnarray*} \|A_i\|_2^2 \cdot \|A_j\|_2^2 + {\bf E}_h \left [\sum_{b \in [t]} 4 \sum_{\ell < \ell' \mid h(\ell) = h(\ell') = b} A_{\ell, i} A_{\ell', i} A_{\ell, j} A_{\ell', j} \right ] & \leq & \|A_i\|_2^2 \cdot \|A_j\|_2^2 + {\bf E}_h \left [4\sum_{b \in [t]}\langle A_i(b), A_j(b) \rangle^2 \right ]\\ & \leq & \|A_i\|_2^2 \cdot \|A_j\|_2^2 + {\bf E}_h \left [4\sum_{b \in [t]} \|A_i(b)\|_2^2 \cdot \|A_j(b')\|_2^2 \right ]\\ & = & \|A_i\|_2^2 \cdot \|A_j\|_2^2 + \frac{4}{t} \sum_{\ell, \ell' \in [n]} A_{\ell, i}^2 A_{\ell, j}^2\\ & = & \left(1 + \frac{4}{t} \right )\|A_i\|_2^2 \cdot \|A_j\|_2^2. \end{eqnarray*} Combining (\ref{eqn:exp}) with (\ref{eqn:var}) and the bounds on the terms in (\ref{eqn:var}) above, \begin{eqnarray*} {\bf Var}[\|SA\|_F^2] & \leq & \left (\sum_{i \in [d]} \frac{6}{t} \|A_i\|_2^4 + \|A_i\|_2^4 \right ) + \sum_{i \neq j \in [d]} \left(1 + \frac{4}{t} \right )\|A_i\|_2^2 \cdot \|A_j\|_2^2 - \|A\|_F^2\\ & \leq & \frac{6}{t} \|A\|_F^2\\ & = & \frac{6}{t} {\bf E}[\|SA\|_F^2]. \end{eqnarray*} The lemma now follows by Chebyshev's inequality, for appropriate $t = \Omega(\varepsilon^{-2})$. \end{proofof} \begin{proofof}{of Lemma~\ref{lem:len fixed srht}} Lemma 15 of \cite{BG} shows that $\norm{SA} \le (1+\varepsilon)\norm{A}$ with arbitrarily low failure probability, and the other direction follows from a similar argument. Briefly: the expectation of $\norm{SA}^2$ is $\norm{A}^2$, by construction, and Lemma 11 of \cite{BG} implies that with arbitrarily small failure probability, all rows of $SA$ will have squared norm at most $\beta \equiv \frac{\alpha}{t}\norm{A}^2$, where $\alpha$ is a value in $O(\log n)$. 
Assuming that this bound holds, it follows from Hoeffding's inequality that the probability that $| \norm{SA}^2 - \norm{A}^2 | \ge \varepsilon \norm{A}^2$ is at most $2\exp(-2[\varepsilon\norm{A}^2]^2 / t\beta^2)$, or $2\exp(-2\varepsilon^2t/\alpha^2)$, so that $t = \Theta(\varepsilon^{-2}(\log n)^2 )$ suffices to make the failure probability at most $1/10$. \end{proofof} \begin{proofof}{of Lemma \ref{lem:lpl2}} This is almost exactly the same as in \cite{CDMMMW}, we simply adjust notation and parameters. Applying Theorem \ref{thm:jlmain}, we have that with probability at least $1-1/(100w)$, for all $x \in \mathbb{R}^r$, if we consider $y = Ax$ and write $y^T = [z_1^T, z_2^T, \ldots, z_{n/w}^T]$, then for all $i \in [n/w]$, \begin{eqnarray*} \sqrt{\textstyle\frac12}\norm{z_i}_2\le\norm{Sz_i}_2\le \sqrt{\textstyle\frac32}\norm{z_i}_2 \end{eqnarray*} By relating the $2$-norm and the $p$-norm, for $1 \leq p \leq 2$, we have $$\norm{Sz_i}_p \le t^{1/p-1/2}\norm{Sz_i}_2 \le t^{1/p-1/2} \sqrt{\textstyle\frac32} \norm{z_i}_2 \le t^{1/p-1/2} \sqrt{\textstyle\frac32} \norm{z_i}_p,$$ and similarly, $$\norm{Sz_i}_p \ge \norm{Sz_i}_2 \ge \sqrt{\textstyle\frac12}\norm{z_i}_2 \ge \sqrt{\textstyle\frac12} w^{1/2-1/p}\norm{z_i}_p. $$ If $p > 2$, then $$ \norm{Sz_i}_p \le \norm{Sz_i}_2 \le \sqrt{\textstyle\frac32} \norm{z_i}_2 \le \sqrt{\textstyle\frac32} w^{1/2 - 1/p} \norm{z_i}_p,$$ and similarly, $$ \norm{Sz_i}_p \ge t^{1/p-1/2}\norm{Sz_i}_2 \ge t^{1/p-1/2} \sqrt{\textstyle\frac12} \norm{z_i}_2 \ge t^{1/p-1/2} \sqrt{\textstyle\frac12} \norm{z_i}_p. $$ Since $\norm{Ax}_p^p = \norm{y}_p^p = \sum_i \norm{z_i}^p$ and $\norm{FAx}_p^p = \sum_i \norm{Sz_i}_p^p$, for $p\in [1,2]$ we have with probability $1-1/(100w)$ \[ \sqrt{\textstyle\frac12} w^{1/2-1/p}\norm{Ax}_p \le \norm{FAx}_p \le \sqrt{\textstyle\frac32} t^{1/p-1/2} \norm{Ax}_p, \] and for $p\in [2,\infty)$ with probability $1-1/(100w)$ \[ \sqrt{\textstyle\frac12} t^{1/p-1/2} \norm{Ax}_p \le \norm{FAx}_p \le \sqrt{\textstyle\frac32} w^{1/2 - 1/p} \norm{Ax}_p. \] In either case, \begin{equation}\label{eqn:first} \norm{Ax}_p \le \gamma_p \norm{FAx}_p \le \sqrt{3} (tw)^{|1/p-1/2|}\norm{Ax}_p. \end{equation} Applying Theorem \ref{thm:basisOld}, we have, from the definition of a $(\alpha,\beta,p)$-well-conditioned basis, that \begin{eqnarray}\label{eqn:second} \|FA U \|_p \leq \alpha \end{eqnarray} and for all $x \in \mathbb{R}^d$, \begin{eqnarray}\label{eqn:third} \|x\|_q \leq \beta \|FAU\|_p. \end{eqnarray} Combining (\ref{eqn:first}) and (\ref{eqn:second}), we have that with probability at least $1-1/(100w)$, \begin{eqnarray*} \|A U/(r\gamma_p) \|_p \leq \sum_i \|A U_i/r\gamma_p \|_p \leq \sum_i \|F A U_i/r\|_p \leq \alpha. \end{eqnarray*} Combining (\ref{eqn:first}) and (\ref{eqn:third}), we have that with probability at least $1-1/(100w)$, for all $x \in \mathbb{R}^r$, \begin{eqnarray*} \|x\|_q \leq \beta \|F A U x\|_p \leq \beta \sqrt{3} r (tw)^{|1/p-1/2|} \|A U\frac{1}{r\gamma_p} x\|_p. \end{eqnarray*} Hence $A U/(r\gamma_p)$ is an $(\alpha, \beta \sqrt{3} r (tw)^{|1/p-1/2|} , p)$-well-conditioned basis. The time to compute $F A$ is $O(\nnz(A) \log n)$ by Theorem \ref{thm:jlmain}. Notice that $FA$ is an $nt/w \times n$ matrix, which is $O(n/r^5) \times r$, and so the time to compute $U$ from $F A$ is $O((n/r^5) r^5 \log n) = O(\nnz(A) \log n)$, since $\nnz(A) \geq n$. \end{proofof} \fi \end{document}
arXiv
Is there a limit as to how fast a black hole can grow? Astronomers find ancient black hole 12 billion times the size of the Sun. According to the article above, we observe this supermassive black hole as it was 900 million years after the formation of the universe, and scientists find its extreme specifications mysterious because of the relatively young age of the Universe at that time. Why would the 12 billion solar mass value be mysterious, unless there was a limit of sorts to the rate of mass consumption by a black hole? (naive point: Why would 900 million years not suffice for this much accumulation, keeping in mind that most supermassive stars which form black holes have life-spans of a few tens of millions of years at most?) general-relativity classical-mechanics black-holes astrophysics Hritik Narayan The accretion of matter onto a compact object cannot take place at an unlimited rate. There is a negative feedback caused by radiation pressure. If a source has a luminosity $L$, then there is a maximum luminosity - the Eddington luminosity - which is where the radiation pressure balances the inward gravitational forces. The size of the Eddington luminosity depends on the opacity of the material. For pure ionised hydrogen and Thomson scattering $$ L_{Edd} = 1.3 \times 10^{31} \frac{M}{M_{\odot}}\ W$$ Suppose that material fell onto a black hole from infinity and was spherically symmetric. If the gravitational potential energy was converted entirely into radiation just before it fell beneath the event horizon, the "accretion luminosity" would be $$L_{acc} = \frac{G M_{BH}}{R}\frac{dM}{dt},$$ where $M_{BH}$ is the black hole mass, $R$ is the radius from which the radiation is emitted (must be greater than the Schwarzschild radius) and $dM/dt$ is the accretion rate. If we say that $L_{acc} \leq L_{Edd}$ then $$ \frac{dM}{dt} \leq 1.3 \times10^{31} \frac{M_{BH}}{M_{\odot}} \frac{R}{GM_{BH}} \simeq 10^{11}\ R\ kg/s \sim 10^{-3} \frac{R}{R_{\odot}}\ M_{\odot}/yr$$ Now, not all the GPE gets radiated, some of it could fall into the black hole. Also, whilst the radiation does not have to come from near the event horizon, the radius used in the equation above cannot be too much larger than the event horizon. However, the fact is that material cannot just accrete directly into a black hole without radiating; because it has angular momentum, an accretion disc will be formed and will radiate away lots of energy - this is why we see quasars and AGN - thus both of these effects must be small numerical factors and there is some maximum accretion rate. To get some numerical results we can absorb our uncertainty as to the efficiency of the process and the radius at which the luminosity is emitted into a general ignorance parameter called $\eta$, such that $$L_{acc} = \eta c^2 \frac{dM}{dt}$$ i.e. what fraction of the rest mass energy is turned into radiation. Then, equating this to the Eddington luminosity we have $$\frac{dM}{dt} = (1-\eta) \frac{1.3\times10^{31}}{\eta c^2} \frac{M}{M_{\odot}}$$ which gives $$ M = M_{0} \exp[t/\tau],$$ where $\tau = 4\times10^{8} \eta/(1-\eta)$ years (often termed the Salpeter (1964) growth timescale). The problem is that $\eta$ needs to be pretty big in order to explain the luminosities of quasars, but this also implies that they cannot grow extremely rapidly.
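To put rough numbers on this, the growth law above can be evaluated directly; the short Python sketch below (the seed mass, target mass and efficiencies are just illustrative choices) counts how many e-foldings of growth are available in 900 million years:

    import numpy as np

    t_available = 9.0e8                # years of growth assumed available
    M_seed, M_target = 1.0e3, 1.2e10   # solar masses: illustrative seed and target

    efolds_needed = np.log(M_target / M_seed)   # about 16.3

    for eta in (0.05, 0.10, 0.30):
        tau = 4.0e8 * eta / (1.0 - eta)         # Salpeter timescale in years
        efolds_available = t_available / tau
        print(eta, tau, efolds_available, efolds_available > efolds_needed)

The larger the assumed efficiency, the fewer e-foldings fit into the available time, which illustrates how the efficiency required to explain quasar luminosities and the growth rate required to reach the observed mass pull in opposite directions.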
I am not fully aware of the arguments that surround the work you quote, but depending on what you assume for the "seed" of the supermassive black hole, you may only have a few to perhaps 10 e-folding timescales to get you up to $10^{10}$ solar masses. I guess this is where the problem lies. $\eta$ needs to be very low to achieve growth rates from massive stellar black holes to supermassive black holes, but this can only be achieved in slow-spinning black holes, which are not thought to exist! A nice summary of the problem is given in the introduction of Volonteri, Silk & Dubus (2014). These authors also review some of the solutions that might allow Super-Eddington accretion and shorter growth timescales - there are a number of good ideas, but none has emerged as a front-runner yet. Rob Jeffries Good answer. I would just note that "speculative" means that we aren't sure which details are right, not that we have no good ideas. Overcoming Eddington is easy in principle -- just break spherical symmetry, letting matter flow inward in some places and radiation flow outward elsewhere. It's not like accretion disks are spherically symmetric anyway. – user10851 Feb 26 '15 at 20:54 @ChrisWhite Of course. But most such get-outs are small numerical factors, not the order(s) of magnitude required. But you are correct - no shortage of ideas. – Rob Jeffries Feb 26 '15 at 21:03 The Eddington radiation would keep out gas. I don't see how it could stop heavy infalling objects, though--say an area of really massive stars that left behind neutron stars and black holes. Or even galactic mergers. – Loren Pechtel Feb 27 '15 at 3:05 @LorenPechtel You are right, though I have not heard that suggested as a solution. I think the problem with the idea is that you need most of the gas to have already turned into stars in the first 900 million years. This sounds like an even bigger problem than growing the black hole. It takes most galaxies much longer to assemble even a fraction of their gas into stars. – Rob Jeffries Feb 27 '15 at 7:06 @HritikNarayan Well you still have to grow the smaller black holes. So you have a slightly smaller individual growth timescale, but then you have to factor in some sort of collisional timescale. I don't think there is ever a problem in explaining one particular object in a variety of ways; but there are actually a population of these things. – Rob Jeffries Feb 27 '15 at 15:06 A 12 billion Solar mass black hole sounds massive, but actually it's not all that big. The radius of the event horizon is given by: $$ r_s = \frac{2GM}{c^2} $$ and for a 12 billion Solar mass black hole this works out to be about $3.5 \times 10^{13}$ m. This seems big, but it's only about 0.004 light years. For comparison, the radius of the Milky Way is 50,000 to 60,000 light years, so the black hole is only about 0.000007% of the size of the Milky Way. Black holes can't just suck in stars. A star orbiting in a galaxy has an orbital angular momentum, and it can't dive into the centre of the galaxy where the black hole is unless it can shed that angular momentum. In fact, given what a small target a 0.004 light year black hole makes, a star would have to shed almost all its angular momentum to hit the event horizon. But shedding angular momentum is hard because angular momentum is conserved.
You can't just make angular momentum disappear, you have to transfer it to something else. Typically a star does this by interacting with other stars. Generally speaking, in an interaction the more massive star emerges with less angular momentum and the lighter star with a higher angular momentum. This process is known as dynamical friction. And all this takes time. The interactions are random and you need lots of them. Interactions are far more frequent in the central bulge of galaxies than out where we are in the suburbs, but even so the surprise is that there has been enough time for billions of stars to hit the black hole and merge with it. John Rennie Plausible, but not relevant. – Rob Jeffries Feb 26 '15 at 18:36 The reason I say it cannot be relevant is that we have known for some considerable length of time that it is possible for AGN/quasars to have huge luminosities that require them to be fed by huge amounts of mass at a rapid rate. So funnelling huge quantities of matter into a small volume does not appear to be a major obstacle. The real difficulty is in growing the black hole because the Eddington rate is smaller for smaller black holes and the seeds for SMBH cannot have been more than of order 1000 solar masses. Radiation pressure is likely what limits the growth of a black hole. – Rob Jeffries Feb 26 '15 at 19:30 Interesting answer, although I do agree with @RobJeffries – Hritik Narayan Feb 27 '15 at 7:23 A better comparison for the size of the event horizon might be 120 AU (which is about four times Neptune's semi-major axis). – Raidri Feb 27 '15 at 12:11
CommonCrawl
\begin{document} \title{Global existence in the Lipschitz class for the N-Peskin problem} \begin{abstract} In this paper we study a toy model of the Peskin problem that captures the motion of the full Peskin problem in the normal direction and discards the tangential elastic stretching contributions. This model takes the form of a fully nonlinear scalar contour equation. The Peskin problem is a fluid-structure interaction problem that describes the motion of an elastic rod immersed in an incompressible Stokes fluid. We prove global in time existence of solution for initial data in the critical Lipschitz space. Using a new decomposition together with cancellation properties, pointwise methods allow us to obtain the desired estimates in the Lipschitz class. Moreover, we perform energy estimates in order to obtain that the solution lies in the space $L^2 \pare{ \bra{0,T};H^{3/2} }$ to satisfy the contour equation pointwise. \end{abstract} {\small \tableofcontents} \allowdisplaybreaks \section{Introduction} The two-dimensional Peskin problem \cite{peskin1,peskin2} is a fluid-solid interaction problem that describes the flow of a viscous incompressible fluid in a region containing immersed boundaries. These immersed boundaries move with the fluid and exert forces on the fluid itself. An example of such a boundary is the flexible leaflet of a human heart valve. The immersed boundary method was initially formulated by Peskin to study flow patterns around heart valves \cite{peskin1}. This method was later developed to solve other fluid-structure interaction problems appearing in many different applications in physics, biology and medical sciences\cite{peskin2}. The distinguishing feature of this method was that the entire simulation was carried out on a Cartesian grid, and a novel procedure was formulated for imposing the effect of the immersed boundary on the flow. More concretely, we consider the scenario where there is a elastic rod immersed in Stokes flow. Consequently, the filament, described by the simple, closed curve \begin{equation*} \Gamma(t)=\set{X(s,t)=\pare{ X_1(s,t),X_2(s,t) },\ s\in\mathbb{S}^1 }, \end{equation*} drives the fluid and generates the flow, while the flow pushes the rod and changes its shape. This curve separates the plane into two different regions, the outer region $\Omega^-(t)$ and the inner region $\Omega^+(t)$. \\ Mathematically, when the tension is $T(\alpha)$ and the elastic force density takes the form \begin{equation*} F(X)=\partial_s\left(T(|\partial_s X|)\frac{\partial_s X}{|\partial_s X|}\right), \end{equation*} the Peskin problem reads (see \cite{rodenberg20182d} for more details) \begin{subequations}\label{peskin} \begin{align} -\Delta u^\pm&=-\nabla p^\pm && \text{ in }\Omega^\pm(t)\\ \nabla \cdot u^\pm&=0 && \text{ in }\Omega^\pm(t)\\ \jump{u}&=0 && \text{ on }\Gamma(t)\\ \jump{\left(\nabla u+\nabla u^T-p\text{Id}\right)n}&=\frac{F(X)}{|\partial_s X|} && \text{ on }\Gamma(t)\\ \partial_t X&=u && \text{ on }\Gamma(t), \end{align} \end{subequations} where $n$ denotes the outward pointing unit normal to the free boundary $\Gamma(t)$ and $$ \jump{U}=U^+-U^-. 
$$ In the particular case where each infinitesimal segment of the rod behaves like a Hookean spring with elasticity coefficient equal to 1, we have that $T(\alpha)=\alpha$ and \eqref{peskin} provides \begin{subequations}\label{peskin2} \begin{align} -\Delta u^\pm&=-\nabla p^\pm && \text{ in }\Omega^\pm(t)\\ \nabla \cdot u^\pm&=0 && \text{ in }\Omega^\pm(t)\\ \jump{u}&=0 && \text{ on }\Gamma(t)\\ \jump{\left(\nabla u+\nabla u^T-p\text{Id}\right)n}&=\frac{\partial_s^2 X}{|\partial_s X|} && \text{ on }\Gamma(t)\\ \partial_t X&=u && \text{ on }\Gamma(t). \end{align} \end{subequations} There is a large literature in the numerical analysis and applied mathematics communities for this problem. However, the works developing the theory for the PDEs \eqref{peskin2} are still scarce. On one hand Lin \& Tong \cite{LT19} proved a local existence result for arbitrary $H^{5/2}$ initial data. Furthermore, they also proved the global existence and exponential decay towards equilibrium for $H^{5/2}$ initial data near certain particular configurations. Mori, Rodenberg \& Spirn proved in \cite{MRS19} a local well-posedness result for \eqref{peskin2} for initial data of arbitrary size in the \emph{little H\"older} space $h^{1,\gamma}, \ \gamma > 0$. In addition, these authors also proved that the solution becomes $\mathcal{C}^n$ for arbitrary $n$ in arbitrarily short amount of time, and that the above unique local solutions are global and decay exponentially toward a uniformly distributed circle of positive radius when the initial data is small in the $ h^{1, \gamma} $ topology. The authors of \cite{MRS19} proved the above results taking advantage of the contour dynamics formulation of the problem \eqref{peskin2}. Indeed, if we drop the $t$ from the notation the system \eqref{peskin2} can be equivalently written as the following nonlinear and nonlocal system of 2 equations for $X$ \cite{LT19,MRS19}: \begin{align} \label{peskin2contoura} \partial_t X(s)=\textnormal{p.v.}\int_{\mathbb{S}^1} G(X(s)-X(\sigma))\partial_\sigma^2 X(\sigma) \textnormal{d} \sigma. \end{align} where the kernel $G$ is the so-called Stokeslet $$ G\pare{z}= \frac{1}{4\pi}\left( -\log \av{z} \ I + \frac{z\otimes z}{\av{z}^2} \right). $$ Very recently, Garcia-Juarez, Mori \& Strain \cite{garcia2020peskin} proved a global well-posedness result for the Peskin problem when two fluids with different viscosities are considered. Their result applies for medium size initial interfaces in critical spaces akin to the Wiener algebra and shows instant analytic smoothing. As noted before \cite{LT19,MRS19}, the Peskin problem has certain similarities with the Muskat problem (see \cite{alazard2020convexity,alazard2020endpoint,cameron2018global,castro2012rayleigh,cordoba2011interface,cheng2016well,gancedo2017survey,gancedo2020global,granero2020growth, matioc2018muskat,matioc2018viscous,nguyen2020paradifferential, GGS19, scrobogna2020well} and the references therein) \begin{subequations}\label{Muskat} \begin{align} u^\pm&=-\nabla p^\pm -\rho^\pm(0,1) && \text{ in }\Omega^\pm(t)\\ \nabla \cdot u^\pm&=0 && \text{ in }\Omega^\pm(t)\\ \jump{p}&=0 && \text{ on }\Gamma(t)\\ \partial_t x &=u\cdot n && \text{ on }\Gamma(t). \end{align} \end{subequations} First of all, both free boundary problems can be written as contour equations akin to \eqref{peskin2contoura}. 
Indeed, the Muskat problem when the fluids are separated by the graph of the function $x(s,t)\in\mathbb{R}$ can be written as \begin{align}\label{peskin2contourab} \partial_t x(s)=\textnormal{p.v.}\int_{\mathbb{S}^1} K\pare{ x(s)-x(\sigma) }\pare{ x'(s)-x'(\sigma) } \textnormal{d} \sigma, \end{align} where the kernel $K$ is a nonlinear version of the Hilbert transform \cite{cordoba2011interface}. Also, both systems have a natural energy balance; in the case of the Muskat problem, the energy law reads $$ -\jump{\rho}\|x(T)\|_{L^2(\mathbb{S}^1)}^2+2\int_0^T\|u(t)\|_{L^2(\mathbb{S}^1\times\mathbb{R})}^2\textnormal{d} t= -\jump{\rho}\|x_0\|_{L^2(\mathbb{S}^1)}^2 , $$ while for the Peskin problem, the energy balance is $$ \|X'(T)\|_{L^2(\mathbb{S}^1)}^2+2\int_0^T\|\nabla u(t)\|_{L^2(\mathbb{R}^2)}^2\textnormal{d} t= \|X'_0\|_{L^2(\mathbb{S}^1)}^2. $$ In both cases the energy balance is too weak to, just by itself, provide us with global existence of weak solutions. Some other similarities appear at the linear level, but before stating them we need to introduce some notation. Let us recall the definition of the periodic Hilbert transform \begin{equation*} \mathcal{H} f \pare{s} = \frac{1}{2\pi} \textnormal{p.v.}\int_{\mathbb{S}^1} \cot\pare{\alpha / 2} f \pare{s-\alpha} \textnormal{d} \alpha. \end{equation*} Then we define the Lambda operator $ \Lambda f = \mathcal{H} \partial_s f $. With this notation we observe that the linearized Peskin problem (around the unit circle) is \cite[Lemma 6.2]{LT19} \begin{align} \label{eq:linear_Peskin} \partial_t \mathcal{Y} = - \frac{1}{4} \ \Lambda \mathcal{Y} + \frac{1}{4}\pare{\begin{array}{cc} 0 & -\mathcal{H} \\ \mathcal{H} & 0 \end{array}} \mathcal{Y}, \end{align} while the linear Muskat problem equals \begin{equation} \label{eq:linear_Muskat} \partial_t y = \frac{\jump{\rho}}{2} \Lambda y. \end{equation} Then we notice that, despite their numerous similarities, the Peskin problem and the Muskat problem are rather different, and this difference is already clear at the linear level. The first stark difference can be immediately deduced by comparing \eqref{eq:linear_Peskin} with \eqref{eq:linear_Muskat}; while \eqref{eq:linear_Muskat} has a diffusive operator that behaves well for $ \dot{W}^{k, \infty}, k\in \mathbb{N} $ functions, the linearized Peskin problem \eqref{eq:linear_Peskin} is necessarily more challenging in the same functional setting due to the unboundedness of the Hilbert transform $ \mathcal{H} $ in $ L^\infty $ and the coupling of both unknowns in the system. This problem is also present in the toy model that we study in the present manuscript and will be handled by noticing that, denoting with $ J $ the symplectic matrix, the term $ -J\mathcal{H} \mathcal{Y} $ appearing on the right hand side of \eqref{eq:linear_Peskin} codifies an inertial displacement. Furthermore, if we decouple the linear Peskin problem we find additional differences. Indeed, if we take a time derivative, we find that $$ \partial_t^2 \mathcal{Y}^1 =-\frac{1}{4} \Lambda \partial_t\mathcal{Y}^1-\frac{1}{4} \mathcal{H} \partial_t\mathcal{Y}^2=-\frac{1}{4} \Lambda \partial_t\mathcal{Y}^1-\frac{1}{4} \mathcal{H} \left(-\frac{1}{4} \Lambda \mathcal{Y}^2+\frac{1}{4}\mathcal{H} \mathcal{Y}^1\right). $$ Taking a space derivative of the equation for $\mathcal{Y}^1$ we obtain that $$ \partial_t \partial_s\mathcal{Y}^1 +\frac{1}{4} \Lambda \partial_s \mathcal{Y}^1=-\frac{1}{4}\Lambda \mathcal{Y}^2.
$$ Substituting the latter expression we conclude that $$ \partial_t^2 \mathcal{Y}^1 =-\frac{1}{4} \Lambda \partial_t\mathcal{Y}^1-\frac{1}{4} \mathcal{H} \left(\partial_t \partial_s\mathcal{Y}^1 +\frac{1}{4} \Lambda \partial_s \mathcal{Y}^1+\frac{1}{4}\mathcal{H} \mathcal{Y}^1\right). $$ Using the properties of the Hilbert transform for zero-mean functions we find the linear Klein-Gordon-like equation $$ \partial_t^2 \mathcal{Y} +\frac{1}{2} \Lambda \partial_t\mathcal{Y}=\frac{1}{16} \partial_s^2\mathcal{Y}+\frac{1}{16} \mathcal{Y} $$ for each component. This equation is very different from the parabolic equation for the Muskat problem. On the other hand, in the Peskin problem it is not possible to reparameterize the contour equation at our convenience while keeping the same nonlinear dynamics. While the reparameterization freedom in free boundary problems for incompressible fluids has been extensively used to help deal with the nonlinear structure of nonlocal equations, the Peskin problem is sensitive to reparameterizations in the sense that concentration of particles in the rod affects the elastic dynamics. Thus, different reparameterizations give rise to different dynamics and can converge to different steady states as time goes to infinity \cite{LT19,MRS19,garcia2020peskin}. Nevertheless, the right hand side of the nonlocal system \eqref{peskin2contoura} is invariant with respect to translations, so that the appropriate time-dependent translation $M(t)$ allows us to control some linear contributions which arise in the dynamics of the linearized version of \eqref{peskin2contoura}. Additionally, the Peskin problem lacks a divergence form structure. This is another rather big difference from the Muskat problem and makes passing to the limit in the weak formulation a rather delicate issue. To overcome this challenge we will use energy estimates in $\dot{H}^1$. This energy estimate will give the parabolic effect which is necessary to pass to the limit in the weak formulation. In order to better understand the mathematical subtleties and challenges of the Peskin problem, in this work we consider a scalar model of the Peskin problem (see equation \eqref{eq:model} below). This toy model, which from now on we call the \emph{N-Peskin} problem, takes the form of a fully nonlinear contour equation and shares most of the difficulties mentioned above but discards the contributions to the motion due to tangential elastic stretching of the rod. The study of scalar toy models in fluid dynamics is a classical research area that goes back to the work of Constantin-Lax-Majda \cite{CLM1985}. The plan of the paper is as follows: In Section \ref{sec2} we present our main result and the methodology. Furthermore, we also introduce there our new formulation of the Peskin problem. In Section \ref{sec3}, we state several pointwise bounds for singular integral operators. In Section \ref{sec:W1inftydecay} we prove the \emph{a priori} estimates showing the decay in the Lipschitz norm. Later, in Section \ref{sec:H1inftydecay} we prove the \emph{a priori} estimates in Sobolev spaces. These estimates are lower order but allow us to use the parabolic gain of regularity. In Section \ref{sec:path} we prove the estimates for the time derivative of the solution. These estimates are needed to ensure the compactness required to pass to the limit in the weak formulation. Finally, in Section \ref{sec7} we prove the main result of this paper.
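Before fixing the notation, let us record an elementary consequence of the linear Klein-Gordon-like equation above, which we state only as an illustration and do not use in the proofs. Writing the equation for the $n$-th Fourier coefficient, $n\neq 0$, we obtain the characteristic polynomial
$$
\lambda^2 + \frac{\av{n}}{2}\,\lambda + \frac{n^2-1}{16} = 0,
$$
whose roots are real and equal to $\lambda_\pm = \frac{-\av{n}\pm 1}{4}$. In particular, every mode with $\av{n}\geq 2$ is damped at a rate proportional to $\av{n}$, the modes with $\av{n}= 1$ admit a neutral direction, which reflects the translation invariance of the problem that the time-dependent translation $M(t)$ is designed to absorb, and no oscillations appear despite the second order structure in time.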
\subsection{Notation} We denote with $ C $ any positive constant whose value is independent of the physical parameters of the problem, the explicit value of $ C $ may vary from line to line. We write $ A\lesssim B $ if $ A\leq C B $ and $ A\sim B $ if $ A\lesssim B $ and $ B\lesssim A $. We denote by $ \mathcal{P} \in \mathcal{C}^\infty\pare{\left[0, 1 \right) ; \mathbb{R}_+} $ any universal function such that $ \mathcal{P}\pare{0} \geq 0 $ and such that for any $ y\in \bra{0, 1/2} $ there exists a $ N $ for which the bound $ \mathcal{P}\pare{y}\leq C\pare{1+y}^N $ holds true. The explicit value of $ \mathcal{P} $ may vary from line to line. The one dimensional torus, i.e. the interval $ \bra{-\pi, \pi} $ endowed with periodic boundary conditions, is denoted by $ \mathbb{S}^1 $. Given any $ f\in \mathcal{C}^1\pare{\mathbb{S}^1} $ we denote with $ f' $ the covariant derivative of $ f $ onto $ \mathbb{S}^1 $ endowed with the euclidean metric, and $ f^{(k)} $ denotes the operator $ \cdot ' $ iterated $ k $ times. We define $ \Lambda f = \mathcal{H} f' $ and we recall that such operator can be expressed as the Fourier multiplier $ \widehat{\Lambda f}\pare{n} = \av{n} \hat{f}\pare{n} $. We can thus define the Sobolev spaces of fractional order (here $ \mathcal{S} $ denotes the periodic Schwartz class and $ \mathcal{S}_0 $ the periodic Schwartz class with zero average) \begin{align*} H^s\pare{\mathbb{S}^1} = \set{f\in \mathcal{S}' \ \left| \ \pare{1+\Lambda}^s f \in L^2 \right. }, && \dot{H}^s\pare{\mathbb{S}^1} = \set{f\in \mathcal{S}'_0 \ \left| \ \Lambda^s f \in L^2 \right. }, \end{align*} for any $ s \in \mathbb{R} $. For any $ \pare{p, k}\in\bra{1, \infty}\times \mathbb{N} $ we denote with \begin{align*} W^{k, p}\pare{\mathbb{S}^1} = \set{f\in\mathcal{S}' \ \left| \ f, f^{(k)}\in L^p\pare{\mathbb{S}^1} \right. }, && \dot{W}^{k, p}\pare{\mathbb{S}^1} = \set{f\in\mathcal{S}'_0 \ \left| \ f^{(k)}\in L^p\pare{\mathbb{S}^1} \right. }. \end{align*} We use the notation \begin{align*} L^p = L^p\pare{\mathbb{S}^1}, && H^s = H^s\pare{\mathbb{S}^1}, && W^{k, p} = W^{k, p}\pare{\mathbb{S}^1}, \end{align*} for functional spaces defined on the one-dimensional torus. Additionally, we use the simplified notation \begin{equation*} \int \bullet \ \textnormal{d} s=\text{p.v.}\int_{\mathbb{S}^1} \bullet \ \textnormal{d} s =\text{p.v.}\int_{-\pi}^\pi \bullet \ \textnormal{d} s , \end{equation*} in order to indicate Cauchy principal value integrals on the one-dimensional torus. \section{Main result and methodology}\label{sec2} \subsection{Derivation of the N-Peskin problem} \label{sec:h-M} As we have seen before, the Peskin problem can be written as the following contour equations \begin{equation}\label{eq:Peskin} \tag{P} \begin{aligned} \partial_t X\pare{s, t} & = \int G\pare{X\pare{s, t} - X\pare{\sigma , t}} X''\pare{\sigma, t} \textnormal{d} \sigma, \\ G\pare{z} & = G_1(z)+G_2(z),\\ G_{1} \pare{z} & = -\frac{1 }{4\pi} \log\av{z}\ I, \\ G_{2} \pare{z} & = \frac{1}{4\pi} \frac{z\otimes z}{\av{z}^{2}}=\frac{1}{4\pi|z|^2}\left(\begin{array}{cc}z_1^2 & z_1z_2\\ z_1z_2 & z_2^2\end{array}\right). \end{aligned} \end{equation} In this section we present the model of the Peskin problem that we consider in this work. To simplify the notation we write $ \gamma = \gamma \pare{s} = \pare{ \cos s, \sin s} $ and $Y\pare{s, t} = \ \pare{1 + h\pare{s, t}} \gamma\pare{s}$. 
Then let us suppose that \begin{equation} \label{eq:ansatz} \begin{aligned} X\pare{s, t} = & \ M\pare{t} + Y\pare{s, t}, \end{aligned} \end{equation} where we define the point $M(t)$ as the solution of the following ODE in terms of $h(s,t)$ \begin{equation}\label{eq:M} \ddt{M}\pare{t} = \frac{1}{4} \frac{1}{2\pi}\int h\pare{s, t} \pare{ \cos (s), \sin (s)} \ds . \end{equation} Roughly speaking, we use this $M(t)$ to control the inertial effects of the system. Mathematically, this unknown is required in order to absorb a low order nonlocal linear contribution akin to the first Fourier mode. With the ansatz \eqref{eq:ansatz} the evolution equation \eqref{eq:Peskin} becomes \begin{equation*} \partial_t Y\pare{s, t} + \ddt{M}\pare{t} = \int G\pare{Y \pare{s, t} - Y\pare{\sigma , t}} Y''\pare{\sigma, t} \textnormal{d} \sigma. \end{equation*} We can further compute \begin{equation*} \gamma\pare{s} \partial_t h\pare{s} + \ddt{M} \pare{t} = \int G \pare{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}} \bra{ \gamma\pare{\sigma}\pare{ h''\pare{\sigma} -1-h\pare{\sigma} } + 2 \gamma'\pare{\sigma} h'\pare{\sigma} } \textnormal{d} \sigma . \end{equation*} We write \begin{align*} I_1 \pare{s} & = \int G_{1}\pare{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}} \bra{ \gamma\pare{\sigma}\pare{ h''\pare{\sigma} - 1 - h\pare{\sigma} } + 2 \gamma'\pare{\sigma} h'\pare{\sigma} } \textnormal{d} \sigma, \\ I_2 \pare{s} & = \int G_{2} \pare{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}} \bra{ \gamma\pare{\sigma}\pare{h''\pare{\sigma} - 1 - h\pare{\sigma} } + 2 \gamma'\pare{\sigma} h'\pare{\sigma} } \textnormal{d} \sigma , \end{align*} so that \begin{equation}\label{eq:eveqh1} \begin{aligned} \gamma\pare{s} \partial_t h\pare{s} & = I_1 \pare{s} + I_2 \pare{s} - \ddt{M} \pare{t}. \end{aligned} \end{equation} So far, \eqref{eq:M} and \eqref{eq:eveqh1} are a new formulation of the full Peskin problem. This new $h-M$ formulation is the starting point for the derivation of our scalar N-Peskin. Taking the scalar product of \eqref{eq:eveqh1} with $ \gamma\pare{s} $, we derive the \emph{scalar} evolution equation \begin{equation}\label{eq:model} \partial_t h\pare{s, t} = \gamma\pare{s} \cdot I_1 \pare{s, t} + \gamma\pare{s} \cdot I_2 \pare{s, t} - \gamma\pare{s}\cdot \ddt{M} \pare{t}. \end{equation} We propose equation \eqref{eq:model} as a scalar model of the full Peskin problem. However, as the tangential velocity is neglected in our approach we name this equation the N-Peskin problem. Let us simplify \eqref{eq:model}. Using classical trigonometric identities, the first term can be explicitly written as: \begin{multline}\label{eq:computation_I1} \gamma\pare{s} \cdot I_1 \pare{s} \\ = \int -\frac{1 }{4\pi} \log\pare{\av{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma\pare{\sigma}}} \bra{ \cos\pare{s-\sigma}\pare{ h''\pare{\sigma} - 1 - h\pare{\sigma} } + 2 \sin\pare{s-\sigma} h'\pare{\sigma} } \textnormal{d} \sigma\\ = \int -\frac{1 }{8\pi} \log\pare{\av{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma\pare{\sigma}}^2} \partial_\sigma^2\bra{ \cos\pare{s-\sigma}\pare{ 1+ h\pare{\sigma} }}\textnormal{d} \sigma . \end{multline} Let us now simplify the expression of the second kernel $G_2$. 
We have that \begin{multline*} \gamma \pare{s} \cdot G_{2} \pare{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}} \cdot \gamma \pare{\sigma} \\ \begin{aligned} = & \ \frac{1}{4\pi} \frac{\gamma_i \pare{s} \pare{\pare{1+h\pare{s}}\gamma_i \pare{s} - \pare{1+h\pare{\sigma}}\gamma_i \pare{\sigma}} \pare{\pare{1+h\pare{s}}\gamma_j \pare{s} - \pare{1+h\pare{\sigma}}\gamma_j \pare{\sigma}} \gamma_j \pare{\sigma}}{\av{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}}^2} , \\ = & \ \frac{1}{4\pi} \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{\sigma}} \cos \pare{s-\sigma}} \bra{\pare{1+h\pare{s}} \cos \pare{s-\sigma} - \pare{1+h\pare{\sigma}}}}{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{\sigma}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{\sigma}} \cos\pare{s-\sigma} } , \end{aligned} \end{multline*} and \begin{multline*} \gamma \pare{s} \cdot G_{2} \pare{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}} \cdot \gamma' \pare{\sigma} \\ \begin{aligned} = & \ \frac{1}{4\pi} \frac{\gamma_i \pare{s} \pare{\pare{1+h\pare{s}}\gamma_i \pare{s} - \pare{1+h\pare{\sigma}}\gamma_i \pare{\sigma}} \pare{\pare{1+h\pare{s}}\gamma_j \pare{s} - \pare{1+h\pare{\sigma}}\gamma_j \pare{\sigma}} \gamma_j' \pare{\sigma}}{ \av{\pare{1+h\pare{s}}\gamma \pare{s} - \pare{1+h\pare{\sigma}}\gamma \pare{\sigma}}^2 }, \\ = & \ \frac{1}{4\pi} \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{\sigma}} \cos \pare{s-\sigma}} \pare{1+h\pare{s}} \sin \pare{s-\sigma} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{\sigma}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{\sigma}} \cos\pare{s-\sigma} } , \end{aligned} \end{multline*} After the change of variables $ \sigma = s-\alpha $ we find that \begin{multline*} \gamma\pare{s} \cdot I_2 \pare{s} \\ = \frac{1}{4\pi} \int \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos\alpha } \ \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \textnormal{d} \alpha \\ + \frac{1}{2\pi} \pare{1+h\pare{s}} \int \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \sin \alpha }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos\alpha } h'\pare{s-\alpha} \textnormal{d} \alpha. \end{multline*} Collecting the previous expressions and changing variables, we conclude the following scalar equation for $h$: \begin{equation} \label{eq:eveqh2} \begin{aligned} & \ \partial_t h\pare{s} + \gamma\pare{s}\cdot \ddt{M} \\ = & - \frac{1}{8\pi}\int \Bigg\{ \log \pare{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos\alpha} \ \partial_\alpha^2 \bra{ \big. 
\cos\alpha \pare{1 + h\pare{s-\alpha}} } \Bigg\} \textnormal{d} \alpha \\ & +\frac{1}{4\pi} \int \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos\alpha } \ \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \textnormal{d} \alpha \\ & + \frac{1}{2 \pi} \pare{1+h\pare{s}} \int \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \sin \alpha }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos\alpha } h'\pare{s-\alpha} \textnormal{d} \alpha. \end{aligned} \end{equation} Then, we define the following notion of weak solution \begin{definition}\label{definition1} We say that $h$ is a weak solution of the N-Peskin problem \eqref{eq:eveqh2} if the following equality holds $$ -\int \varphi(s,0)h_{0}(s)\textnormal{d} s +\int_0^T\int -\pare{ \partial_t \varphi (s,t) h (s,t) + \frac{1}{4} \Lambda\varphi (s,t) h(s,t) - \mathcal{N}\pare{h(s,t)}\varphi(s,t) } \textnormal{d} s\textnormal{d} t=0,$$ for all $\varphi\in C^\infty_{c}([ 0,T)\times \mathbb{S}^1)$, where $ \mathcal{N} $ is the nonlinearity \begin{equation*} \mathcal{N}\pare{h\pare{s, t}} - \frac{1}{4}\Lambda h\pare{s, t} + \gamma\pare{s}\cdot \ddt{M}\pare{t} = \text{r.h.s. of \eqref{eq:eveqh2}}. \end{equation*} \end{definition} \subsection{The linear $h-M$ formulation of the Peskin problem} \label{sec:linearized_h} To better understand the role of $M(t)$ and the reason behind its definition through the aforementioned ODE, we are going to compute the he linearized Peskin problem in the $h-M$ formulation. The linear Peskin problem for arbitrary curves can be expressed as \eqref{eq:linear_Peskin}. In the radial configuration we have that \begin{align} \label{eq:cYtor} \mathcal{Y}_1\pare{s} = r\pare{s,t} \ \cos s , && \mathcal{Y}_2\pare{s} = r\pare{s,t} \ \sin s, \end{align} where $$ r(s,t)=1+h(s,t). $$ Thus, multiplying \eqref{eq:linear_Peskin} by $ \gamma $, we obtain that \begin{align*} \partial_t r(s,t)&= - \frac{1}{4}\bra{ \cos (s) \ \Lambda \pare{r(s,t) \cos(s) } + \sin (s) \ \Lambda \pare{r(s,t)\sin(s) } \big. } \\&\quad+ \frac{1}{4}\bra{ -\cos (s) \ \mathcal{H} \pare{r(s,t) \sin(s) } + \sin (s) \ \mathcal{H} \pare{r(s,t)\cos(s) } \big. }\\& = L_1 + L_2. 
\end{align*}
Dropping the $t$ from the notation, we compute that
\begin{align*} - \frac{1}{4} \cos (s) \ \Lambda \pare{r \cos } \pare{s} = & \ -\frac{1}{8\pi} \cos (s) \ \int \cot\pare{\alpha/2} \pare{r\pare{s-\alpha} \cos\pare{s-\alpha}}' \textnormal{d} \alpha , \\ = & \ \frac{1}{8\pi} \cos (s) \ \int \cot\pare{\alpha/2} \partial_\alpha \pare{r\pare{s-\alpha} \cos\pare{s-\alpha} - r\pare{s} \cos s} \textnormal{d} \alpha , \\ = & \ \frac{1}{8\pi} \cos^2 (s) \ \int \frac{1}{2\sin^2\pare{\alpha / 2}} \pare{r\pare{s-\alpha} \cos\alpha - r\pare{s}} \textnormal{d} \alpha , \\ & \ + \frac{1}{8\pi} \sin (s) \ \cos (s) \ \int \frac{1}{2\sin^2\pare{\alpha / 2}} r\pare{s-\alpha}\sin(\alpha) \ \textnormal{d} \alpha , \end{align*}
and
\begin{align*} - \frac{1}{4} \sin (s) \ \Lambda \pare{r \sin } \pare{s} = & \ -\frac{1}{8\pi} \sin (s) \ \int \cot\pare{\alpha/2} \pare{r\pare{s-\alpha} \sin\pare{s-\alpha}}' \textnormal{d} \alpha , \\ = & \ \frac{1}{8\pi} \sin (s) \ \int \cot\pare{\alpha/2} \partial_\alpha \pare{r\pare{s-\alpha} \sin\pare{s-\alpha} -r\pare{s} \sin s} \textnormal{d} \alpha , \\ = & \ \frac{1}{8\pi} \sin^2 (s) \ \int \frac{1}{2\sin^2\pare{\alpha / 2}} \pare{r\pare{s- \alpha}\cos\alpha - r\pare{s}} \textnormal{d} \alpha , \\ & \ -\frac{1}{8\pi} \sin (s) \cos (s) \ \int \frac{1}{2\sin^2\pare{\alpha / 2}} r\pare{s-\alpha}\sin(\alpha) \ \textnormal{d} \alpha. \end{align*}
As a consequence we obtain
\begin{equation*} L_1 = \frac{1}{4}\frac{1}{2\pi}\int \frac{1}{2\sin^2\pare{\alpha / 2}} \pare{r\pare{s- \alpha}\cos\alpha - r\pare{s}} \textnormal{d} \alpha. \end{equation*}
In a similar fashion we compute
\begin{equation*} L_2 = \frac{1}{2}\frac{1}{2\pi} \int \cos^2\pare{\alpha/2} \ r\pare{s-\alpha} \textnormal{d} \alpha. \end{equation*}
Summing up these two expressions, substituting $ r = 1+ h $, and noting that the constant contributions of the two integrals cancel each other, we conclude that
\begin{equation}\label{eq:evolution_linearization_h} \partial_t h \pare{s} = - \frac{1}{4} \frac{1}{2\pi} \int \frac{h\pare{s} - h\pare{s-\alpha}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha + \frac{1}{4} \frac{1}{2\pi} \int h\pare{s-\alpha} \cos \alpha \ \textnormal{d} \alpha. \end{equation}
We see now that in the $h-M$ formulation the Peskin problem is parabolic at the linear level with a nonlocal zeroth-order forcing term. Moreover, we can compute
$$ \ddt{M}\pare{t}\cdot \gamma(s) = \frac{1}{4}\frac{1}{2\pi} \int h\pare{\alpha, t} \pare{ \cos (\alpha), \sin (\alpha)} \cdot \pare{\cos (s), \sin (s)} \textnormal{d} \alpha = \frac{1}{4}\frac{1}{2\pi} \int h\pare{\alpha, t} \cos (s-\alpha) \textnormal{d} \alpha. $$
As a consequence, we also realize that the ODE for $M(t)$ is designed to absorb some of the linear contributions.
\subsection{Main result}
It is known \cite[Section 5]{MRS19} that the stationary solutions of \eqref{eq:Peskin} are circles. As a consequence, the equilibrium configurations are determined by the center and radius of the stationary circle. Without loss of generality we assume in what follows that the radius of the equilibrium circle equals one (different values can be handled similarly). The purpose of this paper is to establish the global existence and decay to equilibrium for \eqref{eq:eveqh2} in the case of Lipschitz initial data $X_0(s)\in W^{1,\infty}(\mathbb{S}^1)$ sufficiently close to an equilibrium configuration. In particular, the following theorem is the main result of the present manuscript:
\begin{theorem}\label{teo1}Let $h_0(s)\in W^{1,\infty}(\mathbb{S}^1)$ be the initial data for \eqref{eq:eveqh2}.
There exists a universal constant $c_0\ll1$ such that if $h_0$ satisfies
\begin{align} \label{eq:average_relation} & \av{h_0}_{W^{1,\infty}(\mathbb{S}^1)} \leq c_0, \end{align}
then there exists a global in time weak solution of \eqref{eq:eveqh2} in the sense of Definition \ref{definition1} which belongs to the energy space
\begin{align*} h\in L^\infty \pare{ [0,T);W^{1,\infty}\pare{\mathbb{S}^1} }\cap C\pare{ [0,T);H^1\pare{\mathbb{S}^1} }\cap L^2 \pare{ [0,T);H^{3/2}\pare{\mathbb{S}^1} }, && \forall \ T\in\pare{0, \infty}. \end{align*}
Furthermore, for any $ 0< t < T $ we have that
\begin{equation}\label{eq:convergence_to_zero} \av{h(t)}_{W^{1,\infty}(\mathbb{S}^1)} \leq \av{h_0}_{W^{1,\infty}(\mathbb{S}^1)} , \end{equation}
and
\begin{equation*} \av{h'(t)} _{L^\infty}\leq \av{h'_0}_{L^\infty} e^{-\delta t}, \end{equation*}
for a small enough $\delta(h_0)>0$. \end{theorem}
We remark that \eqref{eq:Peskin} is invariant with respect to the transformation
\begin{align*} X_\lambda(s,t)=\frac{1}{\lambda}X(\lambda s,\lambda t), && \lambda\in\mathbb{Z}^+ , \end{align*}
thus
$$ L^\infty\pare{ 0,T; \dot{W}^{1,\infty}\pare{ \mathbb{S}^1 } } $$
is critical with respect to the previous scaling.
\subsection{Methodology}
Let us explain the main ideas behind Theorem \ref{teo1} using a simpler equation. We consider the following equation:
$$ \partial_t f+f'\Lambda f+\Lambda f=0. $$
Using pointwise methods as in \cite{cordoba2004maximum,cordoba2009maximum}, we can obtain the following bound
$$ \ddt{ \av{f}_{W^{1,\infty}}}\leq0, $$
for initial data such that
$$ \av{f_0}_{W^{1,\infty}}<1. $$
From this we conclude the \emph{a priori} estimates in the Lipschitz class. For equations in divergence form, such estimates would lead to the global existence of weak solutions via a vanishing viscosity type argument \cite{CCGS12,granero2014global}. However, for equations in non-divergence form, it is not obvious how to translate the previous bound into the global existence of weak solutions. When comparing the Peskin and the Muskat problems, we see that this is an additional challenge that is inherent to the Peskin problem. To overcome this difficulty, we perform an additional $H^1$ energy estimate. A careful study of the nonlinearity will give us the appropriate bounds. Indeed, we have that
\begin{align*} -\int f''f'\Lambda f \textnormal{d} s &=\frac{1}{2}\int (f')^2\Lambda f' \textnormal{d} s=\frac{1}{2}\iint(f'(s))^2\frac{\left(f'(s)-f'(\sigma)\right)}{|s-\sigma|^2} \textnormal{d} \sigma \textnormal{d} s=\frac{1}{2}\iint(f'(\sigma))^2\frac{\left(f'(\sigma)-f'(s)\right)}{|s-\sigma|^2} \textnormal{d} \sigma \textnormal{d} s\\ &=\frac{1}{4}\iint((f'(s))^2-(f'(\sigma))^2)\frac{\left(f'(s)-f'(\sigma)\right)}{|s-\sigma|^2} \textnormal{d} \sigma \textnormal{d} s=\frac{1}{4}\iint(f'(s)+f'(\sigma))\frac{\left(f'(s)-f'(\sigma)\right)^2}{|s-\sigma|^2} \textnormal{d} \sigma \textnormal{d} s\\ &\leq \frac{\av{f'}_{L^\infty}}{2} \ \av{f'}_{\dot{H}^{1/2}}^2. \end{align*}
Thus, both estimates combined lead to a bound in
$$ f\in L^2\pare{ \bra{0,T};H^{3/2} }. $$
With this parabolic effect we have the strong convergence of $f', \Lambda f$ and this allows us to pass to the limit in the weak formulation. The proof of Theorem \ref{teo1} can then be summarized as follows:
\begin{enumerate}
\item Using pointwise methods we conclude that, for initial data close enough to the equilibrium, the solution decays in the Lipschitz norm. The purpose of Proposition \ref{prop:W1infty_enest} is to prove the previous claim.
The decay in $W^{1,\infty}$ is a crucial point in the argument as it will allow us to obtain the parabolic gain of regularity. Furthermore, the exponential decay of this norm ensures that the point $M(t)$ remains uniformly in a ball for every $0<t<\infty$.
\item The decay in the Lipschitz norm is then used to find a global estimate in
$$ L^2 \pare{ \bra{0,T} ;H^{3/2} }. $$
\item We invoke the parabolic gain of regularity obtained before to conclude the strong convergence of the derivative.
\end{enumerate}
\section{Pointwise estimates for the $\Lambda$ operator} \label{sec3}
In this section we collect some pointwise estimates for the fractional Laplacian that will be used in the sequel and that may be of independent interest. We start with a lemma that compares the operator $\Lambda$ and the Hilbert transform:
\begin{lemma}\label{lem:monotonicity_lemma} Let $ f$ be a smooth function and define $ \bar{s},\underline{s} \in \mathbb{S}^1 $ such that
\begin{align*} f'\pare{\bar{s}} = \max_{s\in\mathbb{S}^1} f'(s), && f'\pare{\underline{s}} = \min_{s\in\mathbb{S}^1}f'(s). \end{align*}
Then
\begin{align*} \Lambda f'\pare{\bar{s}} - \Lambda f\pare{\bar{s}} \geq 0, && \Lambda f'\pare{\underline{s}} - \Lambda f\pare{\underline{s}}\leq 0. \end{align*}
\end{lemma}
\begin{proof} We know that
\begin{align*} \Lambda f' \pare{s} = & \ \frac{1}{2\pi} \int \frac{f'\pare{s} - f'\pare{s-\alpha}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha, \\ \Lambda f \pare{s} = & \ -\frac{1}{2\pi} \int \cot \pare{\alpha/2} \pare{f'\pare{s} - f'\pare{s-\alpha}} \textnormal{d} \alpha, \end{align*}
so that
\begin{equation*} \Lambda f' \pare{s} - \Lambda f \pare{s} = \frac{1}{2\pi}\int \frac{1+\sin(\alpha)}{2\sin^2\pare{\alpha/ 2}} \ \pare{f'\pare{s} - f'\pare{s-\alpha}} \textnormal{d} \alpha, \end{equation*}
and the claim follows since the integration kernel is nonnegative and
\begin{align*} f'\pare{\bar{s}} - f'\pare{\bar{s}-\alpha} \geq 0, && f'\pare{\underline{s}} - f'\pare{\underline{s}-\alpha} \leq 0 . \end{align*}
\end{proof}
Furthermore, we observe that for zero-mean functions we have the following Poincar\'e-type pointwise inequalities (see \cite{ascasibar2013approximate} for instance)
\begin{align}\label{eq:Lambda_Linfty} f\pare{\bar{s}}\leq C \ \Lambda f\pare{\bar{s} }, && -f\pare{\underline{s}}\leq -C \ \Lambda f\pare{\underline{s} }. \end{align}
Arguing as in the proof of Lemma \ref{lem:monotonicity_lemma}, we can prove the following result:
\begin{lemma} \label{lem:monotonicity_lemma2} Let $ f$ be a smooth function and define $ \bar{s},\underline{s} \in \mathbb{S}^1 $ such that
\begin{align*} f'\pare{\bar{s}} = \max_{s\in\mathbb{S}^1} f'(s), && f'\pare{\underline{s}} = \min_{s\in\mathbb{S}^1}f'(s). \end{align*}
Let $ b=b\pare{\alpha} \geq 0 $ for every $ \alpha\in \mathbb{S}^1 $ and let us define the operators
\begin{align*} \Lambda_b f\pare{s} & = \frac{1}{2\pi} \int \frac{b\pare{\alpha}}{2 \sin^2\pare{\alpha / 2}} \pare{f' \pare{s} - f' \pare{s-\alpha}} \textnormal{d}\alpha, \\ \mathcal{H}_b f\pare{s} & = -\frac{1}{2\pi} \int \frac{b\pare{\alpha}}{\tan\pare{\alpha / 2}} \pare{f'\pare{s} - f'\pare{s-\alpha}} \textnormal{d}\alpha, \end{align*}
then we have
\begin{align*} \Lambda_b f\pare{\bar{s}} - \mathcal{H}_b f\pare{\bar{s}} \geq 0, && \Lambda_b f\pare{\underline{s}} - \mathcal{H}_b f\pare{\underline{s}} \leq 0. \end{align*}
\end{lemma}
Finally, let us provide an alternative expression for $ \Lambda f' $ when $ f $ is $ \mathcal{C}^2\pare{\mathbb{S}^1} $. This expression will be very useful when performing the pointwise estimates.
We know that
\begin{equation*} \Lambda f'\pare{s} = \frac{1}{2\pi} \int \frac{f'\pare{s} - f'\pare{s-\alpha}}{2\sin^2 \pare{\alpha / 2}} \textnormal{d} \alpha = \frac{1}{2\pi} \int \frac{\partial_\alpha \bra{f'\pare{s} \alpha - \pare{ f\pare{s} - f\pare{s-\alpha} }}}{2\sin^2 \pare{\alpha / 2}} \textnormal{d} \alpha. \end{equation*}
Integrating by parts and exploiting the regularity $ f \in \mathcal{C}^2\pare{\mathbb{S}^1} $, we obtain that
\begin{equation} \label{eq:Lambda_alternative} \Lambda f'\pare{s} = \frac{1}{2\pi} \int \frac{f'\pare{s}\alpha -\pare{f \pare{s} - f \pare{s-\alpha}}}{2\sin^3\pare{\alpha/2}} \cos\pare{\alpha/2} \ \textnormal{d}\alpha. \end{equation}
\section{\emph{A priori} estimates in $ W^{1, \infty} $}\label{sec:W1inftydecay}
Let us introduce some notation that will simplify the exposition. For a smooth function $h$ and any $ n\in \mathbb{N} $ let us denote the applications $ t \mapsto \overline{s^n_t} $ and $ t \mapsto \underline{s^n_t} $ such that
\begin{align*} h^{\pare{ n }} \pare{\overline{s^n_t} , t} = \max _s \set{ h^{\pare{ n }} \pare{s , t}}=\max_s \set{ \partial_s^n h} \pare{s , t}, && h^{\pare{ n }} \pare{\underline{s^n_t} , t} = \min_s \set{h^{\pare{ n }} \pare{s , t}}=\min _s \set{ \partial_s^n h} \pare{s , t}. \end{align*}
Let us define the auxiliary functions
\begin{align}\label{eq:auxiliary_variables} r\pare{s} = 1+h\pare{s}, && \theta\pare{s, s-\alpha} = h\pare{s}-h\pare{s-\alpha}, && \eta \pare{s, s-\alpha, \alpha} = h\pare{s} - h\pare{s-\alpha}\cos\alpha, \end{align}
thus we have that
\begin{align*} 1+h\pare{s-\alpha} = r-\theta , && h'\pare{s-\alpha} = -\partial_\alpha h\pare{s-\alpha} = \partial_\alpha \theta. \end{align*}
Finally, we denote
\begin{equation} \label{eq:notation_bar_0} \begin{aligned} \bar{\theta} & = \theta \pare{\overline{s^0_t}, \overline{s^0_t}-\alpha}, & \bar{\eta} & = \eta \pare{\overline{s^0_t}, \overline{s^0_t}-\alpha}, & \bar{r} & = r\pare{\overline{s^0_t}}\\ \underline{\theta} & = \theta \pare{\underline{s^0_t}, \underline{s^0_t}-\alpha}, & \underline{\eta}& = \eta \pare{\underline{s^0_t}, \underline{s^0_t}-\alpha}, & \underline{r} & = r\pare{\underline{s^0_t}}. \end{aligned} \end{equation}
In the present section we prove that the unit circumference $\gamma$ is globally stable under small $ W^{1, \infty} $ perturbations which are graphs over the unit circle. The detailed statement is formulated in the following proposition:
\begin{prop}\label{prop:W1infty_enest} Let $T^\star\in(0,\infty]$ and $h = h(s,t)$ be a $\mathcal{C} \pare{ \bra{ 0,T^\star}; \mathcal{C}^2 } $ solution of \eqref{eq:eveqh2} such that $ \left. h\right|_{t=0} = h_0 $. Then there exists a $ c_0 > 0 $ such that if $ \av{h_0}_{W^{1, \infty}}< c_0 $ the following inequality holds
\begin{align*} \av{h\pare{t}}_{W^{1, \infty}}\leq \av{h_0}_{W^{1, \infty}}, && \forall \ 0\leq t<T^\star. \end{align*}
Furthermore, there exists a $ \delta > 0 $ such that
\begin{equation} \av{h'(t)} _{L^\infty}\leq \av{h'_0}_{L^\infty} e^{-\delta t}.\label{eq:exp_decay_derivative} \end{equation}
\end{prop}
The proof of this proposition is based on pointwise methods as in \cite{cordoba2009maximum,cordoba2014confined}. In particular, we will obtain the inequality
$$ \ddt \av{h(t)}_{W^{1,\infty}}< 0\;\text{ a.e. in $t$}, $$
for small enough initial data. Integrating in time will lead to the maximum principle for this norm. Once the maximum principle is obtained, we can furthermore derive the exponential decay of the norm.
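Before starting the estimates, let us also record, for the reader's convenience, two elementary identities that will be used several times below when collecting the various bounds; both follow from the Fourier expansion $ \log\pare{2\av{\sin\pare{\alpha/2}}} = -\sum_{k\geq 1} \frac{\cos\pare{k\alpha}}{k} $ and the half-angle formula $ 2\sin^2\pare{\alpha/2} = 1-\cos\alpha $:
\begin{align*} \frac{1}{2\pi} \int \sin^2\pare{\alpha/2} \textnormal{d} \alpha = \frac{1}{2}, && \frac{1}{4\pi} \int \log\pare{4\sin^2\pare{\alpha/2}} \cos\alpha \ \textnormal{d} \alpha = -\frac{1}{2}. \end{align*}
In particular, since $ \int \cos\alpha \ \textnormal{d}\alpha = 0 $, for any constant $ \bar{r}>0 $ we have
\begin{equation*} \frac{1}{4}\frac{1}{2\pi} \int \log\pare{4\bar{r}^2\sin^2\pare{\alpha/2}} \cos\alpha \ \textnormal{d} \alpha + \frac{\bar{r}}{2}\frac{1}{2\pi} \int \sin^2\pare{\alpha/2} \textnormal{d} \alpha = -\frac{1}{4} + \frac{\bar{r}}{4} = \frac{\bar{r}-1}{4}. \end{equation*}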
\subsection{Estimates in $ L^\infty $} \label{sec:estLinfty}
We consider now \eqref{eq:eveqh2}. With the notation introduced in \eqref{eq:auxiliary_variables} combined with the elementary identity $ 1-\cos\alpha = 2\sin^2\pare{\alpha/2} $, we can deduce that
$$ \partial_t h\pare{s} + \gamma\pare{s}\cdot \ddt{M}= \ J_1 \pare{s} + J_2 \pare{s} + J_3 \pare{s}, $$
where
\begin{equation} \label{eq:J's} \begin{aligned} J_1 = & \ -\frac{ 1}{8\pi} \int \log \pare{4r\pare{r-\theta} \sin^2\pare{\alpha/2} + \theta^2} \partial_\alpha^2\bra{\pare{r-\theta} \cos\alpha } \textnormal{d} \alpha, \\ J_2 = & \ \frac{1}{4\pi} \int \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \pare{\partial_\alpha^2 \theta +\pare{r-\theta}} \textnormal{d} \alpha , \\ J_3 = & \ \frac{r}{2\pi} \int \frac{\bra{2r\sin^2\pare{\alpha/2} + \theta\cos\alpha}\sin(\alpha)}{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \theta \textnormal{d} \alpha. \end{aligned} \end{equation}
\subsubsection*{Bound of $ J_1 $}
Let us decompose
\begin{equation*} J_1 = J_{1, \gamma} + J_{1, 1} + J_{1, 2} + J_{1, 3}, \end{equation*}
where
\begin{align*} J_{1, \gamma} & = \frac{ 1}{8\pi} \int \log \pare{4r\pare{r-\theta} \sin^2\pare{\alpha/2} + \theta^2} \cos\alpha \ \textnormal{d} \alpha , \\ J_{1, 1} & = \frac{1}{8 \pi} \int \frac{ 4r \partial_\alpha\theta \ \sin^2\pare{\alpha/2} }{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+ \theta^2} \ \partial_\alpha\eta \ \textnormal{d} \alpha , \\ J_{1, 2} & = -\frac{1}{8\pi} \int \frac{ 2 r\pare{r-\theta}\sin(\alpha) }{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+ \theta^2} \ \partial_\alpha\eta \ \textnormal{d} \alpha , \\ J_{1, 3} & = -\frac{1}{8 \pi} \int \frac{2\theta\partial_\alpha\theta}{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+ \theta^2} \ \partial_\alpha \eta \ \textnormal{d} \alpha . \end{align*}
We analyze now the term $ J_{1, \gamma} $. We use the Taylor expansion of the logarithm to find that
\begin{align*} \log\pare{4r\pare{r-\theta}\sin^2\pare{\alpha/2} + \theta^2} &= \log\pare{1-\frac{\theta}{r}+\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2}}} + \log\pare{4r^2\sin^2\pare{\alpha/2}}, \\ & \leq \log\pare{4r^2\sin^2\pare{\alpha/2}} - \frac{\theta}{r} + \mathcal{P} \pare{\av{h'}_{ L^\infty }^2}. \end{align*}
As a consequence of the previous estimate we find the following bound for $ J_{1, \gamma} \pare{\overline{s^0_t}} $:
\begin{equation} \label{eq:J1gamma} J_{1, \gamma}\pare{\overline{s^0_t}} \leq \frac{1}{4}\frac{1}{2\pi} \int \log\pare{4 \bar{r}^2 \sin^2\pare{\alpha/2}} \cos\alpha \ \textnormal{d} \alpha -\frac{1}{4}\frac{1}{2\pi} \int \bar{\theta}\cos\alpha \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^{\infty}}, \end{equation}
so that in \eqref{eq:J1gamma} we isolate the static, linear and nonlinear contributions to the evolution of $|h|_{L^\infty}$. Since the term $ J_{1, 1}\pare{\overline{s^0_t}} $ is not a singular integral it can be bounded straightforwardly and we conclude that
\begin{equation}\label{eq:J11} J_{1, 1} \pare{\overline{s^0_t}} \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^{\infty}}. \end{equation}
We now study the term $ J_{1, 2} $.
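For the reader's convenience, we record the elementary identity behind the rewriting of its kernel in the next display: since $ \sin\alpha = 2\sin\pare{\alpha/2}\cos\pare{\alpha/2} $,
\begin{equation*} \frac{2r\pare{r-\theta}\sin(\alpha)}{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+\theta^2} = \cot\pare{\alpha/2} \ \frac{4r\pare{r-\theta}\sin^2\pare{\alpha/2}}{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+\theta^2} = \pare{1 - \frac{\theta^2}{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+\theta^2}} \cot\pare{\alpha/2}, \end{equation*}
so that, up to a correction which is quadratic in $ \theta $, the kernel of $ J_{1, 2} $ is the periodic Hilbert kernel $ \cot\pare{\alpha/2} $.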
We remark that
\begin{align*} J_{1, 2} = & \ - \frac{1}{4} \frac{1}{2 \pi}\int \pare{ 1 - \frac{ \theta^2 }{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+ \theta^2} } \ \cot \pare{\alpha / 2} \partial_\alpha\eta \ \textnormal{d} \alpha , \\ = & \ J_{1,2, \RN{1}} + J_{1,2, \RN{2}} . \end{align*}
The smallness of $h$ and the positivity of $\bar{\theta} $ allow us to deduce that
\begin{equation*} J_{1, 2, \RN{2}} \pare{\overline{s^0_t}}=\frac{1}{8 \pi}\int \frac{ \bar{\theta} }{4\bar{r}\pare{\bar{r}-\bar{\theta}}\sin^2\pare{\alpha/2}+ \bar{\theta}^2} \ \frac{\bar{\theta}}{\tan \pare{\alpha / 2}} \partial_\alpha\bar{\eta} \ \textnormal{d} \alpha \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}}\textnormal{d} \alpha . \end{equation*}
Thus, considering that
$$ \eta = \theta +2h\pare{s-\alpha}\sin^2\pare{\alpha / 2}, $$
we obtain
\begin{equation} \label{eq:J12} J_{1, 2} \pare{\overline{s^0_t}} \leq - \frac{1}{4} \frac{1}{2\pi } \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha - \frac{1}{4} \frac{1}{2\pi } \int h\pare{\overline{s^0_t}-\alpha}\textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha. \end{equation}
We rearrange the term $ J_{1, 3} $ as
\begin{align*} J_{1, 3} &= - \frac{1}{4} \frac{1}{2 \pi} \int \frac{\theta}{2\sin^2(\alpha/2)} \frac{4\sin^2(\alpha/2)\partial_\alpha\theta}{4r\pare{r-\theta}\sin^2\pare{\alpha/2}+ \theta^2} \ \partial_\alpha \eta \ \textnormal{d} \alpha. \end{align*}
The last term $ J_{1, 3}\pare{\overline{s^0_t}} $ can easily be controlled as follows:
\begin{equation} \label{eq:J13} \av{J_{1, 3}\pare{\overline{s^0_t}}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha /2}} \textnormal{d} \alpha. \end{equation}
We combine the estimates \eqref{eq:J1gamma}, \eqref{eq:J11}, \eqref{eq:J12} and \eqref{eq:J13} and obtain the bound
\begin{multline}\label{eq:J1} J_1\pare{\overline{s^0_t}} \leq - \frac{1}{4} \frac{1}{2\pi } \int \frac{\bar{\theta} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha - \frac{1}{4} \frac{1}{2\pi } \int h\pare{\overline{s^0_t}-\alpha}\textnormal{d} \alpha -\frac{1}{4}\frac{1}{2\pi} \int \bar{\theta}\cos\alpha \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}\av{h'}_{L^{\infty}} \\ + \frac{1}{4} \frac{1}{2 \pi} \int \log\pare{4 \bar{r}^2 \sin^2\pare{\alpha/2}} \cos\alpha \ \textnormal{d} \alpha. \end{multline}
\subsubsection*{Bound of $ J_2 $} \label{sec:J2}
We study now the term $ J_2 $.
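Before doing so, let us point out a consistency check, which uses the elementary Fourier identities recorded before Subsection~\ref{sec:estLinfty}: at the equilibrium $ h\equiv 0 $, that is $ r\equiv 1 $, $ \theta\equiv 0 $ and $ \partial_\alpha\theta\equiv 0 $, the terms in \eqref{eq:J's} reduce (using $ \partial_\alpha^2\cos\alpha = -\cos\alpha $) to
\begin{align*} J_1\big|_{h= 0} = \frac{1}{8\pi} \int \log\pare{4\sin^2\pare{\alpha/2}} \cos\alpha \ \textnormal{d} \alpha = -\frac{1}{4}, && J_2\big|_{h= 0} = \frac{1}{4\pi} \int \sin^2\pare{\alpha/2} \textnormal{d} \alpha = \frac{1}{4}, && J_3\big|_{h= 0} = 0, \end{align*}
so that $ J_1 + J_2 + J_3 $ vanishes, in agreement with the fact that the unit circle is a steady state of \eqref{eq:eveqh2}. This cancellation between the constant parts of $ J_1 $ and $ J_2 $ reappears below when the bounds for $ J_1 $, $ J_2 $ and $ J_3 $ are collected.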
In order to bound $ J_2 $ we decompose it as
\begin{equation*} J_2 = J_{2, \gamma} + J_{2, 1} + J_{2, 2} , \end{equation*}
where
\begin{align*} J_{2, \gamma} & = \ \frac{1}{4\pi} \int \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \ r \ \textnormal{d} \alpha , \\ J_{2, 1} & = \ -\frac{1}{4\pi} \int \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \ \theta \ \textnormal{d} \alpha , \\ J_{2, 2} & = \ - \frac{1}{4\pi} \int \partial_\alpha \pare{ \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} } \ \partial_\alpha \theta \ \textnormal{d} \alpha . \end{align*}
We start analyzing the term $ J_{2, \gamma} $. We write this term as
\begin{equation*} J_{2, \gamma} = \frac{1}{4\pi} \int \frac{4r^2 \sin^2\pare{\alpha / 2}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \bra{ \pare{r-\theta}\sin^2\pare{\alpha / 2} - \frac{\theta^2\cos\pare{\alpha}}{4r\sin^2\pare{\alpha/2}}} \textnormal{d} \alpha . \end{equation*}
We observe that
\begin{align*} \frac{ 4r^2 \sin^2\pare{\alpha / 2}}{4r^2\sin^2\pare{\alpha/2} -4r\theta \sin^2\pare{\alpha/2} +\theta^2} & = \frac{ 1}{1-\frac{\theta }{r } +\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }} , \\ &= 1 +\frac{ \frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }}{1-\frac{\theta }{r } +\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }}\\ &=1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell,\\ \frac{\theta^2\cos\pare{\alpha}}{4r \sin^2\pare{\alpha/2}} & \leq C\av{h'}_{L^\infty}^2, \end{align*}
where the convergence of the series is ensured due to the smallness of $h$ in $W^{1,\infty}$ and $C$ is a universal constant that may change from line to line. Then, noticing that
$$ \left(1+\frac{\theta }{r }\right)(r-\theta)=\frac{r^2-\theta^2}{r}, $$
we deduce that
\begin{equation} \label{eq:J2gamma} J_{2, \gamma}\pare{\overline{s^0_t}} \leq \frac{\bar{r} }{2} \frac{1}{2\pi} \int \sin^2\pare{\alpha / 2} \textnormal{d} \alpha +\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation}
Next we study the term $ J_{2, 1} $. Using computations similar to the ones performed for the term $ J_{2, \gamma} $, we can reformulate it as
\begin{align*} J_{2, 1}= & - \frac{1}{4\pi} \int \frac{4r^2 \sin^2\pare{\alpha / 2}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \bra{ \pare{1-\frac{\theta}{r}}\theta \sin^2\pare{\alpha / 2} - \frac{\theta^3\cos\pare{\alpha}}{4r^2\sin^2\pare{\alpha/2}}} \textnormal{d} \alpha. \end{align*}
From the previous expression we can deduce the estimate
\begin{equation}\label{eq:J21} \begin{aligned} J_{2, 1} \pare{\overline{s^0_t}} \leq & \ -\frac{1}{4\pi} \int \bar{\theta} \sin^2\pare{\alpha / 2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{aligned} \end{equation}
Finally, we study the term $ J_{2, 2} $.
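In this term the derivative $ \partial_\alpha $ falls on the kernel. As a preliminary remark (writing, only in this remark, $ D := 4r\pare{r-\theta}\sin^2\pare{\alpha/2}+\theta^2 $ as a shorthand for the common denominator), let us record that
\begin{equation*} \partial_\alpha D = -4r \ \partial_\alpha\theta \ \sin^2\pare{\alpha/2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta \ \partial_\alpha\theta, \end{equation*}
where we used $ \partial_\alpha\pare{r-\theta} = -\partial_\alpha\theta $ and $ \partial_\alpha \sin^2\pare{\alpha/2} = \tfrac{1}{2}\sin\alpha $. This is exactly the combination appearing in the numerator of the term $ J_{2, 2, \RN{1}} $ below.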
We decompose it as \begin{equation*} J_{2, 2} = J_{2, 2, \RN{1}} + J_{2, 2, \RN{2}} + J_{2, 2, \RN{3}}, \end{equation*} where \begin{align*} J_{2, 2, \RN{1}} = & \ \frac{1}{4\pi} \int \frac{-4r \partial_\alpha \theta \sin^2\pare{\alpha/2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha \theta}{\pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 }^2} \pare{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha}\pare{2r\sin^2\pare{\alpha / 2} - \theta } \partial_\alpha \theta \ \textnormal{d} \alpha , \\ J_{2, 2, \RN{2}} = & \ - \frac{1}{4\pi} \int \frac{ \pare{r\sin(\alpha) + \partial_\alpha \theta \cos\alpha - \theta \sin \alpha}\pare{2r\sin^2\pare{\alpha / 2} - \theta } }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \ \partial_\alpha \theta \ \textnormal{d} \alpha , \\ J_{2, 2, \RN{3}} = & \ - \frac{1}{4\pi} \int \frac{ \pare{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha}\pare{ r\sin(\alpha) - \partial_\alpha \theta } }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \ \partial_\alpha \theta \ \textnormal{d} \alpha . \end{align*} Let us consider at first $ J_{2, 2, \RN{1}}\pare{\overline{s^0_t}} $. Recalling $$ \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2}=\frac{4r^2 \sin^2\pare{\alpha / 2}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} \bra{ \pare{1-\frac{\theta}{r}}\sin^2\pare{\alpha / 2} - \frac{\theta^2\cos\pare{\alpha}}{4r^2\sin^2\pare{\alpha/2}}}, $$ using similar computations as before, we can compute \begin{align*} \frac{ \pare{2r\sin^2\pare{\alpha / 2} + \theta \cos\alpha}\pare{2r\sin^2\pare{\alpha / 2} - \theta } }{(4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2)^2 } &= \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\bra{ \pare{1-\frac{\theta}{r}} - \frac{\theta^2\cos\pare{\alpha}}{4r^2\sin^4\pare{\alpha/2}}}\\ &\quad\times\frac{1}{4r\pare{r-\theta} + \frac{\theta^2}{\sin^2\pare{\alpha / 2}}}. \end{align*} Then, using the positivity of $\bar{\theta}$, we have that \begin{equation*} J_{2, 2, \RN{1}}\pare{\overline{s^0_t}} \leq - \frac{1}{2}\frac{1}{4\pi} \int \bar{\theta} \ \cos\alpha \ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . 
\end{equation*} Very similar bounds hold for $ J_{2, 2, \RN{2}}\pare{\overline{s^0_t}} $ and $ J_{2, 2, \RN{3}}\pare{\overline{s^0_t}} $, namely \begin{align*} J_{2, 2, \RN{2}}\pare{\overline{s^0_t}} \leq& \ \frac{1}{2}\frac{1}{4 \pi} \int \bar{\theta} \ \cos \alpha \ \textnormal{d} \alpha + \av{h}_{W^{1,\infty}} \mathcal{P} \pare{ \av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} , \\ J_{2, 2, \RN{3}}\pare{\overline{s^0_t}} \leq & \ \ \frac{1}{2}\frac{1}{4 \pi} \int \bar{\theta} \ \cos \alpha \ \textnormal{d} \alpha + \av{h}_{W^{1,\infty}} \mathcal{P} \pare{ \av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} , \end{align*} and so we obtain that \begin{equation}\label{eq:J22} J_{2, 2}\pare{\overline{s^0_t}} \leq \ \frac{1}{2}\frac{1}{4 \pi} \int \bar{\theta} \ \cos \alpha \ \textnormal{d} \alpha + \av{h}_{W^{1,\infty}} \mathcal{P} \pare{ \av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation} We combine the estimates \eqref{eq:J2gamma}, \eqref{eq:J21} and \eqref{eq:J22} and deduce that \begin{multline}\label{eq:J2} J_2\pare{\overline{s^0_t}} \leq \frac{\bar{r} }{2} \frac{1}{2\pi} \int \sin^2\pare{\alpha / 2} \textnormal{d} \alpha -\frac{1}{4\pi} \int \bar{\theta} \sin^2\pare{\alpha / 2} \textnormal{d} \alpha + \frac{1}{4}\frac{1}{2 \pi} \int \bar{\theta} \ \cos \alpha \ \textnormal{d} \alpha \\ + \av{h}_{W^{1,\infty}} \mathcal{P} \pare{ \av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} \subsubsection*{Bound of $ J_3 $} We decompose $ J_3 $ as \begin{equation*} J_3 = J_{3 , 1} + J_{3, 2}, \end{equation*} with \begin{align*} J_{3 , 1} & = \frac{r}{2\pi} \int \frac{2r \ \sin^2 \pare{\alpha / 2} \sin(\alpha)}{4r \pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \theta \ \textnormal{d} \alpha , \\ J_{3 , 2} & = \frac{r}{2\pi} \int \frac{ \theta \cos(\alpha)\sin(\alpha)}{4r \pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \theta \ \textnormal{d} \alpha. \end{align*} Using a Taylor expansion we know that \begin{equation}\label{eq:J31} J_{3 , 1}\pare{\overline{s^0_t}} = \frac{1}{2\pi} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\frac{\sin(\alpha)\partial_\alpha \theta}{2} \ \textnormal{d}\alpha , \end{equation} from which we easily deduce the bound \begin{equation*} J_{3 , 1} \pare{\overline{s^0_t}} \leq - \frac{1}{2} \frac{1}{2\pi} \int \bar{\theta} \cos(\alpha) \ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha. \end{equation*} The term $ J_{3, 2} $ can be handled similarly and, in fact, \begin{equation*} J_{3 , 2 } \pare{\overline{s^0_t}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha. 
\end{equation*} Collecting the previous estimates we deduce the desired bound for $ J_3 $ \begin{equation}\label{eq:J3} J_{3} \pare{\overline{s^0_t}} \leq - \frac{1}{2} \frac{1}{2\pi} \int \bar{\theta} \cos(\alpha) \ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha. \end{equation} \subsubsection*{The equation for the evolution of $|h|_{L^\infty}$} We sum the inequalities \eqref{eq:J1}, \eqref{eq:J2} and \eqref{eq:J3} and obtain the bound \begin{multline}\label{eq:PD1} J_{1} \pare{\overline{s^0_t}} + J_{2} \pare{\overline{s^0_t}} + J_{3} \pare{\overline{s^0_t}} \leq \frac{1}{4 \pi} \int \log\pare{2 \bar{r} \sin \pare{\alpha/2}} \cos(\alpha) \ \textnormal{d} \alpha + \frac{\bar{r} }{4 \pi} \int \sin^2\pare{\alpha / 2} \textnormal{d} \alpha \\ -\frac{1}{4}\frac{1}{2\pi} \bra{1-\av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha -\frac{1}{4} \frac{1}{2\pi} \int \bar{\theta}\pare{1+\cos(\alpha)}\textnormal{d}\alpha \\ -\frac{1}{4} \frac{1}{2\pi} \int h\pare{ \overline{s^0_t} -\alpha} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} Using pointwise methods as in \cite{cordoba2009maximum,cordoba2014confined} we have that $$ \ddt\max_{s\in\mathbb{S}^1}\set{h(s,t) }=\partial_t h\pare{ \overline{s^0_t} } \ \text{ a.e. } $$ Recalling equation \eqref{eq:eveqh2}, we obtain that \begin{multline*} \ddt \max_{s\in\mathbb{S}^1}\{h(s,t)\} + \gamma\pare{\overline{s^0_t}} \cdot \ddt M \leq \frac{1}{4 \pi} \int \log\pare{2 \bar{r} \sin \pare{\alpha/2}} \cos(\alpha) \ \textnormal{d} \alpha + \frac{\bar{r} }{4 \pi} \int \sin^2\pare{\alpha / 2} \textnormal{d} \alpha \\ -\frac{1}{4}\frac{1}{2\pi} \bra{1-\av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha -\frac{1}{4} \frac{1}{2\pi} \int \bar{\theta}\pare{1+\cos(\alpha)}\textnormal{d}\alpha \\ -\frac{1}{4} \frac{1}{2\pi} \int h\pare{ \overline{s^0_t} -\alpha} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty}. \end{multline*} Since \begin{align*} \gamma\pare{ s } \cdot \ddt M & = \frac{1}{4} \frac{1}{2\pi} \int h\pare{ s - \alpha}\cos(\alpha) \ \textnormal{d}\alpha , & \forall \ s\in \mathbb{S}^1, \\ \frac{1}{4 \pi} \int \log\pare{2 \bar{r} \sin \pare{\alpha/2}} \cos(\alpha) \ \textnormal{d} \alpha + \frac{\bar{r} }{4 \pi} \int \sin^2\pare{\alpha / 2} \textnormal{d} \alpha& = \frac{h\pare{\overline{s^0_t}}}{4}, \\ -\frac{1}{4}\frac{1}{2\pi} \int \bar{\theta} \ \textnormal{d}\alpha -\frac{1}{4} \frac{1}{2\pi} \int h\pare{ \overline{s^0_t} -\alpha} \textnormal{d} \alpha & = -\frac{h\pare{\overline{s^0_t}}}{4} , \\ -\frac{1}{4}\frac{1}{2\pi} \int \bar{\theta} \cos(\alpha) \ \textnormal{d}\alpha & = \frac{1}{4}\frac{1}{2\pi} \int h\pare{\overline{s^0_t} -\alpha}\cos(\alpha) \ \textnormal{d}\alpha, \end{align*} we obtain that the evolution equation for $\displaystyle\max_{s\in\mathbb{S}^1}\{h(s,t)\}$ can be estimated as \begin{equation*} \ddt h\pare{\overline{s^0_t}} \leq -\frac{1}{4}\frac{1}{2\pi} \bra{1-\av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{\bar{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . 
\end{equation*} We can perform the same computations for the quantity $$ 0<-\min_{s\in\mathbb{S}^1}\{h(s,t)\}=-h\pare{\underline{s^0_t}}, $$ and obtain the bound \begin{equation*} - \ddt \min_{s\in\mathbb{S}^1}\{h(s,t)\} \leq -\frac{1}{4}\frac{1}{2\pi} \bra{1-\av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{ - \underline{\theta}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation*} As a consequence we have that the time-evolution of $ \av{h\pare{t}}_{L^\infty} = \max \set{ h\pare{\overline{s^0_t}, t} , \ - h\pare{\underline{s^0_t}, t} } $ is given by \begin{multline} \ddt \av{h}_{L^\infty} \leq -\frac{1}{4}\frac{1}{2\pi} \bra{1-\av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha} \\ + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} .\label{eq:control_h_in_Linfty} \end{multline} \subsection{Estimates in $ W^{1,\infty}$} We have to find the evolution equation for $h'$. In order to do that we differentiate now $ J_1 $, integrate by parts in $\alpha$ and use \begin{equation*} \partial_\alpha^2 \bra{\cos \alpha \ \pare{1+h\pare{s-\alpha}}} = -\partial_\alpha \bra{\sin \alpha \ \pare{1+h\pare{s-\alpha}}} - \partial_\alpha \bra{\cos(\alpha) \ h'\pare{s-\alpha}}, \end{equation*} to obtain \begin{align} J_1'\pare{s} = & \ \frac{1}{2}\frac{1}{2\pi} \int \Bigg\{ \frac{\pare{1+h\pare{s}}h'\pare{s} + \pare{1+h\pare{s -\alpha}}h'\pare{s -\alpha} - \pare{h'\pare{s}\pare{1+h\pare{s-\alpha}} + \pare{1+h\pare{s}} h'\pare{s-\alpha}}\cos \alpha }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha)} \nonumber\\ & \qquad \qquad \times \ \partial_\alpha \bra{ \big. \sin(\alpha) \pare{1 + h\pare{s-\alpha}} } \Bigg\} \textnormal{d} \alpha \nonumber\\ & \ - \frac{1}{2}\frac{1}{2\pi} \int \Bigg\{ \frac{ -\pare{1+h\pare{s}} h'\pare{s} + h'\pare{s} \pare{1+h \pare{s-\alpha}} \cos \alpha -\pare{1+h\pare{s}}\pare{1+h\pare{s-\alpha}} \sin \alpha }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha)} \nonumber\\ & \qquad \qquad \times \ \partial_\alpha \bra{ \big. 
\cos \alpha \ h'\pare{s-\alpha} } \Bigg\} \textnormal{d} \alpha .\label{eq:J1'derivation} \end{align}
Using the trigonometric identity
$$ 1-\cos(\alpha)=2\sin^2(\alpha/2), $$
taking a derivative of the term $ J_2 $ and using
$$ \partial_s\pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } =-\partial_\alpha\pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } $$
we find that
\begin{equation*} \begin{aligned} J_2'\pare{s} = & \ \frac{1}{4\pi} \int \Bigg\{ \frac{ \bra{h' \pare{s}- h'\pare{s-\alpha} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) } \\ & \qquad +\frac{\bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{ h'\pare{s} \cos \alpha - h'\pare{s-\alpha}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) }\Bigg\}\ \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \ \textnormal{d} \alpha \\ & \ - \frac{1}{2\pi} \int \Bigg\{ \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{\pare{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) }^{2}} \\ & \qquad \times \bra{\pare{1+h\pare{s}}h'\pare{s} + \pare{1+h\pare{s-\alpha}}h'\pare{s-\alpha} - \pare{ \pare{1+h\pare{s}}h'\pare{s-\alpha} + \pare{1+h\pare{s-\alpha}}h'\pare{s} } \cos \alpha } \Bigg\} \\ & \qquad \times \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \ \textnormal{d} \alpha \\ & \ - \frac{1}{4\pi} \int \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) } \ \partial_\alpha\pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \textnormal{d} \alpha . \end{aligned} \end{equation*}
We can now integrate by parts in $\alpha$ and obtain that
\begin{align} J_{2}' \pare{s} = & \ \frac{1}{4\pi} \int \Bigg\{ \frac{ \bra{ h'\pare{s} + \pare{1+h\pare{s-\alpha}} \sin(\alpha) } \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) } \nonumber\\ & \qquad +\frac{\bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{ h'\pare{s}\cos \alpha - \pare{ 1 + h\pare{s} }\sin(\alpha) } }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha)}\ \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \Bigg\} \ \textnormal{d} \alpha \nonumber\\ & \ - \frac{1}{2 \pi} \int \Bigg\{ \frac{ \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \bra{\pare{1+h\pare{s}} \cos \alpha - \pare{1+h\pare{s-\alpha}}} }{\pare{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) }^{2}} \nonumber\\ & \qquad \times \bra{ \pare{1+h\pare{s}}h'\pare{s} -\pare{1-h\pare{s-\alpha}}h'\pare{s}\cos \alpha + \pare{1+h\pare{s}}\pare{1+h\pare{s-\alpha}} \sin(\alpha) } \nonumber\\ & \qquad \times \pare{ h''\pare{s-\alpha} - 1 - h\pare{s-\alpha} } \Bigg\} \ \textnormal{d} \alpha .\label{eq:J2'derivation} \end{align}
A similar procedure can be used in order to compute $ J_3' $.
By doing this we obtain \begin{equation}\label{eq:J3'derivation} \begin{aligned} J'_3 \pare{s} = & \ \frac{1}{2 \pi} \int \frac{ h' \pare{s} \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \sin \alpha + \pare{1+h\pare{s}} h' \pare{s} \sin \alpha }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) } h'\pare{s-\alpha} \textnormal{d} \alpha \\ & \ + \frac{1}{2 \pi} \int \frac{ \pare{1+h\pare{s}}^2 \cos(\alpha) -\pare{1+h\pare{s}}\pare{1+h\pare{s-\alpha}}\pare{\cos^2\alpha-\sin^2\alpha} }{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) } h'\pare{s-\alpha} \textnormal{d} \alpha \\ & \ - \frac{1}{2 \pi} \int \frac{ \pare{1+h\pare{s}} \bra{1+h\pare{s}- \pare{1+h\pare{s-\alpha}} \cos \alpha} \sin \alpha }{\pare{ \pare{1 + h\pare{s}}^2 + \pare{1 + h\pare{s-\alpha}}^2 - 2 \pare{1 + h\pare{s}}\pare{1 + h\pare{s-\alpha}} \cos(\alpha) }^2 } \\ & \qquad\quad \times \pare{2\pare{1+h\pare{s}} h'\pare{s} + 2 \pare{1+h\pare{s-\alpha}}\pare{ \pare{1+h\pare{s}}\sin(\alpha) -h'\pare{s} \cos(\alpha) }}h'\pare{s-\alpha} \textnormal{d} \alpha . \end{aligned} \end{equation} We combine equations \eqref{eq:J1'derivation}, \eqref{eq:J2'derivation} and \eqref{eq:J3'derivation} with \eqref{eq:eveqh2} in order to obtain the evolution equation for $ h' $: \begin{equation*} \partial_t h'\pare{s} + \gamma'\pare{s} \cdot \ddt M = \sum_{j=1}^7 \mathcal{J}_j\pare{s} , \end{equation*} where \begin{equation} \label{eq:cJ's} \begin{aligned} \mathcal{J}_1 = & \ \frac{1}{2}\frac{1}{2\pi} \int \frac{r r' + \pare{r-\theta} \pare{r-\theta}' - \pare{r' \pare{r-\theta} + r \pare{r-\theta}'}\cos \alpha }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} } \textnormal{d} \alpha \\ \mathcal{J}_2 = & \ - \frac{1}{2}\frac{1}{2\pi} \int \frac{ -r r' + r' \pare{r-\theta} \cos \alpha -r \pare{r-\theta} \sin \alpha }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \partial_\alpha \bra{ \big. 
\cos \alpha \ \pare{r-\theta}' } \textnormal{d} \alpha \\ \mathcal{J}_3 = & \ \frac{1}{4\pi} \int \set{ \frac{ \bra{ r' + \pare{r-\theta}\sin(\alpha) } \bra{r \cos \alpha - \pare{r-\theta}} }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } +\frac{\bra{r - \pare{r-\theta} \cos \alpha} \bra{ r' \cos \alpha - r \sin(\alpha) } }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}}\\ & \qquad\quad\times\ \bra{ \pare{r-\theta}'' - \pare{r-\theta} } \ \textnormal{d} \alpha \\ \end{aligned} \end{equation} and \begin{equation} \label{eq:cJ's2} \begin{aligned} \mathcal{J}_4 = & \ - \frac{1}{2\pi} \int \set{ \frac{ \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } }\\ &\qquad\quad\times \bra{ \pare{r-\theta}'' - \pare{r-\theta} }\ \textnormal{d} \alpha \\ \mathcal{J}_5 = & \ \frac{1}{2 \pi} \int \frac{ r' \bra{r - \pare{r-\theta } \cos \alpha} \sin \alpha + r r' \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha \\ \mathcal{J}_6 = & \frac{1}{2 \pi} \int \frac{ r^2 \cos(\alpha) -r\pare{r-\theta}\cos\pare{2\alpha} }{ 4 r \pare{r-\theta}\sin^2\pare{\alpha/2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha , \\ \mathcal{J}_7 = & \ - \frac{1}{2 \pi} \int \frac{ r \bra{r - \pare{r-\theta} \cos \alpha} \sin \alpha }{\pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 }^2 } \pare{2 r r' + 2 \pare{r-\theta }\pare{ r \sin(\alpha) -r' \cos(\alpha) }} \pare{r-\theta}' \textnormal{d} \alpha . \end{aligned} \end{equation} Let us remark that in \eqref{eq:cJ's} and \eqref{eq:cJ's2} the second-order terms are $ \pare{r-\theta}'' $ and $ \partial_\alpha\pare{r-\theta}' $. Using \begin{equation*} \pare{r-\theta}'' = h''\pare{s-\alpha} = -\partial_\alpha h'\pare{s-\alpha} = -\partial_\alpha \pare{r-\theta}', \end{equation*} we will be able to integrate by parts. After this, only first derivatives of $h$ appear in the evolution equation for $ h' $. This will allow us to close the pointwise estimates in $ W^{1, \infty} $. Additionally to the notation introduced in \eqref{eq:notation_bar_0} we denote with \begin{equation} \label{eq:notation_bar_1} \begin{aligned} \overline{\theta'} = \overline{\theta'} \pare{\alpha} & = \theta' \pare{\overline{s^1_t}, \overline{s^1_t}-\alpha}, & \overline{\eta'} = \overline{\eta'} \pare{\alpha} & = \eta' \pare{\overline{s^1_t}, \overline{s^1_t}-\alpha, \alpha}, & \overline{r'} & = r' \pare{\overline{s^1_t}} , \\ \underline{\theta'} = \underline{\theta'} \pare{\alpha} & = \theta' \pare{\underline{s^1_t}, \underline{s^1_t}-\alpha}, & \underline{\eta'} = \underline{\eta'} \pare{\alpha} & = \eta' \pare{\underline{s^1_t}, \underline{s^1_t}-\alpha, \alpha}, & \underline{r'} & = r' \pare{\underline{s^1_t}}. \end{aligned} \end{equation} \subsubsection*{Bound of $ \mathcal{J}_1 $} Let us remark at first that \begin{multline}\label{eq:dec_cJ1_1} \mathcal{J}_1 = \frac{1}{2}\frac{1}{2\pi} \int \frac{ 2 \pare{ r r' + \pare{r-\theta} \pare{r-\theta}' }\sin^2\pare{\alpha / 2} }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} } \textnormal{d} \alpha \\ - \frac{1}{2}\frac{1}{2\pi} \int \frac{\theta \theta' \cos \alpha }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} } \textnormal{d} \alpha = \mathcal{J}_{1, 1} + \mathcal{J}_{1, 2}. 
\end{multline} The term $ \mathcal{J}_{1, 1} $ is not singular and, using that $$ \int h'(s)\cos(\alpha)\textnormal{d} \alpha=0 $$ we obtain \begin{equation*} \mathcal{J}_{1, 1} \pare{\overline{s^1_t}} \leq - \frac{1}{4} \frac{1}{2\pi } \int \theta' \ \cos(\alpha) \ \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation*} Using the Taylor expansion, we can write \begin{align*} \mathcal{J}_{1, 2}&= - \frac{1}{2\pi} \int \frac{\theta }{4r^2\sin^2\pare{\alpha / 2} }\frac{4r^2\sin^2\pare{\alpha / 2} }{ 4r\pare{r-\theta}+ \frac{\theta^2}{\sin^2\pare{\alpha / 2} }} \ \theta' \cos (\alpha)\partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} } \textnormal{d} \alpha\\ &=-\frac{1}{2}\frac{1}{r^2}\frac{1}{2\pi} \int \frac{\theta }{2\sin^2\pare{\alpha / 2} }\bigg{[}1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell \bigg{]} \ \theta' \cos (\alpha)\partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} } \textnormal{d} \alpha. \end{align*} Hence, evaluating this identity in $ \overline{s^1_t} $ we find the estimate \begin{equation*} \mathcal{J}_{1, 2} \pare{\overline{s^1_t}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha. \end{equation*} Collecting both expressions we conclude that \begin{equation}\label{eq:cJ1} \mathcal{J}_{1} \pare{\overline{s^1_t}} \leq - \frac{1}{4} \frac{1}{2\pi } \int \overline{\theta'} \ \cos(\alpha) \ \textnormal{d} \alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation} \subsubsection*{Bound of $ \mathcal{J}_2 $ } Let us integrate $ \mathcal{J}_2 $ by parts in the variable $ \alpha $, we obtain \begin{multline}\label{eq:defcJ2} \mathcal{J}_2 = \ - \frac{1}{2}\frac{1}{2\pi} \int \frac{ - r' \partial_\alpha \theta \cos(\alpha) - \pare{r-\theta} r' \sin(\alpha) +r \partial_\alpha\theta \sin \alpha - r\pare{r-\theta} \cos(\alpha) }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \eta ' \textnormal{d} \alpha \\ + \frac{1}{2}\frac{1}{2\pi} \int \frac{ -r r' + r' \pare{r-\theta} \cos \alpha -r \pare{r-\theta} \sin \alpha }{ \pare{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 }^2 } \\ \times \pare{-4r\partial_\alpha \theta \sin^2\pare{\alpha / 2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha \theta} \ \eta ' \textnormal{d} \alpha = \mathcal{J}_{2, 1} + \mathcal{J}_{2, 2}. 
\end{multline} We now evaluate $ \mathcal{J}_{2, 1} $ at $ \overline{s^1_t} $ and isolate its linear part while bounding from above the nonlinear part to find that \begin{multline}\label{eq:cJ21} \mathcal{J}_{2, 1} \pare{\overline{s^1_t}} \leq \frac{1}{2} \frac{1}{2\pi} \int \frac{\cos \alpha }{4\sin^2\pare{\alpha/2}} \ \overline{\eta'} \ \textnormal{d}\alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \ \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} , \end{multline} The same can be done for $ \mathcal{J}_{2, 2} $, and this gives \begin{multline}\label{eq:cJ22} \mathcal{J}_{2, 2} \pare{\overline{s^1_t}} \leq - \frac{1}{2} \frac{1}{2\pi} \int \frac{\cos^2\pare{\alpha/2} }{2\sin^2\pare{\alpha/2}} \ \overline{\eta'}\ \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \ \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} , \end{multline} so that \eqref{eq:cJ21} and \eqref{eq:cJ22} together with $$ \frac{\cos(x)}{2}-\cos^2(x/2)=-\frac{1}{2}, $$ lead to \begin{equation*} \mathcal{J}_{2} \pare{\overline{s^1_t}} \leq - \frac{1}{4} \frac{1}{2\pi} \int \frac{\overline{\eta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \ \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation*} We now use the following identity which holds for any $ s\in\mathbb{S}^1 $ \begin{multline*} \int \frac{\eta'}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha = \int \frac{h'\pare{s} - h'\pare{s-\alpha}\cos(\alpha)}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha \\ = \int \frac{h'\pare{s} - h'\pare{s-\alpha}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha + \int h'\pare{s-\alpha}\textnormal{d} \alpha = \int \frac{\theta ' }{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha. \end{multline*} As a consequence, \begin{equation}\label{eq:cJ2} \mathcal{J}_{2} \pare{\overline{s^1_t}} \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{\overline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation} \subsubsection*{Bound of $ \mathcal{J}_3 $}\label{sec:cJ3} The terms $ \mathcal{J}_3 $ and $ \mathcal{J}_4 $ are more challenging to bound. 
Let us write \begin{equation*} \mathcal{J}_3 = \mathcal{J}_{3, 1} + \mathcal{J}_{3, 2}, \end{equation*} where \begin{align*} \mathcal{J}_{3, 1} = & \ \frac{1}{4\pi} \int \set{ \frac{ \bra{ r' + \pare{r-\theta}\sin(\alpha)} \bra{r \cos \alpha - \pare{r-\theta}} }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } +\frac{\bra{r - \pare{r-\theta} \cos \alpha} \bra{ r' \cos \alpha - r \sin(\alpha) } }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}} \ \pare{r-\theta}''\ \textnormal{d} \alpha , \\ \mathcal{J}_{3, 2} = & \ - \frac{1}{4\pi} \int \set{ \frac{ \bra{ r' + \pare{r-\theta}\sin(\alpha)} \bra{r \cos \alpha - \pare{r-\theta}} }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } +\frac{\bra{r - \pare{r-\theta} \cos \alpha} \bra{ r' \cos \alpha - r \sin(\alpha) } }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}}\ \pare{r-\theta} \ \textnormal{d} \alpha, \end{align*} and integrate by parts $ \mathcal{J}_{3, 1} $ in $ \alpha $. This gives \begin{equation*} \mathcal{J}_{3, 1}= \mathcal{J}_{3, 1, \RN{1}} + \mathcal{J}_{3, 1, \RN{2}}, \end{equation*} where, using the identity $ \pare{r-\theta}'' = -\partial_\alpha\pare{r-\theta}' = \partial_\alpha \theta' $ we obtain that \begin{align*} \mathcal{J}_{3, 1, \RN{1}} = & -\frac{1}{4\pi} \int \partial_\alpha \set{\frac{ \bra{ r' + \pare{r-\theta}\sin(\alpha)} \bra{\theta - 2r\sin^2\pare{\alpha / 2}} }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } } \theta' \ \textnormal{d} \alpha, \\ \mathcal{J}_{3, 1, \RN{2}} = & -\frac{1}{4\pi} \int \partial_\alpha \set{ \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta\cos(\alpha) } \bra{ r' \cos \alpha - r \sin(\alpha) } }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} } \theta' \ \textnormal{d} \alpha. \end{align*} As before, we use a Taylor expansion for the term $ \mathcal{J}_{3, 1, \RN{1}} $ and we obtain that \begin{equation}\label{eq:cJ31I} \mathcal{J}_{3, 1, \RN{1}} \pare{\overline{s^1_t}} \leq \frac{1}{4}\frac{1}{2\pi}\int \overline{\theta'} \cos(\alpha)\ \textnormal{d}\alpha +\av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty} . \end{equation} We can now start studying the term $ \mathcal{J}_{3, 1, \RN{2}} $. We expand the derivative and we obtain that \begin{multline*} \mathcal{J}_{3, 1, \RN{2}} = \frac{1}{4\pi} \int \frac{-4r\partial_\alpha\theta \sin^2\pare{\alpha / 2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha\theta}{\pare{4r\pare{r-\theta}\sin^2\pare{\alpha/2} + \theta^2}^2} \bra{2r\sin^2\pare{\alpha / 2} + \theta\cos(\alpha) } \bra{ r' \cos \alpha - r \sin(\alpha) } \ \theta' \ \textnormal{d}\alpha \\ -\frac{1}{4\pi} \int \Bigg\{ \frac{\bra{r\sin(\alpha)+ \partial_\alpha\theta\cos(\alpha) - \theta\sin(\alpha) } \bra{ r' \cos \alpha - r \sin(\alpha) } }{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \\ + \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta\cos(\alpha) } \bra{ - r' \sin \alpha - r \cos(\alpha) } }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \Bigg\} \theta' \ \textnormal{d} \alpha. 
\end{multline*}
Computations similar to the ones performed for the terms $ \mathcal{J}_1$ and $ \mathcal{J}_2 $ allow us to deduce the estimate
\begin{equation}\label{eq:cJ31II} \mathcal{J}_{3, 1, \RN{2}} \pare{\overline{s^1_t}} \leq \frac{1}{4}\frac{1}{2\pi}\int \overline{\theta'} \cos(\alpha)\ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty} . \end{equation}
We combine now the estimates \eqref{eq:cJ31I} and \eqref{eq:cJ31II} and obtain that
\begin{equation}\label{eq:cJ31} \mathcal{J}_{3, 1 } \pare{\overline{s^1_t}} \leq \frac{1}{2}\frac{1}{2\pi}\int \overline{\theta'} \cos(\alpha)\ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty} . \end{equation}
We can now focus on the term $ \mathcal{J}_{3, 2} $. Let us use the identity
\begin{multline*} \bra{ r' + \pare{r-\theta}\sin(\alpha) } \bra{r \cos \alpha - \pare{r-\theta}} + \bra{r - \pare{r-\theta} \cos \alpha} \bra{ r' \cos \alpha - r \sin(\alpha)} \\ = \bra{r'\pare{1+\cos^2\alpha}\theta} - \bra{\pare{4r \pare{r-\theta} \sin^2\pare{\alpha/2} + \theta^2}\sin(\alpha)} - \bra{4rr'\sin^4\pare{\alpha / 2}}, \end{multline*}
in order to reformulate $ \mathcal{J}_{3, 2} $ as
\begin{equation*} \mathcal{J}_{3, 2} = \mathcal{J}_{3, 2, \RN{1}} + \mathcal{J}_{3, 2, \RN{2}} + \mathcal{J}_{3, 2, \RN{3}}, \end{equation*}
where
\begin{align*} \mathcal{J}_{3, 2, \RN{1}} & = -\frac{1}{4\pi} \int \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{\theta}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha, \\ \mathcal{J}_{3, 2, \RN{2}} & = \frac{1}{4\pi} \int\pare{r-\theta}\sin(\alpha) \ \textnormal{d} \alpha, \\ \mathcal{J}_{3, 2, \RN{3}} & = \frac{1}{4\pi} \int \frac{ 2\sin^2\pare{\alpha/2}}{2r\pare{r-\theta} + \frac{\theta^2 }{2 \sin^2\pare{\alpha/2}}} rr'\pare{r-\theta}\textnormal{d}\alpha. \end{align*}
We can further compute
\begin{align*} \mathcal{J}_{3, 2, \RN{1}} & = -\frac{1}{4\pi} \int \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{\theta}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha, \\ & = \frac{1}{4\pi} \int \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{h'\pare{s}\alpha - \theta}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha - \frac{1}{4\pi} \int \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{h'\pare{s}\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ & = \mathbb{L}_1 + \mathbb{L}_2.
\end{align*} We use now the identity \begin{equation*} 1= 2\sin^2\pare{\alpha/4} + \cos\pare{\alpha / 2}, \end{equation*} in order to deduce that \begin{multline*} \mathbb{L}_1 = \frac{1}{4\pi} \int \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{\sin^2\pare{\alpha/4}}{\sin^2\pare{\alpha/2}}\pare{h'\pare{s}\alpha - \theta} \textnormal{d} \alpha \\ + \frac{1}{4\pi} \int \pare{ \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}} \sin\pare{\alpha / 2} }\frac{h'\pare{s}\alpha - \theta}{2\sin^3\pare{\alpha/2}} \cos\pare{\alpha / 2} \textnormal{d} \alpha = \mathbb{L}_{1, 1} + \mathbb{L}_{1, 2}. \end{multline*} The term $ \mathbb{L}_{1, 1} $ is not a singular integral so that we can easily obtain the bound \begin{equation*} \mathbb{L}_{1, 1} \pare{\overline{s^1_t}} \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty} . \end{equation*} Since \begin{equation*} \left. \frac{h'\pare{s}\alpha - \theta}{2\sin^3\pare{\alpha/2}} \cos\pare{\alpha / 2} \right|_{s=\overline{s^1_t}} \geq 0, \end{equation*} we deduce the bound \begin{equation}\label{eq:bL1} \mathbb{L}_{1} \pare{\overline{s^1_t}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \overline{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty} . \end{equation} We use now a Taylor expansion together with the symmetry of the integrand in order to write $ \mathbb{L}_2 $ as \begin{align*} \mathbb{L}_2&=- \frac{1}{4}\frac{1}{2\pi}\frac{1}{r^2} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}r'\pare{r-\theta}\frac{h'\pare{s}\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ &=- \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ &\quad+ \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r^2} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha \theta }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ &=- \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r} \int \left[\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ &\quad+ \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r^2} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha \theta }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha. \end{align*} We find that $$ \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r^2} \int \left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha \theta }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty}. 
$$ Similarly, we can expand and obtain that $$ \sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell=\sum_{\ell=1}^\infty \sum_{k=0}^\ell c_{\ell,k}\left(\frac{\theta }{r }\right)^{\ell-k}\left(-\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^k. $$ Using this identity we have that \begin{multline*} - \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r} \int \left[\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ \leq - \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r} \int \left[\sum_{\ell=1}^\infty\left(-\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha\\ +\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty}. \end{multline*} Furthermore, we compute \begin{multline*} - \frac{1}{4}\frac{1}{2\pi}\frac{(r')^2}{r} \int \left[\sum_{\ell=1}^\infty\left(-\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right]\pare{1+\cos^2\alpha}\frac{\alpha }{2\sin^2\pare{\alpha/2}}\left(1\pm\cos(\alpha/2)\right) \textnormal{d} \alpha\\ \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta}}{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha+\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty}. \end{multline*} As a consequence, we find that \begin{equation}\label{eq:bL2} \begin{aligned} \mathbb{L}_2\pare{\overline{s^1_t}} \leq & \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{aligned} \end{equation} The inequalities \eqref{eq:bL1} and \eqref{eq:bL2} allow us to deduce the estimate \begin{equation}\label{eq:cJ32I} \mathcal{J}_{3, 2,\RN{1}} \pare{\overline{s^1_t}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation} The term $ \mathcal{J}_{3, 2, \RN{2}} $ provides a linear contribution, which is \begin{equation}\label{eq:cJ32II} \mathcal{J}_{3, 2, \RN{2}} = -\frac{1}{4\pi} \int \theta \sin \alpha \ \textnormal{d} \alpha = \frac{1}{4\pi} \int \theta'\cos(\alpha) \ \textnormal{d}\alpha . \end{equation} Let us now study the term $ \mathcal{J}_{3, 2, \RN{3}} $. We use the identity \begin{equation*} \frac{ 2\sin^2\pare{\alpha/2}}{2r\pare{r-\theta} + \frac{\theta^2 }{2 \sin^2\pare{\alpha/2}}} rr'\pare{r-\theta} = r' \sin^2\pare{\alpha / 2} + \mathcal{P}\pare{\av{h'}_{L^\infty}^2}, \end{equation*} which lead to the bound \begin{equation}\label{eq:cJ32III} \mathcal{J}_{3, 2, \RN{3}}\pare{\overline{s^1_t}} = \frac{1}{4} h'\pare{\overline{s^1_t}} + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . 
\end{equation} We combine \eqref{eq:cJ32I}, \eqref{eq:cJ32II} and \eqref{eq:cJ32III} in order to obtain a bound on $ \mathcal{J}_{3, 2} $ which is \begin{multline}\label{eq:cJ32} \mathcal{J}_{3, 2} \pare{\overline{s^1_t}} \leq \frac{h'\pare{\overline{s^1_t}}}{4} +\frac{1}{2}\frac{1}{2\pi} \int \theta'\cos(\alpha) \ \textnormal{d}\alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} The estimates \eqref{eq:cJ31} and \eqref{eq:cJ32} finally close the estimation of $ \mathcal{J}_3 $, which is \begin{multline}\label{eq:cJ3} \mathcal{J}_{3 } \pare{\overline{s^1_t}} \leq \frac{h'\pare{\overline{s^1_t}}}{4} + \frac{1}{2\pi} \int \theta'\cos(\alpha) \ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}} } \av{h}_{W^{1, \infty}}\av{h'}_{L^\infty}. \end{multline} \subsubsection*{Bound of $ \mathcal{J}_4 $} As was done for $ \mathcal{J}_3 $, we decompose $ \mathcal{J}_4 = \mathcal{J}_{4, 1} + \mathcal{J}_{4, 2} $ where \begin{align*} \mathcal{J}_{4, 1} = & \ \frac{1}{2\pi} \int \partial_\alpha\set{ \frac{ \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } } \ \theta' \ \textnormal{d} \alpha , \\ \mathcal{J}_{4, 2} = & \ \frac{1}{2\pi} \int \set{ \frac{ \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } }\pare{r-\theta} \textnormal{d} \alpha .
\end{align*} And we expand $ \mathcal{J}_{4, 1} $ thus obtaining \begin{equation*} \mathcal{J}_{4, 1} = \mathcal{J}_{4, 1 , \RN{1}} + \mathcal{J}_{4, 1 , \RN{2}} + \mathcal{J}_{4, 1 , \RN{3}} + \mathcal{J}_{4, 1 , \RN{4}}, \end{equation*} where \begin{align*} \mathcal{J}_{4, 1 , \RN{1}} = & \ \frac{1}{2\pi} \int \frac{ -2 \pare{-4 r \partial_\alpha\theta\sin^2\pare{\alpha / 2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha\theta} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{3}} \\ & \qquad \times \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } \ \theta' \ \textnormal{d} \alpha , \\ \mathcal{J}_{4, 1 , \RN{2}} = & \ \frac{1}{2\pi} \int \frac{ \bra{\partial_\alpha\theta \cos \alpha + \pare{r-\theta}\sin(\alpha)} \bra{ r \cos \alpha - \pare{r-\theta}} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } \ \theta' \ \textnormal{d} \alpha, \\ \mathcal{J}_{4, 1, \RN{3}} = & \ \frac{1}{2\pi} \int \frac{ \bra{r- \pare{r-\theta} \cos \alpha} \bra{-r\sin(\alpha) + \partial_\alpha\theta} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) } \ \theta' \ \textnormal{d} \alpha , \\ \mathcal{J}_{4, 1, \RN{4}} = & \ \frac{1}{2\pi} \int \frac{ \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{2}} \bra{ \pare{\pare{r-\theta} r' - r\partial_\alpha\theta}\sin(\alpha) + \pare{\partial_\alpha\theta r' + r\pare{r-\theta}}\cos(\alpha) } \ \theta' \ \textnormal{d} \alpha , \\ \end{align*} Let us start analyzing the term $ \mathcal{J}_{4, 1, \RN{1}} $, and we reformulate it as \begin{multline*} \mathcal{J}_{4, 1 , \RN{1}} = \frac{1}{2\pi} \textnormal{p.v.} \int \frac{\sin^6 \pare{\alpha /2} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{3}} \frac{ -2 \pare{-4 r \partial_\alpha\theta\sin^2\pare{\alpha / 2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha\theta} }{ \sin\pare{\alpha /2}} \\ \times \bra{\frac{r- \pare{r-\theta} \cos \alpha}{ \sin\pare{\alpha /2}}} \bra{ \frac{r \cos \alpha - \pare{r-\theta}}{\sin\pare{\alpha /2}}} \bra{ \frac{r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) }{\sin\pare{\alpha /2}}} \ \frac{ \theta'}{\sin^2\pare{\alpha/2}} \ \textnormal{d} \alpha . 
\end{multline*} We use the following identities \begin{align*} \frac{\sin^6 \pare{\alpha /2} }{ \pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}^{3}} & = \frac{1}{\pare{2r}^6} +\mathcal{P}\pare{\av{h'}_{L^\infty}} , \\ \frac{ -2 \pare{-4 r \partial_\alpha\theta\sin^2\pare{\alpha / 2} + 2r\pare{r-\theta}\sin(\alpha) + 2\theta\partial_\alpha\theta} }{ \sin\pare{\alpha /2}} & = - 8 r^2 \cos\pare{\alpha / 2} +\mathcal{P}\pare{\av{h'}_{L^\infty}} , \\ \frac{r- \pare{r-\theta} \cos \alpha}{ \sin\pare{\alpha /2}} & = 2r\sin\pare{\alpha/2} +\mathcal{P}\pare{\av{h'}_{L^\infty}} , \\ \frac{r \cos \alpha - \pare{r-\theta}}{\sin\pare{\alpha /2}} & = - 2r\sin\pare{\alpha/2} +\mathcal{P}\pare{\av{h'}_{L^\infty}} , \\ \frac{r r' -\pare{r-\theta}r'\cos \alpha + r \pare{r-\theta} \sin(\alpha) }{\sin\pare{\alpha /2}} & = 2r^2\cos\pare{\alpha / 2} +\mathcal{P}\pare{\av{h'}_{L^\infty}} , \end{align*} so that we proved that \begin{equation*} \mathcal{J}_{4, 1, \RN{1}} \pare{\overline{s^1_t}} = \frac{1}{2}\frac{1}{2\pi} \int \overline{\theta'} \pare{1+\cos(\alpha)}\textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha. \end{equation*} In a similar fashion we can deduce that \begin{align*} \mathcal{J}_{4, 1, \RN{2}} \pare{\overline{s^1_t}} & = -\frac{1}{4}\frac{1}{2\pi} \int \overline{\theta'} \pare{1+\cos(\alpha)}\textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha, \\ \mathcal{J}_{4, 1, \RN{3}} \pare{\overline{s^1_t}} & = -\frac{1}{4}\frac{1}{2\pi} \int \overline{\theta'} \pare{1+\cos(\alpha)}\textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha, \\ \mathcal{J}_{4, 1, \RN{4}} \pare{\overline{s^1_t}} & = -\frac{1}{4}\frac{1}{2\pi} \int \overline{\theta'} \cos(\alpha) \ \textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha. \end{align*} As a consequence we have that \begin{equation}\label{eq:J41} \mathcal{J}_{4, 1 }\pare{\overline{s^1_t}} \leq -\frac{1}{4}\frac{1}{2\pi} \int \overline{\theta'} \cos(\alpha) \ \textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha / 2}} \textnormal{d}\alpha . \end{equation} We now use the identity \begin{equation*} \bra{r- \pare{r-\theta} \cos \alpha} \bra{ r \cos \alpha - \pare{r-\theta}} = -4r \pare{r-\theta} \sin^4\pare{\alpha / 2} + \mathcal{O}\pare{\av{h'}_{L^\infty}}, \end{equation*} in order to obtain the required bound for $ \mathcal{J}_{4, 2} $ \begin{equation*} \mathcal{J}_{4, 2}\pare{\overline{s^1_t}} \leq - \frac{h'\pare{\overline{s^1_t}}}{4} -\frac{1}{4}\frac{1}{2\pi} \int \overline{\theta'} \ \cos(\alpha) \ \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty}. 
\end{equation*} Collecting both estimates we conclude that \begin{multline} \label{eq:cJ4} \mathcal{J}_4\pare{\overline{s^1_t}} \leq - \frac{h'\pare{\overline{s^1_t}}}{4} -\frac{1}{2}\frac{1}{2\pi} \int \overline{\theta'} \cos(\alpha) \ \textnormal{d} \alpha + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha \\ + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} \subsubsection*{Bound of $ \mathcal{J}_5 $} Let us rewrite $ \mathcal{J}_5 $ as \begin{equation*} \begin{aligned} \mathcal{J}_5 = & \ \frac{1}{2 \pi} \int \frac{ \pare{ \eta + r }r' \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha , \\ = & \ \frac{1}{2 \pi} \int \frac{ \eta r' \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha + \frac{1}{2 \pi} \int \frac{ r r' \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha , \\ = & \ \mathcal{J}_{5, 1} + \mathcal{J}_{5, 2} . \end{aligned} \end{equation*} Thus we compute \begin{equation}\label{eq:J5_1} \mathcal{J}_{5, 1}\pare{\overline{s^1_t}} \leq\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{equation} We study now the term $ \mathcal{J}_{5, 2} $. We decompose it as \begin{equation*} \mathcal{J}_{5, 2} = \frac{1}{2 \pi} \int \frac{ r \pare{ r' }^2 \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \textnormal{d} \alpha - \frac{1}{2 \pi} \int \frac{ r r' \sin \alpha }{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 } \ \theta' \ \textnormal{d} \alpha = \mathcal{J}_{5, 2, \RN{1}} + \mathcal{J}_{5, 2, \RN{2}}. \end{equation*} We start by studying the term $ \mathcal{J}_{5, 2, \RN{2}} $, which we rewrite as \begin{equation*} \mathcal{J}_{5, 2, \RN{2}} = - \frac{1}{2\pi} \int \frac{rr'}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha/2}}} \cot\pare{\alpha/ 2} \theta' \ \textnormal{d} \alpha. \end{equation*} Using Lemma \ref{lem:monotonicity_lemma2} we conclude that \begin{equation}\label{eq:J5_2} \mathcal{J}_{5, 2, \RN{2}}\pare{\overline{s^1_t}} \leq \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha. \end{equation} We can now consider the term $ \mathcal{J}_{5, 2, \RN{1}} $. To estimate this term we proceed similarly to what was done for $\mathbb{L}_2$ and find that \begin{multline}\label{eq:cJ5} \mathcal{J}_5 \pare{\overline{s^1_t}} \leq \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}\av{h'}_{L^{\infty}}.
\end{multline} \subsubsection*{Bound of $ \mathcal{J}_6 $} Let us rewrite $ \mathcal{J}_6 $ as \begin{equation*} \mathcal{J}_6 = \frac{1}{2 \pi} \int \frac{r\cos \alpha \ \pare{2r \sin^2\pare{\alpha / 2}+\theta \cos(\alpha)} + r\pare{r-\theta}\sin^2\alpha }{ 4 r \pare{r-\theta}\sin^2\pare{\alpha/2} + \theta^2 } \pare{r-\theta}' \textnormal{d} \alpha , \end{equation*} and let us notice that \begin{equation*} \mathcal{J}_6 = \frac{1}{2\pi} \int \bra{1+\frac{\theta}{r} + \mathcal{P}\pare{\av{\theta}_{L^{ \infty}}^2}} \bra{ \frac{\cos(\alpha)}{2} + \frac{\theta \cos^2\alpha }{4r\sin^2\pare{\alpha / 2}} + \pare{1-\frac{\theta}{r}}\cos^2\pare{\alpha / 2} } \pare{r-\theta}'\textnormal{d}\alpha. \end{equation*} Hence computations similar to the ones performed for the term $ \mathcal{J}_5 $ lead us to the estimate \begin{multline}\label{eq:cJ6} \mathcal{J}_6 \pare{\overline{s^1_t}} \leq \frac{1}{2\pi} \int \pare{\frac{\cos(\alpha) }{2}+ \cos^2\pare{\alpha / 2}} \pare{\overline{r'}-\overline{\theta'} }\textnormal{d}\alpha \\ + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta}}{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} \subsubsection*{Bound of $ \mathcal{J}_7 $} Let us rewrite $ \mathcal{J}_7 $ as \begin{equation*} \begin{aligned} \mathcal{J}_7 = & \ - \frac{1}{2 \pi} \int \frac{ r \pare{2r\sin^2\pare{\alpha / 2} +\theta \cos \alpha} \sin \alpha }{\pare{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2 }^2 } \pare{2 r r' + 2 \pare{r-\theta }\pare{ r \sin(\alpha) -r' \cos(\alpha) }} \pare{r-\theta}' \textnormal{d} \alpha . \end{aligned} \end{equation*} We observe that \begin{equation*} \begin{aligned} \frac{2 r r' + 2 \pare{r-\theta }\pare{ r \sin(\alpha) -r' \cos(\alpha) }}{2r \sin\pare{\alpha / 2}} & = 2r \cos\pare{\alpha/2} + \mathcal{P}\pare{\av{h'}_{L^\infty}}, \\ \pare{\frac{4r^2\sin^2\pare{\alpha / 2}}{4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2}}^2 & = 1+\frac{2\theta}{r} + \mathcal{P}\pare{\av{h'}_{L^\infty}^2}, \\ \frac{r\ \sin(\alpha)}{\pare{ 2r\sin\pare{\alpha/2} }^2 } & = \frac{\cot\pare{\alpha / 2}}{4r}, \\ \frac{ 2r\sin^2\pare{\alpha / 2} + \theta\cos(\alpha) }{2 r \sin \pare{\alpha/2}} & = \sin\pare{\alpha / 2} + \frac{\theta\cos(\alpha)}{2r\sin\pare{\alpha / 2}}, \end{aligned} \end{equation*} and this in turn allow us to deduce that \begin{equation}\label{eq:cJ7} \mathcal{J}_7\pare{\overline{s^1_t}} \leq - \frac{1}{2\pi}\int\pare{\overline{r'} - \overline{\theta'}}\cos^2\pare{ \alpha / 2 }\textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . 
\end{equation} We now sum \eqref{eq:cJ6} and \eqref{eq:cJ7} and obtain the estimate \begin{multline}\label{eq:cJ6cJ7} \mathcal{J}_6 \pare{\overline{s^1_t}} + \mathcal{J}_7 \pare{\overline{s^1_t}} \leq - \frac{1}{2}\frac{1}{2\pi} \int \overline{\theta'} \cos(\alpha) \ \textnormal{d}\alpha + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \bar{\theta} }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha \\ + \av{h}_{W^{1, \infty}}\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d}\alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} \subsubsection*{The equation for the evolution of $|h'|_{L^\infty}$} We combine the estimates \eqref{eq:cJ1}, \eqref{eq:cJ2}, \eqref{eq:cJ3}, \eqref{eq:cJ4}, \eqref{eq:cJ5} and \eqref{eq:cJ6cJ7} and use $$ h'\pare{\overline{s^1_t}} =\max_{s}\{h'(s,t)\} $$ in order to obtain the estimate \begin{multline*} \ddt \max_{s}\{h'(s,t)\}+ \gamma'\pare{\overline{s^1_t}} \cdot \dot{M}\pare{t} \\ \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \textnormal{p.v.} \int \frac{\overline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha - \frac{1}{4} \frac{1}{2 \pi} \int \overline{\theta'} \pare{ 1 + \cos(\alpha)} \textnormal{d}\alpha \\ + \frac{h'\pare{ \overline{s^1_t} }}{4} + \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \textnormal{p.v.} \int \frac{h'\pare{\overline{s^1_t}}\alpha - \theta \pare{\overline{s^1_t}, \overline{s^1_t} -\alpha } }{2\sin^3\pare{\alpha/2}}\cos\pare{\alpha/2} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline*} Let us remark that \begin{equation*} - \frac{1}{4} \frac{1}{2\pi} \int \overline{\theta'} \ \textnormal{d}\alpha + \frac{h'\pare{ \overline{s^1_t} }}{4} = 0 , \end{equation*} \begin{align*} \gamma'\pare{s } \cdot \dot{M}\pare{t} = \frac{1}{4} \frac{1}{2\pi} \int h' \pare{s -\alpha} \ \cos(\alpha) \ \textnormal{d}\alpha , && \forall \ s\in \mathbb{S}^1, \end{align*} which we use combined with \eqref{eq:Lambda_alternative} in order to derive that \begin{equation*} \ddt \max_{s}\{h'(s,t)\} \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{\overline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . 
\end{equation*} Similar computations allow us to control the positive quantity $$ -\min_{s}\{h'(s,t)\}= -h'\pare{\underline{s^1_t}} $$ as \begin{equation*} - \ddt \min_{s}\{h'(s,t)\} \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \int \frac{- \underline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} , \end{equation*} so that we can estimate the evolution of $ \av{h'\pare{t}}_{L^\infty} = \max \set{h'\pare{\overline{s^1_t}, t}, \ - h'\pare{\underline{s^1_t}} } $ as \begin{equation} \label{eq:control_h'_in_Linfty} \begin{aligned} \ddt \av{h'}_{L^\infty} &\leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha} \\ &\quad + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{aligned} \end{equation} We combine the estimates \eqref{eq:control_h_in_Linfty} and \eqref{eq:control_h'_in_Linfty} and we deduce \begin{multline}\label{eq:est1} \ddt \av{h}_{W^{1, \infty}} \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha} \\ - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha}\\ + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{h'}_{L^\infty} . \end{multline} We apply Lemma \ref{eq:Lambda_Linfty} in order to state that \begin{equation*} \av{h'}_{L^\infty} \leq C \max\set{\int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha} . \end{equation*} As a consequence we can further simplify \eqref{eq:est1} and conclude that \begin{multline} \label{eq:est2} \ddt \av{h}_{W^{1, \infty}} \leq - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\bar{\theta}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha} \\ - \frac{1}{4} \frac{1}{2\pi} \bra{1- \av{h}_{W^{1, \infty}} \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}} \max\set{\int \frac{\overline{\theta'}}{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha , \int \frac{ -\underline{\theta'} }{2\sin^2\pare{\alpha/2}} \textnormal{d} \alpha}. \end{multline} As a consequence, we can ensure that the right hand side of \eqref{eq:est2} is strictly negative if $ \av{h}_{W^{1, \infty}} $ is sufficiently small. We obtain that there exists a positive constant $ 0< \mathcal{C} \ll 1 $ such that if $ \av{h_0}_{W^{1, \infty}}\leq \mathcal{C} $ then for each $ t > 0 $ \begin{equation*} \av{h\pare{t}}_{W^{1, \infty}}\leq \av{h_0}_{W^{1, \infty}}. \end{equation*} We prove now the pointwise decay in time of $ \av{h\pare{t}}_{W^{1, \infty}} $. From \eqref{eq:control_h'_in_Linfty} we deduce, using the Poincar\'e-type inequality \eqref{eq:Lambda_Linfty}, that there exists a $ \delta > 0 $ s.t. 
\begin{equation}\label{eq:exp_decay_h'} \av{h'\pare{t}}_{L^\infty}\leq \av{h'_0}_{L^\infty} e^{-\delta t}. \end{equation} This concludes the proof of Proposition \ref{prop:W1infty_enest}. $ \Box $ \section{\emph{A priori} estimates in $ H^{1} $}\label{sec:H1inftydecay} The purpose of this section is to obtain the parabolic gain of regularity $$ L^2\pare{ 0,T;H^{3/2}} $$ for the solution. Although these estimates are lower order compared to the pointwise estimates, this regularity is necessary in order to pass to the limit in the weak formulation of the N-Peskin problem. \begin{prop}\label{prop:unif_bounds_L2h} Let $ T^\star \in (0,\infty] $ and $h = h(s,t)$ be a $\mathcal{C} \pare{ \bra{ 0,T^\star}; \mathcal{C}^2 } $ solution of \eqref{eq:eveqh2} such that $ \left. h\right|_{t=0} = h_0 $. Assume that $$ \av{h_0}_{W^{1, \infty}}< c_0 $$ with $c_0$ the constant in Proposition \ref{prop:W1infty_enest}. Then, for all $T\leq T^\star$, there exists a $ C(T) \in \pare{0, \infty} $ depending on $ T $ only such that \begin{equation*} h \in L^2 \pare{\bra{0, T}; H^{3/2} \pare{\mathbb{S}^1}} \end{equation*} and the following bound holds true \begin{equation*} \norm{h}_{ L^2 \pare{\bra{0, T}; H^{3/2} \pare{\mathbb{S}^1}}} \leq C(T). \end{equation*} \end{prop} \begin{proof} Throughout the proof we denote by $ 0<\nu\ll 1 $ a positive constant whose explicit value may vary from line to line. \\ Let us recall that the evolution equation for $ h' $ can be written as \begin{equation*} \partial_t h' + \gamma'\cdot \dot{M} = \sum_{j=1}^7 \mathcal{J}_j , \end{equation*} where the explicit formulations of the terms $ \mathcal{J}_j, \ j=1, \ldots, 7 $ are given in \eqref{eq:cJ's} and \eqref{eq:cJ's2}. Using computations similar to the ones performed in \eqref{eq:cJ1}, \eqref{eq:cJ2}, \eqref{eq:cJ3}, \eqref{eq:cJ4}, \eqref{eq:cJ5} and \eqref{eq:cJ6cJ7}, which isolate the linear (in $ h $) contribution of every $ \mathcal{J}_j $, we reformulate the evolution equation for $ h' $ as \begin{multline*} \partial_t h' +\frac{1}{4} \ \Lambda h' = \pare{ \mathcal{J}_2 + \frac{1}{4} \ \Lambda h' } +\pare{ \mathcal{J}_1 + \frac{1}{4}\frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha } + \pare{ \mathcal{J}_3 -\frac{h'\pare{s}}{4} - \frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha } \\ + \pare{ \mathcal{J}_4 +\frac{h'\pare{s}}{4} + \frac{1}{2}\frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha } + \mathcal{J}_5 + \pare{\mathcal{J}_6 + \mathcal{J}_7 + \frac{1}{2}\frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha }, \end{multline*} so that defining \begin{equation*} \begin{aligned} \mathcal{I}_1 = & \ \mathcal{J}_1 + \frac{1}{4}\frac{1}{2\pi} \int \theta' \ \cos \alpha \ \textnormal{d}\alpha , \\ \mathcal{I}_2 = & \ \mathcal{J}_2 + \frac{1}{4} \ \Lambda h', \\ \mathcal{I}_3 = & \ \mathcal{J}_3 -\frac{h'\pare{s}}{4} - \frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha , \\ \mathcal{I}_4 = & \ \mathcal{J}_4 +\frac{h'\pare{s}}{4} + \frac{1}{2}\frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha , \\ \mathcal{I}_5 = & \ \mathcal{J}_5 , \\ \mathcal{I}_6 = & \ \mathcal{J}_6 + \mathcal{J}_7 + \frac{1}{2}\frac{1}{2\pi} \int \theta' \ \cos(\alpha) \ \textnormal{d}\alpha , \end{aligned} \end{equation*} the evolution equation for $ h' $ becomes \begin{equation}\label{eq:eveqh'Is} \partial_t h' + \frac{1}{4} \ \Lambda h' = \sum_{j=1}^6 \mathcal{I}_j .
\end{equation} The advantage in the formulation \eqref{eq:eveqh'Is} is that the terms $ \mathcal{I}_j $ on the right hand side are all nonlinear in $ h $. Furthermore, we observe that there are three families of contributions. When testing with $h'$ and integrating, the terms $\mathcal{I}_j$ can be written in one of the following three ways: \begin{align*} \int \mathcal{N}(h,h')h'h' \textnormal{d} s, && \int \mathcal{N}(h,h')\Lambda h h' \textnormal{d} s && \text{ and } && \int \mathcal{N}(h,h')\Lambda h'h' \textnormal{d} s, \end{align*} where $\mathcal{N}$ denotes a nonlinear term. Once we are equipped with the estimates in $W^{1,\infty}$, this part is rather straightforward and as such we only sketch the proof. We start with the term $\mathcal{I}_2$ and define \begin{equation*} \tilde{\mathcal{I}}_2 = \int \mathcal{I}_2\pare{s}h'\pare{s}\textnormal{d} s. \end{equation*} Thus, using the splitting defined in \eqref{eq:defcJ2} for the term $ \mathcal{J}_2 $, we find that $$ \tilde{\mathcal{I}}_{2}=\int \mathcal{I}_2(s) h'(s)\textnormal{d} s =\tilde{\mathcal{I}}_{2,1}+\tilde{\mathcal{I}}_{2,2}+\tilde{\mathcal{I}}_{2,3}, $$ where $ \tilde{\mathcal{I}}_{2,3} $ contains the lower order terms due to the cancellation of the linear part of the equation. Using the Taylor expansion together with the H\"older and Poincar\'e inequalities, we find that \begin{equation} \label{eq:tildeI2,1_1} \begin{aligned} \tilde{\mathcal{I}}_{2,1}&=- \frac{1}{2}\frac{1}{2\pi} \iint \frac{ - r' \partial_\alpha \theta \cos(\alpha) - \pare{r-\theta} r' \sin(\alpha) +r \partial_\alpha\theta \sin \alpha }{4r^2 \sin^2\pare{\alpha / 2}}\frac{4r^2 \sin^2\pare{\alpha / 2}}{ 4r\pare{r-\theta}\sin^2\pare{\alpha / 2} + \theta^2} \ \eta ' h'(s) \textnormal{d} \alpha \textnormal{d} s\\ &\leq - \frac{1}{2}\frac{1}{2\pi} \iint \frac{ - r' \partial_\alpha \theta \cos(\alpha) - \pare{r-\theta} r' \sin(\alpha) +r \partial_\alpha\theta \sin \alpha }{4r^2 \sin^2\pare{\alpha / 2}}\left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right] \ \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s\\ &\quad + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^2}^2. \end{aligned} \end{equation} Let us define \begin{align*} W_{ \RN{1} } & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{ r' \partial_\alpha \theta }{4r^2 \sin^2\pare{\alpha / 2}} \pare{1+\frac{\theta}{r}} \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s, \\ W_{ \RN{2} } & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{ r r' \cot\pare{\alpha / 2}}{2r^2} \ \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s , \\ W_{ \RN{3} } & = - \frac{1}{2}\frac{1}{2\pi} \iint \frac{ r \partial_\alpha\theta \cot\pare{\alpha/2} }{2r^2 } \ \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s. \end{align*} With the above decomposition we find that \begin{multline*} - \frac{1}{2}\frac{1}{2\pi} \iint \frac{ - r' \partial_\alpha \theta \cos(\alpha) - \pare{r-\theta} r' \sin(\alpha) +r \partial_\alpha\theta \sin \alpha }{4r^2 \sin^2\pare{\alpha / 2}}\left[1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell\right] \ \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s \\ - W_{ \RN{1} } - W_{\RN{2} } - W_{ \RN{3} } = {\bf J} , \end{multline*} where $ {\bf J} $ is an operator with a regularizing kernel satisfying $$ {\bf J}\leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^2}^2 .
$$ Then, it suffices to control the singular part of the integral $ \tilde{\mathcal{I}}_{2 , 1} $ composed of the simplified terms $ W_{j}, j=\RN{1}, \RN{2}, \RN{3} $. We prove now the estimates for the terms $ W_{\RN{2}} $ and $ W_{\RN{3}} $. We perform the computations for the term $ W_{\RN{2}} $ only, the other being identical. We use the boundedness of $ \av{h}_{W^{1, \infty}} $ in order to argue that $$ W_{ \RN{2} } = -C \iint \frac{ r r' \cot\pare{\alpha / 2}}{2r^2} \ h'(s-\alpha) h'(s) \textnormal{d} \alpha \textnormal{d} s= C \int r r' h'(s) \Lambda h(s) \textnormal{d} s \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} $$ and that \begin{equation*} W_{\RN{3}} \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \iint \av{\cot\pare{\alpha/2}} \av{\theta'}\textnormal{d}\alpha \textnormal{d} s . \end{equation*} Thus, using the embedding $ L^2\pare{\mathbb{S}^1\times\mathbb{S}^1}\hookrightarrow L^1\pare{\mathbb{S}^1\times\mathbb{S}^1} $ and the fact that $ \av{\cot\pare{\alpha/2}}^2\lesssim \pare{ \sin\pare{\alpha/2} }^{-2} $, we deduce the control \begin{equation*} W_{\RN{3}} \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{\Lambda^{1/2} h'}_{L^2}. \end{equation*} We conclude that \begin{equation}\label{eq:WII+WIII} W_{\RN{2}} + W_{\RN{3}} \leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. \end{equation} We study now the term $ W_{\RN{1}} $, which is the most singular of the three. Let us reformulate it as \begin{equation*} W_{\RN{1}} = W_{\RN{1}, \RN{1}} + W_{\RN{1}, \RN{2}}, \end{equation*} where \begin{align*} W_{\RN{1}, \RN{1}} & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{ r' \partial_\alpha \theta }{4r^2 \sin^2\pare{\alpha / 2}} \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s, \\ W_{\RN{1}, \RN{2}} & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{ r' \theta \partial_\alpha \theta }{4r^3 \sin^2\pare{\alpha / 2}} \theta ' h'(s) \textnormal{d} \alpha \textnormal{d} s. \end{align*} The term $ W_{\RN{1}, \RN{2}} $ can be controlled using computations close to the ones performed to control the terms $ W_{\RN{2}} $ and $ W_{\RN{3}} $, obtaining that \begin{equation}\label{eq:WI,II} W_{\RN{1}, \RN{2}} \leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. \end{equation} The term $ W_{\RN{1}, \RN{1}} $ is indeed the most singular one. Substituting the explicit values of the functions $ r=1+h\pare{s} $ and $ \theta = h\pare{s} - h\pare{s-\alpha} $ and changing variables, we obtain that \begin{align*} W_{\RN{1}, \RN{1}} & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{\pare{h'\pare{s}}^2 h'\pare{s-\alpha}}{4\pare{1+h\pare{s}}^2\sin^2\pare{\alpha/2}} \pare{h'\pare{s} - h'\pare{s-\alpha}}\textnormal{d}\alpha \textnormal{d} s, \\ & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{\pare{h'\pare{s}}^2 h'\pare{\sigma}}{4\pare{1+h\pare{s}}^2\sin^2\pare{\frac{s-\sigma}{2}}} \pare{h'\pare{s} - h'\pare{\sigma}}\textnormal{d}\sigma \textnormal{d} s , \\ & = -\frac{1}{2}\frac{1}{2\pi} \iint \frac{\pare{h'\pare{\sigma}}^2 h'\pare{s}}{4\pare{1+h\pare{\sigma }}^2\sin^2\pare{\frac{s-\sigma}{2}}} \pare{h'\pare{s} - h'\pare{\sigma}}\textnormal{d}\sigma \textnormal{d} s.
\end{align*} Then, we find that \begin{equation} \label{eq:WI,I} \begin{aligned} W_{\RN{1}, \RN{1}} & = \frac{1}{2}\frac{1}{2\pi} \iint \frac{h'\pare{s} h'\pare{\sigma}\theta'}{2\sin^2\pare{\frac{s-\sigma}{2}}} \bra{ \frac{h'\pare{s}}{\pare{1+h\pare{s}}^2} - \frac{h'\pare{\sigma}}{\pare{1+h\pare{\sigma}}^2} }\textnormal{d}\sigma \textnormal{d} s, \\ & \leq C \av{h}_{W^{1, \infty}}^2 \pare{ \iint \pare{\frac{\theta' }{2\sin \pare{\frac{s-\sigma}{2}}}}^2 \textnormal{d} s \textnormal{d} \sigma }^{1/2} \pare{ \iint \bra{ \frac{\frac{h'\pare{s}}{\pare{1+h\pare{s}}^2} - \frac{h'\pare{\sigma}}{\pare{1+h\pare{\sigma}}^2} }{2\sin\pare{\frac{s-\sigma}{2}}} }^2 \textnormal{d}\sigma \textnormal{d} s }^{1/2}, \\ & \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{\Lambda^{1/2}h'}_{L^2}\av{\Lambda^{1/2}\pare{ \frac{h'}{\pare{1+h}^2} }}_{L^2}, \\ & \leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}}\av{h}_{W^{1, \infty}} \av{\Lambda^{1/2} h' }_{L^2}^2 + \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}^2. \end{aligned} \end{equation} Using the smallness of $h$ in $W^{1,\infty}$, the rest of terms can be handled similarly and we find that $$ \tilde{\mathcal{I}}_{2}\leq \nu \av{\Lambda^{1/2} h' }_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. $$ We define $$ \tilde{\mathcal{I}}_1=\int \mathcal{I}_1(s) h'(s) \ \textnormal{d} s $$ and decompose $\mathcal{J}_1$ as in \eqref{eq:dec_cJ1_1} to find that \begin{multline*} \tilde{\mathcal{I}}_1\leq -\frac{1}{2}\frac{1}{r^2}\frac{1}{2\pi}\iint \frac{\theta \theta' \cos (\alpha) }{2\sin^2\pare{\alpha / 2} }\bra{ 1 +\sum_{\ell=1}^\infty\left(\frac{\theta }{r } -\frac{\theta^2}{4r^2\sin^2\pare{\alpha/2} }\right)^\ell } \ \partial_\alpha \bra{ \sin(\alpha) \pare{r-\theta} }h'(s) \textnormal{d} \alpha \textnormal{d} s\\ +\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}} \av{h'}_{L^2}^2 . \end{multline*} In order to deduce the above estimate we used the fact that the integral defining the term $ \mathcal{J}_{1,1} $ is not singular. The term $ \tilde{\mathcal{I}}_{1} $ is now in a form which resembles the one deduced for the term $ \tilde{\mathcal{I}}_{2,1} $ in equation \eqref{eq:tildeI2,1_1}. Very similar computations allow us to produce the bound \begin{equation*} \tilde{\mathcal{I}}_{1}\leq \nu \av{\Lambda^{1/2} h' }_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. \end{equation*} We decompose $\mathcal{I}_3$ following our splitting of $\mathcal{J}_3$ (see Section \ref{sec:cJ3}) and find that \begin{align*} \tilde{\mathcal{I}}_{3}= & \ \int \mathcal{I}_3(s) h'(s)\textnormal{d} s , \\ = & \ \tilde{\mathcal{I}}_{3,1}+\tilde{\mathcal{I}}_{3,2}+\tilde{\mathcal{I}}_{3,3} , \\ = & \ \tilde{\mathcal{I}}_{3,1,\RN{1}}+\tilde{\mathcal{I}}_{3,1,\RN{2}}+\tilde{\mathcal{I}}_{3,2,\RN{1}}+\tilde{\mathcal{I}}_{3,2,\RN{2}}+\tilde{\mathcal{I}}_{3,2,\RN{3}}+\tilde{\mathcal{I}}_{3,3}, \end{align*} where, as before, $\tilde{\mathcal{I}}_{3,3} $ contains lower order terms due to the cancellation of the linear part of the equation. The terms $\mathcal{I}_{3,1,j}, \ j=\RN{1}, \RN{2}$ can be estimated using computations similar to the ones performed in order to control $ \tilde{\mathcal{I}}_{2, 1} $ and we find $$ \tilde{\mathcal{I}}_{3,1}\leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. $$ Now we estimate the term $\mathcal{I}_{3,2,\RN{1}}$, the other terms being easier. 
We find that \begin{align*} \tilde{\mathcal{I}}_{3,2,\RN{1}}&=-\frac{1}{4\pi} \iint \frac{r'\pare{r-\theta} \pare{1+\cos^2\alpha}}{2r\pare{r-\theta} + \frac{\theta^2}{2\sin^2\pare{\alpha / 2}}}\frac{\theta}{2\sin^2\pare{\alpha/2}} h'(s)\textnormal{d} \alpha \textnormal{d} s. \end{align*} We study now the most singular contribution of $ \tilde{\mathcal{I}}_{3, 2, \RN{1}} $, i.e. the integral \begin{equation*} -\frac{1}{4\pi} \iint \frac{r'}{r} \frac{\theta}{2\sin^2\pare{\alpha/2}} h'(s)\textnormal{d} \alpha \textnormal{d} s = -\frac{1}{4\pi} \int \frac{ \Lambda h\pare{s} h'(s) }{1+h\pare{s}} \textnormal{d} s \leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. \end{equation*} The rest of the terms can be handled in a similar way and we conclude that $$ \tilde{\mathcal{I}}_{3}\leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. $$ The term $\tilde{\mathcal{I}}_4$ resembles the term $\mathcal{I}_3$ and as a consequence it can be handled using the same ideas. Then we obtain that $$ \tilde{\mathcal{I}}_{4}\leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. $$ The term $\tilde{\mathcal{I}}_5$ is similar to $\mathcal{I}_{3,2,\RN{1}}$ and then we find that $$ \tilde{\mathcal{I}}_{5}\leq \nu \av{\Lambda^{1/2} h}_{L^2}^2 + \frac{\mathcal{P}\pare{\av{h}_{W^{1, \infty}}} }{\nu} \av{h}_{W^{1, \infty}}^2. $$ The term $\tilde{\mathcal{I}}_6$ can be estimated using the previous Taylor expansion together with the same ideas used to bound $\tilde{\mathcal{I}}_1$. Then, choosing $\nu$ small enough and using the maximum principle for $\av{h}_{W^{1,\infty}}$, we conclude $$ \ddt \av{h'}_{L^2}^2+\frac{1}{4}\av{\Lambda^{1/2}h'}_{L^2}^2\leq \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{h}_{W^{1, \infty}}^2\leq C. $$ Invoking now Gr\"onwall's inequality we find that \begin{equation*} \int_{0}^T \av{h\pare{t}}_{H^{3/2}}^2\textnormal{d} t\leq C T. \end{equation*} \end{proof} \section{Estimates for $ \partial_t h $} \label{sec:path} The result we prove in the present section is the following. \begin{prop}\label{prop:unif_bounds_path} Let $ T^\star \in(0,\infty]$ and $h = h(s,t)$ be a $\mathcal{C} \pare{ \bra{ 0,T^\star}; \mathcal{C}^2 } $ solution of \eqref{eq:eveqh2} such that $ \left. h\right|_{t=0} = h_0 $. Assume that $$ \av{h_0}_{W^{1, \infty}}< c_0 $$ with $c_0$ the constant in Proposition \ref{prop:W1infty_enest}. Then, for all $T\leq T^\star$, there exists a $ C(T) \in \pare{0, \infty} $ depending on $ T $ only such that \begin{equation*} \partial_t h \in L^2 \pare{\bra{0, T}; H^{-1} \pare{\mathbb{S}^1}}, \end{equation*} and the following bound holds true \begin{equation*} \norm{\partial_t h}_{ L^2 \pare{\bra{0, T}; H^{-1} \pare{\mathbb{S}^1}}} \leq C(T). \end{equation*} \end{prop} \begin{proof} Thanks to the regularity results proved in the previous sections it suffices to prove a suitable bound for the nonlinear terms in the evolution equation for $ h $. The bounds of Proposition \ref{prop:unif_bounds_path} are necessary in the application of an Aubin-Lions compactness theorem (cf. \cite{Simon87}) and are somewhat standard; for this reason we will sketch the computations only for the most singular terms and leave the rest of the computations for the interested reader. The term $ J_2 $ is the most singular term in \eqref{eq:J's} due to the presence of the term $ \partial_\alpha^2 \theta $.
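Let us also recall, as a guide for the computations below, the standard duality characterization of the $ L^2\pare{\bra{0, T}; H^{-1}} $ norm (up to equivalence of norms, with $ H^{-1}\pare{\mathbb{S}^1} $ understood as the dual of $ H^1\pare{\mathbb{S}^1} $ with respect to the $ L^2 $ pairing)
\begin{equation*}
\norm{J}_{L^2\pare{\bra{0, T}; H^{-1}}} \simeq \sup\set{ \int_0^T \int J\pare{s, t} \phi\pare{s, t} \textnormal{d} s \ \textnormal{d} t \ : \ \norm{\phi}_{L^2\pare{\bra{0, T}; H^1}} \leq 1 },
\end{equation*}
so that it suffices to estimate the space-time pairing of each singular contribution against an arbitrary $ \phi \in L^2\pare{\bra{0, T}; H^1} $ of unit norm; this is the strategy adopted in the rest of the proof.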
Using the notation of Section \ref{sec:J2} the term \begin{equation*} J_{2, 2} = \ - \frac{1}{4\pi} \textnormal{p.v.} \int \partial_\alpha \pare{ \frac{\bra{2r\sin^2\pare{\alpha / 2} + \theta \cos(\alpha)} \bra{2r\sin^2\pare{\alpha/2}-\theta}}{4r\pare{r-\theta} \sin^2\pare{\alpha/2} +\theta^2} } \ \partial_\alpha \theta \ \textnormal{d} \alpha , \end{equation*} is the most singular of the subterms composing $ J_2 $. The term $ J_{2, 2} $ can be decomposed as in the previous section \begin{equation*} J_{2, 2} = J_{2, 2, \RN{1}} + J_{2, 2, \RN{2}} + J_{2, 2, \RN{3}}. \end{equation*} Let us denote by $ \overline{J_{2, 2, j}}, \ j=\RN{1}, \RN{2}, \RN{3} $ the most singular contributions of the terms $ J_{2, 2, j} , \ j=\RN{1}, \RN{2}, \RN{3} $, whose explicit expressions are \begin{equation*} \begin{aligned} \overline{J_{2, 2, \RN{1}} }= & \ - \frac{1}{4\pi} \textnormal{p.v.} \int \frac{ 2r^2 \sin(\alpha) + 2\theta\partial_\alpha \theta}{\pare{2 r \sin\pare{\alpha / 2} }^4} \theta^2 \partial_\alpha \theta \ \textnormal{d} \alpha , \\ \overline{J_{2, 2, \RN{2}}} = & \ \frac{1}{4\pi} \textnormal{p.v.} \int \frac{ \theta \pare{ \partial_\alpha \theta }^2 }{\pare{2r \sin\pare{\alpha / 2}}^2 } \ \textnormal{d} \alpha , \\ \overline{J_{2, 2, \RN{3}}} = & \ \frac{1}{4\pi} \textnormal{p.v.} \int \frac{ \theta \pare{ \partial_\alpha \theta }^2 }{\pare{2r \sin\pare{\alpha / 2}}^2} \ \textnormal{d} \alpha . \end{aligned} \end{equation*} Let us remark that the terms $ J_{2, 2, j} - \overline{J_{2, 2, j}}, \ j=\RN{1}, \RN{2}$ can be written as integral operators whose integration kernels are homogeneous of order zero. In particular the following bound holds true for any $ t\in \bra{0, T} $ and $ s\in\mathbb{S}^1 $ \begin{equation*} \av{J_{2, 2, j} \pare{s, t} - \overline{J_{2, 2, j}}\pare{s, t}} \leq \mathcal{P} \pare{\av{h\pare{t}}_{W^{1, \infty}}}. \end{equation*} As a consequence, they are more regular contributions and we can consider any $ \phi \in L^2 \pare{\bra{0, T}; H^1}, \ T \in\pare{ 0, T^\star } $ with unitary norm and deduce the estimate \begin{equation*} \int_0^{T} \int \pare{J_{2, 2, j} \pare{s, t} - \overline{J_{2, 2, j}}\pare{s, t}} \phi\pare{s, t}\textnormal{d} s \ \textnormal{d} t \leq \mathcal{P}\pare{\norm{h}_{L^\infty\pare{\bra{0, T}; W^{1, \infty}}}}. \end{equation*} Let $ \phi $ be as above. We will indeed bound the remaining more singular terms by duality. Let us first focus on the term $\overline{J_{2, 2, \RN{2}}}$. We compute \begin{multline*} \int \pare{ \int \frac{ \theta \pare{ \partial_\alpha \theta }^2 }{\pare{2r \sin\pare{\alpha / 2}}^2} \ \textnormal{d} \alpha } \phi\pare{s} \ds = \iint \frac{h\pare{s} - h\pare{s-\alpha}}{4 \sin^2\pare{\alpha/2}} \ \pare{h'\pare{s-\alpha}}^2 \ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2 } \textnormal{d} \alpha \ \ds \\ = - \iint \frac{h\pare{z} - h\pare{z-\beta}}{4 \sin^2\pare{\beta/2}} \ \pare{h'\pare{z}}^2 \ \frac{\phi\pare{z-\beta}}{\pare{ 1+h\pare{z-\beta} }^2 } \textnormal{d} \beta \ \textnormal{d} z, \end{multline*} where in the last identity we used the change of variables $ s-\alpha = z, \ \beta = -\alpha $.
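The symmetrization that we perform next rests on the following elementary observation, stated here for a generic kernel: if $ K\pare{\sigma, s} = - K\pare{s, \sigma} $ then, whenever both sides are absolutely convergent,
\begin{equation*}
\iint K\pare{s, \sigma} F\pare{s, \sigma} \textnormal{d} \sigma \ \textnormal{d} s = \frac{1}{2} \iint K\pare{s, \sigma} \bra{ F\pare{s, \sigma} - F\pare{\sigma, s} } \textnormal{d} \sigma \ \textnormal{d} s ,
\end{equation*}
which follows by exchanging the names of the integration variables in one half of the left hand side. In our case the kernel is $ K\pare{s, \sigma} = \pare{h\pare{s} - h\pare{\sigma}}/\pare{4 \sin^2\pare{\frac{s-\sigma}{2}}} $, which is indeed antisymmetric.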
We hence symmetrize the term $ \overline{J_{2, 2, \RN{2}}} $ as \begin{multline} \label{eq:nonlinear_cancellation} \int \pare{ \int \frac{ \theta \pare{ \partial_\alpha \theta }^2 }{\pare{2r \sin\pare{\alpha / 2}}^2} \ \textnormal{d} \alpha } \phi\pare{s} \ds \\ \begin{aligned} = & \frac{1}{2}\iint \frac{h\pare{s} - h\pare{s-\alpha}}{4 \sin^2\pare{\alpha/2}} \bra{ \pare{h'\pare{s-\alpha}}^2 \ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2} - \pare{h'\pare{s}}^2 \ \frac{\phi\pare{s-\alpha}}{\pare{ 1+h\pare{s-\alpha} }^2 }} \textnormal{d} \alpha \ \ds, \\ = & \frac{1}{2} \iint \frac{h\pare{s} - h\pare{\sigma }}{4 \sin^2\pare{\frac{s-\sigma}{2}}} \bra{ \pare{h'\pare{\sigma}}^2 \ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2} - \pare{h'\pare{s}}^2 \ \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma}}^2 } } \textnormal{d} \sigma \ \ds. \end{aligned} \end{multline} We compute \begin{multline*} \pare{h'\pare{\sigma}}^2 \ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2} - \pare{h'\pare{s}}^2 \ \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma}}^2 } \\ = \pare{h'\pare{\sigma}}^2 \pare{ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2} - \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma } }^2} } - \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma } }^2} \pare{h'\pare{\sigma}+ h'\pare{s}}\pare{h'\pare{s}- h'\pare{\sigma} } , \end{multline*} so that \begin{multline*} \int \pare{ \int \frac{ \theta \pare{ \partial_\alpha \theta }^2 }{\pare{2r \sin\pare{\alpha / 2}}^2} \ \textnormal{d} \alpha } \phi\pare{s} \ds \\ \begin{aligned} = & \ \frac{1}{2} \iint \frac{h\pare{s} - h\pare{\sigma }}{4 \sin^2\pare{\frac{s-\sigma}{2}}} \pare{h'\pare{\sigma}}^2 \pare{ \frac{\phi\pare{s}}{\pare{ 1+h\pare{s} }^2} - \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma } }^2} } \textnormal{d} \sigma \ \textnormal{d} s \\ & - \frac{1}{2} \iint \frac{h\pare{s} - h\pare{\sigma }}{4 \sin^2\pare{\frac{s-\sigma}{2}}} \frac{\phi\pare{\sigma }}{\pare{ 1+h\pare{\sigma } }^2} \pare{h'\pare{\sigma}+ h'\pare{s}}\pare{h'\pare{s}- h'\pare{\sigma} } \textnormal{d} \sigma \ \textnormal{d} s & = M_1 +M_2 . \end{aligned} \end{multline*} We start by analyzing $ M_2 $. H\"older's inequality provides the bound \begin{equation*} \begin{aligned} M_2 \leq & \ C \ \frac{\av{\phi}_{L^\infty} \av{h'}_{L^\infty}}{1-\av{h}_{L^\infty}} \pare{\iint \pare{\frac{h\pare{s} - h\pare{\sigma}}{2\sin\pare{\frac{s-\sigma}{2}}}}^2 \textnormal{d} \sigma \ \textnormal{d} s}^{1/2} \pare{\iint \pare{\frac{h' \pare{s} - h ' \pare{\sigma}}{2\sin\pare{\frac{s-\sigma}{2}}}}^2 \textnormal{d} \sigma \ \textnormal{d} s}^{1/2}, \\ = & C \ \frac{\av{\phi}_{L^\infty} \av{h'}_{L^\infty}}{1-\av{h}_{L^\infty}} \av{\Lambda^{1/2} h}_{L^2} \av{\Lambda^{1/2} h' }_{L^2} , \\ \leq & \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{\phi}_{H^1} \av{\Lambda^{1/2} h'}_{L^2}. \end{aligned} \end{equation*} We control now the term $ M_1 $ as \begin{equation*} \begin{aligned} M_1 \leq & \ C \av{h'}_{L^\infty}^2 \pare{\iint \pare{\frac{h\pare{s} - h\pare{\sigma}}{2\sin\pare{\frac{s-\sigma}{2}}}}^2 \textnormal{d} \sigma \ \textnormal{d} s}^{1/2} \pare{\iint \pare{\frac{\frac{\phi\pare{s}}{\pare{1+h\pare{s}}^2} - \frac{\phi\pare{\sigma}}{\pare{1+h\pare{\sigma}}^2}}{2\sin\pare{\frac{s-\sigma}{2}}}}^2 \textnormal{d} \sigma \ \textnormal{d} s}^{1/2}, \\ = & \ C \av{h'}_{L^\infty}^2 \av{\Lambda^{1/2} h}_{L^2} \av{\Lambda^{1/2}\pare{\frac{\phi}{\pare{1+h}^2}}}_{L^2}, \\ \leq & \ \mathcal{P}\pare{\av{h}_{W^{1, \infty}}} \av{\phi}_{H^1}.
\end{aligned} \end{equation*} The bounds provided for $ M_1 $ and $ M_2 $ allow us to argue that \begin{equation*} \int_0^{T} \overline{J_{2, 2,\RN{2}}}\pare{s, t} \ \phi\pare{s, t} \textnormal{d} s\ \textnormal{d} t \leq \mathcal{P}\pare{\norm{h}_{L^\infty\pare{\bra{0, T}; W^{1, \infty}}}} \norm{\phi}_{L^2\pare{\bra{0, T}; H^1}} \pare{\norm{\Lambda^{1/2}h'}_{L^2\pare{\bra{0, T}; L^2}} + \sqrt{T}}. \end{equation*} Similar bounds hold true for $ \overline{J_{2, 2,\RN{1}}} $. We have hence proved that \begin{equation*} \norm{J_2}_{L^2\pare{\bra{0, T}; H^{-1}}} \leq C_T \mathcal{P}\pare{\norm{h}_{L^\infty\pare{\bra{0, T}; W^{1, \infty}}}} \pare{1 + \norm{\Lambda^{1/2}h'}_{L^2\pare{\bra{0, T}; L^2}}}. \end{equation*} Following the same ideas, we can obtain appropriate bounds for the terms $ J_1 $ and $ J_3 $. These estimates combined with the result of Proposition \ref{prop:unif_bounds_L2h} allow us to conclude the proof of Proposition \ref{prop:unif_bounds_path}. \end{proof} \section{Proof of Theorem \ref{teo1}}\label{sec7} In the present section we prove the main result of the manuscript via an approximation and compactness argument. Let us consider the regularized problem \begin{equation} \label{eq:Peskin_approximated} \left\lbrace \begin{aligned} & \partial_t h_\varepsilon + \Lambda h_\varepsilon -\varepsilon h''_\varepsilon = \mathcal{N}\pare{h_\varepsilon}, \\ & \left. h_\varepsilon\right|_{t=0} = \eta^\varepsilon \star h_0, \end{aligned} \right. \end{equation} where for $ \varepsilon > 0 $, $ s\in\mathbb{S}^1 $, the function $ \eta^\varepsilon $ is the periodic heat kernel at time $\varepsilon$ and the nonlinearity $ \mathcal{N} $ is defined as \begin{equation}\label{eq:cN} \mathcal{N}\pare{h_\varepsilon} = J_1\pare{h_\varepsilon} + J_2\pare{h_\varepsilon} + J_3\pare{h_\varepsilon} + \Lambda h_\varepsilon - \frac{1}{4} h_\varepsilon \star \cos, \end{equation} and the terms $ J_k = J_k\pare{h}, \ k=1, 2, 3 $ are defined in \eqref{eq:eveqh2}. Using Picard's Theorem together with the standard mollifier approach and energy estimates (see \cite{bertozzi2001vorticity}) we can prove that, for fixed $ \varepsilon > 0 $, there exists a $ T_\varepsilon \in \left( 0, \infty\right] $ and a maximal solution $ h_\varepsilon $ of \eqref{eq:Peskin_approximated} which belongs to the space \begin{align}\label{eq:analytic_regularity_approx_solutions} h_\varepsilon \in \mathcal{C}^1\pare{\left[ 0, T_\varepsilon \right) ; H^3}. \end{align} At this point these approximate solutions may be defined only locally in time. Furthermore, using that our approximation scheme is merely a vanishing viscosity approach, this solution satisfies the same \emph{a priori} bounds in $L^\infty\pare{ 0,T;W^{1,\infty} } $ and $L^2 \pare{ 0,T;H^{3/2} }$ stated in Propositions \ref{prop:W1infty_enest}, \ref{prop:unif_bounds_L2h} and \ref{prop:unif_bounds_path}. Moreover, we can prove the following $ L^2 $ estimate for \eqref{eq:Peskin_approximated}. We have indeed that for $ t\in\bra{0, T_\varepsilon } $ \begin{multline*} \frac{1}{2} \ddt \av{h_\varepsilon\pare{t}}_{L^2}^2 + \av{\Lambda^{1/2} h_\varepsilon\pare{ t}}_{L^2}^2 +\varepsilon \av{h_\varepsilon ' \pare{ t}}_{L^2}^2 \leq \av{\mathcal{N}\pare{h_\varepsilon}}_{H^{-1}} \av{h_\varepsilon}_{H^1} \\ \leq \frac{C}{\varepsilon} \av{\mathcal{N}\pare{ h_\varepsilon \pare{t} }}_{H^{-1}}^2 + \frac{\varepsilon }{2} \pare{ \av{h_\varepsilon\pare{t}}_{L^2}^2 + \av{h_\varepsilon '\pare{t}}_{L^2}^2 }.
\end{multline*} As a consequence, an integration in time together with Gr\"onwall's inequality gives that \begin{equation*} \av{h_\varepsilon\pare{t}}_{L^2}^2 + \int_0 ^t \pare{\av{\Lambda^{1/2} h_\varepsilon\pare{\tau}}_{L^2}^2 +\varepsilon \av{h_\varepsilon ' \pare{\tau}}_{L^2}^2 } \textnormal{d} \tau \leq \av{h_0}_{L^2}^2 e^{\frac{C t}{\varepsilon}} + \frac{C e^{\frac{C t}{\varepsilon}} }{\varepsilon} \av{ \mathcal{N}\pare{h_\varepsilon}}_{L^2\pare{\bra{0, t}; H^{-1}}}^2. \end{equation*} The computations performed in the proof of Proposition \ref{prop:unif_bounds_path} assure us that $ \mathcal{N} \pare{h_\varepsilon} \in L^2 \pare{\bra{0, T_\varepsilon}; H^{-1}}$, so that \begin{align*} \av{h_\varepsilon\pare{t}}_{L^2}^2 + \int_0 ^t \pare{\av{\Lambda^{1/2} h_\varepsilon\pare{\tau}}_{L^2}^2 +\varepsilon \av{h_\varepsilon ' \pare{\tau}}_{L^2}^2 } \textnormal{d} \tau \leq \av{h_0}_{L^2}^2 + \frac{C\pare{ T_\varepsilon, \varepsilon}}{\varepsilon} \pare{1 + \av{h_\varepsilon}_{L^\infty\pare{\bra{0, T_\varepsilon}; W^{1, \infty}}}^N } , && N\gg 1. \end{align*} We recall now the result of Proposition \ref{prop:W1infty_enest}, which ensures that $ \norm{h_\varepsilon}_{L^\infty\pare{\bra{0, T_\varepsilon}; W^{1, \infty}}}\leq c_0 $. This allows us to bound the right hand side of the above inequality with a quantity which is independent of $ h_\varepsilon $. A continuation argument for ODEs allows us to bootstrap the result, thus proving the following bound \begin{equation*} \av{h_\varepsilon\pare{t}}_{L^2}^2 + \int_0 ^t \pare{\av{\Lambda^{1/2} h_\varepsilon\pare{\tau}}_{L^2}^2 +\varepsilon \av{h_\varepsilon ' \pare{\tau}}_{L^2}^2 } \textnormal{d} \tau \leq c_0^2 + \frac{C\pare{T, \varepsilon}}{\varepsilon} \pare{1 + c_0^N } . \end{equation*} Similarly, using standard energy estimates together with the ideas in the previous sections and Propositions \ref{prop:W1infty_enest}, \ref{prop:unif_bounds_L2h} and \ref{prop:unif_bounds_path} it is possible to prove that if \begin{equation*} \av{h_\varepsilon ^{\pare{n-1}} \pare{t}}_{L^2}^2 + \int_0^t \pare{ \av{\Lambda^{1/2}h_\varepsilon^{\pare{n-1}} \pare{\tau}}_{L^2}^2 + \frac{\varepsilon}{2} \av{h_\varepsilon ^{\pare{n}} \pare{\tau}}_{L^2}^2 } \textnormal{d} \tau \leq C_{n-1}(\varepsilon, T , c_0) , \end{equation*} then \begin{equation*} \av{h_\varepsilon ^{\pare{n}} \pare{t}}_{L^2}^2 + \int_0^t \pare{ \av{\Lambda^{1/2}h_\varepsilon^{\pare{n}} \pare{\tau}}_{L^2}^2 + \frac{\varepsilon}{2} \av{h_\varepsilon ^{\pare{n+1}} \pare{\tau}}_{L^2}^2 } \textnormal{d} \tau \leq C_{n}(\varepsilon, T , c_0) , \end{equation*} for any $ n\geq 2 $, where the constants $ C_j\pare{\varepsilon, T}, \ j\geq 2 $ are \textit{not} uniformly bounded in $ \varepsilon $. \\ We can thus find that \begin{equation}\label{eq:global_est_H3_approx_Peskin} \av{h_\varepsilon\pare{t}}_{H^3}^2 + \int_0^t \pare{ \av{\Lambda^{1/2}h_\varepsilon\pare{\tau}}_{H^3}^2 + \frac{\varepsilon}{2} \av{h_\varepsilon ' \pare{\tau}}_{H^3}^2 } \textnormal{d} \tau \leq C(\varepsilon, T , c_0 ). \end{equation} In particular, we find that the approximate solutions are smooth and global in time. The global bounds of \eqref{eq:global_est_H3_approx_Peskin} allow us to apply the regularity results stated in Propositions \ref{prop:W1infty_enest}, \ref{prop:unif_bounds_L2h} and \ref{prop:unif_bounds_path}. Using a standard Aubin-Lions compactness theorem (cf.
\cite[Corollary 4]{Simon87}), we find that \begin{equation} \label{eq:weak_convergence} \begin{aligned} h_\varepsilon\rightarrow h & \text{ in }L^2 \pare{ 0,T;H^{\frac{3}{2}-\vartheta} }, \ \vartheta > 0, \\ h_\varepsilon \rightharpoonup h &\text{ in }L^2 \pare{ 0,T;H^{3/2} }, \\ h_\varepsilon \xrightharpoonup{\ast} h & \text{ in }L^\infty \pare{ 0,T;L^\infty } , \\ h'_\varepsilon \xrightharpoonup{\ast} h' & \text{ in }L^\infty \pare{ 0,T;L^\infty }. \end{aligned} \end{equation} We now take $\varphi\in C^\infty_{c}\pare{ [0,T)\times\mathbb{S}^1 } $ and consider the weak formulation of the approximate problems \begin{multline}\label{eq:weak_eq_} -\int \varphi(s,0)\eta^\varepsilon \star h_0(s) \textnormal{d} s \\ +\int_0^T\int \Pare{ -\partial_t\varphi (s,t) h_\varepsilon (s,t) + \Lambda\varphi (s,t) h_\varepsilon(s,t) -\varepsilon \varphi''(s,t)h_\varepsilon(s,t) - \mathcal{N}\pare{h_\varepsilon(s,t)}\varphi(s,t) } \textnormal{d} s \ \textnormal{d} t=0. \end{multline} The previous regularity and convergence results are enough in order to pass to the limit in the nonlinear terms. Let us sketch why this is so. Let us use the notation $ \theta_\varepsilon = h_\varepsilon\pare{s}-h_\varepsilon\pare{s-\alpha} $ and $ r_\varepsilon = 1+h_\varepsilon\pare{s} $. We use an argument similar to the one stated in Section \ref{sec:path} to argue that the term \begin{equation*} Z_\varepsilon \pare{s, t} = \int \frac{\theta_\varepsilon \pare{\partial_\alpha \theta_\varepsilon}^2}{\pare{ 2 r_\varepsilon \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha , \end{equation*} is the most singular contribution of the many composing $ \mathcal{N}\pare{h_\varepsilon\pare{s}} $, hence, defining \begin{equation*} Z \pare{s, t} = \int \frac{\theta \pare{\partial_\alpha \theta }^2}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha , \end{equation*} we aim to prove that \begin{equation*} \mathcal{Z}_\varepsilon = \int_0^T \int \pare{Z_\varepsilon\pare{s, t} - Z\pare{s, t}}\varphi\pare{s, t}\textnormal{d} s \ \textnormal{d} t \xrightarrow{\varepsilon\to 0}0, \end{equation*} for each $\varphi\in C^\infty_{c} \pare{ [0,T)\times\mathbb{S}^1 }$. This will establish the weak convergence for the most singular nonlinear term in $ \mathcal{N} $. We write $ \mathcal{Z}_\varepsilon = \mathcal{Z}_{\varepsilon, 1} + \mathcal{Z}_{\varepsilon, 2} $ where \begin{align*} \mathcal{Z}_{\varepsilon, 1} & = \int_0^T \int \pare{ \int \frac{\pare{ \theta_\varepsilon -\theta }\pare{\partial_\alpha \theta_\varepsilon}^2}{\pare{ 2 r_\varepsilon \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t, \\ \mathcal{Z}_{\varepsilon, 2} & = \int_0^T \int \pare{ \int \frac{\theta \pare{\partial_\alpha \theta_\varepsilon + \partial_\alpha \theta} \pare{\partial_\alpha \theta_\varepsilon - \partial_\alpha \theta}}{\pare{ 2 r_\varepsilon \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t . \end{align*} Let us symmetrize the term $ \mathcal{Z}_{\varepsilon, 1} $.
This gives that \begin{align*} \mathcal{Z}_{\varepsilon, 1} = & \ \int_0^T \iint \frac{\pare{\theta_\varepsilon - \theta} h'\pare{\sigma} \varphi\pare{s}}{4\pare{1+h_\varepsilon\pare{s}}^2 \sin^2\pare{\frac{s-\sigma}{2}}} \textnormal{d} s \textnormal{d} \sigma \ \textnormal{d} t - \int_0^T \iint \frac{\pare{\theta_\varepsilon - \theta} h'\pare{s} \varphi\pare{ \sigma }}{4\pare{1+h_\varepsilon\pare{\sigma}}^2 \sin^2\pare{\frac{s-\sigma}{2}}} \textnormal{d} s \textnormal{d} \sigma \ \textnormal{d} t, \\ = & \ \int_0^T \iint \frac{\theta_\varepsilon - \theta}{4 \sin^2\pare{\frac{s-\sigma}{2}}} \set{ \frac{h'\pare{\sigma} \varphi\pare{s}}{2\pare{1+h_\varepsilon\pare{s}}^2} - \frac{h'\pare{s} \varphi\pare{\sigma}}{2\pare{1+h_\varepsilon\pare{\sigma }}^2} } \textnormal{d} s \textnormal{d} \sigma \ \textnormal{d} t , \\ = & \ \int_0^T \iint \frac{\theta_\varepsilon - \theta}{2 \sin\pare{\frac{s-\sigma}{2}}}\frac{1}{2 \sin\pare{\frac{s-\sigma}{2}}} \\ & \qquad \qquad \times \set{ h'\pare{\sigma} \bra{ \frac{ \varphi\pare{s}}{2\pare{1+h_\varepsilon\pare{s}}^2} - \frac{ \varphi\pare{\sigma }}{2\pare{1+h_\varepsilon\pare{\sigma }}^2} } - \frac{\varphi\pare{\sigma}}{2\pare{1+h_\varepsilon\pare{\sigma }}^2} \bra{h'\pare{\sigma} - h'\pare{s}} } \textnormal{d} s \textnormal{d} \sigma \ \textnormal{d} t . \end{align*} From the above integral equality, using H\"older's inequality, we obtain that \begin{equation*} \av{\mathcal{Z}_{\varepsilon, 1}} \leq C \int_0^T \av{\Lambda^{1/2}\pare{h_\varepsilon - h}}_{L^2} \set{\av{\Lambda^{1/2}h'}_{L^2} \av{\frac{\varphi}{\pare{1+h_\varepsilon}^2}}_{L^\infty} + \av{h'}_{L^\infty} \av{\Lambda^{1/2}\pare{ \frac{\varphi}{\pare{1+h_\varepsilon}^2} }}_{L^2} } \textnormal{d} t. \end{equation*} Hence standard computations show that the convergence proved in \eqref{eq:weak_convergence} is sufficient to establish that \begin{equation*} \mathcal{Z}_{\varepsilon, 1}\xrightarrow{\varepsilon\to 0}0. \end{equation*} We compute \begin{align*} \mathcal{Z}_{\varepsilon, 2}&=\mathcal{Z}_{\varepsilon, 2,\RN{1}}+\mathcal{Z}_{\varepsilon, 2,\RN{2}} \end{align*} where \begin{align*} \mathcal{Z}_{\varepsilon, 2,\RN{1}}& = \int_0^T \int \pare{ \int \frac{\theta \pare{\partial_\alpha \theta_\varepsilon + \partial_\alpha \theta} \pare{\partial_\alpha \theta_\varepsilon - \partial_\alpha \theta}}{\pare{ 2 r_\varepsilon \sin\pare{\alpha / 2} }^2 }-\frac{\theta \pare{\partial_\alpha \theta_\varepsilon + \partial_\alpha \theta} \pare{\partial_\alpha \theta_\varepsilon - \partial_\alpha \theta}}{\pare{ 2 r\sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t ,\\ \mathcal{Z}_{\varepsilon, 2,\RN{2}}& = \int_0^T \int \pare{ \int \frac{\theta \pare{ \pare{\partial_\alpha \theta_\varepsilon}^2 - \pare{\partial_\alpha \theta}^2 }}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t . \end{align*} The term $\mathcal{Z}_{\varepsilon, 2,\RN{1}}$ can be handled as $\mathcal{Z}_{\varepsilon, 1}$. For the term $\mathcal{Z}_{\varepsilon, 2,\RN{2}}$ we have to use a weak-strong convergence type argument.
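Let us point out, before performing this decomposition, that the strong convergence in \eqref{eq:weak_convergence} implies in particular that $ h'_\varepsilon \to h' $ strongly in $ L^2\pare{0, T; L^2} $ and in $ L^2\pare{0, T; \dot{H}^{\frac{1-\delta}{2}}} $ for every fixed $ 0 < \delta \ll 1 $ (it suffices to choose $ \vartheta = \delta/2 $ in \eqref{eq:weak_convergence}); these are the convergences that, combined with the uniform $ W^{1, \infty} $ bounds, allow us to pass to the limit in $ \mathcal{Z}_{\varepsilon, 2, \RN{2}} $.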
We decompose it as \begin{align*} \mathcal{Z}_{\varepsilon, 2,\RN{2}}& = \mathcal{A}_{\varepsilon, 1}+\mathcal{A}_{\varepsilon, 2}, \end{align*} with \begin{align*} \mathcal{A}_{\varepsilon, 1}&=\int_0^T \int \pare{ \int \frac{\theta \pare{\partial_\alpha \theta_\varepsilon-\partial_\alpha \theta}\partial_\alpha \theta_\varepsilon}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t \\ \mathcal{A}_{\varepsilon, 2}&=\int_0^T \int \pare{ \int \frac{\theta \partial_\alpha \theta \pare{\partial_\alpha \theta_\varepsilon - \partial_\alpha \theta}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t. \end{align*} Let us focus first on the second term. Using $$ \partial_\alpha \theta=-\partial_\alpha h(s-\alpha)=h'(s-\alpha), $$ we find that $$ -\partial_\alpha \theta=-h'(s-\alpha)+h'(s)-h'(s)=\theta'-h'(s). $$ Using this we can equivalently write \begin{align*} \mathcal{A}_{\varepsilon, 2}& = \int_0^T \int \pare{ \int \frac{\theta \partial_\alpha \theta \pare{\partial_\alpha \theta_\varepsilon - \partial_\alpha \theta}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t\\ &=\int_0^T \int \pare{ \int \frac{\theta \left(\theta'-h'(s)\right) \pare{\theta'_\varepsilon-h'_\varepsilon(s) - \theta'+h'(s)}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha } \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t. \end{align*} We can decompose it as \begin{align*} \mathcal{B}_{\varepsilon, 1}&=\int_0^T \int \int \frac{\theta \theta' \pare{\theta'_\varepsilon- \theta'}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t\\ \mathcal{B}_{\varepsilon, 2}&=\int_0^T \int \int \frac{\theta \theta' \pare{h'(s)-h'_\varepsilon(s)}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t\\ \mathcal{B}_{\varepsilon, 3}&=-\int_0^T \int \int \frac{\theta h'(s) \pare{\theta'_\varepsilon- \theta'}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t\\ \mathcal{B}_{\varepsilon, 4}&=-\int_0^T \int \int \frac{\theta h'(s) \pare{h'(s)-h'_\varepsilon(s)}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t. \end{align*} The term $\mathcal{B}_{\varepsilon, 4}$ can be handled easily. We observe that it can be rewritten as $$ \mathcal{B}_{\varepsilon, 4}=C\int_0^T \int \frac{\Lambda h(s) h'(s)}{r} \pare{h'(s)-h'_\varepsilon(s)}\varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t\leq C\|h-h_\varepsilon\|_{L^2_T H^1_x}\|h\|_{L^2_T H^1_x}\|h\|_{L^\infty_T W^{1,\infty}_x}\xrightarrow{\varepsilon\to 0}0. $$ For the term $\mathcal{B}_{\varepsilon, 2}$ we proceed as follows, \begin{equation*} \mathcal{B}_{\varepsilon, 2}=\int_0^T \int \int \frac{\theta \theta' \pare{ h'(s)-h'_\varepsilon(s)}}{\pare{ 2 r \sin\pare{\alpha / 2} }^2 } \textnormal{d}\alpha \varphi\pare{s, t} \textnormal{d} s \ \textnormal{d} t \\ \leq C\norm{h'}_{L^\infty_T L^\infty_x} \|\Lambda^{1/2}h'\|_{L^2_T L^2_x}\|h'-h'_\varepsilon\|_{L^2_T L^2_x} \xrightarrow{\varepsilon\to 0}0. 
\end{equation*} Taking $0<\delta\ll1$ we also compute \begin{align*} \mathcal{B}_{\varepsilon, 3}&\leq C\|h'\|^2_{L^\infty_T L^\infty_x}\int_0^T \iint \frac{ \av{\theta'_\varepsilon- \theta'}}{ |\sin\pare{\alpha / 2}|^{1-\delta/2+\delta/2} } \textnormal{d}\alpha \textnormal{d} s \ \textnormal{d} t\\ &\leq C\|h'\|_{L^\infty_T L^\infty_x}^2 \left(\int \frac{1}{|\sin\pare{\alpha / 2}|^{\delta}}\textnormal{d} \alpha\right)^{1/2} \int_0^T \left(\iint \frac{ \av{\theta'_\varepsilon- \theta'}^2}{ |\sin\pare{\alpha / 2}|^{2-\delta} } \textnormal{d}\alpha \textnormal{d} s\right)^{1/2} \ \textnormal{d} t \\ &\leq C \sqrt{T} \|h'\|_{L^\infty_T L^\infty_x}^2 \norm{h'_\varepsilon - h'}_{L^2_T \dot{H}^{\frac{1-\delta}{2}}_x} \xrightarrow{\varepsilon\to 0}0. \end{align*} Finally, the last term $\mathcal{B}_{\varepsilon, 1}$ can be handled as $\mathcal{B}_{\varepsilon, 3}$. As a consequence, $h$ is a weak solution of \eqref{eq:eveqh2}. In addition, the maximum principle $$ \av{h(t)}_{W^{1,\infty}}\leq \av{h_0}_{W^{1,\infty}}\quad\forall\,0\leq t\leq T $$ and the exponential decay $$ \av{h'(t)}_{L^{\infty}}\leq \av{h'_0}_{L^{\infty}}e^{-\delta t}\quad\forall\,0\leq t\leq T $$ follow from the application of Proposition \ref{prop:W1infty_enest} to the regularized problem and the weak-$*$ lower semicontinuity of the norm. We argue as in \cite[Lemma 4.3]{CCGS12} in order to state that, since $ h $ is a uniform limit of continuous functions we obtain as well that \begin{equation*} h\in \mathcal{C}\pare{\bra{0, T}\times\mathbb{S}^1}. \end{equation*} $ \Box $ \section*{Acknowledgments} The research of F.G. has been partially supported by the grant MTM2017-89976-P (Spain) and by the ERC through the Starting Grant project H2020-EU.1.1.-639227. \noindent The research of R.G.B. is supported by the project ”Mathematical Analysis of Fluids and Applications” with reference PID2019-109348GA-I00/AEI/ 10.13039/501100011033 and acronym ``MAFyA” funded by Agencia Estatal de Investigaci\'on and the Ministerio de Ciencia, Innovacion y Universidades (MICIU). \noindent The research of S.S. is supported by the ERC through the Starting Grant project H2020-EU.1.1.-639227. \begin{footnotesize} \end{footnotesize} \end{document}
\begin{definition}[Definition:Similar Figures] Two rectilineal figures are '''similar''' {{iff}}: :They have corresponding angles, all of which are equal :They have corresponding sides, all of which are proportional. \end{definition}
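As a simple illustration: two triangles with sides $3, 4, 5$ and $6, 8, 10$ have equal corresponding angles and corresponding sides in the common ratio $1 : 2$, so they are similar figures; by contrast, a $1 \times 2$ rectangle and a $1 \times 3$ rectangle have equal corresponding angles but sides that are not proportional, so they are not similar.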
\begin{document} \title{ Diffusion limit and the optimal convergence rate of the Vlasov-Poisson-Fokker-Planck system } \author{ Mingying Zhong$^*$\\[2mm] \emph{\small\it $^*$College of Mathematics and Information Sciences, Guangxi University, P.R.China.}\\ {\small\it E-mail:\ [email protected]}\\[5mm] } \date{ } \pagestyle{myheadings} \markboth{Vlasov-Poisson-Fokker-Planck system } { M.-Y. Zhong } \maketitle \thispagestyle{empty}
\begin{abstract}\noindent In the present paper, we study the diffusion limit of the classical solution to the Vlasov-Poisson-Fokker-Planck (VPFP) system with initial data near a global Maxwellian. We prove the convergence and establish the optimal convergence rate of the global strong solution to the VPFP system towards the solution to the drift-diffusion-Poisson system, based on a spectral analysis with a precise estimate of the initial layer.

{\bf Key words}. Vlasov-Poisson-Fokker-Planck system, spectral analysis, diffusion limit, convergence rate.

{\bf 2010 Mathematics Subject Classification}. 76P05, 82C40, 82D05. \end{abstract} \tableofcontents

\section{Introduction}
The Vlasov-Poisson-Fokker-Planck (VPFP) system can be used to model the time evolution of dilute charged particles governed by the electrostatic force coming from their (self-consistent) Coulomb interaction. The collision term in the kinetic equation is the Fokker-Planck operator that describes the Brownian force. In general, the rescaled VPFP system defined on $\R^3_x\times\R^3_v$ takes the form
\bgr \dt F_{\eps}+\frac{1}{\eps}v\cdot\Tdx F _{\eps}+\frac{1}{\eps}\Tdx \Phi_{\eps}\cdot\Tdv F_{\eps}=\frac{1}{\eps^2}\Tdv\cdot(\Tdv F_{\eps}+vF_{\eps}),\label{VPFP1}\\ \Delta_x\Phi_{\eps}=\intr F_{\eps} dv-1,\label{VPFP3}\\ F_{\eps}(0,x,v)=F_{0}(x,v), \label{VPFP3i} \egr
where $\eps>0$ is a small parameter related to the mean free path, $F_{\eps}=F_{\eps}(t,x,v)$ is the number density function of the charged particles, and $\Phi_{\eps}(t,x)$ denotes the electric potential. Throughout this paper, we assume $\eps\in(0,1)$.

In this paper, we study the diffusion limit of the strong solution to the rescaled VPFP system~\eqref{VPFP1}--\eqref{VPFP3i} with initial data near the normalized Maxwellian $M(v)$ given by $$ M=M(v)=\frac1{(2\pi)^{3/2}}e^{-\frac{|v|^2}2},\quad v\in\R^3. $$ Our motivation is to prove the convergence and establish the convergence rate of strong solutions $(F_{\eps}, \Phi_{\eps})$ to \eqref{VPFP1}--\eqref{VPFP3i} towards $(N M, \Phi)$, where $(N,\Phi)(t,x)$ is the solution of the following drift-diffusion-Poisson (DDP) system:
\bgr \dt N-\Tdx\cdot(\Tdx N-N\Tdx \Phi)=0, \label{DDP1}\\ \Delta_x\Phi=N-1,\label{DDP2}\\ N(0,x)=N_0(x)=\intr F_0 dv.\label{DDP3} \egr
Here, $N=N(t,x)$ stands for the density of the charged particles in the ionic solution and $\Phi(t,x)$ is the self-consistent electric potential. The DDP system \eqref{DDP1}--\eqref{DDP3} provides a continuum description of the evolution of the charged particles via macroscopic quantities such as the particle density and the current density, which are much cheaper to handle numerically. Such continuum models can be (formally) derived from kinetic models by coarse graining methods, like the moment method, the Hilbert expansion method and so on \cite{DDP-5,Markowich,Maxwell}; a formal moment computation of this type is sketched below. Concerning the mathematical analysis, the initial value problem and the initial boundary value problem of the DDP system have been extensively studied; we refer to \cite{DDP-0,DDP-1,DDP-2,DDP-3,DDP-4,DDP-5,DDP-6}.
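Let us sketch this formal moment computation; it is only heuristic, assumes the local Maxwellian closure $\intr v\otimes v\,F_{\eps}\,dv\approx \rho_{\eps}I$ with $\rho_{\eps}=\intr F_{\eps}\,dv$, and neglects terms that are formally of higher order in $\eps$. Integrating \eqref{VPFP1} in $v$, and multiplying \eqref{VPFP1} by $v$ before integrating, we obtain, with $J_{\eps}=\intr vF_{\eps}\,dv$,
$$ \dt \rho_{\eps}+\frac1{\eps}\Tdx\cdot J_{\eps}=0,\qquad \eps^2\dt J_{\eps}+\eps\Tdx\cdot\intr v\otimes v\,F_{\eps}\,dv-\eps\rho_{\eps}\Tdx\Phi_{\eps}=-J_{\eps}. $$
Under the above closure, the second equation yields $\frac1{\eps}J_{\eps}\approx-\Tdx\rho_{\eps}+\rho_{\eps}\Tdx\Phi_{\eps}$, and inserting this into the first equation gives $\dt\rho_{\eps}-\Tdx\cdot(\Tdx\rho_{\eps}-\rho_{\eps}\Tdx\Phi_{\eps})\approx0$, which, letting $\eps\to0$, is exactly the drift-diffusion equation \eqref{DDP1}.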
The existence and uniqueness of solutions to the initial value problem or the initial boundary value problem of the VPFP system have been investigated in the literature. We refer to \cite{VPFP-1,VPFP-2,VPFP-3,Hwang} for results on the classical solutions and to \cite{VPFP-4,VPFP-5,VPFP-6,VPFP-7} for weak solutions and their regularity. Concerning the long-time behavior of the VPFP system, we refer to \cite{time-1,time-2,time-3,Hwang}.

The diffusion limit of the weak solution to the VPFP system has been studied extensively in the literature (cf. \cite{FL-1,FL-2,FL-3,FL-4,FL-5,FL-6}). In \cite{FL-1,FL-2,FL-5}, the authors proved the convergence of suitable solutions to the single species VPFP system towards a solution to the drift-diffusion-Poisson model in the whole space. In \cite{FL-6}, the authors established a global convergence result of the renormalized solution to the multiple species VPFP system towards a solution to the Poisson-Nernst-Planck system. The spectrum structure and the optimal decay rate of the classical solution to the VPFP system were investigated in \cite{Li3}. However, in contrast to the works on weak solutions~\cite{FL-1,FL-2,FL-3,FL-4,FL-5,FL-6}, the diffusion limit of the classical solution to the VPFP system \eqref{VPFP1}--\eqref{VPFP3i} has not been established, despite its importance. On the other hand, the convergence rate of the weak solution to the VPFP system towards the solution to the DDP system has not been obtained in \cite{FL-1,FL-2,FL-3,FL-4,FL-5,FL-6}. Therefore, it is natural to establish an explicit convergence rate of the solution to the VPFP system towards its diffusion limit.

First of all, the VPFP system \eqref{VPFP1}--\eqref{VPFP3} has a stationary solution $(F_*,\Phi_*)=(M(v),0)$. Hence, we define the perturbation of $F_{\eps}$ as $$ F_{\eps}=M+\sqrt{M}f_{\eps}. $$ Then the Cauchy problem of the VPFP system~\eqref{VPFP1}--\eqref{VPFP3i} can be rewritten as
\bgr \dt f_{\eps}+\frac{1}{\eps}v\cdot\Tdx f_{\eps}-\frac{1}{\eps}v\sqrt{M}\cdot\Tdx \Phi_{\eps}-\frac{1}{\eps^2}Lf_{\eps} =\frac{1}{\eps} G(f_{\eps}),\label{VPFP4}\\ \Delta_x\Phi_{\eps}=\intr f_{\eps}\sqrt{M}dv,\label{VPFP5}\\ f_{\eps}(0,x,v)=f_0(x,v)=:\frac{F_0-M}{\sqrt{M}} ,\label{VPFP6} \egr
where
\bma Lf_{\eps}&= \Delta_vf_{\eps}-\frac{|v|^2}4f_{\eps}+\frac32f_{\eps},\\ G(f_{\eps})& =\frac12 (v\cdot\Tdx\Phi_{\eps})f_{\eps}- \Tdx\Phi_{\eps}\cdot\Tdv f_{\eps} . \ema
Also, we define the perturbation of $N$ as $$n=N-1.$$ Then the Cauchy problem of the DDP system~\eqref{DDP1}--\eqref{DDP3} becomes
\bgr \dt n-\Delta_x n+n=-\Tdx\cdot(n \Tdx \Phi), \label{n_1}\\ \Delta_x\Phi=n,\\ n(0,x)=n_0=\intr f_0\sqrt{M} dv. \label{n_2} \egr
The aim of this paper is to prove the convergence and establish the convergence rate of strong solutions $(f_{\eps}, \Phi_{\eps})$ to \eqref{VPFP4}--\eqref{VPFP6} towards $(n\sqrt M , \Phi)$, where $(n,\Phi)(t,x)$ is the solution of the DDP system \eqref{n_1}--\eqref{n_2}. In general, the convergence is not uniform near $t=0$; the breakdown of the uniform convergence near $t=0$ is the initial layer of the VPFP system. However, we show that if the initial data satisfy $f_0=n_0\sqrt M$, then the convergence is uniform up to $t=0$.

We denote by $L^2(\R^3)$ the Hilbert space of complex-valued functions $f(v)$ on $\R^3$ with the inner product and the norm $$ (f,g)=\intr f(v)\overline{g(v)}dv,\quad \|f\|=\(\int_{\R^3}|f(v)|^2dv\)^{1/2}. $$ The nullspace of the operator $L$, denoted by $N_0$, is the subspace spanned by $\sqrt{M}$.
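For later reference, we record the elementary identities behind this statement; a direct computation with $\sqrt M=(2\pi)^{-3/4}e^{-|v|^2/4}$ gives
$$ \Tdv \sqrt{M}=-\frac v2\sqrt{M},\qquad \Delta_v\sqrt{M}=\Big(\frac{|v|^2}4-\frac32\Big)\sqrt{M}, $$
and hence
$$ L\sqrt{M}=0,\qquad L(v_j\sqrt{M})=-v_j\sqrt{M},\quad j=1,2,3, $$
where the second family of identities will be used repeatedly in the spectral analysis of Section \ref{sect2}.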
Let $P_{0}$ be the projection operator from $L^2(\R^3_v)$ onto the subspace $N_0$ with \begin{equation} P_{0}f=(f,\sqrt M)\sqrt M, \quad P_1=I-P_{0}. \label{Pdr} \end{equation} Corresponding to the linearized operator $L$, we define the following dissipation norm: \begin{equation} \|f\|^2_{L^2_\sigma}=\|\Tdv f\|^2+\|v f\|^2 . \end{equation} This norm is stronger than the $L^2$-norm because \begin{equation} \|f\|^2_{L^2_\sigma}\ge 2\|\Tdv f\|\|vf\|\ge -2(\Tdv f,vf)=3\|f\|^2, \end{equation} where the last identity follows from integration by parts. From \cite{Yu}, the linearized operator $L$ is non-positive and locally coercive in the sense that there is a constant $0<\mu<1$ such that \begin{equation} (Lf,f)\le -\|P_1f\|^2,\quad (Lf,f)\leq - \mu\|P_1f\|^2_{L^2_\sigma} .\label{L_4} \end{equation}

\noindent\textbf{Notations:} \ \ Before stating the main results of this paper, we list some notation. For any $\alpha=(\alpha_1,\alpha_2,\alpha_3)\in \mathbb{N}^3$ and $\beta=(\beta_1,\beta_2,\beta_3)\in \mathbb{N}^3$, we denote $$\dxa=\pt^{\alpha_1}_{x_1}\pt^{\alpha_2}_{x_2}\pt^{\alpha_3}_{x_3},\quad \dvb=\pt^{\beta_1}_{v_1}\pt^{\beta_2}_{v_2}\pt^{\beta_3}_{v_3}.$$ The Fourier transform of $f=f(x,v)$ is denoted by $$\hat{f}(\xi,v)=\mathcal{F}f(\xi,v)=\frac1{(2\pi)^{3/2}}\intr f(x,v)e^{- i x\cdot\xi}dx.$$ In the following, denote by $\|\cdot\|_{L^2_{x}(L^2_{v})}$ the norm of the function space $L^2(\R^3_x\times \R^3_v)$, and denote by $\|\cdot\|_{L^2_x}$ and $\|\cdot\|_{L^2_v}$ the norms of the function spaces $L^2(\R^3_x)$ and $L^2(\R^3_v)$ respectively. For any integer $k\ge 1$, we denote by $\|\cdot\|_{H^k_x(L^2_v)}$ and $\|\cdot\|_{H^k_x }$ the norms of the function spaces $H^k(\R^3_x,L^2(\R^3_v))$ and $H^k(\R^3_x)$ respectively.

Now we are ready to state the main results of this paper.

\begin{thm}\label{existence} There exist small positive constants $\delta_0$ and $\eps_0$ such that if the initial data $f_0$ satisfies $\|f_{0}\|_{H^4_x(L^2_v)} +\|(f_0,\sqrt{M})\|_{L^{1}_x}\le \delta_0$, then for any $\eps\in (0,\eps_0)$, there exists a unique global solution $f_{\eps}(t)$ to the VPFP system \eqref{VPFP4}--\eqref{VPFP6} satisfying \begin{equation} \|f_{\eps}(t) \|_{H^4_x(L^2_v)}+\|\Tdx\Phi_{\eps}(t) \|_{H^4_x }\le C\delta_0e^{-\frac t2},\quad t>0, \end{equation} where $\Tdx\Phi_{\eps}(t)=\Tdx\Delta^{-1}_x(f_{\eps},\sqrt{M})$ and $C>0$ is a constant independent of $\eps$.

There exists a small constant $\delta_0>0$ such that if $\|f_{0}\|_{H^4_x(L^2_v)} +\|(f_0,\sqrt{M})\|_{L^{1}_x}\le \delta_0$, then the DDP system \eqref{n_1}--\eqref{n_2} admits a unique global solution $n(t)$ satisfying \begin{equation} \|n (t)\|_{H^4_x}+\|\Tdx\Phi(t)\|_{H^4_x}\le C\delta_0 e^{-\frac t2}, \end{equation} where $\Tdx\Phi(t)=\Tdx\Delta^{-1}_xn(t)$ and $C>0$ is a constant. \end{thm}

\begin{thm}\label{thm1.1} Let $(f_{\eps},\Phi_{\eps})=(f_{\eps}(t,x,v),\Phi_{\eps}(t,x)) $ be the global solution to the VPFP system \eqref{VPFP4}--\eqref{VPFP6}, and let $(n,\Phi)=(n,\Phi)(t,x)$ be the global solution to the DDP system \eqref{n_1}--\eqref{n_2}. Then, there exist two small positive constants $\delta_0$ and $\eps_0$ such that if the initial data $f_0$ satisfies $\|f_{0}\|_{H^4_x(L^2_v)}+\|\Tdv f_{0}\|_{H^3_x(L^2_v)}+\||v | f_{0}\|_{H^3_x(L^2_v)}+\|(f_0,\sqrt{M})\|_{L^{1}_x}\le \delta_0$, then for any $\eps\in (0,\eps_0)$, \begin{equation} \|f_{\eps}(t)-n(t)\sqrt M\|_{H^2_x(L^2_v)}+\|\Tdx\Phi_{\eps}(t)-\Tdx\Phi(t)\|_{H^2_x }\le C\delta_0\(\eps e^{-\frac{t}{2}}+e^{-\frac{at}{\eps^2}}\), \label{limit0}\end{equation} where $a>0$ and $C>0$ are two constants independent of $\eps$.
Moreover, if the initial data $f_0=n_0 \sqrt M\in N_0$ and $\|n_{0}\|_{H^4_x} +\|n_0\|_{L^{1}_x}\le \delta_0$, then we have \begin{equation} \|f_{\eps}(t)-n(t)\sqrt M\|_{H^2_x(L^2_v)}+\|\Tdx\Phi_{\eps}(t)-\Tdx\Phi(t)\|_{H^2_x }\le C\delta_0\eps e^{-\frac{t}{2}} . \label{limit_1a} \end{equation} \end{thm} The results in Theorem~\ref{thm1.1} on the convergence rate of diffusion limits of the VPFP system is proved based on the spectral analysis \cite{Li3} and the ideas inspired by \cite{Ukai1,Ellis,Li4}. Indeed, we first develop a non-local implicit function theorem to show the existence of the eigenvalue $\lambda_0(|\xi|,\eps)$ of the linear VPFP operator $B_{\eps}(\xi)$ defined by \eqref{B3} for $\eps|\xi|$ small and establish the expansions of the eigenvalue $\lambda_0(|\xi|,\eps)$ and the corresponding eigenfunction $\psi_0(\xi,\eps)$ as (See Lemma \ref{eigen_1} and Theorem \ref{spect3}) \begin{equation} \label{lam0} \left\{\bln \lambda_0(|\xi|,\eps)&= -\eps^2(1+|\xi|^2)+O(\eps^4(1+|\xi|^2)^2),\\ P_0\psi_0(\xi,\eps)&= \frac{|\xi|}{\sqrt{1+|\xi|^2}} \sqrt{M} +O(\eps |\xi|),\\ P_1\psi_0(\xi,\eps)&= -i\eps \sqrt{1+|\xi|^2} \(v\cdot\frac{\xi}{|\xi|}\)\sqrt{M}+O(\eps^2(1+|\xi|^2)). \eln\right. \end{equation} Based on the spectral analysis of $B_{\eps}(\xi)$, we can decompose the semigroup $e^{ B_{\eps}(\xi)t/\eps^2}$ into the fluid part $S_1$ and the remainder part $S_2$, where the remainder part $S_2$ satisfies the decay rate $e^{-Ct/\eps^2}$, and the fluid part $S_1$ takes the form (See Theorem \ref{rate1}) \begin{equation} S_1(t,\xi,\eps)f= e^{\frac{t}{\eps^2}\lambda_0(|\xi|,\eps)}\(f,\overline{\psi_0(\xi,\eps)}\)_\xi\psi_0(\xi,\eps).\label{lam1} \end{equation} Then by applying the expansion \eqref{lam0} to the fluid part \eqref{lam1}, we can establish the convergence and the optimal convergence rate of the solution to the linear VPFP system towards its first and second order fluid limits, where the first order fluid limit is the solution to the linear DDP system (See Lemmas \ref{fl1} and \ref{fl2}). Finally, making use of the convergence results on the fluid limits of the linear VPFP system, we can prove the convergence and establish the optimal convergence rate of the nonlinear VPFP system towards the DDP system after a careful and tedious computation. Moreover, we can obtain the precise estimation on the initial layer. The rest of this paper will be organized as follows. In Section~\ref{sect2}, we give the spectrum analysis of the linear operator related to the linearized VPFP system. In Section~\ref{sect3}, we establish the first and second order fluid approximations of the solution to the linearized VPFP system based on the spectral analysis. In Section~\ref{sect4}, we prove the convergence and establish the convergence rate of the global solution to the original nonlinear VPFP system towards the solution to the nonlinear DDP system. \section{Spectral analysis} \setcounter{equation}{0} \label{sect2} In this section, we are concerned with the spectral analysis of the operator $B_{\eps}(\xi)$ defined by \eqref{B3}, which will be applied to study diffusion limit of the solution to the VPFP system \eqref{VPFP4}--\eqref{VPFP6}. From the system \eqref{VPFP4}--\eqref{VPFP6}, we have the following linearized VPFP system: \begin{equation}\label{LVPFP1} \left\{\bln &\eps^2\dt f_{\eps}=B_{\eps}f_{\eps},\quad t>0,\\ &f_{\eps}(0,x,v)=f_0(x,v), \eln\right. \end{equation} where \begin{equation} B_{\eps}f=Lf-\eps v\cdot\Tdx f-\eps v \cdot \Tdx(-\Delta_x)^{-1}P_0f. 
\end{equation} Take Fourier transform to \eqref{LVPFP1} in $x$ to get \begin{equation}\label{LVPFP5} \left\{\bln &\eps^2\dt \hat{f}_{\eps}= B_{\eps}(\xi)\hat f_{\eps},\quad t>0,\\ &\hat{f}_{\eps}(0,\xi,v)=\hat{f}_0(\xi,v), \eln\right. \end{equation} where \begin{equation} B_{\eps}(\xi)=L-i\eps v\cdot\xi-i\eps \frac{ v\cdot\xi}{|\xi|^2}P_{0},\quad \xi\ne 0. \label{B3} \end{equation} Following \cite{Yu}, we decompose $L$ as follows: \begin{equation} \label{L_1} \left\{\bln Lf&=-Af+Kf,\\ Af&=-\Delta_vf+\frac{|v|^2}4f-\frac321_{\{|v|\ge R\}}f,\\ Kf&=\frac321_{\{|v|\le R\}}f, \eln\right. \end{equation} where $R > 0$ is chosen to be large enough and $1_{\{|v|\le R\}}$ is the indicator function of the domain $|v| \le R$. The operators $A$ and $K $ are self-adjoint on $L^2$, and $K$ is a $A-$compact operator, that is, for any sequences both $\{u_n\}$ and $\{A u_n\}$ bounded in $L^2_v$, $\{Ku_n\}$ contains a convergent subsequence in $L^2_v$ (cf. Lemma 2.3 in \cite{Yu}). For $R>0$ sufficiently large, there are two constants $\nu_0,\nu_1>0$ such that \begin{equation} (Af,f)\ge \nu_1\|f\|^2_{L^2_\sigma}\ge \nu_0\|f\|^2. \label{L_3} \end{equation} Also, we define \begin{equation} A_{\eps}(\xi)=-A- i\eps (v\cdot\xi). \label{Cxi} \end{equation} Introduce a weighted Hilbert space $L^2_\xi(\R^3)$ for $\xi\ne 0$ as $$ L^2_\xi(\R^3)=\{f\in L^2(\R^3)\,|\,\|f\|_\xi=\sqrt{(f,f)_\xi}<\infty\}, $$ with the inner product defined by $$ (f,g)_\xi=(f,g)+\frac1{|\xi|^2}(P_{0} f,P_{0} g). $$ Since $P_{0}$ is a self-adjoint projection operator, it follows that $(P_{0} f,P_{0} g)=(P_{0} f, g)=( f,P_{0} g)$ and hence \begin{equation} (f,g)_\xi=(f,g+\frac1{|\xi|^2}P_{0}g)=(f+\frac1{|\xi|^2}P_{0}f,g).\label{C_1a}\end{equation} By \eqref{C_1a}, we have for any $f,g\in L^2_\xi(\R^3_v)\cap D(B_{\eps}(\xi))$, \begin{equation} (B_{\eps}(\xi)f,g)_\xi=(B_{\eps}(\xi) f,g+\frac1{|\xi|^2}P_{0} g)=(f,B_{\eps}(-\xi)g)_\xi. \label{L_7} \end{equation} Moreover, $B_{\eps}(\xi)$ is a dissipate operator in $L^2_\xi(\R^3)$: \begin{equation} {\rm Re}(B_{\eps}(\xi)f,f)_\xi=(Lf,f)\le 0. \label{L_8} \end{equation} We can regard $B_{\eps}(\xi)$ as a linear operator from the space $L^2_\xi(\R^3)$ to itself because $$ \|f\|^2\le \|f\|^2_\xi\le(1+|\xi|^{-2})\|f\|^2,\quad \xi\ne 0. $$ The $\sigma(A)$ denotes the spectrum of the operator $A$. The discrete spectrum of $A$, denoted by $\sigma_d(A)$, is the set of all isolated eigenvalues with finite multiplicity. The essential spectrum of $A$, $\sigma_{ess}(A)$, is the set $\sigma(A)\setminus \sigma_d (A)$. We denote $\rho(A)$ to be the resolvent set of the operator $A.$ By \eqref{L_7}, \eqref{L_8}, \eqref{L_1} and \eqref{L_3}, we have the following two lemmas. \begin{lem}\label{SG_1} (1) The operator $B_{\eps}(\xi)$ generates a strongly continuous contraction semigroup on $L^2_\xi(\R^3)$, which satisfies \begin{equation} \|e^{tB_{\eps}(\xi)}f\|_\xi\le\|f\|_\xi, \quad \forall\ t>0,\,\,f\in L^2_\xi(\R^3_v). \end{equation} (2) The operator $A_{\eps}(\xi)$ generates a strongly continuous contraction semigroup on $L^2 (\R^3)$, which satisfies \begin{equation} \|e^{tA_{\eps}(\xi)}f\| \le e^{-\nu_0t}\|f\| , \quad \forall\ t>0,\,\,f\in L^2 (\R^3_v). \end{equation} \end{lem} \begin{proof} Since the operator $B_{\eps}(\xi)$ is a densely defined closed operator on $L^2_\xi(\R^3)$, and both $B_{\eps}(\xi)$ and $B_{\eps}(\xi)^*=B_{\eps}(-\xi)$ are dissipative on $L^2_\xi(\R^3)$ by \eqref{L_8}, it follows that $B_{\eps}(\xi)$ generates a strongly continuous contraction semigroup on $L^2_\xi(\R^3)$. This proves (1). 
Similarly, we can prove (2) by \eqref{L_3}. \end{proof} \begin{lem}\label{Egn} The following conditions hold for all $\xi\ne 0$ and $\eps\in (0,1)$. \begin{enumerate} \item[(1)] $\sigma_{ess}(B_{\eps}(\xi))\subset \{\lambda\in \mathbb{C}\,|\, {\rm Re}\lambda\le -\nu_0\}$ and $\sigma(B_{\eps}(\xi))\cap \{\lambda\in \mathbb{C}\,|\, -\nu_0<{\rm Re}\lambda\le 0\}\subset \sigma_{d}(B_{\eps}(\xi))$. \item[(2)] If $\lambda$ is an eigenvalue of $B_{\eps}(\xi)$, then ${\rm Re}\lambda<0$ for any $\eps\xi\ne 0$ and $ \lambda=0$ iff $\eps\xi= 0$. \end{enumerate} \end{lem} \begin{proof} By Lemma \ref{SG_1}, $\lambda- A_{\eps}(\xi)$ is invertible for ${\rm Re}\lambda>-\nu_0$ and hence $\sigma (A_{\eps}(\xi))\subset \{\lambda\in \mathbb{C}\,|\, {\rm Re}\lambda\le -\nu_0\}$. Since $K$ is $A_\eps(\xi)-$compact and $i(v\cdot\xi)|\xi|^{-2}P_0$ is compact, namely, $ B_{\eps}(\xi)$ is a compact perturbation of $ A_{\eps}(\xi)$, it follows that $ \sigma_{ess}(B_{\eps}(\xi))=\sigma_{ess}(A_{\eps}(\xi))$ and $\sigma (B_{\eps}(\xi))$ in the domain ${\rm Re}\lambda>-\nu_0$ consists of discrete eigenvalues with possible accumulation points only on the line ${\rm Re}\lambda= -\nu_0$. This proves (1). By a similar as Proposition 2.2.8 in \cite{Ukai}, we can prove (2) and the detail is omitted for simplicity. \end{proof} Now denote by $T$ a linear operator on $L^2(\R^3_v)$ or $L^2_\xi(\R^3_v)$, and we define the corresponding norms of $T$ by $$ \|T\|=\sup_{\|f\|=1}\|Tf\|,\quad \|T\|_\xi=\sup_{\|f\|_\xi=1}\|Tf\|_\xi. $$ Obviously, \begin{equation} (1+|\xi|^{-2})^{-1/2}\|T\|\le \|T\|_\xi\le (1+|\xi|^{-2})^{1/2}\|T\|.\label{eee} \end{equation} First, we consider the spectrum and resolvent sets of $B_{\eps}(\xi)$ for $\eps|\xi|>r_0$ with $r_0>0$ a constant. To this end, we decompose $B_{\eps}(\xi)$ into \bma \lambda-B_{\eps}(\xi)&=\lambda-A_{\eps}(\xi)-K+i\eps\frac{ v\cdot\xi}{|\xi|^2}P_{0}\notag\\ &=\(I-K(\lambda-A_{\eps}(\xi))^{-1}+i\eps\frac{v\cdot\xi}{|\xi|^2}P_{0}(\lambda-A_{\eps}(\xi))^{-1}\)(\lambda-A_{\eps}(\xi)).\label{B_d} \ema Then, we have the estimates on the right hand terms of \eqref{B_d} as follows. \begin{lem}[\cite{Yu,Li3}]\label{LP03} There exists a constant $C>0$ so that the following holds \begin{enumerate} \item For any $\delta>0$, if ${\rm Re}\lambda\ge -\nu_0+\delta$, we have \begin{equation} \|K(\lambda-A_{\eps}(\xi))^{-1}\| \to 0, \quad {\rm as}\,\,\ |{\rm Im}\lambda|+\eps|\xi|\to \infty. \label{T_7} \end{equation} \item For any $\delta>0$ and $ r_0>0$, if ${\rm Re}\lambda\ge -\nu_0+\delta$ and $|\xi|\ge r_0$, we have \begin{equation} \|(v\cdot\xi)|\xi|^{-2}P_{0}(\lambda-A_{\eps}(\xi))^{-1}\| \leq C(\delta^{-1}+1)(r_0^{-1}+1)(|\xi|+|\lambda|)^{-1}. \label{L_9} \end{equation} \end{enumerate} \end{lem} By \eqref{B_d} and Lemma \ref{LP03}, we have the spectral gap of the operator $B_{\eps}(\xi)$ for $\eps|\xi|>r_0$. \begin{lem}[Spectral gap]\label{LP01} Fix $\eps\in (0,1)$. For any $r_0>0$, there exists $\alpha =\alpha(r_0)>0$ such that for $ \eps|\xi|\ge r_0$, \begin{equation} \sigma(B_{\eps}(\xi))\subset\{\lambda\in\mathbb{C}\,|\, \mathrm{Re}\lambda \leq-\alpha\} .\label{gap}\end{equation} \end{lem} \begin{proof} Let $\lambda\in \sigma(B_{\eps}(\xi))\cap\{\lambda\in\mathbb{C}\,|\, \mathrm{Re}\lambda\geq -\nu_0+\delta\}$ with $\delta>0$. We first show that $\sup_{\eps|\xi|\ge r_0}|{\rm Im}\lambda|<+\infty$. 
By \eqref{eee}, \eqref{T_7} and \eqref{L_9}, there exists a large constant $r_1=r_1(\delta)>0$ such that for $\mathrm{Re}\lambda\geq -\nu_0+\delta$ and $\eps|\xi|\geq r_1$, \begin{equation} \|K(\lambda-A_{\eps}(\xi))^{-1}\|_{\xi}\leq1/4,\quad \|\eps(v\cdot\xi)|\xi|^{-2}P_0(\lambda-A_{\eps}(\xi))^{-1}\|_{\xi}\leq1/4.\label{bound} \end{equation} This implies that $I+K(\lambda-A_{\eps}(\xi))^{-1}+i\eps(v\cdot\xi)|\xi|^{-2}P_0(\lambda-A_{\eps}(\xi))^{-1}$ is invertible on $L^2_{\xi}(\R^3_v)$, which together with \eqref{B_d} yield that $\lambda-B_{\eps}(\xi)$ is also invertible on $L^2_{\xi}(\R^3_v)$ and satisfies \begin{equation} (\lambda-B_{\eps}(\xi))^{-1}=(\lambda-A_{\eps}(\xi))^{-1}\(I-K(\lambda-A_{\eps}(\xi))^{-1}+i\eps\frac{v\cdot\xi}{|\xi|^2}P_0(\lambda-A_{\eps}(\xi))^{-1}\)^{-1} . \label{E_6} \end{equation} Therefore $\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge -\nu_0+\delta\}\subset \rho(B_{\eps}(\xi))$ for $\eps|\xi|\ge r_1$. As for $r_0\le\eps|\xi|\le r_1$, by \eqref{T_7} and \eqref{L_9} there exists $\beta=\beta(r_0,r_1,\delta)>0$ such that if $\mathrm{Re}\lambda\geq -\nu_0+\delta$, $|\mathrm{Im}\lambda|>\beta$ and $\eps|\xi|\in [r_0,r_1]$, then \eqref{bound} still holds and thus $\lambda-B_{\eps}(\xi)$ is invertible. This implies that $\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge -\nu_0+\delta, |\mathrm{Im}\lambda|>\beta\}\subset \rho(B_{\eps}(\xi))$ for $r_0\le\eps|\xi|\le r_1$. Thus, we conclude \begin{equation} \sigma(B_{\eps}(\xi)) \cap\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge-\nu_0+\delta\} \subset \{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge -\nu_0+\delta,\,|\mathrm{Im}\lambda|\le \beta \}, \quad \eps|\xi|\ge r_0. \label{SpH} \end{equation} Next, we prove that $\sup_{ \eps |\xi|\ge r_0}{\rm Re}\lambda<0$. Base on the above argument, it is sufficient to prove that ${\rm Re}\lambda<0$ for $\eps|\xi|\in[r_0,r_1]$ and $|{\rm Im}\lambda|\le \beta$. If not, there exists $\lambda_n\in \sigma(B_{\eps}(\xi_n))$ with $\eps|\xi_n|\in[r_0,r_1]$ and $f_n\in L^2(\R^3)$ with $\|f_n\|_{\xi_n}=1$ such that $$Lf_n-i\eps(v\cdot\xi_n)f_n-\frac{i\eps(v\cdot\xi_n)}{|\xi_n|^2}P_0 f_n=\lambda_nf_n,\quad \text{Re}\lambda_n\rightarrow0.$$ Taking the inner product $(\cdot,\cdot)_{\xi_n}$ between the above equation and $f_n$ and choosing the real part, we have $${\rm Re}\lambda_n\|f_n\|_{\xi_n}= (Lf_n,f_n)\le -\mu \|P_1f_n\|^2.$$ This implies that $\lim_{n\to \infty}\|P_1f_n\|=0$ and hence $\lim_{n\to \infty}\|P_0f_n\|_{\xi_n}=1$. Since $P_0$ is a compact operator, there exists a subsequence $n_j$ of $n$ and a function $f_0\ne 0\in N_0$ such that $P_0f_{n_j}\to f_0$ in $L^2$ as $j\to \infty$. Thus we have $f_{n_j}\to f_0$ in $L^2$ as $j\to \infty$. Due to the fact that $\eps|\xi_n|\in[r_0,r_1]$, $|{\rm Im}\lambda_n|\le \beta$ and ${\rm Re}\lambda_n\to 0$, there exists a subsequence of (still denoted by) $(\xi_{n_j},\lambda_{n_j})$, and $(\xi_0,\lambda_0)$ with $\eps|\xi_0|\in[r_0,r_1]$, ${\rm Re}\lambda_0= 0$ such that $(\xi_{n_j},\lambda_{n_j})\to (\xi_0,\lambda_0).$ It follows that $B_{\eps}(\xi_0) f_0=\lambda_0 f_0$ and thus $\lambda_0$ is an eigenvalue of $B_{\eps}(\xi_0)$ with ${\rm Re}\lambda_0=0$, which contradicts ${\rm Re}\lambda<0$ for $\xi\ne 0$ established by Lemma~\ref{Egn}. This proves the lemma. \end{proof} Then, we investigate the spectrum and resolvent sets of $B_{\eps}(\xi)$ for $\eps|\xi|\le r_0$. 
To this end, we decompose $\lambda-B_{\eps}(\xi)$ as follows \begin{equation} \lambda-B_{\eps}(\xi)=\lambda P_{0}+\lambda P_1-Q_{\eps}(\xi)+i\eps P_{0}(v\cdot\xi)P_1+i\eps P_1(v\cdot\xi)\(1+\frac1{|\xi|^2}\)P_{0},\label{Bd3} \end{equation} where \begin{equation} Q_{\eps}(\xi)=L-i\eps P_1(v\cdot\xi)P_1.\label{Qxi} \end{equation} \begin{lem}\label{LP} Let $\xi\neq0$ and $Q_{\eps}(\xi)$ defined by \eqref{Qxi}. We have \begin{enumerate} \item If $\lambda\ne0$, then \begin{equation} \bigg\|\lambda^{-1}P_1(v\cdot\xi)\(1+\frac1{|\xi|^2}\)P_{0}\bigg\|_\xi\le C(|\xi|+1)|\lambda|^{-1}.\label{S_2} \end{equation} \item If $\mathrm{Re}\lambda>-1 $, then the operator $\lambda P_1-Q_{\eps}(\xi)$ is invertible on $N_0^\bot$ and satisfies \bma \|(\lambda P_1-Q_{\eps}(\xi))^{-1}\|&\leq(\mathrm{Re}\lambda+1 )^{-1},\label{S_3}\\ \|P_{0}(v\cdot\xi)P_1(\lambda P_1-Q_{\eps}(\xi))^{-1}P_1\|_\xi &\leq C(\mathrm{Re}\lambda+1)^{-1} |\xi|\bigg(1+ \frac{|\lambda|}{1+\eps|\xi|}\bigg)^{-1} .\label{S_5} \ema \end{enumerate} \end{lem} \begin{proof} By using $$ \bigg\|\lambda^{-1}P_1(v\cdot\xi)\(1+\frac1{|\xi|^2}\)P_{0}f\bigg\|_\xi\le C|\lambda|^{-1}\(|\xi|+\frac1{|\xi|}\)\|P_{0}f\|\le C|\lambda|^{-1}(|\xi|+1)\|f\|_\xi, $$ we prove \eqref{S_2}. Then, we show that for any $\lambda\in\mathbb{C}$ with $\mathrm{Re}\lambda>-1 $, the operator $\lambda P_1-Q_{\eps}(\xi)=\lambda P_1-L+i\eps P_1(v\cdot\xi) P_1$ is invertible from $N_0^\bot$ to itself. Indeed, by \eqref{L_4}, we obtain for any $f\in N_0^\bot\cap D(L)$ that \begin{equation} \text{Re}([\lambda P_1-L+i\eps P_1(v\cdot\xi) P_1]f,f) =\text{Re}\lambda(f,f)-(Lf,f)\geq(1+\text{Re}\lambda)\|f\|^2, \label{A_1} \end{equation} which implies that the operator $\lambda P_1-Q_{\eps}(\xi)$ is a one-to-one map from $N_0^\bot$ to itself so long as $\text{Re}\lambda>-1 $, and its range $\textrm{Ran}[\lambda P_1-Q_{\eps}(\xi)]$ is a closed subspace of $L^2(\R^3_v)$. It then remains to show that the operator $\lambda P_1-Q_{\eps}(\xi)$ is also a surjective map from $N_0^\bot$ to $N_0^\bot$, namely, $\textrm{Ran}[\lambda P_1-Q_{\eps}(\xi)] = N_0^\bot$. In fact, if it does not hold, then there exists a function $g\in N_0^\bot \setminus \textrm{Ran}[\lambda P_1-Q_{\eps}(\xi)]$ with $g\neq0$ so that for any $f\in N_0^\bot\cap D(L)$ that $$ ([\lambda P_1-L+ i\eps P_1(v\cdot\xi) P_1]f,g) =(f,[\overline{\lambda} P_1-L- i\eps P_1(v\cdot\xi) P_1]g)=0, $$ which yields $g=0$ since the operator $\overline{\lambda} P_1-L- i\eps P_1(v\cdot\xi) P_1$ is dissipative and satisfies the same estimate as \eqref{A_1}. This is a contradiction, and thus $\textrm{Ran}[\lambda P_1-Q_{\eps}(\xi)] = N_0^\bot$. The estimate \eqref{S_3} follows directly from \eqref{A_1}. By \eqref{S_3} and $\|P_{0}(v\cdot\xi) P_1f\|_\xi\le C(|\xi|+1)\|P_1f\|$, we have \begin{equation} \| P_{0}(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1f\|_\xi \leq C|\xi|(\mathrm{Re}\lambda+1 )^{-1}\|f\|. \label{2.33a} \end{equation} Meanwhile, we can decompose the operator $ P_{0}(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1$ as $$ P_{0}(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1=\frac1\lambda P_{0}(v\cdot\xi) P_1+\frac1\lambda P_{0}(v\cdot\xi) P_1Q_{\eps}(\xi)(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1. $$ This together with \eqref{S_3} and the fact $ \| P_{0}(v\cdot\xi) P_1 Q_{\eps}(\xi)f\|_\xi\leq C|\xi|(1+\eps|\xi|)\|P_1f\| $ give \begin{equation} \| P_{0}(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1f\|_\xi \leq C|\xi||\lambda|^{-1}[(\mathrm{Re}\lambda+1 )^{-1}+1](1+\eps|\xi|)\|f\|. 
\label{2.33} \end{equation} The combination of the two cases \eqref{2.33a} and \eqref{2.33} yields \eqref{S_5}. \end{proof} By \eqref{Bd3} and Lemmas~\ref{Egn}--\ref{LP}, we are able to analyze the spectral and resolvent sets of the operator $B_{\eps}(\xi)$ as follows. \begin{lem}\label{spectrum2} Fix $\eps\in (0,1)$. The following facts hold. \begin{enumerate} \item For all $\xi\ne 0$, there exists $y_0>0$ such that \begin{equation} \{\lambda\in\mathbb{C}\,|\, \mathrm{Re}\lambda\ge-\frac{\nu_0}{2},\,|\mathrm{Im}\lambda|\geq y_0\} \cup\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda>0\} \subset\rho(B_{\eps}(\xi)). \label{rb1} \end{equation} \item For any $\delta>0$, there exists $r_0=r_0(\delta)>0$ such that if $\eps(1+|\xi|)\leq r_0$, then \begin{equation} \sigma(B_{\eps}(\xi))\cap\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge-\frac12\} \subset \{\lambda\in\mathbb{C}\,|\,|\lambda|\le\delta\}. \label{sg4} \end{equation} \end{enumerate} \end{lem} \begin{proof} By Lemma \ref{LP}, we have for $\rm{Re}\lambda>-1 $ and $\lambda\neq 0$ that the operator $\lambda P_0+\lambda P_1-Q_{\eps}(\xi)$ is invertible on $L^2_\xi(\R^3_v)$ and it satisfies \begin{equation} (\lambda P_0+\lambda P_1-Q_{\eps}(\xi))^{-1} =\lambda^{-1} P_0+(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1, \end{equation} because the operator $\lambda P_0$ is orthogonal to $\lambda P_1-Q_{\eps}(\xi)$. Therefore, we can re-write \eqref{Bd3} as \bmas \lambda-B_{\eps}(\xi) =&(I+Y_{\eps}(\lambda,\xi))(\lambda P_0+\lambda P_1-Q_{\eps}(\xi)), \\ Y_{\eps}(\lambda,\xi)= & i\eps\lambda^{-1} P_1(v\cdot\xi)(1+\frac1{|\xi|^2}) P_0 +i\eps P_0(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1. \emas By Lemma \ref{LP01}, there exists $R_0>0$ large enough so that if $\mathrm{Re}\lambda\geq -\nu_0/2$ and $\eps |\xi|\geq R_0$, then $\lambda-B_{\eps}(\xi)$ is invertible on $L^2_{\xi}(\R^3_v)$ and satisfies \eqref{E_6}. Thus, $\{\lambda\in\mathbb{C}\,|\,\mathrm{Re}\lambda\ge -\nu_0/2\}\subset \rho(B_{\eps}(\xi))$ for $\eps |\xi|\ge R_0$. For the case $\eps|\xi|\leq R_0$ such that $\eps(1+|\xi|)\leq R_0+1$, by \eqref{S_2} and \eqref{S_5} we can choose $y_0>0$ such that it holds for $\mathrm{Re}\lambda\ge-1/2$ and $|\mathrm{Im}\lambda|\geq y_0$ that \begin{equation} \|\eps\lambda^{-1}P_1(v\cdot\xi)(1+|\xi|^{-2})P_{0}\|_\xi\leq \frac14, \quad \|\eps P_{0}(v\cdot\xi)P_1(\lambda P_1-Q_{\eps}(\xi))^{-1}P_1\|_\xi\leq\frac14.\label{bound_1} \end{equation} This implies that the operator $I+Y_{\eps}(\lambda,\xi)$ is invertible on $L^{2}_\xi(\R^3_v)$ and thus $\lambda-B_{\eps}(\xi)$ is invertible on $L^{2}_\xi(\R^3_v)$ and satisfies \begin{equation} (\lambda-B_{\eps}(\xi))^{-1} =(\lambda^{-1} P_0+(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1)(I+Y_{\eps}(\lambda,\xi))^{-1}.\label{S_8} \end{equation} Therefore, $\rho(B_{\eps}(\xi))\supset \{\lambda\in\mathbb{C}\,|\,{\rm Re}\lambda\ge-1/2, |{\rm Im}\lambda|\ge y_0\}$ for $\eps|\xi|\leq R_0$. This and Lemma~\ref{SG_1} lead to \eqref{rb1}. Assume that $ |\lambda|>\delta$ and $\mathrm{Re}\lambda\ge-1/2$. Then, by \eqref{2.33a} and \eqref{S_5} we can choose $r_0=r_0(\delta)>0$ so that estimates \eqref{bound_1} still hold for $\eps(1+|\xi|)\leq r_0$, and the operator $\lambda-B_{\eps}(\xi)$ is invertible on $L^{2}_\xi(\R^3)$. Therefore, we have $\rho(B_{\eps}(\xi))\supset\{\lambda\in\mathbb{C}\,|\, |\lambda|>\delta,\mathrm{Re}\lambda\ge-1/2\}$ for $\eps(1+|\xi|)\leq r_0$, which gives \eqref{sg4}. 
\end{proof} Now we establish the asymptotic expansions of the eigenvalues and eigenfunctions of $B_{\eps}(\xi)$ for $\eps|\xi|$ sufficiently small. Firstly, we consider a 1-D eigenvalue problem: \begin{equation} B_{\eps}(s)e=:\(L-i\eps s v_1-i\eps\frac{v_1}{s}P_{0}\)e=\beta e,\quad s\in \R.\label{L_2} \end{equation} Let $e$ be the eigenfunction of \eqref{L_2}, we rewrite $e$ in the form $e=e_0+e_1$, where $e_0=P_{0}e=C_0\sqrt M$ and $e_1=(I-P_{0})e=P_1e$. The eigenvalue problem \eqref{L_2} can be decomposed into \bma &\lambda e_0=-i\eps sP_{0}[ v_1(e_0+e_1)],\label{A_2}\\ &\lambda e_1=Le_1-i\eps sP_1[ v_1(e_0+e_1)]-i\eps \frac{v_1}{s}e_0.\label{A_3} \ema From Lemma \ref{LP} and \eqref{A_3}, we obtain that for any $\text{Re}\lambda>-1 $, \begin{equation} e_1=i\eps (L-\lambda P_1-i sP_1v_1P_1)^{-1}P_1\(sv_1e_0+\frac{v_1}{s}e_0\).\label{A_4}\end{equation} Substituting \eqref{A_4} into \eqref{A_2} and taking inner product the resulted equation with $\sqrt M$ gives \begin{equation} \lambda C_0=\eps^2(1+s^2)(R(\lambda,\eps s)v_1\sqrt{M},v_1\sqrt{M})C_0.\label{eigen} \end{equation} where $$ R(\lambda,s)=(L-\lambda P_1-i sP_1v_1P_1)^{-1}.$$ Denote \begin{equation} D_0(z,s,\eps)=z- (1+s^2)(R(\eps^2z,\eps s)v_1\sqrt{M},v_1\sqrt{M}).\label{ddd} \end{equation} \begin{lem}\label{eigen_1} There are two constants $r_0,r_1>0$ such that the equation $D_0(z,s,\eps)=0$ has a unique solution $z=z(s,\eps)$ for $\eps(1+|s|)\le r_0$ and $|z+1+s^2|\le r_1(1+s^2) $, which is a $C^{\infty}$ function of $s$, $\eps$ and satisfies \begin{equation} z(s,0)=-(1+s^2) ,\quad \pt_\eps z(s,0)=0. \label{z1} \end{equation} In particular, $z(s,\eps)$ satisfies the following expansion: \begin{equation} z(s,\eps)=-(1+s^2)+O(\eps^2(1+s^2)^2). \label{z2} \end{equation} \end{lem} \begin{proof} By \eqref{ddd} and the fact $L(v_j\sqrt{M})=-v_j\sqrt{M}$, $j=1,2,3$, we have \begin{equation} D_0(z(s),s,0)=0\quad {\rm with}\quad z(s)=-(1+s^2). \label{a}\end{equation} Define $$D(z,s,\eps)=(1+s^2)R_{11}(\eps^2z,\eps s),$$ with $R_{11}(\eps^2z,\eps s)= (R(\eps^2z,\eps s)v_1\sqrt{M},v_1\sqrt{M})$. It is straightforward to verify that a solution of $D_0(z,s,\eps)=0$ for any fixed $s$ and $\eps$ is a fixed point of $D(z,s,\eps)$. Since for any $(z,s,\eps)\in \mathbb{C}\times\R\times \R$, $$ |\partial_z R_{11}(\eps^2 z,\eps s)|\le C\eps^2 ,\quad |\partial_\eps R_{11}(\eps^2 z,\eps s)|\le C(|\eps z|+|s|),$$ it follows that \bmas |D(z,s,\eps)-z(s)|&= (1+s^2) \Big|R_{11}(\eps^2z,\eps s )-R_{11}(0,0 )\Big|\notag\\ &\le C(1+s^2)(|\eps^2z|+|\eps s|)\le r_1(1+s^2), \\ |D(z_1,s,\eps)-D(z_2,s,\eps)|&\le C\eps^2(1+s^2)|z_1-z_2|\le \frac12|z_1-z_2|, \emas for $|z-z(s)|\le r_1(1+s^2)$ and $\eps(1+|s|)\le r_0$ with $r_0,r_1>0$ sufficiently small. Hence by the contraction mapping theorem, there exists a unique fixed point $z(s,\eps): \eps\in B(0,r_0/(1+|s|))\to B(z(s),r_1(1+s^2))$ such that $D(z(s,\eps),s, \eps)=z(s,\eps)$ and $z(s,0)=z(s)$. This is equivalent to that $D_0(z(s,\eps),s,\eps)=0$. Since $D_0(z,s,\eps)$ is $C^{\infty}$ with respect to $z$, $s$ and $\eps$, it follows that $z(s,\eps)$ is a $C^{\infty}$ function with respect to $s$ and $\eps$. Next, we estimate the derivative of $z(s,\eps)$. By \eqref{ddd} and using the fact $$ L\sqrt{M}=0,\,\,\, L(v_j\sqrt{M})=-v_j\sqrt{M},\,\,\, LP_1(v^2_j\sqrt{M})=-2P_1(v^2_j\sqrt{M}),\,\,\,j=1,2,3, $$ with $P_1(v^2_j\sqrt{M})=(v^2_j- |v|^2/3)\sqrt{M}$, we have \bmas \pt_z D_0(z,s,0)&=1,\\ \pt_\eps D_0(z,s,0)&=i(1+s^2)s(P_1(v_1^2\sqrt{M}),v_1\sqrt{M})=0. 
\emas It follows that \begin{equation} \pt_\eps z(s,0)=- \frac{{\partial_\eps}D_0(z(s),s,0)}{{\partial_z}D_0(z(s),s,0)} =0. \label{b} \end{equation} Combining \eqref{a}--\eqref{b}, we prove \eqref{z1}. Finally, by a direct computation we obtain $$ \partial_z D_0(z,s,\eps) =1 +O(1)\eps^2(1+s^2) ,\quad \partial_{\eps} D_0(z,s,\eps) =O(1)\eps(1+s^2)^2, $$ for $\eps(1+|s|)\le r_0$ and $|z+1+s^2|\le r_1(1+s^2) $. This implies that $$ \pt_\eps z(s,\eps)=- \frac{{\partial_\eps}D_0(z,s,\eps)}{{\partial_z}D_0(z,s,\eps)} =O(1)\eps(1+s^2)^2,\quad \eps(1+|s|)\le r_0, $$ which together with \eqref{z1} leads to \eqref{z2}. This proves the lemma. \end{proof} With the help of Lemma \ref{eigen_1}, we have the eigenvalue $\beta_0(s,\eps)$ and the corresponding eigenfunction $e_0(s,\eps)$ of $B_{\eps}(s)$ defined by \eqref{L_2} as follows. \begin{lem}\label{eigen_4} (1) There exists a small constant $r_0>0$ such that $ \sigma(B_{\eps}(s))\cap \{\lambda\in \mathbb{C}\,|\, \mathrm{Re}\lambda>-1 /2\} $ consists of one point $\beta_0(s,\eps)$ for $\eps(1+|s|)\le r_0$. The eigenvalue $\beta_0(s,\eps)$ is a $C^\infty$ function of $s$ and $\eps $, and admit the following asymptotic expansion for $\eps(1+|s|)\le r_0$: \begin{equation} \beta_{0}(s,\eps) = -\eps^2(1+s^2)+O(\eps^4(1+s^2)^2). \label{specr0} \end{equation} (2) The corresponding eigenfunction $e_0(s,\eps)=e_0(s,\eps,v)$ is $C^\infty$ in $s$ and $\eps$ satisfying \begin{equation} \label{eigf2} \left\{\bln P_0e_0(s,\eps)&= \frac{s}{\sqrt{1+s^2}} \sqrt{M} +O(\eps s),\\ P_1e_0(s,\eps)&= -i\eps \sqrt{1+s^2} v_1\sqrt{M}+O(\eps^2(1+s^2)). \eln\right. \end{equation} \iffalse where $a_0(s,0) $ are smooth function of $s$ and $\eps $ for $|\eps s|\le r_0$ and satisfy \begin{equation} a_0(s,\eps)=\frac1{\sqrt{1+s^2}}+O(\eps \sqrt{1+s^2}). \end{equation}\fi \end{lem} \begin{proof} The eigenvalue $\beta_0(s,\eps)$ and the eigenfunction $e_0(s,\eps)$ can be constructed as follows. We take $\beta_0=\eps^2 z(s,\eps)$ with $ z(s,\eps)$ being the solution of the equation $D_0(z,s,\eps)=0$ given in Lemma \ref{eigen_1}, and define the corresponding eigenfunction $e_0(s,\eps)$ by \begin{equation} e_0(s,\eps)=a_0(s,\eps)\sqrt{M} +i \eps\(s+\frac1s\) a_0(s,\eps)(L-\eps^2z(s,\eps)-i s\eps P_1v_1P_1)^{-1} v_1\sqrt{M}, \label{C_2} \end{equation} where $a_0(s,\eps)$ is a complex value function determined later. We can normalize $e_0(s,\eps)$ by taking $$\(e_0(s,\eps),\overline{e_0(s,\eps)}\)_s=(e_0,\overline{e_0})+\frac{1}{s^2} (P_0e_0,P_0\overline{e_0})=1.$$ The coefficient $a_0(s,\eps) $ is determined by the normalization condition as \begin{equation} \label{C_4} a_0(s,\eps)^2\bigg(1+\frac1{s^2}+\eps^2\bigg(s+\frac1s\bigg)^2D_1(s,\eps)\bigg)=1, \end{equation} where $D_1(s,\eps)=(R(\beta_0,\eps s) v_1\sqrt{M}, R(\overline{\beta_0},-\eps s) v_1\sqrt{M}).$ Substituting \eqref{specr0} into \eqref{C_4}, we obtain \begin{equation} a_0(s,\eps)=\frac{s}{\sqrt{1+s^2}}\[1+O(\eps \sqrt{1+s^2})\]. \label{C_3}\end{equation} Combining \eqref{C_2} and \eqref{C_3}, we can obtain the expansion of $e_0(s,\eps)$ given in \eqref{eigf2}. This completes the proof of the lemma. \end{proof} Now we consider a 3-D eigenvalue problem: \begin{equation} B_{\eps}(\xi)\psi=\(L- i\eps v\cdot\xi-i\eps\frac{ v\cdot\xi}{|\xi|^2}P_{0}\)\psi=\lambda \psi,\quad \xi\in \R^3.\label{L_3a}\end{equation} With the help of Lemma \ref{eigen_4}, we have the eigenvalue $\lambda_0(|\xi|,\eps)$ and the corresponding eigenfunction $\psi_0(\xi,\eps)$ of $B_{\eps}(\xi)$ defined by \eqref{L_3a} as follows. 
\begin{thm}\label{spect3} (1) There exists a small constant $r_0>0$ such that $ \sigma(B_{\eps}(\xi))\cap \{\lambda\in \mathbb{C}\,|\, \mathrm{Re}\lambda>-1 /2\} $ consists of one point $\lambda_0(|\xi|,\eps)$ for $\eps(1+|\xi|)\le r_0$. The eigenvalue $\lambda_0(|\xi|,\eps)$ is a $C^\infty$ function of $|\xi|$ and $\eps$ for $\eps(1+|\xi|)\leq r_0$: \begin{equation} \lambda_0(|\xi|,\eps)=\beta_0(|\xi|,\eps)= -\eps^2(1+|\xi|^2)+O(\eps^4(1+|\xi|^2)^2),\label{eigen_2} \end{equation} where $\beta_0(s,\eps)$ is a $C^\infty$ function of $s$ and $\eps$ given in Lemma \ref{eigen_4}. (2) The eigenfunction $\psi_0(\xi,\eps)=\psi_0(\xi,\eps,v)$ satisfies \begin{equation} \left\{\bln \label{eigf1} P_0\psi_0(\xi,\eps)&= \frac{|\xi|}{\sqrt{1+|\xi|^2}} \sqrt{M} +O(\eps |\xi|),\\ P_1\psi_0(\xi,\eps)&= -i\eps \sqrt{1+|\xi|^2} \(v\cdot\frac{\xi}{|\xi|}\)\sqrt{M}+O(\eps^2(1+|\xi|^2)). \eln\right. \end{equation} \end{thm} \begin{proof} Let $\O$ be a rotational transformation in $\R^3$ such that $\O:\frac{\xi}{|\xi|}\to(1,0,0)$. We have \begin{equation} \O^{-1} \(L- i\eps v\cdot\xi-i\eps\frac{ v\cdot\xi}{|\xi|^2}P_{0}\)\O=L- i\eps sv_1-i\eps\frac{ v_1}{s}P_{0}.\end{equation} From Lemma \ref{eigen_4}, we have the following eigenvalue and eigenfunction for \eqref{L_3a}: \bgrs \(L- i\eps v\cdot\xi-i\eps\frac{ v\cdot\xi}{|\xi|^2}P_{0}\)\psi_0(\xi,\eps)=\lambda_0(|\xi|,\eps) \psi_0(\xi,\eps),\\ \lambda_0(|\xi|,\eps)=\beta_0(|\xi|,\eps),\quad \psi_0(\xi,\eps)=\O e_0(|\xi|,\eps). \egrs This proves the theorem. \end{proof} By virtue of Lemmas~\ref{LP03}--\ref{spectrum2} and Theorem \ref{spect3}, we can make a detailed analysis on the semigroup $S(t,\xi,\eps)=e^{\frac{t}{\eps^2}B_{\eps}(\xi)}$ in terms of an argument similar to that of Theorem~3.4 in \cite{Li2}, we give the details of the proof in Appendix. \begin{thm}\label{rate1} Let $r_0>0$ be given in Theorem \ref{spect3}. For any fixed $\eps\in(0,r_0)$, the semigroup $S(t,\xi,\eps)=e^{\frac{t}{\eps^2}B_{\eps}(\xi)}$ with $\xi\neq0$ can be decomposed into \begin{equation} S(t,\xi,\eps)f=S_1(t,\xi,\eps)f+S_2(t,\xi,\eps)f,\quad f\in L^2_\xi(\R^3_v), \ \ t>0,\label{B_0}\end{equation} where \begin{equation} S_1(t,\xi,\eps)f=e^{\frac{1}{\eps^2}\lambda_0(|\xi|,\eps)t}\(f,\overline{\psi_0(\xi,\eps)}\)_\xi \psi_0(\xi,\eps)1_{\{\eps (1+|\xi|)\le r_0\}}, \label{S1} \end{equation} with $(\lambda_0(|\xi|,\eps),\psi_0(\xi,\eps))$ being the eigenvalue and eigenfunction of the operator $B_{\eps}(\xi)$ given in Theorem \ref{spect3} for $\eps(1+|\xi|)\le r_0$, and $S_2(t,\xi,\eps) $ satisfies for two constants $a>0$ and $C>0$ independent of $\xi$ and $\eps$ that \begin{equation} \|S_2(t,\xi,\eps)f\|_\xi\le Ce^{-\frac{at}{\eps^2}}\|f\|_\xi. \label{S2} \end{equation} \end{thm} \section{Fluid approximations of semigroup} \setcounter{equation}{0} \label{sect3} In this section, we give the first and second order fluid approximations of the semigroup $e^{\frac{t}{\eps^2}B_\eps}$, which will be used to prove the convergence and establish the convergence rate of the solution to the VPFP system towards the solution to the DDP system. 
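Before giving the precise statements, let us indicate heuristically, at the level of Fourier symbols, why the drift-diffusion dynamics should emerge. By the expansion \eqref{eigen_2}, for each fixed $t>0$ and $\xi\ne0$,
$$ e^{\frac{t}{\eps^2}\lambda_0(|\xi|,\eps)}=e^{-(1+|\xi|^2)t+O(\eps^2(1+|\xi|^2)^2)t}\longrightarrow e^{-(1+|\xi|^2)t},\quad \eps\to0, $$
and $-(1+|\xi|^2)$ is precisely the Fourier symbol of the operator $D=-I+\Delta_x$ introduced in \eqref{A} below. Thus the fluid part $S_1$ of the semigroup in Theorem \ref{rate1} converges, symbol by symbol, to the semigroup $e^{tD}$ of the linearized DDP system, while the remainder $S_2$ is damped like $e^{-at/\eps^2}$; the rest of this section makes this heuristic rigorous and quantitative.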
Let us introduce a Sobolev space of functions $f=f(x,v)$ by $ H^k_P=\{f\in L^2(\R^3_{x}\times \R^3_v)\,|\, \|f\|_{H^k_P}<\infty\,\}\ (L^2_P=H^0_P)$ with the norm $\|\cdot\|_{H^k_P}$ defined by \bma \|f\|_{H^k_P} &=\(\int_{\R^3}(1+|\xi|^2)^k\|\hat{f}\|^2_\xi d\xi \)^{1/2}\notag\\ &=\(\int_{\R^3}(1+|\xi|^2)^k \(\|\hat{f}\|^2+\frac1{|\xi|^2}\lt|(\hat{f},\sqrt{M})\rt|^2\)d\xi \)^{1/2}, \label{H2P} \ema where $\hat{f}=\hat{f}(\xi,v)$ denotes the Fourier transform of $f(x,v)$ with respect to $x\in \R^3$. Note that it holds $$ \|f\|^2_{H^k_P} =\|f\|^2_{H^k_{x}(L^2_{v})}+\| \nabla_x\Delta_x^{-1}(f,\sqrt{M})\|^2_{H^k_{x}}. $$ For any $f_0\in L^2(\R^3_{x}\times \R^3_v)$, set \begin{equation} e^{\frac{t}{\eps^2}B_\eps}f_0=(\mathcal{F}^{-1}e^{\frac{t}{\eps^2}B_\eps(\xi)}\mathcal{F})f_0. \end{equation} By Lemma \ref{SG_1}, we have $$ \|e^{\frac{t}{\eps^2}B_\eps}f_0\|^2_{H^k_P}=\int_{\R^3}(1+|\xi|^2)^k\|e^{\frac{t}{\eps^2}B_\eps(\xi)}\hat f_0\|^2_\xi d\xi\le \int_{\R^3}(1+|\xi|^2)^k\|\hat f_0\|^2_\xi d\xi =\|f_0\|^2_{H^k_P}. $$ This means that the linear operator $\eps^{-2} B_\eps$ generates a strongly continuous contraction semigroup $e^{\frac{t}{\eps^2}B_\eps}$ in $H^k_P$, and therefore, $f(t,x,v)=e^{\frac{t}{\eps^2}B_\eps}f_0$ is a global solution to the linearized VPFP system \eqref{LVPFP1} for any $f_0\in H^k_P$.

Now we are going to establish the first and second order fluid approximations of the semigroup $e^{\frac{t}{\eps^2}B_\eps}$. First of all, we introduce the following linearized DDP system associated with \eqref{n_1}--\eqref{n_2}: \begin{equation} \dt n=Dn,\quad n(0,x)=n_0(x),\label{LDDP} \end{equation} where \begin{equation} D=-I+\Delta_x . \label{A}\end{equation} It is easy to verify that the operator $D$ generates a strongly continuous contraction semigroup $e^{tD}$ in $H^k_x$. Thus, $n(t,x)=e^{tD}n_0$ is a global solution to the linearized DDP system \eqref{LDDP} for any $n_0\in H^k_x$.

We now give a preparatory lemma which will be used to study the fluid dynamical approximations of the semigroup $e^{\frac{t}{\eps^2}B_\eps}$. \begin{lem} \label{S2a} For any $f_0\in N_0$, we have \begin{equation} \|S_{2}(t,\xi,\eps)f_0\|_{\xi}\le C\(\eps(1+|\xi|)1_{\{\eps(1+|\xi|)\le r_0\}}+1_{\{\eps(1+|\xi|)\ge r_0\}}\)e^{-\frac{at}{\eps^2}}\| f_0\|_{\xi} .\label{S5} \end{equation} \end{lem} \begin{proof} Define the projection $P_{\eps}(\xi)$ by $$P_{\eps}(\xi)f= \(f,\overline{\psi_0(\xi,\eps)}\)_{\xi} \psi_0(\xi,\eps),\quad \forall f\in L^2(\R^3_v),$$ where $ \psi_0(\xi,\eps)$ is the eigenfunction of $B_{\eps}(\xi)$ for $\eps(1+|\xi|)\le r_0$ defined by \eqref{eigf1}. By Theorem \ref{rate1}, we can assert that \begin{equation} S_1(t,\xi,\eps)=S(t,\xi,\eps)1_{\{\eps(1+|\xi|)\le r_0\}}P_{\eps}(\xi). \label{S1a} \end{equation} Indeed, it follows from \eqref{V_3} that for $\eps(1+|\xi|)\le r_0$, \bmas e^{\frac{t}{\eps^2}B_{\eps}(\xi)}P_{\eps}(\xi)f =&\frac1{2\pi i}\int^{\kappa+ i\infty}_{\kappa- i\infty} e^{ \frac{\lambda t}{\eps^2}}(\lambda-B_{\eps}(\xi))^{-1}P_{\eps}(\xi)fd\lambda\\ =&\frac1{2\pi i}\int^{\kappa+ i\infty}_{\kappa- i\infty} e^{ \frac{\lambda t}{\eps^2}}(\lambda-\lambda_0(|\xi|,\eps))^{-1}P_{\eps}(\xi)fd\lambda\\ =&e^{\frac{1}{\eps^2}\lambda_0(|\xi|,\eps)t}(f,\overline{\psi_0(\xi,\eps)})_\xi \psi_0(\xi,\eps)=S_1(t,\xi,\eps)f .
\emas By \eqref{S1a}, we can decompose $S_2(t,\xi,\eps)$ into \begin{equation} S_2(t,\xi,\eps)=S_{21}(t,\xi,\eps)+S_{22}(t,\xi,\eps),\label{S3} \end{equation} where \bma S_{21}(t,\xi,\eps)&=S(t,\xi,\eps)1_{\{\eps(1+|\xi|)\le r_0\}}\(I- P_{\eps}(\xi)\),\\ S_{22}(t,\xi,\eps)&=S(t,\xi,\eps)1_{\{\eps(1+|\xi|)\ge r_0\}}. \ema It holds that \begin{equation} \|S_{2j}(t,\xi,\eps)f\|_{\xi}\le Ce^{-\frac{at}{\eps^2}}\| f\|_{\xi},\quad j=1,2. \end{equation} Since for any $f_0\in N_0$, $$ f_0-P_{\eps}(\xi)f_0 =(f_0,h_0(\xi))_\xi h_0(\xi)-(f_0,\overline{\psi_0(\xi,\eps)})_\xi \psi_0(\xi,\eps), $$ where $h_0(\xi)=\frac{|\xi|}{\sqrt{1+|\xi|^2}} \sqrt{M}$, it follows from \eqref{eigf1} and the fact $S_{21}(t,\xi,\eps)=S_{21}(t,\xi,\eps)(I- P_{\eps}(\xi))$ that \begin{equation} \|S_{21}(t,\xi,\eps)f_0\|_{\xi}\le C\eps(1+|\xi|)1_{\{\eps(1+|\xi|)\le r_0\}}e^{-\frac{at}{\eps^2}}\|f_0\|_{\xi}.\label{S4} \end{equation} By combining \eqref{S3}--\eqref{S4}, we obtain \eqref{S5}. \end{proof}

Then, we have the first order fluid approximation of the semigroup $e^{\frac{t}{\eps^2}B_\eps}$ as follows. \begin{lem} \label{fl1} For any integer $k\ge 0$ and any $f_0\in H^{k+1}_{P}$, we have \begin{equation} \left\|e^{\frac{t}{\eps^2}B_\eps}f_0-e^{tD}n_0\sqrt{M}\right\|_{H^k_{P}} \le C\(\eps e^{-\frac{t}{2}}+e^{-\frac{at}{\eps^2}}\)\|f_0\|_{H^{k+1}_{P}}, \label{limit1} \end{equation} where $n_0=(f_0,\sqrt{M}),$ and $a$, $C>0$ are two constants independent of $\eps$. Moreover, if $f_0\in N_0$, then \begin{equation} \left\|e^{\frac{t}{\eps^2}B_\eps}f_0-e^{tD}n_0\sqrt{M}\right\|_{H^k_{P}} \le C\eps e^{-\frac{t}{2}}\|f_0\|_{H^{k+1}_{P}}. \label{limit1a} \end{equation} \end{lem} \begin{proof} For simplicity, we only prove the case of $k=0$. By Theorem \ref{rate1} and taking $\eps\le r_0/2$ with $r_0>0$ given in Theorem \ref{spect3}, we have \bma \left\|e^{\frac{t}{\eps^2}B_\eps}f_0-e^{tD}n_0\sqrt{M}\right\|^2_{L^2_{x}(L^2_{v})}=&\int_{\R^3}\left\|e^{\frac{t}{\eps^2}B_{\eps}(\xi)}\hat{f}_0-e^{tD(\xi)}\hat{n}_0\sqrt{M}\right\|^2d\xi \notag\\ \le &3\int_{1+|\xi|\le \frac{r_0}{\eps}} \left\|S_1(t,\xi,\eps)\hat{f}_0-e^{tD(\xi)}\hat{n}_0\sqrt{M}\right\|^2d\xi\notag\\ &+3\int_{\R^3}\left\|S_2(t,\xi,\eps)\hat{f}_0\right\|^2d\xi+3\int_{1+|\xi|\ge \frac{r_0}{\eps}} \left|e^{tD(\xi)}\hat{n}_0\right|^2d\xi\notag\\ =&I_1+I_2+I_3, \label{S_4} \ema where $D(\xi)=-(1+|\xi|^2)$ is the Fourier transform of the operator $D$. We estimate $I_j$, $j=1,2,3$ as follows. Since from \eqref{S1}, \eqref{eigen_2} and \eqref{eigf1}, $$ S_1(t,\xi,\eps)\hat{f}_0=e^{-(1+|\xi|^2)t+O(\eps^2(1+|\xi|^2)^2)t}\(\hat{n}_0\sqrt{M}+O(\eps\sqrt{1+|\xi|^2})\), $$ it follows that \bma I_1=& 3\int_{1+|\xi|\le \frac{r_0}{\eps}} \Big\|e^{-(1+|\xi|^2)t+O(\eps^2(1+|\xi|^2)^2)t}\(\hat{n}_0 \sqrt{M}+O(\eps\sqrt{1+|\xi|^2})\) \notag\\ &\qquad-e^{-(1+|\xi|^2)t }\hat{n}_0\sqrt{M}\Big\|^2d\xi\notag\\ \le & C\eps^2\int_{1+|\xi|\le \frac{r_0}{\eps}}e^{-(1+|\xi|^2)t}\( r^2_0(1+|\xi|^2)^3t^2|\hat{n}_0|^2 +(1+|\xi|^2)\|\hat{f}_0\|^2\) d\xi\notag\\ \le& C\eps^2e^{-t}\|f_0\|^2_{H^1_x(L^2_{v})}.\label{I1} \ema By \eqref{S2}, we have \begin{equation} I_2\le C\intr e^{-2\frac{at}{\eps^2}}\|\hat{f}_0\|^2_\xi d\xi \le Ce^{-2\frac{at}{\eps^2}}\(\|f_0\|^2_{L^2_x(L^2_v)}+\|\Tdx\Phi_0\|^2_{L^2_{x}}\). \end{equation} For $I_3$, it holds that \begin{equation} I_3=3\int_{1+|\xi|\ge \frac{r_0}{\eps}}e^{-2(1+|\xi|^2)t }|\hat{n}_0|^2 d\xi \le C e^{-\frac{r_0^2}{\eps^2}t }\|f_0\|^2_{L^2_x(L^2_{v})}.
\label{S_6} \end{equation} Therefore, it follows from \eqref{S_4}--\eqref{S_6} that \begin{equation} \left\|e^{\frac{t}{\eps^2}B_\eps}f_0-e^{tD}n_0\sqrt{M}\right\|^2_{L^2_{x}(L^2_{v})} \le C\(\eps^2e^{-t}+e^{-2\frac{at}{\eps^2}}\)\|f_0\|^2_{H^1_{P}}. \label{aaa} \end{equation} Similarly, we can obtain \bma &\left\|\Tdx\Delta^{-1}_x(e^{\frac{t}{\eps^2}B_\eps}f_0,\sqrt{M})-\Tdx\Delta^{-1}_xe^{tD}n_0\right\|^2_{L^2_x}\notag\\ \le &3\int_{1+|\xi|\le \frac{r_0}{\eps}} \left|\frac{\xi}{|\xi|^2}(S_1(t,\xi,\eps)\hat{f}_0,\sqrt{M})-\frac{\xi}{|\xi|^2}e^{tD(\xi)}\hat{n}_0\right|^2d\xi\notag\\ &+3\int_{\R^3} \frac{1}{|\xi|^2}\left|(S_2(t,\xi,\eps)\hat{f}_0,\sqrt{M})\right|^2d\xi+3\int_{1+|\xi|\ge \frac{r_0}{\eps}} \frac{1}{|\xi|^2} |e^{tD(\xi)}\hat{n}_0|^2d\xi\notag\\ \le & C\(\eps^2e^{-t}+e^{-2\frac{at}{\eps^2}}\)\|f_0\|^2_{H^1_{P}}. \label{bbb} \ema Combining \eqref{aaa} and \eqref{bbb}, we obtain \eqref{limit1}. Then, we deal with \eqref{limit1a}. Since $f_0\in N_0,$ by Lemma \ref{S2a} we have \bma I_2&\le C\int_{1+|\xi|\le \frac{r_0}{\eps}}\eps^2 (1+|\xi|^2)e^{-2\frac{at}{\eps^2}}\| \hat{f}_0\|^2_{\xi}d\xi+C\int_{1+|\xi|\ge \frac{r_0}{\eps}}e^{-2\frac{at}{\eps^2}}\| \hat{f}_0\|^2_{\xi}d\xi \notag\\ &\le C\eps^2 e^{-2\frac{at}{\eps^2}}\(\|f_0\|^2_{H^1_x(L^2_v)}+\|\Tdx\Phi_0\|^2_{L^2_{x}}\).\label{S_7a} \ema Moreover, we can bound $I_3$ by \begin{equation} I_3=3\int_{1+|\xi|\ge \frac{r_0}{\eps}}e^{-2(1+|\xi|^2)t }|\hat{n}_0|^2 d\xi \le C\eps^2 e^{-\frac{r_0^2}{\eps^2}t }\|f_0\|^2_{H^1_x(L^2_{v})}. \label{S_6a} \end{equation} Thus, by \eqref{I1}, \eqref{S_7a} and \eqref{S_6a} we obtain \eqref{limit1a}. This proves the lemma. \end{proof} We have the second order fluid approximation of the semigroup $e^{\frac{t}{\eps^2}B_\eps}$ as follow. \begin{lem}\label{fl2} For any integer $k\ge 0$ and any $f_0\in H^{k+1}_{P}$ satisfying $P_0f_0=0$, \begin{equation} \bigg\|\frac1{\eps}e^{\frac{t}{\eps^2}B_\eps}f_0+e^{tD}\divx m_0\sqrt{M}\bigg\|_{H^k_{P}} \le C\(\eps t^{-\frac12}e^{-\frac{t}{2}}+ \frac1{\eps}e^{-\frac{at}{\eps^2}}\)\|f_0\|_{H^{k+1}_{x}(L^2_v)}, \label{limit2} \end{equation} where $m_0=(f_0,v\sqrt{M}),$ and $a>0$, $C>0$ are two constants independent of $\eps$. \end{lem} \begin{proof} For simplicity, we only prove the case of $k=0$. By Theorem \ref{rate1} and taking $\eps\le r_0/2$ with $r_0>0$ given in Lemma \ref{eigen_4}, we have \bma &\left\|\frac1{\eps}e^{\frac{t}{\eps^2}B_\eps}f_0+e^{tD}\divx m_0\sqrt{M}\right\|^2_{L^2_{x}(L^2_{v})}\notag\\ \le &3\int_{1+|\xi|\le \frac{r_0}{\eps}} \left\|\frac1{\eps}S_1(t,\xi,\eps)\hat{f}_0+e^{tD(\xi)}(\hat{m}_0\cdot\xi)\sqrt{M}\right\|^2d\xi\notag\\ &+3\int_{\R^3}\left\|\frac1{\eps}S_2(t,\xi,\eps)\hat{f}_0\right\|^2d\xi+3\int_{1+|\xi|\ge \frac{r_0}{\eps}} \left|e^{tD(\xi)}(\hat{m}_0\cdot\xi)\right|^2d\xi\notag\\ =:&I_1+I_2+I_3. \label{S_7} \ema We estimate $I_j$, $j=1,2,3$ as follows. Since for any $f_0\in L^2_{x}(L^2_{v})$ satisfying $P_0f_0=0$, $$ S_1(t,\xi,\eps)\hat{f}_0=e^{-(1+|\xi|^2)t+O(\eps^2(1+|\xi|^2)^2)t}\[-\eps(\hat{m}_0\cdot\xi)\sqrt{M}+O\(\eps^2(1+|\xi|^2)\)\], $$ it follows that \bma I_1=&3\int_{1+|\xi|\le \frac{r_0}{\eps}} \Big\|e^{-(1+|\xi|^2)t+O(\eps^2(1+|\xi|^2)^2)t}\[(\hat{m}_0\cdot\xi)\sqrt{M}-O\(\eps(1+|\xi|^2)\)\]\notag\\ &\qquad-e^{-(1+|\xi|^2)t}(\hat{m}_0\cdot\xi)\sqrt{M}\Big\|^2d\xi\notag\\ \le & C\eps^2\int_{1+|\xi|\le \frac{r_0}{\eps}}e^{-(1+|\xi|^2)t}\( r_0^2(1+|\xi|^2)^3t^2|(\hat{m}_0\cdot\xi)|^2 +(1+|\xi|^2)^2\|\hat{f}_0\|^2 \)d\xi\notag\\ \le& C\eps^2t^{-1} e^{-t}\| f_0\|^2_{H^1_{x}(L^2_{v})}. 
\ema By \eqref{S2} and $P_0f_0=0$, we have \begin{equation} I_2 \le C\int_{\R^3}\frac1{\eps^2} e^{-2\frac{at}{\eps^2}}\|\hat{f}_0\|^2 d\xi \le C\frac1{\eps^2}e^{-2\frac{at}{\eps^2}}\|f_0\|^2_{L^2_{x}(L^2_{v})}. \end{equation} For $I_3$, it holds that \begin{equation} I_3= 3\int_{1+|\xi|\ge \frac{r_0}{\eps}} e^{-2(1+|\xi|^2)t }|(\hat{m}_0\cdot\xi)|^2d\xi \le C e^{-\frac{r_0^2}{\eps^2}t }\|f_0\|^2_{H^1_{x}(L^2_{v})}. \label{S_9} \end{equation} Therefore, it follows from \eqref{S_7}--\eqref{S_9} that \begin{equation} \left\|\frac1{\eps}e^{\frac{t}{\eps^2}B_\eps}f_0+e^{tD}\divx m_0\sqrt{M}\right\|^2_{L^2_{x}(L^2_{v})} \le C\(\eps^2t^{-1} e^{-t}+ \frac1{\eps^2}e^{-2\frac{at}{\eps^2}}\)\|f_0\|^2_{H^1_{x}(L^2_v)}. \label{expan1} \end{equation} Similarly, we have \bma &\left\|\frac1{\eps}\Tdx\Delta^{-1}_x(e^{\frac{t}{\eps^2}B_\eps}f_0,\sqrt{M})-\Tdx\Delta^{-1}_xe^{tD}\divx m_0\right\|^2_{L^2_x}\notag\\ \le &3\int_{1+|\xi|\le \frac{r_0}{\eps}} \left|\frac{\xi}{\eps(1+|\xi|)^2}(S_1(t,\xi,\eps)\hat{f}_0,\sqrt{M})-\frac{\xi}{|\xi|^2}e^{tD(\xi)}(\hat{m}_0\cdot\xi)\right|^2d\xi\notag\\ &+3\int_{\R^3} \frac{1}{\eps^2|\xi|^2}\left| (S_2(t,\xi,\eps)\hat{f}_0,\sqrt{M})\right|^2d\xi+3\int_{1+|\xi|\ge \frac{r_0}{\eps}} \frac{1}{|\xi|^2}\left|e^{tD(\xi)}(\hat{m}_0\cdot\xi)\right|^2d\xi\notag\\ \le &C\(\eps^2t^{-1} e^{-t}+ \frac1{\eps^2}e^{-2\frac{at}{\eps^2}}\)\|f_0\|^2_{H^1_{x}(L^2_v)}. \label{expan2} \ema Combining \eqref{expan1} and \eqref{expan2}, we obtain \eqref{limit2}. This proves the lemma. \end{proof} \section{Diffusion limit} \setcounter{equation}{0} \label{sect4} In this section, we study the diffusion limit of the solution to the nonlinear VPFP system \eqref{VPFP4}--\eqref{VPFP6} based on the fluid approximations of the solution to the linear VPFP system given in Section 3. Since the operators $B_{\eps}$ and $A$ generate contraction semigroups in $H^k_P$ and $H^k_x$ $(k\ge 0)$ respectively, the solution $f_{\eps}(t)$ to the VPFP system \eqref{VPFP4}--\eqref{VPFP6} and the solution $n(t)$ to the DDP system \eqref{n_1}--\eqref{n_2} can be represented by \bma f_{\eps}(t)&=e^{\frac{t}{\eps^2}B_{\eps}}f_0+\int^t_0\frac1{\eps}e^{\frac{t-s}{\eps^2}B_{\eps}}G(f_{\eps})(s)ds, \label{fe} \\ n(t)&=e^{tD}n_0-\intt e^{(t-s)D}\divx(n\Tdx\Phi )(s)ds, \label{ne} \ema where $$n_0=\intr f_0\sqrt{M}dv.$$ By taking inner product between $\sqrt M$ and \eqref{VPFP4}, we obtain \begin{equation} \dt n_{\eps}+\frac{1}{\eps}\divx m_{\eps}=0, \label{G_3a} \end{equation} where $$n_{\eps}=(f_{\eps},\sqrt M), \quad m_{\eps}=(f_{\eps},v\sqrt M).$$ Taking the microscopic projection $ P_1$ to \eqref{VPFP4}, we have \bma &\dt( P_1f_{\eps})+ \frac{1}{\eps}P_1(v\cdot\Tdx P_1f_{\eps})-\frac{1}{\eps}v\sqrt M\cdot\Tdx\Phi_{\eps}-\frac{1}{\eps^2}L( P_1f_{\eps})\notag\\ =&- \frac{1}{\eps}P_1(v\cdot\Tdx P_0f_{\eps})+ \frac{1}{\eps}P_1 G(f_{\eps}).\label{GG2} \ema By \eqref{GG2}, we can express the microscopic part $ P_1f_{\eps}$ as \begin{equation} P_1f_{\eps}= L^{-1}[\eps^2\dt(P_1f_{\eps})+ \eps P_1(v\cdot\Tdx P_1f_{\eps})- \eps P_1 G ]-\eps v\sqrt{M}\cdot (\Tdx n_{\eps}-\Tdx \Phi_{\eps}). \label{p_c}\end{equation} Substituting \eqref{p_c} into \eqref{G_3a}, we obtain \begin{equation} \dt n_{\eps}+ \eps\dt\divx m_{\eps} + n_{\eps}- \Delta_x n_{\eps}=-\divx(v\cdot\Tdx P_1f_{\eps}-P_1G ,v\sqrt M). 
\label{G_9a} \end{equation} Define the energy $E(f_{\eps})$ and the dissipation $D(f_{\eps})$ by \bmas E(f_{\eps})&= \|f_{\eps}(t)\|^2_{H^4_x(L^2_v)}+\|\Tdx\Phi_{\eps}(t)\|^2_{H^4_x},\\ D(f_{\eps})&= \frac{1}{\eps^2}\|P_1f_{\eps}\|^2_{H^4_x(L^2_\sigma)}+ \| P_0f_{\eps}\|^2_{H^4_x(L^2_v)} +\| \Tdx\Phi_{\eps}\|^2_{H^4_x}, \emas where $\Tdx\Phi_{\eps}=\Tdx \Delta^{-1}_x(f_{\eps},\sqrt M)$. First, we establish the following energy estimate for the solution $f_{\eps}$ to the system \eqref{VPFP4}--\eqref{VPFP6}. \begin{lem}\label{energy1}For any $\eps\ll 1$, there exists a small constant $\delta_0>0$ such that if $\|f_{0}\|_{H^4_x(L^2_v)}+\|(f_0,\sqrt{M})\|_{L^1_x}\le \delta_0$, then the system \eqref{VPFP4}--\eqref{VPFP6} admits a unique global solution $f_{\eps}(t)= f_{\eps}(t,x,v) $ satisfying the following energy estimate: \begin{equation} E(f_{\eps}(t))+ \int^t_0 D(f_{\eps}(s))ds \le C\delta_0^2, \label{G_1} \end{equation} where $C>0$ is a constant independent of $\eps$. Moreover, $f_{\eps}(t)$ has the following time-decay rate: \begin{equation} E(f_{\eps}(t)) \le C\delta_0^2 e^{-t}. \label{G_11} \end{equation} \end{lem} \begin{proof} First, we establish the macroscopic energy estimate of $f_{\eps}$. Taking the inner product between $\dxa n_{\eps}$ and $\dxa\eqref{G_9a}$ with $|\alpha|\le 3$, we have \bma &\Dt \|\dxa n_{\eps}\|^2_{L^2_x}+2\Dt \eps\intr\dxa\divx m_{\eps}\dxa n_{\eps} dx+\frac32\(\|\dxa n_{\eps}\|^2_{L^2_x}+\|\dxa \Tdx n_{\eps}\|^2_{L^2_x}\)\notag\\ \le& C\|\dxa\Tdx P_1f_{\eps}\|^2_{L^2_{x}(L^2_{v})}+CE(f_{\eps})D(f_{\eps}).\label{en_5} \ema Similarly, taking the inner product between $-\dxa \Phi_{\eps}$ and $\dxa\eqref{G_9a}$ with $|\alpha|\le 3$ we obtain \bma &\Dt \|\dxa \Tdx\Phi_{\eps}\|^2_{L^2_x}+2\Dt \eps\intr\dxa m_{\eps}\dxa \Tdx\Phi_{\eps} dx+\frac32(\|\dxa \Tdx\Phi_{\eps}\|^2_{L^2_x}+\|\dxa n_{\eps}\|^2_{L^2_x})\notag\\ \le& C\(\|\dxa P_1f_{\eps}\|^2_{L^2_{x}(L^2_{v})}+\|\dxa\Tdx P_1f_{\eps}\|^2_{L^2_{x}(L^2_{v})}\)+CE(f_{\eps})D(f_{\eps}).\label{en_6} \ema Taking the summation of $\eqref{en_5}+\eqref{en_6}$ with $|\alpha|\le 3$, we obtain \bma &\Dt \(\|n_{\eps}\|^2_{H^3_x} +\|\Tdx\Phi_{\eps}\|^2_{H^3_x}\)+\frac32\(\|\Tdx n_{\eps}\|^2_{H^3_x}+2\| n_{\eps}\|^2_{H^3_x} +\| \Tdx\Phi_{\eps}\|^2_{H^3_x}\)\notag\\ &+\Dt \sum_{ |\alpha|\le 3}\eps\(2 \intr\dxa\divx m_{\eps}\dxa n_{\eps} dx+2 \intr\dxa m_{\eps}\dxa \Tdx\Phi_{\eps} dx\)\notag\\ \le & CE(f_{\eps})D(f_{\eps})+C\| P_1f_{\eps}\|^2_{H^4_x(L^2_{v})}.\label{E_1a} \ema Next, we deal with the microscopic energy estimate of $f_{\eps}$. 
By taking the inner product between $\dxa f_{\eps}$ and $\dxa\eqref{VPFP4}$ with $|\alpha|\le 4$, we have \bma &\frac12\Dt \(\|\dxa f_{\eps}\|^2_{L^2_{x}(L^2_{v})}+\|\dxa\Tdx\Phi_{\eps}\|^2_{L^2_x}\)-\frac{1}{\eps^2}\int_{\R^3}(L\dxa f_{\eps})\dxa f_{\eps}dxdv \notag\\ =&\frac{1}{2\eps}\int_{\R^6} \dxa (v\cdot\Tdx \Phi_{\eps} f_{\eps})\dxa P_1f_{\eps}dxdv-\frac{1}{\eps}\int_{\R^6}\dxa (\Tdx \Phi_{\eps}\cdot \Tdv f_{\eps})\dxa P_1f_{\eps}dxdv\notag\\ \le &\frac{\mu}{2\eps^2}\| \dxa P_1f_{\eps}\|^2_{L^2_{x}(L^2_{\sigma})}+ CE(f_{\eps})D(f_{\eps}),\label{G_0} \ema which leads to \bma \Dt \(\|f_{\eps}\|^2_{H^4_x(L^2_{v})}+\|\Tdx\Phi_{\eps}\|^2_{H^4_x}\)+\frac{\mu}{\eps^2}\| P_1f_{\eps}\|^2_{H^4_x(L^2_{\sigma})} \le CE(f_{\eps})D(f_{\eps}).\label{G_00} \ema Taking the summation of $C_1\eqref{E_1a}+\eqref{G_00}$ with $C_1>0 $ large and $\eps>0$ small to get \bma &\Dt \(\|f_{\eps}\|^2_{H^4_x(L^2_{v})}+C_1\|n_{\eps}\|^2_{H^3_x}+(C_1+1)\|\Tdx\Phi_{\eps}\|^2_{H^4_x}\)\notag\\ &+\Dt C_1\sum_{ |\alpha|\le 3}\eps\(2 \intr\dxa\divx m_{\eps}\dxa n_{\eps} dx+2 \intr\dxa m_{\eps}\dxa \Tdx\Phi_{\eps} dx\)\notag\\ &+ \frac{\mu}{\eps^2}\|P_1f_{\eps}\|^2_{H^4_x(L^2_\sigma)}+\frac32C_1\(\|\Tdx n_{\eps}\|^2_{H^3_x}+2\| n_{\eps}\|^2_{H^3_x} +\| \Tdx\Phi_{\eps}\|^2_{H^3_x}\) \notag\\ \le& CE(f_{\eps})D(f_{\eps}) . \label{G_10} \ema Assume that $E(f_{\eps})\le C\delta_0$. It follows from \eqref{G_10} that for $\eps>0$ and $\delta_0>0$ sufficiently small, there exist two functionals $E_1(f_\eps)\sim E(f_\eps)$ and $D_1(f_\eps)\sim D(f_\eps)$ such that \begin{equation} \Dt E_1(f_\eps)+ D_1(f_\eps)\le 0,\quad D_1(f_\eps)\ge E_1(f_\eps). \label{G_2} \end{equation} Since \bmas \|\Tdx\Phi_0\|^2_{H^4_x}&=\int_{\R^3}\frac1{|\xi|^2}(1+|\xi|^2)^2|( \hat{f}_0,\sqrt{M})|^2d\xi \\ &\le C\sup_{|\xi|\le1}|( \hat{f}_0,\sqrt{M})|^2\int_{|\xi|\le1} \frac1{|\xi|^2}d\xi+\int_{|\xi|\ge 1} (1+|\xi|^2)^2|( \hat{f}_0,\sqrt{M})|^2d\xi \\ &\le C\(\|f_0\|^2_{H^4_x(L^2_v)}+\|( f_0,\sqrt{M})\|^2_{L^1_x}\), \emas we can prove \eqref{G_1} and \eqref{G_11} by \eqref{G_2} for $\eps>0$ and $\delta_0>0$ sufficiently small. This completes the proof of the lemma. \end{proof} \begin{lem}\label{energy2}For any $\eps\ll 1$, there exists a small constant $\delta_0>0$ such that if $\|f_{0}\|_{H^4_x(L^2_v)}+\|\Tdv f_{0}\|_{H^3_x(L^2_v)}+\||v| f_{0}\|_{H^3_x(L^2_v)}+\|(f_0,\sqrt{M})\|_{L^1_x}\le \delta_0$, then the solution $f_{\eps}(t)= f_{\eps}(t,x,v) $ to the system \eqref{VPFP4}--\eqref{VPFP6} satisfies \bma \|f_{\eps}\|^2_{H^4_x(L^2_v)}+\|\Tdv f_{\eps}\|^2_{H^3_x(L^2_v)}+\||v| f_{\eps}\|^2_{H^3_x(L^2_v)}+\|\Tdx\Phi_{\eps}\|^2_{H^4_x} \le C\delta^2_0e^{-t} , \label{G_10a} \ema where $C>0$ is a constant independent of $\eps$. 
\end{lem} \begin{proof} By taking $P_1$ to \eqref{VPFP4} and noting that $P_1Lf_{\eps}=LP_1f_{\eps}$, we have \bma &\dt P_1 f_{\eps}+\frac{1}{\eps}v\cdot\Tdx P_1f_{\eps}-\frac{1}{\eps^2}\Delta_v P_1f_{\eps}+\frac{1}{\eps^2}\frac{|v|^2}4P_1 f_{\eps}-\frac{1}{\eps^2}\frac32 P_1 f_{\eps}\notag\\ =&\frac{1}{\eps}v\sqrt{M}\cdot\Tdx \Phi_{\eps}-\frac{1}{\eps}v\cdot\Tdx P_0 f_{\eps}+\frac{1}{\eps}P_0(v\cdot\Tdx P_1 f_{\eps})\notag\\ &+\frac1{2\eps}v\cdot\Tdx\Phi_{\eps} f_{\eps} + \frac{1}{\eps}\Tdx\Phi_{\eps}\cdot \Tdv f_{\eps}.\label{b_1a} \ema Taking the inner product between $|v|^2\dxa P_1 f_{\eps}$ and $\dxa\eqref{b_1a}$ with $|\alpha|\le 3$, we obtain \bma &\Dt \||v| P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}+ \frac{1}{\eps^2}\(\||v|\Tdv P_1 f_{\eps}\|^2_{H^3_x(L^2_{v})}+\frac14\||v|^2 P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}\)\notag\\ \le& C\(\|\Tdx f_{\eps}\|^2_{H^3_x(L^2_{v})}+\| \Tdx\Phi_{\eps}\|^2_{H^3_x}\)+\frac{C}{\eps^2}\| |v|P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}\notag\\ & +C\|\Tdx\Phi_{\eps}\|^2_{H^{3}_x}\(\||v| f_{\eps}\|^2_{H^{3}_x(L^2_v)}+\|\Tdv f_{\eps}\|^2_{H^{3}_x(L^2_v)}\).\label{aa} \ema Taking $\dvb$ with $|\beta|=1$ to \eqref{b_1a}, we get \bma &\dt \dvb P_1f_{\eps}+\frac{1}{\eps}v\cdot\Tdx \dvb P_1f_{\eps}-\frac{1}{\eps^2}\Delta_v \dvb P_1f_{\eps}+\frac{1}{\eps^2}\frac{|v|^2}2\dvb P_1f_{\eps}-\frac{1}{\eps^2}\frac32\dvb P_1f_{\eps}\notag\\ =&\frac{1}{\eps}\dxb P_1f_{\eps}-\frac{1}{\eps^2}v^\beta P_1f_{\eps}+\frac{1}{\eps}\dvb(v\sqrt{M})\cdot\Tdx\Phi_{\eps}-\frac{1}{\eps}\dvb (v\cdot\Tdx P_0 f_{\eps})+\frac{1}{\eps}\dvb P_0(v\cdot\Tdx P_1 f_{\eps})\notag\\ &+\frac1{2\eps}v\cdot\Tdx\Phi_{\eps} \dvb f_{\eps}+ \frac1{2\eps}\dxb\Phi_{\eps} f_{\eps} + \frac{1}{\eps}\Tdx\Phi_{\eps}\cdot \Tdv\dvb f_{\eps}.\label{b_1} \ema Taking the inner product between $\dxa\dvb P_1f_{\eps}$ and $\dxa\eqref{b_1}$ with $|\alpha|\le 3$, we obtain \bma &\Dt \|\Tdv P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}+ \frac{1}{\eps^2}\(\| \Tdv^2 P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}+\||v|\Tdv P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}\)\notag\\ \le& C \(\| \Tdx f_{\eps}\|^2_{H^3_x(L^2_{v})}+\|\Tdx\Phi_{\eps}\|^2_{H^3_x}\)+\frac{C}{\eps^2}\(\| \Tdv P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}+\||v| P_1f_{\eps}\|^2_{H^3_x(L^2_{v})}\)\notag\\ & + C\|\Tdx\Phi_{\eps}\|^2_{H^{3}_x}\(\|f_{\eps}\|^2_{H^{3}_x(L^2_v)}+\|\Tdv f_{\eps}\|^2_{H^{3}_x(L^2_v)}\).\label{aa1} \ema Taking the summation of $C_2 \eqref{G_10}+\eqref{aa}+\eqref{aa1}$ with $C_2>0$ large and taking $\delta_0>0$ sufficiently small, we can prove \eqref{G_10a} by a similar argument as \eqref{G_10}. \end{proof} By a similar argument as Lemma \ref{energy1}, we can show \begin{lem}\label{energy3} There exists a small constant $\delta_0>0$ such that if $\|n_0\|_{H^4_x}+\|n_{0}\|_{L^1_x}\le \delta_0$, then the system \eqref{n_1}--\eqref{n_2} admits a unique global solution $n(t)$ satisfying \begin{equation} \|n (t)\|_{H^4_x}+\|\Tdx\Phi(t)\|_{H^4_x}\le C\delta_0 e^{-\frac t2}, \label{DDP} \end{equation} where $\Tdx\Phi(t)=\Tdx\Delta^{-1}_xn(t)$ and $C>0$ is a constant. \end{lem} Theorem \ref{existence} is directly follows from Lemmas \ref{energy1}. With the help of Lemmas \ref{energy1}--\eqref{energy3}, we can prove Theorem \ref{thm1.1} as follows. \begin{proof}[\underline{\textbf{Proof of Theorem \ref{thm1.1}}}] Define \begin{equation} Q_{\eps}(t)=\sup_{0\le s\le t}\(\eps e^{-\frac{s}{2}}+e^{-\frac{as}{\eps^2}}\)^{-1} \|f_{\eps}(s)-n(s)\sqrt M \|_{ H^{2}_P}, \label{Qt} \end{equation} where the norm $\|\cdot\|_{H^2_P}$ is defined by \eqref{H2P}. 
We claim that \begin{equation} Q_{\eps}(t) \le C\delta_0 ,\quad \forall t>0. \label{limit6} \end{equation} It is easy to verify that the estimate \eqref{limit0} follows from \eqref{limit6}. By \eqref{fe} and \eqref{ne}, we have \bma \|f_{\eps}(t)-n(t)\sqrt{M}\|_{ H^2_P}&\le \left\|e^{\frac{t}{\eps^2}B_{\eps}}f_0-e^{tD}n_0\sqrt{M}\right\|_{ H^2_P}\notag\\ &\quad+\int^t_0\bigg\|\frac1{\eps}e^{\frac{t-s}{\eps^2}B_{\eps}}G(f_{\eps})-e^{(t-s)D}\divx(n\Tdx\Phi )\sqrt{M}\bigg\|_{ H^2_P}ds\notag\\ &=:I_1+I_2. \label{f2} \ema By \eqref{limit1}, we can bound $I_1$ by \begin{equation} I_1\le C\delta_0\(\eps e^{-\frac{t}{2}}+e^{-\frac{at}{\eps^2}}\).\end{equation} To estimate $I_2$, we decompose \bma I_2&\le \int^t_0\bigg\|\frac1{\eps}e^{\frac{t-s}{\eps^2}B_{\eps}}G(f_{\eps})-e^{(t-s)D}\divx (n_{\eps}\Tdx\Phi_{\eps} )\sqrt{M}\bigg\|_{ H^2_P}ds \notag\\ &\quad+\intt\left\|e^{(t-s)D}\[\divx (n_{\eps}\Tdx\Phi_{\eps} )-\divx(n\Tdx\Phi )\]\sqrt{M}\right\|_{ H^2_P}ds \notag\\ &=:I_3+I_4. \label{l0} \ema Since $(G(f_{\eps}),\sqrt{M})=0$ and $(G(f_{\eps}),v\sqrt{M})=n_{\eps}\Tdx\Phi_{\eps},$ it follows from \eqref{limit2} that \bma I_3\le &C \intt\(\eps (t-s)^{-\frac{1}{2}} e^{-\frac{t-s}{2}}+ \frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}}\)\|G(f_{\eps})\|_{H^3_x(L^2_v)}ds\notag\\ \le &C \delta_0^2\int^t_0\(\eps (t-s)^{-\frac{1}{2}}e^{-\frac{t-s}{2}}+ \frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}}\)e^{-s}ds, \label{l1} \ema where we have used (refer to \eqref{G_10a}) $$\|G(f_{\eps})\|_{H^3_x(L^2_v)}\le C\(\|| v| f_{\eps} \|_{H^3_x(L^2_v)} +\| \Tdv f_{\eps}\|_{H^3_x(L^2_v)}\)\|\Tdx\Phi_{\eps}\|_{H^3_x}\le C\delta_0^2e^{-t}.$$ Denote \begin{equation} J_1=\int^t_0\frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}}e^{-s}ds. \label{e2} \end{equation} It holds that \bma J_1&=\(\int^{t/2}_0+\int^{t}_{t/2}\) \frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}}e^{-s} ds \notag\\ &\le \int^{t/2}_0\frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}} ds+e^{-\frac t2}\int^{t}_{t/2} \frac1{\eps}e^{-\frac{a(t-s)}{\eps^2}} ds \notag\\ &\le C\(\eps e^{-\frac{at}{2\eps^2}}+\eps e^{-\frac t2}\). \label{e1} \ema Thus, it follows from \eqref{l1}--\eqref{e1} that \begin{equation} I_3 \le C \delta_0^2\eps \(e^{-\frac{t}{2}}+e^{-\frac{at}{2\eps^2}}\). \label{l3} \end{equation} For any vector $F=(F_1,F_2,F_3)\in H^k_x$, we have $$ \|e^{tD}\divx F\sqrt{M}\|^2_{H^k_P}\le C\intr e^{-2(1+|\xi|^2)t}(1+|\xi|^2)^{k+1} |\hat{F} |^2d\xi \le Ct^{-1}e^{-t}\|F\|^2_{H^k_x}. $$ This together with \eqref{Qt}, \eqref{DDP} and \eqref{G_10a} imply that \bma I_4\le &C\int^t_0(t-s)^{-\frac12}e^{-\frac{t-s}2}\Big(\|n_{\eps}-n\|_{H^2_x}\|\Tdx\Phi_{\eps}\|_{H^2_x}\notag\\ &\qquad+\| n\|_{H^2_x}\|\Tdx\Phi_{\eps}-\Tdx\Phi\|_{H^2_x}\Big)ds \notag\\ \le &C \delta_0 Q_{\eps}(t)\int^t_0(t-s)^{-\frac12}e^{-\frac{t-s}2}\(\eps e^{-s}+ e^{-(\frac12+\frac{a}{\eps^2})s}\)ds. \label{l2} \ema Denote \begin{equation} J_2=\int^t_0(t-s)^{-\frac12}e^{-\frac{t-s}2} e^{-(\frac12+\frac{a}{\eps^2})s} ds. \end{equation} Since $$\frac{1}{\eps}\sqrt{s}e^{-\frac{a}{\eps^2}s}\le C e^{-\frac{a}{2\eps^2}s},$$ we have \begin{equation} J_2\le C\eps e^{-\frac t2}\(\int^{t/2}_0+\int^{t}_{t/2}\) (t-s)^{-\frac12}s^{-\frac12} ds \le C\eps e^{-\frac t2}. \label{e3} \end{equation} Thus, it follows from \eqref{l2}--\eqref{e3} that \begin{equation} I_4 \le C \delta_0 Q_{\eps}(t)\eps e^{-\frac{t}{2}} . \label{l4} \end{equation} By combining \eqref{f2}--\eqref{l0}, \eqref{l3} and \eqref{l4}, we obtain \begin{equation} Q_{\eps}(t)\le C\delta_0+C\delta_0^2+C\delta_0 Q_{\eps}(t), \end{equation} where $C>0$ is a constant independent of $\eps$. 
By taking $\delta_0>0$ small enough, we can prove \eqref{limit6}. By \eqref{limit1a} and a similar argument as above, we can prove \eqref{limit_1a}. \end{proof} \section{Appendix} \setcounter{equation}{0} \label{Appendix} We give the proof of Theorem \ref{rate1} in the following. \begin{proof}[\underline{\textbf{Proof of Theorem \ref{rate1}}}] For simplicity, we assume that $0<\eps\le r_0/2$. By Theorem 2.7 in \cite{Pazy}, it is sufficient to prove \eqref{B_0} for $f\in D(B_{\eps}(\xi)^2)$ because the domain $D(B_{\eps}(\xi)^2)$ is dense in $L^2_{\xi}(\R^3_v)$. By Corollary 7.5 in \cite{Pazy}, the semigroup $e^{\frac{t}{\eps^2}B_{\eps}(\xi)}$ can be represented by \begin{equation} e^{\frac{t}{\eps^2}B_{\eps}(\xi)}f =\frac1{2\pi i}\int^{\kappa+ i\infty}_{\kappa- i\infty} e^{ \frac{\lambda t}{\eps^2}}(\lambda-B_{\eps}(\xi))^{-1}fd\lambda, \quad f\in D(B_{\eps}(\xi)^2),\,\, \kappa>0. \label{V_3} \end{equation} First of all, we investigate the formula \eqref{V_3} for $\eps(1+|\xi|)\le r_0$. By \eqref{S_8} we have \begin{equation} (\lambda-B_{\eps}(\xi))^{-1} =[\lambda^{-1} P_0 +(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1]-Z_{\eps}(\lambda,\xi)\label{V_1}, \end{equation} with the operator $Z_{\eps}(\lambda,\xi)$ defined by \bma Z_{\eps}(\lambda,\xi)&=[\lambda^{-1} P_0 +(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1] [I+Y_{\eps}(\lambda,\xi)]^{-1} Y_{\eps}(\lambda,\xi), \label{V_1a} \\ Y_{\eps}(\lambda,\xi)&= i\eps \lambda^{-1} P_1(v\cdot\xi)(1+\frac1{|\xi|^2}) P_0 + i\eps P_0(v\cdot\xi) P_1(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1. \label{V_1b} \ema By a similar argument as Lemma 3.1 in \cite{Li3}, we conclude from \eqref{S_3} that the operator $Q_{\eps}(\xi) $ generates a strongly continuous contraction semigroup on $N_0^\bot$, which satisfies for any $t>0$ and $f\in N_0^\bot $ that \begin{equation} \|e^{tQ_{\eps}(\xi)}f\|\leq e^{- t}\|f\|. \label{decay_1} \end{equation} In addition, for any $x>-1 $ and $f\in N_0^\bot $, it holds that \begin{equation} \int^{+\infty}_{-\infty}\|[(x+ i y) P_1-Q_{\eps}(\xi)]^{-1}f\|^2dy \leq \pi(x+1 )^{-1}\|f\|^2.\label{S_4} \end{equation} Substituting \eqref{V_1} into \eqref{V_3}, we have the following decomposition of the semigroup $e^{\frac{t}{\eps^2}B_{\eps}(\xi)}$ \begin{equation} e^{\frac{t}{\eps^2}B_{\eps}(\xi)}f = P_0f+e^{\frac{t}{\eps^2}Q_{\eps}(\xi)} P_1f -\frac1{2\pi i}\int^{\kappa+ i\infty}_{\kappa- i\infty} e^{\frac{\lambda t}{\eps^2}}Z_{\eps}(\lambda,\xi)fd\lambda, \quad \eps(1+|\xi|)\le r_0. \label{V_3a} \end{equation} To estimate the last term on the right hand side of \eqref{V_3a}, let us denote \begin{equation} U_{\kappa,N}=\frac1{2\pi i}\int^{N}_{-N} e^{\frac{t}{\eps^2}(\kappa+ i y)}Z_{\eps}(\kappa+iy,\xi)f dy, \label{UsN} \end{equation} where the constant $N>0$ is chosen large enough so that $N>y_0$ with $y_0$ defined in Lemma \ref{spectrum2}. Since $Z_{\eps}(\lambda,\xi)$ is analytic on the domain ${\rm Re}\lambda>-1/2$ with only finite singularities at $\lambda=\lambda_0(|\xi|,\eps)\in \sigma(B_{\eps}(\xi))$ and $\lambda=0$, we can shift the integration \eqref{UsN} from the line ${\rm Re}\lambda=\kappa>0$ to ${\rm Re}\lambda=-1/2$. 
Then by the Residue Theorem, we obtain \bma U_{\kappa,N} =& {\rm Res} \lt\{e^{\frac{\lambda t}{\eps^2}}Z_{\eps}(\lambda,\xi)f;\lambda_0(|\xi|,\eps)\rt\}+{\rm Res} \lt\{e^{\frac{\lambda t}{\eps^2}}Z_{\eps}(\lambda,\xi)f;0\rt\} \notag\\ &+U_{-\frac12,N}+J_N, \label{UsN2} \ema where Res$\{f(\lambda);\lambda_j\}$ means the residue of $f(\lambda)$ at $\lambda=\lambda_j$ and $$ J_N=\frac1{2\pi i}\(\int^{\kappa+ i N}_{-\frac12+ i N} -\int^{\kappa- i N}_{-\frac12- i N}\) e^{\frac{\lambda t}{\eps^2}}Z_{\eps}(\lambda,\xi)f d\lambda. $$ The right hand side of \eqref{UsN2} is estimated as follows. By Lemma \ref{LP}, it is easy to verify that \begin{equation} \|J_N\|_{\xi}\rightarrow0, \quad\mbox{as}\quad N\rightarrow\infty. \label{UsN2a} \end{equation} Let \begin{equation} \lim_{N\to\infty} U_{-\frac12,N}(t) =U_{-\frac12,\infty}(t) =:\int^{-\frac12+i \infty}_{-\frac12-i \infty} e^{\frac{\lambda t}{\eps^2}}Z_{\eps}(\lambda,\xi)fd\lambda. \label{UsN3} \end{equation} Since it follows from Lemma \ref{LP} that $\|Y_{\eps}(-\frac12+ i y,\xi)\|_{\xi}\leq 1/2$ for $y\in \R$ and $\eps(1+|\xi|)\leq r_0$ with $r_0>0$ being sufficiently small, the operator $I-Y_{\eps}(-\frac12+ i y,\xi)$ is invertible on $L^2_{\xi}(\R^3_v)$ and satisfies $\|[I-Y_{\eps}(-\frac12+ i y,\xi)]^{-1}\|_{\xi}\le 2$ for $y\in \R$ and $\eps(1+|\xi|)\leq r_0$. Thus, we have for any $f,g\in L^2_{\xi}(\R^3_v)$ \bmas &|(U_{-\frac12,\infty}(t)f,g)_{\xi}| \le e^{-\frac{t}{2\eps^2}}\int^{+\infty}_{-\infty} |(Z_{\eps}(\lambda,\xi)f,g)_{\xi}|dy \notag\\ &\leq C \eps(1+|\xi|)e^{-\frac{t}{2\eps^2}}\int^{+\infty}_{-\infty} \( \|[\lambda P_1-Q_{\eps}(\xi)]^{-1} P_1f\| +|\lambda|^{-1}\| P_0f\|_{\xi}\) \notag \\ &\quad\quad\quad\quad \times\(\|[\overline{\lambda} P_1-Q_{\eps}(-\xi)]^{-1} P_1g\| +|\overline{\lambda}|^{-1}\| P_0g\|_{\xi}\) dy,\quad \lambda=-\frac12+ i y. \emas This together with \eqref{S_4} and $$ \int^{+\infty}_{-\infty}\|(x+ i y)^{-1} P_0f\|^2_{\xi} dy = \pi|x|^{-1}\|P_0f\|^2_{\xi} $$ imply that $ |(U_{-\frac12,\infty}(t)f,g)_{\xi}| \leq Cr_0\ e^{-\frac{ t}{2\eps^2}}\|f\|_{\xi}\|g\|_{\xi}, $ and \begin{equation} \|U_{-\frac12,\infty}(t)\|_{\xi} \le Cr_0 e^{-\frac{ t}{2\eps^2}}. \label{UsN4} \end{equation} Since $\lambda_0(|\xi|,\eps),0\in \rho(Q_{\eps}(\xi))$, and $$Z_{\eps}(\lambda,\xi)=\lambda^{-1} P_0 +(\lambda P_1-Q_{\eps}(\xi))^{-1} P_1-(\lambda-B_{\eps}(\xi))^{-1},$$ we can obtain \bma {\rm Res}\{e^{\lambda t}Z_{\eps}(\lambda,\xi)f;\lambda_0(|\xi|,\eps)\} &=-{\rm Res}\{e^{\lambda t}(\lambda-B_{\eps}(\xi))^{-1}f;\lambda_0(|\xi|,\eps)\}\notag\\ & =-e^{\lambda_0(|\xi|,\eps)t}\Big(f,\overline{\psi_0(\xi,\eps)}\Big)_{\xi}\psi_0(\xi,\eps), \label{projection}\\ {\rm Res}\{e^{\lambda t}Z_{\eps}(\lambda,\xi)f;0\} &={\rm Res}\{e^{\lambda t}\lambda^{-1}P_0f;0\}= P_0f. \label{projection1} \ema Therefore, we conclude from \eqref{V_3a} and \eqref{UsN}--\eqref{projection1} that \begin{equation} e^{\frac{t}{\eps^2}B_{\eps}(\xi)}f= e^{\frac{t}{\eps^2}Q_{\eps}(\xi)} P_1f+U_{-\frac12,\infty}(t) + e^{\frac{t}{\eps^2}\lambda_0(|\xi|,\eps)} \Big(f,\overline{\psi_0(\xi,\eps)}\Big)_{\xi}\psi_0(\xi,\eps) , \label{low} \end{equation} for $\eps(1+|\xi|)\leq r_0$. Next, we turn to investigate the formula \eqref{V_3} for $\eps(1+|\xi|)> r_0$. It holds that $\eps|\xi|> r_0/2$. 
By \eqref{E_6} we have \begin{equation} (\lambda-B_{\eps}(\xi))^{-1}=(\lambda-A_{\eps}(\xi))^{-1}+H_{\eps}(\lambda,\xi),\label{V_2} \end{equation} with \bma H_{\eps}(\lambda,\xi)&=(\lambda-A_{\eps}(\xi))^{-1}[I-G_{\eps}(\lambda,\xi)]^{-1}G_{\eps}(\lambda,\xi),\label{V_2a}\\ G_{\eps}(\lambda,\xi)&=(K- i\eps(v\cdot\xi)|\xi|^{-2}P_0)(\lambda-A_{\eps}(\xi))^{-1}.\label{V_2b} \ema By Lemma \ref{SG_1} and a similar argument as Lemma 3.1 in \cite{Li3}, we conclude that the operator $A_{\eps}(\xi) $ generates a strongly continuous contraction semigroup on $L^2(\R^3_v)$, which satisfies for any $t>0$ and $f\in L^2(\R^3_v) $ that \begin{equation} \|e^{tA_{\eps}(\xi)}f\|\leq e^{- \nu_0t}\|f\|. \label{decay_2} \end{equation} In addition, for any $x>-\nu_0 $ and $f\in L^2(\R^3_v)$, it holds that \begin{equation} \int^{+\infty}_{-\infty}\|(x+ i y-A_{\eps}(\xi))^{-1}f\|^2dy\leq \pi(x+\nu_0)^{-1}\|f\|^2. \label{VsNa5} \end{equation} Substituting \eqref{V_2} into \eqref{V_3} yields \begin{equation} e^{\frac{t}{\eps^2}B_{\eps}(\xi)}f = e^{\frac{t}{\eps^2}A_{\eps}(\xi)}f +\frac1{2\pi i}\int^{\kappa+ i\infty}_{\kappa- i\infty} e^{\frac{\lambda t}{\eps^2}}H_{\eps}(\lambda,\xi)fd\lambda,\quad \eps(1+|\xi|)> r_0. \label{V_3b} \end{equation} Similarly, in order to estimate the last term on the right hand side of \eqref{V_3b}, let us denote \begin{equation} V_{\kappa,N} =\frac1{2\pi i}\int^{N}_{-N}e^{\frac{t}{\eps^2}(\kappa+ i y)}H_{\eps}(\kappa+ i y,\xi) dy \label{VsN} \end{equation} for sufficiently large constant $N>0$ as in \eqref{UsN}. Since the operator $H_{\eps}(\lambda,\xi)$ is analytic on the domain ${\rm Re}\lambda\ge -\sigma_0$ for the constant $\sigma_0= \alpha(r_0/2)/2$ with $\alpha(r)>0$ given by Lemma~\ref{LP01}, we can again shift the integration of \eqref{VsN} from the line ${\rm Re}\lambda=\kappa>0$ to $\mbox{Re}\lambda=-\sigma_0$ to obtain \begin{equation} V_{\kappa,N}=V_{-\sigma_0,N}+I_N, \label{VsN1} \end{equation} with $$ I_N=\frac1{2\pi i}\(\int^{-\kappa+ i N}_{-\sigma_0+ i N}-\int^{-\kappa- i N}_{-\sigma_0- i N}\) e^{\frac{\lambda t}{\eps^2}}H_{\eps}(\lambda,\xi)f d\lambda. $$ By Lemma~\ref{LP03}, it holds that \begin{equation} \|I_N\|\rightarrow0 \ \ \mbox{as}\ \ N\rightarrow\infty.\label{VsNa} \end{equation} Moreover, by Lemma~\ref{LP03} and a similar argument as Lemma \ref{LP01}, we can obtain \begin{equation} \sup_{\eps|\xi|>r_0/2, y\in\R} \|[I-G_{\eps}(-\sigma_0+ i y,\xi)]^{-1}\| \le C.\label{VsNa4} \end{equation} By \eqref{V_2a}, \eqref{VsNa4} and \eqref{VsNa5}, we have for any $f,g\in L^2_{\xi}(\R^3_v)$ \bma &|(V_{-\sigma_0,\infty}(t)f,g)|\leq Ce^{-\frac{\sigma_0 t}{\eps^2}}\int^{+\infty}_{-\infty}|(H_{\eps}(\lambda,\xi)f,g)|dy \notag\\ &\leq C(\|K\|+\eps^2r_0^{-1})e^{-\frac{\sigma_0 t}{\eps^2}}\int^{+\infty}_{-\infty}\|(\lambda-A_{\eps}(\xi))^{-1}f\|\|(\overline{\lambda}-A_{\eps}(-\xi))^{-1}g\|dy \notag\\ &\leq C(\|K\|+\eps^2r_0^{-1})e^{-\frac{\sigma_0 t}{\eps^2}}(\nu_0-\sigma_0)^{-1}\|f\|\|g\|,\quad \lambda=-\sigma_0+ i y. \label{VsNb1} \ema From \eqref{VsNb1} and the fact $\|f\|^2\leq\|f\|_{\xi}^2\leq(1+4\eps^2r_0^{-2})\|f\|^2$ for $\eps(1+|\xi|)> r_0$, we have \begin{equation} \|V_{-\sigma_0,\infty}(t)\|_{\xi} \leq Ce^{-\frac{\sigma_0 t}{\eps^2}}(\nu_0-\sigma_0)^{-1}. \label{VsN2} \end{equation} Therefore, we conclude from \eqref{V_3b} and \eqref{VsN}--\eqref{VsN2} that \begin{equation} e^{\frac{t}{\eps^2}B_{\eps}(\xi)}f =e^{\frac{t}{\eps^2}A_{\eps}(\xi)}f +V_{-\sigma_0,\infty}(t), \quad \eps(1+|\xi|)> r_0. 
\label{high} \end{equation} The combination of \eqref{low} and \eqref{high} gives rise to \eqref{B_0} with $S_1(t,\xi,\eps)f$ and $ S_2(t,\xi,\eps)f$ defined by \bmas S_1(t,\xi,\eps)f&=e^{\frac{t}{\eps^2}\lambda_0(|\xi|,\eps)} \Big(f,\overline{\psi_0(\xi,\eps)}\Big)_{\xi}\psi_0(\xi,\eps)1_{\{\eps(1+|\xi|)\leq r_0\}}, \\ S_2(t,\xi,\eps)f&=\(e^{\frac{t}{\eps^2}Q_{\eps}(\xi)} P_1f+U_{-\frac12,\infty}(t)\)1_{\{\eps(1+|\xi|)\leq r_0\}}\\ &\quad+\(e^{\frac{t}{\eps^2}A_{\eps}(\xi)}f+V_{-\sigma_0,\infty}(t)\)1_{\{\eps(1+|\xi|)> r_0\}}. \emas In particular, $S_2(t,\xi,\eps)f$ satisfies \eqref{S2} in view of \eqref{decay_1}, \eqref{decay_2}, \eqref{UsN4} and \eqref{VsN2}. \end{proof} \noindent {\bf Acknowledgements:} The author would like to thank Yan Guo for helpful suggestions. The research of the author was supported by the National Science Fund for Excellent Young Scholars No. 11922107, the National Natural Science Foundation of China grant No. 11671100, and Guangxi Natural Science Foundation Nos. 2018GXNSFAA138210 and 2019JJG110010. \end{document}
arXiv
Involute In mathematics, an involute (also known as an evolvent) is a particular type of curve that is dependent on another shape or curve. An involute of a curve is the locus of a point on a piece of taut string as the string is either unwrapped from or wrapped around the curve.[1] Not to be confused with involution (mathematics). The evolute of an involute is the original curve. It is generalized by the roulette family of curves. That is, the involutes of a curve are the roulettes of the curve generated by a straight line. The notions of the involute and evolute of a curve were introduced by Christiaan Huygens in his work titled Horologium oscillatorium sive de motu pendulorum ad horologia aptato demonstrationes geometricae (1673), where he showed that the involute of a cycloid is still a cycloid, thus providing a method for constructing the cycloidal pendulum, which has the useful property that its period is independent of the amplitude of oscillation.[2] Involute of a parameterized curve See also: Arc length Let ${\vec {c}}(t),\;t\in [t_{1},t_{2}]$ be a regular curve in the plane with its curvature nowhere 0 and $a\in (t_{1},t_{2})$, then the curve with the parametric representation ${\vec {C}}_{a}(t)={\vec {c}}(t)-{\frac {{\vec {c}}'(t)}{|{\vec {c}}'(t)|}}\;\int _{a}^{t}|{\vec {c}}'(w)|\;dw$ is an involute of the given curve. Proof The string acts as a tangent to the curve ${\vec {c}}(t)$. Its length is changed by an amount equal to the arc length traversed as it winds or unwinds. Arc length of the curve traversed in the interval $[a,t]$ is given by $\int _{a}^{t}|{\vec {c}}'(w)|\;dw$ where $a$ is the starting point from where the arc length is measured. Since the tangent vector depicts the taut string here, we get the string vector as ${\frac {{\vec {c}}'(t)}{|{\vec {c}}'(t)|}}\;\int _{a}^{t}|{\vec {c}}'(w)|\;dw$ The vector corresponding to the end point of the string (${\vec {C}}_{a}(t)$) can be easily calculated using vector addition, and one gets ${\vec {C}}_{a}(t)={\vec {c}}(t)-{\frac {{\vec {c}}'(t)}{|{\vec {c}}'(t)|}}\;\int _{a}^{t}|{\vec {c}}'(w)|\;dw$ Adding an arbitrary but fixed number $l_{0}$ to the integral ${\Bigl (}\int _{a}^{t}|{\vec {c}}'(w)|\;dw{\Bigr )}$ results in an involute corresponding to a string extended by $l_{0}$ (like a ball of wool yarn having some length of thread already hanging before it is unwound). Hence, the involute can be varied by constant $a$ and/or adding a number to the integral (see Involutes of a semicubic parabola). If ${\vec {c}}(t)=(x(t),y(t))^{T}$ one gets ${\begin{aligned}X(t)&=x(t)-{\frac {x'(t)}{\sqrt {x'(t)^{2}+y'(t)^{2}}}}\int _{a}^{t}{\sqrt {x'(w)^{2}+y'(w)^{2}}}\,dw\\Y(t)&=y(t)-{\frac {y'(t)}{\sqrt {x'(t)^{2}+y'(t)^{2}}}}\int _{a}^{t}{\sqrt {x'(w)^{2}+y'(w)^{2}}}\,dw\;.\end{aligned}}$ Properties of involutes In order to derive properties of a regular curve it is advantageous to suppose the arc length $s$ to be the parameter of the given curve, which lead to the following simplifications: $\;|{\vec {c}}'(s)|=1\;$ and $\;{\vec {c}}''(s)=\kappa (s){\vec {n}}(s)\;$, with $\kappa $ the curvature and ${\vec {n}}$ the unit normal. 
One gets for the involute: ${\vec {C}}_{a}(s)={\vec {c}}(s)-{\vec {c}}'(s)(s-a)\ $ and ${\vec {C}}_{a}'(s)=-{\vec {c}}''(s)(s-a)=-\kappa (s){\vec {n}}(s)(s-a)\;$ and the statement: • At point ${\vec {C}}_{a}(a)$ the involute is not regular (because $|{\vec {C}}_{a}'(a)|=0$ ), and from $\;{\vec {C}}_{a}'(s)\cdot {\vec {c}}'(s)=0\;$ follows: • The normal of the involute at point ${\vec {C}}_{a}(s)$ is the tangent of the given curve at point ${\vec {c}}(s)$. • The involutes are parallel curves, because of ${\vec {C}}_{a}(s)={\vec {C}}_{0}(s)+a{\vec {c}}'(s)$ and the fact, that ${\vec {c}}'(s)$ is the unit normal at ${\vec {C}}_{0}(s)$. The family of involutes and the family of tangents to the original curve makes up an orthogonal coordinate system. Consequently, one may construct involutes graphically. First, draw the family of tangent lines. Then, an involute can be constructed by always staying orthogonal to the tangent line passing the point. Cusps This section is based on.[3] There are generically two types of cusps in involutes. The first type is at the point where the involute touches the curve itself. This is a cusp of order 3/2. The second type is at the point where the curve has an inflection point. This is a cusp of order 5/2. This can be visually seen by constructing a map $f:\mathbb {R} ^{2}\to \mathbb {R} ^{3}$ defined by $(s,t)\mapsto (x(s)+t\cos(\theta ),y(s)+t\sin(\theta ),t)$ where $(x(s),y(s))$ is the arclength parametrization of the curve, and $\theta $ is the slope-angle of the curve at the point $(x(s),y(s))$. This maps the 2D plane into a surface in 3D space. For example, this maps the circle into the hyperboloid of one sheet. By this map, the involutes are obtained in a three-step process: map $\mathbb {R} $ to $\mathbb {R} ^{2}$, then to the surface in $\mathbb {R} ^{3}$, then project it down to $\mathbb {R} ^{2}$ by removing the z-axis: $s\mapsto (s,l-s)\mapsto f(s,l-s)\mapsto (f(s,l-s)_{x},f(s,l-s)_{y})$ where $l$ is any real constant. Since the mapping $s\mapsto f(s,l-s)$ has nonzero derivative at all $s\in \mathbb {R} $, cusps of the involute can only occur where the derivative of $s\mapsto f(s,l-s)$ is vertical (parallel to the z-axis), which can only occur where the surface in $\mathbb {R} ^{3}$ has a vertical tangent plane. Generically, the surface has vertical tangent planes at only two cases: where the surface touches the curve, and where the curve has an inflection point. cusp of order 3/2 For the first type, one can start by the involute of a circle, with equation ${\begin{aligned}X(t)&=r(\cos t+(t-a)\sin t)\\Y(t)&=r(\sin t-(t-a)\cos t)\end{aligned}}$ then set $a=0$, and expand for small $t$, to obtain ${\begin{aligned}X(t)&=r+rt^{2}/2+O(t^{4})\\Y(t)&=rt^{3}/3+O(t^{5})\end{aligned}}$ thus giving the order 3/2 curve $Y^{2}-{\frac {8}{9r}}(X-r)^{3}+O(Y^{8/3})=0$, a semicubical parabola. cusp of order 5/2 For the second type, consider the curve $y=x^{3}$. The arc from $x=0$ to $x=s$ is of length $\int _{0}^{s}{\sqrt {1+(3t^{2})^{2}}}dt=s+{\frac {9}{10}}s^{5}+O(s^{9})$, and the tangent at $x=s$ has angle $\theta =\arctan(3s^{2})$. Thus, the involute starting from $x=0$ at distance $L$ has parametric formula ${\begin{cases}x(s)=s+(L-s-{\frac {9}{10}}s^{5}+\cdots )\cos \theta \\y(s)=s^{3}+(L-s-{\frac {9}{10}}s^{5}+\cdots )\sin \theta \end{cases}}$ Expand it up to order $s^{5}$, we obtain ${\begin{cases}x(s)=L-{\frac {9}{2}}Ls^{4}+({\frac {9}{2}}L-{\frac {9}{10}})s^{5}+O(s^{6})\\y(s)=3Ls^{2}-2s^{3}+O(s^{6})\end{cases}}$ which is a cusp of order 5/2. 
Explicitly, one may solve for the polynomial expansion satisfied by $x,y$: $\left(x-L+{\frac {y^{2}}{2L}}\right)^{2}-\left({\frac {9}{2}}L+{\frac {51}{10}}\right)^{2}\left({\frac {y}{3L}}\right)^{5}+O(s^{11})=0$ or $x=L-{\frac {y^{2}}{2L}}\pm \left({\frac {9}{2}}L+{\frac {51}{10}}\right)\left({\frac {y}{3L}}\right)^{2.5}+O(y^{2.75}),\quad \quad y\geq 0$ which clearly shows the cusp shape. Examples Involutes of a circle For a circle with parametric representation $(r\cos(t),r\sin(t))$, one has ${\vec {c}}'(t)=(-r\sin t,r\cos t)$. Hence $|{\vec {c}}'(t)|=r$, and the path length is $r(t-a)$. Evaluating the above given equation of the involute, one gets ${\begin{aligned}X(t)&=r(\cos(t+a)+t\sin(t+a))\\Y(t)&=r(\sin(t+a)-t\cos(t+a))\end{aligned}}$ for the parametric equation of the involute of the circle. The $a$ term is optional; it serves to set the start location of the curve on the circle. The figure shows involutes for $a=-0.5$ (green), $a=0$ (red), $a=0.5$ (purple) and $a=1$ (light blue). The involutes look like Archimedean spirals, but they are actually not. The arc length for $a=0$ and $0\leq t\leq t_{2}$ of the involute is $L={\frac {r}{2}}t_{2}^{2}.$ Involutes of a semicubic parabola The parametric equation ${\vec {c}}(t)=({\tfrac {t^{3}}{3}},{\tfrac {t^{2}}{2}})$ describes a semicubical parabola. From ${\vec {c}}'(t)=(t^{2},t)$ one gets $|{\vec {c}}'(t)|=t{\sqrt {t^{2}+1}}$ and $\int _{0}^{t}w{\sqrt {w^{2}+1}}\,dw={\frac {1}{3}}{\sqrt {t^{2}+1}}^{3}-{\frac {1}{3}}$. Extending the string by $l_{0}={1 \over 3}$ extensively simplifies further calculation, and one gets ${\begin{aligned}X(t)&=-{\frac {t}{3}}\\Y(t)&={\frac {t^{2}}{6}}-{\frac {1}{3}}.\end{aligned}}$ Eliminating t yields $Y={\frac {3}{2}}X^{2}-{\frac {1}{3}},$ showing that this involute is a parabola. The other involutes are thus parallel curves of a parabola, and are not parabolas, as they are curves of degree six (See Parallel curve § Further examples). Involutes of a catenary For the catenary $(t,\cosh t)$, the tangent vector is ${\vec {c}}'(t)=(1,\sinh t)$, and, as $1+\sinh ^{2}t=\cosh ^{2}t,$ its length is $|{\vec {c}}'(t)|=\cosh t$. Thus the arc length from the point (0, 1) is $\textstyle \int _{0}^{t}\cosh w\,dw=\sinh t.$ Hence the involute starting from (0, 1) is parametrized by $(t-\tanh t,1/\cosh t),$ and is thus a tractrix. The other involutes are not tractrices, as they are parallel curves of a tractrix. Involutes of a cycloid The parametric representation ${\vec {c}}(t)=(t-\sin t,1-\cos t)$ describes a cycloid. From ${\vec {c}}'(t)=(1-\cos t,\sin t)$, one gets (after having used some trigonometric formulas) $|{\vec {c}}'(t)|=2\sin {\frac {t}{2}},$ and $\int _{\pi }^{t}2\sin {\frac {w}{2}}\,dw=-4\cos {\frac {t}{2}}.$ Hence the equations of the corresponding involute are $X(t)=t+\sin t,$ $Y(t)=3+\cos t,$ which describe the shifted red cycloid of the diagram. Hence • The involutes of the cycloid $(t-\sin t,1-\cos t)$ are parallel curves of the cycloid $(t+\sin t,3+\cos t).$ (Parallel curves of a cycloid are not cycloids.) Involute and evolute The evolute of a given curve $c_{0}$ consists of the curvature centers of $c_{0}$. Between involutes and evolutes the following statement holds: [4][5] A curve is the evolute of any of its involutes.
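The circle-involute parametrization given in the Examples section is easy to evaluate numerically. The short Python sketch below is not part of the original article; the radius, the four start values $a$ and the arc-length check are illustrative choices only. It plots the involutes and verifies the stated arc length $L={\frac {r}{2}}t_{2}^{2}$ for $a=0$.

import numpy as np
import matplotlib.pyplot as plt

def circle_involute(r, a, t):
    # parametric equations of the involute of a circle, as given above
    x = r * (np.cos(t + a) + t * np.sin(t + a))
    y = r * (np.sin(t + a) - t * np.cos(t + a))
    return x, y

t = np.linspace(0.0, 4.0 * np.pi, 2000)
for a in (-0.5, 0.0, 0.5, 1.0):          # the four start values shown in the figure
    x, y = circle_involute(1.0, a, t)
    plt.plot(x, y, label=f"a = {a}")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()

# numerical check of the arc length L = (r/2) t2^2 for a = 0, r = 1, t2 = 4*pi
x, y = circle_involute(1.0, 0.0, t)
print(np.sum(np.hypot(np.diff(x), np.diff(y))), 0.5 * (4 * np.pi) ** 2)  # both are about 78.96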
(Figures: the tractrix (red) as an involute of a catenary; the evolute of a tractrix is a catenary.) Application The most common profiles of modern gear teeth are involutes of a circle. In an involute gear system the teeth of two meshing gears contact at a single instantaneous point that follows along a single straight line of action. The forces the contacting teeth exert on each other also follow this line, and are normal to the teeth. The involute gear system maintaining these conditions follows the fundamental law of gearing: the ratio of angular velocities between the two gears must remain constant throughout. With teeth of other shapes, the relative speeds and forces rise and fall as successive teeth engage, resulting in vibration, noise, and excessive wear. For this reason, nearly all modern planar gear systems are either involute or the related cycloidal gear system.[6] The involute of a circle is also an important shape in gas compression, as a scroll compressor can be built based on this shape. Scroll compressors make less sound than conventional compressors and have proven to be quite efficient. The High Flux Isotope Reactor uses involute-shaped fuel elements, since these allow a constant-width channel between them for coolant. See also • Envelope (mathematics) • Evolute • Goat grazing problem • Involute gear • Roulette (curve) • Scroll compressor References 1. Rutter, J.W. (2000). Geometry of Curves. CRC Press. pp. 204. ISBN 9781584881667. 2. McCleary, John (2013). Geometry from a Differentiable Viewpoint. Cambridge University Press. pp. 89. ISBN 9780521116077. 3. Arnolʹd, V. I. (1990). Huygens and Barrow, Newton and Hooke: pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals. Basel: Birkhäuser Verlag. ISBN 0-8176-2383-3. OCLC 21873606. 4. K. Burg, H. Haf, F. Wille, A. Meister: Vektoranalysis: Höhere Mathematik für Ingenieure, Naturwissenschaftler und ..., Springer-Verlag, 2012, ISBN 3834883468, S. 30. 5. R. Courant: Vorlesungen über Differential- und Integralrechnung, 1. Band, Springer-Verlag, 1955, S. 267. 6. V. G. A. Goss (2013) "Application of analytical geometry to the shape of gear teeth", Resonance 18(9): 817 to 31 Springerlink (subscription required). External links • Involute at MathWorld
Wikipedia
Enhanced secondary motion of the turbulent flow through a porous square duct
A. Samanta (a1), R. Vinuesa (a1), I. Lashgari (a1), P. Schlatter (a1) and L. Brandt (a1)
1 Linné FLOW Centre and Swedish e-Science Research Centre (SeRC), KTH Mechanics, 100 44 Stockholm, Sweden
10 December 2015, pp. 681-693. DOI: https://doi.org/10.1017/jfm.2015.623 Published online by Cambridge University Press: 06 November 2015
Direct numerical simulations of the fully developed turbulent flow through a porous square duct are performed to study the effect of the permeable wall on the secondary cross-stream flow. The volume-averaged Navier–Stokes equations are used to describe the flow in the porous phase, a packed bed with porosity ${\it\varepsilon}_{c}=0.95$. The porous square duct is computed at $\mathit{Re}_{b}\simeq 5000$ and compared with the numerical simulations of a turbulent duct with four solid walls. The two boundary layers on the top wall and porous interface merge close to the centre of the duct, as opposed to the channel, because the sidewall boundary layers inhibit the growth of the shear layer over the porous interface. The most relevant feature in the porous duct is the enhanced magnitude of the secondary flow, which exceeds that of a regular duct by a factor of four. This is related to the increased vertical velocity, and the different interaction between the ejections from the sidewalls and the porous medium. We also report a significant decrease in the streamwise turbulence intensity over the porous wall of the duct (which is also observed in a porous channel), and the appearance of short spanwise rollers in the buffer layer, replacing the streaky structures of wall-bounded turbulence. These spanwise rollers most probably result from a Kelvin–Helmholtz type of instability, and their width is limited by the presence of the sidewalls.
© 2015 Cambridge University Press †Email address for correspondence: [email protected]
JFM classification Low-Reynolds-number flows: Porous media Turbulent Flows: Turbulence simulation Turbulent Flows: Turbulent boundary layers
CommonCrawl
Chloe wants to buy a hoodie that costs $\$32.75$. She empties her wallet and finds she only has three $\$10$ bills, eight quarters, and a pile of dimes. What is the minimum number of dimes that must be in her pile so she can pay for the hoodie? Let $n$ represent the unknown number of dimes. Chloe's total amount of money is $$3(\$10)+8(\$.25)+n(\$.10) \ge \$32.75.$$Simplifying gives \begin{align*} 30+2+.10n &\ge 32.75 \quad \implies \\ .10n &\ge .75 \quad \implies \\ n &\ge \frac{.75}{.10} \quad \implies \\ n &\ge 7.5. \end{align*}Chloe must have at least $\boxed{8}$ dimes in her pile.
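A quick check of the arithmetic above (illustrative only; it simply re-does the computation in cents to avoid floating-point issues):

price_cents = 3275                 # $32.75
have_cents = 3 * 1000 + 8 * 25     # three $10 bills and eight quarters
n = 0
while have_cents + 10 * n < price_cents:   # each dime adds 10 cents
    n += 1
print(n)   # prints 8, in agreement with the boxed answer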
Math Dataset
Addition of inert gas at "constant volume"? My textbook states that "The addition of inert gases at constant volume does not affect the equilibrium state of the reactants and products contained in that volume." Although I am not questioning the truth behind this statement, I am very confused by it. Let us consider a homogeneous gaseous equilibrium system contained within a closed cylinder. The cylinder has a very small hole on its side, through which an inert gas can be pumped in. This is how I visualize the addition of an inert gas to the system. Now, upon introducing a given amount of inert gas into the system, how does the volume remain constant in any way? While deriving the Van der Waals equation for real gases, I learned that the term $V$ in the ideal gas equation stands for the empty space that is available to the gas molecules for movement. Therefore, when we introduce an inert gas into the system, aren't we essentially decreasing this amount of space that is available to the reactant and product molecules for movement? I am very confused. Please share your insights for it would be tremendously helpful for me. Thanks ever so much in advance :) Regards. equilibrium gas-laws The inert gas that you add does not act like a piston, denying part of the volume to the other molecules in the mixture. For a constant volume system comprised of an ideal gas mixture, the partial pressures of the reactants and products do not change when you add the inert gas. It only increases the total pressure, and that change is only because of its own partial pressure. However, if you are not operating in the ideal gas regime, the addition of the inert gas will affect the equilibrium. Chet Miller $\begingroup$ OK, so what you're saying is that according to the assumptions of KTG for ideal gases, the volume of the reaction mixture will remain the volume of the container since the molecules of the inert gas have negligible volumes themselves, yeah? $\endgroup$ – user33789 Aug 31 '16 at 13:48 $\begingroup$ Additionally, if the reaction mixture is at constant pressure instead of constant volume, these inert gas molecules will exert pressure on the piston, moving it up and hence, increasing the volume, yeah? While studying Le Chatelier's principle for changes in volume, I didn't relate the volume change to pressure change. Instead, I imagined that on decreasing the volume, the equilibrium shifted in that direction which produces lesser number of molecules so that the volume would increase and not so pressure would decrease. I am wrong to think of it like this, right? $\endgroup$ – user33789 Aug 31 '16 at 13:53 $\begingroup$ I agree to your first comment. I don't understand your second comment. $\endgroup$ – Chet Miller Aug 31 '16 at 19:48 You should not :) "does not affect the equilibrium state" means that the thermodynamic constant is still the same. If you add an inert gas $\ce{G}$ to your system, it will not react with the other chemical compounds. For example, at the beginning you have $\ce{A} \rightleftharpoons \ce{B}$ and after you add your inert gas you have $\ce{A + G} \rightleftharpoons \ce{B + G}$. So it does not affect the value of $\ce{K_{eq}}$, because $\ce{K_{eq}}$ depends only on $\ce{T}$, the temperature. If you use a solid container, the volume will not change but the pressure will. Also, if you prefer calculus: we have $$\ce{\mu_i(T,P_i)=\mu_i^°(T)}+\ce{RT\ln\frac{P_i}{P^°}}$$ and then $$\left(\frac{\partial\Delta_r\ce{G}}{\partial n_{inert}}\right)_{\ce{T,V}}=0.$$ Curt F.
Hexacoordinate-C
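A small numerical sketch of the first answer's point (not from the original thread; the mole numbers, temperature and volume are invented, ideal-gas behaviour is assumed, and the reaction is the generic $\ce{A <=> B}$ used in the second answer): at fixed $T$ and $V$, adding inert gas raises the total pressure but leaves every partial pressure, and hence the reaction quotient, untouched.

R = 8.314        # J mol^-1 K^-1
T = 298.15       # K
V = 0.010        # m^3 (10 L); rigid container, so V stays fixed
n_A, n_B = 0.40, 0.60            # mol of A and B at equilibrium (made-up values)

def p(n):                        # ideal-gas partial pressure depends only on n, T and V
    return n * R * T / V

Q_before = p(n_B) / p(n_A)
n_inert = 1.00                   # mol of inert gas pumped in through the small hole
Q_after = p(n_B) / p(n_A)        # n_A, n_B, T, V are all unchanged
p_total_before = p(n_A + n_B)
p_total_after = p(n_A + n_B + n_inert)

print(Q_before, Q_after)              # identical, so the equilibrium is undisturbed
print(p_total_before, p_total_after)  # only the total pressure rises

At constant total pressure instead, the added inert gas would expand the mixture, lower the partial pressures of A and B, and could shift the equilibrium, which is the contrast discussed in the comments above.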
CommonCrawl
The qualitative and quantitative analysis of the coupled C, N, P and Si retention in complex of water reservoirs Lilianna Bartoszek1 & Piotr Koszelnik1 The Solina–Myczkowce complex of reservoirs (SMCR) accounts about 15 % of the water storage in Poland. On the base of historical (2004–2006 years) data, the mass balance of nitrogen, phosphorus, total organic carbon and dissolved silicon were calculated. Large, natural affluents were the main source of the biogenic compounds in the studied ecosystem, delivering 90 % of TOC, 87 % of TN and 81 % of TP and DSi load. Moreover, results show that SMCR is an important sink for all the analysed biogenic elements. About 15–30 % of external loads were retained in the reservoir mainly in upper Solina. Due to the intensive processes of primary production, inorganic forms of nitrogen and phosphorus were mainly retained. Internal production of organic matter lead to an amount of the organic matter deposited in the sediments greater than was anticipated on the basis of the mass balance calculations. A constant load of dissolved silicon originating only from natural sources did not contribute to supplement deficits of Si present in the body of water in the reservoirs, promoting disturbances in N:C:P:Si ratios and another growth condition for other types of algae. During the last decades an important development of anthropogenic sources of biogenic substances in supplied natural water was observed. However that growth is not similar between specific elements. Anthropogenic sources of nitrogen and phosphorus compounds are connected with sewage and fertilizer loss, while human sources of silicon are minor. Moreover increased N and P loads may stimulate primary and secondary production thus growth of organic carbon (TOC) loads (Zeleňáková et al. 2012; Wiatkowski et al. 2015; Gajewska 2015). This effect distorts the natural water C:N:P:Si ratio causing negative changes in the biology of the ecosystems and the deterioration of water quality. That is especially evident in the case of stagnant waters. The role of lakes and reservoirs as sinks of biogenic elements along the aquatic route from land to oceans is evident (Hejzlar et al. 2009; Bouwman et al. 2013; Grizzetti et al. 2015; Wiatkowski et al. 2015). The mechanism of that retention is complex and connected with physical, hydrological and chemical processes e.g. denitrification (for N), sedimentation and adsorption and also uptake. Qualitative and quantitative analysis of coupled C, N, P and Si retention in reservoirs and knowledge of these element cycles is essential for our understanding of the ecosystem biogeochemistry (Bouwman et al. 2013). Additionally, this information is important for interpretation of greenhouse gasses (e.g. N2O, CO2 and CH4) emission from reservoirs causes and intensity (Gruca-Rokosz and Tomaszek 2015). Retention of biogenic elements in water ecosystems is frequently described as a function of the morphometric and hydrologic parameters of reservoirs. The most frequently considered factor in model studies is a hydraulic retention time (HRT); also depth and area of the reservoir and loads (Behrend and Opitz 2000; Seitzinger et al. 2002; Tomaszek and Koszelnik 2003; Hejzlar et al. 2009) are utilised. In general, HRT changes proportionally, as longer stoppage of waters in slow flow zones promotes two basic mechanisms of retention, i.e. uptake by organisms and sedimentation. 
The behaviour of nitrogen is slightly different, as its retention may be high also in shallow and flowing basins due to conditions promoting denitrification, which in the literature is referred to as a third nitrogen retention mechanism, understood as a difference between inflow and outflow of load (Seitzinger et al. 2002; Tomaszek and Koszelnik 2003). The purpose of the study is interpretation of how a mountain complex of reservoirs can modify natural biogeochemical fluxes of four major biogenic elements in the river waterway. The Solina–Myczkowce complex of mountain reservoirs (SMCR) is a perfect training field for this interpretation. During the 2004–2006 period reservoir chemistry, hydrology and catchment area management were studied intensely and independently by different institutions. Study site The SMCR is located in the upstream part of the River San (SE Poland) within the River Vistula system (Fig. 1). The upper, Solina reservoir is the biggest man-made lake in the Vistula basin and accounts for about 15 % of the total water storage capacity in Poland (volume: 502 mln m3, mean depth: 22 m, hydraulic retention time: 215 days, mean discharge: 35 m3 s−1). The lower Myczkowce Reservoir, as a compensatory water body (volume: 10 mln m3, mean depth: 5 m, hydraulic retention time: 6 days), is supplied by the hypolimnetic waters of the upper one (90 % of total supply), and by minor tributaries. The upper reservoir has three major inflows (accounting for 90 % of total water supply) and three minor ones. Reversible pumping takes place sporadically. The outflow from the complex involves bottom water from the lower reservoir flowing through a hydro-electric-power plant. The bathymetric map of the Solina–Myczkowce complex of reservoirs. Location of sampling stations from stagnant water (M—Myczkowce, S—Solina) as well as studied affluents was shown The greater part (c. 75 %) of the 1250 km2 catchment area is covered by forest, followed by meadows and pastures. Arable land accounts for only a small fraction of the area. The drainage basin has a low population density of about 6 inhabitants per km2, while about half of the households concerned are not connected to either sewerage or septic systems. A relatively steep (6 %) slope favours the leaching of soil and ground cover, especially during periods of intensive atmospheric precipitation and snow melting. Sampling strategy and methods In order to assess the mass balance of N, P, DSi and TOC, water was sampled from the mouth parts of rivers and streams feeding the Solina reservoir, as well as from the outflow. Also water samples obtained from four stations on the Solina reservoir and two on the Myczkowce reservoir were subjected to analysis. Water was sampled 33 times every 1–6 weeks. Sampling dates were adjusted on the basis of the meteorological and hydrological factors. About 0.5 dm3 of glass-fiber-membrane-filtered water was made subject to spectrophotometric determinations for concentrations of nitrate-nitrogen (N-NO3 −, salicylate method, coefficient of variation for the procedure—CVP: ±1.5 %), nitrite-nitrogen (N-NO2 −, Griess reaction, CVP: ±1.7 %), ammonium-nitrogen (N-NH4 +, Berthelot's reaction, CVP: ±1.4 %), phosphate-phosphorus (P-PO4 3−, Molybdate method, CVP: ±1.8 %) dissolved silicon (the Molybdate method, CVP: ±1.6 %) and chlorophyll a (only in stagnant water, in the methanol) using a PhotoLab S12 spectrophotometer (WTW GmBH). 
Moreover, Kjeldahl nitrogen (NKjeld, distillation), Total Organic Carbon (TOC, Shimadzu TOC analyzer, CVP: ±0.6 %) as well as total phosphorus (TP, oxidation to phosphate), were analyzed in non-filtered samples. Total nitrogen (TN) was calculated as the sum of nitrate- and nitrite-nitrogen and NKjeld. Balances of respective elements, both for the entire complex and the two reservoirs, were calculated as follows: $$L_{R} + L_{DD} + L_{At} = L_{Out} + R(E)$$ where (mass time−1): L R —load inflowing from the drainage basin in affluent waters; L DD —load from the direct drainage basin; L At —load supplied from atmosphere along with precipitation; L Out —load removed along with run-off; R(E)—retention (elimination) of an element in the ecosystem. Load inflowing from the drainage basin in affluent waters (L R ) constituted a sum of loads inflowing with all the rivers and streams feeding the balanced ecosystems. Loads of elements for the particular sections were calculated as a product of concentration and water flow rate. Daily flow rate (Q) values of the analysed sections, necessary for the calculation of loads from the three main affluents (about 90 % of total supply) and run-offs from the reservoirs were obtained from Solina–Myczkowce Power Plant S.A (continuous stage measurements. Q value for smaller watercourses was calculated on the day of water sampling, using installed staff gauge readings. Concentrations between the sampling days were calculated using the statistical approach, according to Mukhopadhyay and Smith (2000). Uncertainty of calculated loads was approximated on the base of Harmel et al. (2006) as cumulative uncertainties of potential sources of error. We assumed errors during sampling (uncertainty of 15 %), chemical procedures and analyses (above mentioned CVP, ca. 2 %) and flows measurement (continuous stage measurements, uncertainty of 2 %). Therefore, C, N, P and Si loads are calculation with errors of ±19 %. To estimate the cumulative probable uncertainty of calculated retentions the root mean square error propagation method were used (Harmel et al. 2006). However, the analysis and discussion of data are based on the most probabilistic, average values due to clarity of the paper and per analogiam to many other authors (e.g. Mengis et al. 1997; Garnier et al. 1999; Torres et al. 2007; Hejzlar et al. 2009). L DD was calculated with as a sum of (1) load inflowing from point sources within the direct drainage basin (including the WWTP); (2) load inflowing from nonpoint sources within the direct drainage basin; (3) load introduced by bathers. Respective summands were calculated from the available models (Giercuszkiewicz-Bajtlik 1990; Jørgensen 2011) including data on the direct drainage basin area development, touristic burden and amount of wastewater discharged from the WWTP obtained from the Solina Municipality with its seat in Polańczyk. LAt was obtained from parallel study (Urbanik 2007, MSc thesis, unpublished data) utilising precipitation rate measurements carried out by the hydrological survey station in Lesko. As the reservoirs are not located in the area affected by ground waters, this source of feed was disregarded. In addition, because of the mesotrophic nature of waters, the effect of atmospheric nitrogen binding by cyanobacteria was recognised as insignificant for the balance of this element (Ferber et al. 2004; Koszelnik et al. 2007). 
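Schematically, and only as a restatement of the load calculation described above (the symbols are chosen here for illustration), the riverine load of an element X delivered by one tributary over a balance year can be written as $$L_{R}^{X}=\sum_{k} c_{X}(t_{k})\,Q(t_{k})\,\Delta t_{k},$$ where $c_{X}(t_{k})$ is the concentration on day $t_{k}$ (interpolated between sampling dates following Mukhopadhyay and Smith 2000), $Q(t_{k})$ is the daily flow rate and $\Delta t_{k}$ is the length of the time step; summing over all tributaries gives $L_{R}$, carrying the ±19 % uncertainty estimated above.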
The sedimentation rates of TN and TOC were calculated on the basis of the phosphorus mass balance (phosphorus, in contrast to nitrogen and carbon, is not present in a gaseous state in the biogeochemical cycle), using the following formulae (Dudel and Kohl 1992): $$N_{sed} = \left(P_{ret} - \Delta P\right)\cdot (N{:}P)$$ $$C_{sed} = N_{sed}\cdot (C{:}N)$$ where: $N_{sed}$ ($C_{sed}$) is the nitrogen (carbon) sedimentation rate (t year−1); $P_{ret}$ is the phosphorus retention in the reservoir between the sampling time points (t year−1); N:P is the ratio of total N to total P concentration in the benthic deposits (from Koszelnik 2009a, b); C:N is the ratio of organic C to total N concentration in the benthic deposits (from Koszelnik 2009a, b); and ∆P is the change of the mean phosphorus content in the water body between the sampling days (from Koszelnik 2009a, b; t year−1), calculated from the following formula: $$\Delta P = \frac{\left( P_{n + 1} - P_{n} \right)V_{R} }{\Delta t} \cdot \frac{365}{10^{6}}$$ where $P_{n+1}$ and $P_{n}$ correspond to the TP concentration on the nth and (n + 1)th day of sampling (g P m−3), $V_{R}$ is the reservoir volume (m3), ∆t is the time period between the nth and (n + 1)th day of sampling (days), and 10^6 and 365 are conversion factors giving the t year−1 unit. The results of the mass balance for the dammed reservoir complex Solina–Myczkowce are listed in Table 1. The compiled data show that the inflow of TN to the reservoir from all sources in 2004 and 2005 amounted to approx. 1700 t. In the following, last year of the study, the value was ca. 50 % greater and amounted to 2489 t. Soluble forms, in particular nitrate(V) nitrogen (ca. 60 %), were the main contributors to the load. For the other elements, the loads calculated for each year of the balance were less variable. The annual inflow of TP was within a range of 76–101 t; of DSi, 2056–2244 t; and of TOC, 2517–2837 t. The loads predominantly fed the Solina reservoir. Natural affluents of the Myczkowce reservoir played a minor part in the biogenic compound feed (3–7 %). Large natural affluents (Fig. 1) were the main source of the biogenic compounds in the studied ecosystem, delivering 90 % of the TOC, 87 % of the TN and 81 % of the TP and DSi load (Fig. 2). The share of inflows from the direct drainage basin is most evident in the case of the DSi balance (5 %), whereas the load originating from the atmosphere is immaterial in the annual balance. Table 1: Mass balance of N, C, P and Si selected compounds for the Solina and Myczkowce reservoirs. Fig. 2: The share of different biogenic element sources in the supply of the Solina–Myczkowce complex of reservoirs. Except for TOC (R2 = 0.85–0.90; p < 0.001), no significant relations between hydraulic flows and the concentrations of the total forms of the analysed elements in water were discovered. The absence of seasonal changes in N and P concentrations indicates that both N and P originate from point and non-point sources. The occurrence of such changes in the case of Si, which is of regional origin, can be related to its assimilation in river waters upstream of the reservoirs (Humborg et al. 2000). The calculated values of the loads and the retention/elimination rates of N, P, Si and C are listed in Table 1. An apparent discrepancy between the results for the two reservoirs and the total balance results from the inclusion of reverse pumping of water from the Myczkowce to the Solina reservoir in the partial balances, which influences the particular balances but has no effect on the balance for the reservoir complex. Significant masses of the balanced elements were found to be retained within the analysed ecosystem.
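Returning to the ΔP formula given above, the unit conversion can be illustrated with purely hypothetical numbers (these values are not taken from the study): for a concentration change of $P_{n+1}-P_{n}=0.002$ g P m−3 over $\Delta t = 14$ days in the Solina reservoir ($V_{R}=5.02\times 10^{8}$ m3, i.e. the 502 mln m3 quoted earlier), $$\Delta P = \frac{0.002 \cdot 5.02\times 10^{8}}{14}\cdot\frac{365}{10^{6}} \approx 26 \ \text{t year}^{-1},$$ the division by $10^{6}$ converting grams to tonnes and the factor 365 converting the daily rate to an annual one.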
This indicates that the majority of biogenic compounds feeding the reservoirs is incorporated into the trophic chain and/or accumulated in the benthic deposits (Behrend and Opitz 2000; Koszelnik et al. 2007) by various retention mechanisms. The overall reservoir balance reveals that only for nitrogen was the outflowing load higher than the inflowing load in 2006. In general, retention of biogenic compounds in the entire cascade depended on the amount of elements retained in the upper reservoir. The lower reservoir, due to a short HRT and unfavourable thermal conditions normally has neither retained nor eliminated significant amounts of biogenic compounds. Only in 2005 the silicon retention was predominantly present in the Myczkowce reservoir, with the similar relation true for N-NO3 − and TOC for the entire study period. Except for 2006, when the SMCR was the nitrogen source for downstream waters, the nitrogen retention level was corresponding to the values determined in previous studies, carried out with varying frequency between 1970 and 2003 (10–20 %, Tomaszek and Koszelnik 2003). Approximately 60 % of the TN load supplying the reservoirs was nitrate(V) nitrogen. In 2004, from 132 t of TN retained in the reservoir complex, N-NO3 − amounted to as much as 89 t, and in 2005 the ratio was 374 to 361 t, respectively. Significantly higher load of TN in 2006 was connected rather with intensive water discharge during early spring than new nitrogen source (Urbanik 2007—unpublished data). The determined TP loads and percentages of element retention in the reservoir complex during the balanced years were slightly less diversified than nitrogen. 19–33 % of the inflowing TP was retained in the reservoirs. Despite the fact that phosphates constitute slightly more than 50 % of the load inflowing to the reservoirs, the calculated balance reveals, that retention of the easily assimilative phosphorus form was prevailing. Determined retention of the dissolved silicon (DSiret%) in the reservoir complex varied from 5 % (2006) to 24 % (2005) of the annual external load. It was observed that in the first and the last year of the study the majority of the supplied load was accumulated in the Solina reservoir. On the contrary, in 2005 from 492 t of the retained silicon, as much as 352 t was retained in the Myczkowce reservoir. Retention of TOC was observed in both the reservoirs and ranged from 11 to 22 % of the supplied load. In a way similar to the totals of remaining elements, a major amount of carbon was retained in the studied reservoir complex in 2005. Upon analysis of the morphometric factors on retention, a significant influence of HRT on retention of N, P and Si in waters of the entire reservoir complex was observed (Table 2). Flow rate of inflowing waters was correlated only with Nret% in the Myczkowce reservoir, while the load of elements supplied to the reservoir influenced the N retention in the Myczkowce reservoir and DSiret% in the Solina reservoir, as well as in the entire complex. Much better correlations were observed upon analysis of the influence of hydraulic retention on the elements retention (Wret—water inflow reduced by water outflow). Increase in Wret led to significant rise of Nret% in both the reservoirs and the complex, DSiret% and OWOret% in the Solina reservoir and the complex and Pret%, but only in the balance of the entire reservoir complex (Table 1). 
Significant correlations between retention values of N, P and Si for the Solina reservoir and the entire complex were seen upon analysis of interrelations of retention values for all the analysed elements (Table 2). Such relations were not seen for the Myczkowce reservoir, there was no influence of TOCret% on Nret%, and Pret% was correlated only with DSiret%. Table 2 Relationship between of the C, N, P, Si retention values and chosen parameters of the studied reservoirs expressed as the Pearson's correlation coefficient with its statistical significance Average annual sedimentation rate of TN, TOC in the Solina reservoir, calculated from the mass balance and levels of individual elements in sediments is provided in Table 3. Sedimentation in the Myczkowce reservoir was negligible due to the very short HRT in this reservoir (approximately 2 days). Production of sediments in this reservoir is mainly related to macrophyte production and supply of external matter. The TOC sedimentation calculated for the entire Solina reservoir amounts to 1300 t per year, and TN 110 t per year. Thickness analysis of the matter deposited from the beginning of reservoir's existence shows that approximately twice as much of the sediments were formed in backwaters (S1, S2 stations) of the reservoir when compared to lacustrine deep areas (S3, S4 stations). This information was taken into consideration upon calculation—it was anticipated that 2/3 of the total sedimentation takes place in the said area. Table 3 Sedimentation rate and content of total nitrogen, total organic carbon in the Solina reservoir The hydrologic balance (Urbanik 2007—unpublished data) shows that within the 3 years of study the hydraulic inflow to the reservoir was compensated by the outflow, so the calculated element retention is not the result of hydrologic factors, but biogeochemical factors. Mass balance of N, C, P and Si, conducted separately for both the reservoirs and the entire complex enabled identification of the classical biogeochemical cycle of conversion of the analysed elements. The results of the mass balance show that the SMCR retains a significant amount of biogenic elements (Table 1). The major part of elements was retained mostly in the Solina reservoir. The biogenic compounds were retained sporadically in the Myczkowce reservoir due to the hydrologic factors, i.e. feeding of a hypolimnion with N, C, P and Si-reach waters. The relationships presented in Table 2 show that retention of biogenic elements within the reservoir complex (mostly the Solina reservoir) results not only from the hydrologic or functional factors of the power station (storage of waters during the spring period and discharging during low water periods in summer and autumn), but also from inclusion of easily assimilative forms to the trophic chain and various chemical transformations. The fact that correlation between TN, TP and DSi, which are retained mostly in the easily assimilative forms (Table 1), is significant, it confirms that mechanisms of assimilation by water organisms are crucial for retention of elements in the studied reservoirs. The distinctive feature of the mass balance was DSi depletion from the water body in the reservoirs, mostly in the Solina reservoir (Fig. 2). Approximately 20 % of the inflowing load of DSi was retained in the studied reservoirs. Humborg et al. (2006) reports that the DSi load flowing off from 1 km2 of the River Vistula basin to the Baltic Sea amounts to 0.8 t per year. 
Average annual load of DSi feeding the SMCR amounts to 1947 t, which is equivalent to 1.5 t of flow off from 1 km2 of the basin. Hence, anticipating that silicon originates only from sources connected with soil erosion (Garnier et al. 1999; Humborg et al. 2002) and that the DSi load level produced in other areas of the Vistula basin is similar, ca. 50 % of dissolved silicon is retained within the area of the Vistula basin, unfavourably reducing loads feeding the Baltic sea. This decrease leads to deterioration of seawater quality due to the deficiency of DSi when compared to other biogenic compounds from anthropogenic sources. The said phenomenon leads to imbalance between diatoms and other algae (Humborg et al. 2000). A similar decrease in DSi load in dammed reservoirs was described by Garnier et al. (1999), while Humborg et al. (2006) conclude that the cascade design of dammed reservoirs on rivers increases HRT and favours retention of dissolved silicon, depleting it from downstream waters. 20 % of DSi retention is a distinctive feature of oligotrophic waters (Garnier et al. 1999), but the data available in the literature (Garnier et al. 1999; Humborg et al. 2002, 2006) also show that similar levels of DSi retention were seen in both oligo- and eutrophic ecosystems. In the studied case, depletion of dissolved silicon in the surface lake body, related in turn to DSiret affects the water quality, stimulating growth of algae. The correlations shown on Fig. 3 may confirm that increase of chlorophyll level can be related with emergence of non-diatomic (green) algae. Diatoms are seen in lake and reservoir waters mostly in spring, but also even in late winter (Humborg et al. 2000; Lehmann et al. 2004). By analogy to the condition of Lake Lugano (Lehmann et al. 2004) it can be concluded that DSi level exceeding 0.7 g m−3 in the epilimnion of the Solina reservoir contributes to chl a level related with the presence of diatoms and green algae. Below this level, in summer, a rapid increase in chl a, reaching even as much as 12 mg m−3, is observed which, in turn, may lead to occurrence of thermophilic cyanobacteria with concomitant disappearance of diatoms. Despite low water temperature in the Myczkowce reservoir, an elevated level of chl a was seen, but the index of >2.5 mg m−3 was noted only in 2005, while in 2006 it was low. In general, phytoplankton production in this reservoir is minor. However, due to the poor silicon feed, silicon shortages can occur, which lead to minor tides of algae, mostly in the warmer water area of the dam, analogous to those present in the Solina reservoir (Koszelnik 2013). A decrease in water DSi below 1 g m−3 was seen in 2005, but not in 2006. In addition, when silicon was almost completely depleted from the water body, a decrease in Si:N and Si:P ratios (see Fig. 4) was seen, and DSi became the limiting element. With silicon shortage present, phosphorus and nitrogen are the main substrates utilised in production of organic matter in the reservoirs. In turn, the value of N:P molar ratio (Fig. 4) significantly exceeding 16:1 proves the stoichiometric excess of nitrogen versus phosphorus. Influence of DSi depletion on phytoplankton growth (Chl a) in the Solina (a) and Myczkowce (b) reservoirs Mean chlorophyll a concentration versus N:P:Si Redfield ratios in lacustrine zone of the Solina reservoir (a, c) and Myczkowce reservoir (b, d) Approximately 28 % of phosphorus supplied to the studied reservoirs is retained. The process is occurring mostly in the Solina reservoir. 
This result depends on various factors allowing storage of phosphorus from the water body in the benthic deposits, related in general with an affinity to specific metals, presence of aerobic conditions or with pH (Golterman 1998). Phosphorus retention is a result of sedimentation of solid particles introduced to the reservoir with affluents and assimilative forms incorporated to the biomass of phytoplankton and transferred to sediments (Hejzlar et al. 2009; Dunalska et al. 2013). A part of such retained phosphorus can be released again to the water body due to resuspension or decomposition of bonds with iron or other metals in anaerobic conditions. The benthic deposits of the Solina reservoir are rich in metals with affinity to phosphorus, mostly in iron. Retention volume of these deposits is very high (Bartoszek et al. 2009), and favourable aerobic conditions make release of phosphorus from the deposit practically absent in both the reservoirs. On that basis it can be concluded that the principal phosphorus retention mechanism in the reservoir is a direct sedimentation of phosphorus contained in suspension and intermediate sedimentation of mineral forms, after their assimilation into the trophic chain. Storage properties of the benthic deposits are large enough so that the calculated Pret value could be greater, but the morphometry of the Solina reservoir (high depth, low area of an active bed) determines the identified level. Consequently, upon balancing of average phosphorus mass retained in the reservoir it was anticipated that the phosphate phosphorus retention will be equal to the amount of this element utilised by phytoplankton, and the difference between retention of P-PO4 3− and TP will be equal to the amount of element originating from external sources and accumulated in the deposit: For the complex of reservoirs (t year−1) Inflow Retention Partial P sedimentation Sedimentation P accumulated in biomass A complexity of biogeochemical nitrogen changes in the water environment affects the retention level of this element. In contrast to Si and P, nitrogen has a larger gaseous phase, and denitrification leading to change in state of aggregation affects the mass balance of this element in water ecosystems. In the studied period of 3 years, decrement of load inflowing to the reservoirs amounted only to 9 % per year. This value was affected by a significant element elimination in 2006. In 2005 Nret% was equal to ca. 22 % of the introduced load. Studies on nitrogen retention, carried out between 1999 and 2003, reveal that the retention of this element was varying significantly, amounting to from 12 to 36 % of the annual load (Tomaszek and Koszelnik 2003). Previous studies show that the rate of denitrification in the Solina reservoir is stable and amounts to ca. 5 g m−2 year−1 (Koszelnik et al. 2007), which corresponds to ca. 20 % of retention and 5 % of nitrogen load. Empirical models of denitrification in conditions occurring in the Solina reservoir, contingent on presence of nitrates and temperature (Gruca-Rokosz 2005, PhD thesis, unpublished data), were utilised to analyse various nitrogen retention mechanism. Estimated denitrification rate was 4 g N m−2 year−1, and on that basis it was calculated that, on average, 70 t of N is denitrified annually. 
Nitrogen sedimentation calculated from the balance mass equals to 110 t year−1, hence the balance of retained nitrogen is as follows: For the complex of reservoirs (t year−1) Inflow Retention Sedimentation Denitrification Indefinite 1957 174 110 70 −6 Share of denitrification process in nitrogen retention amounted to 40 % and was twice as high as that calculated for previous years (Koszelnik et al. 2007). However, load reduction is significantly lesser (similar to Nret%) and was only 3.6 %. Influence of denitrification on the nitrogen mass balance depends on various factors. Seitzinger et al. (2002) describes that for North American estuaries 50 % of nitrogen is supplied from denitrification. Estuaries are bodies of water similar in many cases to dammed reservoirs; mostly due to the ratio between areas of basin and water table, retention time or biogenic compounds load. The said value can be real, but in many cases—including estuaries—significantly lower values are seen, i.e. 5–30 % (Dudel and Kohl 1992; Koszelnik et al. 2007; Povilaitis et al. 2012), but also values as high as 70 % are reported (Mengis et al. 1997). A comprehensive analysis of the available data presented in previous papers (Koszelnik et al. 2007) enables one to conclude that in water regions with high hydraulic dynamics contribution of nitrification to the mass balance is minor when compared to natural lakes, where this process can be significant. Nitrogen sedimentation, in a way similar to phosphorus, is a result of its consumption. Thus, calculation of Nsed and Ncons contributions can be difficult. Jickells et al. (2000) states that in inland waters, retention mainly consists of storage of organic nitrogen produced within the ecosystems in the benthic deposits. In the studied case retention of assimilative forms, mostly nitrates, is equal to ca. 100 t year−1. Nevertheless, it should not be anticipated that the overall mass of retained NO3 − will be assimilated. Some part of it will be denitrified, as the main substrate for the said process, occurring in the anoxic layer of the benthic deposits, are nitrates(V) diffusing from water (Tomaszek and Gruca-Rokosz 2007). A surplus nitrogen seen in the above balance (−6 t) can result from utilisation of nitrogen stored in the deposits in the nitrification process, which, after various transformations, can be denitrified. The mass balance of total organic carbon calculated for the reservoir complex shows that annually approximately 442 t of TOC is retained. The calculated TOC sedimentation is three times higher and amounts to 1300 t year−1. Hence, it should be recognised that a significant part of sedimentation matter is produced within the ecosystems: For the complex of reservoirs (t year−1) Inflow Retention Sedimentation Lentic waters are characterised by a high carbon retention capability, as the major part of TOC supplied to lakes and reservoirs is respirated and included in the trophic chain (Garnier et al. 1999). The above is true for both deep and shallow reservoirs. Anderson and Sobek (2006) provide an example of a shallow lake in Sweden, in which the annual carbon load amounts to approximately 3 t. Calculated phytoplankton production for the lake is as high as 53 t C per year, and the macrophyte production—16 t C per year, while carbon sedimentation is three times greater than the carbon inflow to the ecosystem. 
Unlike the other elements, TOC retention in the Myczkowce reservoir was fairly high (7–8 % of the load), which can be explained by carbon uptake by macrophytes after its respiration. Rate of decomposition for these forms is low, and annually they release only 40 % of the retained organic carbon (Gessner 2001). Both the SMCR reservoirs are loaded with nitrogen and phosphorus in amounts significantly exceeding theoretical values considered to be allowable. Although the easily assimilative inorganic forms are dominating in the supplied mass of elements, concentration of both of the biogenic compounds and the amount of chlorophyll a fall within the level specific for mesotrophy. However, the inflow of biogenic elements is so significant that no distinct, seasonal variations of nitrogen and phosphorus concentrations were seen in the reservoirs. The major part of the biogenic compound load supplied to the studied reservoirs is retained therein via utilisation as substrates in the primary production process. During the summer, dissolved silicon deficits were observed in waters in both the reservoirs. This phenomenon was present due to silicon consumption within the water body and reduced inflow from the basin. In this case silicon became a limiting element for the production of diatomic organic matter, especially in the warmer Solina reservoir. This effect was accompanied by an increase in chlorophyll a concentration, sporadically reaching the value specific for eutrophy, which can be related to production of other species of (non-diatomic) algae. Retention values of particular elements, calculated from the mass balance, prove the intensity of element uptake process carried out by organisms. Approximately 20 % of inflowing total forms of N, P and Si are accumulated, mostly in the Solina reservoir. Dissolved inorganic forms are retained in more than 50 % of cases. Sedimentation of autochthonous biogenic forms is a result of inclusion of supplied elements to the trophic chain. The denitrification rate of nitrogen amounts to 20 % of retention and only to 5 % of the supplied load. TOC retention at the level of 30 % proves the allochthonous matter is accumulated in the deposit. Sedimentation of allochthonous organic matter calculated from the mass balance is three times lower than the value estimated on the basis of the overall TOC sedimentation in the Solina reservoir. The remaining amount results from an intra-ecosystemic production stimulated by an external inflow of biogenic compounds and elements circulation within the reservoirs. Anderson E, Sobek S (2006) Comparison of a mass balance and an ecosystem model approach when evaluating the carbon cycling in a lake ecosystem. Ambio 33(8):476–483. doi:10.1579/0044-7447(2006)35[476:COAMBA]2.0.CO;2 Bartoszek L, Tomaszek JA, Sutyła M (2009) Vertical phosphorus distribution in the bottom sediments of the Solina–Myczkowce reservoirs. Environ Prot Eng 35(4):21–29 Behrend H, Opitz D (2000) Retention of nutrients in river systems, dependence of specific runoff and hydraulic load. Hydrobiologia 410:111–122 Bouwman AF, Bierkens MFP, Griffioen J, Hefting MM, Middelburg JJ, Middelkoop H, Slomp CP (2013) Nutrient dynamics, transfer and retention along the aquatic continuum from land to ocean: towards integration of ecological and biogeochemical models. Biogeosciences 10:1–22. doi:10.5194/bg-10-1-2013 Dudel G, Kohl J-G (1992) The nitrogen budget of a shallow Lake (Grosser Műggelsee, Berlin). 
Int Rev Gesamten Hydrobiol 77:43–72 Dunalska J, Zieliński R, Bigaj I, Szymański D (2013) Indicators of changes in the phytoplankton metabolism in the littoral and pelagial zones of a eutrophic lake. Rocznik Ochrona Środowiska 15(1):621–636 Fantin-Cruz I, Pedrollo O, Girard P, Zeilhofer P, Hamilton SK (2015) Changes in river water quality caused by a diversion hydropower dam bordering the Pantanal floodplain. Hydrobiologia. doi:10.1007/s10750-015-2550-4 Ferber LR, Levine SN, Lini A, Livingston GP (2004) Do cyanobacteria dominate in eutrophic lakes because they fix atmospheric nitrogen? Freshw Biol 49(6):690–708 Gajewska M (2015) Influence of composition of raw wastewater on removal of nitrogen compounds in multistage treatment wetlands. Environ Prot Eng 41(3):19–30 Garnier J, Leporcq B, Sanchez N, Philippon X (1999) Biogeochemical mass balances (C, N, P, Si) in three large reservoirs of the Seine Basin (France). Biogeochemistry 47:119–146 Gessner MO (2001) Mass loss, fungal colonisation and nutrient dynamics of Phragmites australis leaves during senescence and early decay. Aquat Bot 69:325–339 Giercuszkiewicz-Bajtlik M (1990) Prognozowanie zmian jakości wód stojących. Warszawa, Wydawnictwo Instytutu Ochrony Środowiska, p 1990 Golterman HL (1998) The distribution of phosphate over iron-bound and calcium-bound phosphate in stratified sediments. Hydrobiologia 364:75–81 Grizzetti B, Passy P, Billen G, Bouraoui F, Garnier J, Lassaletta L (2015) The role of water nitrogen retention in integrated nutrient management: assessment in a large basin using different modelling approaches. Environ Res Lett. doi:10.1088/1748-9326/10/6/065008 Gruca-Rokosz R, Tomaszek JA (2015) Methane and carbon dioxide in the sediment of a eutrophic reservoir: production pathways and diffusion fluxes at the sediment-water interface. Water Air Soil Pollut. doi:10.1007/s11270-014-2268-3 Harmel RD, Cooper RJ, Slade RM, Haney RL, Arnold JG (2006) Cumulative uncertainty in measured streamflow and water quality data for small watersheds. Trans ASABE 49(3):689–701 Hejzlar J, Anthony S, Arheimer B, Behrendt H, Bouraoui F, Grizzetti B et al (2009) Nitrogen and phosphorus retention in surface waters: an inter-comparison of predictions by catchment models of different complexity. J Environ Monit 11:584–593. doi:10.1039/b901207a Humborg C, Conley DJ, Rahm L, Wulff F, Cociasu A, Ittekkot V (2000) Silicon retention in river basins, far-reaching effects on bigeochemistry and aquatic food webs in coastal marine environments. Ambio 29:44–49 Humborg C, Blomquist S, Avsan E, Bergensund Y, Smedberg E, Brink J, Mörth C-M (2002) Hydrological alterations with river damming in northern Sweden: implications for weathering and river biochemistry. Global Biogeochem Cycles. doi:10.1029/2000GB001369 Humborg C, Pastuszak M, Aigars J, Siegmund H, Mörth C-M, Ittekkot V (2006) Diatoms silica land-sea fluxes through damming in the Baltic Sea catchment—significance of particle trapping and hydrological alterations. Biogeochemistry 77:265–281 Jickells T, Andrews J, Samways G, Sanders R, Malcolm S, Sivyer D, Parker R, Nedwell D, Trimmer M, Ridgway J (2000) Nutrient fluxes through the Humber estuary—past, present and future. Ambio 29(3):130–135 Jørgensen SE (2011) Fundamentals of ecological modellin. Applications in environmental management and research. Amsterdam, Elsevier Koszelnik P (2009a) Atmospheric deposition as a source of nitrogen and phosphorus loads into the Rzeszow reservoir SE Poland. 
Environ Prot Eng 33(2):157–164 Koszelnik P (2009b) Źródła i dystrybucja pierwiastków biogennych na przykładzie zespołu zbiorników zaporowych Solina–Myczkowce. Rzeszów, Oficyna Wydawnicza Politechniki Rzeszowskiej Koszelnik P (2013) Rola krzemu w procesie eutrofizacji wód na przykładzie zbiorników Solina i Myczkowce. Rocznik Ochrona Środowiska 15:2218–2231 Koszelnik P, Tomaszek JA, Gruca-Rokosz R (2007) The significance of denitrification in relation to external loading and nitrogen retention in a mountain reservoir. Mar Freshw Res 58(9):818–826. doi:10.1071/MF07012 Lehmann MF, Bernasconi SM, Mckenzie JA, Barbieri A, Simona M, Veronesi M (2004) Seasonal variation of the δ13C and δ15 N of particulate and dissolved carbon and nitrogen in Lake Lugano, constrains on biogeochemical cycling in eutrophic lake. Limnol Oceanogr 49:415–429 Mengis M, Gächter R, Wehrli B, Bernasconi S (1997) Nitrogen elimination in two deep eutrophic lakes. Limnol Oceanogr 42:1530–1543 Mukhopadhyay B, Smith EH (2000) Comparison of statistical methods for examination of nutrient load to surface reservoirs for sparse data set: application with a modified model for phosphorus availability. Water Res 34(12):3258–3268 Povilaitis A, Stålnacke P, Vassiljev A (2012) Nutrient retention and export to surface waters in Lithuanian and Estonian river basins. Hydrol Res 43(4):359–373 Seitzinger S, Styles PRV, Boyer EW, Alexander RB, Billen G, Howarth RW, Mayer B, Van Bremer N (2002) Nitrogen retention in rivers: model development and application to watersheds in northern USA. Biogeochemistry 57(58):199–237 Tomaszek JA, Gruca-Rokosz R (2007) Rates of dissimilatory nitrate reduction to ammonium in two polish reservoirs: impacts of temperature, organic matter content, and nitrate concentration. Environ Technol 28:771–778. doi:10.1080/09593332808618834 Tomaszek JA, Koszelnik P (2003) A simple model of the nitrogen retention in reservoirs. Hydrobiologia 504(1/3):51–58 Torres IC, Resck RP, Pinto-Coelho RM (2007) Mass balance estimation of nitrogen, carbon, phosphorus and total suspended solids in the urban eutrophic, Pampulha reservoir. Brazil. Acta Limnol. Bras. 19(1):79–91 Wiatkowski M, Rosik-Dulewska C, Kasperek R (2015) Inflow of pollutants to the Bukówka drinking water reservoir from the Transboundary Bóbr river basin. Rocznik Ochrona Środowiska 17:316–336 Zeleňáková M, Čarnogurska M, Šlezingr M, Słyś D, Purcz P (2012) A model based on dimensional analysis for prediction of nitrogen and phosphorus concentrations at the river station Ižkovce, Slovakia. Hydrol Earth Syst Sci 17:201–209. doi:10.5194/hess-17-201-2013 Both authors contributed in the writing of this paper with shares amount to 50 %. Both authors read and approved the final manuscript. The research gained financial support from Poland's Ministry of Science, via Grant No. 2 PO4G 0842. Department of Environmental Engineering and Chemistry, Faculty of Civil and Environmental Engineering and Architecture, Rzeszów University of Technology, al. Powstańców Warszawy 6, 35-959, Rzeszow, Poland Lilianna Bartoszek & Piotr Koszelnik Lilianna Bartoszek Piotr Koszelnik Correspondence to Piotr Koszelnik. Bartoszek, L., Koszelnik, P. The qualitative and quantitative analysis of the coupled C, N, P and Si retention in complex of water reservoirs. SpringerPlus 5, 1157 (2016). https://doi.org/10.1186/s40064-016-2836-7 Mass balance Biogenic elements
CommonCrawl
Quantum Information Processing October 2018 , 17:275 | Cite as An efficient quantum digital signature for classical messages Ming-Qiang Wang Xue Wang Tao Zhan First Online: 06 September 2018 Quantum digital signature offers an information theoretically secure way to guarantee the identity of the sender and the integrity of classical messages between one sender and many recipients. The existing unconditionally secure protocols only deal with the problem of sending single-bit messages. In this paper, we modify the model of quantum digital signature protocol and construct an unconditionally secure quantum digital signature protocol which can sign multi-bit messages at one time. Our protocol is against existing quantum attacks. Compared with the previous protocols, our protocol requires less quantum memory and becomes much more efficient. Our construction makes it possible to have a quantum signature in actual application. Quantum digital signature Quantum commitment Photodetection event The author is supported by MMJJ20180210, NSFC: 61832012, NSFC Grant 61672019 and The Fundamental Research Funds of Shandong University Grant 2016JC029. Bob's optimal strategy is to minimize the probability of causing a photodetection event, with the cost matrix \({\mathbf {C}}\) with elements \(c_{\phi ,\theta }\). In the cost matrix \({\mathbf {C}}\), the diagonal elements represent the cases when recipient uses the same phase as sender, and the off-diagonal elements represent the cases when recipient uses the phase different from sender. In the specific experimental operation, to make our protocol to be secure against forging, a practical requirement is that the probabilities of registering a photodetection event on Charlie's signal null-port arm are greatly different between the above two cases. If so, Charlie can register distinctly more photodetection events than threshold value when Bob (or other external party) attempts to forge a message. That is to say, Charlie is capable of detecting a discrepancy between the true and forged messages. To achieve this requirement, the choice of the number of possible phase encodings p cannot be large. Clarke et al. [4] presents us a practical experimental data, they use 8 different phase states, and the average photon number per pulse is \(|\alpha ^2|=0.16\). In this experimental setup, the cost matrix is given by $$\begin{aligned} {\mathbf {C}}=\begin{pmatrix}3.89&{}\quad 4.40&{}\quad 5.24&{}\quad 5.95&{}\quad 6.35&{}\quad 6.00&{}\quad 5.29&{}\quad 4.39\\ 4.56&{}\quad 3.88&{}\quad 4.43&{}\quad 5.29&{}\quad 6.04&{}\quad 6.39&{}\quad 6.02&{}\quad 5.20\\ 5.28&{}\quad 4.60&{}\quad 3.89&{}\quad 4.42&{}\quad 5.29&{}\quad 6.02&{}\quad 6.37&{}\quad 5.95\\ 5.68&{}\quad 5.22&{}\quad 4.58&{}\quad 3.90&{}\quad 4.40&{}\quad 5.24&{}\quad 5.91&{}\quad 6.30\\ 6.36&{}\quad 5.68&{}\quad 5.27&{}\quad 4.59&{}\quad 3.89&{}\quad 4.43&{}\quad 5.24&{}\quad 6.01\\ 5.62&{}\quad 6.36&{}\quad 5.66&{}\quad 5.23&{}\quad 4.57&{}\quad 3.89&{}\quad 4.41&{}\quad 5.30\\ 5.26&{}\quad 5.68&{}\quad 6.40&{}\quad 5.70&{}\quad 5.22&{}\quad 4.60&{}\quad 3.88&{}\quad 4.40\\ 4.61&{}\quad 5.24&{}\quad 5.65&{}\quad 6.36&{}\quad 5.68&{}\quad 5.22&{}\quad 4.56&{}\quad 3.88\end{pmatrix}\times 10^{-3}. \end{aligned}$$ Lemma 4 For any bit message \(m_j,\ j=1,2,\ldots ,n\), the probability of Alice repudiating successfully is $$\begin{aligned} \text {Pr}(\mathrm{repudiation}\, m_j)\le 2^{-(s_v-s_a)L}. 
\end{aligned}$$ For this purpose, Alice needs to forward different quantum signatures to Bob and Charlie, or more generally, the most general state Alice prepares is \(\pi _{A,B_1,C_1,B_2,C_2,\ldots ,B_{L},C_{L}}\), which is a general \( 2L+1\)-partite state. Subsystem A Alice keeps and sends partitions \(B_1,\ldots ,B_{L}\) to Bob and \(C_1,\ldots ,C_{L}\) to Charlie. If Alice is honest, there is no subsystem A, \(B_i\) and \(C_i\) are identical coherent states with a complex phase known to Alice alone, as specified by the protocol. There are two cases of this attack: security against individual repudiation and security against coherent repudiation. At first, we show that our protocol is secure against individual repudiation. We assume that the system A is disentangled from the rest of Alice's state, and the subsystems \((B_kC_k)\) and \((B_lC_l)\) are not entangled with each other for \(k\ne l\). However, we allow the partitions \(B_k\) and \(C_k\) to be mutually entangled. This type of an attack we refer to as an individual attack. According to the protocol specifications, Bob and Charlie will individually run the pairs of states in the systems through the multi-port and commit to quantum memory whatever comes out on their signal outputs of the multi-port. For the purpose of showing security against repudiation, we can assume that they ignore the measurement outcomes on the multi-port null-ports. For the kth signature element, the joint system of Bob and Charlie which they store into memory is state \(\pi ^\mathrm{out}_{B_kC_k}\), which is symmetric under permutations of Bob's and Charlie's subsystems as we now show. Let $$\begin{aligned} \pi ^\mathrm{in}_{B_kC_k}=\int _{C^2} P(\alpha , \beta )|\alpha> <\alpha |\otimes |\beta > <\beta | d^2\alpha d^2\beta \end{aligned}$$ be any general two mode state given in the \(\mathrm P\) representation. Then the stored output state (when the null-port subsystems have been traced out) is $$\begin{aligned} \pi ^\mathrm{out}_{B_kC_k}= & {} \int _{C^2} P(\alpha , \beta )|(\alpha +\beta )/\sqrt{2}> <(\alpha +\beta )/\sqrt{2}|\nonumber \\&\otimes |(\alpha +\beta )/\sqrt{2}> <(\alpha +\beta )/\sqrt{2}| d^2\alpha d^2\beta , \end{aligned}$$ which is symmetric in the sense given above. From [4], we know that the signature states Bob and Charlie end up with are symmetric under the swap of their systems and the probability matrix describing a priori occurrence of photodection events on Bob's and Charlie's signal null-port arm is symmetric. So for every possible state \(\pi ^\mathrm{out}_{B_kC_k}\), the probability of getting event outcomes (0, 1) (only Charlie registers a photodection event) and (1, 0) (only Bob registers a photodection event) is the same and is no more than \(\frac{1}{2}\). Specifically, if Alice succeeds in repudiating that Bob accepted the message sent by Alice, but Charlie rejected, Charlie needs to register more photodection events than Bob, so we can bound the probability of Alice repudiating successfully as $$\begin{aligned} \text {Pr}(\text {repudiation})\le 2^{-(s_v-s_a)L}. \end{aligned}$$ The security against coherent repudiation of our protocol is rather obviously. In a coherent attack, the entanglement of the states Alice may use is unrestricted. From [4], we know that using globally entangled states cannot help Alice repudiate her signed message, that is to say $$\begin{aligned} \text {Pr}(\text {Alice cheats| individual\,attack})\ge \text {Pr(Alice\,cheats| coherent\,attack}). 
\end{aligned}$$ \(\square \) Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 21, 120–126 (1978)MathSciNetCrossRefGoogle Scholar ElGamal, T.: A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans. Inf. Theory 31, 469–472 (1985)MathSciNetCrossRefGoogle Scholar Gottesman, D., Chuang, I.: Quantum digital signatures. Quantum Phys., Preprint at arXiv:quant-ph/0105032 (2001) Clarke, P.J., Collins, R.J., Dunjko, V., Andersson, E., Jeffers, J., Buller, G.S.: Experimental demonstration of quantum digital signatures using phase-encoded coherent states of light. Nat. Commun. 3(6), 1174 (2012)ADSCrossRefGoogle Scholar Wang, T.Y., Cai, X.Q., Ren, Y.L., Zhang, R.L.: Security of quantum digital signatures for classical messages. Sci. Rep. 5, 9231 (2015)CrossRefGoogle Scholar Greenberger, D.M., Horne, M.A., Zeilinger, A.: Bells theorem, quantum theory, and conceptions of universe. Physics 58, 1131 (1990)zbMATHGoogle Scholar Boykin, P.O., Roychowdhury, V.: Optimal encryption of quantum bits. Phys. Rev. A 67, 042317 (2003)ADSCrossRefGoogle Scholar Zeng, G., Keitel, C.H.: Arbitrated quantum-signature scheme. Phys. Rev. A 65, 042312 (2002)ADSCrossRefGoogle Scholar Li, Q., Chan, W.H., Long, D.Y.: Arbitrated quantum signature scheme using Bell states. Phys. Rev. A 79, 054307 (2009)ADSMathSciNetCrossRefGoogle Scholar Zou, X., Qiu, D.: Security analysis and improvements of arbitrated quantum signature schemes. Phys. Rev. A 82(4), 042325 (2010)ADSCrossRefGoogle Scholar Luo, M.X., Chen, X.B., Yun, D., Yang, Y.X.: Quantum signature scheme with weak arbitrator. Int. J. Theor. Phys. 51, 2135–2142 (2012)CrossRefGoogle Scholar Zou, X., Qiu, D., Yu, F., Mateus, P.: Security problems in the quantum signature scheme with a weak arbitrator. Int. J. Theor. Phys. 53(2), 603–611 (2014)MathSciNetCrossRefGoogle Scholar Andersson, E., Curty, M., Jex, I.: Experimentally realiable quantum comparison of coherent states and its applications. Phys. Rev. A 74(2), 022304-1–022304-11 (2006)ADSCrossRefGoogle Scholar Dunjko, V., Wallden, P., Andersson, E.: Quantum digital signatures without quantum memory. Phys. Rev. Lett. 112(4), 040502 (2014)ADSCrossRefGoogle Scholar Amiri, R., Wallden, P., Kent, A., Andersson, E.: Secture quantum signatures using insecure quantum channels. Phys. Rev. A 93(3), 032325 (2016)ADSCrossRefGoogle Scholar Hoeffding, W.: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58(301), 13–30 (1963)MathSciNetCrossRefGoogle Scholar Unruh, D.: Computationally binding quantum commitments. In: Advances in Cryptology-EUROCRYPT 2016, LNCS 9666, pages, pp. 497–527, Springer (2016)Google Scholar Wang, M.Q., Wang, X., Zhan, T.: Unconditionally secure multi-party quantum commitment scheme. Quantum Inf. Process. 17(2), 31 (2018)ADSMathSciNetCrossRefGoogle Scholar © Springer Science+Business Media, LLC, part of Springer Nature 2018 1.Key Laboratory of Cryptologic Technology and Information Security, Ministry of Education, School of MathematicsShandong UniversityJinanChina Wang, MQ., Wang, X. & Zhan, T. Quantum Inf Process (2018) 17: 275. https://doi.org/10.1007/s11128-018-2047-y Received 09 November 2017 First Online 06 September 2018 DOI https://doi.org/10.1007/s11128-018-2047-y
CommonCrawl
# Eigen values and eigen vectors - Definition of eigen values and eigen vectors - Properties of eigen values and eigen vectors - Diagonalization of a matrix - Eigen value decomposition - Applications of eigen values and eigen vectors in matrix factorization Consider the matrix A: $$ A = \begin{bmatrix} 3 & 2 \\ 2 & 3 \\ \end{bmatrix} $$ The eigen values of A are λ1 = 5 and λ2 = 1, and the corresponding eigen vectors are v1 = (1, 1) and v2 = (1, -1). ## Exercise Find the eigen values and eigen vectors of the matrix B: $$ B = \begin{bmatrix} 3 & 1 \\ 1 & 3 \\ \end{bmatrix} $$ # Matrix factorization and its applications - Types of matrix factorization: LU decomposition, QR decomposition, and singular value decomposition - Applications of matrix factorization in various fields, such as data compression, image processing, and machine learning - Solving linear systems using matrix factorization Consider the matrix C: $$ C = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ \end{bmatrix} $$ The LU decomposition of C is: $$ C = LU $$ where L is the lower triangular matrix: $$ L = \begin{bmatrix} 1 & 0 \\ 3 & 1 \\ \end{bmatrix} $$ and U is the upper triangular matrix: $$ U = \begin{bmatrix} 1 & 2 \\ 0 & -2 \\ \end{bmatrix} $$ so that LU = C. ## Exercise Perform the LU decomposition of the matrix D: $$ D = \begin{bmatrix} 4 & 2 \\ 2 & 3 \\ \end{bmatrix} $$ # Iterative methods for matrix factorization - The power method for finding the dominant eigen value and eigen vector - The Jacobi method for solving linear systems with a symmetric matrix - The Gauss-Seidel method for solving linear systems with a symmetric and positive-definite matrix - The conjugate gradient method for solving linear systems with a symmetric and positive-definite matrix Consider the matrix E: $$ E = \begin{bmatrix} 3 & 2 \\ 2 & 3 \\ \end{bmatrix} $$ The dominant eigen value of E is λ = 5, and the corresponding eigen vector is v = (1, 1). ## Exercise Find the dominant eigen value and eigen vector of the matrix F: $$ F = \begin{bmatrix} 3 & 1 \\ 1 & 3 \\ \end{bmatrix} $$ # Sparse matrices and their properties - Definition of sparse matrices - Compressed sparse row (CSR) and compressed sparse column (CSC) formats - Applications of sparse matrices in data compression, image processing, and machine learning Consider the sparse matrix G: $$ G = \begin{bmatrix} 0 & 2 & 0 \\ 0 & 0 & 3 \\ 1 & 0 & 0 \\ \end{bmatrix} $$ The CSR representation of G stores only the non-zero entries together with their positions (0-based): values = [2, 3, 1], column indices = [1, 2, 0], row pointers = [0, 1, 2, 3]. ## Exercise Convert the sparse matrix H: $$ H = \begin{bmatrix} 0 & 1 & 0 \\ 2 & 0 & 3 \\ 0 & 4 & 0 \\ \end{bmatrix} $$ to the CSR format. # Implementing matrix factorization algorithms in C++ using Eigen library - Installing and setting up the Eigen library - Creating and manipulating matrices and vectors using Eigen - Implementing various matrix factorization algorithms using Eigen Consider the matrix I: $$ I = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ \end{bmatrix} $$ The Eigen implementation of this matrix is: ```cpp Eigen::Matrix2d I; I << 1, 2, 3, 4; ``` ## Exercise Implement the LU decomposition of the matrix J: $$ J = \begin{bmatrix} 4 & 2 \\ 2 & 3 \\ \end{bmatrix} $$ using the Eigen library. Runnable Eigen-based sketches of the eigen value computation, the LU decomposition and the power method are given below.
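The following is a minimal, self-contained sketch tying the dense examples of this chapter together. It assumes Eigen 3 is installed and on the include path; the expected outputs noted in the comments refer to the matrices A and C defined above.

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
    // The symmetric 2x2 matrix A from the eigen value example above.
    Eigen::Matrix2d A;
    A << 3, 2,
         2, 3;

    // SelfAdjointEigenSolver is the appropriate solver for symmetric matrices.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d> es(A);
    std::cout << "Eigen values (ascending):\n" << es.eigenvalues() << "\n";  // 1 and 5
    std::cout << "Eigen vectors (columns):\n" << es.eigenvectors() << "\n";  // ~(1,-1)/sqrt(2) and (1,1)/sqrt(2)

    // LU decomposition (with partial pivoting) of the matrix C from the text.
    Eigen::Matrix2d C;
    C << 1, 2,
         3, 4;
    Eigen::PartialPivLU<Eigen::Matrix2d> lu(C);
    // Because of row pivoting the packed factors satisfy P*C = L*U, so they may
    // differ from the hand-computed L and U above by a row permutation.
    std::cout << "Packed LU factors:\n" << lu.matrixLU() << "\n";

    // The factorization is what Eigen uses to solve linear systems C*x = b.
    Eigen::Vector2d b(5, 11);
    std::cout << "Solution of C*x = b:\n" << lu.solve(b) << "\n";            // (1, 2)
    return 0;
}
```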
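The power method listed earlier can be sketched in a few lines on top of Eigen. This is an illustration only: the start vector, tolerance and iteration cap are arbitrary choices, and no special handling of repeated or complex dominant eigen values is included.

```cpp
#include <cmath>
#include <iostream>
#include <Eigen/Dense>

// Power iteration: estimates the dominant eigen value of A and writes the
// corresponding normalized eigen vector into v.
double powerMethod(const Eigen::MatrixXd& A, Eigen::VectorXd& v,
                   int maxIter = 1000, double tol = 1e-10) {
    v = Eigen::VectorXd::Ones(A.cols()).normalized();  // arbitrary start vector
    double lambda = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        Eigen::VectorXd w = A * v;
        double next = v.dot(w);        // Rayleigh quotient estimate (v has unit norm)
        v = w.normalized();
        if (std::abs(next - lambda) < tol) { lambda = next; break; }
        lambda = next;
    }
    return lambda;
}

int main() {
    Eigen::MatrixXd E(2, 2);
    E << 3, 2,
         2, 3;
    Eigen::VectorXd v;
    double lambda = powerMethod(E, v);
    std::cout << "Dominant eigen value: " << lambda << "\n";  // ~5
    std::cout << "Eigen vector:\n" << v << "\n";              // ~(0.707, 0.707)
    return 0;
}
```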
# Handling large scale matrix factorization problems - Parallel computing techniques for matrix factorization - Distributed memory systems for large scale matrix factorization - Sparse matrix formats for efficient storage and computation Consider the sparse matrix K: $$ K = \begin{bmatrix} 0 & 1 & 0 \\ 2 & 0 & 3 \\ 0 & 4 & 0 \\ \end{bmatrix} $$ The CSR representation of K stores only the non-zero entries together with their positions (0-based): values = [1, 2, 3, 4], column indices = [1, 0, 2, 1], row pointers = [0, 1, 3, 4]. ## Exercise Implement the LU decomposition of the sparse matrix L: $$ L = \begin{bmatrix} 0 & 1 & 0 \\ 2 & 0 & 3 \\ 0 & 4 & 0 \\ \end{bmatrix} $$ using the Eigen library and parallel computing techniques. # Applications of matrix factorization in data science - Collaborative filtering for recommending items to users - Principal component analysis for dimensionality reduction - Singular value decomposition for image compression Consider the matrix M: $$ M = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{bmatrix} $$ The singular value decomposition of M is: $$ M = U \Sigma V^T $$ where U is the matrix of left singular vectors, Σ is the diagonal matrix of singular values, and V is the matrix of right singular vectors. ## Exercise Apply the singular value decomposition to the matrix N: $$ N = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \\ \end{bmatrix} $$ # Error analysis and performance evaluation - Measuring the error of matrix factorization algorithms - Evaluating the performance of matrix factorization algorithms using metrics such as Frobenius norm, relative error, and convergence rate - Comparing different matrix factorization algorithms Consider a matrix P and an approximation Q produced by a factorization algorithm. The error of the approximation P ≈ Q can be measured using the Frobenius norm: $$ \text{Error} = \|\text{P} - \text{Q}\|_F $$ ## Exercise Evaluate the performance of the matrix factorization algorithms R and S using the Frobenius norm. # Optimization techniques for matrix factorization - Gradient descent and its variants - Alternating least squares (ALS) algorithm - Stochastic gradient descent (SGD) algorithm - Convex optimization techniques Consider the matrix factorization algorithm T. The gradient descent update rule for T is: $$ \text{T} \leftarrow \text{T} - \alpha \nabla \text{Error}(\text{T}) $$ where α is the learning rate. ## Exercise Implement the ALS algorithm for the matrix factorization problem U. # Convergence and stability of matrix factorization algorithms - Condition number of a matrix - Convergence criteria for matrix factorization algorithms - Stability analysis of matrix factorization algorithms Consider a matrix V. The condition number of V is the ratio of its largest to its smallest singular value: $$ \text{cond}(\text{V}) = \frac{\sigma_{\max}(\text{V})}{\sigma_{\min}(\text{V})} $$ ## Exercise Analyze the convergence and stability of the matrix factorization algorithm W. # Case studies and real-world examples Consider the Netflix Prize dataset, which consists of movie ratings given by users. Matrix factorization algorithms can be used to analyze the user-movie interactions and recommend movies to users based on their preferences. ## Exercise Discuss the application of matrix factorization algorithms in the analysis of social network data for detecting communities and understanding the structure of social networks. Runnable Eigen-based sketches of CSR sparse storage, a singular value decomposition and a simple gradient-descent factorization are given after the table of contents below. # Course Table Of Contents 1. Eigen values and eigen vectors 2. Matrix factorization and its applications 3. Iterative methods for matrix factorization 4. Sparse matrices and their properties 5.
Implementing matrix factorization algorithms in C++ using Eigen library 6. Handling large scale matrix factorization problems 7. Applications of matrix factorization in data science 8. Error analysis and performance evaluation 9. Optimization techniques for matrix factorization 10. Convergence and stability of matrix factorization algorithms 11. Case studies and real-world examples Course
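The following sketches illustrate topics from sections 6 to 10 of the course. First, sparse storage with Eigen: with the RowMajor option, the internal arrays of Eigen::SparseMatrix correspond directly to the CSR triplet of values, column indices and row pointers (Eigen 3 assumed; the matrix is the K used in section 6).

```cpp
#include <iostream>
#include <vector>
#include <Eigen/Sparse>

int main() {
    // The 3x3 sparse matrix K from the text, assembled from (row, column, value) triplets.
    std::vector<Eigen::Triplet<double>> triplets;
    triplets.emplace_back(0, 1, 1.0);
    triplets.emplace_back(1, 0, 2.0);
    triplets.emplace_back(1, 2, 3.0);
    triplets.emplace_back(2, 1, 4.0);

    // Row-major storage corresponds to the CSR layout.
    Eigen::SparseMatrix<double, Eigen::RowMajor> K(3, 3);
    K.setFromTriplets(triplets.begin(), triplets.end());
    K.makeCompressed();

    // valuePtr(): non-zero values, innerIndexPtr(): column indices,
    // outerIndexPtr(): row pointers (rows + 1 entries).
    std::cout << "values:         ";
    for (int i = 0; i < K.nonZeros(); ++i) std::cout << K.valuePtr()[i] << ' ';
    std::cout << "\ncolumn indices: ";
    for (int i = 0; i < K.nonZeros(); ++i) std::cout << K.innerIndexPtr()[i] << ' ';
    std::cout << "\nrow pointers:   ";
    for (int i = 0; i <= K.rows(); ++i) std::cout << K.outerIndexPtr()[i] << ' ';
    std::cout << '\n';    // expected: values 1 2 3 4, columns 1 0 2 1, row pointers 0 1 3 4
    return 0;
}
```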
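Next, a minimal sketch of the singular value decomposition from section 7, using JacobiSVD on the matrix M from the text and checking the reconstruction with the Frobenius norm from section 8.

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd M(3, 3);
    M << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;

    // JacobiSVD is accurate and fine for small dense matrices;
    // BDCSVD is the usual choice for larger problems.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(M, Eigen::ComputeThinU | Eigen::ComputeThinV);
    std::cout << "Singular values:\n" << svd.singularValues() << "\n";

    // Rebuild M = U * Sigma * V^T and report the Frobenius-norm error.
    Eigen::MatrixXd reconstructed =
        svd.matrixU() * svd.singularValues().asDiagonal() * svd.matrixV().transpose();
    std::cout << "Reconstruction error (Frobenius norm): "
              << (M - reconstructed).norm() << "\n";   // ~0 up to rounding
    return 0;
}
```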
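Finally, a deliberately simple gradient-descent factorization in the collaborative-filtering spirit of sections 7 and 9. The ratings matrix, rank, step size and iteration count are made-up illustrative choices, and no regularization or missing-data handling is included.

```cpp
#include <iostream>
#include <Eigen/Dense>

int main() {
    // A tiny, fully observed "ratings" matrix (4 users x 3 items); the values are made up.
    Eigen::MatrixXd R(4, 3);
    R << 5, 3, 1,
         4, 2, 1,
         1, 1, 5,
         1, 2, 4;

    const int rank = 2;
    const double alpha = 0.01;     // a fixed small step; in practice it would be tuned
    const int iterations = 5000;

    // Approximate R by W * H^T with random low-rank factors.
    Eigen::MatrixXd W = Eigen::MatrixXd::Random(R.rows(), rank);
    Eigen::MatrixXd H = Eigen::MatrixXd::Random(R.cols(), rank);

    for (int it = 0; it < iterations; ++it) {
        Eigen::MatrixXd Err = R - W * H.transpose();       // residual matrix
        // Gradients of 0.5 * ||R - W*H^T||_F^2 with respect to W and H.
        Eigen::MatrixXd gradW = -Err * H;
        Eigen::MatrixXd gradH = -Err.transpose() * W;
        W -= alpha * gradW;
        H -= alpha * gradH;
    }

    std::cout << "Final Frobenius error: "
              << (R - W * H.transpose()).norm() << "\n";
    return 0;
}
```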
Textbooks
Deltahedron In geometry, a deltahedron (plural deltahedra) is a polyhedron whose faces are all equilateral triangles. The name is taken from the Greek upper case delta (Δ), which has the shape of an equilateral triangle. There are infinitely many deltahedra, all having an even number of faces by the handshaking lemma (since every face is a triangle, counting face-edge incidences gives 3F = 2E, so the face count F must be even; Euler's formula V − E + F = 2 then gives E = 3F/2 and V = 2 + F/2). Of these only eight are convex, having 4, 6, 8, 10, 12, 14, 16 and 20 faces.[1] The number of faces, edges, and vertices is listed below for each of the eight convex deltahedra. Not to be confused with Deltohedron. The eight convex deltahedra There are only eight strictly-convex deltahedra: three are regular polyhedra, and five are Johnson solids. The three regular convex polyhedra are indeed Platonic solids. Regular deltahedra (name: faces, edges, vertices, vertex configuration, symmetry group): tetrahedron: 4, 6, 4, 4 × 3³, Td, [3,3]; octahedron: 8, 12, 6, 6 × 3⁴, Oh, [4,3]; icosahedron: 20, 30, 12, 12 × 3⁵, Ih, [5,3]. Johnson deltahedra (name: faces, edges, vertices, vertex configuration, symmetry group): triangular bipyramid: 6, 9, 5, 2 × 3³ and 3 × 3⁴, D3h, [3,2]; pentagonal bipyramid: 10, 15, 7, 5 × 3⁴ and 2 × 3⁵, D5h, [5,2]; snub disphenoid: 12, 18, 8, 4 × 3⁴ and 4 × 3⁵, D2d, [2,2]; triaugmented triangular prism: 14, 21, 9, 3 × 3⁴ and 6 × 3⁵, D3h, [3,2]; gyroelongated square bipyramid: 16, 24, 10, 2 × 3⁴ and 8 × 3⁵, D4d, [4,2]. In the 6-faced deltahedron, some vertices have degree 3 and some degree 4. In the 10-, 12-, 14-, and 16-faced deltahedra, some vertices have degree 4 and some degree 5. These five irregular deltahedra belong to the class of Johnson solids: convex polyhedra with regular polygons for faces. Deltahedra retain their shape even if the edges are free to rotate around their vertices so that the angles between edges are fluid. Not all polyhedra have this property: for example, if some of the angles of a cube are relaxed, the cube can be deformed into a non-right square prism. There is no 18-faced convex deltahedron.[2] However, the edge-contracted icosahedron gives an example of an octadecahedron that can either be made convex with 18 irregular triangular faces, or made with equilateral triangles that include two coplanar sets of three triangles. Non-strictly convex cases There are infinitely many cases with coplanar triangles, allowing for sections of the infinite triangular tilings. If the sets of coplanar triangles are considered a single face, a smaller set of faces, edges, and vertices can be counted. The coplanar triangular faces can be merged into rhombic, trapezoidal, hexagonal, or other equilateral polygon faces.
Each face must be a convex polyiamond such as , , , , , , and , ...[3] Some smaller examples include: Coplanar deltahedra ImageNameFacesEdgesVerticesVertex configurationsSymmetry group Augmented octahedron Augmentation 1 tet + 1 oct 10 15 7 1 × 33 3 × 34 3 × 35 0 × 36 C3v, [3] 4 3 12 Trigonal trapezohedron Augmentation 2 tets + 1 oct 12 18 8 2 × 33 0 × 34 6 × 35 0 × 36 C3v, [3] 6 12 Augmentation 2 tets + 1 oct 12 188 2 × 33 1 × 34 4 × 35 1 × 36 C2v, [2] 2 2 2 117 Triangular frustum Augmentation 3 tets + 1 oct 14 219 3 × 33 0 × 34 3 × 35 3 × 36 C3v, [3] 1 3 1 96 Elongated octahedron Augmentation 2 tets + 2 octs 16 2410 0 × 33 4 × 34 4 × 35 2 × 36 D2h, [2,2] 4 4 126 Tetrahedron Augmentation 4 tets + 1 oct 16 2410 4 × 33 0 × 34 0 × 35 6 × 36 Td, [3,3] 4 64 Augmentation 3 tets + 2 octs 18 2711 1 × 33 2 × 34 5 × 35 3 × 36 D2h, [2,2] 2 1 2 2 149 Edge-contracted icosahedron 18 2711 0 × 33 2 × 34 8 × 35 1 × 36 C2v, [2] 12 2 2210 Triangular bifrustum Augmentation 6 tets + 2 octs 20 3012 0 × 33 3 × 34 6 × 35 3 × 36 D3h, [3,2] 2 6 159 triangular cupola Augmentation 4 tets + 3 octs 22 3313 0 × 33 3 × 34 6 × 35 4 × 36 C3v, [3] 3 3 1 1 159 Triangular bipyramid Augmentation 8 tets + 2 octs 24 3614 2 × 33 3 × 34 0 × 35 9 × 36 D3h, [3] 6 95 Hexagonal antiprism 24 3614 0 × 33 0 × 34 12 × 35 2 × 36 D6d, [12,2+] 12 2 2412 Truncated tetrahedron Augmentation 6 tets + 4 octs 28 4216 0 × 33 0 × 34 12 × 35 4 × 36 Td, [3,3] 4 4 1812 Tetrakis cuboctahedron Octahedron Augmentation 8 tets + 6 octs 32 4818 0 × 33 12 × 34 0 × 35 6 × 36 Oh, [4,3] 8 126 Non-convex forms There are an infinite number of nonconvex forms. Some examples of face-intersecting deltahedra: • Great icosahedron - a Kepler-Poinsot solid, with 20 intersecting triangles Other nonconvex deltahedra can be generated by adding equilateral pyramids to the faces of all 5 Platonic solids: triakis tetrahedron tetrakis hexahedron triakis octahedron (stella octangula) pentakis dodecahedron triakis icosahedron 12 triangles 24 triangles 60 triangles Other augmentations of the tetrahedron include: Augmented tetrahedra 8 triangles 10 triangles 12 triangles Also by adding inverted pyramids to faces: • Excavated dodecahedron Excavated dodecahedron A toroidal deltahedron 60 triangles 48 triangles See also • Simplicial polytope - polytopes with all simplex facets References 1. Freudenthal, H; van der Waerden, B. L. (1947), "Over een bewering van Euclides ("On an Assertion of Euclid")", Simon Stevin (in Dutch), 25: 115–128 (They showed that there are just 8 convex deltahedra. ) 2. Trigg, Charles W. (1978), "An Infinite Class of Deltahedra", Mathematics Magazine, 51 (1): 55–57, doi:10.1080/0025570X.1978.11976675, JSTOR 2689647. 3. The Convex Deltahedra And the Allowance of Coplanar Faces Further reading • Rausenberger, O. (1915), "Konvexe pseudoreguläre Polyeder", Zeitschrift für mathematischen und naturwissenschaftlichen Unterricht, 46: 135–142. • Cundy, H. Martyn (December 1952), "Deltahedra", Mathematical Gazette, 36: 263–266, doi:10.2307/3608204, JSTOR 3608204. • Cundy, H. Martyn; Rollett, A. (1989), "3.11. Deltahedra", Mathematical Models (3rd ed.), Stradbroke, England: Tarquin Pub., pp. 142–144. • Gardner, Martin (1992), Fractal Music, Hypercards, and More: Mathematical Recreations from Scientific American, New York: W. H. Freeman, pp. 40, 53, and 58-60. • Pugh, Anthony (1976), Polyhedra: A visual approach, California: University of California Press Berkeley, ISBN 0-520-03056-7 pp. 
35–36

External links
• Weisstein, Eric W., "Deltahedron", MathWorld
• The eight convex deltahedra
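As a quick check on the counts in the tables above, the following short script (not part of the original article; it only restates the listed numbers) verifies that each of the eight convex deltahedra has an even number of faces, that all faces are triangles via 3F = 2E, and that Euler's formula V - E + F = 2 holds.

```python
# Face/edge/vertex counts for the eight convex deltahedra listed above.
convex_deltahedra = [
    ("tetrahedron",                     4,  6,  4),
    ("triangular bipyramid",            6,  9,  5),
    ("octahedron",                      8, 12,  6),
    ("pentagonal bipyramid",           10, 15,  7),
    ("snub disphenoid",                12, 18,  8),
    ("triaugmented triangular prism",  14, 21,  9),
    ("gyroelongated square bipyramid", 16, 24, 10),
    ("icosahedron",                    20, 30, 12),
]

for name, F, E, V in convex_deltahedra:
    assert 3 * F == 2 * E      # every face is a triangle and every edge borders two faces
    assert F % 2 == 0          # hence the face count is even (handshaking lemma)
    assert V - E + F == 2      # Euler's formula for convex polyhedra
    print(f"{name}: F={F}, E={E}, V={V}")
```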
Derivation of VIX Formula

I've read a lot of derivations of the VIX formula. I can say it is the -adjusted- fair strike of a variance swap. But I can't see how it goes from the variance swap rate to the VIX formula. In particular I can't see the last part of the VIX formula hosted here on page 4. Could you please lead me from Hull Technical Note 22:

\begin{equation} \ E(V)= \frac{2}{T}\ln\frac{F_{0}}{S^{*}} - \frac{2}{T}\left[ \frac{F_{0}}{S^{*}}-1\right] +\frac{2}{T}\left[\int_{K=0}^{S^{*}} \frac{1}{K^{2}}e^{RT}p(K)dK + \int_{K=S^{*}}^{\infty} \frac{1}{K^{2}}e^{RT}c(K)dK\right] \end{equation}

to the VIX formula

\begin{equation} \sigma^{2}= \frac{2}{T}\sum_i^{}\frac{\triangle K_{i}}{K_{i}^{2}}e^{RT}Q(K_{i}) - \frac{1}{T}\left[ \frac{F}{K_{0}}-1\right]^{2} \end{equation}

volatility derivatives swaps variance vix
TryingtobeQuant

$\begingroup$ The determination of the VIX is quite involved: it uses the implied volatilities of listed options and weighs them by how far they are OTM. The CBOE website has the calculation methodology. $\endgroup$ – AlRacoon Mar 3 '19 at 17:25

$\begingroup$ @AlRacoon yes, it has the methodology, as I attached to this post. But it doesn't involve the derivation. $\endgroup$ – TryingtobeQuant Mar 3 '19 at 17:45

$\begingroup$ It is clear that the $\Delta K_i$ come from discretizing the integral - is it? $\endgroup$ – Ric Mar 3 '19 at 19:29

$\begingroup$ @Richard Yes it is. What about the first part of Hull's equation and the last term in the VIX white paper? Can we say it is just an approximation? $\endgroup$ – TryingtobeQuant Mar 3 '19 at 19:32

$\begingroup$ @TryingtobeQuant yes, it is the second order approximation of the logarithm of 1+x as discussed below in the answer $\endgroup$ – Ric Mar 4 '19 at 10:43

The piece you are missing is an approximation via the Taylor formula of the logarithm: $$\ln(1+x) \approx x-\frac{x^2}{2} \; .$$ Apply this to the first term in the final formula of the technical paper: $$\frac{2}{T}\ln\frac{F_{0}}{S^{*}} = \frac{2}{T}\ln\left(1+\left(\frac{F_{0}}{S^{*}}-1\right)\right) \approx \frac{2}{T}\left(\left(\frac{F_{0}}{S^{*}}-1\right) - \frac{1}{2}\left(\frac{F_{0}}{S^{*}}-1\right)^2\right) \;.$$ Now, the first term of this approximation cancels with the second term of the technical paper formula. You're left with the quadratic term.

Raskolnikov
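To see how the continuous expression turns into the quoted VIX formula in practice, here is a small numerical sketch of the discretisation step. Everything in it (strike grid, option quotes, forward level, rate, maturity) is invented for illustration and is not CBOE data. The two integrals over out-of-the-money puts and calls become the sum over Delta K_i / K_i^2 * e^{RT} Q(K_i), and the log and linear terms collapse, via the second-order Taylor expansion shown in the answer above, into the -(1/T)(F/K_0 - 1)^2 correction.

```python
# Illustrative discretisation of the VIX variance formula.
# All inputs (strikes, quotes, F, R, T) are made up for the example.
import numpy as np

T, R = 30/365, 0.02                       # time to expiry (years) and risk-free rate
F = 2778.5                                # assumed forward index level for this expiry
strikes = np.arange(2600.0, 2951.0, 25.0) # hypothetical strike grid, 25-point spacing
quotes  = np.array([ 5.1, 7.9, 12.4, 19.6, 30.8, 47.9, 62.0,   # OTM puts below K0
                    55.0,                                       # put/call average at K0
                    41.2, 28.9, 18.3, 11.2, 6.7, 3.9, 2.2])     # OTM calls above K0

K0 = strikes[strikes <= F].max()   # first strike at or below the forward
dK = np.gradient(strikes)          # Delta K_i: half the distance between neighbouring strikes

# sigma^2 = (2/T) * sum_i dK_i/K_i^2 * e^{RT} * Q(K_i)  -  (1/T) * (F/K0 - 1)^2
var = (2/T) * np.sum(dK / strikes**2 * np.exp(R*T) * quotes) - (1/T) * (F/K0 - 1)**2
print("sigma^2 =", round(var, 6), "  VIX-style vol =", round(100 * np.sqrt(var), 2))
```

With these made-up quotes the sketch returns a volatility in the mid-teens, and the final subtracted term is exactly the quadratic left over from the log expansion.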
\begin{document} \title{Wave-Particle Duality and the Hamilton-Jacobi Equation } \begin{center} {\bf Abstract} \end{center} \noindent The Hamilton-Jacobi equation of relativistic quantum mechanics is revisited. The equation is shown to permit solutions in the form of breathers (oscillating/spinning solitons), displaying simultaneous particle-like and wave-like behavior. The de Broglie wave thus acquires a clear {\it deterministic} meaning of a wave-like excitation of the classical action function.\\ \noindent The problem of quantization in terms of the breathing action function and the double-slit experiment are discussed. \noindent {\bf PACS}: 03.65-w, ~03.65Pm, ~ 03.65.-b \noindent {\bf Key words}: de Broglie waves; wave-particle duality; relativistic wave equation \setcounter{equation}{0} \noindent{\bf 1. Introduction}\\ \noindent A mathematical representation of the dual wave particle nature of matter remains one of the major challenges of quantum theory [1-7]. The present study is an attempt to resolve this issue through an appropriately revised Hamilton-Jacobi formalism.\\ \noindent Consider the relativistic Hamilton-Jacobi (HJ) equation for a particle in an electromagnetic field, \begin{eqnarray}\label{1} \left(1/c^2\right)\left(\partial S/\partial t+e U\right)^2-\left(\nabla S-e {\mathbf A}/c\right)^2=m^2c^2 \end{eqnarray} Here $U , {\mathbf A}$ are scalar and vector potentials of the field obeying the Lorentz calibration condition, \begin{eqnarray}\label{2} \left(1/c\right) \partial U/ \partial t + {\rm div} {\mathbf A} = 0 \end{eqnarray} \noindent The trajectory ${\mathbf x}(t)$ of the particle is governed by the equation [1], \begin{eqnarray}\label{3} \frac{d {\mathbf x}}{dt} = - \frac{c^2\left(\nabla S^{(0)}-e {\mathbf A}/c\right)}{\partial S^{(0)}/\partial t+e U} \quad , \end{eqnarray} \noindent where $S^{(0)}$ is an appropriate action function of the system. \noindent As is well known, the trajectories ${\mathbf x}(t)$ are {\it characteristics} of Eq.\:(1) [8], which are, in turn, the {\it traces} of small perturbations. Indeed, let's represent the action function as \begin{eqnarray}\label{4} S=S^{(0)}+s \end{eqnarray} \noindent where $s$ is a perturbation. Assuming $s$ to be small, Eq.\:(1) yields the linear equation, \begin{eqnarray}\label{5} \left(1/c^{2}\right) \left(\partial S^{(0)}/ \partial t + e U \right) \partial s / \partial t - \left(\nabla S^{(0)}-e {\mathbf A} /c \right) \nabla s = 0 \quad , \end{eqnarray} \noindent whose characteristics are identical to those of Eq.\:(1).\\ \noindent Let \begin{eqnarray}\label{6} {\mathbf f} \left({\mathbf x}, t \right) = {\mathbf c} \end{eqnarray} \noindent be a set of independent integrals of Eq.\:(3), $\mathbf c$ being a constant vector. Then the general solution of Eq.\:(5) may be written as, \begin{eqnarray}\label{7} s=s\left[{\mathbf f} \left({\mathbf x}, t \right) \right] \end{eqnarray} \noindent The perturbation $s$ is, therefore, advected along the trajectory ${\mathbf x} (t)$ with the velocity ${\mathbf v}=d{\mathbf x}/d t$. \noindent If the perturbation is localized enough it will mimic the motion of the particle. 
\noindent As an example, consider the case of a free particle $(U=0, {\mathbf A}=0)$, where \begin{eqnarray}\label{8} S^{(0)}=-Et+ {\mathbf p} {\mathbf \cdot}{\mathbf x} \qquad \left(E^{2}/c^{2}=p^{2}+m^{2}c^{2}\right) \end{eqnarray} \noindent Eq.\:(5) then yields, \begin{eqnarray}\label{9} s=s\left({\mathbf x} - {\mathbf v} t \right), \end{eqnarray} \noindent where \begin{eqnarray}\label{10} {\mathbf v}=c^2 {\mathbf p}/E \end{eqnarray} \noindent In the linear approximation the perturbation is advected without changing its shape. However, in a nonlinear description, due to the Huygens principle, the perturbation will gradually decay thereby implying stability (albeit nonlinear) of the regular solution $S^{(0)}$.\\ \noindent The question is whether it is possible to modify the HJ equation (1) so that the new equation would allow for localized, nonspreading and nondecaying perturbations (excitations) of the regular action function. Moreover, if the localized excitation breathes (oscillates/spins), one would end up with a {\it deterministic} model for a particle with quantum-like features. As we intend to show, this kind of behavior can be successfully modeled by the conventional quantum Hamilton-Jacobi (QHJ) equation, \begin{eqnarray}\label{11} \left(1/c^2\right)\left(\partial S/\partial t+e U\right)^2 - \left(\nabla S- e {\mathbf A} /c\right)^2 = m^2c^2+i \hbar \Box S \quad , \end{eqnarray} \noindent whose capacity, it transpires, has simply not been fully explored. \noindent {\bf 2. A free particle} \\ \noindent As is well known, the QHJ equation (11) is a transformed version of the linear Klein-Gordon (KG) equation \begin{eqnarray}\label{12} \left(1/c^2\right) \left(\partial /\partial t + i e U /\hbar \right)^2 \Psi - \left(\nabla - i e {\mathbf A}/c \hbar \right) ^2 \Psi + \left(mc/\hbar \right)^2 \Psi=0, \end{eqnarray} \noindent obtained through the substitution, \begin{eqnarray}\label{13} \Psi= \exp \left(i S / \hbar \right) \end{eqnarray} \noindent Consider first the case of a free particle $(U=0,{\mathbf A}= 0)$ where Eq.\:(12) becomes \begin{eqnarray}\label{14} \Box \Psi+(mc/\hbar)^2 \Psi=0 \qquad (\Box=(1/c^2) \partial^2/\partial t^2 - \nabla^2) \end{eqnarray} \noindent The KG equation (14) allows for a two-term spherically symmetric solution \begin{eqnarray}\label{15} \Psi= \exp [-i(mc^2/\hbar)t]+ \alpha \exp (-i \omega t)j_0(kr) \end{eqnarray} \noindent where \begin{eqnarray}\label{16} \omega = c \sqrt{k^2+(mc/\hbar)^2}, \end{eqnarray} \noindent $r=\sqrt{x^2+y^2+z^2}$, $\alpha $ is a free parameter, and \begin{eqnarray}\label{17} j_0(kr)= \sin (kr)/kr \end{eqnarray} \noindent is the zeroth-order spherical Bessel function. \noindent The second term in Eq.\:(15) is a standing spherically symmetric breather, $\left|\alpha \right|$ being its intensity. \noindent In terms of the action function $S$, by virtue of (13), Eq.\:(15) readily yields \begin{eqnarray}\label{18} S=mc^2t-i \hbar \ln \left\{1+ \alpha \exp [-i( \omega - mc^2/\hbar)t]j_0(kr)\right\} \end{eqnarray} \noindent Here the first term corresponds to the classical action function, $S^{(0)}=-mc^2t$, for a free particle in the rest system while the second term represents its localized excitation, oscillating and {\it nonspreading}. \noindent Let's set the frequency of oscillations in Eq.\:(18) in accordance with the de Broglie postulate that each particle at rest can be linked to an internal `clock' of frequency $mc^2/\hbar$. 
\noindent The frequency $\omega$ in Eq.\:(15) should therefore be specified as \begin{eqnarray}\label{19} \omega=2(mc^2/\hbar) \end{eqnarray} \noindent Hence, by virtue of Eq.\:(16), \begin{eqnarray}\label{20} k=\sqrt{3}(mc/\hbar) \end{eqnarray} \noindent Eq.\:(18) thus becomes, \begin{eqnarray}\label{21} S= - mc^2t-i\hbar \ln \left\{1+ \alpha \exp \left[-i \left(\frac{mc^2}{\hbar}\right) t \right] j_0 \left[\sqrt{3}\left(\frac{mc}{\hbar}\right)r \right]\right\} \end{eqnarray} \noindent Note that in Eq.\:(21) the frequency is not affected by the nonlinearity of the system, preserving its value irrespective of the breather intensity. \noindent Away from the breather's core ($r \gg\hbar/mc$ ), \begin{eqnarray}\label{22} S=-mc^2t-i\alpha\hbar\exp\left[-i\left(\frac{mc^2}{\hbar}\right)t\right] j_0 \left[\sqrt{3}\left(\frac{mc}{\hbar} \right) r \right] \end{eqnarray} \noindent The oscillations are therefore asymptotically {\it monochromatic}, again in accord with the de Broglie picture [1]. \noindent Similar to oscillations of an ideal pendulum, the breather (21) is stable to small perturbations. The stability follows from the linearity of the KG equation. Due to the linearity there is no coupling between the basic solution (15) and its perturbation, which also obeys the KG equation. Therefore, if the initial perturbation is small it will remain so indefinitely. The stability here is understood in a weak (non-asymptotic) sense. \\ \noindent Physically relevant action functions are quite specific {\it global} solutions defined over the entire time axis $-\infty<t<\infty$. Moreover, they may be multiple valued and bound in space. Such solutions cannot be obtained through a conventional initial-value problem unless suitable initial conditions are known in advance. \noindent Until now we have dealt with a particle at rest. For a particle moving at a constant velocity $v$ along, say, $x$ - axis, the corresponding expression for the action function is readily obtained from Eq.\:(21) through the Lorentz transformation, \begin{eqnarray} \label{23} t\to \frac{t-xv/c^2}{\sqrt{1-(v/c)^2}}\;, \qquad x\to\frac{ x-vt}{\sqrt{1-(v/c)^2}} \end{eqnarray} \noindent The transformed Bessel function $j_0$ will then mimic the motion of the classical particle while the transformed temporal factor $\exp\left[-i (mc^2/\hbar)t\right]$ will turn into the associated de Broglie wave, thereby demonstrating {\it simultaneous} particle-like and wave-like behavior. Moreover, unlike conventional quantum mechanics, here the modulated de Broglie wave acquires the clear {\it deterministic} meaning of a wave-like excitation of the action function, a complex-valued potential in configuration space. \noindent If, as is conventional, we associate the gradients $-\partial S/\partial t, \nabla S$ with the particle energy $E$ and momentum ${\bf p}$, then the Einstein relation $(1/c^2)E^2=p^2+m^2c^2$ appears to hold only far from the $\hbar/mc$ - wide breather's core, or {\it on average} over the entire breather. The correspondence with classical relativistic mechanics is therefore complied with. \noindent In addition to spherically symmetric breathers, Eq.\:(14) also permits asymmetric breathers, spinning around some axis. 
In the latter case the second term of Eq.(15) should be replaced by \begin{eqnarray}\label{24} \alpha\exp\left[-2i\left(\frac{mc^2}{\hbar}\right)t+in\phi\right]j_l\left[\sqrt{3}\left(\frac{mc}{\hbar}\right)r\right]P_l^n\left(\cos\theta\right), \end{eqnarray} where $j_l$, $P_l^n$ are high-order spherical Bessel functions and associated Legendre functions. It would be interesting to ascertain in what way (if any) the double-valued spin \!-$\tfrac{1}{2}$ breather may be linked to the Dirac wave function.\\ \noindent The next question is how to reproduce quantization directly in terms of the breathing action function. The geometrically simplest situation, where such an effect manifests itself, is the periodic motion of an otherwise free particle over a closed interval $0<x<d$. In this case the field-free version of Eq.(11) must be considered jointly with two boundary conditions, \begin{eqnarray}\label{25} \partial S(0,y,z,t)/\partial t=\partial S(d,y,z,t)/\partial t \:, \nonumber \\ \partial S(0,y,z,t)/\partial x=\partial S(d,y,z,t)/\partial x \end{eqnarray} \noindent Any classical action function for a free particle, \begin{eqnarray}\label{26} S=-Et+px \end{eqnarray} \noindent is clearly a solution of this problem. However, in the case of a breathing action function the situation proves to be different. Thanks to the boundary conditions (25), the moving breather interacts with itself, and this may well lead to its self-destruction unless some particular conditions are met. \noindent Consider first the simplest case of a particle at rest $(v=0)$. The pertinent solution is readily obtained by converting the problem for a finite interval into a problem for an infinite interval ($-\infty<x<\infty$) filled with a $d$-periodic train of standing breathers, assumed to be spherical for simplicity. The resulting action function then reads, \begin{eqnarray}\label{27} S=-mc^2t-i\hbar\ln\left\{ 1+\alpha\exp\left[-i\left(\frac{mc^2}{\hbar}\right)t\right]\sum_{k}\left( j_0\right)_{d,0}^{(k)}\right\}, \end{eqnarray} \noindent where \begin{eqnarray}\label{28} \left(j_0\right)_{d,0}^{(k)}=\frac{\sin \left[\sqrt{3}\left(mc/\hbar\right)r_{d,0}^{(k)}\right]}{\sqrt{3}\left(mc/\hbar\right)r_{d,0}^{(k)}}\;, \end{eqnarray} \begin{eqnarray}\label{29} r_{d,0}^{(k)}=\sqrt{(x-kd)^2+y^2+z^2} \qquad (k=0,\pm 1,\pm 2,...) \end{eqnarray} Here the second subscript stands for $v=0$. \noindent The action function for a moving particle $(v\ne0)$ is obtained from (27) (28) (29) through the Lorentz transformation (23), provided $d$ is replaced by $d/\sqrt{1-(v/c)^2}$. The latter step is needed to balance the relativistic contraction, and thereby to preserve the spatial period ($d$) of the system. 
The resulting action-function thus becomes, \begin{eqnarray}\label{30} S=-Et+px-i\hbar\ln \left\{1+\alpha\exp\left[i\left(\frac{-Et+px}{\hbar}\right)\right]\sum_{k}\left(j_0\right)_{d,v}^{(k)}\right\}, \end{eqnarray} where \begin{eqnarray}\label{31} \left(j_0\right)_{d,v}^{(k)}=\frac{\sin\left[\sqrt{3}\left(mc/\hbar\right)r_{d,v}^{(k)}\right]}{\sqrt{3}\left(mc/\hbar\right)r_{d,v}^{(k)}}\;, \end{eqnarray} \begin{eqnarray}\label{32} r_{d,v}^{(k)}=\sqrt{\left(\frac{x-vt-kd}{\sqrt{1-(v/c)^2}}\right)^2+y^2+z^2} \end{eqnarray} The spatial $2\pi\hbar/p$ - periodicity of $\exp\left[i\left(-Et+px\right)/\hbar\right]$ is compatible with the spatial $d$ - periodicity of $\sum_{k}\left(j_0\right)_{d,v}^{(k)}$ only if \begin{eqnarray}\label{33} dp=2\pi n\hbar \qquad (n=0,1,2,3,...)\;, \end{eqnarray} which recovers the familiar Bohr-Sommerfeld quantum condition. While the particle velocity is clearly subluminal its communication with boundary conditions is {\it superluminal}, which does not violate the Lorenz-invariance of the system.\\ \noindent The above solution (30)-(33) may be easily adapted for the problem of a particle shuttling between two perfectly reflecting walls, $x=0$ and $x=d/2$. To handle the double-valuedness of the pertinent action function the trajectory of the particle, following the Einstein-Keller topological approach [9,10], should be placed on the double-sheeted strip, $0<x<d/2$, $-\infty<y<\infty$, $z=\pm0$. Thereupon the problem reduces to the previous one. \noindent{\bf 3. A particle in a slowly varying field}\\ \noindent The de Broglie postulate holds at least for breathers exposed to slowly varying potentials, characterized by spatio-temporal scales much larger than $\hbar/mc, \hbar/mc^2$. \noindent Indeed, as may be readily shown, for slowly varying potentials, Eq.\:(21) becomes \begin{eqnarray}\label{34} S=-\left(mc^2+eU\right)t+ \frac{e}{c} \mathbf A \mathbf \cdot \mathbf x - i \hbar \ln \left\{1+ \alpha \exp \left[-i \left( \frac{mc^2}{\hbar} \right) t \right] j_0 \left[\sqrt{3}\left( \frac{mc}{\hbar} \right) r \right]\right\} \end{eqnarray} \noindent So, unlike the action function as a whole, the frequency of its oscillations in the rest system is not affected by the field. This invariance is untenable for the wave function $\Psi$ (13), which therefore cannot serve as a physically objective representation of the de Broglie clock.\\ \noindent For a particle {\it moving} in a slowly varying field the `fast' spatio-temporal coordinates $\mathbf x,t$ in Eq.\:(34) should be subjected to the Lorentz transformation, with the velocity $\mathbf v$ regarded as a slowly varying vector. \noindent The action function (34) and its Lorentz transformed version pertain to the interior of the breather (inner solution). Away from the breather's core the action function is described by the regular (breather-free) solution of the QJH equation $S^{(0)}$, involving only large spatio-temporal scales (outer solution). The inner solution is clearly affected by the outer solution through the velocity field $\mathbf v$, while the reverse influence does not take place, at least not for the leading order asymptotics. 
The simplest picture emerges in the nonrelativistic {\it semiclassical} limit [11] where the uniformly valid asymptotic solution may be represented as, \begin{eqnarray}\label{35} S=-mc^2t+S_{sc}-i\hbar \ln \left\{1+ \alpha \exp \left[-i \left(\frac{mc^2+mv^2/2}{\hbar}\right)t\right]\exp \left[i \left(\frac{\mathbf p \mathbf \cdot\mathbf x}{\hbar} \right)\right]j_0 \left[\sqrt{3}\left(\frac{mc}{\hbar}\right)r \right]\right\} \end{eqnarray} \noindent Here $S_{sc}$ is the semiclassical action function governed by the equation \begin{eqnarray}\label{36} \frac{\partial S_{sc}}{\partial t} + \frac{1}{2m} \left(\nabla S_{sc}-\frac{e}{c} \mathbf A\right)^{2}+eU=- \frac{i \hbar} {m} \nabla^2 S_c \quad , \end{eqnarray} \noindent where $S_c$ is the classical action function obeying the equation \begin{eqnarray}\label{37} \frac{\partial S_c}{\partial t} + \frac{1}{2m} \left(\nabla S_c - \frac{e}{c} \mathbf A \right)^{2}+eU=0 \end{eqnarray} The argument $r$ in $j_0\left[\sqrt{3}\left(mc/\hbar \right) r \right]$ is defined as \begin{eqnarray}\label{38} r= \left| \mathbf x - \mathbf x_p \left(t \right)\right|\quad , \end{eqnarray} where $\mathbf x_p\left(t\right)$ is the classical trajectory of the particle described by the equation \begin{eqnarray}\label{39} \frac{d \mathbf x _{p}}{dt} = \frac{1}{m} \mathbf p - \frac{e}{c} \mathbf A \qquad \left(\: \mathbf p = \nabla S_{c}\: \right ) \end{eqnarray} \noindent One therefore may readily see the connection between the new formalism and those of Schr$\ddot{\mathrm{o}}$dinger, Bohr and Sommerfeld. The Schr$\ddot{\mathrm{o}}$dinger formalism pertains to the outer solution which, under appropriate conditions, provides the data on the particle's range of energies, but says nothing about its trajectory. The information about the particle's trajectory comes from the inner solution, connecting de Broglie waves with the Bohr-Sommerfeld theory. Recall that for bound systems in the semiclassical limit (high quantum numbers), the Schr$\ddot{\mathrm{o}}$dinger and Bohr-Sommerfeld formalisms lead to identical energy spectra.\\ \noindent {\bf 4. The effect of boundaries}\\ The outlined formalism seems fully compatible with the double-slit experiment. Diffraction pictures obtained from electron beams of very low intensity [1,4] provide convincing evidence that the double-slit experiment is actually a one-particle effect where the particle communicates with distant boundaries that affect its trajectory. The breather passing through a slit `feels' whether the other slit is open or closed, and changes its trajectory accordingly. A geometrically simpler system, where one may readily observe the impact of boundaries, is a breather moving in a uniform {\it circular} motion through a thin toroidal duct, e.g. doughnut shaped carbon nanotube. In classical mechanics such a motion would be impossible without external forcing. In quantum mechanics however, the bending of the trajectory is a manifestation of the rotating nature of the pertinent KG solution. The problem may be easily tackled analytically in the intermediate limit when the width of the duct $d$ is comparable to the de Broglie wavelength, small compared to the torus centerline radius $R$, and large compared to the breather's width $\hbar / mc$. 
\noindent For the leading order asymptotics the resulting action function (inner solution), written in cylindrical co-ordinates $\rho,\phi,z$, reads, \begin{eqnarray} \label{40} S=-Et+p_{\phi} R\phi-i\hbar\ln \left\{1+\alpha\exp \left[ i\left(\frac{-Et+p_{\phi}R\phi}{\hbar}\right)\right]j_0 \left[\sqrt{3}\left( \frac {mc}{\hbar} \right)r \right]\right\}, \end{eqnarray} \noindent where \begin{eqnarray}\label{41} r=\sqrt{\left(R\phi-v_\phi t \right)^2 + \left(\rho-R\right)^2 + z^2} \quad ,\quad E=mc^2+mv^2_{\phi}/2 \quad,\quad p_\phi=mv_\phi=n \hbar/R \end{eqnarray} \noindent Here $v_\phi, p_\phi$ are the azimuthal velocity and momentum, $v_\phi \ll c \:, R \phi \sim \left| \rho -R\right|\sim \hbar/p_\phi, n\phi \sim 1,$ and $n$ is a large integer.\\ \noindent {\bf 5. Concluding remarks}\\ \noindent The proposed formalism is certainly related to de Broglie's double solution program [1]. Yet, unlike the latter, in the current model the breather is guided by a regular, generally nonwaving, action function $S^{(0)}$ rather than by a guiding wave solution (Sec 2). The guiding action function and its localized excitation (breather) are coupled through the {\it nonlinear} QHJ equation (11). \noindent The de Broglie's double solution program is adjacent to the de Broglie-Bohm pilot-wave theory [1,4,7], which should be seen as a degenerate double solution theory, where the breather has been replaced by the particle position governed by an appropriate guidance equation. For unbound systems the pilot-wave theory and the current formulation are likely to correspond. However, for bound systems the de Broglie-Bohm formalism, based on standing KG-Schr$\ddot{\mathrm{o}}$dinger waves, produce immobile particles [1,4], which contradicts the classical limit. In the present formulation, dealing with drifting nonspreading breathers and multiple valued action functions, this difficulty does not occur (Sec. 2).\\ \noindent At this stage it is difficult to see whether the amended QHJ formalism is indeed adequate enough to reproduce all the basic features of quantum-mechanical phenomenology. In any case, a few preliminary observations already show that a mathematical representation of unified wave-particle behavior is quite feasible, even within the framework of the conventional QHJ equation. \noindent {\bf Acknowledgments} \noindent The author wishes to thank Irina Brailovsky, Mark Azbel and Eugene Levich for interesting discussions, and Victor P. Maslov and Steven Weinberg for stimulating correspondence. \noindent These studies were supported in part by the Bauer-Neumann Chair in Applied Mathematics and Theoretical Mechanics, the US-Israel Binational Science Foundation (Grant 2006-151), and the Israel Science Foundation (Grant 32/09). \end{document}
\begin{document} \begin{titlepage} \vspace*{-2cm} \begin{centering} \huge{Acid zeta function and ajoint acid zeta function} \large {Jining Gao }\\ Department of Mathematics, Shanghai Jiaotong University, Shanghai, P. R. China Max-Planck Institute for Mathematics, Bonn, Germany. \begin{abstract} In this paper we set up theory of acid zeta function and ajoint acid zeta function , based on the theory, we point out a reason to doubt the truth of Riemann hypothesis and also as a consequence, we give out some new RH equivalences . \end{abstract} \end{centering} \end{titlepage} \pagebreak \def\lh{\hbox to 15pt{\vbox{\vskip 6pt\hrule width 6.5pt height 1pt} \kern -4.0pt\vrule height 8pt width 1pt\hfil}} \def\mbox{$\;\Box$}{\mbox{$\;\Box$}} \def\qed{\hbox{${\vcenter{\vbox{\hrule height 0.4pt\hbox{\vrule width 0.4pt height 6pt \kern5pt\vrule width 0.4pt}\hrule height 0.4pt}}}$}} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{definition}[theorem]{Definition} \newtheorem{corollary}[theorem]{Corollary} \newtheorem{proposition}[theorem]{Proposition} \newcommand{\bf Proof.\rm}{\bf Proof.\rm} \section{Introduction} The motivation of constructing acid zeta function is to study distribution of Riemann zeros. Let's take a look at how it comes up . First of all, we can consider Melling transformation of $N(T)$, the distribution of non-trivial Riemann zeros and it's a meromorphic function so called adjoint acid zeta function on some half plane. Of course, we are eager to get full information of this function,but unfortunately, we can not know it further without RH. That's why we have to consider another function so called acid zeta function which can be explicitly figured out . Under the RH , Acid zeta function is equal to ajoint acid zeta function. With some deep discussion on acid zeta function, we will see maginitude of acid zeta function increases along vertical direction with at most polynomial increasing, when RH is true but explicit formula of acid zeta function contains an isolated term which has expotient increaseness along vertical direction and we have reseason to doubt the truth of RH.This paper has three sections. In the first section, we will introduce acid zeta function and study it's properties in detail including analytic continuation ,explicit formula. In the second section we will introduce ajoint acid zeta function and give out a formula so called "half explicit" formula . In the last section we set up a remarkable formula which reflects relationship between the acid zeta function and the adjoint acid zeta function, as a consequence, we prove a "numberical" RH equivalence and point out infinite number of "numerical" equivalences can be obtained in same way. Our main tools in this paper are one variable complex analysis and some basic Riemann zeta function theory . Most theorems of this paper are obtained through many complicated residues compution, to avoid over complicated version, we add an appendix to help reader to study more easily. \section{Acid zeta function} In this section we will define and study properties of acid zeta function. {\bf Definition.} Let $\zeta$ be the Riemann zeta function,$\rho_i$ be all nontrival Riemann zeros which satisfy $Im \rho_{i}> 0$.Suppose $Re s>1$, we define a function of $s$ as follows: \begin{eqnarray} \zeta_{a}(s)=\sum_{m}\frac{1}{(\frac{\rho_{m}-\frac{1}{2}}{i})^{s}}\nonumber \end{eqnarray} so called acid zeta function. 
Obviously, $\zeta_{a}(s)$ is analytic in the region $Re s>1$,in order to extend domain of this function to the region $Re s<1$ as a meromorphic function , we need following theorem. \begin{theorem} When $1<Res<2$, we have that \begin{eqnarray} \zeta_{a}(s)= \frac{sin\frac{\pi}{2}s}{(s-1)\pi}\int_{\frac{1}{2}}^{+\infty}\frac{\psi(t+\frac{1}{2})}{t^{s-1}}dt\nonumber \end{eqnarray} Where $\psi(t)=\frac{d}{dt}(\frac{\xi^{'}}{\xi}(t))$ \end{theorem} \begin{proof} Let $\Gamma_{r}^T$ be the positively oriented contour consisting of following segments $\Gamma_{1}^{T}=[1.5, 1.5+Ti]$, $\Gamma_{2}^{T}=[-0.5, -0.5+Ti]$,$\Gamma_{h}^{T}=[-0.5+Ti, 1.5+Ti]$,$\Gamma_{3}=[-0.5, 0.5-r]\cup \Gamma_{r} \cup [0.5+r, 1.5]$,where $\Gamma_{r}$ is a upper half circle with radius $r$ and centered at $z=0.5$. Set $$\zeta_{a}^{T}(s)=\sum_{0<Im \rho_{m}<T}\frac{1}{(\frac{\rho_{m}-\frac{1}{2}}{i})^s}$$ We have that: \begin{eqnarray} \zeta_{a}^{T}(s)=\frac{1}{2\pi i}\int_{\Gamma_{r}^T}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz =\frac{1}{2\pi i}(\int_{\Gamma_{1}^T}+\int_{\Gamma_{2}^T}+\int_{\Gamma_{h}^T}+\int_{\Gamma_{3}})\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz\nonumber \end{eqnarray} we denote above four integrals by $\Sigma_1$,$\Sigma_2$,$\Sigma_3$,$\Sigma_4$ respectively. let's first figure out $\Sigma_3$, by the appendix .. $$\left|\Sigma_{3}\right|\leq C_{1}\int_{-0.5}^{1.5}\frac{1}{\left|\frac{\sigma+iT-\frac{1}{2}}{i}\right|^{Res}}\left|\frac{\xi^{'}}{\xi}(\sigma+iT)\right|d\sigma$$ $$=O(T^{-Res}lnT)$$ Thus $$lim_{T\rightarrow 0}\Sigma_{3}=0$$ Since $\xi(z)=\xi(1-z)$, $\frac{\xi^{'}}{\xi}(z)=-\frac{\xi^{'}}{\xi}(1-z)$ \begin{eqnarray} \int_{\Gamma_{2}^T}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz= \int_{J_{2}^T}\frac{1}{(\frac{\frac{1}{2}-z'}{i})^s}\frac{\xi^{'}}{\xi}(z')dz'\nonumber \end{eqnarray} Where $J_{2}^T$ is the straight line from $z'=15-iT$ to $z'=1.5$ and $z'=1-z$ Since $\left|\frac{\xi^{'}}{\xi}(z)\right|=O(lnt) $, we can deform the path $\Gamma_{1}^T$ and $J_{2}^T$ to the real axis $[1.5,+\infty]$ when we let $T$ approaches to infinity, we get \begin{eqnarray} lim_{T\rightarrow \infty}\int_{\Gamma_{2}^T}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz=\int_{1.5}^{+\infty}\frac{1}{(\frac{t-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(t)dt\nonumber \end{eqnarray} and \begin{eqnarray} lim_{T\rightarrow \infty}\int_{J_{2}^T}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz=-\int_{1.5}^{+\infty}\frac{1}{(\frac{\frac{1}{2}-t}{i})^s}\frac{\xi^{'}}{\xi}(t)dt\nonumber =-(-1)^{-s}\int_{1.5}^{+\infty}\frac{1}{(\frac{t-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(t)dt\nonumber \end{eqnarray} To summarize, we have \begin{eqnarray} \zeta_{a}(s)=\frac{sin\frac{\pi}{2}s}{\pi}\int_{1.5}^{+\infty}\frac{1}{(t-\frac{1}{2})^s}\frac{\xi^{'}}{\xi}(t)dt +\frac{1}{2\pi i}\int_{\Gamma_{3}}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz\nonumber\\ =\frac{\sin\frac{\pi}{2}s}{\pi}\int_{0.5+r}^{+\infty}\frac{1}{(t-\frac{1}{2})^s}\frac{\xi^{'}}{\xi}(t)dt +\frac{1}{2\pi i}\int_{\Gamma_{r}}\frac{1}{(\frac{z-\frac{1}{2}}{i})^s}\frac{\xi^{'}}{\xi}(z)dz\label{a} \end{eqnarray} We can not simply let $r$ approach to zero in \ref{a} because the singuarity at $z=0.5$ will cause divergent result. 
To get through such divergence, we use integration by parts in \ref{a},we have \begin{eqnarray} \zeta_{a}(s)=\frac{sin\frac{\pi}{2}s}{\pi(s-1)}\int_{0.5+r}^{+\infty}\frac{\psi(t)}{(t-\frac{1}{2})^{s-1}}dt\nonumber +\frac{i^s}{2\pi i(s-1)}\int_{\Gamma_{r}}\frac{\psi(z)}{(\frac{z-\frac{1}{2}}{i})^{s-1}}dz \end{eqnarray} Where $\psi(z)=\frac{d}{dz}(\frac{\xi^{'}}{\xi}(z))$ When $1<Res<2$, $$\left|\int_{\Gamma_{r}}\frac{\psi(z)}{(\frac{z-\frac{1}{2}}{i})^{s-1}}dz\right|=O(r^{2-Res})$$ Let $r$ approaches to zero in \ref{a}, we obtain \begin{eqnarray} \zeta_{a}(s)=\frac{sin\frac{\pi}{2}s}{\pi(s-1)}\int_{0.5}^{+\infty}\frac{\psi(t)}{(t-\frac{1}{2})^{s-1}}dt =\frac{sin\frac{\pi}{2}s}{\pi(s-1)}\int_{0.5}^{+\infty}\frac{\psi(t+\frac{1}{2})}{t^{s-1}}dt\nonumber \end{eqnarray} When $1<Res<2$ \end{proof} To use above theorem to get analytic continuation, we are going to evaluate $\zeta_{a}(s)$ explicitly. First of all, we write $\psi(s)$ as sum of three functions as follows: \begin{eqnarray} \psi(s)=-\frac{1}{(s-1)^2}+J(s)+ \frac{d}{ds}(\frac{\xi^{'}}{\xi}(s))\nonumber \end{eqnarray} Where $$J(s)=-\frac{1}{s^2}+\frac{1}{2}\frac{d}{ds}(\frac{\Gamma'}{\Gamma}(s))=\sum_{n=1}^{\infty}\frac{1}{(s+2n)^2}$$ When $1<Res<2$, we have \begin{eqnarray} \int_{0}^{\infty}\frac{J(t+\frac{1}{2})}{t^{s-1}}dt=\sum_{n=1}^{\infty}\int_{0}^{\infty}\frac{1}{(t+2n+\frac{1}{2})^{2}t^{s-1}}dt\nonumber \end{eqnarray} To carry on integration by parts for every term, we assume tempoarily that $Res<1$ and get \begin{eqnarray} \int_{0}^{\infty}\frac{1}{(t+2n-\frac{1}{2})t^{s-1}}dt=(1-s)\int_{0}^{\infty}\frac{1}{(t+2n+\frac{1}{2})t^{s}}dt =\frac{\pi(s-1)}{sin\pi s(2n+\frac{1}{2})^s}\label{h} \end{eqnarray} Thus we have \begin{eqnarray} \int_{0}^{\infty}\frac{J(t+\frac{1}{2})}{t^{s-1}}dt=\frac{\pi(s-1)}{sin\pi z}\sum_{n=1}^{\infty}\frac{1}{(2n+\frac{1}{2})^s}\nonumber =\frac{\pi(s-1)}{2^{s}sin\pi s}\zeta(s,\frac{5}{4})\nonumber \end{eqnarray} Where $\zeta(s,q)$ is Hurwitz zeta function. 
To evaluate integral $$I=\frac{1}{s-1}\int_{\Gamma_{r}'}\frac{1}{(z-\frac{1}{2})^{2}z^{s-1}}dz$$ Where the path $$\Gamma_{r}'=[0,\frac{1}{2}-r]\cup \Gamma_{r}\cup[\frac{1}{2}+r,+\infty)$$ using integration by parts, $$I=-\int_{\Gamma_{r}'}\frac{1}{(z-\frac{1}{2})z^{s}}dz$$ Let's deform $\Gamma_{r}'$ to immaginary axis $[0,i\infty)$, then $$I=\frac{-1}{i^{s-1}}\int_{0}^{\infty}\frac{1}{(iy-\frac{1}{2})y^s}dy $$ Set $J=\int_{0}^{\infty}\frac{1}{(iy-\frac{1}{2})y^s}dy$, we have \begin{eqnarray} J=-\frac{1}{2}\int_{0}^{\infty}\frac{y^{-s}}{y^{2}+\frac{1}{4}}dy-i\int_{0}^{\infty}\frac{y^{1-s}}{y^{2}+\frac{1}{4}}dy\label{b} \end{eqnarray} By the substituation of $y=\sqrt{t}$ in \ref{b} and according appendix, we obtain \begin{eqnarray} J=-\frac{1}{4}\int_{0}^{\infty}\frac{t^{\frac{-s-1}{2}}}{t+\frac{1}{4}}dt-\frac{i}{2}\int_{0}^{\infty}\frac{t^{\frac{-s}{2}}}{t+\frac{1}{4}}dt\nonumber =\frac{\pi 2^{s-1}}{cos\frac{\pi}{2}s}+i\frac{\pi 2^{s-1}}{sin\frac{\pi}{2}s}\nonumber =\frac{i\pi 2^s}{sin\pi s}e^{-\frac{i\pi}{2}s}\nonumber \end{eqnarray} To summarize, when $1<Res<2,$the final integral representation of $\zeta_{a}(s)$ can be described as follows: \begin{eqnarray} \zeta_{a}(s)=\frac{2^{s-1}}{cos(\frac{\pi}{2}s)e^{i\pi s}}-\frac{\zeta(s,\frac{5}{4})}{cos(\frac{\pi}{2}s)2^{s+1}} +\frac{sin\frac{\pi}{2}s}{(s-1)\pi}\int_{\Gamma_{r}'}\frac{J_(z+\frac{1}{2})}{z^{s-1}}dz \label{c} \end{eqnarray} Where $J_{2}(s)=\frac{d}{ds}(\frac{\zeta'}{\zeta}(s))$ Because $J_{2}(t)=O(2^{-t})$, the domain of third term of $\zeta_{a}(s)$ can be extended to $-\infty<Res<2$ and furthermore, we have following theorem: \begin{theorem} Acid zeta function $\zeta_{a}(s)$ is a meromorphic function of whole complex plane and has an explicit formula as \ref{c} when $-\infty<Res<2$ \end{theorem} \section{Adjoint acid zeta function} To relate acid zeta function to the Riemann hyothesis, we need to define another function so called adjoint acid zeta function as follows: {\bf Definition.}Let $N(T)$ be distribution function of untrival Riemann zeros whose imaginary part are greater than zero, set $\zeta_{a}^{*}(s)=\int_{0}^{\infty}t^{-s}dN(t)$ , where $Res>1$. We call $\zeta_{a}^{*}(s)$ adjoint acid zeta function. {\bf Remark.} Obviously, if RH is true, acid zeta function is equal to adjoint acid zeta function and vise versa. Comparing explicit formula of acid zeta function with adjoint acid zeta function, we will find when $1<Res<2$ acid zeta function contains a term which is increasing exponently along vertical direction and ajoint acid zeta function is bounded along vertical direction, such difference is impressive and will make us doubt the truth of RH seriously though rigious disproof haven't come up yet. In this section, we will dicuss analytic continuation as we did for acid zeta function and get a ''half explicit" formula of ajoint acid zeta function, first of all, we need following lemma as prepartion. \begin{lemma} Let $f(z)$ be a holomorphic function in the region $Res>\frac{1}{2}-\delta$($\delta$ is some positive number) and $\left|f''(z)\right|=O(\frac{1}{\left|z\right|})$, $f(\bar{z})=\bar{f}(z)$. 
Set $M(t)=Imf(\frac{1}{2}+it)$,we have that \begin{eqnarray} \int_{0}^{\infty}\frac{M''(t)}{t^{s-1}}dt=-sin\frac{\pi}{2}s\int_{0}^{\infty}\frac{f''(\frac{1}{2}+t)}{t^{s-1}}dt\nonumber \end{eqnarray} Where $1<Res<2$ \end{lemma} \begin{proof} Since $f(\bar{z})=\bar{f}(z)$, $$M(t)=Imf(\frac{1}{2}+it)=\frac{1}{2i}[f(\frac{1}{2}+it)-f(\frac{1}{2}-it)]$$ and $$M''(t)=\frac{i}{2}[f''(\frac{1}{2}+it)-f''(\frac{1}{2}-it)]$$ Thus, \begin{eqnarray} \int_{0}^{\infty}\frac{M"(t)}{t^{s-1}}dt=\frac{i}{2}(\int_{0}^{\infty}\frac{f"(\frac{1}{2}+it)}{t^{s-1}}dt\nonumber -\int_{0}^{\infty}\frac{f''(\frac{1}{2}-it)}{t^{s-1}}dt)\label{d} \end{eqnarray} We just compute the first integral of \ref{d} and the second one can be dealed with similarly. Set $z=it$ we get \begin{eqnarray} \int_{0}^{\infty}\frac{f''(\frac{1}{2}+it)}{t^{s-1}}dt=i^{s-2}\int_{0}^{i\infty}\frac{f''(\frac{1}{2}+z)}{z^{s-1}}dz\nonumber \end{eqnarray} Deforming path back to real axis, we get that it's equal to $$-i^{s}\int_{0}^{i\infty}\frac{f"(\frac{1}{2}+t)}{t^{s-1}}dt$$ Similarly, \begin{eqnarray} \int_{0}^{\infty}\frac{f''(\frac{1}{2}-it)}{t^{s-1}}dt=-i^{-s}\int_{0}^{i\infty}\frac{f''(\frac{1}{2}+t)}{t^{s-1}}dt\nonumber \end{eqnarray} That follows the lemma \end{proof} Since $N(t)=\frac{1}{\pi}Img(\frac{1}{2}+it)$, where $g(s)=arg(s-1)\pi^{-\frac{s}{2}}\Gamma(\frac{s}{2}+1)\zeta(s)$ we have $$\zeta_{a}^{*}(s)=\frac{1}{\pi}\int_{1}^{\infty}t^{-s}darg(-\frac{1}{2}+it)+\int_{1}^{\infty}t^{-s}dM(t)+ \int_{1}^{\infty}t^{-s}dS_{0}(t)$$ Here $$M(t)=\frac{1}{\pi}arg\pi^{-\frac{1}{4}+\frac{it}{2}}\Gamma(-\frac{5}{4}+\frac{it}{2})$$ and $$S_{0}(t)=\frac{1}{\pi}arg\zeta(\frac{1}{2}+it)$$ Using integration by parts \begin{eqnarray} \int_{1}^{\infty}t^{-s}dM(t)=\frac{M'(1)}{s-1}+\frac{1}{s-1}\int_{1}^{\infty}t^{1-s}M''(t)dt\nonumber \end{eqnarray} Since $M''(t)=O(\frac{1}{t})$, assuming $1<Res<2$,we have $$\int_{1}^{\infty}t^{1-s}M''(t)dt=\int_{0}^{\infty}t^{1-s}M''(t)dt-\int_{0}^{1}t^{1-s}M''(t)dt$$ let $$f(z)=\pi^{-\frac{z}{2}-1}\Gamma(\frac{z}{2}+1)$$ by the lemma 3 and \ref{h}, we get \begin{eqnarray} \frac{1}{s-1}\int_{0}^{\infty}t^{1-s}M''(t)dt=\frac{-sin\frac{\pi}{2}s}{\pi(s-1)}\int_{0}^{\infty}t^{1-s}f''(\frac{1}{2}+t)dt=-\frac{\zeta(s,\frac{5}{4})}{2^{s+1}cos\frac{\pi}{2}s}\nonumber \end{eqnarray} and \begin{eqnarray} \frac{1}{\pi}\int_{0}^{\infty}t^{-s}darg(-\frac{1}{2}+it)=-\frac{1}{2\pi}\int_{0}^{\infty}\frac{1}{t^{s}(\frac{1}{4}+t^2)}dt\nonumber\\ =-\frac{1}{4\pi}\int_{0}^{\infty}\frac{1}{t'^{\frac{s+1}{2}}(\frac{1}{4}+t')}dt'\nonumber =\frac{2^{s-1}}{cos\frac{\pi}{2}s}\nonumber \end{eqnarray} We have used substituation $t'=t^2$ in the last two steps,thus $$\frac{1}{\pi}\int_{1}^{\infty}t^{-s}darg(-\frac{1}{2}+it)=\frac{2^{s-1}}{cos\frac{\pi}{2}s}-\frac{1}{\pi}\int_{0}^{1}t^{-s}darg(-\frac{1}{2}+it)$$ Simiarly, \begin{eqnarray} \int_{1}^{\infty}t^{-s}dS_{0}(t)=-S_{0}(1)+s\int_{1}^{\infty}\frac{S_{0}(t)}{t^{s+1}}dt\nonumber =-S_{0}(1)+s\int_{1}^{\infty}t^{-s-1}dS_{1}(t)\nonumber\\ =-S_{0}(1)-sS_{1}(1)+s(s+1)\int_{1}^{\infty}\frac{S_{1}(t)}{t^{s+2}}dt\nonumber \end{eqnarray} Where $S_{1}(T)=\int_{0}^{T}S_{0}(t)dt+c_1$ and $c_1$ is a constant , $S_{1}(t)=O(logt)$\cite{L}. 
Finally, we get "half explicit" formula for $\zeta_{a}^{*}(s)$ when $Res>-1$ as followings: \begin{theorem} When $Res>-1$, \begin{eqnarray} \zeta_{a}^{*}(s)=\frac{2^{s-1}}{cos\frac{\pi}{2}s}-\frac{\zeta(s,\frac{5}{4})}{2^{s+1}cos\frac{\pi}{2}s}+s(s+1)\int_{1}^{\infty}\frac{S_{1}(t)}{t^{s+2}}dt+R(s)\nonumber \end{eqnarray} Where $$R(s)=-\int_{0}^{1}t^{s}d(M(t)+arg(-\frac{1}{2}+it))-S_{0}(1)-sS_{1}(1)$$ \end{theorem} \section{Relationship between acid zeta function and ajoint acid zeta function} In this section, we will present a theorem which connects acid zeta function and ajoint acid zeta function, as a consequence, we can make "infinite number of numerical" equivalences of Riemann hypothesis. \begin{theorem} When $Res>-1$, we have that \begin{eqnarray} \zeta_{a}(s)=\zeta_{a}^{*}(s)+\sum_{n,m=1}^{\infty}\frac{(-1)^{n}s(s+1)\cdots(s+2n-1)}{(2n)!}\frac{(\sigma_{m}-\frac{1}{2})^{2n}}{\lambda_{m}^{s+2n}}\label{e} \end{eqnarray} Where $\rho_{m}=\sigma_{m}+i\lambda_{m}$ run over all off critical line zeros of Riemann zeta function ,which have positive imaginary part. \end{theorem} \begin{proof} When $Res>1$ \begin{eqnarray} \zeta_{a}(s)=\sum_{m}\frac{1}{(\frac{\rho_{m}-\frac{1}{2}}{i})^s} =\sum_{m}\frac{1}{\lambda_{m}}(1+\frac{i(\frac{1}{2}-\sigma_{m})}{\lambda_{m}})^{-s}\nonumber\\ =\sum_{m}\frac{1}{\lambda_{m}^{s}}[1+\sum_{k=1}^{\infty}\frac{((-s)(-s-1)\cdots(-s-k+1)}{k!}(\frac{i( \frac{1}{2}-\sigma_{m})}{\lambda_{m}})^{k}\nonumber]\\ =\sum_{m}\frac{1}{\lambda_{m}^{s}}[1+\sum_{n=1}^{\infty}\frac{(-i)^{k}(-s)(-s-1)\cdots(-s-k+1)}{k!}(\frac{( \frac{1}{2}-\sigma_{m})}{\lambda_{m}})^{k}]\nonumber\\ =\sum_{m}\frac{1}{\lambda_{m}^{s}}+\sum_{n,m=1}^{\infty}\frac{(-1)^{n}s(s+1)\cdots(s+2n-1)}{(2n)!}\frac{(\sigma_{m}-\frac{1}{2})^{2n}}{\lambda_{m}^{s+2n}}\nonumber \end{eqnarray} The last step is due to appearance in pair for $\rho_m$ and $1-\rho_m$. It's easy to find that above formula hold for $Res>-1$ \end{proof} With above formula, we can get \begin{corollary} RH is true if and only if $$\frac{d}{ds}\zeta_{a}(s)|_{s=0}=\frac{d}{ds}\zeta_{a}^{*}(s)|_{s=0}$$ \end{corollary} \begin{proof} Taking derivative on both sides of \ref{e}and let $s=0$,we have \begin{eqnarray} \frac{d}{ds}\zeta_{a}(s)|_{s=0}= \frac{d}{ds}\zeta_{a}^{*}(s)|_{s=0}-\frac{1}{2}\sum_{m=1}^{\infty}ln[1+(\frac{\sigma_m-\frac{1}{2}}{\lambda_m})^2]\nonumber \end{eqnarray} \end{proof} which immediately follows our corollary. {\bf Remark.} It's interesting to figure out $\zeta_{a}'(0)$ and the special value can be respresented in term of Euler constant, our equivalence look very close to the Volchkov criteria \cite{V} and comparison will also be interesting ,besides we can prove RH is true iff there exists a positive integer $n>1$ such that $\zeta_{a}(n)=\zeta_{a}^{*}(n)$,in other words, we get infinite number of RH equivalences. {\bf Appendix} When $-1<Res<0$, \begin{eqnarray} \int_{0}^{\infty}\frac{x^s}{x+a}dx=-\frac{\pi}{sin\pi s}a^s\nonumber \end{eqnarray} Where $a>0$ \end{document}
builtonfacts

How often does the sun emit 1 TeV photons?
By mspringer on November 27, 2013.

I had an interesting question posed to me recently: how frequently does the sun emit photons with an energy greater than 1 TeV? All of you know about the experiments going on at the LHC, where particles are accelerated to an energy which is equivalent to an electron being accelerated through a potential difference of trillions of volts (which is what a "trillion electron volts" - a TeV - is). During the ensuing collisions between particles, high-energy TeV photons are produced.

Of course everything is emitting light in the form of blackbody radiation all the time. Human beings emit mostly long-wavelength infrared, hot stoves emit shorter-wavelength infrared and red light, hotter objects like the sun emit across a broad range of wavelengths which include the entire visible spectrum. Here, from Wikipedia, is the spectrum of the sun:

[Figure: Solar spectrum]

This graph is given in terms of wavelength. For light, energy corresponds to frequency, and frequency is inversely proportional to wavelength. Longer wavelength, lower frequency. A TeV is a gigantic amount of energy, which corresponds to a gigantically high frequency and thus a wavelength that would be pegged way the heck off the left end of this chart, almost but not quite exactly at 0 on the x axis. Let me reproduce the same blackbody as the Wikipedia diagram, but cast in terms of frequency:

[Figure: Same spectrum, in terms of frequency]

Here the x-axis is in hertz, and the y-axis is spectral irradiance in terms of watts per square meter per hertz. (That makes a difference - it's not just the Wikipedia graph with the x-axis relabeled, although it gives the same watts-per-square-meter value when integrated over the same bandwidth region.)

Ok, so what's the frequency of a 1 TeV photon? Well, photon energy is given by E = hf, where h is Planck's constant and f is the frequency. Plugging in, a 1 TeV photon has a frequency of about 2.4 x 10^26 Hz. That's way off the right end of the graph. Thus you might think the answer is zero - the sun never emits such high-energy photons. But then again that tail never quite reaches zero, and there's a lot of TeVs per watt, and there's a lot of square meters on the sun... So to find out more exactly, let's take a look at the actual equation which gave us that chart: Planck's law for blackbody radiation:

$latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} \frac{1}{e^\frac{h f}{kT} - 1}&s=2$

So you'd integrate that from 2.4 x 10^26 Hz to infinity if you wanted to find how many watts per square meter the sun emits at those huge frequencies. (Here k is Boltzmann's constant, which is effectively the scale factor that converts from temperature to energy.) That's kind of an ugly integral though, but we can simplify it. That $latex e^\frac{h f}{kT}$ term? It's indescribably big. The hf term is 1 TeV, and the kT is about 0.45 eV (which is a "typical" photon energy emitted by the sun), so the exponential is on the order of e^2,200,000,000,000. (The number of particles in the observable universe is maybe 10^80 or so, for comparison.) Subtracting 1 from that gigantic number is absolutely meaningless, so we can drop it and end up with:

$latex \displaystyle B(f) = \frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}}&s=2$

which means the answer in watts per square meter is

$latex \displaystyle I = \int_{a}^{\infty}\frac{ 2 h f^3}{c^2} e^{-\frac{h f}{kT}} \, df&s=2$

where "a" is the 1 TeV lower cutoff (in Hz).
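Before grinding through the integral analytically, it is easy to sanity-check the sizes involved numerically. The sketch below is not from the original post: it assumes an effective photospheric temperature of 5778 K (the post rounds kT to about 0.45 eV, so its exponent comes out slightly larger than the value computed here), and it uses mpmath because the exponent is far beyond what ordinary floating point can represent.

```python
# Order-of-magnitude check of the 1 TeV blackbody tail (assumed values, see above).
from mpmath import mpf, exp

h  = mpf('6.62607015e-34')    # Planck constant, J s
kB = mpf('1.380649e-23')      # Boltzmann constant, J/K
eV = mpf('1.602176634e-19')   # joules per electron volt
T  = mpf('5778')              # assumed effective solar surface temperature, K

a = mpf('1e12') * eV / h      # 1 TeV cutoff frequency: about 2.4e26 Hz
x = h * a / (kB * T)          # the exponent h*f/(k*T): about 2e12
print(f"cutoff frequency ~ {float(a):.2e} Hz, exponent hf/kT ~ {float(x):.2e}")

# Even an absurdly generous prefactor -- say 1e93 W/m^2 of "stuff" times the
# ~1e18 m^2 of solar surface quoted below -- cannot dent e^(-hf/kT):
print(exp(-x) * mpf('1e93') * mpf('1e18'))   # roughly 10^(-8.7e11): zero for any practical purpose
```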
That exponential term now has a negative sign, so it's on the order of e^-2,200,000,000,000. I'd say this is a safe place to stop and say "The answer is zero, the sun has never and will never emit photons of that energy through blackbody processes." But let's press on just to be safe. That expression above can be integrated pretty straightforwardly. I let Mathematica do it for me:

$latex \displaystyle I = e^{-\frac{a h}{k T}}\frac{2 k T (a^3 h^3+3 a^2 h^2 k T+6 a h k^2 T^2+6 k^3 T^3 )}{c^2 h^3}&s=2$

So that's an exponential term multiplying a bunch of stuff. That bunch of stuff is a big number, because "a" is a big number and h is a tiny number in the denominator. I plug in the numbers and get that the stuff term is about 10^93 watts per square meter, and you have to multiply that by the 10^18 or so square meters on the surface of the sun. That's a very big number, but it's not even in the same sport as that e^-2,200,000,000,000 term. Multiplying those terms together doesn't even dent the e^-2,200,000,000,000 term. It's still zero for all practical purposes.

Which is a lot of work to say that our initial intuition was correct. 1 TeV from blackbody processes in the sun? Forget it. Now blackbody processes aren't the only things going on in the sun. I don't think there are too many TeV scale processes of other types, but stars can be weird things sometimes. I'd be curious to know if astrophysicists would know of other processes which might bump the TeV rate to something higher.

[Personal note: I've been absent on ScienceBlogs since April, I think. Why? Writing my dissertation, defending, and summer interning. The upshot of all that is those things are done and I'm now Dr. Springer, and I have a potentially permanent position lined up next year. And now I might even have time to write some more!]

Worked Problems

Congratulations Dr Springer.
By Acleron (not verified) on 27 Nov 2013 #permalink

The obvious follow up question: What is the highest energy photon likely to be emitted by black body radiation by the sun this year?
By MobiusKlein (not verified) on 28 Nov 2013 #permalink

Congrats Dr Springer, and great post! Integrals, how lovely :)
By wereatheist (not verified) on 28 Nov 2013 #permalink

By Sue VanHattum (not verified) on 28 Nov 2013 #permalink

Interesting deduction. However the mean energy of particles or photons at the very center of the sun is around 1000 eV. Even there the TeV domain looks to be impossible!
By Anandaram Mandyam (not verified) on 01 Dec 2013 #permalink

Hi, very interesting and nice post. I'm quite sure there are >= 1 TeV photons accelerated by black holes. Anyway, my question is: why are you looking for 1 TeV photons exactly? Is there any measurement reporting more high-energy photons than expected? Another point is that I'm not really sure that the Sun's atmosphere is transparent to 1 TeV photons, so maybe they are produced, but their energy gets "thermalized". I don't know for sure, just saying.
By Riccardo Di Sipio (not verified) on 15 Dec 2013 #permalink
What is the difference between a stochastic and a deterministic policy? In reinforcement learning, there are the concepts of stochastic (or probabilistic) and deterministic policies. What is the difference between them? reinforcement-learning comparison policy deterministic-policy stochastic-policy nbronbro A deterministic policy is a function of the form $\pi_{\mathbb{d}}: S \rightarrow A$, that is, a function from the set of states of the environment, $S$, to the set of actions, $A$. The subscript $_{\mathbb{d}}$ only indicates that this is a ${\mathbb{d}}$eterministic policy. For example, in a grid world, the set of states of the environment, $S$, is composed of each cell of the grid, and the set of actions, $A$, is composed of the actions "left", "right", "up" and "down". Given a state $s \in S$, $\pi(s)$ is, with probability $1$, always the same action (e.g. "up"), unless the policy changes. A stochastic policy is often represented as a family of conditional probability distributions, $\pi_{\mathbb{s}}(A \mid S)$, from the set of states, $S$, to the set of actions, $A$. A probability distribution is a function that assigns a probability for each event (in this case, the events are actions in certain states) and such that the sum of all the probabilities is $1$. A stochastic policy is a family and not just one conditional probability distribution because, for a fixed state $s \in S$, $\pi_{\mathbb{s}}(A \mid S = s)$ is a possibly distinct conditional probability distribution. In other words, $\pi_{\mathbb{s}}(A \mid S) = \{ \pi_{\mathbb{s}}(A \mid S = s_1), \dots, \pi_{\mathbb{s}}(A \mid S = s_{|S|})\}$, where $\pi_{\mathbb{s}}(A \mid S = s)$ is a conditional probability distribution over actions given that the state is $s \in S$ and $|S|$ is the size of the set of states of the environment. Often, in the reinforcement learning context, a stochastic policy is misleadingly denoted by $\pi_{\mathbb{s}}(a \mid s)$, where $a \in A$ and $s \in S$ are respectively a specific action and state, so $\pi_{\mathbb{s}}(a \mid s)$ is just a number and not a conditional probability distribution. A single conditional probability distribution can be denoted by $\pi_{\mathbb{s}}(A \mid S = s)$, for some fixed state $s \in S$. However, $\pi_{\mathbb{s}}(a \mid s)$ can also denote a family of conditional probability distributions, that is, $\pi_{\mathbb{s}}(A \mid S) = \pi_{\mathbb{s}}(a \mid s)$, if $a$ and $s$ are arbitrary. In the particular case of games of chance (e.g. poker), where there are sources of randomness, a deterministic policy might not always be appropriate. For example, in poker, not all information (e.g. the cards of the other players) is available. In those circumstances, the agent might decide to play differently depending on the round (time step). More concretely, the agent could decide to go "all in" $\frac{2}{3}$ of the times whenever it has a hand with two aces and there are two uncovered aces on the table and decide to just "raise" $\frac{1}{3}$ of the other times. A deterministic policy can be interpreted as a stochastic policy that gives the probability of $1$ to one of the available actions (and $0$ to the remaining actions), for each state. $\begingroup$ 'A deterministic policy can be interpreted as a stochastic policy that gives the probability of 1 to one of the available actions (and 0 to the remaining actions), for each state.' don't think this is correct since the definition of stochasticity is that the event can't be predicted. 
Meaning : "having a random probability distribution or pattern that may be analysed statistically but may not be predicted precisely." $\endgroup$ – DuttaA May 12 '19 at 19:21

$\begingroup$ @DuttaA You can have a probability distribution that assigns $1$ to one event and $0$ to everything else. This is mathematically possible. I said "you can interpret". I am not saying that this is a good way of thinking about it. $\endgroup$ – nbro May 12 '19 at 19:29

$\begingroup$ The definition of stochasticity is that you cannot predict, which is not the case here. $\endgroup$ – DuttaA May 12 '19 at 19:39

$\begingroup$ @DuttaA This is one definition of stochasticity. I have actually read one definition of PMF that states that each event must have a probability greater than 0 (but they likely meant $\geq 0$). What happens if there is only one event? In that case, that event must have probability $1$. So, a probability distribution can give probability $1$ to one event (and $0$ to the others). $\endgroup$ – nbro May 12 '19 at 19:46
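To make the distinction in the answer concrete, here is a minimal sketch. The grid-world state and action names are invented for illustration and it is not tied to any particular RL library: a deterministic policy is just a state-to-action table, while a stochastic policy stores one distribution $\pi(a \mid s)$ per state.

```python
# Minimal illustration of deterministic vs. stochastic policies (toy names).
import random

# Deterministic policy: a table (function) from states to single actions.
pi_det = {"s0": "up", "s1": "right"}

# Stochastic policy: for each state, a probability distribution over actions.
pi_sto = {
    "s0": {"left": 0.1, "right": 0.1, "up": 0.7, "down": 0.1},
    "s1": {"left": 0.0, "right": 1.0, "up": 0.0, "down": 0.0},  # degenerate case: behaves deterministically
}

def act(policy, state):
    choice = policy[state]
    if isinstance(choice, str):              # deterministic: always the same action
        return choice
    acts, probs = zip(*choice.items())       # stochastic: sample a ~ pi(. | s)
    return random.choices(acts, weights=probs)[0]

print([act(pi_det, "s0") for _ in range(5)])  # always 'up'
print([act(pi_sto, "s0") for _ in range(5)])  # mostly 'up', sometimes another action
```

The second entry of `pi_sto` also shows the point made at the end of the answer: a distribution that puts probability $1$ on a single action reproduces deterministic behaviour.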